Support #22243

Luminous: EC pool using more space than it should

Added by Daniel Neilson over 6 years ago. Updated over 6 years ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version:
% Done: 0%
Tags: erasure-code
Reviewed:
Affected Versions:
Component(RADOS):
Pull request ID:

Description

Hello,

I have an erasure-coded pool that is using more space on the OSDs than it should. The EC profile is set to k=6 m=2, but the space usage looks more like what I would expect from k=2 m=1. Here is the 'ceph -s' output:

cluster:
  id:     b6e8901b-d94d-4b4f-b5e1-7eb05848227f
  health: HEALTH_OK

services:
  mon: 1 daemons, quorum dev-pve01
  mgr: dev-pve01(active)
  mds: pool01-1/1/1 up {0=dev-pve01=up:active}
  osd: 10 osds: 10 up, 10 in

data:
  pools:   2 pools, 160 pgs
  objects: 1402k objects, 2650 GB
  usage:   4034 GB used, 18603 GB / 22638 GB avail
  pgs:     159 active+clean
           1   active+clean+scrubbing+deep

As this output shows, the space used is about 52% higher than the object data stored, whereas with k=6 m=2 I would expect the overhead to be closer to 33%. I'm using this for CephFS, so one pool is for data and the other is for metadata. Here is the EC profile that I'm using:

ceph osd erasure-code-profile get k6m2
crush-device-class=
crush-failure-domain=osd
crush-root=default
jerasure-per-chunk-alignment=false
k=6
m=2
plugin=jerasure
technique=reed_sol_van
w=8

Output of pool settings:

ceph osd pool get pool01 erasure_code_profile
erasure_code_profile: k6m2
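
For reference, here is a quick sanity check of the overhead (a minimal Python sketch; the GB figures are copied from the 'ceph -s' output above):

# Compare the overhead expected from the k6m2 profile with what
# 'ceph -s' reports (figures copied from the output above).
k, m = 6, 2
stored_gb = 2650   # objects: 1402k objects, 2650 GB
used_gb = 4034     # usage: 4034 GB used

expected = (k + m) / k          # 1.33x -> ~33% extra raw space
observed = used_gb / stored_gb  # ~1.52x -> ~52% extra raw space
print(f"expected {expected:.2f}x, observed {observed:.2f}x")
# A k=2 m=1 profile would give 1.50x, which is much closer to what is observed.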

Is there anything I can dig into or look at that would get me closer to the expected 33% overhead instead of the 50% I'm seeing?

Thank you.
