Bug #27362
Updated by John Spray over 5 years ago
<pre>
ceph osd erasure-code-profile set hdderaprof k=4 m=2 crush-failure-domain=rack technique=reed_sol_r6_op crush-device-class=hdd
ceph osd crush rule create-erasure hdderarule hdderaprof
ceph osd erasure-code-profile set hdderaprof2 k=4 m=2 crush-failure-domain=rack crush-device-class=hdd
ceph osd crush rule create-erasure hdderarule2 hdderaprof2
ceph osd pool create rbdn 128 128 erasure hdderaprof hdderarule
ceph osd pool create rbd2 128 128 erasure hdderaprof2 hdderarule2
ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    117 TiB     115 TiB     2.8 TiB      2.36
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
    rbdn     8      0 B      0         0 B           0
    rbd2     9      0 B      0         69 TiB        0
</pre>
Note that the pool created from the reed_sol_r6_op profile (rbdn) reports a MAX AVAIL of 0 B. If such a pool is used for CephFS, the file system will report a size of 1Z.
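One hedged way to detect this symptom programmatically is to parse the machine-readable form of `ceph df` and flag pools whose reported MAX AVAIL is zero. This is only a sketch: the field names (`pools`, `stats`, `max_avail`) are assumed to match the JSON layout emitted by `ceph df --format json` on this Ceph release, and the embedded sample data simply mirrors the table above.

```python
import json

# Sample data mirroring the `ceph df` output above (abbreviated).
# In practice this JSON would come from `ceph df --format json`;
# the exact field layout is an assumption and may vary by release.
sample = json.loads("""
{
  "pools": [
    {"name": "rbdn", "id": 8, "stats": {"bytes_used": 0, "max_avail": 0}},
    {"name": "rbd2", "id": 9, "stats": {"bytes_used": 0, "max_avail": 75866302316544}}
  ]
}
""")

def pools_with_zero_max_avail(df):
    """Return names of pools whose reported MAX AVAIL is 0 bytes."""
    return [p["name"] for p in df["pools"] if p["stats"]["max_avail"] == 0]

print(pools_with_zero_max_avail(sample))  # ['rbdn']
```

Here the affected pool `rbdn` is flagged while `rbd2`, created from an otherwise identical profile without `technique=reed_sol_r6_op`, is not.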