Bug #27362
Wrong erasure pool MAX AVAIL size calculation with technique=reed_sol_r6_op
Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
ceph osd erasure-code-profile set hdderaprof k=4 m=2 crush-failure-domain=rack technique=reed_sol_r6_op crush-device-class=hdd
ceph osd crush rule create-erasure hdderarule hdderaprof
ceph osd erasure-code-profile set hdderaprof2 k=4 m=2 crush-failure-domain=rack crush-device-class=hdd
ceph osd crush rule create-erasure hdderarule2 hdderaprof2
ceph osd pool create rbdn 128 128 erasure hdderaprof hdderarule
ceph osd pool create rbd2 128 128 erasure hdderaprof2 hdderarule2
ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    117 TiB     115 TiB     2.8 TiB      2.36
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
    rbdn     8      0 B      0         0 B           0
    rbd2     9      0 B      0         69 TiB        0
If such a pool is used for CephFS, the file system will report a size of 1Z.
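For comparison, a rough sketch of the MAX AVAIL figure one would expect for a k+m erasure-coded pool: raw available space scaled by the data fraction k/(k+m) and a full ratio. This is an illustration only, not the actual Ceph source; the function name and the assumed 0.9 full ratio are hypothetical.

```python
def expected_max_avail_tib(raw_avail_tib: float, k: int, m: int,
                           full_ratio: float = 0.9) -> float:
    """Rough MAX AVAIL estimate for a k+m erasure-coded pool.

    Only k of every k+m chunks hold data, so usable space is the raw
    available space times k/(k+m), capped by the full ratio.
    (Hypothetical helper for illustration; 0.9 is an assumed ratio.)
    """
    return raw_avail_tib * full_ratio * k / (k + m)

# With 115 TiB raw available and a 4+2 profile this gives about 69 TiB,
# matching what `ceph df` reports for rbd2; the reed_sol_r6_op pool
# instead reports 0 B.
print(round(expected_max_avail_tib(115, 4, 2)))  # → 69
```

Both pools use the same k=4, m=2 geometry, so both should report roughly this value; only the technique differs.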
Updated by John Spray over 5 years ago
- Project changed from Ceph to RADOS
- Description updated (diff)
- Category deleted (ceph cli)