Bug #21409: per-pool full flags set incorrectly?
Status: Closed
% Done: 0%
Regression: No
Severity: 3 - minor
Description
[root@smithi014 sage]# ceph osd df tree
2017-09-15 20:30:45.269011 7fba95ccc700 -1 WARNING: all dangerous and experimental features are enabled.
2017-09-15 20:30:45.302945 7fba95ccc700 -1 WARNING: all dangerous and experimental features are enabled.
ID CLASS WEIGHT  REWEIGHT SIZE USE   AVAIL %USE VAR  PGS TYPE NAME
-1       0.26367        - 270G 15.8G 254G  5.85 1.00   - root default
-3       0.26367        - 270G 15.8G 254G  5.85 1.00   -     host smithi014
 0   ssd 0.08789  1.00000  90G 6.77G 83.2G 7.52 1.29 122         osd.0
 1   ssd 0.08789  1.00000  90G 4.41G 85.6G 4.90 0.84 122         osd.1
 2   ssd 0.08789  1.00000  90G 4.62G 85.4G 5.13 0.88 122         osd.2
                    TOTAL  270G 15.8G 254G  5.85
MIN/MAX VAR: 0.84/1.29  STDDEV: 1.19
(osds not full!)
[root@smithi014 sage]# ceph osd dump
...
pool 9 'c777785f-9508-4983-8903-38b9d3702468' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 last_change 40 flags hashpspool,full,full_no_quota max_objects 1 stripe_width 0
(pool marked full,full_no_quota)
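Note that the dump above shows the pool carries max_objects 1, so the full and full_no_quota flags may stem from the pool's object quota rather than from OSD fullness. A minimal sketch for checking this (pool name taken from the dump; whether clearing the quota clears the flags in this case is an assumption, since that is exactly what this bug questions):

```shell
# List pools whose flags include a full variant
ceph osd dump | grep -E "pool .* flags .*full"

# Show the pool's quota; max_objects 1 is what would trigger full_no_quota
ceph osd pool get-quota c777785f-9508-4983-8903-38b9d3702468

# Setting max_objects to 0 removes the object quota (assumption: the mon
# then clears full_no_quota on a subsequent osdmap update)
ceph osd pool set-quota c777785f-9508-4983-8903-38b9d3702468 max_objects 0
```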
Updated by xie xingguo over 6 years ago
- Status changed from 12 to Fix Under Review
Updated by Sage Weil over 6 years ago
- Status changed from Fix Under Review to Resolved