Bug #14710
Bad output from 'ceph df' causes problems for cinder
% Done: 0%
Source: other
Regression: No
Severity: 3 - minor
ceph-qa-suite: rados
Description
When adding OSDs to a cluster, 'ceph-disk prepare' is run (by way of ceph-ansible) on all the new OSDs. Because 'osd_crush_initial_weight = 0' is set, the new OSDs come into the cluster with a CRUSH weight of 0. While such an OSD is present, 'ceph df' reports 'MAX AVAIL' as 0 instead of the proper value; once the weight is changed to 0.01, 'ceph df' displays proper numerical values again. This causes problems for OpenStack Cinder in Kilo, because it thinks there is no available space for new volumes.
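For reference, a minimal sketch of the setup, assuming a hammer cluster where ceph-ansible drives ceph-disk; /dev/sdX is a placeholder device:

# ceph.conf on the OSD hosts: newly created OSDs join the CRUSH map with weight 0
[osd]
osd_crush_initial_weight = 0

# prepare the new OSD as ceph-ansible does (/dev/sdX is a placeholder)
ceph-disk prepare /dev/sdX

# once the OSD is up with CRUSH weight 0, every pool's MAX AVAIL reads 0
ceph df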
Before adding an OSD:
GLOBAL:
    SIZE   AVAIL   RAW USED   %RAW USED
    589T   345T    243T       41.32
POOLS:
    NAME        ID    USED     %USED    MAX AVAIL    OBJECTS
    data        0     816M     0        102210G      376
    metadata    1     120M     0        102210G      94
    images      5     11990G   1.99     68140G       1536075
    volumes     6     63603G   10.54    68140G       16462022
    instances   8     5657G    0.94     68140G       1063602
    rbench      12    260M     0        68140G       22569
    scratch     13    40960    0        68140G       10
After adding an OSD:
GLOBAL:
    SIZE   AVAIL   RAW USED   %RAW USED
    590T   346T    243T       41.24
POOLS:
    NAME        ID    USED     %USED    MAX AVAIL    OBJECTS
    data        0     816M     0        0            376
    metadata    1     120M     0        0            94
    images      5     11990G   1.98     0            1536075
    volumes     6     63603G   10.52    0            16462022
    instances   8     5657G    0.94     0            1063602
    rbench      12    260M     0        0            22569
    scratch     13    40960    0        0            10
MAX AVAIL shows 0 for every pool.
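As a possible workaround (a sketch only; osd.36 is a placeholder for whatever id the new OSD received), giving the new OSD a small non-zero CRUSH weight makes 'ceph df' report MAX AVAIL again:

# bump the new OSD's CRUSH weight slightly above zero
ceph osd crush reweight osd.36 0.01

# per-pool MAX AVAIL is populated again
ceph df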
Assigning to kchai per sjust's request.
Related issues
History
#1 Updated by Kefu Chai over 7 years ago
- Description updated
#2 Updated by Kefu Chai over 7 years ago
- Status changed from New to 12
Reproducible in hammer, but not in master.
#3 Updated by Kefu Chai over 7 years ago
- Relation added: duplicates Backport #13930: hammer: Ceph Pools' MAX AVAIL is 0 if some OSDs' weight is 0