Bug #23430 (closed)
PGs are stuck in 'creating+incomplete' status on vstart cluster
% Done: 0%
Source: Development
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Hi,
The PGs are stuck in 'creating+incomplete' status after creating an erasure coded pool on a vstart cluster.
I tested this on the master branch at commit https://github.com/ceph/ceph/commit/820dac980e9416fe05998d50cac633c81a87b9e3, and I have been observing this behavior for about 12 days now.
Steps to reproduce:
1. Create a new vstart cluster
2. Create an erasure coded pool:
ceph-dev /ceph/build # bin/ceph osd pool create ecpool 12 12 erasure
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2018-03-20 09:22:20.589 7f0717be2700 -1 WARNING: all dangerous and experimental features are enabled.
2018-03-20 09:22:20.609 7f0717be2700 -1 WARNING: all dangerous and experimental features are enabled.
pool 'ecpool' created
3. After that my cluster is stuck in the following status:
ceph-dev /ceph/build # bin/ceph -s
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2018-03-20 09:22:52.885 7f706d0cb700 -1 WARNING: all dangerous and experimental features are enabled.
2018-03-20 09:22:52.897 7f706d0cb700 -1 WARNING: all dangerous and experimental features are enabled.
  cluster:
    id:     78384e20-ab50-458e-b7b0-5248f7d26a20
    health: HEALTH_WARN
            Reduced data availability: 12 pgs incomplete

  services:
    mon: 3 daemons, quorum a,b,c
    mgr: x(active)
    mds: cephfs_a-1/1/1 up {0=c=up:active}, 2 up:standby
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   3 pools, 28 pgs
    objects: 21 objects, 2.19K
    usage:   3.01G used, 27.0G / 30G avail
    pgs:     42.857% pgs not active
             16 active+clean
             12 creating+incomplete
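For anyone reproducing this, the stuck-PG count can be pulled out of a saved `ceph -s` output with a small awk sketch (the `status.txt` file name and the excerpt below are hypothetical; the lines mirror the pgs summary above):

```shell
# Save the `bin/ceph -s` output to status.txt first; the heredoc below
# just stands in for the pgs section of the status shown in this report.
cat > status.txt <<'EOF'
    pgs:     42.857% pgs not active
             16 active+clean
             12 creating+incomplete
EOF
# Print the count of PGs in creating+incomplete (first field of the
# matching summary line).
awk '/creating\+incomplete/ {print $1}' status.txt
# → 12
```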
Please find the pg dump output attached.
I'm not sure which log files would be helpful here, but I can attach them afterwards; just let me know what you need.
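To list exactly which PG ids are stuck (rather than just the count), the attached pg dump can be filtered the same way. This is a sketch: `pg_dump.txt` and the three sample rows are hypothetical stand-ins, and a real `ceph pg dump` has many more columns, so the state-column position may need adjusting:

```shell
# Hypothetical excerpt of a plain-format pg dump; in this sample the
# PG id is the first field and the state is the last field.
cat > pg_dump.txt <<'EOF'
2.0   0  0  0  0  0  0  creating+incomplete
2.1   0  0  0  0  0  0  creating+incomplete
1.0   5  0  0  0  0  0  active+clean
EOF
# Print the id of every PG whose state is creating+incomplete.
awk '$NF == "creating+incomplete" {print $1}' pg_dump.txt
# → 2.0
# → 2.1
```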
Files