Bug #20435
`ceph -s` repeats some health details in Luminous RC release
Status:
Closed
% Done:
0%
Regression:
No
Severity:
3 - minor
Description
I'm installing ceph version 12.1.0 (262617c9f16c55e863693258061c5b25dea5b086) luminous (dev) on the Sepia lab cluster, and the `ceph -s` output repeats some health details. See below.
  cluster:
    id:     28f7427e-5558-4ffd-ae1a-51ec3042759a
    health: HEALTH_ERR
            96 pgs are stuck inactive for more than 300 seconds
            1497 pgs degraded
            1 pgs inconsistent
            186 pgs peering
            1 pgs recovering
            70 pgs stale
            96 pgs stuck inactive
            843 pgs stuck unclean
            1497 pgs undersized
            29 requests are blocked > 32 sec
            recovery 8861193/146442064 objects degraded (6.051%)
            1 scrub errors
            1 host (7 osds) down
            7 osds down
            26 osds exist in the crush map but not in the osdmap
            noout,norebalance,norecover,noscrub,nodeep-scrub flag(s) set
            1 mons down, quorum 0,2 mira021,mira060
            154 pgs are stuck inactive for more than 300 seconds
            1875 pgs degraded
            1 pgs inconsistent
            1 pgs recovering
            154 pgs stuck inactive
            1095 pgs stuck unclean
            1875 pgs undersized
            17 requests are blocked > 32 sec
            3 osds have slow requests
            recovery 10955807/144101661 objects degraded (7.603%)
            1 scrub errors
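For anyone triaging, one quick way to confirm the duplication is to count repeated detail lines in the plain-text status output. The sketch below is not part of this report; the digit-normalization heuristic is an assumption for illustration, so that repeats of the same check with different counts (e.g. "1497 pgs degraded" vs "1875 pgs degraded") still compare equal.

#!/usr/bin/env python3
# Minimal sketch: flag health detail lines that appear more than once
# in `ceph -s` output. Pipe the status text in on stdin, e.g.:
#   ceph -s | python3 find_dup_health.py
# The normalization below (replace every number with "N") is an
# illustrative assumption, not anything from the tracker issue itself.
import re
import sys
from collections import Counter

def normalize(line: str) -> str:
    # Replace runs of digits (counts, ratios, percentages) with "N"
    # so two occurrences of the same health check compare equal.
    return re.sub(r"\d+(\.\d+)?", "N", line.strip())

counts = Counter(
    normalize(line)
    for line in sys.stdin
    if line.strip() and "HEALTH" not in line
)

for key, n in counts.items():
    if n > 1:
        print(f"repeated {n}x: {key}")

Run against the output above, this would flag lines such as "N pgs degraded", "N pgs stuck inactive", and "N scrub errors" as appearing twice.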