Bug #20435

`ceph -s` repeats some health details in Luminous RC release

Added by David Galloway almost 7 years ago. Updated over 6 years ago.

Status: Resolved
Priority: Immediate
Assignee: -
Category: -
Target version:
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I'm installing ceph version 12.1.0 (262617c9f16c55e863693258061c5b25dea5b086) luminous (dev) on the Sepia lab cluster, and the `ceph -s` output repeats some health details. See below.

  cluster:
    id:     28f7427e-5558-4ffd-ae1a-51ec3042759a
    health: HEALTH_ERR
            96 pgs are stuck inactive for more than 300 seconds
            1497 pgs degraded
            1 pgs inconsistent
            186 pgs peering
            1 pgs recovering
            70 pgs stale
            96 pgs stuck inactive
            843 pgs stuck unclean
            1497 pgs undersized
            29 requests are blocked > 32 sec
            recovery 8861193/146442064 objects degraded (6.051%)
            1 scrub errors
            1 host (7 osds) down
            7 osds down
            26 osds exist in the crush map but not in the osdmap
            noout,norebalance,norecover,noscrub,nodeep-scrub flag(s) set
            1 mons down, quorum 0,2 mira021,mira060
            154 pgs are stuck inactive for more than 300 seconds
            1875 pgs degraded
            1 pgs inconsistent
            1 pgs recovering
            154 pgs stuck inactive
            1095 pgs stuck unclean
            1875 pgs undersized
            17 requests are blocked > 32 sec
            3 osds have slow requests
            recovery 10955807/144101661 objects degraded (7.603%)
            1 scrub errors
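
As a quick sanity check, the repetition can be spotted mechanically by grouping the health detail lines by their text with the leading count stripped, so that "1497 pgs degraded" and "1875 pgs degraded" land on the same check. The Python sketch below is only an illustration of that idea (the script name check_dup_health.py is hypothetical); it reads plain-text `ceph -s` output from stdin and is not part of ceph.

  #!/usr/bin/env python3
  # Rough sketch (assumption: plain-text `ceph -s` output is piped in on stdin,
  # e.g. `ceph -s | python3 check_dup_health.py`). Groups health detail lines
  # by their text with any leading count stripped, so two reports of the same
  # check with different counts show up as a repeat.
  import re
  import sys
  from collections import defaultdict

  groups = defaultdict(list)
  for raw in sys.stdin:
      line = raw.strip()
      if not line:
          continue
      # Drop a leading numeric count such as "1497 " so repeats of the same
      # check with different counts still map to the same key.
      key = re.sub(r"^\d+\s+", "", line)
      groups[key].append(line)

  for key, lines in groups.items():
      if len(lines) > 1:
          print("repeated %dx: %s" % (len(lines), key))
          for line in lines:
              print("    " + line)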

History

#1 Updated by Dan Mick almost 7 years ago

  • Priority changed from Normal to High

#2 Updated by Sage Weil over 6 years ago

  • Status changed from New to 12
  • Priority changed from High to Immediate

David, can you tell us whether this is happening during or after the upgrade?

#3 Updated by David Galloway over 6 years ago

Sage Weil wrote:

David, can you tell us whether this is happening during or after the upgrade?

I'm fairly certain that once all daemons were reloaded and running luminous, the health details were no longer duplicated. I'm adding OSDs and generating recovery output right now, and none of it is duplicated, so it appears this only happened during the upgrade.

#4 Updated by Sage Weil over 6 years ago

  • Status changed from 12 to Fix Under Review

#5 Updated by Sage Weil over 6 years ago

  • Status changed from Fix Under Review to Resolved
