Bug #20435

`ceph -s` repeats some health details in Luminous RC release

Added by David Galloway over 1 year ago. Updated over 1 year ago.

Status: Resolved
Priority: Immediate
Assignee: -
Category: -
Target version:
Start date: 06/27/2017
Due date:
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:

Description

I'm installing ceph version 12.1.0 (262617c9f16c55e863693258061c5b25dea5b086) luminous (dev) on the Sepia lab cluster, and the `ceph -s` output repeats some health details. See below.

  cluster:
    id:     28f7427e-5558-4ffd-ae1a-51ec3042759a
    health: HEALTH_ERR
            96 pgs are stuck inactive for more than 300 seconds
            1497 pgs degraded
            1 pgs inconsistent
            186 pgs peering
            1 pgs recovering
            70 pgs stale
            96 pgs stuck inactive
            843 pgs stuck unclean
            1497 pgs undersized
            29 requests are blocked > 32 sec
            recovery 8861193/146442064 objects degraded (6.051%)
            1 scrub errors
            1 host (7 osds) down
            7 osds down
            26 osds exist in the crush map but not in the osdmap
            noout,norebalance,norecover,noscrub,nodeep-scrub flag(s) set
            1 mons down, quorum 0,2 mira021,mira060
            154 pgs are stuck inactive for more than 300 seconds
            1875 pgs degraded
            1 pgs inconsistent
            1 pgs recovering
            154 pgs stuck inactive
            1095 pgs stuck unclean
            1875 pgs undersized
            17 requests are blocked > 32 sec
            3 osds have slow requests
            recovery 10955807/144101661 objects degraded (7.603%)
            1 scrub errors
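
The whole health block appears twice, with the two copies showing slightly different counts (e.g. 96 vs. 154 pgs stuck inactive). As a hedged aside not taken from the report: lines that repeat verbatim can be surfaced with a standard shell pipeline, though it only catches exact duplicates such as "1 pgs inconsistent" and "1 scrub errors", while repeats with diverging counts still need eyeballing:

  # Sketch, not from the original report: show health-detail lines that
  # appear more than once verbatim. `ceph health detail`, sort, and
  # uniq -d are all stock tools.
  ceph health detail | sort | uniq -d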

History

#1 Updated by Dan Mick over 1 year ago

  • Priority changed from Normal to High

#2 Updated by Sage Weil over 1 year ago

  • Status changed from New to Verified
  • Priority changed from High to Immediate

David, can you tell us whether this is happening during or after the upgrade?

#3 Updated by David Galloway over 1 year ago

Sage Weil wrote:

David, can you tell us whether this is happening during or after the upgrade?

I'm fairly certain that once all daemons were reloaded and running luminous, the health details weren't duplicated. I'm adding OSDs and generating recovery output right now, and none of it is duplicated, so it appears the duplication only happened during the upgrade.
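
As a hedged aside (not in the original comment): assuming this 12.1.0 build already includes the `ceph versions` command added during the luminous cycle, one way to confirm that every daemon has actually been restarted on the new release is:

  # Assumption: `ceph versions` is available in this RC; it summarizes the
  # running release of each daemon type (mon, mgr, osd, ...), so any daemon
  # still on the old release stands out.
  ceph versions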

#4 Updated by Sage Weil over 1 year ago

  • Status changed from Verified to Need Review

#5 Updated by Sage Weil over 1 year ago

  • Status changed from Need Review to Resolved
