Bug #23565

Inactive PGs don't seem to cause HEALTH_ERR

Added by Greg Farnum over 1 year ago. Updated about 1 month ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
Start date:
04/05/2018
Due date:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:

Description

While looking at https://tracker.ceph.com/issues/23562, I noticed inactive PGs starting at:

2018-04-04 16:57:43.702801 mon.reesi001 mon.0 10.8.130.101:6789/0 113 : cluster [WRN] Health check failed: Reduced data availability: 81 pgs inactive, 91 pgs peering (PG_AVAILABILITY)

immediately after the VDO OSD was turned on.
It settled down at:

2018-04-04 16:59:38.517021 mon.reesi001 mon.0 10.8.130.101:6789/0 163 : cluster [WRN] overall HEALTH_WARN 463121/13876873 objects misplaced (3.337%); Reduced data availability: 61 pgs inactive; Degraded data redundancy: 14/13876873 objects degraded (0.000%), 152 pgs unclean, 6 pgs degraded; too many PGs per OSD (240 > max 200); clock skew detected on mon.reesi002, mon.reesi003

The cluster then stayed pretty much that way. It eventually transitioned to HEALTH_ERR a couple of hours later:

2018-04-04 18:27:38.532992 mon.reesi001 mon.0 10.8.130.101:6789/0 1476 : cluster [WRN] overall HEALTH_WARN 405697/13877083 objects misplaced (2.924%); Reduced data availability: 61 pgs inactive; Degraded data redundancy: 13/13877083 objects degraded (0.000%), 139 pgs unclean, 2 pgs degraded; 1 slow requests are blocked > 32 sec; too many PGs per OSD (240 > max 200); clock skew detected on mon.reesi002, mon.reesi003
2018-04-04 18:28:38.533153 mon.reesi001 mon.0 10.8.130.101:6789/0 1494 : cluster [ERR] overall HEALTH_ERR 405508/13877089 objects misplaced (2.922%); Reduced data availability: 61 pgs inactive; Degraded data redundancy: 13/13877089 objects degraded (0.000%), 139 pgs unclean, 2 pgs degraded; 1 stuck requests are blocked > 4096 sec; too many PGs per OSD (240 > max 200); clock skew detected on mon.reesi002, mon.reesi003

But that transition seems to have been caused by the appearance of stuck requests, not by the inactive PGs themselves.

I'm not quite sure what's going on here. Perhaps we only transition to HEALTH_ERR when PGs get stuck, but the primary for these inactive PGs was still sending in PGStats messages so that never happened?
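The hypothesis above can be sketched as follows. This is a minimal, hypothetical model of the suspected escalation logic, not Ceph's actual implementation: an inactive PG would only escalate to HEALTH_ERR once it counts as "stuck", and a PG whose primary keeps sending PGStats keeps refreshing its last-report time, so it never crosses the stuck threshold. All names and the threshold value are illustrative.

```python
import time

# Illustrative stand-in for a mon_pg_stuck_threshold-style option (seconds).
STUCK_THRESHOLD_SEC = 300

def health_for_pg(active, last_stats_report, now=None):
    """Return a health level for a single PG under the hypothesized rule.

    active            -- whether the PG is in an active state
    last_stats_report -- epoch time of the last PGStats message from its primary
    """
    now = time.time() if now is None else now
    if active:
        return "HEALTH_OK"
    # Inactive PGs warn immediately, but only escalate to ERR once "stuck",
    # i.e. no stats refresh for longer than the threshold. A primary that
    # keeps reporting PGStats therefore pins the check at WARN forever.
    if now - last_stats_report > STUCK_THRESHOLD_SEC:
        return "HEALTH_ERR"
    return "HEALTH_WARN"
```

If this model matches the real behavior, it would explain the observed symptom: 61 PGs inactive for hours, yet only HEALTH_WARN until stuck *requests* (a separate check) pushed the cluster to ERR.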


Related issues

Related to RADOS - Bug #23049: ceph Status shows only WARN when traffic to cluster fails New 02/20/2018

History

#1 Updated by Greg Farnum over 1 year ago

  • Project changed from Ceph to RADOS

#2 Updated by Josh Durgin over 1 year ago

  • Assignee set to Brad Hubbard

Brad, can you take a look at this? I think it can be handled by the stuck PG code, which IIRC already warns about PGs stuck unclean for some time.
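The suggestion here might look like the following sketch: reuse a stuck-PG pass that already tracks how long each PG has been in its current state, and raise HEALTH_ERR when any PG has been stuck inactive past the threshold, regardless of whether its primary is still reporting PGStats. This is a hypothetical illustration; function names, PG-state representation, and the threshold are assumptions, not Ceph code.

```python
# Stand-in for mon_pg_stuck_threshold (seconds); illustrative only.
STUCK_INACTIVE_SEC = 300

def cluster_health(pgs, now):
    """Aggregate health over PGs.

    pgs -- list of dicts with 'state' (e.g. "active", "peering") and
           'since' (epoch time the PG entered that state)
    Returns (health_level, count_of_stuck_inactive_pgs).
    """
    stuck_inactive = [
        pg for pg in pgs
        if pg["state"] != "active" and now - pg["since"] > STUCK_INACTIVE_SEC
    ]
    # Escalate on time-in-state, not on whether stats are still arriving.
    if stuck_inactive:
        return "HEALTH_ERR", len(stuck_inactive)
    if any(pg["state"] != "active" for pg in pgs):
        return "HEALTH_WARN", 0
    return "HEALTH_OK", 0
```

Keying the escalation on time spent inactive, rather than on stalled PGStats reporting, would close the gap described in this ticket.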

#3 Updated by Greg Farnum about 1 month ago

  • Related to Bug #23049: ceph Status shows only WARN when traffic to cluster fails added

#4 Updated by Greg Farnum about 1 month ago

  • Assignee deleted (Brad Hubbard)
  • Priority changed from High to Normal
