Bug #51688

"stuck peering for" warning is misleading

Added by Dan van der Ster almost 3 years ago. Updated 8 months ago.

Status: Pending Backport
Priority: Normal
Category: -
Target version: -
% Done: 0%
Source:
Tags: backport_processed
Backport: reef,quincy
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When OSDs restart or CRUSH maps change, it is common to see a HEALTH_WARN claiming that PGs have been stuck peering for a long time, even though they were active just seconds earlier.
It would be preferable to report PG_AVAILABILITY issues only when PGs really have been stuck peering for longer than 60s.

E.g.

HEALTH_WARN Reduced data availability: 50 pgs peering
PG_AVAILABILITY Reduced data availability: 50 pgs peering
    pg 3.7df is stuck peering for 792.178587, current state remapped+peering, last acting [100,113,352]
    pg 3.8ae is stuck peering for 280.567053, current state remapped+peering, last acting [226,345,350]
    pg 3.c0b is stuck peering for 1018.081127, current state remapped+peering, last acting [62,246,249]
    pg 3.fc9 is stuck peering for 65.799756, current state remapped+peering, last acting [123,447,351]
    pg 4.c is stuck peering for 208.471034, current state remapped+peering, last acting [123,501,247]
...

(Related: I proposed changing PG_AVAILABILITY issues to HEALTH_ERR in https://tracker.ceph.com/issues/23565 and https://github.com/ceph/ceph/pull/42192, so this needs to be fixed before that is merged.)

I tracked this to `PGMap::get_health_checks`, which marks a PG as stuck peering if now - last_peered > mon_pg_stuck_threshold.
The problem is that last_peered is only updated when there is IO on a PG -- an OSD doesn't send pgstats for an idle PG.
To fix this, could we update last_active/last_peered etc. and send a pg stats update more frequently even when idle?

Clearly osd_pg_stat_report_interval_max is related here, but its default is 500 and we see some PGs reported stuck peering for longer than 500s, so something is still missing.

We observe this in nautilus, but the code hasn't changed much in master AFAICT.


Related issues (2 open, 0 closed)

Copied to RADOS - Backport #62926: quincy: "stuck peering for" warning is misleading (New, Shreyansh Sancheti)
Copied to RADOS - Backport #62927: reef: "stuck peering for" warning is misleading (New, Shreyansh Sancheti)
