Bug #23576

closed

osd: active+clean+inconsistent pg will not scrub or repair

Added by Michael Sudnick about 6 years ago. Updated over 5 years ago.

Status: Can't reproduce
Priority: Normal
Assignee: David Zafman
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS): OSD
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

My apologies if I'm too premature in posting this.

So far, two others on the mailing list and I (see http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-April/025819.html for the start of the thread) seem to have hit a case where an active+clean+inconsistent pg will not repair or deep-scrub, and where rados list-inconsistent-obj returns:

No scrub information available for pg 49.11c
error 2: (2) No such file or directory.

David Turner has an erasure-coded pool running Ceph 12.2.2, while I have the affected pg in a replica 3, min_size 2 pool acting as a CephFS data pool with compression set to snappy/forced (in my case the inconsistency most likely came from a failing disk, which has since been replaced).
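
For reference, a minimal sketch of the commands involved here, using the pg id 49.11c from this report (substitute your own pg id); these are the standard scrub/repair commands rather than anything specific to this bug:

    # Ask the primary OSD to deep-scrub the pg, then inspect the scrub results.
    ceph pg deep-scrub 49.11c
    rados list-inconsistent-obj 49.11c --format=json-pretty

    # Once the inconsistent objects are known, attempt a repair.
    ceph pg repair 49.11c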


Related issues 3 (0 open, 3 closed)

  • Related to RADOS - Bug #23267: scrub errors not cleared on replicas can cause inconsistent pg state when replica takes over primary (Resolved, David Zafman, 03/07/2018)
  • Related to bluestore - Bug #23577: Inconsistent PG refusing to deep-scrub or repair (Can't reproduce, David Zafman, 04/06/2018)
  • Related to RADOS - Bug #27988: Warn if queue of scrubs ready to run exceeds some threshold (Rejected, David Zafman)
Actions #1

Updated by Michael Sudnick about 6 years ago

Sorry, forgot to mention I am running 12.2.4.

Actions #2

Updated by Patrick Donnelly about 6 years ago

  • Project changed from Ceph to RADOS
  • Subject changed from active+clean+inconsistent pg will not scrub or repair to osd: active+clean+inconsistent pg will not scrub or repair
  • Category deleted (OSD)
  • Source set to Community (user)
  • Release deleted (luminous)
  • Component(RADOS) OSD added
Actions #3

Updated by Josh Durgin about 6 years ago

  • Assignee set to David Zafman

David, is there any way a missing object wouldn't be reported in list-inconsistent output?

Actions #4

Updated by David Zafman about 6 years ago

  • Related to Bug #23267: scrub errors not cleared on replicas can cause inconsistent pg state when replica takes over primary added
Actions #5

Updated by David Zafman about 6 years ago

  • Related to Bug #23577: Inconsistent PG refusing to deep-scrub or repair added
Actions #6

Updated by David Zafman about 6 years ago

Any scrub that completes without errors will set num_scrub_errors in the pg stats to 0, which causes the inconsistent flag to be cleared. There is a related tracker, #23267, just recently fixed, in which the inconsistent flag can reappear after repairing a pg; that might confuse diagnosis of this problem.
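
For reference, the counter mentioned above can be read straight from the pg stats; a sketch assuming pg 49.11c from this report:

    # num_scrub_errors is under info.stats.stat_sum in the pg query output.
    ceph pg 49.11c query | grep num_scrub_errors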

Actions #7

Updated by David Turner about 6 years ago

Is there any testing, logging, etc. that would be helpful for tracking down the cause of this problem? I had a fairly bad event where OSDs were crashing during backfill due to filestore subfolder splitting. I'm assuming that's the cause of the now 3 inconsistent PGs, but I cannot get any of them to perform any sort of scrub to gather more information on the objects with increased logging. Running for over a week in a HEALTH_ERR state in our largest production cluster is not something I wish to continue, but I can't seem to find anything I can do to help it. Even debug_osd = 20/20 doesn't show anything (that I can interpret) other than the OSD saying the PG can be scrubbed and that it's going to start it... and then it never does.
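
For reference, a sketch of how that debug level is typically raised at runtime; osd.0 is a placeholder id:

    # Raise the OSD debug level on a running daemon.
    ceph tell osd.0 injectargs '--debug_osd 20/20'
    # Or via the admin socket on the host running that OSD:
    ceph daemon osd.0 config set debug_osd 20/20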

Actions #8

Updated by David Turner about 6 years ago

Any update on this?

Actions #9

Updated by David Turner about 6 years ago

My opinion is that this is different from a problem where the inconsistent flag reappears after repairing a PG, because I can't scrub or repair the PG at all. Also, the inconsistent flag was set after a deep-scrub, not a repair. After it was initially set (on the 3 PGs I have now) I have been unable to get the inconsistent PGs to scrub again. In an OSD log at level 20/20 I see that the OSD is told to scrub the PG, it determines it has an available slot to do so, it says it's going to scrub it... and then nothing. There is no further reference to scrubbing that PG and the PG is never scrubbed. The pg info shows that the last time the PG was scrubbed or deep-scrubbed was when it was initially flagged as inconsistent, even though that was weeks ago and it should have scrubbed on its own due to the config settings.
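
For reference, the scrub timestamps mentioned here can be checked as follows; a sketch assuming pg 49.11c from this report:

    # last_scrub_stamp / last_deep_scrub_stamp are part of the pg stats.
    ceph pg 49.11c query | grep -E 'last_scrub_stamp|last_deep_scrub_stamp'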

Actions #10

Updated by Sebastian Sobolewski almost 6 years ago

Ran into something similar this past week (active+clean+inconsistent) where forced scrubs would not run. The following procedure got it unstuck and we were able to repair the pg (a command-level sketch follows below):

(1) Set noscrub and nodeep-scrub.
(2) Kicked the currently active scrubs one at a time using 'ceph osd down <id>', where the id was the primary osd of each scrubbing pg. (Note: wait until the cluster gets back to a normal state before kicking the next osd.)
(3) After the last scrub was kicked, the forced scrub ran immediately.
(4) After the forced scrub finished, 'ceph pg repair' was able to fix the inconsistency.

Something to note is that none of the running scrubs were on OSDs that overlapped with the OSDs we wanted to force the scrub on.
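
A command-level sketch of the procedure above, assuming 49.11c is the stuck pg and osd.12 is the primary of one of the currently scrubbing pgs (substitute your own ids):

    # (1) Stop new scrubs from being scheduled.
    ceph osd set noscrub
    ceph osd set nodeep-scrub

    # (2) Find pgs that are currently scrubbing and kick their primary OSDs
    #     one at a time, waiting for the cluster to settle between each.
    ceph pg dump pgs_brief | grep scrubbing
    ceph osd down 12

    # (3) Once the last active scrub has been kicked, force the scrub and repair,
    #     then re-enable scrubbing.
    ceph pg deep-scrub 49.11c
    ceph pg repair 49.11c
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub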

Actions #11

Updated by David Zafman almost 6 years ago

Are there any "not scheduling scrubs due to active recovery" messages in the logs on any of the primary OSDs? That message would require debug_osd = 20.
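
A sketch of how to check for that message on an OSD host, assuming the default log location:

    # Requires debug_osd = 20 on the primary OSDs for the message to be emitted.
    grep "not scheduling scrubs due to active recovery" /var/log/ceph/ceph-osd.*.log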

Actions #12

Updated by David Turner almost 6 years ago

No, I never had that message in any of our logs. After a month the PGs ran their own deep-scrub again and I was able to issue a repair to resolve the issue. I had also made sure to increase osd_max_scrubs on all OSDs involved in the PG before issuing new scrubs, but that never helped.
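
For reference, a sketch of how osd_max_scrubs can be raised on running OSDs; the value 3 is only an example:

    # Allow more concurrent scrubs per OSD (the default is 1).
    ceph tell osd.* injectargs '--osd_max_scrubs 3'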

Actions #13

Updated by Michael Sudnick almost 6 years ago

Sorry for the lack of updates; there were no messages of any sort in the logs when attempting to deep-scrub or repair. That said, a few days ago a regularly scheduled deep-scrub seems to have repaired it, so I no longer have an inconsistent pg.

Actions #14

Updated by David Zafman almost 6 years ago

  • Status changed from New to Can't reproduce
Actions #15

Updated by Jacek S. over 5 years ago

Hi - we are still experiencing this issue on 12.2.7 (the latest Luminous version):

"shards": [
                {
                    "osd": 51,
                    "primary": false,
                    "errors": [
                        "omap_digest_mismatch_info" 
                    ],
                    "size": 0,
                    "omap_digest": "0x21403a1a",
                    "data_digest": "0xffffffff" 
                },
                {
                    "osd": 132,
                    "primary": false,
                    "errors": [
                        "omap_digest_mismatch_info" 
                    ],
                    "size": 0,
                    "omap_digest": "0x556f080e",
                    "data_digest": "0xffffffff" 
                },
                {
                    "osd": 195,
                    "primary": true,
                    "errors": [
                        "omap_digest_mismatch_info" 
                    ],
                    "size": 0,
                    "omap_digest": "0x556f080e",
                    "data_digest": "0xffffffff" 
                },
                {
                    "osd": 213,
                    "primary": false,
                    "errors": [
                        "omap_digest_mismatch_info" 
                    ],
                    "size": 0,
                    "omap_digest": "0x556f080e",
                    "data_digest": "0xffffffff" 
                }
            ]

Actions #16

Updated by David Turner over 5 years ago

I came across this again as well and did some more testing. What resolved the issue for me was increasing the setting osd_deep_scrub_interval: there were too many high-priority deep scrubs queued because of the interval, and that was preventing new deep scrubs or repairs from being scheduled. A status in Ceph health indicating that your PGs are unable to keep up with the current scrub/deep-scrub schedule would be helpful for diagnosing and preventing a situation where you can't issue repairs, deep-scrubs, or scrubs.
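
A sketch of the kind of change described above; the doubling from the one-week default is illustrative, not a recommendation:

    # osd_deep_scrub_interval defaults to 604800 seconds (one week); raising it
    # shrinks the backlog of overdue, high-priority deep scrubs.
    ceph tell osd.* injectargs '--osd_deep_scrub_interval 1209600'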

Actions #18

Updated by David Zafman over 5 years ago

  • Related to Bug #27988: Warn if queue of scrubs ready to run exceeds some threshold added
Actions #19

Updated by David Zafman over 5 years ago

Created tracker https://tracker.ceph.com/issues/27988 to add a warning about too many scrubs pending.
