Bug #18113: Don't lose deep-scrub information
Status: Closed
% Done: 0%
Backport: jewel
Regression: No
Severity: 3 - minor
Description
We'd like to not lose deep-scrub information when more frequent scrub happens.
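The idea can be sketched as keeping the deep-scrub state separate from the shallow-scrub state, so that a frequent shallow scrub never overwrites it. The struct and function below are a minimal illustrative sketch, loosely modeled on the per-PG stamps Ceph tracks; the names `ScrubInfo` and `record_scrub` are hypothetical, not the actual Ceph API.

```cpp
#include <cassert>
#include <ctime>

// Hypothetical, simplified per-PG scrub record. In Ceph the analogous
// state lives in pg_stat_t; these fields are illustrative only.
struct ScrubInfo {
    time_t last_scrub_stamp = 0;       // updated by any scrub
    time_t last_deep_scrub_stamp = 0;  // updated only by deep scrubs
};

// Record a completed scrub. A shallow scrub must not touch the
// deep-scrub stamp, so deep-scrub information survives frequent
// shallow scrubs.
void record_scrub(ScrubInfo& info, time_t now, bool deep) {
    info.last_scrub_stamp = now;
    if (deep)
        info.last_deep_scrub_stamp = now;
}
```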
Updated by David Zafman over 7 years ago
The only issue that became obvious while modifying the code is what to do when the nodeep-scrub flag is set.
Sam Just:
Hmm, that's a good point. It would probably be best to refuse to do a shallow scrub and output a WRN message to the clog.
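The behavior proposed above could be sketched roughly as follows: when a deep scrub is needed but the nodeep-scrub flag is set, refuse rather than silently substituting a shallow scrub, and emit a warning for the cluster log. This decision helper is a hypothetical illustration, not the actual Ceph scrub-scheduling code; only the "nodeep-scrub" flag name comes from Ceph.

```cpp
#include <cassert>
#include <string>

// Illustrative outcome of a scrub-scheduling decision (hypothetical enum).
enum class ScrubDecision { DoDeep, Refuse };

// If a deep scrub is required but nodeep-scrub is set, refuse and
// produce a WRN message for the cluster log instead of downgrading
// to a shallow scrub (which would leave the deep-scrub info stale).
ScrubDecision decide_scrub(bool deep_scrub_needed, bool nodeep_scrub_set,
                           std::string& clog_msg) {
    if (deep_scrub_needed && nodeep_scrub_set) {
        clog_msg = "WRN: deep scrub required but nodeep-scrub flag is set";
        return ScrubDecision::Refuse;
    }
    return ScrubDecision::DoDeep;
}
```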
Updated by David Zafman over 7 years ago
- Status changed from Resolved to Pending Backport
- Backport changed from Jewel to Jewel, Kraken
Updated by Loïc Dachary over 7 years ago
- Backport changed from Jewel, Kraken to jewel,kraken
Updated by Loïc Dachary over 7 years ago
- Copied to Backport #18502: jewel: Don't lose deep-scrub information added
Updated by Nathan Cutler about 7 years ago
- Backport changed from jewel,kraken to jewel
The commits were manually merged into kraken before the 11.2.0 release, so the kraken backport is not needed.
Updated by David Zafman about 7 years ago
- Has duplicate Feature #14860: scrub/repair: persist scrub results (do not overwrite deep scrub results with non-deep scrub) added
Updated by David Zafman over 6 years ago
- Status changed from Pending Backport to Resolved