Feature #11202
add stop_scrub command for ceph
Status: Open
Description
ceph osd stop-scrub <who>
ceph pg stop-scrub <pgid>
Updated by Xinze Chi about 9 years ago
Maybe we could use "ceph osd set noscrub" or "ceph osd unset noscrub" to achieve this.
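For reference, the existing workaround suggested above looks like this; note that, unlike the per-OSD/per-PG commands requested in this feature, these flags apply cluster-wide:

```shell
# Pause scheduling of new (shallow) scrubs cluster-wide
ceph osd set noscrub

# Pause scheduling of new deep scrubs cluster-wide
ceph osd set nodeep-scrub

# Re-enable scrubbing afterwards
ceph osd unset noscrub
ceph osd unset nodeep-scrub
```

As far as I know these flags only prevent new scrubs from being scheduled; a scrub that is already in flight generally runs to completion.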
Updated by Xinze Chi about 9 years ago
https://github.com/ceph/ceph/pull/4138 is pending for review.
Updated by Kefu Chai almost 9 years ago
- Status changed from New to Fix Under Review
- Assignee set to Xinze Chi
Updated by Loïc Dachary over 8 years ago
- Status changed from Fix Under Review to 12
Updated by Stefan Kooman over 4 years ago
We have set the "ceph osd set noscrub" and "ceph osd set nodeep-scrub" flags:
noscrub,nodeep-scrub flag(s) set
And PGs keep getting deep scrubbed. There is a rebalance (backfilling) going on. IIRC deep scrubs get started to verify the integrity of the PG on the new OSD. Is that (still) true? I don't think this makes sense anymore with BlueStore, as it's already doing CRC checks etc. while doing the backfilling, right? If that's the case, it would make sense to rely on those checks instead of doing the work twice.
Anyway ... when those flags are set the cluster should NOT keep scheduling deep scrubs, yet it still does. This is really annoying because it slows down backfilling, potentially triggering SLOW_OPS (in our case it does) and/or even causing the "OSD::osd_op_tp thread" to time out, leading to issue #41255 on at least Mimic (13.2.8). When backfilling is done the balancer might want to tune things, but it won't run because of the (deep-)scrubs.
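For anyone debugging this, a quick sketch for confirming the flags really are set (exact output varies by release):

```shell
# The osdmap "flags" line should list noscrub,nodeep-scrub
ceph osd dump | grep flags

# Cluster status will also warn that the flags are set
ceph -s
```

If deep scrubs still start while both flags show as set, they are presumably being initiated by something other than the periodic scrub scheduler (e.g. recovery/backfill-related checks), which is what this report suggests.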
Updated by Niklas Hambuechen 12 months ago
I filed a related issue that scrubbing apparently cannot be drained from a cluster: