Feature #7198
ceph task: scrub all pgs before tearing down cluster
Status: Closed
Updated by Anonymous about 10 years ago
I assume this entails scanning for all osds and running a raw_cluster_cmd scrub on each one (code similar to parts of teuthology/task/scrub.py). This scan should probably happen at the start of the finally block in main() in teuthology/run.py.
Should this be done for nuke as well?
Just out of curiosity, why is this change needed? Aren't all the osds released after teuthology runs?
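A minimal sketch of what that scan-and-scrub step might look like, assuming teuthology's CephManager helper is reachable at teardown time. get_osd_status() and raw_cluster_cmd() are existing CephManager calls, but how the manager is obtained inside run.py's finally block is an assumption here, not the merged code:

    # Hedged sketch: ask every in-cluster OSD to scrub, roughly mirroring
    # parts of teuthology/task/scrub.py.
    def scrub_all_osds(manager):
        """Initiate a scrub on every OSD currently 'in' the cluster."""
        status = manager.get_osd_status()  # assumed to return a dict with an 'in' list
        for osd_id in status['in']:
            # Equivalent to running 'ceph osd scrub <id>' against the cluster.
            manager.raw_cluster_cmd('osd', 'scrub', str(osd_id))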
Updated by Anonymous about 10 years ago
Further update: would just issuing a 'ceph osd scrub *' on exit do the trick?
Updated by Anonymous about 10 years ago
- Status changed from In Progress to Fix Under Review
Pull request #184
Updated by Anonymous about 10 years ago
From Sage (comments on Pull request):
I think there is more involved here. I think we need to:
1- make sure all pgs are active+clean (or else scrub won't start)
2- note the start time
3- initiate scrub on all osds (this part looks ok as is)
4- loop and poll 'pg dump' and wait for all of the last_scrub stamps on the pgs to be above the start time.
4.5- bail out if no pgs have made scrub progress in the last X seconds (maybe 120?)
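For illustration, a minimal sketch of the loop Sage outlines, assuming teuthology's CephManager API (wait_for_clean, get_osd_status, raw_cluster_cmd). The JSON layout of the 'pg dump' output and the timestamp format are assumptions, and this is not the code that went into pull request #184:

    import json
    import time

    def scrub_all_pgs(manager, stall_timeout=120):
        """Scrub every pg and wait for all scrub stamps to advance."""
        # 1- make sure all pgs are active+clean, or the scrub won't start.
        manager.wait_for_clean()

        # 2- note the start time so old scrub stamps can be told apart from new ones.
        start = time.time()

        # 3- initiate a scrub on every in-cluster OSD.
        for osd_id in manager.get_osd_status()['in']:
            manager.raw_cluster_cmd('osd', 'scrub', str(osd_id))

        # 4- loop and poll 'pg dump' until every pg's last_scrub_stamp is newer
        #    than 'start'.
        # 4.5- bail out if no pg makes progress for stall_timeout seconds.
        scrubbed = set()
        last_progress = time.time()
        while True:
            dump = json.loads(manager.raw_cluster_cmd('pg', 'dump', '--format=json'))
            progressed = False
            for pg in dump['pg_stats']:  # top-level 'pg_stats' key is assumed
                # Stamp format "YYYY-MM-DD HH:MM:SS.ffffff" is assumed.
                stamp_str = pg['last_scrub_stamp'].split('.')[0]
                stamp = time.mktime(time.strptime(stamp_str, '%Y-%m-%d %H:%M:%S'))
                if pg['pgid'] not in scrubbed and stamp > start:
                    scrubbed.add(pg['pgid'])
                    progressed = True
            if len(scrubbed) == len(dump['pg_stats']):
                return  # every pg has been scrubbed since 'start'
            if progressed:
                last_progress = time.time()
            elif time.time() - last_progress > stall_timeout:
                raise RuntimeError(
                    'scrub stalled: no pg progress in %d seconds' % stall_timeout)
            time.sleep(10)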
Updated by Anonymous about 10 years ago
- Status changed from Fix Under Review to In Progress
Updated by Anonymous about 10 years ago
- Status changed from In Progress to Fix Under Review
Updated by Anonymous about 10 years ago
- Status changed from Fix Under Review to Resolved
Change has been reviewed and pushed.
Updated by Sage Weil about 10 years ago
- Target version changed from v0.77 to v0.78