
Backport #56152

Updated by Milind Changire almost 2 years ago

snap-schedule status does not reflect the correct status after mgr restart 

 e.g. 
 Before restarting mgrs: 
 <pre> 
 $ ceph fs snap-schedule status / --format=json 
 [{"fs": "cephfs", "subvol": null, "path": "/", "rel_path": "/", "schedule": "1h", "retention": {"h": 6}, "start": "2022-06-07T12:50:00", "created": "2022-06-07T16:44:25", "first": "2022-06-07T16:50:00", "last": "2022-06-07T16:50:00", "last_pruned": null, "created_count": 1, "pruned_count": 0, "active": true}] 
 </pre> 

 After restarting mgrs: 
 <pre> 
 $ ceph fs snap-schedule status / --format=json 
 [{"fs": "cephfs", "subvol": null, "path": "/", "rel_path": "/", "schedule": "1h", "retention": {}, "start": "2022-06-07T12:50:00", "created": "2022-06-07T16:44:25", "first": null, "last": null, "last_pruned": null, "created_count": 0, "pruned_count": 0, "active": true}] 
 </pre> 

 Need for this backport-only tracker: 
 ------------------------------------ 
 This issue needs to be fixed exclusively in the pacific branch, since there the db operations take place on the in-memory sqlite db and are not automatically persisted to permanent storage. 
 This db management policy is older than, and different from, the current mainline. In the mainline sources, the sqlite db uses the ceph backend to persist db changes directly to RADOS objects. 
 Although there is a mechanism to write the in-memory db to stable storage, calls to the appropriate procedure need to be added to every function that updates the state of the db. 
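The persistence gap described above can be sketched as follows. This is an illustrative Python-only sketch, not the actual snap_schedule module code: the class name, table layout, and method names are hypothetical; it only demonstrates the pattern of explicitly dumping an in-memory sqlite db after every state-changing operation.

```python
import sqlite3


class ScheduleStore:
    """Hypothetical sketch: an in-memory sqlite db whose contents must be
    explicitly flushed to durable storage after every state change."""

    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS schedules "
            "(path TEXT PRIMARY KEY, created_count INT)")

    def dump(self):
        # Serialize the entire in-memory db to a SQL script. In the mgr
        # module this text would be written out to stable storage; here
        # we simply return it.
        return "\n".join(self.conn.iterdump())

    def bump_created_count(self, path):
        # A state-updating operation. Without the dump() call below, the
        # change lives only in memory and is lost on mgr restart -- which
        # is exactly the bug reported here.
        self.conn.execute(
            "INSERT INTO schedules (path, created_count) VALUES (?, 1) "
            "ON CONFLICT(path) DO UPDATE SET "
            "created_count = created_count + 1",
            (path,))
        self.conn.commit()
        return self.dump()  # persist after every update


store = ScheduleStore()
script = store.bump_created_count("/")
print("schedules" in script)  # the dump includes the table definition
```

The key point is the placement of the `dump()` call: it must run inside each updating function, so no code path can mutate the in-memory db without also persisting it.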
