Bug #62872
ceph: osd_max_backfills default value is 1000
Description
In Ceph 17 (Quincy), the recovery throttling parameters do not take effect. On inspecting the configuration, it turns out that although "osd_op_queue" is set to "mclock_scheduler", several options carry unreasonably high values:
"osd_max_backfills": "1000",
"osd_mclock_scheduler_background_best_effort_lim": "999999",
"osd_mclock_scheduler_client_lim": "999999",
"osd_recovery_max_active": "1000",
"osd_recovery_max_active_hdd": "1000",
"osd_recovery_max_active_ssd": "1000",
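The values quoted above can be re-checked against a running cluster with the ceph CLI. This is a sketch: `osd.0` is a placeholder daemon id, and it assumes admin access to a Quincy cluster with the mclock scheduler active.

```shell
#!/bin/sh
# Inspect the options named in this report on one OSD.
# "ceph config show" prints the value the running daemon is using.
for opt in osd_max_backfills \
           osd_recovery_max_active \
           osd_recovery_max_active_hdd \
           osd_recovery_max_active_ssd \
           osd_mclock_scheduler_client_lim \
           osd_mclock_scheduler_background_best_effort_lim; do
    printf '%s = ' "$opt"
    ceph config show osd.0 "$opt"
done

# Confirm which scheduler is actually in effect.
ceph config show osd.0 osd_op_queue
```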
Updated by changzhi tan 8 months ago
I found that a solution already exists here: https://tracker.ceph.com/issues/58529. Under the mclock_scheduler flow-control policy, if no data recovery is in progress, will client IO still be limited proportionally? If so, that would prevent clients from getting maximum performance.
Updated by Radoslaw Zarzynski 7 months ago
- Is duplicate of Bug #58529: osd: very slow recovery due to delayed push reply messages added