Bug #62872 (open): ceph osd_max_backfills default value is 1000

Added by changzhi tan 8 months ago. Updated 8 months ago.

Status: New
Priority: Normal
Assignee: -
Category: Backfill/Recovery
Target version:
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS): OSD
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

In Ceph 17 (Quincy), the recovery tuning parameters do not appear to take effect. Inspecting the configuration shows that, although "osd_op_queue" is set to "mclock_scheduler", several values look unreasonable:
"osd_max_backfills": "1000",
"osd_mclock_scheduler_background_best_effort_lim": "999999",
"osd_mclock_scheduler_client_lim": "999999",
"osd_recovery_max_active": "1000",
"osd_recovery_max_active_hdd": "1000",
"osd_recovery_max_active_ssd": "1000",


Related issues (1): 0 open, 1 closed

Is duplicate of RADOS - Bug #58529: osd: very slow recovery due to delayed push reply messages (Resolved, Sridhar Seshasayee)

#1

Updated by changzhi tan 8 months ago

I found that there is already a solution here: https://tracker.ceph.com/issues/58529. Under the mclock_scheduler flow-control policy, if no data recovery is in progress, is client IO still limited to its proportional share? That would not deliver maximum performance.
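
As general context for the question above (not an answer from this tracker): under mClock, the split between client and background IO is governed by the built-in QoS profiles rather than the classic recovery limits, and it can be biased toward client IO by changing the profile. A minimal sketch, assuming the standard osd_mclock_profile option and a cluster reachable from the local CLI:

import subprocess

# Built-in mClock profiles trade client throughput against recovery speed;
# "high_client_ops" biases the QoS allocations toward client IO.
subprocess.run(
    ["ceph", "config", "set", "osd", "osd_mclock_profile", "high_client_ops"],
    check=True,
)

# Confirm the active profile.
subprocess.run(
    ["ceph", "config", "get", "osd", "osd_mclock_profile"],
    check=True,
)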

#2

Updated by Radoslaw Zarzynski 7 months ago

  • Is duplicate of Bug #58529: osd: very slow recovery due to delayed push reply messages added
