Bug #57047

open

not able to configure osd_max_backfills

Added by Kenneth Waegeman over 1 year ago. Updated over 1 year ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Community (user)
Tags:
Backport:
Regression:
No
Severity:
1 - critical
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi all,

I’m trying to limit an ongoing backfill, but it seems `osd_max_backfills` is somehow set to 1000 and I can’t get it any lower:

[root@ceph301 ~]# ceph config show osd.20 osd_max_backfills
1000
[root@ceph301 ~]# ceph config rm osd.20 osd_max_backfills
[root@ceph301 ~]# ceph config show osd.20 osd_max_backfills
1000
[root@ceph301 ~]# ceph config set osd.20 osd_max_backfills 1
[root@ceph301 ~]# ceph config show osd.20 osd_max_backfills
1000
[root@ceph301 ~]# ceph config rm osd.20 osd_max_backfills
[root@ceph301 ~]# ceph config set osd osd_max_backfills 1
[root@ceph301 ~]# ceph config show osd.20 osd_max_backfills
1000
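One way to check why the setting is pinned is to look at which operation-queue scheduler the OSD is running; a sketch (assuming osd.20 is up, and that this cluster is on a release where mClock is the default):

```shell
# Show the scheduler in use on this OSD. Under the mClock scheduler,
# osd_max_backfills is managed internally and user-set values are
# ignored unless overrides are explicitly allowed.
ceph config show osd.20 osd_op_queue
# "mclock_scheduler" -> mClock controls recovery/backfill limits
# "wpq"              -> osd_max_backfills behaves as in earlier releases
```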

The same happens when injecting the value:
[root@ceph301 ~]# ceph tell osd.20 injectargs '--osd_max_backfills 1' {}
osd_max_backfills = '1'
[root@ceph301 ~]# ceph config show osd.20 osd_max_backfills
1000

Any clues here? Thank you very much!

Kenneth

#1

Updated by Kenneth Waegeman over 1 year ago

It seems this is by design, because of the mClock scheduler. It would be nice if this could be mentioned in the `ceph config help osd_max_backfills` output.

The actual issue is not the parameter itself, but the performance.
I added some NVMe storage and am moving the data off the regular HDDs. In earlier releases I could slow the backfill down with the osd_max_backfills setting when the clients were under load, but now I don’t know how to do that:

pgs:     2880930/5352414 objects misplaced (53.825%)
447 active+remapped+backfilling
290 active+clean

[root@ceph301 ~]# ceph config show osd.20 osd_mclock_profile
high_client_ops

So it is already at the high_client_ops setting.
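If mClock is in control, the backfill limit can apparently only be changed by taking the QoS parameters back from the scheduler; a hedged sketch of two approaches (whether `osd_mclock_override_recovery_settings` is available depends on the exact Ceph release, which is an assumption here):

```shell
# Option 1: explicitly allow manual overrides of the recovery/backfill
# limits while keeping the current mClock profile (newer releases only).
ceph config set osd osd_mclock_override_recovery_settings true
ceph config set osd osd_max_backfills 1

# Option 2: switch to the "custom" mClock profile, which hands the
# QoS parameters back to the operator entirely.
ceph config set osd osd_mclock_profile custom
```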
