Bug #48750
closed
ceph config set using osd/host mask not working
Added by Kenneth Waegeman over 3 years ago.
Updated 7 months ago.
Category: Administration/Usability
Backport: pacific quincy reef
Description
Setting an option with an osd/host mask does not work; tested with 14.2.9 and 14.2.16:
[root@mds2802 ~]# ceph osd tree-from osd2801
ID  CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
-3         0.28317  host osd2801
 1  fast   0.04390      osd.1      up       1.00000  1.00000
18  fast   0.04390      osd.18     up       1.00000  1.00000
 2  hdd    0.09769      osd.2      up       1.00000  1.00000
19  hdd    0.09769      osd.19     up       1.00000  1.00000
[root@mds2802 ~]# ceph -v
ceph version 14.2.16 (762032d6f509d5e7ee7dc008d80fe9c87086603c) nautilus (stable)
[root@mds2802 ~]# ceph config get osd.2 osd_memory_target
4294967296
[root@mds2802 ~]# ceph config show osd.2 osd_memory_target
4294967296
[root@mds2802 ~]# ceph config set osd/host:osd2801 osd_memory_target 2147483648
[root@mds2802 ~]# ceph config get osd.2 osd_memory_target
2147483648
[root@mds2802 ~]# ceph config show osd.2 osd_memory_target
4294967296
-> So the config db recognizes the mask, but the daemon does not pick it up.
Restarting the daemon does not help either.
When I set the same option on the specific daemon directly, it does work, so it is not a runtime-setting issue:
[root@mds2802 ~]# ceph config set osd.2 osd_memory_target 2147483648
[root@mds2802 ~]# ceph config get osd.2 osd_memory_target
2147483648
[root@mds2802 ~]# ceph config show osd.2 osd_memory_target
2147483648
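One thing that may be worth checking (just a guess on my side, not a confirmed root cause): the host mask is matched against the host the cluster associates with the daemon, so a mismatch between the crush host bucket name and the hostname the OSD reports (e.g. FQDN vs. short name) could explain why the config db resolves the mask while the daemon ignores it. A quick comparison:

# crush location of the OSD, including the host bucket it sits under
ceph osd find 2

# hostname the daemon itself reports in its metadata
ceph osd metadata 2 | grep '"hostname"'

If those two names differ, the host mask would never match on the daemon side.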
- Project changed from Ceph to RADOS
- Category deleted (common)
- Affected Versions v15.2.13 added
Do the other (non-host) masks work for you?
I have the same problem in Octopus. Class masks work, as do crush root/room/rack masks, but host masks do not:
[root@cephoctopus-1 ~]# ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME                     STATUS  REWEIGHT  PRI-AFF
 -1         0.07794  root default
 -5         0.07794      room 0000-0-0000
 -4         0.07794          rack 0000
 -3         0.01949              host cephoctopus-1
  0  hdd    0.01949                  osd.0         up       1.00000  1.00000
 -9         0.01949              host cephoctopus-2
  1  hdd    0.01949                  osd.1         up       1.00000  1.00000
-11         0.01949              host cephoctopus-3
  2  hdd    0.01949                  osd.2         up       1.00000  1.00000
-13         0.01949              host cephoctopus-4
  3  hdd    0.01949                  osd.3         up       1.00000  1.00000
[root@cephoctopus-1 ~]# ceph config set osd/root:default osd_max_backfills 2
[root@cephoctopus-1 ~]# ceph config dump
WHO     MASK          LEVEL     OPTION                                 VALUE      RO
mon                   advanced  auth_allow_insecure_global_id_reclaim  false
mgr                   advanced  mgr/balancer/active                    false
mgr                   advanced  mgr/balancer/mode                      upmap
mgr                   advanced  mgr/progress/enabled                   false      *
osd     root:default  advanced  osd_max_backfills                      2
osd                   advanced  osd_max_pg_log_entries                 2
osd                   advanced  osd_min_pg_log_entries                 1
client                advanced  client_acl_type                        posix_acl  *
client                advanced  fuse_default_permissions               false      *
[root@cephoctopus-1 ~]# ceph daemon osd.0 config get osd_max_backfills
{
"osd_max_backfills": "2"
}
[root@cephoctopus-1 ~]# ceph config rm osd/root:default osd_max_backfills
[root@cephoctopus-1 ~]# ceph daemon osd.0 config get osd_max_backfills
{
"osd_max_backfills": "1"
}
[root@cephoctopus-1 ~]# ceph config set osd/room:0000-0-0000 osd_max_backfills 2
[root@cephoctopus-1 ~]# ceph daemon osd.0 config get osd_max_backfills
{
"osd_max_backfills": "2"
}
[root@cephoctopus-1 ~]# ceph config rm osd/room:0000-0-0000 osd_max_backfills
[root@cephoctopus-1 ~]# ceph config set osd/rack:0000 osd_max_backfills 3
[root@cephoctopus-1 ~]# ceph daemon osd.0 config get osd_max_backfills
{
"osd_max_backfills": "3"
}
[root@cephoctopus-1 ~]# ceph config rm osd/rack:0000 osd_max_backfills
[root@cephoctopus-1 ~]# ceph config set osd/host:cephoctopus-1 osd_max_backfills 4 # doesn't work.
[root@cephoctopus-1 ~]# ceph daemon osd.0 config get osd_max_backfills
{
"osd_max_backfills": "1"
}
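For completeness, the same mon-vs-daemon comparison as in the original report: assuming the host-mask entry set above is still in place, the config db should resolve the masked value while the daemon keeps reporting the default.

# mon-side view: what the config database resolves for this daemon
ceph config get osd.0 osd_max_backfills

# daemon-side view: what the running OSD actually uses
ceph daemon osd.0 config get osd_max_backfills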
I have this exact problem in 16.2.4 as well. My workaround is to set the option in ceph.conf.
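For reference, a minimal sketch of that workaround (the value here is only an example; adjust it per host and restart the OSDs afterwards):

# /etc/ceph/ceph.conf on the affected host
[osd]
osd_memory_target = 2147483648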
- Has duplicate Bug #53408: Centralized config mask not being applied to host added
- Subject changed from ceph config set using MASK not working to ceph config set using osd/host mask not working
- Affected Versions v16.2.4 added
- Category set to Administration/Usability
- Affected Versions deleted (v14.2.10, v14.2.11, v14.2.12, v14.2.13, v14.2.14, v14.2.15, v14.2.16, v14.2.9, v15.2.13)
I have the same problem with v17.2.5.
Unfortunately, this makes osd_memory_target_autotune useless.
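As far as I understand, cephadm's autotuner pushes its computed values through exactly this kind of per-host masked entry, which is why the broken host mask makes it a no-op. The entries it creates can be listed with:

# show the per-host osd_memory_target entries the autotuner has set
ceph config dump | grep osd_memory_target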
Happy to provide more info if it helps.
I am working on this issue.
- Status changed from New to Fix Under Review
- Pull request ID set to 52088
- Target version set to v19.0.0
- Backport set to pacific quincy reef
- Status changed from Fix Under Review to Pending Backport
- Copied to Backport #62029: reef: ceph config set using osd/host mask not working added
- Copied to Backport #62030: quincy: ceph config set using osd/host mask not working added
- Copied to Backport #62031: pacific: ceph config set using osd/host mask not working added
- Tags set to backport_processed
- Status changed from Pending Backport to Resolved
- % Done changed from 0 to 100