Bug #48750 (closed)

ceph config set using osd/host mask not working

Added by Kenneth Waegeman over 3 years ago. Updated 6 months ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: Administration/Usability
Target version: v19.0.0
% Done: 100%
Source: Community (user)
Tags: backport_processed
Backport: pacific quincy reef
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Component(RADOS): -
Pull request ID: 52088
Crash signature (v1): -
Crash signature (v2): -

Description

Setting an option with an osd/host mask does not work; tested with 14.2.9 and 14.2.16:


[root@mds2802 ~]# ceph osd tree-from osd2801
ID CLASS WEIGHT  TYPE NAME    STATUS REWEIGHT PRI-AFF 
-3       0.28317 host osd2801                         
 1  fast 0.04390     osd.1        up  1.00000 1.00000 
18  fast 0.04390     osd.18       up  1.00000 1.00000 
 2   hdd 0.09769     osd.2        up  1.00000 1.00000 
19   hdd 0.09769     osd.19       up  1.00000 1.00000 
[root@mds2802 ~]# ceph -v
ceph version 14.2.16 (762032d6f509d5e7ee7dc008d80fe9c87086603c) nautilus (stable)

[root@mds2802 ~]# ceph config get osd.2 osd_memory_target
4294967296
[root@mds2802 ~]# ceph config show osd.2 osd_memory_target
4294967296

[root@mds2802 ~]# ceph config set osd/host:osd2801 osd_memory_target 2147483648         
[root@mds2802 ~]# ceph config get osd.2  osd_memory_target
2147483648
[root@mds2802 ~]# ceph config show osd.2 osd_memory_target
4294967296
      

-> So the config db recognizes the mask, but the daemon does not pick up the value. Restarting the daemon does not help either.
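
For context, ceph config get osd.2 resolves the value from the monitors' config database, while ceph config show osd.2 reports what the running daemon has actually applied, which is why the two disagree above. A minimal cross-check of the host the cluster has on record for the OSD, sketched with standard CLI commands (not taken from this report; osd.2 and osd2801 are just the names from the transcript above):

ceph osd metadata 2 | grep hostname        # hostname the osd.2 daemon registered with the mons
ceph osd find 2                            # where the cluster places osd.2 (host, crush location)
ceph config get osd.2 osd_memory_target    # value the config database resolves for osd.2
ceph config show osd.2 osd_memory_target   # value the running osd.2 daemon is using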

When I change the same setting directly for the specific daemon, it does work, so it's not an issue with changing the option at runtime:

[root@mds2802 ~]# ceph config set osd.2 osd_memory_target 2147483648

[root@mds2802 ~]# ceph config get osd.2 osd_memory_target
2147483648
[root@mds2802 ~]# ceph config show osd.2 osd_memory_target
2147483648
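
Since per-daemon settings are honored, a per-OSD loop can stand in for the broken host mask until a fix lands. A hedged workaround sketch (the hostname and value are the ones from the transcript above; ceph osd ls-tree lists the OSD ids under a CRUSH bucket):

# Apply the setting to every OSD under the host bucket individually,
# since osd.<id> settings work while the osd/host:<name> mask does not.
for id in $(ceph osd ls-tree osd2801); do
    ceph config set "osd.$id" osd_memory_target 2147483648
done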

Related issues (4: 0 open, 4 closed)

Has duplicate: Ceph - Bug #53408: Centralized config mask not being applied to host (Duplicate)
Copied to: RADOS - Backport #62029: reef: ceph config set using osd/host mask not working (Resolved, Konstantin Shalygin)
Copied to: RADOS - Backport #62030: quincy: ceph config set using osd/host mask not working (Resolved, Konstantin Shalygin)
Copied to: RADOS - Backport #62031: pacific: ceph config set using osd/host mask not working (Resolved, Konstantin Shalygin)
#1

Updated by Sage Weil almost 3 years ago

  • Project changed from Ceph to RADOS
  • Category deleted (common)
#2

Updated by Dan van der Ster over 2 years ago

  • Affected Versions v15.2.13 added

Do the other (non-host) masks work for you?

I have the same problem in Octopus: class masks work, as do crush root/room/rack masks, but host masks do not:

[root@cephoctopus-1 ~]# ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME                       STATUS  REWEIGHT  PRI-AFF
 -1         0.07794  root default                                             
 -5         0.07794      room 0000-0-0000                                     
 -4         0.07794          rack 0000                                        
 -3         0.01949              host cephoctopus-1                           
  0    hdd  0.01949                  osd.0               up   1.00000  1.00000
 -9         0.01949              host cephoctopus-2                           
  1    hdd  0.01949                  osd.1               up   1.00000  1.00000
-11         0.01949              host cephoctopus-3                           
  2    hdd  0.01949                  osd.2               up   1.00000  1.00000
-13         0.01949              host cephoctopus-4                           
  3    hdd  0.01949                  osd.3               up   1.00000  1.00000
[root@cephoctopus-1 ~]# ceph config set osd/root:default osd_max_backfills 2
[root@cephoctopus-1 ~]# ceph config dump
WHO       MASK          LEVEL     OPTION                                 VALUE      RO
  mon                   advanced  auth_allow_insecure_global_id_reclaim  false        
  mgr                   advanced  mgr/balancer/active                    false        
  mgr                   advanced  mgr/balancer/mode                      upmap        
  mgr                   advanced  mgr/progress/enabled                   false      * 
  osd     root:default  advanced  osd_max_backfills                      2            
  osd                   advanced  osd_max_pg_log_entries                 2            
  osd                   advanced  osd_min_pg_log_entries                 1            
  client                advanced  client_acl_type                        posix_acl  * 
  client                advanced  fuse_default_permissions               false      * 
[root@cephoctopus-1 ~]# ceph daemon osd.0 config get osd_max_backfills
{
    "osd_max_backfills": "2" 
}
[root@cephoctopus-1 ~]# ceph config rm osd/root:default osd_max_backfills
[root@cephoctopus-1 ~]# ceph daemon osd.0 config get osd_max_backfills
{
    "osd_max_backfills": "1" 
}
[root@cephoctopus-1 ~]# ceph config set osd/room:0000-0-0000 osd_max_backfills 2
[root@cephoctopus-1 ~]# ceph daemon osd.0 config get osd_max_backfills
{
    "osd_max_backfills": "2" 
}
[root@cephoctopus-1 ~]# ceph config rm osd/room:0000-0-0000 osd_max_backfills
[root@cephoctopus-1 ~]# ceph config set osd/rack:0000 osd_max_backfills 3
[root@cephoctopus-1 ~]# ceph daemon osd.0 config get osd_max_backfills
{
    "osd_max_backfills": "3" 
}
[root@cephoctopus-1 ~]# ceph config rm osd/rack:0000 osd_max_backfills
[root@cephoctopus-1 ~]# ceph config set osd/host:cephoctopus-1 osd_max_backfills 4  # doesn't work.
[root@cephoctopus-1 ~]# ceph daemon osd.0 config get osd_max_backfills
{
    "osd_max_backfills": "1" 
}
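
The pattern above (class and crush root/room/rack masks match, the host mask does not) suggests comparing the name the OSD reports for its host with the CRUSH host bucket the mask is written against; presumably the two have to agree for a host mask to apply. A hedged way to look at both, with standard CLI commands that are not part of the original comment:

ceph osd metadata 0 | grep hostname     # hostname osd.0 registered with the mons
ceph osd find 0                         # where the cluster places osd.0 (host, crush location)
ceph osd tree-from cephoctopus-1        # OSDs the CRUSH map puts under that host bucket
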
#3

Updated by Jan-Philipp Litza over 2 years ago

I have this exact problem in 16.2.4 as well. My workaround is to set it in ceph.conf.
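
For reference, a minimal sketch of that ceph.conf workaround, assuming the option is placed in the local ceph.conf on the affected host (the section and value below are illustrative, not from this comment):

# /etc/ceph/ceph.conf on the affected OSD host
[osd]
osd_memory_target = 2147483648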

#4

Updated by Dan van der Ster over 2 years ago

  • Has duplicate Bug #53408: Centralized config mask not being applied to host added
#5

Updated by Dan van der Ster over 2 years ago

  • Subject changed from ceph config set using MASK not working to ceph config set using osd/host mask not working
#6

Updated by Dan van der Ster over 2 years ago

  • Affected Versions v16.2.4 added
#7

Updated by Konstantin Shalygin over 2 years ago

  • Category set to Administration/Usability
  • Affected Versions deleted (v14.2.10, v14.2.11, v14.2.12, v14.2.13, v14.2.14, v14.2.15, v14.2.16, v14.2.9, v15.2.13)
#8

Updated by Brian Koebbe about 1 year ago

I also have this problem with v17.2.5.

Unfortunately, this makes osd_memory_target_autotune useless.
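
Assuming the autotuner writes host-masked osd_memory_target entries, as this implies, a hedged way to compare what the config database holds against what a running OSD actually applied (standard CLI, not from this comment; the exact output will differ per cluster):

ceph config dump | grep osd_memory_target    # WHO/MASK/VALUE rows, including any host-masked entries
ceph config show osd.0 osd_memory_target     # what the running daemon is actually using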

Happy to provide more info if it helps.

#9

Updated by Didier Gazen 10 months ago

I am working on this issue.

#11

Updated by Radoslaw Zarzynski 10 months ago

  • Status changed from New to Fix Under Review
  • Pull request ID set to 52088
#12

Updated by Konstantin Shalygin 10 months ago

  • Target version set to v19.0.0
  • Backport set to pacific quincy reef
#13

Updated by Yuri Weinstein 9 months ago

Didier Gazen wrote:

> PR: https://github.com/ceph/ceph/pull/52088

merged

#14

Updated by Konstantin Shalygin 9 months ago

  • Status changed from Fix Under Review to Pending Backport
#15

Updated by Backport Bot 9 months ago

  • Copied to Backport #62029: reef: ceph config set using osd/host mask not working added
#16

Updated by Backport Bot 9 months ago

  • Copied to Backport #62030: quincy: ceph config set using osd/host mask not working added
#17

Updated by Backport Bot 9 months ago

  • Copied to Backport #62031: pacific: ceph config set using osd/host mask not working added
#18

Updated by Backport Bot 9 months ago

  • Tags set to backport_processed
#19

Updated by Konstantin Shalygin 6 months ago

  • Status changed from Pending Backport to Resolved
  • % Done changed from 0 to 100