Bug #57007


cephadm: osd config not updated if mon configuration changes

Added by Adam King over 1 year ago. Updated over 1 year ago.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
backport_processed
Backport:
quincy, pacific
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

OSDs are deployed with a config that includes the addresses of the mon daemons. When mons are added or removed, this config should be updated, but currently it is not.

In the example below there is a mon daemon on each of vm-00, vm-01 and vm-02, and the OSD config file correctly lists those three mons. A new mon spec is then applied, and the next "ceph orch ps" shows mons only on vm-00 and vm-01. Looking at the OSD config again, however, it still contains entries for all 3 mons. From my testing the config is never updated, regardless of whether the number of mon daemons is scaled up or down.
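To make the mismatch concrete, the staleness can be detected by diffing the mon_host line of the deployed OSD config against the current set of mons. The helper below is a hypothetical illustration (not part of cephadm); it parses the mon_host format shown in the transcript, where each mon appears as [v2:ADDR:3300/0,v1:ADDR:6789/0]:

```python
import re

def parse_mon_host(config_text):
    """Extract mon IP addresses from a minimal ceph.conf's mon_host line.

    Hypothetical helper for diffing a deployed OSD config against the
    current monmap; not part of cephadm itself.
    """
    match = re.search(r"mon_host\s*=\s*(.+)", config_text)
    if not match:
        return set()
    # Each mon entry looks like [v2:ADDR:3300/0,v1:ADDR:6789/0];
    # grab the address from the v2 part of each entry.
    return set(re.findall(r"v2:([\d.]+):", match.group(1)))

osd_config = """\
[global]
    fsid = 2b3d5b8e-126b-11ed-ba65-525400bb0473
    mon_host = [v2:192.168.122.121:3300/0,v1:192.168.122.121:6789/0] [v2:192.168.122.190:3300/0,v1:192.168.122.190:6789/0] [v2:192.168.122.199:3300/0,v1:192.168.122.199:6789/0]
"""

# Mons remaining after mon.vm-02 is removed by the new spec.
current_mons = {"192.168.122.121", "192.168.122.190"}
stale = parse_mon_host(osd_config) - current_mons
print(sorted(stale))  # -> ['192.168.122.199']
```

A periodic check of this kind (stale set non-empty) is exactly the condition under which cephadm should be regenerating the OSD configs.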

[ceph: root@vm-00 /]# ceph orch ps
NAME              HOST   PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION                 IMAGE ID      CONTAINER ID  
crash.vm-00       vm-00               running (96m)    78s ago  96m    6975k        -  17.0.0-12762-g63f84c50  5ba23bdce8b2  c6649af805d0  
crash.vm-01       vm-01               running (94m)    79s ago  94m    7231k        -  17.0.0-12762-g63f84c50  5ba23bdce8b2  836411be67f1  
crash.vm-02       vm-02               running (94m)    78s ago  94m    7235k        -  17.0.0-12762-g63f84c50  5ba23bdce8b2  6dcfc3e626ab  
mgr.vm-00.nxddcs  vm-00  *:9283       running (97m)    78s ago  97m     491M        -  17.0.0-12762-g63f84c50  5ba23bdce8b2  4b0f6b56c60c  
mgr.vm-02.hujgul  vm-02  *:8443,9283  running (94m)    78s ago  94m     439M        -  17.0.0-12762-g63f84c50  5ba23bdce8b2  e711ee2dd0eb  
mon.vm-00         vm-00               running (97m)    78s ago  97m    77.8M    2048M  17.0.0-12762-g63f84c50  5ba23bdce8b2  a8e283bd83fc  
mon.vm-01         vm-01               running (94m)    79s ago  94m    69.7M    2048M  17.0.0-12762-g63f84c50  5ba23bdce8b2  b07b11b09711  
mon.vm-02         vm-02               running (94m)    78s ago  94m    73.2M    2048M  17.0.0-12762-g63f84c50  5ba23bdce8b2  cc86564abffc  
osd.0             vm-01               running (75m)    79s ago  75m    78.6M    13.1G  17.0.0-12762-g63f84c50  5ba23bdce8b2  8462ab53caf0  
osd.1             vm-00               running (75m)    78s ago  75m    80.2M    11.1G  17.0.0-12762-g63f84c50  5ba23bdce8b2  b65775e0cf12  
osd.2             vm-02               running (75m)    78s ago  75m    77.6M    11.1G  17.0.0-12762-g63f84c50  5ba23bdce8b2  7051fbd39f37  
osd.3             vm-01               running (75m)    79s ago  75m    79.5M    13.1G  17.0.0-12762-g63f84c50  5ba23bdce8b2  b4735efa955f  
osd.4             vm-00               running (75m)    78s ago  75m    76.3M    11.1G  17.0.0-12762-g63f84c50  5ba23bdce8b2  9f75ffd3c98f  
osd.5             vm-02               running (75m)    78s ago  75m    79.4M    11.1G  17.0.0-12762-g63f84c50  5ba23bdce8b2  4459ac0230e1 
[ceph: root@vm-00 /]# ceph orch host ls
HOST   ADDR             LABELS  STATUS  
vm-00  192.168.122.121  _admin          
vm-01  192.168.122.190                  
vm-02  192.168.122.199       
[ceph: root@vm-00 /]# exit
exit
[root@vm-00 ~]# cat /var/lib/ceph/2b3d5b8e-126b-11ed-ba65-525400bb0473/osd.1/config 
# minimal ceph.conf for 2b3d5b8e-126b-11ed-ba65-525400bb0473
[global]
    fsid = 2b3d5b8e-126b-11ed-ba65-525400bb0473
    mon_host = [v2:192.168.122.121:3300/0,v1:192.168.122.121:6789/0] [v2:192.168.122.190:3300/0,v1:192.168.122.190:6789/0] [v2:192.168.122.199:3300/0,v1:192.168.122.199:6789/0]
[root@vm-00 ~]# 
[root@vm-00 ~]# cephadm shell
Inferring fsid 2b3d5b8e-126b-11ed-ba65-525400bb0473
Inferring config /var/lib/ceph/2b3d5b8e-126b-11ed-ba65-525400bb0473/mon.vm-00/config
Using ceph image with id '5ba23bdce8b2' and tag 'latest' created on 2022-08-02 13:31:39 +0000 UTC
quay.io/adk3798/ceph@sha256:e87f0a3cfe460aae5b9bcf49e0f8528aa5fb67a4b6ec5db739b6599a9c85e2fb
[ceph: root@vm-00 /]# ceph orch ls --service-name mon --export > mon.yml
[ceph: root@vm-00 /]# vi mon.yml 
[ceph: root@vm-00 /]# ceph orch apply -i mon.yml 
Scheduled mon update...
[ceph: root@vm-00 /]# ceph orch ps 
NAME              HOST   PORTS        STATUS          REFRESHED   AGE  MEM USE  MEM LIM  VERSION                 IMAGE ID      CONTAINER ID  
crash.vm-00       vm-00               running (100m)     5m ago  100m    6975k        -  17.0.0-12762-g63f84c50  5ba23bdce8b2  c6649af805d0  
crash.vm-01       vm-01               running (98m)      5m ago   98m    7231k        -  17.0.0-12762-g63f84c50  5ba23bdce8b2  836411be67f1  
crash.vm-02       vm-02               running (98m)     82s ago   98m    7235k        -  17.0.0-12762-g63f84c50  5ba23bdce8b2  6dcfc3e626ab  
mgr.vm-00.nxddcs  vm-00  *:9283       running (101m)     5m ago  101m     491M        -  17.0.0-12762-g63f84c50  5ba23bdce8b2  4b0f6b56c60c  
mgr.vm-02.hujgul  vm-02  *:8443,9283  running (98m)     82s ago   98m     439M        -  17.0.0-12762-g63f84c50  5ba23bdce8b2  e711ee2dd0eb  
mon.vm-00         vm-00               running (101m)     5m ago  101m    77.8M    2048M  17.0.0-12762-g63f84c50  5ba23bdce8b2  a8e283bd83fc  
mon.vm-01         vm-01               running (98m)      5m ago   98m    69.7M    2048M  17.0.0-12762-g63f84c50  5ba23bdce8b2  b07b11b09711  
osd.0             vm-01               running (79m)      5m ago   79m    78.6M    13.1G  17.0.0-12762-g63f84c50  5ba23bdce8b2  8462ab53caf0  
osd.1             vm-00               running (79m)      5m ago   79m    80.2M    11.1G  17.0.0-12762-g63f84c50  5ba23bdce8b2  b65775e0cf12  
osd.2             vm-02               running (79m)     82s ago   79m    77.6M    11.1G  17.0.0-12762-g63f84c50  5ba23bdce8b2  7051fbd39f37  
osd.3             vm-01               running (79m)      5m ago   79m    79.5M    13.1G  17.0.0-12762-g63f84c50  5ba23bdce8b2  b4735efa955f  
osd.4             vm-00               running (79m)      5m ago   79m    76.3M    11.1G  17.0.0-12762-g63f84c50  5ba23bdce8b2  9f75ffd3c98f  
osd.5             vm-02               running (79m)     82s ago   79m    79.5M    11.1G  17.0.0-12762-g63f84c50  5ba23bdce8b2  4459ac0230e1  
[ceph: root@vm-00 /]# exit
exit
[root@vm-00 ~]# cat /var/lib/ceph/2b3d5b8e-126b-11ed-ba65-525400bb0473/osd.1/config 
# minimal ceph.conf for 2b3d5b8e-126b-11ed-ba65-525400bb0473
[global]
    fsid = 2b3d5b8e-126b-11ed-ba65-525400bb0473
    mon_host = [v2:192.168.122.121:3300/0,v1:192.168.122.121:6789/0] [v2:192.168.122.190:3300/0,v1:192.168.122.190:6789/0] [v2:192.168.122.199:3300/0,v1:192.168.122.199:6789/0]
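For illustration, a fix would need to re-render the minimal config from the surviving mons whenever mon membership changes and push it back out to the OSDs. A hedged sketch of that rendering step, assuming the default v2/v1 ports (3300/6789) seen in the transcript; cephadm's actual template and address handling may differ:

```python
def render_min_config(fsid, mon_addrs):
    """Render a minimal ceph.conf from the current set of mon IPs.

    Sketch of what an OSD-config refresh would have to emit on every
    mon membership change; assumes default v2/v1 ports, which is an
    assumption and not cephadm's actual template.
    """
    mon_host = " ".join(
        f"[v2:{a}:3300/0,v1:{a}:6789/0]" for a in sorted(mon_addrs)
    )
    return (
        f"# minimal ceph.conf for {fsid}\n"
        "[global]\n"
        f"    fsid = {fsid}\n"
        f"    mon_host = {mon_host}\n"
    )

fsid = "2b3d5b8e-126b-11ed-ba65-525400bb0473"
# After mon.vm-02 is removed, only two mons should remain in the config.
print(render_min_config(fsid, {"192.168.122.121", "192.168.122.190"}))
```

Re-running this on every monmap change and rewriting each daemon's /var/lib/ceph/FSID/DAEMON/config would keep the OSDs in sync instead of leaving the removed mon's address behind.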

Related issues 2 (0 open, 2 closed)

Copied to Orchestrator - Backport #57102: quincy: cephadm: osd config not updated if mon configuration changes (Resolved, Adam King)
Copied to Orchestrator - Backport #57103: pacific: cephadm: osd config not updated if mon configuration changes (Resolved, Adam King)
Actions #1

Updated by Adam King over 1 year ago

  • Pull request ID set to 47421
Actions #2

Updated by Adam King over 1 year ago

  • Status changed from In Progress to Pending Backport
Actions #3

Updated by Backport Bot over 1 year ago

  • Copied to Backport #57102: quincy: cephadm: osd config not updated if mon configuration changes added
Actions #4

Updated by Backport Bot over 1 year ago

  • Copied to Backport #57103: pacific: cephadm: osd config not updated if mon configuration changes added
Actions #5

Updated by Backport Bot over 1 year ago

  • Tags set to backport_processed
Actions #6

Updated by Adam King over 1 year ago

  • Status changed from Pending Backport to Resolved