Bug #43430
ceph-osd: ceph status error in task 'wait for all osd to be up'
Description
host: test1 task: TASK: ceph-osd : wait for all osd to be up <class 'ansible.playbook.task.Task'>
TASK [ceph-osd : wait for all osd to be up] ****************************************************************************************************************************
Friday 27 December 2019 11:33:19 +0800 (0:00:00.147) 0:01:52.866 *
fatal: [test1]: FAILED! =>
msg: 'The conditional check ''(wait_for_all_osds_up.stdout | from_json)["osdmap"]["num_osds"] | int > 0'' failed. The error was: error while evaluating conditional ((wait_for_all_osds_up.stdout | from_json)["osdmap"]["num_osds"] | int > 0): ''dict object'' has no attribute ''num_osds'''
NO MORE HOSTS LEFT *****************************************************************************************************************************************************
PLAY RECAP *************************************************************************************************************************************************************
test1 : ok=270 changed=9 unreachable=0 failed=1 skipped=319 rescued=0 ignored=0
test2 : ok=126 changed=5 unreachable=0 failed=0 skipped=201 rescued=0 ignored=0
INSTALLER STATUS *******************************************************************************************************************************************************
Install Ceph Monitor : Complete (0:00:17)
Install Ceph Manager : Complete (0:00:17)
Install Ceph OSD : In Progress (0:00:32)
This phase can be restarted by running: roles/ceph-osd/tasks/main.yml
Friday 27 December 2019 11:33:19 +0800 (0:00:00.589) 0:01:53.456 *
===============================================================================
gather facts ------------------------------------------------------------------------------------------------------------------------------------------------------------ 3.34s
ceph-osd : apply operating system tuning -------------------------------------------------------------------------------------------------------------------------------- 1.56s
ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created ------------------------------------------------------------------------------- 1.42s
ceph-osd : systemd start osd -------------------------------------------------------------------------------------------------------------------------------------------- 1.35s
check for python -------------------------------------------------------------------------------------------------------------------------------------------------------- 1.32s
ceph-osd : use ceph-volume lvm batch to create osds --------------------------------------------------------------------------------------------------------------------- 1.30s
ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created ------------------------------------------------------------------------------- 1.27s
ceph-facts : get default crush rule value from ceph configuration ------------------------------------------------------------------------------------------------------- 1.25s
ceph-handler : check for a ceph mds socket ------------------------------------------------------------------------------------------------------------------------------ 1.25s
ceph-mon : fetch ceph initial keys -------------------------------------------------------------------------------------------------------------------------------------- 1.17s
ceph-infra : open monitor and manager ports ----------------------------------------------------------------------------------------------------------------------------- 1.09s
ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created ------------------------------------------------------------------------------- 1.07s
ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------------------------------------------------------------------------------------------------------- 1.03s
gather and delegate facts ----------------------------------------------------------------------------------------------------------------------------------------------- 0.97s
ceph-config : generate ceph configuration file: ceph.conf --------------------------------------------------------------------------------------------------------------- 0.95s
ceph-infra : generate route rule ---------------------------------------------------------------------------------------------------------------------------------------- 0.86s
ceph-infra : generate add table scripts --------------------------------------------------------------------------------------------------------------------------------- 0.84s
ceph-infra : generate numa node config file ----------------------------------------------------------------------------------------------------------------------------- 0.84s
ceph-infra : generate add to boot script -------------------------------------------------------------------------------------------------------------------------------- 0.84s
ceph-config : generate ceph configuration file: ceph.conf --------------------------------------------------------------------------------------------------------------- 0.82s
It's a bug in how the playbook parses the ceph status JSON output.
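For context: "ceph status --format json" has reported the OSD counts at two different depths across Ceph releases. A minimal sketch of the difference, abbreviated by hand rather than captured from this cluster:

    # Layout the failing conditional expects (counts directly under "osdmap"):
    { "osdmap": { "num_osds": 3, "num_up_osds": 3 } }

    # Layout older releases return (counts nested one level deeper):
    { "osdmap": { "osdmap": { "num_osds": 3, "num_up_osds": 3 } } }

The failed conditional reads the flat layout, while the cluster in this report is evidently returning the nested one, which is why the fix in comment #2 below adds the extra ["osdmap"] level.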
History
#1 Updated by Osama Elswah about 1 year ago
Any news on this?
I am running into the exact same problem
#2 Updated by Osama Elswah about 1 year ago
Osama Elswah wrote:
Any news on this?
I am running into the exact same problem
I was able to correct the behavior; now it runs.
Just edit the file "./roles/ceph-osd/tasks/main.yml", changing lines 83 and 84 to this:
83 - (wait_for_all_osds_up.stdout | from_json)["osdmap"]["osdmap"]["num_osds"] | int > 0
84 - (wait_for_all_osds_up.stdout | from_json)["osdmap"]["osdmap"]["num_osds"] == (wait_for_all_osds_up.stdout | from_json)["osdmap"]["osdmap"]["num_up_osds"]
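For anyone applying this workaround, here is a minimal sketch of how those two conditions sit inside the "wait for all osd to be up" task. Everything other than the two until conditions (the command string, the retries/delay variables, and their defaults) is reconstructed for illustration and may differ from your checkout of ceph-ansible:

    - name: wait for all osd to be up
      command: "ceph --cluster {{ cluster }} -s -f json"
      register: wait_for_all_osds_up
      changed_when: false
      retries: "{{ health_osd_check_retries | default(60) }}"
      delay: "{{ health_osd_check_delay | default(10) }}"
      until:
        # on this release both counts live under the nested osdmap key
        - (wait_for_all_osds_up.stdout | from_json)["osdmap"]["osdmap"]["num_osds"] | int > 0
        - (wait_for_all_osds_up.stdout | from_json)["osdmap"]["osdmap"]["num_osds"] == (wait_for_all_osds_up.stdout | from_json)["osdmap"]["osdmap"]["num_up_osds"]

Note that this pins the playbook to the nested layout, so it trades one release-specific assumption for another; a version-agnostic check would have to probe which key is actually present before comparing the counts.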