Bug #44990 (closed)

cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory

Added by Sebastian Wagner about 4 years ago. Updated about 3 years ago.

Status: Can't reproduce
Priority: Normal
Category: cephadm
Target version: -
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

http://pulpito.ceph.com/yuriw-2020-04-07_17:39:28-rados-wip-octopus-rgw-msg-fixes-distro-basic-smithi/4931485/

In this run, cephadm deploys mon.b on smithi163, but the container never starts: podman cannot exec /usr/bin/ceph-mon inside the image, the mon.b systemd unit enters the failed state, and the test keeps waiting for 3 mons in the monmap (see the end of the log excerpt below).

2020-04-07T20:30:18.064 INFO:teuthology.orchestra.run.smithi163:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph-ci/ceph:99c8109c540eb4adfdfd778d8f345bafcf2366e7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 25ae9672-790e-11ea-924d-001a4aab830c -- ceph orch daemon add mon smithi163:172.21.15.163=b
2020-04-07T20:30:19.340 INFO:ceph.mon.c.smithi115.stdout:Apr 07 20:30:19 smithi115 bash[8474]: cluster 2020-04-07T20:30:17.918224+0000 mgr.y (mgr.14144) 56 : cluster [DBG] pgmap v49: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2020-04-07T20:30:19.341 INFO:ceph.mon.a.smithi002.stdout:Apr 07 20:30:19 smithi002 bash[8292]: cluster 2020-04-07T20:30:17.918224+0000 mgr.y (mgr.14144) 56 : cluster [DBG] pgmap v49: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2020-04-07T20:30:21.340 INFO:ceph.mon.a.smithi002.stdout:Apr 07 20:30:21 smithi002 bash[8292]: cluster 2020-04-07T20:30:19.918699+0000 mgr.y (mgr.14144) 57 : cluster [DBG] pgmap v50: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2020-04-07T20:30:21.341 INFO:ceph.mon.a.smithi002.stdout:Apr 07 20:30:21 smithi002 bash[8292]: audit 2020-04-07T20:30:20.398410+0000 mgr.y (mgr.14144) 58 : audit [DBG] from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "smithi163:172.21.15.163=b", "target": ["mon-mgr", ""]}]: dispatch
2020-04-07T20:30:21.341 INFO:ceph.mon.a.smithi002.stdout:Apr 07 20:30:21 smithi002 bash[8292]: audit 2020-04-07T20:30:20.400823+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.14144 172.21.15.2:0/3757663963' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2020-04-07T20:30:21.343 INFO:ceph.mon.c.smithi115.stdout:Apr 07 20:30:21 smithi115 bash[8474]: cluster 2020-04-07T20:30:19.918699+0000 mgr.y (mgr.14144) 57 : cluster [DBG] pgmap v50: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2020-04-07T20:30:21.343 INFO:ceph.mon.c.smithi115.stdout:Apr 07 20:30:21 smithi115 bash[8474]: audit 2020-04-07T20:30:20.398410+0000 mgr.y (mgr.14144) 58 : audit [DBG] from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "smithi163:172.21.15.163=b", "target": ["mon-mgr", ""]}]: dispatch
2020-04-07T20:30:21.343 INFO:ceph.mon.c.smithi115.stdout:Apr 07 20:30:21 smithi115 bash[8474]: audit 2020-04-07T20:30:20.400823+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.14144 172.21.15.2:0/3757663963' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2020-04-07T20:30:21.345 INFO:ceph.mon.c.smithi115.stdout:Apr 07 20:30:21 smithi115 bash[8474]: audit 2020-04-07T20:30:20.402607+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14144 172.21.15.2:0/3757663963' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-04-07T20:30:21.345 INFO:ceph.mon.c.smithi115.stdout:Apr 07 20:30:21 smithi115 bash[8474]: cephadm 2020-04-07T20:30:20.404042+0000 mgr.y (mgr.14144) 59 : cephadm [INF] Deploying daemon mon.b on smithi163
2020-04-07T20:30:21.345 INFO:ceph.mon.c.smithi115.stdout:Apr 07 20:30:21 smithi115 bash[8474]: audit 2020-04-07T20:30:20.404724+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14144 172.21.15.2:0/3757663963' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon.b", "key": "container_image"}]: dispatch
2020-04-07T20:30:21.346 INFO:ceph.mon.a.smithi002.stdout:Apr 07 20:30:21 smithi002 bash[8292]: audit 2020-04-07T20:30:20.402607+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14144 172.21.15.2:0/3757663963' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-04-07T20:30:21.346 INFO:ceph.mon.a.smithi002.stdout:Apr 07 20:30:21 smithi002 bash[8292]: cephadm 2020-04-07T20:30:20.404042+0000 mgr.y (mgr.14144) 59 : cephadm [INF] Deploying daemon mon.b on smithi163
2020-04-07T20:30:21.347 INFO:ceph.mon.a.smithi002.stdout:Apr 07 20:30:21 smithi002 bash[8292]: audit 2020-04-07T20:30:20.404724+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14144 172.21.15.2:0/3757663963' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon.b", "key": "container_image"}]: dispatch
2020-04-07T20:30:23.361 INFO:ceph.mon.c.smithi115.stdout:Apr 07 20:30:23 smithi115 bash[8474]: cluster 2020-04-07T20:30:21.919192+0000 mgr.y (mgr.14144) 60 : cluster [DBG] pgmap v51: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2020-04-07T20:30:23.361 INFO:ceph.mon.a.smithi002.stdout:Apr 07 20:30:23 smithi002 bash[8292]: cluster 2020-04-07T20:30:21.919192+0000 mgr.y (mgr.14144) 60 : cluster [DBG] pgmap v51: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2020-04-07T20:30:23.405 INFO:teuthology.orchestra.run.smithi163.stdout:Deployed mon.b on host 'smithi163'
2020-04-07T20:30:24.216 INFO:teuthology.orchestra.run.smithi163:> true
2020-04-07T20:30:24.325 INFO:teuthology.orchestra.run.smithi163:mon.b> sudo journalctl -f -n 0 -u ceph-25ae9672-790e-11ea-924d-001a4aab830c@mon.b.service
2020-04-07T20:30:24.329 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2020-04-07T20:30:24.329 INFO:teuthology.orchestra.run.smithi163:> true
2020-04-07T20:30:24.350 INFO:ceph.mon.a.smithi002.stdout:Apr 07 20:30:24 smithi002 bash[8292]: audit 2020-04-07T20:30:23.387535+0000 mon.a (mon.0) 155 : audit [INF] from='mgr.14144 172.21.15.2:0/3757663963' entity='mgr.y' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/host.smithi163","val":"{\"daemons\": {\"mon.b\": {\"hostname\": \"smithi163\", \"daemon_id\": \"b\", \"daemon_type\": \"mon\", \"status\": 1, \"status_desc\": \"starting\"}}, \"devices\": [{\"rejected_reasons\": [\"LVM detected\", \"locked\", \"Insufficient space (<5GB) on vgs\"], \"available\": false, \"path\": \"/dev/nvme0n1\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"\", \"model\": \"INTEL SSDPEDMD400G4\", \"rev\": \"\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"512\", \"rotational\": \"0\", \"nr_requests\": \"1023\", \"scheduler_mode\": \"none\", \"partitions\": {}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 400088457216.0, \"human_readable_size\": \"372.61 GB\", \"path\": \"/dev/nvme0n1\", \"locked\": 1}, \"lvs\": [{\"name\": \"lv_5\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_4\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_3\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_2\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_1\", \"comment\": \"not used by ceph\"}], \"human_readable_type\": \"ssd\", \"device_id\": \"_CVFT623300MD400BGN\"}, {\"rejected_reasons\": [\"locked\"], \"available\": false, \"path\": \"/dev/sda\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"ATA\", \"model\": \"ST1000NM0033-9ZM\", \"rev\": \"SN06\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"0\", \"rotational\": \"1\", \"nr_requests\": \"128\", \"scheduler_mode\": \"deadline\", \"partitions\": {\"sda1\": {\"start\": \"2048\", \"sectors\": \"1953522688\", \"sectorsize\": 512, \"size\": 1000203616256.0, \"human_readable_size\": \"931.51 GB\", \"holders\": []}}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 1000204886016.0, \"human_readable_size\": \"931.51 GB\", \"path\": \"/dev/sda\", \"locked\": 1}, \"lvs\": [], \"human_readable_type\": \"hdd\", \"device_id\": \"ST1000NM0033-9ZM173_Z1W5XFMX\"}], \"daemon_config_deps\": {\"mon.b\": {\"deps\": [], \"last_config\": \"2020-04-07T20:30:20.402140\"}}, \"last_device_update\": \"2020-04-07T20:30:01.505668\", \"networks\": {\"172.21.0.0/20\": [\"172.21.15.163\"]}, \"last_host_check\": \"2020-04-07T20:29:57.183966\"}"}]: dispatch
2020-04-07T20:30:24.350 INFO:ceph.mon.a.smithi002.stdout:Apr 07 20:30:24 smithi002 bash[8292]: audit 2020-04-07T20:30:23.388406+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14144 172.21.15.2:0/3757663963' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "container_image"}]: dispatch
2020-04-07T20:30:24.352 INFO:ceph.mon.c.smithi115.stdout:Apr 07 20:30:24 smithi115 bash[8474]: audit 2020-04-07T20:30:23.387535+0000 mon.a (mon.0) 155 : audit [INF] from='mgr.14144 172.21.15.2:0/3757663963' entity='mgr.y' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/host.smithi163","val":"{\"daemons\": {\"mon.b\": {\"hostname\": \"smithi163\", \"daemon_id\": \"b\", \"daemon_type\": \"mon\", \"status\": 1, \"status_desc\": \"starting\"}}, \"devices\": [{\"rejected_reasons\": [\"LVM detected\", \"locked\", \"Insufficient space (<5GB) on vgs\"], \"available\": false, \"path\": \"/dev/nvme0n1\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"\", \"model\": \"INTEL SSDPEDMD400G4\", \"rev\": \"\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"512\", \"rotational\": \"0\", \"nr_requests\": \"1023\", \"scheduler_mode\": \"none\", \"partitions\": {}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 400088457216.0, \"human_readable_size\": \"372.61 GB\", \"path\": \"/dev/nvme0n1\", \"locked\": 1}, \"lvs\": [{\"name\": \"lv_5\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_4\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_3\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_2\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_1\", \"comment\": \"not used by ceph\"}], \"human_readable_type\": \"ssd\", \"device_id\": \"_CVFT623300MD400BGN\"}, {\"rejected_reasons\": [\"locked\"], \"available\": false, \"path\": \"/dev/sda\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"ATA\", \"model\": \"ST1000NM0033-9ZM\", \"rev\": \"SN06\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"0\", \"rotational\": \"1\", \"nr_requests\": \"128\", \"scheduler_mode\": \"deadline\", \"partitions\": {\"sda1\": {\"start\": \"2048\", \"sectors\": \"1953522688\", \"sectorsize\": 512, \"size\": 1000203616256.0, \"human_readable_size\": \"931.51 GB\", \"holders\": []}}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 1000204886016.0, \"human_readable_size\": \"931.51 GB\", \"path\": \"/dev/sda\", \"locked\": 1}, \"lvs\": [], \"human_readable_type\": \"hdd\", \"device_id\": \"ST1000NM0033-9ZM173_Z1W5XFMX\"}], \"daemon_config_deps\": {\"mon.b\": {\"deps\": [], \"last_config\": \"2020-04-07T20:30:20.402140\"}}, \"last_device_update\": \"2020-04-07T20:30:01.505668\", \"networks\": {\"172.21.0.0/20\": [\"172.21.15.163\"]}, \"last_host_check\": \"2020-04-07T20:29:57.183966\"}"}]: dispatch
2020-04-07T20:30:24.352 INFO:ceph.mon.c.smithi115.stdout:Apr 07 20:30:24 smithi115 bash[8474]: audit 2020-04-07T20:30:23.388406+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14144 172.21.15.2:0/3757663963' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "container_image"}]: dispatch
2020-04-07T20:30:24.353 INFO:ceph.mon.a.smithi002.stdout:Apr 07 20:30:24 smithi002 bash[8292]: audit 2020-04-07T20:30:23.392468+0000 mon.a (mon.0) 157 : audit [INF] from='mgr.14144 172.21.15.2:0/3757663963' entity='mgr.y' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/host.smithi163","val":"{\"daemons\": {\"mon.b\": {\"hostname\": \"smithi163\", \"daemon_id\": \"b\", \"daemon_type\": \"mon\", \"status\": 1, \"status_desc\": \"starting\"}}, \"devices\": [{\"rejected_reasons\": [\"LVM detected\", \"locked\", \"Insufficient space (<5GB) on vgs\"], \"available\": false, \"path\": \"/dev/nvme0n1\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"\", \"model\": \"INTEL SSDPEDMD400G4\", \"rev\": \"\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"512\", \"rotational\": \"0\", \"nr_requests\": \"1023\", \"scheduler_mode\": \"none\", \"partitions\": {}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 400088457216.0, \"human_readable_size\": \"372.61 GB\", \"path\": \"/dev/nvme0n1\", \"locked\": 1}, \"lvs\": [{\"name\": \"lv_5\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_4\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_3\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_2\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_1\", \"comment\": \"not used by ceph\"}], \"human_readable_type\": \"ssd\", \"device_id\": \"_CVFT623300MD400BGN\"}, {\"rejected_reasons\": [\"locked\"], \"available\": false, \"path\": \"/dev/sda\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"ATA\", \"model\": \"ST1000NM0033-9ZM\", \"rev\": \"SN06\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"0\", \"rotational\": \"1\", \"nr_requests\": \"128\", \"scheduler_mode\": \"deadline\", \"partitions\": {\"sda1\": {\"start\": \"2048\", \"sectors\": \"1953522688\", \"sectorsize\": 512, \"size\": 1000203616256.0, \"human_readable_size\": \"931.51 GB\", \"holders\": []}}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 1000204886016.0, \"human_readable_size\": \"931.51 GB\", \"path\": \"/dev/sda\", \"locked\": 1}, \"lvs\": [], \"human_readable_type\": \"hdd\", \"device_id\": \"ST1000NM0033-9ZM173_Z1W5XFMX\"}], \"daemon_config_deps\": {\"mon.b\": {\"deps\": [], \"last_config\": \"2020-04-07T20:30:20.402140\"}}, \"last_device_update\": \"2020-04-07T20:30:01.505668\", \"networks\": {\"172.21.0.0/20\": [\"172.21.15.163\"]}, \"last_host_check\": \"2020-04-07T20:29:57.183966\"}"}]': finished
2020-04-07T20:30:24.355 INFO:ceph.mon.c.smithi115.stdout:Apr 07 20:30:24 smithi115 bash[8474]: audit 2020-04-07T20:30:23.392468+0000 mon.a (mon.0) 157 : audit [INF] from='mgr.14144 172.21.15.2:0/3757663963' entity='mgr.y' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/host.smithi163","val":"{\"daemons\": {\"mon.b\": {\"hostname\": \"smithi163\", \"daemon_id\": \"b\", \"daemon_type\": \"mon\", \"status\": 1, \"status_desc\": \"starting\"}}, \"devices\": [{\"rejected_reasons\": [\"LVM detected\", \"locked\", \"Insufficient space (<5GB) on vgs\"], \"available\": false, \"path\": \"/dev/nvme0n1\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"\", \"model\": \"INTEL SSDPEDMD400G4\", \"rev\": \"\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"512\", \"rotational\": \"0\", \"nr_requests\": \"1023\", \"scheduler_mode\": \"none\", \"partitions\": {}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 400088457216.0, \"human_readable_size\": \"372.61 GB\", \"path\": \"/dev/nvme0n1\", \"locked\": 1}, \"lvs\": [{\"name\": \"lv_5\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_4\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_3\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_2\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_1\", \"comment\": \"not used by ceph\"}], \"human_readable_type\": \"ssd\", \"device_id\": \"_CVFT623300MD400BGN\"}, {\"rejected_reasons\": [\"locked\"], \"available\": false, \"path\": \"/dev/sda\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"ATA\", \"model\": \"ST1000NM0033-9ZM\", \"rev\": \"SN06\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"0\", \"rotational\": \"1\", \"nr_requests\": \"128\", \"scheduler_mode\": \"deadline\", \"partitions\": {\"sda1\": {\"start\": \"2048\", \"sectors\": \"1953522688\", \"sectorsize\": 512, \"size\": 1000203616256.0, \"human_readable_size\": \"931.51 GB\", \"holders\": []}}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 1000204886016.0, \"human_readable_size\": \"931.51 GB\", \"path\": \"/dev/sda\", \"locked\": 1}, \"lvs\": [], \"human_readable_type\": \"hdd\", \"device_id\": \"ST1000NM0033-9ZM173_Z1W5XFMX\"}], \"daemon_config_deps\": {\"mon.b\": {\"deps\": [], \"last_config\": \"2020-04-07T20:30:20.402140\"}}, \"last_device_update\": \"2020-04-07T20:30:01.505668\", \"networks\": {\"172.21.0.0/20\": [\"172.21.15.163\"]}, \"last_host_check\": \"2020-04-07T20:29:57.183966\"}"}]': finished
2020-04-07T20:30:24.420 INFO:teuthology.orchestra.run.smithi163:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph-ci/ceph:99c8109c540eb4adfdfd778d8f345bafcf2366e7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 25ae9672-790e-11ea-924d-001a4aab830c -- ceph mon dump -f json
2020-04-07T20:30:24.482 INFO:ceph.mon.b.smithi163.stdout:-- Logs begin at Tue 2020-04-07 20:19:06 UTC. --
2020-04-07T20:30:24.482 INFO:ceph.mon.b.smithi163.stdout:Apr 07 20:30:23 smithi163 podman[8468]: 2020-04-07 20:30:23.964016659 +0000 UTC m=+0.572856243 container create 5e38b7398dd4098834c5fc1b074f530940cbfdb5f6ac8ffaf41d021be4014597 (image=quay.io/ceph-ci/ceph:99c8109c540eb4adfdfd778d8f345bafcf2366e7, name=ceph-25ae9672-790e-11ea-924d-001a4aab830c-mon.b)
2020-04-07T20:30:25.391 INFO:ceph.mon.a.smithi002.stdout:Apr 07 20:30:25 smithi002 bash[8292]: cluster 2020-04-07T20:30:23.919686+0000 mgr.y (mgr.14144) 61 : cluster [DBG] pgmap v52: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2020-04-07T20:30:25.392 INFO:ceph.mon.c.smithi115.stdout:Apr 07 20:30:25 smithi115 bash[8474]: cluster 2020-04-07T20:30:23.919686+0000 mgr.y (mgr.14144) 61 : cluster [DBG] pgmap v52: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2020-04-07T20:30:27.396 INFO:ceph.mon.a.smithi002.stdout:Apr 07 20:30:27 smithi002 bash[8292]: cluster 2020-04-07T20:30:25.920162+0000 mgr.y (mgr.14144) 62 : cluster [DBG] pgmap v53: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2020-04-07T20:30:27.407 INFO:ceph.mon.c.smithi115.stdout:Apr 07 20:30:27 smithi115 bash[8474]: cluster 2020-04-07T20:30:25.920162+0000 mgr.y (mgr.14144) 62 : cluster [DBG] pgmap v53: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2020-04-07T20:30:29.400 INFO:ceph.mon.a.smithi002.stdout:Apr 07 20:30:29 smithi002 bash[8292]: cluster 2020-04-07T20:30:27.920656+0000 mgr.y (mgr.14144) 63 : cluster [DBG] pgmap v54: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2020-04-07T20:30:29.401 INFO:ceph.mon.c.smithi115.stdout:Apr 07 20:30:29 smithi115 bash[8474]: cluster 2020-04-07T20:30:27.920656+0000 mgr.y (mgr.14144) 63 : cluster [DBG] pgmap v54: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2020-04-07T20:30:31.405 INFO:ceph.mon.a.smithi002.stdout:Apr 07 20:30:31 smithi002 bash[8292]: cluster 2020-04-07T20:30:29.921145+0000 mgr.y (mgr.14144) 64 : cluster [DBG] pgmap v55: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2020-04-07T20:30:31.406 INFO:ceph.mon.c.smithi115.stdout:Apr 07 20:30:31 smithi115 bash[8474]: cluster 2020-04-07T20:30:29.921145+0000 mgr.y (mgr.14144) 64 : cluster [DBG] pgmap v55: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2020-04-07T20:30:32.079 INFO:ceph.mon.b.smithi163.stdout:Apr 07 20:30:32 smithi163 podman[8468]: 2020-04-07 20:30:32.092425133 +0000 UTC m=+8.701264795 container remove 5e38b7398dd4098834c5fc1b074f530940cbfdb5f6ac8ffaf41d021be4014597 (image=quay.io/ceph-ci/ceph:99c8109c540eb4adfdfd778d8f345bafcf2366e7, name=ceph-25ae9672-790e-11ea-924d-001a4aab830c-mon.b)
2020-04-07T20:30:32.082 INFO:ceph.mon.b.smithi163.stdout:Apr 07 20:30:32 smithi163 bash[8465]: time="2020-04-07T20:30:32Z" level=error msg="unable to remove container 5e38b7398dd4098834c5fc1b074f530940cbfdb5f6ac8ffaf41d021be4014597 after failing to start and attach to it" 
2020-04-07T20:30:32.138 INFO:ceph.mon.b.smithi163.stdout:Apr 07 20:30:32 smithi163 bash[8465]: Error: container_linux.go:345: starting container process caused "exec: \"/usr/bin/ceph-mon\": stat /usr/bin/ceph-mon: no such file or directory" 
2020-04-07T20:30:32.139 INFO:ceph.mon.b.smithi163.stdout:Apr 07 20:30:32 smithi163 bash[8465]: : OCI runtime error
2020-04-07T20:30:32.163 INFO:ceph.mon.b.smithi163.stdout:Apr 07 20:30:32 smithi163 systemd[1]: ceph-25ae9672-790e-11ea-924d-001a4aab830c@mon.b.service: main process exited, code=exited, status=127/n/a
2020-04-07T20:30:32.356 INFO:ceph.mon.b.smithi163.stdout:Apr 07 20:30:32 smithi163 podman[8760]: Error: no container with name or ID ceph-25ae9672-790e-11ea-924d-001a4aab830c-mon.b found: no such container
2020-04-07T20:30:32.373 INFO:ceph.mon.b.smithi163.stdout:Apr 07 20:30:32 smithi163 systemd[1]: Unit ceph-25ae9672-790e-11ea-924d-001a4aab830c@mon.b.service entered failed state.
2020-04-07T20:30:32.374 INFO:ceph.mon.b.smithi163.stdout:Apr 07 20:30:32 smithi163 systemd[1]: ceph-25ae9672-790e-11ea-924d-001a4aab830c@mon.b.service failed.
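
For reference, a quick way to check whether the mon binary is actually present in the image this run used (a sketch; it assumes the quay.io/ceph-ci tag from the log above is still pullable and that podman is available on the host):

# Pull the exact CI image the failing run used.
IMAGE=quay.io/ceph-ci/ceph:99c8109c540eb4adfdfd778d8f345bafcf2366e7
sudo podman pull "$IMAGE"

# Mirror the failing exec: stat the mon binary inside the container instead of
# running the default entrypoint.
sudo podman run --rm --entrypoint stat "$IMAGE" /usr/bin/ceph-mon

# List which ceph binaries the image actually ships.
sudo podman run --rm --entrypoint ls "$IMAGE" -l /usr/bin/ | grep ceph

If stat also fails in a plain podman run, the image itself lacks /usr/bin/ceph-mon; if it succeeds, the problem is more likely in how podman started this particular container (compare #44777 below).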


Related issues: 5 (1 open, 4 closed)

Related to Orchestrator - Bug #44777: podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to remove container (Resolved)
Related to Orchestrator - Bug #46036: cephadm: killmode=none: systemd units failed, but containers still running (Resolved)
Related to Orchestrator - Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/storage/overlay/.../merged: device or resource busy (Resolved, Abhishek Lekshmanan)
Related to Orchestrator - Bug #53175: podman: failed to exec pid1: Exec format error: wrongly using the amd64-only digest (New)
Has duplicate Orchestrator - Bug #45421: cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove container c3ed65093dd89d593e40d2d1bbfa03c8dcb5f53ba7bdda77eacde8d9f1a9c28e after failing to start and attach to it" (Duplicate)
