Bug #44777


podman: stat /usr/bin/ceph-mon: no such file or directory, then unable to remove container

Added by Sage Weil about 4 years ago. Updated about 4 years ago.

Status:
Resolved
Priority:
Urgent
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

the mon.b unit:

Mar 26 22:34:26 smithi140 systemd[1]: Starting Ceph mon.b for 7f76a098-6fb1-11ea-924a-001a4aab830c...
Mar 26 22:34:26 smithi140 podman[6820]: Error: no container with name or ID ceph-7f76a098-6fb1-11ea-924a-001a4aab830c-mon.b found: no such container
Mar 26 22:34:26 smithi140 systemd[1]: Started Ceph mon.b for 7f76a098-6fb1-11ea-924a-001a4aab830c.
Mar 26 22:34:26 smithi140 podman[6843]: 2020-03-26 22:34:26.914023212 +0000 UTC m=+0.592997223 container create de4f678811fa1b714c7ca8c97659d079945413f79566cbe400c40652c1d7b2d0 (image=quay.io/ceph-ci/ceph:3ee082218cdc1e5bca6f5c10b26b089353653b68, name=ceph-7f76a098-6fb1-11ea-924a-001a4aab830c-mon.b)
Mar 26 22:34:35 smithi140 podman[6843]: 2020-03-26 22:34:35.04049354 +0000 UTC m=+8.719467657 container remove de4f678811fa1b714c7ca8c97659d079945413f79566cbe400c40652c1d7b2d0 (image=quay.io/ceph-ci/ceph:3ee082218cdc1e5bca6f5c10b26b089353653b68, name=ceph-7f76a098-6fb1-11ea-924a-001a4aab830c-mon.b)
Mar 26 22:34:35 smithi140 bash[6841]: time="2020-03-26T22:34:35Z" level=error msg="unable to remove container de4f678811fa1b714c7ca8c97659d079945413f79566cbe400c40652c1d7b2d0 after failing to start and attach to it" 
Mar 26 22:34:35 smithi140 bash[6841]: Error: container_linux.go:345: starting container process caused "exec: \"/usr/bin/ceph-mon\": stat /usr/bin/ceph-mon: no such file or directory" 
Mar 26 22:34:35 smithi140 bash[6841]: : OCI runtime error

teuthology.log:
reached maximum tries (180) after waiting for 180 seconds

/a/sage-2020-03-26_21:55:07-rados:thrash-old-clients-master-distro-basic-smithi/4892461
/a/sage-2020-03-26_21:55:07-rados:thrash-old-clients-master-distro-basic-smithi/4892454


Related issues (1 closed)

Related to Orchestrator - Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory (Can't reproduce, Sebastian Wagner)

Actions #2

Updated by Sebastian Wagner about 4 years ago

Fascinating!

Actions #3

Updated by Sebastian Wagner about 4 years ago

  • Status changed from Need More Info to Fix Under Review
  • Pull request ID set to 34260
Actions #4

Updated by Sebastian Wagner about 4 years ago

2020-03-26T22:31:13.457 INFO:teuthology.orchestra.run.smithi140:> git archive --remote=git://git.ceph.com/ceph.git 3ee082218cdc1e5bca6f5c10b26b089353653b68 src/cephadm/cephadm | tar -xO src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm
2020-03-26T22:31:13.863 INFO:teuthology.orchestra.run.smithi140:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2020-03-26T22:31:14.197 INFO:teuthology.orchestra.run.smithi140:> sudo mkdir -p /etc/ceph
2020-03-26T22:31:14.568 INFO:teuthology.orchestra.run.smithi140:> sudo chmod 777 /etc/ceph
2020-03-26T22:32:38.884 INFO:teuthology.orchestra.run.smithi140:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTA0XhrJzBq8cCO3Leab2NhxMqjO+jcY9mmv7LjBkVzDM0smEUWFWSVRBhpGQ3HzkzbK5hrkFzGaABD/jZ9QBWYmHiFzx91v8WimdwUS2639vkVKVDMznpPpYn6Uz7rnN6c6ykUDSzng2QK4v78Ds6OnmufNwCTGVPAVdq3YUowWcVlrhZ/WRI18HpRru/TLzuDXLcVUmCWuOi6+BIfeAsy8buzRfdJgJghhfnghEvSMKBvZtqR5wv2HaVAkrAaIKU3NbbQiWlX9TH5LxR6hP01Pg45WXrcTW4JfHMROx63WZ10jjb3VIEn2sJx0sSSy9Kal+Cg9FiRB8cJIT0mdx6Po0fJhsL0IlN7UyHQ30o5/luOzf/AIaa9b4EEPRlrUaQ7gmt1T1oAH6+iE+Ux1ISjVkyRURYDhQXrhblyXb8sD1LStEQps839DMPHRdifsh92h4cvC8FKC9bHucYfErA5/zQW+3dmWDntlCArm7DvfVtVGqmElbggowmIZwh1a8= ceph-7f76a098-6fb1-11ea-924a-001a4aab830c' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2020-03-26T22:32:39.041 INFO:teuthology.orchestra.run.smithi140.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTA0XhrJzBq8cCO3Leab2NhxMqjO+jcY9mmv7LjBkVzDM0smEUWFWSVRBhpGQ3HzkzbK5hrkFzGaABD/jZ9QBWYmHiFzx91v8WimdwUS2639vkVKVDMznpPpYn6Uz7rnN6c6ykUDSzng2QK4v78Ds6OnmufNwCTGVPAVdq3YUowWcVlrhZ/WRI18HpRru/TLzuDXLcVUmCWuOi6+BIfeAsy8buzRfdJgJghhfnghEvSMKBvZtqR5wv2HaVAkrAaIKU3NbbQiWlX9TH5LxR6hP01Pg45WXrcTW4JfHMROx63WZ10jjb3VIEn2sJx0sSSy9Kal+Cg9FiRB8cJIT0mdx6Po0fJhsL0IlN7UyHQ30o5/luOzf/AIaa9b4EEPRlrUaQ7gmt1T1oAH6+iE+Ux1ISjVkyRURYDhQXrhblyXb8sD1LStEQps839DMPHRdifsh92h4cvC8FKC9bHucYfErA5/zQW+3dmWDntlCArm7DvfVtVGqmElbggowmIZwh1a8= ceph-7f76a098-6fb1-11ea-924a-001a4aab830c
2020-03-26T22:33:23.077 INFO:tasks.cephadm:Writing conf and keyring to smithi140
2020-03-26T22:33:23.077 INFO:teuthology.orchestra.run.smithi140:> cat > /etc/ceph/ceph.conf
2020-03-26T22:33:23.165 INFO:teuthology.orchestra.run.smithi140:> cat > /etc/ceph/ceph.client.admin.keyring
2020-03-26T22:33:23.230 INFO:tasks.cephadm:Adding host smithi140 to orchestrator...
2020-03-26T22:33:23.230 INFO:teuthology.orchestra.run.smithi140:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph-ci/ceph:3ee082218cdc1e5bca6f5c10b26b089353653b68 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7f76a098-6fb1-11ea-924a-001a4aab830c -- ceph orch host add smithi140
2020-03-26T22:33:59.473 INFO:ceph.mgr.y.smithi051.stdout:Mar 26 22:33:59 smithi051 bash[6885]: Warning: Permanently added 'smithi140,172.21.15.140' (ECDSA) to the list of known hosts.
2020-03-26T22:34:00.434 INFO:teuthology.orchestra.run.smithi140.stdout:Added host 'smithi140'
2020-03-26T22:34:00.699 INFO:ceph.mon.a.smithi051.stdout:Mar 26 22:34:00 smithi051 bash[6639]: audit 2020-03-26T22:33:59.399622+0000 mgr.y (mgr.14142) 41 : audit [DBG] from='client.14167 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "smithi140", "target": ["mon-mgr", ""]}]: dispatch
2020-03-26T22:34:00.701 INFO:ceph.mon.a.smithi051.stdout:Mar 26 22:34:00 smithi051 bash[6639]: audit 2020-03-26T22:34:00.418319+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14142 172.21.15.51:0/536522488' entity='mgr.y' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/inventory","val":"{\"smithi051\": {\"hostname\": \"smithi051\", \"addr\": \"smithi051\", \"labels\": [], \"status\": \"\"}, \"smithi008\": {\"hostname\": \"smithi008\", \"addr\": \"smithi008\", \"labels\": [], \"status\": \"\"}, \"smithi140\": {\"hostname\": \"smithi140\", \"addr\": \"smithi140\", \"labels\": [], \"status\": \"\"}}"}]: dispatch
2020-03-26T22:34:00.703 INFO:ceph.mon.a.smithi051.stdout:Mar 26 22:34:00 smithi051 bash[6639]: audit 2020-03-26T22:34:00.422425+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14142 172.21.15.51:0/536522488' entity='mgr.y' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/inventory","val":"{\"smithi051\": {\"hostname\": \"smithi051\", \"addr\": \"smithi051\", \"labels\": [], \"status\": \"\"}, \"smithi008\": {\"hostname\": \"smithi008\", \"addr\": \"smithi008\", \"labels\": [], \"status\": \"\"}, \"smithi140\": {\"hostname\": \"smithi140\", \"addr\": \"smithi140\", \"labels\": [], \"status\": \"\"}}"}]': finished
2020-03-26T22:34:01.155 INFO:teuthology.orchestra.run.smithi140:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph-ci/ceph:3ee082218cdc1e5bca6f5c10b26b089353653b68 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7f76a098-6fb1-11ea-924a-001a4aab830c -- ceph orch host ls --format=json
2020-03-26T22:34:01.701 INFO:ceph.mon.a.smithi051.stdout:Mar 26 22:34:01 smithi051 bash[6639]: cephadm 2020-03-26T22:34:00.423182+0000 mgr.y (mgr.14142) 42 : cephadm [INF] Added host smithi140
2020-03-26T22:34:01.701 INFO:ceph.mon.a.smithi051.stdout:Mar 26 22:34:01 smithi051 bash[6639]: audit 2020-03-26T22:34:00.703582+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14142 172.21.15.51:0/536522488' entity='mgr.y' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/host.smithi140","val":"{\"daemons\": {}, \"devices\": [], \"daemon_config_deps\": {}, \"networks\": {}, \"last_host_check\": \"2020-03-26T22:34:00.702977\"}"}]: dispatch
2020-03-26T22:34:01.702 INFO:ceph.mon.a.smithi051.stdout:Mar 26 22:34:01 smithi051 bash[6639]: audit 2020-03-26T22:34:00.708009+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14142 172.21.15.51:0/536522488' entity='mgr.y' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/host.smithi140","val":"{\"daemons\": {}, \"devices\": [], \"daemon_config_deps\": {}, \"networks\": {}, \"last_host_check\": \"2020-03-26T22:34:00.702977\"}"}]': finished
2020-03-26T22:34:01.703 INFO:ceph.mon.a.smithi051.stdout:Mar 26 22:34:01 smithi051 bash[6639]: audit 2020-03-26T22:34:00.973029+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14142 172.21.15.51:0/536522488' entity='mgr.y' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/host.smithi140","val":"{\"daemons\": {}, \"devices\": [], \"daemon_config_deps\": {}, \"last_daemon_update\": \"2020-03-26T22:34:00.972446\", \"networks\": {}, \"last_host_check\": \"2020-03-26T22:34:00.702977\"}"}]: dispatch
2020-03-26T22:34:01.703 INFO:ceph.mon.a.smithi051.stdout:Mar 26 22:34:01 smithi051 bash[6639]: audit 2020-03-26T22:34:00.977949+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14142 172.21.15.51:0/536522488' entity='mgr.y' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/host.smithi140","val":"{\"daemons\": {}, \"devices\": [], \"daemon_config_deps\": {}, \"last_daemon_update\": \"2020-03-26T22:34:00.972446\", \"networks\": {}, \"last_host_check\": \"2020-03-26T22:34:00.702977\"}"}]': finished
2020-03-26T22:34:04.166 INFO:teuthology.orchestra.run.smithi140.stdout:
2020-03-26T22:34:04.166 INFO:teuthology.orchestra.run.smithi140.stdout:[{"addr": "smithi051", "hostname": "smithi051", "labels": [], "status": ""}, {"addr": "smithi008", "hostname": "smithi008", "labels": [], "status": ""}, {"addr": "smithi140", "hostname": "smithi140", "labels": [], "status": ""}]
2020-03-26T22:34:06.653 INFO:ceph.mon.a.smithi051.stdout:Mar 26 22:34:06 smithi051 bash[6639]: audit 2020-03-26T22:34:05.730437+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14142 172.21.15.51:0/536522488' entity='mgr.y' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/host.smithi140","val":"{\"daemons\": {}, \"devices\": [{\"rejected_reasons\": [\"LVM detected\", \"Insufficient space (<5GB) on vgs\", \"locked\"], \"available\": false, \"path\": \"/dev/nvme0n1\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"\", \"model\": \"INTEL SSDPEDMD400G4\", \"rev\": \"\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"512\", \"rotational\": \"0\", \"nr_requests\": \"1023\", \"scheduler_mode\": \"none\", \"partitions\": {}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 400088457216.0, \"human_readable_size\": \"372.61 GB\", \"path\": \"/dev/nvme0n1\", \"locked\": 1}, \"lvs\": [{\"name\": \"lv_5\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_4\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_3\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_2\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_1\", \"comment\": \"not used by ceph\"}], \"human_readable_type\": \"ssd\", \"device_id\": \"_CVFT62330038400BGN\"}, {\"rejected_reasons\": [\"locked\"], \"available\": false, \"path\": \"/dev/sda\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"ATA\", \"model\": \"ST1000NM0033-9ZM\", \"rev\": \"SN06\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"0\", \"rotational\": \"1\", \"nr_requests\": \"128\", \"scheduler_mode\": \"deadline\", \"partitions\": {\"sda1\": {\"start\": \"2048\", \"sectors\": \"1953522688\", \"sectorsize\": 512, \"size\": 1000203616256.0, \"human_readable_size\": \"931.51 GB\", \"holders\": []}}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 1000204886016.0, \"human_readable_size\": \"931.51 GB\", \"path\": \"/dev/sda\", \"locked\": 1}, 
\"lvs\": [], \"human_readable_type\": \"hdd\", \"device_id\": \"ST1000NM0033-9ZM173_Z1W5XV0X\"}], \"daemon_config_deps\": {}, \"last_daemon_update\": \"2020-03-26T22:34:00.972446\", \"last_device_update\": \"2020-03-26T22:34:05.729332\", \"networks\": {\"172.21.0.0/20\": [\"172.21.15.140\"]}, \"last_host_check\": \"2020-03-26T22:34:00.702977\"}"}]: dispatch
2020-03-26T22:34:06.653 INFO:ceph.mon.a.smithi051.stdout:Mar 26 22:34:06 smithi051 bash[6639]: audit 2020-03-26T22:34:05.735226+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14142 172.21.15.51:0/536522488' entity='mgr.y' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/host.smithi140","val":"{\"daemons\": {}, \"devices\": [{\"rejected_reasons\": [\"LVM detected\", \"Insufficient space (<5GB) on vgs\", \"locked\"], \"available\": false, \"path\": \"/dev/nvme0n1\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"\", \"model\": \"INTEL SSDPEDMD400G4\", \"rev\": \"\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"512\", \"rotational\": \"0\", \"nr_requests\": \"1023\", \"scheduler_mode\": \"none\", \"partitions\": {}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 400088457216.0, \"human_readable_size\": \"372.61 GB\", \"path\": \"/dev/nvme0n1\", \"locked\": 1}, \"lvs\": [{\"name\": \"lv_5\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_4\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_3\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_2\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_1\", \"comment\": \"not used by ceph\"}], \"human_readable_type\": \"ssd\", \"device_id\": \"_CVFT62330038400BGN\"}, {\"rejected_reasons\": [\"locked\"], \"available\": false, \"path\": \"/dev/sda\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"ATA\", \"model\": \"ST1000NM0033-9ZM\", \"rev\": \"SN06\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"0\", \"rotational\": \"1\", \"nr_requests\": \"128\", \"scheduler_mode\": \"deadline\", \"partitions\": {\"sda1\": {\"start\": \"2048\", \"sectors\": \"1953522688\", \"sectorsize\": 512, \"size\": 1000203616256.0, \"human_readable_size\": \"931.51 GB\", \"holders\": []}}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 1000204886016.0, \"human_readable_size\": \"931.51 GB\", \"path\": \"/dev/sda\", \"locked\": 1}, 
\"lvs\": [], \"human_readable_type\": \"hdd\", \"device_id\": \"ST1000NM0033-9ZM173_Z1W5XV0X\"}], \"daemon_config_deps\": {}, \"last_daemon_update\": \"2020-03-26T22:34:00.972446\", \"last_device_update\": \"2020-03-26T22:34:05.729332\", \"networks\": {\"172.21.0.0/20\": [\"172.21.15.140\"]}, \"last_host_check\": \"2020-03-26T22:34:00.702977\"}"}]': finished
2020-03-26T22:34:20.566 INFO:teuthology.orchestra.run.smithi140:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph-ci/ceph:3ee082218cdc1e5bca6f5c10b26b089353653b68 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7f76a098-6fb1-11ea-924a-001a4aab830c -- ceph orch daemon add mon smithi140:172.21.15.140=b
2020-03-26T22:34:25.006 INFO:ceph.mon.a.smithi051.stdout:Mar 26 22:34:25 smithi051 bash[6639]: audit 2020-03-26T22:34:23.104773+0000 mgr.y (mgr.14142) 58 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "smithi140:172.21.15.140=b", "target": ["mon-mgr", ""]}]: dispatch
2020-03-26T22:34:25.007 INFO:ceph.mon.a.smithi051.stdout:Mar 26 22:34:25 smithi051 bash[6639]: cephadm 2020-03-26T22:34:23.109569+0000 mgr.y (mgr.14142) 59 : cephadm [INF] Deploying daemon mon.b on smithi140
2020-03-26T22:34:25.007 INFO:ceph.mon.c.smithi008.stdout:Mar 26 22:34:25 smithi008 bash[7027]: audit 2020-03-26T22:34:23.104773+0000 mgr.y (mgr.14142) 58 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "smithi140:172.21.15.140=b", "target": ["mon-mgr", ""]}]: dispatch
2020-03-26T22:34:25.008 INFO:ceph.mon.c.smithi008.stdout:Mar 26 22:34:25 smithi008 bash[7027]: cephadm 2020-03-26T22:34:23.109569+0000 mgr.y (mgr.14142) 59 : cephadm [INF] Deploying daemon mon.b on smithi140
2020-03-26T22:34:26.353 INFO:teuthology.orchestra.run.smithi140.stdout:Deployed mon.b on host 'smithi140'
2020-03-26T22:34:27.019 INFO:ceph.mon.a.smithi051.stdout:Mar 26 22:34:27 smithi051 bash[6639]: audit 2020-03-26T22:34:26.336845+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.14142 172.21.15.51:0/536522488' entity='mgr.y' cmd='[{"prefix":"config-key se
t","key":"mgr/cephadm/host.smithi140","val":{
  "daemons": {
    "mon.b": {
      "hostname": "smithi140",
      "daemon_id": "b",
      "daemon_type": "mon",
      "status": 1,
      "status_desc": "starting" 
    }
  },
  "devices": [
    {
      "rejected_reasons": [
        "LVM detected",
        "Insufficient space (<5GB) on vgs",
        "locked" 
      ],
      "available": false,
      "path": "/dev/nvme0n1",
      "sys_api": {
        "removable": "0",
        "ro": "0",
        "vendor": "",
        "model": "INTEL SSDPEDMD400G4",
        "rev": "",
        "sas_address": "",
        "sas_device_handle": "",
        "support_discard": "512",
        "rotational": "0",
        "nr_requests": "1023",
        "scheduler_mode": "none",
        "partitions": {},
        "sectors": 0,
        "sectorsize": "512",
        "size": 400088457216,
        "human_readable_size": "372.61 GB",
        "path": "/dev/nvme0n1",
        "locked": 1
      },
      "lvs": [
        {
          "name": "lv_5",
          "comment": "not used by ceph" 
        },
        {
          "name": "lv_4",
          "comment": "not used by ceph" 
        },
        {
          "name": "lv_3",
          "comment": "not used by ceph" 
        },
        {
          "name": "lv_2",
          "comment": "not used by ceph" 
        },
        {
          "name": "lv_1",
          "comment": "not used by ceph" 
        }
      ],
      "human_readable_type": "ssd",
      "device_id": "_CVFT62330038400BGN" 
    },
    {
      "rejected_reasons": [
        "locked" 
      ],
      "available": false,
      "path": "/dev/sda",
      "sys_api": {
        "removable": "0",
        "ro": "0",
        "vendor": "ATA",
        "model": "ST1000NM0033-9ZM",
        "rev": "SN06",
        "sas_address": "",
        "sas_device_handle": "",
        "support_discard": "0",
        "rotational": "1",
        "nr_requests": "128",
        "scheduler_mode": "deadline",
        "partitions": {
          "sda1": {
            "start": "2048",
            "sectors": "1953522688",
            "sectorsize": 512,
            "size": 1000203616256,
            "human_readable_size": "931.51 GB",
            "holders": []
          }
        },
        "sectors": 0,
        "sectorsize": "512",
        "size": 1000204886016,
        "human_readable_size": "931.51 GB",
        "path": "/dev/sda",
        "locked": 1
      },
      "lvs": [],
      "human_readable_type": "hdd",
      "device_id": "ST1000NM0033-9ZM173_Z1W5XV0X" 
    }
  ],
  "daemon_config_deps": {
    "mon.b": {
      "deps": [],
      "last_config": "2020-03-26T22:34:23.107746" 
    }
  },
  "last_device_update": "2020-03-26T22:34:05.729332",
  "networks": {
    "172.21.0.0/20": [
      "172.21.15.140" 
    ]
  },
  "last_host_check": "2020-03-26T22:34:00.702977" 
}
"}]': finished
2020-03-26T22:34:27.174 INFO:teuthology.orchestra.run.smithi140:mon.b> sudo journalctl -f -n 0 -u ceph-7f76a098-6fb1-11ea-924a-001a4aab830c@mon.b.service
2020-03-26T22:34:27.180 INFO:teuthology.orchestra.run.smithi140:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph-ci/ceph:3ee082218cdc1e5bca6f5c10b26b089353653b68 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7f76a098-6fb1-11ea-924a-001a4aab830c -- ceph mon dump -f json
2020-03-26T22:34:27.335 INFO:ceph.mon.b.smithi140.stdout:-- Logs begin at Thu 2020-03-26 22:19:24 UTC. --
2020-03-26T22:34:27.336 INFO:ceph.mon.b.smithi140.stdout:Mar 26 22:34:26 smithi140 podman[6843]: 2020-03-26 22:34:26.914023212 +0000 UTC m=+0.592997223 container create de4f678811fa1b714c7ca8c97659d079945413f79566cbe400c40652c1d7b2d0 (image=quay.io/ceph-ci/ceph:3ee082218cdc1e5bca6f5c10b26b089353653b68, name=ceph-7f76a098-6fb1-11ea-924a-001a4aab830c-mon.b)
2020-03-26T22:34:35.045 INFO:ceph.mon.b.smithi140.stdout:Mar 26 22:34:35 smithi140 podman[6843]: 2020-03-26 22:34:35.04049354 +0000 UTC m=+8.719467657 container remove de4f678811fa1b714c7ca8c97659d079945413f79566cbe400c40652c1d7b2d0 (image=quay.io/ceph-ci/ceph:3ee082218cdc1e5bca6f5c10b26b089353653b68, name=ceph-7f76a098-6fb1-11ea-924a-001a4aab830c-mon.b)
2020-03-26T22:34:35.047 INFO:ceph.mon.b.smithi140.stdout:Mar 26 22:34:35 smithi140 bash[6841]: time="2020-03-26T22:34:35Z" level=error msg="unable to remove container de4f678811fa1b714c7ca8c97659d079945413f79566cbe400c40652c1d7b2d0 after failing to start and attach to it" 
2020-03-26T22:34:35.126 INFO:ceph.mon.b.smithi140.stdout:Mar 26 22:34:35 smithi140 bash[6841]: Error: container_linux.go:345: starting container process caused "exec: \"/usr/bin/ceph-mon\": stat /usr/bin/ceph-mon: no such file or directory" 
2020-03-26T22:34:35.126 INFO:ceph.mon.b.smithi140.stdout:Mar 26 22:34:35 smithi140 bash[6841]: : OCI runtime error
2020-03-26T22:34:35.140 INFO:ceph.mon.b.smithi140.stdout:Mar 26 22:34:35 smithi140 systemd[1]: ceph-7f76a098-6fb1-11ea-924a-001a4aab830c@mon.b.service: main process exited, code=exited, status=127/n/a
2020-03-26T22:34:35.360 INFO:ceph.mon.b.smithi140.stdout:Mar 26 22:34:35 smithi140 podman[7055]: Error: no container with name or ID ceph-7f76a098-6fb1-11ea-924a-001a4aab830c-mon.b found: no such container
2020-03-26T22:34:35.378 INFO:ceph.mon.b.smithi140.stdout:Mar 26 22:34:35 smithi140 systemd[1]: Unit ceph-7f76a098-6fb1-11ea-924a-001a4aab830c@mon.b.service entered failed state.
2020-03-26T22:34:35.379 INFO:ceph.mon.b.smithi140.stdout:Mar 26 22:34:35 smithi140 systemd[1]: ceph-7f76a098-6fb1-11ea-924a-001a4aab830c@mon.b.service failed.
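The "status=127/n/a" that systemd records for the unit is the standard POSIX shell convention for "command not found": when the entrypoint cannot be exec'd, the process exits 127, and systemd then marks the service failed. A small sketch of that convention (the path is a hypothetical stand-in for the missing binary):

```python
import subprocess

# Running a nonexistent binary through the shell yields exit status 127,
# matching the "status=127" systemd reports for the mon.b unit above.
result = subprocess.run(
    "/usr/bin/ceph-mon-example-not-installed --help",
    shell=True,
    capture_output=True,
)
assert result.returncode == 127
```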
Actions #5

Updated by Sebastian Wagner about 4 years ago

I don't yet understand why https://github.com/ceph/ceph/pull/34260 fixes this issue.

Actions #6

Updated by Sebastian Wagner about 4 years ago

  • Status changed from Fix Under Review to Resolved
Actions #7

Updated by Sebastian Wagner about 4 years ago

  • Related to Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory added
