Bug #46247

cephadm mon failure: Error: no container with name or ID ... no such container

Added by Deepika Upadhyay over 3 years ago. Updated about 3 years ago.

Status:
Can't reproduce
Priority:
Normal
Assignee:
-
Category:
teuthology
Target version:
-
% Done:

0%

Source:
Tags:
cephadm
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2020-06-26T11:10:29.521 INFO:teuthology.orchestra.run.smithi025.stderr:DEBUG:cephadm:Running command: systemctl restart ceph-7ffc714e-b79d-11ea-a06d-001a4aab830c@mon.a
2020-06-26T11:10:29.548 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:29 smithi025 systemd[1]: Stopping Ceph mon.a for 7ffc714e-b79d-11ea-a06d-001a4aab830c...
2020-06-26T11:10:29.636 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:29 smithi025 bash[7576]: debug 2020-06-26T11:10:29.631+0000 7f820e316700 -1 received  signal: Terminated from Kernel ( Could be generated by pthread_kill(), raise(), abort(), alarm() ) UID: 0
2020-06-26T11:10:29.636 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:29 smithi025 bash[7576]: debug 2020-06-26T11:10:29.631+0000 7f820e316700 -1 mon.a@0(leader) e1 *** Got Signal Terminated ***
2020-06-26T11:10:29.792 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:29 smithi025 podman[7995]: 2020-06-26 11:10:29.787309308 +0000 UTC m=+0.212121236 container died c9f96d4c232aeafe6ce153e565d58ad8f83f62fb443e9f827bfe97866f4d25fa (image=quay.ceph.io/ceph-ci/ceph:3a961532ad4058431d46b1d6573ce0aff5065bea, name=ceph-7ffc714e-b79d-11ea-a06d-001a4aab830c-mon.a)
2020-06-26T11:10:29.815 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:29 smithi025 podman[7995]: 2020-06-26 11:10:29.811686977 +0000 UTC m=+0.236498823 container stop c9f96d4c232aeafe6ce153e565d58ad8f83f62fb443e9f827bfe97866f4d25fa (image=quay.ceph.io/ceph-ci/ceph:3a961532ad4058431d46b1d6573ce0aff5065bea, name=ceph-7ffc714e-b79d-11ea-a06d-001a4aab830c-mon.a)
2020-06-26T11:10:29.815 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:29 smithi025 podman[7995]: c9f96d4c232aeafe6ce153e565d58ad8f83f62fb443e9f827bfe97866f4d25fa
2020-06-26T11:10:30.234 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:30 smithi025 systemd[1]: Stopped Ceph mon.a for 7ffc714e-b79d-11ea-a06d-001a4aab830c.
2020-06-26T11:10:30.234 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:30 smithi025 systemd[1]: Starting Ceph mon.a for 7ffc714e-b79d-11ea-a06d-001a4aab830c...
2020-06-26T11:10:30.341 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:30 smithi025 podman[8048]: Error: no container with name or ID ceph-7ffc714e-b79d-11ea-a06d-001a4aab830c-mon.a found: no such container
2020-06-26T11:10:30.346 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:30 smithi025 systemd[1]: Started Ceph mon.a for 7ffc714e-b79d-11ea-a06d-001a4aab830c.
2020-06-26T11:10:30.348 INFO:teuthology.orchestra.run.smithi025.stderr:DEBUG:cephadm:systemctl:profile rt=0.8247237205505371, stop=False, exit=0, reads=[8, 10]
2020-06-26T11:10:30.348 INFO:teuthology.orchestra.run.smithi025.stderr:DEBUG:cephadm:systemctl:profile rt=0.8269872665405273, stop=True, exit=0, reads=[8, 10]
2020-06-26T11:10:30.349 INFO:teuthology.orchestra.run.smithi025.stderr:INFO:cephadm:Setting mon public_network...
2020-06-26T11:10:30.349 INFO:teuthology.orchestra.run.smithi025.stderr:DEBUG:cephadm:['/bin/podman', 'run', '--rm', '--net=host', '--ipc=host', '-e', 'CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph:3a961532ad4058431d46b1d6573ce0aff5065bea', '-e', 'NODE_NAME=smithi025', '-v', '/var/log/ceph/7ffc714e-b79d-11ea-a06d-001a4aab830c:/var/log/ceph:z', '-v', '/tmp/ceph-tmp86ha_v3f:/etc/ceph/ceph.client.admin.keyring:z', '-v', '/tmp/ceph-tmprug__cu5:/etc/ceph/ceph.conf:z', '--entrypoint', '/usr/bin/ceph', 'quay.ceph.io/ceph-ci/ceph:3a961532ad4058431d46b1d6573ce0aff5065bea', 'config', 'set', 'mon', 'public_network', '172.21.0.0/20']
2020-06-26T11:10:30.349 INFO:teuthology.orchestra.run.smithi025.stderr:DEBUG:cephadm:Running command: /bin/podman run --rm --net=host --ipc=host -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph:3a961532ad4058431d46b1d6573ce0aff5065bea -e NODE_NAME=smithi025 -v /var/log/ceph/7ffc714e-b79d-11ea-a06d-001a4aab830c:/var/log/ceph:z -v /tmp/ceph-tmp86ha_v3f:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmprug__cu5:/etc/ceph/ceph.conf:z --entrypoint /usr/bin/ceph quay.ceph.io/ceph-ci/ceph:3a961532ad4058431d46b1d6573ce0aff5065bea config set mon public_network 172.21.0.0/20
2020-06-26T11:10:30.642 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:30 smithi025 bash[8066]: Error: no container with name or ID ceph-7ffc714e-b79d-11ea-a06d-001a4aab830c-mon.a found: no such container
2020-06-26T11:10:30.703 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:30 smithi025 bash[8066]: ceph-7ffc714e-b79d-11ea-a06d-001a4aab830c-mon.a
2020-06-26T11:10:30.704 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:30 smithi025 bash[8066]: Error: no container with ID or name "ceph-7ffc714e-b79d-11ea-a06d-001a4aab830c-mon.a" found: no such container
2020-06-26T11:10:30.974 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:30 smithi025 podman[8120]: 2020-06-26 11:10:30.969826069 +0000 UTC m=+0.248229724 container create 8a3a8898ee80ca40edcd1513d4f3eccc75c0a744c7665b863823d143569877a5 (image=quay.ceph.io/ceph-ci/ceph:3a961532ad4058431d46b1d6573ce0aff5065bea, name=ceph-7ffc714e-b79d-11ea-a06d-001a4aab830c-mon.a)
2020-06-26T11:10:31.156 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:31 smithi025 podman[8120]: 2020-06-26 11:10:31.153229652 +0000 UTC m=+0.431633288 container init 8a3a8898ee80ca40edcd1513d4f3eccc75c0a744c7665b863823d143569877a5 (image=quay.ceph.io/ceph-ci/ceph:3a961532ad4058431d46b1d6573ce0aff5065bea, name=ceph-7ffc714e-b79d-11ea-a06d-001a4aab830c-mon.a)
2020-06-26T11:10:31.189 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:31 smithi025 podman[8120]: 2020-06-26 11:10:31.18650088 +0000 UTC m=+0.464904504 container start 8a3a8898ee80ca40edcd1513d4f3eccc75c0a744c7665b863823d143569877a5 (image=quay.ceph.io/ceph-ci/ceph:3a961532ad4058431d46b1d6573ce0aff5065bea, name=ceph-7ffc714e-b79d-11ea-a06d-001a4aab830c-mon.a)
2020-06-26T11:10:31.189 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:31 smithi025 podman[8120]: 2020-06-26 11:10:31.186638798 +0000 UTC m=+0.465042501 container attach 8a3a8898ee80ca40edcd1513d4f3eccc75c0a744c7665b863823d143569877a5 (image=quay.ceph.io/ceph-ci/ceph:3a961532ad4058431d46b1d6573ce0aff5065bea, name=ceph-7ffc714e-b79d-11ea-a06d-001a4aab830c-mon.a)
2020-06-26T11:10:31.191 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:31 smithi025 bash[8066]: debug 2020-06-26T11:10:31.187+0000 7fb83c372700  0 set uid:gid to 167:167 (ceph:ceph)
2020-06-26T11:10:31.191 INFO:ceph.mon.a.smithi025.stdout:Jun 26 11:10:31 smithi025 bash[8066]: debug 2020-06-26T11:10:31.187+0000 7fb83c372700  0 ceph version 16.0.0-2924-g3a96153 (3a961532ad4058431d46b1d6573ce0aff5065bea) pacific (dev), process ceph-mon, pid 1

Related issues

Related to Orchestrator - Bug #45454: cephadm: teardown: hang at sudo systemctl stop ceph-453d3962-9141-11ea-a068-001a4aab830c@mgr.x Can't reproduce
Related to Orchestrator - Bug #45420: cephadmunit.py: teuthology.exceptions.CommandFailedError: Command failed on smithi094 with status 125: 'sudo docker kill -s 1 ceph-d8648236-8cc8-11ea-a068-001a4aab830c-osd.1' Can't reproduce

History

#2 Updated by Deepika Upadhyay over 3 years ago

  • Related to Bug #45454: cephadm: teardown: hang at sudo systemctl stop ceph-453d3962-9141-11ea-a068-001a4aab830c@mgr.x added

#3 Updated by Deepika Upadhyay over 3 years ago

  • Related to Bug #45420: cephadmunit.py: teuthology.exceptions.CommandFailedError: Command failed on smithi094 with status 125: 'sudo docker kill -s 1 ceph-d8648236-8cc8-11ea-a068-001a4aab830c-osd.1' added

#4 Updated by Sebastian Wagner over 3 years ago

  • Project changed from Ceph to Orchestrator
  • Category changed from teuthology to teuthology

#5 Updated by Sebastian Wagner over 3 years ago

The error itself is harmless and unrelated to the actual failure.
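
To illustrate why the message is benign: the systemd units that cephadm generates try to remove any stale container before starting a fresh one, and when no old container exists podman prints "no such container" while the unit carries on regardless. The fragment below is a simplified sketch of that pattern (the container name and paths are illustrative, not the exact generated unit, which wraps these calls in a unit.run script):

```
[Service]
# '-' prefix: systemd ignores a non-zero exit status, so a missing
# stale container only logs "no such container" and startup continues.
ExecStartPre=-/usr/bin/podman rm ceph-$FSID-mon.a
ExecStart=/usr/bin/podman run --rm --net=host --name ceph-$FSID-mon.a ...
```

This matches the log above: the "no such container" lines from bash[8066] are immediately followed by a successful container create/init/start of mon.a.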

#6 Updated by Deepika Upadhyay over 3 years ago

Yes, it looks like it; removing it to avoid confusion.

#7 Updated by Sebastian Wagner about 3 years ago

  • Status changed from New to Can't reproduce

This was fixed in the meantime.
