Bug #49633


podman: ERROR (catatonit:2): failed to exec pid1: No such file or directory

Added by Sage Weil about 3 years ago. Updated over 2 years ago.

Status:
Can't reproduce
Priority:
High
Assignee:
-
Category:
container tools
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2021-03-05T19:13:07.167 DEBUG:teuthology.orchestra.run.gibba012:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c49cb47f175a7b8d6bcdff99310657b790f3a296 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7d0fc8f4-7de6-11eb-9069-001a4aab830c -- ceph orch host ls --format=json
2021-03-05T19:13:09.214 INFO:teuthology.orchestra.run.gibba012.stderr:ERROR (catatonit:2): failed to exec pid1: No such file or directory
2021-03-05T19:13:20.114 INFO:teuthology.orchestra.run.gibba012.stderr:time="2021-03-05T19:13:20Z" level=error msg="Error removing container 3ca1bf480a5cf245a910c124d1ef28a649d0f526c1eda62bb523ecbe81c9dd98: error removing container 3ca1bf480a5cf245a910c124d1ef28a649d0f526c1eda62bb523ecbe81c9dd98 root filesystem: 1 error occurred:\n\t* unlinkat /var/lib/containers/storage/overlay/7952ec5564c7afb9d747e235bf814c4f347e7c065652e7443531f07d735514c7/merged: device or resource busy\n\n" 

/a/sage-2021-03-05_17:57:02-rados:cephadm-wip-sage-testing-2021-03-05-0948-distro-basic-gibba/5939785
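
For reference: catatonit is the minimal init binary that podman injects as PID 1 when a container is run with --init, and the "failed to exec pid1" message is what it prints when it cannot exec the container's actual command. The failure shape can be approximated by hand with a deliberately bogus entrypoint; this only illustrates the message, it is not necessarily the root cause of this run (image and path are arbitrary):

  # any small image will do; /does-not-exist is intentionally missing
  podman run --rm --init --entrypoint /does-not-exist docker.io/library/alpine:latest
  # expected stderr along the lines of:
  #   ERROR (catatonit:2): failed to exec pid1: No such file or directory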

Related issues 4 (0 open, 4 closed)

Related to Orchestrator - Bug #49739: `ceph` not found in $PATH: No such file or directory: OCI not found (Can't reproduce)

Related to Orchestrator - Bug #49740: cephadm: error looking up supplemental groups for container ca2...23: Unable to find group disk (Rejected)

Related to Orchestrator - Bug #49742: Mirror was added, but still "toomanyrequests: You have reached your pull rate limit" (Can't reproduce)

Related to Orchestrator - Bug #49909: CNI network "podman" not found (Resolved, Kefu Chai)

Actions #1

Updated by Sage Weil about 3 years ago

/a/sage-2021-03-05_17:57:02-rados:cephadm-wip-sage-testing-2021-03-05-0948-distro-basic-gibba/5939780

distro/ubuntu_20.04_kubic_stable

the original was distro/ubuntu_20.04_kubic_testing
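
(The kubic_stable/kubic_testing facets select which openSUSE Kubic "devel:kubic:libcontainers" repository the podman packages come from. Roughly, the stable variant was enabled on Ubuntu 20.04 along these lines at the time; the exact repo and key URLs are assumptions, not taken from the teuthology logs:

  . /etc/os-release    # VERSION_ID=20.04
  echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /" \
    | sudo tee /etc/apt/sources.list.d/kubic-libcontainers.list
  curl -fsSL "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/Release.key" \
    | sudo apt-key add -
  sudo apt-get update && sudo apt-get install -y podman

The testing facet points at the corresponding :testing repository, i.e. newer, less-vetted podman builds.)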

Actions #6

Updated by Sebastian Wagner about 3 years ago

  • Related to Bug #49739: `ceph` not found in $PATH: No such file or directory: OCI not found added
Actions #7

Updated by Sebastian Wagner about 3 years ago

  • Related to Bug #49740: cephadm: error looking up supplemental groups for container ca2...23: Unable to find group disk added
Actions #8

Updated by Sebastian Wagner about 3 years ago

  • Related to Bug #49742: Mirror was added, but still "toomanyrequests: You have reached your pull rate limit" added
Actions #10

Updated by Sage Weil about 3 years ago

  • Related to Bug #49909: CNI network "podman" not found added
Actions #11

Updated by Kefu Chai about 3 years ago

/a/kchai-2021-03-26_05:32:58-rados-wip-kefu-testing-2021-03-26-1134-distro-basic-smithi/6001117

0-distro/ubuntu_20.04_kubic_stable

Actions #12

Updated by Kefu Chai over 2 years ago

2021-07-28T14:56:44.028 INFO:journalctl@ceph.mon.a.smithi111.stdout:Jul 28 14:56:43 smithi111 ceph-mon[32115]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2021-07-28T14:56:45.480 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]: 2021-07-28T14:56:45.424+0000 7f1f406f7700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:Non-zero exit code 1 from /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:d9f746eab4635ac06bea739dda0f0383ec5119684f991667cac8c392cdfcffb5 -e NODE_NAME=smithi118 -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/9a763628-efb3-11eb-8c24-001a4aab830c:/var/log/ceph:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/9a763628-efb3-11eb-8c24-001a4aab830c/selinux:/sys/fs/selinux:ro -v /tmp/ceph-tmpzn1tvkw1:/etc/ceph/ceph.conf:z quay.ceph.io/ceph-ci/ceph@sha256:d9f746eab4635ac06bea739dda0f0383ec5119684f991667cac8c392cdfcffb5 inventory --format=json --filter-for-batch
2021-07-28T14:56:45.481 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]: /bin/podman: stderr ERROR (catatonit:2): failed to exec pid1: No such file or directory
2021-07-28T14:56:45.481 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]: Traceback (most recent call last):
2021-07-28T14:56:45.481 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:   File "/var/lib/ceph/9a763628-efb3-11eb-8c24-001a4aab830c/cephadm.d0ec491600c336b05ecbe5665c869bdddc9dea470e859d512c4d4608724c4459", line 8304, in <module>
2021-07-28T14:56:45.481 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:     main()
2021-07-28T14:56:45.482 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:   File "/var/lib/ceph/9a763628-efb3-11eb-8c24-001a4aab830c/cephadm.d0ec491600c336b05ecbe5665c869bdddc9dea470e859d512c4d4608724c4459", line 8292, in main
2021-07-28T14:56:45.482 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:     r = ctx.func(ctx)
2021-07-28T14:56:45.482 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:   File "/var/lib/ceph/9a763628-efb3-11eb-8c24-001a4aab830c/cephadm.d0ec491600c336b05ecbe5665c869bdddc9dea470e859d512c4d4608724c4459", line 1714, in _infer_config
2021-07-28T14:56:45.482 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:     return func(ctx)
2021-07-28T14:56:45.483 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:   File "/var/lib/ceph/9a763628-efb3-11eb-8c24-001a4aab830c/cephadm.d0ec491600c336b05ecbe5665c869bdddc9dea470e859d512c4d4608724c4459", line 1655, in _infer_fsid
2021-07-28T14:56:45.483 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:     return func(ctx)
2021-07-28T14:56:45.483 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:   File "/var/lib/ceph/9a763628-efb3-11eb-8c24-001a4aab830c/cephadm.d0ec491600c336b05ecbe5665c869bdddc9dea470e859d512c4d4608724c4459", line 1742, in _infer_image
2021-07-28T14:56:45.483 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:     return func(ctx)
2021-07-28T14:56:45.483 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:   File "/var/lib/ceph/9a763628-efb3-11eb-8c24-001a4aab830c/cephadm.d0ec491600c336b05ecbe5665c869bdddc9dea470e859d512c4d4608724c4459", line 4662, in command_ceph_volume
2021-07-28T14:56:45.484 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:     out, err, code = call_throws(ctx, c.run_cmd())
2021-07-28T14:56:45.484 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:   File "/var/lib/ceph/9a763628-efb3-11eb-8c24-001a4aab830c/cephadm.d0ec491600c336b05ecbe5665c869bdddc9dea470e859d512c4d4608724c4459", line 1454, in call_throws
2021-07-28T14:56:45.484 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:     raise RuntimeError('Failed command: %s' % ' '.join(command))
2021-07-28T14:56:45.484 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]: RuntimeError: Failed command: /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:d9f746eab4635ac06bea739dda0f0383ec5119684f991667cac8c392cdfcffb5 -e NODE_NAME=smithi118 -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/9a763628-efb3-11eb-8c24-001a4aab830c:/var/log/ceph:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/9a763628-efb3-11eb-8c24-001a4aab830c/selinux:/sys/fs/selinux:ro -v /tmp/ceph-tmpzn1tvkw1:/etc/ceph/ceph.conf:z quay.ceph.io/ceph-ci/ceph@sha256:d9f746eab4635ac06bea739dda0f0383ec5119684f991667cac8c392cdfcffb5 inventory --format=json --filter-for-batch
2021-07-28T14:56:45.485 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]: Traceback (most recent call last):
2021-07-28T14:56:45.485 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:   File "/usr/share/ceph/mgr/cephadm/serve.py", line 1353, in _remote_connection
2021-07-28T14:56:45.485 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:     yield (conn, connr)
2021-07-28T14:56:45.485 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:   File "/usr/share/ceph/mgr/cephadm/serve.py", line 1250, in _run_cephadm
2021-07-28T14:56:45.486 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]:     code, '\n'.join(err)))
2021-07-28T14:56:45.486 INFO:journalctl@ceph.mgr.a.smithi111.stdout:Jul 28 14:56:45 smithi111 conmon[32431]: orchestrator._interface.OrchestratorError: cephadm exited with an error code: 1, stderr:Non-zero exit code 1 from /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:d9f746eab4635ac06bea739dda0f0383ec5119684f991667cac8c392cdfcffb5 -e NODE_NAME=smithi118 -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/9a763628-efb3-11eb-8c24-001a4aab830c:/var/log/ceph:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/9a763628-efb3-11eb-8c24-001a4aab830c/selinux:/sys/fs/selinux:ro -v /tmp/ceph-tmpzn1tvkw1:/etc/ceph/ceph.conf:z quay.ceph.io/ceph-ci/ceph@sha256:d9f746eab4635ac06bea739dda0f0383ec5119684f991667cac8c392cdfcffb5 inventory --format=json --filter-for-batch

/a/kchai-2021-07-28_14:31:31-rados-wip-kefu-testing-2021-07-28-1908-distro-basic-smithi/6298540
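
A quick way to tell a podman/--init problem apart from a cephadm one on an affected node (smithi118 here) is to run a trivial container with and without --init by hand. This is only a suggested triage step, not something captured in this run:

  podman --version
  podman run --rm docker.io/library/alpine:latest true          # baseline, no init
  podman run --rm --init docker.io/library/alpine:latest true   # if only this fails with the catatonit error, suspect podman/--init rather than the ceph image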

Actions #13

Updated by Sebastian Wagner over 2 years ago

  • Status changed from New to Can't reproduce

I hope this is gone now that we are using container-tools.
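
(Presumably this refers to the qa facets that install podman from the CentOS/RHEL container-tools module instead of the Kubic repos, i.e. something along the lines of the following; the stream version is an assumption:

  sudo dnf module enable -y container-tools:3.0
  sudo dnf install -y podman)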

