Bug #61393

qa: cephadm command failed

Added by Kotresh Hiremath Ravishankar 9 months ago. Updated 9 months ago.

Status: Duplicate
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Description: fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{legacy} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{yes}}

Log: http://qa-proxy.ceph.com/teuthology/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-05-18-0815-quincy-distro-default-smithi/7283971/teuthology.log

Sentry event: https://sentry.ceph.com/organizations/ceph/?query=4c73cbf252ec4312a2588b9d1b8c9c60

Job Link: https://pulpito.ceph.com/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-05-18-0815-quincy-distro-default-smithi/7283971/

Failure Reason:
Command failed on smithi038 with status 5: 'sudo systemctl stop '

2023-05-23T23:07:43.363 INFO:teuthology.orchestra.run.smithi169.stderr:Inferring config /var/lib/ceph/4e5a9eca-f9be-11ed-9b1b-001a4aab830c/config/ceph.conf
2023-05-23T23:07:44.284 INFO:teuthology.orchestra.run.smithi169.stderr:Non-zero exit code 127 from /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint stat --init -e CONTAINER_IMAGE=quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e4b3fdd44e85f3660c3e61a9270adc76561da83e -e NODE_NAME=smithi169 -e CEPH_USE_RANDOM_NONCE=1 quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e4b3fdd44e85f3660c3e61a9270adc76561da83e -c %u %g /var/lib/ceph
2023-05-23T23:07:44.284 INFO:teuthology.orchestra.run.smithi169.stderr:stat: stderr Error: OCI runtime error: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: Unit libpod-31e68d5bfc7eba63b1f568d4a44ed710ba16de7bbeded71799e41ed9369bcbc6.scope not found.
2023-05-23T23:07:44.284 INFO:teuthology.orchestra.run.smithi169.stderr:ERROR: Failed to extract uid/gid for path /var/lib/ceph: Failed command: /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint stat --init -e CONTAINER_IMAGE=quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e4b3fdd44e85f3660c3e61a9270adc76561da83e -e NODE_NAME=smithi169 -e CEPH_USE_RANDOM_NONCE=1 quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e4b3fdd44e85f3660c3e61a9270adc76561da83e -c %u %g /var/lib/ceph: Error: OCI runtime error: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: Unit libpod-31e68d5bfc7eba63b1f568d4a44ed710ba16de7bbeded71799e41ed9369bcbc6.scope not found.
2023-05-23T23:07:44.285 INFO:teuthology.orchestra.run.smithi169.stderr:
2023-05-23T23:07:44.297 INFO:journalctl@ceph.mon.a.smithi018.stdout:May 23 23:07:43 smithi018 ceph-mon[83248]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2023-05-23T23:07:44.310 DEBUG:teuthology.orchestra.run:got remote process result: 1
2023-05-23T23:07:44.311 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_teuthology_6fb4f0c910de3902d27909a642e8d50d731dd28c/teuthology/contextutil.py", line 31, in nested
    vars.append(enter())
  File "/usr/lib/python3.8/contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_e4b3fdd44e85f3660c3e61a9270adc76561da83e/qa/tasks/cephadm.py", line 671, in ceph_mons
    r = _shell(
  File "/home/teuthworker/src/github.com_ceph_ceph-c_e4b3fdd44e85f3660c3e61a9270adc76561da83e/qa/tasks/cephadm.py", line 37, in _shell
    return remote.run(
  File "/home/teuthworker/src/git.ceph.com_teuthology_6fb4f0c910de3902d27909a642e8d50d731dd28c/teuthology/orchestra/remote.py", line 525, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_teuthology_6fb4f0c910de3902d27909a642e8d50d731dd28c/teuthology/orchestra/run.py", line 455, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_teuthology_6fb4f0c910de3902d27909a642e8d50d731dd28c/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_teuthology_6fb4f0c910de3902d27909a642e8d50d731dd28c/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on smithi169 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e4b3fdd44e85f3660c3e61a9270adc76561da83e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4e5a9eca-f9be-11ed-9b1b-001a4aab830c -- ceph mon dump -f json'
2023-05-23T23:07:44.311 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
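For context on the log above: cephadm determines the owner of /var/lib/ceph by running `stat -c '%u %g'` inside the container (the `podman run ... --entrypoint stat` command shown) and parsing its stdout. Here the container never starts (the libpod scope unit vanishes while setting cgroup config), so stat produces no output and the uid/gid extraction fails. A minimal sketch of that parsing step; the function name and error handling are illustrative, not cephadm's actual code:

```python
# Sketch: parsing the output of a uid/gid probe like
#   podman run --rm --entrypoint stat <image> -c '%u %g' /var/lib/ceph
# Helper name and exception type are illustrative, not cephadm's real API.

def parse_uid_gid(stat_output: str) -> tuple:
    """Parse stat -c '%u %g' output, e.g. "167 167", into (uid, gid)."""
    fields = stat_output.split()
    if len(fields) != 2:
        # Mirrors the failure mode in the log: when the container fails
        # to start, stdout is empty and extraction must fail loudly.
        raise RuntimeError('Failed to extract uid/gid: %r' % stat_output)
    return int(fields[0]), int(fields[1])
```

For example, `parse_uid_gid('167 167')` yields `(167, 167)` (the ceph uid/gid in the container image), while an empty string, as in this failure, raises immediately rather than letting a bogus owner through.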


Related issues

Duplicates Orchestrator - Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found New

History

#1 Updated by Laura Flores 9 months ago

  • Status changed from New to Duplicate

Looks like a dupe of tracker #49287.

#2 Updated by Laura Flores 9 months ago

  • Duplicates Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found added
