Bug #51109
/bin/podman ps --format {{.Names}} exit code 125
Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
pacific
Regression:
No
Severity:
3 - minor
Reviewed:
Description
2021-06-04T22:50:28.957+0000 7f55d272a700 0 [cephadm DEBUG cephadm.serve] code: 1
2021-06-04T22:50:28.957+0000 7f55d272a700 0 [cephadm DEBUG cephadm.serve] err: Non-zero exit code 125 from /bin/podman ps --format {{.Names}}
/bin/podman: stderr Error: container 250d08dd5b83e0dc07f95d29a1799a96c52ce14cdc7dd61e5ae6f4db560f884c does not exist in database: no such container
Traceback (most recent call last):
  File "/var/lib/ceph/1aae017c-c584-11eb-a087-a0423f47e972/cephadm.14ab71b12dc65e07009ce8e3249a710780f225056ba4bebdff6714c3bcf7a8ac", line 7934, in <module>
    main()
  File "/var/lib/ceph/1aae017c-c584-11eb-a087-a0423f47e972/cephadm.14ab71b12dc65e07009ce8e3249a710780f225056ba4bebdff6714c3bcf7a8ac", line 7922, in main
    r = ctx.func(ctx)
  File "/var/lib/ceph/1aae017c-c584-11eb-a087-a0423f47e972/cephadm.14ab71b12dc65e07009ce8e3249a710780f225056ba4bebdff6714c3bcf7a8ac", line 1727, in _default_image
    return func(ctx)
  File "/var/lib/ceph/1aae017c-c584-11eb-a087-a0423f47e972/cephadm.14ab71b12dc65e07009ce8e3249a710780f225056ba4bebdff6714c3bcf7a8ac", line 4147, in command_deploy
    if state == 'running' or is_container_running(ctx, container_name):
  File "/var/lib/ceph/1aae017c-c584-11eb-a087-a0423f47e972/cephadm.14ab71b12dc65e07009ce8e3249a710780f225056ba4bebdff6714c3bcf7a8ac", line 2033, in is_container_running
    '--format', '{{.Names}}'])
  File "/var/lib/ceph/1aae017c-c584-11eb-a087-a0423f47e972/cephadm.14ab71b12dc65e07009ce8e3249a710780f225056ba4bebdff6714c3bcf7a8ac", line 1421, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /bin/podman ps --format {{.Names}}
2021-06-04T22:50:28.958+0000 7f55d272a700 0 [cephadm ERROR cephadm.serve] cephadm exited with an error code: 1, stderr:Non-zero exit code 125 from /bin/podman ps --format {{.Names}}
This seems to happen sporadically... I suspect it is due to a race in the podman code during container shutdown? Normally this command succeeds.
This seems to occur when cephadm is creating lots of OSDs on the same host.
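As a sketch of one possible mitigation (not the actual cephadm code; the function and helper names below are hypothetical): since the traceback shows `is_container_running` raising via `call_throws` when `podman ps` exits 125 with a "no such container" error, a tolerant variant could recognize that specific transient failure and treat the container as not running instead of aborting the deploy.

```python
import subprocess

def ps_indicates_race(returncode: int, stderr: str) -> bool:
    """Return True when a `podman ps` failure matches the transient race
    seen in this ticket: exit code 125 plus a 'no such container' error,
    i.e. a container vanished between the listing and the DB lookup."""
    return returncode == 125 and 'no such container' in stderr

def is_container_running(container_name: str) -> bool:
    """Hypothetical tolerant variant of the check in the traceback above."""
    proc = subprocess.run(
        ['/bin/podman', 'ps', '--format', '{{.Names}}'],
        capture_output=True, text=True,
    )
    if proc.returncode != 0:
        if ps_indicates_race(proc.returncode, proc.stderr):
            # Container disappeared mid-listing; report it as not
            # running rather than raising like call_throws does.
            return False
        raise RuntimeError('Failed command: /bin/podman ps --format {{.Names}}')
    return container_name in proc.stdout.splitlines()
```

An alternative would be to retry the `podman ps` call a few times before giving up, which would also cover other short-lived failures during parallel OSD creation.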