Bug #47336 (closed)

`orch device ls`: Unexpected argument '--wide'

Added by Sebastian Wagner over 3 years ago. Updated over 3 years ago.

Status: Can't reproduce
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Q/A
Regression: No
Severity: 3 - minor

Description

https://pulpito.ceph.com/teuthology-2020-09-03_07:01:02-rados-master-distro-basic-smithi/

2020-09-06T13:05:01.941 INFO:teuthology.orchestra.run.smithi187:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early orch device ls --wide
2020-09-06T13:05:02.197 INFO:journalctl@ceph.osd.0.smithi187.stdout:Sep 06 13:05:02 smithi187 podman[55616]: 2020-09-06 13:05:02.001339685 +0000 UTC m=+0.747795535 container died 10e3f81f605e50e0f4a16983e07a3902e4e81ecd873f8fe9498ad5c65102be68 (image=quay.ceph.io/ceph-ci/ceph:4e4c926faf627bcfcf316bfbde0da6544658fad4, name=ceph-1e94ddde-f040-11ea-a080-001a4aab830c-osd.0-deactivate)
2020-09-06T13:05:02.262 INFO:teuthology.orchestra.run.smithi187.stderr:Invalid command: Unexpected argument '--wide'
2020-09-06T13:05:02.263 INFO:teuthology.orchestra.run.smithi187.stderr:orch device ls [<hostname>...] [plain|json|json-pretty|yaml] [--refresh] :  List devices on a host
2020-09-06T13:05:02.263 INFO:teuthology.orchestra.run.smithi187.stderr:Error EINVAL: invalid command
2020-09-06T13:05:02.265 DEBUG:teuthology.orchestra.run:got remote process result: 22
2020-09-06T13:05:02.266 INFO:teuthology.orchestra.run.smithi187:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early log 'Ended test tasks.cephadm_cases.test_cli.TestCephadmCLI.test_device_ls_wide'
2020-09-06T13:05:02.932 INFO:journalctl@ceph.osd.0.smithi187.stdout:Sep 06 13:05:02 smithi187 podman[55616]: 2020-09-06 13:05:02.657678294 +0000 UTC m=+1.404134128 container remove 10e3f81f605e50e0f4a16983e07a3902e4e81ecd873f8fe9498ad5c65102be68 (image=quay.ceph.io/ceph-ci/ceph:4e4c926faf627bcfcf316bfbde0da6544658fad4, name=ceph-1e94ddde-f040-11ea-a080-001a4aab830c-osd.0-deactivate)
2020-09-06T13:05:02.932 INFO:journalctl@ceph.osd.0.smithi187.stdout:Sep 06 13:05:02 smithi187 systemd[1]: Stopped Ceph osd.0 for 1e94ddde-f040-11ea-a080-001a4aab830c.
2020-09-06T13:05:02.933 INFO:journalctl@ceph.osd.0.smithi187.stdout:Sep 06 13:05:02 smithi187 systemd[1]: Starting Ceph osd.0 for 1e94ddde-f040-11ea-a080-001a4aab830c...
2020-09-06T13:05:02.933 INFO:journalctl@ceph.osd.0.smithi187.stdout:Sep 06 13:05:02 smithi187 podman[55771]: Error: no container with name or ID ceph-1e94ddde-f040-11ea-a080-001a4aab830c-osd.0 found: no such container
2020-09-06T13:05:02.933 INFO:journalctl@ceph.osd.0.smithi187.stdout:Sep 06 13:05:02 smithi187 bash[55786]: Error: Failed to evict container: "": Failed to find container "ceph-1e94ddde-f040-11ea-a080-001a4aab830c-osd.0-activate" in state: no container with name or ID ceph-1e94ddde-f040-11ea-a080-001a4aab830c-osd.0-activate found: no such container
2020-09-06T13:05:02.933 INFO:journalctl@ceph.osd.0.smithi187.stdout:Sep 06 13:05:02 smithi187 bash[55786]: ceph-1e94ddde-f040-11ea-a080-001a4aab830c-osd.0-activate
2020-09-06T13:05:02.934 INFO:journalctl@ceph.osd.0.smithi187.stdout:Sep 06 13:05:02 smithi187 bash[55786]: Error: no container with ID or name "ceph-1e94ddde-f040-11ea-a080-001a4aab830c-osd.0-activate" found: no such container
2020-09-06T13:05:02.934 INFO:tasks.cephfs_test_runner:test_device_ls_wide (tasks.cephadm_cases.test_cli.TestCephadmCLI) ... ERROR
2020-09-06T13:05:02.935 INFO:tasks.cephfs_test_runner:
2020-09-06T13:05:02.936 INFO:tasks.cephfs_test_runner:======================================================================
2020-09-06T13:05:02.936 INFO:tasks.cephfs_test_runner:ERROR: test_device_ls_wide (tasks.cephadm_cases.test_cli.TestCephadmCLI)
2020-09-06T13:05:02.936 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-09-06T13:05:02.937 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2020-09-06T13:05:02.937 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph_master/qa/tasks/cephadm_cases/test_cli.py", line 55, in test_device_ls_wide
2020-09-06T13:05:02.937 INFO:tasks.cephfs_test_runner:    self._orch_cmd('device', 'ls', '--wide')
2020-09-06T13:05:02.938 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph_master/qa/tasks/cephadm_cases/test_cli.py", line 13, in _orch_cmd
2020-09-06T13:05:02.938 INFO:tasks.cephfs_test_runner:    return self._cmd("orch", *args)
2020-09-06T13:05:02.938 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph_master/qa/tasks/cephadm_cases/test_cli.py", line 10, in _cmd
2020-09-06T13:05:02.939 INFO:tasks.cephfs_test_runner:    return self.mgr_cluster.mon_manager.raw_cluster_cmd(*args)
2020-09-06T13:05:02.939 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph_master/qa/tasks/ceph_manager.py", line 1357, in raw_cluster_cmd
2020-09-06T13:05:02.939 INFO:tasks.cephfs_test_runner:    'stdout': StringIO()}).stdout.getvalue()
2020-09-06T13:05:02.940 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph_master/qa/tasks/ceph_manager.py", line 1350, in run_cluster_cmd
2020-09-06T13:05:02.940 INFO:tasks.cephfs_test_runner:    return self.controller.run(**kwargs)
2020-09-06T13:05:02.940 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 213, in run
2020-09-06T13:05:02.940 INFO:tasks.cephfs_test_runner:    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2020-09-06T13:05:02.941 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 446, in run
2020-09-06T13:05:02.941 INFO:tasks.cephfs_test_runner:    r.wait()
2020-09-06T13:05:02.941 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 160, in wait
2020-09-06T13:05:02.942 INFO:tasks.cephfs_test_runner:    self._raise_for_status()
2020-09-06T13:05:02.942 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 182, in _raise_for_status
2020-09-06T13:05:02.942 INFO:tasks.cephfs_test_runner:    node=self.hostname, label=self.label
2020-09-06T13:05:02.943 INFO:tasks.cephfs_test_runner:teuthology.exceptions.CommandFailedError: Command failed on smithi187 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early orch device ls --wide'
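
For anyone trying to reproduce this outside of teuthology, the failure boils down to the mgr rejecting the `--wide` flag with EINVAL (exit status 22, matching "got remote process result: 22" above). A minimal stand-alone check, assuming the `ceph` CLI is on PATH and an admin keyring is available; the script below is illustrative and not part of the QA suite:

import subprocess

# Illustrative reproduction of the failing QA step: run the same command the
# test issued and inspect the exit status. Exit status 22 (EINVAL) corresponds
# to the "Invalid command: Unexpected argument '--wide'" error in the log.
proc = subprocess.run(
    ["ceph", "orch", "device", "ls", "--wide"],
    capture_output=True,
    text=True,
)

if proc.returncode == 22:
    print("mgr rejected --wide:", proc.stderr.strip())
elif proc.returncode == 0:
    print("mgr accepted --wide:")
    print(proc.stdout)
else:
    print("unexpected exit status:", proc.returncode, proc.stderr.strip())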


Related issues 1 (0 open, 1 closed)

Related to teuthology - Bug #48671: suite_sha1 and workunit:sha1 can get out of sync with sha1 (status: Resolved, assignee: Brad Hubbard)

#1 Updated by Sebastian Wagner over 3 years ago

Interesting: `master` actually has `--wide`!

orch device ls [<hostname>...] [plain|json|json-pretty|yaml] [--refresh] [--wide] :  List devices on a host
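
To see which signature a running mgr actually advertises, one can ask the CLI for help on the command prefix and look for the flag. A sketch, assuming `ceph orch device ls -h` prints the matching command signatures (in the same format as the error output above):

import subprocess

# Illustrative check suggested by the note above: fetch the advertised
# signature for "orch device ls" and see whether --wide is listed. Depending
# on the ceph version the help text may land on stdout or stderr, so both
# streams are searched.
proc = subprocess.run(
    ["ceph", "orch", "device", "ls", "-h"],
    capture_output=True,
    text=True,
)
signature = proc.stdout + proc.stderr

if "--wide" in signature:
    print("the running mgr advertises --wide")
else:
    print("the running mgr does not advertise --wide (likely an older build)")

If the flag is missing there while `master` has it, the cluster under test was built from an older sha than the QA code, which would be consistent with the sha mismatch tracked in the related teuthology issue below.
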
#2 Updated by Sebastian Wagner over 3 years ago

  • Status changed from New to Can't reproduce
#3 Updated by Sebastian Wagner over 3 years ago

  • Related to Bug #48671: suite_sha1 and workunit:sha1 can get out of sync with sha1 added