Bug #54071


rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>)

Added by Laura Flores about 2 years ago. Updated 2 months ago.

Status:
In Progress
Priority:
High
Category:
cephadm/osd
Target version:
-
% Done:
0%

Source:
Tags:
pacific
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
rados
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Description: rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add}

/a/yuriw-2022-01-28_15:51:40-rados-wip-yuri2-testing-2022-01-21-0949-pacific-distro-default-smithi/6646486

2022-01-28T16:19:46.706 INFO:teuthology.orchestra.run.smithi064.stderr:+ ceph orch device zap --force
2022-01-28T16:19:47.025 INFO:teuthology.orchestra.run.smithi064.stderr:Invalid command: missing required parameter hostname(<string>)
2022-01-28T16:19:47.025 INFO:teuthology.orchestra.run.smithi064.stderr:orch device zap <hostname> <path> [--force] :  Zap (erase!) a device so it can be re-used
2022-01-28T16:19:47.026 INFO:teuthology.orchestra.run.smithi064.stderr:Error EINVAL: invalid command
2022-01-28T16:19:47.692 DEBUG:teuthology.orchestra.run:got remote process result: 22
2022-01-28T16:19:47.694 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_7c0cb8672986d9dbe53078a123af65593653ef7a/teuthology/run_tasks.py", line 91, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_7c0cb8672986d9dbe53078a123af65593653ef7a/teuthology/run_tasks.py", line 70, in run_one_task
    return task(**kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_656fd1b72f2e6d6132afd224e66dc1ee26b8d0c8/qa/tasks/cephadm.py", line 1054, in shell
    extra_cephadm_args=args)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_656fd1b72f2e6d6132afd224e66dc1ee26b8d0c8/qa/tasks/cephadm.py", line 46, in _shell
    **kwargs
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_7c0cb8672986d9dbe53078a123af65593653ef7a/teuthology/orchestra/remote.py", line 509, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_7c0cb8672986d9dbe53078a123af65593653ef7a/teuthology/orchestra/run.py", line 455, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_7c0cb8672986d9dbe53078a123af65593653ef7a/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_7c0cb8672986d9dbe53078a123af65593653ef7a/teuthology/orchestra/run.py", line 183, in _raise_for_status
    node=self.hostname, label=self.label
teuthology.exceptions.CommandFailedError: Command failed on smithi064 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:656fd1b72f2e6d6132afd224e66dc1ee26b8d0c8 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid db117102-8054-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''

Other occurrences:
/a/yuriw-2022-01-28_15:51:40-rados-wip-yuri2-testing-2022-01-21-0949-pacific-distro-default-smithi/6646495
/a/yuriw-2022-01-24_17:43:02-rados-wip-yuri2-testing-2022-01-21-0949-pacific-distro-default-smithi/6638266

The issue appears to be in the test script's call to `ceph orch device zap`: the `$HOST` and `$DEV` variables expand to empty strings, so the command runs as `ceph orch device zap --force` and fails validation because the required hostname parameter is missing.
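The failing expansion can be reproduced without a cluster. The sketch below is purely illustrative: the `ORCH_LS` string is a canned row mimicking the Pacific `ceph orch device ls` output shown later in this ticket, not real command output.

```shell
#!/bin/sh
# Full vendor-prefixed device ID, as reported by `ceph device ls`.
DEVID="Linux_6d2a90ca9ac29a26"
# One canned row of `ceph orch device ls` output as seen on Pacific,
# where the vendor prefix is missing from the DEVICE ID column.
ORCH_LS="smithi064  /dev/nvme1n1  ssd  _6d2a90ca9ac29a26  95.9G"
# grep finds no line containing the full DEVID, so both variables end up
# empty. Note the pipeline still exits 0 (awk runs last), so `set -e`
# does not stop the script here.
HOST=$(echo "$ORCH_LS" | grep "$DEVID" | awk '{print $1}')
DEV=$(echo "$ORCH_LS" | grep "$DEVID" | awk '{print $2}')
# The subsequent zap then expands with no positional arguments:
#   ceph orch device zap  --force
echo "ceph orch device zap $HOST $DEV --force"
```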


Related issues: 1 (0 open, 1 closed)

Has duplicate: Orchestrator - Bug #57247: [cephadm] Error response from daemon: No such container (Duplicate)

Actions #1

Updated by Adam King about 2 years ago

It looks like we're failing to grab the hostname for the command because the device IDs don't match what we're expecting. On Pacific:

2022-01-28T16:19:28.601 INFO:teuthology.orchestra.run.smithi064.stderr:+ ceph orch device ls
2022-01-28T16:19:28.927 INFO:teuthology.orchestra.run.smithi064.stdout:HOST       PATH          TYPE  DEVICE ID             SIZE  AVAILABLE  REJECT REASONS
2022-01-28T16:19:28.927 INFO:teuthology.orchestra.run.smithi064.stdout:smithi064  /dev/nvme0n1  ssd   _CVFT534400C2400BGN   400G             LVM detected, locked
2022-01-28T16:19:28.927 INFO:teuthology.orchestra.run.smithi064.stdout:smithi064  /dev/nvme1n1  ssd   _6d2a90ca9ac29a26    95.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
2022-01-28T16:19:28.928 INFO:teuthology.orchestra.run.smithi064.stdout:smithi064  /dev/nvme2n1  ssd   _4d97944cede2b8c9    95.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
2022-01-28T16:19:28.928 INFO:teuthology.orchestra.run.smithi064.stdout:smithi064  /dev/nvme3n1  ssd   _c4c890ca7f4c7097    95.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
2022-01-28T16:19:28.928 INFO:teuthology.orchestra.run.smithi064.stdout:smithi064  /dev/nvme4n1  ssd   _7ecc36600692ab9c    95.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
2022-01-28T16:19:28.928 INFO:teuthology.orchestra.run.smithi064.stdout:smithi103  /dev/nvme0n1  ssd   _PHFT6204012Z400BGN   400G             LVM detected, locked
2022-01-28T16:19:28.929 INFO:teuthology.orchestra.run.smithi064.stdout:smithi103  /dev/nvme1n1  ssd   _96e3dafc72eec49d    95.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
2022-01-28T16:19:28.929 INFO:teuthology.orchestra.run.smithi064.stdout:smithi103  /dev/nvme2n1  ssd   _7b2b28a31bbf1f97    95.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
2022-01-28T16:19:28.929 INFO:teuthology.orchestra.run.smithi064.stdout:smithi103  /dev/nvme3n1  ssd   _bff0832d89cda163    95.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
2022-01-28T16:19:28.929 INFO:teuthology.orchestra.run.smithi064.stdout:smithi103  /dev/nvme4n1  ssd   _17f394152cbcf956    95.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
2022-01-28T16:19:28.941 INFO:teuthology.orchestra.run.smithi064.stderr:++ ceph device ls
2022-01-28T16:19:28.942 INFO:teuthology.orchestra.run.smithi064.stderr:++ grep osd.1
2022-01-28T16:19:28.942 INFO:teuthology.orchestra.run.smithi064.stderr:++ awk '{print $1}'
2022-01-28T16:19:29.273 INFO:teuthology.orchestra.run.smithi064.stderr:+ DEVID=Linux_6d2a90ca9ac29a26
2022-01-28T16:19:29.274 INFO:teuthology.orchestra.run.smithi064.stderr:++ ceph orch device ls
2022-01-28T16:19:29.274 INFO:teuthology.orchestra.run.smithi064.stderr:++ grep Linux_6d2a90ca9ac29a26
2022-01-28T16:19:29.275 INFO:teuthology.orchestra.run.smithi064.stderr:++ awk '{print $1}'
2022-01-28T16:19:29.606 INFO:teuthology.orchestra.run.smithi064.stderr:+ HOST=
2022-01-28T16:19:29.607 INFO:teuthology.orchestra.run.smithi064.stderr:++ ceph orch device ls
2022-01-28T16:19:29.608 INFO:teuthology.orchestra.run.smithi064.stderr:++ grep Linux_6d2a90ca9ac29a26
2022-01-28T16:19:29.608 INFO:teuthology.orchestra.run.smithi064.stderr:++ awk '{print $2}'
2022-01-28T16:19:29.959 INFO:teuthology.orchestra.run.smithi064.stderr:+ DEV=
2022-01-28T16:19:29.959 INFO:teuthology.orchestra.run.smithi064.stderr:+ echo 'host , dev , devid Linux_6d2a90ca9ac29a26'
2022-01-28T16:19:29.959 INFO:teuthology.orchestra.run.smithi064.stderr:+ ceph orch osd rm 1
2022-01-28T16:19:29.960 INFO:teuthology.orchestra.run.smithi064.stdout:host , dev , devid Linux_6d2a90ca9ac29a26

The same section of the same test, which passed on master:

2022-01-26T07:13:34.687 INFO:teuthology.orchestra.run.smithi109.stderr:+ ceph orch device ls
2022-01-26T07:13:34.846 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Jan 26 07:13:34 smithi112 ceph-mon[36002]: from='client.14564 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2022-01-26T07:13:34.962 INFO:teuthology.orchestra.run.smithi109.stdout:HOST       PATH          TYPE  DEVICE ID                                SIZE  AVAILABLE  REJECT REASONS
2022-01-26T07:13:34.963 INFO:teuthology.orchestra.run.smithi109.stdout:smithi109  /dev/nvme0n1  ssd   INTEL_SSDPEDMD400G4_PHFT6204016X400BGN   400G             LVM detected, locked
2022-01-26T07:13:34.963 INFO:teuthology.orchestra.run.smithi109.stdout:smithi109  /dev/nvme1n1  ssd   Linux_ef8d19b838c27683acca              95.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
2022-01-26T07:13:34.963 INFO:teuthology.orchestra.run.smithi109.stdout:smithi109  /dev/nvme2n1  ssd   Linux_cce609b0affd8e91a14c              95.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
2022-01-26T07:13:34.964 INFO:teuthology.orchestra.run.smithi109.stdout:smithi109  /dev/nvme3n1  ssd   Linux_5d732ac5ef391bffeae9              95.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
2022-01-26T07:13:34.964 INFO:teuthology.orchestra.run.smithi109.stdout:smithi109  /dev/nvme4n1  ssd   Linux_b521c326e198763525dc              95.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
2022-01-26T07:13:34.964 INFO:teuthology.orchestra.run.smithi109.stdout:smithi112  /dev/nvme0n1  ssd   INTEL_SSDPEDMD400G4_PHFT620400NV400BGN   400G             LVM detected, locked
2022-01-26T07:13:34.964 INFO:teuthology.orchestra.run.smithi109.stdout:smithi112  /dev/nvme1n1  ssd   Linux_41801bcc82e1568a4ea1              95.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
2022-01-26T07:13:34.965 INFO:teuthology.orchestra.run.smithi109.stdout:smithi112  /dev/nvme2n1  ssd   Linux_55867f9a5d5a54548374              95.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
2022-01-26T07:13:34.965 INFO:teuthology.orchestra.run.smithi109.stdout:smithi112  /dev/nvme3n1  ssd   Linux_b58eb44cada3a0ed7b0e              95.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
2022-01-26T07:13:34.965 INFO:teuthology.orchestra.run.smithi109.stdout:smithi112  /dev/nvme4n1  ssd   Linux_c0419946386c227ade4d              95.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
2022-01-26T07:13:34.965 INFO:journalctl@ceph.mon.smithi109.smithi109.stdout:Jan 26 07:13:34 smithi109 ceph-mon[29486]: from='client.14564 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2022-01-26T07:13:34.975 INFO:teuthology.orchestra.run.smithi109.stderr:++ ceph device ls
2022-01-26T07:13:34.977 INFO:teuthology.orchestra.run.smithi109.stderr:++ grep osd.1
2022-01-26T07:13:34.977 INFO:teuthology.orchestra.run.smithi109.stderr:++ awk '{print $1}'
2022-01-26T07:13:35.249 INFO:teuthology.orchestra.run.smithi109.stderr:+ DEVID=Linux_41801bcc82e1568a4ea1
2022-01-26T07:13:35.250 INFO:teuthology.orchestra.run.smithi109.stderr:++ ceph orch device ls
2022-01-26T07:13:35.250 INFO:teuthology.orchestra.run.smithi109.stderr:++ grep Linux_41801bcc82e1568a4ea1
2022-01-26T07:13:35.250 INFO:teuthology.orchestra.run.smithi109.stderr:++ awk '{print $1}'
2022-01-26T07:13:35.538 INFO:teuthology.orchestra.run.smithi109.stderr:+ HOST=smithi112
2022-01-26T07:13:35.538 INFO:teuthology.orchestra.run.smithi109.stderr:++ ceph orch device ls
2022-01-26T07:13:35.539 INFO:teuthology.orchestra.run.smithi109.stderr:++ grep Linux_41801bcc82e1568a4ea1
2022-01-26T07:13:35.539 INFO:teuthology.orchestra.run.smithi109.stderr:++ awk '{print $2}'
2022-01-26T07:13:35.823 INFO:teuthology.orchestra.run.smithi109.stdout:host smithi112, dev /dev/nvme1n1, devid Linux_41801bcc82e1568a4ea1
2022-01-26T07:13:35.824 INFO:teuthology.orchestra.run.smithi109.stderr:+ DEV=/dev/nvme1n1
2022-01-26T07:13:35.825 INFO:teuthology.orchestra.run.smithi109.stderr:+ echo 'host smithi112, dev /dev/nvme1n1, devid Linux_41801bcc82e1568a4ea1'
2022-01-26T07:13:35.825 INFO:teuthology.orchestra.run.smithi109.stderr:+ ceph orch osd rm 1
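Comparing the two runs, `ceph device ls` reports the full ID (`Linux_6d2a90ca9ac29a26`) while Pacific's `ceph orch device ls` shows only the truncated form (`_6d2a90ca9ac29a26`), so the exact-match grep fails. One possible workaround, sketched here against a canned table (this is not the committed fix), would be to match on the serial suffix that both listings share:

```shell
#!/bin/sh
DEVID="Linux_6d2a90ca9ac29a26"   # full ID from `ceph device ls`
SUFFIX="_${DEVID#*_}"            # "_6d2a90ca9ac29a26": the part both listings share
# Canned Pacific-style `ceph orch device ls` row (vendor prefix absent).
ORCH_LS="smithi064  /dev/nvme1n1  ssd  _6d2a90ca9ac29a26  95.9G"
# Matching on the suffix finds the row even though the prefixes differ.
HOST=$(echo "$ORCH_LS" | grep "$SUFFIX" | awk '{print $1}')
DEV=$(echo "$ORCH_LS" | grep "$SUFFIX" | awk '{print $2}')
echo "host $HOST, dev $DEV, devid $DEVID"
```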
Actions #2

Updated by Laura Flores about 2 years ago

/a/yuriw-2022-01-27_14:57:16-rados-wip-yuri-testing-2022-01-26-1810-pacific-distro-default-smithi/6643785

Found another instance; confirmed that the same phenomenon happened as pasted above:

2022-01-27T19:58:32.298 INFO:teuthology.orchestra.run.smithi042.stderr:++ ceph device ls
2022-01-27T19:58:32.299 INFO:teuthology.orchestra.run.smithi042.stderr:++ grep osd.1
2022-01-27T19:58:32.299 INFO:teuthology.orchestra.run.smithi042.stderr:++ awk '{print $1}'
2022-01-27T19:58:32.329 INFO:journalctl@ceph.mon.smithi170.smithi170.stdout:Jan 27 19:58:31 smithi170 bash[15157]: cluster 2022-01-27T19:58:30.877244+0000 mgr.smithi042.sbjcug (mgr.14188) 208 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 0 B data, 41 MiB used, 715 GiB / 715 GiB avail
2022-01-27T19:58:32.372 INFO:journalctl@ceph.mon.smithi042.smithi042.stdout:Jan 27 19:58:31 smithi042 bash[12882]: cluster 2022-01-27T19:58:30.877244+0000 mgr.smithi042.sbjcug (mgr.14188) 208 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 0 B data, 41 MiB used, 715 GiB / 715 GiB avail
2022-01-27T19:58:32.623 INFO:teuthology.orchestra.run.smithi042.stderr:+ DEVID=Linux_e844f6c126849785
2022-01-27T19:58:32.624 INFO:teuthology.orchestra.run.smithi042.stderr:++ ceph orch device ls
2022-01-27T19:58:32.624 INFO:teuthology.orchestra.run.smithi042.stderr:++ awk '{print $1}'
2022-01-27T19:58:32.624 INFO:teuthology.orchestra.run.smithi042.stderr:++ grep Linux_e844f6c126849785
2022-01-27T19:58:32.962 INFO:teuthology.orchestra.run.smithi042.stderr:+ HOST=
2022-01-27T19:58:32.963 INFO:teuthology.orchestra.run.smithi042.stderr:++ ceph orch device ls
2022-01-27T19:58:32.963 INFO:teuthology.orchestra.run.smithi042.stderr:++ grep Linux_e844f6c126849785
2022-01-27T19:58:32.963 INFO:teuthology.orchestra.run.smithi042.stderr:++ awk '{print $2}'
2022-01-27T19:58:33.299 INFO:teuthology.orchestra.run.smithi042.stderr:+ DEV=
2022-01-27T19:58:33.300 INFO:teuthology.orchestra.run.smithi042.stdout:host , dev , devid Linux_e844f6c126849785

Actions #3

Updated by Laura Flores about 2 years ago

A different failure reason (`hostname: command not found` rather than a mismatched device ID), but it seems to be the same root cause: `$HOST` again ends up empty.

Description: rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate}

/a/yuriw-2022-02-06_16:04:09-rados-wip-yuri2-testing-2022-02-04-1646-pacific-distro-default-smithi/6665058

2022-02-06T16:30:47.647 INFO:teuthology.orchestra.run.smithi058.stderr:+ ceph orch ps
2022-02-06T16:30:47.923 INFO:teuthology.orchestra.run.smithi058.stdout:NAME                     HOST       PORTS        STATUS          REFRESHED   AGE  MEM USE  MEM LIM  VERSION               IMAGE ID      CONTAINER ID
2022-02-06T16:30:47.923 INFO:teuthology.orchestra.run.smithi058.stdout:alertmanager.smithi058   smithi058  *:9093,9094  running (94s)     75s ago    4m    12.1M        -  0.20.0                0881eb8f169f  140e0c60b7a0
2022-02-06T16:30:47.923 INFO:teuthology.orchestra.run.smithi058.stdout:crash.smithi058          smithi058               running (4m)      75s ago    4m    7000k        -  16.2.7-340-g6aa4fcc6  9eaf566eb582  2a1f30ae0878
2022-02-06T16:30:47.924 INFO:teuthology.orchestra.run.smithi058.stdout:crash.smithi139          smithi139               running (2m)      80s ago    2m    7138k        -  16.2.7-340-g6aa4fcc6  9eaf566eb582  6c372a30f478
2022-02-06T16:30:47.924 INFO:teuthology.orchestra.run.smithi058.stdout:grafana.smithi058        smithi058  *:3000       running (3m)      75s ago    3m    31.6M        -  6.7.4                 557c83e11646  53b741e08fd4
2022-02-06T16:30:47.924 INFO:teuthology.orchestra.run.smithi058.stdout:mgr.smithi058.ovedyh     smithi058  *:9283       running (5m)      75s ago    5m     438M        -  16.2.7-340-g6aa4fcc6  9eaf566eb582  00cad9c50ac0
2022-02-06T16:30:47.924 INFO:teuthology.orchestra.run.smithi058.stdout:mgr.smithi139.iebijb     smithi139  *:8443,9283  running (2m)      80s ago    2m     392M        -  16.2.7-340-g6aa4fcc6  9eaf566eb582  f655a4bd5d5c
2022-02-06T16:30:47.925 INFO:teuthology.orchestra.run.smithi058.stdout:mon.smithi058            smithi058               running (5m)      75s ago    5m    73.9M    2048M  16.2.7-340-g6aa4fcc6  9eaf566eb582  58f3134f20a7
2022-02-06T16:30:47.925 INFO:teuthology.orchestra.run.smithi058.stdout:mon.smithi139            smithi139               running (2m)      80s ago    2m    65.3M    2048M  16.2.7-340-g6aa4fcc6  9eaf566eb582  ff712bf89061
2022-02-06T16:30:47.925 INFO:teuthology.orchestra.run.smithi058.stdout:node-exporter.smithi058  smithi058  *:9100       running (3m)      75s ago    3m    17.9M        -  0.18.1                e5a616e4b9cf  1c63b53ade9e
2022-02-06T16:30:47.925 INFO:teuthology.orchestra.run.smithi058.stdout:node-exporter.smithi139  smithi139  *:9100       running (2m)      80s ago    2m    7235k        -  0.18.1                e5a616e4b9cf  7f272de555bc
2022-02-06T16:30:47.925 INFO:teuthology.orchestra.run.smithi058.stdout:osd.0                    smithi139               running (2m)      80s ago    2m    31.4M    4005M  16.2.7-340-g6aa4fcc6  9eaf566eb582  5dff97580512
2022-02-06T16:30:47.926 INFO:teuthology.orchestra.run.smithi058.stdout:osd.1                    smithi058               running (2m)      75s ago    2m    33.5M    3238M  16.2.7-340-g6aa4fcc6  9eaf566eb582  7a2a84d54e59
2022-02-06T16:30:47.926 INFO:teuthology.orchestra.run.smithi058.stdout:osd.2                    smithi139               running (2m)      80s ago    2m    32.2M    4005M  16.2.7-340-g6aa4fcc6  9eaf566eb582  fc17c89b8e0f
2022-02-06T16:30:47.926 INFO:teuthology.orchestra.run.smithi058.stdout:osd.3                    smithi058               running (117s)    75s ago  117s    32.4M    3238M  16.2.7-340-g6aa4fcc6  9eaf566eb582  fa1827127d5c
2022-02-06T16:30:47.926 INFO:teuthology.orchestra.run.smithi058.stdout:osd.4                    smithi139               running (117s)    80s ago  117s    32.6M    4005M  16.2.7-340-g6aa4fcc6  9eaf566eb582  db101e59ce81
2022-02-06T16:30:47.926 INFO:teuthology.orchestra.run.smithi058.stdout:osd.5                    smithi058               running (110s)    75s ago  110s    32.3M    3238M  16.2.7-340-g6aa4fcc6  9eaf566eb582  58e818d92fd3
2022-02-06T16:30:47.927 INFO:teuthology.orchestra.run.smithi058.stdout:osd.6                    smithi139               running (112s)    80s ago  112s    31.7M    4005M  16.2.7-340-g6aa4fcc6  9eaf566eb582  518b4d748b37
2022-02-06T16:30:47.927 INFO:teuthology.orchestra.run.smithi058.stdout:osd.7                    smithi058               running (103s)    75s ago  102s    29.9M    3238M  16.2.7-340-g6aa4fcc6  9eaf566eb582  b9d1772464c1
2022-02-06T16:30:47.927 INFO:teuthology.orchestra.run.smithi058.stdout:prometheus.smithi058     smithi058  *:9095       running (87s)     75s ago    3m    28.6M        -  2.18.1                de242295e225  aa593e495e97
2022-02-06T16:30:47.935 INFO:teuthology.orchestra.run.smithi058.stderr:++ hostname -s
2022-02-06T16:30:47.935 INFO:teuthology.orchestra.run.smithi058.stderr:bash: line 3: hostname: command not found
2022-02-06T16:30:47.936 INFO:teuthology.orchestra.run.smithi058.stderr:+ HOST=
2022-02-06T16:30:48.222 INFO:journalctl@ceph.mon.smithi058.smithi058.stdout:Feb 06 16:30:47 smithi058 conmon[28949]: audit 2022-02-06T16:30:47.191842+0000
2022-02-06T16:30:48.222 INFO:journalctl@ceph.mon.smithi058.smithi058.stdout:Feb 06 16:30:47 smithi058 conmon[28949]: mon.smithi058 (mon.0) 586 : audit [DBG] from='mgr.14176 172.21.15.58:0/1306308692' entity='mgr.smithi058.ovedyh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2022-02-06T16:30:48.223 INFO:journalctl@ceph.mon.smithi058.smithi058.stdout:Feb 06 16:30:47 smithi058 conmon[28949]: audit 2022-02-06T16:30:47.192874+0000 mon.smithi058 (mon.0) 587 : audit [DBG] from='mgr.14176 172.21.15.58:0/1306308692' entity='mgr.smithi058.ovedyh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2022-02-06T16:30:48.223 INFO:journalctl@ceph.mon.smithi058.smithi058.stdout:Feb 06 16:30:47 smithi058 conmon[28949]: audit 2022-02-06T16:30:47.193450+0000 mon.smithi058 (mon.0) 588 : audit [INF] from='mgr.14176 172.21.15.58:0/1306308692' entity='mgr.smithi058.ovedyh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2022-02-06T16:30:48.306 INFO:journalctl@ceph.mon.smithi139.smithi139.stdout:Feb 06 16:30:47 smithi139 conmon[32128]: audit 2022-02-06T16:30:47.191842+0000 mon.smithi058 (mon.0) 586
2022-02-06T16:30:48.306 INFO:journalctl@ceph.mon.smithi139.smithi139.stdout:Feb 06 16:30:47 smithi139 conmon[32128]:  : audit [DBG] from='mgr.14176 172.21.15.58:0/1306308692' entity='mgr.smithi058.ovedyh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2022-02-06T16:30:48.306 INFO:journalctl@ceph.mon.smithi139.smithi139.stdout:Feb 06 16:30:47 smithi139 conmon[32128]: audit 2022-02-06T16:30:47.192874+0000 mon.smithi058 (mon.0) 587 : audit [DBG] from='mgr.14176 172.21.15.58:0/1306308692' entity='mgr.smithi058.ovedyh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2022-02-06T16:30:48.307 INFO:journalctl@ceph.mon.smithi139.smithi139.stdout:Feb 06 16:30:47 smithi139 conmon[32128]: audit 2022-02-06T16:30:47.193450+0000 mon.smithi058 (mon.0) 588 : audit [INF] from='mgr.14176 172.21.15.58:0/1306308692' entity='mgr.smithi058.ovedyh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2022-02-06T16:30:48.667 DEBUG:teuthology.orchestra.run:got remote process result: 127
2022-02-06T16:30:48.669 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_3094160cc590b786a43d55faaf7c99d5de71ce56/teuthology/run_tasks.py", line 91, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_3094160cc590b786a43d55faaf7c99d5de71ce56/teuthology/run_tasks.py", line 70, in run_one_task
    return task(**kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_6aa4fcc62bbc85390459e2e69fccdea5b9e83966/qa/tasks/cephadm.py", line 1054, in shell
    extra_cephadm_args=args)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_6aa4fcc62bbc85390459e2e69fccdea5b9e83966/qa/tasks/cephadm.py", line 46, in _shell
    **kwargs
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_3094160cc590b786a43d55faaf7c99d5de71ce56/teuthology/orchestra/remote.py", line 509, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_3094160cc590b786a43d55faaf7c99d5de71ce56/teuthology/orchestra/run.py", line 455, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_3094160cc590b786a43d55faaf7c99d5de71ce56/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_3094160cc590b786a43d55faaf7c99d5de71ce56/teuthology/orchestra/run.py", line 183, in _raise_for_status
    node=self.hostname, label=self.label
teuthology.exceptions.CommandFailedError: Command failed on smithi058 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6aa4fcc62bbc85390459e2e69fccdea5b9e83966 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4773e2ce-8769-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nHOST=$(hostname -s)\nOSD=$(ceph orch ps $HOST | grep osd | head -n 1 | awk \'"\'"\'{print $1}\'"\'"\')\necho "host $HOST, osd $OSD"\nceph orch daemon stop $OSD\nwhile ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done\nceph auth export $OSD > k\nceph orch daemon rm $OSD --force\nceph orch ps --refresh\nwhile ceph orch ps | grep $OSD ; do sleep 5 ; done\nceph auth add $OSD -i k\nceph cephadm osd activate $HOST\nwhile ! ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done\n\''

Other occurrence:
/a/yuriw-2022-02-05_22:51:11-rados-wip-yuri2-testing-2022-02-04-1646-pacific-distro-default-smithi/6663940
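In this variant the script fails because the `hostname` binary is absent inside the container image. A minimal sketch of a lookup that avoids the dependency, assuming a Linux `/proc` filesystem (illustrative only, not the actual fix):

```shell
#!/bin/sh
# Prefer the hostname binary when present; otherwise fall back to the
# kernel's node name, which is always readable on Linux via /proc.
if command -v hostname >/dev/null 2>&1; then
    HOST=$(hostname -s)
else
    HOST=$(cut -d. -f1 /proc/sys/kernel/hostname)
fi
echo "host $HOST"
```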

Actions #4

Updated by Laura Flores about 2 years ago

/a/yuriw-2022-02-06_16:04:09-rados-wip-yuri2-testing-2022-02-04-1646-pacific-distro-default-smithi/6665063

Actions #5

Updated by Guillaume Abrioux about 2 years ago

  • Assignee set to Guillaume Abrioux
Actions #6

Updated by Guillaume Abrioux about 2 years ago

  • Status changed from New to In Progress
Actions #7

Updated by Adam King about 2 years ago

  • Priority changed from Normal to High
Actions #8

Updated by Laura Flores about 2 years ago

/a/yuriw-2022-03-01_17:45:51-rados-wip-yuri3-testing-2022-02-28-0757-pacific-distro-default-smithi/6714832

Actions #9

Updated by Kamoltat (Junior) Sirivadhna about 2 years ago

/a/yuriw-2022-03-17_14:54:32-rados-wip-yuri10-testing-2022-03-16-1432-pacific-distro-default-smithi/6742307

Actions #10

Updated by Aishwarya Mathuria about 2 years ago

/a/yuriw-2022-03-23_14:51:02-rados-wip-yuri4-testing-2022-03-21-1648-pacific-distro-default-smithi/6756005

Actions #11

Updated by Laura Flores about 2 years ago

/a/yuriw-2022-04-06_14:02:46-rados-wip-yuri4-testing-2022-04-05-1720-pacific-distro-default-smithi/6779459

Actions #12

Updated by Laura Flores almost 2 years ago

@Guillaume, Adam King and I suspect that ubuntu_18.04 is the culprit here. After running `rados:cephadm:osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add}` 20 times, all 20 runs failed: https://pulpito.ceph.com/adking-2022-05-19_17:46:29-rados:cephadm:osds-wip-adk2-testing-2022-05-18-1605-pacific-distro-basic-smithi/

But the same test on different distros passed all 20 times: https://pulpito.ceph.com/adking-2022-05-19_18:30:22-rados:cephadm:osds-wip-adk2-testing-2022-05-18-1605-pacific-distro-basic-smithi/

Not sure how it can be fixed, but hopefully this can narrow it down. A similar phenomenon is happening with #54360, where ubuntu_18.04 is the distro that's consistently failing.

Actions #13

Updated by Laura Flores almost 2 years ago

/a/yuriw-2022-06-02_00:50:42-rados-wip-yuri4-testing-2022-06-01-1350-pacific-distro-default-smithi/6859764

Description: rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add}

Actions #14

Updated by Laura Flores almost 2 years ago

/a/yuriw-2022-05-31_21:35:41-rados-wip-yuri2-testing-2022-05-31-1300-pacific-distro-default-smithi/6856442

Description: rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add}

Actions #15

Updated by Sridhar Seshasayee almost 2 years ago

/a/yuriw-2022-06-15_18:29:33-rados-wip-yuri4-testing-2022-06-15-1000-pacific-distro-default-smithi/6881304

Description: rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add}

Actions #16

Updated by Laura Flores almost 2 years ago

/a/yuriw-2022-06-21_16:28:27-rados-wip-yuri4-testing-2022-06-21-0704-pacific-distro-default-smithi/6889715

Description: rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add}

Actions #17

Updated by Laura Flores over 1 year ago

/a/yuriw-2022-08-01_22:58:38-rados-wip-yuri8-testing-2022-08-01-1413-pacific-distro-default-smithi/6955111

Actions #18

Updated by Kamoltat (Junior) Sirivadhna over 1 year ago

/a/yuriw-2022-08-04_11:58:29-rados-wip-yuri3-testing-2022-08-03-0828-pacific-distro-default-smithi/6958218

Actions #19

Updated by Matan Breizman over 1 year ago

/a/yuriw-2022-08-22_21:19:34-rados-wip-yuri4-testing-2022-08-18-1020-pacific-distro-default-smithi/6986481/

Actions #20

Updated by Laura Flores over 1 year ago

  • Has duplicate Bug #57247 ([cephadm] Error response from daemon: No such container) added
Actions #21

Updated by Laura Flores over 1 year ago

  • Tag list set to test-failure
Actions #22

Updated by Laura Flores over 1 year ago

/a/yuriw-2022-08-24_16:39:47-rados-wip-yuri4-testing-2022-08-24-0707-pacific-distro-default-smithi/6990175

Actions #23

Updated by Laura Flores over 1 year ago

  • Priority changed from Normal to High

Marking as high since this persists in the rados suite.

/a/amathuri-2022-09-08_14:19:23-rados-wip-yuri5-testing-2022-09-06-1334-pacific-distro-default-smithi/7020694

Actions #24

Updated by Laura Flores over 1 year ago

/a/yuriw-2022-09-09_14:59:25-rados-wip-yuri2-testing-2022-09-06-1007-pacific-distro-default-smithi/7022651

Actions #25

Updated by Aishwarya Mathuria over 1 year ago

/a/yuriw-2022-11-28_21:10:48-rados-wip-yuri10-testing-2022-11-28-1042-pacific-distro-default-smithi/7095174

Actions #26

Updated by Laura Flores over 1 year ago

@Guillaume, is there any further progress on this Tracker?

Actions #27

Updated by Kamoltat (Junior) Sirivadhna over 1 year ago

/a/ksirivad-2022-12-22_17:58:01-rados-wip-ksirivad-testing-pacific-distro-default-smithi/7125159/

Actions #28

Updated by Laura Flores about 1 year ago

/a/yuriw-2023-03-16_21:59:27-rados-wip-yuri6-testing-2023-03-12-0918-pacific-distro-default-smithi/7211171

Actions #29

Updated by Laura Flores about 1 year ago

/a/yuriw-2023-04-06_15:37:58-rados-wip-yuri3-testing-2023-04-04-0833-pacific-distro-default-smithi/7234308

Actions #30

Updated by Laura Flores 12 months ago

/a/yuriw-2023-04-25_14:15:40-rados-pacific-release-distro-default-smithi/7251380

Actions #31

Updated by Laura Flores 12 months ago

/a/yuriw-2023-04-25_18:56:08-rados-wip-yuri5-testing-2023-04-25-0837-pacific-distro-default-smithi/7252540

Actions #32

Updated by Laura Flores 12 months ago

/a/yuriw-2023-05-06_14:41:44-rados-pacific-release-distro-default-smithi/7264292

Actions #33

Updated by Laura Flores 12 months ago

/a/yuriw-2023-04-26_01:16:19-rados-wip-yuri11-testing-2023-04-25-1605-pacific-distro-default-smithi/7253899

Actions #34

Updated by Laura Flores 11 months ago

/a/yuriw-2023-05-17_19:39:18-rados-wip-yuri5-testing-2023-05-09-1324-pacific-distro-default-smithi/7276771

Actions #35

Updated by Laura Flores 10 months ago

/a/lflores-2023-07-05_17:10:12-rados-wip-yuri8-testing-2023-06-22-1309-pacific-distro-default-smithi/7327319

Actions #36

Updated by Laura Flores 9 months ago

/a/yuriw-2023-07-19_14:33:14-rados-wip-yuri11-testing-2023-07-18-0927-pacific-distro-default-smithi/7343598

Actions #37

Updated by Sridhar Seshasayee 9 months ago

/a/yuriw-2023-07-26_15:54:22-rados-wip-yuri6-testing-2023-07-24-0819-pacific-distro-default-smithi/7353507

Actions #38

Updated by Laura Flores 9 months ago

/a/yuriw-2023-08-10_20:12:19-rados-wip-yuri6-testing-2023-08-03-0807-pacific-distro-default-smithi/7365586

Actions #39

Updated by Aishwarya Mathuria 8 months ago

/a/yuriw-2023-08-16_22:40:18-rados-wip-yuri2-testing-2023-08-16-1142-pacific-distro-default-smithi/7371030

Actions #40

Updated by Laura Flores 6 months ago

/a/lflores-2023-11-01_18:38:59-rados-wip-yuri5-testing-2023-10-24-0737-pacific-distro-default-smithi/7443322

Actions #41

Updated by Aishwarya Mathuria 5 months ago

/a/yuriw-2023-11-16_22:29:26-rados-wip-yuri3-testing-2023-11-14-1227-pacific-distro-default-smithi/7461277

Actions #42

Updated by Nitzan Mordechai 4 months ago

/a/yuriw-2023-12-27_21:01:10-rados-wip-yuri7-testing-2023-12-27-1008-pacific-distro-default-smithi/7502367

Actions #43

Updated by Laura Flores 3 months ago

/a/yuriw-2024-01-25_17:23:47-rados-pacific-release-distro-default-smithi/7532156

Actions #44

Updated by Radoslaw Zarzynski 3 months ago

/a/yuriw-2024-01-28_20:30:08-rados-pacific-release-distro-default-smithi/7536030

2024-01-28T21:32:19.200 INFO:teuthology.orchestra.run.smithi001.stderr:+ ceph orch device zap --force
2024-01-28T21:32:19.531 INFO:teuthology.orchestra.run.smithi001.stderr:Invalid command: missing required parameter hostname(<string>)
Actions #45

Updated by Laura Flores 2 months ago

/a/yuriw-2024-02-08_23:15:40-rados-wip-yuri10-testing-2024-02-08-0854-pacific-distro-default-smithi/7552786

Actions #46

Updated by Kamoltat (Junior) Sirivadhna 2 months ago

  • % Done changed from 100 to 0

/a/yuriw-2024-02-19_19:25:49-rados-pacific-release-distro-default-smithi/7566747/

