Bug #52890 (closed): lsblk: vg_nvme/lv_4: not a block device

Added by Deepika Upadhyay over 2 years ago. Updated about 2 years ago.

Status: Resolved
Priority: Normal
Category: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Crash signature (v1):
Crash signature (v2):

Description

Caught this error on latest master while testing https://github.com/ceph/teuthology/pull/1683:

2021-10-11 12:36:21,363.363 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: Non-zero exit code 2 from /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:a1bd2ceb3c5d3ac2bd429d97ec43402b482837e0425fc4010c3740205a6b81c0 -e NODE_NAME=smithi061 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -v /var/run/ceph/273b5b4a-2a8f-11ec-8c25-001a4aab830c:/var/run/ceph:z -v /var/log/ceph/273b5b4a-2a8f-11ec-8c25-001a4aab830c:/var/log/ceph:z -v /var/lib/ceph/273b5b4a-2a8f-11ec-8c25-001a4aab830c/crash:/var/lib/ceph/crash:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/273b5b4a-2a8f-11ec-8c25-001a4aab830c/selinux:/sys/fs/selinux:ro -v /tmp/ceph-tmpy7kv3e3j:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpg6w4r4u7:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:a1bd2ceb3c5d3ac2bd429d97ec43402b482837e0425fc4010c3740205a6b81c0 lvm batch --no-auto vg_nvme/lv_4 --yes --no-systemd
2021-10-11 12:36:21,363.363 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr  stderr: lsblk: vg_nvme/lv_4: not a block device
2021-10-11 12:36:21,363.363 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr  stderr: blkid: error: vg_nvme/lv_4: No such file or directory
2021-10-11 12:36:21,363.363 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr  stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
2021-10-11 12:36:21,363.363 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
2021-10-11 12:36:21,363.363 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
2021-10-11 12:36:21,363.363 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
2021-10-11 12:36:21,363.363 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--auto] [--no-auto] [--bluestore] [--filestore]
2021-10-11 12:36:21,364.364 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--report] [--yes]
2021-10-11 12:36:21,364.364 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--format {json,json-pretty,pretty}] [--dmcrypt]
2021-10-11 12:36:21,364.364 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--crush-device-class CRUSH_DEVICE_CLASS]
2021-10-11 12:36:21,364.364 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--no-systemd]
2021-10-11 12:36:21,364.364 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--osds-per-device OSDS_PER_DEVICE]
2021-10-11 12:36:21,364.364 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--data-slots DATA_SLOTS]
2021-10-11 12:36:21,364.364 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--data-allocate-fraction DATA_ALLOCATE_FRACTION]
2021-10-11 12:36:21,364.364 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--block-db-size BLOCK_DB_SIZE]
2021-10-11 12:36:21,364.364 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--block-db-slots BLOCK_DB_SLOTS]
2021-10-11 12:36:21,364.364 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--block-wal-size BLOCK_WAL_SIZE]
2021-10-11 12:36:21,364.364 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--block-wal-slots BLOCK_WAL_SLOTS]
2021-10-11 12:36:21,364.364 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--journal-size JOURNAL_SIZE]
2021-10-11 12:36:21,364.364 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--journal-slots JOURNAL_SLOTS] [--prepare]
2021-10-11 12:36:21,364.364 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [--osd-ids [OSD_IDS [OSD_IDS ...]]]
2021-10-11 12:36:21,365.365 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr                              [DEVICES [DEVICES ...]]
2021-10-11 12:36:21,365.365 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: /bin/podman: stderr ceph-volume lvm batch: error: Unable to proceed with non-existing device: vg_nvme/lv_4
2021-10-11 12:36:21,365.365 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: Traceback (most recent call last):
2021-10-11 12:36:21,365.365 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:   File "/var/lib/ceph/273b5b4a-2a8f-11ec-8c25-001a4aab830c/cephadm.39eec612583c5ed5fe23af6fedd2c9e8862e216c61c807d38e39b11966bf281c", line 8043, in <module>
2021-10-11 12:36:21,365.365 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:     main()
2021-10-11 12:36:21,365.365 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:   File "/var/lib/ceph/273b5b4a-2a8f-11ec-8c25-001a4aab830c/cephadm.39eec612583c5ed5fe23af6fedd2c9e8862e216c61c807d38e39b11966bf281c", line 8031, in main
2021-10-11 12:36:21,365.365 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:     r = ctx.func(ctx)
2021-10-11 12:36:21,365.365 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:   File "/var/lib/ceph/273b5b4a-2a8f-11ec-8c25-001a4aab830c/cephadm.39eec612583c5ed5fe23af6fedd2c9e8862e216c61c807d38e39b11966bf281c", line 1737, in _infer_config
2021-10-11 12:36:21,365.365 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:     return func(ctx)
2021-10-11 12:36:21,365.365 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:   File "/var/lib/ceph/273b5b4a-2a8f-11ec-8c25-001a4aab830c/cephadm.39eec612583c5ed5fe23af6fedd2c9e8862e216c61c807d38e39b11966bf281c", line 1678, in _infer_fsid
2021-10-11 12:36:21,365.365 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:     return func(ctx)
2021-10-11 12:36:21,365.365 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:   File "/var/lib/ceph/273b5b4a-2a8f-11ec-8c25-001a4aab830c/cephadm.39eec612583c5ed5fe23af6fedd2c9e8862e216c61c807d38e39b11966bf281c", line 1765, in _infer_image
2021-10-11 12:36:21,365.365 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:     return func(ctx)
2021-10-11 12:36:21,366.366 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:   File "/var/lib/ceph/273b5b4a-2a8f-11ec-8c25-001a4aab830c/cephadm.39eec612583c5ed5fe23af6fedd2c9e8862e216c61c807d38e39b11966bf281c", line 1665, in _validate_fsid
2021-10-11 12:36:21,366.366 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:     return func(ctx)
2021-10-11 12:36:21,366.366 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:   File "/var/lib/ceph/273b5b4a-2a8f-11ec-8c25-001a4aab830c/cephadm.39eec612583c5ed5fe23af6fedd2c9e8862e216c61c807d38e39b11966bf281c", line 5006, in command_ceph_volume
2021-10-11 12:36:21,366.366 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:     out, err, code = call_throws(ctx, c.run_cmd())
2021-10-11 12:36:21,366.366 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:   File "/var/lib/ceph/273b5b4a-2a8f-11ec-8c25-001a4aab830c/cephadm.39eec612583c5ed5fe23af6fedd2c9e8862e216c61c807d38e39b11966bf281c", line 1467, in call_throws
2021-10-11 12:36:21,366.366 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:     raise RuntimeError('Failed command: %s' % ' '.join(command))
2021-10-11 12:36:21,366.366 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]: RuntimeError: Failed command: /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:a1bd2ceb3c5d3ac2bd429d97ec43402b482837e0425fc4010c3740205a6b81c0 -e NODE_NAME=smithi061 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -v /var/run/ceph/273b5b4a-2a8f-11ec-8c25-001a4aab830c:/var/run/ceph:z -v /var/log/ceph/273b5b4a-2a8f-11ec-8c25-001a4aab830c:/var/log/ceph:z -v /var/lib/ceph/273b5b4a-2a8f-11ec-8c25-001a4aab830c/crash:/var/lib/ceph/crash:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/273b5b4a-2a8f-11ec-8c25-001a4aab830c/selinux:/sys/fs/selinux:ro -v /tmp/ceph-tmpy7kv3e3j:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpg6w4r4u7:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:a1bd2ceb3c5d3ac2bd429d97ec43402b482837e0425fc4010c3740205a6b81c0 lvm batch --no-auto vg_nvme/lv_4 --yes --no-systemd
2021-10-11 12:36:21,366.366 INFO:journalctl@ceph.mgr.x.smithi061.stdout:Oct 11 12:36:21 smithi061 conmon[90050]:
2021-10-11 12:36:21,394.394 INFO:journalctl@ceph.mon.c.smithi086.stdout:Oct 11 12:36:21 smithi086 ceph-mon[9770]: pgmap v87: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2021-10-11 12:36:21,619.619 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/ideepika/teuthology/teuthology/contextutil.py", line 31, in nested
    vars.append(enter())
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/home/ideepika/src/github.com_ideepika_ceph_32e341c68d2d0a0d18a7eba5529e689bc51e1b62/qa/tasks/cephadm.py", line 774, in ceph_osds
    remote.shortname + ':' + short_dev
  File "/home/ideepika/src/github.com_ideepika_ceph_32e341c68d2d0a0d18a7eba5529e689bc51e1b62/qa/tasks/cephadm.py", line 47, in _shell
    **kwargs
  File "/home/ideepika/teuthology/teuthology/orchestra/remote.py", line 509, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/ideepika/teuthology/teuthology/orchestra/run.py", line 455, in run
    r.wait()
  File "/home/ideepika/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/ideepika/teuthology/teuthology/orchestra/run.py", line 183, in _raise_for_status
    node=self.hostname, label=self.label
teuthology.exceptions.CommandFailedError: Command failed on smithi061 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:741d041490baf72d9ab615a76c16193d0b31e385 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 273b5b4a-2a8f-11ec-8c25-001a4aab830c -- ceph orch daemon add osd smithi061:vg_nvme/lv_4'

Full log on the teuthology sepia lab: /home/ideepika/rbd_nvme.log
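
For context: lsblk and blkid only operate on device nodes ("Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected"), but the bare VG/LV spec vg_nvme/lv_4 from the failing "ceph orch daemon add osd smithi061:vg_nvme/lv_4" command is passed through to them verbatim. A minimal sketch of the distinction, using a hypothetical resolve_lv_path() helper (illustration only, not the actual ceph-volume code):

#!/usr/bin/env python3
"""Illustration: lsblk needs a device node, not a bare 'vg/lv' name."""
import os
import stat
import subprocess

def resolve_lv_path(spec: str) -> str:
    """Hypothetical helper: map a 'vg/lv' spec to a node lsblk accepts."""
    if os.path.isabs(spec):
        return spec  # already a device path such as /dev/nvme0n1
    # Active LVs appear as /dev/<vg>/<lv> symlinks and as
    # /dev/mapper/<vg>-<lv> device-mapper nodes (simplified here;
    # real LVM escapes '-' in VG/LV names as '--').
    for candidate in ("/dev/" + spec, "/dev/mapper/" + spec.replace("/", "-")):
        try:
            if stat.S_ISBLK(os.stat(candidate).st_mode):
                return candidate
        except FileNotFoundError:
            continue
    raise RuntimeError("%s: not a block device" % spec)

if __name__ == "__main__":
    dev = resolve_lv_path("vg_nvme/lv_4")  # -> /dev/vg_nvme/lv_4 if the LV is active
    subprocess.run(["lsblk", dev], check=True)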

#1 - Updated by Deepika Upadhyay over 2 years ago

  • Description updated (diff)

#2 - Updated by Guillaume Abrioux over 2 years ago

  • Assignee set to Guillaume Abrioux

#4 - Updated by Guillaume Abrioux about 2 years ago

  • Status changed from New to Resolved