Bug #57101


cephadm rm-cluster tries to zap rootfs device

Added by Guillaume Abrioux over 1 year ago. Updated over 1 year ago.

Status:
Resolved
Priority:
Normal
Category:
-
Target version:
-
% Done:

0%

Tags:
backport_processed
Backport:
quincy,pacific
Regression:
No
Severity:
3 - minor
Pull request ID:
47562

Description

Recently seen in cephadm-ansible CI.
`cephadm rm-cluster --zap-osds` tries to zap the device that backs the root filesystem:


fatal: [ceph-node4]: FAILED! => changed=true 
  cmd:
  - cephadm
  - rm-cluster
  - --force
  - --zap-osds
  - --fsid
  - 4217f198-b8b7-11eb-941d-5254004b7a69
  delta: '0:00:28.784423'
  end: '2022-08-11 10:15:53.676985'
  msg: non-zero return code
  rc: 1
  start: '2022-08-11 10:15:24.892562'
  stderr: |-
    Traceback (most recent call last):
      File "/sbin/cephadm", line 9780, in <module>
        main()
      File "/sbin/cephadm", line 9768, in main
        r = ctx.func(ctx)
      File "/sbin/cephadm", line 7222, in command_rm_cluster
        _zap_osds(ctx)
      File "/sbin/cephadm", line 2152, in _infer_image
        return func(ctx)
      File "/sbin/cephadm", line 7162, in _zap_osds
        _zap(ctx, i.get('path'))
      File "/sbin/cephadm", line 7138, in _zap
        out, err, code = call_throws(ctx, c.run_cmd())
      File "/sbin/cephadm", line 1829, in call_throws
        raise RuntimeError('Failed command: %s' % ' '.join(command))
    RuntimeError: Failed command: /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:f50052ef84fe9a241e0f90d3d7a321394ab01b6792a5d802b6afe35bf52d35b9 -e NODE_NAME=ceph-node4 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm quay.ceph.io/ceph-ci/ceph@sha256:f50052ef84fe9a241e0f90d3d7a321394ab01b6792a5d802b6afe35bf52d35b9 lvm zap --destroy /dev/vda
  stderr_lines: <omitted>
  stdout: |-
    Using ceph image with id '9d87f3df8246' and tag '<none>' created on 2022-08-10 21:43:03 +0000 UTC
    quay.ceph.io/ceph-ci/ceph@sha256:f50052ef84fe9a241e0f90d3d7a321394ab01b6792a5d802b6afe35bf52d35b9
    Zapping /dev/sda...
    Zapping /dev/sdb...
    Zapping /dev/sdc...
    Zapping /dev/vda...
    Non-zero exit code 1 from /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:f50052ef84fe9a241e0f90d3d7a321394ab01b6792a5d802b6afe35bf52d35b9 -e NODE_NAME=ceph-node4 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm quay.ceph.io/ceph-ci/ceph@sha256:f50052ef84fe9a241e0f90d3d7a321394ab01b6792a5d802b6afe35bf52d35b9 lvm zap --destroy /dev/vda
    /bin/podman: stderr Traceback (most recent call last):
    /bin/podman: stderr   File "/usr/sbin/ceph-volume", line 11, in <module>
    /bin/podman: stderr     load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
    /bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41, in __init__
    /bin/podman: stderr     self.main(self.argv)
    /bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    /bin/podman: stderr     return f(*a, **kw)
    /bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153, in main
    /bin/podman: stderr     terminal.dispatch(self.mapper, subcommand_args)
    /bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    /bin/podman: stderr     instance.main()
    /bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 46, in main
    /bin/podman: stderr     terminal.dispatch(self.mapper, self.argv)
    /bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    /bin/podman: stderr     instance.main()
    /bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/zap.py", line 401, in main
    /bin/podman: stderr     self.args = parser.parse_args(self.argv)
    /bin/podman: stderr   File "/usr/lib64/python3.6/argparse.py", line 1734, in parse_args
    /bin/podman: stderr     args, argv = self.parse_known_args(args, namespace)
    /bin/podman: stderr   File "/usr/lib64/python3.6/argparse.py", line 1766, in parse_known_args
    /bin/podman: stderr     namespace, args = self._parse_known_args(args, namespace)
    /bin/podman: stderr   File "/usr/lib64/python3.6/argparse.py", line 1975, in _parse_known_args
    /bin/podman: stderr     stop_index = consume_positionals(start_index)
    /bin/podman: stderr   File "/usr/lib64/python3.6/argparse.py", line 1931, in consume_positionals
    /bin/podman: stderr     take_action(action, args)
    /bin/podman: stderr   File "/usr/lib64/python3.6/argparse.py", line 1824, in take_action
    /bin/podman: stderr     argument_values = self._get_values(action, argument_strings)
    /bin/podman: stderr   File "/usr/lib64/python3.6/argparse.py", line 2279, in _get_values
    /bin/podman: stderr     value = [self._get_value(action, v) for v in arg_strings]
    /bin/podman: stderr   File "/usr/lib64/python3.6/argparse.py", line 2279, in <listcomp>
    /bin/podman: stderr     value = [self._get_value(action, v) for v in arg_strings]
    /bin/podman: stderr   File "/usr/lib64/python3.6/argparse.py", line 2294, in _get_value
    /bin/podman: stderr     result = type_func(arg_string)
    /bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/util/arg_validators.py", line 57, in __call__
    /bin/podman: stderr     return self._format_device(self._is_valid_device())
    /bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/util/arg_validators.py", line 60, in _is_valid_device
    /bin/podman: stderr     super()._is_valid_device()
    /bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/util/arg_validators.py", line 48, in _is_valid_device
    /bin/podman: stderr     raise RuntimeError("Device {} has partitions.".format(self.dev_path))
    /bin/podman: stderr RuntimeError: Device /dev/vda has partitions.
  stdout_lines: <omitted>
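
The failure mode above can be sketched in a few lines: the zap step iterates over every device reported by the inventory and zaps each path in turn, including /dev/vda, which backs the root filesystem and therefore has partitions. A minimal, hypothetical guard (the function and field names below are illustrative, not the actual fix merged in the referenced pull request) would filter such devices out before any zap is attempted:

```python
# Illustrative sketch only: names and inventory shape are assumptions,
# not cephadm's actual code.

def safe_zap_targets(inventory, rootfs_device):
    """Return device paths that are safe to zap.

    Skips the disk backing the root filesystem and any device that
    already has partitions (ceph-volume rejects those with
    "Device ... has partitions.").
    """
    targets = []
    for dev in inventory:
        path = dev["path"]
        if path == rootfs_device:
            continue  # never zap the disk backing /
        if dev.get("partitions"):
            continue  # partitioned device: zap --destroy would fail anyway
        targets.append(path)
    return targets


# Inventory mirroring the log above: three clean OSD disks plus the
# partitioned root disk /dev/vda.
inventory = [
    {"path": "/dev/sda", "partitions": []},
    {"path": "/dev/sdb", "partitions": []},
    {"path": "/dev/sdc", "partitions": []},
    {"path": "/dev/vda", "partitions": ["/dev/vda1", "/dev/vda2"]},
]

print(safe_zap_targets(inventory, "/dev/vda"))
```

A filter like this applies the same check ceph-volume performs ("Device /dev/vda has partitions.") before the zap loop starts, instead of letting the run fail partway through.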


Related issues 2 (0 open, 2 closed)

Copied to Orchestrator - Backport #57130: quincy: cephadm rm-cluster tries to zap rootfs device (Resolved, Guillaume Abrioux)
Copied to Orchestrator - Backport #57131: pacific: cephadm rm-cluster tries to zap rootfs device (Resolved, Guillaume Abrioux)
#1

Updated by Guillaume Abrioux over 1 year ago

  • Pull request ID set to 47562
#2

Updated by Guillaume Abrioux over 1 year ago

  • Status changed from In Progress to Fix Under Review
#3

Updated by Adam King over 1 year ago

  • Status changed from Fix Under Review to Pending Backport
#4

Updated by Backport Bot over 1 year ago

  • Copied to Backport #57130: quincy: cephadm rm-cluster tries to zap rootfs device added
#5

Updated by Backport Bot over 1 year ago

  • Copied to Backport #57131: pacific: cephadm rm-cluster tries to zap rootfs device added
#6

Updated by Backport Bot over 1 year ago

  • Tags set to backport_processed
#7

Updated by Adam King over 1 year ago

  • Status changed from Pending Backport to Resolved