Bug #35534
no terminal error when rolling back from a failed OSD preparation
Status: Closed
% Done: 0%
Regression: No
Severity: 3 - minor
Description
rock64@rockpro64-1:~/my-cluster$ sudo ceph-volume --cluster ceph lvm create --bluestore --data /dev/storage/foobar
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e7dd6d45-b556-461c-bad1-83d98a5a1afa
--> Was unable to complete a new OSD, will rollback changes
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.1 --yes-i-really-mean-it
 stderr: no valid command found; 10 closest matches:
And the file log:
[2018-09-02 18:49:27,720][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py", line 216, in safe_prepare
    self.prepare(args)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py", line 283, in prepare
    block_lv = self.prepare_device(args.data, 'block', cluster_fsid, osd_fsid)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py", line 206, in prepare_device
    raise RuntimeError(' '.join(error))
RuntimeError: Cannot use device (/dev/storage/foobar). A vg/lv path or an existing device is needed
[2018-09-02 18:49:27,722][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2018-09-02 18:49:27,723][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.1 --yes-i-really-mean-it
[2018-09-02 18:49:28,425][ceph_volume.process][INFO ] stderr no valid command found; 10 closest matches:
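The RuntimeError in the log comes from a device-path check in prepare_device: the --data argument must be given as a vg/lv pair or as an existing device path. A simplified, hypothetical sketch of that kind of validation (not the actual ceph-volume code; names are assumptions):

```python
import os


def validate_data_device(path):
    # Simplified stand-in for the check that produced the RuntimeError
    # above: accept either a "vg/lv" style path or an existing device node.
    parts = path.split('/')
    if not path.startswith('/') and len(parts) == 2 and all(parts):
        return path  # looks like a vg/lv path
    if os.path.exists(path):
        return path  # an existing block device (or other existing path)
    raise RuntimeError(
        'Cannot use device (%s). A vg/lv path or an existing device is needed' % path
    )
```

In the report above, `/dev/storage/foobar` matched neither case at the time of the check, so prepare raised and triggered the rollback.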
Updated by Alfredo Deza over 5 years ago
- Assignee set to Andrew Schoen
The exception should be caught and then reported on the terminal. Probably something like:

    except Exception as error:
        terminal.error(str(error))
        [...]
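A minimal sketch of that pattern (assumed names, not the actual ceph-volume code): catch the failure during rollback and echo it to the terminal instead of only writing it to the log file, so the user sees why the rollback failed.

```python
import logging

logger = logging.getLogger(__name__)


def terminal_error(msg):
    # stand-in for ceph_volume.terminal.error, which prints to the console
    print('--> %s' % msg)


def rollback_osd(osd_id):
    # hypothetical rollback step that fails, e.g. when the monitors do not
    # recognize the "osd purge-new" command
    raise RuntimeError('command returned non-zero exit status: 22')


def safe_rollback(osd_id):
    try:
        rollback_osd(osd_id)
    except Exception as error:
        # keep the full traceback in the log file...
        logger.exception('rollback of osd.%s failed', osd_id)
        # ...but also surface the error on the terminal
        terminal_error(str(error))
        return str(error)
```

With this pattern the RuntimeError from the failed `osd purge-new` call would appear on the terminal as well as in the log, rather than the command exiting silently.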
Updated by Roman Bogachev over 5 years ago
The same error.
$ ceph --version
ceph version 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic (stable)
When trying to create a new OSD:
# ceph-volume lvm create --bluestore --data /dev/mapper/ceph--block--0-block--0 --block.wal /dev/mapper/ceph--db--0-wal--0 --block.db /dev/mapper/ceph--db--0-db--0
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 018d024c-4f08-4e25-a952-ada8896085bb
--> Was unable to complete a new OSD, will rollback changes
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
 stderr: no valid command found; 10 closest matches:
osd tier add-cache <poolname> <poolname> <int[0-]>
osd tier remove-overlay <poolname>
osd out <ids> [<ids>...]
osd in <ids> [<ids>...]
 stderr: osd down <ids> [<ids>...]
osd unset full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim
osd require-osd-release luminous|mimic {--yes-i-really-mean-it}
osd erasure-code-profile ls
osd set full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds {--yes-i-really-mean-it}
osd erasure-code-profile get <name>
Error EINVAL: invalid command
--> RuntimeError: command returned non-zero exit status: 22
Updated by Alfredo Deza over 5 years ago
Andrew, can you update this with the PRs associated with the fix? I'm having trouble finding those links.
Updated by Andrew Schoen about 5 years ago
- Status changed from New to Resolved