Bug #45604 (closed)
mgr/cephadm: Failed to create an OSD
Status: Duplicate
Priority: Normal
Assignee: -
Category: cephadm
Target version: -
% Done: 0%
Source: Q/A
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -
Description
Creating an OSD using either of the following commands fails.
$ ceph --version
ceph version 15.2.1-277-g17d346932e (17d346932e584056578f1b6c4c49b43ac2a712a2) octopus (stable)

$ ceph orch device ls
HOST    PATH      TYPE  SIZE   DEVICE  AVAIL  REJECT REASONS
master  /dev/vda  hdd   42.0G          False  locked
node1   /dev/vda  hdd   42.0G          False  locked
node1   /dev/vdb  hdd   8192M  999162  False  Insufficient space (<5GB) on vgs, locked, LVM detected
node1   /dev/vdc  hdd   8192M  230824  False  Insufficient space (<5GB) on vgs, locked, LVM detected
node2   /dev/vdb  hdd   8192M  282151  True
node2   /dev/vda  hdd   42.0G          False  locked
node2   /dev/vdc  hdd   8192M  271426  False  LVM detected
node3   /dev/vdb  hdd   8192M  577378  True
node3   /dev/vdc  hdd   8192M  727088  True
node3   /dev/vda  hdd   42.0G          False  locked

$ cat <<EOF > /tmp/cephadm-apply.yml
service_type: osd
service_id: osd.testing_dg_node3
placement:
  host_pattern: node3
data_devices:
  all: true
EOF

$ ceph orch apply -i /tmp/cephadm-apply.yml
or
$ echo "{\"service_type\": \"osd\", \"placement\": {\"host_pattern\": \"node3\"}, \"service_id\": \"osd.testing_dg_node3\", \"data_devices\": {\"all\": True}}" | ceph orch apply osd -i -
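One detail worth noting about the inline variant: it writes Python-style True rather than JSON's lowercase true. This presumably still parses because the spec fed to `ceph orch apply osd -i -` is read as YAML (of which JSON is nearly a subset), and PyYAML's YAML 1.1 resolver treats the bare word True as a boolean. A minimal sketch of that behavior, assuming PyYAML is installed:

import yaml

# The same spec string that the echo command pipes to `ceph orch apply osd -i -`.
spec_text = ('{"service_type": "osd", '
             '"placement": {"host_pattern": "node3"}, '
             '"service_id": "osd.testing_dg_node3", '
             '"data_devices": {"all": True}}')

spec = yaml.safe_load(spec_text)
print(spec['data_devices']['all'])        # True  -- resolved as a YAML 1.1 boolean
print(type(spec['data_devices']['all']))  # <class 'bool'>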
The attached log from node3 contains, for example:
[2020-05-19 09:31:54,556][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 150, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/inventory/main.py", line 38, in main
    self.format_report(Devices())
  File "/usr/lib/python3.6/site-packages/ceph_volume/inventory/main.py", line 42, in format_report
    print(json.dumps(inventory.json_report()))
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 51, in json_report
    output.append(device.json_report())
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 202, in json_report
    output['lvs'] = [lv.report() for lv in self.lvs]
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 202, in <listcomp>
    output['lvs'] = [lv.report() for lv in self.lvs]
  File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 945, in report
    'cluster_name': self.tags['ceph.cluster_name'],
KeyError: 'ceph.cluster_name'
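The direct tags['ceph.cluster_name'] lookup assumes every LV carries the full set of ceph.* LVM tags, but an LV found on the host (such as the foreign LVs behind the "LVM detected" rejections above) may not carry that key at all, so the lookup raises KeyError and aborts the whole inventory run. Below is a minimal sketch of a tolerant lookup, with a hypothetical lv_report() helper standing in for Volume.report() in ceph_volume/api/lvm.py; this is an illustration, not the actual patch, which was tracked via the duplicate #44356:

def lv_report(tags):
    """Build a per-LV inventory entry from the LV's LVM tag dict."""
    # An LV created by another tool, or left over from a foreign
    # cluster, may carry none of the ceph.* tags. Using .get() with a
    # sentinel degrades gracefully instead of crashing the report.
    return {
        'cluster_name': tags.get('ceph.cluster_name', 'unknown'),
        'osd_id': tags.get('ceph.osd_id', 'unknown'),
    }

# Hypothetical tag dict for an LV with no ceph.* tags at all.
print(lv_report({}))
# -> {'cluster_name': 'unknown', 'osd_id': 'unknown'}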
Updated by Volker Theile almost 4 years ago
- Source changed from Development to Q/A
Updated by Sebastian Wagner almost 4 years ago
- Is duplicate of Bug #44356: ceph-volume inventory: KeyError: 'ceph.cluster_name' added
Updated by Sebastian Wagner almost 4 years ago
- Status changed from New to Duplicate
Updated by Joshua Schmid almost 4 years ago
I haven't seen this issue in a while now. It would be interesting to know whether it still exists in the latest master.