Bug #43856 (open)

ceph-volume inventory --format json fails with 'KeyError: 'ceph.cluster_name''

Added by Joshua Schmid about 4 years ago. Updated about 4 years ago.

Status: New
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source: Community (dev)
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

Admittedly it's a messy node:

admin:~ # lsblk
NAME                                                                                                  MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda                                                                                                   254:0    0  42G  0 disk 
├─vda1                                                                                                254:1    0   2M  0 part 
├─vda2                                                                                                254:2    0  20M  0 part /boot/efi
└─vda3                                                                                                254:3    0  42G  0 part /
vdb                                                                                                   254:16   0  25G  0 disk 
└─ceph--foo-osd--data--foo                                                                            253:1    0  24G  0 lvm  
vdc                                                                                                   254:32   0  25G  0 disk 
vdd                                                                                                   254:48   0  25G  0 disk 
vde                                                                                                   254:64   0  25G  0 disk 
vdf                                                                                                   254:80   0  25G  0 disk 
vdg                                                                                                   254:96   0  25G  0 disk 
└─ceph--0380f4e0--0815--4ae9--9127--0bfcb47421d2-osd--data--f29088ea--63c4--4c30--a4d1--86e6edbcdab1  253:0    0  24G  0 lvm  
vdh                                                                                                   254:112  0  25G  0 disk 
vdi                                                                                                   254:128  0  25G  0 disk 
vdj                                                                                                   254:144  0  25G  0 disk 
vdk                                                                                                   254:160  0  25G  0 disk 
└─vdk1                                                                                                254:161  0  25G  0 part 
  ├─foo-bar                                                                                           253:5    0  20G  0 lvm  
  └─foo-lvstuff                                                                                       253:6    0   3G  0 lvm  
vdl                                                                                                   254:176  0  25G  0 disk 
vdm                                                                                                   254:192  0  25G  0 disk 
vdn                                                                                                   254:208  0  25G  0 disk 
vdo                                                                                                   254:224  0  25G  0 disk 
vdp                                                                                                   254:240  0  25G  0 disk 
vdq                                                                                                   254:256  0  25G  0 disk 
└─ceph--0e6fa561--a021--4815--b90e--5793e09bacae-osd--block--902a942b--7ab6--4278--9078--61360f413294 253:4    0  24G  0 lvm  
vdr                                                                                                   254:272  0  25G  0 disk 
vds                                                                                                   254:288  0  25G  0 disk 
└─ceph--ac9c3398--9e94--4be5--98c5--5a26306f39c1-osd--block--3315307a--08d9--4d64--809a--53aaa5cf2943 253:3    0  24G  0 lvm  
vdt                                                                                                   254:304  0  25G  0 disk 
└─ceph--0ed54b3e--589c--472b--b864--0c42d514b63c-osd--block--4383d060--d500--46db--bc78--9d4930c6930a 253:2    0  24G  0 lvm  
vdu                                                                                                   254:320  0  25G  0 disk 
└─ceph--f6380ab4--c69c--4a33--876f--eb6d5a6b49bd-osd--block--a14be65f--9478--4ed9--946a--bc5f4e31ff9b 253:7    0  24G  0 lvm 

This case (the foo-bar and foo-lvstuff LVs on vdk1, which were not created by ceph-volume) should nevertheless be handled.

The commands used to trigger the issue can be found here:
https://gist.github.com/jschmid1/4bf2970e8748bcf02bd7120b3c550f46

admin:~ # ceph-volume inventory
 stderr: unable to read label for /dev/vda2: (2) No such file or directory
 stderr: unable to read label for /dev/vda3: (2) No such file or directory
 stderr: unable to read label for /dev/vda1: (2) No such file or directory
 stderr: unable to read label for /dev/vda: (2) No such file or directory
 stderr: unable to read label for /dev/vdb: (2) No such file or directory
 stderr: unable to read label for /dev/vdc: (2) No such file or directory
 stderr: unable to read label for /dev/vdd: (2) No such file or directory
 stderr: unable to read label for /dev/vde: (2) No such file or directory
 stderr: unable to read label for /dev/vdf: (2) No such file or directory
 stderr: unable to read label for /dev/vdg: (2) No such file or directory
 stderr: unable to read label for /dev/vdh: (2) No such file or directory
 stderr: unable to read label for /dev/vdi: (2) No such file or directory
 stderr: unable to read label for /dev/vdj: (2) No such file or directory
 stderr: unable to read label for /dev/vdk1: (2) No such file or directory
 stderr: unable to read label for /dev/vdk: (2) No such file or directory
 stderr: unable to read label for /dev/vdl: (2) No such file or directory
 stderr: unable to read label for /dev/vdm: (2) No such file or directory
 stderr: unable to read label for /dev/vdn: (2) No such file or directory
 stderr: unable to read label for /dev/vdo: (2) No such file or directory
 stderr: unable to read label for /dev/vdp: (2) No such file or directory
 stderr: unable to read label for /dev/vdq: (2) No such file or directory
 stderr: unable to read label for /dev/vdr: (2) No such file or directory
 stderr: unable to read label for /dev/vds: (2) No such file or directory
 stderr: unable to read label for /dev/vdt: (2) No such file or directory
 stderr: unable to read label for /dev/vdu: (2) No such file or directory

Device Path               Size         rotates available Model name
/dev/vdc                  25.00 GB     True    True      
/dev/vdd                  25.00 GB     True    True      
/dev/vde                  25.00 GB     True    True      
/dev/vdf                  25.00 GB     True    True      
/dev/vdh                  25.00 GB     True    True      
/dev/vdi                  25.00 GB     True    True      
/dev/vdj                  25.00 GB     True    True      
/dev/vdl                  25.00 GB     True    True      
/dev/vdm                  25.00 GB     True    True      
/dev/vdn                  25.00 GB     True    True      
/dev/vdo                  25.00 GB     True    True      
/dev/vdp                  25.00 GB     True    True      
/dev/vdr                  25.00 GB     True    True      
/dev/vda                  42.00 GB     True    False     
/dev/vdb                  25.00 GB     True    False     
/dev/vdg                  25.00 GB     True    False     
/dev/vdk                  25.00 GB     True    False     
/dev/vdq                  25.00 GB     True    False     
/dev/vds                  25.00 GB     True    False     
/dev/vdt                  25.00 GB     True    False     
/dev/vdu                  25.00 GB     True    False     
admin:~ # ceph-volume inventory --format=json
 stderr: unable to read label for /dev/vda2: (2) No such file or directory
 stderr: unable to read label for /dev/vda3: (2) No such file or directory
 stderr: unable to read label for /dev/vda1: (2) No such file or directory
 stderr: unable to read label for /dev/vda: (2) No such file or directory
 stderr: unable to read label for /dev/vdb: (2) No such file or directory
 stderr: unable to read label for /dev/vdc: (2) No such file or directory
 stderr: unable to read label for /dev/vdd: (2) No such file or directory
 stderr: unable to read label for /dev/vde: (2) No such file or directory
 stderr: unable to read label for /dev/vdf: (2) No such file or directory
 stderr: unable to read label for /dev/vdg: (2) No such file or directory
 stderr: unable to read label for /dev/vdh: (2) No such file or directory
 stderr: unable to read label for /dev/vdi: (2) No such file or directory
 stderr: unable to read label for /dev/vdj: (2) No such file or directory
 stderr: unable to read label for /dev/vdk1: (2) No such file or directory
 stderr: unable to read label for /dev/vdk: (2) No such file or directory
 stderr: unable to read label for /dev/vdl: (2) No such file or directory
 stderr: unable to read label for /dev/vdm: (2) No such file or directory
 stderr: unable to read label for /dev/vdn: (2) No such file or directory
 stderr: unable to read label for /dev/vdo: (2) No such file or directory
 stderr: unable to read label for /dev/vdp: (2) No such file or directory
 stderr: unable to read label for /dev/vdq: (2) No such file or directory
 stderr: unable to read label for /dev/vdr: (2) No such file or directory
 stderr: unable to read label for /dev/vds: (2) No such file or directory
 stderr: unable to read label for /dev/vdt: (2) No such file or directory
 stderr: unable to read label for /dev/vdu: (2) No such file or directory
-->  KeyError: 'ceph.cluster_name'
Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 11, in <module>
    load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 39, in __init__
    self.main(self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 150, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/inventory/main.py", line 38, in main
    self.format_report(Devices())
  File "/usr/lib/python3.6/site-packages/ceph_volume/inventory/main.py", line 42, in format_report
    print(json.dumps(inventory.json_report()))
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 51, in json_report
    output.append(device.json_report())
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 196, in json_report
    output['lvs'] = [lv.report() for lv in self.lvs]
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 196, in <listcomp>
    output['lvs'] = [lv.report() for lv in self.lvs]
  File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 945, in report
    'cluster_name': self.tags['ceph.cluster_name'],
KeyError: 'ceph.cluster_name'
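
From the traceback, the crash comes from Lv.report() in ceph_volume/api/lvm.py, which indexes self.tags['ceph.cluster_name']; LVs that were not created by ceph-volume (such as foo-bar and foo-lvstuff on vdk1) carry no ceph.* tags, so the lookup raises. Below is a minimal standalone sketch of the failing pattern and a defensive alternative, assuming LV tags are a plain dict; the helper names are hypothetical, not ceph-volume's actual API beyond the quoted line:

    # Minimal sketch of the failing pattern and a defensive alternative;
    # report_strict/report_defensive are hypothetical helper names.

    def report_strict(tags):
        # Mirrors ceph_volume/api/lvm.py:945 from the traceback -- raises
        # KeyError for any LV that carries no ceph.* tags.
        return {'cluster_name': tags['ceph.cluster_name']}

    def report_defensive(tags):
        # dict.get() falls back to None instead of raising, so an inventory
        # that walks foreign LVs can still be serialized to JSON.
        return {'cluster_name': tags.get('ceph.cluster_name')}

    ceph_lv_tags = {'ceph.cluster_name': 'ceph'}  # an OSD logical volume
    foreign_lv_tags = {}                          # e.g. foo-bar on vdk1

    assert report_defensive(ceph_lv_tags) == {'cluster_name': 'ceph'}
    assert report_defensive(foreign_lv_tags) == {'cluster_name': None}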

The same KeyError surfaces when the orchestrator invokes ceph-volume through cephadm:

jxs@zulu ~/projects/ceph/build ±drive_group_ssh⚡ » ceph orchestrator osd create ssh-dev1:foo/bar
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2020-01-28T12:33:13.301+0100 7f2eecc50700 -1 WARNING: all dangerous and experimental features are enabled.
2020-01-28T12:33:13.333+0100 7f2eecc50700 -1 WARNING: all dangerous and experimental features are enabled.
Error EINVAL: Traceback (most recent call last):
  File "/home/jxs/projects/ceph/src/pybind/mgr/mgr_module.py", line 1064, in _handle_command
    return CLICommand.COMMANDS[cmd['prefix']].call(self, cmd, inbuf)
  File "/home/jxs/projects/ceph/src/pybind/mgr/mgr_module.py", line 304, in call
    return self.func(mgr, **kwargs)
  File "/home/jxs/projects/ceph/src/pybind/mgr/orchestrator.py", line 140, in wrapper
    return func(*args, **kwargs)
  File "/home/jxs/projects/ceph/src/pybind/mgr/orchestrator_cli/module.py", line 366, in _create_osd
    orchestrator.raise_if_exception(completion)
  File "/home/jxs/projects/ceph/src/pybind/mgr/orchestrator.py", line 655, in raise_if_exception
    raise e
  File "/usr/lib64/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/usr/lib64/python3.6/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "/home/jxs/projects/ceph/src/pybind/mgr/cephadm/module.py", line 132, in do_work
    res = self._on_complete_(*args, **kwargs)
  File "/home/jxs/projects/ceph/src/pybind/mgr/cephadm/module.py", line 189, in <lambda>
    return cls(on_complete=lambda x: f(*x), value=value, name=name, **c_kwargs)
  File "/home/jxs/projects/ceph/src/pybind/mgr/cephadm/module.py", line 922, in _get_inventory
    ['--', 'inventory', '--format=json'])
  File "/home/jxs/projects/ceph/src/pybind/mgr/cephadm/module.py", line 681, in _run_cephadm
    code, '\n'.join(err)))
RuntimeError: cephadm exited with an error code: 1, stderr:INFO:cephadm:/usr/bin/podman:stderr -->  KeyError: 'ceph.cluster_name'
Traceback (most recent call last):
  File "<stdin>", line 2705, in <module>
  File "<stdin>", line 545, in _infer_fsid
  File "<stdin>", line 1981, in command_ceph_volume
  File "<stdin>", line 474, in call_throws
RuntimeError: Failed command: /usr/bin/podman run --rm --net=host --privileged -e CONTAINER_IMAGE=ceph/daemon-base:latest-master-devel -e NODE_NAME=admin -v /var/log/ceph/307fa37f-5447-4436-8266-3366ed055a60:/var/log/ceph:z -v /var/lib/ceph/307fa37f-5447-4436-8266-3366ed055a60/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm --entrypoint /usr/sbin/ceph-volume ceph/daemon-base:latest-master-devel inventory --format=json
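
For context, the outer RuntimeError is cephadm's generic wrapper: the call_throws frame in the traceback runs the podman command and raises when it exits non-zero, which is how the single KeyError inside the container surfaces as "cephadm exited with an error code: 1". A simplified sketch of that pattern, not cephadm's actual implementation:

    # Simplified sketch of the call_throws pattern visible in the traceback
    # above (not cephadm's actual code).
    import subprocess

    def call_throws(command):
        # Run the command, capturing output; raise with the full command
        # line on non-zero exit, as seen in the traceback.
        proc = subprocess.run(command, stdout=subprocess.PIPE,
                              stderr=subprocess.PIPE, universal_newlines=True)
        if proc.returncode != 0:
            raise RuntimeError('Failed command: %s' % ' '.join(command))
        return proc.stdout, proc.stderr, proc.returncode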
#1 - Updated by Sebastian Wagner about 4 years ago

  • Description updated (diff)
#2 - Updated by Sebastian Wagner about 4 years ago

  • Description updated (diff)
#3 - Updated by Jan Fajerski about 4 years ago

Which SHA is this?

Using ceph version 15.0.0-9862-g0e03207 (0e032079499cf51ac7d5f16a3dbc236b8316c94d) octopus (dev) this fails to reproduce:

node1:~ # lsblk
NAME                                                                                                MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda                                                                                                 253:0    0  42G  0 disk 
├─vda1                                                                                              253:1    0   2M  0 part 
├─vda2                                                                                              253:2    0  20M  0 part /boot/efi
└─vda3                                                                                              253:3    0  42G  0 part /
vdb                                                                                                 253:16   0   8G  0 disk 
└─vdb1                                                                                              253:17   0   8G  0 part 
  └─ceph--9183df8d--8053--4441--ad58--a88c7186b152-osd--data--01b1d95b--fc77--4c35--8fb8--d69c52983b7d
                                                                                                    254:0    0   8G  0 lvm  
vdc                                                                                                 253:32   0   8G  0 disk 
└─ceph--2da49cf2--6f76--47cd--a75f--63eef99500c5-osd--data--31fa565d--e341--4f2f--8ee0--f4dab98980a5
                                                                                                    254:1    0   8G  0 lvm  
node1:~ # ceph-volume inventory --format=json
 stderr: unable to read label for /dev/vda2: (2) No such file or directory
 stderr: unable to read label for /dev/vda3: (2) No such file or directory
 stderr: unable to read label for /dev/vda1: (2) No such file or directory
 stderr: unable to read label for /dev/vda: (2) No such file or directory
 stderr: unable to read label for /dev/vdb1: (2) No such file or directory
 stderr: unable to read label for /dev/vdb: (2) No such file or directory
 stderr: unable to read label for /dev/vdc: (2) No such file or directory
[{"path": "/dev/vda", "sys_api": {"removable": "0", "ro": "0", "vendor": "0x1af4", "model": "", "rev": "", "sas_address": "", "sas_device_handle": "", "support_discard": "0", "rotational": "1", "nr_requests": "256", "scheduler_mode": "mq-deadline", "partitions": {"vda2": {"start": "6144", "sectors": "40960", "sectorsize": 512, "size": 20971520.0, "human_readable_size": "20.00 MB", "holders": []}, "vda3": {"start": "47104", "sectors": "88033247", "sectorsize": 512, "size": 45073022464.0, "human_readable_size": "41.98 GB", "holders": []}, "vda1": {"start": "2048", "sectors": "4096", "sectorsize": 512, "size": 2097152.0, "human_readable_size": "2.00 MB", "holders": []}}, "sectors": 0, "sectorsize": "512", "size": 45097156608.0, "human_readable_size": "42.00 GB", "path": "/dev/vda", "locked": 1}, "available": false, "rejected_reasons": ["locked"], "device_id": "", "lvs": []}, {"path": "/dev/vdb", "sys_api": {"removable": "0", "ro": "0", "vendor": "0x1af4", "model": "", "rev": "", "sas_address": "", "sas_device_handle": "", "support_discard": "0", "rotational": "1", "nr_requests": "256", "scheduler_mode": "mq-deadline", "partitions": {"vdb1": {"start": "2048", "sectors": "16775135", "sectorsize": 512, "size": 8588869120.0, "human_readable_size": "8.00 GB", "holders": ["dm-0"]}}, "sectors": 0, "sectorsize": "512", "size": 8589934592.0, "human_readable_size": "8.00 GB", "path": "/dev/vdb", "locked": 1}, "available": false, "rejected_reasons": ["locked"], "device_id": "533448", "lvs": [{"name": "osd-data-01b1d95b-fc77-4c35-8fb8-d69c52983b7d", "osd_id": "2", "cluster_name": "ceph", "type": "block", "osd_fsid": "3578a4ce-a566-4f2c-b088-86970064b69f", "cluster_fsid": "e0f06dd5-19b9-40ec-b2f3-40cac3f169aa", "block_uuid": "LCVhnO-p1FM-jDW6-b08s-8fqp-6nKW-x7BWmq"}]}, {"path": "/dev/vdc", "sys_api": {"removable": "0", "ro": "0", "vendor": "0x1af4", "model": "", "rev": "", "sas_address": "", "sas_device_handle": "", "support_discard": "0", "rotational": "1", "nr_requests": "256", "scheduler_mode": "mq-deadline", "partitions": {}, "sectors": 0, "sectorsize": "512", "size": 8589934592.0, "human_readable_size": "8.00 GB", "path": "/dev/vdc", "locked": 1}, "available": false, "rejected_reasons": ["locked"], "device_id": "301443", "lvs": [{"name": "osd-data-31fa565d-e341-4f2f-8ee0-f4dab98980a5", "osd_id": "5", "cluster_name": "ceph", "type": "block", "osd_fsid": "db31c5fd-965b-4bb9-8541-44c6cc38fe67", "cluster_fsid": "e0f06dd5-19b9-40ec-b2f3-40cac3f169aa", "block_uuid": "a1VfwT-GdVc-v9cs-swKV-aPxV-AqqO-OUlrjg"}]}]
node1:~ #
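
The lvs entries above are where the missing tag would bite: every OSD LV in this report carries a cluster_name, while a foreign LV would not. A small sketch, assuming a build where the command succeeds, that parses the JSON report and prints each device's LVs with their cluster:

    # Hedged sketch: parse the JSON inventory (on a build where it exits 0)
    # and list each device's LVs with their cluster_name, if any.
    import json
    import subprocess

    proc = subprocess.run(['ceph-volume', 'inventory', '--format=json'],
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                          universal_newlines=True)
    proc.check_returncode()

    for device in json.loads(proc.stdout):
        for lv in device.get('lvs', []):
            print('%s: lv=%s cluster=%s' % (
                device['path'], lv.get('name'), lv.get('cluster_name')))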
#4 - Updated by Joshua Schmid about 4 years ago

Jan Fajerski wrote:

> Which SHA is this?
>
> Using ceph version 15.0.0-9862-g0e03207 (0e032079499cf51ac7d5f16a3dbc236b8316c94d) octopus (dev) this fails to reproduce:
>
> [...]

ceph version 15.0.0-9473-gce16d20b2b (ce16d20b2bad4b42f665b420b02bb96bed348451) octopus (dev)