Fix #20224
ceph-disk deactivate fails with --verbose, and error message is confusing
Description
I can reproduce this most of the time in a 12.0.3 cluster running in OpenStack.
While the ceph-osd@ systemd services are running, I try to "ceph-disk deactivate" the OSD's data device:
target217182143002:/home/ubuntu # ceph-disk deactivate /dev/vdc2
Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 9, in <module>
    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5664, in run
    main(sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5617, in main
    main_catch(args.func, args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5642, in main_catch
    func(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3821, in main_deactivate
    main_deactivate_locked(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3854, in main_deactivate_locked
    status_code = _check_osd_status(args.cluster, osd_id)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3776, in _check_osd_status
    out_json = json.loads(out)
  File "/usr/lib64/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python2.7/json/decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python2.7/json/decoder.py", line 382, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
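(For context: the ValueError is raised while _check_osd_status tries to JSON-parse the output of "ceph osd dump --format json". A minimal sketch of that code path, reconstructed from the traceback above; the command() wrapper and the status encoding here are approximations, not the actual source:

    import json
    import subprocess

    def command(args):
        # Hypothetical stand-in for ceph-disk's own subprocess wrapper:
        # returns the child's stdout even when it exits non-zero, and
        # discards stderr and the exit code.
        proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, _err = proc.communicate()
        return out

    def _check_osd_status(cluster, osd_id):
        out = command(['ceph', '--cluster', cluster,
                       'osd', 'dump', '--format', 'json'])
        # If the ceph CLI failed (e.g. no admin keyring on this node), out
        # is empty, so json.loads() raises the bare "No JSON object could
        # be decoded" seen above, with no hint about the real cause.
        out_json = json.loads(out)
        for osd in out_json.get('osds', []):
            if int(osd['osd']) == int(osd_id):
                # combine the OSD's up/in flags into a small status code
                return 2 * int(osd['in']) + int(osd['up'])

So whenever the ceph CLI cannot reach the cluster or authenticate, the user sees only the JSON decode failure.)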
If I give it --verbose, the error is different:
get_dm_uuid: get_dm_uuid /dev/vda uuid path is /sys/dev/block/254:0/dm/uuid
get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/254:16/dm/uuid
get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/254:32/dm/uuid
get_dm_uuid: get_dm_uuid /dev/loop0 uuid path is /sys/dev/block/7:0/dm/uuid
get_dm_uuid: get_dm_uuid /dev/loop1 uuid path is /sys/dev/block/7:1/dm/uuid
get_dm_uuid: get_dm_uuid /dev/loop2 uuid path is /sys/dev/block/7:2/dm/uuid
get_dm_uuid: get_dm_uuid /dev/loop3 uuid path is /sys/dev/block/7:3/dm/uuid
get_dm_uuid: get_dm_uuid /dev/loop4 uuid path is /sys/dev/block/7:4/dm/uuid
get_dm_uuid: get_dm_uuid /dev/loop5 uuid path is /sys/dev/block/7:5/dm/uuid
get_dm_uuid: get_dm_uuid /dev/loop6 uuid path is /sys/dev/block/7:6/dm/uuid
get_dm_uuid: get_dm_uuid /dev/loop7 uuid path is /sys/dev/block/7:7/dm/uuid
command: Running command: /sbin/blkid -o udev -p /dev/vda1
command: Running command: /sbin/blkid -o udev -p /dev/vda1
list_devices: main_list: /dev/vda1 ptype = 21686148-6449-6e6f-744e-656564454649 uuid = 5401a0ee-3024-44de-be93-aa0c732f1c7c
command: Running command: /sbin/blkid -o udev -p /dev/vda2
command: Running command: /sbin/blkid -o udev -p /dev/vda2
list_devices: main_list: /dev/vda2 ptype = c12a7328-f81f-11d2-ba4b-00a0c93ec93b uuid = 62597288-5eec-4d89-988b-2162d657eb1e
command: Running command: /sbin/blkid -o udev -p /dev/vda3
command: Running command: /sbin/blkid -o udev -p /dev/vda3
list_devices: main_list: /dev/vda3 ptype = 0fc63daf-8483-4772-8e79-3d69d8477de4 uuid = 8478656a-89a4-4a3e-aec4-547f5553f4b9
command: Running command: /sbin/blkid -o udev -p /dev/vdb1
command: Running command: /sbin/blkid -o udev -p /dev/vdb1
list_devices: main_list: /dev/vdb1 ptype = 45b0969e-9b03-4f30-b4c6-b4b80ceff106 uuid = 9440a906-cd2c-4faf-beef-1f653368ec72
command: Running command: /sbin/blkid -o udev -p /dev/vdb2
command: Running command: /sbin/blkid -o udev -p /dev/vdb2
list_devices: main_list: /dev/vdb2 ptype = 4fbd7e29-9d25-41b8-afd0-062c0ceff05d uuid = c279ee0f-03b0-451f-a0b8-a8f6428df9cf
command: Running command: /sbin/blkid -s TYPE /dev/vdb2
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
mount: Mounting /dev/vdb2 on /var/lib/ceph/tmp/mnt.XUo5br with options noatime,inode64
command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/vdb2 /var/lib/ceph/tmp/mnt.XUo5br
unmount: Unmounting /var/lib/ceph/tmp/mnt.XUo5br
command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.XUo5br
command: Running command: /sbin/blkid -o udev -p /dev/vdc1
command: Running command: /sbin/blkid -o udev -p /dev/vdc1
list_devices: main_list: /dev/vdc1 ptype = 45b0969e-9b03-4f30-b4c6-b4b80ceff106 uuid = 35dfe9cb-cdc3-494a-9c15-2d2433e386f8
command: Running command: /sbin/blkid -o udev -p /dev/vdc2
command: Running command: /sbin/blkid -o udev -p /dev/vdc2
list_devices: main_list: /dev/vdc2 ptype = 4fbd7e29-9d25-41b8-afd0-062c0ceff05d uuid = 77886c8c-2139-4db4-980b-476d65c18b5b
command: Running command: /sbin/blkid -s TYPE /dev/vdc2
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
mount: Mounting /dev/vdc2 on /var/lib/ceph/tmp/mnt.ol_sLb with options noatime,inode64
command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/vdc2 /var/lib/ceph/tmp/mnt.ol_sLb
unmount: Unmounting /var/lib/ceph/tmp/mnt.ol_sLb
command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.ol_sLb
list_devices: main_list: {'vda': ['vda1', 'vda2', 'vda3'], 'vdc': ['vdc1', 'vdc2'], 'vdb': ['vdb1', 'vdb2'], 'loop3': [], 'loop2': [], 'loop1': [], 'loop0': [], 'loop7': [], 'loop6': [], 'loop5': [], 'loop4': []}, uuid_map = {u'9440a906-cd2c-4faf-beef-1f653368ec72': '/dev/vdb1', u'c279ee0f-03b0-451f-a0b8-a8f6428df9cf': '/dev/vdb2', u'77886c8c-2139-4db4-980b-476d65c18b5b': '/dev/vdc2', u'62597288-5eec-4d89-988b-2162d657eb1e': '/dev/vda2', u'35dfe9cb-cdc3-494a-9c15-2d2433e386f8': '/dev/vdc1', u'8478656a-89a4-4a3e-aec4-547f5553f4b9': '/dev/vda3', u'5401a0ee-3024-44de-be93-aa0c732f1c7c': '/dev/vda1'}, space_map = {u'9440a906-cd2c-4faf-beef-1f653368ec72': '/dev/vdb2', u'35dfe9cb-cdc3-494a-9c15-2d2433e386f8': '/dev/vdc2'}
get_dm_uuid: get_dm_uuid /dev/loop0 uuid path is /sys/dev/block/7:0/dm/uuid
list_dev: list_dev(dev = /dev/loop0, ptype = unknown)
command: Running command: /sbin/blkid -s TYPE /dev/loop0
get_dm_uuid: get_dm_uuid /dev/loop1 uuid path is /sys/dev/block/7:1/dm/uuid
list_dev: list_dev(dev = /dev/loop1, ptype = unknown)
command: Running command: /sbin/blkid -s TYPE /dev/loop1
get_dm_uuid: get_dm_uuid /dev/loop2 uuid path is /sys/dev/block/7:2/dm/uuid
list_dev: list_dev(dev = /dev/loop2, ptype = unknown)
command: Running command: /sbin/blkid -s TYPE /dev/loop2
get_dm_uuid: get_dm_uuid /dev/loop3 uuid path is /sys/dev/block/7:3/dm/uuid
list_dev: list_dev(dev = /dev/loop3, ptype = unknown)
command: Running command: /sbin/blkid -s TYPE /dev/loop3
get_dm_uuid: get_dm_uuid /dev/loop4 uuid path is /sys/dev/block/7:4/dm/uuid
list_dev: list_dev(dev = /dev/loop4, ptype = unknown)
command: Running command: /sbin/blkid -s TYPE /dev/loop4
get_dm_uuid: get_dm_uuid /dev/loop5 uuid path is /sys/dev/block/7:5/dm/uuid
list_dev: list_dev(dev = /dev/loop5, ptype = unknown)
command: Running command: /sbin/blkid -s TYPE /dev/loop5
get_dm_uuid: get_dm_uuid /dev/loop6 uuid path is /sys/dev/block/7:6/dm/uuid
list_dev: list_dev(dev = /dev/loop6, ptype = unknown)
command: Running command: /sbin/blkid -s TYPE /dev/loop6
get_dm_uuid: get_dm_uuid /dev/loop7 uuid path is /sys/dev/block/7:7/dm/uuid
list_dev: list_dev(dev = /dev/loop7, ptype = unknown)
command: Running command: /sbin/blkid -s TYPE /dev/loop7
get_dm_uuid: get_dm_uuid /dev/vda1 uuid path is /sys/dev/block/254:1/dm/uuid
command: Running command: /sbin/blkid -o udev -p /dev/vda1
command: Running command: /sbin/blkid -o udev -p /dev/vda1
list_dev: list_dev(dev = /dev/vda1, ptype = 21686148-6449-6e6f-744e-656564454649)
command: Running command: /sbin/blkid -s TYPE /dev/vda1
get_dm_uuid: get_dm_uuid /dev/vda2 uuid path is /sys/dev/block/254:2/dm/uuid
command: Running command: /sbin/blkid -o udev -p /dev/vda2
command: Running command: /sbin/blkid -o udev -p /dev/vda2
list_dev: list_dev(dev = /dev/vda2, ptype = c12a7328-f81f-11d2-ba4b-00a0c93ec93b)
command: Running command: /sbin/blkid -s TYPE /dev/vda2
get_dm_uuid: get_dm_uuid /dev/vda3 uuid path is /sys/dev/block/254:3/dm/uuid
command: Running command: /sbin/blkid -o udev -p /dev/vda3
command: Running command: /sbin/blkid -o udev -p /dev/vda3
list_dev: list_dev(dev = /dev/vda3, ptype = 0fc63daf-8483-4772-8e79-3d69d8477de4)
command: Running command: /sbin/blkid -s TYPE /dev/vda3
get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/254:17/dm/uuid
command: Running command: /sbin/blkid -o udev -p /dev/vdb1
command: Running command: /sbin/blkid -o udev -p /dev/vdb1
list_dev: list_dev(dev = /dev/vdb1, ptype = 45b0969e-9b03-4f30-b4c6-b4b80ceff106)
get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/254:18/dm/uuid
command: Running command: /sbin/blkid -o udev -p /dev/vdb2
command: Running command: /sbin/blkid -o udev -p /dev/vdb2
list_dev: list_dev(dev = /dev/vdb2, ptype = 4fbd7e29-9d25-41b8-afd0-062c0ceff05d)
command: Running command: /sbin/blkid -s TYPE /dev/vdb2
command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
get_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/254:33/dm/uuid
command: Running command: /sbin/blkid -o udev -p /dev/vdc1
command: Running command: /sbin/blkid -o udev -p /dev/vdc1
list_dev: list_dev(dev = /dev/vdc1, ptype = 45b0969e-9b03-4f30-b4c6-b4b80ceff106)
get_dm_uuid: get_dm_uuid /dev/vdc2 uuid path is /sys/dev/block/254:34/dm/uuid
command: Running command: /sbin/blkid -o udev -p /dev/vdc2
command: Running command: /sbin/blkid -o udev -p /dev/vdc2
list_dev: list_dev(dev = /dev/vdc2, ptype = 4fbd7e29-9d25-41b8-afd0-062c0ceff05d)
command: Running command: /sbin/blkid -s TYPE /dev/vdc2
command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
list_devices: list_devices: [{'dmcrypt': {}, 'mount': u'/mnt/SES5-Internal', 'ptype': 'unknown', 'is_partition': False, 'fs_type': u'iso9660', 'path': '/dev/loop0', 'type': 'other'}, {'path': '/dev/loop1', 'type': 'other', 'dmcrypt': {}, 'ptype': 'unknown', 'is_partition': False}, {'path': '/dev/loop2', 'type': 'other', 'dmcrypt': {}, 'ptype': 'unknown', 'is_partition': False}, {'path': '/dev/loop3', 'type': 'other', 'dmcrypt': {}, 'ptype': 'unknown', 'is_partition': False}, {'path': '/dev/loop4', 'type': 'other', 'dmcrypt': {}, 'ptype': 'unknown', 'is_partition': False}, {'path': '/dev/loop5', 'type': 'other', 'dmcrypt': {}, 'ptype': 'unknown', 'is_partition': False}, {'path': '/dev/loop6', 'type': 'other', 'dmcrypt': {}, 'ptype': 'unknown', 'is_partition': False}, {'path': '/dev/loop7', 'type': 'other', 'dmcrypt': {}, 'ptype': 'unknown', 'is_partition': False}, {'path': '/dev/vda', 'partitions': [{'dmcrypt': {}, 'uuid': u'5401a0ee-3024-44de-be93-aa0c732f1c7c', 'ptype': u'21686148-6449-6e6f-744e-656564454649', 'is_partition': True, 'path': '/dev/vda1', 'type': 'other'}, {'dmcrypt': {}, 'uuid': u'62597288-5eec-4d89-988b-2162d657eb1e', 'mount': u'/boot/efi', 'ptype': u'c12a7328-f81f-11d2-ba4b-00a0c93ec93b', 'is_partition': True, 'fs_type': u'vfat', 'path': '/dev/vda2', 'type': 'other'}, {'dmcrypt': {}, 'uuid': u'8478656a-89a4-4a3e-aec4-547f5553f4b9', 'mount': u'/', 'ptype': u'0fc63daf-8483-4772-8e79-3d69d8477de4', 'is_partition': True, 'fs_type': u'xfs', 'path': '/dev/vda3', 'type': 'other'}]}, {'path': '/dev/vdb', 'partitions': [{'dmcrypt': {}, 'uuid': u'9440a906-cd2c-4faf-beef-1f653368ec72', 'ptype': u'45b0969e-9b03-4f30-b4c6-b4b80ceff106', 'is_partition': True, 'journal_for': '/dev/vdb2', 'path': '/dev/vdb1', 'type': 'journal'}, {'dmcrypt': {}, 'uuid': u'c279ee0f-03b0-451f-a0b8-a8f6428df9cf', 'mount': u'/var/lib/ceph/osd/ceph-1', 'ptype': u'4fbd7e29-9d25-41b8-afd0-062c0ceff05d', 'is_partition': True, 'cluster': 'ceph', 'state': 'active', 'fs_type': u'xfs', 'ceph_fsid': u'88cb88f4-08e5-3721-b6ab-06044ce8ce4c', 'path': '/dev/vdb2', 'type': 'data', 'whoami': u'1', 'journal_dev': '/dev/vdb1', 'journal_uuid': u'9440a906-cd2c-4faf-beef-1f653368ec72'}]}, {'path': '/dev/vdc', 'partitions': [{'dmcrypt': {}, 'uuid': u'35dfe9cb-cdc3-494a-9c15-2d2433e386f8', 'ptype': u'45b0969e-9b03-4f30-b4c6-b4b80ceff106', 'is_partition': True, 'journal_for': '/dev/vdc2', 'path': '/dev/vdc1', 'type': 'journal'}, {'dmcrypt': {}, 'uuid': u'77886c8c-2139-4db4-980b-476d65c18b5b', 'mount': u'/var/lib/ceph/osd/ceph-3', 'ptype': u'4fbd7e29-9d25-41b8-afd0-062c0ceff05d', 'is_partition': True, 'cluster': 'ceph', 'state': 'active', 'fs_type': u'xfs', 'ceph_fsid': u'88cb88f4-08e5-3721-b6ab-06044ce8ce4c', 'path': '/dev/vdc2', 'type': 'data', 'whoami': u'3', 'journal_dev': '/dev/vdc1', 'journal_uuid': u'35dfe9cb-cdc3-494a-9c15-2d2433e386f8'}]}]
Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 9, in <module>
    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5664, in run
    main(sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5615, in main
    args.func(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3821, in main_deactivate
    main_deactivate_locked(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3844, in main_deactivate_locked
    raise Error('Cannot find any match device!!')
ceph_disk.main.Error: Error: Cannot find any match device!!
If I give /dev/vdc instead of /dev/vdc2, I get the same error as with --verbose /dev/vdc2:
target217182143002:/home/ubuntu # ceph-disk deactivate /dev/sdc
ceph-disk: Error: Cannot find any match device!!
target217182143002:/home/ubuntu # ceph-disk --verbose deactivate /dev/sdc
((same output as with /dev/vdc2, above))
I cannot run "ceph osd dump --format json" on the same node (for lack of admin keyring), but the output (when run from the admin node) is: https://paste2.org/VKV61GaB
Updated by Loïc Dachary almost 7 years ago
- Tracker changed from Bug to Fix
- Subject changed from ceph-disk deactivate doesn't work to ceph-disk deactivate error message is confusing
- Status changed from New to 12
When ceph osd dump cannot be run for some reason, the error message should at least include the stderr instead of a terse "cannot parse json".
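A minimal sketch of that suggestion, assuming the subprocess call is changed to capture stderr and the exit code (the helper name and wording are hypothetical, not the actual patch):

    import json
    import subprocess

    class Error(Exception):
        pass  # stands in for ceph_disk.main.Error

    def _load_osd_dump(cluster):
        # Run "ceph osd dump --format json", but surface the real failure
        # (e.g. a missing admin keyring) instead of a bare JSON decode error.
        proc = subprocess.Popen(
            ['ceph', '--cluster', cluster, 'osd', 'dump', '--format', 'json'],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()
        if proc.returncode != 0:
            raise Error('ceph osd dump failed with status %d: %s'
                        % (proc.returncode, err.strip()))
        try:
            return json.loads(out)
        except ValueError:
            raise Error('ceph osd dump returned invalid JSON %r (stderr: %s)'
                        % (out[:200], err.strip()))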
Updated by Nathan Cutler almost 7 years ago
There's a more serious bug here, I think. When I run "ceph-disk --verbose deactivate /dev/vdc2", it gives me lots of output, ending in "ceph_disk.main.Error: Error: Cannot find any match device!!". When I issue the exact same command without --verbose, it succeeds:
ubuntu@target217182143194:~> sudo ceph-disk deactivate /dev/vdc2
Removed symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@2.service.
ubuntu@target217182143194:~>
(This is on a machine where "ceph osd dump" succeeds, so the command is otherwise expected to work.)
Updated by Nathan Cutler almost 7 years ago
- Subject changed from ceph-disk deactivate error message is confusing to ceph-disk deactivate fails with --verbose, and error message is confusing
Updated by Nathan Cutler almost 7 years ago
This error message (from another machine) is also confusing:
ceph-disk destroy /dev/vdb1 --zap
Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 9, in <module>
    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5692, in run
    main(sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5645, in main
    main_catch(args.func, args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5670, in main_catch
    func(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4029, in main_destroy
    main_destroy_locked(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4051, in main_destroy_locked
    osd_id = target_dev['whoami']
KeyError: 'whoami'
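Here /dev/vdb1 is presumably a journal partition (as it was on the machine in the description), so the dict returned by list_devices() has no 'whoami' key. The KeyError could be turned into a proper message along these lines (a sketch; the helper and wording are hypothetical, not the actual fix):

    class Error(Exception):
        pass  # stands in for ceph_disk.main.Error

    def get_osd_id(target_dev):
        # 'whoami' is only present on partitions ceph-disk recognizes as
        # prepared OSD data partitions; guard the lookup so a journal or
        # raw partition yields a clear error instead of KeyError: 'whoami'.
        try:
            return target_dev['whoami']
        except KeyError:
            raise Error('%s is not an OSD data partition and cannot be '
                        'destroyed' % target_dev.get('path', '<unknown>'))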