Bug #45094 (closed): ceph-volume inventory does not read mpath properly

Added by Sébastien Han about 4 years ago. Updated over 3 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source:
Tags:
Backport: octopus
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

It seems that the inventory command reports a device as having insufficient space.

c-v log:

[2020-04-14 10:27:44,274][ceph_volume.main][INFO  ] Running command: ceph-volume  inventory --format json /dev/mapper/3600605b000ac584025e796091b840018
[2020-04-14 10:27:44,275][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-04-14 10:27:44,721][ceph_volume.process][INFO  ] stdout /dev/sda   /dev/sda                                        disk
[2020-04-14 10:27:44,721][ceph_volume.process][INFO  ] stdout /dev/sda1  /dev/sda1                                       part
[2020-04-14 10:27:44,721][ceph_volume.process][INFO  ] stdout /dev/sda2  /dev/sda2                                       part
[2020-04-14 10:27:44,721][ceph_volume.process][INFO  ] stdout /dev/sda3  /dev/sda3                                       part
[2020-04-14 10:27:44,721][ceph_volume.process][INFO  ] stdout /dev/sdb   /dev/sdb                                        disk
[2020-04-14 10:27:44,721][ceph_volume.process][INFO  ] stdout /dev/sdb1  /dev/sdb1                                       part
[2020-04-14 10:27:44,721][ceph_volume.process][INFO  ] stdout /dev/dm-0  /dev/mapper/3600605b000ac584025e796031b286cb8   mpath
[2020-04-14 10:27:44,721][ceph_volume.process][INFO  ] stdout /dev/dm-1  /dev/mapper/3600605b000ac584025e796031b286cb8p1 part
[2020-04-14 10:27:44,722][ceph_volume.process][INFO  ] stdout /dev/dm-2  /dev/mapper/docker_data                         crypt
[2020-04-14 10:27:44,722][ceph_volume.process][INFO  ] stdout /dev/sdc   /dev/sdc                                        disk
[2020-04-14 10:27:44,722][ceph_volume.process][INFO  ] stdout /dev/dm-3  /dev/mapper/3600605b000ac584025e796091b840018   mpath
[2020-04-14 10:27:44,722][ceph_volume.process][INFO  ] stdout /dev/loop0 /dev/loop0                                      loop
[2020-04-14 10:27:44,727][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-04-14 10:27:45,183][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
[2020-04-14 10:27:45,183][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
[2020-04-14 10:27:45,183][ceph_volume.process][INFO  ] stderr WARNING: Failed to connect to lvmetad. Falling back to device scanning.
[2020-04-14 10:27:45,183][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/mapper/3600605b000ac584025e796091b840018
[2020-04-14 10:27:45,620][ceph_volume.process][INFO  ] stdout NAME="3600605b000ac584025e796091b840018" KNAME="dm-3" MAJ:MIN="253:3" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="3.5T" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED="deadline" TYPE="mpath" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-04-14 10:27:45,620][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -p /dev/mapper/3600605b000ac584025e796091b840018
[2020-04-14 10:27:46,066][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/mapper/3600605b000ac584025e796091b840018
[2020-04-14 10:27:46,550][ceph_volume.process][INFO  ] stderr unable to read label for /dev/mapper/3600605b000ac584025e796091b840018: (2) No such file or directory
[2020-04-14 10:27:46,550][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/mapper/3600605b000ac584025e796091b840018
[2020-04-14 10:27:47,023][ceph_volume.process][INFO  ] stderr unable to read label for /dev/mapper/3600605b000ac584025e796091b840018: (2) No such file or directory
[2020-04-14 10:27:47,023][ceph_volume.process][INFO  ] Running command: /usr/sbin/udevadm info --query=property /dev/mapper/3600605b000ac584025e796091b840018
[2020-04-14 10:27:47,453][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-id/dm-name-3600605b000ac584025e796091b840018 /dev/disk/by-id/dm-uuid-mpath-3600605b000ac584025e796091b840018 /dev/mapper/3600605b000ac584025e796091b840018
[2020-04-14 10:27:47,453][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/dm-3
[2020-04-14 10:27:47,453][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/virtual/block/dm-3
[2020-04-14 10:27:47,453][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2020-04-14 10:27:47,453][ceph_volume.process][INFO  ] stdout DM_NAME=3600605b000ac584025e796091b840018
[2020-04-14 10:27:47,453][ceph_volume.process][INFO  ] stdout DM_SUSPENDED=0
[2020-04-14 10:27:47,453][ceph_volume.process][INFO  ] stdout DM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1
[2020-04-14 10:27:47,453][ceph_volume.process][INFO  ] stdout DM_UDEV_PRIMARY_SOURCE_FLAG=1
[2020-04-14 10:27:47,453][ceph_volume.process][INFO  ] stdout DM_UDEV_RULES_VSN=2
[2020-04-14 10:27:47,453][ceph_volume.process][INFO  ] stdout DM_UUID=mpath-3600605b000ac584025e796091b840018
[2020-04-14 10:27:47,453][ceph_volume.process][INFO  ] stdout MAJOR=253
[2020-04-14 10:27:47,453][ceph_volume.process][INFO  ] stdout MINOR=3
[2020-04-14 10:27:47,453][ceph_volume.process][INFO  ] stdout MPATH_SBIN_PATH=/sbin
[2020-04-14 10:27:47,454][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2020-04-14 10:27:47,454][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2020-04-14 10:27:47,454][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=151353765118
[2020-04-14 10:27:47,643][ceph_volume.main][INFO  ] Running command: ceph-volume  lvm list  --format json
[2020-04-14 10:27:47,644][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -S  -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-04-14 10:27:48,130][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
[2020-04-14 10:27:48,131][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
[2020-04-14 10:27:48,131][ceph_volume.process][INFO  ] stderr WARNING: Failed to connect to lvmetad. Falling back to device scanning.
[2020-04-14 10:27:48,213][ceph_volume.main][INFO  ] Running command: ceph-volume  raw list /mnt/set1-data-0-cg98z --format json
[2020-04-14 10:27:48,214][ceph_volume.devices.raw.list][DEBUG ] Examining /mnt/set1-data-0-cg98z
[2020-04-14 10:27:48,214][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /mnt/set1-data-0-cg98z
[2020-04-14 10:27:48,705][ceph_volume.process][INFO  ] stderr unable to read label for /mnt/set1-data-0-cg98z: (2) No such file or directory
[2020-04-14 10:27:48,705][ceph_volume.devices.raw.list][DEBUG ] No label on /mnt/set1-data-0-cg98z
[2020-04-14 10:57:27,211][ceph_volume.main][INFO  ] Running command: ceph-volume  inventory --format json /dev/mapper/3600605b000ac584025e796091b840018
[2020-04-14 10:57:27,212][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-04-14 10:57:27,674][ceph_volume.process][INFO  ] stdout /dev/sda   /dev/sda                                        disk
[2020-04-14 10:57:27,674][ceph_volume.process][INFO  ] stdout /dev/sda1  /dev/sda1                                       part
[2020-04-14 10:57:27,674][ceph_volume.process][INFO  ] stdout /dev/sda2  /dev/sda2                                       part
[2020-04-14 10:57:27,674][ceph_volume.process][INFO  ] stdout /dev/sda3  /dev/sda3                                       part
[2020-04-14 10:57:27,674][ceph_volume.process][INFO  ] stdout /dev/sdb   /dev/sdb                                        disk
[2020-04-14 10:57:27,674][ceph_volume.process][INFO  ] stdout /dev/sdb1  /dev/sdb1                                       part
[2020-04-14 10:57:27,675][ceph_volume.process][INFO  ] stdout /dev/dm-0  /dev/mapper/3600605b000ac584025e796031b286cb8   mpath
[2020-04-14 10:57:27,675][ceph_volume.process][INFO  ] stdout /dev/dm-1  /dev/mapper/3600605b000ac584025e796031b286cb8p1 part
[2020-04-14 10:57:27,675][ceph_volume.process][INFO  ] stdout /dev/dm-2  /dev/mapper/docker_data                         crypt
[2020-04-14 10:57:27,675][ceph_volume.process][INFO  ] stdout /dev/sdc   /dev/sdc                                        disk
[2020-04-14 10:57:27,675][ceph_volume.process][INFO  ] stdout /dev/dm-3  /dev/mapper/3600605b000ac584025e796091b840018   mpath
[2020-04-14 10:57:27,675][ceph_volume.process][INFO  ] stdout /dev/loop0 /dev/loop0                                      loop
[2020-04-14 10:57:27,679][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-04-14 10:57:28,139][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
[2020-04-14 10:57:28,139][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
[2020-04-14 10:57:28,139][ceph_volume.process][INFO  ] stderr WARNING: Failed to connect to lvmetad. Falling back to device scanning.
[2020-04-14 10:57:28,140][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/mapper/3600605b000ac584025e796091b840018
[2020-04-14 10:57:28,573][ceph_volume.process][INFO  ] stdout NAME="3600605b000ac584025e796091b840018" KNAME="dm-3" MAJ:MIN="253:3" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="3.5T" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED="deadline" TYPE="mpath" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-04-14 10:57:28,573][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -p /dev/mapper/3600605b000ac584025e796091b840018
[2020-04-14 10:57:29,025][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/mapper/3600605b000ac584025e796091b840018
[2020-04-14 10:57:29,511][ceph_volume.process][INFO  ] stderr unable to read label for /dev/mapper/3600605b000ac584025e796091b840018: (2) No such file or directory
[2020-04-14 10:57:29,511][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/mapper/3600605b000ac584025e796091b840018
[2020-04-14 10:57:29,983][ceph_volume.process][INFO  ] stderr unable to read label for /dev/mapper/3600605b000ac584025e796091b840018: (2) No such file or directory
[2020-04-14 10:57:29,984][ceph_volume.process][INFO  ] Running command: /usr/sbin/udevadm info --query=property /dev/mapper/3600605b000ac584025e796091b840018
[2020-04-14 10:57:30,417][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-id/dm-name-3600605b000ac584025e796091b840018 /dev/disk/by-id/dm-uuid-mpath-3600605b000ac584025e796091b840018 /dev/mapper/3600605b000ac584025e796091b840018
[2020-04-14 10:57:30,417][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/dm-3
[2020-04-14 10:57:30,417][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/virtual/block/dm-3
[2020-04-14 10:57:30,417][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2020-04-14 10:57:30,417][ceph_volume.process][INFO  ] stdout DM_NAME=3600605b000ac584025e796091b840018
[2020-04-14 10:57:30,417][ceph_volume.process][INFO  ] stdout DM_SUSPENDED=0
[2020-04-14 10:57:30,417][ceph_volume.process][INFO  ] stdout DM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1
[2020-04-14 10:57:30,417][ceph_volume.process][INFO  ] stdout DM_UDEV_PRIMARY_SOURCE_FLAG=1
[2020-04-14 10:57:30,417][ceph_volume.process][INFO  ] stdout DM_UDEV_RULES_VSN=2
[2020-04-14 10:57:30,417][ceph_volume.process][INFO  ] stdout DM_UUID=mpath-3600605b000ac584025e796091b840018
[2020-04-14 10:57:30,417][ceph_volume.process][INFO  ] stdout MAJOR=253
[2020-04-14 10:57:30,417][ceph_volume.process][INFO  ] stdout MINOR=3
[2020-04-14 10:57:30,417][ceph_volume.process][INFO  ] stdout MPATH_SBIN_PATH=/sbin
[2020-04-14 10:57:30,418][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2020-04-14 10:57:30,418][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2020-04-14 10:57:30,418][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=151353765118
[2020-04-14 10:57:30,596][ceph_volume.main][INFO  ] Running command: ceph-volume  lvm list  --format json
[2020-04-14 10:57:30,597][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -S  -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-04-14 10:57:31,041][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
[2020-04-14 10:57:31,041][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
[2020-04-14 10:57:31,041][ceph_volume.process][INFO  ] stderr WARNING: Failed to connect to lvmetad. Falling back to device scanning.
[2020-04-14 10:57:31,121][ceph_volume.main][INFO  ] Running command: ceph-volume  raw list /mnt/set1-data-0-cg98z --format json
[2020-04-14 10:57:31,122][ceph_volume.devices.raw.list][DEBUG ] Examining /mnt/set1-data-0-cg98z
[2020-04-14 10:57:31,122][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /mnt/set1-data-0-cg98z
[2020-04-14 10:57:31,596][ceph_volume.process][INFO  ] stderr unable to read label for /mnt/set1-data-0-cg98z: (2) No such file or directory
[2020-04-14 10:57:31,597][ceph_volume.devices.raw.list][DEBUG ] No label on /mnt/set1-data-0-cg98z


Related issues: 1 (0 open, 1 closed)

Copied to ceph-volume - Backport #47237: octopus: ceph-volume inventory does not read mpath properly (Resolved, assignee Jan Fajerski)
#1 - Updated by Jan Fajerski about 4 years ago

Any chance you could add a bit more detail? What do you expect, and what do you get?

#2 - Updated by Sébastien Han about 4 years ago

I'm reporting this on behalf of someone, so I'll do my best.
The device shown in the log is big, but when running c-v inventory against it, c-v rejects it with "insufficient space".

I expect the device to be reported as available so that it can be used to become an OSD.

#3 - Updated by Sébastien Han almost 4 years ago

Jan, do you have enough information? Thanks

#4 - Updated by Jan Fajerski almost 4 years ago

No, sorry, not really enough info. It would be helpful to see what c-v actually returns; this seems to be only log messages. In particular, the output of the following command would be interesting:
ceph-volume inventory --format json /dev/mapper/3600605b000ac584025e796091b840018

I'm also still not quite clear on what is being attempted. I presume this is supposed to deploy an OSD with the raw subcommand? I'm not sure why there are a bunch of calls to lvm list, though, while no prepare or create call is actually attempted.

Generally speaking, c-v doesn't explicitly support mpath devices, though things work well when one passes a device that is backed by an mpath device. Using the lvm subcommand in this scenario requires some additional LVM config to avoid errors about duplicate PVs: one should adjust LVM's device filter so that it only sees one path, as sketched below.
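
As a minimal, hypothetical sketch of such a filter (the exact patterns depend on your device naming and which disks carry non-mpath LVs, so adjust before use), the devices section of /etc/lvm/lvm.conf could accept only the device-mapper nodes and reject the raw /dev/sdX paths:

devices {
    # Hypothetical example: accept device-mapper (multipath) nodes and reject the
    # underlying SCSI paths, so LVM does not report duplicate PVs on each path.
    filter = [ "a|^/dev/mapper/.*|", "r|^/dev/sd.*|" ]
}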

#5 - Updated by Christopher Blum almost 4 years ago

Note: this bug was opened as a result of the ocs-operator issue discussion here: https://github.com/openshift/ocs-operator/issues/452

There is more information on the root cause on GitHub, if that helps.

#6 - Updated by Christopher Blum almost 4 years ago

More info for Jan:

sh-4.2# lsblk
NAME                                    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                                       8:0    0  1.8T  0 disk  
|-sda1                                    8:1    0  256M  0 part  
|-sda2                                    8:2    0    1G  0 part  
`-sda3                                    8:3    0  1.8T  0 part  /var/lib/ceph/crash
sdb                                       8:16   0  1.8T  0 disk  
|-sdb1                                    8:17   0  1.8T  0 part  
`-3600605b000ac584025e796031b286cb8     253:0    0  1.8T  0 mpath 
  `-3600605b000ac584025e796031b286cb8p1 253:1    0  1.8T  0 part  
    `-docker_data                       253:2    0  1.8T  0 crypt /run/secrets
sdc                                       8:32   0  3.5T  0 disk  
`-3600605b000ac584025e796091b840018     253:3    0  3.5T  0 mpath 
loop0    
sh-4.2# ceph-volume inventory --format json /dev/mapper/3600605b000ac584025e796091b840018
{"available": false, "lvs": [], "rejected_reasons": ["Insufficient space (<5GB)"], "sys_api": {}, "path": "/dev/mapper/3600605b000ac584025e796091b840018", "device_id": ""}

#7 - Updated by Jan Fajerski almost 4 years ago

As mentioned in the earlier comment, ceph-volume will ignore devices of type 'mpath' for various reasons. The inventory subcommand only considers devices that lsblk lists as 'disk' devices.
Can you not use /dev/sdc instead of /dev/mapper/3600605b000ac584025e796091b840018 in the above example?
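
As a quick way to see which devices inventory would even consider (a hedged illustration, not ceph-volume's exact code path), you can filter the same lsblk listing that appears in the log for entries whose TYPE is "disk":

# Only TYPE == "disk" entries are candidates; dm-* mpath/crypt nodes are skipped.
lsblk -plno KNAME,NAME,TYPE | awk '$3 == "disk"'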

#9 - Updated by Jan Fajerski almost 4 years ago

Jan Fajerski wrote:

> See also https://docs.ceph.com/docs/master/ceph-volume/lvm/prepare/#multipath-support

Hmm, I just stumbled across the LVM setting multipath_component_detection = 1.

With this, I think the doc statement above becomes obsolete... I'll think about how we can support mpath devices explicitly.
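
For reference, a minimal sketch of where that setting lives (devices section of /etc/lvm/lvm.conf; shown only as an illustration of the option mentioned above, not a tested configuration):

devices {
    # Let LVM detect multipath components itself and ignore the individual
    # /dev/sdX paths, instead of maintaining a manual filter.
    multipath_component_detection = 1
}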

#10 - Updated by Jan Fajerski almost 4 years ago

OK, mpath support looks promising.

As for the reported problem: I cannot reproduce this in a comparable setup. It seems like some needed paths may not be mapped into the container, or something similar? ceph-volume can't retrieve the size of the disk, hence the rejected_reason.

As an example, here is the output for an mpath device queried from a container (with a quick test patch to support mpath devices):

# ceph-volume inventory /dev/mapper/360080e50002e53de000016405dea2ba4

====== Device report /dev/mapper/360080e50002e53de000016405dea2ba4 ======

     path                      /dev/mapper/360080e50002e53de000016405dea2ba4
     available                 True
     rejected reasons          
     device id                 
     removable                 0
     ro                        0
     vendor                    
     model                     
     sas address               
     rotational                1
     scheduler mode            mq-deadline
     human readable size       90.00 GB
#
# ceph-volume inventory --format json /dev/mapper/360080e50002e53de000016405dea2ba4
{"path": "/dev/mapper/360080e50002e53de000016405dea2ba4", "sys_api": {"removable": "0", "ro": "0", "vendor": "", "model": "", "rev": "", "sas_address": "", "sas_device_handle": "", "support_discard": "0", "rotational": "1", "nr_requests": "256", "scheduler_mode": "mq-deadline", "partitions": {}, "sectors": 0, "sectorsize": "512", "size": 96636764160.0, "human_readable_size": "90.00 GB", "path": "/dev/mapper/360080e50002e53de000016405dea2ba4", "locked": 0}, "available": true, "rejected_reasons": [], "device_id": "", "lvs": []}

#11 - Updated by Jan Fajerski over 3 years ago

  • Status changed from New to Pending Backport
  • Backport set to octopus
  • Pull request ID set to 36241

#12 - Updated by Jan Fajerski over 3 years ago

  • Copied to Backport #47237: octopus: ceph-volume inventory does not read mpath properly added

#13 - Updated by Nathan Cutler over 3 years ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
