Bug #43585

ceph-volume inventory marks partitions with mounted fs as available

Added by Sébastien Han about 4 years ago. Updated about 4 years ago.

Status: New
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

[root@c651e7c678af /]# lsblk /dev/vdc1  --bytes --nodeps --pairs --output SIZE,ROTA,RO,TYPE,PKNAME,NAME
SIZE="32210157568" ROTA="1" RO="0" TYPE="part" PKNAME="vdc" NAME="vdc1" 
[root@c651e7c678af /]# ceph-volume inventory /dev/vdc1

====== Device report /dev/vdc1 ======

     available                 True
     rejected reasons          
     path                      /dev/vdc1
     device id                 
[root@c651e7c678af /]# mount /dev/vdc1 /mnt/
[root@c651e7c678af /]# ceph-volume inventory /dev/vdc1

====== Device report /dev/vdc1 ======

     available                 True
     rejected reasons          
     path                      /dev/vdc1
     device id                 
[root@c651e7c678af /]# mout 
bash: mout: command not found
[root@c651e7c678af /]# mount
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/3HDETLO6LXVDZ2MMB3P736MIMP:/var/lib/docker/overlay2/l/X5IGTUD62CM5R324VV3LC2OVZK:/var/lib/docker/overlay2/l/XJHQCJF6N6LEK25PQS3HX3VDHP:/var/lib/docker/overlay2/l/LP3MQ2GOLYXTCL2GEPJKE56VZR:/var/lib/docker/overlay2/l/7VPWQZUUEC7MSTRFN6PZYIQW4M:/var/lib/docker/overlay2/l/JRR4IMCBADNSE5TFO4N6HB4DDE:/var/lib/docker/overlay2/l/QDJ6QENRZJLGLAW2GMAGF72MHK:/var/lib/docker/overlay2/l/UU5K4ZGRRPGIZZM6INVZAAMZ6X,upperdir=/var/lib/docker/overlay2/45389c0ceb23811bbd10410b9c27cc0ed9ea002f3311161de76f57734f22d0e2/diff,workdir=/var/lib/docker/overlay2/45389c0ceb23811bbd10410b9c27cc0ed9ea002f3311161de76f57734f22d0e2/work)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,relatime,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
devtmpfs on /dev type devtmpfs (rw,relatime,size=7576588k,nr_inodes=1894147,mode=755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
/dev/vda1 on /etc/resolv.conf type ext4 (rw,relatime,data=ordered)
/dev/vda1 on /etc/hostname type ext4 (rw,relatime,data=ordered)
/dev/vda1 on /etc/hosts type ext4 (rw,relatime,data=ordered)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
devpts on /dev/console type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
/dev/vdc1 on /mnt type ext4 (rw,relatime,data=ordered)

We should do two things:

  • detect a filesystem
  • detect a mounted filesystem

It's interesting that the O_EXCL check doesn't fail here.
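For reference, here is a minimal sketch of the kind of O_EXCL probe being discussed (illustrative only, not the actual ceph-volume code): opening a block device with O_EXCL is expected to fail with EBUSY when the kernel considers the device in use, e.g. mounted.

import errno
import os

def device_in_use(path):
    """Return True if opening the block device with O_EXCL fails with EBUSY,
    which the kernel reports when the device is mounted or otherwise claimed."""
    try:
        fd = os.open(path, os.O_RDONLY | os.O_EXCL)
    except OSError as e:
        if e.errno == errno.EBUSY:
            return True
        raise
    os.close(fd)
    return False

# With /dev/vdc1 mounted on /mnt as in the session above, this would be
# expected to return True.
print(device_in_use('/dev/vdc1'))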

History

#1 Updated by Jan Fajerski about 4 years ago

Indeed, it's weird that O_EXCL doesn't work here; that test seems to be less and less useful.

One question: Why should we test for a mounted file system? I'd argue that any file system should immediately render a disk/partition unavailable, mounted or not.

#2 Updated by Sébastien Han about 4 years ago

Yes, checking for an fs signature is enough; we don't need to check whether or not it's mounted.
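For illustration, a minimal sketch of such a signature check based on lsblk's FSTYPE column (the function names here are hypothetical, not the ceph-volume API):

import subprocess

def has_fs_signature(path):
    """Return True if lsblk reports a filesystem type on the device."""
    out = subprocess.check_output(
        ['lsblk', '--nodeps', '--noheadings', '--output', 'FSTYPE', path])
    return bool(out.decode().strip())

def is_available(path):
    # Any filesystem signature rejects the device, mounted or not.
    return not has_fs_signature(path)

In the debug log below, lsblk already reports FSTYPE="ext4" and blkid reports USAGE="filesystem" for /dev/vdc1, so either output would be enough to reject the partition.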

#3 Updated by Jan Fajerski about 4 years ago

Mind attaching a debug log of ceph-volume inventory /dev/vdc1?

#4 Updated by Sébastien Han about 4 years ago

[root@72ebacbb7b9e /]# cat /var/log/ceph 
[2020-01-13 16:34:51,613][ceph_volume.main][INFO  ] Running command: ceph-volume  inventory /dev/vdc1
[2020-01-13 16:34:51,613][ceph_volume.main][ERROR ] ignoring inability to load ceph.conf
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 141, in main
    conf.ceph = configuration.load(conf.path)
  File "/usr/lib/python2.7/site-packages/ceph_volume/configuration.py", line 51, in load
    raise exceptions.ConfigurationError(abspath=abspath)
ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf
[2020-01-13 16:34:51,616][ceph_volume.process][INFO  ] Running command: /usr/sbin/dmsetup splitname --noheadings --separator=';' --nameprefixes /dev/dm-0
[2020-01-13 16:34:51,863][ceph_volume.process][INFO  ] stdout DM_VG_NAME='/dev/dm'';'DM_LV_NAME='0'';'DM_LV_LAYER=''
[2020-01-13 16:34:51,863][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-01-13 16:34:52,130][ceph_volume.process][INFO  ] stdout ceph.block_device=/dev/ceph-cec981b8-2eca-45cd-bf91-a4472779f2a9/osd-data-428984b7-f94d-40cd-9cb7-1458e1613eab,ceph.block_uuid=1a9uCz-7x5r-ZTm3-zAcu-5Cda-Gc8w-4wZ8TQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=0a19bbf6-037a-426e-a2f4-ef1e875a09ab,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=fe88c54c-d22b-475a-a0d1-3791959e85fc,ceph.osd_id=0,ceph.type=block,ceph.vdo=0";"/dev/ceph-cec981b8-2eca-45cd-bf91-a4472779f2a9/osd-data-428984b7-f94d-40cd-9cb7-1458e1613eab";"osd-data-428984b7-f94d-40cd-9cb7-1458e1613eab";"ceph-cec981b8-2eca-45cd-bf91-a4472779f2a9";"1a9uCz-7x5r-ZTm3-zAcu-5Cda-Gc8w-4wZ8TQ";"29.00g
[2020-01-13 16:34:52,130][ceph_volume.process][INFO  ] stderr WARNING: Failed to connect to lvmetad. Falling back to device scanning.
[2020-01-13 16:34:52,130][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-01-13 16:34:52,397][ceph_volume.process][INFO  ] stdout ceph.block_device=/dev/ceph-cec981b8-2eca-45cd-bf91-a4472779f2a9/osd-data-428984b7-f94d-40cd-9cb7-1458e1613eab,ceph.block_uuid=1a9uCz-7x5r-ZTm3-zAcu-5Cda-Gc8w-4wZ8TQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=0a19bbf6-037a-426e-a2f4-ef1e875a09ab,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=fe88c54c-d22b-475a-a0d1-3791959e85fc,ceph.osd_id=0,ceph.type=block,ceph.vdo=0";"/dev/ceph-cec981b8-2eca-45cd-bf91-a4472779f2a9/osd-data-428984b7-f94d-40cd-9cb7-1458e1613eab";"osd-data-428984b7-f94d-40cd-9cb7-1458e1613eab";"ceph-cec981b8-2eca-45cd-bf91-a4472779f2a9";"1a9uCz-7x5r-ZTm3-zAcu-5Cda-Gc8w-4wZ8TQ";"29.00g
[2020-01-13 16:34:52,397][ceph_volume.process][INFO  ] stderr WARNING: Failed to connect to lvmetad. Falling back to device scanning.
[2020-01-13 16:34:52,397][ceph_volume.process][INFO  ] Running command: /bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdc1
[2020-01-13 16:34:52,645][ceph_volume.process][INFO  ] stdout NAME="vdc1" KNAME="vdc1" MAJ:MIN="253:33" FSTYPE="ext4" MOUNTPOINT="" LABEL="" UUID="bd55694f-720c-483e-aab3-408a1ed6dd7b" RO="0" RM="0" MODEL="" SIZE="30G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="vdc" PARTLABEL="test" 
[2020-01-13 16:34:52,645][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -p /dev/vdc1
[2020-01-13 16:34:52,896][ceph_volume.process][INFO  ] stdout /dev/vdc1: UUID="bd55694f-720c-483e-aab3-408a1ed6dd7b" VERSION="1.0" TYPE="ext4" USAGE="filesystem" PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="test" PART_ENTRY_UUID="3daedee0-31e3-4b79-a1ba-e25434d269ed" PART_ENTRY_TYPE="ebd0a0a2-b9e5-4433-87c0-68b6b72699c7" PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="2048" PART_ENTRY_SIZE="62910464" PART_ENTRY_DISK="253:32" 
[2020-01-13 16:34:52,896][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --no-heading --readonly --separator=";" -o pv_name,pv_tags,pv_uuid,vg_name,lv_uuid
[2020-01-13 16:34:53,171][ceph_volume.process][INFO  ] stdout /dev/vdb1";"";"X4udVR-P6qY-on4S-XoRe-CdRe-JmBs-u7ZtrF";"ceph-cec981b8-2eca-45cd-bf91-a4472779f2a9";"1a9uCz-7x5r-ZTm3-zAcu-5Cda-Gc8w-4wZ8TQ
[2020-01-13 16:34:53,171][ceph_volume.process][INFO  ] stderr WARNING: Failed to connect to lvmetad. Falling back to device scanning.
[2020-01-13 16:34:53,171][ceph_volume.process][INFO  ] stderr Operation prohibited while global/metadata_read_only is set.
[2020-01-13 16:34:53,171][ceph_volume.process][INFO  ] stderr Recovery of standalone physical volumes failed.
[2020-01-13 16:34:53,171][ceph_volume.process][INFO  ] stderr Cannot process standalone physical volumes
[2020-01-13 16:34:53,171][ceph_volume.process][INFO  ] stderr Operation prohibited while global/metadata_read_only is set.
[2020-01-13 16:34:53,171][ceph_volume.process][INFO  ] stderr Recovery of standalone physical volumes failed.
[2020-01-13 16:34:53,171][ceph_volume.process][INFO  ] stderr Cannot process standalone physical volumes
[2020-01-13 16:34:53,171][ceph_volume.process][INFO  ] stderr Operation prohibited while global/metadata_read_only is set.
[2020-01-13 16:34:53,171][ceph_volume.process][INFO  ] stderr Recovery of standalone physical volumes failed.
[2020-01-13 16:34:53,171][ceph_volume.process][INFO  ] stderr Cannot process standalone physical volumes
[2020-01-13 16:34:53,171][ceph_volume.process][INFO  ] Running command: /bin/udevadm info --query=property /dev/vdc1
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-partlabel/test /dev/disk/by-partuuid/3daedee0-31e3-4b79-a1ba-e25434d269ed /dev/disk/by-path/pci-0000:00:09.0-part1 /dev/disk/by-path/virtio-pci-0000:00:09.0-part1 /dev/disk/by-uuid/bd55694f-720c-483e-aab3-408a1ed6dd7b
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/vdc1
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:00/0000:00:09.0/virtio6/block/vdc/vdc1
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout DEVTYPE=partition
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_FS_TYPE=ext4
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_FS_USAGE=filesystem
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_FS_UUID=bd55694f-720c-483e-aab3-408a1ed6dd7b
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_FS_UUID_ENC=bd55694f-720c-483e-aab3-408a1ed6dd7b
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_FS_VERSION=1.0
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_DISK=253:32
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_NAME=test
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_NUMBER=1
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_OFFSET=2048
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_SCHEME=gpt
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_SIZE=62910464
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_TYPE=ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_UUID=3daedee0-31e3-4b79-a1ba-e25434d269ed
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_PART_TABLE_TYPE=gpt
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_PART_TABLE_UUID=6fcbcf57-70dc-4cec-a744-61256e04d89a
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:09.0
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_00_09_0
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout MAJOR=253
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout MINOR=33
[2020-01-13 16:34:53,448][ceph_volume.process][INFO  ] stdout PARTN=1
[2020-01-13 16:34:53,449][ceph_volume.process][INFO  ] stdout PARTNAME=test
[2020-01-13 16:34:53,449][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2020-01-13 16:34:53,449][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2020-01-13 16:34:53,449][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=12154048385
