Bug #45604 » node3.log

Volker Theile, 05/19/2020 09:37 AM

 
[2020-05-19 09:31:53,005][ceph_volume.main][INFO ] Running command: ceph-volume inventory --format=json
[2020-05-19 09:31:53,006][ceph_volume.main][ERROR ] ignoring inability to load ceph.conf
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 142, in main
    conf.ceph = configuration.load(conf.path)
  File "/usr/lib/python3.6/site-packages/ceph_volume/configuration.py", line 51, in load
    raise exceptions.ConfigurationError(abspath=abspath)
ceph_volume.exceptions.ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf
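
[annotation] The stack above is benign: ceph-volume inventory logs the missing /etc/ceph/ceph.conf and carries on, as the lsblk call on the next line shows. A minimal sketch of that tolerant load, using only the configuration/exceptions helpers named in the traceback (the exact except branch inside ceph_volume/main.py is an assumption):

    # Sketch of the tolerant config load seen above. Module names come from the
    # traceback; the precise handling in ceph_volume/main.py is assumed.
    import logging

    from ceph_volume import configuration, exceptions

    logger = logging.getLogger('ceph_volume.main')

    def load_ceph_conf(path='/etc/ceph/ceph.conf'):
        try:
            # configuration.load() raises ConfigurationError when unreadable
            return configuration.load(path)
        except exceptions.ConfigurationError:
            # inventory can run without a cluster config, so log and move on
            logger.exception('ignoring inability to load ceph.conf')
            return None
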
[2020-05-19 09:31:53,008][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-19 09:31:53,011][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-19 09:31:53,011][ceph_volume.process][INFO ] stdout /dev/vda /dev/vda disk
[2020-05-19 09:31:53,012][ceph_volume.process][INFO ] stdout /dev/vda1 /dev/vda1 part
[2020-05-19 09:31:53,012][ceph_volume.process][INFO ] stdout /dev/vda2 /dev/vda2 part
[2020-05-19 09:31:53,013][ceph_volume.process][INFO ] stdout /dev/vda3 /dev/vda3 part
[2020-05-19 09:31:53,013][ceph_volume.process][INFO ] stdout /dev/vdb /dev/vdb disk
[2020-05-19 09:31:53,013][ceph_volume.process][INFO ] stdout /dev/vdc /dev/vdc disk
[2020-05-19 09:31:53,017][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:31:53,063][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:31:53,064][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vda
[2020-05-19 09:31:53,069][ceph_volume.process][INFO ] stdout NAME="vda" KNAME="vda" MAJ:MIN="254:0" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="42G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-05-19 09:31:53,070][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vda
[2020-05-19 09:31:53,073][ceph_volume.process][INFO ] stdout /dev/vda: PTUUID="ee11615c-cc8c-4a75-8367-e5fc96f763ca" PTTYPE="gpt"
[2020-05-19 09:31:53,074][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vda
[2020-05-19 09:31:53,119][ceph_volume.process][INFO ] stderr Failed to find device for physical volume "/dev/vda".
[2020-05-19 09:31:53,120][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vda2
[2020-05-19 09:31:53,159][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/vda2".
[2020-05-19 09:31:53,160][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vda3
[2020-05-19 09:31:53,203][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/vda3".
[2020-05-19 09:31:53,204][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vda1
[2020-05-19 09:31:53,247][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/vda1".
[2020-05-19 09:31:53,248][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:31:53,291][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:31:53,292][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vda2
[2020-05-19 09:31:53,296][ceph_volume.process][INFO ] stdout NAME="vda2" KNAME="vda2" MAJ:MIN="254:2" FSTYPE="vfat" MOUNTPOINT="" LABEL="EFI" UUID="5B82-D32D" RO="0" RM="0" MODEL="" SIZE="20M" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="vda" PARTLABEL="p.UEFI"
[2020-05-19 09:31:53,296][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vda2
[2020-05-19 09:31:53,299][ceph_volume.process][INFO ] stdout /dev/vda2: SEC_TYPE="msdos" LABEL_FATBOOT="EFI" LABEL="EFI" UUID="5B82-D32D" VERSION="FAT16" TYPE="vfat" USAGE="filesystem" PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="p.UEFI" PART_ENTRY_UUID="0f4a4234-d1cb-4685-abd6-b22604680b5e" PART_ENTRY_TYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PART_ENTRY_NUMBER="2" PART_ENTRY_OFFSET="6144" PART_ENTRY_SIZE="40960" PART_ENTRY_DISK="254:0"
[2020-05-19 09:31:53,300][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vda2
[2020-05-19 09:31:53,343][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/vda2".
[2020-05-19 09:31:53,344][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vda2
[2020-05-19 09:31:53,363][ceph_volume.process][INFO ] stderr unable to read label for /dev/vda2: (2) No such file or directory
[2020-05-19 09:31:53,364][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vda2
[2020-05-19 09:31:53,385][ceph_volume.process][INFO ] stderr unable to read label for /dev/vda2: (2) No such file or directory
[2020-05-19 09:31:53,386][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vda2
[2020-05-19 09:31:53,390][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-partuuid/0f4a4234-d1cb-4685-abd6-b22604680b5e /dev/disk/by-path/virtio-pci-0000:00:03.0-part2 /dev/disk/by-uuid/5B82-D32D /dev/disk/by-path/pci-0000:00:03.0-part2 /dev/disk/by-label/EFI /dev/disk/by-partlabel/p.UEFI
[2020-05-19 09:31:53,390][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vda2
[2020-05-19 09:31:53,390][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:03.0/virtio0/block/vda/vda2
[2020-05-19 09:31:53,390][ceph_volume.process][INFO ] stdout DEVTYPE=partition
[2020-05-19 09:31:53,390][ceph_volume.process][INFO ] stdout ID_FS_LABEL=EFI
[2020-05-19 09:31:53,390][ceph_volume.process][INFO ] stdout ID_FS_LABEL_ENC=EFI
[2020-05-19 09:31:53,391][ceph_volume.process][INFO ] stdout ID_FS_TYPE=vfat
[2020-05-19 09:31:53,391][ceph_volume.process][INFO ] stdout ID_FS_USAGE=filesystem
[2020-05-19 09:31:53,391][ceph_volume.process][INFO ] stdout ID_FS_UUID=5B82-D32D
[2020-05-19 09:31:53,392][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=5B82-D32D
[2020-05-19 09:31:53,392][ceph_volume.process][INFO ] stdout ID_FS_VERSION=FAT16
[2020-05-19 09:31:53,392][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=254:0
[2020-05-19 09:31:53,392][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NAME=p.UEFI
[2020-05-19 09:31:53,392][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NUMBER=2
[2020-05-19 09:31:53,392][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_OFFSET=6144
[2020-05-19 09:31:53,393][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SCHEME=gpt
[2020-05-19 09:31:53,393][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SIZE=40960
[2020-05-19 09:31:53,393][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_TYPE=c12a7328-f81f-11d2-ba4b-00a0c93ec93b
[2020-05-19 09:31:53,393][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_UUID=0f4a4234-d1cb-4685-abd6-b22604680b5e
[2020-05-19 09:31:53,393][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2020-05-19 09:31:53,393][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=ee11615c-cc8c-4a75-8367-e5fc96f763ca
[2020-05-19 09:31:53,393][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:03.0
[2020-05-19 09:31:53,394][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_03_0
[2020-05-19 09:31:53,394][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:31:53,394][ceph_volume.process][INFO ] stdout MINOR=2
[2020-05-19 09:31:53,394][ceph_volume.process][INFO ] stdout PARTN=2
[2020-05-19 09:31:53,394][ceph_volume.process][INFO ] stdout PARTNAME=p.UEFI
[2020-05-19 09:31:53,394][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:31:53,395][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:31:53,395][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8573711
[2020-05-19 09:31:53,395][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:31:53,439][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:31:53,440][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vda3
[2020-05-19 09:31:53,444][ceph_volume.process][INFO ] stdout NAME="vda3" KNAME="vda3" MAJ:MIN="254:3" FSTYPE="ext4" MOUNTPOINT="/var/log/ceph" LABEL="ROOT" UUID="285f4160-0d1c-4398-bb7e-c6598cf0a77e" RO="0" RM="0" MODEL="" SIZE="42G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="vda" PARTLABEL="p.lxroot"
[2020-05-19 09:31:53,445][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vda3
[2020-05-19 09:31:53,448][ceph_volume.process][INFO ] stdout /dev/vda3: LABEL="ROOT" UUID="285f4160-0d1c-4398-bb7e-c6598cf0a77e" VERSION="1.0" TYPE="ext4" USAGE="filesystem" PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="p.lxroot" PART_ENTRY_UUID="ecd60730-1de3-4fb7-b675-8a25b56443cd" PART_ENTRY_TYPE="0fc63daf-8483-4772-8e79-3d69d8477de4" PART_ENTRY_NUMBER="3" PART_ENTRY_OFFSET="47104" PART_ENTRY_SIZE="88033247" PART_ENTRY_DISK="254:0"
[2020-05-19 09:31:53,449][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vda3
[2020-05-19 09:31:53,491][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/vda3".
[2020-05-19 09:31:53,492][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vda3
[2020-05-19 09:31:53,513][ceph_volume.process][INFO ] stderr unable to read label for /dev/vda3: (2) No such file or directory
[2020-05-19 09:31:53,514][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vda3
[2020-05-19 09:31:53,536][ceph_volume.process][INFO ] stderr unable to read label for /dev/vda3: (2) No such file or directory
[2020-05-19 09:31:53,537][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vda3
[2020-05-19 09:31:53,541][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-partuuid/ecd60730-1de3-4fb7-b675-8a25b56443cd /dev/disk/by-path/pci-0000:00:03.0-part3 /dev/disk/by-label/ROOT /dev/disk/by-partlabel/p.lxroot /dev/disk/by-uuid/285f4160-0d1c-4398-bb7e-c6598cf0a77e /dev/disk/by-path/virtio-pci-0000:00:03.0-part3
[2020-05-19 09:31:53,541][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vda3
[2020-05-19 09:31:53,541][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:03.0/virtio0/block/vda/vda3
[2020-05-19 09:31:53,541][ceph_volume.process][INFO ] stdout DEVTYPE=partition
[2020-05-19 09:31:53,541][ceph_volume.process][INFO ] stdout ID_FS_LABEL=ROOT
[2020-05-19 09:31:53,541][ceph_volume.process][INFO ] stdout ID_FS_LABEL_ENC=ROOT
[2020-05-19 09:31:53,542][ceph_volume.process][INFO ] stdout ID_FS_TYPE=ext4
[2020-05-19 09:31:53,542][ceph_volume.process][INFO ] stdout ID_FS_USAGE=filesystem
[2020-05-19 09:31:53,542][ceph_volume.process][INFO ] stdout ID_FS_UUID=285f4160-0d1c-4398-bb7e-c6598cf0a77e
[2020-05-19 09:31:53,542][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=285f4160-0d1c-4398-bb7e-c6598cf0a77e
[2020-05-19 09:31:53,542][ceph_volume.process][INFO ] stdout ID_FS_VERSION=1.0
[2020-05-19 09:31:53,542][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=254:0
[2020-05-19 09:31:53,543][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NAME=p.lxroot
[2020-05-19 09:31:53,543][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NUMBER=3
[2020-05-19 09:31:53,543][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_OFFSET=47104
[2020-05-19 09:31:53,543][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SCHEME=gpt
[2020-05-19 09:31:53,543][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SIZE=88033247
[2020-05-19 09:31:53,543][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4
[2020-05-19 09:31:53,543][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_UUID=ecd60730-1de3-4fb7-b675-8a25b56443cd
[2020-05-19 09:31:53,543][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2020-05-19 09:31:53,543][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=ee11615c-cc8c-4a75-8367-e5fc96f763ca
[2020-05-19 09:31:53,543][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:03.0
[2020-05-19 09:31:53,544][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_03_0
[2020-05-19 09:31:53,544][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:31:53,544][ceph_volume.process][INFO ] stdout MINOR=3
[2020-05-19 09:31:53,544][ceph_volume.process][INFO ] stdout PARTN=3
[2020-05-19 09:31:53,544][ceph_volume.process][INFO ] stdout PARTNAME=p.lxroot
[2020-05-19 09:31:53,544][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:31:53,544][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:31:53,544][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8537869
[2020-05-19 09:31:53,545][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:31:53,588][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:31:53,588][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vda1
[2020-05-19 09:31:53,596][ceph_volume.process][INFO ] stdout NAME="vda1" KNAME="vda1" MAJ:MIN="254:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="2M" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="vda" PARTLABEL="p.legacy"
[2020-05-19 09:31:53,597][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vda1
[2020-05-19 09:31:53,606][ceph_volume.process][INFO ] stdout /dev/vda1: PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="p.legacy" PART_ENTRY_UUID="e68b961d-e488-4b9a-adf3-706664511813" PART_ENTRY_TYPE="21686148-6449-6e6f-744e-656564454649" PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="2048" PART_ENTRY_SIZE="4096" PART_ENTRY_DISK="254:0"
[2020-05-19 09:31:53,607][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vda1
[2020-05-19 09:31:53,647][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/vda1".
[2020-05-19 09:31:53,648][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vda1
[2020-05-19 09:31:53,669][ceph_volume.process][INFO ] stderr unable to read label for /dev/vda1: (2) No such file or directory
[2020-05-19 09:31:53,670][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vda1
[2020-05-19 09:31:53,689][ceph_volume.process][INFO ] stderr unable to read label for /dev/vda1: (2) No such file or directory
[2020-05-19 09:31:53,690][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vda1
[2020-05-19 09:31:53,693][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-partlabel/p.legacy /dev/disk/by-partuuid/e68b961d-e488-4b9a-adf3-706664511813 /dev/disk/by-path/pci-0000:00:03.0-part1 /dev/disk/by-path/virtio-pci-0000:00:03.0-part1
[2020-05-19 09:31:53,694][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vda1
[2020-05-19 09:31:53,694][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:03.0/virtio0/block/vda/vda1
[2020-05-19 09:31:53,694][ceph_volume.process][INFO ] stdout DEVTYPE=partition
[2020-05-19 09:31:53,694][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=254:0
[2020-05-19 09:31:53,694][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NAME=p.legacy
[2020-05-19 09:31:53,694][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NUMBER=1
[2020-05-19 09:31:53,694][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_OFFSET=2048
[2020-05-19 09:31:53,695][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SCHEME=gpt
[2020-05-19 09:31:53,695][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SIZE=4096
[2020-05-19 09:31:53,695][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_TYPE=21686148-6449-6e6f-744e-656564454649
[2020-05-19 09:31:53,695][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_UUID=e68b961d-e488-4b9a-adf3-706664511813
[2020-05-19 09:31:53,696][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2020-05-19 09:31:53,696][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=ee11615c-cc8c-4a75-8367-e5fc96f763ca
[2020-05-19 09:31:53,696][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:03.0
[2020-05-19 09:31:53,696][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_03_0
[2020-05-19 09:31:53,696][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:31:53,696][ceph_volume.process][INFO ] stdout MINOR=1
[2020-05-19 09:31:53,697][ceph_volume.process][INFO ] stdout PARTN=1
[2020-05-19 09:31:53,697][ceph_volume.process][INFO ] stdout PARTNAME=p.legacy
[2020-05-19 09:31:53,697][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:31:53,697][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:31:53,697][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8578204
[2020-05-19 09:31:53,698][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vda
[2020-05-19 09:31:53,717][ceph_volume.process][INFO ] stderr unable to read label for /dev/vda: (2) No such file or directory
[2020-05-19 09:31:53,718][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:31:53,756][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:31:53,757][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vda2
[2020-05-19 09:31:53,760][ceph_volume.process][INFO ] stdout NAME="vda2" KNAME="vda2" MAJ:MIN="254:2" FSTYPE="vfat" MOUNTPOINT="" LABEL="EFI" UUID="5B82-D32D" RO="0" RM="0" MODEL="" SIZE="20M" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="vda" PARTLABEL="p.UEFI"
[2020-05-19 09:31:53,761][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vda2
[2020-05-19 09:31:53,764][ceph_volume.process][INFO ] stdout /dev/vda2: SEC_TYPE="msdos" LABEL_FATBOOT="EFI" LABEL="EFI" UUID="5B82-D32D" VERSION="FAT16" TYPE="vfat" USAGE="filesystem" PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="p.UEFI" PART_ENTRY_UUID="0f4a4234-d1cb-4685-abd6-b22604680b5e" PART_ENTRY_TYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PART_ENTRY_NUMBER="2" PART_ENTRY_OFFSET="6144" PART_ENTRY_SIZE="40960" PART_ENTRY_DISK="254:0"
[2020-05-19 09:31:53,765][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vda2
[2020-05-19 09:31:53,807][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/vda2".
[2020-05-19 09:31:53,808][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vda2
[2020-05-19 09:31:53,828][ceph_volume.process][INFO ] stderr unable to read label for /dev/vda2: (2) No such file or directory
[2020-05-19 09:31:53,828][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vda2
[2020-05-19 09:31:53,849][ceph_volume.process][INFO ] stderr unable to read label for /dev/vda2: (2) No such file or directory
[2020-05-19 09:31:53,850][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vda2
[2020-05-19 09:31:53,854][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-path/pci-0000:00:03.0-part2 /dev/disk/by-uuid/5B82-D32D /dev/disk/by-path/virtio-pci-0000:00:03.0-part2 /dev/disk/by-partlabel/p.UEFI /dev/disk/by-partuuid/0f4a4234-d1cb-4685-abd6-b22604680b5e /dev/disk/by-label/EFI
[2020-05-19 09:31:53,854][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vda2
[2020-05-19 09:31:53,854][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:03.0/virtio0/block/vda/vda2
[2020-05-19 09:31:53,855][ceph_volume.process][INFO ] stdout DEVTYPE=partition
[2020-05-19 09:31:53,855][ceph_volume.process][INFO ] stdout ID_FS_LABEL=EFI
[2020-05-19 09:31:53,856][ceph_volume.process][INFO ] stdout ID_FS_LABEL_ENC=EFI
[2020-05-19 09:31:53,856][ceph_volume.process][INFO ] stdout ID_FS_TYPE=vfat
[2020-05-19 09:31:53,856][ceph_volume.process][INFO ] stdout ID_FS_USAGE=filesystem
[2020-05-19 09:31:53,856][ceph_volume.process][INFO ] stdout ID_FS_UUID=5B82-D32D
[2020-05-19 09:31:53,856][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=5B82-D32D
[2020-05-19 09:31:53,856][ceph_volume.process][INFO ] stdout ID_FS_VERSION=FAT16
[2020-05-19 09:31:53,857][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=254:0
[2020-05-19 09:31:53,857][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NAME=p.UEFI
[2020-05-19 09:31:53,857][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NUMBER=2
[2020-05-19 09:31:53,857][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_OFFSET=6144
[2020-05-19 09:31:53,857][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SCHEME=gpt
[2020-05-19 09:31:53,857][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SIZE=40960
[2020-05-19 09:31:53,858][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_TYPE=c12a7328-f81f-11d2-ba4b-00a0c93ec93b
[2020-05-19 09:31:53,858][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_UUID=0f4a4234-d1cb-4685-abd6-b22604680b5e
[2020-05-19 09:31:53,858][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2020-05-19 09:31:53,858][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=ee11615c-cc8c-4a75-8367-e5fc96f763ca
[2020-05-19 09:31:53,858][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:03.0
[2020-05-19 09:31:53,858][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_03_0
[2020-05-19 09:31:53,858][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:31:53,859][ceph_volume.process][INFO ] stdout MINOR=2
[2020-05-19 09:31:53,859][ceph_volume.process][INFO ] stdout PARTN=2
[2020-05-19 09:31:53,859][ceph_volume.process][INFO ] stdout PARTNAME=p.UEFI
[2020-05-19 09:31:53,859][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:31:53,859][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:31:53,859][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8573711
[2020-05-19 09:31:53,860][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:31:53,899][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:31:53,901][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vda3
[2020-05-19 09:31:53,905][ceph_volume.process][INFO ] stdout NAME="vda3" KNAME="vda3" MAJ:MIN="254:3" FSTYPE="ext4" MOUNTPOINT="/var/log/ceph" LABEL="ROOT" UUID="285f4160-0d1c-4398-bb7e-c6598cf0a77e" RO="0" RM="0" MODEL="" SIZE="42G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="vda" PARTLABEL="p.lxroot"
[2020-05-19 09:31:53,906][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vda3
[2020-05-19 09:31:53,909][ceph_volume.process][INFO ] stdout /dev/vda3: LABEL="ROOT" UUID="285f4160-0d1c-4398-bb7e-c6598cf0a77e" VERSION="1.0" TYPE="ext4" USAGE="filesystem" PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="p.lxroot" PART_ENTRY_UUID="ecd60730-1de3-4fb7-b675-8a25b56443cd" PART_ENTRY_TYPE="0fc63daf-8483-4772-8e79-3d69d8477de4" PART_ENTRY_NUMBER="3" PART_ENTRY_OFFSET="47104" PART_ENTRY_SIZE="88033247" PART_ENTRY_DISK="254:0"
[2020-05-19 09:31:53,909][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vda3
[2020-05-19 09:31:53,951][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/vda3".
[2020-05-19 09:31:53,952][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vda3
[2020-05-19 09:31:53,971][ceph_volume.process][INFO ] stderr unable to read label for /dev/vda3: (2) No such file or directory
[2020-05-19 09:31:53,972][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vda3
[2020-05-19 09:31:53,990][ceph_volume.process][INFO ] stderr unable to read label for /dev/vda3: (2) No such file or directory
[2020-05-19 09:31:53,991][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vda3
[2020-05-19 09:31:53,994][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-partlabel/p.lxroot /dev/disk/by-label/ROOT /dev/disk/by-uuid/285f4160-0d1c-4398-bb7e-c6598cf0a77e /dev/disk/by-path/virtio-pci-0000:00:03.0-part3 /dev/disk/by-path/pci-0000:00:03.0-part3 /dev/disk/by-partuuid/ecd60730-1de3-4fb7-b675-8a25b56443cd
[2020-05-19 09:31:53,995][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vda3
[2020-05-19 09:31:53,995][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:03.0/virtio0/block/vda/vda3
[2020-05-19 09:31:53,995][ceph_volume.process][INFO ] stdout DEVTYPE=partition
[2020-05-19 09:31:53,995][ceph_volume.process][INFO ] stdout ID_FS_LABEL=ROOT
[2020-05-19 09:31:53,995][ceph_volume.process][INFO ] stdout ID_FS_LABEL_ENC=ROOT
[2020-05-19 09:31:53,995][ceph_volume.process][INFO ] stdout ID_FS_TYPE=ext4
[2020-05-19 09:31:53,995][ceph_volume.process][INFO ] stdout ID_FS_USAGE=filesystem
[2020-05-19 09:31:53,996][ceph_volume.process][INFO ] stdout ID_FS_UUID=285f4160-0d1c-4398-bb7e-c6598cf0a77e
[2020-05-19 09:31:53,996][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=285f4160-0d1c-4398-bb7e-c6598cf0a77e
[2020-05-19 09:31:53,996][ceph_volume.process][INFO ] stdout ID_FS_VERSION=1.0
[2020-05-19 09:31:53,996][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=254:0
[2020-05-19 09:31:53,996][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NAME=p.lxroot
[2020-05-19 09:31:53,996][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NUMBER=3
[2020-05-19 09:31:53,997][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_OFFSET=47104
[2020-05-19 09:31:53,997][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SCHEME=gpt
[2020-05-19 09:31:53,997][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SIZE=88033247
[2020-05-19 09:31:53,997][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4
[2020-05-19 09:31:53,998][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_UUID=ecd60730-1de3-4fb7-b675-8a25b56443cd
[2020-05-19 09:31:53,998][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2020-05-19 09:31:53,998][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=ee11615c-cc8c-4a75-8367-e5fc96f763ca
[2020-05-19 09:31:53,998][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:03.0
[2020-05-19 09:31:53,998][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_03_0
[2020-05-19 09:31:53,999][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:31:53,999][ceph_volume.process][INFO ] stdout MINOR=3
[2020-05-19 09:31:53,999][ceph_volume.process][INFO ] stdout PARTN=3
[2020-05-19 09:31:53,999][ceph_volume.process][INFO ] stdout PARTNAME=p.lxroot
[2020-05-19 09:31:53,999][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:31:54,000][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:31:54,000][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8537869
[2020-05-19 09:31:54,000][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:31:54,043][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:31:54,044][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vda1
[2020-05-19 09:31:54,052][ceph_volume.process][INFO ] stdout NAME="vda1" KNAME="vda1" MAJ:MIN="254:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="2M" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="vda" PARTLABEL="p.legacy"
[2020-05-19 09:31:54,053][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vda1
[2020-05-19 09:31:54,061][ceph_volume.process][INFO ] stdout /dev/vda1: PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="p.legacy" PART_ENTRY_UUID="e68b961d-e488-4b9a-adf3-706664511813" PART_ENTRY_TYPE="21686148-6449-6e6f-744e-656564454649" PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="2048" PART_ENTRY_SIZE="4096" PART_ENTRY_DISK="254:0"
[2020-05-19 09:31:54,062][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vda1
[2020-05-19 09:31:54,104][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/vda1".
[2020-05-19 09:31:54,105][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vda1
[2020-05-19 09:31:54,127][ceph_volume.process][INFO ] stderr unable to read label for /dev/vda1: (2) No such file or directory
[2020-05-19 09:31:54,127][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vda1
[2020-05-19 09:31:54,147][ceph_volume.process][INFO ] stderr unable to read label for /dev/vda1: (2) No such file or directory
[2020-05-19 09:31:54,148][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vda1
[2020-05-19 09:31:54,151][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-partlabel/p.legacy /dev/disk/by-partuuid/e68b961d-e488-4b9a-adf3-706664511813 /dev/disk/by-path/pci-0000:00:03.0-part1 /dev/disk/by-path/virtio-pci-0000:00:03.0-part1
[2020-05-19 09:31:54,152][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vda1
[2020-05-19 09:31:54,152][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:03.0/virtio0/block/vda/vda1
[2020-05-19 09:31:54,152][ceph_volume.process][INFO ] stdout DEVTYPE=partition
[2020-05-19 09:31:54,152][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=254:0
[2020-05-19 09:31:54,153][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NAME=p.legacy
[2020-05-19 09:31:54,153][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NUMBER=1
[2020-05-19 09:31:54,153][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_OFFSET=2048
[2020-05-19 09:31:54,153][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SCHEME=gpt
[2020-05-19 09:31:54,154][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SIZE=4096
[2020-05-19 09:31:54,154][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_TYPE=21686148-6449-6e6f-744e-656564454649
[2020-05-19 09:31:54,155][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_UUID=e68b961d-e488-4b9a-adf3-706664511813
[2020-05-19 09:31:54,155][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2020-05-19 09:31:54,155][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=ee11615c-cc8c-4a75-8367-e5fc96f763ca
[2020-05-19 09:31:54,156][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:03.0
[2020-05-19 09:31:54,156][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_03_0
[2020-05-19 09:31:54,157][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:31:54,157][ceph_volume.process][INFO ] stdout MINOR=1
[2020-05-19 09:31:54,157][ceph_volume.process][INFO ] stdout PARTN=1
[2020-05-19 09:31:54,158][ceph_volume.process][INFO ] stdout PARTNAME=p.legacy
[2020-05-19 09:31:54,158][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:31:54,158][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:31:54,158][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8578204
[2020-05-19 09:31:54,159][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vda
[2020-05-19 09:31:54,178][ceph_volume.process][INFO ] stderr unable to read label for /dev/vda: (2) No such file or directory
[2020-05-19 09:31:54,179][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vda
[2020-05-19 09:31:54,182][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-path/virtio-pci-0000:00:03.0 /dev/disk/by-path/pci-0000:00:03.0
[2020-05-19 09:31:54,183][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vda
[2020-05-19 09:31:54,183][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:03.0/virtio0/block/vda
[2020-05-19 09:31:54,183][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:31:54,184][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2020-05-19 09:31:54,184][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=ee11615c-cc8c-4a75-8367-e5fc96f763ca
[2020-05-19 09:31:54,184][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:03.0
[2020-05-19 09:31:54,184][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_03_0
[2020-05-19 09:31:54,185][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:31:54,185][ceph_volume.process][INFO ] stdout MINOR=0
[2020-05-19 09:31:54,185][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:31:54,186][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:31:54,186][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8213738
[2020-05-19 09:31:54,187][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:31:54,227][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:31:54,229][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdb
[2020-05-19 09:31:54,232][ceph_volume.process][INFO ] stdout NAME="vdb" KNAME="vdb" MAJ:MIN="254:16" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-05-19 09:31:54,233][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vdb
[2020-05-19 09:31:54,236][ceph_volume.process][INFO ] stdout /dev/vdb: UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2020-05-19 09:31:54,237][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdb
[2020-05-19 09:31:54,275][ceph_volume.process][INFO ] stdout ceph-68b24983-1712-473e-ae24-cdc0354325f5";"1";"1";"wz--n-";"2047";"0";"4194304
[2020-05-19 09:31:54,276][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdb
[2020-05-19 09:31:54,319][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:31:54,320][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-19 09:31:54,340][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-19 09:31:54,341][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-19 09:31:54,360][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-19 09:31:54,361][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vdb
[2020-05-19 09:31:54,364][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/virtio-577378 /dev/disk/by-path/pci-0000:00:04.0 /dev/disk/by-path/virtio-pci-0000:00:04.0 /dev/disk/by-id/lvm-pv-uuid-ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:31:54,364][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vdb
[2020-05-19 09:31:54,364][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:04.0/virtio1/block/vdb
[2020-05-19 09:31:54,365][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:31:54,365][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2020-05-19 09:31:54,365][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2020-05-19 09:31:54,366][ceph_volume.process][INFO ] stdout ID_FS_UUID=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:31:54,366][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:31:54,366][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2020-05-19 09:31:54,366][ceph_volume.process][INFO ] stdout ID_MODEL=LVM PV ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 on /dev/vdb
[2020-05-19 09:31:54,366][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:04.0
[2020-05-19 09:31:54,366][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_04_0
[2020-05-19 09:31:54,366][ceph_volume.process][INFO ] stdout ID_SERIAL=577378
[2020-05-19 09:31:54,367][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:31:54,367][ceph_volume.process][INFO ] stdout MINOR=16
[2020-05-19 09:31:54,367][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:31:54,367][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/254:16
[2020-05-19 09:31:54,367][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2020-05-19 09:31:54,367][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:16.service
[2020-05-19 09:31:54,367][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:31:54,367][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8421191
[2020-05-19 09:31:54,368][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:31:54,411][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:31:54,412][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdc
[2020-05-19 09:31:54,417][ceph_volume.process][INFO ] stdout NAME="vdc" KNAME="vdc" MAJ:MIN="254:32" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-05-19 09:31:54,418][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vdc
[2020-05-19 09:31:54,423][ceph_volume.process][INFO ] stdout /dev/vdc: UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2020-05-19 09:31:54,424][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdc
[2020-05-19 09:31:54,459][ceph_volume.process][INFO ] stdout ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"1";"0";"wz--n-";"2047";"2047";"4194304
[2020-05-19 09:31:54,460][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdc
[2020-05-19 09:31:54,503][ceph_volume.process][INFO ] stdout ";"/dev/ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da/";"";"ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"";"0
[2020-05-19 09:31:54,504][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-19 09:31:54,524][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-19 09:31:54,525][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-19 09:31:54,545][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-19 09:31:54,545][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vdc
[2020-05-19 09:31:54,548][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/virtio-727088 /dev/disk/by-path/virtio-pci-0000:00:05.0 /dev/disk/by-id/lvm-pv-uuid-y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 /dev/disk/by-path/pci-0000:00:05.0
[2020-05-19 09:31:54,549][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vdc
[2020-05-19 09:31:54,549][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:05.0/virtio2/block/vdc
[2020-05-19 09:31:54,549][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:31:54,549][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2020-05-19 09:31:54,549][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2020-05-19 09:31:54,549][ceph_volume.process][INFO ] stdout ID_FS_UUID=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:31:54,550][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:31:54,550][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2020-05-19 09:31:54,550][ceph_volume.process][INFO ] stdout ID_MODEL=LVM PV y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 on /dev/vdc
[2020-05-19 09:31:54,550][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:05.0
[2020-05-19 09:31:54,552][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_05_0
[2020-05-19 09:31:54,552][ceph_volume.process][INFO ] stdout ID_SERIAL=727088
[2020-05-19 09:31:54,553][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:31:54,553][ceph_volume.process][INFO ] stdout MINOR=32
[2020-05-19 09:31:54,553][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:31:54,553][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/254:32
[2020-05-19 09:31:54,554][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2020-05-19 09:31:54,554][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:32.service
[2020-05-19 09:31:54,555][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:31:54,555][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8509629
[2020-05-19 09:31:54,556][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 150, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/inventory/main.py", line 38, in main
    self.format_report(Devices())
  File "/usr/lib/python3.6/site-packages/ceph_volume/inventory/main.py", line 42, in format_report
    print(json.dumps(inventory.json_report()))
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 51, in json_report
    output.append(device.json_report())
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 202, in json_report
    output['lvs'] = [lv.report() for lv in self.lvs]
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 202, in <listcomp>
    output['lvs'] = [lv.report() for lv in self.lvs]
  File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 945, in report
    'cluster_name': self.tags['ceph.cluster_name'],
KeyError: 'ceph.cluster_name'
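
[annotation] This is the actual failure: every lvs listing above reports the OSD LV with tags ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null and no ceph.cluster_name, so report() in api/lvm.py indexes a tag that was never set. A minimal reproduction with a plain dict, plus a defensive .get() lookup (the fallback is an illustration, not necessarily the patch that landed upstream):

    # The tag dict parsed from the lvs output above; ceph.cluster_name is absent.
    tags = {
        'ceph.cluster_fsid': 'null',
        'ceph.osd_fsid': 'null',
        'ceph.osd_id': 'null',
        'ceph.type': 'null',
    }

    try:
        cluster_name = tags['ceph.cluster_name']   # what report() does
    except KeyError as err:
        print('KeyError:', err)                    # KeyError: 'ceph.cluster_name'

    # Defensive alternative: returns None instead of raising.
    cluster_name = tags.get('ceph.cluster_name')
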
[2020-05-19 09:32:02,197][ceph_volume.main][INFO ] Running command: ceph-volume lvm batch --no-auto /dev/vdb /dev/vdc --dmcrypt --yes --no-systemd
[2020-05-19 09:32:02,198][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-19 09:32:02,202][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-19 09:32:02,202][ceph_volume.process][INFO ] stdout /dev/vda /dev/vda disk
[2020-05-19 09:32:02,202][ceph_volume.process][INFO ] stdout /dev/vda1 /dev/vda1 part
[2020-05-19 09:32:02,202][ceph_volume.process][INFO ] stdout /dev/vda2 /dev/vda2 part
[2020-05-19 09:32:02,202][ceph_volume.process][INFO ] stdout /dev/vda3 /dev/vda3 part
[2020-05-19 09:32:02,202][ceph_volume.process][INFO ] stdout /dev/vdb /dev/vdb disk
[2020-05-19 09:32:02,203][ceph_volume.process][INFO ] stdout /dev/vdc /dev/vdc disk
[2020-05-19 09:32:02,211][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-19 09:32:02,215][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-19 09:32:02,216][ceph_volume.process][INFO ] stdout /dev/vda /dev/vda disk
[2020-05-19 09:32:02,217][ceph_volume.process][INFO ] stdout /dev/vda1 /dev/vda1 part
[2020-05-19 09:32:02,217][ceph_volume.process][INFO ] stdout /dev/vda2 /dev/vda2 part
[2020-05-19 09:32:02,218][ceph_volume.process][INFO ] stdout /dev/vda3 /dev/vda3 part
[2020-05-19 09:32:02,218][ceph_volume.process][INFO ] stdout /dev/vdb /dev/vdb disk
[2020-05-19 09:32:02,219][ceph_volume.process][INFO ] stdout /dev/vdc /dev/vdc disk
[2020-05-19 09:32:02,228][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:02,271][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:02,272][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdb
[2020-05-19 09:32:02,278][ceph_volume.process][INFO ] stdout NAME="vdb" KNAME="vdb" MAJ:MIN="254:16" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-05-19 09:32:02,279][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vdb
[2020-05-19 09:32:02,282][ceph_volume.process][INFO ] stdout /dev/vdb: UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2020-05-19 09:32:02,283][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdb
[2020-05-19 09:32:02,319][ceph_volume.process][INFO ] stdout ceph-68b24983-1712-473e-ae24-cdc0354325f5";"1";"1";"wz--n-";"2047";"0";"4194304
[2020-05-19 09:32:02,320][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdb
[2020-05-19 09:32:02,363][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:02,364][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-19 09:32:02,385][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-19 09:32:02,385][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-19 09:32:02,404][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-19 09:32:02,405][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vdb
[2020-05-19 09:32:02,408][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-path/pci-0000:00:04.0 /dev/disk/by-id/virtio-577378 /dev/disk/by-path/virtio-pci-0000:00:04.0 /dev/disk/by-id/lvm-pv-uuid-ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vdb
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:04.0/virtio1/block/vdb
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout ID_FS_UUID=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout ID_MODEL=LVM PV ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 on /dev/vdb
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:04.0
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_04_0
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout ID_SERIAL=577378
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout MINOR=16
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/254:16
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:16.service
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:32:02,409][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8421191
[2020-05-19 09:32:02,410][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:02,451][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:02,452][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdc
[2020-05-19 09:32:02,457][ceph_volume.process][INFO ] stdout NAME="vdc" KNAME="vdc" MAJ:MIN="254:32" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
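Annotation: `-P` makes lsblk print shell-quoted KEY="VALUE" pairs, so shlex tokenizes a row straight into a dict. A trimmed example built from the line above (illustration only):

    import shlex

    row = ('NAME="vdc" KNAME="vdc" MAJ:MIN="254:32" FSTYPE="LVM2_member" '
           'SIZE="8G" TYPE="disk"')
    dev = dict(pair.split('=', 1) for pair in shlex.split(row))
    # shlex strips the quoting: dev['FSTYPE'] == 'LVM2_member'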
[2020-05-19 09:32:02,458][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vdc
[2020-05-19 09:32:02,462][ceph_volume.process][INFO ] stdout /dev/vdc: UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2020-05-19 09:32:02,463][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdc
[2020-05-19 09:32:02,503][ceph_volume.process][INFO ] stdout ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"1";"0";"wz--n-";"2047";"2047";"4194304
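Annotation: the seven fields follow the -o list in the pvs command above. Multiplying free extents by extent size shows why the two data disks differ: the VG on /dev/vdc has all 2047 of its 4 MiB extents free (~8 GiB), while the VG on /dev/vdb (queried later, at 09:32:05) reports 0 free extents, i.e. it is fully consumed by the osd-data LV. Worked out in Python:

    row = 'ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"1";"0";"wz--n-";"2047";"2047";"4194304'
    (vg_name, pv_count, lv_count, vg_attr,
     extent_count, free_count, extent_size) = row.split('";"')
    free_bytes = int(free_count) * int(extent_size)
    # 2047 * 4194304 = 8585740288 bytes, i.e. just under 8 GiB free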
[2020-05-19 09:32:02,504][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdc
[2020-05-19 09:32:02,547][ceph_volume.process][INFO ] stdout ";"/dev/ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da/";"";"ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"";"0
[2020-05-19 09:32:02,548][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-19 09:32:02,567][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-19 09:32:02,568][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-19 09:32:02,589][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-19 09:32:02,590][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vdc
[2020-05-19 09:32:02,593][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-path/virtio-pci-0000:00:05.0 /dev/disk/by-path/pci-0000:00:05.0 /dev/disk/by-id/virtio-727088 /dev/disk/by-id/lvm-pv-uuid-y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:02,593][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vdc
[2020-05-19 09:32:02,593][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:05.0/virtio2/block/vdc
[2020-05-19 09:32:02,593][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:32:02,593][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2020-05-19 09:32:02,593][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2020-05-19 09:32:02,593][ceph_volume.process][INFO ] stdout ID_FS_UUID=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:02,593][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:02,593][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2020-05-19 09:32:02,593][ceph_volume.process][INFO ] stdout ID_MODEL=LVM PV y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 on /dev/vdc
[2020-05-19 09:32:02,593][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:05.0
[2020-05-19 09:32:02,594][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_05_0
[2020-05-19 09:32:02,594][ceph_volume.process][INFO ] stdout ID_SERIAL=727088
[2020-05-19 09:32:02,594][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:32:02,594][ceph_volume.process][INFO ] stdout MINOR=32
[2020-05-19 09:32:02,594][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:32:02,594][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/254:32
[2020-05-19 09:32:02,594][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2020-05-19 09:32:02,594][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:32.service
[2020-05-19 09:32:02,594][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:32:02,594][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8509629
[2020-05-19 09:32:03,871][ceph_volume.main][INFO ] Running command: ceph-volume lvm list --format json
[2020-05-19 09:32:03,871][ceph_volume.main][ERROR ] ignoring inability to load ceph.conf
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 142, in main
    conf.ceph = configuration.load(conf.path)
  File "/usr/lib/python3.6/site-packages/ceph_volume/configuration.py", line 51, in load
    raise exceptions.ConfigurationError(abspath=abspath)
ceph_volume.exceptions.ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf
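Annotation: the ERROR plus traceback is informational — main() catches the ConfigurationError and carries on, since local-only sub-commands such as `inventory` and `lvm list` do not need a cluster config. Roughly the pattern (the real logic lives in ceph_volume/main.py line 142; this is a paraphrase, not a copy):

    import logging
    from ceph_volume import configuration, conf, exceptions

    logger = logging.getLogger(__name__)
    try:
        conf.ceph = configuration.load(conf.path)
    except exceptions.ConfigurationError:
        # logs the ERROR line and the traceback seen above, then continues
        logger.exception('ignoring inability to load ceph.conf')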
[2020-05-19 09:32:03,873][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -S -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:03,915][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:05,286][ceph_volume.main][INFO ] Running command: ceph-volume lvm batch --no-auto /dev/vdb /dev/vdc --dmcrypt --yes --no-systemd
[2020-05-19 09:32:05,287][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-19 09:32:05,291][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-19 09:32:05,291][ceph_volume.process][INFO ] stdout /dev/vda /dev/vda disk
[2020-05-19 09:32:05,291][ceph_volume.process][INFO ] stdout /dev/vda1 /dev/vda1 part
[2020-05-19 09:32:05,291][ceph_volume.process][INFO ] stdout /dev/vda2 /dev/vda2 part
[2020-05-19 09:32:05,291][ceph_volume.process][INFO ] stdout /dev/vda3 /dev/vda3 part
[2020-05-19 09:32:05,292][ceph_volume.process][INFO ] stdout /dev/vdb /dev/vdb disk
[2020-05-19 09:32:05,292][ceph_volume.process][INFO ] stdout /dev/vdc /dev/vdc disk
[2020-05-19 09:32:05,297][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-19 09:32:05,302][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-19 09:32:05,302][ceph_volume.process][INFO ] stdout /dev/vda /dev/vda disk
[2020-05-19 09:32:05,303][ceph_volume.process][INFO ] stdout /dev/vda1 /dev/vda1 part
[2020-05-19 09:32:05,303][ceph_volume.process][INFO ] stdout /dev/vda2 /dev/vda2 part
[2020-05-19 09:32:05,303][ceph_volume.process][INFO ] stdout /dev/vda3 /dev/vda3 part
[2020-05-19 09:32:05,303][ceph_volume.process][INFO ] stdout /dev/vdb /dev/vdb disk
[2020-05-19 09:32:05,303][ceph_volume.process][INFO ] stdout /dev/vdc /dev/vdc disk
[2020-05-19 09:32:05,306][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:05,355][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:05,356][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdb
[2020-05-19 09:32:05,360][ceph_volume.process][INFO ] stdout NAME="vdb" KNAME="vdb" MAJ:MIN="254:16" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-05-19 09:32:05,360][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vdb
[2020-05-19 09:32:05,362][ceph_volume.process][INFO ] stdout /dev/vdb: UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2020-05-19 09:32:05,363][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdb
[2020-05-19 09:32:05,411][ceph_volume.process][INFO ] stdout ceph-68b24983-1712-473e-ae24-cdc0354325f5";"1";"1";"wz--n-";"2047";"0";"4194304
[2020-05-19 09:32:05,412][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdb
[2020-05-19 09:32:05,455][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:05,456][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-19 09:32:05,475][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-19 09:32:05,476][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-19 09:32:05,497][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-19 09:32:05,497][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vdb
[2020-05-19 09:32:05,500][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/virtio-577378 /dev/disk/by-id/lvm-pv-uuid-ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 /dev/disk/by-path/virtio-pci-0000:00:04.0 /dev/disk/by-path/pci-0000:00:04.0
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vdb
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:04.0/virtio1/block/vdb
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout ID_FS_UUID=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout ID_MODEL=LVM PV ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 on /dev/vdb
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:04.0
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_04_0
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout ID_SERIAL=577378
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout MINOR=16
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/254:16
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:16.service
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:32:05,501][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8421191
[2020-05-19 09:32:05,502][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:05,539][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:05,540][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdc
[2020-05-19 09:32:05,544][ceph_volume.process][INFO ] stdout NAME="vdc" KNAME="vdc" MAJ:MIN="254:32" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-05-19 09:32:05,545][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vdc
[2020-05-19 09:32:05,548][ceph_volume.process][INFO ] stdout /dev/vdc: UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2020-05-19 09:32:05,549][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdc
[2020-05-19 09:32:05,591][ceph_volume.process][INFO ] stdout ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"1";"0";"wz--n-";"2047";"2047";"4194304
[2020-05-19 09:32:05,592][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdc
[2020-05-19 09:32:05,635][ceph_volume.process][INFO ] stdout ";"/dev/ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da/";"";"ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"";"0
[2020-05-19 09:32:05,636][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-19 09:32:05,655][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-19 09:32:05,656][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-19 09:32:05,676][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-19 09:32:05,677][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vdc
[2020-05-19 09:32:05,680][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-path/pci-0000:00:05.0 /dev/disk/by-id/virtio-727088 /dev/disk/by-path/virtio-pci-0000:00:05.0 /dev/disk/by-id/lvm-pv-uuid-y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vdc
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:05.0/virtio2/block/vdc
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout ID_FS_UUID=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout ID_MODEL=LVM PV y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 on /dev/vdc
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:05.0
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_05_0
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout ID_SERIAL=727088
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout MINOR=32
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/254:32
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2020-05-19 09:32:05,681][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:32.service
[2020-05-19 09:32:05,682][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:32:05,682][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8509629
[2020-05-19 09:32:07,088][ceph_volume.main][INFO ] Running command: ceph-volume lvm list --format json
[2020-05-19 09:32:07,089][ceph_volume.main][ERROR ] ignoring inability to load ceph.conf
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 142, in main
    conf.ceph = configuration.load(conf.path)
  File "/usr/lib/python3.6/site-packages/ceph_volume/configuration.py", line 51, in load
    raise exceptions.ConfigurationError(abspath=abspath)
ceph_volume.exceptions.ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf
[2020-05-19 09:32:07,091][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -S -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:07,132][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:08,654][ceph_volume.main][INFO ] Running command: ceph-volume lvm batch --no-auto /dev/vdb /dev/vdc --dmcrypt --yes --no-systemd
[2020-05-19 09:32:08,655][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-19 09:32:08,658][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-19 09:32:08,659][ceph_volume.process][INFO ] stdout /dev/vda /dev/vda disk
[2020-05-19 09:32:08,659][ceph_volume.process][INFO ] stdout /dev/vda1 /dev/vda1 part
[2020-05-19 09:32:08,660][ceph_volume.process][INFO ] stdout /dev/vda2 /dev/vda2 part
[2020-05-19 09:32:08,660][ceph_volume.process][INFO ] stdout /dev/vda3 /dev/vda3 part
[2020-05-19 09:32:08,660][ceph_volume.process][INFO ] stdout /dev/vdb /dev/vdb disk
[2020-05-19 09:32:08,661][ceph_volume.process][INFO ] stdout /dev/vdc /dev/vdc disk
[2020-05-19 09:32:08,668][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-19 09:32:08,673][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-19 09:32:08,674][ceph_volume.process][INFO ] stdout /dev/vda /dev/vda disk
[2020-05-19 09:32:08,675][ceph_volume.process][INFO ] stdout /dev/vda1 /dev/vda1 part
[2020-05-19 09:32:08,675][ceph_volume.process][INFO ] stdout /dev/vda2 /dev/vda2 part
[2020-05-19 09:32:08,675][ceph_volume.process][INFO ] stdout /dev/vda3 /dev/vda3 part
[2020-05-19 09:32:08,675][ceph_volume.process][INFO ] stdout /dev/vdb /dev/vdb disk
[2020-05-19 09:32:08,676][ceph_volume.process][INFO ] stdout /dev/vdc /dev/vdc disk
[2020-05-19 09:32:08,682][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:08,727][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:08,728][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdb
[2020-05-19 09:32:08,733][ceph_volume.process][INFO ] stdout NAME="vdb" KNAME="vdb" MAJ:MIN="254:16" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-05-19 09:32:08,733][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vdb
[2020-05-19 09:32:08,735][ceph_volume.process][INFO ] stdout /dev/vdb: UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2020-05-19 09:32:08,736][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdb
[2020-05-19 09:32:08,775][ceph_volume.process][INFO ] stdout ceph-68b24983-1712-473e-ae24-cdc0354325f5";"1";"1";"wz--n-";"2047";"0";"4194304
[2020-05-19 09:32:08,776][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdb
[2020-05-19 09:32:08,816][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:08,817][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-19 09:32:08,837][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-19 09:32:08,838][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-19 09:32:08,858][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-19 09:32:08,859][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vdb
[2020-05-19 09:32:08,862][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/lvm-pv-uuid-ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 /dev/disk/by-id/virtio-577378 /dev/disk/by-path/pci-0000:00:04.0 /dev/disk/by-path/virtio-pci-0000:00:04.0
[2020-05-19 09:32:08,863][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vdb
[2020-05-19 09:32:08,863][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:04.0/virtio1/block/vdb
[2020-05-19 09:32:08,863][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:32:08,864][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2020-05-19 09:32:08,864][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2020-05-19 09:32:08,865][ceph_volume.process][INFO ] stdout ID_FS_UUID=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:32:08,866][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:32:08,866][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2020-05-19 09:32:08,866][ceph_volume.process][INFO ] stdout ID_MODEL=LVM PV ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 on /dev/vdb
[2020-05-19 09:32:08,866][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:04.0
[2020-05-19 09:32:08,866][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_04_0
[2020-05-19 09:32:08,866][ceph_volume.process][INFO ] stdout ID_SERIAL=577378
[2020-05-19 09:32:08,866][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:32:08,866][ceph_volume.process][INFO ] stdout MINOR=16
[2020-05-19 09:32:08,866][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:32:08,866][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/254:16
[2020-05-19 09:32:08,866][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2020-05-19 09:32:08,866][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:16.service
[2020-05-19 09:32:08,866][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:32:08,866][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8421191
[2020-05-19 09:32:08,867][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:08,907][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:08,908][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdc
[2020-05-19 09:32:08,913][ceph_volume.process][INFO ] stdout NAME="vdc" KNAME="vdc" MAJ:MIN="254:32" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-05-19 09:32:08,914][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vdc
[2020-05-19 09:32:08,918][ceph_volume.process][INFO ] stdout /dev/vdc: UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2020-05-19 09:32:08,918][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdc
[2020-05-19 09:32:08,959][ceph_volume.process][INFO ] stdout ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"1";"0";"wz--n-";"2047";"2047";"4194304
[2020-05-19 09:32:08,960][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdc
[2020-05-19 09:32:09,003][ceph_volume.process][INFO ] stdout ";"/dev/ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da/";"";"ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"";"0
[2020-05-19 09:32:09,004][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-19 09:32:09,024][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-19 09:32:09,025][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-19 09:32:09,044][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-19 09:32:09,045][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vdc
[2020-05-19 09:32:09,048][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/lvm-pv-uuid-y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 /dev/disk/by-id/virtio-727088 /dev/disk/by-path/pci-0000:00:05.0 /dev/disk/by-path/virtio-pci-0000:00:05.0
[2020-05-19 09:32:09,049][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vdc
[2020-05-19 09:32:09,049][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:05.0/virtio2/block/vdc
[2020-05-19 09:32:09,049][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:32:09,049][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2020-05-19 09:32:09,049][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2020-05-19 09:32:09,049][ceph_volume.process][INFO ] stdout ID_FS_UUID=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:09,049][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:09,050][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2020-05-19 09:32:09,050][ceph_volume.process][INFO ] stdout ID_MODEL=LVM PV y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 on /dev/vdc
[2020-05-19 09:32:09,050][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:05.0
[2020-05-19 09:32:09,050][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_05_0
[2020-05-19 09:32:09,050][ceph_volume.process][INFO ] stdout ID_SERIAL=727088
[2020-05-19 09:32:09,050][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:32:09,050][ceph_volume.process][INFO ] stdout MINOR=32
[2020-05-19 09:32:09,050][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:32:09,050][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/254:32
[2020-05-19 09:32:09,050][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2020-05-19 09:32:09,050][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:32.service
[2020-05-19 09:32:09,051][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:32:09,051][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8509629
[2020-05-19 09:32:10,327][ceph_volume.main][INFO ] Running command: ceph-volume lvm list --format json
[2020-05-19 09:32:10,327][ceph_volume.main][ERROR ] ignoring inability to load ceph.conf
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 142, in main
    conf.ceph = configuration.load(conf.path)
  File "/usr/lib/python3.6/site-packages/ceph_volume/configuration.py", line 51, in load
    raise exceptions.ConfigurationError(abspath=abspath)
ceph_volume.exceptions.ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf
[2020-05-19 09:32:10,329][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -S -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:10,371][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:11,662][ceph_volume.main][INFO ] Running command: ceph-volume lvm batch --no-auto /dev/vdb /dev/vdc --dmcrypt --yes --no-systemd
[2020-05-19 09:32:11,662][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-19 09:32:11,666][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-19 09:32:11,666][ceph_volume.process][INFO ] stdout /dev/vda /dev/vda disk
[2020-05-19 09:32:11,666][ceph_volume.process][INFO ] stdout /dev/vda1 /dev/vda1 part
[2020-05-19 09:32:11,667][ceph_volume.process][INFO ] stdout /dev/vda2 /dev/vda2 part
[2020-05-19 09:32:11,667][ceph_volume.process][INFO ] stdout /dev/vda3 /dev/vda3 part
[2020-05-19 09:32:11,667][ceph_volume.process][INFO ] stdout /dev/vdb /dev/vdb disk
[2020-05-19 09:32:11,667][ceph_volume.process][INFO ] stdout /dev/vdc /dev/vdc disk
[2020-05-19 09:32:11,674][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-19 09:32:11,677][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-19 09:32:11,678][ceph_volume.process][INFO ] stdout /dev/vda /dev/vda disk
[2020-05-19 09:32:11,679][ceph_volume.process][INFO ] stdout /dev/vda1 /dev/vda1 part
[2020-05-19 09:32:11,680][ceph_volume.process][INFO ] stdout /dev/vda2 /dev/vda2 part
[2020-05-19 09:32:11,680][ceph_volume.process][INFO ] stdout /dev/vda3 /dev/vda3 part
[2020-05-19 09:32:11,680][ceph_volume.process][INFO ] stdout /dev/vdb /dev/vdb disk
[2020-05-19 09:32:11,680][ceph_volume.process][INFO ] stdout /dev/vdc /dev/vdc disk
[2020-05-19 09:32:11,684][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:11,727][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:11,728][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdb
[2020-05-19 09:32:11,731][ceph_volume.process][INFO ] stdout NAME="vdb" KNAME="vdb" MAJ:MIN="254:16" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-05-19 09:32:11,732][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vdb
[2020-05-19 09:32:11,735][ceph_volume.process][INFO ] stdout /dev/vdb: UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2020-05-19 09:32:11,735][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdb
[2020-05-19 09:32:11,775][ceph_volume.process][INFO ] stdout ceph-68b24983-1712-473e-ae24-cdc0354325f5";"1";"1";"wz--n-";"2047";"0";"4194304
[2020-05-19 09:32:11,776][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdb
[2020-05-19 09:32:11,819][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:11,820][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-19 09:32:11,840][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-19 09:32:11,841][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-19 09:32:11,861][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-19 09:32:11,862][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vdb
[2020-05-19 09:32:11,867][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-path/pci-0000:00:04.0 /dev/disk/by-id/lvm-pv-uuid-ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 /dev/disk/by-id/virtio-577378 /dev/disk/by-path/virtio-pci-0000:00:04.0
[2020-05-19 09:32:11,867][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vdb
[2020-05-19 09:32:11,868][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:04.0/virtio1/block/vdb
[2020-05-19 09:32:11,868][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:32:11,868][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2020-05-19 09:32:11,869][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2020-05-19 09:32:11,869][ceph_volume.process][INFO ] stdout ID_FS_UUID=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:32:11,869][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:32:11,869][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2020-05-19 09:32:11,869][ceph_volume.process][INFO ] stdout ID_MODEL=LVM PV ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 on /dev/vdb
[2020-05-19 09:32:11,870][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:04.0
[2020-05-19 09:32:11,870][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_04_0
[2020-05-19 09:32:11,870][ceph_volume.process][INFO ] stdout ID_SERIAL=577378
[2020-05-19 09:32:11,870][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:32:11,870][ceph_volume.process][INFO ] stdout MINOR=16
[2020-05-19 09:32:11,870][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:32:11,870][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/254:16
[2020-05-19 09:32:11,871][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2020-05-19 09:32:11,871][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:16.service
[2020-05-19 09:32:11,871][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:32:11,871][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8421191
[2020-05-19 09:32:11,871][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:11,911][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:11,912][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdc
[2020-05-19 09:32:11,917][ceph_volume.process][INFO ] stdout NAME="vdc" KNAME="vdc" MAJ:MIN="254:32" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-05-19 09:32:11,917][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vdc
[2020-05-19 09:32:11,921][ceph_volume.process][INFO ] stdout /dev/vdc: UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2020-05-19 09:32:11,922][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdc
[2020-05-19 09:32:11,963][ceph_volume.process][INFO ] stdout ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"1";"0";"wz--n-";"2047";"2047";"4194304
[2020-05-19 09:32:11,964][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdc
[2020-05-19 09:32:12,003][ceph_volume.process][INFO ] stdout ";"/dev/ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da/";"";"ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"";"0
[2020-05-19 09:32:12,004][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-19 09:32:12,023][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-19 09:32:12,024][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-19 09:32:12,046][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-19 09:32:12,047][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vdc
[2020-05-19 09:32:12,051][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-path/pci-0000:00:05.0 /dev/disk/by-id/virtio-727088 /dev/disk/by-path/virtio-pci-0000:00:05.0 /dev/disk/by-id/lvm-pv-uuid-y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:12,051][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vdc
[2020-05-19 09:32:12,052][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:05.0/virtio2/block/vdc
[2020-05-19 09:32:12,052][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:32:12,052][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2020-05-19 09:32:12,053][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2020-05-19 09:32:12,053][ceph_volume.process][INFO ] stdout ID_FS_UUID=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:12,053][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:12,053][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2020-05-19 09:32:12,053][ceph_volume.process][INFO ] stdout ID_MODEL=LVM PV y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 on /dev/vdc
[2020-05-19 09:32:12,053][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:05.0
[2020-05-19 09:32:12,054][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_05_0
[2020-05-19 09:32:12,054][ceph_volume.process][INFO ] stdout ID_SERIAL=727088
[2020-05-19 09:32:12,054][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:32:12,054][ceph_volume.process][INFO ] stdout MINOR=32
[2020-05-19 09:32:12,054][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:32:12,054][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/254:32
[2020-05-19 09:32:12,055][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2020-05-19 09:32:12,055][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:32.service
[2020-05-19 09:32:12,055][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:32:12,055][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8509629
[2020-05-19 09:32:13,352][ceph_volume.main][INFO ] Running command: ceph-volume lvm list --format json
[2020-05-19 09:32:13,352][ceph_volume.main][ERROR ] ignoring inability to load ceph.conf
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 142, in main
    conf.ceph = configuration.load(conf.path)
  File "/usr/lib/python3.6/site-packages/ceph_volume/configuration.py", line 51, in load
    raise exceptions.ConfigurationError(abspath=abspath)
ceph_volume.exceptions.ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf
[2020-05-19 09:32:13,353][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -S -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:13,395][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:14,634][ceph_volume.main][INFO ] Running command: ceph-volume lvm batch --no-auto /dev/vdb /dev/vdc --yes --no-systemd
[2020-05-19 09:32:14,635][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-19 09:32:14,638][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-19 09:32:14,638][ceph_volume.process][INFO ] stdout /dev/vda /dev/vda disk
[2020-05-19 09:32:14,638][ceph_volume.process][INFO ] stdout /dev/vda1 /dev/vda1 part
[2020-05-19 09:32:14,639][ceph_volume.process][INFO ] stdout /dev/vda2 /dev/vda2 part
[2020-05-19 09:32:14,639][ceph_volume.process][INFO ] stdout /dev/vda3 /dev/vda3 part
[2020-05-19 09:32:14,639][ceph_volume.process][INFO ] stdout /dev/vdb /dev/vdb disk
[2020-05-19 09:32:14,640][ceph_volume.process][INFO ] stdout /dev/vdc /dev/vdc disk
[2020-05-19 09:32:14,647][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-19 09:32:14,652][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-19 09:32:14,653][ceph_volume.process][INFO ] stdout /dev/vda /dev/vda disk
[2020-05-19 09:32:14,653][ceph_volume.process][INFO ] stdout /dev/vda1 /dev/vda1 part
[2020-05-19 09:32:14,654][ceph_volume.process][INFO ] stdout /dev/vda2 /dev/vda2 part
[2020-05-19 09:32:14,654][ceph_volume.process][INFO ] stdout /dev/vda3 /dev/vda3 part
[2020-05-19 09:32:14,654][ceph_volume.process][INFO ] stdout /dev/vdb /dev/vdb disk
[2020-05-19 09:32:14,655][ceph_volume.process][INFO ] stdout /dev/vdc /dev/vdc disk
[2020-05-19 09:32:14,661][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:14,703][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:14,704][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdb
[2020-05-19 09:32:14,708][ceph_volume.process][INFO ] stdout NAME="vdb" KNAME="vdb" MAJ:MIN="254:16" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-05-19 09:32:14,708][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vdb
[2020-05-19 09:32:14,710][ceph_volume.process][INFO ] stdout /dev/vdb: UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2020-05-19 09:32:14,711][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdb
[2020-05-19 09:32:14,755][ceph_volume.process][INFO ] stdout ceph-68b24983-1712-473e-ae24-cdc0354325f5";"1";"1";"wz--n-";"2047";"0";"4194304
[2020-05-19 09:32:14,757][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdb
[2020-05-19 09:32:14,807][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:14,808][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-19 09:32:14,830][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-19 09:32:14,831][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-19 09:32:14,850][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-19 09:32:14,851][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vdb
[2020-05-19 09:32:14,854][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/virtio-577378 /dev/disk/by-path/pci-0000:00:04.0 /dev/disk/by-id/lvm-pv-uuid-ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 /dev/disk/by-path/virtio-pci-0000:00:04.0
[2020-05-19 09:32:14,855][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vdb
[2020-05-19 09:32:14,856][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:04.0/virtio1/block/vdb
[2020-05-19 09:32:14,856][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:32:14,857][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2020-05-19 09:32:14,857][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2020-05-19 09:32:14,858][ceph_volume.process][INFO ] stdout ID_FS_UUID=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:32:14,858][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:32:14,859][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2020-05-19 09:32:14,860][ceph_volume.process][INFO ] stdout ID_MODEL=LVM PV ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 on /dev/vdb
[2020-05-19 09:32:14,860][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:04.0
[2020-05-19 09:32:14,861][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_04_0
[2020-05-19 09:32:14,861][ceph_volume.process][INFO ] stdout ID_SERIAL=577378
[2020-05-19 09:32:14,862][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:32:14,862][ceph_volume.process][INFO ] stdout MINOR=16
[2020-05-19 09:32:14,862][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:32:14,863][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/254:16
[2020-05-19 09:32:14,863][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2020-05-19 09:32:14,863][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:16.service
[2020-05-19 09:32:14,864][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:32:14,864][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8421191
[2020-05-19 09:32:14,865][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:14,903][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:14,905][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdc
[2020-05-19 09:32:14,910][ceph_volume.process][INFO ] stdout NAME="vdc" KNAME="vdc" MAJ:MIN="254:32" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-05-19 09:32:14,911][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vdc
[2020-05-19 09:32:14,915][ceph_volume.process][INFO ] stdout /dev/vdc: UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2020-05-19 09:32:14,916][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdc
[2020-05-19 09:32:14,955][ceph_volume.process][INFO ] stdout ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"1";"0";"wz--n-";"2047";"2047";"4194304
[2020-05-19 09:32:14,957][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdc
[2020-05-19 09:32:14,991][ceph_volume.process][INFO ] stdout ";"/dev/ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da/";"";"ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"";"0
[2020-05-19 09:32:14,992][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-19 09:32:15,018][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-19 09:32:15,019][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-19 09:32:15,040][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-19 09:32:15,041][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vdc
[2020-05-19 09:32:15,046][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-path/pci-0000:00:05.0 /dev/disk/by-id/virtio-727088 /dev/disk/by-id/lvm-pv-uuid-y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 /dev/disk/by-path/virtio-pci-0000:00:05.0
[2020-05-19 09:32:15,046][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vdc
[2020-05-19 09:32:15,046][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:05.0/virtio2/block/vdc
[2020-05-19 09:32:15,046][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:32:15,046][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2020-05-19 09:32:15,046][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2020-05-19 09:32:15,046][ceph_volume.process][INFO ] stdout ID_FS_UUID=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:15,046][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:15,047][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2020-05-19 09:32:15,047][ceph_volume.process][INFO ] stdout ID_MODEL=LVM PV y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 on /dev/vdc
[2020-05-19 09:32:15,047][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:05.0
[2020-05-19 09:32:15,047][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_05_0
[2020-05-19 09:32:15,047][ceph_volume.process][INFO ] stdout ID_SERIAL=727088
[2020-05-19 09:32:15,047][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:32:15,047][ceph_volume.process][INFO ] stdout MINOR=32
[2020-05-19 09:32:15,047][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:32:15,047][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/254:32
[2020-05-19 09:32:15,047][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2020-05-19 09:32:15,047][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:32.service
[2020-05-19 09:32:15,047][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:32:15,047][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8509629
[2020-05-19 09:32:16,297][ceph_volume.main][INFO ] Running command: ceph-volume lvm list --format json
[2020-05-19 09:32:16,297][ceph_volume.main][ERROR ] ignoring inability to load ceph.conf
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 142, in main
conf.ceph = configuration.load(conf.path)
File "/usr/lib/python3.6/site-packages/ceph_volume/configuration.py", line 51, in load
raise exceptions.ConfigurationError(abspath=abspath)
ceph_volume.exceptions.ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf
[2020-05-19 09:32:16,299][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -S -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:16,340][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:17,673][ceph_volume.main][INFO ] Running command: ceph-volume lvm batch --no-auto /dev/vdb /dev/vdc --yes --no-systemd
[2020-05-19 09:32:17,674][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-19 09:32:17,678][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-19 09:32:17,678][ceph_volume.process][INFO ] stdout /dev/vda /dev/vda disk
[2020-05-19 09:32:17,678][ceph_volume.process][INFO ] stdout /dev/vda1 /dev/vda1 part
[2020-05-19 09:32:17,679][ceph_volume.process][INFO ] stdout /dev/vda2 /dev/vda2 part
[2020-05-19 09:32:17,679][ceph_volume.process][INFO ] stdout /dev/vda3 /dev/vda3 part
[2020-05-19 09:32:17,679][ceph_volume.process][INFO ] stdout /dev/vdb /dev/vdb disk
[2020-05-19 09:32:17,679][ceph_volume.process][INFO ] stdout /dev/vdc /dev/vdc disk
[2020-05-19 09:32:17,684][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-19 09:32:17,689][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-19 09:32:17,689][ceph_volume.process][INFO ] stdout /dev/vda /dev/vda disk
[2020-05-19 09:32:17,690][ceph_volume.process][INFO ] stdout /dev/vda1 /dev/vda1 part
[2020-05-19 09:32:17,691][ceph_volume.process][INFO ] stdout /dev/vda2 /dev/vda2 part
[2020-05-19 09:32:17,691][ceph_volume.process][INFO ] stdout /dev/vda3 /dev/vda3 part
[2020-05-19 09:32:17,691][ceph_volume.process][INFO ] stdout /dev/vdb /dev/vdb disk
[2020-05-19 09:32:17,691][ceph_volume.process][INFO ] stdout /dev/vdc /dev/vdc disk
[2020-05-19 09:32:17,695][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:17,747][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:17,748][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdb
[2020-05-19 09:32:17,751][ceph_volume.process][INFO ] stdout NAME="vdb" KNAME="vdb" MAJ:MIN="254:16" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-05-19 09:32:17,752][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vdb
[2020-05-19 09:32:17,755][ceph_volume.process][INFO ] stdout /dev/vdb: UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2020-05-19 09:32:17,756][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdb
[2020-05-19 09:32:17,799][ceph_volume.process][INFO ] stdout ceph-68b24983-1712-473e-ae24-cdc0354325f5";"1";"1";"wz--n-";"2047";"0";"4194304
[2020-05-19 09:32:17,800][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdb
[2020-05-19 09:32:17,843][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:17,844][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-19 09:32:17,865][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-19 09:32:17,866][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-19 09:32:17,888][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-19 09:32:17,889][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vdb
[2020-05-19 09:32:17,893][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-path/virtio-pci-0000:00:04.0 /dev/disk/by-path/pci-0000:00:04.0 /dev/disk/by-id/lvm-pv-uuid-ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 /dev/disk/by-id/virtio-577378
[2020-05-19 09:32:17,894][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vdb
[2020-05-19 09:32:17,894][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:04.0/virtio1/block/vdb
[2020-05-19 09:32:17,895][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:32:17,895][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2020-05-19 09:32:17,895][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2020-05-19 09:32:17,896][ceph_volume.process][INFO ] stdout ID_FS_UUID=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:32:17,896][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:32:17,896][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2020-05-19 09:32:17,896][ceph_volume.process][INFO ] stdout ID_MODEL=LVM PV ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 on /dev/vdb
[2020-05-19 09:32:17,897][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:04.0
[2020-05-19 09:32:17,897][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_04_0
[2020-05-19 09:32:17,897][ceph_volume.process][INFO ] stdout ID_SERIAL=577378
[2020-05-19 09:32:17,898][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:32:17,898][ceph_volume.process][INFO ] stdout MINOR=16
[2020-05-19 09:32:17,898][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:32:17,898][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/254:16
[2020-05-19 09:32:17,899][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2020-05-19 09:32:17,899][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:16.service
[2020-05-19 09:32:17,899][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:32:17,900][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8421191
[2020-05-19 09:32:17,900][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:17,947][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:17,948][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdc
[2020-05-19 09:32:17,953][ceph_volume.process][INFO ] stdout NAME="vdc" KNAME="vdc" MAJ:MIN="254:32" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-05-19 09:32:17,954][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vdc
[2020-05-19 09:32:17,958][ceph_volume.process][INFO ] stdout /dev/vdc: UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2020-05-19 09:32:17,959][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdc
[2020-05-19 09:32:17,996][ceph_volume.process][INFO ] stdout ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"1";"0";"wz--n-";"2047";"2047";"4194304
[2020-05-19 09:32:17,998][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdc
[2020-05-19 09:32:18,035][ceph_volume.process][INFO ] stdout ";"/dev/ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da/";"";"ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"";"0
[2020-05-19 09:32:18,037][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-19 09:32:18,063][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-19 09:32:18,064][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-19 09:32:18,089][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-19 09:32:18,090][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vdc
[2020-05-19 09:32:18,094][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/virtio-727088 /dev/disk/by-path/virtio-pci-0000:00:05.0 /dev/disk/by-id/lvm-pv-uuid-y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 /dev/disk/by-path/pci-0000:00:05.0
[2020-05-19 09:32:18,095][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vdc
[2020-05-19 09:32:18,096][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:05.0/virtio2/block/vdc
[2020-05-19 09:32:18,096][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:32:18,097][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2020-05-19 09:32:18,098][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2020-05-19 09:32:18,098][ceph_volume.process][INFO ] stdout ID_FS_UUID=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:18,098][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:18,099][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2020-05-19 09:32:18,099][ceph_volume.process][INFO ] stdout ID_MODEL=LVM PV y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 on /dev/vdc
[2020-05-19 09:32:18,100][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:05.0
[2020-05-19 09:32:18,101][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_05_0
[2020-05-19 09:32:18,101][ceph_volume.process][INFO ] stdout ID_SERIAL=727088
[2020-05-19 09:32:18,102][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:32:18,102][ceph_volume.process][INFO ] stdout MINOR=32
[2020-05-19 09:32:18,103][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:32:18,103][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/254:32
[2020-05-19 09:32:18,104][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2020-05-19 09:32:18,105][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:32.service
[2020-05-19 09:32:18,105][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:32:18,106][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8509629
[2020-05-19 09:32:19,554][ceph_volume.main][INFO ] Running command: ceph-volume lvm list --format json
[2020-05-19 09:32:19,554][ceph_volume.main][ERROR ] ignoring inability to load ceph.conf
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 142, in main
    conf.ceph = configuration.load(conf.path)
  File "/usr/lib/python3.6/site-packages/ceph_volume/configuration.py", line 51, in load
    raise exceptions.ConfigurationError(abspath=abspath)
ceph_volume.exceptions.ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf
[2020-05-19 09:32:19,556][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -S -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:19,596][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:20,991][ceph_volume.main][INFO ] Running command: ceph-volume lvm batch --no-auto /dev/vdb /dev/vdc --yes --no-systemd
[2020-05-19 09:32:20,993][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-19 09:32:20,996][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-19 09:32:20,997][ceph_volume.process][INFO ] stdout /dev/vda /dev/vda disk
[2020-05-19 09:32:20,997][ceph_volume.process][INFO ] stdout /dev/vda1 /dev/vda1 part
[2020-05-19 09:32:20,998][ceph_volume.process][INFO ] stdout /dev/vda2 /dev/vda2 part
[2020-05-19 09:32:20,998][ceph_volume.process][INFO ] stdout /dev/vda3 /dev/vda3 part
[2020-05-19 09:32:20,998][ceph_volume.process][INFO ] stdout /dev/vdb /dev/vdb disk
[2020-05-19 09:32:20,998][ceph_volume.process][INFO ] stdout /dev/vdc /dev/vdc disk
[2020-05-19 09:32:21,006][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-19 09:32:21,011][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-19 09:32:21,012][ceph_volume.process][INFO ] stdout /dev/vda /dev/vda disk
[2020-05-19 09:32:21,012][ceph_volume.process][INFO ] stdout /dev/vda1 /dev/vda1 part
[2020-05-19 09:32:21,012][ceph_volume.process][INFO ] stdout /dev/vda2 /dev/vda2 part
[2020-05-19 09:32:21,012][ceph_volume.process][INFO ] stdout /dev/vda3 /dev/vda3 part
[2020-05-19 09:32:21,012][ceph_volume.process][INFO ] stdout /dev/vdb /dev/vdb disk
[2020-05-19 09:32:21,012][ceph_volume.process][INFO ] stdout /dev/vdc /dev/vdc disk
[2020-05-19 09:32:21,016][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:21,063][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:21,064][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdb
[2020-05-19 09:32:21,067][ceph_volume.process][INFO ] stdout NAME="vdb" KNAME="vdb" MAJ:MIN="254:16" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-05-19 09:32:21,068][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vdb
[2020-05-19 09:32:21,070][ceph_volume.process][INFO ] stdout /dev/vdb: UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2020-05-19 09:32:21,071][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdb
[2020-05-19 09:32:21,119][ceph_volume.process][INFO ] stdout ceph-68b24983-1712-473e-ae24-cdc0354325f5";"1";"1";"wz--n-";"2047";"0";"4194304
[2020-05-19 09:32:21,120][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdb
[2020-05-19 09:32:21,159][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:21,160][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-19 09:32:21,183][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-19 09:32:21,184][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-19 09:32:21,206][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-19 09:32:21,207][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vdb
[2020-05-19 09:32:21,212][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-path/pci-0000:00:04.0 /dev/disk/by-id/virtio-577378 /dev/disk/by-id/lvm-pv-uuid-ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 /dev/disk/by-path/virtio-pci-0000:00:04.0
[2020-05-19 09:32:21,213][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vdb
[2020-05-19 09:32:21,213][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:04.0/virtio1/block/vdb
[2020-05-19 09:32:21,213][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:32:21,213][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2020-05-19 09:32:21,213][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2020-05-19 09:32:21,213][ceph_volume.process][INFO ] stdout ID_FS_UUID=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:32:21,214][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-19 09:32:21,214][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2020-05-19 09:32:21,214][ceph_volume.process][INFO ] stdout ID_MODEL=LVM PV ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 on /dev/vdb
[2020-05-19 09:32:21,214][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:04.0
[2020-05-19 09:32:21,214][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_04_0
[2020-05-19 09:32:21,214][ceph_volume.process][INFO ] stdout ID_SERIAL=577378
[2020-05-19 09:32:21,214][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:32:21,214][ceph_volume.process][INFO ] stdout MINOR=16
[2020-05-19 09:32:21,215][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:32:21,215][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/254:16
[2020-05-19 09:32:21,215][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2020-05-19 09:32:21,215][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:16.service
[2020-05-19 09:32:21,215][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:32:21,215][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8421191
[2020-05-19 09:32:21,216][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:21,260][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-19 09:32:21,261][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdc
[2020-05-19 09:32:21,266][ceph_volume.process][INFO ] stdout NAME="vdc" KNAME="vdc" MAJ:MIN="254:32" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-05-19 09:32:21,267][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/vdc
[2020-05-19 09:32:21,271][ceph_volume.process][INFO ] stdout /dev/vdc: UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid"
[2020-05-19 09:32:21,272][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdc
[2020-05-19 09:32:21,303][ceph_volume.process][INFO ] stdout ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"1";"0";"wz--n-";"2047";"2047";"4194304
[2020-05-19 09:32:21,304][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdc
[2020-05-19 09:32:21,347][ceph_volume.process][INFO ] stdout ";"/dev/ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da/";"";"ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"";"0
[2020-05-19 09:32:21,349][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-19 09:32:21,369][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-19 09:32:21,370][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-19 09:32:21,390][ceph_volume.process][INFO ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-19 09:32:21,391][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/vdc
[2020-05-19 09:32:21,395][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/virtio-727088 /dev/disk/by-id/lvm-pv-uuid-y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 /dev/disk/by-path/pci-0000:00:05.0 /dev/disk/by-path/virtio-pci-0000:00:05.0
[2020-05-19 09:32:21,396][ceph_volume.process][INFO ] stdout DEVNAME=/dev/vdc
[2020-05-19 09:32:21,396][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:05.0/virtio2/block/vdc
[2020-05-19 09:32:21,396][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-05-19 09:32:21,397][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2020-05-19 09:32:21,397][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2020-05-19 09:32:21,397][ceph_volume.process][INFO ] stdout ID_FS_UUID=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:21,398][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-19 09:32:21,398][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2020-05-19 09:32:21,398][ceph_volume.process][INFO ] stdout ID_MODEL=LVM PV y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 on /dev/vdc
[2020-05-19 09:32:21,398][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:05.0
[2020-05-19 09:32:21,399][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_05_0
[2020-05-19 09:32:21,399][ceph_volume.process][INFO ] stdout ID_SERIAL=727088
[2020-05-19 09:32:21,399][ceph_volume.process][INFO ] stdout MAJOR=254
[2020-05-19 09:32:21,400][ceph_volume.process][INFO ] stdout MINOR=32
[2020-05-19 09:32:21,400][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-05-19 09:32:21,400][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/254:32
[2020-05-19 09:32:21,400][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2020-05-19 09:32:21,401][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:32.service
[2020-05-19 09:32:21,401][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-05-19 09:32:21,401][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=8509629
[2020-05-19 09:32:22,767][ceph_volume.main][INFO ] Running command: ceph-volume lvm list --format json
[2020-05-19 09:32:22,767][ceph_volume.main][ERROR ] ignoring inability to load ceph.conf
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 142, in main
    conf.ceph = configuration.load(conf.path)
  File "/usr/lib/python3.6/site-packages/ceph_volume/configuration.py", line 51, in load
    raise exceptions.ConfigurationError(abspath=abspath)
ceph_volume.exceptions.ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf
[2020-05-19 09:32:22,768][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -S -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-19 09:32:22,811][ceph_volume.process][INFO ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g