Bug #45587 (closed): mgr/cephadm: Failed to create encrypted OSD

Added by Volker Theile almost 4 years ago. Updated almost 4 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Development
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I cannot create an encrypted OSD using Ceph 15.2.1-277-g17d346932e on SES7.

[
  {
    "service_type": "osd",
    "service_id": "dashboard-admin-1589814409945",
    "host_pattern": "node3",
    "data_devices": {
      "rotational": true
    },
    "encrypted": true
  }
]
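
For reference, a spec like this is normally handed to the orchestrator as an OSD service spec. The following is only a sketch of the equivalent command-line workflow, assuming the usual cephadm service-spec YAML format; the file name osd-spec.yml is illustrative and not part of this report:

cat > osd-spec.yml <<'EOF'
service_type: osd
service_id: dashboard-admin-1589814409945
placement:
  host_pattern: node3
data_devices:
  rotational: true
encrypted: true
EOF
# Hand the spec to the orchestrator; cephadm then runs the
# "ceph-volume lvm batch ... --dmcrypt" command that shows up in the log below.
ceph orch apply -i osd-spec.yml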
node3:~ # tail -f /var/log/ceph/8da4db30-990a-11ea-9543-5254006700ba/ceph-volume.log 
[2020-05-18 14:56:33,982][ceph_volume.main][INFO  ] Running command: ceph-volume  lvm batch --no-auto /dev/vdb /dev/vdc --dmcrypt --yes --no-systemd
[2020-05-18 14:56:33,983][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-18 14:56:33,986][ceph_volume.process][INFO  ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-18 14:56:33,986][ceph_volume.process][INFO  ] stdout /dev/vda  /dev/vda                                                                                                       disk
[2020-05-18 14:56:33,986][ceph_volume.process][INFO  ] stdout /dev/vda1 /dev/vda1                                                                                                      part
[2020-05-18 14:56:33,987][ceph_volume.process][INFO  ] stdout /dev/vda2 /dev/vda2                                                                                                      part
[2020-05-18 14:56:33,987][ceph_volume.process][INFO  ] stdout /dev/vda3 /dev/vda3                                                                                                      part
[2020-05-18 14:56:33,988][ceph_volume.process][INFO  ] stdout /dev/vdb  /dev/vdb                                                                                                       disk
[2020-05-18 14:56:33,988][ceph_volume.process][INFO  ] stdout /dev/vdc  /dev/vdc                                                                                                       disk
[2020-05-18 14:56:33,993][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-18 14:56:33,997][ceph_volume.process][INFO  ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-18 14:56:33,997][ceph_volume.process][INFO  ] stdout /dev/vda  /dev/vda                                                                                                       disk
[2020-05-18 14:56:33,997][ceph_volume.process][INFO  ] stdout /dev/vda1 /dev/vda1                                                                                                      part
[2020-05-18 14:56:33,997][ceph_volume.process][INFO  ] stdout /dev/vda2 /dev/vda2                                                                                                      part
[2020-05-18 14:56:33,997][ceph_volume.process][INFO  ] stdout /dev/vda3 /dev/vda3                                                                                                      part
[2020-05-18 14:56:33,997][ceph_volume.process][INFO  ] stdout /dev/vdb  /dev/vdb                                                                                                       disk
[2020-05-18 14:56:33,997][ceph_volume.process][INFO  ] stdout /dev/vdc  /dev/vdc                                                                                                       disk
[2020-05-18 14:56:34,002][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-18 14:56:34,040][ceph_volume.process][INFO  ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-18 14:56:34,041][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdb
[2020-05-18 14:56:34,045][ceph_volume.process][INFO  ] stdout NAME="vdb" KNAME="vdb" MAJ:MIN="254:16" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-05-18 14:56:34,046][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -p /dev/vdb
[2020-05-18 14:56:34,048][ceph_volume.process][INFO  ] stdout /dev/vdb: UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" 
[2020-05-18 14:56:34,049][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdb
[2020-05-18 14:56:34,091][ceph_volume.process][INFO  ] stdout ceph-68b24983-1712-473e-ae24-cdc0354325f5";"1";"1";"wz--n-";"2047";"0";"4194304
[2020-05-18 14:56:34,092][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdb
[2020-05-18 14:56:34,135][ceph_volume.process][INFO  ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-18 14:56:34,136][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-18 14:56:34,157][ceph_volume.process][INFO  ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-18 14:56:34,158][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-18 14:56:34,180][ceph_volume.process][INFO  ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-18 14:56:34,181][ceph_volume.process][INFO  ] Running command: /usr/bin/udevadm info --query=property /dev/vdb
[2020-05-18 14:56:34,184][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-path/virtio-pci-0000:00:04.0 /dev/disk/by-path/pci-0000:00:04.0 /dev/disk/by-id/virtio-577378 /dev/disk/by-id/lvm-pv-uuid-ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-18 14:56:34,185][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/vdb
[2020-05-18 14:56:34,186][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:00/0000:00:04.0/virtio1/block/vdb
[2020-05-18 14:56:34,186][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2020-05-18 14:56:34,187][ceph_volume.process][INFO  ] stdout ID_FS_TYPE=LVM2_member
[2020-05-18 14:56:34,188][ceph_volume.process][INFO  ] stdout ID_FS_USAGE=raid
[2020-05-18 14:56:34,188][ceph_volume.process][INFO  ] stdout ID_FS_UUID=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-18 14:56:34,189][ceph_volume.process][INFO  ] stdout ID_FS_UUID_ENC=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-18 14:56:34,190][ceph_volume.process][INFO  ] stdout ID_FS_VERSION=LVM2 001
[2020-05-18 14:56:34,190][ceph_volume.process][INFO  ] stdout ID_MODEL=LVM PV ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 on /dev/vdb
[2020-05-18 14:56:34,190][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:04.0
[2020-05-18 14:56:34,190][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_00_04_0
[2020-05-18 14:56:34,191][ceph_volume.process][INFO  ] stdout ID_SERIAL=577378
[2020-05-18 14:56:34,191][ceph_volume.process][INFO  ] stdout MAJOR=254
[2020-05-18 14:56:34,191][ceph_volume.process][INFO  ] stdout MINOR=16
[2020-05-18 14:56:34,191][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2020-05-18 14:56:34,191][ceph_volume.process][INFO  ] stdout SYSTEMD_ALIAS=/dev/block/254:16
[2020-05-18 14:56:34,191][ceph_volume.process][INFO  ] stdout SYSTEMD_READY=1
[2020-05-18 14:56:34,191][ceph_volume.process][INFO  ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:16.service
[2020-05-18 14:56:34,191][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2020-05-18 14:56:34,191][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=9914185
[2020-05-18 14:56:34,192][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-18 14:56:34,231][ceph_volume.process][INFO  ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-18 14:56:34,231][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdc
[2020-05-18 14:56:34,237][ceph_volume.process][INFO  ] stdout NAME="vdc" KNAME="vdc" MAJ:MIN="254:32" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-05-18 14:56:34,238][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -p /dev/vdc
[2020-05-18 14:56:34,242][ceph_volume.process][INFO  ] stdout /dev/vdc: UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" 
[2020-05-18 14:56:34,243][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdc
[2020-05-18 14:56:34,287][ceph_volume.process][INFO  ] stdout ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"1";"0";"wz--n-";"2047";"2047";"4194304
[2020-05-18 14:56:34,288][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdc
[2020-05-18 14:56:34,327][ceph_volume.process][INFO  ] stdout ";"/dev/ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da/";"";"ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"";"0
[2020-05-18 14:56:34,328][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-18 14:56:34,349][ceph_volume.process][INFO  ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-18 14:56:34,349][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-18 14:56:34,370][ceph_volume.process][INFO  ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-18 14:56:34,370][ceph_volume.process][INFO  ] Running command: /usr/bin/udevadm info --query=property /dev/vdc
[2020-05-18 14:56:34,374][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-id/virtio-727088 /dev/disk/by-id/lvm-pv-uuid-y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 /dev/disk/by-path/pci-0000:00:05.0 /dev/disk/by-path/virtio-pci-0000:00:05.0
[2020-05-18 14:56:34,375][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/vdc
[2020-05-18 14:56:34,375][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:00/0000:00:05.0/virtio2/block/vdc
[2020-05-18 14:56:34,375][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2020-05-18 14:56:34,375][ceph_volume.process][INFO  ] stdout ID_FS_TYPE=LVM2_member
[2020-05-18 14:56:34,375][ceph_volume.process][INFO  ] stdout ID_FS_USAGE=raid
[2020-05-18 14:56:34,375][ceph_volume.process][INFO  ] stdout ID_FS_UUID=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-18 14:56:34,375][ceph_volume.process][INFO  ] stdout ID_FS_UUID_ENC=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-18 14:56:34,375][ceph_volume.process][INFO  ] stdout ID_FS_VERSION=LVM2 001
[2020-05-18 14:56:34,375][ceph_volume.process][INFO  ] stdout ID_MODEL=LVM PV y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 on /dev/vdc
[2020-05-18 14:56:34,375][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:05.0
[2020-05-18 14:56:34,376][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_00_05_0
[2020-05-18 14:56:34,376][ceph_volume.process][INFO  ] stdout ID_SERIAL=727088
[2020-05-18 14:56:34,376][ceph_volume.process][INFO  ] stdout MAJOR=254
[2020-05-18 14:56:34,376][ceph_volume.process][INFO  ] stdout MINOR=32
[2020-05-18 14:56:34,376][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2020-05-18 14:56:34,376][ceph_volume.process][INFO  ] stdout SYSTEMD_ALIAS=/dev/block/254:32
[2020-05-18 14:56:34,376][ceph_volume.process][INFO  ] stdout SYSTEMD_READY=1
[2020-05-18 14:56:34,376][ceph_volume.process][INFO  ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:32.service
[2020-05-18 14:56:34,376][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2020-05-18 14:56:34,377][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=9845721
[2020-05-18 14:56:35,745][ceph_volume.main][INFO  ] Running command: ceph-volume  lvm list --format json
[2020-05-18 14:56:35,745][ceph_volume.main][ERROR ] ignoring inability to load ceph.conf
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 142, in main
    conf.ceph = configuration.load(conf.path)
  File "/usr/lib/python3.6/site-packages/ceph_volume/configuration.py", line 51, in load
    raise exceptions.ConfigurationError(abspath=abspath)
ceph_volume.exceptions.ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf
[2020-05-18 14:56:35,746][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -S  -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-18 14:56:35,791][ceph_volume.process][INFO  ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-18 14:56:37,157][ceph_volume.main][INFO  ] Running command: ceph-volume  lvm batch --no-auto /dev/vdb /dev/vdc --dmcrypt --yes --no-systemd
[2020-05-18 14:56:37,158][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-18 14:56:37,161][ceph_volume.process][INFO  ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-18 14:56:37,161][ceph_volume.process][INFO  ] stdout /dev/vda  /dev/vda                                                                                                       disk
[2020-05-18 14:56:37,161][ceph_volume.process][INFO  ] stdout /dev/vda1 /dev/vda1                                                                                                      part
[2020-05-18 14:56:37,161][ceph_volume.process][INFO  ] stdout /dev/vda2 /dev/vda2                                                                                                      part
[2020-05-18 14:56:37,162][ceph_volume.process][INFO  ] stdout /dev/vda3 /dev/vda3                                                                                                      part
[2020-05-18 14:56:37,162][ceph_volume.process][INFO  ] stdout /dev/vdb  /dev/vdb                                                                                                       disk
[2020-05-18 14:56:37,162][ceph_volume.process][INFO  ] stdout /dev/vdc  /dev/vdc                                                                                                       disk
[2020-05-18 14:56:37,167][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-18 14:56:37,172][ceph_volume.process][INFO  ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-18 14:56:37,173][ceph_volume.process][INFO  ] stdout /dev/vda  /dev/vda                                                                                                       disk
[2020-05-18 14:56:37,173][ceph_volume.process][INFO  ] stdout /dev/vda1 /dev/vda1                                                                                                      part
[2020-05-18 14:56:37,173][ceph_volume.process][INFO  ] stdout /dev/vda2 /dev/vda2                                                                                                      part
[2020-05-18 14:56:37,173][ceph_volume.process][INFO  ] stdout /dev/vda3 /dev/vda3                                                                                                      part
[2020-05-18 14:56:37,173][ceph_volume.process][INFO  ] stdout /dev/vdb  /dev/vdb                                                                                                       disk
[2020-05-18 14:56:37,173][ceph_volume.process][INFO  ] stdout /dev/vdc  /dev/vdc                                                                                                       disk
[2020-05-18 14:56:37,177][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-18 14:56:37,219][ceph_volume.process][INFO  ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-18 14:56:37,219][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdb
[2020-05-18 14:56:37,224][ceph_volume.process][INFO  ] stdout NAME="vdb" KNAME="vdb" MAJ:MIN="254:16" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-05-18 14:56:37,225][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -p /dev/vdb
[2020-05-18 14:56:37,228][ceph_volume.process][INFO  ] stdout /dev/vdb: UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" 
[2020-05-18 14:56:37,229][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdb
[2020-05-18 14:56:37,267][ceph_volume.process][INFO  ] stdout ceph-68b24983-1712-473e-ae24-cdc0354325f5";"1";"1";"wz--n-";"2047";"0";"4194304
[2020-05-18 14:56:37,267][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdb
[2020-05-18 14:56:37,307][ceph_volume.process][INFO  ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-18 14:56:37,308][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-18 14:56:37,329][ceph_volume.process][INFO  ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-18 14:56:37,330][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-18 14:56:37,350][ceph_volume.process][INFO  ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-18 14:56:37,351][ceph_volume.process][INFO  ] Running command: /usr/bin/udevadm info --query=property /dev/vdb
[2020-05-18 14:56:37,354][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-id/lvm-pv-uuid-ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 /dev/disk/by-path/virtio-pci-0000:00:04.0 /dev/disk/by-path/pci-0000:00:04.0 /dev/disk/by-id/virtio-577378
[2020-05-18 14:56:37,354][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/vdb
[2020-05-18 14:56:37,354][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:00/0000:00:04.0/virtio1/block/vdb
[2020-05-18 14:56:37,355][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2020-05-18 14:56:37,355][ceph_volume.process][INFO  ] stdout ID_FS_TYPE=LVM2_member
[2020-05-18 14:56:37,355][ceph_volume.process][INFO  ] stdout ID_FS_USAGE=raid
[2020-05-18 14:56:37,355][ceph_volume.process][INFO  ] stdout ID_FS_UUID=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-18 14:56:37,355][ceph_volume.process][INFO  ] stdout ID_FS_UUID_ENC=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-18 14:56:37,355][ceph_volume.process][INFO  ] stdout ID_FS_VERSION=LVM2 001
[2020-05-18 14:56:37,356][ceph_volume.process][INFO  ] stdout ID_MODEL=LVM PV ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 on /dev/vdb
[2020-05-18 14:56:37,356][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:04.0
[2020-05-18 14:56:37,356][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_00_04_0
[2020-05-18 14:56:37,356][ceph_volume.process][INFO  ] stdout ID_SERIAL=577378
[2020-05-18 14:56:37,356][ceph_volume.process][INFO  ] stdout MAJOR=254
[2020-05-18 14:56:37,356][ceph_volume.process][INFO  ] stdout MINOR=16
[2020-05-18 14:56:37,356][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2020-05-18 14:56:37,357][ceph_volume.process][INFO  ] stdout SYSTEMD_ALIAS=/dev/block/254:16
[2020-05-18 14:56:37,357][ceph_volume.process][INFO  ] stdout SYSTEMD_READY=1
[2020-05-18 14:56:37,357][ceph_volume.process][INFO  ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:16.service
[2020-05-18 14:56:37,357][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2020-05-18 14:56:37,357][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=9914185
[2020-05-18 14:56:37,358][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-18 14:56:37,403][ceph_volume.process][INFO  ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-18 14:56:37,404][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdc
[2020-05-18 14:56:37,409][ceph_volume.process][INFO  ] stdout NAME="vdc" KNAME="vdc" MAJ:MIN="254:32" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-05-18 14:56:37,409][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -p /dev/vdc
[2020-05-18 14:56:37,413][ceph_volume.process][INFO  ] stdout /dev/vdc: UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" 
[2020-05-18 14:56:37,414][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdc
[2020-05-18 14:56:37,459][ceph_volume.process][INFO  ] stdout ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"1";"0";"wz--n-";"2047";"2047";"4194304
[2020-05-18 14:56:37,459][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdc
[2020-05-18 14:56:37,499][ceph_volume.process][INFO  ] stdout ";"/dev/ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da/";"";"ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"";"0
[2020-05-18 14:56:37,499][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-18 14:56:37,518][ceph_volume.process][INFO  ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-18 14:56:37,518][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-18 14:56:37,537][ceph_volume.process][INFO  ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-18 14:56:37,538][ceph_volume.process][INFO  ] Running command: /usr/bin/udevadm info --query=property /dev/vdc
[2020-05-18 14:56:37,542][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-path/virtio-pci-0000:00:05.0 /dev/disk/by-id/lvm-pv-uuid-y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 /dev/disk/by-path/pci-0000:00:05.0 /dev/disk/by-id/virtio-727088
[2020-05-18 14:56:37,542][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/vdc
[2020-05-18 14:56:37,542][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:00/0000:00:05.0/virtio2/block/vdc
[2020-05-18 14:56:37,542][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2020-05-18 14:56:37,543][ceph_volume.process][INFO  ] stdout ID_FS_TYPE=LVM2_member
[2020-05-18 14:56:37,543][ceph_volume.process][INFO  ] stdout ID_FS_USAGE=raid
[2020-05-18 14:56:37,543][ceph_volume.process][INFO  ] stdout ID_FS_UUID=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-18 14:56:37,543][ceph_volume.process][INFO  ] stdout ID_FS_UUID_ENC=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-18 14:56:37,543][ceph_volume.process][INFO  ] stdout ID_FS_VERSION=LVM2 001
[2020-05-18 14:56:37,543][ceph_volume.process][INFO  ] stdout ID_MODEL=LVM PV y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 on /dev/vdc
[2020-05-18 14:56:37,543][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:05.0
[2020-05-18 14:56:37,543][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_00_05_0
[2020-05-18 14:56:37,543][ceph_volume.process][INFO  ] stdout ID_SERIAL=727088
[2020-05-18 14:56:37,543][ceph_volume.process][INFO  ] stdout MAJOR=254
[2020-05-18 14:56:37,543][ceph_volume.process][INFO  ] stdout MINOR=32
[2020-05-18 14:56:37,543][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2020-05-18 14:56:37,543][ceph_volume.process][INFO  ] stdout SYSTEMD_ALIAS=/dev/block/254:32
[2020-05-18 14:56:37,543][ceph_volume.process][INFO  ] stdout SYSTEMD_READY=1
[2020-05-18 14:56:37,544][ceph_volume.process][INFO  ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:32.service
[2020-05-18 14:56:37,544][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2020-05-18 14:56:37,544][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=9845721
[2020-05-18 14:56:38,849][ceph_volume.main][INFO  ] Running command: ceph-volume  lvm list --format json
[2020-05-18 14:56:38,849][ceph_volume.main][ERROR ] ignoring inability to load ceph.conf
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 142, in main
    conf.ceph = configuration.load(conf.path)
  File "/usr/lib/python3.6/site-packages/ceph_volume/configuration.py", line 51, in load
    raise exceptions.ConfigurationError(abspath=abspath)
ceph_volume.exceptions.ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf
[2020-05-18 14:56:38,851][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -S  -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-18 14:56:38,895][ceph_volume.process][INFO  ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-18 14:56:40,189][ceph_volume.main][INFO  ] Running command: ceph-volume  lvm batch --no-auto /dev/vdb /dev/vdc --dmcrypt --yes --no-systemd
[2020-05-18 14:56:40,189][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-18 14:56:40,192][ceph_volume.process][INFO  ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-18 14:56:40,193][ceph_volume.process][INFO  ] stdout /dev/vda  /dev/vda                                                                                                       disk
[2020-05-18 14:56:40,193][ceph_volume.process][INFO  ] stdout /dev/vda1 /dev/vda1                                                                                                      part
[2020-05-18 14:56:40,193][ceph_volume.process][INFO  ] stdout /dev/vda2 /dev/vda2                                                                                                      part
[2020-05-18 14:56:40,193][ceph_volume.process][INFO  ] stdout /dev/vda3 /dev/vda3                                                                                                      part
[2020-05-18 14:56:40,193][ceph_volume.process][INFO  ] stdout /dev/vdb  /dev/vdb                                                                                                       disk
[2020-05-18 14:56:40,193][ceph_volume.process][INFO  ] stdout /dev/vdc  /dev/vdc                                                                                                       disk
[2020-05-18 14:56:40,199][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-18 14:56:40,203][ceph_volume.process][INFO  ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-18 14:56:40,204][ceph_volume.process][INFO  ] stdout /dev/vda  /dev/vda                                                                                                       disk
[2020-05-18 14:56:40,204][ceph_volume.process][INFO  ] stdout /dev/vda1 /dev/vda1                                                                                                      part
[2020-05-18 14:56:40,206][ceph_volume.process][INFO  ] stdout /dev/vda2 /dev/vda2                                                                                                      part
[2020-05-18 14:56:40,206][ceph_volume.process][INFO  ] stdout /dev/vda3 /dev/vda3                                                                                                      part
[2020-05-18 14:56:40,207][ceph_volume.process][INFO  ] stdout /dev/vdb  /dev/vdb                                                                                                       disk
[2020-05-18 14:56:40,207][ceph_volume.process][INFO  ] stdout /dev/vdc  /dev/vdc                                                                                                       disk
[2020-05-18 14:56:40,210][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-18 14:56:40,251][ceph_volume.process][INFO  ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-18 14:56:40,252][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdb
[2020-05-18 14:56:40,255][ceph_volume.process][INFO  ] stdout NAME="vdb" KNAME="vdb" MAJ:MIN="254:16" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-05-18 14:56:40,256][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -p /dev/vdb
[2020-05-18 14:56:40,258][ceph_volume.process][INFO  ] stdout /dev/vdb: UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" 
[2020-05-18 14:56:40,259][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdb
[2020-05-18 14:56:40,299][ceph_volume.process][INFO  ] stdout ceph-68b24983-1712-473e-ae24-cdc0354325f5";"1";"1";"wz--n-";"2047";"0";"4194304
[2020-05-18 14:56:40,300][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdb
[2020-05-18 14:56:40,343][ceph_volume.process][INFO  ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-18 14:56:40,344][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-18 14:56:40,366][ceph_volume.process][INFO  ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-18 14:56:40,367][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-18 14:56:40,388][ceph_volume.process][INFO  ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-18 14:56:40,389][ceph_volume.process][INFO  ] Running command: /usr/bin/udevadm info --query=property /dev/vdb
[2020-05-18 14:56:40,392][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-id/lvm-pv-uuid-ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 /dev/disk/by-id/virtio-577378 /dev/disk/by-path/pci-0000:00:04.0 /dev/disk/by-path/virtio-pci-0000:00:04.0
[2020-05-18 14:56:40,393][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/vdb
[2020-05-18 14:56:40,394][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:00/0000:00:04.0/virtio1/block/vdb
[2020-05-18 14:56:40,394][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2020-05-18 14:56:40,394][ceph_volume.process][INFO  ] stdout ID_FS_TYPE=LVM2_member
[2020-05-18 14:56:40,395][ceph_volume.process][INFO  ] stdout ID_FS_USAGE=raid
[2020-05-18 14:56:40,395][ceph_volume.process][INFO  ] stdout ID_FS_UUID=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-18 14:56:40,395][ceph_volume.process][INFO  ] stdout ID_FS_UUID_ENC=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-18 14:56:40,396][ceph_volume.process][INFO  ] stdout ID_FS_VERSION=LVM2 001
[2020-05-18 14:56:40,396][ceph_volume.process][INFO  ] stdout ID_MODEL=LVM PV ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 on /dev/vdb
[2020-05-18 14:56:40,397][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:04.0
[2020-05-18 14:56:40,397][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_00_04_0
[2020-05-18 14:56:40,397][ceph_volume.process][INFO  ] stdout ID_SERIAL=577378
[2020-05-18 14:56:40,397][ceph_volume.process][INFO  ] stdout MAJOR=254
[2020-05-18 14:56:40,398][ceph_volume.process][INFO  ] stdout MINOR=16
[2020-05-18 14:56:40,398][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2020-05-18 14:56:40,398][ceph_volume.process][INFO  ] stdout SYSTEMD_ALIAS=/dev/block/254:16
[2020-05-18 14:56:40,399][ceph_volume.process][INFO  ] stdout SYSTEMD_READY=1
[2020-05-18 14:56:40,399][ceph_volume.process][INFO  ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:16.service
[2020-05-18 14:56:40,399][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2020-05-18 14:56:40,400][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=9914185
[2020-05-18 14:56:40,400][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-18 14:56:40,443][ceph_volume.process][INFO  ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-18 14:56:40,444][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdc
[2020-05-18 14:56:40,448][ceph_volume.process][INFO  ] stdout NAME="vdc" KNAME="vdc" MAJ:MIN="254:32" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-05-18 14:56:40,449][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -p /dev/vdc
[2020-05-18 14:56:40,452][ceph_volume.process][INFO  ] stdout /dev/vdc: UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" 
[2020-05-18 14:56:40,453][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdc
[2020-05-18 14:56:40,491][ceph_volume.process][INFO  ] stdout ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"1";"0";"wz--n-";"2047";"2047";"4194304
[2020-05-18 14:56:40,492][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdc
[2020-05-18 14:56:40,527][ceph_volume.process][INFO  ] stdout ";"/dev/ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da/";"";"ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"";"0
[2020-05-18 14:56:40,528][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-18 14:56:40,548][ceph_volume.process][INFO  ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-18 14:56:40,549][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-18 14:56:40,568][ceph_volume.process][INFO  ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-18 14:56:40,569][ceph_volume.process][INFO  ] Running command: /usr/bin/udevadm info --query=property /dev/vdc
[2020-05-18 14:56:40,573][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-path/virtio-pci-0000:00:05.0 /dev/disk/by-id/virtio-727088 /dev/disk/by-path/pci-0000:00:05.0 /dev/disk/by-id/lvm-pv-uuid-y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-18 14:56:40,574][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/vdc
[2020-05-18 14:56:40,574][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:00/0000:00:05.0/virtio2/block/vdc
[2020-05-18 14:56:40,574][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2020-05-18 14:56:40,575][ceph_volume.process][INFO  ] stdout ID_FS_TYPE=LVM2_member
[2020-05-18 14:56:40,575][ceph_volume.process][INFO  ] stdout ID_FS_USAGE=raid
[2020-05-18 14:56:40,575][ceph_volume.process][INFO  ] stdout ID_FS_UUID=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-18 14:56:40,576][ceph_volume.process][INFO  ] stdout ID_FS_UUID_ENC=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-18 14:56:40,576][ceph_volume.process][INFO  ] stdout ID_FS_VERSION=LVM2 001
[2020-05-18 14:56:40,576][ceph_volume.process][INFO  ] stdout ID_MODEL=LVM PV y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 on /dev/vdc
[2020-05-18 14:56:40,577][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:05.0
[2020-05-18 14:56:40,577][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_00_05_0
[2020-05-18 14:56:40,577][ceph_volume.process][INFO  ] stdout ID_SERIAL=727088
[2020-05-18 14:56:40,578][ceph_volume.process][INFO  ] stdout MAJOR=254
[2020-05-18 14:56:40,579][ceph_volume.process][INFO  ] stdout MINOR=32
[2020-05-18 14:56:40,579][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2020-05-18 14:56:40,579][ceph_volume.process][INFO  ] stdout SYSTEMD_ALIAS=/dev/block/254:32
[2020-05-18 14:56:40,579][ceph_volume.process][INFO  ] stdout SYSTEMD_READY=1
[2020-05-18 14:56:40,580][ceph_volume.process][INFO  ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:32.service
[2020-05-18 14:56:40,580][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2020-05-18 14:56:40,580][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=9845721
[2020-05-18 14:56:41,823][ceph_volume.main][INFO  ] Running command: ceph-volume  lvm list --format json
[2020-05-18 14:56:41,823][ceph_volume.main][ERROR ] ignoring inability to load ceph.conf
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 142, in main
    conf.ceph = configuration.load(conf.path)
  File "/usr/lib/python3.6/site-packages/ceph_volume/configuration.py", line 51, in load
    raise exceptions.ConfigurationError(abspath=abspath)
ceph_volume.exceptions.ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf
[2020-05-18 14:56:41,824][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -S  -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-18 14:56:41,863][ceph_volume.process][INFO  ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-18 14:56:43,160][ceph_volume.main][INFO  ] Running command: ceph-volume  lvm batch --no-auto /dev/vdb /dev/vdc --dmcrypt --yes --no-systemd
[2020-05-18 14:56:43,161][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-18 14:56:43,164][ceph_volume.process][INFO  ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-18 14:56:43,164][ceph_volume.process][INFO  ] stdout /dev/vda  /dev/vda                                                                                                       disk
[2020-05-18 14:56:43,164][ceph_volume.process][INFO  ] stdout /dev/vda1 /dev/vda1                                                                                                      part
[2020-05-18 14:56:43,165][ceph_volume.process][INFO  ] stdout /dev/vda2 /dev/vda2                                                                                                      part
[2020-05-18 14:56:43,165][ceph_volume.process][INFO  ] stdout /dev/vda3 /dev/vda3                                                                                                      part
[2020-05-18 14:56:43,165][ceph_volume.process][INFO  ] stdout /dev/vdb  /dev/vdb                                                                                                       disk
[2020-05-18 14:56:43,165][ceph_volume.process][INFO  ] stdout /dev/vdc  /dev/vdc                                                                                                       disk
[2020-05-18 14:56:43,172][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-05-18 14:56:43,175][ceph_volume.process][INFO  ] stdout /dev/dm-0 /dev/mapper/ceph--68b24983--1712--473e--ae24--cdc0354325f5-osd--data--29e335ea--a1ff--401f--8e7a--8d0455ebdef8 lvm
[2020-05-18 14:56:43,176][ceph_volume.process][INFO  ] stdout /dev/vda  /dev/vda                                                                                                       disk
[2020-05-18 14:56:43,176][ceph_volume.process][INFO  ] stdout /dev/vda1 /dev/vda1                                                                                                      part
[2020-05-18 14:56:43,176][ceph_volume.process][INFO  ] stdout /dev/vda2 /dev/vda2                                                                                                      part
[2020-05-18 14:56:43,176][ceph_volume.process][INFO  ] stdout /dev/vda3 /dev/vda3                                                                                                      part
[2020-05-18 14:56:43,176][ceph_volume.process][INFO  ] stdout /dev/vdb  /dev/vdb                                                                                                       disk
[2020-05-18 14:56:43,176][ceph_volume.process][INFO  ] stdout /dev/vdc  /dev/vdc                                                                                                       disk
[2020-05-18 14:56:43,183][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-18 14:56:43,227][ceph_volume.process][INFO  ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-18 14:56:43,228][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdb
[2020-05-18 14:56:43,232][ceph_volume.process][INFO  ] stdout NAME="vdb" KNAME="vdb" MAJ:MIN="254:16" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-05-18 14:56:43,233][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -p /dev/vdb
[2020-05-18 14:56:43,235][ceph_volume.process][INFO  ] stdout /dev/vdb: UUID="ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" 
[2020-05-18 14:56:43,235][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdb
[2020-05-18 14:56:43,279][ceph_volume.process][INFO  ] stdout ceph-68b24983-1712-473e-ae24-cdc0354325f5";"1";"1";"wz--n-";"2047";"0";"4194304
[2020-05-18 14:56:43,280][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdb
[2020-05-18 14:56:43,315][ceph_volume.process][INFO  ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-18 14:56:43,316][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-18 14:56:43,334][ceph_volume.process][INFO  ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-18 14:56:43,335][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdb
[2020-05-18 14:56:43,355][ceph_volume.process][INFO  ] stderr unable to read label for /dev/vdb: (2) No such file or directory
[2020-05-18 14:56:43,355][ceph_volume.process][INFO  ] Running command: /usr/bin/udevadm info --query=property /dev/vdb
[2020-05-18 14:56:43,359][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-id/lvm-pv-uuid-ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 /dev/disk/by-path/virtio-pci-0000:00:04.0 /dev/disk/by-path/pci-0000:00:04.0 /dev/disk/by-id/virtio-577378
[2020-05-18 14:56:43,359][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/vdb
[2020-05-18 14:56:43,359][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:00/0000:00:04.0/virtio1/block/vdb
[2020-05-18 14:56:43,359][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2020-05-18 14:56:43,359][ceph_volume.process][INFO  ] stdout ID_FS_TYPE=LVM2_member
[2020-05-18 14:56:43,359][ceph_volume.process][INFO  ] stdout ID_FS_USAGE=raid
[2020-05-18 14:56:43,359][ceph_volume.process][INFO  ] stdout ID_FS_UUID=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-18 14:56:43,359][ceph_volume.process][INFO  ] stdout ID_FS_UUID_ENC=ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39
[2020-05-18 14:56:43,360][ceph_volume.process][INFO  ] stdout ID_FS_VERSION=LVM2 001
[2020-05-18 14:56:43,360][ceph_volume.process][INFO  ] stdout ID_MODEL=LVM PV ZuZ6uM-vbjO-aQBs-yqaO-q0ZK-Vxhn-02VO39 on /dev/vdb
[2020-05-18 14:56:43,360][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:04.0
[2020-05-18 14:56:43,360][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_00_04_0
[2020-05-18 14:56:43,360][ceph_volume.process][INFO  ] stdout ID_SERIAL=577378
[2020-05-18 14:56:43,360][ceph_volume.process][INFO  ] stdout MAJOR=254
[2020-05-18 14:56:43,360][ceph_volume.process][INFO  ] stdout MINOR=16
[2020-05-18 14:56:43,360][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2020-05-18 14:56:43,360][ceph_volume.process][INFO  ] stdout SYSTEMD_ALIAS=/dev/block/254:16
[2020-05-18 14:56:43,360][ceph_volume.process][INFO  ] stdout SYSTEMD_READY=1
[2020-05-18 14:56:43,360][ceph_volume.process][INFO  ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:16.service
[2020-05-18 14:56:43,360][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2020-05-18 14:56:43,360][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=9914185
[2020-05-18 14:56:43,361][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-18 14:56:43,399][ceph_volume.process][INFO  ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
[2020-05-18 14:56:43,399][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/vdc
[2020-05-18 14:56:43,403][ceph_volume.process][INFO  ] stdout NAME="vdc" KNAME="vdc" MAJ:MIN="254:32" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" RO="0" RM="0" MODEL="" SIZE="8G" STATE="" OWNER="" GROUP="" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="bfq" TYPE="disk" DISC-ALN="512" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-05-18 14:56:43,404][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -p /dev/vdc
[2020-05-18 14:56:43,407][ceph_volume.process][INFO  ] stdout /dev/vdc: UUID="y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" 
[2020-05-18 14:56:43,408][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/vdc
[2020-05-18 14:56:43,451][ceph_volume.process][INFO  ] stdout ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"1";"0";"wz--n-";"2047";"2047";"4194304
[2020-05-18 14:56:43,451][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/vdc
[2020-05-18 14:56:43,487][ceph_volume.process][INFO  ] stdout ";"/dev/ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da/";"";"ceph-c80b4fa2-d9f2-4905-a562-c117e25e37da";"";"0
[2020-05-18 14:56:43,487][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-18 14:56:43,506][ceph_volume.process][INFO  ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-18 14:56:43,507][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/vdc
[2020-05-18 14:56:43,525][ceph_volume.process][INFO  ] stderr unable to read label for /dev/vdc: (2) No such file or directory
[2020-05-18 14:56:43,526][ceph_volume.process][INFO  ] Running command: /usr/bin/udevadm info --query=property /dev/vdc
[2020-05-18 14:56:43,530][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-path/virtio-pci-0000:00:05.0 /dev/disk/by-path/pci-0000:00:05.0 /dev/disk/by-id/virtio-727088 /dev/disk/by-id/lvm-pv-uuid-y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-18 14:56:43,530][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/vdc
[2020-05-18 14:56:43,530][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:00/0000:00:05.0/virtio2/block/vdc
[2020-05-18 14:56:43,530][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2020-05-18 14:56:43,530][ceph_volume.process][INFO  ] stdout ID_FS_TYPE=LVM2_member
[2020-05-18 14:56:43,530][ceph_volume.process][INFO  ] stdout ID_FS_USAGE=raid
[2020-05-18 14:56:43,530][ceph_volume.process][INFO  ] stdout ID_FS_UUID=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-18 14:56:43,530][ceph_volume.process][INFO  ] stdout ID_FS_UUID_ENC=y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3
[2020-05-18 14:56:43,530][ceph_volume.process][INFO  ] stdout ID_FS_VERSION=LVM2 001
[2020-05-18 14:56:43,530][ceph_volume.process][INFO  ] stdout ID_MODEL=LVM PV y5xuUw-n1xr-UKJM-TQ69-SJ33-0kpi-CyNWk3 on /dev/vdc
[2020-05-18 14:56:43,530][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:05.0
[2020-05-18 14:56:43,531][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_00_05_0
[2020-05-18 14:56:43,531][ceph_volume.process][INFO  ] stdout ID_SERIAL=727088
[2020-05-18 14:56:43,531][ceph_volume.process][INFO  ] stdout MAJOR=254
[2020-05-18 14:56:43,531][ceph_volume.process][INFO  ] stdout MINOR=32
[2020-05-18 14:56:43,531][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2020-05-18 14:56:43,531][ceph_volume.process][INFO  ] stdout SYSTEMD_ALIAS=/dev/block/254:32
[2020-05-18 14:56:43,531][ceph_volume.process][INFO  ] stdout SYSTEMD_READY=1
[2020-05-18 14:56:43,531][ceph_volume.process][INFO  ] stdout SYSTEMD_WANTS=lvm2-pvscan@254:32.service
[2020-05-18 14:56:43,531][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2020-05-18 14:56:43,531][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=9845721
[2020-05-18 14:56:44,808][ceph_volume.main][INFO  ] Running command: ceph-volume  lvm list --format json
[2020-05-18 14:56:44,808][ceph_volume.main][ERROR ] ignoring inability to load ceph.conf
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 142, in main
    conf.ceph = configuration.load(conf.path)
  File "/usr/lib/python3.6/site-packages/ceph_volume/configuration.py", line 51, in load
    raise exceptions.ConfigurationError(abspath=abspath)
ceph_volume.exceptions.ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf
[2020-05-18 14:56:44,810][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -S  -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-05-18 14:56:44,855][ceph_volume.process][INFO  ] stdout ceph.cluster_fsid=null,ceph.osd_fsid=null,ceph.osd_id=null,ceph.type=null";"/dev/ceph-68b24983-1712-473e-ae24-cdc0354325f5/osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"osd-data-29e335ea-a1ff-401f-8e7a-8d0455ebdef8";"ceph-68b24983-1712-473e-ae24-cdc0354325f5";"XmKzXT-TrZd-hN9n-me16-jTCl-oQgJ-oHquZu";"8.00g
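
To reproduce the failing step outside the orchestrator, the same ceph-volume invocation seen in the log can presumably be run by hand; a sketch, assuming the cephadm ceph-volume wrapper is available on node3 (device paths are the ones from this report):

# Run ceph-volume inside the cephadm container with the exact arguments from the log:
cephadm ceph-volume -- lvm batch --no-auto /dev/vdb /dev/vdc --dmcrypt --yes --no-systemd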

Related issues 2 (0 open, 2 closed)

Related to ceph-volume - Bug #51765: executing create_from_spec_one failed / KeyError: 'ceph.cluster_fsid' (Duplicate)

Is duplicate of Orchestrator - Feature #44625: cephadm: test dmcrypt (Resolved)

