Bug #52604 » ceph-volume.log

Konstantin Shalygin, 09/14/2021 12:20 PM

 
[2021-09-14 10:06:02,278][ceph_volume.main][INFO ] Running command: ceph-volume lvm create --crush-device-class=nvme_stat --data=/dev/nvme1n1
[2021-09-14 10:06:02,279][ceph_volume.process][INFO ] Running command: /bin/lsblk -plno KNAME,NAME,TYPE
[2021-09-14 10:06:02,281][ceph_volume.process][INFO ] stdout /dev/sda /dev/sda disk
[2021-09-14 10:06:02,281][ceph_volume.process][INFO ] stdout /dev/sda1 /dev/sda1 part
[2021-09-14 10:06:02,281][ceph_volume.process][INFO ] stdout /dev/sda2 /dev/sda2 part
[2021-09-14 10:06:02,281][ceph_volume.process][INFO ] stdout /dev/sda3 /dev/sda3 part
[2021-09-14 10:06:02,281][ceph_volume.process][INFO ] stdout /dev/sda4 /dev/sda4 part
[2021-09-14 10:06:02,281][ceph_volume.process][INFO ] stdout /dev/sdb /dev/sdb disk
[2021-09-14 10:06:02,281][ceph_volume.process][INFO ] stdout /dev/sdb1 /dev/sdb1 part
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/sdb2 /dev/sdb2 part
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/sdb3 /dev/sdb3 part
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/sdb4 /dev/sdb4 part
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/md0 /dev/md0 raid1
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/md0 /dev/md0 raid1
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/md1 /dev/md1 raid1
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/md1 /dev/md1 raid1
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/md2 /dev/md2 raid1
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/md2 /dev/md2 raid1
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/md3 /dev/md3 raid1
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/md3 /dev/md3 raid1
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/nvme0n1 /dev/nvme0n1 disk
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/nvme0n1p1 /dev/nvme0n1p1 part
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/nvme0n1p2 /dev/nvme0n1p2 part
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/nvme0n1p3 /dev/nvme0n1p3 part
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/nvme0n1p4 /dev/nvme0n1p4 part
[2021-09-14 10:06:02,282][ceph_volume.process][INFO ] stdout /dev/nvme1n1 /dev/nvme1n1 disk
[2021-09-14 10:06:02,284][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/nvme1n1 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2021-09-14 10:06:02,323][ceph_volume.process][INFO ] Running command: /bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/nvme1n1
[2021-09-14 10:06:02,325][ceph_volume.process][INFO ] stdout NAME="nvme1n1" KNAME="nvme1n1" MAJ:MIN="259:5" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="INTEL SSDPEDMD016T4 " SIZE="1.5T" STATE="live" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="4096" ROTA="0" SCHED="none" TYPE="disk" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2T" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2021-09-14 10:06:02,326][ceph_volume.process][INFO ] Running command: /sbin/blkid -c /dev/null -p /dev/nvme1n1
[2021-09-14 10:06:02,329][ceph_volume.process][INFO ] Running command: /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/nvme1n1
[2021-09-14 10:06:02,347][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/nvme1n1".
[2021-09-14 10:06:02,347][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/nvme1n1
[2021-09-14 10:06:02,360][ceph_volume.process][INFO ] stderr unable to read label for /dev/nvme1n1: (2) No such file or directory
[2021-09-14 10:06:02,361][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/nvme1n1
[2021-09-14 10:06:02,373][ceph_volume.process][INFO ] stderr unable to read label for /dev/nvme1n1: (2) No such file or directory
[2021-09-14 10:06:02,374][ceph_volume.process][INFO ] Running command: /sbin/udevadm info --query=property /dev/nvme1n1
[2021-09-14 10:06:02,375][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-path/pci-0000:02:00.0-nvme-1 /dev/disk/by-id/nvme-INTEL_SSDPEDMD016T4_CVFT5072000D1P6DGN /dev/disk/by-id/nvme-nvme.8086-43564654353037323030304431503644474e-494e54454c205353445045444d443031365434-00000001
[2021-09-14 10:06:02,375][ceph_volume.process][INFO ] stdout DEVNAME=/dev/nvme1n1
[2021-09-14 10:06:02,375][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:02.0/0000:02:00.0/nvme/nvme1/nvme1n1
[2021-09-14 10:06:02,375][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2021-09-14 10:06:02,375][ceph_volume.process][INFO ] stdout ID_MODEL=INTEL SSDPEDMD016T4
[2021-09-14 10:06:02,375][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:02:00.0-nvme-1
[2021-09-14 10:06:02,376][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_02_00_0-nvme-1
[2021-09-14 10:06:02,376][ceph_volume.process][INFO ] stdout ID_SERIAL=INTEL SSDPEDMD016T4_CVFT5072000D1P6DGN
[2021-09-14 10:06:02,376][ceph_volume.process][INFO ] stdout ID_SERIAL_SHORT=CVFT5072000D1P6DGN
[2021-09-14 10:06:02,376][ceph_volume.process][INFO ] stdout ID_WWN=nvme.8086-43564654353037323030304431503644474e-494e54454c205353445045444d443031365434-00000001
[2021-09-14 10:06:02,376][ceph_volume.process][INFO ] stdout MAJOR=259
[2021-09-14 10:06:02,376][ceph_volume.process][INFO ] stdout MINOR=5
[2021-09-14 10:06:02,376][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2021-09-14 10:06:02,376][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2021-09-14 10:06:02,376][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=7682004
[2021-09-14 10:06:02,376][ceph_volume.process][INFO ] stdout net.ifnames=0
[2021-09-14 10:06:02,376][ceph_volume.api.lvm][WARNING] device is not part of ceph: None
[2021-09-14 10:06:02,376][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2021-09-14 10:06:02,386][ceph_volume.process][INFO ] stdout AQDaSUBh0y0CFxAAMsEMMuTzvDbrJKjtbpoQYg==
[2021-09-14 10:06:02,387][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 339980cf-4ed1-4ca6-93ca-3f1e4758f3eb
[2021-09-14 10:06:02,813][ceph_volume.process][INFO ] stdout 59
[2021-09-14 10:06:02,813][ceph_volume.process][INFO ] Running command: /bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/nvme1n1
[2021-09-14 10:06:02,816][ceph_volume.process][INFO ] stdout NAME="nvme1n1" KNAME="nvme1n1" MAJ:MIN="259:5" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="INTEL SSDPEDMD016T4 " SIZE="1.5T" STATE="live" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="4096" ROTA="0" SCHED="none" TYPE="disk" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2T" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2021-09-14 10:06:02,816][ceph_volume.process][INFO ] Running command: /bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/nvme1n1
[2021-09-14 10:06:02,818][ceph_volume.process][INFO ] stdout NAME="nvme1n1" KNAME="nvme1n1" MAJ:MIN="259:5" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="INTEL SSDPEDMD016T4 " SIZE="1.5T" STATE="live" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="4096" ROTA="0" SCHED="none" TYPE="disk" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2T" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2021-09-14 10:06:02,818][ceph_volume.devices.lvm.prepare][DEBUG ] data device size: 0.00 B
[2021-09-14 10:06:02,819][ceph_volume.process][INFO ] Running command: /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/nvme1n1
[2021-09-14 10:06:02,848][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/nvme1n1".
[2021-09-14 10:06:02,849][ceph_volume.process][INFO ] Running command: /sbin/vgcreate --force --yes ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4 /dev/nvme1n1
[2021-09-14 10:06:02,914][ceph_volume.process][INFO ] stdout Physical volume "/dev/nvme1n1" successfully created.
[2021-09-14 10:06:02,933][ceph_volume.process][INFO ] stdout Volume group "ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4" successfully created
[2021-09-14 10:06:02,934][ceph_volume.process][INFO ] Running command: /sbin/vgs --noheadings --readonly --units=b --nosuffix --separator=";" -S vg_name=ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4 -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size
[2021-09-14 10:06:02,974][ceph_volume.process][INFO ] stdout ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4";"1";"0";"wz--n-";"381546";"381546";"4194304
[2021-09-14 10:06:02,975][ceph_volume.api.lvm][DEBUG ] slots was passed: 1 -> 381546
[2021-09-14 10:06:02,975][ceph_volume.process][INFO ] Running command: /sbin/lvcreate --yes -l 381546 -n osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4
[2021-09-14 10:06:03,023][ceph_volume.process][INFO ] stdout Logical volume "osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb" created.
[2021-09-14 10:06:03,025][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S vg_name=ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4,lv_name=osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2021-09-14 10:06:03,085][ceph_volume.process][INFO ] stdout ";"/dev/ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4/osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb";"osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb";"ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4";"nZgndb-0oUv-xedZ-uVdz-XkEY-zeCm-tZYMnL";"1600319913984
[2021-09-14 10:06:03,085][ceph_volume.process][INFO ] Running command: /sbin/lvchange --addtag ceph.block_device=/dev/ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4/osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb --addtag ceph.type=block /dev/ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4/osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb
[2021-09-14 10:06:03,130][ceph_volume.process][INFO ] stdout Logical volume ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4/osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb changed.
[2021-09-14 10:06:03,130][ceph_volume.process][INFO ] Running command: /sbin/lvchange --deltag ceph.block_device=/dev/ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4/osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb --deltag ceph.type=block /dev/ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4/osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb
[2021-09-14 10:06:03,178][ceph_volume.process][INFO ] stdout Logical volume ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4/osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb changed.
[2021-09-14 10:06:03,178][ceph_volume.process][INFO ] Running command: /sbin/lvchange --addtag ceph.osdspec_affinity= --addtag ceph.vdo=0 --addtag ceph.osd_id=59 --addtag ceph.osd_fsid=339980cf-4ed1-4ca6-93ca-3f1e4758f3eb --addtag ceph.cluster_name=ceph --addtag ceph.cluster_fsid=d168189f-6105-4223-b244-f59842404076 --addtag ceph.encrypted=0 --addtag ceph.cephx_lockbox_secret= --addtag ceph.type=block --addtag ceph.crush_device_class=nvme_stat --addtag ceph.block_device=/dev/ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4/osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb --addtag ceph.block_uuid=nZgndb-0oUv-xedZ-uVdz-XkEY-zeCm-tZYMnL /dev/ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4/osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb
[2021-09-14 10:06:03,243][ceph_volume.process][INFO ] stdout Logical volume ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4/osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb changed.
[2021-09-14 10:06:03,244][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2021-09-14 10:06:03,254][ceph_volume.process][INFO ] stdout AQDbSUBhu60gDxAAc3T5TrOKqg/boignVwCTmw==
[2021-09-14 10:06:03,255][ceph_volume.process][INFO ] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-59
[2021-09-14 10:06:03,257][ceph_volume.util.system][WARNING] Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
[2021-09-14 10:06:03,257][ceph_volume.process][INFO ] Running command: selinuxenabled
[2021-09-14 10:06:03,258][ceph_volume.util.system][INFO ] No SELinux found, skipping call to restorecon
[2021-09-14 10:06:03,259][ceph_volume.process][INFO ] Running command: /bin/chown -h ceph:ceph /dev/ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4/osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb
[2021-09-14 10:06:03,260][ceph_volume.process][INFO ] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[2021-09-14 10:06:03,262][ceph_volume.process][INFO ] Running command: /bin/ln -s /dev/ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4/osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb /var/lib/ceph/osd/ceph-59/block
[2021-09-14 10:06:03,263][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-59/activate.monmap
[2021-09-14 10:06:03,466][ceph_volume.process][INFO ] stderr got monmap epoch 53
[2021-09-14 10:06:03,476][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-59/keyring --create-keyring --name osd.59 --add-key AQDaSUBh0y0CFxAAMsEMMuTzvDbrJKjtbpoQYg==
[2021-09-14 10:06:03,488][ceph_volume.process][INFO ] stdout creating /var/lib/ceph/osd/ceph-59/keyring
added entity osd.59 auth(key=AQDaSUBh0y0CFxAAMsEMMuTzvDbrJKjtbpoQYg==)
[2021-09-14 10:06:03,489][ceph_volume.process][INFO ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-59/keyring
[2021-09-14 10:06:03,491][ceph_volume.process][INFO ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-59/
[2021-09-14 10:06:03,492][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 59 --monmap /var/lib/ceph/osd/ceph-59/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-59/ --osd-uuid 339980cf-4ed1-4ca6-93ca-3f1e4758f3eb --setuser ceph --setgroup ceph
[2021-09-14 10:06:11,202][ceph_volume.process][INFO ] stderr 2021-09-14 10:06:03.508 7fe70a358c00 -1 bluestore(/var/lib/ceph/osd/ceph-59/) _read_fsid unparsable uuid
[2021-09-14 10:06:11,203][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S tags={ceph.osd_fsid=339980cf-4ed1-4ca6-93ca-3f1e4758f3eb} -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2021-09-14 10:06:11,235][ceph_volume.process][INFO ] stdout ceph.block_device=/dev/ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4/osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb,ceph.block_uuid=nZgndb-0oUv-xedZ-uVdz-XkEY-zeCm-tZYMnL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d168189f-6105-4223-b244-f59842404076,ceph.cluster_name=ceph,ceph.crush_device_class=nvme_stat,ceph.encrypted=0,ceph.osd_fsid=339980cf-4ed1-4ca6-93ca-3f1e4758f3eb,ceph.osd_id=59,ceph.osdspec_affinity=,ceph.type=block,ceph.vdo=0";"/dev/ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4/osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb";"osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb";"ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4";"nZgndb-0oUv-xedZ-uVdz-XkEY-zeCm-tZYMnL";"1600319913984
[2021-09-14 10:06:11,237][ceph_volume.devices.lvm.activate][DEBUG ] Found block device (osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb) with encryption: False
[2021-09-14 10:06:11,237][ceph_volume.devices.lvm.activate][DEBUG ] Found block device (osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb) with encryption: False
[2021-09-14 10:06:11,237][ceph_volume.process][INFO ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-59
[2021-09-14 10:06:11,239][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4/osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb --path /var/lib/ceph/osd/ceph-59 --no-mon-config
[2021-09-14 10:06:11,253][ceph_volume.process][INFO ] Running command: /bin/ln -snf /dev/ceph-31cc8c9d-bb3b-4cc9-a575-5f24b11ebaf4/osd-block-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb /var/lib/ceph/osd/ceph-59/block
[2021-09-14 10:06:11,254][ceph_volume.process][INFO ] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-59/block
[2021-09-14 10:06:11,256][ceph_volume.process][INFO ] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[2021-09-14 10:06:11,257][ceph_volume.process][INFO ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-59
[2021-09-14 10:06:11,259][ceph_volume.process][INFO ] Running command: /bin/systemctl enable ceph-volume@lvm-59-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb
[2021-09-14 10:06:11,262][ceph_volume.process][INFO ] stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-59-339980cf-4ed1-4ca6-93ca-3f1e4758f3eb.service → /lib/systemd/system/ceph-volume@.service.
[2021-09-14 10:06:12,350][ceph_volume.process][INFO ] Running command: /bin/systemctl enable --runtime ceph-osd@59
[2021-09-14 10:06:13,438][ceph_volume.process][INFO ] Running command: /bin/systemctl start ceph-osd@59
[2021-09-14 10:10:38,348][ceph_volume.main][INFO ] Running command: ceph-volume lvm create --help