https://tracker.ceph.com/
2020-10-23T15:27:26Z
Ceph
ceph-volume - Bug #47966: Fails to deploy osd in rook, throws index error
https://tracker.ceph.com/issues/47966?journal_id=177855
2020-10-23T15:27:26Z
Jan Fajerski
lists@fajerski.name
Status changed from New to In Progress

Hmm, batch shouldn't accept partitions. That is certainly a bug.

But batch should only be fed bare devices or LVM logical volumes.
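For context, the command below is the one Rook ends up issuing against a partition (it also appears verbatim in the log in the next comment); this is an illustrative sketch of the failure mode, not a verified reproduction. Because /dev/nvme0n1p2 is a partition, ceph-volume drops it as unavailable, batch is left with an empty data-device list, and on some setups that surfaced as the IndexError in the title:

$ stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p2 --report
--> passed data devices: 0 physical, 0 LVM
--> All data devices are unavailable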
ceph-volume - Bug #47966: Fails to deploy osd in rook, throws index error
https://tracker.ceph.com/issues/47966?journal_id=177860
2020-10-23T17:05:09Z
Varsha Rao
On a teuthology smithi machine I don't get the error, but the OSDs are not deployed either. Is this expected behaviour?

varsha@smithi061:~/rook/cluster/examples/kubernetes/ceph$ kubectl logs -n rook-ceph rook-ceph-osd-prepare-minikube-jr65q
2020-10-20 13:38:09.144451 I | rookcmd: starting Rook v1.4.0-alpha.0.490.gdfb37dd with arguments '/rook/rook ceph osd provision'
2020-10-20 13:38:09.144519 I | rookcmd: flag values: --cluster-id=c832a97b-e86e-4a83-a1ae-a3b80febc302, --data-device-filter=nvme0n1, --data-device-path-filter=, --data-devices=, --drive-groups=, --encrypted-device=false, --force-format=false, --help=false, --location=, --log-flush-frequency=5s, --log-level=DEBUG, --metadata-device=, --node-name=minikube, --operator-image=, --osd-database-size=0, --osd-store=, --osd-wal-size=576, --osds-per-device=1, --pvc-backed-osd=false, --service-account=
2020-10-20 13:38:09.144524 I | op-mon: parsing mon endpoints: a=10.111.103.214:6789
2020-10-20 13:38:09.157650 I | op-osd: CRUSH location=root=default host=minikube
2020-10-20 13:38:09.157669 I | cephcmd: crush location of osd: root=default host=minikube
2020-10-20 13:38:09.157683 D | exec: Running command: nsenter --mount=/rootfs/proc/1/ns/mnt -- /usr/sbin/lvm --help
2020-10-20 13:38:09.168911 I | cephosd: successfully called nsenter
2020-10-20 13:38:09.168938 I | cephosd: binary "/usr/sbin/lvm" found on the host, proceeding with osd preparation
2020-10-20 13:38:09.183298 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-10-20 13:38:09.183679 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2020-10-20 13:38:09.183783 D | cephosd: config file @ /etc/ceph/ceph.conf: [global]
fsid = d0136c53-d3dc-47e2-b049-456a6d61c010
mon initial members = a
mon host = [v2:10.111.103.214:3300,v1:10.111.103.214:6789]
public addr = 172.17.0.9
cluster addr = 172.17.0.9
osd_pool_default_size = 1
mon_warn_on_pool_no_redundancy = false
[client.admin]
keyring = /var/lib/rook/rook-ceph/client.admin.keyring
2020-10-20 13:38:09.183792 I | cephosd: discovering hardware
2020-10-20 13:38:09.183800 D | exec: Running command: lsblk --all --noheadings --list --output KNAME
2020-10-20 13:38:09.189937 D | exec: Running command: lsblk /dev/loop0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.191914 W | inventory: skipping device "loop0". diskType is empty
2020-10-20 13:38:09.191930 D | exec: Running command: lsblk /dev/loop1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.194413 W | inventory: skipping device "loop1". diskType is empty
2020-10-20 13:38:09.194428 D | exec: Running command: lsblk /dev/loop2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.196224 W | inventory: skipping device "loop2". diskType is empty
2020-10-20 13:38:09.196240 D | exec: Running command: lsblk /dev/loop3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.198939 W | inventory: skipping device "loop3". diskType is empty
2020-10-20 13:38:09.198961 D | exec: Running command: lsblk /dev/loop4 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.201105 W | inventory: skipping device "loop4". diskType is empty
2020-10-20 13:38:09.201119 D | exec: Running command: lsblk /dev/loop5 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.202854 W | inventory: skipping device "loop5". diskType is empty
2020-10-20 13:38:09.202866 D | exec: Running command: lsblk /dev/loop6 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.204653 W | inventory: skipping device "loop6". diskType is empty
2020-10-20 13:38:09.204665 D | exec: Running command: lsblk /dev/loop7 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.206698 W | inventory: skipping device "loop7". diskType is empty
2020-10-20 13:38:09.206710 D | exec: Running command: lsblk /dev/sda --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.209193 D | exec: Running command: sgdisk --print /dev/sda
2020-10-20 13:38:09.211942 D | exec: Running command: udevadm info --query=property /dev/sda
2020-10-20 13:38:09.237181 D | exec: Running command: lsblk --noheadings --pairs /dev/sda
2020-10-20 13:38:09.243862 I | inventory: skipping device "sda" because it has child, considering the child instead.
2020-10-20 13:38:09.244971 D | exec: Running command: lsblk /dev/sda1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.249087 D | exec: Running command: udevadm info --query=property /dev/sda1
2020-10-20 13:38:09.255875 D | exec: Running command: lsblk /dev/nbd0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.258846 W | inventory: skipping device "nbd0". diskType is empty
2020-10-20 13:38:09.258882 D | exec: Running command: lsblk /dev/nbd1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.262294 W | inventory: skipping device "nbd1". diskType is empty
2020-10-20 13:38:09.262327 D | exec: Running command: lsblk /dev/nbd2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.265234 W | inventory: skipping device "nbd2". diskType is empty
2020-10-20 13:38:09.265266 D | exec: Running command: lsblk /dev/nbd3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.267475 W | inventory: skipping device "nbd3". diskType is empty
2020-10-20 13:38:09.267497 D | exec: Running command: lsblk /dev/nbd4 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.269607 W | inventory: skipping device "nbd4". diskType is empty
2020-10-20 13:38:09.269624 D | exec: Running command: lsblk /dev/nbd5 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.271593 W | inventory: skipping device "nbd5". diskType is empty
2020-10-20 13:38:09.271607 D | exec: Running command: lsblk /dev/nbd6 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.273025 W | inventory: skipping device "nbd6". diskType is empty
2020-10-20 13:38:09.273038 D | exec: Running command: lsblk /dev/nbd7 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.332620 W | inventory: skipping device "nbd7". diskType is empty
2020-10-20 13:38:09.332650 D | exec: Running command: lsblk /dev/nvme0n1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.336394 D | exec: Running command: sgdisk --print /dev/nvme0n1
2020-10-20 13:38:09.341831 D | exec: Running command: udevadm info --query=property /dev/nvme0n1
2020-10-20 13:38:09.352335 D | exec: Running command: lsblk --noheadings --pairs /dev/nvme0n1
2020-10-20 13:38:09.355855 I | inventory: skipping device "nvme0n1" because it has child, considering the child instead.
2020-10-20 13:38:09.355900 D | exec: Running command: lsblk /dev/nvme0n1p1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.357514 D | exec: Running command: udevadm info --query=property /dev/nvme0n1p1
2020-10-20 13:38:09.364383 D | exec: Running command: lsblk /dev/nvme0n1p2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.366084 D | exec: Running command: udevadm info --query=property /dev/nvme0n1p2
2020-10-20 13:38:09.372937 D | exec: Running command: lsblk /dev/nvme0n1p3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.375442 D | exec: Running command: udevadm info --query=property /dev/nvme0n1p3
2020-10-20 13:38:09.381842 D | exec: Running command: lsblk /dev/nbd8 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.384194 W | inventory: skipping device "nbd8". diskType is empty
2020-10-20 13:38:09.384223 D | exec: Running command: lsblk /dev/nbd9 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.387382 W | inventory: skipping device "nbd9". diskType is empty
2020-10-20 13:38:09.387411 D | exec: Running command: lsblk /dev/nbd10 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.389575 W | inventory: skipping device "nbd10". diskType is empty
2020-10-20 13:38:09.389602 D | exec: Running command: lsblk /dev/nbd11 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.391783 W | inventory: skipping device "nbd11". diskType is empty
2020-10-20 13:38:09.391805 D | exec: Running command: lsblk /dev/nbd12 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.393312 W | inventory: skipping device "nbd12". diskType is empty
2020-10-20 13:38:09.393342 D | exec: Running command: lsblk /dev/nbd13 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.395726 W | inventory: skipping device "nbd13". diskType is empty
2020-10-20 13:38:09.395745 D | exec: Running command: lsblk /dev/nbd14 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.435917 W | inventory: skipping device "nbd14". diskType is empty
2020-10-20 13:38:09.435947 D | exec: Running command: lsblk /dev/nbd15 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.439255 W | inventory: skipping device "nbd15". diskType is empty
2020-10-20 13:38:09.439283 D | inventory: discovered disks are [0xc0001e4c60 0xc0001b79e0 0xc000196240 0xc0001965a0]
2020-10-20 13:38:09.439291 I | cephosd: creating and starting the osds
2020-10-20 13:38:09.446247 D | cephosd: No Drive Groups configured.
2020-10-20 13:38:09.446288 D | cephosd: desiredDevices are [{Name:nvme0n1 OSDsPerDevice:1 MetadataDevice: DatabaseSizeMB:0 DeviceClass: IsFilter:true IsDevicePathFilter:false}]
2020-10-20 13:38:09.446300 D | cephosd: context.Devices are [0xc0001e4c60 0xc0001b79e0 0xc000196240 0xc0001965a0]
2020-10-20 13:38:09.446311 D | exec: Running command: udevadm info --query=property /dev/sda1
2020-10-20 13:38:09.453406 D | exec: Running command: lsblk /dev/sda1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.455612 D | exec: Running command: ceph-volume inventory --format json /dev/sda1
2020-10-20 13:38:10.374511 I | cephosd: device "sda1" is available.
2020-10-20 13:38:10.374569 I | cephosd: skipping device "sda1" that does not match the device filter/list ([{nvme0n1 1 0 true false}]). <nil>
2020-10-20 13:38:10.374580 D | exec: Running command: udevadm info --query=property /dev/nvme0n1p1
2020-10-20 13:38:10.379168 D | exec: Running command: lsblk /dev/nvme0n1p1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:10.380806 D | exec: Running command: ceph-volume inventory --format json /dev/nvme0n1p1
2020-10-20 13:38:10.967833 I | cephosd: device "nvme0n1p1" is available.
2020-10-20 13:38:10.967885 I | cephosd: device "nvme0n1p1" matches device filter "nvme0n1"
2020-10-20 13:38:10.967894 I | cephosd: device "nvme0n1p1" is selected by the device filter/name "nvme0n1"
2020-10-20 13:38:10.967907 D | exec: Running command: udevadm info --query=property /dev/nvme0n1p2
2020-10-20 13:38:10.976480 D | exec: Running command: lsblk /dev/nvme0n1p2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:10.979634 D | exec: Running command: ceph-volume inventory --format json /dev/nvme0n1p2
2020-10-20 13:38:11.542836 I | cephosd: device "nvme0n1p2" is available.
2020-10-20 13:38:11.542883 I | cephosd: device "nvme0n1p2" matches device filter "nvme0n1"
2020-10-20 13:38:11.542891 I | cephosd: device "nvme0n1p2" is selected by the device filter/name "nvme0n1"
2020-10-20 13:38:11.542902 D | exec: Running command: udevadm info --query=property /dev/nvme0n1p3
2020-10-20 13:38:11.548780 D | exec: Running command: lsblk /dev/nvme0n1p3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:11.551117 D | exec: Running command: ceph-volume inventory --format json /dev/nvme0n1p3
2020-10-20 13:38:12.253183 I | cephosd: device "nvme0n1p3" is available.
2020-10-20 13:38:12.253250 I | cephosd: device "nvme0n1p3" matches device filter "nvme0n1"
2020-10-20 13:38:12.253261 I | cephosd: device "nvme0n1p3" is selected by the device filter/name "nvme0n1"
2020-10-20 13:38:12.253478 I | cephosd: configuring osd devices: {"Entries":{"nvme0n1p1":{"Data":-1,"Metadata":null,"Config":{"Name":"nvme0n1","OSDsPerDevice":1,"MetadataDevice":"","DatabaseSizeMB":0,"DeviceClass":"","IsFilter":true,"IsDevicePathFilter":false},"PersistentDevicePaths":[]},"nvme0n1p2":{"Data":-1,"Metadata":null,"Config":{"Name":"nvme0n1","OSDsPerDevice":1,"MetadataDevice":"","DatabaseSizeMB":0,"DeviceClass":"","IsFilter":true,"IsDevicePathFilter":false},"PersistentDevicePaths":[]},"nvme0n1p3":{"Data":-1,"Metadata":null,"Config":{"Name":"nvme0n1","OSDsPerDevice":1,"MetadataDevice":"","DatabaseSizeMB":0,"DeviceClass":"","IsFilter":true,"IsDevicePathFilter":false},"PersistentDevicePaths":[]}}}
2020-10-20 13:38:12.253550 I | cephclient: getting or creating ceph auth key "client.bootstrap-osd"
2020-10-20 13:38:12.253809 D | exec: Running command: ceph auth get-or-create-key client.bootstrap-osd mon allow profile bootstrap-osd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/603019083
2020-10-20 13:38:12.492600 I | cephosd: configuring new device nvme0n1p2
2020-10-20 13:38:12.492634 I | cephosd: Base command - stdbuf
2020-10-20 13:38:12.492642 I | cephosd: immediateReportArgs - stdbuf
2020-10-20 13:38:12.492659 I | cephosd: immediateExecuteArgs - [-oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p2]
2020-10-20 13:38:12.492670 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p2 --report
2020-10-20 13:38:13.140340 D | exec: --> DEPRECATION NOTICE
2020-10-20 13:38:13.140402 D | exec: --> You are using the legacy automatic disk sorting behavior
2020-10-20 13:38:13.140411 D | exec: --> The Pacific release will change the default to --no-auto
2020-10-20 13:38:13.140417 D | exec: --> passed data devices: 0 physical, 0 LVM
2020-10-20 13:38:13.140423 D | exec: --> relative data size: 1.0
2020-10-20 13:38:13.140428 D | exec: --> All data devices are unavailable
2020-10-20 13:38:13.167846 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p2
2020-10-20 13:38:13.739346 D | exec: --> DEPRECATION NOTICE
2020-10-20 13:38:13.739391 D | exec: --> You are using the legacy automatic disk sorting behavior
2020-10-20 13:38:13.739397 D | exec: --> The Pacific release will change the default to --no-auto
2020-10-20 13:38:13.739402 D | exec: --> passed data devices: 0 physical, 0 LVM
2020-10-20 13:38:13.739407 D | exec: --> relative data size: 1.0
2020-10-20 13:38:13.739413 D | exec: --> All data devices are unavailable
2020-10-20 13:38:13.769333 I | cephosd: configuring new device nvme0n1p3
2020-10-20 13:38:13.769361 I | cephosd: Base command - stdbuf
2020-10-20 13:38:13.769366 I | cephosd: immediateReportArgs - stdbuf
2020-10-20 13:38:13.769379 I | cephosd: immediateExecuteArgs - [-oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p3]
2020-10-20 13:38:13.769387 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p3 --report
2020-10-20 13:38:14.458841 D | exec: --> DEPRECATION NOTICE
2020-10-20 13:38:14.458891 D | exec: --> You are using the legacy automatic disk sorting behavior
2020-10-20 13:38:14.458898 D | exec: --> The Pacific release will change the default to --no-auto
2020-10-20 13:38:14.458907 D | exec: --> passed data devices: 0 physical, 0 LVM
2020-10-20 13:38:14.458920 D | exec: --> relative data size: 1.0
2020-10-20 13:38:14.458929 D | exec: --> All data devices are unavailable
2020-10-20 13:38:14.486827 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p3
2020-10-20 13:38:15.102898 D | exec: --> DEPRECATION NOTICE
2020-10-20 13:38:15.102949 D | exec: --> You are using the legacy automatic disk sorting behavior
2020-10-20 13:38:15.102957 D | exec: --> The Pacific release will change the default to --no-auto
2020-10-20 13:38:15.102965 D | exec: --> passed data devices: 0 physical, 0 LVM
2020-10-20 13:38:15.102974 D | exec: --> relative data size: 1.0
2020-10-20 13:38:15.102983 D | exec: --> All data devices are unavailable
2020-10-20 13:38:15.139151 I | cephosd: configuring new device nvme0n1p1
2020-10-20 13:38:15.139186 I | cephosd: Base command - stdbuf
2020-10-20 13:38:15.139195 I | cephosd: immediateReportArgs - stdbuf
2020-10-20 13:38:15.139220 I | cephosd: immediateExecuteArgs - [-oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p1]
2020-10-20 13:38:15.139236 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p1 --report
2020-10-20 13:38:15.766382 D | exec: --> DEPRECATION NOTICE
2020-10-20 13:38:15.766418 D | exec: --> You are using the legacy automatic disk sorting behavior
2020-10-20 13:38:15.766423 D | exec: --> The Pacific release will change the default to --no-auto
2020-10-20 13:38:15.766426 D | exec: --> passed data devices: 0 physical, 0 LVM
2020-10-20 13:38:15.766429 D | exec: --> relative data size: 1.0
2020-10-20 13:38:15.766432 D | exec: --> All data devices are unavailable
2020-10-20 13:38:15.796844 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p1
2020-10-20 13:38:16.361789 D | exec: --> DEPRECATION NOTICE
2020-10-20 13:38:16.361824 D | exec: --> You are using the legacy automatic disk sorting behavior
2020-10-20 13:38:16.361828 D | exec: --> The Pacific release will change the default to --no-auto
2020-10-20 13:38:16.361831 D | exec: --> passed data devices: 0 physical, 0 LVM
2020-10-20 13:38:16.361834 D | exec: --> relative data size: 1.0
2020-10-20 13:38:16.361837 D | exec: --> All data devices are unavailable
2020-10-20 13:38:16.389857 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm list --format json
2020-10-20 13:38:16.793403 D | cephosd: {}
2020-10-20 13:38:16.793431 I | cephosd: 0 ceph-volume lvm osd devices configured on this node
2020-10-20 13:38:16.793448 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log raw list /mnt/minikube --format json
2020-10-20 13:38:17.156838 D | cephosd: {}
2020-10-20 13:38:17.156873 I | cephosd: 0 ceph-volume raw osd devices configured on this node
2020-10-20 13:38:17.156884 W | cephosd: skipping OSD configuration as no devices matched the storage settings for this node "minikube"
ceph-volume - Bug #47966: Fails to deploy osd in rook, throws index error
https://tracker.ceph.com/issues/47966?journal_id=178901
2020-11-10T14:58:11Z
Jan Fajerski
lists@fajerski.name
I think so, yes. The batch subcommand does not handle partitions, only whole devices or LVs. See also https://docs.ceph.com/en/latest/ceph-volume/lvm/batch/ and ceph-volume lvm batch --help.

You can create multiple OSDs on a single device by using the --osds-per-device option.
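As a hedged example of the suggested approach (the device name is illustrative), point batch at the whole device rather than its partitions and let --osds-per-device do the splitting; --report previews the layout without making changes, and dropping it performs the actual prepare:

$ ceph-volume lvm batch --prepare --bluestore --yes --osds-per-device 2 /dev/nvme0n1 --report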
ceph-volume - Bug #47966: Fails to deploy osd in rook, throws index error
https://tracker.ceph.com/issues/47966?journal_id=180068
2020-11-25T10:42:03Z
Jan Fajerski
lists@fajerski.name
Status changed from In Progress to Pending Backport
Backport set to octopus, nautilus
Pull request ID set to 38156
ceph-volume - Bug #47966: Fails to deploy osd in rook, throws index error
https://tracker.ceph.com/issues/47966?journal_id=180069
2020-11-25T10:42:32Z
Jan Fajerski
lists@fajerski.name
Added "Copied to" relation: Backport #48352: nautilus: Fails to deploy osd in rook, throws index error (https://tracker.ceph.com/issues/48352)
ceph-volume - Bug #47966: Fails to deploy osd in rook, throws index error
https://tracker.ceph.com/issues/47966?journal_id=180071
2020-11-25T10:42:40Z
Jan Fajerski
lists@fajerski.name
Added "Copied to" relation: Backport #48353: octopus: Fails to deploy osd in rook, throws index error (https://tracker.ceph.com/issues/48353)
ceph-volume - Bug #47966: Fails to deploy osd in rook, throws index error
https://tracker.ceph.com/issues/47966?journal_id=181295
2020-12-14T22:19:49Z
Nathan Cutler
ncutler@suse.cz
Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".