Bug #47966 (closed)

Fails to deploy osd in rook, throws index error

Added by Varsha Rao over 3 years ago. Updated over 3 years ago.

Status: Resolved
Priority: Normal
Assignee: -
% Done: 0%
Source: Community (dev)
Backport: octopus,nautilus
Regression: No
Severity: 3 - minor
Pull request ID: 38156

Description

[root@rook-ceph-tools-78cdfd976c-klzls /]# ceph version
ceph version 16.0.0-6582-g179f6a1c (179f6a1c7ae7f8495292a676c36b331ea8facda2) pacific (dev)

$ kubectl get pods -n rook-ceph
NAME                                            READY   STATUS             RESTARTS   AGE
csi-cephfsplugin-g96pq                          3/3     Running            0          2m48s
csi-cephfsplugin-provisioner-859c666579-97cc5   6/6     Running            0          2m47s
csi-rbdplugin-provisioner-7c6d5c76cd-ct2nw      6/6     Running            0          2m48s
csi-rbdplugin-zd58c                             3/3     Running            0          2m49s
rook-ceph-mgr-a-f9546b454-5rb7g                 1/1     Running            3          8m31s
rook-ceph-mon-a-7866bc67d9-xschv                1/1     Running            0          8m38s
rook-ceph-operator-86756d44-vsz2b               1/1     Running            0          13m
rook-ceph-osd-prepare-minikube-f5m7f            0/1     CrashLoopBackOff   4          8m28s
rook-ceph-tools-78cdfd976c-klzls                1/1     Running            0          10m
rook-discover-qnc8q                             1/1     Running            0          13m

osd prepare pod log

$ kubectl logs -n rook-ceph rook-ceph-osd-prepare-minikube-f5m7f
2020-10-23 08:58:16.808979 I | rookcmd: starting Rook v1.4.0-alpha.0.508.g39a23dd0 with arguments '/rook/rook ceph osd provision'
2020-10-23 08:58:16.809459 I | rookcmd: flag values: --cluster-id=b104a423-de14-4ebb-8a66-3b34e44fa056, --data-device-filter=vda3, --data-device-path-filter=, --data-devices=, --drive-groups=, --encrypted-device=false, --force-format=false, --help=false, --location=, --log-flush-frequency=5s, --log-level=DEBUG, --metadata-device=, --node-name=minikube, --operator-image=, --osd-database-size=0, --osd-store=, --osd-wal-size=576, --osds-per-device=1, --pvc-backed-osd=false, --service-account=
2020-10-23 08:58:16.809495 I | op-mon: parsing mon endpoints: a=10.101.52.62:6789
2020-10-23 08:58:16.923161 I | op-osd: CRUSH location=root=default host=minikube
2020-10-23 08:58:16.923257 I | cephcmd: crush location of osd: root=default host=minikube
2020-10-23 08:58:16.923291 D | exec: Running command: nsenter --mount=/rootfs/proc/1/ns/mnt -- /usr/sbin/lvm --help
2020-10-23 08:58:17.039641 I | cephosd: successfully called nsenter
2020-10-23 08:58:17.039755 I | cephosd: binary "/usr/sbin/lvm" found on the host, proceeding with osd preparation
2020-10-23 08:58:17.200777 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-10-23 08:58:17.201244 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2020-10-23 08:58:17.201579 D | cephosd: config file @ /etc/ceph/ceph.conf: [global]
fsid                           = a7e3b606-940d-4ab6-8f39-b3b9e03b5f17
mon initial members            = a
mon host                       = [v2:10.101.52.62:3300,v1:10.101.52.62:6789]
public addr                    = 172.17.0.8
cluster addr                   = 172.17.0.8
osd_pool_default_size          = 1
mon_warn_on_pool_no_redundancy = false

[client.admin]
keyring = /var/lib/rook/rook-ceph/client.admin.keyring

2020-10-23 08:58:17.201599 I | cephosd: discovering hardware
2020-10-23 08:58:17.201616 D | exec: Running command: lsblk --all --noheadings --list --output KNAME
2020-10-23 08:58:17.211278 D | exec: Running command: lsblk /dev/loop0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.214985 W | inventory: skipping device "loop0". diskType is empty
2020-10-23 08:58:17.215041 D | exec: Running command: lsblk /dev/loop1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.217875 W | inventory: skipping device "loop1". diskType is empty
2020-10-23 08:58:17.217921 D | exec: Running command: lsblk /dev/loop2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.220821 W | inventory: skipping device "loop2". diskType is empty
2020-10-23 08:58:17.220858 D | exec: Running command: lsblk /dev/loop3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.223241 W | inventory: skipping device "loop3". diskType is empty
2020-10-23 08:58:17.223274 D | exec: Running command: lsblk /dev/loop4 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.226626 W | inventory: skipping device "loop4". diskType is empty
2020-10-23 08:58:17.226664 D | exec: Running command: lsblk /dev/loop5 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.229647 W | inventory: skipping device "loop5". diskType is empty
2020-10-23 08:58:17.229686 D | exec: Running command: lsblk /dev/loop6 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.232473 W | inventory: skipping device "loop6". diskType is empty
2020-10-23 08:58:17.232517 D | exec: Running command: lsblk /dev/loop7 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.235527 W | inventory: skipping device "loop7". diskType is empty
2020-10-23 08:58:17.235562 D | exec: Running command: lsblk /dev/sr0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.294957 W | inventory: skipping device "sr0". unsupported diskType rom
2020-10-23 08:58:17.294999 D | exec: Running command: lsblk /dev/vda --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.298850 D | exec: Running command: sgdisk --print /dev/vda
2020-10-23 08:58:17.304704 D | exec: Running command: udevadm info --query=property /dev/vda
2020-10-23 08:58:17.320221 D | exec: Running command: lsblk --noheadings --pairs /dev/vda
2020-10-23 08:58:17.328745 I | inventory: skipping device "vda" because it has child, considering the child instead.
2020-10-23 08:58:17.328800 D | exec: Running command: lsblk /dev/vda1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.403411 D | exec: Running command: udevadm info --query=property /dev/vda1
2020-10-23 08:58:17.512916 D | exec: Running command: lsblk /dev/vda2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.598158 D | exec: Running command: udevadm info --query=property /dev/vda2
2020-10-23 08:58:17.695649 D | exec: Running command: lsblk /dev/vda3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.710606 D | exec: Running command: udevadm info --query=property /dev/vda3
2020-10-23 08:58:17.900715 D | exec: Running command: lsblk /dev/vda5 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.913137 D | exec: Running command: udevadm info --query=property /dev/vda5
2020-10-23 08:58:18.107870 D | exec: Running command: lsblk /dev/vda6 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:18.117325 D | exec: Running command: udevadm info --query=property /dev/vda6
2020-10-23 08:58:18.195829 D | inventory: discovered disks are [0xc0005525a0 0xc0003c47e0 0xc0001ff7a0 0xc000193440 0xc0001d59e0]
2020-10-23 08:58:18.195861 I | cephosd: creating and starting the osds
2020-10-23 08:58:18.203699 D | cephosd: No Drive Groups configured.
2020-10-23 08:58:18.203762 D | cephosd: desiredDevices are [{Name:vda3 OSDsPerDevice:1 MetadataDevice: DatabaseSizeMB:0 DeviceClass: IsFilter:true IsDevicePathFilter:false}]
2020-10-23 08:58:18.203775 D | cephosd: context.Devices are [0xc0005525a0 0xc0003c47e0 0xc0001ff7a0 0xc000193440 0xc0001d59e0]
2020-10-23 08:58:18.203786 D | exec: Running command: udevadm info --query=property /dev/vda1
2020-10-23 08:58:18.217019 D | exec: Running command: lsblk /dev/vda1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:18.220902 D | exec: Running command: ceph-volume inventory --format json /dev/vda1
2020-10-23 08:58:24.400385 I | cephosd: skipping device "vda1": ["Insufficient space (<5GB)"].
2020-10-23 08:58:24.400480 D | exec: Running command: udevadm info --query=property /dev/vda2
2020-10-23 08:58:24.520751 D | exec: Running command: lsblk /dev/vda2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:24.605460 D | exec: Running command: ceph-volume inventory --format json /dev/vda2
2020-10-23 08:58:31.728319 I | cephosd: skipping device "vda2": ["Insufficient space (<5GB)"].
2020-10-23 08:58:31.728395 D | exec: Running command: udevadm info --query=property /dev/vda3
2020-10-23 08:58:31.811440 D | exec: Running command: lsblk /dev/vda3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:31.821700 D | exec: Running command: ceph-volume inventory --format json /dev/vda3
2020-10-23 08:58:39.221035 I | cephosd: device "vda3" is available.
2020-10-23 08:58:39.221262 I | cephosd: device "vda3" matches device filter "vda3" 
2020-10-23 08:58:39.221294 I | cephosd: device "vda3" is selected by the device filter/name "vda3" 
2020-10-23 08:58:39.221332 D | exec: Running command: udevadm info --query=property /dev/vda5
2020-10-23 08:58:39.317742 D | exec: Running command: lsblk /dev/vda5 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:39.396396 D | exec: Running command: ceph-volume inventory --format json /dev/vda5
2020-10-23 08:58:45.905478 I | cephosd: device "vda5" is available.
2020-10-23 08:58:45.905675 I | cephosd: skipping device "vda5" that does not match the device filter/list ([{vda3 1  0  true false}]). <nil>
2020-10-23 08:58:45.905715 D | exec: Running command: udevadm info --query=property /dev/vda6
2020-10-23 08:58:46.098009 D | exec: Running command: lsblk /dev/vda6 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:46.112886 D | exec: Running command: ceph-volume inventory --format json /dev/vda6
2020-10-23 08:58:52.702590 I | cephosd: device "vda6" is available.
2020-10-23 08:58:52.703132 I | cephosd: skipping device "vda6" that does not match the device filter/list ([{vda3 1  0  true false}]). <nil>
2020-10-23 08:58:52.706485 I | cephosd: configuring osd devices: {"Entries":{"vda3":{"Data":-1,"Metadata":null,"Config":{"Name":"vda3","OSDsPerDevice":1,"MetadataDevice":"","DatabaseSizeMB":0,"DeviceClass":"","IsFilter":true,"IsDevicePathFilter":false},"PersistentDevicePaths":[]}}}
2020-10-23 08:58:52.706670 I | cephclient: getting or creating ceph auth key "client.bootstrap-osd" 
2020-10-23 08:58:52.707323 D | exec: Running command: ceph auth get-or-create-key client.bootstrap-osd mon allow profile bootstrap-osd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/497580015
2020-10-23 08:59:09.809284 I | cephosd: configuring new device vda3
2020-10-23 08:59:09.809366 I | cephosd: Base command - stdbuf
2020-10-23 08:59:09.809386 I | cephosd: immediateReportArgs - stdbuf
2020-10-23 08:59:09.809426 I | cephosd: immediateExecuteArgs - [-oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/vda3]
2020-10-23 08:59:09.812243 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/vda3 --report
2020-10-23 08:59:20.512992 D | exec: --> DEPRECATION NOTICE
2020-10-23 08:59:20.513154 D | exec: --> You are using the legacy automatic disk sorting behavior
2020-10-23 08:59:20.513175 D | exec: --> The Pacific release will change the default to --no-auto
2020-10-23 08:59:20.513191 D | exec: --> passed data devices: 0 physical, 1 LVM
2020-10-23 08:59:20.513205 D | exec: --> relative data size: 1.0
2020-10-23 08:59:20.699228 D | exec: Traceback (most recent call last):
2020-10-23 08:59:20.699307 D | exec:   File "/usr/sbin/ceph-volume", line 11, in <module>
2020-10-23 08:59:20.699329 D | exec:     load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
2020-10-23 08:59:20.699344 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 40, in __init__
2020-10-23 08:59:20.699357 D | exec:     self.main(self.argv)
2020-10-23 08:59:20.699374 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
2020-10-23 08:59:20.699386 D | exec:     return f(*a, **kw)
2020-10-23 08:59:20.699398 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 151, in main
2020-10-23 08:59:20.699409 D | exec:     terminal.dispatch(self.mapper, subcommand_args)
2020-10-23 08:59:20.699421 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
2020-10-23 08:59:20.699434 D | exec:     instance.main()
2020-10-23 08:59:20.699445 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 42, in main
2020-10-23 08:59:20.699457 D | exec:     terminal.dispatch(self.mapper, self.argv)
2020-10-23 08:59:20.699479 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
2020-10-23 08:59:20.699497 D | exec:     instance.main()
2020-10-23 08:59:20.699509 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
2020-10-23 08:59:20.699520 D | exec:     return func(*a, **kw)
2020-10-23 08:59:20.699531 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 399, in main
2020-10-23 08:59:20.699542 D | exec:     plan = self.get_plan(self.args)
2020-10-23 08:59:20.699553 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 437, in get_plan
2020-10-23 08:59:20.699564 D | exec:     args.wal_devices)
2020-10-23 08:59:20.699583 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 457, in get_deployment_layout
2020-10-23 08:59:20.699593 D | exec:     plan.extend(get_lvm_osds(lvm_devs, args))
2020-10-23 08:59:20.699604 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 93, in get_lvm_osds
2020-10-23 08:59:20.699615 D | exec:     disk.Size(b=int(lv.lvs[0].lv_size)),
2020-10-23 08:59:20.699626 D | exec: IndexError: list index out of range
failed to configure devices: failed to initialize devices: failed ceph-volume report: exit status 1
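
The crash comes from the last frame of the traceback: get_lvm_osds indexes lv.lvs[0], but /dev/vda3 is a bare partition, so batch counts it as an LVM data device ("0 physical, 1 LVM") even though no logical volume backs it, lvs is empty, and the [0] lookup raises IndexError. Below is a minimal, purely illustrative Python sketch of that failure pattern and a defensive guard; the FakeDevice stand-in and the guard are assumptions made for this ticket, not ceph-volume's actual code, and the actual fix is tracked by the pull request referenced later in this ticket.

class FakeDevice:
    """Stand-in for the device objects batch iterates over (illustrative only)."""
    def __init__(self, path, lvs):
        self.path = path
        self.lvs = lvs  # empty for a bare partition that is not backed by an LV

def get_lvm_osds_sketch(lvm_devs):
    plan = []
    for dev in lvm_devs:
        if not dev.lvs:
            # Without a check like this, dev.lvs[0] raises the IndexError seen above.
            print("skipping %s: not backed by a logical volume" % dev.path)
            continue
        plan.append({"path": dev.path, "size_bytes": dev.lvs[0]["lv_size"]})
    return plan

print(get_lvm_osds_sketch([FakeDevice("/dev/vda3", lvs=[])]))  # -> [] instead of a crash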


Related issues 2 (0 open, 2 closed)

Copied to ceph-volume - Backport #48352: nautilus: Fails to deploy osd in rook, throws index error (Resolved, Jan Fajerski)
Copied to ceph-volume - Backport #48353: octopus: Fails to deploy osd in rook, throws index error (Resolved, Jan Fajerski)
#1

Updated by Jan Fajerski over 3 years ago

  • Status changed from New to In Progress

Hmm, batch shouldn't accept partitions; that is certainly a bug.

But batch should only be fed bare devices or LVM logical volumes.
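
For illustration, one way a caller could screen arguments before handing them to batch is to look at lsblk's TYPE column, which the Rook log above already queries: whole disks report "disk", LVM logical volumes report "lvm", and partitions report "part". The Python below is a hypothetical pre-check sketched for this ticket, not code that Rook or ceph-volume actually ship.

import subprocess

def blockdev_type(path):
    """Return lsblk's TYPE for a block device path: 'disk', 'part', 'lvm', 'rom', 'loop', ..."""
    out = subprocess.run(
        ["lsblk", "--nodeps", "--noheadings", "--output", "TYPE", path],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

def acceptable_for_lvm_batch(path):
    # batch expects whole devices or LVM logical volumes; partitions ("part") are rejected
    return blockdev_type(path) in ("disk", "lvm")

for dev in ("/dev/vda", "/dev/vda3"):
    verdict = "accepted" if acceptable_for_lvm_batch(dev) else "rejected (partition)"
    print(dev, verdict)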

#2

Updated by Varsha Rao over 3 years ago

On a teuthology smithi machine I don't get the error, but the OSDs are not deployed either. Is this expected behaviour?

varsha@smithi061:~/rook/cluster/examples/kubernetes/ceph$ kubectl logs -n rook-ceph rook-ceph-osd-prepare-minikube-jr65q
2020-10-20 13:38:09.144451 I | rookcmd: starting Rook v1.4.0-alpha.0.490.gdfb37dd with arguments '/rook/rook ceph osd provision'
2020-10-20 13:38:09.144519 I | rookcmd: flag values: --cluster-id=c832a97b-e86e-4a83-a1ae-a3b80febc302, --data-device-filter=nvme0n1, --data-device-path-filter=, --data-devices=, --drive-groups=, --encrypted-device=false, --force-format=false, --help=false, --location=, --log-flush-frequency=5s, --log-level=DEBUG, --metadata-device=, --node-name=minikube, --operator-image=, --osd-database-size=0, --osd-store=, --osd-wal-size=576, --osds-per-device=1, --pvc-backed-osd=false, --service-account=
2020-10-20 13:38:09.144524 I | op-mon: parsing mon endpoints: a=10.111.103.214:6789
2020-10-20 13:38:09.157650 I | op-osd: CRUSH location=root=default host=minikube
2020-10-20 13:38:09.157669 I | cephcmd: crush location of osd: root=default host=minikube
2020-10-20 13:38:09.157683 D | exec: Running command: nsenter --mount=/rootfs/proc/1/ns/mnt -- /usr/sbin/lvm --help
2020-10-20 13:38:09.168911 I | cephosd: successfully called nsenter
2020-10-20 13:38:09.168938 I | cephosd: binary "/usr/sbin/lvm" found on the host, proceeding with osd preparation
2020-10-20 13:38:09.183298 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-10-20 13:38:09.183679 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2020-10-20 13:38:09.183783 D | cephosd: config file @ /etc/ceph/ceph.conf: [global]
fsid                           = d0136c53-d3dc-47e2-b049-456a6d61c010
mon initial members            = a
mon host                       = [v2:10.111.103.214:3300,v1:10.111.103.214:6789]
public addr                    = 172.17.0.9
cluster addr                   = 172.17.0.9
osd_pool_default_size          = 1
mon_warn_on_pool_no_redundancy = false

[client.admin]
keyring = /var/lib/rook/rook-ceph/client.admin.keyring

2020-10-20 13:38:09.183792 I | cephosd: discovering hardware
2020-10-20 13:38:09.183800 D | exec: Running command: lsblk --all --noheadings --list --output KNAME
2020-10-20 13:38:09.189937 D | exec: Running command: lsblk /dev/loop0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.191914 W | inventory: skipping device "loop0". diskType is empty
2020-10-20 13:38:09.191930 D | exec: Running command: lsblk /dev/loop1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.194413 W | inventory: skipping device "loop1". diskType is empty
2020-10-20 13:38:09.194428 D | exec: Running command: lsblk /dev/loop2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.196224 W | inventory: skipping device "loop2". diskType is empty
2020-10-20 13:38:09.196240 D | exec: Running command: lsblk /dev/loop3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.198939 W | inventory: skipping device "loop3". diskType is empty
2020-10-20 13:38:09.198961 D | exec: Running command: lsblk /dev/loop4 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.201105 W | inventory: skipping device "loop4". diskType is empty
2020-10-20 13:38:09.201119 D | exec: Running command: lsblk /dev/loop5 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.202854 W | inventory: skipping device "loop5". diskType is empty
2020-10-20 13:38:09.202866 D | exec: Running command: lsblk /dev/loop6 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.204653 W | inventory: skipping device "loop6". diskType is empty
2020-10-20 13:38:09.204665 D | exec: Running command: lsblk /dev/loop7 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.206698 W | inventory: skipping device "loop7". diskType is empty
2020-10-20 13:38:09.206710 D | exec: Running command: lsblk /dev/sda --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.209193 D | exec: Running command: sgdisk --print /dev/sda
2020-10-20 13:38:09.211942 D | exec: Running command: udevadm info --query=property /dev/sda
2020-10-20 13:38:09.237181 D | exec: Running command: lsblk --noheadings --pairs /dev/sda
2020-10-20 13:38:09.243862 I | inventory: skipping device "sda" because it has child, considering the child instead.
2020-10-20 13:38:09.244971 D | exec: Running command: lsblk /dev/sda1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.249087 D | exec: Running command: udevadm info --query=property /dev/sda1
2020-10-20 13:38:09.255875 D | exec: Running command: lsblk /dev/nbd0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.258846 W | inventory: skipping device "nbd0". diskType is empty
2020-10-20 13:38:09.258882 D | exec: Running command: lsblk /dev/nbd1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.262294 W | inventory: skipping device "nbd1". diskType is empty
2020-10-20 13:38:09.262327 D | exec: Running command: lsblk /dev/nbd2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.265234 W | inventory: skipping device "nbd2". diskType is empty
2020-10-20 13:38:09.265266 D | exec: Running command: lsblk /dev/nbd3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.267475 W | inventory: skipping device "nbd3". diskType is empty
2020-10-20 13:38:09.267497 D | exec: Running command: lsblk /dev/nbd4 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.269607 W | inventory: skipping device "nbd4". diskType is empty
2020-10-20 13:38:09.269624 D | exec: Running command: lsblk /dev/nbd5 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.271593 W | inventory: skipping device "nbd5". diskType is empty
2020-10-20 13:38:09.271607 D | exec: Running command: lsblk /dev/nbd6 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.273025 W | inventory: skipping device "nbd6". diskType is empty
2020-10-20 13:38:09.273038 D | exec: Running command: lsblk /dev/nbd7 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.332620 W | inventory: skipping device "nbd7". diskType is empty
2020-10-20 13:38:09.332650 D | exec: Running command: lsblk /dev/nvme0n1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.336394 D | exec: Running command: sgdisk --print /dev/nvme0n1
2020-10-20 13:38:09.341831 D | exec: Running command: udevadm info --query=property /dev/nvme0n1
2020-10-20 13:38:09.352335 D | exec: Running command: lsblk --noheadings --pairs /dev/nvme0n1
2020-10-20 13:38:09.355855 I | inventory: skipping device "nvme0n1" because it has child, considering the child instead.
2020-10-20 13:38:09.355900 D | exec: Running command: lsblk /dev/nvme0n1p1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.357514 D | exec: Running command: udevadm info --query=property /dev/nvme0n1p1
2020-10-20 13:38:09.364383 D | exec: Running command: lsblk /dev/nvme0n1p2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.366084 D | exec: Running command: udevadm info --query=property /dev/nvme0n1p2
2020-10-20 13:38:09.372937 D | exec: Running command: lsblk /dev/nvme0n1p3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.375442 D | exec: Running command: udevadm info --query=property /dev/nvme0n1p3
2020-10-20 13:38:09.381842 D | exec: Running command: lsblk /dev/nbd8 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.384194 W | inventory: skipping device "nbd8". diskType is empty
2020-10-20 13:38:09.384223 D | exec: Running command: lsblk /dev/nbd9 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.387382 W | inventory: skipping device "nbd9". diskType is empty
2020-10-20 13:38:09.387411 D | exec: Running command: lsblk /dev/nbd10 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.389575 W | inventory: skipping device "nbd10". diskType is empty
2020-10-20 13:38:09.389602 D | exec: Running command: lsblk /dev/nbd11 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.391783 W | inventory: skipping device "nbd11". diskType is empty
2020-10-20 13:38:09.391805 D | exec: Running command: lsblk /dev/nbd12 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.393312 W | inventory: skipping device "nbd12". diskType is empty
2020-10-20 13:38:09.393342 D | exec: Running command: lsblk /dev/nbd13 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.395726 W | inventory: skipping device "nbd13". diskType is empty
2020-10-20 13:38:09.395745 D | exec: Running command: lsblk /dev/nbd14 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.435917 W | inventory: skipping device "nbd14". diskType is empty
2020-10-20 13:38:09.435947 D | exec: Running command: lsblk /dev/nbd15 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.439255 W | inventory: skipping device "nbd15". diskType is empty
2020-10-20 13:38:09.439283 D | inventory: discovered disks are [0xc0001e4c60 0xc0001b79e0 0xc000196240 0xc0001965a0]
2020-10-20 13:38:09.439291 I | cephosd: creating and starting the osds
2020-10-20 13:38:09.446247 D | cephosd: No Drive Groups configured.
2020-10-20 13:38:09.446288 D | cephosd: desiredDevices are [{Name:nvme0n1 OSDsPerDevice:1 MetadataDevice: DatabaseSizeMB:0 DeviceClass: IsFilter:true IsDevicePathFilter:false}]
2020-10-20 13:38:09.446300 D | cephosd: context.Devices are [0xc0001e4c60 0xc0001b79e0 0xc000196240 0xc0001965a0]
2020-10-20 13:38:09.446311 D | exec: Running command: udevadm info --query=property /dev/sda1
2020-10-20 13:38:09.453406 D | exec: Running command: lsblk /dev/sda1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:09.455612 D | exec: Running command: ceph-volume inventory --format json /dev/sda1
2020-10-20 13:38:10.374511 I | cephosd: device "sda1" is available.
2020-10-20 13:38:10.374569 I | cephosd: skipping device "sda1" that does not match the device filter/list ([{nvme0n1 1  0  true false}]). <nil>
2020-10-20 13:38:10.374580 D | exec: Running command: udevadm info --query=property /dev/nvme0n1p1
2020-10-20 13:38:10.379168 D | exec: Running command: lsblk /dev/nvme0n1p1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:10.380806 D | exec: Running command: ceph-volume inventory --format json /dev/nvme0n1p1
2020-10-20 13:38:10.967833 I | cephosd: device "nvme0n1p1" is available.
2020-10-20 13:38:10.967885 I | cephosd: device "nvme0n1p1" matches device filter "nvme0n1" 
2020-10-20 13:38:10.967894 I | cephosd: device "nvme0n1p1" is selected by the device filter/name "nvme0n1" 
2020-10-20 13:38:10.967907 D | exec: Running command: udevadm info --query=property /dev/nvme0n1p2
2020-10-20 13:38:10.976480 D | exec: Running command: lsblk /dev/nvme0n1p2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:10.979634 D | exec: Running command: ceph-volume inventory --format json /dev/nvme0n1p2
2020-10-20 13:38:11.542836 I | cephosd: device "nvme0n1p2" is available.
2020-10-20 13:38:11.542883 I | cephosd: device "nvme0n1p2" matches device filter "nvme0n1" 
2020-10-20 13:38:11.542891 I | cephosd: device "nvme0n1p2" is selected by the device filter/name "nvme0n1" 
2020-10-20 13:38:11.542902 D | exec: Running command: udevadm info --query=property /dev/nvme0n1p3
2020-10-20 13:38:11.548780 D | exec: Running command: lsblk /dev/nvme0n1p3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-20 13:38:11.551117 D | exec: Running command: ceph-volume inventory --format json /dev/nvme0n1p3
2020-10-20 13:38:12.253183 I | cephosd: device "nvme0n1p3" is available.
2020-10-20 13:38:12.253250 I | cephosd: device "nvme0n1p3" matches device filter "nvme0n1" 
2020-10-20 13:38:12.253261 I | cephosd: device "nvme0n1p3" is selected by the device filter/name "nvme0n1" 
2020-10-20 13:38:12.253478 I | cephosd: configuring osd devices: {"Entries":{"nvme0n1p1":{"Data":-1,"Metadata":null,"Config":{"Name":"nvme0n1","OSDsPerDevice":1,"MetadataDevice":"","DatabaseSizeMB":0,"DeviceClass":"","IsFilter":true,"IsDevicePathFilter":false},"PersistentDevicePaths":[]},"nvme0n1p2":{"Data":-1,"Metadata":null,"Config":{"Name":"nvme0n1","OSDsPerDevice":1,"MetadataDevice":"","DatabaseSizeMB":0,"DeviceClass":"","IsFilter":true,"IsDevicePathFilter":false},"PersistentDevicePaths":[]},"nvme0n1p3":{"Data":-1,"Metadata":null,"Config":{"Name":"nvme0n1","OSDsPerDevice":1,"MetadataDevice":"","DatabaseSizeMB":0,"DeviceClass":"","IsFilter":true,"IsDevicePathFilter":false},"PersistentDevicePaths":[]}}}
2020-10-20 13:38:12.253550 I | cephclient: getting or creating ceph auth key "client.bootstrap-osd" 
2020-10-20 13:38:12.253809 D | exec: Running command: ceph auth get-or-create-key client.bootstrap-osd mon allow profile bootstrap-osd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/603019083
2020-10-20 13:38:12.492600 I | cephosd: configuring new device nvme0n1p2
2020-10-20 13:38:12.492634 I | cephosd: Base command - stdbuf
2020-10-20 13:38:12.492642 I | cephosd: immediateReportArgs - stdbuf
2020-10-20 13:38:12.492659 I | cephosd: immediateExecuteArgs - [-oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p2]
2020-10-20 13:38:12.492670 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p2 --report
2020-10-20 13:38:13.140340 D | exec: --> DEPRECATION NOTICE
2020-10-20 13:38:13.140402 D | exec: --> You are using the legacy automatic disk sorting behavior
2020-10-20 13:38:13.140411 D | exec: --> The Pacific release will change the default to --no-auto
2020-10-20 13:38:13.140417 D | exec: --> passed data devices: 0 physical, 0 LVM
2020-10-20 13:38:13.140423 D | exec: --> relative data size: 1.0
2020-10-20 13:38:13.140428 D | exec: --> All data devices are unavailable
2020-10-20 13:38:13.167846 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p2
2020-10-20 13:38:13.739346 D | exec: --> DEPRECATION NOTICE
2020-10-20 13:38:13.739391 D | exec: --> You are using the legacy automatic disk sorting behavior
2020-10-20 13:38:13.739397 D | exec: --> The Pacific release will change the default to --no-auto
2020-10-20 13:38:13.739402 D | exec: --> passed data devices: 0 physical, 0 LVM
2020-10-20 13:38:13.739407 D | exec: --> relative data size: 1.0
2020-10-20 13:38:13.739413 D | exec: --> All data devices are unavailable
2020-10-20 13:38:13.769333 I | cephosd: configuring new device nvme0n1p3
2020-10-20 13:38:13.769361 I | cephosd: Base command - stdbuf
2020-10-20 13:38:13.769366 I | cephosd: immediateReportArgs - stdbuf
2020-10-20 13:38:13.769379 I | cephosd: immediateExecuteArgs - [-oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p3]
2020-10-20 13:38:13.769387 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p3 --report
2020-10-20 13:38:14.458841 D | exec: --> DEPRECATION NOTICE
2020-10-20 13:38:14.458891 D | exec: --> You are using the legacy automatic disk sorting behavior
2020-10-20 13:38:14.458898 D | exec: --> The Pacific release will change the default to --no-auto
2020-10-20 13:38:14.458907 D | exec: --> passed data devices: 0 physical, 0 LVM
2020-10-20 13:38:14.458920 D | exec: --> relative data size: 1.0
2020-10-20 13:38:14.458929 D | exec: --> All data devices are unavailable
2020-10-20 13:38:14.486827 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p3
2020-10-20 13:38:15.102898 D | exec: --> DEPRECATION NOTICE
2020-10-20 13:38:15.102949 D | exec: --> You are using the legacy automatic disk sorting behavior
2020-10-20 13:38:15.102957 D | exec: --> The Pacific release will change the default to --no-auto
2020-10-20 13:38:15.102965 D | exec: --> passed data devices: 0 physical, 0 LVM
2020-10-20 13:38:15.102974 D | exec: --> relative data size: 1.0
2020-10-20 13:38:15.102983 D | exec: --> All data devices are unavailable
2020-10-20 13:38:15.139151 I | cephosd: configuring new device nvme0n1p1
2020-10-20 13:38:15.139186 I | cephosd: Base command - stdbuf
2020-10-20 13:38:15.139195 I | cephosd: immediateReportArgs - stdbuf
2020-10-20 13:38:15.139220 I | cephosd: immediateExecuteArgs - [-oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p1]
2020-10-20 13:38:15.139236 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p1 --report
2020-10-20 13:38:15.766382 D | exec: --> DEPRECATION NOTICE
2020-10-20 13:38:15.766418 D | exec: --> You are using the legacy automatic disk sorting behavior
2020-10-20 13:38:15.766423 D | exec: --> The Pacific release will change the default to --no-auto
2020-10-20 13:38:15.766426 D | exec: --> passed data devices: 0 physical, 0 LVM
2020-10-20 13:38:15.766429 D | exec: --> relative data size: 1.0
2020-10-20 13:38:15.766432 D | exec: --> All data devices are unavailable
2020-10-20 13:38:15.796844 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/nvme0n1p1
2020-10-20 13:38:16.361789 D | exec: --> DEPRECATION NOTICE
2020-10-20 13:38:16.361824 D | exec: --> You are using the legacy automatic disk sorting behavior
2020-10-20 13:38:16.361828 D | exec: --> The Pacific release will change the default to --no-auto
2020-10-20 13:38:16.361831 D | exec: --> passed data devices: 0 physical, 0 LVM
2020-10-20 13:38:16.361834 D | exec: --> relative data size: 1.0
2020-10-20 13:38:16.361837 D | exec: --> All data devices are unavailable
2020-10-20 13:38:16.389857 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm list  --format json
2020-10-20 13:38:16.793403 D | cephosd: {}
2020-10-20 13:38:16.793431 I | cephosd: 0 ceph-volume lvm osd devices configured on this node
2020-10-20 13:38:16.793448 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log raw list /mnt/minikube --format json
2020-10-20 13:38:17.156838 D | cephosd: {}
2020-10-20 13:38:17.156873 I | cephosd: 0 ceph-volume raw osd devices configured on this node
2020-10-20 13:38:17.156884 W | cephosd: skipping OSD configuration as no devices matched the storage settings for this node "minikube" 

#3

Updated by Jan Fajerski over 3 years ago

I think so, yes. The batch subcommand does not handle partitions, only full devices or LVs. See also https://docs.ceph.com/en/latest/ceph-volume/lvm/batch/ and ceph-volume lvm batch --help.

You can create multiple OSDs on a single device by using the --osds-per-device option.
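
For context on that option: with --osds-per-device N, batch plans N equally sized data volumes on the same accepted device instead of one. The snippet below is a purely conceptual Python illustration of that split, not ceph-volume's planning code (the real planner also handles DB/WAL devices and explicit size arguments).

GiB = 1024 ** 3

def plan_data_slots(device_size_bytes, osds_per_device):
    # Illustrative only: divide one device's capacity into equal data slots, one per OSD.
    slot = device_size_bytes // osds_per_device
    return [slot] * osds_per_device

print([s // GiB for s in plan_data_slots(100 * GiB, 2)])  # -> [50, 50]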

#4

Updated by Jan Fajerski over 3 years ago

  • Status changed from In Progress to Pending Backport
  • Backport set to octopus,nautilus
  • Pull request ID set to 38156
#5

Updated by Jan Fajerski over 3 years ago

  • Copied to Backport #48352: nautilus: Fails to deploy osd in rook, throws index error added
#6

Updated by Jan Fajerski over 3 years ago

  • Copied to Backport #48353: octopus: Fails to deploy osd in rook, throws index error added
#7

Updated by Nathan Cutler over 3 years ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
