Bug #47966: Fails to deploy osd in rook, throws index error
Status:
Closed
% Done:
0%
Source:
Community (dev)
Tags:
Backport:
octopus,nautilus
Regression:
No
Severity:
3 - minor
Reviewed:
Description
[root@rook-ceph-tools-78cdfd976c-klzls /]# ceph version
ceph version 16.0.0-6582-g179f6a1c (179f6a1c7ae7f8495292a676c36b331ea8facda2) pacific (dev)

$ kubectl get pods -n rook-ceph
NAME                                            READY   STATUS             RESTARTS   AGE
csi-cephfsplugin-g96pq                          3/3     Running            0          2m48s
csi-cephfsplugin-provisioner-859c666579-97cc5   6/6     Running            0          2m47s
csi-rbdplugin-provisioner-7c6d5c76cd-ct2nw      6/6     Running            0          2m48s
csi-rbdplugin-zd58c                             3/3     Running            0          2m49s
rook-ceph-mgr-a-f9546b454-5rb7g                 1/1     Running            3          8m31s
rook-ceph-mon-a-7866bc67d9-xschv                1/1     Running            0          8m38s
rook-ceph-operator-86756d44-vsz2b               1/1     Running            0          13m
rook-ceph-osd-prepare-minikube-f5m7f            0/1     CrashLoopBackOff   4          8m28s
rook-ceph-tools-78cdfd976c-klzls                1/1     Running            0          10m
rook-discover-qnc8q                             1/1     Running            0          13m
OSD prepare pod log:
$ kubectl logs -n rook-ceph rook-ceph-osd-prepare-minikube-f5m7f
2020-10-23 08:58:16.808979 I | rookcmd: starting Rook v1.4.0-alpha.0.508.g39a23dd0 with arguments '/rook/rook ceph osd provision'
2020-10-23 08:58:16.809459 I | rookcmd: flag values: --cluster-id=b104a423-de14-4ebb-8a66-3b34e44fa056, --data-device-filter=vda3, --data-device-path-filter=, --data-devices=, --drive-groups=, --encrypted-device=false, --force-format=false, --help=false, --location=, --log-flush-frequency=5s, --log-level=DEBUG, --metadata-device=, --node-name=minikube, --operator-image=, --osd-database-size=0, --osd-store=, --osd-wal-size=576, --osds-per-device=1, --pvc-backed-osd=false, --service-account=
2020-10-23 08:58:16.809495 I | op-mon: parsing mon endpoints: a=10.101.52.62:6789
2020-10-23 08:58:16.923161 I | op-osd: CRUSH location=root=default host=minikube
2020-10-23 08:58:16.923257 I | cephcmd: crush location of osd: root=default host=minikube
2020-10-23 08:58:16.923291 D | exec: Running command: nsenter --mount=/rootfs/proc/1/ns/mnt -- /usr/sbin/lvm --help
2020-10-23 08:58:17.039641 I | cephosd: successfully called nsenter
2020-10-23 08:58:17.039755 I | cephosd: binary "/usr/sbin/lvm" found on the host, proceeding with osd preparation
2020-10-23 08:58:17.200777 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-10-23 08:58:17.201244 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2020-10-23 08:58:17.201579 D | cephosd: config file @ /etc/ceph/ceph.conf:
[global]
fsid = a7e3b606-940d-4ab6-8f39-b3b9e03b5f17
mon initial members = a
mon host = [v2:10.101.52.62:3300,v1:10.101.52.62:6789]
public addr = 172.17.0.8
cluster addr = 172.17.0.8
osd_pool_default_size = 1
mon_warn_on_pool_no_redundancy = false
[client.admin]
keyring = /var/lib/rook/rook-ceph/client.admin.keyring
2020-10-23 08:58:17.201599 I | cephosd: discovering hardware
2020-10-23 08:58:17.201616 D | exec: Running command: lsblk --all --noheadings --list --output KNAME
2020-10-23 08:58:17.211278 D | exec: Running command: lsblk /dev/loop0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.214985 W | inventory: skipping device "loop0". diskType is empty
2020-10-23 08:58:17.215041 D | exec: Running command: lsblk /dev/loop1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.217875 W | inventory: skipping device "loop1". diskType is empty
2020-10-23 08:58:17.217921 D | exec: Running command: lsblk /dev/loop2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.220821 W | inventory: skipping device "loop2". diskType is empty
2020-10-23 08:58:17.220858 D | exec: Running command: lsblk /dev/loop3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.223241 W | inventory: skipping device "loop3". diskType is empty
2020-10-23 08:58:17.223274 D | exec: Running command: lsblk /dev/loop4 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.226626 W | inventory: skipping device "loop4". diskType is empty
2020-10-23 08:58:17.226664 D | exec: Running command: lsblk /dev/loop5 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.229647 W | inventory: skipping device "loop5". diskType is empty
2020-10-23 08:58:17.229686 D | exec: Running command: lsblk /dev/loop6 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.232473 W | inventory: skipping device "loop6". diskType is empty
2020-10-23 08:58:17.232517 D | exec: Running command: lsblk /dev/loop7 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.235527 W | inventory: skipping device "loop7". diskType is empty
2020-10-23 08:58:17.235562 D | exec: Running command: lsblk /dev/sr0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.294957 W | inventory: skipping device "sr0". unsupported diskType rom
2020-10-23 08:58:17.294999 D | exec: Running command: lsblk /dev/vda --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.298850 D | exec: Running command: sgdisk --print /dev/vda
2020-10-23 08:58:17.304704 D | exec: Running command: udevadm info --query=property /dev/vda
2020-10-23 08:58:17.320221 D | exec: Running command: lsblk --noheadings --pairs /dev/vda
2020-10-23 08:58:17.328745 I | inventory: skipping device "vda" because it has child, considering the child instead.
2020-10-23 08:58:17.328800 D | exec: Running command: lsblk /dev/vda1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.403411 D | exec: Running command: udevadm info --query=property /dev/vda1
2020-10-23 08:58:17.512916 D | exec: Running command: lsblk /dev/vda2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.598158 D | exec: Running command: udevadm info --query=property /dev/vda2
2020-10-23 08:58:17.695649 D | exec: Running command: lsblk /dev/vda3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.710606 D | exec: Running command: udevadm info --query=property /dev/vda3
2020-10-23 08:58:17.900715 D | exec: Running command: lsblk /dev/vda5 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:17.913137 D | exec: Running command: udevadm info --query=property /dev/vda5
2020-10-23 08:58:18.107870 D | exec: Running command: lsblk /dev/vda6 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:18.117325 D | exec: Running command: udevadm info --query=property /dev/vda6
2020-10-23 08:58:18.195829 D | inventory: discovered disks are [0xc0005525a0 0xc0003c47e0 0xc0001ff7a0 0xc000193440 0xc0001d59e0]
2020-10-23 08:58:18.195861 I | cephosd: creating and starting the osds
2020-10-23 08:58:18.203699 D | cephosd: No Drive Groups configured.
2020-10-23 08:58:18.203762 D | cephosd: desiredDevices are [{Name:vda3 OSDsPerDevice:1 MetadataDevice: DatabaseSizeMB:0 DeviceClass: IsFilter:true IsDevicePathFilter:false}]
2020-10-23 08:58:18.203775 D | cephosd: context.Devices are [0xc0005525a0 0xc0003c47e0 0xc0001ff7a0 0xc000193440 0xc0001d59e0]
2020-10-23 08:58:18.203786 D | exec: Running command: udevadm info --query=property /dev/vda1
2020-10-23 08:58:18.217019 D | exec: Running command: lsblk /dev/vda1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:18.220902 D | exec: Running command: ceph-volume inventory --format json /dev/vda1
2020-10-23 08:58:24.400385 I | cephosd: skipping device "vda1": ["Insufficient space (<5GB)"].
2020-10-23 08:58:24.400480 D | exec: Running command: udevadm info --query=property /dev/vda2
2020-10-23 08:58:24.520751 D | exec: Running command: lsblk /dev/vda2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:24.605460 D | exec: Running command: ceph-volume inventory --format json /dev/vda2
2020-10-23 08:58:31.728319 I | cephosd: skipping device "vda2": ["Insufficient space (<5GB)"].
2020-10-23 08:58:31.728395 D | exec: Running command: udevadm info --query=property /dev/vda3
2020-10-23 08:58:31.811440 D | exec: Running command: lsblk /dev/vda3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:31.821700 D | exec: Running command: ceph-volume inventory --format json /dev/vda3
2020-10-23 08:58:39.221035 I | cephosd: device "vda3" is available.
2020-10-23 08:58:39.221262 I | cephosd: device "vda3" matches device filter "vda3"
2020-10-23 08:58:39.221294 I | cephosd: device "vda3" is selected by the device filter/name "vda3"
2020-10-23 08:58:39.221332 D | exec: Running command: udevadm info --query=property /dev/vda5
2020-10-23 08:58:39.317742 D | exec: Running command: lsblk /dev/vda5 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:39.396396 D | exec: Running command: ceph-volume inventory --format json /dev/vda5
2020-10-23 08:58:45.905478 I | cephosd: device "vda5" is available.
2020-10-23 08:58:45.905675 I | cephosd: skipping device "vda5" that does not match the device filter/list ([{vda3 1 0 true false}]). <nil>
2020-10-23 08:58:45.905715 D | exec: Running command: udevadm info --query=property /dev/vda6
2020-10-23 08:58:46.098009 D | exec: Running command: lsblk /dev/vda6 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2020-10-23 08:58:46.112886 D | exec: Running command: ceph-volume inventory --format json /dev/vda6
2020-10-23 08:58:52.702590 I | cephosd: device "vda6" is available.
2020-10-23 08:58:52.703132 I | cephosd: skipping device "vda6" that does not match the device filter/list ([{vda3 1 0 true false}]). <nil>
2020-10-23 08:58:52.706485 I | cephosd: configuring osd devices: {"Entries":{"vda3":{"Data":-1,"Metadata":null,"Config":{"Name":"vda3","OSDsPerDevice":1,"MetadataDevice":"","DatabaseSizeMB":0,"DeviceClass":"","IsFilter":true,"IsDevicePathFilter":false},"PersistentDevicePaths":[]}}}
2020-10-23 08:58:52.706670 I | cephclient: getting or creating ceph auth key "client.bootstrap-osd"
2020-10-23 08:58:52.707323 D | exec: Running command: ceph auth get-or-create-key client.bootstrap-osd mon allow profile bootstrap-osd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/497580015
2020-10-23 08:59:09.809284 I | cephosd: configuring new device vda3
2020-10-23 08:59:09.809366 I | cephosd: Base command - stdbuf
2020-10-23 08:59:09.809386 I | cephosd: immediateReportArgs - stdbuf
2020-10-23 08:59:09.809426 I | cephosd: immediateExecuteArgs - [-oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/vda3]
2020-10-23 08:59:09.812243 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/vda3 --report
2020-10-23 08:59:20.512992 D | exec: --> DEPRECATION NOTICE
2020-10-23 08:59:20.513154 D | exec: --> You are using the legacy automatic disk sorting behavior
2020-10-23 08:59:20.513175 D | exec: --> The Pacific release will change the default to --no-auto
2020-10-23 08:59:20.513191 D | exec: --> passed data devices: 0 physical, 1 LVM
2020-10-23 08:59:20.513205 D | exec: --> relative data size: 1.0
2020-10-23 08:59:20.699228 D | exec: Traceback (most recent call last):
2020-10-23 08:59:20.699307 D | exec:   File "/usr/sbin/ceph-volume", line 11, in <module>
2020-10-23 08:59:20.699329 D | exec:     load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
2020-10-23 08:59:20.699344 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 40, in __init__
2020-10-23 08:59:20.699357 D | exec:     self.main(self.argv)
2020-10-23 08:59:20.699374 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
2020-10-23 08:59:20.699386 D | exec:     return f(*a, **kw)
2020-10-23 08:59:20.699398 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 151, in main
2020-10-23 08:59:20.699409 D | exec:     terminal.dispatch(self.mapper, subcommand_args)
2020-10-23 08:59:20.699421 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
2020-10-23 08:59:20.699434 D | exec:     instance.main()
2020-10-23 08:59:20.699445 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 42, in main
2020-10-23 08:59:20.699457 D | exec:     terminal.dispatch(self.mapper, self.argv)
2020-10-23 08:59:20.699479 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
2020-10-23 08:59:20.699497 D | exec:     instance.main()
2020-10-23 08:59:20.699509 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
2020-10-23 08:59:20.699520 D | exec:     return func(*a, **kw)
2020-10-23 08:59:20.699531 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 399, in main
2020-10-23 08:59:20.699542 D | exec:     plan = self.get_plan(self.args)
2020-10-23 08:59:20.699553 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 437, in get_plan
2020-10-23 08:59:20.699564 D | exec:     args.wal_devices)
2020-10-23 08:59:20.699583 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 457, in get_deployment_layout
2020-10-23 08:59:20.699593 D | exec:     plan.extend(get_lvm_osds(lvm_devs, args))
2020-10-23 08:59:20.699604 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 93, in get_lvm_osds
2020-10-23 08:59:20.699615 D | exec:     disk.Size(b=int(lv.lvs[0].lv_size)),
2020-10-23 08:59:20.699626 D | exec: IndexError: list index out of range
failed to configure devices: failed to initialize devices: failed ceph-volume report: exit status 1
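For reference, the crash comes from `get_lvm_osds` in `batch.py` indexing `lv.lvs[0]` while the device's LV list is empty (the report classified /dev/vda3 as "1 LVM" data device, but no logical volume was resolved for it). A minimal sketch of the failure pattern and a guarded variant; `FakeLV`, `FakeDevice`, and `lv_size_bytes*` are illustrative stand-ins, not ceph-volume's actual classes:

```python
class FakeLV:
    """Stand-in for a resolved logical volume (illustrative only)."""
    def __init__(self, lv_size):
        self.lv_size = lv_size  # size in bytes, as a string like LVM reports


class FakeDevice:
    """Stand-in for ceph-volume's Device; the real class has many more fields."""
    def __init__(self, path, lvs):
        self.path = path
        self.lvs = lvs  # list of resolved LVs; empty in the failing case


def lv_size_bytes(device):
    # Same pattern as the traceback: disk.Size(b=int(lv.lvs[0].lv_size))
    # raises IndexError when device.lvs is empty.
    return int(device.lvs[0].lv_size)


def lv_size_bytes_guarded(device):
    # Defensive variant: report "no size" instead of crashing, so the
    # caller can skip the device with a log message.
    if not device.lvs:
        return None
    return int(device.lvs[0].lv_size)


dev = FakeDevice("/dev/vda3", lvs=[])
try:
    lv_size_bytes(dev)
except IndexError as e:
    print(f"reproduced: {e}")  # reproduced: list index out of range

print(lv_size_bytes_guarded(dev))  # None: device skipped instead of crashing
```

The unguarded function reproduces the reported `IndexError: list index out of range`; the guarded one shows the shape of a fix that lets provisioning continue past an unresolvable device.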