Bug #49096

ceph-volume batch osd failure

Added by 伟 宋 about 3 years ago. Updated almost 3 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Target version:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Adding a new bluestore-type OSD with a db-device failed:

2021-02-02 07:58:51.943016 D | exec: Running command: lsblk /dev/vda1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
2021-02-02 07:58:52.044108 D | exec: Running command: ceph-volume inventory --format json /dev/vda1
2021-02-02 07:58:59.910023 I | cephosd: skipping device "vda1": ["Insufficient space (<5GB)"].
2021-02-02 07:58:59.910314 I | cephosd: configuring osd devices: {"Entries":{"vde":{"Data":-1,"Metadata":null,"Config":{"Name":"vde","OSDsPerDevice":1,"MetadataDevice":"","DatabaseSizeMB":0,"DeviceClass":"","IsFilter":false,"IsDevicePathFilter":false},"PersistentDevicePaths":["/dev/disk/by-path/pci-0000:00:0b.0","/dev/disk/by-path/virtio-pci-0000:00:0b.0"]}}}
2021-02-02 07:58:59.910379 I | cephclient: getting or creating ceph auth key "client.bootstrap-osd"
2021-02-02 07:58:59.910594 D | exec: Running command: ceph auth get-or-create-key client.bootstrap-osd mon allow profile bootstrap-osd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/114673444
2021-02-02 07:59:02.151854 I | cephosd: configuring new device vde
2021-02-02 07:59:02.151909 I | cephosd: using vdd as metadataDevice for device /dev/vde and let ceph-volume lvm batch decide how to create volumes
2021-02-02 07:59:02.151947 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 --block-db-size 32212254720 /dev/vde --db-devices /dev/vdd --report
2021-02-02 07:59:16.341715 D | exec: --> passed data devices: 1 physical, 0 LVM
2021-02-02 07:59:16.341816 D | exec: --> relative data size: 1.0
2021-02-02 07:59:16.341821 D | exec: --> passed block_db devices: 1 physical, 0 LVM
2021-02-02 07:59:16.341830 D | exec: --> 1 fast devices were passed, but none are available
2021-02-02 07:59:16.343743 D | exec:
2021-02-02 07:59:16.343767 D | exec: Total OSDs: 0
2021-02-02 07:59:16.343771 D | exec:
2021-02-02 07:59:16.343775 D | exec: Type Path LV Size % of device
2021-02-02 07:59:16.357084 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 --block-db-size 32212254720 /dev/vde --db-devices /dev/vdd --report --format json
2021-02-02 07:59:29.005448 D | cephosd: ceph-volume reports: []
failed to configure devices: failed to initialize devices: failed to create enough required devices, required: [], actual: []
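The log shows `ceph-volume lvm batch ... --report --format json` returning an empty array (`ceph-volume reports: []`), which the orchestrator then turns into the "failed to create enough required devices" error. A minimal sketch of detecting this condition from the JSON report (hypothetical helper, not Rook's actual code):

```python
import json

def osds_from_report(report_json: str) -> int:
    """Count the OSDs that 'ceph-volume lvm batch --report --format json'
    would create.

    An empty list, as in the failing run above, means none of the passed
    devices were usable -- here because the --db-devices disk (vdd) was
    not considered available ("1 fast devices were passed, but none are
    available").
    """
    entries = json.loads(report_json)
    return len(entries)

# The report from the failing run above was an empty array:
print(osds_from_report("[]"))
```

Checking the report count before running the actual prepare step would surface this failure earlier, with the ceph-volume reasoning still visible in the log.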

History

#1 Updated by 伟 宋 about 3 years ago

ceph-volume batch osd failure

#2 Updated by 伟 宋 about 3 years ago

vdd already has two db LVs; vde is a new OSD device

#3 Updated by Ilya Dryomov almost 3 years ago

  • Project changed from Linux kernel client to ceph-volume

This is not a kernel client bug, moving to ceph-volume.
