Bug #57939


Not able to add additional disk sharing common wal/db device

Added by Daniel Olsson over 1 year ago.

Status:
New
Priority:
Normal
Assignee:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
ceph-disk
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Ceph version: 17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034d) quincy (stable)

Each of our nodes has 8x 16 TB rotational disks and one 2 TB NVMe device, with the WAL/DB on the NVMe. Everything works fine when a new, fully populated host is added, but when I want to add the disks one by one it does not work.

I tried to add a node with only two disks and used the following service spec:
spec:
  data_devices:
    rotational: 1
    size: "8T:"
  db_devices:
    rotational: 0
    size: ":4T"
  filter_logic: AND
  objectstore: bluestore
  block_db_size: 238G
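For reference, this is roughly how a spec like the above is applied and previewed (the filename osd-spec.yml is just an example):

  ceph orch apply -i osd-spec.yml --dry-run

With --dry-run, cephadm only prints the OSDSPEC PREVIEWS table (as shown further down) instead of actually creating the OSDs.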

I tried to use "db_slots: 8" instead of block_db_size, but that did not work either: the NVMe disk was divided into two 1 TB LVs. When I ran "ceph-volume lvm batch --report /dev/sda /dev/sdb --db-devices /dev/nvme0n1 --block-db-slots 8" by hand, the reported NVMe size was correct...
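For completeness, the db_slots variant is the same spec as above with block_db_size replaced, roughly:

spec:
  data_devices:
    rotational: 1
    size: "8T:"
  db_devices:
    rotational: 0
    size: ":4T"
  filter_logic: AND
  objectstore: bluestore
  db_slots: 8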

However, with the block_db_size spec shown above, the two OSDs were added correctly.

Then I added another 16 TB disk, but that one did not get its WAL/DB on the NVMe at all. I noticed that the following was run: "ceph-volume lvm batch --no-auto /dev/sdd --block-db-size 238G --yes --no-systemd". This is not correct!
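The generated command is missing the --db-devices argument for the shared NVMe. Something closer to the following would be expected (a sketch; note that, as shown further down, even this form fails when only the new data disk is passed, because ceph-volume then rejects the partially used NVMe as unavailable):

  ceph-volume lvm batch --no-auto /dev/sdd --db-devices /dev/nvme0n1 --block-db-size 238G --yes --no-systemd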

dry-run result:

################
OSDSPEC PREVIEWS
################
+---------+-----------+-------+----------+-----+-----+
|SERVICE  |NAME       |HOST   |DATA      |DB   |WAL  |
+---------+-----------+-------+----------+-----+-----+
|osd      |ses04-osd  |ses04  |/dev/sdd  |-    |-    |
+---------+-----------+-------+----------+-----+-----+

If I run the command manually, this is the result:
root@ses04:/# ceph-volume lvm batch --report --no-auto /dev/sdd --block-db-size 238G --yes --no-systemd
--> passed data devices: 1 physical, 0 LVM
--> relative data size: 1.0

Total OSDs: 1

Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
data            /dev/sdd                                                14.55 TB        100.00%

When running the following commands on the host, the result is as expected:
root@ses04:/# ceph-volume lvm batch --report /dev/sda /dev/sdb /dev/sdd --db-devices /dev/nvme0n1 --block-db-slots 8
--> passed data devices: 3 physical, 0 LVM
--> relative data size: 1.0
--> passed block_db devices: 1 physical, 0 LVM

Total OSDs: 1

Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
data            /dev/sdd                                                14.55 TB        100.00%
block_db        /dev/nvme0n1                                            238.47 GB       12.50%
root@ses04:/# ceph-volume lvm batch --report /dev/sda /dev/sdb /dev/sdd --db-devices /dev/nvme0n1 --block-db-size 238G
--> passed data devices: 3 physical, 0 LVM
--> relative data size: 1.0
--> passed block_db devices: 1 physical, 0 LVM

Total OSDs: 1

Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
data            /dev/sdd                                                14.55 TB        100.00%
block_db        /dev/nvme0n1                                            238.00 GB       12.48%

But when running without all the disks listed, the result is WRONG:

root@ses04:/# ceph-volume lvm batch --report /dev/sdd --db-devices /dev/nvme0n1 --block-db-size 238G
--> passed data devices: 1 physical, 0 LVM
--> relative data size: 1.0
--> passed block_db devices: 1 physical, 0 LVM
--> 1 fast devices were passed, but none are available

Total OSDs: 0

Type            Path                                                    LV Size         % of device
root@ses04:/#

I did find a workaround that worked:

ceph orch daemon add osd ses04:data_devices=/dev/sda,/dev/sdb,/dev/sdd,db_devices=/dev/nvme0n1,block_db_size=238G

This added the disk with a 238 GB WAL/DB on the NVMe.
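To confirm where the new OSD's DB ended up, the LVM layout can be checked the same way as in the EXTRA INFO section below, for example:

  ceph-volume lvm list /dev/sdd

which lists the block and db devices backing the OSD created on /dev/sdd.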

EXTRA INFO:
The information below was extracted before the new disk was added.

root@ses04:/# ceph-volume inventory

Device Path      Size         rotates    available    Model name
/dev/sdd         14.55 TB     True       True         ST16000NM001G-2K
/dev/nvme0n1     1.86 TB      False      False        INTEL SSDPEKNW020T8
/dev/sda         14.55 TB     True       False        ST16000NM001G-2K
/dev/sdb         14.55 TB     True       False        ST16000NM001G-2K
/dev/sdc         119.25 GB    True       False        Extreme Pro

root@ses04:/# ceph-volume inventory --format json-pretty
[ {
"available": true,
"ceph_device": null,
"device_id": "ATA_ST16000NM001G-2K_ZL22P415",
"lsm_data": {},
"lvs": [],
"path": "/dev/sdd",
"rejected_reasons": [],
"sys_api": {
"human_readable_size": "14.55 TB",
"locked": 0,
"model": "ST16000NM001G-2K",
"nr_requests": "256",
"partitions": {},
"path": "/dev/sdd",
"removable": "0",
"rev": "SN02",
"ro": "0",
"rotational": "1",
"sas_address": "",
"sas_device_handle": "",
"scheduler_mode": "mq-deadline",
"sectors": 0,
"sectorsize": "512",
"size": 16000900661248.0,
"support_discard": "0",
"vendor": "ATA"
}
}, {
"available": false,
"ceph_device": null,
"device_id": "INTEL_SSDPEKNW020T8_BTNH84840ZWL2P0C",
"lsm_data": {},
"lvs": [ {
"cluster_fsid": "c55d7ef8-1287-4779-ac7c-078d98a7d794",
"cluster_name": "ceph",
"db_uuid": "sYhvQ3-QnmG-vf6z-IGCT-Ukvk-sF6L-jUvYjh",
"name": "osd-db-c331d71f-7b9b-4a25-bea2-d1059e598cf7",
"osd_fsid": "1ab86fc2-73e9-4ec2-89f0-9a530551f5c9",
"osd_id": "1",
"osdspec_affinity": "ses04-osd",
"type": "db"
}, {
"cluster_fsid": "c55d7ef8-1287-4779-ac7c-078d98a7d794",
"cluster_name": "ceph",
"db_uuid": "77wMMO-MGnF-yfC4-NB2X-WMqv-Q0Nz-PeB5Nk",
"name": "osd-db-5504c815-0b62-4042-9400-e15fa112b806",
"osd_fsid": "b400f2a3-52cc-4770-8be7-e2a2e42bb6f3",
"osd_id": "2",
"osdspec_affinity": "ses04-osd",
"type": "db"
}
],
"path": "/dev/nvme0n1",
"rejected_reasons": [
"LVM detected",
"locked"
],
"sys_api": {
"human_readable_size": "1.86 TB",
"locked": 1,
"model": "INTEL SSDPEKNW020T8",
"nr_requests": "255",
"partitions": {},
"path": "/dev/nvme0n1",
"removable": "0",
"rev": "",
"ro": "0",
"rotational": "0",
"sas_address": "",
"sas_device_handle": "",
"scheduler_mode": "none",
"sectors": 0,
"sectorsize": "512",
"size": 2048408248320.0,
"support_discard": "512",
"vendor": ""
}
}, {
"available": false,
"ceph_device": null,
"device_id": "ATA_ST16000NM001G-2K_ZL22L4NX",
"lsm_data": {},
"lvs": [ {
"block_uuid": "ZpO0zk-Axw5-2DNy-6u7V-JgGz-NlhK-ki1sFK",
"cluster_fsid": "c55d7ef8-1287-4779-ac7c-078d98a7d794",
"cluster_name": "ceph",
"name": "osd-block-1ab86fc2-73e9-4ec2-89f0-9a530551f5c9",
"osd_fsid": "1ab86fc2-73e9-4ec2-89f0-9a530551f5c9",
"osd_id": "1",
"osdspec_affinity": "ses04-osd",
"type": "block"
}
],
"path": "/dev/sda",
"rejected_reasons": [
"LVM detected",
"locked",
"Insufficient space (<10 extents) on vgs"
],
"sys_api": {
"human_readable_size": "14.55 TB",
"locked": 1,
"model": "ST16000NM001G-2K",
"nr_requests": "256",
"partitions": {},
"path": "/dev/sda",
"removable": "0",
"rev": "SN02",
"ro": "0",
"rotational": "1",
"sas_address": "",
"sas_device_handle": "",
"scheduler_mode": "mq-deadline",
"sectors": 0,
"sectorsize": "512",
"size": 16000900661248.0,
"support_discard": "0",
"vendor": "ATA"
}
}, {
"available": false,
"ceph_device": null,
"device_id": "ATA_ST16000NM001G-2K_ZL22KNJZ",
"lsm_data": {},
"lvs": [ {
"block_uuid": "MMxzpe-OobH-kpnm-r6MP-BDcU-iJxN-CGSSYl",
"cluster_fsid": "c55d7ef8-1287-4779-ac7c-078d98a7d794",
"cluster_name": "ceph",
"name": "osd-block-b400f2a3-52cc-4770-8be7-e2a2e42bb6f3",
"osd_fsid": "b400f2a3-52cc-4770-8be7-e2a2e42bb6f3",
"osd_id": "2",
"osdspec_affinity": "ses04-osd",
"type": "block"
}
],
"path": "/dev/sdb",
"rejected_reasons": [
"LVM detected",
"locked",
"Insufficient space (<10 extents) on vgs"
],
"sys_api": {
"human_readable_size": "14.55 TB",
"locked": 1,
"model": "ST16000NM001G-2K",
"nr_requests": "256",
"partitions": {},
"path": "/dev/sdb",
"removable": "0",
"rev": "SN02",
"ro": "0",
"rotational": "1",
"sas_address": "",
"sas_device_handle": "",
"scheduler_mode": "mq-deadline",
"sectors": 0,
"sectorsize": "512",
"size": 16000900661248.0,
"support_discard": "0",
"vendor": "ATA"
}
}, {
"available": false,
"ceph_device": null,
"device_id": "Extreme_Pro_11345678B5F8",
"lsm_data": {},
"lvs": [ {
"comment": "not used by ceph",
"name": "lvroot"
}
],
"path": "/dev/sdc",
"rejected_reasons": [
"Has partitions",
"LVM detected",
"Has GPT headers",
"locked",
"Insufficient space (<10 extents) on vgs"
],
"sys_api": {
"human_readable_size": "119.25 GB",
"locked": 1,
"model": "Extreme Pro",
"nr_requests": "2",
"partitions": {
"sdc1": {
"holders": [],
"human_readable_size": "1.00 MB",
"sectors": "2048",
"sectorsize": 512,
"size": 1048576.0,
"start": "2048"
},
"sdc2": {
"holders": [
"dm-0"
],
"human_readable_size": "119.24 GB",
"sectors": "250068992",
"sectorsize": 512,
"size": 128035323904.0,
"start": "4096"
}
},
"path": "/dev/sdc",
"removable": "1",
"rev": "0",
"ro": "0",
"rotational": "1",
"sas_address": "",
"sas_device_handle": "",
"scheduler_mode": "mq-deadline",
"sectors": 0,
"sectorsize": "512",
"size": 128043712512.0,
"support_discard": "0",
"vendor": "SanDisk"
}
}
]

root@ses04:/# ceph-volume lvm list

====== osd.1 =======

[db]          /dev/ceph-38def26d-2946-4ebc-8912-4dd86570012a/osd-db-c331d71f-7b9b-4a25-bea2-d1059e598cf7
block device              /dev/ceph-5a9b986b-c66b-4899-9491-33963f385360/osd-block-1ab86fc2-73e9-4ec2-89f0-9a530551f5c9
block uuid ZpO0zk-Axw5-2DNy-6u7V-JgGz-NlhK-ki1sFK
cephx lockbox secret
cluster fsid c55d7ef8-1287-4779-ac7c-078d98a7d794
cluster name ceph
crush device class
db device /dev/ceph-38def26d-2946-4ebc-8912-4dd86570012a/osd-db-c331d71f-7b9b-4a25-bea2-d1059e598cf7
db uuid sYhvQ3-QnmG-vf6z-IGCT-Ukvk-sF6L-jUvYjh
encrypted 0
osd fsid 1ab86fc2-73e9-4ec2-89f0-9a530551f5c9
osd id 1
osdspec affinity ses04-osd
type db
vdo 0
devices /dev/nvme0n1
[block]       /dev/ceph-5a9b986b-c66b-4899-9491-33963f385360/osd-block-1ab86fc2-73e9-4ec2-89f0-9a530551f5c9
block device              /dev/ceph-5a9b986b-c66b-4899-9491-33963f385360/osd-block-1ab86fc2-73e9-4ec2-89f0-9a530551f5c9
block uuid ZpO0zk-Axw5-2DNy-6u7V-JgGz-NlhK-ki1sFK
cephx lockbox secret
cluster fsid c55d7ef8-1287-4779-ac7c-078d98a7d794
cluster name ceph
crush device class
db device /dev/ceph-38def26d-2946-4ebc-8912-4dd86570012a/osd-db-c331d71f-7b9b-4a25-bea2-d1059e598cf7
db uuid sYhvQ3-QnmG-vf6z-IGCT-Ukvk-sF6L-jUvYjh
encrypted 0
osd fsid 1ab86fc2-73e9-4ec2-89f0-9a530551f5c9
osd id 1
osdspec affinity ses04-osd
type block
vdo 0
devices /dev/sda

====== osd.2 =======

[db]          /dev/ceph-38def26d-2946-4ebc-8912-4dd86570012a/osd-db-5504c815-0b62-4042-9400-e15fa112b806
block device              /dev/ceph-8d868e6b-a9d5-4f53-a9e8-d5d9d9bed89d/osd-block-b400f2a3-52cc-4770-8be7-e2a2e42bb6f3
block uuid MMxzpe-OobH-kpnm-r6MP-BDcU-iJxN-CGSSYl
cephx lockbox secret
cluster fsid c55d7ef8-1287-4779-ac7c-078d98a7d794
cluster name ceph
crush device class
db device /dev/ceph-38def26d-2946-4ebc-8912-4dd86570012a/osd-db-5504c815-0b62-4042-9400-e15fa112b806
db uuid 77wMMO-MGnF-yfC4-NB2X-WMqv-Q0Nz-PeB5Nk
encrypted 0
osd fsid b400f2a3-52cc-4770-8be7-e2a2e42bb6f3
osd id 2
osdspec affinity ses04-osd
type db
vdo 0
devices /dev/nvme0n1
[block]       /dev/ceph-8d868e6b-a9d5-4f53-a9e8-d5d9d9bed89d/osd-block-b400f2a3-52cc-4770-8be7-e2a2e42bb6f3
block device              /dev/ceph-8d868e6b-a9d5-4f53-a9e8-d5d9d9bed89d/osd-block-b400f2a3-52cc-4770-8be7-e2a2e42bb6f3
block uuid MMxzpe-OobH-kpnm-r6MP-BDcU-iJxN-CGSSYl
cephx lockbox secret
cluster fsid c55d7ef8-1287-4779-ac7c-078d98a7d794
cluster name ceph
crush device class
db device /dev/ceph-38def26d-2946-4ebc-8912-4dd86570012a/osd-db-5504c815-0b62-4042-9400-e15fa112b806
db uuid 77wMMO-MGnF-yfC4-NB2X-WMqv-Q0Nz-PeB5Nk
encrypted 0
osd fsid b400f2a3-52cc-4770-8be7-e2a2e42bb6f3
osd id 2
osdspec affinity ses04-osd
type block
vdo 0
devices /dev/sdb
root@ses04:/#
