Bug #54541


OSD service specification doesn't account for size for multiple DB devices

Added by Glen Baars about 2 years ago. Updated almost 2 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
OSD
Target version:
-
% Done:

0%

Source:
Community (user)
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

The server has two 480 GB DB devices and five 18 TB HDDs. With a 100 GB DB allocation per OSD (500 GB total), the request should fit across both DB devices, but the size check only accounts for a single DB device.

Ceph 16.2.7

YAML:-

service_type: osd
service_id: nas-aubun-rk2-ceph10_osd_spec_hdd
block_db_size: 107374182400
placement:
  host_pattern: nas-aubun-rk2-ceph10
data_devices:
  rotational: 1
db_devices:
  rotational: 0
  size: ':1000G'
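For reference, the block_db_size value in the spec works out to exactly 100 GiB. A quick check (plain Python, not part of any Ceph tooling):

```python
# block_db_size from the spec above, in bytes
block_db_size = 107374182400

# 1 GiB = 1024**3 bytes, so this is exactly 100 GiB
print(block_db_size // 1024**3)            # → 100
print(block_db_size == 100 * 1024**3)      # → True
```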

Drives:-
HOST PATH TYPE DEVICE ID SIZE AVAILABLE REJECT REASONS
nas-aubun-rk2-ceph10 /dev/sdb ssd Micron_5300_MTFD_500a075132891ec5 480G Yes
nas-aubun-rk2-ceph10 /dev/sdc ssd Micron_5300_MTFD_500a075132891d06 480G Yes
nas-aubun-rk2-ceph10 /dev/sdd hdd WDC_WUH721818AL_5000cca2c2ecc57d 18.0T Yes
nas-aubun-rk2-ceph10 /dev/sde hdd WDC_WUH721818AL_5000cca2c2ecc9c7 18.0T Yes
nas-aubun-rk2-ceph10 /dev/sdf hdd WDC_WUH721818AL_5000cca2c2ecb24c 18.0T Yes
nas-aubun-rk2-ceph10 /dev/sdg hdd WDC_WUH721818AL_5000cca2c2ec9624 18.0T Yes
nas-aubun-rk2-ceph10 /dev/sdh hdd WDC_WUH721818AL_5000cca2c2ec9622 18.0T Yes

cephadm log:-

2022-03-12 07:22:34,099 7fd7a46fd740 DEBUG --------------------------------------------------------------------------------
cephadm ['--env', 'CEPH_VOLUME_OSDSPEC_AFFINITY=nas-aubun-rk2-ceph10_osd_spec_hdd', '--image', 'quay.io/ceph/ceph@sha256:bb6a71f7f481985f6d3b358e3b9ef64c6755b3db5aa53198e0aac38be5c8ae54', 'ceph-volume', '--fsid', 'b8c91c0d-6131-405f-b0b5-26817adb13ab', '--config-json', '-', '--', 'lvm', 'batch', '--no-auto', '/dev/sdd', '/dev/sde', '/dev/sdf', '/dev/sdg', '/dev/sdh', '--db-devices', '/dev/sdb', '/dev/sdc', '--block-db-size', '107374182400', '--yes', '--no-systemd']
2022-03-12 07:22:34,651 7fd7a46fd740 DEBUG Using default config: /etc/ceph/ceph.conf
2022-03-12 07:22:34,651 7fd7a46fd740 DEBUG Using specified fsid: b8c91c0d-6131-405f-b0b5-26817adb13ab
2022-03-12 07:22:34,856 7fd7a46fd740 DEBUG stat: 167 167
2022-03-12 07:22:34,906 7fd7a46fd740 DEBUG Acquiring lock 140564137254192 on /run/cephadm/b8c91c0d-6131-405f-b0b5-26817adb13ab.lock
2022-03-12 07:22:34,906 7fd7a46fd740 DEBUG Lock 140564137254192 acquired on /run/cephadm/b8c91c0d-6131-405f-b0b5-26817adb13ab.lock
2022-03-12 07:22:36,497 7fd7a46fd740 DEBUG /usr/bin/docker: --> passed data devices: 5 physical, 0 LVM
2022-03-12 07:22:36,498 7fd7a46fd740 DEBUG /usr/bin/docker: --> relative data size: 1.0
2022-03-12 07:22:36,498 7fd7a46fd740 DEBUG /usr/bin/docker: --> passed block_db devices: 2 physical, 0 LVM
2022-03-12 07:22:36,498 7fd7a46fd740 DEBUG /usr/bin/docker: --> 100.00 GB was requested for block_db_size, but only 74.52 GB can be fulfilled
2022-03-12 07:22:36,604 7fd7a46fd740 INFO Non-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:bb6a71f7f481985f6d3b358e3b9ef64c6755b3db5aa53198e0aac38be5c8ae54 -e NODE_NAME=nas-aubun-rk2-ceph10 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=nas-aubun-rk2-ceph10_osd_spec_hdd -v /var/log/ceph/b8c91c0d-6131-405f-b0b5-26817adb13ab:/var/log/ceph:z -v /var/lib/ceph/b8c91c0d-6131-405f-b0b5-26817adb13ab/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /tmp/ceph-tmp8vgjzmpf:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpx9shbd_q:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:bb6a71f7f481985f6d3b358e3b9ef64c6755b3db5aa53198e0aac38be5c8ae54 lvm batch --no-auto /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh --db-devices /dev/sdb /dev/sdc --block-db-size 107374182400 --yes --no-systemd
2022-03-12 07:22:36,605 7fd7a46fd740 INFO /usr/bin/docker: stderr --> passed data devices: 5 physical, 0 LVM
2022-03-12 07:22:36,605 7fd7a46fd740 INFO /usr/bin/docker: stderr --> relative data size: 1.0
2022-03-12 07:22:36,605 7fd7a46fd740 INFO /usr/bin/docker: stderr --> passed block_db devices: 2 physical, 0 LVM
2022-03-12 07:22:36,605 7fd7a46fd740 INFO /usr/bin/docker: stderr --> 100.00 GB was requested for block_db_size, but only 74.52 GB can be fulfilled
2022-03-12 07:22:37,599 7fc7b099b740 DEBUG --------------------------------------------------------------------------------
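The failure mode reported above can be illustrated with a hypothetical sketch (plain Python, not ceph-volume's actual code; capacities in GB): a check that divides a single DB device's capacity among all data devices rejects this layout, while a check that spreads the OSD DB slots across both DB devices accepts it.

```python
def db_size_fits_buggy(db_devices_gb, n_osds, block_db_size_gb):
    # Hypothetical buggy check: sizes DB volumes as if all n_osds
    # DB volumes had to fit on a single DB device.
    per_osd = db_devices_gb[0] / n_osds
    return per_osd >= block_db_size_gb

def db_size_fits_fixed(db_devices_gb, n_osds, block_db_size_gb):
    # Expected behaviour: distribute the OSD DB slots across all DB
    # devices as evenly as possible, then check each device's share.
    n_dev = len(db_devices_gb)
    slots = [n_osds // n_dev + (1 if i < n_osds % n_dev else 0)
             for i in range(n_dev)]
    return all(cap >= s * block_db_size_gb
               for cap, s in zip(db_devices_gb, slots))

# The reporter's layout: two 480 GB SSDs, five HDDs, 100 GB DB each.
print(db_size_fits_buggy([480, 480], 5, 100))  # → False (spec rejected)
print(db_size_fits_fixed([480, 480], 5, 100))  # → True (3 + 2 slots fit)
```

With both devices considered, three 100 GB DBs land on one SSD and two on the other, well within 480 GB each; only the single-device view fails.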

Actions #1

Updated by Ilya Dryomov almost 2 years ago

  • Target version deleted (v16.2.8)