Bug #44950 (closed): OSDSpec: Reserving storage on db_devices

Added by Maran H about 4 years ago. Updated about 4 years ago.

Status:
Duplicate
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I'm trying to set up a new Ceph cluster using cephadm.

To save costs, I've gotten four OSD servers with only a handful of HDDs in them, leaving room to upgrade in the future when more capacity is needed. I did buy an extra NVMe device for the db/wal in each server.

What I'm trying to achieve now is to write a DriveSpec with a (db_)limit that is bigger than the number of HDDs currently present, so that in the future the same NVMe device can be used for the new drives. However, it seems that it will consume the NVMe in its entirety even if I set limits.

For instance:

service_type: osd
service_id: osd
placement:
  host_pattern: 'osd*'
data_devices:
  rotational: true
db_devices:
  rotational: false
  limit: 36
  db_limit: 36

Not sure if this is a bug or intended behaviour; I guess nobody has tried to set up a poor man's cluster yet ;)
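
For context, a minimal sketch of how a spec like the one above is typically handed to cephadm (the filename is illustrative, and exact orchestrator commands can vary by release):

  # Apply the OSDSpec from a file (filename is an example);
  # cephadm then drives ceph-volume on the matching hosts.
  ceph orch apply osd -i osd_spec.yaml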


Related issues 1 (0 open, 1 closed)

Related to ceph-volume - Bug #44494: prepare: the *-slots arguments have no effect (Resolved)

Actions #1

Updated by Sebastian Wagner about 4 years ago

  • Subject changed from Reserving storage on db_devices to OSDSpec: Reserving storage on db_devices
Actions #2

Updated by Joshua Schmid about 4 years ago

This can be achieved using the `slots` option of ceph-volume.

Unfortunately, the slots option for wal/db (taken from `ceph-volume lvm prepare --help`)

  --block.db-slots BLOCK_DB_SLOTS
                        Intended number of slots on db device. The new OSD
                        gets one of those slots or 1/nth of the available
                        capacity
  --block.wal-slots BLOCK_WAL_SLOTS
                        Intended number of slots on wal device. The new OSD
                        gets one of those slots or 1/nth of the available
                        capacity

is not yet implemented in `ceph-volume lvm batch`, which cephadm almost exclusively uses.

I'll create a tracker for ceph-volume and link it as a blocker for this issue.
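
For illustration only, a sketch of how those prepare-level flags would be used directly, reusing the device paths mentioned later in this thread and assuming the slots arguments are actually honoured by the installed build. With 36 slots, each db LV should get roughly 1/36 of the NVMe instead of the whole device:

  # Sketch: prepare a single OSD with its db on a shared NVMe,
  # requesting 1/36 of the NVMe for this OSD's db LV.
  # Device paths are examples.
  ceph-volume lvm prepare --bluestore \
      --data /dev/sdc \
      --block.db /dev/nvme0n1 \
      --block.db-slots 36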

Actions #3

Updated by Joshua Schmid about 4 years ago

  • Blocked by Feature #44951: add support for 'slots' in lvm batch added
Actions #4

Updated by Maran H about 4 years ago

Joshua Schmid wrote:

This can be achieved using the `slots` option of ceph-volume.

I've read this as "it's not implemented in batch", and therefore I assumed it would be implemented in create or prepare. However, using the option like `ceph-volume lvm create --bluestore --block.wal-slots 36 --block.wal /dev/nvme0n1 --data /dev/sdc` still seems to consume the whole disk. Am I using it wrong, or is it missing from the whole of ceph-volume at this point?

Actions #5

Updated by Joshua Schmid about 4 years ago

Maran H wrote:

Joshua Schmid wrote:

This can be achieved using the `slots` option of ceph-volume.

I've read this as "it's not implemented in batch", and therefore I assumed it would be implemented in create or prepare. However, using the option like `ceph-volume lvm create --bluestore --block.wal-slots 36 --block.wal /dev/nvme0n1 --data /dev/sdc` still seems to consume the whole disk. Am I using it wrong, or is it missing from the whole of ceph-volume at this point?

Hmm, this looks right to me. I can only encourage you to look through the logs for anything fishy. I'll try to find something in the code.
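
For what it's worth, ceph-volume's own log is usually the first place to check (a hedged pointer; with cephadm the log typically sits under a per-fsid directory on the host):

  # Typical ceph-volume log locations (paths may differ per deployment):
  less /var/log/ceph/ceph-volume.log
  # or, under cephadm:
  # less /var/log/ceph/<fsid>/ceph-volume.log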

Actions #6

Updated by Jan Fajerski about 4 years ago

Maran H wrote:

I've read this as "it's not implemented in batch", and therefore I assumed it would be implemented in create or prepare. However, using the option like `ceph-volume lvm create --bluestore --block.wal-slots 36 --block.wal /dev/nvme0n1 --data /dev/sdc` still seems to consume the whole disk. Am I using it wrong, or is it missing from the whole of ceph-volume at this point?

Which version are you running? There was a bug recently where the slots arguments were ignored. master and octopus should have that fix though. mimic and nautilus backports are still open. Here is the tracker ticket: https://tracker.ceph.com/issues/44494
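
As a rough way to verify whether the slots argument took effect (a sketch, assuming a build that already contains the fix): the db LV carved out of the NVMe should be about 1/36 of the device rather than all of it.

  # Inspect LV sizes on the db device; with --block.db-slots 36 the db LV
  # should be roughly device_size / 36, not the whole NVMe.
  lvs -o lv_name,lv_size,vg_name --units g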

Actions #7

Updated by Maran H about 4 years ago

Jan Fajerski wrote:

Which version are you running? There was a bug recently where the slots arguments were ignored. master and octopus should have that fix though. mimic and nautilus backports are still open. Here is the tracker ticket: https://tracker.ceph.com/issues/44494

Ah, this is it. Because I couldn't get the slots working on Octopus, I reinstalled the cluster with Nautilus. Seems I'm a bit unlucky with my timing :)

Actions #8

Updated by Sebastian Wagner about 4 years ago

  • Related to Bug #44494: prepare: the *-slots arguments have no effect added
Actions #9

Updated by Sebastian Wagner about 4 years ago

  • Tracker changed from Feature to Bug
  • Regression set to No
  • Severity set to 3 - minor
Actions #10

Updated by Sebastian Wagner about 4 years ago

  • Blocked by deleted (Feature #44951: add support for 'slots' in lvm batch)
Actions #11

Updated by Sebastian Wagner about 4 years ago

  • Status changed from New to Duplicate
