Bug #44950

closed

OSDSpec: Reserving storage on db_devices

Added by Maran H about 4 years ago. Updated about 4 years ago.

Status:
Duplicate
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I'm trying to set up a new Ceph cluster using cephadm.

To save costs I've gotten four OSD servers with only a handful of HDDs in them, leaving room to upgrade in the future when more capacity is needed. I did buy an extra NVMe device for the db/wal in each server.

What I'm trying to achieve now is to write a DriveSpec with a (db_)limit that is bigger than the number of HDDs currently present, so that in the future the same NVMe device can be used for the new drives. However, it seems that the NVMe is consumed in its entirety even if I set up limits.

For instance:

service_type: osd
service_id: osd
placement:
  host_pattern: 'osd*'
data_devices:
  rotational: true
db_devices:
  rotational: false
  limit: 36
  db_limit: 36

Not sure if this is a bug or intended behaviour; I guess nobody has tried to set up a poor man's cluster yet ;)
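[Editor's note: one possible workaround to sketch here, not confirmed by the reporter, is capping the per-OSD db partition size with the spec's block_db_size field instead of relying on limit/db_limit, so each new HDD only claims a fixed slice of the NVMe; the 50G value below is an arbitrary illustration, and the related Bug #44494 suggests slot/size handling may not be honored in all versions:]

service_type: osd
service_id: osd
placement:
  host_pattern: 'osd*'
data_devices:
  rotational: true
db_devices:
  rotational: false
block_db_size: 50G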


Related issues: 1 (0 open, 1 closed)

Related to ceph-volume - Bug #44494: prepare: the *-slots arguments have no effect (Resolved)
