Bug #45861

data_devices: limit 3 deployed 6 osds per node

Added by Denys Kondratenko about 1 month ago. Updated 3 days ago.

Status:
Fix Under Review
Priority:
Normal
Assignee:
Category:
orchestrator
Target version:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature:

Description

We have 5 OSD nodes that all look similar (except node 4, which has one fewer SSD):

blueshark-1:~ # ceph orch device ls blueshark-5
HOST         PATH          TYPE   SIZE  DEVICE                                  AVAIL  REJECT REASONS
blueshark-5  /dev/nvme0n1  ssd    745G  INTEL SSDPEDMD800G4_CVFT54700076800CGN  True
blueshark-5  /dev/nvme1n1  ssd    745G  INTEL SSDPEDMD800G4_CVFT6075000C800CGN  True
blueshark-5  /dev/sdb      ssd    372G  INTEL_SSDSC2BA400G4_BTHV6082036R400NGN  True
blueshark-5  /dev/sdc      ssd    372G  INTEL_SSDSC2BA400G4_BTHV611201F3400NGN  True
blueshark-5  /dev/sdd      ssd    372G  INTEL_SSDSC2BA400G4_BTHV608300WC400NGN  True
blueshark-5  /dev/sde      ssd    372G  INTEL_SSDSC2BA400G4_BTHV6082036F400NGN  True
blueshark-5  /dev/sdf      ssd    372G  INTEL_SSDSC2BA400G4_BTHV608203EY400NGN  True
blueshark-5  /dev/sdg      ssd    372G  INTEL_SSDSC2BA400G4_BTHV6082036H400NGN  True
blueshark-5  /dev/sdh      ssd   29.8G  SATA_SSD_67F407560E2400150839           True
blueshark-5  /dev/sdi      ssd   29.8G  SATA_SSD_AF340757042400153013           True
blueshark-5  /dev/sda      ssd    223G  Micron_5200_MTFDDAK240TDN_18532045B10D  False  locked

We wanted to deploy 3 OSDs per node on the SSDs, with 3 DBs on NVMe:

service_type: osd
service_id: 3osd_3db
placement:
  host_pattern: 'blueshark-[4-8]'
data_devices:
  model: 'INTEL SSDSC2BA40'
  limit: 3
db_devices:
  model: 'INTEL SSDPEDMD800G4'
block_db_size: 51539607552
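
The intent of the data_devices filters above can be sketched as follows. This is a hypothetical helper (not the actual cephadm DriveSelection code): devices are matched by model substring, then the result is capped by limit.

```python
# Hypothetical sketch (not the cephadm implementation) of how the
# data_devices filter is expected to behave on a single host:
# match by model substring, then cap the matches with `limit`.

def select_data_devices(inventory, model, limit=None):
    """Return available devices whose model contains `model`, capped at `limit`."""
    matched = [d for d in inventory if model in d["model"] and d["available"]]
    if limit is not None:
        matched = matched[:limit]  # expected cap; the bug is that it is not honored
    return matched

# The six matching SSDs reported by `ceph orch device ls blueshark-5`:
host5 = [{"path": "/dev/sd" + c, "model": "INTEL SSDSC2BA400G4", "available": True}
         for c in "bcdefg"]

picked = select_data_devices(host5, model="INTEL SSDSC2BA40", limit=3)
print([d["path"] for d in picked])
# ['/dev/sdb', '/dev/sdc', '/dev/sdd'] -> 3 OSDs expected; 6 were deployed instead
```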

After applying this config, Ceph deployed 6 OSDs per node and also locked the second NVMe:

blueshark-1:~ # ceph osd tree
ID   CLASS  WEIGHT    TYPE NAME             STATUS  REWEIGHT  PRI-AFF
 -1         11.25644  root default
 -3          1.96017      host blueshark-4
  0    ssd   0.41080          osd.0             up   1.00000  1.00000
  1    ssd   0.41080          osd.1             up   1.00000  1.00000
  2    ssd   0.41080          osd.2             up   1.00000  1.00000
 15    ssd   0.36389          osd.15            up   1.00000  1.00000
 16    ssd   0.36389          osd.16            up   1.00000  1.00000
 -9          2.32407      host blueshark-5
  9    ssd   0.41080          osd.9             up   1.00000  1.00000
 10    ssd   0.41080          osd.10          down         0  1.00000
 11    ssd   0.41080          osd.11            up   1.00000  1.00000
 23    ssd   0.36389          osd.23            up   1.00000  1.00000
 24    ssd   0.36389          osd.24            up   1.00000  1.00000
 25    ssd   0.36389          osd.25            up   1.00000  1.00000
 -7          2.32407      host blueshark-6
  6    ssd   0.41080          osd.6             up   1.00000  1.00000
  7    ssd   0.41080          osd.7             up   1.00000  1.00000
  8    ssd   0.41080          osd.8             up   1.00000  1.00000
 20    ssd   0.36389          osd.20            up   1.00000  1.00000
 21    ssd   0.36389          osd.21            up   1.00000  1.00000
 22    ssd   0.36389          osd.22            up   1.00000  1.00000
 -5          2.32407      host blueshark-7
  3    ssd   0.41080          osd.3             up   1.00000  1.00000
  4    ssd   0.41080          osd.4             up   1.00000  1.00000
  5    ssd   0.41080          osd.5             up   1.00000  1.00000
 17    ssd   0.36389          osd.17            up   1.00000  1.00000
 18    ssd   0.36389          osd.18            up   1.00000  1.00000
 19    ssd   0.36389          osd.19            up   1.00000  1.00000
-11          2.32407      host blueshark-8
 12    ssd   0.41080          osd.12            up   1.00000  1.00000
 13    ssd   0.41080          osd.13            up   1.00000  1.00000
 14    ssd   0.41080          osd.14            up   1.00000  1.00000
 26    ssd   0.36389          osd.26            up   1.00000  1.00000
 27    ssd   0.36389          osd.27            up   1.00000  1.00000
 28    ssd   0.36389          osd.28            up   1.00000  1.00000

After deployment, ceph orch device ls shows all matching SSDs consumed and both NVMes rejected:

HOST         PATH          TYPE   SIZE  DEVICE                                  AVAIL  REJECT REASONS
blueshark-5  /dev/sdh      ssd   29.8G  SATA_SSD_67F407560E2400150839           True
blueshark-5  /dev/sdi      ssd   29.8G  SATA_SSD_AF340757042400153013           True
blueshark-5  /dev/nvme0n1  ssd    745G  _CVFT54700076800CGN                     False  LVM detected, locked
blueshark-5  /dev/nvme1n1  ssd    745G  _CVFT6075000C800CGN                     False  LVM detected
blueshark-5  /dev/sda      ssd    223G  Micron_5200_MTFDDAK240TDN_18532045B10D  False  locked
blueshark-5  /dev/sdb      ssd    372G  INTEL SSDSC2BA400G4_BTHV6082036R400NGN  False  Insufficient space (<5GB) on vgs, LVM detected, locked
blueshark-5  /dev/sdc      ssd    372G  INTEL SSDSC2BA400G4_BTHV611201F3400NGN  False  Insufficient space (<5GB) on vgs, LVM detected, locked
blueshark-5  /dev/sdd      ssd    372G  INTEL SSDSC2BA400G4_BTHV608300WC400NGN  False  Insufficient space (<5GB) on vgs, LVM detected, locked
blueshark-5  /dev/sde      ssd    372G  INTEL SSDSC2BA400G4_BTHV6082036F400NGN  False  Insufficient space (<5GB) on vgs, LVM detected, locked
blueshark-5  /dev/sdf      ssd    372G  INTEL SSDSC2BA400G4_BTHV608203EY400NGN  False  Insufficient space (<5GB) on vgs, LVM detected, locked
blueshark-5  /dev/sdg      ssd    372G  INTEL SSDSC2BA400G4_BTHV6082036H400NGN  False  Insufficient space (<5GB) on vgs, LVM detected, locked
blueshark-8  /dev/sdh      ssd   29.8G  SATA_SSD_96D707560D2400162881           True
blueshark-8  /dev/sdi      ssd   29.8G  SATA_SSD_B4500757052400104759           True
blueshark-8  /dev/nvme0n1  ssd    745G  _CVFT6075000W800CGN                     False  LVM detected
blueshark-8  /dev/nvme1n1  ssd    745G  _CVFT6075000A800CGN                     False  LVM detected, locked
blueshark-8  /dev/sda      ssd    223G  Micron_5200_MTFDDAK240TDN_18532045B113  False  locked
blueshark-8  /dev/sdb      ssd    372G  INTEL SSDSC2BA400G4_BTHV608204Y3400NGN  False  Insufficient space (<5GB) on vgs, LVM detected, locked
blueshark-8  /dev/sdc      ssd    372G  INTEL SSDSC2BA400G4_BTHV608203E2400NGN  False  Insufficient space (<5GB) on vgs, LVM detected, locked
blueshark-8  /dev/sdd      ssd    372G  INTEL SSDSC2BA400G4_BTHV608203EK400NGN  False  Insufficient space (<5GB) on vgs, LVM detected, locked
blueshark-8  /dev/sde      ssd    372G  INTEL SSDSC2BA400G4_BTHV608203E1400NGN  False  Insufficient space (<5GB) on vgs, LVM detected, locked
blueshark-8  /dev/sdf      ssd    372G  INTEL SSDSC2BA400G4_BTHV608203E0400NGN  False  Insufficient space (<5GB) on vgs, LVM detected, locked
blueshark-8  /dev/sdg      ssd    372G  INTEL SSDSC2BA400G4_BTHV6082036Q400NGN  False  Insufficient space (<5GB) on vgs, LVM detected, locked

blueshark-5:~ # lsblk
NAME                                                                                                                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                                                                                     8:0    0 223.6G  0 disk
├─sda1                                                                                                                  8:1    0     2M  0 part
├─sda2                                                                                                                  8:2    0    20M  0 part /boot/efi
└─sda3                                                                                                                  8:3    0 223.6G  0 part /
sdb                                                                                                                     8:16   0 372.6G  0 disk
└─ceph--block--53536f82--7c70--4de1--929f--33644d58e570-osd--block--250099c1--6a7a--4cd2--92ef--89e194269a3f          254:0    0 372.6G  0 lvm
sdc                                                                                                                     8:32   0 372.6G  0 disk
└─ceph--block--0bb193bd--b21f--4243--a353--fba046323420-osd--block--b33beffd--2e7d--4b15--8b0d--72c4ddaf5ca2          254:2    0 372.6G  0 lvm
sdd                                                                                                                     8:48   0 372.6G  0 disk
└─ceph--block--26fd1ce9--4f28--4a3b--b62a--92d66a120d19-osd--block--87e7975b--cf88--4efe--8b6e--35fa4ae9124f          254:4    0 372.6G  0 lvm
sde                                                                                                                     8:64   0 372.6G  0 disk
└─ceph--01f195f2--5a5a--4818--b426--05d6e2075360-osd--data--6b7c5808--8564--4cf3--bb16--3b63bd3f342e                  254:6    0 372.6G  0 lvm
sdf                                                                                                                     8:80   0 372.6G  0 disk
└─ceph--12da475d--07c6--4361--b31d--28d4d25a7f04-osd--data--16c70891--d438--4b4f--a5a9--0288cf870274                  254:7    0 372.6G  0 lvm
sdg                                                                                                                     8:96   0 372.6G  0 disk
└─ceph--d0d4de2d--0f40--4ba3--bec5--fee526e9cd97-osd--data--aadff425--5b9b--49bc--9080--c581012e30ae                  254:8    0 372.6G  0 lvm
sdh                                                                                                                     8:112  0  29.8G  0 disk
sdi                                                                                                                     8:128  0  29.8G  0 disk
nvme0n1                                                                                                               259:0    0 745.2G  0 disk
├─ceph--block--dbs--7fcb3b5a--5ff4--4904--82cc--874410c9a825-osd--block--db--1ffef153--1254--4cac--bba5--d04145ebfc27 254:1    0    48G  0 lvm
├─ceph--block--dbs--7fcb3b5a--5ff4--4904--82cc--874410c9a825-osd--block--db--359e629b--0ffa--4c03--8c5d--26f6a1de4644 254:3    0    48G  0 lvm
└─ceph--block--dbs--7fcb3b5a--5ff4--4904--82cc--874410c9a825-osd--block--db--78f1963c--a186--4863--84be--3ffbd424a141 254:5    0    48G  0 lvm
nvme1n1                                                                                                               259:1    0 745.2G  0 disk
blueshark-1:~ # ssh blueshark-5 'cephadm shell -- ceph-volume inventory /dev/nvme1n1'
INFO:cephadm:Inferring fsid e72a8278-a4df-11ea-9304-000e1ec66a02
INFO:cephadm:Using recent ceph image registry.suse.de/suse/sle-15-sp2/update/products/ses7/milestones/containers/ses/7/ceph/ceph:latest
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.

====== Device report /dev/nvme1n1 ======

     path                      /dev/nvme1n1
     available                 False
     rejected reasons          LVM detected
     device id                 _CVFT6075000C800CGN
     removable                 0
     ro                        0
     vendor
     model                     INTEL SSDPEDMD800G4
     sas address
     rotational                0
     scheduler mode            mq-deadline
     human readable size       745.21 GB
blueshark-1:~ # ceph orch ls --service_type osd --export
block_db_size: 51539607552
block_wal_size: null
data_devices:
  all: false
  limit: 3
  model: INTEL SSDSC2BA40
  paths: []
  rotational: null
  size: null
  vendor: null
data_directories: null
db_devices:
  all: false
  limit: null
  model: INTEL SSDPEDMD800G4
  paths: []
  rotational: null
  size: null
  vendor: null
db_slots: null
encrypted: false
journal_devices: null
journal_size: null
objectstore: bluestore
osd_id_claims: {}
osds_per_device: null
placement:
  host_pattern: blueshark-[4-8]
service_id: 3osd_3db
service_name: osd.3osd_3db
service_type: osd
unmanaged: false
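
Note that in the exported spec db_devices has limit: null, so every device matching its model filter is eligible. A hypothetical illustration (the helper name is invented, this is not cephadm code) of why that is consistent with both NVMes ending up locked:

```python
# Hypothetical illustration: with db_devices.limit left at null, every
# device matching the model filter is eligible as a DB device.

def eligible_db_devices(inventory, model, limit=None):
    matched = [d for d in inventory if model in d["model"]]
    return matched if limit is None else matched[:limit]

# The two NVMes present on each node, per the inventory output above:
nvmes = [{"path": "/dev/nvme0n1", "model": "INTEL SSDPEDMD800G4"},
         {"path": "/dev/nvme1n1", "model": "INTEL SSDPEDMD800G4"}]

print([d["path"] for d in eligible_db_devices(nvmes, "INTEL SSDPEDMD800G4")])
# ['/dev/nvme0n1', '/dev/nvme1n1'] -> both match; limit: null imposes no cap
```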

We expected it to deploy 3 OSDs per node on the SSDs, each with a 48G DB on one NVMe.
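
The requested block_db_size is consistent with that expectation; a quick arithmetic check against the sizes in the report (assuming GiB for the DB size and decimal GB for the NVMe capacity):

```python
# Check that the spec's block_db_size matches the 48G DB LVs in lsblk,
# and that three DBs of that size fit on a single NVMe.
GiB = 2 ** 30
block_db_size = 51539607552             # from the OSD spec
print(block_db_size / GiB)              # 48.0 -> the 48G LVs on nvme0n1

nvme_bytes = int(745.21e9)              # "human readable size 745.21 GB" (decimal GB assumed)
print(3 * block_db_size < nvme_bytes)   # True: one NVMe holds all three DBs
```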


Related issues

Related to Orchestrator - Feature #45263: osdspec/drivegroup: not enough filters to define layout Fix Under Review
Related to Orchestrator - Bug #44888: Drivegroup's :limit: isn't working correctly Fix Under Review

History

#1 Updated by Sebastian Wagner about 1 month ago

  • Related to Feature #45263: osdspec/drivegroup: not enough filters to define layout added

#2 Updated by Sebastian Wagner about 1 month ago

  • Related to Bug #44888: Drivegroup's :limit: isn't working correctly added

#3 Updated by Joshua Schmid 3 days ago

  • Status changed from New to Fix Under Review
  • Assignee set to Joshua Schmid
  • Pull request ID set to 35945
