Bug #44749 (closed)

lvm batch does not re-use db devices with free space on VGs

Added by Joshua Schmid about 4 years ago. Updated over 3 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Target version:
% Done: 0%
Source: Development
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

cephadm-dev:~ # vgs
  VG                                                  #PV #LV #SN Attr   VSize  VFree 
  ceph-block-8216011b-f986-4e27-a543-2269d7d9a1b5       1   1   0 wz--n- 25.00g     0 
  ceph-block-dbs-73211778-d508-46d1-af76-24af1b4966e3   1   2   0 wz--n- 50.00g 16.67g               # <-------- Free space
  ceph-block-f11503c2-6acc-4386-9cf1-c33eb1105e5d       1   1   0 wz--n- 25.00g     0 
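
For reference, the reusable space highlighted above can be read programmatically from LVM's JSON report. A minimal sketch (assuming the vgs JSON report format and the ceph-block-dbs VG naming convention shown here; this is not ceph-volume's actual code and vg_free_bytes is a made-up helper name):

import json
import subprocess

def vg_free_bytes(prefix="ceph-block-dbs"):
    """Return {vg_name: free bytes} for VGs whose name starts with prefix."""
    out = subprocess.check_output(
        ["vgs", "--reportformat", "json", "--units", "b", "--nosuffix",
         "-o", "vg_name,vg_free"],
        text=True,
    )
    vgs = json.loads(out)["report"][0]["vg"]
    return {
        vg["vg_name"]: int(float(vg["vg_free"]))
        for vg in vgs
        if vg["vg_name"].startswith(prefix)
    }

print(vg_free_bytes())  # would report the ~16.67g of free space (in bytes) on the db VG above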

cephadm-dev:~ # lsblk
vdb                                                                                                                   254:16   0   25G  0 disk 
vdc                                                                                                                   254:32   0   25G  0 disk 
└─ceph--block--8216011b--f986--4e27--a543--2269d7d9a1b5-osd--block--dcdce2ee--d545--43fa--a768--30a78f908fdb          253:2    0   25G  0 lvm  
vdd                                                                                                                   254:48   0   25G  0 disk 
└─ceph--block--f11503c2--6acc--4386--9cf1--c33eb1105e5d-osd--block--1f55dc9f--c6d1--426d--a519--91710a20a8f8          253:0    0   25G  0 lvm  
vde                                                                                                                   254:64   0   50G  0 disk 
├─ceph--block--dbs--73211778--d508--46d1--af76--24af1b4966e3-osd--block--db--f6f2ab96--db96--4d6d--bcf2--f72c4b8ded90 253:4    0 16.7G  0 lvm  
└─ceph--block--dbs--73211778--d508--46d1--af76--24af1b4966e3-osd--block--db--ead2ccdc--3c28--47b8--bf5a--912836977e24 253:5    0 16.7G  0 lvm  

cephadm-dev:~ # ceph-volume lvm batch /dev/vdb /dev/vdc /dev/vdd --db-devices /dev/vde

Filtered Devices:
  <Raw Device: /dev/vdc>
    Used by ceph already
  <Raw Device: /dev/vdd>
    Used by ceph already

Total OSDs: 1

  Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/vdb                                                24.00 GB        100.0%
--> The above OSDs would be created if the operation continues

I would expect ceph-volume to re-use the 16.67g of VFree on the existing db VG (that free space is one third of the device's total size, so one more DB LV of the same size would fit).

Instead, ceph-volume plans a standalone OSD without a db device. At the very least this should produce a warning that the provided --db-devices drive is being ignored (similar to what happens when you pass --yes).
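
To make the expectation concrete, here is a hypothetical sizing check (made-up names, not the actual batch planner): with three data devices sharing one 50g db device, each DB LV gets roughly a third of the VG, which matches the 16.67g the existing VG still has free.

GiB = 1024 ** 3

def db_slot_fits(vg_size_b, db_slots, vg_free_b):
    """True if the VG's free space can hold one more equally sized DB LV."""
    slot_size_b = vg_size_b // db_slots  # 50g / 3 data devices ~= 16.67g per DB
    return vg_free_b >= slot_size_b

# Values taken from the vgs output above.
print(db_slot_fits(50 * GiB, 3, int(16.67 * GiB)))  # True -> /dev/vde should not be dropped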


Related issues: 1 (0 open, 1 closed)

Related to Orchestrator - Bug #44756: drivegroups: replacement op will ignore existing wal/dbs (Resolved)

Actions #1

Updated by Joshua Schmid about 4 years ago

  • Related to Bug #44756: drivegroups: replacement op will ignore existing wal/dbs added
Actions #2

Updated by Jan Fajerski over 3 years ago

  • Pull request ID set to 34740
Actions #3

Updated by Jan Fajerski over 3 years ago

  • Status changed from New to Resolved