Bug #47831

ceph-volume reject md-devices [rejected reason: Insufficient space <5GB]

Added by Stefan Ludwig over 3 years ago. Updated over 3 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Target version:
% Done:

0%

Source:
Tags:
ceph-volume
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Up to and including Ceph v14.2.7 it was possible to use md devices (mdadm arrays) as OSDs.

-------------------- MD-Device --------------------
$ sudo mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Fri Oct 9 11:35:43 2020
Raid Level : linear
Array Size : 52395008 (49.97 GiB 53.65 GB)
Raid Devices : 1
Total Devices : 1
Persistence : Superblock is persistent

Update Time : Fri Oct  9 11:35:43 2020
State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Rounding : 0K

Consistency Policy : none

Name : dockertest01:md-sdb  (local to host dockertest01)
UUID : 92b86926:786b8c57:9d3bf28d:d8a0b0e4
Events : 0
Number   Major   Minor   RaidDevice State
0 8 16 0 active sync /dev/sdb
-------------------- MD-Device --------------------
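As a side note, mdadm's "Array Size" field is reported in 1 KiB blocks; a quick arithmetic check confirms the two human-readable conversions printed above:

```python
# mdadm reports "Array Size" in 1 KiB blocks; verify the conversions
# shown above: 52395008 blocks -> 49.97 GiB / 53.65 GB.
KIB = 1024
size_bytes = 52395008 * KIB  # 53,652,488,192 bytes

assert round(size_bytes / 10**9, 2) == 53.65  # decimal GB, as printed
assert round(size_bytes / 2**30, 2) == 49.97  # binary GiB, as printed
```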

-------------------- ceph v14.2.7 --------------------
Description: With Ceph 14.2.7, /dev/md127 is usable by ceph-volume

$ sudo docker run --device=/dev/sdb --device=/dev/sdc --device=/dev/md127 --device=/dev/md/md-sdb --name=ceph-14-2-7 -it -d ceph/ceph:v14.2.7-20200819
3047b0977f8a89fa485ea0c1f7b5fd70767834ec0575c7ae6db520b4d3476044
[2020-10-09 11:36:56 user@dockertest01 ~]
$ sudo docker exec -it ceph-14-2-7 bash
[root@3047b0977f8a /]# ceph-volume inventory /dev/sdc

====== Device report /dev/sdc ======

available                 True
rejected reasons
path /dev/sdc
device id
scheduler mode deadline
rotational 1
vendor VMware
human readable size 50.00 GB
sas address
removable 0
model Virtual disk
ro 0
[root@3047b0977f8a /]# ceph-volume inventory /dev/md127

====== Device report /dev/md127 ======

available                 True
rejected reasons
path /dev/md127
device id
scheduler mode
rotational 1
vendor
human readable size 49.97 GB
sas address
removable 0
model
ro 0
[root@3047b0977f8a /]# exit
exit
[2020-10-09 11:37:44 user@dockertest01 ~]
$ sudo docker stop ceph-14-2-7
ceph-14-2-7
[2020-10-09 11:38:02 user@dockertest01 ~]
$
-------------------- ceph v14.2.7 --------------------

-------------------- ceph v14.2.8 and higher --------------------
Description: With Ceph 14.2.8 and higher, /dev/md127 is rejected by ceph-volume

[2020-10-09 11:45:16 user@dockertest01 ~]
$ sudo docker run --device=/dev/sdb --device=/dev/sdc --device=/dev/md127 --device=/dev/md/md-sdb --name=ceph-14-2-8 -it -d ceph/ceph:v14.2.8-20200819
Unable to find image 'ceph/ceph:v14.2.8-20200819' locally
v14.2.8-20200819: Pulling from ceph/ceph
524b0c1e57f8: Already exists
8c71722a08d2: Pull complete
Digest: sha256:9d52d92b8209c912eeeddaad4c52efb14f681da60d2d1a543062e0efc5a35c2e
Status: Downloaded newer image for ceph/ceph:v14.2.8-20200819
de86ebe065712e2302d5187c5435b6a69e323ac9ded3034950cc21e5a0f6bd16
[2020-10-09 11:45:59 user@dockertest01 ~]
$ sudo docker exec -it ceph-14-2-8 bash
[root@de86ebe06571 /]# ceph-volume inventory /dev/sdc

====== Device report /dev/sdc ======

available                 True
rejected reasons
path /dev/sdc
device id
scheduler mode deadline
rotational 1
vendor VMware
human readable size 50.00 GB
sas address
removable 0
model Virtual disk
ro 0
[root@de86ebe06571 /]# ceph-volume inventory /dev/md127

====== Device report /dev/md127 ======

available                 False
rejected reasons Insufficient space (<5GB)
path /dev/md127
device id
[root@de86ebe06571 /]# ceph-volume inventory /dev/sdb

====== Device report /dev/sdb ======

available                 False
rejected reasons locked
path /dev/sdb
device id
scheduler mode deadline
rotational 1
vendor VMware
human readable size 50.00 GB
sas address
removable 0
model Virtual disk
ro 0
[root@de86ebe06571 /]# exit
exit
[2020-10-09 11:47:01 user@dockertest01 ~]
$ sudo docker stop ceph-14-2-8
ceph-14-2-8
[2020-10-09 11:47:15 user@dockertest01 ~]
$
-------------------- ceph v14.2.8 and higher --------------------
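The flip from "available True" on 14.2.7 to "Insufficient space (&lt;5GB)" on 14.2.8, together with the truncated device report, suggests the size that the inventory code resolves for /dev/md127 now comes back as 0. A minimal sketch of the kind of gate that would produce exactly these reason strings (the function name and structure are illustrative; the real check lives in ceph-volume's Device/inventory code):

```python
FIVE_GB = 5 * 1024**3  # 5368709120 bytes

def reject_reasons(size_bytes, locked=False):
    """Illustrative rejection check mirroring the reasons seen above."""
    reasons = []
    if locked:
        reasons.append('locked')
    if size_bytes < FIVE_GB:
        reasons.append('Insufficient space (<5GB)')
    return reasons

# /dev/sdc (50.00 GB) passes; an md device whose size resolves to 0
# would be rejected exactly as in the 14.2.8 report above:
assert reject_reasons(50 * 10**9) == []
assert reject_reasons(0) == ['Insufficient space (<5GB)']
```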
Actions #1

Updated by Kefu Chai over 3 years ago

  • Project changed from crimson to ceph-volume
Actions #2

Updated by Jan Fajerski over 3 years ago

I don't remember a particular change in ceph-volume between these versions, especially not something that shrinks the inventory fields like this.

Can we get lsblk output please? From the host and ideally from inside the containers, for both versions.
I'm wondering if something in the container build changed.

Actions #3

Updated by Stefan Ludwig over 3 years ago

lsblk for md127 on the Docker host
[2020-11-03 14:09:40 user@dockertest01 ~]
 $ lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/md127
NAME="md127" KNAME="md127" MAJ:MIN="9:127" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="50G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="" TYPE="linear" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-11-03 14:09:44 user@dockertest01 ~]
 $ 

lsblk for md127 from inside the Docker container with Ceph version 14.2.7
[2020-11-03 14:09:44 user@dockertest01 ~]
 $ sudo docker exec -it ceph-14-2-7 bash
[root@3047b0977f8a /]# lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/md127
NAME="md127" KNAME="md127" MAJ:MIN="9:127" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="50G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="" TYPE="linear" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[root@3047b0977f8a /]# exit
exit
[2020-11-03 14:10:17 user@dockertest01 ~]
 $ 

lsblk for md127 from inside the Docker container with Ceph version 14.2.8
[2020-11-03 14:10:17 user@dockertest01 ~]
 $ sudo docker exec -it ceph-14-2-8 bash
[root@de86ebe06571 /]# lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/md127
NAME="md127" KNAME="md127" MAJ:MIN="9:127" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="50G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="" TYPE="linear" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[root@de86ebe06571 /]# exit
exit
[2020-11-03 14:10:49 user@dockertest01 ~]
 $
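The outputs above are lsblk's -P (KEY="value" pairs) format, which ceph-volume consumes when building its inventory. A small sketch of parsing one such line into a dict, which makes it easy to diff the host and in-container reports (`parse_lsblk_pairs` is a hypothetical helper, not the upstream API):

```python
import shlex

def parse_lsblk_pairs(line):
    """Turn one `lsblk -P` line of KEY="value" pairs into a dict."""
    return dict(pair.split('=', 1) for pair in shlex.split(line))

sample = ('NAME="md127" KNAME="md127" MAJ:MIN="9:127" FSTYPE="" '
          'SIZE="50G" ROTA="1" TYPE="linear"')
info = parse_lsblk_pairs(sample)
assert info['NAME'] == 'md127'
assert info['SIZE'] == '50G'
assert info['TYPE'] == 'linear'
assert info['FSTYPE'] == ''  # empty values survive as empty strings
```

Note that the pairs are identical on the host and inside both containers, so lsblk itself is not where the size discrepancy originates.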

Actions #5

Updated by Paul Smith over 3 years ago

Confirming that the "Insufficient space (<5GB)" rejection also applies to md devices on newer versions of Ceph:
https://github.com/rook/rook/issues/6715
