Bug #37650 (open): batch creates PVs on bare devices

Added by Jan Fajerski over 5 years ago. Updated over 5 years ago.

Status: New
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

The batch subcommand creates PVs directly on raw block devices.
From a management perspective it is advisable to create a partition and place the PV on top of it instead.

When a raw device is used as a PV, only LVM recognizes that the device is occupied; to most other disk-related tools the device looks empty.

Adding a partition has hardly any downside, only a tiny management overhead.
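
A minimal sketch of the suggested layout, assuming a hypothetical spare disk /dev/sdX and standard sgdisk/LVM tooling (illustrative only, not what ceph-volume does today):

sgdisk --zap-all /dev/sdX                       # hypothetical disk; start from a clean GPT label
sgdisk --new=1:0:0 --typecode=1:8E00 /dev/sdX   # one partition spanning the disk, type code "Linux LVM"
pvcreate /dev/sdX1                              # put the PV on the partition, not on the bare device
fdisk -l /dev/sdX                               # partition-table tools now show the disk as in use

With this layout, fdisk and parted show a partition with the LVM type code; with a PV on the bare device, the same information is only visible through the LVM2_member signature (blkid, lsblk -f, pvs).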

Actions #1

Updated by Alfredo Deza over 5 years ago

I am not sure I follow why it is advisable to create a partition.

The downside is the implementation and management of said partitioning.
It is one more thing that would need handling in all LV management
operations, like zapping, creating VGs and LVs, etc.

I don't know which disk tools report the device as empty; the ones we use in ceph-volume (lsblk and blkid) both report usage correctly:

(tmp) root@node9:/home/vagrant# lsblk /dev/sdi
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdi    8:128  0 10.8G  0 disk
(tmp) root@node9:/home/vagrant# lsblk /dev/nvme0n1
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0    0  100G  0 disk
(tmp) root@node9:/home/vagrant# fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

(tmp) root@node9:/home/vagrant# ceph-volume lvm batch /dev/sdi /dev/nvme0n1

Total OSDs: 1

Solid State VG:
  Targets:   block.db                  Total size: 99.00 GB
  Total LVs: 1                         Size per LV: 99.00 GB
  Devices:   /dev/nvme0n1

  Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdi                                                9.00 GB         100%
  [block.db]      vg: vg/lv                                               99.00GB        100%
--> The above OSDs would be created if the operation continues
--> do you want to proceed? (yes/no) y
,,,,
--> ceph-volume lvm activate successful for osd ID: 16
--> ceph-volume lvm create successful for: ceph-block-875fd2df-20ce-429f-b6b3-b20eefba0bbb/osd-block-c9f4cc59-981f-4e6f-a0e5-6b19e7ff2843
(tmp) root@node9:/home/vagrant# lsblk /dev/nvme0n1
NAME                                                                                                                   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1                                                                                                                259:0    0  100G  0 disk
└─ceph--block--dbs--ec04f0c3--3e78--4ebc--bc66--e94e4cdaf619-osd--block--db--00b88b8c--a593--40dc--bbf3--e0cdcbc64ca0  252:4    0   99G  0 lvm
(tmp) root@node9:/home/vagrant# blkid /dev/nvem0n1
(tmp) root@node9:/home/vagrant# blkid /dev/nvme0n1
/dev/nvme0n1: UUID="9Fh6RW-N6s0-buez-XCFk-eEOK-XHcW-vW5r31" TYPE="LVM2_member" 
(tmp) root@node9:/home/vagrant# blkid /dev/sdi
/dev/sdi: UUID="8vyNrS-gK2N-gEw5-5kLW-frFs-9WYC-XadaQj" TYPE="LVM2_member" 
(tmp) root@node9:/home/vagrant# lsblk /dev/sdi
NAME                                                                                                           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdi                                                                                                              8:128  0 10.8G  0 disk
└─ceph--block--875fd2df--20ce--429f--b6b3--b20eefba0bbb-osd--block--c9f4cc59--981f--4e6f--a0e5--6b19e7ff2843   252:3    0   10G  0 lvm
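
For reference, the signature-based checks that do detect a bare-device PV (as opposed to fdisk, which only reads the partition table) can also be scripted; a small sketch using standard util-linux and LVM tools, with /dev/sdX as a placeholder:

blkid -o value -s TYPE /dev/sdX    # prints "LVM2_member" when a PV label is present
lsblk -no FSTYPE /dev/sdX          # same information via lsblk
wipefs -n /dev/sdX                 # lists on-disk signatures without erasing anything
pvs /dev/sdX                       # LVM's own view of the device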

Actions #2

Updated by Jan Fajerski over 5 years ago

Wow... sorry, I think I hit the wrong button; I didn't mean to edit Alfredo's comment.

OK, I think I fixed it. My apologies.

Actions #3

Updated by Jan Fajerski over 5 years ago

Alfredo Deza wrote:

I don't know which disk tools report the device as empty; the ones we use in ceph-volume (lsblk and blkid) both report usage correctly:

(tmp) root@node9:/home/vagrant# lsblk /dev/sdi
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdi    8:128  0 10.8G  0 disk
(tmp) root@node9:/home/vagrant# lsblk /dev/nvme0n1
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0    0  100G  0 disk
(tmp) root@node9:/home/vagrant# fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

This is exactly the problem: this is indistinguishable from an empty, unused disk.

See also
http://tldp.org/HOWTO/LVM-HOWTO/initdisks.html
and
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/LVM_components.html#multiple_partitions

Though admittedly the second link does not apply here.

I think adding a partition is just a safety measure against fast admin fingers, at very little cost in overhead.
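
To illustrate the "fast fingers" point, a hedged sketch of the quick checks an admin might run before reusing a disk, again with a placeholder /dev/sdX that carries a bare-device PV:

fdisk -l /dev/sdX          # prints size and sector info only; no partitions are listed
parted -s /dev/sdX print   # likewise finds no recognisable disk label
wipefs -n /dev/sdX         # only a signature-aware check like this reveals the LVM2_member label

A partition table changes the first two commands from "looks empty" to "clearly in use", which is the safety margin argued for here.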

Actions #4

Updated by Alfredo Deza over 5 years ago

There is no need to support multiple partitions (as you mentioned), and it does create overhead:

  • another layer of detection would be needed for every operation (it isn't needed today)
  • a device-to-partition-name correlation would always have to be maintained
  • both of the above would increase the surface for problems in code that works well today

Lastly, the initial design of ceph-volume was to leave the creation of LVs up to the user. That idea still holds today for anything ceph-volume does that doesn't quite
match what an admin might want. If an admin would like partitions used underneath the LVs, that is certainly possible with ceph-volume today, with no changes required.
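
A sketch of that admin-managed workflow, with placeholder device and VG/LV names; per the ceph-volume documentation, lvm create accepts a pre-created vg/lv for --data:

sgdisk --new=1:0:0 --typecode=1:8E00 /dev/sdX        # partition the disk (placeholder name)
pvcreate /dev/sdX1                                   # PV on the partition
vgcreate ceph-osd-vg /dev/sdX1                       # VG and LV names are illustrative
lvcreate -l 100%FREE -n osd-data ceph-osd-vg
ceph-volume lvm create --data ceph-osd-vg/osd-data   # hand the pre-created LV to ceph-volume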

Even if the way ceph-volume creates LVs might cause issues for an admin who is not paying close attention, that doesn't warrant adding another layer to LV creation and making an
already complex task more complex.
