Bug #39442

ceph-volume lvm batch wrong partitioning with multiple osd per device and db-devices

Added by Luca Cervigni almost 5 years ago. Updated about 4 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Target version:
% Done:
0%

Source:
Tags:
Backport:
nautilus, mimic
Regression:
No
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When using ceph-volume lvm batch on a single data device with a separate db-device and multiple OSDs per device, the partitioning of the data device is not correct.
The plan shown in the report/confirmation prompt looks correct, but when the command actually runs, the data LV is created with the full disk size instead of the expected fraction (total size / number of OSDs per device).
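
For reference, a quick sketch of the expected sizing arithmetic in plain Python (illustrative only, not ceph-volume code; device sizes are taken from the fdisk output below, and ceph-volume's reported figures are slightly smaller because of extent rounding and VG metadata overhead):

# Expected per-OSD LV sizes for --osds-per-device 8 (illustrative arithmetic only)
DATA_DEV_BYTES = 1600321314816   # /dev/nvme0n1, from fdisk below
DB_DEV_BYTES = 750156374016      # /dev/nvme1n1, from fdisk below
OSDS_PER_DEVICE = 8

data_lv = DATA_DEV_BYTES / OSDS_PER_DEVICE   # size each data LV should get
db_lv = DB_DEV_BYTES / OSDS_PER_DEVICE       # size each block.db LV should get

print(f"data LV: {data_lv / 1024**3:.2f} GiB")  # ~186.3 GiB, in line with the 186.12 GB in the report
print(f"db LV:   {db_lv / 1024**3:.2f} GiB")    # ~87.3 GiB, in line with the 87.12 GB in the report

The actual run, however, allocates the entire data device to the first data LV, as the logs below show.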

Detailed logs:
0) CHECKING DISKS
root@dev-ceph-2:~# fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 1.5 TiB, 1600321314816 bytes, 3125627568 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

root@dev-ceph-2:~# fdisk -l /dev/nvme1n1
Disk /dev/nvme1n1: 698.7 GiB, 750156374016 bytes, 1465149168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

1) RUNNING WITH REPORT:

root@dev-ceph-2:~# ceph-volume lvm batch --db-devices /dev/nvme1n1 --osds-per-device 8 /dev/nvme0n1 --report --format json
{
"changed": true,
"osds": [ {
"block.db": {
"human_readable_size": "87.12 GB",
"path": "vg: vg/lv",
"percentage": 12,
"size": 93549756416
},
"data": {
"human_readable_size": "186.12 GB",
"path": "/dev/nvme0n1",
"percentage": 12,
"size": 199850196992.0
}
}, {
"block.db": {
"human_readable_size": "87.12 GB",
"path": "vg: vg/lv",
"percentage": 12,
"size": 93549756416
},
"data": {
"human_readable_size": "186.12 GB",
"path": "/dev/nvme0n1",
"percentage": 12,
"size": 199850196992.0
}
}, {
"block.db": {
"human_readable_size": "87.12 GB",
"path": "vg: vg/lv",
"percentage": 12,
"size": 93549756416
},
"data": {
"human_readable_size": "186.12 GB",
"path": "/dev/nvme0n1",
"percentage": 12,
"size": 199850196992.0
}
}, {
"block.db": {
"human_readable_size": "87.12 GB",
"path": "vg: vg/lv",
"percentage": 12,
"size": 93549756416
},
"data": {
"human_readable_size": "186.12 GB",
"path": "/dev/nvme0n1",
"percentage": 12,
"size": 199850196992.0
}
}, {
"block.db": {
"human_readable_size": "87.12 GB",
"path": "vg: vg/lv",
"percentage": 12,
"size": 93549756416
},
"data": {
"human_readable_size": "186.12 GB",
"path": "/dev/nvme0n1",
"percentage": 12,
"size": 199850196992.0
}
}, {
"block.db": {
"human_readable_size": "87.12 GB",
"path": "vg: vg/lv",
"percentage": 12,
"size": 93549756416
},
"data": {
"human_readable_size": "186.12 GB",
"path": "/dev/nvme0n1",
"percentage": 12,
"size": 199850196992.0
}
}, {
"block.db": {
"human_readable_size": "87.12 GB",
"path": "vg: vg/lv",
"percentage": 12,
"size": 93549756416
},
"data": {
"human_readable_size": "186.12 GB",
"path": "/dev/nvme0n1",
"percentage": 12,
"size": 199850196992.0
}
}, {
"block.db": {
"human_readable_size": "87.12 GB",
"path": "vg: vg/lv",
"percentage": 12,
"size": 93549756416
},
"data": {
"human_readable_size": "186.12 GB",
"path": "/dev/nvme0n1",
"percentage": 12,
"size": 199850196992.0
}
}
],
"vg": {
"devices": "/dev/nvme1n1",
"human_readable_size": "697.00 GB",
"human_readable_sizes": "87.12 GB",
"parts": 8,
"percentages": 12,
"size": 748398051328,
"sizes": 93549756416
},
"vgs": []
}

2) RUNNING

root@dev-ceph-2:~# ceph-volume lvm batch --db-devices /dev/nvme1n1 --osds-per-device 8 /dev/nvme0n1

Total OSDs: 8

Solid State VG:
Targets: block.db Total size: 697.00 GB
Total LVs: 64 Size per LV: 87.12 GB
Devices: /dev/nvme1n1

Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
[data] /dev/nvme0n1 186.12 GB 12%
[block.db] vg: vg/lv 87.12 GB 12%
----------------------------------------------------------------------------------------------------
[data] /dev/nvme0n1 186.12 GB 12%
[block.db] vg: vg/lv 87.12 GB 12%
----------------------------------------------------------------------------------------------------
[data] /dev/nvme0n1 186.12 GB 12%
[block.db] vg: vg/lv 87.12 GB 12%
----------------------------------------------------------------------------------------------------
[data] /dev/nvme0n1 186.12 GB 12%
[block.db] vg: vg/lv 87.12 GB 12%
----------------------------------------------------------------------------------------------------
[data] /dev/nvme0n1 186.12 GB 12%
[block.db] vg: vg/lv 87.12 GB 12%
----------------------------------------------------------------------------------------------------
[data] /dev/nvme0n1 186.12 GB 12%
[block.db] vg: vg/lv 87.12 GB 12%
----------------------------------------------------------------------------------------------------
[data] /dev/nvme0n1 186.12 GB 12%
[block.db] vg: vg/lv 87.12 GB 12%
----------------------------------------------------------------------------------------------------
[data] /dev/nvme0n1 186.12 GB 12%
[block.db] vg: vg/lv 87.12 GB 12%
--> The above OSDs would be created if the operation continues
--> do you want to proceed? (yes/no) yes
Running command: /sbin/vgcreate -s 1G --force --yes ceph-block-a4afa450-215f-47ea-8907-99d74dc8c024 /dev/nvme0n1
stdout: Physical volume "/dev/nvme0n1" successfully created.
stdout: Volume group "ceph-block-a4afa450-215f-47ea-8907-99d74dc8c024" successfully created
Running command: /sbin/vgcreate -s 1G --force --yes ceph-block-dbs-d061a97f-7533-4d4d-bf03-7ad674a8b85a /dev/nvme1n1
stdout: Physical volume "/dev/nvme1n1" successfully created.
stdout: Volume group "ceph-block-dbs-d061a97f-7533-4d4d-bf03-7ad674a8b85a" successfully created
Running command: /sbin/lvcreate --yes -l 1490 -n osd-block-b6792352-6620-4ff7-a296-7927104ebbef ceph-block-a4afa450-215f-47ea-8907-99d74dc8c024
stdout: Logical volume "osd-block-b6792352-6620-4ff7-a296-7927104ebbef" created.
Running command: /sbin/lvcreate --yes -l 87 -n osd-block-db-94c380ec-b5d3-49a9-a2a7-6ae54ff447cf ceph-block-dbs-d061a97f-7533-4d4d-bf03-7ad674a8b85a
stdout: Logical volume "osd-block-db-94c380ec-b5d3-49a9-a2a7-6ae54ff447cf" created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c5693e03-bdc3-4934-8f0f-5cbd75d92c0f
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
--> Absolute path not found for executable: restorecon
--> Ensure $PATH environment variable contains common executable locations
Running command: /bin/chown -h ceph:ceph /dev/ceph-block-a4afa450-215f-47ea-8907-99d74dc8c024/osd-block-b6792352-6620-4ff7-a296-7927104ebbef
Running command: /bin/chown -R ceph:ceph /dev/dm-1
Running command: /bin/ln -s /dev/ceph-block-a4afa450-215f-47ea-8907-99d74dc8c024/osd-block-b6792352-6620-4ff7-a296-7927104ebbef /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
stderr: 2019-04-23 07:43:58.206 7f1e9371a700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-04-23 07:43:58.206 7f1e9371a700 -1 AuthRegistry(0x7f1e8c042e58) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
stderr: got monmap epoch 1
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQA7wr5cfVtMMhAAmkVBG7iG4l3nvmfxtw651g==
stdout: creating /var/lib/ceph/osd/ceph-0/keyring
added entity osd.0 auth(key=AQA7wr5cfVtMMhAAmkVBG7iG4l3nvmfxtw651g==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: /bin/chown -h ceph:ceph /dev/ceph-block-dbs-d061a97f-7533-4d4d-bf03-7ad674a8b85a/osd-block-db-94c380ec-b5d3-49a9-a2a7-6ae54ff447cf
Running command: /bin/chown -R ceph:ceph /dev/dm-2
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --bluestore-block-db-path /dev/ceph-block-dbs-d061a97f-7533-4d4d-bf03-7ad674a8b85a/osd-block-db-94c380ec-b5d3-49a9-a2a7-6ae54ff447cf --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid c5693e03-bdc3-4934-8f0f-5cbd75d92c0f --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: ceph-block-a4afa450-215f-47ea-8907-99d74dc8c024/osd-block-b6792352-6620-4ff7-a296-7927104ebbef
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-block-a4afa450-215f-47ea-8907-99d74dc8c024/osd-block-b6792352-6620-4ff7-a296-7927104ebbef --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /bin/ln -snf /dev/ceph-block-a4afa450-215f-47ea-8907-99d74dc8c024/osd-block-b6792352-6620-4ff7-a296-7927104ebbef /var/lib/ceph/osd/ceph-0/block
Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /bin/chown -R ceph:ceph /dev/dm-1
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /bin/ln -snf /dev/ceph-block-dbs-d061a97f-7533-4d4d-bf03-7ad674a8b85a/osd-block-db-94c380ec-b5d3-49a9-a2a7-6ae54ff447cf /var/lib/ceph/osd/ceph-0/block.db
Running command: /bin/chown -h ceph:ceph /dev/ceph-block-dbs-d061a97f-7533-4d4d-bf03-7ad674a8b85a/osd-block-db-94c380ec-b5d3-49a9-a2a7-6ae54ff447cf
Running command: /bin/chown -R ceph:ceph /dev/dm-2
Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block.db
Running command: /bin/chown -R ceph:ceph /dev/dm-2
Running command: /bin/systemctl enable ceph-volume@lvm-0-c5693e03-bdc3-4934-8f0f-5cbd75d92c0f
stderr: Created symlink → /lib/systemd/system/ceph-volume@.service.
Running command: /bin/systemctl enable --runtime ceph-osd@0
stderr: Created symlink → /lib/systemd/system/ceph-osd@.service.
Running command: /bin/systemctl start ceph-osd@0
--> ceph-volume lvm activate successful for osd ID: 0
--> ceph-volume lvm create successful for: ceph-block-a4afa450-215f-47ea-8907-99d74dc8c024/osd-block-b6792352-6620-4ff7-a296-7927104ebbef
Running command: /sbin/lvcreate --yes -l 1490 -n osd-block-441531df-159f-4082-8fa7-7dff039f46c3 ceph-block-a4afa450-215f-47ea-8907-99d74dc8c024
stderr: Volume group "ceph-block-a4afa450-215f-47ea-8907-99d74dc8c024" has insufficient free space (0 extents): 1490 required.
--> RuntimeError: command returned non-zero exit status: 5

3) INVESTIGATION
...
Running command: /sbin/lvcreate --yes -l 1490 -n osd-block-b6792352-6620-4ff7-a296-7927104ebbef ceph-block-a4afa450-215f-47ea-8907-99d74dc8c024 < WRONG!!!!!!!!!
stdout: Logical volume "osd-block-b6792352-6620-4ff7-a296-7927104ebbef" created.
Running command: /sbin/lvcreate --yes -l 87 -n osd-block-db-94c380ec-b5d3-49a9-a2a7-6ae54ff447cf ceph-block-dbs-d061a97f-7533-4d4d-bf03-7ad674a8b85a < RIGHT
...
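
The extent counts explain the failure: the data VG is created with 1 GiB extents (vgcreate -s 1G), so /dev/nvme0n1 provides 1490 extents in total. A correct split across 8 OSDs would be 186 extents per data LV (consistent with the 186.12 GB shown in the report), but the data lvcreate is passed -l 1490, i.e. the whole VG, leaving zero free extents for the remaining seven OSDs. A quick sanity check of that arithmetic in plain Python (illustrative only):

GIB = 1024 ** 3
DATA_DEV_BYTES = 1600321314816          # /dev/nvme0n1, from fdisk above

total_extents = DATA_DEV_BYTES // GIB   # 1 GiB extents -> 1490, the value passed to -l
per_osd_extents = total_extents // 8    # what each data LV should get -> 186

print(total_extents, per_osd_extents)   # 1490 186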


Related issues 2 (0 open, 2 closed)

Copied to ceph-volume - Backport #43321: nautilus: ceph-volume lvm batch wrong partitioning with multiple osd per device and db-devices (Resolved)
Copied to ceph-volume - Backport #43322: mimic: ceph-volume lvm batch wrong partitioning with multiple osd per device and db-devices (Resolved)
#1

Updated by Greg Farnum almost 5 years ago

  • Project changed from Ceph to ceph-volume
  • Category deleted (ceph cli)
#2

Updated by Daniel Oliveira over 4 years ago

Is there an easier reproducer for this? Also, does it happen only with 'lvm batch'? I have looked at the code, but I would like an easier/simpler way to test it, just to make sure I am not missing anything.

#3

Updated by Luca Cervigni over 4 years ago

Daniel Oliveira wrote:

Is there an easier reproducer for this? Also, does it happen only with 'lvm batch'? I have looked at the code, but I would like an easier/simpler way to test it, just to make sure I am not missing anything.

Hello Daniel,
It is easily reproducible: just take a machine with two drives and run
ceph-volume lvm batch --db-devices /dev/driveA --osds-per-device 8 /dev/driveB

And you will see the error straight away.

Or do you mean something I did not understand?

#4

Updated by Jan Fajerski over 4 years ago

  • Status changed from New to Pending Backport
  • Backport set to nautilus, mimic
  • Pull request ID set to 32177
#5

Updated by Nathan Cutler over 4 years ago

  • Copied to Backport #43321: nautilus: ceph-volume lvm batch wrong partitioning with multiple osd per device and db-devices added
#6

Updated by Nathan Cutler over 4 years ago

  • Copied to Backport #43322: mimic: ceph-volume lvm batch wrong partitioning with multiple osd per device and db-devices added
#7

Updated by Jan Fajerski about 4 years ago

  • Status changed from Pending Backport to Resolved
