Bug #44017 (closed): ceph-volume lvm list always returns the first LV
Status: Duplicate
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: Yes
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
When ceph-volume lvm list is run against a specific device (either a PV or a VG/LV) and the VG contains multiple LVs, the command always returns the first LV of that VG:
# ceph-volume lvm list test_group/data-lv1

====== osd.0 =======

  [block]       /dev/test_group/data-lv1

      block device              /dev/test_group/data-lv1
      block uuid                YFi4D3-rSXY-8QBc-L5TQ-NC0z-8UNk-JFn9Hz
      cephx lockbox secret
      cluster fsid              d0c7a7fc-2073-4d73-87fe-4c7248092c17
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  951af6a6-65a4-42c3-80da-0cd441df8858
      osd id                    0
      type                      block
      vdo                       0
      devices                   /dev/sdb

  [block]       /dev/sdb

      PARTUUID                  YFi4D3-rSXY-8QBc-L5TQ-NC0z-8UNk-JFn9Hz

# ceph-volume lvm list test_group/data-lv2

====== osd.0 =======

  [block]       /dev/test_group/data-lv1

      block device              /dev/test_group/data-lv1
      block uuid                YFi4D3-rSXY-8QBc-L5TQ-NC0z-8UNk-JFn9Hz
      cephx lockbox secret
      cluster fsid              d0c7a7fc-2073-4d73-87fe-4c7248092c17
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  951af6a6-65a4-42c3-80da-0cd441df8858
      osd id                    0
      type                      block
      vdo                       0
      devices                   /dev/sdb

  [block]       /dev/sdb

      PARTUUID                  YFi4D3-rSXY-8QBc-L5TQ-NC0z-8UNk-JFn9Hz
The full ceph-volume lvm list command (without a device argument) works as expected:
# ceph-volume lvm list

====== osd.0 =======

  [block]       /dev/test_group/data-lv1

      block device              /dev/test_group/data-lv1
      block uuid                YFi4D3-rSXY-8QBc-L5TQ-NC0z-8UNk-JFn9Hz
      cephx lockbox secret
      cluster fsid              d0c7a7fc-2073-4d73-87fe-4c7248092c17
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  951af6a6-65a4-42c3-80da-0cd441df8858
      osd id                    0
      type                      block
      vdo                       0
      devices                   /dev/sdb

====== osd.1 =======

  [db]          /dev/journals/journal1

      block device              /dev/test_group/data-lv2
      block uuid                g2NSpT-GSOr-NKXp-ojDW-ybMd-pWWv-wvghBc
      cephx lockbox secret
      cluster fsid              d0c7a7fc-2073-4d73-87fe-4c7248092c17
      cluster name              ceph
      crush device class        None
      db device                 /dev/journals/journal1
      db uuid                   jgDJqa-udOV-f4BL-Cunp-2GuE-C8Vw-sRusTt
      encrypted                 0
      osd fsid                  333c0e27-6af5-4f52-ac5a-45abf62e84a6
      osd id                    1
      type                      db
      vdo                       0
      devices                   /dev/sdc2

  [block]       /dev/test_group/data-lv2

      block device              /dev/test_group/data-lv2
      block uuid                g2NSpT-GSOr-NKXp-ojDW-ybMd-pWWv-wvghBc
      cephx lockbox secret
      cluster fsid              d0c7a7fc-2073-4d73-87fe-4c7248092c17
      cluster name              ceph
      crush device class        None
      db device                 /dev/journals/journal1
      db uuid                   jgDJqa-udOV-f4BL-Cunp-2GuE-C8Vw-sRusTt
      encrypted                 0
      osd fsid                  333c0e27-6af5-4f52-ac5a-45abf62e84a6
      osd id                    1
      type                      block
      vdo                       0
      devices                   /dev/sdb
The issue is also present when listing a PV:
# ceph-volume lvm list /dev/sdb

====== osd.0 =======

  [block]       /dev/test_group/data-lv1

      block device              /dev/test_group/data-lv1
      block uuid                YFi4D3-rSXY-8QBc-L5TQ-NC0z-8UNk-JFn9Hz
      cephx lockbox secret
      cluster fsid              d0c7a7fc-2073-4d73-87fe-4c7248092c17
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  951af6a6-65a4-42c3-80da-0cd441df8858
      osd id                    0
      type                      block
      vdo                       0
      devices                   /dev/sdb
The LVM configuration:
# pvs
  PV         VG         Fmt  Attr PSize   PFree
  /dev/sdb   test_group lvm2 a--  <50.00g    0
  /dev/sdc2  journals   lvm2 a--  <25.00g    0

# vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  journals     1   1   0 wz--n- <25.00g    0
  test_group   1   2   0 wz--n- <50.00g    0

# lvs
  LV       VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  journal1 journals   -wi-ao---- <25.00g
  data-lv1 test_group -wi-ao---- <25.00g
  data-lv2 test_group -wi-ao----  25.00g
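For reference, a layout like the one above can be rebuilt along these lines (a sketch only: device names are taken from the pvs/vgs/lvs output, the LV split is approximate, and the ceph-volume lvm prepare calls assume an already bootstrapped cluster):

# vgcreate test_group /dev/sdb
# lvcreate -n data-lv1 -l 50%FREE test_group
# lvcreate -n data-lv2 -l 100%FREE test_group
# vgcreate journals /dev/sdc2
# lvcreate -n journal1 -l 100%FREE journals

# ceph-volume lvm prepare --bluestore --data test_group/data-lv1
# ceph-volume lvm prepare --bluestore --data test_group/data-lv2 --block.db journals/journal1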
This regression was introduced by:
- https://github.com/ceph/ceph/commit/17957d9beb42a04b8f180ccb7ba07d43179a41d3
- https://github.com/ceph/ceph/commit/d02bd7dd581a4bd4041eb397fae540a18f16a88b
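Until this is fixed, a possible workaround is to take the full JSON report and filter it client-side (a sketch, assuming jq is available and that the JSON entries carry an lv_path field; adjust the field name if your version reports it differently):

# ceph-volume lvm list --format json | \
    jq '[.[][] | select(.lv_path == "/dev/test_group/data-lv2")]'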
Updated by Jan Fajerski about 4 years ago
- Status changed from New to Duplicate
Duplicates https://tracker.ceph.com/issues/44009