Bug #39388
ceph device: random device identification
Status:
Closed
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
$ ceph device ls
DEVICE                                                                  HOST:DEV   DAEMONS      LIFE EXPECTANCY
...
OCZ-VERTEX460A_A22MQ061448000601                                        node1:sda  osd.1 osd.2
LVM_PV_iXdUC7-YkmA-f9gB-vGGZ-ftan-RTJo-QnaIfH_on_/dev/sdb_18351E45D011  node1:sdb  osd.10
LVM_PV_GPoP3B-YBVF-vrVi-Yjjh-0dIo-pZKn-EKDE05_on_/dev/sdc_WMC6N0E7A7HZ  node1:sdc  osd.1
WDC_WD1003FBYX-01Y7B0_WD-WCAW31911438                                   node1:sdd  osd.2
...
# lsblk
NAME                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                    8:0    0 111,8G  0 disk
├─sda1                 8:1    0   256M  0 part
├─sda2                 8:2    0     1M  0 part
└─sda3                 8:3    0 111,6G  0 part
  ├─node1-ubu        253:0    0     4G  0 lvm  /
  ├─node1-swap       253:1    0   3,7G  0 lvm
  ├─node1-jd         253:4    0     5G  0 lvm
  └─node1-jc         253:6    0     5G  0 lvm
sdb                    8:16   0 894,3G  0 disk
├─n1_micron_b-test   253:3    0    10G  0 lvm
└─n1_micron_b-data   253:5    0 884,3G  0 lvm
sdc                    8:32   0   1,8T  0 disk
└─n1_blue_c-data     253:7    0   1,8T  0 lvm
sdd                    8:48   0 931,5G  0 disk
└─node1-d_data       253:2    0 931,5G  0 lvm
Actually, all these devices are used only through LVM. For some unknown reason their identifiers sometimes appear as 'LVM_PV_....' and sometimes as a normal ID built from vendor/model/serial. I suspect this happens because the same disk can be identified in several different ways, and Ceph picks one of them at random.
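To illustrate the suspected mechanism (this is only a sketch of the hypothesis, not Ceph's actual code): if the device ID is derived from an unordered collection of aliases for a disk (e.g. its /dev/disk/by-id symlinks), taking "whichever comes first" is nondeterministic, while a preference rule plus sorting makes the reported ID stable. All names and alias strings below are invented for the example.

```python
# Hypothetical illustration of nondeterministic vs stable device-ID
# selection. This is NOT Ceph's implementation; it only models the
# behavior described in the report.

def pick_id_naive(aliases: set) -> str:
    # Returns an arbitrary element of an unordered set, so the chosen
    # identifier can vary between runs or hosts.
    return next(iter(aliases))

def pick_id_stable(aliases: set) -> str:
    # Prefer vendor/model/serial style IDs over LVM PV aliases, then
    # sort so the choice is deterministic.
    preferred = [a for a in aliases if not a.startswith("LVM_PV_")]
    return sorted(preferred or aliases)[0]

# Two hypothetical aliases for the same physical disk.
aliases = {
    "VENDOR_MODEL_SERIAL123",
    "LVM_PV_abcdef_on_/dev/sdx_XYZ",
}

print(pick_id_stable(aliases))  # always VENDOR_MODEL_SERIAL123
```

With such a rule, the disks above would always be reported by their vendor/model/serial ID instead of randomly flipping to the LVM_PV_ form.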