Bug #47553

OSDs being deployed bluestore mixed type for SSD disks

Added by Prashant D over 3 years ago. Updated over 3 years ago.

Status:
Duplicate
Priority:
Normal
Assignee:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

The existing OSD nodes were deployed with single-type bluestore OSDs, but on the new OSD nodes the SSD OSDs are deployed as mixed type: block LVs are created on the SSD devices, and separate block.db LVs for each SSD device are created on a new VG (the VG is created from 4 SSD devices).

With reference to the ceph-ansible issue reported at https://github.com/ceph/ceph-ansible/issues/5412, this issue can be caused by the storage driver reporting a wrong value in /sys/block/<device>/queue/rotational.
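
A minimal way to double-check what the kernel is reporting is to read that sysfs file directly for each affected device. The short Python sketch below does only that; it is not ceph-volume code, and the device names passed on the command line are assumptions:

    # sketch: print the kernel's rotational flag for each given device name
    # 1 means rotational (HDD), 0 means non-rotational (SSD/NVMe)
    import sys

    def rotational(device: str) -> bool:
        # device is a bare name such as "sdb"
        with open(f"/sys/block/{device}/queue/rotational") as f:
            return f.read().strip() == "1"

    if __name__ == "__main__":
        for dev in sys.argv[1:]:
            kind = "rotational (HDD)" if rotational(dev) else "non-rotational (SSD)"
            print(f"{dev}: {kind}")

Running it as "python3 check_rotational.py sdb sdc" (hypothetical script name) on an affected node shows the same value ceph-volume would see at that moment.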

Recent findings also indicate that this issue is caused by a race condition in which is_locked_raw_device triggers a udev event, causing the device to be misreported as non-rotational.
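
For context, a hedged sketch of the suspected mechanism is below; it assumes the lock check opens the raw device read-write and exclusively, which may not match the exact ceph-volume implementation:

    # not the actual ceph-volume code; a sketch of the suspected interaction.
    # Closing a block device that was opened with write access produces an
    # inotify close-write event, which udev watches and turns into a "change"
    # uevent; a concurrent read of device properties (such as the rotational
    # flag) made while that uevent is being handled may see inconsistent values.
    import os

    def is_locked_raw_device(dev_path: str) -> bool:
        """Return True if another process holds the raw device open exclusively."""
        try:
            fd = os.open(dev_path, os.O_RDWR | os.O_EXCL)
        except OSError:
            return True
        os.close(fd)  # this close is believed to trigger the udev "change" event
        return False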


Related issues

Duplicates ceph-volume - Bug #47502: ceph-volume lvm batch race condition (Resolved)

History

#1 Updated by Jan Fajerski over 3 years ago

  • Duplicates Bug #47502: ceph-volume lvm batch race condition added

#2 Updated by Jan Fajerski over 3 years ago

  • Status changed from New to Duplicate

I'd be interested in more details about "this issue is caused by race-condition where is_locked_raw_device triggering a udev event causing misreporting of non-rotational."

But let's use the linked issue for this, please.

#3 Updated by Prashant D over 3 years ago

Opening and closing the device in is_locked_raw_device triggers a udev event, which we believe is somehow causing the device to be misreported as non-rotational.

I am good to close this tracker as a duplicate of #47502.
