Bug #42928

ceph-bluestore-tool bluefs-bdev-new-db does not update lv tags

Added by Arthur van Kleef over 1 year ago. Updated about 1 month ago.

Status:
Closed
Priority:
Normal
Assignee:
Target version:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Adding a DB device to an existing OSD with ceph-bluestore-tool bluefs-bdev-new-db does not update the OSD's LV tags with ceph.db_device and ceph.db_uuid.
As a result, the block.db symlink is missing after a restart, which prevents the OSD from starting.
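As an illustration (not from the ticket): whether the db tags are present can be checked against the tag list of the OSD's block LV. In practice the tag string comes from lvs --noheadings -o lv_tags; the function name and example input below are hypothetical.

```shell
# Report whether the ceph.db_device tag is present in an LV tag list.
# (hypothetical helper; real input: lvs --noheadings -o lv_tags <osd-block-lv>)
check_db_tags() {
    tags=$1
    case $tags in
        *ceph.db_device=*) echo "db tags present" ;;
        *)                 echo "db tags missing" ;;
    esac
}

# Tag list as left behind by bluefs-bdev-new-db (hypothetical values):
check_db_tags "ceph.osd_id=0,ceph.type=block"
```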


Related issues

Related to ceph-volume - Bug #49554: add ceph-volume lvm [new-db|new-wal|migrate] commands Pending Backport

History

#1 Updated by Sage Weil over 1 year ago

  • Assignee set to Igor Fedotov

#2 Updated by Stefan Priebe about 1 year ago

Those two tags are missing:
ceph.db_device=/dev/BLOCKDB
ceph.db_uuid=k7Oc64-aShi-Z4Y0-Opov-daQM-Vdue-kJpIAN

#3 Updated by Stefan Priebe about 1 year ago

Is ceph.db_uuid the LVM UUID?

#4 Updated by Stefan Priebe about 1 year ago

OK, I was able to solve this by using the LVM tools to copy all LV tags from the block device to the db device, changing only ceph.type from block to db.

Thanks!
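The workaround described above (copy the block LV's tags to the db LV, rewriting ceph.type) can be sketched roughly as follows. This is not from the ticket: LV paths and tag values are hypothetical, and the function only prints the lvchange commands it would run rather than executing them.

```shell
# Print the lvchange invocations that would copy each tag from the
# block LV to the new db LV, rewriting ceph.type=block to ceph.type=db.
# Ceph LV tags contain no whitespace, so plain word splitting is safe here.
copy_tags() {
    block_tags=$1   # comma-separated, as printed by: lvs --noheadings -o lv_tags <block-lv>
    db_lv=$2        # path of the new db LV
    for tag in $(printf '%s' "$block_tags" | tr ',' ' '); do
        tag=$(printf '%s' "$tag" | sed 's/^ceph\.type=block$/ceph.type=db/')
        printf 'lvchange --addtag %s %s\n' "$tag" "$db_lv"
    done
}

# Hypothetical example input:
copy_tags "ceph.osd_id=0,ceph.type=block" /dev/ceph-db-vg/osd-db
```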

#5 Updated by Simon Pierre Desrosiers 7 months ago

Quick question: what if the bluestore device is not an LVM device? All my devices were created with Luminous using ceph-deploy. I added the db device using bluestore, but the OSD does not start on reboot (I have to reset permissions on the LVM device). And I note that there are no rules in usr/lib/udev/rules.d/ anyway.

#6 Updated by Glen Baars 5 months ago

Is there any way to determine the correct DB->Block arrangement after the tags are lost? I have a host that has hit this bug and I am trying to recover it.

#7 Updated by Glen Baars 5 months ago

To answer my own question: head -n 2 /dev/vg/lv will give the block device uuid.
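As a sketch of the trick above: the start of a BlueStore device holds a short text label whose second line carries the uuid, so it can be read with head and tail. Not from the ticket; the device path, label text, and uuid below are stand-ins for illustration.

```shell
# Read line 2 of the device, which carries the uuid in the
# BlueStore label (per the comment above).
read_block_uuid() {
    head -n 2 "$1" | tail -n 1
}

# Demo against a temp file standing in for /dev/vg/lv (hypothetical contents):
printf 'bluestore block device\n4c02c1b6-aaaa-bbbb-cccc-000000000000\n' > /tmp/fake-lv
read_block_uuid /tmp/fake-lv
```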

#8 Updated by Igor Fedotov 4 months ago

Related patch in ceph-volume: https://github.com/ceph/ceph/pull/39580

#9 Updated by Igor Fedotov about 1 month ago

  • Related to Bug #49554: add ceph-volume lvm [new-db|new-wal|migrate] commands added

#10 Updated by Igor Fedotov about 1 month ago

  • Status changed from New to Closed

Complete bluefs volume migration is now implemented at the ceph-volume level; see https://github.com/ceph/ceph/pull/39580
ceph-bluestore-tool's migrate command implements only the actual copying, and ceph-volume handles the LVM tags on its own.
Hence closing this ticket.
