Bug #42928


ceph-bluestore-tool bluefs-bdev-new-db does not update lv tags

Added by Arthur van Kleef over 4 years ago. Updated almost 3 years ago.

Status:
Closed
Priority:
Normal
% Done:
0%
Regression:
No
Severity:
3 - minor

Description

Adding a DB device to an existing OSD with ceph-bluestore-tool bluefs-bdev-new-db does not update the OSD's LV tags with ceph.db_device and ceph.db_uuid.
This causes the block.db symlink to be missing after a restart, which prevents the OSD from starting.
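
To make the failure mode concrete, here is a minimal sketch of the sequence involved; the OSD id and the VG/LV names are hypothetical:

    # Stop the OSD, then attach a new DB volume to it:
    systemctl stop ceph-osd@0
    ceph-bluestore-tool bluefs-bdev-new-db \
        --path /var/lib/ceph/osd/ceph-0 \
        --dev-target /dev/ceph-db-vg/osd-0-db

    # The tool creates the block.db symlink in the OSD directory, but the
    # LV tags stay untouched, so activation rebuilds the OSD directory
    # without block.db after a reboot:
    lvs --noheadings -o lv_tags ceph-block-vg/osd-0-block
    # ... ceph.type=block ... (no ceph.db_device / ceph.db_uuid)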


Related issues 2 (1 open, 1 closed)

Related to ceph-volume - Bug #49554: add ceph-volume lvm [new-db|new-wal|migrate] commands (Resolved)

Has duplicate bluestore - Bug #62593: ceph-bluestore-tool bluefs-bdev-new-db creates block.db symlink to non-persistent /dev/dm-## instead of the given, stable /dev/VG_NAME/LV_NAME path (New)

Actions #1

Updated by Sage Weil about 4 years ago

  • Assignee set to Igor Fedotov
Actions #2

Updated by Stefan Priebe about 4 years ago

Those two tags are missing:
ceph.db_device=/dev/BLOCKDB
ceph.db_uuid=k7Oc64-aShi-Z4Y0-Opov-daQM-Vdue-kJpIAN

Actions #3

Updated by Stefan Priebe about 4 years ago

Is ceph.db_uuid the LVM uuid?
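
(For reference, the tag that ceph-volume writes should match the LV UUID reported by lvs; a quick check, with a hypothetical LV name:)

    # Compare the ceph.db_uuid tag against the LVM UUID of the DB LV:
    lvs --noheadings -o lv_uuid,lv_tags ceph-db-vg/osd-0-db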

Actions #4

Updated by Stefan Priebe about 4 years ago

OK, I was able to solve this by using the LVM tools to copy all of the LV tags from the block device to the DB device, changing only ceph.type from block to db.

Thanks!
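
A sketch of that workaround, assuming hypothetical LVs ceph-block-vg/osd-0-block (data) and ceph-db-vg/osd-0-db (new DB):

    # Copy every tag from the block LV to the new DB LV:
    for tag in $(lvs --noheadings -o lv_tags ceph-block-vg/osd-0-block | tr ',' ' '); do
        lvchange --addtag "$tag" ceph-db-vg/osd-0-db
    done

    # Fix ceph.type on the DB LV:
    lvchange --deltag "ceph.type=block" ceph-db-vg/osd-0-db
    lvchange --addtag "ceph.type=db" ceph-db-vg/osd-0-db

    # Add the two missing tags to both LVs:
    DB_UUID=$(lvs --noheadings -o lv_uuid ceph-db-vg/osd-0-db | tr -d ' ')
    for lv in ceph-block-vg/osd-0-block ceph-db-vg/osd-0-db; do
        lvchange --addtag "ceph.db_device=/dev/ceph-db-vg/osd-0-db" "$lv"
        lvchange --addtag "ceph.db_uuid=$DB_UUID" "$lv"
    done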

Actions #5

Updated by Simon Pierre Desrosiers over 3 years ago

Quick question: what if the bluestore device is not an LVM device? All my devices were created with Luminous via ceph-deploy. I added the DB device using ceph-bluestore-tool, but the OSD does not start on reboot (I have to reset permissions on the LVM device). And I note that there are no rules in /usr/lib/udev/rules.d/ anyway.
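
As a stopgap (assuming the usual ceph:ceph ownership; the device path and OSD id are examples), the permissions can be restored by hand before starting the OSD:

    # BlueStore opens its devices as the ceph user; restore ownership, then start:
    chown ceph:ceph /dev/ceph-db-vg/osd-0-db
    systemctl start ceph-osd@0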

Actions #6

Updated by Glen Baars over 3 years ago

Is there any way to determine the correct DB -> block arrangement after the tags are lost? I have a host that has hit this bug and I am trying to recover it.

Actions #7

Updated by Glen Baars over 3 years ago

To answer my own question: head -n 2 /dev/vg/lv prints the start of the bluestore label, whose second line is the OSD uuid, so block and DB devices belonging to the same OSD can be matched.
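
A more structured way to read the same information is ceph-bluestore-tool show-label; devices belonging to the same OSD report the same osd_uuid (paths are examples):

    # Dump the bluestore label of each candidate device and match osd_uuid:
    ceph-bluestore-tool show-label --dev /dev/ceph-block-vg/osd-0-block
    ceph-bluestore-tool show-label --dev /dev/ceph-db-vg/osd-0-db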

Actions #8

Updated by Igor Fedotov about 3 years ago

Related patch in ceph-volume: https://github.com/ceph/ceph/pull/39580

Actions #9

Updated by Igor Fedotov almost 3 years ago

  • Related to Bug #49554: add ceph-volume lvm [new-db|new-wal|migrate] commands added
Actions #10

Updated by Igor Fedotov almost 3 years ago

  • Status changed from New to Closed

Complete BlueFS volume migration is now implemented at the ceph-volume level. See https://github.com/ceph/ceph/pull/39580
ceph-bluestore-tool's migrate command implements only the actual data copying; ceph-volume handles the LVM tags on its own.
Hence closing this ticket.
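
For reference, the ceph-volume commands added by that PR look roughly like this; the OSD id, fsid variable and LV names are examples:

    # Attach a new DB volume; ceph-volume sets the LVM tags itself:
    ceph-volume lvm new-db --osd-id 0 --osd-fsid "$OSD_FSID" \
        --target ceph-db-vg/osd-0-db

    # Move BlueFS data from the main device to the new DB device:
    ceph-volume lvm migrate --osd-id 0 --osd-fsid "$OSD_FSID" \
        --from data --target ceph-db-vg/osd-0-db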

Actions #11

Updated by Igor Fedotov 8 months ago

  • Has duplicate Bug #62593: ceph-bluestore-tool bluefs-bdev-new-db creates block.db symlink to non-persistent /dev/dm-## instead of the given, stable /dev/VG_NAME/LV_NAME path added
