Documentation #47839
BlueStore migration per-osd-device-copy section is misleading and incomplete
Status: Closed
Description
https://www.spinics.net/lists/ceph-users/msg62734.html points out shortcomings in this section of the docs:
https://docs.ceph.com/en/latest/rados/operations/bluestore-migration/#per-osd-device-copy
It follows a lengthy section on repaving entire nodes at a time. The intent is to describe a process for migrating one OSD at a time without incurring backfill / network traffic outside of the given node. This is valuable, but as written it needs work.
Notably, it refers to a copy function of ceph_objectstore_tool, which is not found in the sources. I suspect that dup is what's intended.
Moreover, "Tooling not implemented! Not documented!" isn't great. The process using dup should be tested and more fully documented.
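For reference, a hedged sketch of what a dup-based conversion might look like. The OSD id, paths, and the .new target directory are illustrative assumptions, not the documented procedure; verify the flag names against ceph-objectstore-tool --help on your release.

```shell
# Illustrative sketch only -- paths and OSD id are placeholders.
ID=0                                    # hypothetical OSD id

# The source OSD must be stopped before operating on its store.
systemctl stop ceph-osd@$ID

# dup copies all objects and metadata from the source object store
# into a freshly prepared target store (here, a BlueStore one).
ceph-objectstore-tool \
    --data-path /var/lib/ceph/osd/ceph-$ID \
    --target-data-path /var/lib/ceph/osd/ceph-$ID.new \
    --op dup
```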
Updated by Anthony D'Atri over 3 years ago
Feel free to assign it to me unless someone else is ready to run with it. @zdover23?
We should along the way make it clear that an undeployed drive local to each node is required for this approach. I've long wanted to reserve a drive bay per system for just this sort of thing, but I suspect that capacity pressure means that very few people will have this luxury. One could undeploy an existing OSD and let it recover elsewhere; this sort of defeats the whole idea, but at least it's only one OSD recovering across the net vs. all of them.
Since 12-bay chassis are common and 13-bay chassis aren't, change the example to read 11/12 instead of 12/13 to reflect this, and be more explicit about the need to reserve or evacuate a drive bay.
Updated by Chris Dunlop over 3 years ago
- File ceph-migrate-bluestore added
An example of how to use ceph-objectstore-tool dup to convert from FileStore to BlueStore using a device copy: https://github.com/ceph/ceph/blob/master/qa/standalone/osd/osd-dup.sh
See attached ceph-migrate-bluestore, which uses ceph-volume lvm prepare to get the basic structure for the new BlueStore version of the OSD.
"It Works For Me!"™
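A rough outline of this approach, under stated assumptions: the device name, OSD id, and the .new mount path are placeholders, and the attached script (not reproduced here) handles details such as where the new store is mounted and how its LV is tagged. Treat this as a sketch, not the script itself.

```shell
ID=2                         # hypothetical OSD id being migrated
DEV=/dev/sdX                 # spare local device reserved for the copy

# 1. Lay out a new BlueStore OSD on the spare device, reusing the
#    same OSD id so CRUSH placement is unchanged.
ceph-volume lvm prepare --bluestore --data $DEV --osd-id $ID

# 2. Stop the FileStore OSD and flush its journal so the store is
#    quiescent and complete.
systemctl stop ceph-osd@$ID
ceph-osd -i $ID --flush-journal

# 3. Copy the object store contents into the new BlueStore OSD.
#    (The target path is an assumption; the script manages this.)
ceph-objectstore-tool \
    --data-path /var/lib/ceph/osd/ceph-$ID \
    --target-data-path /var/lib/ceph/osd/ceph-$ID.new \
    --op dup

# 4. Activate the new OSD once the copy and LV re-tagging are done.
ceph-volume lvm activate --bluestore $ID $OSD_FSID
```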
Updated by Chris Dunlop over 3 years ago
- File ceph-migrate-bluestore added
Sigh. You need to delete the old LV tags when adding the new tags, otherwise you end up with multiple tags, confusing ceph-volume lvm trigger. It somehow managed to work anyway for my first two OSDs but failed on the third.

Fixed ceph-migrate-bluestore attached.
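The tag cleanup can be done with plain LVM commands; ceph-volume stores its metadata as ceph.* LV tags, and each stale tag must be removed, not just shadowed by a new one. The VG/LV name and tag values below are illustrative assumptions:

```shell
LV=ceph-vg/osd-block-2       # hypothetical VG/LV holding the new OSD

# Inspect the current tags; after a naive re-tagging you may see both
# the old FileStore tags and the new BlueStore ones side by side.
lvs -o lv_tags --noheadings $LV

# Remove the stale tag, then add the replacement; repeat per tag.
# (ceph.type=data is the FileStore convention, ceph.type=block the
# BlueStore one.)
lvchange --deltag "ceph.type=data" $LV
lvchange --addtag "ceph.type=block" $LV
```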
Updated by Anthony D'Atri over 2 years ago
- Status changed from New to Resolved