Bug #14424
closed
disk replacements for the LRC
Added by Dan Mick over 8 years ago.
Updated over 8 years ago.
Description
mira049/2
mira021/6
mira060/5
mira116/7
mira120/7 missing (not sdg, beware)
mira055/2 still emptying
- Project changed from Ceph to sepia
- Category set to Test Node
- Source changed from other to Development
- Tags set to lrc
All disks replaced last night
- Status changed from In Progress to Resolved
To replace a disk (from memory, so it may be missing a step or have the order wrong):
- unweight the OSD if it's still viable (perhaps gradually over time): ceph osd reweight <osdnum> <float-less-than-1.0>
- once data is off, mark OSD out (sets weight to 0): ceph osd out <osdnum>
- remove from crush (takes the OSD name): ceph osd crush remove osd.<osdnum>
- remove from cluster: ceph osd rm <osdnum>
- remove from auth (also takes the OSD name): ceph auth del osd.<osdnum>
The last three steps might be doable with ceph-disk osd destroy, which still isn't part of ceph-deploy AFAICT.
- unmount the OSD data directory; the disk should then be ready to remove
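The steps above can be sketched as a small script. This is a hedged illustration, not a verified procedure: the function name replace_osd, the 0.5 reweight value, and the /var/lib/ceph/osd/ceph-<osdnum> mount path are assumptions, and by default (RUN unset) it only prints the commands instead of executing them.

```shell
#!/bin/sh
# Sketch of the manual OSD-removal procedure described in this ticket.
# RUN defaults to "echo" (dry run); set RUN= (empty) to actually execute.
replace_osd() {
    osd="$1"
    run="${RUN-echo}"

    # Drain data off a still-viable OSD; in practice repeat this with
    # progressively smaller weights over time (0.5 is illustrative)
    $run ceph osd reweight "$osd" 0.5

    # Once data is off, mark the OSD out (reweight goes to 0)
    $run ceph osd out "$osd"

    # Remove from the CRUSH map, the cluster map, and auth.
    # crush remove and auth del take the name "osd.N"; osd rm takes the id.
    $run ceph osd crush remove "osd.$osd"
    $run ceph osd rm "$osd"
    $run ceph auth del "osd.$osd"

    # Unmount the data directory (assumed default path); the disk
    # should then be ready to pull
    $run umount "/var/lib/ceph/osd/ceph-$osd"
}

# Dry-run example: prints each command for OSD 2
replace_osd 2
```

Keeping the dry-run default makes it safe to review the exact commands before running them against a live cluster.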