Bug #14424

disk replacements for the LRC

Added by Dan Mick over 8 years ago. Updated about 8 years ago.

Status:
Resolved
Priority:
Normal
Category:
Test Node
Target version:
-
% Done:
0%

Source:
Development
Tags:
lrc
Regression:
No
Severity:
3 - minor

Description

mira049/2
mira021/6
mira060/5
mira116/7
mira120/7 missing (not sdg, beware)
mira055/2 still emptying

#1

Updated by Dan Mick over 8 years ago

  • Project changed from Ceph to sepia
  • Category set to Test Node
  • Source changed from other to Development
  • Tags set to lrc
#2

Updated by David Galloway over 8 years ago

All disks replaced last night

#3

Updated by Dan Mick about 8 years ago

  • Status changed from In Progress to Resolved

To replace a disk (from memory, may be missing a step or have the wrong order; see the worked example below):

  • unweight the OSD if it's still viable (perhaps gradually over time): ceph osd reweight <osdnum> <float-less-than-1.0>
  • once the data is off, mark the OSD out (sets its reweight to 0): ceph osd out <osdnum>
  • remove it from the CRUSH map: ceph osd crush remove osd.<num>
  • remove it from the cluster: ceph osd rm <osdnum>
  • remove its key from auth: ceph auth del osd.<num>

The last three steps might be doable with ceph-disk osd destroy, which still isn't part of ceph-deploy AFAICT.

  • unmount the OSD data directory

and then the disk should be ready to remove.
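For concreteness, a sketch of the whole sequence under some assumptions: the failing disk backs a hypothetical osd.12, mounted at the conventional /var/lib/ceph/osd/ceph-12. Stopping the daemon (shown here with the systemd service name; adjust for the init system in use) is an extra step not listed above, since ceph osd rm refuses to remove an OSD that is still up:

  ceph osd reweight 12 0.5           # drain gradually; repeat with lower weights
  # watch "ceph -s" until the cluster is clean again, then:
  ceph osd out 12                    # stop mapping data to the OSD (reweight -> 0)
  systemctl stop ceph-osd@12         # assumed step: OSD must be down before "ceph osd rm"
  ceph osd crush remove osd.12       # remove from the CRUSH map
  ceph osd rm 12                     # remove from the cluster
  ceph auth del osd.12               # remove its cephx key
  umount /var/lib/ceph/osd/ceph-12   # unmount the data directory; the drive can then be pulled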
