Feature #3480
ceph-deploy: remove osd
% Done: 0%
History
#1 Updated by Sage Weil about 11 years ago
- Subject changed from ceph-deploy: remove osd, mon to ceph-deploy: remove osd
- Position deleted (1)
- Position set to 1
#2 Updated by Sage Weil almost 11 years ago
- Category set to ceph-deploy
#3 Updated by Sage Weil almost 11 years ago
- Position deleted (21)
- Position set to 20
#4 Updated by Sage Weil over 10 years ago
- Target version set to v0.61 - Cuttlefish
#5 Updated by Neil Levine over 10 years ago
- Status changed from New to 12
#6 Updated by Ian Colle over 10 years ago
- Target version changed from v0.61 - Cuttlefish to v0.62a
#7 Updated by Ian Colle over 10 years ago
- Status changed from 12 to Need More Info
#8 Updated by Ian Colle over 10 years ago
- Story points set to 5.00
#9 Updated by Tamilarasi muthamizhan over 10 years ago
- Assignee set to Dan Mick
#10 Updated by Ian Colle over 10 years ago
- Priority changed from Normal to Urgent
#11 Updated by Dan Mick over 10 years ago
Why is this marked "need more info"?
#12 Updated by Ian Colle over 10 years ago
Because Neil wanted to know more about the user story. Are you working on this?
#13 Updated by Dan Mick over 10 years ago
It's mine, and Urgent, so I assume it's my priority 1...
#14 Updated by Dan Mick over 10 years ago
Talking with Sam, I think we agree this ought to be two commands (names possibly fluid)
ceph-deploy remove-begin osd
ceph-deploy remove-finish osd
The difficulty is that we want to wait for the cluster to rebalance between 'osd out' and 'osd dead'; having only one command sort of implies that it will wait, and that would mean progress messages, which wouldn't be very script-friendly. So the proposal is to do "remove-begin osd", followed (perhaps) by some sort of cluster monitoring to see when the osd appears to be empty, followed by "remove-finish osd". "remove-finish" would block if the cluster appears not to have finished its recovery. (Today, "appears to have finished its recovery" will be defined as "all pgs reporting active+clean"; tomorrow, it might be "there are no PGs left on the specific osd that's shutting down".)
Presumably "remove-begin" will basically do ceph osd out, and
"remove-finish" will kill the OSD, remove it from CRUSH, remove the osd key from the cluster, remove any 'I'm ready' marker temp files from the OSD FS, and osd rm.
#15 Updated by Sage Weil over 10 years ago
How about
ceph-deploy osd drain HOST:DISK
and then
ceph-deploy osd remove HOST:DISK
I think the trick is that we should make 'remove' leave the disk in a state where it won't mount itself, with the osd id freed up but the data and cluster fsid still there. You could then re-add it with
ceph-deploy disk activate HOST:DISK
or nuke with
ceph-deploy disk zap HOST:DISK
?
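For illustration only, the end-to-end flow under this naming would look something like the following (the drain/remove subcommands are the proposal above, not existing ceph-deploy commands, and node1:sdb is a placeholder HOST:DISK):

  ceph-deploy osd drain node1:sdb      # proposed: osd out + wait for the data to migrate off
  ceph-deploy osd remove node1:sdb     # proposed: stop the daemon, free the osd id, keep data + cluster fsid

  # later, either re-add the same disk ...
  ceph-deploy disk activate node1:sdb
  # ... or wipe it for reuse
  ceph-deploy disk zap node1:sdb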
#16 Updated by Sage Weil over 10 years ago
- Target version changed from v0.62a to v0.62b
#17 Updated by Dan Mick over 10 years ago
- Status changed from Need More Info to 12
#18 Updated by Sage Weil over 10 years ago
- Priority changed from Urgent to High
#19 Updated by Ian Colle over 10 years ago
- Target version changed from v0.62b to v0.63
#20 Updated by Ian Colle over 10 years ago
- Target version changed from v0.63 to v0.64
#21 Updated by Ian Colle over 10 years ago
- Target version changed from v0.64 to v0.65
#22 Updated by Ian Colle over 10 years ago
- Target version deleted (v0.65)
#23 Updated by Dan Mick about 10 years ago
- Assignee deleted (Dan Mick)
I'm not working on this, and am not likely to be in the near future. Changing to reflect that reality.
#24 Updated by Neil Levine about 9 years ago
- Project changed from devops to Ceph-deploy
- Category deleted (ceph-deploy)
#25 Updated by Loïc Dachary almost 9 years ago
Associated ceph-disk command: http://tracker.ceph.com/issues/7454
#26 Updated by Patrick Donnelly almost 4 years ago
- Status changed from 12 to New