Bug #44769 (closed)

cephadm doesn't reuse osd_id of 'destroyed' osds

Added by Joshua Schmid about 4 years ago. Updated almost 4 years ago.

Status: Resolved
Priority: Normal
Assignee: Joshua Schmid
Category: -
Target version: v15.2.2
% Done: 0%
Source: Development
Tags:
Backport: octopus
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID: 34346
Crash signature (v1):
Crash signature (v2):

Description

The replacement operation is supposed to work like this:

ceph orch osd rm $id --replace

See https://docs.ceph.com/docs/master/mgr/orchestrator_modules/#osd-replacement

This leaves osd.$id in the crushmap with the "destroyed" flag.
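
For reference, osds left in this state show up under "ceph osd tree destroyed". The snippet below is only an illustrative sketch of that query, not code from this ticket: the helper name destroyed_osd_ids is made up, and the parsing assumes the usual "nodes" layout of the JSON tree output.

# Illustrative sketch only: list the OSD ids currently flagged "destroyed".
import json
import subprocess
from typing import List

def destroyed_osd_ids() -> List[int]:
    # "ceph osd tree destroyed" restricts the tree to destroyed OSDs
    # (plus their ancestor buckets); keep only the nodes of type "osd".
    out = subprocess.check_output(
        ['ceph', 'osd', 'tree', 'destroyed', '--format', 'json'])
    tree = json.loads(out)
    return [n['id'] for n in tree.get('nodes', []) if n.get('type') == 'osd']

if __name__ == '__main__':
    print(destroyed_osd_ids())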

The next osd deployment should pick those up and pass them to ceph-volume:

ceph-volume lvm batch <devices> --osd-ids <ids_to_reuse>
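
As a rough sketch of what that deployment step is expected to do (this is not the change from the pull request; build_batch_command and the example device are made-up names), the destroyed ids would simply be appended to the batch call:

# Illustrative sketch only -- not the code merged for this ticket.
from typing import List

def build_batch_command(devices: List[str], reuse_ids: List[int]) -> List[str]:
    # Base "ceph-volume lvm batch" call over the data devices.
    cmd = ['ceph-volume', 'lvm', 'batch'] + devices
    if reuse_ids:
        # --osd-ids makes ceph-volume reuse the destroyed ids instead of
        # allocating fresh ones, so osd.$id keeps its place in the crushmap.
        cmd += ['--osd-ids'] + [str(i) for i in reuse_ids]
    return cmd

# Example: redeploy on /dev/sdb while reclaiming the destroyed id 3.
print(' '.join(build_batch_command(['/dev/sdb'], [3])))
# -> ceph-volume lvm batch /dev/sdb --osd-ids 3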

Related issues: 1 (0 open, 1 closed)

Related to Dashboard - Feature #38234: mgr/dashboard Replace broken osd (Resolved, Kiefer Chang)

#1

Updated by Kiefer Chang about 4 years ago

#2

Updated by Sebastian Wagner about 4 years ago

  • Description updated (diff)
#3

Updated by Joshua Schmid about 4 years ago

  • Status changed from New to In Progress
  • Assignee set to Joshua Schmid
#4

Updated by Joshua Schmid about 4 years ago

  • Pull request ID set to 34346
#5

Updated by Joshua Schmid about 4 years ago

  • Status changed from In Progress to Fix Under Review
#6

Updated by Kiefer Chang about 4 years ago

  • Status changed from Fix Under Review to Pending Backport
  • Backport set to octopus
#7

Updated by Sebastian Wagner almost 4 years ago

  • Status changed from Pending Backport to Resolved
  • Target version changed from v15.0.0 to v15.2.2