Bug #45792 (closed)

cephadm: zapped OSD gets re-added to the cluster.

Added by David Capone almost 4 years ago. Updated over 3 years ago.

Status: Resolved
Priority: Urgent
Assignee: -
Category: cephadm
Target version: -
% Done: 0%
Source: -
Tags: ux
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

Using version 15.2.1 (Octopus) on a cluster running CentOS 8.

When the cluster was initially deployed, OSDs were created using:

ceph orch apply osd --all-available-devices
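For context, this command creates a persistent OSD service specification rather than performing a one-time action. On releases that support it, the stored spec can be inspected with something like the following (flag support may vary by version):

ceph orch ls osd --export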

Two weeks later, any drive added to the server is immediately provisioned as an OSD by the cephadm process; the orchestrator apparently never stops looking for new drives to provision. This also presents a problem if you attempt to zap a drive after removing it from the cluster with the intent to safely remove the drive physically from the server. The moment the zap completes successfully, cephadm sees the drive as available and reprovisions it as a new OSD. A minimal sequence that reproduces this is sketched below.
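For illustration, a sequence that triggers the behavior might look like this (the host name, OSD id, and device path are hypothetical):

ceph orch osd rm 3                             # remove OSD 3 from the cluster
ceph orch device zap myhost /dev/sdb --force   # wipe the device for safe removal
# as soon as the zap finishes, cephadm sees /dev/sdb as available again
# and re-creates an OSD on it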

If this is by design, some documentation on how to stop this background process when a user no longer wants new drives deployed would be appreciated.
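For what it's worth, the cephadm documentation describes marking the OSD service as unmanaged, which should stop the orchestrator from consuming newly available devices (exact flag support may depend on the release):

ceph orch apply osd --all-available-devices --unmanaged=true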


Related issues 3 (2 open, 1 closed)

Related to Dashboard - Bug #44808: mgr/dashboard: Allow users to specify an unmanaged ServiceSpec when creating OSDs (New)
Related to ceph-volume - Feature #45374: Add support for BLACKLISTED_DEVICES env var parsing (New)
Related to Orchestrator - Bug #45907: cephadm: daemon rm for managed services is completely broken (Resolved)
