Bug #45792

Updated by Sebastian Wagner almost 4 years ago

Using version 15.2.1 with an Octopus cluster running CentOS 8.

When the cluster was initially deployed, OSDs were created using:

<pre>
ceph orch apply osd --all-available-devices
</pre>

Two weeks later, any drive added to the server is still immediately provisioned as an OSD by the cephadm process. The orchestrator apparently never stops looking for new drives to provision. This also presents a problem if you attempt to zap a drive after removing it from the cluster with the intent to physically pull it from the server: the moment the zap completes successfully, cephadm sees the drive as available and reprovisions it as a new OSD.
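For reference, the removal-and-zap sequence described above looks roughly like the following (the OSD id, host name, and device path are placeholders, not from the actual cluster):

<pre>
# evacuate and remove the OSD from the cluster
ceph orch osd rm 12

# wipe the backing device so it can be pulled from the server
ceph orch device zap host01 /dev/sdb --force

# as soon as the zap finishes, cephadm re-detects /dev/sdb as an
# available device and redeploys an OSD on it, because the
# all-available-devices spec is still active
</pre>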

If this is by design, some documentation on how to stop this background process once a user no longer wants new drives deployed would be appreciated.
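If I understand the docs correctly, marking the OSD service spec as unmanaged may be the intended way to pause this automatic provisioning, assuming the flag is supported in this release; something along these lines:

<pre>
# tell cephadm to keep the existing OSDs but stop deploying new ones
ceph orch apply osd --all-available-devices --unmanaged=true
</pre>

If that is indeed the supported approach, having it called out explicitly in the cephadm documentation would resolve this for me.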
