Bug #50472

orchestrator doesn't provide a way to remove an entire cluster

Added by Paul Cuzner 25 days ago. Updated 21 days ago.

Status:
New
Priority:
Low
Assignee:
-
Category:
cephadm
Target version:
% Done:
0%

Source:
Tags:
Backport:
pacific
Regression:
No
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

In prior toolchains like ceph-ansible, purging a cluster and returning a set of hosts to their original state was possible. With the removal of ceph-ansible, this is no longer an option.

The ceph-ansible purge-cluster playbook removed all daemons and optionally zapped each OSD, in a simple and controlled manner.
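
For a cephadm-deployed host, the rough per-host equivalent of that cleanup would be something like the following sketch (the FSID and device path are placeholders, and zapping assumes ceph-volume is available on the host):

    # Remove all daemons and data belonging to this cluster from the local host
    cephadm rm-cluster --fsid <cluster-fsid> --force
    # Optionally wipe a device that backed an OSD (placeholder device path)
    ceph-volume lvm zap --destroy /dev/sdX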

How is this to be done under orch/cephadm?


Related issues

Related to Orchestrator - Feature #50529: cephadm rm-cluster is also not resetting any disks that were used as osds New

History

#1 Updated by Sebastian Wagner 24 days ago

A few problems:

  • cephadm rm-cluster only removes the cluster on the local host
  • mgr/cephadm cannot remove the cluster, as it is intended to be super safe and cannot remove itself.

The workaround is to run cephadm rm-cluster on all hosts using an Ansible playbook or ceph-salt.
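
A minimal sketch of that workaround, assuming passwordless root SSH and hypothetical host names (the host list and FSID are placeholders):

    # FSID of the cluster to purge (e.g. taken from `cephadm ls` output)
    FSID=<cluster-fsid>
    # Run the local-host purge on every host in the cluster
    for host in host1 host2 host3; do
        ssh "root@${host}" "cephadm rm-cluster --fsid ${FSID} --force"
    done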

#2 Updated by Paul Cuzner 24 days ago

Sebastian Wagner wrote:

A few problems:

  • cephadm rm-cluster only removes the cluster on the local host
  • mgr/cephadm cannot remove the cluster, as it is intended to be super safe and cannot remove itself.

The workaround is to run cephadm rm-cluster on all hosts using an Ansible playbook or ceph-salt.

IMO, our answer can NOT be that the user still needs to invest in Ansible or Salt and maintain two separate configuration management solutions for Ceph. rm-cluster is part of a cluster's lifecycle, and we should be providing that capability natively.

#3 Updated by Sebastian Wagner 21 days ago

  • Priority changed from High to Low

Paul Cuzner wrote:

Sebastian Wagner wrote:

A few problems:

  • cephadm rm-cluster only removes the cluster on the local host
  • mgr/cephadm cannot remove the cluster, as it is intended to be super safe and cannot remove itself.

The workaround is to run cephadm rm-cluster on all hosts using an Ansible playbook or ceph-salt.

IMO, our answer can NOT be that the user still needs to invest in Ansible or Salt and maintain two separate configuration management solutions for Ceph. rm-cluster is part of a cluster's lifecycle, and we should be providing that capability natively.

I'm a bit reluctant to introduce SSH capabilities into the cephadm binary just for this use case. There are many downsides:

  • there are much better tools for this job, like Salt or Ansible (see the sketch after this list)
  • without the ability to have proper Python dependencies, we will end up duplicating Python SSH libraries.
  • we're breaking the assumption that the cephadm binary is for managing the local host only.
  • users need to maintain two separate SSH setups anyway.
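
For illustration, a rough ad-hoc Ansible equivalent of the per-host purge, showing how little is needed from an external tool (the inventory file name and the FSID variable are placeholders):

    # Run the local-host purge on every inventoried node with root privileges
    ansible all -i hosts.ini -b -m ansible.builtin.command \
        -a "cephadm rm-cluster --fsid ${FSID} --force"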

#4 Updated by Sebastian Wagner 20 days ago

  • Related to Feature #50529: cephadm rm-cluster is also not resetting any disks that were used as osds added
