Bug #50472 (closed)

orchestrator doesn't provide a way to remove an entire cluster

Added by Paul Cuzner about 3 years ago. Updated almost 3 years ago.

Status: Resolved
Priority: Low
Assignee: -
Category: cephadm
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: pacific
Regression: No
Severity: 2 - major
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

In prior toolchains like ceph-ansible, purging a cluster and returning a set of hosts to their original state was possible. With the removal of ceph-ansible, this is no longer an option.

The ceph-ansible purge-cluster playbook removed all daemons and optionally zapped each OSD in a simple, controlled manner.

How is this to be done under orch/cephadm?
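
For context, a minimal sketch of the per-host removal that cephadm offers today; <fsid> is a placeholder for the cluster id (for example from ceph fsid on a node with an admin keyring):

    # Runs on ONE host and only cleans up that host; there is no cluster-wide equivalent.
    cephadm rm-cluster --fsid <fsid> --force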


Related issues: 1 (0 open, 1 closed)

Related to Orchestrator - Feature #50529: cephadm rm-cluster is also not resetting any disks that were used as osds (Resolved)

Actions #1

Updated by Sebastian Wagner about 3 years ago

A few problems:

  • cephadm rm-cluster only removes the cluster on the local host
  • mgr/cephadm cannot remove the cluster, as it is intended to be super safe and cannot remove itself.

The workaround is to run cephadm rm-cluster on all hosts, using an Ansible playbook or ceph-salt.
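
A rough sketch of that workaround (not part of the original comment), assuming an admin node with passwordless root SSH to every cluster host; the way the host list and fsid are gathered here is illustrative only:

    # Collect the host list and fsid while the cluster is still up.
    FSID=$(ceph fsid)
    HOSTS=$(ceph orch host ls --format json | jq -r '.[].hostname')

    # Stop the orchestrator from redeploying daemons while we tear things down.
    ceph mgr module disable cephadm

    # Remove the cluster from every host, one at a time.
    for host in $HOSTS; do
        ssh root@"$host" cephadm rm-cluster --fsid "$FSID" --force
    done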

Actions #2

Updated by Paul Cuzner about 3 years ago

Sebastian Wagner wrote:

A few problems:

  • cephadm rm-cluster only removes the cluster on the local host
  • mgr/cephadm cannot remove the cluster, as it is intended to be super safe and cannot remove itself.

The workaround is to run cephadm rm-cluster on all hosts, using an Ansible playbook or ceph-salt.

IMO, our answer can NOT be that the user still needs to invest in Ansible or Salt and maintain two separate configuration-management solutions for Ceph. rm-cluster is part of a cluster's lifecycle, and we should be providing that capability natively.

Actions #3

Updated by Sebastian Wagner about 3 years ago

  • Priority changed from High to Low

Paul Cuzner wrote:

Sebastian Wagner wrote:

A few problems:

  • cephadm rm-cluster only removes the cluster on the local host
  • mgr/cephadm cannot remove the cluster, as it is intended to be super safe and cannot remove itself.

The workaround is to run cephadm rm-cluster on all hosts, using an Ansible playbook or ceph-salt.

IMO, our answer can NOT be that the user still needs to invest in Ansible or Salt and maintain two separate configuration-management solutions for Ceph. rm-cluster is part of a cluster's lifecycle, and we should be providing that capability natively.

I'm a bit reluctant to introduce SSH capabilities into the cephadm binary just for this use case. There are quite a few downsides:

  • There are much better tools for this job, like Salt or Ansible (see the sketch after this list).
  • Without the ability to ship proper Python dependencies, we would end up duplicating Python SSH libraries.
  • We would be breaking the assumption that the cephadm binary only manages the local host.
  • Users would need to maintain two separate SSH setups anyway.
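
Not part of the original comment: a rough sketch of that "use an external tool" route as a single Ansible ad-hoc command; the inventory file and fsid are placeholders.

    # Illustrative only: drive cephadm rm-cluster on every host via Ansible ad-hoc mode.
    # hosts.ini and <fsid> stand in for your inventory and cluster id.
    ansible all -i hosts.ini -b -m command -a "cephadm rm-cluster --fsid <fsid> --force"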
Actions #4

Updated by Sebastian Wagner almost 3 years ago

  • Related issue added: Feature #50529 (cephadm rm-cluster is also not resetting any disks that were used as osds)

Actions #5

Updated by Sebastian Wagner almost 3 years ago

  • Status changed from New to Resolved