Ceph Orchestrator - Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
https://tracker.ceph.com/issues/50472?journal_id=193055 (2021-04-22T11:47:50Z, Sebastian Wagner)
<p>A few problems:</p>
<ul>
<li><strong>cephadm rm-cluster</strong> only removes the cluster on the local host</li>
<li><strong>mgr/cephadm</strong> cannot remove the cluster, as it is intended to be super safe and cannot remove itself.</li>
</ul>
<p>Workaround is to run <strong>cephadm rm-cluster</strong> on all hosts using an ansible playbook or using ceph-salt</p>
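<p>As a rough illustration of that workaround (here as a plain SSH loop rather than an ansible playbook or ceph-salt), something like the sketch below would do. The hostnames, root SSH access, and the fsid value are assumptions for the example, not anything cephadm ships:</p>
<pre><code># Sketch only: run cephadm rm-cluster on every cluster host.
# Assumes passwordless SSH as root and that the cephadm binary is present on each host.
FSID="00000000-0000-0000-0000-000000000000"   # placeholder; use your cluster's fsid (see `cephadm ls`)
for host in host1 host2 host3; do             # placeholder hostnames
  ssh "root@${host}" cephadm rm-cluster --force --fsid "${FSID}"
done
</code></pre>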
Orchestrator - Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
https://tracker.ceph.com/issues/50472?journal_id=193115 (2021-04-23T05:50:18Z, Paul Cuzner)
<p>Sebastian Wagner wrote:</p>
<blockquote>
<p>A few problems:</p>
<ul>
<li><strong>cephadm rm-cluster</strong> only removes the cluster on the local host</li>
<li><strong>mgr/cephadm</strong> cannot remove the cluster, as it is intended to be super safe and cannot remove itself.</li>
</ul>
<p>Workaround is to run <strong>cephadm rm-cluster</strong> on all hosts using an ansible playbook or using ceph-salt</p>
</blockquote>
<p>IMO, our answer can NOT be that the user still needs to invest in ansible or salt and maintain two separate configuration management solutions for ceph. rm-cluster is part of a cluster's lifecycle - and we should be providing that capability natively.</p>
Orchestrator - Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
https://tracker.ceph.com/issues/50472?journal_id=193218 (2021-04-26T08:25:45Z, Sebastian Wagner)
<ul><li><strong>Priority</strong> changed from <i>High</i> to <i>Low</i></li></ul><p>Paul Cuzner wrote:</p>
<blockquote>
<p>Sebastian Wagner wrote:</p>
<blockquote>
<p>A few problems:</p>
<ul>
<li><strong>cephadm rm-cluster</strong> only removes the cluster on the local host</li>
<li><strong>mgr/cephadm</strong> cannot remove the cluster, as it is intended to be super safe and cannot remove itself.</li>
</ul>
<p>Workaround is to run <strong>cephadm rm-cluster</strong> on all hosts using an ansible playbook or using ceph-salt</p>
</blockquote>
<p>IMO, our answer can NOT be that the user still needs to invest in ansible or salt and maintain two separate configuration management solutions for ceph. rm-cluster is part of a cluster's lifecycle - and we should be providing that capability natively.</p>
</blockquote>
<p>I'm a bit reluctant to introduce SSH capabilities into the cephadm binary just for this use case here. There are so many downsides:</p>
<ul>
<li>there are much better tools for this job, like Salt or Ansible</li>
<li>without the ability to have proper python dependencies, we will end up duplicating python SSH libraries. </li>
<li>we're breaking the assumption that the cephadm binary is for managing the local host only. </li>
<li>Users need to maintain two separate ssh setups anyway.</li>
</ul>
Orchestrator - Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
https://tracker.ceph.com/issues/50472?journal_id=193260 (2021-04-26T16:38:28Z, Sebastian Wagner)
<ul><li><strong>Related to</strong> <i><a href="https://tracker.ceph.com/issues/50529">Feature #50529</a>: cephadm rm-cluster is also not resetting any disks that were used as osds</i> added</li></ul>
Orchestrator - Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
https://tracker.ceph.com/issues/50472?journal_id=196035 (2021-05-27T10:01:24Z, Sebastian Wagner)
<ul><li><strong>Status</strong> changed from <i>New</i> to <i>Resolved</i></li></ul><p><a class="external" href="https://github.com/ceph/cephadm-ansible#purge">https://github.com/ceph/cephadm-ansible#purge</a></p>
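<p>For orientation only, invoking a purge playbook from that repository looks roughly like the sketch below; the playbook filename, inventory name, and extra variables here are assumptions, so treat the linked README as authoritative:</p>
<pre><code># Assumed invocation of the cephadm-ansible purge playbook; verify the playbook
# name and the required extra variables against the README linked above.
ansible-playbook -i hosts cephadm-purge-cluster.yml -e fsid=00000000-0000-0000-0000-000000000000
</code></pre>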