Bug #53033
cephadm removes MONs during upgrade 15.2.14 -> 16.2.6 which leads to failed quorum and broken cluster
Description
I started upgrading our production clusters from Octopus v15.2.14 to Pacific v16.2.6 via the cephadm orchestrator. The upgrade of the first 4 clusters went fine. Today I started the upgrade of our biggest cluster (1k OSDs / 3 PiB), which led to a broken cluster during the upgrade (no MON quorum).
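For reference, a cephadm upgrade of this kind is typically started and monitored like this (a sketch; the exact invocation used here is not in the report):

    ceph orch upgrade start --ceph-version 16.2.6
    ceph orch upgrade status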
After some analysis we found out that, once the 1st (out of 5) MONs was running Pacific, cephadm decided to remove 2 of the other MONs:
Oct 25 09:01:08 cephadm [INF] Filtered out host pod2-sc2: does not belong to mon public_network (10.27.251.128/25)
Oct 25 09:01:08 cephadm [INF] Filtered out host pod2-sc3: does not belong to mon public_network (10.27.251.128/25)
Oct 25 09:01:08 cephadm [INF] Safe to remove mon.pod2-sc1: new quorum should be ['pod2-mon5', 'pod2-mon6', 'pod2-sc2', 'pod2-sc3'] (from ['pod2-mon5', 'pod2-mon6', 'pod2-sc2', 'pod2-sc3'])
Oct 25 09:01:08 cephadm [INF] Removing monitor pod2-sc1 from monmap...
Oct 25 09:01:14 cephadm [INF] Safe to remove mon.pod2-sc2: new quorum should be ['pod2-mon5', 'pod2-mon6', 'pod2-sc3'] (from ['pod2-mon5', 'pod2-mon6', 'pod2-sc3'])
Oct 25 09:01:14 cephadm [INF] Removing monitor pod2-sc2 from monmap...
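The host filtering above is driven by the configured mon public_network. For anyone hitting the same situation, that setting and the current monmap can be checked like this (a sketch; exact output varies by release):

    ceph config get mon public_network
    ceph mon dump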
The difference between this cluster and the successfully upgraded ones: the others had their MONs in unmanaged state, while this one had its MONs managed.
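Whether cephadm manages a service is visible in the orchestrator's service listing and in the exported service spec (an unmanaged service carries unmanaged: true); as a sketch:

    ceph orch ls --service-type mon
    ceph orch ls --service-type mon --export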
To fix the issue we stopped the upgrade, disabled cephadm, stopped the upgraded MON via docker stop, waited for the other MONs to build quorum and the cluster to recover, resumed cephadm, set the MONs to unmanaged, and restarted the upgrade.
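As a rough sketch of that recovery sequence (the container name is deployment-specific, and "disabled/resumed cephadm" is rendered here as ceph orch pause/resume; the exact commands used were not recorded):

    ceph orch upgrade stop                          # stop the running upgrade
    ceph orch pause                                 # keep cephadm from making further changes
    docker stop <mon-container>                     # stop the already-upgraded MON on its host
    # ... wait for the remaining MONs to form quorum and the cluster to recover ...
    ceph orch resume                                # let cephadm operate again
    ceph orch apply mon --unmanaged                 # take MON placement out of cephadm's hands
    ceph orch upgrade start --ceph-version 16.2.6   # restart the upgrade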
To prevent such dangerous situations I would propose implementing a stop of all cephadm/orchestrator changes while an upgrade is running. It really doesn't make sense to remove monitors or make any other changes while an upgrade is running.
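Until such a safeguard exists, a workaround in the spirit of the fix above would be to set the MON service to unmanaged before starting the upgrade (a sketch based on what worked here; behaviour may differ across releases):

    # Before starting the upgrade:
    ceph orch apply mon --unmanaged   # cephadm will not add/remove MONs during the upgrade
    # ...then start the upgrade as usual (see above).

Once the upgrade has finished, MON management can be re-enabled by applying a mon spec with a placement again.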