Bug #54018

Suspicious behavior when deleting a cluster (by running cephadm rm-cluster)

Added by Redouane Kachach Elhichou about 2 years ago. Updated about 2 years ago.

Status:
Resolved
Priority:
Normal
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
low-hanging-fruit
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

It seems that new files get created even after we have already run cephadm rm-cluster on the node.

Steps to reproduce the issue:
1) Create a new cluster with a few nodes (e.g. 3)
2) Once the cluster is up & running, delete it by running the following from the first node:

   - ceph orch pause
   - cephadm rm-cluster --force --zap-osds --fsid <your_cluster_fs_id>

3) Observe the files remaining after this operation:
   > find / | grep <your_cluster_fs_id> | grep -v cgroup
  [root@ceph-node-00 ~]# find / | grep 36e3c242-7e88-11ec-b7c7-52540039ec3f | grep -v cgroup
/run/ceph/36e3c242-7e88-11ec-b7c7-52540039ec3f
/run/ceph/36e3c242-7e88-11ec-b7c7-52540039ec3f/ceph-osd.2.asok
/run/ceph/36e3c242-7e88-11ec-b7c7-52540039ec3f/ceph-mgr.ceph-node-00.puqqms.asok
/run/cephadm/36e3c242-7e88-11ec-b7c7-52540039ec3f.lock
/tmp/var/lib/ceph/36e3c242-7e88-11ec-b7c7-52540039ec3f

4) Wait a few minutes (3-4)
> re-run the find command and you will see that new files have appeared on the node

[root@ceph-node-00 ~]# find / | grep 36e3c242-7e88-11ec-b7c7-52540039ec3f | grep -v cgroup
/run/ceph/36e3c242-7e88-11ec-b7c7-52540039ec3f
/run/ceph/36e3c242-7e88-11ec-b7c7-52540039ec3f/ceph-osd.2.asok
/run/ceph/36e3c242-7e88-11ec-b7c7-52540039ec3f/ceph-mgr.ceph-node-00.puqqms.asok
/run/cephadm/36e3c242-7e88-11ec-b7c7-52540039ec3f.lock
/tmp/var/lib/ceph/36e3c242-7e88-11ec-b7c7-52540039ec3f
/var/log/ceph/36e3c242-7e88-11ec-b7c7-52540039ec3f
/var/log/ceph/36e3c242-7e88-11ec-b7c7-52540039ec3f/ceph-volume.log
/var/lib/ceph/36e3c242-7e88-11ec-b7c7-52540039ec3f
/var/lib/ceph/36e3c242-7e88-11ec-b7c7-52540039ec3f/selinux
/var/lib/ceph/36e3c242-7e88-11ec-b7c7-52540039ec3f/cephadm.b8155e009332629135b14912b69ce375925c8b28ed28167233f4661dc1bf7b7f

These new dirs/files (e.g. /var/lib/ceph/<fsid>) seem to have been created by cephadm (run from another node).
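
For reference, the reproduction plus the periodic check above can be driven from one small shell sketch. This is only an illustration and not part of the original report: the fsid is the placeholder value from the output above, the ~5-minute watch window is an arbitrary choice, and it assumes it is run as root on the first node.

   #!/bin/bash
   # Sketch: delete the cluster on this node, then watch whether files
   # under the deleted fsid reappear. The FSID value is a placeholder.
   FSID=36e3c242-7e88-11ec-b7c7-52540039ec3f

   ceph orch pause
   cephadm rm-cluster --force --zap-osds --fsid "$FSID"

   # Poll for ~5 minutes; any path printed in a later pass that was not
   # there in the first one was recreated after rm-cluster finished.
   for i in $(seq 1 10); do
       echo "--- check $i ($(date)) ---"
       find / 2>/dev/null | grep "$FSID" | grep -v cgroup
       sleep 30
   done

If files do reappear, looking at /var/log/ceph/cephadm.log and at any remaining ceph-<fsid>@*.service systemd units on the other hosts should show which node's cephadm recreated them, presumably because the mgr is still running there and keeps deploying/refreshing daemons for the not-yet-removed part of the cluster.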


Related issues

Related to Orchestrator - Bug #53010: cephadm rm-cluster does not clean up /var/run/ceph Resolved
Related to Orchestrator - Bug #54142: quincy cephadm-purge-cluster needs work Resolved

History

#1 Updated by Redouane Kachach Elhichou about 2 years ago

  • Assignee set to Redouane Kachach Elhichou
  • Tags set to low-hanging-fruit

#2 Updated by Redouane Kachach Elhichou about 2 years ago

  • Related to Bug #53010: cephadm rm-cluster does not clean up /var/run/ceph added

#3 Updated by Redouane Kachach Elhichou about 2 years ago

  • Status changed from New to In Progress

#4 Updated by Redouane Kachach Elhichou about 2 years ago

  • Status changed from In Progress to Closed

#5 Updated by Redouane Kachach Elhichou about 2 years ago

  • Status changed from Closed to Resolved

#6 Updated by Redouane Kachach Elhichou about 2 years ago

  • Pull request ID set to 44810

#7 Updated by Redouane Kachach Elhichou almost 2 years ago

  • Related to Bug #54142: quincy cephadm-purge-cluster needs work added
