Documentation #61458
Status: Closed
CephFS removal not completely documented (or not working as documented)
Description
After deleting testing CephFS volumes with ceph fs volume rm and then upgrading the cluster from 17.2.5 to 17.2.6, I noticed MDS daemons running for all of the already-removed CephFS volumes. I am fairly sure they were stopped after the ceph fs volume rm, but I cannot say for certain; they are definitely running now.
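For context, the removal was done along these lines (the volume name testvol is a placeholder; the confirmation flag is required):

```shell
# Remove a CephFS volume created with `ceph fs volume create`.
# This is supposed to remove the file system, its pools, and
# (via the orchestrator) its MDS daemons.
ceph fs volume rm testvol --yes-i-really-mean-it
```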
As noted in https://tracker.ceph.com/issues/48597, the mds_join_fs setting still exists for all of the already-removed CephFS volumes. Additionally, the data & metadata pools still exist for all of the already-removed CephFS volumes.
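The leftovers can be checked like this (a sketch; pool names assume the default cephfs.&lt;vol&gt;.data / cephfs.&lt;vol&gt;.meta naming used by ceph fs volume create):

```shell
# Leftover mds_join_fs settings (see tracker #48597):
ceph config dump | grep mds_join_fs

# Leftover data/metadata pools:
ceph osd pool ls | grep cephfs

# MDS daemons still running:
ceph orch ps --daemon-type mds
```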
Currently the only documentation I could find about removing CephFS volumes is at https://docs.ceph.com/en/latest/cephfs/fs-volumes/#fs-volumes, which says to simply run ceph fs volume rm on the volume, which should "try to remove" the running MDS as well.
- The data & metadata pools are not removed.
- The mds_join_fs settings still exist after the volume is removed.
- The MDS are running (again?) after the volume is removed. (related to #2?)
The documentation makes it seem like you just need ceph fs volume rm and you're done, but that does not seem to be the case at all. The documentation should be improved to, at minimum, list what is left over after ceph fs volume rm, and possibly what to do to clean up the leftovers.
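For illustration, a manual cleanup of the leftovers could look roughly like this. This is a sketch, not documented procedure: the service name, daemon id, and pool names below are assumptions based on the default naming for a volume called testvol.

```shell
# Remove the orchestrator-managed MDS service (service name assumed):
ceph orch rm mds.testvol

# Clear the per-daemon mds_join_fs setting (daemon id is an example):
ceph config rm mds.testvol.host1.abcdef mds_join_fs

# Allow pool deletion, then remove the leftover pools
# (pool name must be given twice as a safety measure):
ceph config set mon mon_allow_pool_delete true
ceph osd pool rm cephfs.testvol.data cephfs.testvol.data --yes-i-really-really-mean-it
ceph osd pool rm cephfs.testvol.meta cephfs.testvol.meta --yes-i-really-really-mean-it
```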
Updated by Voja Molani 12 months ago
Hate that I cannot edit the OP. Anyhow: the documentation should explain what "tries to remove MDS daemons" means. Does "try" here mean that it might not be able to remove the MDS daemons? If it fails, what should one do? Or does "try" simply mean that the MDS daemons are removed only if an orchestrator module exists? The choice of words is really not good here; it makes it sound like it is not known or certain whether the MDS should be removed after a ceph fs volume rm.
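If the "try" depends on an orchestrator backend being configured, that can at least be checked directly (a sketch; output depends on the cluster):

```shell
# Is an orchestrator backend (e.g. cephadm) configured at all?
ceph orch status

# Which MDS services does the orchestrator still manage?
ceph orch ls --service-type mds
```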
Updated by Voja Molani 12 months ago
And after re-reading the documentation I noticed the sentence "This removes a file system and its data and metadata pools". But none of my .data / .meta pools are actually deleted... so which one is wrong here, the documentation or cephadm?
Updated by Voja Molani 12 months ago
After creating and removing some more testing CephFS volumes, the .data & .meta pools were removed and the volume was removed from mds_join_fs, so after a bit of pondering I understood what had happened: the previous volumes were deleted with ceph fs rm and not ceph fs volume rm. I'll just close this then.
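The distinction between the two commands, as I now understand it (volume name is a placeholder):

```shell
# `ceph fs rm` removes only the file system entry itself; the pools,
# mds_join_fs settings, and orchestrator-managed MDS services remain:
ceph fs fail testvol    # the file system must be failed/down first
ceph fs rm testvol --yes-i-really-mean-it

# `ceph fs volume rm` additionally removes the pools and, via the
# orchestrator (if one is configured), the MDS daemons:
ceph fs volume rm testvol --yes-i-really-mean-it
```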
Updated by Redouane Kachach Elhichou 10 months ago
- Status changed from New to Closed
Please feel free to reopen if you still think there is an issue with the docs or the code.