Bug #48597
pybind/mgr/cephadm: mds_join_fs not cleaned up
Status:
Closed
% Done:
0%
Source:
Development
Tags:
Backport:
octopus
Regression:
No
Severity:
3 - minor
Reviewed:
Description
After creating a new file system and then deleting it, the mds_join_fs option that cephadm set for the file system's MDS daemons is left behind in the config database:
[root@li1211-173 ~]# ceph config dump | grep mds
mds.a    basic  mds_join_fs  a
mds.all  basic  mds_join_fs  all
mds.b    basic  mds_join_fs  b
mds.bar  basic  mds_join_fs  bar
[root@li1211-173 ~]# ceph fs volume create baz 'label:mdss'
[root@li1211-173 ~]# ceph config dump | grep mds
mds.a    basic  mds_join_fs  a
mds.all  basic  mds_join_fs  all
mds.b    basic  mds_join_fs  b
mds.bar  basic  mds_join_fs  bar
mds.baz  basic  mds_join_fs  baz
[root@li1211-173 ~]# ceph fs volume rm baz --yes-i-really-mean-it
metadata pool: cephfs.baz.meta data pool: ['cephfs.baz.data'] removed
[root@li1211-173 ~]# ceph config dump | grep mds
mds.a    basic  mds_join_fs  a
mds.all  basic  mds_join_fs  all
mds.b    basic  mds_join_fs  b
mds.bar  basic  mds_join_fs  bar
mds.baz  basic  mds_join_fs  baz
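The shape of the fix can be sketched in plain Python. This is an illustrative model only, not the actual cephadm/mgr code: the ConfigStore class and the function names here are hypothetical stand-ins for the Ceph config database and the volume create/remove paths. The point is that removal should drop the mds_join_fs entry that creation added (the equivalent of `ceph config rm mds.<fs> mds_join_fs`), rather than leave a stale row in `ceph config dump`.

```python
# Hypothetical sketch of the missing cleanup; not the real cephadm API.

class ConfigStore:
    """Toy stand-in for the Ceph config database (`ceph config dump`)."""

    def __init__(self):
        self._opts = {}  # (who, option) -> value

    def set(self, who, option, value):
        # Mirrors `ceph config set <who> <option> <value>`.
        self._opts[(who, option)] = value

    def rm(self, who, option):
        # Mirrors `ceph config rm <who> <option>`.
        self._opts.pop((who, option), None)

    def dump(self):
        return dict(self._opts)


def create_volume(cfg, fs_name):
    # On `fs volume create`, cephadm pins the new MDS daemons to the
    # file system via mds_join_fs.
    cfg.set(f"mds.{fs_name}", "mds_join_fs", fs_name)


def remove_volume(cfg, fs_name):
    # The behavior this ticket asks for: clear the option on teardown
    # instead of leaving a stale entry behind.
    cfg.rm(f"mds.{fs_name}", "mds_join_fs")


if __name__ == "__main__":
    cfg = ConfigStore()
    create_volume(cfg, "baz")
    remove_volume(cfg, "baz")
    # After create + remove, no mds.baz entry should remain.
    print(cfg.dump())
```

Until the fix lands, the stale entry can be cleared by hand with `ceph config rm mds.baz mds_join_fs`.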
Tested with the latest octopus release.