Bug #45834
Status: Closed
cephadm: "fs volume create cephfs" overwrites existing placement specification
% Done:
0%
Source:
Development
Tags:
Backport:
pacific
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
mgr/mds_autoscaler, mgr/volumes
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
The orchestrator behaves unexpectedly with apply mds. Consider the following:
I have a ceph cluster running and want cephfs
ceph orch apply mds cephfs 3
gives me 3 MDS daemons but no cephfs. The "cephfs" argument doesn't seem to do anything.
Then I created an fs via
ceph fs volume create cephfs
This reduces the number of MDS daemons to 2.
A subsequent call to
ceph orch apply mds cephfs 3
recreates the initially desired state.
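The sequence above can be collected into one reproduction script. This is a sketch to be run against a disposable test cluster; the `ceph orch ls mds --export` call at the end is an assumption about how to inspect the stored service spec, not part of the original report.

```shell
# Request 3 MDS daemons with an explicit placement count.
ceph orch apply mds cephfs 3

# Creating the volume overwrites the existing placement spec;
# the daemon count drops from 3 to 2.
ceph fs volume create cephfs

# Re-applying restores the originally requested placement.
ceph orch apply mds cephfs 3

# Inspect the placement spec the orchestrator actually stored
# (assumed inspection step, not from the original report).
ceph orch ls mds --export
```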