Documentation #54551
Status: closed
docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work
Description
The documentation section ADDING AN MDS starts with "Create an mds data point /var/lib/ceph/mds/ceph-${id}"
(according to Google, this is the only place on the net where the term "data point" is used in this sense).
This path appears to be wrong:
In a Pacific installation (RHEL 8.5), there are many directories in /var/lib/ceph, but only /var/lib/ceph/<fsid> is populated.
On the running MDS (created by ceph fs volume create <CephFS-Name>), the directory /var/lib/ceph/<fsid>/mds.<CephFS-Name>.<Mon-Name>.ltfjam contains config, keyring and unit.* files.
On a potential additional MDS, such a directory could of course be created manually. The next step in the documentation, ceph auth get-or-create mds.${id}..., would put the keyring into that directory, but none of the other files.
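For reference, this is roughly the manual sequence the Pacific page describes (the exact capability set is quoted from that page, not verified against a cephadm deployment, and ${id} is whatever name one chooses):

```shell
# Manual MDS provisioning as documented on the Pacific page --
# this is the layout that does NOT match a cephadm installation:
mkdir -p /var/lib/ceph/mds/ceph-${id}
ceph auth get-or-create mds.${id} \
    mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' \
    > /var/lib/ceph/mds/ceph-${id}/keyring
```

Note that this produces only the keyring; the config and unit.* files present under /var/lib/ceph/<fsid>/ on a cephadm-managed host are never created.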
Checking what actually gets started on the running MDS (in the said installation) with systemctl status ceph-<fsid>@mds.<CephFS-Name>.<Hostname>.whdfrs, namely /bin/bash /var/lib/ceph/<fsid>/%i/unit.run, shows that it is essentially a podman rm followed by a podman run.
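That observation can be reproduced on the MDS host like this (hostname and the random suffix are examples from my installation; substitute your own):

```shell
# Show the systemd unit that cephadm generated for the containerized MDS
systemctl cat 'ceph-<fsid>@mds.<CephFS-Name>.<Hostname>.whdfrs.service'

# ExecStart points at the per-daemon unit.run script, which is
# essentially a `podman rm` / `podman run` wrapper:
cat /var/lib/ceph/<fsid>/mds.<CephFS-Name>.<Hostname>.whdfrs/unit.run
```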
All of the missing files could of course be scp'd over by hand, but surely that is not how it is supposed to work?
In addition, docs.ceph.com/en/latest/cephfs/add-remove-mds/#, after a lengthy description of provisioning hardware for MDSes, proceeds straight to the faulty ADDING AN MDS section, but does not explain how to
specify that hardware, i.e. the hostname/IP. This information does not seem to be available anywhere on docs.ceph.com.
(Counter-example to the last point: if I wanted to add another monitor to the cluster, I would run ceph orch host add <Hostname> <IP>.)
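What I would have expected the docs to describe for a cephadm deployment is something along these lines (the placement syntax is taken from the orchestrator documentation; host names and the daemon count are placeholders, and I have not verified this is what the page's authors intended):

```shell
# Register the new hardware with the orchestrator
ceph orch host add <Hostname> <IP>

# Let cephadm schedule the MDS daemons, instead of creating
# /var/lib/ceph/... contents by hand
ceph orch apply mds <CephFS-Name> --placement="2 <Hostname1> <Hostname2>"
```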