<p><strong>Ceph Orchestrator - Bug #58145: orch/cephadm: nfs tests failing to mount exports (mount -t nfs 10.0.31.120:/fake /mnt/foo' fails)</strong><br /><a class="external" href="https://tracker.ceph.com/issues/58145">https://tracker.ceph.com/issues/58145</a></p>
<p><strong>Comment by John Mulligan (2022-12-01T17:30:32Z)</strong></p>
<ul><li><strong>File</strong> <a href="/attachments/download/6262/ganesha-log-2022-12-01-01.txt.xz">ganesha-log-2022-12-01-01.txt.xz</a> added</li></ul><p>I attempted to debug this situation locally on a 3-node VM cluster, and I am able to reproduce the case where mount.nfs fails with 'No such file or directory'.</p>
<p>We first investigated the mgr nfs module, but it appears to be functioning as expected: it creates the nfs-ganesha containers and populates the .nfs rados pool with configuration objects. The cluster was deployed using cephadm from the 'main' branch.</p>
<p>Performing the following steps, I reproduced the behavior seen in some of the test runs:</p>
<p>On node 0:<br /><pre>
[ceph@ceph0 ~]$ sudo cephadm shell
Inferring fsid 0a922e44-7195-11ed-8137-525400220000
Inferring config /var/lib/ceph/0a922e44-7195-11ed-8137-525400220000/mon.ceph0/config
Using ceph image with id '1606959841d3' and tag 'main' created on 2022-12-01 16:19:23 +0000 UTC
quay.ceph.io/ceph-ci/ceph@sha256:d7f07a8dc58edb9e4a6e64966a36cf3fd5e52698983308fd7a75d4c18fa957c3
[ceph: root@ceph0 /]# ceph -s
cluster:
id: 0a922e44-7195-11ed-8137-525400220000
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph0,ceph1,ceph2 (age 26m)
mgr: ceph0.bbwifl(active, since 30m), standbys: ceph1.pmrxrr
osd: 6 osds: 6 up (since 26m), 6 in (since 26m)
data:
pools: 1 pools, 1 pgs
objects: 2 objects, 449 KiB
usage: 921 MiB used, 29 GiB / 30 GiB avail
pgs: 1 active+clean
[ceph: root@ceph0 /]# ceph fs volume create fs1
[ceph: root@ceph0 /]# ceph fs volume ls
[
{
"name": "fs1"
}
]
[ceph: root@ceph0 /]# ceph nfs cluster create nfs1
[ceph: root@ceph0 /]# ceph nfs export create cephfs --cluster-id=nfs1 --pseudo-path=/fs1 --path=/ --fsname=fs1
{
"bind": "/fs1",
"fs": "fs1",
"path": "/",
"cluster": "nfs1",
"mode": "RW"
}
[ceph: root@ceph0 /]# ceph orch ps | grep nfs
nfs.nfs1.0.0.ceph0.mhjcwu ceph0 *:2049 running (2m) 2m ago 2m 17.6M - 4.2 1606959841d3 49e76424dbdf
</pre></p>
<p>On node 1:<br /><pre>
[ceph@ceph1 ~]$ sudo mount.nfs ceph0.cx.fdopen.net:/fs1 /mnt
mount.nfs: mounting ceph0.cx.fdopen.net:/fs1 failed, reason given by server: No such file or directory
</pre></p>
<p>Prior to running the commands on node 1, I edited the unit.run file to start ganesha with NIV_DEBUG. The logs are attached as ganesha-log-2022-12-01-01.txt.xz.</p>
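<p>For reference, the unit.run edit mentioned above can be sketched as follows. This is a hedged illustration: the sample ganesha.nfsd invocation and the file path layout (/var/lib/ceph/&lt;fsid&gt;/&lt;daemon&gt;/unit.run) are assumptions about the cephadm-generated file, and the demo runs on a temp copy rather than a live cluster.</p>

```shell
#!/bin/sh
# Illustrative only: the sample invocation below is an assumption, not the
# exact cephadm-generated command line. The idea is to add "-N NIV_DEBUG"
# to the ganesha.nfsd invocation in unit.run, then restart the daemon.
unit_run=$(mktemp)
echo '/usr/bin/ganesha.nfsd -F -L STDERR' > "$unit_run"

# Insert the log-level flag right after the binary name.
sed -i 's|ganesha\.nfsd|ganesha.nfsd -N NIV_DEBUG|' "$unit_run"
cat "$unit_run"
# prints: /usr/bin/ganesha.nfsd -N NIV_DEBUG -F -L STDERR

# On the real host, the file lives at /var/lib/ceph/<fsid>/<daemon>/unit.run
# and the change takes effect after: systemctl restart ceph-<fsid>@<daemon>.service
rm -f "$unit_run"
```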
<p>I examined the config object created by the mgr module:<br /><pre>
[ceph: root@ceph0 /]# rados get --pool=.nfs --namespace=nfs1 export-1 /dev/stdout
EXPORT {
FSAL {
name = "CEPH";
user_id = "nfs.nfs1.1";
filesystem = "fs1";
secret_access_key = "AQBE3ohj6c+aCRAAdOAqF2oNHTYbargyiX2bnw==";
}
export_id = 1;
path = "/";
pseudo = "/fs1";
access_type = "RW";
squash = "none";
attr_expiration_time = 0;
security_label = true;
protocols = 4;
transports = "TCP";
}
</pre></p>
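<p>When checking an export object like this one, one quick cross-check is that the cephx entity named by the FSAL user_id actually exists. A minimal sketch, run here against an inline copy of the object above; the live lookup is the commented ceph auth line:</p>

```shell
#!/bin/sh
# Hypothetical helper: extract the FSAL user_id from an export object so the
# matching cephx entity can be verified. On a cluster the object would come
# from "rados get" as shown above; here it is inlined for illustration.
export_obj='
EXPORT {
    FSAL {
        name = "CEPH";
        user_id = "nfs.nfs1.1";
    }
}'
user_id=$(printf '%s\n' "$export_obj" | sed -n 's/.*user_id = "\(.*\)";/\1/p')
echo "$user_id"
# prints: nfs.nfs1.1

# Live check, inside "cephadm shell":
#   ceph auth get "client.$user_id"
```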
<p>Nothing jumped out at me as wrong. Logs showed:<br /><pre>
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] reclaim_reset :FSAL :DEBUG :Issuing reclaim reset for ganesha-nfs.nfs1.0-0001
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] create_export :FSAL :DEBUG :Ceph module export /.
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] dirmap_lru_init :NFS READDIR :DEBUG :Skipping dirmap Ceph/MDC
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] export_commit_common :CONFIG :WARN :A protocol is specified for export 1 that is not enabled in NFS_CORE_PARAM, fixing up
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] export_commit_common :CONFIG :INFO :Export 1 created at pseudo (/fs1) with path (/) and tag ((null)) perms (options=020031e0/077801e7 no_root_squash, RWrw, ---, ---, TCP, ----, , , , , expire= 0)
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] export_commit_common :CONFIG :INFO :Export 1 has 0 defined clients
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] build_default_root :EXPORT :DEBUG :Allocating Pseudo root export
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] pseudofs_create_export :FSAL :DEBUG :Created exp 0x15bb200 - /
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] dirmap_lru_init :NFS READDIR :DEBUG :Skipping dirmap PSEUDO/MDC
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] build_default_root :CONFIG :INFO :Export 0 (/) successfully created
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] ReadExports :EXPORT :INFO :Export 1 pseudo (/fs1) with path (/) and tag ((null)) perms (options=020031e0/077801e7 no_root_squash, RWrw, ---, ---, TCP, ----, , , , , expire= 0)
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] ReadExports :EXPORT :INFO :Export 0 pseudo (/) with path (/) and tag ((null)) perms (options=0221f080/0771f3e7 no_root_squash, --r-, -4-, ---, TCP, ----, , , , , , none, sys, krb5, krb5i, krb5p)
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] gsh_dbus_pkginit :DBUS :DEBUG :init
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] nfs_Init :NFS STARTUP :DEBUG :Now building NFSv4 ACL cache
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] nfs4_acls_init :NFS4 ACL :DEBUG :Initialize NFSv4 ACLs
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] nfs4_acls_init :NFS4 ACL :DEBUG :sizeof(fsal_ace_t)=20, sizeof(fsal_acl_t)=80
</pre></p>
<p>And at the connection attempt from the client:<br /><pre>
01/12/2022 17:08:24 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[svc_4] cih_get_by_key_latch :HT CACHE :DEBUG :cih cache hit slot 18744
01/12/2022 17:08:24 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[svc_4] complete_op :NFS4 :DEBUG :Status of OP_PUTFH in position 1 = NFS4_OK, op response size is 4 total response size is 84
01/12/2022 17:08:24 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[svc_4] process_one_op :NFS4 :DEBUG :Request 2: opcode 15 is OP_LOOKUP
01/12/2022 17:08:24 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[svc_4] nfs4_op_lookup :NFS4 :DEBUG :name=fs1
01/12/2022 17:08:24 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[svc_4] mdc_lookup :NFS READDIR :DEBUG :Cache Miss detected for fs1
01/12/2022 17:08:24 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[svc_4] mdc_lookup_uncached :NFS READDIR :DEBUG :lookup fs1 failed with No such file or directory
01/12/2022 17:08:24 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[svc_4] complete_op :NFS4 :DEBUG :Status of OP_LOOKUP in position 2 = NFS4ERR_NOENT, op response size is 4 total response size is 92
01/12/2022 17:08:24 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[svc_4] complete_nfs4_compound :NFS4 :DEBUG :End status = NFS4ERR_NOENT lastindex = 3
</pre></p>
<p>Versions:<br /><pre>
[ceph@ceph0 ~]$ sudo podman exec -it ceph-0a922e44-7195-11ed-8137-525400220000-nfs-nfs1-0-0-ceph0-mhjcwu bash
[root@ceph0 /]# ganesha.nfsd -v
NFS-Ganesha Release = V4.2
</pre></p>
<pre>
[ceph@ceph0 ~]$ sudo podman image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.ceph.io/ceph-ci/ceph main 1606959841d3 57 minutes ago 1.4 GB
quay.io/ceph/ceph-grafana 8.3.5 dad864ee21e9 7 months ago 571 MB
quay.io/prometheus/prometheus v2.33.4 514e6a882f6e 9 months ago 205 MB
quay.io/prometheus/node-exporter v1.3.1 1dbe0e931976 12 months ago 22.3 MB
quay.io/prometheus/alertmanager v0.23.0 ba2b418f427c 15 months ago 58.9 MB
</pre>
<p>I also tried the same using a cephfs subvolume path, after getting the path with ceph fs subvolume getpath. The mount.nfs result was the same: "reason given by server: No such file or directory". I can provide logs for this as well if requested.</p>
<p><strong>Comment by Laura Flores (2022-12-02T20:57:01Z)</strong></p>
<ul><li><strong>Related to</strong> <i><a class="issue tracker-1 status-1 priority-4 priority-default" href="/issues/58096">Bug #58096</a>: test_cluster_set_reset_user_config: NFS mount fails due to missing ceph directory</i> added</li></ul>
<p><strong>Comment by Ramana Raja (2022-12-06T04:20:48Z)</strong></p>
<ul></ul><p>I went through <a class="external" href="https://tracker.ceph.com/issues/58145#note-1">https://tracker.ceph.com/issues/58145#note-1</a> and the ganesha log. I don't see anything obviously incorrect with the setup.</p>
<p>The following warning in the ganesha log looks interesting; I've not seen this one before:<br /><pre>
01/12/2022 17:06:13 : epoch 6388df02 : ceph0 : ganesha.nfsd-7[main] export_commit_common :CONFIG :WARN :A protocol is specified for export 1 that is not enabled in NFS_CORE_PARAM, fixing up
</pre></p>
<p>Can you share the actual ganesha config file, "/etc/ganesha/ganesha.conf"?</p>
<p>Can you temporarily increase the log level from 'NIV_DEBUG' to 'NIV_FULL_DEBUG' for all components of the NFS-Ganesha server, using<br /><a class="external" href="https://docs.ceph.com/en/quincy/mgr/nfs/#set-customized-nfs-ganesha-configuration">https://docs.ceph.com/en/quincy/mgr/nfs/#set-customized-nfs-ganesha-configuration</a> (see example use case 1)?<br />Maybe this will provide more hints.</p>
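<p>The customized-config route from that docs page can be sketched like this (a hedged example: the cluster id nfs1 matches this reproducer, the LOG block follows standard ganesha log configuration, and the ceph commands are left commented since they need a live cluster):</p>

```shell
#!/bin/sh
# Sketch: raise all NFS-Ganesha components to FULL_DEBUG via the mgr nfs
# module's user-defined configuration (see the docs link above, use case 1).
cat > /tmp/nfs-debug.conf <<'EOF'
LOG {
    COMPONENTS {
        ALL = FULL_DEBUG;
    }
}
EOF
cat /tmp/nfs-debug.conf

# Apply inside "cephadm shell":
#   ceph nfs cluster config set nfs1 -i /tmp/nfs-debug.conf
# Revert once done collecting logs:
#   ceph nfs cluster config reset nfs1
```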
<p>I suggest reaching out to Frank Filz from the NFS-Ganesha team to look at this tracker ticket.</p>
<p><strong>Comment by John Mulligan (2022-12-06T15:04:59Z)</strong></p>
<ul></ul><p>Here's the /etc/ganesha/ganesha.conf from the original run.<br /><pre>
# This file is generated by cephadm.
NFS_CORE_PARAM {
Enable_NLM = false;
Enable_RQUOTA = false;
Protocols = 4;
NFS_Port = 2049;
}
NFSv4 {
Delegations = false;
RecoveryBackend = 'rados_cluster';
Minor_Versions = 1, 2;
}
RADOS_KV {
UserId = "nfs.nfs1.0.0.ceph0.mhjcwu";
nodeid = "nfs.nfs1.0";
pool = ".nfs";
namespace = "nfs1";
}
RADOS_URLS {
UserId = "nfs.nfs1.0.0.ceph0.mhjcwu";
watch_url = "rados://.nfs/nfs1/conf-nfs.nfs1";
}
RGW {
cluster = "ceph";
name = "client.nfs.nfs1.0.0.ceph0.mhjcwu-rgw";
}
%url rados://.nfs/nfs1/conf-nfs.nfs1
</pre></p>
<p>Next, I'll rerun my test setup with the NIV_FULL_DEBUG log level. I should have it soon.</p>
<p><strong>Comment by John Mulligan (2022-12-06T15:46:18Z)</strong></p>
<ul><li><strong>File</strong> <a href="/attachments/download/6280/ganesha-log-2022-12-06-01.txt.xz">ganesha-log-2022-12-06-01.txt.xz</a> added</li></ul><p>New log attached, NIV_FULL_DEBUG level. Same procedure (in short):<br /><pre>
[ceph@ceph0 ~]$ sudo cephadm shell
Inferring fsid 6a8b74bc-7579-11ed-b256-525400220000
Inferring config /var/lib/ceph/6a8b74bc-7579-11ed-b256-525400220000/mon.ceph0/config
Using ceph image with id '5700871d3e5a' and tag 'main' created on 2022-12-06 09:14:13 +0000 UTC
quay.ceph.io/ceph-ci/ceph@sha256:94a1dbe7c4ccbe6101d5f771c6763c553aa7da783bd0819147f44dd9617a4bfd
[ceph: root@ceph0 /]# ceph -s
cluster:
id: 6a8b74bc-7579-11ed-b256-525400220000
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph0,ceph2,ceph1 (age 9m)
mgr: ceph0.egfash(active, since 9m), standbys: ceph2.xqmxcg
osd: 6 osds: 6 up (since 8m), 6 in (since 8m)
data:
pools: 1 pools, 1 pgs
objects: 2 objects, 449 KiB
usage: 920 MiB used, 29 GiB / 30 GiB avail
pgs: 1 active+clean
[ceph: root@ceph0 /]# ceph fs volume create fs1
[ceph: root@ceph0 /]# ceph fs volume ls
[
{
"name": "fs1"
}
]
[ceph: root@ceph0 /]# ceph nfs cluster create nfs1
[ceph: root@ceph0 /]# ceph nfs export create cephfs --cluster-id=nfs1 --pseudo-path=/fs1 --path=/ --fsname=fs1
{
"bind": "/fs1",
"fs": "fs1",
"path": "/",
"cluster": "nfs1",
"mode": "RW"
}
[ceph: root@ceph0 /]# ceph orch ps | grep nfs
nfs.nfs1.0.0.ceph0.fqfjvf ceph0 *:2049 running (21s) 17s ago 21s 17.5M - 4.2 5700871d3e5a 700d096e6a4d
</pre></p>
<pre>
[ceph@ceph1 ~]$ sudo mount.nfs ceph0.cx.fdopen.net:/fs1 /mnt
mount.nfs: mounting ceph0.cx.fdopen.net:/fs1 failed, reason given by server: No such file or directory
</pre>
<p><strong>Comment by John Mulligan (2022-12-06T15:48:41Z)</strong></p>
<ul></ul><pre>
$ sudo podman exec -it ceph-6a8b74bc-7579-11ed-b256-525400220000-nfs-nfs1-0-0-ceph0-fqfjvf cat /etc/ganesha/ganesha.conf ;echo
# This file is generated by cephadm.
NFS_CORE_PARAM {
Enable_NLM = false;
Enable_RQUOTA = false;
Protocols = 4;
NFS_Port = 2049;
}
NFSv4 {
Delegations = false;
RecoveryBackend = 'rados_cluster';
Minor_Versions = 1, 2;
}
RADOS_KV {
UserId = "nfs.nfs1.0.0.ceph0.fqfjvf";
nodeid = "nfs.nfs1.0";
pool = ".nfs";
namespace = "nfs1";
}
RADOS_URLS {
UserId = "nfs.nfs1.0.0.ceph0.fqfjvf";
watch_url = "rados://.nfs/nfs1/conf-nfs.nfs1";
}
RGW {
cluster = "ceph";
name = "client.nfs.nfs1.0.0.ceph0.fqfjvf-rgw";
}
%url rados://.nfs/nfs1/conf-nfs.nfs1
</pre>
<p><strong>Comment by Ramana Raja (2022-12-06T15:57:32Z)</strong></p>
<ul></ul><p>Frank suspects it's <a class="external" href="https://github.com/nfs-ganesha/nfs-ganesha/issues/888">https://github.com/nfs-ganesha/nfs-ganesha/issues/888</a>.</p>
<p><strong>Comment by John Mulligan (2022-12-06T16:27:50Z)</strong></p>
<ul></ul><p>Based on suggestions in ceph-devel IRC, I added an EXPORT_DEFAULTS section:<br /><pre>
# This file is generated by cephadm.
NFS_CORE_PARAM {
Enable_NLM = false;
Enable_RQUOTA = false;
Protocols = 4;
NFS_Port = 2049;
}
NFSv4 {
Delegations = false;
RecoveryBackend = 'rados_cluster';
Minor_Versions = 1, 2;
}
RADOS_KV {
UserId = "nfs.nfs1.0.0.ceph0.fqfjvf";
nodeid = "nfs.nfs1.0";
pool = ".nfs";
namespace = "nfs1";
}
RADOS_URLS {
UserId = "nfs.nfs1.0.0.ceph0.fqfjvf";
watch_url = "rados://.nfs/nfs1/conf-nfs.nfs1";
}
RGW {
cluster = "ceph";
name = "client.nfs.nfs1.0.0.ceph0.fqfjvf-rgw";
}
EXPORT_DEFAULTS {
Protocols = 4;
}
%url rados://.nfs/nfs1/conf-nfs.nfs1
</pre></p>
<p>Restarted ganesha, and now the client can mount the export:<br /><pre>
[ceph@ceph1 ~]$ sudo mount.nfs ceph0.cx.fdopen.net:/fs1 /mnt
[ceph@ceph1 ~]$ mount | grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
ceph0.cx.fdopen.net:/fs1 on /mnt type nfs4 (rw,relatime,seclabel,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.76.201,local_lock=none,addr=192.168.76.200)
</pre></p>
<p><strong>Comment by Ramana Raja (2022-12-06T18:16:24Z)</strong></p>
<ul></ul><p>Checked with Frank on IRC; we can add the following section to the template src/pybind/mgr/cephadm/templates/services/nfs/ganesha.conf.j2:<br /><pre>
EXPORT_DEFAULTS {
Protocols = 4;
}
</pre><br />which should be backwards compatible. But as pointed out by John in the orchestrators weekly, we may want to wait for the fix in NFS-Ganesha that would remove the need for the EXPORT_DEFAULTS block.</p>
<p>The NFS-Ganesha issue is tracked here: <a class="external" href="https://github.com/nfs-ganesha/nfs-ganesha/issues/888">https://github.com/nfs-ganesha/nfs-ganesha/issues/888</a></p>
<p><strong>Comment by Frank Filz (2022-12-06T19:33:33Z)</strong></p>
<ul></ul><p>I believe I have a fix:</p>
<p><a class="external" href="https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/547188">https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/547188</a></p>
<p>There are two patches, so if you want to check it out, download:</p>
<pre>git fetch ssh://ffilz@review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/88/547188/1 && git checkout FETCH_HEAD</pre>
<p>It may be a few days before we tag a new V4.3 that includes this fix, in the meantime, if you can test this fix it would be helpful.</p>
<p>Thanks</p>
<p>Frank</p>
<p><strong>Comment by Ramana Raja (2022-12-07T03:33:13Z)</strong></p>
<ul></ul><p>Frank Filz wrote:</p>
<blockquote>
<p>I believe I have a fix:</p>
<p><a class="external" href="https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/547188">https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/547188</a></p>
<p>There are two patches, so if you want to check it out, download:</p>
<pre>git fetch ssh://ffilz@review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/88/547188/1 && git checkout FETCH_HEAD</pre>
</blockquote>
<p>I tested this fix, and it worked for me: I locally built nfs-ganesha with the fix and used it with Ceph built from the main branch. With the fix in place, I didn't have to add the EXPORT_DEFAULTS section to make mounting work.</p>
<blockquote>
<p>It may be a few days before we tag a new V4.3 that includes this fix, in the meantime, if you can test this fix it would be helpful.</p>
<p>Thanks</p>
<p>Frank</p>
</blockquote>
<p><strong>Comment by Adam King (2022-12-26T17:45:13Z)</strong></p>
<ul><li><strong>Status</strong> changed from <i>New</i> to <i>Resolved</i></li></ul><p>This was fixed on the ganesha side by <a class="external" href="https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/547188">https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/547188</a>.</p>
<p><strong>Comment by Kamoltat (Junior) Sirivadhna (2024-02-21T18:12:07Z)</strong></p>
<ul><li><strong>Status</strong> changed from <i>Resolved</i> to <i>New</i></li></ul><p>Hi guys,<br />this problem popped up in a RADOS Pacific branch run:</p>
<p>/a/yuriw-2024-02-19_19:25:49-rados-pacific-release-distro-default-smithi/7566724/</p>
<p><strong>Comment by Kamoltat (Junior) Sirivadhna (2024-02-21T18:46:25Z)</strong></p>
<ul><li><strong>Status</strong> changed from <i>New</i> to <i>Pending Backport</i></li></ul><p>As discussed offline with Adam King:</p>
<blockquote><p>tracker was "fixed" by a change in ganesha itself and then the version we're using in main being updated. I assume it's the same here where there is an issue with the ganesha version we use in pacific that's causing this</p></blockquote>
<p><strong>Comment by Backport Bot (2024-02-21T18:47:40Z)</strong></p>
<ul><li><strong>Tags</strong> set to <i>backport_processed</i></li></ul>