
Bug #58096

test_cluster_set_reset_user_config: NFS mount fails due to missing ceph directory

Added by Laura Flores 2 months ago. Updated about 2 months ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2022-11-23T23:01:09.478 DEBUG:teuthology.orchestra.run.smithi008:> sudo mount -t nfs -o port=2049 172.21.15.8:/ceph /mnt
2022-11-23T23:01:10.002 INFO:teuthology.orchestra.run.smithi008.stderr:mount.nfs: mounting 172.21.15.8:/ceph failed, reason given by server: No such file or directory
2022-11-23T23:01:10.004 DEBUG:teuthology.orchestra.run:got remote process result: 32
2022-11-23T23:01:11.470 INFO:journalctl@ceph.mon.a.smithi008.stdout:Nov 23 23:01:11 smithi008 ceph-mon[94037]: pgmap v63: 97 pgs: 97 active+clean; 581 KiB data, 20 MiB used, 268 GiB / 268 GiB avail; 85 B/s rd, 0 op/s
2022-11-23T23:01:12.006 DEBUG:teuthology.orchestra.run.smithi008:> sudo mount -t nfs -o port=2049 172.21.15.8:/ceph /mnt
2022-11-23T23:01:12.167 INFO:teuthology.orchestra.run.smithi008.stderr:mount.nfs: mounting 172.21.15.8:/ceph failed, reason given by server: No such file or directory
2022-11-23T23:01:12.168 DEBUG:teuthology.orchestra.run:got remote process result: 32
2022-11-23T23:01:13.470 INFO:journalctl@ceph.mon.a.smithi008.stdout:Nov 23 23:01:13 smithi008 ceph-mon[94037]: pgmap v64: 97 pgs: 97 active+clean; 581 KiB data, 20 MiB used, 268 GiB / 268 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
2022-11-23T23:01:14.170 DEBUG:teuthology.orchestra.run.smithi008:> sudo mount -t nfs -o port=2049 172.21.15.8:/ceph /mnt
2022-11-23T23:01:14.338 INFO:teuthology.orchestra.run.smithi008.stderr:mount.nfs: mounting 172.21.15.8:/ceph failed, reason given by server: No such file or directory
2022-11-23T23:01:14.339 DEBUG:teuthology.orchestra.run:got remote process result: 32
2022-11-23T23:01:15.470 INFO:journalctl@ceph.mon.a.smithi008.stdout:Nov 23 23:01:15 smithi008 ceph-mon[94037]: pgmap v65: 97 pgs: 97 active+clean; 581 KiB data, 20 MiB used, 268 GiB / 268 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
2022-11-23T23:01:16.341 DEBUG:teuthology.orchestra.run.smithi008:> sudo mount -t nfs -o port=2049 172.21.15.8:/ceph /mnt
2022-11-23T23:01:16.512 INFO:teuthology.orchestra.run.smithi008.stderr:mount.nfs: mounting 172.21.15.8:/ceph failed, reason given by server: No such file or directory
2022-11-23T23:01:16.514 DEBUG:teuthology.orchestra.run:got remote process result: 32
2022-11-23T23:01:16.515 DEBUG:teuthology.orchestra.run.smithi008:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph log 'Ended test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config'
2022-11-23T23:01:17.194 INFO:tasks.cephfs_test_runner:test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) ... ERROR
2022-11-23T23:01:17.194 INFO:tasks.cephfs_test_runner:
2022-11-23T23:01:17.195 INFO:tasks.cephfs_test_runner:======================================================================
2022-11-23T23:01:17.195 INFO:tasks.cephfs_test_runner:ERROR: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS)
2022-11-23T23:01:17.195 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2022-11-23T23:01:17.195 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2022-11-23T23:01:17.195 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_ac01e2c21e15a00b6baca53640db7f447512f3ce/qa/tasks/cephfs/test_nfs.py", line 572, in test_cluster_set_reset_user_config
2022-11-23T23:01:17.195 INFO:tasks.cephfs_test_runner:    self._test_mnt(pseudo_path, port, ip)
2022-11-23T23:01:17.195 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_ac01e2c21e15a00b6baca53640db7f447512f3ce/qa/tasks/cephfs/test_nfs.py", line 301, in _test_mnt
2022-11-23T23:01:17.195 INFO:tasks.cephfs_test_runner:    f'{ip}:{pseudo_path}', '/mnt'])
2022-11-23T23:01:17.195 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_4da97cf64e542f347ec47b7bdbe5eca99759f9b7/teuthology/orchestra/cluster.py", line 85, in run
2022-11-23T23:01:17.196 INFO:tasks.cephfs_test_runner:    procs = [remote.run(**kwargs, wait=_wait) for remote in remotes]
2022-11-23T23:01:17.196 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_4da97cf64e542f347ec47b7bdbe5eca99759f9b7/teuthology/orchestra/cluster.py", line 85, in <listcomp>
2022-11-23T23:01:17.196 INFO:tasks.cephfs_test_runner:    procs = [remote.run(**kwargs, wait=_wait) for remote in remotes]
2022-11-23T23:01:17.196 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_4da97cf64e542f347ec47b7bdbe5eca99759f9b7/teuthology/orchestra/remote.py", line 525, in run
2022-11-23T23:01:17.196 INFO:tasks.cephfs_test_runner:    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2022-11-23T23:01:17.196 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_4da97cf64e542f347ec47b7bdbe5eca99759f9b7/teuthology/orchestra/run.py", line 455, in run
2022-11-23T23:01:17.196 INFO:tasks.cephfs_test_runner:    r.wait()
2022-11-23T23:01:17.196 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_4da97cf64e542f347ec47b7bdbe5eca99759f9b7/teuthology/orchestra/run.py", line 161, in wait
2022-11-23T23:01:17.196 INFO:tasks.cephfs_test_runner:    self._raise_for_status()
2022-11-23T23:01:17.196 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_4da97cf64e542f347ec47b7bdbe5eca99759f9b7/teuthology/orchestra/run.py", line 183, in _raise_for_status
2022-11-23T23:01:17.196 INFO:tasks.cephfs_test_runner:    node=self.hostname, label=self.label
2022-11-23T23:01:17.196 INFO:tasks.cephfs_test_runner:teuthology.exceptions.CommandFailedError: Command failed on smithi008 with status 32: 'sudo mount -t nfs -o port=2049 172.21.15.8:/ceph /mnt'
2022-11-23T23:01:17.197 INFO:tasks.cephfs_test_runner:
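The log above shows four back-to-back `mount -t nfs` attempts, each failing with status 32 ("No such file or directory") before the test gives up, suggesting the export path was not yet available when mounting started. A minimal sketch of that kind of bounded retry loop (the helper name, stub, and timings are illustrative, not the actual teuthology/QA API):

```python
import time

def mount_with_retry(run_cmd, attempts=4, delay=2.0):
    """Retry a mount command until it succeeds or attempts run out.

    run_cmd returns the command's exit status: 0 on success, or a
    nonzero code such as 32 for mount.nfs failures like
    'No such file or directory'.
    """
    status = None
    for i in range(attempts):
        status = run_cmd()
        if status == 0:
            return True
        if i < attempts - 1:
            time.sleep(delay)
    return False

# Stub standing in for `sudo mount -t nfs ...`: the export only
# "appears" on the third attempt, mimicking an NFS-Ganesha export
# that is slow to come up after cluster reconfiguration.
calls = {"n": 0}
def fake_mount():
    calls["n"] += 1
    return 0 if calls["n"] >= 3 else 32

print(mount_with_retry(fake_mount, attempts=4, delay=0))  # True
```

If the export genuinely never gets created (as this ticket suggests), no amount of retrying helps; the retries only mask the window between export creation and ganesha picking it up.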

Related issues

Related to Orchestrator - Bug #58145: orch/cephadm: nfs tests failing to mount exports ('mount -t nfs 10.0.31.120:/fake /mnt/foo' fails) Resolved

History

#1 Updated by Laura Flores 2 months ago

Laura Flores wrote:

[...]

/a/yuriw-2022-11-23_15:09:06-rados-wip-yuri10-testing-2022-11-22-1711-distro-default-smithi/7087429

#2 Updated by Nitzan Mordechai 2 months ago

/a/yuriw-2022-11-28_21:26:12-rados-wip-yuri7-testing-2022-11-18-1548-distro-default-smithi/7096011

#4 Updated by Laura Flores about 2 months ago

@John might be. I'll mark it as related.

#5 Updated by Laura Flores about 2 months ago

  • Related to Bug #58145: orch/cephadm: nfs tests failing to mount exports ('mount -t nfs 10.0.31.120:/fake /mnt/foo' fails) added

#6 Updated by Laura Flores about 2 months ago

/a/yuriw-2022-11-28_21:09:37-rados-wip-yuri4-testing-2022-11-10-1051-distro-default-smithi/7094882

#7 Updated by Matan Breizman about 2 months ago

/a/yuriw-2022-11-28_21:13:47-rados-wip-yuri11-testing-2022-11-18-1506-distro-default-smithi/7095033/

#8 Updated by Laura Flores about 2 months ago

/a/lflores-2022-12-02_20:40:02-rados-wip-yuri6-testing-2022-11-23-1348-distro-default-smithi/7101847

#9 Updated by Laura Flores about 2 months ago

/a/yuriw-2022-12-07_15:48:38-rados-wip-yuri3-testing-2022-12-06-1211-distro-default-smithi/7106544

#10 Updated by Laura Flores about 2 months ago

/a/yuriw-2022-12-07_15:47:33-rados-wip-yuri-testing-2022-12-06-1204-distro-default-smithi/7106443
