Bug #58476

open

test_non_existent_cluster: cluster does not exist

Added by Laura Flores over 1 year ago. Updated 3 months ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport: quincy,reef
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

/a/yuriw-2023-01-12_20:11:41-rados-main-distro-default-smithi/7138818

2023-01-13T16:20:41.856 INFO:teuthology.orchestra.run.smithi086.stderr:Error ENOENT: cluster does not exist
2023-01-13T16:20:41.859 DEBUG:teuthology.orchestra.run:got remote process result: 2
2023-01-13T16:20:41.860 DEBUG:teuthology.orchestra.run.smithi086:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph log 'Ended test tasks.cephfs.test_nfs.TestNFS.test_non_existent_cluster'
2023-01-13T16:20:42.275 INFO:journalctl@ceph.mon.a.smithi086.stdout:Jan 13 16:20:42 smithi086 ceph-mon[88225]: from='client.14923 -' entity='client.admin' cmd=[{"prefix": "nfs cluster ls", "target": ["mon-mgr", ""]}]: dispatch
2023-01-13T16:20:42.275 INFO:journalctl@ceph.mon.a.smithi086.stdout:Jan 13 16:20:42 smithi086 ceph-mon[88225]: from='client.14925 -' entity='client.admin' cmd=[{"prefix": "nfs cluster info", "cluster_id": "foo", "target": ["mon-mgr", ""]}]: dispatch
2023-01-13T16:20:42.275 INFO:journalctl@ceph.mgr.a.smithi086.stdout:Jan 13 16:20:41 smithi086 ceph-76cf727c-935c-11ed-821d-001a4aab830c-mgr-a[115902]: 2023-01-13T16:20:41.854+0000 7f42574fb700 -1 mgr.server reply reply (2) No such file or directory cluster does not exist
2023-01-13T16:20:43.079 INFO:tasks.cephfs_test_runner:test_non_existent_cluster (tasks.cephfs.test_nfs.TestNFS) ... ERROR
2023-01-13T16:20:43.080 INFO:tasks.cephfs_test_runner:
2023-01-13T16:20:43.080 INFO:tasks.cephfs_test_runner:======================================================================
2023-01-13T16:20:43.081 INFO:tasks.cephfs_test_runner:ERROR: test_non_existent_cluster (tasks.cephfs.test_nfs.TestNFS)
2023-01-13T16:20:43.081 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2023-01-13T16:20:43.082 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2023-01-13T16:20:43.082 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_cb17f286272f7ae9dbdf8117ca7b077b0a5cf650/qa/tasks/cephfs/test_nfs.py", line 740, in test_non_existent_cluster
2023-01-13T16:20:43.082 INFO:tasks.cephfs_test_runner:    cluster_info = self._nfs_cmd('cluster', 'info', 'foo')
2023-01-13T16:20:43.083 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_cb17f286272f7ae9dbdf8117ca7b077b0a5cf650/qa/tasks/cephfs/test_nfs.py", line 22, in _nfs_cmd
2023-01-13T16:20:43.084 INFO:tasks.cephfs_test_runner:    return self._cmd("nfs", *args)
2023-01-13T16:20:43.084 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_cb17f286272f7ae9dbdf8117ca7b077b0a5cf650/qa/tasks/cephfs/test_nfs.py", line 19, in _cmd
2023-01-13T16:20:43.085 INFO:tasks.cephfs_test_runner:    return self.mgr_cluster.mon_manager.raw_cluster_cmd(*args)
2023-01-13T16:20:43.085 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_cb17f286272f7ae9dbdf8117ca7b077b0a5cf650/qa/tasks/ceph_manager.py", line 1609, in raw_cluster_cmd
2023-01-13T16:20:43.086 INFO:tasks.cephfs_test_runner:    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
2023-01-13T16:20:43.086 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_cb17f286272f7ae9dbdf8117ca7b077b0a5cf650/qa/tasks/ceph_manager.py", line 1600, in run_cluster_cmd
2023-01-13T16:20:43.086 INFO:tasks.cephfs_test_runner:    return self.controller.run(**kwargs)
2023-01-13T16:20:43.087 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_19d18db866afdc31fd6586f558fc29b95a87ccfb/teuthology/orchestra/remote.py", line 525, in run
2023-01-13T16:20:43.087 INFO:tasks.cephfs_test_runner:    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2023-01-13T16:20:43.088 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_19d18db866afdc31fd6586f558fc29b95a87ccfb/teuthology/orchestra/run.py", line 455, in run
2023-01-13T16:20:43.088 INFO:tasks.cephfs_test_runner:    r.wait()
2023-01-13T16:20:43.089 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_19d18db866afdc31fd6586f558fc29b95a87ccfb/teuthology/orchestra/run.py", line 161, in wait
2023-01-13T16:20:43.089 INFO:tasks.cephfs_test_runner:    self._raise_for_status()
2023-01-13T16:20:43.090 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_19d18db866afdc31fd6586f558fc29b95a87ccfb/teuthology/orchestra/run.py", line 181, in _raise_for_status
2023-01-13T16:20:43.090 INFO:tasks.cephfs_test_runner:    raise CommandFailedError(
2023-01-13T16:20:43.090 INFO:tasks.cephfs_test_runner:teuthology.exceptions.CommandFailedError: Command failed on smithi086 with status 2: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph nfs cluster info foo'
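The traceback shows the failure mode: `ceph nfs cluster info foo` exits with status 2 ("Error ENOENT: cluster does not exist"), and `raw_cluster_cmd` turns that nonzero status into a `CommandFailedError` instead of the test treating the ENOENT as the expected outcome. A minimal, self-contained sketch of that pattern, using a hypothetical `fails_with_enoent` helper (not the actual Ceph QA helpers) and a stand-in command in place of the real `ceph` invocation:

```python
import subprocess
import sys

def fails_with_enoent(cmd):
    """Run a command and report whether it failed with exit status 2
    (ENOENT), rather than letting the failure raise an exception."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 2

# Stand-in for the failing command: `ceph nfs cluster info foo` exits
# with status 2 when the cluster does not exist.
assert fails_with_enoent([sys.executable, "-c", "import sys; sys.exit(2)"])
```

The actual fix in the QA suite would need to use its own command helpers (e.g. passing an argument that permits nonzero exit statuses), but the shape is the same: the expected-failure case must be asserted on, not allowed to propagate.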

Actions #1

Updated by Nitzan Mordechai about 1 year ago

/a/yuriw-2023-01-21_17:58:46-rados-wip-yuri6-testing-2023-01-20-0728-distro-default-smithi/7132611

Actions #2

Updated by Laura Flores about 1 year ago

/a/yuriw-2023-03-09_20:16:24-rados-wip-yuri5-testing-2023-03-09-0941-quincy-distro-default-smithi/7200260

Actions #3

Updated by Laura Flores about 1 year ago

/a/yuriw-2023-03-24_13:25:17-rados-quincy-release-distro-default-smithi/7219232

Actions #4

Updated by Sridhar Seshasayee 12 months ago

/a/sseshasa-2023-05-01_18:57:15-rados-wip-sseshasa2-testing-2023-05-01-2153-quincy-distro-default-smithi/7259886

Actions #5

Updated by Laura Flores 11 months ago

/a/yuriw-2023-05-19_15:13:10-rados-wip-yuri10-testing-2023-05-18-0815-quincy-distro-default-smithi/7278611

Actions #6

Updated by Laura Flores 11 months ago

/a/yuriw-2023-05-22_15:26:04-rados-wip-yuri10-testing-2023-05-18-0815-quincy-distro-default-smithi/7282670

Actions #7

Updated by Laura Flores 11 months ago

/a/yuriw-2023-05-25_14:52:58-rados-wip-yuri3-testing-2023-05-24-1136-quincy-distro-default-smithi/7286311

Actions #8

Updated by Laura Flores 11 months ago

/a/yuriw-2023-05-31_16:09:42-rados-wip-yuri5-testing-2023-05-30-0828-quincy-distro-default-smithi/7292090

Actions #9

Updated by Laura Flores 10 months ago

/a/yuriw-2023-06-22_20:09:40-rados-wip-yuri6-testing-2023-06-22-1005-quincy-distro-default-smithi/7312582

Actions #10

Updated by Laura Flores 10 months ago

/a/yuriw-2023-06-22_20:09:40-rados-wip-yuri6-testing-2023-06-22-1005-quincy-distro-default-smithi/7312582

2023-06-23T03:30:33.952 INFO:teuthology.orchestra.run.smithi057.stdout:NFS Cluster Created Successfully
2023-06-23T03:30:34.939 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]: 2023-06-23T03:30:34.485+0000 7f08ced8e700 -1 log_channel(cephadm) log [ERR] : Failed to apply nfs.test spec NFSServiceSpec.from_json(yaml.safe_load('''service_type: nfs
2023-06-23T03:30:34.939 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]: service_id: test
2023-06-23T03:30:34.939 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]: service_name: nfs.test
2023-06-23T03:30:34.939 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]: placement:
2023-06-23T03:30:34.940 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:   count: 1
2023-06-23T03:30:34.940 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]: spec:
2023-06-23T03:30:34.940 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:   port: 2049
2023-06-23T03:30:34.940 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]: ''')): cephadm exited with an error code: 1, stderr: ERROR: Daemon not found: nfs.test.0.0.smithi057.wrisnn. See `cephadm ls`
2023-06-23T03:30:34.940 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]: Traceback (most recent call last):
2023-06-23T03:30:34.940 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:   File "/usr/share/ceph/mgr/cephadm/serve.py", line 522, in _apply_all_services
2023-06-23T03:30:34.940 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:     if self._apply_service(spec):
2023-06-23T03:30:34.940 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:   File "/usr/share/ceph/mgr/cephadm/serve.py", line 776, in _apply_service
2023-06-23T03:30:34.941 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:     self._remove_daemon(d.name(), d.hostname)
2023-06-23T03:30:34.941 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:   File "/usr/share/ceph/mgr/cephadm/serve.py", line 1349, in _remove_daemon
2023-06-23T03:30:34.941 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:     host, name, 'rm-daemon', args))
2023-06-23T03:30:34.941 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:   File "/usr/share/ceph/mgr/cephadm/module.py", line 634, in wait_async
2023-06-23T03:30:34.941 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:     return self.event_loop.get_result(coro, timeout)
2023-06-23T03:30:34.941 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:   File "/usr/share/ceph/mgr/cephadm/ssh.py", line 63, in get_result
2023-06-23T03:30:34.941 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:     return future.result(timeout)
2023-06-23T03:30:34.941 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:   File "/lib64/python3.6/concurrent/futures/_base.py", line 432, in result
2023-06-23T03:30:34.942 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:     return self.__get_result()
2023-06-23T03:30:34.942 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:   File "/lib64/python3.6/concurrent/futures/_base.py", line 384, in __get_result
2023-06-23T03:30:34.942 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:     raise self._exception
2023-06-23T03:30:34.942 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:   File "/usr/share/ceph/mgr/cephadm/serve.py", line 1523, in _run_cephadm
2023-06-23T03:30:34.942 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]:     f'cephadm exited with an error code: {code}, stderr: {err}')
2023-06-23T03:30:34.942 INFO:journalctl@ceph.mgr.a.smithi057.stdout:Jun 23 03:30:34 smithi057 conmon[124609]: orchestrator._interface.OrchestratorError: cephadm exited with an error code: 1, stderr: ERROR: Daemon not found: nfs.test.0.0.smithi057.wrisnn. See `cephadm ls`

...

2023-06-23T03:33:49.623 INFO:tasks.cephfs_test_runner:======================================================================
2023-06-23T03:33:49.624 INFO:tasks.cephfs_test_runner:ERROR: test_non_existent_cluster (tasks.cephfs.test_nfs.TestNFS)
2023-06-23T03:33:49.624 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2023-06-23T03:33:49.625 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2023-06-23T03:33:49.625 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_e6c50e4e4b8a8d449b864060ef3b121b5791e41b/qa/tasks/cephfs/test_nfs.py", line 740, in test_non_existent_cluster
2023-06-23T03:33:49.626 INFO:tasks.cephfs_test_runner:    cluster_info = self._nfs_cmd('cluster', 'info', 'foo')
2023-06-23T03:33:49.626 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_e6c50e4e4b8a8d449b864060ef3b121b5791e41b/qa/tasks/cephfs/test_nfs.py", line 22, in _nfs_cmd
2023-06-23T03:33:49.626 INFO:tasks.cephfs_test_runner:    return self._cmd("nfs", *args)
2023-06-23T03:33:49.627 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_e6c50e4e4b8a8d449b864060ef3b121b5791e41b/qa/tasks/cephfs/test_nfs.py", line 19, in _cmd
2023-06-23T03:33:49.627 INFO:tasks.cephfs_test_runner:    return self.mgr_cluster.mon_manager.raw_cluster_cmd(*args)
2023-06-23T03:33:49.627 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_e6c50e4e4b8a8d449b864060ef3b121b5791e41b/qa/tasks/ceph_manager.py", line 1624, in raw_cluster_cmd
2023-06-23T03:33:49.628 INFO:tasks.cephfs_test_runner:    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
2023-06-23T03:33:49.628 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_e6c50e4e4b8a8d449b864060ef3b121b5791e41b/qa/tasks/ceph_manager.py", line 1615, in run_cluster_cmd
2023-06-23T03:33:49.628 INFO:tasks.cephfs_test_runner:    return self.controller.run(**kwargs)
2023-06-23T03:33:49.628 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_076bbebc42a14f7d568aaa78eabb0038327bcb23/teuthology/orchestra/remote.py", line 525, in run
2023-06-23T03:33:49.629 INFO:tasks.cephfs_test_runner:    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2023-06-23T03:33:49.629 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_076bbebc42a14f7d568aaa78eabb0038327bcb23/teuthology/orchestra/run.py", line 455, in run
2023-06-23T03:33:49.629 INFO:tasks.cephfs_test_runner:    r.wait()
2023-06-23T03:33:49.629 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_076bbebc42a14f7d568aaa78eabb0038327bcb23/teuthology/orchestra/run.py", line 161, in wait
2023-06-23T03:33:49.629 INFO:tasks.cephfs_test_runner:    self._raise_for_status()
2023-06-23T03:33:49.630 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_076bbebc42a14f7d568aaa78eabb0038327bcb23/teuthology/orchestra/run.py", line 181, in _raise_for_status
2023-06-23T03:33:49.630 INFO:tasks.cephfs_test_runner:    raise CommandFailedError(
2023-06-23T03:33:49.630 INFO:tasks.cephfs_test_runner:teuthology.exceptions.CommandFailedError: Command failed on smithi057 with status 2: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph nfs cluster info foo'
2023-06-23T03:33:49.630 INFO:tasks.cephfs_test_runner:
2023-06-23T03:33:49.630 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------

Actions #11

Updated by Laura Flores 10 months ago

  • Backport set to quincy
Actions #12

Updated by Laura Flores 10 months ago

  • Backport changed from quincy to quincy,reef
Actions #13

Updated by Aishwarya Mathuria 9 months ago

/a/yuriw-2023-07-28_14:25:29-rados-wip-yuri7-testing-2023-07-27-1336-quincy-distro-default-smithi/7355543

Actions #14

Updated by Laura Flores 9 months ago

/a/yuriw-2023-07-25_21:35:32-rados-wip-yuri8-testing-2023-07-24-0819-quincy-distro-default-smithi/7352333

Actions #15

Updated by Sridhar Seshasayee 7 months ago

/a/yuriw-2023-10-02_19:19:00-rados-wip-yuri4-testing-2023-10-02-0826-quincy-distro-default-smithi/7408751
/a/yuriw-2023-10-02_19:19:00-rados-wip-yuri4-testing-2023-10-02-0826-quincy-distro-default-smithi/7408901

Actions #16

Updated by Laura Flores 6 months ago

/a/yuriw-2023-10-05_23:59:08-rados-wip-yuri8-testing-2023-10-05-1127-quincy-distro-default-smithi/7413038

Actions #17

Updated by Laura Flores 6 months ago

/a/lflores-2023-10-10_13:56:55-rados-wip-lflores-testing-2023-10-09-2254-quincy-distro-default-smithi/7420302

2023-10-10T17:52:28.911 INFO:teuthology.orchestra.run.smithi046.stderr:Error ENOENT: cluster does not exist
2023-10-10T17:52:28.914 DEBUG:teuthology.orchestra.run:got remote process result: 2
2023-10-10T17:52:28.915 DEBUG:teuthology.orchestra.run.smithi046:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph log 'Ended test tasks.cephfs.test_nfs.TestNFS.test_non_existent_cluster'
2023-10-10T17:52:29.341 INFO:journalctl@ceph.mgr.a.smithi046.stdout:Oct 10 17:52:28 smithi046 ceph-ea2e15dc-6793-11ee-8db6-212e2dc638e7-mgr-a[151431]: 2023-10-10T17:52:28.909+0000 7f37f1812700 -1 mgr.server reply reply (2) No such file or directory cluster does not exist
2023-10-10T17:52:30.079 INFO:tasks.cephfs_test_runner:Test that cluster info doesn't throw junk data for non-existent cluster ... ERROR
2023-10-10T17:52:30.080 INFO:tasks.cephfs_test_runner:
2023-10-10T17:52:30.080 INFO:tasks.cephfs_test_runner:======================================================================
2023-10-10T17:52:30.080 INFO:tasks.cephfs_test_runner:ERROR: test_non_existent_cluster (tasks.cephfs.test_nfs.TestNFS)
2023-10-10T17:52:30.081 INFO:tasks.cephfs_test_runner:Test that cluster info doesn't throw junk data for non-existent cluster
2023-10-10T17:52:30.081 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2023-10-10T17:52:30.081 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2023-10-10T17:52:30.082 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_86713905f489fb17622169e62f37eeee2663e2c7/qa/tasks/cephfs/test_nfs.py", line 817, in test_non_existent_cluster
2023-10-10T17:52:30.082 INFO:tasks.cephfs_test_runner:    cluster_info = self._nfs_cmd('cluster', 'info', 'foo')
2023-10-10T17:52:30.083 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_86713905f489fb17622169e62f37eeee2663e2c7/qa/tasks/cephfs/test_nfs.py", line 22, in _nfs_cmd
2023-10-10T17:52:30.083 INFO:tasks.cephfs_test_runner:    return self._cmd("nfs", *args)
2023-10-10T17:52:30.083 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_86713905f489fb17622169e62f37eeee2663e2c7/qa/tasks/cephfs/test_nfs.py", line 19, in _cmd
2023-10-10T17:52:30.083 INFO:tasks.cephfs_test_runner:    return self.mgr_cluster.mon_manager.raw_cluster_cmd(*args)
2023-10-10T17:52:30.084 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_86713905f489fb17622169e62f37eeee2663e2c7/qa/tasks/ceph_manager.py", line 1624, in raw_cluster_cmd
2023-10-10T17:52:30.084 INFO:tasks.cephfs_test_runner:    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
2023-10-10T17:52:30.084 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_86713905f489fb17622169e62f37eeee2663e2c7/qa/tasks/ceph_manager.py", line 1615, in run_cluster_cmd
2023-10-10T17:52:30.084 INFO:tasks.cephfs_test_runner:    return self.controller.run(**kwargs)
2023-10-10T17:52:30.085 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_54e62bcbac4e53d9685e08328b790d3b20d71cae/teuthology/orchestra/remote.py", line 522, in run
2023-10-10T17:52:30.085 INFO:tasks.cephfs_test_runner:    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2023-10-10T17:52:30.085 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_54e62bcbac4e53d9685e08328b790d3b20d71cae/teuthology/orchestra/run.py", line 455, in run
2023-10-10T17:52:30.085 INFO:tasks.cephfs_test_runner:    r.wait()
2023-10-10T17:52:30.086 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_54e62bcbac4e53d9685e08328b790d3b20d71cae/teuthology/orchestra/run.py", line 161, in wait
2023-10-10T17:52:30.086 INFO:tasks.cephfs_test_runner:    self._raise_for_status()
2023-10-10T17:52:30.087 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_54e62bcbac4e53d9685e08328b790d3b20d71cae/teuthology/orchestra/run.py", line 181, in _raise_for_status
2023-10-10T17:52:30.087 INFO:tasks.cephfs_test_runner:    raise CommandFailedError(
2023-10-10T17:52:30.088 INFO:tasks.cephfs_test_runner:teuthology.exceptions.CommandFailedError: Command failed on smithi046 with status 2: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph nfs cluster info foo'

Actions #18

Updated by Radoslaw Zarzynski 6 months ago

http://qa-proxy.ceph.com/teuthology/yuriw-2023-10-13_20:02:13-rados-quincy-release-distro-default-smithi/7425497/teuthology.log

2023-10-14T14:58:05.128 INFO:tasks.cephfs_test_runner:Test that cluster info doesn't throw junk data for non-existent cluster ... ERROR
2023-10-14T14:58:05.129 INFO:tasks.cephfs_test_runner:
2023-10-14T14:58:05.129 INFO:tasks.cephfs_test_runner:======================================================================
2023-10-14T14:58:05.129 INFO:tasks.cephfs_test_runner:ERROR: test_non_existent_cluster (tasks.cephfs.test_nfs.TestNFS)
2023-10-14T14:58:05.130 INFO:tasks.cephfs_test_runner:Test that cluster info doesn't throw junk data for non-existent cluster
2023-10-14T14:58:05.130 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2023-10-14T14:58:05.130 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2023-10-14T14:58:05.131 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_9d6d32bb3452d5179bde2ee1cfa05df7a65f4586/qa/tasks/cephfs/test_nfs.py", line 817, in test_non_existent_cluster
2023-10-14T14:58:05.131 INFO:tasks.cephfs_test_runner:    cluster_info = self._nfs_cmd('cluster', 'info', 'foo')
2023-10-14T14:58:05.131 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_9d6d32bb3452d5179bde2ee1cfa05df7a65f4586/qa/tasks/cephfs/test_nfs.py", line 22, in _nfs_cmd
2023-10-14T14:58:05.131 INFO:tasks.cephfs_test_runner:    return self._cmd("nfs", *args)
2023-10-14T14:58:05.132 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_9d6d32bb3452d5179bde2ee1cfa05df7a65f4586/qa/tasks/cephfs/test_nfs.py", line 19, in _cmd
2023-10-14T14:58:05.132 INFO:tasks.cephfs_test_runner:    return self.mgr_cluster.mon_manager.raw_cluster_cmd(*args)
2023-10-14T14:58:05.132 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_9d6d32bb3452d5179bde2ee1cfa05df7a65f4586/qa/tasks/ceph_manager.py", line 1624, in raw_cluster_cmd
2023-10-14T14:58:05.132 INFO:tasks.cephfs_test_runner:    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
2023-10-14T14:58:05.133 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_9d6d32bb3452d5179bde2ee1cfa05df7a65f4586/qa/tasks/ceph_manager.py", line 1615, in run_cluster_cmd
2023-10-14T14:58:05.133 INFO:tasks.cephfs_test_runner:    return self.controller.run(**kwargs)
2023-10-14T14:58:05.133 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_8cdab074dcca9a68965bc5a50e9c30b691949723/teuthology/orchestra/remote.py", line 522, in run
2023-10-14T14:58:05.133 INFO:tasks.cephfs_test_runner:    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2023-10-14T14:58:05.134 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_8cdab074dcca9a68965bc5a50e9c30b691949723/teuthology/orchestra/run.py", line 455, in run
2023-10-14T14:58:05.134 INFO:tasks.cephfs_test_runner:    r.wait()
2023-10-14T14:58:05.134 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_8cdab074dcca9a68965bc5a50e9c30b691949723/teuthology/orchestra/run.py", line 161, in wait
2023-10-14T14:58:05.135 INFO:tasks.cephfs_test_runner:    self._raise_for_status()
2023-10-14T14:58:05.135 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_8cdab074dcca9a68965bc5a50e9c30b691949723/teuthology/orchestra/run.py", line 181, in _raise_for_status
2023-10-14T14:58:05.135 INFO:tasks.cephfs_test_runner:    raise CommandFailedError(
2023-10-14T14:58:05.135 INFO:tasks.cephfs_test_runner:teuthology.exceptions.CommandFailedError: Command failed on smithi053 with status 2: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph nfs cluster info foo'
Actions #19

Updated by Radoslaw Zarzynski 6 months ago

Also http://qa-proxy.ceph.com/teuthology/yuriw-2023-10-13_20:02:13-rados-quincy-release-distro-default-smithi/7425647/teuthology.log

2023-10-14T16:36:18.191 INFO:tasks.cephfs_test_runner:Test that cluster info doesn't throw junk data for non-existent cluster ... ERROR
2023-10-14T16:36:18.191 INFO:tasks.cephfs_test_runner:
2023-10-14T16:36:18.192 INFO:tasks.cephfs_test_runner:======================================================================
2023-10-14T16:36:18.192 INFO:tasks.cephfs_test_runner:ERROR: test_non_existent_cluster (tasks.cephfs.test_nfs.TestNFS)
2023-10-14T16:36:18.192 INFO:tasks.cephfs_test_runner:Test that cluster info doesn't throw junk data for non-existent cluster
2023-10-14T16:36:18.192 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2023-10-14T16:36:18.193 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2023-10-14T16:36:18.193 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_9d6d32bb3452d5179bde2ee1cfa05df7a65f4586/qa/tasks/cephfs/test_nfs.py", line 817, in test_non_existent_cluster
2023-10-14T16:36:18.193 INFO:tasks.cephfs_test_runner:    cluster_info = self._nfs_cmd('cluster', 'info', 'foo')
2023-10-14T16:36:18.194 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_9d6d32bb3452d5179bde2ee1cfa05df7a65f4586/qa/tasks/cephfs/test_nfs.py", line 22, in _nfs_cmd
2023-10-14T16:36:18.194 INFO:tasks.cephfs_test_runner:    return self._cmd("nfs", *args)
2023-10-14T16:36:18.194 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_9d6d32bb3452d5179bde2ee1cfa05df7a65f4586/qa/tasks/cephfs/test_nfs.py", line 19, in _cmd
2023-10-14T16:36:18.194 INFO:tasks.cephfs_test_runner:    return self.mgr_cluster.mon_manager.raw_cluster_cmd(*args)
2023-10-14T16:36:18.195 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_9d6d32bb3452d5179bde2ee1cfa05df7a65f4586/qa/tasks/ceph_manager.py", line 1624, in raw_cluster_cmd
2023-10-14T16:36:18.195 INFO:tasks.cephfs_test_runner:    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
2023-10-14T16:36:18.195 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_9d6d32bb3452d5179bde2ee1cfa05df7a65f4586/qa/tasks/ceph_manager.py", line 1615, in run_cluster_cmd
2023-10-14T16:36:18.196 INFO:tasks.cephfs_test_runner:    return self.controller.run(**kwargs)
2023-10-14T16:36:18.196 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_8cdab074dcca9a68965bc5a50e9c30b691949723/teuthology/orchestra/remote.py", line 522, in run
2023-10-14T16:36:18.196 INFO:tasks.cephfs_test_runner:    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2023-10-14T16:36:18.196 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_8cdab074dcca9a68965bc5a50e9c30b691949723/teuthology/orchestra/run.py", line 455, in run
2023-10-14T16:36:18.197 INFO:tasks.cephfs_test_runner:    r.wait()
2023-10-14T16:36:18.197 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_8cdab074dcca9a68965bc5a50e9c30b691949723/teuthology/orchestra/run.py", line 161, in wait
2023-10-14T16:36:18.197 INFO:tasks.cephfs_test_runner:    self._raise_for_status()
2023-10-14T16:36:18.197 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_teuthology_8cdab074dcca9a68965bc5a50e9c30b691949723/teuthology/orchestra/run.py", line 181, in _raise_for_status
2023-10-14T16:36:18.198 INFO:tasks.cephfs_test_runner:    raise CommandFailedError(
2023-10-14T16:36:18.198 INFO:tasks.cephfs_test_runner:teuthology.exceptions.CommandFailedError: Command failed on smithi112 with status 2: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph nfs cluster info foo'
Actions #20

Updated by Laura Flores 5 months ago

/a/yuriw-2023-11-10_18:18:41-rados-wip-yuri3-testing-2023-11-09-1355-quincy-distro-default-smithi/7454561

Actions #21

Updated by Laura Flores 4 months ago

/a/yuriw-2023-12-14_22:13:20-rados-wip-yuri11-testing-2023-12-14-1108-quincy-distro-default-smithi/7491709
