
Bug #44176

qa: "Error EINVAL: 'Module' object has no attribute 'remove_mds'"

Added by Patrick Donnelly 7 months ago. Updated 7 months ago.

Status:
Resolved
Priority:
Immediate
Assignee:
Category:
-
Target version:
% Done:
0%

Source:
Q/A
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
mgr/volumes
Labels (FS):
Pull request ID:
Crash signature:

Description

2020-02-15T20:36:04.297 INFO:teuthology.orchestra.run.smithi135.stderr:Error EINVAL: 'Module' object has no attribute 'remove_mds'
2020-02-15T20:36:04.305 DEBUG:teuthology.orchestra.run:got remote process result: 22
2020-02-15T20:36:04.309 INFO:tasks.cephfs_test_runner:test_volume_create (tasks.cephfs.test_volumes.TestVolumes) ... ERROR
...
2020-02-15T20:36:41.803 INFO:tasks.cephfs_test_runner:======================================================================
2020-02-15T20:36:41.803 INFO:tasks.cephfs_test_runner:ERROR: test_volume_create (tasks.cephfs.test_volumes.TestVolumes)
2020-02-15T20:36:41.804 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-02-15T20:36:41.804 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2020-02-15T20:36:41.804 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20200215.033325/qa/tasks/cephfs/test_volumes.py", line 199, in test_volume_create
2020-02-15T20:36:41.804 INFO:tasks.cephfs_test_runner:    self._fs_cmd("volume", "rm", volname, "--yes-i-really-mean-it")
2020-02-15T20:36:41.804 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20200215.033325/qa/tasks/cephfs/test_volumes.py", line 30, in _fs_cmd
2020-02-15T20:36:41.804 INFO:tasks.cephfs_test_runner:    return self.mgr_cluster.mon_manager.raw_cluster_cmd("fs", *args)
2020-02-15T20:36:41.804 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20200215.033325/qa/tasks/ceph_manager.py", line 1358, in raw_cluster_cmd
2020-02-15T20:36:41.804 INFO:tasks.cephfs_test_runner:    stdout=StringIO(),
2020-02-15T20:36:41.805 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 198, in run
2020-02-15T20:36:41.805 INFO:tasks.cephfs_test_runner:    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2020-02-15T20:36:41.805 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 433, in run
2020-02-15T20:36:41.805 INFO:tasks.cephfs_test_runner:    r.wait()
2020-02-15T20:36:41.805 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 158, in wait
2020-02-15T20:36:41.805 INFO:tasks.cephfs_test_runner:    self._raise_for_status()
2020-02-15T20:36:41.805 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 180, in _raise_for_status
2020-02-15T20:36:41.805 INFO:tasks.cephfs_test_runner:    node=self.hostname, label=self.label
2020-02-15T20:36:41.805 INFO:tasks.cephfs_test_runner:CommandFailedError: Command failed on smithi135 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early fs volume rm volume_3124 --yes-i-really-mean-it'

From: /ceph/teuthology-archive/pdonnell-2020-02-15_16:51:06-fs-wip-pdonnell-testing-20200215.033325-distro-basic-smithi/4767684/teuthology.log

Caused by: b6f42a06617a9a9f3ea7961c256a1033e05eecd2

The fix for this shouldn't need to be backported, I would think.
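For context, the failure mode here is a plain Python AttributeError: a call site still references a method that no longer exists on the mgr module object, and the mgr surfaces the exception to the CLI as EINVAL (status 22). A minimal sketch of the pattern, using hypothetical class and function names (not the actual ceph-mgr code):

```python
# Hypothetical sketch of the failure mode: a caller still invokes a
# method that was removed from the module class by a later commit.

class Module:
    """Stand-in for a mgr module whose remove_mds() method was removed."""
    def remove_filesystem(self, name):
        return "removed {}".format(name)

def handle_volume_rm(module, volname):
    # The stale call site still expects remove_mds(); since the attribute
    # no longer exists, Python raises AttributeError, which is reported
    # back to the CLI as EINVAL (errno 22).
    try:
        module.remove_mds(volname)
        return (0, "")
    except AttributeError as e:
        return (22, "Error EINVAL: {}".format(e))

code, msg = handle_volume_rm(Module(), "volume_3124")
print(code, msg)  # 22 Error EINVAL: 'Module' object has no attribute 'remove_mds'
```

The printed message matches the one in the teuthology log above, and the status 22 matches the "got remote process result: 22" line.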

History

#1 Updated by Ramana Raja 7 months ago

  • Status changed from New to In Progress
  • Pull request ID set to 33384

#2 Updated by Sage Weil 7 months ago

  • Status changed from In Progress to Resolved
  • Pull request ID changed from 33384 to 33359
