Bug #46597

qa: Fs cleanup fails with a traceback

Added by Kotresh Hiremath Ravishankar 19 days ago. Updated 4 days ago.

Status:
Pending Backport
Priority:
Normal
Category:
Testing
Target version:
% Done:

0%

Source:
Development
Tags:
Backport:
octopus,nautilus
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
qa-suite
Labels (FS):
qa
Pull request ID:
Crash signature:

Description

When two consecutive tests are run on a single-node local teuthology setup, the fs cleanup fails with the following traceback.

2020-07-17 11:49:46,749.749 INFO:__main__:Starting test: test_2 (tasks.cephfs.test_volumes1.TestVolumes)
2020-07-17 11:49:46,749.749 INFO:__main__:Running ['./bin/ceph', 'log', 'Starting test tasks.cephfs.test_volumes1.TestVolumes.test_2']
2020-07-17 11:49:47,756.756 INFO:__main__:Running ['./bin/ceph', 'osd', 'dump', '--format=json-pretty']
2020-07-17 11:49:48,008.008 INFO:__main__:Running ['./bin/ceph', 'fs', 'dump', '--format=json']
2020-07-17 11:49:48,274.274 INFO:__main__:Running ['./bin/ceph', 'fs', 'fail', 'cephfs']
2020-07-17 11:49:48,776.776 INFO:__main__:Running ['./bin/ceph', 'fs', 'dump', '--format=json']
2020-07-17 11:49:49,036.036 INFO:__main__:Running ['./bin/ceph', 'fs', 'rm', 'cephfs', '--yes-i-really-mean-it']
2020-07-17 11:49:49,787.787 INFO:__main__:Running ['./bin/ceph', 'osd', 'pool', 'delete', 'cephfs_metadata', 'cephfs_metadata', '--yes-i-really-really-mean-it']
2020-07-17T11:49:49.886+0530 7fba727e8700 -1 WARNING: all dangerous and experimental features are enabled.
2020-07-17T11:49:49.910+0530 7fba727e8700 -1 WARNING: all dangerous and experimental features are enabled.
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
2020-07-17 11:49:50,032.032 INFO:__main__:test_2 (tasks.cephfs.test_volumes1.TestVolumes) ... ERROR
2020-07-17 11:49:50,032.032 ERROR:__main__:Traceback (most recent call last):
  File "/root/sandbox/shyam-ceph/ceph/qa/tasks/cephfs/test_volumes1.py", line 227, in setUp
    super(TestVolumes, self).setUp()
  File "/root/sandbox/shyam-ceph/ceph/qa/tasks/cephfs/cephfs_test_case.py", line 100, in setUp
    self.mds_cluster.delete_all_filesystems()
  File "/root/sandbox/shyam-ceph/ceph/qa/tasks/cephfs/filesystem.py", line 362, in delete_all_filesystems
    '--yes-i-really-really-mean-it')
  File "../qa/tasks/vstart_runner.py", line 873, in raw_cluster_cmd
    list(args), **kwargs, stdout=StringIO())
  File "../qa/tasks/vstart_runner.py", line 354, in run
    return self._do_run(**kwargs)
  File "../qa/tasks/vstart_runner.py", line 421, in _do_run
    proc.wait()
  File "../qa/tasks/vstart_runner.py", line 205, in wait
    raise CommandFailedError(self.args, self.exitstatus)
teuthology.exceptions.CommandFailedError: Command failed with status 1: ['./bin/ceph', 'osd', 'pool', 'delete', 'cephfs_metadata', 'cephfs_metadata', '--yes-i-really-really-mean-it']

2020-07-17 11:49:50,032.032 ERROR:__main__:Error in test 'test_2 (tasks.cephfs.test_volumes1.TestVolumes)', going interactive
Ceph test interactive mode, use ctx to interact with the cluster, press control-D to exit...
>>>
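As the EPERM in the log shows, the monitor refuses `osd pool delete` unless the `mon_allow_pool_delete` option is enabled, so `delete_all_filesystems()` fails in `setUp`. On a local vstart cluster, a manual workaround (a sketch only; the proper fix is tracked in the pull request below, and the option name is the one quoted in the error message) is to enable the option before re-running the cleanup:

```shell
# Workaround sketch for a local vstart cluster: allow pool deletion cluster-wide.
# Run from the build directory, matching the ./bin/ceph paths in the log above.
./bin/ceph config set mon mon_allow_pool_delete true

# The cleanup's pool delete should then succeed instead of returning EPERM:
./bin/ceph osd pool delete cephfs_metadata cephfs_metadata \
    --yes-i-really-really-mean-it
```

This is a cluster configuration change, not a test fix; the qa suite itself needs to set the option (or tolerate its absence) so consecutive runs clean up reliably.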

History

#1 Updated by Kotresh Hiremath Ravishankar 19 days ago

  • Status changed from New to In Progress

#2 Updated by Kotresh Hiremath Ravishankar 16 days ago

  • Status changed from In Progress to Fix Under Review
  • Pull request ID set to 36155

#3 Updated by Patrick Donnelly 4 days ago

  • Status changed from Fix Under Review to Pending Backport
  • Target version set to v16.0.0
  • Source set to Development
  • Backport set to octopus,nautilus
