Bug #46597
closed
qa: Fs cleanup fails with a traceback
Status:
Resolved
Priority:
Normal
Assignee:
Category:
Testing
Target version:
v16.0.0
% Done:
0%
Source:
Development
Tags:
Backport:
octopus,nautilus
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
qa-suite
Labels (FS):
qa
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
When two consecutive tests are run on a single-node local teuthology setup, the fs cleanup fails with the following traceback.
2020-07-17 11:49:46,749.749 INFO:__main__:Starting test: test_2 (tasks.cephfs.test_volumes1.TestVolumes)
2020-07-17 11:49:46,749.749 INFO:__main__:Running ['./bin/ceph', 'log', 'Starting test tasks.cephfs.test_volumes1.TestVolumes.test_2']
2020-07-17 11:49:47,756.756 INFO:__main__:Running ['./bin/ceph', 'osd', 'dump', '--format=json-pretty']
2020-07-17 11:49:48,008.008 INFO:__main__:Running ['./bin/ceph', 'fs', 'dump', '--format=json']
2020-07-17 11:49:48,274.274 INFO:__main__:Running ['./bin/ceph', 'fs', 'fail', 'cephfs']
2020-07-17 11:49:48,776.776 INFO:__main__:Running ['./bin/ceph', 'fs', 'dump', '--format=json']
2020-07-17 11:49:49,036.036 INFO:__main__:Running ['./bin/ceph', 'fs', 'rm', 'cephfs', '--yes-i-really-mean-it']
2020-07-17 11:49:49,787.787 INFO:__main__:Running ['./bin/ceph', 'osd', 'pool', 'delete', 'cephfs_metadata', 'cephfs_metadata', '--yes-i-really-really-mean-it']
2020-07-17T11:49:49.886+0530 7fba727e8700 -1 WARNING: all dangerous and experimental features are enabled.
2020-07-17T11:49:49.910+0530 7fba727e8700 -1 WARNING: all dangerous and experimental features are enabled.
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
2020-07-17 11:49:50,032.032 INFO:__main__:test_2 (tasks.cephfs.test_volumes1.TestVolumes) ... ERROR
2020-07-17 11:49:50,032.032 ERROR:__main__:Traceback (most recent call last):
  File "/root/sandbox/shyam-ceph/ceph/qa/tasks/cephfs/test_volumes1.py", line 227, in setUp
    super(TestVolumes, self).setUp()
  File "/root/sandbox/shyam-ceph/ceph/qa/tasks/cephfs/cephfs_test_case.py", line 100, in setUp
    self.mds_cluster.delete_all_filesystems()
  File "/root/sandbox/shyam-ceph/ceph/qa/tasks/cephfs/filesystem.py", line 362, in delete_all_filesystems
    '--yes-i-really-really-mean-it')
  File "../qa/tasks/vstart_runner.py", line 873, in raw_cluster_cmd
    list(args), **kwargs, stdout=StringIO())
  File "../qa/tasks/vstart_runner.py", line 354, in run
    return self._do_run(**kwargs)
  File "../qa/tasks/vstart_runner.py", line 421, in _do_run
    proc.wait()
  File "../qa/tasks/vstart_runner.py", line 205, in wait
    raise CommandFailedError(self.args, self.exitstatus)
teuthology.exceptions.CommandFailedError: Command failed with status 1: ['./bin/ceph', 'osd', 'pool', 'delete', 'cephfs_metadata', 'cephfs_metadata', '--yes-i-really-really-mean-it']
2020-07-17 11:49:50,032.032 ERROR:__main__:Error in test 'test_2 (tasks.cephfs.test_volumes1.TestVolumes)', going interactive
Ceph test interactive mode, use ctx to interact with the cluster, press control-D to exit...
>>>
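For reference, the EPERM above comes from the monitor refusing to delete a pool while mon_allow_pool_delete is false. A minimal sketch of the kind of guard the cleanup path needs, driving the same CLI commands the test runner invokes (the delete_fs_and_pools helper and the direct subprocess calls are illustrative assumptions, not the actual change in the linked pull request):

    import subprocess

    def delete_fs_and_pools(fs_name, pools, ceph_bin='./bin/ceph'):
        """Illustrative teardown of a filesystem and its pools on a vstart cluster."""
        def ceph(*args):
            # Hypothetical wrapper: run a ceph CLI command and raise on failure.
            subprocess.check_call([ceph_bin] + list(args))

        # Pool deletion is refused with EPERM unless mon_allow_pool_delete is
        # enabled, so turn it on before attempting the deletes below.
        ceph('config', 'set', 'mon', 'mon_allow_pool_delete', 'true')

        ceph('fs', 'fail', fs_name)
        ceph('fs', 'rm', fs_name, '--yes-i-really-mean-it')
        for pool in pools:
            # e.g. pools = ['cephfs_metadata', 'cephfs_data']
            ceph('osd', 'pool', 'delete', pool, pool,
                 '--yes-i-really-really-mean-it')

Enabling mon_allow_pool_delete first is what the error message itself asks for; without it, delete_all_filesystems fails on the first pool delete exactly as in the traceback.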
Updated by Kotresh Hiremath Ravishankar almost 4 years ago
- Status changed from New to In Progress
Updated by Kotresh Hiremath Ravishankar almost 4 years ago
- Status changed from In Progress to Fix Under Review
- Pull request ID set to 36155
Updated by Patrick Donnelly over 3 years ago
- Status changed from Fix Under Review to Pending Backport
- Target version set to v16.0.0
- Source set to Development
- Backport set to octopus,nautilus
Updated by Nathan Cutler over 3 years ago
- Copied to Backport #46947: octopus: qa: Fs cleanup fails with a traceback added
Updated by Nathan Cutler over 3 years ago
- Copied to Backport #46948: nautilus: qa: Fs cleanup fails with a traceback added
Updated by Nathan Cutler over 3 years ago
- Status changed from Pending Backport to Resolved
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".