Bug #57913
Thrashosd: timeout 120 ceph --cluster ceph osd pool rm unique_pool_2 unique_pool_2 --yes-i-really-really-mean-it
Status:
Duplicate
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
/a/yuriw-2022-10-12_16:24:50-rados-wip-yuri8-testing-2022-10-12-0718-quincy-distro-default-smithi/7063868/
rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-radosbench}
Failed to delete the pool while thrashing OSDs:
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_35ea38a9840006713a3d42472a2c536a25e88c15/teuthology/run_tasks.py", line 103, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_35ea38a9840006713a3d42472a2c536a25e88c15/teuthology/run_tasks.py", line 82, in run_one_task
    return task(**kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_35ea38a9840006713a3d42472a2c536a25e88c15/teuthology/task/full_sequential.py", line 37, in task
    mgr.__exit__(*exc_info)
  File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__
    next(self.gen)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_18f1152e5abc61579d870e05168b42e885f4a242/qa/tasks/radosbench.py", line 144, in task
    manager.remove_pool(pool)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_18f1152e5abc61579d870e05168b42e885f4a242/qa/tasks/ceph_manager.py", line 2180, in remove_pool
    "--yes-i-really-really-mean-it")
  File "/home/teuthworker/src/github.com_ceph_ceph-c_18f1152e5abc61579d870e05168b42e885f4a242/qa/tasks/ceph_manager.py", line 1615, in raw_cluster_cmd
    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
  File "/home/teuthworker/src/github.com_ceph_ceph-c_18f1152e5abc61579d870e05168b42e885f4a242/qa/tasks/ceph_manager.py", line 1606, in run_cluster_cmd
    return self.controller.run(**kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_35ea38a9840006713a3d42472a2c536a25e88c15/teuthology/orchestra/remote.py", line 525, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_35ea38a9840006713a3d42472a2c536a25e88c15/teuthology/orchestra/run.py", line 455, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_35ea38a9840006713a3d42472a2c536a25e88c15/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_35ea38a9840006713a3d42472a2c536a25e88c15/teuthology/orchestra/run.py", line 183, in _raise_for_status
    node=self.hostname, label=self.label
teuthology.exceptions.CommandFailedError: Command failed on smithi149 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd pool rm unique_pool_2 unique_pool_2 --yes-i-really-really-mean-it'
2022-10-14T00:44:40.848 ERROR:teuthology.run_tasks: Sentry event: https://sentry.ceph.com/organizations/ceph/?query=324cdf691ae74950acb33a1d061e259b
Updated by Kamoltat (Junior) Sirivadhna over 1 year ago
- Subject changed from Thrashosdtimeout 120 ceph --cluster ceph osd pool rm unique_pool_2 unique_pool_2 --yes-i-really-really-mean-it to Thrashosd: timeout 120 ceph --cluster ceph osd pool rm unique_pool_2 unique_pool_2 --yes-i-really-really-mean-it
Updated by Radoslaw Zarzynski over 1 year ago
- Status changed from New to Duplicate
In the teuthology log:
FAILED ceph_assert(rollback_info_trimmed_to == head)
Updated by Radoslaw Zarzynski over 1 year ago
- Is duplicate of Bug #55141: thrashers/fastread: assertion failure: rollback_info_trimmed_to == head added