Bug #53601
vstart_runner: Running test_data_scan test locally fails with tracebacks
Status: Closed
% Done: 0%
Regression: No
Severity: 3 - minor
Component(FS): qa-suite
Labels (FS): qa
Description
The following tracebacks are seen:
1.
2021-12-14 13:26:31 INFO:__main__:
======================================================================
ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
That when injecting a dentry into a fragmented directory, we put it in the right fragment.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/root/sandbox/kotresh-ceph/ceph/qa/tasks/cephfs/test_data_scan.py", line 464, in test_fragmented_injection
    keys = self._dirfrag_keys(frag_obj_id)
  File "/root/sandbox/kotresh-ceph/ceph/qa/tasks/cephfs/test_data_scan.py", line 416, in _dirfrag_keys
    keys_str = self.fs.radosmo(["listomapkeys", object_id], stdout=StringIO())
  File "/root/sandbox/kotresh-ceph/ceph/qa/tasks/cephfs/filesystem.py", line 1116, in radosmo
    return self.radosm(*args, **kwargs, stdout=stdout).stdout.getvalue()
  File "/root/sandbox/kotresh-ceph/ceph/qa/tasks/cephfs/filesystem.py", line 1109, in radosm
    return self.rados(*args, **kwargs, pool=self.get_metadata_pool_name())
  File "/root/sandbox/kotresh-ceph/ceph/qa/tasks/cephfs/filesystem.py", line 1102, in rados
    return self.mon_manager.do_rados(*args, **kwargs)
  File "/root/sandbox/kotresh-ceph/ceph/qa/tasks/ceph_manager.py", line 1728, in do_rados
    self.cluster,
AttributeError: 'LocalCephManager' object has no attribute 'cluster'
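The first traceback is a plain missing-attribute failure: the base-class `do_rados` path reads `self.cluster`, which the `LocalCephManager` used by vstart_runner never assigns. A minimal sketch of that failure mode (class bodies are illustrative, not the actual Ceph code):

```python
# Sketch: a base-class method depends on an attribute that only the
# remote (teuthology) manager sets; the local subclass never sets it.
class CephManager:
    def __init__(self):
        self.cluster = "ceph"  # set on the teuthology path

    def do_rados(self, cmd):
        # Reads self.cluster unconditionally.
        return ["rados", "--cluster", self.cluster] + cmd


class LocalCephManager(CephManager):
    def __init__(self):
        # The local manager skips the parent __init__,
        # so self.cluster never exists on this instance.
        pass


mgr = LocalCephManager()
try:
    mgr.do_rados(["listomapkeys", "10000000000.01000000"])
except AttributeError as e:
    print(e)  # 'LocalCephManager' object has no attribute 'cluster'
```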
2.
2021-12-14 13:33:01 INFO:__main__:
ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
That when injecting a dentry into a fragmented directory, we put it in the right fragment.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/root/sandbox/kotresh-ceph/ceph/qa/tasks/cephfs/test_data_scan.py", line 464, in test_fragmented_injection
    keys = self._dirfrag_keys(frag_obj_id)
  File "/root/sandbox/kotresh-ceph/ceph/qa/tasks/cephfs/test_data_scan.py", line 416, in _dirfrag_keys
    keys_str = self.fs.radosmo(["listomapkeys", object_id], stdout=StringIO())
  File "/root/sandbox/kotresh-ceph/ceph/qa/tasks/cephfs/filesystem.py", line 1116, in radosmo
    return self.radosm(*args, **kwargs, stdout=stdout).stdout.getvalue()
  File "/root/sandbox/kotresh-ceph/ceph/qa/tasks/cephfs/filesystem.py", line 1109, in radosm
    return self.rados(*args, **kwargs, pool=self.get_metadata_pool_name())
  File "/root/sandbox/kotresh-ceph/ceph/qa/tasks/cephfs/filesystem.py", line 1102, in rados
    return self.mon_manager.do_rados(*args, **kwargs)
  File "/root/sandbox/kotresh-ceph/ceph/qa/tasks/ceph_manager.py", line 1735, in do_rados
    proc = remote.run(
  File "../qa/tasks/vstart_runner.py", line 410, in run
    return self._do_run(**kwargs)
  File "../qa/tasks/vstart_runner.py", line 480, in _do_run
    proc.wait()
  File "../qa/tasks/vstart_runner.py", line 221, in wait
    raise CommandFailedError(self.args, self.exitstatus)
teuthology.exceptions.CommandFailedError: Command failed with status 127: ['None/archive/coverage', 'rados', '--cluster', 'ceph', '--pool', 'cephfs_metadata', 'listomapkeys', '10000000000.01000000']
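Exit status 127 is the shell's "command not found" code: with no archive directory set on a local vstart run, the coverage-wrapper prefix is rendered as the literal string 'None/archive/coverage', which is not an executable. The status can be reproduced with any nonexistent command path (the path below is deliberately bogus):

```python
import subprocess

# Running a nonexistent executable through the shell returns 127,
# the same status reported in the CommandFailedError above.
proc = subprocess.run(
    "None/archive/coverage rados --version",
    shell=True,
    capture_output=True,
)
print(proc.returncode)  # 127
```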
3.
2021-12-14 15:13:34 ERROR:__main__:
Traceback (most recent call last):
  File "/root/sandbox/kotresh-ceph/ceph/qa/tasks/cephfs/test_data_scan.py", line 580, in test_rebuild_linkage
    self.fs.radosm(["setomapval", dirfrag2_oid, file1_key], stdin=BytesIO(file1_omap_data))
  File "/root/sandbox/kotresh-ceph/ceph/qa/tasks/cephfs/filesystem.py", line 1109, in radosm
    return self.rados(*args, **kwargs, pool=self.get_metadata_pool_name())
  File "/root/sandbox/kotresh-ceph/ceph/qa/tasks/cephfs/filesystem.py", line 1102, in rados
    return self.mon_manager.do_rados(*args, **kwargs)
  File "/root/sandbox/kotresh-ceph/ceph/qa/tasks/ceph_manager.py", line 1735, in do_rados
    proc = remote.run(
  File "../qa/tasks/vstart_runner.py", line 410, in run
    return self._do_run(**kwargs)
  File "../qa/tasks/vstart_runner.py", line 472, in _do_run
    subproc.stdin.write(stdin)
TypeError: a bytes-like object is required, not '_io.BytesIO'
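The third traceback is a type mismatch: the test passes `stdin=BytesIO(...)`, but `vstart_runner.py` hands that value straight to the subprocess pipe, whose `write()` accepts only bytes-like objects. A minimal sketch of the mismatch and the unwrapping that resolves it (uses `cat` as a stand-in for the rados command):

```python
from io import BytesIO
import subprocess

payload = BytesIO(b"omap-value")

# A subprocess stdin pipe is a binary file object; writing a BytesIO
# wrapper to it raises the TypeError seen in the traceback.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)
try:
    proc.stdin.write(payload)
except TypeError as e:
    err = str(e)  # "a bytes-like object is required, not '_io.BytesIO'"

# Unwrapping the buffer (getvalue() or read()) gives the pipe real bytes.
out, _ = proc.communicate(payload.getvalue())
print(out)  # b'omap-value'
```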
Updated by Kotresh Hiremath Ravishankar over 2 years ago
- Status changed from New to Fix Under Review
- Assignee set to Kotresh Hiremath Ravishankar
- Pull request ID set to 44305
Updated by Venky Shankar almost 2 years ago
- Status changed from Fix Under Review to Resolved