Bug #42022
mgr/volumes: "Timed out after 30s waiting for ./volumes/_deleting to become empty from 0"
Description
2019-09-22T12:58:08.384 INFO:tasks.cephfs_test_runner:======================================================================
2019-09-22T12:58:08.384 INFO:tasks.cephfs_test_runner:ERROR: test_subvolume_snapshot_create_and_rm (tasks.cephfs.test_volumes.TestVolumes)
2019-09-22T12:58:08.384 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2019-09-22T12:58:08.384 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2019-09-22T12:58:08.385 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20190922.042910/qa/tasks/cephfs/test_volumes.py", line 436, in test_subvolume_snapshot_create_and_rm
2019-09-22T12:58:08.385 INFO:tasks.cephfs_test_runner:    self._wait_for_trash_empty()
2019-09-22T12:58:08.385 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20190922.042910/qa/tasks/cephfs/test_volumes.py", line 84, in _wait_for_trash_empty
2019-09-22T12:58:08.385 INFO:tasks.cephfs_test_runner:    self.mount_a.wait_for_dir_empty(trashdir)
2019-09-22T12:58:08.385 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20190922.042910/qa/tasks/cephfs/mount.py", line 267, in wait_for_dir_empty
2019-09-22T12:58:08.385 INFO:tasks.cephfs_test_runner:    i, dirname, self.client_id))
2019-09-22T12:58:08.386 INFO:tasks.cephfs_test_runner:RuntimeError: Timed out after 30s waiting for ./volumes/_deleting to become empty from 0
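For context, the failure comes from a polling helper: the test removes a subvolume, which moves it into the ./volumes/_deleting trash directory, and then polls that directory until it is empty. Below is a minimal sketch of that polling pattern; the function name and message mirror the traceback, but the body is a simplified local-filesystem stand-in (the real qa helper in mount.py lists the directory over the test client's mount, not locally).

```python
import os
import time


def wait_for_dir_empty(dirname, timeout=30, interval=1):
    """Poll dirname until it has no entries; raise RuntimeError on timeout.

    Simplified sketch of the qa helper seen in the traceback above;
    timeout/interval defaults are assumptions for illustration.
    """
    elapsed = 0
    n = len(os.listdir(dirname))
    while elapsed < timeout:
        n = len(os.listdir(dirname))
        if n == 0:
            return
        time.sleep(interval)
        elapsed += interval
    raise RuntimeError(
        "Timed out after {0}s waiting for {1} to become empty from {2}".format(
            timeout, dirname, n))
```

The timeout fires when the mgr's asynchronous purge of the trashed subvolume takes longer than 30 seconds, which is why the comments below focus on whether the purge was merely slow or actually stuck.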
From: /ceph/teuthology-archive/pdonnell-2019-09-22_10:05:30-fs-wip-pdonnell-testing-20190922.042910-distro-basic-smithi/4326053/teuthology.log
Updated by Rishabh Dave over 4 years ago
- Assignee changed from Ramana Raja to Rishabh Dave
Updated by Rishabh Dave over 4 years ago
Couldn't reproduce this locally and on teuthology. On teuthology the test passed -
2019-10-21T14:14:55.548 INFO:tasks.cephfs_test_runner:Starting test: test_subvolume_snapshot_create_and_rm (tasks.cephfs.test_volumes.TestVolumes)
2019-10-21T14:14:55.548 INFO:teuthology.orchestra.run.smithi133:Running:
2019-10-21T14:14:55.549 INFO:teuthology.orchestra.run.smithi133:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph log 'Starting test tasks.cephfs.test_volumes.TestVolumes.test_subvolume_snapshot_create_and_rm'
Locally, I did see an issue, but it was quite different. Since the description is pretty brief, is there anything more I should know while trying to reproduce this issue?
Link to the teuthology jobs - http://pulpito.front.sepia.ceph.com/rishabh-2019-10-21_12:48:04-fs-wip-rishabh-add-test-for-acls-ubuntu-fix-distro-basic-smithi/.
Updated by Rishabh Dave over 4 years ago
- Status changed from New to Need More Info
Updated by Patrick Donnelly over 4 years ago
Rishabh Dave wrote:
Couldn't reproduce this locally and on teuthology. On teuthology the test passed -
[...]
Locally, I did see an issue, but it was quite different. Since the description is pretty brief, is there anything more I should know while trying to reproduce this issue?
Link to the teuthology jobs - http://pulpito.front.sepia.ceph.com/rishabh-2019-10-21_12:48:04-fs-wip-rishabh-add-test-for-acls-ubuntu-fix-distro-basic-smithi/.
Have you looked at the mgr logs to see why the delay occurred? Try to use the available information in the teuthology job before trying to reproduce.