Bug #42835

qa: test_scrub_abort fails during check_task_status("idle")

Added by Ramana Raja over 4 years ago. Updated about 4 years ago.

Status: Resolved
Priority: Normal
% Done: 0%
Source: Q/A
Regression: No
Severity: 3 - minor
Description

2019-11-05T20:00:00.232 INFO:tasks.cephfs_test_runner:======================================================================
2019-11-05T20:00:00.233 INFO:tasks.cephfs_test_runner:ERROR: test_scrub_abort (tasks.cephfs.test_scrub_checks.TestScrubControls)
2019-11-05T20:00:00.233 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2019-11-05T20:00:00.233 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2019-11-05T20:00:00.233 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20191105.085703/qa/tasks/cephfs/test_scrub_checks.py", line 58, in test_scrub_abort
2019-11-05T20:00:00.234 INFO:tasks.cephfs_test_runner:    self._check_task_status("idle")
2019-11-05T20:00:00.234 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20191105.085703/qa/tasks/cephfs/test_scrub_checks.py", line 35, in _check_task_status
2019-11-05T20:00:00.234 INFO:tasks.cephfs_test_runner:    self.assertTrue(task_status['0'].startswith(expected_status))
2019-11-05T20:00:00.234 INFO:tasks.cephfs_test_runner:KeyError: '0'
2019-11-05T20:00:00.234 INFO:tasks.cephfs_test_runner:
2019-11-05T20:00:00.235 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2019-11-05T20:00:00.235 INFO:tasks.cephfs_test_runner:Ran 3 tests in 326.539s
2019-11-05T20:00:00.235 INFO:tasks.cephfs_test_runner:
2019-11-05T20:00:00.235 INFO:tasks.cephfs_test_runner:FAILED (errors=1)
2019-11-05T20:00:00.235 INFO:tasks.cephfs_test_runner:
2019-11-05T20:00:00.236 INFO:tasks.cephfs_test_runner:======================================================================
2019-11-05T20:00:00.236 INFO:tasks.cephfs_test_runner:ERROR: test_scrub_abort (tasks.cephfs.test_scrub_checks.TestScrubControls)
2019-11-05T20:00:00.236 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2019-11-05T20:00:00.236 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2019-11-05T20:00:00.236 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20191105.085703/qa/tasks/cephfs/test_scrub_checks.py", line 58, in test_scrub_abort
2019-11-05T20:00:00.236 INFO:tasks.cephfs_test_runner:    self._check_task_status("idle")
2019-11-05T20:00:00.237 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20191105.085703/qa/tasks/cephfs/test_scrub_checks.py", line 35, in _check_task_status
2019-11-05T20:00:00.237 INFO:tasks.cephfs_test_runner:    self.assertTrue(task_status['0'].startswith(expected_status))
2019-11-05T20:00:00.237 INFO:tasks.cephfs_test_runner:KeyError: '0'
2019-11-05T20:00:00.237 INFO:tasks.cephfs_test_runner:
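The traceback shows that `_check_task_status` indexes `task_status['0']` directly, so a `KeyError` is raised whenever the task-status map has no entry for MDS rank "0" yet (e.g. the mgr has not received or has already cleared the status). A minimal sketch of a more defensive check is below; the `get_task_status` callable and the polling parameters are hypothetical illustrations, not the actual fix merged via the pull request referenced later in this tracker.

```python
import time

def check_task_status(get_task_status, expected_status, rank="0",
                      timeout=30, interval=2):
    """Poll until the task status for `rank` starts with `expected_status`.

    `get_task_status` is assumed to return the parsed task-status map
    (a dict keyed by MDS rank, as the test's `task_status` variable is).
    Using dict.get() instead of direct indexing avoids the KeyError when
    the rank's entry has not appeared yet.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        task_status = get_task_status()
        status = task_status.get(rank)  # None if the rank key is absent
        if status is not None and status.startswith(expected_status):
            return True
        time.sleep(interval)
    return False
```

With a retry loop like this, a transiently missing key becomes a timeout-bounded wait rather than an immediate test error.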

See:
http://qa-proxy.ceph.com/teuthology/pdonnell-2019-11-05_14:35:44-fs-wip-pdonnell-testing-20191105.085703-distro-basic-smithi/4474742/teuthology.log
http://qa-proxy.ceph.com/teuthology/pdonnell-2019-11-05_14:35:44-fs-wip-pdonnell-testing-20191105.085703-distro-basic-smithi/4474576/teuthology.log

Also seen in the nautilus test branch:
http://qa-proxy.ceph.com/teuthology/yuriw-2019-11-13_20:47:25-fs-wip-yuri7-testing-2019-11-13-1707-nautilus-testing-basic-smithi/4505443/teuthology.log
http://qa-proxy.ceph.com/teuthology/yuriw-2019-11-13_20:47:25-fs-wip-yuri7-testing-2019-11-13-1707-nautilus-testing-basic-smithi/4505360/teuthology.log


Related issues

Duplicated by: CephFS - Bug #42917: ceph: task status not available (Duplicate)
Copied to: CephFS - Backport #44520: nautilus: qa: test_scrub_abort fails during check_task_status("idle") (Resolved)

History

#1 Updated by Patrick Donnelly over 4 years ago

  • Duplicated by Bug #42917: ceph: task status not available added

#2 Updated by Patrick Donnelly over 4 years ago

  • Assignee set to Venky Shankar

#3 Updated by Patrick Donnelly over 4 years ago

  • Blocks Backport #42738: nautilus: mgr/volumes: cleanup libcephfs handles on mgr shutdown added

#4 Updated by Patrick Donnelly over 4 years ago

  • Backport deleted (nautilus)

Nautilus backport will be tracked by #42738.

#5 Updated by Venky Shankar about 4 years ago

  • Status changed from New to Fix Under Review
  • Pull request ID set to 32657

#6 Updated by Patrick Donnelly about 4 years ago

  • Status changed from Fix Under Review to Resolved

#7 Updated by Patrick Donnelly about 4 years ago

  • Status changed from Resolved to Pending Backport
  • Backport set to nautilus

#9 Updated by Nathan Cutler about 4 years ago

  • Status changed from Pending Backport to Resolved
  • Backport deleted (nautilus)

Since this is a follow-on fix for #42299, let's handle the backporting there.

#10 Updated by Venky Shankar about 4 years ago

tracker #42738 seems to be incorrectly marked as a blocker for this tracker.

#11 Updated by Venky Shankar about 4 years ago

  • Blocks deleted (Backport #42738: nautilus: mgr/volumes: cleanup libcephfs handles on mgr shutdown)

#12 Updated by Venky Shankar about 4 years ago

  • Copied to Backport #44520: nautilus: qa: test_scrub_abort fails during check_task_status("idle") added

#13 Updated by Nathan Cutler about 4 years ago

> tracker #42738 seems to be incorrectly marked as a blocker for this tracker.

Wasn't it the other way around? This was marked as blocking #42738 - i.e. I thought this one was a follow-on fix for #42738?

#14 Updated by Venky Shankar about 4 years ago

Nathan Cutler wrote:

> tracker #42738 seems to be incorrectly marked as a blocker for this tracker.

> Wasn't it the other way around? This was marked as blocking #42738 - i.e. I thought this one was a follow-on fix for #42738?

I meant that the trackers are not related. I checked with Patrick (before he went on PTO) and he doesn't recall why these trackers were marked as dependent (probably a mistake).
