Bug #48680 (open): mds: scrubbing stuck "scrub active (0 inodes in the stack)"

Added by Patrick Donnelly over 3 years ago. Updated 23 days ago.

Status: New
Priority: High
Category: -
Target version: -
% Done: 0%
Source: Q/A
Tags:
Backport: quincy, pacific
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS): MDS
Labels (FS): qa-failure
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2020-12-18T02:16:25.467 INFO:teuthology.orchestra.run.smithi193.stdout:{
2020-12-18T02:16:25.467 INFO:teuthology.orchestra.run.smithi193.stdout:    "status": "scrub active (0 inodes in the stack)",
2020-12-18T02:16:25.468 INFO:teuthology.orchestra.run.smithi193.stdout:    "scrubs": {
2020-12-18T02:16:25.468 INFO:teuthology.orchestra.run.smithi193.stdout:        "e366f5ff-325e-460c-b4c7-e6095d429c92": {
2020-12-18T02:16:25.468 INFO:teuthology.orchestra.run.smithi193.stdout:            "path": "/",
2020-12-18T02:16:25.468 INFO:teuthology.orchestra.run.smithi193.stdout:            "tag": "e366f5ff-325e-460c-b4c7-e6095d429c92",
2020-12-18T02:16:25.469 INFO:teuthology.orchestra.run.smithi193.stdout:            "options": "recursive,force" 
2020-12-18T02:16:25.469 INFO:teuthology.orchestra.run.smithi193.stdout:        }
2020-12-18T02:16:25.469 INFO:teuthology.orchestra.run.smithi193.stdout:    }
2020-12-18T02:16:25.469 INFO:teuthology.orchestra.run.smithi193.stdout:}
2020-12-18T02:16:25.470 INFO:tasks.fwd_scrub.fs.[cephfs]:scrub status for tag:e366f5ff-325e-460c-b4c7-e6095d429c92 - {'path': '/', 'tag': 'e366f5ff-325e-460c-b4c7-e6095d429c92', 'options': 'recursive,force'}
2020-12-18T02:16:25.470 ERROR:tasks.fwd_scrub.fs.[cephfs]:exception:
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20201217.205941/qa/tasks/fwd_scrub.py", line 32, in _run
    self.do_scrub()
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20201217.205941/qa/tasks/fwd_scrub.py", line 50, in do_scrub
    self.wait_until_scrub_complete(tag)
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20201217.205941/qa/tasks/fwd_scrub.py", line 55, in wait_until_scrub_complete
    while proceed():
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/contextutil.py", line 133, in __call__
    raise MaxWhileTries(error_msg)
teuthology.exceptions.MaxWhileTries: reached maximum tries (10) after waiting for 300 seconds
2020-12-18T02:16:25.470 ERROR:tasks.fwd_scrub.fs.[cephfs]:exception:
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20201217.205941/qa/tasks/fwd_scrub.py", line 100, in _run
    self.do_scrub()
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20201217.205941/qa/tasks/fwd_scrub.py", line 144, in do_scrub
    raise RuntimeError('error during scrub thrashing')
RuntimeError: error during scrub thrashing
2020-12-18T02:16:28.197 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.6 is failed for ~280s
2020-12-18T02:16:28.198 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.9 is failed for ~266s
2020-12-18T02:16:28.198 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.5 is failed for ~259s
2020-12-18T02:16:28.198 INFO:tasks.daemonwatchdog.daemon_watchdog:thrasher.fs.[cephfs] failed
2020-12-18T02:16:28.198 INFO:tasks.daemonwatchdog.daemon_watchdog:BARK! unmounting mounts and killing all daemons

From: /ceph/teuthology-archive/pdonnell-2020-12-17_23:13:08-fs-wip-pdonnell-testing-20201217.205941-distro-basic-smithi/5715913/teuthology.log
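
For context, the failing QA task starts a forward scrub and then polls "scrub status" until the tag it started is no longer listed; the MaxWhileTries above means the tag was still reported as active after every retry. Below is a minimal, standalone sketch of that polling pattern, not the actual qa/tasks/fwd_scrub.py code: it assumes a working `ceph` CLI with admin access to a file system named "cephfs", uses the documented `ceph tell mds.<fs>:0 scrub start|status` commands (whose JSON matches the output above), and the helper names, tries, and sleep values are illustrative.

#!/usr/bin/env python3
# Sketch only (assumptions noted above), not qa/tasks/fwd_scrub.py:
# start a recursive forced scrub, then poll "scrub status" until the tag
# disappears from the "scrubs" map or we give up, mirroring MaxWhileTries.

import json
import subprocess
import time

def mds_tell(fs_name, *args):
    """Run `ceph tell mds.<fs>:0 <args...>` and return the parsed JSON reply."""
    out = subprocess.check_output(["ceph", "tell", f"mds.{fs_name}:0", *args])
    return json.loads(out)

def wait_until_scrub_complete(fs_name, tag, tries=10, sleep=30):
    """Poll scrub status until `tag` is no longer an active scrub."""
    for attempt in range(tries):
        status = mds_tell(fs_name, "scrub", "status")
        if tag not in status.get("scrubs", {}):
            return  # scrub for this tag finished (or was aborted)
        print(f"[{attempt + 1}/{tries}] {status['status']}")
        time.sleep(sleep)
    # Analogous to teuthology.exceptions.MaxWhileTries in the log above.
    raise RuntimeError(
        f"scrub tag {tag} still active after {tries * sleep} seconds")

if __name__ == "__main__":
    start = mds_tell("cephfs", "scrub", "start", "/", "recursive,force")
    wait_until_scrub_complete("cephfs", start["scrub_tag"])

With a scrub stuck as in this ticket, the sketch fails the same way the QA task did: the tag never leaves the "scrubs" map, so the loop exhausts its tries while "scrub status" keeps reporting "scrub active (0 inodes in the stack)".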


Related issues (2 open, 0 closed)

Related to CephFS - Bug #48773: qa: scrub does not complete (In Progress) - Kotresh Hiremath Ravishankar

Related to CephFS - Bug #62658: error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds (Pending Backport) - Milind Changire

