Bug #63211

closed

qa: error during scrub thrashing: reached maximum tries (31) after waiting

Added by Xiubo Li 7 months ago. Updated 7 months ago.

Status:
Closed
Priority:
Normal
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

https://pulpito.ceph.com/vshankar-2023-10-09_16:48:27-fs-wip-vshankar-testing-reef-20231009.131610-testing-default-smithi/7418649/

2023-10-09T23:25:21.544 DEBUG:teuthology.run_tasks:Unwinding manager fwd_scrub
2023-10-09T23:25:21.554 INFO:tasks.fwd_scrub:joining ForwardScrubbers
2023-10-09T23:25:21.554 ERROR:teuthology.run_tasks:Manager failed: fwd_scrub
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_teuthology_54e62bcbac4e53d9685e08328b790d3b20d71cae/teuthology/run_tasks.py", line 154, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python3.8/contextlib.py", line 120, in __exit__
    next(self.gen)
  File "/home/teuthworker/src/git.ceph.com_ceph-c_9a774796f0ecdd11a4b771de396e627aa1a187af/qa/tasks/fwd_scrub.py", line 164, in task
    stop_all_fwd_scrubbers(ctx.ceph[config['cluster']].thrashers)
  File "/home/teuthworker/src/git.ceph.com_ceph-c_9a774796f0ecdd11a4b771de396e627aa1a187af/qa/tasks/fwd_scrub.py", line 99, in stop_all_fwd_scrubbers
    raise RuntimeError(f"error during scrub thrashing: {thrasher.exception}")
RuntimeError: error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
2023-10-09T23:25:21.555 DEBUG:teuthology.run_tasks:Unwinding manager check-counter
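
The failure surfaces when the fwd_scrub manager unwinds: it joins the scrub thrashers and re-raises any exception a thrasher recorded while polling for scrub completion. A minimal sketch of that pattern, assuming a simplified stand-in for qa/tasks/fwd_scrub.py (ScrubThrasher, scrub_is_complete and the default tries/sleep values below are illustrative, not the real code):

import threading
import time

class MaxTriesReached(Exception):
    """Stand-in for the 'reached maximum tries (N) after waiting for S seconds' error."""

class ScrubThrasher(threading.Thread):
    # hypothetical, simplified forward-scrub thrasher
    def __init__(self, tries=31, sleep=30):
        super().__init__()
        self.tries = tries
        self.sleep = sleep
        self.exception = None  # recorded here, surfaced later by the manager

    def scrub_is_complete(self):
        # placeholder poll; the real thrasher queries the MDS scrub status
        return False

    def run(self):
        try:
            waited = 0
            for _ in range(self.tries):
                if self.scrub_is_complete():
                    return
                time.sleep(self.sleep)
                waited += self.sleep
            raise MaxTriesReached(
                f"reached maximum tries ({self.tries}) after waiting for {waited} seconds")
        except Exception as e:
            self.exception = e  # the thread never raises directly

def stop_all_fwd_scrubbers(thrashers):
    # on task unwind: join every thrasher and surface any stored failure,
    # which is what produces the RuntimeError in the traceback above
    for thrasher in thrashers:
        thrasher.join()
        if thrasher.exception is not None:
            raise RuntimeError(f"error during scrub thrashing: {thrasher.exception}")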

From mds.0, it keeps retrying to scrub fragset_t(11*):

remote/smithi112/log/81222138-66f7-11ee-8db6-212e2dc638e7/ceph-mds.b.log.gz

   -32> 2023-10-09T23:26:09.568+0000 7f99493a2700 20 mds.0.scrubstack scrub_dir_inode recursive mode, frags [111*,110*,101*,100*,011*,010*,001*,000*]
   -31> 2023-10-09T23:26:09.568+0000 7f99493a2700 20 mds.0.scrubstack scrub_dir_inode forward fragset_t(11*) to mds.2
   -30> 2023-10-09T23:26:09.568+0000 7f99493a2700  1 -- [v2:172.21.15.112:6838/1129462024,v1:172.21.15.112:6839/1129462024] send_to--> mds [v2:172.21.15.112:6836/548839104,v1:172.21.15.112:6837/548839104] -- mds_scrub(queue_dir 0x10000000420 fragset_t(11*) 41f44fce-6f24-4361-84df-1b51fd9028a1 force recursive) v1 -- ?+0 0x5640e0ac2fc0
   -29> 2023-10-09T23:26:09.568+0000 7f99493a2700  1 -- [v2:172.21.15.112:6838/1129462024,v1:172.21.15.112:6839/1129462024] --> [v2:172.21.15.112:6836/548839104,v1:172.21.15.112:6837/548839104] -- mds_scrub(queue_dir 0x10000000420 fragset_t(11*) 41f44fce-6f24-4361-84df-1b51fd9028a1 force recursive) v1 -- 0x5640e0ac2fc0 con 0x5640e000f800
   -28> 2023-10-09T23:26:09.568+0000 7f99493a2700  1 -- [v2:172.21.15.112:6838/1129462024,v1:172.21.15.112:6839/1129462024] <== mds.2 v2:172.21.15.112:6836/548839104 6063651 ==== mds_scrub(queue_dir_ack 0x10000000420 fragset_t() 41f44fce-6f24-4361-84df-1b51fd9028a1) v1 ==== 68+0+0 (crc 0 0 0) 0x5640e0ac2fc0 con 0x5640e000f800
   -27> 2023-10-09T23:26:09.568+0000 7f99493a2700 10 mds.0.scrubstack handle_scrub mds_scrub(queue_dir_ack 0x10000000420 fragset_t() 41f44fce-6f24-4361-84df-1b51fd9028a1) v1 from mds.2
   -26> 2023-10-09T23:26:09.568+0000 7f99493a2700 20 mds.0.scrubstack kick_off_scrubs: state=RUNNING
   -25> 2023-10-09T23:26:09.568+0000 7f99493a2700 20 mds.0.scrubstack kick_off_scrubs entering with 0 in progress and 1 in the stack

But on mds.2, it couldn't find the fragset_t(11*) and returned directly:

remote/smithi112/log/81222138-66f7-11ee-8db6-212e2dc638e7/ceph-mds.b.log.gz

2023-10-09T23:10:25.152+0000 7fc687392700 10 mds.2.scrubstack handle_scrub mds_scrub(queue_dir 0x10000000420 fragset_t(11*) 41f44fce-6f24-4361-84df-1b51fd9028a1 force recursive) v1 from mds.0
2023-10-09T23:10:25.152+0000 7fc687392700 10 mds.2.scrubstack handle_scrub no frag 11*
2023-10-09T23:10:25.152+0000 7fc687392700  1 -- [v2:172.21.15.112:6836/548839104,v1:172.21.15.112:6837/548839104] send_to--> mds [v2:172.21.15.112:6838/1129462024,v1:172.21.15.112:6839/1129462024] -- mds_scrub(queue_dir_ack 0x10000000420 fragset_t() 41f44fce-6f24-4361-84df-1b51fd9028a1) v1 -- ?+0 0x560343d87880
2023-10-09T23:10:25.152+0000 7fc687392700  1 -- [v2:172.21.15.112:6836/548839104,v1:172.21.15.112:6837/548839104] --> [v2:172.21.15.112:6838/1129462024,v1:172.21.15.112:6839/1129462024] -- mds_scrub(queue_dir_ack 0x10000000420 fragset_t() 41f44fce-6f24-4361-84df-1b51fd9028a1) v1 -- 0x560343d87880 con 0x5603438c6400

It seems the MDS doesn't handle this corner case correctly?
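
To make the suspected loop concrete, here is a tiny illustrative simulation in Python (the real logic lives in the MDS scrub stack in C++; every name below is made up) of what the two excerpts suggest: mds.0 forwards 11* to mds.2, mds.2 cannot find an exact 11* dirfrag (possibly because it only holds the split children such as 110* and 111*) and acks with an empty fragset, and mds.0 keeps the frag on its stack and forwards it again on the next pass, so the scrub never completes and the thrasher eventually hits its maximum tries:

# illustrative simulation only, not MDS code
def peer_handle_queue_dir(peer_frags, requested_frag):
    """mds.2 side: queue the frag if it has it, otherwise ack with an empty set."""
    if requested_frag in peer_frags:
        return {requested_frag}   # queue_dir_ack carrying the queued frag
    return set()                  # "no frag 11*" -> empty fragset in the ack

def originator_scrub_pass(pending_frags, peer_frags):
    """mds.0 side: forward each non-local frag; drop it only if the peer queued it."""
    still_pending = set()
    for frag in pending_frags:
        acked = peer_handle_queue_dir(peer_frags, frag)
        if not acked:
            # empty ack: the frag stays on the scrub stack and is forwarded again
            still_pending.add(frag)
    return still_pending

pending = {"11*"}
peer_has = {"110*", "111*"}       # assumed: the peer only knows the split children
for attempt in range(3):
    pending = originator_scrub_pass(pending, peer_has)
    print(f"pass {attempt + 1}: still pending {pending or 'nothing'}")
# every pass leaves 11* pending, which matches the endless
# queue_dir / queue_dir_ack exchange seen in the logs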

Actions #1

Updated by Venky Shankar 7 months ago

  • Assignee set to Milind Changire

This is most likely a duplicate of an issue that Milind already has a fix for.

Milind, please mark as dup and close.

Actions #3

Updated by Milind Changire 7 months ago

  • Status changed from New to Closed

Duplicate of #62658
