Bug #48805

mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"

Added by Patrick Donnelly 3 months ago. Updated 8 days ago.

Status:
Pending Backport
Priority:
Urgent
Category:
-
Target version:
v17.0.0
% Done:
0%

Source:
Q/A
Tags:
Backport:
pacific,octopus,nautilus
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
MDS
Labels (FS):
qa-failure, scrub, task(medium)
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2021-01-03T05:03:38.865 INFO:teuthology.orchestra.run.smithi191.stdout:2021-01-03T05:00:26.188079+0000 mds.a (mds.0) 124 : cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details

From: /ceph/teuthology-archive/teuthology-2021-01-03_03:15:02-fs-master-distro-basic-smithi/5751921/teuthology.log

I thought we had an issue for this already, but I could not find it. Milind is working on this.
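
For anyone triaging this, the damage records behind the warning can be listed with `ceph tell mds.<id> damage ls`. Below is a minimal Python sketch of polling that command; the helper and the field handling are illustrative, not taken from the teuthology suite:

    #!/usr/bin/env python3
    """Illustrative sketch: list the MDS damage entries that a scrub
    warning like the one above points at."""
    import json
    import subprocess

    def damage_ls(mds_id="a"):
        # `ceph tell mds.<id> damage ls` prints a JSON array of damage records.
        out = subprocess.check_output(
            ["ceph", "tell", f"mds.{mds_id}", "damage", "ls"])
        return json.loads(out)

    if __name__ == "__main__":
        for entry in damage_ls():
            # Field names such as "damage_type" and "ino" match current
            # releases, but treat this output as a sketch, not a stable API.
            print(entry.get("damage_type"), entry.get("ino"), entry)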


Related issues

Related to CephFS - Bug #50250: mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" New
Copied to CephFS - Backport #50251: nautilus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" Rejected
Copied to CephFS - Backport #50252: octopus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" Need More Info
Copied to CephFS - Backport #50253: pacific: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" In Progress

History

#1 Updated by Patrick Donnelly 3 months ago

  • Labels (FS) qa-failure, scrub, task(medium) added

#2 Updated by Patrick Donnelly 3 months ago

  • Target version changed from v16.0.0 to v17.0.0
  • Backport changed from octopus,nautilus to pacific,octopus,nautilus

#3 Updated by Milind Changire about 1 month ago

I'm unable to comment on the exact teuthology run mentioned in the description.
However, based on testing so far, there are two types of "scrub error on inode" issues (a sketch of both checks follows this list):
  1. backtrace validation for dirs
    So far this is triggered only when validating unlinked (stray) entries: for stray entries, the on-disk backtrace version differs from the in-memory version.
  2. raw stats validation for dirs
    First, the in-memory and on-disk dirstat can differ (this needs more investigation; no leads on this one yet).
    Second, the in-memory and on-disk rstat can differ: here, the rctime and (file+dir) counts remain the same, but only the in-memory rstat version changes.
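
To make the two failure modes concrete, here is a hypothetical Python sketch of the comparisons described above. The real checks live in the C++ MDS (around CInode::validate_disk_state); every name below is illustrative rather than taken from the Ceph source:

    # Hypothetical sketch of the two scrub comparisons described in this
    # comment; all types and helpers here are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Backtrace:
        version: int

    @dataclass
    class RawStat:
        version: int
        rctime: float
        files: int
        dirs: int

    def backtrace_ok(in_memory: Backtrace, on_disk: Backtrace) -> bool:
        # Type 1: for stray (unlinked) entries, the on-disk backtrace version
        # was seen lagging the in-memory one, which trips the scrub warning.
        return on_disk.version >= in_memory.version

    def rstat_ok(in_memory: RawStat, on_disk: RawStat) -> bool:
        # Type 2: rctime and (file+dir) counts agree, but only the in-memory
        # rstat version has moved on, so a plain version check still fails.
        contents_match = (in_memory.rctime == on_disk.rctime
                          and in_memory.files == on_disk.files
                          and in_memory.dirs == on_disk.dirs)
        return contents_match and in_memory.version == on_disk.version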

#4 Updated by Milind Changire 16 days ago

  • Status changed from In Progress to Fix Under Review
  • Pull request ID set to 40520

#5 Updated by Patrick Donnelly 8 days ago

  • Related to Bug #50250: mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" added

#6 Updated by Patrick Donnelly 8 days ago

  • Status changed from Fix Under Review to Pending Backport

#7 Updated by Backport Bot 8 days ago

  • Copied to Backport #50251: nautilus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" added

#8 Updated by Backport Bot 8 days ago

  • Copied to Backport #50252: octopus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" added

#9 Updated by Backport Bot 8 days ago

  • Copied to Backport #50253: pacific: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" added
