Bug #48805
mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
Status:
Closed
% Done:
0%
Source:
Q/A
Tags:
Backport:
pacific,octopus,nautilus
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
MDS
Labels (FS):
qa-failure, scrub, task(medium)
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
2021-01-03T05:03:38.865 INFO:teuthology.orchestra.run.smithi191.stdout:2021-01-03T05:00:26.188079+0000 mds.a (mds.0) 124 : cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details
From: /ceph/teuthology-archive/teuthology-2021-01-03_03:15:02-fs-master-distro-basic-smithi/5751921/teuthology.log
Thought we had an issue for this already but I could not find it. Milind is working on this.
Updated by Patrick Donnelly over 3 years ago
- Labels (FS) qa-failure, scrub, task(medium) added
Updated by Patrick Donnelly over 3 years ago
- Target version changed from v16.0.0 to v17.0.0
- Backport changed from octopus,nautilus to pacific,octopus,nautilus
Updated by Milind Changire about 3 years ago
I'm unable to comment on the exact teuthology run mentioned in the description.
However, with the testing so far, there are two types of "scrub error on inode" issues:
- backtrace validation for dirs
So far, this issue is caused only when validating unlinked (stray) entries: for stray entries, the on-disk backtrace version differs from the in-memory version.
- raw stats validation for dirs
There's the problem of the in-memory and on-disk dirstat being different (this needs to be investigated more; no leads on this one yet).
Then there's the problem of the in-memory and on-disk rstat being different: here, the rctime and (file+dir) counts remain the same, but only the in-memory rstat version changes.
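The rstat case above can be modeled with a minimal sketch (purely illustrative Python, not the MDS's actual C++ scrub code; all names and fields here are hypothetical): a field-by-field comparison shows how a version-only drift, with rctime and counts agreeing, still registers as a mismatch.

```python
# Illustrative model of the rstat comparison a scrub pass performs.
# Names (RStat, rstat_mismatch) are hypothetical, not Ceph MDS APIs.
from dataclasses import dataclass


@dataclass
class RStat:
    version: int   # mutation version of the recursive stat
    rctime: float  # newest recursive change time
    rfiles: int    # recursive file count
    rdirs: int     # recursive directory count


def rstat_mismatch(in_memory: RStat, on_disk: RStat) -> list:
    """Return the fields where the on-disk rstat disagrees with the
    in-memory copy; any non-empty result would be flagged by scrub."""
    mismatched = []
    for name in ("version", "rctime", "rfiles", "rdirs"):
        if getattr(in_memory, name) != getattr(on_disk, name):
            mismatched.append(name)
    return mismatched


# The case described in the comment: counts and rctime agree, but the
# in-memory version has moved ahead, so scrub still reports an error.
mem = RStat(version=12, rctime=1609649426.0, rfiles=5, rdirs=2)
disk = RStat(version=11, rctime=1609649426.0, rfiles=5, rdirs=2)
print(rstat_mismatch(mem, disk))  # only "version" differs
```

Under this reading, the fix would need to either keep the versions in step or exclude a version-only difference from being treated as damage.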
Updated by Milind Changire about 3 years ago
- Status changed from In Progress to Fix Under Review
- Pull request ID set to 40520
Updated by Patrick Donnelly about 3 years ago
- Related to Bug #50250: mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") added
Updated by Patrick Donnelly about 3 years ago
- Status changed from Fix Under Review to Pending Backport
Updated by Backport Bot about 3 years ago
- Copied to Backport #50251: nautilus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" added
Updated by Backport Bot about 3 years ago
- Copied to Backport #50252: octopus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" added
Updated by Backport Bot about 3 years ago
- Copied to Backport #50253: pacific: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" added
Updated by Laura Flores over 2 years ago
Came across a failure that looks related to this one in a recent Pacific run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-02_19:46:34-fs-wip-yuri8-testing-2021-11-02-1009-pacific-distro-basic-smithi/6478887/
Updated by Milind Changire about 2 years ago
- Status changed from Pending Backport to Resolved
- now available on master and quincy