Feature #15619

Repair InoTable during forward scrub

Added by John Spray almost 8 years ago. Updated over 7 years ago.

Status:
Resolved
Priority:
Normal
Category:
fsck/damage handling
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
Reviewed:
Affected Versions:
Component(FS):
Labels (FS):
Pull request ID:

Description

If an inode's number is not marked as used in the InoTable, fix that during forward scrub when repair is enabled.

This covers the case where the system is damaged such that an inode with number X exists, but the inode table claims that inode number X is free. When the scrub touches this inode in CInode::validate_disk_state, we can check whether the inode number is actually marked as used in mds->inotable; if it is not, and the user has enabled repair (scrub_infop->header->repair), then emit a log message and mark the inode number as used in the inotable.

It should be possible to construct a test within test_forward_scrub.py in ceph-qa-suite by writing Python code that interferes with the system to create this particular bad state (see also test_damage.py, where similar manipulation is done). These tests can be run on a vstart cluster using the tasks/cephfs/vstart_runner.py script in ceph-qa-suite (vstart_runner.py contains notes on how to run it).

History

#1 Updated by John Spray almost 8 years ago

  • Description updated (diff)

#2 Updated by Vishal Kanaujia almost 8 years ago

  • Assignee set to Vishal Kanaujia

#3 Updated by Vishal Kanaujia over 7 years ago

  • Status changed from New to In Progress

#4 Updated by John Spray over 7 years ago

  • Status changed from In Progress to Resolved
commit 5259683e7819c22c14b21b1dd678a33e14574f21
Author: Vishal Kanaujia <Vishal.Kanaujia@sandisk.com>
Date:   Wed Jul 13 18:50:28 2016 +0530

    cephfs: Inotable repair during forward scrub
