Bug #39571

xfstest generic/452 exposes inode refcount leak

Added by Jeff Layton almost 5 years ago. Updated almost 5 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: -
Regression: No
Severity: 2 - major
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Crash signature (v1): -
Crash signature (v2): -

Description

After running xfstest generic/452, I saw this message pop up in the console:

[ 44.795039] VFS: Busy inodes after unmount of ceph. Self-destruct in 5 seconds. Have a nice day...

Basically we have some inodes that have elevated refcounts even after the superblock is gone. Eventually we'll trip over them and the box will crash.


Files

452.filtered (24.3 KB) - Filtered strace - Jeff Layton, 05/03/2019 12:43 PM
#1 Updated by Jeff Layton almost 5 years ago

This problem is reliably reproducible too.

#2 Updated by Jeff Layton almost 5 years ago

This reproduces on a stock v5.0.5 kernel too, so I'm pretty sure none of my patches broke it. I'm working on nailing down the specific sequence of events that triggers this. It reliably reproduces under xfstests when I run:

$ sudo ./check generic/452

...however, if I run the test itself directly it does not reproduce:

$ sudo ./tests/generic/452

...if I run it a second time, then the message does pop up. That leads me to believe that this is happening when cleaning up the scratch directory from a previous run.

#3 Updated by Jeff Layton almost 5 years ago

Filtered strace of the test, showing all syscalls that touch /mnt/scratch or /mnt/scratch/ls_on_scratch.

#4 Updated by Jeff Layton almost 5 years ago

With some printk debugging, I found that the leftover inode is the one for /mnt/scratch/ls_on_scratch. I'm still not able to reproduce this by hand, though, so I wonder if there is some raciness involved.

#5 Updated by Jeff Layton almost 5 years ago

Single shell-script reproducer:

#!/bin/bash

mount /mnt/scratch
rm -r /mnt/scratch/*
umount /mnt/scratch
mount /mnt/scratch
umount /mnt/scratch
mount /mnt/scratch
ls /mnt/scratch
umount /mnt/scratch
mount /mnt/scratch
cp /usr/bin/ls /mnt/scratch/ls_on_scratch
/mnt/scratch/ls_on_scratch /mnt/scratch/ls_on_scratch
mount -o remount,ro /mnt/scratch
/mnt/scratch/ls_on_scratch /mnt/scratch/ls_on_scratch
umount /mnt/scratch

...with this in fstab:

192.168.XXX.YYY:40527:/scratch    /mnt/scratch    ceph    noauto,context="system_u:object_r:root_t:s0",acl    0 0

Some of these steps are probably not needed, so I'll try whittling this down next.

#6 Updated by Jeff Layton almost 5 years ago

Slimmed-down reproducer:

#!/bin/bash

mount /mnt/scratch
cp /usr/bin/ls /mnt/scratch/ls_on_scratch
mount -o remount,ro /mnt/scratch
umount /mnt/scratch

...also, the extra mount options don't matter.

More interestingly, after the last umount, the ls_on_scratch file in cephfs is the correct length, but it's completely zero-filled. I wonder if the remount,ro is occurring before we have a chance to flush the cache, and the filesystem going read-only then prevents writeback from succeeding?
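One way to see the zero-fill for yourself is to remount and compare the copy against the original binary. This is a sketch only: it assumes the slimmed-down reproducer above has just been run, that the /mnt/scratch cephfs fstab entry is in place, and that you have root privileges.

```shell
#!/bin/bash
# Remount and check whether the copied file survived writeback intact.
mount /mnt/scratch
stat -c '%s %n' /usr/bin/ls /mnt/scratch/ls_on_scratch   # sizes should match
cmp /usr/bin/ls /mnt/scratch/ls_on_scratch \
    && echo "contents intact" \
    || echo "contents differ (zero-filled?)"
umount /mnt/scratch
```

On an affected kernel, cmp reports a difference even though the sizes agree.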

EDIT: calling sync after the copy, but before the remount, works around the issue.
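For reference, the workaround folds into the slimmed-down reproducer like this (again a sketch: same assumed fstab entry and root privileges; with the sync in place the "Busy inodes" message no longer appears here):

```shell
#!/bin/bash
# Same as the slimmed-down reproducer, with an explicit sync inserted so
# the dirty copy is flushed while the filesystem is still read-write.
mount /mnt/scratch
cp /usr/bin/ls /mnt/scratch/ls_on_scratch
sync                                  # flush dirty data before going ro
mount -o remount,ro /mnt/scratch
umount /mnt/scratch
```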

#7 Updated by Jeff Layton almost 5 years ago

Found the problem: ceph didn't have a remount_sb operation, so we need to add one and have it call sync_filesystem(). Patch posted:

https://marc.info/?l=ceph-devel&m=155723567025589&w=2

#8 Updated by Jeff Layton almost 5 years ago

  • Status changed from New to Resolved