Bug #55411

BUG: Dentry XXXXXX still in use (1) [unmount of ceph ceph]

Added by Jeff Layton almost 2 years ago. Updated almost 2 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Crash signature (v1):
Crash signature (v2):

Description

I've been able to fairly reliably reproduce a busy-dentry-at-unmount problem with ceph by bouncing the MDSs regularly while running fsstress against it. I'm testing against a 3-node cephadm cluster. The fs has 3 MDSs and no standbys. On all 3 nodes, I run a script like this to bounce the MDSs each in turn:

#!/bin/bash

CEPHADM="/path/to/cephadm"

# stagger the nodes randomly so the restarts don't all line up
sleep $((RANDOM % 30))

while true; do
    # restart each MDS container on this node, one at a time
    "$CEPHADM" ls --no-detail | jq -r '.[]|.systemd_unit' | grep -E '@mds' |
    while read -r unit; do
        systemctl stop "$unit"
        sleep 1
        systemctl reset-failed "$unit"
        systemctl start "$unit"
    done
    sleep 60
done

On the client, I then run xfstests generic/013 (which is an fsstress test; a rough sketch of the client-side loop follows the log excerpt below). Eventually, once the test completes and unmounts ceph, we get messages like these (along with some stack traces that aren't particularly helpful):

[ 2083.434109] BUG: Dentry 0000000005edbd24{i=2000007d09a,n=d94XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX}  still in use (1) [unmount of ceph ceph]
[ 2083.693821] BUG: Dentry 000000003f412ac2{i=3000004ff7f,n=l26}  still in use (1) [unmount of ceph ceph]
[ 2083.991207] BUG: Dentry 00000000bdbd2251{i=2000007cd23,n=f2XX}  still in use (1) [unmount of ceph ceph]
[ 2084.290827] BUG: Dentry 00000000e5f35b7f{i=2000007cfbe,n=c68X}  still in use (1) [unmount of ceph ceph]
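
Since generic/013 is essentially a wrapper around fsstress, the client side boils down to something like the sketch below. The monitor address, credentials, paths, and fsstress arguments are placeholders, not the exact ones xfstests uses.

#!/bin/bash
# Rough sketch of the client-side loop; monitor address, credentials, paths
# and fsstress arguments are placeholders.
MNT=/mnt/test

while true; do
    mount -t ceph mon1:6789:/ "$MNT" -o name=admin,secretfile=/etc/ceph/admin.secret
    # generic/013 is essentially an fsstress run; ltp/fsstress is built as
    # part of an xfstests checkout
    /path/to/xfstests-dev/ltp/fsstress -d "$MNT/fsstress" -p 4 -n 10000 -l 3
    # the "Dentry ... still in use" splat fires here if references were leaked
    umount "$MNT"
done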

I suspect that some of the kernel error handling or retransmission codepaths are resulting in a dentry leak. This was seen with the current "testing" branch kernel (which doesn't contain any of the fscrypt changes).
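One way to spot the leak without waiting for the unmount-time splat is to compare the VFS dentry counters and the ceph dentry slab before and after the unmount. A rough sketch follows; the mountpoint is a placeholder, and the slab line may be hidden by slab merging unless the kernel is booted with slab_nomerge:

#!/bin/bash
# Rough sketch: snapshot dentry accounting around the unmount. Counts that
# never drop back toward the pre-test baseline point at leaked references.

show_counts() {
    # fields: nr_dentry nr_unused age_limit want_pages nr_negative dummy
    echo "fs.dentry-state: $(cat /proc/sys/fs/dentry-state)"
    grep '^ceph_dentry_info ' /proc/slabinfo || echo "ceph_dentry_info slab merged or absent"
}

show_counts                 # baseline while still mounted
umount /mnt/test            # this is where the BUG messages print
show_counts                 # leaked dentries keep the counts elevated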


Related issues: 1 (0 open, 1 closed)

Related to Linux kernel client - Bug #55284: kclient: filesystem sync will stuck for around 5 seconds sometimes (Resolved, assigned to Xiubo Li)

Actions #1

Updated by Jeff Layton almost 2 years ago

  • Assignee set to Jeff Layton
Actions #2

Updated by Xiubo Li almost 2 years ago

With the testing kernel I could also see this when running the ffsb and fsstress tests, though I'm not sure whether they are related:

[root@lxbceph1 ceph-client]# cat /sys/kernel/debug/kmemleak
unreferenced object 0xffff888153971e00 (size 256):
  comm "swapper/0", pid 1, jiffies 4294711494 (age 1054.435s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 48 00 00 00 00 00 00 00  ........H.......
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<0000000087ce56d0>] __kmalloc+0x165/0x2f0
    [<00000000a97688ce>] msr_build_context+0x3e/0x150
    [<00000000ce06fd86>] pm_check_save_msr+0xb7/0xe0
    [<000000002eebdca4>] do_one_initcall+0x89/0x2a0
    [<0000000017704906>] kernel_init_freeable+0x332/0x3c3
    [<00000000e8dff4fd>] kernel_init+0x18/0x140
    [<0000000093f127d4>] ret_from_fork+0x22/0x30
unreferenced object 0xffff8881733dbf00 (size 64):
  comm "mount", pid 1209, jiffies 4294790614 (age 975.574s)
  hex dump (first 32 bytes):
    73 79 73 74 65 6d 5f 75 3a 6f 62 6a 65 63 74 5f  system_u:object_
    72 3a 73 6e 61 70 70 79 5f 73 6e 61 70 5f 74 3a  r:snappy_snap_t:
  backtrace:
    [<00000000fec7e29a>] __kmalloc_track_caller+0x163/0x2d0
    [<000000000e8ebe9a>] kmemdup_nul+0x24/0x60
    [<000000007f4aa05d>] selinux_sb_eat_lsm_opts+0x18d/0x300
    [<0000000007acbb92>] security_sb_eat_lsm_opts+0x33/0x50
    [<00000000488fe241>] generic_parse_monolithic+0x7c/0x140
    [<0000000001552f76>] path_mount+0x914/0xd30
    [<000000000c82b396>] do_mount+0xcb/0xf0
    [<00000000c6603cd8>] __x64_sys_mount+0xf4/0x110
    [<0000000079cc9014>] do_syscall_64+0x3a/0x80
    [<000000000020fc35>] entry_SYSCALL_64_after_hwframe+0x44/0xae
unreferenced object 0xffff88811c6f6e00 (size 64):
  comm "mount", pid 1200, jiffies 4294790617 (age 975.572s)
  hex dump (first 32 bytes):
    73 79 73 74 65 6d 5f 75 3a 6f 62 6a 65 63 74 5f  system_u:object_
    72 3a 73 6e 61 70 70 79 5f 73 6e 61 70 5f 74 3a  r:snappy_snap_t:
  backtrace:
    [<00000000fec7e29a>] __kmalloc_track_caller+0x163/0x2d0
    [<000000000e8ebe9a>] kmemdup_nul+0x24/0x60
    [<000000007f4aa05d>] selinux_sb_eat_lsm_opts+0x18d/0x300
    [<0000000007acbb92>] security_sb_eat_lsm_opts+0x33/0x50
    [<00000000488fe241>] generic_parse_monolithic+0x7c/0x140
    [<0000000001552f76>] path_mount+0x914/0xd30
    [<000000000c82b396>] do_mount+0xcb/0xf0
    [<00000000c6603cd8>] __x64_sys_mount+0xf4/0x110
    [<0000000079cc9014>] do_syscall_64+0x3a/0x80
    [<000000000020fc35>] entry_SYSCALL_64_after_hwframe+0x44/0xae
unreferenced object 0xffff88816757ff00 (size 64):
  comm "mount", pid 1196, jiffies 4294790618 (age 975.571s)
  hex dump (first 32 bytes):
    73 79 73 74 65 6d 5f 75 3a 6f 62 6a 65 63 74 5f  system_u:object_
    72 3a 73 6e 61 70 70 79 5f 73 6e 61 70 5f 74 3a  r:snappy_snap_t:
  backtrace:
    [<00000000fec7e29a>] __kmalloc_track_caller+0x163/0x2d0
    [<000000000e8ebe9a>] kmemdup_nul+0x24/0x60
    [<000000007f4aa05d>] selinux_sb_eat_lsm_opts+0x18d/0x300
    [<0000000007acbb92>] security_sb_eat_lsm_opts+0x33/0x50
    [<00000000488fe241>] generic_parse_monolithic+0x7c/0x140
    [<0000000001552f76>] path_mount+0x914/0xd30
    [<000000000c82b396>] do_mount+0xcb/0xf0
    [<00000000c6603cd8>] __x64_sys_mount+0xf4/0x110
    [<0000000079cc9014>] do_syscall_64+0x3a/0x80
    [<000000000020fc35>] entry_SYSCALL_64_after_hwframe+0x44/0xae
unreferenced object 0xffff8885d3f5bf00 (size 64):
  comm "mount", pid 1192, jiffies 4294790618 (age 975.571s)
  hex dump (first 32 bytes):
    73 79 73 74 65 6d 5f 75 3a 6f 62 6a 65 63 74 5f  system_u:object_
    72 3a 73 6e 61 70 70 79 5f 73 6e 61 70 5f 74 3a  r:snappy_snap_t:
  backtrace:
    [<00000000fec7e29a>] __kmalloc_track_caller+0x163/0x2d0
    [<000000000e8ebe9a>] kmemdup_nul+0x24/0x60
    [<000000007f4aa05d>] selinux_sb_eat_lsm_opts+0x18d/0x300
    [<0000000007acbb92>] security_sb_eat_lsm_opts+0x33/0x50
    [<00000000488fe241>] generic_parse_monolithic+0x7c/0x140
    [<0000000001552f76>] path_mount+0x914/0xd30
    [<000000000c82b396>] do_mount+0xcb/0xf0
    [<00000000c6603cd8>] __x64_sys_mount+0xf4/0x110
    [<0000000079cc9014>] do_syscall_64+0x3a/0x80
    [<000000000020fc35>] entry_SYSCALL_64_after_hwframe+0x44/0xae
unreferenced object 0xffff88819010b680 (size 64):
  comm "mount", pid 1191, jiffies 4294790619 (age 975.625s)
  hex dump (first 32 bytes):
    73 79 73 74 65 6d 5f 75 3a 6f 62 6a 65 63 74 5f  system_u:object_
    72 3a 73 6e 61 70 70 79 5f 73 6e 61 70 5f 74 3a  r:snappy_snap_t:
  backtrace:
    [<00000000fec7e29a>] __kmalloc_track_caller+0x163/0x2d0
    [<000000000e8ebe9a>] kmemdup_nul+0x24/0x60
    [<000000007f4aa05d>] selinux_sb_eat_lsm_opts+0x18d/0x300
    [<0000000007acbb92>] security_sb_eat_lsm_opts+0x33/0x50
    [<00000000488fe241>] generic_parse_monolithic+0x7c/0x140
    [<0000000001552f76>] path_mount+0x914/0xd30
    [<000000000c82b396>] do_mount+0xcb/0xf0
    [<00000000c6603cd8>] __x64_sys_mount+0xf4/0x110
    [<0000000079cc9014>] do_syscall_64+0x3a/0x80
    [<000000000020fc35>] entry_SYSCALL_64_after_hwframe+0x44/0xae
Actions #3

Updated by Jeff Layton almost 2 years ago

That looks unrelated at first glance, and those may be false positives. Those allocations are all part of the mount codepath, aren't they? They may end up getting reaped when the unmount occurs.
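
One way to check that is to unmount and rescan: kmemleak drops an object from its report as soon as it is actually freed, so anything still listed afterwards really did outlive the unmount. This is just the standard kmemleak debugfs interface, nothing ceph-specific; the mountpoint is a placeholder:

# requires CONFIG_DEBUG_KMEMLEAK and a mounted debugfs
umount /mnt/test
echo scan > /sys/kernel/debug/kmemleak     # trigger an immediate rescan
sleep 30                                   # give the scan time to finish
cat /sys/kernel/debug/kmemleak             # objects freed at unmount no longer appear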

In my case, I'm seeing some dentry/inode reference leaks. My suspicion is that we're leaking ceph_mds_request objects in some error cases, but I haven't quite nailed that down yet.
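
That would explain the pinned dentries, since an outstanding request generally holds references on the dentries/inodes it touches until it is released. A quick (if crude) way to test that theory is to watch the request slab while the test runs, assuming slab merging doesn't hide it:

#!/bin/bash
# Sketch: watch the ceph_mds_request slab while the test runs. If requests are
# being leaked, the active-object count climbs and never drops back after the
# test and unmount complete. Boot with slab_nomerge if the cache doesn't show
# up on its own in /proc/slabinfo.
while sleep 5; do
    grep '^ceph_mds_request ' /proc/slabinfo
done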

Actions #4

Updated by Jeff Layton almost 2 years ago

  • Status changed from New to Fix Under Review
  • Assignee changed from Jeff Layton to Xiubo Li

Found it: it's a recent regression in a patch that's in testing but not yet in mainline:

https://lore.kernel.org/ceph-devel/20220422172911.94861-1-jlayton@kernel.org/T/#u

Xiubo, want to fix up the original patch with this fix?

Actions #5

Updated by Xiubo Li almost 2 years ago

Jeff Layton wrote:

Found it: it's a recent regression in a patch that's in testing but not yet in mainline:

https://lore.kernel.org/ceph-devel/20220422172911.94861-1-jlayton@kernel.org/T/#u

Xiubo, want to fix up the original patch with this fix?

Sure, I will fold this in. Thanks, Jeff.

Actions #6

Updated by Xiubo Li almost 2 years ago

  • Status changed from Fix Under Review to Resolved
Actions #7

Updated by Xiubo Li almost 2 years ago

  • Related to Bug #55284: kclient: filesystem sync will stuck for around 5 seconds sometimes added