Bug #42331
Status: Closed
fscache: possible circular locking dependency detected
% Done:
0%
Regression:
No
Severity:
3 - minor
Description
Got this lockdep pop soon after kicking off an xfstests run:
[13314.541723] ======================================================
[13314.543270] WARNING: possible circular locking dependency detected
[13314.546092] 5.4.0-rc3-00009-g66201b2e84d0 #19 Tainted: G O
[13314.547412] ------------------------------------------------------
[13314.548957] 003/32728 is trying to acquire lock:
[13314.550125] ffff8883f9615900 (&sb->s_type->i_mutex_key#20/2){+.+.}, at: ceph_fscache_register_inode_cookie+0x73/0x100 [ceph]
[13314.552581] but task is already holding lock:
[13314.553979] ffff88836fb071f0 (&type->i_mutex_dir_key#10){++++}, at: path_openat+0x2c3/0xc80
[13314.555911] which lock already depends on the new lock.
[13314.557910] the existing dependency chain (in reverse order) is:
[13314.559576] -> #1 (&type->i_mutex_dir_key#10){++++}:
[13314.561075]        down_write+0x3d/0x70
[13314.561968]        vfs_rename+0x6b0/0x9c0
[13314.562892]        do_renameat2+0x381/0x530
[13314.563856]        __x64_sys_rename+0x1f/0x30
[13314.564848]        do_syscall_64+0x56/0xa0
[13314.565803]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
[13314.567036] -> #0 (&sb->s_type->i_mutex_key#20/2){+.+.}:
[13314.568573]        __lock_acquire+0xd93/0x1940
[13314.569546]        lock_acquire+0xa2/0x1b0
[13314.570460]        down_write_nested+0x43/0x80
[13314.571414]        ceph_fscache_register_inode_cookie+0x73/0x100 [ceph]
[13314.572695]        ceph_init_file+0x4a/0x270 [ceph]
[13314.573703]        do_dentry_open+0x13b/0x380
[13314.574612]        ceph_atomic_open+0x1d5/0x470 [ceph]
[13314.577195]        lookup_open+0x3f4/0x7e0
[13314.578107]        path_openat+0x2db/0xc80
[13314.579904]        do_filp_open+0x91/0x100
[13314.581618]        do_sys_open+0x184/0x220
[13314.582564]        do_syscall_64+0x56/0xa0
[13314.583425]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
[13314.584504] other info that might help us debug this:
[13314.586240]  Possible unsafe locking scenario:
[13314.587532]        CPU0                    CPU1
[13314.588531]        ----                    ----
[13314.589542]   lock(&type->i_mutex_dir_key#10);
[13314.590517]                                lock(&sb->s_type->i_mutex_key#20/2);
[13314.592117]                                lock(&type->i_mutex_dir_key#10);
[13314.593544]   lock(&sb->s_type->i_mutex_key#20/2);
[13314.594618]  *** DEADLOCK ***
[13314.596186] 2 locks held by 003/32728:
[13314.597110]  #0: ffff888403460410 (sb_writers#16){.+.+}, at: mnt_want_write+0x20/0x50
[13314.598836]  #1: ffff88836fb071f0 (&type->i_mutex_dir_key#10){++++}, at: path_openat+0x2c3/0xc80
[13314.600733] stack backtrace:
[13314.601838] CPU: 6 PID: 32728 Comm: 003 Tainted: G O 5.4.0-rc3-00009-g66201b2e84d0 #19
[13314.603790] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-2.fc30 04/01/2014
[13314.605636] Call Trace:
[13314.606293]  dump_stack+0x85/0xc0
[13314.608631]  check_noncircular+0x16f/0x190
[13314.609720]  ? __lock_acquire+0x246/0x1940
[13314.610985]  __lock_acquire+0xd93/0x1940
[13314.612046]  lock_acquire+0xa2/0x1b0
[13314.612930]  ? ceph_fscache_register_inode_cookie+0x73/0x100 [ceph]
[13314.614226]  down_write_nested+0x43/0x80
[13314.615144]  ? ceph_fscache_register_inode_cookie+0x73/0x100 [ceph]
[13314.616449]  ceph_fscache_register_inode_cookie+0x73/0x100 [ceph]
[13314.617729]  ? _raw_spin_unlock+0x24/0x30
[13314.618676]  ceph_init_file+0x4a/0x270 [ceph]
[13314.619695]  ? ceph_llseek+0x100/0x100 [ceph]
[13314.620705]  do_dentry_open+0x13b/0x380
[13314.621638]  ceph_atomic_open+0x1d5/0x470 [ceph]
[13314.622694]  lookup_open+0x3f4/0x7e0
[13314.623505]  path_openat+0x2db/0xc80
[13314.624298]  ? __lock_acquire+0x246/0x1940
[13314.625161]  do_filp_open+0x91/0x100
[13314.625952]  ? _raw_spin_unlock+0x24/0x30
[13314.626792]  ? __alloc_fd+0xe9/0x1d0
[13314.627581]  do_sys_open+0x184/0x220
[13314.628375]  do_syscall_64+0x56/0xa0
[13314.629167]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
[13314.630171] RIP: 0033:0x7fd0dc8ce2bc
[13314.630957] Code: 00 00 41 00 3d 00 00 41 00 74 4b 48 8d 05 6c 83 0d 00 8b 00 85 c0 75 6f 44 89 e2 48 89 ee bf 9c ff ff ff b8 01 01 00 00 0f 05 <48> 3d 00 f0 ff ff 0f 87 98 00 00 00 48 8b 4c 24 28 64 48 33 0c 25
[13314.634341] RSP: 002b:00007fffc7361980 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
[13314.635938] RAX: ffffffffffffffda RBX: 000055789e575290 RCX: 00007fd0dc8ce2bc
[13314.637349] RDX: 0000000000000241 RSI: 000055789e57bbf0 RDI: 00000000ffffff9c
[13314.639559] RBP: 000055789e57bbf0 R08: 0000000000000000 R09: 0000000000000020
[13314.640927] R10: 00000000000001b6 R11: 0000000000000246 R12: 0000000000000241
[13314.642644] R13: 0000000000000000 R14: 0000000000000001 R15: 000055789e57bbf0
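What lockdep is flagging here is the classic ABBA pattern: the open path holds the directory lock and then takes the file inode lock inside ceph_fscache_register_inode_cookie(), while the rename path acquires the same two lock classes in the opposite order. A minimal userspace sketch of that ordering conflict (illustrative only; dir_lock/inode_lock are hypothetical stand-ins for i_mutex_dir_key and i_mutex_key#20/2, not kernel identifiers):

```c
/* ABBA lock-ordering sketch mirroring the lockdep report above.
 * Userspace illustration only: names are hypothetical, not kernel code. */
#include <pthread.h>
#include <string.h>

static pthread_mutex_t dir_lock = PTHREAD_MUTEX_INITIALIZER;   /* dir i_mutex */
static pthread_mutex_t inode_lock = PTHREAD_MUTEX_INITIALIZER; /* file i_mutex */

/* Open path: path_openat() holds the directory lock, then the fscache
 * cookie registration takes the file inode lock. Records the order. */
void open_path_order(char *order, size_t n)
{
    pthread_mutex_lock(&dir_lock);
    strncat(order, "dir ", n - strlen(order) - 1);
    pthread_mutex_lock(&inode_lock);
    strncat(order, "inode", n - strlen(order) - 1);
    pthread_mutex_unlock(&inode_lock);
    pthread_mutex_unlock(&dir_lock);
}

/* Rename path: vfs_rename() ends up taking the same two lock classes
 * in the opposite order. */
void rename_path_order(char *order, size_t n)
{
    pthread_mutex_lock(&inode_lock);
    strncat(order, "inode ", n - strlen(order) - 1);
    pthread_mutex_lock(&dir_lock);
    strncat(order, "dir", n - strlen(order) - 1);
    pthread_mutex_unlock(&dir_lock);
    pthread_mutex_unlock(&inode_lock);
}
```

Each path is harmless on its own; the hazard is two tasks running them concurrently, where one holds dir_lock waiting for inode_lock while the other holds inode_lock waiting for dir_lock, which is exactly the CPU0/CPU1 scenario lockdep prints.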
Updated by Jeff Layton over 2 years ago
- Status changed from New to Won't Fix
This issue is almost certainly a problem in the fscache infrastructure itself. It's being worked out upstream, but I don't think we need to track it here.