Bug #910: Multi-MDS Ceph does not pass fsstress

xlock is not unpinning during rename across MDSes

Added by Greg Farnum about 13 years ago. Updated about 13 years ago.

Status: Resolved
Priority: Normal
Assignee:
Category: -
Target version:
% Done: 0%
Tags:
Reviewed:
Affected Versions:
Pull request ID:

Description

See logs in kai:~gregf/logs/fsstress/freeze_tree_assert.

I managed to narrow it down to inode 20000000166 having an auth_pin that is never removed. The auth_pin is taken during a slave rename -- there's a slave auth_pin and an xlock. It looks like the slave auth_pin is being put, but for some reason the auth_pin taken for the xlock is never dropped. Perhaps this check is failing:

        if((!lock->is_stable() &&
            lock->get_sm()->states[lock->get_next_state()].next == 0) &&
            !lock->is_locallock()) {

I added debug output for that case in my local repo, but from what I can see the check should have succeeded, which makes me think the lock is getting taken out of the mdr's xlock list somewhere. Nothing turned up in a code audit, though...
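
For what it's worth, the instrumentation described above could look roughly like the following. This is only a sketch meant to sit next to the quoted check in the MDS lock-teardown path; the dout lines and the surrounding branch are placeholders, not the actual patch:

        // Sketch only: same condition as the quoted check, with logging on the
        // path that skips the unpin. "lock" is the lock being torn down, as in
        // the quoted snippet.
        if ((!lock->is_stable() &&
             lock->get_sm()->states[lock->get_next_state()].next == 0) &&
            !lock->is_locallock()) {
          dout(10) << "removing lock " << lock << ", will be stable, unpinning" << dendl;
          // expected path: drop the auth_pin taken when the xlock was acquired
        } else {
          dout(10) << "removing lock " << lock << ", NOT unpinning"
                   << " (stable=" << lock->is_stable() << ")" << dendl;
        }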

#1

Updated by Greg Farnum about 13 years ago

  • Status changed from New to In Progress

Okay, reproduced with my extra logging. Locks 0x1163068 and 0x1162fb8 take an auth_pin but don't put one; these are the isnap and ilink locks on the inode. Logging reveals:

2011-04-15 15:53:56.329682 7fe1ad64a710 mds1.migrator finish_export_inode telling client4104 exported caps on [inode 200000000fa [...2,head] /p3/d6/d17/d1b/d58/d6d/ auth{0=1} v73 ap=2 AMBIGAUTH FROZEN na=2 f(v0 m2011-04-15 15:53:55.126373 4=3+1) n(v1 rc2011-04-15 15:53:55.126373 b812698 a2 8=5+3) (ilink xlock x=1 by 0x10ce000) (isnap xlock x=1 by 0x10ce000) (inest mix dirty) (iversion lock x=1 by 0x10ce000) caps={4104=pAsXsFs/-@16},l=-1(4104) | ptrwaiter dirtyscattered request lock dirfrag caps frozen replicated dirty waiter authpin 0x1162890]
2011-04-15 15:53:56.329691 7fe1ad64a710 mds1.1 send_message_client_counted client4104 seq 1350 client_caps(export ino 200000000fa 468 seq 16 caps=pAsXsFs dirty=- wanted=- follows 0 size 0/0 mtime 0.000000) v1
2011-04-15 15:53:56.329703 7fe1ad64a710 -- 10.0.1.205:6804/21680 --> 10.0.1.205:0/21703 -- client_caps(export ino 200000000fa 468 seq 16 caps=pAsXsFs dirty=- wanted=- follows 0 size 0/0 mtime 0.000000) v1 -- ?+0 0x10dc340 con 0xd49780
2011-04-15 15:53:56.329715 7fe1ad64a710 mds1.cache.ino(200000000fa) remove_client_cap last cap, leaving realm snaprealm(1 seq 1 lc 0 cr 0 cps 1 snaps={} 0xd13000)
2011-04-15 15:53:56.329732 7fe1ad64a710 mds1.cache.ino(200000000fa)  mark_clean [inode 200000000fa [...2,head] /p3/d6/d17/d1b/d58/d6d/ auth{0=1} v73 ap=2 AMBIGAUTH FROZEN na=2 f(v0 m2011-04-15 15:53:55.126373 4=3+1) n(v1 rc2011-04-15 15:53:55.126373 b812698 a2 8=5+3) (ilink xlock x=1 by 0x10ce000) (isnap xlock x=1 by 0x10ce000) (inest mix dirty) (iversion lock x=1 by 0x10ce000) | ptrwaiter dirtyscattered request lock dirfrag frozen replicated dirty waiter authpin 0x1162890]
2011-04-15 15:53:56.329756 7fe1ad64a710 2011-04-15 15:53:56.329744 mds1.cache.ino(200000000fa) take_waiting mask ffffffffffffffff took 0x10bad80 tag 1000000000000000 on [inode 200000000fa [...2,head] /p3/d6/d17/d1b/d58/d6d/ rep@0.1 v73 ap=2 AMBIGAUTH FROZEN na=2 f(v0 m2011-04-15 15:53:55.126373 4=3+1) n(v1 rc2011-04-15 15:53:55.126373 b812698 a2 8=5+3) (ilink lock x=1 by 0x10ce000) (isnap lock x=1 by 0x10ce000) (inest mix dirty) (iversion lock x=1 by 0x10ce000) | ptrwaiter dirtyscattered request lock dirfrag frozen waiter authpin 0x1162890]
2011-04-15 15:53:56.329762 7fe1ad64a710 mds1.cache.ino(200000000fa) unfreeze_inode
2011-04-15 15:53:56.329768 7fe1ad64a710 mds1.server removing lock 0xfe8fe8, will be stable, unpinning
2011-04-15 15:53:56.329779 7fe1ad64a710 mds1.cache.den(10000000154 d47) auth_unpin by 0xfe8fe8 on [dentry #1/p3/d6/d24/d47 [2,head] auth{0=1} NULL (dn xlock) (dversion lock x=1 by 0x10ce000) v=75 ap=1+0 inode=0 | request lock replicated dirty authpin 0xfe8e98] now 1+0
2011-04-15 15:53:56.329805 7fe1ad64a710 mds1.cache.dir(10000000154) adjust_nested_auth_pins -1/-1 on [dir 10000000154 /p3/d6/d24/ [2,head] auth{0=1} v=76 cv=0/0 dir_auth=1 ap=1+1+1 state=1610612738|complete f(v1 m2011-04-15 15:53:55.793029 2=1+1)/f(v1 m2011-04-15 15:53:01.502384 3=1+2) n(v6 rc2011-04-15 15:53:55.793029 b230958 6=5+1)/n(v6 rc2011-04-15 15:53:35.261042 b1043656 a1 11=8+3) hs=2+3,ss=0+0 dirty=4 | child subtree replicated dirty authpin 0x1105860] count now 1 + 1
2011-04-15 15:53:56.329818 7fe1ad64a710 mds1.server removing lock 0xfe9010
2011-04-15 15:53:56.329826 7fe1ad64a710 mds1.server removing lock 0x1162f60
2011-04-15 15:53:56.329831 7fe1ad64a710 mds1.server removing lock 0x1162fb8
2011-04-15 15:53:56.329836 7fe1ad64a710 mds1.server removing lock 0x1163068

Note how the locks move from xlock to lock between the output from mark_clean and take_waiting.

Aha! export_twiddle() sets the state to get_replica_state(). I should be able to patch that up pretty easily.
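
To make the suspected interaction concrete, here is a tiny self-contained model of it. This is not Ceph code: export_twiddle() and get_replica_state() are the names from the analysis above, while the ToyLock type and everything in it are purely illustrative.

        // Toy model of the leak: flipping an xlocked lock to its stable replica
        // state before the xlock is dropped means the "will be stable, unpinning"
        // branch never fires, so the auth_pin taken with the xlock is never put.
        #include <iostream>

        struct ToyLock {
          bool stable = false;   // an xlocked lock counts as unstable in this model
          int auth_pins = 0;

          void xlock() { stable = false; ++auth_pins; }        // xlock takes an auth_pin
          void twiddle_to_replica_state() { stable = true; }   // roughly what export_twiddle() did
          void drop_xlock() {
            if (!stable) {       // simplified version of the quoted check
              --auth_pins;       // "will be stable, unpinning"
              stable = true;
            }                    // otherwise the pin is silently leaked
          }
        };

        int main() {
          ToyLock ilink;
          ilink.xlock();                     // slave rename xlocks ilink/isnap
          ilink.twiddle_to_replica_state();  // export flips the state first...
          ilink.drop_xlock();                // ...so the unpin branch is skipped
          std::cout << "leaked auth_pins: " << ilink.auth_pins << std::endl;  // prints 1
          return 0;
        }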

#2

Updated by Greg Farnum about 13 years ago

  • Status changed from In Progress to Resolved

Looks like this is fixed.
