Bug #17253: Crash in Client::_invalidate_kernel_dcache when reconnecting during unmount

Added by John Spray over 7 years ago. Updated over 7 years ago.

Status: Resolved
Priority: Urgent
Category: Correctness/Safety
Target version: -
% Done: 0%
Source: other
Backport: jewel
Regression: No
Severity: 3 - minor
Component(FS): ceph-fuse

Description

ceph version v11.0.0-2162-gccd7cd1 (ccd7cd17a24b9e7037993b2576c960da92041977)
 1: (()+0x24a892) [0x7f8ed31b8892]
 2: (()+0x10340) [0x7f8ed2062340]
 3: (Client::_invalidate_kernel_dcache()+0x23) [0x7f8ed3123c43]
 4: (Client::trim_caps(MetaSession*, int)+0x192) [0x7f8ed3143242]
 5: (Client::handle_client_session(MClientSession*)+0x4a1) [0x7f8ed314afe1]
 6: (Client::ms_dispatch(Message*)+0x46f) [0x7f8ed318274f]
 7: (DispatchQueue::entry()+0x78b) [0x7f8ed341b5bb]
 8: (DispatchQueue::DispatchThread::entry()+0xd) [0x7f8ed32cd01d]
 9: (()+0x8182) [0x7f8ed205a182]
 10: (clone()+0x6d) [0x7f8ed09c947d]

http://pulpito.ceph.com/jspray-2016-09-08_16:42:57-fs-master-distro-basic-mira/405763
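
For context: frame 3, Client::_invalidate_kernel_dcache(), walks the dentries hanging off the client's root inode to schedule kernel dentry invalidation upcalls. A paraphrased sketch of its jewel/master-era shape (an approximation for illustration, not a verbatim copy):

 // Approximate sketch of Client::_invalidate_kernel_dcache(); the
 // unguarded use of `root` is the crash site seen in frame 3 above.
 void Client::_invalidate_kernel_dcache()
 {
   if (can_invalidate_dentries && dentry_invalidate_cb && root->dir) {
     // queue an invalidation upcall for every dentry under the root
     for (auto &p : root->dir->dentries)
       _schedule_invalidate_dentry_callback(p.second, false);
   }
   // (otherwise fall back to remounting to flush the kernel dcache)
 }

If `root` has already been released by the time this runs (as happens late in unmount), root->dir is a null or dangling pointer dereference.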


Related issues (2 total: 0 open, 2 closed)

Has duplicate: CephFS - Bug #16857: Crash in Client::_invalidate_kernel_dcache (Duplicate, added 07/29/2016)
Copied to: CephFS - Backport #17477: jewel: Crash in Client::_invalidate_kernel_dcache when reconnecting during unmount (Resolved, assigned to Loïc Dachary)
#1

Updated by John Spray over 7 years ago

  • Subject changed from Crash in Client::_invalidate_kernel_dcache with mds_cache_size=0 to Crash in Client::_invalidate_kernel_dcache
  • Priority changed from Normal to Urgent
#2

Updated by John Spray over 7 years ago

#3

Updated by John Spray over 7 years ago

  • Has duplicate Bug #16857: Crash in Client::_invalidate_kernel_dcache added
#4

Updated by John Spray over 7 years ago

Note: this is happening during umount.

#5

Updated by John Spray over 7 years ago

 -7248> 2016-09-12 22:40:21.241205 7f12e20e4ec0  2 client.4129 unmounting
 -7247> 2016-09-12 22:40:21.241212 7f12e20e4ec0 10 client.4129 waiting on 1 requests
 -5500> 2016-09-12 22:40:21.915748 7f12d7bf7700 10 client.4129 unmounting: trim pass, size was 0+29
 -5499> 2016-09-12 22:40:21.915749 7f12d7bf7700 10 client.4129 unmounting: trim pass, size still 0+29
     0> 2016-09-12 22:40:48.590007 7f12d7bf7700 -1 *** Caught signal (Segmentation fault) **
 in thread 7f12d7bf7700 thread_name:ms_dispatch

 ceph version v11.0.0-2234-gb324e81 (b324e81f9df3493b75d127f479924797953e4113)
 1: (()+0x24a892) [0x7f12e1b06892]
 2: (()+0x10330) [0x7f12e09b0330]
 3: (Client::_invalidate_kernel_dcache()+0x23) [0x7f12e1a71c43]
 4: (Client::trim_cache_for_reconnect(MetaSession*)+0x502) [0x7f12e1a8ffb2]
 5: (Client::handle_mds_map(MMDSMap*)+0x84a) [0x7f12e1acb4aa]
 6: (Client::ms_dispatch(Message*)+0x63b) [0x7f12e1ad091b]
 7: (DispatchQueue::entry()+0x78b) [0x7f12e1d695ab]
 8: (DispatchQueue::DispatchThread::entry()+0xd) [0x7f12e1c1b01d]
 9: (()+0x8184) [0x7f12e09a8184]
 10: (clone()+0x6d) [0x7f12df31537d]
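
An aside on the "trim pass, size was 0+29" lines above: read "0+29" as zero dentries left in the LRU plus 29 inodes still alive in inode_map, i.e. nothing left to trim but 29 inodes still pinned; those are plausibly the "unconnected inodes" Zheng identifies in #9 below. A paraphrased sketch of where the messages come from in Client::unmount() (approximate, not verbatim):

 // Approximate sketch: the two numbers are the dentry LRU size and the
 // count of live inodes; unmount keeps waiting until both reach zero.
 ldout(cct, 10) << "unmounting: trim pass, size was " << lru.lru_get_size()
                << "+" << inode_map.size() << dendl;
 trim_cache();
 ldout(cct, 10) << "unmounting: trim pass, size still " << lru.lru_get_size()
                << "+" << inode_map.size() << dendl;
 // if inodes remain pinned, wait for caps/flushes to complete and re-check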

The interesting thing is that we're going into reconnect, but this test isn't meant to have a thrasher.

Last comms from the MDS were:

 -5507> 2016-09-12 22:40:21.915722 7f12d7bf7700  1 -- 172.21.5.130:0/2129103044 <== mds.0 172.21.4.112:6808/103879 6575 ==== client_caps(grant ino 1000000018e 400 seq 17 caps=pAsLsXsFsxcrwb dirty=- wanted=LsFxcb follows 0 size 104857600/268435456 ts 1/18446744073709551615 mtime 2016-09-12 22:40:00.675688) v9 ==== 232+0+0 (1085158713 0 0) 0x7f12f1acb000 con 0x7f12eb642800

MDS respawn of a-s:

2016-09-12 22:40:48.542178 7fba3968b700 10 mds.a-s map says i am 172.21.4.112:6808/103879 mds.-1.-1 state ???
2016-09-12 22:40:48.542180 7fba3968b700  1 mds.a-s handle_mds_map i (172.21.4.112:6808/103879) dne in the mdsmap, respawning myself

MDS a takes over:

2016-09-12 22:40:48.710304 7f1e0b80d700 10 MDSIOContextBase::complete: 12C_IO_Wrapper
2016-09-12 22:40:48.710317 7f1e10016700  1 -- 172.21.5.130:6808/14196 >> 172.21.4.112:6789/0 conn(0x7f1e21655800 sd=18 :-1 s=STATE_OPEN pgs=10 cs=1 l=1). == tx == 0x7f1e21572200 mon_subscribe({osdmap=11}) v2
2016-09-12 22:40:48.710391 7f1e0b80d700 10 MDSInternalContextBase::complete: 15C_MDS_BootStart
2016-09-12 22:40:48.710401 7f1e0b80d700  2 mds.0.8 boot_start 0: opening inotable
2016-09-12 22:40:48.710407 7f1e0b80d700 10 mds.0.inotable: load
2016-09-12 22:40:48.800424 7f1e0d811700 10 mds.beacon.a _send up:replay seq 289

mds.a-s at least thought it was sending beacons the whole time until the mon stopped acking:

2016-09-12 22:40:17.392506 7fba36e86700 10 mds.beacon.a-s _send up:active seq 282
2016-09-12 22:40:21.392571 7fba36e86700 10 mds.beacon.a-s _send up:active seq 283
2016-09-12 22:40:21.805458 7fba37687700 10 mds.0.server find_idle_sessions.  laggy until 2016-09-12 22:36:14.264343
2016-09-12 22:40:21.805464 7fba37687700 20 mds.0.server laggiest active session is client.4129 172.21.5.130:0/2129103044
2016-09-12 22:40:21.805470 7fba37687700 20 mds.0.server laggiest active session is client.4129 172.21.5.130:0/2129103044 and sufficiently new (2016-09-12 22:40:11.805353)
2016-09-12 22:40:21.805474 7fba37687700 10 mds.0.server  laggy_until 2016-09-12 22:36:14.264343 > cutoff 2016-09-12 22:35:21.805462, not kicking any clients to be safe
2016-09-12 22:40:25.392678 7fba36e86700 10 mds.beacon.a-s _send up:active seq 284
2016-09-12 22:40:26.805782 7fba37687700 10 mds.0.server find_idle_sessions.  laggy until 2016-09-12 22:36:14.264343
2016-09-12 22:40:26.805788 7fba37687700 20 mds.0.server laggiest active session is client.4129 172.21.5.130:0/2129103044
2016-09-12 22:40:26.805792 7fba37687700 20 mds.0.server laggiest active session is client.4129 172.21.5.130:0/2129103044 and sufficiently new (2016-09-12 22:40:11.805353)
2016-09-12 22:40:26.805796 7fba37687700 10 mds.0.server  laggy_until 2016-09-12 22:36:14.264343 > cutoff 2016-09-12 22:35:26.805787, not kicking any clients to be safe
2016-09-12 22:40:29.392795 7fba36e86700 10 mds.beacon.a-s _send up:active seq 285
2016-09-12 22:40:31.806054 7fba37687700 10 mds.0.server find_idle_sessions.  laggy until 2016-09-12 22:36:14.264343
2016-09-12 22:40:31.806071 7fba37687700 20 mds.0.server laggiest active session is client.4129 172.21.5.130:0/2129103044
2016-09-12 22:40:31.806078 7fba37687700 20 mds.0.server laggiest active session is client.4129 172.21.5.130:0/2129103044 and sufficiently new (2016-09-12 22:40:11.805353)
2016-09-12 22:40:31.806086 7fba37687700 10 mds.0.server  laggy_until 2016-09-12 22:36:14.264343 > cutoff 2016-09-12 22:35:31.806069, not kicking any clients to be safe
2016-09-12 22:40:33.392872 7fba36e86700 10 mds.beacon.a-s _send up:active seq 286
2016-09-12 22:40:36.806080 7fba37687700 10 mds.0.server find_idle_sessions.  laggy until 2016-09-12 22:36:14.264343
2016-09-12 22:40:36.806089 7fba37687700 20 mds.0.server laggiest active session is client.4129 172.21.5.130:0/2129103044
2016-09-12 22:40:36.806098 7fba37687700 20 mds.0.server laggiest active session is client.4129 172.21.5.130:0/2129103044 and sufficiently new (2016-09-12 22:40:11.805353)
2016-09-12 22:40:36.806105 7fba37687700 10 mds.0.server  laggy_until 2016-09-12 22:36:14.264343 > cutoff 2016-09-12 22:35:36.806087, not kicking any clients to be safe
2016-09-12 22:40:37.393004 7fba36e86700 10 mds.beacon.a-s _send up:active seq 287
2016-09-12 22:40:41.393149 7fba36e86700 10 mds.beacon.a-s _send up:active seq 288
2016-09-12 22:40:41.806181 7fba37687700 10 mds.0.server find_idle_sessions.  laggy until 2016-09-12 22:36:14.264343
2016-09-12 22:40:41.806190 7fba37687700 20 mds.0.server laggiest active session is client.4129 172.21.5.130:0/2129103044
2016-09-12 22:40:41.806198 7fba37687700 20 mds.0.server laggiest active session is client.4129 172.21.5.130:0/2129103044 and sufficiently new (2016-09-12 22:40:11.805353)
2016-09-12 22:40:41.806206 7fba37687700 10 mds.0.server  laggy_until 2016-09-12 22:36:14.264343 > cutoff 2016-09-12 22:35:41.806188, not kicking any clients to be safe
2016-09-12 22:40:45.393314 7fba36e86700 10 mds.beacon.a-s _send up:active seq 289
2016-09-12 22:40:46.805467 7fba37687700  5 mds.0.6 tick bailing out since we seem laggy
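
The last line above is the MDS skipping its periodic work once its own beacon handling decides it is laggy. A paraphrased sketch of that bail-out (assuming the master-era MDSRank::tick(); approximate, not verbatim):

 // Approximate sketch: all periodic housekeeping, including
 // find_idle_sessions(), is skipped while the daemon believes its own
 // beacons are not being acked, which is why the find_idle_sessions
 // lines stop at 22:40:46.
 void MDSRank::tick()
 {
   if (beacon.is_laggy()) {
     dout(5) << "tick bailing out since we seem laggy" << dendl;
     return;
   }
   // ... find_idle_sessions(), cache trimming, log trimming, etc. ...
 }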

Interestingly, it's mon.b that was the leader; didn't we used to always have mon.a as the leader when it was available?

mon.b stops seeing the beacons from the active MDS (last seen is seq 285):

2016-09-12 22:40:01.393051 7fe33ffe6700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4115/a-s up:active seq 278 v7) v7 noting time
2016-09-12 22:40:13.517570 7fe3427eb700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4107/a up:standby seq 278 v5) v7 noting time
2016-09-12 22:40:13.517716 7fe3427eb700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4107/a up:standby seq 279 v5) v7 noting time
2016-09-12 22:40:13.517788 7fe3427eb700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4107/a up:standby seq 280 v5) v7 noting time
2016-09-12 22:40:13.518196 7fe3427eb700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4115/a-s up:active seq 279 v7) v7 noting time
2016-09-12 22:40:13.521166 7fe33ffe6700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4115/a-s up:active seq 280 v7) v7 noting time
2016-09-12 22:40:13.521332 7fe33ffe6700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4115/a-s up:active seq 281 v7) v7 noting time
2016-09-12 22:40:17.242518 7fe33ffe6700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4107/a up:standby seq 281 v5) v7 noting time
2016-09-12 22:40:17.393143 7fe33ffe6700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4115/a-s up:active seq 282 v7) v7 noting time
2016-09-12 22:40:20.799617 7fe33ffe6700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4107/a up:standby seq 282 v5) v7 noting time
2016-09-12 22:40:21.393144 7fe33ffe6700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4115/a-s up:active seq 283 v7) v7 noting time
2016-09-12 22:40:24.801459 7fe33ffe6700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4107/a up:standby seq 283 v5) v7 noting time
2016-09-12 22:40:25.393653 7fe33ffe6700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4115/a-s up:active seq 284 v7) v7 noting time
2016-09-12 22:40:28.800246 7fe33ffe6700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4107/a up:standby seq 284 v5) v7 noting time
2016-09-12 22:40:29.393627 7fe33ffe6700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4115/a-s up:active seq 285 v7) v7 noting time
2016-09-12 22:40:32.800266 7fe33ffe6700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4107/a up:standby seq 285 v5) v7 noting time
2016-09-12 22:40:47.593399 7fe3427eb700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4107/a up:standby seq 286 v5) v7 noting time
2016-09-12 22:40:47.593478 7fe3427eb700 15 mon.b@0(leader).mds e7 _note_beacon mdsbeacon(4107/a up:standby seq 287 v5) v7 noting time

#6

Updated by John Spray over 7 years ago

The beacons from the guy that got killed do make it eventually, but with a severe delay:


remote/mira032/log/ceph-mds.a-s.log.gz:2016-09-12 22:40:33.392905 7fba36e86700  1 -- 172.21.4.112:6808/103879 >> 172.21.4.112:6790/0 conn(0x7fba4a9eb800 sd=18 :-1 s=STATE_OPEN pgs=11 cs=1 l=1). == tx == 0x7fba4aa82340 mdsbeacon(4115/a-s up:active seq 286 v7) v7

remote/mira032/log/ceph-mon.c.log.gz:2016-09-12 22:40:33.393058 7fbc737df700  1 -- 172.21.4.112:6790/0 >> 172.21.4.112:6808/103879 conn(0x7fbc854b8800 sd=33 :6790 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=1 cs=1 l=1). == rx == mds.0 seq 323 0x7fbc86e0f6c0 mdsbeacon(4115/a-s up:active seq 286 v7) v7
remote/mira032/log/ceph-mon.c.log.gz:2016-09-12 22:40:33.393187 7fbc727dd700  1 -- 172.21.4.112:6790/0 <== mds.0 172.21.4.112:6808/103879 323 ==== mdsbeacon(4115/a-s up:active seq 286 v7) v7 ==== 367+0+0 (2027675394 0 0) 0x7fbc86e0f6c0 con 0x7fbc854b8800
remote/mira032/log/ceph-mon.c.log.gz:2016-09-12 22:40:33.393246 7fbc727dd700 10 mon.c@2(peon).paxosservice(mdsmap 1..7) dispatch 0x7fbc86e0f6c0 mdsbeacon(4115/a-s up:active seq 286 v7) v7 from mds.0 172.21.4.112:6808/103879 con 0x7fbc854b8800
remote/mira032/log/ceph-mon.c.log.gz:2016-09-12 22:40:48.544674 7fbc727dd700 10 mon.c@2(peon).paxosservice(mdsmap 1..8) dispatch 0x7fbc86e0f6c0 mdsbeacon(4115/a-s up:active seq 286 v7) v7 from mds.0 172.21.4.112:6808/103879 con 0x7fbc854b8800
remote/mira032/log/ceph-mon.c.log.gz:2016-09-12 22:40:48.544686 7fbc727dd700 10 mon.c@2(peon).mds e8 preprocess_query mdsbeacon(4115/a-s up:active seq 286 v7) v7 from mds.0 172.21.4.112:6808/103879
remote/mira032/log/ceph-mon.c.log.gz:2016-09-12 22:40:48.544696 7fbc727dd700 12 mon.c@2(peon).mds e8 preprocess_beacon mdsbeacon(4115/a-s up:active seq 286 v7) v7 from mds.0 172.21.4.112:6808/103879 compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
remote/mira032/log/ceph-mon.c.log.gz:2016-09-12 22:40:48.544710 7fbc727dd700 10 mon.c@2(peon) e1 forward_request 765 request mdsbeacon(4115/a-s up:active seq 286 v7) v7 features 1152921504336314367
remote/mira032/log/ceph-mon.c.log.gz:2016-09-12 22:40:48.544714 7fbc727dd700  1 -- 172.21.4.112:6790/0 _send_message--> mon.0 172.21.4.112:6789/0 -- forward(mdsbeacon(4115/a-s up:active seq 286 v7) v7 caps allow * tid 765 con_features 1152921504336314367) to leader v3 -- ?+0 0x7fbc86f1ab00
remote/mira032/log/ceph-mon.c.log.gz:2016-09-12 22:40:48.544721 7fbc727dd700  1 -- 172.21.4.112:6790/0 >> 172.21.4.112:6789/0 conn(0x7fbc854ad000 sd=21 :-1 s=STATE_OPEN pgs=5 cs=1 l=0). == tx == 0x7fbc86f1ab00 forward(mdsbeacon(4115/a-s up:active seq 286 v7) v7 caps allow * tid 765 con_features 1152921504336314367) to leader v3

remote/mira032/log/ceph-mon.b.log.gz:2016-09-12 22:40:48.544817 7fe340fe8700  1 -- 172.21.4.112:6789/0 >> 172.21.4.112:6790/0 conn(0x7fe353203800 sd=30 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=3 cs=1 l=0). == rx == mon.2 seq 2510 0x7fe354495600 forward(mdsbeacon(4115/a-s up:active seq 286 v7) v7 caps allow * tid 765 con_features 1152921504336314367) to leader v3
remote/mira032/log/ceph-mon.b.log.gz:2016-09-12 22:40:48.545342 7fe33ffe6700  1 -- 172.21.4.112:6789/0 <== mon.2 172.21.4.112:6790/0 2510 ==== forward(mdsbeacon(4115/a-s up:active seq 286 v7) v7 caps allow * tid 765 con_features 1152921504336314367) to leader v3 ==== 634+0+0 (1224176700 0 0) 0x7fe354495600 con 0x7fe353203800
remote/mira032/log/ceph-mon.b.log.gz:2016-09-12 22:40:48.545391 7fe33ffe6700 10 mon.b@0(leader).paxosservice(mdsmap 1..8) dispatch 0x7fe354892340 mdsbeacon(4115/a-s up:active seq 286 v7) v7 from mds.0 172.21.4.112:6808/103879 con 0x7fe35359b100
remote/mira032/log/ceph-mon.b.log.gz:2016-09-12 22:40:48.545401 7fe33ffe6700 10 mon.b@0(leader).mds e8 preprocess_query mdsbeacon(4115/a-s up:active seq 286 v7) v7 from mds.0 172.21.4.112:6808/103879
remote/mira032/log/ceph-mon.b.log.gz:2016-09-12 22:40:48.545411 7fe33ffe6700 12 mon.b@0(leader).mds e8 preprocess_beacon mdsbeacon(4115/a-s up:active seq 286 v7) v7 from mds.0 172.21.4.112:6808/103879 compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
remote/mira032/log/ceph-mon.b.log.gz:2016-09-12 22:40:48.545416 7fe33ffe6700  7 mon.b@0(leader).mds e8 mds_beacon mdsbeacon(4115/a-s up:active seq 286 v7) v7 is not in fsmap (state up:active)
remote/mira032/log/ceph-mon.b.log.gz:2016-09-12 22:40:48.545431 7fe33ffe6700 15 mon.b@0(leader) e1 send_reply routing reply to 172.21.4.112:6808/103879 via 172.21.4.112:6790/0 for request mdsbeacon(4115/a-s up:active seq 286 v7) v7

So, this is actually a mon glitch causing a big delay in forwarding a beacon, which got an MDS unexpectedly taken down and sent the client into reconnect. On top of that we have a client bug when reconnect happens during unmount, which we never noticed because we don't usually thrash that case, I guess.
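
To make that failure mode concrete, here is a minimal self-contained toy (not Ceph code; the names merely mirror the call paths in the backtraces above) showing the shape of the race between the unmount path and the messenger dispatch thread:

 // Toy illustration of the race shape; build: g++ -std=c++11 -pthread
 #include <cstdio>
 #include <thread>

 struct Dir { int dentries = 0; };
 struct Inode { Dir *dir = new Dir; };

 static Inode *root = new Inode;   // stands in for Client::root

 void dispatch_thread() {
   // handle_mds_map -> trim_cache_for_reconnect -> _invalidate_kernel_dcache:
   // dereferences root without checking whether unmount already released it.
   if (root->dir)
     std::printf("invalidating %d dentries\n", root->dir->dentries);
 }

 int main() {
   std::thread t(dispatch_thread);
   // unmount path: put_inode() drops the last reference and deletes root.
   delete root->dir;
   delete root;
   root = nullptr;       // too late if the dispatch thread already read root
   t.join();
   return 0;
 }

As in the real bug, whether this crashes is timing-dependent: the dispatch thread has to read root after the unmount path has deleted it.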

#7

Updated by John Spray over 7 years ago

  • Subject changed from Crash in Client::_invalidate_kernel_dcache to Crash in Client::_invalidate_kernel_dcache when reconnecting during unmount
#8

Updated by John Spray over 7 years ago

  • Assignee set to Zheng Yan

Zheng: do you have any ideas about why this path is crashing when called during unmount?

#9

Updated by Zheng Yan over 7 years ago

  • Assignee changed from Zheng Yan to John Spray

There are some unconnected inodes (due to dirty data flush?); the root inode is deleted before the unconnected inodes get released (see the sketch after the log excerpt below):

 -5766> 2016-09-12 22:40:21.900722 7f12e20e4ec0 10 client.4129 put_inode on 1.head(faked_ino=0 ref=1 ll_ref=0 cap_refs={1024=0} open={} mode=41777 size=0/0 mtime=2016-09-12 22:21:59.363223 caps=p(0=p) has_dir_layout 0x7f12eb52fe00)
 -5765> 2016-09-12 22:40:21.900728 7f12e20e4ec0 10 client.4129 remove_cap mds.0 on 1.head(faked_ino=0 ref=0 ll_ref=0 cap_refs={1024=0} open={} mode=41777 size=0/0 mtime=2016-09-12 22:21:59.363223 caps=p(0=p) has_dir_layout 0x7f12eb52fe00)
 -5764> 2016-09-12 22:40:21.900735 7f12e20e4ec0 10 client.4129 put_inode deleting 1.head(faked_ino=0 ref=0 ll_ref=0 cap_refs={1024=0} open={} mode=41777 size=0/0 mtime=2016-09-12 22:21:59.363223 caps=- has_dir_layout 0x7f12eb52fe00)
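
Given that analysis, the natural guard is to make _invalidate_kernel_dcache() a no-op once unmount has begun. A minimal sketch of one plausible fix (an assumption, not confirmed to be the actual patch under review in #10; it assumes the existing `unmounting` flag set in Client::unmount()):

 // Sketch of a plausible guard (assumed, not the confirmed patch): bail
 // out early once unmount has started, so `root` is never touched after
 // put_inode() may have deleted it.
 void Client::_invalidate_kernel_dcache()
 {
   if (unmounting)
     return;
   if (can_invalidate_dentries && dentry_invalidate_cb && root->dir) {
     // ... schedule dentry invalidation upcalls as before ...
   }
 }

A single early return like this would cover both callers seen in the two backtraces (trim_caps() and trim_cache_for_reconnect()), rather than null-checking `root` at each call site.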

#10

Updated by Zheng Yan over 7 years ago

  • Status changed from New to Fix Under Review
#11

Updated by John Spray over 7 years ago

  • Status changed from Fix Under Review to Pending Backport
  • Backport set to jewel
#12

Updated by Loïc Dachary over 7 years ago

  • Copied to Backport #17477: jewel: Crash in Client::_invalidate_kernel_dcache when reconnecting during unmount added
#13

Updated by Loïc Dachary over 7 years ago

  • Status changed from Pending Backport to Resolved
