Bug #23919 (closed): mds: stuck during up:stopping

Added by Patrick Donnelly almost 6 years ago. Updated almost 6 years ago.

Status: Resolved
Priority: Urgent
Category: -
Target version: -
% Done: 0%
Source: Support
Tags: -
Backport: luminous
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Component(FS): MDS
Labels (FS): multimds
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

2018-04-27 19:26:16.626260 7f4add9be700  7 mds.1.cache trim bytes_used=26kB limit=1GB reservation=0.05% count=0
2018-04-27 19:26:16.626280 7f4add9be700  7 mds.1.cache trim_lru trimming 0 items from LRU size=4 mid=0 pintail=3 pinned=3
2018-04-27 19:26:16.626289 7f4add9be700  7 mds.1.cache trim_lru trimmed 0 items
2018-04-27 19:26:16.626292 7f4add9be700 20 mds.1.cache trim: maybe trimming base: [inode 0x1 [...2,head] / rep@0.2 v19387 snaprealm=0x564b7a399440 f(v0 m2018-04-26 16:24:17.790447 2=0+2) n(v2776 rc2018-04-27 09:55:23.568156 b1609945599898 3160260=3156697+3563)/n(v0 1=0+1) (inest mix) (iversion lock) | lock=0 dirfrag=1 0x564b7a360c00]
2018-04-27 19:26:16.626307 7f4add9be700 10 mds.1.cache trim_client_leases
2018-04-27 19:26:16.626781 7f4add9be700  2 mds.1.cache check_memory_usage total 1145100, rss 768440, heap 315452, baseline 315452, buffers 0, 0 / 6 inodes have caps, 0 caps, 0 caps per inode
2018-04-27 19:26:16.626802 7f4add9be700 10 mds.1.log trim 30 / 30 segments, 23850 / -1 events, 0 (0) expiring, 0 (0) expired
2018-04-27 19:26:16.626806 7f4add9be700 10 mds.1.log _trim_expired_segments waiting for 41040378/150726511930 to expire
2018-04-27 19:26:16.626834 7f4add9be700 15 mds.1.bal get_load mdsload<[0,0 0]/[0,0 0], req 24967, hr 0, qlen 0, cpu 0.39>
2018-04-27 19:26:16.626858 7f4add9be700 10 mds.1.server find_idle_sessions.  laggy until 0.000000
2018-04-27 19:26:16.626862 7f4add9be700 10 mds.1.locker scatter_tick
2018-04-27 19:26:16.626864 7f4add9be700 20 mds.1.locker caps_tick 0 revoking caps
2018-04-27 19:26:16.626866 7f4add9be700 20 mds.1.13587 updating export targets, currently 0 ranks are targets
2018-04-27 19:26:16.626868 7f4add9be700 10 mds.1.log trim 30 / 30 segments, 23850 / -1 events, 0 (0) expiring, 0 (0) expired
2018-04-27 19:26:16.626871 7f4add9be700 10 mds.1.log _trim_expired_segments waiting for 41040378/150726511930 to expire
2018-04-27 19:26:16.626873 7f4add9be700  7 mds.1.cache shutdown_pass
2018-04-27 19:26:16.626874 7f4add9be700 10 mds.1.cache shutdown_export_strays
2018-04-27 19:26:16.626878 7f4add9be700  7 mds.1.cache trim bytes_used=26kB limit=1GB reservation=0.05% count=18446744073709551615
2018-04-27 19:26:16.626886 7f4add9be700  7 mds.1.cache trim_lru trimming 18446744073709551615 items from LRU size=4 mid=0 pintail=3 pinned=3
2018-04-27 19:26:16.626894 7f4add9be700 12 mds.1.cache trim_dentry [dentry #0x1/fuse/client1/dir2/level00 [2,head] rep@0,-2.1 (dversion lock) v=42699 inode=0x564b7d75a600 state=0 0x564b82cf5880]
2018-04-27 19:26:16.626902 7f4add9be700 12 mds.1.cache  in container [dir 0x1 / [2,head] rep@0.3 REP dir_auth=0 state=0 f(v0 m2018-04-26 16:24:17.790447 2=0+2) n(v2742 rc2018-04-27 09:45:21.578177 b1596541555640 3094096=3090658+3438)/n(v2742 rc2018-04-27 09:44:59.614178 b1596255294385 3093001=3089563+3438) hs=1+0,ss=0+0 | child=1 subtree=1 subtreetemp=0 0x564b7a40ce00]
2018-04-27 19:26:16.626915 7f4add9be700 15 mds.1.cache trim_inode [inode 0x20000205a64 [...2,head] /fuse/client1/dir2/level00/ rep@0.1 fragtree_t(*^3) v42701 f(v1 m2018-04-27 09:47:18.644696 10590=10590+0) n(v10 rc2018-04-27 09:47:18.644696 b261563080 10591=10590+1) (idft lock) (inest mix) (iversion lock) 0x564b7d75a600]
2018-04-27 19:26:16.626932 7f4add9be700  7 mds.1.cache trim_lru trimmed 0 items
2018-04-27 19:26:16.626935 7f4add9be700 20 mds.1.cache trim: maybe trimming base: [inode 0x1 [...2,head] / rep@0.2 v19387 snaprealm=0x564b7a399440 f(v0 m2018-04-26 16:24:17.790447 2=0+2) n(v2776 rc2018-04-27 09:55:23.568156 b1609945599898 3160260=3156697+3563)/n(v0 1=0+1) (inest mix) (iversion lock) | lock=0 dirfrag=1 0x564b7a360c00]
2018-04-27 19:26:16.626946 7f4add9be700  5 mds.1.cache lru size now 4/0
2018-04-27 19:26:16.626949 7f4add9be700  7 mds.1.cache looking for subtrees to export to mds0
2018-04-27 19:26:16.626952 7f4add9be700  7 mds.1.cache still have 2 subtrees
2018-04-27 19:26:16.626953 7f4add9be700 15 mds.1.cache show_subtrees
2018-04-27 19:26:16.626958 7f4add9be700 10 mds.1.cache |__ 1    auth [dir 0x101 ~mds1/ [2,head] auth v=54 cv=54/54 dir_auth=1 state=1073741824 f(v0 10=0+10) n(v10 rc2018-04-27 09:39:57.907164 b24 19=6+13) hs=0+0,ss=0+0 | child=0 subtree=1 subtreetemp=0 dirty=0 waiter=0 authpin=0 0x564b7a40ca80]
2018-04-27 19:26:16.626970 7f4add9be700 10 mds.1.cache |__ 0     rep [dir 0x1 / [2,head] rep@0.3 REP dir_auth=0 state=0 f(v0 m2018-04-26 16:24:17.790447 2=0+2) n(v2742 rc2018-04-27 09:45:21.578177 b1596541555640 3094096=3090658+3438)/n(v2742 rc2018-04-27 09:44:59.614178 b1596255294385 3093001=3089563+3438) hs=1+0,ss=0+0 | child=1 subtree=1 subtreetemp=0 0x564b7a40ce00]
2018-04-27 19:26:16.626986 7f4add9be700 10 mds.1.migrator show_importing
2018-04-27 19:26:16.626988 7f4add9be700 10 mds.1.migrator show_exporting
2018-04-27 19:26:16.626990 7f4add9be700  7 mds.1.cache show_cache
2018-04-27 19:26:16.626992 7f4add9be700  7 mds.1.cache   dirfrag [dir 0x20000205a64.000* /fuse/client1/dir2/level00/ [2,head] rep@0.1 state=0 f(v0 1342=1342+0)/f(v0 m2018-04-27 09:47:18.644696 1342=1342+0) n(v10 b28308992 1342=1342+0) hs=0+0,ss=0+0 0x564b84fff180]
2018-04-27 19:26:16.627000 7f4add9be700  7 mds.1.cache   dirfrag [dir 0x20000205a64.001* /fuse/client1/dir2/level00/ [2,head] rep@0.1 state=0 f(v0 1346=1346+0) n(v10 b37861295 1346=1346+0) hs=0+0,ss=0+0 0x564b7a626700]
2018-04-27 19:26:16.627006 7f4add9be700  7 mds.1.cache   dirfrag [dir 0x20000205a64.010* /fuse/client1/dir2/level00/ [2,head] rep@0.1 state=0 f(v0 1210=1210+0) n(v10 b26127187 1210=1210+0) hs=0+0,ss=0+0 0x564b83255880]
2018-04-27 19:26:16.627011 7f4add9be700  7 mds.1.cache   dirfrag [dir 0x20000205a64.011* /fuse/client1/dir2/level00/ [2,head] rep@0.1 state=0 f(v0 1311=1311+0) n(v10 b36629092 1311=1311+0) hs=0+0,ss=0+0 0x564b84fff880]
2018-04-27 19:26:16.627015 7f4add9be700  7 mds.1.cache   dirfrag [dir 0x20000205a64.100* /fuse/client1/dir2/level00/ [2,head] rep@0.1 state=0 f(v0 1350=1350+0) n(v10 b34019854 1350=1350+0) hs=0+0,ss=0+0 0x564b996baa80]
2018-04-27 19:26:16.627020 7f4add9be700  7 mds.1.cache   dirfrag [dir 0x20000205a64.101* /fuse/client1/dir2/level00/ [2,head] rep@0.1 state=0 f(v0 1372=1372+0) n(v10 b28344064 1372=1372+0) hs=0+0,ss=0+0 0x564b99b60e00]
2018-04-27 19:26:16.627025 7f4add9be700  7 mds.1.cache   dirfrag [dir 0x20000205a64.110* /fuse/client1/dir2/level00/ [2,head] rep@0.1 state=0 f(v0 1294=1294+0) n(v10 b37347744 1294=1294+0) hs=0+0,ss=0+0 0x564b83254e00]
2018-04-27 19:26:16.627029 7f4add9be700  7 mds.1.cache   dirfrag [dir 0x20000205a64.111* /fuse/client1/dir2/level00/ [2,head] rep@0.1 state=0 f(v0 1365=1365+0) n(v10 b32924852 1365=1365+0) hs=0+0,ss=0+0 0x564b7b363500]
2018-04-27 19:26:16.627034 7f4add9be700  7 mds.1.cache   dirfrag [dir 0x10000000004 /fuse/client1/ [2,head] rep@0.1 state=0 f(v0 m2018-04-27 09:37:12.836613 8=0+8) n(v793 rc2018-04-27 09:47:18.644696 b100525070473 1173823=1173794+29) hs=1+0,ss=0+0 | child=1 replicated=0 dirty=0 authpin=0 tempexporting=0 0x564b805fca80]
2018-04-27 19:26:16.627043 7f4add9be700  7 mds.1.cache    dentry [dentry #0x1/fuse/client1/dir2 [2,head] rep@0,-2.1 (dversion lock) v=12979 inode=0x564b7d757600 state=0 | inodepin=1 replicated=0 dirty=0 tempexporting=0 0x564b82cf5180]
2018-04-27 19:26:16.627050 7f4add9be700  7 mds.1.cache     inode [inode 0x10000000007 [...2,head] /fuse/client1/dir2/ rep@0.1 v12979 f(v0 m2018-04-27 09:40:00.532199 1=0+1) n(v24 rc2018-04-27 09:47:18.644696 b261563080 10592=10590+2)/n(v23 rc2018-04-27 09:47:18.644696 b261563080 10592=10590+2) (inest mix) (iversion lock) | dirtyscattered=0 lock=0 dirfrag=1 caps=0 dirtyrstat=0 exportingcaps=0 replicated=0 dirty=0 authpin=0 tempexporting=0 0x564b7d757600]
2018-04-27 19:26:16.627066 7f4add9be700  7 mds.1.cache  unlinked [inode 0x101 [...2,head] ~mds1/ auth v21 snaprealm=0x564b7a10db00 f(v0 10=0+10) n(v10 rc2018-04-27 09:39:57.907164 b24 20=6+14)/n(v0 11=0+11) (inest lock) (iversion lock) | dirtyscattered=0 lock=0 dirfrag=1 dirty=0 waiter=0 authpin=0 0x564b7a360600]
2018-04-27 19:26:16.627077 7f4add9be700  7 mds.1.cache   dirfrag [dir 0x101 ~mds1/ [2,head] auth v=54 cv=54/54 dir_auth=1 state=1073741824 f(v0 10=0+10) n(v10 rc2018-04-27 09:39:57.907164 b24 19=6+13) hs=0+0,ss=0+0 | child=0 subtree=1 subtreetemp=0 dirty=0 waiter=0 authpin=0 0x564b7a40ca80]
2018-04-27 19:26:16.627086 7f4add9be700  7 mds.1.cache   dirfrag [dir 0x10000000007 /fuse/client1/dir2/ [2,head] rep@0.1 state=0 f(v0 m2018-04-27 09:40:00.532199 1=0+1) n(v24 rc2018-04-27 09:47:18.644696 b261563080 10591=10590+1) hs=1+0,ss=0+0 | child=1 replicated=0 dirty=0 authpin=0 tempexporting=0 0x564b7fbe9180]
2018-04-27 19:26:16.627095 7f4add9be700  7 mds.1.cache    dentry [dentry #0x1/fuse/client1/dir2/level00 [2,head] rep@0,-2.1 (dversion lock) v=42699 inode=0x564b7d75a600 state=0 0x564b82cf5880]
2018-04-27 19:26:16.627100 7f4add9be700  7 mds.1.cache     inode [inode 0x20000205a64 [...2,head] /fuse/client1/dir2/level00/ rep@0.1 fragtree_t(*^3) v42701 f(v1 m2018-04-27 09:47:18.644696 10590=10590+0) n(v10 rc2018-04-27 09:47:18.644696 b261563080 10591=10590+1) (idft lock) (inest mix) (iversion lock) 0x564b7d75a600]
2018-04-27 19:26:16.627111 7f4add9be700  7 mds.1.cache   dirfrag [dir 0x10000000001 /fuse/ [2,head] rep@0.1 state=0 f(v0 m2018-04-26 16:24:49.140547 2=0+2) n(v1755 rc2018-04-27 09:47:18.644696 b178279774897 1273525=1272100+1425)/n(v1755 rc2018-04-27 09:47:18.644696 b178262997681 1273525=1272100+1425) hs=1+0,ss=0+0 | child=1 frozen=0 subtree=0 subtreetemp=0 replicated=0 dirty=0 waiter=0 authpin=0 tempexporting=0 0x564b7a40d180]
2018-04-27 19:26:16.627126 7f4add9be700  7 mds.1.cache    dentry [dentry #0x1/fuse/client1 [2,head] rep@0,-2.1 (dversion lock) v=27172 inode=0x564b80623000 state=0 | request=0 lock=0 inodepin=1 replicated=0 dirty=0 authpin=0 tempexporting=0 0x564b8061cfc0]
2018-04-27 19:26:16.627145 7f4add9be700  7 mds.1.cache     inode [inode 0x10000000004 [...2,head] /fuse/client1/ rep@0.1 v27185 f(v0 m2018-04-27 09:37:12.836613 8=0+8) n(v795 rc2018-04-27 09:47:18.644696 b100525070473 1173824=1173794+30)/n(v793 rc2018-04-27 09:47:18.644696 b100525070473 1173824=1173794+30) (inest mix) (iversion lock) | dirtyscattered=0 lock=0 dirfrag=1 caps=0 dirtyrstat=0 exportingcaps=0 replicated=0 dirty=0 authpin=0 tempexporting=0 0x564b80623000]
2018-04-27 19:26:16.627159 7f4add9be700  7 mds.1.cache  unlinked [inode 0x1 [...2,head] / rep@0.2 v19387 snaprealm=0x564b7a399440 f(v0 m2018-04-26 16:24:17.790447 2=0+2) n(v2776 rc2018-04-27 09:55:23.568156 b1609945599898 3160260=3156697+3563)/n(v0 1=0+1) (inest mix) (iversion lock) | lock=0 dirfrag=1 0x564b7a360c00]
2018-04-27 19:26:16.627167 7f4add9be700  7 mds.1.cache   dirfrag [dir 0x1 / [2,head] rep@0.3 REP dir_auth=0 state=0 f(v0 m2018-04-26 16:24:17.790447 2=0+2) n(v2742 rc2018-04-27 09:45:21.578177 b1596541555640 3094096=3090658+3438)/n(v2742 rc2018-04-27 09:44:59.614178 b1596255294385 3093001=3089563+3438) hs=1+0,ss=0+0 | child=1 subtree=1 subtreetemp=0 0x564b7a40ce00]
2018-04-27 19:26:16.627177 7f4add9be700  7 mds.1.cache    dentry [dentry #0x1/fuse [2,head] rep@0,-2.2 (dversion lock) v=0 inode=0x564b7a361200 state=0 | lock=0 inodepin=1 0x564b7a5f2700]
2018-04-27 19:26:16.627181 7f4add9be700  7 mds.1.cache     inode [inode 0x10000000001 [...2,head] /fuse/ rep@0.2 v47838 f(v0 m2018-04-26 16:24:49.140547 2=0+2) n(v1758 rc2018-04-27 09:47:18.644696 b178280823473 1273526=1272100+1426)/n(v1747 rc2018-04-27 09:44:34.373450 b178080699656 1269888=1268462+1426) (inest mix) (iversion lock) | dirtyscattered=0 request=0 lock=0 dirfrag=1 caps=0 waiter=0 0x564b7a361200]
2018-04-27 19:26:16.627194 7f4add9be700  7 mds.1.13587 shutdown_pass=false
2018-04-27 19:26:16.627196 7f4add9be700 20 mds.beacon.magna058 0 slow request found
2018-04-27 19:26:19.739957 7f4add1bd700 10 mds.beacon.magna058 _send up:stopping seq 8747
2018-04-27 19:26:19.741275 7f4ae01c3700 10 mds.beacon.magna058 handle_mds_beacon up:stopping seq 8747 rtt 0.001307

This repeats ad infinitum.
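
An aside on reading the log: the trim count 18446744073709551615 is (uint64_t)-1, i.e. UINT64_MAX; shutdown_pass requests an unlimited trim that should empty the cache, yet trim_lru trims 0 items. A one-line check of the value, using nothing beyond standard C++:

#include <cstdint>
#include <iostream>

int main() {
    // -1 converted to uint64_t wraps to UINT64_MAX, the "unlimited" count in the log
    std::cout << static_cast<uint64_t>(-1) << "\n";  // prints 18446744073709551615
    return 0;
}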

See log from: /ceph/tmp/pdonnell/bz1566016

ceph-mds.magna058.log.2.gz is the (large) log of the MDS prior to up:stopping and including most of up:stopping.

ceph-mds.magna058.log.1.gz is a small log of the MDS stuck in up:stopping.


Related issues: 1 (0 open, 1 closed)

Copied to: CephFS - Backport #23951: luminous: mds: stuck during up:stopping (Resolved, Prashant D)
#1

Updated by Patrick Donnelly almost 6 years ago

The crux of the issue appears to be here:

2018-04-27 19:24:21.624902 7f4add9be700 12 mds.1.cache trim_dentry [dentry #0x1/fuse/client1/dir2/level00 [2,head] rep@0,-2.1 (dversion lock) v=42699 inode=0x564b7d75a600 state=0 0x564b82cf5880]
2018-04-27 19:24:21.624912 7f4add9be700 12 mds.1.cache  in container [dir 0x1 / [2,head] rep@0.3 REP dir_auth=0 state=0 f(v0 m2018-04-26 16:24:17.790447 2=0+2) n(v2742 rc2018-04-27 09:45:21.578177 b1596541555640 3094096=3090658+3438)/n(v2742 rc2018-04-27 09:44:59.614178 b1596255294385 3093001=3089563+3438) hs=1+0,ss=0+0 | child=1 subtree=1 subtreetemp=0 0x564b7a40ce00]
2018-04-27 19:24:21.624927 7f4add9be700 15 mds.1.cache trim_inode [inode 0x20000205a64 [...2,head] /fuse/client1/dir2/level00/ rep@0.1 fragtree_t(*^3) v42701 f(v1 m2018-04-27 09:47:18.644696 10590=10590+0) n(v10 rc2018-04-27 09:47:18.644696 b261563080 10591=10590+1) (idft lock) (inest mix) (iversion lock) 0x564b7d75a600]

trim_inode fails on 0x20000205a64 because !in->dirfragtreelock.can_read(-1):

https://github.com/ceph/ceph/blob/master/src/mds/MDCache.cc#L6896-L6904
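
To illustrate the livelock, here is a minimal, self-contained model of that guard. SimpleLock, CInode, and trim_inode below are stripped-down stand-ins, not the real Ceph classes:

#include <iostream>

// Stand-in for Ceph's SimpleLock; only the readability query matters here.
struct SimpleLock {
    bool stable_readable = false;          // models an IDFT lock stuck unreadable
    bool can_read(int /*client*/) const { return stable_readable; }
};

// Stand-in for Ceph's CInode.
struct CInode {
    SimpleLock dirfragtreelock;
    bool is_dir() const { return true; }
};

// Returns true if the replica inode can be expired. Mirrors the failure path in
// the log: a directory inode whose dirfragtree lock is unreadable is skipped,
// so trim_lru trims 0 items.
bool trim_inode(CInode *in) {
    if (in->is_dir() && !in->dirfragtreelock.can_read(-1))
        return false;                      // cannot trim; nothing kicks the lock
    // ... (the real code expires the replica and notifies the auth MDS) ...
    return true;
}

int main() {
    CInode stuck;                          // plays the role of 0x20000205a64
    // shutdown_pass retries trim() forever; the lock state never changes, so
    // every pass prints false, matching the endless "shutdown_pass=false".
    for (int pass = 0; pass < 3; ++pass)
        std::cout << "trim attempt " << pass << " -> "
                  << std::boolalpha << trim_inode(&stuck) << "\n";
    return 0;
}

Because no code path ever makes the lock readable again on the replica, every shutdown pass hits the same guard and the daemon never leaves up:stopping.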

#2

Updated by Patrick Donnelly almost 6 years ago

/ceph/tmp/pdonnell/bz1566016/0x20000205a64.log.gz holds the output of

zgrep -C5 0x20000205a64 ceph-mds.magna052.log.1.gz

on rank 0.

#3

Updated by Zheng Yan almost 6 years ago

I think we should call Locker::_readlock_kick in this case.

#4

Updated by Patrick Donnelly almost 6 years ago

  • Status changed from New to Fix Under Review
  • Assignee set to Patrick Donnelly

Zheng Yan wrote:

I think we should call Locker::_readlock_kick in this case.

https://github.com/ceph/ceph/pull/21720

Look right?
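
To make the suggestion concrete, here is a hypothetical sketch of the shape such a fix could take, reusing the simplified stand-in types from note #1. readlock_kick is an illustrative stand-in for Locker::_readlock_kick, not the actual PR diff:

#include <iostream>

// Same simplified stand-ins as the sketch in note #1.
struct SimpleLock {
    bool stable_readable = false;
    bool can_read(int /*client*/) const { return stable_readable; }
};

struct CInode {
    SimpleLock dirfragtreelock;
    bool is_dir() const { return true; }
};

struct Locker {
    // Illustrative stand-in for Locker::_readlock_kick: nudge the lock back
    // toward a readable stable state (in Ceph this can involve messaging the
    // auth MDS; here it just flips the flag).
    void readlock_kick(SimpleLock &lock) { lock.stable_readable = true; }
};

bool trim_inode(CInode *in, Locker &locker) {
    if (in->is_dir() && !in->dirfragtreelock.can_read(-1)) {
        locker.readlock_kick(in->dirfragtreelock);  // the added step
        return false;                               // still fails this pass
    }
    return true;                                    // replica expired
}

int main() {
    CInode stuck;
    Locker locker;
    std::cout << std::boolalpha
              << trim_inode(&stuck, locker) << "\n"   // false: kicked the lock
              << trim_inode(&stuck, locker) << "\n";  // true: next pass trims
    return 0;
}

The first pass still fails, but it leaves the lock in a state a later shutdown_pass can trim, so up:stopping can complete instead of looping.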

#5

Updated by Patrick Donnelly almost 6 years ago

  • Status changed from Fix Under Review to Pending Backport

#6

Updated by Nathan Cutler almost 6 years ago

#7

Updated by Nathan Cutler almost 6 years ago

  • Status changed from Pending Backport to Resolved