Bug #20039

mds: replay of export pinned inode does not result in export

Added by Patrick Donnelly 2 months ago. Updated about 2 months ago.

Status:
Resolved
Priority:
Normal
Category:
multi-MDS
Target version:
-
Start date:
05/22/2017
Due date:
% Done:

0%

Source:
Development
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Release:
Component(FS):
MDS
Needs Doc:
No

Description

Found this while thrashing exports. Example log (note `export_pin=1` on inode 10000000000 in the final replay line):

2017-05-19 12:15:26.970543 7fe66e3ec700 10 mds.0.journal EMetaBlob.replay updated dir [dir 10000000000 /files.1/ [2,head] auth v=201 cv=0/0 state=1610612738|complete f(v0 m2017-05-19 12:15:06.832976 100=100+0) n(v0 rc2017-05-19 12:15:06.832976 100=100+0) hs=99+0,ss=0+0 dirty=99 | child=1 dirty=1 0x55c799438d80]
2017-05-19 12:15:26.970556 7fe66e3ec700 12 mds.0.cache.dir(10000000000) add_null_dentry [dentry #1/files.1/file_99 [2,head] auth NULL (dversion lock) pv=0 v=201 inode=0 0x55c7995fdc00]
2017-05-19 12:15:26.970560 7fe66e3ec700 10 mds.0.journal EMetaBlob.replay added (full) [dentry #1/files.1/file_99 [2,head] auth NULL (dversion lock) v=200 inode=0 | dirty=1 0x55c7995fdc00]
2017-05-19 12:15:26.970566 7fe66e3ec700 12 mds.0.cache.dir(10000000000) link_primary_inode [dentry #1/files.1/file_99 [2,head] auth NULL (dversion lock) v=200 inode=0 | dirty=1 0x55c7995fdc00] [inode 10000000064 [2,head] #10000000064 auth v200 s=0 n(v0 1=1+0) (iversion lock) cr={4741=0-4194304@1} 0x55c7995eea00]
2017-05-19 12:15:26.970587 7fe66e3ec700 10 mds.0.journal EMetaBlob.replay added [inode 10000000064 [2,head] /files.1/file_99 auth v200 s=0 n(v0 1=1+0) (iversion lock) cr={4741=0-4194304@1} 0x55c7995eea00]
2017-05-19 12:15:26.970594 7fe66e3ec700 10 mds.0.cache.ino(10000000064) mark_dirty_parent
2017-05-19 12:15:26.970596 7fe66e3ec700 10 mds.0.journal EMetaBlob.replay noting opened inode [inode 10000000064 [2,head] /files.1/file_99 auth v200 dirtyparent s=0 n(v0 1=1+0) (iversion lock) cr={4741=0-4194304@1} | dirtyparent=1 dirty=1 0x55c7995eea00]
2017-05-19 12:15:26.970600 7fe66e3ec700 10 mds.0.journal EMetaBlob.replay inotable tablev 2 <= table 2
2017-05-19 12:15:26.970601 7fe66e3ec700 10 mds.0.journal EMetaBlob.replay sessionmap v 102 -(1|2) == table 101 prealloc [] used 10000000064
2017-05-19 12:15:26.970603 7fe66e3ec700 20 mds.0.journal  (session prealloc [10000000064~385])
2017-05-19 12:15:26.970605 7fe66e3ec700 20 mds.0.sessionmap replay_dirty_session s=0x55c799412d80 name=client.4741 v=101
2017-05-19 12:15:26.970607 7fe66e3ec700 10 mds.0.journal EMetaBlob.replay request client.4741:104 trim_to 5
2017-05-19 12:15:26.970617 7fe66e3ec700 10 mds.0.log _replay 4482148~872 / 4485531 2017-05-19 12:15:06.835085: ESubtreeMap 2 subtrees , 0 ambiguous [metablob 1, 2 dirs]
2017-05-19 12:15:26.970623 7fe66e3ec700 10 mds.0.journal ESubtreeMap.replay -- i already have import map; verifying
2017-05-19 12:15:26.970634 7fe66e3ec700 10 mds.0.log _replay 4483040~1579 / 4485531 2017-05-19 12:15:09.092676: EUpdate set vxattr layout [metablob 1, 1 dirs]
2017-05-19 12:15:26.970638 7fe66e3ec700 10 mds.0.journal EMetaBlob.replay 1 dirlumps by unknown.0
2017-05-19 12:15:26.970641 7fe66e3ec700 10  mds.0.cache.snaprealm(1 seq 1 0x55c7992ba080) open_parents [1,head]
2017-05-19 12:15:26.970643 7fe66e3ec700 20 mds.0.cache.ino(1) decode_snap_blob snaprealm(1 seq 1 lc 0 cr 0 cps 1 snaps={} 0x55c7992ba080)
2017-05-19 12:15:26.970645 7fe66e3ec700 10 mds.0.journal EMetaBlob.replay  updated root [inode 1 [...2,head] / auth v4 snaprealm=0x55c7992ba080 f(v0 m2017-05-19 12:15:06.500038 1=0+1) n(v0 1=0+1) (inest sync dirty) (iversion lock) | dirtyscattered=1 dirfrag=1 dirty=1 0x55c7992f3500]
2017-05-19 12:15:26.970652 7fe66e3ec700 10 mds.0.journal EMetaBlob.replay dir 1
2017-05-19 12:15:26.970654 7fe66e3ec700 10 mds.0.cache.dir(1) mark_dirty (already dirty) [dir 1 / [2,head] auth v=205 cv=0/0 dir_auth=0 state=1610612736 f(v0 m2017-05-19 12:15:06.500038 1=0+1) n(v0 rc2017-05-19 12:15:06.832976 101=100+1)/n() hs=1+0,ss=0+0 dirty=1 | child=1 subtree=1 dirty=1 0x55c799437040] version 205
2017-05-19 12:15:26.970661 7fe66e3ec700 10 mds.0.journal EMetaBlob.replay      dirty nestinfo on [dir 1 / [2,head] auth v=205 cv=0/0 dir_auth=0 state=1610612736 f(v0 m2017-05-19 12:15:06.500038 1=0+1) n(v0 rc2017-05-19 12:15:06.832976 101=100+1)/n() hs=1+0,ss=0+0 dirty=1 | child=1 subtree=1 dirty=1 0x55c799437040]
2017-05-19 12:15:26.970668 7fe66e3ec700 10 mds.0.locker mark_updated_scatterlock (inest sync dirty) - already on list since 2017-05-19 12:15:26.954461
2017-05-19 12:15:26.970671 7fe66e3ec700 10 mds.0.journal EMetaBlob.replay      clean fragstat on [dir 1 / [2,head] auth v=205 cv=0/0 dir_auth=0 state=1610612736 f(v0 m2017-05-19 12:15:06.500038 1=0+1) n(v0 rc2017-05-19 12:15:06.832976 101=100+1)/n() hs=1+0,ss=0+0 dirty=1 | child=1 subtree=1 dirty=1 0x55c799437040]
2017-05-19 12:15:26.970677 7fe66e3ec700 10 mds.0.journal EMetaBlob.replay updated dir [dir 1 / [2,head] auth v=205 cv=0/0 dir_auth=0 state=1610612736 f(v0 m2017-05-19 12:15:06.500038 1=0+1) n(v0 rc2017-05-19 12:15:06.832976 101=100+1)/n() hs=1+0,ss=0+0 dirty=1 | child=1 subtree=1 dirty=1 0x55c799437040]
2017-05-19 12:15:26.970689 7fe66e3ec700 10 mds.0.journal EMetaBlob.replay for [2,head] had [dentry #1/files.1 [2,head] auth (dversion lock) v=204 inode=0x55c7992f3c00 | inodepin=1 dirty=1 0x55c7992d0fc0]
2017-05-19 12:15:26.970694 7fe66e3ec700 10 mds.0.journal EMetaBlob.replay for [2,head] had [inode 10000000000 [...2,head] /files.1/ auth v204 dirtyparent f(v0 m2017-05-19 12:15:06.832976 100=100+0) n(v0 rc2017-05-19 12:15:06.832976 101=100+1) (iversion lock) | dirfrag=1 dirtyparent=1 dirty=1 export_pin=1 0x55c7992f3c00]
2017-05-19 12:15:26.970705 7fe66e3ec700 10 mds.0.journal EMetaBlob.replay request client.4741:105 trim_to 5

Inode 10000000000 carries an export pin (`export_pin=1`), so replay should add it to the export pin queue; it does not, and the pin is never acted on.

I have a PR ready.

History

#1 Updated by Patrick Donnelly 2 months ago

  • Status changed from In Progress to Need Review

#2 Updated by John Spray about 2 months ago

  • Status changed from Need Review to Resolved