Bug #24665

qa: TestStrays.test_hardlink_reintegration fails self.assertTrue(self.get_backtrace_path(ino).startswith("stray"))

Added by Patrick Donnelly 6 months ago. Updated about 2 months ago.

Status: Closed
Priority: Urgent
Assignee:
Category: Testing
Target version:
Start date: 06/26/2018
Due date:
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS): MDS, qa-suite
Labels (FS): qa
Pull request ID:
Description

2018-06-23T06:26:09.471 INFO:tasks.cephfs_test_runner:======================================================================
2018-06-23T06:26:09.472 INFO:tasks.cephfs_test_runner:FAIL: test_hardlink_reintegration (tasks.cephfs.test_strays.TestStrays)
2018-06-23T06:26:09.472 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2018-06-23T06:26:09.472 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2018-06-23T06:26:09.472 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20180622.235254/qa/tasks/cephfs/test_strays.py", line 390, in test_hardlink_reintegration
2018-06-23T06:26:09.472 INFO:tasks.cephfs_test_runner:    self.assertTrue(self.get_backtrace_path(ino).startswith("stray"))
2018-06-23T06:26:09.473 INFO:tasks.cephfs_test_runner:AssertionError: False is not true
2018-06-23T06:26:09.473 INFO:tasks.cephfs_test_runner:
2018-06-23T06:26:09.473 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2018-06-23T06:26:09.474 INFO:tasks.cephfs_test_runner:Ran 5 tests in 464.928s

From: /ceph/teuthology-archive/pdonnell-2018-06-23_02:11:08-fs-wip-pdonnell-testing-20180622.235254-testing-basic-smithi/2692591/teuthology.log

Looks like the MDS started stray reintegration before the test expected it to:

https://github.com/ceph/ceph/blob/6c8a3dea4a01e7c4ff969cc378ec150e3f4fb63a/qa/tasks/cephfs/test_strays.py#L376-L390

At line 390, the stray should not have been reintegrated yet. But, looking at the MDS log, reintegration began immediately after the unlink finished:

2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.server _unlink_local_finish [dentry #0x1/dir_1/file_a [2,head] auth (dn xlockdone x=1) (dversion lock w=1 last_client=4761) pv=13 v=11 ap=2+3 ino=0x10000000001 state=1073741824 | request=1 lock=2 inodepin=1 authpin=1 0x556065c21600]
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.cache.ino(0x10000000001) pop_projected_snaprealm (early) 0x556065cb0300 seq 1
2018-06-23 06:25:06.477 7f3e1b2c8700 10  mds.0.cache.snaprealm(0x10000000001 seq 1 0x556065ad6580) open_parents [1,head]
...
2018-06-23 06:25:06.477 7f3e1b2c8700 15 mds.0.cache request_cleanup request(client.4761:6 cr=0x556065c0cd80)
2018-06-23 06:25:06.477 7f3e1b2c8700  7 mds.0.locker rdlock_finish on (isnap sync) on [inode 0x1 [...2,head] / auth v7 ap=1+0 snaprealm=0x556065ad5b80 f(v0 m2018-06-23 06:24:53.925210 2=0+2) n(v2 rc2018-06-23 06:24:54.029368 b8388608 4=1+3)/n(v0 1=0+1) (inest sync dirty) (iversion lock) caps={4761=pAsLsXsFs/-@2} | dirtyscattered=1 request=0 lock=0 dirfrag=1 caps=1 openingsnapparents=0 authpin=1 0x556065bea700]
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.locker simple_eval (isnap sync) on [inode 0x1 [...2,head] / auth v7 ap=1+0 snaprealm=0x556065ad5b80 f(v0 m2018-06-23 06:24:53.925210 2=0+2) n(v2 rc2018-06-23 06:24:54.029368 b8388608 4=1+3)/n(v0 1=0+1) (inest sync dirty) (iversion lock) caps={4761=pAsLsXsFs/-@2} | dirtyscattered=1 request=0 lock=0 dirfrag=1 caps=1 openingsnapparents=0 authpin=1 0x556065bea700]
2018-06-23 06:25:06.477 7f3e242da700  1 -- 172.21.15.44:6818/682052173 <== osd.1 172.21.15.44:6801/14779 11 ==== osd_op_reply(40 mds0_openfiles.0 [omap-set-header,omap-set-vals] v72'9 uv9 ondisk = 0) v8 ==== 202+0+0 (3496852679 0 0) 0x556065cf0340 con 0x556065b68300
2018-06-23 06:25:06.477 7f3e1b2c8700  7 mds.0.locker rdlock_finish on (isnap sync) on [inode 0x10000000000 [...2,head] /dir_1/ auth v21 ap=1+1 f(v0 m2018-06-23 06:25:03.127264) n(v0 rc2018-06-23 06:25:03.127264 1=0+1)/n(v0 rc2018-06-23 06:24:54.029368 b8388608 2=1+1) (inest lock) (ifile excl) (iversion lock) caps={4761=pAsLsXsFsx/-@4},l=4761 | request=0 lock=0 dirfrag=1 caps=1 dirtyrstat=1 dirty=1 authpin=1 0x556065c26a00]
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.locker simple_eval (isnap sync) on [inode 0x10000000000 [...2,head] /dir_1/ auth v21 ap=1+1 f(v0 m2018-06-23 06:25:03.127264) n(v0 rc2018-06-23 06:25:03.127264 1=0+1)/n(v0 rc2018-06-23 06:24:54.029368 b8388608 2=1+1) (inest lock) (ifile excl) (iversion lock) caps={4761=pAsLsXsFsx/-@4},l=4761 | request=0 lock=0 dirfrag=1 caps=1 dirtyrstat=1 dirty=1 authpin=1 0x556065c26a00]
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.cache.dir(0x1) auth_unpin by 0x556065aca680 on [dir 0x1 / [2,head] auth v=22 cv=0/0 dir_auth=0 ap=0+1+2 state=1610874881|complete f(v0 m2018-06-23 06:24:53.925210 2=0+2) n(v2 rc2018-06-23 06:24:54.029368 b8388608 3=1+2) hs=2+0,ss=0+0 dirty=1 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=0 authpin=0 0x5560658b7700] count now 0 + 2
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.cache.ino(0x1) auth_unpin by 0x556065aca680 on [inode 0x1 [...2,head] / auth v7 snaprealm=0x556065ad5b80 f(v0 m2018-06-23 06:24:53.925210 2=0+2) n(v2 rc2018-06-23 06:24:54.029368 b8388608 4=1+3)/n(v0 1=0+1) (inest sync dirty) (iversion lock) caps={4761=pAsLsXsFs/-@2} | dirtyscattered=1 request=0 lock=0 dirfrag=1 caps=1 openingsnapparents=0 authpin=0 0x556065bea700] now 0+0
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.cache.dir(0x100) auth_unpin by 0x556065aca680 on [dir 0x100 ~mds0/ [2,head] auth v=4 cv=0/0 dir_auth=0 ap=0+1+2 state=1610612737|complete f(v0 10=0+10) n(v0 10=0+10) hs=10+0,ss=0+0 dirty=1 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=0 authpin=0 0x556065bf2000] count now 0 + 2
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.cache.dir(0x600) auth_unpin by 0x556065aca680 on [dir 0x600 ~mds0/stray0/ [2,head] auth v=12 cv=1/1 ap=0+2+2 state=1610883073|complete|sticky f(v0 m2018-06-23 06:25:03.127264 1=1+0) n(v0 rc2018-06-23 06:25:03.127264 b8388608 1=1+0) hs=1+0,ss=0+0 dirty=1 | child=1 sticky=1 dirty=1 waiter=0 authpin=0 0x556065bf2a00] count now 0 + 2
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.cache.den(0x10000000000 file_a) auth_unpin by 0x556065aca680 on [dentry #0x1/dir_1/file_a [2,head] auth NULL (dversion lock) v=13 ino=(nil) state=1610612800|bottomlru | request=1 lock=0 inodepin=0 dirty=1 authpin=0 0x556065c21600] now 0+0
2018-06-23 06:25:06.477 7f3e1b2c8700 15 mds.0.cache.dir(0x10000000000) adjust_nested_auth_pins -1/-1 on [dir 0x10000000000 /dir_1/ [2,head] auth v=14 cv=11/11 ap=1+0+0 state=1610612737|complete f(v0 m2018-06-23 06:25:03.127264) n(v0 rc2018-06-23 06:25:03.127264) hs=0+1,ss=0+0 dirty=1 | child=1 dirty=1 waiter=0 authpin=1 0x556065c8c000] by 0x556065aca680 count now 1 + 0
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.cache.ino(0x600) auth_unpin by 0x556065aca680 on [inode 0x600 [...2,head] ~mds0/stray0/ auth v3 ap=0+1 dirtyparent f(v0 m2018-06-23 06:25:03.127264 1=1+0) n(v0 rc2018-06-23 06:25:03.127264 b8388608 2=1+1)/n(v0 1=0+1) (inest lock) (iversion lock) | lock=0 dirfrag=1 stickydirs=1 stray=1 dirtyrstat=1 dirtyparent=1 dirty=1 authpin=0 0x556065c25c00] now 0+1
2018-06-23 06:25:06.477 7f3e1b2c8700 15 mds.0.cache.dir(0x100) adjust_nested_auth_pins -1/-1 on [dir 0x100 ~mds0/ [2,head] auth v=4 cv=0/0 dir_auth=0 ap=0+0+1 state=1610612737|complete f(v0 10=0+10) n(v0 10=0+10) hs=10+0,ss=0+0 dirty=1 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=0 authpin=0 0x556065bf2000] by 0x556065aca680 count now 0 + 1
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.cache.ino(0x10000000000) auth_unpin by 0x556065aca680 on [inode 0x10000000000 [...2,head] /dir_1/ auth v21 ap=0+1 f(v0 m2018-06-23 06:25:03.127264) n(v0 rc2018-06-23 06:25:03.127264 1=0+1)/n(v0 rc2018-06-23 06:24:54.029368 b8388608 2=1+1) (inest lock) (ifile excl) (iversion lock) caps={4761=pAsLsXsFsx/-@4},l=4761 | request=0 lock=0 dirfrag=1 caps=1 dirtyrstat=1 dirty=1 authpin=0 0x556065c26a00] now 0+1
2018-06-23 06:25:06.477 7f3e1b2c8700 15 mds.0.cache.dir(0x1) adjust_nested_auth_pins -1/-1 on [dir 0x1 / [2,head] auth v=22 cv=0/0 dir_auth=0 ap=0+0+1 state=1610874881|complete f(v0 m2018-06-23 06:24:53.925210 2=0+2) n(v2 rc2018-06-23 06:24:54.029368 b8388608 3=1+2) hs=2+0,ss=0+0 dirty=1 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=0 authpin=0 0x5560658b7700] by 0x556065aca680 count now 0 + 1
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.cache.ino(0x10000000001) auth_unpin by 0x556065aca680 on [inode 0x10000000001 [...2,head] ~mds0/stray0/10000000001 auth v11 snaprealm=0x556065ad6580 dirtyparent s=8388608 n(v0 rc2018-06-23 06:25:03.127264 b8388608 1=1+0) (iversion lock) caps={4761=pAsXsFscr/-@1} | request=1 lock=0 caps=1 remoteparent=1 dirtyparent=1 dirty=1 authpin=0 0x556065c27100] now 0+0
2018-06-23 06:25:06.477 7f3e1b2c8700 15 mds.0.cache.dir(0x600) adjust_nested_auth_pins -1/-1 on [dir 0x600 ~mds0/stray0/ [2,head] auth v=12 cv=1/1 ap=0+1+1 state=1610883073|complete|sticky f(v0 m2018-06-23 06:25:03.127264 1=1+0) n(v0 rc2018-06-23 06:25:03.127264 b8388608 1=1+0) hs=1+0,ss=0+0 dirty=1 | child=1 sticky=1 dirty=1 waiter=0 authpin=0 0x556065bf2a00] by 0x556065aca680 count now 0 + 1
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.cache.dir(0x10000000000) auth_unpin by 0x556065aca680 on [dir 0x10000000000 /dir_1/ [2,head] auth v=14 cv=11/11 state=1610612737|complete f(v0 m2018-06-23 06:25:03.127264) n(v0 rc2018-06-23 06:25:03.127264) hs=0+1,ss=0+0 dirty=1 | child=1 dirty=1 waiter=0 authpin=0 0x556065c8c000] count now 0 + 0
2018-06-23 06:25:06.477 7f3e1b2c8700 15 mds.0.cache.dir(0x1) adjust_nested_auth_pins -1/0 on [dir 0x1 / [2,head] auth v=22 cv=0/0 dir_auth=0 state=1610874881|complete f(v0 m2018-06-23 06:24:53.925210 2=0+2) n(v2 rc2018-06-23 06:24:54.029368 b8388608 3=1+2) hs=2+0,ss=0+0 dirty=1 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=0 authpin=0 0x5560658b7700] by 0x556065aca680 count now 0 + 0
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.cache.den(0x600 10000000001) auth_unpin by 0x556065aca680 on [dentry #0x100/stray0/10000000001 [2,head] auth (dversion lock) v=11 ino=0x10000000001 state=1879048193|new | request=1 lock=0 inodepin=1 dirty=1 authpin=0 0x556065cbc200] now 0+0
2018-06-23 06:25:06.477 7f3e1b2c8700 15 mds.0.cache.dir(0x600) adjust_nested_auth_pins -1/-1 on [dir 0x600 ~mds0/stray0/ [2,head] auth v=12 cv=1/1 state=1610883073|complete|sticky f(v0 m2018-06-23 06:25:03.127264 1=1+0) n(v0 rc2018-06-23 06:25:03.127264 b8388608 1=1+0) hs=1+0,ss=0+0 dirty=1 | child=1 sticky=1 dirty=1 waiter=0 authpin=0 0x556065bf2a00] by 0x556065aca680 count now 0 + 0
2018-06-23 06:25:06.477 7f3e1b2c8700 15 mds.0.cache.dir(0x100) adjust_nested_auth_pins -1/0 on [dir 0x100 ~mds0/ [2,head] auth v=4 cv=0/0 dir_auth=0 state=1610612737|complete f(v0 10=0+10) n(v0 10=0+10) hs=10+0,ss=0+0 dirty=1 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=0 authpin=0 0x556065bf2000] by 0x556065aca680 count now 0 + 0
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.locker kick_issue_caps released at current seq 1, reissuing
2018-06-23 06:25:06.477 7f3e1b2c8700  7 mds.0.locker issue_caps allowed=pAsLsXsFscr, xlocker allowed=pAsLsXsFscr on [inode 0x10000000001 [...2,head] ~mds0/stray0/10000000001 auth v11 snaprealm=0x556065ad6580 dirtyparent s=8388608 n(v0 rc2018-06-23 06:25:03.127264 b8388608 1=1+0) (iversion lock) caps={4761=pAsXsFscr/-@1} | request=1 lock=0 caps=1 remoteparent=1 dirtyparent=1 dirty=1 authpin=0 0x556065c27100]
2018-06-23 06:25:06.477 7f3e1b2c8700 20 mds.0.locker  client.4761 pending pAsXsFscr allowed pAsLsXsFscr wanted -
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.cache.strays eval_stray [dentry #0x100/stray0/10000000001 [2,head] auth (dversion lock) v=11 ino=0x10000000001 state=1879048209|new | request=0 lock=0 inodepin=1 dirty=1 authpin=0 0x556065cbc200]
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.cache.strays  inode is [inode 0x10000000001 [...2,head] ~mds0/stray0/10000000001 auth v11 snaprealm=0x556065ad6580 dirtyparent s=8388608 n(v0 rc2018-06-23 06:25:03.127264 b8388608 1=1+0) (iversion lock) caps={4761=pAsXsFscr/-@1} | request=0 lock=0 caps=1 remoteparent=1 dirtyparent=1 dirty=1 authpin=0 0x556065c27100]
2018-06-23 06:25:06.477 7f3e1b2c8700 20 mds.0.cache.strays _eval_stray_remote [dentry #0x100/stray0/10000000001 [2,head] auth (dversion lock) v=11 ino=0x10000000001 state=1879048209|new | request=0 lock=0 inodepin=1 dirty=1 authpin=0 0x556065cbc200]
2018-06-23 06:25:06.477 7f3e1b2c8700 10 mds.0.cache.strays reintegrate_stray [dentry #0x100/stray0/10000000001 [2,head] auth (dversion lock) v=11 ino=0x10000000001 state=1879048209|new | request=0 lock=0 inodepin=1 dirty=1 authpin=0 0x556065cbc200] into [dentry #0x1/dir_2/file_b [2,head] auth REMOTE(reg) (dversion lock) pv=0 v=3 ino=0x10000000001 state=1073741824 0x556065c21a00]
2018-06-23 06:25:06.477 7f3e1b2c8700  1 -- 172.21.15.44:6818/682052173 _send_message--> mds.? 172.21.15.44:6818/682052173 -- client_request(unknown.0:1 rename #0x1/dir_2/file_b #0x100/stray0/10000000001 caller_uid=0, caller_gid=0{}) v4 -- ?+0 0x556065ba0080
2018-06-23 06:25:06.477 7f3e1b2c8700  1 -- 172.21.15.44:6818/682052173 --> 172.21.15.44:6818/682052173 -- client_request(unknown.0:1 rename #0x1/dir_2/file_b #0x100/stray0/10000000001 caller_uid=0, caller_gid=0{}) v4 -- 0x556065ba0080 con 0

From: /ceph/teuthology-archive/pdonnell-2018-06-23_02:11:08-fs-wip-pdonnell-testing-20180622.235254-testing-basic-smithi/2692591/remote/smithi044/log/ceph-mds.a.log.gz
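For context, the race behind the failing assertion can be sketched as follows. This is a minimal, self-contained simulation, not the real helpers from qa/tasks/cephfs/test_strays.py: the `FakeMDS` class, its method names, and the paths are invented here purely to illustrate the timing problem.

```python
# Sketch of the race behind this failure. The real test lives in
# qa/tasks/cephfs/test_strays.py; everything below is a simulation.

class FakeMDS:
    """Simulates the backtrace state of one unlinked-but-hardlinked inode."""
    def __init__(self):
        # After the unlink, the inode's primary link moves into the stray dir.
        self.backtrace_path = "stray0/10000000001"

    def reintegrate(self):
        # Reintegration renames the stray entry back to the remote link's path.
        self.backtrace_path = "dir_2/file_b"

    def get_backtrace_path(self, ino=None):
        return self.backtrace_path


mds = FakeMDS()

# What the test asserts at line 390: reintegration has NOT happened yet,
# so the backtrace still points into the stray directory.
assert mds.get_backtrace_path().startswith("stray")

# But if the remote dentry is already in the MDS cache, the MDS kicks off
# reintegration right after the unlink commits, and the same check fails:
mds.reintegrate()
assert not mds.get_backtrace_path().startswith("stray")
```

In the real failure, the `rename #0x1/dir_2/file_b #0x100/stray0/10000000001` request at the end of the log above is the MDS reintegrating on its own, before the test's assertion ran.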

History

#1 Updated by Zheng Yan 6 months ago

The remote dentry was already in the MDS cache when the unlink happened, so reintegration started immediately. There were two unexpected readdirs in the MDS log; one of them populated the remote dentry into the MDS cache. Maybe there was a background job that affected this test.
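Timing-sensitive checks like this are usually made robust in test code by polling for the expected state rather than asserting on a single snapshot. A generic sketch of that pattern (this is not the actual qa-suite helper; the function name and signature here are illustrative only):

```python
import time

def wait_until(predicate, timeout=30.0, interval=1.0):
    """Poll predicate() until it returns True or the timeout expires.

    Returns True if the predicate became true in time, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while True:
        if predicate():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Example: instead of asserting that reintegration has not happened yet,
# a test can wait for the state it ultimately cares about.
state = {"reintegrated": False}
state["reintegrated"] = True  # simulate the MDS finishing reintegration
assert wait_until(lambda: state["reintegrated"], timeout=5.0, interval=0.1)
```

That said, the assertion in this test deliberately checks the pre-reintegration state, so polling alone would not fix it; the underlying issue was cache noise from the environment, as noted below.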

#2 Updated by Zheng Yan about 2 months ago

  • Status changed from New to Closed

Closing this because it was caused by test environment noise.
