
Bug #9997

test_client_pin case is failing

Added by Greg Farnum almost 5 years ago. Updated over 4 years ago.

Status: Resolved
Priority: Urgent
Assignee:
Category: -
Target version: -
Start date: 11/03/2014
Due date:
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Labels (FS):
Pull request ID:

Description

http://qa-proxy.ceph.com/teuthology/teuthology-2014-11-02_23:04:01-fs-next-testing-basic-multi/583588/

RuntimeError: Timed out after 600 seconds waiting for 160 (currently 252)
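The error is a timeout raised while waiting for the client's cached dentry count to fall to a target value (160 here, stuck at 252). A minimal sketch of the kind of wait loop that produces such a message — hypothetical, not the actual teuthology helper; the function name and parameters are illustrative:

```python
import time

def wait_for_dentry_count(get_count, target, timeout=600, interval=1):
    # Poll get_count() until it drops to the target, or give up after
    # `timeout` seconds with an error like the one quoted above.
    elapsed = 0
    while True:
        current = get_count()
        if current <= target:
            return current
        if elapsed >= timeout:
            raise RuntimeError(
                "Timed out after %d seconds waiting for %d (currently %d)"
                % (timeout, target, current))
        time.sleep(interval)
        elapsed += interval
```

In this failure the count never drops, because the client's request to invalidate kernel dentries has no effect on 3.18+ kernels, as the comments below work out.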

Associated revisions

Revision 491da517 (diff)
Added by Yan, Zheng over 4 years ago

client: invalidate kernel dentries one by one

Our trick of trimming the whole kernel dentry tree does not work on 3.18+ kernels.
The fix is to trim kernel dentries one by one.

Fixes: #9997
Signed-off-by: Yan, Zheng <>

History

#3 Updated by John Spray almost 5 years ago

  • Status changed from New to In Progress

#4 Updated by John Spray almost 5 years ago

After much head scratching and log examination, this appears to be a kernel regression (assuming our behaviour was valid to begin with).

v3.17 works
v3.18-rc6 does not work

Investigation continues... recent changes to d_invalidate look interesting.

#5 Updated by Zheng Yan over 4 years ago

Yes, I think it is caused by the d_invalidate change. In the 3.18-rc kernel, d_invalidate() unhashes the dentry regardless of whether the dentry is still busy. That change makes our trick for invalidating the kernel cache stop working, but I don't think it's a kernel regression.

#6 Updated by Zheng Yan over 4 years ago

For 3.18+ kernels, I think we can iterate over all directory inodes and invalidate the dentries one by one.
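The approach described here can be modeled as a walk over every cached directory entry, invalidating each one individually instead of dropping the whole tree from the root. A toy Python model — hypothetical illustration only; the real change lives in the Ceph client's C++ code and talks to the kernel via FUSE:

```python
class DentryCache:
    """Toy stand-in for the client's view of cached kernel dentries."""

    def __init__(self):
        # (parent directory inode, entry name) -> cached flag
        self.entries = {}

    def add(self, dir_ino, name):
        self.entries[(dir_ino, name)] = True

    def invalidate_one(self, dir_ino, name):
        # Drop a single (dir, name) pair, analogous to sending the
        # kernel one per-dentry invalidate request.
        self.entries.pop((dir_ino, name), None)

    def invalidate_all(self):
        # The 3.18+-safe approach: iterate over all cached directory
        # entries and invalidate each one individually.
        for dir_ino, name in list(self.entries):
            self.invalidate_one(dir_ino, name)

cache = DentryCache()
for name in ("a", "b", "c"):
    cache.add(1, name)
cache.invalidate_all()
```

One invalidate call per dentry is slower than trimming the whole tree at once, but it does not depend on d_invalidate()'s pre-3.18 busy-dentry behavior.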

#7 Updated by John Spray over 4 years ago

  • Status changed from In Progress to Need Review
  • Assignee changed from John Spray to Zheng Yan

#8 Updated by John Spray over 4 years ago

  • Status changed from Need Review to Resolved

#9 Updated by Greg Farnum over 4 years ago

  • Status changed from Resolved to Pending Backport

Can we get a Giant backport for this, please?

#11 Updated by Zheng Yan over 4 years ago

The fix is buggy; we shouldn't backport it. We should use the patches for #10277 instead.

#12 Updated by Greg Farnum over 4 years ago

  • Status changed from Pending Backport to Resolved

Hmm, I was thinking that we could backport the simple fix, since most users will be on older kernels where it behaves properly anyway. But I suppose Giant is new enough that that's not really a safe bet.
