
Bug #13271

Missing dentry in cache when doing readdirs under cache pressure (?????s in ls -l)

Added by John Spray almost 4 years ago. Updated about 3 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
Start date:
09/29/2015
Due date:
% Done:

0%

Source:
other
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Labels (FS):
Pull request ID:

Description

Fuse sends us a series of readdirs, because it only accepts so many entries in each. We only do an MDS request on the first one, and service subsequent readdirs from the cache of the original request. dir->dentries is populated on the response from the MDS request, so we rely on the dentry still being there between the original request and the later readdir from fuse. That assumption is broken if one of the just-loaded dentries gets trimmed. Like this:

2015-09-28 16:34:13.986842 7f372f7fe700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=3 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 0 frag * fragpos 0 at_end=0
2015-09-28 16:34:13.986938 7f372f7fe700 10 client.28239198 send_request client_request(unknown.0:846798 readdir #100009d72a1 2015-09-28 16:34:13.986893) v2 to mds.0
2015-09-28 16:34:14.106479 7f376f7fe700 15 client.28239198 insert_readdir_results  4da: 'data.2015-08-23_00-00-00.csv.bz2' -> 10002785e7b
2015-09-28 16:34:14.533099 7f3756ffd700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 4a frag * fragpos 4a at_end=0
2015-09-28 16:34:14.906300 7f3756ffd700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 93 frag * fragpos 93 at_end=0
2015-09-28 16:34:15.424303 7f372ffff700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset dc frag * fragpos dc at_end=0
2015-09-28 16:34:15.464552 7f37557fa700 15 client.28239198 trim_dentry unlinking dn data.2015-08-23_00-00-00.csv.bz2 in dir 100009d72a1
2015-09-28 16:34:15.968965 7f3755ffb700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 125 frag * fragpos 125 at_end=0
2015-09-28 16:34:16.443382 7f3755ffb700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 16e frag * fragpos 16e at_end=0
2015-09-28 16:34:16.908443 7f3756ffd700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 1b7 frag * fragpos 1b7 at_end=0
2015-09-28 16:34:17.355644 7f37557fa700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 200 frag * fragpos 200 at_end=0
2015-09-28 16:34:17.811355 7f372dffb700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 249 frag * fragpos 249 at_end=0
2015-09-28 16:34:18.293184 7f3755ffb700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 292 frag * fragpos 292 at_end=0
2015-09-28 16:34:18.727673 7f37567fc700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 2db frag * fragpos 2db at_end=0
2015-09-28 16:34:19.163582 7f3755ffb700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 324 frag * fragpos 324 at_end=0
2015-09-28 16:34:19.621639 7f372f7fe700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 36d frag * fragpos 36d at_end=0
2015-09-28 16:34:20.041697 7f3755ffb700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 3b6 frag * fragpos 3b6 at_end=0
2015-09-28 16:34:20.508832 7f372f7fe700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 3ff frag * fragpos 3ff at_end=0
2015-09-28 16:34:20.899534 7f372f7fe700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 448 frag * fragpos 448 at_end=0
2015-09-28 16:34:21.417498 7f372effd700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 491 frag * fragpos 491 at_end=0
2015-09-28 16:34:21.548014 7f37557fa700 10 client.28239198 readdir_r_cb 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50) offset 4da frag * fragpos 4da at_end=0
2015-09-28 16:34:21.548496 7f37557fa700 10 client.28239198  marking I_COMPLETE on 100009d72a1.head(ref=4 ll_ref=4 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) parents=0x7f3732ff97c0 0x7f370d1b1c50)
2015-09-28 16:34:21.548635 7f372effd700 10 client.28239198 _lookup concluded ENOENT locally for 100009d72a1.head(ref=4 ll_ref=5 cap_refs={} open={} mode=42775 size=0/0 mtime=2015-09-28 05:57:57.259306 caps=pAsLsXsFs(0=pAsLsXsFs) COMPLETE parents=0x7f3732ff97c0 0x7f370d1b1c50) dn 'data.2015-08-23_00-00-00.csv.bz2'

Even if we always cleared the complete flag on trim, that wouldn't help: the flag gets set after the trim, on the final readdir_r_cb.


Related issues

Related to fs - Bug #15508: client: simultaneous readdirs are very racy Resolved 04/14/2016

History

#1 Updated by Zheng Yan almost 4 years ago

  • Status changed from New to Closed

The circumstance above is not supposed to happen, because dirp->buffer holds references to the inodes.

I found it's a bug in ceph v0.87 (it's fixed in master):

void Client::trim_dentry(Dentry *dn)
{
  ldout(cct, 15) << "trim_dentry unlinking dn " << dn->name
                 << " in dir " << hex << dn->dir->parent_inode->ino
                 << dendl;
  if (dn->dir->parent_inode->flags & I_COMPLETE) {
    ldout(cct, 10) << " clearing I_COMPLETE on " << *dn->dir->parent_inode << dendl;
    dn->dir->parent_inode->flags &= ~I_COMPLETE;
    dn->dir->release_count++;  // <-- bug: we should always increment release_count,
                               //     not only when I_COMPLETE is set
  }
  unlink(dn, false, false);  // drop dir, drop dentry
}

#2 Updated by Zheng Yan almost 4 years ago

It's fixed by a5984ba34cb684dae623df22e338f350c8765ba5; updating to v0.87.1 should fix this problem.

#3 Updated by John Spray almost 4 years ago

  • Status changed from Closed to New

The original reporter confirmed that he saw this with 0.94.

#4 Updated by Greg Farnum almost 4 years ago

I think this is 62dd63761701a7e0f7ce39f4071dcabc19bb1cf4, which doesn't appear to be in a release yet. See #12297.

If that doesn't fix it, I imagine it's a closely related problem; as you said in standup, there are/were definitely some very broken things here, so maybe you found another one.

#5 Updated by Greg Farnum over 3 years ago

This may get handled alongside http://tracker.ceph.com/issues/15508.

#6 Updated by Greg Farnum over 3 years ago

  • Related to Bug #15508: client: simultaneous readdirs are very racy added

#7 Updated by Greg Farnum about 3 years ago

  • Status changed from New to Resolved

Zheng fixed this.
