Bug #15508

client: simultaneous readdirs are very racy

Added by Greg Farnum about 8 years ago. Updated almost 8 years ago.

Status: Resolved
Priority: High
Assignee:
Category: -
Target version: -
% Done: 0%
Source: Development
Tags:
Backport: jewel
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS): Client
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Imagine a ceph-fuse user running two concurrent readdirs, a and b, on a very large directory (one that requires multiple MDS round trips, and multiple local readdir syscalls per MDS round trip).
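
For concreteness, the triggering workload is nothing exotic: a few plain readdir loops streaming the same large directory at the same time. A minimal reproducer sketch, assuming a hypothetical ceph-fuse mount at /mnt/cephfs/bigdir:

    #include <dirent.h>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Each thread streams the whole directory via the libc readdir path,
    // which ceph-fuse serves from its dentry cache plus MDS round trips.
    static void scan(const char *path) {
        DIR *d = opendir(path);
        if (!d) { std::perror("opendir"); return; }
        size_t n = 0;
        while (readdir(d) != nullptr)  // each call hits cache or the MDS
            ++n;
        std::printf("saw %zu entries\n", n);  // racy runs undercount here
        closedir(d);
    }

    int main() {
        const char *dir = "/mnt/cephfs/bigdir";  // hypothetical mount path
        std::vector<std::thread> readers;
        for (int i = 0; i < 3; ++i)              // overlapping readdirs a, b, c
            readers.emplace_back(scan, dir);
        for (auto &t : readers) t.join();
        return 0;
    }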

a finishes first. Because the directory wasn't changed in the meantime, it marks the directory COMPLETE|ORDERED.
b has last received an MDS readdir reply for offsets x to y and is serving those results.

readdir c starts from offset 0.
b finishes serving up to y, and sends off an MDS request to readdir starting at y+1.
readdir c reaches offset y+1 from the cache.
b's response comes in. It pushes the range y+1 to z to the back of the directory's dentry xlist!
readdir c continues up to z before b manages to get z+1 onward read back from the MDS.
readdir c ends prematurely because xlist::iterator::end() returns true (a toy model of this is sketched below).
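
The premature end is easy to model in miniature: reader c walks the shared cached list, reaches the current tail, and concludes the directory is finished, even though reader b's next batch of dentries is still in flight and will be appended to the back of that same list. A toy model with std::list standing in for the dentry xlist (names hypothetical, locking elided as in the buggy path):

    #include <cstdio>
    #include <list>
    #include <string>

    int main() {
        // Toy model of the shared per-directory dentry list (an xlist
        // in the real client), already holding entries up to offset z.
        std::list<std::string> dentries = {"off_0", "off_1", "off_z"};

        // Reader c walks the cached list from offset 0 to the tail.
        auto it = dentries.begin();
        while (it != dentries.end()) {
            std::printf("c returns %s\n", it->c_str());
            ++it;
        }
        // it == end(), so c reports end-of-directory to its caller.

        // Reader b's next MDS reply (offsets z+1 onward) lands a moment
        // later and is pushed to the back of the same list.
        dentries.push_back("off_z+1");
        dentries.push_back("off_z+2");

        // Too late: c has already terminated, silently dropping them.
        std::printf("c missed %zu trailing entries\n", dentries.size() - 3);
        return 0;
    }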


Related issues: 2 (0 open, 2 closed)

Related to CephFS - Bug #13271: Missing dentry in cache when doing readdirs under cache pressure (?????s in ls -l) - Resolved, 09/29/2015
Copied to CephFS - Backport #16251: jewel: client: simultaneous readdirs are very racy - Resolved, Greg Farnum
