Feature #11950

Strays enqueued for purge cause MDCache to exceed size limit

Added by John Spray almost 2 years ago. Updated about 2 months ago.

Status:
Resolved
Priority:
High
Assignee:
Category:
Performance/Resource Usage
Target version:
Start date:
06/05/2015
Due date:
% Done:

0%

Source:
other
Tags:
Backport:
Reviewed:
User Impact:
Affected Versions:
Release:
Component(FS):
MDS
Needs Doc:
No

Description

If purge operations are proceeding slowly (either because of throttling or because of a slow data pool) and you do lots of deletes, the stray directories can grow very large.

As each stray dentry is created it gets eval_stray'd and potentially put onto the queue of things that are ready to be purged. But there is no guarantee that the queue progresses quickly, and everything in the queue is pinned in the cache. Even if it weren't, the queue order has no locality with respect to dirfrags, so we would thrash dirfrags in and out of cache while draining the queue.

The solution is to have a new data structure outside of MDCache, where we put dentries that are beyond the "point of no return" in the stray->purge process. This would essentially just be a work queue where each work item is an inode, and the work is to purge it.
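The work queue described above can be sketched as a structure decoupled from the cache, with its own throttle on in-flight purges. This is an illustrative Python sketch, not Ceph's actual implementation; the names `PurgeQueue`, `push`, `advance`, and `on_purge_complete` are invented for the example:

```python
from collections import deque

class PurgeQueue:
    """Illustrative sketch: a FIFO of inodes past the "point of no
    return", drained independently of the metadata cache."""

    def __init__(self, max_in_flight=4):
        self._pending = deque()      # inodes waiting to be purged
        self._in_flight = set()      # purges currently issued
        self._max_in_flight = max_in_flight

    def push(self, ino):
        # Once an item is enqueued here, the corresponding dentry no
        # longer needs to stay pinned in the cache; only the inode
        # number (plus whatever is needed to delete its objects) is
        # retained by the queue itself.
        self._pending.append(ino)

    def advance(self, purge_fn):
        # Issue purges up to the throttle limit, in FIFO order.
        started = []
        while self._pending and len(self._in_flight) < self._max_in_flight:
            ino = self._pending.popleft()
            self._in_flight.add(ino)
            purge_fn(ino)
            started.append(ino)
        return started

    def on_purge_complete(self, ino):
        # Completion frees a throttle slot for the next advance().
        self._in_flight.discard(ino)
```

The key property is that cache pressure and queue length are decoupled: the queue can be arbitrarily long without pinning anything in MDCache.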

Test code for playing with this case:
https://github.com/ceph/ceph-qa-suite/tree/wip-cache-full-strays

History

#1 Updated by John Spray almost 2 years ago

  • Project changed from Ceph to fs
  • Category set to 47

#2 Updated by John Spray almost 2 years ago

Related to FSCK: if we add a persistent structure containing references to inodes awaiting purge, the backward scrub tools must be able to interrogate that structure; otherwise they would incorrectly treat the to-be-purged inodes as orphans and link them into lost+found.

#3 Updated by Greg Farnum almost 2 years ago

  • Tracker changed from Bug to Feature

#4 Updated by Greg Farnum about 1 year ago

  • Priority changed from Normal to High

This came up in #15379. I think we're going to start seeing it more often with the Manila use case...

#5 Updated by John Spray about 1 year ago

Yep, this should be a fairly high priority to do something about.

The "real" solution (a scalable way of persisting the purge queue) is not trivial, so maybe we need a stop-gap. It's an ugly hack, but we could apply some crude back pressure by blocking unlinks (RetryRequest) when the purge queue is above a threshold and the cache is close to full, keeping a queue of contexts to kick inside StrayManager. Actually, that sounds quite sensible when I say it out loud. Thoughts?
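The stop-gap amounts to a simple admission check on unlink requests. A hypothetical sketch follows; the threshold names and values are invented for illustration and do not correspond to actual Ceph config options:

```python
def admit_unlink(purge_queue_len, cache_size,
                 queue_threshold=1000,
                 cache_limit=100_000,
                 cache_headroom=0.95):
    """Crude back-pressure check: defer the unlink (the MDS would
    answer with RetryRequest and queue a context to kick the client
    later) when the purge queue is long AND the cache is nearly full;
    otherwise let the unlink proceed."""
    cache_nearly_full = cache_size > cache_limit * cache_headroom
    if purge_queue_len > queue_threshold and cache_nearly_full:
        return "RetryRequest"
    return "proceed"
```

Note that both conditions must hold: a long purge queue alone is harmless if the cache still has room, and a full cache with a short queue is an ordinary cache-pressure situation handled elsewhere.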

#6 Updated by Greg Farnum about 1 year ago

Possibly. I'm concerned about exposing slow deletes to users via Manila, but it may be the best we can do in the short term.

Once upon a time I was hoping to use the journaling stuff Jason wrote for RBD, but that got pretty large and I'm not sure the library is suitable for us any more. I don't remember why we end up pinning the way we do; maybe we can do some kind of hack with stray directories and the journal where we only keep a segment's (or some limit's) worth of stray inodes. That would mean slow trimming would just show up as an MDS whose log is longer than it should be (..and slow down restarts, I guess), but I'm not sure if it's feasible in the code logic off-hand.

#7 Updated by Zheng Yan about 1 year ago

  • Status changed from New to Need Review

#8 Updated by John Spray 11 months ago

  • Status changed from Need Review to Verified

I'm reverting the status to Verified because, although we've merged the patch for this, it still needs more attention before we have a full solution.

#9 Updated by Greg Farnum 10 months ago

  • Category changed from 47 to Performance/Resource Usage
  • Component(FS) MDS added

#10 Updated by John Spray 6 months ago

  • Assignee set to John Spray
  • Target version set to v12.0.0
  • Needs Doc set to No

Targeting for Luminous and assigning to me: we will use a single Journaler() instance per MDS to track a persistent purge queue.
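A Journaler-backed queue is essentially an append-only log with a persistent consumer position, where fully-consumed entries can be expired. The following is a much-simplified toy model in Python; the class and method names, and the in-memory "log", are invented for the sketch and bear no relation to Ceph's on-disk journal format:

```python
class PersistentPurgeQueue:
    """Toy model of a journal-backed purge queue: purge items are
    appended to a log, a consumer position records how far purging
    has progressed, and everything before that position can be
    trimmed (analogous to expiring journal segments)."""

    def __init__(self):
        self.log = []          # appended purge items (journal entries)
        self.expire_pos = 0    # entries before this index are purged

    def append(self, item):
        # Appending is cheap and does not pin anything in the cache.
        self.log.append(item)

    def consume(self):
        # Return the next unpurged item, advancing the position,
        # or None if the queue is fully drained.
        if self.expire_pos < len(self.log):
            item = self.log[self.expire_pos]
            self.expire_pos += 1
            return item
        return None

    def trim(self):
        # Drop fully-purged entries from the front of the log.
        self.log = self.log[self.expire_pos:]
        self.expire_pos = 0
```

Because the queue is persisted outside MDCache, an MDS restart can resume purging from the recorded position instead of re-discovering strays through the cache.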

#11 Updated by John Spray 5 months ago

  • Status changed from Verified to In Progress

#12 Updated by John Spray 3 months ago

  • Status changed from In Progress to Need Review

#13 Updated by John Spray about 2 months ago

  • Status changed from Need Review to Resolved

PurgeQueue has merged to master and will be in Luminous.
