Strays enqueued for purge cause MDCache to exceed size limit
If your purge operations are proceeding slowly (either because of throttling or because of a slow data pool), and you do lots of deletes, then the stray directories can grow very big.
As each stray dentry is created, it gets eval_stray'd and potentially put onto the queue of things that are ready to be purged. But there is no guarantee that the queue progresses quickly, and everything in the queue is pinned in the cache. Even if it weren't, the queued order has no locality with respect to dirfrags, so we would thrash dirfrags in and out of cache while draining the queue.
The solution is to have a new data structure outside of MDCache, where we put dentries that are beyond the "point of no return" in the stray->purge process. This would essentially just be a work queue where each work item is an inode, and the work is to purge it.
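A minimal sketch of what such a structure could look like. This is not Ceph code; the names (PurgeItem, PurgeWorkQueue) and fields are illustrative, and the point is only that the queue holds just enough information to purge an inode, so nothing needs to stay pinned in MDCache while it waits:

    // Hypothetical work-queue sketch, outside MDCache.
    #include <cstdint>
    #include <deque>
    #include <mutex>
    #include <optional>

    struct PurgeItem {
      uint64_t ino;        // inode number to purge
      uint64_t size;       // file size, so we know which data objects to delete
      uint32_t data_pool;  // pool holding the file's data objects
    };

    class PurgeWorkQueue {
    public:
      // Called once a stray is past the "point of no return": record the
      // work item and let the dentry/inode drop out of the metadata cache.
      void push(const PurgeItem &item) {
        std::lock_guard<std::mutex> l(lock);
        items.push_back(item);
      }

      // Purge workers drain the queue at whatever rate the throttle and the
      // data pool allow, without pinning anything in the cache.
      std::optional<PurgeItem> pop() {
        std::lock_guard<std::mutex> l(lock);
        if (items.empty())
          return std::nullopt;
        PurgeItem item = items.front();
        items.pop_front();
        return item;
      }

    private:
      std::mutex lock;
      std::deque<PurgeItem> items;
    };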
Test code for playing with this case:
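(The original snippet is not reproduced here; the following is a rough stand-in that exercises the same path: create many small files on a mounted CephFS, then unlink them all in one burst so the MDS accumulates strays faster than it can purge them. The mount point path is an assumption.)

    #include <cstdio>
    #include <fcntl.h>
    #include <string>
    #include <sys/stat.h>
    #include <unistd.h>

    int main() {
      const std::string dir = "/mnt/cephfs/stray_test";  // assumed CephFS mount
      const int nfiles = 100000;

      mkdir(dir.c_str(), 0755);
      for (int i = 0; i < nfiles; ++i) {
        std::string path = dir + "/f" + std::to_string(i);
        int fd = open(path.c_str(), O_CREAT | O_WRONLY, 0644);
        if (fd < 0) { perror("open"); return 1; }
        write(fd, "x", 1);  // one data object per file, so purge must touch the data pool
        close(fd);
      }
      // Delete everything in one burst; each unlink turns the dentry into a stray.
      for (int i = 0; i < nfiles; ++i) {
        std::string path = dir + "/f" + std::to_string(i);
        unlink(path.c_str());
      }
      return 0;
    }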
#2 Updated by John Spray almost 3 years ago
Related to FSCK: if adding a persistent structure containing references to inodes to be purged, ensure that the backward scrub tools are able to interrogate this structure, so they don't mistake to-be-purged inodes for orphans and incorrectly link them into lost+found.
#5 Updated by John Spray about 2 years ago
Yep, this should be a fairly high priority to do something about.
The "real" solution (a scalable way of persisting the queue to purge) is not trivial, so maybe we need a stop-gap. It's an ugly hack, but we could do some crude back pressure by blocking unlinks (RetryRequest) when the purge queue is above a threshold and the cache is close to full, and putting a queue of contexts to kick into StrayManager. Actually that does sound quite sensible when I say it out loud, thoughts?
#6 Updated by Greg Farnum about 2 years ago
Possibly. I'm concerned about exposing slow deletes to users via Manila, but it may be the best we can do in the short term.
Once upon a time I was hoping to use the journaling stuff Jason wrote for RBD, but that got pretty large and I'm not sure the library is suitable for us any more. I don't remember why we end up pinning the way we do; maybe we can do some kind of hack with the stray directories and the journal where we only keep a segment's worth (or some other limit's worth) of stray inodes. That would mean slow trimming would just show up as an MDS whose log is longer than it should be (and would slow down restarts, I guess), but I'm not sure off-hand whether it's feasible in the code logic.