Bug #19395

"Too many inodes in cache" warning can happen even when trimming is working

Added by John Spray 8 months ago. Updated 6 months ago.

Status: Resolved
Priority: Normal
Assignee: John Spray
Category: -
Target version: -
Start date: 03/28/2017
Due date:
% Done: 0%
Source:
Tags:
Backport: jewel, kraken
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Release:
Component(FS):
Needs Doc: No

History

#1 Updated by John Spray 8 months ago

Cache dumps are in /root/19395 on mira060

#2 Updated by John Spray 8 months ago

A couple of observations from today:

  • after an MDS failover, the issue cleared and we're back to having "ino" (CInodes) be less than "inodes" (CDentries).
  • the node where the issue happened (mira060) had a different ceph.conf: it was using mds cache size = 100000, while the other MDS nodes are using mds cache size = 500000 (see the sketch after this list).
  • looking at the cache dumps, we can see that the excess CInode instances are indeed present, i.e. they're linked into MDCache::inode_map.
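
For reference, the mismatch above corresponds to something like the following in ceph.conf on the two sets of nodes (a sketch based on the values quoted above, not the actual files):

    # mira060 (where the warning fired)
    [mds]
        mds cache size = 100000

    # the other MDS nodes
    [mds]
        mds cache size = 500000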

#3 Updated by John Spray 8 months ago

  • Subject changed from Lab cluster "Too many inodes in cache (196465/100000)" to "Too many inodes in cache" warning can happen even when trimming is working
  • Status changed from New to Need Review
  • Assignee set to John Spray

The bad state is long gone, so I'm just going to repurpose this ticket to fix the weird case where we get a health warning even though trimming is doing its job fine (a rough sketch of the idea is below).
https://github.com/ceph/ceph/pull/14197
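
Purely as an illustration of the shape of the problem, and not the code from the linked PR (all names and the threshold here are hypothetical): a check that only warns once the cache is past its limit by a margin avoids flagging a cache that trimming is still keeping under control.

    // Illustrative sketch only -- not the code from the linked PR.
    // All names (should_warn, cache_limit, margin) are hypothetical.
    #include <cstdint>
    #include <iostream>

    // Only raise a "too many inodes in cache" style warning when the cache
    // exceeds its configured limit by a margin, so a transient overshoot
    // that trimming is already working down does not trigger a warning.
    bool should_warn(uint64_t num_inodes, uint64_t cache_limit, double margin = 1.5)
    {
      if (cache_limit == 0)
        return false;  // treat 0 as "no limit" in this sketch
      return num_inodes > static_cast<uint64_t>(cache_limit * margin);
    }

    int main()
    {
      // Numbers from the original warning: 196465 inodes against a limit of 100000.
      std::cout << std::boolalpha
                << should_warn(196465, 100000) << "\n"   // true: far past the margin
                << should_warn(120000, 100000) << "\n";  // false: modest overshoot
    }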

#4 Updated by John Spray 8 months ago

  • Backport set to jewel, kraken

#5 Updated by John Spray 6 months ago

  • Status changed from Need Review to Resolved
