Bug #19395: "Too many inodes in cache" warning can happen even when trimming is working
Status: Closed
% Done: 0%
Backport: jewel, kraken
Regression: No
Severity: 3 - minor
Updated by John Spray about 7 years ago
A couple of observations from today:
- after an MDS failover, the issue cleared and we're back to "ino" (CInodes) being less than "inodes" (CDentries).
- the node where the issue happened (mira060) had a different ceph.conf: it was using mds cache size = 100000, while the other MDS nodes were using mds cache size = 500000.
- looking at the cache dumps, we can see that the excess CInode instances are indeed present in the dump, i.e. they're linked into MDCache::inode_map (see the sketch after this list).
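For context, here is a minimal C++ sketch of the relationship described above. It is a hypothetical simplification, not the actual Ceph MDS code: the real structures are MDCache::inode_map and the dentry LRU, but the type and function names below (MiniMDCache, too_many_inodes) are invented for illustration.

    // Illustrative sketch only; hypothetical names, not the Ceph API.
    #include <cstdint>
    #include <list>
    #include <unordered_map>

    struct CInode { uint64_t ino; };
    struct CDentry { CInode *linked_inode; };

    struct MiniMDCache {
        // Every cached CInode is linked here, including ones that no
        // longer have a dentry keeping them warm in the LRU.
        std::unordered_map<uint64_t, CInode*> inode_map;
        // Trimming is driven from the cold end of this dentry list; an
        // inode is only freed once its last dentry goes away.
        std::list<CDentry*> dentry_lru;

        // A health check along the lines of the one in this ticket
        // compares the CInode count against "mds cache size", so CInodes
        // that outlive their dentries can trip the warning even while
        // dentry trimming is keeping up.
        bool too_many_inodes(size_t mds_cache_size) const {
            return inode_map.size() > mds_cache_size;
        }
    };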
Updated by John Spray about 7 years ago
- Subject changed from Lab cluster "Too many inodes in cache (196465/100000)" to "Too many inodes in cache" warning can happen even when trimming is working
- Status changed from New to Fix Under Review
- Assignee set to John Spray
The bad state is long gone, so I'm repurposing this ticket to fix the weird case where we were getting a health warning even though trimming was doing its job fine.
https://github.com/ceph/ceph/pull/14197
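To make that failure mode concrete, here is a hedged continuation of the sketch above (same hypothetical types; this is not the change in the PR). A dentry-driven trim loop can succeed in keeping the LRU at its target while CInodes without trimmable dentries keep inode_map above the configured limit, which is exactly the "warning despite working trimming" case.

    // Continues the hypothetical sketch above; not the real trim code.
    // Trimming pops dentries from the cold end of the LRU until the
    // dentry count reaches the target, freeing an inode only when its
    // last link goes away.
    void trim_to(MiniMDCache &cache, size_t target_dentries) {
        while (cache.dentry_lru.size() > target_dentries) {
            CDentry *dn = cache.dentry_lru.back();
            cache.dentry_lru.pop_back();
            if (dn->linked_inode /* && last link && not pinned */) {
                cache.inode_map.erase(dn->linked_inode->ino);
                delete dn->linked_inode;
            }
            delete dn;
        }
        // After this loop the dentry LRU meets its target, yet
        // cache.inode_map.size() can still exceed the configured limit,
        // so a naive inode-count health check still raises the warning.
    }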
Updated by John Spray almost 7 years ago
- Status changed from Fix Under Review to Resolved