Bug #19395
closed
Cache dumps in /root/19395 on mira060
Couple of observations from today:
- after an MDS failover, the issue cleared and "ino" (CInodes) is again lower than "inodes" (CDentries)
- the node where the issue happened (mira060) had a different ceph.conf: it was using mds cache size = 100000, while the other MDS nodes are using mds cache size = 500000
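For reference, the configuration difference described above would look roughly like this in ceph.conf; the section placement is an assumption, only the two values come from this ticket:

```
# ceph.conf on mira060 (the node that raised the warning)
[mds]
    mds cache size = 100000    # other MDS nodes in the lab used 500000
```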
- looking at the cache dumps, the excess CInode instances are indeed present in the dump, i.e. they are linked into MDCache::inode_map
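A cache dump like the ones referenced above can be produced through the MDS admin socket; a sketch, assuming a daemon id of "a" (hypothetical) and that this Ceph release supports the dump cache asok command:

```shell
# Ask the running MDS (daemon id "a" is an assumption) to write its cache
# contents, including every CInode linked into MDCache::inode_map, to a
# file on the MDS host.
ceph daemon mds.a dump cache /tmp/mds-cache.dump

# Compare against the live counters: the mds_mem section includes the
# "ino" (CInode) count that the health warning is based on.
ceph daemon mds.a perf dump mds_mem
```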
- Subject changed from Lab cluster "Too many inodes in cache (196465/100000)" to "Too many inodes in cache" warning can happen even when trimming is working
- Status changed from New to Fix Under Review
- Assignee set to John Spray
The bad state is long gone, so I'm repurposing this ticket to fix the odd case where the health warning was raised even though trimming was keeping up.
https://github.com/ceph/ceph/pull/14197
- Backport set to jewel, kraken
- Status changed from Fix Under Review to Resolved