Bug #24491
client: _ll_drop_pins travel inode_map may access invalid ‘next’ iterator
Status: Closed
Description
We encountered a process crash when using libcephfs.
The call stack is below:
#0 0x00007fdef24941f7 in raise () from /lib64/libc.so.6
#1 0x00007fdef24958e8 in abort () from /lib64/libc.so.6
#2 0x00007fdef1d923b5 in os::abort(bool) ()
from /usr/local/jdk-8u131/jre/lib/amd64/server/libjvm.so
#3 0x00007fdef1f34673 in VMError::report_and_die() ()
from /usr/local/jdk-8u131/jre/lib/amd64/server/libjvm.so
#4 0x00007fdef1d978bf in JVM_handle_linux_signal ()
from /usr/local/jdk-8u131/jre/lib/amd64/server/libjvm.so
#5 0x00007fdef1d8de13 in signalHandler(int, siginfo*, void*) ()
from /usr/local/jdk-8u131/jre/lib/amd64/server/libjvm.so
#6 <signal handler called>
#7 0x00007fdeca481ec5 in Client::_ll_drop_pins (this=0x7fdeecfa48d0)
at /ceph/src/client/Client.cc:10388
#8 0x00007fdeca45b325 in Client::unmount (this=0x7fdeecfa48d0)
at /ceph/src/client/Client.cc:5868
#9 0x00007fdeca421f82 in ceph_mount_info::shutdown (this=0x7fdeecec8870)
at /ceph/src/libcephfs.cc:146
#10 0x00007fdeca421f52 in ceph_mount_info::unmount (this=0x7fdeecec8870)
at /ceph/src/libcephfs.cc:139
#11 0x00007fdeca41bedb in ceph_unmount (cmount=0x7fdeecec8870)
at /ceph/src/libcephfs.cc:344
#12 0x00007fdeca8315e5 in Java_com_ceph_fs_CephMount_native_1ceph_1unmount (
env=0x7fdeed09e1f8, clz=0x7fdeb060a700, j_mntp=140595434391664)
at /ceph/src/java/native/libcephfs_jni.cc:464
Printing `in` in gdb shows an invalid pointer:
p in
$1 = (Inode *) 0x1
Here is the code:
void Client::_ll_drop_pins()
{
  ldout(cct, 10) << __func__ << dendl;
  ceph::unordered_map<vinodeno_t, Inode*>::iterator next;
  for (ceph::unordered_map<vinodeno_t, Inode*>::iterator it = inode_map.begin();
       it != inode_map.end();
       it = next) {
    Inode *in = it->second;
    next = it;
    ++next;
    if (in->ll_ref)
      _ll_put(in, in->ll_ref);
  }
}
When `in` is the root, `in->_ref == 1`, and the `root_parents` map contains the inode that `next` points to, there will be a crash, because that inode is deleted by the time `_ll_put` finishes, leaving `next` pointing at freed memory.
If you mount a deep subdirectory as the root, this crash will happen with high probability.
Updated by Zheng Yan almost 6 years ago
- Description updated (diff)
Thanks for reporting this. Could you fix this issue in a way similar to https://github.com/ceph/ceph/pull/22073?
Updated by Zheng Yan almost 6 years ago
- Status changed from New to Fix Under Review
Updated by Patrick Donnelly almost 6 years ago
- Subject changed from _ll_drop_pins travel inode_map may access invalid ‘next’ iterator to client: _ll_drop_pins travel inode_map may access invalid ‘next’ iterator
- Backport set to mimic,luminous,jewel
- Affected Versions deleted (v0.56)
- Component(FS) Client added
- Component(FS) deleted (libcephfs)
Updated by Patrick Donnelly almost 6 years ago
- Category changed from Correctness/Safety to 48
- Status changed from Fix Under Review to Pending Backport
Updated by Patrick Donnelly almost 6 years ago
- Category changed from 48 to Correctness/Safety
Updated by Nathan Cutler almost 6 years ago
- Copied to Backport #24534: mimic: client: _ll_drop_pins travel inode_map may access invalid ‘next’ iterator added
Updated by Nathan Cutler almost 6 years ago
- Copied to Backport #24535: luminous: client: _ll_drop_pins travel inode_map may access invalid ‘next’ iterator added
Updated by Nathan Cutler almost 6 years ago
- Copied to Backport #24536: jewel: client: _ll_drop_pins travel inode_map may access invalid ‘next’ iterator added
Updated by Patrick Donnelly over 5 years ago
- Status changed from Pending Backport to Resolved