Bug #42746

closed

mds crashed in MDCache::request_forward

Added by Zheng Yan over 4 years ago. Updated over 4 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
-
Target version:
% Done:

0%

Source:
Development
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
MDS
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

(gdb) bt
#0  0x00007fffeebfcd0f in raise () from /lib64/libpthread.so.0
#1  0x00005555559e3291 in reraise_fatal (signum=11) at /home/zhyan/Ceph/ceph/src/global/signal_handler.cc:326
#2  handle_fatal_signal (signum=11) at /home/zhyan/Ceph/ceph/src/global/signal_handler.cc:326
#3  <signal handler called>
#4  0x00005555557924a0 in std::_Rb_tree<int, std::pair<int const, std::unique_ptr<BatchOp, std::default_delete<BatchOp> > >, std::_Select1st<std::pair<int const, std::unique_ptr<BatchOp, std::default_delete<BatchOp> > > >, std::less<int>, std::allocator<std::pair<int const, std::unique_ptr<BatchOp, std::default_delete<BatchOp> > > > >::_M_lower_bound (this=<optimized out>, __k=<synthetic pointer>: <optimized out>, __y=0x5555563d5448, __x=0x400000005)
    at /usr/include/c++/8/bits/stl_tree.h:1883
#5  std::_Rb_tree<int, std::pair<int const, std::unique_ptr<BatchOp, std::default_delete<BatchOp> > >, std::_Select1st<std::pair<int const, std::unique_ptr<BatchOp, std::default_delete<BatchOp> > > >, std::less<int>, std::allocator<std::pair<int const, std::unique_ptr<BatchOp, std::default_delete<BatchOp> > > > >::find (__k=<synthetic pointer>: <optimized out>, this=0x555558673610) at /usr/include/c++/8/bits/stl_tree.h:2539
#6  std::map<int, std::unique_ptr<BatchOp, std::default_delete<BatchOp> >, std::less<int>, std::allocator<std::pair<int const, std::unique_ptr<BatchOp, std::default_delete<BatchOp> > > > >::find (__x=<synthetic pointer>: <optimized out>, this=0x555558673610) at /usr/include/c++/8/bits/stl_map.h:1169
#7  MDCache::request_forward (this=0x5555563ee400, mdr=..., who=1, port=<optimized out>) at /home/zhyan/Ceph/ceph/src/mds/MDCache.cc:9384
#8  0x000055555579ba2e in MDCache::path_traverse (this=0x5555563ee400, mdr=..., cf=..., path=..., flags=<optimized out>, pdnvec=0x555559f08628, 
    pin=0x555559f08660) at /home/zhyan/Ceph/ceph/src/mds/MDCache.cc:8363
#9  0x00005555556f8004 in Server::rdlock_path_pin_ref (this=0x555556389200, mdr=..., n=0, lov=..., want_auth=<optimized out>, no_want_auth=<optimized out>, 
    layout=0x0, no_lookup=false) at /home/zhyan/Ceph/ceph/src/mds/Server.cc:3317
#10 0x00005555556fa19c in Server::handle_client_getattr (this=0x555556389200, mdr=..., is_lookup=<optimized out>)
    at /home/zhyan/Ceph/ceph/src/mds/Server.cc:3538
#11 0x000055555572cab7 in Server::dispatch_client_request (this=0x555556389200, mdr=...) at /home/zhyan/Ceph/ceph/src/mds/Server.cc:2462
#12 0x00005555557c35b3 in MDCache::dispatch_request (this=<optimized out>, mdr=...) at /home/zhyan/Ceph/ceph/src/mds/MDCache.cc:9413
#13 0x00005555559542e3 in Context::complete (r=0, this=0x5555644f3be0) at /home/zhyan/Ceph/ceph/src/include/Context.h:77
#14 MDSContext::complete (this=0x5555644f3be0, r=0) at /home/zhyan/Ceph/ceph/src/mds/MDSContext.cc:29
#15 0x00005555556acf1b in MDSRank::_advance_queues (this=0x555556f88008) at /home/zhyan/Ceph/ceph/src/mds/MDSRank.cc:1239
#16 0x00005555556ad9e2 in MDSRank::_dispatch (this=0x555556f88008, m=..., new_msg=<optimized out>) at /home/zhyan/Ceph/ceph/src/mds/MDSRank.cc:1045
#17 0x00005555556ae666 in MDSRankDispatcher::ms_dispatch (this=this@entry=0x555556f88000, m=...) at /home/zhyan/Ceph/ceph/src/mds/MDSRank.cc:1015
#18 0x000055555569b5db in MDSDaemon::ms_dispatch2 (this=0x5555564ee000, m=...) at /home/zhyan/Ceph/ceph/src/common/RefCountedObj.h:55
#19 0x00007fffef84512c in Messenger::ms_deliver_dispatch (m=..., this=0x5555563e4000) at /home/zhyan/Ceph/ceph/src/msg/DispatchQueue.cc:200
#20 DispatchQueue::entry (this=0x5555563e4328) at /home/zhyan/Ceph/ceph/src/msg/DispatchQueue.cc:199
#21 0x00007fffef8e67ad in DispatchQueue::DispatchThread::entry (this=<optimized out>) at /home/zhyan/Ceph/ceph/src/msg/DispatchQueue.h:101
#22 0x00007fffeebf24aa in start_thread () from /lib64/libpthread.so.0
#23 0x00007fffee37a3f3 in clone () from /lib64/libc.so.6
(gdb)
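Frame #7 shows the segfault inside `std::map<int, std::unique_ptr<BatchOp>>::find`, with the tree walk dereferencing a garbage node pointer (`__x=0x400000005`). That signature is consistent with calling `find()` on a map whose owning object has been freed or whose nodes were corrupted, rather than a bug in `find()` itself. As background only, here is a minimal sketch of the lookup-then-erase pattern such batch-op bookkeeping typically uses; the `BatchOp`/`forward_batched` names are hypothetical illustrations, not the actual MDS code or the fix in PR 31534:

```cpp
#include <cassert>
#include <map>
#include <memory>

// Hypothetical stand-in for the per-inode batch-op bookkeeping implicated in
// the backtrace: a map keyed by operation mask, owning the batched op.
struct BatchOp {
    int pending = 0;  // requests queued behind the batch leader
};

using BatchOpMap = std::map<int, std::unique_ptr<BatchOp>>;

// Look the entry up once, act on it, then erase via the iterator we already
// hold -- all while the owning object is known to still be alive.  The crash
// above is what this pattern looks like when that lifetime assumption breaks.
int forward_batched(BatchOpMap& ops, int mask) {
    auto it = ops.find(mask);
    if (it == ops.end())
        return 0;                 // nothing batched under this mask
    int forwarded = it->second->pending;
    ops.erase(it);                // erase by iterator, not by a second lookup
    return forwarded;
}
```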

Related issues 1 (0 open, 1 closed)

Related to CephFS - Feature #36608: mds: answering all pending getattr/lookups targeting the same inode in one go. (Resolved; Xuehan Xu)

Actions #1

Updated by Zheng Yan over 4 years ago

  • Status changed from New to Fix Under Review
  • Pull request ID set to 31534
Actions #2

Updated by Patrick Donnelly over 4 years ago

  • Description updated (diff)
  • Assignee set to Zheng Yan
  • Target version set to v15.0.0
  • Start date deleted (11/11/2019)
  • Source set to Development
  • Component(FS) MDS added

Is this from a QA run or local testing?

Actions #3

Updated by Patrick Donnelly over 4 years ago

  • Related to Feature #36608: mds: answering all pending getattr/lookups targeting the same inode in one go. added
Actions #4

Updated by Patrick Donnelly over 4 years ago

  • Status changed from Fix Under Review to Resolved
