Backport #17974

jewel: ceph/Client segfaults in handle_mds_map when switching mds

Added by Dan van der Ster 7 months ago. Updated 5 months ago.

Status: Resolved
Priority: Normal
Release: jewel

History

#1 Updated by Nathan Cutler 7 months ago

  • Category set to Manila

#2 Updated by John Spray 7 months ago

I wouldn't be surprised if it was fixed in Kraken but I'll look at the Jewel code.

#3 Updated by John Spray 7 months ago

(the updates to the code in Kraken were ceph-mgr related so if they fixed a bug it was completely accidental!)

#4 Updated by John Spray 7 months ago

  • Status changed from New to Need Review

In jewel there is no call to erase a command from the table after it receives a reply, so if a command has ever been sent, the client will crash as soon as a failover happens.

https://github.com/ceph/ceph/pull/12137

#5 Updated by Loic Dachary 7 months ago

  • Tracker changed from Bug to Backport
  • Status changed from Need Review to In Progress

#6 Updated by Loic Dachary 7 months ago

  • Description updated (diff)

Original description

Our manila-share daemon is segfaulting when our active mds goes away and we switch to the standby.

The crash is in handle_mds_map:

(gdb) where
#0  0x0000000003580270 in ?? ()
#1  0x00007f06046ca84a in Client::handle_mds_map (this=this@entry=0x3b58bb0, m=m@entry=0x7f05d8001190) at client/Client.cc:2548
#2  0x00007f060470565b in Client::ms_dispatch (this=0x3b58bb0, m=0x7f05d8001190) at client/Client.cc:2443
#3  0x00007f060496dc9a in ms_deliver_dispatch (m=0x7f05d8001190, this=0x3b58340) at msg/Messenger.h:584
#4  DispatchQueue::entry (this=0x3b58510) at msg/simple/DispatchQueue.cc:185
#5  0x00007f06049df36d in DispatchQueue::DispatchThread::entry (this=<optimized out>) at msg/simple/DispatchQueue.h:103
#6  0x00007f061ddffdc5 in start_thread () from /lib64/libpthread.so.0
#7  0x00007f061d423ced in clone () from /lib64/libc.so.6

(gdb) up
#1  0x00007f06046ca84a in Client::handle_mds_map (this=this@entry=0x3b58bb0, m=m@entry=0x7f05d8001190) at client/Client.cc:2548
2548            i->second.on_finish->complete(-ETIMEDOUT);

(gdb) p i
$1 = {first = 1, second = {con = {px = },
    mds_gid = {<boost::totally_ordered1<mds_gid_t, boost::totally_ordered2<mds_gid_t, unsigned long, boost::detail::empty_base<mds_gid_t> > >> = {<boost::less_than_comparable1<mds_gid_t, boost::equality_comparable1<mds_gid_t, boost::totally_ordered2<mds_gid_t, unsigned long, boost::detail::empty_base<mds_gid_t> > > >> = {<boost::equality_comparable1<mds_gid_t, boost::totally_ordered2<mds_gid_t, unsigned long, boost::detail::empty_base<mds_gid_t> > >> = {<boost::totally_ordered2<mds_gid_t, unsigned long, boost::detail::empty_base<mds_gid_t> >> = {<boost::less_than_comparable2<mds_gid_t, unsigned long, boost::equality_comparable2<mds_gid_t, unsigned long, boost::detail::empty_base<mds_gid_t> > >> = {<boost::equality_comparable2<mds_gid_t, unsigned long, boost::detail::empty_base<mds_gid_t> >> = {<boost::detail::empty_base<mds_gid_t>> = {<No data fields>}, <No data fields>}, <No data fields>}, <No data fields>}, <No data fields>}, <No data fields>}, <No data fields>}, t = 310423241}, tid = 1, on_finish = , outbl = , outs = }}

Note that on_finish looks null there.

We get the same segfault with 10.2.3 and the jewel branch as of today (soon to be 10.2.4).

We've also noticed that the code has been refactored in kraken (which is why I assigned this to John):

client: refactor command handling
common: refactor CommandTable

so we tried the current kraken builds and indeed the crashes are gone. It's not clear that the refactoring itself is what fixed this -- it could have been some other change between jewel and kraken.

Any ideas? Happy to help track this down.

#7 Updated by Loic Dachary 7 months ago

  • Description updated (diff)

#8 Updated by John Spray 5 months ago

  • Status changed from In Progress to Resolved

#9 Updated by Nathan Cutler 5 months ago

  • Target version set to v10.2.6
