Bug #5382

mds: failed objecter assert on shutdown

Added by Greg Farnum almost 11 years ago. Updated over 7 years ago.

Status:
Can't reproduce
Priority:
High
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Q/A
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
MDS
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I haven't been through this completely, but it looks like the MDS went laggy and then received a SIGTERM (the test was ending); it marked down all its connections, and then tried to process a deferred table request and hit an Objecter assert (I assume because it had already shut down the Objecter).
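
To make the failure mode concrete, here is a minimal sketch (stand-in names, not the real Ceph classes): a component whose submit path asserts that it is still initialized, and a shutdown path that clears the flag. Any handler that runs after shutdown and reaches the submit path trips the assert, just like the FAILED assert(initialized) in Objecter::op_submit in the trace below.

    // Hypothetical stand-in for the Objecter; only the assert pattern is real.
    #include <atomic>
    #include <cassert>

    class FakeObjecter {
      std::atomic<bool> initialized{false};
    public:
      void init()     { initialized = true; }
      void shutdown() { initialized = false; }  // teardown clears the flag
      void op_submit() {
        // equivalent of "osdc/Objecter.cc: 1146: FAILED assert(initialized)"
        assert(initialized);
        // ... build and send the op ...
      }
    };

    int main() {
      FakeObjecter objecter;
      objecter.init();
      objecter.shutdown();   // SIGTERM handling shuts the MDS down
      objecter.op_submit();  // a deferred request still runs: the assert fires
    }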

2013-06-15T13:07:19.205 INFO:teuthology.task.ceph.mds.a.err:osdc/Objecter.cc: In function 'tid_t Objecter::op_submit(Objecter::Op*)' thread 9d36700 time 2013-06-15 13:08:13.690779
2013-06-15T13:07:19.205 INFO:teuthology.task.ceph.mds.a.err:osdc/Objecter.cc: 1146: FAILED assert(initialized)
2013-06-15T13:07:19.291 INFO:teuthology.task.ceph.mds.a.err: ceph version 0.64-400-g9253cd8 (9253cd8174ca7690414923259b01c0a484323f19)
2013-06-15T13:07:19.292 INFO:teuthology.task.ceph.mds.a.err: 1: (Objecter::op_submit(Objecter::Op*)+0x16b) [0x7b15bb]
2013-06-15T13:07:19.292 INFO:teuthology.task.ceph.mds.a.err: 2: (Objecter::write_trunc(object_t const&, object_locator_t const&, unsigned long, unsigned long, SnapContext const&, ceph::buffer::list const&, utime_t, int, unsigned long, unsigned int, Context*, Context*, eversion_t*, ObjectOperation*)+0x155) [0x56cc55]
2013-06-15T13:07:19.292 INFO:teuthology.task.ceph.mds.a.err: 3: (Objecter::sg_write_trunc(std::vector<ObjectExtent, std::allocator<ObjectExtent> >&, SnapContext const&, ceph::buffer::list const&, utime_t, int, unsigned long, unsigned int, Context*, Context*)+0x9db) [0x56e5ab]
2013-06-15T13:07:19.292 INFO:teuthology.task.ceph.mds.a.err: 4: (Journaler::_do_flush(unsigned int)+0x2e4) [0x795c14]
2013-06-15T13:07:19.292 INFO:teuthology.task.ceph.mds.a.err: 5: (Journaler::flush(Context*)+0x226) [0x796936]
2013-06-15T13:07:19.293 INFO:teuthology.task.ceph.mds.a.err: 6: (MDLog::flush()+0x16) [0x7885e6]
2013-06-15T13:07:19.293 INFO:teuthology.task.ceph.mds.a.err: 7: (MDSTableServer::handle_prepare(MMDSTableRequest*)+0x385) [0x76b705]
2013-06-15T13:07:19.293 INFO:teuthology.task.ceph.mds.a.err: 8: (MDS::handle_deferrable_message(Message*)+0xe74) [0x575c14]
2013-06-15T13:07:19.293 INFO:teuthology.task.ceph.mds.a.err: 9: (MDS::_dispatch(Message*)+0x682) [0x589822]
2013-06-15T13:07:19.293 INFO:teuthology.task.ceph.mds.a.err: 10: (MDS::ms_dispatch(Message*)+0x1d3) [0x58ac53]
2013-06-15T13:07:19.293 INFO:teuthology.task.ceph.mds.a.err: 11: (DispatchQueue::entry()+0x3f1) [0x942fd1]
2013-06-15T13:07:19.294 INFO:teuthology.task.ceph.mds.a.err: 12: (DispatchQueue::DispatchThread::entry()+0xd) [0x86da9d]
2013-06-15T13:07:19.294 INFO:teuthology.task.ceph.mds.a.err: 13: (()+0x7e9a) [0x4e39e9a]
2013-06-15T13:07:19.294 INFO:teuthology.task.ceph.mds.a.err: 14: (clone()+0x6d) [0x6665ccd]
2013-06-15T13:07:19.294 INFO:teuthology.task.ceph.mds.a.err: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

History

#1 Updated by Greg Farnum almost 11 years ago

Sorry, logs at /a/teuthology-2013-06-15_01:00:44-fs-next-testing-basic/36375

#2 Updated by Ian Colle almost 11 years ago

  • Priority changed from Normal to High

#3 Updated by Greg Farnum almost 11 years ago

/a/teuthology-2013-06-23_20:00:47-fs-cuttlefish-testing-basic/43843/teuthology.log

#4 Updated by John Spray about 10 years ago

What's happening is that suicide() is getting called from another thread while the dispatch thread is inside _dispatch. suicide() tears down the objecter and sets want_state=CEPH_MDS_STATE_DNE (which is checked at ms_dispatch entry); then the dispatch thread wakes up in the middle of _dispatch (it relinquishes mds_lock more than once) and tries to carry on.
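
A hedged sketch of that interleaving (the names mds_lock, want_state, suicide() and _dispatch() mirror the real code, but this is not the real implementation): the dispatch thread checks want_state only at entry, drops and retakes the lock mid-dispatch, and suicide() slips in between. The sleeps just make the interleaving deterministic for the demo.

    #include <chrono>
    #include <iostream>
    #include <mutex>
    #include <thread>

    std::mutex mds_lock;
    enum State { ACTIVE, DNE };
    State want_state = ACTIVE;
    bool objecter_alive = true;

    void suicide() {                      // runs in another thread
      std::lock_guard<std::mutex> l(mds_lock);
      want_state = DNE;                   // checked only at ms_dispatch entry
      objecter_alive = false;             // tears down the objecter
    }

    void _dispatch() {
      std::unique_lock<std::mutex> l(mds_lock);
      if (want_state == DNE) return;      // entry check passes: still ACTIVE
      l.unlock();                         // _dispatch relinquishes mds_lock ...
      std::this_thread::sleep_for(std::chrono::milliseconds(50));
      l.lock();                           // ... and retakes it without rechecking
      if (!objecter_alive)
        std::cout << "dispatch resumed after suicide(): "
                     "would hit assert(initialized) here\n";
    }

    int main() {
      std::thread dispatcher(_dispatch);
      std::this_thread::sleep_for(std::chrono::milliseconds(10));
      std::thread killer(suicide);
      dispatcher.join();
      killer.join();
    }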

We may want to fix this in the general case by adding a shutdown() operation that sets want_state to DNE but doesn't destroy anything (deferring the destruction until the dispatch loop is over), and by making the suicide() operation terminate the process, so that anything calling suicide() (and things should only be calling it if they're panicking) stops the world after tearing things down. A sketch of that split follows.
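
Roughly, the split could look like this (illustrative names and structure only; a sketch of the idea, not an actual patch):

    #include <cstdlib>
    #include <mutex>

    static std::mutex mds_lock;
    enum State { ACTIVE, DNE };
    static State want_state = ACTIVE;

    static void destroy_everything() { /* tear down objecter, mdlog, ... */ }

    void shutdown() {
      std::lock_guard<std::mutex> l(mds_lock);
      want_state = DNE;               // dispatch loop notices this and drains;
                                      // nothing is destroyed here, so in-flight
                                      // dispatch still sees valid objects
    }

    void dispatch_loop() {
      for (;;) {
        std::lock_guard<std::mutex> l(mds_lock);
        if (want_state == DNE) break; // the only exit from the loop
        // ... dequeue and handle one message under mds_lock ...
      }
      destroy_everything();           // safe: dispatch can no longer race with it
    }

    void suicide() {                  // panic path only
      destroy_everything();
      std::_Exit(1);                  // stop the world after tearing down
    }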

#5 Updated by Greg Farnum about 10 years ago

I haven't looked at any of the code involved for real, but that sounds like a good plan to me. thumbs up

#6 Updated by Greg Farnum about 10 years ago

I'm pretty sure we had a discussion about your patch, but I can't find the comments and I don't remember the outcome. (Whoops.) I know it involved the question of when we could change over into wanting DNE (I was worried about that happening while processing a new MDSMap, and about your changes not handling that case), but that's the only real problem I remember. Was there anything else, John?

#7 Updated by John Spray about 10 years ago

There was an earlier patch that introduced an "I'm in dispatch" flag, and a more recent one (https://github.com/ceph/ceph/commit/bc22955c562d2a348c40d89121953f2fa2fb283a) that puts in extra checks for DNE in the places where the lock is dropped and retaken; I'm not sure if you'd seen the later one. I think I missed creating a PR for this because I was feeling unsure about the safety of the changes in MDLog. The recheck pattern is sketched below.
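
The recheck pattern, again with stand-in names (this is not the linked commit itself): everywhere mds_lock is dropped and retaken inside dispatch, the code re-tests want_state before touching anything that suicide() may have torn down.

    #include <mutex>

    static std::mutex mds_lock;
    enum State { ACTIVE, DNE };
    static State want_state = ACTIVE;

    void handle_message() {
      std::unique_lock<std::mutex> l(mds_lock);
      if (want_state == DNE) return;  // entry check, as before

      l.unlock();
      // ... wait on I/O or take another lock without holding mds_lock ...
      l.lock();

      if (want_state == DNE) return;  // NEW: recheck after every retake, so a
                                      // concurrent suicide() can't be missed
      // ... safe to keep using objecter, mdlog, etc. ...
    }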

#8 Updated by Greg Farnum about 10 years ago

Okay, leaving this alone for the moment. That patch is a good start, and I think the MDLog stuff is actually okay (barring that one of the conditionals needs to be inside a loop instead of outside it), but unfortunately I think there are still some places where it won't behave right if the op it's working on is going through a finish_contexts() call chain; see the illustration below.
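
To illustrate the concern (a hypothetical context subclass; Context and finish_contexts are only modeled on the Ceph idiom here): completion callbacks queued before shutdown can fire afterwards and re-enter a submit path, bypassing the per-callsite want_state rechecks entirely.

    #include <vector>

    struct Context {
      virtual ~Context() = default;
      virtual void finish(int r) = 0;
    };

    struct C_FlushMore : public Context {   // hypothetical completion context
      void finish(int) override {
        // re-enters something like Journaler::flush() -> Objecter::op_submit();
        // no want_state recheck sits on this path, so it can run after teardown
      }
    };

    void finish_contexts(std::vector<Context*>& ls, int r) {
      for (Context* c : ls) {
        c->finish(r);  // may submit new ops even while the MDS is shutting down
        delete c;
      }
      ls.clear();
    }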

#9 Updated by Sage Weil over 9 years ago

  • Status changed from New to Can't reproduce

#10 Updated by Greg Farnum over 7 years ago

  • Component(FS) MDS added
