Bug #55773

closed

Assertion failure (ceph_assert(have_pending)) when creating new OSDs during OSD deployment.

Added by Sridhar Seshasayee almost 2 years ago. Updated over 1 year ago.

Status:
Resolved
Priority:
Normal
Category:
Monitor
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
quincy, pacific
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Monitor
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

See https://bugzilla.redhat.com/show_bug.cgi?id=2086419 for more details.

Assertion Failure

2022-05-13T13:14:04.103+0000 7f1cccf9a700 -1 /builddir/build/BUILD/ceph-16.2.7/src/mon/PaxosService.cc: In function 'void PaxosService::propose_pending()' thread 7f1cccf9a700 time 2022-05-13T13:14:04.101637+0000
/builddir/build/BUILD/ceph-16.2.7/src/mon/PaxosService.cc: 193: FAILED ceph_assert(have_pending)

Backtrace

>>> ceph version 16.2.7-112.el8cp (e18db2ff03ac60c64a18f3315c032b9d5a0a3b8f) pacific (stable)
>>> 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x158) [0x7f1cdc0dccfe]
>>> 2: /usr/lib64/ceph/libceph-common.so.2(+0x276f18) [0x7f1cdc0dcf18]
>>> 3: (PaxosService::propose_pending()+0x50d) [0x561fa6f61add]
>>> 4: (KVMonitor::do_osd_new(uuid_d const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x1ba) [0x561fa6f0514a]
>>> 5: (OSDMonitor::prepare_command_osd_new(boost::intrusive_ptr<MonOpRequest>, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, boost::variant<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool, long, double, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >, std::vector<long, std::allocator<long> >, std::vector<double, std::allocator<double> > >, std::less<void>, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, boost::variant<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool, long, double, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >, std::vector<long, std::allocator<long> >, std::vector<double, std::allocator<double> > > > > > const&, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > const&, std::__cxx11::basic_stringstream<char, std::char_traits<char>, std::allocator<char> >&, ceph::Formatter*)+0x1fdb) [0x561fa6faec8b]
>>> 6: (OSDMonitor::prepare_command_impl(boost::intrusive_ptr<MonOpRequest>, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, boost::variant<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool, long, double, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >, std::vector<long, std::allocator<long> >, std::vector<double, std::allocator<double> > >, std::less<void>, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, boost::variant<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool, long, double, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >, std::vector<long, std::allocator<long> >, std::vector<double, std::allocator<double> > > > > > const&)+0x17278) [0x561fa6fd0f68]
>>> 7: (OSDMonitor::prepare_command(boost::intrusive_ptr<MonOpRequest>)+0xf4) [0x561fa6fdd664]
>>> 8: (OSDMonitor::prepare_update(boost::intrusive_ptr<MonOpRequest>)+0x373) [0x561fa6fe1943]
>>> 9: (PaxosService::dispatch(boost::intrusive_ptr<MonOpRequest>)+0xa6d) [0x561fa6f6269d]
>>> 10: (PaxosService::C_RetryMessage::_finish(int)+0x64) [0x561fa6eb5514]
>>> 11: (C_MonOp::finish(int)+0x49) [0x561fa6e52829]
>>> 12: (Context::complete(int)+0xd) [0x561fa6e4fcfd]
>>> 13: (void finish_contexts<std::__cxx11::list<Context*, std::allocator<Context*> > >(ceph::common::CephContext*, std::__cxx11::list<Context*, std::allocator<Context*> >&, int)+0xa5) [0x561fa6e7c1f5]
>>> 14: (PaxosService::_active()+0x1cb) [0x561fa6f635ab]
>>> 15: (Context::complete(int)+0xd) [0x561fa6e4fcfd]
>>> 16: (void finish_contexts<std::__cxx11::list<Context*, std::allocator<Context*> > >(ceph::common::CephContext*, std::__cxx11::list<Context*, std::allocator<Context*> >&, int)+0xa5) [0x561fa6e7c1f5]
>>> 17: (Paxos::finish_round()+0x169) [0x561fa6f59ea9]
>>> 18: (Paxos::commit_finish()+0x8c0) [0x561fa6f5c9d0]
>>> 19: (C_Committed::finish(int)+0x45) [0x561fa6f605f5]
>>> 20: (Context::complete(int)+0xd) [0x561fa6e4fcfd]
>>> 21: (MonitorDBStore::C_DoTransaction::finish(int)+0x98) [0x561fa6f60318]
>>> 22: (Context::complete(int)+0xd) [0x561fa6e4fcfd]
>>> 23: (Finisher::finisher_thread_entry()+0x1a5) [0x7f1cdc17e2a5]
>>> 24: /lib64/libpthread.so.0(+0x81cf) [0x7f1cd9e111cf]
>>> 25: clone()

Related issues 2 (0 open, 2 closed)

Copied to RADOS - Backport #56059: pacific: Assertion failure (ceph_assert(have_pending)) when creating new OSDs during OSD deployment. (Resolved, Sridhar Seshasayee)
Copied to RADOS - Backport #56060: quincy: Assertion failure (ceph_assert(have_pending)) when creating new OSDs during OSD deployment. (Resolved, Sridhar Seshasayee)
Actions #1

Updated by Sridhar Seshasayee almost 2 years ago

ANALYSIS

Note that the analysis is for the first crash, when the leader was mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com.

1. Before the crash, an "osd new" mon command with uuid 1ac02906-e852-4660-9814-fd268ae381b8 could not complete on the leader mon because the OSDMonitor service was not writeable, so the command was pushed onto the "wait_for_active" context queue (the deferral path is sketched after the log excerpt below).

  -835> 2022-05-26T08:37:55.156+0000 7f7924b5b700  0 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader) e5 handle_command mon_command({"prefix": "osd new", "uuid": "1ac02906-e852-4660-9814-fd268ae381b8"} v 0) v1
  -834> 2022-05-26T08:37:55.156+0000 7f7924b5b700  0 log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1ac02906-e852-4660-9814-fd268ae381b8"}]: dispatch
  -833> 2022-05-26T08:37:55.156+0000 7f7924b5b700 10 log_client _send_to_mon log to self
  -832> 2022-05-26T08:37:55.156+0000 7f7924b5b700 10 log_client  log_queue is 3 last_log 3195 sent 3194 num 3 unsent 1 sending 1
  -831> 2022-05-26T08:37:55.156+0000 7f7924b5b700 10 log_client  will send 2022-05-26T08:37:55.157828+0000 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com (mon.0) 3195 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1ac02906-e852-4660-9814-fd268ae381b8"}]: dispatch
  -830> 2022-05-26T08:37:55.156+0000 7f7924b5b700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(osdmap 1..72) dispatch 0x564a9c920600 mon_command({"prefix": "osd new", "uuid": "1ac02906-e852-4660-9814-fd268ae381b8"} v 0) v1 from client.? 10.1.45.13:0/4042230849 con 0x564a9b005500
  -829> 2022-05-26T08:37:55.156+0000 7f7924b5b700  5 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxos(paxos updating c 503..1085) is_readable = 1 - now=2022-05-26T08:37:55.157853+0000 lease_expire=2022-05-26T08:38:00.121813+0000 has v0 lc 1085
  -828> 2022-05-26T08:37:55.156+0000 7f7924b5b700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(osdmap 1..72)  waiting for paxos -> writeable
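
The "waiting for paxos -> writeable" message above comes from the generic deferral path in PaxosService::dispatch(). A paraphrased sketch of that path (not a verbatim copy of the 16.2.7 source, which surrounds it with more logic):

    // Paraphrased from src/mon/PaxosService.cc, PaxosService::dispatch().
    // If the service cannot accept writes yet, the op is parked and later
    // re-dispatched via C_RetryMessage (this is the queue the analysis
    // refers to as "wait_for_active").
    if (!is_writeable()) {
      dout(10) << " waiting for paxos -> writeable" << dendl;
      wait_for_writeable(op, new C_RetryMessage(this, op));
      return true;
    }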

2. Shortly after, a "config-key set" command for key mgr/cephadm/host.f28-h13-000-r930.rdu2.scalelab.redhat.com was received by the leader from mgr.14187.

  -734> 2022-05-26T08:37:55.239+0000 7f7924b5b700  0 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader) e5 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.f28-h13-000-r930.rdu2.scalelab.redhat.com}] v 0) v1
  -733> 2022-05-26T08:37:55.239+0000 7f7924b5b700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(kv 1..298) dispatch 0x564a9e5bc900 mon_command([{prefix=config-key set, key=mgr/cephadm/host.f28-h13-000-r930.rdu2.scalelab.redhat.com}] v 0) v1 from mgr.14187 10.1.45.219:0/3652952845 con 0x564a9bdf1000
  -732> 2022-05-26T08:37:55.239+0000 7f7924b5b700  5 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxos(paxos updating c 503..1086) is_readable = 1 - now=2022-05-26T08:37:55.241031+0000 lease_expire=2022-05-26T08:38:00.209487+0000 has v0 lc 1086

3. But the KVMonitor service was not writeable at this point either, so this command too was pushed onto the "wait_for_active" context queue.

  -731> 2022-05-26T08:37:55.239+0000 7f7924b5b700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(kv 1..298)  waiting for paxos -> writeable

4. Slightly later, on the Finisher thread (Finisher::finisher_thread_entry()), a previously queued transaction completed and Paxos::commit_finish() was invoked:


   -76> 2022-05-26T08:37:55.264+0000 7f7923358700 20 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxos(paxos writing c 503..1086) commit_finish 1087
   -75> 2022-05-26T08:37:55.264+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxos(paxos writing c 503..1087)  sending commit to mon.1
   -74> 2022-05-26T08:37:55.264+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxos(paxos writing c 503..1087)  sending commit to mon.2
   -73> 2022-05-26T08:37:55.264+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxos(paxos writing c 503..1087)  sending commit to mon.3
   -72> 2022-05-26T08:37:55.264+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxos(paxos writing c 503..1087)  sending commit to mon.4
   -71> 2022-05-26T08:37:55.264+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(mdsmap 1..1) refresh
   -70> 2022-05-26T08:37:55.264+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(osdmap 1..74) refresh
   -69> 2022-05-26T08:37:55.264+0000 7f7923358700  1 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).osd e74 e74: 58 total, 4 up, 58 in

5. All the associated Paxos services go into the refresh state, and commit_proposal() is called, which attempts to finish the pending contexts in the finisher queue.

   -45> 2022-05-26T08:37:55.291+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxos(paxos refresh c 503..1087) commit_proposal
   -44> 2022-05-26T08:37:55.291+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(mgrstat 1..246) _active - not active
   -43> 2022-05-26T08:37:55.291+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(auth 1..64) _active - not active
   -42> 2022-05-26T08:37:55.291+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(kv 1..299) _active - not active
   -41> 2022-05-26T08:37:55.291+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(osdmap 1..74) _active - not active
   -40> 2022-05-26T08:37:55.291+0000 7f7923358700  7 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxos(paxos refresh c 503..1087) extend_lease now+5 (2022-05-26T08:38:00.293159+0000)

6. As part of finish_round(), the KVMonitor service becomes writeable and the contexts waiting in the 'waiting_for_active' queue are dispatched, including the "config-key set" mon command that was parked in point 3 above. Dispatching that command forces an immediate propose, so propose_pending() is called on the KVMonitor service and its 'have_pending' flag is set to 'false' (the dispatch path is sketched after the log excerpt below).

   -39> 2022-05-26T08:37:55.291+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxos(paxos refresh c 503..1087) finish_round
   -38> 2022-05-26T08:37:55.291+0000 7f7923358700 20 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxos(paxos active c 503..1087) finish_round waiting_for_acting
   -37> 2022-05-26T08:37:55.292+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(mgrstat 1..246) _active
   -36> 2022-05-26T08:37:55.292+0000 7f7923358700  7 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(mgrstat 1..246) _active creating new pending
   -35> 2022-05-26T08:37:55.292+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(auth 1..64) _active
   -34> 2022-05-26T08:37:55.292+0000 7f7923358700  7 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(auth 1..64) _active creating new pending
   -33> 2022-05-26T08:37:55.292+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(kv 1..299) _active
   -32> 2022-05-26T08:37:55.292+0000 7f7923358700  7 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(kv 1..299) _active creating new pending
   -31> 2022-05-26T08:37:55.292+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(kv 1..299) dispatch 0x564a9e5bc900 mon_command([{prefix=config-key set, key=mgr/cephadm/host.f28-h13-000-r930.rdu2.scalelab.redhat.com}] v 0) v1 from mgr.14187 10.1.45.219:0/3652952845 con 0x564a9bdf1000
   -30> 2022-05-26T08:37:55.292+0000 7f7923358700  5 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxos(paxos active c 503..1087) is_readable = 1 - now=2022-05-26T08:37:55.293563+0000 lease_expire=2022-05-26T08:38:00.293159+0000 has v0 lc 1087
   -29> 2022-05-26T08:37:55.292+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(kv 1..299) dispatch forced immediate propose
   -28> 2022-05-26T08:37:55.292+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(kv 1..299) propose_pending
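
The "forced immediate propose" and "propose_pending" lines map to the tail of PaxosService::dispatch(). A paraphrased sketch (member names are as I recall them from the Pacific source, so treat the details as approximate):

    // Paraphrased tail of PaxosService::dispatch(): once prepare_update()
    // has queued a change, an immediate propose may be forced, which calls
    // propose_pending() and consumes the service's pending state.
    if (!prepare_update(op)) {
      return true;                  // no change queued, nothing to propose
    }
    if (need_immediate_propose) {   // set e.g. by "config-key set"
      dout(10) << __func__ << " forced immediate propose" << dendl;
      need_immediate_propose = false;
      propose_pending();            // clears have_pending (see point 8)
      return true;
    }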

7. Subsequently, another command waiting in the OSDMonitor service context queue is dispatched: the same "osd new" mon command that was queued in point 1 above. This triggers the sequence to create the new OSD, during which OSDMonitor::prepare_command_impl() only checks that the authmon service is writeable before plugging paxos and calling OSDMonitor::prepare_command_osd_new(), roughly as sketched below.
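
A rough, simplified sketch of that pre-fix "osd new" handling (the real prepare_command_impl() is far larger, and whether mon and paxos are pointers or references varies between releases, so this is only illustrative):

    // Illustrative pre-fix shape of the "osd new" branch in
    // OSDMonitor::prepare_command_impl().
    if (!mon.authmon()->is_writeable()) {
      // authmon not ready: park the command and retry later
      mon.authmon()->wait_for_writeable(op, new C_RetryMessage(this, op));
      return false;
    }
    // NOTE: no equivalent check for mon.kvmon() here -- the bug.
    paxos.plug();                    // batch the resulting proposals
    err = prepare_command_osd_new(op, cmdmap, param_map, ss, f.get());
    paxos.unplug();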

8. But when KVMonitor::do_osd_new() is then called, it in turn calls propose_pending() and hits FAILED ceph_assert(have_pending), because the KVMonitor service had already issued a propose_pending() as described in point 6 above:

   -19> 2022-05-26T08:37:55.322+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(osdmap 1..74) _active
   -18> 2022-05-26T08:37:55.322+0000 7f7923358700  7 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(osdmap 1..74) _active creating new pending
   ...
   ...
    -7> 2022-05-26T08:37:55.322+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(osdmap 1..74) dispatch 0x564a9c920600 mon_command({"prefix": "osd new", "uuid": "1ac02906-e852-4660-9814-fd268ae381b8"} v 0) v1 from client.? 10.1.45.13:0/4042230849 con 0x564a9b005500
    -6> 2022-05-26T08:37:55.322+0000 7f7923358700  5 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxos(paxos updating c 503..1087) is_readable = 1 - now=2022-05-26T08:37:55.324184+0000 lease_expire=2022-05-26T08:38:00.293159+0000 has v0 lc 1087
    -5> 2022-05-26T08:37:55.323+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(auth 1..64) propose_pending
    -4> 2022-05-26T08:37:55.325+0000 7f7923358700  5 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxos(paxos updating c 503..1087) queue_pending_finisher 0x564a9af03de0
    -3> 2022-05-26T08:37:55.325+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxos(paxos updating c 503..1087) trigger_propose plugged, not proposing now
    -2> 2022-05-26T08:37:55.325+0000 7f7923358700 10 mon.f25-h23-000-r730xd.rdu2.scalelab.redhat.com@0(leader).paxosservice(kv 1..299) propose_pending
    -1> 2022-05-26T08:37:55.328+0000 7f7923358700 -1 /builddir/build/BUILD/ceph-16.2.7/src/mon/PaxosService.cc: In function 'void PaxosService::propose_pending()' thread 7f7923358700 time 2022-05-26T08:37:55.326355+0000
/builddir/build/BUILD/ceph-16.2.7/src/mon/PaxosService.cc: 193: FAILED ceph_assert(have_pending)

9. Therefore, this is a case of calling propose_pending() on a service whose pending state had already been consumed by an earlier call, before a new pending value was created. The relevant portion of PaxosService::propose_pending() is sketched below.
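
For reference, the assertion sits at the top of PaxosService::propose_pending() (PaxosService.cc:193 in this build). A paraphrased sketch of the relevant part:

    // Paraphrased from src/mon/PaxosService.cc. propose_pending() consumes
    // the pending state; a second call before _active() has recreated the
    // pending value trips the assert.
    void PaxosService::propose_pending()
    {
      ceph_assert(have_pending);     // <-- FAILED ceph_assert(have_pending)
      // ... build a MonitorDBStore transaction t ...
      encode_pending(t);             // serialize the service's pending value
      have_pending = false;          // consumed; recreated in _active()
      // ... queue the finisher and trigger the paxos proposal ...
    }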

FIX

Ensure that the KVMonitor service is writeable before proceeding further in OSDMonitor::prepare_command_impl(). If it is not writeable, push the command back onto the relevant context queue so that it is retried later.
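
A minimal sketch of what such a guard looks like, mirroring the existing authmon() check (this is only the shape of the change; see the pull request referenced below, ID 46428, for the actual merged diff):

    // Sketch of the guard for the "osd new" handling in
    // OSDMonitor::prepare_command_impl(); illustrative, not the exact diff.
    if (!mon.kvmon()->is_writeable()) {
      dout(10) << __func__ << " waiting for kv mon to be writeable for "
               << "osd new" << dendl;
      mon.kvmon()->wait_for_writeable(op, new C_RetryMessage(this, op));
      return false;  // the command is re-queued and retried once kv is writeable
    }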

Actions #2

Updated by Sridhar Seshasayee almost 2 years ago

  • Subject changed from "OSDMonitor: Ensure kvmon() is writable before proceeding to create a new OSD." to "Assertion failure (ceph_assert(have_pending)) when creating new OSDs during OSD deployment."
  • Description updated (diff)
Actions #3

Updated by Sridhar Seshasayee almost 2 years ago

  • Status changed from New to Fix Under Review
  • Pull request ID set to 46428
Actions #4

Updated by Sridhar Seshasayee almost 2 years ago

  • Status changed from Fix Under Review to Pending Backport
Actions #5

Updated by Backport Bot almost 2 years ago

  • Copied to Backport #56059: pacific: Assertion failure (ceph_assert(have_pending)) when creating new OSDs during OSD deployment. added
Actions #6

Updated by Backport Bot almost 2 years ago

  • Copied to Backport #56060: quincy: Assertion failure (ceph_assert(have_pending)) when creating new OSDs during OSD deployment. added
Actions #7

Updated by Sridhar Seshasayee over 1 year ago

  • Status changed from Pending Backport to Resolved