Bug #38513


luminous: "AsyncReserver.h: 190: FAILED assert(!queue_pointers.count(item) && !in_progress.count(item))" in rados

Added by Yuri Weinstein about 5 years ago. Updated over 3 years ago.

Status: Rejected
Priority: High
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite: rados
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Run: http://pulpito.ceph.com/yuriw-2019-02-27_17:20:44-rados-wip-yuri3-testing-2019-02-25-2101-luminous-distro-basic-smithi/
Job: 3645506
Logs: http://pulpito.ceph.com/yuriw-2019-02-27_17:20:44-rados-wip-yuri3-testing-2019-02-25-2101-luminous-distro-basic-smithi/3645506/

2019-02-27T18:20:15.636 INFO:tasks.ceph.osd.1.smithi166.stderr:/build/ceph-12.2.11-124-g34a20fc/src/common/AsyncReserver.h: In function 'void AsyncReserver<T>::request_reservation(T, Context*, unsigned int, Context*) [with T = spg_t]' thread 7f9a11093700 time 2019-02-27 18:20:15.634984
2019-02-27T18:20:15.636 INFO:tasks.ceph.osd.1.smithi166.stderr:/build/ceph-12.2.11-124-g34a20fc/src/common/AsyncReserver.h: 190: FAILED assert(!queue_pointers.count(item) && !in_progress.count(item))
2019-02-27T18:20:15.639 INFO:tasks.ceph.osd.1.smithi166.stderr: ceph version 12.2.11-124-g34a20fc (34a20fc0d402d777e4edc4b483a93f4d7a97d0d4) luminous (stable)
2019-02-27T18:20:15.639 INFO:tasks.ceph.osd.1.smithi166.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x55b588a6a542]
2019-02-27T18:20:15.639 INFO:tasks.ceph.osd.1.smithi166.stderr: 2: (AsyncReserver<spg_t>::request_reservation(spg_t, Context*, unsigned int, Context*)+0x203) [0x55b5885718e3]
2019-02-27T18:20:15.639 INFO:tasks.ceph.osd.1.smithi166.stderr: 3: (PG::RecoveryState::WaitLocalRecoveryReserved::WaitLocalRecoveryReserved(boost::statechart::state<PG::RecoveryState::WaitLocalRecoveryReserved, PG::RecoveryState::Active, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::my_context)+0x248) [0x55b58853c3c8]
2019-02-27T18:20:15.640 INFO:tasks.ceph.osd.1.smithi166.stderr: 4: (boost::statechart::detail::safe_reaction_result boost::statechart::simple_state<PG::RecoveryState::Clean, PG::RecoveryState::Active, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::transit_impl<PG::RecoveryState::WaitLocalRecoveryReserved, PG::RecoveryState::RecoveryMachine, boost::statechart::detail::no_transition_function>(boost::statechart::detail::no_transition_function const&)+0xaa) [0x55b58857b40a]
2019-02-27T18:20:15.640 INFO:tasks.ceph.osd.1.smithi166.stderr: 5: (boost::statechart::simple_state<PG::RecoveryState::Clean, PG::RecoveryState::Active, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x92) [0x55b58857b682]
2019-02-27T18:20:15.640 INFO:tasks.ceph.osd.1.smithi166.stderr: 6: (boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>::process_event(boost::statechart::event_base const&)+0x69) [0x55b5885519a9]
2019-02-27T18:20:15.641 INFO:tasks.ceph.osd.1.smithi166.stderr: 7: (PG::handle_peering_event(std::shared_ptr<PG::CephPeeringEvt>, PG::RecoveryCtx*)+0x38d) [0x55b58850f39d]
2019-02-27T18:20:15.641 INFO:tasks.ceph.osd.1.smithi166.stderr: 8: (OSD::process_peering_events(std::__cxx11::list<PG*, std::allocator<PG*> > const&, ThreadPool::TPHandle&)+0x29e) [0x55b588453cde]
2019-02-27T18:20:15.641 INFO:tasks.ceph.osd.1.smithi166.stderr: 9: (ThreadPool::BatchWorkQueue<PG>::_void_process(void*, ThreadPool::TPHandle&)+0x27) [0x55b5884c8b47]
2019-02-27T18:20:15.641 INFO:tasks.ceph.osd.1.smithi166.stderr: 10: (ThreadPool::worker(ThreadPool::WorkThread*)+0xdb9) [0x55b588a71449]
2019-02-27T18:20:15.642 INFO:tasks.ceph.osd.1.smithi166.stderr: 11: (ThreadPool::WorkThread::entry()+0x10) [0x55b588a72550]
2019-02-27T18:20:15.642 INFO:tasks.ceph.osd.1.smithi166.stderr: 12: (()+0x76ba) [0x7f9a2c9e56ba]
2019-02-27T18:20:15.642 INFO:tasks.ceph.osd.1.smithi166.stderr: 13: (clone()+0x6d) [0x7f9a2ba5c41d]
2019-02-27T18:20:15.642 INFO:tasks.ceph.osd.1.smithi166.stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.


Related issues: 1 (0 open, 1 closed)

Has duplicate: Ceph - Bug #38516: "AsyncReserver.h: 190: FAILED assert(!queue_pointers.count(item) && !in_progress.count(item))" in rados - Closed (02/28/2019)

#1

Updated by Yuri Weinstein about 5 years ago

  • Has duplicate Bug #38516: "AsyncReserver.h: 190: FAILED assert(!queue_pointers.count(item) && !in_progress.count(item))" in rados added
#2

Updated by Greg Farnum about 5 years ago

  • Project changed from Ceph to RADOS
#3

Updated by Neha Ojha over 4 years ago

  • Status changed from New to 12

/a/nojha-2019-08-26_20:27:46-rados-wip-bluefs-shared-alloc-luminous-2019-08-26-distro-basic-smithi/4255358/

#4

Updated by Neha Ojha over 4 years ago

  • Subject changed from "AsyncReserver.h: 190: FAILED assert(!queue_pointers.count(item) && !in_progress.count(item))" in rados to luminous: "AsyncReserver.h: 190: FAILED assert(!queue_pointers.count(item) && !in_progress.count(item))" in rados
  • Priority changed from Normal to High

/a/nojha-2019-09-05_23:53:20-rados-wip-40769-luminous-distro-basic-smithi/4279855/

#5

Updated by Patrick Donnelly over 4 years ago

  • Status changed from 12 to New
#6

Updated by Neha Ojha over 3 years ago

  • Status changed from New to Rejected

Luminous is EOL.

