Bug #22916 (closed): OSD crashing in peering

Added by Artemy Kapitula about 6 years ago. Updated about 6 years ago.

Status: Duplicate
Priority: Normal
Assignee: -
Category: -
Target version:
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

A BlueStore OSD crashed with the following stack trace:

Feb 03 23:46:37 host1 ceph-osd[27780]: /root/rpmbuild/BUILD/ceph-12.2.1/src/osd/PGLog.h: In function 'static void PGLog::_merge_object_divergent_entries(const PGLog::IndexedLog&, co
Feb 03 23:46:37 host1 ceph-osd[27780]: /root/rpmbuild/BUILD/ceph-12.2.1/src/osd/PGLog.h: 888: FAILED assert(i->prior_version == last || i->is_error())
Feb 03 23:46:38 host1 ceph-osd[27780]: ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)
Feb 03 23:46:38 host1 ceph-osd[27780]: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x55828ebcbe90]
Feb 03 23:46:38 host1 ceph-osd[27780]: 2: (void PGLog::_merge_object_divergent_entries<pg_missing_set<false> >(PGLog::IndexedLog const&, hobject_t const&, std::list<pg_log_entry_t, 
Feb 03 23:46:38 host1 ceph-osd[27780]: 3: (PGLog::proc_replica_log(pg_info_t&, pg_log_t const&, pg_missing_set<false>&, pg_shard_t) const+0x5d1) [0x55828e774ba1]
Feb 03 23:46:38 host1 ceph-osd[27780]: 4: (PG::proc_replica_log(pg_info_t&, pg_log_t const&, pg_missing_set<false>&, pg_shard_t)+0x80) [0x55828e6ed920]
Feb 03 23:46:38 host1 ceph-osd[27780]: 5: (PG::RecoveryState::GetMissing::react(PG::MLogRec const&)+0x77) [0x55828e6eddd7]
Feb 03 23:46:38 host1 ceph-osd[27780]: 6: (boost::statechart::simple_state<PG::RecoveryState::GetMissing, PG::RecoveryState::Peering, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, 
Feb 03 23:46:38 host1 ceph-osd[27780]: 7: (boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::
Feb 03 23:46:38 host1 ceph-osd[27780]: 8: (PG::handle_peering_event(std::shared_ptr<PG::CephPeeringEvt>, PG::RecoveryCtx*)+0x1ed) [0x55828e70f64d]
Feb 03 23:46:38 host1 ceph-osd[27780]: 9: (OSD::process_peering_events(std::list<PG*, std::allocator<PG*> > const&, ThreadPool::TPHandle&)+0x22a) [0x55828e65c91a]
Feb 03 23:46:38 host1 ceph-osd[27780]: 10: (OSD::PeeringWQ::_process(std::list<PG*, std::allocator<PG*> > const&, ThreadPool::TPHandle&)+0x17) [0x55828e6bdd17]
Feb 03 23:46:38 host1 ceph-osd[27780]: 11: (ThreadPool::worker(ThreadPool::WorkThread*)+0xa8e) [0x55828ebd2a7e]
Feb 03 23:46:38 host1 ceph-osd[27780]: 12: (ThreadPool::WorkThread::entry()+0x10) [0x55828ebd3960]
Feb 03 23:46:38 host1 ceph-osd[27780]: 13: (()+0x7df5) [0x7fa62de13df5]
Feb 03 23:46:38 host1 ceph-osd[27780]: 14: (clone()+0x6d) [0x7fa62cf071ad]

Related issues 1 (0 open, 1 closed)

Has duplicate RADOS - Bug #21287: 1 PG down, OSD fails with "FAILED assert(i->prior_version == last || i->is_error())" (Duplicate)

#1

Updated by Greg Farnum about 6 years ago

  • Project changed from Ceph to RADOS
  • Subject changed from OSD crushing in peering to OSD crashing in peering
#2

Updated by Kefu Chai about 6 years ago

  • Description updated (diff)
#3

Updated by Chang Liu about 6 years ago

  • Has duplicate Bug #21287: 1 PG down, OSD fails with "FAILED assert(i->prior_version == last || i->is_error())" added
#4

Updated by Josh Durgin about 6 years ago

  • Status changed from New to Duplicate