Bug #21211

12.2.0, cephfs (meta replica 2, data ec 2+1), ceph-osd coredump

Added by Yong Wang over 6 years ago. Updated over 6 years ago.

Status: Need More Info
Priority: Normal
Assignee: -
Category: -
Target version:
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS): OSD
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
1: (()+0xa23b21) [0x7fe4a148bb21]
2: (()+0xf370) [0x7fe49e098370]
3: (gsignal()+0x37) [0x7fe49d0c21d7]
4: (abort()+0x148) [0x7fe49d0c38c8]
5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x284) [0x7fe4a14ca684]
6: (PG::start_peering_interval(std::shared_ptr<OSDMap const>, std::vector<int, std::allocator<int> > const&, int, std::vector<int, std::allocator<int> > const&, int, ObjectStore::Transaction*)+0x1517) [0x7fe4a1022767]
7: (PG::RecoveryState::Reset::react(PG::AdvMap const&)+0x4f1) [0x7fe4a1022df1]
8: (boost::statechart::simple_state<PG::RecoveryState::Reset, PG::RecoveryState::RecoveryMachine, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x21c) [0x7fe4a1067d5c]
9: (boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>::send_event(boost::statechart::event_base const&)+0x6b) [0x7fe4a1043ebb]
10: (boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>::process_queued_events()+0x91) [0x7fe4a1044011]
11: (PG::handle_advance_map(std::shared_ptr<OSDMap const>, std::shared_ptr<OSDMap const>, std::vector<int, std::allocator<int> >&, int, std::vector<int, std::allocator<int> >&, int, PG::RecoveryCtx*)+0x49c) [0x7fe4a100f0ec]
12: (OSD::advance_pg(unsigned int, PG*, ThreadPool::TPHandle&, PG::RecoveryCtx*, std::set<boost::intrusive_ptr<PG>, std::less<boost::intrusive_ptr<PG> >, std::allocator<boost::intrusive_ptr<PG> > >*)+0x2cf) [0x7fe4a0f5b25f]
13: (OSD::process_peering_events(std::list<PG*, std::allocator<PG*> > const&, ThreadPool::TPHandle&)+0x173) [0x7fe4a0f5bcf3]
14: (OSD::PeeringWQ::_process(std::list<PG*, std::allocator<PG*> > const&, ThreadPool::TPHandle&)+0x17) [0x7fe4a0fbd087]
15: (ThreadPool::worker(ThreadPool::WorkThread*)+0xa8e) [0x7fe4a14d10fe]
16: (ThreadPool::WorkThread::entry()+0x10) [0x7fe4a14d1fe0]
17: (()+0x7dc5) [0x7fe49e090dc5]
18: (clone()+0x6d) [0x7fe49d18473d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
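
For anyone who hits this and can share debugging data, here is a minimal sketch of collecting what the NOTE above asks for. The binary path assumes a standard package install and the core file path is a placeholder; adjust both for your system:

# assumes debug symbols are installed (e.g. the ceph-debuginfo package)
objdump -rdS /usr/bin/ceph-osd > ceph-osd.objdump
# or resolve the trace straight from the core file (core path is a placeholder)
gdb /usr/bin/ceph-osd /path/to/core -ex 'thread apply all bt' -ex 'quit'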

Actions #1

Updated by Yong Wang over 6 years ago

12.2.0
Created a cephfs:
meta pool: replicated, size 2
data pool: erasure-coded, 2+1

ceph-osd coredumps after restarting it.

The ceph-osd and ceph-mon log files grow very fast.

A lot of OSDs are down.
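
For reference, a minimal sketch of how a layout like the one described might be created on 12.2.0. The pool names, PG counts, and profile name below are illustrative, not taken from this report:

# illustrative pool/profile names and PG counts
ceph osd pool create cephfs_metadata 64 replicated
ceph osd pool set cephfs_metadata size 2                 # meta: replica 2
ceph osd erasure-code-profile set ec21 k=2 m=1           # data: ec 2+1
ceph osd pool create cephfs_data 64 erasure ec21
ceph osd pool set cephfs_data allow_ec_overwrites true   # EC data pools need this (BlueStore only)
ceph fs new cephfs cephfs_metadata cephfs_data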

Actions #2

Updated by Patrick Donnelly over 6 years ago

  • Project changed from Ceph to RADOS
  • Category deleted (OSD)
  • Source set to Community (user)
  • Component(RADOS) OSD added

Actions #3

Updated by Greg Farnum over 6 years ago

  • Status changed from New to Need More Info

We can't do anything without logs and a cluster description here. Was this on bluestore?
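
For anyone able to follow up, a sketch of gathering what is being asked for here; the commands are standard luminous CLI, and the logging levels are just a common starting point for peering issues:

# cluster description
ceph -s
ceph osd tree
ceph osd pool ls detail
# raise OSD logging before reproducing the crash (verbose; revert afterwards)
ceph tell osd.* injectargs '--debug_osd 20 --debug_ms 1'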
