Bug #8863 (closed): osd: second reservation rejection -> crash

Added by shaojun ruan almost 10 years ago. Updated over 9 years ago.

Status: Resolved
Priority: Urgent
Assignee: -
Category: OSD
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport: firefly
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I found that bug#7624 was supposed to resolve this problem (http://tracker.ceph.com/issues/7642), and the OSDMonitor source code in 0.80.4 already includes that fix.
But the crash still occurs...
-----------------------------------------------------------------------------------
ceph version 0.80.4 (7c241cfaa6c8c068bc9da8578ca00b9f4fc7567f)
1: /usr/bin/ceph-osd() [0x98aa3a]
2: (()+0xfcb0) [0x7ffe1b62bcb0]
3: (gsignal()+0x35) [0x7ffe1a216445]
4: (abort()+0x17b) [0x7ffe1a219bab]
5: (__gnu_cxx::__verbose_terminate_handler()+0x11d) [0x7ffe1ab6469d]
6: (()+0xb5846) [0x7ffe1ab62846]
7: (()+0xb5873) [0x7ffe1ab62873]
8: (()+0xb596e) [0x7ffe1ab6296e]
9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1df) [0xa6adcf]
10: (PG::RecoveryState::Crashed::Crashed(boost::statechart::state<PG::RecoveryState::Crashed, PG::RecoveryState::RecoveryMachine, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::my_context)+0xdf) [0x74706f]
11: /usr/bin/ceph-osd() [0x772de6]
12: (boost::statechart::detail::reaction_result boost::statechart::simple_state<PG::RecoveryState::Started, PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Start, (boost::statechart::history_mode)0>::local_react_impl_non_empty::local_react_impl<boost::mpl::list2<boost::statechart::custom_reaction<PG::FlushedEvt>, boost::statechart::transition<boost::statechart::event_base, PG::RecoveryState::Crashed, boost::statechart::detail::no_context<boost::statechart::event_base>, &(boost::statechart::detail::no_context<boost::statechart::event_base>::no_function(boost::statechart::event_base const&))> >, boost::statechart::simple_state<PG::RecoveryState::Started, PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Start, (boost::statechart::history_mode)0> >(boost::statechart::simple_state<PG::RecoveryState::Started, PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Start, (boost::statechart::history_mode)0>&, boost::statechart::event_base const&, void const*)+0xc3) [0x79a6e3]
13: (boost::statechart::simple_state<PG::RecoveryState::Started, PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Start, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0xc0) [0x79a840]
14: (boost::statechart::simple_state<PG::RecoveryState::ReplicaActive, PG::RecoveryState::Started, PG::RecoveryState::RepNotRecovering, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0xc9) [0x79b3b9]
15: (boost::statechart::simple_state<PG::RecoveryState::RepRecovering, PG::RecoveryState::ReplicaActive, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x10b) [0x79109b]
16: (boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>::send_event(boost::statechart::event_base const&)+0x5b) [0x77f8eb]
17: (boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>::process_event(boost::statechart::event_base const&)+0x11) [0x77fc41]
18: (PG::handle_peering_event(std::tr1::shared_ptr<PG::CephPeeringEvt>, PG::RecoveryCtx*)+0x303) [0x736e73]
19: (OSD::process_peering_events(std::list<PG*, std::allocator<PG*> > const&, ThreadPool::TPHandle&)+0x2c6) [0x64d1f6]
20: (OSD::PeeringWQ::_process(std::list<PG*, std::allocator<PG*> > const&, ThreadPool::TPHandle&)+0x12) [0x69c782]
21: (ThreadPool::worker(ThreadPool::WorkThread*)+0x4e6) [0xa5b5e6]
22: (ThreadPool::WorkThread::entry()+0x10) [0xa5d3f0]
23: (()+0x7e9a) [0x7ffe1b623e9a]
24: (clone()+0x6d) [0x7ffe1a2d24bd]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
0/ 5 none
0/ 1 lockdep
0/ 1 context
1/ 1 crush
1/ 5 mds
1/ 5 mds_balancer
1/ 5 mds_locker
1/ 5 mds_log
1/ 5 mds_log_expire
1/ 5 mds_migrator
0/ 1 buffer
0/ 1 timer
0/ 1 filer
0/ 1 striper
0/ 1 objecter
0/ 5 rados
0/ 5 rbd
0/ 5 journaler
0/ 5 objectcacher
0/ 5 client
0/ 5 osd
0/ 5 optracker
0/ 5 objclass
1/ 3 filestore
1/ 3 keyvaluestore
1/ 3 journal
0/ 5 ms
1/ 5 mon
0/10 monc
1/ 5 paxos
0/ 5 tp
1/ 5 auth
1/ 5 crypto
1/ 1 finisher
1/ 5 heartbeatmap
1/ 5 perfcounter
1/ 5 rgw
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 10000
max_new 1000
log_file /var/log/ceph/ceph-osd.22.log
--- end dump of recent events ---
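
For context on the backtrace: frames 12-15 show the boost::statechart peering machine forwarding an event that the replica's current state (RepRecovering, nested under ReplicaActive/Started) has no reaction for, so it falls through to Started's catch-all transition<event_base, Crashed>, and the Crashed constructor (frame 10) hits ceph_assert (frame 9). Below is a minimal standalone sketch of that mechanism only; the nesting is simplified (ReplicaActive is omitted), RecoveryStarted is a made-up event, and this is illustrative, not the real PG::RecoveryState code.

#include <boost/statechart/state_machine.hpp>
#include <boost/statechart/simple_state.hpp>
#include <boost/statechart/event.hpp>
#include <boost/statechart/event_base.hpp>
#include <boost/statechart/transition.hpp>
#include <boost/mpl/list.hpp>
#include <iostream>

namespace sc = boost::statechart;
namespace mpl = boost::mpl;

// Hypothetical events standing in for the real peering events.
struct RemoteReservationRejected : sc::event<RemoteReservationRejected> {};
struct RecoveryStarted : sc::event<RecoveryStarted> {};

struct Started;
struct Crashed;
struct RepNotRecovering;
struct RepRecovering;

struct Machine : sc::state_machine<Machine, Started> {};

// Crashed: in the real PG code the constructor calls ceph_assert
// ("we got a bad state machine event"); here we just print.
struct Crashed : sc::simple_state<Crashed, Machine> {
  Crashed() { std::cout << "entered Crashed -> ceph_assert fires here\n"; }
};

// Started: catch-all reaction -- any event that no inner state handled
// matches sc::event_base and sends the machine to Crashed.
struct Started : sc::simple_state<Started, Machine, RepNotRecovering> {
  typedef mpl::list<
    sc::transition<sc::event_base, Crashed> > reactions;
};

// RepNotRecovering: the first rejection is handled here (self-transition as a
// stand-in for discarding it); a grant moves the replica into RepRecovering.
struct RepNotRecovering : sc::simple_state<RepNotRecovering, Started> {
  typedef mpl::list<
    sc::transition<RemoteReservationRejected, RepNotRecovering>,
    sc::transition<RecoveryStarted, RepRecovering> > reactions;
};

// RepRecovering: no reaction for RemoteReservationRejected, so a second
// (duplicate) rejection escapes to Started's catch-all and enters Crashed.
struct RepRecovering : sc::simple_state<RepRecovering, Started> {};

int main() {
  Machine m;
  m.initiate();                                  // Started -> RepNotRecovering
  m.process_event(RemoteReservationRejected());  // first rejection: handled
  m.process_event(RecoveryStarted());            // now in RepRecovering
  m.process_event(RemoteReservationRejected());  // second rejection: Crashed
  return 0;
}

Built against Boost, the last process_event() prints the "entered Crashed" line, mirroring how a duplicate reservation rejection reaches Crashed in the real state machine.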


Files

ceph-osd.15.log.gz (252 KB) - shaojun ruan, 07/28/2014 12:37 AM
ceph-mon.a.log.gz (1.93 MB) - shaojun ruan, 07/28/2014 12:37 AM
ceph-osd.21.log._ (424 KB) - Sahana Lokeshappa, 09/17/2014 10:52 PM
health (24.1 KB) - Sahana Lokeshappa, 09/18/2014 02:08 AM