Bug #23385

open

osd: master osd crash when pg scrub

Added by rongzhen zhan about 6 years ago. Updated about 6 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Community (user)
Tags:
ARM
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
OSD
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

My Ceph cluster runs on ARM (kernel 4.4.52-armada-17.06.2). I put an object into RADOS, and when the PG is scrubbed with a handle, the master (primary) OSD crashes. The log and gdb debug info are below.

(gdb) bt
#0  0x7f85809c in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_Alloc_hider::_Alloc_hider (__a=..., __dat=<optimized out>, this=<optimized out>)
    at /usr/include/c++/6.3.0/bits/basic_string.h:110
#1  std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string (__str=..., this=<optimized out>) at /usr/include/c++/6.3.0/bits/basic_string.h:399
#2  object_t::object_t (this=<optimized out>) at /usr/src/debug/ceph-src/10.2.3-r0/git/src/include/object.h:32
#3  hobject_t::hobject_t (this=0x8599d190, rhs=...) at /usr/src/debug/ceph-src/10.2.3-r0/git/src/common/hobject.h:97
#4  0x7f91ded8 in std::pair<hobject_t const, ScrubMap::object>::pair<hobject_t const&, 0u>(std::tuple<hobject_t const&>&, std::tuple<>&, std::_Index_tuple<0u>, std::_Index_tuple<>) (
    __tuple2=<synthetic pointer>..., __tuple1=..., this=0x8599d190) at /usr/include/c++/6.3.0/tuple:1586
#5  std::pair<hobject_t const, ScrubMap::object>::pair<hobject_t const&>(std::piecewise_construct_t, std::tuple<hobject_t const&>, std::tuple<>) (__second=..., __first=..., 
    this=0x8599d190) at /usr/include/c++/6.3.0/tuple:1575
#6  __gnu_cxx::new_allocator<std::_Rb_tree_node<std::pair<hobject_t const, ScrubMap::object> > >::construct<std::pair<hobject_t const, ScrubMap::object>, std::piecewise_construct_t const&, std::tuple<hobject_t const&>, std::tuple<> >(std::pair<hobject_t const, ScrubMap::object>*, std::piecewise_construct_t const&, std::tuple<hobject_t const&>&&, std::tuple<>&&) (
    this=<optimized out>, __p=0x8599d190) at /usr/include/c++/6.3.0/ext/new_allocator.h:120
#7  std::allocator_traits<std::allocator<std::_Rb_tree_node<std::pair<hobject_t const, ScrubMap::object> > > >::construct<std::pair<hobject_t const, ScrubMap::object>, std::piecewise_construct_t const&, std::tuple<hobject_t const&>, std::tuple<> >(std::allocator<std::_Rb_tree_node<std::pair<hobject_t const, ScrubMap::object> > >&, std::pair<hobject_t const, ScrubMap::object>*, std::piecewise_construct_t const&, std::tuple<hobject_t const&>&&, std::tuple<>&&) (__a=..., __p=<optimized out>) at /usr/include/c++/6.3.0/bits/alloc_traits.h:455
#8  std::_Rb_tree<hobject_t, std::pair<hobject_t const, ScrubMap::object>, std::_Select1st<std::pair<hobject_t const, ScrubMap::object> >, hobject_t::BitwiseComparator, std::allocator<std::pair<hobject_t const, ScrubMap::object> > >::_M_construct_node<std::piecewise_construct_t const&, std::tuple<hobject_t const&>, std::tuple<> >(std::_Rb_tree_node<std::pair<hobject_t const, ScrubMap::object> >*, std::piecewise_construct_t const&, std::tuple<hobject_t const&>&&, std::tuple<>&&) (this=0x85dd8b38, __node=0x8599d180) at /usr/include/c++/6.3.0/bits/stl_tree.h:543
#9  std::_Rb_tree<hobject_t, std::pair<hobject_t const, ScrubMap::object>, std::_Select1st<std::pair<hobject_t const, ScrubMap::object> >, hobject_t::BitwiseComparator, std::allocator<std::pair<hobject_t const, ScrubMap::object> > >::_M_create_node<std::piecewise_construct_t const&, std::tuple<hobject_t const&>, std::tuple<> >(std::piecewise_construct_t const&, std::tuple<hobject_t const&>&&, std::tuple<>&&) (this=0x85dd8b38) at /usr/include/c++/6.3.0/bits/stl_tree.h:560
#10 std::_Rb_tree<hobject_t, std::pair<hobject_t const, ScrubMap::object>, std::_Select1st<std::pair<hobject_t const, ScrubMap::object> >, hobject_t::BitwiseComparator, std::allocator<std::pair<hobject_t const, ScrubMap::object> > >::_M_emplace_hint_unique<std::piecewise_construct_t const&, std::tuple<hobject_t const&>, std::tuple<> >(std::_Rb_tree_const_iterator<std::pair<hobject_t const, ScrubMap::object> >, std::piecewise_construct_t const&, std::tuple<hobject_t const&>&&, std::tuple<>&&) (this=0x85dd8b38, __pos=..., __args#0=..., 
    __args#1=<unknown type in /usr/bin/ceph-osd, CU 0xdd4ce7, DIE 0x100304a>, __args#2=<unknown type in /usr/bin/ceph-osd, CU 0xdd4ce7, DIE 0x10579ff>)
    at /usr/include/c++/6.3.0/bits/stl_tree.h:2196
#11 0x7fad0498 in std::map<hobject_t, ScrubMap::object, hobject_t::BitwiseComparator, std::allocator<std::pair<hobject_t const, ScrubMap::object> > >::operator[] (__k=..., this=0x85dde254)
    at /usr/include/c++/6.3.0/bits/stl_map.h:483
#12 decode<hobject_t, ScrubMap::object, hobject_t::BitwiseComparator> (p=..., m=...) at /usr/src/debug/ceph-src/10.2.3-r0/git/src/include/encoding.h:660
#13 ScrubMap::decode (this=0x85dde254, bl=..., pool=-7100317060680999916) at /usr/src/debug/ceph-src/10.2.3-r0/git/src/osd/osd_types.cc:5282
#14 0x7f901db8 in PG::sub_op_scrub_map (this=this@entry=0x85ca6000, op=...) at /usr/src/debug/ceph-src/10.2.3-r0/git/src/osd/PG.cc:3485
#15 0x7f94bb78 in ReplicatedPG::do_sub_op (this=this@entry=0x85ca6000, op=...) at /usr/src/debug/ceph-src/10.2.3-r0/git/src/osd/ReplicatedPG.cc:3212
#16 0x7f97101c in ReplicatedPG::do_request (this=0x85ca6000, op=..., handle=...) at /usr/src/debug/ceph-src/10.2.3-r0/git/src/osd/ReplicatedPG.cc:1501
#17 0x7f822e6c in OSD::dequeue_op (this=this@entry=0x85b1a000, pg=..., op=..., handle=...) at /usr/src/debug/ceph-src/10.2.3-r0/git/src/osd/OSD.cc:8815
#18 0x7f82312c in PGQueueable::RunVis::operator() (this=this@entry=0x9d769fb0, op=...) at /usr/src/debug/ceph-src/10.2.3-r0/git/src/osd/OSD.cc:163
#19 0x7f83b664 in boost::detail::variant::invoke_visitor<PGQueueable::RunVis>::internal_visit<std::shared_ptr<OpRequest> > (operand=..., this=<synthetic pointer>)
    at /usr/include/boost/variant/variant.hpp:1046
#20 boost::detail::variant::visitation_impl_invoke_impl<boost::detail::variant::invoke_visitor<PGQueueable::RunVis>, void*, std::shared_ptr<OpRequest> > (storage=0x9d76a134, 
    visitor=<synthetic pointer>...) at /usr/include/boost/variant/detail/visitation_impl.hpp:114
#21 boost::detail::variant::visitation_impl_invoke<boost::detail::variant::invoke_visitor<PGQueueable::RunVis>, void*, std::shared_ptr<OpRequest>, boost::variant<std::shared_ptr<OpRequest>, PGSnapTrim, PGScrub>::has_fallback_type_> (internal_which=<optimized out>, t=0x0, storage=0x9d76a134, visitor=<synthetic pointer>...)
    at /usr/include/boost/variant/detail/visitation_impl.hpp:157
#22 boost::detail::variant::visitation_impl<mpl_::int_<0>, boost::detail::variant::visitation_impl_step<boost::mpl::l_iter<boost::mpl::l_item<mpl_::long_<3l>, std::shared_ptr<OpRequest>, boost::mpl::l_item<mpl_::long_<2l>, PGSnapTrim, boost::mpl::l_item<mpl_::long_<1l>, PGScrub, boost::mpl::l_end> > > >, boost::mpl::l_iter<boost::mpl::l_end> >, boost::detail::variant::invoke_visitor<PGQueueable::RunVis>, void*, boost::variant<std::shared_ptr<OpRequest>, PGSnapTrim, PGScrub>::has_fallback_type_> (no_backup_flag=..., storage=0x9d76a134, 
    visitor=<synthetic pointer>..., logical_which=<optimized out>, internal_which=<optimized out>) at /usr/include/boost/variant/detail/visitation_impl.hpp:238
#23 boost::variant<std::shared_ptr<OpRequest>, PGSnapTrim, PGScrub>::internal_apply_visitor_impl<boost::detail::variant::invoke_visitor<PGQueueable::RunVis>, void*> (storage=0x9d76a134, 
    visitor=<synthetic pointer>..., logical_which=<optimized out>, internal_which=<optimized out>) at /usr/include/boost/variant/variant.hpp:2389
#24 boost::variant<std::shared_ptr<OpRequest>, PGSnapTrim, PGScrub>::internal_apply_visitor<boost::detail::variant::invoke_visitor<PGQueueable::RunVis> > (visitor=<synthetic pointer>..., 
    this=0x9d76a130) at /usr/include/boost/variant/variant.hpp:2400
#25 boost::variant<std::shared_ptr<OpRequest>, PGSnapTrim, PGScrub>::apply_visitor<PGQueueable::RunVis> (visitor=..., this=0x9d76a130) at /usr/include/boost/variant/variant.hpp:2423
#26 boost::apply_visitor<PGQueueable::RunVis, boost::variant<std::shared_ptr<OpRequest>, PGSnapTrim, PGScrub> > (visitable=..., visitor=...)
    at /usr/include/boost/variant/detail/apply_visitor_unary.hpp:70
#27 PGQueueable::run (handle=..., pg=..., osd=<optimized out>, this=0x9d76a130) at /usr/src/debug/ceph-src/10.2.3-r0/git/src/osd/OSD.h:392
#28 OSD::ShardedOpWQ::_process (this=0x85b1aebc, thread_index=<optimized out>, hb=<optimized out>) at /usr/src/debug/ceph-src/10.2.3-r0/git/src/osd/OSD.cc:8696
#29 0x7fea0da8 in ShardedThreadPool::shardedthreadpool_worker (this=0x85b1a5a8, thread_index=2146045352) at /usr/src/debug/ceph-src/10.2.3-r0/git/src/common/WorkQueue.cc:340
#30 0x7fea3a18 in ShardedThreadPool::WorkThreadSharded::entry (this=<optimized out>) at /usr/src/debug/ceph-src/10.2.3-r0/git/src/common/WorkQueue.h:684
#31 0xb6c5cf38 in start_thread (arg=0x9d76aa30) at /usr/src/debug/glibc/2.25-r0/git/nptl/pthread_create.c:458
#32 0xb6904298 in ?? () at ../sysdeps/unix/sysv/linux/arm/clone.S:76 from /lib/libc.so.6
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
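For context on frames #11–#13: the crash happens while `ScrubMap::decode` runs the generic length-prefixed map decode from `include/encoding.h`, which reads an element count and then, for each element, decodes a key and default-constructs the mapped value via `std::map::operator[]`. The sketch below is a simplified stand-in (not Ceph's actual code; the buffer layout and `decode_map` name are illustrative) showing how a corrupt count or length field in the incoming buffer drives the node construction seen in frames #6–#10 off the rails unless every read is bounds-checked:

```cpp
// Minimal sketch of a length-prefixed map decode, mirroring the pattern
// decode(n, p); while (n--) { decode(k, p); decode(m[k], p); } from
// include/encoding.h. Layout assumed here (illustrative, little-endian):
// [u32 count] then per element [u32 key][u32 len][len bytes of value].
#include <cassert>
#include <cstdint>
#include <cstring>
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

std::map<uint32_t, std::string> decode_map(const std::vector<uint8_t>& buf) {
  size_t off = 0;
  auto read_u32 = [&](uint32_t& v) {
    if (off + 4 > buf.size())
      throw std::runtime_error("buffer underrun");  // corrupt count/length
    std::memcpy(&v, buf.data() + off, 4);
    off += 4;
  };
  std::map<uint32_t, std::string> m;
  uint32_t n;
  read_u32(n);
  while (n--) {
    uint32_t key, len;
    read_u32(key);
    read_u32(len);
    if (off + len > buf.size())
      throw std::runtime_error("buffer underrun");
    // operator[] default-constructs the mapped value before assignment,
    // just like decode(m[k], p) in frame #11 of the backtrace.
    m[key].assign(reinterpret_cast<const char*>(buf.data() + off), len);
    off += len;
  }
  return m;
}
```

A well-formed buffer decodes normally; a truncated or bit-flipped one throws instead of corrupting memory, which is the kind of defensive check the crash site relies on the caller to have done.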

ceph version  ()
 1: (()+0x7a7de8) [0x7fd1dde8]
 2: (__default_sa_restorer()+0) [0xb68db3c0]
 3: (()+0x24309c) [0x7f7b909c]
 4: (std::_Rb_tree_iterator<std::pair<hobject_t const, ScrubMap::object> > std::_Rb_tree<hobject_t, std::pair<hobject_t const, ScrubMap::object>, std::_Select1st<std::pair<hobject_t const, ScrubMap::object> >, hobject_t::BitwiseComparator, std::allocator<std::pair<hobject_t const, ScrubMap::object> > >::_M_emplace_hint_unique<std::piecewise_construct_t const&, std::tuple<hobject_t const&>, std::tuple<> >(std::_Rb_tree_const_iterator<std::pair<hobject_t const, ScrubMap::object> >, std::piecewise_construct_t const&, std::tuple<hobject_t const&>&&, std::tuple<>&&)+0x48) [0x7f87eed8]
 5: (ScrubMap::decode(ceph::buffer::list::iterator&, long long)+0x2b8) [0x7fa31498]
 6: (PG::sub_op_scrub_map(std::shared_ptr<OpRequest>)+0x1e8) [0x7f862db8]
 7: (ReplicatedPG::do_sub_op(std::shared_ptr<OpRequest>)+0x274) [0x7f8acb78]
 8: (ReplicatedPG::do_request(std::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x518) [0x7f8d201c]
 9: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x3c4) [0x7f783e6c]
 10: (PGQueueable::RunVis::operator()(std::shared_ptr<OpRequest>&)+0x68) [0x7f78412c]
 11: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x5d4) [0x7f79c664]
 12: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x764) [0x7fe01da8]
 13: (()+0x88ea18) [0x7fe04a18]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

     0> 2018-03-16 11:26:39.186442 95fe5a30  2 -- 172.16.10.31:6800/6528 >> 172.16.10.35:6789/0 pipe(0x86236000 sd=23 :41154 s=2 pgs=174 cs=1 l=1 c=0x8631b7c0).reader got KEEPALIVE_ACK
--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 client
   0/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 journal
   0/ 5 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/10 civetweb
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
   0/ 0 refs
   1/ 5 xio
   1/ 5 compressor
   1/ 5 newstore
   1/ 5 bluestore
   1/ 5 bluefs
   1/ 3 bdev
   1/ 5 kstore
   4/ 5 rocksdb
   4/ 5 leveldb
   1/ 5 kinetic
   1/ 5 fuse
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent     10000
  max_new         1000
  log_file /var/log/ceph/ceph-osd.1.log

Can someone help me?

Actions #1

Updated by rongzhen zhan about 6 years ago

The Ceph version is 10.2.3.

Actions #2

Updated by Patrick Donnelly about 6 years ago

  • Project changed from Ceph to RADOS
  • Subject changed from master osd crash when pg scrub to osd: master osd crash when pg scrub
  • Category deleted (OSD)
  • Source set to Community (user)
  • Tags set to ARM
  • Release deleted (jewel)
  • Component(RADOS) OSD added
