https://tracker.ceph.com/
https://tracker.ceph.com/favicon.ico
2021-10-21T09:37:04Z
Ceph
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=204687
2021-10-21T09:37:04Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Related to</strong> <i><a class="issue tracker-1 status-10 priority-6 priority-high2 closed" href="/issues/50788">Bug #50788</a>: crash in BlueStore::Onode::put()</i> added</li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=204689
2021-10-21T09:39:41Z
Igor Fedotov
igor.fedotov@croit.io
<ul></ul><p>Dan van der Ster wrote:</p>
<blockquote>
<p>We've just seen this crash in the wild running 15.2.14. Maybe a dup of <a class="issue tracker-1 status-10 priority-6 priority-high2 closed" title="Bug: crash in BlueStore::Onode::put() (Duplicate)" href="https://tracker.ceph.com/issues/50788">#50788</a>?</p>
</blockquote>
<p>I'm pretty sure it is...</p>
<p>Are there any indications of a recent <abbr title="placement group">PG</abbr> split?</p>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=204690
2021-10-21T09:43:53Z
Dan van der Ster
<ul></ul><p>Igor Fedotov wrote:</p>
<blockquote>
<p>Dan van der Ster wrote:</p>
<blockquote>
<p>We've just seen this crash in the wild running 15.2.14. Maybe a dup of <a class="issue tracker-1 status-10 priority-6 priority-high2 closed" title="Bug: crash in BlueStore::Onode::put() (Duplicate)" href="https://tracker.ceph.com/issues/50788">#50788</a>?</p>
</blockquote>
<p>I'm pretty sure it is...</p>
<p>Are there any indications of a recent <abbr title="placement group">PG</abbr> split?</p>
</blockquote>
<p>Not recently AFAIK... we have nopgchange set on all the pools.</p>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=204691
2021-10-21T09:45:46Z
Dan van der Ster
<ul></ul><p>More context: the cluster was upgraded from 14.2.20 to 15.2.14 two weeks ago. We had never seen this before today, and so far it has happened only once, on this one OSD.</p>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=204694
2021-10-21T12:16:36Z
Dan van der Ster
<ul></ul><p>In frame 7 I can print the Onode. Some of the values look quite strange (but I don't know if that's normal):</p>
<pre>
(gdb) f
#7 ~intrusive_ptr (this=0x55aa49c74c20, __in_chrg=<optimized out>)
at /usr/src/debug/ceph-15.2.14/build/boost/include/boost/smart_ptr/intrusive_ptr.hpp:98
98 if( px != 0 ) intrusive_ptr_release( px );
(gdb) list
93 if( px != 0 ) intrusive_ptr_add_ref( px );
94 }
95
96 ~intrusive_ptr()
97 {
98 if( px != 0 ) intrusive_ptr_release( px );
99 }
100
101 #if !defined(BOOST_NO_MEMBER_TEMPLATES) || defined(BOOST_MSVC6_MEMBER_TEMPLATES)
102
(gdb) p px
$11 = (BlueStore::Onode *) 0x55aa7ea2b440
(gdb) p *px
$12 = {nref = {<std::__atomic_base<int>> = {static _S_alignment = 4, _M_i = 1024138560},
static is_always_lock_free = true}, c = 0x200, oid = {hobj = {static POOL_META = -1,
static POOL_TEMP_START = -2, oid = {
name = <error reading variable: Cannot access memory at address 0x55aaffffffe7>},
snap = {val = 8295752894954156584}, hash = 543712117, max = 102,
nibblewise_key_cache = 544370464, hash_reverse_bits = 1701996900, pool = 521610949731,
nspace = "cta-cristina", key = ""}, generation = 18446744073709551615, shard_id = {
id = -1 '\377', static NO_SHARD = {id = -1 '\377',
static NO_SHARD = <same as static member of an already seen type>}}, max = false,
static NO_GEN = 18446744073709551615}, key = "",
lru_item = {<boost::intrusive::generic_hook<(boost::intrusive::algo_types)0, boost::intrusive::list_node_traits<void*>, boost::intrusive::member_tag, (boost::intrusive::link_mode_type)1, (boost::intrusive::base_hook_type)0>> = {<boost::intrusive::list_node<void*>> = {next_ = 0x0,
prev_ = 0x0}, <boost::intrusive::hook_tags_definer<boost::intrusive::generic_hook<(boost::intrusive::algo_types)0, boost::intrusive::list_node_traits<void*>, boost::intrusive::member_tag, (boost::intrusive::link_mode_type)1, (boost::intrusive::base_hook_type)0>, 0>> = {<No data fields>}, <No data fields>}, <No data fields>}, onode = {nid = 0, size = 0,
attrs = std::map with 0 elements,
extent_map_shards = std::vector of length 0, capacity 0, expected_object_size = 0,
expected_write_size = 0, alloc_hint_flags = 0, flags = 0 '\000'}, exists = false,
cached = false, pinned = {_M_base = {static _S_alignment = 1, _M_i = false},
static is_always_lock_free = true}, extent_map = {onode = 0x55aa7ea2b440,
extent_map = {<boost::intrusive::set_impl<boost::intrusive::bhtraits<BlueStore::Extent, boost::intrusive::rbtree_node_traits<void*, true>, (boost::intrusive::link_mode_type)1, boost::intrusive::dft_tag, 3>, void, void, unsigned long, true, void>> = {<boost::intrusive::bstree_impl<boost::intrusive::bhtraits<BlueStore::Extent, boost::intrusive::rbtree_node_traits<void*, true>, (boost::intrusive::link_mode_type)1, boost::intrusive::dft_tag, 3>, void, void, unsigned long, true, (boost::intrusive::algo_types)5, void>> = {<boost::intrusive::bstbase<boost::intrusive::bhtraits<BlueStore::Extent, boost::intrusive::rbtree_node_traits<void*, true>, (boost::intrusive::link_mode_type)1, boost::intrusive::dft_tag, 3>, void, void, true, unsigned long, (boost::intrusive::algo_types)5, void>> = {<boost::intrusive::bstbase_hack<boost::intrusive::bhtraits<BlueStore::Extent, boost::intrusive::rbtree_node_traits<void*, true>, (boost::intrusive::link_mode_type)1, boost::intrusive::dft_tag, 3>, void, void, true, unsigned long, (boost::intrusive::algo_types)5, void>> = {<boost::intrusive::detail::size_holder<true, unsigned long, void>> = {
static constant_time_size = <optimized out>,
size_ = 0}, <boost::intrusive::bstbase2<boost::intrusive::bhtraits<BlueStore::Extent, boost::intrusive::rbtree_node_traits<void*, true>, (boost::intrusive::link_mode_type)1, boost::intrusive::dft_tag, 3>, void, void, (boost::intrusive::algo_types)5, void>> = {<boost::intrusive::detail::ebo_functor_holder<boost::intrusive::tree_value_compare<BlueStore::Extent*, std::less<BlueStore::Extent>, boost::move_detail::identity<BlueStore::Extent>, bool, true>, void, false>> = {<boost::intrusive::tree_value_compare<BlueStore::Extent*, std::less<BlueStore::Extent>, boost::move_detail::identity<BlueStore::Extent>, bool, true>> = {<boost::intrusive::detail::ebo_functor_holder<std::less<BlueStore::Extent>, void, false>> = {<std::less<BlueStore::Extent>> = {<std::binary_function<BlueStore::Extent, BlueStore::Extent, bool>> = {<No data fields>}, <No data fields>}, <No data fields>}, <No data fields>}, <No data fields>}, <boost::intrusive::bstbase3<boost::intrusive::bhtraits<BlueStore::Extent, boost::intrusive::rbtree_node_traits<void*, true>, (boost::intrusive::link_mode_type)1, boost::intrusive::dft_tag, 3>, (boost::intrusive::algo_types)5, void>> = {static safemode_or_autounlink = <optimized out>,
static stateful_value_traits = <optimized out>,
static has_container_from_iterator = <optimized out>,
holder = {<boost::intrusive::bhtraits<BlueStore::Extent, boost::intrusive::rbtree_node_traits<void*, true>, (boost::intrusive::link_mode_type)1, boost::intrusive::dft_tag, 3>> = {<boost::intrusive::bhtraits_base<BlueStore::Extent, boost::intrusive::compact_rbtree_node<void*>*, boost::intrusive::dft_tag, 3>> = {<No data fields>},
static link_mode = boost::intrusive::safe_link},
root = {<boost::intrusive::compact_rbtree_node<void*>> = {parent_ = 0x0,
left_ = 0x55aa7ea2b540,
right_ = 0x55aa7ea2b540}, <No data fields>}}}, <No data fields>}, <No data fields>}, <No data fields>}, static constant_time_size = true,
static stateful_value_traits = <optimized out>,
static safemode_or_autounlink = true},
static constant_time_size = true}, <No data fields>},
spanning_blob_map = std::map with 0 elements,
shards = std::vector of length 0, capacity 0, inline_bl = {_buffers = {_root = {
next = 0x55aa7ea2b5c0}, _tail = 0x55aa7ea2b5c0},
_carriage = 0x55a9f17a8d90 <ceph::buffer::v15_2_0::list::always_empty_bptr>, _len = 0,
_num = 0, static always_empty_bptr = {_raw = 0x0, _off = 0, _len = 0}},
needs_reshard_begin = 0, needs_reshard_end = 0},
flushing_count = {<std::__atomic_base<int>> = {static _S_alignment = 4, _M_i = 0},
static is_always_lock_free = true}, waiting_count = {<std::__atomic_base<int>> = {
static _S_alignment = 4, _M_i = 0}, static is_always_lock_free = true},
flush_lock = {<std::__mutex_base> = {_M_mutex = {__data = {__lock = 0, __count = 0,
__owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {
__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>,
__align = 0}}, <No data fields>}, flush_cond = {_M_cond = {__data = {__lock = 1,
__futex = 0, __total_seq = 18446744073709551615, __wakeup_seq = 0, __woken_seq = 0,
__mutex = 0x0, __nwaiters = 0, __broadcast_seq = 0},
__size = "\001\000\000\000\000\000\000\000\377\377\377\377\377\377\377\377", '\000' <repeats 31 times>, __align = 1}}}
(gdb)
</pre>
<p>E.g. down in frame 5, <code>c</code> has the address 0x200?!<br /><pre>
(gdb) f
#5 BlueStore::Onode::put (this=0x55aa7ea2b440)
at /usr/src/debug/ceph-15.2.14/src/os/bluestore/BlueStore.cc:3588
3588 ocs->lock.lock();
(gdb) list
3583 ocs->lock.lock();
3584 // It is possible that during waiting split_cache moved us to different OnodeCacheShard.
3585 while (ocs != c->get_onode_cache()) {
3586 ocs->lock.unlock();
3587 ocs = c->get_onode_cache();
3588 ocs->lock.lock();
3589 }
3590 bool need_unpin = pinned;
3591 pinned = pinned && nref > 2; // intentionally use > not >= as we have
3592 // +1 due to pinned state
(gdb) p c
$16 = (BlueStore::Collection *) 0x200
(gdb) p *c
Cannot access memory at address 0x200
</pre></p>
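<p>The relock loop quoted above (BlueStore.cc:3583-3589) only works while both the Onode and its Collection <code>c</code> remain alive; with <code>c</code> already reduced to the garbage pointer 0x200, the very first <code>c-&gt;get_onode_cache()</code> call dereferences freed memory. A minimal, self-contained model of that lock-then-revalidate idiom (hypothetical <code>Shard</code>/<code>Cache</code> types, not the real Ceph classes):</p>
<pre>
#include <array>
#include <atomic>
#include <cassert>
#include <mutex>

// Hypothetical model of the loop in BlueStore::Onode::put(): lock the
// shard we believe owns the entry, then re-check ownership under the
// lock, because a concurrent split may have moved the entry meanwhile.
struct Shard {
    std::mutex lock;
};

struct Cache {
    std::array<Shard, 2> shards;
    std::atomic<int> home{0};  // index of the shard that currently owns the entry

    // Returns the home shard with its lock held. May loop if `home`
    // changed between reading it and acquiring the lock.
    Shard* lock_home() {
        Shard* s = &shards[home.load()];
        s->lock.lock();
        while (s != &shards[home.load()]) {
            s->lock.unlock();
            s = &shards[home.load()];
            s->lock.lock();
        }
        return s;  // locked AND still the current home shard
    }
};

int main() {
    Cache c;
    c.home.store(1);  // simulate a split migrating the entry before we lock
    Shard* s = c.lock_home();
    assert(s == &c.shards[1]);  // the loop converged on the new home shard
    s->lock.unlock();
    return 0;
}
</pre>
<p>The pattern is only sound while whatever stores <code>home</code> (in BlueStore, the Collection reached through <code>c</code>) is guaranteed to outlive the loop, which is exactly the invariant the dangling <code>c == 0x200</code> above shows being violated.</p>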
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=205019
2021-10-28T14:16:00Z
Neha Ojha
nojha@redhat.com
<ul><li><strong>Assignee</strong> set to <i>Igor Fedotov</i></li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=205258
2021-11-02T12:12:51Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Status</strong> changed from <i>New</i> to <i>In Progress</i></li><li><strong>Pull request ID</strong> set to <i>43770</i></li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=205259
2021-11-02T12:13:17Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Backport</strong> set to <i>pacific, octopus</i></li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=205702
2021-11-08T23:29:45Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Status</strong> changed from <i>In Progress</i> to <i>Pending Backport</i></li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=205703
2021-11-08T23:29:52Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Status</strong> changed from <i>Pending Backport</i> to <i>Fix Under Review</i></li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=207441
2021-12-14T23:23:21Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Status</strong> changed from <i>Fix Under Review</i> to <i>Pending Backport</i></li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=207442
2021-12-14T23:25:25Z
Backport Bot
<ul><li><strong>Copied to</strong> <i><a class="issue tracker-9 status-3 priority-4 priority-default closed" href="/issues/53608">Backport #53608</a>: pacific: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext</i> added</li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=207444
2021-12-14T23:25:35Z
Backport Bot
<ul><li><strong>Copied to</strong> <i><a class="issue tracker-9 status-3 priority-4 priority-default closed" href="/issues/53609">Backport #53609</a>: octopus: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext</i> added</li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=210685
2022-02-16T21:56:06Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Status</strong> changed from <i>Pending Backport</i> to <i>Resolved</i></li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=218678
2022-06-22T16:38:01Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Duplicates</strong> <i><a class="issue tracker-1 status-10 priority-4 priority-default closed" href="/issues/56174">Bug #56174</a>: rook-ceph-osd crash randomly</i> added</li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=222344
2022-08-08T08:27:16Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Duplicates</strong> <i><a class="issue tracker-1 status-10 priority-4 priority-default closed" href="/issues/54727">Bug #54727</a>: crash: __pthread_mutex_lock()</i> added</li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=222346
2022-08-08T08:27:56Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Duplicates</strong> <i><a class="issue tracker-1 status-10 priority-4 priority-default closed" href="/issues/56200">Bug #56200</a>: crash: ceph::buffer::ptr::release()</i> added</li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=222350
2022-08-08T08:30:48Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Duplicates</strong> <i><a class="issue tracker-1 status-10 priority-4 priority-default closed" href="/issues/54650">Bug #54650</a>: crash: BlueStore::Onode::put()</i> added</li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=222354
2022-08-08T08:32:13Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Related to</strong> <i><a class="issue tracker-1 status-10 priority-4 priority-default closed" href="/issues/47740">Bug #47740</a>: OSD crash when increase pg_num</i> added</li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=223296
2022-08-11T16:16:27Z
Anonymous
<ul></ul><p>According to <a class="external" href="https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/PPWIFPEI3EVBU3GQYYO6ABGF23WR5SGZ/">https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/PPWIFPEI3EVBU3GQYYO6ABGF23WR5SGZ/</a>, this is not resolved yet. Could this be reopened, please?</p>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=223486
2022-08-15T12:57:33Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Status</strong> changed from <i>Resolved</i> to <i>New</i></li></ul><p>Looks like this hasn't been completely fixed yet.<br />We've got a bunch of new tickets from the Telemetry bot that indicate the same or similar symptoms (Onode::put is primarily involved) in Ceph releases that already include PR <a class="external" href="https://github.com/ceph/ceph/pull/43770">#43770</a> and its backports.</p>
<p>Some of the cases from the field I observed personally:</p>
<p>1) 15.2.16</p>
<pre>
Aug 05 23:34:51 ceph-osd[2861]: *** Caught signal (Segmentation fault) ***
Aug 05 23:34:51 ceph-osd[2861]: in thread 7f08cf3a0700 thread_name:tp_osd_tp
Aug 05 23:34:51 ceph-osd[2861]: ceph version 15.2.16 (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus (stable)
Aug 05 23:34:51 ceph-osd[2861]: 1: (()+0x12730) [0x7f08ec91e730]
Aug 05 23:34:51 ceph-osd[2861]: 2: (ceph::buffer::v15_2_0::ptr::release()+0x26) [0x5650f3904d26]
Aug 05 23:34:51 ceph-osd[2861]: 3: (BlueStore::Onode::put()+0x1a9) [0x5650f35b6a79]
Aug 05 23:34:51 ceph-osd[2861]: 4: (std::_Hashtable<ghobject_t, std::pair<ghobject_t const, boost::intrusive_ptr<BlueStore::Onode> >, mempool::pool_allocator<(mempool::pool_index_t)4, std::pair<ghobject_t const, boost::intrusive_ptr<BlueStore::Onode> > >, std::__detail::_Select1st, std::equal_to<ghobject_t>, std::hash<ghobject_t>, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true> >::_M_erase(unsigned long, std::__detail::_Hash_node_base*, std::__detail::_Hash_node<std::pair<ghobject_t const, boost::intrusive_ptr<BlueStore::Onode> >, true>*)+0x64) [0x5650f3662ca4]
Aug 05 23:34:51 ceph-osd[2861]: 5: (BlueStore::OnodeSpace::_remove(ghobject_t const&)+0x290) [0x5650f35b68a0]
Aug 05 23:34:51 ceph-osd[2861]: 6: (LruOnodeCacheShard::_trim_to(unsigned long)+0xdb) [0x5650f36631db]
Aug 05 23:34:51 ceph-osd[2861]: 7: (BlueStore::OnodeSpace::add(ghobject_t const&, boost::intrusive_ptr<BlueStore::Onode>&)+0x48d) [0x5650f35b74cd]
Aug 05 23:34:51 ceph-osd[2861]: 8: (BlueStore::Collection::get_onode(ghobject_t const&, bool, bool)+0x453) [0x5650f35fdac3]
Aug 05 23:34:51 ceph-osd[2861]: 9: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ceph::os::Transaction*)+0x1dc3) [0x5650f3633353]
Aug 05 23:34:51 ceph-osd[2861]: 10: (BlueStore::queue_transactions(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, std::vector<ceph::os::Transaction, std::allocator<ceph::os::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x408) [0x5650f3634778]
Aug 05 23:34:51 ceph-osd[2861]: 11: (non-virtual thunk to PrimaryLogPG::queue_transactions(std::vector<ceph::os::Transaction, std::allocator<ceph::os::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x54) [0x5650f32e7c14]
Aug 05 23:34:51 ceph-osd[2861]: 12: (ReplicatedBackend::do_repop(boost::intrusive_ptr<OpRequest>)+0xdf4) [0x5650f347b804]
Aug 05 23:34:51 ceph-osd[2861]: 13: (ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x267) [0x5650f348ad57]
Aug 05 23:34:51 ceph-osd[2861]: 14: (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x57) [0x5650f331d917]
Aug 05 23:34:51 ceph-osd[2861]: 15: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x62f) [0x5650f32c14df]
Aug 05 23:34:51 ceph-osd[2861]: 16: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x325) [0x5650f3159d35]
Aug 05 23:34:51 ceph-osd[2861]: 17: (ceph::osd::scheduler::PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x64) [0x5650f339dea4]
Aug 05 23:34:51 ceph-osd[2861]: 18: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x12fa) [0x5650f317678a]
Aug 05 23:34:51 ceph-osd[2861]: 19: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5b4) [0x5650f37801f4]
Aug 05 23:34:51 ceph-osd[2861]: 20: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x5650f3782c70]
Aug 05 23:34:51 ceph-osd[2861]: 21: (()+0x7fa3) [0x7f08ec913fa3]
Aug 05 23:34:51 ceph-osd[2861]: 22: (clone()+0x3f) [0x7f08ec4beeff]
</pre>
<p>or</p>
<pre>
Aug 05 00:33:29 ceph-osd[2863]: *** Caught signal (Segmentation fault) ***
Aug 05 00:33:29 ceph-osd[2863]: in thread 7f4613a22700 thread_name:bstore_kv_final
Aug 05 00:33:29 ceph-osd[2863]: ceph version 15.2.16 (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus (stable)
Aug 05 00:33:29 ceph-osd[2863]: 1: (()+0x12730) [0x7f461ff7e730]
Aug 05 00:33:29 ceph-osd[2863]: 2: (BlueStore::Onode::put()+0x193) [0x564c15db8a63]
Aug 05 00:33:29 ceph-osd[2863]: 3: (std::_Rb_tree<boost::intrusive_ptr<BlueStore::Onode>, boost::intrusive_ptr<BlueStore::Onode>, std::_Identity<boost::intrusive_ptr<BlueStore::Onode> >, std::less<boost::intrusive_ptr<BlueStore::Onode> >, std::allocator<boost::intrusive_ptr<BlueStore::Onode> > >::_M_erase(std::_Rb_tree_node<boost::intrusive_ptr<BlueStore::Onode> >*)+0x2d) [0x564c15e6460d]
Aug 05 00:33:29 ceph-osd[2863]: 4: (BlueStore::TransContext::~TransContext()+0x117) [0x564c15e64747]
Aug 05 00:33:29 ceph-osd[2863]: 5: (BlueStore::_txc_finish(BlueStore::TransContext*)+0x24b) [0x564c15e0bb8b]
Aug 05 00:33:29 ceph-osd[2863]: 6: (BlueStore::_txc_state_proc(BlueStore::TransContext*)+0x234) [0x564c15e23744]
Aug 05 00:33:29 ceph-osd[2863]: 7: (BlueStore::_kv_finalize_thread()+0x552) [0x564c15e2e3e2]
Aug 05 00:33:29 ceph-osd[2863]: 8: (BlueStore::KVFinalizeThread::entry()+0xd) [0x564c15e69b8d]
Aug 05 00:33:29 ceph-osd[2863]: 9: (()+0x7fa3) [0x7f461ff73fa3]
Aug 05 00:33:29 ceph-osd[2863]: 10: (clone()+0x3f) [0x7f461fb1eeff]
</pre>
<p>2) a different cluster at 15.2.16, backtrace:</p>
<pre>
0: (()+0x12730) [0x7fe8875d1730]
1: (gsignal()+0x10b) [0x7fe8870b07bb]
2: (abort()+0x121) [0x7fe88709b535]
3: (()+0x2240f) [0x7fe88709b40f]
4: (()+0x30102) [0x7fe8870a9102]
5: (()+0xeb47ca) [0x55e2237177ca]
6: (BlueStore::Onode::put()+0x2b1) [0x55e22372ab81]
7: (std::_Rb_tree<boost::intrusive_ptr<BlueStore::Onode>, boost::intrusive_ptr<BlueStore::Onode>, std::_Identity<boost::intrusive_ptr<BlueStore::Onode> >, std::less<boost::intrusive_ptr<BlueStore::Onode> >, std::allocator<boost::intrusive_ptr<BlueStore::Onode> > >::_M_erase(std::_Rb_tree_node<boost::intrusive_ptr<BlueStore::Onode> >*)+0x2d) [0x55e2237d660d]
8: (BlueStore::TransContext::~TransContext()+0x124) [0x55e2237d6754]
9: (BlueStore::_txc_finish(BlueStore::TransContext*)+0x24b) [0x55e22377db8b]
10: (BlueStore::_txc_state_proc(BlueStore::TransContext*)+0x234) [0x55e223795744]
11: (BlueStore::_kv_finalize_thread()+0x552) [0x55e2237a03e2]
12: (BlueStore::KVFinalizeThread::entry()+0xd) [0x55e2237dbb8d]
13: (()+0x7fa3) [0x7fe8875c6fa3]
14: (clone()+0x3f) [0x7fe887171eff]
</pre>
<p>3) 16.2.9</p>
<pre>
*** Caught signal (Segmentation fault) ***
2022-08-02 00:33:00 Ceph04 osd.21 in thread 7f2853f74700 thread_name:tp_osd_tp
2022-08-02 00:33:00 Ceph04 osd.21 ceph version 16.2.9 (4c3647a322c0ff5a1dd2344e039859dcbd28c830) pacific (stable)
2022-08-02 00:33:00 Ceph04 osd.21 1: /lib64/libpthread.so.0(+0x168c0) [0x7f287a1e98c0]
2022-08-02 00:33:00 Ceph04 osd.21 2: (ceph::buffer::v15_2_0::ptr::release()+0xf) [0x55670639336f]
2022-08-02 00:33:00 Ceph04 osd.21 3: (BlueStore::Onode::put()+0x1bc) [0x55670601feac]
2022-08-02 00:33:00 Ceph04 osd.21 4: (std::__detail::_Hashtable_alloc<mempool::pool_allocator<(mempool::pool_index_t)4, std::__detail::_Hash_node<std::pair<ghobject_t const, boost::intrusive_ptr<BlueStore::Onode> >, true> > >::_M_deallocate_node(std::__detail::_Hash_node<std::pair<ghobject_t const, boost::intrusive_ptr<BlueStore::Onode> >, true>*)+0x35) [0x5567060d2365]
2022-08-02 00:33:00 Ceph04 osd.21 5: (std::_Hashtable<ghobject_t, std::pair<ghobject_t const, boost::intrusive_ptr<BlueStore::Onode> >, mempool::pool_allocator<(mempool::pool_index_t)4, std::pair<ghobject_t const, boost::intrusive_ptr<BlueStore::Onode> > >, std::__detail::_Select1st, std::equal_to<ghobject_t>, std::hash<ghobject_t>, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true> >::_M_erase(unsigned long, std::__detail::_Hash_node_base*, std::__detail::_Hash_node<std::pair<ghobject_t const, boost::intrusive_ptr<BlueStore::Onode> >, true>*)+0x53) [0x5567060d27a3]
2022-08-02 00:33:00 Ceph04 osd.21 6: (BlueStore::OnodeSpace::_remove(ghobject_t const&)+0x12c) [0x55670601fb5c]
2022-08-02 00:33:00 Ceph04 osd.21 7: (LruOnodeCacheShard::_trim_to(unsigned long)+0xce) [0x5567060d350e]
2022-08-02 00:33:00 Ceph04 osd.21 8: (BlueStore::OnodeSpace::add(ghobject_t const&, boost::intrusive_ptr<BlueStore::Onode>&)+0x152) [0x5567060206a2]
2022-08-02 00:33:00 Ceph04 osd.21 9: (BlueStore::Collection::get_onode(ghobject_t const&, bool, bool)+0x299) [0x55670607fc39]
2022-08-02 00:33:00 Ceph04 osd.21 10: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ceph::os::Transaction*)+0x1d32) [0x55670608b722]
2022-08-02 00:33:00 Ceph04 osd.21 11: (BlueStore::queue_transactions(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, std::vector<ceph::os::Transaction, std::allocator<ceph::os::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x2fa) [0x5567060a555a]
2022-08-02 00:33:00 Ceph04 osd.21 12: (non-virtual thunk to PrimaryLogPG::queue_transactions(std::vector<ceph::os::Transaction, std::allocator<ceph::os::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x54) [0x556705ce5cf4]
2022-08-02 00:33:00 Ceph04 osd.21 13: (ECBackend::handle_sub_write(pg_shard_t, boost::intrusive_ptr<OpRequest>, ECSubWrite&, ZTracer::Trace const&)+0xa4d) [0x556705eff87d]
2022-08-02 00:33:00 Ceph04 osd.21 14: (ECBackend::try_reads_to_commit()+0x2509) [0x556705f10759]
2022-08-02 00:33:00 Ceph04 osd.21 15: (ECBackend::check_ops()+0x1c) [0x556705f1202c]
2022-08-02 00:33:00 Ceph04 osd.21 16: (ECBackend::handle_sub_write_reply(pg_shard_t, ECSubWriteReply const&, ZTracer::Trace const&)+0xde) [0x556705f1217e]
2022-08-02 00:33:00 Ceph04 osd.21 17: (ECBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x1cf) [0x556705f17cef]
2022-08-02 00:33:00 Ceph04 osd.21 18: (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x87) [0x556705d34117]
2022-08-02 00:33:00 Ceph04 osd.21 19: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x684) [0x556705cd5264]
2022-08-02 00:33:00 Ceph04 osd.21 20: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x159) [0x556705b5ee39]
2022-08-02 00:33:00 Ceph04 osd.21 21: (ceph::osd::scheduler::PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x67) [0x556705dbaef7]
2022-08-02 00:33:00 Ceph04 osd.21 22: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xcf5) [0x556705b7c625]
2022-08-02 00:33:00 Ceph04 osd.21 23: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x4ac) [0x5567061e02ec]
2022-08-02 00:33:00 Ceph04 osd.21 24: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x5567061e37b0]
2022-08-02 00:33:00 Ceph04 osd.21 25: /lib64/libpthread.so.0(+0xa6ea) [0x7f287a1dd6ea]
2022-08-02 00:33:00 Ceph04 osd.21 26: clone()
</pre>
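<p>Notably, the two 15.2.16 traces enter BlueStore::Onode::put() from different threads (tp_osd_tp via LRU trimming, bstore_kv_final via ~TransContext), i.e. two owners dropping references concurrently. A sketch of the release idiom that makes concurrent drops safe, assuming a hypothetical <code>Entry</code> type (the real Onode carries far more state than a bare counter): only the thread whose decrement hits zero may touch the object afterwards.</p>
<pre>
#include <atomic>
#include <cassert>
#include <thread>

// Hypothetical refcounted cache entry (stand-in for the real Onode).
struct Entry {
    std::atomic<int> nref{0};
    static std::atomic<int> destroyed;  // counts destructor runs, for checking

    ~Entry() { destroyed.fetch_add(1, std::memory_order_relaxed); }

    void get() { nref.fetch_add(1, std::memory_order_relaxed); }

    // Safe put(): after the fetch_sub, only the thread that observed the
    // count drop to zero may touch the object. Reading any member (like
    // the Collection pointer `c` in the crash above) on the non-zero path
    // would race with the destroying thread.
    void put() {
        if (nref.fetch_sub(1, std::memory_order_acq_rel) == 1) {
            delete this;  // sole remaining owner; no one else can see us
        }
    }
};
std::atomic<int> Entry::destroyed{0};

// Runs n rounds of "two threads drop the last two refs concurrently"
// and returns how many entries were destroyed during those rounds.
static int run_rounds(int n) {
    int before = Entry::destroyed.load();
    for (int i = 0; i < n; ++i) {
        Entry* e = new Entry;
        e->get();
        e->get();  // two owners
        std::thread a([e] { e->put(); });
        std::thread b([e] { e->put(); });
        a.join();
        b.join();
    }
    return Entry::destroyed.load() - before;
}

int main() {
    assert(run_rounds(100) == 100);  // each entry destroyed exactly once
    return 0;
}
</pre>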
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=223487
2022-08-15T12:59:30Z
Igor Fedotov
igor.fedotov@croit.io
<ul></ul><p>4) Quincy case from Telemetry: <a class="external" href="https://tracker.ceph.com/issues/56382">https://tracker.ceph.com/issues/56382</a></p>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=224126
2022-08-23T19:00:42Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Status</strong> changed from <i>New</i> to <i>In Progress</i></li></ul><p>Another PR: <a class="external" href="https://github.com/ceph/ceph/pull/47702">https://github.com/ceph/ceph/pull/47702</a></p>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=227682
2022-10-21T16:41:53Z
Anonymous
<ul></ul><p>We have almost daily crashes on our Octopus cluster that look like this bug; they are also reported via telemetry. Could you confirm that these are the same? If you need more information, just ask. I'm really waiting on a patch for this:</p>
<pre>
{
"backtrace": [
"(()+0x12980) [0x7f269ac06980]",
"(ceph::buffer::v15_2_0::ptr::release()+0x26) [0x55fc3e524206]",
"(BlueStore::Onode::put()+0x1c1) [0x55fc3e192a71]",
"(std::_Rb_tree<boost::intrusive_ptr<BlueStore::Onode>, boost::intrusive_ptr<BlueStore::Onode>, std::_Identity<boost::intrusive_ptr<BlueStore::Onode> >, std::less<boost::intrusive_ptr<BlueStore::Onode> >, std::allocator<boost::intrusive_ptr<BlueStore::Onode> > >::_M_erase(std::_Rb_tree_node<boost::intrusive_ptr<BlueStore::Onode> >*)+0x2d) [0x55fc3e248a0d]",
"(std::_Rb_tree<boost::intrusive_ptr<BlueStore::Onode>, boost::intrusive_ptr<BlueStore::Onode>, std::_Identity<boost::intrusive_ptr<BlueStore::Onode> >, std::less<boost::intrusive_ptr<BlueStore::Onode> >, std::allocator<boost::intrusive_ptr<BlueStore::Onode> > >::_M_erase(std::_Rb_tree_node<boost::intrusive_ptr<BlueStore::Onode> >*)+0x1b) [0x55fc3e2489fb]",
"(BlueStore::TransContext::~TransContext()+0x124) [0x55fc3e248b54]",
"(BlueStore::_txc_finish(BlueStore::TransContext*)+0x4b8) [0x55fc3e1d01b8]",
"(BlueStore::_txc_state_proc(BlueStore::TransContext*)+0x24c) [0x55fc3e1d1b7c]",
"(BlueStore::_kv_finalize_thread()+0x48c) [0x55fc3e21b58c]",
"(BlueStore::KVFinalizeThread::entry()+0xd) [0x55fc3e24d09d]",
"(()+0x76db) [0x7f269abfb6db]",
"(clone()+0x3f) [0x7f269999b61f]"
],
"ceph_version": "15.2.17",
"crash_id": "2022-10-21T16:26:38.286992Z_ba5ffc75-58c3-45fc-9cda-950256b5efca",
"entity_name": "osd.127",
"os_id": "ubuntu",
"os_name": "Ubuntu",
"os_version": "18.04.6 LTS (Bionic Beaver)",
"os_version_id": "18.04",
"process_name": "ceph-osd",
"stack_sig": "b2e4aac01a4b8acbb3878c39b0f5b1269edcccb6a90435e54b6958716a9e703e",
"timestamp": "2022-10-21T16:26:38.286992Z",
"utsname_hostname": "ceph-osd08",
"utsname_machine": "x86_64",
"utsname_release": "5.4.0-107-generic",
"utsname_sysname": "Linux",
"utsname_version": "#121~18.04.1-Ubuntu SMP Thu Mar 24 17:21:33 UTC 2022"
}
</pre>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=227687
2022-10-21T18:26:58Z
Yaarit Hatuka
<ul></ul><p>Hi Sven,</p>
<p>Thanks for reporting via telemetry! The issue you reported is tracked in <a class="external" href="https://tracker.ceph.com/issues/56200">https://tracker.ceph.com/issues/56200</a>, which is marked as a duplicate of this tracker (<a class="external" href="https://tracker.ceph.com/issues/53002">https://tracker.ceph.com/issues/53002</a>), so they are indeed the same.<br />The Octopus backport is already merged, but there is another PR (<a class="external" href="https://github.com/ceph/ceph/pull/47702">https://github.com/ceph/ceph/pull/47702</a>) which is still under review and not yet merged to main.</p>
<p>Regards,<br />Yaarit</p>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=229797
2022-12-29T07:26:21Z
王子敬 wang
<ul></ul><pre>
(gdb) bt
#0  0x00007fc82cdb64aa in tc_newarray () from /lib64/libtcmalloc.so.4
#1  0x000055f6876050ba in ceph::buffer::v15_2_0::ptr_node::create<ceph::buffer::v15_2_0::ptr_node const&> ()
    at /usr/src/debug/ceph-15.2.13-branch_2212260918.el8.x86_64/src/include/buffer.h:411
#2  ceph::buffer::v15_2_0::list::append (this=this@entry=0x55f6b308ceb8, bl=...) at /usr/src/debug/ceph-15.2.13-branch_2212260918.el8.x86_64/src/common/buffer.cc:1424
#3  0x000055f687150491 in ceph::encode (bl=..., s=...) at /usr/src/debug/ceph-15.2.13-branch_2212260918.el8.x86_64/src/include/encoding.h:282
#4  ceph::os::Transaction::encode (this=this@entry=0x7fc8039c7440, bl=...) at /usr/src/debug/ceph-15.2.13-branch_2212260918.el8.x86_64/src/os/Transaction.h:1267
#5  0x000055f687137698 in ceph::os::encode (features=0, bl=..., c=...)
    at /usr/src/debug/ceph-15.2.13-branch_2212260918.el8.x86_64/src/os/Transaction.h:1293
#6  ReplicatedBackend::generate_subop (this=0x55f6956f8180, soid=..., at_version=..., tid=10598176, reqid=..., pg_trim_to=..., min_last_complete_ondisk=..., new_temp_oid=...,
    discard_temp_oid=..., log_entries=..., hset_hist=std::optional<pg_hit_set_history_t> [no contained value], op_t=..., peer=..., pinfo=...)
    at /usr/src/debug/ceph-15.2.13-branch_2212260918.el8.x86_64/src/osd/ReplicatedBackend.cc:968
#7  0x000055f687138188 in ReplicatedBackend::issue_op (this=0x55f6956f8180, soid=..., at_version=..., tid=<optimized out>, reqid=..., pg_trim_to=..., min_last_complete_ondisk=...,
    new_temp_oid=..., discard_temp_oid=..., log_entries=..., hset_hist=..., op=<optimized out>, op_t=...)
    at /usr/src/debug/ceph-15.2.13-branch_2212260918.el8.x86_64/src/osd/ReplicatedBackend.cc:1028
#8  0x000055f68713ad14 in ReplicatedBackend::submit_transaction (this=0x55f6956f8180, soid=..., delta_stats=..., at_version=..., _t=..., trim_to=..., min_last_complete_ondisk=...,
    _log_entries=std::vector of length 1, capacity 1 = {...}, hset_history=std::optional<pg_hit_set_history_t> [no contained value], on_all_commit=0x55f6bfd47360, tid=10598176,
    reqid=..., orig_op=...)
    at /usr/include/c++/8/ext/aligned_buffer.h:76
#9  0x000055f686f07ce0 in PrimaryLogPG::issue_repop (this=0x55f6961c4000, repop=0x55f696e73980, ctx=0x55f6dc76d200)
    at /usr/src/debug/ceph-15.2.13-branch_2212260918.el8.x86_64/src/osd/PeeringState.h:2292
#10 0x000055f686f64c5a in PrimaryLogPG::execute_ctx (this=0x55f6961c4000, ctx=<optimized out>)
    at /usr/src/debug/ceph-15.2.13-branch_2212260918.el8.x86_64/src/osd/PrimaryLogPG.cc:4166
#11 0x000055f686f69004 in PrimaryLogPG::do_op (this=0x55f6961c4000, op=...) at /usr/src/debug/ceph-15.2.13-branch_2212260918.el8.x86_64/src/osd/PrimaryLogPG.cc:2381
#12 0x000055f686f76585 in PrimaryLogPG::do_request (this=0x55f6961c4000, op=..., handle=...)
    at /usr/src/debug/ceph-15.2.13-branch_2212260918.el8.x86_64/src/osd/PrimaryLogPG.cc:1779
#13 0x000055f686df35d9 in OSD::dequeue_op (this=this@entry=0x55f692652000, pg=..., op=..., handle=...)
    at /usr/src/debug/ceph-15.2.13-branch_2212260918.el8.x86_64/src/osd/OSD.cc:9754
#14 0x000055f68705b378 in ceph::osd::scheduler::PGOpItem::run (this=<optimized out>, osd=0x55f692652000, sdata=<optimized out>, pg=..., handle=...)
    at /usr/src/debug/ceph-15.2.13-branch_2212260918.el8.x86_64/src/osd/PG.h:627
#15 0x000055f686e0ff4b in ceph::osd::scheduler::OpSchedulerItem::run (handle=..., pg=..., sdata=<optimized out>, osd=<optimized out>, this=0x7fc8039c83b0)
    at /usr/include/c++/8/bits/unique_ptr.h:345
#16 OSD::ShardedOpWQ::_process (this=<optimized out>, thread_index=<optimized out>, hb=<optimized out>)
    at /usr/src/debug/ceph-15.2.13-branch_2212260918.el8.x86_64/src/osd/OSD.cc:10788
#17 0x000055f687465644 in ShardedThreadPool::shardedthreadpool_worker (this=0x55f692652a28, thread_index=11)
    at /usr/src/debug/ceph-15.2.13-branch_2212260918.el8.x86_64/src/common/WorkQueue.cc:311
#18 0x000055f6874682a4 in ShardedThreadPool::WorkThreadSharded::entry (this=<optimized out>) at /usr/src/debug/ceph-15.2.13-branch_2212260918.el8.x86_64/src/common/WorkQueue.h:715
#19 0x00007fc82c26014a in start_thread () from /lib64/libpthread.so.0
#20 0x00007fc82b3c9dc3 in clone () from /lib64/libc.so.6
</pre>
<p>ceph_version: 15.2.13<br />We've just seen this crash running 15.2.13 as well.</p>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=230108
2023-01-13T12:14:01Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Duplicated by</strong> <i><a class="issue tracker-1 status-10 priority-4 priority-default closed" href="/issues/58439">Bug #58439</a>: octopus osd crash</i> added</li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=230110
2023-01-13T12:14:32Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Pull request ID</strong> changed from <i>43770</i> to <i>47702</i></li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=230111
2023-01-13T12:14:57Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Status</strong> changed from <i>In Progress</i> to <i>Fix Under Review</i></li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=234397
2023-04-05T21:56:51Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Duplicated by</strong> <i><a class="issue tracker-1 status-3 priority-4 priority-default closed" href="/issues/56382">Bug #56382</a>: ONode ref counting is broken</i> added</li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=234399
2023-04-05T21:57:12Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Status</strong> changed from <i>Fix Under Review</i> to <i>Duplicate</i></li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=234408
2023-04-05T21:59:08Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Duplicated by</strong> deleted (<i><a class="issue tracker-1 status-3 priority-4 priority-default closed" href="/issues/56382">Bug #56382</a>: ONode ref counting is broken</i>)</li></ul>
bluestore - Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
https://tracker.ceph.com/issues/53002?journal_id=234410
2023-04-05T21:59:26Z
Igor Fedotov
igor.fedotov@croit.io
<ul><li><strong>Duplicated by</strong> <i><a class="issue tracker-1 status-3 priority-4 priority-default closed" href="/issues/56382">Bug #56382</a>: ONode ref counting is broken</i> added</li></ul>