Bug #49900


_txc_add_transaction error (39) Directory not empty not handled on operation 21 (op 1, counting from 0)

Added by Neha Ojha about 3 years ago. Updated about 3 years ago.

Status: Resolved
Priority: Immediate
Assignee:
Target version: -
% Done: 0%
Source:
Tags:
Backport: pacific, octopus
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2021-03-19T05:44:43.412+0000 7f9d61888700 10 bluestore(/var/lib/ceph/osd/ceph-0) _remove_collection 4.3_head = -39
2021-03-19T05:44:43.412+0000 7f9d61888700 -1 bluestore(/var/lib/ceph/osd/ceph-0) _txc_add_transaction error (39) Directory not empty not handled on operation 21 (op 1, counting from 0)
2021-03-19T05:44:43.412+0000 7f9d61888700  0 _dump_transaction transaction dump:
{
    "ops": [
        {
            "op_num": 0,
            "op_name": "remove",
            "collection": "4.3_head",
            "oid": "#4:c0000000::::head#" 
        },
        {
            "op_num": 1,
            "op_name": "rmcoll",
            "collection": "4.3_head" 
        }
    ]
}

2021-03-19T05:44:43.416+0000 7f9d61888700 -1 /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-2096-g0d3df9b9/rpm/el8/BUILD/ceph-17.0.0-2096-g0d3df9b9/src/os/bluestore/BlueStore.cc: In function 'void BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)' thread 7f9d61888700 time 2021-03-19T05:44:43.413041+0000
/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-2096-g0d3df9b9/rpm/el8/BUILD/ceph-17.0.0-2096-g0d3df9b9/src/os/bluestore/BlueStore.cc: 12820: ceph_abort_msg("unexpected error")

 ceph version 17.0.0-2096-g0d3df9b9 (0d3df9b9599b6294f204ae84913a36061e78fb76) quincy (dev)
 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xe5) [0x5606eadfd10e]
 2: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ceph::os::Transaction*)+0xc9c) [0x5606eb43bfbc]
 3: (BlueStore::queue_transactions(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, std::vector<ceph::os::Transaction, std::allocator<ceph::os::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x316) [0x5606eb43e6d6]
 4: (ObjectStore::queue_transaction(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, ceph::os::Transaction&&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x85) [0x5606eaf67a75]
 5: (PG::do_delete_work(ceph::os::Transaction&, ghobject_t)+0xd1b) [0x5606eafb685b]
 6: (PeeringState::Deleting::react(PeeringState::DeleteSome const&)+0x108) [0x5606eb16fb48]
 7: (boost::statechart::simple_state<PeeringState::Deleting, PeeringState::ToDelete, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0xe5) [0x5606eb1d4435]
 8: (boost::statechart::state_machine<PeeringState::PeeringMachine, PeeringState::Initial, std::allocator<boost::statechart::none>, boost::statechart::null_exception_translator>::process_event(boost::statechart::event_base const&)+0x5b) [0x5606eafbdbbb]
 9: (PG::do_peering_event(std::shared_ptr<PGPeeringEvent>, PeeringCtx&)+0x2d1) [0x5606eafb2541]
 10: (OSD::dequeue_peering_evt(OSDShard*, PG*, std::shared_ptr<PGPeeringEvent>, ThreadPool::TPHandle&)+0x175) [0x5606eaf2c645]
 11: (OSD::dequeue_delete(OSDShard*, PG*, unsigned int, ThreadPool::TPHandle&)+0xc8) [0x5606eaf2c9a8]
 12: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xa58) [0x5606eaf1f568]
 13: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5c4) [0x5606eb580f34]
 14: (ShardedThreadPool::WorkThreadSharded::entry()+0x14) [0x5606eb583bd4]
 15: (Thread::_entry_func(void*)+0xd) [0x5606eb572a3d]
 16: /lib64/libpthread.so.0(+0x814a) [0x7f9d83fc114a]
 17: clone()

/a/nojha-2021-03-18_23:17:30-rados:thrash-master-distro-basic-gibba/5978595/

We fixed something similar in https://tracker.ceph.com/issues/38724; this appears to have a different root cause.
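
For context on the error code: -39 is -ENOTEMPTY on Linux, i.e. the rmcoll is rejected because the collection still appears to contain an object. A minimal standalone sketch of that semantics (NOT BlueStore's actual implementation, just an illustration of the failure mode) is:

#include <cerrno>
#include <iostream>
#include <map>
#include <set>
#include <string>

struct ToyStore {
  // collection name -> set of object ids still present in that collection
  std::map<std::string, std::set<std::string>> collections;

  int remove_object(const std::string& coll, const std::string& oid) {
    collections[coll].erase(oid);
    return 0;
  }

  int remove_collection(const std::string& coll) {
    auto it = collections.find(coll);
    if (it == collections.end())
      return -ENOENT;
    if (!it->second.empty())
      return -ENOTEMPTY;  // -39 on Linux, the error reported above
    collections.erase(it);
    return 0;
  }
};

int main() {
  ToyStore store;
  store.collections["4.3_head"] = {"#4:c0000000::::head#",
                                   "#4:cfed6c85:::gibba02940939-39:head#"};
  // The transaction in the dump removes one object and then issues rmcoll.
  store.remove_object("4.3_head", "#4:c0000000::::head#");
  std::cout << "remove_collection returned "
            << store.remove_collection("4.3_head") << "\n";  // prints -39 on Linux
  return 0;
}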


Related issues (3: 0 open, 3 closed)

Related to bluestore - Bug #49818: os/bluestore/BlueStore.cc: FAILED ceph_assert(o->pinned) in BlueStore::Collection::split_cache() (Closed, Igor Fedotov)

Copied to bluestore - Backport #49989: pacific: _txc_add_transaction error (39) Directory not empty not handled on operation 21 (op 1, counting from 0) (Resolved)

Copied to bluestore - Backport #49990: octopus: _txc_add_transaction error (39) Directory not empty not handled on operation 21 (op 1, counting from 0) (Resolved, Neha Ojha)
Actions #1

Updated by Neha Ojha about 3 years ago

  • Priority changed from Normal to Urgent
2021-03-19T05:44:43.381+0000 7f9d61888700 20 osd.0 pg_epoch: 172 pg[4.3( v 164'337 (164'334,164'337] lb MIN local-lis/les=169/170 n=6 ec=20/20 lis/c=169/169 les/c/f=170/170/0 sis=172 pruub=14.077317238s) [] r=-1 lpr=172 DELETING pi=[169,172)/1 crt=164'337 lcod 164'336 mlcod 0'0 unknown NOTIFY pruub 257.026885986s@ mbc={}] do_delete_work [#4:c0000000::::head#,#4:c0db3d37:::gibba02940939-276:head#,#4:c27d094c:::gibba02940939-264:head#,#4:c3609d36:::gibba02940939-224:head#]
2021-03-19T05:44:43.382+0000 7f9d61888700 20 osd.0 pg_epoch: 172 pg[4.3( v 164'337 (164'334,164'337] lb MIN local-lis/les=169/170 n=6 ec=20/20 lis/c=169/169 les/c/f=170/170/0 sis=172 pruub=14.077317238s) [] r=-1 lpr=172 DELETING pi=[169,172)/1 crt=164'337 lcod 164'336 mlcod 0'0 unknown NOTIFY pruub 257.026885986s@ mbc={}] do_delete_work deleting 3 objects
.
.

2021-03-19T05:44:43.410+0000 7f9d61888700 10 osd.0 pg_epoch: 172 pg[4.3( v 164'337 (164'334,164'337] lb MIN local-lis/les=169/170 n=6 ec=20/20 lis/c=169/169 les/c/f=170/170/0 sis=172 pruub=14.077317238s) [] r=-1 lpr=172 DELETING pi=[169,172)/1 crt=164'337 lcod 164'336 mlcod 0'0 unknown NOTIFY pruub 257.026885986s@ mbc={}] do_peering_event: epoch_sent: 172 epoch_requested: 172 DeleteSome
2021-03-19T05:44:43.410+0000 7f9d61888700 10 osd.0 pg_epoch: 172 pg[4.3( v 164'337 (164'334,164'337] lb MIN local-lis/les=169/170 n=6 ec=20/20 lis/c=169/169 les/c/f=170/170/0 sis=172 pruub=14.077317238s) [] r=-1 lpr=172 DELETING pi=[169,172)/1 crt=164'337 lcod 164'336 mlcod 0'0 unknown NOTIFY pruub 257.026885986s@ mbc={}] do_delete_work
2021-03-19T05:44:43.410+0000 7f9d61888700 15 bluestore(/var/lib/ceph/osd/ceph-0) collection_list 4.3_head start GHMAX end GHMAX max 30
2021-03-19T05:44:43.411+0000 7f9d61888700 10 bluestore(/var/lib/ceph/osd/ceph-0) collection_list 4.3_head start GHMAX end GHMAX max 30 = 0, ls.size() = 0, next = GHMAX
2021-03-19T05:44:43.411+0000 7f9d61888700 20 osd.0 pg_epoch: 172 pg[4.3( v 164'337 (164'334,164'337] lb MIN local-lis/les=169/170 n=6 ec=20/20 lis/c=169/169 les/c/f=170/170/0 sis=172 pruub=14.077317238s) [] r=-1 lpr=172 DELETING pi=[169,172)/1 crt=164'337 lcod 164'336 mlcod 0'0 unknown NOTIFY pruub 257.026885986s@ mbc={}] do_delete_work []
2021-03-19T05:44:43.411+0000 7f9d61888700 15 bluestore(/var/lib/ceph/osd/ceph-0) collection_list 4.3_head start GHMIN end GHMAX max 30
2021-03-19T05:44:43.411+0000 7f9d61888700 20 bluestore(/var/lib/ceph/osd/ceph-0) _collection_list range #-6:c0000000::::0#0 to #-6:c4000000::::0#0 and #4:c0000000::::0#0 to #4:c4000000::::0#0 start GHMIN
2021-03-19T05:44:43.411+0000 7f9d61888700 20 bluestore(/var/lib/ceph/osd/ceph-0) _collection_list pend #-6:c4000000::::0#0
2021-03-19T05:44:43.411+0000 7f9d61888700 20 bluestore(/var/lib/ceph/osd/ceph-0) _collection_list oid #-5:c0000000:::scrub_3.3:head# >= #-6:c4000000::::0#0
2021-03-19T05:44:43.411+0000 7f9d61888700 20 bluestore(/var/lib/ceph/osd/ceph-0) _collection_list oid #4:c0000000::::head# end GHMAX
2021-03-19T05:44:43.412+0000 7f9d61888700 20 bluestore(/var/lib/ceph/osd/ceph-0) _collection_list oid #4:e0000000::::head# >= #4:c4000000::::0#0
2021-03-19T05:44:43.412+0000 7f9d61888700 10 bluestore(/var/lib/ceph/osd/ceph-0) collection_list 4.3_head start GHMAX end GHMAX max 30 = 0, ls.size() = 1, next = GHMAX
2021-03-19T05:44:43.412+0000 7f9d61888700  0 osd.0 pg_epoch: 172 pg[4.3( v 164'337 (164'334,164'337] lb MIN local-lis/les=169/170 n=6 ec=20/20 lis/c=169/169 les/c/f=170/170/0 sis=172 pruub=14.077317238s) [] r=-1 lpr=172 DELETING pi=[169,172)/1 crt=164'337 lcod 164'336 mlcod 0'0 unknown NOTIFY pruub 257.026885986s@ mbc={}] do_delete_work additional unexpected onode list (new onodes has appeared since PG removal started[#4:c0000000::::head#]
2021-03-19T05:44:43.412+0000 7f9d61888700 10 bluestore(/var/lib/ceph/osd/ceph-0) queue_transactions ch 0x5606efba3a40 4.3_head
2021-03-19T05:44:43.412+0000 7f9d61888700 20 bluestore(/var/lib/ceph/osd/ceph-0) _txc_create osr 0x5606efa091e0 = 0x5606ffb76000 seq 379
2021-03-19T05:44:43.412+0000 7f9d61888700 20 _pin0x5606ef392000   #4:c0000000::::head# pinned
2021-03-19T05:44:43.412+0000 7f9d61888700 15 bluestore(/var/lib/ceph/osd/ceph-0) _remove 4.3_head #4:c0000000::::head# onode 0x5606efbebb80 txc 0x5606ffb76000
2021-03-19T05:44:43.412+0000 7f9d61888700 15 bluestore(/var/lib/ceph/osd/ceph-0) _do_truncate 4.3_head #4:c0000000::::head# 0x0
2021-03-19T05:44:43.412+0000 7f9d61888700 20 bluestore.onode(0x5606efbebb80).flush flush done
2021-03-19T05:44:43.412+0000 7f9d61888700 20 bluestore(/var/lib/ceph/osd/ceph-0) _do_omap_clear remove range start: 0x00000000000004342D end: 0x00000000000004347E
2021-03-19T05:44:43.412+0000 7f9d61888700 10 bluestore(/var/lib/ceph/osd/ceph-0) _remove 4.3_head #4:c0000000::::head# = 0
2021-03-19T05:44:43.412+0000 7f9d61888700 15 bluestore(/var/lib/ceph/osd/ceph-0) _remove_collection 4.3_head
2021-03-19T05:44:43.412+0000 7f9d61888700 20 bluestore.OnodeSpace(0x5606efba3b80 in 0x5606ef392000) map_any
2021-03-19T05:44:43.412+0000 7f9d61888700  1 bluestore(/var/lib/ceph/osd/ceph-0) operator() #4:cfed6c85:::gibba02940939-39:head# 0x5606efc9f400 exists in onode_map
2021-03-19T05:44:43.412+0000 7f9d61888700 10 bluestore(/var/lib/ceph/osd/ceph-0) _remove_collection 4.3_head = -39

The "additional unexpected onode list (new onodes has appeared since PG removal started" message comes from 7f047005fc7; maybe related to https://github.com/ceph/ceph/pull/37314? A rough sketch of the implied flow follows below.

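As a hedged sketch only (this is not the actual OSD code; the helper names are invented), the flow the log above suggests is: the incremental listing from the saved cursor returns nothing, a final full-range listing is done as a safety net, anything it still finds is logged as unexpected and queued for removal, and the rmcoll is added to the same transaction:

#include <iostream>
#include <string>
#include <vector>

struct ToyTxn {
  std::vector<std::string> ops;
  void remove(const std::string& oid) { ops.push_back("remove " + oid); }
  void rmcoll(const std::string& coll) { ops.push_back("rmcoll " + coll); }
};

// Invented helpers standing in for the two collection_list calls in the log.
std::vector<std::string> list_from_cursor() { return {}; }
std::vector<std::string> list_full_range() {
  return {"#4:c0000000::::head#"};  // the leftover object seen in the log
}

int main() {
  ToyTxn txn;
  if (list_from_cursor().empty()) {           // incremental listing says "empty"
    auto leftovers = list_full_range();       // safety-net GHMIN..GHMAX listing
    if (!leftovers.empty()) {
      std::cerr << "do_delete_work additional unexpected onode list\n";
      for (const auto& oid : leftovers)
        txn.remove(oid);                      // queue removal of the stragglers
    }
    txn.rmcoll("4.3_head");                   // and the collection removal itself
  }
  for (const auto& op : txn.ops)
    std::cout << op << "\n";
  return 0;
}
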
Actions #2

Updated by Neha Ojha about 3 years ago

  • Assignee set to Igor Fedotov

#4:c0000000::::head# was listed and skipped when we last did do_delete_work just before the crash

2021-03-19T05:44:43.381+0000 7f9d61888700 20 osd.0 pg_epoch: 172 pg[4.3( v 164'337 (164'334,164'337] lb MIN local-lis/les=169/170 n=6 ec=20/20 lis/c=169/169 les/c/f=170/170/0 sis=172 pruub=14.077317238s) [] r=-1 lpr=172 DELETING pi=[169,172)/1 crt=164'337 lcod 164'336 mlcod 0'0 unknown NOTIFY pruub 257.026885986s@ mbc={}] do_delete_work [#4:c0000000::::head#,#4:c0db3d37:::gibba02940939-276:head#,#4:c27d094c:::gibba02940939-264:head#,#4:c3609d36:::gibba02940939-224:head#]

Then why are we trying to remove it now?

2021-03-19T05:44:43.412+0000 7f9d61888700  0 osd.0 pg_epoch: 172 pg[4.3( v 164'337 (164'334,164'337] lb MIN local-lis/les=169/170 n=6 ec=20/20 lis/c=169/169 les/c/f=170/170/0 sis=172 pruub=14.077317238s) [] r=-1 lpr=172 DELETING pi=[169,172)/1 crt=164'337 lcod 164'336 mlcod 0'0 unknown NOTIFY pruub 257.026885986s@ mbc={}] do_delete_work additional unexpected onode list (new onodes has appeared since PG removal started[#4:c0000000::::head#]
2021-03-19T05:44:43.412+0000 7f9d61888700 10 bluestore(/var/lib/ceph/osd/ceph-0) queue_transactions ch 0x5606efba3a40 4.3_head

Igor, any ideas?

Actions #3

Updated by Neha Ojha about 3 years ago

  • Priority changed from Urgent to Immediate
Actions #4

Updated by Neha Ojha about 3 years ago

Neha Ojha wrote:

#4:c0000000::::head# was listed and skipped when we last did do_delete_work just before the crash

[...]

then why are we trying to remove it now.

[...]

Igor, any ideas?

2021-03-19T05:44:43.412+0000 7f9d61888700  1 bluestore(/var/lib/ceph/osd/ceph-0) operator() #4:cfed6c85:::gibba02940939-39:head# 0x5606efc9f400 exists in onode_map 

This actually seems to be the problem.
Actions #5

Updated by Neha Ojha about 3 years ago

Dead job in teuthology; osd.0's log was copied to /home/nojha/49900 on senta02.

Actions #6

Updated by Igor Fedotov about 3 years ago

Neha Ojha wrote:

dead job in teuthology, osd.0's log copied to /home/nojha/49900 in senta02

Neha, could you please share the log somehow?

Actions #7

Updated by Igor Fedotov about 3 years ago

Neha Ojha wrote:

#4:c0000000::::head# was listed and skipped when we last did do_delete_work just before the crash

[...]

then why are we trying to remove it now.

[...]

Igor, any ideas?

That's an interesting question, but the object that remained in the collection and prevented its removal is a different one: #4:cfed6c85:::gibba02940939-39:head#
#4:c0000000::::head# had somehow been escaping removal until the last op, but the final check removed it:
2021-03-19T05:44:43.412+0000 7f9d61888700 10 bluestore(/var/lib/ceph/osd/ceph-0) _remove 4.3_head #4:c0000000::::head# = 0

Here is the surviving object:
2021-03-19T05:44:43.412+0000 7f9d61888700 1 bluestore(/var/lib/ceph/osd/ceph-0) operator() #4:cfed6c85:::gibba02940939-39:head# 0x5606efc9f400 exists in onode_map

And I can't see it in the provided log snippets, nor do I understand how it survived the final check.

So please share the full log, or at least its last lines (e.g. the last 10000).

Actions #8

Updated by Igor Fedotov about 3 years ago

hmm...
Something weird with this remaining object:
1) It shouldn't belong to the collection being removed: its hash is cfed6c85, while the PG being removed covers the range #4:c0000000::::0#0 to #4:c4000000::::0#0 (see the check sketched at the end of this comment).

2) The full log shows a recent collection split and our beloved split_cache() function!
The object in question is marked as 'not moving':

2021-03-19T05:44:40.327+0000 7f9d61888700 20 _pin0x5606ef392000 #4:cfed6c85:::gibba02940939-39:head# pinned
2021-03-19T05:44:40.327+0000 7f9d61888700 20 bluestore(/var/lib/ceph/osd/ceph-0).collection(4.3_head 0x5606efba3a40) split_cache not moving 0x5606efc9f400 #4:cfed6c85:::gibb
a02940939-39:head#
2021-03-19T05:44:40.327+0000 7f9d61888700 20 _unpin0x5606ef392000 #4:cfed6c85:::gibba02940939-39:head# unpinned

So at this point this doesn't look like a bug in the removal code (although recent fixes could have exposed the issue). It's rather another manifestation of the pinning/split_cache issues.
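
The range check in point (1) is simple arithmetic; here is a standalone illustration (not Ceph code) using the values quoted from the log:

#include <cstdint>
#include <iostream>

int main() {
  const std::uint32_t range_begin = 0xc0000000u;  // from "#4:c0000000::::0#0"
  const std::uint32_t range_end   = 0xc4000000u;  // from "#4:c4000000::::0#0"
  const std::uint32_t onode_hash  = 0xcfed6c85u;  // #4:cfed6c85:::gibba02940939-39:head#
  const bool in_range = onode_hash >= range_begin && onode_hash < range_end;
  std::cout << std::boolalpha
            << "belongs to 4.3's key range: " << in_range << "\n";
  return 0;  // prints: belongs to 4.3's key range: false
}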

Actions #9

Updated by Neha Ojha about 3 years ago

  • Project changed from RADOS to bluestore

I'm glad the root cause is not related to the pg removal optimization code!

Actions #10

Updated by Igor Fedotov about 3 years ago

IMO the root cause of the issue is the lack of proper locking in split_cache.
Here is the log analysis for onode #4:cfed6c85:::gibba02940939-39:head#, which is ultimately, and unexpectedly, found in PG 4.3 during that PG's removal. A minimal sketch of the suspected race follows at the end of this comment.

...
The failing object is in 4.3_head
2021-03-19T05:41:04.536+0000 7f9d61888700 10 bluestore(/var/lib/ceph/osd/ceph-0) _setattrs 4.3_head #4:cfed6c85:::gibba02940939-39:head# 2 keys = 0
...

The first cache split for 4.3:
2021-03-19T05:41:09.628+0000 7f9d61888700 15 bluestore(/var/lib/ceph/osd/ceph-0) _split_collection 4.3_head to 4.13_head bits 5
2021-03-19T05:41:09.629+0000 7f9d61888700 20 bluestore(/var/lib/ceph/osd/ceph-0).collection(4.3_head 0x5606efba3a40) split_cache moving 0x5606fc85c280 #4:ced20890:::gibba02940939-262:head#

!!! Here is a concurrent remove from the same OnodeSpace by another thread, which presumably invalidates onode_map's iterator in split_cache and prevents a proper split !!!

2021-03-19T05:41:09.629+0000 7f9d73f1c700 20 bluestore.OnodeSpace(0x5606efba3b80 in 0x5606ef392000) _remove #4:c534eb9d:::gibba02940939-90:head#
...
2021-03-19T05:41:09.629+0000 7f9d61888700 20 bluestore(/var/lib/ceph/osd/ceph-0).collection(4.3_head 0x5606efba3a40) split_cache not moving 0x5606eff4bb80 #4:c534eb9d:::%80s%c6%ef%06V%00%0040939-90:head#
2021-03-19T05:41:09.629+0000 7f9d61888700 10 bluestore(/var/lib/ceph/osd/ceph-0) _split_collection 4.3_head to 4.13_head bits 5 = 0
end of the first split
...
collection_list finds the failing object in 4.13 (in KV, not in the cache) while performing PG removal:
2021-03-19T05:41:15.648+0000 7f9d61888700 10 osd.0 pg_epoch: 36 pg[4.13( v 29'126 (29'123,29'126] lb MIN local-lis/les=33/34 n=9 ec=30/20 lis/c=33/33 les/c/f=34/34/0 sis=35
pruub=13.997786522s) [2,3,7] r=-1 lpr=35 DELETING pi=[33,35)/1 crt=29'126 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 49.334480286s@ mbc={}] do_delete_work
2021-03-19T05:41:15.648+0000 7f9d61888700 15 bluestore(/var/lib/ceph/osd/ceph-0) collection_list 4.13_head start GHMIN end GHMAX max 30
...
2021-03-19T05:41:15.648+0000 7f9d61888700 20 bluestore(/var/lib/ceph/osd/ceph-0) _collection_list oid #4:cfed6c85:::gibba02940939-39:head# end GHMAX
...
4.13 removal starts; get_onode doesn't find the object in 4.13's onode cache:
2021-03-19T05:41:15.655+0000 7f9d61888700 20 bluestore(/var/lib/ceph/osd/ceph-0).collection(4.13_head 0x5606fe74b2c0) get_onode oid #4:cfed6c85:::gibba02940939-39:head# key 0x7F8000000000000004CFED6C85'!gibba02940939-39!='0xFFFFFFFFFFFFFFFEFFFFFFFFFFFFFFFF6F
2021-03-19T05:41:15.655+0000 7f9d61888700 20 bluestore(/var/lib/ceph/osd/ceph-0).collection(4.13_head 0x5606fe74b2c0) r 0 v.len 843
2021-03-19T05:41:15.655+0000 7f9d61888700 20 bluestore.OnodeSpace(0x5606fe74b400 in 0x5606ef3bc800) add #4:cfed6c85:::gibba02940939-39:head# 0x5606fddbd680
2021-03-19T05:41:15.655+0000 7f9d61888700 20 _add 0x5606ef3bc800 #4:cfed6c85:::gibba02940939-39:head# added, num=7
2021-03-19T05:41:15.655+0000 7f9d61888700 15 bluestore(/var/lib/ceph/osd/ceph-0) _remove 4.13_head #4:cfed6c85:::gibba02940939-39:head# onode 0x5606fddbd680 txc 0x5606fd7a5180
2021-03-19T05:41:15.655+0000 7f9d61888700 15 bluestore(/var/lib/ceph/osd/ceph-0) _do_truncate 4.13_head #4:cfed6c85:::gibba02940939-39:head# 0x0
....
and finally the oid is removed
2021-03-19T05:41:15.657+0000 7f9d73f1c700 20 bluestore.OnodeSpace(0x5606fe74b400 in 0x5606ef3bc800) _remove #4:cfed6c85:::gibba02940939-39:head#

Then a second split occurs a while later:
2021-03-19T05:44:40.327+0000 7f9d61888700 15 bluestore(/var/lib/ceph/osd/ceph-0) _split_collection 4.3_head to 4.23_head bits 6
...
and the object is found in 4.3 and is not moved to 4.23 (which is OK, since it shouldn't be in 4.3 at this point anyway)
2021-03-19T05:44:40.327+0000 7f9d61888700 20 _pin0x5606ef392000 #4:cfed6c85:::gibba02940939-39:head# pinned
2021-03-19T05:44:40.327+0000 7f9d61888700 20 bluestore(/var/lib/ceph/osd/ceph-0).collection(4.3_head 0x5606efba3a40) split_cache not moving 0x5606efc9f400 #4:cfed6c85:::gibba02940939-39:head#
2021-03-19T05:44:40.327+0000 7f9d61888700 20 _unpin0x5606ef392000 #4:cfed6c85:::gibba02940939-39:head# unpinned
....
then, finally, the object is found in 4.3 while removing that PG:
2021-03-19T05:44:43.412+0000 7f9d61888700 15 bluestore(/var/lib/ceph/osd/ceph-0) _remove_collection 4.3_head
...
2021-03-19T05:44:43.412+0000 7f9d61888700 1 bluestore(/var/lib/ceph/osd/ceph-0) operator() #4:cfed6c85:::gibba02940939-39:head# 0x5606efc9f400 exists in onode_map
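
As promised above, here is a minimal standalone sketch (NOT BlueStore::Collection::split_cache itself; types, names and values are invented for illustration) of the kind of synchronization the analysis says is missing: the thread splitting a shared onode map and a thread removing from it must hold the same lock, otherwise the split's iterator can be invalidated and entries skipped or left behind, much like the stray #4:cfed6c85 onode here:

#include <cstdint>
#include <mutex>
#include <string>
#include <thread>
#include <unordered_map>

struct ToyOnodeSpace {
  std::mutex lock;                                            // guards onode_map
  std::unordered_map<std::string, std::uint32_t> onode_map;   // toy oid -> hash prefix
};

// Move cached entries whose hash falls in [begin, end) to the child space.
// Both caches are locked, so a concurrent remove_onode() cannot invalidate
// the iterator mid-walk; the hypothesis above is that the real code lacks
// exactly this kind of exclusion.
void split_cache(ToyOnodeSpace& parent, ToyOnodeSpace& child,
                 std::uint32_t begin, std::uint32_t end) {
  std::scoped_lock l(parent.lock, child.lock);
  for (auto it = parent.onode_map.begin(); it != parent.onode_map.end();) {
    if (it->second >= begin && it->second < end) {
      child.onode_map.insert(*it);         // "moving"
      it = parent.onode_map.erase(it);     // erase returns the next iterator
    } else {
      ++it;                                // "not moving"
    }
  }
}

void remove_onode(ToyOnodeSpace& space, const std::string& oid) {
  std::scoped_lock l(space.lock);          // must be the same lock as above
  space.onode_map.erase(oid);
}

int main() {
  ToyOnodeSpace parent, child;             // e.g. 4.3_head and a split child
  parent.onode_map = {{"#4:c0db3d37:::toy-276:head#", 0xc0db3d37u},
                      {"#4:cfed6c85:::toy-39:head#", 0xcfed6c85u}};
  // A split and a concurrent remove are only safe because they contend on the
  // same mutex; without it, the walk above runs over a map being mutated.
  std::thread splitter(split_cache, std::ref(parent), std::ref(child),
                       0xc4000000u, 0xc8000000u);
  std::thread remover(remove_onode, std::ref(parent),
                      std::string("#4:c0db3d37:::toy-276:head#"));
  splitter.join();
  remover.join();
  return 0;
}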

Actions #11

Updated by Igor Fedotov about 3 years ago

  • Status changed from New to Fix Under Review
  • Pull request ID set to 40237
Actions #12

Updated by Igor Fedotov about 3 years ago

  • Backport set to pacific, octopus
Actions #13

Updated by Neha Ojha about 3 years ago

  • Assignee changed from Igor Fedotov to Adam Kupczyk
  • Pull request ID changed from 40237 to 40323
Actions #14

Updated by Neha Ojha about 3 years ago

  • Related to Bug #49818: os/bluestore/BlueStore.cc: FAILED ceph_assert(o->pinned) in BlueStore::Collection::split_cache() added
Actions #15

Updated by Neha Ojha about 3 years ago

  • Status changed from Fix Under Review to Pending Backport
Actions #16

Updated by Backport Bot about 3 years ago

  • Copied to Backport #49989: pacific: _txc_add_transaction error (39) Directory not empty not handled on operation 21 (op 1, counting from 0) added
Actions #17

Updated by Backport Bot about 3 years ago

  • Copied to Backport #49990: octopus: _txc_add_transaction error (39) Directory not empty not handled on operation 21 (op 1, counting from 0) added
Actions #18

Updated by Loïc Dachary about 3 years ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
