Bug #49900 (closed)
_txc_add_transaction error (39) Directory not empty not handled on operation 21 (op 1, counting from 0)
% Done: 0%
Source:
Tags:
Backport: pacific, octopus
Regression: No
Severity: 3 - minor
Reviewed:
Description
2021-03-19T05:44:43.412+0000 7f9d61888700 10 bluestore(/var/lib/ceph/osd/ceph-0) _remove_collection 4.3_head = -39
2021-03-19T05:44:43.412+0000 7f9d61888700 -1 bluestore(/var/lib/ceph/osd/ceph-0) _txc_add_transaction error (39) Directory not empty not handled on operation 21 (op 1, counting from 0)
2021-03-19T05:44:43.412+0000 7f9d61888700  0 _dump_transaction transaction dump:
{
    "ops": [
        {
            "op_num": 0,
            "op_name": "remove",
            "collection": "4.3_head",
            "oid": "#4:c0000000::::head#"
        },
        {
            "op_num": 1,
            "op_name": "rmcoll",
            "collection": "4.3_head"
        }
    ]
}
2021-03-19T05:44:43.416+0000 7f9d61888700 -1 /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-2096-g0d3df9b9/rpm/el8/BUILD/ceph-17.0.0-2096-g0d3df9b9/src/os/bluestore/BlueStore.cc: In function 'void BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)' thread 7f9d61888700 time 2021-03-19T05:44:43.413041+0000
/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-2096-g0d3df9b9/rpm/el8/BUILD/ceph-17.0.0-2096-g0d3df9b9/src/os/bluestore/BlueStore.cc: 12820: ceph_abort_msg("unexpected error")

ceph version 17.0.0-2096-g0d3df9b9 (0d3df9b9599b6294f204ae84913a36061e78fb76) quincy (dev)
 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xe5) [0x5606eadfd10e]
 2: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ceph::os::Transaction*)+0xc9c) [0x5606eb43bfbc]
 3: (BlueStore::queue_transactions(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, std::vector<ceph::os::Transaction, std::allocator<ceph::os::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x316) [0x5606eb43e6d6]
 4: (ObjectStore::queue_transaction(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, ceph::os::Transaction&&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x85) [0x5606eaf67a75]
 5: (PG::do_delete_work(ceph::os::Transaction&, ghobject_t)+0xd1b) [0x5606eafb685b]
 6: (PeeringState::Deleting::react(PeeringState::DeleteSome const&)+0x108) [0x5606eb16fb48]
 7: (boost::statechart::simple_state<PeeringState::Deleting, PeeringState::ToDelete, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0xe5) [0x5606eb1d4435]
 8: (boost::statechart::state_machine<PeeringState::PeeringMachine, PeeringState::Initial, std::allocator<boost::statechart::none>, boost::statechart::null_exception_translator>::process_event(boost::statechart::event_base const&)+0x5b) [0x5606eafbdbbb]
 9: (PG::do_peering_event(std::shared_ptr<PGPeeringEvent>, PeeringCtx&)+0x2d1) [0x5606eafb2541]
 10: (OSD::dequeue_peering_evt(OSDShard*, PG*, std::shared_ptr<PGPeeringEvent>, ThreadPool::TPHandle&)+0x175) [0x5606eaf2c645]
 11: (OSD::dequeue_delete(OSDShard*, PG*, unsigned int, ThreadPool::TPHandle&)+0xc8) [0x5606eaf2c9a8]
 12: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xa58) [0x5606eaf1f568]
 13: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5c4) [0x5606eb580f34]
 14: (ShardedThreadPool::WorkThreadSharded::entry()+0x14) [0x5606eb583bd4]
 15: (Thread::_entry_func(void*)+0xd) [0x5606eb572a3d]
 16: /lib64/libpthread.so.0(+0x814a) [0x7f9d83fc114a]
 17: clone()
/a/nojha-2021-03-18_23:17:30-rados:thrash-master-distro-basic-gibba/5978595/
We fixed something similar in https://tracker.ceph.com/issues/38724, but this appears to be a different root cause this time.