Ceph : Issues
https://tracker.ceph.com/
2024-03-07T08:26:12Z
Ceph
Redmine
crimson - Bug #64782 (New): test_python.sh TestIoctx.test_locator fails in case of SeaStore
https://tracker.ceph.com/issues/64782
2024-03-07T08:26:12Z
Xuehan Xu
xxhdx1985126@gmail.com
<pre>
=================================== FAILURES ===================================
____________________________ TestIoctx.test_locator ____________________________
self = <test_rados.TestIoctx object at 0x2b50b6e1a438>
def test_locator(self):
self.ioctx.set_locator_key("bar")
self.ioctx.write('foo', b'contents1')
objects = [i for i in self.ioctx.list_objects()]
eq(len(objects), 1)
eq(self.ioctx.get_locator_key(), "bar")
self.ioctx.set_locator_key("")
objects[0].seek(0)
objects[0].write(b"contents2")
eq(self.ioctx.get_locator_key(), "")
self.ioctx.set_locator_key("bar")
contents = self.ioctx.read("foo")
> eq(contents, b"contents2")
../../../clone.client.0/src/test/pybind/test_rados.py:729:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = b'contents1', b = b'contents2'
def assert_equal(a, b):
> assert a == b
E AssertionError
../../../clone.client.0/src/test/pybind/assertions.py:2: AssertionError
</pre>
<p>This seems to be because SeaStore's onode tree doesn't persist objects' locator keys.</p>
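<p>The suspected failure mode can be shown with a toy model (plain Python, not the rados API; all names here are invented for illustration): if the backend keys objects by (name, locator) on write but loses the stored locator when listing, a rewrite through the listed handle lands on a different key, and a later read with the original locator still returns the old contents.</p>

```python
# Toy model of the suspected bug (not the rados API; all names invented).
# Objects are keyed by (name, locator). If listing drops the stored
# locator, a rewrite through the listed handle targets a different key,
# mirroring the failed eq(contents, b"contents2") above.

class ToyStore:
    def __init__(self):
        self.data = {}  # (name, locator) -> bytes

    def write(self, name, locator, payload):
        self.data[(name, locator)] = payload

    def read(self, name, locator):
        return self.data.get((name, locator))

    def list_objects(self, keep_locator):
        # A correct backend remembers each object's locator; a broken one
        # (the suspected SeaStore behaviour) returns "" for every object.
        return [(n, loc if keep_locator else "") for (n, loc) in self.data]

def rewrite_via_listing(keep_locator):
    s = ToyStore()
    s.write("foo", "bar", b"contents1")
    name, loc = s.list_objects(keep_locator)[0]
    s.write(name, loc, b"contents2")      # like objects[0].write(...)
    return s.read("foo", "bar")           # read back with locator "bar"

assert rewrite_via_listing(True) == b"contents2"   # locator preserved
assert rewrite_via_listing(False) == b"contents1"  # locator lost: stale read
```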
crimson - Bug #64680 (New): transaction_manager_test/tm_random_block_device_test_t.scatter_alloca...
https://tracker.ceph.com/issues/64680
2024-03-04T08:03:24Z
Rongqi Sun
<p>ERROR seastore_cleaner - RBMSpaceTracker::equals: block addr 133074944 mismatch other used: false</p>
<p>Possibly caused by the multiple-extent allocation feature?</p>
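<p>The shape of the failing check can be sketched as follows (hypothetical structure, not the actual RBMSpaceTracker code): an equals-style comparison walks two allocation maps and reports the first block where one side considers the block used and the other does not, which is what the log line above is printing.</p>

```python
# Sketch of an RBMSpaceTracker::equals-style comparison (hypothetical
# structure, not the real implementation). Each tracker is modelled as
# the set of block addresses it considers allocated.

def tracker_equals(a_used, b_used):
    for addr in sorted(set(a_used) | set(b_used)):
        if (addr in a_used) != (addr in b_used):
            # Matches the flavour of the reported error line.
            print(f"block addr {addr} mismatch other used: {addr in b_used}")
            return False
    return True

assert tracker_equals({0, 4096}, {0, 4096}) is True
assert tracker_equals({0, 133074944}, {0}) is False  # one side says used
```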
crimson - Bug #64556 (New): crimson osd crashes when got an empty omap header
https://tracker.ceph.com/issues/64556
2024-02-25T04:37:54Z
Xuehan Xu
xxhdx1985126@gmail.com
<pre>
DEBUG 2024-02-23 11:21:17,031 [shard 0:main] osd - pg_epoch 731 pg[2.0( v 10'97 (0'0,10'97] local-lis/les=9/10 n=2 ec=9/9 lis/c=9/9 les/c/f=10/10/0 sis=9) [2,1] r=0 lpr=9 luod=10'98 lua=0'0 crt=10'98 mlcod 10'97 active+clean+scrubbing+deep ScrubScan::deep_scan_object: op: scrub_scan(id=1451078, detail=(deep=1, local=1, begin=MIN, end=MAX)), obj: #2:1b0e96ae:devicehealth::main.db-journal.0000000000000000:head#, progress: obj_scrub_progress_t(offset: <null>, header_done: false, next_key: <null>, keys_done: false) scanning omap header
DEBUG 2024-02-23 11:21:17,031 [shard 0:main] seastore - SeaStore::get_attr: 2.0_head #2:1b0e96ae:devicehealth::main.db-journal.0000000000000000:head#
DEBUG 2024-02-23 11:21:17,031 [shard 0:main] seastore_t - 0x3000030dde00 trans.70714 Cache::create_transaction: created name=get_attr, source=READ, is_weak=false
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_onode - 0x3000030dde00 trans.70714 OTree::Seastore: get root ...
DEBUG 2024-02-23 11:21:17,031 [shard 0:main] seastore_cache - 0x3000030dde00 trans.70714 Cache::get_root: root not on t -- CachedExtent(addr=0x300001fe4200, type=ROOT, version=648, dirty_from_or_retired_at=jseq(sseq(6), paddr<Seg[Dev(0),30],27127808>), modify_time=tp(2024-02-23 11:21:08), paddr=paddr<Dev(ROOT),0>, prior_paddr=nullopt, length=0, state=DIRTY, last_committed_crc=0, refcount=4, user_hint=Hint(NULL), fully_loaded=1, rewrite_gen=GEN_NULL, root_block(lba_root_node=0x30000c3c7a00, backref_root_node=0x30000b693200))
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_tm - 0x3000030dde00 trans.70714 TransactionManager::read_onode_root: 18446539312207167488
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_onode - 0x3000030dde00 trans.70714 OTree::Seastore: got root 0xffff45c53f630000
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_onode - 0x3000030dde00 trans.70714 OTree::Node::load_root: loading root_addr=ffff45c53f630000 ...
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_onode - 0x3000030dde00 trans.70714 OTree::Seastore: reading at 0xffff45c53f630000 ...
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_tm - 0x3000030dde00 trans.70714 TransactionManager::read_extent: 18446539312207167488
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_tm - 0x3000030dde00 trans.70714 TransactionManager::get_pin: 18446539312207167488
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_lba - 0x3000030dde00 trans.70714 BtreeLBAManager::get_mapping: 18446539312207167488
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_lba - 0x3000030dde00 trans.70714 BtreeLBAManager::_get_mapping: 18446539312207167488
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_cache - 0x3000030dde00 trans.70714 Cache::get_root: root already on t -- CachedExtent(addr=0x300001fe4200, type=ROOT, version=648, dirty_from_or_retired_at=jseq(sseq(6), paddr<Seg[Dev(0),30],27127808>), modify_time=tp(2024-02-23 11:21:08), paddr=paddr<Dev(ROOT),0>, prior_paddr=nullopt, length=0, state=DIRTY, last_committed_crc=0, refcount=6, user_hint=Hint(NULL), fully_loaded=1, rewrite_gen=GEN_NULL, root_block(lba_root_node=0x30000c3c7a00, backref_root_node=0x30000b693200))
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_lba - 0x3000030dde00 trans.70714 BtreeLBAManager::_get_mapping: 18446539312207167488 got 18446539312207167488, lba_map_val_t(paddr<Seg[Dev(0),28],37081088>~8192, refcount=1, checksum=0)
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_onode - 0x3000030dde00 trans.70714 OTree::Seastore: read 8192B at 0xffff45c53f630000 -- CachedExtent(addr=0x30000893a300, type=ONODE_BLOCK_STAGED, version=0, dirty_from_or_retired_at=JOURNAL_SEQ_NULL, modify_time=tp(NULL), paddr=paddr<Seg[Dev(0),28],37081088>, prior_paddr=nullopt, length=8192, state=CLEAN, last_committed_crc=2507329540, refcount=3, user_hint=Hint(NULL), fully_loaded=1, rewrite_gen=GEN_NULLparent_tracker=0x30000a1f0ad0, parent=0x30000be04c00, laddr=18446539312207167488, fltree_header=headerI0(is_level_tail=1, level=2))
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_onode - 0x3000030dde00 trans.70714 OTree::Node::load_root: loaded NodeI0@0xffff45c53f630000+2000Lv2$
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_onode - 0x3000030dde00 trans.70714 OTree::InternalNode::get_or_track_child: loading child at pos(0, 0, 0) addr=ffff30f0e9260000 ...
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_onode - 0x3000030dde00 trans.70714 OTree::Seastore: reading at 0xffff30f0e9260000 ...
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_tm - 0x3000030dde00 trans.70714 TransactionManager::read_extent: 18446516409994706944
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_tm - 0x3000030dde00 trans.70714 TransactionManager::get_pin: 18446516409994706944
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_lba - 0x3000030dde00 trans.70714 BtreeLBAManager::get_mapping: 18446516409994706944
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_lba - 0x3000030dde00 trans.70714 BtreeLBAManager::_get_mapping: 18446516409994706944
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_cache - 0x3000030dde00 trans.70714 Cache::get_root: root already on t -- CachedExtent(addr=0x300001fe4200, type=ROOT, version=648, dirty_from_or_retired_at=jseq(sseq(6), paddr<Seg[Dev(0),30],27127808>), modify_time=tp(2024-02-23 11:21:08), paddr=paddr<Dev(ROOT),0>, prior_paddr=nullopt, length=0, state=DIRTY, last_committed_crc=0, refcount=6, user_hint=Hint(NULL), fully_loaded=1, rewrite_gen=GEN_NULL, root_block(lba_root_node=0x30000c3c7a00, backref_root_node=0x30000b693200))
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_lba - 0x3000030dde00 trans.70714 BtreeLBAManager::_get_mapping: 18446516409994706944 got 18446516409994706944, lba_map_val_t(paddr<Seg[Dev(0),29],1667072>~8192, refcount=1, checksum=0)
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_onode - 0x3000030dde00 trans.70714 OTree::Seastore: read 8192B at 0xffff30f0e9260000 -- CachedExtent(addr=0x300008c8fe00, type=ONODE_BLOCK_STAGED, version=14, dirty_from_or_retired_at=jseq(sseq(5), paddr<Seg[Dev(0),29],2424832>), modify_time=tp(2024-02-23 11:18:28), paddr=paddr<Seg[Dev(0),29],1667072>, prior_paddr=nullopt, length=8192, state=DIRTY, last_committed_crc=1043794758, refcount=3, user_hint=Hint(NULL), fully_loaded=1, rewrite_gen=GEN_NULLparent_tracker=0x30000be71080, parent=0x30000cbe2c00, laddr=18446516409994706944, fltree_header=headerI0(is_level_tail=0, level=1))
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_onode - 0x3000030dde00 trans.70714 OTree::InternalNode::get_or_track_child: loaded child untracked NodeI0@0xffff30f0e9260000+2000Lv1
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_onode - 0x3000030dde00 trans.70714 OTree::InternalNode::get_or_track_child: loading child at pos(26, 0, 0) addr=ff03980000004000 ...
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_onode - 0x3000030dde00 trans.70714 OTree::Seastore: reading at 0xff03980000004000 ...
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_tm - 0x3000030dde00 trans.70714 TransactionManager::read_extent: 18375698030369193984
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_tm - 0x3000030dde00 trans.70714 TransactionManager::get_pin: 18375698030369193984
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_lba - 0x3000030dde00 trans.70714 BtreeLBAManager::get_mapping: 18375698030369193984
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_lba - 0x3000030dde00 trans.70714 BtreeLBAManager::_get_mapping: 18375698030369193984
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_cache - 0x3000030dde00 trans.70714 Cache::get_root: root already on t -- CachedExtent(addr=0x300001fe4200, type=ROOT, version=648, dirty_from_or_retired_at=jseq(sseq(6), paddr<Seg[Dev(0),30],27127808>), modify_time=tp(2024-02-23 11:21:08), paddr=paddr<Dev(ROOT),0>, prior_paddr=nullopt, length=0, state=DIRTY, last_committed_crc=0, refcount=6, user_hint=Hint(NULL), fully_loaded=1, rewrite_gen=GEN_NULL, root_block(lba_root_node=0x30000c3c7a00, backref_root_node=0x30000b693200))
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_lba - 0x3000030dde00 trans.70714 BtreeLBAManager::_get_mapping: 18375698030369193984 got 18375698030369193984, lba_map_val_t(paddr<Seg[Dev(0),29],32833536>~16384, refcount=1, checksum=0)
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_onode - 0x3000030dde00 trans.70714 OTree::Seastore: read 16384B at 0xff03980000004000 -- CachedExtent(addr=0x30000683e600, type=ONODE_BLOCK_STAGED, version=0, dirty_from_or_retired_at=JOURNAL_SEQ_NULL, modify_time=tp(NULL), paddr=paddr<Seg[Dev(0),29],32833536>, prior_paddr=nullopt, length=16384, state=CLEAN, last_committed_crc=1513623895, refcount=3, user_hint=Hint(NULL), fully_loaded=1, rewrite_gen=GEN_NULLparent_tracker=0x300002dd7970, parent=0x30000a8cac00, laddr=18375698030369193984, fltree_header=headerL0(is_level_tail=0, level=0))
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_onode - 0x3000030dde00 trans.70714 OTree::InternalNode::get_or_track_child: loaded child untracked NodeL0@0xff03980000004000+4000Lv0
DEBUG 2024-02-23 11:21:17,031 [shard 0:main] seastore_omap - 0x3000030dde00 trans.70714 BtreeOMapManager::omap_get_value: key=omap_header
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_tm - 0x3000030dde00 trans.70714 TransactionManager::read_extent: 18375279179119321088~8192
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_tm - 0x3000030dde00 trans.70714 TransactionManager::get_pin: 18375279179119321088
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_lba - 0x3000030dde00 trans.70714 BtreeLBAManager::get_mapping: 18375279179119321088
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_lba - 0x3000030dde00 trans.70714 BtreeLBAManager::_get_mapping: 18375279179119321088
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_cache - 0x3000030dde00 trans.70714 Cache::get_root: root already on t -- CachedExtent(addr=0x300001fe4200, type=ROOT, version=648, dirty_from_or_retired_at=jseq(sseq(6), paddr<Seg[Dev(0),30],27127808>), modify_time=tp(2024-02-23 11:21:08), paddr=paddr<Dev(ROOT),0>, prior_paddr=nullopt, length=0, state=DIRTY, last_committed_crc=0, refcount=6, user_hint=Hint(NULL), fully_loaded=1, rewrite_gen=GEN_NULL, root_block(lba_root_node=0x30000c3c7a00, backref_root_node=0x30000b693200))
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_lba - 0x3000030dde00 trans.70714 BtreeLBAManager::_get_mapping: 18375279179119321088 got 18375279179119321088, lba_map_val_t(paddr<Seg[Dev(0),29],233472>~8192, refcount=1, checksum=0)
DEBUG 2024-02-23 11:21:17,031 [shard 0:main] seastore_omap - 0x3000030dde00 trans.70714 OMapLeafNode::get_value: key = omap_header, this: CachedExtent(addr=0x300003bd1680, type=OMAP_LEAF, version=0, dirty_from_or_retired_at=JOURNAL_SEQ_NULL, modify_time=tp(NULL), paddr=paddr<Seg[Dev(0),29],233472>, prior_paddr=nullopt, length=8192, state=CLEAN, last_committed_crc=2781592532, refcount=3, user_hint=Hint(NULL), fully_loaded=1, rewrite_gen=GEN_NULLparent_tracker=0x300002dd7970, parent=0x30000a8cac00, laddr=18375279179119321088, size=8, depth=1)
TRACE 2024-02-23 11:21:17,031 [shard 0:main] seastore_t - 0x3000030dde00 trans.70714 Cache::on_transaction_destruct:
DEBUG 2024-02-23 11:21:17,031 [shard 0:main] seastore_t - 0x3000030dde00 trans.70714 Cache::on_transaction_destruct: done 9(57344B) read
Segmentation fault on shard 0.
Backtrace:
Reactor stalled for 95 ms on shard 0. Backtrace: 0x2a47bfe 0x2a02de0 0x2a03346 0x2a03527 0x12cdf 0xfa877 0x194b4dd 0x194b6f1 0x194bb98 0x194c3df 0x194a1a0 0x194a293 0x194b029 0x12cdf 0x140b699 0x140bdc9 0x29fed47 0x29ff0b0 0x2a38a8d 0x2a395ec 0x299a55b 0x299ac9b 0xff6556 0x3aca2 0x104e09d
kernel callstack: 0xffffffffffffff80 0xffffffff9600c9a7 0xffffffff958e0c59 0xffffffff958e182c 0xffffffff958e4abd 0xffffffff958e8aa2 0xffffffff958e8cea 0xffffffff956828e0 0xffffffff95682b67 0xffffffff962011ae
0# 0x000055EC2F6F569A in ceph-osd
1# 0x000055EC2F6F5DCA in ceph-osd
2# 0x000055EC30CE8D48 in ceph-osd
3# 0x000055EC30CE90B1 in ceph-osd
4# 0x000055EC30D22A8E in ceph-osd
5# 0x000055EC30D235ED in ceph-osd
6# 0x000055EC30C8455C in ceph-osd
7# 0x000055EC30C84C9C in ceph-osd
8# main in ceph-osd
9# __libc_start_main in /lib64/libc.so.6
10# _start in ceph-osd
Dump of siginfo:
si_signo: 11
si_errno: 0
si_code: 1
si_pid: 8
si_uid: 0
si_status: 0
si_utime: 0
si_stime: 0
si_int: 0
si_ptr: 0
si_overrun: 0
si_timerid: 8
si_addr: 0x8
si_band: 8
si_fd: 0
si_addr_lsb: 0
</pre>
crimson - Bug #64535 (New): crimson osd crashes during crimson-rados-experimental teuthology tests
https://tracker.ceph.com/issues/64535
2024-02-22T09:03:05Z
Xuehan Xu
xxhdx1985126@gmail.com
<pre>
DEBUG 2024-02-22 08:39:40,036 [shard 0:main] seastore - 0x300003146400 trans.40333 SeaStore::list_objects: pstart #2:40000000::::0#0, pend #2:60000000::::0#0, limit 18446744073709551615
DEBUG 2024-02-22 08:39:40,036 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 FLTreeOnodeManager::list_onodes: start #2:40000000::::0#0, end #2:60000000::::0#0, limit 18446744073709551615
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 OTree::Seastore: get root ...
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_cache - 0x300003146400 trans.40333 Cache::get_root: root already on t -- CachedExtent(addr=0x3000034f5a00, type=ROOT, version=231, dirty_from_or_retired_at=jseq(sseq(5), paddr<Seg[Dev(0),29],10010624>), modify_time=tp(2024-02-22 08:39:35), paddr=paddr<Dev(ROOT),0>, prior_paddr=nullopt, length=0, state=DIRTY, last_committed_crc=0, refcount=6, user_hint=Hint(NULL), fully_loaded=1, rewrite_gen=GEN_NULL, root_block(lba_root_node=0x30000910c200, backref_root_node=0x30000a555200))
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_tm - 0x300003146400 trans.40333 TransactionManager::read_onode_root: 18446520859580825600
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 OTree::Seastore: got root 0xffff34fce9260000
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 OTree::Node::load_root: loading root_addr=ffff34fce9260000 ...
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 OTree::Seastore: reading at 0xffff34fce9260000 ...
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_tm - 0x300003146400 trans.40333 TransactionManager::read_extent: 18446520859580825600
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_tm - 0x300003146400 trans.40333 TransactionManager::get_pin: 18446520859580825600
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_lba - 0x300003146400 trans.40333 BtreeLBAManager::get_mapping: 18446520859580825600
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_lba - 0x300003146400 trans.40333 BtreeLBAManager::_get_mapping: 18446520859580825600
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_cache - 0x300003146400 trans.40333 Cache::get_root: root already on t -- CachedExtent(addr=0x3000034f5a00, type=ROOT, version=231, dirty_from_or_retired_at=jseq(sseq(5), paddr<Seg[Dev(0),29],10010624>), modify_time=tp(2024-02-22 08:39:35), paddr=paddr<Dev(ROOT),0>, prior_paddr=nullopt, length=0, state=DIRTY, last_committed_crc=0, refcount=6, user_hint=Hint(NULL), fully_loaded=1, rewrite_gen=GEN_NULL, root_block(lba_root_node=0x30000910c200, backref_root_node=0x30000a555200))
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_lba - 0x300003146400 trans.40333 BtreeLBAManager::_get_mapping: 18446520859580825600 got 18446520859580825600, lba_map_val_t(paddr<Seg[Dev(0),28],23965696>~8192, refcount=1, checksum=0)
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 OTree::Seastore: read 8192B at 0xffff34fce9260000 -- CachedExtent(addr=0x300003d22780, type=ONODE_BLOCK_STAGED, version=0, dirty_from_or_retired_at=JOURNAL_SEQ_NULL, modify_time=tp(NULL), paddr=paddr<Seg[Dev(0),28],23965696>, prior_paddr=nullopt, length=8192, state=CLEAN, last_committed_crc=3163152008, refcount=3, user_hint=Hint(NULL), fully_loaded=1, rewrite_gen=GEN_NULLparent_tracker=0x3000098c41c0, parent=0x300009e00400, laddr=18446520859580825600, fltree_header=headerI0(is_level_tail=1, level=2))
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 OTree::Node::load_root: loaded NodeI0@0xffff34fce9260000+2000Lv2$
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 OTree::InternalNode::get_or_track_child: loading child at pos(0, 0, 0) addr=ffff30f0e9260000 ...
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 OTree::Seastore: reading at 0xffff30f0e9260000 ...
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_tm - 0x300003146400 trans.40333 TransactionManager::read_extent: 18446516409994706944
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_tm - 0x300003146400 trans.40333 TransactionManager::get_pin: 18446516409994706944
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_lba - 0x300003146400 trans.40333 BtreeLBAManager::get_mapping: 18446516409994706944
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_lba - 0x300003146400 trans.40333 BtreeLBAManager::_get_mapping: 18446516409994706944
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_cache - 0x300003146400 trans.40333 Cache::get_root: root already on t -- CachedExtent(addr=0x3000034f5a00, type=ROOT, version=231, dirty_from_or_retired_at=jseq(sseq(5), paddr<Seg[Dev(0),29],10010624>), modify_time=tp(2024-02-22 08:39:35), paddr=paddr<Dev(ROOT),0>, prior_paddr=nullopt, length=0, state=DIRTY, last_committed_crc=0, refcount=6, user_hint=Hint(NULL), fully_loaded=1, rewrite_gen=GEN_NULL, root_block(lba_root_node=0x30000910c200, backref_root_node=0x30000a555200))
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_lba - 0x300003146400 trans.40333 BtreeLBAManager::_get_mapping: 18446516409994706944 got 18446516409994706944, lba_map_val_t(paddr<Seg[Dev(0),29],978944>~8192, refcount=1, checksum=0)
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 OTree::Seastore: read 8192B at 0xffff30f0e9260000 -- CachedExtent(addr=0x300005c63680, type=ONODE_BLOCK_STAGED, version=3, dirty_from_or_retired_at=jseq(sseq(5), paddr<Seg[Dev(0),29],1961984>), modify_time=tp(2024-02-22 08:39:08), paddr=paddr<Seg[Dev(0),29],978944>, prior_paddr=nullopt, length=8192, state=DIRTY, last_committed_crc=67784944, refcount=3, user_hint=Hint(NULL), fully_loaded=1, rewrite_gen=GEN_NULLparent_tracker=0x3000098c41c0, parent=0x300009e00400, laddr=18446516409994706944, fltree_header=headerI0(is_level_tail=0, level=1))
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 OTree::InternalNode::get_or_track_child: loaded child untracked NodeI0@0xffff30f0e9260000+2000Lv1
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 OTree::InternalNode::get_or_track_child: loading child at pos(11, 0, 0) addr=ff02e00000004000 ...
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 OTree::Seastore: reading at 0xff02e00000004000 ...
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_tm - 0x300003146400 trans.40333 TransactionManager::read_extent: 18375495720229683200
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_tm - 0x300003146400 trans.40333 TransactionManager::get_pin: 18375495720229683200
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_lba - 0x300003146400 trans.40333 BtreeLBAManager::get_mapping: 18375495720229683200
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_lba - 0x300003146400 trans.40333 BtreeLBAManager::_get_mapping: 18375495720229683200
TRACE 2024-02-22 08:39:40,036 [shard 0:main] seastore_cache - 0x300003146400 trans.40333 Cache::get_root: root already on t -- CachedExtent(addr=0x3000034f5a00, type=ROOT, version=231, dirty_from_or_retired_at=jseq(sseq(5), paddr<Seg[Dev(0),29],10010624>), modify_time=tp(2024-02-22 08:39:35), paddr=paddr<Dev(ROOT),0>, prior_paddr=nullopt, length=0, state=DIRTY, last_committed_crc=0, refcount=6, user_hint=Hint(NULL), fully_loaded=1, rewrite_gen=GEN_NULL, root_block(lba_root_node=0x30000910c200, backref_root_node=0x30000a555200))
TRACE 2024-02-22 08:39:40,037 [shard 0:main] seastore_lba - 0x300003146400 trans.40333 BtreeLBAManager::_get_mapping: 18375495720229683200 got 18375495720229683200, lba_map_val_t(paddr<Seg[Dev(0),21],44466176>~16384, refcount=1, checksum=0)
TRACE 2024-02-22 08:39:40,037 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 OTree::Seastore: read 16384B at 0xff02e00000004000 -- CachedExtent(addr=0x300000256300, type=ONODE_BLOCK_STAGED, version=2, dirty_from_or_retired_at=jseq(sseq(3), paddr<Seg[Dev(0),21],62865408>), modify_time=tp(2024-02-22 08:10:02), paddr=paddr<Seg[Dev(0),21],44466176>, prior_paddr=nullopt, length=16384, state=DIRTY, last_committed_crc=2728444179, refcount=3, user_hint=Hint(NULL), fully_loaded=1, rewrite_gen=GEN_NULLparent_tracker=0x300007d8a3e0, parent=0x30000a190e00, laddr=18375495720229683200, fltree_header=headerL0(is_level_tail=0, level=0))
TRACE 2024-02-22 08:39:40,037 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 OTree::InternalNode::get_or_track_child: loaded child untracked NodeL0@0xff02e00000004000+4000Lv0
DEBUG 2024-02-22 08:39:40,037 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 FLTreeOnodeManager::list_onodes: found onode for #2:40000000::::head#
DEBUG 2024-02-22 08:39:40,037 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 FLTreeOnodeManager::list_onodes: found onode for #2:40000000:::snapmapper:0#
DEBUG 2024-02-22 08:39:40,037 [shard 0:main] seastore_onode - 0x300003146400 trans.40333 FLTreeOnodeManager::list_onodes: reached the end #2:60000000::::head# > #2:60000000::::0#0
DEBUG 2024-02-22 08:39:40,037 [shard 0:main] seastore - 0x300003146400 trans.40333 SeaStore::list_objects: got 2 objects, left limit 18446744073709551613
ceph-osd: /da1/xxh/rpmbuild/BUILD/ceph-19.0.0-1586-g7df0cc68304/src/crimson/os/seastore/seastore.cc:731: crimson::os::seastore::SeaStore::Shard::list_objects(crimson::os::FuturizedStore::Shard::CollectionRef, const ghobject_t&, const ghobject_t&, uint64_t) const::<lambda(auto:208&, auto:209&)>::<lambda()>::<lambda(auto:210&)>::<lambda(auto:211)>::<lambda()> mutable::<lambda(auto:212&&)> mutable [with auto:212 = std::tuple<std::vector<ghobject_t, std::allocator<ghobject_t> >, ghobject_t>]: Assertion `limit == 0 || std::get<1>(ret) == pend || std::get<1>(ret) == ghobject_t::get_max()' failed.
</pre>
<p>The crash is caused by scrubs.</p>
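<p>The failed assertion encodes a listing invariant, which can be sketched like this (simplified Python, illustrative names only): a ranged listing may return a mid-range cursor only when it has exhausted its limit; otherwise the returned cursor must equal the requested end (or ghobject_t's global maximum).</p>

```python
# Sketch of the invariant asserted in SeaStore::list_objects (simplified;
# names are illustrative, not the real types). Objects are plain sorted
# strings; the listing covers [begin, end).

GHOBJECT_MAX = "\uffff"  # stand-in for ghobject_t::get_max()

def list_range(objects, begin, end, limit):
    """Return (listed, next_cursor) for sorted `objects` in [begin, end)."""
    listed = []
    for o in objects:
        if o < begin or o >= end:
            continue
        if limit == 0:
            return listed, o      # limit exhausted: mid-range cursor is legal
        listed.append(o)
        limit -= 1
    return listed, end            # otherwise the cursor must be `end`

def invariant_holds(limit_left, cursor, pend):
    # The C++ assertion: limit == 0 || cursor == pend || cursor == get_max()
    return limit_left == 0 or cursor == pend or cursor == GHOBJECT_MAX

listed, cursor = list_range(["a", "b", "c", "d"], "a", "c", limit=10)
assert listed == ["a", "b"]
assert invariant_holds(10 - len(listed), cursor, "c")
```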
crimson - Bug #64282 (Fix Under Review): osd crashes due to unexpected pg creation
https://tracker.ceph.com/issues/64282
2024-02-01T05:23:58Z
Xuehan Xu
xxhdx1985126@gmail.com
<pre>
DEBUG 2024-01-30 05:30:06,943 [shard 2] osd - ShardServices::dispatch_context_transaction: empty transaction
DEBUG 2024-01-30 05:30:06,943 [shard 2] osd - peering_event(id=33554908, detail=PeeringEvent(from=0 pgid=2.2 sent=16 requested=16 evt=epoch_sent: 16 epoch_requested: 16 RenewLease)): exit
DEBUG 2024-01-30 05:30:06,943 [shard 2] osd - 0x0 LocalPeeringEvent::start: peering_event(id=33554908, detail=PeeringEvent(from=0 pgid=2.2 sent=16 requested=16 evt=epoch_sent: 16 epoch_requested: 16 RenewLease)): complete
INFO 2024-01-30 05:30:07,428 [shard 0] prioritycache - prioritycache tune_memory target: 4294967296 mapped: 15130624 unmapped: 729088 heap: 15859712 old mem: 2845415832 new mem: 2845415832
INFO 2024-01-30 05:30:07,651 [shard 0] alienstore - stat
DEBUG 2024-01-30 05:30:07,695 [shard 2] osd - pg_advance_map(id=33554905, detail=PGAdvanceMap(pg=6.7 from=100 to=112)): advancing map to 102
DEBUG 2024-01-30 05:30:07,695 [shard 2] osd - pg_epoch 101 pg[6.7( empty local-lis/les=0/0 n=0 ec=73/73 lis/c=0/0 les/c/f=0/0/0 sis=100) [] r=-1 lpr=100 pi=[73,100)/1 crt=0'0 mlcod 0'0 unknown NOTIFY PeeringState::advance_map handle_advance_map {}/{} -- -1/-1
DEBUG 2024-01-30 05:30:07,695 [shard 2] osd - pg_epoch 102 pg[6.7( empty local-lis/les=0/0 n=0 ec=73/73 lis/c=0/0 les/c/f=0/0/0 sis=100) [] r=-1 lpr=100 pi=[73,100)/1 crt=0'0 mlcod 0'0 unknown NOTIFY state<Started>: Started advmap
DEBUG 2024-01-30 05:30:07,695 [shard 2] osd - pg_epoch 102 pg[6.7( empty local-lis/les=0/0 n=0 ec=73/73 lis/c=0/0 les/c/f=0/0/0 sis=100) [] r=-1 lpr=100 pi=[73,100)/1 crt=0'0 mlcod 0'0 unknown NOTIFY check_recovery_sources no source osds () went down
DEBUG 2024-01-30 05:30:07,695 [shard 2] osd - pg_advance_map(id=33554905, detail=PGAdvanceMap(pg=6.7 from=100 to=112)): start: getting map 103
DEBUG 2024-01-30 05:30:07,695 [shard 0] osd - get_local_map loading osdmap.103 from disk
INFO 2024-01-30 05:30:07,695 [shard 0] osd - load_map osdmap.103
INFO 2024-01-30 05:30:07,695 [shard 0] osd - load_map osdmap.103
INFO 2024-01-30 05:30:07,695 [shard 2] osd - pg_epoch 88 pg[5.d( DNE empty local-lis/les=0/0 n=0 ec=0/0 lis/c=0/0 les/c/f=0/0/0 sis=0) [] r=-1 lpr=0 crt=0'0 mlcod 0'0 unknown enter Initial
DEBUG 2024-01-30 05:30:07,695 [shard 0] osd - load_map_bl loading osdmap.103 from disk
INFO 2024-01-30 05:30:07,695 [shard 2] osd - Entering state: Initial
DEBUG 2024-01-30 05:30:07,695 [shard 2] osd - snap_mapper.reset_prefix_itr::from <0> to <CEPH_NOSNAP> ::update_bits
DEBUG 2024-01-30 05:30:07,695 [shard 2] osd - pg_epoch 88 pg[5.d( DNE empty local-lis/les=0/0 n=0 ec=0/0 lis/c=0/0 les/c/f=0/0/0 sis=0) [] r=-1 lpr=0 crt=0'0 mlcod 0'0 unknown ScrubState::ScrubState: entering state ScrubMachine/Inactive
Segmentation fault on shard 2.
Backtrace:
DEBUG 2024-01-30 05:30:07,696 [shard 0] osd - pg_epoch 112 pg[1.0( v 111'92 (0'0,111'92] local-lis/les=13/15 n=0 ec=13/13 lis/c=13/13 les/c/f=15/18/0 sis=13) [3,0] r=1 lpr=13 luod=0'0 lua=0'0 crt=111'93 lcod 107'91 mlcod 111'93 active PeeringState::update_last_complete_ondisk updating last_complete_ondisk to: 111'92
DEBUG 2024-01-30 05:30:07,696 [shard 0] osd - replicated_request(id=1358, detail=RepRequest(from=3 req=osd_repop(client.4112.0:128 1.0 e111/13 1:30306672:devicehealth::main.db.0000000000000000:head v 111'93, mlcod=111'93) v3)): complete
DEBUG 2024-01-30 05:30:07,696 [shard 0] osd - replicated_request(id=1358, detail=RepRequest(from=3 req=osd_repop(client.4112.0:128 1.0 e111/13 1:30306672:devicehealth::main.db.0000000000000000:head v 111'93, mlcod=111'93) v3)): exit
0# 0x00005593591ABE41 in ceph-osd
1# 0x00005593591AC2F5 in ceph-osd
2# 0x000055935A8DFA68 in ceph-osd
3# 0x000055935A8DFDD1 in ceph-osd
4# 0x000055935A916C5E in ceph-osd
5# 0x000055935A917A56 in ceph-osd
6# 0x000055935A8B5832 in ceph-osd
7# 0x00002BA18CE8C1CA in /lib64/libpthread.so.0
8# clone in /lib64/libc.so.6
Dump of siginfo:
si_signo: 11
si_errno: 0
si_code: 1
si_pid: 24
si_uid: 0
si_status: 0
si_utime: 0
si_stime: 0
si_int: 0
si_ptr: 0
si_overrun: 0
si_timerid: 24
si_addr: 0x18
si_band: 24
si_fd: 0
si_addr_lsb: 0
si_lower: 0
si_upper: 0
si_pkey: 0
si_call_addr: 0x18
si_syscall: 0
si_arch: 0
</pre>
<p>The crash was caused by an attempted PG creation after the corresponding pool had already been removed.</p>
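<p>A minimal sketch of the kind of guard this implies (hypothetical names, not Crimson's actual code path): before instantiating a PG for a peering event, check that the PG's pool still exists in the target OSDMap, and drop the event if it doesn't.</p>

```python
# Illustrative guard (hypothetical names, not Crimson's actual code path):
# a PG id like "6.7" is "<pool>.<seed>"; creation should be skipped when
# the pool is no longer present in the OSDMap being advanced to.

def maybe_create_pg(osdmap_pools, pgid):
    pool_id = int(pgid.split(".")[0])
    if pool_id not in osdmap_pools:
        return None          # pool deleted: drop the event instead of creating
    return f"pg[{pgid}]"     # stand-in for real PG construction

assert maybe_create_pg({1, 2}, "2.2") == "pg[2.2]"
assert maybe_create_pg({1, 2}, "6.7") is None   # pool 6 already removed
```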
crimson - Bug #61875 (Fix Under Review): crimson crashes during reboot when there are snap objects
https://tracker.ceph.com/issues/61875
2023-07-03T02:04:19Z
Xuehan Xu
xxhdx1985126@gmail.com
<p>This is because we put snap mapper info into PG meta objects; when rebooting, PGLog replay encounters that info and doesn't know what to do with it.</p>
CephFS - Bug #41327 (Fix Under Review): mds: dirty rstat lost during scatter-gather process
https://tracker.ceph.com/issues/41327
2019-08-18T13:55:24Z
Xuehan Xu
xxhdx1985126@gmail.com
<p>In the following scenario, the current lock's dirty state could be lost:</p>
<ol>
<li>the current lock's state is LOCK_LOCK;</li>
<li>the auth MDS sends LOCK_AC_MIX, and the current lock does finish_scatter_update, which wrlock_forces the current lock;</li>
<li>the auth MDS sends LOCK_AC_LOCK, which is forced to wait, since the current lock is wrlocked;</li>
<li>the child inode wants to pop its rstat up to the current lock's parent, which is also forced to wait since the current lock's state is LOCK_MIX_LOCK; the current lock is marked dirty, and the child inode is attached to the corresponding dir's dirty_rstat_inodes;</li>
<li>the previous finish_scatter_update finishes, the code reaches this point and "start_flush"es the current lock, which clears the current lock's dirty flag.</li>
</ol>
<p>Because the CInode::encode_lock_state method doesn't pop the dirty_rstat_inodes' rstats into the corresponding dir, step 5 actually leads to the loss of the dirty state marked in step 4.</p>
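<p>The loss can be simulated with a minimal model (invented names; the real logic lives in ScatterLock/CInode): the dirty flag set in step 4 is cleared by start_flush in step 5 while the pending child rstats are still attached, so they are stranded with no dirty flag left to trigger a later flush.</p>

```python
# Minimal simulation of the dirty-state loss (invented names, not the
# real MDS classes). start_flush() clearing the dirty flag without first
# popping dirty_rstat_inodes is the bug; popping first is the fix sketch.

class ToyScatterLock:
    def __init__(self):
        self.dirty = False
        self.dirty_rstat_inodes = []

    def mark_dirty(self, child):                 # step 4
        self.dirty = True
        self.dirty_rstat_inodes.append(child)

    def start_flush(self, pop_children_first):   # step 5
        if pop_children_first:
            self.dirty_rstat_inodes.clear()      # rstats folded into the dir
        self.dirty = False
        # Consistent iff no pending child rstats survive the flush.
        return not self.dirty_rstat_inodes

buggy = ToyScatterLock()
buggy.mark_dirty("child_inode")
assert not buggy.start_flush(pop_children_first=False)  # rstat stranded

fixed = ToyScatterLock()
fixed.mark_dirty("child_inode")
assert fixed.start_flush(pop_children_first=True)       # rstats popped first
```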
CephFS - Bug #40159 (Fix Under Review): mds: openfiletable prefetching large amounts of inodes le...
https://tracker.ceph.com/issues/40159
2019-06-05T06:31:46Z
Xuehan Xu
xxhdx1985126@gmail.com
<p>Recently, we found that both MDSes of one of our clusters couldn't boot to up:active.</p>
<p>After debugging, we believe this is because the openfiletable prefetch phase is forced to load large numbers of "opened" inodes even if their caps were dropped by clients when reconnecting.</p>
CephFS - Feature #24725 (Fix Under Review): mds: propagate rstats from the leaf dirs up to the sp...
https://tracker.ceph.com/issues/24725
2018-07-01T09:27:36Z
Xuehan Xu
xxhdx1985126@gmail.com
<p>To make rsync "rstat"-aware, we need to implement a way to propagate all rstats up through the subtree that is to be rsynced, so that rsync won't miss files that were modified while their ctime hasn't yet climbed up the subtree.</p>
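<p>What "propagate rstats up the subtree" means can be sketched in plain Python (not MDS code; the field names are simplified stand-ins for the real rstat members): fold every directory's recursive stats into its ancestors so a reader at the subtree root sees an up-to-date rctime and file count.</p>

```python
# Sketch of rstat propagation (plain Python, not MDS code; field names
# are simplified stand-ins). Each dir's recursive stats become the max
# ctime and total file count over its whole subtree.

from dataclasses import dataclass, field

@dataclass
class Dir:
    ctime: float = 0.0
    nfiles: int = 0
    children: list = field(default_factory=list)
    rctime: float = 0.0   # recursive: newest change anywhere below
    rfiles: int = 0       # recursive: total files anywhere below

def propagate(d: Dir):
    d.rctime, d.rfiles = d.ctime, d.nfiles
    for c in d.children:
        propagate(c)
        d.rctime = max(d.rctime, c.rctime)
        d.rfiles += c.rfiles

leaf = Dir(ctime=200.0, nfiles=3)
root = Dir(ctime=100.0, nfiles=1, children=[Dir(ctime=50.0, children=[leaf])])
propagate(root)
# A reader at the root now sees the leaf's recent change without walking down.
assert root.rctime == 200.0 and root.rfiles == 4
```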