Ceph : Issues
https://tracker.ceph.com/
2023-12-19T13:45:42Z
bluestore - Backport #63853 (New): quincy: ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixSha...
https://tracker.ceph.com/issues/63853
2023-12-19T13:45:42Z
Backport Bot
bluestore - Bug #63769 (Pending Backport): ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixSha...
https://tracker.ceph.com/issues/63769
2023-12-08T12:07:45Z
Igor Fedotov
igor.fedotov@croit.io
The assertion occurs if bluestore_allocator is set to bitmap.
Setting bluestore_elastic_shared_blobs to false fixes the issue.
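A sketch of applying the reported workaround via the config database (assuming the option is read at OSD startup, so affected OSDs need a restart to pick it up):
<pre>
# Reported workaround (sketch): disable elastic shared blobs, then
# restart the affected OSDs so BlueStore re-reads the option.
ceph config set osd bluestore_elastic_shared_blobs false
</pre>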
bluestore - Bug #63436 (Pending Backport): Typo in reshard example
https://tracker.ceph.com/issues/63436
2023-11-03T19:36:39Z
Adam Kupczyk
See https://tracker.ceph.com/issues/63353.
I missed the fact that "o"->"O" should be done too.
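Presumably the corrected docs example then becomes the following (a sketch applying "o" -> "O" on top of the earlier "l p" -> "L P" fix; verify against the updated documentation):
<pre>
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-5/ \
    --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" reshard
</pre>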
bluestore - Bug #63353 (Pending Backport): resharding RocksDB after upgrade to Pacific breaks OSDs
https://tracker.ceph.com/issues/63353
2023-10-30T13:14:14Z
Denis Polom
Hi,
we upgraded our Ceph cluster from the latest Octopus to Pacific 16.2.14 and then followed the docs (https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#rocksdb-sharding) to reshard RocksDB on our OSDs.
Although resharding reports the operation as successful, the OSD fails to start.
<pre>
# ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-5/ --sharding="m(3) p(3,0-12) o(3,0-13)=block_cache={type=binned_lru} l p" reshard
reshard success
</pre>
<pre>
Oct 30 12:44:17 octopus2 ceph-osd[4521]: /build/ceph-16.2.14/src/kv/RocksDBStore.cc: 1223: FAILED ceph_assert(recreate_mode)
Oct 30 12:44:17 octopus2 ceph-osd[4521]: ceph version 16.2.14 (238ba602515df21ea7ffc75c88db29f9e5ef12c9) pacific (stable)
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14b) [0x564047cb92b2]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 2: /usr/bin/ceph-osd(+0xaa948a) [0x564047cb948a]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 3: (RocksDBStore::do_open(std::ostream&, bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x1609) [0x564048794829]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 4: (BlueStore::_open_db(bool, bool, bool)+0x601) [0x564048240421]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 5: (BlueStore::_open_db_and_around(bool, bool)+0x26b) [0x5640482a5f8b]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 6: (BlueStore::_mount()+0x9c) [0x5640482a896c]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 7: (OSD::init()+0x38a) [0x564047daacea]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 8: main()
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 9: __libc_start_main()
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 10: _start()
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 0> 2023-10-30T12:44:17.088+0000 7f4971ed2100 -1 *** Caught signal (Aborted) **
Oct 30 12:44:17 octopus2 ceph-osd[4521]: in thread 7f4971ed2100 thread_name:ceph-osd
Oct 30 12:44:17 octopus2 ceph-osd[4521]: ceph version 16.2.14 (238ba602515df21ea7ffc75c88db29f9e5ef12c9) pacific (stable)
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x12730) [0x7f4972921730]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 2: gsignal()
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 3: abort()
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x19c) [0x564047cb9303]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 5: /usr/bin/ceph-osd(+0xaa948a) [0x564047cb948a]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 6: (RocksDBStore::do_open(std::ostream&, bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x1609) [0x564048794829]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 7: (BlueStore::_open_db(bool, bool, bool)+0x601) [0x564048240421]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 8: (BlueStore::_open_db_and_around(bool, bool)+0x26b) [0x5640482a5f8b]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 9: (BlueStore::_mount()+0x9c) [0x5640482a896c]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 10: (OSD::init()+0x38a) [0x564047daacea]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 11: main()
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 12: __libc_start_main()
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 13: _start()
Oct 30 12:44:17 octopus2 ceph-osd[4521]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Oct 30 12:44:17 octopus2 ceph-osd[4521]: -1> 2023-10-30T12:44:17.084+0000 7f4971ed2100 -1 /build/ceph-16.2.14/src/kv/RocksDBStore.cc: In function 'int RocksDBStore::do_open(std::ostream&, bool, bool, const string&)' thread 7f4971ed2100 time 2023-10-30T12:44:17.087172+0000
</pre>
Any advice will be appreciated.
thx
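A plausible recovery path, assuming the assert comes from the lowercase column-family names in the docs example (see #63436; a sketch, untested, run with the OSD stopped):
<pre>
# Inspect the sharding currently recorded on the OSD:
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-5/ show-sharding
# Reshard again with the uppercase prefixes from the corrected example:
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-5/ \
    --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" reshard
</pre>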
bluestore - Bug #63121 (Pending Backport): KeyValueDB/KVTest.RocksDB_estimate_size tests failing
https://tracker.ceph.com/issues/63121
2023-10-06T08:55:03Z
Aishwarya Mathuria
<pre>
2023-10-06T00:39:53.879 INFO:teuthology.orchestra.run.smithi184.stdout:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/18.0.0-6569-g0e9a2b0e/rpm/el8/BUILD/ceph-18.0.0-6569-g0e9a2b0e/src/test/objectstore/test_kv.cc:567: Failure
2023-10-06T00:39:53.879 INFO:teuthology.orchestra.run.smithi184.stdout:Expected: (size_a) > ((test + 1) * 1000 * 100 * 0.5), actual: 3987 vs 50000
2023-10-06T00:39:53.879 INFO:teuthology.orchestra.run.smithi184.stderr:2023-10-06T00:39:53.876+0000 7f31711318c0 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
2023-10-06T00:39:53.879 INFO:teuthology.orchestra.run.smithi184.stderr:2023-10-06T00:39:53.876+0000 7f31711318c0 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
2023-10-06T00:39:53.880 INFO:teuthology.orchestra.run.smithi184.stdout:==> rm -r kv_test_temp_dir
2023-10-06T00:39:53.882 INFO:teuthology.orchestra.run.smithi184.stdout:[ FAILED ] KeyValueDB/KVTest.RocksDB_estimate_size/0, where GetParam() = "rocksdb" (344 ms)
2023-10-06T00:39:54.332 INFO:teuthology.orchestra.run.smithi184.stdout:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/18.0.0-6569-g0e9a2b0e/rpm/el8/BUILD/ceph-18.0.0-6569-g0e9a2b0e/src/test/objectstore/test_kv.cc:599: Failure
2023-10-06T00:39:54.332 INFO:teuthology.orchestra.run.smithi184.stdout:Expected: (size_a) > ((test + 1) * 1000 * 100 * 0.5), actual: 3917 vs 50000
2023-10-06T00:39:54.332 INFO:teuthology.orchestra.run.smithi184.stderr:2023-10-06T00:39:54.330+0000 7f31711318c0 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
2023-10-06T00:39:54.333 INFO:teuthology.orchestra.run.smithi184.stderr:2023-10-06T00:39:54.330+0000 7f31711318c0 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
2023-10-06T00:39:54.333 INFO:teuthology.orchestra.run.smithi184.stdout:==> rm -r kv_test_temp_dir
2023-10-06T00:39:54.335 INFO:teuthology.orchestra.run.smithi184.stdout:[ FAILED ] KeyValueDB/KVTest.RocksDB_estimate_size_column_family/0, where GetParam() = "rocksdb" (454 ms)
</pre>
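To iterate on these tests outside teuthology, the gtest binary can presumably be run directly from a build tree (sketch; the binary name is an assumption based on the test source test_kv.cc, the filter syntax is standard gtest):
<pre>
cd build
./bin/ceph_test_keyvaluedb --gtest_filter='KeyValueDB/KVTest.RocksDB_estimate_size*'
</pre>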
bluestore - Bug #62730 (New): ceph-bluestore-tool reshard broken
https://tracker.ceph.com/issues/62730
2023-09-06T19:49:52Z
Adam Kupczyk
It is possible to specify the same prefix twice. Here is an example with "p" defined twice.
<pre>
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-1 --sharding="m(3) p(3,0-12) o(3,0-13)=block_cache={type=binned_lru} l p" reshard
</pre>
After resharding we get:
<pre>
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: do_open existing_cfs=11
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=l shard_idx=0 hash_l=0 hash_h=4294967295 handle=0x55e12cbf89e0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=m shard_idx=0 hash_l=0 hash_h=4294967295 handle=0x55e12cbf8300
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=m shard_idx=1 hash_l=0 hash_h=4294967295 handle=0x55e12cbf8980
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=m shard_idx=2 hash_l=0 hash_h=4294967295 handle=0x55e12cbf91c0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=o shard_idx=0 hash_l=0 hash_h=13 handle=0x55e12cbf8cc0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=o shard_idx=1 hash_l=0 hash_h=13 handle=0x55e12cbf8ec0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=o shard_idx=2 hash_l=0 hash_h=13 handle=0x55e12cbf8bc0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=p shard_idx=0 hash_l=0 hash_h=12 handle=0x55e12cbf8ce0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=p shard_idx=1 hash_l=0 hash_h=12 handle=0x55e12cbf8c00
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=p shard_idx=2 hash_l=0 hash_h=12 handle=0x55e12cc92e20
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: do_open missing_cfs=1
2023-07-11T15:34:08.605+0000 7f138baaf200 -1 /builddir/build/BUILD/ceph-16.2.10/src/kv/RocksDBStore.cc: In function 'int RocksDBStore::do_open(std::ostream&, bool, bool, const string&)' thread 7f138baaf200 time 2023-07-11T15:34:08.602980+0000
/builddir/build/BUILD/ceph-16.2.10/src/kv/RocksDBStore.cc: 1215: FAILED ceph_assert(recreate_mode)
</pre>
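A sketch of the same invocation with the duplicate removed; presumably the trailing "p" was meant to be the distinct prefix "P" (the lowercase prefix names are a separate problem, tracked in #63436):
<pre>
# Hypothetical corrected spec with each prefix appearing only once
# (duplicate trailing "p" dropped):
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-1 \
    --sharding="m(3) p(3,0-12) o(3,0-13)=block_cache={type=binned_lru} l" reshard
</pre>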
bluestore - Bug #62175 (Need More Info): "rados get" command can't get data when bluestore uses L...
https://tracker.ceph.com/issues/62175
2023-07-26T08:32:43Z
taki zhao
I set the buffer cache of BlueStore to LRU.
When I put an object of about 100 MB into RADOS with "./bin/rados -p cache_pool put tar /root/Boost.tar.bz2", I cannot read all of the data back with "./bin/rados -p cache_pool get tar /root/tmp1" (the resulting /root/tmp1 is only about 12 MB), and the get command does not seem to terminate.
It is worth mentioning that I am using the vstart environment; the vstart startup command is "../src/vstart.sh --debug --new -x --localhost --bluestore".
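For context, a sketch of how the LRU buffer cache is presumably selected in such a vstart cluster; the option name bluestore_cache_type is an assumption on my part, inferred from the LruBufferCacheShard frames in the backtrace below:
<pre>
# Assumed option name; verify before relying on it:
./bin/ceph config set osd bluestore_cache_type lru
# then reproduce:
./bin/rados -p cache_pool put tar /root/Boost.tar.bz2
./bin/rados -p cache_pool get tar /root/tmp1
</pre>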
Backtrace:
<pre>
0> 2023-07-26T15:48:58.726+0800 fff80e5caf90 -1 *** Caught signal (Aborted) **
in thread fff80e5caf90 thread_name:tp_osd_tp
ceph version b6a97989f (3b6a97989fdad7cc894771fbfe9f1ce241f2ada1) quincy (stable)
1: __kernel_rt_sigreturn()
2: gsignal()
3: abort()
4: /usr/lib64/libc.so.6(+0x2fa0c) [0xfffc2dbcfa0c]
5: /usr/lib64/libc.so.6(+0x2fa8c) [0xfffc2dbcfa8c]
6: (LruBufferCacheShard::_rm(BlueStore::Buffer*)+0xf4) [0xaaac355770bc]
7: (BlueStore::BufferSpace::_rm_buffer(BlueStore::BufferCacheShard*, std::_Rb_tree_iterator<std::pair<unsigned int const, std::unique_ptr<BlueStore::Buffer, std::default_delete<BlueStore::Buffer> > > >)+0x40) [0xaaac35582a5c]
8: (LruBufferCacheShard::_trim_to(unsigned long)+0xd8) [0xaaac355aab6c]
9: (BlueStore::BufferSpace::did_read(BlueStore::BufferCacheShard*, unsigned int, ceph::buffer::v15_2_0::list&)+0x210) [0xaaac355a4114]
10: (BlueStore::_generate_read_result_bl(boost::intrusive_ptr<BlueStore::Onode>, unsigned long, unsigned long, std::map<unsigned long, ceph::buffer::v15_2_0::list, std::less<unsigned long>, std::allocator<std::pair<unsigned long const, ceph::buffer::v15_2_0::list> > >&, std::vector<ceph::buffer::v15_2_0::list, std::allocator<ceph::buffer::v15_2_0::list> >&, std::map<boost::intrusive_ptr<BlueStore::Blob>, std::__cxx11::list<BlueStore::read_req_t, std::allocator<BlueStore::read_req_t> >, std::less<boost::intrusive_ptr<BlueStore::Blob> >, std::allocator<std::pair<boost::intrusive_ptr<BlueStore::Blob> const, std::__cxx11::list<BlueStore::read_req_t, std::allocator<BlueStore::read_req_t> > > > >&, bool, bool*, ceph::buffer::v15_2_0::list&)+0x5f8) [0xaaac355140d4]
11: (BlueStore::_do_read(BlueStore::Collection*, boost::intrusive_ptr<BlueStore::Onode>, unsigned long, unsigned long, ceph::buffer::v15_2_0::list&, unsigned int, unsigned long)+0x81c) [0xaaac35520770]
12: (BlueStore::read(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, ghobject_t const&, unsigned long, unsigned long, ceph::buffer::v15_2_0::list&, unsigned int)+0x3a8) [0xaaac355581d4]
13: (ReplicatedBackend::objects_read_sync(hobject_t const&, unsigned long, unsigned long, unsigned int, ceph::buffer::v15_2_0::list*)+0x94) [0xaaac35385030]
14: (PrimaryLogPG::do_read(PrimaryLogPG::OpContext*, OSDOp&)+0x738) [0xaaac350a2218]
15: (PrimaryLogPG::do_osd_ops(PrimaryLogPG::OpContext*, std::vector<OSDOp, std::allocator<OSDOp> >&)+0xa08) [0xaaac350dbfc4]
16: (PrimaryLogPG::prepare_transaction(PrimaryLogPG::OpContext*)+0x158) [0xaaac350ebfe8]
17: (PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)+0x474) [0xaaac350ec83c]
18: (PrimaryLogPG::do_op(boost::intrusive_ptr<OpRequest>&)+0x2f68) [0xaaac350f0a50]
19: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0xad4) [0xaaac350f701c]
20: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x470) [0xaaac34f5dcfc]
21: (ceph::osd::scheduler::PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x80) [0xaaac35260fb0]
22: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x26ec) [0xaaac34f8b0c4]
23: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x830) [0xaaac356e0fe0]
24: (ShardedThreadPool::WorkThreadSharded::entry()+0x14) [0xaaac356e3c8c]
25: (Thread::entry_wrapper()+0x4c) [0xaaac356ccfdc]
26: (Thread::_entry_func(void*)+0xc) [0xaaac356ccffc]
27: /usr/lib64/libpthread.so.0(+0x88cc) [0xfffc2e1388cc]
28: /usr/lib64/libc.so.6(+0xda12c) [0xfffc2dc7a12c]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
</pre>
bluestore - Backport #61465 (New): reef: Fragmentation score rising by seemingly stuck thread
https://tracker.ceph.com/issues/61465
2023-05-26T10:44:59Z
Backport Bot
bluestore - Backport #61463 (New): quincy: Fragmentation score rising by seemingly stuck thread
https://tracker.ceph.com/issues/61463
2023-05-26T10:44:45Z
Backport Bot
RADOS - Bug #59099 (New): PG move causes data duplication
https://tracker.ceph.com/issues/59099
2023-03-17T13:51:03Z
Adam Kupczyk
Let's imagine we have a pool TEST.
In a PG we have an object OBJ of size 1M.
We create snap SNAP-1 and write some 4K to OBJ.
As a result we get OBJ.1, which takes 1M, and OBJ.head, which reuses all but 4K.
The total data usage is 1M + 4K.
Now we move the PG to another OSD.
In some cases OBJ.head + OBJ.1 will then take 2M.
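A minimal sketch of the scenario (hypothetical commands; the attached snap-pg-move-history.sh is the authoritative reproducer, and the --offset overwrite is my assumption about how the 4K write was done):
<pre>
ceph osd pool create TEST 1 1
dd if=/dev/urandom of=/tmp/obj bs=1M count=1
rados -p TEST put OBJ /tmp/obj
rados -p TEST mksnap SNAP-1
dd if=/dev/urandom of=/tmp/delta bs=4K count=1
rados -p TEST put OBJ /tmp/delta --offset 0   # overwrite only the first 4K
ceph osd df                                   # usage before the PG move
</pre>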
An example of this happening is in the attachment snap-pg-move-history.sh.
When the data is in the original PG on OSD.0:
<pre>
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP  META    AVAIL    %USE  VAR   PGS  STATUS
 0  ssd    0.09859   1.00000  101 GiB  1.1 GiB  101 MiB   0 B  21 MiB  100 GiB  1.09  1.05    2  up
 1  ssd    0.09859   1.00000  101 GiB  1.0 GiB  740 KiB   0 B  20 MiB  100 GiB  0.99  0.95    1  up
                     TOTAL    202 GiB  2.1 GiB  101 MiB   0 B  41 MiB  200 GiB  1.04
MIN/MAX VAR: 0.95/1.05  STDDEV: 0.05
</pre>
And after forcibly moving the PG to the other OSD:
<pre>
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP  META    AVAIL    %USE  VAR   PGS  STATUS
 0  ssd    0.09859   1.00000  101 GiB  1.0 GiB  756 KiB   0 B  21 MiB  100 GiB  0.99  0.91    1  up
 1  ssd    0.09859   1.00000  101 GiB  1.2 GiB  201 MiB   0 B  21 MiB  100 GiB  1.18  1.09    2  up
                     TOTAL    202 GiB  2.2 GiB  201 MiB   0 B  42 MiB  200 GiB  1.09
MIN/MAX VAR: 0.91/1.09  STDDEV: 0.10
</pre>
The script was tested on Reef, but I do not believe the issue is limited to that release.
bluestore - Bug #58022 (Pending Backport): Fragmentation score rising by seemingly stuck thread
https://tracker.ceph.com/issues/58022
2022-11-14T17:06:44Z
Kevin Fox
Due to issue https://tracker.ceph.com/issues/57672 we've been monitoring our clusters closely to make sure the same issue doesn't hit our other clusters. We have a cluster running 16.2.9 that is showing weird/bad behavior.
We've noticed some OSDs suddenly start increasing their fragmentation score at a constant rate until they are restarted. They then settle down and reduce their fragmentation very slowly.
Talking with @Vikhyat a bit, the theory was that maybe compaction was kicking in repeatedly. We used ceph_rocksdb_log_parser.py on one of the runaway OSDs and didn't see a significant number of compaction events during the period of runaway fragmentation, so that is unlikely to be the cause.
Please see the attached screenshot. You can see the runaway OSDs climb over multiple days and then, when we restart them, level off and slowly decrease.
If it were workload related, we would expect fragmentation to keep rising after the restart as the workload continues, but the behavior stops immediately on restart. So it feels like some thread in the OSD is doing something unusual until it is restarted.
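For reference, the score being monitored can be queried from a running OSD over the admin socket (these hooks are registered by BlueStore; they also appear in the startup log of #52464 below):
<pre>
ceph daemon osd.0 bluestore allocator score block
ceph daemon osd.0 bluestore allocator fragmentation block
</pre>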
bluestore - Backport #55517 (New): quincy: test_cls_rbd.sh: multiple TestClsRbd failures during u...
https://tracker.ceph.com/issues/55517
2022-05-02T17:20:08Z
Backport Bot
bluestore - Bug #55444 (Pending Backport): test_cls_rbd.sh: multiple TestClsRbd failures during u...
https://tracker.ceph.com/issues/55444
2022-04-26T01:14:27Z
Laura Flores
Description: rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/classic thrashosds-health ubuntu_18.04}
/a/lflores-2022-04-22_20:48:19-rados-wip-55324-pacific-backport-distro-default-smithi/6801098
<pre>
2022-04-23T08:54:27.447 INFO:tasks.workunit.client.0.smithi084.stdout:[ RUN ] TestClsRbd.directory_methods
2022-04-23T08:54:27.465 INFO:tasks.workunit.client.0.smithi084.stdout:/build/ceph-14.2.22/src/test/cls_rbd/test_cls_rbd.cc:297: Failure
2022-04-23T08:54:27.465 INFO:tasks.workunit.client.0.smithi084.stdout: Expected: -16
2022-04-23T08:54:27.465 INFO:tasks.workunit.client.0.smithi084.stdout:To be equal to: dir_state_set(&ioctx, oid, cls::rbd::DIRECTORY_STATE_ADD_DISABLED)
2022-04-23T08:54:27.465 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: 0
2022-04-23T08:54:27.466 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.directory_methods (18 ms)
...
2022-04-23T08:54:27.633 INFO:tasks.workunit.client.0.smithi084.stdout:/build/ceph-14.2.22/src/test/cls_rbd/test_cls_rbd.cc:750: Failure
2022-04-23T08:54:27.633 INFO:tasks.workunit.client.0.smithi084.stdout: Expected: 0
2022-04-23T08:54:27.633 INFO:tasks.workunit.client.0.smithi084.stdout:To be equal to: get_parent(&ioctx, oid, 10, &pspec, &size)
2022-04-23T08:54:27.634 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: -22
2022-04-23T08:54:27.634 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.parents_v1 (45 ms)
...
2022-04-23T08:54:27.729 INFO:tasks.workunit.client.0.smithi084.stdout:/build/ceph-14.2.22/src/test/cls_rbd/test_cls_rbd.cc:1008: Failure
2022-04-23T08:54:27.730 INFO:tasks.workunit.client.0.smithi084.stdout: Expected: 1u
2022-04-23T08:54:27.730 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: 1
2022-04-23T08:54:27.730 INFO:tasks.workunit.client.0.smithi084.stdout:To be equal to: snapc.snaps.size()
2022-04-23T08:54:27.730 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: 0
2022-04-23T08:54:27.730 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.snapshots (6 ms)
...
2022-04-23T08:54:27.778 INFO:tasks.workunit.client.0.smithi084.stdout:/build/ceph-14.2.22/src/test/cls_rbd/test_cls_rbd.cc:1437: Failure
2022-04-23T08:54:27.778 INFO:tasks.workunit.client.0.smithi084.stdout: Expected: 2U
2022-04-23T08:54:27.778 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: 2
2022-04-23T08:54:27.778 INFO:tasks.workunit.client.0.smithi084.stdout:To be equal to: pairs.size()
2022-04-23T08:54:27.778 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: 0
2022-04-23T08:54:27.779 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.metadata (6 ms)
... + 22 more failed tests
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[==========] 67 tests from 1 test case ran. (22012 ms total)
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ PASSED ] 41 tests.
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] 26 tests, listed below:
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.directory_methods
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.parents_v1
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.snapshots
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.metadata
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.mirror
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.mirror_image
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.mirror_image_status
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.mirror_image_map
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_dir_list
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_dir_add
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.dir_add_already_existing
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_dir_rename
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_dir_remove
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_dir_remove_missing
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_image_add
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_image_remove
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_image_list
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_image_clean
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.image_group_add
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_snap_set_duplicate_name
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_snap_set
2022-04-23T08:54:39.864 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_snap_list
2022-04-23T08:54:39.864 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_snap_remove
2022-04-23T08:54:39.864 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.trash_methods
2022-04-23T08:54:39.864 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.clone_child
2022-04-23T08:54:39.864 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.namespace_methods
</pre>
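To iterate on one of these failures outside teuthology, the cls_rbd gtest binary can presumably be run directly against a dev cluster (sketch; the binary name is an assumption based on the test source test_cls_rbd.cc):
<pre>
./bin/ceph_test_cls_rbd --gtest_filter='TestClsRbd.directory_methods'
</pre>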
bluestore - Bug #55187 (Need More Info): ceph_abort_msg(\"bluefs enospc\")
https://tracker.ceph.com/issues/55187
2022-04-05T16:10:38Z
Aishwarya Mathuria
From the osd crash info in the gibba cluster:
<pre>
$ sudo ceph crash info 2022-04-05T02:08:50.176782Z_e45030ee-e34b-46a4-bdde-3bcb3e8005fa
{
"assert_condition": "abort",
"assert_file": "/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.1.0-163-g4e244311/rpm/el8/BUILD/ceph-17.1.0-163-g4e244311/src/os/bluestore/BlueFS.cc",
"assert_func": "int BlueFS::_flush_range_F(BlueFS::FileWriter*, uint64_t, uint64_t)",
"assert_line": 3137,
"assert_msg": "/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.1.0-163-g4e244311/rpm/el8/BUILD/ceph-17.1.0-163-g4e244311/src/os/bluestore/BlueFS.cc: In function 'int BlueFS::_flush_range_F(BlueFS::FileWriter*, uint64_t, uint64_t)' thread 7f10c8ce23c0 time 2022-04-05T02:08:50.161549+0000\n/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.1.0-163-g4e244311/rpm/el8/BUILD/ceph-17.1.0-163-g4e244311/src/os/bluestore/BlueFS.cc: 3137: ceph_abort_msg(\"bluefs enospc\")\n",
"assert_thread_name": "ceph-osd",
"backtrace": [
"/lib64/libpthread.so.0(+0x12ce0) [0x7f10c6ee7ce0]",
"gsignal()",
"abort()",
"(ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x1b8) [0x56488be01c6c]",
"(BlueFS::_flush_range_F(BlueFS::FileWriter*, unsigned long, unsigned long)+0x943) [0x56488c5697f3]",
"(BlueFS::_flush_F(BlueFS::FileWriter*, bool, bool*)+0xa9) [0x56488c5699d9]",
"(BlueFS::fsync(BlueFS::FileWriter*)+0x19e) [0x56488c58620e]",
"(BlueRocksWritableFile::Sync()+0x18) [0x56488c595f88]",
"(rocksdb::LegacyWritableFileWrapper::Sync(rocksdb::IOOptions const&, rocksdb::IODebugContext*)+0x1f) [0x56488cab96bf]",
"(rocksdb::WritableFileWriter::SyncInternal(bool)+0x662) [0x56488cbe9e92]",
"(rocksdb::WritableFileWriter::Sync(bool)+0xf8) [0x56488cbeb858]",
"(rocksdb::SyncManifest(rocksdb::Env*, rocksdb::ImmutableDBOptions const*, rocksdb::WritableFileWriter*)+0x11d) [0x56488cbe2d5d]",
"(rocksdb::VersionSet::ProcessManifestWrites(std::deque<rocksdb::VersionSet::ManifestWriter, std::allocator<rocksdb::VersionSet::ManifestWriter> >&, rocksdb::InstrumentedMutex*, rocksdb::FSDirectory*, bool, rocksdb::ColumnFamilyOptions const*)+0x181c) [0x56488cba9aac]",
"(rocksdb::VersionSet::LogAndApply(rocksdb::autovector<rocksdb::ColumnFamilyData*, 8ul> const&, rocksdb::autovector<rocksdb::MutableCFOptions const*, 8ul> const&, rocksdb::autovector<rocksdb::autovector<rocksdb::VersionEdit*, 8ul>, 8ul> const&, rocksdb::InstrumentedMutex*, rocksdb::FSDirectory*, bool, rocksdb::ColumnFamilyOptions const*, std::vector<std::function<void (rocksdb::Status const&)>, std::allocator<std::function<void (rocksdb::Status const&)> > > const&)+0xad1) [0x56488cbab711]",
"(rocksdb::VersionSet::LogAndApply(rocksdb::ColumnFamilyData*, rocksdb::MutableCFOptions const&, rocksdb::VersionEdit*, rocksdb::InstrumentedMutex*, rocksdb::FSDirectory*, bool, rocksdb::ColumnFamilyOptions const*)+0x1c6) [0x56488cac4ba6]",
"(rocksdb::DBImpl::DeleteUnreferencedSstFiles()+0x99a) [0x56488caf4d1a]",
"(rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool, unsigned long*)+0x1269) [0x56488cb035d9]",
"(rocksdb::DBImpl::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**, bool, bool)+0x5a3) [0x56488cafc3c3]",
"(rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0x15) [0x56488cafda75]",
"(RocksDBStore::do_open(std::ostream&, bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x10a1) [0x56488ca70541]",
"(BlueStore::_open_db(bool, bool, bool)+0x68c) [0x56488c47100c]",
"(BlueStore::_open_db_and_around(bool, bool)+0x34b) [0x56488c4ba23b]",
"(BlueStore::_mount()+0x1ae) [0x56488c4bd38e]",
"(OSD::init()+0x403) [0x56488bf3f513]",
"main()",
"__libc_start_main()",
"_start()"
],
"ceph_version": "17.1.0-163-g4e244311",
"crash_id": "2022-04-05T02:08:50.176782Z_e45030ee-e34b-46a4-bdde-3bcb3e8005fa",
"entity_name": "osd.652",
"os_id": "centos",
"os_name": "CentOS Stream",
"os_version": "8",
"os_version_id": "8",
"process_name": "ceph-osd",
"stack_sig": "8399067597cb57a8aae13abe0f112a83b5cf7cc7d53b85d56fd1afb99a6c5bed",
"timestamp": "2022-04-05T02:08:50.176782Z",
"utsname_hostname": "gibba024",
"utsname_machine": "x86_64",
"utsname_release": "4.18.0-301.1.el8.x86_64",
"utsname_sysname": "Linux",
"utsname_version": "#1 SMP Tue Apr 13 16:24:22 UTC 2021"
}
</pre>
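Possibly useful triage for a "bluefs enospc" abort (sketch; run with the OSD stopped, and verify the subcommands against your release):
<pre>
# Show how much space BlueFS sees on each device:
ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-652
# If the underlying DB/WAL volume was grown, let BlueFS use the new space:
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-652
</pre>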
bluestore - Bug #52464 (New): FAILED ceph_assert(current_shard->second->valid())
https://tracker.ceph.com/issues/52464
2021-08-31T13:57:27Z
Jeff Layton
jlayton@redhat.com
I've got a cephadm cluster I use for testing, and this morning one of the OSDs crashed in BlueStore code:
<pre>
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: get compressor snappy = 0x55b3c18b1b90
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm::NCB::freelist_type=null
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: freelist init
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: freelist _read_cfg
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: asok(0x55b3c09f0000) register_command bluestore allocator dump block hook 0x55b3c18b1ef0
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: asok(0x55b3c09f0000) register_command bluestore allocator score block hook 0x55b3c18b1ef0
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: asok(0x55b3c09f0000) register_command bluestore allocator fragmentation block hook 0x55b3c18b1ef0
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore::NCB::restore_allocator::file_size=0,sizeof(extent_t)=16
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore::NCB::restore_allocator::No Valid allocation info on disk (empty file)
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc::NCB::restore_allocator() failed!
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc::NCB::Run Full Recovery from ONodes (might take a while) ...
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore::NCB::read_allocation_from_drive_on_startup::Start Allocation Recovery from ONodes ...
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-7195-g7e7326c4/rpm/el8/BUILD/ceph-17.0.0-7195-g7e7326c4/src/kv/RocksDBStore.cc: In function 'bool WholeMergeIteratorImpl::is_main_smaller()' thread 7f2d60f480c0 time 2021-08-31T13:51:40.899594+0000
/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-7195-g7e7326c4/rpm/el8/BUILD/ceph-17.0.0-7195-g7e7326c4/src/kv/RocksDBStore.cc: 2288: FAILED ceph_assert(current_shard->second->valid())
ceph version 17.0.0-7195-g7e7326c4 (7e7326c4231f614aff0f7bef4d72beadce6a9c75) quincy (dev)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x55b3bdcb0b50]
2: /usr/bin/ceph-osd(+0x5ced71) [0x55b3bdcb0d71]
3: (WholeMergeIteratorImpl::is_main_smaller()+0x13b) [0x55b3be8f93db]
4: (WholeMergeIteratorImpl::next()+0x2c) [0x55b3be8f942c]
5: (BlueStore::_open_collections()+0x660) [0x55b3be2e67f0]
6: (BlueStore::read_allocation_from_drive_on_startup()+0x127) [0x55b3be2ffa97]
7: (BlueStore::_init_alloc()+0xa01) [0x55b3be300bd1]
8: (BlueStore::_open_db_and_around(bool, bool)+0x2f4) [0x55b3be3487e4]
9: (BlueStore::_mount()+0x1ae) [0x55b3be34b55e]
10: (OSD::init()+0x3ba) [0x55b3bddec0ba]
11: main()
12: __libc_start_main()
13: _start()
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: *** Caught signal (Aborted) **
in thread 7f2d60f480c0 thread_name:ceph-osd
ceph version 17.0.0-7195-g7e7326c4 (7e7326c4231f614aff0f7bef4d72beadce6a9c75) quincy (dev)
1: /lib64/libpthread.so.0(+0x12b20) [0x7f2d5eeeeb20]
2: gsignal()
3: abort()
4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x55b3bdcb0bae]
5: /usr/bin/ceph-osd(+0x5ced71) [0x55b3bdcb0d71]
6: (WholeMergeIteratorImpl::is_main_smaller()+0x13b) [0x55b3be8f93db]
7: (WholeMergeIteratorImpl::next()+0x2c) [0x55b3be8f942c]
8: (BlueStore::_open_collections()+0x660) [0x55b3be2e67f0]
9: (BlueStore::read_allocation_from_drive_on_startup()+0x127) [0x55b3be2ffa97]
10: (BlueStore::_init_alloc()+0xa01) [0x55b3be300bd1]
11: (BlueStore::_open_db_and_around(bool, bool)+0x2f4) [0x55b3be3487e4]
12: (BlueStore::_mount()+0x1ae) [0x55b3be34b55e]
13: (OSD::init()+0x3ba) [0x55b3bddec0ba]
14: main()
15: __libc_start_main()
16: _start()
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Aug 31 09:51:40 cephadm2 conmon[20474]: -5> 2021-08-31T13:51:40.897+0000 7f2d60f480c0 -1 bluestore::NCB::restore_allocator::No Valid allocation info on disk (empty file)
Aug 31 09:51:40 cephadm2 conmon[20474]: -1> 2021-08-31T13:51:40.903+0000 7f2d60f480c0 -1 /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-7195-g7e7326c4/rpm/el8/BUILD/ceph-17.0.0-7195-g7e7326c4/src/kv/RocksDBStore.cc: In function 'bool WholeMergeIteratorImpl::is_main_smaller()' thread 7f2d60f480c0 time 2021-08-31T13:51:40.899594+0000
Aug 31 09:51:40 cephadm2 conmon[20474]: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-7195-g7e7326c4/rpm/el8/BUILD/ceph-17.0.0-7195-g7e7326c4/src/kv/RocksDBStore.cc: 2288: FAILED ceph_assert(current_shard->second->valid())
Aug 31 09:51:40 cephadm2 conmon[20474]:
Aug 31 09:51:40 cephadm2 conmon[20474]: ceph version 17.0.0-7195-g7e7326c4 (7e7326c4231f614aff0f7bef4d72beadce6a9c75) quincy (dev)
Aug 31 09:51:40 cephadm2 conmon[20474]: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x55b3bdcb0b50]
Aug 31 09:51:40 cephadm2 conmon[20474]: 2: /usr/bin/ceph-osd(+0x5ced71) [0x55b3bdcb0d71]
Aug 31 09:51:40 cephadm2 conmon[20474]: 3: (WholeMergeIteratorImpl::is_main_smaller()+0x13b) [0x55b3be8f93db]
Aug 31 09:51:40 cephadm2 conmon[20474]: 4: (WholeMergeIteratorImpl::next()+0x2c) [0x55b3be8f942c]
Aug 31 09:51:40 cephadm2 conmon[20474]: 5: (BlueStore::_open_collections()+0x660) [0x55b3be2e67f0]
Aug 31 09:51:40 cephadm2 conmon[20474]: 6: (BlueStore::read_allocation_from_drive_on_startup()+0x127) [0x55b3be2ffa97]
Aug 31 09:51:40 cephadm2 conmon[20474]: 7: (BlueStore::_init_alloc()+0xa01) [0x55b3be300bd1]
Aug 31 09:51:40 cephadm2 conmon[20474]: 8: (BlueStore::_open_db_and_around(bool, bool)+0x2f4) [0x55b3be3487e4]
Aug 31 09:51:40 cephadm2 conmon[20474]: 9: (BlueStore::_mount()+0x1ae) [0x55b3be34b55e]
Aug 31 09:51:40 cephadm2 conmon[20474]: 10: (OSD::init()+0x3ba) [0x55b3bddec0ba]
Aug 31 09:51:40 cephadm2 conmon[20474]: 11: main()
Aug 31 09:51:40 cephadm2 conmon[20474]: 12: __libc_start_main()
Aug 31 09:51:40 cephadm2 conmon[20474]: 13: _start()
Aug 31 09:51:40 cephadm2 conmon[20474]:
Aug 31 09:51:40 cephadm2 conmon[20474]: 0> 2021-08-31T13:51:40.907+0000 7f2d60f480c0 -1 *** Caught signal (Aborted) **
Aug 31 09:51:40 cephadm2 conmon[20474]: in thread 7f2d60f480c0 thread_name:ceph-osd
Aug 31 09:51:40 cephadm2 conmon[20474]:
Aug 31 09:51:40 cephadm2 conmon[20474]: ceph version 17.0.0-7195-g7e7326c4 (7e7326c4231f614aff0f7bef4d72beadce6a9c75) quincy (dev)
Aug 31 09:51:40 cephadm2 conmon[20474]: 1: /lib64/libpthread.so.0(+0x12b20) [0x7f2d5eeeeb20]
Aug 31 09:51:40 cephadm2 conmon[20474]: 2: gsignal()
Aug 31 09:51:40 cephadm2 conmon[20474]: 3: abort()
Aug 31 09:51:40 cephadm2 conmon[20474]: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x55b3bdcb0bae]
Aug 31 09:51:40 cephadm2 conmon[20474]: 5: /usr/bin/ceph-osd(+0x5ced71) [0x55b3bdcb0d71]
Aug 31 09:51:40 cephadm2 conmon[20474]: 6: (WholeMergeIteratorImpl::is_main_smaller()+0x13b) [0x55b3be8f93db]
Aug 31 09:51:40 cephadm2 conmon[20474]: 7: (WholeMergeIteratorImpl::next()+0x2c) [0x55b3be8f942c]
Aug 31 09:51:40 cephadm2 conmon[20474]: 8: (BlueStore::_open_collections()+0x660) [0x55b3be2e67f0]
Aug 31 09:51:40 cephadm2 conmon[20474]: 9: (BlueStore::read_allocation_from_drive_on_startup()+0x127) [0x55b3be2ffa97]
Aug 31 09:51:40 cephadm2 conmon[20474]: 10: (BlueStore::_init_alloc()+0xa01) [0x55b3be300bd1]
Aug 31 09:51:40 cephadm2 conmon[20474]: 11: (BlueStore::_open_db_and_around(bool, bool)+0x2f4) [0x55b3be3487e4]
Aug 31 09:51:40 cephadm2 conmon[20474]: 12: (BlueStore::_mount()+0x1ae) [0x55b3be34b55e]
Aug 31 09:51:40 cephadm2 conmon[20474]: 13: (OSD::init()+0x3ba) [0x55b3bddec0ba]
Aug 31 09:51:40 cephadm2 conmon[20474]: 14: main()
Aug 31 09:51:40 cephadm2 conmon[20474]: 15: __libc_start_main()
Aug 31 09:51:40 cephadm2 conmon[20474]: 16: _start()
Aug 31 09:51:40 cephadm2 conmon[20474]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Aug 31 09:51:40 cephadm2 conmon[20474]:
Aug 31 09:51:41 cephadm2 systemd-coredump[20743]: Process 20497 (ceph-osd) of user 167 dumped core.
Aug 31 09:51:41 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Main process exited, code=exited, status=134/n/a
Aug 31 09:51:42 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Failed with result 'exit-code'.
Aug 31 09:51:52 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Service RestartSec=10s expired, scheduling restart.
Aug 31 09:51:52 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Scheduled restart job, restart counter is at 6.
Aug 31 09:51:52 cephadm2 systemd[1]: Stopped Ceph osd.0 for 1d11c63a-09ac-11ec-83e1-52540031ba78.
Aug 31 09:51:52 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Start request repeated too quickly.
Aug 31 09:51:52 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Failed with result 'exit-code'.
Aug 31 09:51:52 cephadm2 systemd[1]: Failed to start Ceph osd.0 for 1d11c63a-09ac-11ec-83e1-52540031ba78.
</pre>
The build I'm using is based on commit a49f10e760b4, with some MDS patches on top (nothing that should affect the OSD).
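A plausible first triage step for a repeatedly crashing OSD like this one (sketch; on a cephadm deployment ceph-bluestore-tool typically has to run inside the daemon's container, e.g. via cephadm shell):
<pre>
# Stop the OSD, then check on-disk consistency (adjust fsid/paths to taste):
systemctl stop ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service
cephadm shell --name osd.0 -- ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 fsck
</pre>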