Ceph : Issues
https://tracker.ceph.com/
2023-12-19T13:45:42Z
Ceph
Redmine
bluestore - Backport #63853 (New): quincy: ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixSha...
https://tracker.ceph.com/issues/63853
2023-12-19T13:45:42Z
Backport Bot
bluestore - Bug #63769 (Pending Backport): ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixSha...
https://tracker.ceph.com/issues/63769
2023-12-08T12:07:45Z
Igor Fedotov
igor.fedotov@croit.io
The assertion occurs if bluestore_allocator is set to bitmap. Setting bluestore_elastic_shared_blobs to false fixes the issue.
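For reference, a minimal sketch of how the two options mentioned above would be set in practice (option names are taken from the report; this is the reported workaround, not the eventual fix):
<pre>
# Failing combination reported above:
#   bluestore_allocator = bitmap
# Reported workaround: disable elastic shared blobs, e.g.
ceph config set osd bluestore_elastic_shared_blobs false
# or in ceph.conf under [osd]:
#   bluestore_elastic_shared_blobs = false
</pre>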
bluestore - Backport #61465 (New): reef: Fragmentation score rising by seemingly stuck thread
https://tracker.ceph.com/issues/61465
2023-05-26T10:44:59Z
Backport Bot
bluestore - Backport #61463 (New): quincy: Fragmentation score rising by seemingly stuck thread
https://tracker.ceph.com/issues/61463
2023-05-26T10:44:45Z
Backport Bot
RADOS - Bug #59099 (New): PG move causes data duplication
https://tracker.ceph.com/issues/59099
2023-03-17T13:51:03Z
Adam Kupczyk
Let's imagine we have a pool TEST. In one of its PGs we have an object OBJ of size 1M.

We create snapshot SNAP-1 and write some 4K to OBJ. As a result we get OBJ.1, which takes 1M, and OBJ.head, which reuses all but 4K. The total data usage is 1M + 4K.

Now we move the PG to another OSD. In some cases OBJ.head + OBJ.1 will take 2M.
An example of this happening is in the attached snap-pg-move-history.sh. When the data is in the original PG on OSD.0:
<pre>
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP  META    AVAIL    %USE  VAR   PGS  STATUS
 0  ssd    0.09859  1.00000   101 GiB  1.1 GiB  101 MiB  0 B   21 MiB  100 GiB  1.09  1.05    2  up
 1  ssd    0.09859  1.00000   101 GiB  1.0 GiB  740 KiB  0 B   20 MiB  100 GiB  0.99  0.95    1  up
                    TOTAL     202 GiB  2.1 GiB  101 MiB  0 B   41 MiB  200 GiB  1.04
MIN/MAX VAR: 0.95/1.05  STDDEV: 0.05
</pre>
And after forcibly moving the PG to the other OSD:
<pre>
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP  META    AVAIL    %USE  VAR   PGS  STATUS
 0  ssd    0.09859  1.00000   101 GiB  1.0 GiB  756 KiB  0 B   21 MiB  100 GiB  0.99  0.91    1  up
 1  ssd    0.09859  1.00000   101 GiB  1.2 GiB  201 MiB  0 B   21 MiB  100 GiB  1.18  1.09    2  up
                    TOTAL     202 GiB  2.2 GiB  201 MiB  0 B   42 MiB  200 GiB  1.09
MIN/MAX VAR: 0.91/1.09  STDDEV: 0.10
</pre>
The script was tested on Reef, but I do not believe the issue is limited to that release.
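A rough reproduction sketch, assuming a small two-OSD test cluster; the pool, object, and file names are illustrative, `rados put --offset` is assumed to be available, and the authoritative steps are in the attached snap-pg-move-history.sh:
<pre>
# Illustrative reproduction sketch (not the attached script itself).
ceph osd pool create TEST 1 1
dd if=/dev/urandom of=obj.dat bs=1M count=1
rados -p TEST put OBJ obj.dat                 # OBJ takes 1M
ceph osd pool mksnap TEST SNAP-1              # pool snapshot SNAP-1
dd if=/dev/urandom of=delta.dat bs=4K count=1
rados -p TEST put OBJ delta.dat --offset 0    # 4K overwrite; head should reuse all but 4K
ceph osd df                                   # note DATA / RAW USE before the move
# Force the PG onto the other OSD (e.g. with pg-upmap-items or by marking the
# current OSD out), wait for recovery to finish, then compare:
ceph osd df
</pre>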
Ceph - Bug #58596 (New): rocksdb: rm_range_keys() (message with 'enter') logs binary data
https://tracker.ceph.com/issues/58596
2023-01-29T07:19:21Z
Ronen Friedman
rfriedma@redhat.com
The log message contains keys in their binary format, which causes problems for grep(1) and editors (and might create a security issue).
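Until the message itself is fixed, a hedged workaround for reading such logs is to force text handling and make control bytes visible (the log file name below is illustrative):
<pre>
# Treat the binary-containing log as text and show non-printable bytes.
grep -a 'rm_range_keys' ceph-osd.0.log | cat -v
</pre>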
bluestore - Backport #55517 (New): quincy: test_cls_rbd.sh: multiple TestClsRbd failures during u...
https://tracker.ceph.com/issues/55517
2022-05-02T17:20:08Z
Backport Bot
bluestore - Bug #55444 (Pending Backport): test_cls_rbd.sh: multiple TestClsRbd failures during u...
https://tracker.ceph.com/issues/55444
2022-04-26T01:14:27Z
Laura Flores
Description: rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/classic thrashosds-health ubuntu_18.04}
/a/lflores-2022-04-22_20:48:19-rados-wip-55324-pacific-backport-distro-default-smithi/6801098
<pre>
2022-04-23T08:54:27.447 INFO:tasks.workunit.client.0.smithi084.stdout:[ RUN ] TestClsRbd.directory_methods
2022-04-23T08:54:27.465 INFO:tasks.workunit.client.0.smithi084.stdout:/build/ceph-14.2.22/src/test/cls_rbd/test_cls_rbd.cc:297: Failure
2022-04-23T08:54:27.465 INFO:tasks.workunit.client.0.smithi084.stdout: Expected: -16
2022-04-23T08:54:27.465 INFO:tasks.workunit.client.0.smithi084.stdout:To be equal to: dir_state_set(&ioctx, oid, cls::rbd::DIRECTORY_STATE_ADD_DISABLED)
2022-04-23T08:54:27.465 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: 0
2022-04-23T08:54:27.466 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.directory_methods (18 ms)
...
2022-04-23T08:54:27.633 INFO:tasks.workunit.client.0.smithi084.stdout:/build/ceph-14.2.22/src/test/cls_rbd/test_cls_rbd.cc:750: Failure
2022-04-23T08:54:27.633 INFO:tasks.workunit.client.0.smithi084.stdout: Expected: 0
2022-04-23T08:54:27.633 INFO:tasks.workunit.client.0.smithi084.stdout:To be equal to: get_parent(&ioctx, oid, 10, &pspec, &size)
2022-04-23T08:54:27.634 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: -22
2022-04-23T08:54:27.634 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.parents_v1 (45 ms)
...
2022-04-23T08:54:27.729 INFO:tasks.workunit.client.0.smithi084.stdout:/build/ceph-14.2.22/src/test/cls_rbd/test_cls_rbd.cc:1008: Failure
2022-04-23T08:54:27.730 INFO:tasks.workunit.client.0.smithi084.stdout: Expected: 1u
2022-04-23T08:54:27.730 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: 1
2022-04-23T08:54:27.730 INFO:tasks.workunit.client.0.smithi084.stdout:To be equal to: snapc.snaps.size()
2022-04-23T08:54:27.730 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: 0
2022-04-23T08:54:27.730 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.snapshots (6 ms)
...
2022-04-23T08:54:27.778 INFO:tasks.workunit.client.0.smithi084.stdout:/build/ceph-14.2.22/src/test/cls_rbd/test_cls_rbd.cc:1437: Failure
2022-04-23T08:54:27.778 INFO:tasks.workunit.client.0.smithi084.stdout: Expected: 2U
2022-04-23T08:54:27.778 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: 2
2022-04-23T08:54:27.778 INFO:tasks.workunit.client.0.smithi084.stdout:To be equal to: pairs.size()
2022-04-23T08:54:27.778 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: 0
2022-04-23T08:54:27.779 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.metadata (6 ms)
... + 22 more failed tests
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[==========] 67 tests from 1 test case ran. (22012 ms total)
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ PASSED ] 41 tests.
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] 26 tests, listed below:
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.directory_methods
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.parents_v1
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.snapshots
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.metadata
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.mirror
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.mirror_image
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.mirror_image_status
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.mirror_image_map
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_dir_list
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_dir_add
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.dir_add_already_existing
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_dir_rename
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_dir_remove
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_dir_remove_missing
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_image_add
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_image_remove
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_image_list
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_image_clean
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.image_group_add
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_snap_set_duplicate_name
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_snap_set
2022-04-23T08:54:39.864 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_snap_list
2022-04-23T08:54:39.864 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_snap_remove
2022-04-23T08:54:39.864 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.trash_methods
2022-04-23T08:54:39.864 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.clone_child
2022-04-23T08:54:39.864 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.namespace_methods
</pre>
bluestore - Bug #53359 (New): bluestore: missing block.db symlinks leads to confusing crash
https://tracker.ceph.com/issues/53359
2021-11-22T16:09:44Z
Sage Weil
sage@newdream.net
A regression in ceph-volume (master branch) led to the block.db symlink not getting created. This leads to OSDs that crash like so:
<pre>
"backtrace": [
"/lib64/libpthread.so.0(+0x12c20) [0x7f3573347c20]",
"gsignal()",
"abort()",
"/lib64/libstdc++.so.6(+0x9009b) [0x7f357295e09b]",
"/lib64/libstdc++.so.6(+0x9653c) [0x7f357296453c]",
"/lib64/libstdc++.so.6(+0x96597) [0x7f3572964597]",
"/lib64/libstdc++.so.6(+0x967f8) [0x7f35729647f8]",
"/usr/bin/ceph-osd(+0x5c7203) [0x55cc53713203]",
"(BlueFS::_open_super()+0x18f) [0x55cc53e66cff]",
"(BlueFS::mount()+0xeb) [0x55cc53e88ddb]",
"(BlueStore::_open_bluefs(bool, bool)+0x94) [0x55cc53d4bad4]",
"(BlueStore::_prepare_db_environment(bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*)+0x6d9) [0x55cc53d4cc29]",
"(BlueStore::_open_db(bool, bool, bool)+0x15c) [0x55cc53d4df4c]",
"(BlueStore::_open_db_and_around(bool, bool)+0x2b4) [0x55cc53dc68d4]",
"(BlueStore::_mount()+0x1ae) [0x55cc53dc971e]",
"(OSD::init()+0x3ba) [0x55cc5385711a]",
"main()",
"__libc_start_main()",
"_start()"
],
"ceph_version": "17.0.0-9073-g6e528ed7",
</pre>
The on-disk block that we are trying to decode is all zeros.

I thought we had a flag somewhere indicating whether a db and/or wal was expected so that we could provide a meaningful/informative error message, but maybe not?

(ceph-volume fix is here: https://github.com/ceph/ceph/pull/44030)
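Until a clearer error message exists, a quick hedged check on an affected host is to verify that the symlink ceph-volume should have created is present and resolves (the path below is the usual default and may differ under cephadm):
<pre>
# Sketch: report OSD data dirs whose block.db symlink is missing or dangling.
for d in /var/lib/ceph/osd/ceph-*; do
    [ -e "$d/block.db" ] || echo "$d: block.db missing or dangling"
done
</pre>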
RADOS - Bug #52513 (New): BlueStore.cc: 12391: ceph_abort_msg("unexpected error") on operation 15
https://tracker.ceph.com/issues/52513
2021-09-06T09:20:28Z
Konstantin Shalygin
k0ste@k0ste.ru
We got a crash of two OSDs simultaneously, both serving PG 17.7ff [684,768,760]:
<pre>
RECENT_CRASH 2 daemons have recently crashed
osd.760 crashed on host meta114 at 2021-09-03 21:50:28.138745Z
osd.768 crashed on host meta115 at 2021-09-03 21:50:28.123223Z
</pre>
It seems that ENOENT ("No such file or directory") is unexpected when the object lock is acquired:
<pre>
-8> 2021-09-04 00:50:28.077 7f3299342700 -1 bluestore(/var/lib/ceph/osd/ceph-768) _txc_add_transaction error (2) No such file or directory not handled on operation 15 (op 0, counting from 0)
-7> 2021-09-04 00:50:28.077 7f3299342700 -1 bluestore(/var/lib/ceph/osd/ceph-768) unexpected error code
-6> 2021-09-04 00:50:28.077 7f3299342700 0 _dump_transaction transaction dump:
{
"ops": [
{
"op_num": 0,
"op_name": "setattrs",
"collection": "17.7ff_head",
"oid": "#17:ffffffff:::%2fv2%2fmeta%2fd732de8b-8b15-5b57-a54a-fc23aadce4fe%2f88e9261c-832b-5d13-9517-40015c81e84e%2f27%2f11033%2f11033693%2f556500fe714ab37.webp:head#",
"attr_lens": {
"_": 376,
"_lock.libcephv2.lock": 153,
"snapset": 35
}
}
]
}
-5> 2021-09-04 00:50:28.117 7f3299342700 -1 /build/ceph-14.2.22/src/os/bluestore/BlueStore.cc: In function 'void BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)' thread 7f3299342700 time 2021-09-04 00:50:28.083637
/build/ceph-14.2.22/src/os/bluestore/BlueStore.cc: 12391: ceph_abort_msg("unexpected error")
ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)
1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xdf) [0x55980fe784c4]
2: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)+0xbde) [0x5598103dbaee]
3: (BlueStore::queue_transactions(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x2aa) [0x5598103e19fa]
4: (non-virtual thunk to PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x54) [0x55981011b514]
5: (ReplicatedBackend::do_repop(boost::intrusive_ptr<OpRequest>)+0xb09) [0x55981021c0a9]
6: (ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x1a7) [0x55981022a407]
7: (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x97) [0x55981012ee57]
8: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x705) [0x5598100dd965]
9: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x1bf) [0x55980fefbd8f]
10: (PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x62) [0x5598101b5b22]
11: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xbf5) [0x55980ff19835]
12: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x4ac) [0x5598105393ec]
13: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55981053c5b0]
14: (()+0x76db) [0x7f32bd6e66db]
15: (clone()+0x3f) [0x7f32bc47d71f]
</pre>
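A hedged way to check what the crashed OSD actually holds for that object (with the OSD stopped) is ceph-objectstore-tool; the data path below is the usual default and the grep pattern is taken from the oid in the transaction dump above:
<pre>
# Sketch: with osd.768 stopped, look for the object in PG 17.7ff and list its attrs.
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-768 \
    --pgid 17.7ff --op list | grep 556500fe714ab37
# then, using the JSON object spec printed above:
#   ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-768 '<object-json>' list-attrs
</pre>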
bluestore - Bug #52464 (New): FAILED ceph_assert(current_shard->second->valid())
https://tracker.ceph.com/issues/52464
2021-08-31T13:57:27Z
Jeff Layton
jlayton@redhat.com
I've got a cephadm cluster I use for testing, and this morning one of the OSDs crashed down in the bluestore code:
<pre>
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: get compressor snappy = 0x55b3c18b1b90
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm::NCB::freelist_type=null
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: freelist init
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: freelist _read_cfg
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: asok(0x55b3c09f0000) register_command bluestore allocator dump block hook 0x55b3c18b1ef0
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: asok(0x55b3c09f0000) register_command bluestore allocator score block hook 0x55b3c18b1ef0
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: asok(0x55b3c09f0000) register_command bluestore allocator fragmentation block hook 0x55b3c18b1ef0
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore::NCB::restore_allocator::file_size=0,sizeof(extent_t)=16
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore::NCB::restore_allocator::No Valid allocation info on disk (empty file)
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc::NCB::restore_allocator() failed!
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc::NCB::Run Full Recovery from ONodes (might take a while) ...
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore::NCB::read_allocation_from_drive_on_startup::Start Allocation Recovery from ONodes ...
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-7195-g7e7326c4/rpm/el8/BUILD/ceph-17.0.0-7195-g7e7326c4/src/kv/RocksDBStore.cc: In function 'bool WholeMergeIteratorImpl::is_main_smaller()' thread 7f2d60f480c0 time 2021-08-31T13:51:40.899594+0000
/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-7195-g7e7326c4/rpm/el8/BUILD/ceph-17.0.0-7195-g7e7326c4/src/kv/RocksDBStore.cc: 2288: FAILED ceph_assert(current_shard->second->valid())
ceph version 17.0.0-7195-g7e7326c4 (7e7326c4231f614aff0f7bef4d72beadce6a9c75) quincy (dev)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x55b3bdcb0b50]
2: /usr/bin/ceph-osd(+0x5ced71) [0x55b3bdcb0d71]
3: (WholeMergeIteratorImpl::is_main_smaller()+0x13b) [0x55b3be8f93db]
4: (WholeMergeIteratorImpl::next()+0x2c) [0x55b3be8f942c]
5: (BlueStore::_open_collections()+0x660) [0x55b3be2e67f0]
6: (BlueStore::read_allocation_from_drive_on_startup()+0x127) [0x55b3be2ffa97]
7: (BlueStore::_init_alloc()+0xa01) [0x55b3be300bd1]
8: (BlueStore::_open_db_and_around(bool, bool)+0x2f4) [0x55b3be3487e4]
9: (BlueStore::_mount()+0x1ae) [0x55b3be34b55e]
10: (OSD::init()+0x3ba) [0x55b3bddec0ba]
11: main()
12: __libc_start_main()
13: _start()
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: *** Caught signal (Aborted) **
in thread 7f2d60f480c0 thread_name:ceph-osd
ceph version 17.0.0-7195-g7e7326c4 (7e7326c4231f614aff0f7bef4d72beadce6a9c75) quincy (dev)
1: /lib64/libpthread.so.0(+0x12b20) [0x7f2d5eeeeb20]
2: gsignal()
3: abort()
4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x55b3bdcb0bae]
5: /usr/bin/ceph-osd(+0x5ced71) [0x55b3bdcb0d71]
6: (WholeMergeIteratorImpl::is_main_smaller()+0x13b) [0x55b3be8f93db]
7: (WholeMergeIteratorImpl::next()+0x2c) [0x55b3be8f942c]
8: (BlueStore::_open_collections()+0x660) [0x55b3be2e67f0]
9: (BlueStore::read_allocation_from_drive_on_startup()+0x127) [0x55b3be2ffa97]
10: (BlueStore::_init_alloc()+0xa01) [0x55b3be300bd1]
11: (BlueStore::_open_db_and_around(bool, bool)+0x2f4) [0x55b3be3487e4]
12: (BlueStore::_mount()+0x1ae) [0x55b3be34b55e]
13: (OSD::init()+0x3ba) [0x55b3bddec0ba]
14: main()
15: __libc_start_main()
16: _start()
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Aug 31 09:51:40 cephadm2 conmon[20474]: -5> 2021-08-31T13:51:40.897+0000 7f2d60f480c0 -1 bluestore::NCB::restore_allocator::No Valid allocation info on disk (empty file)
Aug 31 09:51:40 cephadm2 conmon[20474]: -1> 2021-08-31T13:51:40.903+0000 7f2d60f480c0 -1 /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-7195-g7e7326c4/rpm/el8/BUILD/ceph-17.0.0-7195-g7e7326c4/src/kv/RocksDBStore.cc: In function 'bool WholeMergeIteratorImpl::is_main_smaller()' thread 7f2d60f480c0 time 2021-08-31T13:51:40.899594+0000
Aug 31 09:51:40 cephadm2 conmon[20474]: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-7195-g7e7326c4/rpm/el8/BUILD/ceph-17.0.0-7195-g7e7326c4/src/kv/RocksDBStore.cc: 2288: FAILED ceph_assert(current_shard->second->valid())
Aug 31 09:51:40 cephadm2 conmon[20474]:
Aug 31 09:51:40 cephadm2 conmon[20474]: ceph version 17.0.0-7195-g7e7326c4 (7e7326c4231f614aff0f7bef4d72beadce6a9c75) quincy (dev)
Aug 31 09:51:40 cephadm2 conmon[20474]: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x55b3bdcb0b50]
Aug 31 09:51:40 cephadm2 conmon[20474]: 2: /usr/bin/ceph-osd(+0x5ced71) [0x55b3bdcb0d71]
Aug 31 09:51:40 cephadm2 conmon[20474]: 3: (WholeMergeIteratorImpl::is_main_smaller()+0x13b) [0x55b3be8f93db]
Aug 31 09:51:40 cephadm2 conmon[20474]: 4: (WholeMergeIteratorImpl::next()+0x2c) [0x55b3be8f942c]
Aug 31 09:51:40 cephadm2 conmon[20474]: 5: (BlueStore::_open_collections()+0x660) [0x55b3be2e67f0]
Aug 31 09:51:40 cephadm2 conmon[20474]: 6: (BlueStore::read_allocation_from_drive_on_startup()+0x127) [0x55b3be2ffa97]
Aug 31 09:51:40 cephadm2 conmon[20474]: 7: (BlueStore::_init_alloc()+0xa01) [0x55b3be300bd1]
Aug 31 09:51:40 cephadm2 conmon[20474]: 8: (BlueStore::_open_db_and_around(bool, bool)+0x2f4) [0x55b3be3487e4]
Aug 31 09:51:40 cephadm2 conmon[20474]: 9: (BlueStore::_mount()+0x1ae) [0x55b3be34b55e]
Aug 31 09:51:40 cephadm2 conmon[20474]: 10: (OSD::init()+0x3ba) [0x55b3bddec0ba]
Aug 31 09:51:40 cephadm2 conmon[20474]: 11: main()
Aug 31 09:51:40 cephadm2 conmon[20474]: 12: __libc_start_main()
Aug 31 09:51:40 cephadm2 conmon[20474]: 13: _start()
Aug 31 09:51:40 cephadm2 conmon[20474]:
Aug 31 09:51:40 cephadm2 conmon[20474]: 0> 2021-08-31T13:51:40.907+0000 7f2d60f480c0 -1 *** Caught signal (Aborted) **
Aug 31 09:51:40 cephadm2 conmon[20474]: in thread 7f2d60f480c0 thread_name:ceph-osd
Aug 31 09:51:40 cephadm2 conmon[20474]:
Aug 31 09:51:40 cephadm2 conmon[20474]: ceph version 17.0.0-7195-g7e7326c4 (7e7326c4231f614aff0f7bef4d72beadce6a9c75) quincy (dev)
Aug 31 09:51:40 cephadm2 conmon[20474]: 1: /lib64/libpthread.so.0(+0x12b20) [0x7f2d5eeeeb20]
Aug 31 09:51:40 cephadm2 conmon[20474]: 2: gsignal()
Aug 31 09:51:40 cephadm2 conmon[20474]: 3: abort()
Aug 31 09:51:40 cephadm2 conmon[20474]: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x55b3bdcb0bae]
Aug 31 09:51:40 cephadm2 conmon[20474]: 5: /usr/bin/ceph-osd(+0x5ced71) [0x55b3bdcb0d71]
Aug 31 09:51:40 cephadm2 conmon[20474]: 6: (WholeMergeIteratorImpl::is_main_smaller()+0x13b) [0x55b3be8f93db]
Aug 31 09:51:40 cephadm2 conmon[20474]: 7: (WholeMergeIteratorImpl::next()+0x2c) [0x55b3be8f942c]
Aug 31 09:51:40 cephadm2 conmon[20474]: 8: (BlueStore::_open_collections()+0x660) [0x55b3be2e67f0]
Aug 31 09:51:40 cephadm2 conmon[20474]: 9: (BlueStore::read_allocation_from_drive_on_startup()+0x127) [0x55b3be2ffa97]
Aug 31 09:51:40 cephadm2 conmon[20474]: 10: (BlueStore::_init_alloc()+0xa01) [0x55b3be300bd1]
Aug 31 09:51:40 cephadm2 conmon[20474]: 11: (BlueStore::_open_db_and_around(bool, bool)+0x2f4) [0x55b3be3487e4]
Aug 31 09:51:40 cephadm2 conmon[20474]: 12: (BlueStore::_mount()+0x1ae) [0x55b3be34b55e]
Aug 31 09:51:40 cephadm2 conmon[20474]: 13: (OSD::init()+0x3ba) [0x55b3bddec0ba]
Aug 31 09:51:40 cephadm2 conmon[20474]: 14: main()
Aug 31 09:51:40 cephadm2 conmon[20474]: 15: __libc_start_main()
Aug 31 09:51:40 cephadm2 conmon[20474]: 16: _start()
Aug 31 09:51:40 cephadm2 conmon[20474]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Aug 31 09:51:40 cephadm2 conmon[20474]:
Aug 31 09:51:41 cephadm2 systemd-coredump[20743]: Process 20497 (ceph-osd) of user 167 dumped core.
Aug 31 09:51:41 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Main process exited, code=exited, status=134/n/a
Aug 31 09:51:42 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Failed with result 'exit-code'.
Aug 31 09:51:52 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Service RestartSec=10s expired, scheduling restart.
Aug 31 09:51:52 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Scheduled restart job, restart counter is at 6.
Aug 31 09:51:52 cephadm2 systemd[1]: Stopped Ceph osd.0 for 1d11c63a-09ac-11ec-83e1-52540031ba78.
Aug 31 09:51:52 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Start request repeated too quickly.
Aug 31 09:51:52 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Failed with result 'exit-code'.
Aug 31 09:51:52 cephadm2 systemd[1]: Failed to start Ceph osd.0 for 1d11c63a-09ac-11ec-83e1-52540031ba78.
</pre>
The build I'm using is based on commit a49f10e760b4, with some MDS patches on top (nothing that should affect OSD).
bluestore - Bug #50844 (Triaged): ceph_assert(r == 0) in BlueFS::_rewrite_log_and_layout_sync()
https://tracker.ceph.com/issues/50844
2021-05-17T16:36:35Z
Neha Ojha
nojha@redhat.com
<pre>
2021-05-17T12:08:17.044 INFO:tasks.workunit.client.0.smithi104.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-bluefs-volume-ops.sh:179: TEST_bluestore: ceph-bluestore-tool --path td/osd-bluefs-volume-ops/1 --dev-target td/osd-bluefs-volume-ops/1/db --command bluefs-bdev-new-db
2021-05-17T12:08:17.053 INFO:tasks.workunit.client.0.smithi104.stdout:inferring bluefs devices from bluestore path
2021-05-17T12:08:24.848 INFO:tasks.workunit.client.0.smithi104.stderr:2021-05-17T12:08:24.846+0000 7f30f9fc5400 -1 bluefs _allocate_without_fallback unable to allocate 0x500000 on bdev 0, allocator name bluefs-wal, allocator type hybrid, capacity 0x20000000, block size 0x100000, free 0xff000, fragmentation 0, allocated 0x0
2021-05-17T12:08:24.848 INFO:tasks.workunit.client.0.smithi104.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-4229-gd98b3fc9/rpm/el8/BUILD/ceph-17.0.0-4229-gd98b3fc9/src/os/bluestore/BlueFS.cc: In function 'void BlueFS::_rewrite_log_and_layout_sync(bool, int, int, int, int, std::optional<bluefs_layout_t>)' thread 7f30f9fc5400 time 2021-05-17T12:08:24.846276+0000
2021-05-17T12:08:24.849 INFO:tasks.workunit.client.0.smithi104.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-4229-gd98b3fc9/rpm/el8/BUILD/ceph-17.0.0-4229-gd98b3fc9/src/os/bluestore/BlueFS.cc: 2241: FAILED ceph_assert(r == 0)
2021-05-17T12:08:24.849 INFO:tasks.workunit.client.0.smithi104.stderr: ceph version 17.0.0-4229-gd98b3fc9 (d98b3fc98cdd22d1e98566aab6a991dad70d1b4d) quincy (dev)
2021-05-17T12:08:24.850 INFO:tasks.workunit.client.0.smithi104.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x7f30f8228782]
2021-05-17T12:08:24.850 INFO:tasks.workunit.client.0.smithi104.stderr: 2: /usr/lib64/ceph/libceph-common.so.2(+0x27c98a) [0x7f30f822898a]
2021-05-17T12:08:24.850 INFO:tasks.workunit.client.0.smithi104.stderr: 3: (BlueFS::_rewrite_log_and_layout_sync(bool, int, int, int, int, std::optional<bluefs_layout_t>)+0x108a) [0x564969c2954a]
2021-05-17T12:08:24.851 INFO:tasks.workunit.client.0.smithi104.stderr: 4: (BlueFS::prepare_new_device(int, bluefs_layout_t const&)+0x19f) [0x564969c297bf]
2021-05-17T12:08:24.851 INFO:tasks.workunit.client.0.smithi104.stderr: 5: (BlueStore::add_new_bluefs_device(int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x2f7) [0x564969cde5c7]
2021-05-17T12:08:24.852 INFO:tasks.workunit.client.0.smithi104.stderr: 6: main()
2021-05-17T12:08:24.852 INFO:tasks.workunit.client.0.smithi104.stderr: 7: __libc_start_main()
2021-05-17T12:08:24.852 INFO:tasks.workunit.client.0.smithi104.stderr: 8: _start()
</pre>
/a/sseshasa-2021-05-17_11:08:21-rados-wip-sseshasa-testing-2021-05-17-1504-distro-basic-smithi/6118192
bluestore - Bug #38745 (In Progress): spillover that doesn't make sense
https://tracker.ceph.com/issues/38745
2019-03-14T22:08:56Z
Sage Weil
sage@newdream.net
<pre>
BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD(s)
osd.50 spilled over 1.3 GiB metadata from 'db' device (20 GiB used of 31 GiB) to slow device
osd.94 spilled over 1.1 GiB metadata from 'db' device (16 GiB used of 31 GiB) to slow device
osd.103 spilled over 1.0 GiB metadata from 'db' device (18 GiB used of 31 GiB) to slow device
</pre>
This is on the Sepia lab cluster.
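For reference, a couple of hedged ways to see how much metadata sits on the db device versus the slow device for an affected OSD (exact counter names can vary between releases):
<pre>
ceph health detail                   # per-OSD BLUEFS_SPILLOVER details, as above
ceph daemon osd.50 perf dump bluefs  # compare db_used_bytes with slow_used_bytes
</pre>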
rgw - Feature #20235 (Fix Under Review): vstart.sh fails to start more than one radosgw process
https://tracker.ceph.com/issues/20235
2017-06-09T12:56:17Z
Jens Harbott
j.harbott@x-ion.de
When running something like
<pre>
MON=1 OSD=3 MDS=0 RGW=2 ../src/vstart.sh -d -n -x
</pre>
all the radosgw processes try to listen on the same port, so only one of them starts successfully. It would be nice if vstart.sh instead started them on different ports.
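Until vstart.sh assigns distinct ports itself, a hedged workaround is to start the additional radosgw instances manually on other ports; the client name, port, and paths below are illustrative and assume a matching keyring exists:
<pre>
# Sketch: run a second radosgw from the build directory on port 8001.
./bin/radosgw -c ceph.conf -n client.rgw \
    --rgw-frontends="civetweb port=8001" \
    --log-file=out/radosgw.8001.log
</pre>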
rgw - Cleanup #19851 (In Progress): Move AES_256_CTR to auth/Crypto for others to reuse
https://tracker.ceph.com/issues/19851
2017-05-04T02:55:43Z
Jos Collin
The following warning was introduced by Adam Kupczyk, so I'm creating a tracker for implementing the changes he suggested.
<pre>
ceph/src/rgw/rgw_crypt.cc:38:2: warning: #warning "TODO: move this code to auth/Crypto for others to reuse." [-Wcpp]
 #warning "TODO: move this code to auth/Crypto for others to reuse."
  ^~~~~
ceph/src/rgw/rgw_crypt.cc:247:2: warning: #warning "TODO: use auth/Crypto instead of reimplementing." [-Wcpp]
 #warning "TODO: use auth/Crypto instead of reimplementing."
  ^~~~~
</pre>