Ceph : Issues
https://tracker.ceph.com/
2023-12-19T13:45:42Z
Ceph
Redmine
bluestore - Backport #63853 (New): quincy: ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixSha...
https://tracker.ceph.com/issues/63853
2023-12-19T13:45:42Z
Backport Bot
bluestore - Bug #63769 (Pending Backport): ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixSha...
https://tracker.ceph.com/issues/63769
2023-12-08T12:07:45Z
Igor Fedotov
igor.fedotov@croit.io
<p>An assertion occurs if bluestore_allocator is set to bitmap.<br />Setting bluestore_elastic_shared_blobs to false fixes the issue.</p>
bluestore - Bug #63436 (Pending Backport): Typo in reshard example
https://tracker.ceph.com/issues/63436
2023-11-03T19:36:39Z
Adam Kupczyk
<p>See <a class="external" href="https://tracker.ceph.com/issues/63353">https://tracker.ceph.com/issues/63353</a>.<br />I missed the fact that "o"->"O" should be done too.</p>
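The fix referenced above is about case: the prefixes in a sharding spec are case-sensitive column-family names, so a docs example using a lowercase "o" describes a different layout than one with the uppercase "O". A minimal Python sketch (illustrative only, not Ceph code; the uppercase spec string is an assumption based on the linked issue) of extracting and comparing prefixes:

```python
import re

def shard_prefixes(spec: str) -> list[str]:
    """Extract the column-family prefixes from a sharding spec string."""
    prefixes = []
    for token in spec.split():
        # A token looks like "p(3,0-12)", "o(3,0-13)=block_cache={...}", or bare "l".
        m = re.match(r"[A-Za-z-]+", token)
        if m:
            prefixes.append(m.group(0))
    return prefixes

docs_spec    = 'm(3) p(3,0-12) o(3,0-13)=block_cache={type=binned_lru} l p'
fixed_spec   = 'm(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P'

print(shard_prefixes(docs_spec))   # ['m', 'p', 'o', 'l', 'p']
print(shard_prefixes(fixed_spec))  # ['m', 'p', 'O', 'L', 'P']
```

Because "o" and "O" are distinct prefixes, the two specs do not describe the same set of column families.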
bluestore - Bug #63353 (Pending Backport): resharding RocksDB after upgrade to Pacific breaks OSDs
https://tracker.ceph.com/issues/63353
2023-10-30T13:14:14Z
Denis Polom
<p>Hi</p>
<p>We upgraded our Ceph cluster from the latest Octopus to Pacific 16.2.14 and then followed the docs (<a class="external" href="https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#rocksdb-sharding">https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#rocksdb-sharding</a>) to reshard RocksDB on our OSDs.</p>
<p>Although resharding reports the operation as successful, the OSD fails to start.</p>
<pre>
# ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-5/ --sharding="m(3) p(3,0-12) o(3,0-13)=block_cache={type=binned_lru} l p" reshard
reshard success
</pre>
<pre>
Oct 30 12:44:17 octopus2 ceph-osd[4521]: /build/ceph-16.2.14/src/kv/RocksDBStore.cc: 1223: FAILED ceph_assert(recreate_mode)
Oct 30 12:44:17 octopus2 ceph-osd[4521]: ceph version 16.2.14 (238ba602515df21ea7ffc75c88db29f9e5ef12c9) pacific (stable)
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14b) [0x564047cb92b2]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 2: /usr/bin/ceph-osd(+0xaa948a) [0x564047cb948a]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 3: (RocksDBStore::do_open(std::ostream&, bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x1609) [0x564048794829]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 4: (BlueStore::_open_db(bool, bool, bool)+0x601) [0x564048240421]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 5: (BlueStore::_open_db_and_around(bool, bool)+0x26b) [0x5640482a5f8b]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 6: (BlueStore::_mount()+0x9c) [0x5640482a896c]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 7: (OSD::init()+0x38a) [0x564047daacea]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 8: main()
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 9: __libc_start_main()
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 10: _start()
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 0> 2023-10-30T12:44:17.088+0000 7f4971ed2100 -1 *** Caught signal (Aborted) **
Oct 30 12:44:17 octopus2 ceph-osd[4521]: in thread 7f4971ed2100 thread_name:ceph-osd
Oct 30 12:44:17 octopus2 ceph-osd[4521]: ceph version 16.2.14 (238ba602515df21ea7ffc75c88db29f9e5ef12c9) pacific (stable)
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x12730) [0x7f4972921730]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 2: gsignal()
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 3: abort()
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x19c) [0x564047cb9303]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 5: /usr/bin/ceph-osd(+0xaa948a) [0x564047cb948a]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 6: (RocksDBStore::do_open(std::ostream&, bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x1609) [0x564048794829]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 7: (BlueStore::_open_db(bool, bool, bool)+0x601) [0x564048240421]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 8: (BlueStore::_open_db_and_around(bool, bool)+0x26b) [0x5640482a5f8b]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 9: (BlueStore::_mount()+0x9c) [0x5640482a896c]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 10: (OSD::init()+0x38a) [0x564047daacea]
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 11: main()
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 12: __libc_start_main()
Oct 30 12:44:17 octopus2 ceph-osd[4521]: 13: _start()
Oct 30 12:44:17 octopus2 ceph-osd[4521]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Oct 30 12:44:17 octopus2 ceph-osd[4521]: -1> 2023-10-30T12:44:17.084+0000 7f4971ed2100 -1 /build/ceph-16.2.14/src/kv/RocksDBStore.cc: In function 'int RocksDBStore::do_open(std::ostream&, bool, bool, const string&)' thread 7f4971ed2100 time 2023-10-30T12:44:17.087172+0000
</pre>
<p>Any advice would be appreciated.</p>
<p>Thanks.</p>
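The backtrace suggests what goes wrong at startup: do_open compares the column families found on disk against those implied by the configured sharding spec, and creating the missing ones is only legal while resharding. The following is a hypothetical Python model of that logic, not the actual RocksDBStore code:

```python
# Hypothetical model of RocksDBStore::do_open's consistency check:
# any column family required by the configuration but absent on disk
# may only be created in recreate (reshard) mode, otherwise the OSD
# aborts with ceph_assert(recreate_mode).
def do_open(existing_cfs: set[str], configured_cfs: set[str], recreate_mode: bool) -> None:
    missing_cfs = configured_cfs - existing_cfs
    if missing_cfs:
        assert recreate_mode, f"FAILED ceph_assert(recreate_mode), missing_cfs={len(missing_cfs)}"

# Consistent layout: nothing missing, the OSD starts.
do_open({"m-0", "p-0", "O-0"}, {"m-0", "p-0", "O-0"}, recreate_mode=False)

# A spec whose case differs from what reshard actually wrote ("o" vs "O")
# leaves a column family "missing" at OSD start and trips the assert.
try:
    do_open({"m-0", "p-0", "o-0"}, {"m-0", "p-0", "O-0"}, recreate_mode=False)
except AssertionError as e:
    print(e)
```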
bluestore - Bug #63121 (Pending Backport): KeyValueDB/KVTest.RocksDB_estimate_size tests failing
https://tracker.ceph.com/issues/63121
2023-10-06T08:55:03Z
Aishwarya Mathuria
<pre>
2023-10-06T00:39:53.879 INFO:teuthology.orchestra.run.smithi184.stdout:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/18.0.0-6569-g0e9a2b0e/rpm/el8/BUILD/ceph-18.0.0-6569-g0e9a2b0e/src/test/objectstore/test_kv.cc:567: Failure
2023-10-06T00:39:53.879 INFO:teuthology.orchestra.run.smithi184.stdout:Expected: (size_a) > ((test + 1) * 1000 * 100 * 0.5), actual: 3987 vs 50000
2023-10-06T00:39:53.879 INFO:teuthology.orchestra.run.smithi184.stderr:2023-10-06T00:39:53.876+0000 7f31711318c0 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
2023-10-06T00:39:53.879 INFO:teuthology.orchestra.run.smithi184.stderr:2023-10-06T00:39:53.876+0000 7f31711318c0 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
2023-10-06T00:39:53.880 INFO:teuthology.orchestra.run.smithi184.stdout:==> rm -r kv_test_temp_dir
2023-10-06T00:39:53.882 INFO:teuthology.orchestra.run.smithi184.stdout:[ FAILED ] KeyValueDB/KVTest.RocksDB_estimate_size/0, where GetParam() = "rocksdb" (344 ms)
2023-10-06T00:39:54.332 INFO:teuthology.orchestra.run.smithi184.stdout:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/18.0.0-6569-g0e9a2b0e/rpm/el8/BUILD/ceph-18.0.0-6569-g0e9a2b0e/src/test/objectstore/test_kv.cc:599: Failure
2023-10-06T00:39:54.332 INFO:teuthology.orchestra.run.smithi184.stdout:Expected: (size_a) > ((test + 1) * 1000 * 100 * 0.5), actual: 3917 vs 50000
2023-10-06T00:39:54.332 INFO:teuthology.orchestra.run.smithi184.stderr:2023-10-06T00:39:54.330+0000 7f31711318c0 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
2023-10-06T00:39:54.333 INFO:teuthology.orchestra.run.smithi184.stderr:2023-10-06T00:39:54.330+0000 7f31711318c0 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
2023-10-06T00:39:54.333 INFO:teuthology.orchestra.run.smithi184.stdout:==> rm -r kv_test_temp_dir
2023-10-06T00:39:54.335 INFO:teuthology.orchestra.run.smithi184.stdout:[ FAILED ] KeyValueDB/KVTest.RocksDB_estimate_size_column_family/0, where GetParam() = "rocksdb" (454 ms)
</pre>
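For reference, the failing EXPECT_GT threshold for the first iteration works out as follows (assuming, as the expression in the log suggests, that each round writes (test + 1) * 1000 keys of roughly 100 bytes and expects RocksDB's size estimate to reach at least half of that payload):

```python
# Arithmetic behind "Expected: (size_a) > ((test + 1) * 1000 * 100 * 0.5)".
def estimate_threshold(test: int, keys_per_round: int = 1000, value_size: int = 100) -> float:
    return (test + 1) * keys_per_round * value_size * 0.5

print(estimate_threshold(0))             # 50000.0 -- the "vs 50000" in the log
print(3987 > estimate_threshold(0))      # False -> EXPECT_GT fails
```

The reported estimate of 3987 bytes is more than an order of magnitude below the threshold, so this looks like the estimate collapsing rather than a borderline miss.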
bluestore - Bug #62730 (New): ceph-bluestore-tool reshard broken
https://tracker.ceph.com/issues/62730
2023-09-06T19:49:52Z
Adam Kupczyk
<p>It is possible to specify the same prefix twice. Here is an example with "p" defined twice.</p>
<pre>
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-1 --sharding="m(3) p(3,0-12) o(3,0-13)=block_cache={type=binned_lru} l p" reshard
</pre>
<p>After resharding we get:</p>
<pre>
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: do_open existing_cfs=11
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=l shard_idx=0 hash_l=0 hash_h=4294967295 handle=0x55e12cbf89e0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=m shard_idx=0 hash_l=0 hash_h=4294967295 handle=0x55e12cbf8300
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=m shard_idx=1 hash_l=0 hash_h=4294967295 handle=0x55e12cbf8980
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=m shard_idx=2 hash_l=0 hash_h=4294967295 handle=0x55e12cbf91c0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=o shard_idx=0 hash_l=0 hash_h=13 handle=0x55e12cbf8cc0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=o shard_idx=1 hash_l=0 hash_h=13 handle=0x55e12cbf8ec0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=o shard_idx=2 hash_l=0 hash_h=13 handle=0x55e12cbf8bc0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=p shard_idx=0 hash_l=0 hash_h=12 handle=0x55e12cbf8ce0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=p shard_idx=1 hash_l=0 hash_h=12 handle=0x55e12cbf8c00
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=p shard_idx=2 hash_l=0 hash_h=12 handle=0x55e12cc92e20
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: do_open missing_cfs=1
2023-07-11T15:34:08.605+0000 7f138baaf200 -1 /builddir/build/BUILD/ceph-16.2.10/src/kv/RocksDBStore.cc: In function 'int RocksDBStore::do_open(std::ostream&, bool, bool, const string&)' thread 7f138baaf200 time 2023-07-11T15:34:08.602980+0000
/builddir/build/BUILD/ceph-16.2.10/src/kv/RocksDBStore.cc: 1215: FAILED ceph_assert(recreate_mode)
</pre>
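A validation pass the tool could perform is sketched below, purely as an illustration (this is not ceph-bluestore-tool code): reject a spec in which the same column-family prefix occurs more than once.

```python
import re
from collections import Counter

def duplicate_prefixes(spec: str) -> list[str]:
    """Return the column-family prefixes that appear more than once in a sharding spec."""
    prefixes = [re.match(r"[A-Za-z-]+", tok).group(0) for tok in spec.split()]
    return [p for p, n in Counter(prefixes).items() if n > 1]

spec = 'm(3) p(3,0-12) o(3,0-13)=block_cache={type=binned_lru} l p'
print(duplicate_prefixes(spec))  # ['p'] -- should be rejected before resharding
```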
bluestore - Bug #62175 (Need More Info): "rados get" command can't get data when bluestore uses L...
https://tracker.ceph.com/issues/62175
2023-07-26T08:32:43Z
taki zhao
<p>I set the buffer cache of bluestore to LRU.</p>
<p>When I put an object of about 100MB to rados using the "./bin/rados -p cache_pool put tar /root/Boost.tar.bz2" command, I cannot get all the data back using the "./bin/rados -p cache_pool get tar /root/tmp1" command (I get a /root/tmp1 of about 12MB), and the command does not seem to terminate.</p>
<p>It is worth mentioning that I am using the vstart environment, and the vstart startup command is "../src/vstart.sh --debug --new -x --localhost --bluestore".</p>
<p><b>backtrace</b></p>
<pre>
0> 2023-07-26T15:48:58.726+0800 fff80e5caf90 -1 *** Caught signal (Aborted) **
in thread fff80e5caf90 thread_name:tp_osd_tp
ceph version b6a97989f (3b6a97989fdad7cc894771fbfe9f1ce241f2ada1) quincy (stable)
1: __kernel_rt_sigreturn()
2: gsignal()
3: abort()
4: /usr/lib64/libc.so.6(+0x2fa0c) [0xfffc2dbcfa0c]
5: /usr/lib64/libc.so.6(+0x2fa8c) [0xfffc2dbcfa8c]
6: (LruBufferCacheShard::_rm(BlueStore::Buffer*)+0xf4) [0xaaac355770bc]
7: (BlueStore::BufferSpace::_rm_buffer(BlueStore::BufferCacheShard*, std::_Rb_tree_iterator<std::pair<unsigned int const, std::unique_ptr<BlueStore::Buffer, std::default_delete<BlueStore::Buffer> > > >)+0x40) [0xaaac35582a5c]
8: (LruBufferCacheShard::_trim_to(unsigned long)+0xd8) [0xaaac355aab6c]
9: (BlueStore::BufferSpace::did_read(BlueStore::BufferCacheShard*, unsigned int, ceph::buffer::v15_2_0::list&)+0x210) [0xaaac355a4114]
10: (BlueStore::_generate_read_result_bl(boost::intrusive_ptr<BlueStore::Onode>, unsigned long, unsigned long, std::map<unsigned long, ceph::buffer::v15_2_0::list, std::less<unsigned long>, std::allocator<std::pair<unsigned long const, ceph::buffer::v15_2_0::list> > >&, std::vector<ceph::buffer::v15_2_0::list, std::allocator<ceph::buffer::v15_2_0::list> >&, std::map<boost::intrusive_ptr<BlueStore::Blob>, std::__cxx11::list<BlueStore::read_req_t, std::allocator<BlueStore::read_req_t> >, std::less<boost::intrusive_ptr<BlueStore::Blob> >, std::allocator<std::pair<boost::intrusive_ptr<BlueStore::Blob> const, std::__cxx11::list<BlueStore::read_req_t, std::allocator<BlueStore::read_req_t> > > > >&, bool, bool*, ceph::buffer::v15_2_0::list&)+0x5f8) [0xaaac355140d4]
11: (BlueStore::_do_read(BlueStore::Collection*, boost::intrusive_ptr<BlueStore::Onode>, unsigned long, unsigned long, ceph::buffer::v15_2_0::list&, unsigned int, unsigned long)+0x81c) [0xaaac35520770]
12: (BlueStore::read(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, ghobject_t const&, unsigned long, unsigned long, ceph::buffer::v15_2_0::list&, unsigned int)+0x3a8) [0xaaac355581d4]
13: (ReplicatedBackend::objects_read_sync(hobject_t const&, unsigned long, unsigned long, unsigned int, ceph::buffer::v15_2_0::list*)+0x94) [0xaaac35385030]
14: (PrimaryLogPG::do_read(PrimaryLogPG::OpContext*, OSDOp&)+0x738) [0xaaac350a2218]
15: (PrimaryLogPG::do_osd_ops(PrimaryLogPG::OpContext*, std::vector<OSDOp, std::allocator<OSDOp> >&)+0xa08) [0xaaac350dbfc4]
16: (PrimaryLogPG::prepare_transaction(PrimaryLogPG::OpContext*)+0x158) [0xaaac350ebfe8]
17: (PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)+0x474) [0xaaac350ec83c]
18: (PrimaryLogPG::do_op(boost::intrusive_ptr<OpRequest>&)+0x2f68) [0xaaac350f0a50]
19: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0xad4) [0xaaac350f701c]
20: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x470) [0xaaac34f5dcfc]
21: (ceph::osd::scheduler::PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x80) [0xaaac35260fb0]
22: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x26ec) [0xaaac34f8b0c4]
23: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x830) [0xaaac356e0fe0]
24: (ShardedThreadPool::WorkThreadSharded::entry()+0x14) [0xaaac356e3c8c]
25: (Thread::entry_wrapper()+0x4c) [0xaaac356ccfdc]
26: (Thread::_entry_func(void*)+0xc) [0xaaac356ccffc]
27: /usr/lib64/libpthread.so.0(+0x88cc) [0xfffc2e1388cc]
28: /usr/lib64/libc.so.6(+0xda12c) [0xfffc2dc7a12c]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
</pre>
RADOS - Bug #59099 (New): PG move causes data duplication
https://tracker.ceph.com/issues/59099
2023-03-17T13:51:03Z
Adam Kupczyk
<p>Let's imagine we have a pool TEST.<br />In a PG we have an object OBJ of size 1M.</p>
<p>We create snap SNAP-1 and write some 4K to OBJ.<br />As a result we get OBJ.1 that takes 1M and OBJ.head that reuses all but 4K.<br />The total data usage is 1M + 4K.</p>
<p>Now we move the PG to another OSD.<br />In some cases OBJ.head + OBJ.1 will take 2M.</p>
<p>An example of this happening is in the attachment snap-pg-move-history.sh.<br />When the data is on the original PG on OSD.0:</p>
<pre>
ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP META   AVAIL   %USE VAR  PGS STATUS
 0  ssd  0.09859 1.00000  101 GiB 1.1 GiB 101 MiB  0 B 21 MiB 100 GiB 1.09 1.05   2 up
 1  ssd  0.09859 1.00000  101 GiB 1.0 GiB 740 KiB  0 B 20 MiB 100 GiB 0.99 0.95   1 up
            TOTAL         202 GiB 2.1 GiB 101 MiB  0 B 41 MiB 200 GiB 1.04
MIN/MAX VAR: 0.95/1.05 STDDEV: 0.05
</pre>
<p>And after forcibly moving the PG to the other OSD:</p>
<pre>
ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP META   AVAIL   %USE VAR  PGS STATUS
 0  ssd  0.09859 1.00000  101 GiB 1.0 GiB 756 KiB  0 B 21 MiB 100 GiB 0.99 0.91   1 up
 1  ssd  0.09859 1.00000  101 GiB 1.2 GiB 201 MiB  0 B 21 MiB 100 GiB 1.18 1.09   2 up
            TOTAL         202 GiB 2.2 GiB 201 MiB  0 B 42 MiB 200 GiB 1.09
MIN/MAX VAR: 0.91/1.09 STDDEV: 0.10
</pre>
<p>The script was tested on Reef, but I do not believe the issue is limited to that release.</p>
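The space accounting described in the report can be modeled with a toy calculation (assumed sizes, not Ceph code): before the move, the snapshot clone and the head share all but the overwritten 4K; a move that copies each object independently stores both in full.

```python
MB = 1024 * 1024
K4 = 4 * 1024

obj_size = 1 * MB

# Clone-sharing layout: one full copy plus the single rewritten 4K block.
shared_usage = obj_size + K4

# Naive per-object copy: OBJ.head and OBJ.1 each stored whole.
naive_copy_usage = obj_size + obj_size

print(shared_usage)      # 1052672  (~1M + 4K)
print(naive_copy_usage)  # 2097152  (2M)
```

The jump from ~101 MiB to ~201 MiB of DATA in the two tables above matches this doubling.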
bluestore - Bug #58022 (Pending Backport): Fragmentation score rising by seemingly stuck thread
https://tracker.ceph.com/issues/58022
2022-11-14T17:06:44Z
Kevin Fox
<p>Due to issue <a class="external" href="https://tracker.ceph.com/issues/57672">https://tracker.ceph.com/issues/57672</a> we've been monitoring our clusters closely to ensure they don't run into the same issue. We have a cluster running 16.2.9 that is showing weird/bad behavior.</p>
<p>We've noticed some OSDs suddenly start increasing their fragmentation at a constant rate until they are restarted. They then settle down and reduce their fragmentation very slowly.</p>
<p>Talking with @Vikhyat a bit, the theory was that maybe compaction was kicking in repeatedly. We used ceph_rocksdb_log_parser.py on one of the runaway OSDs and didn't see a significant number of compaction events during the period of runaway fragmentation, so that is unlikely to be the cause.</p>
<p>Please see the attached screenshot. You can see the runaway OSDs climb over multiple days and then, when we restart them, level off and slowly decrease.</p>
<p>If it were workload related, we would expect fragmentation to keep growing after the restart as the workload continues. But the behavior stops immediately on restart, so it feels like some thread in the OSD is doing something unusual until it is restarted.</p>
bluestore - Backport #55517 (New): quincy: test_cls_rbd.sh: multiple TestClsRbd failures during u...
https://tracker.ceph.com/issues/55517
2022-05-02T17:20:08Z
Backport Bot
bluestore - Bug #55444 (Pending Backport): test_cls_rbd.sh: multiple TestClsRbd failures during u...
https://tracker.ceph.com/issues/55444
2022-04-26T01:14:27Z
Laura Flores
<p>Description: rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/classic thrashosds-health ubuntu_18.04}</p>
<p>/a/lflores-2022-04-22_20:48:19-rados-wip-55324-pacific-backport-distro-default-smithi/6801098<br /><pre><code class="text syntaxhl"><span class="CodeRay">2022-04-23T08:54:27.447 INFO:tasks.workunit.client.0.smithi084.stdout:[ RUN ] TestClsRbd.directory_methods
2022-04-23T08:54:27.465 INFO:tasks.workunit.client.0.smithi084.stdout:/build/ceph-14.2.22/src/test/cls_rbd/test_cls_rbd.cc:297: Failure
2022-04-23T08:54:27.465 INFO:tasks.workunit.client.0.smithi084.stdout: Expected: -16
2022-04-23T08:54:27.465 INFO:tasks.workunit.client.0.smithi084.stdout:To be equal to: dir_state_set(&ioctx, oid, cls::rbd::DIRECTORY_STATE_ADD_DISABLED)
2022-04-23T08:54:27.465 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: 0
2022-04-23T08:54:27.466 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.directory_methods (18 ms)
...
2022-04-23T08:54:27.633 INFO:tasks.workunit.client.0.smithi084.stdout:/build/ceph-14.2.22/src/test/cls_rbd/test_cls_rbd.cc:750: Failure
2022-04-23T08:54:27.633 INFO:tasks.workunit.client.0.smithi084.stdout: Expected: 0
2022-04-23T08:54:27.633 INFO:tasks.workunit.client.0.smithi084.stdout:To be equal to: get_parent(&ioctx, oid, 10, &pspec, &size)
2022-04-23T08:54:27.634 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: -22
2022-04-23T08:54:27.634 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.parents_v1 (45 ms)
...
2022-04-23T08:54:27.729 INFO:tasks.workunit.client.0.smithi084.stdout:/build/ceph-14.2.22/src/test/cls_rbd/test_cls_rbd.cc:1008: Failure
2022-04-23T08:54:27.730 INFO:tasks.workunit.client.0.smithi084.stdout: Expected: 1u
2022-04-23T08:54:27.730 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: 1
2022-04-23T08:54:27.730 INFO:tasks.workunit.client.0.smithi084.stdout:To be equal to: snapc.snaps.size()
2022-04-23T08:54:27.730 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: 0
2022-04-23T08:54:27.730 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.snapshots (6 ms)
...
2022-04-23T08:54:27.778 INFO:tasks.workunit.client.0.smithi084.stdout:/build/ceph-14.2.22/src/test/cls_rbd/test_cls_rbd.cc:1437: Failure
2022-04-23T08:54:27.778 INFO:tasks.workunit.client.0.smithi084.stdout: Expected: 2U
2022-04-23T08:54:27.778 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: 2
2022-04-23T08:54:27.778 INFO:tasks.workunit.client.0.smithi084.stdout:To be equal to: pairs.size()
2022-04-23T08:54:27.778 INFO:tasks.workunit.client.0.smithi084.stdout: Which is: 0
2022-04-23T08:54:27.779 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.metadata (6 ms)
... + 22 more failed tests
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[==========] 67 tests from 1 test case ran. (22012 ms total)
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ PASSED ] 41 tests.
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] 26 tests, listed below:
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.directory_methods
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.parents_v1
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.snapshots
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.metadata
2022-04-23T08:54:39.861 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.mirror
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.mirror_image
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.mirror_image_status
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.mirror_image_map
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_dir_list
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_dir_add
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.dir_add_already_existing
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_dir_rename
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_dir_remove
2022-04-23T08:54:39.862 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_dir_remove_missing
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_image_add
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_image_remove
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_image_list
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_image_clean
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.image_group_add
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_snap_set_duplicate_name
2022-04-23T08:54:39.863 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_snap_set
2022-04-23T08:54:39.864 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_snap_list
2022-04-23T08:54:39.864 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.group_snap_remove
2022-04-23T08:54:39.864 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.trash_methods
2022-04-23T08:54:39.864 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.clone_child
2022-04-23T08:54:39.864 INFO:tasks.workunit.client.0.smithi084.stdout:[ FAILED ] TestClsRbd.namespace_methods
</span></code></pre></p>
bluestore - Bug #53359 (New): bluestore: missing block.db symlinks leads to confusing crash
https://tracker.ceph.com/issues/53359
2021-11-22T16:09:44Z
Sage Weil
sage@newdream.net
<p>A regression in ceph-volume (master branch) led to the block.db symlink not getting created. This leads to OSDs that crash like so:</p>
<pre>
"backtrace": [
"/lib64/libpthread.so.0(+0x12c20) [0x7f3573347c20]",
"gsignal()",
"abort()",
"/lib64/libstdc++.so.6(+0x9009b) [0x7f357295e09b]",
"/lib64/libstdc++.so.6(+0x9653c) [0x7f357296453c]",
"/lib64/libstdc++.so.6(+0x96597) [0x7f3572964597]",
"/lib64/libstdc++.so.6(+0x967f8) [0x7f35729647f8]",
"/usr/bin/ceph-osd(+0x5c7203) [0x55cc53713203]",
"(BlueFS::_open_super()+0x18f) [0x55cc53e66cff]",
"(BlueFS::mount()+0xeb) [0x55cc53e88ddb]",
"(BlueStore::_open_bluefs(bool, bool)+0x94) [0x55cc53d4bad4]",
"(BlueStore::_prepare_db_environment(bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*)+0x6d9) [0x55cc53d4cc29]",
"(BlueStore::_open_db(bool, bool, bool)+0x15c) [0x55cc53d4df4c]",
"(BlueStore::_open_db_and_around(bool, bool)+0x2b4) [0x55cc53dc68d4]",
"(BlueStore::_mount()+0x1ae) [0x55cc53dc971e]",
"(OSD::init()+0x3ba) [0x55cc5385711a]",
"main()",
"__libc_start_main()",
"_start()"
],
"ceph_version": "17.0.0-9073-g6e528ed7",
</pre>
<p>The on-disk block that we are trying to decode is all zeros.</p>
<p>I thought we had a flag somewhere indicating whether a db and/or wal was expected so that we could provide a meaningful/informative error message, but maybe not?</p>
<p>(ceph-volume fix is here: <a class="external" href="https://github.com/ceph/ceph/pull/44030">https://github.com/ceph/ceph/pull/44030</a>)</p>
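As a sketch of the kind of guard the report asks about (purely hypothetical, since whether BlueStore records such an "expected" flag is exactly the open question), a pre-mount check could fail with a clear message instead of crashing while decoding an all-zero superblock:

```python
import os

def check_expected_links(osd_dir: str, expect_db: bool, expect_wal: bool) -> None:
    """Fail early if a separate DB/WAL device is expected but its symlink is absent."""
    for name, expected in (("block.db", expect_db), ("block.wal", expect_wal)):
        path = os.path.join(osd_dir, name)
        if expected and not os.path.islink(path):
            raise RuntimeError(f"{path} missing: OSD was deployed with a separate "
                               f"{name} device but the symlink was not created")
```

The expect_db/expect_wal inputs are the hypothetical flags; today nothing equivalent appears to gate the mount path, which is why the failure surfaces as a BlueFS::_open_super() abort.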
RADOS - Bug #52513 (New): BlueStore.cc: 12391: ceph_abort_msg(\"unexpected error\") on operation 15
https://tracker.ceph.com/issues/52513
2021-09-06T09:20:28Z
Konstantin Shalygin
k0ste@k0ste.ru
<p>We got a crash of two OSDs that were simultaneously serving PG 17.7ff [684,768,760]</p>
<pre>
RECENT_CRASH 2 daemons have recently crashed
osd.760 crashed on host meta114 at 2021-09-03 21:50:28.138745Z
osd.768 crashed on host meta115 at 2021-09-03 21:50:28.123223Z
</pre>
<p>It seems ENOENT (No such file or directory) is unexpected when an object lock is acquired</p>
<pre>
-8> 2021-09-04 00:50:28.077 7f3299342700 -1 bluestore(/var/lib/ceph/osd/ceph-768) _txc_add_transaction error (2) No such file or directory not handled on operation 15 (op 0, counting from 0)
-7> 2021-09-04 00:50:28.077 7f3299342700 -1 bluestore(/var/lib/ceph/osd/ceph-768) unexpected error code
-6> 2021-09-04 00:50:28.077 7f3299342700 0 _dump_transaction transaction dump:
{
"ops": [
{
"op_num": 0,
"op_name": "setattrs",
"collection": "17.7ff_head",
"oid": "#17:ffffffff:::%2fv2%2fmeta%2fd732de8b-8b15-5b57-a54a-fc23aadce4fe%2f88e9261c-832b-5d13-9517-40015c81e84e%2f27%2f11033%2f11033693%2f556500fe714ab37.webp:head#",
"attr_lens": {
"_": 376,
"_lock.libcephv2.lock": 153,
"snapset": 35
}
}
]
}
-5> 2021-09-04 00:50:28.117 7f3299342700 -1 /build/ceph-14.2.22/src/os/bluestore/BlueStore.cc: In function 'void BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)' thread 7f3299342700 time 2021-09-04 00:50:28.083637
/build/ceph-14.2.22/src/os/bluestore/BlueStore.cc: 12391: ceph_abort_msg("unexpected error")
ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)
1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xdf) [0x55980fe784c4]
2: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)+0xbde) [0x5598103dbaee]
3: (BlueStore::queue_transactions(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x2aa) [0x5598103e19fa]
4: (non-virtual thunk to PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x54) [0x55981011b514]
5: (ReplicatedBackend::do_repop(boost::intrusive_ptr<OpRequest>)+0xb09) [0x55981021c0a9]
6: (ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x1a7) [0x55981022a407]
7: (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x97) [0x55981012ee57]
8: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x705) [0x5598100dd965]
9: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x1bf) [0x55980fefbd8f]
10: (PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x62) [0x5598101b5b22]
11: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xbf5) [0x55980ff19835]
12: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x4ac) [0x5598105393ec]
13: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55981053c5b0]
14: (()+0x76db) [0x7f32bd6e66db]
15: (clone()+0x3f) [0x7f32bc47d71f]
</pre>
bluestore - Bug #38363 (Need More Info): Failure in assert when calling: ceph-volume lvm prepare ...
https://tracker.ceph.com/issues/38363
2019-02-18T13:41:08Z
Rainer Krienke
<p>I run Ubuntu 18.04 and ceph version 13.2.4-1bionic from this repo: <a class="external" href="https://download.ceph.com/debian-mimic">https://download.ceph.com/debian-mimic</a>.</p>
<p>When I try to create a new bluestore OSD on several 4TB disks I get an error I first thought was related to <a class="external" href="http://tracker.ceph.com/issues/15386">http://tracker.ceph.com/issues/15386</a> (_read_fsid unparsable uuid). However, a user on the ceph-users list gave me a hint that the assertion failure in the log I posted is the real problem, not the _read_fsid unparsable uuid message, so I created this new bug report. The same also happens when I omit the --bluestore option.</p>
<p>So here is the complete log for a run of ceph-volume to create an OSD, which fails reproducibly. I also tried several different devices but the result was always the same:</p>
<pre>
# ceph-volume lvm prepare --bluestore --data /dev/sdg
</pre>
<pre>
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a87c3a87-cf22-41df-af4b-c971ed4c0e1a
Running command: /sbin/vgcreate --force --yes ceph-22a3d361-78b5-40b4-8af3-74b1efe1b65a /dev/sdg
 stdout: Physical volume "/dev/sdg" successfully created.
 stdout: Volume group "ceph-22a3d361-78b5-40b4-8af3-74b1efe1b65a" successfully created
Running command: /sbin/lvcreate --yes -l 100%FREE -n osd-block-a87c3a87-cf22-41df-af4b-c971ed4c0e1a ceph-22a3d361-78b5-40b4-8af3-74b1efe1b65a
 stdout: Logical volume "osd-block-a87c3a87-cf22-41df-af4b-c971ed4c0e1a" created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
--> Absolute path not found for executable: restorecon
--> Ensure $PATH environment variable contains common executable locations
Running command: /bin/chown -h ceph:ceph /dev/ceph-22a3d361-78b5-40b4-8af3-74b1efe1b65a/osd-block-a87c3a87-cf22-41df-af4b-c971ed4c0e1a
Running command: /bin/chown -R ceph:ceph /dev/dm-8
Running command: /bin/ln -s /dev/ceph-22a3d361-78b5-40b4-8af3-74b1efe1b65a/osd-block-a87c3a87-cf22-41df-af4b-c971ed4c0e1a /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
 stderr: got monmap epoch 1
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQAQtGpcjkxOMxAARlPykBaxHWqIyndvjTMNuQ==
 stdout: creating /var/lib/ceph/osd/ceph-0/keyring
added entity osd.0 auth auth(auid = 18446744073709551615 key=AQAQtGpcjkxOMxAARlPykBaxHWqIyndvjTMNuQ== with 0 caps)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid a87c3a87-cf22-41df-af4b-c971ed4c0e1a --setuser ceph --setgroup ceph
 stderr: 2019-02-18 14:33:07.093 7fb9508d5240 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
 stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: In function 'virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*, IOContext*, bool)' thread 7fb9508d5240 time 2019-02-18 14:33:07.155877
 stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: 821: FAILED assert((uint64_t)r == len)
 stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)
 stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x7fb947cf53e2]
 stderr: 2: (()+0x26d5a7) [0x7fb947cf55a7]
 stderr: 3: (KernelDevice::read(unsigned long, unsigned long, ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x55a21e5d4817]
 stderr: 4: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0x435) [0x55a21e5945c5]
 stderr: 5: (BlueFS::_replay(bool, bool)+0x214) [0x55a21e59a434]
 stderr: 6: (BlueFS::mount()+0x1f1) [0x55a21e59ec81]
 stderr: 7: (BlueStore::_open_db(bool, bool)+0x17cd) [0x55a21e4c504d]
 stderr: 8: (BlueStore::mkfs()+0x805) [0x55a21e4f5fe5]
 stderr: 9: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x1b0) [0x55a21e09e480]
 stderr: 10: (main()+0x4222) [0x55a21df85462]
 stderr: 11: (__libc_start_main()+0xe7) [0x7fb9452b7b97]
 stderr: 12: (_start()+0x2a) [0x55a21e04e95a]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
 stderr: 2019-02-18 14:33:07.157 7fb9508d5240 -1 /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: In function 'virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*, IOContext*, bool)' thread 7fb9508d5240 time 2019-02-18 14:33:07.155877
 stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: 821: FAILED assert((uint64_t)r == len)
 stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)
 stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x7fb947cf53e2]
 stderr: 2: (()+0x26d5a7) [0x7fb947cf55a7]
 stderr: 3: (KernelDevice::read(unsigned long, unsigned long, ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x55a21e5d4817]
 stderr: 4: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0x435) [0x55a21e5945c5]
 stderr: 5: (BlueFS::_replay(bool, bool)+0x214) [0x55a21e59a434]
 stderr: 6: (BlueFS::mount()+0x1f1) [0x55a21e59ec81]
 stderr: 7: (BlueStore::_open_db(bool, bool)+0x17cd) [0x55a21e4c504d]
 stderr: 8: (BlueStore::mkfs()+0x805) [0x55a21e4f5fe5]
 stderr: 9: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x1b0) [0x55a21e09e480]
 stderr: 10: (main()+0x4222) [0x55a21df85462]
 stderr: 11: (__libc_start_main()+0xe7) [0x7fb9452b7b97]
 stderr: 12: (_start()+0x2a) [0x55a21e04e95a]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
 stderr: -25> 2019-02-18 14:33:07.093 7fb9508d5240 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
 stderr: 0> 2019-02-18 14:33:07.157 7fb9508d5240 -1 /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: In function 'virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*, IOContext*, bool)' thread 7fb9508d5240 time 2019-02-18 14:33:07.155877
 stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: 821: FAILED assert((uint64_t)r == len)
 stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)
 stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x7fb947cf53e2]
 stderr: 2: (()+0x26d5a7) [0x7fb947cf55a7]
 stderr: 3: (KernelDevice::read(unsigned long, unsigned long, ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x55a21e5d4817]
 stderr: 4: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0x435) [0x55a21e5945c5]
 stderr: 5: (BlueFS::_replay(bool, bool)+0x214) [0x55a21e59a434]
 stderr: 6: (BlueFS::mount()+0x1f1) [0x55a21e59ec81]
 stderr: 7: (BlueStore::_open_db(bool, bool)+0x17cd) [0x55a21e4c504d]
 stderr: 8: (BlueStore::mkfs()+0x805) [0x55a21e4f5fe5]
 stderr: 9: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x1b0) [0x55a21e09e480]
 stderr: 10: (main()+0x4222) [0x55a21df85462]
 stderr: 11: (__libc_start_main()+0xe7) [0x7fb9452b7b97]
 stderr: 12: (_start()+0x2a) [0x55a21e04e95a]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
 stderr: *** Caught signal (Aborted) **
 stderr: in thread 7fb9508d5240 thread_name:ceph-osd
 stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)
 stderr: 1: (()+0x92aa40) [0x55a21e5e5a40]
 stderr: 2: (()+0x12890) [0x7fb9463f9890]
 stderr: 3: (gsignal()+0xc7) [0x7fb9452d4e97]
 stderr: 4: (abort()+0x141) [0x7fb9452d6801]
 stderr: 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x250) [0x7fb947cf5530]
</pre>
stderr: 6: (()+0x26d5a7) [0x7fb947cf55a7]<br /> stderr: 7: (KernelDevice::read(unsigned long, unsigned long, ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x55a21e5d4817]<br /> stderr: 8: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0x435) [0x55a21e5945c5]<br /> stderr: 9: (BlueFS::_replay(bool, bool)+0x214) [0x55a21e59a434]<br /> stderr: 10: (BlueFS::mount()+0x1f1) [0x55a21e59ec81]<br /> stderr: 11: (BlueStore::_open_db(bool, bool)+0x17cd) [0x55a21e4c504d]<br /> stderr: 12: (BlueStore::mkfs()+0x805) [0x55a21e4f5fe5]<br /> stderr: 13: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x1b0) [0x55a21e09e480]<br /> stderr: 14: (main()+0x4222) [0x55a21df85462]<br /> stderr: 15: (_<em>libc_start_main()+0xe7) [0x7fb9452b7b97]<br /> stderr: 16: (_start()+0x2a) [0x55a21e04e95a]<br /> stderr: 2019-02-18 14:33:07.157 7fb9508d5240 -1 <strong></b> Caught signal (Aborted) <b><br /> stderr: in thread 7fb9508d5240 thread_name:ceph-osd<br /> stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)<br /> stderr: 1: (()+0x92aa40) [0x55a21e5e5a40]<br /> stderr: 2: (()+0x12890) [0x7fb9463f9890]<br /> stderr: 3: (gsignal()+0xc7) [0x7fb9452d4e97]<br /> stderr: 4: (abort()+0x141) [0x7fb9452d6801]<br /> stderr: 5: (ceph::</em>_ceph_assert_fail(char const</strong>, char const*, int, char const*)+0x250) [0x7fb947cf5530]<br /> stderr: 6: (()+0x26d5a7) [0x7fb947cf55a7]<br /> stderr: 7: (KernelDevice::read(unsigned long, unsigned long, ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x55a21e5d4817]<br /> stderr: 8: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0x435) [0x55a21e5945c5]<br /> stderr: 9: (BlueFS::_replay(bool, bool)+0x214) [0x55a21e59a434]<br /> stderr: 10: (BlueFS::mount()+0x1f1) [0x55a21e59ec81]<br /> stderr: 11: 
(BlueStore::_open_db(bool, bool)+0x17cd) [0x55a21e4c504d]<br /> stderr: 12: (BlueStore::mkfs()+0x805) [0x55a21e4f5fe5]<br /> stderr: 13: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x1b0) [0x55a21e09e480]<br /> stderr: 14: (main()+0x4222) [0x55a21df85462]<br /> stderr: 15: (_<em>libc_start_main()+0xe7) [0x7fb9452b7b97]<br /> stderr: 16: (_start()+0x2a) [0x55a21e04e95a]<br /> stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.<br /> stderr: 0> 2019-02-18 14:33:07.157 7fb9508d5240 -1 <strong></b> Caught signal (Aborted) *</strong><br /> stderr: in thread 7fb9508d5240 thread_name:ceph-osd<br /> stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)<br /> stderr: 1: (()+0x92aa40) [0x55a21e5e5a40]<br /> stderr: 2: (()+0x12890) [0x7fb9463f9890]<br /> stderr: 3: (gsignal()+0xc7) [0x7fb9452d4e97]<br /> stderr: 4: (abort()+0x141) [0x7fb9452d6801]<br /> stderr: 5: (ceph::</em>_ceph_assert_fail(char const*, char const*, int, char const*)+0x250) [0x7fb947cf5530]<br /> stderr: 6: (()+0x26d5a7) [0x7fb947cf55a7]<br /> stderr: 7: (KernelDevice::read(unsigned long, unsigned long, ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x55a21e5d4817]<br /> stderr: 8: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0x435) [0x55a21e5945c5]<br /> stderr: 9: (BlueFS::_replay(bool, bool)+0x214) [0x55a21e59a434]<br /> stderr: 10: (BlueFS::mount()+0x1f1) [0x55a21e59ec81]<br /> stderr: 11: (BlueStore::_open_db(bool, bool)+0x17cd) [0x55a21e4c504d]<br /> stderr: 12: (BlueStore::mkfs()+0x805) [0x55a21e4f5fe5]<br /> stderr: 13: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x1b0) [0x55a21e09e480]<br /> stderr: 14: (main()+0x4222) [0x55a21df85462]<br /> stderr: 15: 
(__libc_start_main()+0xe7) [0x7fb9452b7b97]<br /> stderr: 16: (_start()+0x2a) [0x55a21e04e95a]<br /> stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.<br />--> Was unable to complete a new OSD, will rollback changes<br />Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it<br /> stderr: purged osd.0<br />--> RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid a87c3a87-cf22-41df-af4b-c971ed4c0e1a --setuser ceph --setgroup ceph</p>
rgw - Cleanup #19851 (In Progress): Move AES_256_CTR to auth/Crypto for others to reuse
https://tracker.ceph.com/issues/19851
2017-05-04T02:55:43Z
Jos Collin
<p>The following warning was introduced by Adam Kupczyk, so I am creating a tracker issue to implement the changes he suggested.</p>
<pre>
ceph/src/rgw/rgw_crypt.cc:38:2: warning: #warning "TODO: move this code to auth/Crypto for others to reuse." [-Wcpp]
 #warning "TODO: move this code to auth/Crypto for others to reuse."
  ^~~~~~~~
ceph/src/rgw/rgw_crypt.cc:247:2: warning: #warning "TODO: use auth/Crypto instead of reimplementing." [-Wcpp]
 #warning "TODO: use auth/Crypto instead of reimplementing."
  ^~~~~~~~
</pre>