Ceph : Issues
https://tracker.ceph.com/
2023-12-19T13:45:42Z
Ceph
Redmine
bluestore - Backport #63853 (New): quincy: ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixSha...
https://tracker.ceph.com/issues/63853
2023-12-19T13:45:42Z
Backport Bot
bluestore - Bug #62730 (New): ceph-bluestore-tool reshard broken
https://tracker.ceph.com/issues/62730
2023-09-06T19:49:52Z
Adam Kupczyk
<p>It is possible to specify the same prefix twice. Here is an example with "p" defined twice:</p>
<pre>
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-1 --sharding="m(3) p(3,0-12) o(3,0-13)=block_cache={type=binned_lru} l p" reshard
</pre>
<p>After resharding we get:</p>
<pre>
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: do_open existing_cfs=11
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=l shard_idx=0 hash_l=0 hash_h=4294967295 handle=0x55e12cbf89e0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=m shard_idx=0 hash_l=0 hash_h=4294967295 handle=0x55e12cbf8300
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=m shard_idx=1 hash_l=0 hash_h=4294967295 handle=0x55e12cbf8980
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=m shard_idx=2 hash_l=0 hash_h=4294967295 handle=0x55e12cbf91c0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=o shard_idx=0 hash_l=0 hash_h=13 handle=0x55e12cbf8cc0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=o shard_idx=1 hash_l=0 hash_h=13 handle=0x55e12cbf8ec0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=o shard_idx=2 hash_l=0 hash_h=13 handle=0x55e12cbf8bc0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=p shard_idx=0 hash_l=0 hash_h=12 handle=0x55e12cbf8ce0
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=p shard_idx=1 hash_l=0 hash_h=12 handle=0x55e12cbf8c00
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: add_column_family column_name=p shard_idx=2 hash_l=0 hash_h=12 handle=0x55e12cc92e20
2023-07-11T15:34:08.601+0000 7f138baaf200 10 rocksdb: do_open missing_cfs=1
2023-07-11T15:34:08.605+0000 7f138baaf200 -1 /builddir/build/BUILD/ceph-16.2.10/src/kv/RocksDBStore.cc: In function 'int RocksDBStore::do_open(std::ostream&, bool, bool, const string&)' thread 7f138baaf200 time 2023-07-11T15:34:08.602980+0000
/builddir/build/BUILD/ceph-16.2.10/src/kv/RocksDBStore.cc: 1215: FAILED ceph_assert(recreate_mode)
</pre>
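The missing validation can be sketched in a few lines. This is a hedged illustration, not the actual RocksDBStore code: it parses the sharding spec with simplified assumptions (whitespace-separated columns such as `m(3)` or bare `l`, per-column options after `=` ignored) and reports any prefix that appears more than once.

```python
import re

def duplicate_prefixes(sharding):
    """Return column-family prefixes that appear more than once in a
    sharding spec. Simplified, illustrative parsing only."""
    names = []
    for token in sharding.split():
        token = token.split('=', 1)[0]            # drop per-column options
        name = re.match(r'[^(]+', token).group()  # strip '(shards,range)'
        names.append(name)
    return sorted({n for n in names if names.count(n) > 1})

spec = "m(3) p(3,0-12) o(3,0-13)=block_cache={type=binned_lru} l p"
print(duplicate_prefixes(spec))  # ['p']
```

A check like this, run before resharding starts, would reject the spec up front instead of tripping the `recreate_mode` assert at open time.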
bluestore - Bug #62175 (Need More Info): "rados get" command can't get data when bluestore uses L...
https://tracker.ceph.com/issues/62175
2023-07-26T08:32:43Z
taki zhao
<p>I set the buffer cache of bluestore to LRU.</p>
<p>When I put an object of about 100 MB into rados with "./bin/rados -p cache_pool put tar /root/Boost.tar.bz2", I cannot get all the data back with "./bin/rados -p cache_pool get tar /root/tmp1": /root/tmp1 ends up at only about 12 MB, and the command never seems to terminate.</p>
<p>It is worth mentioning that I am using the vstart environment, and the vstart startup command is "../src/vstart.sh --debug --new -x --localhost --bluestore".</p>
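For context on the cache mode involved: an LRU buffer cache evicts from the least-recently-used end until a byte budget is met, and the crash below occurs while trimming (`_trim_to`). A minimal Python sketch of that general mechanism, purely illustrative and not BlueStore's actual C++ implementation:

```python
from collections import OrderedDict

class LruBufferCache:
    """Minimal byte-budgeted LRU cache; illustrative sketch only."""
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.buffers = OrderedDict()   # offset -> data, MRU at the end
        self.size = 0

    def add(self, offset, data):
        if offset in self.buffers:
            self.size -= len(self.buffers.pop(offset))
        self.buffers[offset] = data
        self.size += len(data)
        self._trim_to(self.max_bytes)

    def _trim_to(self, target):
        # Evict least-recently-used buffers until back under budget.
        while self.size > target and self.buffers:
            _, data = self.buffers.popitem(last=False)
            self.size -= len(data)

cache = LruBufferCache(max_bytes=8)
cache.add(0, b"aaaa")
cache.add(4, b"bbbb")
cache.add(8, b"cccc")          # pushes over budget; offset 0 is evicted
print(sorted(cache.buffers))   # [4, 8]
```

In the backtrace below, the trim is triggered from `did_read` while a read is in flight, which is where the eviction path (`_rm`) aborts.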
<p><b>backtrace</b></p>
<pre>
0> 2023-07-26T15:48:58.726+0800 fff80e5caf90 -1 *** Caught signal (Aborted) **
in thread fff80e5caf90 thread_name:tp_osd_tp
ceph version b6a97989f (3b6a97989fdad7cc894771fbfe9f1ce241f2ada1) quincy (stable)
1: __kernel_rt_sigreturn()
2: gsignal()
3: abort()
4: /usr/lib64/libc.so.6(+0x2fa0c) [0xfffc2dbcfa0c]
5: /usr/lib64/libc.so.6(+0x2fa8c) [0xfffc2dbcfa8c]
6: (LruBufferCacheShard::_rm(BlueStore::Buffer*)+0xf4) [0xaaac355770bc]
7: (BlueStore::BufferSpace::_rm_buffer(BlueStore::BufferCacheShard*, std::_Rb_tree_iterator<std::pair<unsigned int const, std::unique_ptr<BlueStore::Buffer, std::default_delete<BlueStore::Buffer> > > >)+0x40) [0xaaac35582a5c]
8: (LruBufferCacheShard::_trim_to(unsigned long)+0xd8) [0xaaac355aab6c]
9: (BlueStore::BufferSpace::did_read(BlueStore::BufferCacheShard*, unsigned int, ceph::buffer::v15_2_0::list&)+0x210) [0xaaac355a4114]
10: (BlueStore::_generate_read_result_bl(boost::intrusive_ptr<BlueStore::Onode>, unsigned long, unsigned long, std::map<unsigned long, ceph::buffer::v15_2_0::list, std::less<unsigned long>, std::allocator<std::pair<unsigned long const, ceph::buffer::v15_2_0::list> > >&, std::vector<ceph::buffer::v15_2_0::list, std::allocator<ceph::buffer::v15_2_0::list> >&, std::map<boost::intrusive_ptr<BlueStore::Blob>, std::__cxx11::list<BlueStore::read_req_t, std::allocator<BlueStore::read_req_t> >, std::less<boost::intrusive_ptr<BlueStore::Blob> >, std::allocator<std::pair<boost::intrusive_ptr<BlueStore::Blob> const, std::__cxx11::list<BlueStore::read_req_t, std::allocator<BlueStore::read_req_t> > > > >&, bool, bool*, ceph::buffer::v15_2_0::list&)+0x5f8) [0xaaac355140d4]
11: (BlueStore::_do_read(BlueStore::Collection*, boost::intrusive_ptr<BlueStore::Onode>, unsigned long, unsigned long, ceph::buffer::v15_2_0::list&, unsigned int, unsigned long)+0x81c) [0xaaac35520770]
12: (BlueStore::read(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, ghobject_t const&, unsigned long, unsigned long, ceph::buffer::v15_2_0::list&, unsigned int)+0x3a8) [0xaaac355581d4]
13: (ReplicatedBackend::objects_read_sync(hobject_t const&, unsigned long, unsigned long, unsigned int, ceph::buffer::v15_2_0::list*)+0x94) [0xaaac35385030]
14: (PrimaryLogPG::do_read(PrimaryLogPG::OpContext*, OSDOp&)+0x738) [0xaaac350a2218]
15: (PrimaryLogPG::do_osd_ops(PrimaryLogPG::OpContext*, std::vector<OSDOp, std::allocator<OSDOp> >&)+0xa08) [0xaaac350dbfc4]
16: (PrimaryLogPG::prepare_transaction(PrimaryLogPG::OpContext*)+0x158) [0xaaac350ebfe8]
17: (PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)+0x474) [0xaaac350ec83c]
18: (PrimaryLogPG::do_op(boost::intrusive_ptr<OpRequest>&)+0x2f68) [0xaaac350f0a50]
19: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0xad4) [0xaaac350f701c]
20: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x470) [0xaaac34f5dcfc]
21: (ceph::osd::scheduler::PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x80) [0xaaac35260fb0]
22: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x26ec) [0xaaac34f8b0c4]
23: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x830) [0xaaac356e0fe0]
24: (ShardedThreadPool::WorkThreadSharded::entry()+0x14) [0xaaac356e3c8c]
25: (Thread::entry_wrapper()+0x4c) [0xaaac356ccfdc]
26: (Thread::_entry_func(void*)+0xc) [0xaaac356ccffc]
27: /usr/lib64/libpthread.so.0(+0x88cc) [0xfffc2e1388cc]
28: /usr/lib64/libc.so.6(+0xda12c) [0xfffc2dc7a12c]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
</pre>
bluestore - Backport #61465 (New): reef: Fragmentation score rising by seemingly stuck thread
https://tracker.ceph.com/issues/61465
2023-05-26T10:44:59Z
Backport Bot
bluestore - Backport #61463 (New): quincy: Fragmentation score rising by seemingly stuck thread
https://tracker.ceph.com/issues/61463
2023-05-26T10:44:45Z
Backport Bot
RADOS - Bug #59099 (New): PG move causes data duplication
https://tracker.ceph.com/issues/59099
2023-03-17T13:51:03Z
Adam Kupczyk
<p>Let's imagine we have a pool TEST.<br />In a PG we have an object OBJ of size 1M.</p>
<p>We create snap SNAP-1 and write some 4K to OBJ.<br />As a result we get OBJ.1, which takes 1M, and OBJ.head, which reuses all but 4K.<br />The total data usage is 1M + 4K.</p>
<p>Now we move the PG to another OSD.<br />In some cases OBJ.head + OBJ.1 will take 2M.</p>
<p>An example of this happening is in the attachment snap-pg-move-history.sh.<br />When the data is in the original PG on OSD.0:</p>
<pre>
ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP META   AVAIL   %USE VAR  PGS STATUS
 0   ssd 0.09859  1.00000 101 GiB 1.1 GiB 101 MiB  0 B 21 MiB 100 GiB 1.09 1.05   2     up
 1   ssd 0.09859  1.00000 101 GiB 1.0 GiB 740 KiB  0 B 20 MiB 100 GiB 0.99 0.95   1     up
                    TOTAL 202 GiB 2.1 GiB 101 MiB  0 B 41 MiB 200 GiB 1.04
MIN/MAX VAR: 0.95/1.05  STDDEV: 0.05
</pre>
<p>And after forcibly moving the PG to the other OSD:</p>
<pre>
ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP META   AVAIL   %USE VAR  PGS STATUS
 0   ssd 0.09859  1.00000 101 GiB 1.0 GiB 756 KiB  0 B 21 MiB 100 GiB 0.99 0.91   1     up
 1   ssd 0.09859  1.00000 101 GiB 1.2 GiB 201 MiB  0 B 21 MiB 100 GiB 1.18 1.09   2     up
                    TOTAL 202 GiB 2.2 GiB 201 MiB  0 B 42 MiB 200 GiB 1.09
MIN/MAX VAR: 0.91/1.09  STDDEV: 0.10
</pre>
<p>The script was tested on Reef, but I do not believe the issue is limited to that release.</p>
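The space arithmetic from the description can be checked directly. A small sketch, with sizes taken from the report above:

```python
MiB = 1024 * 1024
KiB = 1024

obj = 1 * MiB          # OBJ.1, the snapshotted clone
overwrite = 4 * KiB    # the 4K written to OBJ after SNAP-1

# With clone overlap preserved, OBJ.head shares all but 4K with OBJ.1.
shared = obj + overwrite    # expected usage: 1 MiB + 4 KiB
# If the overlap is lost on PG move, head and clone are stored in full.
duplicated = 2 * obj        # observed usage: 2 MiB

print(shared, duplicated)  # 1052672 2097152
```

This matches the `ceph osd df` output above: roughly 101 MiB before the move versus roughly 201 MiB after, for a pool holding about a hundred such objects.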
Ceph - Bug #58596 (New): rocksdb: rm_range_keys() (message with 'enter') logs binary data
https://tracker.ceph.com/issues/58596
2023-01-29T07:19:21Z
Ronen Friedman
rfriedma@redhat.com
<p>That log message contains keys in their binary format, causing a problem for grep(1) and editors (and possibly creating a security issue).</p>
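One way to keep such log lines grep-safe is to escape non-printable key bytes before formatting. A hedged sketch of that idea (a hypothetical helper, not the actual Ceph fix):

```python
def printable_key(key: bytes) -> str:
    """Render a possibly-binary key for logging: printable ASCII stays
    as-is, backslashes and non-printable bytes become \\xNN escapes."""
    return ''.join(
        chr(b) if 0x20 <= b < 0x7f and b != ord('\\') else f'\\x{b:02x}'
        for b in key
    )

print(printable_key(b'omap\x00\xffkey'))  # omap\x00\xffkey
```

The escaped form is unambiguous (literal backslashes are also escaped), so the original key can still be recovered from the log if needed.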
bluestore - Backport #55517 (New): quincy: test_cls_rbd.sh: multiple TestClsRbd failures during u...
https://tracker.ceph.com/issues/55517
2022-05-02T17:20:08Z
Backport Bot
bluestore - Bug #55187 (Need More Info): ceph_abort_msg(\"bluefs enospc\")
https://tracker.ceph.com/issues/55187
2022-04-05T16:10:38Z
Aishwarya Mathuria
<p>From osd crash info in the gibba cluster:<br /><pre>
$ sudo ceph crash info 2022-04-05T02:08:50.176782Z_e45030ee-e34b-46a4-bdde-3bcb3e8005fa
{
"assert_condition": "abort",
"assert_file": "/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.1.0-163-g4e244311/rpm/el8/BUILD/ceph-17.1.0-163-g4e244311/src/os/bluestore/BlueFS.cc",
"assert_func": "int BlueFS::_flush_range_F(BlueFS::FileWriter*, uint64_t, uint64_t)",
"assert_line": 3137,
"assert_msg": "/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.1.0-163-g4e244311/rpm/el8/BUILD/ceph-17.1.0-163-g4e244311/src/os/bluestore/BlueFS.cc: In function 'int BlueFS::_flush_range_F(BlueFS::FileWriter*, uint64_t, uint64_t)' thread 7f10c8ce23c0 time 2022-04-05T02:08:50.161549+0000\n/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.1.0-163-g4e244311/rpm/el8/BUILD/ceph-17.1.0-163-g4e244311/src/os/bluestore/BlueFS.cc: 3137: ceph_abort_msg(\"bluefs enospc\")\n",
"assert_thread_name": "ceph-osd",
"backtrace": [
"/lib64/libpthread.so.0(+0x12ce0) [0x7f10c6ee7ce0]",
"gsignal()",
"abort()",
"(ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x1b8) [0x56488be01c6c]",
"(BlueFS::_flush_range_F(BlueFS::FileWriter*, unsigned long, unsigned long)+0x943) [0x56488c5697f3]",
"(BlueFS::_flush_F(BlueFS::FileWriter*, bool, bool*)+0xa9) [0x56488c5699d9]",
"(BlueFS::fsync(BlueFS::FileWriter*)+0x19e) [0x56488c58620e]",
"(BlueRocksWritableFile::Sync()+0x18) [0x56488c595f88]",
"(rocksdb::LegacyWritableFileWrapper::Sync(rocksdb::IOOptions const&, rocksdb::IODebugContext*)+0x1f) [0x56488cab96bf]",
"(rocksdb::WritableFileWriter::SyncInternal(bool)+0x662) [0x56488cbe9e92]",
"(rocksdb::WritableFileWriter::Sync(bool)+0xf8) [0x56488cbeb858]",
"(rocksdb::SyncManifest(rocksdb::Env*, rocksdb::ImmutableDBOptions const*, rocksdb::WritableFileWriter*)+0x11d) [0x56488cbe2d5d]",
"(rocksdb::VersionSet::ProcessManifestWrites(std::deque<rocksdb::VersionSet::ManifestWriter, std::allocator<rocksdb::VersionSet::ManifestWriter> >&, rocksdb::InstrumentedMutex*, rocksdb::FSDirectory*, bool, rocksdb::ColumnFamilyOptions const*)+0x181c) [0x56488cba9aac]",
"(rocksdb::VersionSet::LogAndApply(rocksdb::autovector<rocksdb::ColumnFamilyData*, 8ul> const&, rocksdb::autovector<rocksdb::MutableCFOptions const*, 8ul> const&, rocksdb::autovector<rocksdb::autovector<rocksdb::VersionEdit*, 8ul>, 8ul> const&, rocksdb::InstrumentedMutex*, rocksdb::FSDirectory*, bool, rocksdb::ColumnFamilyOptions const*, std::vector<std::function<void (rocksdb::Status const&)>, std::allocator<std::function<void (rocksdb::Status const&)> > > const&)+0xad1) [0x56488cbab711]",
"(rocksdb::VersionSet::LogAndApply(rocksdb::ColumnFamilyData*, rocksdb::MutableCFOptions const&, rocksdb::VersionEdit*, rocksdb::InstrumentedMutex*, rocksdb::FSDirectory*, bool, rocksdb::ColumnFamilyOptions const*)+0x1c6) [0x56488cac4ba6]",
"(rocksdb::DBImpl::DeleteUnreferencedSstFiles()+0x99a) [0x56488caf4d1a]",
"(rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool, unsigned long*)+0x1269) [0x56488cb035d9]",
"(rocksdb::DBImpl::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**, bool, bool)+0x5a3) [0x56488cafc3c3]",
"(rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0x15) [0x56488cafda75]",
"(RocksDBStore::do_open(std::ostream&, bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x10a1) [0x56488ca70541]",
"(BlueStore::_open_db(bool, bool, bool)+0x68c) [0x56488c47100c]",
"(BlueStore::_open_db_and_around(bool, bool)+0x34b) [0x56488c4ba23b]",
"(BlueStore::_mount()+0x1ae) [0x56488c4bd38e]",
"(OSD::init()+0x403) [0x56488bf3f513]",
"main()",
"__libc_start_main()",
"_start()"
],
"ceph_version": "17.1.0-163-g4e244311",
"crash_id": "2022-04-05T02:08:50.176782Z_e45030ee-e34b-46a4-bdde-3bcb3e8005fa",
"entity_name": "osd.652",
"os_id": "centos",
"os_name": "CentOS Stream",
"os_version": "8",
"os_version_id": "8",
"process_name": "ceph-osd",
"stack_sig": "8399067597cb57a8aae13abe0f112a83b5cf7cc7d53b85d56fd1afb99a6c5bed",
"timestamp": "2022-04-05T02:08:50.176782Z",
"utsname_hostname": "gibba024",
"utsname_machine": "x86_64",
"utsname_release": "4.18.0-301.1.el8.x86_64",
"utsname_sysname": "Linux",
"utsname_version": "#1 SMP Tue Apr 13 16:24:22 UTC 2021"
}
</pre></p>
bluestore - Bug #53359 (New): bluestore: missing block.db symlinks leads to confusing crash
https://tracker.ceph.com/issues/53359
2021-11-22T16:09:44Z
Sage Weil
sage@newdream.net
<p>A regression in ceph-volume (master branch) led to the block.db symlink not getting created. This leads to OSDs that crash like so:</p>
<pre>
"backtrace": [
"/lib64/libpthread.so.0(+0x12c20) [0x7f3573347c20]",
"gsignal()",
"abort()",
"/lib64/libstdc++.so.6(+0x9009b) [0x7f357295e09b]",
"/lib64/libstdc++.so.6(+0x9653c) [0x7f357296453c]",
"/lib64/libstdc++.so.6(+0x96597) [0x7f3572964597]",
"/lib64/libstdc++.so.6(+0x967f8) [0x7f35729647f8]",
"/usr/bin/ceph-osd(+0x5c7203) [0x55cc53713203]",
"(BlueFS::_open_super()+0x18f) [0x55cc53e66cff]",
"(BlueFS::mount()+0xeb) [0x55cc53e88ddb]",
"(BlueStore::_open_bluefs(bool, bool)+0x94) [0x55cc53d4bad4]",
"(BlueStore::_prepare_db_environment(bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*)+0x6d9) [0x55cc53d4cc29]",
"(BlueStore::_open_db(bool, bool, bool)+0x15c) [0x55cc53d4df4c]",
"(BlueStore::_open_db_and_around(bool, bool)+0x2b4) [0x55cc53dc68d4]",
"(BlueStore::_mount()+0x1ae) [0x55cc53dc971e]",
"(OSD::init()+0x3ba) [0x55cc5385711a]",
"main()",
"__libc_start_main()",
"_start()"
],
"ceph_version": "17.0.0-9073-g6e528ed7",
</pre>
<p>The on-disk block that we are trying to decode is all zeros.</p>
<p>I thought we had a flag somewhere indicating whether a db and/or wal was expected so that we could provide a meaningful/informative error message, but maybe not?</p>
<p>(ceph-volume fix is here: <a class="external" href="https://github.com/ceph/ceph/pull/44030">https://github.com/ceph/ceph/pull/44030</a>)</p>
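The meaningful error message asked for above could start from a plain preflight check of the OSD data directory before mount. A hypothetical sketch (the layout names `block`, `block.db`, and `block.wal` follow the standard OSD data directory; the function itself is an illustration, not existing Ceph code):

```python
import os

def check_osd_links(osd_path, expect_db=False, expect_wal=False):
    """Return human-readable problems with an OSD data directory.

    Illustrative preflight check: verifies that the block symlink (and,
    when expected, block.db / block.wal) resolves before mounting.
    os.path.exists follows symlinks, so a dangling link is also flagged.
    """
    problems = []
    expected = ['block']
    if expect_db:
        expected.append('block.db')
    if expect_wal:
        expected.append('block.wal')
    for name in expected:
        link = os.path.join(osd_path, name)
        if not os.path.exists(link):
            problems.append(f'{name} symlink missing at {link}')
    return problems
```

Reporting "block.db symlink missing" up front would be far clearer than the decode failure on an all-zeros block shown above.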
RADOS - Bug #52513 (New): BlueStore.cc: 12391: ceph_abort_msg(\"unexpected error\") on operation 15
https://tracker.ceph.com/issues/52513
2021-09-06T09:20:28Z
Konstantin Shalygin
k0ste@k0ste.ru
<p>We got a simultaneous crash of two OSDs serving PG 17.7ff [684,768,760]:</p>
<pre>
RECENT_CRASH 2 daemons have recently crashed
osd.760 crashed on host meta114 at 2021-09-03 21:50:28.138745Z
osd.768 crashed on host meta115 at 2021-09-03 21:50:28.123223Z
</pre>
<p>It seems ENOENT is unexpected when the object lock is acquired:</p>
<pre>
-8> 2021-09-04 00:50:28.077 7f3299342700 -1 bluestore(/var/lib/ceph/osd/ceph-768) _txc_add_transaction error (2) No such file or directory not handled on operation 15 (op 0, counting from 0)
-7> 2021-09-04 00:50:28.077 7f3299342700 -1 bluestore(/var/lib/ceph/osd/ceph-768) unexpected error code
-6> 2021-09-04 00:50:28.077 7f3299342700 0 _dump_transaction transaction dump:
{
"ops": [
{
"op_num": 0,
"op_name": "setattrs",
"collection": "17.7ff_head",
"oid": "#17:ffffffff:::%2fv2%2fmeta%2fd732de8b-8b15-5b57-a54a-fc23aadce4fe%2f88e9261c-832b-5d13-9517-40015c81e84e%2f27%2f11033%2f11033693%2f556500fe714ab37.webp:head#",
"attr_lens": {
"_": 376,
"_lock.libcephv2.lock": 153,
"snapset": 35
}
}
]
}
-5> 2021-09-04 00:50:28.117 7f3299342700 -1 /build/ceph-14.2.22/src/os/bluestore/BlueStore.cc: In function 'void BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)' thread 7f3299342700 time 2021-09-04 00:50:28.083637
/build/ceph-14.2.22/src/os/bluestore/BlueStore.cc: 12391: ceph_abort_msg("unexpected error")
ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)
1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xdf) [0x55980fe784c4]
2: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)+0xbde) [0x5598103dbaee]
3: (BlueStore::queue_transactions(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x2aa) [0x5598103e19fa]
4: (non-virtual thunk to PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x54) [0x55981011b514]
5: (ReplicatedBackend::do_repop(boost::intrusive_ptr<OpRequest>)+0xb09) [0x55981021c0a9]
6: (ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x1a7) [0x55981022a407]
7: (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x97) [0x55981012ee57]
8: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x705) [0x5598100dd965]
9: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x1bf) [0x55980fefbd8f]
10: (PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x62) [0x5598101b5b22]
11: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xbf5) [0x55980ff19835]
12: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x4ac) [0x5598105393ec]
13: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55981053c5b0]
14: (()+0x76db) [0x7f32bd6e66db]
15: (clone()+0x3f) [0x7f32bc47d71f]
</pre>
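The `error (2)` in the transaction dump is errno 2; Python's stdlib confirms the mapping:

```python
import errno
import os

# errno 2 is ENOENT, i.e. the "No such file or directory" seen in the log.
print(errno.errorcode[2], '-', os.strerror(2))
```

So the abort fires because a `setattrs` op hit ENOENT for an object the transaction expected to exist, and `_txc_add_transaction` treats that as an unexpected, unhandled error.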
bluestore - Bug #52464 (New): FAILED ceph_assert(current_shard->second->valid())
https://tracker.ceph.com/issues/52464
2021-08-31T13:57:27Z
Jeff Layton
jlayton@redhat.com
<p>I've got a cephadm cluster I use for testing, and this morning one of the OSDs crashed down in bluestore code:<br /><pre>
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: get compressor snappy = 0x55b3c18b1b90
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm::NCB::freelist_type=null
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: freelist init
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: freelist _read_cfg
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: asok(0x55b3c09f0000) register_command bluestore allocator dump block hook 0x55b3c18b1ef0
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: asok(0x55b3c09f0000) register_command bluestore allocator score block hook 0x55b3c18b1ef0
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: asok(0x55b3c09f0000) register_command bluestore allocator fragmentation block hook 0x55b3c18b1ef0
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore::NCB::restore_allocator::file_size=0,sizeof(extent_t)=16
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore::NCB::restore_allocator::No Valid allocation info on disk (empty file)
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc::NCB::restore_allocator() failed!
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc::NCB::Run Full Recovery from ONodes (might take a while) ...
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: bluestore::NCB::read_allocation_from_drive_on_startup::Start Allocation Recovery from ONodes ...
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-7195-g7e7326c4/rpm/el8/BUILD/ceph-17.0.0-7195-g7e7326c4/src/kv/RocksDBStore.cc: In function 'bool WholeMergeIteratorImpl::is_main_smaller()' thread 7f2d60f480c0 time 2021-08-31T13:51:40.899594+0000
/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-7195-g7e7326c4/rpm/el8/BUILD/ceph-17.0.0-7195-g7e7326c4/src/kv/RocksDBStore.cc: 2288: FAILED ceph_assert(current_shard->second->valid())
ceph version 17.0.0-7195-g7e7326c4 (7e7326c4231f614aff0f7bef4d72beadce6a9c75) quincy (dev)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x55b3bdcb0b50]
2: /usr/bin/ceph-osd(+0x5ced71) [0x55b3bdcb0d71]
3: (WholeMergeIteratorImpl::is_main_smaller()+0x13b) [0x55b3be8f93db]
4: (WholeMergeIteratorImpl::next()+0x2c) [0x55b3be8f942c]
5: (BlueStore::_open_collections()+0x660) [0x55b3be2e67f0]
6: (BlueStore::read_allocation_from_drive_on_startup()+0x127) [0x55b3be2ffa97]
7: (BlueStore::_init_alloc()+0xa01) [0x55b3be300bd1]
8: (BlueStore::_open_db_and_around(bool, bool)+0x2f4) [0x55b3be3487e4]
9: (BlueStore::_mount()+0x1ae) [0x55b3be34b55e]
10: (OSD::init()+0x3ba) [0x55b3bddec0ba]
11: main()
12: __libc_start_main()
13: _start()
Aug 31 09:51:40 cephadm2 ceph-osd[20497]: *** Caught signal (Aborted) **
in thread 7f2d60f480c0 thread_name:ceph-osd
ceph version 17.0.0-7195-g7e7326c4 (7e7326c4231f614aff0f7bef4d72beadce6a9c75) quincy (dev)
1: /lib64/libpthread.so.0(+0x12b20) [0x7f2d5eeeeb20]
2: gsignal()
3: abort()
4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x55b3bdcb0bae]
5: /usr/bin/ceph-osd(+0x5ced71) [0x55b3bdcb0d71]
6: (WholeMergeIteratorImpl::is_main_smaller()+0x13b) [0x55b3be8f93db]
7: (WholeMergeIteratorImpl::next()+0x2c) [0x55b3be8f942c]
8: (BlueStore::_open_collections()+0x660) [0x55b3be2e67f0]
9: (BlueStore::read_allocation_from_drive_on_startup()+0x127) [0x55b3be2ffa97]
10: (BlueStore::_init_alloc()+0xa01) [0x55b3be300bd1]
11: (BlueStore::_open_db_and_around(bool, bool)+0x2f4) [0x55b3be3487e4]
12: (BlueStore::_mount()+0x1ae) [0x55b3be34b55e]
13: (OSD::init()+0x3ba) [0x55b3bddec0ba]
14: main()
15: __libc_start_main()
16: _start()
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Aug 31 09:51:40 cephadm2 conmon[20474]: -5> 2021-08-31T13:51:40.897+0000 7f2d60f480c0 -1 bluestore::NCB::restore_allocator::No Valid allocation info on disk (empty file)
Aug 31 09:51:40 cephadm2 conmon[20474]: -1> 2021-08-31T13:51:40.903+0000 7f2d60f480c0 -1 /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-7195-g7e7326c4/rpm/el8/BUILD/ceph-17.0.0-7195-g7e7326c4/src/kv/RocksDBStore.cc: In function 'bool WholeMergeIteratorImpl::is_main_smaller()' thread 7f2d60f480c0 time 2021-08-31T13:51:40.899594+0000
Aug 31 09:51:40 cephadm2 conmon[20474]: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-7195-g7e7326c4/rpm/el8/BUILD/ceph-17.0.0-7195-g7e7326c4/src/kv/RocksDBStore.cc: 2288: FAILED ceph_assert(current_shard->second->valid())
Aug 31 09:51:40 cephadm2 conmon[20474]:
Aug 31 09:51:40 cephadm2 conmon[20474]: ceph version 17.0.0-7195-g7e7326c4 (7e7326c4231f614aff0f7bef4d72beadce6a9c75) quincy (dev)
Aug 31 09:51:40 cephadm2 conmon[20474]: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x55b3bdcb0b50]
Aug 31 09:51:40 cephadm2 conmon[20474]: 2: /usr/bin/ceph-osd(+0x5ced71) [0x55b3bdcb0d71]
Aug 31 09:51:40 cephadm2 conmon[20474]: 3: (WholeMergeIteratorImpl::is_main_smaller()+0x13b) [0x55b3be8f93db]
Aug 31 09:51:40 cephadm2 conmon[20474]: 4: (WholeMergeIteratorImpl::next()+0x2c) [0x55b3be8f942c]
Aug 31 09:51:40 cephadm2 conmon[20474]: 5: (BlueStore::_open_collections()+0x660) [0x55b3be2e67f0]
Aug 31 09:51:40 cephadm2 conmon[20474]: 6: (BlueStore::read_allocation_from_drive_on_startup()+0x127) [0x55b3be2ffa97]
Aug 31 09:51:40 cephadm2 conmon[20474]: 7: (BlueStore::_init_alloc()+0xa01) [0x55b3be300bd1]
Aug 31 09:51:40 cephadm2 conmon[20474]: 8: (BlueStore::_open_db_and_around(bool, bool)+0x2f4) [0x55b3be3487e4]
Aug 31 09:51:40 cephadm2 conmon[20474]: 9: (BlueStore::_mount()+0x1ae) [0x55b3be34b55e]
Aug 31 09:51:40 cephadm2 conmon[20474]: 10: (OSD::init()+0x3ba) [0x55b3bddec0ba]
Aug 31 09:51:40 cephadm2 conmon[20474]: 11: main()
Aug 31 09:51:40 cephadm2 conmon[20474]: 12: __libc_start_main()
Aug 31 09:51:40 cephadm2 conmon[20474]: 13: _start()
Aug 31 09:51:40 cephadm2 conmon[20474]:
Aug 31 09:51:40 cephadm2 conmon[20474]: 0> 2021-08-31T13:51:40.907+0000 7f2d60f480c0 -1 *** Caught signal (Aborted) **
Aug 31 09:51:40 cephadm2 conmon[20474]: in thread 7f2d60f480c0 thread_name:ceph-osd
Aug 31 09:51:40 cephadm2 conmon[20474]:
Aug 31 09:51:40 cephadm2 conmon[20474]: ceph version 17.0.0-7195-g7e7326c4 (7e7326c4231f614aff0f7bef4d72beadce6a9c75) quincy (dev)
Aug 31 09:51:40 cephadm2 conmon[20474]: 1: /lib64/libpthread.so.0(+0x12b20) [0x7f2d5eeeeb20]
Aug 31 09:51:40 cephadm2 conmon[20474]: 2: gsignal()
Aug 31 09:51:40 cephadm2 conmon[20474]: 3: abort()
Aug 31 09:51:40 cephadm2 conmon[20474]: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x55b3bdcb0bae]
Aug 31 09:51:40 cephadm2 conmon[20474]: 5: /usr/bin/ceph-osd(+0x5ced71) [0x55b3bdcb0d71]
Aug 31 09:51:40 cephadm2 conmon[20474]: 6: (WholeMergeIteratorImpl::is_main_smaller()+0x13b) [0x55b3be8f93db]
Aug 31 09:51:40 cephadm2 conmon[20474]: 7: (WholeMergeIteratorImpl::next()+0x2c) [0x55b3be8f942c]
Aug 31 09:51:40 cephadm2 conmon[20474]: 8: (BlueStore::_open_collections()+0x660) [0x55b3be2e67f0]
Aug 31 09:51:40 cephadm2 conmon[20474]: 9: (BlueStore::read_allocation_from_drive_on_startup()+0x127) [0x55b3be2ffa97]
Aug 31 09:51:40 cephadm2 conmon[20474]: 10: (BlueStore::_init_alloc()+0xa01) [0x55b3be300bd1]
Aug 31 09:51:40 cephadm2 conmon[20474]: 11: (BlueStore::_open_db_and_around(bool, bool)+0x2f4) [0x55b3be3487e4]
Aug 31 09:51:40 cephadm2 conmon[20474]: 12: (BlueStore::_mount()+0x1ae) [0x55b3be34b55e]
Aug 31 09:51:40 cephadm2 conmon[20474]: 13: (OSD::init()+0x3ba) [0x55b3bddec0ba]
Aug 31 09:51:40 cephadm2 conmon[20474]: 14: main()
Aug 31 09:51:40 cephadm2 conmon[20474]: 15: __libc_start_main()
Aug 31 09:51:40 cephadm2 conmon[20474]: 16: _start()
Aug 31 09:51:40 cephadm2 conmon[20474]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Aug 31 09:51:40 cephadm2 conmon[20474]:
Aug 31 09:51:41 cephadm2 systemd-coredump[20743]: Process 20497 (ceph-osd) of user 167 dumped core.
Aug 31 09:51:41 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Main process exited, code=exited, status=134/n/a
Aug 31 09:51:42 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Failed with result 'exit-code'.
Aug 31 09:51:52 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Service RestartSec=10s expired, scheduling restart.
Aug 31 09:51:52 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Scheduled restart job, restart counter is at 6.
Aug 31 09:51:52 cephadm2 systemd[1]: Stopped Ceph osd.0 for 1d11c63a-09ac-11ec-83e1-52540031ba78.
Aug 31 09:51:52 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Start request repeated too quickly.
Aug 31 09:51:52 cephadm2 systemd[1]: ceph-1d11c63a-09ac-11ec-83e1-52540031ba78@osd.0.service: Failed with result 'exit-code'.
Aug 31 09:51:52 cephadm2 systemd[1]: Failed to start Ceph osd.0 for 1d11c63a-09ac-11ec-83e1-52540031ba78.
</pre><br />The build I'm using is based on commit a49f10e760b4, with some MDS patches on top (nothing that should affect OSD).</p>
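The failed assert guards a comparison between two sub-iterators inside a merging iterator: `is_main_smaller()` must only be called while both sides are valid. A minimal Python sketch of that pattern (illustrative only, not the RocksDBStore code); the state the assert catches is a caller comparing after one side is already exhausted:

```python
class MergeIterator:
    """Merge two sorted key streams; illustrative sketch only."""
    def __init__(self, main, shard):
        self.main, self.shard = list(main), list(shard)

    def _is_main_smaller(self):
        # Mirrors the asserted invariant: both sub-iterators must be
        # valid (non-exhausted) before their current keys are compared.
        assert self.main and self.shard, "sub-iterator not valid"
        return self.main[0] <= self.shard[0]

    def next(self):
        if not self.shard:                 # shard exhausted: drain main
            return self.main.pop(0)
        if not self.main:                  # main exhausted: drain shard
            return self.shard.pop(0)
        src = self.main if self._is_main_smaller() else self.shard
        return src.pop(0)

it = MergeIterator(['a', 'c'], ['b'])
print([it.next() for _ in range(3)])  # ['a', 'b', 'c']
```

Skipping the exhaustion checks in `next()` and calling `_is_main_smaller()` unconditionally reproduces the assertion failure shape seen in `WholeMergeIteratorImpl::is_main_smaller()`.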
bluestore - Bug #50844 (Triaged): ceph_assert(r == 0) in BlueFS::_rewrite_log_and_layout_sync()
https://tracker.ceph.com/issues/50844
2021-05-17T16:36:35Z
Neha Ojha
nojha@redhat.com
<pre>
2021-05-17T12:08:17.044 INFO:tasks.workunit.client.0.smithi104.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-bluefs-volume-ops.sh:179: TEST_bluestore: ceph-bluestore-tool --path td/osd-bluefs-volume-ops/1 --dev-target td/osd-bluefs-volume-ops/1/db --command bluefs-bdev-new-db
2021-05-17T12:08:17.053 INFO:tasks.workunit.client.0.smithi104.stdout:inferring bluefs devices from bluestore path
2021-05-17T12:08:24.848 INFO:tasks.workunit.client.0.smithi104.stderr:2021-05-17T12:08:24.846+0000 7f30f9fc5400 -1 bluefs _allocate_without_fallback unable to allocate 0x500000 on bdev 0, allocator name bluefs-wal, allocator type hybrid, capacity 0x20000000, block size 0x100000, free 0xff000, fragmentation 0, allocated 0x0
2021-05-17T12:08:24.848 INFO:tasks.workunit.client.0.smithi104.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-4229-gd98b3fc9/rpm/el8/BUILD/ceph-17.0.0-4229-gd98b3fc9/src/os/bluestore/BlueFS.cc: In function 'void BlueFS::_rewrite_log_and_layout_sync(bool, int, int, int, int, std::optional<bluefs_layout_t>)' thread 7f30f9fc5400 time 2021-05-17T12:08:24.846276+0000
2021-05-17T12:08:24.849 INFO:tasks.workunit.client.0.smithi104.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-4229-gd98b3fc9/rpm/el8/BUILD/ceph-17.0.0-4229-gd98b3fc9/src/os/bluestore/BlueFS.cc: 2241: FAILED ceph_assert(r == 0)
2021-05-17T12:08:24.849 INFO:tasks.workunit.client.0.smithi104.stderr: ceph version 17.0.0-4229-gd98b3fc9 (d98b3fc98cdd22d1e98566aab6a991dad70d1b4d) quincy (dev)
2021-05-17T12:08:24.850 INFO:tasks.workunit.client.0.smithi104.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x7f30f8228782]
2021-05-17T12:08:24.850 INFO:tasks.workunit.client.0.smithi104.stderr: 2: /usr/lib64/ceph/libceph-common.so.2(+0x27c98a) [0x7f30f822898a]
2021-05-17T12:08:24.850 INFO:tasks.workunit.client.0.smithi104.stderr: 3: (BlueFS::_rewrite_log_and_layout_sync(bool, int, int, int, int, std::optional<bluefs_layout_t>)+0x108a) [0x564969c2954a]
2021-05-17T12:08:24.851 INFO:tasks.workunit.client.0.smithi104.stderr: 4: (BlueFS::prepare_new_device(int, bluefs_layout_t const&)+0x19f) [0x564969c297bf]
2021-05-17T12:08:24.851 INFO:tasks.workunit.client.0.smithi104.stderr: 5: (BlueStore::add_new_bluefs_device(int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x2f7) [0x564969cde5c7]
2021-05-17T12:08:24.852 INFO:tasks.workunit.client.0.smithi104.stderr: 6: main()
2021-05-17T12:08:24.852 INFO:tasks.workunit.client.0.smithi104.stderr: 7: __libc_start_main()
2021-05-17T12:08:24.852 INFO:tasks.workunit.client.0.smithi104.stderr: 8: _start()
</pre>
<p>/a/sseshasa-2021-05-17_11:08:21-rados-wip-sseshasa-testing-2021-05-17-1504-distro-basic-smithi/6118192</p>
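<p>As a quick sanity check of the numbers in the allocation failure above (my own arithmetic, not anything from the tracker), the hex fields decode as follows: the allocator is asked for a 5 MiB extent but reports slightly less than one 1 MiB block free, so the allocation cannot succeed:</p>

```python
# Decode the hex fields from the bluefs _allocate_without_fallback line above.
# Illustrative arithmetic only; variable names are mine, not Ceph's.
want     = 0x500000    # requested allocation
capacity = 0x20000000  # bdev 0 capacity
block    = 0x100000    # allocation unit (block size)
free     = 0xff000     # free space reported

print(want // 2**20)      # 5   -> 5 MiB requested
print(capacity // 2**20)  # 512 -> 512 MiB capacity
print(block // 2**20)     # 1   -> 1 MiB block size
print(free / 2**20)       # 0.99609375 MiB free, i.e. less than one block
```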
bluestore - Bug #38745 (In Progress): spillover that doesn't make sense
https://tracker.ceph.com/issues/38745
2019-03-14T22:08:56Z
Sage Weil
sage@newdream.net
<pre>
BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD(s)
osd.50 spilled over 1.3 GiB metadata from 'db' device (20 GiB used of 31 GiB) to slow device
osd.94 spilled over 1.1 GiB metadata from 'db' device (16 GiB used of 31 GiB) to slow device
osd.103 spilled over 1.0 GiB metadata from 'db' device (18 GiB used of 31 GiB) to slow device
</pre><br />This is on the Sepia lab cluster.
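<p>The BLUEFS_SPILLOVER detail lines above have a fixed shape; as an illustration only, here is a minimal Python sketch (the regex and field names are my own, not part of any Ceph API) that pulls the numbers out of such a line:</p>

```python
import re

# Hypothetical parser for a `ceph health detail` spillover line such as:
#   osd.50 spilled over 1.3 GiB metadata from 'db' device (20 GiB used of 31 GiB) to slow device
SPILL_RE = re.compile(
    r"osd\.(?P<osd>\d+) spilled over (?P<spill>[\d.]+) GiB metadata "
    r"from 'db' device \((?P<used>\d+) GiB used of (?P<total>\d+) GiB\) to slow device"
)

def parse_spillover(line):
    """Return (osd_id, spilled_gib, db_used_gib, db_total_gib), or None if no match."""
    m = SPILL_RE.search(line)
    if m is None:
        return None
    return (int(m["osd"]), float(m["spill"]), int(m["used"]), int(m["total"]))

line = "osd.50 spilled over 1.3 GiB metadata from 'db' device (20 GiB used of 31 GiB) to slow device"
print(parse_spillover(line))  # (50, 1.3, 20, 31)
```

Note that osd.50 here spilled over despite the db device being only 20 GiB used of 31 GiB, which is what makes the report's title apt.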
bluestore - Bug #38363 (Need More Info): Failure in assert when calling: ceph-volume lvm prepare ...
https://tracker.ceph.com/issues/38363
2019-02-18T13:41:08Z
Rainer Krienke
<p>I run Ubuntu 18.04 and ceph version 13.2.4-1bionic from this repo: <a class="external" href="https://download.ceph.com/debian-mimic">https://download.ceph.com/debian-mimic</a>.</p>
<p>When I try to create a new bluestore OSD on several 4 TB disks I get an error that I first thought was related to <a class="external" href="http://tracker.ceph.com/issues/15386">http://tracker.ceph.com/issues/15386</a> (_read_fsid unparsable uuid). However, a user on the ceph-users mailing list pointed out that the assertion failure in the log I posted is the real problem, not the _read_fsid unparsable uuid message, so I created this new bug report. The same also happens when I omit the --bluestore option.</p>
<p>So here is the complete log of a ceph-volume run that fails reproducibly while creating an OSD. I also tried several different devices, but the result was always the same:</p>
<pre>
ceph-volume lvm prepare --bluestore --data /dev/sdg
</pre>
<p>Running command: /usr/bin/ceph-authtool --gen-print-key<br />Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a87c3a87-cf22-41df-af4b-c971ed4c0e1a<br />Running command: /sbin/vgcreate --force --yes ceph-22a3d361-78b5-40b4-8af3-74b1efe1b65a /dev/sdg<br /> stdout: Physical volume "/dev/sdg" successfully created.<br /> stdout: Volume group "ceph-22a3d361-78b5-40b4-8af3-74b1efe1b65a" successfully created<br />Running command: /sbin/lvcreate --yes -l 100%FREE -n osd-block-a87c3a87-cf22-41df-af4b-c971ed4c0e1a ceph-22a3d361-78b5-40b4-8af3-74b1efe1b65a<br /> stdout: Logical volume "osd-block-a87c3a87-cf22-41df-af4b-c971ed4c0e1a" created.<br />Running command: /usr/bin/ceph-authtool --gen-print-key<br />Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0<br />--> Absolute path not found for executable: restorecon<br />--> Ensure $PATH environment variable contains common executable locations<br />Running command: /bin/chown -h ceph:ceph /dev/ceph-22a3d361-78b5-40b4-8af3-74b1efe1b65a/osd-block-a87c3a87-cf22-41df-af4b-c971ed4c0e1a<br />Running command: /bin/chown -R ceph:ceph /dev/dm-8<br />Running command: /bin/ln -s /dev/ceph-22a3d361-78b5-40b4-8af3-74b1efe1b65a/osd-block-a87c3a87-cf22-41df-af4b-c971ed4c0e1a /var/lib/ceph/osd/ceph-0/block<br />Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap<br /> stderr: got monmap epoch 1<br />Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQAQtGpcjkxOMxAARlPykBaxHWqIyndvjTMNuQ==<br /> stdout: creating /var/lib/ceph/osd/ceph-0/keyring<br />added entity osd.0 auth auth(auid = 18446744073709551615 key=AQAQtGpcjkxOMxAARlPykBaxHWqIyndvjTMNuQ== with 0 caps)<br />Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring<br 
/>Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/<br />Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid a87c3a87-cf22-41df-af4b-c971ed4c0e1a --setuser ceph --setgroup ceph<br /> stderr: 2019-02-18 14:33:07.093 7fb9508d5240 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid<br /> stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: In function 'virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*, IOContext*, bool)' thread 7fb9508d5240 time 2019-02-18 14:33:07.155877<br /> stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: 821: FAILED assert((uint64_t)r == len)<br /> stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)<br /> stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x7fb947cf53e2]<br /> stderr: 2: (()+0x26d5a7) [0x7fb947cf55a7]<br /> stderr: 3: (KernelDevice::read(unsigned long, unsigned long, ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x55a21e5d4817]<br /> stderr: 4: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0x435) [0x55a21e5945c5]<br /> stderr: 5: (BlueFS::_replay(bool, bool)+0x214) [0x55a21e59a434]<br /> stderr: 6: (BlueFS::mount()+0x1f1) [0x55a21e59ec81]<br /> stderr: 7: (BlueStore::_open_db(bool, bool)+0x17cd) [0x55a21e4c504d]<br /> stderr: 8: (BlueStore::mkfs()+0x805) [0x55a21e4f5fe5]<br /> stderr: 9: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&, uuid_d, int)+0x1b0) [0x55a21e09e480]<br /> stderr: 10: (main()+0x4222) [0x55a21df85462]<br /> stderr: 11: (__libc_start_main()+0xe7) [0x7fb9452b7b97]<br /> stderr: 12: (_start()+0x2a) [0x55a21e04e95a]<br /> stderr: NOTE: a copy of the
executable, or `objdump -rdS &lt;executable&gt;` is needed to interpret this.<br /> stderr: 2019-02-18 14:33:07.157 7fb9508d5240 -1 /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: In function 'virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*, IOContext*, bool)' thread 7fb9508d5240 time 2019-02-18 14:33:07.155877<br /> stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: 821: FAILED assert((uint64_t)r == len)<br /> stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)<br /> stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x7fb947cf53e2]<br /> stderr: 2: (()+0x26d5a7) [0x7fb947cf55a7]<br /> stderr: 3: (KernelDevice::read(unsigned long, unsigned long, ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x55a21e5d4817]<br /> stderr: 4: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0x435) [0x55a21e5945c5]<br /> stderr: 5: (BlueFS::_replay(bool, bool)+0x214) [0x55a21e59a434]<br /> stderr: 6: (BlueFS::mount()+0x1f1) [0x55a21e59ec81]<br /> stderr: 7: (BlueStore::_open_db(bool, bool)+0x17cd) [0x55a21e4c504d]<br /> stderr: 8: (BlueStore::mkfs()+0x805) [0x55a21e4f5fe5]<br /> stderr: 9: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&, uuid_d, int)+0x1b0) [0x55a21e09e480]<br /> stderr: 10: (main()+0x4222) [0x55a21df85462]<br /> stderr: 11: (__libc_start_main()+0xe7) [0x7fb9452b7b97]<br /> stderr: 12: (_start()+0x2a) [0x55a21e04e95a]<br /> stderr: NOTE: a copy of the executable, or `objdump -rdS &lt;executable&gt;` is needed to interpret this.<br /> stderr: -25> 2019-02-18 14:33:07.093 7fb9508d5240 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid<br /> stderr: 0> 2019-02-18 14:33:07.157 7fb9508d5240 -1 /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: In function 'virtual int KernelDevice::read(uint64_t,
uint64_t, ceph::bufferlist*, IOContext*, bool)' thread 7fb9508d5240 time 2019-02-18 14:33:07.155877<br /> stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: 821: FAILED assert((uint64_t)r == len)<br /> stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)<br /> stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x7fb947cf53e2]<br /> stderr: 2: (()+0x26d5a7) [0x7fb947cf55a7]<br /> stderr: 3: (KernelDevice::read(unsigned long, unsigned long, ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x55a21e5d4817]<br /> stderr: 4: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0x435) [0x55a21e5945c5]<br /> stderr: 5: (BlueFS::_replay(bool, bool)+0x214) [0x55a21e59a434]<br /> stderr: 6: (BlueFS::mount()+0x1f1) [0x55a21e59ec81]<br /> stderr: 7: (BlueStore::_open_db(bool, bool)+0x17cd) [0x55a21e4c504d]<br /> stderr: 8: (BlueStore::mkfs()+0x805) [0x55a21e4f5fe5]<br /> stderr: 9: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&, uuid_d, int)+0x1b0) [0x55a21e09e480]<br /> stderr: 10: (main()+0x4222) [0x55a21df85462]<br /> stderr: 11: (__libc_start_main()+0xe7) [0x7fb9452b7b97]<br /> stderr: 12: (_start()+0x2a) [0x55a21e04e95a]<br /> stderr: NOTE: a copy of the executable, or `objdump -rdS &lt;executable&gt;` is needed to interpret this.<br /> stderr: *** Caught signal (Aborted) ***<br /> stderr: in thread 7fb9508d5240 thread_name:ceph-osd<br /> stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)<br /> stderr: 1: (()+0x92aa40) [0x55a21e5e5a40]<br /> stderr: 2: (()+0x12890) [0x7fb9463f9890]<br /> stderr: 3: (gsignal()+0xc7) [0x7fb9452d4e97]<br /> stderr: 4: (abort()+0x141) [0x7fb9452d6801]<br /> stderr: 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x250) [0x7fb947cf5530]<br />
stderr: 6: (()+0x26d5a7) [0x7fb947cf55a7]<br /> stderr: 7: (KernelDevice::read(unsigned long, unsigned long, ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x55a21e5d4817]<br /> stderr: 8: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0x435) [0x55a21e5945c5]<br /> stderr: 9: (BlueFS::_replay(bool, bool)+0x214) [0x55a21e59a434]<br /> stderr: 10: (BlueFS::mount()+0x1f1) [0x55a21e59ec81]<br /> stderr: 11: (BlueStore::_open_db(bool, bool)+0x17cd) [0x55a21e4c504d]<br /> stderr: 12: (BlueStore::mkfs()+0x805) [0x55a21e4f5fe5]<br /> stderr: 13: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&, uuid_d, int)+0x1b0) [0x55a21e09e480]<br /> stderr: 14: (main()+0x4222) [0x55a21df85462]<br /> stderr: 15: (__libc_start_main()+0xe7) [0x7fb9452b7b97]<br /> stderr: 16: (_start()+0x2a) [0x55a21e04e95a]<br /> stderr: 2019-02-18 14:33:07.157 7fb9508d5240 -1 *** Caught signal (Aborted) ***<br /> stderr: in thread 7fb9508d5240 thread_name:ceph-osd<br /> stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)<br /> stderr: 1: (()+0x92aa40) [0x55a21e5e5a40]<br /> stderr: 2: (()+0x12890) [0x7fb9463f9890]<br /> stderr: 3: (gsignal()+0xc7) [0x7fb9452d4e97]<br /> stderr: 4: (abort()+0x141) [0x7fb9452d6801]<br /> stderr: 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x250) [0x7fb947cf5530]<br /> stderr: 6: (()+0x26d5a7) [0x7fb947cf55a7]<br /> stderr: 7: (KernelDevice::read(unsigned long, unsigned long, ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x55a21e5d4817]<br /> stderr: 8: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0x435) [0x55a21e5945c5]<br /> stderr: 9: (BlueFS::_replay(bool, bool)+0x214) [0x55a21e59a434]<br /> stderr: 10: (BlueFS::mount()+0x1f1) [0x55a21e59ec81]<br /> stderr: 11:
(BlueStore::_open_db(bool, bool)+0x17cd) [0x55a21e4c504d]<br /> stderr: 12: (BlueStore::mkfs()+0x805) [0x55a21e4f5fe5]<br /> stderr: 13: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&, uuid_d, int)+0x1b0) [0x55a21e09e480]<br /> stderr: 14: (main()+0x4222) [0x55a21df85462]<br /> stderr: 15: (__libc_start_main()+0xe7) [0x7fb9452b7b97]<br /> stderr: 16: (_start()+0x2a) [0x55a21e04e95a]<br /> stderr: NOTE: a copy of the executable, or `objdump -rdS &lt;executable&gt;` is needed to interpret this.<br /> stderr: 0> 2019-02-18 14:33:07.157 7fb9508d5240 -1 *** Caught signal (Aborted) ***<br /> stderr: in thread 7fb9508d5240 thread_name:ceph-osd<br /> stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)<br /> stderr: 1: (()+0x92aa40) [0x55a21e5e5a40]<br /> stderr: 2: (()+0x12890) [0x7fb9463f9890]<br /> stderr: 3: (gsignal()+0xc7) [0x7fb9452d4e97]<br /> stderr: 4: (abort()+0x141) [0x7fb9452d6801]<br /> stderr: 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x250) [0x7fb947cf5530]<br /> stderr: 6: (()+0x26d5a7) [0x7fb947cf55a7]<br /> stderr: 7: (KernelDevice::read(unsigned long, unsigned long, ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x55a21e5d4817]<br /> stderr: 8: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0x435) [0x55a21e5945c5]<br /> stderr: 9: (BlueFS::_replay(bool, bool)+0x214) [0x55a21e59a434]<br /> stderr: 10: (BlueFS::mount()+0x1f1) [0x55a21e59ec81]<br /> stderr: 11: (BlueStore::_open_db(bool, bool)+0x17cd) [0x55a21e4c504d]<br /> stderr: 12: (BlueStore::mkfs()+0x805) [0x55a21e4f5fe5]<br /> stderr: 13: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&, uuid_d, int)+0x1b0) [0x55a21e09e480]<br /> stderr: 14: (main()+0x4222) [0x55a21df85462]<br /> stderr: 15:
(__libc_start_main()+0xe7) [0x7fb9452b7b97]<br /> stderr: 16: (_start()+0x2a) [0x55a21e04e95a]<br /> stderr: NOTE: a copy of the executable, or `objdump -rdS &lt;executable&gt;` is needed to interpret this.<br />--> Was unable to complete a new OSD, will rollback changes<br />Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it<br /> stderr: purged osd.0<br />--> RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid a87c3a87-cf22-41df-af4b-c971ed4c0e1a --setuser ceph --setgroup ceph</p>