Ceph : Issues
https://tracker.ceph.com/
rbd - Bug #53434 (Resolved): DiffIterateTest/0.DiffIterate failed w/ librbd pwl cache.
https://tracker.ceph.com/issues/53434
2021-11-30T07:18:14Z
jianpeng ma
jianpeng.ma@intel.com
<pre>
[ RUN ] DiffIterateTest/0.DiffIterate
using new format!
 wrote [4104167~28361,7507937~35127,8211521~20835,10648079~7405,16584393~10572,18804007~45552,20390893~84624]
 wrote [2113334~66009,6360264~99806,6681612~87217,8221560~10796]
 diff was [2113334~66009,6360264~99806,6681612~87217]
 ... two - (two*diff) = [8221560~10796]
../src/test/librbd/test_librbd.cc:4294: Failure
Value of: two.subset_of(diff)
  Actual: false
Expected: true
[ FAILED ] DiffIterateTest/0.DiffIterate, where TypeParam = DiffIterateParams<false> (5318 ms)
[----------] 1 test from DiffIterateTest/0 (5318 ms total)
</pre>
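The failing assertion checks that every extent written after the first snapshot is contained in the diff that diff_iterate reported. A minimal standalone sketch of that containment check over offset~length extents; this is only an illustration, not Ceph's interval_set implementation:

<pre><code>#include <algorithm>
#include <cstdint>
#include <iostream>
#include <map>

// Illustrative extent map: offset -> length, assumed non-overlapping and sorted.
using Extents = std::map<uint64_t, uint64_t>;

// Returns true if every byte of 'small' is covered by some extent in 'big'.
bool subset_of(const Extents& small, const Extents& big) {
  for (auto [off, len] : small) {
    uint64_t end = off + len;
    while (off < end) {
      auto it = big.upper_bound(off);          // first extent starting after 'off'
      if (it == big.begin()) return false;     // nothing starts at or before 'off'
      --it;
      if (it->first + it->second <= off) return false;        // gap before 'off'
      off = std::min(end, it->first + it->second);             // skip covered bytes
    }
  }
  return true;
}

int main() {
  Extents diff = {{2113334, 66009}, {6360264, 99806}, {6681612, 87217}};
  Extents two  = {{2113334, 66009}, {6360264, 99806}, {6681612, 87217}, {8221560, 10796}};
  // Mirrors the failed assertion: the extent 8221560~10796 is missing from the diff.
  std::cout << std::boolalpha << subset_of(two, diff) << "\n";  // prints false
  return 0;
}</code></pre>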
rbd - Bug #53057 (Resolved): [pwl] TestDeepCopy.NoSnaps failed w/ rbd pwl enabled.
https://tracker.ceph.com/issues/53057
2021-10-27T03:27:40Z
jianpeng ma
jianpeng.ma@intel.com
<pre>
2021-10-20T10:37:06.644 INFO:tasks.workunit.client.0.plana304.stdout:[ RUN ] TestDeepCopy.NoSnaps
2021-10-20T10:37:09.810 INFO:tasks.workunit.client.0.plana304.stdout:snap: null, block 20971520~4194304 differs
2021-10-20T10:37:09.811 INFO:tasks.workunit.client.0.plana304.stdout:src block:
2021-10-20T10:37:09.848 INFO:tasks.workunit.client.0.plana304.stdout:00000000 31 31 31 31 31 31 31 31 31 31 31 31 31 31 31 31 |1111111111111111|
2021-10-20T10:37:09.848 INFO:tasks.workunit.client.0.plana304.stdout:*
2021-10-20T10:37:09.849 INFO:tasks.workunit.client.0.plana304.stdout:003ffff0 31 31 31 31 31 31 31 31 31 31 31 31 31 31 31 31 |1111111111111111|
2021-10-20T10:37:09.849 INFO:tasks.workunit.client.0.plana304.stdout:00400000
2021-10-20T10:37:09.849 INFO:tasks.workunit.client.0.plana304.stdout:dst block:
2021-10-20T10:37:09.870 INFO:tasks.workunit.client.0.plana304.stdout:00000000 31 31 31 31 31 31 31 31 31 31 31 31 31 31 31 31 |1111111111111111|
2021-10-20T10:37:09.870 INFO:tasks.workunit.client.0.plana304.stdout:*
2021-10-20T10:37:09.871 INFO:tasks.workunit.client.0.plana304.stdout:00100000 31 31 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |11..............|
2021-10-20T10:37:09.871 INFO:tasks.workunit.client.0.plana304.stdout:00100010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
2021-10-20T10:37:09.871 INFO:tasks.workunit.client.0.plana304.stdout:*
2021-10-20T10:37:09.872 INFO:tasks.workunit.client.0.plana304.stdout:003ffff0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
2021-10-20T10:37:09.872 INFO:tasks.workunit.client.0.plana304.stdout:00400000
2021-10-20T10:37:09.872 INFO:tasks.workunit.client.0.plana304.stdout:/tmp/release/Ubuntu/WORKDIR/ceph-17.0.0-7897-g5ac9f523ea2/src/test/librbd/test_DeepCopy.cc:128: Failure
2021-10-20T10:37:09.873 INFO:tasks.workunit.client.0.plana304.stdout:Value of: src_bl.contents_equal(dst_bl)
2021-10-20T10:37:09.873 INFO:tasks.workunit.client.0.plana304.stdout: Actual: false
2021-10-20T10:37:09.873 INFO:tasks.workunit.client.0.plana304.stdout:Expected: true
2021-10-20T10:37:10.113 INFO:tasks.workunit.client.0.plana304.stdout:[ FAILED ] TestDeepCopy.NoSnaps (3470 ms)
</pre>
rbd - Bug #52511 (Resolved): [pwl ssd] flush causes I/O re-ordering at the writeback layer
https://tracker.ceph.com/issues/52511
2021-09-06T06:13:40Z
jianpeng ma
jianpeng.ma@intel.com
Consider this workload:
writeA(0, 4K)
writeB(0, 512)
The SSD cache preserves the writeA, writeB order in the cache file. But when flushing to the OSD it first reads the cached data and only then writes it back. The reads are issued in order, but they use aio_read, so their completion order is not guaranteed:

<pre><code>Context* WriteLog<I>::construct_flush_entry_ctx(
    std::shared_ptr<GenericLogEntry> log_entry) {
  // snapshot so we behave consistently
  bool invalidating = this->m_invalidating;

  Context *ctx = this->construct_flush_entry(log_entry, invalidating);

  if (invalidating) {
    return ctx;
  }
  if (log_entry->is_write_entry()) {
    bufferlist *read_bl_ptr = new bufferlist;
    ctx = new LambdaContext(
      [this, log_entry, read_bl_ptr, ctx](int r) {
        bufferlist captured_entry_bl;
        captured_entry_bl.claim_append(*read_bl_ptr);
        delete read_bl_ptr;
        m_image_ctx.op_work_queue->queue(new LambdaContext(
          [this, log_entry, entry_bl=move(captured_entry_bl), ctx](int r) {
            auto captured_entry_bl = std::move(entry_bl);
            ldout(m_image_ctx.cct, 15) << "flushing:" << log_entry
                                       << " " << *log_entry << dendl;
            log_entry->writeback_bl(this->m_image_writeback, ctx,
                                    std::move(captured_entry_bl));
          }), 0);
      });
    ctx = new LambdaContext(
      [this, log_entry, read_bl_ptr, ctx](int r) {
        auto write_entry = static_pointer_cast<WriteLogEntry>(log_entry);
        write_entry->inc_bl_refs();
        aio_read_data_block(std::move(write_entry), read_bl_ptr, ctx);
      });
    return ctx;</code></pre>
If the read of writeB's cached data completes before the read of writeA's, the order sent to the lower layer becomes: writeB, writeA. This may be a bug.

https://tracker.ceph.com/issues/49876 is an example.
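A minimal sketch of the hazard described above, not Ceph's actual flush path: two asynchronous reads are issued in order, but the completions that trigger the writebacks can run in the opposite order:

<pre><code>#include <chrono>
#include <functional>
#include <iostream>
#include <thread>
#include <vector>

// Hypothetical async read: runs the completion on a worker thread after a
// simulated device latency. Issue order implies nothing about completion order.
static void aio_read(int latency_ms, std::function<void()> on_complete,
                     std::vector<std::thread>& workers) {
  workers.emplace_back([latency_ms, on_complete] {
    std::this_thread::sleep_for(std::chrono::milliseconds(latency_ms));
    on_complete();  // in the real cache this would trigger the writeback
  });
}

int main() {
  std::vector<std::thread> workers;

  // Flush of writeA(0,4K): its cache read is issued first but is slower.
  aio_read(20, [] { std::cout << "writeback writeA(0,4K)\n"; }, workers);
  // Flush of writeB(0,512): issued second, completes first.
  aio_read(1,  [] { std::cout << "writeback writeB(0,512)\n"; }, workers);

  for (auto& t : workers) t.join();
  // Likely output: writeB's writeback reaches the lower layer before writeA's,
  // so the older 4K write can overwrite the newer 512-byte write on the OSD.
  return 0;
}</code></pre>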
rbd - Bug #52400 (Resolved): [pwl ssd] memory corruption (shared_ptr related?)
https://tracker.ceph.com/issues/52400
2021-08-25T01:26:48Z
jianpeng ma
jianpeng.ma@intel.com
In xfstests w/ librbd/pwl/ssd, we met some crashes.

<pre>
#0  0x00007fd2457747b1 in std::__atomic_base<unsigned int>::fetch_add (__m=std::memory_order_seq_cst, __i=1, this=0x51)
    at /usr/include/c++/9/bits/atomic_base.h:539
#1  std::__atomic_base<unsigned int>::operator++ (this=0x51) at /usr/include/c++/9/bits/atomic_base.h:303
#2  ceph::buffer::v15_2_0::ptr::ptr (this=this@entry=0x7fd2142b7448, p=...) at ../src/common/buffer.cc:386
#3  0x00007fd245776630 in ceph::buffer::v15_2_0::ptr_node::ptr_node (this=0x7fd2142b7440) at ../src/include/buffer.h:397
#4  ceph::buffer::v15_2_0::ptr_node::cloner::operator() (this=<optimized out>, clone_this=...) at ../src/common/buffer.cc:2240
#5  0x00007fd23c18f023 in __gnu_cxx::__atomic_add_single (__val=1, __mem=0x7fcf) at /usr/include/c++/9/ext/atomicity.h:98
#6  __gnu_cxx::__atomic_add_dispatch (__val=1, __mem=0x7fcf) at /usr/include/c++/9/ext/atomicity.h:98
#7  std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_add_ref_copy (this=0x51) at /usr/include/c++/9/bits/shared_ptr_base.h:139
#8  std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count (__r=..., this=0x7fd233ffd0e8)
    at /usr/include/c++/9/bits/shared_ptr_base.h:737
#9  std::__shared_ptr<librbd::cache::pwl::ssd::WriteLogEntry, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<librbd::cache::pwl::GenericWriteLogEntry> (__p=0x7fcfc40333a0, __r=..., this=0x7fd233ffd0e0) at /usr/include/c++/9/bits/shared_ptr_base.h:1164
#10 std::shared_ptr<librbd::cache::pwl::ssd::WriteLogEntry>::shared_ptr<librbd::cache::pwl::GenericWriteLogEntry> (
    __p=0x7fcfc40333a0, __r=std::shared_ptr<librbd::cache::pwl::GenericWriteLogEntry> (empty) = {...}, this=0x7fd233ffd0e0)
    at /usr/include/c++/9/bits/shared_ptr.h:235
#11 std::static_pointer_cast<librbd::cache::pwl::ssd::WriteLogEntry, librbd::cache::pwl::GenericWriteLogEntry> (
    __r=std::shared_ptr<librbd::cache::pwl::GenericWriteLogEntry> (empty) = {...}) at /usr/include/c++/9/bits/shared_ptr.h:494
#12 librbd::cache::pwl::ssd::WriteLog<librbd::ImageCtx>::collect_read_extents (this=0x7fd2200153e0, read_buffer_offset=12288,
    map_entry=..., log_entries_to_read=std::vector of length 0, capacity 0, bls_to_read=std::vector of length 0, capacity 0,
    entry_hit_length=<optimized out>, hit_extent={...}, read_ctx=0x7fd21423ab80) at ../src/librbd/cache/pwl/ssd/WriteLog.cc:80
#13 0x00007fd23c14f9a2 in librbd::cache::pwl::AbstractWriteLog<librbd::ImageCtx>::read (this=<optimized out>, image_extents=...,
    bl=<optimized out>, fadvise_flags=fadvise_flags@entry=0, on_finish=<optimized out>) at /usr/include/c++/9/ext/atomicity.h:96
#14 0x00007fd23c135daa in librbd::cache::WriteLogImageDispatch<librbd::ImageCtx>::read (this=0x7fd224033000,
    aio_comp=0x55c098c26a50, image_extents=..., read_result=...,
    io_context=std::shared_ptr<neorados::IOContext> (use count 8, weak count 0) = {...}, op_flags=0, read_flags=0,
    parent_trace=..., tid=4572348, image_dispatch_flags=0x55c0982a874c, dispatch_result=0x55c0982a8750, on_finish=0x55c098c26ba8,
    on_dispatched=0x55c0982a8730) at /usr/include/c++/9/optional:963
#15 0x00007fd245bef334 in librbd::io::ImageDispatcher<librbd::ImageCtx>::SendVisitor::operator() (read=..., this=0x7fd233ffd480)
    at /usr/include/c++/9/ext/atomicity.h:96
#16 boost::detail::variant::invoke_visitor<librbd::io::ImageDispatcher<librbd::ImageCtx>::SendVisitor const, false>::internal_visit<librbd::io::ImageDispatchSpec::Read&> (operand=..., this=<synthetic pointer>) at boost/include/boost/variant/variant.hpp:1028
#17 boost::detail::variant::visitation_impl_invoke_impl<boost::detail::variant::invoke_visitor<librbd::io::ImageDispatcher<librbd::ImageCtx>::SendVisitor const, false>, void*, librbd::io::ImageDispatchSpec::Read> (storage=<optimized out>,
    visitor=<synthetic pointer>...)
    at boost/include/boost/variant/detail/visitation_impl.hpp:119
#18 boost::detail::variant::visitation_impl_invoke<boost::detail::variant::invoke_visitor<librbd::io::ImageDispatcher<librbd::ImageCtx>::SendVisitor const, false>, void*, librbd::io::ImageDispatchSpec::Read, boost::variant<librbd::io::ImageDispatchSpec::Read, librbd::io::ImageDispatchSpec::Discard, librbd::io::ImageDispatchSpec::Write, librbd::io::ImageDispatchSpec::WriteSame, librbd::io::ImageDispatchSpec::CompareAndWrite, librbd::io::ImageDispatchSpec::Flush, librbd::io::ImageDispatchSpec::ListSnaps>::has_fallback_type_> (
    t=0x0, storage=<optimized out>, visitor=<synthetic pointer>..., internal_which=<optimized out>)
</pre>
rbd - Bug #52341 (Resolved): [pwl] m_bytes_allocated is calculated incorrectly on reopen
https://tracker.ceph.com/issues/52341
2021-08-20T09:28:35Z
jianpeng ma
jianpeng.ma@intel.com
After a restart, the pwl/ssd cache loads the existing log entries and recalculates the dirty data, i.e. m_bytes_allocated. The current code under-counts this value. As a result, new writes can be allocated into space that still holds dirty data, so dirty data is overwritten by new data and can no longer be flushed to the OSD.
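A hedged sketch of the kind of accounting the report is about; the names and allocation-unit rounding are assumptions for illustration, not the actual pwl/ssd reload code. The point is that every surviving entry has to be counted back into the allocated-bytes total when the cache is reopened, otherwise the free-space check admits writes into space that is still dirty:

<pre><code>#include <cstdint>
#include <vector>

// Illustrative only: a log entry that survived a restart.
struct LogEntry {
  uint64_t write_bytes;  // payload size of the cached write
};

constexpr uint64_t kAllocUnit = 4096;  // assumed allocation granularity

// On reopen, rebuild the allocated-bytes counter from the surviving entries.
// If this under-counts (e.g. forgets per-entry rounding/overhead), the cache
// believes it has more free space than it does and can hand the same space
// to new writes before the old dirty data has been flushed.
uint64_t recalc_bytes_allocated(const std::vector<LogEntry>& entries) {
  uint64_t bytes_allocated = 0;
  for (const auto& e : entries) {
    uint64_t rounded = ((e.write_bytes + kAllocUnit - 1) / kAllocUnit) * kAllocUnit;
    bytes_allocated += rounded;
  }
  return bytes_allocated;
}

int main() {
  std::vector<LogEntry> entries = {{5000}, {512}, {4096}};
  // 5000 -> 8192, 512 -> 4096, 4096 -> 4096: 16384 bytes must be counted as allocated.
  return recalc_bytes_allocated(entries) == 16384 ? 0 : 1;
}</code></pre>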
rbd - Bug #52323 (Resolved): [pwl ssd] incorrect first_valid_entry calculation in retire_entries()
https://tracker.ceph.com/issues/52323
2021-08-19T08:37:17Z
jianpeng ma
jianpeng.ma@intel.com
If we kill a running program that is writing data with the pwl/ssd cache, it cannot restart because it reads back wrong data. There are two reasons:

1:
<pre>
2021-08-19T15:37:02.026+0800 7f812e5fc700 0 librbd::cache::pwl::ssd::WriteLog: 0x7f836c0151b0 schedule_update_root: New root: pool_size=1073741824 first_valid_entry=228626432 first_free_entry=228044800 flushed_sync_gen=19536
2021-08-19T15:37:02.030+0800 7f8398bc5700 0 librbd::cache::pwl::ssd::WriteLog: 0x7f836c0151b0 schedule_update_root: New root: pool_size=1073741824 first_valid_entry=228626432 first_free_entry=228696064 flushed_sync_gen=19536
</pre>
New data will overwrite log entries that have not been retired yet. This means the condition used to judge whether the cache is full is wrong: the same space is freed for allocation more than once.

2: The location of the WriteLogCacheEntry is calculated incorrectly, which makes decoding fail.
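The root metadata above tracks pool_size, first_valid_entry and first_free_entry, which suggests a ring layout: retiring advances first_valid_entry, appending advances first_free_entry. Below is a simplified, illustrative model of that invariant (names and arithmetic are assumptions, not the actual WriteLog code); freeing the same range twice inflates the free-space estimate, which is exactly how un-retired entries get overwritten:

<pre><code>#include <cassert>
#include <cstdint>

// Simplified ring-buffer model of the on-disk log (illustrative only).
struct Ring {
  uint64_t pool_size;          // total bytes in the cache pool
  uint64_t first_valid_entry;  // byte offset of the oldest entry not yet retired
  uint64_t first_free_entry;   // byte offset where the next entry will be written

  // Bytes available for new entries without touching un-retired data.
  uint64_t free_bytes() const {
    if (first_free_entry >= first_valid_entry)
      return pool_size - (first_free_entry - first_valid_entry);
    return first_valid_entry - first_free_entry;
  }

  // Retiring must advance first_valid_entry exactly once per retired range.
  void retire(uint64_t bytes) {
    first_valid_entry = (first_valid_entry + bytes) % pool_size;
  }
};

int main() {
  Ring r{1073741824, 228626432, 228044800};  // values from the log above
  uint64_t before = r.free_bytes();
  r.retire(4096);
  r.retire(4096);  // retiring the same range twice inflates free_bytes(), so
                   // allocation can hand out space that still holds valid entries
  assert(r.free_bytes() == before + 8192);
  return 0;
}</code></pre>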
Ceph - Bug #41216 (Resolved): os/bluestore: Don't forget sub kv_submitted_waiters.
https://tracker.ceph.com/issues/41216
2019-08-13T01:18:42Z
jianpeng ma
jianpeng.ma@intel.com
In flush_all_but_last(), the function forgets to decrement kv_submitted_waiters when it returns on the condition "it->state >= TransContext::STATE_KV_SUBMITTED":

<pre><code>void flush_all_but_last() {
  std::unique_lock l(qlock);
  assert (q.size() >= 1);
  while (true) {
    // set flag before the check because the condition
    // may become true outside qlock, and we need to make
    // sure those threads see waiters and signal qcond.
    ++kv_submitted_waiters;
    if (q.size() <= 1) {
      --kv_submitted_waiters;
      return;
    } else {
      auto it = q.rbegin();
      it++;
      if (it->state >= TransContext::STATE_KV_SUBMITTED) {
        return;  // kv_submitted_waiters is never decremented on this path
      }
    }
    qcond.wait(l);
    --kv_submitted_waiters;
  }
}</code></pre>
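A reduced, self-contained model of the waiter accounting the report describes (illustrative only, not BlueStore code): every path that increments the waiter counter must decrement it before leaving the function, including the early-return branch quoted above:

<pre><code>#include <cassert>
#include <condition_variable>
#include <mutex>

// Reduced model of flush_all_but_last()'s waiter accounting.
struct WaiterCounter {
  std::mutex qlock;
  std::condition_variable qcond;
  int kv_submitted_waiters = 0;

  void flush_like(bool already_submitted) {
    std::unique_lock l(qlock);
    ++kv_submitted_waiters;
    if (already_submitted) {
      --kv_submitted_waiters;  // the fix implied by the report: rebalance before the early return
      return;
    }
    // ... the real code would qcond.wait(l) here and decrement after waking ...
    --kv_submitted_waiters;
  }
};

int main() {
  WaiterCounter wc;
  wc.flush_like(true);
  // Without the decrement on the early-return path the counter would stay at 1
  // forever, which is the leak the report describes.
  assert(wc.kv_submitted_waiters == 0);
  return 0;
}</code></pre>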
RADOS - Bug #40577 (Resolved): vstart.sh doesn't work.
https://tracker.ceph.com/issues/40577
2019-06-28T02:32:49Z
jianpeng ma
jianpeng.ma@intel.com
When do_cmake.sh is first run, it creates a ceph.conf in the build dir containing:

plugin dir = lib
erasure code dir = lib

Running ../src/vstart.sh -n then prints the following messages:

<pre>
ceph-mgr dashboard not built - disabling.
global_init: error reading config file.
global_init: error reading config file.
global_init: error reading config file.
global_init: error reading config file.
global_init: error reading config file.
dirname: missing operand
Try 'dirname --help' for more information.
</pre>

Debugging shows that the line "asok_dir=`dirname $($CEPH_BIN/ceph-conf -c $conf_fn --show-config-value admin_socket)`" causes the error and makes vstart.sh return. Running the simplified command "/bin/ceph-conf -c ceph.conf --show-config-value admin_socket" prints:
global_init: error reading config file.

Git bisect shows that commit b1289290247fcd724c9f794716176089342f1110 caused this bug.

BTW: if ceph.conf is removed, the problem does not occur. I think ceph.conf contains no information about admin_socket, which leaves dirname with an empty argument. If ceph.conf contained admin_socket, this bug would not occur.
Ceph - Bug #39623 (Resolved): make cluster_network work well.
https://tracker.ceph.com/issues/39623
2019-05-08T03:26:59Z
jianpeng ma
jianpeng.ma@intel.com
This temporary parameter leaves the address as zero, which makes cluster_addr equal to public_addr and effectively disables the cluster_network.
rbd - Bug #39269 (Resolved): rbd-nbd should return a correct error message when the device path doesn't match.
https://tracker.ceph.com/issues/39269
2019-04-12T09:12:16Z
jianpeng ma
jianpeng.ma@intel.com
When executing: rbd-nbd map rbd/image --device /dev/image
the error message is:
  rbd-nbd: failed to open device: /dev/image

In fact, it should print:
  rbd-nbd: invalid device path: /dev/image (expected /dev/nbd{num})
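A hedged sketch of the validation the report asks for (not the actual rbd-nbd code; the helper name is invented): reject the argument up front unless it looks like /dev/nbd followed by a number, and report that instead of a generic open failure:

<pre><code>#include <cctype>
#include <iostream>
#include <string>

// Hypothetical check: a valid device argument must be /dev/nbd followed by digits.
static bool is_nbd_device_path(const std::string& path) {
  const std::string prefix = "/dev/nbd";
  if (path.compare(0, prefix.size(), prefix) != 0) return false;
  std::string suffix = path.substr(prefix.size());
  if (suffix.empty()) return false;
  for (char c : suffix) {
    if (!std::isdigit(static_cast<unsigned char>(c))) return false;
  }
  return true;
}

int main() {
  std::string device = "/dev/image";  // the path from the report
  if (!is_nbd_device_path(device)) {
    // The message the reporter expects instead of "failed to open device".
    std::cerr << "rbd-nbd: invalid device path: " << device
              << " (expected /dev/nbd{num})" << std::endl;
    return 22;  // EINVAL-style exit code, for illustration only
  }
  return 0;
}</code></pre>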
bluestore - Bug #24761 (Resolved): set the shard correctly for an existing Collection.
https://tracker.ceph.com/issues/24761
2018-07-03T23:29:03Z
jianpeng ma
jianpeng.ma@intel.com
For an existing Collection, the constructor is called from _open_collections, but m_finisher_num has not been set up yet when bluestore_shard_finishers is enabled.

So move the m_finisher_num setup before _open_collections.
bluestore - Bug #24561 (Resolved): if disableWAL is set, submit_transaction_sync will hit an error.
https://tracker.ceph.com/issues/24561
2018-06-19T03:03:44Z
jianpeng ma
jianpeng.ma@intel.com
If disableWAL is set, it hits this error:

rocksdb: submit_common error: Invalid argument: Sync writes has to enable WAL. code = 4 Rocksdb transaction:
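For reference, this is RocksDB's own behaviour: a write issued with sync=true while disableWAL=true returns an InvalidArgument status. A small standalone sketch (the DB path is arbitrary):

<pre><code>#include <cassert>
#include <iostream>
#include <rocksdb/db.h>
#include <rocksdb/options.h>
#include <rocksdb/write_batch.h>

int main() {
  rocksdb::DB* db = nullptr;
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/waltest", &db);
  assert(s.ok());

  rocksdb::WriteBatch batch;
  batch.Put("key", "value");

  rocksdb::WriteOptions wo;
  wo.sync = true;        // what a *_sync submit implies
  wo.disableWAL = true;  // what the disableWAL setting implies
  s = db->Write(wo, &batch);

  // RocksDB refuses the combination: "Invalid argument: Sync writes has to enable WAL."
  std::cout << s.ToString() << std::endl;
  assert(s.IsInvalidArgument());

  delete db;
  return 0;
}</code></pre>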
Ceph - Bug #17760 (Closed): compile error
https://tracker.ceph.com/issues/17760
2016-11-01T07:20:41Z
jianpeng ma
jianpeng.ma@intel.com
<pre>
/mnt/ceph/src/test/erasure-code/TestErasureCodePluginJerasure.cc:25:25: fatal error: gtest/gtest.h: No such file or directory
compilation terminated.
src/test/erasure-code/CMakeFiles/unittest_erasure_code_plugin_jerasure.dir/build.make:62: recipe for target 'src/test/erasure-code/CMakeFiles/unittest_erasure_code_plugin_jerasure.dir/TestErasureCodePluginJerasure.cc.o' failed
</pre>

This is caused by PR https://github.com/ceph/ceph/pull/11714
Ceph - Bug #14954 (Rejected): BlueStore: met assert when write size==bluestore_overlay_max_length
https://tracker.ceph.com/issues/14954
2016-03-03T02:15:04Z
jianpeng ma
jianpeng.ma@intel.com
Suppose bluestore_overlay_max_length == bluestore_min_alloc_size and bluestore_overlay_max > 0. A write of bluestore_min_alloc_size bytes at offset=0 then hits this error:

<pre>
     0> 2016-03-03 18:32:22.089828 7f2719572700 -1 os/bluestore/BlueStore.cc: In function 'int BlueStore::_do_write(BlueStore::TransContext*, BlueStore::CollectionRef&, BlueStore::OnodeRef, uint64_t, uint64_t, ceph::bufferlist&, uint32_t)' thread 7f2719572700 time 2016-03-03 18:32:22.083797
os/bluestore/BlueStore.cc: 5601: FAILED assert(0 == "leaked unwritten extent")
</pre>
<pre><code> ceph version 10.0.3-2621-g045ad3d (045ad3d2a5bf85698d9d28e8e47bfe3ec2a136af)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x80) [0x55a57b3990a0]
 2: (BlueStore::_do_write(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>, unsigned long, unsigned long, ceph::buffer::list&, unsigned int)+0x302a) [0x55a57afd7b9a]
 3: (BlueStore::_write(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>&, unsigned long, unsigned long, ceph::buffer::list&, unsigned int)+0x2ee) [0x55a57afd873e]
 4: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)+0xcd4) [0x55a57aff0d94]
 5: (BlueStore::queue_transactions(ObjectStore::Sequencer*, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, std::shared_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x4cb) [0x55a57aff382b]
 6: (ReplicatedPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, std::shared_ptr<OpRequest>)+0x81) [0x55a57ae6eff1]
 7: (void ReplicatedBackend::sub_op_modify_impl<MOSDRepOp, 112>(std::shared_ptr<OpRequest>)+0xc2d) [0x55a57aed585d]
 8: (ReplicatedBackend::sub_op_modify(std::shared_ptr<OpRequest>)+0x44) [0x55a57aebf8b4]
 9: (ReplicatedBackend::handle_message(std::shared_ptr<OpRequest>)+0x2fb) [0x55a57aebfc3b]
 10: (ReplicatedPG::do_request(std::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0xbd) [0x55a57ae1326d]
 11: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x411) [0x55a57acbb131]
 12: (PGQueueable::RunVis::operator()(std::shared_ptr<OpRequest>&)+0x52) [0x55a57acbb382]
 13: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x6e1) [0x55a57acd50c1]
 14: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x734) [0x55a57b3892e4]
 15: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55a57b38c3f0]
 16: (()+0x760a) [0x7f2737a1360a]
 17: (clone()+0x6d) [0x7f27359bca4d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.</code></pre>
Ceph - Bug #5444 (Rejected): ceph df prints incorrect values
https://tracker.ceph.com/issues/5444
2013-06-24T19:16:33Z
jianpeng ma
jianpeng.ma@intel.com
<pre>
root@ubuntu:/media# ceph df
GLOBAL:
    SIZE     AVAIL    RAW USED    %RAW USED
    3903M    2267M    1367M       35.03

POOLS:
    NAME        ID    USED      %USED      OBJECTS
    data        0     309G      8109.86    79144
    metadata    1     36981K    0.93       30
    rbd         2     0         0          0
</pre>
The total size of the cluster is 3903M, about 4G, but the data pool is reported as having used about 309G.