Ceph : Issues
https://tracker.ceph.com/
2015-08-27T08:43:20Z
Ceph
Redmine
Ceph - Bug #12803 (Duplicate): osd: op_commit on cancelled op
https://tracker.ceph.com/issues/12803
2015-08-27T08:43:20Z
Zhiqiang Wang
wonzhq@hotmail.com
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/yuan-2015-08-20_00:19:52-rados-wip-intel-testing---basic-multi/1024108/">http://qa-proxy.ceph.com/teuthology/yuan-2015-08-20_00:19:52-rados-wip-intel-testing---basic-multi/1024108/</a></p>
<pre>
2015-08-20 06:40:22.212419 7f99c6ff6700 -1 *** Caught signal (Segmentation fault) **
in thread 7f99c6ff6700
ceph version 9.0.1-940-g452ea14 (452ea14e34af7609c8d5079eb5c7fc0b6617b9bf)
1: ceph-osd() [0xb4a71a]
2: (()+0x10340) [0x7f99d674c340]
3: (std::_Rb_tree<pg_shard_t, pg_shard_t, std::_Identity<pg_shard_t>, std::less<pg_shard_t>, std::allocator<pg_shard_t> >::equal_range(pg_shard_t const&)+0x10) [0x715de0]
4: (std::_Rb_tree<pg_shard_t, pg_shard_t, std::_Identity<pg_shard_t>, std::less<pg_shard_t>, std::allocator<pg_shard_t> >::erase(pg_shard_t const&)+0x16) [0x715eb6]
5: (ReplicatedBackend::op_commit(ReplicatedBackend::InProgressOp*)+0xb7) [0xa5a4e7]
6: (Context::complete(int)+0x9) [0x6f5909]
7: (ReplicatedPG::BlessedContext::finish(int)+0x94) [0x8e1944]
8: (Context::complete(int)+0x9) [0x6f5909]
9: (Finisher::finisher_thread_entry()+0x158) [0xb6f438]
10: (()+0x8182) [0x7f99d6744182]
11: (clone()+0x6d) [0x7f99d4caeefd]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
</pre>
<pre>
2015-08-20 06:40:22.118587 7f99bb66f700 10 osd.2 pg_epoch: 871 pg[128.6( v 871'22 (0'0,871'22] local-les=867 n=1 ec=850 les/c 867/867 871/871/871) [3,2,0] r=1 lpr=871 pi=866-870/1 crt=869'17 lcod 869'17 inactive] canceling repop tid 2232
2015-08-20 06:40:22.118605 7f99bb66f700 20 osd.2 pg_epoch: 871 pg[128.6( v 871'22 (0'0,871'22] local-les=867 n=1 ec=850 les/c 867/867 871/871/871) [3,2,0] r=1 lpr=871 pi=866-870/1 crt=869'17 lcod 869'17 inactive] remove_repop repgather(0x538df00 871'22 rep_tid=2232 committed?=0 applied?=0)
2015-08-20 06:40:22.118626 7f99bb66f700 20 osd.2 pg_epoch: 871 pg[128.6( v 871'22 (0'0,871'22] local-les=867 n=1 ec=850 les/c 867/867 871/871/871) [3,2,0] r=1 lpr=871 pi=866-870/1 crt=869'17 lcod 869'17 inactive] obc obc(6/hit_set_128.6_archive_2015-08-20 06:40:17.042660_2015-08-20 06:40:20.751790/head/.ceph-internal/128 rwstate(none n=0 w=0))
...
2015-08-20 06:40:22.141832 7f99c6ff6700 10 osd.2 pg_epoch: 871 pg[128.6( v 871'22 (0'0,871'22] local-les=867 n=1 ec=850 les/c 867/867 871/871/871) [3,2,0] r=1 lpr=871 pi=866-870/1 crt=869'17 lcod 869'17 inactive NOTIFY] op_commit: 2317437596494410039
</pre>
<p>The op being committed was removing the hit set archive objects. The logs show the repop (tid 2232) being cancelled and removed at 06:40:22.118587, yet op_commit still fired for it at 06:40:22.141832, so op_commit ran against the state of an already-cancelled op.</p>
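<p>A minimal, hypothetical model of the hazard (not Ceph's actual code; the names <code>Backend</code>, <code>start_op</code>, <code>cancel_op</code> are illustrative): if the commit callback holds a raw pointer to the <code>InProgressOp</code>, a cancellation that frees the op leaves the callback with a dangling pointer. Looking the op up by tid instead turns a commit on a cancelled op into a harmless no-op.</p>

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <memory>

// Simplified sketch of in-progress op tracking, illustrating why a
// commit callback must tolerate an op that was already cancelled.
struct InProgressOp {
  uint64_t tid = 0;
  bool committed = false;
};

struct Backend {
  std::map<uint64_t, std::unique_ptr<InProgressOp>> in_progress_ops;

  void start_op(uint64_t tid) {
    auto op = std::make_unique<InProgressOp>();
    op->tid = tid;
    in_progress_ops[tid] = std::move(op);
  }

  // Called e.g. when the PG enters a new interval: the op is dropped.
  void cancel_op(uint64_t tid) { in_progress_ops.erase(tid); }

  // Commit callback delivered later by a finisher thread. Because it
  // looks the op up by tid rather than dereferencing a saved pointer,
  // a commit arriving after cancellation simply does nothing.
  bool op_commit(uint64_t tid) {
    auto it = in_progress_ops.find(tid);
    if (it == in_progress_ops.end())
      return false;  // op was cancelled; no-op instead of use-after-free
    it->second->committed = true;
    return true;
  }
};
```

<p>With the tids from the log above, cancelling tid 2232 and then delivering its commit is safe in this model.</p>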
Ceph - Feature #11066 (Resolved): Rados level support of setting object attributes (e.g: pin/unpi...
https://tracker.ceph.com/issues/11066
2015-03-09T01:27:13Z
Zhiqiang Wang
wonzhq@hotmail.com
<p>We discussed the following blueprint at the Infernalis CDS:<br /><a class="external" href="https://wiki.ceph.com/Planning/Blueprints/Infernalis/Dynamic_data_relocation_for_cache_tiering">https://wiki.ceph.com/Planning/Blueprints/Infernalis/Dynamic_data_relocation_for_cache_tiering</a><br />The consensus is that a rados-level pin operation to force an object to stay in the hot tier would be a good idea. We want to implement a general-purpose framework that supports setting object attributes at the rados level. This ticket will track its progress.</p>
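<p>A hypothetical sketch of the proposed semantics (the types <code>ObjectAttrs</code> and <code>CacheTier</code> are illustrative, not part of librados): a per-object "pin" attribute that cache-tier eviction must honor, so pinned objects survive an eviction pass while unpinned ones are flushed out.</p>

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Illustrative model only: a rados-level object attribute consulted
// by the hot tier's eviction logic.
struct ObjectAttrs {
  bool pinned = false;  // pin: force the object to stay in the hot tier
};

struct CacheTier {
  std::map<std::string, ObjectAttrs> objects;  // current hot-tier contents

  void set_attr_pin(const std::string& oid, bool pin) {
    objects[oid].pinned = pin;
  }

  // Evict every unpinned object; return the oids actually evicted.
  std::vector<std::string> evict_unpinned() {
    std::vector<std::string> evicted;
    for (auto it = objects.begin(); it != objects.end();) {
      if (!it->second.pinned) {
        evicted.push_back(it->first);
        it = objects.erase(it);  // erase returns the next valid iterator
      } else {
        ++it;
      }
    }
    return evicted;
  }
};
```

<p>The design point is that the attribute is generic at the rados level: "pin" is just one attribute the tiering agent consults, and the same framework could carry other relocation hints.</p>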