Activity
From 06/01/2016 to 06/30/2016
06/30/2016
- 06:55 PM Bug #15653: crush: low weight devices get too many objects for num_rep > 1
- 10:52 AM Bug #16553 (New): Removing Writeback Cache Tier Does not clean up Incomplete_Clones
- It seems that the issue related to bug #8882 still happens on Infernalis (v9.2.1). After removing the writeback cache b...
06/28/2016
- 03:17 AM Bug #16500 (Resolved): ceph_erasure_code_benchmark parameter checking error for LRC plugin
- No parameter set can pass the parameter checking in ceph_erasure_code_benchmark when using LRC....
06/24/2016
- 10:27 PM Bug #15653: crush: low weight devices get too many objects for num_rep > 1
- I just need to put in a special case for LIST (I don't know if I should just not bother with TREE), test it some, and...
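The bias behind this bug can be seen with a toy calculation (a hypothetical illustration, not CRUSH code): choosing replicas by weight *without replacement* inflates a low-weight device's chance of being picked on later draws, so for num_rep > 1 it receives more than its weight-proportional share.

```python
from fractions import Fraction

# Hypothetical weights: two large devices and one small one.
weights = {"osd.a": 10, "osd.b": 10, "osd.c": 1}
total = sum(weights.values())

def pick_prob(dev):
    """Exact P(dev is chosen in one of 2 weighted draws without replacement)."""
    p = Fraction(weights[dev], total)  # chosen on the first draw
    for first, w in weights.items():
        if first != dev:
            # first draw picks someone else, then dev among the remainder
            p += Fraction(w, total) * Fraction(weights[dev], total - w)
    return p

fair_share = Fraction(2 * weights["osd.c"], total)  # 2/21 for num_rep=2
actual = pick_prob("osd.c")                         # 31/231, ~41% above fair
```

Here `osd.c` should hold 2/21 ≈ 9.5% of placements but actually lands in about 13.4% of them, matching the symptom in the bug title.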
06/18/2016
- 03:56 PM Bug #16385: rados bench seq and rand tests do not work if op_size != object_size
- See pull request wip-krehm-16385 for code fix.
- 11:45 AM Bug #16385 (Fix Under Review): rados bench seq and rand tests do not work if op_size != object_size
- rados bench write correctly creates objects whose object_size is a multiple of op_size, but rados bench seq and rand...
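The index arithmetic the reads need can be sketched as follows (an illustrative sketch, not the actual rados bench code): when object_size is a multiple of op_size, a write fills each object with several ops, so seq/rand reads must map an op index to an (object index, byte offset) pair instead of assuming one op per object.

```python
def op_to_object(op_index, op_size, object_size):
    """Map a sequential op index to (object index, offset within object)."""
    if object_size % op_size != 0:
        raise ValueError("object_size must be a multiple of op_size")
    ops_per_object = object_size // op_size
    return op_index // ops_per_object, (op_index % ops_per_object) * op_size

# With 4 MiB objects written in 1 MiB ops, op 5 reads object 1 at offset 1 MiB.
obj, off = op_to_object(5, 1 << 20, 4 << 20)
```

Reading with the one-op-per-object assumption instead would address object 5 at offset 0, which does not exist after a write that packed four ops into each object.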
06/17/2016
- 06:37 PM Bug #16379 (Closed): [ERROR ] "ceph auth get-or-create for keytype admin returned -1
- On the master branch, the test has been failing due to the issue below:
http://teuthology.ovh.sepia.ceph.com/teuthology/teu...
- 01:44 PM Bug #16365 (Resolved): Better network partition detection
- We had a situation where due to hardware issues our network was getting lossy for one OSD node, without the Ceph moni...
- 01:37 AM Documentation #16356 (Resolved): doc: manual deployment of ceph monitor needs fix
- http://docs.ceph.com/docs/master/install/manual-deployment/
while setting up ceph-mon
step 8 in the doc says to c...
06/16/2016
- 09:26 AM Bug #16258: ceph audit logs are not logging to ceph.audit.log if we specify "mon cluster log file...
- naga venkata bokka wrote:
> actually it is a documentation defect; it should be stated clearly under this section http...
- 09:25 AM Bug #16258: ceph audit logs are not logging to ceph.audit.log if we specify "mon cluster log file...
- actually it is a documentation defect; it should be stated clearly under this section http://docs.ceph.com/docs/jewel/r...
06/14/2016
- 04:12 AM Bug #16279 (Closed): assert(objiter->second->version > last_divergent_update) failed
- We built a cluster with 2 OSDs,
wrote 100G of files, and cut the power while deleting the files.
After the machi...
06/13/2016
- 01:20 PM Bug #16236: cache/proxied ops from different primaries (cache interval change) don't order proper...
- /a/sage-2016-06-11_06:29:35-rados-jewel---basic-smithi/251518
(at least ceph_test_rados got ERANGE.. presumably it...
- 10:06 AM Bug #16258 (New): ceph audit logs are not logging to ceph.audit.log if we specify "mon cluster lo...
- In ceph.conf, if we specify the option "mon cluster log file", both audit and cluster logs go into the same file cep...
- 04:47 AM Bug #16177 (New): leveldb horrendously slow
- 04:46 AM Bug #16177: leveldb horrendously slow
- Adam,
It seems we are throttling when exporting the pglog to the OSD (yeah, it's a bug, IMO!). You can disable it by ...
06/11/2016
- 09:51 AM Backport #16239 (Resolved): 'ceph tell osd.0 flush_pg_stats' fails in rados qa run
- https://github.com/ceph/ceph/pull/15475
- 12:04 AM Bug #16236: cache/proxied ops from different primaries (cache interval change) don't order proper...
- OK, the first sequence of ops came from osd.1, the second from osd.3. The bug is that the queues do not guarantee or...
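The ordering gap can be shown with a minimal, hypothetical simulation (not OSD code): two per-source FIFO queues each preserve their own order, but nothing orders an op that arrived via osd.1 relative to one that arrived via osd.3 after the interval change, so a consumer can observe them reversed.

```python
from collections import deque

# Op A was sent via the old primary; op B was re-sent via the new one.
queue_from_osd1 = deque(["client.op_A"])  # before the interval change
queue_from_osd3 = deque(["client.op_B"])  # after the interval change

# A scheduler that happens to drain osd.3's queue first reorders the ops,
# even though each individual queue is strictly FIFO.
dequeued = []
for q in (queue_from_osd3, queue_from_osd1):
    while q:
        dequeued.append(q.popleft())

# dequeued now holds op B before op A: the client's submission order is lost.
```

Per-queue FIFO is therefore not enough; ordering across sources needs some shared sequencing (or the client must resend through a single path), which is the crux of this ticket.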
06/10/2016
- 11:09 PM Bug #16236: cache/proxied ops from different primaries (cache interval change) don't order proper...
- 2016-06-10 04:45:58.553823 7fad3adb1700 15 osd.1 pg_epoch: 143 pg[1.3( v 134'1051 (110'825,134'1051] local-les=124 n=...
- 11:02 PM Bug #16236: cache/proxied ops from different primaries (cache interval change) don't order proper...
- Same interval from
2016-06-10 04:45:49.866540 7fad38dad700 10 osd.1 136 dequeue_op 0x7fad5f96e400 prio 63 cost 64308...
- 10:42 PM Bug #16236: cache/proxied ops from different primaries (cache interval change) don't order proper...
- 2016-06-10 04:42:41.619711 7fad4f7df800 0 osd.1 0 using 0 op queue with priority op cut off at 196.
I guess we're...
- 10:40 PM Bug #16236: cache/proxied ops from different primaries (cache interval change) don't order proper...
- > 2016-06-10 04:45:49.866540 7fad38dad700 10 osd.1 136 dequeue_op 0x7fad5f96e400 client.4123.0:3612 1.df123c4b
wai...
- 10:35 PM Bug #16236: cache/proxied ops from different primaries (cache interval change) don't order proper...
- Samuel Just wrote:
> The interesting sequence is:
> 2016-06-10 04:45:37.605757 7fad47760700 15 osd.1 134 enqueue_op...
- 10:20 PM Bug #16236: cache/proxied ops from different primaries (cache interval change) don't order proper...
- The interesting sequence is:
2016-06-10 04:45:37.605757 7fad47760700 15 osd.1 134 enqueue_op 0x7fad5f96e400 client.4...
- 10:16 PM Bug #16236: cache/proxied ops from different primaries (cache interval change) don't order proper...
- sjust@teuthology:/a/samuelj-2016-06-09_19:31:27-rados-wip-16211-jewel-distro-basic-smithi/249150/remote$ grep -o '201...
- 10:13 PM Bug #16236 (Won't Fix): cache/proxied ops from different primaries (cache interval change) don't ...
- Came from finish_promote, writes happened between first and second copy_get which caused the version to change.
06/09/2016
- 04:51 PM Bug #16177: leveldb horrendously slow
- Since this export/import process is being so slow (now running for a week), and the inodes I'm working on extracting ...
- 09:43 AM Bug #16177: leveldb horrendously slow
- Hi Adam,
We're looking into why it appears to be spending so much time doing writes and why the write back throttl...
06/08/2016
- 12:26 PM Bug #16177: leveldb horrendously slow
- This is what I used when compressing the osd:
tar -c -z --xattrs --selinux --acls --preserve -f /tmp/ceph-osd.16.tar...
- 06:36 AM Bug #16177 (Need More Info): leveldb horrendously slow
- 05:47 AM Bug #16177: leveldb horrendously slow
- I will try to reproduce this with ceph-osd.16.tar.gz, following the steps at http://thread.gmane.org/gmane.comp.file-...
- 05:25 AM Bug #16177: leveldb horrendously slow
- Adam, when did you collect the trace using "poormansprofiler"? I think it was collected when you were importing the...
06/07/2016
- 12:26 PM Bug #16177: leveldb horrendously slow
- And, to add to that, 5 and a half hours of profiling using both methods:
http://people.cis.ksu.edu/~mozes/perftrac...
- 04:57 AM Bug #16177: leveldb horrendously slow
- How about a ...
- 04:06 AM Bug #16177: leveldb horrendously slow
- Hey Adam,
I don't know a lot about ceph_objectstore_tool but to get the ball rolling my naive approach would be to...
- 03:47 AM Bug #16177 (Closed): leveldb horrendously slow
- Recently ran into an issue using cephfs where loading pg data on osd start (with some lightning fast ssds) took long ...
06/06/2016
- 08:16 PM Feature #16172 (New): osd,librados: deprecate unbounded omap_get etc operations
- These return an unbounded amount of data (all omap keys). Deprecate these, and make the client-side switch to positi...
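A bounded replacement can be sketched like this (names here are hypothetical illustrations, not the librados API): instead of one call returning every omap key, the client fetches pages of at most `max_return` keys and resumes after the last key seen.

```python
def iter_omap(fetch_page, max_return=500):
    """Yield all (key, value) pairs via repeated bounded fetches.

    fetch_page(start_after, max_return) is assumed to return up to
    max_return pairs whose keys sort strictly after start_after.
    """
    start_after = ""
    while True:
        page = fetch_page(start_after, max_return)
        if not page:
            return
        yield from page
        start_after = page[-1][0]  # resume after the last key returned

# Toy in-memory backend standing in for an OSD's omap:
omap = {f"key-{i:04d}": i for i in range(7)}

def fetch_page(start_after, max_return):
    keys = sorted(k for k in omap if k > start_after)[:max_return]
    return [(k, omap[k]) for k in keys]

# Three bounded calls of <= 3 keys each, rather than one unbounded get.
pairs = list(iter_omap(fetch_page, max_return=3))
```

Each round trip is capped, so a pathologically large omap can no longer blow up a single reply, which is the point of deprecating the unbounded variants.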
06/03/2016
- 07:09 AM Bug #16133 (New): rados bench: ./common/Mutex.h:96: void Mutex::_pre_unlock(): Assertion `nlock >...
- [root@robert xiucai]# rados bench -p ssd 10 seq
2016-06-03 06:38:36.066873 7f6059968700 0 -- :/578428433 >> 10.0.10...