Profile

Mark Schouten

  • Email: mark@tuxis.nl
  • Registered on: 09/06/2016
  • Last connection: 02/11/2019

Activity

02/11/2019

12:12 PM rgw Bug #23651: Dynamic bucket indexing, resharding and tenants still seems to be broken
I believe this issue is fixed in https://ceph.com/releases/v12-2-11-luminous-released/ ?
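(Editor's note: the v12.2.11 Luminous release linked above added radosgw-admin tooling for cleaning up stale bucket index shards left behind by dynamic resharding. A hedged sketch of that cleanup, assuming a Luminous 12.2.11 or later cluster; not runnable without a live Ceph deployment:)

```shell
# List bucket index shard instances left behind by dynamic resharding
# (subcommand available since Luminous v12.2.11).
radosgw-admin reshard stale-instances list

# Remove the stale instances; intended for single-site setups only,
# as it is not safe to run on multisite deployments.
radosgw-admin reshard stale-instances rm
```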

06/20/2018

01:39 PM rgw Bug #23651: Dynamic bucket indexing, resharding and tenants still seems to be broken
Can someone tell me how to clean up the index? I have far too many objects now.

05/14/2018

08:31 AM rgw Bug #23651: Dynamic bucket indexing, resharding and tenants still seems to be broken
I already deleted the bucket. That didn't shrink the index-objects much though.
How can I provide you with useful ...

05/07/2018

12:39 PM rgw Bug #23651: Dynamic bucket indexing, resharding and tenants still seems to be broken
See the attached graph for what happened to the object-count. Also, see http://lists.ceph.com/pipermail/ceph-users-ce...
12:28 PM rgw Bug #23651: Dynamic bucket indexing, resharding and tenants still seems to be broken
Thanks. But those entries are not my main issue. The main issue is that my bucket index pool has 1035305 objects. Whi...

04/11/2018

03:08 PM rgw Bug #23651 (In Progress): Dynamic bucket indexing, resharding and tenants still seems to be broken
I've had issues with this before, which is described in https://tracker.ceph.com/issues/22046. But the issues remain ...

12/11/2017

02:10 PM rgw Bug #22094: Lots of reads on default.rgw.usage pool
Yes, after upgrading to 12.2.2, I haven't seen this behaviour anymore.

12/05/2017

09:52 AM rgw Bug #22094: Lots of reads on default.rgw.usage pool
root@osdnode01:~# rados -p default.rgw.usage ls > objs.out
root@osdnode01:~# for i in `cat objs.out`; do rados -p de...
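(Editor's note: the second command above is truncated in the tracker view. A generic sketch of that kind of per-object iteration, assuming the goal is per-object stats on the usage pool, could look like the following; the `stat` subcommand is an assumption, not the original command, and running it requires a live Ceph cluster:)

```shell
# Hypothetical sketch: enumerate the objects in the usage pool,
# then stat each one. The per-object subcommand is an assumption;
# the original loop body is truncated in the tracker.
rados -p default.rgw.usage ls > objs.out
while read -r obj; do
    rados -p default.rgw.usage stat "$obj"
done < objs.out
```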

11/30/2017

10:22 AM rgw Bug #22094: Lots of reads on default.rgw.usage pool
Can you clarify this? I now restart my rgw daemons every hour, which is ... ehm ... Suboptimal. :)

11/21/2017

09:24 PM rgw Bug #22094: Lots of reads on default.rgw.usage pool
See two logs. One of the users (from broken.log) needs a run from dynamic bucket sharding.
