Profile

Dan van der Ster

  • Registered on: 03/27/2013
  • Last connection: 11/20/2022

Activity

01/06/2023

11:06 PM RADOS Bug #44400: Marking OSD out causes primary-affinity 0 to be ignored when up_set has no common OSD...
Just confirming this is still present in pacific:...
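
A rough sketch of the steps behind this report, in case anyone wants to retest it (the osd ids and pg id below are illustrative, not taken from the original report):

  # stop osd.0 from being chosen as primary
  ceph osd primary-affinity osd.0 0
  # note the up/acting sets and primary for some PG mapped to osd.0
  ceph pg map 1.0
  # mark another OSD in that PG's up set out, then re-check:
  # per the report, osd.0 can end up primary despite its affinity of 0
  ceph osd out osd.1
  ceph pg map 1.0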

11/28/2022

11:42 AM rgw Documentation #58092 (New): rgw_enable_gc_threads / lc_threads not documented on web
Options rgw_enable_gc_threads and rgw_enable_lc_threads are not rendered on docs.ceph.com. I would expect those t...
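
The options themselves do exist in the daemons, so until the docs are fixed, one way to see their descriptions is the config help command (assuming a reasonably recent cluster):

  ceph config help rgw_enable_gc_threads
  ceph config help rgw_enable_lc_threads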

09/11/2022

03:42 PM RADOS Bug #51194: PG recovery_unfound after scrub repair failed on primary
Just hit this in a v15.2.15 cluster too. Michel, which version does your cluster run?

07/18/2022

01:37 PM RADOS Bug #47273 (Pending Backport): ceph report missing osdmap_clean_epochs if answered by peon

07/12/2022

01:57 PM RADOS Bug #56386: Writes to a cephfs after metadata pool snapshot causes inconsistent objects
Greg Farnum wrote:
> That said, I wouldn’t expect anything useful from running this — pool snaps are hard to use wel...

07/07/2022

08:31 AM bluestore Bug #56488 (Resolved): BlueStore doesn't defer small writes for pre-pacific hdd osds
We're upgrading clusters to v16.2.9 from v15.2.16, and our simple "rados bench -p test 10 write -b 4096 -t 1" latency...
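
For reference, the latency check is just the quoted bench invocation against a small pool; a minimal sketch (the pool name "test" is taken from the quoted command, and creating it first is assumed):

  # create the test pool used by the bench command
  ceph osd pool create test
  # single-threaded 4 KiB writes for 10 seconds; compare the average latency
  # reported before and after the v15 -> v16 upgrade
  rados bench -p test 10 write -b 4096 -t 1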

06/28/2022

12:36 PM RADOS Bug #56386: Writes to a cephfs after metadata pool snapshot causes inconsistent objects
Venky Shankar wrote:
> Hi Dan,
>
> I need to check, but does the inconsistent object warning show up only after r...

06/24/2022

09:45 AM RADOS Bug #56386: Writes to a cephfs after metadata pool snapshot causes inconsistent objects
> Removing the pool snap then deep scrubbing again removes the inconsistent objects.
This isn't true -- my quick t...
07:26 AM RADOS Bug #56386 (Can't reproduce): Writes to a cephfs after metadata pool snapshot causes inconsistent...
If you take a snapshot of the meta pool, then decrease max_mds, metadata objects will be inconsistent.
Removing the ...
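
A sketch of that reproducer, assuming a filesystem named "cephfs" with metadata pool "cephfs_metadata" (the names, snap name, pg id and the max_mds value are illustrative):

  # snapshot the metadata pool, then shrink the MDS cluster
  ceph osd pool mksnap cephfs_metadata snap1
  ceph fs set cephfs max_mds 1
  # deep-scrub a metadata PG and watch for inconsistent objects
  ceph pg deep-scrub 2.0
  # removing the pool snap (and deep-scrubbing again) is the step discussed above
  ceph osd pool rmsnap cephfs_metadata snap1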

06/15/2022

07:29 AM Ceph Bug #56046: Changes to CRUSH weight and Upmaps causes PGs to go into a degraded+remapped state in...
I cannot reproduce on a small 16.2.9 cluster -- I changed osd crush weights several times and the PGs never go degrad...
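
Roughly what that reproduction attempt looks like (osd id and weights are illustrative):

  ceph osd crush reweight osd.0 0.5
  ceph pg stat     # watch for degraded / remapped PGs
  ceph osd crush reweight osd.0 1.0
  ceph pg stat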
