Profile

Dan van der Ster

  • Registered on: 03/27/2013
  • Last connection: 07/19/2022

Activity

09/11/2022

03:42 PM RADOS Bug #51194: PG recovery_unfound after scrub repair failed on primary
Just hit this in a v15.2.15 cluster too. Michel, which version is your cluster running?
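
A quick sketch of how the release and the unfound objects can be checked (the PG id 2.1a is only a placeholder, not one taken from this report):

    ceph versions                # running release of every daemon type
    ceph health detail           # lists the PGs reporting unfound objects
    ceph pg 2.1a list_unfound    # show which objects that PG cannot find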

07/18/2022

01:37 PM RADOS Bug #47273 (Pending Backport): ceph report missing osdmap_clean_epochs if answered by peon

07/12/2022

01:57 PM RADOS Bug #56386: Writes to a cephfs after metadata pool snapshot causes inconsistent objects
Greg Farnum wrote:
> That said, I wouldn’t expect anything useful from running this — pool snaps are hard to use wel...

07/07/2022

08:31 AM bluestore Bug #56488 (Fix Under Review): pacific doesn't defer small writes for pre-pacific hdd osds
We're upgrading clusters to v16.2.9 from v15.2.16, and our simple "rados bench -p test 10 write -b 4096 -t 1" latency...
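
For reference, a rough sketch of the comparison being described; the benchmark line is the one quoted above, and the config query is just an assumed way to inspect the HDD deferred-write threshold involved here:

    # single-threaded 4 KiB writes, as quoted in the report
    rados bench -p test 10 write -b 4096 -t 1
    # threshold below which bluestore defers HDD writes (assumed relevant to this bug)
    ceph config get osd bluestore_prefer_deferred_size_hdd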

06/28/2022

12:36 PM RADOS Bug #56386: Writes to a cephfs after metadata pool snapshot causes inconsistent objects
Venky Shankar wrote:
> Hi Dan,
>
> I need to check, but does the inconsistent object warning show up only after r...

06/24/2022

09:45 AM RADOS Bug #56386: Writes to a cephfs after metadata pool snapshot causes inconsistent objects
> Removing the pool snap then deep scrubbing again removes the inconsistent objects.
This isn't true -- my quick t...
07:26 AM RADOS Bug #56386 (Can't reproduce): Writes to a cephfs after metadata pool snapshot causes inconsistent...
If you take a snapshot of the meta pool, then decrease max_mds, metadata objects will be inconsistent.
Removing the ...
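
A minimal reproduction sketch based on the description above, assuming a filesystem named "cephfs" with metadata pool "cephfs_metadata" and more than one active MDS; the PG id 2.0 is a placeholder:

    # snapshot the metadata pool, then shrink the MDS cluster
    ceph osd pool mksnap cephfs_metadata snap1
    ceph fs set cephfs max_mds 1
    # deep-scrub a metadata PG and list any inconsistencies it finds
    ceph pg deep-scrub 2.0
    rados list-inconsistent-obj 2.0 --format=json-pretty
    # cleanup: drop the pool snapshot again
    ceph osd pool rmsnap cephfs_metadata snap1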

06/15/2022

07:29 AM Ceph Bug #56046: Changes to CRUSH weight and Upmaps causes PGs to go into a degraded+remapped state in...
I cannot reproduce on a small 16.2.9 cluster -- I changed osd crush weights several times and the PGs never go degrad...
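
Roughly what that reproduction attempt looks like (osd id and weight are arbitrary examples):

    # change a CRUSH weight, then watch whether any PG turns degraded+remapped
    ceph osd crush reweight osd.0 1.0
    ceph pg stat                 # expect remapped/backfilling, not degraded
    ceph osd dump | grep upmap   # pg_upmap_items that the change may invalidate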

05/25/2022

12:12 PM RADOS Feature #55764 (New): Adaptive mon_warn_pg_not_deep_scrubbed_ratio according to actual scrub thro...
This request comes from the Science Users Working Group https://pad.ceph.com/p/Ceph_Science_User_Group_20220524
Fo...
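
For context, the warning today is driven by fixed options rather than observed scrub throughput; these commands only show how the current threshold is inspected or loosened, not the adaptive behaviour being requested:

    ceph config get osd osd_deep_scrub_interval
    ceph config get mon mon_warn_pg_not_deep_scrubbed_ratio
    # manual workaround until something adaptive exists
    ceph config set global mon_warn_pg_not_deep_scrubbed_ratio 1.0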

05/23/2022

10:02 AM CephFS Feature #55715: pybind/mgr/cephadm/upgrade: allow upgrades without reducing max_mds
FWIW here's a report on manually upgrading a small 15.2.15 cluster to 16.2.9. Two active MDSs, upgraded without decre...
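
For comparison, a sketch of the rank-reduction step the documented upgrade procedure asks for, which this feature would make unnecessary (filesystem name "cephfs" is an assumption):

    # documented procedure: shrink to a single active MDS before upgrading
    ceph fs set cephfs max_mds 1
    ceph fs status               # wait until only rank 0 remains active
    # ...upgrade the MDS daemons, then restore the original rank count
    ceph fs set cephfs max_mds 2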
