Dan van der Ster
- Registered on: 03/27/2013
- Last connection: 11/20/2022
Issues
- Assigned issues: 11
- Reported issues: 162
Projects
- Ceph (Developer, 07/08/2020)
- Linux kernel client (Developer, 11/25/2020)
- phprados (Developer, 11/25/2020)
- devops (Developer, 11/25/2020)
- rbd (Developer, 11/25/2020)
- rgw (Developer, 11/25/2020)
- sepia (Developer, 07/08/2020)
- CephFS (Developer, 11/25/2020)
- teuthology (Developer, 12/04/2020)
- rados-java (Developer, 11/25/2020)
- Calamari (Developer, 07/08/2020)
- Ceph-deploy (Developer, 07/08/2020)
- ceph-dokan (Developer, 11/25/2020)
- Stable releases (Developer, 11/25/2020)
- Tools (Developer, 07/08/2020)
- Infrastructure (Developer, 07/08/2020)
- paddles (Developer, 08/16/2021)
- downburst (Developer, 07/08/2020)
- ovh (Developer, 07/08/2020)
- www.ceph.com (Developer, 07/08/2020)
- mgr (Developer, 07/08/2020)
- rgw-testing (Developer, 11/25/2020)
- RADOS (Developer, 07/08/2020)
- bluestore (Developer, 07/08/2020)
- ceph-volume (Developer, 11/25/2020)
- Messengers (Developer, 07/08/2020)
- Orchestrator (Developer, 07/08/2020)
- crimson (Developer, 11/25/2020)
- dmclock (Developer, 08/13/2020)
- cephsqlite (Developer, 03/19/2021)
- Dashboard (Developer, 04/22/2021)
- cleanup (Developer, 08/11/2022)
Activity
01/06/2023
- 11:06 PM RADOS Bug #44400: Marking OSD out causes primary-affinity 0 to be ignored when up_set has no common OSD...
- Just confirming this is still present in pacific:...
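For context, a minimal sketch of the scenario in #44400; the OSD id and PG id below are placeholders, not taken from the ticket:
    # set primary affinity to 0 so this OSD should not serve as primary
    ceph osd primary-affinity osd.3 0
    # mark the OSD out; per the report, the primary-affinity 0 can then be ignored
    ceph osd out osd.3
    # inspect the up/acting sets of an affected PG
    ceph pg map 1.2f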
11/28/2022
- 11:42 AM rgw Documentation #58092 (New): rgw_enable_gc_threads / lc_threads not documented on web
- Options rgw_enable_gc_threads and rgw_enable_lc_threads are not rendered on docs.ceph.com.
I would expect those t...
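For reference, the two options can be set in ceph.conf as sketched below; the RGW section name and the values are illustrative only:
    [client.rgw.gateway1]               # placeholder RGW instance name
        rgw_enable_gc_threads = false   # disable garbage-collection threads on this instance
        rgw_enable_lc_threads = false   # disable lifecycle threads on this instance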
09/11/2022
- 03:42 PM RADOS Bug #51194: PG recovery_unfound after scrub repair failed on primary
- Just hit this in a v15.2.15 cluster too. Michel, which version does your cluster run?
07/18/2022
- 01:37 PM RADOS Bug #47273 (Pending Backport): ceph report missing osdmap_clean_epochs if answered by peon
07/12/2022
- 01:57 PM RADOS Bug #56386: Writes to a cephfs after metadata pool snapshot causes inconsistent objects
- Greg Farnum wrote:
> That said, I wouldn’t expect anything useful from running this — pool snaps are hard to use wel...
07/07/2022
- 08:31 AM bluestore Bug #56488 (Resolved): BlueStore doesn't defer small writes for pre-pacific hdd osds
- We're upgrading clusters to v16.2.9 from v15.2.16, and our simple "rados bench -p test 10 write -b 4096 -t 1" latency...
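The benchmark quoted above is a standard rados bench invocation; a sketch of how it might be run before and after the upgrade (pool name "test" as in the comment, 10-second duration):
    # single-threaded 4 KiB writes for 10 seconds against pool 'test';
    # the summary output includes average and max latency
    rados bench -p test 10 write -b 4096 -t 1
    # remove the benchmark objects afterwards
    rados -p test cleanup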
06/28/2022
- 12:36 PM RADOS Bug #56386: Writes to a cephfs after metadata pool snapshot causes inconsistent objects
- Venky Shankar wrote:
> Hi Dan,
>
> I need to check, but does the inconsistent object warning show up only after r...
06/24/2022
- 09:45 AM RADOS Bug #56386: Writes to a cephfs after metadata pool snapshot causes inconsistent objects
- > Removing the pool snap then deep scrubbing again removes the inconsistent objects.
This isn't true -- my quick t...
- 07:26 AM RADOS Bug #56386 (Can't reproduce): Writes to a cephfs after metadata pool snapshot causes inconsistent...
- If you take a snapshot of the meta pool, then decrease max_mds, metadata objects will be inconsistent.
Removing the ...
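A rough reproduction sketch based on the description above; the pool name, filesystem name, and PG id are placeholders:
    # snapshot the CephFS metadata pool
    ceph osd pool mksnap cephfs_metadata snap1
    # reduce the number of active MDS ranks
    ceph fs set cephfs max_mds 1
    # deep scrub an affected PG and check for inconsistent objects
    ceph pg deep-scrub 2.0
    ceph health detail
    rados list-inconsistent-obj 2.0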
06/15/2022
- 07:29 AM Ceph Bug #56046: Changes to CRUSH weight and Upmaps causes PGs to go into a degraded+remapped state in...
- I cannot reproduce on a small 16.2.9 cluster -- I changed osd crush weights several times and the PGs never go degrad...
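For reference, a minimal version of that check; the OSD id and weight are placeholders:
    # change a CRUSH weight and watch whether any PGs report degraded
    ceph osd crush reweight osd.7 1.5
    ceph pg stat    # expect remapped/backfill states, but not degraded
    ceph -s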