Profile

Wes Dillingham

Issues

                 Open  Closed  Total
Assigned issues     0       0      0
Reported issues     5       2      7

Activity

03/29/2024

03:09 PM Ceph Bug #65228 (Fix Under Review): class:device-class config database mask does not work for osd_compact_on_start
I have a cluster which needs osd_compact_on_start = true
In this cluster this is only applicable to ssd OSDs (where rgw...
Wes Dillingham
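For context, a device-class mask of the kind this report describes would normally be applied roughly as follows (a sketch only; the mask and option name come from the report, while osd.0 is just a hypothetical target used to check the resolved value):

    # Intended effect: enable compaction on start only for OSDs in the ssd device class
    ceph config set osd/class:ssd osd_compact_on_start true
    # Check what a given OSD actually resolves for the option (osd.0 is an example)
    ceph config get osd.0 osd_compact_on_start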
01:02 PM RADOS Bug #59670: Ceph status shows PG recovering when norecover flag is set
I think it's more than just a cosmetic issue of the PG showing recovering as its state. It does in fact "recover" obje... Wes Dillingham
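The behaviour is straightforward to observe with the flag set; a minimal sketch:

    # Set the cluster-wide norecover flag
    ceph osd set norecover
    # Per the report, PGs can still be shown (and act) as recovering here
    ceph status
    # Clear the flag again afterwards
    ceph osd unset norecover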
12:54 PM RADOS Bug #65227 (New): noscrub cluster flag prevents deep-scrubs from starting
Observed on a 17.2.7 cluster and confirmed on an additional 17.2.7 cluster.
Reproduction steps:
- On a cluster th...
Wes Dillingham
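Although the reproduction steps are truncated above, the relevant commands presumably follow the usual flag-then-scrub pattern; a sketch (the PG id 1.0 is a hypothetical example):

    # Set only the noscrub flag; nodeep-scrub is left unset
    ceph osd set noscrub
    # Request a deep scrub of a PG; per the report it never starts while noscrub is set
    ceph pg deep-scrub 1.0
    # Check whether a deep scrub is actually running on that PG
    ceph pg 1.0 query | grep -i scrub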

03/12/2024

07:44 PM rgw Feature #64083: Response code from rgw rate-limit should be 429 not 503, best is configurable
+1 on this request. Wes Dillingham

02/16/2024

02:44 PM Orchestrator Bug #64467 (New): cephadm deployed NFS Ganesha clusters should disable attribute and dir caching
By default I believe cephadm should create the following stanza within the NFS config:... Wes Dillingham

06/16/2022

10:20 PM Ceph Bug #56046: Changes to CRUSH weight and Upmaps causes PGs to go into a degraded+remapped state instead of just remapped
I should note the ceph osd tree etc. in my initial "Steps to reproduce" was on a different cluster than the cluster I ... Wes Dillingham
10:12 PM Ceph Bug #56046: Changes to CRUSH weight and Upmaps causes PGs to go into a degraded+remapped state instead of just remapped
I am attaching osdmap epochs 745 to 748 corresponding to the above. Wes Dillingham
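For reference, individual osdmap epochs such as these are typically extracted and inspected along these lines (a sketch; epoch 745 is from the comment, the output filename is arbitrary):

    # Dump a specific osdmap epoch from the monitors
    ceph osd getmap 745 -o osdmap.745
    # Inspect the dumped map offline
    osdmaptool --print osdmap.745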
06:17 PM Ceph Bug #56046: Changes to CRUSH weight and Upmaps causes PGs to go into a degraded+remapped state instead of just remapped
I am attaching the ceph.log and osd log for the osd marked out and the log covers the period during the osd getting a... Wes Dillingham

06/15/2022

05:26 PM Ceph Bug #56046: Changes to CRUSH weight and Upmaps causes PGs to go into a degraded+remapped state instead of just remapped
I have found that I can only reproduce this on clusters built initially on pacific. Have reproduced on 3 separate pac... Wes Dillingham

06/14/2022

07:51 PM Ceph Bug #56046: Changes to CRUSH weight and Upmaps causes PGs to go into a degraded+remapped state instead of just remapped
I have disabled the upmap balancer and removed all upmaps from the osdmap, and the problem is still reproducible. Wes Dillingham
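The steps described there would typically look something like the following (a sketch; the PG id is a hypothetical example and rm-pg-upmap-items is run once per mapped PG):

    # Turn off the automatic balancer so it does not recreate upmaps
    ceph balancer off
    # List the pg_upmap_items entries currently in the osdmap
    ceph osd dump | grep pg_upmap_items
    # Remove the upmap entry for a given PG (repeat per PG; 1.0 is an example)
    ceph osd rm-pg-upmap-items 1.0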
