General

Profile

Dan Williams

  • Login: diw
  • Registered on: 09/19/2017
  • Last sign in: 03/19/2021

Issues

                 Open  Closed  Total
Assigned issues     0       0      0
Reported issues     2       1      3

Activity

11/16/2020

11:53 AM Orchestrator Feature #48247 (New): cephadm: RGW rgw_ldap_secret
When deploying RGW via cephadm the path specified in rgw_ldap_secret should be mounted into the container.
Alterna...
Dan Williams

10/28/2020

05:49 PM Orchestrator Bug #48031: Cephadm: Needs to pass cluster.listen-address to alertmanager
I've submitted a pull request.
https://github.com/ceph/ceph/pull/37883
Dan Williams
05:12 PM Orchestrator Bug #48031 (Resolved): Cephadm: Needs to pass cluster.listen-address to alertmanager
When using public IP with the ceph dashboard/monitoring, alert manager will not start with the following error messag... Dan Williams

10/06/2020

10:59 AM rgw Bug #36530: Radosgw bucket policy does not work when applying to LDAP user
Echoing Thomas's comments above, it was quite frustrating to work this one out.
I guess it only applies if running in...
Dan Williams

10/17/2017

06:54 PM Ceph Bug #21820: Ceph OSD crash with Segfault
I forgot to mention that I'm running CentOS 7.4, up to date as of 2017/10/17 12:00 UTC.
I’m using bluestore all-in-o...
Dan Williams
05:31 PM Ceph Bug #21820: Ceph OSD crash with Segfault
I’m also seeing this.
With 72 OSDs, ~10 MB/s client I/O (20 filestore / 50 bluestore), although the OSD restarts successfu...
Dan Williams

09/19/2017

06:10 PM devops Bug #21461: SELinux file_context update causes OSDs to restart when upgrading to Luminous from Kraken or Jewel.
I'm happy to provide further details in case my random waffle above doesn't make sense. Dan Williams
06:08 PM devops Bug #21461 (New): SELinux file_context update causes OSDs to restart when upgrading to Luminous from Kraken or Jewel.
I've got a small cluster of 4 nodes each of which has 10 disks.
Nodes 1-3 run ceph-mon.
When following the releas...
Dan Williams