Gaudenz Steinlin
- Login: gaudenz
- Email: gaudenz@debian.org
- Registered on: 10/06/2015
- Last sign in: 11/22/2023
Issues
| | open | closed | Total |
|---|---|---|---|
| Assigned issues | 0 | 0 | 0 |
| Reported issues | 2 | 1 | 3 |
Activity
11/27/2023
- 10:00 PM Ceph Bug #63656 (New): RGW usage not trimmed fully if many keys to delete in a shard
- If there are a lot of usage log entries to trim in a usage shard, not all entries are trimmed in some cases. I sus...
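As a workaround for the incomplete trim described above, one pattern is to re-issue the trim until the usage log reports no remaining entries. A minimal Python sketch, assuming radosgw-admin is on the PATH and a reachable cluster; the uid and end-date values are illustrative placeholders, not values from the report:

```python
import json
import subprocess

def remaining_usage_entries(uid: str) -> list:
    """Return the usage log entries still recorded for a user."""
    out = subprocess.run(
        ["radosgw-admin", "usage", "show", "--uid", uid],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out).get("entries", [])

def trim_until_empty(uid: str, end_date: str, max_rounds: int = 10) -> None:
    """Re-run 'usage trim' until no entries remain, since a single run
    may stop early when a shard holds many keys to delete."""
    for _ in range(max_rounds):
        subprocess.run(
            ["radosgw-admin", "usage", "trim", "--uid", uid, "--end-date", end_date],
            check=True,
        )
        if not remaining_usage_entries(uid):
            return
    raise RuntimeError(f"usage log for {uid} not fully trimmed after {max_rounds} rounds")

# Illustrative invocation (placeholder values):
# trim_until_empty("testuser", "2023-11-01")
```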
11/22/2023
- 01:19 PM ceph-volume Bug #62320: lvm list should filter also on vg name
- PR fixing this issue: https://github.com/ceph/ceph/pull/53841
BTW this also breaks ceph-ansible (at least in some ...
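For context on the filtering this report asks for, here is a hypothetical sketch of matching an LV against both a bare name and "vg/lv" notation; the actual fix is in the linked PR, and the function name here is invented for illustration:

```python
def lv_matches(vg_name: str, lv_name: str, query: str) -> bool:
    """Match an LV against a 'ceph-volume lvm list'-style query.

    Accepts either a bare LV name or 'vg/lv' notation, so that two LVs
    with the same name in different volume groups can be told apart.
    """
    if "/" in query:
        query_vg, query_lv = query.split("/", 1)
        return vg_name == query_vg and lv_name == query_lv
    return lv_name == query

# Example: only the LV in the requested VG matches (hypothetical names).
assert lv_matches("ceph-vg-a", "osd-block-1", "ceph-vg-a/osd-block-1")
assert not lv_matches("ceph-vg-b", "osd-block-1", "ceph-vg-a/osd-block-1")
```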
01/31/2021
- 09:46 PM rgw Bug #47869: RGW Bucket Lifecycle configuration on non metadata master zones returns HTTP 503
- The pull request was merged to master a few days ago. Would be nice to have a backport to Octopus for this. Should I ...
11/21/2020
- 09:31 PM rgw Bug #47866: Object not found on healthy cluster
- I made some tests in our lab and can confirm the theory. I was able to reproduce this bug. I used the following setti...
10/15/2020
- 02:46 PM rgw Bug #22648: rgw: secondary site's lc configuration erased by multisite sync
- I created https://tracker.ceph.com/issues/47869 for the issue described above.
- 09:54 AM rgw Bug #22648: rgw: secondary site's lc configuration erased by multisite sync
- This issue is "fixed" in Octopus by commit "d3fb699d6da8479a5e88207a9ae28a44122203b6", but forwarding to the metadat...
- 02:45 PM rgw Bug #47869 (Resolved): RGW Bucket Lifecycle configuration on non metadata master zones returns HTTP 503
- We have a multi zonegroup RGW setup with two zonegroups in two different datacenters, each with one zone. We chose thi...
04/30/2020
- 02:40 PM rgw Bug #22648: rgw: secondary site's lc configuration erased by multisite sync
- We have the same issue in a multi zonegroup configuration:
Ceph version: 13.2.8
Steps to reproduce:
1. create...
07/02/2019
- 01:11 PM RADOS Bug #23117: PGs stuck in "activating" after osd_max_pg_per_osd_hard_ratio has been exceeded once
- Ceph version was 13.2.5 on the reinstalled host and 13.2.4 on the other hosts.
- 01:09 PM RADOS Bug #23117: PGs stuck in "activating" after osd_max_pg_per_osd_hard_ratio has been exceeded once
- We also hit this problem with a cluster which had replicated pools with a replication factor of 3 and a CRUSH rule wi...
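For readers hitting the same limit, the ratio in question can be inspected and raised at runtime with the ceph CLI. A hedged sketch via subprocess; the value 5 is purely illustrative, not a recommendation from this report:

```python
import subprocess

def raise_pg_hard_ratio(value: str = "5") -> None:
    """Inspect, then raise, osd_max_pg_per_osd_hard_ratio cluster-wide.

    Exceeding this ratio is what leaves PGs stuck in 'activating' in the
    report above; the new value here is an example only.
    """
    subprocess.run(
        ["ceph", "config", "get", "osd", "osd_max_pg_per_osd_hard_ratio"],
        check=True,
    )
    subprocess.run(
        ["ceph", "config", "set", "osd", "osd_max_pg_per_osd_hard_ratio", value],
        check=True,
    )
```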