Profile

Boris B

  • Registered on: 05/20/2021
  • Last connection: 07/20/2021

Activity

10/13/2023

07:29 AM Ceph Bug #54434: hdd osd's crashing after pacific upgrade
Indeed,
we just rolled out the latest Pacific release two weeks ago, and since then everything has been stable.

09/18/2023

10:28 AM Orchestrator Bug #52573: CEPHADM_CHECK_PUBLIC_MEMBERSHIP - fails, wrongly includes fe80::/8 addresses
It seems to be related to the sorting of `/proc/net/if_inet6`...
10:13 AM Orchestrator Bug #52573: CEPHADM_CHECK_PUBLIC_MEMBERSHIP - fails, wrongly includes fe80::/8 addresses
We have the same issue with `fe80` addresses:...
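The check described above fails because link-local (`fe80::/10`) addresses from `/proc/net/if_inet6` end up in the candidate list. A minimal sketch of the kind of filtering involved (this is an illustration, not cephadm's actual implementation; the function names are hypothetical):

```python
# Sketch: parse /proc/net/if_inet6 and drop link-local (fe80::/10) addresses,
# which should never count toward public-network membership.
# Hypothetical helper names; not taken from cephadm.
import ipaddress

def parse_if_inet6(lines):
    """Yield (interface, IPv6Address) pairs from /proc/net/if_inet6 lines.

    Each line has 6 fields: 32 hex digits of the address, ifindex,
    prefix length, scope, flags, and the interface name.
    """
    for line in lines:
        fields = line.split()
        if len(fields) != 6:
            continue
        raw, ifname = fields[0], fields[5]
        # Re-insert ':' every 4 hex digits to form a parseable address.
        addr = ipaddress.IPv6Address(":".join(raw[i:i + 4] for i in range(0, 32, 4)))
        yield ifname, addr

def global_addresses(lines):
    """Keep only addresses that are not link-local (filters fe80::/10)."""
    return [(ifname, addr) for ifname, addr in parse_if_inet6(lines)
            if not addr.is_link_local]

# Example input: one link-local and one global address on eth0.
sample = [
    "fe800000000000000042aafffe812b6e 02 40 20 80 eth0",
    "20010db8000000000000000000000001 02 40 00 80 eth0",
]
print(global_addresses(sample))  # only the 2001:db8::1 entry survives
```

Note that `ipaddress.IPv6Address.is_link_local` tests membership in `fe80::/10`, which covers the `fe80` addresses reported in this issue.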

04/23/2023

11:29 AM rgw Bug #53585: RGW Garbage collector leads to slow ops and osd down when removing large object
Never mind. It seemed to help a bit, but we still have the problem.
It also seemed to raise the load on the system, s...

04/21/2023

01:36 PM rgw Bug #53585: RGW Garbage collector leads to slow ops and osd down when removing large object
I think this PR might fix it: https://github.com/ceph/ceph/pull/50893
Mark Nelson had this in his talk at cephaloc...

10/25/2022

07:18 AM rgw Bug #57919 (New): bucket can not be resharded after cancelling prior reshard process
Hi,
we run a multisite setup where only the metadata is synced, but not the actual data.
I wanted to reshard a b...

10/07/2022

07:33 AM rgw Bug #57784: beast frontend crashes on exception from socket.local_endpoint()
Hey,
here is a full stack trace from the RGW daemon. I removed bucket/file/user names.
The host is:
Ubuntu 20.04...

08/29/2022

09:51 AM rgw Bug #53585: RGW Garbage collector leads to slow ops and osd down when removing large object
Hi,
it looks like laggy/flapping GC OSDs lead to the following errors:...

07/29/2022

08:00 AM Ceph Bug #54434: hdd osd's crashing after pacific upgrade
So, we have moved further:
We added some SSDs to the cluster, and moved all pools except the data pools to them and...

04/26/2022

12:16 PM Ceph Bug #54434: hdd osd's crashing after pacific upgrade
So, we've seen the same problem with a nearly fresh Octopus cluster that got 12x8TB disks without cache.db
How ca...
