Profile

Jonas Jelten

  • Registered on: 09/11/2018
  • Last connection: 09/07/2019

Activity

04/08/2021

05:53 PM bluestore Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
Another attempt on a different host; this time we're upgrading just a single 1 TB device...
I've set @ceph config set osd bluestore_fs...
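
For context, a minimal sketch of the pattern used here, setting an OSD-scope BlueStore option through the central config store; the exact option name above is truncated, and @bluestore_fsck_quick_fix_on_mount@ below is only an assumed example:

<pre>
# Sketch: set a BlueStore option for all OSDs via the monitors' config store.
# The option in the comment above is truncated; the one shown here is an
# assumed example, not necessarily the option that was actually set.
ceph config set osd bluestore_fsck_quick_fix_on_mount false

# Verify the stored value before restarting the OSDs.
ceph config get osd bluestore_fsck_quick_fix_on_mount
</pre>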

03/26/2021

05:31 PM bluestore Bug #50017: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
All OSDs on that host except two are now corrupted.
These two hang during fsck with 100% load. One OSD has one Shal...
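
For context, a stopped OSD can be checked offline with @ceph-bluestore-tool@; a minimal sketch, assuming an example OSD id and data path:

<pre>
# Sketch: offline consistency check of a stopped OSD.
# OSD id 0 and its default data path are assumed examples.
systemctl stop ceph-osd@0
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0

# Deeper (and much slower) variant that also reads object data:
ceph-bluestore-tool fsck --deep 1 --path /var/lib/ceph/osd/ceph-0
</pre>
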
04:36 PM bluestore Bug #50017 (New): OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteB...
I just started upgrading a cluster to octopus. All MONs and MGRs are on 15.2.10 and everything looks fine.
Then I up...

03/15/2021

07:48 PM bluestore Bug #49815 (Pending Backport): BlueRocksEnv::GetChildren may pass trailing slashes to BlueFS readdir
ceph-osd --mkfs fails with RocksDB 6.15.5.
The cause seems to be that the trailing / in the directory lookup leads to d...
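
For reproduction context, a sketch of the kind of manual @ceph-osd --mkfs@ invocation that fails here; the OSD id, uuid, and data path are assumed examples:

<pre>
# Sketch: manually initialize a new OSD's data store. With an affected
# RocksDB build, this is the step where the trailing-slash directory
# lookup fails. All ids and paths below are assumed examples.
ceph-osd -i 0 --mkfs --osd-uuid "$(uuidgen)" --osd-data /var/lib/ceph/osd/ceph-0
</pre>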

03/05/2021

07:15 PM Ceph Bug #48298: hitting mon_max_pg_per_osd right after creating OSD, then decreases slowly
Another observation: I have nobackfill set, and I'm currently adding 8 new OSDs.
The first of the newly added OSDs...
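
For context, a sketch of the flag and the limit involved; the values shown are assumed examples, so check your cluster's actual settings:

<pre>
# Sketch: pause backfill while adding OSDs, then compare per-OSD PG counts
# against the mon_max_pg_per_osd limit (default 250 in recent releases,
# but verify on your cluster).
ceph osd set nobackfill
ceph config get mon mon_max_pg_per_osd

# The PGS column shows how many PGs each OSD currently holds.
ceph osd df
</pre>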

02/10/2021

03:12 PM RADOS Bug #46847: Loss of placement information on OSD reboot
Given the "severity", I'd be really glad if some of the Ceph core devs could have a look at this :) I'm really not tha...

01/11/2021

05:11 PM CephFS Bug #44100: cephfs rsync kworker high load.
I also encountered this on Ubuntu 20.04 @Linux 5.4.0-60-generic #67-Ubuntu SMP Tue Jan 5 18:31:36 UTC 2021@
The clus...
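
A simple way to observe the symptom is to watch the kernel worker threads while the rsync runs; a minimal sketch (the pid is an assumed example):

<pre>
# Sketch: spot busy kworker threads and capture what one is doing.
top -b -n 1 | grep kworker

# Dump the kernel stack of a busy thread (pid 12345 is an assumed example;
# requires root).
cat /proc/12345/stack
</pre>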

12/25/2020

06:13 PM Ceph Bug #48718 (Resolved): PG up and acting documentation unclear
The documentation in https://github.com/ceph/ceph/blob/0cb56e0f13dc57167271ec7f20f11421416196a2/doc/rados/operations/...
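
For context, the up and acting sets of a placement group can be inspected directly; a minimal sketch, with an assumed example PG id:

<pre>
# Sketch: print the up set (the OSDs CRUSH currently maps the PG to) and
# the acting set (the OSDs actually serving it right now) for one PG.
# 1.0 is an assumed example PG id.
ceph pg map 1.0
</pre>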

12/22/2020

12:42 PM bluestore Bug #48276: OSD Crash with ceph_assert(is_valid_io(off, len))
Yes, dead as in permanently dead until I recreate the OSD. The drive itself is fine; it's just the OSD data that gets corrupted. The star...

12/21/2020

07:02 PM bluestore Bug #48276: OSD Crash with ceph_assert(is_valid_io(off, len))
It's set by @bluefs_allocator@ at bluestore @mkfs@ time: https://github.com/ceph/ceph/blob/b1d0fa70590c23e80a09638df9...
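
For context, the allocator setting can be read back from the config store; a minimal sketch:

<pre>
# Sketch: show the configured BlueFS allocator. Per the comment above,
# the relevant value is chosen at bluestore mkfs time, so it applies to
# OSDs created under that setting.
ceph config get osd bluefs_allocator
</pre>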
