Profile

Jonas Jelten

  • Registered on: 09/11/2018
  • Last connection: 03/22/2019

Activity

04/16/2019

04:53 PM RADOS Bug #39330 (New): recovery transfer rate not correct
When running all OSDs inside a QEMU VM (with real disks attached through virtio-scsi), the gathered recovery statisti...

04/01/2019

10:59 PM RADOS Bug #37439: Degraded PG does not discover remapped data on originating OSD
My proposal to fix this bug is to call @discover_all_missing@ not only if there are missing objects, but also when th...
01:07 AM RADOS Bug #37439: Degraded PG does not discover remapped data on originating OSD
More findings, now on Nautilus 14.2.0:
OSD.62 once was part of pg 6.65, but content on it got remapped. A restart ...

01/02/2019

06:35 PM Ceph Revision 3b829cb2 (ceph): ceph-volume: enable device discards
When using SSDs as encrypted OSD device, discards do not pass the
encryption layer. This option activates discard req...
06:34 PM Ceph Revision eea47c27 (ceph): ceph-volume: enable device discards
When using SSDs as encrypted OSD device, discards do not pass the
encryption layer. This option activates discard req...
03:52 PM Ceph Revision 33333041 (ceph): ceph-volume: enable device discards
When using SSDs as encrypted OSD device, discards do not pass the
encryption layer. This option activates discard req...
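The three revisions above all carry the same point: dm-crypt silently drops discard (TRIM) requests unless they are explicitly allowed through the encryption layer. As an illustration of the mechanism only, not the actual ceph-volume change, the effect corresponds to opening the LUKS device with cryptsetup's `--allow-discards` flag (device path and mapper name are placeholders):

```shell
# Illustration, not the ceph-volume code itself: dm-crypt blocks discard
# requests by default, so TRIM never reaches the SSD under an encrypted OSD.
# --allow-discards lets discards pass through the mapping.
cryptsetup luksOpen --allow-discards /dev/sdX ceph-osd-block

# Check whether the mapped device now advertises discard support
# (non-zero DISC-GRAN/DISC-MAX columns):
lsblk --discard /dev/mapper/ceph-osd-block
```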

12/27/2018

06:54 PM RADOS Bug #37768 (Duplicate): mon gets stuck op for failing OSDs
@6 slow ops, oldest one blocked for 736706 sec, mon.rofl has slow ops@
I have several slow monitor ops that were t...

12/13/2018

01:11 PM RADOS Bug #37439: Degraded PG does not discover remapped data on originating OSD
Tested on a 5-node cluster with 20 OSDs and 14 3-replica pools.
Here's the log file (level 20) of OSD 18, which is...
12:07 PM RADOS Bug #37439: Degraded PG does not discover remapped data on originating OSD
please please let us edit issues and comments...
-I made a mistake in the above post: *please ignore* the @ceph os...
11:13 AM RADOS Bug #37439: Degraded PG does not discover remapped data on originating OSD
Easy steps to reproduce seem to be:
* Have a healthy cluster
* @ceph osd set pause # make sure no writes me...
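The truncated steps suggest a cluster-level procedure along these lines; only `ceph osd set pause` appears in the report, and the remaining commands are my assumption about a typical way to provoke a degraded, remapped PG, not the reporter's exact recipe:

```shell
# Hypothetical reproduction sketch; OSD id 18 is an example, not from the report.
ceph osd set pause          # quiesce client writes while reshuffling
ceph osd out 18             # mark one OSD out so its PGs get remapped
ceph pg dump pgs_brief      # observe degraded/remapped PG states
ceph osd in 18              # bring the OSD back; per the bug, its remapped
                            # data is not rediscovered by the degraded PG
ceph osd unset pause
```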
