Profile

Eric Petit

  • Login: titer
  • Registered on: 12/13/2019
  • Last sign in: 01/26/2024

Issues

                  Open  Closed  Total
Assigned issues      0       0      0
Reported issues      1       1      2

Activity

11/12/2020

05:14 PM bluestore Bug #45765: BlueStore::_collection_list causes huge latency growth pg deletion
> @Eric How is that @bluefs_buffered_io = true@ working for you? We are considering to re-enable it to help workarou... Eric Petit

10/15/2020

06:22 AM bluestore Bug #45765: BlueStore::_collection_list causes huge latency growth pg deletion
> Besides recently we switched backed to direct IO for bluefs, see https://github.com/ceph/ceph/pull/34297
> Likely ...
Eric Petit

02/03/2020

01:12 PM RADOS Bug #43948 (New): Remapped PGs are sometimes not deleted from previous OSDs
I noticed on several clusters (all Nautilus 14.2.6) that on occasion, some OSDs may still hold data for some PGs long... Eric Petit

12/24/2019

07:48 AM mgr Bug #43364: ceph-mgr's finisher queue can grow indefinitely, making python modules/commands unresponsive
Yep, that did it - the CPU spikes are much shorter (less than 1 sec) with the last patch, the processing queue isn't ... Eric Petit

12/23/2019

09:23 AM mgr Bug #43364: ceph-mgr's finisher queue can grow indefinitely, making python modules/commands unresponsive
Thank you for the patch,
I have tried the test build, but I'm afraid I did not see a reduction in CPU usage and th...
Eric Petit

12/20/2019

07:15 AM mgr Bug #43364: ceph-mgr's finisher queue can grow indefinitely, making python modules/commands unresponsive
Attaching gdbpmp profile, osd_stat_t::dump appears to be the hotspot
I tried the heartbeat change:...
Eric Petit

12/18/2019

07:02 AM mgr Bug #43364 (Resolved): ceph-mgr's finisher queue can grow indefinitely, making python modules/commands unresponsive
After upgrading from Luminous to Nautilus, I noticed that ceph-mgr would become partly unresponsive on larger cluster... Eric Petit
