Tyler Brekke



  • Ceph (Developer, Reporter, 09/18/2012)
  • Linux kernel client (Developer, Reporter, 09/18/2012)
  • phprados (Developer, Reporter, 09/18/2012)
  • devops (Developer, Reporter, 09/18/2012)
  • rbd (Developer, Reporter, 09/18/2012)
  • rgw (Developer, Reporter, 09/18/2012)
  • sepia (Developer, Reporter, 09/18/2012)
  • fs (Developer, Reporter, 08/11/2013)
  • rados-java (Developer, Reporter, 05/24/2013)
  • Calamari (Developer, Reporter, 08/27/2014)
  • CI (Reporter, 01/10/2017)
  • mgr (Developer, Reporter, 06/28/2017)
  • rgw-testing (Developer, Reporter, 11/01/2016)
  • RADOS (Developer, Reporter, 06/07/2017)
  • bluestore (Developer, Reporter, 11/29/2017)
  • Messengers (Developer, Reporter, 03/12/2019)
  • Orchestrator (Developer, Reporter, 01/16/2020)
  • dmclock (Developer, Reporter, 08/13/2020)



07:07 AM Calamari Bug #10186 (Resolved): salt ceph module crashes with client asok
When a client admin socket is present in /var/run/ceph, salt errors out because the socket name does not match mon|osd|mds...
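A minimal sketch of the failure mode described above, assuming admin-socket files are named `<type>.<id>.asok` (filenames and the regex are hypothetical illustrations, not the actual salt module code): a daemon-only pattern matches mon/osd/mds sockets but returns nothing for a client socket, which a caller expecting a match would then crash on.

```python
import re

# Hypothetical daemon-only pattern: matches mon/osd/mds admin sockets,
# but not client admin sockets, mirroring the bug described above.
DAEMON_RE = re.compile(r'^(mon|osd|mds)\.(.+)\.asok$')

def daemon_socket(name):
    """Return (type, id) for a daemon asok filename, else None."""
    m = DAEMON_RE.match(name)
    return m.groups() if m else None

print(daemon_socket('osd.3.asok'))                    # ('osd', '3')
print(daemon_socket('ceph-client.admin.12345.asok'))  # None
```

A caller that indexes into the match result without checking for `None` would raise here, which is consistent with the crash the report describes.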


03:58 PM rgw Bug #9651 (Duplicate): RGW: Object Removal Atomicity
The issue appears when a system goes down while there are pending object deletions. The object can be removed but will...


11:53 AM Ceph Feature #8973 (New): Add support for collecting usage information by namespace
As of now there is no simple way to determine how much data is being used by a particular namespace. Customers curren...


12:18 PM Ceph Bug #8921: ceph pg dump <{summary|sum|delta|pools|osds|pgs|pgs_brief}> only work correctly as json
Source: ZD #1671
12:16 PM Ceph Bug #8921 (Won't Fix): ceph pg dump <{summary|sum|delta|pools|osds|pgs|pgs_brief}> only work corr...
When ceph pg dump is run with an argument but without specifying json, the normal output from ceph pg dump is returned.


05:56 PM Ceph Revision c35ceef5 (ceph): ReplicatedPG: 'ajusted' typo
Signed-off-by: <Tyler Brekke>


03:47 PM Ceph Documentation #8281 (Resolved): Documentation: Detailed explanation of ceph df output is non-exis...
Right now ceph df output for pool usage is vague and there is no documentation explaining what the numbers mean.


10:23 AM rbd Feature #7507 (New): krbd: Make device symlinks cluster aware
Currently when a device is mapped a udev script creates a symlink at /dev/rbd/<pool>/<imagename>
Would be nice if ...
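A sketch of what a cluster-aware symlink layout might look like, assuming a `/dev/rbd/<cluster>/<pool>/<imagename>` scheme (the cluster-aware path shape and function are hypothetical illustrations of the feature request, not the udev script's actual behavior):

```python
import os

def rbd_symlink(pool, image, cluster=None):
    """Build a udev symlink path for a mapped rbd device.

    Current layout (per the report): /dev/rbd/<pool>/<imagename>
    Hypothetical cluster-aware layout: /dev/rbd/<cluster>/<pool>/<imagename>
    """
    parts = ['/dev/rbd']
    if cluster is not None:
        parts.append(cluster)
    parts += [pool, image]
    return os.path.join(*parts)

print(rbd_symlink('rbd', 'vm-disk'))               # /dev/rbd/rbd/vm-disk
print(rbd_symlink('rbd', 'vm-disk', 'ceph-prod'))  # /dev/rbd/ceph-prod/rbd/vm-disk
```

Inserting the cluster name as a directory level keeps the existing pool/image layout intact beneath it, so tools that already walk `/dev/rbd/<pool>/` would only need to descend one extra level.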


06:07 PM Ceph Bug #7328 (Resolved): osd: reweight-by-utilization ended up with stuck remapped pgs
Running ceph osd reweight-by-utilization resulted in stuck pgs....


10:16 AM rbd Feature #7272 (Duplicate): rbd: import performance
Currently the rbd import appears to be single-threaded, which means the import process is being written to a single di...
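A minimal sketch of the parallel alternative this request implies: splitting an image into fixed-size chunks and writing them concurrently instead of streaming sequentially. The `write_chunk` callback and chunking scheme are hypothetical stand-ins, not rbd's actual import code.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4 * 1024 * 1024  # 4 MiB, a typical RBD object size

def import_chunks(data, write_chunk, workers=8):
    """Split image data into fixed-size chunks and write them in parallel.

    write_chunk(offset, chunk_bytes) is a hypothetical per-chunk write
    callback; running several at once spreads writes across objects
    rather than funneling the whole import through one sequential stream.
    """
    offsets = range(0, len(data), CHUNK)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(write_chunk, off, data[off:off + CHUNK])
                   for off in offsets]
        for f in futures:
            f.result()  # propagate any write error
```

Each chunk lands at its own offset, so writes are independent and order does not matter; reassembling the chunks by offset reproduces the original image.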
