Profile

Henrik Korkuc

  • Email: bugs@kirneh.eu
  • Registered on: 05/28/2014
  • Last connection: 10/03/2017

Activity

10/01/2017

06:21 AM rbd Bug #21567: rbd does not delete snaps in (ec) data pool
Not sure if it helps, but without knowing the internals, looking at that patch I have the impression that the data pool snapsh...

09/29/2017

12:00 PM rbd Bug #21567: rbd does not delete snaps in (ec) data pool
Same issue on 12.2.1 too.

09/27/2017

10:17 AM rbd Bug #21567 (Resolved): rbd does not delete snaps in (ec) data pool
After deleting RBD image snapshots, space is not reclaimed. Reproduced with:
rbd-ec(id 1, EC 4+2) and rbd-meta (id 2,...

09/19/2017

08:20 AM RADOS Bug #21287: 1 PG down, OSD fails with "FAILED assert(i->prior_version == last || i->is_error())"
I had to delete the affected pool to reclaim the occupied space, so I am unable to verify any fixes.
08:07 AM bluestore Bug #21259: bluestore: segv in BlueStore::TwoQCache::_trim
I upgraded cluster to 12.2.0-178-gba746cd (ba746cd14ddd70a4f24a734f83ff9d276dd327d1) last week (to mitigate aio submi...

09/08/2017

05:54 PM bluestore Bug #21259: bluestore: segv in BlueStore::TwoQCache::_trim
BTW, I am running on Debian Jessie; it looks like Shaman does not build for it, so I am going to build it myself. I al...
06:54 AM bluestore Bug #21259: bluestore: segv in BlueStore::TwoQCache::_trim
It looks like the build failed... https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAIL...

09/07/2017

09:01 AM RADOS Bug #21287: 1 PG down, OSD fails with "FAILED assert(i->prior_version == last || i->is_error())"
BTW, the down PG is 1.1735.
Starting OSD 381 crashes 65, 133, and 118. Stopping 65 makes it possible to start the remaining OSDs, start...
08:16 AM RADOS Bug #21180: Bluestore throttler causes down OSD
The pool used for this workload is blocked by a down PG (#21287), but I'll try to replicate on the same cluster with a newly crea...
08:14 AM RADOS Bug #21287 (Duplicate): 1 PG down, OSD fails with "FAILED assert(i->prior_version == last || i->i...
One PG went down for me during a large rebalance (I added racks to the OSD placement; almost all data had to be shuffled). ...
