Activity
From 09/27/2017 to 10/26/2017
10/26/2017
- 02:48 AM Bug #21480: bluestore: flush_commit is racy
- 08-rados-wip-sage2-testing-2017-10-25-1347-distro-basic-smithi/1773520
10/25/2017
- 09:48 PM Bug #21259: bluestore: segv in BlueStore::TwoQCache::_trim
- Crash here:...
- 08:01 PM Bug #21809: Raw Used space is 70x higher than actually used space (maybe orphaned objects from po...
- I'm (not) sure if this helps
- 08:00 PM Bug #21809: Raw Used space is 70x higher than actually used space (maybe orphaned objects from po...
- I'm sure if this helps as I'm using bluestore....
10/23/2017
- 12:18 PM Bug #21809: Raw Used space is 70x higher than actually used space (maybe orphaned objects from po...
- Can you go into the /var/lib/ceph/osd/ceph-NNN/current directory of one of the OSDs, and do a 'du -hs *', and attach ...
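The diagnostic requested above can be sketched as a short shell session. The OSD id (`0`) is a placeholder; note the `current` directory with per-PG subdirectories exists only on filestore OSDs, not bluestore:

```shell
# Hypothetical OSD id; substitute your own NNN.
cd /var/lib/ceph/osd/ceph-0/current
# Per-PG disk usage, sorted smallest to largest; attach this output to the ticket.
du -hs * | sort -h
```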
10/20/2017
- 06:57 PM Bug #21781 (Need More Info): bluestore: ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixNoCsum...
- Did this run contain 9ad1f4f10ff7bfe32d0a37361640fe5c65e56699 ? It should have fixed this.. but maybe it broke somet...
- 04:04 PM Bug #21809: Raw Used space is 70x higher than actually used space (maybe orphaned objects from po...
- Hi Sage,
unfortunately the RAW space has not dropped. It's at 2775G when the sum of both pools is at 700G. There's...
- 03:55 PM Bug #21809 (Need More Info): Raw Used space is 70x higher than actually used space (maybe orphane...
- Deleting old PGs is asynchronous. Unfortunately I don't think there are metrics reported to globally observe them.. ...
10/15/2017
- 11:04 AM Bug #21809 (Can't reproduce): Raw Used space is 70x higher than actually used space (maybe orphan...
- Hi,
I've had a pool named vm-0d29db27 which used roughly 3T of storage.
I wanted to purge this pool and a...
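For reference, purging a pool in luminous can be sketched as follows. The pool name comes from the report; `mon_allow_pool_delete` must be enabled on the monitors, and (as noted later in this thread) RAW USED only drops once the OSDs finish deleting the old PGs asynchronously:

```shell
# Pool deletion must be explicitly enabled on the monitors first.
ceph tell 'mon.*' injectargs '--mon_allow_pool_delete=true'
# The pool name must be given twice, plus the confirmation flag.
ceph osd pool rm vm-0d29db27 vm-0d29db27 --yes-i-really-really-mean-it
# RAW USED in the output shrinks gradually as PG removal completes.
ceph df
```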
10/13/2017
- 04:44 AM Bug #21781 (Can't reproduce): bluestore: ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixNoCsu...
- ...
10/12/2017
- 02:53 PM Bug #21332: OSD Caught signal with StupidAllocator on 12.2.0
- Hello,
I'm running Ceph v12.2.1 and *I got the same error when I ran FIO to benchmark a VM*.
FIO command: fio --r...
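The exact fio command in the report is truncated, so the following only illustrates the general shape of such a random-write VM benchmark; every parameter here is a hypothetical placeholder, not the reporter's values:

```shell
# Hypothetical random-write benchmark against a VM block device.
fio --name=vm-bench --filename=/dev/vdb --rw=randwrite \
    --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --numjobs=4 --group_reporting
```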
10/10/2017
- 07:21 AM Feature #21741: os/bluestore: multi-tier support in BlueStore
- *PR*: https://github.com/ceph/ceph/pull/18211
- 01:11 AM Feature #21741 (Closed): os/bluestore: multi-tier support in BlueStore
- Currently, BlueStore stores data in a single block device.
Allow users to add a fast tier (ex. SSD) to BlueStore so ...
10/09/2017
- 06:57 PM Bug #21736 (Can't reproduce): Cannot create bluestore OSD
- ```
root@plk-sr0047-013-dm:~# ceph-disk prepare --bluestore /dev/sda --osd-id 4 --osd-uuid `uuidgen`
Creating new ...
10/08/2017
- 12:42 PM Bug #20997: bluestore_types.h: 739: FAILED assert(p != extents.end())
- Sorry for the above post, I did not sleep much ;).
Operations like "ceph-objectstore-tool list" show a warning:
<p...
- 12:31 PM Bug #20997: bluestore_types.h: 739: FAILED assert(p != extents.end())
- Update: After investigation I narrowed the problem down to osd.1 pg "22.5s3" and object "22.5s3_head,3#22:addebba8:::10000014f8...
10/06/2017
- 02:05 AM Bug #21259: bluestore: segv in BlueStore::TwoQCache::_trim
- Ok, the patch is merged to the latest luminous branch, which you can install from https://shaman.ceph.com/.
10/04/2017
- 09:18 PM Bug #20997: bluestore_types.h: 739: FAILED assert(p != extents.end())
- Hi
The same problem hit my setup: single node, cephfs on ec-pool(zlib) on bluestore on dmcrypt, pool was created o...
10/03/2017
- 08:57 PM Bug #21259 (Need More Info): bluestore: segv in BlueStore::TwoQCache::_trim
- 08:56 PM Bug #21259: bluestore: segv in BlueStore::TwoQCache::_trim
- https://github.com/ceph/ceph/pull/18103 is the luminous backport.. will ping you on this ticket once it is merged.
...
10/01/2017
- 06:07 PM Bug #21259 (New): bluestore: segv in BlueStore::TwoQCache::_trim
- https://www.mail-archive.com/ceph-users@lists.ceph.com/msg40646.html
12.2.1 3e7492b9ada8bdc9a5cd0feafd42fbca27f9c3...
09/29/2017
- 04:10 PM Bug #20557: segmentation fault with rocksdb|BlueStore and jemalloc
- See http://tracker.ceph.com/issues/21318 ... this still happens (intermittently) on current luminous.