Activity
From 07/25/2017 to 08/23/2017
08/23/2017
- 11:01 PM Bug #21087: BlueFS Becomes Totally Inaccessible when Failed to Allocate
- Some of my investigations:
* No space seems allocatable on BlueFS, obviously. BlueFS::_allocate failed with r = al...
- 10:48 PM Bug #21087 (Can't reproduce): BlueFS Becomes Totally Inaccessible when Failed to Allocate
- After trying to insert thousands of missing osdmaps into the bluefs partitions, they now appear to be full and not acce...
- 10:47 PM Bug #21040: bluestore: multiple objects (clones?) referencing same blocks (on all replicas)
- https://www.spinics.net/lists/ceph-users/msg37945.html
https://www.spinics.net/lists/ceph-users/msg38213.html
- 05:04 PM Bug #21040: bluestore: multiple objects (clones?) referencing same blocks (on all replicas)
- I believe I'm also encountering this issue. Here's the root of the mailing list thread discussing my issue: http://...
- 01:01 AM Bug #21068: ceph-disk deploy bluestore fails to create correct block symlink for multipath devices
- Also, the target version should be luminous, the next release.
- 12:59 AM Bug #21068: ceph-disk deploy bluestore fails to create correct block symlink for multipath devices
- Title of bug should be: 'ceph-disk prepare --bluestore fails to create correct block symlink for multipath devices.'
...
- 12:50 AM Bug #21068 (Won't Fix): ceph-disk deploy bluestore fails to create correct block symlink for mult...
- Instead of pointing to the aggregated multipath device /dev/dm-XX, the block symlink would point to one of the single underlying paths (see the sketch below).
...
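A minimal way to check the reported behavior, assuming an OSD data directory at /var/lib/ceph/osd/ceph-0 (the path is illustrative, not from the report):
  readlink /var/lib/ceph/osd/ceph-0/block
  # expected: the aggregated multipath device, e.g. /dev/dm-XX or /dev/mapper/mpathX
  # reported: one of the underlying single-path devices instead, e.g. /dev/sdX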
08/22/2017
- 01:42 PM Bug #21062: ceph-osd crashes in bluestore
- There are two similar OSDs with the same error.
- 01:40 PM Bug #21062 (Can't reproduce): ceph-osd crashes in bluestore
- ceph-osd luminous crashes on startup after emulating power loss (via magic sysrq reboot):
Aug 22 13:27:47 dpr-2a17...
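For reference, the usual way to emulate sudden power loss with magic sysrq (assumed to be what the report means by a magic sysrq reboot):
  echo 1 > /proc/sys/kernel/sysrq   # enable all sysrq functions
  echo b > /proc/sysrq-trigger      # reboot immediately, without syncing or unmounting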
08/18/2017
- 02:30 PM Bug #21040 (Resolved): bluestore: multiple objects (clones?) referencing same blocks (on all repl...
- Hi Ceph,
I have been using Ceph Luminous since 12.0.x and it has been running well. But since Ceph Luminous 12.1.3 and 12.1.4 I got 1 ...
08/14/2017
- 02:59 PM Bug #20997 (Can't reproduce): bluestore_types.h: 739: FAILED assert(p != extents.end())
- http://qa-proxy.ceph.com/teuthology/teuthology-2017-08-12_05:00:19-rbd-luminous-distro-basic-smithi/1517639/teutholog...
08/08/2017
- 03:16 PM Bug #20917: kraken "bluestore/BitMapAllocator.cc: 82: FAILED assert(!(need % m_block_size))" in p...
- This only affects kraken, so it is not a big deal.
08/04/2017
- 07:36 PM Bug #20917 (Won't Fix): kraken "bluestore/BitMapAllocator.cc: 82: FAILED assert(!(need % m_block_...
- This is for the kraken v11.2.1 point release
Run: http://pulpito.ceph.com/yuriw-2017-08-04_16:10:42-krbd-kraken-testi...
08/03/2017
- 05:32 PM Bug #20870: OSD compression: incorrect display of the used disk space
- Hi,
I just discovered "ceph df detail"...
08/02/2017
- 03:55 PM Bug #20870: OSD compression: incorrect display of the used disk space
- Exactly!
These *ceph df* outputs are correct (so it's not a bug), but they would need to be more precise.
I could ima...
- 03:17 PM Bug #20870: OSD compression: incorrect display of the used disk space
- The per-pool USED is the logical user data written (before compression). The RAW USED space is the actual on-disk sp...
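A worked example of that accounting, with assumed figures (3x replication, 2:1 compression, 100 GiB of client writes; none of these numbers come from the report):
  per-pool USED: 100 GiB (logical data, before compression)
  RAW USED: 100 GiB / 2 * 3 = 150 GiB (on-disk after compression, summed over all replicas)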
- 10:42 AM Bug #20842: Ceph 12.1 BlueStore low performance
- I'd like to share more about test result on BlueStore.
This time, I use rbd bench and rados bench to test our envi... (see the bench sketch below)
- 07:25 AM Bug #20842: Ceph 12.1 BlueStore low performance
- Preliminary analysis of the logs:
From the logs, it is easy to see that IO throughput in FileStore is much higher than in BlueStore...
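A hedged sketch of rbd bench and rados bench invocations like those mentioned above (the pool/image names and sizes are assumptions, not from the report):
  rbd bench --io-type write --io-size 4K --io-total 1G testpool/testimage
  rados bench -p testpool 60 write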
08/01/2017
- 01:56 PM Bug #20870 (Resolved): OSD compression: incorrect display of the used disk space
- Hi,
I tested bluestore OSD compression with:
/etc/ceph/ceph.conf...
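The actual config is truncated above; a hypothetical minimal ceph.conf snippet enabling bluestore compression (standard option names, assumed values) might look like:
  [osd]
  bluestore compression algorithm = snappy
  bluestore compression mode = aggressive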
07/28/2017
- 03:04 PM Bug #20847 (Closed): low performance for bluestore rbd block creation vs filestore
- It appears that bluestore may be slower than filestore when creating RBD blocks (i.e. if pre-filling an RBD volume with...
- 10:34 AM Bug #20842: Ceph 12.1 BlueStore low performance
- blkparse_sda.out # blkparse -i sda -d blkparse_sda.out
blkparse_sdd.out # blkparse -i sdd -d blkparse_sdd.out
Tho... (see the capture sketch below)
- 10:21 AM Bug #20842 (Closed): Ceph 12.1 BlueStore low performance
- Hi,
We have a tricky issue comparing performance between FileStore and BlueStore on Ceph 12.1. In our 3 nodes ...
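A hedged sketch of how the blkparse dumps attached above could have been produced (the blktrace capture step is assumed; only the blkparse commands appear in the report):
  blktrace -d /dev/sda -o sda           # record block-layer IO events for the device
  blkparse -i sda -d blkparse_sda.out   # parse the trace and dump a binary stream for analysis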
07/27/2017
- 05:38 PM Feature #20801 (Rejected): ability to rebuild BlueStore WAL journals is missing
- We have a six-node Ceph cluster v12.0.1 with a BlueStore back-end; all RocksDB WAL journals were located on Flashtec NV...