Activity
From 03/03/2020 to 04/01/2020
04/01/2020
- 07:39 PM Backport #44819 (In Progress): octopus: perf regression due to bluefs_buffered_io=true
- https://github.com/ceph/ceph/pull/34353 merged
Reviewed-by: Vikhyat Umrao <vikhyat@redhat.com>
- 01:07 PM Bug #44880 (Resolved): ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
- ...
- 10:42 AM Bug #44878 (Resolved): mimic: incorrect SSD bluestore compression/allocation defaults
- The current defaults for SSD are:
bluestore_compression_min_blob_size_ssd 8192
bluestore_min_alloc_size_ssd 16384...
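A quick way to confirm what a running OSD actually uses (a minimal sketch; osd.0 is a placeholder, and bluestore_min_alloc_size_ssd only takes effect at mkfs time, so existing OSDs keep the value they were created with):
<pre>
# Ask a running OSD for its current values via the admin socket (osd.0 is an example).
ceph daemon osd.0 config get bluestore_compression_min_blob_size_ssd
ceph daemon osd.0 config get bluestore_min_alloc_size_ssd
</pre>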
03/31/2020
- 01:53 AM Bug #44774: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
- h1. How to reproduce this problem
h2. 1. modify your vstart.sh
# git diff ../src/vstart.sh
diff --git a/src/...
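For reference, a minimal sketch of the operation this ticket is about (the OSD path and target device are placeholders; the OSD must be stopped before running it):
<pre>
# Attach a new dedicated WAL device to an existing, stopped OSD; paths and devices are examples only.
ceph-bluestore-tool bluefs-bdev-new-wal --path /var/lib/ceph/osd/ceph-0 --dev-target /dev/nvme0n1p2
</pre>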
03/30/2020
- 06:15 PM Backport #44818 (In Progress): nautilus: perf regression due to bluefs_buffered_io=true
- 04:50 PM Backport #44818 (Resolved): nautilus: perf regression due to bluefs_buffered_io=true
- https://github.com/ceph/ceph/pull/34297
- 04:50 PM Backport #44819 (Resolved): octopus: perf regression due to bluefs_buffered_io=true
- https://github.com/ceph/ceph/pull/34353
03/29/2020
- 06:38 PM Bug #44757 (Pending Backport): perf regression due to bluefs_buffered_io=true
- 02:19 PM Bug #41901 (Resolved): bluestore: unused calculation is broken
03/26/2020
- 05:26 PM Bug #44774: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
- https://github.com/ceph/ceph/pull/34219
- 05:11 PM Bug #44774 (Resolved): ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
- h1. bluefs-bdev-new-wal...
- 02:44 PM Bug #38745: spillover that doesn't make sense
- Marcin,
30-64GB is an optimal configuration. Certainly if one can afford a 250+ GB drive and hence serve L4 - it's O...
- 08:53 AM Bug #38745: spillover that doesn't make sense
- Hi Igor,
So, your recommendation is to create a volume which can serve enough space for levels 1-3, compaction and...
- 07:47 AM Bug #38745: spillover that doesn't make sense
- Thanks a lot Igor. Will wait for the backport PR and will report back the results.
03/25/2020
- 10:01 PM Bug #38745: spillover that doesn't make sense
- In short - I'm trying to be pretty conservative when suggesting 64GB for DB/WAL volume.
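To make the arithmetic behind that figure explicit, here is a rough sketch assuming the default RocksDB settings quoted later in this thread (base of 256MB per level, multiplier of 10):
<pre>
# Approximate RocksDB level sizes with base=256MB and multiplier=10:
#   L1 ~ 0.25 GB
#   L2 ~ 2.5  GB
#   L3 ~ 25   GB
#   L4 ~ 250  GB
# L1+L2+L3 is roughly 28 GB; with WAL and headroom for compaction this lands in
# the 30-64 GB range, while also serving L4 would need a 250+ GB device.
</pre>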
- 09:58 PM Bug #38745: spillover that doesn't make sense
- Marcin W wrote:
> BTW. These are the default RocksDB params IIRC (in GB):
> base=256, multiplier=10, levels=5
>
...
- 09:50 PM Bug #38745: spillover that doesn't make sense
- Eneko,
to be honest 60GB and 64GB are pretty much the same to me. The estimate which resulted in this value did some ro...
- 11:24 AM Bug #38745: spillover that doesn't make sense
- Thanks @Igor
Sorry for the delay getting back, I didn't receive any email regarding your update from tracker.
I...
- 05:15 PM Bug #44757 (Resolved): perf regression due to bluefs_buffered_io=true
- RHBZ - https://bugzilla.redhat.com/show_bug.cgi?id=1802199
This option was enabled with the help of PR - https://git...
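A minimal sketch of how the option can be checked and overridden at runtime (osd.0 is a placeholder; depending on the release an OSD restart may be needed for the change to take effect):
<pre>
# See what a running OSD currently uses.
ceph daemon osd.0 config get bluefs_buffered_io
# Turn buffered BlueFS I/O off for all OSDs via the centralized config.
ceph config set osd bluefs_buffered_io false
</pre>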
03/24/2020
- 04:50 PM Bug #44731 (Closed): Space leak in Bluestore
- Hi.
I'm experiencing some kind of a space leak in Bluestore. I use EC, compression and snapshots. First I thought ...
- 11:56 AM Bug #38745: spillover that doesn't make sense
- BTW. These are the default RocksDB params IIRC (in GB):
base=256, multiplier=10, levels=5
0= 0.25
1= 0,25
2= 2,...
- 11:47 AM Bug #38745: spillover that doesn't make sense
- @Igor,
I've read somewhere in the docs that the recommended DB size is no less than 5% of block. IMO if we consider either...
- 10:00 AM Bug #38745: spillover that doesn't make sense
- Eneko,
could you please attach the output of the 'ceph-kvstore-tool bluestore-kv <path-to-osd> stats' command?
I suppose...
- 09:25 AM Bug #38745: spillover that doesn't make sense
- Hi, we're seeing this issue too. Using 14.2.8 (proxmox build)
We originally had 1GB rocks.db partition:
# ceph ...
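A sketch of how the data Igor asked for above can be collected (the OSD should be stopped first, and /var/lib/ceph/osd/ceph-0 is a placeholder path):
<pre>
# Dump RocksDB/BlueFS statistics for a stopped OSD; the path is an example only.
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 stats
# On 14.2.x, spillover is also surfaced in cluster health.
ceph health detail | grep -i spillover
</pre>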
03/11/2020
- 08:00 AM Bug #38745: spillover that doesn't make sense
- Hello,
I also have this message on a Nautilus cluster, should I be worried about it?...
03/10/2020
- 12:57 PM Bug #44544: VDO, Wrong RAW space calculation after add 512gb of data in pool.
- The cluster is filled with 100% deduplicated data.
VDO version: 6.2.2.117
- 12:50 PM Bug #44544 (New): VDO, Wrong RAW space calculation after add 512gb of data in pool.
- If more than 512GB of data is added to a Ceph pool backed by VDO OSDs, the RAW space starts being calculated incorrectly.
I think in Ceph implemented...
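A minimal sketch for comparing what Ceph reports against what the VDO layer actually consumes (assumes the vdo tools are installed; names are placeholders):
<pre>
# What Ceph believes is used, per pool and raw.
ceph df detail
# What VDO reports after deduplication/compression.
vdostats --human-readable
</pre>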
03/08/2020
- 06:50 PM Bug #44509 (Won't Fix): using ceph-bluestore-tool bluefs-bdev-new-db results in 128k or 64k lefto...
- After using bluefs-bdev-new-db on 24 OSDs, all of those OSDs are in the following state even after...
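For context, a sketch of the migration flow involved, where the leftover allocation shows up (the OSD must be stopped; paths and devices are placeholders):
<pre>
# Attach a new dedicated DB device to an existing, stopped OSD; example paths only.
ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-0 --dev-target /dev/nvme0n1p1
# Then move the existing BlueFS data off the slow device onto the new DB device.
ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-0 \
    --devs-source /var/lib/ceph/osd/ceph-0/block --dev-target /var/lib/ceph/osd/ceph-0/block.db
</pre>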