Activity

From 02/26/2020 to 03/26/2020

03/26/2020

05:26 PM Bug #44774: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
https://github.com/ceph/ceph/pull/34219 Honggang Yang
05:11 PM Bug #44774 (Resolved): ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
h1. bluefs-bdev-new-wal... Honggang Yang
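
For context, a minimal sketch of the kind of invocation this report is about; the OSD path and target device below are placeholders, and the OSD has to be stopped first:

   systemctl stop ceph-osd@0
   ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 \
       --dev-target /dev/nvme0n1p1 \
       --command bluefs-bdev-new-wal
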
02:44 PM Bug #38745: spillover that doesn't make sense
Marcin,
30-64GB is an optimal configuration. Certainly, if one can afford a 250+ GB drive and hence serve L4 - it's O...
Igor Fedotov
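
A rough sketch of where figures like these come from, assuming the default RocksDB settings quoted later in this thread (~0.25 GB base level, x10 size multiplier); the compaction/rounding headroom discussed in the thread is what lifts the ~30 GB figure towards ~64 GB:

   L0 + L1 ≈ 0.25 + 0.25 ≈ 0.5 GB
   L2      ≈ 2.5 GB
   L3      ≈ 25 GB    -> roughly 30 GB covers everything up to L3
   L4      ≈ 250 GB   -> only reachable with a 250+ GB device
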
08:53 AM Bug #38745: spillover that doesn't make sense
Hi Igor,
So, your recommendation is to create a volume which can serve enough space for levels 1-3, compaction and...
Marcin W
07:47 AM Bug #38745: spillover that doesn't make sense
Thanks a lot Igor. Will wait for the backport PR and will report back the results. Eneko Lacunza

03/25/2020

10:01 PM Bug #38745: spillover that doesn't make sense
In short - I'm trying to be pretty conservative when suggesting 64GB for a DB/WAL volume. Igor Fedotov
09:58 PM Bug #38745: spillover that doesn't make sense
Marcin W wrote:
> BTW. These are the default RocksDB params IIRC (in GB):
> base=256, multiplier=10, levels=5
>
...
Igor Fedotov
09:50 PM Bug #38745: spillover that doesn't make sense
Eneko,
to be honest, 60GB and 64GB are pretty much the same to me. The estimate which resulted in this value did some ro...
Igor Fedotov
11:24 AM Bug #38745: spillover that doesn't make sense
Thanks @Igor
Sorry for the delay getting back; I didn't receive any email from the tracker regarding your update.
I...
Eneko Lacunza
05:15 PM Bug #44757 (Resolved): perf regression due to bluefs_buffered_io=true
RHBZ - https://bugzilla.redhat.com/show_bug.cgi?id=1802199
This option was enabled with the help of PR - https://git...
Vikhyat Umrao
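
For reference, a sketch of how the option could be inspected and reverted through the standard config interface (the option name comes from the title; the exact commands are an assumption):

   ceph config get osd bluefs_buffered_io        # value currently applied to OSDs
   ceph config set osd bluefs_buffered_io false  # switch back to the previous default (unbuffered)
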

03/24/2020

04:50 PM Bug #44731 (Closed): Space leak in Bluestore
Hi.
I'm experiencing some kind of a space leak in Bluestore. I use EC, compression and snapshots. First I thought ...
Vitaliy Filippov
11:56 AM Bug #38745: spillover that doesn't make sense
BTW. These are the default RocksDB params IIRC (in GB):
base=256, multiplier=10, levels=5
0= 0.25
1= 0.25
2= 2...
Marcin W
11:47 AM Bug #38745: spillover that doesn't make sense
@Igor,
I've read somewhere in the docs that the recommended DB size is no less than 5% of the block device. IMO if we consider either...
Marcin W
10:00 AM Bug #38745: spillover that doesn't make sense
Eneko,
could you please attach output for 'ceph-kvstore-tool bluestore-kv <path-to-osd> stats' command?
I suppose...
Igor Fedotov
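
The requested command, sketched with a placeholder OSD path (the OSD needs to be stopped while the tool opens the store):

   systemctl stop ceph-osd@2
   ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-2 stats
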
09:25 AM Bug #38745: spillover that doesn't make sense
Hi, we're seeing this issue too, using 14.2.8 (Proxmox build).
We originally had a 1GB rocks.db partition:
# ceph ...
Eneko Lacunza

03/11/2020

08:00 AM Bug #38745: spillover that doesn't make sense
Hello,
I also see this message on a Nautilus cluster; should I be worried about it?...
Yoann Moulin

03/10/2020

12:57 PM Bug #44544: VDO, wrong RAW space calculation after adding 512GB of data to a pool.
The cluster is filled with 100% deduplicated data.
VDO version: 6.2.2.117
Sergey Ponomarev
12:50 PM Bug #44544 (New): VDO, wrong RAW space calculation after adding 512GB of data to a pool.
If more than 512GB of data is added to a Ceph cluster with VDO-backed OSDs, the RAW space starts being calculated incorrectly.
I think Ceph implements...
Sergey Ponomarev
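
A hedged sketch of how one might compare what Ceph reports against what VDO has actually allocated (assumes the stock VDO tooling, run on the OSD host):

   ceph df                     # RAW USED as Ceph calculates it
   vdostats --human-readable   # physical space actually used on the VDO volume
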

03/08/2020

06:50 PM Bug #44509 (Won't Fix): using ceph-bluestore-tool bluefs-bdev-new-db results in 128k or 64k lefto...
After using bluefs-bdev-new-db on 24 OSDs, all of those OSDs are in the following state even after... Stefan Priebe
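
For context, a minimal sketch of the operation being described; the OSD path and target LV are placeholders, and the OSD must be offline:

   ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-24 \
       --dev-target /dev/vg_db/osd-24-db \
       --command bluefs-bdev-new-db
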

02/28/2020

10:48 PM Bug #44359 (New): Raw usage reported by 'ceph osd df' incorrect when using WAL/DB on another drive
While converting a Nautilus (14.2.6) cluster from FileStore to BlueStore, I've noticed that the RAW USE reported... Bryan Stillwell
01:05 PM Bug #36455: BlueStore: ENODATA not fully handled
I'm seeing this on 14.2.7 during deep-scrubbing. I'll investigate and maybe open a new issue.... Paul Emmerich
12:13 PM Bug #42209 (Resolved): STATE_KV_SUBMITTED is set too early.
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler

02/27/2020

06:15 PM Backport #42833 (Resolved): mimic: STATE_KV_SUBMITTED is set too early.
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31673
m...
Nathan Cutler

02/26/2020

11:43 PM Backport #42833: mimic: STATE_KV_SUBMITTED is set too early.
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31673
merged
Yuri Weinstein