Activity

From 03/19/2020 to 04/17/2020

04/17/2020

04:37 PM Bug #45133 (Fix Under Review): BlueStore asserting on fs upgrade tests
Igor Fedotov
03:58 PM Bug #45133: BlueStore asserting on fs upgrade tests
Yes indeed, BlueStore::_upgrade_super() when you are starting with format 2 invokes _prepare_ondisk_format_super() as... Greg Farnum
03:10 PM Bug #45133 (Resolved): BlueStore asserting on fs upgrade tests
See for instance http://pulpito.front.sepia.ceph.com/gregf-2020-04-15_20:54:50-fs-wip-greg-testing-415-1-distro-basic... Greg Farnum
01:48 PM Backport #45122 (In Progress): octopus: OSD might fail to recover after ENOSPC crash
https://github.com/ceph/ceph/pull/34610 Igor Fedotov
10:26 AM Backport #45122 (Resolved): octopus: OSD might fail to recover after ENOSPC crash
https://github.com/ceph/ceph/pull/34610 Igor Fedotov
01:46 PM Backport #45123: nautilus: OSD might fail to recover after ENOSPC crash
Doing a partial backport for now to work around the issue from comment #2. Igor Fedotov
01:46 PM Backport #45123 (In Progress): nautilus: OSD might fail to recover after ENOSPC crash
https://github.com/ceph/ceph/pull/34611 Igor Fedotov
11:06 AM Backport #45123: nautilus: OSD might fail to recover after ENOSPC crash
Looks like this backport depends on backporting per-pool omap stats collection support, see
https://github.com/ceph...
Igor Fedotov
10:27 AM Backport #45123 (Resolved): nautilus: OSD might fail to recover after ENOSPC crash
https://github.com/ceph/ceph/pull/34611 Igor Fedotov
12:54 PM Bug #43370: OSD crash in function bluefs::_flush_range with ceph_abort_msg "bluefs enospc"
Gerdriaan, sorry, missed your inquiry.
What I wanted was the output of:
ceph-bluestore-tool --path <path-to-osd> --c...
Igor Fedotov
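The exact subcommand requested above is truncated; purely as an illustration of how such a ceph-bluestore-tool inspection is usually invoked (the bluefs-bdev-sizes subcommand and the OSD path below are assumptions, not necessarily what was asked for):
# illustrative only: subcommand and path are assumptions, not the truncated request above
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-2 --command bluefs-bdev-sizes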
12:47 PM Bug #43068: on disk size (81292) does not match object info size (81237)
It looks to me that this bug rather has an out-of-bluestore scope. Am I right that it applies to CephFS files/objects only? A... Igor Fedotov
12:24 AM Bug #43068 (New): on disk size (81292) does not match object info size (81237)
We got one more hit:
2020-04-16 14:21:30.139493 osd.869 (osd.869) 10192 : cluster [ERR] 6.68b shard 869 soid 6:d16f...
Xiaoxi Chen
10:27 AM Backport #45127 (Resolved): octopus: Extent leak after main device expand
https://github.com/ceph/ceph/pull/34610 Igor Fedotov
10:27 AM Backport #45126 (Resolved): nautilus: Extent leak after main device expand
https://github.com/ceph/ceph/pull/34711 Igor Fedotov
10:27 AM Backport #45125 (Rejected): mimic: Extent leak after main device expand
Igor Fedotov

04/16/2020

10:21 AM Bug #45112 (Resolved): OSD might fail to recover after ENOSPC crash
While opening after such a crash, the KV store might need to flush some data and hence needs additional disk space. But allocato... Igor Fedotov
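As a hedged illustration of how one might inspect the device and BlueFS space an affected OSD sees (the OSD path is an assumption, and this only inspects state; it is not the fix discussed in this ticket):
# illustrative inspection only; the path is an assumption and the OSD must be stopped
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 --command bluefs-bdev-sizes
# newer releases also provide 'free-dump' / 'free-score' to examine allocator state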
10:05 AM Bug #45110 (Resolved): Extent leak after main device expand
To reproduce the issue one can expand a device of 3,147,480,064 bytes to
4,147,480,064 using the bluefs-bdev-expand comman...
Igor Fedotov
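A minimal sketch of that reproduction on a file-backed OSD, assuming a vstart/test-style layout (the path is an assumption; on LVM-backed OSDs the device would be grown with LVM tools instead):
# stop the OSD first; path and sizes are illustrative, mirroring the description above
truncate -s 4147480064 /var/lib/ceph/osd/ceph-0/block
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 --command bluefs-bdev-expand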

04/15/2020

03:25 PM Bug #44880 (Fix Under Review): ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
Igor Fedotov

04/11/2020

09:42 AM Backport #45064 (Resolved): nautilus: bluestore: unused calculation is broken
https://github.com/ceph/ceph/pull/34794 Nathan Cutler
09:42 AM Backport #45063 (Resolved): octopus: bluestore: unused calculation is broken
https://github.com/ceph/ceph/pull/34793 Nathan Cutler
09:42 AM Backport #45062 (Rejected): mimic: bluestore: unused calculation is broken
Nathan Cutler
09:38 AM Backport #45045 (Resolved): nautilus: ceph-bluestore-tool --command bluefs-bdev-new-wal may damag...
https://github.com/ceph/ceph/pull/34796 Nathan Cutler
09:37 AM Backport #45044 (Resolved): octopus: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage...
https://github.com/ceph/ceph/pull/34795 Nathan Cutler

04/10/2020

08:00 PM Backport #43920 (In Progress): nautilus: common/bl: claim_append() corrupts memory when a bl cons...
Shyukri Shyukriev
07:57 PM Backport #43087 (In Progress): nautilus: bluefs: sync_metadata leaks dirty files if log_t is empty
Shyukri Shyukriev
07:55 PM Backport #38160 (In Progress): luminous: KernelDevice exclusive lock broken
Shyukri Shyukriev
07:24 PM Backport #41462 (In Progress): luminous: incorrect RW_IO_MAX
Shyukri Shyukriev
11:09 AM Bug #44937: bluestore rocksdb max_background_compactions regression in 12.2.13
Looks like the rocksdb fix [2] isn't present in mimic either. Hence setting a new default for max_background_compactions n... Igor Fedotov
04:40 AM Bug #44774 (Pending Backport): ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
Kefu Chai

04/09/2020

09:46 AM Backport #44818 (Resolved): nautilus: perf regression due to bluefs_buffered_io=true
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34297
m...
Nathan Cutler

04/08/2020

10:52 PM Backport #44818: nautilus: perf regression due to bluefs_buffered_io=true
Neha Ojha wrote:
> https://github.com/ceph/ceph/pull/34297
merged
Yuri Weinstein
06:28 AM Bug #43068: on disk size (81292) does not match object info size (81237)
bluestore Xiaoxi Chen

04/07/2020

04:11 PM Feature #44978 (New): support "bad block" isolation
A while ago we had a question at a meetup:
> Does BlueStore support bad block isolation?
> In FileStore if you hi...
Greg Farnum

04/06/2020

12:52 PM Bug #44878 (Fix Under Review): mimic: incorrect SSD bluestore compression/allocation defaults
In fact this bug causes no real issues - it just results in some confusion for users. BlueStore has some code which ... Igor Fedotov

04/03/2020

09:32 PM Bug #44937: bluestore rocksdb max_background_compactions regression in 12.2.13
I know Luminous is EOL. Opened this issue to assess the regression for severity.
The rocksdb issue is an edge cas...
Dan Hill
09:25 PM Bug #44937 (Need More Info): bluestore rocksdb max_background_compactions regression in 12.2.13
In pr#29027 [0] Mark called out a rocksdb pr#3926 [1] where concurrent compactions can result in output files that ha... Dan Hill
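As a hedged way to check how an affected OSD is actually configured (osd.0 is an illustrative daemon name; max_background_compactions is carried inside the bluestore_rocksdb_options string):
# inspect the RocksDB options string BlueStore passes to rocksdb (osd.0 is illustrative)
ceph daemon osd.0 config get bluestore_rocksdb_options
# an override would go into ceph.conf under [osd] as bluestore_rocksdb_options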
11:00 AM Bug #44880: ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
I'm able to reproduce the issue locally, with a bit different symptoms though.
Anyway it looks like a test case bug rat...
Igor Fedotov
10:34 AM Bug #44924 (Resolved): High memory usage in fsck/repair
Originally this issue appeared at:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/JSUDXTQWBAPXTCLM5...
Igor Fedotov

04/02/2020

02:53 PM Bug #41901 (Pending Backport): bluestore: unused calculation is broken
Igor Fedotov

04/01/2020

07:39 PM Backport #44819 (In Progress): octopus: perf regression due to bluefs_buffered_io=true
https://github.com/ceph/ceph/pull/34353
merged
Reviewed-by: Vikhyat Umrao <vikhyat@redhat.com>
Neha Ojha
01:07 PM Bug #44880 (Resolved): ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
... Sage Weil
10:42 AM Bug #44878 (Resolved): mimic: incorrect SSD bluestore compression/allocation defaults
The current defaults for SSD are:
bluestore_compression_min_blob_size_ssd 8192
bluestore_min_alloc_size_ssd 16384...
Frank Schilder
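A hedged way to confirm what a running OSD actually uses for these settings (osd.0 is an illustrative daemon name; the values quoted in the report above may differ per release):
# check the effective SSD compression/allocation settings on one OSD (osd.0 illustrative)
ceph daemon osd.0 config get bluestore_compression_min_blob_size_ssd
ceph daemon osd.0 config get bluestore_min_alloc_size_ssd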

03/31/2020

01:53 AM Bug #44774: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
h1. How to reproduce this problem
h2. 1. modify your vstart.sh
# git diff ../src/vstart.sh
diff --git a/src/...
Honggang Yang
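For context, a minimal sketch of the command this ticket is about, assuming an offline OSD and an illustrative target device (this is not the full vstart.sh reproduction quoted above):
# attach a new WAL device to an existing BlueStore OSD (paths are illustrative)
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 \
    --dev-target /dev/ceph-wal/wal-0 --command bluefs-bdev-new-wal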

03/30/2020

06:15 PM Backport #44818 (In Progress): nautilus: perf regression due to bluefs_buffered_io=true
Vikhyat Umrao
04:50 PM Backport #44818 (Resolved): nautilus: perf regression due to bluefs_buffered_io=true
https://github.com/ceph/ceph/pull/34297 Neha Ojha
04:50 PM Backport #44819 (Resolved): octopus: perf regression due to bluefs_buffered_io=true
https://github.com/ceph/ceph/pull/34353 Neha Ojha

03/29/2020

06:38 PM Bug #44757 (Pending Backport): perf regression due to bluefs_buffered_io=true
Neha Ojha
02:19 PM Bug #41901 (Resolved): bluestore: unused calculation is broken
Kefu Chai

03/27/2020

02:38 PM Bug #44757 (Fix Under Review): perf regression due to bluefs_buffered_io=true
Neha Ojha

03/26/2020

05:26 PM Bug #44774: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
https://github.com/ceph/ceph/pull/34219 Honggang Yang
05:11 PM Bug #44774 (Resolved): ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
h1. bluefs-bdev-new-wal... Honggang Yang
02:44 PM Bug #38745: spillover that doesn't make sense
Marcin,
30-64GB is an optimal configuration. Certainly if one can afford a 250+ GB drive and hence serve L4 - it's O...
Igor Fedotov
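A rough, hedged sketch of the arithmetic behind such a figure, assuming the commonly cited BlueStore RocksDB defaults of a 256 MB level base and a x10 size multiplier (these assumptions echo the parameters recalled later in this thread, not this comment itself):
# approximate space needed to keep RocksDB levels L1-L3 on the DB volume (GB)
echo "0.25 + 2.5 + 25" | bc          # ~27.75 GB of level data
# doubling as headroom for compaction gives roughly the 56-64 GB ballpark
echo "(0.25 + 2.5 + 25) * 2" | bc    # ~55.5 GB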
08:53 AM Bug #38745: spillover that doesn't make sense
Hi Igor,
So, your recommendation is to create a volume which can serve enough space for levels 1-3, compaction and...
Marcin W
07:47 AM Bug #38745: spillover that doesn't make sense
Thanks a lot Igor. Will wait for the backport PR and will report back the results. Eneko Lacunza

03/25/2020

10:01 PM Bug #38745: spillover that doesn't make sense
In short - I'm trying to be pretty conservative when suggesting 64GB for the DB/WAL volume. Igor Fedotov
09:58 PM Bug #38745: spillover that doesn't make sense
Marcin W wrote:
> BTW. These are the default RocksDB params IIRC (in GB):
> base=256, multiplier=10, levels=5
>
...
Igor Fedotov
09:50 PM Bug #38745: spillover that doesn't make sense
Eneko,
to be honest 60GB and 64GB are pretty much the same to me. The estimate which resulted in this value did some ro...
Igor Fedotov
11:24 AM Bug #38745: spillover that doesn't make sense
Thanks @Igor
Sorry for the delay getting back, I didn't receive any email about your update from the tracker.
I...
Eneko Lacunza
05:15 PM Bug #44757 (Resolved): perf regression due to bluefs_buffered_io=true
RHBZ - https://bugzilla.redhat.com/show_bug.cgi?id=1802199
This option was enabled with the help of PR - https://git...
Vikhyat Umrao
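A hedged illustration of checking and, if needed, turning the option off (osd.0 and the osd-wide scope are assumptions; whether a restart is required depends on the release):
# check the current value on one daemon (osd.0 is illustrative)
ceph daemon osd.0 config get bluefs_buffered_io
# centrally override it for all OSDs; may need an OSD restart to take effect
ceph config set osd bluefs_buffered_io false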

03/24/2020

04:50 PM Bug #44731 (Closed): Space leak in Bluestore
Hi.
I'm experiencing some kind of a space leak in Bluestore. I use EC, compression and snapshots. First I thought ...
Vitaliy Filippov
11:56 AM Bug #38745: spillover that doesn't make sense
BTW. These are the default RocksDB params IIRC (in GB):
base=256, multiplier=10, levels=5
0= 0.25
1= 0.25
2= 2,...
Marcin W
11:47 AM Bug #38745: spillover that doesn't make sense
@Igor,
I've read somewhere in the docs that the recommended DB size is no less than 5% of block. IMO if we consider either...
Marcin W
10:00 AM Bug #38745: spillover that doesn't make sense
Eneko,
could you please attach the output of the 'ceph-kvstore-tool bluestore-kv <path-to-osd> stats' command?
I suppose...
Igor Fedotov
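For reference, a sketch of that invocation with an illustrative OSD path (the OSD should be stopped while the tool holds the store):
# dump RocksDB statistics from an offline BlueStore OSD (path is illustrative)
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 stats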
09:25 AM Bug #38745: spillover that doesn't make sense
Hi, we're seeing this issue too. Using 14.2.8 (proxmox build)
We originally had a 1GB rocks.db partition:
# ceph ...
Eneko Lacunza
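A hedged way to see whether DB data has spilled onto the slow device on a running OSD (osd.0 is illustrative; the relevant counters sit under the bluefs section of the perf dump):
# show BlueFS usage, including bytes spilled to the slow/main device (osd.0 illustrative)
ceph daemon osd.0 perf dump bluefs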
 
