Activity
From 03/19/2020 to 04/17/2020
04/17/2020
- 04:37 PM Bug #45133 (Fix Under Review): BlueStore asserting on fs upgrade tests
- 03:58 PM Bug #45133: BlueStore asserting on fs upgrade tests
- Yes indeed, when you are starting with format 2, BlueStore::_upgrade_super() invokes _prepare_ondisk_format_super() as...
- 03:10 PM Bug #45133 (Resolved): BlueStore asserting on fs upgrade tests
- See for instance http://pulpito.front.sepia.ceph.com/gregf-2020-04-15_20:54:50-fs-wip-greg-testing-415-1-distro-basic...
- 01:48 PM Backport #45122 (In Progress): octopus: OSD might fail to recover after ENOSPC crash
- https://github.com/ceph/ceph/pull/34610
- 10:26 AM Backport #45122 (Resolved): octopus: OSD might fail to recover after ENOSPC crash
- https://github.com/ceph/ceph/pull/34610
- 01:46 PM Backport #45123: nautilus: OSD might fail to recover after ENOSPC crash
- Doing a partial backport for now to work around the issue from comment #2.
- 01:46 PM Backport #45123 (In Progress): nautilus: OSD might fail to recover after ENOSPC crash
- https://github.com/ceph/ceph/pull/34611
- 11:06 AM Backport #45123: nautilus: OSD might fail to recover after ENOSPC crash
- Looks like this backport depends on backporting per-pool omap stats collection support, see
https://github.com/ceph...
- 10:27 AM Backport #45123 (Resolved): nautilus: OSD might fail to recover after ENOSPC crash
- https://github.com/ceph/ceph/pull/34611
- 12:54 PM Bug #43370: OSD crash in function bluefs::_flush_range with ceph_abort_msg "bluefs enospc"
- Gerdriaan, sorry, I missed your inquiry.
What I wanted is the output for:
ceph-bluestore-tool --path <path-to-osd> --c...
- 12:47 PM Bug #43068: on disk size (81292) does not match object info size (81237)
- It looks to me that this bug is rather out of BlueStore's scope. Am I right that it applies to CephFS files/objects only? A...
- 12:24 AM Bug #43068 (New): on disk size (81292) does not match object info size (81237)
- We got one more hit:
2020-04-16 14:21:30.139493 osd.869 (osd.869) 10192 : cluster [ERR] 6.68b shard 869 soid 6:d16f...
- 10:27 AM Backport #45127 (Resolved): octopus: Extent leak after main device expand
- https://github.com/ceph/ceph/pull/34610
- 10:27 AM Backport #45126 (Resolved): nautilus: Extent leak after main device expand
- https://github.com/ceph/ceph/pull/34711
- 10:27 AM Backport #45125 (Rejected): mimic: Extent leak after main device expand
04/16/2020
- 10:21 AM Bug #45112 (Resolved): OSD might fail to recover after ENOSPC crash
- While opening after such a crash, KV might need to flush some data and hence needs additional disk space. But allocato...
- 10:05 AM Bug #45110 (Resolved): Extent leak after main device expand
- To reproduce the issue one can expand a device of 3,147,480,064 bytes to
4,147,480,064 using bluefs-bdev-expand comman...
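A minimal sketch of that reproduction flow, assuming an LVM-backed OSD with illustrative names and size delta (not taken from the ticket); the OSD must be stopped before running the offline tool:
    # grow the underlying block device first, e.g. an LV (hypothetical name)
    lvextend -L +1G /dev/ceph-vg/osd-block-0
    # then let BlueFS pick up the extra space on the main device
    ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 bluefs-bdev-expand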
04/15/2020
04/11/2020
- 09:42 AM Backport #45064 (Resolved): nautilus: bluestore: unused calculation is broken
- https://github.com/ceph/ceph/pull/34794
- 09:42 AM Backport #45063 (Resolved): octopus: bluestore: unused calculation is broken
- https://github.com/ceph/ceph/pull/34793
- 09:42 AM Backport #45062 (Rejected): mimic: bluestore: unused calculation is broken
- 09:38 AM Backport #45045 (Resolved): nautilus: ceph-bluestore-tool --command bluefs-bdev-new-wal may damag...
- https://github.com/ceph/ceph/pull/34796
- 09:37 AM Backport #45044 (Resolved): octopus: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage...
- https://github.com/ceph/ceph/pull/34795
04/10/2020
- 08:00 PM Backport #43920 (In Progress): nautilus: common/bl: claim_append() corrupts memory when a bl cons...
- 07:57 PM Backport #43087 (In Progress): nautilus: bluefs: sync_metadata leaks dirty files if log_t is empty
- 07:55 PM Backport #38160 (In Progress): luminous: KernelDevice exclusive lock broken
- 07:24 PM Backport #41462 (In Progress): luminous: incorrect RW_IO_MAX
- 11:09 AM Bug #44937: bluestore rocksdb max_background_compactions regression in 12.2.13
- Looks like rocksdb fix [2] isn't present in mimic either. Hence setting the new default for max_background_compactions n...
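A sketch of pinning the compaction thread count via bluestore_rocksdb_options in ceph.conf; the values are illustrative, not the defaults discussed in this ticket, and note that setting this option replaces the entire default RocksDB option string rather than merging with it:
    [osd]
    bluestore_rocksdb_options = compression=kNoCompression,max_background_compactions=2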
- 04:40 AM Bug #44774 (Pending Backport): ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
04/09/2020
- 09:46 AM Backport #44818 (Resolved): nautilus: perf regression due to bluefs_buffered_io=true
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34297
m...
04/08/2020
- 10:52 PM Backport #44818: nautilus: perf regression due to bluefs_buffered_io=true
- Neha Ojha wrote:
> https://github.com/ceph/ceph/pull/34297
merged
- 06:28 AM Bug #43068: on disk size (81292) does not match object info size (81237)
- bluestore
04/07/2020
- 04:11 PM Feature #44978 (New): support "bad block" isolation
- A while ago we had a question at a meetup:
> Does BlueStore support bad block isolation?
> In FileStore if you hi...
04/06/2020
- 12:52 PM Bug #44878 (Fix Under Review): mimic: incorrect SSD bluestore compression/allocation defaults
- In fact this bug causes no real issues - it just results in some confusion for users. BlueStore has some code which ...
04/03/2020
- 09:32 PM Bug #44937: bluestore rocksdb max_background_compactions regression in 12.2.13
- I know Luminous is EOL. I opened this issue to assess the severity of the regression.
The rocksdb issue is an edge cas...
- 09:25 PM Bug #44937 (Need More Info): bluestore rocksdb max_background_compactions regression in 12.2.13
- In pr#29027 [0] Mark called out a rocksdb pr#3926 [1] where concurrent compactions can result in output files that ha...
- 11:00 AM Bug #44880: ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
- I'm able to reproduce the issue locally, though with slightly different symptoms.
Anyway, it looks like a test case bug rat...
- 10:34 AM Bug #44924 (Resolved): High memory usage in fsck/repair
- Originally this issue appeared at:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/JSUDXTQWBAPXTCLM5...
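The offline invocations this ticket refers to, assuming a standard OSD path and a stopped OSD (a deep fsck, which also reads object data, is heavier on memory than the plain metadata check):
    ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 fsck
    ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 repair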
04/02/2020
04/01/2020
- 07:39 PM Backport #44819 (In Progress): octopus: perf regression due to bluefs_buffered_io=true
- https://github.com/ceph/ceph/pull/34353
merged
Reviewed-by: Vikhyat Umrao <vikhyat@redhat.com>
- 01:07 PM Bug #44880 (Resolved): ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
- ...
- 10:42 AM Bug #44878 (Resolved): mimic: incorrect SSD bluestore compression/allocation defaults
- The current defaults for SSD are:
bluestore_compression_min_blob_size_ssd 8192
bluestore_min_alloc_size_ssd 16384...
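A quick way to check what a running OSD actually uses for these options, assuming osd.0; note that bluestore_min_alloc_size_ssd only takes effect when an OSD is created, not on restart:
    ceph daemon osd.0 config get bluestore_compression_min_blob_size_ssd
    ceph daemon osd.0 config get bluestore_min_alloc_size_ssd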
03/31/2020
- 01:53 AM Bug #44774: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
- h1. How to reproduce this problem
h2. 1. modify your vstart.sh
# git diff ../src/vstart.sh
diff --git a/src/...
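For reference, the operation this ticket is about is the offline addition of a WAL device; a generic sketch with illustrative paths, not the vstart-based reproducer above:
    ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 \
        --dev-target /dev/ceph-vg/osd-wal-0 bluefs-bdev-new-wal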
03/30/2020
- 06:15 PM Backport #44818 (In Progress): nautilus: perf regression due to bluefs_buffered_io=true
- 04:50 PM Backport #44818 (Resolved): nautilus: perf regression due to bluefs_buffered_io=true
- https://github.com/ceph/ceph/pull/34297
- 04:50 PM Backport #44819 (Resolved): octopus: perf regression due to bluefs_buffered_io=true
- https://github.com/ceph/ceph/pull/34353
03/29/2020
- 06:38 PM Bug #44757 (Pending Backport): perf regression due to bluefs_buffered_io=true
- 02:19 PM Bug #41901 (Resolved): bluestore: unused calculation is broken
03/27/2020
03/26/2020
- 05:26 PM Bug #44774: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
- https://github.com/ceph/ceph/pull/34219
- 05:11 PM Bug #44774 (Resolved): ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
- h1. bluefs-bdev-new-wal...
- 02:44 PM Bug #38745: spillover that doesn't make sense
- Marcin,
30-64GB is an optimal configuration. Certainly if one can afford a 250+ GB drive and hence serve L4 - it's O...
- 08:53 AM Bug #38745: spillover that doesn't make sense
- Hi Igor,
So, your recommendation is to create a volume with enough space for levels 1-3, compaction and...
- 07:47 AM Bug #38745: spillover that doesn't make sense
- Thanks a lot Igor. Will wait for the backport PR and will report back the results.
03/25/2020
- 10:01 PM Bug #38745: spillover that doesn't make sense
- In short - I'm trying to be pretty conservative when suggesting 64GB for the DB/WAL volume.
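A back-of-the-envelope sketch of where such an estimate comes from, assuming the default RocksDB settings quoted later in this thread (base of ~0.25 GB per level, level multiplier 10); the snippet just prints cumulative level sizes, and presumably the extra headroom for WAL and compaction is what pushes the figure from the ~28 GB L1-L3 sum toward 60+ GB:
    base=0.25; mult=10
    awk -v base="$base" -v mult="$mult" 'BEGIN {
      sum = 0
      for (l = 1; l <= 4; l++) {
        size = base * mult^(l - 1)      # size of level l in GB
        sum += size
        printf "L%d = %.2f GB, cumulative = %.2f GB\n", l, size, sum
      }
    }'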
- 09:58 PM Bug #38745: spillover that doesn't make sense
- Marcin W wrote:
> BTW. These are the default RocksDB params IIRC (in GB):
> base=256, multiplier=10, levels=5
>
...
- 09:50 PM Bug #38745: spillover that doesn't make sense
- Eneko,
to be honest, 60GB and 64GB are pretty much the same to me. The estimate which resulted in this value did some ro...
- 11:24 AM Bug #38745: spillover that doesn't make sense
- Thanks @Igor
Sorry for the delay getting back, I didn't receive any email regarding your update from tracker.
I...
- 05:15 PM Bug #44757 (Resolved): perf regression due to bluefs_buffered_io=true
- RHBZ - https://bugzilla.redhat.com/show_bug.cgi?id=1802199
This option was enabled with the help of PR - https://git...
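A minimal sketch of flipping the option discussed here back off cluster-wide via the config database (an OSD restart may be needed for the change to take effect):
    ceph config set osd bluefs_buffered_io false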
03/24/2020
- 04:50 PM Bug #44731 (Closed): Space leak in Bluestore
- Hi.
I'm experiencing some kind of a space leak in Bluestore. I use EC, compression and snapshots. First I thought ...
- 11:56 AM Bug #38745: spillover that doesn't make sense
- BTW. These are the default RocksDB params IIRC (in GB):
base=256, multiplier=10, levels=5
0= 0.25
1= 0,25
2= 2,...
- 11:47 AM Bug #38745: spillover that doesn't make sense
- @Igor,
I've read somewhere in the docs that the recommended DB size is no less than 5% of block. IMO if we consider either...
- 10:00 AM Bug #38745: spillover that doesn't make sense
- Eneko,
could you please attach the output of the 'ceph-kvstore-tool bluestore-kv <path-to-osd> stats' command?
I suppose...
- 09:25 AM Bug #38745: spillover that doesn't make sense
- Hi, we're seeing this issue too. Using 14.2.8 (proxmox build)
We originally had 1GB rocks.db partition:
# ceph ...
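For reference, commands along the lines of those requested in this thread to inspect DB usage and spillover, assuming osd.0 and a standard OSD path; the kvstore-tool command needs the OSD stopped:
    ceph daemon osd.0 perf dump | grep -E 'db_used_bytes|slow_used_bytes'
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 stats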