Activity
From 03/28/2019 to 04/26/2019
04/26/2019
- 02:35 AM Bug #39318: w_await high when rockdb compacting
- My config rocksdb....
- 01:14 AM Feature #38816: Deferred writes do not work for random writes
- master: https://github.com/ceph/ceph/pull/27789
nautilus: https://github.com/ceph/ceph/pull/27819 merged
04/25/2019
- 06:40 PM Bug #39318: w_await high when rockdb compacting
- w_await is literally how long it's taking for the device to service requests. If the latency is increasing that mean...
- 02:22 PM Bug #39318: w_await high when rockdb compacting
- This sounds like the SSD is just busy from compaction and/or client work. The w_await latency does sound a bit high ...
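For anyone reproducing the measurement: iostat's w_await is derivable from two /proc/diskstats samples (writes completed vs. milliseconds spent writing). A minimal sketch, with synthetic numbers rather than data from this cluster:

```python
# Sketch: derive iostat's w_await from two /proc/diskstats samples.
# w_await = (delta of ms spent on writes) / (delta of writes completed).
# All sample numbers below are synthetic, for illustration only.

def w_await(writes_before, write_ms_before, writes_after, write_ms_after):
    """Average write service latency (ms) over the sampling interval."""
    dw = writes_after - writes_before      # writes completed in the interval
    dt = write_ms_after - write_ms_before  # ms the device spent on them
    return dt / dw if dw else 0.0

# 500 writes that together took 150000 ms of device time -> 300 ms each,
# the kind of spike reported here during RocksDB compaction
print(w_await(10000, 2000, 10500, 152000))  # -> 300.0
```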
- 05:29 PM Bug #37282: rocksdb: submit_transaction_sync error: Corruption: block checksum mismatch code = 2
- > would it be possible to try that OSD one more time
do you mean to zap/recreate it and try again or just start it...
- 05:26 PM Bug #37282: rocksdb: submit_transaction_sync error: Corruption: block checksum mismatch code = 2
- Dan, that dump appears to have multiple errors:...
- 02:23 PM Bug #37282: rocksdb: submit_transaction_sync error: Corruption: block checksum mismatch code = 2
- Same devices (HDD for data and ssd partition for block.db) for both failures.
We have left the osd down since the ...
- 02:18 PM Bug #37282: rocksdb: submit_transaction_sync error: Corruption: block checksum mismatch code = 2
- Taking a look at this. It's interesting that it happened twice on the same device(s)... did it occur again after tha...
- 03:52 PM Feature #38816: Deferred writes do not work for random writes
- Vitaliy Filippov wrote:
> 1) Bluestore doesn't honor the max_deferred_txc parameter. It starts to flush operations a...
- 03:49 PM Feature #38816: Deferred writes do not work for random writes
- Igor Fedotov wrote:
> @Mark, thanks for the update.
> From your perf counter dumps (after?.json) one can see the fol...
- 02:44 PM Bug #36482 (In Progress): High amount of Read I/O on BlueFS/DB when listing omap keys
- 01:30 PM Bug #20236: bluestore: ObjectStore/StoreTestSpecificAUSize.Many4KWritesNoCSumTest/2 failure
- /a/sage-2019-04-24_22:14:24-rados-wip-sage2-testing-2019-04-24-0829-distro-basic-smithi/3890129...
04/24/2019
- 11:57 AM Bug #39144: ceph-bluestore-tool: bluefs-bdev-expand silently bypasses main device (slot 2)
- Adding "[needs luminous backport]" prefix to subject line so that, just in case the bug gets accidentally marked "Res...
- 11:21 AM Backport #39444: luminous: OSD crashed in BitmapAllocator::init_add_free()
- https://github.com/ceph/ceph/pull/27739
- 11:06 AM Backport #39444 (In Progress): luminous: OSD crashed in BitmapAllocator::init_add_free()
- 10:41 AM Backport #39444 (Resolved): luminous: OSD crashed in BitmapAllocator::init_add_free()
- https://github.com/ceph/ceph/pull/27739
- 11:21 AM Backport #39446: nautilus: OSD crashed in BitmapAllocator::init_add_free()
- https://github.com/ceph/ceph/pull/27740
- 11:05 AM Backport #39446 (In Progress): nautilus: OSD crashed in BitmapAllocator::init_add_free()
- 10:41 AM Backport #39446 (Resolved): nautilus: OSD crashed in BitmapAllocator::init_add_free()
- https://github.com/ceph/ceph/pull/27740
- 11:07 AM Backport #39445: mimic: OSD crashed in BitmapAllocator::init_add_free()
- https://github.com/ceph/ceph/pull/27738
- 11:05 AM Backport #39445 (In Progress): mimic: OSD crashed in BitmapAllocator::init_add_free()
- 10:41 AM Backport #39445 (Resolved): mimic: OSD crashed in BitmapAllocator::init_add_free()
- https://github.com/ceph/ceph/pull/27738
04/23/2019
04/22/2019
- 02:43 AM Bug #39318: w_await high when rockdb compacting
- Update.
My log is in the attached file.
The rocksdb of the osd compacts after 30 minutes. I think it is too fast.
when osd has t...
04/19/2019
- 01:41 PM Bug #39334 (Fix Under Review): OSD crashed in BitmapAllocator::init_add_free()
- 12:00 PM Bug #39334 (In Progress): OSD crashed in BitmapAllocator::init_add_free()
- Sage Weil wrote:
> looks like an off-by-one perhaps?
> [...]
> it looks like the code lets pos reach pos_e, or l0_...
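The suspected off-by-one can be shown in miniature. This is a hypothetical simplification, not the actual allocator code: if a scan loop lets pos reach pos_e inclusively, it touches one slot past the intended half-open range:

```python
# Hypothetical illustration of the suspected off-by-one (NOT the real
# BitmapAllocator code): scanning slots in the half-open range [pos, pos_e).

def scan_buggy(pos, pos_e):
    touched = []
    while pos <= pos_e:          # BUG: lets pos reach pos_e itself
        touched.append(pos)      # would index one slot past the range
        pos += 1
    return touched

def scan_fixed(pos, pos_e):
    touched = []
    while pos < pos_e:           # correct exclusive upper bound
        touched.append(pos)
        pos += 1
    return touched

print(scan_buggy(0, 4))   # -> [0, 1, 2, 3, 4]: slot 4 is out of range
print(scan_fixed(0, 4))   # -> [0, 1, 2, 3]
```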
04/18/2019
- 05:20 PM Bug #39334: OSD crashed in BitmapAllocator::init_add_free()
- looks like an off-by-one perhaps?...
- 05:16 PM Bug #39334: OSD crashed in BitmapAllocator::init_add_free()
- ...
- 05:15 PM Bug #39334: OSD crashed in BitmapAllocator::init_add_free()
- received updated ceph version (14.2.0-299-g167dbbf (167dbbfbf80acbc7fdcbd3d204d941d24dc6c788) nautilus (stable))
i...
- 05:10 PM Bug #39334: OSD crashed in BitmapAllocator::init_add_free()
- It does contain 7f9543c11f1d7d81162f93b996a6f95656cc1a01 (the merge commit).
04/17/2019
- 01:53 PM Bug #39334: OSD crashed in BitmapAllocator::init_add_free()
- @Alexander, thanks for the update.
Looks like you have a custom Ceph build.
At github tag v14.2.0 ends with comm...
- 11:38 AM Bug #39334: OSD crashed in BitmapAllocator::init_add_free()
- re-ran with confirmed logging level for bluestore
- 11:28 AM Bug #39103 (Fix Under Review): bitmap allocator osd make %use of osd over 100%
- 11:26 AM Backport #39348 (In Progress): nautilus: raise an alert when per pool stats aren't used
- 06:11 AM Backport #39348 (Resolved): nautilus: raise an alert when per pool stats aren't used
- https://github.com/ceph/ceph/pull/27645
04/16/2019
- 09:37 PM Feature #39146 (Pending Backport): raise an alert when per pool stats aren't used
- 09:04 PM Bug #39151 (Won't Fix): past end of block device is detected but not repaired
- The root cause of the issue is a disk size change that happened after OSD deployment
- 08:52 PM Bug #39334: OSD crashed in BitmapAllocator::init_add_free()
- @Alexander, sorry but you weren't successful in setting 'debug bluestore = 20'
see the following lines at the end of...
- 08:11 PM Bug #39334: OSD crashed in BitmapAllocator::init_add_free()
- Issue occurs when installing nautilus (ceph version 14.2.0-142-g2f9c072) on RHEL 8 using ceph-ansible. Issue occurs on e...
- 07:32 PM Bug #39334 (Need More Info): OSD crashed in BitmapAllocator::init_add_free()
- Could you please collect the log with debug bluestore = 20
Any other information about the issue:
Is it reproduc...
- 07:07 PM Bug #39334 (Resolved): OSD crashed in BitmapAllocator::init_add_free()
- ...
- 10:55 AM Bug #39143 (Resolved): ceph-bluestore-tool: bluefs-bdev-expand cmd might assert if no WAL is conf...
- 10:55 AM Backport #39253 (Resolved): nautilus: ceph-bluestore-tool: bluefs-bdev-expand cmd might assert if...
- 09:45 AM Bug #39318: w_await high when rockdb compacting
- Please update issue object to 'high w_await when rockdb compact'.
Thanks
- 09:42 AM Bug #39318 (Closed): w_await high when rockdb compacting
- Hi all.
I have a Ceph cluster where all the disks are SSDs.
I benchmarked the disks and they are OK.
Sometimes a disk's w_await goes to 300 or 10...
04/15/2019
- 08:04 PM Backport #39253: nautilus: ceph-bluestore-tool: bluefs-bdev-expand cmd might assert if no WAL is ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27523
merged
04/13/2019
- 03:06 AM Bug #39103: bitmap allocator osd make %use of osd over 100%
- I patched the code.
It works :)
Thanks.
04/12/2019
- 08:57 PM Backport #39255 (In Progress): mimic: occaionsal ObjectStore/StoreTestSpecificAUSize.Many4KWrites...
- 08:55 PM Bug #39151: past end of block device is detected but not repaired
- Daniel, mind if I close the ticket?
- 02:08 PM Bug #39151: past end of block device is detected but not repaired
- See
https://github.com/ceph/ceph/pull/27519
- 12:02 PM Bug #39151: past end of block device is detected but not repaired
- If I remember correctly the discs were in a different server back then and the hardware died. We had to move the disc...
- 10:27 AM Bug #39151: past end of block device is detected but not repaired
- Thanks, Daniel.
In the new log I can see the following lines:
2019-04-12 11:54:13.109186 7fab6fc4fe00 10 bluestore(...
- 10:00 AM Bug #39151: past end of block device is detected but not repaired
- I rebuild the original osd with problems, but there is another one that now runs into the same problem. Log of it sta...
- 08:49 PM Bug #38761 (Resolved): Bitmap allocator might fail to return contiguous chunk despite having enou...
- 08:49 PM Backport #38910 (Resolved): mimic: Bitmap allocator might fail to return contiguous chunk despite...
- 08:23 PM Backport #38910: mimic: Bitmap allocator might fail to return contiguous chunk despite having eno...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27298
merged
- 10:36 AM Bug #39103: bitmap allocator osd make %use of osd over 100%
- Unfortunately I don't use rpm in my lab. And have pretty limited expertise in this area so can hardly advise somethin...
- 10:12 AM Bug #39103: bitmap allocator osd make %use of osd over 100%
- Igor Fedotov wrote:
> Looks good to me.
>
> Suggest to insert some new logging, e.g.
> int BlueStore::_mount(bo...
04/11/2019
- 05:56 PM Bug #39151: past end of block device is detected but not repaired
- Looks like OSD was created by a release which improperly handled disk size unaligned with allocation granularity (64K...
- 04:52 PM Backport #39254 (In Progress): luminous: occaionsal ObjectStore/StoreTestSpecificAUSize.Many4KWri...
- https://github.com/ceph/ceph/pull/27529
- 04:06 PM Backport #39254 (Resolved): luminous: occaionsal ObjectStore/StoreTestSpecificAUSize.Many4KWrites...
- https://github.com/ceph/ceph/pull/27529
- 04:37 PM Backport #39253: nautilus: ceph-bluestore-tool: bluefs-bdev-expand cmd might assert if no WAL is ...
- @Igor, thanks. If you're going to be doing a lot of backporting, you might look into src/script/ceph-backport.sh (in ...
- 04:26 PM Backport #39253: nautilus: ceph-bluestore-tool: bluefs-bdev-expand cmd might assert if no WAL is ...
- Nevermind, closing my PR.
- 04:25 PM Backport #39253: nautilus: ceph-bluestore-tool: bluefs-bdev-expand cmd might assert if no WAL is ...
- Nathan, sorry if you started working on this. Missed you assigned yourself for the ticket.
-https://github.com/cep...
- 04:24 PM Backport #39253: nautilus: ceph-bluestore-tool: bluefs-bdev-expand cmd might assert if no WAL is ...
- Nathan, sorry if you started working on this. Missed you assigned yourself for the ticket.
-https://github.com/cep... - 04:04 PM Backport #39253 (In Progress): nautilus: ceph-bluestore-tool: bluefs-bdev-expand cmd might assert...
- 04:03 PM Backport #39253 (Resolved): nautilus: ceph-bluestore-tool: bluefs-bdev-expand cmd might assert if...
- https://github.com/ceph/ceph/pull/27523
- 04:33 PM Bug #39103: bitmap allocator osd make %use of osd over 100%
- Looks good to me.
Suggest to insert some new logging, e.g.
int BlueStore::_mount(bool kv_only, bool open_db)
{
...
- 03:14 PM Bug #39103: bitmap allocator osd make %use of osd over 100%
- These are my steps to build the rpm and install:
Checkout branch remotes/origin/mimic from github.com/ceph/ceph
Apply th...
- 10:25 AM Bug #39103: bitmap allocator osd make %use of osd over 100%
> Could you please double check if you applied the patch properly?
Yes, I will rebuild and recheck.
- 10:12 AM Bug #39103: bitmap allocator osd make %use of osd over 100%
- Just managed to reproduce your issue in mimic HEAD.
After applying the patch from https://github.com/ceph/ceph/pull/...
- 09:16 AM Bug #39103: bitmap allocator osd make %use of osd over 100%
- Igor Fedotov wrote:
> Could you please collect OSD startup log with 'debug bluestore' set to 20?
Yes. My log is o...
- 09:09 AM Bug #39103: bitmap allocator osd make %use of osd over 100%
- Could you please collect OSD startup log with 'debug bluestore' set to 20?
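For anyone following along, the usual way to capture such a startup log is to raise the debug level in ceph.conf before restarting the OSD (the log path below is the conventional default and is illustrative):

```ini
# /etc/ceph/ceph.conf - raise BlueStore logging before restarting the OSD
[osd]
debug bluestore = 20
; the startup log then lands in /var/log/ceph/ceph-osd.<id>.log
```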
- 02:32 AM Bug #39103: bitmap allocator osd make %use of osd over 100%
- Igor Fedotov wrote:
> Probably that's the missed backport which triggers the issue.
> See https://github.com/ceph/...
- 04:19 PM Backport #39256: nautilus: occaionsal ObjectStore/StoreTestSpecificAUSize.Many4KWritesTest/2 failure
- https://github.com/ceph/ceph/pull/27525
- 04:18 PM Backport #39256 (In Progress): nautilus: occaionsal ObjectStore/StoreTestSpecificAUSize.Many4KWri...
- 04:06 PM Backport #39256 (Resolved): nautilus: occaionsal ObjectStore/StoreTestSpecificAUSize.Many4KWrites...
- https://github.com/ceph/ceph/pull/27525
- 04:06 PM Backport #39255 (Resolved): mimic: occaionsal ObjectStore/StoreTestSpecificAUSize.Many4KWritesTes...
- https://github.com/ceph/ceph/pull/27570
- 02:04 PM Feature #39146 (Fix Under Review): raise an alert when per pool stats aren't used
- 01:34 PM Bug #21312 (Pending Backport): occaionsal ObjectStore/StoreTestSpecificAUSize.Many4KWritesTest/2 ...
- 01:24 PM Bug #39143 (Pending Backport): ceph-bluestore-tool: bluefs-bdev-expand cmd might assert if no WAL...
- 11:17 AM Backport #39246 (In Progress): mimic: os/bluestore: fix length overflow
- 11:14 AM Backport #39246 (Resolved): mimic: os/bluestore: fix length overflow
- https://github.com/ceph/ceph/pull/27366
- 11:16 AM Backport #39247 (In Progress): luminous: os/bluestore: fix length overflow
- 11:14 AM Backport #39247 (Resolved): luminous: os/bluestore: fix length overflow
- https://github.com/ceph/ceph/pull/27365
- 11:13 AM Bug #39245 (Resolved): os/bluestore: fix length overflow
- please backport https://github.com/ceph/ceph/pull/22610 to mimic and luminous
- 08:52 AM Feature #38816: Deferred writes do not work for random writes
- I think we are beginning to discuss something different from the original question, but ...
I checked the code and...
- 07:58 AM Feature #38816: Deferred writes do not work for random writes
- Igor, what about RBD? these writes are always aligned to 4K, so, as far as I understand, such writes will never be de...
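A simplified sketch of the decision being discussed, assuming (as the thread does) that only writes smaller than bluestore_prefer_deferred_size which overwrite already-allocated space take the deferred path; the threshold value and function names here are illustrative, not BlueStore's actual code or defaults:

```python
# Hypothetical simplification of BlueStore's deferred-write decision:
# an overwrite is deferred (journaled in the WAL first) only when it is
# smaller than bluestore_prefer_deferred_size AND hits allocated space.

PREFER_DEFERRED_SIZE = 32 * 1024   # illustrative threshold, not a Ceph default

def is_deferred(length, overwrites_allocated_space):
    return overwrites_allocated_space and length < PREFER_DEFERRED_SIZE

print(is_deferred(4096, True))     # small overwrite -> deferred
print(is_deferred(4096, False))    # freshly allocated extent -> written directly
print(is_deferred(65536, True))    # large overwrite -> written directly
```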
04/10/2019
- 10:42 PM Backport #37991: luminous: Compression not working, and when applied OSD disks are failing randomly
- first attempted backport https://github.com/ceph/ceph/pull/26025 was closed
- 01:08 PM Bug #39151: past end of block device is detected but not repaired
- Before finding the kvstore solution I started developing a fix that is mostly finished. Not so sure about the proper ...
- 01:03 PM Bug #39151: past end of block device is detected but not repaired
- FYI I was able to repair the osd by pruning the transaction log. There was only one transaction in the log, so not mu...
04/09/2019
- 06:37 PM Bug #39151: past end of block device is detected but not repaired
- Definitely luminous, but some low patch-level number, maybe ~12.2.6
- 01:06 PM Bug #39151: past end of block device is detected but not repaired
- @Daniel, wondering if you remember what was the initial Ceph version this cluster (or this specific OSD) was created by?
- 01:38 PM Feature #38816: Deferred writes do not work for random writes
- @Mark, thanks for the update.
From your perf counter dumps (after?.json) one can see the following small (~4K) write ...
- 09:46 AM Bug #37282: rocksdb: submit_transaction_sync error: Corruption: block checksum mismatch code = 2
- We just saw this on an osd (block.db on ssd, data on hdd). OSD is from a cephfs cluster running 12.2.11.
We're act...
04/08/2019
- 11:52 PM Bug #39151: past end of block device is detected but not repaired
- http://files.poelzi.org/ceph/ceph-repair.log repair log
- 11:48 PM Bug #39151 (Won't Fix): past end of block device is detected but not repaired
- While reconfiguring our cluster we ran into the issue that bluestore somehow scheduled a transaction beyond the end of...
- 07:28 PM Feature #39146 (Resolved): raise an alert when per pool stats aren't used
- One can get this after an upgrade. Even worse if new OSDs having per-pool stats are added, which results in a mess in sta...
- 07:23 PM Bug #37839 (Resolved): Compression not working, and when applied OSD disks are failing randomly
- 07:22 PM Backport #37991 (Resolved): luminous: Compression not working, and when applied OSD disks are fai...
- https://github.com/ceph/ceph/pull/26544
- 07:20 PM Bug #39143 (Fix Under Review): ceph-bluestore-tool: bluefs-bdev-expand cmd might assert if no WAL...
- 06:23 PM Bug #39143 (Resolved): ceph-bluestore-tool: bluefs-bdev-expand cmd might assert if no WAL is conf...
- There is no check for device presence when enumerating devices in BlueStore::expand_devices
- 07:13 PM Bug #39144 (Fix Under Review): ceph-bluestore-tool: bluefs-bdev-expand silently bypasses main dev...
- 06:50 PM Bug #39144 (Resolved): ceph-bluestore-tool: bluefs-bdev-expand silently bypasses main device (slo...
- Pre-nautilus releases are unable to expand the main device, but they do not report that, hence the user thinks expansion ...
- 05:39 PM Backport #38910: mimic: Bitmap allocator might fail to return contiguous chunk despite having eno...
- https://github.com/ceph/ceph/pull/27298
- 05:18 PM Bug #21312 (Fix Under Review): occaionsal ObjectStore/StoreTestSpecificAUSize.Many4KWritesTest/2 ...
- The root cause seems to be the lack of a proper fault_range() call in do_write_small(). It might happen that write reque...
- 12:10 PM Bug #38760: BlueFS might request more space from slow device than is actually needed
- Igor writes: [luminous and mimic backports] not needed since we don't have explicit space allocation from BlueFS in m...
- 12:09 PM Bug #38760 (Resolved): BlueFS might request more space from slow device than is actually needed
04/07/2019
- 07:51 PM Feature #38816: Deferred writes do not work for random writes
- compare:
before1.json with after1.json - with debug turned off
before2.json with after2.json - with debug turned on
- 07:46 PM Feature #38816: Deferred writes do not work for random writes
- ceph config dump fragment:...
- 07:33 PM Feature #38816: Deferred writes do not work for random writes
- Igor, I have done what you want. During OSD log inspection, take into account mtime of attached JSON files.
04/05/2019
- 12:48 PM Bug #38554 (Duplicate): ObjectStore/StoreTestSpecificAUSize.TooManyBlobsTest/2 fail, Expected: (r...
- https://tracker.ceph.com/issues/21312
- 09:44 AM Bug #38554: ObjectStore/StoreTestSpecificAUSize.TooManyBlobsTest/2 fail, Expected: (res_stat.allo...
- I managed to reproduce this in Luminous by running the test case multiple (~20) times.
- 03:14 AM Bug #39103: bitmap allocator osd make %use of osd over 100%
- Igor Fedotov wrote:
> @hoan nv, could you please apply the patch when it's available and report back if it helps
...
04/04/2019
- 10:32 AM Bug #38250 (Rejected): assert failure crash prevents ceph-osd from running
- 10:31 AM Bug #38250: assert failure crash prevents ceph-osd from running
- @Adam - IMO you can proceed with this disk usage but you should redeploy OSD over it. The rationale - some data is co...
- 10:22 AM Bug #38065 (Resolved): deep fsck fails on inspecting very large onodes
- 10:22 AM Backport #38188 (Resolved): luminous: deep fsck fails on inspecting very large onodes
- 10:19 AM Bug #38795 (Fix Under Review): fsck on mkfs breaks ObjectStore/StoreTestSpecificAUSize.BlobReuseO...
- https://github.com/ceph/ceph/pull/27055
- 09:50 AM Bug #39103 (Need More Info): bitmap allocator osd make %use of osd over 100%
- @hoan nv, could you please apply the patch when it's available and report back if it helps
- 09:49 AM Bug #39103: bitmap allocator osd make %use of osd over 100%
- Probably that's the missed backport which triggers the issue.
See https://github.com/ceph/ceph/pull/27366
- 09:02 AM Bug #39103 (Resolved): bitmap allocator osd make %use of osd over 100%
- after pull request https://github.com/ceph/ceph/pull/26983 is merged.
I install package...
- 09:46 AM Backport #38911 (Resolved): luminous: Bitmap allocator might fail to return contiguous chunk desp...
- 09:29 AM Bug #24598 (Resolved): ceph_test_objecstore: bluefs mount fail with overlapping op_alloc_add
- 09:28 AM Backport #38778 (Resolved): luminous: ceph_test_objecstore: bluefs mount fail with overlapping op...
04/03/2019
- 08:39 PM Bug #39097 (Need More Info): _verify_csum bad crc32c/0x10000 checksum at blob offset 0x0, got 0x4...
- No bluestore logs. The object was written, and a minute later read with a CRC error.
written here:...
- 07:32 PM Bug #39097: _verify_csum bad crc32c/0x10000 checksum at blob offset 0x0, got 0x478682d5, expected...
- ...
- 07:30 PM Bug #39097 (Won't Fix): _verify_csum bad crc32c/0x10000 checksum at blob offset 0x0, got 0x478682...
- ...
- 08:22 PM Backport #38911: luminous: Bitmap allocator might fail to return contiguous chunk despite having ...
- https://github.com/ceph/ceph/pull/27312 merged
- 02:39 PM Bug #22796 (Resolved): bluestore gets to ENOSPC with small devices
- With Igor's recent changes, we no longer rely on the slow feedback, so I think we can close this now.
04/02/2019
- 11:14 AM Backport #38911 (In Progress): luminous: Bitmap allocator might fail to return contiguous chunk d...
- 09:48 AM Bug #38738: ceph ssd osd latency increase over time, until restart
- The solution is to switch to new bitmap allocator that has been merged into both luminous and mimic.
04/01/2019
- 09:47 PM Backport #38188: luminous: deep fsck fails on inspecting very large onodes
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26387
merged
- 08:53 PM Bug #38738 (Resolved): ceph ssd osd latency increase over time, until restart
- 08:41 PM Backport #38778: luminous: ceph_test_objecstore: bluefs mount fail with overlapping op_alloc_add
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26979
merged
- 06:58 PM Bug #25098 (Fix Under Review): Bluestore OSD failed to start with `bluefs_types.h: 54: FAILED ass...
- https://github.com/ceph/ceph/pull/27300
- 05:18 PM Backport #38910 (In Progress): mimic: Bitmap allocator might fail to return contiguous chunk desp...
- 04:26 PM Backport #38914 (Rejected): luminous: BlueFS might request more space from slow device than is ac...
- This is not needed since we don't have explicit space allocation from BlueFS in mimic/luminous
- 04:26 PM Backport #38913 (Rejected): mimic: BlueFS might request more space from slow device than is actua...
- This is not needed since we don't have explicit space allocation from BlueFS in mimic/luminous
- 02:22 PM Backport #38779 (Resolved): mimic: ceph_test_objecstore: bluefs mount fail with overlapping op_al...
- 01:52 PM Backport #38779: mimic: ceph_test_objecstore: bluefs mount fail with overlapping op_alloc_add
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26983
merged
03/30/2019
- 04:27 PM Backport #38915 (Resolved): nautilus: BlueFS might request more space from slow device than is ac...
- 01:35 PM Backport #38912 (Resolved): nautilus: Bitmap allocator might fail to return contiguous chunk desp...
03/29/2019
- 02:29 PM Bug #38176: Unable to recover from ENOSPC in BlueFS, WAL
- Solution proposal: https://github.com/ceph/ceph/pull/26276
It was decided not to implement it at this stage.
03/28/2019
- 03:35 PM Bug #38489: bluestore_prefer_deferred_size_hdd units are not clear
- But wait! documentation is still not fixed!
- 02:08 PM Bug #38489 (Resolved): bluestore_prefer_deferred_size_hdd units are not clear
- 02:26 PM Bug #38176 (Won't Fix): Unable to recover from ENOSPC in BlueFS, WAL
- We decided to not fix this.
- 02:22 PM Bug #38559 (Fix Under Review): 50-100% iops lost due to bluefs_preextend_wal_files = false
- https://github.com/ceph/ceph/pull/26909
- 02:20 PM Bug #38637 (Need More Info): BlueStore::ExtentMap::fault_range() assert
- 02:18 PM Bug #38738 (In Progress): ceph ssd osd latency increase over time, until restart
- 02:17 PM Feature #38816 (Need More Info): Deferred writes do not work for random writes
- 02:13 PM Bug #37282: rocksdb: submit_transaction_sync error: Corruption: block checksum mismatch code = 2
- Keeping "needs more info" state.
- 11:08 AM Backport #38586 (Resolved): luminous: OSD crashes in get_str_map while creating with ceph-volume
- 02:04 AM Bug #38745: spillover that doesn't make sense
- 256 MB + 2.56 GB + 25.6 GB ≈ 28-29 GB, for default Luminous options.
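The arithmetic in the comment above is just the sum of the three cited RocksDB level capacities (256 MB base with a 10x multiplier per level); a quick check:

```python
# Sum of the RocksDB level capacities cited for default Luminous options:
# 256 MB base size with a 10x size multiplier per level.
levels_gb = [0.256, 2.56, 25.6]
total = sum(levels_gb)
print(round(total, 2))  # -> 28.42, i.e. the ~28-29 GB quoted above
```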