50-100% iops lost due to bluefs_preextend_wal_files = false
I was investigating why RocksDB performance is so bad in terms of random 4K iops. I was looking at strace, and one thing caught my eye: the OSD was doing TWO transactions for each small (deferred) incoming write. In both mimic and nautilus the strace looks like the following:
There are repeated groups of operations issued by `bstore_kv_sync`:
- pwritev() of 8-12 KB at offset=1440402997248; offsets always increase
- sync_file_range() over the 8-12 KB just written
- io_submit(op=pwritev, iov_len=4 KB, aio_offset=1455403167744); these offsets differ from the first step, but also increase by 4 KB with each write
After every 64 such groups there come several io_submit calls from `bstore_kv_final` - this is obviously the application of the deferred writes, and the first pwritev is obviously the RocksDB WAL append.
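To make the pattern concrete, here is a minimal C sketch of the three syscalls strace shows per deferred write - the RocksDB WAL append, the range sync, and the extra 4 KB AIO write. This is not Ceph code; the file name, buffer sizes and offsets are made up, and it needs libaio (`gcc pattern.c -laio`):

```c
#define _GNU_SOURCE           /* for sync_file_range() and pwritev() */
#include <fcntl.h>
#include <libaio.h>           /* io_setup / io_prep_pwrite / io_submit */
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
    /* toy stand-ins for the RocksDB WAL record and the BlueFS WAL record */
    static char rocksdb_wal_rec[8192];
    static char bluefs_wal_rec[4096];
    memset(rocksdb_wal_rec, 0xab, sizeof(rocksdb_wal_rec));
    memset(bluefs_wal_rec, 0xcd, sizeof(bluefs_wal_rec));

    int fd = open("block.img", O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return 1;

    io_context_t ctx = 0;
    if (io_setup(16, &ctx) < 0)
        return 1;

    off_t rocksdb_wal_off = 0;        /* in the real trace: ~1.44 TB, always growing */
    off_t bluefs_wal_off  = 1 << 20;  /* in the real trace: ~1.45 TB, +4 KB per write */

    /* 1) append the RocksDB WAL record (the 8-12 KB pwritev) */
    struct iovec iov = { .iov_base = rocksdb_wal_rec, .iov_len = sizeof(rocksdb_wal_rec) };
    pwritev(fd, &iov, 1, rocksdb_wal_off);

    /* 2) start writeback of exactly the range just written; with only
     *    SYNC_FILE_RANGE_WRITE this returns before anything is durable */
    sync_file_range(fd, rocksdb_wal_off, sizeof(rocksdb_wal_rec), SYNC_FILE_RANGE_WRITE);

    /* 3) the second, 4 KB AIO write at a different offset: the BlueFS WAL
     *    entry that merely records the RocksDB WAL's new size and mtime */
    struct iocb cb;
    struct iocb *cbs[1] = { &cb };
    io_prep_pwrite(&cb, fd, bluefs_wal_rec, sizeof(bluefs_wal_rec), bluefs_wal_off);
    io_submit(ctx, 1, cbs);

    struct io_event ev;
    io_getevents(ctx, 1, 1, &ev, NULL);
    io_destroy(ctx);
    close(fd);
    return 0;
}
```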
But what's the remaining io_submit?
It is BlueFS's WAL! And all it seems to do is record the increased size of the RocksDB WAL file and update its modification time. So we again have a "journaling of the journal"-like issue, as in the old days with filestore :)
Then I found the "bluefs_preextend_wal_files" option, and yes, setting it to true disables this behaviour, and random iops increase by 50-100% depending on the workload. But it corrupts RocksDB when the OSD is shut down uncleanly. That is https://tracker.ceph.com/issues/18338, which I easily reproduced by starting a single OSD locally and writing into it with "fio -ioengine=rbd -direct=1 -invalidate=1 -name=test -bs=4M -iodepth=16 -rw=write -pool=bench -rbdname=testimg".
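For intuition on why pre-extending removes the extra write, here is a rough POSIX analogy (my own sketch, not BlueFS code; file names and sizes are arbitrary). When every WAL append grows the file, the new size is metadata that also has to be persisted - that is the role the extra BlueFS WAL write plays. If the file is grown to its final size once, up front, later appends only touch data:

```c
#define _GNU_SOURCE            /* for fallocate() */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Appending past EOF: the data write plus the inode size change both have to
 * reach stable storage, so metadata is persisted for every append - the
 * analogue of BlueFS logging the RocksDB WAL's growing size. */
static void append_growing(int fd, off_t off, const char *buf, size_t len)
{
    pwrite(fd, buf, len, off);
    fsync(fd);                 /* flushes data and the changed file size */
}

/* If the file was already extended to its final size, the append stays inside
 * the existing allocation and there is no size change left to persist - the
 * analogue of bluefs_preextend_wal_files=true. */
static void append_preextended(int fd, off_t off, const char *buf, size_t len)
{
    pwrite(fd, buf, len, off);
    fdatasync(fd);             /* data only; no metadata update needed */
}

int main(void)
{
    char rec[4096];
    memset(rec, 0xab, sizeof(rec));

    int fd = open("wal.img", O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return 1;

    append_growing(fd, 0, rec, sizeof(rec));

    /* pre-extend once to the WAL's full size (64 MB here, arbitrary) */
    if (fallocate(fd, 0, 0, 64 << 20) != 0)
        ftruncate(fd, 64 << 20);

    append_preextended(fd, 4096, rec, sizeof(rec));
    close(fd);
    return 0;
}
```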
I think this is REALLY ugly. Bluestore is just wasting 1/3 to 1/2 of its random iops performance. It must be fixed :)
#4 Updated by Vitaliy Filippov 6 months ago
Yes, I've thought of that, but I haven't tested it... However, that would be rather strange. Who does the fsync if BlueFS isn't writing to the WALs? sync_file_range is called with unsafe flags, so it must be something else...
In fact I've submitted a pull request here https://github.com/ceph/ceph/pull/26870
It adds the wait flags to sync_file_range, which makes it safe and thus makes it possible to enable bluefs_preextend_wal_files. I see +100% iops on HDD setups with this patch and bluefs_preextend_wal_files set to true.
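For reference, this is roughly the flag difference the patch is about - a sketch based on the sync_file_range(2) man page, not the actual diff from the PR, and flush_wal_range is a hypothetical helper name:

```c
#define _GNU_SOURCE            /* for sync_file_range() */
#include <fcntl.h>
#include <unistd.h>

/* Start writeback of a WAL range, optionally waiting for it to complete. */
static void flush_wal_range(int fd, off_t off, off_t len, int wait)
{
    if (!wait) {
        /* the "unsafe" variant: only kicks off writeback of dirty pages in the
         * range and returns immediately - nothing is guaranteed durable yet */
        sync_file_range(fd, off, len, SYNC_FILE_RANGE_WRITE);
    } else {
        /* with the wait flags the call blocks until writeback of the range has
         * finished; per sync_file_range(2) it still flushes neither file
         * metadata nor the drive's volatile write cache */
        sync_file_range(fd, off, len,
                        SYNC_FILE_RANGE_WAIT_BEFORE |
                        SYNC_FILE_RANGE_WRITE |
                        SYNC_FILE_RANGE_WAIT_AFTER);
    }
}

int main(void)
{
    int fd = open("wal.img", O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return 1;
    flush_wal_range(fd, 0, 4096, 1);
    close(fd);
    return 0;
}
```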