limit XFS extent fragmentation for rbd
A user with the handle pmjdebruijn was asking earlier today on the freenode #xfs channel about XFS extent fragmentation caused by ceph writes. We should look into this for testing at some point.
We can use xfs_bmap to examine the extent ranges. Sample output from the user is here:
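A minimal sketch of how to check the extent count; the file path is illustrative (substitute a real object file from the OSD's filestore data directory):

```shell
# -v prints one row per extent, so the row count is the extent count.
# A badly fragmented file shows many small extents instead of 2-3 large ones.
xfs_bmap -v /var/lib/ceph/osd/ceph-0/current/0.1_head/someobject
```

Running this across a sample of object files would show whether the fragmentation the user reported is typical.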
Suggestions from Christoph Hellwig are to use 512-byte inodes and to set rbd_writeback_window to a large value. Sounds like there should typically be no more than 2-3 extent ranges for files of that size, and the user is seeing 8.
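A hedged sketch of applying both suggestions; the device name and the window size are illustrative assumptions, not values from this report:

```shell
# 512-byte inodes (default is 256) leave more room in the inode for
# inline extent records before spilling to a btree. Must be set at
# mkfs time; /dev/sdb1 is a placeholder device.
mkfs.xfs -i size=512 /dev/sdb1
```

The rbd writeback window would go in ceph.conf on the client side; 8 MB here is just an example value:

```
[client]
    rbd writeback window = 8388608
```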
#1 Updated by Sage Weil almost 12 years ago
Hmm, I don't think rbd_writeback_window will help much here; it doesn't affect the size of the IOs sent to rados.
Try setting 'filestore flusher = false' in ceph.conf for the [osd] section and see if that helps. You'll probably see latency go up periodically, but if the fragmentation improves we at least know what's going on.
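The suggested ceph.conf change, as stated above:

```
[osd]
    filestore flusher = false
```

The OSDs need a restart for the setting to take effect; comparing xfs_bmap output before and after should show whether the flusher is the cause of the fragmentation.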