Bug #36317 (closed): fallocate implementation on the kernel cephfs client

Added by Luis Henriques over 5 years ago. Updated over 5 years ago.

Status: Resolved
Priority: Normal
Assignee:
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Crash signature (v1):
Crash signature (v2):

Description

I remember seeing a comment somewhere (mailing list?) about this, but I couldn't find any reference to the issue, so I decided to open a bug.

The problem: fallocate doesn't seem to do what it's supposed to do. I haven't been able to spend time looking at the code to understand the details, but here's a summary of the issue, reproduced on a very small test cluster:

node5:~ # df -Th /mnt
Filesystem        Type  Size  Used Avail Use% Mounted on
192.168.122.101:/ ceph   14G  228M   14G   2% /mnt

So, I have ~14G available, and I fallocate a file much bigger than that:

node5:/mnt # xfs_io -f -c "falloc 0 1T" hugefile
node5:/mnt # ls -lh
total 1.0T
-rw------- 1 root root 1.0T Oct  4 14:17 hugefile
drwxr-xr-x 2 root root    6 Oct  4 14:17 mydir

I would expect this to fail. Also, the available space reported by df hasn't changed:
node5:/mnt # df -Th /mnt
Filesystem        Type  Size  Used Avail Use% Mounted on
192.168.122.101:/ ceph   14G  228M   14G   2% /mnt

Anyway, fallocate(2) says that after a successful call, "subsequent writes into the range specified by offset and len are guaranteed not to fail because of lack of disk space". That clearly isn't going to be the case in the example above.
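Just to illustrate the semantics I'm referring to, here's a minimal user-space test program (file name and sizes are arbitrary examples, not taken from the cluster above):

/*
 * Illustration of the fallocate(2) contract quoted above: after a
 * successful fallocate(), writes within [offset, offset+len) must not
 * fail with ENOSPC.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
	const off_t len = 1024L * 1024 * 1024;	/* 1 GiB, example size */
	int fd = open("testfile", O_CREAT | O_RDWR, 0600);
	char buf[4096];

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Reserve the range; mode 0 means "allocate and extend the file". */
	if (fallocate(fd, 0, 0, len) < 0) {
		perror("fallocate");	/* ENOSPC here would be the correct failure */
		close(fd);
		return 1;
	}

	/*
	 * Per the man page, this write is not allowed to fail with ENOSPC,
	 * because the space was reserved by the successful fallocate() above.
	 */
	memset(buf, 0xab, sizeof(buf));
	if (pwrite(fd, buf, sizeof(buf), len - sizeof(buf)) < 0)
		perror("pwrite");

	close(fd);
	return 0;
}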

I guess a fix for this would require sending a CEPH_MSG_STATFS request to the monitors to get the actual free space before accepting the allocation. But, as I said, I haven't spent too much time looking at the problem.
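To make the idea concrete, here's a rough user-space sketch of the kind of check I have in mind (the check_free_space helper is made up for illustration, and the kernel client would get the numbers from the monitors' statfs reply rather than from statfs(2); this is not the actual cephfs code):

/*
 * Sketch: reject an allocation that is larger than the free space the
 * filesystem currently reports.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/statfs.h>

/* Return 0 if 'len' bytes look allocatable on the fs backing 'path',
 * -ENOSPC if not, or -errno on a statfs failure. */
static int check_free_space(const char *path, uint64_t len)
{
	struct statfs st;

	if (statfs(path, &st) < 0)
		return -errno;

	if ((uint64_t)st.f_bavail * (uint64_t)st.f_bsize < len)
		return -ENOSPC;

	return 0;
}

int main(void)
{
	/* 1 TiB, matching the xfs_io example above. */
	uint64_t len = 1ULL << 40;
	int ret = check_free_space("/mnt", len);

	printf("check_free_space: %d%s\n", ret,
	       ret == -ENOSPC ? " (would reject the fallocate)" : "");
	return 0;
}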
