Bug #5524
df shows incorrect disk usage/size for cephfs mount
Status: Closed
% Done: 0%
Description
After mounting a cephfs volume (ceph version 0.61.4) and filling it with 11 GB of .wav files, I used 'df' to list the mount points:
Filesystem           Size  Used  Avail  Use%  Mounted on
/dev/vda1             24G  1.2G    22G    6%  /
none                 4.0K     0   4.0K    0%  /sys/fs/cgroup
udev                 7.9G  4.0K   7.9G    1%  /dev
tmpfs                1.6G  260K   1.6G    1%  /run
none                 5.0M     0   5.0M    0%  /run/lock
none                 7.9G     0   7.9G    0%  /run/shm
none                 100M     0   100M    0%  /run/user
172.31.2.103:6789:/  852M   83M   769M   10%  /mnt/ceph
The '/mnt/ceph' mount shows a size of only 852M, with 83M used.
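For context, df takes these numbers from the statfs(2)/statvfs(3) call on the mount point, so the bad size means the cephfs kernel client is returning incorrect statfs data rather than df miscounting. A minimal Python sketch that reads the same fields df uses (the mount path is the one from the report above):

import os

# df's Size/Used/Avail come straight from statvfs on the mount point,
# so this prints whatever the cephfs kernel client reports.
st = os.statvfs("/mnt/ceph")

block = st.f_frsize                         # fundamental block size
size  = st.f_blocks * block                 # what df shows as Size
used  = (st.f_blocks - st.f_bfree) * block  # what df shows as Used
avail = st.f_bavail * block                 # what df shows as Avail

print(f"size:  {size  / 2**30:.2f} GiB")
print(f"used:  {used  / 2**30:.2f} GiB")
print(f"avail: {avail / 2**30:.2f} GiB")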
Using 'ceph df', however, we see the correct size for the data pool:
GLOBAL:
    SIZE   AVAIL   RAW USED   %RAW USED
    212G   192G    21237M     9.74
POOLS:
    NAME           ID   USED     %USED   OBJECTS
    data           0    10557M   4.84    2653
    metadata       1    1296K    0       21
    rbd            2    0        0       0
    .rgw           3    0        0       0
    .rgw.gc        4    0        0       32
    .rgw.control   5    0        0       8
    .users.uid     6    272      0       1
    .users.email   7    10       0       1
    .users         8    10       0       1
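As a sanity check, the GLOBAL raw usage lines up with the data pool's usage multiplied by the replication factor. A quick sketch, assuming the pre-Firefly default of 2 replicas (the actual pool size is not shown in the report):

# Raw used should be roughly pool used times the replication factor.
pool_used_mb = 10557   # 'data' pool USED from ceph df above
replicas     = 2       # assumed default; not shown in the report
print(pool_used_mb * replicas)   # 21114M -- close to the 21237M RAW USED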
Using 'du' also shows the correct amount of data:
root@ceph4:~# du -shc /mnt/ceph/
11G /mnt/ceph/
11G total
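Unlike df, du walks the tree and sums per-file allocation itself, bypassing statfs entirely, which is why it sees the true 11G. A rough Python equivalent of 'du -s' (simplified: real du also counts directories and deduplicates hard links):

import os

# Sum allocated blocks for every file under the mount point.
# st_blocks is in 512-byte units on Linux.
total = 0
for dirpath, dirnames, filenames in os.walk("/mnt/ceph"):
    for name in filenames:
        try:
            total += os.lstat(os.path.join(dirpath, name)).st_blocks * 512
        except OSError:
            pass  # file disappeared during the walk

print(f"{total / 2**30:.1f}G total")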
Updated by Sage Weil almost 11 years ago
What kernel version?
This was fixed in 3.9 or so.
Updated by Shain Miley almost 11 years ago
I am using '3.8.0-19-generic' on Ubuntu 13.04.
If this was fixed in 3.9, then it is not really an issue going forward. Thanks a lot for the update.
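For anyone else landing here, a quick check of the running kernel against the 3.9 threshold mentioned above (a rough sketch; per the comment the exact fix version is approximate):

import platform

# e.g. '3.8.0-19-generic' -> (3, 8)
major, minor = map(int, platform.release().split(".")[:2])
if (major, minor) >= (3, 9):
    print("kernel >= 3.9; cephfs statfs fix should be present")
else:
    print("kernel < 3.9; df may misreport cephfs size")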