mds: recursive statistics are either inaccurate or too "chunky"
Thanks Wido for this very interesting (and very simple) feature. But does it work well? I am running Hammer on Ubuntu Trusty cluster nodes, with an Ubuntu Trusty client on a 3.16 kernel and CephFS mounted via the kernel client, and I see this:

    ~# mount | grep cephfs   # /mnt is my mounted cephfs
    10.0.2.150,10.0.2.151,10.0.2.152:/ on /mnt type ceph (noacl,name=cephfs,key=client.cephfs)
    ~# ls -lah /mnt/dir1/
    total 0
    drwxr-xr-x 1 root root  96M May 12 21:06 .
    drwxr-xr-x 1 root root 103M May 17 23:56 ..
    drwxr-xr-x 1 root root  96M May 12 21:06 8
    drwxr-xr-x 1 root root 4.0M May 17 23:57 test

As you can see:

    /mnt/dir1/8/    => 96M
    /mnt/dir1/test/ => 4.0M

But /mnt/dir1/ (i.e. ".") is also reported as 96M. I would expect:

    size("/mnt/dir1/") = size("/mnt/dir1/8/") + size("/mnt/dir1/test/")
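The invariant the report expects can be sketched on a plain local filesystem (this is an illustration only, not CephFS code; the `rbytes` helper below is a hypothetical stand-in for the recursive byte count that CephFS exposes as the `ceph.dir.rbytes` virtual xattr): a directory's recursive size should equal the sum of its children's recursive sizes.

```python
# A minimal sketch of the expected recursive-statistics invariant,
# using an ordinary temporary directory instead of a CephFS mount.
import os
import tempfile

def rbytes(path):
    """Recursive byte count of all regular files under 'path'
    (a stand-in for what CephFS reports as ceph.dir.rbytes)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

with tempfile.TemporaryDirectory() as d:
    # Mimic the layout from the report: two subdirectories.
    os.makedirs(os.path.join(d, "8"))
    os.makedirs(os.path.join(d, "test"))
    with open(os.path.join(d, "8", "a"), "wb") as f:
        f.write(b"x" * 1000)
    with open(os.path.join(d, "test", "b"), "wb") as f:
        f.write(b"y" * 500)

    parent_total = rbytes(d)
    children_total = sum(rbytes(os.path.join(d, e)) for e in os.listdir(d))

# On a correct implementation the two totals agree: 1500 == 1000 + 500.
print(parent_total, children_total)
```

The report above shows exactly this check failing on the kernel client: the parent's reported size (96M) is smaller than the sum of its children (96M + 4.0M).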
I suspect this is a result of changing (one of?) our block sizes to 4 MB for the stat results, but I'm not sure whether it's being misinterpreted by this particular stack or whether we're now stuck with all stacks reporting values in 4 MB multiples. (And is that for every file, or just for directories?)
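One way a coarse 4 MB reporting granularity could produce a mismatch like the one above: quantized sizes are not additive, so summing per-entry reported values need not match the parent's reported value. This is a hypothetical illustration of that arithmetic, not Ceph code; the truncation rule and the sample sizes are assumptions chosen to make the effect visible.

```python
# Hypothetical illustration: if each recursive size is reported
# truncated to 4 MB multiples, per-entry values stop being additive.
MB = 1024 * 1024
GRANULARITY = 4 * MB  # the suspected 4 MB stat block size

def reported(nbytes):
    """Size as it would appear if truncated to 4 MB multiples."""
    return (nbytes // GRANULARITY) * GRANULARITY

# Assumed true recursive byte counts for two subdirectories.
children = [6 * MB, 6 * MB]
parent = sum(children)  # true sizes are additive: 12 MB

shown_children = sum(reported(c) for c in children)  # 4 MB + 4 MB = 8 MB
shown_parent = reported(parent)                      # 12 MB

# Each truncation can discard up to GRANULARITY - 1 bytes per entry,
# so the displayed parent and the displayed children's sum disagree.
print(shown_children, shown_parent)
```

Whether the kernel client, the MDS, or the stat translation layer applies such rounding is exactly the open question in the comment above.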