Bug #50271 (closed)

VMware ESXi NFS client cannot create thin provisioned vmdk files

Added by Robert Toole about 3 years ago. Updated about 3 years ago.

Status: Won't Fix
Priority: Normal
Assignee: Jeff Layton
Category: -
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Component(FS): -
Labels (FS): -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

I have a 3-node Ceph Octopus 15.2.7 cluster running on fully up-to-date CentOS 7 with nfs-ganesha 3.5.

After following the Ceph install guide https://docs.ceph.com/en/octopus/cephadm/install/#deploying-nfs-ganesha I am able to create an NFS 4.1 datastore in VMware using the IP addresses of all three nodes. Everything appears to work OK.

The issue, however, is that for some reason ESXi creates thick provisioned, eager zeroed disks instead of thin provisioned disks on this datastore, whether I am migrating, cloning, or creating new VMs. Even running vmkfstools -i disk.vmdk -d thin thin_disk.vmdk still results in a thick, eager zeroed vmdk file.

This should not be possible on an NFS datastore, because VMware requires a VAAI NAS plugin before it can thick provision disks over NFS.

Linux clients mounting the same datastore can create thin qcow2 images, and when you look from the Linux hosts at the images created by ESXi, you can see that the vmdks are indeed thick:

ls -lsh
total 81G
 512 -rw-r--r--. 1 root root  230 Mar 25 15:17 test_vm-2221e939.hlog
 40G -rw-------. 1 root root  40G Mar 25 15:17 test_vm-flat.vmdk
 40G -rw-------. 1 root root  40G Mar 25 15:56 test_vm_thin-flat.vmdk
 512 -rw-------. 1 root root  501 Mar 25 15:57 test_vm_thin.vmdk
 512 -rw-------. 1 root root  473 Mar 25 15:17 test_vm.vmdk
   0 -rw-r--r--. 1 root root    0 Jan  6  1970 test_vm.vmsd
2.0K -rwxr-xr-x. 1 root root 2.0K Mar 25 15:17 test_vm.vmx

but the qcow2 files from the Linux hosts are thin, as one would expect:

qemu-img create -f qcow2 big_disk_2.img 500G

ls -lsh

total 401K
200K -rw-r--r--. 1 root root 200K Mar 25 15:47 big_disk_2.img
200K -rw-r--r--. 1 root root 200K Mar 25 15:44 big_disk.img
 512 drwxr-xr-x. 2 root root  81G Mar 25 15:57 test_vm

These ls -lsh results are the same from ESXi, from Linux NFS clients, and from the CephFS kernel client.
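For what it's worth, the thinness of the qcow2 images can also be confirmed at the format level with qemu-img (a quick sketch; big_disk_2.img is the file created above):

qemu-img info big_disk_2.img
# "virtual size" shows the 500G the guest would see, while "disk size" shows the
# space the image actually occupies on the datastore, which stays small until
# guest data is written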

ceph nfs export ls dev-nfs-cluster --detailed

[
  {
    "export_id": 1,
    "path": "/Development-Datastore",
    "cluster_id": "dev-nfs-cluster",
    "pseudo": "/Development-Datastore",
    "access_type": "RW",
    "squash": "no_root_squash",
    "security_label": true,
    "protocols": [
      4
    ],
    "transports": [
      "TCP"
    ],
    "fsal": {
      "name": "CEPH",
      "user_id": "dev-nfs-cluster1",
      "fs_name": "dev_cephfs_vol",
      "sec_label_xattr": ""
    },
    "clients": []
  }
]
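For comparison testing, the Linux clients mount this export directly over NFS 4.1 (sketch; <node_ip> and the mount point are placeholders):

mount -t nfs -o vers=4.1 <node_ip>:/Development-Datastore /mnt/nfs-test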

rpm -qa | grep ganesha

nfs-ganesha-ceph-3.5-1.el7.x86_64
nfs-ganesha-rados-grace-3.5-1.el7.x86_64
nfs-ganesha-rados-urls-3.5-1.el7.x86_64
nfs-ganesha-3.5-1.el7.x86_64
centos-release-nfs-ganesha30-1.0-2.el7.centos.noarch

rpm -qa | grep ceph

python3-cephfs-15.2.7-0.el7.x86_64
nfs-ganesha-ceph-3.5-1.el7.x86_64
python3-ceph-argparse-15.2.7-0.el7.x86_64
python3-ceph-common-15.2.7-0.el7.x86_64
cephadm-15.2.7-0.el7.x86_64
libcephfs2-15.2.7-0.el7.x86_64
ceph-common-15.2.7-0.el7.x86_64

ceph -v

ceph version 15.2.7 (<ceph_uuid>) octopus (stable)

The Ceph cluster is healthy, using BlueStore on raw 3.84 TB SATA 7200 rpm disks.

I have already brought this issue to the NFS-Ganesha support list: https://bit.ly/3uBcYXz and they seem to think it's a CephFS issue.

I have a full packet capture, taken from both the ESXi host and the Ceph ganesha node while a new vmdk file is created, and will provide it upon request.


Files

nfs_41_esxi_thick_provision.pcap.tar.gz (166 KB) - packet capture - esxi create vmdk - Robert Toole, 04/10/2021 03:58 PM
#1

Updated by Robert Toole about 3 years ago

I have now upgraded to Octopus 15.2.10 and the issue is still present.

Attached is the packet capture of the file creation process, from both the ESXi and Ceph sides.

#2

Updated by Patrick Donnelly about 3 years ago

  • Status changed from New to Triaged
  • Assignee set to Jeff Layton
#3

Updated by Jeff Layton about 3 years ago

I responded in the original email thread. The problem here is that ceph doesn't report sparse file usage correctly. When you set the size of the file via SETATTR, it looks like the files are thickly provisioned, even though they really aren't.
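A minimal way to see the reporting behaviour described above from a CephFS kernel mount (sketch only; /mnt/cephfs is an assumed mount point):

truncate -s 10G /mnt/cephfs/sparse_test   # only a SETATTR of the size, no data written
stat -c 'apparent=%s bytes  allocated=%b blocks of %B bytes' /mnt/cephfs/sparse_test
# CephFS derives the block count from the file size, so the empty file appears fully
# allocated; a local filesystem such as XFS would report 0 allocated blocks here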

The only real way to do this would be to scan for all of the objects that exist for the inode and then tally up the object size for each. That could be done -- possibly via a new vxattr or an ioctl, though I'm not sure how we'd properly use that via ganesha.
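For reference, a rough manual version of that tally is already possible from the command line (a sketch; the data pool name cephfs_data and the mount path are assumptions, and it relies on CephFS naming its data objects <inode-in-hex>.<block-index>):

ino_hex=$(printf '%x' "$(stat -c %i /mnt/cephfs/Development-Datastore/test_vm/test_vm-flat.vmdk)")
rados -p cephfs_data ls | grep "^${ino_hex}\." | while read -r obj; do
    rados -p cephfs_data stat "$obj"          # last field of each line is the object size in bytes
done | awk '{sum += $NF} END {printf "bytes actually stored: %d\n", sum}'

Enumerating and statting every object like this only works as an offline check; it hints at why doing the same scan on every getattr would be costly.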

#4

Updated by Jeff Layton about 3 years ago

  • Status changed from Triaged to Won't Fix

Closing this as WONTFIX since we can't really do it without harming performance for important workloads.
