Bug #14071

Ceph FS not working as backstore for Linux SCSI target

Added by Eric Eastman over 8 years ago. Updated over 8 years ago.

Status: Rejected
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Crash signature (v1):
Crash signature (v2):

Description

I have been testing the native Linux SCSI target (LIO) with the backstore file being a file on a kernel-mounted Ceph file system. With a 35GB backstore file, the exported iSCSI block device works with VMware ESXi 5.0 and I can create and use the filestore backend. When I create an 11TB file-based backstore, export it over iSCSI, and try to create a new file system on the ESXi 5.0 server, the operation fails on the ESXi side and LIO logs this error to kern.log:

Dec 13 14:54:47 dfgw01 kernel: [69472.402518] fd_do_rw() write returned -27

Creating an 11TB backstore file on an XFS file system and exporting it over iSCSI works on the ESXi server, so this looks like a Ceph file system problem.

Linux Kernel: Tested with 4.3 and 4.4rc4
OS: Ubuntu Trusty
uname -a
Linux dfgw01 4.4.0-040400rc4-generic #201512061930 SMP Mon Dec 7 00:32:31 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
ceph -v
ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)

Looking at the LIO code found at:
http://lxr.free-electrons.com/source/drivers/target/target_core_file.c
	iov_iter_bvec(&iter, ITER_BVEC, bvec, sgl_nents, len);
	if (is_write)
		ret = vfs_iter_write(fd, &iter, &pos);
	else
		ret = vfs_iter_read(fd, &iter, &pos);
…
	if (is_write) {
		if (ret < 0 || ret != data_length) {
			pr_err("%s() write returned %d\n", __func__, ret);
			return (ret < 0 ? ret : -EINVAL);
		}
Since the log shows a return of -27 (-EFBIG), this looks like a "file too large" error.
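
As a quick cross-check (my addition, not part of the original report), the errno name behind -27 can be confirmed from userspace on any Linux/glibc box:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* LIO logged "write returned -27"; on Linux errno 27 is EFBIG. */
	printf("errno 27 = %s (EFBIG = %d)\n", strerror(27), EFBIG);
	return 0;
}

This prints "errno 27 = File too large (EFBIG = 27)", which matches the reading above.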

Looking at vfs_iter_write, found at:
http://lxr.free-electrons.com/source/fs/read_write.c
ssize_t vfs_iter_write(struct file *file, struct iov_iter *iter, loff_t *ppos)
{
	struct kiocb kiocb;
	ssize_t ret;

	if (!file->f_op->write_iter)
		return -EINVAL;

	init_sync_kiocb(&kiocb, file);
	kiocb.ki_pos = *ppos;

	iter->type |= WRITE;
	ret = file->f_op->write_iter(&kiocb, iter);
	BUG_ON(ret == -EIOCBQUEUED);
	if (ret > 0)
		*ppos = kiocb.ki_pos;
	return ret;
}
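
vfs_iter_write() just hands the write to the filesystem's own ->write_iter, so for a file on the kernel-mounted Ceph file system the -EFBIG must be coming back from the CephFS write path rather than from LIO itself. To confirm that outside of LIO, a small userspace test that writes past the 1TB mark on the CephFS mount should reproduce the error; this is only a sketch, and the test path under /cephfs/iscsi and the 2TB offset are my own assumptions, not from the report:

#define _FILE_OFFSET_BITS 64
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
	/* Illustrative path on the kernel-mounted CephFS; adjust as needed. */
	const char *path = "/cephfs/iscsi/efbig_test.img";
	/* 2TB offset, past the default 1TB CephFS maximum file size. */
	off_t offset = (off_t)2 << 40;
	char byte = 0;

	int fd = open(path, O_WRONLY | O_CREAT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	ssize_t ret = pwrite(fd, &byte, 1, offset);
	if (ret < 0)
		printf("pwrite at 2TB failed: %s (errno %d)\n", strerror(errno), errno);
	else
		printf("pwrite at 2TB wrote %zd byte(s)\n", ret);

	close(fd);
	return 0;
}

If CephFS is still at its default limit this should fail with "File too large" (EFBIG), while the same program pointed at the XFS mount should succeed, matching what LIO sees.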

ceph df detail
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED     OBJECTS 
    241T      192T       49976G         20.23       5605k 
POOLS:
    NAME                ID     CATEGORY     USED       %USED     MAX AVAIL     OBJECTS     DIRTY     READ       WRITE  
    rbd                 0      -             4060M         0        50603G        1052      1052        980      80486 
    cephfs_data         1      -            16590G      6.72        50603G     5518067     5388k     23502k     17305k 
    cephfs_metadata     2      -            66150k         0        50603G      221249      216k       652k      8391k 
    kSAFEbackup         3      -              563M         0        50603G          23        23          0        185 

ceph -s
    cluster c261c2dc-5e29-11e5-98ba-68b599c50db0
     health HEALTH_OK
     monmap e1: 3 mons at {dfmon01=10.16.51.21:6789/0,dfmon02=10.16.51.22:6789/0,dfmon03=10.16.51.23:6789/0}
            election epoch 18, quorum 0,1,2 dfmon01,dfmon02,dfmon03
     mdsmap e505: 1/1/1 up {0=dfmds02=up:active}, 1 up:standby
     osdmap e1102: 158 osds: 157 up, 157 in
            flags sortbitwise
      pgmap v406717: 8512 pgs, 4 pools, 16595 GB data, 5605 kobjects
            49978 GB used, 192 TB / 241 TB avail
                8512 active+clean
  client io 3628 kB/s wr, 9 op/s

grep ceph /proc/mounts
10.16.51.21,10.16.51.22,10.16.51.23:/ /cephfs ceph rw,noatime,name=cephfs,secret=<hidden>,acl 0 0

SCSI target info from targetcli
targetcli
targetcli 3.0.pre4.5~ga125182 (rtslib 3.0.pre4.9~g6fd0bbf)
Copyright (c) 2011-2014 by Datera, Inc.
All rights reserved.

/> ls
o- / ................................................................................................................... [...]
  o- backstores ........................................................................................................ [...]
  | o- fileio ............................................................................................ [3 Storage Objects]
  | | o- iscsi1 ...................................................................... [35.0G, /tmpfs/iscsitmpfs1.img, in use]
  | | o- iscsiCephFS11TBa ..................................................... [11.0T, /cephfs/iscsi/iscsi_11TBa.img, in use]
  | | o- iscsixfs11TBa ........................................................... [11.0T, /XFS/iscsi_xfs_1_11TBa.img, in use]
  | o- iblock ............................................................................................. [0 Storage Object]
  | o- pscsi .............................................................................................. [0 Storage Object]
  | o- rd_mcp ............................................................................................. [0 Storage Object]
  o- iscsi ........................................................................................................ [1 Target]
  | o- iqn.2015-15.com.keepertech:t1 ................................................................................. [1 TPG]
  |   o- tpg1 ...................................................................................................... [enabled]
  |     o- acls ..................................................................................................... [0 ACLs]
  |     o- luns ..................................................................................................... [3 LUNs]
  |     | o- lun0 ................................................................... [fileio/iscsi1 (/tmpfs/iscsitmpfs1.img)]
  |     | o- lun1 ........................................................ [fileio/iscsixfs11TBa (/XFS/iscsi_xfs_1_11TBa.img)]
  |     | o- lun2 .................................................. [fileio/iscsiCephFS11TBa (/cephfs/iscsi/iscsi_11TBa.img)]
  |     o- portals ................................................................................................ [1 Portal]
  |       o- 10.16.51.41:3260 ............................................................................ [OK, iser disabled]
  o- loopback .................................................................................................... [0 Targets]
  o- usb_gadget .................................................................................................. [0 Targets]
  o- vhost ....................................................................................................... [0 Targets]
/> 

#1

Updated by John Spray over 8 years ago

  • Status changed from New to Rejected

CephFS imposes a maximum file size. It's 1<<40 bytes (i.e. one terabyte) by default. You can adjust it with "ceph mds set max_file_size".
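
For reference (my own arithmetic, not part of the comment): if the 11TB image here means 11 x 2^40 bytes, the limit would need to be raised to at least 11 x 1099511627776 = 12094627905536 bytes, e.g. "ceph mds set max_file_size 12094627905536", before the whole backstore is writable.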
