Bug #13730

Impossible to change a file with cephfs in Infernalis

Added by Francois Lafont over 8 years ago. Updated over 8 years ago.

Status: Rejected
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: other
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite: fs
Component(FS):
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi,

I have mounted a cephfs via fuse and I can remove and add files, but I can't change a file.
The file attached at the end of this ticket contains the log ceph-client.cephfs.log, captured with "debug client = 20" and "debug ms = 1" in the ceph.conf of the client node. While this log was being recorded, here are the commands I ran:

~# mount /mnt/cephfs
libust[9774/9774]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
ceph-fuse[9788]: starting ceph client
2015-11-09 16:45:11.762346 7f095ef8d7c0 -1 init, newargv = 0x7f0962a1ca90 newargc=13
ceph-fuse[9788]: starting fuse

~# ll /mnt/cephfs/
total 5
drwxr-xr-x 1 root root    0 Nov  9 16:24 ./
drwxr-xr-x 3 root root 4096 Nov  9 16:02 ../
-rw-r--r-- 1 root root    0 Nov  9 16:41 welcome-to-cephfs

~# cat /mnt/cephfs/welcome-to-cephfs 
cat: /mnt/cephfs/welcome-to-cephfs: Operation not permitted

~# echo "Welcome" >/mnt/cephfs/welcome-to-cephfs 
-bash: echo: write error: Operation not permitted

~# umount /mnt/cephfs 

Here is some supplementary information:

The cluster nodes run Ubuntu 14.04 with ceph version 9.2.0 (17df5d2948d929e997b9d320b228caffc8314e58).

The client is an Ubuntu 14.04 machine too, with exactly the same version of ceph.

Here is the line in fstab to mount the cephfs:

id=cephfs,keyring=/etc/ceph/ceph.client.cephfs.keyring,client_mountpoint=/ /mnt/cephfs fuse.ceph noatime,defaults,_netdev 0 0
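
For reference, the equivalent manual mount would be a direct ceph-fuse invocation, roughly like this (a sketch, assuming the default /etc/ceph/ceph.conf and the keyring path from the fstab entry above):

~# ceph-fuse -n client.cephfs -k /etc/ceph/ceph.client.cephfs.keyring -r / /mnt/cephfs

Here -n selects the cephx user, -k points at its keyring, and -r chooses which directory of the cephfs tree to mount (the client_mountpoint from the fstab line).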

Here is the ceph account I use on the client:

# ceph auth export client.cephfs
export auth(auid = 18446744073709551615 key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX== with 3 caps)
[client.cephfs]
    key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
    caps mds = "allow" 
    caps mon = "allow r" 
    caps osd = "allow class-read object_prefix rbd_children, allow rwx pool=data" 
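
For reference, the pool named in that osd cap can be cross-checked against the filesystem's actual data pool with the standard ceph CLI:

~# ceph fs ls

The pool in "allow rwx pool=..." must match one of the data pools listed there.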

My ceph.conf on each node of the cluster:

[global]
  fsid                           = f875b4c1-535a-4f17-9883-2793079d410a
  cluster network                = 192.168.22.0/24
  public network                 = 10.0.2.0/24
  auth cluster required          = cephx
  auth service required          = cephx
  auth client required           = cephx
  filestore xattr use omap       = true
  osd pool default size          = 3
  osd pool default min size      = 1
  osd pool default pg num        = 64
  osd pool default pgp num       = 64
  osd crush chooseleaf type      = 1
  osd journal size               = 0
  osd max backfills              = 1
  osd recovery max active        = 1
  osd client op priority         = 63
  osd recovery op priority       = 1
  osd op threads                 = 4
  mds cache size                 = 1000000
  osd scrub begin hour           = 3
  osd scrub end hour             = 5
  mon allow pool delete          = false
  mon osd down out subtree limit = host
  mon osd min down reporters     = 4

[mon.ceph01]
  host     = ceph01
  mon addr = 10.0.2.101

[mon.ceph02]
  host     = ceph02
  mon addr = 10.0.2.102

[mon.ceph03]
  host     = ceph03
  mon addr = 10.0.2.103

My ceph status:

~# ceph -s
    cluster f875b4c1-535a-4f17-9883-2793079d410a
     health HEALTH_OK
     monmap e9: 3 mons at {ceph01=10.0.2.101:6789/0,ceph02=10.0.2.102:6789/0,ceph03=10.0.2.103:6789/0}
            election epoch 140, quorum 0,1,2 ceph01,ceph02,ceph03
     mdsmap e101: 1/1/1 up {0=ceph01=up:active}
     osdmap e290: 15 osds: 15 up, 15 in
            flags sortbitwise
      pgmap v1436: 192 pgs, 3 pools, 148 kB data, 20 objects
            572 MB used, 55862 GB / 55863 GB avail
                 192 active+clean

ceph-client.cephfs.log.zip - log on the client side (10.3 KB) Francois Lafont, 11/09/2015 04:22 PM

History

#1 Updated by Francois Lafont over 8 years ago

One more piece of information: here is the curious behaviour when I try the same commands with the cephfs mounted via the kernel module:

~# mount /mnt/cephfs 

~# cat /mnt/cephfs/welcome-to-cephfs 
cat: /mnt/cephfs/welcome-to-cephfs: Operation not permitted

~# echo "Hello" >/mnt/cephfs/welcome-to-cephfs 

~# cat /mnt/cephfs/welcome-to-cephfs 
Hello

~# umount /mnt/cephfs 

~# mount /mnt/cephfs 

~# cat /mnt/cephfs/welcome-to-cephfs # no output?
~#

# ll -h /mnt/cephfs/welcome-to-cephfs 
-rw-r--r-- 1 root root 6 Nov  9 17:31 /mnt/cephfs/welcome-to-cephfs

The kernel of my client node is 3.13.0-67 (the default kernel in Ubuntu 14.04).
Here is my fstab line to mount the cephfs via the kernel module:

ceph01,ceph02,ceph03:/ /mnt/cephfs ceph noatime,name=cephfs,secretfile=/etc/ceph/ceph.client.cephfs.secret,_netdev 0 0
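
The equivalent one-shot command would be roughly (a sketch, assuming the secret file holds only the base64 key of client.cephfs):

~# mount -t ceph ceph01,ceph02,ceph03:/ /mnt/cephfs -o noatime,name=cephfs,secretfile=/etc/ceph/ceph.client.cephfs.secret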

Regards.

#2 Updated by John Spray over 8 years ago

Hmm. Usually this is a symptom of bad OSD capabilities on the client.

You've got "allow class-read object_prefix rbd_children, allow rwx pool=data". Is your cephfs data pool definitely called 'data'?

#3 Updated by Francois Lafont over 8 years ago

Oh, Greg and John, I'm so sorry to have disturbed you with this dummy ticket. It's totally my fault:

~# ceph fs ls
name: cephfs, metadata pool: cephfsmetadata, data pools: [cephfsdata ]

The data pool is called "cephfsdata", not "data". Shame on me! Of course, with the correct ceph caps, it works. It seems I can't close this ticket from my web interface, but of course it can be considered closed.
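
For the record, the fix is just to point the osd cap at the right pool; a minimal sketch with the standard ceph CLI (assuming the data pool really is cephfsdata):

~# ceph auth caps client.cephfs mds 'allow' mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=cephfsdata'

and then remount on the client so the new caps are picked up.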

Sorry again.
Regards.

#4 Updated by John Spray over 8 years ago

  • Status changed from New to Rejected
