Bug #11059

closed

Unix rights of the same "cephfs" directory different in client1 and client2

Added by Francois Lafont about 9 years ago. Updated almost 9 years ago.

Status:
Won't Fix
Priority:
Normal
Assignee:
-
Category:
fs/ceph
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Crash signature (v1):
Crash signature (v2):

Description

Hi,

My Ceph cluster has 3 identical nodes. Each node has:
  • OS: Ubuntu 14.04 (updated)
  • Kernel 3.16.0-31-generic (installed with apt-get install linux-generic-lts-utopic)
  • Ceph 0.80.8-1trusty (installed from the Debian repository http://ceph.com/debian-firefly/)
  • 1 OSD daemon on a dedicated 20 GB "xfs" disk (it's a test cluster)
  • 1 monitor daemon
  • 1 MDS daemon (active/standby mode)

Everything seems to be OK on the cluster side:

~# ceph -s
    cluster e865b3d0-535a-4f18-9883-2793079d400b
     health HEALTH_OK
     monmap e3: 3 mons at {1=172.31.10.1:6789/0,2=172.31.10.2:6789/0,3=172.31.10.3:6789/0}, election epoch 6, quorum 0,1,2 1,2,3
     osdmap e18: 3 osds: 3 up, 3 in
      pgmap v25: 192 pgs, 3 pools, 0 bytes data, 0 objects
            3175 MB used, 58231 MB / 61407 MB avail
                 192 active+clean
Then, I have 2 client nodes, ceph-client1 and ceph-client2. They are identical:
  • OS Ubuntu 14.04 (updated)
  • Kernel 3.16.0-31-generic (kernel installed with apt-get install linux-generic-lts-utopic)
  • Ceph 0.80.8-1trusty (installed via the Debian repository http://ceph.com/debian-firefly/)

On these nodes, I mount the cephfs like this:

mount -t ceph 172.31.10.1,172.31.10.2,172.31.10.3:/ /mnt -o name=cephfsuser,secretfile=/etc/ceph/ceph.client.cephfsuser.secret

Then, in the "ceph-client1" node, I do:

root@ceph-client1 ~# uname -r
3.16.0-31-generic

root@ceph-client1 ~# \ls -la /mnt/
total 4
drwxr-xr-x  1 root root    0 Mar  7 03:02 .
drwxr-xr-x 22 root root 4096 Mar  7 02:42 ..

root@ceph-client1 ~# mkdir /mnt/dir1
root@ceph-client1 ~# \ls -la /mnt/
total 4
drwxr-xr-x  1 root root    0 Mar  7 03:04 .
drwxr-xr-x 22 root root 4096 Mar  7 02:42 ..
drwxr-xr-x  1 root root    0 Mar  7 03:04 dir1

Then, in the "ceph-client2" node, I do:

root@ceph-client2 ~# \ls -la /mnt
total 4
drwxr-xr-x  1 root root    0 Mar  7 03:04 .
drwxr-xr-x 22 root root 4096 Mar  7 02:42 ..
drwxrwxrwx  1 root root    0 Mar  7 03:04 dir1

The Unix permissions of dir1/ are not the same on ceph-client1 and on ceph-client2. The issue is reproducible every time.

Note 1: if I mount the cephfs via ceph-fuse instead of the mount.ceph command on ceph-client1 and ceph-client2, I do not have the problem.

Note 2: on ceph-client1 and ceph-client2, if I boot with the default kernel of Ubuntu 14.04, i.e. kernel 3.13.0-46-generic, I do not have the problem.

Regards.
François Lafont

#1

Updated by Zheng Yan about 9 years ago

  • Status changed from New to 12

It's a bug in the ACL code. The bug has been fixed in the 3.18 kernel. Please use a kernel >= 3.18 or add the 'noacl' mount option.
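For reference, the suggested workaround would look like this (a sketch reusing the monitor addresses, user name, and secret file from the original report; not verified against a live cluster):

```shell
# Same mount as in the report, but with POSIX ACL support disabled
# via the 'noacl' option to avoid the 3.16 kernel-client ACL bug.
mount -t ceph 172.31.10.1,172.31.10.2,172.31.10.3:/ /mnt \
    -o name=cephfsuser,secretfile=/etc/ceph/ceph.client.cephfsuser.secret,noacl
```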

#2

Updated by Francois Lafont about 9 years ago

Hi,

Zheng Yan wrote:

It's a bug in the ACL code. The bug has been fixed in the 3.18 kernel. Please use a kernel >= 3.18 or add the 'noacl' mount option.

OK, I have tested, and indeed:

  • I do not have the issue if the client nodes use a 3.18 kernel;
  • and I do not have the issue if the client nodes use a 3.13 kernel and mount the cephfs with the 'noacl' option.

So it's OK for me. ;)
Thanks a lot for your precise answer.

Regards
François Lafont
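The kernel check implied by the fix can be sketched in shell. kernel_at_least is a hypothetical helper, not part of Ceph; the 3.18 threshold and the kernel version strings come from the comments above:

```shell
#!/bin/sh
# kernel_at_least MIN KERNEL: succeeds if KERNEL >= MIN, using
# 'sort -V' for a proper version comparison (hypothetical helper).
kernel_at_least() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
}

# Strip the flavour suffix from the running kernel
# (e.g. "3.16.0-31-generic" -> "3.16.0") and compare against 3.18,
# the first kernel in which the cephfs ACL bug is fixed.
if kernel_at_least 3.18 "$(uname -r | cut -d- -f1)"; then
    echo "kernel >= 3.18: cephfs ACL bug is fixed"
else
    echo "kernel < 3.18: mount cephfs with the 'noacl' option"
fi
```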

#3

Updated by Zheng Yan almost 9 years ago

  • Status changed from 12 to Won't Fix
  • Regression set to No
