Bug #4738 (closed): libceph: unlink vs. readdir (and other dir orders)

Added by Denis kaganovich about 11 years ago. Updated about 5 years ago.

Status:
Closed
Priority:
Low
Assignee:
-
Category:
-
Target version:
-
% Done:
0%

Source:
Development
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Labels (FS):
Samba/CIFS
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When stacking vfs_scannedonly on top of vfs_ceph in Samba, I ran into bugs that look like libceph readdir problems.
Code illustration: http://raw.googlecode.com/svn/trunk/app-portage/ppatch/files/extensions/net-fs/samba/compile/samba-scan-ceph.patch
In plain language: while a readdir() loop is in progress, vfs_scannedonly gets stuck in ceph_unlink() (it removes an expired .scanned:filename marker while readdir of "filename" is still running). There are actually more calls issued before that, at least ceph_stat() on both files. A simplified sketch of the interleaving is below.
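
For illustration only, here is a minimal sketch (not the actual vfs_scannedonly code; list_dir, MARKER_PREFIX, and the expiry check are placeholders) of that interleaving, written against the plain POSIX calls that vfs_ceph maps onto ceph_opendir()/ceph_readdir()/ceph_stat()/ceph_unlink():

    /* Minimal sketch of the problematic pattern: stat() and unlink() of a
     * stale ".scanned:" marker issued from inside an active readdir() loop
     * on the same directory. */
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define MARKER_PREFIX ".scanned:"

    static void list_dir(const char *path)
    {
        DIR *d = opendir(path);
        if (!d)
            return;

        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            char full[4096];
            snprintf(full, sizeof(full), "%s/%s", path, e->d_name);

            if (strncmp(e->d_name, MARKER_PREFIX, strlen(MARKER_PREFIX)) == 0) {
                struct stat st;
                /* Marker expired?  The scanner stats both the marker and the
                 * data file, then removes the marker -- while this readdir()
                 * loop is still walking the same directory. */
                if (stat(full, &st) == 0 /* && marker_expired(&st) */)
                    unlink(full);
                continue;
            }

            printf("%s\n", e->d_name);
        }
        closedir(d);
    }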

The next issue is more problematic. The code does telldir(), walks forward, and then seekdir(). Probably .scanned:filename is created again in that window, and the readdir() order of the following files then differs from normal system behaviour.

The second issue could be ignored in this code if strict system behaviour is not required, but both look like a common ceph_readdir() mismatch, within a single context, against the normal system ordering. When the unlink() goes through the kernel mount context instead, everything is fine. A rough sketch of the second pattern follows.
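
This is a simplified, hypothetical sketch (replay_after_seek and the single-threaded recreation of the marker are my simplifications, not taken from the patch above): save the position with telldir(), walk forward, recreate the marker, then seekdir() back and compare what comes out:

    /* Sketch of the second pattern: after seekdir() back to a saved
     * position, the entries returned should follow the usual local
     * filesystem behaviour, but through vfs_ceph the order diverged. */
    #include <dirent.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static void replay_after_seek(const char *path)
    {
        DIR *d = opendir(path);
        if (!d)
            return;

        long pos = telldir(d);           /* remember the current position */

        struct dirent *e;
        while ((e = readdir(d)) != NULL) /* walk forward to the end */
            ;

        /* Meanwhile (e.g. from the scanning side) the marker reappears: */
        char marker[4096];
        snprintf(marker, sizeof(marker), "%s/.scanned:filename", path);
        int fd = open(marker, O_CREAT | O_WRONLY, 0644);
        if (fd >= 0)
            close(fd);

        seekdir(d, pos);                 /* jump back to the saved position */
        while ((e = readdir(d)) != NULL) /* order seen here should match */
            puts(e->d_name);             /* local-fs behaviour, but did not */

        closedir(d);
    }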

PS: About this vfs stacking: I just do "ln -s /mnt/ceph/shared /shared" and move clamav and the unlink() into the system fs context.

Actions #1

Updated by Sam Lang about 11 years ago

  • Status changed from New to Need More Info

Denis,

I've seen similar behavior with the smbtorture dir1 test, but it happens without the vfs_ceph module. Does this only happen to you when you're using vfs_scannedonly on top of vfs_ceph, or does it also happen when you use vfs_scannedonly on top of a local filesystem?

Actions #2

Updated by Sage Weil about 11 years ago

  • Project changed from Ceph to CephFS
Actions #3

Updated by Denis kaganovich about 11 years ago

Only when stacked on top:
vfs objects = scannedonly ceph
If I switch to:
vfs objects = scannedonly
or:
vfs objects = ceph
then everything is OK (again, I have a duplicate directory structure so I can do this).

The duplicated mount is a kernel-level mount on the server machines - yes, I have read the warnings against doing that, but it looks fine on a semi-RT (preempt) kernel. So the problems occur ONLY on the path that bypasses the mount.

Another problematic point: I use the ctdb lock on this mount, but ping_pong is unhappy. Still, nothing looks wrong when ONLY the kernel mount is used in all respects, so I believe this is purely a vfs+libceph problem.
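
For reference, ping_pong (from the ctdb tools) exercises coherent fcntl() byte-range locking across nodes. Below is a minimal sketch of that style of test, not the real ping_pong source; lock_byte and the tiny two-byte range are my simplifications, and the test only passes if the path it runs on actually supports byte-range locks:

    /* Sketch of the lock/unlock pattern a ping_pong-style test relies on:
     * take a write lock on the next byte, then release the previous one.
     * One instance per node runs against the same shared file. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static int lock_byte(int fd, off_t off, short type)
    {
        struct flock fl = {
            .l_type   = type,      /* F_WRLCK or F_UNLCK */
            .l_whence = SEEK_SET,
            .l_start  = off,
            .l_len    = 1,
        };
        return fcntl(fd, F_SETLKW, &fl);
    }

    int main(int argc, char **argv)
    {
        if (argc < 2)
            return 1;

        int fd = open(argv[1], O_RDWR | O_CREAT, 0644);
        if (fd < 0)
            return 1;

        for (off_t i = 0; ; i++) {
            if (lock_byte(fd, i % 2, F_WRLCK) < 0) {     /* take next lock */
                perror("lock");
                return 1;
            }
            if (i > 0)
                lock_byte(fd, (i - 1) % 2, F_UNLCK);     /* drop previous */
        }
    }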

Actions #4

Updated by Greg Farnum about 11 years ago

I don't believe locking is implemented yet via the Samba VFS bindings, since we don't have a userspace implementation of that.
The kernel does have an implementation, so if that's working right then all is as we expect.

As for the readdir stuff, I'll leave that for Sam, as I haven't looked into the interfaces here at all. :)

Actions #5

Updated by Greg Farnum about 10 years ago

  • Priority changed from Normal to Low

Need more info, samba, uclient, etc.

Actions #6

Updated by Greg Farnum over 7 years ago

  • Category set to 43
  • Status changed from Need More Info to Closed

We have file locking and redid the listing code.

Actions #7

Updated by Patrick Donnelly about 5 years ago

  • Category deleted (43)
  • Labels (FS) Samba/CIFS added
