Bug #36359

CephFS slows down when exported through a Samba server

Added by peng zhang over 5 years ago. Updated almost 5 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
MDS, kceph
Labels (FS):
Samba/CIFS
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I created a Jewel CephFS cluster, mounted CephFS, and exported the mount directory through a Samba server.

Then, on a Windows client, I mapped the Samba share as a network drive.

I created a single directory on the Windows local disk, put 10,000 1 MB files into it, and copied the directory to the Samba share. The copy speed slows down steadily (roughly linearly) as the copy progresses.
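
Roughly, the setup looks like the following (the monitor address, mount point, share name, and credentials here are only examples, not my exact configuration):

# kernel client mount of CephFS on the Samba host
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# smb.conf share exporting the kernel mount
[cephfs]
    path = /mnt/cephfs
    writeable = yes
    browseable = yes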

I set the MDS debug_mds level to 20 and saw some strange output like the following:

2018-10-09 21:22:41.432696 7f774c463700 10 mds.0.cache.ino(10000027d5d) encode_inodestat issuing pAsxLsXsxFsxcrwb seq 1151
2018-10-09 21:22:41.432700 7f774c463700 10 mds.0.cache.ino(10000027d5d) encode_inodestat caps pAsxLsXsxFsxcrwb seq 1151 mseq 0 xattrv 0 len 0
2018-10-09 21:22:41.432746 7f774c463700 10 mds.0.cache.ino(100000274b4) encode_inodestat issuing pAsxLsXsxFsxcrwb seq 3368
2018-10-09 21:22:41.432750 7f774c463700 10 mds.0.cache.ino(100000274b4) encode_inodestat caps pAsxLsXsxFsxcrwb seq 3368 mseq 0 xattrv 0 len 0
2018-10-09 21:22:41.432791 7f774c463700 10 mds.0.cache.ino(10000026c5f) encode_inodestat issuing pAsxLsXsxFsxcrwb seq 5501
2018-10-09 21:22:41.432795 7f774c463700 10 mds.0.cache.ino(10000026c5f) encode_inodestat caps pAsxLsXsxFsxcrwb seq 5501 mseq 0 xattrv 0 len 0
2018-10-09 21:22:41.432843 7f774c463700 10 mds.0.cache.ino(10000026969) encode_inodestat issuing pAsxLsXsxFsxcrwb seq 6259
2018-10-09 21:22:41.432847 7f774c463700 10 mds.0.cache.ino(10000026969) encode_inodestat caps pAsxLsXsxFsxcrwb seq 6259 mseq 0 xattrv 0 len 0

and many requests for the files under this directory:

2018-10-09 21:22:48.963079 7f774c463700 12 mds.0.server including    dn [dentry #1/b2/dir1/4_2467.jpg [2,head] auth (dversion lock) v=6626 inode=0x55ecf69c7640 state=new | request=0 lock=0 inodepin=1 dirty=1 authpin=0 0x55ecf50d50a0]
2018-10-09 21:22:48.963086 7f774c463700 20 mds.0.locker issue_client_lease no/null lease on [dentry #1/b2/dir1/4_2467.jpg [2,head] auth (dversion lock) v=6626 inode=0x55ecf69c7640 state=new | request=0 lock=0 inodepin=1 dirty=1 authpin=0 0x55ecf50d50a0]
2018-10-09 21:22:48.963091 7f774c463700 12 mds.0.server including inode [inode 100000262f8 [2,head] /b2/dir1/4_2467.jpg auth v6626 dirtyparent s=891400 n(v0 b891400 1=1+0) (iauth excl) (ifile excl) (ixattr excl) (iversion lock) cr={174115=0-4194304@1} caps={174115=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@7921},l=174115 | ptrwaiter=0 request=0 lock=0 caps=1 dirtyparent=1 dirty=1 authpin=0 0x55ecf69c7640]
2018-10-09 21:22:49.483688 7f774c463700 12 mds.0.server including    dn [dentry #1/b2/dir1/4_2467.jpg [2,head] auth (dversion lock) v=6626 inode=0x55ecf69c7640 state=new | request=0 lock=0 inodepin=1 dirty=1 authpin=0 0x55ecf50d50a0]
2018-10-09 21:22:49.483694 7f774c463700 20 mds.0.locker issue_client_lease no/null lease on [dentry #1/b2/dir1/4_2467.jpg [2,head] auth (dversion lock) v=6626 inode=0x55ecf69c7640 state=new | request=0 lock=0 inodepin=1 dirty=1 authpin=0 0x55ecf50d50a0]
2018-10-09 21:22:49.483700 7f774c463700 12 mds.0.server including inode [inode 100000262f8 [2,head] /b2/dir1/4_2467.jpg auth v6626 dirtyparent s=891400 n(v0 b891400 1=1+0) (iauth excl) (ifile excl) (ixattr excl) (iversion lock) cr={174115=0-4194304@1} caps={174115=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@7922},l=174115 | ptrwaiter=0 request=0 lock=0 caps=1 dirtyparent=1 dirty=1 authpin=0 0x55ecf69c7640]

Is this a bug in the interaction between Samba and CephFS? When I copy the same directory to CephFS directly through the Linux kernel client, I do not see nearly as many issue_client_lease and encode_inodestat messages.

Could someone look into this issue?

History

#1 Updated by Patrick Donnelly over 5 years ago

I created a Jewel CephFS cluster, mounted CephFS, and exported the mount directory through a Samba server.

Have you tried running Mimic instead? Why choose Jewel?

#2 Updated by peng zhang over 5 years ago

I have also run a Luminous cluster, with the same result. I found that when a file is copied to the Windows-mapped directory, Windows refreshes the directory and stats all the files in it. Maybe Windows is making it slow, and the MDS cannot respond to so many requests.

#3 Updated by Jeff Layton over 5 years ago

So you're mounting the directory using kcephfs and exporting that via samba? Have you tried using the vfs_ceph module that interacts with cephfs in userland and avoids the kernel mount? I'm not sure it'll be any better, but it's a simpler configuration overall.
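
A minimal sketch of such a share, assuming a cephx user named samba and the default ceph.conf location (adjust the names to your environment):

[cephfs]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    kernel share modes = no

With vfs_ceph, path is interpreted relative to the CephFS root, and disabling kernel share modes is usually recommended because the share is no longer backed by a kernel-mounted filesystem.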

#4 Updated by qinglong li over 5 years ago

The cause may be Samba. You can try turning on the Samba "case sensitive" parameter and see whether that helps.
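
For example, in the share definition (a sketch; by default "case sensitive" is auto, and a case-insensitive lookup can make Samba scan the directory to find a matching name, which multiplies stat requests in a directory with 10,000 entries):

[cephfs]
    case sensitive = yes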

#5 Updated by Patrick Donnelly about 5 years ago

  • Category deleted (43)
  • Labels (FS) Samba/CIFS added

#6 Updated by Patrick Donnelly almost 5 years ago

  • Target version deleted (v10.2.12)
  • Start date deleted (10/09/2018)
  • Affected Versions deleted (v10.2.9)
  • ceph-qa-suite deleted (fs)
