Bug #13948

closed

fs has no limit to PATH_MAX

Added by Arthur Liu over 8 years ago. Updated almost 8 years ago.

Status:
Rejected
Priority:
Low
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Community (user)
Tags:
Backport:
Regression:
No
Severity:
5 - suggestion
Reviewed:
ceph-qa-suite:
Component(FS):
Client
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Trying to build coreutils on a CephFS mountpoint (kclient), one of the configure checks probes the maximum path name length. CephFS doesn't seem to impose such a limit, so the test never completes. The result is a very deep directory tree.

It would probably be good to have a limit so that a filesystem can't end up with arbitrarily deep directory trees.
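
For illustration, a rough sketch of the kind of probe such a configure check performs (hypothetical, not the actual coreutils/gnulib test): keep nesting directories and descending into them until the filesystem reports an error. On a filesystem that never reports one, the loop just keeps deepening the tree.

    /* Hypothetical probe, not the real configure test: nest "x" directories
     * and chdir into each one until something fails. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        size_t depth = 0;

        for (;;) {
            if (mkdir("x", 0700) != 0 && errno != EEXIST) {
                fprintf(stderr, "mkdir failed at depth %zu: %s\n",
                        depth, strerror(errno));
                break;
            }
            if (chdir("x") != 0) {
                fprintf(stderr, "chdir failed at depth %zu: %s\n",
                        depth, strerror(errno));
                break;
            }
            depth++;
            if (depth > 100000) {        /* safety cap so the sketch terminates */
                printf("no limit hit after %zu levels\n", depth);
                break;
            }
        }
        return 0;
    }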

Actions #1

Updated by Zheng Yan over 8 years ago

  • Status changed from New to Rejected

No other FS imposes this limitation. PATH_MAX is the maximum length of a path buffer. For very deep directory trees, you can still access every file by changing the working directory.
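
To illustrate the point, a minimal sketch (assuming a deep tree of "x" directories like the one the configure test creates; open_deep_file() is illustrative, not a CephFS API): the full path can exceed PATH_MAX, yet the file stays reachable because each call only ever sees a short relative path.

    /* Sketch: reach a file buried deeper than PATH_MAX by descending one
     * component at a time; open() is only ever given a short relative name. */
    #include <fcntl.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <unistd.h>

    int open_deep_file(size_t depth, const char *name)
    {
        for (size_t i = 0; i < depth; i++) {
            if (chdir("x") != 0) {           /* descend one level */
                perror("chdir");
                return -1;
            }
        }
        return open(name, O_RDONLY);         /* short path, no PATH_MAX issue */
    }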

Actions #2

Updated by Greg Farnum over 8 years ago

  • Status changed from Rejected to New

I don't think that's an accurate assessment of the problem. Presumably this test passes on local FSes and fails for us.

It might be because we are happily returning a cwd of arbitrary size, whereas the others will freak out because it's too deep, in which case we are probably doing a buffer overflow or something. Or perhaps we just keep telling users to give us a bigger buffer for long paths, and need to short-circuit out?

Or, maybe, some local FSes really do restrict all paths to be less than PATH_MAX. The enforcement around this is pretty weird and spotty in general...
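
For context, the caller-side pattern being alluded to looks roughly like this (a generic sketch of the grow-on-ERANGE idiom, not glibc or CephFS code): if the filesystem never caps path length, the loop simply keeps asking for a bigger buffer instead of failing.

    /* Generic sketch of the "retry getcwd() with a larger buffer" idiom. */
    #include <errno.h>
    #include <stdlib.h>
    #include <unistd.h>

    char *current_dir(void)
    {
        size_t size = 256;

        for (;;) {
            char *buf = malloc(size);
            if (!buf)
                return NULL;
            if (getcwd(buf, size) != NULL)
                return buf;                  /* caller frees the result */
            free(buf);
            if (errno != ERANGE)
                return NULL;                 /* a real error, give up */
            size *= 2;                       /* buffer too small, retry */
        }
    }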

Actions #3

Updated by Zheng Yan over 8 years ago

getcwd() is handled by the VFS; it has nothing to do with the cephfs kernel driver. Besides, it's not feasible to check how deep a directory is when moving a directory into another (deep) directory.

Actions #4

Updated by Arthur Liu over 8 years ago

Zheng is right. I finally had a chance to re-test this on an idle filesystem and it actually completed. I think it was running into problems with a very deep tree on a busy filesystem, which also slowed the filesystem to a crawl, so it appeared to me that it never completed (it took on the order of an hour). glibc was doing a readlink on /proc/self/cwd, so it was trying to traverse and stat the entire directory tree. Please close this.
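
For reference, the readlink on /proc/self/cwd mentioned above amounts to roughly the following (a sketch of the retry-with-a-larger-buffer idiom, not the actual glibc code); with a very deep tree the returned path ends up far longer than PATH_MAX.

    /* Sketch of reading the cwd through procfs; not the actual glibc code. */
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    char *cwd_via_procfs(void)
    {
        size_t size = 1024;

        for (;;) {
            char *buf = malloc(size);
            if (!buf)
                return NULL;
            ssize_t len = readlink("/proc/self/cwd", buf, size - 1);
            if (len < 0) {
                free(buf);
                return NULL;
            }
            if ((size_t)len < size - 1) {    /* result fits, NUL-terminate */
                buf[len] = '\0';
                return buf;
            }
            free(buf);                       /* possibly truncated, grow */
            size *= 2;
        }
    }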

Actions #5

Updated by Zheng Yan over 8 years ago

  • Status changed from New to Rejected
Actions #6

Updated by Greg Farnum almost 8 years ago

  • Component(FS) Client added