Bug #13948
closed
fs has no limit to PATH_MAX
Added by Arthur Liu over 8 years ago.
Updated almost 8 years ago.
Description
Trying to build coreutils on a CephFS mountpoint (kclient), one of the configure tests checks for the maximum path name length. Ceph doesn't seem to impose this limit, so the test runs without completing. The result is a very deep directory tree.
It would probably be good to have a limit so that no FS can end up with an arbitrarily deep directory tree.
- Status changed from New to Rejected
No other FS imposes this limitation. PATH_MAX is the maximum length of a path buffer, not of a path. Even in a very deep directory tree, you can still access every file by changing the working directory.
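The point above can be sketched with a short Python example (a hypothetical illustration, not from this ticket): a tree whose full path exceeds PATH_MAX (4096 on Linux) is built and read back one component at a time via chdir, so no single path handed to the kernel ever exceeds the limit.

```python
import os
import tempfile

COMPONENT = "d" * 200   # one directory name, well under NAME_MAX (255)
DEPTH = 30              # 30 levels * ~201 bytes each > PATH_MAX (4096)

root = tempfile.mkdtemp()
root_fd = os.open(root, os.O_RDONLY)  # keep an fd so we can fchdir() back
os.chdir(root)

# Descend one component at a time; no single syscall ever sees the full path.
for _ in range(DEPTH):
    os.mkdir(COMPONENT)
    os.chdir(COMPONENT)

with open("leaf.txt", "w") as f:
    f.write("reachable")

# Return to the root via the saved fd, then descend again to read the file.
os.chdir(root_fd)
for _ in range(DEPTH):
    os.chdir(COMPONENT)
content = open("leaf.txt").read()
print(content)

# Clean up by walking back up, removing one level at a time.
os.unlink("leaf.txt")
for _ in range(DEPTH):
    os.chdir("..")
    os.rmdir(COMPONENT)
os.close(root_fd)
os.chdir(os.path.dirname(root))
os.rmdir(root)
```

This works on any POSIX filesystem; only a single absolute path longer than PATH_MAX would be rejected, not the tree itself.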
- Status changed from Rejected to New
I don't think that's an accurate assessment of the problem. Presumably this test passes on local FSes and fails for us.
It might be because we are happily returning a cwd of arbitrary size, whereas the others will freak out because it's too deep, in which case we are probably doing a buffer overflow or something. Or perhaps we just keep telling users to give us a bigger buffer for long paths, and need to short-circuit out?
Or, maybe, some local FSes really do restrict all paths to be less than PATH_MAX. The enforcement around this is pretty weird and spotty in general...
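The "give us a bigger buffer" behavior described above is the classic ERANGE retry loop around getcwd(3). A hypothetical ctypes sketch of that pattern (the function name `getcwd_growing` is made up for illustration):

```python
import ctypes
import ctypes.util
import errno
import os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.getcwd.restype = ctypes.c_void_p  # char *getcwd(char *buf, size_t size)
libc.getcwd.argtypes = [ctypes.c_char_p, ctypes.c_size_t]

def getcwd_growing(start=64):
    """Retry getcwd(3) with a doubling buffer until the path fits."""
    size = start
    while True:
        buf = ctypes.create_string_buffer(size)
        if libc.getcwd(buf, size):  # non-NULL return: success
            return buf.value.decode()
        if ctypes.get_errno() != errno.ERANGE:  # some other failure
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
        size *= 2  # buffer too small: grow and retry

print(getcwd_growing())
```

A caller that short-circuits instead of retrying would fail on any cwd longer than its fixed buffer, which is one way the configure test could loop forever.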
getcwd() is handled by the VFS; it has nothing to do with the CephFS kernel driver. Besides, it's not feasible to check how deep a directory is when moving it into another (deep) directory.
Zheng is right. I finally had a chance to re-test this on an idle filesystem and it actually completed. I think it was running into problems with a very deep tree on a busy filesystem, which also slowed the filesystem to a crawl, so it appeared to me that it didn't complete (it ran on the order of about 1 hr). glibc was doing a readlink of /proc/self/cwd, so it was trying to traverse and stat the entire directory tree. Please close this.
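For reference, the /proc/self/cwd link mentioned above can be inspected directly. A minimal Linux-specific sketch, assuming a /proc mount:

```python
import os

# On Linux, /proc/self/cwd is a symlink to the process's current working
# directory; the comment above notes glibc was doing a readlink of it.
cwd_via_proc = os.readlink("/proc/self/cwd")
print(cwd_via_proc == os.getcwd())
```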
- Status changed from New to Rejected
- Component(FS) Client added