Bug #24894

client: allow overwrites to files with size greater than the max_file_size config

Added by Patrick Donnelly over 4 years ago. Updated 3 months ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
Correctness/Safety
Target version:
% Done:

0%

Source:
Community (user)
Tags:
backport_processed
Backport:
quincy, pacific
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Client, kceph
Labels (FS):
task(easy)
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Rejecting overwrites is confusing, since an overwrite does not grow the file any further beyond the limit (which it already exceeds).
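The requested behavior can be sketched as a simple bounds check (a minimal Python sketch; the function and variable names are illustrative, not the actual Ceph client code): a write is rejected with EFBIG only if it would grow the file beyond both max_file_size and its current size, so pure overwrites of existing bytes always succeed.

```python
import errno

MAX_FILE_SIZE = 64 * 1024  # hypothetical value of the max_file_size config

def check_write(offset, length, file_size, max_file_size=MAX_FILE_SIZE):
    """Reject a write only if it would extend the file past both
    max_file_size and the file's current size."""
    endoff = offset + length
    if endoff > max_file_size and endoff > file_size:
        return -errno.EFBIG
    return 0

# A file already larger than the 64k limit can still be overwritten in place...
assert check_write(offset=100, length=11, file_size=65547) == 0
# ...but a write that would extend it past both limits is rejected.
assert check_write(offset=65547, length=20, file_size=65547) == -errno.EFBIG
```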


Related issues

Copied to CephFS - Backport #55930: quincy: client: allow overwrites to files with size greater than the max_file_size config Resolved
Copied to CephFS - Backport #55931: pacific: client: allow overwrites to files with size greater than the max_file_size config Resolved

History

#1 Updated by Patrick Donnelly over 4 years ago

  • Assignee set to Rishabh Dave

#2 Updated by Patrick Donnelly over 4 years ago

  • Assignee deleted (Rishabh Dave)

#3 Updated by Patrick Donnelly over 3 years ago

  • Target version changed from v14.0.0 to v15.0.0

#4 Updated by Patrick Donnelly over 3 years ago

  • Target version deleted (v15.0.0)

#5 Updated by Tamar Shacked 7 months ago

PR to allow overwrites up to the actual file size: https://github.com/ceph/ceph/pull/46267

Tested with the ceph-fuse client:
1. Create FILE > 64k on the mount point

-rw-rw-r--. 1 tshacked tshacked 65547 May 15 11:28 FILE

2. Set max_file_size to 64k (the minimum value)

3. With the PR: overwriting existing data works without returning a "File too large" error

$ printf "11111111111"| dd of=FILE bs=512 seek=128 conv=notrunc
0+1 records in
0+1 records out
11 bytes copied, 0.0467811 s, 0.2 kB/s

4. An overwrite request extending past the file size returns an error even though part of the data is overwritten (with or without the PR)

$ printf "222222222222"| dd of=FILE bs=512 seek=128 conv=notrunc
dd: error writing 'FILE': File too large
0+1 records in
0+0 records out
0 bytes copied, 0.000336058 s, 0.0 kB/s

The behavior in (4) seems ambiguous: the file operation fails while the data is partially updated (up to the file size).
Should the data be updated if an error is returned?
I'll check whether POSIX defines this and what happens on a local FS.

#6 Updated by Tamar Shacked 7 months ago

On Fedora 34 (XFS), an attempt to overwrite starting at the file-size limit fails without writing anything.
This is not the case upstream, which returns an error but also performs the overwrite:

#-rw-r--r--. 1 root root 65553 May 16 15:06 FILE
# set -o posix
# ulimit -f 128   # limit file-size to 64k
# printf "XXXX" | dd of=FILE bs=512 seek=128 conv=notrunc
File size limit exceeded (core dumped)
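The local-FS behavior above can also be reproduced programmatically on Linux (a hedged sketch, not part of the Ceph test suite). By default, exceeding RLIMIT_FSIZE delivers SIGXFSZ, which kills the process, producing the "File size limit exceeded (core dumped)" message above; with the signal ignored, a write straddling the limit is silently shortened, and a write starting at or past the limit fails with EFBIG and no bytes written.

```python
import errno
import os
import resource
import signal
import tempfile

LIMIT = 64 * 1024  # 64k, mirroring `ulimit -f 128` (128 blocks of 512 bytes)

# Ignore SIGXFSZ so the write syscall fails with EFBIG instead of
# killing the process.
signal.signal(signal.SIGXFSZ, signal.SIG_IGN)

soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)
resource.setrlimit(resource.RLIMIT_FSIZE, (LIMIT, hard))
fd, path = tempfile.mkstemp()
try:
    # A write straddling the limit is shortened to fit: only 4 of the
    # 8 bytes are written.
    os.lseek(fd, LIMIT - 4, os.SEEK_SET)
    short_count = os.write(fd, b"XXXXXXXX")

    # A write starting at the limit fails outright with EFBIG.
    os.lseek(fd, LIMIT, os.SEEK_SET)
    try:
        os.write(fd, b"XXXX")
        err = 0
    except OSError as e:
        err = e.errno
finally:
    os.close(fd)
    os.unlink(path)
    resource.setrlimit(resource.RLIMIT_FSIZE, (soft, hard))

print(short_count, err == errno.EFBIG)
```

This matches the POSIX write() semantics: as many bytes as there is room for are written and a short count is returned; EFBIG (plus SIGXFSZ) is raised only when no byte can be written.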

#7 Updated by Greg Farnum 7 months ago

  • Pull request ID set to 46267

#8 Updated by Tamar Shacked 7 months ago

While working on this I noticed a second issue, which I did not fix and which exists in the upstream code:
1. ceph_max_file = default
2. create a file of size 64k+10b

$ ll /mnt/ceph/FILE 
-rw-rw-r--. 1 tshacked tshacked 65546 May 23 19:28 /mnt/ceph/FILE

$ hexdump -c -s 65530 /mnt/ceph/FILE
000fffa  \t 023   W   O 277   I   1   2   3   4   5   6   7   8   9  \n
001000a

3. limit ceph_max_file to 64k
4. write 20b at offset 64k
5. result: EFBIG is returned, yet the data is written up to the actual file size.
$ printf "TAMAR12345TAMAR6789" | dd of=/mnt/ceph/FILE bs=512 seek=128 conv=notrunc
dd: error writing '/mnt/ceph/FILE': File too large
0+1 records in
0+0 records out
0 bytes copied, 0.00121106 s, 0.0 kB/s
$ ll /mnt/ceph/FILE 
-rw-rw-r--. 1 tshacked tshacked 65546 May 23 19:28 /mnt/ceph/FILE

$ hexdump -c -s 65530 /mnt/ceph/FILE
000fffa  \t 023   W   O 277   I   T   A   M   A   R   1   2   3   4   5
001000a
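One consistent alternative for this case would be the POSIX short-write semantics tested above: write as many bytes as fit below the effective limit, return the partial count, and fail with EFBIG only when no byte can be written. A hypothetical sketch (illustrative only; this is not what the PR implements):

```python
import errno

def clamped_write(offset, data, file_size, max_file_size):
    """Hypothetical POSIX-style semantics for this case: report a
    short count instead of an error when only part of the write fits."""
    # Overwrites of existing bytes stay allowed even past max_file_size.
    limit = max(file_size, max_file_size)
    if offset >= limit:
        return -errno.EFBIG          # nothing can be written
    written = data[:limit - offset]
    # ... write `written` at `offset` ...
    return len(written)

# The 19-byte printf above at offset 64k into a 65546-byte file: only
# the 10 bytes below the actual file size are accepted.
assert clamped_write(64 * 1024, b"TAMAR12345TAMAR6789", 65546, 64 * 1024) == 10
```

Under these semantics the hexdump result above ("TAMAR12345" written up to the file size) would come with a return value of 10 rather than EFBIG, removing the ambiguity of an error paired with a partial update.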

#9 Updated by Venky Shankar 6 months ago

  • Status changed from New to Pending Backport
  • Target version set to v18.0.0
  • Backport changed from mimic,luminous to quincy, pacific

#10 Updated by Backport Bot 6 months ago

  • Copied to Backport #55930: quincy: client: allow overwrites to files with size greater than the max_file_size config added

#11 Updated by Backport Bot 6 months ago

  • Copied to Backport #55931: pacific: client: allow overwrites to files with size greater than the max_file_size config added

#12 Updated by Backport Bot 4 months ago

  • Tags set to backport_processed

#13 Updated by Xiubo Li 3 months ago

  • Status changed from Pending Backport to Resolved
