Bug #61151 (open): libcephfs: incorrectly showing the size for snapshots when stating them

Added by Xiubo Li 12 months ago. Updated 11 months ago.

Status: Fix Under Review
Priority: Normal
Assignee:
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

If rstat is enabled, the .snap snapdir should report the total size of all the snapshots, and each individual snapshot should report the rstat size recorded at the time the snapshot was taken.

Currently it always shows the live parent directory's rstat, which is incorrect.
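
For illustration, a minimal sketch of the expected behavior against a mounted CephFS (the directory and snapshot names here are hypothetical):

stat dir1/.snap/        # expected: the sum of the rstat sizes of all of dir1's snapshots
stat dir1/.snap/snap1/  # expected: dir1's rstat size at the moment snap1 was taken
                        # buggy behavior: both of the above report dir1's current (live) rstat size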


Related issues 1 (1 open, 0 closed)

Copied to Linux kernel client - Bug #61331: libcephfs: incorrectly showing the size for snapshots when stating them (New, Xiubo Li)

Actions #1

Updated by Xiubo Li 12 months ago

  • Description updated (diff)
Actions #2

Updated by Xiubo Li 12 months ago

There is an option, mds_snap_rstat, in the MDS daemons. Once it is enabled, nested rstats are maintained for snapshots and we can send the old_inodes' rstats to the client side.
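
For reference, a minimal sketch of how one might turn it on (assuming the standard ceph config mechanism; verify the option name and scope on your cluster):

ceph config set mds mds_snap_rstat true
ceph config get mds.<id> mds_snap_rstat    # confirm the value seen by a specific MDS daemon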

Actions #3

Updated by Greg Farnum 12 months ago

> If the rstat is enabled for the .snap snapdir it should report the total size for all the snapshots.

I'm not sure about this. The total size of all snapshots is a pretty meaningless number, isn't it? If there's 10GB of data in a directory, and we snapshot it 10 times without changing any data, we get 10 user-visible snapshots that take up a total of 10GB of space on disk. But the MDS can't actually see how much space the OSDs are using — it doesn't have that information reported. So all it could really report for total size is 100GB, right?

But any user who sees that would expect to free up 10GB of space by deleting one of those ten snapshots, which also won't happen...So I don't think that's a great UX option. :/

Actions #4

Updated by Xiubo Li 11 months ago

  • Status changed from In Progress to Fix Under Review
  • Pull request ID set to 51659
Actions #5

Updated by Xiubo Li 11 months ago

The tests worked as expected:

[xiubli@ceph libcephfs]$ mkdir dir2
[xiubli@ceph libcephfs]$ stat dir2/.snap/
  File: dir2/.snap/
  Size: 0             Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d    Inode: 282574488338941  Links: 2
Access: (0755/drwxr-xr-x)  Uid: ( 1000/  xiubli)   Gid: ( 1000/  xiubli)
Access: 2023-05-22 16:45:10.198927211 +0800
Modify: 2023-05-22 13:24:26.720969831 +0800
Change: 2023-05-22 13:24:26.720969831 +0800
 Birth: -
[xiubli@ceph libcephfs]$ dd if=/dev/random of=./dir2/file1111 bs=1K count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB, 1.0 KiB) copied, 0.0140872 s, 72.7 kB/s
[xiubli@ceph libcephfs]$ stat dir2/.snap/
  File: dir2/.snap/
  Size: 0             Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d    Inode: 282574488338941  Links: 2
Access: (0755/drwxr-xr-x)  Uid: ( 1000/  xiubli)   Gid: ( 1000/  xiubli)
Access: 2023-05-22 16:45:10.198927211 +0800
Modify: 2023-05-22 13:24:26.720969831 +0800
Change: 2023-05-22 13:24:26.720969831 +0800
 Birth: -
[xiubli@ceph libcephfs]$ ll -h dir2/
total 1.0K
-rw-r--r-- 1 xiubli xiubli 1.0K May 22 16:45 file1111
[xiubli@ceph libcephfs]$ mkdir dir2/.snap/snapshot1
[xiubli@ceph libcephfs]$ stat dir2/.snap/
  File: dir2/.snap/
  Size: 1024          Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d    Inode: 282574488338941  Links: 2
Access: (0755/drwxr-xr-x)  Uid: ( 1000/  xiubli)   Gid: ( 1000/  xiubli)
Access: 2023-05-22 16:45:10.198927211 +0800
Modify: 2023-05-22 16:46:27.430254821 +0800
Change: 2023-05-22 16:46:27.430254821 +0800
 Birth: -
[xiubli@ceph libcephfs]$ stat dir2/.snap/snapshot1/
  File: dir2/.snap/snapshot1/
  Size: 1024          Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d    Inode: 564049465049597  Links: 2
Access: (0755/drwxr-xr-x)  Uid: ( 1000/  xiubli)   Gid: ( 1000/  xiubli)
Access: 2023-05-22 16:45:10.198927211 +0800
Modify: 2023-05-22 16:45:54.545237972 +0800
Change: 2023-05-22 16:46:27.430254821 +0800
 Birth: -
[xiubli@ceph libcephfs]$ stat dir2/.snap/snapshot1/file1111 
  File: dir2/.snap/snapshot1/file1111
  Size: 1024          Blocks: 2          IO Block: 4194304 regular file
Device: 2ah/42d    Inode: 564049465049598  Links: 1
Access: (0644/-rw-r--r--)  Uid: ( 1000/  xiubli)   Gid: ( 1000/  xiubli)
Access: 2023-05-22 16:45:54.545237972 +0800
Modify: 2023-05-22 16:45:54.583346799 +0800
Change: 2023-05-22 16:45:54.583346799 +0800
 Birth: -
[xiubli@ceph libcephfs]$ dd if=/dev/random of=./dir2/file2222 bs=1K count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB, 1.0 KiB) copied, 0.00332263 s, 308 kB/s
[xiubli@ceph libcephfs]$ stat dir2/.snap/snapshot1/
  File: dir2/.snap/snapshot1/
  Size: 1024          Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d    Inode: 564049465049597  Links: 2
Access: (0755/drwxr-xr-x)  Uid: ( 1000/  xiubli)   Gid: ( 1000/  xiubli)
Access: 2023-05-22 16:45:10.198927211 +0800
Modify: 2023-05-22 16:45:54.545237972 +0800
Change: 2023-05-22 16:46:27.430254821 +0800
 Birth: -
[xiubli@ceph libcephfs]$ stat dir2/.snap/snapshot1/file1111 
  File: dir2/.snap/snapshot1/file1111
  Size: 1024          Blocks: 2          IO Block: 4194304 regular file
Device: 2ah/42d    Inode: 564049465049598  Links: 1
Access: (0644/-rw-r--r--)  Uid: ( 1000/  xiubli)   Gid: ( 1000/  xiubli)
Access: 2023-05-22 16:45:54.545237972 +0800
Modify: 2023-05-22 16:45:54.583346799 +0800
Change: 2023-05-22 16:45:54.583346799 +0800
 Birth: -
[xiubli@ceph libcephfs]$ mkdir dir2/.snap/snapshot2
[xiubli@ceph libcephfs]$ stat dir2/.snap/
  File: dir2/.snap/
  Size: 3072          Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d    Inode: 282574488338941  Links: 2
Access: (0755/drwxr-xr-x)  Uid: ( 1000/  xiubli)   Gid: ( 1000/  xiubli)
Access: 2023-05-22 16:45:10.198927211 +0800
Modify: 2023-05-22 16:48:15.306613624 +0800
Change: 2023-05-22 16:48:15.306613624 +0800
 Birth: -
[xiubli@ceph libcephfs]$ stat dir2/.snap/snapshot2/
  File: dir2/.snap/snapshot2/
  Size: 2048          Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d    Inode: 845524441760253  Links: 2
Access: (0755/drwxr-xr-x)  Uid: ( 1000/  xiubli)   Gid: ( 1000/  xiubli)
Access: 2023-05-22 16:45:10.198927211 +0800
Modify: 2023-05-22 16:46:45.947429018 +0800
Change: 2023-05-22 16:48:15.306613624 +0800
 Birth: -
[xiubli@ceph libcephfs]$ dd if=/dev/random of=./dir2/file3333 bs=1K count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB, 1.0 KiB) copied, 0.0030967 s, 331 kB/s
[xiubli@ceph libcephfs]$ stat dir2/.snap/
  File: dir2/.snap/
  Size: 3072          Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d    Inode: 282574488338941  Links: 2
Access: (0755/drwxr-xr-x)  Uid: ( 1000/  xiubli)   Gid: ( 1000/  xiubli)
Access: 2023-05-22 16:45:10.198927211 +0800
Modify: 2023-05-22 16:48:15.306613624 +0800
Change: 2023-05-22 16:48:15.306613624 +0800
 Birth: -
[xiubli@ceph libcephfs]$ mkdir dir2/.snap/snapshot3
[xiubli@ceph libcephfs]$ stat dir2/.snap/
  File: dir2/.snap/
  Size: 6144          Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d    Inode: 282574488338941  Links: 2
Access: (0755/drwxr-xr-x)  Uid: ( 1000/  xiubli)   Gid: ( 1000/  xiubli)
Access: 2023-05-22 16:45:10.198927211 +0800
Modify: 2023-05-22 16:50:24.724830748 +0800
Change: 2023-05-22 16:50:24.724830748 +0800
 Birth: -
[xiubli@ceph libcephfs]$ stat dir2/.snap/snapshot3/
  File: dir2/.snap/snapshot3/
  Size: 3072          Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d    Inode: 1126999418470909  Links: 2
Access: (0755/drwxr-xr-x)  Uid: ( 1000/  xiubli)   Gid: ( 1000/  xiubli)
Access: 2023-05-22 16:45:10.198927211 +0800
Modify: 2023-05-22 16:48:45.221649134 +0800
Change: 2023-05-22 16:50:24.724830748 +0800
 Birth: -
[xiubli@ceph libcephfs]$ rmdir dir2/.snap/snapshot1/
[xiubli@ceph libcephfs]$ stat dir2/.snap/
  File: dir2/.snap/
  Size: 5120          Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d    Inode: 282574488338941  Links: 2
Access: (0755/drwxr-xr-x)  Uid: ( 1000/  xiubli)   Gid: ( 1000/  xiubli)
Access: 2023-05-22 16:45:10.198927211 +0800
Modify: 2023-05-22 16:50:56.561229503 +0800
Change: 2023-05-22 16:50:56.561229503 +0800
 Birth: -

Actions #6

Updated by Xiubo Li 11 months ago

  • Copied to Bug #61331: libcephfs: incorrectly showing the size for snapshots when stating them added
Actions #7

Updated by Xiubo Li 11 months ago

Greg Farnum wrote:

> > If the rstat is enabled for the .snap snapdir it should report the total size for all the snapshots.

> I'm not sure about this. The total size of all snapshots is a pretty meaningless number, isn't it? If there's 10GB of data in a directory, and we snapshot it 10 times without changing any data, we get 10 user-visible snapshots that take up a total of 10GB of space on disk. But the MDS can't actually see how much space the OSDs are using — it doesn't have that information reported. So all it could really report for total size is 100GB, right?

Yeah, that sounds reasonable: the total size of the snapshots in the snapdir cannot represent the total disk space the snapshots actually use.

> But any user who sees that would expect to free up 10GB of space by deleting one of those ten snapshots, which also won't happen...So I don't think that's a great UX option. :/

How about always showing the total number of snapshots here instead?

We should also document this somewhere: because snapshots may share the same objects in the backend RADOS, removing a single snapshot does not guarantee that the corresponding amount of disk space will be freed.
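
As a rough illustration of where to look instead, per-pool usage can be checked directly (a sketch using standard cluster commands; actual savings depend on which RADOS objects the remaining snapshots still reference):

ceph df detail    # per-pool usage, which reflects what the snapshots actually consume on disk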
