Feature #46074
closed
mds: provide alternatives to increase the total CephFS subvolume snapshot count beyond the current 400 across a CephFS volume
Added by Shyamsundar Ranganathan almost 4 years ago.
Updated about 3 years ago.
Category:
Administration/Usability
Backport:
octopus, nautilus
Description
Issue is originally discussed here: https://github.com/ceph/ceph-csi/issues/1133
This issue is filed to capture the alternatives discussed there, specifically:
"For a CephFS volume, we only create snapshots at the volume root. We can disable the special handling for inodes with multiple links. If the special handling is disabled, that can help avoid the 400 snapshots per-file-system limit."
For subvolumes, the intention is that they are used in isolation from each other, so hard links across subvolumes should not be a concern. Given this, the special handling discussed above can be relaxed for subvolumes.
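A rough sketch of the resulting behavior (the paths are hypothetical and the exact error text may vary by client): once a directory is marked with the ceph.dir.subvolume vxattr, hard links may not cross its boundary, which is what lets the MDS drop the global multi-link snapshot accounting for that subtree.
$ setfattr -n ceph.dir.subvolume -v 1 /cephfs/volumes/csi/subvol1
$ ln /cephfs/volumes/csi/subvol1/data /cephfs/volumes/csi/subvol2/link
(the ln is expected to fail with EXDEV, since the link would cross the subvolume boundary)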
- Subject changed from Provide alternatives to increase the total CephFS subvolume snapshot count beyond the current 400 across a CephFS volume to mds: provide alternatives to increase the total CephFS subvolume snapshot count beyond the current 400 across a CephFS volume
- Priority changed from Normal to Urgent
- Target version set to v16.0.0
- Status changed from New to Triaged
- Assignee set to Zheng Yan
I only see a per-dir limit in the MDS code. Where does the 400 snapshot per-file-system limit come from?
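The per-dir limit mentioned here is presumably the mds_max_snaps_per_dir option (an assumption; the 400 figure in the title comes from the ceph-csi discussion rather than a single MDS config value). It can be inspected with:
$ ceph config get mds mds_max_snaps_per_dir
100
(100 is the default; the actual value depends on the cluster configuration)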
- Related to Bug #21420: ceph_osdc_writepages(): pre-allocated osdc->msgpool_op messages vs large number of snapshots added
- Status changed from Triaged to Fix Under Review
- Pull request ID set to 36472
- Tracker changed from Bug to Feature
- Status changed from Fix Under Review to Pending Backport
Wiring up mgr/volumes will happen in another ticket.
- Copied to Backport #47095: octopus: mds: provide alternatives to increase the total CephFS subvolume snapshot count beyond the current 400 across a CephFS volume added
- Copied to Backport #47096: nautilus: mds: provide alternatives to increase the total CephFS subvolume snapshot count beyond the current 400 across a CephFS volume added
- Related to Bug #47154: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxattr, to improve snapshot scalability of subvolumes added
- Status changed from Pending Backport to Resolved
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
When upgrading an existing cluster with subvolumes (CSI volumes), does the old limit of 400 snapshots still apply to previously existing subvolumes? (I.e., is this patch only applicable to new subvolumes created after applying the update?)
Are there any performance implications (improvements) to this update, as mentioned here (https://bugzilla.redhat.com/show_bug.cgi?id=1848503)? If yes, is it advisable to remove snapshots created before this patch?
Andras Sali wrote:
When upgrading an existing cluster with subvolumes (CSI volumes), does the old limit of 400 snapshots still apply to previously existing subvolumes? (I.e., is this patch only applicable to new subvolumes created after applying the update?)
Yes, but the main change is that each subvolume is marked with a special vxattr, "ceph.dir.subvolume". This builds protections into subvolumes (namely, no hard links outside the subvolume) so that the snapshot limits can be removed. You can apply this to pre-existing subvolumes manually by running:
$ for subvol in /cephfs/volumes/*/*; do setfattr -n ceph.dir.subvolume -v 1 "$subvol"; done
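Assuming the vxattr can be read back like other ceph.dir.* vxattrs (and using a hypothetical subvolume path), a quick check that it took effect:
$ getfattr -n ceph.dir.subvolume /cephfs/volumes/csi/subvol1
# file: cephfs/volumes/csi/subvol1
ceph.dir.subvolume="1"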
Are there any performance implications (improvements) to this update, as mentioned here (https://bugzilla.redhat.com/show_bug.cgi?id=1848503)? If yes, is it advisable to remove snapshots created before this patch?
Should not be necessary.