Feature #46074


mds: provide alternatives to increase the total CephFS subvolume snapshot count beyond the current 400 across a CephFS volume

Added by Shyamsundar Ranganathan almost 4 years ago. Updated about 3 years ago.

Status: Resolved
Priority: Urgent
Assignee:
Category: Administration/Usability
Target version:
% Done: 0%
Source: Development
Tags:
Backport: octopus, nautilus
Reviewed:
Affected Versions:
Component(FS): MDS
Labels (FS): snapshots
Pull request ID:
Description

Issue is originally discussed here: https://github.com/ceph/ceph-csi/issues/1133

This bug is filed to track the alternatives discussed in the issue above, specifically:

"For cephfs volume, we only create snapshots at volume root. we can disable the special handling for inodes with multiple links. If the special handling is disabled, that can help avoiding the 400 snapshot per-file-system limit"

For subvolumes, the intention is that they are used in isolation from each other, so hard links across subvolumes should not be a concern. Given this, the special handling discussed above can be relaxed for subvolumes.
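As a rough illustration of how the relaxed handling is surfaced to operators, the related issue #47154 introduces a `ceph.dir.subvolume` vxattr that marks a directory as a subvolume root. A minimal sketch follows; the mount point and subvolume path are hypothetical examples, and the commands assume a mounted CephFS with an MDS that supports the vxattr:

```shell
# Mark a directory as a subvolume root so the MDS can treat its
# snapshots independently of other subvolumes (vxattr from #47154).
# /mnt/cephfs and the subvolume path are hypothetical examples.
setfattr -n ceph.dir.subvolume -v 1 /mnt/cephfs/volumes/_nogroup/subvol1

# Read the flag back to confirm it is set on the directory.
getfattr -n ceph.dir.subvolume /mnt/cephfs/volumes/_nogroup/subvol1
```

Note that subvolumes created through the `ceph fs subvolume` mgr interface are expected to receive this marking automatically once #47154 lands; setting the vxattr by hand is only needed for directories managed outside mgr/volumes.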


Related issues: 4 (1 open, 3 closed)

Related to Linux kernel client - Bug #21420: ceph_osdc_writepages(): pre-allocated osdc->msgpool_op messages vs large number of snapshots (New)
Related to CephFS - Bug #47154: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxattr, to improve snapshot scalability of subvolumes (Resolved, Shyamsundar Ranganathan)
Copied to CephFS - Backport #47095: octopus: mds: provide alternatives to increase the total CephFS subvolume snapshot count beyond the current 400 across a CephFS volume (Resolved, Patrick Donnelly)

Copied to CephFS - Backport #47096: nautilus: mds: provide alternatives to increase the total CephFS subvolume snapshot count beyond the current 400 across a CephFS volume (Resolved, Zheng Yan)
