
Feature #46074

mds: provide alternatives to increase the total CephFS subvolume snapshot count to greater than the current 400 across a CephFS volume

Added by Shyamsundar Ranganathan 5 months ago. Updated 3 months ago.

Status:
Pending Backport
Priority:
Urgent
Assignee:
Category:
Administration/Usability
Target version:
% Done:

0%

Source:
Development
Tags:
Backport:
octopus,nautilus
Reviewed:
Affected Versions:
Component(FS):
MDS
Labels (FS):
snapshots
Pull request ID:

Description

Issue is originally discussed here: https://github.com/ceph/ceph-csi/issues/1133

This bug is filed to pursue the alternatives discussed in the issue above, specifically:

"For a CephFS volume, we only create snapshots at the volume root, so we can disable the special handling for inodes with multiple links. Disabling that special handling helps avoid the 400-snapshot per-file-system limit."

Subvolumes are intended to be used in isolation from one another, so hard links spanning subvolumes should not be a concern. Given this, the handling discussed above can be relaxed for subvolumes.
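The relaxation described above is applied per directory via the ceph.dir.subvolume vxattr (the mechanism tracked in the related Bug #47154). As a sketch of how an administrator might mark a directory as a subvolume and snapshot it, assuming a CephFS mount at /mnt/cephfs and an illustrative subvolume path:

```shell
# Assumption: CephFS is mounted at /mnt/cephfs and the subvolume
# directory path below is illustrative, not a real deployment path.

# Mark the directory as a subvolume so the MDS can relax the
# multi-link snapshot handling within it (Bug #47154 mechanism):
setfattr -n ceph.dir.subvolume -v 1 /mnt/cephfs/volumes/group/subvol

# Inspect the flag that was just set:
getfattr -n ceph.dir.subvolume /mnt/cephfs/volumes/group/subvol

# Snapshots are then taken at the subvolume root, as usual for CephFS:
mkdir /mnt/cephfs/volumes/group/subvol/.snap/snap1
```

With the flag set, snapshots inside the marked subvolume no longer count against the volume-wide handling that produced the 400-snapshot ceiling; wiring this into mgr/volumes is handled in a separate ticket (see update #7 below).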


Related issues

Related to Linux kernel client - Bug #21420: ceph_osdc_writepages(): pre-allocated osdc->msgpool_op messages vs large number of snapshots New
Related to CephFS - Bug #47154: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxattr, to improve snapshot scalability of subvolumes Pending Backport
Copied to CephFS - Backport #47095: octopus: mds: provide alternatives to increase the total CephFS subvolume snapshot count to greater than the current 400 across a CephFS volume New
Copied to CephFS - Backport #47096: nautilus: mds: provide alternatives to increase the total CephFS subvolume snapshot count to greater than the current 400 across a CephFS volume Resolved

History

#1 Updated by Patrick Donnelly 5 months ago

  • Subject changed from Provide alternatives to increase the total CephFS subvolume snapshot count to greater than the current 400 across a CephFS volume to mds: provide alternatives to increase the total CephFS subvolume snapshot count to greater than the current 400 across a CephFS volume
  • Priority changed from Normal to Urgent
  • Target version set to v16.0.0

#2 Updated by Patrick Donnelly 5 months ago

  • Status changed from New to Triaged
  • Assignee set to Zheng Yan

#3 Updated by Zheng Yan 4 months ago

I only see a per-directory limit in the MDS code. Where does the 400-snapshot per-file-system limit come from?

#4 Updated by Patrick Donnelly 4 months ago

Zheng Yan wrote:

I only see a per-directory limit in the MDS code. Where does the 400-snapshot per-file-system limit come from?

I think that number came from https://tracker.ceph.com/issues/21420#note-3

#5 Updated by Patrick Donnelly 4 months ago

  • Related to Bug #21420: ceph_osdc_writepages(): pre-allocated osdc->msgpool_op messages vs large number of snapshots added

#6 Updated by Zheng Yan 4 months ago

  • Status changed from Triaged to Fix Under Review
  • Pull request ID set to 36472

#7 Updated by Patrick Donnelly 3 months ago

  • Tracker changed from Bug to Feature
  • Status changed from Fix Under Review to Pending Backport

Wiring up mgr/volumes will happen in another ticket.

#8 Updated by Nathan Cutler 3 months ago

  • Copied to Backport #47095: octopus: mds: provide alternatives to increase the total CephFS subvolume snapshot count to greater than the current 400 across a CephFS volume added

#9 Updated by Nathan Cutler 3 months ago

  • Copied to Backport #47096: nautilus: mds: provide alternatives to increase the total CephFS subvolume snapshot count to greater than the current 400 across a CephFS volume added

#10 Updated by Patrick Donnelly 3 months ago

  • Related to Bug #47154: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxattr, to improve snapshot scalability of subvolumes added
