pybind/mgr/volumes: restore from snapshot
Manila's CephFS driver does not support recovering data from snapshots. The driver uses the ceph_volume_client library.
Implementing ceph_volume_client's `_cp_r` method would make data recovery possible for Manila's CephFS driver.
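To make the idea concrete, here is a minimal sketch of what a recursive-copy helper in the spirit of `_cp_r` might look like. It walks a plain local directory with `os`/`shutil` rather than going through the libcephfs bindings that ceph_volume_client actually uses, so the function name and structure are illustrative only, not the real driver code.

```python
import os
import shutil

def cp_r(src, dst):
    """Recursively copy a directory tree, preserving mode and timestamps.

    Illustrative stand-in for a ceph_volume_client ``_cp_r``: the real
    implementation would walk the snapshot directory via libcephfs,
    not the local filesystem.
    """
    os.makedirs(dst, exist_ok=True)
    for entry in os.scandir(src):
        s, d = entry.path, os.path.join(dst, entry.name)
        if entry.is_dir(follow_symlinks=False):
            cp_r(s, d)
        elif entry.is_symlink():
            # recreate the link rather than copying its target
            os.symlink(os.readlink(s), d)
        else:
            shutil.copy2(s, d)
    shutil.copystat(src, dst)
```

For snapshot recovery, `src` would be the read-only `.snap` subdirectory and `dst` the new share's data path.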
#1 Updated by John Spray over 2 years ago
I just checked who wrote that "TODO" comment, and it turns out it was me, even though I have no memory of it :-)
IIRC, the hope was that there would be a "read only clone" mechanism (i.e. a clone but the new share would have a readonly flag set), that would in reality just map the new volume to the proper .snap subdirectory, and we'd only do the full copy on a writable clone. I'm not sure whether the Manila clone API ended up in a form that provides that distinction, so that would be something to check.
#5 Updated by Patrick Donnelly over 1 year ago
- Subject changed from ceph_volume_client: Implementation of the cp method to pybind/mgr/volumes: restore from snapshot
- Description updated (diff)
- Assignee changed from Ramana Raja to Rishabh Dave
- Start date deleted (
- Backport changed from mimic,luminous to nautilus
- Component(FS) mgr/volumes added
- Component(FS) deleted (
#8 Updated by Ramana Raja about 1 year ago
This feature will be used by Ceph CSI to create a PVC from a snapshot, and by OpenStack Manila to create a share from a snapshot. Since creating a subvolume from a snapshot might take a while, we'd want a mgr/volumes call to initiate the asynchronous subvolume creation from a subvolume snapshot, and another call to check the status of the subvolume. The `ceph fs subvolume create` CLI can be extended to trigger the subvolume creation from a snapshot and return immediately,
ceph fs subvolume create <volname> <subvolname> [--group_name <group_name>] [--size <size>] [--snapshot_source <src snapshot name>] [--subvol_source <src subvolname>]
and have another CLI, say
ceph fs subvolume show <volname> <subvolname> [--group_name <group_id>]
to return the status of the subvolume creation.
See the discussion here for more details.
It should also be possible to garbage collect the subvolumes that were partially restored from a snapshot.
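The create-then-poll pattern described above can be sketched as follows. This is a hypothetical in-memory model (the class and method names are invented for illustration): `create_from_snapshot` kicks off the copy in a background thread and returns immediately, like the extended `ceph fs subvolume create`, while `show` reports the current state, like the proposed `ceph fs subvolume show`.

```python
import threading

class SubvolumeCloner:
    """Sketch of asynchronous subvolume creation from a snapshot.

    Hypothetical names: the real interface is the `ceph fs subvolume
    create`/`show` CLI proposed above, backed by mgr/volumes.
    """
    def __init__(self):
        self._status = {}
        self._lock = threading.Lock()

    def create_from_snapshot(self, subvol, copy_fn):
        """Start the copy in the background and return immediately."""
        with self._lock:
            self._status[subvol] = "in-progress"

        def worker():
            try:
                copy_fn()
                state = "complete"
            except Exception:
                # a partially restored subvolume; a candidate for GC
                state = "failed"
            with self._lock:
                self._status[subvol] = state

        t = threading.Thread(target=worker)
        t.start()
        return t

    def show(self, subvol):
        """Report the subvolume's creation status."""
        with self._lock:
            return self._status.get(subvol, "unknown")
```

A `failed` entry here corresponds to a partially restored subvolume, which is exactly what the garbage-collection point above would need to find and clean up.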
#9 Updated by Venky Shankar about 1 year ago
clone operation design & interface:
Introduce `clone` sub-command in `subvolume snapshot` command
$ ceph fs subvolume snapshot clone <source> <target> [<pool-layout>]
source: (filesystem, group, subvolume, snapshot) tuple
target: (filesystem, group, subvolume) tuple
pool-layout: optional, default to pool-layout of target group
The clone operation is asynchronous. Also, allow cloning to a different
subvolume group. Cloning an entire subvolume group is not a requirement.
b. Since clone is asynchronous, what if the snap is removed when clone
is in progress? The clone operation should be canceled and marked as
failed (user interrupted).
c. Clone operation status
Introduce `clone status` subcommand.
The subcommand takes "<source> <target>" and displays the clone's status.
$ ceph fs subvolume snapshot clone status <source> <target>
A clone operation can be in the following states:
- not started
- in progress
`not started` and `in progress` are fairly self-explanatory.
Maintain (or figure out) a list of failed clones. This list needs to be available across mgr restarts and failovers so as to provide consistent state to CSI. mgr/volumes should skip over failed clones on restart/failover. Provide a CLI command to clear failed clones.
d. Control operations: Support canceling an on-going clone operation (+ CLI command)
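Points b and d above both hinge on being able to interrupt an in-progress clone. A minimal sketch of one way to do that (hypothetical names, not the mgr/volumes implementation): the worker checks a cancel flag between copy units, so cancellation takes effect at the next check rather than instantly.

```python
import threading

# States from the design above, plus the terminal states it implies
# (names are illustrative).
NOT_STARTED = "not started"
IN_PROGRESS = "in progress"
COMPLETE = "complete"
CANCELED = "canceled"

class CloneOp:
    """Sketch of a cancellable clone operation."""
    def __init__(self, work_items):
        self.state = NOT_STARTED
        self._cancel = threading.Event()
        self._items = list(work_items)

    def cancel(self):
        """Request cancellation, e.g. because the source snap was removed."""
        self._cancel.set()

    def run(self, copy_one):
        self.state = IN_PROGRESS
        for item in self._items:
            if self._cancel.is_set():
                # mark as canceled so cleanup/GC can find the partial clone
                self.state = CANCELED
                return self.state
            copy_one(item)
        self.state = COMPLETE
        return self.state
```

The same check covers point b: removing the source snapshot while the clone is in progress would set the flag and leave the clone in a terminal, user-interrupted state.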
Changes to subvolume provisioning by mgr/volumes:
Maintain subvolume metadata in CephFS -- this would allow mgr/volumes to persist subvolume metadata to satisfy consistent `clone status` reporting across manager restarts and failovers. The metadata would also carry a "version" for the subvolume, bumped as features get added (e.g., cloning a subvolume). Backward compatibility would be maintained, so no migration of existing subvolumes to the new format is needed.
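One way to picture such persisted metadata is a small versioned record stored as a file inside the subvolume tree. The field names, version number, and JSON encoding below are all assumptions for illustration, not the actual mgr/volumes on-disk format; the point is the version bump on new features and the backward-compatible read path.

```python
import json

# Hypothetical: version 2 adds clone support; pre-existing subvolumes
# without metadata are treated as version 1.
SUBVOLUME_VERSION = 2

def dump_metadata(path, state, source=None):
    """Persist subvolume metadata (illustrative schema)."""
    meta = {"version": SUBVOLUME_VERSION, "state": state}
    if source is not None:
        # clone source: (volume, group, subvolume, snapshot)
        meta["source"] = list(source)
    with open(path, "w") as f:
        json.dump(meta, f)

def load_metadata(path):
    """Read metadata back; tolerate records written before versioning."""
    with open(path) as f:
        meta = json.load(f)
    meta.setdefault("version", 1)
    return meta
```

Because `clone status` would be answered from this persisted state rather than in-memory state, a mgr restart or failover would not lose track of in-progress or failed clones.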