
Bug #52642

snap scheduler: cephfs snapshot schedule status doesn't list the snapshot count properly

Added by Milind Changire 4 months ago. Updated 13 days ago.

Status:
Pending Backport
Priority:
Normal
Category:
-
Target version:
% Done:

0%

Source:
other
Tags:
Backport:
pacific
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
mgr/snap_schedule
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

# ceph fs snap-schedule list / --recursive=true --fs=cephfs
/nvol5 1h 
/kvol3 1h 
/fvol3 1h 

** Checked the status after a day of snapshots had been created from the schedules.
# ceph fs snap-schedule status /fvol3 --fs=cephfs
{"fs": "cephfs", "subvol": null, "path": "/fvol3", "rel_path": "/fvol3", "schedule": "1h", "retention": {}, "start": "2021-07-28T00:00:00", "created": "2021-07-28T16:03:27", "first": "2021-07-28T17:00:00", "last": "2021-07-30T08:00:00", "last_pruned": null, "created_count": 36, "pruned_count": 0, "active": true}

# ceph fs snap-schedule status /kvol3 --fs=cephfs
{"fs": "cephfs", "subvol": null, "path": "/kvol3", "rel_path": "/kvol3", "schedule": "1h", "retention": {}, "start": "2021-07-28T00:00:00", "created": "2021-07-28T16:03:18", "first": "2021-07-28T17:00:00", "last": "2021-07-30T08:00:00", "last_pruned": null, "created_count": 38, "pruned_count": 0, "active": true}

** List the snapshots under the directories used for the schedules.

# mount | grep kvol3
10.8.128.21:6789,10.8.128.22:6789,10.8.128.23:6789:/ on /mnt/kvol3 type ceph (rw,noatime,seclabel,name=admin,secret=<hidden>,acl,mds_namespace=cephfs,_netdev)

# ls /mnt/kvol3/kvol3/.snap/
scheduled-2021-07-28-17_00_00  scheduled-2021-07-28-23_00_00  scheduled-2021-07-29-05_00_00  scheduled-2021-07-29-11_00_00  scheduled-2021-07-29-17_00_00  scheduled-2021-07-29-23_00_00  scheduled-2021-07-30-05_00_00
scheduled-2021-07-28-18_00_00  scheduled-2021-07-29-00_00_00  scheduled-2021-07-29-06_00_00  scheduled-2021-07-29-12_00_00  scheduled-2021-07-29-18_00_00  scheduled-2021-07-30-00_00_00  scheduled-2021-07-30-06_00_00
scheduled-2021-07-28-19_00_00  scheduled-2021-07-29-01_00_00  scheduled-2021-07-29-07_00_01  scheduled-2021-07-29-13_00_00  scheduled-2021-07-29-19_00_00  scheduled-2021-07-30-01_00_00  scheduled-2021-07-30-07_00_00
scheduled-2021-07-28-20_00_00  scheduled-2021-07-29-02_00_00  scheduled-2021-07-29-08_00_00  scheduled-2021-07-29-14_00_00  scheduled-2021-07-29-20_00_00  scheduled-2021-07-30-02_00_00  scheduled-2021-07-30-08_00_00
scheduled-2021-07-28-21_00_00  scheduled-2021-07-29-03_00_00  scheduled-2021-07-29-09_00_00  scheduled-2021-07-29-15_00_00  scheduled-2021-07-29-21_00_00  scheduled-2021-07-30-03_00_00  snapk1
scheduled-2021-07-28-22_00_00  scheduled-2021-07-29-04_00_00  scheduled-2021-07-29-10_00_00  scheduled-2021-07-29-16_00_00  scheduled-2021-07-29-22_00_00  scheduled-2021-07-30-04_00_00
# ls /mnt/kvol3/kvol3/.snap/ | wc -l
41

# mount | grep fvol3
ceph-fuse on /mnt/fvol3 type fuse.ceph-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)

# ls /mnt/fvol3/fvol3/.snap
scheduled-2021-07-28-17_00_00  scheduled-2021-07-28-23_00_00  scheduled-2021-07-29-05_00_00  scheduled-2021-07-29-11_00_00  scheduled-2021-07-29-17_00_00  scheduled-2021-07-29-23_00_00  scheduled-2021-07-30-05_00_00
scheduled-2021-07-28-18_00_00  scheduled-2021-07-29-00_00_00  scheduled-2021-07-29-06_00_00  scheduled-2021-07-29-12_00_00  scheduled-2021-07-29-18_00_00  scheduled-2021-07-30-00_00_00  scheduled-2021-07-30-06_00_00
scheduled-2021-07-28-19_00_00  scheduled-2021-07-29-01_00_00  scheduled-2021-07-29-07_00_00  scheduled-2021-07-29-13_00_00  scheduled-2021-07-29-19_00_00  scheduled-2021-07-30-01_00_00  scheduled-2021-07-30-07_00_00
scheduled-2021-07-28-20_00_00  scheduled-2021-07-29-02_00_00  scheduled-2021-07-29-08_00_00  scheduled-2021-07-29-14_00_00  scheduled-2021-07-29-20_00_00  scheduled-2021-07-30-02_00_00  scheduled-2021-07-30-08_00_00
scheduled-2021-07-28-21_00_00  scheduled-2021-07-29-03_00_00  scheduled-2021-07-29-09_00_00  scheduled-2021-07-29-15_00_00  scheduled-2021-07-29-21_00_00  scheduled-2021-07-30-03_00_00  snapf1
scheduled-2021-07-28-22_00_00  scheduled-2021-07-29-04_00_00  scheduled-2021-07-29-10_00_00  scheduled-2021-07-29-16_00_00  scheduled-2021-07-29-22_00_00  scheduled-2021-07-30-04_00_00

# ls /mnt/fvol3/fvol3/.snap | wc -l
41

As per the snapshot schedule status, the created_count for /kvol3 is 38 and for /fvol3 is 36, whereas the number of scheduled snapshots actually created under each of these paths is 40 (the `ls | wc -l` output of 41 includes 1 snapshot that was created manually on each path).

There is a mismatch between the number of snapshots created under each directory and the count reported by the status command.
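The mismatch can be checked mechanically by comparing created_count from the status JSON against the number of "scheduled-" entries in the directory's .snap listing. A minimal sketch of that check, assuming the status JSON shape shown above; the snapshot listing here is simulated to match this report (a real check would read it with os.listdir() on the .snap directory):

```python
import json

# Status JSON as reported above for /kvol3, abridged to the relevant keys.
status = json.loads('{"path": "/kvol3", "created_count": 38, "pruned_count": 0}')

# Simulated .snap listing: hourly snapshots from 2021-07-28 17:00 through
# 2021-07-30 08:00 (40 entries) plus the one manual snapshot, matching the
# "ls | wc -l" output of 41 in the report.
snap_entries = ["scheduled-2021-07-%02d-%02d_00_00" % (28 + h // 24, h % 24)
                for h in range(17, 57)] + ["snapk1"]

# Only "scheduled-" entries are created by the scheduler; manual snapshots
# (snapk1) must be excluded before comparing with created_count.
scheduled = [s for s in snap_entries if s.startswith("scheduled-")]
print(len(snap_entries), len(scheduled), status["created_count"])
# -> 41 40 38: 40 scheduled snapshots exist, but created_count says 38,
# which is the mismatch this ticket describes.
```

The same filtering explains why the raw `wc -l` counts above (41) should not be compared to created_count directly: one entry per path is a manual snapshot, so the expected created_count is 40.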


Related issues

Copied to CephFS - Backport #53760: pacific: snap scheduler: cephfs snapshot schedule status doesn't list the snapshot count properly New

History

#1 Updated by Patrick Donnelly 4 months ago

  • Status changed from New to Triaged
  • Assignee set to Milind Changire

#2 Updated by Venky Shankar 4 months ago

  • Status changed from Triaged to Fix Under Review
  • Pull request ID set to 43236

#3 Updated by Patrick Donnelly 4 months ago

  • Target version set to v17.0.0
  • Source set to other
  • Backport set to pacific

#4 Updated by Venky Shankar 13 days ago

  • Status changed from Fix Under Review to Pending Backport

#5 Updated by Backport Bot 13 days ago

  • Copied to Backport #53760: pacific: snap scheduler: cephfs snapshot schedule status doesn't list the snapshot count properly added
