Bug #52388 (closed): mgr/snap-schedule: retention set calculation for multiple retention specs is wrong

Added by Jan Fajerski over 2 years ago. Updated over 2 years ago.

Status: Resolved
Priority: Normal
Assignee: Jan Fajerski
Category: -
Target version: v17.0.0
% Done: 0%
Source: Community (user)
Tags:
Backport: pacific
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

On Wed, Aug 18, 2021 at 12:19 AM Prayank Saxena <> wrote:

Hello everyone,

We have a Ceph cluster running Pacific (v16.2.4).

We are trying to set up the snap-schedule module following this document:
https://docs.ceph.com/en/latest/cephfs/snap-schedule/

It works if you have, say, an hourly schedule and a single retention spec such as h 3:

ceph fs snap-schedule add /volumes/user1/vol7 1h <time>
ceph fs snap-schedule retention add /volumes/user1/vol7 h 6

But when we tried the following retention configuration, it did not quite
produce the result we were expecting:

ceph fs snap-schedule add /volumes/user1/vol7 1h 2021-08-12T23:41:00
ceph fs snap-schedule retention add /volumes/user1/vol7 d 2
ceph fs snap-schedule retention add /volumes/user1/vol7 h 6

By definition this should: take a snapshot every hour, then retain 6
snapshots spaced an hour apart and 2 snapshots spaced a day apart.
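
Under that reading of the retention semantics, each retention spec keeps the newest snapshots that fall into distinct period buckets, up to its count. A small Python sketch of that expectation (the names SNAP_FMT, PERIODS and expected_keep are made up for illustration; this is not the module's code), applied to the snapshot names from the subvolume listing further below:

from datetime import datetime

# Hypothetical helper names (SNAP_FMT, PERIODS, expected_keep); this is an
# illustration of the documented semantics, not the snap_schedule code.
SNAP_FMT = 'scheduled-%Y-%m-%d-%H_%M_%S'
PERIODS = {'h': '%Y-%m-%d %H', 'd': '%Y-%m-%d'}  # bucket key per retention period

def expected_keep(snap_names, retention):
    """Keep, per period, the newest snapshots in distinct period buckets."""
    snaps = sorted(snap_names, reverse=True)  # names sort chronologically; newest first
    keep = set()
    for period, count in retention.items():
        kept_for_period = 0
        last_bucket = None
        for name in snaps:
            bucket = datetime.strptime(name, SNAP_FMT).strftime(PERIODS[period])
            if bucket != last_bucket:
                last_bucket = bucket
                keep.add(name)
                kept_for_period += 1
                if kept_for_period == count:
                    break
    return keep

snaps = [
    'scheduled-2021-08-13-23_41_00', 'scheduled-2021-08-14-23_41_00',
    'scheduled-2021-08-15-23_41_00', 'scheduled-2021-08-16-23_41_00',
    'scheduled-2021-08-17-04_41_00', 'scheduled-2021-08-17-05_41_00',
    'scheduled-2021-08-17-06_41_00', 'scheduled-2021-08-17-07_41_00',
    'scheduled-2021-08-17-08_41_00', 'scheduled-2021-08-17-09_41_00',
]
print(sorted(expected_keep(snaps, {'h': 6, 'd': 2})))
# -> the six 2021-08-17 hourly snapshots plus scheduled-2021-08-16-23_41_00;
#    the 2021-08-13 and 2021-08-14 snapshots are not in the keep set.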

ceph fs snap-schedule status /volumes/user1/vol7

{"fs": "cephfs", "subvol": null, "path": "/volumes/user1/vol7", "rel_path":
"/volumes/user1/vol7", "schedule": "1h", "retention": {"d": 2, "h": 6},
"start": "2021-08-12T23:41:00", "created": "2021-08-12T23:41:07", "first":
"2021-08-13T00:41:00", "last": "2021-08-17T09:41:00", "last_pruned":
"2021-08-17T09:41:00", "created_count": 106, "pruned_count": 96, "active":
true}

ceph fs subvolume snapshot ls cephfs vol7 --group_name user1 | grep name

"name": "scheduled-2021-08-13-23_41_00" <--- this should be
deleted based on retention
"name": "scheduled-2021-08-14-23_41_00" <--- this too
"name": "scheduled-2021-08-15-23_41_00"
"name": "scheduled-2021-08-16-23_41_00"
"name": "scheduled-2021-08-17-04_41_00"
"name": "scheduled-2021-08-17-05_41_00"
"name": "scheduled-2021-08-17-06_41_00"
"name": "scheduled-2021-08-17-07_41_00"
"name": "scheduled-2021-08-17-08_41_00"
"name": "scheduled-2021-08-17-09_41_00"

This is what we get in the log:
--- start log ----
... truncated ...
2021-08-17 08:41:00,271 [Thread-3194] [INFO]
[snap_schedule.fs.schedule_client] created scheduled snapshot of
/volumes/user1/vol7
2021-08-17 08:41:00,271 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] created scheduled snapshot
/volumes/user1/vol7/.snap/scheduled-2021-08-17-08_41_00
2021-08-17 08:41:00,271 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] SnapDB on cephfs changed for
/volumes/user1/vol7, updating next Timer
2021-08-17 08:41:00,271 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] Creating new snapshot timer for
/volumes/user1/vol7
2021-08-17 08:41:00,272 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] Will snapshot /volumes/user1/vol7 in fs
cephfs in 3600s
2021-08-17 08:41:00,272 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] Pruning snapshots
2021-08-17 08:41:00,272 [Thread-3194] [DEBUG] [mgr_util] self.fs_id=1,
fs_id=1
2021-08-17 08:41:00,273 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] skipping dir entry b'.'
2021-08-17 08:41:00,274 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] skipping dir entry b'..'
2021-08-17 08:41:00,275 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] add b'scheduled-2021-08-13-23_41_00' to
pruning
2021-08-17 08:41:00,275 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] add b'scheduled-2021-08-14-23_41_00' to
pruning
2021-08-17 08:41:00,276 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] add b'scheduled-2021-08-15-23_41_00' to
pruning
2021-08-17 08:41:00,276 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] add b'scheduled-2021-08-16-23_41_00' to
pruning
2021-08-17 08:41:00,277 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] add b'scheduled-2021-08-17-02_41_00' to
pruning
2021-08-17 08:41:00,278 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] add b'scheduled-2021-08-17-03_41_00' to
pruning
2021-08-17 08:41:00,278 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] add b'scheduled-2021-08-17-04_41_00' to
pruning
2021-08-17 08:41:00,279 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] add b'scheduled-2021-08-17-05_41_00' to
pruning
2021-08-17 08:41:00,279 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] add b'scheduled-2021-08-17-06_41_00' to
pruning
2021-08-17 08:41:00,280 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] add b'scheduled-2021-08-17-07_41_00' to
pruning
2021-08-17 08:41:00,280 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] add b'scheduled-2021-08-17-08_41_00' to
pruning
2021-08-17 08:41:00,280 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] compiling keep set for period n
2021-08-17 08:41:00,280 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] compiling keep set for period M
2021-08-17 08:41:00,280 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] compiling keep set for period h
2021-08-17 08:41:00,280 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] keeping b'scheduled-2021-08-17-08_41_00'
due to 6h
2021-08-17 08:41:00,280 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] keeping b'scheduled-2021-08-17-07_41_00'
due to 6h
2021-08-17 08:41:00,280 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] keeping b'scheduled-2021-08-17-06_41_00'
due to 6h
2021-08-17 08:41:00,280 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] keeping b'scheduled-2021-08-17-05_41_00'
due to 6h
2021-08-17 08:41:00,280 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] keeping b'scheduled-2021-08-17-04_41_00'
due to 6h
2021-08-17 08:41:00,280 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] keeping b'scheduled-2021-08-17-03_41_00'
due to 6h
2021-08-17 08:41:00,280 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] found enough snapshots for 6h
2021-08-17 08:41:00,280 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] compiling keep set for period d
2021-08-17 08:41:00,280 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] keeping b'scheduled-2021-08-16-23_41_00'
due to 2d
2021-08-17 08:41:00,280 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] keeping b'scheduled-2021-08-15-23_41_00'
due to 2d
2021-08-17 08:41:00,281 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] keeping b'scheduled-2021-08-14-23_41_00'
due to 2d
2021-08-17 08:41:00,281 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] keeping b'scheduled-2021-08-13-23_41_00'
due to 2d
2021-08-17 08:41:00,281 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] compiling keep set for period w
2021-08-17 08:41:00,281 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] compiling keep set for period m
2021-08-17 08:41:00,281 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] compiling keep set for period y
2021-08-17 08:41:00,281 [Thread-3194] [DEBUG]
[snap_schedule.fs.schedule_client] rmdir on scheduled-2021-08-17-02_41_00
... truncated ...
--- end log ----

In the log 'due to 2d' is mentioned, but it still did not prune the two
old snapshots scheduled-2021-08-14-23_41_00 and scheduled-2021-08-13-23_41_00.
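
That log is consistent with the "found enough snapshots" check comparing one keep list, shared across all retention periods, against the current period's count: the 6h pass fills the list with six entries, so the 2d pass can never reach 2 and keeps every daily candidate. A small Python sketch of that suspected failure mode (hypothetical code, not the snap_schedule module's actual implementation; SNAP_FMT, BUCKETS and buggy_prune_set are made-up names) reproduces the keep decisions and the single rmdir seen in the log:

from datetime import datetime

# Hypothetical sketch: same per-period bucketing as the sketch above, but the
# kept count is tracked via one list shared across all retention periods.
SNAP_FMT = 'scheduled-%Y-%m-%d-%H_%M_%S'
BUCKETS = {'h': '%Y-%m-%d %H', 'd': '%Y-%m-%d'}

def buggy_prune_set(candidates, retention):
    keep = []
    for period, count in retention.items():
        last_bucket = None
        for name in sorted(candidates, reverse=True):  # newest first
            bucket = datetime.strptime(name, SNAP_FMT).strftime(BUCKETS[period])
            if bucket != last_bucket:
                last_bucket = bucket
                if name not in keep:
                    print(f'keeping {name} due to {count}{period}')
                    keep.append(name)
                    if len(keep) == count:
                        # BUG: compares the size of the shared keep list with
                        # the current period's count, so once the 6h pass has
                        # filled it the 2d pass never reaches 2 and keeps
                        # every daily candidate.
                        print(f'found enough snapshots for {count}{period}')
                        break
    return set(candidates) - set(keep)  # snapshots that would be pruned

candidates = [
    'scheduled-2021-08-13-23_41_00', 'scheduled-2021-08-14-23_41_00',
    'scheduled-2021-08-15-23_41_00', 'scheduled-2021-08-16-23_41_00',
    'scheduled-2021-08-17-02_41_00', 'scheduled-2021-08-17-03_41_00',
    'scheduled-2021-08-17-04_41_00', 'scheduled-2021-08-17-05_41_00',
    'scheduled-2021-08-17-06_41_00', 'scheduled-2021-08-17-07_41_00',
    'scheduled-2021-08-17-08_41_00',
]
print('pruned:', sorted(buggy_prune_set(candidates, {'h': 6, 'd': 2})))
# keeps the six newest hourlies plus all four daily snapshots and prunes only
# scheduled-2021-08-17-02_41_00, matching the log above.

Tracking the kept count per retention period, as in the earlier sketch, would leave only two daily snapshots in the keep set.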


Related issues: 1 (0 open, 1 closed)

Copied to mgr - Backport #52412: pacific: mgr/snap-schedule: retention set calculation for multiple retention specs is wrong (Resolved, Laura Paduano)
#1

Updated by Patrick Donnelly over 2 years ago

  • Assignee set to Jan Fajerski
  • Target version set to v17.0.0
  • Source set to Community (user)
  • Backport set to pacific
#2

Updated by Venky Shankar over 2 years ago

  • Status changed from Fix Under Review to Pending Backport
#3

Updated by Backport Bot over 2 years ago

  • Copied to Backport #52412: pacific: mgr/snap-schedule: retention set calculation for multiple retention specs is wrong added
#4

Updated by Loïc Dachary over 2 years ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
