Bug #17193

truncate can cause unflushed snapshot data loss

Added by John Spray over 7 years ago. Updated over 7 years ago.

Status: Resolved
Priority: Normal
Category: -
Target version: -
% Done: 0%
Source: other
Backport: jewel
Regression: No
Severity: 3 - minor

Description

Failure in test TestStrays.test_snapshot_remove
http://qa-proxy.ceph.com/teuthology/jspray-2016-08-30_12:07:21-kcephfs:recovery-master-testing-basic-mira/392448/teuthology.log

This differs from the main snapshot tests in that we do an unmount/mount between creating a snapshot and trying to read it back, so I wonder if this is a bug in unmounting, where we should be waiting to write back buffered data?

The sequence of operations is:

  • write some data to snapdir/subdir/file_a
  • snapshot snapdir
  • write some other data to snapdir/subdir/file_a
  • unlink snapdir/subdir/file_a and snapdir/subdir
  • unmount the client
  • mount the client again
  • read back snapdir/.snap/<snapshot>/subdir/file_a and check the original data is still there

I haven't tried reproducing this by hand outside of the automated test; that would be the next natural step (see the sketch below).
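
A rough by-hand version of those steps might look like the following sketch. The mountpoint /mnt/cephfs, the mount command, the snapshot name "snap1", and the assumption that the second write truncates the file are all illustrative; the ticket only specifies the step list above.

    #!/usr/bin/env python3
    # By-hand reproduction sketch for bug #17193 (illustrative only).
    # Assumed environment: a CephFS kernel client that can be mounted at
    # MOUNTPOINT with MOUNT_CMD; adjust both for the local cluster.
    import os
    import subprocess

    MOUNTPOINT = "/mnt/cephfs"                      # assumed mountpoint
    MOUNT_CMD = ["mount", "-t", "ceph", "mon-host:/", MOUNTPOINT,
                 "-o", "name=admin,secretfile=/etc/ceph/admin.secret"]

    snapdir = os.path.join(MOUNTPOINT, "snapdir")
    subdir = os.path.join(snapdir, "subdir")
    file_a = os.path.join(subdir, "file_a")
    os.makedirs(subdir, exist_ok=True)

    # write some data to snapdir/subdir/file_a
    with open(file_a, "w") as f:
        f.write("original data\n")

    # snapshot snapdir (on CephFS, mkdir inside the hidden .snap directory)
    os.mkdir(os.path.join(snapdir, ".snap", "snap1"))

    # write some other data to file_a (open("w") truncates first, which is
    # where the suspected unflushed-snapshot-data path would be exercised)
    with open(file_a, "w") as f:
        f.write("new data\n")

    # unlink snapdir/subdir/file_a and snapdir/subdir
    os.unlink(file_a)
    os.rmdir(subdir)

    # unmount and remount the client
    subprocess.check_call(["umount", MOUNTPOINT])
    subprocess.check_call(MOUNT_CMD)

    # read back the snapshotted file and check the original data is there
    snap_file = os.path.join(snapdir, ".snap", "snap1", "subdir", "file_a")
    with open(snap_file) as f:
        assert f.read() == "original data\n", "snapshot data was lost"

If that reproduces the failure, adding an explicit os.sync() (or an fsync on file_a) just before the umount would be a simple way to probe the theory that the client is not writing back buffered snapshot data on unmount.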


Related issues (0 open, 2 closed)

Related to CephFS - Bug #18211: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays) failed at data pool empty check (Resolved, Zheng Yan, 12/09/2016)

Copied to CephFS - Backport #18103: jewel: truncate can cause unflushed snapshot data loss (Resolved, Loïc Dachary)
