Bug #17193
truncate can cause unflushed snapshot data loss
Status:
Closed
% Done:
0%
Source:
other
Tags:
Backport:
jewel
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Failure in test TestStrays.test_snapshot_remove
http://qa-proxy.ceph.com/teuthology/jspray-2016-08-30_12:07:21-kcephfs:recovery-master-testing-basic-mira/392448/teuthology.log
This differs from the main snapshot tests in that we do an unmount/mount between creating the snapshot and trying to read it back, so I wonder if this is a bug in unmount, where we should be waiting for buffered snapshot data to be written back before tearing down the client.
The sequence of operations is:
- write some data to snapdir/subdir/file_a
- snapshot snapdir
- write some other data to snapdir/subdir/file_a
- unlink snapdir/subdir/file_a and snapdir/subdir
- unmount the client
- mount the client again
- read back snapdir/.snap/<snapshot>/subdir/file_a and check the original data is still there
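The sequence above could be sketched as a shell script. This is a dry run (each command is echoed rather than executed), since actually running it needs a mounted CephFS; the mountpoint `$MNT`, the snapshot name `mysnap`, the monitor address placeholder, and the file sizes are illustrative assumptions, not details from the original report.

```shell
#!/bin/sh
# Dry-run sketch of the failing sequence from TestStrays.test_snapshot_remove.
# To run for real against a CephFS mount, replace run() with direct execution.
set -eu

MNT=${MNT:-/mnt/cephfs}      # assumed CephFS mountpoint
run() { echo "+ $*"; }       # dry-run helper: print the command instead of executing it

run mkdir -p "$MNT/snapdir/subdir"
# write some data to snapdir/subdir/file_a
run dd if=/dev/urandom of="$MNT/snapdir/subdir/file_a" bs=4k count=4
run cp "$MNT/snapdir/subdir/file_a" /tmp/original    # keep a copy to compare later
# snapshot snapdir (CephFS snapshots are taken via mkdir in the .snap directory)
run mkdir "$MNT/snapdir/.snap/mysnap"
# write some other data to snapdir/subdir/file_a
run dd if=/dev/urandom of="$MNT/snapdir/subdir/file_a" bs=4k count=4
# unlink the file and directory
run rm "$MNT/snapdir/subdir/file_a"
run rmdir "$MNT/snapdir/subdir"
# unmount: any buffered snapshot data must be flushed here
run umount "$MNT"
# remount (monitor address elided; options depend on the cluster)
run mount -t ceph 'mon-addr:/' "$MNT"
# read back from the snapshot and check the original data survived
run cmp /tmp/original "$MNT/snapdir/.snap/mysnap/subdir/file_a"
```

If unmount fails to write back the buffered post-snapshot state correctly, the final comparison against the snapshot contents is where the loss would show up.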
I haven't tried reproducing this by hand outside of the automated test; that would be the natural next step.