Bug #23629
RBD corruption after power off
Status:
Closed
Priority:
Normal
Assignee:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Hello,
We have run into a nasty bug regarding RBD in Ceph Luminous, and we have encountered it across multiple different hardware configurations (various disk types, machine types, etc.). We use Ceph as a storage backend for OpenStack, and we found that after an unclean shutdown of the hypervisors, all of the volumes hosted on Ceph become corrupted and unusable. Everything works correctly when the storage backend is on Jewel. I'm not sure whether this is a bug or a mistake on our part, but given that it has happened to us multiple times, it looks like a bug. Our hypervisor and mon configs are attached.
Files