Bug #1403 (Closed): osd: FAILED assert(0 == "we got a bad state machine event")
% Done: 0%
#1
Updated by Sam Lang over 12 years ago
- File core.gz core.gz added
- File osd.10.error osd.10.error added
Oops. Here's the rest of the report:
../../src/osd/PG.cc: 3891: FAILED assert(0 == "we got a bad state machine event")
The usage scenario here was 6-8 osds per node (5 nodes total). Two nodes were shut down, causing the osds on those nodes to disappear from the map, and rebalancing to begin. That all seemed to go fine until the nodes and osds on the two shut-down nodes restarted. This caused many of the osds (but not all) on the other nodes to crash. They all seem to fail at the same assertion above. The end of the log from one of the crashed osds is attached, as well as a core file.
Updated by Sage Weil over 12 years ago
- Target version set to v0.36
- Position set to 2
Updated by Sage Weil over 12 years ago
- Position changed from 12 to 10
Updated by Sage Weil over 12 years ago
- Subject changed from osd crash to osd: FAILED assert(0 == "we got a bad state machine event")
- Position changed from 10 to 10
Updated by Sage Weil over 12 years ago
- Position changed from 10 to 7
Updated by Sage Weil over 12 years ago
- Target version changed from v0.36 to v0.37