Bug #12047

closed

monitor segmentation fault on faulty crushmap

Added by Jonas Weismüller almost 9 years ago. Updated almost 9 years ago.

Status:
Duplicate
Priority:
Normal
Assignee:
-
Category:
Monitor
Target version:
-
% Done:

0%

Source:
Community (user)
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi,
I accidentally removed a root bucket with "ceph osd crush remove platter" (root=platter). The platter bucket was still referenced by a ruleset, and that ruleset was still in use by a pool. My issue looks similar to #9485.

Outline of the crushmap:

root platter {
        id -1           # do not change unnecessarily
        # weight 12.000
        alg straw
        hash 0  # rjenkins1
        item platter-rack1 weight 4.000
        item platter-rack2 weight 4.000
        item platter-rack3 weight 4.000
}

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take platter
        step chooseleaf firstn 0 type rack
        step emit
}

So far I have not been able to get the monitors up and running again.
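For illustration, here is a minimal sketch of the kind of referential-integrity check that would have prevented this state: refuse to remove a bucket while any rule still does a "step take" on it. The data structures and function names here are simplified stand-ins, not Ceph's actual code.

```python
# Minimal sketch (hypothetical, not Ceph's implementation): before
# mutating the crushmap, check that no rule still takes the bucket.

def rules_referencing(bucket, rules):
    """Return names of rules whose 'step take' still targets the bucket."""
    return [r["name"] for r in rules if bucket in r.get("takes", [])]

def safe_remove_bucket(bucket, buckets, rules):
    """Remove a bucket only if no rule references it; report blockers otherwise."""
    blockers = rules_referencing(bucket, rules)
    if blockers:
        return f"refusing to remove '{bucket}': still used by {blockers}"
    buckets.remove(bucket)
    return f"removed '{bucket}'"

# Mirrors the crushmap above: 'platter' is taken by replicated_ruleset.
buckets = ["platter", "platter-rack1", "platter-rack2", "platter-rack3"]
rules = [{"name": "replicated_ruleset", "takes": ["platter"]}]

print(safe_remove_bucket("platter", buckets, rules))
```

With this check in place the removal is rejected and the bucket stays in the map, instead of leaving rules pointing at a nonexistent item and crashing the monitor.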


Files

mon_crash_20150617.log (3.93 KB) mon_crash_20150617.log Jonas Weismüller, 06/17/2015 06:54 AM
store.db.tar.bz2 (865 KB) store.db.tar.bz2 Jonas Weismüller, 06/30/2015 12:46 PM

Related issues 2 (0 open, 2 closed)

Related to Ceph - Feature #12193: OSD's are not updating osdmap properly after monitoring crash (Resolved, 07/01/2015)

Is duplicate of Ceph - Bug #11680: mon crashes when "ceph osd tree 85 --format json" (Can't reproduce, Kefu Chai, 05/19/2015)

