Bug #4159 (closed)
after setting pool size to zero, osd's segv and apparently can't recover
% Done: 0%
Source: Development
Severity: 1 - critical
Description
While the various tools should probably disallow setting pool size to 0, it is currently possible, and if you do so, bad things happen. I believe it caused both OSDs to die, and it certainly prevents them from starting up, with a segv in
0x000000000123d782 in pg_interval_t::check_new_interval (
    old_acting=std::vector of length 0, capacity 0,
    new_acting=std::vector of length 2, capacity 2 = {...},
    old_up=std::vector of length 0, capacity 0,
    new_up=std::vector of length 2, capacity 2 = {...},
    same_interval_since=377, last_epoch_clean=375,
    osdmap=std::tr1::shared_ptr (count 13, weak 1) 0x20d2b60,
    lastmap=std::tr1::shared_ptr (count 1033, weak 1) 0x20d2680,
    pool_id=2, pgid=..., past_intervals=0x2976130, out=0x0)
    at osd/osd_types.cc:1602
I suspect there are some places where an "if (!v.empty())" check should be added.