Bug #11554
OSDs do not start after upgrade
% Done:
0%
Source:
other
Regression:
No
Severity:
3 - minor
Description
After upgrading from giant to hammer, some OSDs in my cluster refuse to start. I get the following output from ceph-osd -f -i <id>:
starting osd.1 at :/0 osd_data /run/ceph/osd.1 /dev/disk/by-partlabel/osd.1.J
osd/PG.cc: In function 'static epoch_t PG::peek_map_epoch(ObjectStore*, spg_t, ceph::bufferlist*)' thread 7f62bc44d780 time 2015-05-07 11:34:42.684738
osd/PG.cc: 2856: FAILED assert(values.size() == 1)
ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x78) [0xb110a8]
2: (PG::peek_map_epoch(ObjectStore*, spg_t, ceph::buffer::list*)+0xa01) [0x78ba61]
3: (OSD::load_pgs()+0xa64) [0x66bfa4]
4: (OSD::init()+0x132f) [0x66fc1f]
5: (main()+0x2a49) [0x60c6b9]
6: (__libc_start_main()+0xf0) [0x7f62b9cb0770]
7: (_start()+0x29) [0x612a69]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
terminate called after throwing an instance of 'ceph::FailedAssertion'
*** Caught signal (Aborted) **
in thread 7f62bc44d780
ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff)
1: ceph-osd() [0xa1c4c4]
2: (()+0x105f0) [0x7f62bb3305f0]
3: (gsignal()+0x38) [0x7f62b9cc3508]
4: (abort()+0x16a) [0x7f62b9cc491a]
5: (__gnu_cxx::__verbose_terminate_handler()+0x15d) [0x7f62ba7ee7ed]
6: (()+0x8c856) [0x7f62ba7ec856]
7: (()+0x8c8a1) [0x7f62ba7ec8a1]
8: (()+0x8cab8) [0x7f62ba7ecab8]
9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x25f) [0xb1128f]
10: (PG::peek_map_epoch(ObjectStore*, spg_t, ceph::buffer::list*)+0xa01) [0x78ba61]
11: (OSD::load_pgs()+0xa64) [0x66bfa4]
12: (OSD::init()+0x132f) [0x66fc1f]
13: (main()+0x2a49) [0x60c6b9]
14: (__libc_start_main()+0xf0) [0x7f62b9cb0770]
15: (_start()+0x29) [0x612a69]
The log file is here: https://yadi.sk/d/nOgWd6CRgUNbR
Related issues
History
#1 Updated by Michael Uleysky almost 8 years ago
Version is 0.94.1 with patch 4abacbe283a34f5cd8ae9805541d056b86cd5873 from issue 11429.
#2 Updated by Kefu Chai almost 8 years ago
- Status changed from New to Duplicate