Bug #36611
ceph-mds failure (Closed)
Description
2018-10-28 21:41:56.372 7fd652569700 -1 *** Caught signal (Aborted) **
 in thread 7fd652569700 thread_name:md_log_replay

 ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)
 1: /usr/bin/ceph-mds() [0x7bc6b0]
 2: (()+0x11390) [0x7fd661c6c390]
 3: (gsignal()+0x38) [0x7fd6613b9428]
 4: (abort()+0x16a) [0x7fd6613bb02a]
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x250) [0x7fd6623ff240]
 6: (()+0x3162b7) [0x7fd6623ff2b7]
 7: (EMetaBlob::replay(MDSRank*, LogSegment*, MDSlaveUpdate*)+0x5f4b) [0x7a7a6b]
 8: (EUpdate::replay(MDSRank*)+0x39) [0x7a8fa9]
 9: (MDLog::_replay_thread()+0x864) [0x752164]
 10: (MDLog::ReplayThread::entry()+0xd) [0x4f021d]
 11: (()+0x76ba) [0x7fd661c626ba]
 12: (clone()+0x6d) [0x7fd66148b41d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
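As the NOTE says, resolving the raw addresses requires the binary (or its disassembly). As a small aid, here is a hedged sketch that pulls the symbol, offset, and return address out of frames in this trace format; the regex and the `parse_frames` helper are illustrative, not part of any Ceph tooling:

```python
import re

# Frame lines in a ceph crash backtrace look like:
#   7: (EMetaBlob::replay(MDSRank*, LogSegment*, MDSlaveUpdate*)+0x5f4b) [0x7a7a6b]
# i.e. "<n>: (<symbol>+<hex offset>) [<hex address>]".
FRAME_RE = re.compile(r"^\s*\d+:\s+\((.*)\+(0x[0-9a-f]+)\)\s+\[(0x[0-9a-f]+)\]")

def parse_frames(trace: str):
    """Return (symbol, offset, address) tuples for each parseable frame.

    Frames without a "(symbol+offset)" part, such as
    "1: /usr/bin/ceph-mds() [0x7bc6b0]", are skipped.
    """
    frames = []
    for line in trace.splitlines():
        m = FRAME_RE.match(line)
        if m:
            frames.append((m.group(1), int(m.group(2), 16), int(m.group(3), 16)))
    return frames
```

The extracted offsets can then be looked up in the `objdump -rdS` output the NOTE refers to.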
Updated by Patrick Donnelly over 5 years ago
- Project changed from Ceph to CephFS
- Description updated (diff)
Updated by Patrick Donnelly over 5 years ago
- Status changed from New to Won't Fix
- Target version deleted (v13.2.2)
Please see the announcement on ceph-announce concerning CephFS. You need to downgrade the MDS daemons to 13.2.1.
Edit: Wait. Did you create this file system on 13.2.2, or is it older?
Updated by Jon Morby over 5 years ago
Patrick Donnelly wrote:
Please see the announcement on ceph-announce concerning CephFS. You need to downgrade the MDS daemons to 13.2.1.
Edit: Wait. Did you create this file system on 13.2.2, or is it older?
This was an accidental upgrade from 12.2.8 to 13.2.2
The intention was to do a minor release update with ceph-deploy ... instead, we found ceph-deploy forcing a major release upgrade without any warning, and then we entered a world of pain :(
Thankfully, Yan Zheng spent some time with me this morning and helped me restore the file system
Updated by Patrick Donnelly over 5 years ago
Please update the list with what you did to fix the FS so everyone can learn from the experience. =)