Bug #21416


osd/PGLog.cc: 60: FAILED assert(s <= can_rollback_to) after upgrade to luminous

Added by Patrick Fruh over 6 years ago. Updated over 5 years ago.

Status: Resolved
Priority: Urgent
Category: OSD
% Done: 0%
Source: Community (user)
Backport: mimic,luminous
Regression: No
Severity: 2 - major
ceph-qa-suite: fs

Description

After upgrading my hosts to luminous, I'm seeing loads of segfaults on some of my OSDs, which had no issues pre-luminous.

ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)

Sep 12 22:43:25 node1.ceph ceph-osd[1686]: /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.0/rpm/el7/BUILD/ceph-12.2.0/src/osd/PGLog.cc: In function 'void PGLog::IndexedLog::trim(CephContext*, eversion_t, std::set<eversion_t>*, std::set<std::basic_string<char> >*, bool*)' thread 7f920b3cd700 time 2017-09-12 22:43:25.412926
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.0/rpm/el7/BUILD/ceph-12.2.0/src/osd/PGLog.cc: 60: FAILED assert(s <= can_rollback_to)
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x55b57fccc510]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 2: (PGLog::IndexedLog::trim(CephContext*, eversion_t, std::set<eversion_t, std::less<eversion_t>, std::allocator<eversion_t> >*, std::set<std::string, std::less<std::string>, std::allocator<std::string> >*, bool*)+0xbd7) [0x55b57f8719d7]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 3: (PGLog::trim(eversion_t, pg_info_t&)+0xd9) [0x55b57f871b19]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 4: (OSD::handle_pg_trim(boost::intrusive_ptr<OpRequest>)+0x3a8) [0x55b57f740908]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 5: (OSD::dispatch_op(boost::intrusive_ptr<OpRequest>)+0x1b1) [0x55b57f76c121]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 6: (OSD::_dispatch(Message*)+0x3bc) [0x55b57f76cb4c]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 7: (OSD::ms_dispatch(Message*)+0x87) [0x55b57f76ce87]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 8: (DispatchQueue::entry()+0x792) [0x55b57ff42cb2]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 9: (DispatchQueue::DispatchThread::entry()+0xd) [0x55b57fd5775d]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 10: (()+0x7dc5) [0x7f9229d19dc5]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 11: (clone()+0x6d) [0x7f9228e0d76d]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: *** Caught signal (Aborted) **
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: in thread 7f920b3cd700 thread_name:ms_dispatch
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 1: (()+0xa23b21) [0x55b57fc8db21]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 2: (()+0xf370) [0x7f9229d21370]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 3: (gsignal()+0x37) [0x7f9228d4b1d7]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 4: (abort()+0x148) [0x7f9228d4c8c8]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x284) [0x55b57fccc684]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 6: (PGLog::IndexedLog::trim(CephContext*, eversion_t, std::set<eversion_t, std::less<eversion_t>, std::allocator<eversion_t> >*, std::set<std::string, std::less<std::string>, std::allocator<std::string> >*, bool*)+0xbd7) [0x55b57f8719d7]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 7: (PGLog::trim(eversion_t, pg_info_t&)+0xd9) [0x55b57f871b19]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 8: (OSD::handle_pg_trim(boost::intrusive_ptr<OpRequest>)+0x3a8) [0x55b57f740908]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 9: (OSD::dispatch_op(boost::intrusive_ptr<OpRequest>)+0x1b1) [0x55b57f76c121]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 10: (OSD::_dispatch(Message*)+0x3bc) [0x55b57f76cb4c]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 11: (OSD::ms_dispatch(Message*)+0x87) [0x55b57f76ce87]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 12: (DispatchQueue::entry()+0x792) [0x55b57ff42cb2]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 13: (DispatchQueue::DispatchThread::entry()+0xd) [0x55b57fd5775d]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 14: (()+0x7dc5) [0x7f9229d19dc5]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: 15: (clone()+0x6d) [0x7f9228e0d76d]
Sep 12 22:43:25 node1.ceph ceph-osd[1686]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Sep 12 22:43:25 node1.ceph systemd[1]: ceph-osd@1.service: main process exited, code=killed, status=6/ABRT
Sep 12 22:43:25 node1.ceph systemd[1]: Unit ceph-osd@1.service entered failed state.
Sep 12 22:43:25 node1.ceph systemd[1]: ceph-osd@1.service failed.

I've attached a fairly long, contiguous log from one of my OSDs, showing the segfaults that started with the upgrade to luminous.
I've even converted this OSD from FileStore to BlueStore, and it segfaults (with different errors) there as well.

You can see the segfaults happening in the threads ms_dispatch, bstore_kv_sync, ceph-osd, rocksdb:bg0, tp_osd_tp, and tp_peering:

Sep 16 12:02:56 node1.ceph systemd[1]: Starting Ceph object storage daemon osd.1...
Sep 16 12:02:56 node1.ceph systemd[1]: Started Ceph object storage daemon osd.1.
Sep 16 12:02:56 node1.ceph ceph-osd[13875]: 2017-09-16 12:02:56.257560 7fe478ee1d00 -1 Public network was set, but cluster network was not set
Sep 16 12:02:56 node1.ceph ceph-osd[13875]: 2017-09-16 12:02:56.257566 7fe478ee1d00 -1     Using public network also for cluster network
Sep 16 12:02:56 node1.ceph ceph-osd[13875]: starting osd.1 at - osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: *** Caught signal (Segmentation fault) **
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: in thread 7fe478ee1d00 thread_name:ceph-osd
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 1: (()+0xa23b21) [0x55c8ae72eb21]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 2: (()+0xf5e0) [0x7fe4762f75e0]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 3: (()+0x1cdff) [0x7fe478ac4dff]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 4: (rocksdb::BlockBasedTable::PutDataBlockToCache(rocksdb::Slice const&, rocksdb::Slice const&, rocksdb::Cache*, rocksdb::Cache*, rocksdb::ReadOptions const&, rocksdb::ImmutableCFOptions const&, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, rocksdb::Block*, unsigned int, rocksdb::Slice const&, unsigned long, bool, rocksdb::Cache::Priority)+0xd6) [0x55c8aea9d5e6]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 5: (rocksdb::BlockBasedTable::MaybeLoadDataBlockToCache(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::Slice, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, bool)+0x3dc) [0x55c8aea9e67c]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 6: (rocksdb::BlockBasedTable::NewDataBlockIterator(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::BlockIter*, bool, rocksdb::Status)+0x127) [0x55c8aea9e8d7]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 7: (rocksdb::BlockBasedTable::BlockEntryIteratorState::NewSecondaryIterator(rocksdb::Slice const&)+0x89) [0x55c8aeaa6ef9]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 8: (()+0xdbd596) [0x55c8aeac8596]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 9: (()+0xdbd82c) [0x55c8aeac882c]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 10: (()+0xdbd8a6) [0x55c8aeac88a6]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 11: (rocksdb::MergingIterator::Next()+0x24d) [0x55c8aeab053d]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 12: (rocksdb::DBIter::Next()+0xa6) [0x55c8aeb2f5b6]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 13: (RocksDBStore::RocksDBWholeSpaceIteratorImpl::next()+0x9a) [0x55c8ae684faa]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 14: (BitmapFreelistManager::enumerate_next(unsigned long*, unsigned long*)+0xd7) [0x55c8ae6dbbd7]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 15: (BlueStore::_open_alloc()+0x1dd) [0x55c8ae5dbecd]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 16: (BlueStore::_mount(bool)+0x443) [0x55c8ae648b83]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 17: (OSD::init()+0x3ba) [0x55c8ae211eaa]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 18: (main()+0x2def) [0x55c8ae11981f]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 19: (__libc_start_main()+0xf5) [0x7fe47530cc05]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: 20: (()+0x4acb56) [0x55c8ae1b7b56]
Sep 16 12:02:57 node1.ceph ceph-osd[13875]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Sep 16 12:02:57 node1.ceph systemd[1]: ceph-osd@1.service: main process exited, code=killed, status=11/SEGV
Sep 16 12:02:57 node1.ceph systemd[1]: Unit ceph-osd@1.service entered failed state.
Sep 16 12:02:57 node1.ceph systemd[1]: ceph-osd@1.service failed.
Sep 16 12:03:17 node1.ceph systemd[1]: ceph-osd@1.service holdoff time over, scheduling restart.
Sep 16 12:03:17 node1.ceph systemd[1]: Starting Ceph object storage daemon osd.1...
Sep 16 12:03:17 node1.ceph systemd[1]: Started Ceph object storage daemon osd.1.
Sep 16 12:03:17 node1.ceph ceph-osd[14310]: 2017-09-16 12:03:17.519046 7f5ae3163d00 -1 Public network was set, but cluster network was not set
Sep 16 12:03:17 node1.ceph ceph-osd[14310]: 2017-09-16 12:03:17.519053 7f5ae3163d00 -1     Using public network also for cluster network
Sep 16 12:03:17 node1.ceph ceph-osd[14310]: starting osd.1 at - osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 2017-09-16 12:04:32.154790 7f5ae3163d00 -1 osd.1 19911 log_to_monitors {default=true}
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: *** Caught signal (Segmentation fault) **
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: in thread 7f5ac1ff6700 thread_name:tp_peering
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 1: (()+0xa23b21) [0x56095c70db21]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 2: (()+0xf5e0) [0x7f5ae05795e0]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 3: (()+0x1cdff) [0x7f5ae2d46dff]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 4: (rocksdb::BlockBasedTable::PutDataBlockToCache(rocksdb::Slice const&, rocksdb::Slice const&, rocksdb::Cache*, rocksdb::Cache*, rocksdb::ReadOptions const&, rocksdb::ImmutableCFOptions const&, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, rocksdb::Block*, unsigned int, rocksdb::Slice const&, unsigned long, bool, rocksdb::Cache::Priority)+0xd6) [0x56095ca7c5e6]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 5: (rocksdb::BlockBasedTable::MaybeLoadDataBlockToCache(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::Slice, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, bool)+0x3dc) [0x56095ca7d67c]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 6: (rocksdb::BlockBasedTable::NewDataBlockIterator(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::BlockIter*, bool, rocksdb::Status)+0x127) [0x56095ca7d8d7]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 7: (rocksdb::BlockBasedTable::BlockEntryIteratorState::NewSecondaryIterator(rocksdb::Slice const&)+0x89) [0x56095ca85ef9]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 8: (()+0xdbd596) [0x56095caa7596]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 9: (()+0xdbdb5d) [0x56095caa7b5d]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 10: (()+0xdbdb6f) [0x56095caa7b6f]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 11: (rocksdb::MergingIterator::Seek(rocksdb::Slice const&)+0xce) [0x56095ca8fc8e]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 12: (rocksdb::DBIter::Seek(rocksdb::Slice const&)+0x174) [0x56095cb0f0e4]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 13: (RocksDBStore::RocksDBWholeSpaceIteratorImpl::lower_bound(std::string const&, std::string const&)+0xa2) [0x56095c664962]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 14: (BlueStore::_omap_rmkey_range(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>&, std::string const&, std::string const&)+0x111) [0x56095c5c6c91]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 15: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)+0x175e) [0x56095c62332e]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 16: (BlueStore::queue_transactions(ObjectStore::Sequencer*, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x3a0) [0x56095c6240a0]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 17: (ObjectStore::queue_transaction(ObjectStore::Sequencer*, ObjectStore::Transaction&&, Context*, Context*, Context*, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x171) [0x56095c226631]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 18: (OSD::dispatch_context_transaction(PG::RecoveryCtx&, PG*, ThreadPool::TPHandle*)+0x76) [0x56095c1b3456]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 19: (OSD::process_peering_events(std::list<PG*, std::allocator<PG*> > const&, ThreadPool::TPHandle&)+0x3bb) [0x56095c1ddf3b]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 20: (OSD::PeeringWQ::_process(std::list<PG*, std::allocator<PG*> > const&, ThreadPool::TPHandle&)+0x17) [0x56095c23f087]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 21: (ThreadPool::worker(ThreadPool::WorkThread*)+0xa8e) [0x56095c7530fe]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 22: (ThreadPool::WorkThread::entry()+0x10) [0x56095c753fe0]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 23: (()+0x7e25) [0x7f5ae0571e25]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 24: (clone()+0x6d) [0x7f5adf66534d]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: -5413> 2017-09-16 12:03:17.519046 7f5ae3163d00 -1 Public network was set, but cluster network was not set
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: -5412> 2017-09-16 12:03:17.519053 7f5ae3163d00 -1     Using public network also for cluster network
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: -4086> 2017-09-16 12:04:32.154790 7f5ae3163d00 -1 osd.1 19911 log_to_monitors {default=true}
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 0> 2017-09-16 12:04:32.798442 7f5ac1ff6700 -1 *** Caught signal (Segmentation fault) **
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: in thread 7f5ac1ff6700 thread_name:tp_peering
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 1: (()+0xa23b21) [0x56095c70db21]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 2: (()+0xf5e0) [0x7f5ae05795e0]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 3: (()+0x1cdff) [0x7f5ae2d46dff]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 4: (rocksdb::BlockBasedTable::PutDataBlockToCache(rocksdb::Slice const&, rocksdb::Slice const&, rocksdb::Cache*, rocksdb::Cache*, rocksdb::ReadOptions const&, rocksdb::ImmutableCFOptions const&, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, rocksdb::Block*, unsigned int, rocksdb::Slice const&, unsigned long, bool, rocksdb::Cache::Priority)+0xd6) [0x56095ca7c5e6]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 5: (rocksdb::BlockBasedTable::MaybeLoadDataBlockToCache(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::Slice, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, bool)+0x3dc) [0x56095ca7d67c]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 6: (rocksdb::BlockBasedTable::NewDataBlockIterator(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::BlockIter*, bool, rocksdb::Status)+0x127) [0x56095ca7d8d7]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 7: (rocksdb::BlockBasedTable::BlockEntryIteratorState::NewSecondaryIterator(rocksdb::Slice const&)+0x89) [0x56095ca85ef9]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 8: (()+0xdbd596) [0x56095caa7596]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 9: (()+0xdbdb5d) [0x56095caa7b5d]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 10: (()+0xdbdb6f) [0x56095caa7b6f]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 11: (rocksdb::MergingIterator::Seek(rocksdb::Slice const&)+0xce) [0x56095ca8fc8e]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 12: (rocksdb::DBIter::Seek(rocksdb::Slice const&)+0x174) [0x56095cb0f0e4]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 13: (RocksDBStore::RocksDBWholeSpaceIteratorImpl::lower_bound(std::string const&, std::string const&)+0xa2) [0x56095c664962]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 14: (BlueStore::_omap_rmkey_range(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>&, std::string const&, std::string const&)+0x111) [0x56095c5c6c91]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 15: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)+0x175e) [0x56095c62332e]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 16: (BlueStore::queue_transactions(ObjectStore::Sequencer*, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x3a0) [0x56095c6240a0]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 17: (ObjectStore::queue_transaction(ObjectStore::Sequencer*, ObjectStore::Transaction&&, Context*, Context*, Context*, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x171) [0x56095c226631]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 18: (OSD::dispatch_context_transaction(PG::RecoveryCtx&, PG*, ThreadPool::TPHandle*)+0x76) [0x56095c1b3456]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 19: (OSD::process_peering_events(std::list<PG*, std::allocator<PG*> > const&, ThreadPool::TPHandle&)+0x3bb) [0x56095c1ddf3b]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 20: (OSD::PeeringWQ::_process(std::list<PG*, std::allocator<PG*> > const&, ThreadPool::TPHandle&)+0x17) [0x56095c23f087]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 21: (ThreadPool::worker(ThreadPool::WorkThread*)+0xa8e) [0x56095c7530fe]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 22: (ThreadPool::WorkThread::entry()+0x10) [0x56095c753fe0]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 23: (()+0x7e25) [0x7f5ae0571e25]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 24: (clone()+0x6d) [0x7f5adf66534d]
Sep 16 12:04:32 node1.ceph ceph-osd[14310]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Sep 16 12:04:32 node1.ceph systemd[1]: ceph-osd@1.service: main process exited, code=killed, status=11/SEGV
Sep 16 12:04:32 node1.ceph systemd[1]: Unit ceph-osd@1.service entered failed state.
Sep 16 12:04:32 node1.ceph systemd[1]: ceph-osd@1.service failed.
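When triaging several of these dumps, it can help to pull the individual frames out of the journalctl output before feeding the addresses to `objdump -rdS` or `addr2line` as the NOTE line suggests. A minimal sketch (not part of Ceph; the regex assumes the exact `ceph-osd[<pid>]: N: (symbol+0xoff) [0xaddr]` journald format shown in the log above):

```python
import re

# Matches backtrace lines of the form:
#   ... ceph-osd[14310]: 14: (Symbol(args)+0x111) [0x56095c5c6c91]
FRAME_RE = re.compile(
    r'ceph-osd\[\d+\]:\s+(\d+):\s+\((.*)\+(0x[0-9a-f]+)\)\s+\[(0x[0-9a-f]+)\]'
)

def parse_frame(line):
    """Return (frame_no, symbol, offset, address) or None for non-frame lines."""
    m = FRAME_RE.search(line)
    if not m:
        return None
    no, symbol, offset, addr = m.groups()
    return int(no), symbol, offset, addr

# Hypothetical copy of frame 14 from the dump above.
sample = ('Sep 16 12:04:32 node1.ceph ceph-osd[14310]: 14: '
          '(BlueStore::_omap_rmkey_range(BlueStore::TransContext*, '
          'boost::intrusive_ptr<BlueStore::Collection>&, '
          'boost::intrusive_ptr<BlueStore::Onode>&, std::string const&, '
          'std::string const&)+0x111) [0x56095c5c6c91]')

frame = parse_frame(sample)
print(frame[0], frame[2], frame[3])  # -> 14 0x111 0x56095c5c6c91
```

The extracted `0x…` addresses are runtime addresses; with a position-independent binary they still have to be rebased against the load address before symbolizing.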

Files

osd.log (688 KB) - Patrick Fruh, 09/16/2017 10:44 AM

Related issues (2): 0 open, 2 closed

Copied to Ceph - Backport #35072: luminous: osd/PGLog.cc: 60: FAILED assert(s <= can_rollback_to) after upgrade to luminous (Resolved, Neha Ojha)
Copied to Ceph - Backport #35073: mimic: osd/PGLog.cc: 60: FAILED assert(s <= can_rollback_to) after upgrade to luminous (Resolved, Neha Ojha)
