Bug #20557 » osd.14-segfault.txt

SSD - Mikko Tanner, 08/27/2017 02:05 PM

ceph-osd[21481]: *** Caught signal (Segmentation fault) **
ceph-osd[21481]: in thread 7fdc477eb700 thread_name:tp_osd_tp
ceph-osd[21481]: ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
ceph-osd[21481]: 1: (()+0xa542b4) [0xa1acc332b4]
ceph-osd[21481]: 2: (()+0x11390) [0x7fdc65c81390]
ceph-osd[21481]: 3: (()+0x1f8af) [0x7fdc675ae8af]
ceph-osd[21481]: 4: (rocksdb::BlockBasedTable::PutDataBlockToCache(rocksdb::Slice const&, rocksdb::Slice const&, rocksdb::Cache*, rocksdb::Cache*, rocksdb::ReadOptions const&, rocksdb::ImmutableCFOptions const&, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, rocksdb::Block*, unsigned int, rocksdb::Slice const&, unsigned long, bool, rocksdb::Cache::Priority)+0x1d9) [0xa1acfff1e9]
ceph-osd[21481]: 5: (rocksdb::BlockBasedTable::MaybeLoadDataBlockToCache(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::Slice, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, bool)+0x3b7) [0xa1ad000bc7]
ceph-osd[21481]: 6: (rocksdb::BlockBasedTable::NewDataBlockIterator(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::BlockIter*, bool, rocksdb::Status)+0x2ac) [0xa1ad000f0c]
ceph-osd[21481]: 7: (rocksdb::BlockBasedTable::BlockEntryIteratorState::NewSecondaryIterator(rocksdb::Slice const&)+0x97) [0xa1ad009687]
ceph-osd[21481]: 8: (()+0xe5594e) [0xa1ad03494e]
ceph-osd[21481]: 9: (()+0xe55a16) [0xa1ad034a16]
ceph-osd[21481]: 10: (rocksdb::MergingIterator::Next()+0x449) [0xa1ad017ce9]
ceph-osd[21481]: 11: (rocksdb::DBIter::FindNextUserEntryInternal(bool, bool)+0x182) [0xa1ad0b50b2]
ceph-osd[21481]: 12: (rocksdb::DBIter::Seek(rocksdb::Slice const&)+0x314) [0xa1ad0b6454]
ceph-osd[21481]: 13: (RocksDBStore::RocksDBWholeSpaceIteratorImpl::lower_bound(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x46) [0xa1acb76036]
ceph-osd[21481]: 14: (BlueStore::_omap_rmkey_range(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x14e) [0xa1acad0a3e]
ceph-osd[21481]: 15: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)+0x1e50) [0xa1acb32930]
ceph-osd[21481]: 16: (BlueStore::queue_transactions(ObjectStore::Sequencer*, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x52e) [0xa1acb3379e]
ceph-osd[21481]: 17: (PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x66) [0xa1ac85db86]
ceph-osd[21481]: 18: (ReplicatedBackend::do_repop(boost::intrusive_ptr<OpRequest>)+0xba9) [0xa1ac982139]
ceph-osd[21481]: 19: (ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x294) [0xa1ac98b184]
ceph-osd[21481]: 20: (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x50) [0xa1ac89b6e0]
ceph-osd[21481]: 21: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x53d) [0xa1ac8003cd]
ceph-osd[21481]: 22: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x3a9) [0xa1ac684fd9]
ceph-osd[21481]: 23: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest> const&)+0x57) [0xa1ac91c3f7]
ceph-osd[21481]: 24: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x130e) [0xa1ac6ac57e]
ceph-osd[21481]: 25: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x884) [0xa1acc7ae34]
ceph-osd[21481]: 26: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0xa1acc7de70]
ceph-osd[21481]: 27: (()+0x76ba) [0x7fdc65c776ba]
ceph-osd[21481]: 28: (clone()+0x6d) [0x7fdc64cee3dd]

ceph-osd[21481]: 2017-08-27 15:51:11.623997 7fdc477eb700 -1 *** Caught signal (Segmentation fault) **
ceph-osd[21481]: in thread 7fdc477eb700 thread_name:tp_osd_tp
ceph-osd[21481]: ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
ceph-osd[21481]: 1: (()+0xa542b4) [0xa1acc332b4]
ceph-osd[21481]: 2: (()+0x11390) [0x7fdc65c81390]
ceph-osd[21481]: 3: (()+0x1f8af) [0x7fdc675ae8af]
ceph-osd[21481]: 4: (rocksdb::BlockBasedTable::PutDataBlockToCache(rocksdb::Slice const&, rocksdb::Slice const&, rocksdb::Cache*, rocksdb::Cache*, rocksdb::ReadOptions const&, rocksdb::ImmutableCFOptions const&, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, rocksdb::Block*, unsigned int, rocksdb::Slice const&, unsigned long, bool, rocksdb::Cache::Priority)+0x1d9) [0xa1acfff1e9]
ceph-osd[21481]: 5: (rocksdb::BlockBasedTable::MaybeLoadDataBlockToCache(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::Slice, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, bool)+0x3b7) [0xa1ad000bc7]
ceph-osd[21481]: 6: (rocksdb::BlockBasedTable::NewDataBlockIterator(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::BlockIter*, bool, rocksdb::Status)+0x2ac) [0xa1ad000f0c]
ceph-osd[21481]: 7: (rocksdb::BlockBasedTable::BlockEntryIteratorState::NewSecondaryIterator(rocksdb::Slice const&)+0x97) [0xa1ad009687]
ceph-osd[21481]: 8: (()+0xe5594e) [0xa1ad03494e]
ceph-osd[21481]: 9: (()+0xe55a16) [0xa1ad034a16]
ceph-osd[21481]: 10: (rocksdb::MergingIterator::Next()+0x449) [0xa1ad017ce9]
ceph-osd[21481]: 11: (rocksdb::DBIter::FindNextUserEntryInternal(bool, bool)+0x182) [0xa1ad0b50b2]
ceph-osd[21481]: 12: (rocksdb::DBIter::Seek(rocksdb::Slice const&)+0x314) [0xa1ad0b6454]
ceph-osd[21481]: 13: (RocksDBStore::RocksDBWholeSpaceIteratorImpl::lower_bound(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x46) [0xa1acb76036]
ceph-osd[21481]: 14: (BlueStore::_omap_rmkey_range(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x14e) [0xa1acad0a3e]
ceph-osd[21481]: 15: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)+0x1e50) [0xa1acb32930]
ceph-osd[21481]: 16: (BlueStore::queue_transactions(ObjectStore::Sequencer*, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x52e) [0xa1acb3379e]
ceph-osd[21481]: 17: (PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x66) [0xa1ac85db86]
ceph-osd[21481]: 18: (ReplicatedBackend::do_repop(boost::intrusive_ptr<OpRequest>)+0xba9) [0xa1ac982139]
ceph-osd[21481]: 19: (ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x294) [0xa1ac98b184]
ceph-osd[21481]: 20: (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x50) [0xa1ac89b6e0]
ceph-osd[21481]: 21: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x53d) [0xa1ac8003cd]
ceph-osd[21481]: 22: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x3a9) [0xa1ac684fd9]
ceph-osd[21481]: 23: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest> const&)+0x57) [0xa1ac91c3f7]
ceph-osd[21481]: 24: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x130e) [0xa1ac6ac57e]
ceph-osd[21481]: 25: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x884) [0xa1acc7ae34]
ceph-osd[21481]: 26: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0xa1acc7de70]
ceph-osd[21481]: 27: (()+0x76ba) [0x7fdc65c776ba]
ceph-osd[21481]: 28: (clone()+0x6d) [0x7fdc64cee3dd]
ceph-osd[21481]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

ceph-osd[21481]: 0> 2017-08-27 15:51:11.623997 7fdc477eb700 -1 *** Caught signal (Segmentation fault) **
ceph-osd[21481]: in thread 7fdc477eb700 thread_name:tp_osd_tp
ceph-osd[21481]: ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
ceph-osd[21481]: 1: (()+0xa542b4) [0xa1acc332b4]
ceph-osd[21481]: 2: (()+0x11390) [0x7fdc65c81390]
ceph-osd[21481]: 3: (()+0x1f8af) [0x7fdc675ae8af]
ceph-osd[21481]: 4: (rocksdb::BlockBasedTable::PutDataBlockToCache(rocksdb::Slice const&, rocksdb::Slice const&, rocksdb::Cache*, rocksdb::Cache*, rocksdb::ReadOptions const&, rocksdb::ImmutableCFOptions const&, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, rocksdb::Block*, unsigned int, rocksdb::Slice const&, unsigned long, bool, rocksdb::Cache::Priority)+0x1d9) [0xa1acfff1e9]
ceph-osd[21481]: 5: (rocksdb::BlockBasedTable::MaybeLoadDataBlockToCache(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::Slice, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, bool)+0x3b7) [0xa1ad000bc7]
ceph-osd[21481]: 6: (rocksdb::BlockBasedTable::NewDataBlockIterator(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::BlockIter*, bool, rocksdb::Status)+0x2ac) [0xa1ad000f0c]
ceph-osd[21481]: 7: (rocksdb::BlockBasedTable::BlockEntryIteratorState::NewSecondaryIterator(rocksdb::Slice const&)+0x97) [0xa1ad009687]
ceph-osd[21481]: 8: (()+0xe5594e) [0xa1ad03494e]
ceph-osd[21481]: 9: (()+0xe55a16) [0xa1ad034a16]
ceph-osd[21481]: 10: (rocksdb::MergingIterator::Next()+0x449) [0xa1ad017ce9]
ceph-osd[21481]: 11: (rocksdb::DBIter::FindNextUserEntryInternal(bool, bool)+0x182) [0xa1ad0b50b2]
ceph-osd[21481]: 12: (rocksdb::DBIter::Seek(rocksdb::Slice const&)+0x314) [0xa1ad0b6454]
ceph-osd[21481]: 13: (RocksDBStore::RocksDBWholeSpaceIteratorImpl::lower_bound(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x46) [0xa1acb76036]
ceph-osd[21481]: 14: (BlueStore::_omap_rmkey_range(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x14e) [0xa1acad0a3e]
ceph-osd[21481]: 15: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)+0x1e50) [0xa1acb32930]
ceph-osd[21481]: 16: (BlueStore::queue_transactions(ObjectStore::Sequencer*, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x52e) [0xa1acb3379e]
ceph-osd[21481]: 17: (PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x66) [0xa1ac85db86]
ceph-osd[21481]: 18: (ReplicatedBackend::do_repop(boost::intrusive_ptr<OpRequest>)+0xba9) [0xa1ac982139]
ceph-osd[21481]: 19: (ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x294) [0xa1ac98b184]
ceph-osd[21481]: 20: (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x50) [0xa1ac89b6e0]
ceph-osd[21481]: 21: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x53d) [0xa1ac8003cd]
ceph-osd[21481]: 22: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x3a9) [0xa1ac684fd9]
ceph-osd[21481]: 23: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest> const&)+0x57) [0xa1ac91c3f7]
ceph-osd[21481]: 24: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x130e) [0xa1ac6ac57e]
ceph-osd[21481]: 25: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x884) [0xa1acc7ae34]
ceph-osd[21481]: 26: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0xa1acc7de70]
ceph-osd[21481]: 27: (()+0x76ba) [0x7fdc65c776ba]
ceph-osd[21481]: 28: (clone()+0x6d) [0x7fdc64cee3dd]
ceph-osd[21481]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

systemd[1]: ceph-osd@14.service: Main process exited, code=killed, status=11/SEGV
systemd[1]: ceph-osd@14.service: Unit entered failed state.
systemd[1]: ceph-osd@14.service: Failed with result 'signal'.
systemd[1]: ceph-osd@14.service: Service hold-off time over, scheduling restart.
systemd[1]: Stopped Ceph object storage daemon osd.14.
systemd[1]: Starting Ceph object storage daemon osd.14...
systemd[1]: Started Ceph object storage daemon osd.14.
ceph-osd[1616]: starting osd.14 at - osd_data /var/lib/ceph/osd/ceph-14 /var/lib/ceph/osd/ceph-14/journal

ceph-osd[1616]: *** Caught signal (Segmentation fault) **
ceph-osd[1616]: in thread 7f21c0a18e40 thread_name:ceph-osd
ceph-osd[1616]: ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
ceph-osd[1616]: 1: (()+0xa542b4) [0xe7e52be2b4]
ceph-osd[1616]: 2: (()+0x11390) [0x7f21becc9390]
ceph-osd[1616]: 3: (()+0x1f8af) [0x7f21c05f68af]
ceph-osd[1616]: 4: (rocksdb::BlockBasedTable::PutDataBlockToCache(rocksdb::Slice const&, rocksdb::Slice const&, rocksdb::Cache*, rocksdb::Cache*, rocksdb::ReadOptions const&, rocksdb::ImmutableCFOptions const&, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, rocksdb::Block*, unsigned int, rocksdb::Slice const&, unsigned long, bool, rocksdb::Cache::Priority)+0x1d9) [0xe7e568a1e9]
ceph-osd[1616]: 5: (rocksdb::BlockBasedTable::MaybeLoadDataBlockToCache(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::Slice, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, bool)+0x3b7) [0xe7e568bbc7]
ceph-osd[1616]: 6: (rocksdb::BlockBasedTable::NewDataBlockIterator(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::BlockIter*, bool, rocksdb::Status)+0x2ac) [0xe7e568bf0c]
ceph-osd[1616]: 7: (rocksdb::BlockBasedTable::BlockEntryIteratorState::NewSecondaryIterator(rocksdb::Slice const&)+0x97) [0xe7e5694687]
ceph-osd[1616]: 8: (()+0xe5594e) [0xe7e56bf94e]
ceph-osd[1616]: 9: (()+0xe55a16) [0xe7e56bfa16]
ceph-osd[1616]: 10: (()+0xe55b91) [0xe7e56bfb91]
ceph-osd[1616]: 11: (rocksdb::MergingIterator::Next()+0x449) [0xe7e56a2ce9]
ceph-osd[1616]: 12: (rocksdb::DBIter::Next()+0xd3) [0xe7e5740d53]
ceph-osd[1616]: 13: (RocksDBStore::RocksDBWholeSpaceIteratorImpl::next()+0x9a) [0xe7e51fee3a]
ceph-osd[1616]: 14: (BitmapFreelistManager::enumerate_next(unsigned long*, unsigned long*)+0x953) [0xe7e52645c3]
ceph-osd[1616]: 15: (BlueStore::_open_alloc()+0x213) [0xe7e514f4a3]
ceph-osd[1616]: 16: (BlueStore::_mount(bool)+0x3ce) [0xe7e51c230e]
ceph-osd[1616]: 17: (OSD::init()+0x3df) [0xe7e4d3da8f]
ceph-osd[1616]: 18: (main()+0x2eb8) [0xe7e4c50138]
ceph-osd[1616]: 19: (__libc_start_main()+0xf0) [0x7f21bdc4f830]
ceph-osd[1616]: 20: (_start()+0x29) [0xe7e4cdba69]

ceph-osd[1616]: 2017-08-27 15:51:34.723871 7f21c0a18e40 -1 *** Caught signal (Segmentation fault) **
ceph-osd[1616]: in thread 7f21c0a18e40 thread_name:ceph-osd
ceph-osd[1616]: ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
ceph-osd[1616]: 1: (()+0xa542b4) [0xe7e52be2b4]
ceph-osd[1616]: 2: (()+0x11390) [0x7f21becc9390]
ceph-osd[1616]: 3: (()+0x1f8af) [0x7f21c05f68af]
ceph-osd[1616]: 4: (rocksdb::BlockBasedTable::PutDataBlockToCache(rocksdb::Slice const&, rocksdb::Slice const&, rocksdb::Cache*, rocksdb::Cache*, rocksdb::ReadOptions const&, rocksdb::ImmutableCFOptions const&, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, rocksdb::Block*, unsigned int, rocksdb::Slice const&, unsigned long, bool, rocksdb::Cache::Priority)+0x1d9) [0xe7e568a1e9]
ceph-osd[1616]: 5: (rocksdb::BlockBasedTable::MaybeLoadDataBlockToCache(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::Slice, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, bool)+0x3b7) [0xe7e568bbc7]
ceph-osd[1616]: 6: (rocksdb::BlockBasedTable::NewDataBlockIterator(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::BlockIter*, bool, rocksdb::Status)+0x2ac) [0xe7e568bf0c]
ceph-osd[1616]: 7: (rocksdb::BlockBasedTable::BlockEntryIteratorState::NewSecondaryIterator(rocksdb::Slice const&)+0x97) [0xe7e5694687]
ceph-osd[1616]: 8: (()+0xe5594e) [0xe7e56bf94e]
ceph-osd[1616]: 9: (()+0xe55a16) [0xe7e56bfa16]
ceph-osd[1616]: 10: (()+0xe55b91) [0xe7e56bfb91]
ceph-osd[1616]: 11: (rocksdb::MergingIterator::Next()+0x449) [0xe7e56a2ce9]
ceph-osd[1616]: 12: (rocksdb::DBIter::Next()+0xd3) [0xe7e5740d53]
ceph-osd[1616]: 13: (RocksDBStore::RocksDBWholeSpaceIteratorImpl::next()+0x9a) [0xe7e51fee3a]
ceph-osd[1616]: 14: (BitmapFreelistManager::enumerate_next(unsigned long*, unsigned long*)+0x953) [0xe7e52645c3]
ceph-osd[1616]: 15: (BlueStore::_open_alloc()+0x213) [0xe7e514f4a3]
ceph-osd[1616]: 16: (BlueStore::_mount(bool)+0x3ce) [0xe7e51c230e]
ceph-osd[1616]: 17: (OSD::init()+0x3df) [0xe7e4d3da8f]
ceph-osd[1616]: 18: (main()+0x2eb8) [0xe7e4c50138]
ceph-osd[1616]: 19: (__libc_start_main()+0xf0) [0x7f21bdc4f830]
ceph-osd[1616]: 20: (_start()+0x29) [0xe7e4cdba69]
ceph-osd[1616]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

ceph-osd[1616]: 0> 2017-08-27 15:51:34.723871 7f21c0a18e40 -1 *** Caught signal (Segmentation fault) **
ceph-osd[1616]: in thread 7f21c0a18e40 thread_name:ceph-osd
ceph-osd[1616]: ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
ceph-osd[1616]: 1: (()+0xa542b4) [0xe7e52be2b4]
ceph-osd[1616]: 2: (()+0x11390) [0x7f21becc9390]
ceph-osd[1616]: 3: (()+0x1f8af) [0x7f21c05f68af]
ceph-osd[1616]: 4: (rocksdb::BlockBasedTable::PutDataBlockToCache(rocksdb::Slice const&, rocksdb::Slice const&, rocksdb::Cache*, rocksdb::Cache*, rocksdb::ReadOptions const&, rocksdb::ImmutableCFOptions const&, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, rocksdb::Block*, unsigned int, rocksdb::Slice const&, unsigned long, bool, rocksdb::Cache::Priority)+0x1d9) [0xe7e568a1e9]
ceph-osd[1616]: 5: (rocksdb::BlockBasedTable::MaybeLoadDataBlockToCache(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::Slice, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, bool)+0x3b7) [0xe7e568bbc7]
ceph-osd[1616]: 6: (rocksdb::BlockBasedTable::NewDataBlockIterator(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::BlockIter*, bool, rocksdb::Status)+0x2ac) [0xe7e568bf0c]
ceph-osd[1616]: 7: (rocksdb::BlockBasedTable::BlockEntryIteratorState::NewSecondaryIterator(rocksdb::Slice const&)+0x97) [0xe7e5694687]
ceph-osd[1616]: 8: (()+0xe5594e) [0xe7e56bf94e]
ceph-osd[1616]: 9: (()+0xe55a16) [0xe7e56bfa16]
ceph-osd[1616]: 10: (()+0xe55b91) [0xe7e56bfb91]
ceph-osd[1616]: 11: (rocksdb::MergingIterator::Next()+0x449) [0xe7e56a2ce9]
ceph-osd[1616]: 12: (rocksdb::DBIter::Next()+0xd3) [0xe7e5740d53]
ceph-osd[1616]: 13: (RocksDBStore::RocksDBWholeSpaceIteratorImpl::next()+0x9a) [0xe7e51fee3a]
ceph-osd[1616]: 14: (BitmapFreelistManager::enumerate_next(unsigned long*, unsigned long*)+0x953) [0xe7e52645c3]
ceph-osd[1616]: 15: (BlueStore::_open_alloc()+0x213) [0xe7e514f4a3]
ceph-osd[1616]: 16: (BlueStore::_mount(bool)+0x3ce) [0xe7e51c230e]
ceph-osd[1616]: 17: (OSD::init()+0x3df) [0xe7e4d3da8f]
ceph-osd[1616]: 18: (main()+0x2eb8) [0xe7e4c50138]
ceph-osd[1616]: 19: (__libc_start_main()+0xf0) [0x7f21bdc4f830]
ceph-osd[1616]: 20: (_start()+0x29) [0xe7e4cdba69]
ceph-osd[1616]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

systemd[1]: ceph-osd@14.service: Main process exited, code=dumped, status=11/SEGV
systemd[1]: ceph-osd@14.service: Unit entered failed state.
systemd[1]: ceph-osd@14.service: Failed with result 'core-dump'.
systemd[1]: ceph-osd@14.service: Service hold-off time over, scheduling restart.
systemd[1]: Stopped Ceph object storage daemon osd.14.
systemd[1]: Starting Ceph object storage daemon osd.14...
systemd[1]: Started Ceph object storage daemon osd.14.

ceph-osd[1980]: starting osd.14 at - osd_data /var/lib/ceph/osd/ceph-14 /var/lib/ceph/osd/ceph-14/journal
ceph-osd[1980]: 2017-08-27 15:52:09.315142 7f3c090d6e40 -1 osd.14 6038 log_to_monitors {default=true}

========================================================

ceph-osd[1980]: *** Caught signal (Segmentation fault) **
ceph-osd[1980]: in thread 7f3be63ea700 thread_name:tp_osd_tp
ceph-osd[1980]: ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
ceph-osd[1980]: 1: (()+0xa542b4) [0x1f00d662b4]
ceph-osd[1980]: 2: (()+0x11390) [0x7f3c07387390]
ceph-osd[1980]: 3: (()+0x1f8af) [0x7f3c08cb48af]
ceph-osd[1980]: 4: (rocksdb::Arena::AllocateNewBlock(unsigned long)+0x7c) [0x1f0116860c]
ceph-osd[1980]: 5: (rocksdb::Arena::AllocateFallback(unsigned long, bool)+0x45) [0x1f01168785]
ceph-osd[1980]: 6: (rocksdb::Arena::AllocateAligned(unsigned long, unsigned long, rocksdb::Logger*)+0x100) [0x1f01168910]
ceph-osd[1980]: 7: (rocksdb::NewTwoLevelIterator(rocksdb::TwoLevelIteratorState*, rocksdb::InternalIterator*, rocksdb::Arena*, bool)+0x35) [0x1f01167475]
ceph-osd[1980]: 8: (rocksdb::Version::AddIteratorsForLevel(rocksdb::ReadOptions const&, rocksdb::EnvOptions const&, rocksdb::MergeIteratorBuilder*, int, rocksdb::RangeDelAggregator*)+0x384) [0x1f010e95f4]
ceph-osd[1980]: 9: (rocksdb::Version::AddIterators(rocksdb::ReadOptions const&, rocksdb::EnvOptions const&, rocksdb::MergeIteratorBuilder*, rocksdb::RangeDelAggregator*)+0x53) [0x1f010e9703]
ceph-osd[1980]: 10: (rocksdb::DBImpl::NewInternalIterator(rocksdb::ReadOptions const&, rocksdb::ColumnFamilyData*, rocksdb::SuperVersion*, rocksdb::Arena*, rocksdb::RangeDelAggregator*)+0xf5) [0x1f011b1e95]
ceph-osd[1980]: 11: (rocksdb::DBImpl::NewIterator(rocksdb::ReadOptions const&, rocksdb::ColumnFamilyHandle*)+0x143) [0x1f011b22b3]
ceph-osd[1980]: 12: (RocksDBStore::_get_iterator()+0x66) [0x1f00ca7206]
ceph-osd[1980]: 13: (KeyValueDB::get_iterator(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x2e) [0x1f00be2b4e]
ceph-osd[1980]: 14: (BlueStore::get_omap_iterator(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, ghobject_t const&)+0x222) [0x1f00c33d52]
ceph-osd[1980]: 15: (BlueStore::get_omap_iterator(coll_t const&, ghobject_t const&)+0x5d) [0x1f00c230dd]
ceph-osd[1980]: 16: (ReplicatedBackend::be_deep_scrub(hobject_t const&, unsigned int, ScrubMap::object&, ThreadPool::TPHandle&)+0x428) [0x1f00aab0a8]
ceph-osd[1980]: 17: (PGBackend::be_scan_list(ScrubMap&, std::vector<hobject_t, std::allocator<hobject_t> > const&, bool, unsigned int, ThreadPool::TPHandle&)+0x3e7) [0x1f009cebb7]
ceph-osd[1980]: 18: (PG::build_scrub_map_chunk(ScrubMap&, hobject_t, hobject_t, bool, unsigned int, ThreadPool::TPHandle&)+0x237) [0x1f00870af7]
ceph-osd[1980]: 19: (PG::chunky_scrub(ThreadPool::TPHandle&)+0x3ea) [0x1f0089e4ba]
ceph-osd[1980]: 20: (PG::scrub(unsigned int, ThreadPool::TPHandle&)+0x45c) [0x1f0089ffcc]
ceph-osd[1980]: 21: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x12d0) [0x1f007df540]
ceph-osd[1980]: 22: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x884) [0x1f00dade34]
ceph-osd[1980]: 23: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x1f00db0e70]
ceph-osd[1980]: 24: (()+0x76ba) [0x7f3c0737d6ba]
ceph-osd[1980]: 25: (clone()+0x6d) [0x7f3c063f43dd]

ceph-osd[1980]: 2017-08-27 15:52:33.530923 7f3be63ea700 -1 *** Caught signal (Segmentation fault) **
ceph-osd[1980]: in thread 7f3be63ea700 thread_name:tp_osd_tp
ceph-osd[1980]: ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
ceph-osd[1980]: 1: (()+0xa542b4) [0x1f00d662b4]
ceph-osd[1980]: 2: (()+0x11390) [0x7f3c07387390]
ceph-osd[1980]: 3: (()+0x1f8af) [0x7f3c08cb48af]
ceph-osd[1980]: 4: (rocksdb::Arena::AllocateNewBlock(unsigned long)+0x7c) [0x1f0116860c]
ceph-osd[1980]: 5: (rocksdb::Arena::AllocateFallback(unsigned long, bool)+0x45) [0x1f01168785]
ceph-osd[1980]: 6: (rocksdb::Arena::AllocateAligned(unsigned long, unsigned long, rocksdb::Logger*)+0x100) [0x1f01168910]
ceph-osd[1980]: 7: (rocksdb::NewTwoLevelIterator(rocksdb::TwoLevelIteratorState*, rocksdb::InternalIterator*, rocksdb::Arena*, bool)+0x35) [0x1f01167475]
ceph-osd[1980]: 8: (rocksdb::Version::AddIteratorsForLevel(rocksdb::ReadOptions const&, rocksdb::EnvOptions const&, rocksdb::MergeIteratorBuilder*, int, rocksdb::RangeDelAggregator*)+0x384) [0x1f010e95f4]
ceph-osd[1980]: 9: (rocksdb::Version::AddIterators(rocksdb::ReadOptions const&, rocksdb::EnvOptions const&, rocksdb::MergeIteratorBuilder*, rocksdb::RangeDelAggregator*)+0x53) [0x1f010e9703]
ceph-osd[1980]: 10: (rocksdb::DBImpl::NewInternalIterator(rocksdb::ReadOptions const&, rocksdb::ColumnFamilyData*, rocksdb::SuperVersion*, rocksdb::Arena*, rocksdb::RangeDelAggregator*)+0xf5) [0x1f011b1e95]
ceph-osd[1980]: 11: (rocksdb::DBImpl::NewIterator(rocksdb::ReadOptions const&, rocksdb::ColumnFamilyHandle*)+0x143) [0x1f011b22b3]
ceph-osd[1980]: 12: (RocksDBStore::_get_iterator()+0x66) [0x1f00ca7206]
ceph-osd[1980]: 13: (KeyValueDB::get_iterator(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x2e) [0x1f00be2b4e]
ceph-osd[1980]: 14: (BlueStore::get_omap_iterator(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, ghobject_t const&)+0x222) [0x1f00c33d52]
ceph-osd[1980]: 15: (BlueStore::get_omap_iterator(coll_t const&, ghobject_t const&)+0x5d) [0x1f00c230dd]
ceph-osd[1980]: 16: (ReplicatedBackend::be_deep_scrub(hobject_t const&, unsigned int, ScrubMap::object&, ThreadPool::TPHandle&)+0x428) [0x1f00aab0a8]
ceph-osd[1980]: 17: (PGBackend::be_scan_list(ScrubMap&, std::vector<hobject_t, std::allocator<hobject_t> > const&, bool, unsigned int, ThreadPool::TPHandle&)+0x3e7) [0x1f009cebb7]
ceph-osd[1980]: 18: (PG::build_scrub_map_chunk(ScrubMap&, hobject_t, hobject_t, bool, unsigned int, ThreadPool::TPHandle&)+0x237) [0x1f00870af7]
ceph-osd[1980]: 19: (PG::chunky_scrub(ThreadPool::TPHandle&)+0x3ea) [0x1f0089e4ba]
ceph-osd[1980]: 20: (PG::scrub(unsigned int, ThreadPool::TPHandle&)+0x45c) [0x1f0089ffcc]
ceph-osd[1980]: 21: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x12d0) [0x1f007df540]
ceph-osd[1980]: 22: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x884) [0x1f00dade34]
ceph-osd[1980]: 23: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x1f00db0e70]
ceph-osd[1980]: 24: (()+0x76ba) [0x7f3c0737d6ba]
ceph-osd[1980]: 25: (clone()+0x6d) [0x7f3c063f43dd]
ceph-osd[1980]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

ceph-osd[1980]: -5700> 2017-08-27 15:52:09.315142 7f3c090d6e40 -1 osd.14 6038 log_to_monitors {default=true}

ceph-osd[1980]: 0> 2017-08-27 15:52:33.530923 7f3be63ea700 -1 *** Caught signal (Segmentation fault) **
ceph-osd[1980]: in thread 7f3be63ea700 thread_name:tp_osd_tp
ceph-osd[1980]: ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
ceph-osd[1980]: 1: (()+0xa542b4) [0x1f00d662b4]
ceph-osd[1980]: 2: (()+0x11390) [0x7f3c07387390]
ceph-osd[1980]: 3: (()+0x1f8af) [0x7f3c08cb48af]
ceph-osd[1980]: 4: (rocksdb::Arena::AllocateNewBlock(unsigned long)+0x7c) [0x1f0116860c]
ceph-osd[1980]: 5: (rocksdb::Arena::AllocateFallback(unsigned long, bool)+0x45) [0x1f01168785]
ceph-osd[1980]: 6: (rocksdb::Arena::AllocateAligned(unsigned long, unsigned long, rocksdb::Logger*)+0x100) [0x1f01168910]
ceph-osd[1980]: 7: (rocksdb::NewTwoLevelIterator(rocksdb::TwoLevelIteratorState*, rocksdb::InternalIterator*, rocksdb::Arena*, bool)+0x35) [0x1f01167475]
ceph-osd[1980]: 8: (rocksdb::Version::AddIteratorsForLevel(rocksdb::ReadOptions const&, rocksdb::EnvOptions const&, rocksdb::MergeIteratorBuilder*, int, rocksdb::RangeDelAggregator*)+0x384) [0x1f010e95f4]
ceph-osd[1980]: 9: (rocksdb::Version::AddIterators(rocksdb::ReadOptions const&, rocksdb::EnvOptions const&, rocksdb::MergeIteratorBuilder*, rocksdb::RangeDelAggregator*)+0x53) [0x1f010e9703]
ceph-osd[1980]: 10: (rocksdb::DBImpl::NewInternalIterator(rocksdb::ReadOptions const&, rocksdb::ColumnFamilyData*, rocksdb::SuperVersion*, rocksdb::Arena*, rocksdb::RangeDelAggregator*)+0xf5) [0x1f011b1e95]
ceph-osd[1980]: 11: (rocksdb::DBImpl::NewIterator(rocksdb::ReadOptions const&, rocksdb::ColumnFamilyHandle*)+0x143) [0x1f011b22b3]
ceph-osd[1980]: 12: (RocksDBStore::_get_iterator()+0x66) [0x1f00ca7206]
ceph-osd[1980]: 13: (KeyValueDB::get_iterator(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x2e) [0x1f00be2b4e]
ceph-osd[1980]: 14: (BlueStore::get_omap_iterator(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, ghobject_t const&)+0x222) [0x1f00c33d52]
ceph-osd[1980]: 15: (BlueStore::get_omap_iterator(coll_t const&, ghobject_t const&)+0x5d) [0x1f00c230dd]
ceph-osd[1980]: 16: (ReplicatedBackend::be_deep_scrub(hobject_t const&, unsigned int, ScrubMap::object&, ThreadPool::TPHandle&)+0x428) [0x1f00aab0a8]
ceph-osd[1980]: 17: (PGBackend::be_scan_list(ScrubMap&, std::vector<hobject_t, std::allocator<hobject_t> > const&, bool, unsigned int, ThreadPool::TPHandle&)+0x3e7) [0x1f009cebb7]
ceph-osd[1980]: 18: (PG::build_scrub_map_chunk(ScrubMap&, hobject_t, hobject_t, bool, unsigned int, ThreadPool::TPHandle&)+0x237) [0x1f00870af7]
ceph-osd[1980]: 19: (PG::chunky_scrub(ThreadPool::TPHandle&)+0x3ea) [0x1f0089e4ba]
ceph-osd[1980]: 20: (PG::scrub(unsigned int, ThreadPool::TPHandle&)+0x45c) [0x1f0089ffcc]
ceph-osd[1980]: 21: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x12d0) [0x1f007df540]
ceph-osd[1980]: 22: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x884) [0x1f00dade34]
ceph-osd[1980]: 23: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x1f00db0e70]
ceph-osd[1980]: 24: (()+0x76ba) [0x7f3c0737d6ba]
ceph-osd[1980]: 25: (clone()+0x6d) [0x7f3c063f43dd]

ceph-osd[1980]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
systemd[1]: ceph-osd@14.service: Main process exited, code=killed, status=11/SEGV
systemd[1]: ceph-osd@14.service: Unit entered failed state.
systemd[1]: ceph-osd@14.service: Failed with result 'signal'.
systemd[1]: ceph-osd@14.service: Service hold-off time over, scheduling restart.
systemd[1]: Stopped Ceph object storage daemon osd.14.
systemd[1]: Starting Ceph object storage daemon osd.14...
systemd[1]: Started Ceph object storage daemon osd.14.
ceph-osd[2853]: starting osd.14 at - osd_data /var/lib/ceph/osd/ceph-14 /var/lib/ceph/osd/ceph-14/journal

ceph-osd[2853]: *** Caught signal (Segmentation fault) **
ceph-osd[2853]: in thread 7f8d1edd0e40 thread_name:ceph-osd
ceph-osd[2853]: ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
ceph-osd[2853]: 1: (()+0xa542b4) [0x69ca1ec2b4]
ceph-osd[2853]: 2: (()+0x11390) [0x7f8d1d081390]
ceph-osd[2853]: 3: (()+0x1f8af) [0x7f8d1e9ae8af]
ceph-osd[2853]: 4: (rocksdb::BlockBasedTable::PutDataBlockToCache(rocksdb::Slice const&, rocksdb::Slice const&, rocksdb::Cache*, rocksdb::Cache*, rocksdb::ReadOptions const&, rocksdb::ImmutableCFOptions const&, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, rocksdb::Block*, unsigned int, rocksdb::Slice const&, unsigned long, bool, rocksdb::Cache::Priority)+0x1d9) [0x69ca5b81e9]
ceph-osd[2853]: 5: (rocksdb::BlockBasedTable::MaybeLoadDataBlockToCache(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::Slice, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, bool)+0x3b7) [0x69ca5b9bc7]
ceph-osd[2853]: 6: (rocksdb::BlockBasedTable::NewDataBlockIterator(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::BlockIter*, bool, rocksdb::Status)+0x2ac) [0x69ca5b9f0c]
ceph-osd[2853]: 7: (rocksdb::BlockBasedTable::BlockEntryIteratorState::NewSecondaryIterator(rocksdb::Slice const&)+0x97) [0x69ca5c2687]
ceph-osd[2853]: 8: (()+0xe5594e) [0x69ca5ed94e]
ceph-osd[2853]: 9: (()+0xe55a16) [0x69ca5eda16]
ceph-osd[2853]: 10: (()+0xe55b91) [0x69ca5edb91]
ceph-osd[2853]: 11: (rocksdb::MergingIterator::Next()+0x449) [0x69ca5d0ce9]
ceph-osd[2853]: 12: (rocksdb::DBIter::Next()+0xd3) [0x69ca66ed53]
ceph-osd[2853]: 13: (RocksDBStore::RocksDBWholeSpaceIteratorImpl::next()+0x9a) [0x69ca12ce3a]
ceph-osd[2853]: 14: (BlueStore::_collection_list(BlueStore::Collection*, ghobject_t const&, ghobject_t const&, int, std::vector<ghobject_t, std::allocator<ghobject_t> >*, ghobject_t*)+0x1170) [0x69ca08c770]
ceph-osd[2853]: 15: (BlueStore::collection_list(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, ghobject_t const&, ghobject_t const&, int, std::vector<ghobject_t, std::allocator<ghobject_t> >*, ghobject_t*)+0x25a) [0x69ca08dc0a]
ceph-osd[2853]: 16: (BlueStore::collection_list(coll_t const&, ghobject_t const&, ghobject_t const&, int, std::vector<ghobject_t, std::allocator<ghobject_t> >*, ghobject_t*)+0x73) [0x69ca0a8c73]
ceph-osd[2853]: 17: (OSD::clear_temp_objects()+0x740) [0x69c9c28060]
ceph-osd[2853]: 18: (OSD::init()+0x21cf) [0x69c9c6d87f]
ceph-osd[2853]: 19: (main()+0x2eb8) [0x69c9b7e138]
ceph-osd[2853]: 20: (__libc_start_main()+0xf0) [0x7f8d1c007830]
ceph-osd[2853]: 21: (_start()+0x29) [0x69c9c09a69]

ceph-osd[2853]: 2017-08-27 15:52:56.239222 7f8d1edd0e40 -1 *** Caught signal (Segmentation fault) **
ceph-osd[2853]: in thread 7f8d1edd0e40 thread_name:ceph-osd
ceph-osd[2853]: ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
ceph-osd[2853]: 1: (()+0xa542b4) [0x69ca1ec2b4]
ceph-osd[2853]: 2: (()+0x11390) [0x7f8d1d081390]
ceph-osd[2853]: 3: (()+0x1f8af) [0x7f8d1e9ae8af]
ceph-osd[2853]: 4: (rocksdb::BlockBasedTable::PutDataBlockToCache(rocksdb::Slice const&, rocksdb::Slice const&, rocksdb::Cache*, rocksdb::Cache*, rocksdb::ReadOptions const&, rocksdb::ImmutableCFOptions const&, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, rocksdb::Block*, unsigned int, rocksdb::Slice const&, unsigned long, bool, rocksdb::Cache::Priority)+0x1d9) [0x69ca5b81e9]
ceph-osd[2853]: 5: (rocksdb::BlockBasedTable::MaybeLoadDataBlockToCache(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::Slice, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, bool)+0x3b7) [0x69ca5b9bc7]
ceph-osd[2853]: 6: (rocksdb::BlockBasedTable::NewDataBlockIterator(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::BlockIter*, bool, rocksdb::Status)+0x2ac) [0x69ca5b9f0c]
ceph-osd[2853]: 7: (rocksdb::BlockBasedTable::BlockEntryIteratorState::NewSecondaryIterator(rocksdb::Slice const&)+0x97) [0x69ca5c2687]
ceph-osd[2853]: 8: (()+0xe5594e) [0x69ca5ed94e]
ceph-osd[2853]: 9: (()+0xe55a16) [0x69ca5eda16]
ceph-osd[2853]: 10: (()+0xe55b91) [0x69ca5edb91]
ceph-osd[2853]: 11: (rocksdb::MergingIterator::Next()+0x449) [0x69ca5d0ce9]
ceph-osd[2853]: 12: (rocksdb::DBIter::Next()+0xd3) [0x69ca66ed53]
ceph-osd[2853]: 13: (RocksDBStore::RocksDBWholeSpaceIteratorImpl::next()+0x9a) [0x69ca12ce3a]
ceph-osd[2853]: 14: (BlueStore::_collection_list(BlueStore::Collection*, ghobject_t const&, ghobject_t const&, int, std::vector<ghobject_t, std::allocator<ghobject_t> >*, ghobject_t*)+0x1170) [0x69ca08c770]
ceph-osd[2853]: 15: (BlueStore::collection_list(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, ghobject_t const&, ghobject_t const&, int, std::vector<ghobject_t, std::allocator<ghobject_t> >*, ghobject_t*)+0x25a) [0x69ca08dc0a]
ceph-osd[2853]: 16: (BlueStore::collection_list(coll_t const&, ghobject_t const&, ghobject_t const&, int, std::vector<ghobject_t, std::allocator<ghobject_t> >*, ghobject_t*)+0x73) [0x69ca0a8c73]
ceph-osd[2853]: 17: (OSD::clear_temp_objects()+0x740) [0x69c9c28060]
ceph-osd[2853]: 18: (OSD::init()+0x21cf) [0x69c9c6d87f]
ceph-osd[2853]: 19: (main()+0x2eb8) [0x69c9b7e138]
ceph-osd[2853]: 20: (__libc_start_main()+0xf0) [0x7f8d1c007830]
ceph-osd[2853]: 21: (_start()+0x29) [0x69c9c09a69]
ceph-osd[2853]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

ceph-osd[2853]: 0> 2017-08-27 15:52:56.239222 7f8d1edd0e40 -1 *** Caught signal (Segmentation fault) **
ceph-osd[2853]: in thread 7f8d1edd0e40 thread_name:ceph-osd
ceph-osd[2853]: ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
ceph-osd[2853]: 1: (()+0xa542b4) [0x69ca1ec2b4]
ceph-osd[2853]: 2: (()+0x11390) [0x7f8d1d081390]
ceph-osd[2853]: 3: (()+0x1f8af) [0x7f8d1e9ae8af]
ceph-osd[2853]: 4: (rocksdb::BlockBasedTable::PutDataBlockToCache(rocksdb::Slice const&, rocksdb::Slice const&, rocksdb::Cache*, rocksdb::Cache*, rocksdb::ReadOptions const&, rocksdb::ImmutableCFOptions const&, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, rocksdb::Block*, unsigned int, rocksdb::Slice const&, unsigned long, bool, rocksdb::Cache::Priority)+0x1d9) [0x69ca5b81e9]
ceph-osd[2853]: 5: (rocksdb::BlockBasedTable::MaybeLoadDataBlockToCache(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::Slice, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, bool)+0x3b7) [0x69ca5b9bc7]
ceph-osd[2853]: 6: (rocksdb::BlockBasedTable::NewDataBlockIterator(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::BlockIter*, bool, rocksdb::Status)+0x2ac) [0x69ca5b9f0c]
ceph-osd[2853]: 7: (rocksdb::BlockBasedTable::BlockEntryIteratorState::NewSecondaryIterator(rocksdb::Slice const&)+0x97) [0x69ca5c2687]
ceph-osd[2853]: 8: (()+0xe5594e) [0x69ca5ed94e]
ceph-osd[2853]: 9: (()+0xe55a16) [0x69ca5eda16]
ceph-osd[2853]: 10: (()+0xe55b91) [0x69ca5edb91]
ceph-osd[2853]: 11: (rocksdb::MergingIterator::Next()+0x449) [0x69ca5d0ce9]
ceph-osd[2853]: 12: (rocksdb::DBIter::Next()+0xd3) [0x69ca66ed53]
ceph-osd[2853]: 13: (RocksDBStore::RocksDBWholeSpaceIteratorImpl::next()+0x9a) [0x69ca12ce3a]
ceph-osd[2853]: 14: (BlueStore::_collection_list(BlueStore::Collection*, ghobject_t const&, ghobject_t const&, int, std::vector<ghobject_t, std::allocator<ghobject_t> >*, ghobject_t*)+0x1170) [0x69ca08c770]
ceph-osd[2853]: 15: (BlueStore::collection_list(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, ghobject_t const&, ghobject_t const&, int, std::vector<ghobject_t, std::allocator<ghobject_t> >*, ghobject_t*)+0x25a) [0x69ca08dc0a]
ceph-osd[2853]: 16: (BlueStore::collection_list(coll_t const&, ghobject_t const&, ghobject_t const&, int, std::vector<ghobject_t, std::allocator<ghobject_t> >*, ghobject_t*)+0x73) [0x69ca0a8c73]
ceph-osd[2853]: 17: (OSD::clear_temp_objects()+0x740) [0x69c9c28060]
ceph-osd[2853]: 18: (OSD::init()+0x21cf) [0x69c9c6d87f]
ceph-osd[2853]: 19: (main()+0x2eb8) [0x69c9b7e138]
ceph-osd[2853]: 20: (__libc_start_main()+0xf0) [0x7f8d1c007830]
ceph-osd[2853]: 21: (_start()+0x29) [0x69c9c09a69]
ceph-osd[2853]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

systemd[1]: ceph-osd@14.service: Main process exited, code=dumped, status=11/SEGV
systemd[1]: ceph-osd@14.service: Unit entered failed state.
systemd[1]: ceph-osd@14.service: Failed with result 'core-dump'.
Aug 27 15:53:16 vmhost2 systemd[1]: ceph-osd@14.service: Service hold-off time over, scheduling restart.
Aug 27 15:53:16 vmhost2 systemd[1]: Stopped Ceph object storage daemon osd.14.
Aug 27 15:53:16 vmhost2 systemd[1]: Starting Ceph object storage daemon osd.14...
Aug 27 15:53:16 vmhost2 systemd[1]: Started Ceph object storage daemon osd.14.
Aug 27 15:53:17 vmhost2 ceph-osd[3201]: starting osd.14 at - osd_data /var/lib/ceph/osd/ceph-14 /var/lib/ceph/osd/ceph-14/journal
Aug 27 15:53:30 vmhost2 ceph-osd[3201]: 2017-08-27 15:53:30.418168 7fdbd14d3e40 -1 osd.14 6042 log_to_monitors {default=true}