Bug #21314

Ceph OSDs crashing in BlueStore::queue_transactions() using EC

Added by Bob Bobington over 6 years ago. Updated over 6 years ago.

Status:
Duplicate
Priority:
Normal
Assignee:
-
Category:
Correctness/Safety
Target version:
% Done:
0%

Source:
Community (user)
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
BlueStore
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Log is attached. 3 of my 4 OSDs have crashed in a similar manner at different times. I'm running Ceph on a single node for testing: 4 × 8 TB OSDs running BlueStore with dm-crypt, plus 1 mgr, 1 mds, and 1 mon. The machine runs Arch Linux and Ceph was built from the source release. I have a 2/1 (k=2, m=1) EC pool with 256 PGs for data and a size-2 replicated pool with 256 PGs for metadata. The only client was a single instance of rsync.
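
A setup like this can be created roughly as follows (a sketch for reference, not my exact commands; the profile and pool names are illustrative):

    # k=2, m=1 erasure-coded data pool plus a size-2 replicated metadata pool.
    ceph osd erasure-code-profile set ec21 k=2 m=1
    ceph osd pool create ec_data 256 256 erasure ec21
    # Needed on BlueStore if the EC pool is used directly as a CephFS data pool.
    ceph osd pool set ec_data allow_ec_overwrites true
    ceph osd pool create fs_metadata 256 256 replicated
    ceph osd pool set fs_metadata size 2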


Files

cephlog.txt (13.5 KB) - Bob Bobington, 09/08/2017 03:39 PM

Related issues 1 (0 open, 1 closed)

Related to RADOS - Bug #21171: bluestore: aio submission deadlock (Resolved, Sage Weil, 08/29/2017)

Actions #1

Updated by Greg Farnum over 6 years ago

  • Project changed from Ceph to RADOS
  • Subject changed from Ceph OSDs crashing using EC, Luminous to Ceph OSDs crashing in BlueStore::queue_transactions() using EC
  • Category changed from OSD to Correctness/Safety
  • Component(RADOS) BlueStore added
Sep 08 06:17:07 host ceph-osd[10263]: *** Caught signal (Aborted) **
Sep 08 06:17:07 host ceph-osd[10263]:  in thread 7fa865782700 thread_name:tp_osd_tp
Sep 08 06:17:07 host ceph-osd[10263]:  ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
Sep 08 06:17:07 host ceph-osd[10263]:  1: (()+0x9a6968) [0x55c1a0eba968]
Sep 08 06:17:07 host ceph-osd[10263]:  2: (()+0x117e0) [0x7fa880e207e0]
Sep 08 06:17:07 host ceph-osd[10263]:  3: (pthread_cond_wait()+0x1fd) [0x7fa880e1c1ad]
Sep 08 06:17:07 host ceph-osd[10263]:  4: (Throttle::_wait(long)+0x33c) [0x55c1a0ef5c8c]
Sep 08 06:17:07 host ceph-osd[10263]:  5: (Throttle::get(long, long)+0x2a2) [0x55c1a0ef6a52]
Sep 08 06:17:07 host ceph-osd[10263]:  6: (BlueStore::queue_transactions(ObjectStore::Sequencer*, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0xdc7) [0x55c1a0dbe137]
Sep 08 06:17:07 host ceph-osd[10263]:  7: (non-virtual thunk to PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x68) [0x55c1a0af9d08]
Sep 08 06:17:07 host ceph-osd[10263]:  8: (ECBackend::handle_sub_write(pg_shard_t, boost::intrusive_ptr<OpRequest>, ECSubWrite&, ZTracer::Trace const&, Context*)+0x963) [0x55c1a0c383b3]
Sep 08 06:17:07 host ceph-osd[10263]:  9: (ECBackend::try_reads_to_commit()+0xa7b) [0x55c1a0c45c5b]
Sep 08 06:17:07 host ceph-osd[10263]:  10: (ECBackend::check_ops()+0x1c) [0x55c1a0c4852c]
Sep 08 06:17:07 host ceph-osd[10263]:  11: (ECBackend::start_rmw(ECBackend::Op*, std::unique_ptr<PGTransaction, std::default_delete<PGTransaction> >&&)+0x1e0a) [0x55c1a0c5285a]
Sep 08 06:17:07 host ceph-osd[10263]:  12: (ECBackend::submit_transaction(hobject_t const&, object_stat_sum_t const&, eversion_t const&, std::unique_ptr<PGTransaction, std::default_delete<PGTransaction> >&&, eversion_t const&, eversion_t const&, std::vector<pg_log_entry_t, std::allocator<pg_log_entry_t> > const&, boost::optional<pg_hit_set_history_t>&, Context*, Context*, Context*, unsigned long, osd_reqid_t, boost::intrusive_ptr<OpRequest>)+0x4a1) [0x55c1a0c53751]
Sep 08 06:17:07 host ceph-osd[10263]:  13: (PrimaryLogPG::issue_repop(PrimaryLogPG::RepGather*, PrimaryLogPG::OpContext*)+0x810) [0x55c1a0a9b800]
Sep 08 06:17:07 host ceph-osd[10263]:  14: (PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)+0x1238) [0x55c1a0ae0538]
Sep 08 06:17:07 host ceph-osd[10263]:  15: (PrimaryLogPG::do_op(boost::intrusive_ptr<OpRequest>&)+0x389e) [0x55c1a0ae464e]
Sep 08 06:17:07 host ceph-osd[10263]:  16: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0xe17) [0x55c1a0aa2e27]
Sep 08 06:17:07 host ceph-osd[10263]:  17: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x387) [0x55c1a0931d07]
Sep 08 06:17:07 host ceph-osd[10263]:  18: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest> const&)+0x5a) [0x55c1a0bb4ffa]
Sep 08 06:17:07 host ceph-osd[10263]:  19: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x1ae7) [0x55c1a0955b97]
Sep 08 06:17:07 host ceph-osd[10263]:  20: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5b2) [0x55c1a0f05032]
Sep 08 06:17:07 host ceph-osd[10263]:  21: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55c1a0f085a0]
Sep 08 06:17:07 host ceph-osd[10263]:  22: (()+0x7049) [0x7fa880e16049]
Sep 08 06:17:07 host ceph-osd[10263]:  23: (clone()+0x3f) [0x7fa8802a5f0f]
Actions #2

Updated by Sage Weil over 6 years ago

  • Status changed from New to Duplicate

It is hard to tell because the lines preceding the snippet are missing, but I'm pretty sure this is a dup of #21171, in which threads would get stuck waiting for that throttle and eventually be killed by the heartbeat checks. The preceding log lines should confirm that.

The fix was just merged into master and is about to go into the luminous branch for 12.2.1.

If your log doesn't include something about a heartbeat check failing on the blocked thread, please let me know!
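
Something like this should surface those lines (a sketch assuming the default log path; adjust the OSD id):

    # Look for op-thread heartbeat timeouts leading up to the crash; the
    # "suicide timed out" line is the heartbeat check killing the stuck thread.
    grep 'heartbeat_map is_healthy' /var/log/ceph/ceph-osd.0.log | tail -n 20
    grep -B 2 'had suicide timed out' /var/log/ceph/ceph-osd.0.log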

Actions #3

Updated by Sage Weil over 6 years ago

  • Related to Bug #21171: bluestore: aio submission deadlock added
Actions #4

Updated by Bob Bobington over 6 years ago

There are no log entries regarding failed heartbeat checks on the failing OSDs, only on the other OSDs witnessing the failure. None were logged before the first failure.

Actions #5

Updated by Bob Bobington over 6 years ago

Hmm, I found another log file and came across this:

    -2> 2017-09-08 14:41:53.132565 7fa5f5086700  1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fa5e0810700' had timed out after 15
    -1> 2017-09-08 14:41:53.132591 7fa5f5086700  1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fa5e0810700' had suicide timed out after 150
     0> 2017-09-08 14:41:53.152151 7fa5e0810700 -1 *** Caught signal (Aborted) **
 in thread 7fa5e0810700 thread_name:tp_osd_tp

 ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
 1: (()+0x9a6968) [0x557fca01a968]
 2: (()+0x117e0) [0x7fa5fbeae7e0]
 3: (pthread_cond_wait()+0x1fd) [0x7fa5fbeaa1ad]
 4: (Throttle::_wait(long)+0x33c) [0x557fca055c8c]
 5: (Throttle::get(long, long)+0x2a2) [0x557fca056a52]
 6: (BlueStore::queue_transactions(ObjectStore::Sequencer*, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x105b) [0x557fc9f1e3cb]
 7: (non-virtual thunk to PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x68) [0x557fc9c59d08]
 8: (ECBackend::handle_sub_write(pg_shard_t, boost::intrusive_ptr<OpRequest>, ECSubWrite&, ZTracer::Trace const&, Context*)+0x963) [0x557fc9d983b3]
 9: (ECBackend::try_reads_to_commit()+0xa7b) [0x557fc9da5c5b]
 10: (ECBackend::check_ops()+0x1c) [0x557fc9da852c]
 11: (ECBackend::start_rmw(ECBackend::Op*, std::unique_ptr<PGTransaction, std::default_delete<PGTransaction> >&&)+0x1e0a) [0x557fc9db285a]
 12: (ECBackend::submit_transaction(hobject_t const&, object_stat_sum_t const&, eversion_t const&, std::unique_ptr<PGTransaction, std::default_delete<PGTransaction> >&&, eversion_t const&, eversion_t const&, std::vector<pg_log_entry_t, std::allocator<pg_log_entry_t> > const&, boost::optional<pg_hit_set_history_t>&, Context*, Context*, Context*, unsigned long, osd_reqid_t, boost::intrusive_ptr<OpRequest>)+0x4a1) [0x557fc9db3751]
 13: (PrimaryLogPG::issue_repop(PrimaryLogPG::RepGather*, PrimaryLogPG::OpContext*)+0x810) [0x557fc9bfb800]
 14: (PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)+0x1238) [0x557fc9c40538]
 15: (PrimaryLogPG::do_op(boost::intrusive_ptr<OpRequest>&)+0x389e) [0x557fc9c4464e]
 16: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0xe17) [0x557fc9c02e27]
 17: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x387) [0x557fc9a91d07]
 18: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest> const&)+0x5a) [0x557fc9d14ffa]
 19: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x1ae7) [0x557fc9ab5b97]
 20: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5b2) [0x557fca065032]
 21: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x557fca0685a0]
 22: (()+0x7049) [0x7fa5fbea4049]
 23: (clone()+0x3f) [0x7fa5fb333f0f]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

This must be what you meant. I was looking at the wrong log (systemd journal rather than /var/log/ceph/ceph-osd.1.log).
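
For anyone else comparing the two sources, a sketch of where each piece lands (the systemd unit name and OSD id are examples):

    # The abort and backtrace go to stderr and end up in the journal...
    journalctl -u ceph-osd@1 | grep -A 3 'Caught signal'
    # ...while the heartbeat_map warnings and the recent-events dump are in the file log.
    grep 'heartbeat_map' /var/log/ceph/ceph-osd.1.log | tail -n 20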

Actions #6

Updated by Bob Bobington over 6 years ago

I've applied the changes from the Git pull request referenced in that issue, but the problem still persists:

2017-09-09 02:47:11.592201 7f6cb9ff4700  1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f6ca6780700' had timed out after 60
2017-09-09 02:47:11.592214 7f6cb9ff4700  1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f6ca577e700' had timed out after 60
2017-09-09 02:47:11.592216 7f6cb9ff4700  1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f6ca7782700' had timed out after 15
2017-09-09 02:47:11.592219 7f6cb9ff4700  1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f6ca7782700' had suicide timed out after 150
2017-09-09 02:47:11.606442 7f6ca7782700 -1 *** Caught signal (Aborted) **
 in thread 7f6ca7782700 thread_name:tp_osd_tp

 ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
 1: (()+0x9a6f48) [0x55f07f2fef48]
 2: (()+0x117e0) [0x7f6cc0e1c7e0]
 3: (pthread_cond_wait()+0x1fd) [0x7f6cc0e181ad]
 4: (Throttle::_wait(long)+0x33c) [0x55f07f33a26c]
 5: (Throttle::get(long, long)+0x2a2) [0x55f07f33b032]
 6: (BlueStore::queue_transactions(ObjectStore::Sequencer*, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x105b) [0x55f07f20293b]
 7: (non-virtual thunk to PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x68) [0x55f07ef3dd08]
 8: (ECBackend::handle_sub_write(pg_shard_t, boost::intrusive_ptr<OpRequest>, ECSubWrite&, ZTracer::Trace const&, Context*)+0x963) [0x55f07f07c3b3]
 9: (ECBackend::try_reads_to_commit()+0xa7b) [0x55f07f089c5b]
 10: (ECBackend::check_ops()+0x1c) [0x55f07f08c52c]
 11: (ECBackend::start_rmw(ECBackend::Op*, std::unique_ptr<PGTransaction, std::default_delete<PGTransaction> >&&)+0x1e0a) [0x55f07f09685a]
 12: (ECBackend::submit_transaction(hobject_t const&, object_stat_sum_t const&, eversion_t const&, std::unique_ptr<PGTransaction, std::default_delete<PGTransaction> >&&, eversion_t const&, eversion_t const&, std::vector<pg_log_entry_t, std::allocator<pg_log_entry_t> > const&, boost::optional<pg_hit_set_history_t>&, Context*, Context*, Context*, unsigned long, osd_reqid_t, boost::intrusive_ptr<OpRequest>)+0x4a1) [0x55f07f097751]
 13: (PrimaryLogPG::issue_repop(PrimaryLogPG::RepGather*, PrimaryLogPG::OpContext*)+0x810) [0x55f07eedf800]
 14: (PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)+0x1238) [0x55f07ef24538]
 15: (PrimaryLogPG::do_op(boost::intrusive_ptr<OpRequest>&)+0x389e) [0x55f07ef2864e]
 16: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0xe17) [0x55f07eee6e27]
 17: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x387) [0x55f07ed75d07]
 18: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest> const&)+0x5a) [0x55f07eff8ffa]
 19: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x1ae7) [0x55f07ed99b97]
 20: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5b2) [0x55f07f349612]
 21: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55f07f34cb80]
 22: (()+0x7049) [0x7f6cc0e12049]
 23: (clone()+0x3f) [0x7f6cc02a1f0f]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
Actions #7

Updated by Bob Bobington over 6 years ago

After a few crashes, the OSDs become permanently lost, consistently displaying errors like this on startup:

Sep 09 12:15:34 myhost ceph-osd[23535]: 2017-09-09 12:15:34.226853 7fd5c29dee00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,rocksdb
Sep 09 12:15:34 myhost ceph-osd[23535]: 2017-09-09 12:15:34.227126 7fd5c29dee00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,rocksdb
Sep 09 12:15:34 myhost ceph-osd[23535]: starting osd.3 at - osd_data /var/lib/ceph/osd/ceph-3 /var/lib/ceph/osd/ceph-3/journal
Sep 09 12:15:34 myhost ceph-osd[23535]: 2017-09-09 12:15:34.234439 7fd5c29dee00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,rocksdb
Sep 09 12:16:16 myhost ceph-osd[23535]: 2017-09-09 12:16:16.156248 7fd5c29dee00 -1 osd.3 575 log_to_monitors {default=true}
Sep 09 12:16:16 myhost ceph-osd[23535]: 2017-09-09 12:16:16.163130 7fd5a8d0c700 -1 distro_detect - can't detect distro_version
Sep 09 12:16:52 myhost ceph-osd[23535]: 2017-09-09 12:16:52.864425 7fd5a6d08700 -1 bluestore(/var/lib/ceph/osd/ceph-3) _txc_add_transaction error (2) No such file or directory not handled on operation 15 (op 0, counting from 0)
Sep 09 12:16:52 myhost ceph-osd[23535]: /nvmedisk/ceph-git/src/ceph/src/os/bluestore/BlueStore.cc: In function 'void BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)' thread 7fd5a6d08700 time 2017-09-09 12:16:52.864489
Sep 09 12:16:52 myhost ceph-osd[23535]: /nvmedisk/ceph-git/src/ceph/src/os/bluestore/BlueStore.cc: 9290: FAILED assert(0 == "unexpected error")
Sep 09 12:16:52 myhost ceph-osd[23535]: 2017-09-09 12:16:52.864445 7fd5a6d08700 -1 bluestore(/var/lib/ceph/osd/ceph-3) unexpected error code
Sep 09 12:16:52 myhost ceph-osd[23535]:  ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
Sep 09 12:16:52 myhost ceph-osd[23535]:  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0xf5) [0x561ee1075ba5]
Sep 09 12:16:52 myhost ceph-osd[23535]:  2: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)+0xf1c) [0x561ee0f30aec]
Sep 09 12:16:52 myhost ceph-osd[23535]:  3: (BlueStore::queue_transactions(ObjectStore::Sequencer*, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x4f8) [0x561ee0f32dd8]
Sep 09 12:16:52 myhost ceph-osd[23535]:  4: (non-virtual thunk to PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x68) [0x561ee0c6ed08]
Sep 09 12:16:52 myhost ceph-osd[23535]:  5: (ECBackend::handle_sub_write(pg_shard_t, boost::intrusive_ptr<OpRequest>, ECSubWrite&, ZTracer::Trace const&, Context*)+0x963) [0x561ee0dad3b3]
Sep 09 12:16:52 myhost ceph-osd[23535]:  6: (ECBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x321) [0x561ee0dc4a51]
Sep 09 12:16:52 myhost ceph-osd[23535]:  7: (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x97) [0x561ee0cabc47]
Sep 09 12:16:52 myhost ceph-osd[23535]:  8: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x63e) [0x561ee0c1764e]
Sep 09 12:16:52 myhost ceph-osd[23535]:  9: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x387) [0x561ee0aa6d07]
Sep 09 12:16:52 myhost ceph-osd[23535]:  10: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest> const&)+0x5a) [0x561ee0d29ffa]
Sep 09 12:16:52 myhost ceph-osd[23535]:  11: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x1ae7) [0x561ee0acab97]
Sep 09 12:16:52 myhost ceph-osd[23535]:  12: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5b2) [0x561ee107a612]
Sep 09 12:16:52 myhost ceph-osd[23535]:  13: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x561ee107db80]
Sep 09 12:16:52 myhost ceph-osd[23535]:  14: (()+0x7049) [0x7fd5c0398049]
Sep 09 12:16:52 myhost ceph-osd[23535]:  15: (clone()+0x3f) [0x7fd5bf827f0f]
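
In case it helps with diagnosis, the store on an OSD in this state can be checked offline with ceph-bluestore-tool (a sketch; the ceph-osd process must be stopped first, and the path is the one from my setup above):

    # Offline consistency check of the OSD's BlueStore; add --deep to also
    # verify object data.
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-3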
Actions #8

Updated by Bob Bobington over 6 years ago

I also tried the workaround from http://tracker.ceph.com/issues/21180 by adding these settings to ceph.conf, but no luck:

bluestore throttle bytes = 0
bluestore deferred throttle bytes = 0
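
For reference, the same settings can also be injected at runtime without a restart (a sketch; note the canonical option names use underscores, and as far as I can tell the deferred one is spelled bluestore_throttle_deferred_bytes):

    # Zero both BlueStore throttles on all running OSDs.
    ceph tell osd.* injectargs '--bluestore_throttle_bytes 0 --bluestore_throttle_deferred_bytes 0'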