Bug #649

OSD: CryptoPP::StreamTransformationFilter::LastPut

Added by Wido den Hollander over 13 years ago. Updated over 13 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
Yehuda Sadeh
Category:
OSD
Target version:
-
% Done:

0%

Description

This morning on my test machine (noisy.ceph.widodh.nl: 1 MON, 1 MDS, 3 OSDs) all three OSDs died at exactly the same moment (04:31).

The backtraces were identical:

Core was generated by `/usr/bin/cosd -i 0 -c /etc/ceph/ceph.conf'.
Program terminated with signal 6, Aborted.
#0  0x00007f8901e68a75 in raise () from /lib/libc.so.6
(gdb) bt
#0  0x00007f8901e68a75 in raise () from /lib/libc.so.6
#1  0x00007f8901e6c5c0 in abort () from /lib/libc.so.6
#2  0x00007f890271e8e5 in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib/libstdc++.so.6
#3  0x00007f890271cd16 in ?? () from /usr/lib/libstdc++.so.6
#4  0x00007f890271cd43 in std::terminate() () from /usr/lib/libstdc++.so.6
#5  0x00007f890271ce3e in __cxa_throw () from /usr/lib/libstdc++.so.6
#6  0x00007f8902ed1f28 in CryptoPP::StreamTransformationFilter::LastPut(unsigned char const*, unsigned long) () from /usr/lib/libcrypto++.so.8
#7  0x00007f8902ed3dbc in CryptoPP::FilterWithBufferedInput::PutMaybeModifiable(unsigned char*, unsigned long, int, bool, bool) () from /usr/lib/libcrypto++.so.8
#8  0x000000000059c502 in CryptoPP::BufferedTransformation::MessageEnd (this=<value optimized out>, secret=<value optimized out>, in=..., out=<value optimized out>) at /usr/include/cryptopp/cryptlib.h:820
#9  CryptoAES::decrypt (this=<value optimized out>, secret=<value optimized out>, in=..., out=<value optimized out>) at auth/Crypto.cc:181
#10 0x0000000000596e3f in int decode_decrypt_enc_bl<ceph::buffer::list>(ceph::buffer::list&, CryptoKey, ceph::buffer::list&) ()
#11 0x0000000000599426 in int decode_decrypt<ceph::buffer::list>(ceph::buffer::list&, CryptoKey, ceph::buffer::list::iterator&) ()
#12 0x0000000000595912 in CephXTicketHandler::verify_service_ticket_reply (this=0x220a028, secret=<value optimized out>, indata=...) at auth/cephx/CephxProtocol.cc:134
#13 0x000000000059616e in CephXTicketManager::verify_service_ticket_reply (this=<value optimized out>, secret=<value optimized out>, indata=...) at auth/cephx/CephxProtocol.cc:246
#14 0x00000000005f6fa9 in CephxClientHandler::handle_response (this=0x153d300, ret=0, indata=...) at auth/cephx/CephxClientHandler.cc:115
#15 0x00000000005e233b in MonClient::handle_auth (this=0x7fff9ecbac70, m=0x28e1600) at mon/MonClient.cc:379
#16 0x00000000005e4323 in MonClient::ms_dispatch (this=0x7fff9ecbac70, m=0x28e1600) at mon/MonClient.cc:227
#17 0x000000000046d279 in Messenger::ms_deliver_dispatch (this=0x1541000) at msg/Messenger.h:97
#18 SimpleMessenger::dispatch_entry (this=0x1541000) at msg/SimpleMessenger.cc:352
#19 0x0000000000463fcc in SimpleMessenger::DispatchThread::entry (this=0x1541488) at msg/SimpleMessenger.h:531
#20 0x000000000047985a in Thread::_entry_func (arg=0x80e) at ./common/Thread.h:39
#21 0x00007f890328f9ca in start_thread () from /lib/libpthread.so.0
#22 0x00007f8901f1b70d in clone () from /lib/libc.so.6
#23 0x0000000000000000 in ?? ()
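
For context: frame #6 is Crypto++ finalizing the decryption filter. With the default PKCS padding, StreamTransformationFilter::LastPut() throws CryptoPP::InvalidCiphertext when the final block's padding does not verify, e.g. when the ciphertext was produced under a different (or expired) key. A minimal sketch of the call pattern that ends in exactly this throw — the function and variable names here are illustrative, not the actual code in auth/Crypto.cc:

#include <cryptopp/aes.h>
#include <cryptopp/modes.h>
#include <cryptopp/filters.h>
#include <cryptopp/secblock.h>
#include <string>

// AES-CBC decryption with Crypto++. With the default PKCS padding,
// StreamTransformationFilter::LastPut() -- frame #6 in the backtrace --
// throws CryptoPP::InvalidCiphertext if the final block's padding does
// not verify, e.g. because the key is wrong or expired.
std::string decrypt_cbc(const CryptoPP::SecByteBlock &key,
                        const unsigned char iv[CryptoPP::AES::BLOCKSIZE],
                        const std::string &ciphertext)
{
  std::string plaintext;
  CryptoPP::CBC_Mode<CryptoPP::AES>::Decryption dec(key, key.size(), iv);
  // The 'true' (pumpAll) flag signals MessageEnd(); that is what drives
  // LastPut(), where the padding check -- and the throw -- happens.
  CryptoPP::StringSource ss(ciphertext, true,
      new CryptoPP::StreamTransformationFilter(dec,
          new CryptoPP::StringSink(plaintext)));
  return plaintext; // never reached if LastPut() threw
}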

To me it seems related to the switch from OpenSSL to GnuTLS?

My libcrypto++ packages are from Ubuntu 10.04's repositories:

root@noisy:~# dpkg -l|grep libcrypto
ii  libcrypto++-dev                          5.6.0-5                                          General purpose cryptographic library - C++ 
ii  libcrypto++8                             5.6.0-5                                          General purpose cryptographic library - shar
root@noisy:~# dpkg -l|grep libgnutls
ii  libgnutls-dev                            2.8.5-2                                          the GNU TLS library - development files
ii  libgnutls26                              2.8.5-2                                          the GNU TLS library - runtime library
root@noisy:~#

The logs gave me:

root@noisy:~# zcat /var/log/ceph/osd.0.log.1.gz |tail -n 100
2010-12-14 04:13:24.882008 7f88fdc14710 filestore(/var/lib/ceph/osd.0) _journaled_ahead 150382 0x28e02c8,0x28e0468
2010-12-14 04:13:24.882031 7f88fe415710 journal write_thread_entry going to sleep
2010-12-14 04:13:24.882049 7f88fdc14710 journal op_apply_start 150382
2010-12-14 04:13:24.882064 7f88fdc14710 filestore(/var/lib/ceph/osd.0) queue_op new osr 0x1a60d80/0x1ee5b98
2010-12-14 04:13:24.882074 7f88fdc14710 filestore(/var/lib/ceph/osd.0) queue_op 0x23acc00 seq 150382 5344 bytes   (queue has 1 ops and 5344 bytes)
2010-12-14 04:13:24.882090 7f88fdc14710 filestore(/var/lib/ceph/osd.0)  queueing ondisk 0x2893ce0
2010-12-14 04:13:24.882105 7f88fcc12710 filestore(/var/lib/ceph/osd.0) _do_op 0x23acc00 150382 osr 0x1a60d80/0x1ee5b98 start
2010-12-14 04:13:24.882121 7f88fcc12710 filestore(/var/lib/ceph/osd.0) _do_transaction on 0x28e02c8
2010-12-14 04:13:24.882137 7f88fac0e710 osd0 251 pg[3.9( v 251'2024 (251'2022,251'2024] n=10 ec=2 les=248 238/243/229) [0,2,1] r=0 luod=251'2023 lcod 251'2023 mlcod 251'2022 active+clean] op_commit repgather(0x2aa80f0 applying 251'2024 rep_tid=6649 wfack=0,1,2 wfdisk=0,1,2 op=osd_op(client5803.0:18839 rb.0.2.00000000078d [write 2883584~4096] 3.8c09) v1)
2010-12-14 04:13:24.882193 7f88fac0e710 osd0 251 pg[3.9( v 251'2024 (251'2022,251'2024] n=10 ec=2 les=248 238/243/229) [0,2,1] r=0 mlcod 251'2022 active+clean] eval_repop repgather(0x2aa80f0 applying 251'2024 rep_tid=6649 wfack=0,1,2 wfdisk=1,2 op=osd_op(client5803.0:18839 rb.0.2.00000000078d [write 2883584~4096] 3.8c09) v1) wants=ad
2010-12-14 04:13:24.882223 7f88fcc12710 filestore(/var/lib/ceph/osd.0) write /var/lib/ceph/osd.0/current/3.9_head/rb.0.2.00000000078d_head 2883584~4096
2010-12-14 04:13:24.882334 7f88fcc12710 filestore(/var/lib/ceph/osd.0) queue_flusher ep 3715 fd 21 2883584~4096 qlen 1
2010-12-14 04:13:24.882350 7f88fcc12710 filestore(/var/lib/ceph/osd.0) write /var/lib/ceph/osd.0/current/3.9_head/rb.0.2.00000000078d_head 2883584~4096 = 4096
2010-12-14 04:13:24.882366 7f88fbc10710 filestore(/var/lib/ceph/osd.0) flusher_entry awoke
2010-12-14 04:13:24.882385 7f88fcc12710 filestore(/var/lib/ceph/osd.0) setattr /var/lib/ceph/osd.0/current/3.9_head/rb.0.2.00000000078d_head '_' len 144
2010-12-14 04:13:24.882404 7f88fbc10710 filestore(/var/lib/ceph/osd.0) flusher_entry flushing+closing 21 ep 3715
2010-12-14 04:13:24.882452 7f88fcc12710 filestore(/var/lib/ceph/osd.0) setattr /var/lib/ceph/osd.0/current/3.9_head/rb.0.2.00000000078d_head '_' len 144 = 144
2010-12-14 04:13:24.882488 7f88fbc10710 filestore(/var/lib/ceph/osd.0) flusher_entry sleeping
2010-12-14 04:13:24.882527 7f88fcc12710 filestore(/var/lib/ceph/osd.0) setattr /var/lib/ceph/osd.0/current/3.9_head/rb.0.2.00000000078d_head 'snapset' len 26
2010-12-14 04:13:24.882572 7f88fcc12710 filestore(/var/lib/ceph/osd.0) setattr /var/lib/ceph/osd.0/current/3.9_head/rb.0.2.00000000078d_head 'snapset' len 26 = 26
2010-12-14 04:13:24.882584 7f88fcc12710 filestore(/var/lib/ceph/osd.0) _do_transaction on 0x28e0468
2010-12-14 04:13:24.882600 7f88fcc12710 filestore(/var/lib/ceph/osd.0) write /var/lib/ceph/osd.0/current/meta/pglog_3.9_0 42630~98
2010-12-14 04:13:25.088850 7f88f8c0a710 osd0 251 heartbeat_dispatch 0x265fe00
2010-12-14 04:13:25.088878 7f88f8c0a710 osd0 251 handle_osd_ping from osd2 got stat stat(2010-12-14 04:13:25.088496 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:25.088907 7f88f8c0a710 osd0 251 _share_map_incoming osd2 [2a00:f10:113:1:230:48ff:fe8d:a21f]:6809/2220 251
2010-12-14 04:13:25.088934 7f88f8c0a710 osd0 251 take_peer_stat peer osd2 stat(2010-12-14 04:13:25.088496 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:25.390545 7f88fec16710 osd0 251 tick
2010-12-14 04:13:25.390675 7f88fec16710 osd0 251 scrub_should_schedule loadavg 14.8 >= max 0.5 = no, load too high
2010-12-14 04:13:25.390718 7f88fec16710 osd0 251 do_mon_report
2010-12-14 04:13:25.390729 7f88fec16710 osd0 251 send_alive up_thru currently 248 want 245
2010-12-14 04:13:25.390740 7f88fec16710 osd0 251 send_pg_stats
2010-12-14 04:13:25.390752 7f88fec16710 osd0 251 send_pg_stats - 2 pgs updated
2010-12-14 04:13:25.888021 7f88f5b03710 osd0 251 update_osd_stat osd_stat(5670 MB used, 94692 MB avail, 102400 MB total, peers [1,2]/[1,2])
2010-12-14 04:13:25.888070 7f88f5b03710 osd0 251 heartbeat: stat(2010-12-14 04:13:25.887908 oprate=0.442031 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:25.888097 7f88f5b03710 osd0 251 heartbeat: osd_stat(5670 MB used, 94692 MB avail, 102400 MB total, peers [1,2]/[1,2])
2010-12-14 04:13:26.089189 7f88f8c0a710 osd0 251 heartbeat_dispatch 0x269e380
2010-12-14 04:13:26.089217 7f88f8c0a710 osd0 251 handle_osd_ping from osd2 got stat stat(2010-12-14 04:13:26.088798 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:26.089244 7f88f8c0a710 osd0 251 _share_map_incoming osd2 [2a00:f10:113:1:230:48ff:fe8d:a21f]:6809/2220 251
2010-12-14 04:13:26.089270 7f88f8c0a710 osd0 251 take_peer_stat peer osd2 stat(2010-12-14 04:13:26.088798 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:26.170795 7f88f8c0a710 osd0 251 heartbeat_dispatch 0x2b48540
2010-12-14 04:13:26.170817 7f88f8c0a710 osd0 251 handle_osd_ping from osd1 got stat stat(2010-12-14 04:13:26.170425 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:26.170844 7f88f8c0a710 osd0 251 _share_map_incoming osd1 [2a00:f10:113:1:230:48ff:fe8d:a21f]:6806/2130 251
2010-12-14 04:13:26.170863 7f88f8c0a710 osd0 251 take_peer_stat peer osd1 stat(2010-12-14 04:13:26.170425 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:26.390860 7f88fec16710 osd0 251 tick
2010-12-14 04:13:26.390978 7f88fec16710 osd0 251 scrub_should_schedule loadavg 14.8 >= max 0.5 = no, load too high
2010-12-14 04:13:26.688344 7f88f5b03710 osd0 251 update_osd_stat osd_stat(5670 MB used, 94692 MB avail, 102400 MB total, peers [1,2]/[1,2])
2010-12-14 04:13:26.688409 7f88f5b03710 osd0 251 heartbeat: stat(2010-12-14 04:13:26.688206 oprate=0.442031 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:26.688444 7f88f5b03710 osd0 251 heartbeat: osd_stat(5670 MB used, 94692 MB avail, 102400 MB total, peers [1,2]/[1,2])
2010-12-14 04:13:26.771063 7f88f8c0a710 osd0 251 heartbeat_dispatch 0x2762380
2010-12-14 04:13:26.771087 7f88f8c0a710 osd0 251 handle_osd_ping from osd1 got stat stat(2010-12-14 04:13:26.770711 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:26.771113 7f88f8c0a710 osd0 251 _share_map_incoming osd1 [2a00:f10:113:1:230:48ff:fe8d:a21f]:6806/2130 251
2010-12-14 04:13:26.771153 7f88f8c0a710 osd0 251 take_peer_stat peer osd1 stat(2010-12-14 04:13:26.770711 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:27.189535 7f88f8c0a710 osd0 251 heartbeat_dispatch 0x2499380
2010-12-14 04:13:27.189565 7f88f8c0a710 osd0 251 handle_osd_ping from osd2 got stat stat(2010-12-14 04:13:27.189101 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:27.189592 7f88f8c0a710 osd0 251 _share_map_incoming osd2 [2a00:f10:113:1:230:48ff:fe8d:a21f]:6809/2220 251
2010-12-14 04:13:27.189612 7f88f8c0a710 osd0 251 take_peer_stat peer osd2 stat(2010-12-14 04:13:27.189101 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:27.233006 7f88f940b710 osd0 251 _dispatch 0x23c5d80 osd_sub_op_reply(client5803.0:18839 3.9 rb.0.2.00000000078d/head [] ondisk = 0) v1
2010-12-14 04:13:27.233042 7f88f940b710 osd0 251 require_same_or_newer_map 251 (i am 251) 0x23c5d80
2010-12-14 04:13:27.233057 7f88f940b710 osd0 251 _share_map_incoming osd1 [2a00:f10:113:1:230:48ff:fe8d:a21f]:6805/2130 251
2010-12-14 04:13:27.233086 7f88f940b710 osd0 251 pg[3.9( v 251'2024 (251'2022,251'2024] n=10 ec=2 les=248 238/243/229) [0,2,1] r=0 mlcod 251'2022 active+clean] enqueue_op 0x23c5d80 osd_sub_op_reply(client5803.0:18839 3.9 rb.0.2.00000000078d/head [] ondisk = 0) v1
2010-12-14 04:13:27.233141 7f88f7b07710 osd0 251 dequeue_op osd_sub_op_reply(client5803.0:18839 3.9 rb.0.2.00000000078d/head [] ondisk = 0) v1 pg pg[3.9( v 251'2024 (251'2022,251'2024] n=10 ec=2 les=248 238/243/229) [0,2,1] r=0 mlcod 251'2022 active+clean], 0 more pending
2010-12-14 04:13:27.233168 7f88f7b07710 osd0 251 take_peer_stat peer osd1 stat(2010-12-14 04:13:27.230509 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:27.233198 7f88f7b07710 osd0 251 pg[3.9( v 251'2024 (251'2022,251'2024] n=10 ec=2 les=248 238/243/229) [0,2,1] r=0 mlcod 251'2022 active+clean] repop_ack rep_tid 6649 op osd_op(client5803.0:18839 rb.0.2.00000000078d [write 2883584~4096] 3.8c09) v1 result 0 ack_type 4 from osd1
2010-12-14 04:13:27.233229 7f88f7b07710 osd0 251 pg[3.9( v 251'2024 (251'2022,251'2024] n=10 ec=2 les=248 238/243/229) [0,2,1] r=0 mlcod 251'2022 active+clean] eval_repop repgather(0x2aa80f0 applying 251'2024 rep_tid=6649 wfack=0,2 wfdisk=2 op=osd_op(client5803.0:18839 rb.0.2.00000000078d [write 2883584~4096] 3.8c09) v1) wants=ad
2010-12-14 04:13:27.233258 7f88f7b07710 osd0 251 dequeue_op 0x23c5d80 finish
2010-12-14 04:13:27.391094 7f88fec16710 osd0 251 tick
2010-12-14 04:13:27.391214 7f88fec16710 osd0 251 scrub_should_schedule loadavg 14.8 >= max 0.5 = no, load too high
2010-12-14 04:13:27.951675 7f88fd413710 filestore(/var/lib/ceph/osd.0) sync_entry woke after 5.000063
2010-12-14 04:13:27.951743 7f88fd413710 journal commit_start op_seq 150382, applied_seq 150381, committed_seq 150380
2010-12-14 04:13:27.951754 7f88fd413710 journal commit_start blocked, waiting for 1 open ops
2010-12-14 04:13:27.988675 7f88f5b03710 osd0 251 update_osd_stat osd_stat(5670 MB used, 94692 MB avail, 102400 MB total, peers [1,2]/[1,2])
2010-12-14 04:13:27.988721 7f88f5b03710 osd0 251 heartbeat: stat(2010-12-14 04:13:27.988551 oprate=0.291372 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:27.988757 7f88f5b03710 osd0 251 heartbeat: osd_stat(5670 MB used, 94692 MB avail, 102400 MB total, peers [1,2]/[1,2])
2010-12-14 04:13:28.171348 7f88f8c0a710 osd0 251 heartbeat_dispatch 0x280e700
2010-12-14 04:13:28.171374 7f88f8c0a710 osd0 251 handle_osd_ping from osd1 got stat stat(2010-12-14 04:13:28.170991 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:28.171403 7f88f8c0a710 osd0 251 _share_map_incoming osd1 [2a00:f10:113:1:230:48ff:fe8d:a21f]:6806/2130 251
2010-12-14 04:13:28.171428 7f88f8c0a710 osd0 251 take_peer_stat peer osd1 stat(2010-12-14 04:13:28.170991 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:28.189671 7f88f8c0a710 osd0 251 heartbeat_dispatch 0x2499700
2010-12-14 04:13:28.189711 7f88f8c0a710 osd0 251 handle_osd_ping from osd2 got stat stat(2010-12-14 04:13:28.189432 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:28.189735 7f88f8c0a710 osd0 251 _share_map_incoming osd2 [2a00:f10:113:1:230:48ff:fe8d:a21f]:6809/2220 251
2010-12-14 04:13:28.189753 7f88f8c0a710 osd0 251 take_peer_stat peer osd2 stat(2010-12-14 04:13:28.189432 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:28.391328 7f88fec16710 osd0 251 tick
2010-12-14 04:13:28.391451 7f88fec16710 osd0 251 scrub_should_schedule loadavg 14.74 >= max 0.5 = no, load too high
2010-12-14 04:13:28.671697 7f88f8c0a710 osd0 251 heartbeat_dispatch 0x2751a80
2010-12-14 04:13:28.671726 7f88f8c0a710 osd0 251 handle_osd_ping from osd1 got stat stat(2010-12-14 04:13:28.671307 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:28.671756 7f88f8c0a710 osd0 251 _share_map_incoming osd1 [2a00:f10:113:1:230:48ff:fe8d:a21f]:6806/2130 251
2010-12-14 04:13:28.671782 7f88f8c0a710 osd0 251 take_peer_stat peer osd1 stat(2010-12-14 04:13:28.671307 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:28.740587 7f88f9c0c710 osd0 251 _dispatch 0x2251a80 pg_stats_ack(1 pgs) v1
2010-12-14 04:13:28.740628 7f88f9c0c710 osd0 251 handle_pg_stats_ack 
2010-12-14 04:13:28.740653 7f88f9c0c710 osd0 251 _dispatch 0x2c4b380 pg_stats_ack(1 pgs) v1
2010-12-14 04:13:28.740663 7f88f9c0c710 osd0 251 handle_pg_stats_ack 
2010-12-14 04:13:28.740675 7f88f9c0c710 osd0 251 _dispatch 0x2958a80 pg_stats_ack(1 pgs) v1
2010-12-14 04:13:28.740685 7f88f9c0c710 osd0 251 handle_pg_stats_ack 
2010-12-14 04:13:28.788969 7f88f5b03710 osd0 251 update_osd_stat osd_stat(5670 MB used, 94692 MB avail, 102400 MB total, peers [1,2]/[1,2])
2010-12-14 04:13:28.789018 7f88f5b03710 osd0 251 heartbeat: stat(2010-12-14 04:13:28.788860 oprate=0.291372 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:28.789050 7f88f5b03710 osd0 251 heartbeat: osd_stat(5670 MB used, 94692 MB avail, 102400 MB total, peers [1,2]/[1,2])
2010-12-14 04:13:28.890015 7f88f8c0a710 osd0 251 heartbeat_dispatch 0x2887e00
2010-12-14 04:13:28.890046 7f88f8c0a710 osd0 251 handle_osd_ping from osd2 got stat stat(2010-12-14 04:13:28.889636 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2010-12-14 04:13:28.890076 7f88f8c0a710 osd0 251 _share_map_incoming osd2 [2a00:f10:113:1:230:48ff:fe8d:a21f]:6809/2220 251
2010-12-14 04:13:28.890095 7f88f8c0a710 osd0 251 take_peer_stat peer osd2 stat(2010-12-14 04:13:28.889636 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)

What I noticed was "scrub_should_schedule loadavg 14.74 >= max 0.5 = no, load too high", but I'm not sure what the machine was doing at that moment. There was one virtual machine running on Qemu-RBD, but it was idle too (or at least it should have been).

Besides that, there is no backtrace of the crash in the logs?

I've run cdebugpack and uploaded the data to logger.ceph.widodh.nl:/srv/ceph/issues/osd_crash_cryptopp/

This was on the RC branch (346a2aac421dd902579d848891726d807e01ec52).

Actions #1

Updated by Sage Weil over 13 years ago

  • Assignee set to Yehuda Sadeh
Actions #2

Updated by Yehuda Sadeh over 13 years ago

  • Status changed from New to Resolved

Fixed in b989087ddf8775588ddbb6234d099398a2e18072. CryptoPP threw an exception when it failed to decode a message (probably due to an expired key). We now catch the exception and return an appropriate error code.
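
The fix amounts to not letting the exception escape: uncaught, it unwinds through __cxa_throw to std::terminate() and abort(), as in frames #1-#5 of the backtrace. A minimal sketch of the catch-and-return pattern, assuming nothing about the actual commit (the function name and the -EACCES error code are illustrative):

#include <cryptopp/cryptlib.h>
#include <cryptopp/filters.h>
#include <cerrno>

// Sketch of the fix: don't let CryptoPP::Exception escape (uncaught, it
// unwinds to std::terminate() and aborts the OSD, as in the backtrace).
// Instead, catch it and convert it into an error return that the cephx
// ticket-verification code can check.
int decrypt_checked(CryptoPP::StreamTransformationFilter &stf,
                    const unsigned char *in, size_t in_len)
{
  try {
    stf.Put(in, in_len);
    stf.MessageEnd(); // LastPut() runs here and may throw
  } catch (const CryptoPP::Exception &e) {
    // e.g. InvalidCiphertext on bad padding from a wrong/expired key
    return -EACCES;   // illustrative error code, not necessarily Ceph's
  }
  return 0;
}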
