Bug #13527

monitor segmentation fault

Added by Ruifeng Yang over 7 years ago. Updated almost 7 years ago.

Status:
Duplicate
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

ceph version 9.1.0-227-gc55f8d4 (c55f8d4300019bf2da1fe52edb0c2151cc8e3d2e)
1: (()+0x68d75a) [0x7f066bd2975a]
2: (()+0x10340) [0x7f066a538340]
3: (PK11_DoesMechanism()+0x5f) [0x7f066a2404bf]
4: (()+0x33027) [0x7f066a221027]
5: (()+0x4cf2a) [0x7f066a23af2a]
6: (()+0x4cf7b) [0x7f066a23af7b]
7: (PK11_CreateContextBySymKey()+0x22) [0x7f066a21fe62]
8: (()+0x374407) [0x7f066ba10407]
9: (CryptoAESKeyHandler::decrypt(ceph::buffer::list const&, ceph::buffer::list&, std::string*) const+0x2a) [0x7f066ba111ca]
10: (void decode_decrypt_enc_bl<CephXServiceTicketInfo>(CephContext*, CephXServiceTicketInfo&, CryptoKey, ceph::buffer::list&, std::string&)+0xba) [0x7f066ba0874a]
11: (cephx_verify_authorizer(CephContext*, KeyStore*, ceph::buffer::list::iterator&, CephXServiceTicketInfo&, ceph::buffer::list&)+0x394) [0x7f066ba03a94]
12: (Monitor::ms_verify_authorizer(Connection*, int, int, ceph::buffer::list&, ceph::buffer::list&, bool&, CryptoKey&)+0x409) [0x7f066b8b3179]
13: (SimpleMessenger::verify_authorizer(Connection*, int, int, ceph::buffer::list&, ceph::buffer::list&, bool&, CryptoKey&)+0x7f) [0x7f066bad955f]
14: (Pipe::accept()+0x1e51) [0x7f066bc9f191]
15: (Pipe::reader()+0x19ff) [0x7f066bca3e1f]
16: (Pipe::Reader::entry()+0xd) [0x7f066bcabccd]
17: (()+0x8182) [0x7f066a530182]
18: (clone()+0x6d) [0x7f0668a9d47d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
0/ 5 none
0/ 1 lockdep
0/ 1 context
1/ 1 crush
1/ 5 mds
1/ 5 mds_balancer
1/ 5 mds_locker
1/ 5 mds_log
1/ 5 mds_log_expire
1/ 5 mds_migrator
0/ 1


Related issues

Related to Ceph - Bug #14958: PK11_DestroyContext() is called twice if PK11_DigestFinal() fails Resolved 03/03/2016
Duplicates Ceph - Bug #9744: cephx: verify_reply couldn't decrypt with error: error decoding block for decryption Won't Fix 10/10/2014

History

#1 Updated by Ruifeng Yang over 7 years ago

2015-10-19 13:45:32.686078 7f16758e8940 0 ceph version 9.1.0-227-gc55f8d4 (c55f8d4300019bf2da1fe52edb0c2151cc8e3d2e), process ceph-mon, pid 5796
2015-10-19 13:45:32.696939 7f16758e8940 0 starting mon.ceph2 rank 1 at 192.168.72.42:6789/0 mon_data /var/lib/ceph/mon/ceph-ceph2 fsid f47da131-dc71-4768-844c-e71bf68d4f03
2015-10-19 13:45:32.697098 7f16758e8940 1 mon.ceph2@-1(probing) e0 preinit fsid f47da131-dc71-4768-844c-e71bf68d4f03
2015-10-19 13:45:32.697140 7f16758e8940 1 mon.ceph2@-1(probing) e0 initial_members ceph1,ceph2,ceph3, filtering seed monmap
2015-10-19 13:45:32.697508 7f16758e8940 0 mon.ceph2@-1(probing) e0 my rank is now 0 (was 1)
2015-10-19 13:45:32.697689 7f16758e6700 0 -- 192.168.72.42:6789/0 >> 0.0.0.0:0/1 pipe(0x7f1678dfc000 sd=12 :0 s=1 pgs=0 cs=0 l=0 c=0x7f1678c41340).fault
2015-10-19 13:45:32.697806 7f1674f10700 0 -- 192.168.72.42:6789/0 >> 0.0.0.0:0/2 pipe(0x7f1678e01000 sd=13 :0 s=1 pgs=0 cs=0 l=0 c=0x7f1678c411e0).fault
2015-10-19 13:45:32.697953 7f166a550700 0 -- 192.168.72.42:6789/0 >> 192.168.72.43:6789/0 pipe(0x7f1678e06000 sd=15 :0 s=1 pgs=0 cs=0 l=0 c=0x7f1678c40f20).fault
2015-10-19 13:45:32.704079 7f166a651700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
2015-10-19 13:45:32.704087 7f166a651700 0 -- 192.168.72.42:6789/0 >> 192.168.72.41:6789/0 pipe(0x7f1678e0b000 sd=14 :56464 s=1 pgs=0 cs=0 l=0 c=0x7f1678c41080).failed verifying authorize reply
2015-10-19 13:45:32.704112 7f166a651700 0 -- 192.168.72.42:6789/0 >> 192.168.72.41:6789/0 pipe(0x7f1678e0b000 sd=14 :56464 s=1 pgs=0 cs=0 l=0 c=0x7f1678c41080).fault
2015-10-19 13:45:32.704588 7f1669c4e700 0 cephx: verify_authorizer could not decrypt ticket info: error: NSS AES final round failed: -8190
2015-10-19 13:45:32.704606 7f166a651700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
2015-10-19 13:45:32.704597 7f1669c4e700 0 -- 192.168.72.42:6789/0 >> 192.168.72.41:6789/0 pipe(0x7f1678e2c000 sd=24 :6789 s=0 pgs=0 cs=0 l=0 c=0x7f1678c414a0).accept connect_seq 0 vs existing 0 state connecting
2015-10-19 13:45:32.704610 7f166a651700 0 -- 192.168.72.42:6789/0 >> 192.168.72.41:6789/0 pipe(0x7f1678e0b000 sd=14 :56468 s=1 pgs=0 cs=0 l=0 c=0x7f1678c41080).failed verifying authorize reply
2015-10-19 13:45:32.704823 7f1669c4e700 0 -- 192.168.72.42:6789/0 >> 192.168.72.41:6789/0 pipe(0x7f1678e2c000 sd=24 :6789 s=1 pgs=14155 cs=1 l=0 c=0x7f1678c41080).fault
2015-10-19 14:35:58.785991 7f166a44f700 1 mon.ceph2@0(probing) e0 *** Got Signal Terminated ***
2015-10-19 14:35:58.786027 7f166a44f700 1 mon.ceph2@0(probing) e0 shutdown
2015-10-19 14:38:39.403683 7f4c237d9940 0 ceph version 9.1.0-227-gc55f8d4 (c55f8d4300019bf2da1fe52edb0c2151cc8e3d2e), process ceph-mon, pid 1458
2015-10-19 14:38:39.654858 7f4c237d9940 0 starting mon.ceph2 rank 1 at 192.168.72.42:6789/0 mon_data /var/lib/ceph/mon/ceph-ceph2 fsid f47da131-dc71-4768-844c-e71bf68d4f03
2015-10-19 14:38:39.667082 7f4c237d9940 1 mon.ceph2@-1(probing) e0 preinit fsid f47da131-dc71-4768-844c-e71bf68d4f03
2015-10-19 14:38:39.667158 7f4c237d9940 1 mon.ceph2@-1(probing) e0 initial_members ceph1,ceph3,ceph2, filtering seed monmap
2015-10-19 14:38:39.701353 7f4c237d9940 0 mon.ceph2@-1(probing) e0 my rank is now 0 (was -1)
2015-10-19 14:38:39.701533 7f4c237d7700 0 -- 192.168.72.42:6789/0 >> 0.0.0.0:0/1 pipe(0x7f4c26600000 sd=12 :0 s=1 pgs=0 cs=0 l=0 c=0x7f4c26445340).fault
2015-10-19 14:38:39.701554 7f4c22e01700 0 -- 192.168.72.42:6789/0 >> 0.0.0.0:0/2 pipe(0x7f4c26605000 sd=13 :0 s=1 pgs=0 cs=0 l=0 c=0x7f4c264451e0).fault
2015-10-19 14:38:40.205808 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:40.205882 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:41.250115 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:41.250193 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:42.291365 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:42.291438 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:42.701094 7f4c18542700 0 -- 192.168.72.42:6789/0 >> 192.168.72.41:6789/0 pipe(0x7f4c2660f000 sd=14 :0 s=1 pgs=0 cs=0 l=0 c=0x7f4c26445080).fault
2015-10-19 14:38:42.701097 7f4c18441700 0 -- 192.168.72.42:6789/0 >> 192.168.72.43:6789/0 pipe(0x7f4c2660a000 sd=15 :0 s=1 pgs=0 cs=0 l=0 c=0x7f4c26444f20).fault
2015-10-19 14:38:43.332292 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:43.332364 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:44.372352 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:44.372443 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:45.412892 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:45.412976 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:46.452619 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:46.452703 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:47.493913 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:47.493998 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:48.534158 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:48.534219 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:49.574923 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:49.574995 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:50.615193 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:50.615262 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:51.655645 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:51.655716 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:52.696145 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:52.696214 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:53.737704 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:53.737770 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:54.779753 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:54.779829 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:55.820096 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:55.820162 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:56.860762 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:56.860832 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:57.901181 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:57.901264 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:58.941331 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:58.941404 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:38:59.981174 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:38:59.981263 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:39:01.021912 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:39:01.021984 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:39:02.064217 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:39:02.064284 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:39:03.105485 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:39:03.105578 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:39:04.145460 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:39:04.145529 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:39:05.184647 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:39:05.184728 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:39:06.223813 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:39:06.223882 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:39:07.263490 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:39:07.263564 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:39:08.303523 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:39:08.303595 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:39:09.343200 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-10-19 14:39:09.343261 7f4c1c8ba700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-10-19 14:39:09.713066 7f4c17b3f700 0 cephx: verify_authorizer could not decrypt ticket info: error: NSS AES final round failed: -8190
2015-10-19 14:39:09.713076 7f4c17b3f700 0 -- 192.168.72.42:6789/0 >> 192.168.72.41:6789/0 pipe(0x7f4c26784000 sd=24 :6789 s=0 pgs=0 cs=0 l=0 c=0x7f4c264454a0).accept connect_seq 0 vs existing 0 state connecting
2015-10-19 14:39:11.831121 7f2163fa3940 0 ceph version 9.1.0-227-gc55f8d4 (c55f8d4300019bf2da1fe52edb0c2151cc8e3d2e), process ceph-mon, pid 1903
2015-10-19 14:39:11.840114 7f2163fa3940 0 starting mon.ceph2 rank 1 at 192.168.72.42:6789/0 mon_data /var/lib/ceph/mon/ceph-ceph2 fsid f47da131-dc71-4768-844c-e71bf68d4f03
2015-10-19 14:39:11.840287 7f2163fa3940 1 mon.ceph2@-1(probing) e0 preinit fsid f47da131-dc71-4768-844c-e71bf68d4f03
2015-10-19 14:39:11.840323 7f2163fa3940 1 mon.ceph2@-1(probing) e0 initial_members ceph1,ceph3,ceph2, filtering seed monmap
2015-10-19 14:39:11.840666 7f2163fa3940 0 mon.ceph2@-1(probing) e0 my rank is now 0 (was 1)
2015-10-19 14:39:11.840813 7f2163fa1700 0 -- 192.168.72.42:6789/0 >> 0.0.0.0:0/1 pipe(0x7f2166bbe000 sd=12 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a03340).fault
2015-10-19 14:39:11.840827 7f21635cb700 0 -- 192.168.72.42:6789/0 >> 0.0.0.0:0/2 pipe(0x7f2166bc3000 sd=13 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a031e0).fault
2015-10-19 14:39:11.841017 7f2158c0b700 0 -- 192.168.72.42:6789/0 >> 192.168.72.43:6789/0 pipe(0x7f2166bc8000 sd=15 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a02f20).fault
2015-10-19 14:39:11.867845 7f2158d0c700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
2015-10-19 14:39:11.867851 7f2158d0c700 0 -- 192.168.72.42:6789/0 >> 192.168.72.41:6789/0 pipe(0x7f2166bcd000 sd=14 :41700 s=1 pgs=0 cs=0 l=0 c=0x7f2166a03080).failed verifying authorize reply
2015-10-19 14:39:11.867893 7f2158d0c700 0 -- 192.168.72.42:6789/0 >> 192.168.72.41:6789/0 pipe(0x7f2166bcd000 sd=14 :41700 s=1 pgs=0 cs=0 l=0 c=0x7f2166a03080).fault
2015-10-19 14:39:11.868158 7f2158309700 0 cephx: verify_authorizer could not decrypt ticket info: error: NSS AES final round failed: -8190
2015-10-19 14:39:11.868163 7f2158309700 0 -- 192.168.72.42:6789/0 >> 192.168.72.41:6789/0 pipe(0x7f2166bee000 sd=24 :6789 s=0 pgs=0 cs=0 l=0 c=0x7f2166a034a0).accept connect_seq 0 vs existing 0 state connecting
2015-10-19 14:39:11.877845 7f215a50f700 -1 *** Caught signal (Segmentation fault) **
 in thread 7f215a50f700

ceph version 9.1.0-227-gc55f8d4 (c55f8d4300019bf2da1fe52edb0c2151cc8e3d2e)
1: (()+0x68d75a) [0x7f2163c5c75a]
2: (()+0x10340) [0x7f216246b340]
3: (PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::dequeue()+0x13b) [0x7f2163bafddb]
4: (DispatchQueue::entry()+0xa3) [0x7f2163bacff3]
5: (DispatchQueue::DispatchThread::entry()+0xd) [0x7f2163a1531d]
6: (()+0x8182) [0x7f2162463182]
7: (clone()+0x6d) [0x7f21609d047d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
-79> 2015-10-19 14:39:11.827928 7f2163fa3940 5 asok(0x7f2166a72000) register_command perfcounters_dump hook 0x7f21669e6050
-78> 2015-10-19 14:39:11.827944 7f2163fa3940 5 asok(0x7f2166a72000) register_command 1 hook 0x7f21669e6050
-77> 2015-10-19 14:39:11.827948 7f2163fa3940 5 asok(0x7f2166a72000) register_command perf dump hook 0x7f21669e6050
-76> 2015-10-19 14:39:11.827951 7f2163fa3940 5 asok(0x7f2166a72000) register_command perfcounters_schema hook 0x7f21669e6050
-75> 2015-10-19 14:39:11.827953 7f2163fa3940 5 asok(0x7f2166a72000) register_command 2 hook 0x7f21669e6050
-74> 2015-10-19 14:39:11.827955 7f2163fa3940 5 asok(0x7f2166a72000) register_command perf schema hook 0x7f21669e6050
-73> 2015-10-19 14:39:11.827958 7f2163fa3940 5 asok(0x7f2166a72000) register_command perf reset hook 0x7f21669e6050
-72> 2015-10-19 14:39:11.827960 7f2163fa3940 5 asok(0x7f2166a72000) register_command config show hook 0x7f21669e6050
-71> 2015-10-19 14:39:11.827963 7f2163fa3940 5 asok(0x7f2166a72000) register_command config set hook 0x7f21669e6050
-70> 2015-10-19 14:39:11.827965 7f2163fa3940 5 asok(0x7f2166a72000) register_command config get hook 0x7f21669e6050
-69> 2015-10-19 14:39:11.827968 7f2163fa3940 5 asok(0x7f2166a72000) register_command config diff hook 0x7f21669e6050
-68> 2015-10-19 14:39:11.827969 7f2163fa3940 5 asok(0x7f2166a72000) register_command log flush hook 0x7f21669e6050
-67> 2015-10-19 14:39:11.827972 7f2163fa3940 5 asok(0x7f2166a72000) register_command log dump hook 0x7f21669e6050
-66> 2015-10-19 14:39:11.827973 7f2163fa3940 5 asok(0x7f2166a72000) register_command log reopen hook 0x7f21669e6050
-65> 2015-10-19 14:39:11.831071 7f2163fa3940 0 set uid:gid to 1000:1000
-64> 2015-10-19 14:39:11.831121 7f2163fa3940 0 ceph version 9.1.0-227-gc55f8d4 (c55f8d4300019bf2da1fe52edb0c2151cc8e3d2e), process ceph-mon, pid 1903
-63> 2015-10-19 14:39:11.832291 7f2163fa3940 5 asok(0x7f2166a72000) init /var/run/ceph/ceph-mon.ceph2.asok
-62> 2015-10-19 14:39:11.832298 7f2163fa3940 5 asok(0x7f2166a72000) bind_and_listen /var/run/ceph/ceph-mon.ceph2.asok
-61> 2015-10-19 14:39:11.832312 7f2163fa3940 5 asok(0x7f2166a72000) register_command 0 hook 0x7f21669e20b8
-60> 2015-10-19 14:39:11.832316 7f2163fa3940 5 asok(0x7f2166a72000) register_command version hook 0x7f21669e20b8
-59> 2015-10-19 14:39:11.832319 7f2163fa3940 5 asok(0x7f2166a72000) register_command git_version hook 0x7f21669e20b8
-58> 2015-10-19 14:39:11.832322 7f2163fa3940 5 asok(0x7f2166a72000) register_command help hook 0x7f21669e6240
-57> 2015-10-19 14:39:11.832325 7f2163fa3940 5 asok(0x7f2166a72000) register_command get_command_descriptions hook 0x7f21669e6230
-56> 2015-10-19 14:39:11.832347 7f215d084700 5 asok(0x7f2166a72000) entry start
-55> 2015-10-19 14:39:11.840114 7f2163fa3940 0 starting mon.ceph2 rank 1 at 192.168.72.42:6789/0 mon_data /var/lib/ceph/mon/ceph-ceph2 fsid f47da131-dc71-4768-844c-e71bf68d4f03
-54> 2015-10-19 14:39:11.840158 7f2163fa3940 1 -- 192.168.72.42:6789/0 learned my addr 192.168.72.42:6789/0
-53> 2015-10-19 14:39:11.840163 7f2163fa3940 1 accepter.accepter.bind my_inst.addr is 192.168.72.42:6789/0 need_addr=0
-52> 2015-10-19 14:39:11.840216 7f2163fa3940 5 adding auth protocol: cephx
-51> 2015-10-19 14:39:11.840219 7f2163fa3940 5 adding auth protocol: cephx
-50> 2015-10-19 14:39:11.840231 7f2163fa3940 10 log_channel(cluster) update_config to_monitors: true to_syslog: false syslog_facility: daemon prio: info)
-49> 2015-10-19 14:39:11.840234 7f2163fa3940 10 log_channel(audit) update_config to_monitors: true to_syslog: false syslog_facility: local0 prio: info)
-48> 2015-10-19 14:39:11.840287 7f2163fa3940 1 mon.ceph2@-1(probing) e0 preinit fsid f47da131-dc71-4768-844c-e71bf68d4f03
-47> 2015-10-19 14:39:11.840323 7f2163fa3940 1 mon.ceph2@-1(probing) e0 initial_members ceph1,ceph3,ceph2, filtering seed monmap
-46> 2015-10-19 14:39:11.840327 7f2163fa3940 1 removing noname-a 192.168.72.41:6789/0
-45> 2015-10-19 14:39:11.840332 7f2163fa3940 1 keeping ceph2 192.168.72.42:6789/0
-44> 2015-10-19 14:39:11.840334 7f2163fa3940 1 removing noname-c 192.168.72.43:6789/0
-43> 2015-10-19 14:39:11.840336 7f2163fa3940 1 adding ceph1 0.0.0.0:0/1
-42> 2015-10-19 14:39:11.840340 7f2163fa3940 1 adding ceph3 0.0.0.0:0/2
-41> 2015-10-19 14:39:11.840566 7f2163fa3940 2 auth: KeyRing::load: loaded key file /var/lib/ceph/mon/ceph-ceph2/keyring
-40> 2015-10-19 14:39:11.840571 7f2163fa3940 5 asok(0x7f2166a72000) register_command mon_status hook 0x7f21669e62c0
-39> 2015-10-19 14:39:11.840576 7f2163fa3940 5 asok(0x7f2166a72000) register_command quorum_status hook 0x7f21669e62c0
-38> 2015-10-19 14:39:11.840580 7f2163fa3940 5 asok(0x7f2166a72000) register_command sync_force hook 0x7f21669e62c0
-37> 2015-10-19 14:39:11.840583 7f2163fa3940 5 asok(0x7f2166a72000) register_command add_bootstrap_peer_hint hook 0x7f21669e62c0
-36> 2015-10-19 14:39:11.840587 7f2163fa3940 5 asok(0x7f2166a72000) register_command quorum enter hook 0x7f21669e62c0
-35> 2015-10-19 14:39:11.840590 7f2163fa3940 5 asok(0x7f2166a72000) register_command quorum exit hook 0x7f21669e62c0
-34> 2015-10-19 14:39:11.840593 7f2163fa3940 5 asok(0x7f2166a72000) register_command ops hook 0x7f21669e62c0
-33> 2015-10-19 14:39:11.840600 7f2163fa3940 1 -- 192.168.72.42:6789/0 messenger.start
-32> 2015-10-19 14:39:11.840623 7f2163fa3940 2 mon.ceph2@-1(probing) e0 init
-31> 2015-10-19 14:39:11.840655 7f2163fa3940 1 accepter.accepter.start
-30> 2015-10-19 14:39:11.840666 7f2163fa3940 0 mon.ceph2@-1(probing) e0 my rank is now 0 (was -1)
-29> 2015-10-19 14:39:11.840669 7f2163fa3940 1 -- 192.168.72.42:6789/0 mark_down_all
-28> 2015-10-19 14:39:11.840676 7f2163fa3940 1 -- 192.168.72.42:6789/0 --> mon.1 0.0.0.0:0/1 -- mon_probe(probe f47da131-dc71-4768-844c-e71bf68d4f03 name ceph2 new) v6 -- ?+0 0x7f2166ad8780
-27> 2015-10-19 14:39:11.840738 7f2163fa3940 1 -- 192.168.72.42:6789/0 --> mon.2 0.0.0.0:0/2 -- mon_probe(probe f47da131-dc71-4768-844c-e71bf68d4f03 name ceph2 new) v6 -- ?+0 0x7f2166ad9680
-26> 2015-10-19 14:39:11.840758 7f2163fa3940 1 -- 192.168.72.42:6789/0 --> mon.? 192.168.72.41:6789/0 -- mon_probe(probe f47da131-dc71-4768-844c-e71bf68d4f03 name ceph2 new) v6 -- ?+0 0x7f2166ad9400
-25> 2015-10-19 14:39:11.840782 7f2163fa3940 1 -- 192.168.72.42:6789/0 --> mon.? 192.168.72.43:6789/0 -- mon_probe(probe f47da131-dc71-4768-844c-e71bf68d4f03 name ceph2 new) v6 -- ?+0 0x7f2166ad9180
-24> 2015-10-19 14:39:11.840781 7f2163fa1700 2 -- 192.168.72.42:6789/0 >> 0.0.0.0:0/1 pipe(0x7f2166bbe000 sd=12 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a03340).connect error 0.0.0.0:0/1, (111) Connection refused
-23> 2015-10-19 14:39:11.840803 7f2163fa1700 2 -- 192.168.72.42:6789/0 >> 0.0.0.0:0/1 pipe(0x7f2166bbe000 sd=12 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a03340).fault (111) Connection refused
-22> 2015-10-19 14:39:11.840798 7f21635cb700 2 -- 192.168.72.42:6789/0 >> 0.0.0.0:0/2 pipe(0x7f2166bc3000 sd=13 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a031e0).connect error 0.0.0.0:0/2, (111) Connection refused
-21> 2015-10-19 14:39:11.840813 7f2163fa1700 0 -- 192.168.72.42:6789/0 >> 0.0.0.0:0/1 pipe(0x7f2166bbe000 sd=12 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a03340).fault
-20> 2015-10-19 14:39:11.840821 7f21635cb700 2 -- 192.168.72.42:6789/0 >> 0.0.0.0:0/2 pipe(0x7f2166bc3000 sd=13 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a031e0).fault (111) Connection refused
-19> 2015-10-19 14:39:11.840827 7f21635cb700 0 -- 192.168.72.42:6789/0 >> 0.0.0.0:0/2 pipe(0x7f2166bc3000 sd=13 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a031e0).fault
-18> 2015-10-19 14:39:11.840847 7f21635cb700 2 -- 192.168.72.42:6789/0 >> 0.0.0.0:0/2 pipe(0x7f2166bc3000 sd=13 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a031e0).connect error 0.0.0.0:0/2, (111) Connection refused
-17> 2015-10-19 14:39:11.840855 7f21635cb700 2 -- 192.168.72.42:6789/0 >> 0.0.0.0:0/2 pipe(0x7f2166bc3000 sd=13 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a031e0).fault (111) Connection refused
-16> 2015-10-19 14:39:11.840849 7f2163fa1700 2 -- 192.168.72.42:6789/0 >> 0.0.0.0:0/1 pipe(0x7f2166bbe000 sd=12 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a03340).connect error 0.0.0.0:0/1, (111) Connection refused
-15> 2015-10-19 14:39:11.840862 7f2163fa1700 2 -- 192.168.72.42:6789/0 >> 0.0.0.0:0/1 pipe(0x7f2166bbe000 sd=12 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a03340).fault (111) Connection refused
-14> 2015-10-19 14:39:11.840982 7f2158c0b700 2 -- 192.168.72.42:6789/0 >> 192.168.72.43:6789/0 pipe(0x7f2166bc8000 sd=15 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a02f20).connect error 192.168.72.43:6789/0, (111) Connection refused
-13> 2015-10-19 14:39:11.841009 7f2158c0b700 2 -- 192.168.72.42:6789/0 >> 192.168.72.43:6789/0 pipe(0x7f2166bc8000 sd=15 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a02f20).fault (111) Connection refused
-12> 2015-10-19 14:39:11.841017 7f2158c0b700 0 -- 192.168.72.42:6789/0 >> 192.168.72.43:6789/0 pipe(0x7f2166bc8000 sd=15 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a02f20).fault
-11> 2015-10-19 14:39:11.841119 7f2158c0b700 2 -- 192.168.72.42:6789/0 >> 192.168.72.43:6789/0 pipe(0x7f2166bc8000 sd=15 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a02f20).connect error 192.168.72.43:6789/0, (111) Connection refused
-10> 2015-10-19 14:39:11.841136 7f2158c0b700 2 -- 192.168.72.42:6789/0 >> 192.168.72.43:6789/0 pipe(0x7f2166bc8000 sd=15 :0 s=1 pgs=0 cs=0 l=0 c=0x7f2166a02f20).fault (111) Connection refused
-9> 2015-10-19 14:39:11.867845 7f2158d0c700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
-8> 2015-10-19 14:39:11.867851 7f2158d0c700 0 -- 192.168.72.42:6789/0 >> 192.168.72.41:6789/0 pipe(0x7f2166bcd000 sd=14 :41700 s=1 pgs=0 cs=0 l=0 c=0x7f2166a03080).failed verifying authorize reply
-7> 2015-10-19 14:39:11.867870 7f2158d0c700 2 -- 192.168.72.42:6789/0 >> 192.168.72.41:6789/0 pipe(0x7f2166bcd000 sd=14 :41700 s=1 pgs=0 cs=0 l=0 c=0x7f2166a03080).fault (0) Success
-6> 2015-10-19 14:39:11.867893 7f2158d0c700 0 -- 192.168.72.42:6789/0 >> 192.168.72.41:6789/0 pipe(0x7f2166bcd000 sd=14 :41700 s=1 pgs=0 cs=0 l=0 c=0x7f2166a03080).fault
-5> 2015-10-19 14:39:11.868016 7f2158309700 1 -- 192.168.72.42:6789/0 >> :/0 pipe(0x7f2166bee000 sd=24 :6789 s=0 pgs=0 cs=0 l=0 c=0x7f2166a034a0).accept sd=24 192.168.72.41:43464/0
-4> 2015-10-19 14:39:11.868158 7f2158309700 0 cephx: verify_authorizer could not decrypt ticket info: error: NSS AES final round failed: -8190
-3> 2015-10-19 14:39:11.868163 7f2158309700 0 -- 192.168.72.42:6789/0 >> 192.168.72.41:6789/0 pipe(0x7f2166bee000 sd=24 :6789 s=0 pgs=0 cs=0 l=0 c=0x7f2166a034a0).accept connect_seq 0 vs existing 0 state connecting
-2> 2015-10-19 14:39:11.868179 7f2158d0c700 2 -- 192.168.72.42:6789/0 >> 192.168.72.41:6789/0 pipe(0x7f2166bcd000 sd=14 :41705 s=4 pgs=0 cs=0 l=0 c=0x7f2166a03080).connect read reply (0) Success
-1> 2015-10-19 14:39:11.868200 7f2158d0c700 3 -- 192.168.72.42:6789/0 >> 192.168.72.41:6789/0 pipe(0x7f2166bcd000 sd=14 :41705 s=4 pgs=0 cs=0 l=0 c=0x7f2166a03080).connect fault, but state = closed != connecting, stopping
0> 2015-10-19 14:39:11.877845 7f215a50f700 -1 *** Caught signal (Segmentation fault) **
in thread 7f215a50f700

ceph version 9.1.0-227-gc55f8d4 (c55f8d4300019bf2da1fe52edb0c2151cc8e3d2e)
1: (()+0x68d75a) [0x7f2163c5c75a]
2: (()+0x10340) [0x7f216246b340]
3: (PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::dequeue()+0x13b) [0x7f2163bafddb]
4: (DispatchQueue::entry()+0xa3) [0x7f2163bacff3]
5: (DispatchQueue::DispatchThread::entry()+0xd) [0x7f2163a1531d]
6: (()+0x8182) [0x7f2162463182]
7: (clone()+0x6d) [0x7f21609d047d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
0/ 5 none
0/ 1 lockdep
0/ 1 context
1/ 1 crush
1/ 5 mds
1/ 5 mds_balancer
1/ 5 mds_locker
1/ 5 mds_log
1/ 5 mds_log_expire
1/ 5 mds_migrator
0/ 1 buffer
0/ 1 timer
0/ 1 filer
0/ 1 striper
0/ 1 objecter
0/ 5 rados
0/ 5 rbd
0/ 5 rbd_replay
0/ 5 journaler
0/ 5 objectcacher
0/ 5 client
0/ 5 osd
0/ 5 optracker
0/ 5 objclass
1/ 3 filestore
1/ 3 keyvaluestore
1/ 3 journal
0/ 5 ms
1/ 5 mon
0/10 monc
1/ 5 paxos
0/ 5 tp
1/ 5 auth
1/ 5 crypto
1/ 1 finisher
1/ 5 heartbeatmap
1/ 5 perfcounter
1/ 5 rgw
1/10 civetweb
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
0/ 0 refs
1/ 5 xio
1/ 5 compressor
1/ 5 newstore
2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 10000
max_new 1000
log_file /var/log/ceph/ceph-mon.ceph2.log
--- end dump of recent events ---

#2 Updated by Loïc Dachary about 7 years ago

  • Description updated (diff)
  • Status changed from New to Duplicate

#3 Updated by Loïc Dachary about 7 years ago

  • Duplicates Bug #9744: cephx: verify_reply couldn't decrypt with error: error decoding block for decryption added

#4 Updated by Brad Hubbard almost 7 years ago

This crash is fixed by commit e9e05333ac7c64758bf14d80f6179e001c0fdbfd
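For context on the mechanism: the related tracker entry (#14958, "PK11_DestroyContext() is called twice if PK11_DigestFinal() fails") describes a double-destroy of an NSS crypto context on a failed decrypt, which corrupts the heap and can surface later as a segfault in an unrelated thread (consistent with the second backtrace here landing in PrioritizedQueue::dequeue). A minimal sketch of the single-ownership pattern that avoids this class of bug; all names (FakeContext, decrypt_once) are hypothetical stand-ins, not the real NSS or Ceph API:

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-in for an NSS PK11Context; not the real NSS API.
struct FakeContext {};

static int g_destroy_count = 0;

// Stand-in for PK11_DestroyContext(); counts calls so a double free is visible.
void destroy_context(FakeContext* ctx) {
  ++g_destroy_count;
  delete ctx;
}

// The buggy shape (the class of bug in #14958): the error path destroys the
// context, then generic cleanup destroys it again -> double free.
// The safe shape below gives the context exactly one owner, so it is
// destroyed exactly once no matter which path returns.
using ContextPtr = std::unique_ptr<FakeContext, void (*)(FakeContext*)>;

int decrypt_once(bool final_round_fails) {
  ContextPtr ctx(new FakeContext, destroy_context);
  if (final_round_fails) {
    return -1;  // the deleter fires exactly once as ctx leaves scope
  }
  // ... use ctx for the decrypt ...
  return 0;     // same single destruction on the success path
}
```

The point of the sketch is only that error paths and cleanup paths must not both own the context; whether the actual fix commit uses this exact technique is not stated in this ticket.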

#5 Updated by Brad Hubbard almost 7 years ago

  • Related to Bug #14958: PK11_DestroyContext() is called twice if PK11_DigestFinal() fails added
