Bug #14021 » coredump1.log

Clive Xu, 12/08/2015 08:01 AM

-19> 2015-11-25 18:49:05.430206 7faeda34f700 1 -- 10.0.0.12:6800/28647 <== client.4135 10.0.0.13:0/1034298 26095 ==== ping magic: 0 v1 ==== 0+0+0 (0 0 0) 0x5823f00 con 0x4cbf220
-18> 2015-11-25 18:49:05.430209 7faeda34f700 1 -- 10.0.0.12:6800/28647 <== client.4135 10.0.0.13:0/1034298 26097 ==== ping magic: 0 v1 ==== 0+0+0 (0 0 0) 0x5823480 con 0x4cbf220
-17> 2015-11-25 18:49:05.430211 7faeda34f700 1 -- 10.0.0.12:6800/28647 <== client.4135 10.0.0.13:0/1034298 26099 ==== ping magic: 0 v1 ==== 0+0+0 (0 0 0) 0x5821dc0 con 0x4cbf220
-16> 2015-11-25 18:49:05.430214 7faeda34f700 1 -- 10.0.0.12:6800/28647 <== client.4135 10.0.0.13:0/1034298 26101 ==== ping magic: 0 v1 ==== 0+0+0 (0 0 0) 0x5824ec0 con 0x4cbf220
-15> 2015-11-25 18:49:05.430216 7faeda34f700 1 -- 10.0.0.12:6800/28647 <== client.4135 10.0.0.13:0/1034298 26103 ==== ping magic: 0 v1 ==== 0+0+0 (0 0 0) 0x5823640 con 0x4cbf220
-14> 2015-11-25 18:49:05.430219 7faeda34f700 1 -- 10.0.0.12:6800/28647 <== client.4135 10.0.0.13:0/1034298 26105 ==== ping magic: 0 v1 ==== 0+0+0 (0 0 0) 0x5825940 con 0x4cbf220
-13> 2015-11-25 18:49:05.430221 7faeda34f700 1 -- 10.0.0.12:6800/28647 <== client.4135 10.0.0.13:0/1034298 26107 ==== ping magic: 0 v1 ==== 0+0+0 (0 0 0) 0x5823b80 con 0x4cbf220
-12> 2015-11-25 18:49:05.430224 7faeda34f700 1 -- 10.0.0.12:6800/28647 <== client.4135 10.0.0.13:0/1034298 26109 ==== ping magic: 0 v1 ==== 0+0+0 (0 0 0) 0x5824280 con 0x4cbf220
-11> 2015-11-25 18:49:05.430226 7faeda34f700 1 -- 10.0.0.12:6800/28647 <== client.4135 10.0.0.13:0/1034298 26111 ==== ping magic: 0 v1 ==== 0+0+0 (0 0 0) 0x5820c40 con 0x4cbf220
-10> 2015-11-25 18:49:05.430229 7faeda34f700 1 -- 10.0.0.12:6800/28647 <== client.4135 10.0.0.13:0/1034298 26113 ==== ping magic: 0 v1 ==== 0+0+0 (0 0 0) 0x58232c0 con 0x4cbf220
-9> 2015-11-25 18:49:05.430232 7faeda34f700 1 -- 10.0.0.12:6800/28647 <== client.4135 10.0.0.13:0/1034298 26115 ==== ping magic: 0 v1 ==== 0+0+0 (0 0 0) 0x5821180 con 0x4cbf220
-8> 2015-11-25 18:49:05.430234 7faeda34f700 1 -- 10.0.0.12:6800/28647 <== client.4135 10.0.0.13:0/1034298 26117 ==== ping magic: 0 v1 ==== 0+0+0 (0 0 0) 0x58278c0 con 0x4cbf220
-7> 2015-11-25 18:49:05.430237 7faeda34f700 1 -- 10.0.0.12:6800/28647 <== client.4135 10.0.0.13:0/1034298 26119 ==== ping magic: 0 v1 ==== 0+0+0 (0 0 0) 0x5825780 con 0x4cbf220
-6> 2015-11-25 18:49:05.430239 7faeda34f700 1 -- 10.0.0.12:6800/28647 <== client.4135 10.0.0.13:0/1034298 26121 ==== ping magic: 0 v1 ==== 0+0+0 (0 0 0) 0x6d401c0 con 0x4cbf220
-5> 2015-11-25 18:49:05.430242 7faeda34f700 1 -- 10.0.0.12:6800/28647 <== client.4135 10.0.0.13:0/1034298 26123 ==== ping magic: 0 v1 ==== 0+0+0 (0 0 0) 0x51108c0 con 0x4cbf220
-4> 2015-11-25 18:49:05.453251 7faeda34f700 10 monclient(hunting): get_version osdmap req 0x59d3680
-3> 2015-11-25 18:49:05.465540 7faeda34f700 1 -- 10.0.0.12:6800/28647 <== mon.0 10.0.0.11:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (2483598769 0 0) 0x57d5c40 con 0x4cbe460
-2> 2015-11-25 18:49:05.465681 7faeda34f700 10 monclient(hunting): _send_mon_message to mon.node0 at 10.0.0.11:6789/0
-1> 2015-11-25 18:49:05.465684 7faeda34f700 1 -- 10.0.0.12:6800/28647 --> 10.0.0.11:6789/0 -- auth(proto 2 128 bytes epoch 0) v1 -- ?+0 0x57d33c0 con 0x4cbe460
0> 2015-11-25 18:49:05.471589 7faece337700 -1 *** Caught signal (Aborted) **
in thread 7faece337700

ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
1: /usr/bin/ceph-osd() [0xac5642]
2: (()+0xf130) [0x7faee962d130]
3: (gsignal()+0x37) [0x7faee80475d7]
4: (abort()+0x148) [0x7faee8048cc8]
5: (__gnu_cxx::__verbose_terminate_handler()+0x165) [0x7faee894b9b5]
6: (()+0x5e926) [0x7faee8949926]
7: (()+0x5e953) [0x7faee8949953]
8: (()+0x5eb73) [0x7faee8949b73]
9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x27a) [0xbc583a]
10: (ceph::HeartbeatMap::_check(ceph::heartbeat_handle_d*, char const*, long)+0x2d9) [0xafb449]
11: (ceph::HeartbeatMap::reset_timeout(ceph::heartbeat_handle_d*, long, long)+0x89) [0xafb769]
12: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x39b) [0x694c5b]
13: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x86f) [0xbb529f]
14: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0xbb73d0]
15: (()+0x7df5) [0x7faee9625df5]
16: (clone()+0x6d) [0x7faee81081ad]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
0/ 5 none
0/ 1 lockdep
0/ 1 context
1/ 1 crush
1/ 5 mds
1/ 5 mds_balancer
1/ 5 mds_locker
1/ 5 mds_log
1/ 5 mds_log_expire
1/ 5 mds_migrator
0/ 1 buffer
0/ 1 timer
0/ 1 filer
0/ 1 striper
0/ 1 objecter
0/ 5 rados
0/ 5 rbd
0/ 5 rbd_replay
0/ 5 journaler
0/ 5 objectcacher
0/ 5 client
0/ 5 osd
0/ 5 optracker
0/ 5 objclass
1/ 3 filestore
1/ 3 keyvaluestore
1/ 3 journal
0/ 5 ms
1/ 5 mon
0/10 monc
1/ 5 paxos
0/ 5 tp
1/ 5 auth
1/ 5 crypto
1/ 1 finisher
1/ 5 heartbeatmap
1/ 5 perfcounter
1/ 5 rgw
1/10 civetweb
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
0/ 0 refs
1/ 5 xio
-2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 10000
max_new 1000
log_file /var/log/ceph/ceph-osd.1.log
--- end dump of recent events ---