Support #11073 » mds.log

mds log file (latest dump) - Jean-Sébastien Frerot, 03/09/2015 08:13 PM

2015-03-09 19:49:17.412687 7fed6aee8800 0 ceph version 0.87.1 (283c2e7cfa2457799f534744d7d549f83ea1335e), process ceph-mds, pid 18339
2015-03-09 19:49:18.373628 7fed652da700 1 mds.-1.0 handle_mds_map standby
2015-03-09 19:49:18.475734 7fed652da700 1 mds.0.261 handle_mds_map i am now mds.0.261
2015-03-09 19:49:18.475743 7fed652da700 1 mds.0.261 handle_mds_map state change up:standby --> up:replay
2015-03-09 19:49:18.475754 7fed652da700 1 mds.0.261 replay_start
2015-03-09 19:49:18.475765 7fed652da700 1 mds.0.261 recovery set is
2015-03-09 19:49:18.475813 7fed652da700 1 mds.0.261 waiting for osdmap 6363 (which blacklists prior instance)
2015-03-09 19:49:18.480420 7fed5ffcd700 0 mds.0.cache creating system inode with ino:100
2015-03-09 19:49:18.480614 7fed5ffcd700 0 mds.0.cache creating system inode with ino:1
2015-03-09 19:49:18.486569 7fed5ffcd700 1 mds.0.261 replay_done
2015-03-09 19:49:18.486578 7fed5ffcd700 1 mds.0.261 making mds journal writeable
2015-03-09 19:49:19.588435 7fed652da700 1 mds.0.261 handle_mds_map i am now mds.0.261
2015-03-09 19:49:19.588441 7fed652da700 1 mds.0.261 handle_mds_map state change up:replay --> up:reconnect
2015-03-09 19:49:19.588447 7fed652da700 1 mds.0.261 reconnect_start
2015-03-09 19:49:19.588450 7fed652da700 1 mds.0.261 reopen_log
2015-03-09 19:49:19.588456 7fed652da700 1 mds.0.server reconnect_clients -- 2 sessions
2015-03-09 19:49:19.588525 7fed652da700 0 log_channel(default) log [DBG] : reconnect by client.567456 192.168.255.132:0/3368 after 0.000007
2015-03-09 19:49:19.588634 7fed652da700 0 log_channel(default) log [DBG] : reconnect by client.567435 192.168.255.129:0/11675 after 0.000141
2015-03-09 19:49:19.588642 7fed652da700 1 mds.0.261 reconnect_done
2015-03-09 19:49:24.328119 7fed609d0700 -1 mds.0.261 *** got signal Terminated ***
2015-03-09 19:49:24.328153 7fed609d0700 1 mds.0.261 suicide. wanted down:dne, now up:reconnect
2015-03-09 19:51:14.240323 7f2a314b9800 0 ceph version 0.87.1 (283c2e7cfa2457799f534744d7d549f83ea1335e), process ceph-mds, pid 18570
2015-03-09 19:51:15.055648 7f2a2b8ab700 1 mds.-1.0 handle_mds_map standby
2015-03-09 19:51:15.189295 7f2a2b8ab700 1 mds.0.262 handle_mds_map i am now mds.0.262
2015-03-09 19:51:15.189305 7f2a2b8ab700 1 mds.0.262 handle_mds_map state change up:standby --> up:replay
2015-03-09 19:51:15.189317 7f2a2b8ab700 1 mds.0.262 replay_start
2015-03-09 19:51:15.189327 7f2a2b8ab700 1 mds.0.262 recovery set is
2015-03-09 19:51:15.189373 7f2a2b8ab700 1 mds.0.262 waiting for osdmap 6364 (which blacklists prior instance)
2015-03-09 19:51:15.246332 7f2a2659e700 0 mds.0.cache creating system inode with ino:100
2015-03-09 19:51:15.246591 7f2a2659e700 0 mds.0.cache creating system inode with ino:1
2015-03-09 19:51:15.267387 7f2a2659e700 1 mds.0.262 replay_done
2015-03-09 19:51:15.267402 7f2a2659e700 1 mds.0.262 making mds journal writeable
2015-03-09 19:51:16.548390 7f2a2b8ab700 1 mds.0.262 handle_mds_map i am now mds.0.262
2015-03-09 19:51:16.548398 7f2a2b8ab700 1 mds.0.262 handle_mds_map state change up:replay --> up:reconnect
2015-03-09 19:51:16.548404 7f2a2b8ab700 1 mds.0.262 reconnect_start
2015-03-09 19:51:16.548407 7f2a2b8ab700 1 mds.0.262 reopen_log
2015-03-09 19:51:16.548414 7f2a2b8ab700 1 mds.0.server reconnect_clients -- 2 sessions
2015-03-09 19:51:16.548472 7f2a2b8ab700 0 log_channel(default) log [DBG] : reconnect by client.567435 192.168.255.129:0/11675 after 0.000007
2015-03-09 19:51:16.548542 7f2a2b8ab700 0 log_channel(default) log [DBG] : reconnect by client.567456 192.168.255.132:0/3368 after 0.000111
2015-03-09 19:51:16.548576 7f2a2b8ab700 1 mds.0.262 reconnect_done
2015-03-09 19:51:20.981775 7f2a2b8ab700 1 mds.0.262 handle_mds_map i am now mds.0.262
2015-03-09 19:51:20.981778 7f2a2b8ab700 1 mds.0.262 handle_mds_map state change up:reconnect --> up:rejoin
2015-03-09 19:51:20.981781 7f2a2b8ab700 1 mds.0.262 rejoin_start
2015-03-09 19:51:20.981811 7f2a2b8ab700 1 mds.0.262 rejoin_joint_start
2015-03-09 19:51:20.981899 7f2a2b8ab700 1 mds.0.262 rejoin_done
2015-03-09 19:51:25.694096 7f2a2b8ab700 1 mds.0.262 handle_mds_map i am now mds.0.262
2015-03-09 19:51:25.694101 7f2a2b8ab700 1 mds.0.262 handle_mds_map state change up:rejoin --> up:active
2015-03-09 19:51:25.694104 7f2a2b8ab700 1 mds.0.262 recovery_done -- successful recovery!
2015-03-09 19:51:25.694325 7f2a2b8ab700 1 mds.0.262 active_start
2015-03-09 19:51:25.694439 7f2a2b8ab700 1 mds.0.262 cluster recovered.
2015-03-09 19:51:25.769257 7f2a277a2700 -1 mds/MDCache.cc: In function 'void MDCache::add_inode(CInode*)' thread 7f2a277a2700 time 2015-03-09 19:51:25.766126
mds/MDCache.cc: 254: FAILED assert(inode_map.count(in->vino()) == 0)

ceph version 0.87.1 (283c2e7cfa2457799f534744d7d549f83ea1335e)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x8b) [0x90261b]
2: /usr/bin/ceph-mds() [0x61d4e7]
3: (Server::prepare_new_inode(std::tr1::shared_ptr<MDRequestImpl>&, CDir*, inodeno_t, unsigned int, ceph_file_layout*)+0x12a3) [0x5c6763]
4: (Server::handle_client_mkdir(std::tr1::shared_ptr<MDRequestImpl>&)+0x252) [0x5dbe82]
5: (Server::dispatch_client_request(std::tr1::shared_ptr<MDRequestImpl>&)+0x4c6) [0x5edd16]
6: (Server::handle_client_request(MClientRequest*)+0x34a) [0x5ee26a]
7: (Server::dispatch(Message*)+0x383) [0x5f53a3]
8: (MDS::handle_deferrable_message(Message*)+0x7ef) [0x57a83f]
9: (MDS::_dispatch(Message*)+0x6d) [0x59034d]
10: (C_MDS_RetryMessage::finish(int)+0x1c) [0x5f6aec]
11: (MDSInternalContextBase::complete(int)+0x153) [0x78bc03]
12: (MDS::_advance_queues()+0x4a5) [0x587f75]
13: (MDS::ProgressThread::entry()+0x4a) [0x5882ca]
14: (()+0x8182) [0x7f2a30e2b182]
15: (clone()+0x6d) [0x7f2a2f7a247d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
-258> 2015-03-09 19:51:14.238666 7f2a314b9800 5 asok(0x3c6a000) register_command perfcounters_dump hook 0x3c52010
-257> 2015-03-09 19:51:14.238713 7f2a314b9800 5 asok(0x3c6a000) register_command 1 hook 0x3c52010
-256> 2015-03-09 19:51:14.238722 7f2a314b9800 5 asok(0x3c6a000) register_command perf dump hook 0x3c52010
-255> 2015-03-09 19:51:14.238732 7f2a314b9800 5 asok(0x3c6a000) register_command perfcounters_schema hook 0x3c52010
-254> 2015-03-09 19:51:14.238747 7f2a314b9800 5 asok(0x3c6a000) register_command 2 hook 0x3c52010
-253> 2015-03-09 19:51:14.238754 7f2a314b9800 5 asok(0x3c6a000) register_command perf schema hook 0x3c52010
-252> 2015-03-09 19:51:14.238760 7f2a314b9800 5 asok(0x3c6a000) register_command config show hook 0x3c52010
-251> 2015-03-09 19:51:14.238767 7f2a314b9800 5 asok(0x3c6a000) register_command config set hook 0x3c52010
-250> 2015-03-09 19:51:14.238773 7f2a314b9800 5 asok(0x3c6a000) register_command config get hook 0x3c52010
-249> 2015-03-09 19:51:14.238779 7f2a314b9800 5 asok(0x3c6a000) register_command config diff hook 0x3c52010
-248> 2015-03-09 19:51:14.238784 7f2a314b9800 5 asok(0x3c6a000) register_command log flush hook 0x3c52010
-247> 2015-03-09 19:51:14.238791 7f2a314b9800 5 asok(0x3c6a000) register_command log dump hook 0x3c52010
-246> 2015-03-09 19:51:14.238797 7f2a314b9800 5 asok(0x3c6a000) register_command log reopen hook 0x3c52010
-245> 2015-03-09 19:51:14.240323 7f2a314b9800 0 ceph version 0.87.1 (283c2e7cfa2457799f534744d7d549f83ea1335e), process ceph-mds, pid 18570
-244> 2015-03-09 19:51:14.240665 7f2a314b9800 1 accepter.accepter.bind my_inst.addr is 0.0.0.0:6800/18570 need_addr=1
-243> 2015-03-09 19:51:14.242036 7f2a314b9800 5 asok(0x3c6a000) init /var/run/ceph/ceph-mds.compute01.asok
-242> 2015-03-09 19:51:14.242050 7f2a314b9800 5 asok(0x3c6a000) bind_and_listen /var/run/ceph/ceph-mds.compute01.asok
-241> 2015-03-09 19:51:14.242073 7f2a314b9800 5 asok(0x3c6a000) register_command 0 hook 0x3c4e0b8
-240> 2015-03-09 19:51:14.242091 7f2a314b9800 5 asok(0x3c6a000) register_command version hook 0x3c4e0b8
-239> 2015-03-09 19:51:14.242096 7f2a314b9800 5 asok(0x3c6a000) register_command git_version hook 0x3c4e0b8
-238> 2015-03-09 19:51:14.242099 7f2a314b9800 5 asok(0x3c6a000) register_command help hook 0x3c520c0
-237> 2015-03-09 19:51:14.242103 7f2a314b9800 5 asok(0x3c6a000) register_command get_command_descriptions hook 0x3c520b0
-236> 2015-03-09 19:51:14.242132 7f2a314b9800 10 monclient(hunting): build_initial_monmap
-235> 2015-03-09 19:51:14.242154 7f2a2d0ae700 5 asok(0x3c6a000) entry start
-234> 2015-03-09 19:51:14.242203 7f2a314b9800 1 -- 0.0.0.0:6800/18570 messenger.start
-233> 2015-03-09 19:51:14.242251 7f2a314b9800 5 adding auth protocol: cephx
-232> 2015-03-09 19:51:14.242257 7f2a314b9800 5 adding auth protocol: cephx
-231> 2015-03-09 19:51:14.242363 7f2a314b9800 5 asok(0x3c6a000) register_command objecter_requests hook 0x3c52150
-230> 2015-03-09 19:51:14.242449 7f2a314b9800 1 accepter.accepter.start
-229> 2015-03-09 19:51:14.242471 7f2a314b9800 10 monclient(hunting): init
-228> 2015-03-09 19:51:14.242481 7f2a314b9800 5 adding auth protocol: cephx
-227> 2015-03-09 19:51:14.242485 7f2a314b9800 10 monclient(hunting): auth_supported 2 method cephx
-226> 2015-03-09 19:51:14.242627 7f2a314b9800 2 auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-compute01/keyring
-225> 2015-03-09 19:51:14.242714 7f2a314b9800 10 monclient(hunting): _reopen_session rank -1 name
-224> 2015-03-09 19:51:14.242753 7f2a314b9800 10 monclient(hunting): picked mon.noname-b con 0x3c84f20 addr 192.168.255.130:6789/0
-223> 2015-03-09 19:51:14.242780 7f2a314b9800 10 monclient(hunting): _send_mon_message to mon.noname-b at 192.168.255.130:6789/0
-222> 2015-03-09 19:51:14.242787 7f2a314b9800 1 -- 0.0.0.0:6800/18570 --> 192.168.255.130:6789/0 -- auth(proto 0 34 bytes epoch 0) v1 -- ?+0 0x3c66000 con 0x3c84f20
-221> 2015-03-09 19:51:14.242804 7f2a314b9800 10 monclient(hunting): renew_subs
-220> 2015-03-09 19:51:14.243205 7f2a314b7700 1 -- 192.168.255.129:6800/18570 learned my addr 192.168.255.129:6800/18570
-219> 2015-03-09 19:51:14.243826 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 1 ==== mon_map magic: 0 v1 ==== 494+0+0 (3243345492 0 0) 0x3c7d800 con 0x3c84f20
-218> 2015-03-09 19:51:14.243861 7f2a2b8ab700 10 monclient(hunting): handle_monmap mon_map magic: 0 v1
-217> 2015-03-09 19:51:14.243889 7f2a2b8ab700 10 monclient(hunting): got monmap 14, mon.noname-b is now rank -1
-216> 2015-03-09 19:51:14.243894 7f2a2b8ab700 10 monclient(hunting): dump:
epoch 14
fsid 50dc0404-c081-4c43-ac3f-872ba5494bd7
last_changed 2014-12-21 19:48:53.274897
created 0.000000
0: 192.168.255.129:6789/0 mon.compute01
1: 192.168.255.130:6789/0 mon.compute02
2: 192.168.255.131:6789/0 mon.mgmt01

-215> 2015-03-09 19:51:14.243944 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (1227691390 0 0) 0x3c66900 con 0x3c84f20
-214> 2015-03-09 19:51:14.244001 7f2a2b8ab700 10 monclient(hunting): my global_id is 567598
-213> 2015-03-09 19:51:14.244174 7f2a2b8ab700 10 monclient(hunting): _send_mon_message to mon.compute02 at 192.168.255.130:6789/0
-212> 2015-03-09 19:51:14.244185 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0 0x3c66b40 con 0x3c84f20
-211> 2015-03-09 19:51:14.244779 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 3 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 222+0+0 (3765776098 0 0) 0x3c66b40 con 0x3c84f20
-210> 2015-03-09 19:51:14.244922 7f2a2b8ab700 10 monclient(hunting): _send_mon_message to mon.compute02 at 192.168.255.130:6789/0
-209> 2015-03-09 19:51:14.244934 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6789/0 -- auth(proto 2 181 bytes epoch 0) v1 -- ?+0 0x3c66900 con 0x3c84f20
-208> 2015-03-09 19:51:14.245613 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 596+0+0 (2861728181 0 0) 0x3c66000 con 0x3c84f20
-207> 2015-03-09 19:51:14.245745 7f2a2b8ab700 1 monclient(hunting): found mon.compute02
-206> 2015-03-09 19:51:14.245753 7f2a2b8ab700 10 monclient: _send_mon_message to mon.compute02 at 192.168.255.130:6789/0
-205> 2015-03-09 19:51:14.245761 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6789/0 -- mon_subscribe({monmap=0+}) v2 -- ?+0 0x3d00000 con 0x3c84f20
-204> 2015-03-09 19:51:14.245815 7f2a2b8ab700 10 monclient: _check_auth_rotating renewing rotating keys (they expired before 2015-03-09 19:50:44.245814)
-203> 2015-03-09 19:51:14.245831 7f2a2b8ab700 10 monclient: _send_mon_message to mon.compute02 at 192.168.255.130:6789/0
-202> 2015-03-09 19:51:14.245839 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6789/0 -- auth(proto 2 2 bytes epoch 0) v1 -- ?+0 0x3c66b40 con 0x3c84f20
-201> 2015-03-09 19:51:14.245866 7f2a314b9800 5 monclient: authenticate success, global_id 567598
-200> 2015-03-09 19:51:14.245878 7f2a314b9800 10 monclient: wait_auth_rotating waiting (until 2015-03-09 19:51:44.245877)
-199> 2015-03-09 19:51:14.246266 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 5 ==== mon_map magic: 0 v1 ==== 494+0+0 (3243345492 0 0) 0x3c7da00 con 0x3c84f20
-198> 2015-03-09 19:51:14.246287 7f2a2b8ab700 10 monclient: handle_monmap mon_map magic: 0 v1
-197> 2015-03-09 19:51:14.246312 7f2a2b8ab700 10 monclient: got monmap 14, mon.compute02 is now rank 1
-196> 2015-03-09 19:51:14.246338 7f2a2b8ab700 10 monclient: dump:
epoch 14
fsid 50dc0404-c081-4c43-ac3f-872ba5494bd7
last_changed 2014-12-21 19:48:53.274897
created 0.000000
0: 192.168.255.129:6789/0 mon.compute01
1: 192.168.255.130:6789/0 mon.compute02
2: 192.168.255.131:6789/0 mon.mgmt01

-195> 2015-03-09 19:51:14.246366 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 6 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (1176079066 0 0) 0x3ca0a80 con 0x3c84f20
-194> 2015-03-09 19:51:14.246381 7f2a2b8ab700 10 monclient: handle_subscribe_ack sent 2015-03-09 19:51:14.242807 renew after 2015-03-09 19:53:44.242807
-193> 2015-03-09 19:51:14.246391 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 7 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 194+0+0 (1034460302 0 0) 0x3c67680 con 0x3c84f20
-192> 2015-03-09 19:51:14.246476 7f2a2b8ab700 10 monclient: _check_auth_rotating have uptodate secrets (they expire after 2015-03-09 19:50:44.246476)
-191> 2015-03-09 19:51:14.246493 7f2a314b9800 10 monclient: wait_auth_rotating done
-190> 2015-03-09 19:51:14.246510 7f2a314b9800 10 monclient: renew_subs
-189> 2015-03-09 19:51:14.246515 7f2a314b9800 10 monclient: _send_mon_message to mon.compute02 at 192.168.255.130:6789/0
-188> 2015-03-09 19:51:14.246521 7f2a314b9800 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6789/0 -- mon_subscribe({monmap=15+,osdmap=0}) v2 -- ?+0 0x3d003c0 con 0x3c84f20
-187> 2015-03-09 19:51:14.246532 7f2a314b9800 10 monclient: renew_subs
-186> 2015-03-09 19:51:14.246536 7f2a314b9800 10 monclient: _send_mon_message to mon.compute02 at 192.168.255.130:6789/0
-185> 2015-03-09 19:51:14.246541 7f2a314b9800 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6789/0 -- mon_subscribe({mdsmap=0+,monmap=15+,osdmap=0}) v2 -- ?+0 0x3d001e0 con 0x3c84f20
-184> 2015-03-09 19:51:14.247394 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 8 ==== osd_map(6363..6363 src has 5714..6363) v3 ==== 18960+0+0 (2502001515 0 0) 0x3c67440 con 0x3c84f20
-183> 2015-03-09 19:51:14.247668 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 9 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (1176079066 0 0) 0x3ca0e00 con 0x3c84f20
-182> 2015-03-09 19:51:14.247685 7f2a2b8ab700 10 monclient: handle_subscribe_ack sent 2015-03-09 19:51:14.246514 renew after 2015-03-09 19:53:44.246514
-181> 2015-03-09 19:51:14.247699 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 10 ==== mdsmap(e 1377) v1 ==== 685+0+0 (2913871232 0 0) 0x3c67200 con 0x3c84f20
-180> 2015-03-09 19:51:14.247760 7f2a314b9800 10 monclient: _send_mon_message to mon.compute02 at 192.168.255.130:6789/0
-179> 2015-03-09 19:51:14.247778 7f2a314b9800 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6789/0 -- mdsbeacon(567598/compute01 up:boot seq 1 v0) v3 -- ?+0 0x3cd7080 con 0x3c84f20
-178> 2015-03-09 19:51:14.247829 7f2a314b9800 5 asok(0x3c6a000) register_command status hook 0x3c52250
-177> 2015-03-09 19:51:14.247836 7f2a314b9800 5 asok(0x3c6a000) register_command dump_ops_in_flight hook 0x3c52250
-176> 2015-03-09 19:51:14.247848 7f2a314b9800 5 asok(0x3c6a000) register_command dump_historic_ops hook 0x3c52250
-175> 2015-03-09 19:51:14.247853 7f2a314b9800 5 asok(0x3c6a000) register_command session evict hook 0x3c52250
-174> 2015-03-09 19:51:14.247857 7f2a314b9800 5 asok(0x3c6a000) register_command session ls hook 0x3c52250
-173> 2015-03-09 19:51:14.247883 7f2a2b8ab700 5 mds.-1.0 handle_mds_map epoch 1377 from mon.1
-172> 2015-03-09 19:51:14.247941 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 11 ==== osd_map(6363..6363 src has 5714..6363) v3 ==== 18960+0+0 (2502001515 0 0) 0x3c67680 con 0x3c84f20
-171> 2015-03-09 19:51:14.247956 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 12 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (1176079066 0 0) 0x3ca0c40 con 0x3c84f20
-170> 2015-03-09 19:51:14.247963 7f2a2b8ab700 10 monclient: handle_subscribe_ack sent 0.000000, ignoring
-169> 2015-03-09 19:51:15.055563 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 13 ==== mdsmap(e 1378) v1 ==== 677+0+0 (3698199829 0 0) 0x3c66000 con 0x3c84f20
-168> 2015-03-09 19:51:15.055606 7f2a2b8ab700 5 mds.-1.-1 handle_mds_map epoch 1378 from mon.1
-167> 2015-03-09 19:51:15.055629 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 mark_down 192.168.255.129:6800/18339 -- pipe dne
-166> 2015-03-09 19:51:15.055648 7f2a2b8ab700 1 mds.-1.0 handle_mds_map standby
-165> 2015-03-09 19:51:15.189191 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 14 ==== mdsmap(e 1379) v1 ==== 685+0+0 (3149673271 0 0) 0x3c66fc0 con 0x3c84f20
-164> 2015-03-09 19:51:15.189244 7f2a2b8ab700 5 mds.-1.0 handle_mds_map epoch 1379 from mon.1
-163> 2015-03-09 19:51:15.189295 7f2a2b8ab700 1 mds.0.262 handle_mds_map i am now mds.0.262
-162> 2015-03-09 19:51:15.189305 7f2a2b8ab700 1 mds.0.262 handle_mds_map state change up:standby --> up:replay
-161> 2015-03-09 19:51:15.189317 7f2a2b8ab700 1 mds.0.262 replay_start
-160> 2015-03-09 19:51:15.189327 7f2a2b8ab700 1 mds.0.262 recovery set is
-159> 2015-03-09 19:51:15.189339 7f2a2b8ab700 10 monclient: renew_subs
-158> 2015-03-09 19:51:15.189345 7f2a2b8ab700 10 monclient: _send_mon_message to mon.compute02 at 192.168.255.130:6789/0
-157> 2015-03-09 19:51:15.189352 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6789/0 -- mon_subscribe({mdsmap=1380+,monmap=15+,osdmap=6364}) v2 -- ?+0 0x3d00000 con 0x3c84f20
-156> 2015-03-09 19:51:15.189373 7f2a2b8ab700 1 mds.0.262 waiting for osdmap 6364 (which blacklists prior instance)
-155> 2015-03-09 19:51:15.189869 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 15 ==== osd_map(6364..6364 src has 5714..6364) v3 ==== 350+0+0 (2951012273 0 0) 0x3c66d80 con 0x3c84f20
-154> 2015-03-09 19:51:15.189947 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 16 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (1176079066 0 0) 0x3ca1dc0 con 0x3c84f20
-153> 2015-03-09 19:51:15.189959 7f2a290a6700 2 mds.0.262 boot_start 0: opening inotable
-152> 2015-03-09 19:51:15.189962 7f2a2b8ab700 10 monclient: handle_subscribe_ack sent 2015-03-09 19:51:15.189342 renew after 2015-03-09 19:53:45.189342
-151> 2015-03-09 19:51:15.190114 7f2a290a6700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6801/9122 -- osd_op(mds.0.262:1 mds0_inotable [read 0~0] 1.b852b893 ack+read+known_if_redirected e6364) v4 -- ?+0 0x3ce9400 con 0x3c863c0
-150> 2015-03-09 19:51:15.190149 7f2a290a6700 2 mds.0.262 boot_start 0: opening sessionmap
-149> 2015-03-09 19:51:15.190221 7f2a290a6700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6811/9641 -- osd_op(mds.0.262:2 mds0_sessionmap [read 0~0] 1.3270c60b ack+read+known_if_redirected e6364) v4 -- ?+0 0x3ce9900 con 0x3c86520
-148> 2015-03-09 19:51:15.190241 7f2a290a6700 2 mds.0.262 boot_start 0: opening mds log
-147> 2015-03-09 19:51:15.190247 7f2a290a6700 5 mds.0.log open discovering log bounds
-146> 2015-03-09 19:51:15.190317 7f2a290a6700 2 mds.0.262 boot_start 0: opening snap table
-145> 2015-03-09 19:51:15.190438 7f2a290a6700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6801/9498 -- osd_op(mds.0.262:3 mds_snaptable [read 0~0] 1.d90270ad ack+read+known_if_redirected e6364) v4 -- ?+0 0x3ce9680 con 0x3c87020
-144> 2015-03-09 19:51:15.190531 7f2a2659e700 4 mds.0.journalpointer Reading journal pointer '400.00000000'
-143> 2015-03-09 19:51:15.190570 7f2a2659e700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6801/9498 -- osd_op(mds.0.262:4 400.00000000 [read 0~0] 1.64e96f8f ack+read+known_if_redirected e6364) v4 -- ?+0 0x3ce9b80 con 0x3c87020
-142> 2015-03-09 19:51:15.191944 7f2a2549b700 1 -- 192.168.255.129:6800/18570 <== osd.0 192.168.255.129:6811/9641 1 ==== osd_op_reply(2 mds0_sessionmap [read 0~22] v0'0 uv120620 ondisk = 0) v6 ==== 182+0+22 (143139629 0 3628653333) 0x3cd7b80 con 0x3c86520
-141> 2015-03-09 19:51:15.192093 7f2a2539a700 1 -- 192.168.255.129:6800/18570 <== osd.1 192.168.255.130:6801/9122 1 ==== osd_op_reply(1 mds0_inotable [read 0~34] v0'0 uv116107 ondisk = 0) v6 ==== 180+0+34 (1842418751 0 1695280775) 0x3cd7e40 con 0x3c863c0
-140> 2015-03-09 19:51:15.218704 7f2a25299700 1 -- 192.168.255.129:6800/18570 <== osd.2 192.168.255.129:6801/9498 1 ==== osd_op_reply(3 mds_snaptable [read 0~46] v0'0 uv118595 ondisk = 0) v6 ==== 180+0+46 (2607241132 0 2603725198) 0x3cd8100 con 0x3c87020
-139> 2015-03-09 19:51:15.244451 7f2a25299700 1 -- 192.168.255.129:6800/18570 <== osd.2 192.168.255.129:6801/9498 2 ==== osd_op_reply(4 400.00000000 [read 0~22] v0'0 uv116933 ondisk = 0) v6 ==== 179+0+22 (2880555823 0 3825830296) 0x3cd8100 con 0x3c87020
-138> 2015-03-09 19:51:15.244516 7f2a2659e700 1 mds.0.journaler(ro) recover start
-137> 2015-03-09 19:51:15.244531 7f2a2659e700 1 mds.0.journaler(ro) read_head
-136> 2015-03-09 19:51:15.244575 7f2a2659e700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6811/9641 -- osd_op(mds.0.262:5 200.00000000 [read 0~0] 1.844f3494 ack+read+known_if_redirected e6364) v4 -- ?+0 0x3ce9b80 con 0x3c86520
-135> 2015-03-09 19:51:15.244603 7f2a2659e700 4 mds.0.log Waiting for journal 200 to recover...
-134> 2015-03-09 19:51:15.245411 7f2a2549b700 1 -- 192.168.255.129:6800/18570 <== osd.0 192.168.255.129:6811/9641 2 ==== osd_op_reply(5 200.00000000 [read 0~90] v0'0 uv1224456 ondisk = 0) v6 ==== 179+0+90 (2085923383 0 1280029238) 0x3cd7b80 con 0x3c86520
-133> 2015-03-09 19:51:15.245472 7f2a290a6700 1 mds.0.journaler(ro) _finish_read_head loghead(trim 4194304, expire 4194304, write 4271565, stream_format 1). probing for end of log (from 4271565)...
-132> 2015-03-09 19:51:15.245490 7f2a290a6700 1 mds.0.journaler(ro) probing for end of the log
-131> 2015-03-09 19:51:15.245542 7f2a290a6700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6801/9498 -- osd_op(mds.0.262:6 200.00000001 [stat] 1.6e5f474 ack+read+rwordered+known_if_redirected e6364) v4 -- ?+0 0x3ce9e00 con 0x3c87020
-130> 2015-03-09 19:51:15.245575 7f2a290a6700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6811/9641 -- osd_op(mds.0.262:7 200.00000002 [stat] 1.eb272dbb ack+read+rwordered+known_if_redirected e6364) v4 -- ?+0 0x3ce9b80 con 0x3c86520
-129> 2015-03-09 19:51:15.246198 7f2a2549b700 1 -- 192.168.255.129:6800/18570 <== osd.0 192.168.255.129:6811/9641 3 ==== osd_op_reply(7 200.00000002 [stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v6 ==== 179+0+0 (3204363138 0 0) 0x3cd7b80 con 0x3c86520
-128> 2015-03-09 19:51:15.246218 7f2a25299700 1 -- 192.168.255.129:6800/18570 <== osd.2 192.168.255.129:6801/9498 3 ==== osd_op_reply(6 200.00000001 [stat] v0'0 uv145641 ondisk = 0) v6 ==== 179+0+16 (841574646 0 4233121765) 0x3cd8100 con 0x3c87020
-127> 2015-03-09 19:51:15.246284 7f2a290a6700 1 mds.0.journaler(ro) _finish_probe_end write_pos = 4280722 (header had 4271565). recovered.
-126> 2015-03-09 19:51:15.246307 7f2a2659e700 4 mds.0.log Journal 200 recovered.
-125> 2015-03-09 19:51:15.246315 7f2a2659e700 4 mds.0.log Recovered journal 200 in format 1
-124> 2015-03-09 19:51:15.246322 7f2a2659e700 2 mds.0.262 boot_start 1: loading/discovering base inodes
-123> 2015-03-09 19:51:15.246332 7f2a2659e700 0 mds.0.cache creating system inode with ino:100
-122> 2015-03-09 19:51:15.246514 7f2a2659e700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6806/9185 -- osd_op(mds.0.262:8 100.00000000 [getxattr inode] 1.c5265ab3 ack+read+known_if_redirected e6364) v4 -- ?+0 0x3ce9680 con 0x3c875a0
-121> 2015-03-09 19:51:15.246563 7f2a2659e700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6801/9498 -- osd_op(mds.0.262:9 100.00000000.inode [read 0~0] 1.85dde07f ack+read+known_if_redirected e6364) v4 -- ?+0 0x3cea580 con 0x3c87020
-120> 2015-03-09 19:51:15.246591 7f2a2659e700 0 mds.0.cache creating system inode with ino:1
-119> 2015-03-09 19:51:15.246624 7f2a2659e700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6801/9498 -- osd_op(mds.0.262:10 1.00000000 [getxattr inode] 1.6b2cdaff ack+read+known_if_redirected e6364) v4 -- ?+0 0x3cea300 con 0x3c87020
-118> 2015-03-09 19:51:15.246663 7f2a2659e700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6811/9641 -- osd_op(mds.0.262:11 1.00000000.inode [read 0~0] 1.232c0e14 ack+read+known_if_redirected e6364) v4 -- ?+0 0x3cea080 con 0x3c86520
-117> 2015-03-09 19:51:15.247713 7f2a2549b700 1 -- 192.168.255.129:6800/18570 <== osd.0 192.168.255.129:6811/9641 4 ==== osd_op_reply(11 1.00000000.inode [read 0~452] v0'0 uv1224429 ondisk = 0) v6 ==== 183+0+452 (3060299222 0 3256178618) 0x3cd7b80 con 0x3c86520
-116> 2015-03-09 19:51:15.248330 7f2a25097700 1 -- 192.168.255.129:6800/18570 <== osd.3 192.168.255.130:6806/9185 1 ==== osd_op_reply(8 100.00000000 [getxattr] v0'0 uv0 ondisk = -61 ((61) No data available)) v6 ==== 179+0+0 (2443073943 0 0) 0x3cd8680 con 0x3c875a0
-115> 2015-03-09 19:51:15.257752 7f2a25299700 1 -- 192.168.255.129:6800/18570 <== osd.2 192.168.255.129:6801/9498 4 ==== osd_op_reply(9 100.00000000.inode [read 0~452] v0'0 uv118190 ondisk = 0) v6 ==== 185+0+452 (2681602654 0 1891005259) 0x3cd8100 con 0x3c87020
-114> 2015-03-09 19:51:15.264830 7f2a25299700 1 -- 192.168.255.129:6800/18570 <== osd.2 192.168.255.129:6801/9498 5 ==== osd_op_reply(10 1.00000000 [getxattr] v0'0 uv0 ondisk = -61 ((61) No data available)) v6 ==== 177+0+0 (974232379 0 0) 0x3cd8100 con 0x3c87020
-113> 2015-03-09 19:51:15.264903 7f2a290a6700 2 mds.0.262 boot_start 2: replaying mds log
-112> 2015-03-09 19:51:15.265016 7f2a2659e700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6801/9498 -- osd_op(mds.0.262:12 200.00000001 [read 0~86418] 1.6e5f474 ack+read+known_if_redirected e6364) v4 -- ?+0 0x3ce9e00 con 0x3c87020
-111> 2015-03-09 19:51:15.265867 7f2a25299700 1 -- 192.168.255.129:6800/18570 <== osd.2 192.168.255.129:6801/9498 6 ==== osd_op_reply(12 200.00000001 [read 0~86418] v0'0 uv145641 ondisk = 0) v6 ==== 179+0+86418 (725062667 0 3735611498) 0x3cd8100 con 0x3c87020
-110> 2015-03-09 19:51:15.267387 7f2a2659e700 1 mds.0.262 replay_done
-109> 2015-03-09 19:51:15.267402 7f2a2659e700 1 mds.0.262 making mds journal writeable
-108> 2015-03-09 19:51:15.267406 7f2a2659e700 1 mds.0.journaler(ro) set_writeable
-107> 2015-03-09 19:51:15.267447 7f2a2659e700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6801/9498 -- osd_op(mds.0.262:13 200.00000001 [zero 86418~4107886] 1.6e5f474 ondisk+write+known_if_redirected e6364) v4 -- ?+0 0x3cea300 con 0x3c87020
-106> 2015-03-09 19:51:15.267494 7f2a2659e700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6811/9641 -- osd_op(mds.0.262:14 200.00000002 [delete] 1.eb272dbb ondisk+write+known_if_redirected e6364) v4 -- ?+0 0x3cea580 con 0x3c86520
-105> 2015-03-09 19:51:15.267543 7f2a2659e700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6801/9498 -- osd_op(mds.0.262:15 200.00000003 [delete] 1.95e3ab ondisk+write+known_if_redirected e6364) v4 -- ?+0 0x3cead00 con 0x3c87020
-104> 2015-03-09 19:51:15.267589 7f2a2659e700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6801/9498 -- osd_op(mds.0.262:16 200.00000004 [delete] 1.1ba6bc1f ondisk+write+known_if_redirected e6364) v4 -- ?+0 0x3ceaa80 con 0x3c87020
-103> 2015-03-09 19:51:15.267658 7f2a2659e700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6811/9641 -- osd_op(mds.0.262:17 200.00000005 [delete] 1.96d5827a ondisk+write+known_if_redirected e6364) v4 -- ?+0 0x3cea800 con 0x3c86520
-102> 2015-03-09 19:51:15.267726 7f2a2659e700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6801/9122 -- osd_op(mds.0.262:18 200.00000006 [delete] 1.ade7efde ondisk+write+known_if_redirected e6364) v4 -- ?+0 0x3ceaa80 con 0x3c863c0
-101> 2015-03-09 19:51:15.267759 7f2a2659e700 2 mds.0.262 i am alone, moving to state reconnect
-100> 2015-03-09 19:51:15.267764 7f2a2659e700 3 mds.0.262 request_state up:reconnect
-99> 2015-03-09 19:51:15.267782 7f2a2659e700 10 monclient: _send_mon_message to mon.compute02 at 192.168.255.130:6789/0
-98> 2015-03-09 19:51:15.267790 7f2a2659e700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6789/0 -- mdsbeacon(567598/compute01 up:reconnect seq 2 v1379) v3 -- ?+0 0x3cd8c00 con 0x3c84f20
-97> 2015-03-09 19:51:15.268719 7f2a25299700 1 -- 192.168.255.129:6800/18570 <== osd.2 192.168.255.129:6801/9498 7 ==== osd_op_reply(13 200.00000001 [zero 86418~4107886] v0'0 uv145641 ondisk = 0) v6 ==== 179+0+0 (185461413 0 0) 0x3cd8100 con 0x3c87020
-96> 2015-03-09 19:51:15.268817 7f2a2549b700 1 -- 192.168.255.129:6800/18570 <== osd.0 192.168.255.129:6811/9641 5 ==== osd_op_reply(14 200.00000002 [delete] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) v6 ==== 179+0+0 (1396983298 0 0) 0x3cd7b80 con 0x3c86520
-95> 2015-03-09 19:51:15.268843 7f2a25299700 1 -- 192.168.255.129:6800/18570 <== osd.2 192.168.255.129:6801/9498 8 ==== osd_op_reply(15 200.00000003 [delete] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) v6 ==== 179+0+0 (3083423939 0 0) 0x3cd8100 con 0x3c87020
-94> 2015-03-09 19:51:15.268940 7f2a2549b700 1 -- 192.168.255.129:6800/18570 <== osd.0 192.168.255.129:6811/9641 6 ==== osd_op_reply(17 200.00000005 [delete] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) v6 ==== 179+0+0 (2509694608 0 0) 0x3cd7b80 con 0x3c86520
-93> 2015-03-09 19:51:15.268971 7f2a25299700 1 -- 192.168.255.129:6800/18570 <== osd.2 192.168.255.129:6801/9498 9 ==== osd_op_reply(16 200.00000004 [delete] v0'0 uv119490 ondisk = -2 ((2) No such file or directory)) v6 ==== 179+0+0 (3999145170 0 0) 0x3cd8100 con 0x3c87020
-92> 2015-03-09 19:51:15.269078 7f2a2539a700 1 -- 192.168.255.129:6800/18570 <== osd.1 192.168.255.130:6801/9122 2 ==== osd_op_reply(18 200.00000006 [delete] v0'0 uv117748 ondisk = -2 ((2) No such file or directory)) v6 ==== 179+0+0 (3953926054 0 0) 0x3cd7e40 con 0x3c863c0
-91> 2015-03-09 19:51:16.520405 7f2a24f96700 1 -- 192.168.255.129:6800/18570 >> :/0 pipe(0x3cd7080 sd=21 :6800 s=0 pgs=0 cs=0 l=0 c=0x3c879c0).accept sd=21 192.168.255.129:50950/0
-90> 2015-03-09 19:51:16.520597 7f2a24e95700 1 -- 192.168.255.129:6800/18570 >> :/0 pipe(0x3cd9180 sd=22 :6800 s=0 pgs=0 cs=0 l=0 c=0x3c87b20).accept sd=22 192.168.255.132:60829/0
-89> 2015-03-09 19:51:16.521526 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== client.567435 192.168.255.129:0/11675 1 ==== client_reconnect(1 caps) v3 ==== 0+0+82 (0 0 495791181) 0x3c7f800 con 0x3c879c0
-88> 2015-03-09 19:51:16.521577 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== client.567456 192.168.255.132:0/3368 1 ==== client_reconnect(18 caps) v3 ==== 0+0+1478 (0 0 834905971) 0x3c7dc00 con 0x3c87b20
-87> 2015-03-09 19:51:16.548305 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 17 ==== mdsmap(e 1380) v1 ==== 685+0+0 (1477032334 0 0) 0x3c69cc0 con 0x3c84f20
-86> 2015-03-09 19:51:16.548340 7f2a2b8ab700 5 mds.0.262 handle_mds_map epoch 1380 from mon.1
-85> 2015-03-09 19:51:16.548390 7f2a2b8ab700 1 mds.0.262 handle_mds_map i am now mds.0.262
-84> 2015-03-09 19:51:16.548398 7f2a2b8ab700 1 mds.0.262 handle_mds_map state change up:replay --> up:reconnect
-83> 2015-03-09 19:51:16.548404 7f2a2b8ab700 1 mds.0.262 reconnect_start
-82> 2015-03-09 19:51:16.548407 7f2a2b8ab700 1 mds.0.262 reopen_log
-81> 2015-03-09 19:51:16.548414 7f2a2b8ab700 1 mds.0.server reconnect_clients -- 2 sessions
-80> 2015-03-09 19:51:16.548422 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:0/11675 -- client_session(open) v2 -- ?+0 0x3c7da00 con 0x3c879c0
-79> 2015-03-09 19:51:16.548472 7f2a2b8ab700 0 log_channel(default) log [DBG] : reconnect by client.567435 192.168.255.129:0/11675 after 0.000007
-78> 2015-03-09 19:51:16.548525 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.132:0/3368 -- client_session(open) v2 -- ?+0 0x3c7f800 con 0x3c87b20
-77> 2015-03-09 19:51:16.548542 7f2a2b8ab700 0 log_channel(default) log [DBG] : reconnect by client.567456 192.168.255.132:0/3368 after 0.000111
-76> 2015-03-09 19:51:16.548576 7f2a2b8ab700 1 mds.0.262 reconnect_done
-75> 2015-03-09 19:51:16.548581 7f2a2b8ab700 3 mds.0.262 request_state up:rejoin
-74> 2015-03-09 19:51:16.548594 7f2a2b8ab700 10 monclient: _send_mon_message to mon.compute02 at 192.168.255.130:6789/0
-73> 2015-03-09 19:51:16.548604 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6789/0 -- mdsbeacon(567598/compute01 up:rejoin seq 3 v1379) v3 -- ?+0 0x3cd7340 con 0x3c84f20
-72> 2015-03-09 19:51:16.548652 7f2a2b8ab700 5 mds.0.bal rebalance done
-71> 2015-03-09 19:51:16.549253 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== client.567435 192.168.255.129:0/11675 2 ==== client_session(request_renewcaps seq 176) v1 ==== 28+0+0 (846611282 0 0) 0x3c7da00 con 0x3c879c0
-70> 2015-03-09 19:51:16.549296 7f2a2b8ab700 3 mds.0.server not active yet, waiting
-69> 2015-03-09 19:51:16.549507 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== client.567456 192.168.255.132:0/3368 2 ==== client_session(request_renewcaps seq 135) v1 ==== 28+0+0 (2428301207 0 0) 0x3c7f800 con 0x3c87b20
-68> 2015-03-09 19:51:16.549530 7f2a2b8ab700 3 mds.0.server not active yet, waiting
-67> 2015-03-09 19:51:16.550008 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 18 ==== mdsbeacon(567598/compute01 up:rejoin seq 3 v1380) v3 ==== 121+0+0 (3345850665 0 0) 0x3cd9700 con 0x3c84f20
-66> 2015-03-09 19:51:17.242753 7f2a2a0a8700 10 monclient: tick
-65> 2015-03-09 19:51:17.242792 7f2a2a0a8700 10 monclient: _check_auth_rotating have uptodate secrets (they expire after 2015-03-09 19:50:47.242790)
-64> 2015-03-09 19:51:17.242804 7f2a2a0a8700 10 monclient: renew subs? (now: 2015-03-09 19:51:17.242803; renew after: 2015-03-09 19:53:45.189342) -- no
-63> 2015-03-09 19:51:20.548674 7f2a27fa3700 10 monclient: _send_mon_message to mon.compute02 at 192.168.255.130:6789/0
-62> 2015-03-09 19:51:20.548712 7f2a27fa3700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6789/0 -- mdsbeacon(567598/compute01 up:rejoin seq 4 v1380) v3 -- ?+0 0x3cd9c80 con 0x3c84f20
-61> 2015-03-09 19:51:20.981696 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 19 ==== mdsmap(e 1381) v1 ==== 685+0+0 (4010653999 0 0) 0x3c69a80 con 0x3c84f20
-60> 2015-03-09 19:51:20.981738 7f2a2b8ab700 5 mds.0.262 handle_mds_map epoch 1381 from mon.1
-59> 2015-03-09 19:51:20.981775 7f2a2b8ab700 1 mds.0.262 handle_mds_map i am now mds.0.262
-58> 2015-03-09 19:51:20.981778 7f2a2b8ab700 1 mds.0.262 handle_mds_map state change up:reconnect --> up:rejoin
-57> 2015-03-09 19:51:20.981781 7f2a2b8ab700 1 mds.0.262 rejoin_start
-56> 2015-03-09 19:51:20.981811 7f2a2b8ab700 1 mds.0.262 rejoin_joint_start
-55> 2015-03-09 19:51:20.981899 7f2a2b8ab700 1 mds.0.262 rejoin_done
-54> 2015-03-09 19:51:20.981914 7f2a2b8ab700 3 mds.0.262 request_state up:active
-53> 2015-03-09 19:51:20.981932 7f2a2b8ab700 10 monclient: _send_mon_message to mon.compute02 at 192.168.255.130:6789/0
-52> 2015-03-09 19:51:20.981944 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6789/0 -- mdsbeacon(567598/compute01 up:active seq 5 v1380) v3 -- ?+0 0x3cd99c0 con 0x3c84f20
-51> 2015-03-09 19:51:20.981970 7f2a2b8ab700 5 mds.0.bal rebalance done
-50> 2015-03-09 19:51:21.014664 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 20 ==== mdsbeacon(567598/compute01 up:active seq 5 v1381) v3 ==== 121+0+0 (4197584256 0 0) 0x3cd99c0 con 0x3c84f20
-49> 2015-03-09 19:51:24.982029 7f2a27fa3700 10 monclient: _send_mon_message to mon.compute02 at 192.168.255.130:6789/0
-48> 2015-03-09 19:51:24.982057 7f2a27fa3700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6789/0 -- mdsbeacon(567598/compute01 up:active seq 6 v1381) v3 -- ?+0 0x3dac2c0 con 0x3c84f20
-47> 2015-03-09 19:51:25.670361 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== client.567456 192.168.255.132:0/3368 3 ==== client_request(client.567456:105 mkdir #10000000004/documents 2015-03-09 19:33:26.137734) v2 ==== 175+0+0 (3410157915 0 0) 0x3ce9400 con 0x3c87b20
-46> 2015-03-09 19:51:25.670418 7f2a2b8ab700 3 mds.0.server not active yet, waiting
-45> 2015-03-09 19:51:25.694029 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mon.1 192.168.255.130:6789/0 21 ==== mdsmap(e 1382) v1 ==== 685+0+0 (355715618 0 0) 0x3c69840 con 0x3c84f20
-44> 2015-03-09 19:51:25.694061 7f2a2b8ab700 5 mds.0.262 handle_mds_map epoch 1382 from mon.1
-43> 2015-03-09 19:51:25.694096 7f2a2b8ab700 1 mds.0.262 handle_mds_map i am now mds.0.262
-42> 2015-03-09 19:51:25.694101 7f2a2b8ab700 1 mds.0.262 handle_mds_map state change up:rejoin --> up:active
-41> 2015-03-09 19:51:25.694104 7f2a2b8ab700 1 mds.0.262 recovery_done -- successful recovery!
-40> 2015-03-09 19:51:25.694115 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 --> mds.0 192.168.255.129:6800/18570 -- mds_table_request(snaptable server_ready) v1 -- ?+0 0x3c69a80
-39> 2015-03-09 19:51:25.694286 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6806/9185 -- osd_op(mds.0.262:19 100.00000000 [omap-get-header 0~0,omap-get-vals 0~16] 1.c5265ab3 ack+read+known_if_redirected e6364) v4 -- ?+0 0x3ce9b80 con 0x3c875a0
-38> 2015-03-09 19:51:25.694325 7f2a2b8ab700 1 mds.0.262 active_start
-37> 2015-03-09 19:51:25.694355 7f2a2b8ab700 3 mds.0.server handle_client_session client_session(request_renewcaps seq 176) v1 from client.567435
-36> 2015-03-09 19:51:25.694373 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:0/11675 -- client_session(renewcaps seq 176) v2 -- ?+0 0x3c7d600 con 0x3c879c0
-35> 2015-03-09 19:51:25.694389 7f2a2b8ab700 3 mds.0.server handle_client_session client_session(request_renewcaps seq 135) v1 from client.567456
-34> 2015-03-09 19:51:25.694400 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.132:0/3368 -- client_session(renewcaps seq 135) v2 -- ?+0 0x3c7da00 con 0x3c87b20
-33> 2015-03-09 19:51:25.694421 7f2a2b8ab700 4 mds.0.server handle_client_request client_request(client.567456:105 mkdir #10000000004/documents 2015-03-09 19:33:26.137734) v2
-32> 2015-03-09 19:51:25.694433 7f2a2b8ab700 5 mds.0.server waiting for root
-31> 2015-03-09 19:51:25.694439 7f2a2b8ab700 1 mds.0.262 cluster recovered.
-30> 2015-03-09 19:51:25.694451 7f2a2b8ab700 5 mds.0.bal rebalance done
-29> 2015-03-09 19:51:25.694480 7f2a2b8ab700 1 -- 192.168.255.129:6800/18570 <== mds.0 192.168.255.129:6800/18570 0 ==== mds_table_request(snaptable server_ready) v1 ==== 0+0+0 (0 0 0) 0x3c69a80 con 0x3c84160
-28> 2015-03-09 19:51:25.695674 7f2a25097700 1 -- 192.168.255.129:6800/18570 <== osd.3 192.168.255.130:6806/9185 2 ==== osd_op_reply(19 100.00000000 [omap-get-header 0~0,omap-get-vals 0~16] v0'0 uv123084 ondisk = 0) v6 ==== 221+0+4594 (647149209 0 1081285212) 0x3cd8680 con 0x3c875a0
-27> 2015-03-09 19:51:25.696042 7f2a277a2700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6801/9498 -- osd_op(mds.0.262:20 1.00000000 [omap-get-header 0~0,omap-get-vals 0~16] 1.6b2cdaff ack+read+known_if_redirected e6364) v4 -- ?+0 0x3ce9680 con 0x3c87020
-26> 2015-03-09 19:51:25.697134 7f2a25299700 1 -- 192.168.255.129:6800/18570 <== osd.2 192.168.255.129:6801/9498 10 ==== osd_op_reply(20 1.00000000 [omap-get-header 0~0,omap-get-vals 0~16] v0'0 uv118189 ondisk = 0) v6 ==== 219+0+1382 (324304880 0 1255099106) 0x3cd8100 con 0x3c87020
-25> 2015-03-09 19:51:25.697312 7f2a277a2700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6801/9498 -- osd_op(mds.0.262:21 601.00000000 [omap-get-header 0~0,omap-get-vals 0~16] 1.81580290 ack+read+known_if_redirected e6364) v4 -- ?+0 0x3cea080 con 0x3c87020
-24> 2015-03-09 19:51:25.726909 7f2a25299700 1 -- 192.168.255.129:6800/18570 <== osd.2 192.168.255.129:6801/9498 11 ==== osd_op_reply(21 601.00000000 [omap-get-header 0~0,omap-get-vals 0~16] v0'0 uv119599 ondisk = 0) v6 ==== 221+0+226 (1612231321 0 2554655224) 0x3cd8100 con 0x3c87020
-23> 2015-03-09 19:51:25.727055 7f2a277a2700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6801/9498 -- osd_op(mds.0.262:22 602.00000000 [omap-get-header 0~0,omap-get-vals 0~16] 1.2a8b6bd0 ack+read+known_if_redirected e6364) v4 -- ?+0 0x3ce9b80 con 0x3c87020
-22> 2015-03-09 19:51:25.728170 7f2a25299700 1 -- 192.168.255.129:6800/18570 <== osd.2 192.168.255.129:6801/9498 12 ==== osd_op_reply(22 602.00000000 [omap-get-header 0~0,omap-get-vals 0~16] v0'0 uv119601 ondisk = 0) v6 ==== 221+0+226 (1869870548 0 2554655224) 0x3cd8100 con 0x3c87020
-21> 2015-03-09 19:51:25.728319 7f2a277a2700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6811/9641 -- osd_op(mds.0.262:23 603.00000000 [omap-get-header 0~0,omap-get-vals 0~16] 1.7a4d91b0 ack+read+known_if_redirected e6364) v4 -- ?+0 0x3cea300 con 0x3c86520
-20> 2015-03-09 19:51:25.729172 7f2a2549b700 1 -- 192.168.255.129:6800/18570 <== osd.0 192.168.255.129:6811/9641 7 ==== osd_op_reply(23 603.00000000 [omap-get-header 0~0,omap-get-vals 0~16] v0'0 uv117100 ondisk = 0) v6 ==== 221+0+226 (2631086793 0 2554655224) 0x3cd7b80 con 0x3c86520
-19> 2015-03-09 19:51:25.729280 7f2a277a2700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6806/9185 -- osd_op(mds.0.262:24 604.00000000 [omap-get-header 0~0,omap-get-vals 0~16] 1.43e85c95 ack+read+known_if_redirected e6364) v4 -- ?+0 0x3ce9e00 con 0x3c875a0
-18> 2015-03-09 19:51:25.730198 7f2a25097700 1 -- 192.168.255.129:6800/18570 <== osd.3 192.168.255.130:6806/9185 3 ==== osd_op_reply(24 604.00000000 [omap-get-header 0~0,omap-get-vals 0~16] v0'0 uv107107 ondisk = 0) v6 ==== 221+0+226 (277987496 0 2554655224) 0x3cd8680 con 0x3c875a0
-17> 2015-03-09 19:51:25.730321 7f2a277a2700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6811/9641 -- osd_op(mds.0.262:25 605.00000000 [omap-get-header 0~0,omap-get-vals 0~16] 1.d0c18e1d ack+read+known_if_redirected e6364) v4 -- ?+0 0x3cea580 con 0x3c86520
-16> 2015-03-09 19:51:25.731171 7f2a2549b700 1 -- 192.168.255.129:6800/18570 <== osd.0 192.168.255.129:6811/9641 8 ==== osd_op_reply(25 605.00000000 [omap-get-header 0~0,omap-get-vals 0~16] v0'0 uv120655 ondisk = 0) v6 ==== 221+0+226 (978676035 0 2554655224) 0x3cd7b80 con 0x3c86520
-15> 2015-03-09 19:51:25.731271 7f2a277a2700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6801/9498 -- osd_op(mds.0.262:26 606.00000000 [omap-get-header 0~0,omap-get-vals 0~16] 1.f89eaaf4 ack+read+known_if_redirected e6364) v4 -- ?+0 0x3ceb480 con 0x3c87020
-14> 2015-03-09 19:51:25.746977 7f2a25299700 1 -- 192.168.255.129:6800/18570 <== osd.2 192.168.255.129:6801/9498 13 ==== osd_op_reply(26 606.00000000 [omap-get-header 0~0,omap-get-vals 0~16] v0'0 uv145582 ondisk = 0) v6 ==== 221+0+226 (3727274824 0 2554655224) 0x3cd8100 con 0x3c87020
-13> 2015-03-09 19:51:25.747100 7f2a277a2700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6811/9641 -- osd_op(mds.0.262:27 607.00000000 [omap-get-header 0~0,omap-get-vals 0~16] 1.bb590b7c ack+read+known_if_redirected e6364) v4 -- ?+0 0x3ceb200 con 0x3c86520
-12> 2015-03-09 19:51:25.748019 7f2a2549b700 1 -- 192.168.255.129:6800/18570 <== osd.0 192.168.255.129:6811/9641 9 ==== osd_op_reply(27 607.00000000 [omap-get-header 0~0,omap-get-vals 0~16] v0'0 uv114775 ondisk = 0) v6 ==== 221+0+226 (2403444123 0 2554655224) 0x3cd7b80 con 0x3c86520
-11> 2015-03-09 19:51:25.748144 7f2a277a2700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.130:6801/9122 -- osd_op(mds.0.262:28 608.00000000 [omap-get-header 0~0,omap-get-vals 0~16] 1.c4405e8e ack+read+known_if_redirected e6364) v4 -- ?+0 0x3ceaf80 con 0x3c863c0
-10> 2015-03-09 19:51:25.749168 7f2a2539a700 1 -- 192.168.255.129:6800/18570 <== osd.1 192.168.255.130:6801/9122 3 ==== osd_op_reply(28 608.00000000 [omap-get-header 0~0,omap-get-vals 0~16] v0'0 uv120033 ondisk = 0) v6 ==== 221+0+226 (1162576825 0 2554655224) 0x3cd7e40 con 0x3c863c0
-9> 2015-03-09 19:51:25.749335 7f2a277a2700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6801/9498 -- osd_op(mds.0.262:29 609.00000000 [omap-get-header 0~0,omap-get-vals 0~16] 1.60b82d07 ack+read+known_if_redirected e6364) v4 -- ?+0 0x3cead00 con 0x3c87020
-8> 2015-03-09 19:51:25.765726 7f2a25299700 1 -- 192.168.255.129:6800/18570 <== osd.2 192.168.255.129:6801/9498 14 ==== osd_op_reply(29 609.00000000 [omap-get-header 0~0,omap-get-vals 0~16] v0'0 uv112826 ondisk = 0) v6 ==== 221+0+226 (3780375018 0 2554655224) 0x3cd8100 con 0x3c87020
-7> 2015-03-09 19:51:25.765896 7f2a277a2700 1 -- 192.168.255.129:6800/18570 --> 192.168.255.129:6811/9641 -- osd_op(mds.0.262:30 600.00000000 [omap-get-header 0~0,omap-get-vals 0~16] 1.c2e541b0 ack+read+known_if_redirected e6364) v4 -- ?+0 0x3cead00 con 0x3c86520
-6> 2015-03-09 19:51:25.765930 7f2a277a2700 4 mds.0.server handle_client_request client_request(client.567456:105 mkdir #10000000004/documents 2015-03-09 19:33:26.137734) v2
-5> 2015-03-09 19:51:25.765950 7f2a277a2700 5 -- op tracker -- seq: 1, time: 2015-03-09 19:51:25.670234, event: throttled, op: client_request(client.567456:105 mkdir #10000000004/documents 2015-03-09 19:33:26.137734)
-4> 2015-03-09 19:51:25.765965 7f2a277a2700 5 -- op tracker -- seq: 1, time: 2015-03-09 19:51:25.670330, event: all_read, op: client_request(client.567456:105 mkdir #10000000004/documents 2015-03-09 19:33:26.137734)
-3> 2015-03-09 19:51:25.765976 7f2a277a2700 5 -- op tracker -- seq: 1, time: 2015-03-09 19:51:25.670411, event: dispatched, op: client_request(client.567456:105 mkdir #10000000004/documents 2015-03-09 19:33:26.137734)
-2> 2015-03-09 19:51:25.766084 7f2a277a2700 5 -- op tracker -- seq: 1, time: 2015-03-09 19:51:25.766084, event: acquired locks, op: client_request(client.567456:105 mkdir #10000000004/documents 2015-03-09 19:33:26.137734)
-1> 2015-03-09 19:51:25.766805 7f2a2549b700 1 -- 192.168.255.129:6800/18570 <== osd.0 192.168.255.129:6811/9641 10 ==== osd_op_reply(30 600.00000000 [omap-get-header 0~0,omap-get-vals 0~16] v0'0 uv117098 ondisk = 0) v6 ==== 221+0+226 (730497187 0 2554655224) 0x3cd7b80 con 0x3c86520
0> 2015-03-09 19:51:25.769257 7f2a277a2700 -1 mds/MDCache.cc: In function 'void MDCache::add_inode(CInode*)' thread 7f2a277a2700 time 2015-03-09 19:51:25.766126
mds/MDCache.cc: 254: FAILED assert(inode_map.count(in->vino()) == 0)

ceph version 0.87.1 (283c2e7cfa2457799f534744d7d549f83ea1335e)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x8b) [0x90261b]
2: /usr/bin/ceph-mds() [0x61d4e7]
3: (Server::prepare_new_inode(std::tr1::shared_ptr<MDRequestImpl>&, CDir*, inodeno_t, unsigned int, ceph_file_layout*)+0x12a3) [0x5c6763]
4: (Server::handle_client_mkdir(std::tr1::shared_ptr<MDRequestImpl>&)+0x252) [0x5dbe82]
5: (Server::dispatch_client_request(std::tr1::shared_ptr<MDRequestImpl>&)+0x4c6) [0x5edd16]
6: (Server::handle_client_request(MClientRequest*)+0x34a) [0x5ee26a]
7: (Server::dispatch(Message*)+0x383) [0x5f53a3]
8: (MDS::handle_deferrable_message(Message*)+0x7ef) [0x57a83f]
9: (MDS::_dispatch(Message*)+0x6d) [0x59034d]
10: (C_MDS_RetryMessage::finish(int)+0x1c) [0x5f6aec]
11: (MDSInternalContextBase::complete(int)+0x153) [0x78bc03]
12: (MDS::_advance_queues()+0x4a5) [0x587f75]
13: (MDS::ProgressThread::entry()+0x4a) [0x5882ca]
14: (()+0x8182) [0x7f2a30e2b182]
15: (clone()+0x6d) [0x7f2a2f7a247d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
0/ 5 none
0/ 1 lockdep
0/ 1 context
1/ 1 crush
1/ 5 mds
1/ 5 mds_balancer
1/ 5 mds_locker
1/ 5 mds_log
1/ 5 mds_log_expire
1/ 5 mds_migrator
0/ 1 buffer
0/ 1 timer
0/ 1 filer
0/ 1 striper
0/ 1 objecter
0/ 5 rados
0/ 5 rbd
0/ 5 rbd_replay
0/ 5 journaler
0/ 5 objectcacher
0/ 5 client
0/ 5 osd
0/ 5 optracker
0/ 5 objclass
1/ 3 filestore
1/ 3 keyvaluestore
1/ 3 journal
0/ 5 ms
1/ 5 mon
0/10 monc
1/ 5 paxos
0/ 5 tp
1/ 5 auth
1/ 5 crypto
1/ 1 finisher
1/ 5 heartbeatmap
1/ 5 perfcounter
1/ 5 rgw
1/10 civetweb
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
0/ 0 refs
-2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 10000
max_new 1000
log_file /var/log/ceph/ceph-mds.compute01.log
--- end dump of recent events ---
2015-03-09 19:51:25.774027 7f2a277a2700 -1 *** Caught signal (Aborted) **
in thread 7f2a277a2700

ceph version 0.87.1 (283c2e7cfa2457799f534744d7d549f83ea1335e)
1: /usr/bin/ceph-mds() [0x81a75a]
2: (()+0x10340) [0x7f2a30e33340]
3: (gsignal()+0x39) [0x7f2a2f6decc9]
4: (abort()+0x148) [0x7f2a2f6e20d8]
5: (__gnu_cxx::__verbose_terminate_handler()+0x155) [0x7f2a2ffe96b5]
6: (()+0x5e836) [0x7f2a2ffe7836]
7: (()+0x5e863) [0x7f2a2ffe7863]
8: (()+0x5eaa2) [0x7f2a2ffe7aa2]
9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x278) [0x902808]
10: /usr/bin/ceph-mds() [0x61d4e7]
11: (Server::prepare_new_inode(std::tr1::shared_ptr<MDRequestImpl>&, CDir*, inodeno_t, unsigned int, ceph_file_layout*)+0x12a3) [0x5c6763]
12: (Server::handle_client_mkdir(std::tr1::shared_ptr<MDRequestImpl>&)+0x252) [0x5dbe82]
13: (Server::dispatch_client_request(std::tr1::shared_ptr<MDRequestImpl>&)+0x4c6) [0x5edd16]
14: (Server::handle_client_request(MClientRequest*)+0x34a) [0x5ee26a]
15: (Server::dispatch(Message*)+0x383) [0x5f53a3]
16: (MDS::handle_deferrable_message(Message*)+0x7ef) [0x57a83f]
17: (MDS::_dispatch(Message*)+0x6d) [0x59034d]
18: (C_MDS_RetryMessage::finish(int)+0x1c) [0x5f6aec]
19: (MDSInternalContextBase::complete(int)+0x153) [0x78bc03]
20: (MDS::_advance_queues()+0x4a5) [0x587f75]
21: (MDS::ProgressThread::entry()+0x4a) [0x5882ca]
22: (()+0x8182) [0x7f2a30e2b182]
23: (clone()+0x6d) [0x7f2a2f7a247d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
0> 2015-03-09 19:51:25.774027 7f2a277a2700 -1 *** Caught signal (Aborted) **
in thread 7f2a277a2700

ceph version 0.87.1 (283c2e7cfa2457799f534744d7d549f83ea1335e)
1: /usr/bin/ceph-mds() [0x81a75a]
2: (()+0x10340) [0x7f2a30e33340]
3: (gsignal()+0x39) [0x7f2a2f6decc9]
4: (abort()+0x148) [0x7f2a2f6e20d8]
5: (__gnu_cxx::__verbose_terminate_handler()+0x155) [0x7f2a2ffe96b5]
6: (()+0x5e836) [0x7f2a2ffe7836]
7: (()+0x5e863) [0x7f2a2ffe7863]
8: (()+0x5eaa2) [0x7f2a2ffe7aa2]
9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x278) [0x902808]
10: /usr/bin/ceph-mds() [0x61d4e7]
11: (Server::prepare_new_inode(std::tr1::shared_ptr<MDRequestImpl>&, CDir*, inodeno_t, unsigned int, ceph_file_layout*)+0x12a3) [0x5c6763]
12: (Server::handle_client_mkdir(std::tr1::shared_ptr<MDRequestImpl>&)+0x252) [0x5dbe82]
13: (Server::dispatch_client_request(std::tr1::shared_ptr<MDRequestImpl>&)+0x4c6) [0x5edd16]
14: (Server::handle_client_request(MClientRequest*)+0x34a) [0x5ee26a]
15: (Server::dispatch(Message*)+0x383) [0x5f53a3]
16: (MDS::handle_deferrable_message(Message*)+0x7ef) [0x57a83f]
17: (MDS::_dispatch(Message*)+0x6d) [0x59034d]
18: (C_MDS_RetryMessage::finish(int)+0x1c) [0x5f6aec]
19: (MDSInternalContextBase::complete(int)+0x153) [0x78bc03]
20: (MDS::_advance_queues()+0x4a5) [0x587f75]
21: (MDS::ProgressThread::entry()+0x4a) [0x5882ca]
22: (()+0x8182) [0x7f2a30e2b182]
23: (clone()+0x6d) [0x7f2a2f7a247d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
0/ 5 none
0/ 1 lockdep
0/ 1 context
1/ 1 crush
1/ 5 mds
1/ 5 mds_balancer
1/ 5 mds_locker
1/ 5 mds_log
1/ 5 mds_log_expire
1/ 5 mds_migrator
0/ 1 buffer
0/ 1 timer
0/ 1 filer
0/ 1 striper
0/ 1 objecter
0/ 5 rados
0/ 5 rbd
0/ 5 rbd_replay
0/ 5 journaler
0/ 5 objectcacher
0/ 5 client
0/ 5 osd
0/ 5 optracker
0/ 5 objclass
1/ 3 filestore
1/ 3 keyvaluestore
1/ 3 journal
0/ 5 ms
1/ 5 mon
0/10 monc
1/ 5 paxos
0/ 5 tp
1/ 5 auth
1/ 5 crypto
1/ 1 finisher
1/ 5 heartbeatmap
1/ 5 perfcounter
1/ 5 rgw
1/10 civetweb
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
0/ 0 refs
-2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 10000
max_new 1000
log_file /var/log/ceph/ceph-mds.compute01.log
--- end dump of recent events ---