Bug #167 » mds2.log

log of the crashing mds - Fred, 06/01/2010 01:21 PM

10.06.01_22:10:06.162537 --- opened log /var/log/ceph/r1-11.5559 ---
ceph version 0.21~rc ()
10.06.01_22:10:06.162786 7f1d238dd720 -- 0.0.0.0:6800/5559 accepter.bind ms_addr is 0.0.0.0:6800/5559 need_addr=1
10.06.01_22:10:06.162832 7f1d238dd720 -- 0.0.0.0:6800/5559 messenger.start
10.06.01_22:10:06.162841 7f1d238dd720 -- 0.0.0.0:6800/5559 messenger.start daemonizing
10.06.01_22:10:06.163149 7f1d238dd720 ---- renamed log /var/log/ceph/r1-11.5559 -> /var/log/ceph/r1-11.5560 ----
10.06.01_22:10:06.163258 7f1d238dd720 -- 0.0.0.0:6800/5559 accepter.start
10.06.01_22:10:06.163431 7f1d238dd720 mds-1.0 120 MDSCacheObject
10.06.01_22:10:06.163442 7f1d238dd720 mds-1.0 1568 CInode
10.06.01_22:10:06.163448 7f1d238dd720 mds-1.0 16 elist<>::item *7=112
10.06.01_22:10:06.163454 7f1d238dd720 mds-1.0 352 inode_t
10.06.01_22:10:06.163459 7f1d238dd720 mds-1.0 56 nest_info_t
10.06.01_22:10:06.163465 7f1d238dd720 mds-1.0 32 frag_info_t
10.06.01_22:10:06.163470 7f1d238dd720 mds-1.0 40 SimpleLock *5=200
10.06.01_22:10:06.163475 7f1d238dd720 mds-1.0 48 ScatterLock *3=144
10.06.01_22:10:06.163481 7f1d238dd720 mds-1.0 416 CDentry
10.06.01_22:10:06.163486 7f1d238dd720 mds-1.0 16 elist<>::item
10.06.01_22:10:06.163494 7f1d238dd720 mds-1.0 40 SimpleLock
10.06.01_22:10:06.163499 7f1d238dd720 mds-1.0 1480 CDir
10.06.01_22:10:06.163505 7f1d238dd720 mds-1.0 16 elist<>::item *2=32
10.06.01_22:10:06.163510 7f1d238dd720 mds-1.0 192 fnode_t
10.06.01_22:10:06.163515 7f1d238dd720 mds-1.0 56 nest_info_t *2
10.06.01_22:10:06.163521 7f1d238dd720 mds-1.0 32 frag_info_t *2
10.06.01_22:10:06.163526 7f1d238dd720 mds-1.0 168 Capability
10.06.01_22:10:06.163531 7f1d238dd720 mds-1.0 32 xlist<>::item *2=64
10.06.01_22:10:06.163593 7f1d238dd720 ---- renamed symlink /var/log/ceph/mds.r1-11 -> /var/log/ceph/mds.r1-11.0 ----
10.06.01_22:10:06.163623 7f1d238dd720 ---- created symlink /var/log/ceph/mds.r1-11 -> r1-11.5560 ----
10.06.01_22:10:06.163735 7f1d238dd720 -- 0.0.0.0:6800/5559 --> mon1 172.16.2.10:6789/0 -- auth(proto 0 30 bytes) v1 -- ?+0 0xb9fb50
10.06.01_22:10:06.164111 7f1d209cd710 -- 172.16.2.11:6800/5559 learned my addr 172.16.2.11:6800/5559
10.06.01_22:10:06.164149 7f1d209cd710 mds-1.0 MDS::ms_get_authorizer type=mon
10.06.01_22:10:06.164297 7f1d219cf710 mds-1.0 ms_handle_connect on 172.16.2.10:6789/0
10.06.01_22:10:06.164587 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 1 ==== auth_reply(proto 1 0 Success) v1 ==== 24+0+0 (2683869798 0 0) 0xb9fb50
10.06.01_22:10:06.164640 7f1d219cf710 monclient(hunting): found mon1
10.06.01_22:10:06.164648 7f1d219cf710 -- 172.16.2.11:6800/5559 --> mon1 172.16.2.10:6789/0 -- mon_subscribe({monmap=0+}) v1 -- ?+0 0xba68d0
10.06.01_22:10:06.164698 7f1d238dd720 mds-1.0 beacon_send up:boot seq 1 (currently up:boot)
10.06.01_22:10:06.164717 7f1d238dd720 -- 172.16.2.11:6800/5559 --> mon1 172.16.2.10:6789/0 -- mdsbeacon(11224/r1-11 up:boot seq 1 v0) v1 -- ?+0 0xb9fb50
10.06.01_22:10:06.164805 7f1d238dd720 -- 172.16.2.11:6800/5559 --> mon1 172.16.2.10:6789/0 -- mon_subscribe({monmap=0+,osdmap=0}) v1 -- ?+0 0xb9fb50
10.06.01_22:10:06.164843 7f1d238dd720 -- 172.16.2.11:6800/5559 --> mon1 172.16.2.10:6789/0 -- mon_subscribe({mdsmap=0+,monmap=0+,osdmap=0}) v1 -- ?+0 0xba70d0
10.06.01_22:10:06.164925 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 2 ==== mon_map v1 ==== 481+0+0 (3368855601 0 0) 0xb9fb50
10.06.01_22:10:06.164949 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 3 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (432179016 0 0) 0xba7350
10.06.01_22:10:06.165059 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 4 ==== mon_map v1 ==== 481+0+0 (3368855601 0 0) 0xb9fb50
10.06.01_22:10:06.165154 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 5 ==== osd_map(177,177) v1 ==== 1996+0+0 (3909417258 0 0) 0xb9fb50
10.06.01_22:10:06.165255 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 6 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (432179016 0 0) 0xba70d0
10.06.01_22:10:06.165271 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 7 ==== mdsmap(e 1323) v1 ==== 2078+0+0 (3978728923 0 0) 0x7f1d18001310
10.06.01_22:10:06.165295 7f1d219cf710 mds-1.0 handle_mds_map epoch 1323 from mon1
10.06.01_22:10:06.165326 7f1d219cf710 mds-1.0 my compat compat={},rocompat={},incompat={1=base v0.20}
10.06.01_22:10:06.165340 7f1d219cf710 mds-1.0 mdsmap compat compat={},rocompat={},incompat={1=base v0.20}
10.06.01_22:10:06.165347 7f1d219cf710 mds-1.0 map says i am 172.16.2.11:6800/5559 mds-1 state down:dne
10.06.01_22:10:06.165357 7f1d219cf710 mds-1.0 not in map yet
10.06.01_22:10:06.165367 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 8 ==== mon_map v1 ==== 481+0+0 (3368855601 0 0) 0x7f1d18001770
10.06.01_22:10:06.165383 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 9 ==== osd_map(177,177) v1 ==== 1996+0+0 (3909417258 0 0) 0x7f1d18002230
10.06.01_22:10:06.165399 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 10 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (432179016 0 0) 0x7f1d180025d0
10.06.01_22:10:07.028702 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 11 ==== mdsmap(e 1324) v1 ==== 2279+0+0 (2262088182 0 0) 0x7f1d180013d0
10.06.01_22:10:07.028725 7f1d219cf710 mds-1.0 handle_mds_map epoch 1324 from mon1
10.06.01_22:10:07.028752 7f1d219cf710 mds-1.0 my compat compat={},rocompat={},incompat={1=base v0.20}
10.06.01_22:10:07.028760 7f1d219cf710 mds-1.0 mdsmap compat compat={},rocompat={},incompat={1=base v0.20}
10.06.01_22:10:07.028767 7f1d219cf710 mds-1.0 map says i am 172.16.2.11:6800/5559 mds-1 state down:dne
10.06.01_22:10:07.028776 7f1d219cf710 mds-1.0 not in map yet
10.06.01_22:10:08.991028 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 12 ==== mdsmap(e 1325) v1 ==== 2481+0+0 (3716030284 0 0) 0x7f1d180014a0
10.06.01_22:10:08.991052 7f1d219cf710 mds-1.0 handle_mds_map epoch 1325 from mon1
10.06.01_22:10:08.991079 7f1d219cf710 mds-1.0 my compat compat={},rocompat={},incompat={1=base v0.20}
10.06.01_22:10:08.991087 7f1d219cf710 mds-1.0 mdsmap compat compat={},rocompat={},incompat={1=base v0.20}
10.06.01_22:10:08.991094 7f1d219cf710 mds-1.0 map says i am 172.16.2.11:6800/5559 mds-1 state down:dne
10.06.01_22:10:08.991103 7f1d219cf710 mds-1.0 not in map yet
10.06.01_22:10:10.164797 7f1d1f9cb710 mds-1.0 beacon_send up:boot seq 2 (currently down:dne)
10.06.01_22:10:10.164819 7f1d1f9cb710 -- 172.16.2.11:6800/5559 --> mon1 172.16.2.10:6789/0 -- mdsbeacon(11224/r1-11 up:boot seq 2 v1325) v1 -- ?+0 0xbaed90
10.06.01_22:10:11.164930 7f1d1f9cb710 mds-1.bal get_load no root, no load
10.06.01_22:10:11.165028 7f1d1f9cb710 mds-1.bal get_load mdsload<[0,0 0]/[0,0 0], req 0, hr 0, qlen 0, cpu 0.43>
10.06.01_22:10:11.815886 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 13 ==== mdsmap(e 1326) v1 ==== 2683+0+0 (343295712 0 0) 0x7f1d18001550
10.06.01_22:10:11.815912 7f1d219cf710 mds-1.0 handle_mds_map epoch 1326 from mon1
10.06.01_22:10:11.815943 7f1d219cf710 mds-1.0 my compat compat={},rocompat={},incompat={1=base v0.20}
10.06.01_22:10:11.815951 7f1d219cf710 mds-1.0 mdsmap compat compat={},rocompat={},incompat={1=base v0.20}
10.06.01_22:10:11.815959 7f1d219cf710 mds-1.0 map says i am 172.16.2.11:6800/5559 mds-1 state up:standby
10.06.01_22:10:11.815967 7f1d219cf710 mds-1.0 handle_mds_map standby
10.06.01_22:10:14.164914 7f1d1f9cb710 mds-1.0 beacon_send up:standby seq 3 (currently up:standby)
10.06.01_22:10:14.164931 7f1d1f9cb710 -- 172.16.2.11:6800/5559 --> mon1 172.16.2.10:6789/0 -- mdsbeacon(11224/r1-11 up:standby seq 3 v1326) v1 -- ?+0 0xbad000
10.06.01_22:10:14.260732 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 14 ==== mdsbeacon(11224/r1-11 up:standby seq 3 v1326) v1 ==== 71+0+0 (3957465932 0 0) 0x7f1d18000960
10.06.01_22:10:14.260761 7f1d219cf710 mds-1.0 handle_mds_beacon up:standby seq 3 rtt 0.095833
10.06.01_22:10:16.164981 7f1d1f9cb710 mds-1.bal get_load no root, no load
10.06.01_22:10:16.165016 7f1d1f9cb710 mds-1.bal get_load mdsload<[0,0 0]/[0,0 0], req 0, hr 0, qlen 0, cpu 0.39>
10.06.01_22:10:18.165015 7f1d1f9cb710 mds-1.0 beacon_send up:standby seq 4 (currently up:standby)
10.06.01_22:10:18.165030 7f1d1f9cb710 -- 172.16.2.11:6800/5559 --> mon1 172.16.2.10:6789/0 -- mdsbeacon(11224/r1-11 up:standby seq 4 v1326) v1 -- ?+0 0xbad000
10.06.01_22:10:18.165556 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 15 ==== mdsbeacon(11224/r1-11 up:standby seq 4 v1326) v1 ==== 71+0+0 (560746911 0 0) 0x7f1d18000960
10.06.01_22:10:18.165582 7f1d219cf710 mds-1.0 handle_mds_beacon up:standby seq 4 rtt 0.000552
10.06.01_22:10:18.507539 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 16 ==== mdsmap(e 1327) v1 ==== 2070+0+0 (2876296017 0 0) 0x7f1d18001130
10.06.01_22:10:18.507560 7f1d219cf710 mds-1.0 handle_mds_map epoch 1327 from mon1
10.06.01_22:10:18.507587 7f1d219cf710 mds-1.0 my compat compat={},rocompat={},incompat={1=base v0.20}
10.06.01_22:10:18.507595 7f1d219cf710 mds-1.0 mdsmap compat compat={},rocompat={},incompat={1=base v0.20}
10.06.01_22:10:18.507603 7f1d219cf710 mds2.0 map says i am 172.16.2.11:6800/5559 mds2 state up:replay
10.06.01_22:10:18.507637 7f1d219cf710 ---- renamed symlink /var/log/ceph/mds2 -> /var/log/ceph/mds2.0 ----
10.06.01_22:10:18.507660 7f1d219cf710 ---- created symlink /var/log/ceph/mds2 -> r1-11.5560 ----
10.06.01_22:10:18.507827 7f1d219cf710 mds2.11 handle_mds_map i am now mds2.11
10.06.01_22:10:18.507836 7f1d219cf710 mds2.11 handle_mds_map state change up:standby --> up:replay
10.06.01_22:10:18.507843 7f1d219cf710 mds2.11 replay_start
10.06.01_22:10:18.507850 7f1d219cf710 mds2.11 now replay. my recovery peers are 0,1
10.06.01_22:10:18.507862 7f1d219cf710 mds2.cache set_recovery_set 0,1
10.06.01_22:10:18.507870 7f1d219cf710 mds2.11 boot_start 1: opening inotable
10.06.01_22:10:18.507878 7f1d219cf710 mds2.inotable: load
10.06.01_22:10:18.507907 7f1d219cf710 -- 172.16.2.11:6800/5559 --> osd0 172.16.2.9:6801/2678 -- osd_op(mds2.11:1 mds2_inotable [read 0~0] 1.6843) v1 -- ?+0 0xbbaa80
10.06.01_22:10:18.507952 7f1d219cf710 mds2.11 boot_start 1: opening sessionmap
10.06.01_22:10:18.507964 7f1d219cf710 mds2.sessionmap load
10.06.01_22:10:18.507981 7f1d219cf710 -- 172.16.2.11:6800/5559 --> osd2 172.16.2.11:6801/3015 -- osd_op(mds2.11:2 mds2_sessionmap [read 0~0] 1.3c91) v1 -- ?+0 0xbbb6f0
10.06.01_22:10:18.508017 7f1d219cf710 mds2.11 boot_start 1: opening mds log
10.06.01_22:10:18.508028 7f1d219cf710 mds2.log open discovering log bounds
10.06.01_22:10:18.508061 7f1d219cf710 -- 172.16.2.11:6800/5559 --> osd0 172.16.2.9:6801/2678 -- osd_op(mds2.11:3 202.00000000 [read 0~0] 1.dda1) v1 -- ?+0 0xbbcf80
10.06.01_22:10:18.508085 7f1d219cf710 mds2.cache handle_mds_failure mds0
10.06.01_22:10:18.508095 7f1d219cf710 mds2.cache handle_mds_failure mds0 : recovery peers are 0,1
10.06.01_22:10:18.508102 7f1d219cf710 mds2.cache wants_resolve
10.06.01_22:10:18.508108 7f1d219cf710 mds2.cache got_resolve
10.06.01_22:10:18.508113 7f1d219cf710 mds2.cache rejoin_sent
10.06.01_22:10:18.508118 7f1d219cf710 mds2.cache rejoin_gather
10.06.01_22:10:18.508127 7f1d219cf710 mds2.cache rejoin_ack_gather
10.06.01_22:10:18.508138 7f1d1d9c7710 mds2.11 MDS::ms_get_authorizer type=osd
10.06.01_22:10:18.508158 7f1d219cf710 mds2.migrator handle_mds_failure_or_stop mds0
10.06.01_22:10:18.508188 7f1d219cf710 mds2.cache show_subtrees - no subtrees
10.06.01_22:10:18.508195 7f1d219cf710 mds2.cache handle_mds_failure mds1
10.06.01_22:10:18.508203 7f1d219cf710 mds2.cache handle_mds_failure mds1 : recovery peers are 0,1
10.06.01_22:10:18.508210 7f1d219cf710 mds2.cache wants_resolve
10.06.01_22:10:18.508215 7f1d219cf710 mds2.cache got_resolve
10.06.01_22:10:18.508220 7f1d219cf710 mds2.cache rejoin_sent
10.06.01_22:10:18.508225 7f1d219cf710 mds2.cache rejoin_gather
10.06.01_22:10:18.508230 7f1d219cf710 mds2.cache rejoin_ack_gather
10.06.01_22:10:18.508236 7f1d219cf710 mds2.migrator handle_mds_failure_or_stop mds1
10.06.01_22:10:18.508243 7f1d219cf710 mds2.cache show_subtrees - no subtrees
10.06.01_22:10:18.508259 7f1d219cf710 mds2.cache handle_mds_failure mds2
10.06.01_22:10:18.508273 7f1d219cf710 mds2.cache handle_mds_failure mds2 : recovery peers are 0,1
10.06.01_22:10:18.508280 7f1d219cf710 mds2.cache wants_resolve
10.06.01_22:10:18.508290 7f1d219cf710 mds2.cache got_resolve
10.06.01_22:10:18.508299 7f1d219cf710 mds2.cache rejoin_sent
10.06.01_22:10:18.508308 7f1d219cf710 mds2.cache rejoin_gather
10.06.01_22:10:18.508317 7f1d219cf710 mds2.cache rejoin_ack_gather
10.06.01_22:10:18.508327 7f1d219cf710 mds2.migrator handle_mds_failure_or_stop mds2
10.06.01_22:10:18.508339 7f1d219cf710 mds2.cache show_subtrees - no subtrees
10.06.01_22:10:18.508354 7f1d219cf710 mds2.bal check_targets have need want
10.06.01_22:10:18.508365 7f1d219cf710 mds2.bal rebalance done
10.06.01_22:10:18.508375 7f1d219cf710 mds2.cache show_subtrees - no subtrees
10.06.01_22:10:18.508389 7f1d1e1c8710 mds2.11 MDS::ms_get_authorizer type=osd
10.06.01_22:10:18.508407 7f1d219cf710 mds2.11 ms_handle_connect on 172.16.2.11:6801/3015
10.06.01_22:10:18.508914 7f1d219cf710 -- 172.16.2.11:6800/5559 <== osd2 172.16.2.11:6801/3015 1 ==== osd_op_reply(2 mds2_sessionmap [read 0~17] = 0) v1 ==== 101+0+17 (2914000337 0 2334645356) 0xbaed60
10.06.01_22:10:18.508944 7f1d219cf710 mds2.sessionmap dump
10.06.01_22:10:18.508954 7f1d219cf710 mds2.sessionmap _load_finish v 0, 0 sessions, 17 bytes
10.06.01_22:10:18.508961 7f1d219cf710 mds2.sessionmap dump
10.06.01_22:10:18.530618 7f1d219cf710 mds2.11 ms_handle_connect on 172.16.2.9:6801/2678
10.06.01_22:10:18.531994 7f1d219cf710 -- 172.16.2.11:6800/5559 <== osd0 172.16.2.9:6801/2678 1 ==== osd_op_reply(1 mds2_inotable [read 0~29] = 0) v1 ==== 99+0+29 (2892176182 0 936214950) 0xbaed60
10.06.01_22:10:18.532017 7f1d219cf710 mds2.inotable: load_2 got 29 bytes
10.06.01_22:10:18.532024 7f1d219cf710 mds2.inotable: load_2 loaded v0
10.06.01_22:10:18.532840 7f1d219cf710 -- 172.16.2.11:6800/5559 <== osd0 172.16.2.9:6801/2678 2 ==== osd_op_reply(3 202.00000000 [read 0~84] = 0) v1 ==== 98+0+84 (2503914187 0 707038117) 0xbaed60
10.06.01_22:10:18.532886 7f1d219cf710 -- 172.16.2.11:6800/5559 --> osd2 172.16.2.11:6801/3015 -- osd_op(mds2.11:4 202.00000001 [stat 0~0] 1.ae91) v1 -- ?+0 0xbbcf80
10.06.01_22:10:18.532915 7f1d219cf710 -- 172.16.2.11:6800/5559 --> osd0 172.16.2.9:6801/2678 -- osd_op(mds2.11:5 202.00000002 [stat 0~0] 1.49be) v1 -- ?+0 0x7f1d180008e0
10.06.01_22:10:18.533909 7f1d219cf710 -- 172.16.2.11:6800/5559 <== osd0 172.16.2.9:6801/2678 3 ==== osd_op_reply(5 202.00000002 [stat 0~0] = -2 (No such file or directory)) v1 ==== 98+0+0 (2901398551 0 0) 0xbaed60
10.06.01_22:10:18.676635 7f1d219cf710 -- 172.16.2.11:6800/5559 <== osd2 172.16.2.11:6801/3015 2 ==== osd_op_reply(4 202.00000001 [stat 0~0] = 0) v1 ==== 98+0+16 (2688090868 0 2635895410) 0xbaed60
10.06.01_22:10:18.676735 7f1d219cf710 mds2.11 boot_start 3: replaying mds log
10.06.01_22:10:18.676747 7f1d219cf710 mds2.log replay start, from 4194304 to 4197724
10.06.01_22:10:18.676822 7f1d17fff710 mds2.log _replay_thread start
10.06.01_22:10:18.676868 7f1d17fff710 -- 172.16.2.11:6800/5559 --> osd2 172.16.2.11:6801/3015 -- osd_op(mds2.11:6 202.00000001 [read 0~3420] 1.ae91) v1 -- ?+0 0xbbcf80
10.06.01_22:10:18.859705 7f1d219cf710 -- 172.16.2.11:6800/5559 <== osd2 172.16.2.11:6801/3015 3 ==== osd_op_reply(6 202.00000001 [read 0~3420] = 0) v1 ==== 98+0+3420 (2272035302 0 3833640900) 0xbbfd80
10.06.01_22:10:18.859766 7f1d17fff710 mds2.log _replay 4194304~95 / 4197724 : subtree_map 0 subtrees [metablob]
10.06.01_22:10:18.859784 7f1d17fff710 mds2.journal ESubtreeMap.replay -- reconstructing (auth) subtree spanning tree
10.06.01_22:10:18.859794 7f1d17fff710 mds2.journal EMetaBlob.replay 0 dirlumps
10.06.01_22:10:18.859806 7f1d17fff710 mds2.cache show_subtrees - no subtrees
10.06.01_22:10:18.859827 7f1d17fff710 mds2.log _replay 4194403~365 / 4197724 : subtree_map 1 subtrees [metablob 102, 1 dirs]
10.06.01_22:10:18.859836 7f1d17fff710 mds2.journal ESubtreeMap.replay -- reconstructing (auth) subtree spanning tree
10.06.01_22:10:18.859849 7f1d17fff710 mds2.journal EMetaBlob.replay 1 dirlumps
10.06.01_22:10:18.859855 7f1d17fff710 mds2.journal EMetaBlob.replay dir 102
10.06.01_22:10:18.859869 7f1d17fff710 mds2.journal EMetaBlob.replay created base [inode 102 [...2,head] ~mds2/ auth v1 snaprealm=0xbbcf10 f(v0 m0.000000 0=0+0) ds=0=0+0 rb=0 rf=0 rd=0 (iauth sync) (ilink sync) (idft sync) (isnap sync) (inest sync) (ifile sync) (ixattr sync) (iversion lock) 0x7f1d180017d0]
10.06.01_22:10:18.859931 7f1d17fff710 mds2.cache adjust_subtree_auth -1,-2 -> -2,-2 on [dir 102 ~mds2/ [2,head] auth v=0 cv=0/0 state=1073741824 f(v0 m0.000000 0=0+0) s=0=0+0 rb=0/0 rf=0/0 rd=0/0 hs=0+0,ss=0+0 0xbc3050]
10.06.01_22:10:18.859951 7f1d17fff710 mds2.cache show_subtrees - no subtrees
10.06.01_22:10:18.859962 7f1d17fff710 mds2.cache current root is [dir 102 ~mds2/ [2,head] auth v=0 cv=0/0 state=1073741824 f(v0 m0.000000 0=0+0) s=0=0+0 rb=0/0 rf=0/0 rd=0/0 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:18.859975 7f1d17fff710 mds2.cache.dir(102) setting dir_auth=-2,-2 from -1,-2 on [dir 102 ~mds2/ [2,head] auth v=0 cv=0/0 state=1073741824 f(v0 m0.000000 0=0+0) s=0=0+0 rb=0/0 rf=0/0 rd=0/0 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:18.859988 7f1d17fff710 mds2.cache.dir(102) new subtree root, adjusting auth_pins
10.06.01_22:10:18.859997 7f1d17fff710 mds2.cache show_subtrees
10.06.01_22:10:18.860013 7f1d17fff710 mds2.cache |__-2 auth [dir 102 ~mds2/ [2,head] auth v=0 cv=0/0 dir_auth=-2 state=1073741824 f(v0 m0.000000 0=0+0) s=0=0+0 rb=0/0 rf=0/0 rd=0/0 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:18.860027 7f1d17fff710 mds2.journal EMetaBlob.replay added dir [dir 102 ~mds2/ [2,head] auth v=0 cv=0/0 dir_auth=-2 state=1073741824 f(v0 m0.000000 0=0+0) s=0=0+0 rb=0/0 rf=0/0 rd=0/0 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:18.860046 7f1d17fff710 mds2.cache adjust_bounded_subtree_auth -2,-2 -> 2,-2 on [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=-2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050] bound_dfs []
10.06.01_22:10:18.860060 7f1d17fff710 mds2.cache adjust_bounded_subtree_auth -2,-2 -> 2,-2 on [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=-2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050] bounds
10.06.01_22:10:18.860074 7f1d17fff710 mds2.cache show_subtrees
10.06.01_22:10:18.860082 7f1d17fff710 mds2.cache |__-2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=-2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:18.860095 7f1d17fff710 mds2.cache current root is [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=-2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:18.860107 7f1d17fff710 mds2.cache.dir(102) setting dir_auth=2,-2 from -2,-2 on [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=-2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:18.860124 7f1d17fff710 mds2.cache show_subtrees
10.06.01_22:10:18.860132 7f1d17fff710 mds2.cache |__ 2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:18.860145 7f1d17fff710 mds2.cache show_subtrees
10.06.01_22:10:18.860153 7f1d17fff710 mds2.cache |__ 2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:18.860172 7f1d17fff710 mds2.log _replay 4194772~365 / 4197724 : subtree_map 1 subtrees [metablob 102, 1 dirs]
10.06.01_22:10:18.860181 7f1d17fff710 mds2.journal ESubtreeMap.replay -- ignoring, already have import map
10.06.01_22:10:18.860192 7f1d17fff710 mds2.log _replay 4195141~365 / 4197724 : subtree_map 1 subtrees [metablob 102, 1 dirs]
10.06.01_22:10:18.860201 7f1d17fff710 mds2.journal ESubtreeMap.replay -- ignoring, already have import map
10.06.01_22:10:18.860218 7f1d17fff710 mds2.log _replay 4195510~365 / 4197724 : subtree_map 1 subtrees [metablob 102, 1 dirs]
10.06.01_22:10:18.860226 7f1d17fff710 mds2.journal ESubtreeMap.replay -- ignoring, already have import map
10.06.01_22:10:18.860237 7f1d17fff710 mds2.log _replay 4195879~365 / 4197724 : subtree_map 1 subtrees [metablob 102, 1 dirs]
10.06.01_22:10:18.860245 7f1d17fff710 mds2.journal ESubtreeMap.replay -- ignoring, already have import map
10.06.01_22:10:18.860256 7f1d17fff710 mds2.log _replay 4196248~365 / 4197724 : subtree_map 1 subtrees [metablob 102, 1 dirs]
10.06.01_22:10:18.860264 7f1d17fff710 mds2.journal ESubtreeMap.replay -- ignoring, already have import map
10.06.01_22:10:18.860275 7f1d17fff710 mds2.log _replay 4196617~365 / 4197724 : subtree_map 1 subtrees [metablob 102, 1 dirs]
10.06.01_22:10:18.860283 7f1d17fff710 mds2.journal ESubtreeMap.replay -- ignoring, already have import map
10.06.01_22:10:18.860294 7f1d17fff710 mds2.log _replay 4196986~365 / 4197724 : subtree_map 1 subtrees [metablob 102, 1 dirs]
10.06.01_22:10:18.860302 7f1d17fff710 mds2.journal ESubtreeMap.replay -- ignoring, already have import map
10.06.01_22:10:18.860313 7f1d17fff710 mds2.log _replay 4197355~365 / 4197724 : subtree_map 1 subtrees [metablob 102, 1 dirs]
10.06.01_22:10:18.860322 7f1d17fff710 mds2.journal ESubtreeMap.replay -- ignoring, already have import map
10.06.01_22:10:18.860329 7f1d17fff710 mds2.log _replay - complete, 10 events, new read/expire pos is 4194304
10.06.01_22:10:18.860335 7f1d17fff710 mds2.log _replay - truncating at 4197724
10.06.01_22:10:18.860343 7f1d17fff710 mds2.log _replay_thread nothing to truncate, kicking waiters
10.06.01_22:10:18.860350 7f1d17fff710 mds2.11 replay_done in=3 failed=0
10.06.01_22:10:18.860356 7f1d17fff710 mds2.11 i am not alone, moving to state resolve
10.06.01_22:10:18.860362 7f1d17fff710 mds2.11 request_state up:resolve
10.06.01_22:10:18.860367 7f1d17fff710 mds2.11 beacon_send up:resolve seq 5 (currently up:replay)
10.06.01_22:10:18.860376 7f1d17fff710 -- 172.16.2.11:6800/5559 --> mon1 172.16.2.10:6789/0 -- mdsbeacon(11224/r1-11 up:resolve seq 5 v1327) v1 -- ?+0 0xbbda40
10.06.01_22:10:18.860439 7f1d17fff710 mds2.log start_new_segment at 4197724
10.06.01_22:10:18.860452 7f1d17fff710 mds2.cache create_subtree_map 1 subtrees, 1 fullauth
10.06.01_22:10:18.860460 7f1d17fff710 mds2.cache subtree [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:18.860492 7f1d17fff710 mds2.log submit_entry 4197724~365 : subtree_map 1 subtrees [metablob 102, 1 dirs]
10.06.01_22:10:18.860524 7f1d17fff710 mds2.log _replay_thread finish
10.06.01_22:10:19.462370 7f1d177fe710 mds2.11 new session 0xbce970 for mds11223 172.16.2.9:6800/6024
10.06.01_22:10:19.462612 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mds1 172.16.2.9:6800/6024 1 ==== mdsmap(e 1328) v1 ==== 2070+0+0 (530702378 0 0) 0xbaf870
10.06.01_22:10:19.462639 7f1d219cf710 mds2.11 handle_mds_map epoch 1328 from mds1
10.06.01_22:10:19.462649 7f1d219cf710 mds2.11 peer mds1 has mdsmap epoch >= 1328
10.06.01_22:10:19.462679 7f1d219cf710 mds2.11 my compat compat={},rocompat={},incompat={1=base v0.20}
10.06.01_22:10:19.462688 7f1d219cf710 mds2.11 mdsmap compat compat={},rocompat={},incompat={1=base v0.20}
10.06.01_22:10:19.462696 7f1d219cf710 mds2.11 map says i am 172.16.2.11:6800/5559 mds2 state up:replay
10.06.01_22:10:19.462712 7f1d219cf710 mds2.bal check_targets have need want
10.06.01_22:10:19.462745 7f1d219cf710 mds2.bal map: i imported [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050] from 2
10.06.01_22:10:19.462762 7f1d219cf710 mds2.bal rebalance done
10.06.01_22:10:19.462770 7f1d219cf710 mds2.cache show_subtrees
10.06.01_22:10:19.462779 7f1d219cf710 mds2.cache |__ 2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:19.462809 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mds1 172.16.2.9:6800/6024 2 ==== mds_resolve(2+0 subtrees +0 slave requests) v1 ==== 44+0+0 (284640460 0 0) 0x7f1d18000db0
10.06.01_22:10:19.462828 7f1d219cf710 mds2.cache handle_resolve from mds1
10.06.01_22:10:19.462836 7f1d219cf710 mds2.cache show_subtrees
10.06.01_22:10:19.462844 7f1d219cf710 mds2.cache |__ 2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:19.462860 7f1d219cf710 mds2.cache maybe_resolve_finish still waiting for more resolves, got (1), need (0,1)
10.06.01_22:10:19.505834 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 17 ==== mdsmap(e 1328) v1 ==== 2070+0+0 (530702378 0 0) 0x7f1d18000db0
10.06.01_22:10:19.505850 7f1d219cf710 mds2.11 handle_mds_map epoch 1328 from mon1
10.06.01_22:10:19.505856 7f1d219cf710 mds2.11 old map epoch 1328 <= 1328, discarding
10.06.01_22:10:19.505881 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 18 ==== mdsbeacon(11224/r1-11 up:resolve seq 5 v1328) v1 ==== 71+0+0 (381527815 0 0) 0x7f1d18000db0
10.06.01_22:10:19.505896 7f1d219cf710 mds2.11 handle_mds_beacon up:resolve seq 5 rtt 0.645522
10.06.01_22:10:21.165062 7f1d1f9cb710 -- 172.16.2.11:6800/5559 --> osd2 172.16.2.11:6801/3015 -- osd_op(mds2.11:7 202.00000001 [write 3420~369] 1.ae91) v1 -- ?+0 0xbae890
10.06.01_22:10:21.165112 7f1d1f9cb710 -- 172.16.2.11:6800/5559 --> osd0 172.16.2.9:6801/2678 -- osd_op(mds2.11:8 202.00000000 [writefull 0~84] 1.dda1) v1 -- ?+0 0x7f1d1800e760
10.06.01_22:10:21.165161 7f1d1f9cb710 mds2.bal get_load mdsload<[0,0 0]/[0,0 0], req 0, hr 0, qlen 0, cpu 0.6>
10.06.01_22:10:21.182004 7f1d219cf710 -- 172.16.2.11:6800/5559 <== osd2 172.16.2.11:6801/3015 4 ==== osd_op_reply(7 202.00000001 [write 3420~369] = 0) v1 ==== 98+0+0 (1359384031 0 0) 0xb9fb50
10.06.01_22:10:21.182043 7f1d219cf710 mds2.log _logged_subtree_map at 4197724
10.06.01_22:10:21.188522 7f1d219cf710 -- 172.16.2.11:6800/5559 <== osd0 172.16.2.9:6801/2678 4 ==== osd_op_reply(8 202.00000000 [writefull 0~84] = 0) v1 ==== 98+0+0 (503290713 0 0) 0xb9fb50
10.06.01_22:10:22.842505 7f1d167fc710 mds2.11 new session 0xbcede0 for mds11221 172.16.2.10:6800/5796
10.06.01_22:10:22.842775 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mds0 172.16.2.10:6800/5796 1 ==== mdsmap(e 1329) v1 ==== 2070+0+0 (20415635 0 0) 0xbaf870
10.06.01_22:10:22.842807 7f1d219cf710 mds2.11 handle_mds_map epoch 1329 from mds0
10.06.01_22:10:22.842817 7f1d219cf710 mds2.11 peer mds0 has mdsmap epoch >= 1329
10.06.01_22:10:22.842849 7f1d219cf710 mds2.11 my compat compat={},rocompat={},incompat={1=base v0.20}
10.06.01_22:10:22.842858 7f1d219cf710 mds2.11 mdsmap compat compat={},rocompat={},incompat={1=base v0.20}
10.06.01_22:10:22.842866 7f1d219cf710 mds2.11 map says i am 172.16.2.11:6800/5559 mds2 state up:replay
10.06.01_22:10:22.842882 7f1d219cf710 mds2.bal check_targets have need want
10.06.01_22:10:22.842892 7f1d219cf710 mds2.bal map: i imported [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050] from 2
10.06.01_22:10:22.842908 7f1d219cf710 mds2.bal rebalance done
10.06.01_22:10:22.842915 7f1d219cf710 mds2.cache show_subtrees
10.06.01_22:10:22.842925 7f1d219cf710 mds2.cache |__ 2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:22.842950 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mds0 172.16.2.10:6800/5796 2 ==== mds_resolve(2+0 subtrees +0 slave requests) v1 ==== 56+0+0 (2291435366 0 0) 0xbae890
10.06.01_22:10:22.842965 7f1d219cf710 mds2.cache handle_resolve from mds0
10.06.01_22:10:22.842979 7f1d219cf710 mds2.cache show_subtrees
10.06.01_22:10:22.842987 7f1d219cf710 mds2.cache |__ 2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:22.843008 7f1d219cf710 mds2.cache maybe_resolve_finish got all resolves+resolve_acks, done.
10.06.01_22:10:22.843017 7f1d219cf710 mds2.cache disambiguate_imports
10.06.01_22:10:22.843024 7f1d219cf710 mds2.cache show_subtrees
10.06.01_22:10:22.843032 7f1d219cf710 mds2.cache |__ 2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:22.860486 7f1d1f9cb710 mds2.11 beacon_send up:resolve seq 6 (currently up:replay)
10.06.01_22:10:22.860504 7f1d1f9cb710 -- 172.16.2.11:6800/5559 --> mon1 172.16.2.10:6789/0 -- mdsbeacon(11224/r1-11 up:resolve seq 6 v1329) v1 -- ?+0 0x7f1d1800d5d0
10.06.01_22:10:22.897875 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mon1 172.16.2.10:6789/0 19 ==== mdsmap(e 1329) v1 ==== 2070+0+0 (20415635 0 0) 0x7f1d1800e0d0
10.06.01_22:10:22.897898 7f1d219cf710 mds2.11 handle_mds_map epoch 1329 from mon1
10.06.01_22:10:22.897905 7f1d219cf710 mds2.11 old map epoch 1329 <= 1329, discarding
10.06.01_22:10:23.154806 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mds1 172.16.2.9:6800/6024 3 ==== mdsmap(e 1330) v1 ==== 2070+0+0 (3913802586 0 0) 0x7f1d1800e0d0
10.06.01_22:10:23.154840 7f1d219cf710 mds2.11 handle_mds_map epoch 1330 from mds1
10.06.01_22:10:23.154847 7f1d219cf710 mds2.11 peer mds1 has mdsmap epoch >= 1330
10.06.01_22:10:23.154877 7f1d219cf710 mds2.11 my compat compat={},rocompat={},incompat={1=base v0.20}
10.06.01_22:10:23.154886 7f1d219cf710 mds2.11 mdsmap compat compat={},rocompat={},incompat={1=base v0.20}
10.06.01_22:10:23.154894 7f1d219cf710 mds2.11 map says i am 172.16.2.11:6800/5559 mds2 state up:resolve
10.06.01_22:10:23.154903 7f1d219cf710 mds2.11 handle_mds_map state change up:replay --> up:resolve
10.06.01_22:10:23.154909 7f1d219cf710 mds2.11 resolve_start
10.06.01_22:10:23.154919 7f1d219cf710 mds2.cache resolve_start
10.06.01_22:10:23.154926 7f1d219cf710 mds2.cache send_resolve_now to mds0
10.06.01_22:10:23.154934 7f1d219cf710 mds2.cache show_subtrees
10.06.01_22:10:23.154944 7f1d219cf710 mds2.cache |__ 2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:23.154978 7f1d219cf710 -- 172.16.2.11:6800/5559 --> mds0 172.16.2.10:6800/5796 -- mdsmap(e 1330) v1 -- ?+0 0x7f1d18010960
10.06.01_22:10:23.154997 7f1d219cf710 -- 172.16.2.11:6800/5559 --> mds0 172.16.2.10:6800/5796 -- mds_resolve(1+0 subtrees +0 slave requests) v1 -- ?+0 0x7f1d18010600
10.06.01_22:10:23.155011 7f1d219cf710 mds2.cache send_resolve_now to mds1
10.06.01_22:10:23.155018 7f1d219cf710 mds2.cache show_subtrees
10.06.01_22:10:23.155026 7f1d219cf710 mds2.cache |__ 2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:23.155041 7f1d219cf710 -- 172.16.2.11:6800/5559 --> mds1 172.16.2.9:6800/6024 -- mds_resolve(1+0 subtrees +0 slave requests) v1 -- ?+0 0x7f1d18010c10
10.06.01_22:10:23.155058 7f1d219cf710 mds2.11 resolve set is 0,1,2, was 0,1
10.06.01_22:10:23.155072 7f1d219cf710 mds2.bal check_targets have need want
10.06.01_22:10:23.155081 7f1d219cf710 mds2.bal map: i imported [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050] from 2
10.06.01_22:10:23.155095 7f1d219cf710 mds2.bal rebalance done
10.06.01_22:10:23.155101 7f1d219cf710 mds2.cache show_subtrees
10.06.01_22:10:23.155109 7f1d219cf710 mds2.cache |__ 2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:23.155133 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mds1 172.16.2.9:6800/6024 4 ==== mds_resolve(2+0 subtrees +0 slave requests) v1 ==== 44+0+0 (284640460 0 0) 0x7f1d1800f6c0
10.06.01_22:10:23.155154 7f1d219cf710 mds2.cache handle_resolve from mds1
10.06.01_22:10:23.155162 7f1d219cf710 mds2.cache show_subtrees
10.06.01_22:10:23.155170 7f1d219cf710 mds2.cache |__ 2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:23.155184 7f1d219cf710 mds2.cache maybe_resolve_finish got all resolves+resolve_acks, done.
10.06.01_22:10:23.155190 7f1d219cf710 mds2.cache disambiguate_imports
10.06.01_22:10:23.155196 7f1d219cf710 mds2.cache show_subtrees
10.06.01_22:10:23.155204 7f1d219cf710 mds2.cache |__ 2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:23.155219 7f1d219cf710 mds2.cache trim_unlinked_inodes
10.06.01_22:10:23.155228 7f1d219cf710 mds2.cache recalc_auth_bits
10.06.01_22:10:23.155235 7f1d219cf710 mds2.cache subtree auth=1 for [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:23.155251 7f1d219cf710 mds2.cache show_subtrees
10.06.01_22:10:23.155259 7f1d219cf710 mds2.cache |__ 2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:23.155272 7f1d219cf710 mds2.cache show_cache
10.06.01_22:10:23.155278 7f1d219cf710 mds2.cache unlinked [inode 102 [...2,head] ~mds2/ auth v1 snaprealm=0xbbcf10 f(v0 m0.000000 0=0+0) ds=0=0+0 rb=0 rf=0 rd=0 (iauth sync) (ilink sync) (idft sync) (isnap sync) (inest sync) (ifile sync) (ixattr sync) (iversion lock) | nref=1 0x7f1d180017d0]
10.06.01_22:10:23.155297 7f1d219cf710 mds2.cache dirfrag [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:23.155310 7f1d219cf710 mds2.cache unlinked [inode 1 [...2,head] / rep@0.0 v1 snaprealm=0x7f1d180008b0 f(v0 m0.000000 0=0+0) ds=0=0+0 rb=0 rf=0 rd=0 (iauth sync) (ilink sync) (idft sync) (isnap sync) (inest sync) (ifile sync) (ixattr sync) (iversion lock) 0x7f1d180011b0]
10.06.01_22:10:23.155334 7f1d219cf710 mds2.cache trim_non_auth
10.06.01_22:10:23.155341 7f1d219cf710 mds2.cache ... [inode 1 [...2,head] / rep@0.0 v1 snaprealm=0x7f1d180008b0 f(v0 m0.000000 0=0+0) ds=0=0+0 rb=0 rf=0 rd=0 (iauth sync) (ilink sync) (idft sync) (isnap sync) (inest sync) (ifile sync) (ixattr sync) (iversion lock) 0x7f1d180011b0]
10.06.01_22:10:23.155356 7f1d219cf710 mds2.cache remove_inode [inode 1 [...2,head] / rep@0.0 v1 snaprealm=0x7f1d180008b0 f(v0 m0.000000 0=0+0) ds=0=0+0 rb=0 rf=0 rd=0 (iauth sync) (ilink sync) (idft sync) (isnap sync) (inest sync) (ifile sync) (ixattr sync) (iversion lock) 0x7f1d180011b0]
10.06.01_22:10:23.155375 7f1d219cf710 mds2.cache.ino(1) close_snaprealm snaprealm(1 seq 1 lc 0 snaps={} 0x7f1d180008b0)
10.06.01_22:10:23.155390 7f1d219cf710 mds2.cache show_subtrees
10.06.01_22:10:23.155398 7f1d219cf710 mds2.cache |__ 2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:23.155412 7f1d219cf710 mds2.11 resolve_done
10.06.01_22:10:23.155418 7f1d219cf710 mds2.11 request_state up:reconnect
10.06.01_22:10:23.155424 7f1d219cf710 mds2.11 beacon_send up:reconnect seq 7 (currently up:resolve)
10.06.01_22:10:23.155433 7f1d219cf710 -- 172.16.2.11:6800/5559 --> mon1 172.16.2.10:6789/0 -- mdsbeacon(11224/r1-11 up:reconnect seq 7 v1330) v1 -- ?+0 0x7f1d18010960
10.06.01_22:10:23.155467 7f1d219cf710 -- 172.16.2.11:6800/5559 <== mds0 172.16.2.10:6800/5796 3 ==== mds_resolve(2+0 subtrees +0 slave requests) v1 ==== 56+0+0 (2291435366 0 0) 0xbae890
10.06.01_22:10:23.155482 7f1d219cf710 mds2.cache handle_resolve from mds0
10.06.01_22:10:23.155489 7f1d219cf710 mds2.cache show_subtrees
10.06.01_22:10:23.155502 7f1d219cf710 mds2.cache |__ 2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:23.155516 7f1d219cf710 mds2.cache maybe_resolve_finish got all resolves+resolve_acks, done.
10.06.01_22:10:23.155522 7f1d219cf710 mds2.cache disambiguate_imports
10.06.01_22:10:23.155528 7f1d219cf710 mds2.cache show_subtrees
10.06.01_22:10:23.155535 7f1d219cf710 mds2.cache |__ 2 auth [dir 102 ~mds2/ [2,head] auth v=1 cv=0/0 dir_auth=2 state=1073741824 f(v0 m0.000000 2=1+1) s=2=1+1 rb=0/0 rf=1/1 rd=1/1 hs=0+0,ss=0+0 | nref=1 0xbc3050]
10.06.01_22:10:23.155548 7f1d219cf710 mds2.cache trim_unlinked_inodes
10.06.01_22:10:23.155554 7f1d219cf710 mds2.cache recalc_auth_bits