-181> 2017-03-29 12:11:06.298128 7f5e1b151700 10 mds.0.locker eval set loner to client.264103
-180> 2017-03-29 12:11:06.298130 7f5e1b151700 7 mds.0.locker file_eval wanted= loner_wanted= other_wanted= filelock=(ifile sync) on [inode 10000000402 [...2,head] /orbit-raid/crille/ auth v86631 f(v0 m2017-03-26 23:11:00.238514 1=0+1) n(v2021 rc2017-03-26 23:11:13.938803 b22867205265 146=124+22) (inest lock dirty) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba403b8]
-179> 2017-03-29 12:11:06.298139 7f5e1b151700 7 mds.0.locker file_eval stable, bump to loner (ifile sync) on [inode 10000000402 [...2,head] /orbit-raid/crille/ auth v86631 f(v0 m2017-03-26 23:11:00.238514 1=0+1) n(v2021 rc2017-03-26 23:11:13.938803 b22867205265 146=124+22) (inest lock dirty) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba403b8]
-178> 2017-03-29 12:11:06.298150 7f5e1b151700 7 mds.0.locker file_excl (ifile sync) on [inode 10000000402 [...2,head] /orbit-raid/crille/ auth v86631 f(v0 m2017-03-26 23:11:00.238514 1=0+1) n(v2021 rc2017-03-26 23:11:13.938803 b22867205265 146=124+22) (inest lock dirty) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba403b8]
-177> 2017-03-29 12:11:06.298161 7f5e1b151700 10 mds.0.locker simple_eval (iauth sync) on [inode 10000000402 [...2,head] /orbit-raid/crille/ auth v86631 f(v0 m2017-03-26 23:11:00.238514 1=0+1) n(v2021 rc2017-03-26 23:11:13.938803 b22867205265 146=124+22) (inest lock dirty) (ifile excl) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba403b8]
-176> 2017-03-29 12:11:06.298172 7f5e1b151700 10 mds.0.locker simple_eval (ilink sync) on [inode 10000000402 [...2,head] /orbit-raid/crille/ auth v86631 f(v0 m2017-03-26 23:11:00.238514 1=0+1) n(v2021 rc2017-03-26 23:11:13.938803 b22867205265 146=124+22) (inest lock dirty) (ifile excl) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba403b8]
-175> 2017-03-29 12:11:06.298182 7f5e1b151700 10 mds.0.locker simple_eval (ixattr sync) on [inode 10000000402 [...2,head] /orbit-raid/crille/ auth v86631 f(v0 m2017-03-26 23:11:00.238514 1=0+1) n(v2021 rc2017-03-26 23:11:13.938803 b22867205265 146=124+22) (inest lock dirty) (ifile excl) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba403b8]
-174> 2017-03-29 12:11:06.298193 7f5e1b151700 10 mds.0.locker scatter_eval (inest lock dirty) on [inode 10000000402 [...2,head] /orbit-raid/crille/ auth v86631 f(v0 m2017-03-26 23:11:00.238514 1=0+1) n(v2021 rc2017-03-26 23:11:13.938803 b22867205265 146=124+22) (inest lock dirty) (ifile excl) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba403b8]
-173> 2017-03-29 12:11:06.298203 7f5e1b151700 10 mds.0.locker simple_eval (iflock sync) on [inode 10000000402 [...2,head] /orbit-raid/crille/ auth v86631 f(v0 m2017-03-26 23:11:00.238514 1=0+1) n(v2021 rc2017-03-26 23:11:13.938803 b22867205265 146=124+22) (inest lock dirty) (ifile excl) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba403b8]
-172> 2017-03-29 12:11:06.298212 7f5e1b151700 10 mds.0.locker simple_eval (ipolicy sync) on [inode 10000000402 [...2,head] /orbit-raid/crille/ auth v86631 f(v0 m2017-03-26 23:11:00.238514 1=0+1) n(v2021 rc2017-03-26 23:11:13.938803 b22867205265 146=124+22) (inest lock dirty) (ifile excl) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba403b8]
-171> 2017-03-29 12:11:06.298223 7f5e1b151700 7 mds.0.locker issue_caps loner client.264103 allowed=pAsLsXsFsxcrwbl, xlocker allowed=pAsLsXsFsxcrwbl, others allowed=pAsLsXs on [inode 10000000402 [...2,head] /orbit-raid/crille/ auth v86631 f(v0 m2017-03-26 23:11:00.238514 1=0+1) n(v2021 rc2017-03-26 23:11:13.938803 b22867205265 146=124+22) (inest lock dirty) (ifile excl) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba403b8]
-170> 2017-03-29 12:11:06.298234 7f5e1b151700 20 mds.0.locker client.264103 pending pAsLsXsFs allowed pAsLsXsFsxcrwbl wanted -
-169> 2017-03-29 12:11:06.298236 7f5e1b151700 10 mds.0.locker eval done
-168> 2017-03-29 12:11:06.298239 7f5e1b151700 10 mds.0.locker eval 2496 [inode 1 [...2,head] / auth v8693 snaprealm=0x55cb6b93a780 f(v0 m2017-03-14 13:55:24.824023 3=1+2) n(v1931 rc2017-03-26 22:51:51.491696 b423170014153 143351=130809+12542)/n(v0 1=0+1) (inest lock dirty) (iversion lock) caps={244165=pAsLsXsFs/-@0,264103=pAsLsXsFs/-@0} | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3c628]
-167> 2017-03-29 12:11:06.298250 7f5e1b151700 10 mds.0.locker eval doesn't want loner
-166> 2017-03-29 12:11:06.298252 7f5e1b151700 7 mds.0.locker file_eval wanted= loner_wanted= other_wanted= filelock=(ifile sync) on [inode 1 [...2,head] / auth v8693 snaprealm=0x55cb6b93a780 f(v0 m2017-03-14 13:55:24.824023 3=1+2) n(v1931 rc2017-03-26 22:51:51.491696 b423170014153 143351=130809+12542)/n(v0 1=0+1) (inest lock dirty) (iversion lock) caps={244165=pAsLsXsFs/-@0,264103=pAsLsXsFs/-@0} | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3c628]
-165> 2017-03-29 12:11:06.298264 7f5e1b151700 10 mds.0.locker simple_eval (iauth sync) on [inode 1 [...2,head] / auth v8693 snaprealm=0x55cb6b93a780 f(v0 m2017-03-14 13:55:24.824023 3=1+2) n(v1931 rc2017-03-26 22:51:51.491696 b423170014153 143351=130809+12542)/n(v0 1=0+1) (inest lock dirty) (iversion lock) caps={244165=pAsLsXsFs/-@0,264103=pAsLsXsFs/-@0} | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3c628]
-164> 2017-03-29 12:11:06.298275 7f5e1b151700 10 mds.0.locker simple_eval (ilink sync) on [inode 1 [...2,head] / auth v8693 snaprealm=0x55cb6b93a780 f(v0 m2017-03-14 13:55:24.824023 3=1+2) n(v1931 rc2017-03-26 22:51:51.491696 b423170014153 143351=130809+12542)/n(v0 1=0+1) (inest lock dirty) (iversion lock) caps={244165=pAsLsXsFs/-@0,264103=pAsLsXsFs/-@0} | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3c628]
-163> 2017-03-29 12:11:06.298287 7f5e1b151700 10 mds.0.locker simple_eval (ixattr sync) on [inode 1 [...2,head] / auth v8693 snaprealm=0x55cb6b93a780 f(v0 m2017-03-14 13:55:24.824023 3=1+2) n(v1931 rc2017-03-26 22:51:51.491696 b423170014153 143351=130809+12542)/n(v0 1=0+1) (inest lock dirty) (iversion lock) caps={244165=pAsLsXsFs/-@0,264103=pAsLsXsFs/-@0} | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3c628]
-162> 2017-03-29 12:11:06.298298 7f5e1b151700 10 mds.0.locker eval done
-161> 2017-03-29 12:11:06.298300 7f5e1b151700 7 mds.0.locker issue_caps allowed=pAsLsXsFscr, xlocker allowed=pAsLsXsFscr on [inode 1 [...2,head] / auth v8693 snaprealm=0x55cb6b93a780 f(v0 m2017-03-14 13:55:24.824023 3=1+2) n(v1931 rc2017-03-26 22:51:51.491696 b423170014153 143351=130809+12542)/n(v0 1=0+1) (inest lock dirty) (iversion lock) caps={244165=pAsLsXsFs/-@0,264103=pAsLsXsFs/-@0} | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3c628]
-160> 2017-03-29 12:11:06.298313 7f5e1b151700 20 mds.0.locker client.244165 pending pAsLsXsFs allowed pAsLsXsFscr wanted -
-159> 2017-03-29 12:11:06.298315 7f5e1b151700 20 mds.0.locker client.264103 pending pAsLsXsFs allowed pAsLsXsFscr wanted -
-158> 2017-03-29 12:11:06.298317 7f5e1b151700 10 mds.0.locker eval 2496 [inode 100000003fe [...2,head] /orbit-raid/ auth v242413 f(v0 m2017-03-26 22:50:12.850318 11=1+10) n(v2802 rc2017-03-26 23:11:13.938803 b22867205308 458=126+332) (inest lock dirty) (iversion lock) caps={264103=pAsLsXsFs/-@0} | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3f140]
-157> 2017-03-29 12:11:06.298326 7f5e1b151700 10 mds.0.locker eval set loner to client.264103
-156> 2017-03-29 12:11:06.298328 7f5e1b151700 7 mds.0.locker file_eval wanted= loner_wanted= other_wanted= filelock=(ifile sync) on [inode 100000003fe [...2,head] /orbit-raid/ auth v242413 f(v0 m2017-03-26 22:50:12.850318 11=1+10) n(v2802 rc2017-03-26 23:11:13.938803 b22867205308 458=126+332) (inest lock dirty) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3f140]
-155> 2017-03-29 12:11:06.298336 7f5e1b151700 7 mds.0.locker file_eval stable, bump to loner (ifile sync) on [inode 100000003fe [...2,head] /orbit-raid/ auth v242413 f(v0 m2017-03-26 22:50:12.850318 11=1+10) n(v2802 rc2017-03-26 23:11:13.938803 b22867205308 458=126+332) (inest lock dirty) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3f140]
-154> 2017-03-29 12:11:06.298344 7f5e1b151700 7 mds.0.locker file_excl (ifile sync) on [inode 100000003fe [...2,head] /orbit-raid/ auth v242413 f(v0 m2017-03-26 22:50:12.850318 11=1+10) n(v2802 rc2017-03-26 23:11:13.938803 b22867205308 458=126+332) (inest lock dirty) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3f140]
-153> 2017-03-29 12:11:06.298354 7f5e1b151700 10 mds.0.locker simple_eval (iauth sync) on [inode 100000003fe [...2,head] /orbit-raid/ auth v242413 f(v0 m2017-03-26 22:50:12.850318 11=1+10) n(v2802 rc2017-03-26 23:11:13.938803 b22867205308 458=126+332) (inest lock dirty) (ifile excl) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3f140]
-152> 2017-03-29 12:11:06.298366 7f5e1b151700 10 mds.0.locker simple_eval (ilink sync) on [inode 100000003fe [...2,head] /orbit-raid/ auth v242413 f(v0 m2017-03-26 22:50:12.850318 11=1+10) n(v2802 rc2017-03-26 23:11:13.938803 b22867205308 458=126+332) (inest lock dirty) (ifile excl) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3f140]
-151> 2017-03-29 12:11:06.298376 7f5e1b151700 10 mds.0.locker simple_eval (ixattr sync) on [inode 100000003fe [...2,head] /orbit-raid/ auth v242413 f(v0 m2017-03-26 22:50:12.850318 11=1+10) n(v2802 rc2017-03-26 23:11:13.938803 b22867205308 458=126+332) (inest lock dirty) (ifile excl) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3f140]
-150> 2017-03-29 12:11:06.298387 7f5e1b151700 10 mds.0.locker scatter_eval (inest lock dirty) on [inode 100000003fe [...2,head] /orbit-raid/ auth v242413 f(v0 m2017-03-26 22:50:12.850318 11=1+10) n(v2802 rc2017-03-26 23:11:13.938803 b22867205308 458=126+332) (inest lock dirty) (ifile excl) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3f140]
-149> 2017-03-29 12:11:06.298397 7f5e1b151700 10 mds.0.locker simple_eval (iflock sync) on [inode 100000003fe [...2,head] /orbit-raid/ auth v242413 f(v0 m2017-03-26 22:50:12.850318 11=1+10) n(v2802 rc2017-03-26 23:11:13.938803 b22867205308 458=126+332) (inest lock dirty) (ifile excl) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3f140]
-148> 2017-03-29 12:11:06.298407 7f5e1b151700 10 mds.0.locker simple_eval (ipolicy sync) on [inode 100000003fe [...2,head] /orbit-raid/ auth v242413 f(v0 m2017-03-26 22:50:12.850318 11=1+10) n(v2802 rc2017-03-26 23:11:13.938803 b22867205308 458=126+332) (inest lock dirty) (ifile excl) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3f140]
-147> 2017-03-29 12:11:06.298416 7f5e1b151700 7 mds.0.locker issue_caps loner client.264103 allowed=pAsLsXsFsxcrwbl, xlocker allowed=pAsLsXsFsxcrwbl, others allowed=pAsLsXs on [inode 100000003fe [...2,head] /orbit-raid/ auth v242413 f(v0 m2017-03-26 22:50:12.850318 11=1+10) n(v2802 rc2017-03-26 23:11:13.938803 b22867205308 458=126+332) (inest lock dirty) (ifile excl) (iversion lock) caps={264103=pAsLsXsFs/-@0},l=264103 | dirtyscattered=1 dirfrag=1 caps=1 dirty=1 0x55cb6ba3f140]
-146> 2017-03-29 12:11:06.298427 7f5e1b151700 20 mds.0.locker client.264103 pending pAsLsXsFs allowed pAsLsXsFsxcrwbl wanted -
-145> 2017-03-29 12:11:06.298429 7f5e1b151700 10 mds.0.locker eval done
-144> 2017-03-29 12:11:06.298432 7f5e1b151700 10 MDSInternalContextBase::complete: 18C_MDS_RetryMessage
-143> 2017-03-29 12:11:06.298436 7f5e1b151700 4 mds.0.server handle_client_request client_request(client.244165:37317 getattr pAsLsXsFs #1 2017-03-26 23:26:06.214233 RETRY=28) v3
-142> 2017-03-29 12:11:06.298442 7f5e1b151700 5 mds.0.server waiting for root
-141> 2017-03-29 12:11:06.298446 7f5e1b151700 10 MDSInternalContextBase::complete: 18C_MDS_RetryMessage
-140> 2017-03-29 12:11:06.298447 7f5e1b151700 4 mds.0.server handle_client_request client_request(client.244165:37318 getattr pAsLsXsFs #1 2017-03-27 06:25:04.405806 RETRY=3) v3
-139> 2017-03-29 12:11:06.298451 7f5e1b151700 5 mds.0.server waiting for root
-138> 2017-03-29 12:11:06.298452 7f5e1b151700 10 MDSInternalContextBase::complete: 18C_MDS_RetryMessage
-137> 2017-03-29 12:11:06.298453 7f5e1b151700 4 mds.0.server handle_client_request client_request(client.244165:37319 getattr pAsLsXsFs #1 2017-03-28 05:31:57.345274 RETRY=3) v3
-136> 2017-03-29 12:11:06.298456 7f5e1b151700 5 mds.0.server waiting for root
-135> 2017-03-29 12:11:06.298458 7f5e1b151700 10 MDSInternalContextBase::complete: 18C_MDS_RetryMessage
-134> 2017-03-29 12:11:06.298459 7f5e1b151700 20 mds.0.server get_session have 0x55cb6b987800 client.244165 10.12.0.33:0/3931259975 state open
-133> 2017-03-29 12:11:06.298464 7f5e1b151700 3 mds.0.server handle_client_session client_session(request_renewcaps seq 30629) v1 from client.244165
-132> 2017-03-29 12:11:06.298467 7f5e1b151700 10 mds.0.sessionmap touch_session s=0x55cb6b987800 name=client.244165
-131> 2017-03-29 12:11:06.298471 7f5e1b151700 1 -- 10.12.0.33:6818/756857066 --> 10.12.0.33:0/3931259975 -- client_session(renewcaps seq 30629) v2 -- ?+0 0x55cb6c72b3c0 con 0x55cb6b982780
-130> 2017-03-29 12:11:06.298481 7f5e1b151700 10 MDSInternalContextBase::complete: 18C_MDS_RetryMessage
-129> 2017-03-29 12:11:06.298484 7f5e1b151700 4 mds.0.server handle_client_request client_request(client.264103:40830 rmdir #10000000535/.bh_meta 2017-03-26 23:11:14.287827 RETRY=42) v3
-128> 2017-03-29 12:11:06.298493 7f5e1b151700 5 mds.0.server waiting for root
-127> 2017-03-29 12:11:06.298497 7f5e1b151700 10 MDSInternalContextBase::complete: 18C_MDS_RetryMessage
-126> 2017-03-29 12:11:06.298500 7f5e1b151700 4 mds.0.server handle_client_request client_request(client.264103:40831 getattr pAsLsXsFs #1 2017-03-26 23:26:06.212140 RETRY=28) v3
-125> 2017-03-29 12:11:06.298506 7f5e1b151700 5 mds.0.server waiting for root
-124> 2017-03-29 12:11:06.298509 7f5e1b151700 10 MDSInternalContextBase::complete: 18C_MDS_RetryMessage
-123> 2017-03-29 12:11:06.298511 7f5e1b151700 4 mds.0.server handle_client_request client_request(client.264103:40832 getattr pAsLsXsFs #1 2017-03-26 23:37:20.674220 RETRY=3) v3
-122> 2017-03-29 12:11:06.298517 7f5e1b151700 5 mds.0.server waiting for root
-121> 2017-03-29 12:11:06.298520 7f5e1b151700 10 MDSInternalContextBase::complete: 18C_MDS_RetryMessage
-120> 2017-03-29 12:11:06.298522 7f5e1b151700 20 mds.0.server get_session have 0x55cb6b987c00 client.264103 10.12.0.30:0/3952805459 state open
-119> 2017-03-29 12:11:06.298528 7f5e1b151700 3 mds.0.server handle_client_session client_session(request_renewcaps seq 30624) v1 from client.264103
-118> 2017-03-29 12:11:06.298532 7f5e1b151700 10 mds.0.sessionmap touch_session s=0x55cb6b987c00 name=client.264103
-117> 2017-03-29 12:11:06.298535 7f5e1b151700 1 -- 10.12.0.33:6818/756857066 --> 10.12.0.30:0/3952805459 -- client_session(renewcaps seq 30624) v2 -- ?+0 0x55cb6c4a1cc0 con 0x55cb6c793d80
-116> 2017-03-29 12:11:06.298557 7f5e1b151700 1 mds.0.634 cluster recovered.
-115> 2017-03-29 12:11:06.298568 7f5e1b151700 10 mds.0.bal check_targets have need want
-114> 2017-03-29 12:11:06.298573 7f5e1b151700 15 mds.0.bal map: i imported [dir 1 / [2,head] auth v=242416 cv=0/0 dir_auth=0 ap=0+0+1 state=1610612736 f(v0 m2017-03-14 13:55:24.824023 3=1+2) n(v1931 rc2017-03-26 23:11:13.938803 b426637777633 143351=130809+12542)/n(v1931 rc2017-03-26 22:51:51.491696 b423170014153 143350=130809+12541) hs=3+1,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 0x55cb6be5a000] from 0
-113> 2017-03-29 12:11:06.298597 7f5e1b151700 15 mds.0.bal map: i imported [dir 100 ~mds0/ [2,head] auth v=143231 cv=0/0 dir_auth=0 ap=1+0+0 state=1610612864 f(v0 10=0+10) n(v2190 rc2017-03-26 23:11:13.938803 b3862119229905 46247=42702+3545)/n(v2190 rc2017-03-26 23:11:10.175137 b3862119229905 46246=42701+3545) hs=10+3,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=1 authpin=1 0x55cb6be5a308] from 0
-112> 2017-03-29 12:11:06.298619 7f5e1b151700 5 mds.0.bal rebalance done
-111> 2017-03-29 12:11:06.298624 7f5e1b151700 15 mds.0.cache show_subtrees
-110> 2017-03-29 12:11:06.298630 7f5e1b151700 10 mds.0.cache |__ 0 auth [dir 100 ~mds0/ [2,head] auth v=143231 cv=0/0 dir_auth=0 ap=1+0+0 state=1610612864 f(v0 10=0+10) n(v2190 rc2017-03-26 23:11:13.938803 b3862119229905 46247=42702+3545)/n(v2190 rc2017-03-26 23:11:10.175137 b3862119229905 46246=42701+3545) hs=10+3,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=1 authpin=1 0x55cb6be5a308]
-109> 2017-03-29 12:11:06.298645 7f5e1b151700 10 mds.0.cache |__ 0 auth [dir 1 / [2,head] auth v=242416 cv=0/0 dir_auth=0 ap=0+0+1 state=1610612736 f(v0 m2017-03-14 13:55:24.824023 3=1+2) n(v1931 rc2017-03-26 23:11:13.938803 b426637777633 143351=130809+12542)/n(v1931 rc2017-03-26 22:51:51.491696 b423170014153 143350=130809+12541) hs=3+1,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 0x55cb6be5a000]
-108> 2017-03-29 12:11:06.298664 7f5e1b151700 4 mds.0.634 set_osd_epoch_barrier: epoch=1938
-107> 2017-03-29 12:11:06.298680 7f5e1b151700 1 -- 10.12.0.33:6818/756857066 <== mon.2 10.12.0.33:6789/0 19 ==== mdsbeacon(324131/neutron up:active seq 4 v637) v7 ==== 132+0+0 (2415496658 0 0) 0x55cb6ba21f80 con 0x55cb6b982300
-106> 2017-03-29 12:11:06.298689 7f5e1b151700 10 mds.beacon.neutron handle_mds_beacon up:active seq 4 rtt 1.307470
-105> 2017-03-29 12:11:06.298693 7f5e1b151700 1 -- 10.12.0.33:6818/756857066 <== mds.0 10.12.0.33:6818/756857066 0 ==== mds_table_request(snaptable server_ready) v1 ==== 0+0+0 (0 0 0) 0x55cb6c4a1840 con 0x55cb6b982000
-104> 2017-03-29 12:11:06.298679 7f5e16746700 10 mds.0.cache.dir(100) _fetched header 258 bytes 10 keys for [dir 100 ~mds0/ [2,head] auth v=143231 cv=0/0 dir_auth=0 ap=1+0+0 state=1610612864 f(v0 10=0+10) n(v2190 rc2017-03-26 23:11:13.938803 b3862119229905 46247=42702+3545)/n(v2190 rc2017-03-26 23:11:10.175137 b3862119229905 46246=42701+3545) hs=10+3,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=1 authpin=1 0x55cb6be5a308] want_dn=
-103> 2017-03-29 12:11:06.298702 7f5e16746700 10 mds.0.cache.dir(100) _fetched version 143095
-102> 2017-03-29 12:11:06.298704 7f5e16746700 10 mds.0.cache.snaprealm(100 seq 1 0x55cb6b93b680) have_past_parents_open [1,head]
-101> 2017-03-29 12:11:06.298707 7f5e16746700 10 mds.0.cache.snaprealm(100 seq 1 0x55cb6b93b680) build_snap_set [0,head] on snaprealm(100 seq 1 lc 0 cr 0 cps 1 snaps={} 0x55cb6b93b680)
-100> 2017-03-29 12:11:06.298711 7f5e16746700 10 mds.0.cache.snaprealm(100 seq 1 0x55cb6b93b680) build_snap_trace my_snaps []
-99> 2017-03-29 12:11:06.298717 7f5e16746700 10 mds.0.cache.snaprealm(100 seq 1 0x55cb6b93b680) check_cache rebuilt seq 1 cached_seq 1 cached_last_created 0 cached_last_destroyed 0)
-98> 2017-03-29 12:11:06.298722 7f5e16746700 20 mds.0.cache.dir(100) _fetched pos 9 marker 'I' dname 'stray9 [2,head]
-97> 2017-03-29 12:11:06.298725 7f5e16746700 20 mds.0.cache.dir(100) lookup (head, 'stray9')
-96> 2017-03-29 12:11:06.298729 7f5e16746700 20 mds.0.cache.dir(100) hit -> (stray9,head)
-95> 2017-03-29 12:11:06.298738 7f5e16746700 12 mds.0.cache.dir(100) _fetched had dentry [dentry #100/stray9 [2,head] auth (dversion lock) v=0 inode=0x55cb6c59c000 | inodepin=1 0x55cb6d281960]
-94> 2017-03-29 12:11:06.298753 7f5e16746700 20 mds.0.cache.dir(100) _fetched pos 8 marker 'I' dname 'stray8 [2,head]
-93> 2017-03-29 12:11:06.298756 7f5e16746700 20 mds.0.cache.dir(100) lookup (head, 'stray8')
-92> 2017-03-29 12:11:06.298759 7f5e16746700 20 mds.0.cache.dir(100) hit -> (stray8,head)
-91> 2017-03-29 12:11:06.298765 7f5e16746700 12 mds.0.cache.dir(100) _fetched had dentry [dentry #100/stray8 [2,head] auth (dversion lock) v=0 inode=0x55cb6ba47ed8 | inodepin=1 0x55cb6bce15e0]
-90> 2017-03-29 12:11:06.298772 7f5e16746700 20 mds.0.cache.dir(100) _fetched pos 7 marker 'I' dname 'stray7 [2,head]
-89> 2017-03-29 12:11:06.298774 7f5e16746700 20 mds.0.cache.dir(100) lookup (head, 'stray7')
-88> 2017-03-29 12:11:06.298777 7f5e16746700 20 mds.0.cache.dir(100) hit -> (stray7,head)
-87> 2017-03-29 12:11:06.298780 7f5e16746700 12 mds.0.cache.dir(100) _fetched had dentry [dentry #100/stray7 [2,head] auth (dversion lock) v=0 inode=0x55cb6ba478b0 | inodepin=1 0x55cb6bba6a60]
-86> 2017-03-29 12:11:06.298784 7f5e16746700 20 mds.0.cache.dir(100) _fetched pos 6 marker 'I' dname 'stray6 [2,head]
-85> 2017-03-29 12:11:06.298785 7f5e16746700 20 mds.0.cache.dir(100) lookup (head, 'stray6')
-84> 2017-03-29 12:11:06.298787 7f5e16746700 20 mds.0.cache.dir(100) hit -> (stray6,head)
-83> 2017-03-29 12:11:06.298799 7f5e16746700 12 mds.0.cache.dir(100) _fetched had dentry [dentry #100/stray6 [2,head] auth (dversion lock) v=0 inode=0x55cb6ba46c60 | inodepin=1 0x55cb6ca1d7b0]
-82> 2017-03-29 12:11:06.298804 7f5e16746700 20 mds.0.cache.dir(100) _fetched pos 5 marker 'I' dname 'stray5 [2,head]
-81> 2017-03-29 12:11:06.298806 7f5e16746700 20 mds.0.cache.dir(100) lookup (head, 'stray5')
-80> 2017-03-29 12:11:06.298807 7f5e16746700 20 mds.0.cache.dir(100) hit -> (stray5,head)
-79> 2017-03-29 12:11:06.298810 7f5e16746700 12 mds.0.cache.dir(100) _fetched had dentry [dentry #100/stray5 [2,head] auth (dversion lock) v=0 inode=0x55cb6ba3eb18 | inodepin=1 0x55cb6c3ed460]
-78> 2017-03-29 12:11:06.298814 7f5e16746700 20 mds.0.cache.dir(100) _fetched pos 4 marker 'I' dname 'stray4 [2,head]
-77> 2017-03-29 12:11:06.298815 7f5e16746700 20 mds.0.cache.dir(100) lookup (head, 'stray4')
-76> 2017-03-29 12:11:06.298819 7f5e16746700 0 mds.0.cache.dir(100) _fetched badness: got (but i already had) [inode 604 [...2,head] ~mds0/stray4/ auth v142790 f(v7 m2017-03-15 17:26:48.614897 7878=7227+651) n(v62 rc2017-03-15 17:26:48.614897 b215967112272 7870=7222+648) (inest lock dirty) (ifile lock dirty) (iversion lock) | dirtyscattered=2 dirfrag=1 0x55cb6ba3e4f0] mode 16704 mtime 2017-03-15 17:26:48.614897
-75> 2017-03-29 12:11:06.298850 7f5e16746700 -1 log_channel(cluster) log [ERR] : loaded dup inode 604 [2,head] v142790 at ~mds0/stray4, but inode 604.head v142790 already exists at ~mds0/stray4
-74> 2017-03-29 12:11:06.298856 7f5e16746700 20 mds.0.cache.dir(100) _fetched pos 3 marker 'I' dname 'stray3 [2,head]
-73> 2017-03-29 12:11:06.298857 7f5e16746700 20 mds.0.cache.dir(100) lookup (head, 'stray3')
-72> 2017-03-29 12:11:06.298859 7f5e16746700 20 mds.0.cache.dir(100) hit -> (stray3,head)
-71> 2017-03-29 12:11:06.298862 7f5e16746700 12 mds.0.cache.dir(100) _fetched had dentry [dentry #100/stray3 [2,head] auth (dversion lock) v=143230 inode=0x55cb6ba3dec8 | inodepin=1 dirty=1 0x55cb6d376e80]
-70> 2017-03-29 12:11:06.298867 7f5e16746700 20 mds.0.cache.dir(100) _fetched pos 2 marker 'I' dname 'stray2 [2,head]
-69> 2017-03-29 12:11:06.298868 7f5e16746700 20 mds.0.cache.dir(100) lookup (head, 'stray2')
-68> 2017-03-29 12:11:06.298870 7f5e16746700 20 mds.0.cache.dir(100) miss -> (stray0,head)
-67> 2017-03-29 12:11:06.298874 7f5e16746700 0 mds.0.cache.dir(100) _fetched badness: got (but i already had) [inode 602 [...2,head] ~mds0/stray2/ auth v143073 f(v6 m2017-03-17 15:15:19.995740 8994=8215+779) n(v69 rc2017-03-17 15:15:19.995740 b930314053756 8983=8209+774) (inest lock dirty) (ifile lock dirty) (iversion lock) | dirtyscattered=2 dirfrag=1 0x55cb6ba3d8a0] mode 16704 mtime 2017-03-17 15:15:19.995740
-66> 2017-03-29 12:11:06.298891 7f5e16746700 -1 log_channel(cluster) log [ERR] : loaded dup inode 602 [2,head] v143073 at ~mds0/stray2, but inode 602.head v143073 already exists at ~mds0/stray2
-65> 2017-03-29 12:11:06.298895 7f5e16746700 20 mds.0.cache.dir(100) _fetched pos 1 marker 'I' dname 'stray1 [2,head]
-64> 2017-03-29 12:11:06.298898 7f5e16746700 20 mds.0.cache.dir(100) lookup (head, 'stray1')
-63> 2017-03-29 12:11:06.298900 7f5e16746700 20 mds.0.cache.dir(100) miss -> (stray8,head)
-62> 2017-03-29 12:11:06.298905 7f5e16746700 0 mds.0.cache.dir(100) _fetched badness: got (but i already had) [inode 601 [...2,head] ~mds0/stray1/ auth v143068 f(v8 m2017-03-15 21:37:11.199303 8113=7114+999) n(v62 rc2017-03-15 21:37:11.199303 b595062636506 8089=7101+988) (inest lock dirty) (ifile lock dirty) (iversion lock) | dirtyscattered=2 dirfrag=1 0x55cb6ba3d278] mode 16704 mtime 2017-03-15 21:37:11.199303
-61> 2017-03-29 12:11:06.298928 7f5e16746700 -1 log_channel(cluster) log [ERR] : loaded dup inode 601 [2,head] v143068 at ~mds0/stray1, but inode 601.head v143068 already exists at ~mds0/stray1
-60> 2017-03-29 12:11:06.298932 7f5e16746700 20 mds.0.cache.dir(100) _fetched pos 0 marker 'I' dname 'stray0 [2,head]
-59> 2017-03-29 12:11:06.298934 7f5e16746700 20 mds.0.cache.dir(100) lookup (head, 'stray0')
-58> 2017-03-29 12:11:06.298935 7f5e16746700 20 mds.0.cache.dir(100) hit -> (stray0,head)
-57> 2017-03-29 12:11:06.298938 7f5e16746700 12 mds.0.cache.dir(100) _fetched had dentry [dentry #100/stray0 [2,head] auth (dversion lock) v=143126 inode=0x55cb6ba3cc50 | inodepin=1 dirty=1 0x55cb6c1f0380]
-56> 2017-03-29 12:11:06.298944 7f5e16746700 10 mds.0.cache.dir(100) auth_unpin by 0x55cb6be5a308 on [dir 100 ~mds0/ [2,head] auth v=143231 cv=0/0 dir_auth=0 state=1610612738|complete f(v0 10=0+10) n(v2190 rc2017-03-26 23:11:13.938803 b3862119229905 46247=42702+3545)/n(v2190 rc2017-03-26 23:11:10.175137 b3862119229905 46246=42701+3545) hs=10+3,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=1 authpin=0 0x55cb6be5a308] count now 0 + 0
-55> 2017-03-29 12:11:06.298957 7f5e16746700 11 mds.0.cache.dir(100) finish_waiting mask 2 result 0 on [dir 100 ~mds0/ [2,head] auth v=143231 cv=0/0 dir_auth=0 state=1610612738|complete f(v0 10=0+10) n(v2190 rc2017-03-26 23:11:13.938803 b3862119229905 46247=42702+3545)/n(v2190 rc2017-03-26 23:11:10.175137 b3862119229905 46246=42701+3545) hs=10+3,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=1 authpin=0 0x55cb6be5a308]
-54> 2017-03-29 12:11:06.298989 7f5e16f47700 7 mds.0.634 mds has 1 queued contexts
-53> 2017-03-29 12:11:06.298996 7f5e16f47700 10 mds.0.634 0x55cb6b9d8b00
-52> 2017-03-29 12:11:06.298998 7f5e16f47700 10 mds.0.634 finish 0x55cb6b9d8b00
-51> 2017-03-29 12:11:06.299002 7f5e16f47700 10 MDSInternalContextBase::complete: 19C_MDS_RetryOpenRoot
-50> 2017-03-29 12:11:06.299004 7f5e16f47700 10 mds.0.cache open_root
-49> 2017-03-29 12:11:06.299006 7f5e16f47700 10 mds.0.cache.dir(1) fetch on [dir 1 / [2,head] auth v=242416 cv=0/0 dir_auth=0 ap=0+0+1 state=1610612736 f(v0 m2017-03-14 13:55:24.824023 3=1+2) n(v1931 rc2017-03-26 23:11:13.938803 b426637777633 143351=130809+12542)/n(v1931 rc2017-03-26 22:51:51.491696 b423170014153 143350=130809+12541) hs=3+1,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 0x55cb6be5a000]
-48> 2017-03-29 12:11:06.299020 7f5e16f47700 10 mds.0.cache.dir(1) auth_pin by 0x55cb6be5a000 on [dir 1 / [2,head] auth v=242416 cv=0/0 dir_auth=0 ap=1+0+1 state=1610612736 f(v0 m2017-03-14 13:55:24.824023 3=1+2) n(v1931 rc2017-03-26 23:11:13.938803 b426637777633 143351=130809+12542)/n(v1931 rc2017-03-26 22:51:51.491696 b423170014153 143350=130809+12541) hs=3+1,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=1 authpin=1 0x55cb6be5a000] count now 1 + 1
-47> 2017-03-29 12:11:06.299072 7f5e16f47700 1 -- 10.12.0.33:6818/756857066 --> 10.12.0.33:6801/4685 -- osd_op(mds.0.634:24 5.6b2cdaff 1.00000000 [omap-get-header 0~0,omap-get-vals 0~16,getxattr parent] snapc 0=[] ack+read+known_if_redirected+full_force e1938) v7 -- ?+0 0x55cb6b977740 con 0x55cb6b983800
-46> 2017-03-29 12:11:06.299110 7f5e1b151700 10 mds.0.tableclient(snaptable) handle_request mds_table_request(snaptable server_ready) v1
-45> 2017-03-29 12:11:06.299674 7f5e1493d700 1 -- 10.12.0.33:6818/756857066 <== osd.2 10.12.0.33:6801/4685 8 ==== osd_op_reply(24 1.00000000 [omap-get-header 0~0,omap-get-vals 0~16,getxattr (30)] v0'0 uv2403 ondisk = 0) v7 ==== 214+0+2126 (895752340 0 1478335496) 0x55cb6b976d80 con 0x55cb6b983800
-44> 2017-03-29 12:11:06.299728 7f5e16746700 10 MDSIOContextBase::complete: 21C_IO_Dir_OMAP_Fetched
-43> 2017-03-29 12:11:06.299736 7f5e16746700 10 mds.0.cache.dir(1) _fetched header 258 bytes 4 keys for [dir 1 / [2,head] auth v=242416 cv=0/0 dir_auth=0 ap=1+0+1 state=1610612864 f(v0 m2017-03-14 13:55:24.824023 3=1+2) n(v1931 rc2017-03-26 23:11:13.938803 b426637777633 143351=130809+12542)/n(v1931 rc2017-03-26 22:51:51.491696 b423170014153 143350=130809+12541) hs=3+1,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=1 authpin=1 0x55cb6be5a000] want_dn=
-42> 2017-03-29 12:11:06.299752 7f5e16746700 10 mds.0.cache.dir(1) _fetched version 242099
-41> 2017-03-29 12:11:06.299755 7f5e16746700 10 mds.0.cache.snaprealm(1 seq 1 0x55cb6b93a780) have_past_parents_open [1,head]
-40> 2017-03-29 12:11:06.299758 7f5e16746700 20 mds.0.cache.dir(1) _fetched pos 3 marker 'I' dname 'sumpan [2,head]
-39> 2017-03-29 12:11:06.299759 7f5e16746700 20 mds.0.cache.dir(1) lookup (head, 'sumpan')
-38> 2017-03-29 12:11:06.299761 7f5e16746700 20 mds.0.cache.dir(1) hit -> (sumpan,head)
-37> 2017-03-29 12:11:06.299767 7f5e16746700 12 mds.0.cache.dir(1) _fetched had dentry [dentry #1/sumpan [2,head] auth (dversion lock) v=242415 ap=0+1 inode=0x55cb6ba46638 | inodepin=1 dirty=1 0x55cb6c1ef7b0]
-36> 2017-03-29 12:11:06.299777 7f5e16746700 20 mds.0.cache.dir(1) _fetched pos 2 marker 'I' dname 'orbit-raid [2,head]
-35> 2017-03-29 12:11:06.299779 7f5e16746700 20 mds.0.cache.dir(1) lookup (head, 'orbit-raid')
-34> 2017-03-29 12:11:06.299782 7f5e16746700 20 mds.0.cache.dir(1) hit -> (orbit-raid,head)
-33> 2017-03-29 12:11:06.299787 7f5e16746700 12 mds.0.cache.dir(1) _fetched had dentry [dentry #1/orbit-raid [2,head] auth (dversion lock) v=242413 inode=0x55cb6ba3f140 | inodepin=1 dirty=1 0x55cb6c1f01d0]
-32> 2017-03-29 12:11:06.299794 7f5e16746700 20 mds.0.cache.dir(1) _fetched pos 1 marker 'I' dname 'lost+found [head,head]
-31> 2017-03-29 12:11:06.299797 7f5e16746700 20 mds.0.cache.dir(1) lookup (head, 'lost+found')
-30> 2017-03-29 12:11:06.299799 7f5e16746700 20 mds.0.cache.dir(1) miss -> (ceph-deploy-ceph.log,head)
-29> 2017-03-29 12:11:06.299811 7f5e16746700 12 mds.0.cache.dir(1) add_primary_dentry [dentry #1/lost+found [head,head] auth (dversion lock) pv=0 v=242416 inode=0x55cb6c5a9778 0x55cb6d37fa90]
-28> 2017-03-29 12:11:06.299817 7f5e16746700 12 mds.0.cache.dir(1) _fetched got [dentry #1/lost+found [head,head] auth (dversion lock) pv=0 v=242416 inode=0x55cb6c5a9778 0x55cb6d37fa90] [inode 4 [...head,head] /lost+found/ auth v242021 f(v0 m2017-03-17 15:08:53.916517) n(v0 rc2017-03-17 15:08:53.916517) (iversion lock) 0x55cb6c5a9778]
-27> 2017-03-29 12:11:06.299829 7f5e16746700 20 mds.0.cache.dir(1) _fetched pos 0 marker 'I' dname 'ceph-deploy-ceph.log [2,head]
-26> 2017-03-29 12:11:06.299832 7f5e16746700 20 mds.0.cache.dir(1) lookup (head, 'ceph-deploy-ceph.log')
-25> 2017-03-29 12:11:06.299834 7f5e16746700 20 mds.0.cache.dir(1) hit -> (ceph-deploy-ceph.log,head)
-24> 2017-03-29 12:11:06.299838 7f5e16746700 12 mds.0.cache.dir(1) _fetched had dentry [dentry #1/ceph-deploy-ceph.log [2,head] auth (dversion lock) v=0 inode=0x55cb6ba47288 0x55cb6bb44ab0]
-23> 2017-03-29 12:11:06.299842 7f5e16746700 10 mds.0.cache.dir(1) auth_unpin by 0x55cb6be5a000 on [dir 1 / [2,head] auth v=242416 cv=0/0 dir_auth=0 ap=0+0+1 state=1610612738|complete f(v0 m2017-03-14 13:55:24.824023 3=1+2) n(v1931 rc2017-03-26 23:11:13.938803 b426637777633 143351=130809+12542)/n(v1931 rc2017-03-26 22:51:51.491696 b423170014153 143350=130809+12541) hs=4+1,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=1 authpin=0 0x55cb6be5a000] count now 0 + 1
-22> 2017-03-29 12:11:06.299853 7f5e16746700 11 mds.0.cache.dir(1) finish_waiting mask 2 result 0 on [dir 1 / [2,head] auth v=242416 cv=0/0 dir_auth=0 ap=0+0+1 state=1610612738|complete f(v0 m2017-03-14 13:55:24.824023 3=1+2) n(v1931 rc2017-03-26 23:11:13.938803 b426637777633 143351=130809+12542)/n(v1931 rc2017-03-26 22:51:51.491696 b423170014153 143350=130809+12541) hs=4+1,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=1 authpin=0 0x55cb6be5a000]
-21> 2017-03-29 12:11:06.299883 7f5e16f47700 7 mds.0.634 mds has 1 queued contexts
-20> 2017-03-29 12:11:06.299890 7f5e16f47700 10 mds.0.634 0x55cb6b8face0
-19> 2017-03-29 12:11:06.299892 7f5e16f47700 10 mds.0.634 finish 0x55cb6b8face0
-18> 2017-03-29 12:11:06.299894 7f5e16f47700 10 MDSInternalContextBase::complete: 19C_MDS_RetryOpenRoot
-17> 2017-03-29 12:11:06.299895 7f5e16f47700 10 mds.0.cache open_root
-16> 2017-03-29 12:11:06.299897 7f5e16f47700 7 mds.0.cache adjust_subtree_auth 0,-2 -> 0,-2 on [dir 100 ~mds0/ [2,head] auth v=143231 cv=0/0 dir_auth=0 state=1610612738|complete f(v0 10=0+10) n(v2190 rc2017-03-26 23:11:13.938803 b3862119229905 46247=42702+3545)/n(v2190 rc2017-03-26 23:11:10.175137 b3862119229905 46246=42701+3545) hs=10+3,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=0 authpin=0 0x55cb6be5a308]
-15> 2017-03-29 12:11:06.299911 7f5e16f47700 15 mds.0.cache show_subtrees
-14> 2017-03-29 12:11:06.299915 7f5e16f47700 10 mds.0.cache |__ 0 auth [dir 100 ~mds0/ [2,head] auth v=143231 cv=0/0 dir_auth=0 state=1610612738|complete f(v0 10=0+10) n(v2190 rc2017-03-26 23:11:13.938803 b3862119229905 46247=42702+3545)/n(v2190 rc2017-03-26 23:11:10.175137 b3862119229905 46246=42701+3545) hs=10+3,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=0 authpin=0 0x55cb6be5a308]
-13> 2017-03-29 12:11:06.299925 7f5e16f47700 10 mds.0.cache |__ 0 auth [dir 1 / [2,head] auth v=242416 cv=0/0 dir_auth=0 ap=0+0+1 state=1610612738|complete f(v0 m2017-03-14 13:55:24.824023 3=1+2) n(v1931 rc2017-03-26 23:11:13.938803 b426637777633 143351=130809+12542)/n(v1931 rc2017-03-26 22:51:51.491696 b423170014153 143350=130809+12541) hs=4+1,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=0 authpin=0 0x55cb6be5a000]
-12> 2017-03-29 12:11:06.299936 7f5e16f47700 7 mds.0.cache current root is [dir 100 ~mds0/ [2,head] auth v=143231 cv=0/0 dir_auth=0 state=1610612738|complete f(v0 10=0+10) n(v2190 rc2017-03-26 23:11:13.938803 b3862119229905 46247=42702+3545)/n(v2190 rc2017-03-26 23:11:10.175137 b3862119229905 46246=42701+3545) hs=10+3,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=0 authpin=0 0x55cb6be5a308]
-11> 2017-03-29 12:11:06.299946 7f5e16f47700 10 mds.0.cache.dir(100) setting dir_auth=0,-2 from 0,-2 on [dir 100 ~mds0/ [2,head] auth v=143231 cv=0/0 dir_auth=0 state=1610612738|complete f(v0 10=0+10) n(v2190 rc2017-03-26 23:11:13.938803 b3862119229905 46247=42702+3545)/n(v2190 rc2017-03-26 23:11:10.175137 b3862119229905 46246=42701+3545) hs=10+3,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=0 authpin=0 0x55cb6be5a308]
-10> 2017-03-29 12:11:06.299958 7f5e16f47700 15 mds.0.cache show_subtrees
-9> 2017-03-29 12:11:06.299961 7f5e16f47700 10 mds.0.cache |__ 0 auth [dir 100 ~mds0/ [2,head] auth v=143231 cv=0/0 dir_auth=0 state=1610612738|complete f(v0 10=0+10) n(v2190 rc2017-03-26 23:11:13.938803 b3862119229905 46247=42702+3545)/n(v2190 rc2017-03-26 23:11:10.175137 b3862119229905 46246=42701+3545) hs=10+3,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=0 authpin=0 0x55cb6be5a308]
-8> 2017-03-29 12:11:06.299971 7f5e16f47700 10 mds.0.cache |__ 0 auth [dir 1 / [2,head] auth v=242416 cv=0/0 dir_auth=0 ap=0+0+1 state=1610612738|complete f(v0 m2017-03-14 13:55:24.824023 3=1+2) n(v1931 rc2017-03-26 23:11:13.938803 b426637777633 143351=130809+12542)/n(v1931 rc2017-03-26 22:51:51.491696 b423170014153 143350=130809+12541) hs=4+1,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=0 authpin=0 0x55cb6be5a000]
-7> 2017-03-29 12:11:06.299982 7f5e16f47700 10 mds.0.cache populate_mydir [dir 100 ~mds0/ [2,head] auth v=143231 cv=0/0 dir_auth=0 state=1610612738|complete f(v0 10=0+10) n(v2190 rc2017-03-26 23:11:13.938803 b3862119229905 46247=42702+3545)/n(v2190 rc2017-03-26 23:11:10.175137 b3862119229905 46246=42701+3545) hs=10+3,ss=0+0 dirty=3 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=0 authpin=0 0x55cb6be5a308]
-6> 2017-03-29 12:11:06.299995 7f5e16f47700 20 mds.0.cache.dir(100) lookup (head, 'stray0')
-5> 2017-03-29 12:11:06.299998 7f5e16f47700 20 mds.0.cache.dir(100) hit -> (stray0,head)
-4> 2017-03-29 12:11:06.300000 7f5e16f47700 20 mds.0.cache stray num 0 is [inode 600 [...2,head] ~mds0/stray0/ auth v143126 f(v37 m2017-03-22 09:08:07.704490 -62=-20+-42) n(v91 rc2017-03-22 09:08:07.704490 b-317055 -75=-28+-47) (inest lock dirty) (ifile lock dirty) (iversion lock) | dirtyscattered=2 dirfrag=1 stickydirs=1 stray=1 dirty=1 0x55cb6ba3cc50]
-3> 2017-03-29 12:11:06.300015 7f5e16f47700 20 mds.0.cache.dir(100) lookup (head, 'stray1')
-2> 2017-03-29 12:11:06.300017 7f5e16f47700 20 mds.0.cache.dir(100) miss -> (stray8,head)
-1> 2017-03-29 12:11:06.300018 7f5e16f47700 0 mds.0.cache creating system inode with ino:601
0> 2017-03-29 12:11:06.301915 7f5e16f47700 -1 mds/MDCache.cc: In function 'void MDCache::add_inode(CInode*)' thread 7f5e16f47700 time 2017-03-29 12:11:06.300029
mds/MDCache.cc: 265: FAILED assert(inode_map.count(in->vino()) == 0)
ceph version 10.2.6
(656b5b63ed7c43bd014bcafd81b001959d5f089f) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x80) [0x55cb60e1bf80] 2: (()+0x2e8e75) [0x55cb60ac8e75] 3: (MDCache::create_system_inode(inodeno_t, int)+0x14d) [0x55cb60ac8fcd] 4: (MDCache::populate_mydir()+0x5d2) [0x55cb60b2bb62] 5: (MDCache::open_root()+0xde) [0x55cb60b2c05e] 6: (MDSInternalContextBase::complete(int)+0x18b) [0x55cb60c65fbb] 7: (MDSRank::_advance_queues()+0x6a7) [0x55cb60a17c97] 8: (MDSRank::ProgressThread::entry()+0x4a) [0x55cb60a181ca] 9: (()+0x76ba) [0x7f5e214156ba] 10: (clone()+0x6d) [0x7f5e1f8d682d] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. --- logging levels --- 0/ 5 none 0/ 1 lockdep 0/ 1 context 1/ 1 crush 20/20 mds 1/ 5 mds_balancer 1/ 5 mds_locker 1/ 5 mds_log 1/ 5 mds_log_expire 1/ 5 mds_migrator 0/ 1 buffer 0/ 1 timer 0/ 1 filer 0/ 1 striper 0/ 1 objecter 0/ 5 rados 0/ 5 rbd 0/ 5 rbd_mirror 0/ 5 rbd_replay 0/ 5 journaler 0/ 5 objectcacher 0/ 5 client 0/ 5 osd 0/ 5 optracker 0/ 5 objclass 1/ 3 filestore 1/ 3 journal 0/ 5 ms 1/ 5 mon 0/10 monc 1/ 5 paxos 0/ 5 tp 1/ 5 auth 1/ 5 crypto 1/ 1 finisher 1/ 5 heartbeatmap 1/ 5 perfcounter 1/ 5 rgw 1/10 civetweb 1/ 5 javaclient 1/ 5 asok 1/ 1 throttle 0/ 0 refs 1/ 5 xio 1/ 5 compressor 1/ 5 newstore 1/ 5 bluestore 1/ 5 bluefs 1/ 3 bdev 1/ 5 kstore 4/ 5 rocksdb 4/ 5 leveldb 1/ 5 kinetic 1/ 5 fuse -2/-2 (syslog threshold) -1/-1 (stderr threshold) max_recent 10000 max_new 1000 log_file /var/log/ceph/ceph-mds.neutron.log --- end dump of recent events --- 2017-03-29 12:11:06.342405 7f5e16f47700 -1 *** Caught signal (Aborted) ** in thread 7f5e16f47700 thread_name:mds_rank_progr ceph version 10.2.6 (656b5b63ed7c43bd014bcafd81b001959d5f089f) 1: (()+0x53373e) [0x55cb60d1373e] 2: (()+0x11390) [0x7f5e2141f390] 3: (gsignal()+0x38) [0x7f5e1f805428] 4: (abort()+0x16a) [0x7f5e1f80702a] 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x26b) [0x55cb60e1c16b] 6: (()+0x2e8e75) 
[0x55cb60ac8e75] 7: (MDCache::create_system_inode(inodeno_t, int)+0x14d) [0x55cb60ac8fcd] 8: (MDCache::populate_mydir()+0x5d2) [0x55cb60b2bb62] 9: (MDCache::open_root()+0xde) [0x55cb60b2c05e] 10: (MDSInternalContextBase::complete(int)+0x18b) [0x55cb60c65fbb] 11: (MDSRank::_advance_queues()+0x6a7) [0x55cb60a17c97] 12: (MDSRank::ProgressThread::entry()+0x4a) [0x55cb60a181ca] 13: (()+0x76ba) [0x7f5e214156ba] 14: (clone()+0x6d) [0x7f5e1f8d682d] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. --- begin dump of recent events --- 0> 2017-03-29 12:11:06.342405 7f5e16f47700 -1 *** Caught signal (Aborted) ** in thread 7f5e16f47700 thread_name:mds_rank_progr ceph version 10.2.6 (656b5b63ed7c43bd014bcafd81b001959d5f089f) 1: (()+0x53373e) [0x55cb60d1373e] 2: (()+0x11390) [0x7f5e2141f390] 3: (gsignal()+0x38) [0x7f5e1f805428] 4: (abort()+0x16a) [0x7f5e1f80702a] 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x26b) [0x55cb60e1c16b] 6: (()+0x2e8e75) [0x55cb60ac8e75] 7: (MDCache::create_system_inode(inodeno_t, int)+0x14d) [0x55cb60ac8fcd] 8: (MDCache::populate_mydir()+0x5d2) [0x55cb60b2bb62] 9: (MDCache::open_root()+0xde) [0x55cb60b2c05e] 10: (MDSInternalContextBase::complete(int)+0x18b) [0x55cb60c65fbb] 11: (MDSRank::_advance_queues()+0x6a7) [0x55cb60a17c97] 12: (MDSRank::ProgressThread::entry()+0x4a) [0x55cb60a181ca] 13: (()+0x76ba) [0x7f5e214156ba] 14: (clone()+0x6d) [0x7f5e1f8d682d] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
--- logging levels --- 0/ 5 none 0/ 1 lockdep 0/ 1 context 1/ 1 crush 20/20 mds 1/ 5 mds_balancer 1/ 5 mds_locker 1/ 5 mds_log 1/ 5 mds_log_expire 1/ 5 mds_migrator 0/ 1 buffer 0/ 1 timer 0/ 1 filer 0/ 1 striper 0/ 1 objecter 0/ 5 rados 0/ 5 rbd 0/ 5 rbd_mirror 0/ 5 rbd_replay 0/ 5 journaler 0/ 5 objectcacher 0/ 5 client 0/ 5 osd 0/ 5 optracker 0/ 5 objclass 1/ 3 filestore 1/ 3 journal 0/ 5 ms 1/ 5 mon 0/10 monc 1/ 5 paxos 0/ 5 tp 1/ 5 auth 1/ 5 crypto 1/ 1 finisher 1/ 5 heartbeatmap 1/ 5 perfcounter 1/ 5 rgw 1/10 civetweb 1/ 5 javaclient 1/ 5 asok 1/ 1 throttle 0/ 0 refs 1/ 5 xio 1/ 5 compressor 1/ 5 newstore 1/ 5 bluestore 1/ 5 bluefs 1/ 3 bdev 1/ 5 kstore 4/ 5 rocksdb 4/ 5 leveldb 1/ 5 kinetic 1/ 5 fuse -2/-2 (syslog threshold) -1/-1 (stderr threshold) max_recent 10000 max_new 1000 log_file /var/log/ceph/ceph-mds.neutron.log --- end dump of recent events ---
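For readers of the log above: the fatal event is `mds/MDCache.cc: 265: FAILED assert(inode_map.count(in->vino()) == 0)`. The MDS missed the lookup for `stray1`, called `create_system_inode` for ino 601, and `add_inode` found that an inode with that (ino, snap) key was already present in its cache map. The sketch below is NOT the Ceph source; it is a minimal, hypothetical illustration of the invariant that assert enforces (type names such as `MDCacheSketch` and the `std::pair`-based `vinodeno_t` are simplifications invented here):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <utility>

// Simplified stand-ins for Ceph's inodeno_t / snapid_t / vinodeno_t.
using inodeno_t  = uint64_t;
using snapid_t   = uint64_t;
using vinodeno_t = std::pair<inodeno_t, snapid_t>;  // (ino, snap) key

struct CInode {
    vinodeno_t vino_;
    vinodeno_t vino() const { return vino_; }
};

// Illustrates the invariant behind the failed assert: each (ino, snap)
// pair may appear in the cache's inode_map at most once.
struct MDCacheSketch {
    std::map<vinodeno_t, CInode*> inode_map;

    bool add_inode(CInode* in) {
        // This is the condition that fired in the log:
        //   FAILED assert(inode_map.count(in->vino()) == 0)
        // i.e. ino 601 (stray1, being recreated via create_system_inode)
        // was already present in the cache. The real MDS aborts here;
        // this sketch just reports the duplicate.
        if (inode_map.count(in->vino()) != 0)
            return false;
        inode_map[in->vino()] = in;
        return true;
    }
};
```

Under this reading, the crash means `populate_mydir` tried to create a stray directory inode that the cache had already loaded, so the duplicate-key check aborted the daemon.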