
Bug #44023 » ceph-mds.ceph0.log

Michael Sudnick, 02/06/2020 10:24 PM
2020-02-06 17:22:32.906 7f0f667151c0 0 set uid:gid to 167:167 (ceph:ceph)
2020-02-06 17:22:32.906 7f0f667151c0 0 ceph version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable), process ceph-mds, pid 2154801
2020-02-06 17:22:32.906 7f0f667151c0 0 pidfile_write: ignore empty --pid-file
2020-02-06 17:22:32.945 7f0f54265700 1 mds.ceph0 Updating MDS map to version 252661 from mon.4
2020-02-06 17:22:33.244 7f0f54265700 1 mds.ceph0 Updating MDS map to version 252662 from mon.4
2020-02-06 17:22:33.244 7f0f54265700 1 mds.ceph0 Map has assigned me to become a standby
2020-02-06 17:22:33.288 7f0f54265700 1 mds.ceph0 Updating MDS map to version 252663 from mon.4
2020-02-06 17:22:33.290 7f0f54265700 1 mds.0.252663 handle_mds_map i am now mds.0.252663
2020-02-06 17:22:33.290 7f0f54265700 1 mds.0.252663 handle_mds_map state change up:boot --> up:replay
2020-02-06 17:22:33.290 7f0f54265700 1 mds.0.252663 replay_start
2020-02-06 17:22:33.290 7f0f54265700 1 mds.0.252663 recovery set is 1
2020-02-06 17:22:33.290 7f0f54265700 1 mds.0.252663 waiting for osdmap 459132 (which blacklists prior instance)
2020-02-06 17:22:33.306 7f0f4d257700 0 mds.0.cache creating system inode with ino:0x100
2020-02-06 17:22:33.306 7f0f4d257700 0 mds.0.cache creating system inode with ino:0x1
2020-02-06 17:22:43.223 7f0f4c255700 1 mds.0.252663 Finished replaying journal
2020-02-06 17:22:43.223 7f0f4c255700 1 mds.0.252663 making mds journal writeable
2020-02-06 17:22:43.260 7f0f54265700 1 mds.ceph0 Updating MDS map to version 252664 from mon.4
2020-02-06 17:22:43.260 7f0f54265700 1 mds.0.252663 handle_mds_map i am now mds.0.252663
2020-02-06 17:22:43.260 7f0f54265700 1 mds.0.252663 handle_mds_map state change up:replay --> up:resolve
2020-02-06 17:22:43.260 7f0f54265700 1 mds.0.252663 resolve_start
2020-02-06 17:22:43.260 7f0f54265700 1 mds.0.252663 reopen_log
2020-02-06 17:22:43.260 7f0f54265700 1 mds.0.252663 recovery set is 1
2020-02-06 17:22:43.333 7f0f54265700 1 mds.0.252663 resolve_done
2020-02-06 17:22:44.312 7f0f54265700 1 mds.ceph0 Updating MDS map to version 252665 from mon.4
2020-02-06 17:22:44.312 7f0f54265700 1 mds.0.252663 handle_mds_map i am now mds.0.252663
2020-02-06 17:22:44.312 7f0f54265700 1 mds.0.252663 handle_mds_map state change up:resolve --> up:reconnect
2020-02-06 17:22:44.312 7f0f54265700 1 mds.0.252663 reconnect_start
2020-02-06 17:22:44.313 7f0f54265700 1 mds.0.252663 reconnect_done
2020-02-06 17:22:45.325 7f0f54265700 1 mds.ceph0 Updating MDS map to version 252666 from mon.4
2020-02-06 17:22:45.325 7f0f54265700 1 mds.0.252663 handle_mds_map i am now mds.0.252663
2020-02-06 17:22:45.325 7f0f54265700 1 mds.0.252663 handle_mds_map state change up:reconnect --> up:rejoin
2020-02-06 17:22:45.325 7f0f54265700 1 mds.0.252663 rejoin_start
2020-02-06 17:22:45.326 7f0f54265700 1 mds.0.252663 rejoin_joint_start
2020-02-06 17:22:45.436 7f0f4f25b700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.7/rpm/el7/BUILD/ceph-14.2.7/src/mds/MDCache.cc: In function 'void MDCache::rejoin_send_rejoins()' thread 7f0f4f25b700 time 2020-02-06 17:22:45.435620
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.7/rpm/el7/BUILD/ceph-14.2.7/src/mds/MDCache.cc: 4054: FAILED ceph_assert(auth >= 0)

ceph version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14a) [0x7f0f5d78b031]
2: (()+0x2661f9) [0x7f0f5d78b1f9]
3: (MDCache::rejoin_send_rejoins()+0x26f7) [0x55704d6a3d67]
4: (MDCache::process_imported_caps()+0x1236) [0x55704d6a5046]
5: (FunctionContext::finish(int)+0x2c) [0x55704d58a40c]
6: (Context::complete(int)+0x9) [0x55704d586e49]
7: (MDSContext::complete(int)+0x74) [0x55704d804b14]
8: (void finish_contexts<std::vector<MDSContext*, std::allocator<MDSContext*> > >(CephContext*, std::vector<MDSContext*, std::allocator<MDSContext*> >&, int)+0x7d) [0x55704d58e63d]
9: (OpenFileTable::_open_ino_finish(inodeno_t, int)+0x109) [0x55704d824c79]
10: (OpenFileTable::_prefetch_inodes()+0x250) [0x55704d823cd0]
11: (FunctionContext::finish(int)+0x2c) [0x55704d58a40c]
12: (Context::complete(int)+0x9) [0x55704d586e49]
13: (MDSContext::complete(int)+0x74) [0x55704d804b14]
14: (C_GatherBase<MDSContext, C_MDSInternalNoop>::sub_finish(MDSContext*, int)+0x117) [0x55704d5bf2f7]
15: (C_GatherBase<MDSContext, C_MDSInternalNoop>::C_GatherSub::complete(int)+0x21) [0x55704d5bf671]
16: (MDSRank::_advance_queues()+0xa4) [0x55704d596634]
17: (MDSRank::ProgressThread::entry()+0x3d) [0x55704d596cad]
18: (()+0x7e65) [0x7f0f5b638e65]
19: (clone()+0x6d) [0x7f0f5a2e688d]

2020-02-06 17:22:45.438 7f0f4f25b700 -1 *** Caught signal (Aborted) **
in thread 7f0f4f25b700 thread_name:mds_rank_progr

ceph version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable)
1: (()+0xf5f0) [0x7f0f5b6405f0]
2: (gsignal()+0x37) [0x7f0f5a21e337]
3: (abort()+0x148) [0x7f0f5a21fa28]
4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x199) [0x7f0f5d78b080]
5: (()+0x2661f9) [0x7f0f5d78b1f9]
6: (MDCache::rejoin_send_rejoins()+0x26f7) [0x55704d6a3d67]
7: (MDCache::process_imported_caps()+0x1236) [0x55704d6a5046]
8: (FunctionContext::finish(int)+0x2c) [0x55704d58a40c]
9: (Context::complete(int)+0x9) [0x55704d586e49]
10: (MDSContext::complete(int)+0x74) [0x55704d804b14]
11: (void finish_contexts<std::vector<MDSContext*, std::allocator<MDSContext*> > >(CephContext*, std::vector<MDSContext*, std::allocator<MDSContext*> >&, int)+0x7d) [0x55704d58e63d]
12: (OpenFileTable::_open_ino_finish(inodeno_t, int)+0x109) [0x55704d824c79]
13: (OpenFileTable::_prefetch_inodes()+0x250) [0x55704d823cd0]
14: (FunctionContext::finish(int)+0x2c) [0x55704d58a40c]
15: (Context::complete(int)+0x9) [0x55704d586e49]
16: (MDSContext::complete(int)+0x74) [0x55704d804b14]
17: (C_GatherBase<MDSContext, C_MDSInternalNoop>::sub_finish(MDSContext*, int)+0x117) [0x55704d5bf2f7]
18: (C_GatherBase<MDSContext, C_MDSInternalNoop>::C_GatherSub::complete(int)+0x21) [0x55704d5bf671]
19: (MDSRank::_advance_queues()+0xa4) [0x55704d596634]
20: (MDSRank::ProgressThread::entry()+0x3d) [0x55704d596cad]
21: (()+0x7e65) [0x7f0f5b638e65]
22: (clone()+0x6d) [0x7f0f5a2e688d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
-377> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command assert hook 0x55704e5cc1f0
-376> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command abort hook 0x55704e5cc1f0
-375> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command perfcounters_dump hook 0x55704e5cc1f0
-374> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command 1 hook 0x55704e5cc1f0
-373> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command perf dump hook 0x55704e5cc1f0
-372> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command perfcounters_schema hook 0x55704e5cc1f0
-371> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command perf histogram dump hook 0x55704e5cc1f0
-370> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command 2 hook 0x55704e5cc1f0
-369> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command perf schema hook 0x55704e5cc1f0
-368> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command perf histogram schema hook 0x55704e5cc1f0
-367> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command perf reset hook 0x55704e5cc1f0
-366> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command config show hook 0x55704e5cc1f0
-365> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command config help hook 0x55704e5cc1f0
-364> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command config set hook 0x55704e5cc1f0
-363> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command config unset hook 0x55704e5cc1f0
-362> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command config get hook 0x55704e5cc1f0
-361> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command config diff hook 0x55704e5cc1f0
-360> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command config diff get hook 0x55704e5cc1f0
-359> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command log flush hook 0x55704e5cc1f0
-358> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command log dump hook 0x55704e5cc1f0
-357> 2020-02-06 17:22:32.844 7f0f667151c0 5 asok(0x55704e642000) register_command log reopen hook 0x55704e5cc1f0
-356> 2020-02-06 17:22:32.845 7f0f667151c0 5 asok(0x55704e642000) register_command dump_mempools hook 0x55704e60e2c8
-355> 2020-02-06 17:22:32.854 7f0f667151c0 10 monclient: get_monmap_and_config
-354> 2020-02-06 17:22:32.893 7f0f667151c0 10 monclient: build_initial_monmap
-353> 2020-02-06 17:22:32.893 7f0f667151c0 10 monclient: monmap:
epoch 0
fsid aca834ef-5617-47fd-be18-283faba1f0b1
last_changed 2020-02-06 17:22:32.894378
created 2020-02-06 17:22:32.894378
min_mon_release 0 (unknown)
0: v1:10.0.150.0:6789/0 mon.ceph0
1: v1:10.0.151.0:6789/0 mon.ceph1
2: v1:10.0.152.0:6789/0 mon.ceph2
3: v1:10.0.153.0:6789/0 mon.ceph3
4: v1:10.0.154.0:6789/0 mon.ceph4

-352> 2020-02-06 17:22:32.893 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding auth protocol: cephx
-351> 2020-02-06 17:22:32.893 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding auth protocol: cephx
-350> 2020-02-06 17:22:32.893 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding auth protocol: cephx
-349> 2020-02-06 17:22:32.893 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: secure
-348> 2020-02-06 17:22:32.893 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: crc
-347> 2020-02-06 17:22:32.893 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: secure
-346> 2020-02-06 17:22:32.893 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: crc
-345> 2020-02-06 17:22:32.893 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: secure
-344> 2020-02-06 17:22:32.893 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: crc
-343> 2020-02-06 17:22:32.893 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: crc
-342> 2020-02-06 17:22:32.893 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: secure
-341> 2020-02-06 17:22:32.893 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: crc
-340> 2020-02-06 17:22:32.893 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: secure
-339> 2020-02-06 17:22:32.893 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: crc
-338> 2020-02-06 17:22:32.893 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: secure
-337> 2020-02-06 17:22:32.894 7f0f667151c0 2 auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-ceph0/keyring
-336> 2020-02-06 17:22:32.895 7f0f667151c0 10 monclient: init
-335> 2020-02-06 17:22:32.895 7f0f667151c0 5 AuthRegistry(0x7ffd27d397f8) adding auth protocol: cephx
-334> 2020-02-06 17:22:32.895 7f0f667151c0 5 AuthRegistry(0x7ffd27d397f8) adding auth protocol: cephx
-333> 2020-02-06 17:22:32.895 7f0f667151c0 5 AuthRegistry(0x7ffd27d397f8) adding auth protocol: cephx
-332> 2020-02-06 17:22:32.895 7f0f667151c0 5 AuthRegistry(0x7ffd27d397f8) adding con mode: secure
-331> 2020-02-06 17:22:32.895 7f0f667151c0 5 AuthRegistry(0x7ffd27d397f8) adding con mode: crc
-330> 2020-02-06 17:22:32.895 7f0f667151c0 5 AuthRegistry(0x7ffd27d397f8) adding con mode: secure
-329> 2020-02-06 17:22:32.895 7f0f667151c0 5 AuthRegistry(0x7ffd27d397f8) adding con mode: crc
-328> 2020-02-06 17:22:32.895 7f0f667151c0 5 AuthRegistry(0x7ffd27d397f8) adding con mode: secure
-327> 2020-02-06 17:22:32.895 7f0f667151c0 5 AuthRegistry(0x7ffd27d397f8) adding con mode: crc
-326> 2020-02-06 17:22:32.895 7f0f667151c0 5 AuthRegistry(0x7ffd27d397f8) adding con mode: crc
-325> 2020-02-06 17:22:32.895 7f0f667151c0 5 AuthRegistry(0x7ffd27d397f8) adding con mode: secure
-324> 2020-02-06 17:22:32.895 7f0f667151c0 5 AuthRegistry(0x7ffd27d397f8) adding con mode: crc
-323> 2020-02-06 17:22:32.895 7f0f667151c0 5 AuthRegistry(0x7ffd27d397f8) adding con mode: secure
-322> 2020-02-06 17:22:32.895 7f0f667151c0 5 AuthRegistry(0x7ffd27d397f8) adding con mode: crc
-321> 2020-02-06 17:22:32.895 7f0f667151c0 5 AuthRegistry(0x7ffd27d397f8) adding con mode: secure
-320> 2020-02-06 17:22:32.895 7f0f667151c0 2 auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-ceph0/keyring
-319> 2020-02-06 17:22:32.895 7f0f667151c0 2 auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-ceph0/keyring
-318> 2020-02-06 17:22:32.895 7f0f667151c0 10 monclient: _reopen_session rank -1
-317> 2020-02-06 17:22:32.895 7f0f667151c0 10 monclient(hunting): picked mon.ceph3 con 0x55704e64b800 addr v1:10.0.153.0:6789/0
-316> 2020-02-06 17:22:32.895 7f0f667151c0 10 monclient(hunting): picked mon.ceph0 con 0x55704f20e000 addr v1:10.0.150.0:6789/0
-315> 2020-02-06 17:22:32.895 7f0f667151c0 10 monclient(hunting): picked mon.ceph1 con 0x55704e64ac00 addr v1:10.0.151.0:6789/0
-314> 2020-02-06 17:22:32.895 7f0f667151c0 10 monclient(hunting): _renew_subs
-313> 2020-02-06 17:22:32.895 7f0f667151c0 10 monclient(hunting): authenticate will time out at 2020-02-06 17:27:32.896207
-312> 2020-02-06 17:22:32.897 7f0f564f3700 10 monclient(hunting): _init_auth method 2
-311> 2020-02-06 17:22:32.897 7f0f564f3700 10 monclient(hunting): my global_id is 302236084
-310> 2020-02-06 17:22:32.897 7f0f564f3700 10 monclient(hunting): _init_auth method 2
-309> 2020-02-06 17:22:32.897 7f0f564f3700 10 monclient(hunting): my global_id is 302243771
-308> 2020-02-06 17:22:32.897 7f0f564f3700 10 monclient(hunting): _init_auth method 2
-307> 2020-02-06 17:22:32.897 7f0f564f3700 10 monclient(hunting): my global_id is 302227443
-306> 2020-02-06 17:22:32.898 7f0f564f3700 10 monclient: _finish_hunting 0
-305> 2020-02-06 17:22:32.898 7f0f564f3700 1 monclient: found mon.ceph0
-304> 2020-02-06 17:22:32.898 7f0f564f3700 10 monclient: _send_mon_message to mon.ceph0 at v1:10.0.150.0:6789/0
-303> 2020-02-06 17:22:32.898 7f0f564f3700 10 monclient: _finish_auth 0
-302> 2020-02-06 17:22:32.898 7f0f564f3700 10 monclient: _check_auth_rotating renewing rotating keys (they expired before 2020-02-06 17:22:02.898915)
-301> 2020-02-06 17:22:32.898 7f0f564f3700 10 monclient: _send_mon_message to mon.ceph0 at v1:10.0.150.0:6789/0
-300> 2020-02-06 17:22:32.898 7f0f564f3700 10 monclient: handle_monmap mon_map magic: 0 v1
-299> 2020-02-06 17:22:32.898 7f0f564f3700 10 monclient: got monmap 32 from mon.ceph0 (according to old e32)
-298> 2020-02-06 17:22:32.898 7f0f564f3700 10 monclient: dump:
epoch 32
fsid aca834ef-5617-47fd-be18-283faba1f0b1
last_changed 2019-05-26 16:47:44.703927
created 2015-08-03 08:09:20.236054
min_mon_release 14 (nautilus)
0: [v2:10.0.154.0:3300/0,v1:10.0.154.0:6789/0] mon.ceph4
1: [v2:10.0.153.0:3300/0,v1:10.0.153.0:6789/0] mon.ceph3
2: [v2:10.0.152.0:3300/0,v1:10.0.152.0:6789/0] mon.ceph2
3: [v2:10.0.151.0:3300/0,v1:10.0.151.0:6789/0] mon.ceph1
4: [v2:10.0.150.0:3300/0,v1:10.0.150.0:6789/0] mon.ceph0

-297> 2020-02-06 17:22:32.898 7f0f564f3700 1 monclient: mon.4 has (v2) addrs [v2:10.0.150.0:3300/0,v1:10.0.150.0:6789/0] but i'm connected to v1:10.0.150.0:6789/0, reconnecting
-296> 2020-02-06 17:22:32.898 7f0f564f3700 10 monclient: _reopen_session rank -1
-295> 2020-02-06 17:22:32.898 7f0f564f3700 10 monclient(hunting): picked mon.ceph4 con 0x55704e64bc00 addr [v2:10.0.154.0:3300/0,v1:10.0.154.0:6789/0]
-294> 2020-02-06 17:22:32.898 7f0f564f3700 10 monclient(hunting): picked mon.ceph3 con 0x55704f20ec00 addr [v2:10.0.153.0:3300/0,v1:10.0.153.0:6789/0]
-293> 2020-02-06 17:22:32.898 7f0f564f3700 10 monclient(hunting): picked mon.ceph2 con 0x55704e64b400 addr [v2:10.0.152.0:3300/0,v1:10.0.152.0:6789/0]
-292> 2020-02-06 17:22:32.898 7f0f564f3700 10 monclient(hunting): start opening mon connection
-291> 2020-02-06 17:22:32.898 7f0f564f3700 10 monclient(hunting): start opening mon connection
-290> 2020-02-06 17:22:32.898 7f0f564f3700 10 monclient(hunting): start opening mon connection
-289> 2020-02-06 17:22:32.898 7f0f564f3700 10 monclient(hunting): _renew_subs
-288> 2020-02-06 17:22:32.899 7f0f56cf4700 10 monclient(hunting): get_auth_request con 0x55704f20ec00 auth_method 0
-287> 2020-02-06 17:22:32.899 7f0f56cf4700 10 monclient(hunting): get_auth_request method 2 preferred_modes [1,2]
-286> 2020-02-06 17:22:32.899 7f0f56cf4700 10 monclient(hunting): _init_auth method 2
-285> 2020-02-06 17:22:32.899 7f0f574f5700 10 monclient(hunting): get_auth_request con 0x55704e64bc00 auth_method 0
-284> 2020-02-06 17:22:32.899 7f0f574f5700 10 monclient(hunting): get_auth_request method 2 preferred_modes [1,2]
-283> 2020-02-06 17:22:32.899 7f0f574f5700 10 monclient(hunting): _init_auth method 2
-282> 2020-02-06 17:22:32.899 7f0f56cf4700 10 monclient(hunting): handle_auth_reply_more payload 9
-281> 2020-02-06 17:22:32.899 7f0f56cf4700 10 monclient(hunting): handle_auth_reply_more payload_len 9
-280> 2020-02-06 17:22:32.899 7f0f56cf4700 10 monclient(hunting): handle_auth_reply_more responding with 36 bytes
-279> 2020-02-06 17:22:32.899 7f0f57cf6700 10 monclient(hunting): get_auth_request con 0x55704e64b400 auth_method 0
-278> 2020-02-06 17:22:32.899 7f0f57cf6700 10 monclient(hunting): get_auth_request method 2 preferred_modes [1,2]
-277> 2020-02-06 17:22:32.899 7f0f57cf6700 10 monclient(hunting): _init_auth method 2
-276> 2020-02-06 17:22:32.899 7f0f574f5700 10 monclient(hunting): handle_auth_reply_more payload 9
-275> 2020-02-06 17:22:32.899 7f0f574f5700 10 monclient(hunting): handle_auth_reply_more payload_len 9
-274> 2020-02-06 17:22:32.899 7f0f574f5700 10 monclient(hunting): handle_auth_reply_more responding with 36 bytes
-273> 2020-02-06 17:22:32.900 7f0f56cf4700 10 monclient(hunting): handle_auth_done global_id 302236084 payload 386
-272> 2020-02-06 17:22:32.900 7f0f56cf4700 10 monclient: _finish_hunting 0
-271> 2020-02-06 17:22:32.900 7f0f56cf4700 1 monclient: found mon.ceph3
-270> 2020-02-06 17:22:32.900 7f0f56cf4700 10 monclient: _send_mon_message to mon.ceph3 at v2:10.0.153.0:3300/0
-269> 2020-02-06 17:22:32.900 7f0f56cf4700 10 monclient: _finish_auth 0
-268> 2020-02-06 17:22:32.900 7f0f56cf4700 10 monclient: _check_auth_rotating renewing rotating keys (they expired before 2020-02-06 17:22:02.900845)
-267> 2020-02-06 17:22:32.900 7f0f56cf4700 10 monclient: _send_mon_message to mon.ceph3 at v2:10.0.153.0:3300/0
-266> 2020-02-06 17:22:32.900 7f0f667151c0 5 monclient: authenticate success, global_id 302236084
-265> 2020-02-06 17:22:32.901 7f0f564f3700 10 monclient: handle_monmap mon_map magic: 0 v1
-264> 2020-02-06 17:22:32.901 7f0f564f3700 10 monclient: got monmap 32 from mon.ceph3 (according to old e32)
-263> 2020-02-06 17:22:32.901 7f0f564f3700 10 monclient: dump:
epoch 32
fsid aca834ef-5617-47fd-be18-283faba1f0b1
last_changed 2019-05-26 16:47:44.703927
created 2015-08-03 08:09:20.236054
min_mon_release 14 (nautilus)
0: [v2:10.0.154.0:3300/0,v1:10.0.154.0:6789/0] mon.ceph4
1: [v2:10.0.153.0:3300/0,v1:10.0.153.0:6789/0] mon.ceph3
2: [v2:10.0.152.0:3300/0,v1:10.0.152.0:6789/0] mon.ceph2
3: [v2:10.0.151.0:3300/0,v1:10.0.151.0:6789/0] mon.ceph1
4: [v2:10.0.150.0:3300/0,v1:10.0.150.0:6789/0] mon.ceph0

-262> 2020-02-06 17:22:32.901 7f0f564f3700 10 monclient: handle_config config(8 keys) v1
-261> 2020-02-06 17:22:32.901 7f0f564f3700 10 monclient: handle_monmap mon_map magic: 0 v1
-260> 2020-02-06 17:22:32.901 7f0f564f3700 10 monclient: got monmap 32 from mon.ceph3 (according to old e32)
-259> 2020-02-06 17:22:32.901 7f0f564f3700 10 monclient: dump:
epoch 32
fsid aca834ef-5617-47fd-be18-283faba1f0b1
last_changed 2019-05-26 16:47:44.703927
created 2015-08-03 08:09:20.236054
min_mon_release 14 (nautilus)
0: [v2:10.0.154.0:3300/0,v1:10.0.154.0:6789/0] mon.ceph4
1: [v2:10.0.153.0:3300/0,v1:10.0.153.0:6789/0] mon.ceph3
2: [v2:10.0.152.0:3300/0,v1:10.0.152.0:6789/0] mon.ceph2
3: [v2:10.0.151.0:3300/0,v1:10.0.151.0:6789/0] mon.ceph1
4: [v2:10.0.150.0:3300/0,v1:10.0.150.0:6789/0] mon.ceph0

-258> 2020-02-06 17:22:32.901 7f0f54cf0700 4 set_mon_vals no callback set
-257> 2020-02-06 17:22:32.901 7f0f667151c0 10 monclient: get_monmap_and_config success
-256> 2020-02-06 17:22:32.901 7f0f667151c0 10 monclient: shutdown
-255> 2020-02-06 17:22:32.901 7f0f54cf0700 10 set_mon_vals device_failure_prediction_mode = local
-254> 2020-02-06 17:22:32.901 7f0f564f3700 10 monclient: _finish_auth 0
-253> 2020-02-06 17:22:32.901 7f0f564f3700 10 monclient: _check_auth_rotating have uptodate secrets (they expire after 2020-02-06 17:22:02.902285)
-252> 2020-02-06 17:22:32.901 7f0f54cf0700 10 set_mon_vals osd_max_pg_per_osd_hard_ratio = 5.000000
-251> 2020-02-06 17:22:32.901 7f0f54cf0700 10 set_mon_vals public_addr = 10.0.150.0:0/0
-250> 2020-02-06 17:22:32.906 7f0f667151c0 0 set uid:gid to 167:167 (ceph:ceph)
-249> 2020-02-06 17:22:32.906 7f0f667151c0 0 ceph version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable), process ceph-mds, pid 2154801
-248> 2020-02-06 17:22:32.906 7f0f667151c0 0 pidfile_write: ignore empty --pid-file
-247> 2020-02-06 17:22:32.938 7f0f667151c0 5 asok(0x55704e642000) init /var/run/ceph/ceph-mds.ceph0.asok
-246> 2020-02-06 17:22:32.938 7f0f667151c0 5 asok(0x55704e642000) bind_and_listen /var/run/ceph/ceph-mds.ceph0.asok
-245> 2020-02-06 17:22:32.938 7f0f667151c0 5 asok(0x55704e642000) register_command 0 hook 0x55704e5ca930
-244> 2020-02-06 17:22:32.938 7f0f667151c0 5 asok(0x55704e642000) register_command version hook 0x55704e5ca930
-243> 2020-02-06 17:22:32.938 7f0f667151c0 5 asok(0x55704e642000) register_command git_version hook 0x55704e5ca930
-242> 2020-02-06 17:22:32.938 7f0f667151c0 5 asok(0x55704e642000) register_command help hook 0x55704e5cc270
-241> 2020-02-06 17:22:32.938 7f0f667151c0 5 asok(0x55704e642000) register_command get_command_descriptions hook 0x55704e5cc2b0
-240> 2020-02-06 17:22:32.938 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding auth protocol: cephx
-239> 2020-02-06 17:22:32.938 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding auth protocol: cephx
-238> 2020-02-06 17:22:32.938 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding auth protocol: cephx
-237> 2020-02-06 17:22:32.938 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: secure
-236> 2020-02-06 17:22:32.938 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: crc
-235> 2020-02-06 17:22:32.938 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: secure
-234> 2020-02-06 17:22:32.938 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: crc
-233> 2020-02-06 17:22:32.938 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: secure
-232> 2020-02-06 17:22:32.938 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: crc
-231> 2020-02-06 17:22:32.938 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: crc
-230> 2020-02-06 17:22:32.938 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: secure
-229> 2020-02-06 17:22:32.938 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: crc
-228> 2020-02-06 17:22:32.938 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: secure
-227> 2020-02-06 17:22:32.938 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: crc
-226> 2020-02-06 17:22:32.938 7f0f667151c0 5 AuthRegistry(0x55704f1dca38) adding con mode: secure
-225> 2020-02-06 17:22:32.938 7f0f554f1700 5 asok(0x55704e642000) entry start
-224> 2020-02-06 17:22:32.938 7f0f667151c0 2 auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-ceph0/keyring
-223> 2020-02-06 17:22:32.939 7f0f667151c0 10 monclient: build_initial_monmap
-222> 2020-02-06 17:22:32.939 7f0f667151c0 10 monclient: monmap:
epoch 0
fsid aca834ef-5617-47fd-be18-283faba1f0b1
last_changed 2020-02-06 17:22:32.940537
created 2020-02-06 17:22:32.940537
min_mon_release 0 (unknown)
0: v1:10.0.150.0:6789/0 mon.ceph0
1: v1:10.0.151.0:6789/0 mon.ceph1
2: v1:10.0.152.0:6789/0 mon.ceph2
3: v1:10.0.153.0:6789/0 mon.ceph3
4: v1:10.0.154.0:6789/0 mon.ceph4

-221> 2020-02-06 17:22:32.940 7f0f667151c0 10 monclient: init
-220> 2020-02-06 17:22:32.940 7f0f667151c0 5 AuthRegistry(0x7ffd27d3b3e8) adding auth protocol: cephx
-219> 2020-02-06 17:22:32.940 7f0f667151c0 5 AuthRegistry(0x7ffd27d3b3e8) adding auth protocol: cephx
-218> 2020-02-06 17:22:32.940 7f0f667151c0 5 AuthRegistry(0x7ffd27d3b3e8) adding auth protocol: cephx
-217> 2020-02-06 17:22:32.940 7f0f667151c0 5 AuthRegistry(0x7ffd27d3b3e8) adding con mode: secure
-216> 2020-02-06 17:22:32.940 7f0f667151c0 5 AuthRegistry(0x7ffd27d3b3e8) adding con mode: crc
-215> 2020-02-06 17:22:32.940 7f0f667151c0 5 AuthRegistry(0x7ffd27d3b3e8) adding con mode: secure
-214> 2020-02-06 17:22:32.940 7f0f667151c0 5 AuthRegistry(0x7ffd27d3b3e8) adding con mode: crc
-213> 2020-02-06 17:22:32.940 7f0f667151c0 5 AuthRegistry(0x7ffd27d3b3e8) adding con mode: secure
-212> 2020-02-06 17:22:32.940 7f0f667151c0 5 AuthRegistry(0x7ffd27d3b3e8) adding con mode: crc
-211> 2020-02-06 17:22:32.940 7f0f667151c0 5 AuthRegistry(0x7ffd27d3b3e8) adding con mode: crc
-210> 2020-02-06 17:22:32.940 7f0f667151c0 5 AuthRegistry(0x7ffd27d3b3e8) adding con mode: secure
-209> 2020-02-06 17:22:32.940 7f0f667151c0 5 AuthRegistry(0x7ffd27d3b3e8) adding con mode: crc
-208> 2020-02-06 17:22:32.940 7f0f667151c0 5 AuthRegistry(0x7ffd27d3b3e8) adding con mode: secure
-207> 2020-02-06 17:22:32.940 7f0f667151c0 5 AuthRegistry(0x7ffd27d3b3e8) adding con mode: crc
-206> 2020-02-06 17:22:32.940 7f0f667151c0 5 AuthRegistry(0x7ffd27d3b3e8) adding con mode: secure
-205> 2020-02-06 17:22:32.940 7f0f667151c0 2 auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-ceph0/keyring
-204> 2020-02-06 17:22:32.940 7f0f667151c0 2 auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-ceph0/keyring
-203> 2020-02-06 17:22:32.940 7f0f667151c0 10 monclient: _reopen_session rank -1
-202> 2020-02-06 17:22:32.940 7f0f667151c0 10 monclient(hunting): picked mon.ceph4 con 0x55704f20e000 addr v1:10.0.154.0:6789/0
-201> 2020-02-06 17:22:32.940 7f0f667151c0 10 monclient(hunting): picked mon.ceph1 con 0x55704f20f800 addr v1:10.0.151.0:6789/0
-200> 2020-02-06 17:22:32.940 7f0f667151c0 10 monclient(hunting): picked mon.ceph3 con 0x55704e64bc00 addr v1:10.0.153.0:6789/0
-199> 2020-02-06 17:22:32.940 7f0f667151c0 10 monclient(hunting): _renew_subs
-198> 2020-02-06 17:22:32.941 7f0f54265700 10 monclient(hunting): _init_auth method 2
-197> 2020-02-06 17:22:32.941 7f0f54265700 10 monclient(hunting): my global_id is 302227448
-196> 2020-02-06 17:22:32.941 7f0f54265700 10 monclient(hunting): _init_auth method 2
-195> 2020-02-06 17:22:32.941 7f0f54265700 10 monclient(hunting): my global_id is 302257475
-194> 2020-02-06 17:22:32.942 7f0f54265700 10 monclient(hunting): _init_auth method 2
-193> 2020-02-06 17:22:32.942 7f0f54265700 10 monclient(hunting): my global_id is 302243776
-192> 2020-02-06 17:22:32.942 7f0f54265700 10 monclient(hunting): handle_monmap mon_map magic: 0 v1
-191> 2020-02-06 17:22:32.943 7f0f54265700 10 monclient(hunting): got monmap 32 from mon.ceph4 (according to old e32)
-190> 2020-02-06 17:22:32.943 7f0f54265700 10 monclient(hunting): dump:
epoch 32
fsid aca834ef-5617-47fd-be18-283faba1f0b1
last_changed 2019-05-26 16:47:44.703927
created 2015-08-03 08:09:20.236054
min_mon_release 14 (nautilus)
0: [v2:10.0.154.0:3300/0,v1:10.0.154.0:6789/0] mon.ceph4
1: [v2:10.0.153.0:3300/0,v1:10.0.153.0:6789/0] mon.ceph3
2: [v2:10.0.152.0:3300/0,v1:10.0.152.0:6789/0] mon.ceph2
3: [v2:10.0.151.0:3300/0,v1:10.0.151.0:6789/0] mon.ceph1
4: [v2:10.0.150.0:3300/0,v1:10.0.150.0:6789/0] mon.ceph0

-189> 2020-02-06 17:22:32.943 7f0f54265700 1 monclient(hunting): mon.0 has (v2) addrs [v2:10.0.154.0:3300/0,v1:10.0.154.0:6789/0] but i'm connected to v1:10.0.154.0:6789/0, reconnecting
-188> 2020-02-06 17:22:32.943 7f0f54265700 10 monclient(hunting): _reopen_session rank -1
-187> 2020-02-06 17:22:32.943 7f0f54265700 10 monclient(hunting): picked mon.ceph0 con 0x55704f258000 addr [v2:10.0.150.0:3300/0,v1:10.0.150.0:6789/0]
-186> 2020-02-06 17:22:32.943 7f0f54265700 10 monclient(hunting): picked mon.ceph1 con 0x55704f258400 addr [v2:10.0.151.0:3300/0,v1:10.0.151.0:6789/0]
-185> 2020-02-06 17:22:32.943 7f0f54265700 10 monclient(hunting): picked mon.ceph4 con 0x55704f258800 addr [v2:10.0.154.0:3300/0,v1:10.0.154.0:6789/0]
-184> 2020-02-06 17:22:32.943 7f0f54265700 10 monclient(hunting): start opening mon connection
-183> 2020-02-06 17:22:32.943 7f0f54265700 10 monclient(hunting): start opening mon connection
-182> 2020-02-06 17:22:32.943 7f0f54265700 10 monclient(hunting): start opening mon connection
-181> 2020-02-06 17:22:32.943 7f0f54265700 10 monclient(hunting): _renew_subs
-180> 2020-02-06 17:22:32.943 7f0f54265700 10 monclient(hunting): _finish_auth 0
-179> 2020-02-06 17:22:32.943 7f0f574f5700 10 monclient(hunting): get_auth_request con 0x55704f258000 auth_method 0
-178> 2020-02-06 17:22:32.943 7f0f574f5700 10 monclient(hunting): get_auth_request method 2 preferred_modes [1,2]
-177> 2020-02-06 17:22:32.943 7f0f574f5700 10 monclient(hunting): _init_auth method 2
-176> 2020-02-06 17:22:32.943 7f0f574f5700 10 monclient(hunting): handle_auth_reply_more payload 9
-175> 2020-02-06 17:22:32.943 7f0f574f5700 10 monclient(hunting): handle_auth_reply_more payload_len 9
-174> 2020-02-06 17:22:32.943 7f0f574f5700 10 monclient(hunting): handle_auth_reply_more responding with 36 bytes
-173> 2020-02-06 17:22:32.943 7f0f56cf4700 10 monclient(hunting): get_auth_request con 0x55704f258400 auth_method 0
-172> 2020-02-06 17:22:32.943 7f0f56cf4700 10 monclient(hunting): get_auth_request method 2 preferred_modes [1,2]
-171> 2020-02-06 17:22:32.943 7f0f56cf4700 10 monclient(hunting): _init_auth method 2
-170> 2020-02-06 17:22:32.943 7f0f57cf6700 10 monclient(hunting): get_auth_request con 0x55704f258800 auth_method 0
-169> 2020-02-06 17:22:32.943 7f0f57cf6700 10 monclient(hunting): get_auth_request method 2 preferred_modes [1,2]
-168> 2020-02-06 17:22:32.943 7f0f57cf6700 10 monclient(hunting): _init_auth method 2
-167> 2020-02-06 17:22:32.944 7f0f574f5700 10 monclient(hunting): handle_auth_done global_id 302236089 payload 1102
-166> 2020-02-06 17:22:32.944 7f0f574f5700 10 monclient: _finish_hunting 0
-165> 2020-02-06 17:22:32.944 7f0f574f5700 1 monclient: found mon.ceph0
-164> 2020-02-06 17:22:32.944 7f0f574f5700 10 monclient: _send_mon_message to mon.ceph0 at v2:10.0.150.0:3300/0
-163> 2020-02-06 17:22:32.944 7f0f574f5700 10 monclient: _finish_auth 0
-162> 2020-02-06 17:22:32.944 7f0f574f5700 10 monclient: _check_auth_rotating renewing rotating keys (they expired before 2020-02-06 17:22:02.944797)
-161> 2020-02-06 17:22:32.944 7f0f574f5700 10 monclient: _send_mon_message to mon.ceph0 at v2:10.0.150.0:3300/0
-160> 2020-02-06 17:22:32.944 7f0f667151c0 5 monclient: authenticate success, global_id 302236089
-159> 2020-02-06 17:22:32.944 7f0f667151c0 10 monclient: wait_auth_rotating waiting (until 2020-02-06 17:23:02.944863)
-158> 2020-02-06 17:22:32.944 7f0f54265700 10 monclient: handle_monmap mon_map magic: 0 v1
-157> 2020-02-06 17:22:32.944 7f0f54265700 10 monclient: got monmap 32 from mon.ceph0 (according to old e32)
-156> 2020-02-06 17:22:32.944 7f0f54265700 10 monclient: dump:
epoch 32
fsid aca834ef-5617-47fd-be18-283faba1f0b1
last_changed 2019-05-26 16:47:44.703927
created 2015-08-03 08:09:20.236054
min_mon_release 14 (nautilus)
0: [v2:10.0.154.0:3300/0,v1:10.0.154.0:6789/0] mon.ceph4
1: [v2:10.0.153.0:3300/0,v1:10.0.153.0:6789/0] mon.ceph3
2: [v2:10.0.152.0:3300/0,v1:10.0.152.0:6789/0] mon.ceph2
3: [v2:10.0.151.0:3300/0,v1:10.0.151.0:6789/0] mon.ceph1
4: [v2:10.0.150.0:3300/0,v1:10.0.150.0:6789/0] mon.ceph0

-155> 2020-02-06 17:22:32.944 7f0f54265700 10 monclient: handle_config config(8 keys) v1
-154> 2020-02-06 17:22:32.944 7f0f54265700 10 monclient: handle_monmap mon_map magic: 0 v1
-153> 2020-02-06 17:22:32.944 7f0f54265700 10 monclient: got monmap 32 from mon.ceph0 (according to old e32)
-152> 2020-02-06 17:22:32.944 7f0f54265700 10 monclient: dump:
epoch 32
fsid aca834ef-5617-47fd-be18-283faba1f0b1
last_changed 2019-05-26 16:47:44.703927
created 2015-08-03 08:09:20.236054
min_mon_release 14 (nautilus)
0: [v2:10.0.154.0:3300/0,v1:10.0.154.0:6789/0] mon.ceph4
1: [v2:10.0.153.0:3300/0,v1:10.0.153.0:6789/0] mon.ceph3
2: [v2:10.0.152.0:3300/0,v1:10.0.152.0:6789/0] mon.ceph2
3: [v2:10.0.151.0:3300/0,v1:10.0.151.0:6789/0] mon.ceph1
4: [v2:10.0.150.0:3300/0,v1:10.0.150.0:6789/0] mon.ceph0

-151> 2020-02-06 17:22:32.944 7f0f54265700 10 monclient: _finish_auth 0
-150> 2020-02-06 17:22:32.944 7f0f54265700 10 monclient: _check_auth_rotating have uptodate secrets (they expire after 2020-02-06 17:22:02.945448)
-149> 2020-02-06 17:22:32.944 7f0f52a62700 4 set_mon_vals no callback set
-148> 2020-02-06 17:22:32.944 7f0f667151c0 10 monclient: wait_auth_rotating done
-147> 2020-02-06 17:22:32.944 7f0f667151c0 10 monclient: _renew_subs
-146> 2020-02-06 17:22:32.944 7f0f667151c0 10 monclient: _send_mon_message to mon.ceph0 at v2:10.0.150.0:3300/0
-145> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command status hook 0x55704e5cc2a0
-144> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command dump_ops_in_flight hook 0x55704e5cc2a0
-143> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command ops hook 0x55704e5cc2a0
-142> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command dump_blocked_ops hook 0x55704e5cc2a0
-141> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command dump_historic_ops hook 0x55704e5cc2a0
-140> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command dump_historic_ops_by_duration hook 0x55704e5cc2a0
-139> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command scrub_path hook 0x55704e5cc2a0
-138> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command tag path hook 0x55704e5cc2a0
-137> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command flush_path hook 0x55704e5cc2a0
-136> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command export dir hook 0x55704e5cc2a0
-135> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command dump cache hook 0x55704e5cc2a0
-134> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command cache status hook 0x55704e5cc2a0
-133> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command dump tree hook 0x55704e5cc2a0
-132> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command dump loads hook 0x55704e5cc2a0
-131> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command dump snaps hook 0x55704e5cc2a0
-130> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command session evict hook 0x55704e5cc2a0
-129> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command osdmap barrier hook 0x55704e5cc2a0
-128> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command session ls hook 0x55704e5cc2a0
-127> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command flush journal hook 0x55704e5cc2a0
-126> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command force_readonly hook 0x55704e5cc2a0
-125> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command get subtrees hook 0x55704e5cc2a0
-124> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command dirfrag split hook 0x55704e5cc2a0
-123> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command dirfrag merge hook 0x55704e5cc2a0
-122> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command dirfrag ls hook 0x55704e5cc2a0
-121> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command openfiles ls hook 0x55704e5cc2a0
-120> 2020-02-06 17:22:32.944 7f0f667151c0 5 asok(0x55704e642000) register_command dump inode hook 0x55704e5cc2a0
-119> 2020-02-06 17:22:32.945 7f0f54265700 1 mds.ceph0 Updating MDS map to version 252661 from mon.4
-118> 2020-02-06 17:22:32.946 7f0f5125f700 5 mds.beacon.ceph0 Sending beacon up:boot seq 1
-117> 2020-02-06 17:22:32.947 7f0f54265700 4 mgrc handle_mgr_map Got map version 258053
-116> 2020-02-06 17:22:32.947 7f0f54265700 4 mgrc handle_mgr_map Active mgr is now [v2:10.0.150.0:6834/2150000,v1:10.0.150.0:6835/2150000]
-115> 2020-02-06 17:22:32.947 7f0f54265700 4 mgrc reconnect Starting new session with [v2:10.0.150.0:6834/2150000,v1:10.0.150.0:6835/2150000]
-114> 2020-02-06 17:22:32.947 7f0f56cf4700 10 monclient: get_auth_request con 0x55704f20e000 auth_method 0
-113> 2020-02-06 17:22:32.947 7f0f5125f700 10 monclient: _send_mon_message to mon.ceph0 at v2:10.0.150.0:3300/0
-112> 2020-02-06 17:22:32.949 7f0f54265700 4 mgrc handle_mgr_configure stats_period=5
-111> 2020-02-06 17:22:32.949 7f0f54265700 4 mgrc handle_mgr_configure updated stats threshold: 5
-110> 2020-02-06 17:22:32.949 7f0f54265700 4 mgrc ms_handle_reset ms_handle_reset con 0x55704f20e000
-109> 2020-02-06 17:22:32.949 7f0f54265700 4 mgrc reconnect Terminating session with v2:10.0.150.0:6834/2150000
-108> 2020-02-06 17:22:32.949 7f0f54265700 4 mgrc reconnect waiting to retry connect until 2020-02-06 17:22:33.947925
-107> 2020-02-06 17:22:33.244 7f0f54265700 1 mds.ceph0 Updating MDS map to version 252662 from mon.4
-106> 2020-02-06 17:22:33.244 7f0f54265700 5 mds.beacon.ceph0 set_want_state: up:boot -> up:standby
-105> 2020-02-06 17:22:33.244 7f0f54265700 1 mds.ceph0 Map has assigned me to become a standby
-104> 2020-02-06 17:22:33.246 7f0f574f5700 5 mds.beacon.ceph0 received beacon reply up:boot seq 1 rtt 0.300003
-103> 2020-02-06 17:22:33.288 7f0f54265700 1 mds.ceph0 Updating MDS map to version 252663 from mon.4
-102> 2020-02-06 17:22:33.289 7f0f54265700 4 mds.0.purge_queue operator(): data pool 68 not found in OSDMap
-101> 2020-02-06 17:22:33.289 7f0f54265700 5 asok(0x55704e642000) register_command objecter_requests hook 0x55704e5cc330
-100> 2020-02-06 17:22:33.289 7f0f54265700 10 monclient: _renew_subs
-99> 2020-02-06 17:22:33.289 7f0f54265700 10 monclient: _send_mon_message to mon.ceph0 at v2:10.0.150.0:3300/0
-98> 2020-02-06 17:22:33.289 7f0f54265700 10 log_channel(cluster) update_config to_monitors: true to_syslog: false syslog_facility: daemon prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
-97> 2020-02-06 17:22:33.290 7f0f54265700 4 mds.0.purge_queue operator(): data pool 68 not found in OSDMap
-96> 2020-02-06 17:22:33.290 7f0f54265700 4 mds.0.0 handle_osd_map epoch 0, 0 new blacklist entries
-95> 2020-02-06 17:22:33.290 7f0f54265700 1 mds.0.252663 handle_mds_map i am now mds.0.252663
-94> 2020-02-06 17:22:33.290 7f0f54265700 1 mds.0.252663 handle_mds_map state change up:boot --> up:replay
-93> 2020-02-06 17:22:33.290 7f0f54265700 5 mds.beacon.ceph0 set_want_state: up:standby -> up:replay
-92> 2020-02-06 17:22:33.290 7f0f54265700 1 mds.0.252663 replay_start
-91> 2020-02-06 17:22:33.290 7f0f54265700 1 mds.0.252663 recovery set is 1
-90> 2020-02-06 17:22:33.290 7f0f54265700 1 mds.0.252663 waiting for osdmap 459132 (which blacklists prior instance)
-89> 2020-02-06 17:22:33.292 7f0f54265700 4 mds.0.252663 handle_osd_map epoch 459132, 0 new blacklist entries
-88> 2020-02-06 17:22:33.292 7f0f54265700 10 monclient: _renew_subs
-87> 2020-02-06 17:22:33.292 7f0f54265700 10 monclient: _send_mon_message to mon.ceph0 at v2:10.0.150.0:3300/0
-86> 2020-02-06 17:22:33.292 7f0f4da58700 2 mds.0.252663 Booting: 0: opening inotable
-85> 2020-02-06 17:22:33.293 7f0f4da58700 2 mds.0.252663 Booting: 0: opening sessionmap
-84> 2020-02-06 17:22:33.293 7f0f4da58700 2 mds.0.252663 Booting: 0: opening mds log
-83> 2020-02-06 17:22:33.293 7f0f4da58700 5 mds.0.log open discovering log bounds
-82> 2020-02-06 17:22:33.293 7f0f4da58700 2 mds.0.252663 Booting: 0: opening purge queue (async)
-81> 2020-02-06 17:22:33.293 7f0f4da58700 4 mds.0.purge_queue open: opening
-80> 2020-02-06 17:22:33.293 7f0f4da58700 1 mds.0.journaler.pq(ro) recover start
-79> 2020-02-06 17:22:33.293 7f0f4da58700 1 mds.0.journaler.pq(ro) read_head
-78> 2020-02-06 17:22:33.293 7f0f4da58700 2 mds.0.252663 Booting: 0: loading open file table (async)
-77> 2020-02-06 17:22:33.293 7f0f4d257700 4 mds.0.journalpointer Reading journal pointer '400.00000000'
-76> 2020-02-06 17:22:33.293 7f0f4da58700 2 mds.0.252663 Booting: 0: opening snap table
-75> 2020-02-06 17:22:33.293 7f0f56cf4700 10 monclient: get_auth_request con 0x55704e64b000 auth_method 0
-74> 2020-02-06 17:22:33.294 7f0f57cf6700 10 monclient: get_auth_request con 0x55704f258c00 auth_method 0
-73> 2020-02-06 17:22:33.294 7f0f574f5700 10 monclient: get_auth_request con 0x55704f20f800 auth_method 0
-72> 2020-02-06 17:22:33.299 7f0f4ea5a700 1 mds.0.journaler.pq(ro) _finish_read_head loghead(trim 352321536, expire 355297712, write 355297712, stream_format 1). probing for end of log (from 355297712)...
-71> 2020-02-06 17:22:33.299 7f0f4ea5a700 1 mds.0.journaler.pq(ro) probing for end of the log
-70> 2020-02-06 17:22:33.299 7f0f4d257700 1 mds.0.journaler.mdlog(ro) recover start
-69> 2020-02-06 17:22:33.299 7f0f4d257700 1 mds.0.journaler.mdlog(ro) read_head
-68> 2020-02-06 17:22:33.299 7f0f4d257700 4 mds.0.log Waiting for journal 0x200 to recover...
-67> 2020-02-06 17:22:33.300 7f0f56cf4700 10 monclient: get_auth_request con 0x55704f259400 auth_method 0
-66> 2020-02-06 17:22:33.300 7f0f4da58700 1 mds.0.journaler.mdlog(ro) _finish_read_head loghead(trim 7464179204096, expire 7464180805414, write 7464885615609, stream_format 1). probing for end of log (from 7464885615609)...
-65> 2020-02-06 17:22:33.300 7f0f4da58700 1 mds.0.journaler.mdlog(ro) probing for end of the log
-64> 2020-02-06 17:22:33.301 7f0f57cf6700 10 monclient: get_auth_request con 0x55704f259800 auth_method 0
-63> 2020-02-06 17:22:33.306 7f0f4da58700 1 mds.0.journaler.mdlog(ro) _finish_probe_end write_pos = 7464885615784 (header had 7464885615609). recovered.
-62> 2020-02-06 17:22:33.306 7f0f4d257700 4 mds.0.log Journal 0x200 recovered.
-61> 2020-02-06 17:22:33.306 7f0f4d257700 4 mds.0.log Recovered journal 0x200 in format 1
-60> 2020-02-06 17:22:33.306 7f0f4d257700 2 mds.0.252663 Booting: 1: loading/discovering base inodes
-59> 2020-02-06 17:22:33.306 7f0f4d257700 0 mds.0.cache creating system inode with ino:0x100
-58> 2020-02-06 17:22:33.306 7f0f4ea5a700 1 mds.0.journaler.pq(ro) _finish_probe_end write_pos = 355297712 (header had 355297712). recovered.
-57> 2020-02-06 17:22:33.306 7f0f4ea5a700 4 mds.0.purge_queue operator(): open complete
-56> 2020-02-06 17:22:33.306 7f0f4ea5a700 1 mds.0.journaler.pq(ro) set_writeable
-55> 2020-02-06 17:22:33.306 7f0f4d257700 0 mds.0.cache creating system inode with ino:0x1
-54> 2020-02-06 17:22:33.307 7f0f4da58700 2 mds.0.252663 Booting: 2: replaying mds log
-53> 2020-02-06 17:22:33.307 7f0f4da58700 2 mds.0.252663 Booting: 2: waiting for purge queue recovered
-52> 2020-02-06 17:22:33.946 7f0f52261700 4 mgrc reconnect Starting new session with [v2:10.0.150.0:6834/2150000,v1:10.0.150.0:6835/2150000]
-51> 2020-02-06 17:22:33.946 7f0f574f5700 10 monclient: get_auth_request con 0x55704f20e000 auth_method 0
-50> 2020-02-06 17:22:33.947 7f0f54265700 4 mgrc handle_mgr_configure stats_period=5
-49> 2020-02-06 17:22:36.947 7f0f5125f700 5 mds.beacon.ceph0 Sending beacon up:replay seq 2
-48> 2020-02-06 17:22:36.947 7f0f5125f700 10 monclient: _send_mon_message to mon.ceph0 at v2:10.0.150.0:3300/0
-47> 2020-02-06 17:22:36.948 7f0f574f5700 5 mds.beacon.ceph0 received beacon reply up:replay seq 2 rtt 0.00100001
-46> 2020-02-06 17:22:40.947 7f0f5125f700 5 mds.beacon.ceph0 Sending beacon up:replay seq 3
-45> 2020-02-06 17:22:40.947 7f0f5125f700 10 monclient: _send_mon_message to mon.ceph0 at v2:10.0.150.0:3300/0
-44> 2020-02-06 17:22:40.950 7f0f574f5700 5 mds.beacon.ceph0 received beacon reply up:replay seq 3 rtt 0.00300003
-43> 2020-02-06 17:22:42.939 7f0f53263700 10 monclient: tick
-42> 2020-02-06 17:22:42.939 7f0f53263700 10 monclient: _check_auth_rotating have uptodate secrets (they expire after 2020-02-06 17:22:12.941071)
-41> 2020-02-06 17:22:43.223 7f0f4c255700 1 mds.0.252663 Finished replaying journal
-40> 2020-02-06 17:22:43.223 7f0f4c255700 1 mds.0.252663 making mds journal writeable
-39> 2020-02-06 17:22:43.223 7f0f4c255700 1 mds.0.journaler.mdlog(ro) set_writeable
-38> 2020-02-06 17:22:43.223 7f0f4c255700 2 mds.0.252663 i am not alone, moving to state resolve
-37> 2020-02-06 17:22:43.223 7f0f4c255700 3 mds.0.252663 request_state up:resolve
-36> 2020-02-06 17:22:43.223 7f0f4c255700 5 mds.beacon.ceph0 set_want_state: up:replay -> up:resolve
-35> 2020-02-06 17:22:43.223 7f0f4c255700 5 mds.beacon.ceph0 Sending beacon up:resolve seq 4
-34> 2020-02-06 17:22:43.223 7f0f4c255700 10 monclient: _send_mon_message to mon.ceph0 at v2:10.0.150.0:3300/0
-33> 2020-02-06 17:22:43.260 7f0f54265700 1 mds.ceph0 Updating MDS map to version 252664 from mon.4
-32> 2020-02-06 17:22:43.260 7f0f54265700 1 mds.0.252663 handle_mds_map i am now mds.0.252663
-31> 2020-02-06 17:22:43.260 7f0f54265700 1 mds.0.252663 handle_mds_map state change up:replay --> up:resolve
-30> 2020-02-06 17:22:43.260 7f0f54265700 1 mds.0.252663 resolve_start
-29> 2020-02-06 17:22:43.260 7f0f54265700 1 mds.0.252663 reopen_log
-28> 2020-02-06 17:22:43.260 7f0f54265700 1 mds.0.252663 recovery set is 1
-27> 2020-02-06 17:22:43.261 7f0f56cf4700 10 monclient: get_auth_request con 0x55704e64bc00 auth_method 0
-26> 2020-02-06 17:22:43.263 7f0f54265700 5 mds.ceph0 handle_mds_map old map epoch 252664 <= 252664, discarding
-25> 2020-02-06 17:22:43.264 7f0f574f5700 5 mds.beacon.ceph0 received beacon reply up:resolve seq 4 rtt 0.0410004
-24> 2020-02-06 17:22:43.333 7f0f54265700 1 mds.0.252663 resolve_done
-23> 2020-02-06 17:22:43.333 7f0f54265700 3 mds.0.252663 request_state up:reconnect
-22> 2020-02-06 17:22:43.333 7f0f54265700 5 mds.beacon.ceph0 set_want_state: up:resolve -> up:reconnect
-21> 2020-02-06 17:22:43.333 7f0f54265700 5 mds.beacon.ceph0 Sending beacon up:reconnect seq 5
-20> 2020-02-06 17:22:43.333 7f0f54265700 10 monclient: _send_mon_message to mon.ceph0 at v2:10.0.150.0:3300/0
-19> 2020-02-06 17:22:44.312 7f0f54265700 1 mds.ceph0 Updating MDS map to version 252665 from mon.4
-18> 2020-02-06 17:22:44.312 7f0f54265700 1 mds.0.252663 handle_mds_map i am now mds.0.252663
-17> 2020-02-06 17:22:44.312 7f0f54265700 1 mds.0.252663 handle_mds_map state change up:resolve --> up:reconnect
-16> 2020-02-06 17:22:44.312 7f0f54265700 1 mds.0.252663 reconnect_start
-15> 2020-02-06 17:22:44.313 7f0f54265700 4 mds.0.252663 reconnect_start: killed 0 blacklisted sessions (182 blacklist entries, 0)
-14> 2020-02-06 17:22:44.313 7f0f54265700 1 mds.0.252663 reconnect_done
-13> 2020-02-06 17:22:44.313 7f0f54265700 3 mds.0.252663 request_state up:rejoin
-12> 2020-02-06 17:22:44.313 7f0f54265700 5 mds.beacon.ceph0 set_want_state: up:reconnect -> up:rejoin
-11> 2020-02-06 17:22:44.313 7f0f54265700 5 mds.beacon.ceph0 Sending beacon up:rejoin seq 6
-10> 2020-02-06 17:22:44.313 7f0f54265700 10 monclient: _send_mon_message to mon.ceph0 at v2:10.0.150.0:3300/0
-9> 2020-02-06 17:22:44.316 7f0f574f5700 5 mds.beacon.ceph0 received beacon reply up:reconnect seq 5 rtt 0.98301
-8> 2020-02-06 17:22:45.325 7f0f54265700 1 mds.ceph0 Updating MDS map to version 252666 from mon.4
-7> 2020-02-06 17:22:45.325 7f0f54265700 1 mds.0.252663 handle_mds_map i am now mds.0.252663
-6> 2020-02-06 17:22:45.325 7f0f54265700 1 mds.0.252663 handle_mds_map state change up:reconnect --> up:rejoin
-5> 2020-02-06 17:22:45.325 7f0f54265700 1 mds.0.252663 rejoin_start
-4> 2020-02-06 17:22:45.326 7f0f54265700 1 mds.0.252663 rejoin_joint_start
-3> 2020-02-06 17:22:45.326 7f0f54265700 5 mds.ceph0 handle_mds_map old map epoch 252666 <= 252666, discarding
-2> 2020-02-06 17:22:45.330 7f0f574f5700 5 mds.beacon.ceph0 received beacon reply up:rejoin seq 6 rtt 1.01701
-1> 2020-02-06 17:22:45.436 7f0f4f25b700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.7/rpm/el7/BUILD/ceph-14.2.7/src/mds/MDCache.cc: In function 'void MDCache::rejoin_send_rejoins()' thread 7f0f4f25b700 time 2020-02-06 17:22:45.435620
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.7/rpm/el7/BUILD/ceph-14.2.7/src/mds/MDCache.cc: 4054: FAILED ceph_assert(auth >= 0)

ceph version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14a) [0x7f0f5d78b031]
2: (()+0x2661f9) [0x7f0f5d78b1f9]
3: (MDCache::rejoin_send_rejoins()+0x26f7) [0x55704d6a3d67]
4: (MDCache::process_imported_caps()+0x1236) [0x55704d6a5046]
5: (FunctionContext::finish(int)+0x2c) [0x55704d58a40c]
6: (Context::complete(int)+0x9) [0x55704d586e49]
7: (MDSContext::complete(int)+0x74) [0x55704d804b14]
8: (void finish_contexts<std::vector<MDSContext*, std::allocator<MDSContext*> > >(CephContext*, std::vector<MDSContext*, std::allocator<MDSContext*> >&, int)+0x7d) [0x55704d58e63d]
9: (OpenFileTable::_open_ino_finish(inodeno_t, int)+0x109) [0x55704d824c79]
10: (OpenFileTable::_prefetch_inodes()+0x250) [0x55704d823cd0]
11: (FunctionContext::finish(int)+0x2c) [0x55704d58a40c]
12: (Context::complete(int)+0x9) [0x55704d586e49]
13: (MDSContext::complete(int)+0x74) [0x55704d804b14]
14: (C_GatherBase<MDSContext, C_MDSInternalNoop>::sub_finish(MDSContext*, int)+0x117) [0x55704d5bf2f7]
15: (C_GatherBase<MDSContext, C_MDSInternalNoop>::C_GatherSub::complete(int)+0x21) [0x55704d5bf671]
16: (MDSRank::_advance_queues()+0xa4) [0x55704d596634]
17: (MDSRank::ProgressThread::entry()+0x3d) [0x55704d596cad]
18: (()+0x7e65) [0x7f0f5b638e65]
19: (clone()+0x6d) [0x7f0f5a2e688d]

0> 2020-02-06 17:22:45.438 7f0f4f25b700 -1 *** Caught signal (Aborted) **
in thread 7f0f4f25b700 thread_name:mds_rank_progr

ceph version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable)
1: (()+0xf5f0) [0x7f0f5b6405f0]
2: (gsignal()+0x37) [0x7f0f5a21e337]
3: (abort()+0x148) [0x7f0f5a21fa28]
4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x199) [0x7f0f5d78b080]
5: (()+0x2661f9) [0x7f0f5d78b1f9]
6: (MDCache::rejoin_send_rejoins()+0x26f7) [0x55704d6a3d67]
7: (MDCache::process_imported_caps()+0x1236) [0x55704d6a5046]
8: (FunctionContext::finish(int)+0x2c) [0x55704d58a40c]
9: (Context::complete(int)+0x9) [0x55704d586e49]
10: (MDSContext::complete(int)+0x74) [0x55704d804b14]
11: (void finish_contexts<std::vector<MDSContext*, std::allocator<MDSContext*> > >(CephContext*, std::vector<MDSContext*, std::allocator<MDSContext*> >&, int)+0x7d) [0x55704d58e63d]
12: (OpenFileTable::_open_ino_finish(inodeno_t, int)+0x109) [0x55704d824c79]
13: (OpenFileTable::_prefetch_inodes()+0x250) [0x55704d823cd0]
14: (FunctionContext::finish(int)+0x2c) [0x55704d58a40c]
15: (Context::complete(int)+0x9) [0x55704d586e49]
16: (MDSContext::complete(int)+0x74) [0x55704d804b14]
17: (C_GatherBase<MDSContext, C_MDSInternalNoop>::sub_finish(MDSContext*, int)+0x117) [0x55704d5bf2f7]
18: (C_GatherBase<MDSContext, C_MDSInternalNoop>::C_GatherSub::complete(int)+0x21) [0x55704d5bf671]
19: (MDSRank::_advance_queues()+0xa4) [0x55704d596634]
20: (MDSRank::ProgressThread::entry()+0x3d) [0x55704d596cad]
21: (()+0x7e65) [0x7f0f5b638e65]
22: (clone()+0x6d) [0x7f0f5a2e688d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
0/ 5 none
0/ 1 lockdep
0/ 1 context
1/ 1 crush
1/ 5 mds
1/ 5 mds_balancer
1/ 5 mds_locker
1/ 5 mds_log
1/ 5 mds_log_expire
1/ 5 mds_migrator
0/ 1 buffer
0/ 1 timer
0/ 1 filer
0/ 1 striper
0/ 1 objecter
0/ 5 rados
0/ 5 rbd
0/ 5 rbd_mirror
0/ 5 rbd_replay
0/ 5 journaler
0/ 5 objectcacher
0/ 5 client
1/ 5 osd
0/ 5 optracker
0/ 5 objclass
1/ 3 filestore
1/ 3 journal
0/ 0 ms
1/ 5 mon
0/10 monc
1/ 5 paxos
0/ 5 tp
1/ 5 auth
1/ 5 crypto
1/ 1 finisher
1/ 1 reserver
1/ 5 heartbeatmap
1/ 5 perfcounter
1/ 5 rgw
1/ 5 rgw_sync
1/10 civetweb
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
0/ 0 refs
1/ 5 xio
1/ 5 compressor
1/ 5 bluestore
1/ 5 bluefs
1/ 3 bdev
1/ 5 kstore
4/ 5 rocksdb
4/ 5 leveldb
4/ 5 memdb
1/ 5 kinetic
1/ 5 fuse
1/ 5 mgr
1/ 5 mgrc
1/ 5 dpdk
1/ 5 eventtrace
1/ 5 prioritycache
-2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 10000
max_new 1000
log_file /var/log/ceph/ceph-mds.ceph0.log
--- end dump of recent events ---
2020-02-06 17:22:45.864 7f10035011c0 0 set uid:gid to 167:167 (ceph:ceph)
2020-02-06 17:22:45.864 7f10035011c0 0 ceph version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable), process ceph-mds, pid 2154837
2020-02-06 17:22:45.865 7f10035011c0 0 pidfile_write: ignore empty --pid-file
2020-02-06 17:22:45.925 7f0ff1051700 1 mds.ceph0 Updating MDS map to version 252666 from mon.4
2020-02-06 17:22:46.378 7f0ff1051700 1 mds.ceph0 Updating MDS map to version 252667 from mon.4
2020-02-06 17:22:46.378 7f0ff1051700 1 mds.ceph0 Map has assigned me to become a standby
2020-02-06 17:22:46.394 7f0ff1051700 1 mds.ceph0 Updating MDS map to version 252668 from mon.4
2020-02-06 17:22:46.396 7f0ff1051700 1 mds.0.252668 handle_mds_map i am now mds.0.252668
2020-02-06 17:22:46.396 7f0ff1051700 1 mds.0.252668 handle_mds_map state change up:boot --> up:replay
2020-02-06 17:22:46.396 7f0ff1051700 1 mds.0.252668 replay_start
2020-02-06 17:22:46.396 7f0ff1051700 1 mds.0.252668 recovery set is 1
2020-02-06 17:22:46.396 7f0ff1051700 1 mds.0.252668 waiting for osdmap 459133 (which blacklists prior instance)
2020-02-06 17:22:46.406 7f0fea043700 0 mds.0.cache creating system inode with ino:0x100
2020-02-06 17:22:46.406 7f0fea043700 0 mds.0.cache creating system inode with ino:0x1
2020-02-06 17:22:53.787 7f0ff1adc700 -1 received signal: Terminated from /usr/lib/systemd/systemd --system --deserialize 23 (PID: 1) UID: 0
2020-02-06 17:22:53.787 7f0ff1adc700 -1 mds.ceph0 *** got signal Terminated ***
2020-02-06 17:22:53.787 7f0ff1adc700 1 mds.ceph0 suicide! Wanted state up:replay
2020-02-06 17:22:53.927 7f0ff1adc700 1 mds.0.252668 shutdown: shutting down rank 0
2020-02-06 17:22:53.927 7f0ff1051700 0 ms_deliver_dispatch: unhandled message 0x55bc03738f00 osd_map(459134..459134 src has 458591..459134) v4 from mon.4 v2:10.0.150.0:3300/0