2019-01-25 14:54:58.378349 7f83ec74a700 10 Processor -- accept listen_fd=11
2019-01-25 14:54:58.378394 7f83ec74a700 15 RDMAServerSocketImpl accept
2019-01-25 14:54:58.378452 7f83ec74a700 20 Infiniband init started.
2019-01-25 14:54:58.379013 7f83ec74a700 20 Infiniband init successfully create queue pair: qp=0x56414483bc08
2019-01-25 14:54:58.379545 7f83ec74a700 20 Infiniband init successfully change queue pair to INIT: qp=0x56414483bc08
2019-01-25 14:54:58.379571 7f83ec74a700 20 Event(0x564144ca8580 nevent=5000 time_id=1).wakeup
2019-01-25 14:54:58.379584 7f83ec74a700 20 RDMAServerSocketImpl accept accepted a new QP, tcp_fd: 34
2019-01-25 14:54:58.379587 7f83ec74a700 10 Processor -- accept accepted incoming on sd 35
2019-01-25 14:54:58.379607 7f83ec74a700 10 -- 10.0.0.12:6789/0 >> - conn(0x564144e5c000 :-1 s=STATE_NONE pgs=0 cs=0 l=0).accept sd=35
2019-01-25 14:54:58.379633 7f83ec74a700 15 RDMAServerSocketImpl accept
2019-01-25 14:54:58.379657 7f83f2f57700 20 Event(0x564144ca8580 nevent=5000 time_id=1).create_file_event create event started fd=34 mask=1 original mask is 0
2019-01-25 14:54:58.379677 7f83f2f57700 20 EpollDriver.add_event add event fd=34 cur_mask=0 add_mask=1 to 23
2019-01-25 14:54:58.379686 7f83f2f57700 20 Event(0x564144ca8580 nevent=5000 time_id=1).create_file_event create event end fd=34 mask=1 original mask is 1
2019-01-25 14:54:58.379707 7f83f2f57700 20 -- 10.0.0.12:6789/0 >> - conn(0x564144e5c000 :-1 s=STATE_ACCEPTING pgs=0 cs=0 l=0).process prev state is STATE_ACCEPTING
2019-01-25 14:54:58.379745 7f83f2f57700 20 Event(0x564144ca8580 nevent=5000 time_id=1).create_file_event create event started fd=35 mask=1 original mask is 0
2019-01-25 14:54:58.379748 7f83f2f57700 20 EpollDriver.add_event add event fd=35 cur_mask=0 add_mask=1 to 23
2019-01-25 14:54:58.379753 7f83f2f57700 20 Event(0x564144ca8580 nevent=5000 time_id=1).create_file_event create event end fd=35 mask=1 original mask is 1
2019-01-25 14:54:58.379761 7f83f2f57700 1 -- 10.0.0.12:6789/0 >> - conn(0x564144e5c000 :6789 s=STATE_ACCEPTING pgs=0 cs=0 l=0)._process_connection sd=35 -
2019-01-25 14:54:58.379778 7f83f2f57700 20 RDMAConnectedSocketImpl send fake send to upper, QP: 167
2019-01-25 14:54:58.379781 7f83f2f57700 10 -- 10.0.0.12:6789/0 >> - conn(0x564144e5c000 :6789 s=STATE_ACCEPTING pgs=0 cs=0 l=0)._try_send sent bytes 281 remaining bytes 0
2019-01-25 14:54:58.379792 7f83f2f57700 10 -- 10.0.0.12:6789/0 >> - conn(0x564144e5c000 :6789 s=STATE_ACCEPTING_WAIT_BANNER_ADDR pgs=0 cs=0 l=0)._process_connection write banner and addr done: -
2019-01-25 14:54:58.379802 7f83f2f57700 20 -- 10.0.0.12:6789/0 >> - conn(0x564144e5c000 :6789 s=STATE_ACCEPTING_WAIT_BANNER_ADDR pgs=0 cs=0 l=0).process prev state is STATE_ACCEPTING
2019-01-25 14:54:58.379813 7f83f2f57700 20 RDMAConnectedSocketImpl read notify_fd : 0 in 167 r = -1
2019-01-25 14:54:58.379825 7f83f2f57700 20 RDMAConnectedSocketImpl handle_connection QP: 167 tcp_fd: 34 notify_fd: 35
2019-01-25 14:54:58.379849 7f83f2f57700 5 Infiniband recv_msg recevd: 2, 166, 0, 0, fe80000000000000506b4b0300ed6256
2019-01-25 14:54:58.379862 7f83f2f57700 10 Infiniband send_msg sending: 2, 167, 0, 0, fe80000000000000506b4b0300ed6256
2019-01-25 14:54:58.379923 7f83f2f57700 20 RDMAConnectedSocketImpl activate Choosing gid_index 67, sl 3
2019-01-25 14:54:58.379946 7f83f2f57700 -1 RDMAConnectedSocketImpl activate failed to transition to RTR state: (22) Invalid argument
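[Editor's note: the line above is the actual failure; everything after it is fallout. activate() has just chosen gid_index 67 and sl 3 for the freshly accepted queue pair (our QP 167, peer QP 166, peer GID fe80...ed6256 exchanged over the TCP socket), and the INIT -> RTR transition is then rejected with EINVAL (22). In libibverbs terms that transition is a single ibv_modify_qp() call; EINVAL at this point almost always means one of the address-vector attributes is invalid for the port, an sgid_index beyond the port's GID table being the usual suspect. A minimal sketch of the transition, assuming an RC queue pair on RoCE; this is not Ceph's exact code, and the helper name, MTU, and timer values are illustrative:

    // Sketch of the INIT -> RTR transition that fails above. ibv_modify_qp()
    // returns 0 on success or an errno value such as EINVAL (22) when an
    // attribute is invalid for the device/port.
    #include <infiniband/verbs.h>
    #include <cstring>

    int to_rtr(ibv_qp* qp, uint32_t peer_qpn, uint32_t peer_psn,
               ibv_gid peer_gid, uint8_t gid_index, uint8_t sl) {
      ibv_qp_attr a;
      std::memset(&a, 0, sizeof(a));
      a.qp_state           = IBV_QPS_RTR;
      a.path_mtu           = IBV_MTU_1024;   // must not exceed the port's active MTU
      a.dest_qp_num        = peer_qpn;       // peer QP number from the handshake (166 above)
      a.rq_psn             = peer_psn;       // peer's starting packet sequence number
      a.max_dest_rd_atomic = 1;
      a.min_rnr_timer      = 12;
      a.ah_attr.is_global      = 1;          // RoCE always needs a GRH
      a.ah_attr.grh.dgid       = peer_gid;   // the fe80... GID exchanged over TCP
      a.ah_attr.grh.sgid_index = gid_index;  // EINVAL if >= the port's gid_tbl_len
      a.ah_attr.grh.hop_limit  = 1;
      a.ah_attr.sl             = sl;         // service level (3 above)
      a.ah_attr.port_num       = 1;
      return ibv_modify_qp(qp, &a,
                           IBV_QP_STATE | IBV_QP_AV | IBV_QP_PATH_MTU |
                           IBV_QP_DEST_QPN | IBV_QP_RQ_PSN |
                           IBV_QP_MAX_DEST_RD_ATOMIC | IBV_QP_MIN_RNR_TIMER);
    }

handle_connection() asserts that this step succeeded, which is why the nonzero return turns into the FAILED assert(!r) abort below rather than a cleanly rejected connection.]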
2019-01-25 14:54:58.383313 7f83f2f57700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/msg/async/rdma/RDMAConnectedSocketImpl.cc: In function 'void RDMAConnectedSocketImpl::handle_connection()' thread 7f83f2f57700 time 2019-01-25 14:54:58.379971
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/msg/async/rdma/RDMAConnectedSocketImpl.cc: 244: FAILED assert(!r)

 ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x56413aeec320]
 2: (RDMAConnectedSocketImpl::handle_connection()+0x45e) [0x56413b1e972e]
 3: (EventCenter::process_events(int, std::chrono::duration >*)+0x359) [0x56413afa3f59]
 4: (()+0x6eeb1e) [0x56413afa6b1e]
 5: (()+0xb5070) [0x7f83f6960070]
 6: (()+0x7e25) [0x7f83f8ecee25]
 7: (clone()+0x6d) [0x7f83f60c4bad]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
-399> 2019-01-25 14:52:02.643564 7f83f9b28000 5 asok(0x56414485a1c0) register_command perfcounters_dump hook 0x56414480e190
-398> 2019-01-25 14:52:02.643587 7f83f9b28000 5 asok(0x56414485a1c0) register_command 1 hook 0x56414480e190
-397> 2019-01-25 14:52:02.643592 7f83f9b28000 5 asok(0x56414485a1c0) register_command perf dump hook 0x56414480e190
-396> 2019-01-25 14:52:02.643596 7f83f9b28000 5 asok(0x56414485a1c0) register_command perfcounters_schema hook 0x56414480e190
-395> 2019-01-25 14:52:02.643600 7f83f9b28000 5 asok(0x56414485a1c0) register_command perf histogram dump hook 0x56414480e190
-394> 2019-01-25 14:52:02.643603 7f83f9b28000 5 asok(0x56414485a1c0) register_command 2 hook 0x56414480e190
-393> 2019-01-25 14:52:02.643607 7f83f9b28000 5 asok(0x56414485a1c0) register_command perf schema hook 0x56414480e190
-392> 2019-01-25 14:52:02.643610 7f83f9b28000 5 asok(0x56414485a1c0) register_command perf histogram schema hook 0x56414480e190
-391> 2019-01-25 14:52:02.643614 7f83f9b28000 5 asok(0x56414485a1c0) register_command perf reset hook 0x56414480e190
-390> 2019-01-25 14:52:02.643618 7f83f9b28000 5 asok(0x56414485a1c0) register_command config show hook 0x56414480e190
-389> 2019-01-25 14:52:02.643621 7f83f9b28000 5 asok(0x56414485a1c0) register_command config help hook 0x56414480e190
-388> 2019-01-25 14:52:02.643625 7f83f9b28000 5 asok(0x56414485a1c0) register_command config set hook 0x56414480e190
-387> 2019-01-25 14:52:02.643629 7f83f9b28000 5 asok(0x56414485a1c0) register_command config get hook 0x56414480e190
-386> 2019-01-25 14:52:02.643632 7f83f9b28000 5 asok(0x56414485a1c0) register_command config diff hook 0x56414480e190
-385> 2019-01-25 14:52:02.643635 7f83f9b28000 5 asok(0x56414485a1c0) register_command config diff get hook 0x56414480e190
-384> 2019-01-25 14:52:02.643639 7f83f9b28000 5 asok(0x56414485a1c0) register_command log flush hook 0x56414480e190
-383> 2019-01-25 14:52:02.643642 7f83f9b28000 5 asok(0x56414485a1c0) register_command log dump hook 0x56414480e190
-382> 2019-01-25 14:52:02.643651 7f83f9b28000 5 asok(0x56414485a1c0) register_command log reopen hook 0x56414480e190
-381> 2019-01-25 14:52:02.643689 7f83f9b28000 5 asok(0x56414485a1c0) register_command dump_mempools hook 0x564144a03328
-380> 2019-01-25 14:52:02.654675 7f83f9b28000 0 set uid:gid to 167:167 (ceph:ceph)
-379> 2019-01-25 14:52:02.654706 7f83f9b28000 0 ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable), process ceph-mon, pid 216922
-378> 2019-01-25 14:52:02.654774 7f83f9b28000 0 pidfile_write: ignore empty --pid-file
-377> 2019-01-25 14:52:02.685886 7f83f9b28000 5 asok(0x56414485a1c0) init /var/run/ceph/ceph-mon.node1.asok
-376> 2019-01-25 14:52:02.685902 7f83f9b28000 5 asok(0x56414485a1c0) bind_and_listen /var/run/ceph/ceph-mon.node1.asok
-375> 2019-01-25 14:52:02.685988 7f83f9b28000 5 asok(0x56414485a1c0) register_command 0 hook 0x56414480c180
-374> 2019-01-25 14:52:02.685996 7f83f9b28000 5 asok(0x56414485a1c0) register_command version hook 0x56414480c180
-373> 2019-01-25 14:52:02.685999 7f83f9b28000 5 asok(0x56414485a1c0) register_command git_version hook 0x56414480c180
-372> 2019-01-25 14:52:02.686003 7f83f9b28000 5 asok(0x56414485a1c0) register_command help hook 0x56414480e2a0
-371> 2019-01-25 14:52:02.686010 7f83f9b28000 5 asok(0x56414485a1c0) register_command get_command_descriptions hook 0x56414480e2b0
-370> 2019-01-25 14:52:02.686075 7f83f3fa1700 5 asok(0x56414485a1c0) entry start
-369> 2019-01-25 14:52:02.693847 7f83f9b28000 0 load: jerasure load: lrc load: isa
-368> 2019-01-25 14:52:02.693979 7f83f9b28000 0 set rocksdb option compression = kNoCompression
-367> 2019-01-25 14:52:02.693988 7f83f9b28000 0 set rocksdb option level_compaction_dynamic_level_bytes = true
-366> 2019-01-25 14:52:02.693994 7f83f9b28000 0 set rocksdb option write_buffer_size = 33554432
-365> 2019-01-25 14:52:02.694019 7f83f9b28000 0 set rocksdb option compression = kNoCompression
-364> 2019-01-25 14:52:02.694025 7f83f9b28000 0 set rocksdb option level_compaction_dynamic_level_bytes = true
-363> 2019-01-25 14:52:02.694027 7f83f9b28000 0 set rocksdb option write_buffer_size = 33554432
-362> 2019-01-25 14:52:02.694209 7f83f9b28000 4 rocksdb: RocksDB version: 5.4.0
-361> 2019-01-25 14:52:02.694218 7f83f9b28000 4 rocksdb: Git sha rocksdb_build_git_sha:@0@
-360> 2019-01-25 14:52:02.694220 7f83f9b28000 4 rocksdb: Compile date Aug 30 2018
-359> 2019-01-25 14:52:02.694223 7f83f9b28000 4 rocksdb: DB SUMMARY
-358> 2019-01-25 14:52:02.694276 7f83f9b28000 4 rocksdb: CURRENT file: CURRENT
-357> 2019-01-25 14:52:02.694280 7f83f9b28000 4 rocksdb: IDENTITY file: IDENTITY
-356> 2019-01-25 14:52:02.694285 7f83f9b28000 4 rocksdb: MANIFEST file: MANIFEST-000036 size: 262 Bytes
-355> 2019-01-25 14:52:02.694287 7f83f9b28000 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-node1/store.db dir, Total Num: 4, files: 000027.sst 000029.sst 000032.sst 000035.sst
-354> 2019-01-25 14:52:02.694289 7f83f9b28000 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-node1/store.db: 000037.log size: 26570 ;
-353> 2019-01-25 14:52:02.694291 7f83f9b28000 4 rocksdb: Options.error_if_exists: 0
-352> 2019-01-25 14:52:02.694293 7f83f9b28000 4 rocksdb: Options.create_if_missing: 0
-351> 2019-01-25 14:52:02.694293 7f83f9b28000 4 rocksdb: Options.paranoid_checks: 1
-350> 2019-01-25 14:52:02.694294 7f83f9b28000 4 rocksdb: Options.env: 0x564143a321c0
-349> 2019-01-25 14:52:02.694295 7f83f9b28000 4 rocksdb: Options.info_log: 0x564144ab2de0
-348> 2019-01-25 14:52:02.694301 7f83f9b28000 4 rocksdb: Options.max_open_files: -1
-347> 2019-01-25 14:52:02.694302 7f83f9b28000 4 rocksdb: Options.max_file_opening_threads: 16
-346> 2019-01-25 14:52:02.694303 7f83f9b28000 4 rocksdb: Options.use_fsync: 0
-345> 2019-01-25 14:52:02.694304 7f83f9b28000 4 rocksdb: Options.max_log_file_size: 0
-344> 2019-01-25 14:52:02.694305 7f83f9b28000 4 rocksdb: Options.max_manifest_file_size: 18446744073709551615
-343> 2019-01-25 14:52:02.694306 7f83f9b28000 4 rocksdb: Options.log_file_time_to_roll: 0
-342> 2019-01-25 14:52:02.694306 7f83f9b28000 4 rocksdb: Options.keep_log_file_num: 1000
-341> 2019-01-25 14:52:02.694307 7f83f9b28000 4 rocksdb: Options.recycle_log_file_num: 0
-340> 2019-01-25 14:52:02.694308 7f83f9b28000 4 rocksdb: Options.allow_fallocate: 1
-339> 2019-01-25 14:52:02.694309 7f83f9b28000 4 rocksdb: Options.allow_mmap_reads: 0
-338> 2019-01-25 14:52:02.694309 7f83f9b28000 4 rocksdb: Options.allow_mmap_writes: 0
-337> 2019-01-25 14:52:02.694310 7f83f9b28000 4 rocksdb: Options.use_direct_reads: 0
-336> 2019-01-25 14:52:02.694313 7f83f9b28000 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
-335> 2019-01-25 14:52:02.694315 7f83f9b28000 4 rocksdb: Options.create_missing_column_families: 0
-334> 2019-01-25 14:52:02.694317 7f83f9b28000 4 rocksdb: Options.db_log_dir:
-333> 2019-01-25 14:52:02.694318 7f83f9b28000 4 rocksdb: Options.wal_dir: /var/lib/ceph/mon/ceph-node1/store.db
-332> 2019-01-25 14:52:02.694319 7f83f9b28000 4 rocksdb: Options.table_cache_numshardbits: 6
-331> 2019-01-25 14:52:02.694320 7f83f9b28000 4 rocksdb: Options.max_subcompactions: 1
-330> 2019-01-25 14:52:02.694321 7f83f9b28000 4 rocksdb: Options.max_background_flushes: 1
-329> 2019-01-25 14:52:02.694322 7f83f9b28000 4 rocksdb: Options.WAL_ttl_seconds: 0
-328> 2019-01-25 14:52:02.694323 7f83f9b28000 4 rocksdb: Options.WAL_size_limit_MB: 0
-327> 2019-01-25 14:52:02.694324 7f83f9b28000 4 rocksdb: Options.manifest_preallocation_size: 4194304
-326> 2019-01-25 14:52:02.694325 7f83f9b28000 4 rocksdb: Options.is_fd_close_on_exec: 1
-325> 2019-01-25 14:52:02.694325 7f83f9b28000 4 rocksdb: Options.advise_random_on_open: 1
-324> 2019-01-25 14:52:02.694326 7f83f9b28000 4 rocksdb: Options.db_write_buffer_size: 0
-323> 2019-01-25 14:52:02.694327 7f83f9b28000 4 rocksdb: Options.access_hint_on_compaction_start: 1
-322> 2019-01-25 14:52:02.694328 7f83f9b28000 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
-321> 2019-01-25 14:52:02.694328 7f83f9b28000 4 rocksdb: Options.compaction_readahead_size: 0
-320> 2019-01-25 14:52:02.694329 7f83f9b28000 4 rocksdb: Options.random_access_max_buffer_size: 1048576
-319> 2019-01-25 14:52:02.694330 7f83f9b28000 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
-318> 2019-01-25 14:52:02.694331 7f83f9b28000 4 rocksdb: Options.use_adaptive_mutex: 0
-317> 2019-01-25 14:52:02.694332 7f83f9b28000 4 rocksdb: Options.rate_limiter: (nil)
-316> 2019-01-25 14:52:02.694333 7f83f9b28000 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
-315> 2019-01-25 14:52:02.694333 7f83f9b28000 4 rocksdb: Options.bytes_per_sync: 0
-314> 2019-01-25 14:52:02.694334 7f83f9b28000 4 rocksdb: Options.wal_bytes_per_sync: 0
-313> 2019-01-25 14:52:02.694335 7f83f9b28000 4 rocksdb: Options.wal_recovery_mode: 2
-312> 2019-01-25 14:52:02.694336 7f83f9b28000 4 rocksdb: Options.enable_thread_tracking: 0
-311> 2019-01-25 14:52:02.694336 7f83f9b28000 4 rocksdb: Options.allow_concurrent_memtable_write: 1
-310> 2019-01-25 14:52:02.694337 7f83f9b28000 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
-309> 2019-01-25 14:52:02.694338 7f83f9b28000 4 rocksdb: Options.write_thread_max_yield_usec: 100
-308> 2019-01-25 14:52:02.694339 7f83f9b28000 4 rocksdb: Options.write_thread_slow_yield_usec: 3
-307> 2019-01-25 14:52:02.694339 7f83f9b28000 4 rocksdb: Options.row_cache: None
-306> 2019-01-25 14:52:02.694344 7f83f9b28000 4 rocksdb: Options.wal_filter: None
-305> 2019-01-25 14:52:02.694346 7f83f9b28000 4 rocksdb: Options.avoid_flush_during_recovery: 0
-304> 2019-01-25 14:52:02.694347 7f83f9b28000 4 rocksdb: Options.base_background_compactions: 1
-303> 2019-01-25 14:52:02.694348 7f83f9b28000 4 rocksdb: Options.max_background_compactions: 1
-302> 2019-01-25 14:52:02.694348 7f83f9b28000 4 rocksdb: Options.avoid_flush_during_shutdown: 0
-301> 2019-01-25 14:52:02.694349 7f83f9b28000 4 rocksdb: Options.delayed_write_rate : 16777216
-300> 2019-01-25 14:52:02.694350 7f83f9b28000 4 rocksdb: Options.max_total_wal_size: 0
-299> 2019-01-25 14:52:02.694350 7f83f9b28000 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
-298> 2019-01-25 14:52:02.694352 7f83f9b28000 4 rocksdb: Options.stats_dump_period_sec: 600
-297> 2019-01-25 14:52:02.694353 7f83f9b28000 4 rocksdb: Compression algorithms supported:
-296> 2019-01-25 14:52:02.694355 7f83f9b28000 4 rocksdb: Snappy supported: 1
-295> 2019-01-25 14:52:02.694356 7f83f9b28000 4 rocksdb: Zlib supported: 0
-294> 2019-01-25 14:52:02.694357 7f83f9b28000 4 rocksdb: Bzip supported: 0
-293> 2019-01-25 14:52:02.694358 7f83f9b28000 4 rocksdb: LZ4 supported: 0
-292> 2019-01-25 14:52:02.694359 7f83f9b28000 4 rocksdb: ZSTD supported: 0
-291> 2019-01-25 14:52:02.694360 7f83f9b28000 4 rocksdb: Fast CRC32 supported: 1
-290> 2019-01-25 14:52:02.694561 7f83f9b28000 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/version_set.cc:2609] Recovering from manifest file: MANIFEST-000036
-289> 2019-01-25 14:52:02.694683 7f83f9b28000 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/column_family.cc:407] --------------- Options for column family [default]:
-288> 2019-01-25 14:52:02.694688 7f83f9b28000 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
-287> 2019-01-25 14:52:02.694690 7f83f9b28000 4 rocksdb: Options.merge_operator:
-286> 2019-01-25 14:52:02.694691 7f83f9b28000 4 rocksdb: Options.compaction_filter: None
-285> 2019-01-25 14:52:02.694692 7f83f9b28000 4 rocksdb: Options.compaction_filter_factory: None
-284> 2019-01-25 14:52:02.694693 7f83f9b28000 4 rocksdb: Options.memtable_factory: SkipListFactory
-283> 2019-01-25 14:52:02.694694 7f83f9b28000 4 rocksdb: Options.table_factory: BlockBasedTable
-282> 2019-01-25 14:52:02.694721 7f83f9b28000 4 rocksdb: table_factory options:
  flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56414480c168)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 1
  pin_l0_filter_and_index_blocks_in_cache: 1
  index_type: 0
  hash_index_allow_collision: 1
  checksum: 1
  no_block_cache: 0
  block_cache: 0x564144877438
  block_cache_name: LRUCache
  block_cache_options:
    capacity : 134217728
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  filter_policy: rocksdb.BuiltinBloomFilter
  whole_key_filtering: 1
  format_version: 2
-281> 2019-01-25 14:52:02.694730 7f83f9b28000 4 rocksdb: Options.write_buffer_size: 33554432
-280> 2019-01-25 14:52:02.694732 7f83f9b28000 4 rocksdb: Options.max_write_buffer_number: 2
-279> 2019-01-25 14:52:02.694733 7f83f9b28000 4 rocksdb: Options.compression: NoCompression
-278> 2019-01-25 14:52:02.694734 7f83f9b28000 4 rocksdb: Options.bottommost_compression: Disabled
-277> 2019-01-25 14:52:02.694735 7f83f9b28000 4 rocksdb: Options.prefix_extractor: nullptr
-276> 2019-01-25 14:52:02.694736 7f83f9b28000 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
-275> 2019-01-25 14:52:02.694736 7f83f9b28000 4 rocksdb: Options.num_levels: 7
-274> 2019-01-25 14:52:02.694737 7f83f9b28000 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
-273> 2019-01-25 14:52:02.694738 7f83f9b28000 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
-272> 2019-01-25 14:52:02.694739 7f83f9b28000 4 rocksdb: Options.compression_opts.window_bits: -14
-271> 2019-01-25 14:52:02.694740 7f83f9b28000 4 rocksdb: Options.compression_opts.level: -1
-270> 2019-01-25 14:52:02.694741 7f83f9b28000 4 rocksdb: Options.compression_opts.strategy: 0
-269> 2019-01-25 14:52:02.694741 7f83f9b28000 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
-268> 2019-01-25 14:52:02.694742 7f83f9b28000 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
-267> 2019-01-25 14:52:02.694743 7f83f9b28000 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
-266> 2019-01-25 14:52:02.694744 7f83f9b28000 4 rocksdb: Options.level0_stop_writes_trigger: 36
-265> 2019-01-25 14:52:02.694751 7f83f9b28000 4 rocksdb: Options.target_file_size_base: 67108864
-264> 2019-01-25 14:52:02.694753 7f83f9b28000 4 rocksdb: Options.target_file_size_multiplier: 1
-263> 2019-01-25 14:52:02.694753 7f83f9b28000 4 rocksdb: Options.max_bytes_for_level_base: 268435456
-262> 2019-01-25 14:52:02.694754 7f83f9b28000 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
-261> 2019-01-25 14:52:02.694755 7f83f9b28000 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
-260> 2019-01-25 14:52:02.694759 7f83f9b28000 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
-259> 2019-01-25 14:52:02.694761 7f83f9b28000 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
-258> 2019-01-25 14:52:02.694762 7f83f9b28000 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
-257> 2019-01-25 14:52:02.694762 7f83f9b28000 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
-256> 2019-01-25 14:52:02.694763 7f83f9b28000 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
-255> 2019-01-25 14:52:02.694764 7f83f9b28000 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
-254> 2019-01-25 14:52:02.694765 7f83f9b28000 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
-253> 2019-01-25 14:52:02.694765 7f83f9b28000 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
-252> 2019-01-25 14:52:02.694766 7f83f9b28000 4 rocksdb: Options.max_compaction_bytes: 1677721600
-251> 2019-01-25 14:52:02.694767 7f83f9b28000 4 rocksdb: Options.arena_block_size: 4194304
-250> 2019-01-25 14:52:02.694768 7f83f9b28000 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
-249> 2019-01-25 14:52:02.694769 7f83f9b28000 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
-248> 2019-01-25 14:52:02.694770 7f83f9b28000 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
-247> 2019-01-25 14:52:02.694771 7f83f9b28000 4 rocksdb: Options.disable_auto_compactions: 0
-246> 2019-01-25 14:52:02.694772 7f83f9b28000 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
-245> 2019-01-25 14:52:02.694773 7f83f9b28000 4 rocksdb: Options.compaction_pri: kByCompensatedSize
-244> 2019-01-25 14:52:02.694774 7f83f9b28000 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
-243> 2019-01-25 14:52:02.694775 7f83f9b28000 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
-242> 2019-01-25 14:52:02.694776 7f83f9b28000 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
-241> 2019-01-25 14:52:02.694777 7f83f9b28000 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
-240> 2019-01-25 14:52:02.694778 7f83f9b28000 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
-239> 2019-01-25 14:52:02.694779 7f83f9b28000 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
-238> 2019-01-25 14:52:02.694779 7f83f9b28000 4 rocksdb: Options.table_properties_collectors:
-237> 2019-01-25 14:52:02.694780 7f83f9b28000 4 rocksdb: Options.inplace_update_support: 0
-236> 2019-01-25 14:52:02.694785 7f83f9b28000 4 rocksdb: Options.inplace_update_num_locks: 10000
-235> 2019-01-25 14:52:02.694786 7f83f9b28000 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
-234> 2019-01-25 14:52:02.694787 7f83f9b28000 4 rocksdb: Options.memtable_huge_page_size: 0
-233> 2019-01-25 14:52:02.694789 7f83f9b28000 4 rocksdb: Options.bloom_locality: 0
-232> 2019-01-25 14:52:02.694790 7f83f9b28000 4 rocksdb: Options.max_successive_merges: 0
-231> 2019-01-25 14:52:02.694790 7f83f9b28000 4 rocksdb: Options.optimize_filters_for_hits: 0
-230> 2019-01-25 14:52:02.694791 7f83f9b28000 4 rocksdb: Options.paranoid_file_checks: 0
-229> 2019-01-25 14:52:02.694792 7f83f9b28000 4 rocksdb: Options.force_consistency_checks: 0
-228> 2019-01-25 14:52:02.694793 7f83f9b28000 4 rocksdb: Options.report_bg_io_stats: 0
-227> 2019-01-25 14:52:02.697349 7f83f9b28000 3 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/version_set.cc:2087] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
-226> 2019-01-25 14:52:02.697372 7f83f9b28000 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/version_set.cc:2859] Recovered from manifest file:/var/lib/ceph/mon/ceph-node1/store.db/MANIFEST-000036 succeeded,manifest_file_number is 36, next_file_number is 38, last_sequence is 4853, log_number is 0,prev_log_number is 0,max_column_family is 0
-225> 2019-01-25 14:52:02.697376 7f83f9b28000 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/version_set.cc:2867] Column family [default] (ID 0), log number is 35
-224> 2019-01-25 14:52:02.697457 7f83f9b28000 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548399122697447, "job": 1, "event": "recovery_started", "log_files": [37]}
-223> 2019-01-25 14:52:02.697463 7f83f9b28000 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/db_impl_open.cc:482] Recovering log #37 mode 2
-222> 2019-01-25 14:52:02.698295 7f83f9b28000 5 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/db_impl_open.cc:815] [default] [WriteLevel0TableForRecovery] Level-0 table #38: started
-221> 2019-01-25 14:52:02.699926 7f83f9b28000 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548399122699898, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 38, "file_size": 26931, "table_properties": {"data_size": 25757, "index_size": 141, "filter_size": 118, "raw_key_size": 457, "raw_average_key_size": 20, "raw_value_size": 25250, "raw_average_value_size": 1147, "num_data_blocks": 5, "num_entries": 22, "filter_policy_name": "rocksdb.BuiltinBloomFilter", "kDeletedKeys": "0", "kMergeOperands": "0"}}
-220> 2019-01-25 14:52:02.699943 7f83f9b28000 5 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/db_impl_open.cc:847] [default] [WriteLevel0TableForRecovery] Level-0 table #38: 26931 bytes OK
-219> 2019-01-25 14:52:02.699969 7f83f9b28000 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/version_set.cc:2395] Creating manifest 39
-218> 2019-01-25 14:52:02.700320 7f83f9b28000 3 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/version_set.cc:2087] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
-217> 2019-01-25 14:52:02.701603 7f83f9b28000 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548399122701596, "job": 1, "event": "recovery_finished"}
-216> 2019-01-25 14:52:02.701901 7f83f9b28000 5 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/db_impl_files.cc:307] [JOB 2] Delete /var/lib/ceph/mon/ceph-node1/store.db//MANIFEST-000036 type=3 #36 -- OK
-215> 2019-01-25 14:52:02.701962 7f83f9b28000 5 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/db_impl_files.cc:307] [JOB 2] Delete /var/lib/ceph/mon/ceph-node1/store.db//000037.log type=0 #37 -- OK
-214> 2019-01-25 14:52:02.706637 7f83f9b28000 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/db_impl_open.cc:1063] DB pointer 0x564144bac000
-213> 2019-01-25 14:52:02.707654 7f83eb748700 3 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/db_impl.cc:447] ------- DUMPING STATS -------
-212> 2019-01-25 14:52:02.707670 7f83eb748700 3 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/db_impl.cc:448]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------
L0   4/0 229.22 KB 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 11.4 0 1 0.002 0 0
L6   1/0   1.50 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0  0.0 0 0 0.000 0 0
Sum  5/0   1.72 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 11.4 0 1 0.002 0 0
Int  0/0   0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 11.4 0 1 0.002 0 0
Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 2.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 2.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

** File Read Latency Histogram By Level [default] **

** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------
L0   4/0 229.22 KB 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 11.4 0 1 0.002 0 0
L6   1/0   1.50 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0  0.0 0 0 0.000 0 0
Sum  5/0   1.72 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 11.4 0 1 0.002 0 0
Int  0/0   0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0  0.0 0 0 0.000 0 0
Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 1.98 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

** File Read Latency Histogram By Level [default] **
-211> 2019-01-25 14:52:02.707791 7f83eb748700 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/compaction_job.cc:1403] [default] [JOB 3] Compacting 4@0 + 1@6 files to L6, score 1.00
-210> 2019-01-25 14:52:02.707810 7f83eb748700 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/compaction_job.cc:1407] [default] Compaction start summary: Base version 2 Base level 0, inputs: [38(26KB) 35(109KB) 32(25KB) 29(67KB)], [27(1533KB)]
-209> 2019-01-25 14:52:02.707857 7f83eb748700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548399122707817, "job": 3, "event": "compaction_started", "files_L0": [38, 35, 32, 29], "files_L6": [27], "score": 1, "input_data_size": 1804843}
-208> 2019-01-25 14:52:02.708741 7f83f9b28000 1 RDMAStack RDMAStack ms_async_rdma_enable_hugepage value is: 0
-207> 2019-01-25 14:52:02.708873 7f83ecf4b700 2 Event(0x56414486d980 nevent=5000 time_id=1).set_owner idx=1 owner=140204592903936
-206> 2019-01-25 14:52:02.708886 7f83ec74a700 2 Event(0x56414486d480 nevent=5000 time_id=1).set_owner idx=0 owner=140204584511232
-205> 2019-01-25 14:52:02.708925 7f83f2f57700 2 Event(0x564144ca8580 nevent=5000 time_id=1).set_owner idx=2 owner=140204693616384
-204> 2019-01-25 14:52:02.709019 7f83f9b28000 0 starting mon.node1 rank 0 at public addr 10.0.0.12:6789/0 at bind addr 10.0.0.12:6789/0 mon_data /var/lib/ceph/mon/ceph-node1 fsid 56e15358-b701-4c38-ac30-ecfc8751a08e
-203> 2019-01-25 14:52:02.709097 7f83f9b28000 0 starting mon.node1 rank 0 at 10.0.0.12:6789/0 mon_data /var/lib/ceph/mon/ceph-node1 fsid 56e15358-b701-4c38-ac30-ecfc8751a08e
-202> 2019-01-25 14:52:02.709159 7f83f9b28000 5 adding auth protocol: cephx
-201> 2019-01-25 14:52:02.709168 7f83f9b28000 5 adding auth protocol: cephx
-200> 2019-01-25 14:52:02.709220 7f83f9b28000 10 log_channel(cluster) update_config to_monitors: true to_syslog: false syslog_facility: daemon prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
-199> 2019-01-25 14:52:02.709233 7f83f9b28000 10 log_channel(audit) update_config to_monitors: true to_syslog: false syslog_facility: local0 prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
-198> 2019-01-25 14:52:02.709868 7f83f9b28000 1 mon.node1@-1(probing) e1 preinit fsid 56e15358-b701-4c38-ac30-ecfc8751a08e
-197> 2019-01-25 14:52:02.710156 7f83f9b28000 1 mon.node1@-1(probing).mds e0 Unable to load 'last_metadata'
-196> 2019-01-25 14:52:02.710265 7f83f9b28000 1 mon.node1@-1(probing).paxosservice(pgmap 1..2) refresh upgraded, format 0 -> 1
-195> 2019-01-25 14:52:02.710277 7f83f9b28000 1 mon.node1@-1(probing).pg v0 on_upgrade discarding in-core PGMap
-194> 2019-01-25 14:52:02.710445 7f83f9b28000 4 mon.node1@-1(probing).mds e1 new map
-193> 2019-01-25 14:52:02.710452 7f83f9b28000 0 mon.node1@-1(probing).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2}
legacy client fscid: -1

No filesystems configured
-192> 2019-01-25 14:52:02.710787 7f83f9b28000 0 mon.node1@-1(probing).osd e12 crush map has features 288514050185494528, adjusting msgr requires
-191> 2019-01-25 14:52:02.710798 7f83f9b28000 0 mon.node1@-1(probing).osd e12 crush map has features 288514050185494528, adjusting msgr requires
-190> 2019-01-25 14:52:02.710803 7f83f9b28000 0 mon.node1@-1(probing).osd e12 crush map has features 1009089990564790272, adjusting msgr requires
-189> 2019-01-25 14:52:02.710805 7f83f9b28000 0 mon.node1@-1(probing).osd e12 crush map has features 288514050185494528, adjusting msgr requires
-188> 2019-01-25 14:52:02.711276 7f83f9b28000 1 mon.node1@-1(probing).paxosservice(auth 1..71) refresh upgraded, format 0 -> 2
-187> 2019-01-25 14:52:02.713444 7f83f9b28000 4 mon.node1@-1(probing).mgr e0 loading version 4
-186> 2019-01-25 14:52:02.713480 7f83f9b28000 4 mon.node1@-1(probing).mgr e4 active server: -(0)
-185> 2019-01-25 14:52:02.713516 7f83f9b28000 4 mon.node1@-1(probing).mgr e4 mkfs or daemon transitioned to available, loading commands
-184> 2019-01-25 14:52:02.713604 7f83f9b28000 4 mgrc handle_mgr_map Got map version 4
-183> 2019-01-25 14:52:02.713612 7f83f9b28000 4 mgrc handle_mgr_map Active mgr is now -
-182> 2019-01-25 14:52:02.713614 7f83f9b28000 4 mgrc reconnect No active mgr available yet
-181> 2019-01-25 14:52:02.713954 7f83f9b28000 2 auth: KeyRing::load: loaded key file /var/lib/ceph/mon/ceph-node1/keyring
-180> 2019-01-25 14:52:02.713963 7f83f9b28000 5 asok(0x56414485a1c0) register_command mon_status hook 0x56414480e750
-179> 2019-01-25 14:52:02.713970 7f83f9b28000 5 asok(0x56414485a1c0) register_command quorum_status hook 0x56414480e750
-178> 2019-01-25 14:52:02.713975 7f83f9b28000 5 asok(0x56414485a1c0) register_command sync_force hook 0x56414480e750
-177> 2019-01-25 14:52:02.713979 7f83f9b28000 5 asok(0x56414485a1c0) register_command add_bootstrap_peer_hint hook 0x56414480e750
-176> 2019-01-25 14:52:02.713989 7f83f9b28000 5 asok(0x56414485a1c0) register_command quorum enter hook 0x56414480e750
-175> 2019-01-25 14:52:02.713998 7f83f9b28000 5 asok(0x56414485a1c0) register_command quorum exit hook 0x56414480e750
-174> 2019-01-25 14:52:02.714011 7f83f9b28000 5 asok(0x56414485a1c0) register_command ops hook 0x56414480e750
-173> 2019-01-25 14:52:02.714019 7f83f9b28000 5 asok(0x56414485a1c0) register_command sessions hook 0x56414480e750
-172> 2019-01-25 14:52:02.714038 7f83f9b28000 1 -- - start start
-171> 2019-01-25 14:52:02.714044 7f83f9b28000 1 -- - start start
-170> 2019-01-25 14:52:02.714049 7f83f9b28000 2 mon.node1@-1(probing) e1 init
-169> 2019-01-25 14:52:02.720988 7f83ec74a700 1 Infiniband binding_port found active port 1
-168> 2019-01-25 14:52:02.721083 7f83ec74a700 1 Infiniband init assigning: 1024 receive buffers
-167> 2019-01-25 14:52:02.721093 7f83ec74a700 1 Infiniband init assigning: 1024 send buffers
-166> 2019-01-25 14:52:02.721095 7f83ec74a700 1 Infiniband init device allow 4194303 completion entries
-165> 2019-01-25 14:52:02.775464 7f83eb748700 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/compaction_job.cc:1116] [default] [JOB 3] Generated table #43: 1143 keys, 1797412 bytes
-164> 2019-01-25 14:52:02.775533 7f83eb748700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548399122775504, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 43, "file_size": 1797412, "table_properties": {"data_size": 1780513, "index_size": 8940, "filter_size": 7039, "raw_key_size": 20191, "raw_average_key_size": 17, "raw_value_size": 1758597, "raw_average_value_size": 1538, "num_data_blocks": 303, "num_entries": 1143, "filter_policy_name": "rocksdb.BuiltinBloomFilter", "kDeletedKeys": "0", "kMergeOperands": "0"}}
-163> 2019-01-25 14:52:02.776183 7f83eb748700 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/compaction_job.cc:1173] [default] [JOB 3] Compacted 4@0 + 1@6 files to L6 => 1797412 bytes
-162> 2019-01-25 14:52:02.776330 7f83eb748700 3 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/version_set.cc:2087] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
-161> 2019-01-25 14:52:02.831066 7f83eb748700 4 rocksdb: (Original Log Time 2019/01/25-14:52:02.776900) [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/compaction_job.cc:621] [default] compacted to: base level 6 max bytes base 26843546 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 26.4 rd, 26.3 wr, level 6, files in(4, 1) out(1) MB in(0.2, 1.5) out(1.7), read-write-amplify(15.3) write-amplify(7.7) OK, records in: 1184, records dropped: 41 -160> 2019-01-25 14:52:02.831123 7f83eb748700 4 rocksdb: (Original Log Time 2019/01/25-14:52:02.776949) EVENT_LOG_v1 {"time_micros": 1548399122776926, "job": 3, "event": "compaction_finished", "compaction_time_micros": 68288, "output_level": 6, "num_output_files": 1, "total_output_size": 1797412, "num_input_records": 1184, "num_output_records": 1143, "num_subcompactions": 1, "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} -159> 2019-01-25 14:52:02.831318 7f83eb748700 5 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/db_impl_files.cc:307] [JOB 3] Delete /var/lib/ceph/mon/ceph-node1/store.db/000038.sst type=2 #38 -- OK -158> 2019-01-25 14:52:02.831353 7f83eb748700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548399122831339, "job": 3, "event": "table_file_deletion", "file_number": 38} -157> 2019-01-25 14:52:02.831470 7f83eb748700 5 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/db_impl_files.cc:307] [JOB 3] Delete /var/lib/ceph/mon/ceph-node1/store.db/000035.sst type=2 #35 -- OK -156> 2019-01-25 14:52:02.831489 7f83eb748700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548399122831484, "job": 3, "event": "table_file_deletion", "file_number": 35} -155> 2019-01-25 14:52:02.831558 7f83eb748700 5 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/db_impl_files.cc:307] [JOB 3] Delete /var/lib/ceph/mon/ceph-node1/store.db/000032.sst type=2 #32 -- OK -154> 2019-01-25 14:52:02.831575 7f83eb748700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548399122831569, "job": 3, "event": "table_file_deletion", "file_number": 32} -153> 2019-01-25 14:52:02.831677 7f83eb748700 5 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/db_impl_files.cc:307] [JOB 3] Delete /var/lib/ceph/mon/ceph-node1/store.db/000029.sst type=2 #29 -- OK -152> 2019-01-25 14:52:02.831693 7f83eb748700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548399122831689, "job": 3, "event": "table_file_deletion", "file_number": 29} -151> 2019-01-25 14:52:02.832340 7f83eb748700 5 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/rocksdb/db/db_impl_files.cc:307] [JOB 3] Delete /var/lib/ceph/mon/ceph-node1/store.db/000027.sst type=2 #27 -- OK -150> 2019-01-25 14:52:02.832352 7f83eb748700 4 rocksdb: 
EVENT_LOG_v1 {"time_micros": 1548399122832349, "job": 3, "event": "table_file_deletion", "file_number": 27} -149> 2019-01-25 14:52:02.837673 7f83f9b28000 1 -- 10.0.0.12:6789/0 learned_addr learned my addr 10.0.0.12:6789/0 -148> 2019-01-25 14:52:02.837693 7f83f9b28000 1 -- 10.0.0.12:6789/0 _finish_bind bind my_inst.addr is 10.0.0.12:6789/0 -147> 2019-01-25 14:52:02.837699 7f83f9b28000 1 Processor -- start -146> 2019-01-25 14:52:02.837895 7f83f9b28000 1 Processor -- start -145> 2019-01-25 14:52:02.837988 7f83f9b28000 0 mon.node1@-1(probing) e1 my rank is now 0 (was -1) -144> 2019-01-25 14:52:02.838000 7f83f9b28000 1 -- 10.0.0.12:6789/0 shutdown_connections -143> 2019-01-25 14:52:02.838038 7f83f9b28000 1 mon.node1@0(probing) e1 win_standalone_election -142> 2019-01-25 14:52:02.838124 7f83f9b28000 1 mon.node1@0(probing).elector(17) init, last seen epoch 17, mid-election, bumping -141> 2019-01-25 14:52:02.840448 7f83f9b28000 0 log_channel(cluster) log [INF] : mon.node1 is new leader, mons node1 in quorum (ranks 0) -140> 2019-01-25 14:52:02.840460 7f83f9b28000 10 log_client _send_to_monlog to self -139> 2019-01-25 14:52:02.840462 7f83f9b28000 10 log_client log_queue is 1 last_log 1 sent 0 num 1 unsent 1 sending 1 -138> 2019-01-25 14:52:02.840469 7f83f9b28000 10 log_client will send 2019-01-25 14:52:02.840459 mon.node1 mon.0 10.0.0.12:6789/0 1 : cluster [INF] mon.node1 is new leader, mons node1 in quorum (ranks 0) -137> 2019-01-25 14:52:02.840491 7f83f9b28000 1 -- 10.0.0.12:6789/0 --> 10.0.0.12:6789/0 -- log(1 entries from seq 1 at 2019-01-25 14:52:02.840459) v1 -- 0x564144d64d80 con 0 -136> 2019-01-25 14:52:02.840566 7f83f9b28000 0 log_channel(cluster) log [DBG] : monmap e1: 1 mons at {node1=10.0.0.12:6789/0} -135> 2019-01-25 14:52:02.840575 7f83f9b28000 10 log_client _send_to_monlog to self -134> 2019-01-25 14:52:02.840578 7f83f9b28000 10 log_client log_queue is 2 last_log 2 sent 1 num 2 unsent 1 sending 1 -133> 2019-01-25 14:52:02.840583 7f83f9b28000 10 log_client will send 2019-01-25 14:52:02.840574 mon.node1 mon.0 10.0.0.12:6789/0 2 : cluster [DBG] monmap e1: 1 mons at {node1=10.0.0.12:6789/0} -132> 2019-01-25 14:52:02.840601 7f83f9b28000 1 -- 10.0.0.12:6789/0 --> 10.0.0.12:6789/0 -- log(1 entries from seq 2 at 2019-01-25 14:52:02.840574) v1 -- 0x564144d64fc0 con 0 -131> 2019-01-25 14:52:02.840636 7f83f9b28000 5 mon.node1@0(leader).paxos(paxos active c 1..638) is_readable = 1 - now=2019-01-25 14:52:02.840637 lease_expire=0.000000 has v0 lc 638 -130> 2019-01-25 14:52:02.840677 7f83f9b28000 0 log_channel(cluster) log [DBG] : fsmap -129> 2019-01-25 14:52:02.840683 7f83f9b28000 10 log_client _send_to_monlog to self -128> 2019-01-25 14:52:02.840684 7f83f9b28000 10 log_client log_queue is 3 last_log 3 sent 2 num 3 unsent 1 sending 1 -127> 2019-01-25 14:52:02.840651 7f83ee6bf700 1 -- 10.0.0.12:6789/0 <== mon.0 10.0.0.12:6789/0 0 ==== log(1 entries from seq 1 at 2019-01-25 14:52:02.840459) v1 ==== 0+0+0 (0 0 0) 0x564144d64d80 con 0x564144c3a000 -126> 2019-01-25 14:52:02.840688 7f83f9b28000 10 log_client will send 2019-01-25 14:52:02.840682 mon.node1 mon.0 10.0.0.12:6789/0 3 : cluster [DBG] fsmap -125> 2019-01-25 14:52:02.840699 7f83f9b28000 1 -- 10.0.0.12:6789/0 --> 10.0.0.12:6789/0 -- log(1 entries from seq 3 at 2019-01-25 14:52:02.840682) v1 -- 0x564144d65200 con 0 -124> 2019-01-25 14:52:02.840795 7f83f9b28000 0 log_channel(cluster) log [DBG] : osdmap e12: 2 total, 0 up, 1 in -123> 2019-01-25 14:52:02.840803 7f83f9b28000 10 log_client _send_to_monlog to self -122> 2019-01-25 14:52:02.840804 
7f83f9b28000 10 log_client log_queue is 4 last_log 4 sent 3 num 4 unsent 1 sending 1 -121> 2019-01-25 14:52:02.840807 7f83f9b28000 10 log_client will send 2019-01-25 14:52:02.840801 mon.node1 mon.0 10.0.0.12:6789/0 4 : cluster [DBG] osdmap e12: 2 total, 0 up, 1 in -120> 2019-01-25 14:52:02.840821 7f83f9b28000 1 -- 10.0.0.12:6789/0 --> 10.0.0.12:6789/0 -- log(1 entries from seq 4 at 2019-01-25 14:52:02.840801) v1 -- 0x564144d65440 con 0 -119> 2019-01-25 14:52:02.840899 7f83f9b28000 0 log_channel(cluster) log [DBG] : mgrmap e4: no daemons active -118> 2019-01-25 14:52:02.840906 7f83f9b28000 10 log_client _send_to_monlog to self -117> 2019-01-25 14:52:02.840907 7f83f9b28000 10 log_client log_queue is 5 last_log 5 sent 4 num 5 unsent 1 sending 1 -116> 2019-01-25 14:52:02.840909 7f83f9b28000 10 log_client will send 2019-01-25 14:52:02.840905 mon.node1 mon.0 10.0.0.12:6789/0 5 : cluster [DBG] mgrmap e4: no daemons active -115> 2019-01-25 14:52:02.840920 7f83f9b28000 1 -- 10.0.0.12:6789/0 --> 10.0.0.12:6789/0 -- log(1 entries from seq 5 at 2019-01-25 14:52:02.840905) v1 -- 0x564144d65680 con 0 -114> 2019-01-25 14:52:02.840993 7f83f9b28000 5 mon.node1@0(leader) e1 apply_quorum_to_compatset_features -113> 2019-01-25 14:52:02.841012 7f83f9b28000 5 mon.node1@0(leader) e1 apply_monmap_to_compatset_features -112> 2019-01-25 14:52:02.841163 7f83ee6bf700 5 mon.node1@0(leader) e1 _ms_dispatch setting monitor caps on this connection -111> 2019-01-25 14:52:02.841194 7f83ee6bf700 5 mon.node1@0(leader).paxos(paxos active c 1..638) is_readable = 1 - now=2019-01-25 14:52:02.841195 lease_expire=0.000000 has v0 lc 638 -110> 2019-01-25 14:52:02.841258 7f83ee6bf700 1 -- 10.0.0.12:6789/0 <== mon.0 10.0.0.12:6789/0 0 ==== log(1 entries from seq 2 at 2019-01-25 14:52:02.840574) v1 ==== 0+0+0 (0 0 0) 0x564144d64fc0 con 0x564144c3a000 -109> 2019-01-25 14:52:02.841275 7f83ee6bf700 5 mon.node1@0(leader).paxos(paxos active c 1..638) is_readable = 1 - now=2019-01-25 14:52:02.841276 lease_expire=0.000000 has v0 lc 638 -108> 2019-01-25 14:52:02.841295 7f83ee6bf700 1 -- 10.0.0.12:6789/0 <== mon.0 10.0.0.12:6789/0 0 ==== log(1 entries from seq 3 at 2019-01-25 14:52:02.840682) v1 ==== 0+0+0 (0 0 0) 0x564144d65200 con 0x564144c3a000 -107> 2019-01-25 14:52:02.841308 7f83ee6bf700 5 mon.node1@0(leader).paxos(paxos active c 1..638) is_readable = 1 - now=2019-01-25 14:52:02.841308 lease_expire=0.000000 has v0 lc 638 -106> 2019-01-25 14:52:02.841327 7f83ee6bf700 1 -- 10.0.0.12:6789/0 <== mon.0 10.0.0.12:6789/0 0 ==== log(1 entries from seq 4 at 2019-01-25 14:52:02.840801) v1 ==== 0+0+0 (0 0 0) 0x564144d65440 con 0x564144c3a000 -105> 2019-01-25 14:52:02.841340 7f83ee6bf700 5 mon.node1@0(leader).paxos(paxos active c 1..638) is_readable = 1 - now=2019-01-25 14:52:02.841341 lease_expire=0.000000 has v0 lc 638 -104> 2019-01-25 14:52:02.841356 7f83ee6bf700 1 -- 10.0.0.12:6789/0 <== mon.0 10.0.0.12:6789/0 0 ==== log(1 entries from seq 5 at 2019-01-25 14:52:02.840905) v1 ==== 0+0+0 (0 0 0) 0x564144d65680 con 0x564144c3a000 -103> 2019-01-25 14:52:02.841368 7f83ee6bf700 5 mon.node1@0(leader).paxos(paxos active c 1..638) is_readable = 1 - now=2019-01-25 14:52:02.841368 lease_expire=0.000000 has v0 lc 638 -102> 2019-01-25 14:52:02.891440 7f83f1f55700 5 mon.node1@0(leader).paxos(paxos active c 1..638) queue_pending_finisher 0x56414480e890 -101> 2019-01-25 14:52:02.893461 7f83ebf49700 4 mgrc handle_mgr_map Got map version 4 -100> 2019-01-25 14:52:02.893475 7f83ebf49700 4 mgrc handle_mgr_map Active mgr is now - -99> 2019-01-25 14:52:02.893478 
7f83ebf49700 4 mgrc reconnect No active mgr available yet
   -98> 2019-01-25 14:52:02.893649 7f83ebf49700 2 mon.node1@0(leader) e1 send_reply 0x564144bdcc60 0x564144c050e0 log(last 1) v1
   -97> 2019-01-25 14:52:02.893666 7f83ebf49700 1 -- 10.0.0.12:6789/0 --> 10.0.0.12:6789/0 -- log(last 1) v1 -- 0x564144c050e0 con 0
   -96> 2019-01-25 14:52:02.893718 7f83ebf49700 2 mon.node1@0(leader) e1 send_reply 0x564144bdcea0 0x564144c045a0 log(last 2) v1
   -95> 2019-01-25 14:52:02.893729 7f83ebf49700 1 -- 10.0.0.12:6789/0 --> 10.0.0.12:6789/0 -- log(last 2) v1 -- 0x564144c045a0 con 0
   -94> 2019-01-25 14:52:02.893749 7f83ebf49700 2 mon.node1@0(leader) e1 send_reply 0x564144bdd0e0 0x564144c04780 log(last 3) v1
   -93> 2019-01-25 14:52:02.893758 7f83ebf49700 1 -- 10.0.0.12:6789/0 --> 10.0.0.12:6789/0 -- log(last 3) v1 -- 0x564144c04780 con 0
   -92> 2019-01-25 14:52:02.893767 7f83ee6bf700 1 -- 10.0.0.12:6789/0 <== mon.0 10.0.0.12:6789/0 0 ==== log(last 1) v1 ==== 0+0+0 (0 0 0) 0x564144c050e0 con 0x564144c3a000
   -91> 2019-01-25 14:52:02.893781 7f83ebf49700 2 mon.node1@0(leader) e1 send_reply 0x564144bdd320 0x564144c04960 log(last 4) v1
   -90> 2019-01-25 14:52:02.893787 7f83ebf49700 1 -- 10.0.0.12:6789/0 --> 10.0.0.12:6789/0 -- log(last 4) v1 -- 0x564144c04960 con 0
   -89> 2019-01-25 14:52:02.893809 7f83ebf49700 2 mon.node1@0(leader) e1 send_reply 0x564144bdd560 0x564144c04b40 log(last 5) v1
   -88> 2019-01-25 14:52:02.893819 7f83ebf49700 1 -- 10.0.0.12:6789/0 --> 10.0.0.12:6789/0 -- log(last 5) v1 -- 0x564144c04b40 con 0
   -87> 2019-01-25 14:52:02.893859 7f83ee6bf700 10 log_client handle_log_ack log(last 1) v1
   -86> 2019-01-25 14:52:02.893869 7f83ee6bf700 10 log_client logged 2019-01-25 14:52:02.840459 mon.node1 mon.0 10.0.0.12:6789/0 1 : cluster [INF] mon.node1 is new leader, mons node1 in quorum (ranks 0)
   -85> 2019-01-25 14:52:02.893900 7f83ee6bf700 1 -- 10.0.0.12:6789/0 <== mon.0 10.0.0.12:6789/0 0 ==== log(last 2) v1 ==== 0+0+0 (0 0 0) 0x564144c045a0 con 0x564144c3a000
   -84> 2019-01-25 14:52:02.893910 7f83ee6bf700 10 log_client handle_log_ack log(last 2) v1
   -83> 2019-01-25 14:52:02.893912 7f83ee6bf700 10 log_client logged 2019-01-25 14:52:02.840574 mon.node1 mon.0 10.0.0.12:6789/0 2 : cluster [DBG] monmap e1: 1 mons at {node1=10.0.0.12:6789/0}
   -82> 2019-01-25 14:52:02.893924 7f83ee6bf700 1 -- 10.0.0.12:6789/0 <== mon.0 10.0.0.12:6789/0 0 ==== log(last 3) v1 ==== 0+0+0 (0 0 0) 0x564144c04780 con 0x564144c3a000
   -81> 2019-01-25 14:52:02.893934 7f83ee6bf700 10 log_client handle_log_ack log(last 3) v1
   -80> 2019-01-25 14:52:02.893935 7f83ee6bf700 10 log_client logged 2019-01-25 14:52:02.840682 mon.node1 mon.0 10.0.0.12:6789/0 3 : cluster [DBG] fsmap
   -79> 2019-01-25 14:52:02.893945 7f83ee6bf700 1 -- 10.0.0.12:6789/0 <== mon.0 10.0.0.12:6789/0 0 ==== log(last 4) v1 ==== 0+0+0 (0 0 0) 0x564144c04960 con 0x564144c3a000
   -78> 2019-01-25 14:52:02.893954 7f83ee6bf700 10 log_client handle_log_ack log(last 4) v1
   -77> 2019-01-25 14:52:02.893955 7f83ee6bf700 10 log_client logged 2019-01-25 14:52:02.840801 mon.node1 mon.0 10.0.0.12:6789/0 4 : cluster [DBG] osdmap e12: 2 total, 0 up, 1 in
   -76> 2019-01-25 14:52:02.893967 7f83ee6bf700 1 -- 10.0.0.12:6789/0 <== mon.0 10.0.0.12:6789/0 0 ==== log(last 5) v1 ==== 0+0+0 (0 0 0) 0x564144c04b40 con 0x564144c3a000
   -75> 2019-01-25 14:52:02.893975 7f83ee6bf700 10 log_client handle_log_ack log(last 5) v1
   -74> 2019-01-25 14:52:02.893976 7f83ee6bf700 10 log_client logged 2019-01-25 14:52:02.840905 mon.node1 mon.0 10.0.0.12:6789/0 5 : cluster [DBG] mgrmap e4: no daemons active
   -73> 2019-01-25 14:52:07.714393 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -72> 2019-01-25 14:52:07.714566 7f83f1f55700 5 mon.node1@0(leader).paxos(paxos active c 1..639) queue_pending_finisher 0x56414480e950
   -71> 2019-01-25 14:52:07.716608 7f83ebf49700 4 mgrc handle_mgr_map Got map version 4
   -70> 2019-01-25 14:52:07.716620 7f83ebf49700 4 mgrc handle_mgr_map Active mgr is now -
   -69> 2019-01-25 14:52:07.716624 7f83ebf49700 4 mgrc reconnect No active mgr available yet
   -68> 2019-01-25 14:52:12.715826 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -67> 2019-01-25 14:52:17.716151 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -66> 2019-01-25 14:52:22.716429 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -65> 2019-01-25 14:52:27.716745 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -64> 2019-01-25 14:52:32.717068 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -63> 2019-01-25 14:52:37.717701 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -62> 2019-01-25 14:52:42.718028 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -61> 2019-01-25 14:52:47.718443 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -60> 2019-01-25 14:52:52.718840 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -59> 2019-01-25 14:52:57.719173 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -58> 2019-01-25 14:53:02.714025 7f83f1f55700 0 mon.node1@0(leader).data_health(19) update_stats avail 91% total 132GiB, used 4.06GiB, avail 121GiB
   -57> 2019-01-25 14:53:02.719389 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -56> 2019-01-25 14:53:07.719684 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -55> 2019-01-25 14:53:12.720079 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -54> 2019-01-25 14:53:17.720414 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -53> 2019-01-25 14:53:22.720729 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -52> 2019-01-25 14:53:27.721088 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -51> 2019-01-25 14:53:32.721482 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -50> 2019-01-25 14:53:37.721763 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -49> 2019-01-25 14:53:42.722052 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -48> 2019-01-25 14:53:47.722432 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -47> 2019-01-25 14:53:52.722824 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -46> 2019-01-25 14:53:57.723094 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -45> 2019-01-25 14:54:02.714341 7f83f1f55700 0 mon.node1@0(leader).data_health(19) update_stats avail 91% total 132GiB, used 4.06GiB, avail 121GiB
   -44> 2019-01-25 14:54:02.723412 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -43> 2019-01-25 14:54:07.723768 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -42> 2019-01-25 14:54:12.724166 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -41> 2019-01-25 14:54:17.724449 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -40> 2019-01-25 14:54:22.724773 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -39> 2019-01-25 14:54:27.725134 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -38> 2019-01-25 14:54:32.725534 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -37> 2019-01-25 14:54:37.725810 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -36> 2019-01-25 14:54:42.726135 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -35> 2019-01-25 14:54:45.051899 7f83f3fa1700 5 asok(0x56414485a1c0) AdminSocket: request 'get_command_descriptions' '' to 0x56414480e2b0 returned 2859 bytes
   -34> 2019-01-25 14:54:45.067892 7f83f3fa1700 1 do_command 'config set' 'val:[20] var:debug_ms
   -33> 2019-01-25 14:54:45.070558 7f83f3fa1700 1 do_command 'config set' 'val:[20] var:debug_ms result is 22 bytes
   -32> 2019-01-25 14:54:45.070569 7f83f3fa1700 5 asok(0x56414485a1c0) AdminSocket: request 'config set' '' to 0x56414480e190 returned 22 bytes
   -31> 2019-01-25 14:54:47.726504 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -30> 2019-01-25 14:54:52.726791 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
   -29> 2019-01-25 14:54:57.727073 7f83f1f55700 5 mon.node1@0(leader).osd e12 can_mark_out current in_ratio 0.5 < min 0.75, will not mark osds out
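
The long run of can_mark_out messages above is steady-state noise rather than part of the failure: with 2 OSDs total and only 1 in (see the osdmap line at -77), in_ratio = 1/2 = 0.5, below the 0.75 minimum the message quotes, so the monitor declines to mark OSDs out. The asok entries at -34..-32 show debug_ms being raised to 20 over the admin socket (e.g. via something like `ceph daemon mon.node1 config set debug_ms 20`), which is why the messenger/RDMA trace in the final second below is so verbose. A minimal sketch of the ratio guard implied by the message (illustrative names only, not Ceph's actual code):

    #include <cstdio>

    // Sketch of the guard behind "can_mark_out current in_ratio 0.5 < min 0.75,
    // will not mark osds out". num_in, num_total and min_in_ratio are
    // illustrative placeholders, not Ceph's real variables.
    static bool can_mark_out(int num_in, int num_total, double min_in_ratio = 0.75)
    {
      double in_ratio = static_cast<double>(num_in) / num_total;  // 1/2 = 0.5 here
      return in_ratio >= min_in_ratio;  // false -> "will not mark osds out"
    }

    int main()
    {
      std::printf("%s\n", can_mark_out(1, 2) ? "may mark out"
                                             : "will not mark osds out");
      return 0;
    }

Everything RDMA-related that follows happens within the same second, 14:54:58:
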
   -28> 2019-01-25 14:54:58.378349 7f83ec74a700 10 Processor -- accept listen_fd=11
   -27> 2019-01-25 14:54:58.378394 7f83ec74a700 15 RDMAServerSocketImpl accept
   -26> 2019-01-25 14:54:58.378452 7f83ec74a700 20 Infiniband init started.
   -25> 2019-01-25 14:54:58.379013 7f83ec74a700 20 Infiniband init successfully create queue pair: qp=0x56414483bc08
   -24> 2019-01-25 14:54:58.379545 7f83ec74a700 20 Infiniband init successfully change queue pair to INIT: qp=0x56414483bc08
   -23> 2019-01-25 14:54:58.379571 7f83ec74a700 20 Event(0x564144ca8580 nevent=5000 time_id=1).wakeup
   -22> 2019-01-25 14:54:58.379584 7f83ec74a700 20 RDMAServerSocketImpl accept accepted a new QP, tcp_fd: 34
   -21> 2019-01-25 14:54:58.379587 7f83ec74a700 10 Processor -- accept accepted incoming on sd 35
   -20> 2019-01-25 14:54:58.379607 7f83ec74a700 10 -- 10.0.0.12:6789/0 >> - conn(0x564144e5c000 :-1 s=STATE_NONE pgs=0 cs=0 l=0).accept sd=35
   -19> 2019-01-25 14:54:58.379633 7f83ec74a700 15 RDMAServerSocketImpl accept
   -18> 2019-01-25 14:54:58.379657 7f83f2f57700 20 Event(0x564144ca8580 nevent=5000 time_id=1).create_file_event create event started fd=34 mask=1 original mask is 0
   -17> 2019-01-25 14:54:58.379677 7f83f2f57700 20 EpollDriver.add_event add event fd=34 cur_mask=0 add_mask=1 to 23
   -16> 2019-01-25 14:54:58.379686 7f83f2f57700 20 Event(0x564144ca8580 nevent=5000 time_id=1).create_file_event create event end fd=34 mask=1 original mask is 1
   -15> 2019-01-25 14:54:58.379707 7f83f2f57700 20 -- 10.0.0.12:6789/0 >> - conn(0x564144e5c000 :-1 s=STATE_ACCEPTING pgs=0 cs=0 l=0).process prev state is STATE_ACCEPTING
   -14> 2019-01-25 14:54:58.379745 7f83f2f57700 20 Event(0x564144ca8580 nevent=5000 time_id=1).create_file_event create event started fd=35 mask=1 original mask is 0
   -13> 2019-01-25 14:54:58.379748 7f83f2f57700 20 EpollDriver.add_event add event fd=35 cur_mask=0 add_mask=1 to 23
   -12> 2019-01-25 14:54:58.379753 7f83f2f57700 20 Event(0x564144ca8580 nevent=5000 time_id=1).create_file_event create event end fd=35 mask=1 original mask is 1
   -11> 2019-01-25 14:54:58.379761 7f83f2f57700 1 -- 10.0.0.12:6789/0 >> - conn(0x564144e5c000 :6789 s=STATE_ACCEPTING pgs=0 cs=0 l=0)._process_connection sd=35 -
   -10> 2019-01-25 14:54:58.379778 7f83f2f57700 20 RDMAConnectedSocketImpl send fake send to upper, QP: 167
    -9> 2019-01-25 14:54:58.379781 7f83f2f57700 10 -- 10.0.0.12:6789/0 >> - conn(0x564144e5c000 :6789 s=STATE_ACCEPTING pgs=0 cs=0 l=0)._try_send sent bytes 281 remaining bytes 0
    -8> 2019-01-25 14:54:58.379792 7f83f2f57700 10 -- 10.0.0.12:6789/0 >> - conn(0x564144e5c000 :6789 s=STATE_ACCEPTING_WAIT_BANNER_ADDR pgs=0 cs=0 l=0)._process_connection write banner and addr done: -
    -7> 2019-01-25 14:54:58.379802 7f83f2f57700 20 -- 10.0.0.12:6789/0 >> - conn(0x564144e5c000 :6789 s=STATE_ACCEPTING_WAIT_BANNER_ADDR pgs=0 cs=0 l=0).process prev state is STATE_ACCEPTING
    -6> 2019-01-25 14:54:58.379813 7f83f2f57700 20 RDMAConnectedSocketImpl read notify_fd : 0 in 167 r = -1
    -5> 2019-01-25 14:54:58.379825 7f83f2f57700 20 RDMAConnectedSocketImpl handle_connection QP: 167 tcp_fd: 34 notify_fd: 35
    -4> 2019-01-25 14:54:58.379849 7f83f2f57700 5 Infiniband recv_msg recevd: 2, 166, 0, 0, fe80000000000000506b4b0300ed6256
    -3> 2019-01-25 14:54:58.379862 7f83f2f57700 10 Infiniband send_msg sending: 2, 167, 0, 0, fe80000000000000506b4b0300ed6256
    -2> 2019-01-25 14:54:58.379923 7f83f2f57700 20 RDMAConnectedSocketImpl activate Choosing gid_index 67, sl 3
    -1> 2019-01-25 14:54:58.379946 7f83f2f57700 -1 RDMAConnectedSocketImpl activate failed to transition to RTR state: (22) Invalid argument
     0> 2019-01-25 14:54:58.383313 7f83f2f57700 -1
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/msg/async/rdma/RDMAConnectedSocketImpl.cc: In function 'void RDMAConnectedSocketImpl::handle_connection()' thread 7f83f2f57700 time 2019-01-25 14:54:58.379971
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.8/rpm/el7/BUILD/ceph-12.2.8/src/msg/async/rdma/RDMAConnectedSocketImpl.cc: 244: FAILED assert(!r)

 ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x56413aeec320]
 2: (RDMAConnectedSocketImpl::handle_connection()+0x45e) [0x56413b1e972e]
 3: (EventCenter::process_events(int, std::chrono::duration<long, std::ratio<1l, 1000000000l> >*)+0x359) [0x56413afa3f59]
 4: (()+0x6eeb1e) [0x56413afa6b1e]
 5: (()+0xb5070) [0x7f83f6960070]
 6: (()+0x7e25) [0x7f83f8ecee25]
 7: (clone()+0x6d) [0x7f83f60c4bad]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 client
   1/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 journal
  20/20 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 1 reserver
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/10 civetweb
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
   0/ 0 refs
   1/ 5 xio
   1/ 5 compressor
   1/ 5 bluestore
   1/ 5 bluefs
   1/ 3 bdev
   1/ 5 kstore
   4/ 5 rocksdb
   4/ 5 leveldb
   4/ 5 memdb
   1/ 5 kinetic
   1/ 5 fuse
   1/ 5 mgr
   1/ 5 mgrc
   1/ 5 dpdk
   1/ 5 eventtrace
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent 10000
  max_new 1000
  log_file /var/log/ceph/ceph-mon.node1.log
--- end dump of recent events ---

2019-01-25 14:54:58.390917 7f83f2f57700 -1 *** Caught signal (Aborted) **
 in thread 7f83f2f57700 thread_name:msgr-worker-2

 ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)
 1: (()+0x93f2d1) [0x56413b1f72d1]
 2: (()+0xf6d0) [0x7f83f8ed66d0]
 3: (gsignal()+0x37) [0x7f83f5ffc277]
 4: (abort()+0x148) [0x7f83f5ffd968]
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x284) [0x56413aeec494]
 6: (RDMAConnectedSocketImpl::handle_connection()+0x45e) [0x56413b1e972e]
 7: (EventCenter::process_events(int, std::chrono::duration<long, std::ratio<1l, 1000000000l> >*)+0x359) [0x56413afa3f59]
 8: (()+0x6eeb1e) [0x56413afa6b1e]
 9: (()+0xb5070) [0x7f83f6960070]
 10: (()+0x7e25) [0x7f83f8ecee25]
 11: (clone()+0x6d) [0x7f83f60c4bad]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
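
The proximate failure is at -1>: the INIT->RTR transition for QP 167 returned (22) Invalid argument, and handle_connection() then hits FAILED assert(!r) at RDMAConnectedSocketImpl.cc:244 instead of failing just that connection. RTR is the step where the driver first validates the address vector, so the "Choosing gid_index 67, sl 3" line at -2> deserves scrutiny: an sgid_index beyond the port's GID table is a classic source of EINVAL here. Below is a minimal libibverbs sketch of the kind of transition that appears to be failing, with placeholder values standing in for what the two sides exchanged over tcp_fd 34 at -4>/-3> (QP numbers 166/167 plus the fe80...6256 GID); this is an illustration, not Ceph's exact code:

    #include <infiniband/verbs.h>
    #include <cstdio>
    #include <cstring>

    // Hedged sketch of an INIT->RTR transition. peer_qpn/peer_psn/peer_gid,
    // gid_index, sl and port_num stand in for the values the peers exchange;
    // none of these names come from Ceph.
    static int to_rtr(ibv_qp *qp, uint32_t peer_qpn, uint32_t peer_psn,
                      const ibv_gid &peer_gid, uint8_t gid_index, uint8_t sl,
                      uint8_t port_num)
    {
      ibv_qp_attr attr;
      std::memset(&attr, 0, sizeof(attr));
      attr.qp_state = IBV_QPS_RTR;
      attr.path_mtu = IBV_MTU_1024;            // must not exceed the active MTU
      attr.dest_qp_num = peer_qpn;             // e.g. 166, from recv_msg
      attr.rq_psn = peer_psn;
      attr.max_dest_rd_atomic = 1;
      attr.min_rnr_timer = 12;
      attr.ah_attr.is_global = 1;
      attr.ah_attr.grh.dgid = peer_gid;        // fe80...6256 from the exchange
      attr.ah_attr.grh.sgid_index = gid_index; // 67 in the log; must be a valid table index
      attr.ah_attr.grh.hop_limit = 1;
      attr.ah_attr.sl = sl;                    // 3 in the log
      attr.ah_attr.port_num = port_num;
      int r = ibv_modify_qp(qp, &attr,
                            IBV_QP_STATE | IBV_QP_AV | IBV_QP_PATH_MTU |
                            IBV_QP_DEST_QPN | IBV_QP_RQ_PSN |
                            IBV_QP_MAX_DEST_RD_ATOMIC | IBV_QP_MIN_RNR_TIMER);
      if (r)  // errno-style return; 22 == EINVAL, matching the -1> line
        std::fprintf(stderr, "failed to transition to RTR state: (%d) %s\n",
                     r, std::strerror(r));
      return r;
    }

Whatever the invalid attribute turns out to be, aborting the whole monitor on what may be a peer-supplied or misconfigured address is arguably a robustness problem in its own right: the assert could instead tear down only the offending connection.
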
--- begin dump of recent events ---
     0> 2019-01-25 14:54:58.390917 7f83f2f57700 -1 *** Caught signal (Aborted) **
 in thread 7f83f2f57700 thread_name:msgr-worker-2

 ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)
 1: (()+0x93f2d1) [0x56413b1f72d1]
 2: (()+0xf6d0) [0x7f83f8ed66d0]
 3: (gsignal()+0x37) [0x7f83f5ffc277]
 4: (abort()+0x148) [0x7f83f5ffd968]
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x284) [0x56413aeec494]
 6: (RDMAConnectedSocketImpl::handle_connection()+0x45e) [0x56413b1e972e]
 7: (EventCenter::process_events(int, std::chrono::duration<long, std::ratio<1l, 1000000000l> >*)+0x359) [0x56413afa3f59]
 8: (()+0x6eeb1e) [0x56413afa6b1e]
 9: (()+0xb5070) [0x7f83f6960070]
 10: (()+0x7e25) [0x7f83f8ecee25]
 11: (clone()+0x6d) [0x7f83f60c4bad]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 client
   1/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 journal
  20/20 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 1 reserver
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/10 civetweb
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
   0/ 0 refs
   1/ 5 xio
   1/ 5 compressor
   1/ 5 bluestore
   1/ 5 bluefs
   1/ 3 bdev
   1/ 5 kstore
   4/ 5 rocksdb
   4/ 5 leveldb
   4/ 5 memdb
   1/ 5 kinetic
   1/ 5 fuse
   1/ 5 mgr
   1/ 5 mgrc
   1/ 5 dpdk
   1/ 5 eventtrace
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent 10000
  max_new 1000
  log_file /var/log/ceph/ceph-mon.node1.log
--- end dump of recent events ---
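
A quick way to test the out-of-range-GID theory on this node is to compare index 67 against the port's GID table length. A small self-contained check follows; the device and port number are assumptions, so substitute the HCA and port Ceph is configured to use, and build with g++ plus -libverbs:

    #include <infiniband/verbs.h>
    #include <cstdio>

    int main()
    {
      int num = 0;
      ibv_device **devs = ibv_get_device_list(&num);
      if (!devs || num == 0) {
        std::fprintf(stderr, "no RDMA devices found\n");
        return 1;
      }
      ibv_context *ctx = ibv_open_device(devs[0]);  // assumption: first device
      if (!ctx) {
        ibv_free_device_list(devs);
        return 1;
      }
      ibv_port_attr pattr;
      if (ibv_query_port(ctx, 1, &pattr) == 0)      // assumption: port 1
        // gid_index 67 from the log must be < gid_tbl_len (and point at a
        // populated entry), or the RTR transition can fail with EINVAL.
        std::printf("gid_tbl_len = %d\n", pattr.gid_tbl_len);
      ibv_close_device(ctx);
      ibv_free_device_list(devs);
      return 0;
    }

The populated entries themselves can be inspected under /sys/class/infiniband/<device>/ports/<port>/gids/. If index 67 turns out to be invalid, the next steps are to find where this 12.2.8 RDMA stack derives its gid index from the configuration, and to report the assert-on-EINVAL behaviour upstream so a bad peer address fails the connection rather than the monitor.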