
Bug #51673

MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4

Added by Daniel Keller over 1 year ago. Updated about 1 year ago.

Status: Resolved
Priority: Normal
Category: -
Target version:
% Done: 0%
Source: Community (user)
Tags:
Backport: pacific,octopus
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS): MDSMonitor
Labels (FS): crash
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I tried to upgrade my Ceph cluster from 15.2.13 to 16.2.4 on my Proxmox 7.0 servers.

After restarting, the first monitor crashes immediately.
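For anyone trying to reproduce or debug this, the log below is easier to interpret with monitor debug logging raised on the affected node before restarting. A minimal ceph.conf sketch using the standard debug options (the levels shown are the usual diagnostic values, not settings taken from this cluster):

```ini
[mon]
    # verbose MDSMonitor/monitor logging for the next restart
    debug mon = 20
    # log messenger traffic at low verbosity
    debug ms = 1
    # paxos activity, useful when the crash happens during store replay
    debug paxos = 20
```

With these set, restarting the monitor should produce a much more detailed trace leading up to the abort.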


--- begin dump of recent events ---
  -267> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command assert hook 0x563d72ed0590
  -266> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command abort hook 0x563d72ed0590
  -265> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command leak_some_memory hook 0x563d72ed0590
  -264> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command perfcounters_dump hook 0x563d72ed0590
  -263> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command 1 hook 0x563d72ed0590
  -262> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command perf dump hook 0x563d72ed0590
  -261> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command perfcounters_schema hook 0x563d72ed0590
  -260> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command perf histogram dump hook 0x563d72ed0590
  -259> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command 2 hook 0x563d72ed0590
  -258> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command perf schema hook 0x563d72ed0590
  -257> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command perf histogram schema hook 0x563d72ed0590
  -256> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command perf reset hook 0x563d72ed0590
  -255> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command config show hook 0x563d72ed0590
  -254> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command config help hook 0x563d72ed0590
  -253> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command config set hook 0x563d72ed0590
  -252> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command config unset hook 0x563d72ed0590
  -251> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command config get hook 0x563d72ed0590
  -250> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command config diff hook 0x563d72ed0590
  -249> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command config diff get hook 0x563d72ed0590
  -248> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command injectargs hook 0x563d72ed0590
  -247> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command log flush hook 0x563d72ed0590
  -246> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command log dump hook 0x563d72ed0590
  -245> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command log reopen hook 0x563d72ed0590
  -244> 2021-07-08T22:39:23.878+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command dump_mempools hook 0x563d73b78068
  -243> 2021-07-08T22:39:23.882+0200 7f8f57c9b580  0 set uid:gid to 64045:64045 (ceph:ceph)
  -242> 2021-07-08T22:39:23.882+0200 7f8f57c9b580  0 ceph version 16.2.4 (a912ff2c95b1f9a8e2e48509e602ee008d5c9434) pacific (stable), process ceph-mon, pid 1484743
  -241> 2021-07-08T22:39:23.886+0200 7f8f57c9b580  0 pidfile_write: ignore empty --pid-file
  -240> 2021-07-08T22:39:23.886+0200 7f8f57c9b580  5 asok(0x563d72f82000) init /var/run/ceph/ceph-mon.gcd-virthost2.asok
  -239> 2021-07-08T22:39:23.886+0200 7f8f57c9b580  5 asok(0x563d72f82000) bind_and_listen /var/run/ceph/ceph-mon.gcd-virthost2.asok
  -238> 2021-07-08T22:39:23.886+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command 0 hook 0x563d72ece0d8
  -237> 2021-07-08T22:39:23.886+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command version hook 0x563d72ece0d8
  -236> 2021-07-08T22:39:23.886+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command git_version hook 0x563d72ece0d8
  -235> 2021-07-08T22:39:23.886+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command help hook 0x563d72ed0230
  -234> 2021-07-08T22:39:23.886+0200 7f8f57c9b580  5 asok(0x563d72f82000) register_command get_command_descriptions hook 0x563d72ed01e0
  -233> 2021-07-08T22:39:23.886+0200 7f8f56c3f700  5 asok(0x563d72f82000) entry start
  -232> 2021-07-08T22:39:23.902+0200 7f8f57c9b580  0 load: jerasure load: lrc load: isa 
  -231> 2021-07-08T22:39:23.902+0200 7f8f57c9b580  1  set rocksdb option level_compaction_dynamic_level_bytes = true
  -230> 2021-07-08T22:39:23.902+0200 7f8f57c9b580  1  set rocksdb option compression = kNoCompression
  -229> 2021-07-08T22:39:23.902+0200 7f8f57c9b580  1  set rocksdb option write_buffer_size = 33554432
  -228> 2021-07-08T22:39:23.902+0200 7f8f57c9b580  1  set rocksdb option level_compaction_dynamic_level_bytes = true
  -227> 2021-07-08T22:39:23.902+0200 7f8f57c9b580  1  set rocksdb option compression = kNoCompression
  -226> 2021-07-08T22:39:23.902+0200 7f8f57c9b580  1  set rocksdb option write_buffer_size = 33554432
  -225> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  5 rocksdb: verify_sharding column families from rocksdb: [default]
  -224> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: RocksDB version: 6.8.1

  -223> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Git sha rocksdb_build_git_sha:@0@
  -222> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Compile date May 20 2021
  -221> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: DB SUMMARY

  -220> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: CURRENT file:  CURRENT

  -219> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: IDENTITY file:  IDENTITY

  -218> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: MANIFEST file:  MANIFEST-1111314 size: 1353447 Bytes

  -217> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: SST files in /var/lib/ceph/mon/ceph-gcd-virthost2/store.db dir, Total Num: 2, files: 1134132.sst 1134133.sst 

  -216> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-gcd-virthost2/store.db: 1134130.log size: 618250 ; 

  -215> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                         Options.error_if_exists: 0
  -214> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                       Options.create_if_missing: 0
  -213> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                         Options.paranoid_checks: 1
  -212> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                                     Options.env: 0x563d7236f140
  -211> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                                      Options.fs: Posix File System
  -210> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                                Options.info_log: 0x563d72f32520
  -209> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                Options.max_file_opening_threads: 16
  -208> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                              Options.statistics: (nil)
  -207> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                               Options.use_fsync: 0
  -206> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                       Options.max_log_file_size: 0
  -205> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                  Options.max_manifest_file_size: 1073741824
  -204> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                   Options.log_file_time_to_roll: 0
  -203> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                       Options.keep_log_file_num: 1000
  -202> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                    Options.recycle_log_file_num: 0
  -201> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                         Options.allow_fallocate: 1
  -200> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                        Options.allow_mmap_reads: 0
  -199> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                       Options.allow_mmap_writes: 0
  -198> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                        Options.use_direct_reads: 0
  -197> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
  -196> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:          Options.create_missing_column_families: 0
  -195> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                              Options.db_log_dir: 
  -194> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                                 Options.wal_dir: /var/lib/ceph/mon/ceph-gcd-virthost2/store.db
  -193> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                Options.table_cache_numshardbits: 6
  -192> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                      Options.max_subcompactions: 1
  -191> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                  Options.max_background_flushes: -1
  -190> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                         Options.WAL_ttl_seconds: 0
  -189> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                       Options.WAL_size_limit_MB: 0
  -188> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
  -187> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.manifest_preallocation_size: 4194304
  -186> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                     Options.is_fd_close_on_exec: 1
  -185> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                   Options.advise_random_on_open: 1
  -184> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                    Options.db_write_buffer_size: 0
  -183> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                    Options.write_buffer_manager: 0x563d72fc5d70
  -182> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:         Options.access_hint_on_compaction_start: 1
  -181> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:  Options.new_table_reader_for_compaction_inputs: 0
  -180> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:           Options.random_access_max_buffer_size: 1048576
  -179> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                      Options.use_adaptive_mutex: 0
  -178> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                            Options.rate_limiter: (nil)
  -177> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
  -176> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                       Options.wal_recovery_mode: 2
  -175> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                  Options.enable_thread_tracking: 0
  -174> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                  Options.enable_pipelined_write: 0
  -173> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                  Options.unordered_write: 0
  -172> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:         Options.allow_concurrent_memtable_write: 1
  -171> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:      Options.enable_write_thread_adaptive_yield: 1
  -170> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.write_thread_max_yield_usec: 100
  -169> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:            Options.write_thread_slow_yield_usec: 3
  -168> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                               Options.row_cache: None
  -167> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                              Options.wal_filter: None
  -166> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.avoid_flush_during_recovery: 0
  -165> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.allow_ingest_behind: 0
  -164> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.preserve_deletes: 0
  -163> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.two_write_queues: 0
  -162> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.manual_wal_flush: 0
  -161> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.atomic_flush: 0
  -160> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.avoid_unnecessary_blocking_io: 0
  -159> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                 Options.persist_stats_to_disk: 0
  -158> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                 Options.write_dbid_to_manifest: 0
  -157> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                 Options.log_readahead_size: 0
  -156> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                 Options.sst_file_checksum_func: Unknown
  -155> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.max_background_jobs: 2
  -154> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.max_background_compactions: -1
  -153> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.avoid_flush_during_shutdown: 0
  -152> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:           Options.writable_file_max_buffer_size: 1048576
  -151> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.delayed_write_rate : 16777216
  -150> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.max_total_wal_size: 0
  -149> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
  -148> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                   Options.stats_dump_period_sec: 600
  -147> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                 Options.stats_persist_period_sec: 600
  -146> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                 Options.stats_history_buffer_size: 1048576
  -145> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                          Options.max_open_files: -1
  -144> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                          Options.bytes_per_sync: 0
  -143> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                      Options.wal_bytes_per_sync: 0
  -142> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                   Options.strict_bytes_per_sync: 0
  -141> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:       Options.compaction_readahead_size: 0
  -140> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Compression algorithms supported:
  -139> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:     kZSTDNotFinalCompression supported: 0
  -138> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:     kZSTD supported: 0
  -137> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:     kXpressCompression supported: 0
  -136> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:     kLZ4HCCompression supported: 1
  -135> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:     kLZ4Compression supported: 1
  -134> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:     kBZip2Compression supported: 0
  -133> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:     kZlibCompression supported: 1
  -132> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:     kSnappyCompression supported: 1
  -131> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Fast CRC32 supported: Supported on x86
  -130> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: [version_set.cc:4412] Recovering from manifest file: /var/lib/ceph/mon/ceph-gcd-virthost2/store.db/MANIFEST-1111314

  -129> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: [column_family.cc:550] --------------- Options for column family [default]:

  -128> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:               Options.comparator: leveldb.BytewiseComparator
  -127> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:           Options.merge_operator: 
  -126> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:        Options.compaction_filter: None
  -125> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:        Options.compaction_filter_factory: None
  -124> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:         Options.memtable_factory: SkipListFactory
  -123> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:            Options.table_factory: BlockBasedTable
  -122> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563d72ece0f8)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  hash_index_allow_collision: 1
  checksum: 1
  no_block_cache: 0
  block_cache: 0x563d72f0af10
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: rocksdb.BuiltinBloomFilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 2
  enable_index_compression: 1
  block_align: 0

  -121> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:        Options.write_buffer_size: 33554432
  -120> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:  Options.max_write_buffer_number: 2
  -119> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:          Options.compression: NoCompression
  -118> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                  Options.bottommost_compression: Disabled
  -117> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:       Options.prefix_extractor: nullptr
  -116> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
  -115> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.num_levels: 7
  -114> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:        Options.min_write_buffer_number_to_merge: 1
  -113> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:     Options.max_write_buffer_number_to_maintain: 0
  -112> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:     Options.max_write_buffer_size_to_maintain: 0
  -111> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:            Options.bottommost_compression_opts.window_bits: -14
  -110> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                  Options.bottommost_compression_opts.level: 32767
  -109> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:               Options.bottommost_compression_opts.strategy: 0
  -108> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
  -107> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
  -106> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                  Options.bottommost_compression_opts.enabled: false
  -105> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:            Options.compression_opts.window_bits: -14
  -104> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                  Options.compression_opts.level: 32767
  -103> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:               Options.compression_opts.strategy: 0
  -102> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:         Options.compression_opts.max_dict_bytes: 0
  -101> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
  -100> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                  Options.compression_opts.enabled: false
   -99> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:      Options.level0_file_num_compaction_trigger: 4
   -98> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:          Options.level0_slowdown_writes_trigger: 20
   -97> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:              Options.level0_stop_writes_trigger: 36
   -96> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                   Options.target_file_size_base: 67108864
   -95> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:             Options.target_file_size_multiplier: 1
   -94> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                Options.max_bytes_for_level_base: 268435456
   -93> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
   -92> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
   -91> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
   -90> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
   -89> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
   -88> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
   -87> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
   -86> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
   -85> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
   -84> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:       Options.max_sequential_skip_in_iterations: 8
   -83> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                    Options.max_compaction_bytes: 1677721600
   -82> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                        Options.arena_block_size: 4194304
   -81> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
   -80> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
   -79> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:       Options.rate_limit_delay_max_milliseconds: 100
   -78> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                Options.disable_auto_compactions: 0
   -77> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                        Options.compaction_style: kCompactionStyleLevel
   -76> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
   -75> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Options.compaction_options_universal.size_ratio: 1
   -74> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
   -73> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
   -72> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
   -71> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
   -70> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
   -69> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
   -68> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
   -67> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                   Options.table_properties_collectors: 
   -66> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                   Options.inplace_update_support: 0
   -65> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                 Options.inplace_update_num_locks: 10000
   -64> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
   -63> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:               Options.memtable_whole_key_filtering: 0
   -62> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:   Options.memtable_huge_page_size: 0
   -61> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                           Options.bloom_locality: 0
   -60> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                    Options.max_successive_merges: 0
   -59> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                Options.optimize_filters_for_hits: 0
   -58> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                Options.paranoid_file_checks: 0
   -57> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                Options.force_consistency_checks: 0
   -56> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                Options.report_bg_io_stats: 0
   -55> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:                               Options.ttl: 2592000
   -54> 2021-07-08T22:39:23.954+0200 7f8f57c9b580  4 rocksdb:          Options.periodic_compaction_seconds: 0
   -53> 2021-07-08T22:39:24.086+0200 7f8f57c9b580  4 rocksdb: [version_set.cc:4558] Recovered from manifest file:/var/lib/ceph/mon/ceph-gcd-virthost2/store.db/MANIFEST-1111314 succeeded,manifest_file_number is 1111314, next_file_number is 1134135, last_sequence is 522790337, log_number is 1134130,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0

   -52> 2021-07-08T22:39:24.086+0200 7f8f57c9b580  4 rocksdb: [version_set.cc:4574] Column family [default] (ID 0), log number is 1134130

   -51> 2021-07-08T22:39:24.086+0200 7f8f57c9b580  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1625776764092451, "job": 1, "event": "recovery_started", "log_files": [1134130]}
   -50> 2021-07-08T22:39:24.086+0200 7f8f57c9b580  4 rocksdb: [db_impl/db_impl_open.cc:758] Recovering log #1134130 mode 2
   -49> 2021-07-08T22:39:24.094+0200 7f8f57c9b580  3 rocksdb: [le/block_based/filter_policy.cc:579] Using legacy Bloom filter with high (20) bits/key. Dramatic filter space and/or accuracy improvement is available with format_version>=5.
   -48> 2021-07-08T22:39:24.118+0200 7f8f57c9b580  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1625776764120898, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 1134135, "file_size": 421281, "table_properties": {"data_size": 419442, "index_size": 829, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 0, "index_value_is_delta_encoded": 0, "filter_size": 197, "raw_key_size": 990, "raw_average_key_size": 24, "raw_value_size": 418121, "raw_average_value_size": 10198, "num_data_blocks": 23, "num_entries": 41, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1625776764, "oldest_key_time": 3, "file_creation_time": 0}}
   -47> 2021-07-08T22:39:24.118+0200 7f8f57c9b580  4 rocksdb: [version_set.cc:3825] Creating manifest 1134136

   -46> 2021-07-08T22:39:24.154+0200 7f8f57c9b580  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1625776764157067, "job": 1, "event": "recovery_finished"}
   -45> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  4 rocksdb: DB pointer 0x563d72fd1800
   -44> 2021-07-08T22:39:24.170+0200 7f8f4d1b4700  4 rocksdb: [db_impl/db_impl.cc:849] ------- DUMPING STATS -------
   -43> 2021-07-08T22:39:24.170+0200 7f8f4d1b4700  4 rocksdb: [db_impl/db_impl.cc:851] 
** DB Stats **
Uptime(secs): 0.2 total, 0.2 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0   411.41 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     17.1      0.02              0.00         1    0.024       0      0
  L6      2/0   76.06 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
 Sum      3/0   76.46 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     17.1      0.02              0.00         1    0.024       0      0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     17.1      0.02              0.00         1    0.024       0      0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.1      0.02              0.00         1    0.024       0      0
Uptime(secs): 0.2 total, 0.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 1.86 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 1.86 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

** File Read Latency Histogram By Level [default] **

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0   411.41 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     17.1      0.02              0.00         1    0.024       0      0
  L6      2/0   76.06 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
 Sum      3/0   76.46 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     17.1      0.02              0.00         1    0.024       0      0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.1      0.02              0.00         1    0.024       0      0
Uptime(secs): 0.2 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 1.86 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

** File Read Latency Histogram By Level [default] **

   -42> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbe140) adding auth protocol: cephx
   -41> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbe140) adding auth protocol: cephx
   -40> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbe140) adding auth protocol: cephx
   -39> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbe140) adding auth protocol: none
   -38> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbe140) adding con mode: secure
   -37> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbe140) adding con mode: crc
   -36> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbe140) adding con mode: secure
   -35> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbe140) adding con mode: crc
   -34> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbe140) adding con mode: secure
   -33> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbe140) adding con mode: crc
   -32> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbe140) adding con mode: crc
   -31> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbe140) adding con mode: secure
   -30> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbe140) adding con mode: crc
   -29> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbe140) adding con mode: secure
   -28> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbe140) adding con mode: crc
   -27> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbe140) adding con mode: secure
   -26> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  2 auth: KeyRing::load: loaded key file /var/lib/ceph/mon/ceph-gcd-virthost2/keyring
   -25> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  0 starting mon.gcd-virthost2 rank 2 at public addrs [v2:192.168.20.71:3300/0,v1:192.168.20.71:6789/0] at bind addrs [v2:192.168.20.71:3300/0,v1:192.168.20.71:6789/0] mon_data /var/lib/ceph/mon/ceph-gcd-virthost2 fsid 63b215c4-1240-42f3-83fa-feb0d06089a8
   -24> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbea40) adding auth protocol: cephx
   -23> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbea40) adding auth protocol: cephx
   -22> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbea40) adding auth protocol: cephx
   -21> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbea40) adding auth protocol: none
   -20> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbea40) adding con mode: secure
   -19> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbea40) adding con mode: crc
   -18> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbea40) adding con mode: secure
   -17> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbea40) adding con mode: crc
   -16> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbea40) adding con mode: secure
   -15> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbea40) adding con mode: crc
   -14> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbea40) adding con mode: crc
   -13> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbea40) adding con mode: secure
   -12> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbea40) adding con mode: crc
   -11> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbea40) adding con mode: secure
   -10> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbea40) adding con mode: crc
    -9> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 AuthRegistry(0x563d73cbea40) adding con mode: secure
    -8> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  2 auth: KeyRing::load: loaded key file /var/lib/ceph/mon/ceph-gcd-virthost2/keyring
    -7> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 adding auth protocol: cephx
    -6> 2021-07-08T22:39:24.170+0200 7f8f57c9b580  5 adding auth protocol: cephx
    -5> 2021-07-08T22:39:24.170+0200 7f8f57c9b580 10 log_channel(cluster) update_config to_monitors: true to_syslog: false syslog_facility: daemon prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
    -4> 2021-07-08T22:39:24.170+0200 7f8f57c9b580 10 log_channel(audit) update_config to_monitors: true to_syslog: false syslog_facility: local0 prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
    -3> 2021-07-08T22:39:24.174+0200 7f8f57c9b580  1 mon.gcd-virthost2@-1(???) e30 preinit fsid 63b215c4-1240-42f3-83fa-feb0d06089a8
    -2> 2021-07-08T22:39:24.174+0200 7f8f57c9b580  5 mon.gcd-virthost2@-1(???).mds e0 Unable to load 'last_metadata'
    -1> 2021-07-08T22:39:24.178+0200 7f8f57c9b580 -1 ./src/mds/FSMap.cc: In function 'void FSMap::decode(ceph::buffer::v15_2_0::list::const_iterator&)' thread 7f8f57c9b580 time 2021-07-08T22:39:24.179008+0200
./src/mds/FSMap.cc: 648: ceph_abort_msg("abort() called")

 ceph version 16.2.4 (a912ff2c95b1f9a8e2e48509e602ee008d5c9434) pacific (stable)
 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xd3) [0x7f8f58b747af]
 2: (FSMap::decode(ceph::buffer::v15_2_0::list::iterator_impl<true>&)+0x898) [0x7f8f590d1db8]
 3: (MDSMonitor::update_from_paxos(bool*)+0x257) [0x563d71b3cba7]
 4: (Monitor::refresh_from_paxos(bool*)+0x163) [0x563d7190a6f3]
 5: (Monitor::preinit()+0x9af) [0x563d7193651f]
 6: main()
 7: __libc_start_main()
 8: _start()

     0> 2021-07-08T22:39:24.182+0200 7f8f57c9b580 -1 *** Caught signal (Aborted) **
 in thread 7f8f57c9b580 thread_name:ceph-mon

 ceph version 16.2.4 (a912ff2c95b1f9a8e2e48509e602ee008d5c9434) pacific (stable)
 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x14140) [0x7f8f58657140]
 2: gsignal()
 3: abort()
 4: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x18a) [0x7f8f58b74866]
 5: (FSMap::decode(ceph::buffer::v15_2_0::list::iterator_impl<true>&)+0x898) [0x7f8f590d1db8]
 6: (MDSMonitor::update_from_paxos(bool*)+0x257) [0x563d71b3cba7]
 7: (Monitor::refresh_from_paxos(bool*)+0x163) [0x563d7190a6f3]
 8: (Monitor::preinit()+0x9af) [0x563d7193651f]
 9: main()
 10: __libc_start_main()
 11: _start()
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 5 rbd_pwl
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 immutable_obj_cache
   0/ 5 client
   1/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 journal
   0/ 0 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 1 reserver
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/ 5 rgw_sync
   1/10 civetweb
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
   0/ 0 refs
   1/ 5 compressor
   1/ 5 bluestore
   1/ 5 bluefs
   1/ 3 bdev
   1/ 5 kstore
   4/ 5 rocksdb
   4/ 5 leveldb
   4/ 5 memdb
   1/ 5 fuse
   1/ 5 mgr
   1/ 5 mgrc
   1/ 5 dpdk
   1/ 5 eventtrace
   1/ 5 prioritycache
   0/ 5 test
   0/ 5 cephfs_mirror
   0/ 5 cephsqlite
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
--- pthread ID / name mapping for recent threads ---
  140253450684160 / rocksdb:dump_st
  140253612734208 / admin_socket
  140253629887872 / ceph-mon
  max_recent     10000
  max_new        10000
  log_file /var/lib/ceph/crash/2021-07-08T20:39:24.187405Z_b9c8f463-9b28-4edc-b106-17f7f4d6414f/log
--- end dump of recent events ---


Related issues

Copied to CephFS - Backport #51939: octopus: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4 Resolved
Copied to CephFS - Backport #51940: pacific: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4 Resolved

History

#1 Updated by Neha Ojha over 1 year ago

  • Project changed from RADOS to CephFS

The crash is in FSMap::decode().

#2 Updated by Daniel Keller over 1 year ago

By the way, no CephFS is used in this cluster, and there are no MDS daemons running either.

#3 Updated by Patrick Donnelly over 1 year ago

  • Subject changed from monitor crash after upgrade from ceph 15.2.13 to 16.2.4 to MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
  • Assignee set to Patrick Donnelly
  • Target version set to v17.0.0
  • Source set to Community (user)
  • Backport set to pacific,octopus
  • Component(FS) MDSMonitor added
  • Labels (FS) crash added

Daniel Keller wrote:

By the way, no CephFS is used in this cluster, and there are no MDS daemons running either.

Thanks for the report. I will try to reproduce this.

How old is this cluster and what upgrades have been done to it?

#4 Updated by Daniel Keller over 1 year ago

It was installed in 2015 with 0.80 Firefly or 0.87 Giant (I'm not sure which),

and then upgraded through 0.94 Hammer > 10 Jewel > 12 Luminous > 14 Nautilus > 15 Octopus.

#6 Updated by Patrick Donnelly over 1 year ago

Daniel Keller wrote:

It was installed in 2015 with 0.80 Firefly or 0.87 Giant (I'm not sure which),

and then upgraded through 0.94 Hammer > 10 Jewel > 12 Luminous > 14 Nautilus > 15 Octopus.

Have you downgraded that mon back to Octopus or left it offline?

In any case, you should be able to fix the situation by creating and removing a dummy fs (no need to spawn an MDS):

$ ceph osd pool create data
$ ceph osd pool create meta
$ ceph fs new cephfs meta data
$ ceph fs fail cephfs
$ ceph fs rm cephfs --yes-i-really-mean-it
$ ceph osd pool rm data data --yes-i-really-really-mean-it
$ ceph osd pool rm meta meta --yes-i-really-really-mean-it
$ ceph fs dump
e4
...
$ echo epoch is 4
$ ceph config set mon mon_mds_force_trim_to 3 # one less than 4
$ ceph config set mon paxos_service_trim_min 1
$ ceph fs dump 2 # repeat until you can verify that epoch 2 is no longer accessible
$ ceph config rm mon mon_mds_force_trim_to
$ ceph config rm mon paxos_service_trim_min
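The "repeat until trimmed" step above can be sketched as a small polling loop. This is a hypothetical helper, not part of the ticket's procedure: `check_epoch` stands in for `ceph fs dump <epoch>` (which fails once the monitor has trimmed that epoch) and is stubbed here so the loop's shape is visible; on a live cluster you would replace the stub with the real command and use a real sleep interval.

```shell
#!/bin/sh
# Poll until the old FSMap epoch has been trimmed from the mon store.
target=2
tries=0

# Stub for `ceph fs dump "$1" >/dev/null 2>&1`: pretend the epoch
# disappears after the third poll.  Replace with the real command.
check_epoch() {
    [ "$tries" -lt 3 ]
}

while check_epoch "$target"; do
    tries=$((tries + 1))
    echo "epoch $target still present (attempt $tries)"
    sleep 0  # use a real delay (e.g. `sleep 5`) against a live cluster
done
echo "epoch $target trimmed after $tries polls"
```

Once the loop exits, it should be safe to remove the `mon_mds_force_trim_to` and `paxos_service_trim_min` overrides as shown above.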

Please let us know if that helps your cluster get through the upgrade.

#7 Updated by Patrick Donnelly over 1 year ago

  • Status changed from New to Fix Under Review

#8 Updated by Patrick Donnelly over 1 year ago

  • Pull request ID set to 42349

#9 Updated by Daniel Keller over 1 year ago

I have downgraded the mon.

Yes, after creating and deleting the fs, the upgrade ran through and everything is fine.

#10 Updated by Patrick Donnelly over 1 year ago

  • Status changed from Fix Under Review to Pending Backport

#11 Updated by Backport Bot over 1 year ago

  • Copied to Backport #51939: octopus: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4 added

#12 Updated by Backport Bot over 1 year ago

  • Copied to Backport #51940: pacific: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4 added

#13 Updated by Loïc Dachary about 1 year ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
