Bug #44702 » qp_double_destroy.txt

chunsong feng, 03/21/2020 08:48 AM

2020-03-21T06:25:01.664+0800 ffff9d2709f0 -1 received signal: Hangup from (PID: 2335813) UID: 0
2020-03-21T08:55:32.036+0800 ffffaadd3010 0 set uid:gid to 0:64045 (ceph:ceph)
2020-03-21T08:55:32.036+0800 ffffaadd3010 0 ceph version 15.1.0-33-ga36fe9c0c4 (a36fe9c0c42c67d52406333784929dcbd2c15bf8) octopus (rc), process ceph-osd, pid 2346225
2020-03-21T08:55:32.036+0800 ffffaadd3010 0 pidfile_write: ignore empty --pid-file
2020-03-21T08:55:33.212+0800 ffffaadd3010 0 starting osd.26 osd_data /var/lib/ceph/osd/ceph-26 /var/lib/ceph/osd/ceph-26/journal
2020-03-21T08:55:33.212+0800 ffffaadd3010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T08:55:33.216+0800 ffffaadd3010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T08:55:33.248+0800 ffffaadd3010 0 load: jerasure load: lrc
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:0.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:1.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:2.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:3.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:4.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:5.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:6.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:7.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:8.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:9.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:10.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:11.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:12.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:13.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:14.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:15.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:16.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:17.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:18.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.720+0800 ffffaadd3010 0 osd.26:19.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T08:55:33.724+0800 ffffaadd3010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T08:55:33.724+0800 ffffaadd3010 0 set rocksdb option compression = kNoCompression
2020-03-21T08:55:33.724+0800 ffffaadd3010 0 set rocksdb option max_background_compactions = 2
2020-03-21T08:55:33.724+0800 ffffaadd3010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T08:55:33.724+0800 ffffaadd3010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T08:55:33.724+0800 ffffaadd3010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T08:55:33.724+0800 ffffaadd3010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T08:55:33.724+0800 ffffaadd3010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T08:55:33.740+0800 ffffaadd3010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T08:55:33.740+0800 ffffaadd3010 0 set rocksdb option compression = kNoCompression
2020-03-21T08:55:33.740+0800 ffffaadd3010 0 set rocksdb option max_background_compactions = 2
2020-03-21T08:55:33.740+0800 ffffaadd3010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T08:55:33.740+0800 ffffaadd3010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T08:55:33.740+0800 ffffaadd3010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T08:55:33.740+0800 ffffaadd3010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T08:55:33.740+0800 ffffaadd3010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T08:55:33.740+0800 ffffaadd3010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T08:55:33.740+0800 ffffaadd3010 0 set rocksdb option compression = kNoCompression
2020-03-21T08:55:33.740+0800 ffffaadd3010 0 set rocksdb option max_background_compactions = 2
2020-03-21T08:55:33.740+0800 ffffaadd3010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T08:55:33.740+0800 ffffaadd3010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T08:55:33.740+0800 ffffaadd3010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T08:55:33.740+0800 ffffaadd3010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T08:55:33.740+0800 ffffaadd3010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T08:55:35.348+0800 ffffaadd3010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T08:55:35.348+0800 ffffaadd3010 0 set rocksdb option compression = kNoCompression
2020-03-21T08:55:35.348+0800 ffffaadd3010 0 set rocksdb option max_background_compactions = 2
2020-03-21T08:55:35.348+0800 ffffaadd3010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T08:55:35.348+0800 ffffaadd3010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T08:55:35.348+0800 ffffaadd3010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T08:55:35.348+0800 ffffaadd3010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T08:55:35.348+0800 ffffaadd3010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T08:55:35.356+0800 ffffaadd3010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T08:55:35.356+0800 ffffaadd3010 0 set rocksdb option compression = kNoCompression
2020-03-21T08:55:35.356+0800 ffffaadd3010 0 set rocksdb option max_background_compactions = 2
2020-03-21T08:55:35.356+0800 ffffaadd3010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T08:55:35.356+0800 ffffaadd3010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T08:55:35.356+0800 ffffaadd3010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T08:55:35.356+0800 ffffaadd3010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T08:55:35.356+0800 ffffaadd3010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T08:55:35.356+0800 ffffaadd3010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T08:55:35.356+0800 ffffaadd3010 0 set rocksdb option compression = kNoCompression
2020-03-21T08:55:35.356+0800 ffffaadd3010 0 set rocksdb option max_background_compactions = 2
2020-03-21T08:55:35.356+0800 ffffaadd3010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T08:55:35.356+0800 ffffaadd3010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T08:55:35.356+0800 ffffaadd3010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T08:55:35.356+0800 ffffaadd3010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T08:55:35.356+0800 ffffaadd3010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T08:55:35.484+0800 ffffaadd3010 0 _get_class not permitted to load sdk
2020-03-21T08:55:35.484+0800 ffffaadd3010 0 _get_class not permitted to load kvs
2020-03-21T08:55:35.484+0800 ffffaadd3010 0 <cls> /root/chunsong/ceph/src/cls/cephfs/cls_cephfs.cc:198: loading cephfs
2020-03-21T08:55:35.484+0800 ffffaadd3010 0 <cls> /root/chunsong/ceph/src/cls/hello/cls_hello.cc:312: loading cls_hello
2020-03-21T08:55:35.488+0800 ffffaadd3010 0 _get_class not permitted to load queue
2020-03-21T08:55:35.488+0800 ffffaadd3010 0 _get_class not permitted to load lua
2020-03-21T08:55:35.488+0800 ffffaadd3010 0 osd.26 8829 crush map has features 288514051259236352, adjusting msgr requires for clients
2020-03-21T08:55:35.488+0800 ffffaadd3010 0 osd.26 8829 crush map has features 288514051259236352 was 8705, adjusting msgr requires for mons
2020-03-21T08:55:35.488+0800 ffffaadd3010 0 osd.26 8829 crush map has features 3314933000852226048, adjusting msgr requires for osds
2020-03-21T08:55:35.636+0800 ffffaadd3010 0 osd.26 8829 load_pgs
2020-03-21T08:55:37.572+0800 ffffaadd3010 0 osd.26 8829 load_pgs opened 14 pgs
2020-03-21T08:55:37.576+0800 ffffaadd3010 -1 osd.26 8829 log_to_monitors {default=true}
2020-03-21T08:55:37.580+0800 ffffaadd3010 0 osd.26 8829 done with init, starting boot process
2020-03-21T08:55:37.588+0800 ffffa28929f0 -1 osd.26 8829 set_numa_affinity unable to identify public interface 'rocevlan' numa node: (2) No such file or directory
2020-03-21T08:55:43.004+0800 ffffa950a9f0 -1 osd.26 8836 build_incremental_map_msg missing incremental map 8836
2020-03-21T08:55:43.980+0800 ffffaa50c9f0 -1 osd.26 8836 build_incremental_map_msg missing incremental map 8836
2020-03-21T09:20:20.424+0800 ffffb9007010 0 set uid:gid to 0:64045 (ceph:ceph)
2020-03-21T09:20:20.424+0800 ffffb9007010 0 ceph version 15.1.0-33-ga36fe9c0c4 (a36fe9c0c42c67d52406333784929dcbd2c15bf8) octopus (rc), process ceph-osd, pid 2362181
2020-03-21T09:20:20.424+0800 ffffb9007010 0 pidfile_write: ignore empty --pid-file
2020-03-21T09:20:34.444+0800 ffff95c78010 0 set uid:gid to 0:64045 (ceph:ceph)
2020-03-21T09:20:34.444+0800 ffff95c78010 0 ceph version 15.1.0-33-ga36fe9c0c4 (a36fe9c0c42c67d52406333784929dcbd2c15bf8) octopus (rc), process ceph-osd, pid 2363254
2020-03-21T09:20:34.444+0800 ffff95c78010 0 pidfile_write: ignore empty --pid-file
2020-03-21T09:20:35.552+0800 ffff95c78010 0 starting osd.26 osd_data /var/lib/ceph/osd/ceph-26 /var/lib/ceph/osd/ceph-26/journal
2020-03-21T09:20:35.560+0800 ffff95c78010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T09:20:35.564+0800 ffff95c78010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T09:20:35.604+0800 ffff95c78010 0 load: jerasure load: lrc
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:0.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:1.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:2.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:3.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:4.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:5.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:6.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:7.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:8.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:9.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:10.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:11.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:12.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:13.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:14.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:15.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:16.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:17.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:18.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.016+0800 ffff95c78010 0 osd.26:19.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T09:20:36.020+0800 ffff95c78010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T09:20:36.020+0800 ffff95c78010 0 set rocksdb option compression = kNoCompression
2020-03-21T09:20:36.020+0800 ffff95c78010 0 set rocksdb option max_background_compactions = 2
2020-03-21T09:20:36.020+0800 ffff95c78010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T09:20:36.020+0800 ffff95c78010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T09:20:36.020+0800 ffff95c78010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T09:20:36.020+0800 ffff95c78010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T09:20:36.020+0800 ffff95c78010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T09:20:36.024+0800 ffff95c78010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T09:20:36.024+0800 ffff95c78010 0 set rocksdb option compression = kNoCompression
2020-03-21T09:20:36.024+0800 ffff95c78010 0 set rocksdb option max_background_compactions = 2
2020-03-21T09:20:36.024+0800 ffff95c78010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T09:20:36.024+0800 ffff95c78010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T09:20:36.024+0800 ffff95c78010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T09:20:36.024+0800 ffff95c78010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T09:20:36.024+0800 ffff95c78010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T09:20:36.024+0800 ffff95c78010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T09:20:36.024+0800 ffff95c78010 0 set rocksdb option compression = kNoCompression
2020-03-21T09:20:36.024+0800 ffff95c78010 0 set rocksdb option max_background_compactions = 2
2020-03-21T09:20:36.024+0800 ffff95c78010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T09:20:36.024+0800 ffff95c78010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T09:20:36.024+0800 ffff95c78010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T09:20:36.024+0800 ffff95c78010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T09:20:36.024+0800 ffff95c78010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T09:20:37.404+0800 ffff95c78010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T09:20:37.404+0800 ffff95c78010 0 set rocksdb option compression = kNoCompression
2020-03-21T09:20:37.404+0800 ffff95c78010 0 set rocksdb option max_background_compactions = 2
2020-03-21T09:20:37.404+0800 ffff95c78010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T09:20:37.404+0800 ffff95c78010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T09:20:37.404+0800 ffff95c78010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T09:20:37.404+0800 ffff95c78010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T09:20:37.404+0800 ffff95c78010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T09:20:37.412+0800 ffff95c78010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T09:20:37.412+0800 ffff95c78010 0 set rocksdb option compression = kNoCompression
2020-03-21T09:20:37.412+0800 ffff95c78010 0 set rocksdb option max_background_compactions = 2
2020-03-21T09:20:37.412+0800 ffff95c78010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T09:20:37.412+0800 ffff95c78010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T09:20:37.412+0800 ffff95c78010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T09:20:37.412+0800 ffff95c78010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T09:20:37.412+0800 ffff95c78010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T09:20:37.412+0800 ffff95c78010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T09:20:37.412+0800 ffff95c78010 0 set rocksdb option compression = kNoCompression
2020-03-21T09:20:37.412+0800 ffff95c78010 0 set rocksdb option max_background_compactions = 2
2020-03-21T09:20:37.412+0800 ffff95c78010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T09:20:37.412+0800 ffff95c78010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T09:20:37.412+0800 ffff95c78010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T09:20:37.412+0800 ffff95c78010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T09:20:37.412+0800 ffff95c78010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T09:20:37.448+0800 ffff95c78010 0 _get_class not permitted to load sdk
2020-03-21T09:20:37.448+0800 ffff95c78010 0 _get_class not permitted to load kvs
2020-03-21T09:20:37.448+0800 ffff95c78010 0 <cls> /root/chunsong/ceph/src/cls/cephfs/cls_cephfs.cc:198: loading cephfs
2020-03-21T09:20:37.448+0800 ffff95c78010 0 <cls> /root/chunsong/ceph/src/cls/hello/cls_hello.cc:312: loading cls_hello
2020-03-21T09:20:37.448+0800 ffff95c78010 0 _get_class not permitted to load queue
2020-03-21T09:20:37.452+0800 ffff95c78010 0 _get_class not permitted to load lua
2020-03-21T09:20:37.452+0800 ffff95c78010 0 osd.26 8837 crush map has features 288514051259236352, adjusting msgr requires for clients
2020-03-21T09:20:37.452+0800 ffff95c78010 0 osd.26 8837 crush map has features 288514051259236352 was 8705, adjusting msgr requires for mons
2020-03-21T09:20:37.452+0800 ffff95c78010 0 osd.26 8837 crush map has features 3314933000852226048, adjusting msgr requires for osds
2020-03-21T09:20:37.588+0800 ffff95c78010 0 osd.26 8837 load_pgs
2020-03-21T09:20:39.516+0800 ffff95c78010 0 osd.26 8837 load_pgs opened 14 pgs
2020-03-21T09:20:39.516+0800 ffff95c78010 -1 osd.26 8837 log_to_monitors {default=true}
2020-03-21T09:20:39.524+0800 ffff95c78010 0 osd.26 8837 done with init, starting boot process
2020-03-21T09:20:39.536+0800 ffff8d7379f0 -1 osd.26 8837 set_numa_affinity unable to identify public interface 'rocevlan' numa node: (2) No such file or directory
2020-03-21T09:20:41.712+0800 ffff943af9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:22:52.504+0800 ffff94bb09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:22:52.508+0800 ffff94bb09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:22:52.524+0800 ffff943af9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:22:52.524+0800 ffff94bb09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:22:52.628+0800 ffff943af9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:22:52.628+0800 ffff94bb09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:22:52.720+0800 ffff953b19f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:22:52.720+0800 ffff94bb09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:22:52.792+0800 ffff953b19f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:22:52.792+0800 ffff94bb09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:22:52.792+0800 ffff953b19f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:23:13.264+0800 ffff9067d9f0 -1 osd.26 8855 heartbeat_check: no reply from 172.19.36.252:6836 osd.46 since back 2020-03-21T09:22:51.561969+0800 front 2020-03-21T09:22:51.562009+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:13.268+0800 ffff9067d9f0 -1 osd.26 8855 heartbeat_check: no reply from 172.19.36.252:6966 osd.55 since back 2020-03-21T09:22:51.562087+0800 front 2020-03-21T09:22:51.562311+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:13.268+0800 ffff9067d9f0 -1 osd.26 8855 heartbeat_check: no reply from 172.19.36.252:6879 osd.57 since back 2020-03-21T09:22:51.562705+0800 front 2020-03-21T09:22:51.562586+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:13.268+0800 ffff9067d9f0 -1 osd.26 8855 heartbeat_check: no reply from 172.19.36.252:6946 osd.78 since back 2020-03-21T09:22:51.562886+0800 front 2020-03-21T09:22:51.562908+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:13.268+0800 ffff9067d9f0 -1 osd.26 8855 heartbeat_check: no reply from 172.19.36.252:7069 osd.83 since back 2020-03-21T09:22:51.563072+0800 front 2020-03-21T09:22:51.562862+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:14.236+0800 ffff9067d9f0 -1 osd.26 8855 heartbeat_check: no reply from 172.19.36.252:6836 osd.46 since back 2020-03-21T09:22:51.561969+0800 front 2020-03-21T09:22:51.562009+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:14.236+0800 ffff9067d9f0 -1 osd.26 8855 heartbeat_check: no reply from 172.19.36.252:6966 osd.55 since back 2020-03-21T09:22:51.562087+0800 front 2020-03-21T09:22:51.562311+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:14.236+0800 ffff9067d9f0 -1 osd.26 8855 heartbeat_check: no reply from 172.19.36.252:6879 osd.57 since back 2020-03-21T09:22:51.562705+0800 front 2020-03-21T09:22:51.562586+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:14.236+0800 ffff9067d9f0 -1 osd.26 8855 heartbeat_check: no reply from 172.19.36.252:6946 osd.78 since back 2020-03-21T09:22:51.562886+0800 front 2020-03-21T09:22:51.562908+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:14.236+0800 ffff9067d9f0 -1 osd.26 8855 heartbeat_check: no reply from 172.19.36.252:7069 osd.83 since back 2020-03-21T09:22:51.563072+0800 front 2020-03-21T09:22:51.562862+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:15.256+0800 ffff9067d9f0 -1 osd.26 8856 heartbeat_check: no reply from 172.19.36.252:6966 osd.55 since back 2020-03-21T09:22:51.562087+0800 front 2020-03-21T09:22:51.562311+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:15.256+0800 ffff9067d9f0 -1 osd.26 8856 heartbeat_check: no reply from 172.19.36.252:6879 osd.57 since back 2020-03-21T09:22:51.562705+0800 front 2020-03-21T09:22:51.562586+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:15.256+0800 ffff9067d9f0 -1 osd.26 8856 heartbeat_check: no reply from 172.19.36.252:6946 osd.78 since back 2020-03-21T09:22:51.562886+0800 front 2020-03-21T09:22:51.562908+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:15.256+0800 ffff9067d9f0 -1 osd.26 8856 heartbeat_check: no reply from 172.19.36.252:7069 osd.83 since back 2020-03-21T09:22:51.563072+0800 front 2020-03-21T09:22:51.562862+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:16.272+0800 ffff9067d9f0 -1 osd.26 8857 heartbeat_check: no reply from 172.19.36.252:6966 osd.55 since back 2020-03-21T09:22:51.562087+0800 front 2020-03-21T09:22:51.562311+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:16.272+0800 ffff9067d9f0 -1 osd.26 8857 heartbeat_check: no reply from 172.19.36.252:6879 osd.57 since back 2020-03-21T09:22:51.562705+0800 front 2020-03-21T09:22:51.562586+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:16.272+0800 ffff9067d9f0 -1 osd.26 8857 heartbeat_check: no reply from 172.19.36.252:6946 osd.78 since back 2020-03-21T09:22:51.562886+0800 front 2020-03-21T09:22:51.562908+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:16.272+0800 ffff9067d9f0 -1 osd.26 8857 heartbeat_check: no reply from 172.19.36.252:7069 osd.83 since back 2020-03-21T09:22:51.563072+0800 front 2020-03-21T09:22:51.562862+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:16.896+0800 ffff953b19f0 -1 --2- 172.19.36.251:0/2363254 >> [v2:172.19.36.252:7052/178834,v1:172.19.36.252:7053/178834] conn(0xaaac0ea47680 0xaaac013d6580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7052/178834,v1:172.19.36.252:7053/178834] is using msgr V1 protocol
2020-03-21T09:23:17.096+0800 ffff953b19f0 -1 --2- 172.19.36.251:0/2363254 >> [v2:172.19.36.252:7052/178834,v1:172.19.36.252:7053/178834] conn(0xaaac0ea47680 0xaaac013d6580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7052/178834,v1:172.19.36.252:7053/178834] is using msgr V1 protocol
2020-03-21T09:23:17.248+0800 ffff9067d9f0 -1 osd.26 8858 heartbeat_check: no reply from 172.19.36.252:6966 osd.55 since back 2020-03-21T09:22:51.562087+0800 front 2020-03-21T09:22:51.562311+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:17.248+0800 ffff9067d9f0 -1 osd.26 8858 heartbeat_check: no reply from 172.19.36.252:6879 osd.57 since back 2020-03-21T09:22:51.562705+0800 front 2020-03-21T09:22:51.562586+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:17.248+0800 ffff9067d9f0 -1 osd.26 8858 heartbeat_check: no reply from 172.19.36.252:6946 osd.78 since back 2020-03-21T09:22:51.562886+0800 front 2020-03-21T09:22:51.562908+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:17.248+0800 ffff9067d9f0 -1 osd.26 8858 heartbeat_check: no reply from 172.19.36.252:7069 osd.83 since back 2020-03-21T09:22:51.563072+0800 front 2020-03-21T09:22:51.562862+0800 (oldest deadline 2020-03-21T09:23:13.262348+0800)
2020-03-21T09:23:17.276+0800 ffff943af9f0 -1 --2- 172.19.36.251:0/2363254 >> [v2:172.19.36.252:7054/178834,v1:172.19.36.252:7055/178834] conn(0xaaac0ea47b00 0xaaac013d6000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7054/178834,v1:172.19.36.252:7055/178834] is using msgr V1 protocol
2020-03-21T09:23:17.480+0800 ffff943af9f0 -1 --2- 172.19.36.251:0/2363254 >> [v2:172.19.36.252:7054/178834,v1:172.19.36.252:7055/178834] conn(0xaaac0ea47b00 0xaaac013d6000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7054/178834,v1:172.19.36.252:7055/178834] is using msgr V1 protocol
2020-03-21T09:23:17.500+0800 ffff953b19f0 -1 --2- 172.19.36.251:0/2363254 >> [v2:172.19.36.252:7052/178834,v1:172.19.36.252:7053/178834] conn(0xaaac0ea47680 0xaaac013d6580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7052/178834,v1:172.19.36.252:7053/178834] is using msgr V1 protocol
2020-03-21T09:23:17.884+0800 ffff943af9f0 -1 --2- 172.19.36.251:0/2363254 >> [v2:172.19.36.252:7054/178834,v1:172.19.36.252:7055/178834] conn(0xaaac0ea47b00 0xaaac013d6000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7054/178834,v1:172.19.36.252:7055/178834] is using msgr V1 protocol
2020-03-21T09:23:18.304+0800 ffff953b19f0 -1 --2- 172.19.36.251:0/2363254 >> [v2:172.19.36.252:7052/178834,v1:172.19.36.252:7053/178834] conn(0xaaac0ea47680 0xaaac013d6580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7052/178834,v1:172.19.36.252:7053/178834] is using msgr V1 protocol
2020-03-21T09:23:18.684+0800 ffff943af9f0 -1 --2- 172.19.36.251:0/2363254 >> [v2:172.19.36.252:7054/178834,v1:172.19.36.252:7055/178834] conn(0xaaac0ea47b00 0xaaac013d6000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7054/178834,v1:172.19.36.252:7055/178834] is using msgr V1 protocol
2020-03-21T09:30:27.876+0800 ffff953b19f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:30:27.876+0800 ffff943af9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:30:27.940+0800 ffff943af9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:30:27.940+0800 ffff943af9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:30:28.368+0800 ffff953b19f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:30:28.368+0800 ffff953b19f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T09:30:49.968+0800 ffff9067d9f0 -1 osd.26 8871 heartbeat_check: no reply from 172.19.36.253:7070 osd.128 since back 2020-03-21T09:30:25.543936+0800 front 2020-03-21T09:30:25.543976+0800 (oldest deadline 2020-03-21T09:30:49.644215+0800)
2020-03-21T09:30:49.968+0800 ffff9067d9f0 -1 osd.26 8871 heartbeat_check: no reply from 172.19.36.253:7050 osd.132 since back 2020-03-21T09:30:25.543967+0800 front 2020-03-21T09:30:25.544050+0800 (oldest deadline 2020-03-21T09:30:49.644215+0800)
2020-03-21T09:30:49.968+0800 ffff9067d9f0 -1 osd.26 8871 heartbeat_check: no reply from 172.19.36.253:7064 osd.137 since back 2020-03-21T09:30:25.544031+0800 front 2020-03-21T09:30:25.543995+0800 (oldest deadline 2020-03-21T09:30:49.644215+0800)
2020-03-21T09:30:50.956+0800 ffff9067d9f0 -1 osd.26 8871 heartbeat_check: no reply from 172.19.36.253:7070 osd.128 since back 2020-03-21T09:30:25.543936+0800 front 2020-03-21T09:30:25.543976+0800 (oldest deadline 2020-03-21T09:30:49.644215+0800)
2020-03-21T09:30:50.956+0800 ffff9067d9f0 -1 osd.26 8871 heartbeat_check: no reply from 172.19.36.253:7050 osd.132 since back 2020-03-21T09:30:25.543967+0800 front 2020-03-21T09:30:25.544050+0800 (oldest deadline 2020-03-21T09:30:49.644215+0800)
2020-03-21T09:30:50.956+0800 ffff9067d9f0 -1 osd.26 8871 heartbeat_check: no reply from 172.19.36.253:7064 osd.137 since back 2020-03-21T09:30:25.544031+0800 front 2020-03-21T09:30:25.543995+0800 (oldest deadline 2020-03-21T09:30:49.644215+0800)
2020-03-21T09:30:51.916+0800 ffff9067d9f0 -1 osd.26 8873 heartbeat_check: no reply from 172.19.36.253:7064 osd.137 since back 2020-03-21T09:30:25.544031+0800 front 2020-03-21T09:30:25.543995+0800 (oldest deadline 2020-03-21T09:30:49.644215+0800)
2020-03-21T09:30:52.924+0800 ffff9067d9f0 -1 osd.26 8874 heartbeat_check: no reply from 172.19.36.253:7064 osd.137 since back 2020-03-21T09:30:25.544031+0800 front 2020-03-21T09:30:25.543995+0800 (oldest deadline 2020-03-21T09:30:49.644215+0800)
2020-03-21T09:30:53.924+0800 ffff9067d9f0 -1 osd.26 8875 heartbeat_check: no reply from 172.19.36.253:7064 osd.137 since back 2020-03-21T09:30:25.544031+0800 front 2020-03-21T09:30:25.543995+0800 (oldest deadline 2020-03-21T09:30:49.644215+0800)
2020-03-21T09:30:54.876+0800 ffff9067d9f0 -1 osd.26 8876 heartbeat_check: no reply from 172.19.36.253:7064 osd.137 since back 2020-03-21T09:30:25.544031+0800 front 2020-03-21T09:30:25.543995+0800 (oldest deadline 2020-03-21T09:30:49.644215+0800)
2020-03-21T09:30:55.176+0800 ffff953b19f0 -1 --2- 172.19.36.251:0/2363254 >> [v2:172.19.36.253:7064/254443,v1:172.19.36.253:7074/254443] conn(0xaaabffc13180 0xaaac0e9f0680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.253:7064/254443,v1:172.19.36.253:7074/254443] is using msgr V1 protocol
2020-03-21T09:30:55.416+0800 ffff953b19f0 -1 --2- 172.19.36.251:0/2363254 >> [v2:172.19.36.253:7063/254371,v1:172.19.36.253:7072/254371] conn(0xaaac0133df80 0xaaac01342100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.253:7063/254371,v1:172.19.36.253:7072/254371] is using msgr V1 protocol
2020-03-21T09:30:55.420+0800 ffff953b19f0 -1 --2- 172.19.36.251:0/2363254 >> [v2:172.19.36.253:7046/254371,v1:172.19.36.253:7054/254371] conn(0xaaac00b0d180 0xaaac08945b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.253:7046/254371,v1:172.19.36.253:7054/254371] is using msgr V1 protocol
2020-03-21T09:30:55.616+0800 ffff953b19f0 -1 --2- 172.19.36.251:0/2363254 >> [v2:172.19.36.253:7063/254371,v1:172.19.36.253:7072/254371] conn(0xaaac0133df80 0xaaac01342100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.253:7063/254371,v1:172.19.36.253:7072/254371] is using msgr V1 protocol
2020-03-21T09:30:55.624+0800 ffff953b19f0 -1 --2- 172.19.36.251:0/2363254 >> [v2:172.19.36.253:7046/254371,v1:172.19.36.253:7054/254371] conn(0xaaac00b0d180 0xaaac08945b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.253:7046/254371,v1:172.19.36.253:7054/254371] is using msgr V1 protocol
2020-03-21T09:30:56.016+0800 ffff953b19f0 -1 --2- 172.19.36.251:0/2363254 >> [v2:172.19.36.253:7063/254371,v1:172.19.36.253:7072/254371] conn(0xaaac0133df80 0xaaac01342100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.253:7063/254371,v1:172.19.36.253:7072/254371] is using msgr V1 protocol
2020-03-21T09:30:56.028+0800 ffff953b19f0 -1 --2- 172.19.36.251:0/2363254 >> [v2:172.19.36.253:7046/254371,v1:172.19.36.253:7054/254371] conn(0xaaac00b0d180 0xaaac08945b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.253:7046/254371,v1:172.19.36.253:7054/254371] is using msgr V1 protocol
2020-03-21T09:30:59.168+0800 ffff953b19f0 -1 osd.26 8880 build_incremental_map_msg missing incremental map 8880
2020-03-21T10:29:30.860+0800 ffff85569010 0 set uid:gid to 0:64045 (ceph:ceph)
2020-03-21T10:29:30.860+0800 ffff85569010 0 ceph version 15.1.0-34-g3299aff999 (3299aff9992182b5c5dd659028bc37039b031dff) octopus (rc), process ceph-osd, pid 2435780
2020-03-21T10:29:30.860+0800 ffff85569010 0 pidfile_write: ignore empty --pid-file
2020-03-21T10:29:32.000+0800 ffff85569010 0 starting osd.26 osd_data /var/lib/ceph/osd/ceph-26 /var/lib/ceph/osd/ceph-26/journal
2020-03-21T10:29:32.004+0800 ffff85569010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T10:29:32.008+0800 ffff85569010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T10:29:32.052+0800 ffff85569010 0 load: jerasure load: lrc
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:0.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:1.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:2.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:3.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:4.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:5.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:6.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:7.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:8.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:9.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:10.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:11.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:12.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:13.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:14.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:15.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:16.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:17.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:18.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.456+0800 ffff85569010 0 osd.26:19.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:29:32.460+0800 ffff85569010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:29:32.460+0800 ffff85569010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:29:32.460+0800 ffff85569010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:29:32.460+0800 ffff85569010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:29:32.460+0800 ffff85569010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:29:32.460+0800 ffff85569010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:29:32.460+0800 ffff85569010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:29:32.460+0800 ffff85569010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:29:32.464+0800 ffff85569010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:29:32.464+0800 ffff85569010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:29:32.464+0800 ffff85569010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:29:32.464+0800 ffff85569010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:29:32.464+0800 ffff85569010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:29:32.464+0800 ffff85569010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:29:32.464+0800 ffff85569010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:29:32.464+0800 ffff85569010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:29:32.464+0800 ffff85569010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:29:32.464+0800 ffff85569010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:29:32.464+0800 ffff85569010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:29:32.464+0800 ffff85569010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:29:32.464+0800 ffff85569010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:29:32.464+0800 ffff85569010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:29:32.464+0800 ffff85569010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:29:32.464+0800 ffff85569010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:29:33.636+0800 ffff85569010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:29:33.636+0800 ffff85569010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:29:33.636+0800 ffff85569010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:29:33.636+0800 ffff85569010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:29:33.636+0800 ffff85569010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:29:33.636+0800 ffff85569010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:29:33.636+0800 ffff85569010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:29:33.636+0800 ffff85569010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:29:33.640+0800 ffff85569010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:29:33.640+0800 ffff85569010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:29:33.640+0800 ffff85569010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:29:33.640+0800 ffff85569010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:29:33.640+0800 ffff85569010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:29:33.640+0800 ffff85569010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:29:33.640+0800 ffff85569010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:29:33.640+0800 ffff85569010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:29:33.640+0800 ffff85569010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:29:33.640+0800 ffff85569010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:29:33.640+0800 ffff85569010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:29:33.640+0800 ffff85569010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:29:33.640+0800 ffff85569010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:29:33.640+0800 ffff85569010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:29:33.640+0800 ffff85569010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:29:33.640+0800 ffff85569010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:29:33.684+0800 ffff85569010 0 _get_class not permitted to load sdk
2020-03-21T10:29:33.684+0800 ffff85569010 0 _get_class not permitted to load kvs
2020-03-21T10:29:33.684+0800 ffff85569010 0 <cls> /root/chunsong/ceph/src/cls/cephfs/cls_cephfs.cc:198: loading cephfs
2020-03-21T10:29:33.684+0800 ffff85569010 0 <cls> /root/chunsong/ceph/src/cls/hello/cls_hello.cc:312: loading cls_hello
2020-03-21T10:29:33.688+0800 ffff85569010 0 _get_class not permitted to load queue
2020-03-21T10:29:33.688+0800 ffff85569010 0 _get_class not permitted to load lua
2020-03-21T10:29:33.688+0800 ffff85569010 0 osd.26 8881 crush map has features 288514051259236352, adjusting msgr requires for clients
2020-03-21T10:29:33.688+0800 ffff85569010 0 osd.26 8881 crush map has features 288514051259236352 was 8705, adjusting msgr requires for mons
2020-03-21T10:29:33.688+0800 ffff85569010 0 osd.26 8881 crush map has features 3314933000852226048, adjusting msgr requires for osds
2020-03-21T10:29:33.784+0800 ffff85569010 0 osd.26 8881 load_pgs
2020-03-21T10:29:35.428+0800 ffff85569010 0 osd.26 8881 load_pgs opened 14 pgs
2020-03-21T10:29:35.428+0800 ffff85569010 -1 osd.26 8881 log_to_monitors {default=true}
2020-03-21T10:29:35.428+0800 ffff844a19f0 0 auth: could not find secret_id=70
2020-03-21T10:29:35.428+0800 ffff844a19f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=70
2020-03-21T10:29:35.468+0800 ffff85569010 0 osd.26 8881 done with init, starting boot process
2020-03-21T10:29:35.512+0800 ffff7d0289f0 -1 osd.26 8881 set_numa_affinity unable to identify public interface 'rocevlan' numa node: (2) No such file or directory
2020-03-21T10:29:43.804+0800 ffff84ca29f0 -1 --2- 172.19.36.251:0/2435780 >> [v2:172.19.36.252:6860/210792,v1:172.19.36.252:6861/210792] conn(0xaaac211db680 0xaaac1da44000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:6860/210792,v1:172.19.36.252:6861/210792] is using msgr V1 protocol
2020-03-21T10:29:44.616+0800 ffff844a19f0 -1 --2- [v2:172.19.36.251:6997/2435780,v1:172.19.36.251:6999/2435780] >> [v2:172.19.36.252:7068/211131,v1:172.19.36.252:7070/211131] conn(0xaaac211d6000 0xaaac211f8000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7068/211131,v1:172.19.36.252:7070/211131] is using msgr V1 protocol
2020-03-21T10:29:44.972+0800 ffff844a19f0 -1 --2- 172.19.36.251:0/2435780 >> [v2:172.19.36.252:7072/211131,v1:172.19.36.252:7074/211131] conn(0xaaac23208d80 0xaaac23221080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7072/211131,v1:172.19.36.252:7074/211131] is using msgr V1 protocol
2020-03-21T10:29:45.060+0800 ffff84ca29f0 -1 --2- 172.19.36.251:0/2435780 >> [v2:172.19.36.252:6959/211069,v1:172.19.36.252:6962/211069] conn(0xaaac211d7680 0xaaac266cc580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:6959/211069,v1:172.19.36.252:6962/211069] is using msgr V1 protocol
2020-03-21T10:29:49.284+0800 ffff84ca29f0 -1 --2- 172.19.36.251:0/2435780 >> [v2:172.19.36.252:6922/210913,v1:172.19.36.252:6924/210913] conn(0xaaac211d8d00 0xaaac266cec00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:6922/210913,v1:172.19.36.252:6924/210913] is using msgr V1 protocol
2020-03-21T10:29:49.484+0800 ffff84ca29f0 -1 --2- 172.19.36.251:0/2435780 >> [v2:172.19.36.252:6922/210913,v1:172.19.36.252:6924/210913] conn(0xaaac211d8d00 0xaaac266cec00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:6922/210913,v1:172.19.36.252:6924/210913] is using msgr V1 protocol
2020-03-21T10:29:49.888+0800 ffff84ca29f0 -1 --2- 172.19.36.251:0/2435780 >> [v2:172.19.36.252:6922/210913,v1:172.19.36.252:6924/210913] conn(0xaaac211d8d00 0xaaac266cec00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:6922/210913,v1:172.19.36.252:6924/210913] is using msgr V1 protocol
2020-03-21T10:29:50.208+0800 ffff84ca29f0 -1 --2- 172.19.36.251:0/2435780 >> [v2:172.19.36.252:6860/210792,v1:172.19.36.252:6861/210792] conn(0xaaac211db680 0xaaac1da44000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:6860/210792,v1:172.19.36.252:6861/210792] is using msgr V1 protocol
2020-03-21T10:46:56.040+0800 ffff93d1a010 0 set uid:gid to 0:64045 (ceph:ceph)
2020-03-21T10:46:56.040+0800 ffff93d1a010 0 ceph version 15.1.0-34-g3299aff999 (3299aff9992182b5c5dd659028bc37039b031dff) octopus (rc), process ceph-osd, pid 2453195
2020-03-21T10:46:56.040+0800 ffff93d1a010 0 pidfile_write: ignore empty --pid-file
2020-03-21T10:46:57.132+0800 ffff93d1a010 0 starting osd.26 osd_data /var/lib/ceph/osd/ceph-26 /var/lib/ceph/osd/ceph-26/journal
2020-03-21T10:46:57.136+0800 ffff93d1a010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T10:46:57.136+0800 ffff93d1a010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T10:46:57.156+0800 ffff93d1a010 0 load: jerasure load: lrc
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:0.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:1.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:2.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:3.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:4.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:5.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:6.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:7.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:8.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:9.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:10.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:11.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:12.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:13.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:14.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:15.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:16.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:17.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:18.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.496+0800 ffff93d1a010 0 osd.26:19.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:46:57.500+0800 ffff93d1a010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:46:57.500+0800 ffff93d1a010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:46:57.500+0800 ffff93d1a010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:46:57.500+0800 ffff93d1a010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:46:57.500+0800 ffff93d1a010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:46:57.500+0800 ffff93d1a010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:46:57.500+0800 ffff93d1a010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:46:57.500+0800 ffff93d1a010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:46:57.504+0800 ffff93d1a010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:46:57.504+0800 ffff93d1a010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:46:57.504+0800 ffff93d1a010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:46:57.504+0800 ffff93d1a010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:46:57.504+0800 ffff93d1a010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:46:57.504+0800 ffff93d1a010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:46:57.504+0800 ffff93d1a010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:46:57.504+0800 ffff93d1a010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:46:57.504+0800 ffff93d1a010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:46:57.504+0800 ffff93d1a010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:46:57.504+0800 ffff93d1a010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:46:57.504+0800 ffff93d1a010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:46:57.504+0800 ffff93d1a010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:46:57.504+0800 ffff93d1a010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:46:57.504+0800 ffff93d1a010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:46:57.504+0800 ffff93d1a010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:46:58.576+0800 ffff93d1a010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:46:58.576+0800 ffff93d1a010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:46:58.576+0800 ffff93d1a010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:46:58.576+0800 ffff93d1a010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:46:58.576+0800 ffff93d1a010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:46:58.576+0800 ffff93d1a010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:46:58.576+0800 ffff93d1a010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:46:58.576+0800 ffff93d1a010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:46:58.580+0800 ffff93d1a010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:46:58.580+0800 ffff93d1a010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:46:58.580+0800 ffff93d1a010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:46:58.580+0800 ffff93d1a010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:46:58.580+0800 ffff93d1a010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:46:58.580+0800 ffff93d1a010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:46:58.580+0800 ffff93d1a010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:46:58.580+0800 ffff93d1a010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:46:58.580+0800 ffff93d1a010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:46:58.580+0800 ffff93d1a010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:46:58.580+0800 ffff93d1a010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:46:58.580+0800 ffff93d1a010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:46:58.580+0800 ffff93d1a010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:46:58.580+0800 ffff93d1a010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:46:58.580+0800 ffff93d1a010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:46:58.580+0800 ffff93d1a010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:46:58.612+0800 ffff93d1a010 0 _get_class not permitted to load sdk
2020-03-21T10:46:58.612+0800 ffff93d1a010 0 _get_class not permitted to load kvs
2020-03-21T10:46:58.612+0800 ffff93d1a010 0 <cls> /root/chunsong/ceph/src/cls/cephfs/cls_cephfs.cc:198: loading cephfs
2020-03-21T10:46:58.612+0800 ffff93d1a010 0 <cls> /root/chunsong/ceph/src/cls/hello/cls_hello.cc:312: loading cls_hello
2020-03-21T10:46:58.612+0800 ffff93d1a010 0 _get_class not permitted to load queue
2020-03-21T10:46:58.616+0800 ffff93d1a010 0 _get_class not permitted to load lua
2020-03-21T10:46:58.616+0800 ffff93d1a010 0 osd.26 8903 crush map has features 288514051259236352, adjusting msgr requires for clients
2020-03-21T10:46:58.616+0800 ffff93d1a010 0 osd.26 8903 crush map has features 288514051259236352 was 8705, adjusting msgr requires for mons
2020-03-21T10:46:58.616+0800 ffff93d1a010 0 osd.26 8903 crush map has features 3314933000852226048, adjusting msgr requires for osds
2020-03-21T10:46:58.700+0800 ffff93d1a010 0 osd.26 8903 load_pgs
2020-03-21T10:47:00.364+0800 ffff93d1a010 0 osd.26 8903 load_pgs opened 14 pgs
2020-03-21T10:47:00.364+0800 ffff93d1a010 -1 osd.26 8903 log_to_monitors {default=true}
2020-03-21T10:47:00.372+0800 ffff93d1a010 0 osd.26 8903 done with init, starting boot process
2020-03-21T10:47:00.380+0800 ffff8b7d99f0 -1 osd.26 8903 set_numa_affinity unable to identify public interface 'rocevlan' numa node: (2) No such file or directory
2020-03-21T10:47:04.348+0800 ffff924519f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T10:47:04.348+0800 ffff924519f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T10:47:05.528+0800 ffff8f1639f0 -1 received signal: Terminated from /sbin/init ealycon (PID: 1) UID: 0
2020-03-21T10:47:05.528+0800 ffff8f1639f0 -1 osd.26 8918 *** Got signal Terminated ***
2020-03-21T10:47:05.528+0800 ffff8f1639f0 -1 osd.26 8918 *** Immediate shutdown (osd_fast_shutdown=true) ***
2020-03-21T10:47:24.224+0800 ffffb8dc7010 0 set uid:gid to 0:64045 (ceph:ceph)
2020-03-21T10:47:24.224+0800 ffffb8dc7010 0 ceph version 15.1.0-34-g3299aff999 (3299aff9992182b5c5dd659028bc37039b031dff) octopus (rc), process ceph-osd, pid 2455094
2020-03-21T10:47:24.224+0800 ffffb8dc7010 0 pidfile_write: ignore empty --pid-file
2020-03-21T10:47:25.380+0800 ffffb8dc7010 0 starting osd.26 osd_data /var/lib/ceph/osd/ceph-26 /var/lib/ceph/osd/ceph-26/journal
2020-03-21T10:47:25.384+0800 ffffb8dc7010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T10:47:25.384+0800 ffffb8dc7010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T10:47:25.412+0800 ffffb8dc7010 0 load: jerasure load: lrc
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:0.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:1.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:2.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:3.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:4.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:5.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:6.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:7.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:8.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:9.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:10.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:11.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:12.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:13.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:14.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:15.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:16.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:17.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:18.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.756+0800 ffffb8dc7010 0 osd.26:19.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T10:47:25.760+0800 ffffb8dc7010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:47:25.760+0800 ffffb8dc7010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:47:25.760+0800 ffffb8dc7010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:47:25.760+0800 ffffb8dc7010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:47:25.760+0800 ffffb8dc7010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:47:25.760+0800 ffffb8dc7010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:47:25.760+0800 ffffb8dc7010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:47:25.760+0800 ffffb8dc7010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:47:25.764+0800 ffffb8dc7010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:47:25.764+0800 ffffb8dc7010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:47:25.764+0800 ffffb8dc7010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:47:25.764+0800 ffffb8dc7010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:47:25.764+0800 ffffb8dc7010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:47:25.764+0800 ffffb8dc7010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:47:25.764+0800 ffffb8dc7010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:47:25.764+0800 ffffb8dc7010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:47:25.764+0800 ffffb8dc7010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:47:25.764+0800 ffffb8dc7010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:47:25.764+0800 ffffb8dc7010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:47:25.764+0800 ffffb8dc7010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:47:25.764+0800 ffffb8dc7010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:47:25.764+0800 ffffb8dc7010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:47:25.764+0800 ffffb8dc7010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:47:25.764+0800 ffffb8dc7010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:47:26.940+0800 ffffb8dc7010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:47:26.940+0800 ffffb8dc7010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:47:26.940+0800 ffffb8dc7010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:47:26.940+0800 ffffb8dc7010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:47:26.940+0800 ffffb8dc7010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:47:26.940+0800 ffffb8dc7010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:47:26.940+0800 ffffb8dc7010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:47:26.940+0800 ffffb8dc7010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:47:26.948+0800 ffffb8dc7010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:47:26.948+0800 ffffb8dc7010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:47:26.948+0800 ffffb8dc7010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:47:26.948+0800 ffffb8dc7010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:47:26.948+0800 ffffb8dc7010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:47:26.948+0800 ffffb8dc7010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:47:26.948+0800 ffffb8dc7010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:47:26.948+0800 ffffb8dc7010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:47:26.948+0800 ffffb8dc7010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T10:47:26.948+0800 ffffb8dc7010 0 set rocksdb option compression = kNoCompression
2020-03-21T10:47:26.948+0800 ffffb8dc7010 0 set rocksdb option max_background_compactions = 2
2020-03-21T10:47:26.948+0800 ffffb8dc7010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T10:47:26.948+0800 ffffb8dc7010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T10:47:26.948+0800 ffffb8dc7010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T10:47:26.948+0800 ffffb8dc7010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T10:47:26.948+0800 ffffb8dc7010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T10:47:26.980+0800 ffffb8dc7010 0 _get_class not permitted to load sdk
2020-03-21T10:47:26.980+0800 ffffb8dc7010 0 _get_class not permitted to load kvs
2020-03-21T10:47:26.980+0800 ffffb8dc7010 0 <cls> /root/chunsong/ceph/src/cls/cephfs/cls_cephfs.cc:198: loading cephfs
2020-03-21T10:47:26.984+0800 ffffb8dc7010 0 <cls> /root/chunsong/ceph/src/cls/hello/cls_hello.cc:312: loading cls_hello
2020-03-21T10:47:26.984+0800 ffffb8dc7010 0 _get_class not permitted to load queue
2020-03-21T10:47:26.984+0800 ffffb8dc7010 0 _get_class not permitted to load lua
2020-03-21T10:47:26.984+0800 ffffb8dc7010 0 osd.26 8918 crush map has features 288514051259236352, adjusting msgr requires for clients
2020-03-21T10:47:26.984+0800 ffffb8dc7010 0 osd.26 8918 crush map has features 288514051259236352 was 8705, adjusting msgr requires for mons
2020-03-21T10:47:26.984+0800 ffffb8dc7010 0 osd.26 8918 crush map has features 3314933000852226048, adjusting msgr requires for osds
2020-03-21T10:47:27.092+0800 ffffb8dc7010 0 osd.26 8918 load_pgs
2020-03-21T10:47:28.828+0800 ffffb8dc7010 0 osd.26 8918 load_pgs opened 14 pgs
2020-03-21T10:47:28.828+0800 ffffb8dc7010 -1 osd.26 8918 log_to_monitors {default=true}
2020-03-21T10:47:28.836+0800 ffffb8dc7010 0 osd.26 8918 done with init, starting boot process
2020-03-21T10:47:28.844+0800 ffffb08869f0 -1 osd.26 8918 set_numa_affinity unable to identify public interface 'rocevlan' numa node: (2) No such file or directory
2020-03-21T10:47:32.800+0800 ffffb85009f0 -1 osd.26 8931 build_incremental_map_msg missing incremental map 8931
2020-03-21T10:47:32.816+0800 ffffb7cff9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:49:24.920+0800 ffffb8669010 0 set uid:gid to 0:64045 (ceph:ceph)
2020-03-21T14:49:24.920+0800 ffffb8669010 0 ceph version 15.1.0-35-gdeba62656d (deba62656d6bc55b66cb67ef83759f89a51eff9f) octopus (rc), process ceph-osd, pid 2706092
2020-03-21T14:49:24.920+0800 ffffb8669010 0 pidfile_write: ignore empty --pid-file
2020-03-21T14:49:26.068+0800 ffffb8669010 0 starting osd.26 osd_data /var/lib/ceph/osd/ceph-26 /var/lib/ceph/osd/ceph-26/journal
2020-03-21T14:49:26.076+0800 ffffb8669010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T14:49:26.092+0800 ffffb8669010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T14:49:26.164+0800 ffffb8669010 0 load: jerasure load: lrc
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:0.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:1.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:2.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:3.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:4.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:5.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:6.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:7.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:8.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:9.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:10.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:11.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:12.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:13.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:14.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:15.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:16.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:17.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:18.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.492+0800 ffffb8669010 0 osd.26:19.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T14:49:26.496+0800 ffffb8669010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T14:49:26.496+0800 ffffb8669010 0 set rocksdb option compression = kNoCompression
2020-03-21T14:49:26.496+0800 ffffb8669010 0 set rocksdb option max_background_compactions = 2
2020-03-21T14:49:26.496+0800 ffffb8669010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T14:49:26.496+0800 ffffb8669010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T14:49:26.496+0800 ffffb8669010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T14:49:26.496+0800 ffffb8669010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T14:49:26.496+0800 ffffb8669010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T14:49:26.504+0800 ffffb8669010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T14:49:26.504+0800 ffffb8669010 0 set rocksdb option compression = kNoCompression
2020-03-21T14:49:26.504+0800 ffffb8669010 0 set rocksdb option max_background_compactions = 2
2020-03-21T14:49:26.504+0800 ffffb8669010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T14:49:26.504+0800 ffffb8669010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T14:49:26.504+0800 ffffb8669010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T14:49:26.504+0800 ffffb8669010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T14:49:26.504+0800 ffffb8669010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T14:49:26.504+0800 ffffb8669010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T14:49:26.504+0800 ffffb8669010 0 set rocksdb option compression = kNoCompression
2020-03-21T14:49:26.504+0800 ffffb8669010 0 set rocksdb option max_background_compactions = 2
2020-03-21T14:49:26.504+0800 ffffb8669010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T14:49:26.504+0800 ffffb8669010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T14:49:26.504+0800 ffffb8669010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T14:49:26.504+0800 ffffb8669010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T14:49:26.504+0800 ffffb8669010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T14:49:27.636+0800 ffffb8669010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T14:49:27.636+0800 ffffb8669010 0 set rocksdb option compression = kNoCompression
2020-03-21T14:49:27.636+0800 ffffb8669010 0 set rocksdb option max_background_compactions = 2
2020-03-21T14:49:27.636+0800 ffffb8669010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T14:49:27.636+0800 ffffb8669010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T14:49:27.636+0800 ffffb8669010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T14:49:27.636+0800 ffffb8669010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T14:49:27.636+0800 ffffb8669010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T14:49:27.640+0800 ffffb8669010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T14:49:27.640+0800 ffffb8669010 0 set rocksdb option compression = kNoCompression
2020-03-21T14:49:27.640+0800 ffffb8669010 0 set rocksdb option max_background_compactions = 2
2020-03-21T14:49:27.640+0800 ffffb8669010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T14:49:27.640+0800 ffffb8669010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T14:49:27.640+0800 ffffb8669010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T14:49:27.640+0800 ffffb8669010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T14:49:27.640+0800 ffffb8669010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T14:49:27.640+0800 ffffb8669010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T14:49:27.640+0800 ffffb8669010 0 set rocksdb option compression = kNoCompression
2020-03-21T14:49:27.640+0800 ffffb8669010 0 set rocksdb option max_background_compactions = 2
2020-03-21T14:49:27.640+0800 ffffb8669010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T14:49:27.640+0800 ffffb8669010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T14:49:27.640+0800 ffffb8669010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T14:49:27.640+0800 ffffb8669010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T14:49:27.640+0800 ffffb8669010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T14:49:27.724+0800 ffffb8669010 0 _get_class not permitted to load sdk
2020-03-21T14:49:27.724+0800 ffffb8669010 0 _get_class not permitted to load kvs
2020-03-21T14:49:27.724+0800 ffffb8669010 0 <cls> /root/chunsong/ceph/src/cls/cephfs/cls_cephfs.cc:198: loading cephfs
2020-03-21T14:49:27.728+0800 ffffb8669010 0 <cls> /root/chunsong/ceph/src/cls/hello/cls_hello.cc:312: loading cls_hello
2020-03-21T14:49:27.728+0800 ffffb8669010 0 _get_class not permitted to load queue
2020-03-21T14:49:27.728+0800 ffffb8669010 0 _get_class not permitted to load lua
2020-03-21T14:49:27.728+0800 ffffb8669010 0 osd.26 9004 crush map has features 288514051259236352, adjusting msgr requires for clients
2020-03-21T14:49:27.728+0800 ffffb8669010 0 osd.26 9004 crush map has features 288514051259236352 was 8705, adjusting msgr requires for mons
2020-03-21T14:49:27.728+0800 ffffb8669010 0 osd.26 9004 crush map has features 3314933000852226048, adjusting msgr requires for osds
2020-03-21T14:49:27.824+0800 ffffb8669010 0 osd.26 9004 load_pgs
2020-03-21T14:49:29.548+0800 ffffb8669010 0 osd.26 9004 load_pgs opened 14 pgs
2020-03-21T14:49:29.548+0800 ffffb8669010 -1 osd.26 9004 log_to_monitors {default=true}
2020-03-21T14:49:29.556+0800 ffffb8669010 0 osd.26 9004 done with init, starting boot process
2020-03-21T14:49:29.584+0800 ffffb01289f0 -1 osd.26 9004 set_numa_affinity unable to identify public interface 'rocevlan' numa node: (2) No such file or directory
2020-03-21T14:51:10.880+0800 ffffb7da29f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:10.880+0800 ffffb75a19f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:11.016+0800 ffffb7da29f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:11.016+0800 ffffb6da09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:11.172+0800 ffffb7da29f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:11.172+0800 ffffb7da29f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:21.096+0800 ffffb7da29f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:21.096+0800 ffffb75a19f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:21.172+0800 ffffb7da29f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:21.172+0800 ffffb7da29f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:21.232+0800 ffffb75a19f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:21.232+0800 ffffb6da09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:21.340+0800 ffffb6da09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:21.340+0800 ffffb75a19f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:21.396+0800 ffffb6da09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:21.396+0800 ffffb6da09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:34.324+0800 ffffb7da29f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:44.120+0800 ffffb306e9f0 -1 osd.26 9016 heartbeat_check: no reply from 172.19.36.252:6804 osd.46 since back 2020-03-21T14:51:18.187868+0800 front 2020-03-21T14:51:18.187551+0800 (oldest deadline 2020-03-21T14:51:44.087528+0800)
2020-03-21T14:51:44.120+0800 ffffb306e9f0 -1 osd.26 9016 heartbeat_check: no reply from 172.19.36.252:6828 osd.55 since back 2020-03-21T14:51:18.187713+0800 front 2020-03-21T14:51:18.187924+0800 (oldest deadline 2020-03-21T14:51:44.087528+0800)
2020-03-21T14:51:44.120+0800 ffffb306e9f0 -1 osd.26 9016 heartbeat_check: no reply from 172.19.36.252:6868 osd.57 since back 2020-03-21T14:51:18.187607+0800 front 2020-03-21T14:51:18.187842+0800 (oldest deadline 2020-03-21T14:51:44.087528+0800)
2020-03-21T14:51:44.120+0800 ffffb306e9f0 -1 osd.26 9016 heartbeat_check: no reply from 172.19.36.252:7004 osd.78 since back 2020-03-21T14:51:18.187717+0800 front 2020-03-21T14:51:18.187759+0800 (oldest deadline 2020-03-21T14:51:44.087528+0800)
2020-03-21T14:51:44.120+0800 ffffb306e9f0 -1 osd.26 9016 heartbeat_check: no reply from 172.19.36.252:7068 osd.83 since back 2020-03-21T14:51:18.187635+0800 front 2020-03-21T14:51:18.187792+0800 (oldest deadline 2020-03-21T14:51:44.087528+0800)
2020-03-21T14:51:45.164+0800 ffffb306e9f0 -1 osd.26 9017 heartbeat_check: no reply from 172.19.36.252:6868 osd.57 since back 2020-03-21T14:51:18.187607+0800 front 2020-03-21T14:51:18.187842+0800 (oldest deadline 2020-03-21T14:51:44.087528+0800)
2020-03-21T14:51:45.244+0800 ffffb6da09f0 -1 --2- 172.19.36.251:0/2706092 >> [v2:172.19.36.252:6918/228443,v1:172.19.36.252:6919/228443] conn(0xaaac189a4400 0xaaac189a9180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:6918/228443,v1:172.19.36.252:6919/228443] is using msgr V1 protocol
2020-03-21T14:51:45.448+0800 ffffb6da09f0 -1 --2- 172.19.36.251:0/2706092 >> [v2:172.19.36.252:6918/228443,v1:172.19.36.252:6919/228443] conn(0xaaac189a4400 0xaaac189a9180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:6918/228443,v1:172.19.36.252:6919/228443] is using msgr V1 protocol
2020-03-21T14:51:45.604+0800 ffffb75a19f0 -1 --2- 172.19.36.251:0/2706092 >> [v2:172.19.36.252:7027/228661,v1:172.19.36.252:7030/228661] conn(0xaaac24fbe900 0xaaac1e288580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7027/228661,v1:172.19.36.252:7030/228661] is using msgr V1 protocol
2020-03-21T14:51:45.604+0800 ffffb6da09f0 -1 --2- 172.19.36.251:0/2706092 >> [v2:172.19.36.252:7021/228661,v1:172.19.36.252:7024/228661] conn(0xaaac24fbed80 0xaaac1e288b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7021/228661,v1:172.19.36.252:7024/228661] is using msgr V1 protocol
2020-03-21T14:51:45.628+0800 ffffb7da29f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T14:51:45.804+0800 ffffb6da09f0 -1 --2- 172.19.36.251:0/2706092 >> [v2:172.19.36.252:7021/228661,v1:172.19.36.252:7024/228661] conn(0xaaac24fbed80 0xaaac1e288b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7021/228661,v1:172.19.36.252:7024/228661] is using msgr V1 protocol
2020-03-21T14:51:45.808+0800 ffffb75a19f0 -1 --2- 172.19.36.251:0/2706092 >> [v2:172.19.36.252:7027/228661,v1:172.19.36.252:7030/228661] conn(0xaaac24fbe900 0xaaac1e288580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7027/228661,v1:172.19.36.252:7030/228661] is using msgr V1 protocol
2020-03-21T14:51:48.992+0800 ffffb6da09f0 -1 osd.26 9021 build_incremental_map_msg missing incremental map 9021
2020-03-21T15:34:29.988+0800 ffff988f99f0 -1 osd.26 9045 build_incremental_map_msg missing incremental map 9045
2020-03-21T15:34:30.152+0800 ffffb7da29f0 -1 osd.26 9045 build_incremental_map_msg missing incremental map 9045
2020-03-21T15:34:54.988+0800 ffff8e8e59f0 -1 osd.26 9048 build_incremental_map_msg missing incremental map 9048
2020-03-21T15:34:54.996+0800 ffff8c8e19f0 -1 osd.26 9048 build_incremental_map_msg missing incremental map 9048
2020-03-21T15:35:18.980+0800 ffffb6da09f0 0 --2- [v2:172.19.36.251:7011/2706092,v1:172.19.36.251:7017/2706092] >> [v2:172.19.36.253:6953/2357359,v1:172.19.36.253:6955/2357359] conn(0xaaac17ebed00 0xaaac22643700 crc :-1 s=SESSION_ACCEPTING pgs=194 cs=0 l=0 rx=0 tx=0).handle_reconnect no existing connection exists, resetting client
2020-03-21T15:35:21.596+0800 ffff988f99f0 -1 osd.26 9052 build_incremental_map_msg missing incremental map 9052
2020-03-21T15:35:21.608+0800 ffffb75a19f0 -1 osd.26 9052 build_incremental_map_msg missing incremental map 9052
2020-03-21T15:35:21.760+0800 ffffb75a19f0 -1 osd.26 9052 build_incremental_map_msg missing incremental map 9052
2020-03-21T15:35:46.752+0800 ffff8e8e59f0 -1 osd.26 9055 build_incremental_map_msg missing incremental map 9055
2020-03-21T15:35:47.012+0800 ffffb6da09f0 -1 osd.26 9055 build_incremental_map_msg missing incremental map 9055
2020-03-21T15:35:51.280+0800 ffff8e8e59f0 -1 osd.26 9055 build_incremental_map_msg missing incremental map 9055
2020-03-21T15:36:10.752+0800 ffffb7da29f0 0 --2- [v2:172.19.36.251:7011/2706092,v1:172.19.36.251:7017/2706092] >> [v2:172.19.36.253:6953/4357359,v1:172.19.36.253:6955/4357359] conn(0xaaac17ebf600 0xaaac21835180 crc :-1 s=SESSION_ACCEPTING pgs=295 cs=0 l=0 rx=0 tx=0).handle_reconnect no existing connection exists, resetting client
2020-03-21T15:36:38.352+0800 ffffb6da09f0 0 --2- [v2:172.19.36.251:7011/2706092,v1:172.19.36.251:7017/2706092] >> [v2:172.19.36.253:7043/5357359,v1:172.19.36.253:7046/5357359] conn(0xaaac29abf680 0xaaac292f4680 crc :-1 s=SESSION_ACCEPTING pgs=342 cs=0 l=0 rx=0 tx=0).handle_reconnect no existing connection exists, resetting client
2020-03-21T15:45:56.128+0800 ffff988f99f0 -1 osd.26 9072 build_incremental_map_msg missing incremental map 9072
2020-03-21T15:45:56.412+0800 ffffb6da09f0 -1 osd.26 9072 build_incremental_map_msg missing incremental map 9072
2020-03-21T15:46:43.684+0800 ffff8e8e59f0 0 log_channel(cluster) log [DBG] : 4.e1 starting backfill to osd.117 from (0'0,0'0] MAX to 8644'1293564
2020-03-21T15:47:15.516+0800 ffff8e8e59f0 -1 osd.26 9088 build_incremental_map_msg missing incremental map 9088
2020-03-21T15:47:15.576+0800 ffffb6da09f0 -1 osd.26 9088 build_incremental_map_msg missing incremental map 9088
2020-03-21T15:47:16.024+0800 ffffb6da09f0 -1 osd.26 9088 build_incremental_map_msg missing incremental map 9088
2020-03-21T15:47:19.868+0800 ffff988f99f0 -1 osd.26 9090 build_incremental_map_msg missing incremental map 9088
2020-03-21T15:57:26.172+0800 ffffb75a19f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:26.172+0800 ffffb6da09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:26.316+0800 ffffb6da09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:26.316+0800 ffffb6da09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:26.396+0800 ffffb75a19f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:26.396+0800 ffffb7da29f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:26.492+0800 ffffb6da09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:26.492+0800 ffffb6da09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:26.500+0800 ffffb75a19f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:26.500+0800 ffffb75a19f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:32.376+0800 ffffb7da29f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:32.376+0800 ffffb7da29f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:32.696+0800 ffffb6da09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:32.696+0800 ffffb6da09f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:32.776+0800 ffffb75a19f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:32.776+0800 ffffb7da29f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:32.808+0800 ffffb7da29f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:32.808+0800 ffffb75a19f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T15:57:46.428+0800 ffffb306e9f0 -1 osd.26 9100 heartbeat_check: no reply from 172.19.36.252:7001 osd.57 since back 2020-03-21T15:57:23.282973+0800 front 2020-03-21T15:57:23.282423+0800 (oldest deadline 2020-03-21T15:57:46.180936+0800)
2020-03-21T15:57:47.412+0800 ffffb306e9f0 -1 osd.26 9100 heartbeat_check: no reply from 172.19.36.252:7001 osd.57 since back 2020-03-21T15:57:23.282973+0800 front 2020-03-21T15:57:23.282423+0800 (oldest deadline 2020-03-21T15:57:46.180936+0800)
2020-03-21T15:57:48.460+0800 ffffb306e9f0 -1 osd.26 9100 heartbeat_check: no reply from 172.19.36.252:7001 osd.57 since back 2020-03-21T15:57:23.282973+0800 front 2020-03-21T15:57:23.282423+0800 (oldest deadline 2020-03-21T15:57:46.180936+0800)
2020-03-21T15:57:49.468+0800 ffffb306e9f0 -1 osd.26 9101 heartbeat_check: no reply from 172.19.36.252:6804 osd.46 since back 2020-03-21T15:57:23.281454+0800 front 2020-03-21T15:57:23.281684+0800 (oldest deadline 2020-03-21T15:57:49.081642+0800)
2020-03-21T15:57:49.468+0800 ffffb306e9f0 -1 osd.26 9101 heartbeat_check: no reply from 172.19.36.252:6836 osd.55 since back 2020-03-21T15:57:23.282904+0800 front 2020-03-21T15:57:23.282547+0800 (oldest deadline 2020-03-21T15:57:49.081642+0800)
2020-03-21T15:57:49.468+0800 ffffb306e9f0 -1 osd.26 9101 heartbeat_check: no reply from 172.19.36.252:7001 osd.57 since back 2020-03-21T15:57:23.282973+0800 front 2020-03-21T15:57:23.282423+0800 (oldest deadline 2020-03-21T15:57:46.180936+0800)
2020-03-21T15:57:49.468+0800 ffffb306e9f0 -1 osd.26 9101 heartbeat_check: no reply from 172.19.36.252:6895 osd.78 since back 2020-03-21T15:57:23.282139+0800 front 2020-03-21T15:57:23.282682+0800 (oldest deadline 2020-03-21T15:57:49.081642+0800)
2020-03-21T15:57:49.468+0800 ffffb306e9f0 -1 osd.26 9101 heartbeat_check: no reply from 172.19.36.252:7017 osd.83 since back 2020-03-21T15:57:23.282799+0800 front 2020-03-21T15:57:23.282614+0800 (oldest deadline 2020-03-21T15:57:49.081642+0800)
2020-03-21T15:58:09.260+0800 ffff876f2010 0 set uid:gid to 0:64045 (ceph:ceph)
2020-03-21T15:58:09.260+0800 ffff876f2010 0 ceph version 15.1.0-35-gdeba62656d (deba62656d6bc55b66cb67ef83759f89a51eff9f) octopus (rc), process ceph-osd, pid 2835317
2020-03-21T15:58:09.260+0800 ffff876f2010 0 pidfile_write: ignore empty --pid-file
2020-03-21T15:58:10.348+0800 ffff876f2010 0 starting osd.26 osd_data /var/lib/ceph/osd/ceph-26 /var/lib/ceph/osd/ceph-26/journal
2020-03-21T15:58:10.352+0800 ffff876f2010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T15:58:10.352+0800 ffff876f2010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T15:58:10.384+0800 ffff876f2010 0 load: jerasure load: lrc
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:0.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:1.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:2.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:3.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:4.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:5.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:6.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:7.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:8.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:9.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:10.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:11.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:12.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:13.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:14.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:15.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:16.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:17.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:18.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.732+0800 ffff876f2010 0 osd.26:19.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T15:58:10.736+0800 ffff876f2010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T15:58:10.736+0800 ffff876f2010 0 set rocksdb option compression = kNoCompression
2020-03-21T15:58:10.736+0800 ffff876f2010 0 set rocksdb option max_background_compactions = 2
2020-03-21T15:58:10.736+0800 ffff876f2010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T15:58:10.736+0800 ffff876f2010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T15:58:10.740+0800 ffff876f2010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T15:58:10.740+0800 ffff876f2010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T15:58:10.740+0800 ffff876f2010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T15:58:10.744+0800 ffff876f2010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T15:58:10.744+0800 ffff876f2010 0 set rocksdb option compression = kNoCompression
2020-03-21T15:58:10.744+0800 ffff876f2010 0 set rocksdb option max_background_compactions = 2
2020-03-21T15:58:10.744+0800 ffff876f2010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T15:58:10.744+0800 ffff876f2010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T15:58:10.744+0800 ffff876f2010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T15:58:10.744+0800 ffff876f2010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T15:58:10.744+0800 ffff876f2010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T15:58:10.744+0800 ffff876f2010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T15:58:10.744+0800 ffff876f2010 0 set rocksdb option compression = kNoCompression
2020-03-21T15:58:10.744+0800 ffff876f2010 0 set rocksdb option max_background_compactions = 2
2020-03-21T15:58:10.744+0800 ffff876f2010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T15:58:10.744+0800 ffff876f2010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T15:58:10.744+0800 ffff876f2010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T15:58:10.744+0800 ffff876f2010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T15:58:10.744+0800 ffff876f2010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T15:58:11.880+0800 ffff876f2010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T15:58:11.880+0800 ffff876f2010 0 set rocksdb option compression = kNoCompression
2020-03-21T15:58:11.880+0800 ffff876f2010 0 set rocksdb option max_background_compactions = 2
2020-03-21T15:58:11.880+0800 ffff876f2010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T15:58:11.880+0800 ffff876f2010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T15:58:11.880+0800 ffff876f2010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T15:58:11.880+0800 ffff876f2010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T15:58:11.880+0800 ffff876f2010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T15:58:11.884+0800 ffff876f2010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T15:58:11.884+0800 ffff876f2010 0 set rocksdb option compression = kNoCompression
2020-03-21T15:58:11.884+0800 ffff876f2010 0 set rocksdb option max_background_compactions = 2
2020-03-21T15:58:11.884+0800 ffff876f2010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T15:58:11.884+0800 ffff876f2010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T15:58:11.884+0800 ffff876f2010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T15:58:11.884+0800 ffff876f2010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T15:58:11.884+0800 ffff876f2010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T15:58:11.884+0800 ffff876f2010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T15:58:11.884+0800 ffff876f2010 0 set rocksdb option compression = kNoCompression
2020-03-21T15:58:11.884+0800 ffff876f2010 0 set rocksdb option max_background_compactions = 2
2020-03-21T15:58:11.884+0800 ffff876f2010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T15:58:11.884+0800 ffff876f2010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T15:58:11.884+0800 ffff876f2010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T15:58:11.884+0800 ffff876f2010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T15:58:11.884+0800 ffff876f2010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T15:58:11.956+0800 ffff876f2010 0 _get_class not permitted to load sdk
2020-03-21T15:58:11.956+0800 ffff876f2010 0 _get_class not permitted to load kvs
2020-03-21T15:58:11.956+0800 ffff876f2010 0 <cls> /root/chunsong/ceph/src/cls/cephfs/cls_cephfs.cc:198: loading cephfs
2020-03-21T15:58:11.960+0800 ffff876f2010 0 <cls> /root/chunsong/ceph/src/cls/hello/cls_hello.cc:312: loading cls_hello
2020-03-21T15:58:11.960+0800 ffff876f2010 0 _get_class not permitted to load queue
2020-03-21T15:58:11.960+0800 ffff876f2010 0 _get_class not permitted to load lua
2020-03-21T15:58:11.964+0800 ffff876f2010 0 osd.26 9101 crush map has features 288514051259236352, adjusting msgr requires for clients
2020-03-21T15:58:11.964+0800 ffff876f2010 0 osd.26 9101 crush map has features 288514051259236352 was 8705, adjusting msgr requires for mons
2020-03-21T15:58:11.964+0800 ffff876f2010 0 osd.26 9101 crush map has features 3314933000852226048, adjusting msgr requires for osds
2020-03-21T15:58:12.052+0800 ffff876f2010 0 osd.26 9101 load_pgs
2020-03-21T15:58:13.712+0800 ffff876f2010 0 osd.26 9101 load_pgs opened 14 pgs
2020-03-21T15:58:13.712+0800 ffff876f2010 -1 osd.26 9101 log_to_monitors {default=true}
2020-03-21T15:58:13.720+0800 ffff876f2010 0 osd.26 9101 done with init, starting boot process
2020-03-21T15:58:13.756+0800 ffff7f1b19f0 -1 osd.26 9101 set_numa_affinity unable to identify public interface 'rocevlan' numa node: (2) No such file or directory
2020-03-21T16:15:38.544+0800 ffff85e299f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:38.544+0800 ffff8662a9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:38.676+0800 ffff86e2b9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:38.676+0800 ffff8662a9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:38.788+0800 ffff86e2b9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:38.788+0800 ffff86e2b9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:38.816+0800 ffff85e299f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:38.816+0800 ffff8662a9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:38.876+0800 ffff85e299f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:38.876+0800 ffff86e2b9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:39.880+0800 ffff85e299f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:39.880+0800 ffff86e2b9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:39.880+0800 ffff8662a9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:39.908+0800 ffff85e299f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:39.908+0800 ffff86e2b9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:39.924+0800 ffff8662a9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:39.924+0800 ffff8662a9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:15:59.884+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6812 osd.46 since back 2020-03-21T16:15:37.326545+0800 front 2020-03-21T16:15:37.327083+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:15:59.884+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6975 osd.55 since back 2020-03-21T16:15:37.326530+0800 front 2020-03-21T16:15:37.327162+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:15:59.884+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6977 osd.57 since back 2020-03-21T16:15:37.326958+0800 front 2020-03-21T16:15:37.326740+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:15:59.884+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:7033 osd.78 since back 2020-03-21T16:15:37.326864+0800 front 2020-03-21T16:15:37.326807+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:15:59.884+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6865 osd.83 since back 2020-03-21T16:15:37.327071+0800 front 2020-03-21T16:15:37.327692+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:00.856+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6812 osd.46 since back 2020-03-21T16:15:37.326545+0800 front 2020-03-21T16:15:37.327083+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:00.856+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6975 osd.55 since back 2020-03-21T16:15:37.326530+0800 front 2020-03-21T16:15:37.327162+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:00.856+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6977 osd.57 since back 2020-03-21T16:15:37.326958+0800 front 2020-03-21T16:15:37.326740+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:00.856+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:7033 osd.78 since back 2020-03-21T16:15:37.326864+0800 front 2020-03-21T16:15:37.326807+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:00.856+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6865 osd.83 since back 2020-03-21T16:15:37.327071+0800 front 2020-03-21T16:15:37.327692+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:01.820+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6812 osd.46 since back 2020-03-21T16:15:37.326545+0800 front 2020-03-21T16:15:37.327083+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:01.820+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6975 osd.55 since back 2020-03-21T16:15:37.326530+0800 front 2020-03-21T16:15:37.327162+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:01.820+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6977 osd.57 since back 2020-03-21T16:15:37.326958+0800 front 2020-03-21T16:15:37.326740+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:01.820+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:7033 osd.78 since back 2020-03-21T16:15:37.326864+0800 front 2020-03-21T16:15:37.326807+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:01.820+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6865 osd.83 since back 2020-03-21T16:15:37.327071+0800 front 2020-03-21T16:15:37.327692+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:02.796+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6812 osd.46 since back 2020-03-21T16:15:37.326545+0800 front 2020-03-21T16:15:37.327083+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:02.796+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6975 osd.55 since back 2020-03-21T16:15:37.326530+0800 front 2020-03-21T16:15:37.327162+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:02.796+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6977 osd.57 since back 2020-03-21T16:15:37.326958+0800 front 2020-03-21T16:15:37.326740+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:02.796+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:7033 osd.78 since back 2020-03-21T16:15:37.326864+0800 front 2020-03-21T16:15:37.326807+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:02.796+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6865 osd.83 since back 2020-03-21T16:15:37.327071+0800 front 2020-03-21T16:15:37.327692+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:03.776+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6812 osd.46 since back 2020-03-21T16:15:37.326545+0800 front 2020-03-21T16:15:37.327083+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:03.776+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6975 osd.55 since back 2020-03-21T16:15:37.326530+0800 front 2020-03-21T16:15:37.327162+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:03.776+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6977 osd.57 since back 2020-03-21T16:15:37.326958+0800 front 2020-03-21T16:15:37.326740+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:03.776+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:7033 osd.78 since back 2020-03-21T16:15:37.326864+0800 front 2020-03-21T16:15:37.326807+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:03.776+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6865 osd.83 since back 2020-03-21T16:15:37.327071+0800 front 2020-03-21T16:15:37.327692+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:03.952+0800 ffff85e299f0 -1 --2- 172.19.36.251:0/2835317 >> [v2:172.19.36.252:6871/248646,v1:172.19.36.252:6875/248646] conn(0xaaabf6b0ed80 0xaaabf6a7f600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:6871/248646,v1:172.19.36.252:6875/248646] is using msgr V1 protocol
2020-03-21T16:16:03.956+0800 ffff8662a9f0 -1 --2- 172.19.36.251:0/2835317 >> [v2:172.19.36.252:6865/248646,v1:172.19.36.252:6868/248646] conn(0xaaabf40bc400 0xaaabf6a80c00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:6865/248646,v1:172.19.36.252:6868/248646] is using msgr V1 protocol
2020-03-21T16:16:04.204+0800 ffff86e2b9f0 -1 --2- 172.19.36.251:0/2835317 >> [v2:172.19.36.252:7068/248586,v1:172.19.36.252:7079/248586] conn(0xaaabf40bcd00 0xaaabf6a7e580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7068/248586,v1:172.19.36.252:7079/248586] is using msgr V1 protocol
2020-03-21T16:16:04.204+0800 ffff86e2b9f0 -1 --2- 172.19.36.251:0/2835317 >> [v2:172.19.36.252:7033/248586,v1:172.19.36.252:7052/248586] conn(0xaaabf7ff7600 0xaaaac50b3080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7033/248586,v1:172.19.36.252:7052/248586] is using msgr V1 protocol
2020-03-21T16:16:04.224+0800 ffff8662a9f0 -1 --2- 172.19.36.251:0/2835317 >> [v2:172.19.36.252:6975/248307,v1:172.19.36.252:6996/248307] conn(0xaaabf7742880 0xaaabf4814680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:6975/248307,v1:172.19.36.252:6996/248307] is using msgr V1 protocol
2020-03-21T16:16:04.228+0800 ffff85e299f0 -1 --2- 172.19.36.251:0/2835317 >> [v2:172.19.36.252:7015/248307,v1:172.19.36.252:7032/248307] conn(0xaaabf4057b00 0xaaabf6a81180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:7015/248307,v1:172.19.36.252:7032/248307] is using msgr V1 protocol
2020-03-21T16:16:04.292+0800 ffff85e299f0 -1 --2- 172.19.36.251:0/2835317 >> [v2:172.19.36.252:6814/248202,v1:172.19.36.252:6815/248202] conn(0xaaabf7743180 0xaaabf67d2100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.252:6814/248202,v1:172.19.36.252:6815/248202] is using msgr V1 protocol
2020-03-21T16:16:04.804+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6812 osd.46 since back 2020-03-21T16:15:37.326545+0800 front 2020-03-21T16:15:37.327083+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:04.804+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6975 osd.55 since back 2020-03-21T16:15:37.326530+0800 front 2020-03-21T16:15:37.327162+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:04.804+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6977 osd.57 since back 2020-03-21T16:15:37.326958+0800 front 2020-03-21T16:15:37.326740+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:04.804+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:7033 osd.78 since back 2020-03-21T16:15:37.326864+0800 front 2020-03-21T16:15:37.326807+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:04.804+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.252:6865 osd.83 since back 2020-03-21T16:15:37.327071+0800 front 2020-03-21T16:15:37.327692+0800 (oldest deadline 2020-03-21T16:15:59.626273+0800)
2020-03-21T16:16:04.804+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:04.804+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:04.804+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:04.804+0800 ffff820f79f0 -1 osd.26 9133 heartbeat_check: no reply from 172.19.36.253:7078 osd.137 since back 2020-03-21T16:15:37.327508+0800 front 2020-03-21T16:15:37.326601+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:05.844+0800 ffff820f79f0 -1 osd.26 9135 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:05.844+0800 ffff820f79f0 -1 osd.26 9135 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:05.844+0800 ffff820f79f0 -1 osd.26 9135 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:05.844+0800 ffff820f79f0 -1 osd.26 9135 heartbeat_check: no reply from 172.19.36.253:7078 osd.137 since back 2020-03-21T16:15:37.327508+0800 front 2020-03-21T16:15:37.326601+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:06.280+0800 ffff86e2b9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:16:06.876+0800 ffff820f79f0 -1 osd.26 9136 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:06.876+0800 ffff820f79f0 -1 osd.26 9136 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:06.876+0800 ffff820f79f0 -1 osd.26 9136 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:07.892+0800 ffff820f79f0 -1 osd.26 9137 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:07.892+0800 ffff820f79f0 -1 osd.26 9137 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:07.892+0800 ffff820f79f0 -1 osd.26 9137 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:08.864+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:08.864+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:08.864+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:09.856+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:09.856+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:09.856+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:10.336+0800 ffff8662a9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:16:10.336+0800 ffff8662a9f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:16:10.832+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:10.832+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:10.832+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:11.868+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:11.868+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:11.868+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:12.848+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:12.848+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:12.848+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:13.856+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:13.856+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:13.856+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:14.828+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:14.828+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:14.832+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:15.860+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:15.864+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:15.864+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:16.816+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:16.816+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:16.816+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:17.804+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:17.804+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:17.804+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:18.828+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:18.828+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:18.828+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:19.852+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:19.852+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:19.852+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:20.836+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:20.836+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:20.836+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:21.864+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:21.864+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:21.864+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:21.864+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:22.448+0800 ffff85e299f0 -1 --2- 172.19.36.251:0/2835317 >> [v2:172.19.36.253:7030/367075,v1:172.19.36.253:7034/367075] conn(0xaaabf412d180 0xaaabf4813080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.253:7030/367075,v1:172.19.36.253:7034/367075] is using msgr V1 protocol
2020-03-21T16:16:22.468+0800 ffff8662a9f0 -1 --2- 172.19.36.251:0/2835317 >> [v2:172.19.36.253:7039/367075,v1:172.19.36.253:7044/367075] conn(0xaaabf412c880 0xaaabf61e7700 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.253:7039/367075,v1:172.19.36.253:7044/367075] is using msgr V1 protocol
2020-03-21T16:16:22.468+0800 ffff85e299f0 -1 --2- 172.19.36.251:0/2835317 >> [v2:172.19.36.253:7046/367124,v1:172.19.36.253:7048/367124] conn(0xaaabf7740900 0xaaabf4812b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.253:7046/367124,v1:172.19.36.253:7048/367124] is using msgr V1 protocol
2020-03-21T16:16:22.600+0800 ffff8662a9f0 -1 --2- [v2:172.19.36.251:6978/2835317,v1:172.19.36.251:6979/2835317] >> [v2:172.19.36.253:7037/367124,v1:172.19.36.253:7042/367124] conn(0xaaabeb2fd200 0xaaabeddcf600 unknown :-1 s=BANNER_CONNECTING pgs=29 cs=9 l=0 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.253:7037/367124,v1:172.19.36.253:7042/367124] is using msgr V1 protocol
2020-03-21T16:16:22.860+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:22.860+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:22.860+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:22.860+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:23.888+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:23.888+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:23.888+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:23.888+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:24.884+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:24.884+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:24.884+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:24.884+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:25.900+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:25.900+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:25.900+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:25.900+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:26.936+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.252:6884 osd.55 ever on either front or back, first ping sent 2020-03-21T16:16:06.531351+0800 (oldest deadline 2020-03-21T16:16:26.531351+0800)
2020-03-21T16:16:26.936+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:26.936+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:26.936+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:26.936+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:27.952+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.252:6884 osd.55 ever on either front or back, first ping sent 2020-03-21T16:16:06.531351+0800 (oldest deadline 2020-03-21T16:16:26.531351+0800)
2020-03-21T16:16:27.952+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:27.952+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:27.956+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:27.956+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:28.992+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.252:6884 osd.55 ever on either front or back, first ping sent 2020-03-21T16:16:06.531351+0800 (oldest deadline 2020-03-21T16:16:26.531351+0800)
2020-03-21T16:16:28.992+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:28.992+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:28.992+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:28.992+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:30.004+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.252:6884 osd.55 ever on either front or back, first ping sent 2020-03-21T16:16:06.531351+0800 (oldest deadline 2020-03-21T16:16:26.531351+0800)
2020-03-21T16:16:30.004+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:30.004+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:30.004+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:30.004+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:31.048+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.252:6884 osd.55 ever on either front or back, first ping sent 2020-03-21T16:16:06.531351+0800 (oldest deadline 2020-03-21T16:16:26.531351+0800)
2020-03-21T16:16:31.048+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:31.048+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:31.048+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:31.048+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:32.092+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.252:6884 osd.55 ever on either front or back, first ping sent 2020-03-21T16:16:06.531351+0800 (oldest deadline 2020-03-21T16:16:26.531351+0800)
2020-03-21T16:16:32.092+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:32.092+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:32.092+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:32.092+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:33.076+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.252:6884 osd.55 ever on either front or back, first ping sent 2020-03-21T16:16:06.531351+0800 (oldest deadline 2020-03-21T16:16:26.531351+0800)
2020-03-21T16:16:33.076+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:33.076+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:33.076+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:33.076+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:34.120+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.252:6884 osd.55 ever on either front or back, first ping sent 2020-03-21T16:16:06.531351+0800 (oldest deadline 2020-03-21T16:16:26.531351+0800)
2020-03-21T16:16:34.120+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:34.120+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:34.120+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:34.120+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:35.112+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.252:6884 osd.55 ever on either front or back, first ping sent 2020-03-21T16:16:06.531351+0800 (oldest deadline 2020-03-21T16:16:26.531351+0800)
2020-03-21T16:16:35.112+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:35.112+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:35.112+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:35.112+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6820 osd.137 since back 2020-03-21T16:16:09.432897+0800 front 2020-03-21T16:16:09.432886+0800 (oldest deadline 2020-03-21T16:16:34.732676+0800)
2020-03-21T16:16:35.112+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:36.152+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.252:6884 osd.55 ever on either front or back, first ping sent 2020-03-21T16:16:06.531351+0800 (oldest deadline 2020-03-21T16:16:26.531351+0800)
2020-03-21T16:16:36.152+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:36.152+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:36.152+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:36.152+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6820 osd.137 since back 2020-03-21T16:16:09.432897+0800 front 2020-03-21T16:16:09.432886+0800 (oldest deadline 2020-03-21T16:16:34.732676+0800)
2020-03-21T16:16:36.152+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:37.108+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.252:6884 osd.55 ever on either front or back, first ping sent 2020-03-21T16:16:06.531351+0800 (oldest deadline 2020-03-21T16:16:26.531351+0800)
2020-03-21T16:16:37.108+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:37.108+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:37.108+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:37.108+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6820 osd.137 since back 2020-03-21T16:16:09.432897+0800 front 2020-03-21T16:16:09.432886+0800 (oldest deadline 2020-03-21T16:16:34.732676+0800)
2020-03-21T16:16:37.108+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:37.452+0800 ffff85e299f0 -1 --2- 172.19.36.251:0/2835317 >> [v2:172.19.36.253:7030/367075,v1:172.19.36.253:7034/367075] conn(0xaaabf412d180 0xaaabf4813080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.253:7030/367075,v1:172.19.36.253:7034/367075] is using msgr V1 protocol
2020-03-21T16:16:37.468+0800 ffff8662a9f0 -1 --2- 172.19.36.251:0/2835317 >> [v2:172.19.36.253:7039/367075,v1:172.19.36.253:7044/367075] conn(0xaaabf412c880 0xaaabf61e7700 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.253:7039/367075,v1:172.19.36.253:7044/367075] is using msgr V1 protocol
2020-03-21T16:16:37.476+0800 ffff85e299f0 -1 --2- 172.19.36.251:0/2835317 >> [v2:172.19.36.253:7046/367124,v1:172.19.36.253:7048/367124] conn(0xaaabf7740900 0xaaabf4812b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.253:7046/367124,v1:172.19.36.253:7048/367124] is using msgr V1 protocol
2020-03-21T16:16:37.600+0800 ffff8662a9f0 -1 --2- [v2:172.19.36.251:6978/2835317,v1:172.19.36.251:6979/2835317] >> [v2:172.19.36.253:7037/367124,v1:172.19.36.253:7042/367124] conn(0xaaabeb2fd200 0xaaabeddcf600 unknown :-1 s=BANNER_CONNECTING pgs=29 cs=10 l=0 rx=0 tx=0)._handle_peer_banner peer [v2:172.19.36.253:7037/367124,v1:172.19.36.253:7042/367124] is using msgr V1 protocol
2020-03-21T16:16:38.060+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.252:6884 osd.55 ever on either front or back, first ping sent 2020-03-21T16:16:06.531351+0800 (oldest deadline 2020-03-21T16:16:26.531351+0800)
2020-03-21T16:16:38.060+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:38.060+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:38.060+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:38.060+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6820 osd.137 since back 2020-03-21T16:16:09.432897+0800 front 2020-03-21T16:16:09.432886+0800 (oldest deadline 2020-03-21T16:16:34.732676+0800)
2020-03-21T16:16:38.060+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:39.072+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.252:6884 osd.55 ever on either front or back, first ping sent 2020-03-21T16:16:06.531351+0800 (oldest deadline 2020-03-21T16:16:26.531351+0800)
2020-03-21T16:16:39.072+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:39.072+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:39.072+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:39.072+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6820 osd.137 since back 2020-03-21T16:16:09.432897+0800 front 2020-03-21T16:16:09.432886+0800 (oldest deadline 2020-03-21T16:16:34.732676+0800)
2020-03-21T16:16:39.072+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:40.100+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.252:6884 osd.55 ever on either front or back, first ping sent 2020-03-21T16:16:06.531351+0800 (oldest deadline 2020-03-21T16:16:26.531351+0800)
2020-03-21T16:16:40.100+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:40.100+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7030 osd.128 since back 2020-03-21T16:15:37.326388+0800 front 2020-03-21T16:15:37.326642+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:40.100+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:40.100+0800 ffff820f79f0 -1 osd.26 9138 heartbeat_check: no reply from 172.19.36.253:6820 osd.137 since back 2020-03-21T16:16:09.432897+0800 front 2020-03-21T16:16:09.432886+0800 (oldest deadline 2020-03-21T16:16:34.732676+0800)
2020-03-21T16:16:40.100+0800 ffff820f79f0 -1 osd.26 9138 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:41.096+0800 ffff820f79f0 -1 osd.26 9139 heartbeat_check: no reply from 172.19.36.252:6884 osd.55 ever on either front or back, first ping sent 2020-03-21T16:16:06.531351+0800 (oldest deadline 2020-03-21T16:16:26.531351+0800)
2020-03-21T16:16:41.096+0800 ffff820f79f0 -1 osd.26 9139 heartbeat_check: no reply from 172.19.36.253:6988 osd.117 since back 2020-03-21T16:15:37.327014+0800 front 2020-03-21T16:15:37.327451+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:41.096+0800 ffff820f79f0 -1 osd.26 9139 heartbeat_check: no reply from 172.19.36.253:7046 osd.132 since back 2020-03-21T16:15:37.327333+0800 front 2020-03-21T16:15:37.327598+0800 (oldest deadline 2020-03-21T16:16:04.326818+0800)
2020-03-21T16:16:41.096+0800 ffff820f79f0 -1 osd.26 9139 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:42.120+0800 ffff820f79f0 -1 osd.26 9140 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:16:43.152+0800 ffff820f79f0 -1 osd.26 9141 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.397461.0:31859 4.91 4:8970d2ee:::rbd_header.3716e91838736:head [watch ping cookie 794586880] snapc 0=[] ondisk+write+known_if_redirected e9133)
2020-03-21T16:17:28.168+0800 ffff9c1a5010 0 set uid:gid to 0:64045 (ceph:ceph)
2020-03-21T16:17:28.168+0800 ffff9c1a5010 0 ceph version 15.1.0-35-gdeba62656d (deba62656d6bc55b66cb67ef83759f89a51eff9f) octopus (rc), process ceph-osd, pid 2857581
2020-03-21T16:17:28.168+0800 ffff9c1a5010 0 pidfile_write: ignore empty --pid-file
2020-03-21T16:17:29.388+0800 ffff9c1a5010 0 starting osd.26 osd_data /var/lib/ceph/osd/ceph-26 /var/lib/ceph/osd/ceph-26/journal
2020-03-21T16:17:29.388+0800 ffff9c1a5010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T16:17:29.392+0800 ffff9c1a5010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T16:17:29.432+0800 ffff9c1a5010 0 load: jerasure load: lrc
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:0.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:1.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:2.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:3.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:4.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:5.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:6.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:7.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:8.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:9.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:10.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:11.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:12.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:13.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:14.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:15.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:16.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:17.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:18.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:19.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:29.888+0800 ffff9c1a5010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T16:17:29.888+0800 ffff9c1a5010 0 set rocksdb option compression = kNoCompression
2020-03-21T16:17:29.888+0800 ffff9c1a5010 0 set rocksdb option max_background_compactions = 2
2020-03-21T16:17:29.888+0800 ffff9c1a5010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T16:17:29.888+0800 ffff9c1a5010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T16:17:29.888+0800 ffff9c1a5010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T16:17:29.888+0800 ffff9c1a5010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T16:17:29.888+0800 ffff9c1a5010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option compression = kNoCompression
2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option max_background_compactions = 2
2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option compression = kNoCompression
2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option max_background_compactions = 2
2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T16:17:31.004+0800 ffff9c1a5010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T16:17:31.004+0800 ffff9c1a5010 0 set rocksdb option compression = kNoCompression
2020-03-21T16:17:31.004+0800 ffff9c1a5010 0 set rocksdb option max_background_compactions = 2
2020-03-21T16:17:31.004+0800 ffff9c1a5010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T16:17:31.004+0800 ffff9c1a5010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T16:17:31.004+0800 ffff9c1a5010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T16:17:31.004+0800 ffff9c1a5010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T16:17:31.004+0800 ffff9c1a5010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option compression = kNoCompression
2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option max_background_compactions = 2
2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option compression = kNoCompression
2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option max_background_compactions = 2
2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T16:17:31.052+0800 ffff9c1a5010 0 _get_class not permitted to load sdk
2020-03-21T16:17:31.052+0800 ffff9c1a5010 0 _get_class not permitted to load kvs
2020-03-21T16:17:31.052+0800 ffff9c1a5010 0 <cls> /root/chunsong/ceph/src/cls/cephfs/cls_cephfs.cc:198: loading cephfs
2020-03-21T16:17:31.052+0800 ffff9c1a5010 0 <cls> /root/chunsong/ceph/src/cls/hello/cls_hello.cc:312: loading cls_hello
2020-03-21T16:17:31.052+0800 ffff9c1a5010 0 _get_class not permitted to load queue
2020-03-21T16:17:31.056+0800 ffff9c1a5010 0 _get_class not permitted to load lua
2020-03-21T16:17:31.056+0800 ffff9c1a5010 0 osd.26 9142 crush map has features 288514051259236352, adjusting msgr requires for clients
2020-03-21T16:17:31.056+0800 ffff9c1a5010 0 osd.26 9142 crush map has features 288514051259236352 was 8705, adjusting msgr requires for mons
2020-03-21T16:17:31.056+0800 ffff9c1a5010 0 osd.26 9142 crush map has features 3314933000852226048, adjusting msgr requires for osds
2020-03-21T16:17:31.144+0800 ffff9c1a5010 0 osd.26 9142 load_pgs
2020-03-21T16:17:32.892+0800 ffff9c1a5010 0 osd.26 9142 load_pgs opened 14 pgs
2020-03-21T16:17:32.896+0800 ffff9c1a5010 -1 osd.26 9142 log_to_monitors {default=true}
2020-03-21T16:17:32.896+0800 ffff9b0dd9f0 0 auth: could not find secret_id=76
2020-03-21T16:17:32.896+0800 ffff9b0dd9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
2020-03-21T16:17:32.896+0800 ffff9b0dd9f0 0 auth: could not find secret_id=76
2020-03-21T16:17:32.896+0800 ffff9b0dd9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
2020-03-21T16:17:32.896+0800 ffff9b0dd9f0 0 auth: could not find secret_id=76
2020-03-21T16:17:32.896+0800 ffff9b0dd9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
2020-03-21T16:17:32.896+0800 ffff9b0dd9f0 0 auth: could not find secret_id=76
2020-03-21T16:17:32.896+0800 ffff9b0dd9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
2020-03-21T16:17:32.896+0800 ffff9a8dc9f0 0 auth: could not find secret_id=76
2020-03-21T16:17:32.896+0800 ffff9a8dc9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
2020-03-21T16:17:32.896+0800 ffff9b8de9f0 0 auth: could not find secret_id=76
2020-03-21T16:17:32.896+0800 ffff9b8de9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
2020-03-21T16:17:32.896+0800 ffff9b8de9f0 0 auth: could not find secret_id=76
2020-03-21T16:17:32.896+0800 ffff9b8de9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
2020-03-21T16:17:32.896+0800 ffff9a8dc9f0 0 auth: could not find secret_id=76
2020-03-21T16:17:32.896+0800 ffff9a8dc9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
2020-03-21T16:17:32.896+0800 ffff9b8de9f0 0 auth: could not find secret_id=76
2020-03-21T16:17:32.896+0800 ffff9b8de9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
2020-03-21T16:17:32.896+0800 ffff9a8dc9f0 0 auth: could not find secret_id=76
2020-03-21T16:17:32.896+0800 ffff9a8dc9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
2020-03-21T16:17:32.976+0800 ffff9c1a5010 0 osd.26 9142 done with init, starting boot process
2020-03-21T16:17:32.996+0800 ffff93c649f0 -1 osd.26 9142 set_numa_affinity unable to identify public interface 'rocevlan' numa node: (2) No such file or directory
2020-03-21T16:17:34.056+0800 ffff9b8de9f0 -1 Infiniband modify_qp_to_init failed to switch to INIT state Queue Pair, qp number: 985228 Error: (5) Input/output error
2020-03-21T16:17:34.160+0800 ffff9b8de9f0 -1 *** Caught signal (Segmentation fault) **
in thread ffff9b8de9f0 thread_name:msgr-worker-0

ceph version 15.1.0-35-gdeba62656d (deba62656d6bc55b66cb67ef83759f89a51eff9f) octopus (rc)
1: (__kernel_rt_sigreturn()+0) [0xffff9cc0a5c0]
2: (ibv_destroy_qp()+0x8) [0xffff9c6d4fc0]
3: (Infiniband::QueuePair::~QueuePair()+0x48) [0xaaaac2212058]
4: (Infiniband::create_queue_pair(CephContext*, RDMAWorker*, ibv_qp_type, rdma_cm_id*)+0x8c) [0xaaaac22125e4]
5: (RDMAConnectedSocketImpl::RDMAConnectedSocketImpl(CephContext*, std::shared_ptr<Infiniband>&, std::shared_ptr<RDMADispatcher>&, RDMAWorker*)+0x188) [0xaaaac2212f20]
6: (RDMAWorker::connect(entity_addr_t const&, SocketOptions const&, ConnectedSocket*)+0x10c) [0xaaaac201bc04]
7: (AsyncConnection::process()+0x554) [0xaaaac21ba3fc]
8: (EventCenter::process_events(unsigned int, std::chrono::duration<unsigned long, std::ratio<1l, 1000000000l> >*)+0xbb0) [0xaaaac200ad50]
9: (()+0x123b8d0) [0xaaaac20108d0]
10: (()+0xc9ed4) [0xffff9c561ed4]
11: (()+0x7088) [0xffff9c71d088]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
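
Reading the backtrace above together with the attachment name (qp_double_destroy): ibv_destroy_qp() is reached from Infiniband::QueuePair::~QueuePair() while Infiniband::create_queue_pair() is tearing down the QueuePair whose modify_qp_to_init() had just failed with EIO, i.e. the same QP appears to get destroyed a second time on the error path. The following is a minimal, hypothetical C++ sketch of that failure pattern only, not the Ceph source: FakeQP and fake_destroy_qp() stand in for struct ibv_qp and ibv_destroy_qp(), and the "init() destroys the QP itself on failure" behaviour is an assumption made to reproduce the double destroy.

    // Hypothetical sketch of the suspected double-destroy; not Ceph code.
    #include <cstdio>

    struct FakeQP { bool destroyed = false; };      // stand-in for struct ibv_qp

    static int fake_destroy_qp(FakeQP* qp) {        // stand-in for ibv_destroy_qp()
        if (qp->destroyed) {
            std::printf("second destroy of the same QP -> invalid handle, "
                        "would segfault inside the verbs library\n");
            return -1;
        }
        qp->destroyed = true;
        return 0;
    }

    class QueuePairSketch {
    public:
        explicit QueuePairSketch(FakeQP* qp) : qp_(qp) {}

        // Assumption: when the INIT transition fails (the EIO logged above),
        // the QP is already destroyed here, but qp_ keeps pointing at it.
        int init() {
            bool modify_qp_to_init_failed = true;
            if (modify_qp_to_init_failed) {
                fake_destroy_qp(qp_);               // first destroy
                return -1;
            }
            return 0;
        }

        ~QueuePairSketch() {
            if (qp_) fake_destroy_qp(qp_);          // second destroy of the same QP
        }

    private:
        FakeQP* qp_;
    };

    int main() {
        FakeQP qp;
        auto* p = new QueuePairSketch(&qp);
        if (p->init() != 0) {
            delete p;   // mirrors create_queue_pair() deleting the failed QueuePair,
                        // whose destructor then hits the dead QP (frames 3-4 above)
        }
        // Clearing qp_ (or recording the init failure) before the destructor runs
        // would avoid the second ibv_destroy_qp() call in this sketch.
        return 0;
    }
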

--- begin dump of recent events ---
-144> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command assert hook 0xaaaaf9e2e7f0
-143> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command abort hook 0xaaaaf9e2e7f0
-142> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command perfcounters_dump hook 0xaaaaf9e2e7f0
-141> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command 1 hook 0xaaaaf9e2e7f0
-140> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command perf dump hook 0xaaaaf9e2e7f0
-139> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command perfcounters_schema hook 0xaaaaf9e2e7f0
-138> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command perf histogram dump hook 0xaaaaf9e2e7f0
-137> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command 2 hook 0xaaaaf9e2e7f0
-136> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command perf schema hook 0xaaaaf9e2e7f0
-135> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command perf histogram schema hook 0xaaaaf9e2e7f0
-134> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command perf reset hook 0xaaaaf9e2e7f0
-133> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command config show hook 0xaaaaf9e2e7f0
-132> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command config help hook 0xaaaaf9e2e7f0
-131> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command config set hook 0xaaaaf9e2e7f0
-130> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command config unset hook 0xaaaaf9e2e7f0
-129> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command config get hook 0xaaaaf9e2e7f0
-128> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command config diff hook 0xaaaaf9e2e7f0
-127> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command config diff get hook 0xaaaaf9e2e7f0
-126> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command injectargs hook 0xaaaaf9e2e7f0
-125> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command log flush hook 0xaaaaf9e2e7f0
-124> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command log dump hook 0xaaaaf9e2e7f0
-123> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command log reopen hook 0xaaaaf9e2e7f0
-122> 2020-03-21T16:17:24.420+0800 ffff9c1a5010 5 asok(0xaaaaf9f00000) register_command dump_mempools hook 0xaaaafaac8068
-121> 2020-03-21T16:17:24.452+0800 ffff9b8de9f0 0 Infiniband name hns_2 osd id is 26 num_comp_vectors 63
-120> 2020-03-21T16:17:24.452+0800 ffff9b8de9f0 0 Infiniband name hns_0 osd id is 26 num_comp_vectors 63
-119> 2020-03-21T16:17:24.452+0800 ffff9b8de9f0 0 Infiniband name mlx5_0 osd id is 26 num_comp_vectors 63
-118> 2020-03-21T16:17:24.452+0800 ffff9b8de9f0 0 Infiniband name hns_3 osd id is 26 num_comp_vectors 63
-117> 2020-03-21T16:17:24.452+0800 ffff9b8de9f0 0 Infiniband name hns_1 osd id is 26 num_comp_vectors 63
-116> 2020-03-21T16:17:24.456+0800 ffff9b8de9f0 0 Infiniband name mlx5_1 osd id is 26 num_comp_vectors 63
-115> 2020-03-21T16:17:28.168+0800 ffff9c1a5010 0 set uid:gid to 0:64045 (ceph:ceph)
-114> 2020-03-21T16:17:28.168+0800 ffff9c1a5010 0 ceph version 15.1.0-35-gdeba62656d (deba62656d6bc55b66cb67ef83759f89a51eff9f) octopus (rc), process ceph-osd, pid 2857581
-113> 2020-03-21T16:17:28.168+0800 ffff9c1a5010 0 pidfile_write: ignore empty --pid-file
-112> 2020-03-21T16:17:29.388+0800 ffff9c1a5010 0 starting osd.26 osd_data /var/lib/ceph/osd/ceph-26 /var/lib/ceph/osd/ceph-26/journal
-111> 2020-03-21T16:17:29.388+0800 ffff9c1a5010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
-110> 2020-03-21T16:17:29.392+0800 ffff9c1a5010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
-109> 2020-03-21T16:17:29.432+0800 ffff9c1a5010 0 load: jerasure load: lrc
-108> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:0.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-107> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:1.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-106> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:2.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-105> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:3.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-104> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:4.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-103> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:5.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-102> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:6.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-101> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:7.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-100> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:8.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-99> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:9.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-98> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:10.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-97> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:11.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-96> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:12.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-95> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:13.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-94> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:14.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-93> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:15.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-92> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:16.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-91> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:17.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-90> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:18.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-89> 2020-03-21T16:17:29.884+0800 ffff9c1a5010 0 osd.26:19.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
-88> 2020-03-21T16:17:29.888+0800 ffff9c1a5010 0 set rocksdb option compaction_readahead_size = 2097152
-87> 2020-03-21T16:17:29.888+0800 ffff9c1a5010 0 set rocksdb option compression = kNoCompression
-86> 2020-03-21T16:17:29.888+0800 ffff9c1a5010 0 set rocksdb option max_background_compactions = 2
-85> 2020-03-21T16:17:29.888+0800 ffff9c1a5010 0 set rocksdb option max_write_buffer_number = 4
-84> 2020-03-21T16:17:29.888+0800 ffff9c1a5010 0 set rocksdb option min_write_buffer_number_to_merge = 1
-83> 2020-03-21T16:17:29.888+0800 ffff9c1a5010 0 set rocksdb option recycle_log_file_num = 4
-82> 2020-03-21T16:17:29.888+0800 ffff9c1a5010 0 set rocksdb option writable_file_max_buffer_size = 0
-81> 2020-03-21T16:17:29.888+0800 ffff9c1a5010 0 set rocksdb option write_buffer_size = 268435456
-80> 2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option compaction_readahead_size = 2097152
-79> 2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option compression = kNoCompression
-78> 2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option max_background_compactions = 2
-77> 2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option max_write_buffer_number = 4
-76> 2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option min_write_buffer_number_to_merge = 1
-75> 2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option recycle_log_file_num = 4
-74> 2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option writable_file_max_buffer_size = 0
-73> 2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option write_buffer_size = 268435456
-72> 2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option compaction_readahead_size = 2097152
-71> 2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option compression = kNoCompression
-70> 2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option max_background_compactions = 2
-69> 2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option max_write_buffer_number = 4
-68> 2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option min_write_buffer_number_to_merge = 1
-67> 2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option recycle_log_file_num = 4
-66> 2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option writable_file_max_buffer_size = 0
-65> 2020-03-21T16:17:29.892+0800 ffff9c1a5010 0 set rocksdb option write_buffer_size = 268435456
-64> 2020-03-21T16:17:31.004+0800 ffff9c1a5010 0 set rocksdb option compaction_readahead_size = 2097152
-63> 2020-03-21T16:17:31.004+0800 ffff9c1a5010 0 set rocksdb option compression = kNoCompression
-62> 2020-03-21T16:17:31.004+0800 ffff9c1a5010 0 set rocksdb option max_background_compactions = 2
-61> 2020-03-21T16:17:31.004+0800 ffff9c1a5010 0 set rocksdb option max_write_buffer_number = 4
-60> 2020-03-21T16:17:31.004+0800 ffff9c1a5010 0 set rocksdb option min_write_buffer_number_to_merge = 1
-59> 2020-03-21T16:17:31.004+0800 ffff9c1a5010 0 set rocksdb option recycle_log_file_num = 4
-58> 2020-03-21T16:17:31.004+0800 ffff9c1a5010 0 set rocksdb option writable_file_max_buffer_size = 0
-57> 2020-03-21T16:17:31.004+0800 ffff9c1a5010 0 set rocksdb option write_buffer_size = 268435456
-56> 2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option compaction_readahead_size = 2097152
-55> 2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option compression = kNoCompression
-54> 2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option max_background_compactions = 2
-53> 2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option max_write_buffer_number = 4
-52> 2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option min_write_buffer_number_to_merge = 1
-51> 2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option recycle_log_file_num = 4
-50> 2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option writable_file_max_buffer_size = 0
-49> 2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option write_buffer_size = 268435456
-48> 2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option compaction_readahead_size = 2097152
-47> 2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option compression = kNoCompression
-46> 2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option max_background_compactions = 2
-45> 2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option max_write_buffer_number = 4
-44> 2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option min_write_buffer_number_to_merge = 1
-43> 2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option recycle_log_file_num = 4
-42> 2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option writable_file_max_buffer_size = 0
-41> 2020-03-21T16:17:31.008+0800 ffff9c1a5010 0 set rocksdb option write_buffer_size = 268435456
-40> 2020-03-21T16:17:31.048+0800 ffff8cc569f0 5 prioritycache tune_memory target: 4294967296 mapped: 4868440064 unmapped: 32555008 heap: 4900995072 old mem: 134217728 new mem: 134217728
-39> 2020-03-21T16:17:31.048+0800 ffff8cc569f0 5 prioritycache tune_memory target: 4294967296 mapped: 4868464640 unmapped: 32530432 heap: 4900995072 old mem: 134217728 new mem: 134217728
-38> 2020-03-21T16:17:31.052+0800 ffff9c1a5010 0 _get_class not permitted to load sdk
-37> 2020-03-21T16:17:31.052+0800 ffff9c1a5010 0 _get_class not permitted to load kvs
-36> 2020-03-21T16:17:31.052+0800 ffff9c1a5010 0 <cls> /root/chunsong/ceph/src/cls/cephfs/cls_cephfs.cc:198: loading cephfs
-35> 2020-03-21T16:17:31.052+0800 ffff9c1a5010 0 <cls> /root/chunsong/ceph/src/cls/hello/cls_hello.cc:312: loading cls_hello
-34> 2020-03-21T16:17:31.052+0800 ffff9c1a5010 0 _get_class not permitted to load queue
-33> 2020-03-21T16:17:31.056+0800 ffff9c1a5010 0 _get_class not permitted to load lua
-32> 2020-03-21T16:17:31.056+0800 ffff9c1a5010 0 osd.26 9142 crush map has features 288514051259236352, adjusting msgr requires for clients
-31> 2020-03-21T16:17:31.056+0800 ffff9c1a5010 0 osd.26 9142 crush map has features 288514051259236352 was 8705, adjusting msgr requires for mons
-30> 2020-03-21T16:17:31.056+0800 ffff9c1a5010 0 osd.26 9142 crush map has features 3314933000852226048, adjusting msgr requires for osds
-29> 2020-03-21T16:17:31.144+0800 ffff9c1a5010 0 osd.26 9142 load_pgs
-28> 2020-03-21T16:17:32.048+0800 ffff8cc569f0 5 prioritycache tune_memory target: 4294967296 mapped: 4964745216 unmapped: 8601600 heap: 4973346816 old mem: 134217728 new mem: 134217728
-27> 2020-03-21T16:17:32.892+0800 ffff9c1a5010 0 osd.26 9142 load_pgs opened 14 pgs
-26> 2020-03-21T16:17:32.896+0800 ffff9c1a5010 -1 osd.26 9142 log_to_monitors {default=true}
-25> 2020-03-21T16:17:32.896+0800 ffff9b0dd9f0 0 auth: could not find secret_id=76
-24> 2020-03-21T16:17:32.896+0800 ffff9b0dd9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
-23> 2020-03-21T16:17:32.896+0800 ffff9b0dd9f0 0 auth: could not find secret_id=76
-22> 2020-03-21T16:17:32.896+0800 ffff9b0dd9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
-21> 2020-03-21T16:17:32.896+0800 ffff9b0dd9f0 0 auth: could not find secret_id=76
-20> 2020-03-21T16:17:32.896+0800 ffff9b0dd9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
-19> 2020-03-21T16:17:32.896+0800 ffff9b0dd9f0 0 auth: could not find secret_id=76
-18> 2020-03-21T16:17:32.896+0800 ffff9b0dd9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
-17> 2020-03-21T16:17:32.896+0800 ffff9a8dc9f0 0 auth: could not find secret_id=76
-16> 2020-03-21T16:17:32.896+0800 ffff9a8dc9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
-15> 2020-03-21T16:17:32.896+0800 ffff9b8de9f0 0 auth: could not find secret_id=76
-14> 2020-03-21T16:17:32.896+0800 ffff9b8de9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
-13> 2020-03-21T16:17:32.896+0800 ffff9b8de9f0 0 auth: could not find secret_id=76
-12> 2020-03-21T16:17:32.896+0800 ffff9b8de9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
-11> 2020-03-21T16:17:32.896+0800 ffff9a8dc9f0 0 auth: could not find secret_id=76
-10> 2020-03-21T16:17:32.896+0800 ffff9a8dc9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
-9> 2020-03-21T16:17:32.896+0800 ffff9b8de9f0 0 auth: could not find secret_id=76
-8> 2020-03-21T16:17:32.896+0800 ffff9b8de9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
-7> 2020-03-21T16:17:32.896+0800 ffff9a8dc9f0 0 auth: could not find secret_id=76
-6> 2020-03-21T16:17:32.896+0800 ffff9a8dc9f0 0 cephx: verify_authorizer could not get service secret for service osd secret_id=76
-5> 2020-03-21T16:17:32.976+0800 ffff9c1a5010 0 osd.26 9142 done with init, starting boot process
-4> 2020-03-21T16:17:32.996+0800 ffff93c649f0 -1 osd.26 9142 set_numa_affinity unable to identify public interface 'rocevlan' numa node: (2) No such file or directory
-3> 2020-03-21T16:17:33.052+0800 ffff8cc569f0 5 prioritycache tune_memory target: 4294967296 mapped: 5078843392 unmapped: 9240576 heap: 5088083968 old mem: 134217728 new mem: 134217728
-2> 2020-03-21T16:17:34.052+0800 ffff8cc569f0 5 prioritycache tune_memory target: 4294967296 mapped: 5080784896 unmapped: 7299072 heap: 5088083968 old mem: 134217728 new mem: 134217728
-1> 2020-03-21T16:17:34.056+0800 ffff9b8de9f0 -1 Infiniband modify_qp_to_init failed to switch to INIT state Queue Pair, qp number: 985228 Error: (5) Input/output error
0> 2020-03-21T16:17:34.160+0800 ffff9b8de9f0 -1 *** Caught signal (Segmentation fault) **
in thread ffff9b8de9f0 thread_name:msgr-worker-0

ceph version 15.1.0-35-gdeba62656d (deba62656d6bc55b66cb67ef83759f89a51eff9f) octopus (rc)
1: (__kernel_rt_sigreturn()+0) [0xffff9cc0a5c0]
2: (ibv_destroy_qp()+0x8) [0xffff9c6d4fc0]
3: (Infiniband::QueuePair::~QueuePair()+0x48) [0xaaaac2212058]
4: (Infiniband::create_queue_pair(CephContext*, RDMAWorker*, ibv_qp_type, rdma_cm_id*)+0x8c) [0xaaaac22125e4]
5: (RDMAConnectedSocketImpl::RDMAConnectedSocketImpl(CephContext*, std::shared_ptr<Infiniband>&, std::shared_ptr<RDMADispatcher>&, RDMAWorker*)+0x188) [0xaaaac2212f20]
6: (RDMAWorker::connect(entity_addr_t const&, SocketOptions const&, ConnectedSocket*)+0x10c) [0xaaaac201bc04]
7: (AsyncConnection::process()+0x554) [0xaaaac21ba3fc]
8: (EventCenter::process_events(unsigned int, std::chrono::duration<unsigned long, std::ratio<1l, 1000000000l> >*)+0xbb0) [0xaaaac200ad50]
9: (()+0x123b8d0) [0xaaaac20108d0]
10: (()+0xc9ed4) [0xffff9c561ed4]
11: (()+0x7088) [0xffff9c71d088]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
0/ 0 none
0/ 0 lockdep
0/ 0 context
0/ 0 crush
0/ 0 mds
0/ 0 mds_balancer
0/ 0 mds_locker
0/ 0 mds_log
0/ 0 mds_log_expire
0/ 0 mds_migrator
0/ 0 buffer
0/ 0 timer
0/ 0 filer
0/ 0 striper
0/ 0 objecter
0/ 0 rados
0/ 0 rbd
0/ 0 rbd_mirror
0/ 0 rbd_replay
0/ 0 journaler
0/ 0 objectcacher
0/ 5 immutable_obj_cache
0/ 0 client
0/ 0 osd
0/ 0 optracker
0/ 0 objclass
0/ 0 filestore
0/ 0 journal
0/ 0 ms
0/ 0 mon
0/ 0 monc
0/ 0 paxos
0/ 0 tp
0/ 0 auth
0/ 0 crypto
0/ 0 finisher
0/ 0 reserver
0/ 0 heartbeatmap
0/ 0 perfcounter
0/ 0 rgw
1/ 5 rgw_sync
0/ 0 civetweb
0/ 0 javaclient
0/ 0 asok
0/ 0 throttle
0/ 0 refs
0/ 0 compressor
0/ 0 bluestore
0/ 0 bluefs
0/ 0 bdev
0/ 0 kstore
0/ 0 rocksdb
0/ 0 leveldb
0/ 0 memdb
0/ 0 fuse
0/ 0 mgr
0/ 0 mgrc
0/ 0 dpdk
0/ 0 eventtrace
1/ 5 prioritycache
0/ 5 test
-2/-2 (syslog threshold)
-1/-1 (stderr threshold)
--- pthread ID / name mapping for recent threads ---
ffff8cc569f0 / bstore_mempool
ffff93c649f0 / fn_anonymous
ffff9a8dc9f0 / msgr-worker-2
ffff9b0dd9f0 / msgr-worker-1
ffff9b8de9f0 / msgr-worker-0
ffff9c1a5010 / ceph-osd
max_recent 10000
max_new 1000
log_file /var/log/ceph/ceph-osd.26.log
--- end dump of recent events ---
2020-03-21T16:17:47.392+0800 ffffa8d4f010 0 set uid:gid to 0:64045 (ceph:ceph)
2020-03-21T16:17:47.392+0800 ffffa8d4f010 0 ceph version 15.1.0-35-gdeba62656d (deba62656d6bc55b66cb67ef83759f89a51eff9f) octopus (rc), process ceph-osd, pid 2864674
2020-03-21T16:17:47.392+0800 ffffa8d4f010 0 pidfile_write: ignore empty --pid-file
2020-03-21T16:17:48.516+0800 ffffa8d4f010 0 starting osd.26 osd_data /var/lib/ceph/osd/ceph-26 /var/lib/ceph/osd/ceph-26/journal
2020-03-21T16:17:48.516+0800 ffffa8d4f010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T16:17:48.520+0800 ffffa8d4f010 -1 unable to find any IPv4 address in networks '172.19.36.0/24' interfaces ''
2020-03-21T16:17:48.584+0800 ffffa8d4f010 0 load: jerasure load: lrc
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:0.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:1.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:2.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:3.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:4.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:5.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:6.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:7.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:8.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:9.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:10.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:11.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:12.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:13.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:14.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:15.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:16.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:17.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:18.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.928+0800 ffffa8d4f010 0 osd.26:19.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2020-03-21T16:17:48.932+0800 ffffa8d4f010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T16:17:48.932+0800 ffffa8d4f010 0 set rocksdb option compression = kNoCompression
2020-03-21T16:17:48.932+0800 ffffa8d4f010 0 set rocksdb option max_background_compactions = 2
2020-03-21T16:17:48.932+0800 ffffa8d4f010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T16:17:48.932+0800 ffffa8d4f010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T16:17:48.932+0800 ffffa8d4f010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T16:17:48.932+0800 ffffa8d4f010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T16:17:48.932+0800 ffffa8d4f010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T16:17:48.936+0800 ffffa8d4f010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T16:17:48.936+0800 ffffa8d4f010 0 set rocksdb option compression = kNoCompression
2020-03-21T16:17:48.936+0800 ffffa8d4f010 0 set rocksdb option max_background_compactions = 2
2020-03-21T16:17:48.936+0800 ffffa8d4f010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T16:17:48.936+0800 ffffa8d4f010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T16:17:48.936+0800 ffffa8d4f010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T16:17:48.936+0800 ffffa8d4f010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T16:17:48.936+0800 ffffa8d4f010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T16:17:48.936+0800 ffffa8d4f010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T16:17:48.936+0800 ffffa8d4f010 0 set rocksdb option compression = kNoCompression
2020-03-21T16:17:48.936+0800 ffffa8d4f010 0 set rocksdb option max_background_compactions = 2
2020-03-21T16:17:48.936+0800 ffffa8d4f010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T16:17:48.936+0800 ffffa8d4f010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T16:17:48.936+0800 ffffa8d4f010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T16:17:48.936+0800 ffffa8d4f010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T16:17:48.936+0800 ffffa8d4f010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T16:17:50.012+0800 ffffa8d4f010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T16:17:50.012+0800 ffffa8d4f010 0 set rocksdb option compression = kNoCompression
2020-03-21T16:17:50.012+0800 ffffa8d4f010 0 set rocksdb option max_background_compactions = 2
2020-03-21T16:17:50.012+0800 ffffa8d4f010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T16:17:50.012+0800 ffffa8d4f010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T16:17:50.012+0800 ffffa8d4f010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T16:17:50.012+0800 ffffa8d4f010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T16:17:50.012+0800 ffffa8d4f010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T16:17:50.016+0800 ffffa8d4f010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T16:17:50.016+0800 ffffa8d4f010 0 set rocksdb option compression = kNoCompression
2020-03-21T16:17:50.016+0800 ffffa8d4f010 0 set rocksdb option max_background_compactions = 2
2020-03-21T16:17:50.016+0800 ffffa8d4f010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T16:17:50.016+0800 ffffa8d4f010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T16:17:50.016+0800 ffffa8d4f010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T16:17:50.016+0800 ffffa8d4f010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T16:17:50.016+0800 ffffa8d4f010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T16:17:50.016+0800 ffffa8d4f010 0 set rocksdb option compaction_readahead_size = 2097152
2020-03-21T16:17:50.016+0800 ffffa8d4f010 0 set rocksdb option compression = kNoCompression
2020-03-21T16:17:50.016+0800 ffffa8d4f010 0 set rocksdb option max_background_compactions = 2
2020-03-21T16:17:50.016+0800 ffffa8d4f010 0 set rocksdb option max_write_buffer_number = 4
2020-03-21T16:17:50.016+0800 ffffa8d4f010 0 set rocksdb option min_write_buffer_number_to_merge = 1
2020-03-21T16:17:50.016+0800 ffffa8d4f010 0 set rocksdb option recycle_log_file_num = 4
2020-03-21T16:17:50.016+0800 ffffa8d4f010 0 set rocksdb option writable_file_max_buffer_size = 0
2020-03-21T16:17:50.016+0800 ffffa8d4f010 0 set rocksdb option write_buffer_size = 268435456
2020-03-21T16:17:50.044+0800 ffffa8d4f010 0 _get_class not permitted to load sdk
2020-03-21T16:17:50.044+0800 ffffa8d4f010 0 _get_class not permitted to load kvs
2020-03-21T16:17:50.044+0800 ffffa8d4f010 0 <cls> /root/chunsong/ceph/src/cls/cephfs/cls_cephfs.cc:198: loading cephfs
2020-03-21T16:17:50.044+0800 ffffa8d4f010 0 <cls> /root/chunsong/ceph/src/cls/hello/cls_hello.cc:312: loading cls_hello
2020-03-21T16:17:50.048+0800 ffffa8d4f010 0 _get_class not permitted to load queue
2020-03-21T16:17:50.048+0800 ffffa8d4f010 0 _get_class not permitted to load lua
2020-03-21T16:17:50.048+0800 ffffa8d4f010 0 osd.26 9151 crush map has features 288514051259236352, adjusting msgr requires for clients
2020-03-21T16:17:50.048+0800 ffffa8d4f010 0 osd.26 9151 crush map has features 288514051259236352 was 8705, adjusting msgr requires for mons
2020-03-21T16:17:50.048+0800 ffffa8d4f010 0 osd.26 9151 crush map has features 3314933000852226048, adjusting msgr requires for osds
2020-03-21T16:17:50.128+0800 ffffa8d4f010 0 osd.26 9151 load_pgs
2020-03-21T16:17:51.740+0800 ffffa8d4f010 0 osd.26 9151 load_pgs opened 14 pgs
2020-03-21T16:17:51.740+0800 ffffa8d4f010 -1 osd.26 9151 log_to_monitors {default=true}
2020-03-21T16:17:51.748+0800 ffffa8d4f010 0 osd.26 9151 done with init, starting boot process
2020-03-21T16:17:51.756+0800 ffffa080e9f0 -1 osd.26 9151 set_numa_affinity unable to identify public interface 'rocevlan' numa node: (2) No such file or directory
2020-03-21T16:19:24.968+0800 ffffa84889f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:19:24.992+0800 ffffa84889f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:19:24.992+0800 ffffa74869f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:19:25.088+0800 ffffa84889f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:19:25.088+0800 ffffa74869f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:19:25.088+0800 ffffa74869f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:19:25.128+0800 ffffa84889f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:19:25.132+0800 ffffa84889f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:19:25.264+0800 ffffa7c879f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:19:25.268+0800 ffffa7c879f0 -1 Infiniband recv_cm_meta got error -104: (104) Connection reset by peer
2020-03-21T16:19:49.644+0800 ffffa37549f0 -1 osd.26 9159 heartbeat_check: no reply from 172.19.36.252:6863 osd.46 since back 2020-03-21T16:19:24.666968+0800 front 2020-03-21T16:19:24.667177+0800 (oldest deadline 2020-03-21T16:19:48.766672+0800)
2020-03-21T16:19:49.644+0800 ffffa37549f0 -1 osd.26 9159 heartbeat_check: no reply from 172.19.36.252:7036 osd.80 since back 2020-03-21T16:19:24.666942+0800 front 2020-03-21T16:19:24.667086+0800 (oldest deadline 2020-03-21T16:19:48.766672+0800)
2020-03-21T16:19:49.644+0800 ffffa37549f0 -1 osd.26 9159 heartbeat_check: no reply from 172.19.36.252:7044 osd.83 since back 2020-03-21T16:19:24.667029+0800 front 2020-03-21T16:19:24.667057+0800 (oldest deadline 2020-03-21T16:19:48.766672+0800)
2020-03-21T16:19:50.624+0800 ffffa37549f0 -1 osd.26 9160 heartbeat_check: no reply from 172.19.36.252:7036 osd.80 since back 2020-03-21T16:19:24.666942+0800 front 2020-03-21T16:19:24.667086+0800 (oldest deadline 2020-03-21T16:19:48.766672+0800)
2020-03-21T16:19:51.612+0800 ffffa37549f0 -1 osd.26 9161 heartbeat_check: no reply from 172.19.36.252:7036 osd.80 since back 2020-03-21T16:19:24.666942+0800 front 2020-03-21T16:19:24.667086+0800 (oldest deadline 2020-03-21T16:19:48.766672+0800)
2020-03-21T16:19:52.604+0800 ffffa37549f0 -1 osd.26 9161 heartbeat_check: no reply from 172.19.36.252:7036 osd.80 since back 2020-03-21T16:19:24.666942+0800 front 2020-03-21T16:19:24.667086+0800 (oldest deadline 2020-03-21T16:19:48.766672+0800)
2020-03-21T16:19:53.560+0800 ffffa37549f0 -1 osd.26 9162 heartbeat_check: no reply from 172.19.36.252:7036 osd.80 since back 2020-03-21T16:19:24.666942+0800 front 2020-03-21T16:19:24.667086+0800 (oldest deadline 2020-03-21T16:19:48.766672+0800)