2021-06-16 18:19:34.991 ffff90ac1550 0 set uid:gid to 167:167 (ceph:ceph)
2021-06-16 18:19:34.991 ffff90ac1550 0 ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable), process ceph-osd, pid 3935215
2021-06-16 18:19:34.991 ffff90ac1550 0 pidfile_write: ignore empty --pid-file
2021-06-16 18:19:35.031 ffff90ac1550 1 bluestore(/var/lib/ceph/osd/ceph-48/) mkfs path /var/lib/ceph/osd/ceph-48/
2021-06-16 18:19:35.041 ffff90ac1550 -1 bluestore(/var/lib/ceph/osd/ceph-48/) _read_fsid unparsable uuid
2021-06-16 18:19:35.041 ffff90ac1550 1 bluestore(/var/lib/ceph/osd/ceph-48/) mkfs using provided fsid 51416055-db8e-492d-b30a-88338cab2e4e
2021-06-16 18:19:35.041 ffff90ac1550 1 bdev create path /var/lib/ceph/osd/ceph-48//block type kernel
2021-06-16 18:19:35.041 ffff90ac1550 1 bdev(0xaaaae264a700 /var/lib/ceph/osd/ceph-48//block) open path /var/lib/ceph/osd/ceph-48//block
2021-06-16 18:19:35.041 ffff90ac1550 1 bdev(0xaaaae264a700 /var/lib/ceph/osd/ceph-48//block) open backing device/file reports st_blksize 65536, using bdev_block_size 4096 anyway
2021-06-16 18:19:35.041 ffff90ac1550 1 bdev(0xaaaae264a700 /var/lib/ceph/osd/ceph-48//block) open size 3200627245056 (0x2e934400000, 2.9 TiB) block_size 4096 (4 KiB) non-rotational discard supported
2021-06-16 18:19:35.041 ffff90ac1550 1 bluestore(/var/lib/ceph/osd/ceph-48/) _set_cache_sizes cache_size 3221225472 meta 0.4 kv 0.4 data 0.2
2021-06-16 18:19:35.041 ffff90ac1550 1 bdev create path /var/lib/ceph/osd/ceph-48//block type kernel
2021-06-16 18:19:35.041 ffff90ac1550 1 bdev(0xaaaae264aa80 /var/lib/ceph/osd/ceph-48//block) open path /var/lib/ceph/osd/ceph-48//block
2021-06-16 18:19:35.041 ffff90ac1550 1 bdev(0xaaaae264aa80 /var/lib/ceph/osd/ceph-48//block) open backing device/file reports st_blksize 65536, using bdev_block_size 4096 anyway
2021-06-16 18:19:35.041 ffff90ac1550 1 bdev(0xaaaae264aa80 /var/lib/ceph/osd/ceph-48//block) open size 3200627245056 (0x2e934400000, 2.9 TiB) block_size 4096 (4 KiB) non-rotational discard supported
2021-06-16 18:19:35.041 ffff90ac1550 1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-48//block size 2.9 TiB
2021-06-16 18:19:35.041 ffff90ac1550 1 bluefs _add_block_extent bdev 1 0x165b2ae0000~1dcee40000 skip 0
2021-06-16 18:19:35.041 ffff90ac1550 1 bluefs mkfs osd_uuid 51416055-db8e-492d-b30a-88338cab2e4e
2021-06-16 18:19:35.041 ffff90ac1550 1 bluefs _init_alloc id 1 alloc_size 0x10000 size 0x2e934400000
2021-06-16 18:19:35.041 ffff90ac1550 1 bluefs mkfs uuid 82415be9-f440-493e-a8a4-5a1b84dda134
2021-06-16 18:19:35.041 ffff90ac1550 1 bluefs mount
2021-06-16 18:19:35.041 ffff90ac1550 1 bluefs _init_alloc id 1 alloc_size 0x10000 size 0x2e934400000
2021-06-16 18:19:35.091 ffff90ac1550 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueFS.cc: In function 'int64_t BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, uint64_t, size_t, ceph::bufferlist*, char*)' thread ffff90ac1550 time 2021-06-16 18:19:35.091520
/home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueFS.cc: 1849: FAILED ceph_assert(r == 0)
 ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x15c) [0xaaaabae72c3c]
 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaabae72e08]
 3: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::v14_2_0::list*, char*)+0xfd4) [0xaaaabb3fa734]
 4: (BlueFS::_replay(bool, bool)+0x2b4) [0xaaaabb409964]
 5: (BlueFS::mount()+0x180) [0xaaaabb4164a0]
 6: (BlueStore::_open_bluefs(bool)+0x80) [0xaaaabb308e08]
 7: (BlueStore::_open_db(bool, bool, bool)+0x514) [0xaaaabb30a0cc]
 8: (BlueStore::mkfs()+0x4d4) [0xaaaabb3660c4]
 9: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0xa4) [0xaaaabaecb0ac]
 10: (main()+0x15a8) [0xaaaabae77b70]
 11: (__libc_start_main()+0xf0) [0xffff90da15d4]
 12: (()+0x4cab9c) [0xaaaabaeaab9c]
2021-06-16 18:19:35.091 ffff90ac1550 -1 *** Caught signal (Aborted) **
 in thread ffff90ac1550 thread_name:ceph-osd
 ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
 1: [0xffff91c2066c]
 2: (gsignal()+0x4c) [0xffff90db50e8]
 3: (abort()+0x11c) [0xffff90db6760]
 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0xaaaabae72c90]
 5: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaabae72e08]
 6: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::v14_2_0::list*, char*)+0xfd4) [0xaaaabb3fa734]
 7: (BlueFS::_replay(bool, bool)+0x2b4) [0xaaaabb409964]
 8: (BlueFS::mount()+0x180) [0xaaaabb4164a0]
 9: (BlueStore::_open_bluefs(bool)+0x80) [0xaaaabb308e08]
 10: (BlueStore::_open_db(bool, bool, bool)+0x514) [0xaaaabb30a0cc]
 11: (BlueStore::mkfs()+0x4d4) [0xaaaabb3660c4]
 12: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0xa4) [0xaaaabaecb0ac]
 13: (main()+0x15a8) [0xaaaabae77b70]
 14: (__libc_start_main()+0xf0) [0xffff90da15d4]
 15: (()+0x4cab9c) [0xaaaabaeaab9c]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
--- begin dump of recent events ---
-154> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command assert hook 0xaaaae19b4560
-153> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command abort hook 0xaaaae19b4560
-152> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command perfcounters_dump hook 0xaaaae19b4560
-151> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command 1 hook 0xaaaae19b4560
-150> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command perf dump hook 0xaaaae19b4560
-149> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command perfcounters_schema hook 0xaaaae19b4560
-148> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command perf histogram dump hook 0xaaaae19b4560
-147> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command 2 hook 0xaaaae19b4560
-146> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command perf schema hook 0xaaaae19b4560
-145> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command perf histogram schema hook 0xaaaae19b4560
-144> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command perf reset hook 0xaaaae19b4560
-143> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command config show hook 0xaaaae19b4560
-142> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command config help hook 0xaaaae19b4560
-141> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command config set hook 0xaaaae19b4560
-140> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command config unset hook 0xaaaae19b4560
-139> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command config get hook 0xaaaae19b4560
-138> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command config diff hook 0xaaaae19b4560
-137> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command config diff get hook 0xaaaae19b4560
-136> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command log flush hook 0xaaaae19b4560
-135> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command log dump hook 0xaaaae19b4560
-134> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command log reopen hook 0xaaaae19b4560
-133> 2021-06-16 18:19:34.931 ffff90ac1550 5 asok(0xaaaae1a58000) register_command dump_mempools hook 0xaaaae1a047c8
-132> 2021-06-16 18:19:34.941 ffff90ac1550 10 monclient: get_monmap_and_config
-131> 2021-06-16 18:19:34.981 ffff90ac1550 10 monclient: build_initial_monmap
-130> 2021-06-16 18:19:34.981 ffff90ac1550 10 monclient: monmap: epoch 1 fsid 13a2d867-70d5-4800-9303-c41c5f68996d last_changed 2021-06-16 18:13:28.704684 created 2021-06-16 18:13:28.704684 min_mon_release 14 (nautilus)
 0: [v2:10.101.1.1:3300/0,v1:10.101.1.1:6789/0] mon.ceph-node1
 1: [v2:10.101.1.2:3300/0,v1:10.101.1.2:6789/0] mon.ceph-node2
 2: [v2:10.101.1.3:3300/0,v1:10.101.1.3:6789/0] mon.ceph-node3
-129> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xaaaae1a34a40) adding auth protocol: cephx
-128> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xaaaae1a34a40) adding auth protocol: cephx
-127> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xaaaae1a34a40) adding auth protocol: cephx
-126> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xaaaae1a34a40) adding auth protocol: none
-125> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xaaaae1a34a40) adding con mode: secure
-124> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xaaaae1a34a40) adding con mode: crc
-123> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xaaaae1a34a40) adding con mode: secure
-122> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xaaaae1a34a40) adding con mode: crc
-121> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xaaaae1a34a40) adding con mode: secure
-120> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xaaaae1a34a40) adding con mode: crc
-119> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xaaaae1a34a40) adding con mode: crc
-118> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xaaaae1a34a40) adding con mode: secure
-117> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xaaaae1a34a40) adding con mode: crc
-116> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xaaaae1a34a40) adding con mode: secure
-115> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xaaaae1a34a40) adding con mode: crc
-114> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xaaaae1a34a40) adding con mode: secure
-113> 2021-06-16 18:19:34.981 ffff90ac1550 2 auth: KeyRing::load: loaded key file /var/lib/ceph/osd/ceph-48//keyring
-112> 2021-06-16 18:19:34.981 ffff90ac1550 10 monclient: init
-111> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xffffc581dad0) adding auth protocol: cephx
-110> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xffffc581dad0) adding auth protocol: cephx
-109> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xffffc581dad0) adding auth protocol: cephx
-108> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xffffc581dad0) adding auth protocol: none
-107> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xffffc581dad0) adding con mode: secure
-106> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xffffc581dad0) adding con mode: crc
-105> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xffffc581dad0) adding con mode: secure
-104> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xffffc581dad0) adding con mode: crc
-103> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xffffc581dad0) adding con mode: secure
-102> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xffffc581dad0) adding con mode: crc
-101> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xffffc581dad0) adding con mode: crc
-100> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xffffc581dad0) adding con mode: secure
-99> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xffffc581dad0) adding con mode: crc
-98> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xffffc581dad0) adding con mode: secure
-97> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xffffc581dad0) adding con mode: crc
-96> 2021-06-16 18:19:34.981 ffff90ac1550 5 AuthRegistry(0xffffc581dad0) adding con mode: secure
-95> 2021-06-16 18:19:34.981 ffff90ac1550 2 auth: KeyRing::load: loaded key file /var/lib/ceph/osd/ceph-48//keyring
-94> 2021-06-16 18:19:34.981 ffff90ac1550 2 auth: KeyRing::load: loaded key file /var/lib/ceph/osd/ceph-48//keyring
-93> 2021-06-16 18:19:34.981 ffff90ac1550 10 monclient: _reopen_session rank -1
-92> 2021-06-16 18:19:34.981 ffff90ac1550 10 monclient(hunting): picked mon.ceph-node2 con 0xaaaae1a4fb00 addr [v2:10.101.1.2:3300/0,v1:10.101.1.2:6789/0]
-91> 2021-06-16 18:19:34.981 ffff90ac1550 10 monclient(hunting): picked mon.ceph-node3 con 0xaaaae1a4ff80 addr [v2:10.101.1.3:3300/0,v1:10.101.1.3:6789/0]
-90> 2021-06-16 18:19:34.981 ffff90ac1550 10 monclient(hunting): picked mon.ceph-node1 con 0xaaaae1a50400 addr [v2:10.101.1.1:3300/0,v1:10.101.1.1:6789/0]
-89> 2021-06-16 18:19:34.981 ffff90ac1550 10 monclient(hunting): start opening mon connection
-88> 2021-06-16 18:19:34.981 ffff90ac1550 10 monclient(hunting): start opening mon connection
-87> 2021-06-16 18:19:34.981 ffff90ac1550 10 monclient(hunting): start opening mon connection
-86> 2021-06-16 18:19:34.981 ffff90ac1550 10 monclient(hunting): _renew_subs
-85> 2021-06-16 18:19:34.981 ffff90ac1550 10 monclient(hunting): authenticate will time out at 2021-06-16 18:24:34.992086
-84> 2021-06-16 18:19:34.981 ffff8efbced0 10 monclient(hunting): get_auth_request con 0xaaaae1a4ff80 auth_method 0
-83> 2021-06-16 18:19:34.981 ffff8efbced0 10 monclient(hunting): get_auth_request method 2 preferred_modes [1,2]
-82> 2021-06-16 18:19:34.981 ffff8efbced0 10 monclient(hunting): _init_auth method 2
-81> 2021-06-16 18:19:34.981 ffff8efbced0 10 monclient(hunting): _init_auth creating new auth
-80> 2021-06-16 18:19:34.991 ffff8ffdced0 10 monclient(hunting): get_auth_request con 0xaaaae1a50400 auth_method 0
-79> 2021-06-16 18:19:34.991 ffff8ffdced0 10 monclient(hunting): get_auth_request method 2 preferred_modes [1,2]
-78> 2021-06-16 18:19:34.991 ffff8ffdced0 10 monclient(hunting): _init_auth method 2
-77> 2021-06-16 18:19:34.991 ffff8ffdced0 10 monclient(hunting): _init_auth creating new auth
-76> 2021-06-16 18:19:34.991 ffff8f7cced0 10 monclient(hunting): get_auth_request con 0xaaaae1a4fb00 auth_method 0
-75> 2021-06-16 18:19:34.991 ffff8f7cced0 10 monclient(hunting): get_auth_request method 2 preferred_modes [1,2]
-74> 2021-06-16 18:19:34.991 ffff8f7cced0 10 monclient(hunting): _init_auth method 2
-73> 2021-06-16 18:19:34.991 ffff8f7cced0 10 monclient(hunting): _init_auth creating new auth
-72> 2021-06-16 18:19:34.991 ffff8efbced0 10 monclient(hunting): handle_auth_reply_more payload 9
-71> 2021-06-16 18:19:34.991 ffff8efbced0 10 monclient(hunting): handle_auth_reply_more payload_len 9
-70> 2021-06-16 18:19:34.991 ffff8efbced0 10 monclient(hunting): handle_auth_reply_more responding with 36 bytes
-69> 2021-06-16 18:19:34.991 ffff8ffdced0 10 monclient(hunting): handle_auth_reply_more payload 9
-68> 2021-06-16 18:19:34.991 ffff8ffdced0 10 monclient(hunting): handle_auth_reply_more payload_len 9
-67> 2021-06-16 18:19:34.991 ffff8ffdced0 10 monclient(hunting): handle_auth_reply_more responding with 36 bytes
-66> 2021-06-16 18:19:34.991 ffff8f7cced0 10 monclient(hunting): handle_auth_reply_more payload 9
-65> 2021-06-16 18:19:34.991 ffff8f7cced0 10 monclient(hunting): handle_auth_reply_more payload_len 9
-64> 2021-06-16 18:19:34.991 ffff8f7cced0 10 monclient(hunting): handle_auth_reply_more responding with 36 bytes
-63> 2021-06-16 18:19:34.991 ffff8efbced0 10 monclient(hunting): handle_auth_done global_id 5411 payload 210
-62> 2021-06-16 18:19:34.991 ffff8efbced0 10 monclient: _finish_hunting 0
-61> 2021-06-16 18:19:34.991 ffff8efbced0 1 monclient: found mon.ceph-node3
-60> 2021-06-16 18:19:34.991 ffff8efbced0 10 monclient: _send_mon_message to mon.ceph-node3 at v2:10.101.1.3:3300/0
-59> 2021-06-16 18:19:34.991 ffff8efbced0 10 monclient: _finish_auth 0
-58> 2021-06-16 18:19:34.991 ffff8efbced0 10 monclient: _check_auth_rotating renewing rotating keys (they expired before 2021-06-16 18:19:04.992807)
-57> 2021-06-16 18:19:34.991 ffff8efbced0 10 monclient: _send_mon_message to mon.ceph-node3 at v2:10.101.1.3:3300/0
-56> 2021-06-16 18:19:34.991 ffff90ac1550 5 monclient: authenticate success, global_id 5411
-55> 2021-06-16 18:19:34.991 ffff8e7aced0 10 monclient: handle_monmap mon_map magic: 0 v1
-54> 2021-06-16 18:19:34.991 ffff8e7aced0 10 monclient: got monmap 1 from mon.ceph-node3 (according to old e1)
-53> 2021-06-16 18:19:34.991 ffff8e7aced0 10 monclient: dump: epoch 1 fsid 13a2d867-70d5-4800-9303-c41c5f68996d last_changed 2021-06-16 18:13:28.704684 created 2021-06-16 18:13:28.704684 min_mon_release 14 (nautilus)
 0: [v2:10.101.1.1:3300/0,v1:10.101.1.1:6789/0] mon.ceph-node1
 1: [v2:10.101.1.2:3300/0,v1:10.101.1.2:6789/0] mon.ceph-node2
 2: [v2:10.101.1.3:3300/0,v1:10.101.1.3:6789/0] mon.ceph-node3
-52> 2021-06-16 18:19:34.991 ffff8e7aced0 10 monclient: handle_config config(0 keys) v1
-51> 2021-06-16 18:19:34.991 ffff8cf7ced0 4 set_mon_vals no callback set
-50> 2021-06-16 18:19:34.991 ffff8e7aced0 10 monclient: _finish_auth 0
-49> 2021-06-16 18:19:34.991 ffff8e7aced0 10 monclient: _check_auth_rotating have uptodate secrets (they expire after 2021-06-16 18:19:04.993290)
-48> 2021-06-16 18:19:34.991 ffff90ac1550 10 monclient: get_monmap_and_config success
-47> 2021-06-16 18:19:34.991 ffff90ac1550 10 monclient: shutdown
-46> 2021-06-16 18:19:34.991 ffff90ac1550 0 set uid:gid to 167:167 (ceph:ceph)
-45> 2021-06-16 18:19:34.991 ffff90ac1550 0 ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable), process ceph-osd, pid 3935215
-44> 2021-06-16 18:19:34.991 ffff90ac1550 0 pidfile_write: ignore empty --pid-file
-43> 2021-06-16 18:19:35.031 ffff90ac1550 5 asok(0xaaaae1a58000) init /var/run/ceph/ceph-osd.48.asok
-42> 2021-06-16 18:19:35.031 ffff90ac1550 5 asok(0xaaaae1a58000) bind_and_listen /var/run/ceph/ceph-osd.48.asok
-41> 2021-06-16 18:19:35.031 ffff90ac1550 5 asok(0xaaaae1a58000) register_command 0 hook 0xaaaae19b21f8
-40> 2021-06-16 18:19:35.031 ffff90ac1550 5 asok(0xaaaae1a58000) register_command version hook 0xaaaae19b21f8
-39> 2021-06-16 18:19:35.031 ffff90ac1550 5 asok(0xaaaae1a58000) register_command git_version hook 0xaaaae19b21f8
-38> 2021-06-16 18:19:35.031 ffff90ac1550 5 asok(0xaaaae1a58000) register_command help hook 0xaaaae19b4140
-37> 2021-06-16 18:19:35.031 ffff90ac1550 5 asok(0xaaaae1a58000) register_command get_command_descriptions hook 0xaaaae19b4100
-36> 2021-06-16 18:19:35.031 ffff8f7cced0 5 asok(0xaaaae1a58000) entry start
-35> 2021-06-16 18:19:35.031 ffff90ac1550 1 bluestore(/var/lib/ceph/osd/ceph-48/) mkfs path /var/lib/ceph/osd/ceph-48/
-34> 2021-06-16 18:19:35.031 ffff90ac1550 2 bluestore(/var/lib/ceph/osd/ceph-48//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::v14_2_0::list::const_iterator&) decode past end of struct encoding
-33> 2021-06-16 18:19:35.041 ffff90ac1550 2 bluestore(/var/lib/ceph/osd/ceph-48//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::v14_2_0::list::const_iterator&) decode past end of struct encoding
-32> 2021-06-16 18:19:35.041 ffff90ac1550 2 bluestore(/var/lib/ceph/osd/ceph-48//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::v14_2_0::list::const_iterator&) decode past end of struct encoding
-31> 2021-06-16 18:19:35.041 ffff90ac1550 -1 bluestore(/var/lib/ceph/osd/ceph-48/) _read_fsid unparsable uuid
-30> 2021-06-16 18:19:35.041 ffff90ac1550 1 bluestore(/var/lib/ceph/osd/ceph-48/) mkfs using provided fsid 51416055-db8e-492d-b30a-88338cab2e4e
-29> 2021-06-16 18:19:35.041 ffff90ac1550 1 bdev create path /var/lib/ceph/osd/ceph-48//block type kernel
-28> 2021-06-16 18:19:35.041 ffff90ac1550 1 bdev(0xaaaae264a700 /var/lib/ceph/osd/ceph-48//block) open path /var/lib/ceph/osd/ceph-48//block
-27> 2021-06-16 18:19:35.041 ffff90ac1550 1 bdev(0xaaaae264a700 /var/lib/ceph/osd/ceph-48//block) open backing device/file reports st_blksize 65536, using bdev_block_size 4096 anyway
-26> 2021-06-16 18:19:35.041 ffff90ac1550 1 bdev(0xaaaae264a700 /var/lib/ceph/osd/ceph-48//block) open size 3200627245056 (0x2e934400000, 2.9 TiB) block_size 4096 (4 KiB) non-rotational discard supported
-25> 2021-06-16 18:19:35.041 ffff90ac1550 1 bluestore(/var/lib/ceph/osd/ceph-48/) _set_cache_sizes cache_size 3221225472 meta 0.4 kv 0.4 data 0.2
-24> 2021-06-16 18:19:35.041 ffff90ac1550 5 asok(0xaaaae1a58000) register_command bluestore bluefs available hook 0xaaaae19b44a0
-23> 2021-06-16 18:19:35.041 ffff90ac1550 5 asok(0xaaaae1a58000) register_command bluestore bluefs stats hook 0xaaaae19b44a0
-22> 2021-06-16 18:19:35.041 ffff90ac1550 5 asok(0xaaaae1a58000) register_command bluefs debug_inject_read_zeros hook 0xaaaae19b44a0
-21> 2021-06-16 18:19:35.041 ffff90ac1550 1 bdev create path /var/lib/ceph/osd/ceph-48//block type kernel
-20> 2021-06-16 18:19:35.041 ffff90ac1550 1 bdev(0xaaaae264aa80 /var/lib/ceph/osd/ceph-48//block) open path /var/lib/ceph/osd/ceph-48//block
-19> 2021-06-16 18:19:35.041 ffff90ac1550 1 bdev(0xaaaae264aa80 /var/lib/ceph/osd/ceph-48//block) open backing device/file reports st_blksize 65536, using bdev_block_size 4096 anyway
-18> 2021-06-16 18:19:35.041 ffff90ac1550 1 bdev(0xaaaae264aa80 /var/lib/ceph/osd/ceph-48//block) open size 3200627245056 (0x2e934400000, 2.9 TiB) block_size 4096 (4 KiB) non-rotational discard supported
-17> 2021-06-16 18:19:35.041 ffff90ac1550 1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-48//block size 2.9 TiB
-16> 2021-06-16 18:19:35.041 ffff90ac1550 1 bluefs _add_block_extent bdev 1 0x165b2ae0000~1dcee40000 skip 0
-15> 2021-06-16 18:19:35.041 ffff90ac1550 1 bluefs mkfs osd_uuid 51416055-db8e-492d-b30a-88338cab2e4e
-14> 2021-06-16 18:19:35.041 ffff90ac1550 1 bluefs _init_alloc id 1 alloc_size 0x10000 size 0x2e934400000
-13> 2021-06-16 18:19:35.041 ffff90ac1550 5 asok(0xaaaae1a58000) register_command bluestore allocator dump bluefs-db hook 0xaaaae2614ba0
-12> 2021-06-16 18:19:35.041 ffff90ac1550 5 asok(0xaaaae1a58000) register_command bluestore allocator score bluefs-db hook 0xaaaae2614ba0
-11> 2021-06-16 18:19:35.041 ffff90ac1550 5 asok(0xaaaae1a58000) register_command bluestore allocator fragmentation bluefs-db hook 0xaaaae2614ba0
-10> 2021-06-16 18:19:35.041 ffff90ac1550 1 bluefs mkfs uuid 82415be9-f440-493e-a8a4-5a1b84dda134
-9> 2021-06-16 18:19:35.041 ffff90ac1550 5 asok(0xaaaae1a58000) unregister_command bluestore allocator dump bluefs-db
-8> 2021-06-16 18:19:35.041 ffff90ac1550 5 asok(0xaaaae1a58000) unregister_command bluestore allocator score bluefs-db
-7> 2021-06-16 18:19:35.041 ffff90ac1550 5 asok(0xaaaae1a58000) unregister_command bluestore allocator fragmentation bluefs-db
-6> 2021-06-16 18:19:35.041 ffff90ac1550 1 bluefs mount
-5> 2021-06-16 18:19:35.041 ffff90ac1550 1 bluefs _init_alloc id 1 alloc_size 0x10000 size 0x2e934400000
-4> 2021-06-16 18:19:35.041 ffff90ac1550 5 asok(0xaaaae1a58000) register_command bluestore allocator dump bluefs-db hook 0xaaaae2614ba0
-3> 2021-06-16 18:19:35.041 ffff90ac1550 5 asok(0xaaaae1a58000) register_command bluestore allocator score bluefs-db hook 0xaaaae2614ba0
-2> 2021-06-16 18:19:35.041 ffff90ac1550 5 asok(0xaaaae1a58000) register_command bluestore allocator fragmentation bluefs-db hook 0xaaaae2614ba0
-1> 2021-06-16 18:19:35.091 ffff90ac1550 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueFS.cc: In function 'int64_t BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, uint64_t, size_t, ceph::bufferlist*, char*)' thread ffff90ac1550 time 2021-06-16 18:19:35.091520
/home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueFS.cc: 1849: FAILED ceph_assert(r == 0)
 ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x15c) [0xaaaabae72c3c]
 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaabae72e08]
 3: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::v14_2_0::list*, char*)+0xfd4) [0xaaaabb3fa734]
 4: (BlueFS::_replay(bool, bool)+0x2b4) [0xaaaabb409964]
 5: (BlueFS::mount()+0x180) [0xaaaabb4164a0]
 6: (BlueStore::_open_bluefs(bool)+0x80) [0xaaaabb308e08]
 7: (BlueStore::_open_db(bool, bool, bool)+0x514) [0xaaaabb30a0cc]
 8: (BlueStore::mkfs()+0x4d4) [0xaaaabb3660c4]
 9: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0xa4) [0xaaaabaecb0ac]
 10: (main()+0x15a8) [0xaaaabae77b70]
 11: (__libc_start_main()+0xf0) [0xffff90da15d4]
 12: (()+0x4cab9c) [0xaaaabaeaab9c]
0> 2021-06-16 18:19:35.091 ffff90ac1550 -1 *** Caught signal (Aborted) **
 in thread ffff90ac1550 thread_name:ceph-osd
 ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
 1: [0xffff91c2066c]
 2: (gsignal()+0x4c) [0xffff90db50e8]
 3: (abort()+0x11c) [0xffff90db6760]
 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0xaaaabae72c90]
 5: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaabae72e08]
 6: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::v14_2_0::list*, char*)+0xfd4) [0xaaaabb3fa734]
 7: (BlueFS::_replay(bool, bool)+0x2b4) [0xaaaabb409964]
 8: (BlueFS::mount()+0x180) [0xaaaabb4164a0]
 9: (BlueStore::_open_bluefs(bool)+0x80) [0xaaaabb308e08]
 10: (BlueStore::_open_db(bool, bool, bool)+0x514) [0xaaaabb30a0cc]
 11: (BlueStore::mkfs()+0x4d4) [0xaaaabb3660c4]
 12: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0xa4) [0xaaaabaecb0ac]
 13: (main()+0x15a8) [0xaaaabae77b70]
 14: (__libc_start_main()+0xf0) [0xffff90da15d4]
 15: (()+0x4cab9c) [0xaaaabaeaab9c]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
--- logging levels ---
 0/ 5 none
 0/ 1 lockdep
 0/ 1 context
 1/ 1 crush
 1/ 5 mds
 1/ 5 mds_balancer
 1/ 5 mds_locker
 1/ 5 mds_log
 1/ 5 mds_log_expire
 1/ 5 mds_migrator
 0/ 1 buffer
 0/ 1 timer
 0/ 1 filer
 0/ 1 striper
 0/ 1 objecter
 0/ 5 rados
 0/ 5 rbd
 0/ 5 rbd_mirror
 0/ 5 rbd_replay
 0/ 5 journaler
 0/ 5 objectcacher
 0/ 5 client
 1/ 5 osd
 0/ 5 optracker
 0/ 5 objclass
 1/ 3 filestore
 1/ 3 journal
 0/ 0 ms
 1/ 5 mon
 0/10 monc
 1/ 5 paxos
 0/ 5 tp
 1/ 5 auth
 1/ 5 crypto
 1/ 1 finisher
 1/ 1 reserver
 1/ 5 heartbeatmap
 1/ 5 perfcounter
 1/ 5 rgw
 1/ 5 rgw_sync
 1/10 civetweb
 1/ 5 javaclient
 1/ 5 asok
 1/ 1 throttle
 0/ 0 refs
 1/ 5 xio
 1/ 5 compressor
 1/ 5 bluestore
 1/ 5 bluefs
 1/ 3 bdev
 1/ 5 kstore
 4/ 5 rocksdb
 4/ 5 leveldb
 4/ 5 memdb
 1/ 5 kinetic
 1/ 5 fuse
 1/ 5 mgr
 1/ 5 mgrc
 1/ 5 dpdk
 1/ 5 eventtrace
 1/ 5 prioritycache
 -2/-2 (syslog threshold)
 -1/-1 (stderr threshold)
 max_recent 10000
 max_new 1000
 log_file /var/log/ceph/ceph-osd.48.log
--- end dump of recent events ---
2021-06-16 18:19:43.541 ffff9d961550 0 set uid:gid to 167:167 (ceph:ceph)
2021-06-16 18:19:43.541 ffff9d961550 0 ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable), process ceph-osd, pid 3935453
2021-06-16 18:19:43.541 ffff9d961550 0 pidfile_write: ignore empty --pid-file
2021-06-16 18:19:43.591 ffff9d961550 1 bluestore(/var/lib/ceph/osd/ceph-48/) mkfs path /var/lib/ceph/osd/ceph-48/
2021-06-16 18:19:43.591 ffff9d961550 -1 bluestore(/var/lib/ceph/osd/ceph-48/) _read_fsid unparsable uuid
2021-06-16 18:19:43.591 ffff9d961550 1 bluestore(/var/lib/ceph/osd/ceph-48/) mkfs using provided fsid 80521d47-3724-4b48-90d9-054f562cab95
2021-06-16 18:19:43.591 ffff9d961550 1 bdev create path /var/lib/ceph/osd/ceph-48//block type kernel
2021-06-16 18:19:43.591 ffff9d961550 1 bdev(0xaaaaf930a700 /var/lib/ceph/osd/ceph-48//block) open path /var/lib/ceph/osd/ceph-48//block
2021-06-16 18:19:43.591 ffff9d961550 1 bdev(0xaaaaf930a700 /var/lib/ceph/osd/ceph-48//block) open backing device/file reports st_blksize 65536, using bdev_block_size 4096 anyway
2021-06-16 18:19:43.591 ffff9d961550 1 bdev(0xaaaaf930a700 /var/lib/ceph/osd/ceph-48//block) open size 3200627245056 (0x2e934400000, 2.9 TiB) block_size 4096 (4 KiB) non-rotational discard supported
2021-06-16 18:19:43.591 ffff9d961550 1 bluestore(/var/lib/ceph/osd/ceph-48/) _set_cache_sizes cache_size 3221225472 meta 0.4 kv 0.4 data 0.2
2021-06-16 18:19:43.591 ffff9d961550 1 bdev create path /var/lib/ceph/osd/ceph-48//block type kernel
2021-06-16 18:19:43.591 ffff9d961550 1 bdev(0xaaaaf930aa80 /var/lib/ceph/osd/ceph-48//block) open path /var/lib/ceph/osd/ceph-48//block
2021-06-16 18:19:43.591 ffff9d961550 1 bdev(0xaaaaf930aa80 /var/lib/ceph/osd/ceph-48//block) open backing device/file reports st_blksize 65536, using bdev_block_size 4096 anyway
2021-06-16 18:19:43.591 ffff9d961550 1 bdev(0xaaaaf930aa80 /var/lib/ceph/osd/ceph-48//block) open size 3200627245056 (0x2e934400000, 2.9 TiB) block_size 4096 (4 KiB) non-rotational discard supported
2021-06-16 18:19:43.591 ffff9d961550 1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-48//block size 2.9 TiB
2021-06-16 18:19:43.591 ffff9d961550 1 bluefs _add_block_extent bdev 1 0x165b2ae0000~1dcee40000 skip 0
2021-06-16 18:19:43.591 ffff9d961550 1 bluefs mkfs osd_uuid 80521d47-3724-4b48-90d9-054f562cab95
2021-06-16 18:19:43.591 ffff9d961550 1 bluefs _init_alloc id 1 alloc_size 0x10000 size 0x2e934400000
2021-06-16 18:19:43.591 ffff9d961550 1 bluefs mkfs uuid 8faf21c3-98dc-447f-a395-562058437a3f
2021-06-16 18:19:43.591 ffff9d961550 1 bluefs mount
2021-06-16 18:19:43.591 ffff9d961550 1 bluefs _init_alloc id 1 alloc_size 0x10000 size 0x2e934400000
2021-06-16 18:19:43.641 ffff9d961550 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueFS.cc: In function 'int64_t BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, uint64_t, size_t, ceph::bufferlist*, char*)' thread ffff9d961550 time 2021-06-16 18:19:43.643144
/home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueFS.cc: 1849: FAILED ceph_assert(r == 0)
 ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x15c) [0xaaaac87b2c3c]
 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaac87b2e08]
 3: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::v14_2_0::list*, char*)+0xfd4) [0xaaaac8d3a734]
 4: (BlueFS::_replay(bool, bool)+0x2b4) [0xaaaac8d49964]
 5: (BlueFS::mount()+0x180) [0xaaaac8d564a0]
 6: (BlueStore::_open_bluefs(bool)+0x80) [0xaaaac8c48e08]
 7: (BlueStore::_open_db(bool, bool, bool)+0x514) [0xaaaac8c4a0cc]
 8: (BlueStore::mkfs()+0x4d4) [0xaaaac8ca60c4]
 9: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0xa4) [0xaaaac880b0ac]
 10: (main()+0x15a8) [0xaaaac87b7b70]
 11: (__libc_start_main()+0xf0) [0xffff9dc415d4]
 12: (()+0x4cab9c) [0xaaaac87eab9c]
2021-06-16 18:19:43.641 ffff9d961550 -1 *** Caught signal (Aborted) **
 in thread ffff9d961550 thread_name:ceph-osd
 ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
 1: [0xffff9eac066c]
 2: (gsignal()+0x4c) [0xffff9dc550e8]
 3: (abort()+0x11c) [0xffff9dc56760]
 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0xaaaac87b2c90]
 5: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaac87b2e08]
 6: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::v14_2_0::list*, char*)+0xfd4) [0xaaaac8d3a734]
 7: (BlueFS::_replay(bool, bool)+0x2b4) [0xaaaac8d49964]
 8: (BlueFS::mount()+0x180) [0xaaaac8d564a0]
 9: (BlueStore::_open_bluefs(bool)+0x80) [0xaaaac8c48e08]
 10: (BlueStore::_open_db(bool, bool, bool)+0x514) [0xaaaac8c4a0cc]
 11: (BlueStore::mkfs()+0x4d4) [0xaaaac8ca60c4]
 12: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0xa4) [0xaaaac880b0ac]
 13: (main()+0x15a8) [0xaaaac87b7b70]
 14: (__libc_start_main()+0xf0) [0xffff9dc415d4]
 15: (()+0x4cab9c) [0xaaaac87eab9c]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
--- begin dump of recent events ---
-154> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command assert hook 0xaaaaf8674560
-153> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command abort hook 0xaaaaf8674560
-152> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command perfcounters_dump hook 0xaaaaf8674560
-151> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command 1 hook 0xaaaaf8674560
-150> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command perf dump hook 0xaaaaf8674560
-149> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command perfcounters_schema hook 0xaaaaf8674560
-148> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command perf histogram dump hook 0xaaaaf8674560
-147> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command 2 hook 0xaaaaf8674560
-146> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command perf schema hook 0xaaaaf8674560
-145> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command perf histogram schema hook 0xaaaaf8674560
-144> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command perf reset hook 0xaaaaf8674560
-143> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command config show hook 0xaaaaf8674560
-142> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command config help hook 0xaaaaf8674560
-141> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command config set hook 0xaaaaf8674560
-140> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command config unset hook 0xaaaaf8674560
-139> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command config get hook 0xaaaaf8674560
-138> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command config diff hook 0xaaaaf8674560
-137> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command config diff get hook 0xaaaaf8674560
-136> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command log flush hook 0xaaaaf8674560
-135> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command log dump hook 0xaaaaf8674560
-134> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command log reopen hook 0xaaaaf8674560
-133> 2021-06-16 18:19:43.481 ffff9d961550 5 asok(0xaaaaf8718000) register_command dump_mempools hook 0xaaaaf86c47c8
-132> 2021-06-16 18:19:43.491 ffff9d961550 10 monclient: get_monmap_and_config
-131> 2021-06-16 18:19:43.541 ffff9d961550 10 monclient: build_initial_monmap
-130> 2021-06-16 18:19:43.541 ffff9d961550 10 monclient: monmap:
epoch 1
fsid 13a2d867-70d5-4800-9303-c41c5f68996d
last_changed 2021-06-16 18:13:28.704684
created 2021-06-16 18:13:28.704684
min_mon_release 14 (nautilus)
0: [v2:10.101.1.1:3300/0,v1:10.101.1.1:6789/0] mon.ceph-node1
1: [v2:10.101.1.2:3300/0,v1:10.101.1.2:6789/0] mon.ceph-node2
2: [v2:10.101.1.3:3300/0,v1:10.101.1.3:6789/0] mon.ceph-node3
-129> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xaaaaf86f4a40) adding auth protocol: cephx
-128> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xaaaaf86f4a40) adding auth protocol: cephx
-127> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xaaaaf86f4a40) adding auth protocol: cephx
-126> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xaaaaf86f4a40) adding auth protocol: none
-125> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xaaaaf86f4a40) adding con mode: secure
-124> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xaaaaf86f4a40) adding con mode: crc
-123> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xaaaaf86f4a40) adding con mode: secure
-122> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xaaaaf86f4a40) adding con mode: crc
-121> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xaaaaf86f4a40) adding con mode: secure
-120> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xaaaaf86f4a40) adding con mode: crc
-119> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xaaaaf86f4a40) adding con mode: crc
-118> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xaaaaf86f4a40) adding con mode: secure
-117> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xaaaaf86f4a40) adding con mode: crc
-116> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xaaaaf86f4a40) adding con mode: secure
-115> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xaaaaf86f4a40) adding con mode: crc
-114> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xaaaaf86f4a40) adding con mode: secure
-113> 2021-06-16 18:19:43.541 ffff9d961550 2 auth: KeyRing::load: loaded key file /var/lib/ceph/osd/ceph-48//keyring
-112> 2021-06-16 18:19:43.541 ffff9d961550 10 monclient: init
-111> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xffffe7e10cf0) adding auth protocol: cephx
-110> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xffffe7e10cf0) adding auth protocol: cephx
-109> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xffffe7e10cf0) adding auth protocol: cephx
-108> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xffffe7e10cf0) adding auth protocol: none
-107> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xffffe7e10cf0) adding con mode: secure
-106> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xffffe7e10cf0) adding con mode: crc
-105> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xffffe7e10cf0) adding con mode: secure
-104> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xffffe7e10cf0) adding con mode: crc
-103> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xffffe7e10cf0) adding con mode: secure
-102> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xffffe7e10cf0) adding con mode: crc
-101> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xffffe7e10cf0) adding con mode: crc
-100> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xffffe7e10cf0) adding con mode: secure
-99> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xffffe7e10cf0) adding con mode: crc
-98> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xffffe7e10cf0) adding con mode: secure
-97> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xffffe7e10cf0) adding con mode: crc
-96> 2021-06-16 18:19:43.541 ffff9d961550 5 AuthRegistry(0xffffe7e10cf0) adding con mode: secure
-95> 2021-06-16 18:19:43.541 ffff9d961550 2 auth: KeyRing::load: loaded key file /var/lib/ceph/osd/ceph-48//keyring
-94> 2021-06-16 18:19:43.541 ffff9d961550 2 auth: KeyRing::load: loaded key file /var/lib/ceph/osd/ceph-48//keyring
-93> 2021-06-16 18:19:43.541 ffff9d961550 10 monclient: _reopen_session rank -1
-92> 2021-06-16 18:19:43.541 ffff9d961550 10 monclient(hunting): picked mon.ceph-node3 con 0xaaaaf870fb00 addr [v2:10.101.1.3:3300/0,v1:10.101.1.3:6789/0]
-91> 2021-06-16 18:19:43.541 ffff9d961550 10 monclient(hunting): picked mon.ceph-node2 con 0xaaaaf870ff80 addr [v2:10.101.1.2:3300/0,v1:10.101.1.2:6789/0]
-90> 2021-06-16 18:19:43.541 ffff9d961550 10 monclient(hunting): picked mon.ceph-node1 con 0xaaaaf8710400 addr [v2:10.101.1.1:3300/0,v1:10.101.1.1:6789/0]
-89> 2021-06-16 18:19:43.541 ffff9d961550 10 monclient(hunting): start opening mon connection
-88> 2021-06-16 18:19:43.541 ffff9d961550 10 monclient(hunting): start opening mon connection
-87> 2021-06-16 18:19:43.541 ffff9d961550 10 monclient(hunting): start opening mon connection
-86> 2021-06-16 18:19:43.541 ffff9d961550 10 monclient(hunting): _renew_subs
-85> 2021-06-16 18:19:43.541 ffff9d961550 10 monclient(hunting): authenticate will time out at 2021-06-16 18:24:43.543542
-84> 2021-06-16 18:19:43.541 ffff9be5ced0 10 monclient(hunting): get_auth_request con 0xaaaaf870ff80 auth_method 0
-83> 2021-06-16 18:19:43.541 ffff9be5ced0 10 monclient(hunting): get_auth_request method 2 preferred_modes [1,2]
-82> 2021-06-16 18:19:43.541 ffff9be5ced0 10 monclient(hunting): _init_auth method 2
-81> 2021-06-16 18:19:43.541 ffff9be5ced0 10 monclient(hunting): _init_auth creating new auth
-80> 2021-06-16 18:19:43.541 ffff9ce7ced0 10 monclient(hunting): get_auth_request con 0xaaaaf8710400 auth_method 0
-79> 2021-06-16 18:19:43.541 ffff9ce7ced0 10 monclient(hunting): get_auth_request method 2 preferred_modes [1,2]
-78> 2021-06-16 18:19:43.541 ffff9ce7ced0 10 monclient(hunting): _init_auth method 2
-77> 2021-06-16 18:19:43.541 ffff9ce7ced0 10 monclient(hunting): _init_auth creating new auth
-76> 2021-06-16 18:19:43.541 ffff9c66ced0 10 monclient(hunting): get_auth_request con 0xaaaaf870fb00 auth_method 0
-75> 2021-06-16 18:19:43.541 ffff9c66ced0 10 monclient(hunting): get_auth_request method 2 preferred_modes [1,2]
-74> 2021-06-16 18:19:43.541 ffff9c66ced0 10 monclient(hunting): _init_auth method 2
-73> 2021-06-16 18:19:43.541 ffff9c66ced0 10 monclient(hunting): _init_auth creating new auth
-72> 2021-06-16 18:19:43.541 ffff9be5ced0 10 monclient(hunting): handle_auth_reply_more payload 9
-71> 2021-06-16 18:19:43.541 ffff9be5ced0 10 monclient(hunting): handle_auth_reply_more payload_len 9
-70> 2021-06-16 18:19:43.541 ffff9be5ced0 10 monclient(hunting): handle_auth_reply_more responding with 36 bytes
-69> 2021-06-16 18:19:43.541 ffff9ce7ced0 10 monclient(hunting): handle_auth_reply_more payload 9
-68> 2021-06-16 18:19:43.541 ffff9ce7ced0 10 monclient(hunting): handle_auth_reply_more payload_len 9
-67> 2021-06-16 18:19:43.541 ffff9ce7ced0 10 monclient(hunting): handle_auth_reply_more responding with 36 bytes
-66> 2021-06-16 18:19:43.541 ffff9c66ced0 10 monclient(hunting): handle_auth_reply_more payload 9
-65> 2021-06-16 18:19:43.541 ffff9c66ced0 10 monclient(hunting): handle_auth_reply_more payload_len 9
-64> 2021-06-16 18:19:43.541 ffff9c66ced0 10 monclient(hunting): handle_auth_reply_more responding with 36 bytes
-63> 2021-06-16 18:19:43.541 ffff9be5ced0 10 monclient(hunting): handle_auth_done global_id 5479 payload 210
-62> 2021-06-16 18:19:43.541 ffff9be5ced0 10 monclient: _finish_hunting 0
-61> 2021-06-16 18:19:43.541 ffff9be5ced0 1 monclient: found mon.ceph-node2
-60> 2021-06-16 18:19:43.541 ffff9be5ced0 10 monclient: _send_mon_message to mon.ceph-node2 at v2:10.101.1.2:3300/0
-59> 2021-06-16 18:19:43.541 ffff9be5ced0 10 monclient: _finish_auth 0
-58> 2021-06-16 18:19:43.541 ffff9be5ced0 10 monclient: _check_auth_rotating renewing rotating keys (they expired before 2021-06-16 18:19:13.544345)
-57> 2021-06-16 18:19:43.541 ffff9be5ced0 10 monclient: _send_mon_message to mon.ceph-node2 at v2:10.101.1.2:3300/0
-56> 2021-06-16 18:19:43.541 ffff9d961550 5 monclient: authenticate success, global_id 5479
-55> 2021-06-16 18:19:43.541 ffff9b64ced0 10 monclient: handle_monmap mon_map magic: 0 v1
-54> 2021-06-16 18:19:43.541 ffff9b64ced0 10 monclient: got monmap 1 from mon.ceph-node2 (according to old e1)
-53> 2021-06-16 18:19:43.541 ffff9b64ced0 10 monclient: dump:
epoch 1
fsid 13a2d867-70d5-4800-9303-c41c5f68996d
last_changed 2021-06-16 18:13:28.704684
created 2021-06-16 18:13:28.704684
min_mon_release 14 (nautilus)
0: [v2:10.101.1.1:3300/0,v1:10.101.1.1:6789/0] mon.ceph-node1
1: [v2:10.101.1.2:3300/0,v1:10.101.1.2:6789/0] mon.ceph-node2
2: [v2:10.101.1.3:3300/0,v1:10.101.1.3:6789/0] mon.ceph-node3
-52> 2021-06-16 18:19:43.541 ffff9b64ced0 10 monclient: handle_config config(0 keys) v1
-51> 2021-06-16 18:19:43.541 ffff9d961550 10 monclient: get_monmap_and_config success
-50> 2021-06-16 18:19:43.541 ffff9d961550 10 monclient: shutdown
-49> 2021-06-16 18:19:43.541 ffff99e1ced0 4 set_mon_vals no callback set
-48> 2021-06-16 18:19:43.541 ffff9b64ced0 10 monclient: _finish_auth 0
-47> 2021-06-16 18:19:43.541 ffff9b64ced0 10 monclient: _check_auth_rotating have uptodate secrets (they expire after 2021-06-16 18:19:13.544824)
-46> 2021-06-16 18:19:43.541 ffff9d961550 0 set uid:gid to 167:167 (ceph:ceph)
-45> 2021-06-16 18:19:43.541 ffff9d961550 0 ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable), process ceph-osd, pid 3935453
-44> 2021-06-16 18:19:43.541 ffff9d961550 0 pidfile_write: ignore empty --pid-file
-43> 2021-06-16 18:19:43.581 ffff9d961550 5 asok(0xaaaaf8718000) init /var/run/ceph/ceph-osd.48.asok
-42> 2021-06-16 18:19:43.581 ffff9d961550 5 asok(0xaaaaf8718000) bind_and_listen /var/run/ceph/ceph-osd.48.asok
-41> 2021-06-16 18:19:43.581 ffff9d961550 5 asok(0xaaaaf8718000) register_command 0 hook 0xaaaaf86721f8
-40> 2021-06-16 18:19:43.581 ffff9d961550 5 asok(0xaaaaf8718000) register_command version hook 0xaaaaf86721f8
-39> 2021-06-16 18:19:43.591 ffff9d961550 5 asok(0xaaaaf8718000) register_command git_version hook 0xaaaaf86721f8
-38> 2021-06-16 18:19:43.591 ffff9d961550 5 asok(0xaaaaf8718000) register_command help hook 0xaaaaf8674140
-37> 2021-06-16 18:19:43.591 ffff9d961550 5 asok(0xaaaaf8718000) register_command get_command_descriptions hook 0xaaaaf8674100
-36> 2021-06-16 18:19:43.591 ffff9c66ced0 5 asok(0xaaaaf8718000) entry start
-35> 2021-06-16 18:19:43.591 ffff9d961550 1 bluestore(/var/lib/ceph/osd/ceph-48/) mkfs path /var/lib/ceph/osd/ceph-48/
-34> 2021-06-16 18:19:43.591 ffff9d961550 2 bluestore(/var/lib/ceph/osd/ceph-48//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::v14_2_0::list::const_iterator&) decode past end of struct encoding
-33> 2021-06-16 18:19:43.591 ffff9d961550 2 bluestore(/var/lib/ceph/osd/ceph-48//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::v14_2_0::list::const_iterator&) decode past end of struct encoding
-32> 2021-06-16 18:19:43.591 ffff9d961550 2 bluestore(/var/lib/ceph/osd/ceph-48//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::v14_2_0::list::const_iterator&) decode past end of struct encoding
-31> 2021-06-16 18:19:43.591 ffff9d961550 -1 bluestore(/var/lib/ceph/osd/ceph-48/) _read_fsid unparsable uuid
-30> 2021-06-16 18:19:43.591 ffff9d961550 1 bluestore(/var/lib/ceph/osd/ceph-48/) mkfs using provided fsid 80521d47-3724-4b48-90d9-054f562cab95
-29> 2021-06-16 18:19:43.591 ffff9d961550 1 bdev create path /var/lib/ceph/osd/ceph-48//block type kernel
-28> 2021-06-16 18:19:43.591 ffff9d961550 1 bdev(0xaaaaf930a700 /var/lib/ceph/osd/ceph-48//block) open path /var/lib/ceph/osd/ceph-48//block
-27> 2021-06-16 18:19:43.591 ffff9d961550 1 bdev(0xaaaaf930a700 /var/lib/ceph/osd/ceph-48//block) open backing device/file reports st_blksize 65536, using bdev_block_size 4096 anyway
-26> 2021-06-16 18:19:43.591 ffff9d961550 1 bdev(0xaaaaf930a700 /var/lib/ceph/osd/ceph-48//block) open size 3200627245056 (0x2e934400000, 2.9 TiB) block_size 4096 (4 KiB) non-rotational discard supported
-25> 2021-06-16 18:19:43.591 ffff9d961550 1 bluestore(/var/lib/ceph/osd/ceph-48/) _set_cache_sizes cache_size 3221225472 meta 0.4 kv 0.4 data 0.2
-24> 2021-06-16 18:19:43.591 ffff9d961550 5 asok(0xaaaaf8718000) register_command bluestore bluefs available hook 0xaaaaf86744a0
-23> 2021-06-16 18:19:43.591 ffff9d961550 5 asok(0xaaaaf8718000) register_command bluestore bluefs stats hook 0xaaaaf86744a0
-22> 2021-06-16 18:19:43.591 ffff9d961550 5 asok(0xaaaaf8718000) register_command bluefs debug_inject_read_zeros hook 0xaaaaf86744a0
-21> 2021-06-16 18:19:43.591 ffff9d961550 1 bdev create path /var/lib/ceph/osd/ceph-48//block type kernel
-20> 2021-06-16 18:19:43.591 ffff9d961550 1 bdev(0xaaaaf930aa80 /var/lib/ceph/osd/ceph-48//block) open path /var/lib/ceph/osd/ceph-48//block
-19> 2021-06-16 18:19:43.591 ffff9d961550 1 bdev(0xaaaaf930aa80 /var/lib/ceph/osd/ceph-48//block) open backing device/file reports st_blksize 65536, using bdev_block_size 4096 anyway
-18> 2021-06-16 18:19:43.591 ffff9d961550 1 bdev(0xaaaaf930aa80 /var/lib/ceph/osd/ceph-48//block) open size 3200627245056 (0x2e934400000, 2.9 TiB) block_size 4096 (4 KiB) non-rotational discard supported
-17> 2021-06-16 18:19:43.591 ffff9d961550 1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-48//block size 2.9 TiB
-16> 2021-06-16 18:19:43.591 ffff9d961550 1 bluefs _add_block_extent bdev 1 0x165b2ae0000~1dcee40000 skip 0
-15> 2021-06-16 18:19:43.591 ffff9d961550 1 bluefs mkfs osd_uuid 80521d47-3724-4b48-90d9-054f562cab95
-14> 2021-06-16 18:19:43.591 ffff9d961550 1 bluefs _init_alloc id 1 alloc_size 0x10000 size 0x2e934400000
-13> 2021-06-16 18:19:43.591 ffff9d961550 5 asok(0xaaaaf8718000) register_command bluestore allocator dump bluefs-db hook 0xaaaaf92d4ba0
-12> 2021-06-16 18:19:43.591 ffff9d961550 5 asok(0xaaaaf8718000) register_command bluestore allocator score bluefs-db hook 0xaaaaf92d4ba0
-11> 2021-06-16 18:19:43.591 ffff9d961550 5 asok(0xaaaaf8718000) register_command bluestore allocator fragmentation bluefs-db hook 0xaaaaf92d4ba0
-10> 2021-06-16 18:19:43.591 ffff9d961550 1 bluefs mkfs uuid 8faf21c3-98dc-447f-a395-562058437a3f
-9> 2021-06-16 18:19:43.591 ffff9d961550 5 asok(0xaaaaf8718000) unregister_command bluestore allocator dump bluefs-db
-8> 2021-06-16 18:19:43.591 ffff9d961550 5 asok(0xaaaaf8718000) unregister_command bluestore allocator score bluefs-db
-7> 2021-06-16 18:19:43.591 ffff9d961550 5 asok(0xaaaaf8718000) unregister_command bluestore allocator fragmentation bluefs-db
-6> 2021-06-16 18:19:43.591 ffff9d961550 1 bluefs mount
-5> 2021-06-16 18:19:43.591 ffff9d961550 1 bluefs _init_alloc id 1 alloc_size 0x10000 size 0x2e934400000
-4> 2021-06-16 18:19:43.591 ffff9d961550 5 asok(0xaaaaf8718000) register_command bluestore allocator dump bluefs-db hook 0xaaaaf92d4ba0
-3> 2021-06-16 18:19:43.591 ffff9d961550 5 asok(0xaaaaf8718000) register_command bluestore allocator score bluefs-db hook 0xaaaaf92d4ba0
-2> 2021-06-16 18:19:43.591 ffff9d961550 5 asok(0xaaaaf8718000) register_command bluestore allocator fragmentation bluefs-db hook 0xaaaaf92d4ba0
-1> 2021-06-16 18:19:43.641 ffff9d961550 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueFS.cc: In function 'int64_t BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, uint64_t, size_t, ceph::bufferlist*, char*)' thread ffff9d961550 time 2021-06-16 18:19:43.643144
/home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueFS.cc: 1849: FAILED ceph_assert(r == 0)
 ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x15c) [0xaaaac87b2c3c]
 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaac87b2e08]
 3: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::v14_2_0::list*, char*)+0xfd4) [0xaaaac8d3a734]
 4: (BlueFS::_replay(bool, bool)+0x2b4) [0xaaaac8d49964]
 5: (BlueFS::mount()+0x180) [0xaaaac8d564a0]
 6: (BlueStore::_open_bluefs(bool)+0x80) [0xaaaac8c48e08]
 7: (BlueStore::_open_db(bool, bool, bool)+0x514) [0xaaaac8c4a0cc]
 8: (BlueStore::mkfs()+0x4d4) [0xaaaac8ca60c4]
 9: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0xa4) [0xaaaac880b0ac]
 10: (main()+0x15a8) [0xaaaac87b7b70]
 11: (__libc_start_main()+0xf0) [0xffff9dc415d4]
 12: (()+0x4cab9c) [0xaaaac87eab9c]
0> 2021-06-16 18:19:43.641 ffff9d961550 -1 *** Caught signal (Aborted) **
 in thread ffff9d961550 thread_name:ceph-osd
 ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
 1: [0xffff9eac066c]
 2: (gsignal()+0x4c) [0xffff9dc550e8]
 3: (abort()+0x11c) [0xffff9dc56760]
 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0xaaaac87b2c90]
 5: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaac87b2e08]
 6: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::v14_2_0::list*, char*)+0xfd4) [0xaaaac8d3a734]
 7: (BlueFS::_replay(bool, bool)+0x2b4) [0xaaaac8d49964]
 8: (BlueFS::mount()+0x180) [0xaaaac8d564a0]
 9: (BlueStore::_open_bluefs(bool)+0x80) [0xaaaac8c48e08]
 10: (BlueStore::_open_db(bool, bool, bool)+0x514) [0xaaaac8c4a0cc]
 11: (BlueStore::mkfs()+0x4d4) [0xaaaac8ca60c4]
 12: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0xa4) [0xaaaac880b0ac]
 13: (main()+0x15a8) [0xaaaac87b7b70]
 14: (__libc_start_main()+0xf0) [0xffff9dc415d4]
 15: (()+0x4cab9c) [0xaaaac87eab9c]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
--- logging levels ---
  0/ 5 none
  0/ 1 lockdep
  0/ 1 context
  1/ 1 crush
  1/ 5 mds
  1/ 5 mds_balancer
  1/ 5 mds_locker
  1/ 5 mds_log
  1/ 5 mds_log_expire
  1/ 5 mds_migrator
  0/ 1 buffer
  0/ 1 timer
  0/ 1 filer
  0/ 1 striper
  0/ 1 objecter
  0/ 5 rados
  0/ 5 rbd
  0/ 5 rbd_mirror
  0/ 5 rbd_replay
  0/ 5 journaler
  0/ 5 objectcacher
  0/ 5 client
  1/ 5 osd
  0/ 5 optracker
  0/ 5 objclass
  1/ 3 filestore
  1/ 3 journal
  0/ 0 ms
  1/ 5 mon
  0/10 monc
  1/ 5 paxos
  0/ 5 tp
  1/ 5 auth
  1/ 5 crypto
  1/ 1 finisher
  1/ 1 reserver
  1/ 5 heartbeatmap
  1/ 5 perfcounter
  1/ 5 rgw
  1/ 5 rgw_sync
  1/10 civetweb
  1/ 5 javaclient
  1/ 5 asok
  1/ 1 throttle
  0/ 0 refs
  1/ 5 xio
  1/ 5 compressor
  1/ 5 bluestore
  1/ 5 bluefs
  1/ 3 bdev
  1/ 5 kstore
  4/ 5 rocksdb
  4/ 5 leveldb
  4/ 5 memdb
  1/ 5 kinetic
  1/ 5 fuse
  1/ 5 mgr
  1/ 5 mgrc
  1/ 5 dpdk
  1/ 5 eventtrace
  1/ 5 prioritycache
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent 10000
  max_new 1000
  log_file /var/log/ceph/ceph-osd.48.log
--- end dump of recent events ---