Bug #42223

closed

ceph-14.2.4/src/os/bluestore/fastbmap_allocator_impl.h: 750: FAILED ceph_assert(available >= allocated)

Added by Tobias Fischer over 4 years ago. Updated over 4 years ago.

Status:
Resolved
Priority:
Urgent
Assignee:
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
nautilus
Regression:
No
Severity:
1 - critical
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):

cfcd3f82113b772cf45adf054aaa6b69c6d22a1bd3306f482a959e5ef2b265ce
6c5e5bc740ec75bba3bec5f9ac4a3f296bba9e87f7023ffe4817ee2544a7ec68
7df2a475de25d9f4ffa74b7952402bf454616fe5be176c45263fc8538a03a4f9
d8a01d6ecc303267dac76b09a441f973f7af20b5df564b079fefa53ab1b31c73

Crash signature (v2):

Description

After updating to Nautilus 14.2.4, random OSDs on different clusters keep failing after a random amount of time. The only known fix is to recreate the OSD.


Files

ceph-osd.3.log.zip (327 KB) ceph-osd.3.log.zip Tobias Fischer, 10/08/2019 08:41 AM
ceph-osd.513.log (659 KB) ceph-osd.513.log Tobias Fischer, 10/08/2019 09:03 AM
debug.txt (4.35 KB) debug.txt Dmitry Mishin, 10/25/2019 12:38 AM
ceph-bluestore-tool-repair.log (191 KB) ceph-bluestore-tool-repair.log ceph-bluestore-tool repair Tobias Fischer, 10/25/2019 01:13 PM
osd-startup.log.zip (348 KB) osd-startup.log.zip Tobias Fischer, 10/25/2019 01:14 PM
ceph-osd-logs.zip (485 KB) ceph-osd-logs.zip Tobias Fischer, 10/28/2019 02:27 PM
dmesg-osd-4.txt (84.8 KB) dmesg-osd-4.txt Tobias Fischer, 10/28/2019 02:51 PM
dmesg-osd-1.txt (90.6 KB) dmesg-osd-1.txt Tobias Fischer, 10/28/2019 02:51 PM
osd1-logs.tar.gz (731 KB) osd1-logs.tar.gz Tobias Fischer, 10/28/2019 03:05 PM
osd4-logs.tar.gz (630 KB) osd4-logs.tar.gz Tobias Fischer, 10/28/2019 03:05 PM
osd1-kvstore.txt (33.1 KB) osd1-kvstore.txt Tobias Fischer, 10/28/2019 03:26 PM
crash-folder-osd1.tar.gz (757 KB) crash-folder-osd1.tar.gz Tobias Fischer, 10/28/2019 03:26 PM
osd4-crash-folder.tar.gz (550 KB) osd4-crash-folder.tar.gz Tobias Fischer, 10/28/2019 03:26 PM
osd4-kvstore.txt (33.1 KB) osd4-kvstore.txt Tobias Fischer, 10/28/2019 03:26 PM
ceph-kvstore-tool_rocksdb_stats.txt (6.58 KB) ceph-kvstore-tool_rocksdb_stats.txt as per filename Marcin W, 10/30/2019 09:31 AM
003808.sst.part01.rar (999 KB) 003808.sst.part01.rar Marcin W, 10/30/2019 09:42 AM
003808.sst.part02.rar (999 KB) 003808.sst.part02.rar Marcin W, 10/30/2019 09:42 AM
003808.sst.part03.rar (999 KB) 003808.sst.part03.rar Marcin W, 10/30/2019 09:42 AM
003808.sst.part04.rar (999 KB) 003808.sst.part04.rar Marcin W, 10/30/2019 09:42 AM
003808.sst.part05.rar (414 KB) 003808.sst.part05.rar Marcin W, 10/30/2019 09:42 AM
dump.txt (33.1 KB) dump.txt ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-4/ dump Tobias Fischer, 10/30/2019 12:11 PM
osd1.z01 (999 KB) osd1.z01 Tobias Fischer, 10/30/2019 12:41 PM
osd1.zip (710 KB) osd1.zip Tobias Fischer, 10/30/2019 12:41 PM
osd1.z02 (999 KB) osd1.z02 Tobias Fischer, 10/30/2019 12:41 PM
osd4.z02 (999 KB) osd4.z02 Tobias Fischer, 10/30/2019 12:42 PM
osd4.z01 (999 KB) osd4.z01 Tobias Fischer, 10/30/2019 12:42 PM
osd4.z03 (999 KB) osd4.z03 Tobias Fischer, 10/30/2019 12:42 PM
osd4.zip (837 KB) osd4.zip Tobias Fischer, 10/30/2019 12:42 PM
dump_more.diff (955 Bytes) dump_more.diff Igor Fedotov, 11/06/2019 02:49 PM
ceph-osd.6.log.gz (248 KB) ceph-osd.6.log.gz Krzysztof Klimonda, 11/08/2019 09:18 AM
ceph-osd.50.log.gz (371 KB) ceph-osd.50.log.gz Krzysztof Klimonda, 11/08/2019 01:25 PM

Related issues 4 (0 open, 4 closed)

Related to bluestore - Bug #51753: crash: void BlueStore::_kv_sync_thread(): assert(r == 0) - Won't Fix

Has duplicate bluestore - Bug #42284: fastbmap_allocator_impl.h: FAILED ceph_assert(available >= allocated) in upgrade:luminous-x-nautilus - Duplicate - 10/11/2019

Has duplicate bluestore - Bug #42683: OSD Segmentation fault - Duplicate - 11/07/2019

Has duplicate bluestore - Bug #43211: Bluestore OSDs don't start after upgrade to 14.2.4 - Duplicate

Actions #1

Updated by Tobias Fischer over 4 years ago

log of another OSD on a different cluster

Actions #2

Updated by Igor Fedotov over 4 years ago

Hi Tobias,
may I have some clarifications, please?
- I can see a similar issue report on the ceph-users mailing list: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-September/036833.html
Wondering if this is the same or a different occurrence?

- Did you perform any manipulations with the main devices for the failing OSDs shortly before the issue, e.g. volume expansion, migration, etc.?

And one note on the issue - for both occurrences (here and in the mailing list) I can see the line showing that the allocator (or more specifically the FreeListManager) is unable to load any free extents:
"bluestore(/var/lib/ceph/osd/ceph-3) _open_alloc loaded 0 B in 0 extents"

So this looks like some corruption/invalid data in RocksDB rather than an allocator bug.

Tobias, do you have any broken OSD that we can dissect for a bit?

Thanks,
Igor

Actions #3

Updated by Tobias Fischer over 4 years ago

Hi Igor,

The issue mentioned in the mailing list looks like exactly the same issue we have. The only thing I did with the OSDs after the v14.2.2 release was to apply "ceph-bluestore-tool repair" as stated in the release notes, since almost all of our clusters were "born" several years ago and so had the legacy stats warning. Besides this we did not perform any other manipulations. After applying the repair we had no issues with the OSDs. We skipped the v14.2.3 release. The problems started soon after running the v14.2.4 update.

- So this looks like some corruption/invalid data in RocksDB rather than an allocator bug.

I remember that we converted from LevelDB to RocksDB while running on Luminous, before the Nautilus upgrade - maybe this is related? But we had no problems at all after the conversion, so it must have something to do with changes introduced in ceph-14.2.4.

At the moment we do not have any broken OSDs as I already recreated them. But I'm quite sure that we will get some new ones soon ;-) I can let you know when this happens?

best,
tobi

Actions #4

Updated by Igor Fedotov over 4 years ago

Tobias, thanks for the info.
And yeah, please ping me once you have another broken OSD.

Actions #5

Updated by Tobias Fischer over 4 years ago

Hi Igor,

the next OSD broke. Let me know what you need.

Actions #6

Updated by Dmitriy Rabotyagov over 4 years ago

I feel like I may have hit the same problem while running ceph-bluestore-tool repair in 14.2.3: trace

Actions #7

Updated by Igor Fedotov over 4 years ago

Dmitry,
could you please share the whole startup log?

Preferably with "debug bluestore" set to 20.

And are you getting the same assertion during regular OSD startup?

Actions #8

Updated by Dmitry Mishin over 4 years ago

I've also got this in 14.2.4

Actions #9

Updated by Igor Fedotov over 4 years ago

Dmitry - could you please provide more details as per my comment #7?

Actions #10

Updated by Dmitry Mishin over 4 years ago

How can I enable debug bluestore for this OSD?

Actions #11

Updated by Igor Fedotov over 4 years ago

To collect a log for ceph-bluestore-tool one can use something like this:
CEPH_ARGS="--log-file=output.log --debug-bluestore=20" ceph-bluestore-tool .....

For a regular OSD daemon one can use:
ceph config set osd.N debug_bluestore 20
then restart the OSD, collect the log, and revert back to the default logging level with:
ceph config rm osd.N debug_bluestore
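
For illustration, a full sequence for a hypothetical osd.3 (the OSD id, log file names and systemd unit are placeholders, not taken from this ticket) might look like:

# run the offline tool with verbose bluestore logging written to a separate file
CEPH_ARGS="--log-file=repair.log --debug-bluestore=20" ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-3
# raise the log level for the running daemon, restart it and grab its log
ceph config set osd.3 debug_bluestore 20
systemctl restart ceph-osd@3
# ... collect /var/log/ceph/ceph-osd.3.log, then revert to the default level
ceph config rm osd.3 debug_bluestore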

Actions #12

Updated by Tobias Fischer over 4 years ago

Hi Igor,

new logs: startup of broken OSD & ceph-bluestore-tool repair of broken OSD. ceph-bluestore-tool did run successfully - was aborted.

Actions #13

Updated by Igor Fedotov over 4 years ago

Tobias, thanks a lot! Looking into this, please preserve OSD for a while if possible.

Could you please clarify the following sentence though:
"ceph-bluestore-tool did run successfully - was aborted."

Do you mean "didn't run successfully"?

Actions #14

Updated by Tobias Fischer over 4 years ago

Exactly. ceph-bluestore-tool didn't run successfully, it crashed somehow. Unfortunately I already replaced the OSD. But if you want I can preserve the next one for a while, just let me know.

Actions #15

Updated by Sage Weil over 4 years ago

BTW I'm seeing 4 different clusters hitting this assert on 14.2.4 (via telemetry).

Actions #16

Updated by Sage Weil over 4 years ago

  • Status changed from New to In Progress
  • Assignee set to Ilya Dryomov
  • Priority changed from Normal to Urgent
Actions #17

Updated by Ilya Dryomov over 4 years ago

  • Assignee changed from Ilya Dryomov to Igor Fedotov

Sage, I assume you meant to assign this to Igor.

Actions #18

Updated by Tobias Fischer over 4 years ago

Igor, I have two failed OSDs. I will preserve them until the end of the week in case you need them.

Actions #19

Updated by Igor Fedotov over 4 years ago

Tobias, great! Thanks a lot.

May I have the logs for these two OSDs where they first hit the issue?

Actions #20

Updated by Tobias Fischer over 4 years ago

here you go

Actions #21

Updated by Igor Fedotov over 4 years ago

Thanks a lot!

Actions #22

Updated by Igor Fedotov over 4 years ago

And a preliminary intermediate analysis is as follows (from last to first):

1) All available OSD logs have the following lines:
... freelist init size 0x0 bytes_per_block 0x1002 blocks 0x0 blocks_per_key 0x69206e6f00000000
...
... _open_alloc loaded 0 B in 0 extents

which indicates a freelist manager issue, most probably missing/broken records in the DB.

The assertion in the allocator is just a consequence.

2) 2 of 3 OSD startup logs (the third log was cut off before the upload, so most probably it had the same artefacts) have a different assertion preceding the allocator's one, e.g.:

2019-10-08 10:12:56.451 7f9d7ae62700 2 rocksdb: [db/db_impl_compaction_flush.cc:2212] Waiting after background compaction error: Corruption: block checksum mismatch: expected 3412624973, got 2324967102 in db/085660.sst offset 0 size 4708, Accumulated background error counts: 1
2019-10-08 10:12:56.451 7f9d7965f700 -1 rocksdb: submit_common error: Corruption: block checksum mismatch: expected 3412624973, got 2324967102 in db/085660.sst offset 0 size 4708 code = 2 Rocksdb transaction:
Delete( Prefix = O key = 0x7f8000000000000008ab477096216265'nchmark_data_pod1-mon1_564326_object324!='0xfffffffffffffffeffffffffffffffff'o')
Put( Prefix = M key = 0x0000000000000d7a'.0000017954.00000000000017484613' Value size = 185)
Put( Prefix = M key = 0x0000000000000d7a'._fastinfo' Value size = 186)
Merge( Prefix = b key = 0x000000027c800000 Value size = 16)
Merge( Prefix = T key = 0x0000000000000008 Value size = 40)
2019-10-08 10:12:56.455 7f9d7965f700 -1 /data/Debian/WORKDIR/ceph-14.2.4/src/os/bluestore/BlueStore.cc: In function 'void BlueStore::_kv_sync_thread()' thread 7f9d7965f700 time 2019-10-08 10:12:56.454735
/data/Debian/WORKDIR/ceph-14.2.4/src/os/bluestore/BlueStore.cc: 10549: FAILED ceph_assert(r == 0)

which indicates a DB access error which most probably resulted in the broken freelist manager.

And this looks similar to the following tickets:

https://tracker.ceph.com/issues/37282 (mimic)
https://tracker.ceph.com/issues/41367 (luminous)

Actions #23

Updated by Igor Fedotov over 4 years ago

Tobias, could you please check for H/W errors using dmesg on the nodes with broken OSDs?

Actions #24

Updated by Igor Fedotov over 4 years ago

Also, could you please check earlier logs of the broken OSDs for other assertions, e.g. the one I shared in
comment #16 of https://tracker.ceph.com/issues/37282

Actions #25

Updated by Tobias Fischer over 4 years ago

Here is the dmesg output of both servers containing the broken OSDs. On a quick look I could not find anything. And honestly I cannot imagine that it is a HW error - because it affects different servers across all our clusters and just appeared after using ceph-bluestore-tool to fix the legacy stats issue. Will check the earlier logs now...

Actions #26

Updated by Tobias Fischer over 4 years ago

Could not find any other assertions in the old logs - but we don't keep many old logs. I attached all the logs I have for both OSDs.

Actions #27

Updated by Igor Fedotov over 4 years ago

Well, nothing interesting in dmesgs...

Could you please run:
ceph-kvstore-tool bluestore-kv <path_to_osd> list "b"

against any of the broken OSDs and share the results?

Actions #28

Updated by Tobias Fischer over 4 years ago

ceph-kvstore-tool crashes on both OSDs - see files. I also attached the /var/lib/ceph/crash folder of both servers containing the broken OSDs.

Actions #29

Updated by Marcin W over 4 years ago

I have the same issue with 2 OSDs so far. The initial error was:

/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.4/rpm/el7/BUILD/ceph-14.2.4/src/kv/RocksDBStore.cc: 1247: ceph_abort_msg("block checksum mismatch: expected 3250489453, got 2324967102  in db/022804.sst offset 1203875 size 4071")

And subsequent runs end up with:

   -13> 2019-10-29 12:26:58.968 7f93d0554dc0 10 freelist init size 0x0 bytes_per_block 0x0 blocks 0x0 blocks_per_key 0x0
   -12> 2019-10-29 12:26:58.968 7f93d0554dc0 10 freelist _init_misc bytes_per_key 0x0, key_mask 0x0
   -11> 2019-10-29 12:26:58.968 7f93d0554dc0 10 bluefs get_block_extents bdev 2
   -10> 2019-10-29 12:26:58.968 7f93d0554dc0 10 bluestore(/var/lib/ceph/osd/ceph-403) _open_alloc bluefs extents 0x[45dade00000~5d23e00000]
    -9> 2019-10-29 12:26:58.968 7f93d0554dc0 10 fbmap_alloc 0x5585970efc20 BitmapAllocator 0x9187fa00000/10000
    -8> 2019-10-29 12:26:59.011 7f93d0554dc0  1 bluestore(/var/lib/ceph/osd/ceph-403) _open_alloc opening allocation metadata
    -7> 2019-10-29 12:26:59.011 7f93d0554dc0 10 freelist enumerate_next start
    -6> 2019-10-29 12:26:59.011 7f93d0554dc0 10 freelist enumerate_next end
    -5> 2019-10-29 12:26:59.012 7f93d0554dc0  1 bluestore(/var/lib/ceph/osd/ceph-403) _open_alloc loaded 0 B in 0 extents
    -4> 2019-10-29 12:26:59.012 7f93d0554dc0 10 fbmap_alloc 0x5585970efc20 init_rm_free 0x45dade00000~5d23e00000
    -3> 2019-10-29 12:26:59.012 7f93d0554dc0 10 fbmaph _mark_allocated available 0x0
    -2> 2019-10-29 12:26:59.012 7f93d0554dc0 10 fbmaph _mark_allocated allocated 0x5d23e00000
    -1> 2019-10-29 12:26:59.026 7f93d0554dc0 -1 /data/marcin/ceph/src/os/bluestore/fastbmap_allocator_impl.h: In function 'void AllocatorLevel02<T>::_mark_allocated(uint64_t, uint64_t) [with L1 = AllocatorLevel01Loose; uint64_t = long unsigned int]' thread 7f93d0554dc0 time 2019-10-29 12:26:59.013508
/data/marcin/ceph/src/os/bluestore/fastbmap_allocator_impl.h: 763: FAILED ceph_assert(available >= allocated)

Please note that rocksdb options are calculated for each OSD separately:

[osd.403]
bluestore_rocksdb_options = compaction_readahead_size=2MB,compaction_style=kCompactionStyleLevel,compaction_threads=32,compression=kNoCompression,flusher_threads=8,level0_file_num_compaction_trigger=8,level0_slowdown_writes_trigger=24,level0_stop_writes_trigger=32,max_background_compactions=32,max_bytes_for_level_base=256MB,max_bytes_for_level_multiplier=6,max_write_buffer_number=16,min_write_buffer_number_to_merge=8,num_levels=5,recycle_log_file_num=2,target_file_size_base=16MB,write_buffer_size=16MB

I have a working compiler and I can add some extra dout statements if needed.

Actions #30

Updated by Marcin W over 4 years ago

DB partition size is 437888MB and according to my calculations RocksDB size including all 5 levels[256, 1536, 9216, 55296, 331776] should be 398080MB. I'd rule out corruption due to lack of space on DB or DB_SLOW.

# find db/ -name '*.sst' | wc -l
963

# du -sxmc *
14294    db
0    db.slow
56    db.wal
14350    total
Actions #31

Updated by Nathan Cutler over 4 years ago

  • Backport set to nautilus
Actions #32

Updated by Igor Fedotov over 4 years ago

Marcin, thanks for the info.
Some questions, please:

DB partition size is 437888MB

Do you mean KB, not MB?

You've exported the DB to a filesystem, right?

If so, is the db/022804.sst file present there?

Actions #33

Updated by Igor Fedotov over 4 years ago

Tobias, Marcin, with what Ceph version were your clusters (broken OSDs specifically) created?
Have you observed the same failure with newly deployed OSDs?

Tobias - you mentioned an upgrade from LevelDB to RocksDB - could you please elaborate a bit? I suppose this wasn't applicable to the OSDs....

Actions #34

Updated by Marcin W over 4 years ago

It's MiB. The DB partition size aims at 5% of the block device.

osd.403, where it was coming from, has already been demolished, but I still have a copy of db/3808.sst coming from osd.101, which has the same symptoms:


levels in MiB:
partition_db_size=173184, levels=[192, 1728, 15552, 139968], levels_total=157440

ceph.conf:
[osd.101]
bluestore_rocksdb_options = compaction_readahead_size=2MB,compaction_style=kCompactionStyleLevel,compaction_threads=24,compression=kNoCompression,flusher_threads=8,level0_file_num_compaction_trigger=6,level0_slowdown_writes_trigger=20,level0_stop_writes_trigger=28,max_background_compactions=24,max_bytes_for_level_base=192MB,max_bytes_for_level_multiplier=9,max_write_buffer_number=12,min_write_buffer_number_to_merge=6,num_levels=4,recycle_log_file_num=2,target_file_size_base=16MB,write_buffer_size=16MB

log:
Store::get(const string&, const char*, size_t, ceph::bufferlist*)' thread 7f2436910700 time 2019-10-28 20:00:08.393856
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.4/rpm/el7/BUILD/ceph-14.2.4/src/kv/RocksDBStore.cc: 1247: ceph_abort_msg("block checksum mismatch: expected 1072552583, got 2324967102  in db/003808.sst offset 5687664 size 3875")

BlueFS export:
# du -sxmc *
5276    db
0    db.slow
0    db.wal
5276    total

db dir has 352 files
db/003808.sst uploaded.

Actions #35

Updated by Tobias Fischer over 4 years ago

We started our clusters with different versions - but all prior to Mimic, and for clusters created prior to Luminous the OSDs were Filestore. Regarding the upgrade to RocksDB you are right - that was for the MONs. So for the OSDs we did the Filestore > Bluestore upgrade on clusters created before Luminous. The upgrade path was to recreate all OSDs as Bluestore, step by step.
I'm not 100% sure, but as far as I can remember we only got broken OSDs among OSDs that had been converted to Bluestore, and thus on clusters created before Luminous.
Hope that helps...

Actions #36

Updated by Tobias Fischer over 4 years ago

I can confirm that the Bug only affects OSDs on Clusters created prior to Luminous

Actions #38

Updated by Igor Fedotov over 4 years ago

Tobias, and you did ALL OSD filestore->bluestore migration while running Luminous, didn't you?

I can confirm that the Bug only affects OSDs on Clusters created prior to Luminous

I think the more correct statement should be:
"bug only affects OSDs created/redeployed while running Luminous..."

Actions #39

Updated by Marcin W over 4 years ago

In my case, all OSDs were created just now with 14.2.4

Actions #40

Updated by Igor Fedotov over 4 years ago

Marcin W wrote:

In my case, all OSDs were created just now with 14.2.4

So you're saying you have failing OSDs that were initially deployed with Nautilus. Is that correct?

Actions #41

Updated by Tobias Fischer over 4 years ago

bug only affects OSDs redeployed while running Luminous...
We have a cluster that was built with Luminous and Bluestore - this one looks stable so far.

Actions #42

Updated by Marcin W over 4 years ago

Igor Fedotov wrote:

Marcin W wrote:

In my case, all OSDs were created just now with 14.2.4

So you're saying you have failing OSDs that were initially deployed with Nautilus. Is that correct?

Correct

Actions #43

Updated by Igor Fedotov over 4 years ago

hmm... so some observations for now:

1) ALL block checksum mismatch errors I can see have the same 'got' checksum value equal to 2324967102, e.g. "...block checksum mismatch: expected 3412624973, got 2324967102".
From the RocksDB code it looks like this value follows the data block RocksDB reads and contains that block's checksum. Having the same value for different OSDs on different clusters probably means some persistent format incompatibility.

2) Searching for the same pattern on the internet found the following complaint: https://lists.launchpad.net/maria-discuss/msg05614.html
Please note the same "got 2324967102" pattern there. And this is MariaDB, not Ceph! So there are good chances the issue isn't Ceph specific.
Also there is a note about a RocksDB upgrade from 5.14.0 to 6.2.0 as a possible root cause.
And we have a pretty similar RocksDB upgrade between Luminous and Nautilus.

Unfortunately Marcin's statement about a fresh Nautilus cluster is at odds with this hypothesis... The same goes for Tobias's second cluster...

3) I've done some content inspection of the 003808.sst file Marcin shared. The original log error is "got 2324967102 in db/003808.sst offset 5687664 size 3875".
So I checked what's present at offsets 5687664 and 5687664+3875 - a bunch of zeroes at both locations. Generally the file is pretty sparse around those offsets and has pretty short 4K-aligned non-zero blocks of data. Can't say how valid that is.

Searching for the 8A942EBE/BE2E948A patterns (2324967102 in hex, in both big and little endianness) in files found no matches.
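
For reference, a minimal sketch of such a scan with standard tools (assuming xxd and GNU grep are available; the file name is the one Marcin uploaded):

# hex-dump the file, join the dump into one line, then count hits of the checksum in either byte order
xxd -p 003808.sst | tr -d '\n' | grep -o -e 8a942ebe -e be2e948a | wc -l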

Actions #44

Updated by Igor Fedotov over 4 years ago

Marcin, would you be able to export bluefs for osd.101 once again and do a binary comparison of both the previous and new snapshots of 003808.sst?

Just want to double-check that you don't have any random read failures (we had some a while ago that were probably tied to some kernel issues).

Actions #45

Updated by Igor Fedotov over 4 years ago

Tobias, would you be able to export bluefs for osd.1 or .4 and inspect or share sst file that showed the checksum mismatch?

Actions #46

Updated by Marcin W over 4 years ago

I took only one snapshot when it happened and recreated the volume. Sorry :(

I think it's important to mention a few facts:
- Total OSD memory target is 70% of OS memory
- NICs use 9000 jumbo frames
- servers have ECC memory, although dmidecode -t17 | grep Width shows on one server:

        Total Width: 128 bits
        Data Width: 64 bits

and on the other (which also had a crashed OSD):

        Total Width: 72 bits
        Data Width: 72 bits

which may indicate that ECC is not used on the last one.

- Started to receive the following in /var/log/messages:

swapper/20: page allocation failure: order:2, mode:0x104020
Oct 29 15:15:56 ecosd1.tensor-tech.io kernel: CPU: 20 PID: 0 Comm: swapper/20 Kdump: loaded Tainted: G
...
Oct 29 15:15:56 ecosd1.tensor-tech.io kernel: [<ffffffff8c9c23d0>] warn_alloc_failed+0x110/0x180
Oct 29 15:15:56 ecosd1.tensor-tech.io kernel: [<ffffffff8cf7485c>] __alloc_pages_slowpath+0x6b6/0x724
Oct 29 15:15:56 ecosd1.tensor-tech.io kernel: [<ffffffff8c8dee88>] ? __enqueue_entity+0x78/0x80
Oct 29 15:15:56 ecosd1.tensor-tech.io kernel: [<ffffffff8c9c6b84>] __alloc_pages_nodemask+0x404/0x420
Oct 29 15:15:56 ecosd1.tensor-tech.io kernel: [<ffffffff8ca14c68>] alloc_pages_current+0x98/0x110
Oct 29 15:15:56 ecosd1.tensor-tech.io kernel: [<ffffffff8c9c117e>] __get_free_pages+0xe/0x40
Oct 29 15:15:56 ecosd1.tensor-tech.io kernel: [<ffffffff8ca2052e>] kmalloc_order_trace+0x2e/0xa0
Oct 29 15:15:56 ecosd1.tensor-tech.io kernel: [<ffffffff8ca24571>] __kmalloc+0x211/0x230
Oct 29 15:15:56 ecosd1.tensor-tech.io kernel: [<ffffffffc084fd7a>] bnx2x_frag_alloc.isra.58+0x2a/0x40 [bnx2x]

which indicates heavy memory fragmentation (as per http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-January/024126.html) - it sounds very convincing.
So, after all, it doesn't have to be the RocksDB code or the way Ceph uses it.
Steps taken now to mitigate:
- swapoff --all
- vm.min_free_kbytes = 1048576 # 1GB
- total OSD memory target = 50% of OS memory = 12871MiB for each
- echo 1 >/proc/sys/vm/compact_memory in cron every hour

I'm going to reboot all servers and wait for results.
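
For reference, a rough sketch of those mitigations as commands - how the memory target is applied here is an assumption, and the value is just 12871 MiB expressed in bytes:

swapoff --all
# keep a larger reserve of free pages for atomic allocations
sysctl -w vm.min_free_kbytes=1048576
# lower the per-OSD memory target (12871 MiB = 13496221696 bytes)
ceph config set osd osd_memory_target 13496221696
# run hourly from cron to defragment memory
echo 1 > /proc/sys/vm/compact_memory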

Actions #47

Updated by Marcin W over 4 years ago

"So I checked what's present at offsets: 5687664 and 5687664+3875 - a bunch of zeroes at both locations. Generally the file is pretty sparse at those offsets and has pretty short 4K aligned non-zero blocks of data. Can't say how valid it is.

Searching for 8A942EBE/BE2E948A (2324967102 in hex in both big and little endianess) patterns in files found no matches."

kmem alloc failures would certainly explain it. I'm applying changes to reduce fragmentation and will keep observing the OSDs.

Actions #48

Updated by Tobias Fischer over 4 years ago

Igor Fedotov wrote:

Tobias, would you be able to export bluefs for osd.1 or .4 and inspect or share sst file that showed the checksum mismatch?

Sorry, but I never exported bluefs ;) Is this the right command:

ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-4/ store-copy osd4-db/

I tried, and also
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-4/ dump

but ceph-kvstore-tool always crashes - see attached log.

Maybe it's worth mentioning that I build the Ceph deb packages myself, as we are using Debian Buster and there are no official deb packages available.

Actions #49

Updated by Igor Fedotov over 4 years ago

the proper command is:
ceph-bluestore-tool --path <osd-path> --out-dir <destination dir> --command bluefs-export

Please make sure you have enough space at <destination dir>
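
For example, with a hypothetical osd.4 and an export directory on a roomy filesystem (both placeholders):

# dump the OSD's BlueFS (RocksDB) files to a regular filesystem
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-4 --out-dir /mnt/bluefs-export --command bluefs-export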

Actions #50

Updated by Tobias Fischer over 4 years ago

Igor Fedotov wrote:

Tobias, would you be able to export bluefs for osd.1 or .4 and inspect or share sst file that showed the checksum mismatch?

thanks for the command :)

Actions #51

Updated by Igor Fedotov over 4 years ago

Thanks!
The same as in Marcin's sst

Actions #52

Updated by Marcin W over 4 years ago

Tobias,

Perhaps the below helps to avoid corruption:

I also had https://tracker.ceph.com/issues/20381 in logs:

7fe21bf73700 -1 bdev(0x7fe239a43e00 /var/lib/ceph/osd/ceph-1/block) aio_submit retries 10

and I've applied bdev_aio_max_queue_depth=8192 to osd.X as per https://tracker.ceph.com/issues/19511

Actions #53

Updated by Dmitry Mishin over 4 years ago

Both of my corruptions happened under heavy IOPS load, on older OSDs created with previous versions of Ceph and migrated to Nautilus afterwards.

Actions #54

Updated by Igor Fedotov over 4 years ago

Tobias, Marcin, Dmitry - could you please do the following:
1) For all OSDs (or at least the ones co-located with the failing one) check the 'bluestore_reads_with_retries' performance counter. Are there any non-zero occurrences?
2) For the same OSDs grep existing logs for the following pattern: "_verify_csum bad". Any occurrences?

The idea is to verify whether your nodes are suffering from https://tracker.ceph.com/issues/22464, which is sporadic interim read errors under heavy load (swap?).
We have a workaround (https://github.com/ceph/ceph/pull/24647) for such an error happening on the main device, but there is some chance it happens on DB/WAL devices too and causes the checksum errors there.
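
A minimal way to run both checks, assuming a hypothetical osd.3 with the default admin socket and log locations:

# 1) read the counter via the admin socket; non-zero means retried reads were observed
ceph daemon osd.3 perf dump | grep reads_with_retries
# 2) scan the existing OSD logs for interim checksum failures
grep "_verify_csum bad" /var/log/ceph/ceph-osd.*.log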

Thanks!

Actions #55

Updated by Tobias Fischer over 4 years ago

1) For all OSDs (3) on the affected servers (2): "bluestore_reads_with_retries": 0
2) No occurrences of the pattern on any of the OSDs

Actions #56

Updated by Marcin W over 4 years ago

Same here. Although I have another OSD with the same issues.

Actions #57

Updated by Marcin W over 4 years ago

I ran bluefs-export twice on the same OSD and the data MD5 hashes are identical for both copies.

Actions #58

Updated by Rafal Wadolowski over 4 years ago

Igor,
We are facing this same issue. Clean install 14.2.3.
Right now we have over 10 OSDs down.

 -3> 2019-11-05 05:21:15.405 7efe4659d700  2 rocksdb: [db/db_impl_compaction_flush.cc:2212] Waiting after background compaction error: Corruption: block checksum mismatch: expected 2476826016, got 2324967102  in db/021730.sst offset 0 size 2240, Accumulated background error counts: 1
    -2> 2019-11-05 05:21:15.413 7efe44d9a700 -1 rocksdb: submit_common error: Corruption: block checksum mismatch: expected 2476826016, got 2324967102  in db/021730.sst offset 0 size 2240 code = 2 Rocksdb transaction:
Put( Prefix = P key = 0x000000000000104c'.0000026260.00000000000000894890' Value size = 184)
Put( Prefix = P key = 0x000000000000104c'._fastinfo' Value size = 186)
Put( Prefix = O key = 0x7f80000000000000085ba8eaa9217262'd_data.48688f70c6a529.0000000000000703!='0xfffffffffffffffeffffffffffffffff6f00000000'x' Value size = 546)
Put( Prefix = O key = 0x7f80000000000000085ba8eaa9217262'd_data.48688f70c6a529.0000000000000703!='0xfffffffffffffffeffffffffffffffff'o' Value size = 429)
Put( Prefix = L key = 0x00000000000bff6f Value size = 4135)
    -1> 2019-11-05 05:21:15.413 7efe44d9a700 -1 /build/ceph-14.2.3/src/os/bluestore/BlueStore.cc: In function 'void BlueStore::_kv_sync_thread()' thread 7efe44d9a700 time 2019-11-05 05:21:15.416716
/build/ceph-14.2.3/src/os/bluestore/BlueStore.cc: 10549: FAILED ceph_assert(r == 0)

As you can see, we also have the "strange" value 2324967102.
What do you think about switching to the stupid allocator as a workaround? Will it keep drives from failing?

Actions #59

Updated by Igor Fedotov over 4 years ago

Rafal, thanks for the information.
I don't think switching to the stupid allocator will help. IMO this problem isn't related to the allocator at all - see my comment #22.

Actions #60

Updated by Igor Fedotov over 4 years ago

General questions to everybody experiencing the issue.

1) Have you run Nautilus before v14.2.3? If so, have you ever seen the bug on those earlier Nautilus versions?
As far as I can see the RocksDB version upgrade happened in v14.2.3: https://tracker.ceph.com/issues/40940
So I suspect this upgrade (or this release itself) as the root cause. The new RocksDB was mentioned in the presumably related MariaDB issue - see https://lists.launchpad.net/maria-discuss/msg05614.html

2) For those who are running multiple clusters - could you name any notable differences between working and failing clusters? E.g. in Ceph versions, major use cases, H/W, etc.

Actions #61

Updated by Krzysztof Klimonda over 4 years ago

We have 2 clusters that were deployed with 14.2.3 and have had OSDs fail during stress testing. We have 1 more cluster deployed with 14.2.2 and upgraded to 14.2.3; however, we currently have no means of stress testing it with the same workload as the two other clusters. All 3 clusters are running mostly the same hardware (the only difference is HDD models - some are SAS only, some mixed SAS+SATA).

We have also been running a small Nautilus cluster (24 HDDs + 24 SSDs) on a mix of 14.2.0 and 14.2.1 - this cluster has been running without any issues since its deployment back in April or May. We've done no stress tests there though.

Actions #62

Updated by Igor Fedotov over 4 years ago

Krzysztof - are you referring to the same clusters as Rafal above?

Actions #63

Updated by Rafal Wadolowski over 4 years ago

Krzysztof and I are working on the same clusters.

Actions #64

Updated by Rafal Wadolowski over 4 years ago

There were two changes in the RocksDB default configuration.
The first was the version upgrade from 5.17.2 to 6.1.20 (https://github.com/ceph/ceph/pull/29440).
The second was a parameter change for concurrent background compaction (https://github.com/ceph/ceph/pull/29162).

On a 14.2.3 cluster I tested the option

bluestore_rocksdb_options = "max_background_compactions=1" 

but it didn't work. I successfully failed one OSD.
I think we should consider testing an old version of RocksDB. I will read the release notes and recompile Ceph with an older RocksDB version.

Actions #65

Updated by Dmitry Mishin over 4 years ago

I had the cluster upgraded to 14.2.4, and after that problems started under heavy load.
I don't have any other clusters.

Actions #66

Updated by Krzysztof Klimonda over 4 years ago

I'm currently considering biting the bullet and redeploying one of our clusters with one of the earlier nautilus releases. @Igor Gajowiak, do you think, based on this bug, that 14.2.2 would be the best version to test, or should I go even farther back, to 14.2.1 for example?

Actions #67

Updated by Igor Fedotov over 4 years ago

Yeah, from this ticket's point of view 14.2.2 is the best choice. Not sure how good it is in general though - it makes sense to check the release notes for 14.2.3|4 to verify there are no critical issues...

Actions #68

Updated by Rafal Wadolowski over 4 years ago

Did anybody find a way to reproduce the issue quickly? Right now it still occurs randomly.

Actions #69

Updated by Igor Fedotov over 4 years ago

Can anybody try to make a custom build of Nautilus 14.2.3 or .4 using this patch, update a cluster and collect logs for a NEW crashing OSD?
With this I'd like to get a binary dump of the block that has the invalid checksum and learn whether it's obtained from the prefetch buffer or from the disk.

Actions #70

Updated by Dmitry Mishin over 4 years ago

Mine aren't crashing anymore... But I managed to reduce the load.

Actions #71

Updated by Krzysztof Klimonda over 4 years ago

As a quick update, I've purged and re-deployed our cluster with 14.2.2. I've been running the same stress tests for the last 10 hours or so without any OSD failures. I'll keep the tests running overnight and see if it stays that way.

Actions #72

Updated by Tobias Fischer over 4 years ago

Igor Fedotov wrote:

Can anybody try to make a custom build of Nautilus 14.2.3 or .4 using this patch, update a cluster and collect logs for a NEW crashing OSD?
With this I'd like to get a binary dump of the block that has the invalid checksum and learn whether it's obtained from the prefetch buffer or from the disk.

I will try today.

Actions #73

Updated by Tobias Fischer over 4 years ago

@Igor Gajowiak: I installed 14.2.4 with the patch on 2 clusters that already had failed OSDs. As soon as new OSDs fail I will let you know.

Actions #74

Updated by Igor Fedotov over 4 years ago

Tobias Fischer wrote:

@Igor Gajowiak: I installed 14.2.4 with the patch on 2 clusters that already had failed OSDs. As soon as new OSDs fail I will let you know.

Thanks, Tobias.

Actions #75

Updated by Sage Weil over 4 years ago

  • Crash signature (v1) updated (diff)
Actions #76

Updated by Sage Weil over 4 years ago

  • Crash signature (v1) updated (diff)
Actions #77

Updated by Sage Weil over 4 years ago

  • Crash signature (v1) updated (diff)
Actions #78

Updated by Sage Weil over 4 years ago

Another option to try (once the current tests are done): set rocksdb_enable_rmrange = false
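
Assuming this is applied like any other OSD option, e.g. in ceph.conf followed by an OSD restart:

[osd]
rocksdb_enable_rmrange = false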

Actions #79

Updated by Krzysztof Klimonda over 4 years ago

Attached is a log from an OSD that crashed with Igor's patch applied; the interesting part is below:

    -1> 2019-11-08 09:12:04.843 7f2743040700 -1 /tmp/release/Ubuntu/WORKDIR/ceph-14.2.4.1/src/kv/RocksDBStore.cc: In function 'virtual int RocksDBStore::get(const string&, const char*, size_t, ceph::bufferlist*)' thread 7f2743040700 time 2019-11-08 09:12:04.841106
/tmp/release/Ubuntu/WORKDIR/ceph-14.2.4.1/src/kv/RocksDBStore.cc: 1247: ceph_abort_msg("block checksum mismatch: expected 4264966682, got 2324967102  in db/000441.sst offset 1308566 size 1551 prefetch 0 footer checksum: 1
  metaindex handle: EAE19E0156
  index handle: E9E89C01B9F301
  footer version: 2
  table_magic_number: 9863518390377041911
   data: 0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000")

 ceph version 14.2.4.1 (30ddf1955d1370b5459fa6c0894f6b56d84acb30) nautilus (stable)
 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xdf) [0x560f316e7bce]
 2: (RocksDBStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char const*, unsigned long, ceph::buffer::v14_2_0::list*)+0x41d) [0x560f31c9412d]
 3: (BlueStore::Collection::get_onode(ghobject_t const&, bool)+0xa75) [0x560f31bd69c5]
 4: (BlueStore::getattr(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, ghobject_t const&, char const*, ceph::buffer::v14_2_0::ptr&)+0xe3) [0x560f31bf1dd3]
 5: (PGBackend::objects_get_attr(hobject_t const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, ceph::buffer::v14_2_0::list*)+0xc7) [0x560f3198f717]
 6: (PrimaryLogPG::get_object_context(hobject_t const&, bool, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, ceph::buffer::v14_2_0::list, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, ceph::buffer::v14_2_0::list> > > const*)+0x66f) [0x560f318e417f]
 7: (PrimaryLogPG::find_object_context(hobject_t const&, std::shared_ptr<ObjectContext>*, bool, bool, hobject_t*)+0x250) [0x560f318e8a90]
 8: (PrimaryLogPG::do_op(boost::intrusive_ptr<OpRequest>&)+0x2c71) [0x560f31943041]
 9: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0xd08) [0x560f31945b98]
 10: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x1bf) [0x560f317694cf]
 11: (PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x62) [0x560f31a1c7f2]
 12: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xbf5) [0x560f317873e5]
 13: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x4ac) [0x560f31db537c]
 14: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x560f31db8530]
 15: (()+0x76db) [0x7f27668f46db]
 16: (clone()+0x3f) [0x7f276569488f]

Actions #80

Updated by Igor Fedotov over 4 years ago

@Krzysztof - thanks for the info. Please share more when available.

Please also export bluefs for this OSD and share
db/000441.sst offset 1308566 size 1551 or preferably the whole file...
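
For illustration, one way to carve that exact range out of an exported copy of the file with plain dd (the output file name is arbitrary):

# copy 1551 bytes starting at byte offset 1308566 from the exported sst file
dd if=db/000441.sst of=block-1308566.bin bs=1 skip=1308566 count=1551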

Actions #81

Updated by Igor Fedotov over 4 years ago

And this time it still happens during compaction, doesn't it?

Actions #82

Updated by Nathan Cutler over 4 years ago

  • Severity changed from 2 - major to 1 - critical
Actions #83

Updated by Igor Fedotov over 4 years ago

If anyone has a spare cluster to try a custom build, here is Nautilus v14.2.4 with RocksDB updated to 6.4.6.
Worth checking whether this fixes the issue.
https://github.com/ifed01/ceph/tree/wip-ifed-v14-2-4-db-6-4-6

(NOT FOR PRODUCTION! USE AT YOUR OWN RISK!!!)

Actions #84

Updated by Sage Weil over 4 years ago

  • Crash signature (v1) updated (diff)
Actions #85

Updated by Krzysztof Klimonda over 4 years ago

Another crash; the OSD log is attached, and an excerpt is below:

2019-11-08 13:15:20.884 7f557aa82700  4 rocksdb: [db/flush_job.cc:392] [default] [JOB 490] Level-0 flush table #969: 958406 bytes OK
2019-11-08 13:15:20.884 7f557aa82700  4 rocksdb: (Original Log Time 2019/11/08-13:15:20.886985) [db/memtable_list.cc:380] [default] Level-0 commit table #969 started
2019-11-08 13:15:20.884 7f557aa82700  4 rocksdb: (Original Log Time 2019/11/08-13:15:20.887178) [db/memtable_list.cc:428] [default] Level-0 commit table #969: memtable #1 done
2019-11-08 13:15:20.884 7f557aa82700  4 rocksdb: (Original Log Time 2019/11/08-13:15:20.887193) EVENT_LOG_v1 {"time_micros": 1573218920887187, "job": 490, "event": "flush_finished", "output_compression": "Snappy", "lsm_state": [4, 2, 0, 0, 0, 0, 0], "immutable_memtables": 0}
2019-11-08 13:15:20.884 7f557aa82700  4 rocksdb: (Original Log Time 2019/11/08-13:15:20.887218) [db/db_impl_compaction_flush.cc:201] [default] Level summary: files[4 2 0 0 0 0 0] max score 1.00

2019-11-08 13:15:20.884 7f557aa82700  4 rocksdb: [db/db_impl_files.cc:356] [JOB 490] Try to delete WAL files size 63257723, prev total WAL file size 63728551, number of live WAL files 2.

2019-11-08 13:15:20.884 7f557b283700  4 rocksdb: [db/compaction_job.cc:1645] [default] [JOB 491] Compacting 4@0 + 2@1 files to L1, score 1.00
2019-11-08 13:15:20.884 7f557b283700  4 rocksdb: [db/compaction_job.cc:1649] [default] Compaction start summary: Base version 490 Base level 0, inputs: [969(935KB) 967(835KB) 965(913KB) 963(856KB)], [960(65MB) 961(32MB)]

2019-11-08 13:15:20.884 7f557b283700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1573218920887362, "job": 491, "event": "compaction_started", "compaction_reason": "LevelL0FilesNum", "files_L0": [969, 967, 965, 963], "files_L1": [960, 961], "score": 1, "input_data_size": 106119555}
2019-11-08 13:15:20.892 7f557b283700  3 rocksdb: [db/db_impl_compaction_flush.cc:2660] Compaction error: Corruption: block checksum mismatch: expected 672242049, got 2324967102  in db/000960.sst offset 454655 size 1838 prefetch 0 footer checksum: 1
  metaindex handle: 8FB3C12056
  index handle: F89D9820D28F29
  footer version: 2
  table_magic_number: 9863518390377041911
   data: F701016C0000007EEA09412FC64A4A981FD9677651BEC89AE53F00000000004C0000000801011F000000C807D8F13F3A6AC55D497B7C2E01010000000101070000005B0300008308010801011F000000C607D3922E3A6AC55D0ED6D92E00010000000101070000005303000083080071481BF90000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
2019-11-08 13:15:20.892 7f557b283700  4 rocksdb: (Original Log Time 2019/11/08-13:15:20.896071) [db/compaction_job.cc:768] [default] compacted to: files[4 2 0 0 0 0 0] max score 0.00, MB/sec: 12238.4 rd, 0.0 wr, level 1, files in(4, 2) out(1) MB in(3.5, 97.7) out(0.0), read-write-amplify(29.3) write-amplify(0.0) Corruption: block checksum mismatch: expected 672242049, got 2324967102  in db/000960.sst offset 454655 size 1838 prefetch 0 footer checksum: 1
  metaindex handle: 8FB3C12056
  index handle: F89D9820D28F29
  footer version: 2
  table_magic_number: 9863518390377041911
   data:
2019-11-08 13:15:20.892 7f557b283700  4 rocksdb: (Original Log Time 2019/11/08-13:15:20.896113) EVENT_LOG_v1 {"time_micros": 1573218920896100, "job": 491, "event": "compaction_finished", "compaction_time_micros": 8671, "compaction_time_cpu_micros": 8564, "output_level": 1, "num_output_files": 1, "total_output_size": 627658, "num_input_records": 2098, "num_output_records": 1977, "num_subcompactions": 1, "output_compression": "Snappy", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [4, 2, 0, 0, 0, 0, 0]}
2019-11-08 13:15:20.892 7f557b283700  2 rocksdb: [db/db_impl_compaction_flush.cc:2212] Waiting after background compaction error: Corruption: block checksum mismatch: expected 672242049, got 2324967102  in db/000960.sst offset 454655 size 1838 prefetch 0 footer checksum: 1
  metaindex handle: 8FB3C12056
  index handle: F89D9820D28F29
  footer version: 2
  table_magic_number: 9863518390377041911
   data: F701016C0000007EEA09412FC64A4A981FD9677651BEC89AE53F00000000004C0000000801011F000000C807D8F13F3A6AC55D497B7C2E01010000000101070000005B0300008308010801011F000000C607D3922E3A6AC55D0ED6D92E00010000000101070000005303000083080071481BF90000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, Accumulated background error counts: 1
2019-11-08 13:15:20.892 7f5579a80700 -1 rocksdb: submit_common error: Corruption: block checksum mismatch: expected 672242049, got 2324967102  in db/000960.sst offset 454655 size 1838 prefetch 0 footer checksum: 1
  metaindex handle: 8FB3C12056
  index handle: F89D9820D28F29
  footer version: 2
  table_magic_number: 9863518390377041911
   data: F701016C0000007EEA09412FC64A4A981FD9677651BEC89AE53F00000000004C0000000801011F000000C807D8F13F3A6AC55D497B7C2E01010000000101070000005B0300008308010801011F000000C607D3922E3A6AC55D0ED6D92E00010000000101070000005303000083080071481BF90000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 code = 2 Rocksdb transaction: 
Put( Prefix = P key = 0x00000000000013c6'.0000003125.00000000000000112259' Value size = 184)
Put( Prefix = P key = 0x00000000000013c6'._fastinfo' Value size = 186)
Put( Prefix = O key = 0x7f800000000000000208afc8'q!rbd_data.3f8867c4c4e2ee.000000000000001d!='0xfffffffffffffffeffffffffffffffff6f00200000'x' Value size = 565)
Put( Prefix = O key = 0x7f800000000000000208afc8'q!rbd_data.3f8867c4c4e2ee.000000000000001d!='0xfffffffffffffffeffffffffffffffff'o' Value size = 423)
Put( Prefix = L key = 0x00000000003d1c9d Value size = 4135)
2019-11-08 13:15:20.892 7f5579a80700 -1 /tmp/release/Ubuntu/WORKDIR/ceph-14.2.4.1/src/os/bluestore/BlueStore.cc: In function 'void BlueStore::_kv_sync_thread()' thread 7f5579a80700 time 2019-11-08 13:15:20.896353
/tmp/release/Ubuntu/WORKDIR/ceph-14.2.4.1/src/os/bluestore/BlueStore.cc: 10549: FAILED ceph_assert(r == 0)

 ceph version 14.2.4.1 (30ddf1955d1370b5459fa6c0894f6b56d84acb30) nautilus (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x564d4221f524]
 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0x564d4221f6ff]
 3: (BlueStore::_kv_sync_thread()+0xf54) [0x564d427853b4]
 4: (BlueStore::KVSyncThread::entry()+0xd) [0x564d427a792d]
 5: (()+0x76db) [0x7f558ab0f6db]
 6: (clone()+0x3f) [0x7f55898af88f]

2019-11-08 13:15:20.896 7f5579a80700 -1 *** Caught signal (Aborted) **
 in thread 7f5579a80700 thread_name:bstore_kv_sync

 ceph version 14.2.4.1 (30ddf1955d1370b5459fa6c0894f6b56d84acb30) nautilus (stable)
 1: (()+0x12890) [0x7f558ab1a890]
 2: (gsignal()+0xc7) [0x7f55897cce97]
 3: (abort()+0x141) [0x7f55897ce801]
 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a3) [0x564d4221f575]
 5: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0x564d4221f6ff]
 6: (BlueStore::_kv_sync_thread()+0xf54) [0x564d427853b4]
 7: (BlueStore::KVSyncThread::entry()+0xd) [0x564d427a792d]
 8: (()+0x76db) [0x7f558ab0f6db]
 9: (clone()+0x3f) [0x7f55898af88f]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

I have also extracted bluefs from this one, and I'll attach db/000960.sst to this ticket somehow.

Actions #86

Updated by Krzysztof Klimonda over 4 years ago

Entire db/000960.sst can be downloaded from https://cf2.cloudferro.com:8080/swift/v1/AUTH_36d73de2691345449a04f9b9f95dd06a/ceph-misc/000960.sst.gz (md5: 4d4389051ea3922a3cf9e47fd49d6f86)

Actions #87

Updated by Krzysztof Klimonda over 4 years ago

Entire db/000960.sst for osd.50 can be downloaded from https://cf2.cloudferro.com:8080/swift/v1/AUTH_36d73de2691345449a04f9b9f95dd06a/ceph-misc/000960.sst.gz (md5: 4d4389051ea3922a3cf9e47fd49d6f86)
Another sst (db/001456.sst) for osd.75 is here: https://cf2.cloudferro.com:8080/swift/v1/AUTH_36d73de2691345449a04f9b9f95dd06a/ceph-misc/osd-75-001456.sst.gz

and the excerpt from osd.75 log is below:

2019-11-08 13:46:00.341 7f268b57f700  4 rocksdb: [db/compaction_job.cc:1645] [default] [JOB 678] Compacting 4@0 + 3@1 files to L1, score 1.00
2019-11-08 13:46:00.341 7f268b57f700  4 rocksdb: [db/compaction_job.cc:1649] [default] Compaction start summary: Base version 677 Base level 0, inputs: [1458(830KB) 1456(13MB) 1454(23MB) 1452(935KB)], [1448(65MB) 1449(65MB) 1450(29MB)]

2019-11-08 13:46:00.341 7f268b57f700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1573220760348605, "job": 678, "event": "compaction_started", "compaction_reason": "LevelL0FilesNum", "files_L0": [1458, 1456, 1454, 1452], "files_L1": [1448, 1449, 1450], "score": 1, "input_data_size": 208231136}
2019-11-08 13:46:00.405 7f268b57f700  3 rocksdb: [db/db_impl_compaction_flush.cc:2660] Compaction error: Corruption: block checksum mismatch: expected 2704997081, got 2329464754  in db/001456.sst offset 1046829 size 3898 prefetch 0 footer checksum: 1
  metaindex handle: C89ECC0656
  index handle: EDC5C50691D306
  footer version: 2
  table_magic_number: 9863518390377041911
   data: 0058B1044F007F80000000000000022D8C16ED217262645F646174612E33663964666339343961616463612E30303030303030303030303030393663213DFFFFFFFFFFFFFFFEFFFFFFFFFFFFFFFF6F0018000078012336A702000000020102830C830407E9296000830149A560004359BB6000431DA3610043290B6200438923620043954262004304040C800424A966D52E358D66917975705F1C15AC3AB1AE84F8CF358557E3F576EA8BA1B982AA91432201014622F6CF6212671ADB40F755C450C153E508EE6C320EC475DAC210C3EE95D35321F9F3B15CFC61492B3792341017F65B8C086E121B2DE355CE17BDEAF4076220607F4E2EF37854B156BD226E1794490FB2787176C6BF1FDF4C5F2D2AD224899B07A3E58A9093500A3BA9C96AB6D2D3E2C90A9928A096B3FA3E3A7DE400AB1FFE4053FA667B7DE4D177C10EB79EBB6B95B67D9E975B9D6599FC52008EF028BC290A1AE81D2B872A87DF81C64AF3EA9FB01D19C36784777643DD5912C8F35A77EF5148F5B814A79720D9F4AAC978B523FF3DB443A78C12E2FB9D71DC9F21CD739F1D33020DDF13849D23F23F9582C73C217D9A1100BFCB24F43882700065F0418CF04DF85E119846BA0AF9CE377644AE66D1D70525DF69A612D503B51BDD5AC4C53D39ED2EBBB06A439E8819E631F812CC5A003C05EA7E985D25F4BBA43920BE70190E9B6CC11F1CC786C46CC3C3F2E492B12AA945503AFAF29C0FE8B4FD117198693A514BCD51663F7D36D9EC93661051431FBA6FF40F8F2613A979867B9ADB358E1AC4E4E625089B0FC946E16B06DAAE47F48BCE179CB93E6A774AC68375682CD6687983E636948D9601CC1ED95CDF249630B916495A34515A2843B325378CF7D25BA0E8EEF68B77C9DDE78130EE41ECF472229737BF12A5EF4C0C930420000078014136A702000000020102831083040131246000830404040C8004EBFDD52778D122672FF0AC32B2904ECB6F1BFBC3BE065CF21BF70F18A5613BAD77ADDC6070868C240F6260F83F2BE5A57B577600C0FE4CDBE8D41AB3F49F8C172F6E2ACF4598EA0B6F2627B6D395507DE5E43A0F7CF5B63810752307EC3CC6387B73F9D67513D88EDF07C263794439EF8D5ACCAC186454D8AE40C00C176AB312E4C22FD5CD02CBA6AC3D2BA51FBB41F3D934F967224E624281F3D35CD34B2BAFBB860B5B5EBB3C233E78863349B0735A92CDC3E934A391519B0A758CFE4CC9D4DF5DBB3B9B8A3D18F807A9957A1C3B544BD480B604C641EDA9DBB95A69B12DB50F1C6EAAF8DDCD392F34953691333C5BD28A685F164EE4FE98A0DB3D7C239BC46F46EC5B4CAA6A006E98F4DDEE7834C58A9DC565DB8FD9CAD3195C0D295EB98040BEFCA34FB7B2896BA90D218242BCEF66CAFF0484ADA38CC833E4DAC2751AA40388C8E0B0EC556C1BB4C2BFC0CFF1C48132B6ED106B17652C956CDFC823A5D6B36B032C35249EBF6D79C705F77DD0853B3A978B07C8784F5D8FC02B22F3DF368E43C761C6F08C523A4CB9B7843CA6A0927510C16D0D9BB744208E3247C4934357CD04BF890F6CC593A13149E26A1C5EC8CDECFD1CAFDD421DD51678CDB8BB50DBB3DE37C70A3429EC4719940318BBCFEAE5B85E9BAB56A26DE3B967D1C189FA89C253D5647B6E601B284A2624E6144BFA96B47CB455474BD008D35C354C11334C0C930428000078014236A702000000020102831483040151246000830404040C80042CE7AEDA267B45699937BD7F5752DDA332FF668BF081FD8A5FAD3D79E2C569B68AE4594C2A4FC9492AB8076D1A29D2D448B99DCB588F9BEA00A0A43D068ABDD5CA5E0BE19D9D9B2EF1BD7953F42F81243FDCFC1F1FB893830020DA1425AD9DC11FF322FB0F2CE86F7700E6FC701A7959B56CA6189C07C7BD703FBEC9B75117435763E2DD2CC75308ABAB429F9B1EC234A187A2B9DA9D2AC602D7E0AF9EFD323132332C0FA351364F5BB4AE7475AA1978C9407F91B3255DB975D05F54952B51F35A4E46FF20F2E10512A6D5248F644FD0898882FCE2D17812118DAF8B7F388BD2515C00FC5239275E40BB701BAFD9E8D6FCE40177BD6D3732BC0D6F831AAC33927992572EC53D57123B4CC5D01BCA552CFA715D8DCF72E972925FC8B0C36A3C378A3EC86AF80F44FF45B6961E90087205F180FF794CE0AEDEDF4BEDD061E8DADA0BFBD3D2528A0D3231A3E6B4B8248B9180572E3EF05C04550872CDE576D6952AFCF56C3628F0B81606D5A4CE17520F89CC220BCCFAAA5ABE22E78D5F32B43A9307A67CF2193F5066321F83C25928F7723E97249C6E5E994C17F4A7FB07C1EE1CA1374E749295FD81128A2CE92D465300C10829640E946648FCC5061894F7F6657F040E8C7D26E4D960374BE93EDA45990982D6D65491EC9938F7DE46527A9955200D7B2A3FC23FDD53EE20E1FEC5BFC6D5A9493FE60F24FB7A6C5F38B75C6DE0EF2F300010084200626C6F636B7301000D0108801809010A2808
10085F7065725F6B6579111B008001182400000003160879746573051F053D1121040000090A18020C0873697A65090D010104C07F055530130A430031312E355F68656164111E080101040154010410100A352E344A1D002008000000050F0A36395A1C000C060E0A665A1B0010050F0A3961661C0000615E53000C050F0A645E38000C060E0A625A53001403100A372E314A1D0001E01802120A322E31354E1F00000B09E20437324A3B00011C0C060E0A335A1B002000140A4300322E31395E21000C050F0A626A58004E71010D7300655A730010050F0A6336661C000465635A1C001004100A32306A8B006A700000334EE10101A721E16A380000616AA80000624EC6010D5400665EA6016A8A010C060E0A3766FA000433334EF802016F2DC20433335E3E0041FC5EA70104050F56C502015941C55E8C010C050F0A65CA8B011004100A34306A8C0100336A54005EDF0008060D0A5A37010C050F0A376A6D0000386AF30200396AF90100615EDD011004100A35315EA70108060E0A4E120421692DC20435326A8A0200636A1C0000646AAD000465645A1D015AF60401900C050F0A665E37001004100A36305E4E0276540000315EDD0141A15E1C0141A15EFA011004100A37314E83050DC56E50010433385A180141356AA90000635E19012DC5003762A9010C040F0A395E59000C040F0A615E8E020004569F0601E418020C0A6D657461360C0705010C12A7204C05090C0054709C192018211000009C7054051A01164C01010B100000010118429903070010000000C0E30118A800E3F43D22C8D1130A9C3ED6091A69D30DD3C7EC213868710CFA1854A0101C770B1F838D075893BC0B3713052F10005548B902055604C0FB0511E818ABFD345441DF1563B513B1ABCDB909ACF6709B2B615613D51E18F86CFE8815DA83988FC1A30D0A7B1006F0ABECD61B0F42C37592E5C11D00C0A70540E8089D95A161A9911DA1B3AE65616439187456E5D2B479FD05CEAA22EFC345111459557D282731C20FAB2A633AE6B1931C55E5F6310B598C0600C0FF0540E8958B1D907AE41E0E7231398C8094591F2E26CE7873EC481FC5C4025C2761710598D81D89237E3D0413BBD78435EB9D07627711D3FAFA1D0000C0D30540E8DD25603C623B3F15BB849D343CC9C11A973036C4D81E170D12463B00C081970CC268FE5C1AF0B00E18CD3E1784EF961EA3D95B1C51D2B80100C0032508E8663F3D892656161FECA7743126861913FD9438720A18C40B9F92C803E5491E0D5392295758474A034AB2BB41561D4204497612EFCC22630700C0F30580E8D939F3243B6C7F143BE7B1C70D5B211FE7BC58C588473A169C9737BA44E6931BF3F278CBAC1F941E5E9EF51F391EE117CBB3CF54CC93F10800C0E70540E8CF0EDAFAA7C86610D9C1F3F7A7DBA11D3BF8111E7B3F8F0007BFA467E6583117E017716D4AF7E319FC22BEC13048B7165FC4B9062788A20400C2E30540A811778F23613A2400E26EE96884F50A14DC2DACD53B2A6D1FBB85C76371BAD308B7709B696AC53E1937130125F7550000C24900E8F71F36C0A642BA06FE430BC839E8840D7F68029701B905020FCDA0FE9EEB780BA1994C79239E480F341359221E57FE116622B13AB7B91C0000C2A70580E889649E382A147300914CCF14364678079269511C5FFE8A05322DC12EA3D1FF07A6253FCFD3620C10B4E414E7BAA4A106969C88EA0CC1621200C2FF0540E872C217A0D8549B1E4EF8C9328F1A0516093F32EAEC091B04E1C7C17D24EC8B06FCB8C7ABA33EA61D1FF7FA98F0F14604E3DE8F97C091A90D00C2D30540E87B0F82C60D2B550BEFC112B120C280153DD82AB5BABF681F07DB86EA61BD7A02605BCDCF68620F096CAB496C66848D176D35D3F7677AB20500C2034500F4F002D5FC31794AF8101E9ABFBB5070E22F00F3774A2AF0D3830DFECE4FC958659D1EDFF9AA31034899123BDF45E92A90AA07E73B8B6D62DDBC0000C2F30100000000EFBC39E2BD94CF139DB7CFDD7A01E412F376EB7F4711AA03DEEE83343382B617DB7D216F04366F0ABBAFF6CE2FAF881CF75501A26A2E560700C2E701000000005795C93FF779821BAAB24D1D5AAF151255B6546A44A7BB0ACA162032BE3FC818D902DFBAE7EE3F185B60EF756D20431D0B6C90FE15A92E1B00C4E30100000000B011790AD5B5E60A362277187E51421646E4F331F43D0314887C1B94C44EF007916F3FD4CABF531F37130100000000005548B9020000000000C4FB0100000000B61B3E1085AEBC0476C32C94B10DCF156E98CAD09F179B0E0D53E2AD8E7A851D61CA35AC6058840C4C395630F134C71929C76409734ED41600C4A701000000001C834073D70165196310DAAC9AE604010CC2694D6646C8024138A7E817E279180867F460EF478A1FE18C9AB88398310F9CD1A230C3568B0B00C4FF0100000000464B8FB2ADFBAB0068E9AE102EA6A8172DDD614D6E0CB617A
5BB15DBDC9ABA01743790BDD0E50B0FEE06F8FBF11D2506DD00083CC2A5CD1E00C4D3010000000003D0D4484D4F1B0B001A19417FC511054023235B2398E2136864C4D972A52A1D8D8C0429CCE7C3159111DA3E77686D0732C212DFA10D691700C4030200000000082B36AFE0D8331861C562008123CE01ACD85BBD3E44E00F157B55EBD9A99011622FE07F901F9809EC054BF801C4980CBD6013BC5A1C061E00C4F3010000000082FDD83E1EF90F0AB01F1A49B7DFC01AF6434BD969774C057E682E9947A22E1B0FCDE6BEB1073406A15955D182A19600342B5AAD856B780E00C4E70100000000AC284FD1CF36970C15E5D36532FEDE07A2FC2F5ADCAC350494FFF48C83880612F29FD4F1950A0B06FE93416E686894067F32C943EC55101100C6E30100000000C9B4BF4A7E2599109976F3EE4FFF9202D3EE91427068990CDABDE810C50B601CBB17304304791F00371301000000000055D10000C68900E841
2019-11-08 13:46:00.405 7f268b57f700  4 rocksdb: (Original Log Time 2019/11/08-13:46:00.412653) [db/compaction_job.cc:768] [default] compacted to: files[4 3 0 0 0 0 0] max score 0.00, MB/sec: 3253.0 rd, 0.0 wr, level 1, files in(4, 3) out(1) MB in(38.7, 159.9) out(0.0), read-write-amplify(5.1) write-amplify(0.0) Corruption: block checksum mismatch: expected 2704997081, got 2329464754  in db/001456.sst offset 1046829 size 3898 prefetch 0 footer checksum: 1
  metaindex handle: C89ECC0656
  index handle: EDC5C50691D306
  footer version: 2
  table_magic_number: 9863518390377041911
   dat
2019-11-08 13:46:00.405 7f268b57f700  4 rocksdb: (Original Log Time 2019/11/08-13:46:00.412718) EVENT_LOG_v1 {"time_micros": 1573220760412702, "job": 678, "event": "compaction_finished", "compaction_time_micros": 64012, "compaction_time_cpu_micros": 63484, "output_level": 1, "num_output_files": 1, "total_output_size": 6282402, "num_input_records": 19592, "num_output_records": 15330, "num_subcompactions": 1, "output_compression": "Snappy", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [4, 3, 0, 0, 0, 0, 0]}
2019-11-08 13:46:00.405 7f268b57f700  2 rocksdb: [db/db_impl_compaction_flush.cc:2212] Waiting after background compaction error: Corruption: block checksum mismatch: expected 2704997081, got 2329464754  in db/001456.sst offset 1046829 size 3898 prefetch 0 footer checksum: 1
  metaindex handle: C89ECC0656
  index handle: EDC5C50691D306
  footer version: 2
  table_magic_number: 9863518390377041911
   data: 0058B1044F007F80000000000000022D8C16ED217262645F646174612E33663964666339343961616463612E30303030303030303030303030393663213DFFFFFFFFFFFFFFFEFFFFFFFFFFFFFFFF6F0018000078012336A702000000020102830C830407E9296000830149A560004359BB6000431DA3610043290B6200438923620043954262004304040C800424A966D52E358D66917975705F1C15AC3AB1AE84F8CF358557E3F576EA8BA1B982AA91432201014622F6CF6212671ADB40F755C450C153E508EE6C320EC475DAC210C3EE95D35321F9F3B15CFC61492B3792341017F65B8C086E121B2DE355CE17BDEAF4076220607F4E2EF37854B156BD226E1794490FB2787176C6BF1FDF4C5F2D2AD224899B07A3E58A9093500A3BA9C96AB6D2D3E2C90A9928A096B3FA3E3A7DE400AB1FFE4053FA667B7DE4D177C10EB79EBB6B95B67D9E975B9D6599FC52008EF028BC290A1AE81D2B872A87DF81C64AF3EA9FB01D19C36784777643DD5912C8F35A77EF5148F5B814A79720D9F4AAC978B523FF3DB443A78C12E2FB9D71DC9F21CD739F1D33020DDF13849D23F23F9582C73C217D9A1100BFCB24F43882700065F0418CF04DF85E119846BA0AF9CE377644AE66D1D70525DF69A612D503B51BDD5AC4C53D39ED2EBBB06A439E8819E631F812CC5A003C05EA7E985D25F4BBA43920BE70190E9B6CC11F1CC786C46CC3C3F2E492B12AA945503AFAF29C0FE8B4FD117198693A514BCD51663F7D36D9EC93661051431FBA6FF40F8F2613A979867B9ADB358E1AC4E4E625089B0FC946E16B06DAAE47F48BCE179CB93E6A774AC68375682CD6687983E636948D9601CC1ED95CDF249630B916495A34515A2843B325378CF7D25BA0E8EEF68B77C9DDE78130EE41ECF472229737BF12A5EF4C0C930420000078014136A702000000020102831083040131246000830404040C8004EBFDD52778D122672FF0AC32B2904ECB6F1BFBC3BE065CF21BF70F18A5613BAD77ADDC6070868C240F6260F83F2BE5A57B577600C0FE4CDBE8D41AB3F49F8C172F6E2ACF4598EA0B6F2627B6D395507DE5E43A0F7CF5B63810752307EC3CC6387B73F9D67513D88EDF07C263794439EF8D5ACCAC186454D8AE40C00C176AB312E4C22FD5CD02CBA6AC3D2BA51FBB41F3D934F967224E624281F3D35CD34B2BAFBB860B5B5EBB3C233E78863349B0735A92CDC3E934A391519B0A758CFE4CC9D4DF5DBB3B9B8A3D18F807A9957A1C3B544BD480B604C641EDA9DBB95A69B12DB50F1C6EAAF8DDCD392F34953691333C5BD28A685F164EE4FE98A0DB3D7C239BC46F46EC5B4CAA6A006E98F4DDEE7834C58A9DC565DB8FD9CAD3195C0D295EB98040BEFCA34FB7B2896BA90D218242BCEF66CAFF0484ADA38CC833E4DAC2751AA40388C8E0B0EC556C1BB4C2BFC0CFF1C48132B6ED106B17652C956CDFC823A5D6B36B032C35249EBF6D79C705F77DD0853B3A978B07C8784F5D8FC02B22F3DF368E43C761C6F08C523A4CB9B7843CA6A0927510C16D0D9BB744208E3247C4934357CD04BF890F6CC593A13149E26A1C5EC8CDECFD1CAFDD421DD51678CDB8BB50DBB3DE37C70A3429EC4719940318BBCFEAE5B85E9BAB56A26DE3B967D1C189FA89C253D5647B6E601B284A2624E6144BFA96B47CB455474BD008D35C354C11334C0C930428000078014236A702000000020102831483040151246000830404040C80042CE7AEDA267B45699937BD7F5752DDA332FF668BF081FD8A5FAD3D79E2C569B68AE4594C2A4FC9492AB8076D1A29D2D448B99DCB588F9BEA00A0A43D068ABDD5CA5E0BE19D9D9B2EF1BD7953F42F81243FDCFC1F1FB893830020DA1425AD9DC11FF322FB0F2CE86F7700E6FC701A7959B56CA6189C07C7BD703FBEC9B75117435763E2DD2CC75308ABAB429F9B1EC234A187A2B9DA9D2AC602D7E0AF9EFD323132332C0FA351364F5BB4AE7475AA1978C9407F91B3255DB975D05F54952B51F35A4E46FF20F2E10512A6D5248F644FD0898882FCE2D17812118DAF8B7F388BD2515C00FC5239275E40BB701BAFD9E8D6FCE40177BD6D3732BC0D6F831AAC33927992572EC53D57123B4CC5D01BCA552CFA715D8DCF72E972925FC8B0C36A3C378A3EC86AF80F44FF45B6961E90087205F180FF794CE0AEDEDF4BEDD061E8DADA0BFBD3D2528A0D3231A3E6B4B8248B9180572E3EF05C04550872CDE576D6952AFCF56C3628F0B81606D5A4CE17520F89CC220BCCFAAA5ABE22E78D5F32B43A9307A67CF2193F5066321F83C25928F7723E97249C6E5E994C17F4A7FB07C1EE1CA1374E749295FD81128A2CE92D465300C10829640E946648FCC5061894F7F6657F040E8C7D26E4D960374BE93EDA45990982D6D65491EC9938F7DE46527A9955200D7B2A3FC23FDD53EE20E1FEC5BFC6D5A9493FE60F24FB7A6C5F38B75C6DE0EF2F300010084200626C6F636B7301000D0108801809010A2808
10085F7065725F6B6579111B008001182400000003160879746573051F053D1121040000090A18020C0873697A65090D010104C07F055530130A430031312E355F68656164111E080101040154010410100A352E344A1D002008000000050F0A36395A1C000C060E0A665A1B0010050F0A3961661C0000615E53000C050F0A645E38000C060E0A625A53001403100A372E314A1D0001E01802120A322E31354E1F00000B09E20437324A3B00011C0C060E0A335A1B002000140A4300322E31395E21000C050F0A626A58004E71010D7300655A730010050F0A6336661C000465635A1C001004100A32306A8B006A700000334EE10101A721E16A380000616AA80000624EC6010D5400665EA6016A8A010C060E0A3766FA000433334EF802016F2DC20433335E3E0041FC5EA70104050F56C502015941C55E8C010C050F0A65CA8B011004100A34306A8C0100336A54005EDF0008060D0A5A37010C050F0A376A6D0000386AF30200396AF90100615EDD011004100A35315EA70108060E0A4E120421692DC20435326A8A0200636A1C0000646AAD000465645A1D015AF60401900C050F0A665E37001004100A36305E4E0276540000315EDD0141A15E1C0141A15EFA011004100A37314E83050DC56E50010433385A180141356AA90000635E19012DC5003762A9010C040F0A395E59000C040F0A615E8E020004569F0601E418020C0A6D657461360C0705010C12A7204C05090C0054709C192018211000009C7054051A01164C01010B100000010118429903070010000000C0E30118A800E3F43D22C8D1130A9C3ED6091A69D30DD3C7EC213868710CFA1854A0101C770B1F838D075893BC0B3713052F10005548B902055604C0FB0511E818ABFD345441DF1563B513B1ABCDB909ACF6709B2B615613D51E18F86CFE8815DA83988FC1A30D0A7B1006F0ABECD61B0F42C37592E5C11D00C0A70540E8089D95A161A9911DA1B3AE65616439187456E5D2B479FD05CEAA22EFC345111459557D282731C20FAB2A633AE6B1931C55E5F6310B598C0600C0FF0540E8958B1D907AE41E0E7231398C8094591F2E26CE7873EC481FC5C4025C2761710598D81D89237E3D0413BBD78435EB9D07627711D3FAFA1D0000C0D30540E8DD25603C623B3F15BB849D343CC9C11A973036C4D81E170D12463B00C081970CC268FE5C1AF0B00E18CD3E1784EF961EA3D95B1C51D2B80100C0032508E8663F3D892656161FECA7743126861913FD9438720A18C40B9F92C803E5491E0D5392295758474A034AB2BB41561D4204497612EFCC22630700C0F30580E8D939F3243B6C7F143BE7B1C70D5B211FE7BC58C588473A169C9737BA44E6931BF3F278CBAC1F941E5E9EF51F391EE117CBB3CF54CC93F10800C0E70540E8CF0EDAFAA7C86610D9C1F3F7A7DBA11D3BF8111E7B3F8F0007BFA467E6583117E017716D4AF7E319FC22BEC13048B7165FC4B9062788A20400C2E30540A811778F23613A2400E26EE96884F50A14DC2DACD53B2A6D1FBB85C76371BAD308B7709B696AC53E1937130125F7550000C24900E8F71F36C0A642BA06FE430BC839E8840D7F68029701B905020FCDA0FE9EEB780BA1994C79239E480F341359221E57FE116622B13AB7B91C0000C2A70580E889649E382A147300914CCF14364678079269511C5FFE8A05322DC12EA3D1FF07A6253FCFD3620C10B4E414E7BAA4A106969C88EA0CC1621200C2FF0540E872C217A0D8549B1E4EF8C9328F1A0516093F32EAEC091B04E1C7C17D24EC8B06FCB8C7ABA33EA61D1FF7FA98F0F14604E3DE8F97C091A90D00C2D30540E87B0F82C60D2B550BEFC112B120C280153DD82AB5BABF681F07DB86EA61BD7A02605BCDCF68620F096CAB496C66848D176D35D3F7677AB20500C2034500F4F002D5FC31794AF8101E9ABFBB5070E22F00F3774A2AF0D3830DFECE4FC958659D1EDFF9AA31034899123BDF45E92A90AA07E73B8B6D62DDBC0000C2F30100000000EFBC39E2BD94CF139DB7CFDD7A01E412F376EB7F4711AA03DEEE83343382B617DB7D216F04366F0ABBAFF6CE2FAF881CF75501A26A2E560700C2E701000000005795C93FF779821BAAB24D1D5AAF151255B6546A44A7BB0ACA162032BE3FC818D902DFBAE7EE3F185B60EF756D20431D0B6C90FE15A92E1B00C4E30100000000B011790AD5B5E60A362277187E51421646E4F331F43D0314887C1B94C44EF007916F3FD4CABF531F37130100000000005548B9020000000000C4FB0100000000B61B3E1085AEBC0476C32C94B10DCF156E98CAD09F179B0E0D53E2AD8E7A851D61CA35AC6058840C4C395630F134C71929C76409734ED41600C4A701000000001C834073D70165196310DAAC9AE604010CC2694D6646C8024138A7E817E279180867F460EF478A1FE18C9AB88398310F9CD1A230C3568B0B00C4FF0100000000464B8FB2ADFBAB0068E9AE102EA6A8172DDD614D6E0CB617A
5BB15DBDC9ABA01743790BDD0E50B0FEE06F8FBF11D2506DD00083CC2A5CD1E00C4D3010000000003D0D4484D4F1B0B001A19417FC511054023235B2398E2136864C4D972A52A1D8D8C0429CCE7C3159111DA3E77686D0732C212DFA10D691700C4030200000000082B36AFE0D8331861C562008123CE01ACD85BBD3E44E00F157B55EBD9A99011622FE07F901F9809EC054BF801C4980CBD6013BC5A1C061E00C4F3010000000082FDD83E1EF90F0AB01F1A49B7DFC01AF6434BD969774C057E682E9947A22E1B0FCDE6BEB1073406A15955D182A19600342B5AAD856B780E00C4E70100000000AC284FD1CF36970C15E5D36532FEDE07A2FC2F5ADCAC350494FFF48C83880612F29FD4F1950A0B06FE93416E686894067F32C943EC55101100C6E30100000000C9B4BF4A7E2599109976F3EE4FFF9202D3EE91427068990CDABDE810C50B601CBB17304304791F00371301000000000055D10000C68900E841, Accumulated background error counts: 1
2019-11-08 13:46:00.405 7f2689d7c700 -1 rocksdb: submit_common error: Corruption: block checksum mismatch: expected 2704997081, got 2329464754  in db/001456.sst offset 1046829 size 3898 prefetch 0 footer checksum: 1
  metaindex handle: C89ECC0656
  index handle: EDC5C50691D306
  footer version: 2
  table_magic_number: 9863518390377041911
   data: 0058B1044F007F80000000000000022D8C16ED217262645F646174612E33663964666339343961616463612E30303030303030303030303030393663213DFFFFFFFFFFFFFFFEFFFFFFFFFFFFFFFF6F0018000078012336A702000000020102830C830407E9296000830149A560004359BB6000431DA3610043290B6200438923620043954262004304040C800424A966D52E358D66917975705F1C15AC3AB1AE84F8CF358557E3F576EA8BA1B982AA91432201014622F6CF6212671ADB40F755C450C153E508EE6C320EC475DAC210C3EE95D35321F9F3B15CFC61492B3792341017F65B8C086E121B2DE355CE17BDEAF4076220607F4E2EF37854B156BD226E1794490FB2787176C6BF1FDF4C5F2D2AD224899B07A3E58A9093500A3BA9C96AB6D2D3E2C90A9928A096B3FA3E3A7DE400AB1FFE4053FA667B7DE4D177C10EB79EBB6B95B67D9E975B9D6599FC52008EF028BC290A1AE81D2B872A87DF81C64AF3EA9FB01D19C36784777643DD5912C8F35A77EF5148F5B814A79720D9F4AAC978B523FF3DB443A78C12E2FB9D71DC9F21CD739F1D33020DDF13849D23F23F9582C73C217D9A1100BFCB24F43882700065F0418CF04DF85E119846BA0AF9CE377644AE66D1D70525DF69A612D503B51BDD5AC4C53D39ED2EBBB06A439E8819E631F812CC5A003C05EA7E985D25F4BBA43920BE70190E9B6CC11F1CC786C46CC3C3F2E492B12AA945503AFAF29C0FE8B4FD117198693A514BCD51663F7D36D9EC93661051431FBA6FF40F8F2613A979867B9ADB358E1AC4E4E625089B0FC946E16B06DAAE47F48BCE179CB93E6A774AC68375682CD6687983E636948D9601CC1ED95CDF249630B916495A34515A2843B325378CF7D25BA0E8EEF68B77C9DDE78130EE41ECF472229737BF12A5EF4C0C930420000078014136A702000000020102831083040131246000830404040C8004EBFDD52778D122672FF0AC32B2904ECB6F1BFBC3BE065CF21BF70F18A5613BAD77ADDC6070868C240F6260F83F2BE5A57B577600C0FE4CDBE8D41AB3F49F8C172F6E2ACF4598EA0B6F2627B6D395507DE5E43A0F7CF5B63810752307EC3CC6387B73F9D67513D88EDF07C263794439EF8D5ACCAC186454D8AE40C00C176AB312E4C22FD5CD02CBA6AC3D2BA51FBB41F3D934F967224E624281F3D35CD34B2BAFBB860B5B5EBB3C233E78863349B0735A92CDC3E934A391519B0A758CFE4CC9D4DF5DBB3B9B8A3D18F807A9957A1C3B544BD480B604C641EDA9DBB95A69B12DB50F1C6EAAF8DDCD392F34953691333C5BD28A685F164EE4FE98A0DB3D7C239BC46F46EC5B4CAA6A006E98F4DDEE7834C58A9DC565DB8FD9CAD3195C0D295EB98040BEFCA34FB7B2896BA90D218242BCEF66CAFF0484ADA38CC833E4DAC2751AA40388C8E0B0EC556C1BB4C2BFC0CFF1C48132B6ED106B17652C956CDFC823A5D6B36B032C35249EBF6D79C705F77DD0853B3A978B07C8784F5D8FC02B22F3DF368E43C761C6F08C523A4CB9B7843CA6A0927510C16D0D9BB744208E3247C4934357CD04BF890F6CC593A13149E26A1C5EC8CDECFD1CAFDD421DD51678CDB8BB50DBB3DE37C70A3429EC4719940318BBCFEAE5B85E9BAB56A26DE3B967D1C189FA89C253D5647B6E601B284A2624E6144BFA96B47CB455474BD008D35C354C11334C0C930428000078014236A702000000020102831483040151246000830404040C80042CE7AEDA267B45699937BD7F5752DDA332FF668BF081FD8A5FAD3D79E2C569B68AE4594C2A4FC9492AB8076D1A29D2D448B99DCB588F9BEA00A0A43D068ABDD5CA5E0BE19D9D9B2EF1BD7953F42F81243FDCFC1F1FB893830020DA1425AD9DC11FF322FB0F2CE86F7700E6FC701A7959B56CA6189C07C7BD703FBEC9B75117435763E2DD2CC75308ABAB429F9B1EC234A187A2B9DA9D2AC602D7E0AF9EFD323132332C0FA351364F5BB4AE7475AA1978C9407F91B3255DB975D05F54952B51F35A4E46FF20F2E10512A6D5248F644FD0898882FCE2D17812118DAF8B7F388BD2515C00FC5239275E40BB701BAFD9E8D6FCE40177BD6D3732BC0D6F831AAC33927992572EC53D57123B4CC5D01BCA552CFA715D8DCF72E972925FC8B0C36A3C378A3EC86AF80F44FF45B6961E90087205F180FF794CE0AEDEDF4BEDD061E8DADA0BFBD3D2528A0D3231A3E6B4B8248B9180572E3EF05C04550872CDE576D6952AFCF56C3628F0B81606D5A4CE17520F89CC220BCCFAAA5ABE22E78D5F32B43A9307A67CF2193F5066321F83C25928F7723E97249C6E5E994C17F4A7FB07C1EE1CA1374E749295FD81128A2CE92D465300C10829640E946648FCC5061894F7F6657F040E8C7D26E4D960374BE93EDA45990982D6D65491EC9938F7DE46527A9955200D7B2A3FC23FDD53EE20E1FEC5BFC6D5A9493FE60F24FB7A6C5F38B75C6DE0EF2F300010084200626C6F636B7301000D0108801809010A2808
10085F7065725F6B6579111B008001182400000003160879746573051F053D1121040000090A18020C0873697A65090D010104C07F055530130A430031312E355F68656164111E080101040154010410100A352E344A1D002008000000050F0A36395A1C000C060E0A665A1B0010050F0A3961661C0000615E53000C050F0A645E38000C060E0A625A53001403100A372E314A1D0001E01802120A322E31354E1F00000B09E20437324A3B00011C0C060E0A335A1B002000140A4300322E31395E21000C050F0A626A58004E71010D7300655A730010050F0A6336661C000465635A1C001004100A32306A8B006A700000334EE10101A721E16A380000616AA80000624EC6010D5400665EA6016A8A010C060E0A3766FA000433334EF802016F2DC20433335E3E0041FC5EA70104050F56C502015941C55E8C010C050F0A65CA8B011004100A34306A8C0100336A54005EDF0008060D0A5A37010C050F0A376A6D0000386AF30200396AF90100615EDD011004100A35315EA70108060E0A4E120421692DC20435326A8A0200636A1C0000646AAD000465645A1D015AF60401900C050F0A665E37001004100A36305E4E0276540000315EDD0141A15E1C0141A15EFA011004100A37314E83050DC56E50010433385A180141356AA90000635E19012DC5003762A9010C040F0A395E59000C040F0A615E8E020004569F0601E418020C0A6D657461360C0705010C12A7204C05090C0054709C192018211000009C7054051A01164C01010B100000010118429903070010000000C0E30118A800E3F43D22C8D1130A9C3ED6091A69D30DD3C7EC213868710CFA1854A0101C770B1F838D075893BC0B3713052F10005548B902055604C0FB0511E818ABFD345441DF1563B513B1ABCDB909ACF6709B2B615613D51E18F86CFE8815DA83988FC1A30D0A7B1006F0ABECD61B0F42C37592E5C11D00C0A70540E8089D95A161A9911DA1B3AE65616439187456E5D2B479FD05CEAA22EFC345111459557D282731C20FAB2A633AE6B1931C55E5F6310B598C0600C0FF0540E8958B1D907AE41E0E7231398C8094591F2E26CE7873EC481FC5C4025C2761710598D81D89237E3D0413BBD78435EB9D07627711D3FAFA1D0000C0D30540E8DD25603C623B3F15BB849D343CC9C11A973036C4D81E170D12463B00C081970CC268FE5C1AF0B00E18CD3E1784EF961EA3D95B1C51D2B80100C0032508E8663F3D892656161FECA7743126861913FD9438720A18C40B9F92C803E5491E0D5392295758474A034AB2BB41561D4204497612EFCC22630700C0F30580E8D939F3243B6C7F143BE7B1C70D5B211FE7BC58C588473A169C9737BA44E6931BF3F278CBAC1F941E5E9EF51F391EE117CBB3CF54CC93F10800C0E70540E8CF0EDAFAA7C86610D9C1F3F7A7DBA11D3BF8111E7B3F8F0007BFA467E6583117E017716D4AF7E319FC22BEC13048B7165FC4B9062788A20400C2E30540A811778F23613A2400E26EE96884F50A14DC2DACD53B2A6D1FBB85C76371BAD308B7709B696AC53E1937130125F7550000C24900E8F71F36C0A642BA06FE430BC839E8840D7F68029701B905020FCDA0FE9EEB780BA1994C79239E480F341359221E57FE116622B13AB7B91C0000C2A70580E889649E382A147300914CCF14364678079269511C5FFE8A05322DC12EA3D1FF07A6253FCFD3620C10B4E414E7BAA4A106969C88EA0CC1621200C2FF0540E872C217A0D8549B1E4EF8C9328F1A0516093F32EAEC091B04E1C7C17D24EC8B06FCB8C7ABA33EA61D1FF7FA98F0F14604E3DE8F97C091A90D00C2D30540E87B0F82C60D2B550BEFC112B120C280153DD82AB5BABF681F07DB86EA61BD7A02605BCDCF68620F096CAB496C66848D176D35D3F7677AB20500C2034500F4F002D5FC31794AF8101E9ABFBB5070E22F00F3774A2AF0D3830DFECE4FC958659D1EDFF9AA31034899123BDF45E92A90AA07E73B8B6D62DDBC0000C2F30100000000EFBC39E2BD94CF139DB7CFDD7A01E412F376EB7F4711AA03DEEE83343382B617DB7D216F04366F0ABBAFF6CE2FAF881CF75501A26A2E560700C2E701000000005795C93FF779821BAAB24D1D5AAF151255B6546A44A7BB0ACA162032BE3FC818D902DFBAE7EE3F185B60EF756D20431D0B6C90FE15A92E1B00C4E30100000000B011790AD5B5E60A362277187E51421646E4F331F43D0314887C1B94C44EF007916F3FD4CABF531F37130100000000005548B9020000000000C4FB0100000000B61B3E1085AEBC0476C32C94B10DCF156E98CAD09F179B0E0D53E2AD8E7A851D61CA35AC6058840C4C395630F134C71929C76409734ED41600C4A701000000001C834073D70165196310DAAC9AE604010CC2694D6646C8024138A7E817E279180867F460EF478A1FE18C9AB88398310F9CD1A230C3568B0B00C4FF0100000000464B8FB2ADFBAB0068E9AE102EA6A8172DDD614D6E0CB617A
5BB15DBDC9ABA01743790BDD0E50B0FEE06F8FBF11D2506DD00083CC2A5CD1E00C4D3010000000003D0D4484D4F1B0B001A19417FC511054023235B2398E2136864C4D972A52A1D8D8C0429CCE7C3159111DA3E77686D0732C212DFA10D691700C4030200000000082B36AFE0D8331861C562008123CE01ACD85BBD3E44E00F157B55EBD9A99011622FE07F901F9809EC054BF801C4980CBD6013BC5A1C061E00C4F3010000000082FDD83E1EF90F0AB01F1A49B7DFC01AF6434BD969774C057E682E9947A22E1B0FCDE6BEB1073406A15955D182A19600342B5AAD856B780E00C4E70100000000AC284FD1CF36970C15E5D36532FEDE07A2FC2F5ADCAC350494FFF48C83880612F29FD4F1950A0B06FE93416E686894067F32C943EC55101100C6E30100000000C9B4BF4A7E2599109976F3EE4FFF9202D3EE91427068990CDABDE810C50B601CBB17304304791F00371301000000000055D10000C68900E841 code = 2 Rocksdb transaction: 
Put( Prefix = P key = 0x0000000000001a21'.0000003216.00000000000000120080' Value size = 184)
Put( Prefix = P key = 0x0000000000001a21'._fastinfo' Value size = 186)
Put( Prefix = O key = 0x7f8000000000000002487eb6ba217262'd_data.3f8c957b2525ea.0000000000000007!='0xfffffffffffffffeffffffffffffffff6f00180000'x' Value size = 565)
Put( Prefix = O key = 0x7f8000000000000002487eb6ba217262'd_data.3f8c957b2525ea.0000000000000007!='0xfffffffffffffffeffffffffffffffff'o' Value size = 423)
Put( Prefix = L key = 0x000000000054714c Value size = 4135)
2019-11-08 13:46:00.409 7f2689d7c700 -1 /tmp/release/Ubuntu/WORKDIR/ceph-14.2.4.1/src/os/bluestore/BlueStore.cc: In function 'void BlueStore::_kv_sync_thread()' thread 7f2689d7c700 time 2019-11-08 13:46:00.412888
/tmp/release/Ubuntu/WORKDIR/ceph-14.2.4.1/src/os/bluestore/BlueStore.cc: 10549: FAILED ceph_assert(r == 0)

 ceph version 14.2.4.1 (30ddf1955d1370b5459fa6c0894f6b56d84acb30) nautilus (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x559adac4e524]
 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0x559adac4e6ff]
 3: (BlueStore::_kv_sync_thread()+0xf54) [0x559adb1b43b4]
 4: (BlueStore::KVSyncThread::entry()+0xd) [0x559adb1d692d]
 5: (()+0x76db) [0x7f269ae0b6db]
 6: (clone()+0x3f) [0x7f2699bab88f]

Actions #88

Updated by Marcin W over 4 years ago

Dev guys,

I haven't checked the source code that handles the KV store, but is it possible that some parts use buffered IO and others use async/direct IO? Or is there any setting that allows switching some write operations to buffered mode? I guess SSTs may get corrupted if async/direct IO is mixed with writes going through the page cache. Is it possible that custom IO scheduler settings (switching everything to (mq-)deadline, including relaxed expiry centisecs, writeback ratio, etc.) may cause these issues? I don't know about other Ceph instances, but I do inject custom udev/sysctl settings which steer IO scheduler queues and behavior, hence the questions.

Actions #89

Updated by Igor Fedotov over 4 years ago

Hi Marcin,
there is a bluefs_buffered_io parameter (true by default) which controls buffered vs. direct IO mode. And yes - IIUC a mixture of direct and buffered IO is used in BlueFS when this param is set to true.
Can't say anything about custom I/O scheduler settings though...

Actions #90

Updated by Rafal Wadolowski over 4 years ago

I deployed the version with the new rocksdb and started tests. I will report back with results :)

Actions #91

Updated by Igor Fedotov over 4 years ago

@Rafal, @Krzysztof - am I right that the data for the OSDs from comment #87 isn't available any more, as you deployed a new cluster over it?

Actions #92

Updated by Krzysztof Klimonda over 4 years ago

I'm actually not sure - I see that the cluster has been upgraded to a new 14.2.4 build with the upgraded rocksdb, but osd-75 is still in a failed state and some of the old sst files have the same checksums. There are also new sst files, so I'm assuming the cluster wasn't redeployed completely and it just tried to start those failed OSDs using the new build, but I'll double check with Rafal tomorrow.

If that's the case, is it still of any use to you, and if so, what exactly are you looking for?

Actions #93

Updated by Igor Fedotov over 4 years ago

@Krzysztof - what I'd like to do for the broken OSDs is:
a) (simple case) export the bluefs log using ceph-bluestore-tool's bluefs-log-dump command
b) (preferable, but needs another custom build) similar to a), but using a custom-built ceph-bluestore-tool (and relevant libs) containing this patch: https://github.com/ceph/ceph/pull/31513 This will look for bluefs allocation inconsistencies, which potentially might be the root cause of the data overwrite (which I can see in the previous dumps).

Actions #94

Updated by Rafal Wadolowski over 4 years ago

@Igor Gajowiak the version with the upgraded rocksdb has broken two OSDs so far. Logs from one of them:

2019-11-11 16:05:18.832 7f78ba34b700 -1 /tmp/release/Ubuntu/WORKDIR/ceph-14.2.4.2/src/kv/RocksDBStore.cc: In function 'virtual int RocksDBStore::get(const string&, const string&, ceph::bufferlist*)' thread 7f78ba34b700 time 2019-11-11 16:05:18.831854
/tmp/release/Ubuntu/WORKDIR/ceph-14.2.4.2/src/kv/RocksDBStore.cc: 1209: ceph_abort_msg("block checksum mismatch: expected 4127118766, got 1807936612  in db/002375.sst offset 6009078 size 3898 prefetch 0 footer checksum: 1
  metaindex handle: 9786F70556
  index handle: B2FCF005848406
  footer version: 2
  table_magic_number: 9863518390377041911
   data: 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
 ceph version 14.2.4.2 (dde035e15e656c97e328422754187d33665edca8) nautilus (stable)
 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xdf) [0x5571996dd5de]
 2: (RocksDBStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, ceph::buffer::v14_2_0::list*)+0x3ce) [0x557199c8ef4e]
 3: (()+0x9b1f99) [0x557199bc8f99]
 4: (()+0x99e0d1) [0x557199bb50d1]
 5: (BlueStore::ExtentMap::fault_range(KeyValueDB*, unsigned int, unsigned int)+0x2aa) [0x557199bceafa]
 6: (BlueStore::_do_write_small(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>, unsigned long, unsigned long, ceph::buffer::v14_2_0::list::iterator&, BlueStore::WriteContext*)+0x1c5) [0x557199c1bf45]
 7: (BlueStore::_do_write_data(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>, unsigned long, unsigned long, ceph::buffer::v14_2_0::list&, BlueStore::WriteContext*)+0x22a) [0x557199c1f45a]
 8: (BlueStore::_do_write(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>, unsigned long, unsigned long, ceph::buffer::v14_2_0::list&, unsigned int)+0x279) [0x557199c26a69]
 9: (BlueStore::_write(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>&, unsigned long, unsigned long, ceph::buffer::v14_2_0::list&, unsigned int)+0xda) [0x557199c2798a]
 10: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)+0xf85) [0x557199c2aca5]
 11: (BlueStore::queue_transactions(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x5ee) [0x557199c30c2e]
 12: (non-virtual thunk to PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x54) [0x55719997e914]
 13: (ReplicatedBackend::do_repop(boost::intrusive_ptr<OpRequest>)+0xb09) [0x557199a7e719]
 14: (ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x1a7) [0x557199a8ca77]
 15: (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x97) [0x557199991b57]
 16: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x705) [0x557199940665]
 17: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x1bf) [0x55719976459f]
 18: (PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x62) [0x557199a178c2]
 19: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xbf5) [0x5571997824b5]
 20: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x4ac) [0x557199db092c]
 21: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x557199db3ae0]
 22: (()+0x76db) [0x7f78dbbfb6db]
 23: (clone()+0x3f) [0x7f78da99b88f]

2019-11-11 16:05:18.844 7f78ba34b700 -1 *** Caught signal (Aborted) **
 in thread 7f78ba34b700 thread_name:tp_osd_tp

Exported SST: https://cf2.cloudferro.com:8080/swift/v1/AUTH_5b9ea421deb745bfb4dab930cebe153f/ceph-sharings/002375.sst
BlueFS log dump: https://cf2.cloudferro.com:8080/swift/v1/AUTH_5b9ea421deb745bfb4dab930cebe153f/ceph-sharings/osd-35-bluefs-log-dump

I will build another version and deploy it on the cluster

Actions #95

Updated by Igor Fedotov over 4 years ago

Thanks, Rafal! One clarification though - did you redeploy all the OSDs before running Ceph with the latest RocksDB? I.e. are all OSDs created with the new code?
This might be important as there is no guarantee that the persistent data of the earlier OSDs is still consistent. There is some chance you're facing data corruption that happened before, if the OSDs weren't cleaned up completely...

Certainly the above doesn't apply to minor changes like logging enhancements or allocation consistency checking...

Actions #96

Updated by Rafal Wadolowski over 4 years ago

I just upgraded the version, so they were created with 14.2.3.
So do you suggest redeploying the cluster?

Actions #97

Updated by Igor Fedotov over 4 years ago

Rafal Wadolowski wrote:

I just upgraded the version, so they were created with 14.2.3.
So do you suggest redeploying the cluster?

First I suggest applying https://github.com/ceph/ceph/pull/31513 and doing a bluefs-log-dump to verify that the allocations are consistent (assuming you haven't applied it for the previous dump, have you?)
And then, yes, redeploy the cluster to be absolutely sure that the OSD data is fresh and doesn't suffer from the influence of any previous DB releases.

Actions #98

Updated by Igor Fedotov over 4 years ago

@Rafal, is osd.35 still available? If so, could you please run ceph-bluestore-tool:
CEPH_ARGS="--debug-bluefs 10 --log-file out.log" ceph-bluestore-tool --path <path-to-osd> --command bluefs-log-dump

and grep the resulting out.log for "super.uuid"

Also, please postpone the cluster redeployment if it hasn't started yet - I've got some suspicious stuff that might need additional verification...

Actions #99

Updated by Rafal Wadolowski over 4 years ago

Yes, it is available - I didn't redeploy the failed OSDs

# grep "super.uuid" out.log 
2019-11-13 11:23:46.497 7f7742430140 10 bluefs _replay 0x288000: stop: uuid 04336662-3465-6238-3833-36346536332e != super.uuid aa66917f-55bb-4b2b-bdff-9bea2186a439
2019-11-13 11:23:46.649 7f7742430140 10 bluefs _replay 0x288000: stop: uuid 04336662-3465-6238-3833-36346536332e != super.uuid aa66917f-55bb-4b2b-bdff-9bea2186a439
2019-11-13 11:23:55.665 7f9d4fd18140 10 bluefs _replay 0x288000: stop: uuid 04336662-3465-6238-3833-36346536332e != super.uuid aa66917f-55bb-4b2b-bdff-9bea2186a439
2019-11-13 11:23:55.673 7f9d4fd18140 10 bluefs _replay 0x288000: stop: uuid 04336662-3465-6238-3833-36346536332e != super.uuid aa66917f-55bb-4b2b-bdff-9bea2186a439
2019-11-13 11:24:05.390 7f0dafdbf140 10 bluefs _replay 0x288000: stop: uuid 04336662-3465-6238-3833-36346536332e != super.uuid aa66917f-55bb-4b2b-bdff-9bea2186a439
2019-11-13 11:24:05.398 7f0dafdbf140 10 bluefs _replay 0x288000: stop: uuid 04336662-3465-6238-3833-36346536332e != super.uuid aa66917f-55bb-4b2b-bdff-9bea2186a439

I can postpone the redeployment; tell me if you want me to test a version with https://github.com/ceph/ceph/pull/31513

Actions #100

Updated by Igor Fedotov over 4 years ago

@Rafal - great, thank you. Now I can tell that the data extents in this file which are supposed to be RocksDB data actually belong to the BlueFS log (some outdated part of it). One can see this data in the [4MB-8MB] range. One of the proofs is the uuid value (aa66917f-55bb-4b2b-bdff-9bea2186a439) repeated every 4K.

Which makes the hypothesis about a buggy new RocksDB a bit less valid...
Going to dissect the existing sst for a bit and check the differences in BlueFS log handling between 14.2.3 and .4.

Anyway, please try https://github.com/ceph/ceph/pull/31513 for this osd. I've done some manual allocation consistency checking and it looks good, but re-running this in an automatic manner is useful anyway.

Also, if the other broken OSD is still available (you mentioned two of them in #94) - please share the corresponding bluefs log dump, OSD log and sst file.

Actions #101

Updated by Igor Fedotov over 4 years ago

Hopefully, here is the final analysis of the root cause.

a) Analysis of the 002375.sst dump from comment #94 reveals that the file contains a pretty regular structure at the offset where the checksum failure is detected, e.g. the following hex sequence is repeated from offset 4M to offset 8M:
01 01 XX 00 00 00 AA 66 91 7f 55....
Non-zero blocks are aligned at 4K boundaries, take from 60 to 100 bytes, and definitely have some increasing counters inside. Actually it looks like a portion of an (earlier?) bluefs log.
The final proof of this hypothesis is the match between the bluefs uuid Rafal shared and a substring in this hex dump.
So it's the bluefs log which presumably overwrites the DB data.
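(For anyone who wants to repeat this kind of check on their own sst dumps, here is a minimal, hypothetical C++ sketch - not part of any Ceph tool - that scans a file for a byte pattern near 4K-aligned offsets; the pattern bytes are placeholders taken from the uuid quoted above, and the 128-byte search window is an assumption.)

// Hypothetical helper, not part of Ceph: scan a raw .sst dump for a byte
// pattern (e.g. the first bytes of the bluefs superblock uuid) occurring
// near 4K-aligned offsets.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <fstream>
#include <iterator>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s <sst-file>\n", argv[0]);
        return 1;
    }
    // Assumption: the uuid is stored as raw bytes near the start of every
    // 4K-aligned bluefs log record (aa66917f-55bb-...).
    const unsigned char pattern[] = {0xaa, 0x66, 0x91, 0x7f, 0x55, 0xbb};
    std::ifstream in(argv[1], std::ios::binary);
    std::vector<unsigned char> data((std::istreambuf_iterator<char>(in)),
                                    std::istreambuf_iterator<char>());
    const std::size_t block = 4096, window = 128;
    for (std::size_t off = 0; off + block <= data.size(); off += block) {
        // Look for the pattern anywhere within the first bytes of the block.
        for (std::size_t i = 0; i + sizeof(pattern) <= window; ++i) {
            if (std::memcmp(&data[off + i], pattern, sizeof(pattern)) == 0) {
                std::printf("pattern found at offset 0x%zx (+%zu)\n", off, i);
                break;
            }
        }
    }
    return 0;
}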

b) Checking the bluefs changes introduced with v14.2.3 reveals (among a few others) the following commit:
https://github.com/ceph/ceph/commit/096033b9d931312c0688c2eea7e14626bfde0ad7#diff-618db1d3389289a9d25840a4500ef0b0

which actually has a bug when deciding which extents are allowed to merge - it shouldn't proceed with consolidation if the extents to consolidate are on non-matching devices. So it's theoretically possible that the log starts to use some extents from an invalid device.
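As an illustration only (this is a simplified sketch of the consolidation logic, not the exact Ceph source; type and field names are abbreviated), the merge check looks roughly like this - the buggy variant folds two physically adjacent extents together even when they live on different block devices, which is exactly the case the fix guards against:

// Simplified sketch of the bluefs_fnode_t::append_extent() consolidation
// (illustrative, not the exact Ceph code).
#include <cstdint>
#include <vector>

struct extent_sketch {
    uint8_t  bdev;    // block device id (e.g. WAL vs DB)
    uint64_t offset;  // physical offset on that device
    uint32_t length;
    uint64_t end() const { return offset + length; }
};

struct fnode_sketch {
    std::vector<extent_sketch> extents;

    void append_extent(const extent_sketch& ext) {
        if (!extents.empty() &&
            extents.back().end() == ext.offset
            /* the buggy version stops here; the fix additionally requires
               extents.back().bdev == ext.bdev before merging */) {
            // Without the device check, an extent allocated on one device can
            // be folded into the tail extent of another device. The fnode then
            // claims a range on the first device that was never allocated
            // there, and subsequent log writes overwrite foreign data (here:
            // RocksDB sst blocks).
            extents.back().length += ext.length;
        } else {
            extents.push_back(ext);
        }
    }
};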

c) Now let's check the bluefs log dump from comment #94.
For ino 1 (i.e. the bluefs log file) it has the following extents:
log_fnode file(ino 1 size 0x100000 mtime 0.000000 bdev 0 allocated 500000 extents [1:0x8400000~100000,0:0x6200000~400000])

Please note device 1 for 1:0x8400000~100000, and device 0 for 0:0x6200000~400000
So it's possible that two different devices (db and wal) are used for the log file. And IMO this is caused by the BlueFS::_compact_log_async implementation, which has two allocations:
1)
dout(10) << __func__ << " old_log_jump_to 0x" << std::hex << old_log_jump_to
         << " need 0x" << (old_log_jump_to + cct->_conf->bluefs_max_log_runway) << std::dec << dendl;
int r = _allocate(log_file->fnode.prefer_bdev,
                  cct->_conf->bluefs_max_log_runway, &log_file->fnode);
which tends to land on the WAL device

2) r = _allocate(BlueFS::BDEV_DB, new_log_jump_to,
                 &new_log->fnode);
which goes to BDEV_DB

and the DB device extent is actually put at the top of the extents list, with the WAL device extents appended later - via bluefs_fnode_t::append_extent(), which has the improper consolidation!

So it's quite possible that sometimes such consolidation results in an extent which spans an invalid range on the DB device.
That's not the case for the current bluefs log of osd-35, but it could have happened earlier and overwritten part of the DB data.

I've seen hex sequences similar to the log file content in some other sst dumps available in this ticket too. Haven't performed a similar analysis for them though...

So I think now is a good moment to verify the fix from https://github.com/ceph/ceph/pull/31621 in the field.
@Rafal, @Krzysztof?

Actions #102

Updated by Marcin W over 4 years ago

@Igor Gajowiak,

Well done!
Would it also explain the problem I had with 003808.sst (30MB), where all the others were 16MB, as they were supposed to be according to the RocksDB settings? My problems with crashing OSDs disappeared after lowering min_write_buffer_number_to_merge to 2.

Actions #103

Updated by Igor Fedotov over 4 years ago

@Marcin Garski,
003808.sst has the same bluefs log pattern inside, at e.g. offset 5687664. So it seems it has been overwritten by the bluefs log as well, hence my analysis applies there too.

This doesn't explain its increased size though.
And reducing the parameter(s) could simply lower the probability of the issue appearing.

Actions #104

Updated by Igor Fedotov over 4 years ago

  • Has duplicate Bug #42284: fastbmap_allocator_impl.h: FAILED ceph_assert(available >= allocated) in upgrade:luminous-x-nautilus added
Actions #105

Updated by Igor Fedotov over 4 years ago

  • Has duplicate Bug #42683: OSD Segmentation fault added
Actions #106

Updated by Sage Weil over 4 years ago

  • Status changed from In Progress to Pending Backport
Actions #107

Updated by Sage Weil over 4 years ago

  • Status changed from Pending Backport to Resolved
Actions #108

Updated by Igor Fedotov over 4 years ago

  • Pull request ID set to 31621
Actions #109

Updated by Tobias Fischer over 4 years ago

Hi Igor,
looks like you fixed the issue :) Nevertheless, I have logs and sst files of 2 new OSDs if you need further confirmation: https://storage.clyso.com/s/wiSRFZfLKWQHLzi

Actions #110

Updated by Igor Fedotov over 4 years ago

Tobias, thanks for the info. Hopefully we won't need it any more.

Please let us know if the fix helps...

Actions #111

Updated by Rafal Wadolowski over 4 years ago

After 24h of testing, we can confirm that the fix is helping.
Good job Igor!

Actions #112

Updated by Igor Fedotov over 4 years ago

  • Has duplicate Bug #43211: Bluestore OSDs don't start after upgrade to 14.2.4 added
Actions #113

Updated by Marcin W over 4 years ago

Due to this bug, I had to mitigate it by setting the RocksDB param min_write_buffer_number_to_merge to 2. Since the 14.2.5 release, I can set it to half of max_write_buffer_number without issues. So far it has worked for longer than 24h, and the meta devices are no longer struggling with continuous merging :)

@Igor Gajowiak at al,

Thank you for your help.

Actions #114

Updated by Telemetry Bot almost 3 years ago

  • Related to Bug #51753: crash: void BlueStore::_kv_sync_thread(): assert(r == 0) added