Bug #15600
Closed — rocksdb stops working correctly after 6 hours of bluestore writes
Description
Found this bug on the jewel release, 10.2.0.
Running fio_rbd with 100% 4KB writes, rocksdb reported the error message below after 6 hours:
...
2016-04-25 13:41:09.230511 7f1e66e18700 4 rocksdb: (Original Log Time 2016/04/25-13:41:09.230485) EVENT_LOG_v1 {"time_micros": 1461616869230478, "job": 206964, "event": "compaction_finished", "compaction_time_micros": 49240, "output_level": 2, "num_output_files": 3, "total_output_size": 4770145, "num_input_records": 2524, "num_output_records": 2344, "num_subcompactions": 1, "lsm_state": [1, 10, 60, 143, 0, 0, 0]}
2016-04-25 13:41:09.230917 7f1e66e18700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1461616869230914, "job": 206964, "event": "table_file_deletion", "file_number": 1187734}
2016-04-25 13:41:09.231082 7f1e66e18700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1461616869231080, "job": 206964, "event": "table_file_deletion", "file_number": 1187521}
2016-04-25 13:41:09.231327 7f1e66e18700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1461616869231326, "job": 206964, "event": "table_file_deletion", "file_number": 1187520}
2016-04-25 13:41:09.231340 7f1e66e18700 2 rocksdb: Waiting after background compaction error: IO error: /home/ceph_user/my_cluster/ceph-deploy/osd/myosddata/db/1187753.log: No such file or directory, Accumulated background error counts: 1
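For reference, the workload described above (100% 4KB writes through fio's rbd engine) might be reproduced with a job file along these lines; the pool name, image name, client name, and queue depth are assumptions, not taken from the original report:

```ini
; sketch of a fio job for sustained 4KB random writes against an rbd image
[global]
ioengine=rbd          ; fio's librbd engine
clientname=admin      ; assumed ceph client name
pool=rbd              ; assumed pool containing the test image
rbdname=testimg       ; assumed pre-created rbd image
bs=4k                 ; 4KB block size, as in the report
rw=randwrite          ; 100% writes
iodepth=32            ; assumed queue depth
direct=1
time_based=1
runtime=21600         ; 6 hours, matching the time-to-failure described

[rbd-4k-randwrite]
```

The 6-hour runtime mirrors the point at which the reporter observed the rocksdb compaction error.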