Ceph : Issues (https://tracker.ceph.com/, 2023-12-06T11:03:27Z)
bluestore - Feature #63739 (New): We might need some clarification/rework to properly locate serv...
https://tracker.ceph.com/issues/63739 (2023-12-06T11:03:27Z, Igor Fedotov <igor.fedotov@croit.io>)
In some cases - primarily after some recovery and DB exporting - we might need to run RocksDB on top of a regular filesystem. In this scenario the location of the non-DB files, namely sharding/* and ALLOCATOR_NCB_DIR/*, becomes unclear, since no root path is provided and relative paths are used throughout the code. This is unlike the DB itself, which still gets the OSD primary dir as its root when opened.

bluestore - Cleanup #63612 (Fix Under Review): get-attrs,set-attrs and rm-attrs option details ar...
https://tracker.ceph.com/issues/63612 (2023-11-22T15:31:48Z, Laura Flores)
Description of problem:

The options that exist in the troubleshooting guide for getting, setting and removing object attribute keys are not included in the ceph-objectstore-tool (COT) help.
```
ceph-objectstore-tool --help
Allowed options:
--help produce help message
--type arg Arg is one of [bluestore (default), filestore,
memstore]
--data-path arg path to object store, mandatory
--journal-path arg path to journal, use if tool can't find it
--pgid arg PG id, mandatory for info, log, remove, export,
export-remove, mark-complete, trim-pg-log,
trim-pg-log-dups and mandatory for
apply-layout-settings if --pool is not specified
--pool arg Pool name, mandatory for apply-layout-settings if
--pgid is not specified
--op arg Arg is one of [info, log, remove, mkfs, fsck,
repair, fuse, dup, export, export-remove, import,
list, list-slow-omap, fix-lost, list-pgs,
dump-journal, dump-super, meta-list, get-osdmap,
set-osdmap, get-inc-osdmap, set-inc-osdmap,
mark-complete, reset-last-complete,
apply-layout-settings, update-mon-db,
dump-export, trim-pg-log, trim-pg-log-dups
statfs]
--epoch arg epoch# for get-osdmap and get-inc-osdmap, the
current epoch in use if not specified
--file arg path of file to export, export-remove, import,
get-osdmap, set-osdmap, get-inc-osdmap or
set-inc-osdmap
--mon-store-path arg path of monstore to update-mon-db
--fsid arg fsid for new store created by mkfs
--target-data-path arg path of target object store (for --op dup)
--mountpoint arg fuse mountpoint
--format arg (=json-pretty) Output format which may be json, json-pretty,
xml, xml-pretty
--debug Enable diagnostic output to stderr
--no-mon-config Do not contact mons for config
--no-superblock Do not read superblock
--force Ignore some types of errors and proceed with
operation - USE WITH CAUTION: CORRUPTION POSSIBLE
NOW OR IN THE FUTURE
--skip-journal-replay Disable journal replay
--skip-mount-omap Disable mounting of omap
--head Find head/snapdir when searching for objects by
name
--dry-run Don't modify the objectstore
--tty Treat stdout as a tty (no binary data)
--namespace arg Specify namespace when searching for objects
--rmtype arg Specify corrupting object removal 'snapmap' or
'nosnapmap' - TESTING USE ONLY
--slow-omap-threshold arg Threshold (in seconds) to consider omap listing
slow (for op=list-slow-omap)
```
Positional syntax:
```
ceph-objectstore-tool ... <object> (get|set)-bytes [file]
ceph-objectstore-tool ... <object> set-(attr|omap) <key> [file]
ceph-objectstore-tool ... <object> (get|rm)-(attr|omap) <key>
ceph-objectstore-tool ... <object> get-omaphdr
ceph-objectstore-tool ... <object> set-omaphdr [file]
ceph-objectstore-tool ... <object> list-attrs
ceph-objectstore-tool ... <object> list-omap
ceph-objectstore-tool ... <object> remove|removeall
ceph-objectstore-tool ... <object> dump
ceph-objectstore-tool ... <object> set-size
ceph-objectstore-tool ... <object> clear-data-digest
ceph-objectstore-tool ... <object> remove-clone-metadata <cloneid>
```
<object> can be a JSON object description as displayed by --op list.
<object> can be an object name which will be looked up in all the OSD's PGs.
<object> can be the empty string ('') which with a provided pgid specifies the pgmeta object.
The optional [file] argument will read stdin or write stdout if not specified or if '-' is specified.
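For context, a hedged sketch of how the attr operations missing from the help are invoked in practice (the data path, pgid, object name and attribute key below are illustrative placeholders, and the OSD must be stopped before COT can open its store):

```
# Placeholders throughout: adjust --data-path, --pgid, object and key.
OSD=/var/lib/ceph/osd/ceph-0
PGID=2.4
OBJ=object1    # an object name, looked up across the OSD's PGs

ceph-objectstore-tool --data-path "$OSD" --pgid "$PGID" "$OBJ" list-attrs
ceph-objectstore-tool --data-path "$OSD" --pgid "$PGID" "$OBJ" get-attr snapset > snapset.bin
ceph-objectstore-tool --data-path "$OSD" --pgid "$PGID" "$OBJ" set-attr snapset snapset.bin
ceph-objectstore-tool --data-path "$OSD" --pgid "$PGID" "$OBJ" rm-attr snapset
```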
Steps to Reproduce:
1. Configure a cluster
2. Get into the OSD container and check the ceph-objectstore-tool help

bluestore - Cleanup #62850 (Fix Under Review): warning: void _denc_finish() defined but not used
https://tracker.ceph.com/issues/62850 (2023-09-15T18:19:59Z, Laura Flores)
Seen on the latest main SHA (01ef9e5e91e73422cf11f9b49d06815e4ed75c0d):

```
In file included from ../src/include/encoding.h:41,
from ../src/include/compact_map.h:16,
from ../src/include/mempool.h:32,
from ../src/os/bluestore/bluestore_types.h:23,
from ../src/os/bluestore/bluefs_types.h:8,
from ../src/os/bluestore/bluefs_types.cc:5:
../src/include/denc.h:1826:15: warning: ‘void _denc_finish(ceph::buffer::v15_2_0::ptr::const_iterator&, __u8*, __u8*, char**, uint32_t*)’ defined but not used [-Wunused-function]
1826 | static void _denc_finish(::ceph::buffer::ptr::const_iterator& p, \
| ^~~~~~~~~~~~
../src/include/denc.h:1826:15: note: in definition of macro ‘DENC_HELPERS’
1826 | static void _denc_finish(::ceph::buffer::ptr::const_iterator& p, \
| ^~~~~~~~~~~~
../src/include/denc.h:1816:15: warning: ‘void _denc_start(ceph::buffer::v15_2_0::ptr::const_iterator&, __u8*, __u8*, char**, uint32_t*)’ defined but not used [-Wunused-function]
1816 | static void _denc_start(::ceph::buffer::ptr::const_iterator& p, \
| ^~~~~~~~~~~
../src/include/denc.h:1816:15: note: in definition of macro ‘DENC_HELPERS’
1816 | static void _denc_start(::ceph::buffer::ptr::const_iterator& p, \
| ^~~~~~~~~~~
../src/include/denc.h:1807:15: warning: ‘void _denc_finish(ceph::buffer::v15_2_0::list::contiguous_appender&, __u8*, __u8*, char**, uint32_t*)’ defined but not used [-Wunused-function]
1807 | static void _denc_finish(::ceph::buffer::list::contiguous_appender& p, \
| ^~~~~~~~~~~~
../src/include/denc.h:1807:15: note: in definition of macro ‘DENC_HELPERS’
1807 | static void _denc_finish(::ceph::buffer::list::contiguous_appender& p, \
| ^~~~~~~~~~~~
../src/include/denc.h:1797:15: warning: ‘void _denc_start(ceph::buffer::v15_2_0::list::contiguous_appender&, __u8*, __u8*, char**, uint32_t*)’ defined but not used [-Wunused-function]
1797 | static void _denc_start(::ceph::buffer::list::contiguous_appender& p, \
| ^~~~~~~~~~~
../src/include/denc.h:1797:15: note: in definition of macro ‘DENC_HELPERS’
1797 | static void _denc_start(::ceph::buffer::list::contiguous_appender& p, \
| ^~~~~~~~~~~
```
Reproduce by running:

```
ninja osd
```
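As context, a minimal standalone reproduction of this class of warning and one possible suppression; this is illustrative C++, not Ceph's actual DENC_HELPERS macro, and the fix under review may differ:

```cpp
// Compile with: g++ -Wall -c repro.cc
// A macro that defines file-scope static helpers warns in every
// translation unit that expands it but never calls the helpers.
#define HELPERS                                      \
  static void _start_like(int&) {}                   \
  static void _finish_like(int&) {}

HELPERS  // expanded, never called: -Wunused-function fires twice

// One possible fix (an assumption, not necessarily the one under
// review): mark the generated functions [[maybe_unused]] so unused
// expansions are no longer diagnosed.
#define HELPERS_FIXED                                \
  [[maybe_unused]] static void _start_like2(int&) {} \
  [[maybe_unused]] static void _finish_like2(int&) {}

HELPERS_FIXED  // expanded, never called: no warning
```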
Not sure if it matters, but this was on CentOS Stream 8.

bluestore - Feature #62500 (New): Warn on possible disk failures or high load on disks (OSDs)
https://tracker.ceph.com/issues/62500 (2023-08-21T15:22:48Z, Ponnuvel P)
In a user environment, we recently hit an issue where a bad disk caused the OSD to log the following:

```
2023-07-13T15:17:15.149+0000 7f0681050d80 -1 bdev(0x55d05eff8380 /var/lib/ceph/osd/ceph-175/block) read stalled read 0x29f40370000~100000 (buffered) since 63410177.290546s, timeout is 5.000000s
```
However, this wasn't spotted for weeks, as there's no discernible warning (a health warning or info in 'ceph health detail', for example). This led to degraded performance in the cluster before the bad disk was identified as the cause.
I think we could make this a health warning, so that users are warned of such issues in time (i.e. it appears in `ceph -s` and `ceph health detail`).
There are also similar issues reported by BlueStore, such as (the list isn't comprehensive):

```
2023-07-14T03:31:00.715+0000 7fd75bb70700 0 bluestore(/var/lib/ceph/osd/ceph-175) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.028621219s, txc = 0x55a107c30f00
[..] log_latency_fn slow operation observed for upper_bound, latency = 6.25955s [..]
[..] log_latency slow operation observed for submit_transact
```
These are also potential candidates for the same warning, as they could imply bad disks.
The BlueStore messages can also occur when the OSDs are heavily loaded, when RocksDB compaction stalls the disk temporarily, or even after random SCSI resets, which can affect OSDs even when the disk being affected on an OSD node isn't part of the Ceph cluster. So there's a possibility of reporting false positives, which needs to be considered carefully.
We could have a threshold: if an OSD hits these conditions, say, x times in a period y, then a warning would still be useful, as it suggests there are fundamental load issues that need the admin's attention even if there's no bad disk as such.
But overall I see the benefit of having this info reported to the user in the health status, even if it means adding two types of new warnings, such as:
1. bad disk
2. OSD is slow/overloaded/node-underconfigured

A sketch of the threshold idea follows below.
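A minimal sketch of the "x times in period y" idea, assuming a simple sliding window; the class and method names are invented for illustration, this is not existing Ceph code:

```cpp
#include <chrono>
#include <deque>

class StallWarningTracker {
  using clock = std::chrono::steady_clock;
  std::deque<clock::time_point> events_;
  size_t max_events_;            // x: events tolerated per window
  std::chrono::seconds window_;  // y: sliding window length
public:
  StallWarningTracker(size_t x, std::chrono::seconds y)
    : max_events_(x), window_(y) {}

  // Record one stalled-read / slow-op event; return true when the
  // threshold is crossed and a health warning should be raised.
  bool record_event() {
    auto now = clock::now();
    events_.push_back(now);
    // Drop events that have aged out of the window.
    while (!events_.empty() && now - events_.front() > window_)
      events_.pop_front();
    return events_.size() >= max_events_;
  }
};
```

A real implementation would presumably feed this from the existing stalled-read and log_latency paths and raise or clear the warning through the usual health reporting.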
I am looking to implement this. Raising this ticket to track the work as well as to solicit suggestions/opinions. Thanks.

bluestore - Feature #62371 (New): Copies of superblock(s)
https://tracker.ceph.com/issues/62371 (2023-08-09T08:44:45Z, Adam Kupczyk)
Currently there is only one BlueStore superblock, and only one BlueFS superblock. There should be multiple copies of the superblocks.

When one superblock is unreadable or corrupted, another one could be selected. The locations of the superblocks should be fixed.
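A minimal sketch of the fallback idea under those assumptions; the offsets, names, and helpers below are invented for illustration and are not the actual BlueStore on-disk layout:

```cpp
#include <cstdint>
#include <optional>

struct Superblock { uint64_t seq; /* ... */ };

// Stubs standing in for real device reads and checksum verification.
std::optional<Superblock> read_superblock_at(uint64_t offset);
bool checksum_ok(const Superblock& sb);

std::optional<Superblock> load_superblock() {
  // Fixed, well-known offsets, so a stand-alone repair tool can always
  // locate every copy without any other metadata.
  const uint64_t copies[] = {0x0, 0x10000, 0x20000};
  for (uint64_t off : copies) {
    auto sb = read_superblock_at(off);
    if (sb && checksum_ok(*sb))
      return sb;  // first readable, uncorrupted copy wins
  }
  return std::nullopt;  // every copy unreadable or corrupted
}
```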
bluestore - Feature #59060 (New): Make alloc_size persistent
https://tracker.ceph.com/issues/59060 (2023-03-14T12:59:49Z, Adam Kupczyk)

Currently we use the ceph config to retrieve min_alloc_size for the block device. This value should be persisted, so that even a stand-alone ceph-bluestore-tool could operate properly.

There are two flavours of min_alloc_size for block:
1) the AU size that is used by core BlueStore
2) the AU size that is used by BlueFS to hold files when allocating from block

Both should be part of the superblock info.
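For illustration only, a hedged sketch of the two persisted values; the struct and field names are invented and are not the actual superblock layout (though bluefs_shared_alloc_size does exist as a config option):

```cpp
#include <cstdint>

// Hypothetical persisted allocation-unit info, not BlueStore's real layout.
struct superblock_alloc_info {
  uint64_t bluestore_min_alloc_size;  // 1) AU used by core BlueStore
  uint64_t bluefs_shared_alloc_size;  // 2) AU used by BlueFS when
                                      //    allocating from block
};
```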
bluestore - Fix #58759 (New): BlueFS log runway space exhausted
https://tracker.ceph.com/issues/58759 (2023-02-17T12:32:45Z, Adam Kupczyk)

In BlueFS::_flush_and_sync_log_core we have the following data integrity check:
ceph_assert(bl.length() <= runway);

It is there because it is unacceptable to write a transaction larger than the currently available runway. If we did so, there would be no good way to get the data back (we have the _do_replay_recovery_read() heuristic, but it requires a lengthy recovery).

The solution could be that if we have less runway than the transaction needs, we inject a log-extending transaction first.
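A rough pseudocode sketch of that proposal; Transaction and the helpers below stand in for the real BlueFS machinery and are invented for illustration:

```cpp
#include <cassert>
#include <cstdint>

struct Transaction { uint64_t len = 0; uint64_t length() const { return len; } };

uint64_t available_log_runway();              // bytes left in the log runway
Transaction make_log_extension_transaction(); // allocates more runway
void write_log(const Transaction& t);

void flush_and_sync_log_core(Transaction& t) {
  uint64_t runway = available_log_runway();
  if (t.length() > runway) {
    // Not enough runway for t: first commit a small transaction whose only
    // job is to extend the log. The extension must itself fit the runway.
    Transaction extend = make_log_extension_transaction();
    assert(extend.length() <= runway);
    write_log(extend);
    runway = available_log_runway();  // now enlarged
  }
  // The ticket's invariant still holds at write time.
  assert(t.length() <= runway);
  write_log(t);
}
```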
It has been almost impossible before, but these commits help:
https://github.com/ceph/ceph/pull/42750 "incremental update"
https://github.com/ceph/ceph/pull/48854 "4K bluefs"

bluestore - Feature #57785 (New): fragmentation score in metrics
https://tracker.ceph.com/issues/57785 (2022-10-06T18:17:57Z, Kevin Fox)
Currently the bluestore fragmentation score does not seem to be exported in metrics. Due to the issue described in https://tracker.ceph.com/issues/57672, it would be really nice to have that metric available so it can be acted upon by a metrics/alerting system.
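Until it is exported, the score can at least be fetched by hand; a sketch of the two usual routes (osd.0 and the path are placeholders):

```
# Online, via the admin socket of a running OSD:
ceph daemon osd.0 bluestore allocator score block

# Offline, with the OSD stopped:
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 free-score
```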
bluestore - Documentation #57762 (New): documentation about same hardware class wrong
https://tracker.ceph.com/issues/57762 (2022-10-04T17:38:40Z, Kevin Fox)

The documentation in at least one place, https://docs.ceph.com/en/pacific/man/8/ceph-bluestore-tool/ (bluefs-bdev-migrate), says that:

> if source list has slow volume only - operation isn't permitted, requires explicit allocation via new-db/new-wal command.
The Rook documentation makes a similar claim that the hardware types can't be the same.

I spent a couple of days trying to figure out how to work around it, but when I tried just doing it, it worked. So maybe the documentation is wrong, or it is confusing enough that I misinterpreted it.
With rook, this works:

```
- name: "minikube-m03"
  devices:
  - name: "vdb"
    config:
      metadataDevice: db/db1
  - name: "vdc"
    config:
      metadataDevice: db/db2
```
where vdb and db/db1 are both on HDDs. I also tried it with ssd/ssd and it works OK there too.
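For reference, a hedged invocation sketch of the command the cited man page describes (the path and devices are placeholders, and the OSD must be stopped first):

```
# Migrate BlueFS data from the listed source device(s) to the target:
ceph-bluestore-tool bluefs-bdev-migrate \
  --path /var/lib/ceph/osd/ceph-0 \
  --devs-source /var/lib/ceph/osd/ceph-0/block \
  --dev-target /dev/db/db1
```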
bluestore - Feature #55608 (Fix Under Review): Use swap for deferred_stable when not bluefs_layou...
https://tracker.ceph.com/issues/55608 (2022-05-11T08:48:01Z, yunqing wang)

When bluefs_layout.single_shared_device() is false and deferred_done is not empty, deferred_stable_queue is always empty, which makes deferred_stable empty as well. So we can use swap instead of insert when moving the keys of deferred_done into deferred_stable.
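A minimal sketch of the proposed change; the container names follow the ticket, but the element type is simplified to int for illustration and this is not the exact BlueStore code:

```cpp
#include <deque>

void merge_deferred(std::deque<int>& deferred_done,
                    std::deque<int>& deferred_stable) {
  // Before: copy every element, then clear the source.
  // deferred_stable.insert(deferred_stable.end(),
  //                        deferred_done.begin(), deferred_done.end());
  // deferred_done.clear();

  // After: when deferred_stable is known to be empty, stealing the whole
  // container is O(1) and leaves deferred_done empty, as required.
  deferred_stable.swap(deferred_done);
}
```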
bluestore - Feature #55474 (New): BLUESTORE_FRAGMENTATION health check doesn't work
https://tracker.ceph.com/issues/55474 (2022-04-27T17:43:20Z, Mohammed Naser)

It seems like there is documentation around `BLUESTORE_FRAGMENTATION` in the docs:

https://docs.ceph.com/en/pacific/rados/operations/health-checks/#bluestore-fragmentation

But there is no actual code that triggers this alert, and this is a *super* critical issue IMHO, since if your fragmentation gets high, it will kill your OSDs with absolutely no heads-up.

bluestore - Documentation #55462 (New): clarify the meaning of the code in do_replay_recovery_read
https://tracker.ceph.com/issues/55462 (2022-04-27T01:09:38Z, Yizheng Jiao)
I spent some time understanding the function do_replay_recovery_read in BlueFS.cc. Most parts of the code make sense to me. However, I am not sure about the meaning of the line below:

https://github.com/ceph/ceph/blob/a32221cb9a00c40a0c783eaf8296e5bb513b4c83/src/os/bluestore/BlueFS.cc#L4204

In my understanding, the beginning page of `raw_data` is 0. According to the code, we have this equation: `search_b = chunk_b - overlay_size = raw_data + page_size (4096) - overlay_size (48)`. Therefore, the first 48 bytes of `search_b` are zeroes. I am wondering why this is necessary or how it can be useful in any case. Please help me out.
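As general context rather than a BlueFS-specific answer, a small sketch of the usual reason a search buffer carries an overlay of the previous chunk, with the first chunk's overlay zero-filled because it has no predecessor; all names here are illustrative:

```cpp
#include <cstring>
#include <vector>

constexpr size_t overlay_size = 48;  // same value as in the ticket

// Search for `needle` across a sequence of chunks, where a match may
// straddle a chunk boundary. Each chunk is prefixed with the last
// overlay_size bytes of the previous chunk; before chunk 0 there is no
// predecessor, so the overlay starts zero-filled (compare the question
// above about the first 48 bytes being zeroes).
bool search_chunks(const std::vector<std::vector<char>>& chunks,
                   const char* needle, size_t needle_len) {
  std::vector<char> carried(overlay_size, 0);  // zero overlay before chunk 0
  for (const auto& chunk : chunks) {
    std::vector<char> buf = carried;
    buf.insert(buf.end(), chunk.begin(), chunk.end());
    for (size_t i = 0; i + needle_len <= buf.size(); ++i)
      if (std::memcmp(buf.data() + i, needle, needle_len) == 0)
        return true;  // found, possibly spanning the boundary
    // Carry the last overlay_size bytes into the next iteration.
    carried.assign(buf.end() - overlay_size, buf.end());
  }
  return false;
}
```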
I am happy to contribute changes to make this function more understandable after I get a better grasp of it.

bluestore - Fix #54299 (Need More Info): osd error restart
https://tracker.ceph.com/issues/54299 (2022-02-16T13:06:36Z, duans song)
```
debug -83> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
debug -82> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
debug -81> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
debug -80> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.bottommost_compression_opts.enabled: false
debug -79> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.compression_opts.window_bits: -14
debug -78> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.compression_opts.level: 32767
debug -77> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.compression_opts.strategy: 0
debug -76> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
debug -75> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
debug -74> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.compression_opts.enabled: false
debug -73> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
debug -72> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
debug -71> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.level0_stop_writes_trigger: 36
debug -70> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.target_file_size_base: 67108864
debug -69> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.target_file_size_multiplier: 1
debug -68> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.max_bytes_for_level_base: 268435456
debug -67> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 0
debug -66> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
debug -65> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
debug -64> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
debug -63> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
debug -62> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
debug -61> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
debug -60> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
debug -59> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
debug -58> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
debug -57> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.max_compaction_bytes: 1677721600
debug -56> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.arena_block_size: 33554432
debug -55> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
debug -54> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
debug -53> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
debug -52> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.disable_auto_compactions: 0
debug -51> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
debug -50> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
debug -49> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
debug -48> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
debug -47> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
debug -46> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
debug -45> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
debug -44> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
debug -43> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
debug -42> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
debug -41> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.table_properties_collectors:
debug -40> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.inplace_update_support: 0
debug -39> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.inplace_update_num_locks: 10000
debug -38> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
debug -37> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.memtable_whole_key_filtering: 0
debug -36> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.memtable_huge_page_size: 0
debug -35> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.bloom_locality: 0
debug -34> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.max_successive_merges: 0
debug -33> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.optimize_filters_for_hits: 0
debug -32> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.paranoid_file_checks: 0
debug -31> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.force_consistency_checks: 0
debug -30> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.report_bg_io_stats: 0
debug -29> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.ttl: 2592000
debug -28> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: Options.periodic_compaction_seconds: 0
debug -27> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: [column_family.cc:555] (skipping printing options)
debug -26> 2022-02-16T13:04:03.712+0000 7f35fee9d080 4 rocksdb: [column_family.cc:555] (skipping printing options)
debug -25> 2022-02-16T13:04:03.731+0000 7f35fee9d080 4 rocksdb: [version_set.cc:4568] Recovered from manifest file:db/MANIFEST-044787 succeeded,manifest_file_number is 44787, next_file_number is 45179, last_sequence is 974768953, log_number is 45176,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 0
debug -24> 2022-02-16T13:04:03.731+0000 7f35fee9d080 4 rocksdb: [version_set.cc:4577] Column family [default] (ID 0), log number is 45157
debug -23> 2022-02-16T13:04:03.731+0000 7f35fee9d080 4 rocksdb: [version_set.cc:4577] Column family [m-0] (ID 1), log number is 44777
debug -22> 2022-02-16T13:04:03.731+0000 7f35fee9d080 4 rocksdb: [version_set.cc:4577] Column family [m-1] (ID 2), log number is 44777
debug -21> 2022-02-16T13:04:03.731+0000 7f35fee9d080 4 rocksdb: [version_set.cc:4577] Column family [m-2] (ID 3), log number is 44777
debug -20> 2022-02-16T13:04:03.731+0000 7f35fee9d080 4 rocksdb: [version_set.cc:4577] Column family [p-0] (ID 4), log number is 44777
debug -19> 2022-02-16T13:04:03.731+0000 7f35fee9d080 4 rocksdb: [version_set.cc:4577] Column family [p-1] (ID 5), log number is 44777
debug -18> 2022-02-16T13:04:03.731+0000 7f35fee9d080 4 rocksdb: [version_set.cc:4577] Column family [p-2] (ID 6), log number is 44777
debug -17> 2022-02-16T13:04:03.731+0000 7f35fee9d080 4 rocksdb: [version_set.cc:4577] Column family [O-0] (ID 7), log number is 45162
debug -16> 2022-02-16T13:04:03.731+0000 7f35fee9d080 4 rocksdb: [version_set.cc:4577] Column family [O-1] (ID 8), log number is 45157
debug -15> 2022-02-16T13:04:03.731+0000 7f35fee9d080 4 rocksdb: [version_set.cc:4577] Column family [O-2] (ID 9), log number is 45162
debug -14> 2022-02-16T13:04:03.731+0000 7f35fee9d080 4 rocksdb: [version_set.cc:4577] Column family [L] (ID 10), log number is 45176
debug -13> 2022-02-16T13:04:03.731+0000 7f35fee9d080 4 rocksdb: [version_set.cc:4577] Column family [P] (ID 11), log number is 45157
debug -12> 2022-02-16T13:04:03.732+0000 7f35fee9d080 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1645016643732799, "job": 1, "event": "recovery_started", "log_files": [45157, 45162, 45168, 45171, 45173, 45176, 45178]}
debug -11> 2022-02-16T13:04:03.732+0000 7f35fee9d080 4 rocksdb: [db_impl/db_impl_open.cc:760] Recovering log #45157 mode 2
debug -10> 2022-02-16T13:04:03.801+0000 7f35fee9d080 4 rocksdb: [db_impl/db_impl_open.cc:760] Recovering log #45162 mode 2
debug -9> 2022-02-16T13:04:04.018+0000 7f35fee9d080 4 rocksdb: [db_impl/db_impl_open.cc:760] Recovering log #45168 mode 2
debug -8> 2022-02-16T13:04:04.931+0000 7f35fee9d080 4 rocksdb: [db_impl/db_impl_open.cc:760] Recovering log #45171 mode 2
debug -7> 2022-02-16T13:04:05.847+0000 7f35fee9d080 4 rocksdb: [db_impl/db_impl_open.cc:760] Recovering log #45173 mode 2
debug -6> 2022-02-16T13:04:06.798+0000 7f35fee9d080 4 rocksdb: [db_impl/db_impl_open.cc:760] Recovering log #45176 mode 2
debug -5> 2022-02-16T13:04:07.816+0000 7f35fee9d080 3 rocksdb: [le/block_based/filter_policy.cc:584] Using legacy Bloom filter with high (20) bits/key. Dramatic filter space and/or accuracy improvement is available with format_version>=5.
debug -4> 2022-02-16T13:04:07.980+0000 7f35fee9d080 1 bluefs _allocate unable to allocate 0x400000 on bdev 1, allocator name block, allocator type hybrid, capacity 0x8000000000, block size 0x1000, free 0x3909b57000, fragmentation 0.0523383, allocated 0x0
debug -3> 2022-02-16T13:04:07.980+0000 7f35fee9d080 -1 bluefs _allocate allocation failed, needed 0x3fd8ae
debug -2> 2022-02-16T13:04:07.980+0000 7f35fee9d080 -1 bluefs _flush_range allocated: 0x220000 offset: 0x21d430 length: 0x40047e
debug -1> 2022-02-16T13:04:07.990+0000 7f35fee9d080 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.2.4/rpm/el8/BUILD/ceph-16.2.4/src/os/bluestore/BlueFS.cc: In function 'int BlueFS::_flush_range(BlueFS::FileWriter*, uint64_t, uint64_t)' thread 7f35fee9d080 time 2022-02-16T13:04:07.981279+0000
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.2.4/rpm/el8/BUILD/ceph-16.2.4/src/os/bluestore/BlueFS.cc: 2729: ceph_abort_msg("bluefs enospc")

 ceph version 16.2.4 (3cbe25cde3cfa028984618ad32de9edc4c1eaed0) pacific (stable)
 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xe5) [0x55b9ff12d7a4]
 2: (BlueFS::_flush_range(BlueFS::FileWriter*, unsigned long, unsigned long)+0x1614) [0x55b9ff81e914]
 3: (BlueFS::_flush(BlueFS::FileWriter*, bool, bool*)+0x90) [0x55b9ff81ec80]
 4: (BlueFS::_flush(BlueFS::FileWriter*, bool, std::unique_lock<std::mutex>&)+0x32) [0x55b9ff838362]
 5: (BlueRocksWritableFile::Append(rocksdb::Slice const&)+0x11b) [0x55b9ff84a67b]
 6: (rocksdb::LegacyWritableFileWrapper::Append(rocksdb::Slice const&, rocksdb::IOOptions const&, rocksdb::IODebugContext*)+0x1f) [0x55b9ffcdbb2f]
 7: (rocksdb::WritableFileWriter::WriteBuffered(char const*, unsigned long)+0x58a) [0x55b9ffded88a]
 8: (rocksdb::WritableFileWriter::Append(rocksdb::Slice const&)+0x2d0) [0x55b9ffdeece0]
 9: (rocksdb::BlockBasedTableBuilder::WriteRawBlock(rocksdb::Slice const&, rocksdb::CompressionType, rocksdb::BlockHandle*, bool)+0xb6) [0x55b9fff0a496]
 10: (rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::Slice const&, rocksdb::BlockHandle*, bool)+0x26c) [0x55b9fff0addc]
 11: (rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::BlockBuilder*, rocksdb::BlockHandle*, bool)+0x3c) [0x55b9fff0b4fc]
 12: (rocksdb::BlockBasedTableBuilder::Flush()+0x6d) [0x55b9fff0b58d]
 13: (rocksdb::BlockBasedTableBuilder::Add(rocksdb::Slice const&, rocksdb::Slice const&)+0x2b8) [0x55b9fff0e9f8]
 14: (rocksdb::BuildTable(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::Env*, rocksdb::FileSystem*, rocksdb::ImmutableCFOptions const&, rocksdb::MutableCFOptions const&, rocksdb::FileOptions const&, rocksdb::TableCache*, rocksdb::InternalIteratorBase<rocksdb::Slice>*, std::vector<std::unique_ptr<rocksdb::FragmentedRangeTombstoneIterator, std::default_delete<rocksdb::FragmentedRangeTombstoneIterator> >, std::allocator<std::unique_ptr<rocksdb::FragmentedRangeTombstoneIterator, std::default_delete<rocksdb::FragmentedRangeTombstoneIterator> > > >, rocksdb::FileMetaData*, rocksdb::InternalKeyComparator const&, std::vector<std::unique_ptr<rocksdb::IntTblPropCollectorFactory, std::default_delete<rocksdb::IntTblPropCollectorFactory> >, std::allocator<std::unique_ptr<rocksdb::IntTblPropCollectorFactory, std::default_delete<rocksdb::IntTblPropCollectorFactory> > > > const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<unsigned long, std::allocator<unsigned long> >, unsigned long, rocksdb::SnapshotChecker*, rocksdb::CompressionType, unsigned long, rocksdb::CompressionOptions const&, bool, rocksdb::InternalStats*, rocksdb::TableFileCreationReason, rocksdb::EventLogger*, int, rocksdb::Env::IOPriority, rocksdb::TableProperties*, int, unsigned long, unsigned long, rocksdb::Env::WriteLifeTimeHint, unsigned long)+0xa45) [0x55b9ffeb9455]
 15: (rocksdb::DBImpl::WriteLevel0TableForRecovery(int, rocksdb::ColumnFamilyData*, rocksdb::MemTable*, rocksdb::VersionEdit*)+0xcf5) [0x55b9ffd1e475]
 16: (rocksdb::DBImpl::RecoverLogFiles(std::vector<unsigned long, std::allocator<unsigned long> > const&, unsigned long*, bool, bool*)+0x1491) [0x55b9ffd20411]
 17: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool, unsigned long*)+0xae8) [0x55b9ffd21f08]
 18: (rocksdb::DBImpl::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**, bool, bool)+0x59d) [0x55b9ffd1bc2d]
 19: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0x15) [0x55b9ffd1cfc5]
 20: (RocksDBStore::do_open(std::ostream&, bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x10c1) [0x55b9ffc94c91]
 21: (BlueStore::_open_db(bool, bool, bool)+0x948) [0x55b9ff71aa98]
 22: (BlueStore::_open_db_and_around(bool, bool)+0x2f7) [0x55b9ff784347]
 23: (BlueStore::_mount()+0x204) [0x55b9ff787204]
 24: (OSD::init()+0x380) [0x55b9ff260e40]
 25: main()
 26: __libc_start_main()
 27: _start()

debug 0> 2022-02-16T13:04:08.000+0000 7f35fee9d080 -1 *** Caught signal (Aborted) **
 in thread 7f35fee9d080 thread_name:ceph-osd

 ceph version 16.2.4 (3cbe25cde3cfa028984618ad32de9edc4c1eaed0) pacific (stable)
 1: /lib64/libpthread.so.0(+0x12b20) [0x7f35fcc04b20]
 2: gsignal()
 3: abort()
 4: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x1b6) [0x55b9ff12d875]
 5: (BlueFS::_flush_range(BlueFS::FileWriter*, unsigned long, unsigned long)+0x1614) [0x55b9ff81e914]
 6: (BlueFS::_flush(BlueFS::FileWriter*, bool, bool*)+0x90) [0x55b9ff81ec80]
 7: (BlueFS::_flush(BlueFS::FileWriter*, bool, std::unique_lock<std::mutex>&)+0x32) [0x55b9ff838362]
 8: (BlueRocksWritableFile::Append(rocksdb::Slice const&)+0x11b) [0x55b9ff84a67b]
 9: (rocksdb::LegacyWritableFileWrapper::Append(rocksdb::Slice const&, rocksdb::IOOptions const&, rocksdb::IODebugContext*)+0x1f) [0x55b9ffcdbb2f]
 10: (rocksdb::WritableFileWriter::WriteBuffered(char const*, unsigned long)+0x58a) [0x55b9ffded88a]
 11: (rocksdb::WritableFileWriter::Append(rocksdb::Slice const&)+0x2d0) [0x55b9ffdeece0]
 12: (rocksdb::BlockBasedTableBuilder::WriteRawBlock(rocksdb::Slice const&, rocksdb::CompressionType, rocksdb::BlockHandle*, bool)+0xb6) [0x55b9fff0a496]
 13: (rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::Slice const&, rocksdb::BlockHandle*, bool)+0x26c) [0x55b9fff0addc]
 14: (rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::BlockBuilder*, rocksdb::BlockHandle*, bool)+0x3c) [0x55b9fff0b4fc]
 15: (rocksdb::BlockBasedTableBuilder::Flush()+0x6d) [0x55b9fff0b58d]
 16: (rocksdb::BlockBasedTableBuilder::Add(rocksdb::Slice const&, rocksdb::Slice const&)+0x2b8) [0x55b9fff0e9f8]
 17: (rocksdb::BuildTable(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::Env*, rocksdb::FileSystem*, rocksdb::ImmutableCFOptions const&, rocksdb::MutableCFOptions const&, rocksdb::FileOptions const&, rocksdb::TableCache*, rocksdb::InternalIteratorBase<rocksdb::Slice>*, std::vector<std::unique_ptr<rocksdb::FragmentedRangeTombstoneIterator, std::default_delete<rocksdb::FragmentedRangeTombstoneIterator> >, std::allocator<std::unique_ptr<rocksdb::FragmentedRangeTombstoneIterator, std::default_delete<rocksdb::FragmentedRangeTombstoneIterator> > > >, rocksdb::FileMetaData*, rocksdb::InternalKeyComparator const&, std::vector<std::unique_ptr<rocksdb::IntTblPropCollectorFactory, std::default_delete<rocksdb::IntTblPropCollectorFactory> >, std::allocator<std::unique_ptr<rocksdb::IntTblPropCollectorFactory, std::default_delete<rocksdb::IntTblPropCollectorFactory> > > > const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<unsigned long, std::allocator<unsigned long> >, unsigned long, rocksdb::SnapshotChecker*, rocksdb::CompressionType, unsigned long, rocksdb::CompressionOptions const&, bool, rocksdb::InternalStats*, rocksdb::TableFileCreationReason, rocksdb::EventLogger*, int, rocksdb::Env::IOPriority, rocksdb::TableProperties*, int, unsigned long, unsigned long, rocksdb::Env::WriteLifeTimeHint, unsigned long)+0xa45) [0x55b9ffeb9455]
 18: (rocksdb::DBImpl::WriteLevel0TableForRecovery(int, rocksdb::ColumnFamilyData*, rocksdb::MemTable*, rocksdb::VersionEdit*)+0xcf5) [0x55b9ffd1e475]
 19: (rocksdb::DBImpl::RecoverLogFiles(std::vector<unsigned long, std::allocator<unsigned long> > const&, unsigned long*, bool, bool*)+0x1491) [0x55b9ffd20411]
 20: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool, unsigned long*)+0xae8) [0x55b9ffd21f08]
 21: (rocksdb::DBImpl::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**, bool, bool)+0x59d) [0x55b9ffd1bc2d]
 22: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0x15) [0x55b9ffd1cfc5]
 23: (RocksDBStore::do_open(std::ostream&, bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x10c1) [0x55b9ffc94c91]
 24: (BlueStore::_open_db(bool, bool, bool)+0x948) [0x55b9ff71aa98]
 25: (BlueStore::_open_db_and_around(bool, bool)+0x2f7) [0x55b9ff784347]
 26: (BlueStore::_mount()+0x204) [0x55b9ff787204]
 27: (OSD::init()+0x380) [0x55b9ff260e40]
 28: main()
 29: __libc_start_main()
 30: _start()
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 5 rbd_pwl
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 immutable_obj_cache
   0/ 5 client
   1/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 journal
   0/ 0 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 1 reserver
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/ 5 rgw_sync
   1/10 civetweb
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
   0/ 0 refs
   1/ 5 compressor
   1/ 5 bluestore
   1/ 5 bluefs
   1/ 3 bdev
   1/ 5 kstore
   4/ 5 rocksdb
   4/ 5 leveldb
   4/ 5 memdb
   1/ 5 fuse
   1/ 5 mgr
   1/ 5 mgrc
   1/ 5 dpdk
   1/ 5 eventtrace
   1/ 5 prioritycache
   0/ 5 test
   0/ 5 cephfs_mirror
   0/ 5 cephsqlite
  -2/-2 (syslog threshold)
  99/99 (stderr threshold)
--- pthread ID / name mapping for recent threads ---
  139869758236416 / admin_socket
  139869766629120 / msgr-worker-2
  139869775021824 / msgr-worker-1
  139869783414528 / msgr-worker-0
  139869886730368 / ceph-osd
  max_recent 10000
  max_new 10000
  log_file /var/lib/ceph/crash/2022-02-16T13:04:08.000688Z_d29e695a-b962-4ec0-8b86-28a1176bf244/log
--- end dump of recent events ---
reraise_fatal: default handler for signal 6 didn't terminate the process?
```

bluestore - Fix #48288 (Need More Info): test/objectstore: allocate function may return -ENOSPC
https://tracker.ceph.com/issues/48288 (2020-11-19T03:03:49Z, yantao xue)
test/objectstore: allocate function may return -ENOSPC

bluestore - Feature #44978 (New): support "bad block" isolation
https://tracker.ceph.com/issues/44978 (2020-04-07T16:11:51Z, Greg Farnum <gfarnum@redhat.com>)
A while ago we had a question at a meetup:
> Does BlueStore support bad block isolation?
> In FileStore, if you hit a bad block in a file you could just move the file into a root "badblock" folder and it wouldn't bother you again.
Sam suggested you *might* be able to do this by mounting BlueStore via FUSE and moving the object around, but we're not sure if that's actually possible. If it is, perhaps document that and close this ticket. If not, come up with a way to avoid using bad blocks on disk in the future (presumably by similarly leaking the space).
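For anyone wanting to test Sam's suggestion, the relevant options already appear in the ceph-objectstore-tool help quoted earlier; a hedged sketch follows (the paths are placeholders, the OSD must be stopped, and whether relocating objects this way actually works is exactly what this ticket asks):

```
# Expose the offline object store as a FUSE filesystem:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
  --op fuse --mountpoint /mnt/osd0

# ...inspect or relocate the affected object under /mnt/osd0, then:
fusermount -u /mnt/osd0
```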