Bug #64444

open

No Valid allocation info on disk (empty file)

Added by Pablo Higueras 3 months ago. Updated 3 months ago.

Status:
Triaged
Priority:
Normal
Assignee:
-
Target version:
% Done:

0%

Source:
Community (user)
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi!

For over a year now, we have been suffering from high start-up times in our OSDs (more than an hour).

This does not happen on every start-up, but it is frequent.

Here is a trace of the log file when this happens:

2024-02-12T10:32:22.378+0000 7f3f0e837540  1 freelist init
2024-02-12T10:32:22.378+0000 7f3f0e837540  1 freelist _read_cfg
2024-02-12T10:32:22.498+0000 7f3f0e837540 -1 bluestore::NCB::__restore_allocator::No Valid allocation info on disk (empty file)
2024-02-12T10:32:22.498+0000 7f3f0e837540  0 bluestore(/var/lib/ceph/osd/ceph-54) _init_alloc::NCB::restore_allocator() failed! Run Full Recovery from ONodes (might take a while) ...
2024-02-12T11:55:42.607+0000 7f3f0e837540  1 HybridAllocator _spillover_range constructing fallback allocator
2024-02-12T11:56:00.555+0000 7f3f0e837540  1 bluestore::NCB::read_allocation_from_drive_on_startup::::Allocation Recovery was completed in 5018.058711 seconds, extent_count=585636576
2024-02-12T11:56:00.555+0000 7f3f0e837540  1 bluestore(/var/lib/ceph/osd/ceph-54) _init_alloc loaded 0 B in 0 extents, allocator type hybrid, capacity 0x6fc7c000000, block size 0x1000, free 0x30e6e1f6000, fragmentation 0.45
2024-02-12T11:56:00.559+0000 7f3f0e837540  4 rocksdb: [db/db_impl/db_impl.cc:446] Shutdown: canceling all background work
2024-02-12T11:56:00.807+0000 7f3f0e837540  4 rocksdb: [db/db_impl/db_impl.cc:625] Shutdown complete
2024-02-12T11:56:00.895+0000 7f3f0e837540  1 bluefs umount
2024-02-12T11:56:00.911+0000 7f3f0e837540  1 bdev(0x55f93a1b9800 /var/lib/ceph/osd/ceph-54/block) close
2024-02-12T11:56:01.131+0000 7f3f0e837540  1 bdev(0x55f93a1b9800 /var/lib/ceph/osd/ceph-54/block) open path /var/lib/ceph/osd/ceph-54/block
2024-02-12T11:56:01.135+0000 7f3f0e837540  1 bdev(0x55f93a1b9800 /var/lib/ceph/osd/ceph-54/block) open size 7681481900032 (0x6fc7c000000, 7.0 TiB) block_size 4096 (4 KiB) non-rotational device, discard supported
2024-02-12T11:56:01.135+0000 7f3f0e837540  1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-54/block size 7.0 TiB
2024-02-12T11:56:01.135+0000 7f3f0e837540  1 bluefs mount
2024-02-12T11:56:01.135+0000 7f3f0e837540  1 bluefs _init_alloc shared, id 1, capacity 0x6fc7c000000, block size 0x10000
2024-02-12T11:56:04.311+0000 7f3f0e837540  1 bluefs mount shared_bdev_used = 1407900450816
2024-02-12T11:56:04.311+0000 7f3f0e837540  1 bluestore(/var/lib/ceph/osd/ceph-54) _prepare_db_environment set db_paths to db,7297407805030 db.slow,7297407805030
2024-02-12T11:56:04.427+0000 7f3f0e837540  4 rocksdb: RocksDB version: 6.15.5
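For reference, the gap between the fallback message (10:32:22.498) and the recovery-complete message (11:56:00.555) matches the reported 5018 seconds, and the hex figures in the `_init_alloc` line can be decoded the same way. A small sketch, using only values copied from the log above:

```python
from datetime import datetime

# Quantify the start-up delay from the two bracketing log timestamps.
fmt = "%Y-%m-%dT%H:%M:%S.%f%z"
fallback = datetime.strptime("2024-02-12T10:32:22.498+0000", fmt)
recovered = datetime.strptime("2024-02-12T11:56:00.555+0000", fmt)
delay = (recovered - fallback).total_seconds()
print(f"full recovery took {delay:.3f} s")  # 5018.057 s, matching the log

# Decode the hex capacity/free figures from the _init_alloc line.
capacity = 0x6fc7c000000  # 7681481900032 bytes = 7.0 TiB
free = 0x30e6e1f6000
print(f"free space: {free / capacity:.1%} of the device")
```

So roughly 44% of the device is free, yet the allocator file was empty and the OSD spent the entire 84 minutes rebuilding allocation state from onodes.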

There is not much information about this problem. We have been investigating a similar issue (https://tracker.ceph.com/issues/48276), but it does not seem to help us.

We would appreciate any kind of help; we don't know what else to try.

Thank you!
