Bug #41014

closed

make bluefs_alloc_size default to bluestore_min_alloc_size

Added by Neha Ojha over 4 years ago. Updated over 4 years ago.

Status:
Duplicate
Priority:
Urgent
Assignee:
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Originally 1M was chosen as the bluefs_alloc_size because metadata is stored in rocksdb, which persists it in large chunks. In more recent testing, we found no performance impact from using a bluefs_alloc_size of 256k. Reducing it further, so that the BlueFS and BlueStore allocators use the same allocation size, will let BlueFS use any free space left on the device; fragmentation will then no longer lead to an out-of-space error unless the device is actually full.

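As a sketch of what the proposed default means in practice, a ceph.conf fragment pinning both allocators to the same size might look like the following. The 64K value is illustrative only (it is taken from the duplicate issue's title, not from the merged change); the actual bluestore_min_alloc_size default depends on the device type.

```ini
[osd]
# BlueStore's minimum allocation unit on the device.
# 64K (65536 bytes) here is an illustrative value, not the default.
bluestore_min_alloc_size = 65536

# With this change, bluefs_alloc_size defaults to the value of
# bluestore_min_alloc_size, so setting it explicitly is only needed
# to override that behavior.
bluefs_alloc_size = 65536
```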

Related issues 1 (0 open, 1 closed)

Has duplicate bluestore - Bug #41301: os/bluestore/BlueFS: use 64K alloc_size on the shared device (Resolved, Sage Weil, 08/15/2019)

Actions #1

Updated by Neha Ojha over 4 years ago

  • Status changed from New to Fix Under Review
  • Priority changed from Normal to Urgent
  • Pull request ID set to 29404
Actions #2

Updated by Vikhyat Umrao over 4 years ago

  • Has duplicate Bug #41301: os/bluestore/BlueFS: use 64K alloc_size on the shared device added
Actions #3

Updated by Sage Weil over 4 years ago

  • Status changed from Fix Under Review to Resolved
  • Pull request ID changed from 29404 to 29537
Actions #4

Updated by Nathan Cutler over 4 years ago

  • Status changed from Resolved to Duplicate
Actions #5

Updated by Nathan Cutler over 4 years ago

  • Backport deleted (luminous,mimic,nautilus)

luminous,mimic,nautilus backports are being handled via #41301
