Bug #52604

osd: mkfs: bluestore_stored > 235GiB from start

Added by Konstantin Shalygin over 2 years ago. Updated 14 days ago.

Status: Closed
Priority: Low
Assignee: -
Category: OSD
Target version: -
% Done: 0%
Source: Community (user)
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

Hi,

We hit a situation where an OSD, on its first run after deployment (mkfs), allocates ~235 GiB of space. For an NVMe OSD with 365 GiB of raw capacity this is totally abnormal.
We currently can't find a clue as to why this happens; something in the kernel or environment tells ceph-osd to allocate this space, because it does not happen on other machines.

The time spent on mkfs allocating this space on this CPU was 1 hour 06 minutes (tested on an Intel SSDPEDMD016T4, 1.6 TiB):

# ack boot
ceph-osd.59.log
1713:2021-09-14 10:06:14.712 7f3047315c00  0 osd.59 0 done with init, starting boot process
1714:2021-09-14 10:06:14.712 7f3047315c00  1 osd.59 0 start_boot
2091:2021-09-14 11:12:24.184 7f3030964700  1 osd.59 4009968 state: booting -> active

When the OSD booted, I checked the perf dump:

 "bluestore_allocated": 253647523840,
 "bluestore_stored": 252952705757,

Environment

# uname -a
Linux stat1 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/os-release
NAME="Ubuntu" 
VERSION="18.04.5 LTS (Bionic Beaver)" 
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.5 LTS" 
VERSION_ID="18.04" 
HOME_URL="https://www.ubuntu.com/" 
SUPPORT_URL="https://help.ubuntu.com/" 
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" 
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" 
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

# ceph-osd --version
ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)

# lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              12
On-line CPU(s) list: 0-11
Thread(s) per core:  2
Core(s) per socket:  6
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               63
Model name:          Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
Stepping:            2
CPU MHz:             3600.152
CPU max MHz:         3800.0000
CPU min MHz:         1200.0000
BogoMIPS:            7000.26
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            15360K
NUMA node0 CPU(s):   0-11
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d

Logs and dumps are attached.


Files

ceph-volume.log (17.8 KB), Konstantin Shalygin, 09/14/2021 12:20 PM
ceph-osd.59.log (191 KB), Konstantin Shalygin, 09/14/2021 12:20 PM
osd.59_bluestore_allocator_dump_block.log (97.1 KB), Konstantin Shalygin, 09/14/2021 12:20 PM
osd.59_bluestore_bluefs_available.log (127 Bytes), Konstantin Shalygin, 09/14/2021 12:20 PM
osd.59_bluestore_bluefs_stats.log (202 Bytes), Konstantin Shalygin, 09/14/2021 12:20 PM
osd.59_perf_dump.log (30.1 KB), Konstantin Shalygin, 09/14/2021 12:20 PM
#1

Updated by Konstantin Shalygin over 2 years ago

  • Description updated (diff)
#2

Updated by Konstantin Shalygin over 2 years ago

Found the root cause for this: #48212. The cluster, being in a PG-merging state, wasn't trimming osdmaps:

{
    "cluster_fsid": "d168189f-6105-4223-b244-f59842404076",
    "osd_fsid": "26dd7b3d-1ecf-40c9-b50f-7f8cf6f2a569",
    "whoami": 60,
    "state": "active",
    "oldest_map": 3820600,
    "newest_map": 4054962,
    "num_pgs": 488
}
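
For reference, the status above comes from the OSD admin socket, and the replayed-epoch gap can be computed directly from it (jq assumed available):

# ceph daemon osd.60 status | jq '.newest_map - .oldest_map'
234362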

The actual osdmap size is ~90 MiB in this case. After restarting the monitors, the osdmaps started trimming (a check sketch follows below). I wonder if it is possible to tell a fresh OSD not to replay all osdmaps, but only to look at the last few maps to determine that the OSD is new?
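
A quick way to confirm the monitors have resumed trimming is to watch the committed osdmap range in the cluster report (field names as seen in Nautilus-era output; verify on your release):

# ceph report 2>/dev/null | jq '.osdmap_first_committed, .osdmap_last_committed'

Once trimming works, osdmap_first_committed should keep advancing toward osdmap_last_committed.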

#3

Updated by Konstantin Shalygin over 2 years ago

  • Tracker changed from Bug to Support
  • Priority changed from Normal to Low
#4

Updated by Konstantin Shalygin 14 days ago

  • Tracker changed from Support to Bug
  • Status changed from New to Closed
  • Source set to Community (user)
  • Regression set to No
  • Severity set to 3 - minor

The fix was merged.
