Bug #63674 (open): Ceph incorrectly calculates RAW space for OSD based on VDO

Added by Sergey Ponomarev 5 months ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 1 - critical
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Tested on Ceph versions Quincy and Reef.
Linux distribution: Rocky Linux 9.3.
VDO 8.2.1.6 from the distribution repository.
The problem appeared after VDO was migrated to LVM (LVM-VDO). Before the VDO 8 branch, when VDO was not yet integrated into LVM, this problem did not occur.
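
For reference, with LVM-VDO each OSD sits on a VDO-backed logical volume; the devices would have been created roughly along these lines (VG/pool names are inferred from the vdostats output below, sizes from ceph osd df, and the LV name is a placeholder):

lvcreate --type vdo -n osd0 -L 100G -V 1T VDO/pool0   # ~100G physical VDO pool, 1 TiB virtual LV
                                                      # one such LV per OSD, used as the OSD data device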

100 GB of data was written to an RBD image using fio with the --dedupe_percentage=95 flag.
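
The exact fio job is not reproduced here; an equivalent invocation with the rbd ioengine would look something like this (pool, image and client names are placeholders):

fio --name=dedupe-fill --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
    --rw=write --bs=4M --size=100G --iodepth=16 --dedupe_percentage=95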

Results:

ceph -s
  cluster:
    id:     0de83e08-8e54-11ee-81df-0050569c1314
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum mon-01 (age 101m)
    mgr: mgr-01(active, since 99m), standbys: ceph.ibyuzg
    osd: 3 osds: 3 up (since 47m), 3 in (since 48m)

  data:
    pools:   2 pools, 33 pgs
    objects: 25.61k objects, 100 GiB
    usage:   302 GiB used, 2.7 TiB / 3 TiB avail
    pgs:     33 active+clean

ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE   RAW USE  DATA     OMAP  META     AVAIL    %USE  VAR   PGS  STATUS
 0  ssd    1.00000   1.00000  1 TiB  101 GiB  100 GiB  0 B   837 MiB  923 GiB  9.85  1.00   33  up
 1  ssd    1.00000   1.00000  1 TiB  101 GiB  100 GiB  0 B   841 MiB  923 GiB  9.85  1.00   33  up
 2  ssd    1.00000   1.00000  1 TiB  101 GiB  100 GiB  0 B   837 MiB  923 GiB  9.85  1.00   32  up
                     TOTAL    3 TiB  302 GiB  300 GiB  0 B   2.5 GiB  2.7 TiB  9.85
MIN/MAX VAR: 1.00/1.00  STDDEV: 0

vdostats --si
Device             Size    Used   Available  Use%  Space saving%
VDO-pool0-vpool    107.4G  10.3G  97.1G      10%   94%
VDO-pool1-vpool    107.4G  10.3G  97.1G      10%   94%
VDO-pool2-vpool    107.4G  10.3G  97.1G      10%   94%
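
Reading the two outputs together: each OSD reports about 101 GiB RAW USE (essentially the 100 GB of logical data written), while the underlying VDO volume physically uses only about 10.3 GB thanks to the 95% dedupe ratio, so Ceph appears to account the logical write size rather than the physical space actually consumed on the VDO device. The physical side can be cross-checked with generic LVM/VDO tooling, for example:

lvs -o lv_name,vg_name,lv_size,data_percent   # LVM's view of how full each VDO pool is
vdostats --verbose                            # detailed logical vs. physical block counts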
