Bug #41829 (open): ceph df reports incorrect pool usage

Added by Dan Moraru over 4 years ago. Updated over 3 years ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

Since upgrading from v14.2.2 to v14.2.3, ceph df erroneously equates pool usage with the amount of data stored in the pool, i.e. the STORED and USED columns in the POOLS section are identical:

$ ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       1.3 PiB     1.1 PiB     218 TiB     218 TiB          16.79
    mdd       2.0 TiB     2.0 TiB     2.1 GiB      10 GiB           0.49
    ssd       6.8 TiB     6.8 TiB     4.4 GiB      47 GiB           0.68
    TOTAL     1.3 PiB     1.1 PiB     218 TiB     218 TiB          16.68

POOLS:
    POOL                       ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    fs.metadata.archive        35     333 MiB         798     333 MiB      0.02       644 GiB
    fs.data.archive            36         0 B     105.73k         0 B         0       2.1 TiB
    fs.data.archive.frames     38     158 TiB      41.38M     158 TiB     14.02       725 TiB
    fs.metadata.users          41     418 MiB       7.25k     418 MiB      0.02       644 GiB
    fs.data.users              42         0 B     108.22k         0 B         0       2.1 TiB
    fs.data.users.home         43     132 GiB       2.24M     132 GiB      0.01       725 TiB

Overall usage reported in the RAW STORAGE section is correct and matches the total across OSDs:

$ ceph osd df | grep TOTAL
TOTAL 1.3 PiB 218 TiB 217 TiB 866 MiB 520 GiB 1.1 PiB 16.68

Of the six pools above, four are triply replicated and two are 6+2 erasure-coded:

$ ceph osd dump | grep pool
pool 35 'fs.metadata.archive' replicated size 3 min_size 2 crush_rule 3 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 8677 flags hashpspool stripe_width 0 target_size_ratio 0.25 application cephfs
pool 36 'fs.data.archive' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change 8677 flags hashpspool stripe_width 0 compression_algorithm lz4 compression_mode aggressive target_size_ratio 0.5 application cephfs
pool 38 'fs.data.archive.frames' erasure size 8 min_size 7 crush_rule 2 object_hash rjenkins pg_num 2048 pgp_num 2048 autoscale_mode warn last_change 15874 lfor 0/0/15670 flags hashpspool,ec_overwrites stripe_width 393216 compression_algorithm lz4 compression_mode aggressive target_size_ratio 0.5 application cephfs
pool 41 'fs.metadata.users' replicated size 3 min_size 2 crush_rule 3 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 17120 flags hashpspool stripe_width 0 target_size_ratio 0.25 application cephfs
pool 42 'fs.data.users' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change 17121 flags hashpspool stripe_width 0 compression_algorithm lz4 compression_mode aggressive target_size_ratio 0.5 application cephfs
pool 43 'fs.data.users.home' erasure size 8 min_size 7 crush_rule 4 object_hash rjenkins pg_num 2048 pgp_num 2048 autoscale_mode warn last_change 17122 lfor 0/0/16895 flags hashpspool,ec_overwrites stripe_width 393216 compression_algorithm lz4 compression_mode aggressive target_size_ratio 0.5 application cephfs
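
As a rough sanity check (an illustration added for clarity, not part of the original report; it ignores lz4 compression savings and allocation/metadata overhead), USED should be roughly STORED x 3 for the replicated pools and STORED x 8/6 for the 6+2 erasure-coded pools. The sketch below takes its figures from the ceph df and ceph osd dump output above:

# Illustrative only: expected USED per pool, given the replication/EC
# settings from `ceph osd dump`, ignoring compression and overhead.
MiB, GiB, TiB = 1024**2, 1024**3, 1024**4

pools = {
    # name: (STORED from `ceph df`, raw-space multiplier)
    "fs.metadata.archive":    (333 * MiB, 3),      # replicated, size 3
    "fs.data.archive.frames": (158 * TiB, 8 / 6),  # 6+2 erasure-coded
    "fs.metadata.users":      (418 * MiB, 3),      # replicated, size 3
    "fs.data.users.home":     (132 * GiB, 8 / 6),  # 6+2 erasure-coded
}

for name, (stored, factor) in pools.items():
    print(f"{name:24s} expected USED ~ {stored * factor / TiB:10.3f} TiB")

Before compression, the 158 TiB stored in fs.data.archive.frames alone should account for roughly 158 x 8/6 ≈ 211 TiB of raw space, which is consistent with the 218 TiB RAW USED shown above, whereas the POOLS section reports USED equal to STORED.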


Files

strace16.err (262 KB) Dan Moraru, 03/14/2020 06:35 PM
strace17.err (262 KB) Dan Moraru, 03/14/2020 06:35 PM

Related issues 3 (1 open, 2 closed)

Related to Dashboard - Bug #45185: mgr/dashboard: fix usage calculation to match "ceph df" way (Resolved; Ernesto Puerta)
Related to Dashboard - Feature #38697: mgr/dashboard: Enhance info shown in Landing Page cards 'PGs per OSD' & 'Raw Capacity' (Closed)
Is duplicate of mgr - Bug #40203: ceph df shows incorrect usage (New, 06/07/2019)
