Bug #57121

closed

STORED==USED in ceph df

Added by Howie C over 1 year ago. Updated 7 months ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
OSD
Target version:
-
% Done:
0%

Source:
Tags:
backport_processed
Backport:
quincy, pacific
Regression:
No
Severity:
3 - minor
Reviewed:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Related issues 3 (0 open, 3 closed)

Related to Ceph - Bug #54347: ceph df stats break when there is an OSD with CRUSH weight == 0 (Duplicate)

Copied to Ceph - Backport #57592: quincy: STORED==USED in ceph df (Resolved)
Copied to Ceph - Backport #57593: pacific: STORED==USED in ceph df (Resolved)
Actions #1

Updated by Howie C over 1 year ago

We are seeing this bug on a newly deployed cluster across different versions (16.2.7 to 17.2). During the test, we rebuilt the cluster three times with a clean OS and wiped drives.

Basically, the ceph df output is incorrect (STORED == USED) unless we add 4-5 pools to the cluster. This bug can be reproduced consistently by increasing or decreasing the pool count.

=== Incorrect ceph df output ===

# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
ssd    852 TiB  851 TiB  866 GiB   866 GiB       0.10
TOTAL  852 TiB  851 TiB  866 GiB   866 GiB       0.10

--- POOLS ---
POOL                   ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
device_health_metrics   1    1      0 B        0      0 B      0    269 TiB
vm.store                2   32  550 GiB  140.81k  550 GiB   0.07    269 TiB

=== Correct ceph df output after adding more pools ===

# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
ssd    852 TiB  851 TiB  866 GiB   866 GiB       0.10
TOTAL  852 TiB  851 TiB  866 GiB   866 GiB       0.10

--- POOLS ---
POOL                   ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
device_health_metrics   1    1      0 B        0      0 B      0    269 TiB
vm.store                2   32  550 GiB  140.81k  850 GiB   0.10    269 TiB
dbslave1               33   32      0 B        0      0 B      0    269 TiB
dbslave2               34   32      0 B        0      0 B      0    269 TiB
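
A minimal reproduction sketch based on the description above (the pool names and PG counts here are only illustrative; per the report, roughly 4-5 extra pools were needed before the output corrected itself). Note that USED is normally STORED multiplied by the pool's replication or erasure-coding overhead, so the two columns should not be equal for a pool that actually holds data.

=== Reproduction sketch (example pool names/PG counts) ===

# ceph df
(on the freshly deployed cluster, USED == STORED for the data pool)
# ceph osd pool create test1 32
# ceph osd pool create test2 32
# ceph osd pool create test3 32
# ceph osd pool create test4 32
# ceph df
(USED now includes the replication/EC overhead and no longer equals STORED)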

Actions #2

Updated by Igor Fedotov over 1 year ago

  • Backport set to quincy, pacific
Actions #3

Updated by Igor Fedotov over 1 year ago

  • Status changed from New to Fix Under Review
  • Pull request ID set to 47986
Actions #4

Updated by Igor Fedotov over 1 year ago

  • Category changed from ceph cli to OSD
Actions #5

Updated by Igor Fedotov over 1 year ago

  • Status changed from Fix Under Review to Pending Backport
Actions #6

Updated by Backport Bot over 1 year ago

Actions #7

Updated by Backport Bot over 1 year ago

Actions #8

Updated by Backport Bot over 1 year ago

  • Tags set to backport_processed
Actions #9

Updated by Igor Fedotov over 1 year ago

  • Related to Bug #54347: ceph df stats break when there is an OSD with CRUSH weight == 0 added
Actions #10

Updated by Igor Fedotov 7 months ago

  • Status changed from Pending Backport to Resolved