Bug #52969

use "ceph df" command found pool max avail increase when there are degraded objects in it

Added by minghang zhao over 2 years ago. Updated almost 2 years ago.

Status:
Fix Under Review
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
octopus,quincy
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Before taking the OSD down:
--- POOLS ---
POOL                   ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1     0 B         5      0 B      0    138 GiB
tfs                     2  79 MiB        91  236 MiB   0.17     46 GiB
data                    3     0 B         0      0 B      0    138 GiB
[root@lnhost116 ~]# ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME                  STATUS  REWEIGHT  PRI-AFF
-17         0.14639  root tfs
-16         0.14639      rack tfs-rack_0
-22         0.04880          host tfs-lnhost116
  4    ssd  0.04880              osd.4          up       1.00000  1.00000
-25         0.04880          host tfs-lnhost117
  2    ssd  0.04880              osd.2          up       1.00000  1.00000
-28         0.04880          host tfs-lnhost118
  0    ssd  0.04880              osd.0          up       1.00000  1.00000

After taking osd.2 down:
--- POOLS ---
POOL                   ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1     0 B         5      0 B      0    207 GiB
tfs                     2  79 MiB        91  158 MiB   0.11     70 GiB
data                    3     0 B         0      0 B      0    138 GiB
[root@lnhost116 ~]# ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME                  STATUS  REWEIGHT  PRI-AFF
-17         0.14639  root tfs
-16         0.14639      rack tfs-rack_0
-22         0.04880          host tfs-lnhost116
  4    ssd  0.04880              osd.4          up       1.00000  1.00000
-25         0.04880          host tfs-lnhost117
  2    ssd  0.04880              osd.2          down     1.00000  1.00000
-28         0.04880          host tfs-lnhost118
  0    ssd  0.04880              osd.0          up       1.00000  1.00000

The tfs pool uses three replicas and its CRUSH rule has a host failure domain. I took one OSD in the tfs pool down, yet the pool's MAX AVAIL increased, which is illogical.
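
For reference, the jump is consistent with a simplified model in which a pool's MAX AVAIL is computed as the raw space available to its CRUSH rule divided by a raw_used_rate that gets scaled down by the fraction of object copies currently degraded. The sketch below only illustrates that arithmetic; it is not the actual Ceph code. The raw-avail inputs (roughly 138 GiB for tfs and 414 GiB for device_health_metrics) are back-calculated from the numbers above, and the helper name max_avail_gib is made up for the example.

# Illustrative sketch only -- not the Ceph implementation.  It assumes
# MAX AVAIL = raw_avail / raw_used_rate, where raw_used_rate is the replica
# count scaled by the fraction of object copies that still exist.
def max_avail_gib(raw_avail_gib, size, num_objects, degraded_objects):
    copies = num_objects * size
    raw_used_rate = float(size)
    if copies > 0 and degraded_objects > 0:
        # Scaling the rate keeps USED accurate when copies are missing, but it
        # also inflates MAX AVAIL while objects are degraded.
        raw_used_rate *= (copies - degraded_objects) / copies
    return raw_avail_gib / raw_used_rate

# tfs: 91 objects, size 3, ~138 GiB of raw space available to its rule.
print(max_avail_gib(138, 3, 91, 0))    # all OSDs up             -> ~46 GiB
print(max_avail_gib(138, 3, 91, 91))   # osd.2 down, 91 degraded -> ~69 GiB (reported 70 GiB)

# device_health_metrics: 5 objects, size 3, assumed ~414 GiB raw avail.
print(max_avail_gib(414, 3, 5, 0))     # all OSDs up             -> ~138 GiB
print(max_avail_gib(414, 3, 5, 5))     # 5 degraded              -> ~207 GiB

Under this model, dividing MAX AVAIL by the degraded-adjusted rate is what makes the pool appear to gain space when an OSD goes down; dividing by the pool's nominal size instead would avoid that.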


Files

test.txt (17.2 KB) test.txt jianwei zhang, 06/01/2022 10:47 AM