
Support #48287

ceph pool not utilizing all available raw storage

Added by Amudhan Pandian over 3 years ago. Updated almost 3 years ago.

Status: Closed
Priority: Normal
Assignee: -
Category: OSD
Target version: -
% Done: 0%
Tags:
Reviewed:
Affected Versions:
Pull request ID:

Description

Hi,

ceph version 15.2.3 (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable), currently in use on Debian Buster.

Ceph pools are not showing all of the available free storage space. I have added the `ceph df` output below: it shows 262 TiB of configured RAW capacity, with 154 TiB available and 108 TiB used. Why is it not allowing use of the rest of the RAW capacity?
Also, the pools are capped at 108 TiB, and the ceph-kernel client mount shows only half of that space as available because the pool's replication size is set to 2.

`df` output from the ceph-kernel client mount:

```
mon-ip:/  67T  55T  13T  82%  /mountpath
```

`ceph df` output:

```
--- RAW STORAGE ---
CLASS    SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd      262 TiB  154 TiB  108 TiB  108 TiB   41.40
TOTAL    262 TiB  154 TiB  108 TiB  108 TiB   41.40

--- POOLS ---
POOL                   ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics  1   91 MiB   48       273 MiB  0      8.4 TiB
cephfs_data            2   54 TiB   15.08M   108 TiB  81.18  13 TiB
cephfs_metadata        3   1.5 GiB  12.41k   4.5 GiB  0.02   8.4 TiB
```

I have seen this behavior in another test Ceph cluster as well, where the entire RAW storage is not usable.
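The relationship between the STORED and USED columns above can be sketched with some rough arithmetic (this is just a sanity check of the reported numbers, not Ceph internals; the replication factor of 2 is taken from the report):

```python
# With pool size (replication) = 2, every byte stored in the pool
# consumes two bytes of raw capacity across the OSDs.
replication = 2      # pool "size" setting, as stated in the report
stored_tib = 54      # STORED for cephfs_data in the `ceph df` output

used_tib = stored_tib * replication
print(used_tib)      # 108 -> matches the USED column for cephfs_data
```

The same factor explains the client mount: the kernel client reports usable (post-replication) space, which is roughly half the raw figures.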

History

#1 Updated by Amudhan Pandian over 3 years ago

Hi,

This is another test cluster with the same kind of issue.

ceph version 15.2.5 (2c93eff00150f0cc5f106a559557a58d3d7b6f1f) octopus (stable) in Debian Buster

`ceph df` output:

```
--- RAW STORAGE ---
CLASS    SIZE     AVAIL    USED    RAW USED  %RAW USED
hdd      262 TiB  176 TiB  86 TiB  86 TiB    32.87
TOTAL    262 TiB  176 TiB  86 TiB  86 TiB    32.87

--- POOLS ---
POOL                     ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
cephfs.cephfs-tst3.meta  11  328 MiB  4.30k    986 MiB  0      32 TiB
cephfs.cephfs-tst3.data  12  43 TiB   11.34M   86 TiB   47.36  48 TiB
device_health_metrics    13  0 B      0        0 B      0      32 TiB
```

On Debian 10, the cephfs-kernel client mount shows only 91 TB as the total size:

```
mon-ip:/  91T  43T  48T  48%  /mnt/cephfs
```

#2 Updated by Greg Farnum almost 3 years ago

  • Tracker changed from Bug to Support
  • Status changed from New to Closed

This is usually a result of very unbalanced storage device distributions, where some of the available disk space cannot be used while respecting the CRUSH rules and replication requirements.
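A simplified model of that effect (an illustrative sketch, not Ceph's actual MAX AVAIL calculation; the OSD sizes below are hypothetical): because CRUSH keeps placing data across all mapped OSDs in proportion to their weights, the pool's usable space is projected from the OSD that will fill up first, not from the cluster-wide free total.

```python
def max_avail_tib(osd_free_tib, replication):
    """Approximate a pool's MAX AVAIL: the least-free OSD caps what every
    OSD can contribute, and the result is divided by the replication factor.
    Simplified model for equal-weight OSDs; real Ceph accounts for CRUSH
    weights and a configurable full ratio."""
    return min(osd_free_tib) * len(osd_free_tib) / replication

# Hypothetical 4-OSD cluster with one nearly full disk:
free = [10, 10, 10, 1]
print(max_avail_tib(free, 2))    # 2.0 TiB usable...
print(sum(free) / 2)             # ...even though 15.5 TiB looks free overall
```

This is why rebalancing (e.g. with the upmap balancer) typically recovers the "missing" capacity: evening out OSD utilization raises the minimum free space that the projection is based on.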
