Bug #17778


notional amount of data stored is greater than actual data in the cluster.

Added by Parikshith B over 7 years ago. Updated almost 7 years ago.

Status:
Closed
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

1. I was running I/O on cephfs and replicated pools and noticed that the notional (reported) amount of data is greater than the actual data in the cluster.

ceph -s
    cluster 54483280-809a-48f0-a657-a077f1f60100
     health HEALTH_WARN
            noscrub,nodeep-scrub,sortbitwise flag(s) set
     monmap e1: 1 mons at {rack3-ramp-9=10.242.43.35:6789/0}
            election epoch 11, quorum 0 rack3-ramp-9
      fsmap e9188: 1/1/1 up {0=rack3-ramp-9=up:active}
     osdmap e2017: 16 osds: 16 up, 16 in
            flags noscrub,nodeep-scrub,sortbitwise
      pgmap v220139: 2348 pgs, 3 pools, 22916 GB data, 9852 kobjects
            19864 GB used, 37179 GB / 57043 GB avail
                2348 active+clean


ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    57043G     37179G     19864G       34.82
POOLS:
    NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
    cephfs_data         1      7688G      26.95     17763G        5091323
    cephfs_metadata     2      312M       0         17763G        239
    pool1               4      15228G     53.39     17763G        4997442

2. Replication size is 2 on all the pools. There are no clones or snapshots.

3. Ran the same test on just the replicated pools; there the notional data was half of the raw used data, as expected with replication 2 (see the cross-check below).
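
For reference, here is a quick cross-check of the figures quoted above (a minimal Python sketch; all numbers are copied from the ceph df output, and the only assumption is size=2 on every pool, as stated in item 2):

# Cross-check of the figures reported above (values in GB, from `ceph df`).
replication = 2

pool_used_gb = {
    "cephfs_data":     7688,
    "cephfs_metadata": 0.3,   # 312 MB
    "pool1":           15228,
}

stored = sum(pool_used_gb.values())    # ~22916 GB, matches the pgmap "data" figure
expected_raw = stored * replication    # ~45833 GB if every logical byte were written twice
reported_raw = 19864                   # "RAW USED" from `ceph df`

print(f"sum of pool USED (notional): {stored:.0f} GB")
print(f"expected raw used at size=2: {expected_raw:.0f} GB")
print(f"reported raw used          : {reported_raw} GB")

Even with a single copy, raw usage should be at least as large as the notional total; here it is smaller, which is the mismatch this report describes.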

#1

Updated by Greg Farnum almost 7 years ago

  • Status changed from New to Closed

I think this must be the result of using sparse objects and the way we count them.
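
For illustration only (this is not the RADOS accounting code, just a minimal sketch with a made-up filename): an ordinary sparse file shows how a logical object size can be far larger than the space actually allocated, which is the kind of gap the sparse-object explanation above points to.

import os

# Create a sparse file: seek far past the start and write one byte.
# The logical size (st_size) spans the hole, but the filesystem only
# allocates blocks for the byte actually written (Linux/Unix).
path = "sparse_demo.bin"   # hypothetical demo file
with open(path, "wb") as f:
    f.seek(1 << 30)        # 1 GiB hole
    f.write(b"x")

st = os.stat(path)
logical_bytes = st.st_size            # ~1 GiB + 1: what size-based accounting sees
allocated_bytes = st.st_blocks * 512  # a few KiB: what is actually stored on disk

print("logical size :", logical_bytes)
print("allocated    :", allocated_bytes)

os.remove(path)

If the per-pool USED figure is derived from logical object sizes while RAW USED reflects bytes actually allocated, sparse objects would produce exactly this kind of discrepancy.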

