Bug #6751 (closed)

Pool 'df' statistics go bad after changing PG count

Added by John Spray over 10 years ago. Updated about 7 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: other
Regression: No
Severity: 3 - minor

Description

To reproduce:

  1. Create a pool with few PGs
  2. Create some objects in the pool
  3. Note the 'ceph df' output
  4. Increase the number of PGs in the pool with ceph osd pool set <name> pg_num <count>
  5. Wait for PG creation
  6. Note the 'ceph df' output again: the pool's 'USED' and 'OBJECTS' values will have increased roughly in proportion to the number of PGs, even though no new objects were created and the actual space used on the OSDs has barely changed (a direct cross-check is sketched just below).
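
A quick way to confirm that this is a statistics artifact rather than real growth is to count the pool's objects directly and compare with the OBJECTS column of 'ceph df'. A minimal sketch, assuming the pool name pbench3 from the transcript below:

rados -p pbench3 ls | wc -l    # actual object count, enumerated from the pool
ceph df                        # reported OBJECTS for the pool

After the split, the rados listing should still show the true count while the 'ceph df' figure balloons.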

Detail:

Create my pool:
ceph osd pool create pbench3 10
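(The trailing 10 is the pool's initial pg_num.)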

root@gravel1:~# ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED    %RAW USED
    2790G     2789G     263M        0

POOLS:
    NAME         ID    USED     %USED    OBJECTS
    data         0     0        0        0
    metadata     1     0        0        0
    rbd          2     0        0        0
    pbench3      6     0        0        0

Put some objects in it:
rados bench --no-cleanup -p pbench3 -b 1000000 60 write
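(Here -b 1000000 writes ~1 MB objects, 60 is the run length in seconds, and --no-cleanup leaves the benchmark objects behind so the pool stays populated.)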

root@gravel1:~# ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED    %RAW USED
    2790G     2784G     6048M       0.21

POOLS:
    NAME         ID    USED     %USED    OBJECTS
    data         0     0        0        0
    metadata     1     0        0        0
    rbd          2     0        0        0
    pbench3      6     2910M    0.10     3054
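
(RAW USED, 6048M, is roughly twice the pool's USED, 2910M, consistent with 2x replication; the per-pool USED column counts logical data, not replicas.)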

Increase the number of PGs:
root@gravel2:~# ceph osd pool set pbench3 pg_num 100
set pool 6 pg_num to 100
root@gravel2:~# ceph osd pool set pbench3 pgp_num 100
Error EAGAIN: currently creating pgs, wait
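
The EAGAIN is expected: pgp_num cannot be raised while the PGs created by the pg_num change are still instantiating, so the command has to be retried once no PGs are left in the creating state. A crude poll, assuming the same pool and target as above:

while ceph pg stat | grep -q creating; do sleep 1; done
ceph osd pool set pbench3 pgp_num 100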

Let things quiet down:

root@gravel2:~# ceph pg stat
v9127: 292 pgs: 57 creating, 193 active+clean, 42 peering; 10906 MB data, 6122 MB used, 2784 GB / 2790 GB avail; 1020 MB/s wr, 1069 op/s
root@gravel2:~# ceph pg stat
v9132: 292 pgs: 281 active+clean, 11 peering; 31912 MB data, 6119 MB used, 2784 GB / 2790 GB avail; 1617 MB/s wr, 1696 op/s
root@gravel2:~# ceph pg stat
v9133: 292 pgs: 292 active+clean; 31912 MB data, 6119 MB used, 2784 GB / 2790 GB avail; 805 MB/s wr, 844 op/s
root@gravel2:~# ceph pg stat
v9133: 292 pgs: 292 active+clean; 31912 MB data, 6119 MB used, 2784 GB / 2790 GB avail
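
Note how the inflation tracks PG creation: reported data climbs from 10906 MB at v9127 to 31912 MB by v9133 while actual usage stays flat at ~6 GB, and the MB/s "write" rates are presumably phantom traffic from those same stat jumps, since the benchmark had already finished.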

Now the 'ceph df' output for my pool is way wrong:

root@gravel1:~# ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED    %RAW USED
    2790G     2784G     6119M       0.21

POOLS:
    NAME         ID    USED      %USED    OBJECTS
    data         0     0         0        0
    metadata     1     0         0        0
    rbd          2     0         0        0
    pbench3      6     31912M    1.12     33487
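
The pool's reported USED (31912M) is roughly 11x the true 2910M, and OBJECTS has grown from 3054 to 33487 in the same ratio, yet global RAW USED is essentially unchanged at 6119M: no real data was written.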

root@gravel2:~# ceph osd pool set pbench3 pgp_num 100
set pool 6 pgp_num to 100

Wait for everything to go to active+clean…

root@gravel1:~# ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED    %RAW USED
    2790G     2784G     6188M       0.22

POOLS:
    NAME         ID    USED      %USED    OBJECTS
    data         0     0         0        0
    metadata     1     0         0        0
    rbd          2     0         0        0
    pbench3      6     31912M    1.12     33487

root@gravel2:~# ceph pg stat
v9218: 1192 pgs: 1192 active+clean; 313 GB data, 6112 MB used, 2784 GB / 2790 GB avail

Increasing pg_num further makes it even worse:

ceph osd pool set pbench3 pg_num 1000

root@gravel2:~# ceph pg stat
v9218: 1192 pgs: 1192 active+clean; 313 GB data, 6112 MB used, 2784 GB / 2790 GB avail

root@gravel1:~# ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED    %RAW USED
    2790G     2784G     6112M       0.21

POOLS:
    NAME         ID    USED     %USED    OBJECTS
    data         0     0        0        0
    metadata     1     0        0        0
    rbd          2     0        0        0
    pbench3      6     313G     11.23    336809
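
The inflation scales almost linearly with pg_num: 3054 reported objects at 10 PGs, ~11x that (33487) at 100, and ~110x (336809) at 1000. That pattern is consistent with each split child inheriting a full copy of its parent PG's statistics rather than a proportional share, which is the behaviour the related feature #6729 targets.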


Related issues: 1 (0 open, 1 closed)

Related to Ceph - Feature #6729: Make pg statistics less wrong after split (Resolved, 11/06/2013)
