Bug #8895

closed

ceph osd pool stats displays incorrect values

Added by Andrey Matyashov over 9 years ago. Updated over 9 years ago.

Status:
Duplicate
Priority:
Normal
Assignee:
-
Category:
OSD
Target version:
-
% Done:
0%

Source:
other
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

root@virt-node-03:~# ceph osd pool stats
pool data id 0
  -5/0 objects degraded (-inf%)
  recovery io 4069 kB/s, 1 objects/s
  client io 45427 B/s rd, 2 op/s

pool metadata id 1
  client io 2047 B/s wr, 0 op/s

pool rbd id 2
  -32/12 objects degraded (-266.667%)
  recovery io 73088 kB/s, 17 objects/s
  client io 3006 kB/s rd, 11789 kB/s wr, 177 op/s

More info on my system:
3 nodes, with 9 hdd
1 hdd - 500GB
7 hdd - 1TB
1 hdd - 2TB

root@virt-node-03:~# ceph -s
    cluster f53d4a19-b2c0-4a92-9620-bc6e3bfc27d6
     health HEALTH_ERR 36 pgs backfill; 1 pgs backfill_toofull; 10 pgs backfilling; 27 pgs degraded; 2 pgs inconsistent; 67 pgs stuck unclean; recovery 121020/1640938 objects degraded (7.375%); 2 near full osd(s); 2 scrub errors
     monmap e1: 3 mons at {virt-master=10.100.23.2:6789/0,virt-node-02=10.100.23.3:6789/0,virt-node-03=10.100.23.4:6789/0}, election epoch 200, quorum 0,1,2 virt-master,virt-node-02,virt-node-03
     mdsmap e146: 1/1/1 up {0=virt-node-02=up:active}, 2 up:standby
     osdmap e2441: 9 osds: 9 up, 8 in
      pgmap v1173009: 192 pgs, 3 pools, 1979 GB data, 517 kobjects
            5521 GB used, 2817 GB / 8339 GB avail
            121020/1640938 objects degraded (7.375%)
                   1 active+remapped+backfill_toofull
                 124 active+clean
                   1 active+clean+inconsistent
                  19 active+remapped+wait_backfill
                  10 active+degraded+remapped+backfilling
                  20 active+remapped
                  16 active+degraded+remapped+wait_backfill
                   1 active+degraded+remapped+inconsistent+wait_backfill
recovery io 12201 kB/s, 2 objects/s
  client io 50840 B/s rd, 1912 kB/s wr, 33 op/s

root@virt-node-03:~# ceph --version
ceph version 0.80.4 (7c241cfaa6c8c068bc9da8578ca00b9f4fc7567f)

root@virt-node-03:~# uname -a
Linux virt-node-03 2.6.32-30-pve #1 SMP Wed Jun 25 05:54:15 CEST 2014 x86_64 GNU/Linux

Actions #1

Updated by Greg Farnum over 9 years ago

Which part of the stats do you think are incorrect? You've got 7*1TB+2TB+500GB, which sounds like ~8339GB to me (given the difference between power-of-ten and power-of-two measurements).
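For reference, the decimal-vs-binary conversion works out as follows (a quick sketch of the arithmetic; the unit constants are standard, not from Ceph source):

```python
# Convert the advertised decimal disk sizes to the binary GiB that
# `ceph -s` reports (it labels them "GB").
TB = 10**12   # vendors use decimal terabytes
GiB = 2**30   # binary gibibyte

raw_bytes = 7 * 1 * TB + 1 * 2 * TB + 1 * 0.5 * TB  # 9.5 TB of raw disk
print(raw_bytes / GiB)  # ~8847 GiB across all 9 drives
```

The reported 8339 GB total is somewhat lower, which is consistent with the osdmap showing `9 osds: 9 up, 8 in` — one drive's capacity is not being counted.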

Actions #2

Updated by Andrey Matyashov over 9 years ago

Negative and undefined values in the object counts:

-5/0 objects degraded (-inf%)
-32/12 objects degraded (-266.667%)
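The `-inf` looks like an IEEE floating-point artifact: if the percentage is computed as degraded/total × 100 with float division (a guess at the computation, not the actual Ceph code), a negative degraded count combined with a zero total yields negative infinity, matching both lines above. The underlying bug is that the degraded count goes negative at all:

```python
def degraded_pct(degraded, total):
    """Emulate C-style IEEE float division, where x / 0.0 gives +/-inf."""
    if total == 0:
        if degraded == 0:
            return float("nan")
        return float("inf") if degraded > 0 else float("-inf")
    return degraded / total * 100.0

print(degraded_pct(-5, 0))    # -inf          -> "(-inf%)" in the report
print(degraded_pct(-32, 12))  # -266.666...   -> "(-266.667%)"
```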
Actions #3

Updated by John Spray over 9 years ago

Can probably close this as dupe of #5884?

Actions #4

Updated by Sage Weil over 9 years ago

  • Status changed from New to Duplicate