Bug #15416

closed

ceph df reports incorrect pool usage

Added by Simon Weald about 8 years ago. Updated about 7 years ago.

Status: Can't reproduce
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: other
Regression: No
Severity: 3 - minor

Description

Hi guys

We're currently facing an outage with our OpenStack cluster, which is backed by Ceph. The issue appears to be related to the Cinder cold-store pool and the number of objects Ceph thinks it contains:

GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED 
    165T      160T        5100G          3.00 
POOLS:
    NAME                         ID     USED       %USED     MAX AVAIL     OBJECTS              
    cinder-volumes_p02           3          8E         0        53943G     -9223372036854751073 
    glance-images_p02            4       1178G      0.69        53943G                    92985 
    ephemeral-vms_p02            5        229G      0.14        53943G                    44052 
    cinder-volumes-cache_p02     6      16585M         0          651G                    26914 
    glance-images-cache_p02      7      14718M         0          651G                    12350 
    ephemeral-vms-cache_p02      8      50774M      0.03          651G                    85424 

As you can see, the cinder-volumes_p02 pool reports a huge value in both the USED and OBJECTS columns - what could be causing this, and how can we get the pool statistics recalculated?
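For what it's worth, that OBJECTS value looks like a 64-bit counter that has gone past the signed boundary: reinterpreted as an unsigned 64-bit integer it is just over 2^63, which also matches the USED column showing roughly 8E (2^63 bytes). A quick sketch of that reinterpretation (treating this as a wrapped pool-stats counter is my assumption, not something confirmed):

```python
import struct

def as_unsigned_u64(signed_val):
    # Reinterpret the bit pattern of a signed 64-bit value as unsigned.
    return struct.unpack("<Q", struct.pack("<q", signed_val))[0]

reported = -9223372036854751073           # OBJECTS column from `ceph df`
print(as_unsigned_u64(reported))           # 9223372036854800543
print(as_unsigned_u64(reported) - 2**63)   # 24735 objects past the boundary
```

So the bit pattern is 2^63 + 24735, i.e. just past the point where an unsigned counter starts printing as a huge negative signed number.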

We can create RBD devices in the affected pool manually using the rbd client, but Cinder refuses to use the pool, presumably because it believes the pool is full.

We're running Infernalis 9.2.1 on Ubuntu 14.04 (Trusty) - if there are any pertinent logs I can provide, please let me know.
