Bug #13777

Ceph file system is not freeing space

Added by Eric Eastman over 8 years ago. Updated almost 8 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport: infernalis
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS): MDS
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I have a Ceph file system that is not freeing space. Using Ceph 9.1.0, I created a file system with snapshots enabled, filled it over several days while taking hourly snapshots, and then deleted all files and all snapshots, but Ceph is not returning the space. I let the cluster sit for two days to see whether the cleanup was being done in the background, and the space still has not been freed. I also tried rebooting the cluster and the clients, and the space was still not returned.
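
For context, CephFS snapshots are managed through the hidden .snap directory of the mounted file system, so the hourly snapshot cycle was along these lines (the snapshot name below is an illustrative placeholder, not one of the names actually used):

# mkdir /cephfs/.snap/hourly-20151108-1400
# ls /cephfs/.snap
# rmdir /cephfs/.snap/hourly-20151108-1400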

The file system was created with the command:
# ceph fs new cephfs cephfs_metadata cephfs_data
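
For completeness, the data and metadata pools named above would have been created beforehand with something like the following; the placement-group count of 64 is only an illustrative value:

# ceph osd pool create cephfs_metadata 64
# ceph osd pool create cephfs_data 64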

# getfattr -d -m ceph.dir.* /cephfs/
getfattr: Removing leading '/' from absolute path names
# file: cephfs/
ceph.dir.entries="0" 
ceph.dir.files="0" 
ceph.dir.rbytes="0" 
ceph.dir.rctime="1447033469.0920991041" 
ceph.dir.rentries="4" 
ceph.dir.rfiles="1" 
ceph.dir.rsubdirs="3" 
ceph.dir.subdirs="0" 
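
Any one of these recursive statistics can also be queried on its own, for example:

# getfattr -n ceph.dir.rbytes /cephfs/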

# ls -l /cephfs/
total 0

# ls -l /cephfs/.snap
total 0

# grep ceph /proc/mounts 
ceph-fuse /cephfs fuse.ceph-fuse rw,noatime,user_id=0,group_id=0,default_permissions,allow_other 0 0

# df /cephfs/
Filesystem     1K-blocks      Used Available Use% Mounted on
ceph-fuse      276090880 194162688  81928192  71% /cephfs

# df -i /cephfs/
Filesystem      Inodes IUsed IFree IUse% Mounted on
ceph-fuse      2501946     -     -     - /cephfs

# ceph df detail
GLOBAL:
    SIZE     AVAIL      RAW USED     %RAW USED     OBJECTS 
    263G     80009M         181G         68.78       2443k 
POOLS:
    NAME                ID     CATEGORY     USED       %USED     MAX AVAIL     OBJECTS     DIRTY     READ     WRITE  
    rbd                 0      -                 0         0        27826M           0         0        0          0 
    cephfs_data         1      -            76846M     28.50        27826M     2501672     2443k     345k     32797k 
    cephfs_metadata     2      -            34868k      0.01        27826M         259       259     480k     23327k 
    kSAFEbackup         3      -              108M      0.04        27826M          15        15        0         49 

Dumping the MDS statistics shows a large number of strays:
  "mds_cache": {
        "num_strays": 16389,
        "num_strays_purging": 0,
        "num_strays_delayed": 0,
        "num_purge_ops": 0,
        "strays_created": 17066,
        "strays_purged": 677,
        "strays_reintegrated": 0,
        "strays_migrated": 0,
        "num_recovering_processing": 0,
        "num_recovering_enqueued": 0,
        "num_recovering_prioritized": 0,
        "recovery_started": 0,
        "recovery_completed": 0
    },
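
The counters above are what the MDS reports over its admin socket; a sketch of how they can be pulled, with mds.0 standing in for the actual daemon name (the attached dumpcache.2 and perf.2 files were captured after an additional flush journal and MDS restart):

# ceph daemon mds.0 perf dump
# ceph daemon mds.0 flush journal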

The whole cluster and all client systems are running Ubuntu Trusty with a 4.3.0 kernel and Ceph version 9.1.0:
# ceph -v
ceph version 9.1.0 (3be81ae6cf17fcf689cd6f187c4615249fea4f61)
# uname -a
Linux ede-c2-adm01 4.3.0-040300-generic #201511020949 SMP Mon Nov 2 14:50:44 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux


I am attaching the output of ceph mds tell \* dumpcache /tmp/dumpcache.txt and the MDS log, with debug mds = 20, from startup to when the MDS went active.
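
For reference, the debug level was raised with the standard MDS debug setting, roughly the following in ceph.conf on the MDS host (only the relevant lines are shown):

[mds]
    debug mds = 20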

Additional information is in the following mailing list thread:
http://thread.gmane.org/gmane.comp.file-systems.ceph.user/25212


Files

dumpcache.txt.bz2 (111 KB) - Eric Eastman, 11/12/2015 02:46 AM
dumpcache.2 (7.2 KB) - dumpcache after flush journal and restart - Eric Eastman, 11/12/2015 03:11 PM
perf.2 (4.97 KB) - perf output after flush journal and restart - Eric Eastman, 11/12/2015 03:15 PM

Related issues 2 (0 open, 2 closed)

Related to CephFS - Bug #13782: Snapshotted files not properly purged (Resolved, 11/12/2015)
Copied to CephFS - Backport #14067: infernalis: Ceph file system is not freeing space (Resolved, assigned to Abhishek Varshney)
