Bug #56450

Updated by Xiubo Li almost 2 years ago

Before running the Filesystem benchmark test, from the OS we can see that the *_/_* filesystem had *_81GB_* available: 

 <pre> 
 [root@lxbceph1 kcephfs]# df -h 
 Filesystem                                                    Size    Used Avail Use% Mounted on 
 devtmpfs                                                       14G    4.0K     14G     1% /dev 
 tmpfs                                                          14G     84K     14G     1% /dev/shm 
 tmpfs                                                          14G     50M     14G     1% /run 
 tmpfs                                                          14G       0     14G     0% /sys/fs/cgroup 
 /dev/mapper/rhel-root                                         235G    155G     81G    66% / 
 /dev/sda1                                                      15G    1.4G     14G     9% /boot 
 /dev/loop1                                                     39M     39M       0 100% /var/lib/snapd/snap/gh/545 
 /dev/loop0                                                     39M     39M       0 100% /var/lib/snapd/snap/gh/544 
 /dev/loop4                                                     56M     56M       0 100% /var/lib/snapd/snap/core18/2409 
 /dev/loop2                                                     45M     45M       0 100% /var/lib/snapd/snap/snapd/15904 
 /dev/loop3                                                     56M     56M       0 100% /var/lib/snapd/snap/core18/1944 
 tmpfs                                                         2.8G     16K    2.8G     1% /run/user/42 
 tmpfs                                                         2.8G       0    2.8G     0% /run/user/0 
 10.72.47.117:40545,10.72.47.117:40547,10.72.47.117:40549:/     99G       0     99G     0% /mnt/kcephfs 
 </pre> 
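
Only the *_/dev/mapper/rhel-root_* line matters for the comparison below; a quick way to watch just that filesystem while the benchmark runs (plain *_df_*/*_watch_*, nothing Ceph-specific) would be, for example: 

 <pre> 
 [root@lxbceph1 kcephfs]# df -h /                 # only the root filesystem 
 [root@lxbceph1 kcephfs]# watch -n 60 df -h /     # refresh every 60s during the run 
 </pre> 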

 And after the test finished, it only had *_26GB_* available (roughly 55GB more used on *_/_*): 

 <pre> 
 [root@lxbceph1 usr]# df -h 
 Filesystem                                                    Size    Used Avail Use% Mounted on 
 devtmpfs                                                       14G    4.0K     14G     1% /dev 
 tmpfs                                                          14G     84K     14G     1% /dev/shm 
 tmpfs                                                          14G     50M     14G     1% /run 
 tmpfs                                                          14G       0     14G     0% /sys/fs/cgroup 
 /dev/mapper/rhel-root                                         235G    210G     26G    90% / 
 /dev/sda1                                                      15G    1.4G     14G     9% /boot 
 /dev/loop1                                                     39M     39M       0 100% /var/lib/snapd/snap/gh/545 
 /dev/loop0                                                     39M     39M       0 100% /var/lib/snapd/snap/gh/544 
 /dev/loop4                                                     56M     56M       0 100% /var/lib/snapd/snap/core18/2409 
 /dev/loop2                                                     45M     45M       0 100% /var/lib/snapd/snap/snapd/15904 
 /dev/loop3                                                     56M     56M       0 100% /var/lib/snapd/snap/core18/1944 
 tmpfs                                                         2.8G     16K    2.8G     1% /run/user/42 
 tmpfs                                                         2.8G       0    2.8G     0% /run/user/0 
 10.72.47.117:40545,10.72.47.117:40547,10.72.47.117:40549:/     99G       0     99G     0% /mnt/kcephfs 
 </pre> 

 And from *_ceph df_* we can see that the space had been released on the Ceph side: 

 <pre> 
 [root@lxbceph1 build]# ./bin/ceph df 
 --- RAW STORAGE --- 
 CLASS       SIZE      AVAIL       USED    RAW USED    %RAW USED 
 hdd      303 GiB    300 GiB    3.5 GiB     3.5 GiB         1.14 
 TOTAL    303 GiB    300 GiB    3.5 GiB     3.5 GiB         1.14 
 
 --- POOLS --- 
 POOL             ID    PGS     STORED    OBJECTS       USED    %USED    MAX AVAIL 
 .mgr              1      1    577 KiB          2    1.7 MiB        0       99 GiB 
 cephfs.a.meta     2     16    156 MiB         61    468 MiB     0.15       99 GiB 
 cephfs.a.data     3     32        0 B          0        0 B        0       99 GiB 
 </pre> 
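
For completeness, a couple of other read-only commands (run from the same *_build_* directory of what I assume is a vstart-style cluster, matching the *_./bin/ceph_* prefix above) can cross-check where the cluster itself thinks the space is: 

 <pre> 
 [root@lxbceph1 build]# ./bin/ceph osd df    # raw usage per OSD 
 [root@lxbceph1 build]# ./bin/rados df       # objects and space per pool 
 </pre> 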

 If I run the same test again it will fail with no disk space left, and from *_df -h_* we can see that the OS disk space has been used up: 

 <pre> 
 [root@lxbceph1 usr]# df -h 
 Filesystem                                                    Size    Used Avail Use% Mounted on 
 devtmpfs                                                       14G    4.0K     14G     1% /dev 
 tmpfs                                                          14G     84K     14G     1% /dev/shm 
 tmpfs                                                          14G     50M     14G     1% /run 
 tmpfs                                                          14G       0     14G     0% /sys/fs/cgroup 
 /dev/mapper/rhel-root                                         235G    235G    103M 100% / 
 /dev/sda1                                                      15G    1.4G     14G     9% /boot 
 /dev/loop1                                                     39M     39M       0 100% /var/lib/snapd/snap/gh/545 
 /dev/loop0                                                     39M     39M       0 100% /var/lib/snapd/snap/gh/544 
 /dev/loop4                                                     56M     56M       0 100% /var/lib/snapd/snap/core18/2409 
 /dev/loop2                                                     45M     45M       0 100% /var/lib/snapd/snap/snapd/15904 
 /dev/loop3                                                     56M     56M       0 100% /var/lib/snapd/snap/core18/1944 
 tmpfs                                                         2.8G     16K    2.8G     1% /run/user/42 
 tmpfs                                                         2.8G       0    2.8G     0% /run/user/0 
 10.72.47.117:40545,10.72.47.117:40547,10.72.47.117:40549:/     99G    8.5G     91G     9% /mnt/kcephfs 
 </pre> 
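
To narrow down what on *_/_* is actually holding that space, something like the following should help (the dev/osd* glob is an assumption about where this vstart-style cluster keeps its BlueStore files under the build tree): 

 <pre> 
 [root@lxbceph1 build]# du -xh --max-depth=1 / 2>/dev/null | sort -h | tail   # largest top-level dirs on / 
 [root@lxbceph1 build]# du -sh dev/osd*                                       # assumption: vstart OSD stores live here 
 </pre> 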

 Is this a bug? If not, how can this be fixed? 
