Bug #27216

open

kclient: usable space suddenly decreased

Added by Vasanth M over 5 years ago. Updated over 5 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Q/A
Tags:
Backport:
Regression:
No
Severity:
1 - critical
Reviewed:
Affected Versions:
ceph-qa-suite:
fs
Component(FS):
ceph-fuse, libcephfs
Labels (FS):
qa, task(hard)
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi,

The usable space on the CephFS mount point (client side) suddenly decreased, but the cluster still shows overall storage available. I have pasted a sample below. I am using the Mimic version.

  cluster:
    id:     ff41da27-5750-4b04-9866-6718fa629605
    health: HEALTH_ERR
            Module 'dashboard' has failed: error('No socket could be created',)
            14390/182732 objects misplaced (7.875%)
            48/91366 objects unfound (0.053%)
            16 scrub errors
            Possible data damage: 19 pgs recovery_unfound, 5 pgs inconsistent
            Degraded data redundancy: 8110/182732 objects degraded (4.438%), 42 pgs degraded, 9 pgs undersized
 
  services:
    mon: 4 daemons, quorum osd1,osd3,ceph-mon1,osd2
    mgr: ceph-mon1(active), standbys: osd2, osd3
    mds: cephfs-1/1/1 up  {0=osd2=up:active}, 2 up:standby
    osd: 5 osds: 5 up, 5 in; 31 remapped pgs
    rgw: 2 daemons active
 
  data:
    pools:   10 pools, 256 pgs
    objects: 91.37 k objects, 47 GiB
    usage:   112 GiB used, 19 TiB / 19 TiB avail
    pgs:     8110/182732 objects degraded (4.438%)
             14390/182732 objects misplaced (7.875%)
             48/91366 objects unfound (0.053%)
             209 active+clean
             14  active+recovery_wait+degraded+remapped
             11  active+recovery_unfound+degraded
             8   active+recovery_wait+undersized+degraded+remapped
             7   active+recovery_unfound+degraded+remapped
             5   active+clean+inconsistent
             1   active+recovering+degraded+remapped
             1   active+recovery_unfound+undersized+degraded+remapped
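As a sanity check, the degraded/misplaced/unfound percentages in the status output are plain ratios over the object counts shown there; a quick reproduction using the numbers copied from the `ceph -s` output above:

```python
# Counts copied verbatim from the status output above.
total_copies = 182732  # object copies (denominator for degraded/misplaced)
objects = 91366        # distinct objects (denominator for unfound)

degraded = 8110 / total_copies    # 4.438%
misplaced = 14390 / total_copies  # 7.875%
unfound = 48 / objects            # 0.053%

print(f"degraded={degraded:.3%} misplaced={misplaced:.3%} unfound={unfound:.3%}")
```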
--------------------------------------------------------------------------------
[root@ceph-mon1 ~]# df -hT
Filesystem              Type            Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs              43G  7.9G   36G  19% /
devtmpfs                devtmpfs        2.9G     0  2.9G   0% /dev
tmpfs                   tmpfs           2.9G     0  2.9G   0% /dev/shm
tmpfs                   tmpfs           2.9G  117M  2.8G   5% /run
tmpfs                   tmpfs           2.9G     0  2.9G   0% /sys/fs/cgroup
/dev/sda1               xfs            1014M  213M  802M  21% /boot
10.137.10.190:6789:/    ceph            228G   47G  182G  21% /mnt/cephfs
tmpfs                   tmpfs           581M     0  581M   0% /run/user/0
ceph-fuse               fuse.ceph-fuse  228G   47G  182G  21% /mnt/fuse
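For reference, the Size/Used/Avail columns that `df` prints for a mount point come from the `statvfs(2)` call against that path, so the figures above can be reproduced directly; a minimal sketch (the `/mnt/cephfs` path is the mount from the output above, not a new assumption):

```python
import os

def df_numbers(path):
    """Return (size, used, avail) in bytes, computed the way df does."""
    st = os.statvfs(path)
    size = st.f_blocks * st.f_frsize
    used = (st.f_blocks - st.f_bfree) * st.f_frsize
    avail = st.f_bavail * st.f_frsize
    return size, used, avail

# e.g. df_numbers("/mnt/cephfs") on the client from the report; "/" shown here
size, used, avail = df_numbers("/")
print(f"size={size / 2**30:.1f}G used={used / 2**30:.1f}G avail={avail / 2**30:.1f}G")
```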

Actions #1

Updated by Zheng Yan over 5 years ago

What do you mean by "decrease"?

Actions #2

Updated by Vasanth M over 5 years ago

Zheng Yan wrote:

What do you mean by "decrease"?

We had 19 TB of total storage: 4 OSDs with 5 TB each. The client-side mount point used to show 9 TB. This morning only 228 GB shows on my client system and the remaining storage is not available, but all OSDs and monitors are working fine, and I have also rebooted.

Thanks & Regards
Vasanth M
System Engineer

Actions #3

Updated by Patrick Donnelly over 5 years ago

  • Subject changed from ceph fs suddenly decrease to kclient: usable space suddenly decreased
  • Category deleted (NFS (Linux Kernel))
  • Target version deleted (v13.2.2)
  • Affected Versions deleted (v9.2.2)

Vasanth M wrote:

Zheng Yan wrote:

What do you mean by "decrease"?

We had 19 TB of total storage: 4 OSDs with 5 TB each. The client-side mount point used to show 9 TB. This morning only 228 GB shows on my client system and the remaining storage is not available, but all OSDs and monitors are working fine, and I have also rebooted.

If I understand correctly, your "available" storage shown from `df -h` went from 9TB to 228GB?

Actions #4

Updated by Vasanth M over 5 years ago

Patrick Donnelly wrote:

Vasanth M wrote:

Zheng Yan wrote:

What do you mean by "decrease"?

We had 19 TB of total storage: 4 OSDs with 5 TB each. The client-side mount point used to show 9 TB. This morning only 228 GB shows on my client system and the remaining storage is not available, but all OSDs and monitors are working fine, and I have also rebooted.

If I understand correctly, your "available" storage shown from `df -h` went from 9TB to 228GB?

Yes. Why did this happen, and how can I fix it?

Thanks & Regards
Vasanth M
System Engineer
