Bug #36669

closed

client: capacity of all OSDs is displayed when there are multiple data pools in the FS

Added by huanwen ren over 5 years ago. Updated over 5 years ago.

Status:
Rejected
Priority:
Normal
Assignee:
huanwen ren
Category:
-
Target version:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
fs
Component(FS):
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When a CephFS directory is mounted with ceph-fuse and the filesystem has multiple data pools, df reports the capacity of all OSDs (the raw cluster capacity). When the filesystem has only one data pool, the correct capacity information is displayed.
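
For illustration only, here is a minimal, hypothetical C++ sketch of the capacity-selection behaviour described above (this is not the actual ceph-fuse/libcephfs statfs code): with a single data pool the client reports that pool's usable space, while with multiple data pools it falls back to the raw cluster totals.

// Hypothetical sketch of the behaviour described above; NOT the actual
// ceph-fuse / libcephfs statfs implementation.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct PoolStats {
    std::string name;
    uint64_t max_avail;   // analogous to MAX AVAIL in `ceph df`
    uint64_t used;        // analogous to USED in `ceph df`
};

struct ClusterStats {
    uint64_t size;        // analogous to GLOBAL SIZE
    uint64_t avail;       // analogous to GLOBAL AVAIL
};

struct Capacity {
    uint64_t size;
    uint64_t avail;
};

// With exactly one data pool, report that pool's usable space; with several
// data pools, fall back to the raw cluster totals (the behaviour this ticket
// reports as misleading).
Capacity fs_capacity(const std::vector<PoolStats>& data_pools,
                     const ClusterStats& cluster) {
    if (data_pools.size() == 1) {
        const PoolStats& p = data_pools.front();
        return { p.max_avail + p.used, p.max_avail };
    }
    return { cluster.size, cluster.avail };
}

int main() {
    const uint64_t TiB = 1ULL << 40, GiB = 1ULL << 30;
    ClusterStats cluster{ 9 * TiB, (uint64_t)(7.85 * TiB) };   // ~9.00TiB / ~7.85TiB

    std::vector<PoolStats> pools = {
        { "nfs_data", (uint64_t)(1.55 * TiB), (uint64_t)(3.59 * GiB) },
        { "pool1",    723 * GiB,              779 * GiB },
    };

    Capacity two = fs_capacity(pools, cluster);            // two data pools attached
    Capacity one = fs_capacity({pools.front()}, cluster);  // only nfs_data attached

    std::cout << "two data pools: " << two.size / (double)TiB << " TiB\n"   // ~9.00
              << "one data pool:  " << one.size / (double)TiB << " TiB\n";  // ~1.55
}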

This can be reproduced as follows:


[root@hw110 tecs]# ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    9.00TiB     7.85TiB      1.15TiB         12.82
POOLS:
    NAME             ID     USED        %USED     MAX AVAIL     OBJECTS
    nfs_metadata     1       171MiB      0.02       1.03TiB         105
    nfs_data         2      3.59GiB      0.23       1.55TiB        2341
    pool1            9       779GiB     51.86        723GiB      202249
    cinder           10          0B         0        723GiB           0
    glance           11          0B         0        723GiB           0
[root@hw110 tecs]# ceph fs ls
name: ceph-fs, metadata pool: nfs_metadata, data pools: [nfs_data pool1 ]
[root@hw110 tecs]# df -kh
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_sys-lv_root   99G  8.9G   85G  10% /
devtmpfs                     32G     0   32G   0% /dev
tmpfs                        32G   57M   32G   1% /dev/shm
tmpfs                        32G  215M   31G   1% /run
tmpfs                        32G     0   32G   0% /sys/fs/cgroup
/dev/sda2                   976M  190M  720M  21% /boot
/dev/xxx1p1               94M  5.4M   89M   6% /var/lib/ceph/osd/ceph-12
/dev/xxx12p1              94M  5.4M   89M   6% /var/lib/ceph/osd/ceph-9
/dev/xxx10p1              94M  5.4M   89M   6% /var/lib/ceph/osd/ceph-14
/dev/xxx14p1              94M  5.4M   89M   6% /var/lib/ceph/osd/ceph-11
/dev/xxx11p1              94M  5.4M   89M   6% /var/lib/ceph/osd/ceph-10
/dev/xxx13p1              94M  5.4M   89M   6% /var/lib/ceph/osd/ceph-13
/dev/xxx0p1               94M  5.4M   89M   6% /var/lib/ceph/osd/ceph-8
/dev/xxx15p1              94M  5.4M   89M   6% /var/lib/ceph/osd/ceph-15
tmpfs                       6.3G     0  6.3G   0% /run/user/1001
ceph-fuse                   9.1T  1.2T  7.9T  13% /share-fs/export
tmpfs                       6.3G     0  6.3G   0% /run/user/0
[root@hw110 tecs]#
[root@hw110 tecs]#
[root@hw110 tecs]# ceph fs rm_data_pool ceph-fs pool1
removed data pool 9 from fsmap
[root@hw110 tecs]#
[root@hw110 tecs]#
[root@hw110 tecs]# df -k
Filesystem                  1K-blocks    Used  Available Use% Mounted on
/dev/mapper/vg_sys-lv_root  103080888 9324112   88497512  10% /
devtmpfs                     32702636       0   32702636   0% /dev
tmpfs                        32714180   57432   32656748   1% /dev/shm
tmpfs                        32714180  220012   32494168   1% /run
tmpfs                        32714180       0   32714180   0% /sys/fs/cgroup
/dev/sda2                      999320  193816     736692  21% /boot
/dev/xxx1p1                  95968    5500      90468   6% /var/lib/ceph/osd/ceph-12
/dev/xxx12p1                 95968    5500      90468   6% /var/lib/ceph/osd/ceph-9
/dev/xxx10p1                 95968    5500      90468   6% /var/lib/ceph/osd/ceph-14
/dev/xxx14p1                 95968    5500      90468   6% /var/lib/ceph/osd/ceph-11
/dev/xxx11p1                 95968    5500      90468   6% /var/lib/ceph/osd/ceph-10
/dev/xxx13p1                 95968    5500      90468   6% /var/lib/ceph/osd/ceph-13
/dev/xxx0p1                  95968    5500      90468   6% /var/lib/ceph/osd/ceph-8
/dev/xxx15p1                 95968    5500      90468   6% /var/lib/ceph/osd/ceph-15
tmpfs                         6542836       0    6542836   0% /run/user/1001
ceph-fuse                  1666256896 3764224 1662492672   1% /share-fs/export
tmpfs                         6542836       0    6542836   0% /run/user/0
[root@hw110 tecs]# df -kh
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_sys-lv_root   99G  8.9G   85G  10% /
devtmpfs                     32G     0   32G   0% /dev
tmpfs                        32G   57M   32G   1% /dev/shm
tmpfs                        32G  215M   31G   1% /run
tmpfs                        32G     0   32G   0% /sys/fs/cgroup
/dev/sda2                   976M  190M  720M  21% /boot
/dev/xxx1p1               94M  5.4M   89M   6% /var/lib/ceph/osd/ceph-12
/dev/xxx12p1              94M  5.4M   89M   6% /var/lib/ceph/osd/ceph-9
/dev/xxx10p1              94M  5.4M   89M   6% /var/lib/ceph/osd/ceph-14
/dev/xxx14p1              94M  5.4M   89M   6% /var/lib/ceph/osd/ceph-11
/dev/xxx11p1              94M  5.4M   89M   6% /var/lib/ceph/osd/ceph-10
/dev/xxx13p1              94M  5.4M   89M   6% /var/lib/ceph/osd/ceph-13
/dev/xxx0p1               94M  5.4M   89M   6% /var/lib/ceph/osd/ceph-8
/dev/xxx15p1              94M  5.4M   89M   6% /var/lib/ceph/osd/ceph-15
tmpfs                       6.3G     0  6.3G   0% /run/user/1001
ceph-fuse                   1.6T  3.6G  1.6T   1% /share-fs/export
tmpfs                       6.3G     0  6.3G   0% /run/user/0
[root@hw110 tecs]# ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    9.00TiB     7.85TiB      1.15TiB         12.82
POOLS:
    NAME             ID     USED        %USED     MAX AVAIL     OBJECTS
    nfs_metadata     1       171MiB      0.02       1.03TiB         105
    nfs_data         2      3.59GiB      0.23       1.55TiB        2341
    pool1            9       779GiB     51.86        723GiB      202249
    cinder           10          0B         0        723GiB           0
    glance           11          0B         0        723GiB           0
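
Note how the numbers above line up with ceph df: with both data pools attached, ceph-fuse reports the raw cluster totals (9.1T size / 1.2T used / 7.9T avail, i.e. GLOBAL SIZE / RAW USED / AVAIL), whereas after rm_data_pool it reports roughly nfs_data's 1.55TiB MAX AVAIL plus its 3.59GiB USED (1666256896 1K-blocks ≈ 1.55TiB).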

#1

Updated by huanwen ren over 5 years ago

#2

Updated by Patrick Donnelly over 5 years ago

  • Status changed from New to Fix Under Review
  • Assignee set to huanwen ren
  • Target version set to v14.0.0
#3

Updated by Patrick Donnelly over 5 years ago

  • Status changed from Fix Under Review to Rejected

See PR discussion.
