Bug #22835 (closed)
client: the total size of fs is equal to the cluster size when using multiple data pools
Status: Won't Fix
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
ceph-qa-suite: fs
Component(FS): Client, ceph-fuse, libcephfs
Labels (FS): -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -
Description
Ceph Cluster
[root@cephfs103 cephfs]# ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    8807G     4627G        4179G         47.46
POOLS:
    NAME           ID     USED       %USED     MAX AVAIL     OBJECTS
    star_meta      14     57732k         0         1096G     2929761
    star_data      16      1216G     52.52         1096G     3188854
    star_data2     17          0         0         1096G           0
Only one data pool is in use
[root@cephfs103 cephfs]# ceph fs status
star - 0 clients
====
+------+--------+-------------+---------------+-------+-------+
| Rank | State | MDS | Activity | dns | inos |
+------+--------+-------------+---------------+-------+-------+
| 0 | active | 192.9.9.102 | Reqs: 0 /s | 0 | 0 |
+------+--------+-------------+---------------+-------+-------+
+-----------+----------+-------+-------+
| Pool | type | used | avail |
+-----------+----------+-------+-------+
| star_meta | metadata | 56.2M | 1096G |
| star_data | data | 1216G | 1096G |
+-----------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
| 192.9.9.104 |
+-------------+
[root@cephfs103 cephfs]# df -kh
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_sys-lv_root   99G   25G   69G  27% /
devtmpfs                     31G     0   31G   0% /dev
tmpfs                        32G   12K   32G   1% /dev/shm
tmpfs                        32G  130M   32G   1% /run
tmpfs                        32G     0   32G   0% /sys/fs/cgroup
/dev/sda2                   380M  134M  227M  38% /boot
tmpfs                       6.3G     0  6.3G   0% /run/user/0
ceph-fuse                   2.3T  1.2T  1.1T  53% /mnt
Multiple data pools are being used
[root@cephfs103 cephfs]# ceph fs status
star - 0 clients
====
+------+--------+-------------+---------------+-------+-------+
| Rank | State | MDS | Activity | dns | inos |
+------+--------+-------------+---------------+-------+-------+
| 0 | active | 192.9.9.102 | Reqs: 0 /s | 0 | 0 |
+------+--------+-------------+---------------+-------+-------+
+------------+----------+-------+-------+
| Pool | type | used | avail |
+------------+----------+-------+-------+
| star_meta | metadata | 54.8M | 1096G |
| star_data | data | 1216G | 1096G |
| star_data2 | data | 0 | 1096G |
+------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
| 192.9.9.104 |
+-------------+
[root@cephfs103 cephfs]# df -kh
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_sys-lv_root   99G   25G   69G  27% /
devtmpfs                     31G     0   31G   0% /dev
tmpfs                        32G   12K   32G   1% /dev/shm
tmpfs                        32G  130M   32G   1% /run
tmpfs                        32G     0   32G   0% /sys/fs/cgroup
/dev/sda2                   380M  134M  227M  38% /boot
tmpfs                       6.3G     0  6.3G   0% /run/user/0
ceph-fuse                   8.7T  4.1T  4.6T  48% /mnt
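
For reference, both df totals can be read off the numbers above (approximately; the snapshots were taken at different moments and df rounds to one decimal place):

one data pool:   used + MAX AVAIL of star_data  =  1216G + 1096G  =  2312G  ≈  2.3T
two data pools:  raw SIZE of the whole cluster  =  8807G                   ≈  8.7T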
Expected result
The total size reported by df should equal the size of star_data plus the size of star_data2.
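
Spelled out with the figures above (taking the "size" of a pool to mean its used space plus its MAX AVAIL, which appears to be the reporter's reading):

size(star_data) + size(star_data2) = (1216G + 1096G) + (0 + 1096G) = 3408G ≈ 3.3T

i.e. well below the 8.7T the client actually reports.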
Updated by Patrick Donnelly over 6 years ago
- Status changed from New to Won't Fix
This is intended. To avoid double-counting available space, the client simply returns the total raw space in the cluster. Consider that one of the data pools may have no replication or erasure coding (i.e. it could use all of the raw space).
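
A minimal sketch of that logic, assuming a simplified client: with a single data pool, the filesystem's size can be derived from that pool's own statistics (used + MAX AVAIL), but with several data pools the per-pool MAX AVAIL figures all describe the same underlying raw space, so summing them would double-count it and the client reports the cluster's raw totals instead. This is illustrative only, not the actual Client::statfs code; the types and the fs_statfs() helper are hypothetical.

// Illustrative sketch only -- not the real Ceph code path. Hypothetical
// types and a fs_statfs() helper model the behaviour described above.
#include <cstdint>
#include <cstdio>
#include <vector>

struct PoolStat {
    uint64_t used;       // bytes stored in the pool
    uint64_t max_avail;  // bytes the pool could still accept
};

struct ClusterStat {
    uint64_t raw_size;   // total raw capacity across all OSDs
    uint64_t raw_avail;  // raw capacity still free
};

struct FsStat {
    uint64_t size;
    uint64_t avail;
};

FsStat fs_statfs(const std::vector<PoolStat>& data_pools,
                 const ClusterStat& cluster) {
    if (data_pools.size() == 1) {
        // A single data pool fully describes the filesystem's capacity.
        const PoolStat& p = data_pools.front();
        return {p.used + p.max_avail, p.max_avail};
    }
    // Multiple data pools draw on the same raw space; summing their
    // max_avail values would double-count it, so fall back to raw totals.
    return {cluster.raw_size, cluster.raw_avail};
}

int main() {
    const uint64_t G = 1ULL << 30;
    ClusterStat cluster{8807 * G, 4627 * G};  // GLOBAL line of `ceph df` above
    PoolStat star_data{1216 * G, 1096 * G};
    PoolStat star_data2{0, 1096 * G};

    FsStat one = fs_statfs({star_data}, cluster);
    FsStat two = fs_statfs({star_data, star_data2}, cluster);
    std::printf("one pool:  %.1fT\n", double(one.size) / (1ULL << 40));
    std::printf("two pools: %.1fT\n", double(two.size) / (1ULL << 40));
    // Prints about 2.3T and 8.6T, in line with the two df outputs above
    // (up to rounding and the cluster drifting between the snapshots).
    return 0;
}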