Bug #10316 (closed)

pool quota has no effect for cephfs client

Added by wei qiaomiao over 9 years ago. Updated almost 7 years ago.

Status: Won't Fix
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Regression: No
Severity: 3 - minor

Description

ceph version: 0.87
ceph cluster OS: Red Hat 6
mds num: 1

Steps:
(1) Set a 1 GB max-bytes quota on pool 'data':
[root@A22832417 ~]# ceph osd pool get-quota data
quotas for pool 'data':
max objects: N/A
max bytes : 1024MB
(2) Mount the filesystem with the FUSE client:
ceph-fuse -m ip:/ /test
(3) Copy data to /test.
More than 1 GB of data can be copied to /test, while 'ceph -s' warns that pool 'data' is full (the steps are sketched as commands after the status output below):

[root@A22832417 ~]# ceph -s
cluster 2d4ff1df-8c42-40b7-b71e-94e13075aef1
health HEALTH_WARN pool 'data' is full
monmap e10: 2 mons at {brs17=10.118.202.17:6789/0,ceph-mon1.test=10.118.202.17:6787/0}, election epoch 10, quorum 0,1 ceph-mon1.test,brs17
mdsmap e62: 1/1/1 up {0=mds0=up:active}
osdmap e561: 4 osds: 4 up, 4 in
pgmap v154846: 741 pgs, 11 pools, 1050 MB data, 318 objects
375 GB used, 485 GB / 906 GB avail
741 active+clean
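
For reference, the steps above roughly correspond to the following commands (a sketch; the pool name and mount point are from the report, while the dd invocations and the monitor address placeholder are assumptions):

# (1) set a 1 GB byte quota on pool 'data'
ceph osd pool set-quota data max_bytes $((1024 * 1024 * 1024))
ceph osd pool get-quota data
# (2) mount CephFS via the FUSE client (monitor address elided)
ceph-fuse -m <mon-ip> /test
# (3) write more than 1 GB; the copies succeed even though the quota
#     is exceeded, and 'ceph -s' reports the pool as full
dd if=/dev/zero of=/test/512M.img bs=1M count=512
dd if=/dev/zero of=/test/512M.img1 bs=1M count=512
ceph -s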

Files

data_obj.txt (5.41 KB) - wei qiaomiao, 12/15/2014 07:19 PM
Actions #1

Updated by Greg Farnum over 9 years ago

I suspect this is just that you're writing into the local page cache (and the Ceph client's userspace cache), rather than the pool quota being ignored entirely. Can you verify that the data actually gets put into RADOS?

Anyway, if so, we'll need to look at proactively respecting RADOS quotas within the CephFS clients. :/
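
One way to verify this (a sketch using standard rados/ceph CLI commands; the pool name is from the report) is to compare what the client shows against what RADOS actually stores:

# flush dirty pages out of the client caches first
sync
# per-pool object and byte counts as seen by RADOS
rados df | grep -w data
# count the data objects actually stored in the pool
rados -p data ls | wc -l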

Actions #2

Updated by wei qiaomiao over 9 years ago

I think the data has already been written to RADOS, because my ceph cluster and client were rebooted and my data was not lost. What confuses me is that several objects cannot be found in pool 'data'.
This is all my data on the client:
[root@A22832417 test]# ll -aih
total 1.2G
1 drwxr-xr-x.  1 root root 1.2G Dec 15 15:35 .
2 dr-xr-xr-x. 29 root root 4.0K Dec 20 04:43 ..
1099511627795 -rw-r--r--. 1 root root 512M Dec 19 23:41 512M.img
1099511627796 -rw-r--r--. 1 root root 512M Dec 19 23:39 512M.img1
1099511627797 -rw-r--r--. 1 root root 100M Dec 19 23:42 512M.img2
1099511627791 -rw-r--r--. 1 root root 6.0M Nov 10 23:52 6M.img
1099511627787 -rw-r--r--. 1 root root   46 Nov 10 19:42 eee.txt
1099511627793 -rw-r--r--. 1 root root   46 Nov 25 22:54 eee.txt1
1099511628800 -rw-r--r--. 1 root root  84M Dec 20 00:01 nfs.cap
1099511627776 -rw-r--r--. 1 root root 5.9M Nov 10 01:10 tipc.ko
but I can't find objects for inode '1099511628800' in the output of 'rados ls -p data'. See the full 'rados ls -p data' output in the attached file 'data_obj.txt'.
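
For reference, CephFS names a file's data objects <inode-in-hex>.<block-index>, so the inode numbers from the 'll -aih' listing above can be translated into the object names 'rados ls' would show (a sketch; the inode value is nfs.cap's from the listing):

# inode 1099511628800 in hex is 10000000400, so the file's first
# object should be named '10000000400.00000000'
printf '%x.00000000\n' 1099511628800
rados -p data stat 10000000400.00000000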

Actions #3

Updated by Greg Farnum over 9 years ago

  • Status changed from New to Won't Fix

Oh, you mean you got to 1050MB instead of exactly 1GB.

Yeah, that's expected behavior; these are all soft limits. Hard limits would be exceedingly expensive to enforce and would require a different design.
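
A quick way to observe this soft-limit behavior (a sketch; exact output varies by release) is to compare what the pool actually stores against the configured quota once the warning appears:

# bytes actually stored can exceed max_bytes by whatever was in flight
ceph df
ceph osd pool get-quota data
# shows the 'pool ... is full' warning once the quota is exceeded
ceph health detail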

Actions #4

Updated by Greg Farnum almost 7 years ago

  • Project changed from Ceph to CephFS
  • Category deleted (24)
