Bug #10316

closed

pool quota has no effect for cephfs client

Added by wei qiaomiao over 9 years ago. Updated almost 7 years ago.

Status: Won't Fix
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Regression: No
Severity: 3 - minor

Description

ceph version: 0.87
ceph cluster OS: Red Hat 6
mds num: 1

Steps:
(1) Set the pool 'data' max-bytes quota to 1 GB:
[root@A22832417 ~]# ceph osd pool get-quota data
quotas for pool 'data':
max objects: N/A
max bytes : 1024MB
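The command used to set the quota is not shown in the report; a minimal sketch, assuming the standard set-quota subcommand (1073741824 bytes = 1 GiB):
[root@A22832417 ~]# ceph osd pool set-quota data max_bytes 1073741824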
(2) Mount the filesystem with the FUSE client:
ceph-fuse -m ip:/ /test
(3) Copy data to /test.
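For example, a write like the following would exceed the quota (a sketch; the file name and size are hypothetical):
[root@A22832417 ~]# dd if=/dev/zero of=/test/bigfile bs=1M count=2048   # hypothetical 2 GB write against a 1 GB quota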
The copy succeeds for well over 1 GB of data, while 'ceph -s' warns that pool 'data' is full:

[root@A22832417 ~]# ceph -s
cluster 2d4ff1df-8c42-40b7-b71e-94e13075aef1
health HEALTH_WARN pool 'data' is full
monmap e10: 2 mons at {brs17=10.118.202.17:6789/0,ceph-mon1.test=10.118.202.17:6787/0}, election epoch 10, quorum 0,1 ceph-mon1.test,brs17
mdsmap e62: 1/1/1 up {0=mds0=up:active}
osdmap e561: 4 osds: 4 up, 4 in
pgmap v154846: 741 pgs, 11 pools, 1050 MB data, 318 objects
375 GB used, 485 GB / 906 GB avail
741 active+clean
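One way to confirm that the pool holds more data than its quota allows (a sketch; 'ceph df' reports per-pool usage):
[root@A22832417 ~]# ceph df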

Files

data_obj.txt (5.41 KB), wei qiaomiao, 12/15/2014 07:19 PM