Feature #12404

"ceph pool set-quota max_bytes" fails to work

Added by ceph zte over 8 years ago. Updated over 4 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%

Source:
other
Tags:
Backport:
Reviewed:
Affected Versions:
Pull request ID:

Description

Hi, all!

My Ceph version is 0.87.

There are two behaviors in my Ceph usage that I cannot understand:

First, I set max_bytes on the pool gjghl to 1024 kB, but I can still put 5 MB of data into the pool.

Second, with max_bytes at 1024 kB, I can shrink the quota below the current usage without any warning. For example, while the pool gjghl has max_bytes 1024 kB and already holds 5 MB of data, I can set max_bytes to 100 bytes. I get no warning, and the pool still holds 5 MB of data. It seems that max_bytes has no effect on the pool gjghl.

[root@server22 ~]# ceph osd pool set-quota gjghl max_bytes 1048576
set-quota max_bytes = 1048576 for pool gjghl

[root@server22 ~]# ceph osd pool get-quota gjghl
quotas for pool 'gjghl':
  max objects: N/A
  max bytes  : 1024kB

[root@server22 ~]# du -sh ./test
5.0M    ./test

[root@server22 ~]# rados put songtest  ./test -p gjghl

[root@server22 ~]# rados ls -p gjghl
songtest

[root@server22 ~]# rados df
pool name        category      KB  objects  clones  degraded  unfound      rd   rd KB  wr  wr KB
ffdsf            -              0        1       0         0        0       0       0   0      0
gggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg  -  0  0  0  0  0  0  0  0  0
gjghl            -           5120        1       0         0        0       0       0   2   5120
hhhaaa           -              1        4       0         0        0  294809  221103   7      1
hjkjl            -              0        1       0         0        0       0       0   0      0
pool             -              0        0       0         0        0       0       0   0      0
pool1            -              1        3       0         0        0       6       3   7      1
rbd              -              0        0       0         0        0       0       0   0      0
test1112223333   -              1        4       0         0        0  542592  406944   0      0
tttttttt         -              0        1       0         0        0       0       0   0      0
  total used        39729608           15
  total avail       65076792
  total space      104806400

[root@server22 ~]# ceph df
GLOBAL:
    SIZE        AVAIL      RAW USED     %RAW USED
    102350M     63554M       38795M         37.90
POOLS:
    NAME                                                                 ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd                                                                  0          0         0        63554M           0
    tttttttt                                                             1          0         0        63554M           1
    pool                                                                 2          0         0        63554M           0
    ffdsf                                                                3          0         0        63554M           1
    pool1                                                                4         17         0        63554M           3
    hjkjl                                                                5          0         0        63554M           1
    hhhaaa                                                               6         17         0        63554M           4
    test1112223333                                                       7         17         0        63554M           4
    gggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg     8          0         0        63554M           0
    gjghl                                                                9      5120k         0        63554M           1

[root@server22 ~]# ceph osd pool set-quota gjghl max_bytes 100
set-quota max_bytes = 100 for pool gjghl
[root@server22 ~]# ceph osd pool get-quota gjghl
quotas for pool 'gjghl':
  max objects: N/A
  max bytes  : 100B

History

#1 Updated by huang jun over 8 years ago

You can also fetch the object you uploaded and compare the md5sums to see whether it is the same file.

#2 Updated by ceph zte over 8 years ago

It is the same md5sum.

[root@server22 home]# rados get songtest test1 -p gjghl
[root@server22 home]# md5sum test
b0d54726a04506c3dcfede0a521ec571 test
[root@server22 home]# md5sum test1
b0d54726a04506c3dcfede0a521ec571 test1
[root@server22 home]#

#3 Updated by Kefu Chai over 8 years ago

  • Subject changed from ceph pool set-quota max_bytes does not useful! to "ceph pool set-quota max_bytes" fails to work
  • Description updated (diff)

#4 Updated by Sage Weil over 8 years ago

  • Assignee set to Kefu Chai

#5 Updated by Kefu Chai over 8 years ago

  • Tracker changed from Bug to Feature
  • Status changed from New to 12
  • Assignee deleted (Kefu Chai)
  • Priority changed from High to Normal

ceph zte, I am afraid this is how Ceph works.

The client checks the fullness of the pool when it submits a request, but it does not compare the quota against the expected size of the pool once the write op has been applied. The fullness stats are propagated via the osdmap, which lags behind the actual usage, so a single write can push a pool well past its quota before the map catches up.

So I'd take this as a feature request, not a bug.
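For illustration, here is a minimal shell sketch of the lazy enforcement described above. The pool name quotatest is hypothetical, and whether the over-quota write returns an error or simply blocks varies by release:

# Hypothetical test pool with a 1 MB quota.
ceph osd pool create quotatest 8
ceph osd pool set-quota quotatest max_bytes 1048576

# The first oversized write is accepted: the client only consults the
# pool full flag in the osdmap it currently holds, which does not yet
# reflect this write.
dd if=/dev/zero of=/tmp/5M.bin bs=1M count=5
rados put bigobj /tmp/5M.bin -p quotatest

# Once the updated PG stats reach the monitors and a new osdmap is
# published, the pool is flagged full and cluster health reports it.
ceph health detail
ceph osd dump | grep quotatest    # the pool's flags should now include "full"

# Only at this point are further writes refused (or, with older
# clients, blocked until the quota is raised).
rados put bigobj2 /tmp/5M.bin -p quotatest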

#6 Updated by Patrick Donnelly over 4 years ago

  • Status changed from 12 to New
