Bug #51422


Bucket quota does not work correctly.

Added by 海航 于 almost 3 years ago. Updated over 2 years ago.

Status:
Triaged
Priority:
Normal
Assignee:
Target version:
-
% Done:

0%

Source:
Tags:
quota
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
rgw
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Sorry to disturb; I'm new to the Ceph community.
I've run into a problem with quota settings (Ceph version 14.2.11 nautilus stable).
I created an RGW user "admin" and set a quota as follows (limiting the bucket quota max-size to 300M):
[root@ceph test]# radosgw-admin quota set --quota-scope bucket --uid admin --max-size 300M
[root@ceph test]# radosgw-admin quota enable --quota-scope bucket --uid admin
[root@ceph test]# radosgw-admin user info --uid admin
{
"user_id": "admin",
"display_name": "admin",
"email": "",
"suspended": 0,
"max_buckets": 1000,
.........
"bucket_quota": {
"enabled": true,
"check_on_raw": false,
"max_size": 314572800,
"max_size_kb": 307200,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
.........
}
Then I used s3cmd to create a bucket named "dlweb-haha" and tried to upload a tar file named "python.tar", roughly 260M in size. The upload fails with a QuotaExceeded error:
[root@haihang test]# s3cmd put python.tar s3://dlweb-haha
upload: 'python.tar' -> 's3://dlweb-haha/python.tar' [part 1 of 17, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 37.97 MB/s done
upload: 'python.tar' -> 's3://dlweb-haha/python.tar' [part 2 of 17, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 39.59 MB/s done
upload: 'python.tar' -> 's3://dlweb-haha/python.tar' [part 3 of 17, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 29.11 MB/s done
upload: 'python.tar' -> 's3://dlweb-haha/python.tar' [part 4 of 17, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 43.30 MB/s done
upload: 'python.tar' -> 's3://dlweb-haha/python.tar' [part 5 of 17, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 91.28 MB/s done
ERROR:
Upload of 'python.tar' part 5 failed. Use
/usr/local/bin/s3cmd abortmp s3://dlweb-haha/python.tar 2~xGg3cURNNovcQlwaz_JpeLAjZCTj6CB
to abort the upload, or
/usr/local/bin/s3cmd --upload-id 2~xGg3cURNNovcQlwaz_JpeLAjZCTj6CB put ...
to continue the upload.
ERROR: S3 error: 403 (QuotaExceeded)
How could this happen? 260M is clearly less than the 300M limit.
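Failing at part 5 (about 75 MiB into a 260 MiB upload against a 300 MiB quota) suggests RGW is counting bytes beyond this upload alone. A minimal diagnostic sketch, assuming the bucket and user names above, is to compare what RGW has accounted with what s3cmd shows:

# Hedged diagnostic sketch (names taken from the report above)
# per-bucket usage as RGW has accounted it, including multipart parts
radosgw-admin bucket stats --bucket dlweb-haha
# user-level accounting, refreshed from the bucket index
radosgw-admin user stats --uid admin --sync-stats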


Files

quota_error.log (7.09 KB) — Log of repeating my operation. 海航 于, 07/12/2021 01:59 AM
Actions #1

Updated by 海航 于 almost 3 years ago

Additional note:
After the failure, s3cmd shows nothing in the bucket:
[root@haihang test]# s3cmd ls -rH s3://dlweb-haha
[root@haihang test]#

But when checking user status, it appears like:
[root@ceph test]# radosgw-admin user stats --uid=admin --sync-stats
{
"stats": {
"total_entries": 14,
"total_bytes": 157286400,
"total_bytes_rounded": 157286400
},
"last_stats_sync": "2021-06-30 09:08:23.882316Z",
"last_stats_update": "2021-06-30 09:08:23.877100Z"
}
Yet the RGW user "admin" actually has only one empty bucket, named "dlweb-haha".
Why does this happen?
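A plausible reading (an assumption, not confirmed in this thread): the stats above show 157286400 bytes (exactly 150 MiB) accounted even though the bucket listing is empty, which matches parts of the failed multipart upload that were never completed or aborted and therefore still count against the quota. s3cmd can list such in-progress uploads directly:

# Hedged sketch: list multipart uploads that were started but never
# completed or aborted; their parts still consume accounted bytes
s3cmd multipart s3://dlweb-haha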

Actions #2

Updated by Casey Bodley almost 3 years ago

  • Assignee set to Mark Kogan
  • Tags set to quota
Actions #3

Updated by Mark Kogan almost 3 years ago

  • Assignee deleted (Mark Kogan)

does not reproduce with current master; will re-check with the specific version (14.2.11)

Another question: was the admin user created as a system user?
(radosgw-admin user create --uid=admin ... --system true)
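For context, a hedged note: as far as I understand, RGW system users bypass quota checks, which would explain why the question matters. If the flag was set by accident, it can be cleared with user modify (a sketch mirroring the flag syntax above):

# Hedged sketch: clear the system flag; a system user is exempt from
# quota enforcement, so a quota set on it would never take effect
radosgw-admin user modify --uid admin --system false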

Actions #4

Updated by Mark Kogan almost 3 years ago

  • Assignee set to Mark Kogan
Actions #5

Updated by Mark Kogan almost 3 years ago

does not reproduce on upstream v14.2.11 tag:

git branch -vv 
* (detached from v14.2.11) f7fdb2f 14.2.11

./bin/radosgw-admin quota set --quota-scope bucket --uid cosbench --max-size 300M
./bin/radosgw-admin quota enable --quota-scope bucket --uid cosbench
./bin/radosgw-admin user info --uid cosbench | jq

fallocate -l 260M python.tar

s3cmd put python.tar s3://bkt
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 1 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    73.91 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 2 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    57.59 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 3 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    75.78 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 4 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    43.39 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 5 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    56.61 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 6 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    56.89 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 7 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    57.97 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 8 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    57.15 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 9 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    56.83 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 10 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    57.08 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 11 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    58.42 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 12 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    58.55 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 13 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    56.74 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 14 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    58.86 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 15 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    60.05 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 16 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    60.10 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 17 of 18, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    60.09 MB/s  done
upload: 'python.tar' -> 's3://bkt/python.tar'  [part 18 of 18, 5MB] [1 of 1]
 5242880 of 5242880   100% in    0s    42.43 MB/s  done

Actions #6

Updated by 海航 于 almost 3 years ago

Further details (single Ceph cluster):
[root@ceph ~]# ceph version
ceph version 14.2.11 (f7fdb2f52131f54b891a2ec99d8205561242cdaf) nautilus (stable)
[root@ceph ~]# radosgw-admin user info --uid admin
{
"user_id": "admin",
"display_name": "admin",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"subusers": [],
"keys": [ {
"user": "admin",
"access_key": "4E6MUX8DPXZWQ33KMBUJ",
"secret_key": "QILXh1D0NTR43sHcoKayOX8c6YfjTTOHdLpxrh4k"
}
],
"swift_keys": [],
"caps": [ {
"type": "buckets",
"perm": "read"
}, {
"type": "metadata",
"perm": "read"
}, {
"type": "usage",
"perm": "read"
}, {
"type": "users",
"perm": "read"
}, {
"type": "zone",
"perm": "read"
}
],
"op_mask": "read, write, delete",
"default_placement": "",
"default_storage_class": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": 1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": true,
"check_on_raw": false,
"max_size": 314572800,
"max_size_kb": 307200,
"max_objects": 10
},
"temp_url_keys": [],
"type": "rgw",
"mfa_ids": []
}
On another node, connected to the Ceph cluster with s3cmd (using the admin user's access_key/secret_key):
[root@haihang test]# du -sh file
200M file
[root@haihang test]# ls -lh file
-rw-r--r-- 1 root root 200M Jun 29 14:18 file
[root@haihang test]# s3cmd ls -rH
2021-06-30 08:56 s3://dlweb-haha
2021-06-30 05:39 s3://dlweb-hehe
[root@haihang test]# s3cmd ls -rH s3://dlweb-hehe
[root@haihang test]# s3cmd ls -rH s3://dlweb-haha
[root@haihang test]# s3cmd la

[root@haihang test]# s3cmd put file s3://dlweb-haha
upload: 'file' -> 's3://dlweb-haha/file' [part 1 of 14, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 96.94 MB/s done
ERROR:
Upload of 'file' part 1 failed. Use
/usr/local/bin/s3cmd abortmp s3://dlweb-haha/file 2~pUce5H0Zm9mNfrUXRlywml4ubUkdjxM
to abort the upload, or
/usr/local/bin/s3cmd --upload-id 2~pUce5H0Zm9mNfrUXRlywml4ubUkdjxM put ...
to continue the upload.
ERROR: S3 error: 403 (QuotaExceeded)

It seems that there is some garbage or cached data in the cluster that occupies the storage and is not released.
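If the accounting rather than the data is stale, one hedged option (a sketch, not verified on 14.2.11) is to ask RGW to recheck the bucket index and recalculate its usage stats:

# Hedged sketch: verify the bucket index and recompute accounted usage
radosgw-admin bucket check --bucket dlweb-haha --check-objects --fix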

Actions #7

Updated by 海航 于 almost 3 years ago

Mark Kogan wrote:

does not reproduce on upstream v14.2.11 tag:
[...]

Hi Mark Kogan:
I reproduced it again; see the attached quota_error.log.

Actions #8

Updated by 海航 于 almost 3 years ago

海航 于 wrote:

Mark Kogan wrote:

does not reproduce on upstream v14.2.11 tag:
[...]

Hi Mark Kogan:
I reproduced it again; see the attached quota_error.log.

It appears that once the QuotaExceeded error has been triggered (total accounted size > 500M), it persists: even after deleting the 200M object that was stored in the bucket and trying to upload a 200M file again (at which point only 200M should be in the bucket), QuotaExceeded still comes up.
In my case, I used "s3cmd del s3://dlweb-haha/file*" to empty the bucket; uploading a file was still not allowed.
[root@haihang test]# s3cmd ls -rH s3://dlweb-haha
[root@haihang test]# ls -lh file
-rw-r--r-- 1 root root 200M Jun 29 14:18 file
[root@haihang test]# s3cmd put file s3://dlweb-haha
upload: 'file' -> 's3://dlweb-haha/file' [part 1 of 2, 100MB] [1 of 1]
104857600 of 104857600 100% in 1s 98.22 MB/s done
ERROR:
Upload of 'file' part 1 failed. Use
/usr/local/bin/s3cmd abortmp s3://dlweb-haha/file 2~fcRdmE0nb9Q9IHHIGuEe__i4uvDnF_D
to abort the upload, or
/usr/local/bin/s3cmd --upload-id 2~fcRdmE0nb9Q9IHHIGuEe__i4uvDnF_D put ...
to continue the upload.
ERROR: S3 error: 403 (QuotaExceeded)

On the master node, checking user stats (user "admin" has only one bucket, named dlweb-haha):
[root@ceph ~]# radosgw-admin user stats --uid=admin --sync-stats
{
"stats": {
"total_entries": 19,
"total_bytes": 513802240,
"total_bytes_rounded": 513802240
},
"last_stats_sync": "2021-07-12 04:57:35.986969Z",
"last_stats_update": "2021-07-12 04:57:35.983081Z"
}

How can this happen?
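If the leftover bytes come from unfinished multipart uploads (an assumption; each failed s3cmd put above started one), aborting them should release the accounted space. The error messages already print the required upload id:

# Hedged cleanup sketch: list unfinished multipart uploads, then abort each
# one using the object key and upload id that the listing (or error) reports
s3cmd multipart s3://dlweb-haha
s3cmd abortmp s3://dlweb-haha/file 2~fcRdmE0nb9Q9IHHIGuEe__i4uvDnF_D   # id from the error above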

Actions #9

Updated by Mark Kogan almost 3 years ago

  • Status changed from New to Triaged
Actions #10

Updated by JS Landry almost 3 years ago

Hi, do you have a quota on the bucket itself?

radosgw-admin bucket stats --bucket dlweb-haha
Actions #11

Updated by 海航 于 almost 3 years ago

JS Landry wrote:

Hi, do you have a quota on the bucket itself?

[...]

Hi, as far as I know, there are no other quota settings apart from the user-level quota. Output below for your reference:
[root@ceph ~]# radosgw-admin bucket stats --bucket dlweb-haha
{
"bucket": "dlweb-haha",
"num_shards": 0,
"tenant": "",
"zonegroup": "eedff22c-73f0-47de-80f8-102ac752aa39",
"placement_rule": "default-placement",
"explicit_placement": {
"data_pool": "",
"data_extra_pool": "",
"index_pool": ""
},
"id": "2aa5c558-5e0a-4dd0-9a00-c92193e675c4.16188.1",
"marker": "2aa5c558-5e0a-4dd0-9a00-c92193e675c4.16188.1",
"index_type": "Normal",
"owner": "admin",
"ver": "0#133",
"master_ver": "0#0",
"mtime": "2021-07-12 01:02:35.505515Z",
"max_marker": "0#",
"usage": {
"rgw.main": {
"size": 513802240,
"size_actual": 513802240,
"size_utilized": 513802240,
"size_kb": 501760,
"size_kb_actual": 501760,
"size_kb_utilized": 501760,
"num_objects": 10
},
"rgw.multimeta": {
"size": 0,
"size_actual": 0,
"size_utilized": 243,
"size_kb": 0,
"size_kb_actual": 0,
"size_kb_utilized": 1,
"num_objects": 9
}
},
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
}
}
[root@haihang test]# s3cmd ls -rH s3://dlweb-haha
[root@haihang test]#
[root@haihang test]# ls -lh
total 892M
-rw-r--r-- 1 root root   21 Jun 30 16:48 10.txt
-rw-r--r-- 1 root root    0 Jun 29 13:47 11.txt
-rw-r--r-- 1 root root   20 Jun 30 16:48 1.txt
-rw-r--r-- 1 root root   20 Jun 30 16:48 2.txt
-rw-r--r-- 1 root root   20 Jun 30 16:48 3.txt
-rw-r--r-- 1 root root   20 Jun 30 16:48 4.txt
-rw-r--r-- 1 root root   20 Jun 30 16:48 5.txt
-rw-r--r-- 1 root root   20 Jun 30 16:48 6.txt
-rw-r--r-- 1 root root   20 Jun 30 16:48 7.txt
-rw-r--r-- 1 root root   20 Jun 30 16:48 8.txt
-rw-r--r-- 1 root root   20 Jun 30 16:48 9.txt
-rw-r--r-- 1 root root 200M Jun 29 14:18 file
-rw-r--r-- 1 root root 200M Jul 12 08:59 file1
-rw------- 1 root root 246M Jun 29 10:36 python1.tar
-rw------- 1 root root 246M Jun 29 10:28 python.tar
[root@haihang test]# s3cmd put python.tar s3://dlweb-haha
upload: 'python.tar' -> 's3://dlweb-haha/python.tar' [part 1 of 3, 100MB] [1 of 1]
104857600 of 104857600 100% in 0s 100.46 MB/s done
ERROR:
Upload of 'python.tar' part 1 failed. Use
/usr/local/bin/s3cmd abortmp s3://dlweb-haha/python.tar 2~D_Y9EtadgHLPnrm79TINxRXz6zG3_Fd
to abort the upload, or
/usr/local/bin/s3cmd --upload-id 2~D_Y9EtadgHLPnrm79TINxRXz6zG3_Fd put ...
to continue the upload.
ERROR: S3 error: 403 (QuotaExceeded)
[root@haihang test]# s3cmd ls -rH s3://dlweb-haha
[root@haihang test]#

Weird, right?
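A hedged reading of the bucket stats above: "size": 513802240 bytes is exactly 490 MiB (513802240 / 1048576 = 490), well above the 300 MiB (314572800-byte) quota, so the 403 is consistent with what RGW has accounted. And "rgw.multimeta" with "num_objects": 9 looks like nine multipart-upload markers, i.e. uploads that were begun but never completed or aborted.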

Actions #12

Updated by JS Landry over 2 years ago

Hi, object versioning maybe? Deleted objects still consume quota space.
https://docs.ceph.com/en/pacific/radosgw/s3/bucketops/#enable-suspend-bucket-versioning

How does the garbage collection list look?

radosgw-admin gc list

If the gc is stuck, it could explain it, but I doubt it.
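A hedged addition to the gc suggestion above: the default listing omits entries whose expiration has not yet passed, and gc can also be run manually:

# Hedged sketch: show all gc entries, then process the gc queue by hand
radosgw-admin gc list --include-all
radosgw-admin gc process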

Can you list the objects using list / radoslist / bi list?

radosgw-admin bucket list --bucket dlweb-haha
radosgw-admin bucket radoslist --bucket dlweb-haha
radosgw-admin bi list --bucket dlweb-haha

Using the bucket id, can you list the objects at the rados level? (rados -p pool ls / listomapkeys)
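For instance, a hedged sketch of that rados-level listing, using the bucket id/marker from the earlier bucket stats output and the default pool names (both assumptions about this cluster):

# data objects carry the bucket marker as a prefix
rados -p default.rgw.buckets.data ls | grep 2aa5c558-5e0a-4dd0-9a00-c92193e675c4.16188.1
# with num_shards 0, the bucket index is a single ".dir.<marker>" object
rados -p default.rgw.buckets.index listomapkeys .dir.2aa5c558-5e0a-4dd0-9a00-c92193e675c4.16188.1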

Sorry, I don't have any solution for you; I'm not a Ceph dev, but a user with similar problems to yours, browsing the tracker looking for answers.

Actions #13

Updated by 海航 于 over 2 years ago

JS Landry wrote:

Hi, object versioning maybe? Deleted objects still consume quota space.
https://docs.ceph.com/en/pacific/radosgw/s3/bucketops/#enable-suspend-bucket-versioning

How does the garbage collection list look? [...]
If the gc is stuck, it could explain it, but I doubt it.

Can you list the objects using list / radoslist / bi list?
[...]

Using the bucket id, can you list the object at the rados level? (rados -p pool ls / listomapkeys)

Sorry, I don't have any solution for you; I'm not a Ceph dev, but a user with similar problems to yours, browsing the tracker looking for answers.

Thank you for your suggestion. I am new to Ceph, and I've tried to collect what you suggested (radosgw-admin gc list, radosgw-admin bucket list --bucket dlweb-haha, etc.).
It seems the gc list is empty:
[root@ceph product_pic_backup]# radosgw-admin gc list
[]
[root@ceph product_pic_backup]# radosgw-admin gc list --bucket dlweb-haha
[]
[root@ceph product_pic_backup]# radosgw-admin bucket list --bucket dlweb-haha
[ {
"name": "_multipart_file.2~FDjfoMMChzLQZbfOpk9Dmkl7WI_PSoS.meta",
"instance": "",
"ver": {
"pool": 7,
"epoch": 8
},
"locator": "",
"exists": "true",
"meta": {
"category": 3,
"size": 27,
"mtime": "2021-07-12 04:46:51.043575Z",
"etag": "",
"storage_class": "",
"owner": "admin",
"owner_display_name": "admin",
"content_type": "application/octet-stream",
"accounted_size": 0,
"user_data": "",
"appendable": "false"
},
"tag": "_F24p2gCcnuRrKmMP73G-nbZ-kjdviVa",
"flags": 0,
"pending_map": [],
"versioned_epoch": 0
},
........................ {
"name": "_multipart_python.tar.2~yvtEBzK91g11-Cy6-PiF2_tBpPjik3Q.meta",
"instance": "",
"ver": {
"pool": 7,
"epoch": 6
},
"locator": "",
"exists": "true",
"meta": {
"category": 3,
"size": 27,
"mtime": "2021-07-12 01:49:13.642471Z",
"etag": "",
"storage_class": "",
"owner": "admin",
"owner_display_name": "admin",
"content_type": "application/x-tar",
"accounted_size": 0,
"user_data": "",
"appendable": "false"
},
"tag": "_EOnKNkKBqmOWdLUmPOue5tlUHRwQDO0",
"flags": 0,
"pending_map": [],
"versioned_epoch": 0
}
]
[root@ceph product_pic_backup]# radosgw-admin bucket radoslist --bucket dlweb-haha
2aa5c558-5e0a-4dd0-9a00-c92193e675c4.16188.1__multipart_file1.2~C4d44FmnTq8OBu0sSPKvk1HTUDQy9OO.1
.........
2aa5c558-5e0a-4dd0-9a00-c92193e675c4.16188.1__shadow_python.tar.2~bo1B9B3TGxxGRe4_q4TpDP7kb09pAJE.2_23
2aa5c558-5e0a-4dd0-9a00-c92193e675c4.16188.1__shadow_python.tar.2~bo1B9B3TGxxGRe4_q4TpDP7kb09pAJE.2_24
[root@ceph product_pic_backup]# radosgw-admin bi list --bucket dlweb-haha
[ {
"type": "plain",
"idx": "_multipart_file.2~FDjfoMMChzLQZbfOpk9Dmkl7WI_PSoS.meta",
"entry": {
"name": "_multipart_file.2~FDjfoMMChzLQZbfOpk9Dmkl7WI_PSoS.meta",
"instance": "",
"ver": {
"pool": 7,
"epoch": 8
},
"locator": "",
"exists": "true",
"meta": {
"category": 3,
"size": 27,
"mtime": "2021-07-12 04:46:51.043575Z",
"etag": "",
"storage_class": "",
"owner": "admin",
"owner_display_name": "admin",
"content_type": "application/octet-stream",
"accounted_size": 0,
"user_data": "",
"appendable": "false"
},
"tag": "_F24p2gCcnuRrKmMP73G-nbZ-kjdviVa",
"flags": 0,
"pending_map": [],
"versioned_epoch": 0
}
},
............ {
"type": "plain",
"idx": "_multipart_python.tar.2~yvtEBzK91g11-Cy6-PiF2_tBpPjik3Q.meta",
"entry": {
"name": "_multipart_python.tar.2~yvtEBzK91g11-Cy6-PiF2_tBpPjik3Q.meta",
"instance": "",
"ver": {
"pool": 7,
"epoch": 6
},
"locator": "",
"exists": "true",
"meta": {
"category": 3,
"size": 27,
"mtime": "2021-07-12 01:49:13.642471Z",
"etag": "",
"storage_class": "",
"owner": "admin",
"owner_display_name": "admin",
"content_type": "application/x-tar",
"accounted_size": 0,
"user_data": "",
"appendable": "false"
},
"tag": "_EOnKNkKBqmOWdLUmPOue5tlUHRwQDO0",
"flags": 0,
"pending_map": [],
"versioned_epoch": 0
}
}
]

In fact, I am not able to understand the meaning of this output.
BTW, which pool should I use to list the objects?
[root@ceph product_pic_backup]# ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
5 default.rgw.buckets.index
6 default.rgw.buckets.data
7 default.rgw.buckets.non-ec
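A hedged pointer on the pool question: with the default layout above, the bucket's data objects live in default.rgw.buckets.data and its index omap keys in default.rgw.buckets.index. The "_multipart_<object>.<upload-id>.meta" entries in the bi list output mark multipart uploads that were started but never completed or aborted, and the "__shadow_" rados objects hold their part data; both keep consuming accounted bytes. A quick way to count the leftover markers:

# Hedged sketch: count unfinished multipart-upload markers in the index;
# each represents an upload whose parts still count against the quota
radosgw-admin bi list --bucket dlweb-haha | grep '"idx": "_multipart_' | grep -c '\.meta"'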

Thank you very much for your efforts!
