Bug #46021
nautilus 14.2.9: RGW compression does not take effect, using command “radosgw-admin zone placement modify……
Status:
Can't reproduce
Priority:
Normal
Assignee:
-
Target version:
-
% Done:
0%
Source:
Community (user)
Tags:
doc
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
The same issue still seems to exist in 14.2.9, even though it was supposedly fixed in 14.2.5: https://tracker.ceph.com/issues/41981
The reference doc: https://docs.ceph.com/docs/master/radosgw/compression/
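For reference, the compression doc linked above enables compression with a command along these lines (the zone and placement names below match my cluster; the restart step and its unit name are assumptions that vary by deployment):

```shell
# Enable zlib compression on the STANDARD storage class of the
# default placement target (names taken from the zone dump below).
radosgw-admin zone placement modify \
  --rgw-zone=rook-ceph-store \
  --placement-id=default-placement \
  --storage-class=STANDARD \
  --compression=zlib

# Compression only applies to newly written objects, and running
# gateways must be restarted to pick up the change (unit name
# varies by deployment; Rook restarts the RGW pods instead).
systemctl restart ceph-radosgw.target
```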
Here are the results from my cluster.
# ceph version
ceph version 14.2.9 (581f22da52345dba46ee232b73b990f06029a2a0) nautilus (stable)
# ceph status
  cluster:
    id:     9491ad1a-cb7b-4b07-9218-676575b44285
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 2h)
    mgr: a(active, since 37h)
    mds: myfs:1 {0=myfs-a=up:active} 1 up:standby-replay
    osd: 3 osds: 3 up (since 4d), 3 in (since 4d)
    rgw: 1 daemon active (rook.ceph.store.a)

  data:
    pools:   10 pools, 176 pgs
    objects: 1.36k objects, 4.0 GiB
    usage:   15 GiB used, 105 GiB / 120 GiB avail
    pgs:     176 active+clean

  io:
    client: 852 B/s rd, 1 op/s rd, 0 op/s wr
# radosgw-admin zone list
{
    "default_info": "327ad9f8-0bed-43ed-8934-1f85f110b11a",
    "zones": [
        "rook-ceph-store"
    ]
}
bash-4.2# radosgw-admin zone get
{
    "id": "327ad9f8-0bed-43ed-8934-1f85f110b11a",
    "name": "rook-ceph-store",
    "domain_root": "rook-ceph-store.rgw.meta:root",
    "control_pool": "rook-ceph-store.rgw.control",
    "gc_pool": "rook-ceph-store.rgw.log:gc",
    "lc_pool": "rook-ceph-store.rgw.log:lc",
    "log_pool": "rook-ceph-store.rgw.log",
    "intent_log_pool": "rook-ceph-store.rgw.log:intent",
    "usage_log_pool": "rook-ceph-store.rgw.log:usage",
    "reshard_pool": "rook-ceph-store.rgw.log:reshard",
    "user_keys_pool": "rook-ceph-store.rgw.meta:users.keys",
    "user_email_pool": "rook-ceph-store.rgw.meta:users.email",
    "user_swift_pool": "rook-ceph-store.rgw.meta:users.swift",
    "user_uid_pool": "rook-ceph-store.rgw.meta:users.uid",
    "otp_pool": "rook-ceph-store.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "rook-ceph-store.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "rook-ceph-store.rgw.buckets.data",
                        "compression_type": "zlib"
                    }
                },
                "data_extra_pool": "rook-ceph-store.rgw.buckets.non-ec",
                "index_type": 0
            }
        }
    ],
    "metadata_heap": "",
    "realm_id": ""
}
bash-4.2# dd if=/dev/zero of=file1 count=1 bs=10G
bash-4.2# ls -l file1
-rw-r--r--. 1 root root 2147479552 Jun 15 19:47 file1
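Incidentally, file1 comes out as 2147479552 bytes (2 GiB - 4 KiB) rather than 10 GiB: Linux caps a single read/write syscall at that size, and dd with count=1 issues exactly one call. Using many smaller blocks writes the full amount; this doesn't affect the compression question, but a sketch for completeness (path and sizes are just for illustration):

```shell
# count=1 bs=10G is truncated to one 2 GiB - 4 KiB transfer by the kernel;
# ten 1 GiB blocks avoid the per-call cap and produce a real 10 GiB file.
dd if=/dev/zero of=file1 bs=1G count=10
ls -l file1
```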
bash-4.2# s3cmd put file1 --no-ssl --host=${AWS_ENDPOINT} --host-bucket= s3://rookbucket
...
bash-4.2# radosgw-admin bucket stats --bucket=rookbucket
...
    "usage": {
        "rgw.main": {
            "size": 2147479552,
            "size_actual": 2147479552,
            "size_utilized": 2147479552,
            "size_kb": 2097148,
            "size_kb_actual": 2097148,
            "size_kb_utilized": 2097148,
            "num_objects": 1
        },
...
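For context on reading this output: size_utilized is the field that reflects bytes actually stored after compression, so for a zero-filled file it should be far smaller than size when compression is working. A minimal sketch of that check, using the numbers reported above:

```python
# Bucket stats as reported above: size_utilized equals size,
# i.e. the object was stored uncompressed.
stats = {
    "rgw.main": {
        "size": 2147479552,
        "size_utilized": 2147479552,
    }
}

main = stats["rgw.main"]
ratio = main["size_utilized"] / main["size"]
print(f"compression ratio: {ratio:.2f}")  # 1.00 means no compression applied
```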
Updated by Abhishek Lekshmanan almost 4 years ago
Were the gateways restarted after the config changes?
Updated by Casey Bodley almost 4 years ago
- Status changed from New to Need More Info
Updated by Casey Bodley almost 4 years ago
- Status changed from Need More Info to Can't reproduce
Updated by Yan Zhao almost 4 years ago
https://github.com/rook/rook/issues/5574
Based on the ticket above, this issue is fixed in Ceph 15.x.