Bug #59242
closed[crimson] Pool compression does not take effect
Description
It has been observed that the amount of data stored in a pool with compression enabled is almost the same as the amount of data stored in a pool with no compression.
Steps -
Creating three pools and enabling the rados application on each: one with no compression, one with snappy compression, and one with zlib compression
# ceph osd pool create re_pool_compress_snappy 32 32
pool 're_pool_compress_snappy' created
# ceph osd pool create re_pool_compress_zlib 32 32
pool 're_pool_compress_zlib' created
# ceph osd pool create re_pool_no_compress 32 32
pool 're_pool_no_compress' created
# ceph osd pool application enable re_pool_no_compress rados
enabled application 'rados' on pool 're_pool_no_compress'
# ceph osd pool application enable re_pool_compress_zlib rados
enabled application 'rados' on pool 're_pool_compress_zlib'
# ceph osd pool application enable re_pool_compress_snappy rados
enabled application 'rados' on pool 're_pool_compress_snappy'
Setting snappy compression algorithm
# ceph osd pool set re_pool_compress_snappy compression_algorithm snappy
set pool 23 compression_algorithm to snappy
# ceph osd pool set re_pool_compress_snappy compression_mode aggressive
set pool 23 compression_mode to aggressive
# ceph osd pool set re_pool_compress_snappy compression_required_ratio 0.3
set pool 23 compression_required_ratio to 0.3
# ceph osd pool set re_pool_compress_snappy compression_min_blob_size 1B
set pool 23 compression_min_blob_size to 1B
Setting zlib compression
# ceph osd pool set re_pool_compress_zlib compression_algorithm zlib
set pool 24 compression_algorithm to zlib
# ceph osd pool set re_pool_compress_zlib compression_mode passive
set pool 24 compression_mode to passive
# ceph osd pool set re_pool_compress_zlib compression_required_ratio 0.7
set pool 24 compression_required_ratio to 0.7
# ceph osd pool set re_pool_compress_zlib compression_min_blob_size 10B
set pool 24 compression_min_blob_size to 10B
Changes taking effect as seen in ceph osd dump
# ceph osd dump | grep pool
2023-03-31T01:58:30.596+0000 7f461aeb0700 -1 WARNING: the following dangerous and experimental features are enabled: crimson
2023-03-31T01:58:30.597+0000 7f461aeb0700 -1 WARNING: the following dangerous and experimental features are enabled: crimson
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode off last_change 27 flags hashpspool,nopgchange,crimson stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr read_balance_score 9.09
pool 2 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 33 flags hashpspool,nopgchange,selfmanaged_snaps,crimson stripe_width 0 application rbd read_balance_score 2.25
pool 8 're_pool_compress_snappy' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 85 flags hashpspool,nopgchange,crimson stripe_width 0 compression_algorithm snappy compression_min_blob_size 1 compression_mode aggressive compression_required_ratio 0.3 application rados read_balance_score 1.41
pool 9 're_pool_compress_zlib' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 89 flags hashpspool,nopgchange,crimson stripe_width 0 compression_algorithm zlib compression_min_blob_size 10 compression_mode passive compression_required_ratio 0.7 application rados read_balance_score 1.69
pool 10 're_pool_no_compress' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 77 flags hashpspool,nopgchange,crimson stripe_width 0 application rados read_balance_score 1.69
Writing data to each pool
# rados --no-log-to-stderr -b 40KB -p re_pool_no_compress bench 100 write --no-cleanup
# rados --no-log-to-stderr -b 40KB -p re_pool_compress_zlib bench 100 write --no-cleanup
# rados --no-log-to-stderr -b 40KB -p re_pool_compress_snappy bench 100 write --no-cleanup
ceph df stats after the write operations
# ceph df
2023-03-31T02:10:26.351+0000 7fde9d445700 -1 WARNING: the following dangerous and experimental features are enabled: crimson
2023-03-31T02:10:26.352+0000 7fde9d445700 -1 WARNING: the following dangerous and experimental features are enabled: crimson
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
TOTAL 1.3 TiB 1.3 TiB 62 GiB 62 GiB 4.56
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.mgr 1 1 897 KiB 4 897 KiB 0 402 GiB
rbd 2 32 38 B 2 38 B 0 402 GiB
re_pool_compress_snappy 8 32 2.6 GiB 69.20k 2.6 GiB 0.22 402 GiB
re_pool_compress_zlib 9 32 2.6 GiB 69.39k 2.6 GiB 0.22 402 GiB
re_pool_no_compress 10 32 2.7 GiB 70.50k 2.7 GiB 0.22 402 GiB
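Note: plain ceph df does not break out compression savings; ceph df detail additionally reports USED COMPR and UNDER COMPR per pool (see the outputs in the follow-up below), which is a more direct way to confirm whether any data was actually stored compressed:
# ceph df detail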
Crimson image used - https://shaman.ceph.com/repos/ceph/main/c3f1eeebbd5e7805720d5f665b68bdca160a5855/crimson/275574/
Cephadm version -
# yum list | grep cephadm
cephadm.noarch 2:18.0.0-3168.gc3f1eeeb.el8 @ceph-noarch
ceph-mgr-cephadm.noarch 2:18.0.0-3168.gc3f1eeeb.el8 ceph-noarch
# cephadm shell -- ceph -v
Inferring fsid 86890d16-cf04-11ed-8f60-78ac443b3b3c
Inferring config /var/lib/ceph/86890d16-cf04-11ed-8f60-78ac443b3b3c/mon.dell-r640-079/config
Using ceph image with id '39c6c7c7a15b' and tag 'c3f1eeebbd5e7805720d5f665b68bdca160a5855-crimson' created on 2023-03-29 22:11:49 +0000 UTC
quay.ceph.io/ceph-ci/ceph@sha256:b667b57bdc127d1992370596b283f9a9992248e5d4f4861a26e85538e5611454
ceph version 18.0.0-3168-gc3f1eeeb (c3f1eeebbd5e7805720d5f665b68bdca160a5855) reef (dev)
When the same operations are performed on a downstream Quincy build:
# ceph osd dump | grep pool
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 35 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr
pool 2 'cephfs.cephfs.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 82 lfor 0/0/52 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 3 'cephfs.cephfs.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 54 lfor 0/0/52 flags hashpspool stripe_width 0 application cephfs
pool 22 're_pool_3' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode off last_change 286 flags hashpspool stripe_width 0 application rados
pool 23 're_pool_compress_snappy' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 359 flags hashpspool stripe_width 0 compression_algorithm snappy compression_min_blob_size 1 compression_mode aggressive compression_required_ratio 0.3 application rados
pool 24 're_pool_compress_zlib' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 368 flags hashpspool stripe_width 0 compression_algorithm zlib compression_min_blob_size 10 compression_mode passive compression_required_ratio 0.7 application rados
pool 25 're_pool_no_compress' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 327 flags hashpspool stripe_width 0 application rados
ceph df stats
# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 500 GiB 472 GiB 28 GiB 28 GiB 5.64
TOTAL 500 GiB 472 GiB 28 GiB 28 GiB 5.64
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.mgr 1 1 449 KiB 2 1.3 MiB 0 136 GiB
cephfs.cephfs.meta 2 16 7.0 KiB 22 110 KiB 0 136 GiB
cephfs.cephfs.data 3 32 0 B 0 0 B 0 136 GiB
re_pool_3 22 1 2.1 GiB 5.34k 6.3 GiB 1.53 136 GiB
re_pool_compress_snappy 23 32 5.6 GiB 148.01k 1.4 GiB 0.34 163 GiB
re_pool_compress_zlib 24 32 132 MiB 3.39k 396 MiB 0.09 136 GiB
re_pool_no_compress 25 32 1.0 GiB 26.64k 3.0 GiB 0.74 136 GiB
Quincy cluster version -
# ceph -v
ceph version 17.2.5-75.el9cp (52c8ab07f1bc5423199eeb6ab5714bc30a930955) quincy (stable)
The ceph config parameter osd_pool_default_pg_autoscale_mode has been set to off on the Quincy cluster to align it with the Crimson settings.
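A minimal sketch of how this can be set (assuming the global config section; the exact scope used may differ):
# ceph config set global osd_pool_default_pg_autoscale_mode off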
As seen above, the amounts of data stored in the three pools differ from each other on the Quincy cluster, whereas on the Crimson cluster the data stored in all three pools is almost the same.
Updated by Matan Breizman about 1 year ago
I think that `--max-objects <>` needs to be used here for having accurate comparisons.
Crimson bench wrote 70k objects to all 3 pools. However, Classic wrote 148k to the first, 3k to the second and 26k to the third.
By limiting the test to the number of objects written instead of running time, we will have 1:1 comparison between the clusters.
Updated by Harsh Kumar about 1 year ago
Matan Breizman wrote:
I think that `--max-objects <>` needs to be used here for having accurate comparisons.
Crimson bench wrote 70k objects to all 3 pools. However, Classic wrote 148k to the first, 3k to the second and 26k to the third.
By limiting the test to the number of objects written instead of running time, we will have 1:1 comparison between the clusters.
Hey Matan,
Apologies for the delayed response.
I ran the tests again as per your suggestion.
TL;DR - Compression did not take effect on the Crimson cluster and the amount of data stored in all pools was the same; this is contrary to the behavior observed on the RHCS Quincy cluster.
It has also been observed that rados bench writes twice the number of objects given as input to the --max-objects argument. A separate tracker will be raised for this.
Please find the details of the tests below -
bench cmd: rados --no-log-to-stderr -b 40KB -p <pool_name> bench 500 write --no-cleanup --max-objects 100000
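For reference, this was presumably run once against each pool; a minimal shell sketch using the pool names created above:
for pool in re_pool_no_compress re_pool_compress_snappy re_pool_compress_zlib re_pool_compress_zstd; do
    rados --no-log-to-stderr -b 40KB -p "$pool" bench 500 write --no-cleanup --max-objects 100000
done
The resulting per-pool object counts can then be cross-checked against the OBJECTS column of ceph df, or with e.g. rados -p <pool_name> ls | wc -l.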
RHCS 6.0 Quincy Cluster
OSD pools -
# ceph osd dump | grep pool
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 30 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr
pool 2 'cephfs.cephfs.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 165 lfor 0/0/66 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 3 'cephfs.cephfs.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 72 lfor 0/0/66 flags hashpspool stripe_width 0 application cephfs
pool 4 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 72 lfor 0/0/68 flags hashpspool stripe_width 0 application rgw
pool 5 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 216 lfor 0/0/68 flags hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 72 lfor 0/0/70 flags hashpspool stripe_width 0 application rgw
pool 7 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 72 lfor 0/0/70 flags hashpspool stripe_width 0 pg_autoscale_bias 4 application rgw
pool 8 're_pool_compress_snappy' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 236 flags hashpspool stripe_width 0 compression_algorithm snappy compression_min_blob_size 1 compression_mode aggressive compression_required_ratio 0.3 application rados
pool 9 're_pool_compress_zlib' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 240 flags hashpspool stripe_width 0 compression_algorithm zlib compression_min_blob_size 10 compression_mode passive compression_required_ratio 0.7 application rados
pool 10 're_pool_no_compress' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 230 flags hashpspool stripe_width 0 application rados
pool 11 're_pool_compress_zstd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 248 flags hashpspool stripe_width 0 compression_algorithm zstd compression_min_blob_size 1024 compression_mode aggressive compression_required_ratio 0.5 application rados
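For reference, the new re_pool_compress_zstd pool was presumably created and configured along these lines (values mirroring the osd dump output above):
# ceph osd pool create re_pool_compress_zstd 32 32
# ceph osd pool application enable re_pool_compress_zstd rados
# ceph osd pool set re_pool_compress_zstd compression_algorithm zstd
# ceph osd pool set re_pool_compress_zstd compression_mode aggressive
# ceph osd pool set re_pool_compress_zstd compression_required_ratio 0.5
# ceph osd pool set re_pool_compress_zstd compression_min_blob_size 1024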
ceph df and ceph df detail:
# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 375 GiB 341 GiB 34 GiB 34 GiB 9.03
TOTAL 375 GiB 341 GiB 34 GiB 34 GiB 9.03
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.mgr 1 1 449 KiB 2 1.3 MiB 0 105 GiB
cephfs.cephfs.meta 2 16 25 KiB 22 165 KiB 0 105 GiB
cephfs.cephfs.data 3 32 0 B 0 0 B 0 105 GiB
.rgw.root 4 32 2.6 KiB 6 72 KiB 0 105 GiB
default.rgw.log 5 32 3.6 KiB 209 408 KiB 0 105 GiB
default.rgw.control 6 32 0 B 8 0 B 0 105 GiB
default.rgw.meta 7 32 2.4 KiB 3 30 KiB 0 105 GiB
re_pool_compress_snappy 8 32 3.8 GiB 100.00k 1.1 GiB 0.36 105 GiB
re_pool_compress_zlib 9 32 3.8 GiB 100.00k 11 GiB 3.52 105 GiB
re_pool_no_compress 10 32 3.8 GiB 100.00k 11 GiB 3.52 105 GiB
re_pool_compress_zstd 11 32 3.8 GiB 100.00k 1.1 GiB 0.36 105 GiB
# ceph df detail
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 375 GiB 341 GiB 34 GiB 34 GiB 9.04
TOTAL 375 GiB 341 GiB 34 GiB 34 GiB 9.04
--- POOLS ---
POOL ID PGS STORED (DATA) (OMAP) OBJECTS USED (DATA) (OMAP) %USED MAX AVAIL QUOTA OBJECTS QUOTA BYTES DIRTY USED COMPR UNDER COMPR
.mgr 1 1 449 KiB 449 KiB 0 B 2 1.3 MiB 1.3 MiB 0 B 0 105 GiB N/A N/A N/A 0 B 0 B
cephfs.cephfs.meta 2 16 25 KiB 2.3 KiB 23 KiB 22 165 KiB 96 KiB 69 KiB 0 105 GiB N/A N/A N/A 0 B 0 B
cephfs.cephfs.data 3 32 0 B 0 B 0 B 0 0 B 0 B 0 B 0 105 GiB N/A N/A N/A 0 B 0 B
.rgw.root 4 32 2.6 KiB 2.6 KiB 0 B 6 72 KiB 72 KiB 0 B 0 105 GiB N/A N/A N/A 0 B 0 B
default.rgw.log 5 32 3.6 KiB 3.6 KiB 0 B 209 408 KiB 408 KiB 0 B 0 105 GiB N/A N/A N/A 0 B 0 B
default.rgw.control 6 32 0 B 0 B 0 B 8 0 B 0 B 0 B 0 105 GiB N/A N/A N/A 0 B 0 B
default.rgw.meta 7 32 2.4 KiB 382 B 2.0 KiB 3 30 KiB 24 KiB 6.1 KiB 0 105 GiB N/A N/A N/A 0 B 0 B
re_pool_compress_snappy 8 32 3.8 GiB 3.8 GiB 0 B 100.00k 1.1 GiB 1.1 GiB 0 B 0.36 105 GiB N/A N/A N/A 1.1 GiB 11 GiB
re_pool_compress_zlib 9 32 3.8 GiB 3.8 GiB 0 B 100.00k 11 GiB 11 GiB 0 B 3.52 105 GiB N/A N/A N/A 0 B 0 B
re_pool_no_compress 10 32 3.8 GiB 3.8 GiB 0 B 100.00k 11 GiB 11 GiB 0 B 3.52 105 GiB N/A N/A N/A 0 B 0 B
re_pool_compress_zstd 11 32 3.8 GiB 3.8 GiB 0 B 100.00k 1.1 GiB 1.1 GiB 0 B 0.36 105 GiB N/A N/A N/A 1.1 GiB 11 GiB
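For reference: 100.00k objects of ~40 KB each is roughly 3.8 GiB of logical data per pool; with 3x replication and no compression that matches the ~11 GiB USED reported for re_pool_no_compress and for re_pool_compress_zlib (compression_mode passive only compresses writes carrying a compressible hint), while re_pool_compress_snappy and re_pool_compress_zstd report only ~1.1 GiB USED with ~11 GiB UNDER COMPR, i.e. compression took effect on those pools.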
Cluster version:
# cephadm shell -- ceph version
Inferring fsid 6a0d04c2-d8c2-11ed-b83c-fa163e2863fd
Inferring config /var/lib/ceph/6a0d04c2-d8c2-11ed-b83c-fa163e2863fd/mon.ceph-hakumar-p3a91g-node1-installer/config
Using ceph image with id '4764a38b8220' and tag 'ceph-6.0-rhel-9-containers-candidate-66107-20230311010710' created on 2023-03-11 01:09:48 +0000 UTC
registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:ab69c07900af6663b2b59b0b84b59b9c8013ddd3a1b7ea7d99f8ba49fd0a53d8
ceph version 17.2.5-75.el9cp (52c8ab07f1bc5423199eeb6ab5714bc30a930955) quincy (stable)
Crimson Cluster -
OSD pools:
# ceph osd dump | grep pool
2023-04-13T06:00:24.175+0000 7f893e8cc700 -1 WARNING: the following dangerous and experimental features are enabled: crimson
2023-04-13T06:00:24.176+0000 7f893e8cc700 -1 WARNING: the following dangerous and experimental features are enabled: crimson
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode off last_change 29 flags hashpspool,nopgchange,crimson stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr read_balance_score 15.00
pool 2 'cephfs.cephfs.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 43 flags hashpspool,nopgchange,crimson stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs read_balance_score 1.87
pool 3 'cephfs.cephfs.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 44 flags hashpspool,nopgchange,crimson stripe_width 0 application cephfs read_balance_score 1.87
pool 4 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 51 flags hashpspool,nopgchange,crimson stripe_width 0 application rgw read_balance_score 1.87
pool 5 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 54 flags hashpspool,nopgchange,crimson stripe_width 0 application rgw read_balance_score 3.28
pool 6 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 56 flags hashpspool,nopgchange,crimson stripe_width 0 application rgw read_balance_score 1.87
pool 7 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 59 flags hashpspool,nopgchange,crimson stripe_width 0 pg_autoscale_bias 4 application rgw read_balance_score 2.34
pool 8 're_pool_compress_snappy' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 87 flags hashpspool,nopgchange,crimson stripe_width 0 compression_algorithm snappy compression_min_blob_size 1 compression_mode aggressive compression_required_ratio 0.3 application rados read_balance_score 1.87
pool 9 're_pool_compress_zlib' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 91 flags hashpspool,nopgchange,crimson stripe_width 0 compression_algorithm zlib compression_min_blob_size 10 compression_mode passive compression_required_ratio 0.7 application rados read_balance_score 2.34
pool 11 're_pool_compress_zstd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 79 flags hashpspool,nopgchange,crimson stripe_width 0 compression_algorithm zstd compression_min_blob_size 1024 compression_mode aggressive compression_required_ratio 0.5 application rados read_balance_score 1.87
pool 12 're_pool_no_compress' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 95 flags hashpspool,nopgchange,crimson stripe_width 0 read_balance_score 1.87
ceph df and ceph df detail -
# ceph df
2023-04-13T05:23:07.002+0000 7f9785ca0700 -1 WARNING: the following dangerous and experimental features are enabled: crimson
2023-04-13T05:23:07.003+0000 7f9785ca0700 -1 WARNING: the following dangerous and experimental features are enabled: crimson
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
TOTAL 1.5 TiB 1.4 TiB 55 GiB 55 GiB 3.69
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.mgr 1 1 897 KiB 4 897 KiB 0 450 GiB
cephfs.cephfs.meta 2 32 4.6 KiB 44 4.6 KiB 0 450 GiB
cephfs.cephfs.data 3 32 0 B 0 0 B 0 450 GiB
.rgw.root 4 32 368 B 8 368 B 0 450 GiB
default.rgw.log 5 32 7.2 KiB 418 7.2 KiB 0 450 GiB
default.rgw.control 6 32 0 B 16 0 B 0 450 GiB
default.rgw.meta 7 32 738 B 2 738 B 0 450 GiB
re_pool_compress_snappy 8 32 7.6 GiB 200.00k 7.6 GiB 0.56 450 GiB
re_pool_compress_zlib 9 32 7.6 GiB 200.00k 7.6 GiB 0.56 450 GiB
re_pool_compress_zstd 11 32 7.6 GiB 200.00k 7.6 GiB 0.56 450 GiB
re_pool_no_compress 12 32 7.6 GiB 200.00k 7.6 GiB 0.56 450 GiB
# ceph df detail
2023-04-13T05:23:49.944+0000 7fa70a8bf700 -1 WARNING: the following dangerous and experimental features are enabled: crimson
2023-04-13T05:23:49.944+0000 7fa70a8bf700 -1 WARNING: the following dangerous and experimental features are enabled: crimson
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
TOTAL 1.5 TiB 1.4 TiB 55 GiB 55 GiB 3.69
--- POOLS ---
POOL ID PGS STORED (DATA) (OMAP) OBJECTS USED (DATA) (OMAP) %USED MAX AVAIL QUOTA OBJECTS QUOTA BYTES DIRTY USED COMPR UNDER COMPR
.mgr 1 1 897 KiB 897 KiB 0 B 4 897 KiB 897 KiB 0 B 0 450 GiB N/A N/A N/A 0 B 0 B
cephfs.cephfs.meta 2 32 4.6 KiB 4.6 KiB 0 B 44 4.6 KiB 4.6 KiB 0 B 0 450 GiB N/A N/A N/A 0 B 0 B
cephfs.cephfs.data 3 32 0 B 0 B 0 B 0 0 B 0 B 0 B 0 450 GiB N/A N/A N/A 0 B 0 B
.rgw.root 4 32 368 B 368 B 0 B 8 368 B 368 B 0 B 0 450 GiB N/A N/A N/A 0 B 0 B
default.rgw.log 5 32 7.2 KiB 7.2 KiB 0 B 418 7.2 KiB 7.2 KiB 0 B 0 450 GiB N/A N/A N/A 0 B 0 B
default.rgw.control 6 32 0 B 0 B 0 B 16 0 B 0 B 0 B 0 450 GiB N/A N/A N/A 0 B 0 B
default.rgw.meta 7 32 738 B 738 B 0 B 2 738 B 738 B 0 B 0 450 GiB N/A N/A N/A 0 B 0 B
re_pool_compress_snappy 8 32 7.6 GiB 7.6 GiB 0 B 200.00k 7.6 GiB 7.6 GiB 0 B 0.56 450 GiB N/A N/A N/A 0 B 0 B
re_pool_compress_zlib 9 32 7.6 GiB 7.6 GiB 0 B 200.00k 7.6 GiB 7.6 GiB 0 B 0.56 450 GiB N/A N/A N/A 0 B 0 B
re_pool_compress_zstd 11 32 7.6 GiB 7.6 GiB 0 B 200.00k 7.6 GiB 7.6 GiB 0 B 0.56 450 GiB N/A N/A N/A 0 B 0 B
re_pool_no_compress 12 32 7.6 GiB 7.6 GiB 0 B 200.00k 7.6 GiB 7.6 GiB 0 B 0.56 450 GiB N/A N/A N/A 0 B 0 B
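By contrast, every pool on the Crimson cluster reports 200.00k objects, USED equal to STORED (7.6 GiB), and 0 B in both USED COMPR and UNDER COMPR, i.e. no compression took effect regardless of the pools' compression settings.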
Cluster version -
# cephadm shell -- ceph version
Inferring fsid 0f746e28-d97c-11ed-81f2-78ac443b3a54
Inferring config /var/lib/ceph/0f746e28-d97c-11ed-81f2-78ac443b3a54/mon.dell-r640-056/config
Using ceph image with id 'ad3ecf86ac85' and tag 'bf6db9f2c0b862c27e2a4db5e7a4a16b5fd297b5-crimson' created on 2023-04-12 01:13:07 +0000 UTC
quay.ceph.io/ceph-ci/ceph@sha256:019bf5f6100b8a449c3bb9c23dd979d09323930df8824aaed818162decb4098d
2023-04-13T05:25:19.040+0000 7f28c0207700 -1 WARNING: the following dangerous and experimental features are enabled: crimson
2023-04-13T05:25:19.041+0000 7f28c0207700 -1 WARNING: the following dangerous and experimental features are enabled: crimson
ceph version 18.0.0-3438-gbf6db9f2 (bf6db9f2c0b862c27e2a4db5e7a4a16b5fd297b5) reef (dev)
The complete stdout of the Crimson test run has been attached.
Updated by Aishwarya Mathuria about 1 month ago
- Status changed from New to Fix Under Review
Updated by Matan Breizman 9 days ago
- Status changed from Fix Under Review to Resolved