Support #18741

moving rgw pools to ssd cache

Added by Petr Malkov about 7 years ago. Updated about 6 years ago.

Status: New
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Tags:
Reviewed:
Affected Versions:
Pull request ID:

Description

jewel 10.2.5

I'm looking for a way to build a global (or at least RGW-wide) SSD cache tier in front of HDD storage.

I redirected all the built-in pools to SSD-level storage by modifying the CRUSH map (default ruleset 0 -> ssd-cache):

NAME                           ID     USED      %USED     MAX AVAIL     OBJECTS
...
default.rgw.buckets.data       9      747M      0.03      2218G         255
...
rgw-hdd-pool                   18     6169M     0.04      16702G        2097
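
The CRUSH map change itself was the usual decompile/edit/recompile cycle, roughly like this (a sketch; file names are illustrative and the actual rule edits are omitted):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt so that ruleset 0 takes from the ssd-cache root
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new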

Then I created rgw-hdd-pool on the HDD level (ruleset 2 -> hdd-pool).
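
Pool creation and ruleset assignment were along these lines (a sketch; the PG counts are illustrative, and crush_ruleset is the Jewel-era name of the setting):

ceph osd pool create rgw-hdd-pool 128 128
ceph osd pool set rgw-hdd-pool crush_ruleset 2

With both pools in place, I wired up the cache tier and its flush/evict settings: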

# attach default.rgw.buckets.data (ssd) as a cache tier on top of rgw-hdd-pool (hdd)
ceph osd tier add rgw-hdd-pool default.rgw.buckets.data
ceph osd tier cache-mode default.rgw.buckets.data writeback
ceph osd tier set-overlay rgw-hdd-pool default.rgw.buckets.data
# hit-set tracking and flush/evict thresholds on the cache pool
ceph osd pool set default.rgw.buckets.data hit_set_type bloom
ceph osd pool set default.rgw.buckets.data hit_set_count 1
ceph osd pool set default.rgw.buckets.data hit_set_period 300
ceph osd pool set default.rgw.buckets.data target_max_bytes 1732000000000
ceph osd pool set default.rgw.buckets.data cache_min_flush_age 300
ceph osd pool set default.rgw.buckets.data cache_min_evict_age 300
ceph osd pool set default.rgw.buckets.data cache_target_dirty_ratio 0.01
ceph osd pool set default.rgw.buckets.data cache_target_full_ratio 0.02
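
If I read the tiering agent's semantics correctly, the ratios are fractions of target_max_bytes (so dirty flushing would kick in around 0.01 * 1732000000000 ≈ 17 GB and eviction around 0.02 * 1732000000000 ≈ 35 GB), while cache_min_flush_age / cache_min_evict_age are only minimum object ages in seconds, not timers that force a flush on their own. The effective settings can be read back to check that they took effect, e.g.:

ceph osd dump | grep default.rgw.buckets.data
ceph osd pool get default.rgw.buckets.data target_max_bytes
ceph osd pool get default.rgw.buckets.data cache_target_dirty_ratio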

I put some data in; it fell into the ssd-cache and down into the hdd-pool. The cluster has no active clients that could keep the data warm, but after 300 seconds there is no flushing/evicting. Only a direct command works:

rados -p default.rgw.buckets.data cache-flush-evict-all

Earlier this method worked with two newly created pools, but not now with the built-in ones. How can this be fixed?

History

#1 Updated by Patrick Donnelly about 6 years ago

  • Project changed from Ceph to rgw
