Bug #11840

avoid complicated cache tiers, like cache loops and three-level caches

Added by chuanhong wang almost 5 years ago. Updated almost 5 years ago.

Status:
Resolved
Priority:
High
Assignee:
Kefu Chai
Category:
Monitor
Target version:
-
% Done:
0%

Source:
Community (user)
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature:

Description

There are three pools in a Ceph cluster, e.g. t1, t2, t3, and t2 is a cache of t1.
1. We can set t1 as a cache of t2 too; then t1 is the cache of t2 and t2 is also the cache of t1, so a cache loop is generated.
2. We can also set t3 as a cache of t2 or t1; then a three-level cache tier is generated, where t3 is the cache of t2 and t2 is the cache of t1.
A cache loop and a three-level cache are weird and may also reduce I/O performance, so it would be better to avoid complicated tiers like these.

We don't support multiple tiering, so we'd better fail a "ceph osd tier add" command which intends to add a tier on top of another tier.
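
For illustration, a minimal sketch of how these configurations could be created with the existing command (using the pool names from above); under the proposed change, the second "ceph osd tier add" in each case should be rejected:

 # cache loop: t2 is a tier of t1, then t1 is made a tier of t2
 ./ceph osd tier add t1 t2
 ./ceph osd tier add t2 t1    # should now fail: t1 and t2 would cache each other

 # three-level tier: t3 caches t2, t2 caches t1
 ./ceph osd tier add t1 t2
 ./ceph osd tier add t2 t3    # should now fail: t2 is already a tier of t1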


Related issues

Related to Ceph - Bug #13950: OSDMonitor::_check_become_tier cannot prevent client from setting multiple tiers in a particular way Resolved 12/02/2015

Associated revisions

Revision 5c3d0740 (diff)
Added by Kefu Chai almost 5 years ago

mon: disallow adding a tier on top of another tier

multiple tiering is not supported at the moment

Fixes: #11840
Signed-off-by: Kefu Chai <>
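
As a quick sketch of the intended behavior (assuming the reproducer from comment #6 below), only the first tier add should still succeed; the exact error text is not shown here:

 ./ceph osd tier add base_pool cache_pool            # ok: cache_pool becomes a tier of base_pool
 ./ceph osd tier add cache_pool another_cache_pool   # rejected after this change: cache_pool is already a tier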

History

#1 Updated by Sage Weil almost 5 years ago

  • Assignee set to Kefu Chai
  • Priority changed from Normal to High
  • Source changed from other to Community (user)

#2 Updated by Sage Weil almost 5 years ago

  • Category set to Monitor

#3 Updated by Kefu Chai almost 5 years ago

chuanhong wang wrote:

There are three pools in a Ceph cluster, e.g. t1, t2, t3, and t2 is a cache of t1.
1. We can set t1 as a cache of t2 too; then t1 is the cache of t2 and t2 is also the cache of t1, so a cache loop is generated.

Yeah, that would definitely be a bad config, and we should at least warn the user about the cyclic caching.

2. We can also set t3 as a cache of t2 or t1; then a three-level cache tier is generated, where t3 is the cache of t2 and t2 is the cache of t1.
A cache loop and a three-level cache are weird and may also reduce I/O performance, so it would be better to avoid complicated tiers like these.

The setting of t3 => t2 => t1 is weird, but not necessarily wrong, so I don't think we should forbid the user from doing so.

chuanhong, what do you think?

#4 Updated by chuanhong wang almost 5 years ago

That's OK; a multi-level cache is not a wrong config. People might make use of it if they have three or more types of storage devices with different access speeds and want to create a hierarchical storage system.

#5 Updated by Samuel Just almost 5 years ago

I don't think a three-level cache will actually work right now. Certainly, we don't have any ceph-qa-suite tests that do it.

#6 Updated by Kefu Chai almost 5 years ago

yes, a "rados put" will bring down the cluster.

 ./ceph osd pool create base_pool 2
 ./ceph osd pool create cache_pool 2
 ./ceph osd pool create another_cache_pool 2
 ./ceph osd tier add base_pool cache_pool
 ./ceph osd tier add cache_pool another_cache_pool
 echo foo > /tmp/foo.txt
 ./rados -p base_pool put foo /tmp/foo.txt # this command never returns

and

$ ./ceph -s
    cluster e9d69815-8d82-4b9d-936c-0ac1d71827fd
     health HEALTH_WARN
            6 pgs stuck inactive
            30 pgs stuck unclean
            recovery 162/162 objects misplaced (100.000%)
     monmap e1: 3 mons at {a=127.0.0.1:6789/0,b=127.0.0.1:6790/0,c=127.0.0.1:6791/0}
            election epoch 6, quorum 0,1,2 a,b,c
     mdsmap e10: 3/3/3 up {0=a=up:active,1=c=up:active,2=b=up:active}
     osdmap e26: 3 osds: 3 up, 3 in; 24 remapped pgs
      pgmap v299: 30 pgs, 6 pools, 4164 bytes data, 54 objects
            118 GB used, 402 GB / 549 GB avail
            162/162 objects misplaced (100.000%)
                  24 active+remapped
                   6 creating

#7 Updated by Kefu Chai almost 5 years ago

  • Description updated (diff)
  • Status changed from New to 12

#8 Updated by Kefu Chai almost 5 years ago

  • Status changed from 12 to Fix Under Review

#9 Updated by Kefu Chai almost 5 years ago

  • Status changed from Fix Under Review to Resolved

#10 Updated by Kefu Chai over 4 years ago

  • Related to Bug #13950: OSDMonitor::_check_become_tier cannot prevent client from setting multiple tiers in a particular way added
