Bug #11840


avoid complicated cache tiers, like cache loops and three-level caches

Added by chuanhong wang almost 9 years ago. Updated almost 9 years ago.

Status: Resolved
Priority: High
Assignee: Kefu Chai
Category: Monitor
Target version: -
% Done: 0%
Source: Community (user)
Regression: No
Severity: 3 - minor

Description

There are three pools in the Ceph cluster, e.g. t1, t2, t3, and t2 is a cache of t1.
1. We can set t1 as a cache of t2 too; then t1 is the cache of t2 and t2 is also the cache of t1, so a cache loop is generated.
2. We can also set t3 as a cache of t2 or t1; then a three-level cache tier is generated: t3 is the cache of t2, and t2 is the cache of t1.
Cache loops and three-level caches are weird and may also reduce IO performance, so it would be better to avoid complicated tiers like these.

We don't support multiple tiering, so we'd better fail a "ceph osd tier add" command which intends to add a tier for another tier.
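For illustration, a minimal sketch (using the pool names from this report, and assuming the pools already exist) of the commands that create the two problematic configurations; the second "ceph osd tier add" in each case is the one that should be rejected:

 # scenario 1: cache loop
 ceph osd tier add t1 t2    # t2 becomes a cache tier of t1
 ceph osd tier add t2 t1    # t1 becomes a cache tier of t2 -> loop

 # scenario 2: three-level tier
 ceph osd tier add t1 t2    # t2 becomes a cache tier of t1
 ceph osd tier add t2 t3    # t3 becomes a cache tier of t2 -> t3 => t2 => t1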


Related issues: 1 (0 open, 1 closed)

Related to Ceph - Bug #13950: OSDMonitor::_check_become_tier cannot prevent client from setting multiple tiers in a particular way (Resolved, 12/02/2015)

Actions #1

Updated by Sage Weil almost 9 years ago

  • Assignee set to Kefu Chai
  • Priority changed from Normal to High
  • Source changed from other to Community (user)
Actions #2

Updated by Sage Weil almost 9 years ago

  • Category set to Monitor
Actions #3

Updated by Kefu Chai almost 9 years ago

chuanhong wang wrote:

There are three pools in the Ceph cluster, e.g. t1, t2, t3, and t2 is a cache of t1.
1. We can set t1 as a cache of t2 too; then t1 is the cache of t2 and t2 is also the cache of t1, so a cache loop is generated.

yeah, that would definitely be a bad config, and we should at least warn the user about the cyclic caching.

2. We can also set t3 as a cache of t2 or t1; then a three-level cache tier is generated: t3 is the cache of t2, and t2 is the cache of t1.
Cache loops and three-level caches are weird and may also reduce IO performance, so it would be better to avoid complicated tiers like these.

the setting of t3 => t2 => t1 is weird, but not necessarily wrong, so I don't think we should forbid users from doing so.

chuanhong, what do you think?

Actions #4

Updated by chuanhong wang almost 9 years ago

That's OK. A multi-level cache is not a wrong config; maybe people will make use of it if they have three or more types of storage devices with different access speeds and want to create a hierarchical storage system.

Actions #5

Updated by Samuel Just almost 9 years ago

I don't think a three-level cache will actually work right now. Certainly, we don't have any ceph-qa-suite tests which do it.

Actions #6

Updated by Kefu Chai almost 9 years ago

yes, a "rados put" will bring down the cluster.

 ./ceph osd pool create base_pool 2
 ./ceph osd pool create cache_pool 2
 ./ceph osd pool create another_cache_pool 2
 ./ceph osd tier add base_pool cache_pool
 ./ceph osd tier add cache_pool another_cache_pool
 echo foo > /tmp/foo.txt
 ./rados -p base_pool put foo /tmp/foo.txt # this command never returns

and

$ ./ceph -s
    cluster e9d69815-8d82-4b9d-936c-0ac1d71827fd
     health HEALTH_WARN
            6 pgs stuck inactive
            30 pgs stuck unclean
            recovery 162/162 objects misplaced (100.000%)
     monmap e1: 3 mons at {a=127.0.0.1:6789/0,b=127.0.0.1:6790/0,c=127.0.0.1:6791/0}
            election epoch 6, quorum 0,1,2 a,b,c
     mdsmap e10: 3/3/3 up {0=a=up:active,1=c=up:active,2=b=up:active}
     osdmap e26: 3 osds: 3 up, 3 in; 24 remapped pgs
      pgmap v299: 30 pgs, 6 pools, 4164 bytes data, 54 objects
            118 GB used, 402 GB / 549 GB avail
            162/162 objects misplaced (100.000%)
                  24 active+remapped
                   6 creating

Actions #7

Updated by Kefu Chai almost 9 years ago

  • Description updated (diff)
  • Status changed from New to 12
Actions #8

Updated by Kefu Chai almost 9 years ago

  • Status changed from 12 to Fix Under Review
Actions #9

Updated by Kefu Chai almost 9 years ago

  • Status changed from Fix Under Review to Resolved
Actions #10

Updated by Kefu Chai over 8 years ago

  • Related to Bug #13950: OSDMonitor::_check_become_tier cannot prevent client from setting multiple tiers in a particular way added