Bug #9091 (closed): thrashosds needs to pg split cache pools

Added by Sage Weil over 9 years ago. Updated over 9 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: -
% Done: 0%
Source: other
Severity: 3 - minor

Description

for example,

tasks:
- chef: null
- clock.check: null
- install: null
- ceph:
    conf:
      osd:
        osd max backfills: 1
    log-whitelist:
    - wrongly marked me down
    - objects unfound and apparently lost
- thrashosds:
    chance_pgnum_grow: 3
    chance_pgpnum_fix: 1
    timeout: 1200
- exec:
    client.0:
    - ceph osd pool create base 4
    - ceph osd pool create cache 4
    - ceph osd tier add base cache
    - ceph osd tier cache-mode cache writeback
    - ceph osd tier set-overlay base cache
    - ceph osd pool set cache hit_set_type bloom
    - ceph osd pool set cache hit_set_count 8
    - ceph osd pool set cache hit_set_period 60
    - ceph osd pool set cache target_max_objects 250
- rados:
    clients:
    - client.0
    objects: 500
    op_weights:
      copy_from: 50
      delete: 50
      read: 100
      write: 100
    ops: 4000
    pools:
    - base
roles:
- - mon.a
  - mon.c
  - osd.0
  - osd.1
  - osd.2
  - client.0
- - mon.b
  - osd.3
  - osd.4
  - osd.5
  - client.1

thrashosds doesn't know that 'base' and 'cache' exist, and doesn't test splitting on them

maybe we add a

- thrashosds.add_pools: [base,cache]


after the exec?
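
For context, a rough standalone model (not actual teuthology code; the class and variable names here are made up) of why pools created after thrashosds starts never get split: the manager the thrasher uses snapshots the pool list once, when the task begins.

class FakePoolManager:
    """Stands in for the manager thrashosds builds; it caches the pool list once."""
    def __init__(self, cluster_pools):
        # snapshot taken at construction time; pools created later are invisible
        self.pools = list(cluster_pools)

    def listpools(self):
        return list(self.pools)

cluster_pools = ['rbd']
manager = FakePoolManager(cluster_pools)   # thrashosds task starts here

cluster_pools += ['base', 'cache']         # exec task creates the tier pools later

print(manager.listpools())                 # ['rbd'] -- 'base' and 'cache' are never split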
#1

Updated by Sage Weil over 9 years ago

  • Assignee set to Anonymous
#2

Updated by Anonymous over 9 years ago

- exec:
    client.0:
    - ceph osd pool create base 4
    - ceph osd pool create cache 4
    - ceph osd tier add base cache
    - ceph osd tier cache-mode cache writeback
    - ceph osd tier set-overlay base cache
    - ceph osd pool set cache hit_set_type bloom
    - ceph osd pool set cache hit_set_count 8
    - ceph osd pool set cache hit_set_period 60
    - ceph osd pool set cache target_max_objects 250
- thrashosds:
    chance_pgnum_grow: 3
    chance_pgpnum_fix: 1
    timeout: 1200

Switching the thrashosds and exec clauses appears to work better.
I set a breakpoint at the end of the thrashosds task:

    ctx.manager = manager                                   # Breakpoint set here
    thrash_proc = ceph_manager.Thrasher(
        manager,
        config,
        logger=log.getChild('thrasher')
        )
    try:
        yield
    finally:
        log.info('joining thrashosds')
        thrash_proc.do_join()
        manager.wait_for_recovery(config.get('timeout', 360))

With the old yaml file, manager.listpools() returned ['rbd'].
With the new yaml file, manager.listpools() returned ['rbd', 'cache', 'base'].
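
For reference, a quick check one could drop at that same breakpoint (hypothetical, not part of the task) to confirm the thrasher can now see the tier pools:

# assumes 'manager' is the manager instance from the snippet above
expected = {'rbd', 'base', 'cache'}
assert expected.issubset(set(manager.listpools())), \
    'thrashosds cannot see the cache tier pools'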

#3

Updated by Sage Weil over 9 years ago

So the question now is: can we add a method somewhere that adds a new pool to the list you get back from listpools()? So we can do, say:

- thrashosds:
    chance_pgnum_grow: 3
    chance_pgpnum_fix: 1
    timeout: 1200
- exec:
    client.0:
    - ceph osd pool create base 4
    - ceph osd pool create cache 4
    - ceph osd tier add base cache
    - ceph osd tier cache-mode cache writeback
    - ceph osd tier set-overlay base cache
    - ceph osd pool set cache hit_set_type bloom
    - ceph osd pool set cache hit_set_count 8
    - ceph osd pool set cache hit_set_period 60
    - ceph osd pool set cache target_max_objects 250
- ceph.created_pool: [base, cache]

It would be better if the order could stay the same, because of the way the qa bits are assembled...
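
A hypothetical sketch of what such a ceph.created_pool task could look like (not the actual ceph-qa-suite code; the manager's pools attribute and the get_pool_property call are assumptions about its API):

def created_pool(ctx, config):
    """Register pools created by a preceding exec task with ctx.manager,
    so the already-running thrasher will consider them for pg splitting.

    config is expected to be a list of pool names, e.g. [base, cache].
    """
    for pool in config:
        if pool not in ctx.manager.pools:
            # pg_num lookup is an assumption about what the manager tracks per pool
            ctx.manager.pools[pool] = ctx.manager.get_pool_property(pool, 'pg_num')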

#4

Updated by Anonymous over 9 years ago

I am guessing that we could do this (we should have all the information somewhere at this point). The tests that I have been running have been failing, so I suspect that there is something else wrong with the yaml file that I cobbled together.

#5

Updated by Anonymous over 9 years ago

I created a pull request (#94 in ceph-qa-suite) for this. The new code implements a routine named add_pools in thrashosds.py that reinitializes ctx.manager.

This can be used by adding a thrashosds.add_pools: line to the appropriate yaml file.
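
A minimal sketch of the add_pools idea described above (not the pull-request code itself): rebuild ctx.manager so its cached pool list is taken again after the exec task has created the new pools. The ceph_manager and log names are the module-level names already visible in the thrashosds.py snippet above; the CephManager constructor arguments are assumptions about how the existing task builds its manager.

def add_pools(ctx, config, mon):
    """Reinitialize ctx.manager so pools created after thrashosds started
    become visible to the thrasher."""
    # constructor arguments are assumptions; the real routine would pass
    # whatever CephManager actually requires
    ctx.manager = ceph_manager.CephManager(
        mon,                                  # a monitor remote, as in the main task
        ctx=ctx,
        logger=log.getChild('ceph_manager'),
    )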

#6

Updated by Anonymous over 9 years ago

  • Status changed from New to Fix Under Review
- ceph.created_pool: [base, cache]

This has been implemented in origin/wip-9091-wusui.

#7

Updated by Anonymous over 9 years ago

I thought that I pushed this earlier, but I did not see the new wip branch on GitHub. I have issued a new pull request.

#8

Updated by Sage Weil over 9 years ago

  • Status changed from Fix Under Review to Resolved