Feature #10891
get all values for a given pool

Added by Alfredo Deza about 9 years ago. Updated about 9 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
ceph cli
Target version:
-
% Done:

100%

Source:
other
Tags:
Backport:
Reviewed:
Affected Versions:
Pull request ID:

Description

To get a specific value for a setting in a pool a user is required to run something like:

sudo ceph osd pool get $poolname $key

Where `$key` is one of:

crash_replay_interval|pg_num|pgp_num|crush_ruleset|hit_set_type|hit_set_period|
hit_set_count|hit_set_fpp|auid|target_max_objects|target_max_bytes|
cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|
cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote

But there is no way to get all of the values for a given pool from that subcommand. Instead, the user is required to run `ceph osd dump`, which produces fairly verbose output full of values that may not be of interest when looking only at a single pool:

In this case, I needed to get values for the 'data' pool:

$ sudo ceph osd dump
epoch 13435
fsid dd2d2a23-ba22-38d5-22ec-869b5443dddd
created 2014-10-22 19:40:33.622313
modified 2015-02-11 13:50:16.334207
flags
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 87 flags hashpspool stripe_width 0
pool 1 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 5785 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 3 'metadata' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 106 flags hashpspool stripe_width 0
max_osd 53
osd.0 up   in  weight 1 up_from 6508 up_thru 13433 down_at 6506 last_clean_interval [6051,6507) 10.8.128.128:6800/19881 10.8.128.128:6805/1019881 10.8.128.128:6816/1019881 10.8.128.128:6817/1019881 exists,up 8ee99dddff-css81-4s03-abssd-103a2ssss9a1ba5
osd.1 up   in  weight 1 up_from 2535 up_thru 13433 down_at 2527 last_clean_interval [646,2526) 10.8.128.128:6806/29162 10.8.128.128:6807/29162 10.8.128.128:6808/29162 10.8.128.128:6809/29162 exists,up ad8d42c1-dss4-4ssa-9ss93-f0sf2d4asss49c
...
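Until such a subcommand exists, the workaround described above can be scripted: filter the `osd dump` output down to the one `pool ...` line that matches and split it into key/value pairs. A minimal sketch (the function name and the assumption that settings appear as alternating key/value tokens after the pool type are illustrative, not part of Ceph):

```python
import re

def pool_settings(dump_output, pool_name):
    """Extract the settings for one pool from `ceph osd dump` output.

    Illustrative sketch only: finds the `pool <id> '<name>' <type> ...`
    line for the requested pool and splits the remainder into
    alternating key/value tokens (e.g. `pg_num 512`, `flags hashpspool`).
    """
    for line in dump_output.splitlines():
        m = re.match(r"pool (\d+) '([^']+)' (\w+) (.*)", line)
        if not m or m.group(2) != pool_name:
            continue
        settings = {"id": int(m.group(1)), "type": m.group(3)}
        tokens = m.group(4).split()
        # Pair up the remaining tokens: key value key value ...
        for i in range(0, len(tokens) - 1, 2):
            settings[tokens[i]] = tokens[i + 1]
        return settings
    return None  # pool not found in the dump
```

Run against the 'data' pool line from the dump above, this yields entries such as `pg_num: 512` and `crash_replay_interval: 45` without the surrounding noise.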

It would be ideal to be able to do something like:

$ sudo ceph osd pool get all $poolname

or maybe

$ sudo ceph osd pool get $poolname all
$ sudo ceph osd pool get $poolname *
Actions #1

Updated by Alfredo Deza about 9 years ago

  • Description updated (diff)
Actions #2

Updated by Alfredo Deza about 9 years ago

  • Description updated (diff)
Actions #3

Updated by Kefu Chai about 9 years ago

  • Assignee set to Kefu Chai
Actions #4

Updated by Kefu Chai about 9 years ago

Michal Jarzabek is working on it.

Actions #5

Updated by Kefu Chai about 9 years ago

  • Status changed from New to Fix Under Review
  • % Done changed from 0 to 40

PR posted at https://github.com/ceph/ceph/pull/3887, pending review

Actions #6

Updated by Kefu Chai about 9 years ago

  • Status changed from Fix Under Review to Resolved
  • % Done changed from 40 to 100

Finished by Michal; might need more polish, as suggested by João.
