Feature #49049

mgr/prometheus: Update ceph_pool_* metrics to include additional labels

Added by Paul Cuzner about 3 years ago. Updated almost 3 years ago.

Status: Resolved
Priority: Normal
Assignee:
Category: prometheus module
Target version:
% Done: 80%
Source:
Tags:
Backport: pacific
Reviewed:
Affected Versions:
Pull request ID:

Description

The pool metadata only provides a pool_id (as a key for matching) and the name of the pool. This feature would add further labels to the ceph_pool_metadata metric:

  • compression = on or off
  • type = replica or erasure
  • scheme = replica2 or ec4+2, etc.

NB: this information is already available within the osd_map['pools'] data:
  • options holds the compression settings
  • type reflects replicated (1) or erasure coded (3)
  • size and min_size can be used to determine the scheme

Having this data provides more flexibility to Prometheus queries when determining storage efficiency and aggregation.
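
As a rough illustration only (not the code from the eventual PR), the labels above could be derived from one osd_map['pools'] entry along the following lines; the helper name is hypothetical and the field names (type, size, min_size, options/compression_mode) are assumed from the osd map structure referenced in the NB.

    # Hypothetical sketch: derive the proposed labels from one osd_map['pools'] entry.
    def pool_labels(pool):
        opts = pool.get('options', {})
        # any compression_mode other than 'none' counts as compression being on
        compression = 'on' if opts.get('compression_mode', 'none') != 'none' else 'off'

        # pool 'type' is 1 for replicated and 3 for erasure coded
        pool_type = 'replica' if pool['type'] == 1 else 'erasure'

        if pool_type == 'replica':
            scheme = 'replica{}'.format(pool['size'])
        else:
            # heuristic only: assumes the default min_size = k + 1, so k = min_size - 1
            # and m = size - k; comment #1 below notes this is unreliable for EC pools
            k = pool['min_size'] - 1
            m = pool['size'] - k
            scheme = 'ec{}+{}'.format(k, m)

        return {'compression': compression, 'type': pool_type, 'scheme': scheme}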


Related issues: 1 (0 open, 1 closed)

Copied to mgr - Backport #50294: pacific: mgr/prometheus: Update ceph_pool_* metrics to include additional labels (Resolved, Konstantin Shalygin)
#1

Updated by Paul Cuzner about 3 years ago

Looks like scheme is going to be unreliable for EC pools from the data available in the pool metadata output.

This is what some initial code provides:

    # HELP ceph_pool_metadata POOL Metadata
    # TYPE ceph_pool_metadata untyped
    ceph_pool_metadata{pool_id="1",name="device_health_metrics",type="replicated",compression_active="0"} 1.0
    ceph_pool_metadata{pool_id="2",name="rbd",type="replicated",compression_active="0"} 1.0
    ceph_pool_metadata{pool_id="3",name="iscsi",type="replicated",compression_active="0"} 1.0
    ceph_pool_metadata{pool_id="4",name="compressed",type="replicated",compression_active="1"} 1.0
    ceph_pool_metadata{pool_id="5",name="ecpool",type="erasure",compression_active="0"} 1.0
    ceph_pool_metadata{pool_id="8",name="ecpool41",type="erasure",compression_active="0"} 1.0
#2

Updated by Paul Cuzner about 3 years ago

Quick update: I missed the fact that the EC data is available in the osd_map, so we can provide a simple description of the pool protection scheme.

    # HELP ceph_pool_metadata POOL Metadata
    # TYPE ceph_pool_metadata untyped
    ceph_pool_metadata{pool_id="1",name="device_health_metrics",type="replicated",description="replica:3",compression_active="false"} 1.0
    ceph_pool_metadata{pool_id="2",name="testpool",type="replicated",description="replica:3",compression_active="true"} 1.0
    ceph_pool_metadata{pool_id="3",name="my21",type="erasure",description="ec:2+1",compression_active="false"} 1.0
#3

Updated by Paul Cuzner about 3 years ago

  • Subject changed from "mgr/prometheus: Add pool compression state and pool type to ceph_pool_metadata" to "mgr/prometheus: Update ceph_pool_* metrics to include additional labels"
  • Backport changed from pacific, octopus, nautilus to pacific

In addition to the additional metadata, this feature will also introduce used_bytes at the pool level, reflecting the actual space consumed after replication/compression/EC has taken place.
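
As a rough sketch of where a per-pool used_bytes value could come from inside an mgr module, assuming the pool stats shape of the JSON df report (as in 'ceph df -f json'); the helper is hypothetical and the actual metric wiring in the PR may differ.

    # Hypothetical sketch: map pool id -> raw bytes used from the mgr 'df' report.
    def pool_used_bytes(df_report):
        usage = {}
        for pool in df_report.get('pools', []):
            # 'bytes_used' reflects space consumed after replication/compression/EC
            usage[pool['id']] = pool['stats'].get('bytes_used', 0)
        return usage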

#4

Updated by Paul Cuzner about 3 years ago

  • Status changed from New to Fix Under Review
  • % Done changed from 0 to 80
  • Pull request ID set to 40635

PR submitted

#5

Updated by Paul Cuzner about 3 years ago

  • Status changed from Fix Under Review to Pending Backport
#6

Updated by Backport Bot about 3 years ago

  • Copied to Backport #50294: pacific: mgr/prometheus: Update ceph_pool_* metrics to include additional labels added
#7

Updated by Loïc Dachary almost 3 years ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
