Feature #55737

Add filesystem subvolumegroup and subvolume counts to telemetry

Added by Blaine Gardner almost 2 years ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Reviewed:
Affected Versions:
Pull request ID:

Description

Ceph telemetry's "rbd" section counts the number of images per pool, but there is no equivalent usage information for CephFS. To my understanding, the number of subvolumes is the closest equivalent metric. Counting subvolume groups might also be worthwhile. The Rook project is interested in collecting this telemetry, and I imagine it would be useful for cephadm as well.

An example of the "rbd" section in today's Ceph telemetry shows that "num_images_by_pool" is a simple list of integers indicating how many images each pool has.

"rbd": {
"mirroring_by_pool": [
false
],
"num_images_by_pool": [
0
],
"num_pools": 1
},

Can something similar be implemented for CephFS? I have suggested a format that might work below, based loosely on the "pools" telemetry section:

"fs": {
"count": 1,
"feature_flags": {
"enable_multiple": true,
"ever_enabled_multiple": true
},
"filesystems": [ {
"approx_ctime": "2022-05",
"balancer_enabled": false,
"bytes": 0,
"cached_caps": 0,
"cached_dns": 10,
"cached_inos": 16,
"cached_subtrees": 4,
"ever_allowed_features": 32,
"explicitly_allowed_features": 32,
"files": 0,
"max_mds": 1,
"num_data_pools": 1,
"num_in": 1,
"num_mds": 2,
"num_sessions": 0,
"num_standby_replay": 1,
"num_up": 1,
"snaps": 0,
"standby_count_wanted": 1,
"num_subvolume_groups": 1, # <--- add this
"num_subvolumes_across_all_groups": 3, # <--- add this
"subvolume_groups": [ # <--- add this section (or similar) {
"num_subvolumes": 3,
}
]
}
],
"num_standby_mds": 0,
"total_num_mds": 2
},
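
For comparison, the proposed counts can already be derived from the existing CLI. Below is a minimal, illustrative sketch (not the telemetry module's actual implementation) that shells out to "ceph fs subvolumegroup ls" and "ceph fs subvolume ls" and counts the entries; the filesystem name "myfs" and the handling of ungrouped (default-group) subvolumes are assumptions for the example only.

#!/usr/bin/env python3
# Illustrative only: count subvolume groups and subvolumes for one filesystem
# by calling the existing Ceph CLI and parsing its JSON output.
import json
import subprocess

def ceph_json(*args):
    # Run a ceph command with JSON output and parse the result.
    out = subprocess.check_output(["ceph", *args, "--format", "json"])
    return json.loads(out)

def subvolume_counts(fs_name):
    groups = ceph_json("fs", "subvolumegroup", "ls", fs_name)
    counts = {
        "num_subvolume_groups": len(groups),
        "num_subvolumes_across_all_groups": 0,
        "subvolume_groups": [],
    }
    for group in groups:
        subvols = ceph_json("fs", "subvolume", "ls", fs_name, group["name"])
        counts["subvolume_groups"].append({"num_subvolumes": len(subvols)})
        counts["num_subvolumes_across_all_groups"] += len(subvols)
    # Subvolumes created without a group live in the default group and are
    # listed when no group name is given; whether to include them in the
    # total is a design decision for the telemetry format.
    counts["num_subvolumes_across_all_groups"] += len(
        ceph_json("fs", "subvolume", "ls", fs_name))
    return counts

if __name__ == "__main__":
    print(json.dumps(subvolume_counts("myfs"), indent=2))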

It would be nice to have this backported to Quincy.
