Bug #50266: "ceph fs snapshot mirror daemon status" should not use json keys as value
Status: closed
% Done: 0%
Source: Development
Backport: pacific
Regression: No
Severity: 3 - minor
Description
Currently the command outputs:
{ "14135": { "1": { "name": "a", "directory_count": 0, "peers": { "ae3f22e6-1c72-4a81-8d5d-eebca3bfd29d": { "remote": { "client_name": "client.mirror_remote", "cluster_name": "site-remote", "fs_name": "backup_fs" }, "stats": { "failure_count": 0, "recovery_count": 0 } } } } } }
JSON keys should not be values; I'm proposing the following JSON instead:
{ "daemon_id": "444607", "filesystem_id": "1", "name": "myfs", "directory_count": 0, "peers": [ { "uuid": "4a6983c0-3c9d-40f5-b2a9-2334a4659827", "remote": { "client_name": "client.mirror_remote", "cluster_name": "site-remote", "fs_name": "backup_fs" }, "stats": { "failure_count": 0, "recovery_count": 0 } } ] }
This will make parsing the JSON and unmarshalling it into other languages' types easier.
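For a rough sense of why fixed keys help, here is a minimal Go sketch (the type and field names are my own, not anything from Ceph or an existing client library): with the current output, the daemon id and filesystem id have to be decoded as map keys of unknown name, whereas the proposed layout maps directly onto plain struct fields.

package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical types mirroring the proposed layout; the names are
// illustrative and not taken from the Ceph code base.
type Remote struct {
	ClientName  string `json:"client_name"`
	ClusterName string `json:"cluster_name"`
	FSName      string `json:"fs_name"`
}

type Stats struct {
	FailureCount  int `json:"failure_count"`
	RecoveryCount int `json:"recovery_count"`
}

type Peer struct {
	UUID   string `json:"uuid"`
	Remote Remote `json:"remote"`
	Stats  Stats  `json:"stats"`
}

type DaemonStatus struct {
	DaemonID       string `json:"daemon_id"`
	FilesystemID   string `json:"filesystem_id"`
	Name           string `json:"name"`
	DirectoryCount int    `json:"directory_count"`
	Peers          []Peer `json:"peers"`
}

func main() {
	// Trimmed sample of the proposed output (peers omitted for brevity).
	raw := []byte(`{"daemon_id":"444607","filesystem_id":"1","name":"myfs","directory_count":0,"peers":[]}`)

	var status DaemonStatus
	if err := json.Unmarshal(raw, &status); err != nil {
		panic(err)
	}
	fmt.Println(status.DaemonID, status.Name, len(status.Peers))
}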
Updated by Sébastien Han about 3 years ago
After discussing with Venky, it seems that a daemon can mirror multiple file systems, so we need another list for them; the JSON should look like:
{ "daemon_id": "444607", "filesystems": [ { "filesystem_id": "1", "name": "myfs", "directory_count": 0, "peers": [ { "uuid": "4a6983c0-3c9d-40f5-b2a9-2334a4659827", "remote": { "client_name": "client.mirror_remote", "cluster_name": "site-remote", "fs_name": "backup_fs" }, "stats": { "failure_count": 0, "recovery_count": 0 } } ] } ] }
Updated by Patrick Donnelly about 3 years ago
- Assignee set to Venky Shankar
- Priority changed from Normal to High
- Target version set to v17.0.0
- Source set to Development
- Backport set to pacific
Updated by Venky Shankar about 3 years ago
- Status changed from New to In Progress
Updated by Venky Shankar almost 3 years ago
BTW, this is how the JSON would look for multiple active mirror daemon instances (yes, we only support running one instance right now; this is just for future proofing):
[ { "daemon_id": 284167, "filesystems": [ { "filesystem_id": 1, "name": "a", "directory_count": 1, "peers": [ { "uuid": "02117353-8cd1-44db-976b-eb20609aa160", "remote": { "client_name": "client.mirror_remote", "cluster_name": "ceph", "fs_name": "backup_fs" }, "stats": { "failure_count": 1, "recovery_count": 0 } } ] } ] }, { "daemon_id": 324137, "filesystems": [ { "filesystem_id": 1, "name": "a", "directory_count": 0, "peers": [ { "uuid": "02117353-8cd1-44db-976b-eb20609aa160", "remote": { "client_name": "client.mirror_remote", "cluster_name": "ceph", "fs_name": "backup_fs" }, "stats": { "failure_count": 0, "recovery_count": 0 } } ] } ] } ]
Updated by Venky Shankar almost 3 years ago
- Status changed from In Progress to Fix Under Review
- Pull request ID set to 40933
Updated by Venky Shankar almost 3 years ago
- Status changed from Fix Under Review to Pending Backport
Updated by Backport Bot almost 3 years ago
- Copied to Backport #50537: pacific: "ceph fs snapshot mirror daemon status" should not use json keys as value added
Updated by Loïc Dachary almost 3 years ago
- Status changed from Pending Backport to Resolved
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".