Bug #41832: Different pools count in ceph -s and ceph osd pool ls
Status: Closed
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
After removing several pools, the pool count reported by ceph -s no longer matches the output of ceph osd pool ls:
[root@S-26-6-1-2 cph]# ceph -s
  cluster:
    id:     cef4c4fd-82d3-4b79-8b48-3d94e9496c9b
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum S-26-4-1-2,S-26-4-2-2,S-26-6-1-2 (age 3d)
    mgr: S-26-4-1-2(active, since 30h), standbys: S-26-6-1-2, S-26-4-2-2
    osd: 96 osds: 96 up (since 3d), 96 in (since 6d)

  data:
    pools:   12 pools, 1796 pgs
    objects: 6.16M objects, 20 TiB
    usage:   44 TiB used, 435 TiB / 478 TiB avail
    pgs:     1795 active+clean
             1    active+clean+scrubbing+deep

  io:
    client: 96 MiB/s rd, 93 MiB/s wr, 196 op/s rd, 136 op/s wr

[root@S-26-6-1-2 cph]# ceph osd pool ls
iscsi
rbd
rbdtier
iscsitier
rbde3ntdata
rbde3ntmeta
rbdnt
[root@S-26-6-1-2 cph]#
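The mismatch can be demonstrated programmatically by comparing the two counts. A minimal sketch, assuming the command outputs above have been captured as strings (the sample data below is copied from this report, not generated live):

```python
import re

# Pools line from the `data:` section of `ceph -s` on the affected cluster.
ceph_status = "pools: 12 pools, 1796 pgs"

# Output of `ceph osd pool ls` on the same cluster, one pool name per line.
pool_ls = """iscsi
rbd
rbdtier
iscsitier
rbde3ntdata
rbde3ntmeta
rbdnt"""

# Pool count as reported in the status summary.
status_count = int(re.search(r"(\d+) pools", ceph_status).group(1))

# Pool count as the number of names actually listed.
listed_count = len(pool_ls.splitlines())

print(status_count, listed_count)  # prints "12 7"
```

On a healthy cluster both numbers should agree; here ceph -s still counts 12 pools while only 7 remain listed after the deletions.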