Bug #41832 (closed)

Different pool counts in ceph -s and ceph osd pool ls

Added by Fyodor Ustinov over 4 years ago. Updated over 4 years ago.

Status: Duplicate
Priority: Normal
Assignee: -
Category: ceph cli
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

After removing several pools, the pool count shown in the ceph -s output no longer matches the list returned by ceph osd pool ls:

[root@S-26-6-1-2 cph]# ceph -s
  cluster:
    id:     cef4c4fd-82d3-4b79-8b48-3d94e9496c9b
    health: HEALTH_OK

  services:
    mon:         3 daemons, quorum S-26-4-1-2,S-26-4-2-2,S-26-6-1-2 (age 3d)
    mgr:         S-26-4-1-2(active, since 30h), standbys: S-26-6-1-2, S-26-4-2-2
    osd:         96 osds: 96 up (since 3d), 96 in (since 6d)

  data:
    pools:   12 pools, 1796 pgs
    objects: 6.16M objects, 20 TiB
    usage:   44 TiB used, 435 TiB / 478 TiB avail
    pgs:     1795 active+clean
             1    active+clean+scrubbing+deep

  io:
    client:   96 MiB/s rd, 93 MiB/s wr, 196 op/s rd, 136 op/s wr

[root@S-26-6-1-2 cph]# ceph osd pool ls
iscsi
rbd
rbdtier
iscsitier
rbde3ntdata
rbde3ntmeta
rbdnt
[root@S-26-6-1-2 cph]#
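
The mismatch can be checked directly by comparing the pool count that ceph -s reports against the authoritative pool list in the OSD map. A minimal sketch, assuming jq is available and that this release exposes the count as num_pools under pgmap in the JSON status output (the exact field path is an assumption and may vary between releases):

# Compare the mgr-reported pool count with the OSD map's pool list.
# The "pools" figure in ceph -s is served from the PGMap maintained by
# ceph-mgr, while ceph osd pool ls enumerates the OSD map directly.
mgr_count=$(ceph -s --format json | jq '.pgmap.num_pools')   # cached count
osd_count=$(ceph osd pool ls | wc -l)                        # actual pools
echo "ceph -s: ${mgr_count} pools, ceph osd pool ls: ${osd_count} pools"

Here ceph osd pool ls lists only 7 pools while ceph -s still claims 12, so the stale figure appears to live in the mgr's cached statistics; that reading is consistent with the duplicate bug below being filed against the mgr component.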


Related issues: 1 (0 open, 1 closed)

Is duplicate of: mgr - Bug #40011: ceph -s shows wrong number of pools when pool was deleted (Resolved; assignee: Daniel Oliveira)
