Bug #41832

Different pools count in ceph -s and ceph osd pool ls

Added by Fyodor Ustinov 6 months ago. Updated 5 months ago.

Status:
Duplicate
Priority:
Normal
Assignee:
-
Category:
ceph cli
Target version:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature:

Description

After removing some pools, the pool count reported by `ceph -s` differs from the output of `ceph osd pool ls`:

[root@S-26-6-1-2 cph]# ceph -s
  cluster:
    id:     cef4c4fd-82d3-4b79-8b48-3d94e9496c9b
    health: HEALTH_OK

  services:
    mon:         3 daemons, quorum S-26-4-1-2,S-26-4-2-2,S-26-6-1-2 (age 3d)
    mgr:         S-26-4-1-2(active, since 30h), standbys: S-26-6-1-2, S-26-4-2-2
    osd:         96 osds: 96 up (since 3d), 96 in (since 6d)

  data:
    pools:   12 pools, 1796 pgs
    objects: 6.16M objects, 20 TiB
    usage:   44 TiB used, 435 TiB / 478 TiB avail
    pgs:     1795 active+clean
             1    active+clean+scrubbing+deep

  io:
    client:   96 MiB/s rd, 93 MiB/s wr, 196 op/s rd, 136 op/s wr

[root@S-26-6-1-2 cph]# ceph osd pool ls
iscsi
rbd
rbdtier
iscsitier
rbde3ntdata
rbde3ntmeta
rbdnt
[root@S-26-6-1-2 cph]#
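For reference, the discrepancy can be checked programmatically by comparing the count in the machine-readable status (assuming the `pgmap.num_pools` field of `ceph -s -f json`, which is where the related bug #40011 locates the stale counter) against the actual pool listing. A minimal sketch, using an illustrative JSON fragment that mirrors the numbers above rather than live cluster output:

```python
import json

# Illustrative fragment of `ceph -s -f json` output (numbers mirror the report
# above; the real command returns much more than this).
status = json.loads('{"pgmap": {"num_pools": 12}}')

# Pools actually returned by `ceph osd pool ls` in the report.
pools = ["iscsi", "rbd", "rbdtier", "iscsitier",
         "rbde3ntdata", "rbde3ntmeta", "rbdnt"]

reported = status["pgmap"]["num_pools"]
actual = len(pools)

# The counts disagree: 12 reported by the mgr, 7 pools actually present.
print(reported, actual)
```

On a live cluster the same comparison would substitute the real `ceph -s -f json` output and `ceph osd pool ls` listing for the hard-coded values.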


Related issues

Duplicates mgr - Bug #40011: ceph -s shows wrong number of pools when pool was deleted Pending Backport 05/23/2019

History

#1 Updated by Jan Fajerski 6 months ago

  • Status changed from New to Duplicate

I assume you deleted a pool prior to this? If so, this is a duplicate of https://tracker.ceph.com/issues/40011. Please re-open if you disagree.

#2 Updated by Nathan Cutler 5 months ago

  • Duplicates Bug #40011: ceph -s shows wrong number of pools when pool was deleted added

#3 Updated by Nathan Cutler 5 months ago

Hi Fyodor, can you tell us more about the environment where you saw this behavior? (Ideally, the version of ceph (from `ceph --version`), the operating system, and operating system version.)

#4 Updated by Fyodor Ustinov 5 months ago

Nathan Cutler wrote:

Hi Fyodor, can you tell us more about the environment where you saw this behavior? (Ideally, the version of ceph (from `ceph --version`), the operating system, and operating system version.)

[root@S-26-6-1-2 cph]# ceph --version
ceph version 14.2.3 (0f776cf838a1ae3130b2b73dc26be9c95c6ccc39) nautilus (stable)

[root@S-26-6-1-2 cph]# uname -a
Linux S-26-6-1-2 5.2.11-1.el7.elrepo.x86_64 #1 SMP Thu Aug 29 08:10:52 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux

[root@S-26-6-1-2 cph]# cat /etc/centos-release
CentOS Linux release 7.6.1810 (Core)

Additional info (maybe this is important): this cluster was upgraded from Mimic.
