Bug #41832 (closed): Different pools count in ceph -s and ceph osd pool ls

Added by Fyodor Ustinov over 4 years ago. Updated over 4 years ago.

Status: Duplicate
Priority: Normal
Assignee: -
Category: ceph cli
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

After removing a number of pools, ceph -s and ceph osd pool ls report different pool counts:

[root@S-26-6-1-2 cph]# ceph -s
  cluster:
    id:     cef4c4fd-82d3-4b79-8b48-3d94e9496c9b
    health: HEALTH_OK

  services:
    mon:         3 daemons, quorum S-26-4-1-2,S-26-4-2-2,S-26-6-1-2 (age 3d)
    mgr:         S-26-4-1-2(active, since 30h), standbys: S-26-6-1-2, S-26-4-2-2
    osd:         96 osds: 96 up (since 3d), 96 in (since 6d)

  data:
    pools:   12 pools, 1796 pgs
    objects: 6.16M objects, 20 TiB
    usage:   44 TiB used, 435 TiB / 478 TiB avail
    pgs:     1795 active+clean
             1    active+clean+scrubbing+deep

  io:
    client:   96 MiB/s rd, 93 MiB/s wr, 196 op/s rd, 136 op/s wr

[root@S-26-6-1-2 cph]# ceph osd pool ls
iscsi
rbd
rbdtier
iscsitier
rbde3ntdata
rbde3ntmeta
rbdnt
[root@S-26-6-1-2 cph]#
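
For anyone hitting the same mismatch, a quick way to put the two numbers side by side (a sketch, not part of the original report; it assumes a standard Nautilus CLI and only adds wc and grep to the commands already shown above):

# Pools actually present in the OSDMap (7 in the listing above)
ceph osd pool ls | wc -l
ceph osd dump | grep -c "^pool "

# The summary figure that disagrees (12 in this report)
ceph -s | grep "pools:"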


Related issues: 1 (0 open, 1 closed)

Is duplicate of: mgr - Bug #40011: ceph -s shows wrong number of pools when pool was deleted (Resolved, Daniel Oliveira)

#1

Updated by Jan Fajerski over 4 years ago

  • Status changed from New to Duplicate

I assume you deleted a pool prior to this? If so, this is a duplicate of https://tracker.ceph.com/issues/40011. Please re-open if you disagree.

#2

Updated by Nathan Cutler over 4 years ago

  • Is duplicate of Bug #40011 (ceph -s shows wrong number of pools when pool was deleted) added
#3

Updated by Nathan Cutler over 4 years ago

Hi Fyodor, can you tell us more about the environment where you saw this behavior? (Ideally, the version of ceph (from `ceph --version`), the operating system, and operating system version.)

#4

Updated by Fyodor Ustinov over 4 years ago

Nathan Cutler wrote:

Hi Fyodor, can you tell us more about the environment where you saw this behavior? (Ideally, the version of ceph (from `ceph --version`), the operating system, and operating system version.)

[root@S-26-6-1-2 cph]# ceph --version
ceph version 14.2.3 (0f776cf838a1ae3130b2b73dc26be9c95c6ccc39) nautilus (stable)

[root@S-26-6-1-2 cph]# uname -a
Linux S-26-6-1-2 5.2.11-1.el7.elrepo.x86_64 #1 SMP Thu Aug 29 08:10:52 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux

[root@S-26-6-1-2 cph]# cat /etc/centos-release
CentOS Linux release 7.6.1810 (Core)

Additional info (maybe this is important): this cluster has been upgraded from Mimic.

