Bug #40011 (closed)

ceph -s shows wrong number of pools when pool was deleted

Added by Jan Fajerski almost 5 years ago. Updated over 3 years ago.

Status: Resolved
Priority: High
Assignee: Daniel Oliveira
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport: nautilus, mimic
Regression: No
Severity: 3 - minor
Reviewed:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

This is reproducible in a vstart cluster:

 MDS=0 ../src/vstart.sh -n -b -d
 bin/ceph osd pool create foo 12
 bin/ceph osd pool create bar 12
 bin/ceph osd pool create foobar 12
 bin/ceph -s
 bin/ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
 bin/ceph osd pool rm foo foo --yes-i-really-really-mean-it
 bin/ceph -s
 bin/ceph osd lspools
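As a side note, the injectargs step above can be replaced by a persistent config-database setting on Mimic and later; a minimal equivalent, assuming the vstart build exposes the standard `ceph config set` command:

 bin/ceph config set mon mon_allow_pool_delete true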

"ceph -s" show the following at the first invocation:

*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2019-05-23 10:26:46.503 7fbb7db4c700 -1 WARNING: all dangerous and experimental features are enabled.
2019-05-23 10:26:46.519 7fbb7db4c700 -1 WARNING: all dangerous and experimental features are enabled.
  cluster:
    id:     d240be1a-33ca-483d-94e7-aadc47d6e8a4
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 18m)
    mgr: x(active, since 17m)
    osd: 3 osds: 3 up (since 17m), 3 in (since 17m)

  data:
    pools:   3 pools, 36 pgs
    objects: 0 objects, 0 B
    usage:   6.0 GiB used, 27 GiB / 33 GiB avail
    pgs:     36 active+clean

After deleting the pool:

*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2019-05-23 10:27:02.763 7f9f5f7d2700 -1 WARNING: all dangerous and experimental features are enabled.
2019-05-23 10:27:02.783 7f9f5f7d2700 -1 WARNING: all dangerous and experimental features are enabled.
  cluster:
    id:     d240be1a-33ca-483d-94e7-aadc47d6e8a4
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 18m)
    mgr: x(active, since 18m)
    osd: 3 osds: 3 up (since 17m), 3 in (since 17m)

  data:
    pools:   3 pools, 24 pgs
    objects: 0 objects, 0 B
    usage:   6.0 GiB used, 27 GiB / 33 GiB avail
    pgs:     24 active+clean

Note that the PG count changes as expected, but the number of pools does not. "ceph osd lspools" is not affected.
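For quick triage, the two counts can also be compared from machine-readable output. A minimal sketch, assuming `jq` is installed and that the status JSON of this release exposes the pool total under `pgmap.num_pools` (field name is an assumption, not verified against every release):

 # pool count from the status digest (the value shown by "ceph -s")
 bin/ceph -s -f json | jq '.pgmap.num_pools'
 # pool count from the OSDMap, which per the report stays correct
 bin/ceph osd lspools -f json | jq 'length'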


Related issues (9: 0 open, 9 closed)

Related to mgr - Bug #40871: osd status reports old crush location after osd moves (Resolved, Kefu Chai)
Has duplicate mgr - Bug #41414: OSDMonitor: deleted pool still shown in stats via `ceph status` (Duplicate)
Has duplicate Ceph - Bug #41832: Different pools count in ceph -s and ceph osd pool ls (Duplicate, 09/14/2019)
Has duplicate RADOS - Bug #41944: inconsistent pool count in ceph -s output (Resolved, 09/20/2019)
Has duplicate RADOS - Bug #42592: ceph-mon/mgr PGstat Segmentation Fault (Duplicate, 11/01/2019)
Has duplicate RADOS - Bug #42689: nautilus mon/mgr: ceph status: pool number display is not right (Duplicate, 11/08/2019)
Has duplicate CephFS - Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash (Resolved)
Copied to mgr - Backport #42857: mimic: ceph -s shows wrong number of pools when pool was deleted (Rejected)
Copied to mgr - Backport #42858: nautilus: ceph -s shows wrong number of pools when pool was deleted (Resolved, Nathan Cutler)
