Bug #41944 (closed): inconsistent pool count in ceph -s output

Added by Sage Weil over 4 years ago. Updated over 3 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

> bash-4.4# ceph -s
>   cluster:
>     id:     6f73f37c-863f-4851-94ca-ea7fbd5eb044
>     health: HEALTH_OK
>     
>   services:
>     mon: 3 daemons, quorum a,b,c (age 3d)
>     mgr: a(active, since 3d)
>     mds: rook-ceph-cephfilesystem:1 {0=rook-ceph-cephfilesystem-a=up:active} 1 up:standby-replay
>     osd: 3 osds: 3 up (since 3d), 3 in (since 3d)
>     
>   data:
>     pools:   4 pools, 128 pgs
>     objects: 22 objects, 2.2 KiB
>     usage:   12 GiB used, 285 GiB / 297 GiB avail
>     pgs:     128 active+clean
>     
>   io:
>     client:   1.2 KiB/s rd, 2 op/s rd, 0 op/s wr
>     
> bash-4.4# ceph osd pool ls
> rook-ceph-cephfilesystem-metadata
> rook-ceph-cephfilesystem-data0   
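
The status summary reports 4 pools while ceph osd pool ls lists only 2. A quick way to cross-check the two counts from the CLI (a minimal sketch; the awk field position assumes the ceph -s text layout shown above, which can differ between releases):

    # Pool count as shown in the "data:" section of the status summary
    ceph -s | awk '/pools:/ {print $2}'

    # Pool count according to the OSD map, two independent ways
    ceph osd pool ls | wc -l
    ceph osd dump | grep -c '^pool '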

Related issues 1 (0 open, 1 closed)

Is duplicate of mgr - Bug #40011: ceph -s shows wrong number of pools when pool was deleted (Resolved, Daniel Oliveira)

#1 - Updated by Nathan Cutler over 4 years ago

Is this after pools are deleted? In that case, it's #40011.
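
If so, the #40011 scenario can be reproduced with something like the following (a sketch, assuming the cluster allows pool deletion via mon_allow_pool_delete; the pool name and pg count are arbitrary):

    # Create a throwaway pool, then delete it again
    ceph osd pool create testpool 8
    ceph osd pool delete testpool testpool --yes-i-really-really-mean-it

    # The status summary may still include the deleted pool in its count,
    # while the OSD map listing does not
    ceph -s | grep 'pools:'
    ceph osd pool ls | wc -l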

#2 - Updated by Nathan Cutler over 4 years ago

  • Is duplicate of Bug #40011: ceph -s shows wrong number of pools when pool was deleted added

#3 - Updated by Nathan Cutler over 3 years ago

  • Status changed from Need More Info to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
