Bug #7912

Wrong pool count when using "ceph -s" and "ceph -w"

Added by Volker Voigt almost 10 years ago. Updated almost 10 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%

Source:
other
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi,

I am running ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60).

I created a bunch of pools and deleted some of them afterwards. Now, when I run "ceph -s" (or "ceph -w"), the reported number of pools does not match the number of pools that actually exist.


root@csdeveubap-u01mon01:~# ceph -s
cluster daa9fff9-0da5-43a8-ae2b-0a186030e2ad
health HEALTH_OK
monmap e5: 5 mons at {csdeveubap-u01mon01=10.88.32.6:6789/0,csdeveubap-u01mon02=10.88.32.7:6789/0,csdeveubs-u01mon01=10.88.7.11:6789/0,csdeveubs-u01mon02=10.88.7.12:6789/0,csdeveubs-u01mon03=10.88.7.13:6789/0}, election epoch 146, quorum 0,1,2,3,4 csdeveubap-u01mon01,csdeveubap-u01mon02,csdeveubs-u01mon01,csdeveubs-u01mon02,csdeveubs-u01mon03
osdmap e1302: 180 osds: 180 up, 180 in
pgmap v281598: 35008 pgs, 10 pools, 3721 MB data, 2884 objects
32825 MB used, 654 TB / 654 TB avail
35008 active+clean

The output above shows "pgmap v281598: 35008 pgs, 10 pools".


root@csdeveubap-u01mon01:~# ceph osd lspools
0 data,1 metadata,2 rbd,4 cloudstorage_dev,6 csdev_jenkins,8 csdev-developer,9 cosbench-1,

But only 7 pools actually exist, as ceph osd lspools correctly shows above. The same goes for the osd dump below (a one-liner comparison of the two counts follows it):


root@csdeveubap-u01mon01:~# ceph osd dump|grep pool
pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
pool 4 'cloudstorage_dev' rep size 4 min_size 2 crush_ruleset 3 object_hash rjenkins pg_num 16384 pgp_num 16384 last_change 938 owner 0 flags hashpspool
pool 6 'csdev_jenkins' rep size 3 min_size 1 crush_ruleset 4 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 950 owner 0 flags hashpspool
pool 8 'csdev-developer' rep size 2 min_size 1 crush_ruleset 4 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 1298 owner 0 flags hashpspool
pool 9 'cosbench-1' rep size 4 min_size 2 crush_ruleset 3 object_hash rjenkins pg_num 16384 pgp_num 16384 last_change 1293 owner 0 flags hashpspool
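
For a quick side-by-side check, the actual pool count can be derived from either command and set against the pgmap line. This is only an illustrative sketch; the one-liners below are one way among many to count the pools:

ceph osd dump | grep -c '^pool '                # counts the pool lines from the dump (7 here)
ceph osd lspools | tr ',' '\n' | grep -c .      # counts the entries printed by lspools (7 here)
ceph -s | grep pgmap                            # still reports 10 pools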

By the way, when I call "ceph -s -f json" (or xml), no pool count is given at all.
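
For reference, one way to pretty-print the JSON status output and confirm that the pgmap section carries no pool count (python's json.tool is used here purely for formatting; the exact field layout varies by release):

ceph -s -f json | python -mjson.tool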

Creating a new pool increases the count to 11, but when I delete that pool afterwards the count does not drop back; it stays at 11, and so on. A possible sequence to reproduce this is sketched below.
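
This is a minimal reproduction sketch, not a verbatim transcript: the pool name "testpool" and the PG counts are arbitrary placeholders, and the delete confirmation syntax may differ slightly between releases.

ceph -s | grep pgmap                            # note the reported pool count
ceph osd pool create testpool 64 64             # pool count goes up by one, as expected
ceph -s | grep pgmap
ceph osd pool delete testpool testpool --yes-i-really-really-mean-it
ceph -s | grep pgmap                            # bug: the count stays at the higher value
ceph osd lspools                                # the deleted pool is gone here, as expected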

Associated revisions

Revision 81853c61 (diff)
Added by Sage Weil almost 10 years ago

mon/PGMap: clear pool sum when last pg is deleted

Use the x.0 pg as a sentinel for the existence of the pool. Note that we
have to clean it up in two paths: apply_incremental (which is actually
deprecated) and the normal PGMonitor refresh.

Fixes: #7912
Signed-off-by: Sage Weil <>

History

#1 Updated by Sage Weil almost 10 years ago

  • Status changed from New to Resolved
