Bug #41414
OSDMonitor: deleted pool still shown in stats via `ceph status`
Status:
Closed
% Done:
0%
Source:
Development
Regression:
No
Severity:
3 - minor
Description
pdonnell@senta02 ~/ceph/build$ bin/ceph status
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2019-08-23T18:28:04.389-0400 7f9a99cb2700 -1 WARNING: all dangerous and experimental features are enabled.
2019-08-23T18:28:04.473-0400 7f9a99cb2700 -1 WARNING: all dangerous and experimental features are enabled.
  cluster:
    id:     a38b33a9-1eae-40c7-982f-645b415e6d45
    health: HEALTH_ERR
            Module 'dashboard' has failed: error("No socket could be created -- (('::', 41007, 0, 0): [Errno 98] Address already in use)",)

  services:
    mon: 3 daemons, quorum a,b,c (age 16m)
    mgr: x(active, since 16m)
    mds: a:1 {0=a=up:active}
    osd: 3 osds: 3 up (since 14m), 3 in (since 14m)

  data:
    pools:   4 pools, 24 pgs
    objects: 22 objects, 2.2 KiB
    usage:   6.0 GiB used, 3.0 TiB / 3.0 TiB avail
    pgs:     24 active+clean

pdonnell@senta02 ~/ceph/build$ bin/ceph pg ls
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2019-08-23T18:28:20.585-0400 7f5477899700 -1 WARNING: all dangerous and experimental features are enabled.
2019-08-23T18:28:20.677-0400 7f5477899700 -1 WARNING: all dangerous and experimental features are enabled.
PG   OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES  OMAP_BYTES*  OMAP_KEYS*  LOG  STATE         SINCE  VERSION  REPORTED  UP         ACTING     SCRUB_STAMP                      DEEP_SCRUB_STAMP
1.0  4  0  0  0  0     0  0  8  active+clean  14m  17'8  38:40  [1,0,2]p1  [1,0,2]p1  2019-08-23T18:13:41.247841-0400  2019-08-23T18:13:41.247841-0400
1.1  0  0  0  0  0     0  0  0  active+clean  14m  0'0   38:32  [2,0,1]p2  [2,0,1]p2  2019-08-23T18:13:41.247841-0400  2019-08-23T18:13:41.247841-0400
1.2  0  0  0  0  0     0  0  0  active+clean  14m  0'0   38:32  [0,1,2]p0  [0,1,2]p0  2019-08-23T18:13:41.247841-0400  2019-08-23T18:13:41.247841-0400
1.3  2  0  0  0  34    0  0  2  active+clean  14m  17'2  38:34  [1,2,0]p1  [1,2,0]p1  2019-08-23T18:13:41.247841-0400  2019-08-23T18:13:41.247841-0400
1.4  4  0  0  0  1526  0  0  6  active+clean  14m  17'6  38:38  [1,0,2]p1  [1,0,2]p1  2019-08-23T18:13:41.247841-0400  2019-08-23T18:13:41.247841-0400
1.5  2  0  0  0  90    0  0  3  active+clean  14m  17'3  38:35  [2,0,1]p2  [2,0,1]p2  2019-08-23T18:13:41.247841-0400  2019-08-23T18:13:41.247841-0400
1.6  1  0  0  0  0     0  0  1  active+clean  14m  18'1  38:33  [1,0,2]p1  [1,0,2]p1  2019-08-23T18:13:41.247841-0400  2019-08-23T18:13:41.247841-0400
1.7  1  0  0  0  0     0  0  2  active+clean  14m  17'2  38:34  [1,2,0]p1  [1,2,0]p1  2019-08-23T18:13:41.247841-0400  2019-08-23T18:13:41.247841-0400
1.8  0  0  0  0  0     0  0  0  active+clean  14m  0'0   38:32  [1,2,0]p1  [1,2,0]p1  2019-08-23T18:13:41.247841-0400  2019-08-23T18:13:41.247841-0400
1.9  0  0  0  0  0     0  0  0  active+clean  14m  0'0   38:32  [1,2,0]p1  [1,2,0]p1  2019-08-23T18:13:41.247841-0400  2019-08-23T18:13:41.247841-0400
1.a  0  0  0  0  0     0  0  1  active+clean  14m  17'1  38:33  [1,2,0]p1  [1,2,0]p1  2019-08-23T18:13:41.247841-0400  2019-08-23T18:13:41.247841-0400
1.b  1  0  0  0  0     0  0  3  active+clean  14m  17'3  38:35  [1,0,2]p1  [1,0,2]p1  2019-08-23T18:13:41.247841-0400  2019-08-23T18:13:41.247841-0400
1.c  1  0  0  0  0     0  0  2  active+clean  14m  17'2  38:34  [1,0,2]p1  [1,0,2]p1  2019-08-23T18:13:41.247841-0400  2019-08-23T18:13:41.247841-0400
1.d  2  0  0  0  70    0  0  3  active+clean  14m  17'3  38:35  [2,1,0]p2  [2,1,0]p2  2019-08-23T18:13:41.247841-0400  2019-08-23T18:13:41.247841-0400
1.e  1  0  0  0  0     0  0  3  active+clean  14m  17'3  38:35  [1,2,0]p1  [1,2,0]p1  2019-08-23T18:13:41.247841-0400  2019-08-23T18:13:41.247841-0400
1.f  3  0  0  0  566   0  0  5  active+clean  14m  17'5  38:37  [2,0,1]p2  [2,0,1]p2  2019-08-23T18:13:41.247841-0400  2019-08-23T18:13:41.247841-0400
2.0  0  0  0  0  0     0  0  0  active+clean  14m  0'0   38:31  [2,1,0]p2  [2,1,0]p2  2019-08-23T18:13:42.289003-0400  2019-08-23T18:13:42.289003-0400
2.1  0  0  0  0  0     0  0  0  active+clean  14m  0'0   38:31  [2,1,0]p2  [2,1,0]p2  2019-08-23T18:13:42.289003-0400  2019-08-23T18:13:42.289003-0400
2.2  0  0  0  0  0     0  0  0  active+clean  14m  0'0   38:31  [0,1,2]p0  [0,1,2]p0  2019-08-23T18:13:42.289003-0400  2019-08-23T18:13:42.289003-0400
2.3  0  0  0  0  0     0  0  0  active+clean  14m  0'0   38:31  [1,2,0]p1  [1,2,0]p1  2019-08-23T18:13:42.289003-0400  2019-08-23T18:13:42.289003-0400
2.4  0  0  0  0  0     0  0  0  active+clean  14m  0'0   38:31  [1,0,2]p1  [1,0,2]p1  2019-08-23T18:13:42.289003-0400  2019-08-23T18:13:42.289003-0400
2.5  0  0  0  0  0     0  0  0  active+clean  14m  0'0   38:31  [1,0,2]p1  [1,0,2]p1  2019-08-23T18:13:42.289003-0400  2019-08-23T18:13:42.289003-0400
2.6  0  0  0  0  0     0  0  0  active+clean  14m  0'0   38:31  [1,0,2]p1  [1,0,2]p1  2019-08-23T18:13:42.289003-0400  2019-08-23T18:13:42.289003-0400
2.7  0  0  0  0  0     0  0  0  active+clean  14m  0'0   38:31  [1,0,2]p1  [1,0,2]p1  2019-08-23T18:13:42.289003-0400  2019-08-23T18:13:42.289003-0400

* NOTE: Omap statistics are gathered during deep scrub and may be inaccurate soon afterwards depending on utilisation. See http://docs.ceph.com/docs/master/dev/placement-group/#omap-statistics for further details.

pdonnell@senta02 ~/ceph/build$ bin/ceph osd pool ls
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2019-08-23T18:28:35.384-0400 7f914c5dd700 -1 WARNING: all dangerous and experimental features are enabled.
2019-08-23T18:28:35.488-0400 7f914c5dd700 -1 WARNING: all dangerous and experimental features are enabled.
cephfs.a.meta
cephfs.a.data
Note the discrepancy above: `ceph status` still reports 4 pools, while `ceph osd pool ls` lists only two (cephfs.a.meta and cephfs.a.data). This misaccounting only appears to happen if the PGs are allowed to finish becoming active before the pools are deleted.
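The stale-count condition can be checked mechanically by comparing the pool count from the pgmap section of `ceph status --format json` against the pool list from `ceph osd pool ls --format json`. A minimal sketch, assuming the `pgmap.num_pools` field name in the status JSON and a plain JSON array from `osd pool ls`; the sample data below mimics the outputs in this report rather than a live cluster:

```python
import json

def pool_count_mismatch(status_json: str, pool_ls_json: str):
    """Compare the pool count reported in the pgmap stats of
    `ceph status` with the pools actually present per
    `ceph osd pool ls`. Returns (reported, actual) so the caller
    can flag reported != actual as stale stats."""
    reported = json.loads(status_json)["pgmap"]["num_pools"]
    actual = len(json.loads(pool_ls_json))
    return reported, actual

# Sample data mirroring the report above: status still claims 4 pools,
# while only the two cephfs pools actually exist.
status = '{"pgmap": {"num_pools": 4, "num_pgs": 24}}'
pools = '["cephfs.a.meta", "cephfs.a.data"]'
print(pool_count_mismatch(status, pools))  # → (4, 2)
```

On a live cluster the two JSON strings would come from the CLI (e.g. via `subprocess`); a mismatch that persists after the deletion commits indicates the mgr is serving stale pgmap stats.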
Updated by Neha Ojha over 4 years ago
- Is duplicate of Bug #40011: ceph -s shows wrong number of pools when pool was deleted added
Updated by Neha Ojha over 4 years ago
- Project changed from RADOS to mgr
Moving this to mgr since the original bug belongs there; feel free to change it back to RADOS if investigation shows this is not a mgr-level bug.