Bug #22070

"ceph pg dump" has stale states listed (322 PGs in states, only 320 PGs in cluster!)

Added by linghucong linghucong over 6 years ago. Updated almost 6 years ago.

Status: Can't reproduce
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

The `ceph pg dump` command does not show the scrubbing PGs, as seen below.

It also looks like the state summary in `ceph -s` counts two more PGs than the cluster total. Where do those two PGs come

root@node-1150:~# ceph -s
  cluster:
    id:     3edc30f3-2157-4251-b94c-2a81db839bc8
    health: HEALTH_WARN
            too many PGs per OSD (320 > max 300)

  services:
    mon: 3 daemons, quorum node-1150,node-1151,node-1152
    mgr: node-1150(active), standbys: node-1151, node-1152
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   5 pools, 320 pgs
    objects: 17.1K objects, 93.7G
    usage:   288G used, 2.45T / 2.73T avail
    pgs:     320 active+clean
             1   active+clean+scrubbing+deep
             1   active+clean+scrubbing

  io:
    client: 341 B/s wr, 0 op/s rd, 0 op/s wr

root@node-1150:~# ceph pg dump|grep scrubbing
dumped all
root@node-1150:~# ceph pg dump|grep deep
dumped all
root@node-1150:~# ceph health detail
HEALTH_WARN too many PGs per OSD (320 > max 300)
TOO_MANY_PGS too many PGs per OSD (320 > max 300)
root@node-1150:~# ceph -v
ceph version 13.0.0-2613-gce6ba63 (ce6ba63e143b194dc6f42f0f9620df8673161da7) mimic (dev)
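One way to cross-check the `ceph -s` state summary against the per-PG dump is to recount states directly from `ceph pg dump --format json` output: the per-state tallies rebuilt this way must sum to the cluster's PG count, unlike the stale mon-side counters. A minimal sketch (the sample data below is hypothetical and heavily trimmed, and the JSON field names may differ between Ceph releases):

```python
import json
from collections import Counter

def count_pg_states(pg_dump_json: str) -> Counter:
    """Tally compound PG states (e.g. "active+clean+scrubbing")
    from `ceph pg dump --format json` output."""
    dump = json.loads(pg_dump_json)
    return Counter(pg["state"] for pg in dump["pg_stats"])

# Hypothetical sample mimicking the structure of a real dump.
sample = json.dumps({
    "pg_stats": [
        {"pgid": "1.0", "state": "active+clean"},
        {"pgid": "1.1", "state": "active+clean+scrubbing"},
        {"pgid": "1.2", "state": "active+clean"},
    ]
})

counts = count_pg_states(sample)
print(counts)                # per-state tallies
print(sum(counts.values()))  # should equal the total PG count
```

If the sum of these recounted tallies matches the pool total while the `ceph -s` summary does not, the discrepancy is on the mon/mgr side rather than in the PG stats themselves.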

History

#1 Updated by linghucong linghucong over 6 years ago

It looks like the `ceph -s` command is handled by the mon, and its num_pg_by_state is not correct, but I have not found out why.
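The suspicion above, that a per-state counter like num_pg_by_state drifts out of sync with the actual per-PG states, matches a classic bug pattern: the counter is incremented for a PG's new state without being decremented for its old one. A toy illustration (not Ceph code; all names here are hypothetical):

```python
from collections import Counter

class PGStateTally:
    """Toy model of a per-state counter like num_pg_by_state."""

    def __init__(self):
        self.state_of = {}             # pgid -> current state
        self.num_pg_by_state = Counter()

    def update(self, pgid, new_state):
        """Correct: decrement the old state before counting the new one."""
        old = self.state_of.get(pgid)
        if old is not None:
            self.num_pg_by_state[old] -= 1
            if self.num_pg_by_state[old] == 0:
                del self.num_pg_by_state[old]
        self.state_of[pgid] = new_state
        self.num_pg_by_state[new_state] += 1

    def buggy_update(self, pgid, new_state):
        """Buggy: forgets the old state, leaving a stale counter entry."""
        self.state_of[pgid] = new_state
        self.num_pg_by_state[new_state] += 1

tally = PGStateTally()
tally.update("1.0", "active+clean")
tally.buggy_update("1.0", "active+clean+scrubbing")
print(sum(tally.num_pg_by_state.values()))  # 2 states counted for 1 PG
```

After the buggy update, one PG is counted under two states, so the summed state counts exceed the real PG total, exactly the 322-vs-320 mismatch reported in this ticket.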

#2 Updated by Greg Farnum over 6 years ago

  • Project changed from Ceph to mgr
  • Subject changed from can not find scrub pg in ceph pg dump command. to "ceph pg dump" has stale states listed (322 PGs in states, only 320 PGs in cluster!)

#3 Updated by Sage Weil almost 6 years ago

  • Status changed from New to Can't reproduce

This was an old dev commit. If you are still seeing this on current master or a stable release, please reopen the ticket.
