Bug #18251

There are some scrub errors after resetting a node.

Added by de lan over 7 years ago. Updated almost 7 years ago.

Status:
Closed
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
rados
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

My Ceph environment:
3 mons on three nodes: 192.7.7.177, 192.7.7.180, 192.7.7.181
27 OSDs.
ceph version: 10.2.3.3
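
A side note on the HEALTH_WARN below: assuming all eight pools use 3x replication (an assumption; the pool sizes are not stated in this report), the per-OSD figure works out as

5416 PGs * 3 replicas / 27 OSDs ≈ 601 PG copies per OSD

which matches the reported "601 > max 300" against the default mon_pg_warn_max_per_osd of 300.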

Before the node reset, the cluster is OK:
[root@ceph177 ~]# ceph -s
cluster 89b09a3d-9609-59e8-ce17-0b4170d65212
health HEALTH_WARN
too many PGs per OSD (601 > max 300)
monmap e1: 3 mons at {192.7.7.177=111.111.111.177:6789/0,192.7.7.180=111.111.111.180:6789/0,192.7.7.181=111.111.111.181:6789/0}
election epoch 134, quorum 0,1,2 192.7.7.177,192.7.7.180,192.7.7.181
osdmap e4586: 27 osds: 27 up, 27 in
flags sortbitwise
pgmap v1271837: 5416 pgs, 8 pools, 686 GB data, 284 kobjects
2216 GB used, 22922 GB / 25138 GB avail
5416 active+clean
client io 373 kB/s rd, 0 B/s wr, 417 op/s rd, 0 op/s wr

When I reset the 192.7.7.181 node:
[root@ceph177 ~]# ceph -s
cluster 89b09a3d-9609-59e8-ce17-0b4170d65212
health HEALTH_ERR
12 pgs inconsistent
38 scrub errors
too many PGs per OSD (601 > max 300)
monmap e1: 3 mons at {192.7.7.177=111.111.111.177:6789/0,192.7.7.180=111.111.111.180:6789/0,192.7.7.181=111.111.111.181:6789/0}
election epoch 142, quorum 0,1,2 192.7.7.177,192.7.7.180,192.7.7.181
osdmap e4627: 27 osds: 27 up, 27 in
flags sortbitwise
pgmap v1274284: 5416 pgs, 8 pools, 686 GB data, 284 kobjects
2216 GB used, 22922 GB / 25138 GB avail
5404 active+clean
12 active+clean+inconsistent
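
For anyone hitting the same symptom, a typical way to narrow this down from here (a sketch, not output from this cluster; the pool name "rbd" and PG id "2.1f" are placeholders):

ceph health detail                                      # list the inconsistent PG ids
rados list-inconsistent-pg rbd                          # inconsistent PGs in a given pool (Jewel and later)
rados list-inconsistent-obj 2.1f --format=json-pretty   # which objects/shards failed scrub, and why
ceph pg repair 2.1f                                     # rewrite the bad copy from the authoritative replica

The list-inconsistent-obj output is the interesting part here: it distinguishes read errors, size mismatches, and digest mismatches, which would help tell torn writes from disk corruption after the reset.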
