Bug #37844
OSD medium errors do not generate warning or error
Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Hi,
I've seen inconsistent PGs a few times over the past few weeks:
OSD_SCRUB_ERRORS 1 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
    pg 5.74d is active+clean+inconsistent, acting [336,113,434,360,368,59,255,163,178,457,283]
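For anyone hitting the same thing: the shard that failed to read can usually be identified from the inconsistency report recorded by the last scrub. A minimal sketch, using the PG id from the output above:

# Show which object/shard is inconsistent and why (e.g. a read_error
# on one of the acting OSDs); PG id 5.74d is from the report above.
rados list-inconsistent-obj 5.74d --format=json-pretty

# Once the failing disk has been dealt with, the PG can be repaired:
ceph pg repair 5.74d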
While investigating the OSDs that are part of the PG, I found that one OSD had medium errors in dmesg, as well as errors in the ceph-osd log:
2019-01-08 17:56:13.199 7fd3fe1c5700 -1 bluestore(/var/lib/ceph/osd/ceph-457) _do_read bdev-read failed: (5) Input/output error
2019-01-08 18:38:07.751 7fd3fe1c5700 -1 bluestore(/var/lib/ceph/osd/ceph-457) _do_read bdev-read failed: (5) Input/output error
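To confirm the physical disk is at fault, the OSD can be mapped back to its host and block device and the drive checked directly. A sketch, where osd.457 is taken from the log above and /dev/sdX stands in for whatever device your deployment uses:

# Find the host that carries the failing OSD.
ceph osd find 457

# On that host, map the OSD to its backing block device.
ceph-volume lvm list

# Check the kernel log and the SMART data of that device for medium errors.
dmesg | grep -i 'medium error'
smartctl -a /dev/sdX    # substitute the device found above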
This has already happened to three disks over the past few weeks.
The problem is that nothing in the output of ceph health or other ceph commands warns about the failing disk or points in that direction.
I would expect ceph health detail to show some kind of warning when an OSD is hitting read errors like this.
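As an interim workaround until such a health warning exists, the bdev-read failures can be scraped out of the OSD logs by an external monitoring check. A sketch, assuming the default log location; adjust the path for your deployment:

# Alert if any OSD log on this host has recorded a BlueStore read failure.
grep -l 'bdev-read failed' /var/log/ceph/ceph-osd.*.log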
Thanks!!
Updated by Greg Farnum about 5 years ago
- Project changed from Ceph to RADOS
- Category deleted (OSD)