Bug #55825 (closed)
cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster
Status:
Duplicate
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
qa-suite
Labels (FS):
qa
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
2022-05-31T03:51:25.140 INFO:teuthology.orchestra.console:Power off for smithi040 completed
2022-05-31T03:51:25.245 INFO:teuthology.run:Summary data:
  client.0-kernel-sha1: 342bda2e52872e35161c821330b18f3dcb4986ba
  client.1-kernel-sha1: 342bda2e52872e35161c821330b18f3dcb4986ba
  description: fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client
    conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1}
    mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap
    overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile}
    tasks/volumes/{overrides test/clone}}
  duration: 3209.442944049835
  failure_reason: '"2022-05-31T03:34:28.535922+0000 mon.a (mon.0) 2544 : cluster [WRN]
    Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in
    cluster log'
  flavor: default
  owner: scheduled_vshankar@teuthology
  success: false
2022-05-31T03:51:25.246 DEBUG:teuthology.report:Pushing job info to http://paddles.front.sepia.ceph.com/
2022-05-31T03:51:25.312 INFO:teuthology.run:FAIL
IMO we should add PG_DEGRADED to the log-ignorelist to ignore this warning.
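For context, teuthology suites typically suppress expected health warnings through a `log-ignorelist` override in the suite's YAML fragments. A minimal sketch of what such an entry could look like (the exact file path and surrounding entries depend on the suite, e.g. the `ignorelist_health` override already referenced in the job description above):

```yaml
# Hypothetical override fragment: add PG_DEGRADED to the log-ignorelist
# so a run is not marked failed when this transient warning appears
# in the cluster log. Entries are regexes, hence the escaped parentheses.
overrides:
  ceph:
    log-ignorelist:
      - \(PG_DEGRADED\)
```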
Updated by Venky Shankar almost 2 years ago
Updated by Laura Flores over 1 year ago
- Is duplicate of Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings added