Bug #65018


PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"

Added by Patrick Donnelly about 2 months ago. Updated about 1 month ago.

Status: Pending Backport
Priority: High
Category: Correctness/Safety
Target version:
% Done: 0%
Source: Q/A
Tags: backport_processed
Backport: squid, reef
Regression: No
Severity: 4 - irritation
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS): qa-suite
Labels (FS): qa
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2024-03-20T19:01:35.938 DEBUG:teuthology.orchestra.run.smithi043:> sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:360516069d9393362c4cc6eb9371680fe16d66ab shell --fsid b40d606c-e6ea-11ee-95c9-87774f69a715 -- ceph osd last-stat-seq osd.1
2024-03-20T19:01:36.042 INFO:journalctl@ceph.mon.a.smithi043.stdout:Mar 20 19:01:35 smithi043 ceph-mon[31664]: osdmap e88: 12 total, 12 up, 12 in
2024-03-20T19:01:36.250 INFO:journalctl@ceph.mon.b.smithi118.stdout:Mar 20 19:01:35 smithi118 ceph-mon[36322]: osdmap e88: 12 total, 12 up, 12 in
2024-03-20T19:01:36.261 INFO:journalctl@ceph.mon.c.smithi151.stdout:Mar 20 19:01:35 smithi151 ceph-mon[36452]: osdmap e88: 12 total, 12 up, 12 in
2024-03-20T19:01:36.479 INFO:teuthology.orchestra.run.smithi043.stdout:223338299439
2024-03-20T19:01:36.479 DEBUG:teuthology.orchestra.run.smithi043:> sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:360516069d9393362c4cc6eb9371680fe16d66ab shell --fsid b40d606c-e6ea-11ee-95c9-87774f69a715 -- ceph osd last-stat-seq osd.7
2024-03-20T19:01:36.513 INFO:teuthology.orchestra.run.smithi043.stderr:Inferring config /var/lib/ceph/b40d606c-e6ea-11ee-95c9-87774f69a715/mon.a/config
2024-03-20T19:01:37.010 INFO:journalctl@ceph.mon.a.smithi043.stdout:Mar 20 19:01:36 smithi043 ceph-mon[31664]: pgmap v349: 97 pgs: 1 activating+degraded, 2 activating, 1 peering, 93 active+clean; 639 KiB data, 360 MiB used, 1.0 TiB / 1.0 TiB avail; 3.1 KiB/s rd, 5 op/s; 2/192 objects degraded (1.042%)
2024-03-20T19:01:37.011 INFO:journalctl@ceph.mon.a.smithi043.stdout:Mar 20 19:01:36 smithi043 ceph-mon[31664]: Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)

https://pulpito.ceph.com/pdonnell-2024-03-20_18:16:52-fs-wip-batrick-testing-20240320.145742-distro-default-smithi/7612919/

Many other jobs in this run fail in a similar fashion.
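
The pgmap line above shows the degraded objects appearing while PGs are still activating/peering immediately after OSD creation, so the warning is transient rather than a real loss of redundancy. One plausible mitigation is to tolerate the warning in the affected suites. A minimal sketch of a teuthology override fragment follows, assuming the standard log-ignorelist mechanism; the exact suite files and the precise entries the fix would add are not specified in this issue:

# Hedged sketch: tolerate transient PG_DEGRADED warnings during cluster
# bring-up. The overrides/ceph/log-ignorelist structure is the usual
# teuthology convention; the specific entries are an assumption.
overrides:
  ceph:
    log-ignorelist:
      - PG_DEGRADED
      - Degraded data redundancy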


Related issues (3 open, 0 closed)

Related to CephFS - Bug #65700: qa: "Health detail: HEALTH_WARN Degraded data redundancy: 40/348 objects degraded (11.494%), 9 pgs degraded" in cluster log (Pending Backport, Patrick Donnelly)

Copied to CephFS - Backport #65272: reef: PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)" (New, Patrick Donnelly)
Copied to CephFS - Backport #65273: squid: PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)" (In Progress, Patrick Donnelly)
