https://tracker.ceph.com/
2021-11-12T18:51:45Z
Ceph
RADOS - Bug #53138: cluster [WRN] Health check failed: Degraded data redundancy: 3/1164 objects degraded (0.258%) seen in rbd
https://tracker.ceph.com/issues/53138?journal_id=205938
2021-11-12T18:51:45Z
Neha Ojha
nojha@redhat.com
Status changed from New to Triaged

This warning comes up because there are PGs recovering, probably because the test is injecting failures - we can ignore such warnings.
<pre>
2021-11-02T14:37:55.923243+0000 mgr.x (mgr.14099) 334 : cluster [DBG] pgmap v865: 41 pgs: 1 active+recovering+undersized+remapped, 3 active+recovering+undersized+degraded+remapped, 37 active+clean; 454 MiB data, 1.4 GiB used, 719 GiB / 720 GiB avail; 22 MiB/s rd, 35 MiB/s wr, 1.48k op/s; 3/1164 objects degraded (0.258%); 22/1164 objects misplaced (1.890%); 2.6 MiB/s, 4 keys/s, 6 objects/s recovering
2021-11-02T14:37:57.569993+0000 mon.a (mon.0) 880 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in
2021-11-02T14:37:57.923775+0000 mgr.x (mgr.14099) 335 : cluster [DBG] pgmap v867: 41 pgs: 1 active+recovering+undersized+remapped, 3 active+recovering+undersized+degraded+remapped, 37 active+clean; 454 MiB data, 1.4 GiB used, 719 GiB / 720 GiB avail; 734 KiB/s rd, 6.7 MiB/s wr, 557 op/s; 3/1164 objects degraded (0.258%); 22/1164 objects misplaced (1.890%); 2.2 MiB/s, 4 keys/s, 5 objects/s recovering
2021-11-02T14:37:58.570932+0000 mon.a (mon.0) 881 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in
</pre>
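As a side note, the degraded and misplaced ratios in the warning come straight from the mgr pgmap summary quoted above (3/1164 = 0.258%, 22/1164 = 1.890%). Below is a minimal sketch, not part of this ticket, of how one might pull those counts out of such log lines when triaging whether a PG_DEGRADED warning merely reflects in-flight recovery; the regex assumes the "cluster [DBG] pgmap ..." format shown here.

<pre>
import re

# Matches the "N/M objects degraded (x%)" and "K/M objects misplaced (y%)" fields
# of a mgr pgmap log line, as seen in the excerpts in this ticket.
PGMAP_RE = re.compile(
    r"(?P<deg>\d+)/(?P<total>\d+) objects degraded \([\d.]+%\)"
    r".*?(?P<mis>\d+)/\d+ objects misplaced \([\d.]+%\)"
)

def parse_pgmap_line(line):
    """Return (degraded, misplaced, total) object counts, or None if absent."""
    m = PGMAP_RE.search(line)
    if not m:
        return None
    return int(m.group("deg")), int(m.group("mis")), int(m.group("total"))

line = ("2021-11-02T14:37:55.923243+0000 mgr.x (mgr.14099) 334 : cluster [DBG] "
        "pgmap v865: 41 pgs: ...; 3/1164 objects degraded (0.258%); "
        "22/1164 objects misplaced (1.890%); ...")

deg, mis, total = parse_pgmap_line(line)
# 3/1164 = 0.2577...%, which the mgr rounds to the 0.258% in the health warning.
print(f"degraded {deg}/{total} = {deg / total:.3%}, "
      f"misplaced {mis}/{total} = {mis / total:.3%}")
</pre>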
RADOS - Bug #53138: cluster [WRN] Health check failed: Degraded data redundancy: 3/1164 objects degraded (0.258%) seen in rbd
https://tracker.ceph.com/issues/53138?journal_id=206947
2021-12-02T20:39:53Z
Deepika Upadhyay
@Neha I am seeing these failures more often than usual; maybe we have a performance regression. If not, can we increase the timeout?
RADOS - Bug #53138: cluster [WRN] Health check failed: Degraded data redundancy: 3/1164 objects degraded (0.258%) seen in rbd
https://tracker.ceph.com/issues/53138?journal_id=206948
2021-12-02T20:40:06Z
Deepika Upadhyay
Priority changed from Normal to High
RADOS - Bug #53138: cluster [WRN] Health check failed: Degraded data redundancy: 3/1164 objects degraded (0.258%) seen in rbd
https://tracker.ceph.com/issues/53138?journal_id=209127
2022-01-24T22:52:55Z
Neha Ojha
nojha@redhat.com
Priority changed from High to Normal