Bug #38841
Objects degraded higher than 100%
Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Community (user)
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
1. Working Mimic or Nautilus deployment with Bluestore (haven't tested with Filestore)
2. All OSDs up, all PGs active+clean
3. Remove or add an OSD
4. Degraded objects reported above 100% during backfill/recovery (a quick programmatic check is sketched after the example output below)
Example output below from a Mimic 13.2.4 test cluster after removing an OSD:
  cluster:
    id:     MY ID
    health: HEALTH_WARN
            709/58572 objects misplaced (1.210%)
            Degraded data redundancy: 90094/58572 objects degraded (153.818%), 49 pgs degraded, 51 pgs undersized

  services:
    mon: 3 daemons, quorum san2-mon1,san2-mon2,san2-mon3
    mgr: san2-mon1(active), standbys: san2-mon2, san2-mon3
    osd: 52 osds: 52 up, 52 in; 84 remapped pgs

  data:
    pools:   16 pools, 2016 pgs
    objects: 19.52 k objects, 72 GiB
    usage:   7.8 TiB used, 473 TiB / 481 TiB avail
    pgs:     90094/58572 objects degraded (153.818%)
             709/58572 objects misplaced (1.210%)
             1932 active+clean
             47   active+recovery_wait+undersized+degraded+remapped
             33   active+remapped+backfill_wait
             2    active+recovering+undersized+remapped
             1    active+recovery_wait+undersized+degraded
             1    active+recovering+undersized+degraded+remapped

  io:
    client:   24 KiB/s wr, 0 op/s rd, 3 op/s wr
    recovery: 0 B/s, 126 objects/s
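A quick way to confirm the inconsistency is to compare the degraded numerator and denominator the cluster itself reports. The sketch below is illustrative only; it assumes the pgmap section of the status JSON carries "degraded_objects" and "degraded_total" fields, so verify the field names against your release:

    #!/usr/bin/env python3
    # Minimal check, run while backfill/recovery is in progress.
    # Assumes the pgmap JSON exposes "degraded_objects" and
    # "degraded_total"; confirm against your Ceph release.
    import json
    import subprocess

    status = json.loads(
        subprocess.check_output(["ceph", "status", "--format", "json"])
    )
    pgmap = status["pgmap"]

    degraded = pgmap.get("degraded_objects", 0)
    total = pgmap.get("degraded_total", 0)

    if total and degraded > total:
        # The bug: the numerator exceeds the denominator, so the
        # reported percentage goes above 100%.
        print(f"degraded {degraded}/{total} = {degraded / total:.3%}")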
Updated by David Zafman almost 5 years ago
The number of degraded objects is based on object replicas, not the number of objects. So let's say every pool has 3 replicas. In that case the output could say:
90094/175716 objects degraded (51.273%)
709/175716 objects misplaced (0.403%)
Though "ceph pg dump pgs" will show a total of 58572 if you add up the objects for all the PGs involved.