Bug #21803: objects degraded higher than 100%

Added by David Zafman over 6 years ago. Updated over 5 years ago.

Status: Resolved
Priority: High
Assignee: David Zafman
% Done: 0%
Regression: No
Severity: 3 - minor

Description

Original post:
1. Jewel deployment with filestore.
2. Upgrade to Luminous (including mgr deployment and "ceph osd require-osd-release luminous"), still on filestore.
3. rados bench with subsequent cleanup.
4. All OSDs up, all PGs active+clean.
5. Stop one OSD. Remove from CRUSH, auth list, OSD map.
6. Reinitialize OSD with bluestore.
7. Start OSD, commencing backfill.
8. Degraded objects above 100%.

I reproduced with a simpler test:

1. ceph osd pool create test 1 1
2. ceph osd pool set test size 1
3. rados -p test bench 10 write --no-cleanup
4. ceph osd pool set test size 3


Related issues (3 total: 0 open, 3 closed)

- Related to RADOS - Bug #21887: degraded calculation is off during backfill (Duplicate, 10/21/2017)
- Related to RADOS - Bug #20059: miscounting degraded objects (Resolved, David Zafman, 05/23/2017)
- Related to RADOS - Bug #22837: discover_all_missing() not always called during activating (Resolved, David Zafman, 01/30/2018)
