Bug #3806 (closed): OSDs stuck in active+degraded after changing replication from 2 to 3

Added by Ben Poliakoff over 11 years ago. Updated over 11 years ago.

Status: Won't Fix
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Development
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Small 3-node cluster running 0.56.1-1~bpo60+1 on Debian Squeeze, with CRUSH "tunables" enabled.

I recently changed the replication level from 2 to 3. The change appeared to go smoothly, but several days later I noticed 3 PGs lingering in the "active+degraded" state. At that point the output of "ceph health detail" showed those PGs mapped to only two OSDs rather than three.
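
For reference, a size change and the follow-up checks would typically be done with something like the following (the pool name "data" and the PG id are placeholders, not the actual values from this cluster):

    # raise the replica count on a pool (repeated for each affected pool)
    ceph osd pool set data size 3

    # list PGs that are not active+clean, with the OSDs they currently map to
    ceph health detail

    # inspect one of the degraded PGs in detail (PG id is a placeholder)
    ceph pg 2.1a query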

I then tried shutting down the OSD listed first in the "ceph health detail" output and letting the cluster rebuild. After that, the 3 PGs changed state to "active+remapped".
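
Shutting down a single OSD on Debian Squeeze with the sysvinit packages would typically look roughly like this (osd.0 stands in for whichever OSD was listed first):

    # stop one OSD daemon on its host
    service ceph stop osd.0

    # watch peering/recovery until the cluster settles
    ceph -w
    ceph health detail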

Per request from joshd on IRC I'm attaching my osdmap and pg dump.
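
The attached files can be produced with the standard export commands (the output file names here simply mirror the attachment names):

    # binary OSD map (attached as "osdmap")
    ceph osd getmap -o osdmap

    # full PG dump (attached as "pg_dump.txt")
    ceph pg dump > pg_dump.txt

    # decompiled CRUSH map (attached as "crush.txt")
    ceph osd getcrushmap -o crush.map
    crushtool -d crush.map -o crush.txt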


Files

osdmap (4.45 KB), Ben Poliakoff, 01/15/2013 02:17 PM
pg_dump.txt (236 KB), Ben Poliakoff, 01/15/2013 02:17 PM
crush.txt (1.59 KB), Ben Poliakoff, 01/18/2013 03:45 PM