Bug #43521

PGs failing to allocate due to mapping to an incorrect domain in the crushmap

Added by Richard Gallamore over 4 years ago.

Status: New
Priority: Normal
Assignee: -
Category: OSDMap
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

While doing some performance testing in various configurations, I ran into a bug where PGs are mapped to an HDD that is not in the specified failure domain. This results in an incomplete status and, in some cases due to the pseudo-random nature of the CRUSH map, degraded PGs.

In order to reproduce:
Set up at least one Ceph host with at least 4 OSDs and create 3 locations under the host; I used pods (host-1.0, host-1.1, host-1.2). Move one of the OSDs into each pod and leave the last one directly under the host (example commands below).
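
A minimal sketch of those CRUSH changes, assuming the host bucket is named host-1 and the OSDs are osd.0 through osd.3 (the bucket names and OSD ids are illustrative, not taken from the actual cluster):

ceph osd crush add-bucket host-1.0 pod
ceph osd crush add-bucket host-1.1 pod
ceph osd crush add-bucket host-1.2 pod
ceph osd crush move host-1.0 host=host-1
ceph osd crush move host-1.1 host=host-1
ceph osd crush move host-1.2 host=host-1
ceph osd crush move osd.0 pod=host-1.0
ceph osd crush move osd.1 pod=host-1.1
ceph osd crush move osd.2 pod=host-1.2
(osd.3 stays directly under host-1)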

I suspect this bug will also occur with a replicated pool, but I found it while testing erasure coding.

ceph osd erasure-code-profile set isa-test plugin=isa k=2 m=1 crush-device-class=hdd crush-failure-domain=pod
ceph osd pool create isa-test 64 64 erasure isa-test
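
As a sanity check, the profile and the CRUSH rule generated for the pool can be dumped to confirm that the failure domain really is pod (the rule name below assumes the default behaviour of naming the rule after the pool):

ceph osd erasure-code-profile get isa-test
ceph osd crush rule dump isa-test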

The pool will be unable to reach a normal active+clean state.
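
A rough way to observe the bad mapping (the pg id below is a placeholder; substitute one of the PGs reported as stuck):

ceph health detail
ceph pg ls-by-pool isa-test incomplete
ceph pg map <pgid>
ceph osd tree

With this layout, the pg map output should show an acting set that includes the OSD left directly under the host rather than one OSD per pod, which would match the incorrect failure-domain mapping described above.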
