Bug #23118: Balancer maps PG to OSDs on the same host with upmap

Added by Wido den Hollander about 6 years ago. Updated about 6 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

With a v12.2.3 cluster I saw a PG mapped to two OSDs on the same host.

It was reported on ceph-devel here: https://marc.info/?l=ceph-devel&m=151938009325272&w=2

Hi,

I reported this on ceph-users here: 
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024885.html

It turns out that the balancer module mapped PGs to two OSDs on the same 
host.

I had one PG (1.41) with an upmap entry involving a lot of OSDs:

root@man:~# ceph osd dump|grep pg_upmap|grep 1.41
pg_upmap_items 1.41 [9,15,11,7,10,2]
root@man:~#
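
(As an aside: pg_upmap_items entries are read as (from, to) pairs, so the entry above replaces osd.9 with osd.15, osd.11 with osd.7 and osd.10 with osd.2 in PG 1.41's raw CRUSH mapping. Below is a minimal sketch that prints every upmap exception as explicit pairs; it assumes the Luminous-era JSON layout of 'ceph osd dump -f json', where pg_upmap_items is a list of entries carrying a pgid and a mappings list of {from, to} objects.)

#!/usr/bin/env python
# Sketch: print every upmap exception as explicit (from -> to) pairs.
# Assumes the Luminous-era JSON layout of `ceph osd dump -f json`.
import json
import subprocess

dump = json.loads(subprocess.check_output(
    ['ceph', 'osd', 'dump', '-f', 'json']))

for item in dump.get('pg_upmap_items', []):
    pairs = ', '.join('osd.%d -> osd.%d' % (m['from'], m['to'])
                      for m in item['mappings'])
    print('%s: %s' % (item['pgid'], pairs))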

But using 'ceph pg map' I saw it eventually mapped to:

root@man:~# ceph pg map 1.41
osdmap e21543 pg 1.41 (1.41) -> up [15,7,4] acting [15,7,4]
root@man:~#

osd.15 and osd.4 are both on the same host.
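
(A quick cluster-wide check for this kind of placement violation; a sketch, assuming 'ceph pg dump -f json' exposes a top-level pg_stats list as on Luminous, and that 'ceph osd find <id>' reports the OSD's crush_location with a 'host' key.)

#!/usr/bin/env python
# Sketch: flag PGs whose up set puts two replicas on the same host.
import json
import subprocess

def host_of(osd_id, cache={}):
    # Resolve an OSD id to its CRUSH host, caching lookups.
    if osd_id not in cache:
        info = json.loads(subprocess.check_output(
            ['ceph', 'osd', 'find', str(osd_id)]))
        cache[osd_id] = info['crush_location']['host']
    return cache[osd_id]

pg_stats = json.loads(subprocess.check_output(
    ['ceph', 'pg', 'dump', '-f', 'json']))['pg_stats']

for pg in pg_stats:
    hosts = [host_of(o) for o in pg['up']]
    if len(set(hosts)) < len(hosts):
        print('%s: up %s -> hosts %s' % (pg['pgid'], pg['up'], hosts))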

This cluster is running v12.2.3 with the balancer enabled in 'upmap' mode.

The balancer module wasn't enabled prior to 12.2.3.
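
(For context, turning the balancer on in this mode typically looks like the following on Luminous; upmap mode additionally requires all clients to be Luminous-capable.)

ceph osd set-require-min-compat-client luminous
ceph mgr module enable balancer
ceph balancer mode upmap
ceph balancer on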

I searched the tracker but couldn't find an existing issue about it.

Is this a known issue?

If needed I have:

- OSDMap
- CRUSHMap

Both are from when the mapping was 'broken'.

Wido

As requested, I posted the OSDMap with ceph-post-file. The ID is: 3d4e2379-c089-4298-8ac1-62278d2c959c

#2 - Updated by Nathan Cutler about 6 years ago

  • Status changed from New to Resolved
