Bug #56046

Changes to CRUSH weight and upmaps cause PGs to go into a degraded+remapped state instead of just remapped

Added by Wes Dillingham almost 2 years ago. Updated almost 2 years ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite: ceph-ansible
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I have a brand new virtual 16.2.9 cluster (and a physical 16.2.7 cluster) with zero client activity; both were built initially on Pacific. The clusters have only been partially filled with rados bench objects.

When changing the CRUSH weight of an OSD ("ceph osd crush reweight osd.10 0") or introducing upmap entries (manually or via the balancer), the cluster responds with degraded PGs instead of merely remapped PGs. This is counter to how things worked in past clusters on Nautilus.
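
For reference, a minimal reproduction on an idle test cluster looks roughly like this (osd.10 is just the OSD from my test; any in/up OSD shows the same behaviour):

    ceph osd crush reweight osd.10 0    # drain one OSD by CRUSH weight
    ceph status                         # PGs report degraded+remapped rather than just remapped
    ceph pg dump pgs_brief              # per-PG states show the degraded+remapped combination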

On Nautilus we would set the norebalance flag while weighing in new capacity, to prevent data movement until we had finished adding OSDs. Because on Pacific this instead causes the PGs to go degraded, data movement begins immediately, and we can't make use of tools which modify the upmaps to facilitate gradual movement.
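
For context, the Nautilus-era workflow this breaks is roughly the following sketch (the reweight value, PG id and OSD ids are placeholders, and the pg-upmap-items step would normally be generated by an external tool rather than typed by hand):

    ceph osd set norebalance                 # hold off rebalancing while capacity is weighed in
    ceph osd crush reweight osd.10 3.63899   # bring the new OSD up to its full CRUSH weight
    ceph osd pg-upmap-items 2.1 10 4         # pin PG 2.1 back onto its current OSD (map 10 -> 4)
    ceph osd unset norebalance               # then remove the upmaps gradually to trickle data out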


Files

osdmap.748 (9.85 KB) Wes Dillingham, 06/16/2022 10:11 PM
osdmap.748.decompiled (5.4 KB) Wes Dillingham, 06/16/2022 10:11 PM
osdmap.747.decompiled (5.4 KB) Wes Dillingham, 06/16/2022 10:11 PM
osdmap.747 (9.85 KB) Wes Dillingham, 06/16/2022 10:11 PM
osdmap.746.decompiled (4.95 KB) Wes Dillingham, 06/16/2022 10:11 PM
osdmap.746 (9.18 KB) Wes Dillingham, 06/16/2022 10:11 PM
osdmap.745.decompiled (4.95 KB) Wes Dillingham, 06/16/2022 10:11 PM
osdmap.745 (9.18 KB) Wes Dillingham, 06/16/2022 10:11 PM
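
For anyone inspecting the attachments: the binary maps and their "decompiled" text forms were presumably produced with something like the following (epoch 748 shown; the other epochs are analogous):

    ceph osd getmap 748 -o osdmap.748                        # fetch the binary osdmap for epoch 748
    osdmaptool --print osdmap.748 > osdmap.748.decompiled    # dump it in human-readable form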