Bug #40622 (closed): PG stuck in active+clean+remapped
% Done: 0%
Regression: No
Severity: 3 - minor
Description
The cluster has 6 servers in 3 racks, 2 servers per rack.
A replication rule distributes replicas across the 3 racks: one replica per rack.
We started removing one server from each rack, so all replicas must move to the remaining server in each rack.
In the second rack, the remaining server has two fewer OSDs than the one being removed from the cluster.
While moving data off the server in the second rack, 1 PG got stuck in active+clean+remapped status: apparently CRUSH cannot find a suitable OSD for the replica inside the second rack.
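For reference, a replication rule with rack as the failure domain typically looks like this in a decompiled CRUSH map (the rule name and id below are assumptions for illustration, not taken from this cluster):

```
rule replicated_rack {
    id 1
    type replicated
    min_size 1
    max_size 3
    step take default
    step chooseleaf firstn 0 type rack
    step emit
}
```

With `step chooseleaf firstn 0 type rack`, CRUSH picks one OSD under each distinct rack; if it cannot place a replica on a usable OSD inside a rack, the PG can stay remapped to its old acting set.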
I tried:
- ceph osd out 21
- ceph osd crush reweight osd.21 0
but the same PG (id 5.783) stays stuck in active+clean+remapped status.
I have mon_max_pg_per_osd=400 set; this limit may be acting as a barrier.
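As a rough sanity check of the mon_max_pg_per_osd theory, the average number of PG replicas each OSD carries can be estimated from the pool PG count, the replica count, and the number of in OSDs. All numbers below are illustrative assumptions, not values from this cluster:

```python
# Rough check of whether mon_max_pg_per_osd could block remapping.
# The pool size, PG count, and OSD count are illustrative assumptions.
def pgs_per_osd(total_pgs: int, replicas: int, osds: int) -> float:
    """Average number of PG replicas carried by each OSD."""
    return total_pgs * replicas / osds

# e.g. 4096 PGs x 3 replicas spread over 30 in OSDs:
load = pgs_per_osd(4096, 3, 30)
print(load)          # 409.6
print(load > 400)    # True: would exceed mon_max_pg_per_osd=400
```

If the remaining OSDs in the second rack are already near the limit, the monitor can refuse new PG mappings onto them, which matches a PG that stays active+clean+remapped instead of moving.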