Bug #57136
openecpool pg stay active+clean+remapped
Status:
Need More Info
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
I created an EC pool. The erasure code profile is:
$ ceph osd erasure-code-profile get ec_profile
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=8
m=2
plugin=jerasure
technique=reed_sol_van
w=8
$ ceph osd pool create c3-micr.rgw.buckets.data 4096 4096 erasure ec_profile
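Since crush-failure-domain=host and k+m=10, CRUSH has to pick 10 distinct hosts for every PG of this pool. As a rough sketch (the rule name in the second command is just a placeholder for whatever rule the pool actually uses), the rule and the number of host buckets can be checked with:

$ ceph osd pool get c3-micr.rgw.buckets.data crush_rule
$ ceph osd crush rule dump <rule_name>
$ ceph osd tree | grep -c host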
One PG stays in active+clean+remapped:
$ ceph pg dump | grep remapped
17.fb8 0 0 0 0 0 0 0 0 0 0 active+clean+remapped 2022-08-16 13:00:17.637929 0'0 1044:42 [185,112,71,235,201,134,164,NONE,88,1] 185 [185,112,71,235,201,134,164,18,88,1] 185 0'0 2022-08-16 13:00:17.637893 0'0 2022-08-16 12:57:04.988974 0
$ ceph pg map 17.fb8
osdmap e1045 pg 17.fb8 (17.fb8) -> up [185,112,71,235,201,134,164,2147483647,88,1] acting [185,112,71,235,201,134,164,18,88,1]
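The value 2147483647 is CRUSH_ITEM_NONE (shown as NONE in pg dump): CRUSH did not choose an OSD for one of the 10 shards in the up set, while the acting set still keeps osd.18 in that slot, so the PG remains remapped. A minimal diagnostic sketch, assuming the pool's CRUSH rule id is 1 (replace it with the real id), to check whether the rule can map 10 replicas at all:

$ ceph osd getcrushmap -o /tmp/crushmap
$ crushtool -i /tmp/crushmap --test --rule 1 --num-rep 10 --show-bad-mappings

If this reports bad mappings, CRUSH cannot find 10 distinct hosts (or gives up before finding them), which would be consistent with the NONE slot above.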