Bug #57136


EC pool PG stays active+clean+remapped

Added by yite gu over 1 year ago. Updated over 1 year ago.

Status:
Need More Info
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I created an EC pool. The erasure code profile is:

$ ceph osd erasure-code-profile get ec_profile
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=8
m=2
plugin=jerasure
technique=reed_sol_van
w=8
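
The profile's k=8, m=2 means each object is split into 8 data chunks plus 2 coding chunks, and with crush-failure-domain=host CRUSH has to place each of the 10 chunks on a distinct host. The command that created the profile is not shown in the report; a profile matching the dump above would typically be set up with something like the following (illustrative reconstruction only):

$ # illustrative sketch, reconstructed from the profile dump above
$ ceph osd erasure-code-profile set ec_profile \
    k=8 m=2 \
    plugin=jerasure technique=reed_sol_van \
    crush-failure-domain=host crush-root=default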


$ ceph osd pool create c3-micr.rgw.buckets.data 4096 4096 erasure ec_profile

After creating the pool, one PG stays in active+clean+remapped:
$ ceph pg dump | grep remapped
17.fb8        0                  0        0         0       0     0           0          0   0        0 active+clean+remapped 2022-08-16 13:00:17.637929     0'0  1044:42 [185,112,71,235,201,134,164,NONE,88,1]        185  [185,112,71,235,201,134,164,18,88,1]            185        0'0 2022-08-16 13:00:17.637893             0'0 2022-08-16 12:57:04.988974             0 

$ ceph pg map 17.fb8
osdmap e1045 pg 17.fb8 (17.fb8) -> up [185,112,71,235,201,134,164,2147483647,88,1] acting [185,112,71,235,201,134,164,18,88,1]
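
The value 2147483647 in the up set is CRUSH_ITEM_NONE (0x7fffffff): CRUSH could not pick an OSD for that slot, so the PG keeps osd.18 in the acting set for that position and is reported as remapped. One way to replay the mapping offline is to run the current osdmap through osdmaptool (illustrative; /tmp/osdmap is just a scratch path):

$ # sketch: export the osdmap and recompute the CRUSH mapping for this PG
$ ceph osd getmap -o /tmp/osdmap
$ osdmaptool /tmp/osdmap --test-map-pg 17.fb8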
#1

Updated by Radoslaw Zarzynski over 1 year ago

  • Status changed from New to Need More Info

It looks like osd.18 isn't up. Could you please share the output of ceph -s and ceph osd tree?
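
(For reference, a quick check of osd.18's state, not part of the original exchange, could look like this:)

$ # confirm whether osd.18 is up/in and what weight it carries
$ ceph osd tree | grep -w osd.18
$ ceph osd dump | grep '^osd.18 '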

#2

Updated by yite gu over 1 year ago

Radoslaw Zarzynski wrote:

It looks like osd.18 isn't up. Could you please share the output of ceph -s and ceph osd tree?

$ ceph osd tree
 -1       3492.30103 root default                                               
 -2        349.23010     host c3-sh1-v6-ceph-st01                               
  0   hdd   14.55125         osd.0                          up  1.00000 1.00000 
  1   hdd   14.55125         osd.1                          up  1.00000 1.00000 
  2   hdd   14.55125         osd.2                          up  1.00000 1.00000 
  3   hdd   14.55125         osd.3                          up  1.00000 1.00000 
  4   hdd   14.55125         osd.4                          up  1.00000 1.00000 
  5   hdd   14.55125         osd.5                          up  1.00000 1.00000 
  6   hdd   14.55125         osd.6                          up  1.00000 1.00000 
  7   hdd   14.55125         osd.7                          up  1.00000 1.00000 
  8   hdd   14.55125         osd.8                          up  1.00000 1.00000 
  9   hdd   14.55125         osd.9                          up  1.00000 1.00000 
 10   hdd   14.55125         osd.10                         up  1.00000 1.00000 
 11   hdd   14.55125         osd.11                         up  1.00000 1.00000 
 12   hdd   14.55125         osd.12                         up  1.00000 1.00000 
 13   hdd   14.55125         osd.13                         up  1.00000 1.00000 
 14   hdd   14.55125         osd.14                         up  1.00000 1.00000 
 15   hdd   14.55125         osd.15                         up  1.00000 1.00000 
 16   hdd   14.55125         osd.16                         up  1.00000 1.00000 
 17   hdd   14.55125         osd.17                         up  1.00000 1.00000 
 18   hdd   14.55125         osd.18                         up  1.00000 1.00000 
 19   hdd   14.55125         osd.19                         up  1.00000 1.00000 
 20   hdd   14.55125         osd.20                         up  1.00000 1.00000 
 21   hdd   14.55125         osd.21                         up  1.00000 1.00000 
 22   hdd   14.55125         osd.22                         up  1.00000 1.00000 
 23   hdd   14.55125         osd.23                         up  1.00000 1.00000
$ ceph -s
  cluster:
    id:     5b41ef4c-5184-4a99-b1ae-1a462da0b7ae
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum c3-sh1-v6-ceph-st01.bj,c3-sh1-v6-ceph-st02.bj,c3-sh1-v6-ceph-st03.bj (age 2w)
    mgr: c3-sh1-v6-ceph-st01.bj(active, since 6d), standbys: c3-sh1-v6-ceph-st02.bj, c3-sh1-v6-ceph-st03.bj
    osd: 276 osds: 276 up (since 18h), 276 in (since 18h); 1 remapped pgs

  data:
    pools:   6 pools, 6144 pgs
    objects: 23 objects, 2.4 KiB
    usage:   518 GiB used, 3.4 PiB / 3.4 PiB avail
    pgs:     6143 active+clean
             1    active+clean+remapped
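
The tree and status above show osd.18 up and in, so the NONE slot in the up set is more likely CRUSH failing to choose a tenth distinct host for this PG (often seen when the number of hosts is close to k+m) than a down OSD. A reasonable follow-up (illustrative; the second command takes the rule name printed by the first) would be to inspect the CRUSH rule the pool uses:

$ # which CRUSH rule does the EC pool use, and what does it look like?
$ ceph osd pool get c3-micr.rgw.buckets.data crush_rule
$ ceph osd crush rule dump <rule-name-from-previous-command>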
