Bug #38461

Ceph osd out is the same as ceph osd reweight 0 (result in same bucket weights)

Added by Марк Коренберг about 5 years ago. Updated about 5 years ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version:
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds says:


Note

Sometimes, typically in a “small” cluster with few hosts (for instance with a small testing cluster), the fact to take out the OSD can spawn a CRUSH corner case where some PGs remain stuck in the active+remapped state. If you are in this case, you should mark the OSD in with:

    ceph osd in {osd-num}

to come back to the initial state and then, instead of marking out the OSD, set its weight to 0 with:

    ceph osd crush reweight osd.{osd-num} 0

After that, you can observe the data migration which should come to its end. The difference between marking out the OSD and reweighting it to 0 is that in the first case the weight of the bucket which contains the OSD is not changed whereas in the second case the weight of the bucket is updated (and decreased of the OSD weight). The reweight command could be sometimes favoured in the case of a “small” cluster.

But I have checked this on Mimic 13.2.4, and both commands yield exactly the same result. I compared a text diff of the output of `osd df tree` before and after each command: no weights (other than that of the OSD I touched) changed in either case. Therefore, either the documentation should be updated or the bug fixed.
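
For reference, a minimal sketch of the comparison on a test cluster (the OSD id `3` and the restored CRUSH weight `1.0` below are placeholders; substitute the values shown by `ceph osd df tree` on your cluster):

    # record bucket weights before any change
    ceph osd df tree > before.txt

    # variant 1: mark the OSD out, capture the weights, then bring it back in
    ceph osd out 3
    ceph osd df tree > out.txt
    ceph osd in 3

    # variant 2: set the CRUSH weight to 0, capture the weights, then restore it
    ceph osd crush reweight osd.3 0
    ceph osd df tree > crush-reweight.txt
    ceph osd crush reweight osd.3 1.0   # restore the original weight (placeholder value)

    # diff each capture against the baseline to see whether any bucket
    # (host/root) weights changed in either case
    diff before.txt out.txt
    diff before.txt crush-reweight.txt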

History

#1 Updated by Greg Farnum about 5 years ago

  • Project changed from Ceph to RADOS
  • Subject changed from Ceph osd out is the same as ceph osd reweight 0 to Ceph osd out is the same as ceph osd reweight 0 (result in same bucket weights)
