Bug #64817
Stretch mode does not work for pools that use a CRUSH rule with device classes
Description
I have converted a (test) 3-node replicated cluster (2 storage nodes, 1 monitor-only node; min_size=2, size=4) to a "stretch mode" setup [1]. That works as expected.
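For reference, the conversion roughly followed the commands from the stretch-mode docs; the monitor names and data center bucket names below are placeholders for my setup:

```shell
# Assign each monitor a CRUSH location (names are placeholders)
ceph mon set_location mon1 datacenter=dc1
ceph mon set_location mon2 datacenter=dc2
ceph mon set_location mon3 datacenter=dc3   # tiebreaker monitor

# Stretch mode requires the connectivity election strategy
ceph mon set election_strategy connectivity

# Enter stretch mode: mon3 as tiebreaker, stretch_rule as the new
# default rule, dividing buckets at the datacenter level
ceph mon enable_stretch_mode mon3 stretch_rule datacenter
```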
The CRUSH rule (adjusted to work with only 1 host and 2 OSDs per device class per data center):
rule stretch_rule {
        id 5
        type replicated
        step take dc1
        step choose firstn 0 type host
        step chooseleaf firstn 2 type osd
        step emit
        step take dc2
        step choose firstn 0 type host
        step chooseleaf firstn 2 type osd
        step emit
}
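For completeness, the rule was added via the standard decompile/edit/recompile workflow (file names are arbitrary):

```shell
# Export and decompile the current CRUSH map
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt

# ... edit crush.txt to add the rule shown above, then recompile ...
crushtool -c crush.txt -o crush.new

# Inject the modified map back into the cluster
ceph osd setcrushmap -i crush.new
```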
When stretch rules with device classes are used, things no longer work as expected. Example CRUSH rule:
rule stretch_rule_ssd {
        id 4
        type replicated
        step take dc1 class ssd
        step choose firstn 0 type host
        step chooseleaf firstn 2 type osd
        step emit
        step take dc2 class ssd
        step choose firstn 0 type host
        step chooseleaf firstn 2 type osd
        step emit
}
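The pool was then switched to this rule with (pool name is a placeholder from my setup):

```shell
ceph osd pool set testpool crush_rule stretch_rule_ssd
```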
A similar CRUSH rule exists for hdd. When I change the crush_rule of one of the pools to stretch_rule_ssd, the PGs on OSDs with device class ssd become inactive as soon as one of the data centers goes offline (after "degraded stretch mode" has been activated and only 1 bucket, a data center, is needed for peering). I don't understand why.

Another issue: as soon as the data center is back online, the recovery never finishes by itself, and a "ceph osd force_healthy_stretch_mode --yes-i-really-mean-it" is needed to get to HEALTH_OK. The force_healthy_stretch_mode command is not needed when the pool's CRUSH rule does not use device classes.
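One way to sanity-check the rule offline is to let crushtool compute the mappings it produces (rule id 4 and replica count 4 as above). This does not reproduce the peering problem, but it shows which OSDs the device-class rule selects:

```shell
# Export the current CRUSH map and test the ssd stretch rule
ceph osd getcrushmap -o crush.bin
crushtool -i crush.bin --test --rule 4 --num-rep 4 --show-mappings
```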
[1]: https://docs.ceph.com/en/latest/rados/operations/stretch-mode/
I brought this up in this ceph-users mailing-list thread: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/PVYFRJMNS4HOAM5LWHBQDARSXTKS42ZG/
I have repeated the steps above to confirm this was not a glitch of some kind; it is not. Stretch mode does not work for pools that use a CRUSH rule with device classes.