Bug #44446: osd status reports old crush location after osd moves
Status: Closed
% Done: 0%
Backport: luminous, mimic, nautilus
Regression: No
Severity: 3 - minor
Description
Scenario:
Move an OSD disk from host=worker1 to a new node (host=worker0), and on that new node update the CRUSH location accordingly (e.g. host=worker0). After the OSD starts up, `ceph osd status` reports the old location, but `ceph osd tree` reports the new location.
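The CRUSH update step might be done with the standard CLI, roughly like this (a sketch; in a Rook deployment the OSD may instead relocate itself on startup via its configured crush location):

# Move osd.0 under the new host bucket in the CRUSH map.
# (Assumes the host=worker0 bucket already exists in the CRUSH tree.)
ceph osd crush move osd.0 host=worker0

# Confirm the new placement.
ceph osd tree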
[nwatkins@smash rook]$ kubectl -n rook-ceph exec -it rook-ceph-tools-7cf4cc7568-kz4q6 ceph osd status
+----+---------+-------+-------+--------+---------+--------+---------+-----------+
| id |  host   | used  | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+---------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | worker1 | 1027M | 8188M |   0    |    0    |   0    |    0    | exists,up |
| 1  | worker0 | 1027M | 8188M |   0    |    0    |   0    |    0    | exists,up |
+----+---------+-------+-------+--------+---------+--------+---------+-----------+

[nwatkins@smash rook]$ kubectl -n rook-ceph exec -it rook-ceph-tools-7cf4cc7568-kz4q6 ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF
-1       0.01758 root default
-5       0.01758     host worker0
 0   hdd 0.00879         osd.0        up  1.00000 1.00000
 1   hdd 0.00879         osd.1        up  1.00000 1.00000
After restarting the manager, `ceph osd status` also starts reporting the correct location, which suggests the mgr was serving a stale, cached location:
[nwatkins@smash rook]$ kubectl -n rook-ceph exec -it rook-ceph-tools-7cf4cc7568-kz4q6 ceph osd status
+----+---------+-------+-------+--------+---------+--------+---------+-----------+
| id |  host   | used  | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+---------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | worker0 | 1027M | 8188M |   0    |    0    |   0    |    0    | exists,up |
| 1  | worker0 | 1027M | 8188M |   0    |    0    |   0    |    0    | exists,up |
+----+---------+-------+-------+--------+---------+--------+---------+-----------+
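In a Rook cluster, one way to restart the manager is to delete the mgr pod and let the operator recreate it; failing the active mgr is another. A minimal sketch, assuming Rook's default namespace and pod labels:

# Delete the mgr pod; the Rook operator recreates it.
# (Assumes the default rook-ceph namespace and the app=rook-ceph-mgr label.)
kubectl -n rook-ceph delete pod -l app=rook-ceph-mgr

# Alternatively, fail the active mgr so it restarts or a standby takes over.
# ("a" is a placeholder daemon name; see "ceph mgr dump" for the active mgr.)
ceph mgr fail a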
Related: http://tracker.ceph.com/issues/40011