Support #22795
Some directories are not mounted and some OSDs do not start when I reboot my cluster
Status:
Closed
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Tags:
Reviewed:
Affected Versions:
Pull request ID:
Description
There are four hosts in my cluster. Each host has several OSDs (one OSD per disk). When I reboot all the hosts in sequence, some OSDs on the host that was rebooted first do not start.
The messages in /var/log/messages show that ceph-disk was killed due to a timeout.
After all the hosts come back up, some directories are not mounted and some OSDs do not start automatically again.
Updated by Greg Farnum about 6 years ago
- Project changed from Ceph to ceph-volume
Updated by Alfredo Deza about 6 years ago
- Project changed from ceph-volume to Ceph
- Status changed from New to Closed
This is a known issue with ceph-disk. In 12.2.2 you have the option to use ceph-volume, which doesn't have these problems. There is no support for dmcrypt at the moment, though.
If you keep the same cluster or want to keep using ceph-disk, the OSDs need to be activated manually.
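As a rough sketch of what manual activation can look like (the device path /dev/sdb1 is a placeholder, and which subcommands are available depends on the installed Ceph version):

```shell
# Sketch: manually bringing up OSDs that did not start after a reboot.
# /dev/sdb1 below is a placeholder data partition; substitute your own.

# With ceph-disk: mount and start a single OSD from its data partition.
ceph-disk activate /dev/sdb1

# Or attempt to activate all OSD partitions ceph-disk can find on this host.
ceph-disk activate-all

# With ceph-volume (12.2.2+): "simple" mode can take over OSDs originally
# created by ceph-disk. Scan once to record their metadata, then activate.
ceph-volume simple scan
ceph-volume simple activate --all
```

After activation, checking that the OSD directories under /var/lib/ceph/osd/ are mounted and that the ceph-osd services are running confirms the recovery.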