Support #22795 (closed)

Some directories are not mounted and some OSDs do not start when I reboot my cluster

Added by bajie white about 6 years ago. Updated about 6 years ago.

Status: Closed
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Tags: -
Reviewed: -
Affected Versions: -
Pull request ID: -

Description

There are four hosts in my cluster. Each host has several OSDs (one OSD per disk). When I reboot all the hosts in sequence, some OSDs on the host that was rebooted first do not start.

The messages in /var/log/messages show that ceph-disk was killed due to a timeout.

After all the hosts are up, some directories are still not mounted and some OSDs still do not start automatically.
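For reference, a minimal sketch of how this state can be inspected after a reboot, assuming systemd-managed ceph-disk activation units (the device and unit names here are illustrative):

    # list units that failed during boot, including any timed-out ceph-disk units
    systemctl --failed | grep ceph-disk
    # search the current boot's log for the timeout messages
    journalctl -b -u 'ceph-disk@*' | grep -i timeout
    # show OSD partitions and whether they are prepared or active
    ceph-disk list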

#1

Updated by Nathan Cutler about 6 years ago

  • Tracker changed from Bug to Support
#2

Updated by Greg Farnum about 6 years ago

  • Project changed from Ceph to ceph-volume
#3

Updated by Alfredo Deza about 6 years ago

  • Project changed from ceph-volume to Ceph
  • Status changed from New to Closed

This is a known issue with ceph-disk. In 12.2.2 you have the option to use ceph-volume, which doesn't have these problems, although it does not support dmcrypt at the moment.

If you are keeping the same cluster or want to keep using ceph-disk, the OSDs need to be activated manually.
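A minimal sketch of that manual activation with ceph-disk (the device name is illustrative):

    # mount and start a single prepared OSD data partition
    sudo ceph-disk activate /dev/sdb1
    # or activate every prepared OSD partition found on this host
    sudo ceph-disk activate-all

If migrating is an option, 12.2.2 also ships ceph-volume simple scan and ceph-volume simple activate, which capture an existing ceph-disk OSD's metadata so it can be activated at boot without ceph-disk.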
