Bug #13556

closed

ceph-disk failed to activate osd and killed by udev

Added by chuanhong wang over 8 years ago. Updated over 8 years ago.

Status:
Won't Fix
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Software environment: ceph-0.87.2 + RHEL 7
Problem: there are 9 OSDs on one of my servers, and their journals are on the same disk. If I unplug all the disks and then plug them back in, some OSDs are not activated automatically by udev.
A program invoked by udev must finish within 30 seconds, or udev will kill it. ceph-disk always acquires activate_lock first and then activates an OSD, so the OSDs on one server can only be activated one by one, and the total activation time can sometimes exceed 30 seconds.
Therefore, should ceph-disk allocate a separate lock for every OSD, so that all OSDs can be activated at the same time? (A sketch of this idea follows.)
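A minimal sketch of the per-OSD locking idea, assuming one lock file per device under /var/lib/ceph/tmp. The names per_device_lock and activate_osd are hypothetical and are not part of ceph-disk's actual code; this only illustrates how replacing the single global activate_lock with per-device locks would let udev-triggered activations run in parallel.

    import fcntl
    import os

    # Assumed location for per-device lock files (illustrative only).
    LOCK_DIR = '/var/lib/ceph/tmp'

    class per_device_lock(object):
        """One exclusive lock per device, e.g. /var/lib/ceph/tmp/activate.sdb.lock."""

        def __init__(self, dev):
            name = os.path.basename(dev)
            self.path = os.path.join(LOCK_DIR, 'activate.%s.lock' % name)
            self.fd = None

        def acquire(self):
            self.fd = open(self.path, 'w')
            fcntl.lockf(self.fd, fcntl.LOCK_EX)

        def release(self):
            fcntl.lockf(self.fd, fcntl.LOCK_UN)
            self.fd.close()
            self.fd = None

    def activate_osd(dev):
        # Hypothetical wrapper: only activations of the *same* device are
        # serialized; different OSDs no longer wait on one global lock,
        # which keeps each udev invocation well under the 30-second limit.
        lock = per_device_lock(dev)
        lock.acquire()
        try:
            pass  # mount the data partition and start the OSD here
        finally:
            lock.release()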

#1

Updated by Loïc Dachary over 8 years ago

  • Status changed from New to Won't Fix

Giant (0.87.*) is no longer supported; upgrading to Hammer (0.94) should fix this problem.
