Bug #3552

After ceph-deploy installation a reboot breaks OSDs

Added by David Zafman over 11 years ago. Updated almost 11 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Development
Regression: No

Description

The mounts for the OSD data partitions on /var/lib/ceph/osd/ceph-* are not added to /etc/fstab, nor are they mounted in some other way. This means that after a node reboot the OSD daemons can't find their data and fail to start.
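
Until this is fixed, a manual workaround is to add each OSD data partition to /etc/fstab by hand. A minimal sketch, assuming osd.0's data partition is formatted as xfs and identified by the UUID that blkid reports (the UUID and filesystem type here are placeholders, not values from this cluster):

    # /etc/fstab entry for one OSD data directory (example values)
    UUID=<osd-0-data-uuid>  /var/lib/ceph/osd/ceph-0  xfs  noatime  0 0

The same entry would be needed for every ceph-N directory on the node.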

History

#1 Updated by Greg Farnum over 11 years ago

  • Description updated

What OS is this? I'm not certain, but if upstart is the init system I'd expect this all to be accomplished via the upstart triggers and the existing jobs.
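
As a quick check of what state a node is actually in after a reboot (assuming Ubuntu's upstart and the stock Ceph packages), something like the following could be run; neither command changes anything:

    mount | grep /var/lib/ceph/osd    # are the OSD data directories mounted?
    initctl list | grep ceph          # which Ceph upstart jobs exist, and are they running?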

#2 Updated by David Zafman over 11 years ago

Ubuntu 12.04.1 LTS (actually ceph-deploy only supports 12.04 right now anyway)

#3 Updated by David Zafman over 11 years ago

After several reboots I'm finding this problem to be intermittent. Sometimes the mounts are performed and the OSD daemons are running; sometimes it even brings up only one of the two OSDs on a node.

#4 Updated by Greg Farnum about 11 years ago

  • Project changed from CephFS to Ceph

Whoops, not an FS bug!

I've put this in the main Ceph project for now, but it might also belong in devops. We need to clarify those boundaries, which I expect will happen next week.

#5 Updated by Dan Mick almost 11 years ago

So, AFAICT, this is supposed to happen by virtue of ceph-disk-activate being called from udev, which will ferret out the right path/mount information and make it happen. David, do you believe this is still an issue?
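
For reference, one way to exercise that udev path by hand might be the following; the device name is hypothetical, and depending on the Ceph version the tool is spelled either ceph-disk-activate or "ceph-disk activate":

    # replay "add" events for block devices so the Ceph udev rules fire again
    sudo udevadm trigger --subsystem-match=block --action=add
    # or activate a single OSD data partition directly (example device)
    sudo ceph-disk-activate /dev/sdb1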

#6 Updated by Sage Weil almost 11 years ago

  • Status changed from New to Resolved
