Bug #1064

closed

All MDSs die one by one after restart

Added by Sergey Yudin almost 13 years ago. Updated over 7 years ago.

Status:
Can't reproduce
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%


Description

At first, all cfuse clients hung while trying to access files on the mounted ceph filesystem. I restarted all nodes with ceph -a stop && ceph -a start, and now all MDSs die one by one during journal replay.


Files

ceph_debug_1.txt (13 KB) Sergey Yudin, 05/05/2011 03:52 AM
ceph_debug.txt (8.74 KB) Sergey Yudin, 05/05/2011 03:52 AM
Actions #1

Updated by Sage Weil almost 13 years ago

  • Category set to 1

Hi Sergey,

Can you attach the full mds log for journal replay? (probably need to gzip, it'll be big!)

Also, can you describe what you remember about which parts of the hierarchy were snapshotted, and how? And whether you remember renaming files or directories across snapshot realm boundaries? (e.g. mkdir /foo/.snap/mysnap ; mv /foo/something /bar)

Thanks!
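For context, a full journal-replay log like the one requested above is normally captured by raising the MDS debug levels before restarting the daemon. A minimal ceph.conf sketch (the specific verbosity values are an assumption; 20 is simply the maximum):

```ini
; Hypothetical ceph.conf fragment for capturing verbose MDS replay logs.
; Add before restarting the mds daemon, then attach the (gzipped) log.
[mds]
    debug mds = 20
    debug journaler = 20
```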

Actions #2

Updated by Sage Weil almost 13 years ago

  • Status changed from New to 4
Actions #3

Updated by Sage Weil over 12 years ago

  • Status changed from 4 to Can't reproduce
Actions #4

Updated by John Spray over 7 years ago

  • Project changed from Ceph to CephFS
  • Category deleted (1)

Bulk updating project=ceph category=mds bugs so that I can remove the MDS category from the Ceph project to avoid confusion.
