Bug #663 (closed): cmds segfaults

Added by Alexander Rødseth over 13 years ago. Updated over 7 years ago.

Status: Can't reproduce
Priority: Urgent
% Done: 0%

Description

Hello.

One of my cmds daemons segfaults.

After asking for advice in #ceph on irc.oftc.net, I used cdebugpack -c /etc/ceph/ceph.conf mds_crash.tar.gz to produce the following file:
http://68.178.169.4:81/mds_crash.tar.gz
(approx 124M)
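In case more detail is useful than what the debug pack contains, mds debug logging can be cranked up in ceph.conf before reproducing. A sketch only; the levels here are assumptions, not what was actually used for this report:

    [mds]
        debug mds = 20    ; verbose mds internals
        debug ms = 1      ; messenger traffic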

Here's some command-line output:
http://aur.pastebin.com/eL7g2Uz6
http://aur.pastebin.com/911FaQNs

This is all running on Arch Linux, as a small test cluster.
I am using the very latest git versions of ceph, the Linux kernel, and btrfs-progs.

kernel: 2.6.37-rc6-00009-gb3444d1-dirty
ceph version 0.23.2 (commit:5bdae2af8c53adb2e059022c58813e97e7a7ba5d)
Btrfs v0.19-35-g1b444cd

To reproduce, I start cmds on that particular machine and just wait a few seconds.
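A rough way to get a backtrace out of that crash, assuming cmds accepts -f to stay in the foreground like the other c* daemons (an assumption, not verified here):

    ulimit -c unlimited                        # allow a core dump as a fallback
    gdb --args cmds -f -c /etc/ceph/ceph.conf
    (gdb) run
    # wait a few seconds for the SIGSEGV, then:
    (gdb) bt full                              # full backtrace of the crash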

The other nodes seem to be running just fine, except "1 crashed+peering":
pg v23517: 792 pgs: 791 active+clean, 1 crashed+peering; 270 GB data, 681 GB used, 4202 GB / 4890 GB avail
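For anyone triaging, the stuck placement group can probably be narrowed down with the usual status commands (a sketch; the exact output format in 0.23.x may differ):

    ceph -s                        # overall cluster state
    ceph pg dump | grep crashed    # locate the crashed+peering pg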

Hope there's at least some useful info in there somewhere.

Best regards,
Alexander Rødseth
