Bug #1104


Segmentation fault when deleting a folder

Added by Bernard Grymonpon almost 13 years ago. Updated over 7 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%

Description

I got this after removing a just-created folder:

2011-05-20 18:19:09.679553 7f8254c89700 mds0.18 handle_mds_map i am now mds0.18
2011-05-20 18:19:09.679572 7f8254c89700 mds0.18 handle_mds_map state change up:rejoin --> up:active
2011-05-20 18:19:09.679577 7f8254c89700 mds0.18 recovery_done -- successful recovery!
2011-05-20 18:19:09.679907 7f8254c89700 mds0.18 active_start
2011-05-20 18:19:09.682956 7f8254c89700 mds0.18 cluster recovered.
*** Caught signal (Segmentation fault) **
 in thread 0x7f8254c89700
 ceph version 0.28-112-g6f8708b (commit:6f8708baec1999b1bc0bad3ad5c6130d7e0d3e1d)
 1: /usr/bin/cmds() [0x6f8792]
 2: (()+0xef60) [0x7f82572e6f60]
 3: (MDCache::get_or_create_stray_dentry(CInode*)+0x25) [0x5273e5]
 4: (Server::handle_client_unlink(MDRequest*)+0x997) [0x4ff3e7]
 5: (Server::handle_client_request(MClientRequest*)+0x543) [0x5090b3]
 6: (MDS::handle_deferrable_message(Message*)+0x99f) [0x49448f]
 7: (MDS::_dispatch(Message*)+0x144a) [0x4a581a]
 8: (MDS::ms_dispatch(Message*)+0x57) [0x4a5fd7]
 9: (SimpleMessenger::dispatch_entry()+0x7da) [0x6d00ba]
 10: (SimpleMessenger::DispatchThread::entry()+0x1c) [0x484f6c]
 11: (()+0x68ba) [0x7f82572de8ba]
 12: (clone()+0x6d) [0x7f8255f7302d]
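
For reference, raw addresses in a backtrace like this can usually be mapped back to source locations with binutils against the unstripped cmds binary (the cmds.bz2 attached below); the binary path and the choice of addresses here are assumptions:

root@ceph-001:~# addr2line -C -f -e /usr/bin/cmds 0x5273e5 0x4ff3e7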

More info:

Setup: a 3-node Ceph test cluster. It was running 0.26-something; I upgraded today to the latest master branch (git pull got me up to 6f8708baec1999b1bc0bad3ad5c6130d7e0d3e1d), built Debian packages, replaced all packages on all cluster nodes, and restarted Ceph everywhere (after changing the config file to include the "." in the daemon names).
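
(The "." change refers to the daemon section names in ceph.conf; a minimal sketch of the dotted style, with the hostname assumed:)

[mds.0]
        host = ceph-001
[osd.0]
        host = ceph-001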

All nodes run 3 OSDs and one mon, and the first two nodes also each run an MDS.

On a client, I mounted the Ceph filesystem, made a folder, and then removed it:

root@dhcp114:~# mount -t ceph ceph-001.om:/ /mnt/
root@dhcp114:~# cd /mnt
root@dhcp114:/mnt# ls
bonnie f1 f2 f3 f4 f5 foo
root@dhcp114:/mnt# mkdir test
root@dhcp114:/mnt# rmdir test

This hung. On the Ceph cluster, I got the segmentation fault on all MDSs.
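
One way to confirm the MDS state cluster-wide is to ask the monitors (a hedged example; the node name is assumed and the output format varies by version):

root@ceph-001:~# ceph mds stat
root@ceph-001:~# ceph -s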


Files

mds.0.log.bz2 (14.3 MB) - Bernard Grymonpon, 05/20/2011 09:50 AM
cmds.bz2 (14 MB) - Sage Weil, 05/24/2011 10:15 AM