Bug #1573 (closed): mds crash during multiple_rsync workunit

Added by Josh Durgin over 12 years ago. Updated over 7 years ago.

Status: Duplicate
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%


Description

2011-09-26T13:57:39.883 INFO:teuthology.task.workunit.client.0.out:sent 5628516632 bytes  received 958672 bytes  1580203.59 bytes/sec
2011-09-26T13:57:39.883 INFO:teuthology.task.workunit.client.0.out:total size is 5624631147  speedup is 1.00
2011-09-26T13:57:39.883 INFO:teuthology.task.workunit.client.0.err:rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1060) [sender=3.0.7]

From teuthology:~t/log/mds.0.log.gz:

2011-09-26 13:55:19.069142 7f1130dc8700 -- 10.3.14.208:6800/1313 >> 10.3.14.207:0/1470 pipe(0x148e000 sd=10 pgs=2 cs=1 l=0).fault initiating reconnect
*** Caught signal (Segmentation fault) **
 in thread 0x7f11328ce700
 ceph version 0.35-99-g9da0fdf (commit:9da0fdf4aa0f898f34ceeff33c0b1e24072fc72c)
 1: /tmp/cephtest/binary/usr/local/bin/ceph-mds() [0x8ec234]
 2: (()+0xfb40) [0x7f1136146b40]
 3: (CInode::authority()+0x46) [0x718506]
 4: (CDir::authority()+0x56) [0x6ebcb6]
 5: (CInode::authority()+0x49) [0x718509]
 6: (CDir::authority()+0x56) [0x6ebcb6]
 7: (CInode::authority()+0x49) [0x718509]
 8: (CDir::authority()+0x56) [0x6ebcb6]
 9: (CInode::authority()+0x49) [0x718509]
 10: (CDir::authority()+0x56) [0x6ebcb6]
 11: (CInode::authority()+0x49) [0x718509]
 12: (CDir::authority()+0x56) [0x6ebcb6]
 13: (CInode::authority()+0x49) [0x718509]
 14: (CDir::authority()+0x56) [0x6ebcb6]
 15: (CInode::authority()+0x49) [0x718509]
 16: (CDir::authority()+0x56) [0x6ebcb6]
 17: (CInode::authority()+0x49) [0x718509]
 18: (Locker::try_eval(SimpleLock*, bool*)+0x2a) [0x672cfa]
 19: (Locker::eval_gather(SimpleLock*, bool, bool*, std::list<Context*, std::allocator<Context*> >*)+0x1c33) [0x677943]
 20: (Locker::wrlock_finish(SimpleLock*, Mutation*, bool*)+0x45d) [0x67c92d]
 21: (Locker::_drop_non_rdlocks(Mutation*, std::set<CInode*, std::less<CInode*>, std::allocator<CInode*> >*)+0x19d) [0x67cedd]
 22: (Locker::drop_locks(Mutation*, std::set<CInode*, std::less<CInode*>, std::allocator<CInode*> >*)+0x94) [0x68d344]
 23: (Locker::scatter_writebehind_finish(ScatterLock*, Mutation*)+0x1e5) [0x68d5f5]
 24: (Locker::C_Locker_ScatterWB::finish(int)+0x1d) [0x69a0bd]
 25: (Context::complete(int)+0x12) [0x49b862]
 26: (finish_contexts(CephContext*, std::list<Context*, std::allocator<Context*> >&, int)+0x14e) [0x7ead9e]
 27: (Journaler::_finish_flush(int, unsigned long, utime_t)+0x206) [0x7e29f6]
 28: (Journaler::C_Flush::finish(int)+0x1d) [0x7eafad]
 29: (Objecter::handle_osd_op_reply(MOSDOpReply*)+0xd8a) [0x7b15da]
 30: (MDS::handle_core_message(Message*)+0xedf) [0x4c5fff]
 31: (MDS::_dispatch(Message*)+0x3c) [0x4c615c]
 32: (MDS::ms_dispatch(Message*)+0x97) [0x4c8697]
 33: (SimpleMessenger::dispatch_entry()+0x9d2) [0x822032]
 34: (SimpleMessenger::DispatchThread::entry()+0x2c) [0x492b2c]
 35: (Thread::_entry_func(void*)+0x12) [0x816072]
 36: (()+0x7971) [0x7f113613e971]
 37: (clone()+0x6d) [0x7f1134bd292d]
#1

Updated by Sage Weil over 12 years ago

  • Target version changed from v0.38 to v0.39
#2

Updated by Sage Weil over 12 years ago

  • Status changed from New to Duplicate
#3

Updated by John Spray over 7 years ago

  • Project changed from Ceph to CephFS
  • Category deleted (1)
  • Target version deleted (v0.39)

Bulk updating project=ceph category=mds bugs so that I can remove the MDS category from the Ceph project to avoid confusion.
