Bug #3210
MDS crashed with a segfault in _unlink_local_finish
Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Q/A
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Logs: /a/teuthology-2012-09-23_19:00:07-regression-master-testing-gcov/28428
ceph version 0.51-691-g153fb3b (commit:153fb3bd50c582ba20160c55e35cbcdceb2e6811)
 1: /tmp/cephtest/binary/usr/local/bin/ceph-mds() [0x98ba3a]
 2: (()+0xfcb0) [0x7fe0e3890cb0]
 3: (CInode::pop_and_dirty_projected_inode(LogSegment*)+0x1e) [0x7568ee]
 4: (Mutation::pop_and_dirty_projected_inodes()+0x68) [0x5b3888]
 5: (Mutation::apply()+0x1b) [0x5b3eeb]
 6: (Server::_unlink_local_finish(MDRequest*, CDentry*, CDentry*, unsigned long)+0x36d) [0x55e78d]
 7: (C_MDS_unlink_local_finish::finish(int)+0x33) [0x5ae073]
 8: (Context::complete(int)+0x12) [0x4b7672]
 9: (finish_contexts(CephContext*, std::list<Context*, std::allocator<Context*> >&, int)+0x177) [0x4ec4c7]
 10: (Journaler::_finish_flush(int, unsigned long, utime_t)+0x246) [0x7b7806]
 11: (Journaler::C_Flush::finish(int)+0x1d) [0x7bfdfd]
 12: (Objecter::handle_osd_op_reply(MOSDOpReply*)+0x103b) [0x7dbd9b]
 13: (MDS::handle_core_message(Message*)+0x5cf) [0x4e504f]
 14: (MDS::_dispatch(Message*)+0xa2) [0x4e6112]
 15: (MDS::ms_dispatch(Message*)+0x113) [0x4e8ef3]
 16: (DispatchQueue::entry()+0x6c9) [0x951269]
 17: (DispatchQueue::DispatchThread::entry()+0x15) [0x8ab595]
 18: (Thread::_entry_func(void*)+0x12) [0x8aed32]
 19: (()+0x7e9a) [0x7fe0e3888e9a]
 20: (clone()+0x6d) [0x7fe0e1e304bd]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
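As the NOTE says, resolving these addresses to source lines needs the matching binary (e.g. `objdump -rdS ceph-mds` or `addr2line -Cfe ceph-mds <addr>`). Before that step, the raw frames can be split into their parts. The sketch below is illustrative only; `parse_frame` and `FRAME_RE` are hypothetical helpers, not part of Ceph or teuthology:

```python
import re

# A symbolized ceph backtrace frame looks like:
#   3: (CInode::pop_and_dirty_projected_inode(LogSegment*)+0x1e) [0x7568ee]
# i.e. "<index>: (<symbol>+<hex offset>) [<hex address>]".
FRAME_RE = re.compile(r"(\d+): \((.+?)\+(0x[0-9a-f]+)\) \[(0x[0-9a-f]+)\]")

def parse_frame(line):
    """Return (index, symbol, offset, address), or None for lines that
    lack the symbol+offset form (e.g. frame 1 above, or the NOTE line)."""
    m = FRAME_RE.match(line.strip())
    if not m:
        return None
    idx, symbol, offset, addr = m.groups()
    return int(idx), symbol, int(offset, 16), int(addr, 16)

# The crashing frame from this report:
crash = parse_frame(
    "3: (CInode::pop_and_dirty_projected_inode(LogSegment*)+0x1e) [0x7568ee]"
)
print(crash)
```

With the frame split this way, the address field is what you would feed to `addr2line` against the same `ceph-mds` build to get a file:line location.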
ubuntu@teuthology:/a/teuthology-2012-09-23_19:00:07-regression-master-testing-gcov/28428$ cat summary.yaml
ceph-sha1: 153fb3bd50c582ba20160c55e35cbcdceb2e6811
client.0-kernel-sha1: 1c17c1aacab817f4621baeee15707ee992c6410a
description: collection:kernel-thrash clusters:fixed-3.yaml fs:btrfs.yaml
  thrashers:default.yaml workloads:kclient_workunit_suites_ffsb.yaml
duration: 1759.8838889598846
failure_reason: 'Command failed with status 1: ''/tmp/cephtest/enable-coredump
  /tmp/cephtest/binary/usr/local/bin/ceph-coverage /tmp/cephtest/archive/coverage
  /tmp/cephtest/daemon-helper term /tmp/cephtest/binary/usr/local/bin/ceph-mds -f
  -i a -c /tmp/cephtest/ceph.conf'''
flavor: gcov
mon.a-kernel-sha1: 1c17c1aacab817f4621baeee15707ee992c6410a
mon.b-kernel-sha1: 1c17c1aacab817f4621baeee15707ee992c6410a
owner: scheduled_teuthology@teuthology
success: false

ubuntu@teuthology:/a/teuthology-2012-09-23_19:00:07-regression-master-testing-gcov/28428$ cat config.yaml
kernel: &id001
  kdb: true
  sha1: 1c17c1aacab817f4621baeee15707ee992c6410a
nuke-on-error: true
overrides:
  ceph:
    conf:
      mds:
        debug mds: 1/20
    coverage: true
    fs: btrfs
    log-whitelist:
    - slow request
    sha1: 153fb3bd50c582ba20160c55e35cbcdceb2e6811
  s3tests:
    branch: master
  workunit:
    sha1: 153fb3bd50c582ba20160c55e35cbcdceb2e6811
roles:
- - mon.a
  - mon.c
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mds.a
  - osd.3
  - osd.4
  - osd.5
- - client.0
targets:
  ubuntu@plana36.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCe7CpJbnd7W2/n42TTTjDArnVkyZfbRANfmkdgfDM+6AYg6qd9wUhes6LP++eMvhuM96Sz5W4380o8OME0cguG1LkkADbm8pQbPAPZwF1Fj28YxgZKpc2PTPsF+sjOujC+AaXaQ82ffSkLL0oElKZgAiFEGCytSdUNFHZxjztDIOoWlt7kylQCy4sJCEbND8JFwFfeGyyePvMl3CNdbnR7H5GuyIx70iglLBO/XFwArjeOUZ/FboRZWOBivpZQf9IMy8k2rQetzxTyugd7cTVdq1G5N5NeHpbQfv286G2oDaZj1HT252jDF04UP083zMxH1W9gmOoUKzIhl+iXaNLZ
  ubuntu@plana37.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDrxOb9f5/SfItd83HOnLVyJRnfji0fbdvL+3T82akjV6J4s/nyR8Bu+rpXbyUwu2BRDoxK4pT2dBqw86meq1qbU5Q1ypWBSH41MYGd213fy0g8YibFiYVGmXFCSwtY8X2Pet9vtLDoYvtnsgNI8djy5GPkQyZFKSszJHznZvQU10NWfM6RfxxtsBKXC/aot4QXb3GIym2/EmeuTAAef6p98dd15P9l9HQkpwXZLwiDZ53IbU79CTINo5HTD/6+1XHUcjb1OUKzQMx1jU485gW6IlsR0G0jJKSv+YEu4zSxxva7gWt1AYxGo2jhNDffEGLsNurzXFf9yeYshCTAszLf
  ubuntu@plana45.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDp3cwfZhOipCot6NiKX4cRMn4zx43QY0+5HdqzCQU2y7OrOJt3d0qvifnZPyeq8/d+aW2WL2OM8m4taz380JsP0SLmlpY8D0pGY/tN0pQDqIFd8EboMtKY6tR8unQrVzuczMqup/tkKSfdRp0zAeTiJ8qH7l9MaVcOw6WfRACb8f7APJE2gVRBrzPAdbqKzAphTRzZSz0cq722AX7XQDPT2dz7NoTp5Tk7xaQdDu2II+78B1H27IWdyYeonfy17yf9N+IA2Xzna/g5zu8apg7UvzyFmHunLyjr78dhPtR39201A0QJ5x5Qli9/UaB3LwiqnbCiGfx4xWFazdUFzxiD
tasks:
- internal.lock_machines: 3
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock: null
- ceph:
    log-whitelist:
    - wrongly marked me down
    - objects unfound and apparently lost
- thrashosds: null
- kclient: null
- workunit:
    clients:
      all:
      - suites/ffsb.sh
Related issues
History
#1 Updated by Sage Weil about 11 years ago
- Project changed from Ceph to CephFS
#2 Updated by Sage Weil about 11 years ago
- Status changed from New to Resolved
Fixed by commit:44bc687d98f931b15538805d3923492d62dca779