Bug #9114
osd: segv in build_push_op
Status:
Duplicate
Priority:
Urgent
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Q/A
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
ubuntu@teuthology:/var/lib/teuthworker/archive/sage-2014-08-13_15:28:18-rados-next-testing-basic-multi/422759
(gdb) bt
#0  0x00007ff779524b7b in raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/pt-raise.c:42
#1  0x00000000009a326e in reraise_fatal (signum=11) at global/signal_handler.cc:59
#2  handle_fatal_signal (signum=11) at global/signal_handler.cc:105
#3  <signal handler called>
#4  0x00000000007e024d in ReplicatedBackend::build_push_op (this=0x3e92780, recovery_info=..., progress=..., out_progress=0x7ff75b0d4fc0, out_op=0x311a680, stat=0x2e56ec0) at osd/ReplicatedPG.cc:8574
#5  0x0000000000818e05 in ReplicatedBackend::prep_push (this=0x3e92780, obc=..., soid=..., peer=..., version=..., data_subset=..., clone_subsets=..., pop=0x311a680) at osd/ReplicatedPG.cc:8183
#6  0x00000000008193c2 in ReplicatedBackend::prep_push_to_replica (this=0x3e92780, obc=..., soid=..., peer=..., pop=0x311a680) at osd/ReplicatedPG.cc:8138
#7  0x000000000081a157 in ReplicatedBackend::start_pushes (this=0x3e92780, soid=..., obc=..., h=0x24a0930) at osd/ReplicatedPG.cc:10016
#8  0x000000000085f341 in C_ReplicatedBackend_OnPullComplete::finish (this=0x3f45150, handle=...) at osd/ReplicatedPG.cc:2158
#9  0x000000000066d769 in GenContext<ThreadPool::TPHandle&>::complete (this=0x3f45150, t=...) at ./include/Context.h:45
#10 0x00000000008372f0 in ReplicatedPG::BlessedGenContext<ThreadPool::TPHandle&>::finish (this=<optimized out>, t=...) at osd/ReplicatedPG.h:262
#11 0x000000000066d769 in GenContext<ThreadPool::TPHandle&>::complete (this=0x3ce66a0, t=...) at ./include/Context.h:45
#12 0x0000000000674cad in ThreadPool::WorkQueueVal<GenContext<ThreadPool::TPHandle&>*, GenContext<ThreadPool::TPHandle&>*>::_void_process (this=<optimized out>, handle=...) at ./common/WorkQueue.h:191
#13 0x0000000000a75f66 in ThreadPool::worker (this=0x2196730, wt=0x21c5d80) at common/WorkQueue.cc:128
#14 0x0000000000a79010 in ThreadPool::WorkThread::entry (this=<optimized out>) at common/WorkQueue.h:318
#15 0x00007ff77951ce9a in start_thread (arg=0x7ff75b0d6700) at pthread_create.c:308
#16 0x00007ff777ecfccd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#17 0x0000000000000000 in ?? ()

(gdb) p iter
$1 = {<std::tr1::__shared_ptr<ObjectMap::ObjectMapIteratorImpl, (__gnu_cxx::_Lock_policy)2>> = {_M_ptr = 0x0, _M_refcount = {_M_pi = 0x0}}, <No data fields>}

(gdb) p progress
$2 = (const ObjectRecoveryProgress &) @0x2e56c28: {first = true, data_recovered_to = 0, data_complete = false, omap_recovered_to = {static npos = <optimized out>, _M_dataplus = {<std::allocator<char>> = {<__gnu_cxx::new_allocator<char>> = {<No data fields>}, <No data fields>}, _M_p = 0xf2e718 ""}}, omap_complete = false}
Another OSD crashed with ENOENT on collection_add, same PG.
Updated by Sage Weil over 9 years ago
Note: I manually killed ceph_test_rados to make teuthology clean up.