Bug #10604

osd crash in upgrade:dumpling-dumpling-distro-basic-multi run

Added by Yuri Weinstein about 9 years ago. Updated about 9 years ago.

Status: Duplicate
Priority: Urgent
Assignee: -
Category: OSD
Target version: -
% Done: 0%
Source: Q/A
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

Run: http://pulpito.ceph.com/teuthology-2015-01-21_11:54:26-upgrade:dumpling-dumpling-distro-basic-multi/

Job: 716099

Logs: http://qa-proxy.ceph.com/teuthology/teuthology-2015-01-21_11:54:26-upgrade:dumpling-dumpling-distro-basic-multi/716099/

In /a/teuthology-2015-01-21_11:54:26-upgrade:dumpling-dumpling-distro-basic-multi/716099/remote/plana44/log/ceph-osd.0.log.gz

ceph-osd.0.log.gz:92897938-   -10> 2015-01-21 12:49:38.305620 7fe52f57e700 10 osd.0 126 do_waiters -- start
ceph-osd.0.log.gz:92898019-    -9> 2015-01-21 12:49:38.305623 7fe52f57e700 10 osd.0 126 do_waiters -- finish
ceph-osd.0.log.gz:92898101-    -8> 2015-01-21 12:49:38.305647 7fe52ad75700 10 osd.0 126 dequeue_op 0x54411d0 prio 196 cost 0 latency 0.000262 osd_sub_op_reply(client.4230.0:6568 0.0 87af8780/10000000010.00000153/head//0 [] ondisk, result = 0) v1 pg pg[0.0( v 126'116 lc 38'81 (0'0,126'116] local-les=102 n=56 ec=1 les/c 102/94 100/101/101) [0,3]/[0,3,4] r=0 lpr=101 pi=41-100/9 luod=115'115 rops=2 bft=3 lcod 38'80 mlcod 38'61 active+recovering+remapped m=5]
ceph-osd.0.log.gz:92898533-    -7> 2015-01-21 12:49:38.305675 7fe52ad75700  5 --OSD::tracker-- reqid: unknown.0.0:0, seq: 2457, time: 2015-01-21 12:49:38.305675, event: reached_pg, request: osd_sub_op_reply(client.4230.0:6568 0.0 87af8780/10000000010.00000153/head//0 [] ondisk, result = 0) v1
ceph-osd.0.log.gz:92898800-    -6> 2015-01-21 12:49:38.305686 7fe52ad75700  5 --OSD::tracker-- reqid: unknown.0.0:0, seq: 2457, time: 2015-01-21 12:49:38.305686, event: started, request: osd_sub_op_reply(client.4230.0:6568 0.0 87af8780/10000000010.00000153/head//0 [] ondisk, result = 0) v1
ceph-osd.0.log.gz:92899064-    -5> 2015-01-21 12:49:38.305693 7fe52ad75700  7 osd.0 pg_epoch: 126 pg[0.0( v 126'116 lc 38'81 (0'0,126'116] local-les=102 n=56 ec=1 les/c 102/94 100/101/101) [0,3]/[0,3,4] r=0 lpr=101 pi=41-100/9 luod=115'115 rops=2 bft=3 lcod 38'80 mlcod 38'61 active+recovering+remapped m=5] repop_ack rep_tid 2 op osd_op(client.4230.0:6568 10000000010.00000153 [write 1703936~524288] 0.87af8780 snapc 1=[] e124) v4 result 0 ack_type 4 from osd.4
ceph-osd.0.log.gz:92899500-    -4> 2015-01-21 12:49:38.305714 7fe52ad75700  5 --OSD::tracker-- reqid: client.4230.0:6568, seq: 2456, time: 2015-01-21 12:49:38.305714, event: sub_op_commit_rec, request: osd_op(client.4230.0:6568 10000000010.00000153 [write 1703936~524288] 0.87af8780 snapc 1=[] e124) v4
ceph-osd.0.log.gz:92899776-    -3> 2015-01-21 12:49:38.305724 7fe52ad75700 10 osd.0 pg_epoch: 126 pg[0.0( v 126'116 lc 38'81 (0'0,126'116] local-les=102 n=56 ec=1 les/c 102/94 100/101/101) [0,3]/[0,3,4] r=0 lpr=101 pi=41-100/9 luod=115'115 rops=2 bft=3 lcod 38'80 mlcod 38'61 active+recovering+remapped m=5] eval_repop repgather(0x3bbc2e0 applying 126'116 rep_tid=2 wfack=0,3 wfdisk=0,3 op=osd_op(client.4230.0:6568 10000000010.00000153 [write 1703936~524288] 0.87af8780 snapc 1=[] e124) v4) wants=d
ceph-osd.0.log.gz:92900249-    -2> 2015-01-21 12:49:38.305741 7fe52ad75700 10 osd.0 126 dequeue_op 0x54411d0 finish
ceph-osd.0.log.gz:92900338-    -1> 2015-01-21 12:49:38.305745 7fe52ad75700  5 --OSD::tracker-- reqid: unknown.0.0:0, seq: 2457, time: 2015-01-21 12:49:38.305745, event: done, request: osd_sub_op_reply(client.4230.0:6568 0.0 87af8780/10000000010.00000153/head//0 [] ondisk, result = 0) v1
ceph-osd.0.log.gz:92900599-     0> 2015-01-21 12:49:38.360517 7fe52a574700 -1 *** Caught signal (Aborted) **
ceph-osd.0.log.gz:92900681- in thread 7fe52a574700
ceph-osd.0.log.gz:92900705-
ceph-osd.0.log.gz:92900706: ceph version 0.67.11-61-gd73f0b8 (d73f0b86d3989d7b5924e984f949734a64ab04a9)
ceph-osd.0.log.gz:92900783- 1: ceph-osd() [0x7f628f]
ceph-osd.0.log.gz:92900809- 2: (()+0x10340) [0x7fe53f702340]
ceph-osd.0.log.gz:92900843- 3: (gsignal()+0x39) [0x7fe53d6e9f89]
ceph-osd.0.log.gz:92900881- 4: (abort()+0x148) [0x7fe53d6ed398]
ceph-osd.0.log.gz:92900918- 5: (__gnu_cxx::__verbose_terminate_handler()+0x155) [0x7fe53dff56b5]
ceph-osd.0.log.gz:92900988- 6: (()+0x5e836) [0x7fe53dff3836]
ceph-osd.0.log.gz:92901022- 7: (()+0x5e863) [0x7fe53dff3863]
ceph-osd.0.log.gz:92901056- 8: (()+0x5eaa2) [0x7fe53dff3aa2]
ceph-osd.0.log.gz:92901090- 9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1f2) [0x8bf002]
ceph-osd.0.log.gz:92901182- 10: (ReplicatedPG::sub_op_modify(std::tr1::shared_ptr<OpRequest>)+0x109d) [0x60818d]
ceph-osd.0.log.gz:92901268- 11: (ReplicatedPG::do_sub_op(std::tr1::shared_ptr<OpRequest>)+0x432) [0x6088a2]
ceph-osd.0.log.gz:92901349- 12: (PG::do_request(std::tr1::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x3ae) [0x6fa3ae]
ceph-osd.0.log.gz:92901444- 13: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x343) [0x6468f3]
ceph-osd.0.log.gz:92901566- 14: (OSD::OpWQ::_process(boost::intrusive_ptr<PG>, ThreadPool::TPHandle&)+0x19f) [0x65b5ef]
ceph-osd.0.log.gz:92901659- 15: (ThreadPool::WorkQueueVal<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, boost::intrusive_ptr<PG> >::_void_process(void*, ThreadPool::TPHandle&)+0x9c) [0x6971cc]
ceph-osd.0.log.gz:92901850- 16: (ThreadPool::worker(ThreadPool::WorkThread*)+0xaf1) [0x8afe31]
ceph-osd.0.log.gz:92901918- 17: (ThreadPool::WorkThread::entry()+0x10) [0x8b0d20]
ceph-osd.0.log.gz:92901973- 18: (()+0x8182) [0x7fe53f6fa182]
ceph-osd.0.log.gz:92902007- 19: (clone()+0x6d) [0x7fe53d7ae38d]
ceph-osd.0.log.gz:92902044- NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
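
Not a fix, only context for reading the trace: the four-argument ceph::__ceph_assert_fail() in frame 9, together with the __gnu_cxx::__verbose_terminate_handler()/abort() frames above it, is the usual shape of a Ceph assert firing in an op worker thread, and frame 10 is ReplicatedPG::sub_op_modify, the same spot as the FAILED assert(0) in the duplicate, #6574. Below is a minimal, self-contained C++ sketch of that failure path. It is not Ceph source; MY_ASSERT, assert_fail(), FailedAssertion, and sub_op_modify_stub() are hypothetical names, and the sketch assumes only that the assert helper throws an exception that then goes unhandled.

    // Hypothetical sketch, not Ceph code: an assert macro whose failure path
    // mirrors frames 2-10 of the backtrace above (assert helper -> uncaught
    // exception -> __verbose_terminate_handler -> abort -> SIGABRT).
    #include <cstdio>
    #include <stdexcept>

    struct FailedAssertion : std::runtime_error {
        using std::runtime_error::runtime_error;
    };

    // Same four-argument shape as ceph::__ceph_assert_fail in frame 9:
    // assertion text, file, line, function.
    [[noreturn]] void assert_fail(const char *assertion, const char *file,
                                  int line, const char *func) {
        std::fprintf(stderr, "%s: %d: FAILED assert(%s) in %s\n",
                     file, line, assertion, func);
        // With no handler on the calling thread, this reaches std::terminate,
        // i.e. __gnu_cxx::__verbose_terminate_handler() and then abort().
        throw FailedAssertion(assertion);
    }

    #define MY_ASSERT(expr) \
        ((expr) ? (void)0 : assert_fail(#expr, __FILE__, __LINE__, __func__))

    // Stand-in for the replica-side handler named in frame 10.
    void sub_op_modify_stub(bool state_ok) {
        MY_ASSERT(state_ok);  // analogous to the assert(0) cited in #6574
    }

    int main() {
        sub_op_modify_stub(true);   // passes silently
        sub_op_modify_stub(false);  // prints the assert message and aborts
    }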


Related issues

Duplicates: Ceph - Bug #6574: osd/ReplicatedPG.cc: 7851: FAILED assert(0) (Resolved, 10/16/2013)

History

#1 Updated by Samuel Just about 9 years ago

  • Priority changed from Normal to Urgent

#2 Updated by Sage Weil about 9 years ago

  • Category set to OSD

#3 Updated by Samuel Just about 9 years ago

  • Status changed from New to Duplicate
