Bug #19191

osd/ReplicatedBackend.cc: 1109: FAILED assert(!parent->get_log().get_missing().is_missing(soid))

Added by Sage Weil about 7 years ago. Updated over 6 years ago.

Status: Resolved
Priority: High
Assignee: -
Category: PG Split
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Component(RADOS): OSD
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

  -334> 2017-03-04 02:19:34.154279 7f9a40a60700 10 osd.1 pg_epoch: 93 pg[1.e( v 93'2357 lc 0'0 (72'1265,93'2357] lb MIN (bitwise) local-les=65 n=33 ec=7 les/c/f 65/48/0 84/88/52) [2,1]/[4,5] r=-1 lpr=88 pi=7-87/8 luod=0'0 lua=93'2356 crt=93'2357 active+remapped m=17] handle_message: 0x7f9a71228f00
  -333> 2017-03-04 02:19:34.154292 7f9a40a60700 10 osd.1 pg_epoch: 93 pg[1.e( v 93'2357 lc 0'0 (72'1265,93'2357] lb MIN (bitwise) local-les=65 n=33 ec=7 les/c/f 65/48/0 84/88/52) [2,1]/[4,5] r=-1 lpr=88 pi=7-87/8 luod=0'0 lua=93'2356 crt=93'2357 active+remapped m=17] sub_op_modify trans 1:70ffb74a:::benchmark_data_

     0> 2017-03-04 02:19:34.157419 7f9a40a60700 -1 /build/ceph-12.0.0-1023-gfcbb6f6/src/osd/ReplicatedBackend.cc: In function 'void ReplicatedBackend::sub_op_modify(OpRequestRef)' thread 7f9a40a60700 time 2017-03-04 02:19:34.154304
/build/ceph-12.0.0-1023-gfcbb6f6/src/osd/ReplicatedBackend.cc: 1109: FAILED assert(!parent->get_log().get_missing().is_missing(soid))

 ceph version 12.0.0-1023-gfcbb6f6 (fcbb6f681122104df058e33b18d1d007dc4a2d0f)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x10e) [0x7f9a5d8c6a2e]
 2: (ReplicatedBackend::sub_op_modify(boost::intrusive_ptr<OpRequest>)+0x10f3) [0x7f9a5d62be13]
 3: (ReplicatedBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x324) [0x7f9a5d630e84]
 4: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x445) [0x7f9a5d4fa105]
 5: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x212) [0x7f9a5d3b0d42]
 6: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest> const&)+0x47) [0x7f9a5d3b1177]
 7: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xff5) [0x7f9a5d3d9f25]
 8: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x955) [0x7f9a5d8cc355]
 9: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x7f9a5d8ce4b0]
 10: (()+0x8184) [0x7f9a5b65a184]
 11: (clone()+0x6d) [0x7f9a5a74a37d]

osd.1 is a backfill target that was shipped an empty transaction ("empty opt"):

2017-03-04 02:19:34.158019 7f94b8215700 10 osd.4 pg_epoch: 93 pg[1.e( v 93'2357 (72'1265,93'2357] local-les=89 n=102 ec=7 les/c/f 89/48/0 84/88/52) [2,1]/[4,5] r=0 lpr=88 pi=7-87/8 luod=93'2356 bft=1,2 crt=93'2357 lcod 93'2355 mlcod 0'0 active+remapped+backfill_wait] issue_repop shipping empty opt to osd.1, object 1:70ffb74a:::benchmark_data_smithi157_18125_object21406:head beyond MAX(last_backfill_started , pinfo.last_backfill MIN)

but it hit this assert:
  // we better not be missing this.
  assert(!parent->get_log().get_missing().is_missing(soid));

/a/sage-2017-03-03_17:51:47-rados-wip-osd-full---basic-smithi/878986

History

#1 Updated by Yuri Weinstein about 7 years ago

Also seen during PR testing: https://trello.com/c/il60a5yB

http://qa-proxy.ceph.com/teuthology/yuriw-2017-03-05_23:28:15-rados-wip-yuri-testing_2017_3_4---basic-smithi/884669/teuthology.log

http://qa-proxy.ceph.com/teuthology/yuriw-2017-03-05_23:28:45-rados-wip-yuri-testing_2017_3_4---basic-mira/884672/teuthology.log

2017-03-05T23:32:03.747 INFO:teuthology.orchestra.run.mira114.stderr: ceph version 12.0.0-1033-gb2578b9 (b2578b9675e91ea1e6b680c50623ff56081ac28c)
2017-03-05T23:32:03.747 INFO:teuthology.orchestra.run.mira114.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x10e) [0x7f735e64d8fe]
2017-03-05T23:32:03.748 INFO:teuthology.orchestra.run.mira114.stderr: 2: (AsyncMessenger::~AsyncMessenger()+0x23a) [0x7f735e75ae7a]
2017-03-05T23:32:03.748 INFO:teuthology.orchestra.run.mira114.stderr: 3: (AsyncMessenger::~AsyncMessenger()+0x9) [0x7f735e75ae89]
2017-03-05T23:32:03.748 INFO:teuthology.orchestra.run.mira114.stderr: 4: (MessengerTest::TearDown()+0x191) [0x55d41f98b511]
2017-03-05T23:32:03.748 INFO:teuthology.orchestra.run.mira114.stderr: 5: (void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*)+0x33) [0x55d41f9be153]
2017-03-05T23:32:03.748 INFO:teuthology.orchestra.run.mira114.stderr: 6: (testing::Test::Run()+0x80) [0x55d41f9b16a0]
2017-03-05T23:32:03.749 INFO:teuthology.orchestra.run.mira114.stderr: 7: (testing::TestInfo::Run()+0x9e) [0x55d41f9b177e]
2017-03-05T23:32:03.749 INFO:teuthology.orchestra.run.mira114.stderr: 8: (testing::TestCase::Run()+0xa5) [0x55d41f9b1885]
2017-03-05T23:32:03.749 INFO:teuthology.orchestra.run.mira114.stderr: 9: (testing::internal::UnitTestImpl::RunAllTests()+0x248) [0x55d41f9b1b38]
2017-03-05T23:32:03.749 INFO:teuthology.orchestra.run.mira114.stderr: 10: (testing::UnitTest::Run()+0x54) [0x55d41f9b1df4]
2017-03-05T23:32:03.749 INFO:teuthology.orchestra.run.mira114.stderr: 11: (main()+0x195) [0x55d41f9720d5]
2017-03-05T23:32:03.749 INFO:teuthology.orchestra.run.mira114.stderr: 12: (__libc_start_main()+0xf5) [0x7f735d908f45]
2017-03-05T23:32:03.750 INFO:teuthology.orchestra.run.mira114.stderr: 13: (()+0x32816) [0x55d41f972816]
2017-03-05T23:32:03.750 INFO:teuthology.orchestra.run.mira114.stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

#2 Updated by Kefu Chai about 7 years ago

Yuri, the backtrace you posted is another issue. I am building your branch wip-yuri-testing_2017_3_4 to see if "ceph_test_msgr" still fails, and I am able to reproduce this. https://github.com/ceph/ceph/pull/13700 is the offending PR; just updated it.

#3 Updated by Josh Durgin about 7 years ago

  • Assignee set to Josh Durgin

Summary:

During PG split we reset last_backfill but do not clear the
local missing set. This comes back to haunt us as this assert failure
when the PG is backfilled from MIN later.

Notes:

Looking into this - In epoch 85 osd.1 is the primary, and osd.4 updates the missing set to include the object:

2017-03-04 02:19:13.689953 7f94bb21b700 10 osd.4 pg_epoch: 85 pg[1.e( v 83'1553 (46'900,83'1553] local-les=85 n=1445 ec=7 les/c/f 72/48/0 84/84/52) [2,1]/[4,5] r=0 lpr=84 pi=7-83/7 bft=1,2 crt=83'1553 lcod 83'1552 mlcod 0'0 remapped] search_for_missing 1:70ffb74a:::benchmark_data_smithi157_18125_object21406:head 72'1266 also missing on osd.1

osd.1 adds it to its missing set:

2017-03-04 02:19:13.694889 7f9a4626b700 20 update missing, append 72'1266 (0'0) modify   1:70ffb74a:::benchmark_data_smithi157_18125_object21406:head by client.4152.0:21407 2017-03-04 02:18:47.934409 0
2017-03-04 02:19:13.696338 7f9a4626b700 10 osd.1 pg_epoch: 85 pg[1.e( v 83'1553 lc 70'1262 (45'868,83'1553] lb 1:7132a282:::benchmark_data_smithi157_18125_object7678:head (bitwise) local-les=65 n=92 ec=7 les/c/f 72/48/0 84/84/52) [2,1]/[4,5] r=-1 lpr=84 pi=7-83/7 crt=83'1553 lcod 70'1261 remapped m=17] activate - not complete, missing(17)

osd.4 sends the empty transaction because backfill has not yet started (the pg has only reached backfill_wait) and the pg has not split, so hoid > last_backfill_started and hoid > peer_info[peer].last_backfill, and should_send_op() returns false.

The question is why osd.1's missing set was not reset, and the answer lies in the Stray state. When receiving an MOSDPGLog, if the message's info.last_backfill is not MIN, we merge the log (and with it the missing set), so that backfill can resume from where it left off.

In this case, osd.1 was being backfilled by osd.4 earlier:

2017-03-04 02:18:47.340531 7f9a40a60700 10 osd.1 pg_epoch: 70 pg[1.e( v 70'1258 (45'868,70'1258] lb 1:7129d7f9:::benchmark_data_smithi157_18125_object19148:head (bitwise) local-les=65 n=91 ec=7 les/c/f 65/48/0 64/64/52) [0,1]/[4,5] r=-1 lpr=65 pi=7-63/5 luod=0'0 crt=70'1258 active] do_backfill pg_backfill(progress 1.e e 70/70 lb 1:7132a282:::benchmark_data_smithi157_18125_object7678:head) v3

When the pg splits, last_backfill is reset from
1:7132a282:::benchmark_data_smithi157_18125_object7678:head to MIN:

2017-03-04 02:19:19.735654 7f9a4626b700 10 osd.1 88 Splitting pg[1.e( v 87'1923 lc 72'1265 (72'1265,87'1923] lb 1:7132a282:::benchmark_data_smithi157_18125_object7678:head (bitwise) local-les=65 n=66 ec=7 les/c/f 65/48/0 84/88/52) [2,1]/[4,5] r=-1 lpr=88 pi=7-87/8 crt=87'1923 remapped NOTIFY m=17] into 1.1e
2017-03-04 02:19:19.737117 7f9a4626b700 10 osd.1 pg_epoch: 88 pg[1.e( v 87'1923 lc 72'1265 (72'1265,87'1923] lb MIN (bitwise) local-les=65 n=66 ec=7 les/c/f 65/48/0 84/88/52) [2,1]/[4,5] r=-1 lpr=88 pi=7-87/8 crt=87'1923 remapped NOTIFY m=17] release_backoffs [MIN,MAX)

but the local missing set is not cleared. Hence the bug.

#4 Updated by Greg Farnum about 7 years ago

Do we actually want to clear the missing set here, or just filter it for the correct child PG?
...I presume clearing it works fine, since otherwise the child PG would presumably be buggy too, and it's surely better not to have them behave divergently. Never mind that question.

#5 Updated by Greg Farnum almost 7 years ago

  • Project changed from Ceph to RADOS
  • Category set to PG Split
  • Component(RADOS) OSD added

#6 Updated by Sage Weil over 6 years ago

  • Status changed from New to 17
  • Priority changed from Immediate to High

#8 Updated by Sage Weil over 6 years ago

  • Status changed from 17 to Resolved
