Bug #36337 (open): OSDs crash with failed assertion in PGLog::merge_log as logs do not overlap

Added by Jonas Jelten over 5 years ago. Updated over 3 years ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hello!

Unfortunately, our single-node "cluster" with 11 OSDs is broken because some OSDs crash when they start peering.
I'm on Ubuntu 18.04 with Ceph Mimic (13.2.2).

The problem seems to have started when RAM filled up and the OSD processes crashed due to memory allocation failures.

No weird commands (e.g. force_create_pg) were used on this cluster and it was set up with 13.2.1 initially.
The affected pool seems to be a replicated pool with size=3 and min_size=2 (which haven't been changed).

Crash log of osd.9 (only the crashed thread) is attached. It crashes with:

0> 2018-10-07 15:02:31.019 7f27b99a8700 -1 *** Caught signal (Aborted) **
in thread 7f27b99a8700 thread_name:tp_osd_tp
ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)
1: (()+0x91b860) [0x55d747c9e860]
2: (()+0x12890) [0x7f27da05a890]
3: (gsignal()+0xc7) [0x7f27d8f1ee97]
4: (abort()+0x141) [0x7f27d8f20801]
5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x250) [0x7f27db96c970]
6: (()+0x26c9e7) [0x7f27db96c9e7]
7: (PGLog::merge_log(pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)+0x1ad0) [0x55d747865f50]
8: (PG::merge_log(ObjectStore::Transaction&, pg_info_t&, pg_log_t&, pg_shard_t)+0x64) [0x55d7477c3194]
9: (PG::proc_master_log(ObjectStore::Transaction&, pg_info_t&, pg_log_t&, pg_missing_set<false>&, pg_shard_t)+0x8b) [0x55d747804abb]
10: (PG::RecoveryState::GetLog::react(PG::RecoveryState::GotLog const&)+0xab) [0x55d747804e4b]
11: (boost::statechart::simple_state<PG::RecoveryState::GetLog, PG::RecoveryState::Peering, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mp
12: (boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>::process_queued_events()+0xb3) [0x55d7478
13: (boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>::process_event(boost::statechart::event_b
14: (PG::do_peering_event(std::shared_ptr<PGPeeringEvent>, PG::RecoveryCtx*)+0x143) [0x55d747811a53]
15: (OSD::dequeue_peering_evt(OSDShard*, PG*, std::shared_ptr<PGPeeringEvent>, ThreadPool::TPHandle&)+0xcf) [0x55d74775250f]
16: (PGPeeringItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x50) [0x55d7479be820]
17: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x598) [0x55d747762148]
18: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x46e) [0x7f27db97175e]
19: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x7f27db9737e0]
20: (()+0x76db) [0x7f27da04f6db]
21: (clone()+0x3f) [0x7f27d900188f]
0> 2018-10-07 15:02:30.959 7f27b99a8700 -1 /home/jj/cephsrc/ceph-13.2.2/src/osd/PGLog.cc: In function 'void PGLog::merge_log(pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool
/home/jj/cephsrc/ceph-13.2.2/src/osd/PGLog.cc: 345: FAILED assert(log.head >= olog.tail && olog.head >= log.tail)

The last hints are:

-485> 2018-10-07 15:02:30.955 7f27b99a8700 10 osd.9 pg_epoch: 13898 pg[4.2( v 3035'149287 (3035'146200,3035'149287] local-lis/les=3015/3016 n=6960 ec=140/140 lis/c 3161/3015 les/c/f 3162/3016/0 13898/13898/1389
-483> 2018-10-07 15:02:30.955 7f27b99a8700 10 merge_log log((3035'145746,3035'145746], crt=3035'145746) from osd.4 into log((3035'146200,3035'149287], crt=3035'149287)

So it seems that contacting osd.4 for pg 4.2 leads to the crash.
The logs no longer overlap, but Ceph still attempts to merge them:
osd.4 has (3035'145746,3035'145746] while osd.9 has (3035'146200,3035'149287], and these ranges don't overlap.

I guess it would be okay to get rid of the PG copy on osd.4, as its log is the older one.
If that is true, Ceph should do this automatically.
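
For illustration only (this is not Ceph's eversion_t code, just a minimal Python sketch of the condition behind the failed assert), the two log ranges from the merge_log line above indeed fail the overlap check:

# Minimal sketch of the check behind
# "FAILED assert(log.head >= olog.tail && olog.head >= log.tail)".
# Versions are the "epoch'version" strings printed in the OSD log;
# this is an illustration, not Ceph's eversion_t implementation.

def ev(s):
    # Parse "3035'149287" into a comparable (epoch, version) tuple.
    epoch, version = s.split("'")
    return (int(epoch), int(version))

def logs_overlap(log_tail, log_head, olog_tail, olog_head):
    # merge_log() requires the local log (tail, head] and the peer's log
    # (olog_tail, olog_head] to overlap or at least be contiguous.
    return ev(log_head) >= ev(olog_tail) and ev(olog_head) >= ev(log_tail)

# osd.9 (local): (3035'146200, 3035'149287]
# osd.4 (peer):  (3035'145746, 3035'145746]  (empty, older log)
print(logs_overlap("3035'146200", "3035'149287",
                   "3035'145746", "3035'145746"))  # False -> assert fires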

pools:

pool 1 'xxx_data' erasure size 10 min_size 9 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 175 flags hashpspool,ec_overwrites stripe_width 32768 application cephfs
pool 3 'ssn_root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 147 flags hashpspool stripe_width 0 application cephfs
pool 4 'ssn_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 147 flags hashpspool stripe_width 0 application cephfs
pool 5 'xxx_backup' erasure size 10 min_size 8 crush_rule 2 object_hash rjenkins pg_num 32 pgp_num 32 last_change 176 flags hashpspool,ec_overwrites stripe_width 28672 application cephfs

pg dump is attached.

pg 4.2 query says this:

{
    "state": "recovery_wait+undersized+degraded+peered",
    "snap_trimq": "[]",
    "snap_trimq_len": 0,
    "epoch": 13902,
    "up": [
        4
    ],
    "acting": [
        4
    ],
    "acting_recovery_backfill": [
        "4"
    ],
    [...]
    "recovery_state": [
        {
            "name": "Started/Primary/Active",
            "enter_time": "2018-10-07 15:02:38.218711",
            "might_have_unfound": [
                {
                    "osd": "7",
                    "status": "osd is down"
                },
                {
                    "osd": "9",
                    "status": "osd is down"
                }
            ],
            "recovery_progress": {
                "backfill_targets": [],
                "waiting_on_backfill": [],
                "last_backfill_started": "MIN",
                "backfill_info": {
                    "begin": "MIN",
                    "end": "MIN",
                    "objects": []
                },
                "peer_backfill_info": [],
                "backfills_in_flight": [],
                "recovering": [],
                "pg_backend": {
                    "pull_from_peer": [],
                    "pushing": []
                }
            }
        },
        {
            "name": "Started",
            "enter_time": "2018-10-07 15:02:37.251392"
        }
    ],

My current way of resolving this is a ceph-objectstore-tool export-delete of the PG on the OSD that has the broken, older log. In this case, I delete pg 4.2 from osd.4.
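
For reference, here is a rough sketch of that export-then-remove step (not part of the original report). It assumes the usual ceph-objectstore-tool invocations (--op export, then --op remove); exact op names and required flags vary between releases, so check ceph-objectstore-tool --help for your version, and the target OSD must be stopped first.

# Rough sketch of the manual "export, then remove" step described above.
# Assumes the usual ceph-objectstore-tool operations (--op export /
# --op remove); verify against `ceph-objectstore-tool --help` on your
# release, and stop the target OSD before running this.
import subprocess

def export_and_remove_pg(osd_id, pgid, backup_file):
    data_path = f"/var/lib/ceph/osd/ceph-{osd_id}"
    # Keep a backup copy of the PG in case it is needed later.
    subprocess.run(["ceph-objectstore-tool", "--data-path", data_path,
                    "--pgid", pgid, "--op", "export",
                    "--file", backup_file], check=True)
    # Remove the stale copy so peering no longer tries to merge its log.
    subprocess.run(["ceph-objectstore-tool", "--data-path", data_path,
                    "--pgid", pgid, "--op", "remove"], check=True)

# Example (hypothetical backup path): drop the stale copy of pg 4.2 from osd.4.
# export_and_remove_pg(4, "4.2", "/root/pg4.2-osd4.export")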

After repeating this until the cluster was completely healthy again (all PGs active+clean), I launched my MDS again, but this led to another corruption! Now other PGs have the same log-overlap issue and I have to delete the broken PGs again. My guess is that the cluster actually wasn't clean, and writes by the MDS just triggered PG log writes, which led to merges and crashes again.

Any pointers on what can be done to prevent this from happening? And, of course, on how to either actually repair the corruption or repair it more efficiently?


Files

osd9-crash-threadlog20.log (797 KB) - log of osd.9 that has the crash - Jonas Jelten, 10/07/2018 01:16 PM
pgdump (85.8 KB) - ceph pg dump - Jonas Jelten, 10/07/2018 01:19 PM
ceph-osd.4.log (314 KB) - osd4 log that peered with osd.9 - Jonas Jelten, 10/07/2018 01:34 PM
#2 - Updated by Jonas Jelten over 5 years ago

Because I had to delete many broken PGs (46 to date), I've created a tool that finds the broken PGs and their OSDs and removes only the older PG copies whose logs no longer overlap.

https://gist.github.com/TheJJ/c6be62e612ac4782bd0aa279d8c82197

Maybe it's helpful if anyone else encounters this kind of corruption.
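
The full implementation is in the gist above; the core decision rule amounts to roughly the following sketch (the helper name and exact comparison are my simplification, not necessarily what the gist does):

# Simplified sketch of the decision rule: for a broken PG, only remove
# the replica copy whose log interval is strictly older and no longer
# overlaps the newer copy. Log bounds are (epoch, version) tuples.

def safe_to_remove(old_log, new_log):
    (old_tail, old_head), (new_tail, new_head) = old_log, new_log
    disjoint = old_head < new_tail or new_head < old_tail
    return disjoint and old_head < new_head

# osd.4: tail = head = (3035, 145746); osd.9: (3035, 146200)..(3035, 149287)
print(safe_to_remove(((3035, 145746), (3035, 145746)),
                     ((3035, 146200), (3035, 149287))))  # True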

#3 - Updated by Greg Farnum over 5 years ago

  • Project changed from Ceph to RADOS
  • Category deleted (OSD)
#4 - Updated by Sage Weil about 5 years ago

  • Status changed from New to 12
  • Priority changed from Normal to High
2019-02-21T22:51:59.427 INFO:tasks.ceph.osd.0.smithi117.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.0.1-4064-g29c3ee3/rpm/el7/BUILD/ceph-14.0.1-4064-g29c3ee3/src/osd/PGLog.cc: In function 'void PGLog::merge_log(pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)' thread 7f38bd4a3700 time 2019-02-21 22:51:59.445175
2019-02-21T22:51:59.427 INFO:tasks.ceph.osd.0.smithi117.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.0.1-4064-g29c3ee3/rpm/el7/BUILD/ceph-14.0.1-4064-g29c3ee3/src/osd/PGLog.cc: 359: FAILED ceph_assert(log.head >= olog.tail && olog.head >= log.tail)
2019-02-21T22:51:59.433 INFO:tasks.ceph.osd.0.smithi117.stderr: ceph version 14.0.1-4064-g29c3ee3 (29c3ee3b2ff12b9c71f42161314be14bd122bbda) nautilus (dev)
2019-02-21T22:51:59.434 INFO:tasks.ceph.osd.0.smithi117.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14a) [0x556934106b4c]
2019-02-21T22:51:59.434 INFO:tasks.ceph.osd.0.smithi117.stderr: 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0x556934106d1a]
2019-02-21T22:51:59.434 INFO:tasks.ceph.osd.0.smithi117.stderr: 3: (PGLog::merge_log(pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)+0x1bb4) [0x556934374ae4]
2019-02-21T22:51:59.434 INFO:tasks.ceph.osd.0.smithi117.stderr: 4: (PG::merge_log(ObjectStore::Transaction&, pg_info_t&, pg_log_t&, pg_shard_t)+0x64) [0x5569342d4524]
2019-02-21T22:51:59.434 INFO:tasks.ceph.osd.0.smithi117.stderr: 5: (PG::proc_master_log(ObjectStore::Transaction&, pg_info_t&, pg_log_t&, pg_missing_set<false>&, pg_shard_t)+0x94) [0x5569343043d4]
2019-02-21T22:51:59.434 INFO:tasks.ceph.osd.0.smithi117.stderr: 6: (PG::RecoveryState::GetLog::react(PG::RecoveryState::GotLog const&)+0x97) [0x55693431fe67]
2019-02-21T22:51:59.435 INFO:tasks.ceph.osd.0.smithi117.stderr: 7: (boost::statechart::simple_state<PG::RecoveryState::GetLog, PG::RecoveryState::Peering, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x43) [0x55693435fc33]
2019-02-21T22:51:59.435 INFO:tasks.ceph.osd.0.smithi117.stderr: 8: (boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>::process_event(boost::statechart::event_base const&)+0x136) [0x5569343406b6]
2019-02-21T22:51:59.435 INFO:tasks.ceph.osd.0.smithi117.stderr: 9: (PG::do_peering_event(std::shared_ptr<PGPeeringEvent>, PG::RecoveryCtx*)+0x119) [0x55693432fb89]
2019-02-21T22:51:59.435 INFO:tasks.ceph.osd.0.smithi117.stderr: 10: (OSD::dequeue_peering_evt(OSDShard*, PG*, std::shared_ptr<PGPeeringEvent>, ThreadPool::TPHandle&)+0x1b4) [0x55693426b7e4]
2019-02-21T22:51:59.436 INFO:tasks.ceph.osd.0.smithi117.stderr: 11: (PGPeeringItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x50) [0x5569344d9f20]
2019-02-21T22:51:59.437 INFO:tasks.ceph.osd.0.smithi117.stderr: 12: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xa0c) [0x55693426025c]
2019-02-21T22:51:59.437 INFO:tasks.ceph.osd.0.smithi117.stderr: 13: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x433) [0x556934852cf3]
2019-02-21T22:51:59.437 INFO:tasks.ceph.osd.0.smithi117.stderr: 14: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x556934855d90]
2019-02-21T22:51:59.437 INFO:tasks.ceph.osd.0.smithi117.stderr: 15: (()+0x7e25) [0x7f38e84dde25]

/a/sage-2019-02-21_21:52:17-rados-wip-sage3-testing-2019-02-21-1359-distro-basic-smithi/3622514

unfortunately no logs (dead test) :(

#5 - Updated by Patrick Donnelly over 4 years ago

  • Status changed from 12 to New
#6 - Updated by Neha Ojha almost 4 years ago

2020-06-08T21:19:39.877 INFO:tasks.ceph.osd.4.smithi087.stderr:/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.0.0-2296-g9b3b804/rpm/el8/BUILD/ceph-16.0.0-2296-g9b3b804/src/osd/PGLog.cc: In function 'void PGLog::merge_log(pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)' thread 7fe20d22a700 time 2020-06-08T21:19:39.872282+0000
2020-06-08T21:19:39.878 INFO:tasks.ceph.osd.4.smithi087.stderr:/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.0.0-2296-g9b3b804/rpm/el8/BUILD/ceph-16.0.0-2296-g9b3b804/src/osd/PGLog.cc: 371: FAILED ceph_assert(log.head >= olog.tail && olog.head >= log.tail)
2020-06-08T21:19:39.893 INFO:tasks.ceph.osd.4.smithi087.stderr: ceph version 16.0.0-2296-g9b3b804 (9b3b8049a7c5ac552900c3ae709b6a88ade1bf7c) pacific (dev)
2020-06-08T21:19:39.893 INFO:tasks.ceph.osd.4.smithi087.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x158) [0x561df6132bba]
2020-06-08T21:19:39.893 INFO:tasks.ceph.osd.4.smithi087.stderr: 2: (()+0x545dd4) [0x561df6132dd4]
2020-06-08T21:19:39.893 INFO:tasks.ceph.osd.4.smithi087.stderr: 3: (PGLog::merge_log(pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)+0x1ca1) [0x561df631b6f1]
2020-06-08T21:19:39.894 INFO:tasks.ceph.osd.4.smithi087.stderr: 4: (PeeringState::merge_log(ceph::os::Transaction&, pg_info_t&, pg_log_t&, pg_shard_t)+0x75) [0x561df64a43b5]
2020-06-08T21:19:39.894 INFO:tasks.ceph.osd.4.smithi087.stderr: 5: (PeeringState::Stray::react(MLogRec const&)+0xcc) [0x561df64df6cc]
2020-06-08T21:19:39.894 INFO:tasks.ceph.osd.4.smithi087.stderr: 6: (boost::statechart::simple_state<PeeringState::Stray, PeeringState::Started, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0xa5) [0x561df650b135]
2020-06-08T21:19:39.894 INFO:tasks.ceph.osd.4.smithi087.stderr: 7: (boost::statechart::state_machine<PeeringState::PeeringMachine, PeeringState::Initial, std::allocator<boost::statechart::none>, boost::statechart::null_exception_translator>::process_event(boost::statechart::event_base const&)+0x5b) [0x561df62fb5eb]
2020-06-08T21:19:39.894 INFO:tasks.ceph.osd.4.smithi087.stderr: 8: (PG::do_peering_event(std::shared_ptr<PGPeeringEvent>, PeeringCtx&)+0x2d1) [0x561df62ee2d1]
2020-06-08T21:19:39.895 INFO:tasks.ceph.osd.4.smithi087.stderr: 9: (OSD::dequeue_peering_evt(OSDShard*, PG*, std::shared_ptr<PGPeeringEvent>, ThreadPool::TPHandle&)+0x29c) [0x561df62603ec]
2020-06-08T21:19:39.895 INFO:tasks.ceph.osd.4.smithi087.stderr: 10: (ceph::osd::scheduler::PGPeeringItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x56) [0x561df6498c36]
2020-06-08T21:19:39.895 INFO:tasks.ceph.osd.4.smithi087.stderr: 11: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x12ef) [0x561df625366f]
2020-06-08T21:19:39.895 INFO:tasks.ceph.osd.4.smithi087.stderr: 12: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5c4) [0x561df688eae4]
2020-06-08T21:19:39.895 INFO:tasks.ceph.osd.4.smithi087.stderr: 13: (ShardedThreadPool::WorkThreadSharded::entry()+0x14) [0x561df6891744]
2020-06-08T21:19:39.896 INFO:tasks.ceph.osd.4.smithi087.stderr: 14: (()+0x82de) [0x7fe23066c2de]
2020-06-08T21:19:39.896 INFO:tasks.ceph.osd.4.smithi087.stderr: 15: (clone()+0x43) [0x7fe22f411133]

/a/nojha-2020-06-08_18:28:32-rados:thrash-master-distro-basic-smithi/5130051 - dead job, no logs again :(

#7 - Updated by Neha Ojha over 3 years ago

  • Priority changed from High to Normal