https://tracker.ceph.com/
2020-09-04T22:23:47Z
Ceph
RADOS - Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
https://tracker.ceph.com/issues/47299?journal_id=174454
2020-09-04T22:23:47Z
Neha Ojha
nojha@redhat.com
<ul><li><strong>Status</strong> changed from <i>New</i> to <i>Need More Info</i></li></ul><p>Is it possible for you to capture osd logs with debug_osd=30? We'll also try to reproduce this at our end.</p>
RADOS - Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
https://tracker.ceph.com/issues/47299?journal_id=174502
2020-09-06T09:11:20Z
Denis Krienbühl
<ul></ul><p>Neha Ojha wrote:</p>
<blockquote>
<p>Is it possible for you to capture osd logs with debug_osd=30? We'll also try to reproduce this at our end.</p>
</blockquote>
<p>Unfortunately we already reset the OSD, as we've been suffering from a number of problems with our cluster and didn't have a chance to keep it in a broken state for investigation. We expect that we may run into further issues like this, though, and if the situation permits we'll happily provide more output.</p>
RADOS - Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
https://tracker.ceph.com/issues/47299?journal_id=193647
2021-05-03T12:07:24Z
Tobias Urdin
<ul><li><strong>File</strong> <a href="/attachments/download/5486/crash2.txt">crash2.txt</a> <a class="icon-only icon-magnifier" title="View" href="/attachments/5486/crash2.txt">View</a> added</li></ul><p>I had multiple OSDs die during an upgrade from Nautilus to Octopus with this trace. See the attached crash2.txt</p>
RADOS - Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
https://tracker.ceph.com/issues/47299?journal_id=195912
2021-05-25T09:37:36Z
Tobias Urdin
<ul></ul><p>Neha, can you draw any conclusions from the above debug_osd=30 log with this issue?</p>
RADOS - Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
https://tracker.ceph.com/issues/47299?journal_id=201977
2021-08-26T21:37:10Z
Neha Ojha
nojha@redhat.com
<ul><li><strong>Status</strong> changed from <i>Need More Info</i> to <i>New</i></li></ul>
RADOS - Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
https://tracker.ceph.com/issues/47299?journal_id=201980
2021-08-26T21:37:37Z
Neha Ojha
nojha@redhat.com
<ul><li><strong>Duplicated by</strong> <i><a class="issue tracker-1 status-10 priority-4 priority-default closed" href="/issues/52180">Bug #52180</a>: crash: void pg_missing_set<TrackChanges>::got(const hobject_t&, eversion_t) [with bool TrackChanges = false]: assert(p->second.need <= v || p->second.is_delete())</i> added</li></ul>
RADOS - Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
https://tracker.ceph.com/issues/47299?journal_id=211520
2022-03-06T20:42:54Z
Tobias Urdin
<ul></ul><p>Just got this again: during recovery after doing maintenance on another node, this OSD crashed.</p>
<pre><code>-1> 2022-03-06T21:08:01.635+0100 7fc59a875700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/15.2.15/rpm/el7/BUILD/ceph-15.2.15/src/osd/osd_types.h: In function 'void pg_missing_set&lt;TrackChanges&gt;::got(const hobject_t&, eversion_t) [with bool TrackChanges = false]' thread 7fc59a875700 time 2022-03-06T21:08:01.523875+0100<br />/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/15.2.15/rpm/el7/BUILD/ceph-15.2.15/src/osd/osd_types.h: 4774: FAILED ceph_assert(p->second.need <= v || p->second.is_delete())</code></pre>
<pre><code>ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)<br /> 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14c) [0x558cce356419]<br /> 2: (()+0x4df5e1) [0x558cce3565e1]<br /> 3: (PeeringState::on_peer_recover(pg_shard_t, hobject_t const&, eversion_t const&)+0x1a1) [0x558cce68a261]<br /> 4: (ReplicatedBackend::handle_push_reply(pg_shard_t, PushReplyOp const&, PushOp*)+0x56d) [0x558cce754acd]<br /> 5: (ReplicatedBackend::do_push_reply(boost::intrusive_ptr&lt;OpRequest&gt;)+0xf2) [0x558cce759142]<br /> 6: (ReplicatedBackend::_handle_message(boost::intrusive_ptr&lt;OpRequest&gt;)+0x227) [0x558cce7594d7]<br /> 7: (PGBackend::handle_message(boost::intrusive_ptr&lt;OpRequest&gt;)+0x4a) [0x558cce5f532a]<br /> 8: (PrimaryLogPG::do_request(boost::intrusive_ptr&lt;OpRequest&gt;&, ThreadPool::TPHandle&)+0x5cb) [0x558cce59b47b]<br /> 9: (OSD::dequeue_op(boost::intrusive_ptr&lt;PG&gt;, boost::intrusive_ptr&lt;OpRequest&gt;, ThreadPool::TPHandle&)+0x2f9) [0x558cce43ab69]<br /> 10: (ceph::osd::scheduler::PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr&lt;PG&gt;&, ThreadPool::TPHandle&)+0x69) [0x558cce676609]<br /> 11: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x143a) [0x558cce45644a]<br /> 12: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5b6) [0x558ccea42206]<br /> 13: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x558ccea44d50]<br /> 14: (()+0x7ea5) [0x7fc5bda74ea5]<br /> 15: (clone()+0x6d) [0x7fc5bc9379fd]</code></pre>
<pre><code>0> 2022-03-06T21:08:01.673+0100 7fc59a875700 -1 *** Caught signal (Aborted) **<br /> in thread 7fc59a875700 thread_name:tp_osd_tp</code></pre>
<pre><code>ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)<br /> 1: (()+0xf630) [0x7fc5bda7c630]<br /> 2: (gsignal()+0x37) [0x7fc5bc86f3d7]<br /> 3: (abort()+0x148) [0x7fc5bc870ac8]<br /> 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x19b) [0x558cce356468]<br /> 5: (()+0x4df5e1) [0x558cce3565e1]<br /> 6: (PeeringState::on_peer_recover(pg_shard_t, hobject_t const&, eversion_t const&)+0x1a1) [0x558cce68a261]<br /> 7: (ReplicatedBackend::handle_push_reply(pg_shard_t, PushReplyOp const&, PushOp*)+0x56d) [0x558cce754acd]<br /> 8: (ReplicatedBackend::do_push_reply(boost::intrusive_ptr&lt;OpRequest&gt;)+0xf2) [0x558cce759142]<br /> 9: (ReplicatedBackend::_handle_message(boost::intrusive_ptr&lt;OpRequest&gt;)+0x227) [0x558cce7594d7]<br /> 10: (PGBackend::handle_message(boost::intrusive_ptr&lt;OpRequest&gt;)+0x4a) [0x558cce5f532a]<br /> 11: (PrimaryLogPG::do_request(boost::intrusive_ptr&lt;OpRequest&gt;&, ThreadPool::TPHandle&)+0x5cb) [0x558cce59b47b]<br /> 12: (OSD::dequeue_op(boost::intrusive_ptr&lt;PG&gt;, boost::intrusive_ptr&lt;OpRequest&gt;, ThreadPool::TPHandle&)+0x2f9) [0x558cce43ab69]<br /> 13: (ceph::osd::scheduler::PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr&lt;PG&gt;&, ThreadPool::TPHandle&)+0x69) [0x558cce676609]<br /> 14: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x143a) [0x558cce45644a]<br /> 15: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5b6) [0x558ccea42206]<br /> 16: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x558ccea44d50]<br /> 17: (()+0x7ea5) [0x7fc5bda74ea5]<br /> 18: (clone()+0x6d) [0x7fc5bc9379fd]<br /> NOTE: a copy of the executable, or `objdump -rdS &lt;executable&gt;` is needed to interpret this.</code></pre>
<p>--- logging levels ---<br /> 0/ 0 none<br /> 0/ 0 lockdep<br /> 0/ 0 context<br /> 0/ 0 crush<br /> 0/ 0 mds<br /> 1/ 5 mds_balancer<br /> 1/ 5 mds_locker<br /> 1/ 5 mds_log<br /> 1/ 5 mds_log_expire<br /> 1/ 5 mds_migrator<br /> 0/ 0 buffer<br /> 0/ 0 timer<br /> 0/ 0 filer<br /> 0/ 0 striper<br /> 0/ 0 objecter<br /> 0/ 0 rados<br /> 0/ 0 rbd<br /> 0/ 5 rbd_mirror<br /> 0/ 5 rbd_replay<br /> 0/ 5 rbd_rwl<br /> 0/ 0 journaler<br /> 0/ 0 objectcacher<br /> 0/ 5 immutable_obj_cache<br /> 0/ 0 client<br /> 0/ 0 osd<br /> 0/ 0 optracker<br /> 0/ 0 objclass<br /> 0/ 0 filestore<br /> 0/ 0 journal<br /> 0/ 0 ms<br /> 0/ 0 mon<br /> 0/ 0 monc<br /> 0/ 0 paxos<br /> 0/ 0 tp<br /> 0/ 0 auth<br /> 0/ 0 crypto<br /> 0/ 0 finisher<br /> 0/ 0 reserver<br /> 0/ 0 heartbeatmap<br /> 0/ 0 perfcounter<br /> 0/ 0 rgw<br /> 1/ 5 rgw_sync<br /> 0/ 0 civetweb<br /> 0/ 0 javaclient<br /> 0/ 0 asok<br /> 0/ 0 throttle<br /> 0/ 0 refs<br /> 0/ 0 compressor<br /> 0/ 0 bluestore<br /> 0/ 0 bluefs<br /> 0/ 0 bdev<br /> 0/ 0 kstore<br /> 0/ 0 rocksdb<br /> 0/ 0 leveldb<br /> 0/ 0 memdb<br /> 0/ 0 fuse<br /> 0/ 0 mgr<br /> 0/ 0 mgrc<br /> 0/ 0 dpdk<br /> 0/ 0 eventtrace<br /> 1/ 5 prioritycache<br /> 0/ 5 test<br /> -2/-2 (syslog threshold)<br /> -1/-1 (stderr threshold)<br />--- pthread ID / name mapping for recent threads ---<br /> 7fc59a875700 / tp_osd_tp<br /> 7fc5ad09a700 / bstore_mempool<br /> max_recent 10000<br /> max_new 1000<br /> log_file /var/log/ceph/ceph-osd.67.log<br />--- end dump of recent events ---</p>
RADOS - Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
https://tracker.ceph.com/issues/47299?journal_id=211693
2022-03-09T19:59:05Z
Radoslaw Zarzynski
rzarzyns@redhat.com
<ul></ul><p>If this is easily reproducible, could you please provide us with logs of the replicas for the failing PG? They can be identified with <code>ceph pg dump</code>.<br />Alternatively, a log from the primary with increased debug levels (<code>debug_ms = 1</code>, <code>debug_osd = 30</code>) would also be helpful.</p>
RADOS - Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
https://tracker.ceph.com/issues/47299?journal_id=212134
2022-03-17T00:16:08Z
Telemetry Bot
<ul><li><strong>Crash signature (v1)</strong> updated (<a title="View differences" href="/journals/212134/diff?detail_id=223292">diff</a>)</li><li><strong>Crash signature (v2)</strong> updated (<a title="View differences" href="/journals/212134/diff?detail_id=223293">diff</a>)</li><li><strong>Affected Versions</strong> <i>v15.2.13, v15.2.6, v15.2.8</i> added</li></ul><p><a class="external" href="http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f582692869a94580abf07e6695f97d0a75b4174a2d1727abee7acfb06a234e32">http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f582692869a94580abf07e6695f97d0a75b4174a2d1727abee7acfb06a234e32</a></p>
<p>Assert condition: p->second.need <= v || p->second.is_delete()<br />Assert function: void pg_missing_set<TrackChanges>::got(const hobject_t&, eversion_t) [with bool TrackChanges = false]</p>
<p>Sanitized backtrace:<br /><pre> ReplicatedBackend::handle_push_reply(pg_shard_t, PushReplyOp const&, PushOp*)
ReplicatedBackend::do_push_reply(boost::intrusive_ptr<OpRequest>)
ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)
PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)
PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)
OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)
ceph::osd::scheduler::PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)
OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)
ShardedThreadPool::shardedthreadpool_worker(unsigned int)
ShardedThreadPool::WorkThreadSharded::entry()
clone()
</pre><br />Crash dump sample:<br /><pre>{
"archived": "2021-08-23 17:18:32.303571",
"assert_condition": "p->second.need <= v || p->second.is_delete()",
"assert_file": "osd/osd_types.h",
"assert_func": "void pg_missing_set<TrackChanges>::got(const hobject_t&, eversion_t) [with bool TrackChanges = false]",
"assert_line": 4774,
"assert_msg": "osd/osd_types.h: In function 'void pg_missing_set<TrackChanges>::got(const hobject_t&, eversion_t) [with bool TrackChanges = false]' thread 7f72e348b700 time 2021-08-23T09:50:07.539096-0700\nosd/osd_types.h: 4774: FAILED ceph_assert(p->second.need <= v || p->second.is_delete())",
"assert_thread_name": "tp_osd_tp",
"backtrace": [
"(()+0x12730) [0x7f72ff25a730]",
"(gsignal()+0x10b) [0x7f72fed1f7bb]",
"(abort()+0x121) [0x7f72fed0a535]",
"(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a5) [0x559eb6f49b09]",
"(()+0x9c5c90) [0x559eb6f49c90]",
"(()+0xcc0c95) [0x559eb7244c95]",
"(ReplicatedBackend::handle_push_reply(pg_shard_t, PushReplyOp const&, PushOp*)+0x575) [0x559eb7319a95]",
"(ReplicatedBackend::do_push_reply(boost::intrusive_ptr<OpRequest>)+0xfa) [0x559eb731dc9a]",
"(ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x237) [0x559eb731e037]",
"(PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x57) [0x559eb71b0e97]",
"(PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x62f) [0x559eb7154a5f]",
"(OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x325) [0x559eb6fed5c5]",
"(ceph::osd::scheduler::PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x64) [0x559eb7231414]",
"(OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x12fa) [0x559eb700a00a]",
"(ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5b4) [0x559eb76108b4]",
"(ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x559eb7613330]",
"(()+0x7fa3) [0x7f72ff24ffa3]",
"(clone()+0x3f) [0x7f72fede14cf]"
],
"ceph_version": "15.2.13",
"crash_id": "2021-08-23T16:50:07.609190Z_40d40626-04c7-4f75-b10d-404b107288e8",
"entity_name": "osd.f44e26114dd19dd3c396a7933df86e932f20de67",
"os_id": "10",
"os_name": "Debian GNU/Linux 10 (buster)",
"os_version": "10 (buster)",
"os_version_id": "10",
"process_name": "ceph-osd",
"stack_sig": "ea5d5202cbfd479dad466304faaf11b74609c65a810668ae789b64ec19b8be0d",
"timestamp": "2021-08-23T16:50:07.609190Z",
"utsname_machine": "x86_64",
"utsname_release": "5.11.21-1-pve",
"utsname_sysname": "Linux",
"utsname_version": "#1 SMP PVE 5.11.21-1~bpo10 (Wed, 02 Jun 2021 11:34:45 +0200)"
}</pre></p>
RADOS - Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
https://tracker.ceph.com/issues/47299?journal_id=212271
2022-03-17T15:52:23Z
Telemetry Bot
<ul><li><strong>Crash signature (v1)</strong> updated (<a title="View differences" href="/journals/212271/diff?detail_id=223578">diff</a>)</li><li><strong>Crash signature (v2)</strong> updated (<a title="View differences" href="/journals/212271/diff?detail_id=223579">diff</a>)</li></ul>
RADOS - Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
https://tracker.ceph.com/issues/47299?journal_id=214016
2022-04-04T17:48:00Z
Radoslaw Zarzynski
rzarzyns@redhat.com
<ul><li><strong>Status</strong> changed from <i>New</i> to <i>Need More Info</i></li><li><strong>Crash signature (v1)</strong> updated (<a title="View differences" href="/journals/214016/diff?detail_id=226598">diff</a>)</li></ul><p>Still new more info. See the comment <a class="issue tracker-3 status-5 priority-4 priority-default closed" title="Support: Document differences from S3 (Closed)" href="https://tracker.ceph.com/issues/8">#8</a>.</p>
RADOS - Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
https://tracker.ceph.com/issues/47299?journal_id=216066
2022-05-12T09:33:47Z
Tobias Urdin
<ul></ul><pre>
"assert_condition": "p->second.need <= v || p->second.is_delete()",
"assert_file": "/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.2.7/rpm/el8/BUILD/ceph-16.2.7/src/osd/osd_types.h",
"assert_func": "void pg_missing_set<TrackChanges>::got(const hobject_t&, eversion_t) [with bool TrackChanges = false]",
"assert_line": 4910,
"assert_msg": "/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.2.7/rpm/el8/BUILD/ceph-16.2.7/src/osd/osd_types.h: In function 'void pg_missing_set<TrackChanges>::got(const hobject_t&, eversion_t) [with bool TrackChanges = false]' thread 7f41e33ba700 time 2022-05-12T11:21:23.175952+0200\n/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.2.7/rpm/el8/BUILD/ceph-16.2.7/src/osd/osd_types.h: 4910: FAILED ceph_assert(p->second.need <= v || p->second.is_delete())\n",
"assert_thread_name": "tp_osd_tp",
"backtrace": [
"/lib64/libpthread.so.0(+0x12ce0) [0x7f4207b5dce0]",
"gsignal()",
"abort()",
"(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x5637d1a4dbcf]",
"/usr/bin/ceph-osd(+0x56ad98) [0x5637d1a4dd98]",
"(PeeringState::on_peer_recover(pg_shard_t, hobject_t const&, eversion_t const&)+0x1b4) [0x5637d1dcc164]",
"(ReplicatedBackend::handle_push_reply(pg_shard_t, PushReplyOp const&, PushOp*)+0x585) [0x5637d1eef615]",
"(ReplicatedBackend::do_push_reply(boost::intrusive_ptr<OpRequest>)+0x101) [0x5637d1ef22a1]",
"(ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x237) [0x5637d1ef6e17]",
"(PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x52) [0x5637d1d297d2]",
"(PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x5de) [0x5637d1ccc95e]",
"(OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x309) [0x5637d1b558e9]",
"(ceph::osd::scheduler::PGRecoveryMsg::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x68) [0x5637d1db3248]",
"(OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xc28) [0x5637d1b75d48]",
"(ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5c4) [0x5637d21e75b4]",
"(ShardedThreadPool::WorkThreadSharded::entry()+0x14) [0x5637d21ea254]",
"/lib64/libpthread.so.0(+0x81cf) [0x7f4207b531cf]",
"clone()"
],
</pre>
<p>One OSD crashed with this while upgrading from Octopus to Pacific.</p>
RADOS - Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
https://tracker.ceph.com/issues/47299?journal_id=216113
2022-05-12T18:02:57Z
Radoslaw Zarzynski
rzarzyns@redhat.com
<ul></ul><p>Hello! A note from a bug scrub:</p>
<p>This issue looks like it is caused by particular data stored in an OSD, which makes it very hard to replicate locally, so logs (or a coredump) would be really helpful.</p>
<p>To improve debugging in the future we could also split the assertion's compound condition into dedicated asserts. However, this won't help with this particular case.</p>