Activity
From 04/25/2023 to 05/24/2023
05/24/2023
- 08:42 PM Bug #61228 (Resolved): Tests failing with slow scrubs with new mClock default profile
- 08:41 PM Backport #61232 (Resolved): reef: Tests failing with slow scrubs with new mClock default profile
- 08:41 PM Backport #61231 (Resolved): quincy: Tests failing with slow scrubs with new mClock default profile
- 08:06 PM Bug #61226 (Duplicate): event duration is overflow
- 07:35 PM Bug #61388 (Duplicate): osd/TrackedOp: TrackedOp event order error
- Closing this in favor of the fresh duplicate. Apologies, we missed the initial fix!
- 04:16 PM Bug #61388: osd/TrackedOp: TrackedOp event order error
- duplicate issue: https://tracker.ceph.com/issues/58012
- 10:55 AM Bug #61388 (Pending Backport): osd/TrackedOp: TrackedOp event order error
- 01:53 AM Bug #61388 (Duplicate): osd/TrackedOp: TrackedOp event order error
- The header_read time is recv_stamp, and the throttled time is throttle_stamp. The throttled event is in front of the header_read event cur...
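To make the failure mode concrete, here is a minimal, self-contained sketch (plain std::chrono, not the actual TrackedOp code) of what happens when the throttled event (throttle_stamp) is recorded ahead of header_read (recv_stamp): an unsigned per-event duration wraps to a huge value, which also matches the "event duration is overflow" duplicate.
<pre>
// Illustration only: out-of-order event timestamps turn an unsigned
// duration into a wrapped-around (overflowed) value.
#include <chrono>
#include <cstdint>
#include <iostream>

int main() {
  using namespace std::chrono;
  auto recv_stamp = steady_clock::now();               // header_read time
  auto throttle_stamp = recv_stamp + milliseconds(5);  // throttled happens later

  // Events recorded in the wrong order: throttled first, header_read second.
  auto first = throttle_stamp;
  auto second = recv_stamp;

  // "Duration" of the second event relative to the first one is negative...
  auto diff = duration_cast<nanoseconds>(second - first).count();  // about -5 ms
  // ...and wraps to a huge value once treated as unsigned.
  uint64_t dur_ns = static_cast<uint64_t>(diff);
  std::cout << "unsigned duration: " << dur_ns << " ns\n";

  // Keeping events sorted by their timestamps (or using signed durations)
  // avoids the overflow.
  std::cout << "signed duration: " << diff << " ns\n";
  return 0;
}
</pre>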
- 05:58 PM Backport #61404 (Resolved): reef: Scrubs are too slow with new mClock profile changes
- 05:41 PM Backport #61404 (Resolved): reef: Scrubs are too slow with new mClock profile changes
- https://github.com/ceph/ceph/pull/51712
- 05:55 PM Backport #61403 (In Progress): quincy: Scrubs are too slow with new mClock profile changes
- 05:41 PM Backport #61403 (Resolved): quincy: Scrubs are too slow with new mClock profile changes
- https://github.com/ceph/ceph/pull/51728
- 05:51 PM Backport #61345 (Resolved): reef: WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpda...
- 05:51 PM Backport #61345: reef: WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
- https://github.com/ceph/ceph/pull/51683
- 05:39 PM Bug #61313 (Pending Backport): Scrubs are too slow with new mClock profile changes
- 04:13 PM Bug #49888: rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum ...
- /a/yuriw-2023-05-23_22:39:17-rados-wip-yuri6-testing-2023-05-23-0757-reef-distro-default-smithi/7284941
- 03:47 PM Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- /a/yuriw-2023-05-24_14:33:21-rados-wip-yuri6-testing-2023-05-23-0757-reef-distro-default-smithi/7285192
- 02:23 PM Bug #57650: mon-stretch: reweighting an osd to a big number, then back to original causes uneven ...
- I think this has to do with how we are subtracting/adding weights to each crush bucket; a better way is to alwa...
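A toy sketch of the alternative this comment seems to point at, i.e. recomputing each bucket's weight from its children instead of applying incremental deltas on every reweight; the Bucket type below is hypothetical and is not the real CRUSH/CrushWrapper structure.
<pre>
// Hypothetical sketch: derive bucket weights from the leaves so repeated
// reweights (e.g. 0.09 -> 0.70 -> 0.09) cannot leave stale deltas behind.
#include <numeric>
#include <vector>

struct Bucket {
  std::vector<Bucket*> children;  // empty for leaf items (OSDs)
  double weight = 0.0;            // leaf weight, or derived for buckets
};

double recompute_weight(Bucket& b) {
  if (b.children.empty())
    return b.weight;              // leaf: keep its own weight
  b.weight = std::accumulate(
      b.children.begin(), b.children.end(), 0.0,
      [](double acc, Bucket* c) { return acc + recompute_weight(*c); });
  return b.weight;
}

int main() {
  Bucket osd0, osd1, host;
  osd0.weight = 0.09;
  osd1.weight = 0.70;
  host.children = {&osd0, &osd1};
  recompute_weight(host);         // host.weight is derived, never patched
  return 0;
}
</pre>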
- 12:45 PM Bug #57310 (Fix Under Review): StriperTest: The futex facility returned an unexpected error code
- 12:22 PM Bug #57310 (In Progress): StriperTest: The futex facility returned an unexpected error code
- This looks like we are sending a notification to the semaphore after it was destroyed. We are missing some waits if we ...
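A generic C++ sketch of that pattern (not the actual StriperTest code): the waiter has to block until every outstanding aio callback has posted before the semaphore/notification state may be destroyed.
<pre>
// Illustration of the "missing wait": tear the notification state down only
// after every outstanding completion has reported in.
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

struct Waiter {
  std::mutex m;
  std::condition_variable cv;
  int outstanding = 0;

  void start_one() { std::lock_guard<std::mutex> l(m); ++outstanding; }
  void complete_one() {                 // called from the aio callback
    std::lock_guard<std::mutex> l(m);
    --outstanding;
    cv.notify_all();
  }
  void wait_all() {                     // must run before destroying Waiter
    std::unique_lock<std::mutex> l(m);
    cv.wait(l, [this] { return outstanding == 0; });
  }
};

int main() {
  Waiter w;
  std::vector<std::thread> callbacks;
  for (int i = 0; i < 4; ++i) {
    w.start_one();
    callbacks.emplace_back([&w] { w.complete_one(); });
  }
  w.wait_all();                         // without this, w could be destroyed
  for (auto& t : callbacks) t.join();   // while a callback still notifies it
  return 0;
}
</pre>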
- 10:59 AM Bug #59531: quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.0...
- quincy:
https://pulpito.ceph.com/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-05-18-0815-quincy-distro-defau...
- 10:58 AM Bug #52624: qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- quincy:
https://pulpito.ceph.com/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-05-18-0815-quincy-distro-defau...
- 10:57 AM Backport #61396 (New): quincy: osd/TrackedOp: TrackedOp event order error
- 10:57 AM Backport #61395 (New): reef: osd/TrackedOp: TrackedOp event order error
- 06:57 AM Documentation #58590: osd_op_thread_suicide_timeout is not documented
- > options that are marked `advanced` don't necessarily warrant detailed documentation
Yes, that may be true. But th...
- 01:07 AM Documentation #58590: osd_op_thread_suicide_timeout is not documented
- My sense is that options that are marked `advanced` don't necessarily warrant detailed documentation. There are near...
- 05:06 AM Bug #24990 (Resolved): api_watch_notify: LibRadosWatchNotify.Watch3Timeout failed
- 05:06 AM Backport #53166 (Resolved): pacific: api_watch_notify: LibRadosWatchNotify.Watch3Timeout failed
05/23/2023
- 10:59 PM Backport #61336: reef: Able to modify the mclock reservation, weight and limit parameters when bu...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51663
merged
- 10:57 PM Backport #61303: reef: src/osd/PrimaryLogPG.cc: 4284: ceph_abort_msg("out of order op")
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51666
merged
- 10:38 PM Bug #61386 (Pending Backport): TEST_recovery_scrub_2: TEST FAILED WITH 1 ERRORS
- /a/lflores-2023-05-23_18:17:13-rados-wip-yuri-testing-2023-05-22-0845-reef-distro-default-smithi/7284160...
- 10:17 PM Bug #61385 (New): TEST_dump_scrub_schedule fails from "key is query_active: negation:0 # expected...
- /a/yuriw-2023-05-22_23:22:00-rados-wip-yuri-testing-2023-05-22-0845-reef-distro-default-smithi/7282843...
- 09:59 PM Bug #57650: mon-stretch: reweighting an osd to a big number, then back to original causes uneven ...
- This is somehow only reproducible by reweighting an OSD from 0.0900 to 0.7000 and back to 0.0900.
This PR https://gi...
- 07:37 PM Backport #53166: pacific: api_watch_notify: LibRadosWatchNotify.Watch3Timeout failed
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51261
merged
- 03:43 PM Bug #47273 (Resolved): ceph report missing osdmap_clean_epochs if answered by peon
- 03:42 PM Backport #56604 (Resolved): pacific: ceph report missing osdmap_clean_epochs if answered by peon
- 03:26 PM Backport #56604: pacific: ceph report missing osdmap_clean_epochs if answered by peon
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51258
merged
- 03:28 PM Backport #59628: pacific: rados/test.sh hang and pkilled (LibRadosWatchNotifyEC.WatchNotify)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51341
merged
- 03:21 PM Backport #61150: quincy: osd/PeeringState.cc: ceph_abort_msg("past_interval start interval mismat...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51512
merged
- 08:57 AM Bug #61358 (New): qa: osd - cluster [WRN] 1 slow requests found in cluster log
- Description: fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distr...
- 08:51 AM Bug #52624: qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
- 06:52 AM Bug #52624: qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
- 07:17 AM Bug #61226: event duration is overflow
- PR is here: https://github.com/ceph/ceph/pull/51545
- 06:34 AM Feature #43910: Utilize new Linux kernel v5.6 prctl PR_SET_IO_FLUSHER option
- Rook gives a warning to not use XFS with hyperconverged settings (see https://github.com/rook/rook/blob/v1.11.6/Docum...
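For reference, the kernel interface this feature tracker is about boils down to a single prctl call on Linux 5.6+; a minimal standalone example (this is just the raw syscall, not how Ceph would wire it in):
<pre>
// Mark the calling process/thread as an I/O flusher (Linux >= 5.6).
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <sys/prctl.h>

#ifndef PR_SET_IO_FLUSHER
#define PR_SET_IO_FLUSHER 57  // from <linux/prctl.h>, added in Linux 5.6
#endif

int main() {
  if (prctl(PR_SET_IO_FLUSHER, 1, 0, 0, 0) != 0) {
    // EINVAL on kernels older than 5.6, EPERM without CAP_SYS_RESOURCE.
    std::fprintf(stderr, "PR_SET_IO_FLUSHER failed: %s\n", std::strerror(errno));
    return 1;
  }
  std::puts("process marked as an IO flusher");
  return 0;
}
</pre>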
- 02:28 AM Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- Still specific to Jammy.
- 01:59 AM Bug #61350 (Rejected): In the readonly mode of cache tier, the object isn't be promoted, which is...
- Steps to Reproduce:
1. Create a cache tier for the cephfs data pool.
2. Copy a file to the mounted cephfs directory.
3. ...
05/22/2023
- 11:25 PM Bug #58894: [pg-autoscaler][mgr] does not throw warn to increase PG count on pools with autoscale...
- https://github.com/ceph/ceph/pull/50693 merged
- 09:53 PM Bug #61349: ObjectWriteOperation::mtime2() works with IoCtx::operate() but not aio_operate()
- for background, https://github.com/ceph/ceph/pull/50206 changes some of rgw's librados operations to aio_operate(), a...
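As a minimal illustration of the two submission paths being compared, here is a sketch against the public librados C++ API (pool and object names are placeholders, error handling is trimmed); per this tracker, only the synchronous path ends up honoring mtime2():
<pre>
#include <ctime>
#include <rados/librados.hpp>

int main() {
  librados::Rados cluster;
  cluster.init(nullptr);             // default client.admin identity
  cluster.conf_read_file(nullptr);   // default ceph.conf search path
  if (cluster.connect() != 0) return 1;

  librados::IoCtx ioctx;
  if (cluster.ioctx_create("mypool", ioctx) != 0) return 1;

  struct timespec ts = {1577836800, 0};  // the mtime we want recorded
  librados::bufferlist bl;
  bl.append("payload");

  // Path 1: synchronous submit -- the mtime2() hint is applied.
  librados::ObjectWriteOperation op1;
  op1.write_full(bl);
  op1.mtime2(&ts);
  ioctx.operate("obj", &op1);

  // Path 2: asynchronous submit -- per this tracker, the hint is lost.
  librados::ObjectWriteOperation op2;
  op2.write_full(bl);
  op2.mtime2(&ts);
  librados::AioCompletion *c = librados::Rados::aio_create_completion();
  ioctx.aio_operate("obj", c, &op2);
  c->wait_for_complete();
  c->release();

  cluster.shutdown();
  return 0;
}
</pre>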
- 09:51 PM Bug #61349 (Fix Under Review): ObjectWriteOperation::mtime2() works with IoCtx::operate() but not...
- 09:36 PM Bug #61349 (Resolved): ObjectWriteOperation::mtime2() works with IoCtx::operate() but not aio_ope...
- @librados::IoCtxImpl::operate()@ takes an optional @ceph::real_time*@ and uses it when given
but @librados::IoCtxI...
- 09:04 PM Bug #59599: osd: cls_refcount unit test failures during upgrade sequence
- /a/yuriw-2023-05-22_15:26:04-rados-wip-yuri10-testing-2023-05-18-0815-quincy-distro-default-smithi/7282680
- 07:40 PM Backport #61232: reef: Tests failing with slow scrubs with new mClock default profile
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51569
merged
- 07:32 PM Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- /a/lflores-2023-05-22_16:08:13-rados-wip-yuri6-testing-2023-05-19-1351-reef-distro-default-smithi/7282703
Was alre...
- 06:17 PM Bug #61313 (Fix Under Review): Scrubs are too slow with new mClock profile changes
- 06:46 AM Bug #61313 (Resolved): Scrubs are too slow with new mClock profile changes
- Scrubs are being reported to be very slow in multiple teuthology tests causing them to fail.
An example: https://pu...
- 04:50 PM Backport #61345 (Resolved): reef: WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpda...
- 04:48 PM Bug #59049 (Pending Backport): WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate ...
- 12:21 PM Backport #59538: pacific: osd/scrub: verify SnapMapper consistency not backported
- Hi @Ronen
Is there something we can do to prepare or help with the backport?
- 11:57 AM Backport #61303 (In Progress): reef: src/osd/PrimaryLogPG.cc: 4284: ceph_abort_msg("out of order ...
- 11:40 AM Backport #61335 (In Progress): quincy: Able to modify the mclock reservation, weight and limit pa...
- 11:16 AM Backport #61335 (Resolved): quincy: Able to modify the mclock reservation, weight and limit param...
- https://github.com/ceph/ceph/pull/51664
- 11:37 AM Backport #61336 (In Progress): reef: Able to modify the mclock reservation, weight and limit para...
- 11:16 AM Backport #61336 (Resolved): reef: Able to modify the mclock reservation, weight and limit paramet...
- https://github.com/ceph/ceph/pull/51663
- 11:08 AM Bug #61155 (Pending Backport): Able to modify the mclock reservation, weight and limit parameters...
05/19/2023
- 09:40 PM Bug #55809: "Leak_IndirectlyLost" valgrind report on mon.c
- /a/yuriw-2023-05-10_14:47:51-rados-wip-yuri5-testing-2023-05-09-1324-pacific-distro-default-smithi/7269818/smithi043/...
- 06:27 PM Backport #61303 (In Progress): reef: src/osd/PrimaryLogPG.cc: 4284: ceph_abort_msg("out of order ...
- https://github.com/ceph/ceph/pull/51666
- 06:24 PM Bug #58940 (Pending Backport): src/osd/PrimaryLogPG.cc: 4284: ceph_abort_msg("out of order op")
- 09:17 AM Bug #54682: crash: void ReplicatedBackend::_do_push(OpRequestRef): abort
- I encountered the same problem in v15.2.8,
-7> 2023-05-19T15:16:16.593+0800 7f0c1beed700 -1 /SDS-CICD/rpmbuild...
05/18/2023
- 09:47 PM Bug #46877: mon_clock_skew_check: expected MON_CLOCK_SKEW but got none
- /a/yuriw-2023-05-18_14:38:49-rados-wip-yuri-testing-2023-05-10-0917-distro-default-smithi/7277648
- 09:25 PM Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- Laura Flores wrote:
> /a/yuriw-2023-05-11_15:01:38-rados-wip-yuri8-testing-2023-05-10-1402-distro-default-smithi/727...
- 06:11 PM Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- /a/yuriw-2023-05-11_15:01:38-rados-wip-yuri8-testing-2023-05-10-1402-distro-default-smithi/7271184
So far, no Reef...
- 06:36 PM Bug #59333: PgScrubber: timeout on reserving replicas
- /a/yuriw-2023-05-11_15:01:38-rados-wip-yuri8-testing-2023-05-10-1402-distro-default-smithi/7271192
- 01:42 PM Bug #52316 (Fix Under Review): qa/tasks/mon_thrash.py: _do_thrash AssertionError len(s['quorum'])...
- 01:33 PM Bug #52316 (In Progress): qa/tasks/mon_thrash.py: _do_thrash AssertionError len(s['quorum']) == l...
- Since we are failing at the first assert of the check for quorum, and we had a few iterations over the thrashing, it loo...
- 12:41 PM Backport #61232 (In Progress): reef: Tests failing with slow scrubs with new mClock default profile
- 07:29 AM Backport #61232 (Resolved): reef: Tests failing with slow scrubs with new mClock default profile
- https://github.com/ceph/ceph/pull/51569
- 12:36 PM Backport #61231 (In Progress): quincy: Tests failing with slow scrubs with new mClock default pro...
- 07:29 AM Backport #61231 (Resolved): quincy: Tests failing with slow scrubs with new mClock default profile
- https://github.com/ceph/ceph/pull/51568
- 07:28 AM Bug #61228 (Pending Backport): Tests failing with slow scrubs with new mClock default profile
- 07:26 AM Bug #61228 (Fix Under Review): Tests failing with slow scrubs with new mClock default profile
- 04:11 AM Bug #61228 (Resolved): Tests failing with slow scrubs with new mClock default profile
- After the changes made in https://github.com/ceph/ceph/pull/49975, teuthology tests are failing due to slow scrubs wi...
- 02:32 AM Bug #61226 (Duplicate): event duration is overflow
- 02:32 AM Bug #61226 (Duplicate): event duration is overflow
- ...
05/17/2023
- 10:55 PM Bug #59656: pg_upmap_primary timeout
- Hello Flaura, thanks for your answer.
Indeed your example explains a lot of things, I will try to understand more ...
- 09:26 PM Bug #59656: pg_upmap_primary timeout
- Hi Kevin,
Yes, the read balancer does take primary affinity into account.
I will walk through an example on a v...
- 09:43 PM Bug #51729: Upmap verification fails for multi-level crush rule
- Hi Chris,
If possible, can you try this change with your crush rule?
https://github.com/ceph/ceph/compare/main......
- 07:51 PM Bug #49888: rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum ...
- /a/yuriw-2023-05-16_23:44:06-rados-wip-yuri10-testing-2023-05-16-1243-distro-default-smithi/7276255
Hey Nitzan, ye...
- 07:49 AM Bug #49888 (Fix Under Review): rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTrie...
- 05:39 AM Bug #49888 (In Progress): rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: re...
- If my above comment is correct, teuthology also has that incorrect configuration as a default; in placeholder.py we th...
- 05:32 AM Bug #49888: rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum ...
- Looks like all the failures are related to thrash-eio. I checked all the archives that we have (that are still out there) and ...
- 07:43 AM Bug #52624: qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- pacific:
https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-de...
- 07:42 AM Bug #52624: qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- pacific - https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-de...
- 07:24 AM Bug #59779 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
- 07:24 AM Bug #59778 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
- 07:24 AM Bug #59777 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
05/16/2023
- 07:10 PM Bug #59656: pg_upmap_primary timeout
- Hello Flaura (my shortcut for Flores + Laura), I experienced another malfunction (or not ?) of the read balancer.
Ba...
- 04:28 PM Bug #52624: qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm...
- 11:17 AM Bug #52624: qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- reef:
https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-default-s... - 04:08 PM Bug #49962: 'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown ...
- reef -
https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-s...
- 11:18 AM Bug #49962: 'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown ...
- Seen again reef qa run:
https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-ree...
- 10:33 AM Backport #61150 (In Progress): quincy: osd/PeeringState.cc: ceph_abort_msg("past_interval start i...
- 09:52 AM Backport #61149 (In Progress): pacific: osd/PeeringState.cc: ceph_abort_msg("past_interval start ...
05/15/2023
- 10:41 PM Bug #59757 (Duplicate): crash: const entity_addrvec_t& MonMap::get_addrs(unsigned int) const: ass...
- Changing the status to Duplicate, since it's an extension of the original issue, which is Resolved (https://tracker.c...
- 10:01 PM Bug #59757 (Resolved): crash: const entity_addrvec_t& MonMap::get_addrs(unsigned int) const: asse...
- 10:01 PM Bug #59757 (Resolved): crash: const entity_addrvec_t& MonMap::get_addrs(unsigned int) const: asse...
- By looking at http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=43c985d46c6c...
- 02:22 AM Bug #59757 (Duplicate): crash: const entity_addrvec_t& MonMap::get_addrs(unsigned int) const: ass...
- 02:22 AM Bug #59757 (Duplicate): crash: const entity_addrvec_t& MonMap::get_addrs(unsigned int) const: ass...
*New crash events were reported via Telemetry with newer versions (['17.2.1', '17.2.3', '17.2.4', '17.2.5']) than e...
- 10:11 PM Bug #61177 (Resolved): pacific:mon:ceph_assert(m < ranks.size()) `different code path than tracke...
- Same problem with https://tracker.ceph.com/issues/50089, but it is a different code path.
We opened a new tracker ...
- 10:10 PM Backport #61176 (Resolved): quincy:mon:ceph_assert(m < ranks.size()) `different code path than tr...
- Same problem with https://tracker.ceph.com/issues/50089, but it is a different code path.
We opened a new tracker ...
- 02:26 PM Bug #58155 (Resolved): mon:ceph_assert(m < ranks.size()) `different code path than tracker 50089`
- 01:33 PM Bug #61155 (Fix Under Review): Able to modify the mclock reservation, weight and limit parameters...
- 01:09 PM Bug #61155 (Resolved): Able to modify the mclock reservation, weight and limit parameters when bu...
- This is a follow-up tracker to https://tracker.ceph.com/issues/57533.
With the fix for the above tracker, it would... - 12:55 PM Backport #59455: pacific: Monitors do not permit OSD to join after upgrading to Quincy
- https://github.com/ceph/ceph/pull/51382
- 08:18 AM Backport #61150 (Resolved): quincy: osd/PeeringState.cc: ceph_abort_msg("past_interval start inte...
- https://github.com/ceph/ceph/pull/51512
- 08:18 AM Backport #61149 (Resolved): pacific: osd/PeeringState.cc: ceph_abort_msg("past_interval start int...
- https://github.com/ceph/ceph/pull/51510
- 08:10 AM Bug #49689 (Pending Backport): osd/PeeringState.cc: ceph_abort_msg("past_interval start interval ...
- There are multiple Telemetry reports of this issue in P and Q; we should consider backporting this patch.
- 03:00 AM Bug #61143 (New): crash: MOSDRepOp::encode_payload(unsigned long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5b6a45a1fcd86db7edc24b7c...- 03:00 AM Bug #61140 (Pending Backport): crash: int OSD::shutdown(): assert(end_time - start_time_func < cc...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8a0ffcca7ae094e79a916d2f...- 03:00 AM Bug #61139 (New): crash: void ConnectionTracker::notify_rank_removed(int, int): assert(rank == ne...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ba4154b83e9e098001b35403...- 03:00 AM Bug #61138 (New): crash: MOSDRepOp::encode_payload(unsigned long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=152c9f3a4d3b4c7cf434838d...- 02:57 AM Bug #61017 (New): crash: MOSDRepOp::encode_payload(unsigned long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6a71acab6bdd7cba7a603f3b...- 02:57 AM Bug #61013 (New): crash: CRYPTO_gcm128_init()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=49e9a185c491033b443080fe...- 02:57 AM Bug #61012 (New): crash: CRYPTO_gcm128_setiv()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d1db8afd94755d05d5b3daa6...- 02:57 AM Bug #60999 (New): crash: std::_Rb_tree_increment(std::_Rb_tree_node_base*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=531fe0b9afcdd2a0e060c181...- 02:56 AM Bug #60997 (New): crash: Monitor::scrub()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d9ccf63487dba00b2e3f1acb...- 02:56 AM Bug #60996 (New): crash: ceph::buffer::list::iterator_impl<true>::copy(unsigned int, char*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=944a48ed8a413006a8b9ef72...- 02:56 AM Bug #60995 (New): crash: __pthread_mutex_lock()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fe1683eaf3d07037a25c4dca...- 02:56 AM Bug #60970 (New): crash: PeeringState::Crashed::Crashed(boost::statechart::state<PeeringState::Cr...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b65f8df71b306ffc0e25c4c4...- 02:56 AM Bug #60968 (New): crash: MMgrBeacon::~MMgrBeacon()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=18af8169429a3c4956d8db75...- 02:56 AM Bug #60967 (New): crash: virtual int RocksDBStore::get(const string&, const string&, ceph::buffer...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fc398263b59d4aeca2e3a05a...- 02:56 AM Bug #60963 (New): crash: rocksdb_cache::BinnedLRUHandleTable::FindPointer(rocksdb::Slice const&, ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=97ee502b2d078226328d3dad...- 02:56 AM Bug #60962 (New): crash: void ConnectionTracker::notify_rank_removed(int, int): assert(rank == ne...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f5d70b14276df3dc61b28cca...- 02:55 AM Bug #60951 (New): crash: ceph::buffer::list::iterator_impl<true>::copy(unsigned int, char*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=59e70a6cb6850ac450c8e655...- 02:55 AM Bug #60947 (New): crash: crimson::dmclock::RequestTag::RequestTag(crimson::dmclock::RequestTag co...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b9fcb5d9aa69a120d0a9ada0...- 02:55 AM Bug #60939 (New): crash: MOSDRepOp::encode_payload(unsigned long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2169709a2f327bcd67cdf39d...- 02:49 AM Bug #60711 (New): crash: __cxa_rethrow()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ee97c7bf0a55f307ecdbd082...- 02:49 AM Bug #60710 (New): crash: std::_Rb_tree_increment(std::_Rb_tree_node_base*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=dd42aeb75f4bf10fe9c64cf8...- 02:49 AM Bug #60709 (New): crash: strlen()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4f446fc979e48beca2a4c805...- 02:49 AM Bug #60700 (New): crash: _IO_vfprintf()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c109e2b38f33a68f566185f5...- 02:49 AM Bug #60699 (New): crash: void ConnectionTracker::notify_rank_removed(int, int): assert(rank == ne...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=bd2ac1bb41c8f2f014f90acd...- 02:49 AM Bug #60695 (New): crash: PrimaryLogPG::cancel_log_updates()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=157ec0dd40bd3ea98c367d77...- 02:49 AM Bug #60691 (New): crash: void MonitorDBStore::_open(const std::string&): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2c9dd285372c98e0685d184c...- 02:49 AM Bug #60684 (New): crash: void Paxos::read_and_prepare_transactions(MonitorDBStore::TransactionRef...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=782e2455147d68f29a504e95...- 02:49 AM Bug #60683 (New): crash: PrimaryLogPG::get_rw_locks(bool, PrimaryLogPG::OpContext*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a8dfe36d86da9e3e2c310a4f...- 02:49 AM Bug #60682 (New): crash: tc_new()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=eefa0dbb3a2a42c612819b8a...- 02:49 AM Bug #60681 (New): crash: rocksdb::GetVarint32Ptr(char const*, char const*, unsigned int*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f906002477b5539bc86fd32d...- 02:49 AM Bug #60680 (New): crash: pthread_rwlock_wrlock()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ee88f6d803dbf2fce8e49b17...- 02:49 AM Bug #60671 (New): crash: void ObjectContext::stop_block(): assert(blocked)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3e2e8f22f22f76bde155e39c...- 02:48 AM Bug #60663 (New): crash: Context::complete(int)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c2ee03f7cd01488f33943a58...- 02:48 AM Bug #60662 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a5992dd14646bf786ab192c8...- 02:48 AM Bug #60661 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b2ba72e68a69b5c6151eb37c...- 02:48 AM Bug #60657 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=86ebe2d42b2bca87e0a5c0dd...- 02:48 AM Bug #60650 (New): crash: std::_Rb_tree_increment(std::_Rb_tree_node_base*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=00be9b9c45774c50cc82222e...- 02:48 AM Bug #60649 (New): crash: std::_Rb_tree_increment(std::_Rb_tree_node_base*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=abd9958e5316bdff7b1da97a...- 02:48 AM Bug #60642 (New): crash: MOSDRepOp::encode_payload(unsigned long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ae3ef0ab87ca52d1d71cb95d...- 02:48 AM Bug #60639 (New): crash: __pthread_mutex_lock()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d2e338968de50959b71b8b66...- 02:48 AM Bug #60638 (New): crash: __pthread_mutex_lock()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=855a48c80408c88eca13ce68...- 02:47 AM Bug #60631 (New): crash: boost::json::detail::throw_system_error(boost::system::error_code const&...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f94549d257db774ceba05b10...- 02:47 AM Bug #60624 (New): crash: StackStringBuf<4096ul>::xsputn(char const*, long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=bf8916590c8937b7ce0ba97d...- 02:47 AM Bug #60620 (New): crash: std::_Rb_tree_increment(std::_Rb_tree_node_base*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7d6fbc0b0169a3ce495036d4...- 02:47 AM Bug #60603 (New): crash: void SignalHandler::queue_signal_info(int, siginfo_t*, void*): assert(r ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ed2df72de2440a7d09c79d46...- 02:47 AM Bug #60602 (New): crash: void SignalHandler::queue_signal_info(int, siginfo_t*, void*): assert(r ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8fae7c7d22c5178663398d86...- 02:46 AM Bug #60597 (New): crash: tc_new()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=927d6e99babd614e0291d5ae...- 02:46 AM Bug #60596 (New): crash: void PrimaryLogPG::eval_repop(PrimaryLogPG::RepGather*): assert(waiting_...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=862550c2d0acbaa001c27e63...- 02:46 AM Bug #60594 (New): crash: int fmt::v6::internal::format_float<double>(double, int, fmt::v6::intern...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4254a9aca05bbf94a3922101...- 02:46 AM Bug #60592 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=10997064a94fa0f58b2f194d...- 02:46 AM Bug #60590 (New): crash: virtual void AuthMonitor::update_from_paxos(bool*): assert(version > key...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c47c433b97365f96695a9a9f...- 02:46 AM Bug #60589 (New): crash: virtual void OSDMonitor::update_from_paxos(bool*): assert(version > osdm...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=31c00e57dbc1e4eb714b89c5...- 02:46 AM Bug #60588 (New): crash: virtual void OSDMonitor::update_from_paxos(bool*): assert(version > osdm...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9e76d0abf64c32468c7691db...- 02:46 AM Bug #60587 (New): crash: virtual void OSDMonitor::update_from_paxos(bool*): assert(version > osdm...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fbdfe2726dd1524362e807dc...- 02:43 AM Bug #60474 (New): crash: tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache::Free...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4ce98765c572a3a40356e633...- 02:41 AM Bug #60380 (New): crash: Session::~Session()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=141dcc50a935721f3ea08451...- 02:41 AM Bug #60379 (New): crash: __assert_perror_fail()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=060a9914f36be8d3695fa414...- 02:41 AM Bug #60378 (New): crash: MOSDPGLog::encode_payload(unsigned long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8116de9d97f4cbe038b8fa96...- 02:41 AM Bug #60377 (New): crash: pg_log_t::copy_after(ceph::common::CephContext*, pg_log_t const&, eversi...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6948c1802465a3ef7eda00ad...- 02:41 AM Bug #60376 (New): crash: tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache::Free...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=09cef354d99d6f6276c67970...- 02:41 AM Bug #60370 (New): crash: OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7fd809240caf32abcca4cf13...- 02:41 AM Bug #60368 (New): crash: std::__throw_invalid_argument(char const*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e3901a11598b4438c521a4a9...- 02:41 AM Bug #60363 (New): crash: non-virtual thunk to PgScrubber::clear_queued_or_active()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=43cdd4062ddf32d30a06ed0b...- 02:41 AM Bug #60362 (New): crash: pg_log_entry_t::pg_log_entry_t(pg_log_entry_t const&)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=998aa26bb11c7b1686c0656d...- 02:41 AM Bug #60361 (New): crash: ceph::ErasureCode::minimum_to_decode(std::set<int, std::less<int>, std::...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=36eaec9222f894f3ba692650...- 02:41 AM Bug #60358 (New): crash: PG::get_osdmap() const
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=087d7c20ee2d92444a968185...- 02:40 AM Bug #60348 (New): crash: pg_log_t::copy_after(ceph::common::CephContext*, pg_log_t const&, eversi...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7aa221f214611a4679ea19de...- 02:40 AM Bug #60342 (New): crash: int CrushCompiler::parse_crush(const iter_t&): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b7999e1ad40125809831f206...- 02:39 AM Bug #60313 (New): crash: rocksdb::ParseInternalKey(rocksdb::Slice const&, rocksdb::ParsedInternal...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e794ee01795360cec68480c1...- 02:39 AM Bug #60290 (New): crash: PGPool::update(std::shared_ptr<OSDMap const>)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7682007b1ad4847560e04bb0...- 02:39 AM Bug #60289 (New): crash: tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache::Free...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f1ddd125a182155c827a2817...- 02:39 AM Bug #60288 (New): crash: OSDService::send_message_osd_cluster(int, Message*, unsigned int)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a7ea16e44fad1c0c934e34ee...- 02:38 AM Bug #60273 (New): crash: ProtocolV2::ready()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=568738134f554513e5ef7b4f...- 02:38 AM Bug #60268 (New): crash: rocksdb::port::Mutex::Mutex(bool)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3251ce082c616ea0965e930f...- 02:38 AM Bug #60251 (New): crash: __pthread_mutex_lock()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=70e3ca48b3695f7897b86d7b...- 02:38 AM Bug #60244 (New): crash: std::map<std::basic_string<char, std::char_traits<char>, std::allocator<...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5c907fc47415452d4adba455...- 02:38 AM Bug #60243 (New): crash: boost::intrusive::rbtree_algorithms<boost::intrusive::rbtree_node_traits...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a09ec22b124bcdbfd5ea3ff7...- 02:38 AM Bug #60238 (New): crash: tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache::Free...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f2fc975aa8e358c4646c2357...- 02:38 AM Bug #60237 (New): crash: MPGStats::~MPGStats()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=94376ed38c19a5ce1f255ce7...- 02:38 AM Bug #60235 (New): crash: pg_log_entry_t::~pg_log_entry_t()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=616f85924c302527b272abc8...- 02:37 AM Bug #60229 (New): crash: OSDShard::update_pg_epoch(OSDShardPGSlot*, unsigned int)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=55680f1cd6a575e16fabe123...- 02:37 AM Bug #60227 (New): crash: PGLog::IndexedLog::~IndexedLog()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c15d5c0823fd3e284b91083d...- 02:34 AM Bug #60100 (New): crash: PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=099a833a58e5ee78e9196666...- 02:34 AM Bug #60084 (New): crash: void std::list<pg_log_entry_t, mempool::pool_allocator<(mempool::pool_in...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f9c0f5dfaa275433c0d20a61...- 02:34 AM Bug #60083 (New): crash: PG::read_state(ObjectStore*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3329e01356c7ae5b308e907e...- 02:34 AM Bug #60082 (New): crash: tc_memalign()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=86b76741a4adb77987c903c0...- 02:34 AM Bug #60080 (New): crash: ceph::buffer::list::append(ceph::buffer::list const&)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=06241146655baac57da981e6...- 02:34 AM Bug #60076 (New): crash: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6ec39f04d906631aa3aa97eb...- 02:34 AM Bug #60075 (New): crash: MOSDPGPush::~MOSDPGPush()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=397ebc4ea36c98be8899c4b6...- 02:34 AM Bug #60074 (New): crash: tcmalloc::CentralFreeList::FetchFromOneSpans(int, void**, void**)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f7a4893f406b608b1e978fc0...- 02:34 AM Bug #60073 (New): crash: std::__throw_bad_cast()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d57122fab534b52e4fee48c7...- 02:34 AM Bug #60071 (New): crash: ceph::buffer::ptr::release()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e58fc7bce0afca7c053503c9...- 02:34 AM Bug #60070 (New): crash: rocksdb::GetVarint32Ptr(char const*, char const*, unsigned int*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=19fe16e6dbeb1633981345da...- 02:33 AM Bug #60057 (New): crash: int BlueFS::_replay(bool, bool): assert(r == q->second->file_map.end())
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=07d0bb01bb262d5b0779453a...- 02:33 AM Bug #60055 (New): crash: void PeeringState::add_log_entry(const pg_log_entry_t&, bool): assert(e....
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d4bf27b06f0da79f942b403a...- 02:33 AM Bug #60050 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3473b4336127169b1a2f4bc1...- 02:33 AM Bug #60032 (New): crash: pg_shard_t::encode(ceph::buffer::list&) const
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fb5d0a36a01145d2b797767e...- 02:32 AM Bug #60016 (New): crash: MOSDRepOp::encode_payload(unsigned long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3ed1b7f6612a8182b2800f12...- 02:32 AM Bug #60010 (New): crash: _mosdop::MOSDOp<std::vector<OSDOp, std::allocator<OSDOp> > >::~MOSDOp()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6737b18151f5fc6d07a4409b...- 02:32 AM Bug #60003 (New): crash: PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=670be4ddba308f4db20ca54f...- 02:32 AM Bug #60002 (New): crash: SharedLRU<hobject_t, ObjectContext>::clear()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d698301768e0815722950d10...- 02:32 AM Bug #59998 (New): crash: __assert_perror_fail()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=406faa94035440e3d11ab372...- 02:32 AM Bug #59997 (New): crash: static void PGLog::read_log_and_missing(ceph::common::CephContext*, Obje...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=27ebfd35e8beb96874504568...- 02:32 AM Bug #59996 (New): crash: __assert_perror_fail()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5fe8a7693011f8a1603544bd...- 02:32 AM Bug #59995 (New): crash: virtual bool PrimaryLogPG::should_send_op(pg_shard_t, const hobject_t&):...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=76cb11b41830dca7bdc5dae2...- 02:32 AM Bug #59993 (New): crash: RDMAConnectedSocketImpl::fin()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=945e9c99f08713fa53d8e364...- 02:29 AM Bug #59896 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9e242fe178cdda735acfb110...- 02:29 AM Bug #59892 (New): crash: Infiniband::QueuePair::to_dead()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e6290b00bd0569e02d063c0d...- 02:29 AM Bug #59891 (New): crash: RDMAConnectedSocketImpl::fin()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1706101818e1d1d9319cdeb7...- 02:29 AM Bug #59890 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=beaaa07af9c446636296c5ea...- 02:29 AM Bug #59883 (New): crash: PeeringState::Crashed::Crashed(boost::statechart::state<PeeringState::Cr...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=15dd6d31679d011d664b1ce8...- 02:29 AM Bug #59871 (New): crash: MOSDPGLog::encode_payload(unsigned long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=79d69d2ec412b943ff9003b0...- 02:29 AM Bug #59870 (New): crash: Message::encode(unsigned long, int, bool)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fc42ae9e24dac0d36f0e6467...- 02:29 AM Bug #59867 (New): crash: syscall()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=bea75efcace465c1f92ae0ab...- 02:29 AM Bug #59864 (New): crash: ceph::buffer::list::iterator_impl<true>::operator
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=dec27e58ffa0a2111b86b773...- 02:29 AM Bug #59863 (New): crash: ceph::buffer::list::append(ceph::buffer::ptr const&, unsigned int, unsig...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=24b3a7148d578389adc62fdb...- 02:28 AM Bug #59862 (New): crash: ceph::buffer::ptr::ptr(ceph::buffer::ptr const&, unsigned int, unsigned ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5b82812a3b9f8da25ff6f48b...- 02:28 AM Bug #59861 (New): crash: ClassHandler::ClassMethod::exec(void*, ceph::buffer::list&, ceph::buffer...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f7384b43cbeb4a5054f7fa51...- 02:28 AM Bug #59860 (New): crash: syscall()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=41a428cd35198bd7df0ca281...- 02:28 AM Bug #59859 (New): crash: tc_new()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6e2448e967617d8d00cd3037...- 02:28 AM Bug #59855 (New): crash: pthread_mutex_lock()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=75fa43ee3e9fb422ba623843...- 02:28 AM Bug #59853 (New): crash: void _handle_dups(ceph::common::CephContext*, pg_log_t&, const pg_log_t&...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=21daa9c4c392a4b09a8aa46b...- 02:28 AM Bug #59849 (New): crash: virtual void AuthMonitor::update_from_paxos(bool*): assert(ret == 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a9880ecbdaed67ba5d392f8e...- 02:28 AM Bug #59848 (New): crash: virtual void AuthMonitor::update_from_paxos(bool*): assert(ret == 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=173ae8a004ac4db63db45b99...- 02:28 AM Bug #59844 (New): crash: PGLog::IndexedLog::trim(ceph::common::CephContext*, eversion_t, std::set...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=41ac876ff38676a63ac34a04...- 02:28 AM Bug #59832 (New): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64_t, ch...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=dcc9893f033406affa252b88...- 02:28 AM Bug #59831 (New): crash: void ECBackend::continue_recovery_op(ECBackend::RecoveryOp&, RecoveryMes...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9142b126ec0e6395147219ea...- 02:28 AM Bug #59826 (New): crash: ceph::buffer::ptr::release()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=bc10fd7fb0d22582b52983b2...- 02:27 AM Bug #59822 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=55f73977968776550cd7c368...- 02:27 AM Bug #59818 (New): crash: librados::IoCtx::watch2(std::basic_string<char, std::char_traits<char>, ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6e5196f28e1293b66b0b4bc8...- 02:27 AM Bug #59816 (New): crash: std::_Rb_tree_increment(std::_Rb_tree_node_base*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=869e60af4867d95bb95aba35...- 02:27 AM Bug #59814 (New): crash: __gxx_personality_v0()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=795294b083bc480194f0cf18...- 02:27 AM Bug #59813 (Fix Under Review): crash: void PaxosService::propose_pending(): assert(have_pending)
*New crash events were reported via Telemetry with newer versions (['17.2.5']) than encountered in Tracker (17.2.1)...- 02:26 AM Bug #59809 (New): crash: void Monitor::handle_sync_chunk(MonOpRequestRef): assert(state == STATE_...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=09368f4fec8eca8e5e4bda18...- 02:26 AM Bug #59807 (New): crash: virtual bool PrimaryLogPG::should_send_op(pg_shard_t, const hobject_t&):...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=bdc12d0cac68061a391ee579...- 02:25 AM Bug #59803 (New): crash: std::_Rb_tree_rebalance_for_erase(std::_Rb_tree_node_base*, std::_Rb_tre...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e95c39faf88657deb964c25e...- 02:25 AM Bug #59800 (New): crash: void Paxos::wait_for_readable(MonOpRequestRef, Context*): assert(!is_rea...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=47e52088eead1e93679810f6...- 02:25 AM Bug #59798 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3c677cd3186fcff1bd67d56f...- 02:25 AM Bug #59797 (New): crash: tc_new()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3598302912715137826672ff...- 02:25 AM Bug #59796 (New): crash: const entity_addrvec_t& MonMap::get_addrs(unsigned int) const: assert(m ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=aafd3b5c261f4f33ad53bf4f...- 02:24 AM Bug #59792 (New): crash: void MonMap::add(const mon_info_t&): assert(addr_mons.count(a) == 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=41bb6f44f9a31350179a81bc...- 02:24 AM Bug #59789 (New): crash: PrimaryLogPG::do_op(boost::intrusive_ptr<OpRequest>&)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=caba60485d26349b52c15ff9...- 02:24 AM Bug #59788 (New): crash: void Elector::handle_ack(MonOpRequestRef): assert(m->epoch == get_epoch())
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=650ed5c23e39ef451da73c52...- 02:24 AM Bug #59786 (New): crash: void Paxos::handle_begin(MonOpRequestRef): assert(begin->last_committed ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e366e3122782e399f1df8173...- 02:23 AM Bug #59779 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
*New crash events were reported via Telemetry with newer versions (['17.2.5']) than encountered in Tracker (17.2.0)...- 02:23 AM Bug #59778 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
*New crash events were reported via Telemetry with newer versions (['17.2.5']) than encountered in Tracker (16.2.7)...- 02:23 AM Bug #59777 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
*New crash events were reported via Telemetry with newer versions (['17.2.5']) than encountered in Tracker (17.2.0)...- 02:23 AM Bug #59775 (New): crash: void SignalHandler::queue_signal_info(int, siginfo_t*, void*): assert(r ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6e8c12772c508250d3a542df...- 02:23 AM Bug #59774 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d60aae5e6924f5266d50a9bf...- 02:23 AM Bug #59773 (New): crash: void PGLog::IndexedLog::add(const pg_log_entry_t&, bool): assert(head.ve...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0a56bf26ebe764a456229180...- 02:23 AM Bug #59772 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3aaa74f9c7d25386acbb6b89...- 02:23 AM Bug #59770 (New): crash: PeeringState::Crashed::Crashed(boost::statechart::state<PeeringState::Cr...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=14ad299b91f8cff54084e86c...- 02:23 AM Bug #59764 (New): crash: PGLog::IndexedLog::trim(ceph::common::CephContext*, eversion_t, std::set...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4599de927dc65d356c010d10...- 02:23 AM Bug #59763 (New): crash: void Processor::accept(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=605dd32cb2d5a5404ead70d9...- 02:23 AM Bug #59762 (New): crash: CrushTester::test()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=25ba59000f76285c6c9fe37e...- 02:22 AM Bug #59758 (New): crash: std::__detail::_Map_base<osd_reqid_t, std::pair<osd_reqid_t const, pg_lo...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=11e8377a6abfc599933a282b...- 02:22 AM Bug #59753 (New): crash: void MonitorDBStore::clear(std::set<std::__cxx11::basic_string<char> >&)...
*New crash events were reported via Telemetry with newer versions (['16.2.1', '16.2.2', '16.2.5', '16.2.6', '16.2.7...- 02:22 AM Bug #59752 (New): crash: void MonitorDBStore::clear(std::set<std::__cxx11::basic_string<char> >&)...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=51a1e0ba5e6de81858514508...- 02:14 AM Bug #59750 (New): crash: void MonMap::add(const mon_info_t&): assert(addr_mons.count(a) == 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d977e1ffc3251b2e75c444e9...- 02:14 AM Bug #59747 (New): crash: DeviceList::DeviceList(ceph::common::CephContext*): assert(num)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a55019bf3d0557b128cc63a9...- 02:14 AM Bug #59746 (New): crash: void PGLog::merge_log(pg_info_t&, pg_log_t&&, pg_shard_t, pg_info_t&, PG...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=789755527cdd1926593b4963...- 02:14 AM Bug #59744 (New): crash: __pthread_mutex_lock()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0970618f6018d62213af6efb...- 02:14 AM Bug #59743 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=867f41e53b105201cabb6a3f...- 02:13 AM Bug #59740 (New): crash: rocksdb::ColumnFamilySet::~ColumnFamilySet()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d4a88f5cfc29eb3f4610ab8b...
05/12/2023
- 02:34 AM Bug #56194: crash: OpTracker::~OpTracker(): assert((sharded_in_flight_list.back())->ops_in_flight...
- can zhu wrote:
> ceph v16.2.10
>
> {
> "assert_condition": "(sharded_in_flight_list.back())->ops_in_flight_s...
- 02:33 AM Bug #56194: crash: OpTracker::~OpTracker(): assert((sharded_in_flight_list.back())->ops_in_flight...
- ceph v16.2.10
{
"assert_condition": "(sharded_in_flight_list.back())->ops_in_flight_sharded.empty()",
"a...
- 02:33 AM Bug #56194: crash: OpTracker::~OpTracker(): assert((sharded_in_flight_list.back())->ops_in_flight...
- Are there any updates on this issue?
05/11/2023
- 08:39 PM Bug #56849: crash: void PaxosService::propose_pending(): assert(have_pending)
- Since this issue is marked as "Duplicate" it needs to specify what issue it duplicates in the "Related Issues" field.
- 08:12 PM Bug #56371: crash: MOSDPGLog::encode_payload(unsigned long)
- Since this issue is marked as "Duplicate" it needs to specify what issue it duplicates in the "Related Issues" field.
- 08:07 PM Bug #56847: crash: void PaxosService::propose_pending(): assert(have_pending)
- Since this issue is marked as "Duplicate" it needs to specify what issue it duplicates in the "Related Issues" field.
- 08:07 PM Bug #56848: crash: void PaxosService::propose_pending(): assert(have_pending)
- Since this issue is marked as "Duplicate" it needs to specify what issue it duplicates in the "Related Issues" field.
- 11:51 AM Feature #59727: The libradosstriper interface provides an optional parameter to avoid shared lock...
- pull request: https://github.com/ceph/ceph/pull/51443
- 08:36 AM Feature #59727 (New): The libradosstriper interface provides an optional parameter to avoid share...
The flow of the read operation of the current libradosstriper interface:
1. Lock (shared lock)
2. Read
3. Unl...
- 09:03 AM Bug #56707: pglog growing unbounded on EC with copy by ref
- 王子敬 wang wrote:
> 王子敬 wang wrote:
> > I have also experienced this situation here
> >
> > - Create 30 objects in...
- 08:48 AM Bug #56707: pglog growing unbounded on EC with copy by ref
- 王子敬 wang wrote:
> I have also experienced this situation here
>
> - Create 30 objects in bucket1 using put
> - ...
- 03:32 AM Bug #56707: pglog growing unbounded on EC with copy by ref
- I have also experienced this situation here
- Create 30 objects in bucket1 using put
- Using 30 objects as the s...
- 07:50 AM Bug #52624: qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi...
- 12:08 AM Bug #59049: WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
- It's not obvious to me from the above why this started popping up in the last few weeks -- have you been able to iden...
05/10/2023
- 11:25 PM Bug #59192: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enable...
- /a/yuriw-2023-04-26_01:16:19-rados-wip-yuri11-testing-2023-04-25-1605-pacific-distro-default-smithi/7253751
Sure R... - 08:54 PM Bug #59656: pg_upmap_primary timeout
- Hi Kevin,
I am working to reproduce this issue on my end, but I also have some tricks you can try to generate OSD ... - 04:21 PM Bug #49888: rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum ...
- /a/yuriw-2023-04-26_01:16:19-rados-wip-yuri11-testing-2023-04-25-1605-pacific-distro-default-smithi/7253869
/a/yuriw...
- 04:20 PM Backport #59715 (In Progress): pacific: mon: race condition between `mgr fail` and MgrMonitor::pr...
- 04:19 PM Backport #59715 (Resolved): pacific: mon: race condition between `mgr fail` and MgrMonitor::prepa...
- https://github.com/ceph/ceph/pull/50980
- 03:23 PM Bug #59049 (Fix Under Review): WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate ...
- 02:03 PM Bug #59291: pg_pool_t version compatibility issue
- So Neha and I have discussed this and we were looking into a solution where anything 31 and above would have to encod...
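As a toy illustration of that version-gated encode/decode idea (the real pg_pool_t code uses Ceph's ENCODE_START/ENCODE_FINISH machinery and feature bits; the buffer and field names below are hypothetical):
<pre>
// Toy stand-in for version-gated encoding: a field introduced at struct
// version 31 is written and read only when the encoded version says so.
#include <cstdint>
#include <cstring>
#include <vector>

struct Blob {                        // minimal stand-in for a bufferlist
  std::vector<uint8_t> bytes;
  size_t off = 0;
  void put(const void* p, size_t n) {
    const uint8_t* b = static_cast<const uint8_t*>(p);
    bytes.insert(bytes.end(), b, b + n);
  }
  void get(void* p, size_t n) {
    std::memcpy(p, bytes.data() + off, n);
    off += n;
  }
};

struct PoolLike {
  uint32_t old_field = 0;
  uint32_t new_field = 0;            // hypothetical field added in v31

  void encode(Blob& bl, uint8_t struct_v) const {
    bl.put(&struct_v, sizeof(struct_v));
    bl.put(&old_field, sizeof(old_field));
    if (struct_v >= 31)              // only 31+ encodings carry this
      bl.put(&new_field, sizeof(new_field));
  }
  void decode(Blob& bl) {
    uint8_t struct_v = 0;
    bl.get(&struct_v, sizeof(struct_v));
    bl.get(&old_field, sizeof(old_field));
    if (struct_v >= 31)
      bl.get(&new_field, sizeof(new_field));
  }
};

int main() {
  PoolLike p;
  p.old_field = 7;
  p.new_field = 42;
  Blob v30, v31;
  p.encode(v30, 30);                 // pre-31 peer: new_field omitted
  p.encode(v31, 31);                 // 31+ peer: new_field included
  PoolLike q;
  q.decode(v31);
  return q.new_field == 42 ? 0 : 1;
}
</pre>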
05/09/2023
- 06:28 PM Bug #54511 (Resolved): test_pool_min_size: AssertionError: not clean before minsize thrashing starts
- 06:28 PM Backport #57020 (Resolved): pacific: test_pool_min_size: AssertionError: not clean before minsize...
- merged a long time ago
- 06:21 PM Bug #56151 (Resolved): mgr/DaemonServer:: adjust_pgs gap > max_pg_num_change should be gap >= max...
- 06:18 PM Backport #59179 (Resolved): pacific: [pg-autoscaler][mgr] does not throw warn to increase PG coun...
- 06:18 PM Backport #59702 (Resolved): reef: mon: FAILED ceph_assert(osdmon()->is_writeable())
- 06:18 PM Backport #59701 (Resolved): quincy: mon: FAILED ceph_assert(osdmon()->is_writeable())
- 06:18 PM Backport #59700 (New): pacific: mon: FAILED ceph_assert(osdmon()->is_writeable())
- 06:10 PM Bug #57017: mon-stretched_cluster: degraded stretched mode lead to Monitor crash
- quincy backport: https://github.com/ceph/ceph/pull/51413
pacific backport: https://github.com/ceph/ceph/pull/51414
- 06:08 PM Bug #59271: mon: FAILED ceph_assert(osdmon()->is_writeable())
- reef: https://github.com/ceph/ceph/pull/51409
quincy: https://github.com/ceph/ceph/pull/51413
pacific: https://gith...
- 06:08 PM Bug #59271 (Pending Backport): mon: FAILED ceph_assert(osdmon()->is_writeable())
- 07:13 AM Bug #55009: Scrubbing exits due to error reading object head
- piotr@stackhpc.com, Mark Holliman:
As a temporary step, I'd suggest increasing the osd_max_scrubs configuration para...
- 05:02 AM Bug #50371: Segmentation fault (core dumped) ceph_test_rados_api_watch_notify_pp
- Looking at the kern.log.gz file to get some hints this time....
- 03:12 AM Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- Radoslaw Zarzynski wrote:
> Let's check whether this reproduces in Reef too. If so, then... there is no OMAP without...
- 02:40 AM Bug #59510: osd crash
- Thanks for your response. If the SSD is used as the data pool, does an NVMe device need to be added as the DB?
05/08/2023
- 09:18 PM Bug #49888: rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum ...
- /a/yuriw-2023-04-24_22:54:45-rados-wip-yuri7-testing-2023-04-19-1343-distro-default-smithi/7250551
Something inter...
- 09:05 PM Bug #59656: pg_upmap_primary timeout
- Thank you Kevin! I appreciate it.
- 08:27 PM Bug #59656: pg_upmap_primary timeout
- this is the map on which I tried to apply the osdmaptool
- 08:19 PM Bug #59656: pg_upmap_primary timeout
- Thanks Kevin. Your osdmap will also be helpful whenever you get a chance.
I will need some time to evaluate what's h...
- 08:08 PM Bug #59656: pg_upmap_primary timeout
- These are the only logs I have; I really don't think they contain any valuable information, but maybe I'm wrong
- 08:02 PM Bug #59656: pg_upmap_primary timeout
- It's difficult to say without the logs. Even if there are no errors explicitly presenting themselves, something off a...
- 07:59 PM Bug #59656: pg_upmap_primary timeout
- Ok, for now I have no errors; I will send it when I face the "not acting set" error again.
Can you please look ...
- 07:55 PM Bug #59656: pg_upmap_primary timeout
- Kevin NGUETCHOUANG wrote:
> How can I get the OSD logs?
All ceph logs are available by default under "/var/log/c...
- 07:43 PM Bug #59656: pg_upmap_primary timeout
- How can I get the OSD logs?
- 07:36 PM Bug #59656: pg_upmap_primary timeout
- Thanks Kevin, "Error EINVAL: osd.* is not in acting set for pg <pgid" helps, as it points me to the area of the code ...
- 07:28 PM Bug #59656: pg_upmap_primary timeout
- Hello Laura, thanks for answering me.
1. I'm using the reef version. (v18)
2. this is where the problem begins, I d...
- 06:57 PM Bug #59656: pg_upmap_primary timeout
- Hello Kevin, thanks for reporting this issue.
A few questions:
1. What is the version of your cluster?
2. In wha...
- 05:22 PM Bug #59656: pg_upmap_primary timeout
- This is a fresh Reef feature, added in https://github.com/ceph/ceph/pull/49178.
CCing Laura who was involved here.
- 07:15 PM Bug #53575: Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64
- The pacific backport has been approved and is just awaiting testing: https://github.com/ceph/ceph/pull/49521
- 07:14 PM Bug #53575: Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64
- Radoslaw Zarzynski wrote:
> Sounds like a missed backport. Please correct me if I'm wrong.
That's my understandin...
- 05:28 PM Bug #53575: Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64
- Sounds like a missed backport. Please correct me if I'm wrong.
- 06:37 PM Bug #50371: Segmentation fault (core dumped) ceph_test_rados_api_watch_notify_pp
- Thanks Brad! Let me know how I can help.
I found another instance in Pacific:
/a/yuriw-2023-05-06_14:41:44-rados-...
- 06:19 PM Bug #59504: 17.2.6: build fails with fmt 9.1.0
- Radoslaw Zarzynski wrote:
> I recall there was a bunch of libfmt-related fixes in main. Perhaps we missed backporting...
- 05:47 PM Bug #59504 (Need More Info): 17.2.6: build fails with fmt 9.1.0
- I recall there was a bunch of libfmt-related fixes in main. Perhaps we missed backporting some of them. Could by any c...
- 05:54 PM Bug #53751: "N monitors have not enabled msgr2" is always shown for new clusters
- This might be a doc issue, but I'm not sure. Bumping for deep bug scrub.
- 05:51 PM Bug #59510: osd crash
- Increasing the timeout could obviously help in the short term but won't deal with the underlying problem. Igor's idea ...
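(A minimal sketch of such a short-term bump, assuming the knobs in question are the OSD op-thread heartbeat/suicide timeouts — which option actually fires in this crash is not confirmed here, and the values are placeholders.)
<pre>
import subprocess

# Stop-gap only: raise the OSD worker-thread timeouts cluster-wide via the mon
# config DB. Values are placeholders; revert once the root cause is addressed.
STOPGAP = {
    "osd_op_thread_timeout": "60",
    "osd_op_thread_suicide_timeout": "300",
}

for name, value in STOPGAP.items():
    subprocess.run(["ceph", "config", "set", "osd", name, value], check=True)
</pre>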
- 05:48 PM Bug #59080: mclock-config.sh: TEST_profile_disallow_builtin_params_modify fails when $res == $opt...
- Will be merged as a part of the big mClock PR.
- 05:39 PM Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- Let's check whether this reproduces in Reef too. If so, then... there is no OMAP without RocksDB and we upgraded it r...
- 05:34 PM Bug #59057: rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_t...
- High as it's a new thing in Reef.
- 05:33 PM Bug #59057: rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_t...
- Laura: the occurrence from February was actually on a branch with the rocksdb bump up. See: https://github.com/ceph/ce...
- 05:25 PM Bug #55009: Scrubbing exits due to error reading object head
- Sounds like entire scrubbing could get blocked.
- 07:50 AM Bug #55009: Scrubbing exits due to error reading object head
- In case it's useful, here is a more detailed log (with debug level 10) from the same environment mentioned by Mark:...
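(A minimal sketch of how such a log could be captured, assuming the subsystem raised was @debug_osd@ on the affected OSD; the OSD id and levels are placeholders.)
<pre>
import subprocess

def set_debug_osd(target, level):
    # Adjust the OSD debug level through the mon config DB.
    subprocess.run(["ceph", "config", "set", target, "debug_osd", level], check=True)

set_debug_osd("osd.3", "10/10")   # placeholder OSD; reproduce the scrub error here
set_debug_osd("osd.3", "1/5")     # then restore the default level
</pre>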
- 05:19 PM Bug #59670: Ceph status shows PG recovering when norecover flag is set
- Has the PG ultimately gone into the proper state? Asking to exclude a race condition in _just reporting_ via ceph-mgr.
- 02:07 PM Bug #59670 (New): Ceph status shows PG recovering when norecover flag is set
- On the Gibba cluster, we observed that ceph -s was showing one PG in a recovering state after the norecover flag was set
...
- 05:17 PM Bug #59049: WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
- Raising to urgent so as not to lose it from the sight line of Reef.
- 02:39 PM Bug #59049: WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
- This is an actual bug in the scrub code:
Working with Nitzan, here is what we've found out:
(based on logs from...
- 05:13 PM Bug #58893: test_map_discontinuity: AssertionError: wait_for_clean: failed before timeout expired
- /a/yuriw-2023-05-06_14:41:44-rados-pacific-release-distro-default-smithi/7264242...
- 05:10 PM Backport #59677 (New): quincy: osd:tick checking mon for new map
- 05:10 PM Backport #59676 (New): reef: osd:tick checking mon for new map
- 05:10 PM Backport #59675 (New): pacific: osd:tick checking mon for new map
- 05:08 PM Bug #57977 (Pending Backport): osd:tick checking mon for new map
- 05:07 PM Bug #59192: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enable...
- Laura, would you mind taking a look? Definitely not an urgent thing.
- 03:56 PM Bug #59192: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enable...
- /a/yuriw-2023-05-06_14:41:44-rados-pacific-release-distro-default-smithi/7264188
- 03:44 PM Bug #48965: qa/standalone/osd/osd-force-create-pg.sh: TEST_reuse_id: return 1
- /a/yuriw-2023-05-06_14:41:44-rados-pacific-release-distro-default-smithi/7264602
05/05/2023
- 07:40 PM Bug #59599: osd: cls_refcount unit test failures during upgrade sequence
- See also here:
http://qa-proxy.ceph.com/teuthology/teuthology-2023-05-05_14:23:01-upgrade:pacific-x-quincy-distro...
- 06:59 AM Bug #59656 (Need More Info): pg_upmap_primary timeout
- Hello,
I created a Ceph cluster using cephadm with Ceph version Reef: 10 nodes, 3 mon nodes and 8 OSD nodes. On top o...
05/04/2023
- 05:57 PM Bug #59192: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enable...
- /a/yuriw-2023-04-25_18:56:08-rados-wip-yuri5-testing-2023-04-25-0837-pacific-distro-default-smithi/7252745
- 03:05 PM Bug #53575: Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64
- Thanks Nitzan!
- 05:56 AM Bug #53575 (In Progress): Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64
- 05:55 AM Bug #53575: Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64
- Laura, the original PR was for quincy. And the related tracker https://tracker.ceph.com/issues/57618 already has bac...
- 07:54 AM Backport #59627 (Resolved): quincy: rados/test.sh hang and pkilled (LibRadosWatchNotifyEC.WatchNo...
- Already in quincy
- 05:43 AM Backport #59628 (In Progress): pacific: rados/test.sh hang and pkilled (LibRadosWatchNotifyEC.Wat...
- 01:51 AM Bug #50371: Segmentation fault (core dumped) ceph_test_rados_api_watch_notify_pp
- Laura Flores wrote:
> /a/yuriw-2023-04-25_21:30:50-rados-wip-yuri3-testing-2023-04-25-1147-distro-default-smithi/725...
05/03/2023
- 09:37 PM Backport #59637 (New): reef: scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: ...
- 09:35 PM Bug #58797 (Pending Backport): scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR...
- /a/yuriw-2023-04-27_14:24:15-rados-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-smithi/7255773
- 09:26 PM Bug #59049: WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
- /a/yuriw-2023-04-27_14:24:15-rados-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-smithi/7255789
/a/yuriw-202... - 06:35 PM Bug #59049: WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
- /a/yuriw-2023-04-25_21:30:50-rados-wip-yuri3-testing-2023-04-25-1147-distro-default-smithi/7253199
/a/yuriw-2023-04-...
- 07:00 PM Bug #59057: rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_t...
- /a/yuriw-2023-04-25_21:30:50-rados-wip-yuri3-testing-2023-04-25-1147-distro-default-smithi/7253386
- 06:57 PM Bug #50371 (New): Segmentation fault (core dumped) ceph_test_rados_api_watch_notify_pp
- /a/yuriw-2023-04-25_21:30:50-rados-wip-yuri3-testing-2023-04-25-1147-distro-default-smithi/7253544...
- 06:54 PM Backport #59628 (Resolved): pacific: rados/test.sh hang and pkilled (LibRadosWatchNotifyEC.WatchN...
- https://github.com/ceph/ceph/pull/51341
- 06:54 PM Backport #59627 (Resolved): quincy: rados/test.sh hang and pkilled (LibRadosWatchNotifyEC.WatchNo...
- 06:51 PM Bug #57618 (Pending Backport): rados/test.sh hang and pkilled (LibRadosWatchNotifyEC.WatchNotify)
- 06:39 PM Bug #59192: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enable...
- /a/yuriw-2023-04-25_21:30:50-rados-wip-yuri3-testing-2023-04-25-1147-distro-default-smithi/7253406
- 02:43 PM Bug #55009: Scrubbing exits due to error reading object head
- Ronen, assigning to you in case you have any ideas.
- 02:38 PM Bug #55009: Scrubbing exits due to error reading object head
- I'm seeing what at least looks similar to this bug on a cluster running: ceph version 16.2.10
About a week ago we ...
05/02/2023
- 11:02 PM Bug #59057: rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_t...
- /a/lflores-2023-04-28_19:31:46-rados-wip-yuri10-testing-2023-04-18-0735-reef-distro-default-smithi/7257792
- 11:48 AM Bug #59057: rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_t...
- /a/sseshasa-2023-05-02_03:12:27-rados-wip-sseshasa3-testing-2023-05-01-2154-distro-default-smithi/7260279
- 10:57 PM Bug #59049: WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
- /a/lflores-2023-04-28_19:31:46-rados-wip-yuri10-testing-2023-04-18-0735-reef-distro-default-smithi/7257789
/a/yuriw-...
- 05:02 AM Bug #59049: WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
- Neha Ojha wrote:
> Nitzan Mordechai wrote:
> > According to PR https://github.com/ceph/ceph/pull/44050 we can ignor...
- 07:33 PM Bug #59564 (Fix Under Review): Connection scores not populated properly on monitors post installa...
- 05:53 PM Bug #59192: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enable...
- still failing consistently in the rgw suite
on main: https://pulpito.ceph.com/cbodley-2023-04-26_00:39:50-rgw-wip-cb...
- 02:53 PM Bug #56896: crash: int OSD::shutdown(): assert(end_time - start_time_func < cct->_conf->osd_fast_...
- Looking at the OSD code I don't see much sense behind this assertion and the relevant timeout parameter.
Shouldn't w...
- 12:01 PM Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- /a/sseshasa-2023-05-02_03:12:27-rados-wip-sseshasa3-testing-2023-05-01-2154-distro-default-smithi/7260300...
- 11:27 AM Bug #59599: osd: cls_refcount unit test failures during upgrade sequence
- /a/sseshasa-2023-05-01_18:57:15-rados-wip-sseshasa2-testing-2023-05-01-2153-quincy-distro-default-smithi/7259884
- 11:26 AM Bug #59599 (Resolved): osd: cls_refcount unit test failures during upgrade sequence
- /a/sseshasa-2023-05-01_18:57:15-rados-wip-sseshasa2-testing-2023-05-01-2153-quincy-distro-default-smithi/7259891
H...
- 08:16 AM Bug #59333: PgScrubber: timeout on reserving replicas
- /a/sseshasa-2023-05-02_03:09:13-rados-wip-sseshasa-testing-2023-05-01-2145-distro-default-smithi/7260258
05/01/2023
- 10:38 PM Bug #58289: "AssertionError: wait_for_recovery: failed before timeout expired" from down pg in pa...
- /a/yuriw-2023-04-25_14:52:56-upgrade:pacific-p2p-pacific-release-distro-default-smithi/7252143
- 08:13 PM Bug #53575 (New): Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64
- 08:11 PM Bug #53575: Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64
- /a/yuriw-2023-04-25_14:15:40-rados-pacific-release-distro-default-smithi/7251534
- 07:43 PM Bug #59192: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enable...
- /a/yuriw-2023-04-24_23:35:26-smoke-pacific-release-distro-default-smithi/7250661
- 07:03 PM Bug #59049: WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
- Nitzan Mordechai wrote:
> According to PR https://github.com/ceph/ceph/pull/44050 we can ignore that warning, i'll a...
04/29/2023
- 10:34 PM Support #59587 (New): ipv4 public_network + ipv6 cluster_network = osd: unable to find any IPv6 o...
- Hi,
I have an ipv4 only public network.
I created an ipv6 only cluster network (full mesh - 3 nodes - OSPF)
In...
04/28/2023
- 09:58 PM Bug #50222: osd: 5.2s0 deep-scrub : stat mismatch
- Radoslaw Zarzynski wrote:
> Hi Laura. Any luck with verifying the hypothesis from comment #17?
I ran thi...
- 09:53 PM Bug #49525: found snap mapper error on pg 3.2s1 oid 3:4abe9991:::smithi10121515-14:e4 snaps missi...
- /a/yuriw-2023-04-25_14:15:40-rados-pacific-release-distro-default-smithi/7251426
- 09:20 PM Bug #51729: Upmap verification fails for multi-level crush rule
- I've landed on a potential fix for this problem. After evaluating the examples everyone provided and checking the ver...
- 08:15 PM Bug #59192: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enable...
- /a/yuriw-2023-04-25_14:15:40-rados-pacific-release-distro-default-smithi/7251186
- 02:18 PM Backport #55541 (In Progress): pacific: should use TCMalloc for better performance
- 12:08 PM Bug #59583 (New): osd: Higher client latency observed with mclock 'high_client_ops' profile durin...
- Recovery/backfill testing was performed with OSDs on SSDs and with an Erasure Coded backend. Tests with 'high_client_ops...
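(For context, switching between the built-in profiles is a single config change — a minimal sketch assuming the @osd_mclock_profile@ option and its built-in profile names, not the harness used for these runs.)
<pre>
import subprocess

def set_mclock_profile(profile):
    # Built-in profiles: "high_client_ops", "balanced", "high_recovery_ops".
    subprocess.run(["ceph", "config", "set", "osd", "osd_mclock_profile", profile],
                   check=True)

set_mclock_profile("high_client_ops")  # profile under test in the runs above
</pre>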
- 10:08 AM Feature #42321: Add a new mode to balance pg layout by primary osds
- !ceph_osd_df.png!
Hi, rosinL. I have used the "balance pg layout by primary osds" function you submitted. In a ...
04/27/2023
- 02:12 PM Bug #59504: 17.2.6: build fails with fmt 9.1.0
- Redirecting to general RADOS.
- 01:16 PM Backport #52841 (In Progress): pacific: shard-threads cannot wakeup bug
- 01:15 PM Backport #53166 (In Progress): pacific: api_watch_notify: LibRadosWatchNotify.Watch3Timeout failed
- 01:13 PM Backport #53167 (Rejected): octopus: api_watch_notify: LibRadosWatchNotify.Watch3Timeout failed
- Octopus is EOL
- 01:13 PM Bug #52739 (Resolved): msg/async/ProtocalV2: recv_stamp of a message is set to a wrong value
- 01:13 PM Backport #52842 (Rejected): octopus: msg/async/ProtocalV2: recv_stamp of a message is set to a wr...
- Octopus is EOL
- 01:13 PM Backport #52840 (Rejected): octopus: shard-threads cannot wakeup bug
- Octopus is EOL
- 12:30 PM Backport #52307 (In Progress): pacific: doc: clarify use of `rados rm` command
- 12:30 PM Backport #52306 (Rejected): octopus: doc: clarify use of `rados rm` command
- Octopus is EOL
- 12:29 PM Backport #52557 (In Progress): pacific: pybind: rados.RadosStateError raised when closed watch ob...
- 12:28 PM Backport #52556 (Rejected): octopus: pybind: rados.RadosStateError raised when closed watch objec...
- Octopus is EOL
- 12:27 PM Backport #52596 (Rejected): octopus: make bufferlist::c_str() skip rebuild when it isn't necessary
- Octopus is EOL
- 12:26 PM Backport #51525 (Rejected): octopus: osd: Delay sending info to new backfill peer resetting last_...
- Octopus is EOL
- 12:26 PM Bug #50441 (Rejected): cephadm bootstrap on arm64 fails to start ceph/ceph-grafana service
- 12:26 PM Backport #51551 (Rejected): octopus: cephadm bootstrap on arm64 fails to start ceph/ceph-grafana ...
- Octopus is EOL
- 12:26 PM Bug #50393 (Resolved): CommandCrashedError: Command crashed: 'mkdir -p -- /home/ubuntu/cephtest/m...
- 12:25 PM Backport #51741 (Rejected): octopus: CommandCrashedError: Command crashed: 'mkdir -p -- /home/ubu...
- Octopus is EOL
- 12:23 PM Backport #56604 (In Progress): pacific: ceph report missing osdmap_clean_epochs if answered by peon
- 12:23 PM Backport #56603 (Rejected): octopus: ceph report missing osdmap_clean_epochs if answered by peon
- Octopus is EOL
- 12:22 PM Bug #48899 (Resolved): api_list: LibRadosList.EnumerateObjects and LibRadosList.EnumerateObjectsS...
- 12:22 PM Backport #55581 (Rejected): octopus: api_list: LibRadosList.EnumerateObjects and LibRadosList.Enu...
- Octopus is EOL
- 12:22 PM Backport #55066 (Rejected): pacific: osd_fast_shutdown_notify_mon option should be true by default
- Duplicate?
- 12:21 PM Backport #55067 (Rejected): octopus: osd_fast_shutdown_notify_mon option should be true by default
- Octopus is EOL
- 11:11 AM Bug #59080 (Fix Under Review): mclock-config.sh: TEST_profile_disallow_builtin_params_modify fail...
- The test script issue is related to the timing of a check once a change to the mon DB is made. Any changes to the mon DB conf...
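(A minimal sketch of the kind of wait the test needs — the real test is a bash standalone script; this Python illustration polls the mon config DB until the new value is visible instead of asserting immediately. Option name and value are placeholders.)
<pre>
import subprocess
import time

def wait_for_config(who, name, expected, timeout=30.0):
    # Poll "ceph config get" until the freshly set value is reported back.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        cur = subprocess.run(["ceph", "config", "get", who, name],
                             check=True, capture_output=True, text=True).stdout.strip()
        if cur == expected:
            return True
        time.sleep(1)
    return False

subprocess.run(["ceph", "config", "set", "osd", "osd_mclock_profile", "high_recovery_ops"],
               check=True)
assert wait_for_config("osd", "osd_mclock_profile", "high_recovery_ops")
</pre>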
- 10:50 AM Backport #52892 (In Progress): pacific: ceph-kvstore-tool repair segmentfault without bluestore-kv
- 10:49 AM Backport #52893 (Rejected): octopus: ceph-kvstore-tool repair segmentfault without bluestore-kv
- Octopus is EOL
- 09:12 AM Bug #48843 (Resolved): Get more parallel scrubs within osd_max_scrubs limits
- 09:11 AM Backport #49776 (Rejected): octopus: Get more parallel scrubs within osd_max_scrubs limits
- Octopus is EOL
- 09:11 AM Backport #52839 (In Progress): pacific: rados: build minimally when "WITH_MGR" is off
- 09:10 AM Backport #52791 (In Progress): pacific: common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_...
- 09:10 AM Backport #52838 (Rejected): octopus: rados: build minimally when "WITH_MGR" is off
- Octopus is EOL
- 09:09 AM Backport #52792 (Rejected): octopus: common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_fli...
- Octopus is EOL
- 09:09 AM Bug #48959 (Resolved): Primary OSD crash caused corrupted object and further crashes during backf...
- 09:09 AM Backport #52937 (Rejected): octopus: Primary OSD crash caused corrupted object and further crashe...
- Octopus is EOL
- 09:07 AM Bug #45868 (Resolved): rados_api_tests: LibRadosWatchNotify.AioWatchNotify2 fails
- 09:07 AM Backport #55768 (Resolved): pacific: rados_api_tests: LibRadosWatchNotify.AioWatchNotify2 fails
- 09:06 AM Backport #55767 (Rejected): octopus: rados_api_tests: LibRadosWatchNotify.AioWatchNotify2 fails
- Octopus is EOL
- 09:06 AM Bug #53506 (Closed): mon: frequent cpu_tp had timed out messages
- 09:04 AM Backport #53719 (Resolved): octopus: mon: frequent cpu_tp had timed out messages
- 02:37 AM Bug #59510: osd crash
- The index pool is made of SSD and the data pool is made of HDD; the crash message comes from the HDD. Is there a way to avoid t...
04/26/2023
- 07:59 PM Bug #53751: "N monitors have not enabled msgr2" is always shown for new clusters
- Hi Radoslaw, before that, a quick thing I just found for your consideration:
Running monmaptool is step 13 in http... - 06:05 PM Bug #59564 (Fix Under Review): Connection scores not populated properly on monitors post installa...
- ...
- 04:16 PM Bug #47838: mon/test_mon_osdmap_prune.sh: first_pinned != trim_to
- RCA by Aishwarya: https://gist.github.com/amathuria/26f5e9ecfc3f04a70c9795039fdf0c35?permalink_comment_id=4549186#gis...
- 12:14 PM Bug #59510: osd crash
- You might also want to compact this OSD's DB using ceph-kvstore-tool. There is some chance that the timeout is caused by ...
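(A minimal sketch of an offline compaction, assuming a package-based install with the default OSD data path and @ceph-kvstore-tool bluestore-kv <path> compact@; the OSD has to be stopped while the tool runs, and the id below is a placeholder.)
<pre>
import subprocess

def compact_osd_db(osd_id):
    path = "/var/lib/ceph/osd/ceph-%d" % osd_id
    subprocess.run(["systemctl", "stop", "ceph-osd@%d" % osd_id], check=True)
    try:
        # Offline RocksDB compaction of the OSD's key-value store.
        subprocess.run(["ceph-kvstore-tool", "bluestore-kv", path, "compact"], check=True)
    finally:
        subprocess.run(["systemctl", "start", "ceph-osd@%d" % osd_id], check=True)

compact_osd_db(12)  # placeholder OSD id
</pre>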
- 07:04 AM Bug #59510: osd crash
- like this?
*[6880136.695917] tp_osd_tp[6383]: segfault at 0 ip 00007ff38f003573 sp 00007ff36ba8a240 error 4 in libt... - 11:50 AM Backport #59456 (In Progress): quincy: Monitors do not permit OSD to join after upgrading to Quincy
- 11:49 AM Backport #59455 (In Progress): pacific: Monitors do not permit OSD to join after upgrading to Quincy
- 07:00 AM Bug #57977: osd:tick checking mon for new map
- Radoslaw Zarzynski wrote:
> Yup, the patch does exactly that – it ensures that a random nonce is always used.
I h...
- 01:36 AM Bug #59532: quincy: cephadm.upgrade from 16.2.4 (related?) stuck with one OSD upgraded
- Radoslaw Zarzynski wrote:
> Hi Patrick!
> How reproducible is this? Is it constant, or did it perhaps happen just once...
04/25/2023
- 06:13 PM Bug #56393: failed to complete snap trimming before timeout
- Bump up.
- 06:12 PM Bug #59049 (In Progress): WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
- 06:11 PM Bug #59510 (Need More Info): osd crash
- It looks like the scan-for-backfill operation was taking a long time and triggered the thread heartbeat. This could be even ...
- 06:08 PM Bug #59531: quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.0...
- Hi Aishwarya! What do you think about Patrick's question: "Should we (fs suite) be setting a config to mute this WRN...
- 12:25 AM Bug #59531 (In Progress): quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold ...
- /ceph/teuthology-archive/pdonnell-2023-04-24_17:17:44-fs-wip-pdonnell-testing-20230420.183701-quincy-distro-default-s...
- 06:05 PM Bug #53751: "N monitors have not enabled msgr2" is always shown for new clusters
- Hello Niklas!
Thanks for getting back to it! Could you please collect the monitor's logs with @debug_ms=20@ and @debug...
- 01:44 AM Bug #53751: "N monitors have not enabled msgr2" is always shown for new clusters
- The fundamental issue here seems to be that in my newly deployed test cluster, nothing listens on port 3300 even thou...
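(A quick way to confirm that observation is to probe the monitor's msgr2 port 3300 and the legacy msgr1 port 6789 directly — a minimal sketch; the monitor address is a placeholder.)
<pre>
import socket

def port_open(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

MON_HOST = "192.0.2.10"  # placeholder monitor address
for port, proto in ((3300, "msgr2"), (6789, "msgr1")):
    state = "open" if port_open(MON_HOST, port) else "closed"
    print("%s:%d (%s) -> %s" % (MON_HOST, port, proto, state))
</pre>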
- 05:56 PM Bug #59333: PgScrubber: timeout on reserving replicas
- bump up
- 03:46 PM Bug #59333: PgScrubber: timeout on reserving replicas
- See the same on pacific 16.2.13 RC
http://qa-proxy.ceph.com/teuthology/yuriw-2023-04-25_14:15:06-smoke-pacific-rel... - 05:46 PM Bug #57977: osd:tick checking mon for new map
- Yup, the patch does exactly that – it ensures that a random nonce is always used.
- 05:42 PM Bug #59532 (Need More Info): quincy: cephadm.upgrade from 16.2.4 (related?) stuck with one OSD up...
- Hi Patrick!
How reproducible is this? Is it constant, or did it perhaps happen just once? I'm asking because of the rec...
- ...
- 10:17 AM Backport #59538 (Rejected): pacific: osd/scrub: verify SnapMapper consistency not backported
- 10:17 AM Backport #59537 (Resolved): quincy: osd/scrub: verify SnapMapper consistency not backported
- https://github.com/ceph/ceph/pull/52182
- 10:12 AM Bug #59478: osd/scrub: verify SnapMapper consistency not backported
- @Wout, the bot should create backport tickets soon
- 10:11 AM Bug #59478 (Pending Backport): osd/scrub: verify SnapMapper consistency not backported
- 10:04 AM Bug #56147: snapshots will not be deleted after upgrade from nautilus to pacific
- Matan Breizman wrote:
> > For already-converted clusters: Separate PR will be issued to remove/update the malformed ...