Activity

From 09/14/2022 to 10/13/2022

10/13/2022

07:39 AM Bug #57859 (Fix Under Review): bail from handle_command() if _generate_command_map() fails
Ilya Dryomov
03:51 AM Bug #57859 (Resolved): bail from handle_command() if _generate_command_map() fails
https://tracker.ceph.com/issues/54558 catches an exception from handle_command() to avoid mon termination due to a po... nikhil kshirsagar
04:03 AM Bug #54558: malformed json in a Ceph RESTful API call can stop all ceph-mon services
nikhil kshirsagar wrote:
> Ilya Dryomov wrote:
> > I don't think https://github.com/ceph/ceph/pull/45547 is a compl...
nikhil kshirsagar

10/12/2022

05:08 PM Bug #57782: [mon] high cpu usage by fn_monstore thread
Hey Radek,
Makes sense. I created a debug branch https://github.com/ceph/ceph-ci/pull/new/wip-crush-debug and migh...
Deepika Upadhyay
02:39 AM Bug #57852 (Need More Info): osd: unhealthy osd cannot be marked down in time
Before an unhealthy osd is marked down by the mon, other osds may choose it as a
heartbeat peer and then report an incorrec...
wencong wan

10/11/2022

10:13 AM Bug #57845 (New): MOSDRepOp::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_O...
... Andreas Teuchert

10/10/2022

06:33 PM Bug #57796: after rebalance of pool via pgupmap balancer, continuous issues in monitor log

Radoslaw,
Yes, I saw that piece of code too. But I *think* I figured it out just a short time ago. I had the cru...
Chris Durham
06:05 PM Bug #57796 (Need More Info): after rebalance of pool via pgupmap balancer, continuous issues in m...
Thanks for the report! The log comes from here:... Radoslaw Zarzynski
06:23 PM Bug #57782 (Need More Info): [mon] high cpu usage by fn_monstore thread
It looks like we're burning CPU in @close(2)@. The single call site I can spot is in @write_data_set_to_csv@. Let's analyz... Radoslaw Zarzynski
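
A minimal sketch of the pattern hinted at above (hypothetical shape, not the actual @write_data_set_to_csv@ code): if a CSV-dumping routine reopens and closes its output for every record, a profiler will attribute most of the CPU to open(2)/close(2) syscall overhead rather than to the formatting work; hoisting the open out of the loop amortizes it.

#include <fstream>
#include <string>
#include <vector>

// Per-row open/close: open(2)/close(2) dominate the profile.
void write_rows_slow(const std::vector<std::string>& rows) {
  for (const auto& row : rows) {
    std::ofstream out("data.csv", std::ios::app);
    out << row << '\n';
  }  // close(2) once per row
}

// Open once, close once: the syscall overhead is amortized.
void write_rows_fast(const std::vector<std::string>& rows) {
  std::ofstream out("data.csv", std::ios::app);
  for (const auto& row : rows)
    out << row << '\n';
}  // single close(2) when 'out' goes out of scope
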
06:08 AM Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
Laura Flores wrote:
> I contacted some Telemetry users. I will report back here with any information.
>
I am on...
Jimmy Spets

10/07/2022

08:32 PM Bug #57796: after rebalance of pool via pgupmap balancer, continuous issues in monitor log
I removed the hosts holding the osds reported by verify_upmap from the default root rule that no one uses, and the lo... Chris Durham
05:56 PM Bug #57796: after rebalance of pool via pgupmap balancer, continuous issues in monitor log
Note that the balancer balanced a replicated pool, using its own custom crush root too. The hosts in that pool (not i... Chris Durham
05:46 PM Bug #57796: after rebalance of pool via pgupmap balancer, continuous issues in monitor log
preformatting the crush info so it shows up properly ...... Chris Durham
05:43 PM Bug #57796 (Need More Info): after rebalance of pool via pgupmap balancer, continuous issues in m...

The pgupmap balancer was not balancing well, and after setting mgr/balancer/upmap_max_deviation to 1 (ceph config-k...
Chris Durham
04:46 PM Backport #57795 (In Progress): quincy: intrusive_lru leaking memory when
https://github.com/ceph/ceph/pull/54557 Backport Bot
04:46 PM Backport #57794 (Resolved): pacific: intrusive_lru leaking memory when
https://github.com/ceph/ceph/pull/54558 Backport Bot
04:29 PM Bug #57573 (Pending Backport): intrusive_lru leaking memory when
Casey Bodley
12:36 PM Bug #54773: crash: void MonMap::add(const mon_info_t&): assert(addr_mons.count(a) == 0)
See bug 54744. Gabriel Mainberger
12:35 PM Bug #54744: crash: void MonMap::add(const mon_info_t&): assert(addr_mons.count(a) == 0)
Rook v1.6.5 / Ceph v12.2.9 running on the host network and not inside the Kubernetes SDN caused the creation of a mon canary... Gabriel Mainberger

10/06/2022

08:38 PM Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
I contacted some Telemetry users. I will report back here with any information.
Something to note: The large maj...
Laura Flores
05:08 PM Backport #57545: quincy: CommandFailedError: Command failed (workunit test rados/test_python.sh) ...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48113
merged
Yuri Weinstein
05:05 PM Backport #57496: quincy: Invalid read of size 8 in handle_recovery_delete()
Nitzan Mordechai wrote:
> https://github.com/ceph/ceph/pull/48039
merged
Yuri Weinstein
05:04 PM Backport #57443: quincy: osd: Update osd's IOPS capacity using async Context completion instead o...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47983
merged
Yuri Weinstein
05:03 PM Backport #57346: quincy: expected valgrind issues and found none
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47933
merged
Yuri Weinstein
05:01 PM Backport #56602: quincy: ceph report missing osdmap_clean_epochs if answered by peon
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47928
merged
Yuri Weinstein
05:00 PM Backport #55282: quincy: osd: add scrub duration for scrubs after recovery
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47926
merged
Yuri Weinstein
04:47 PM Backport #57544: pacific: CommandFailedError: Command failed (workunit test rados/test_python.sh)...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48112
merged
Yuri Weinstein
02:08 PM Bug #57782 (Fix Under Review): [mon] high cpu usage by fn_monstore thread
We observed high cpu usage by the ms_dispatch and fn_monstore threads (amounting to 99-100% in top). Ceph [ deployment was ... Deepika Upadhyay

10/05/2022

06:49 PM Bug #57699 (Fix Under Review): slow osd boot with valgrind (reached maximum tries (50) after wait...
Radoslaw Zarzynski
06:48 PM Bug #57049 (Duplicate): cluster logging does not adhere to mon_cluster_log_file_level
Radoslaw Zarzynski
06:46 PM Bug #50222: osd: 5.2s0 deep-scrub : stat mismatch
Hi Laura. Any luck with verifying the hypothesis from comment #17? Radoslaw Zarzynski
06:43 PM Bug #57532 (Duplicate): Notice discrepancies in the performance of mclock built-in profiles
Marked as duplicate per comment #4. Radoslaw Zarzynski
06:25 PM Bug #57757: ECUtil: terminate called after throwing an instance of 'ceph::buffer::v15_2_0::end_of...
There is a coredump on the teuthology node (@/ceph/teuthology-archive/yuriw-2022-09-29_16:44:24-rados-wip-lflores-tes... Radoslaw Zarzynski
06:19 PM Bug #57546: rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+la...
I think the fix for this got reverted in quincy (https://tracker.ceph.com/issues/53806) but it's still in @main@. ... Radoslaw Zarzynski
06:12 PM Bug #50042: rados/test.sh: api_watch_notify failures
Assigning to Nitzan just for the sake of testing the hypothesis from https://tracker.ceph.com/issues/50042#note-35. Radoslaw Zarzynski
06:06 PM Cleanup #57587 (Resolved): mon: fix Elector warnings
Resolved by https://github.com/ceph/ceph/pull/48289. Laura Flores
06:05 PM Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
This won't be easy to reproduce, but there are still some options like:
* contacting owners of the external cluster...
Radoslaw Zarzynski

10/04/2022

05:25 PM Bug #50042: rados/test.sh: api_watch_notify failures
/a/yuriw-2022-09-29_16:40:30-rados-wip-all-kickoff-r-distro-default-smithi/7047940... Laura Flores

10/03/2022

10:21 PM Bug #53575: Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64
Found a similar instance here:
/a/lflores-2022-09-30_21:47:41-rados-wip-lflores-testing-distro-default-smithi/7050...
Laura Flores
10:07 PM Bug #57546: rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+la...
/a/yuriw-2022-09-29_16:44:24-rados-wip-lflores-testing-distro-default-smithi/7048304
/a/lflores-2022-09-30_21:47:41-...
Laura Flores
10:01 PM Bug #57757: ECUtil: terminate called after throwing an instance of 'ceph::buffer::v15_2_0::end_of...
Put affected version as "14.2.9" since there is no option for "14.2.19". Laura Flores
09:59 PM Bug #57757 (Fix Under Review): ECUtil: terminate called after throwing an instance of 'ceph::buff...
/a/yuriw-2022-09-29_16:44:24-rados-wip-lflores-testing-distro-default-smithi/7048173/remote/smithi133/crash/posted/20... Laura Flores
12:59 PM Bug #57751 (Resolved): LibRadosAio.SimpleWritePP hang and pkill
/a/nmordech-2022-10-02_08:27:55-rados:verify-wip-nm-51282-distro-default-smithi/7051967/... Nitzan Mordechai

09/30/2022

07:13 PM Bug #17170 (Fix Under Review): mon/monclient: update "unable to obtain rotating service keys when...
Greg Farnum
04:49 PM Bug #57105: quincy: ceph osd pool set <pool> size math error
Looks like in both cases something is being subtracted from a zero-value unsigned int64 and overflowing.
2^64 − ...
Brian Woods
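
A worked illustration of the arithmetic described above (a generic sketch, not the actual pool-size code): subtracting from an unsigned 64-bit zero wraps around to 2^64 minus the subtrahend.

#include <cstdint>
#include <iostream>

int main() {
  uint64_t count = 0;                // e.g. a size counter already at zero
  uint64_t delta = 1;
  uint64_t wrapped = count - delta;  // wraps to 2^64 - 1
  std::cout << wrapped << '\n';      // prints 18446744073709551615
  return 0;
}
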
03:37 PM Bug #57105: quincy: ceph osd pool set <pool> size math error
Setting the size (from 3) to 2, then setting it to 1 works...... Brian Woods
03:38 AM Bug #57105: quincy: ceph osd pool set <pool> size math error
I created a new cluster today to do a very specific test and ran into this (or something like it) again. In th... Brian Woods
10:40 AM Bug #49777 (Resolved): test_pool_min_size: 'check for active or peered' reached maximum tries (5)...
Konstantin Shalygin
10:39 AM Backport #57022 (Resolved): pacific: test_pool_min_size: 'check for active or peered' reached max...
Konstantin Shalygin
09:28 AM Bug #50192 (Resolved): FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_...
Konstantin Shalygin
09:27 AM Backport #50274 (Resolved): pacific: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get...
Konstantin Shalygin
09:27 AM Bug #53516 (Resolved): Disable health warning when autoscaler is on
Konstantin Shalygin
09:27 AM Backport #53644 (Resolved): pacific: Disable health warning when autoscaler is on
Konstantin Shalygin
09:27 AM Bug #51942 (Resolved): src/osd/scrub_machine.cc: FAILED ceph_assert(state_cast<const NotActive*>())
Konstantin Shalygin
09:26 AM Backport #53339 (Resolved): pacific: src/osd/scrub_machine.cc: FAILED ceph_assert(state_cast<cons...
Konstantin Shalygin
09:26 AM Bug #55001 (Resolved): rados/test.sh: Early exit right after LibRados global tests complete
Konstantin Shalygin
09:26 AM Backport #57029 (Resolved): pacific: rados/test.sh: Early exit right after LibRados global tests ...
Konstantin Shalygin
09:26 AM Bug #57119 (Resolved): Heap command prints with "ceph tell", but not with "ceph daemon"
Konstantin Shalygin
09:25 AM Backport #57313 (Resolved): pacific: Heap command prints with "ceph tell", but not with "ceph dae...
Konstantin Shalygin
05:18 AM Backport #57372 (Resolved): quincy: segfault in librados via libcephsqlite
Konstantin Shalygin
04:23 AM Bug #57532: Notice discrepancies in the performance of mclock built-in profiles
As Sridhar has mentioned in the BZ, the Case 2 results are due to the max limit setting for best effort clients. This... Aishwarya Mathuria
02:19 AM Bug #49888: rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum ...
/a/yuriw-2022-09-27_23:37:28-rados-wip-yuri2-testing-2022-09-27-1455-distro-default-smithi/7046230/ Kamoltat (Junior) Sirivadhna

09/29/2022

08:37 PM Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- This was visible again in the LRC upgrade today.... Vikhyat Umrao
07:31 PM Bug #50222: osd: 5.2s0 deep-scrub : stat mismatch
yuriw-2022-09-27_23:37:28-rados-wip-yuri2-testing-2022-09-27-1455-distro-default-smithi/7046253 Kamoltat (Junior) Sirivadhna
07:21 PM Bug #53768: timed out waiting for admin_socket to appear after osd.2 restart in thrasher/defaults...
yuriw-2022-09-27_23:37:28-rados-wip-yuri2-testing-2022-09-27-1455-distro-default-smithi/7046234 Kamoltat (Junior) Sirivadhna
06:02 PM Bug #55435 (Resolved): mon/Elector: notify_ranked_removed() does not properly erase dead_ping in ...
Konstantin Shalygin
06:01 PM Backport #56550 (Resolved): pacific: mon/Elector: notify_ranked_removed() does not properly erase...
Konstantin Shalygin
03:55 PM Bug #54611 (Resolved): prometheus metrics shows incorrect ceph version for upgraded ceph daemon
Konstantin Shalygin
03:54 PM Backport #55309 (Resolved): pacific: prometheus metrics shows incorrect ceph version for upgraded...
Konstantin Shalygin
02:52 PM Bug #57727: mon_cluster_log_file_level option doesn't take effect
Yes. I was trying to close it as a duplicate after editing my comment. Thank you for closing it. Prashant D
02:50 PM Bug #57727 (Duplicate): mon_cluster_log_file_level option doesn't take effect
Ah, you edited your comment to say "Closing this tracker as a duplicate of 57049". Ilya Dryomov
02:48 PM Bug #57727 (Fix Under Review): mon_cluster_log_file_level option doesn't take effect
Ilya Dryomov
02:41 PM Bug #57727: mon_cluster_log_file_level option doesn't take effect
Hi Ilya,
I had a PR#47480 opened for this issue but closed it in favor of PR#47502. We have an old tracker 57049 fo...
Prashant D
02:00 PM Bug #57727 (Duplicate): mon_cluster_log_file_level option doesn't take effect
This appears to be a regression introduced in quincy by https://github.com/ceph/ceph/pull/42014:... Ilya Dryomov
02:44 PM Bug #57049: cluster logging does not adhere to mon_cluster_log_file_level
I had a PR#47480 opened for this issue but closed it in favor of PR#47502. PR#47502 addresses this issue along wi... Prashant D
02:15 PM Backport #56735 (Resolved): octopus: unnecessarily long laggy PG state
Konstantin Shalygin
02:14 PM Bug #50806 (Resolved): osd/PrimaryLogPG.cc: FAILED ceph_assert(attrs || !recovery_state.get_pg_lo...
Konstantin Shalygin
02:13 PM Backport #50893 (Resolved): pacific: osd/PrimaryLogPG.cc: FAILED ceph_assert(attrs || !recovery_s...
Konstantin Shalygin
02:07 PM Bug #55158 (Resolved): mon/OSDMonitor: properly set last_force_op_resend in stretch mode
Konstantin Shalygin
02:07 PM Backport #55281 (Resolved): pacific: mon/OSDMonitor: properly set last_force_op_resend in stretch...
Konstantin Shalygin
11:58 AM Bug #57699: slow osd boot with valgrind (reached maximum tries (50) after waiting for 300 seconds)
I was not able to reproduce it with more debug messages. I created a PR with the debug messages and will wait for re... Nitzan Mordechai
07:28 AM Bug #56289 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
Matan Breizman
07:28 AM Bug #54710 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
Matan Breizman
07:28 AM Bug #54709 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
Matan Breizman
07:21 AM Bug #54708 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
Matan Breizman
07:02 AM Bug #49689: osd/PeeringState.cc: ceph_abort_msg("past_interval start interval mismatch") start
Radoslaw Zarzynski wrote:
> A note from the bug scrub: work in progress.
WIP: https://gist.github.com/Matan-B/ca5...
Matan Breizman
02:47 AM Bug #57532: Notice discrepancies in the performance of mclock built-in profiles
Hi Bharath, could you also add the mClock configuration values from the @osd config show@ command here?
Aishwarya Mathuria

09/28/2022

06:03 PM Bug #53806 (New): unnecessarily long laggy PG state
Reopening b/c the original fix had to be reverted: https://github.com/ceph/ceph/pull/44499#issuecomment-1247315820. Radoslaw Zarzynski
05:54 PM Bug #57618: rados/test.sh hang and pkilled (LibRadosWatchNotifyEC.WatchNotify)
Note from a scrub: might be worth talking about. Radoslaw Zarzynski
05:51 PM Bug #57650 (In Progress): mon-stretch: reweighting an osd to a big number, then back to original ...
Radoslaw Zarzynski
05:51 PM Bug #57678 (Fix Under Review): Mon fail to send pending metadata through MMgrUpdate after an upgr...
Radoslaw Zarzynski
05:50 PM Bug #57698: osd/scrub: "scrub a chunk" requests are sent to the wrong set of replicas
What are the symptoms? How bad is it? A hang maybe? I'm asking to understand the impact. Radoslaw Zarzynski
05:48 PM Bug #57698 (In Progress): osd/scrub: "scrub a chunk" requests are sent to the wrong set of replicas
IIRC Ronen has mentioned the scrub code interchanges @get_acting_set()@ and @get_acting_recovery_backfill()@. Radoslaw Zarzynski
01:40 PM Bug #57698 (Resolved): osd/scrub: "scrub a chunk" requests are sent to the wrong set of replicas
The Primary registers its intent to scrub with 'get_actingset()', as it should.
But the actual chunk requests ar...
Ronen Friedman
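
A hypothetical sketch of the mismatch described above (illustrative only; the real logic lives in the OSD scrubber): the scrub intent is registered against the acting set, while the chunk requests iterate a different, larger set, so some replicas receive requests they never registered for.

#include <iostream>
#include <set>

int main() {
  // Registered the scrub with the acting set...
  std::set<int> actingset = {0, 1, 2};
  // ...but requests go to acting + recovery/backfill peers.
  std::set<int> acting_recovery_backfill = {0, 1, 2, 3};

  for (int osd : acting_recovery_backfill) {
    if (!actingset.count(osd))
      std::cout << "chunk request sent to unregistered replica osd." << osd << '\n';
  }
  return 0;
}
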
05:45 PM Bug #57699 (In Progress): slow osd boot with valgrind (reached maximum tries (50) after waiting f...
Marking WIP per our morning talk. Radoslaw Zarzynski
01:58 PM Bug #57699 (Resolved): slow osd boot with valgrind (reached maximum tries (50) after waiting for ...
/a/yuriw-2022-09-23_20:38:59-rados-wip-yuri6-testing-2022-09-23-1008-quincy-distro-default-smithi/7042504 ... Nitzan Mordechai
05:44 PM Backport #57705 (Resolved): pacific: mon/MonMap.h: FAILED ceph_assert(m < ranks.size()) when redu...
Backport Bot
05:44 PM Backport #57704 (Resolved): quincy: mon/MonMap.h: FAILED ceph_assert(m < ranks.size()) when reduc...
Backport Bot
05:43 PM Bug #57529 (In Progress): mclock backfill is getting higher priority than WPQ
Marking as WIP as IIRC Sridhar was talking about this issue during core standups. Radoslaw Zarzynski
05:42 PM Bug #57573 (In Progress): intrusive_lru leaking memory when
As I understood:
1. @evict()@ intends to not free too much (which makes sense).
2. The dtor reuses @evict()@ for c...
Radoslaw Zarzynski
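
A minimal sketch of the leak pattern in points 1-2 above (assumed shape, not the real src/common/intrusive_lru.h code): evict() deliberately frees only down to a watermark, which is correct during normal operation, but a destructor that reuses the same evict() leaves the last entries unfreed.

#include <cstddef>
#include <list>

struct lru {
  std::list<int*> entries;
  std::size_t watermark = 4;

  // Frees only down to 'watermark' -- fine while the lru is live.
  void evict() {
    while (entries.size() > watermark) {
      delete entries.back();
      entries.pop_back();
    }
  }

  ~lru() {
    evict();  // bug: up to 'watermark' entries are never freed
    // correct cleanup would free everything:
    // for (int* p : entries) delete p;
  }
};
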
05:39 PM Bug #49689: osd/PeeringState.cc: ceph_abort_msg("past_interval start interval mismatch") start
A note from the bug scrub: work in progress. Radoslaw Zarzynski
05:35 PM Bug #50089 (Pending Backport): mon/MonMap.h: FAILED ceph_assert(m < ranks.size()) when reducing n...
Neha Ojha
11:06 AM Bug #50089: mon/MonMap.h: FAILED ceph_assert(m < ranks.size()) when reducing number of monitors i...
... Gaurav Sitlani
11:03 AM Bug #50089: mon/MonMap.h: FAILED ceph_assert(m < ranks.size()) when reducing number of monitors i...
I am seeing the same crash in version : ceph version 16.2.10 and just noticed that PR linked in this thread is merged... Gaurav Sitlani
01:10 PM Backport #57696 (Resolved): quincy: ceph log last command fail to log by verbosity level
https://github.com/ceph/ceph/pull/50407 Backport Bot
01:04 PM Feature #52424 (Resolved): [RFE] Limit slow request details to mgr log
Prashant D
01:03 PM Bug #57340 (Pending Backport): ceph log last command fail to log by verbosity level
Prashant D

09/27/2022

01:02 PM Bug #17170 (New): mon/monclient: update "unable to obtain rotating service keys when osd init" to...
Greg Farnum
01:02 PM Bug #17170 (Closed): mon/monclient: update "unable to obtain rotating service keys when osd init"...
This report can technically have other causes, but it's just about always because the OSDs are too far out of clock sync wi... Greg Farnum
03:12 AM Bug #57678 (Resolved): Mon fail to send pending metadata through MMgrUpdate after an upgrade resu...
The prometheus metrics still show an older ceph version for the upgraded mon. This issue is observed if we upgrade cluste... Prashant D

09/26/2022

02:44 PM Bug #51688 (In Progress): "stuck peering for" warning is misleading
Laura Flores
02:44 PM Bug #51688: "stuck peering for" warning is misleading
Shreyansh Sancheti is working on this bug. Laura Flores
01:11 PM Backport #57258 (In Progress): pacific: Assert in Ceph messenger
Konstantin Shalygin
12:29 PM Backport #56722 (In Progress): pacific: osd thread deadlock
Konstantin Shalygin
09:20 AM Backport #55633: octopus: ceph-osd takes all memory before oom on boot
Konstantin Shalygin wrote:
> Igor, seems when the `version` field is not set it's possible to change the issue `status`
>
...
Igor Fedotov

09/24/2022

08:08 AM Bug #56495 (Resolved): Log at 1 when Throttle::get_or_fail() fails
Konstantin Shalygin
08:08 AM Backport #56641 (Resolved): quincy: Log at 1 when Throttle::get_or_fail() fails
Konstantin Shalygin
08:07 AM Backport #56642 (Resolved): pacific: Log at 1 when Throttle::get_or_fail() fails
Konstantin Shalygin
08:04 AM Backport #57257 (Resolved): quincy: Assert in Ceph messenger
Konstantin Shalygin
08:03 AM Backport #56723 (Resolved): quincy: osd thread deadlock
Konstantin Shalygin
07:58 AM Backport #55633: octopus: ceph-osd takes all memory before oom on boot
Igor, seems when the `version` field is not set it's possible to change the issue `status`
Radoslaw, what is the current s...
Konstantin Shalygin
07:57 AM Backport #55633 (In Progress): octopus: ceph-osd takes all memory before oom on boot
Konstantin Shalygin
07:56 AM Backport #55631 (Resolved): pacific: ceph-osd takes all memory before oom on boot
Now that the PR is merged, set to resolved Konstantin Shalygin

09/22/2022

08:30 PM Backport #56642: pacific: Log at 1 when Throttle::get_or_fail() fails
Radoslaw Zarzynski wrote:
> https://github.com/ceph/ceph/pull/47764
merged
Yuri Weinstein
05:11 PM Bug #57650: mon-stretch: reweighting an osd to a big number, then back to original causes uneven ...
ceph osd tree:... Kamoltat (Junior) Sirivadhna
05:09 PM Bug #57650 (In Progress): mon-stretch: reweighting an osd to a big number, then back to original ...
Reweighting an osd from 0.0900 to 0.7000 and then back to 0.0900 causes uneven weights between two zones rep...
Kamoltat (Junior) Sirivadhna
03:03 PM Bug #57628: osd:PeeringState.cc: FAILED ceph_assert(info.history.same_interval_since != 0)
The same issue was reported in telemetry also on version 15.0.0:
http://telemetry.front.sepia.ceph.com:4000/d/jByk5H...
Yaarit Hatuka
02:07 PM Bug #57570 (Fix Under Review): mon-stretched_cluster: Site weights are not monitored post stretch...
Kamoltat (Junior) Sirivadhna
12:43 PM Bug #57632 (In Progress): test_envlibrados_for_rocksdb: free(): invalid pointer
Matan Breizman
06:44 AM Bug #57632 (Closed): test_envlibrados_for_rocksdb: free(): invalid pointer
/a/kchai-2022-08-23_13:19:39-rados-wip-kefu-testing-2022-08-22-2243-distro-default-smithi/6987883/... Matan Breizman
06:45 AM Bug #57163 (Resolved): free(): invalid pointer
test_envlibrados_for_rocksdb failure will be tracked here: https://tracker.ceph.com/issues/57632 Matan Breizman
05:23 AM Bug #57546: rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+la...
Thanks for the reproducer Laura, I'm looking into the failures. Aishwarya Mathuria

09/21/2022

10:28 PM Bug #57628: osd:PeeringState.cc: FAILED ceph_assert(info.history.same_interval_since != 0)
Telemetry also caught this on v14.1.1. Copying that link here to provide the full picture:
http://telemetry.front....
Laura Flores
10:00 PM Bug #57628: osd:PeeringState.cc: FAILED ceph_assert(info.history.same_interval_since != 0)
Caught by Telemetry, happened twice on one 16.2.7 cluster:
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/...
Laura Flores
09:59 PM Bug #57628: osd:PeeringState.cc: FAILED ceph_assert(info.history.same_interval_since != 0)
Might be Tracker #39659, but there aren't any logs anymore, so no way to be sure. Laura Flores
09:58 PM Bug #57628 (In Progress): osd:PeeringState.cc: FAILED ceph_assert(info.history.same_interval_sinc...
/a/yuriw-2022-09-09_14:59:25-rados-wip-yuri2-testing-2022-09-06-1007-pacific-distro-default-smithi/7022809... Laura Flores
03:18 PM Bug #51688: "stuck peering for" warning is misleading
Peering PGs can be simulated in a vstart cluster by marking an OSD down with `./bin/ceph osd down <id>`.
Laura Flores
02:42 PM Bug #51688: "stuck peering for" warning is misleading
The relevant code would be in `src/mon/PGMap.cc` and `src/mon/PGMap.h`. Laura Flores
11:07 AM Bug #57618: rados/test.sh hang and pkilled (LibRadosWatchNotifyEC.WatchNotify)
It will only happen with EC pools; the hang will happen when not all osds are up. But still, I'm not sure if we suppos... Nitzan Mordechai
06:28 AM Bug #57618 (Resolved): rados/test.sh hang and pkilled (LibRadosWatchNotifyEC.WatchNotify)
Job stopped with... Nitzan Mordechai
10:39 AM Bug #57616 (Resolved): osd/scrub: on_replica_init() cannot be called twice
Ronen Friedman

09/20/2022

12:23 PM Bug #57616 (Resolved): osd/scrub: on_replica_init() cannot be called twice
on_replica_init() may be called twice for a specific scrub-chunk request from a replica.
But after 30facb0f2b, it st...
Ronen Friedman
09:00 AM Backport #57373 (In Progress): pacific: segfault in librados via libcephsqlite
Matan Breizman

09/19/2022

03:37 PM Bug #57340: ceph log last command fail to log by verbosity level
https://github.com/ceph/ceph/pull/47873 merged Yuri Weinstein
03:09 PM Bug #57600 (New): thrash-erasure-code: AssertionError: wait_for_recovery timeout due to "active+r...
/a/yuriw-2022-08-24_16:39:47-rados-wip-yuri4-testing-2022-08-24-0707-pacific-distro-default-smithi/6990392
Descripti...
Laura Flores
03:08 PM Bug #57599 (New): thrash-erasure-code: AssertionError: wait_for_recovery timeout due to "recoveri...
/a/yuriw-2022-06-23_21:29:45-rados-wip-yuri4-testing-2022-06-22-1415-pacific-distro-default-smithi/6895209
Descrip...
Laura Flores
02:45 PM Bug #57546: rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+la...
/a/yuriw-2022-09-14_13:16:11-rados-wip-yuri6-testing-2022-09-13-1352-distro-default-smithi/7032356 Laura Flores
02:45 PM Bug #56149: thrash-erasure-code: AssertionError: wait_for_recovery timeout due to "active+recover...
Matan Breizman wrote:
> /a/yuriw-2022-09-14_13:16:11-rados-wip-yuri6-testing-2022-09-13-1352-distro-default-smithi/7...
Laura Flores
12:27 PM Bug #56149: thrash-erasure-code: AssertionError: wait_for_recovery timeout due to "active+recover...
/a/yuriw-2022-09-14_13:16:11-rados-wip-yuri6-testing-2022-09-13-1352-distro-default-smithi/7032356 Matan Breizman

09/16/2022

10:22 PM Cleanup #57587 (Fix Under Review): mon: fix Elector warnings
Laura Flores
07:33 PM Cleanup #57587 (Resolved): mon: fix Elector warnings
`ninja mon -j$(nproc)` on the latest main branch.... Laura Flores
05:27 PM Bug #57585 (Fix Under Review): ceph versions : mds : remove empty list entries from ceph versions
Vikhyat Umrao
05:25 PM Bug #57585 (Pending Backport): ceph versions : mds : remove empty list entries from ceph versions
Downstream BZ https://bugzilla.redhat.com/show_bug.cgi?id=2110933 Shreyansh Sancheti
03:31 PM Bug #52657: MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)'
/a/yuriw-2022-09-15_17:53:16-rados-quincy-release-distro-default-smithi/7034203/ Neha Ojha
03:29 PM Bug #36304: FAILED ceph_assert(p != pg_slots.end()) in OSDShard::register_and_wake_split_child(PG*)
/a/yuriw-2022-09-15_17:53:16-rados-quincy-release-distro-default-smithi/7034166 Neha Ojha
11:19 AM Fix #57577 (Resolved): osd: Improve osd bench accuracy by using buffers with random patterns
The osd bench currently uses buffers filled with the same character
for all the writes issued. Buffers can be filled...
Sridhar Seshasayee
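
A minimal sketch of the idea (assumed, not the actual osd bench patch): a buffer filled with one repeated character can be trivially compressed or deduplicated by the underlying device, inflating measured throughput, whereas random contents defeat that.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Fill a write buffer with a random byte pattern instead of a
// single repeated character, so device-level compression cannot
// shortcut the writes and skew the benchmark.
std::vector<char> make_bench_buffer(std::size_t len, std::uint32_t seed) {
  std::vector<char> buf(len);
  std::mt19937 gen(seed);
  std::uniform_int_distribution<int> dist(0, 255);
  std::generate(buf.begin(), buf.end(),
                [&] { return static_cast<char>(dist(gen)); });
  return buf;
}
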
06:22 AM Cleanup #52752: fix warnings
May be evident with "ninja common -j$(nproc)". Laura Flores
06:20 AM Cleanup #52754: windows warnings
Should be evident when running "ninja client -j$(nproc)". Laura Flores

09/15/2022

09:34 PM Cleanup #52754: windows warnings
New link: https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=windows,DIST=w... Laura Flores
09:32 PM Bug #53251: compiler warning about deprecated fmt::format_to()
Check by running `ninja mon -j$(nproc)` under the ceph/build directory. Laura Flores
08:15 PM Bug #57573 (Pending Backport): intrusive_lru leaking memory when
Values allocated during inserts in the lru defined in
src/common/intrusive_lru.h that are
unreferenced are sometim...
Ali Maredia
03:42 PM Feature #57557: Ability to roll-back the enabled stretch-cluster configuration
As discussed in https://bugzilla.redhat.com/show_bug.cgi?id=2094016, this hasn't been implemented yet. Neha Ojha
12:47 PM Feature #57557 (New): Ability to roll-back the enabled stretch-cluster configuration
We have enabled a stretch-cluster configuration on a pre-production system with several already existing and used poo... Dmitry Smirnov
03:32 PM Bug #57570 (Resolved): mon-stretched_cluster: Site weights are not monitored post stretch mode de...
Site weights are not monitored post-stretch mode deployment.
Basically, after we successfully enabled stretch mode,...
Kamoltat (Junior) Sirivadhna
02:33 PM Bug #57546: rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+la...
Running the reproducer to see whether this bug also occurs on main:
http://pulpito.front.sepia.ceph.com/lflores-2022...
Laura Flores
04:50 AM Backport #57545 (In Progress): quincy: CommandFailedError: Command failed (workunit test rados/te...
Nitzan Mordechai
04:48 AM Backport #57544 (In Progress): pacific: CommandFailedError: Command failed (workunit test rados/t...
Nitzan Mordechai
12:58 AM Backport #57313 (In Progress): pacific: Heap command prints with "ceph tell", but not with "ceph ...
Prashant D

09/14/2022

09:18 PM Bug #57546: rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+la...
Quincy revert PR https://github.com/ceph/ceph/pull/48104
Not sure if we want to put this as a "fix".
Laura Flores
09:03 PM Bug #57546 (Fix Under Review): rados/thrash-erasure-code: wait_for_recovery timeout due to "activ...
When testing the Quincy RC for 17.2.4, we discovered this failure:
Description: rados/thrash-erasure-code/{ceph cl...
Laura Flores
07:24 PM Backport #57545 (Resolved): quincy: CommandFailedError: Command failed (workunit test rados/test_...
https://github.com/ceph/ceph/pull/48113 Backport Bot
07:24 PM Backport #57544 (Resolved): pacific: CommandFailedError: Command failed (workunit test rados/test...
https://github.com/ceph/ceph/pull/48112 Backport Bot
07:16 PM Bug #45721 (Pending Backport): CommandFailedError: Command failed (workunit test rados/test_pytho...
Neha Ojha
03:01 PM Bug #44595: cache tiering: Error: oid 48 copy_from 493 returned error code -2
/a/yuriw-2022-09-10_14:05:53-rados-quincy-release-distro-default-smithi/7024401... Laura Flores
09:44 AM Bug #49524 (In Progress): ceph_test_rados_delete_pools_parallel didn't start
Nitzan Mordechai
09:44 AM Bug #49524: ceph_test_rados_delete_pools_parallel didn't start
My theory is that the fork failed, which caused all the tests not to run; this is the only place we won't get any printing... Nitzan Mordechai
09:35 AM Bug #45702 (Fix Under Review): PGLog::read_log_and_missing: ceph_assert(miter == missing.get_item...
Nitzan Mordechai
07:10 AM Bug #57533 (Resolved): Able to modify the mclock reservation, weight and limit parameters when bu...

[ceph: root@magna086 /]# ceph config get osd osd_mclock_scheduler_client_res
1
[ceph: root@magna086 /]# ceph conf...
Srinivasa Bharath Kanta
06:27 AM Bug #57532: Notice discrepancies in the performance of mclock built-in profiles
From the following data, I noticed that:
1. In case-1, for all profiles the IO reservations for high_clien...
Srinivasa Bharath Kanta
06:23 AM Bug #57532 (Duplicate): Notice discrepancies in the performance of mclock built-in profiles
Downstream BZ- https://bugzilla.redhat.com/show_bug.cgi?id=2126274 Srinivasa Bharath Kanta
 
