Activity
From 02/05/2023 to 03/06/2023
03/06/2023
- 10:55 PM Bug #49961 (New): scrub/osd-recovery-scrub.sh: TEST_recovery_scrub_1 failed
- /a/yuriw-2023-03-01_19:28:10-rados-wip-yuri3-testing-2023-03-01-0812-quincy-distro-default-smithi/7190326...
- 10:00 PM Bug #58925 (Fix Under Review): rocksdb "Leak_StillReachable" memory leak in mons
- 08:37 PM Bug #58925: rocksdb "Leak_StillReachable" memory leak in mons
- Steps to reproduce:...
- 07:50 PM Bug #58925: rocksdb "Leak_StillReachable" memory leak in mons
- Caused by https://github.com/ceph/ceph/pull/49006.
- 07:33 PM Bug #58925: rocksdb "Leak_StillReachable" memory leak in mons
- Laura Flores wrote:
> [...]
This example was from /a/yuriw-2023-03-03_17:39:09-rados-reef-distro-default-smithi/7...
- 07:32 PM Bug #58925 (Resolved): rocksdb "Leak_StillReachable" memory leak in mons
- ...
- 07:33 PM Backport #57117 (In Progress): quincy: mon: race condition between `mgr fail` and MgrMonitor::pre...
- 07:25 PM Backport #57696 (In Progress): quincy: ceph log last command fail to log by verbosity level
- 07:22 PM Backport #58169 (In Progress): quincy: extra debugs for: [mon] high cpu usage by fn_monstore thread
- 07:17 PM Backport #58334 (In Progress): quincy: mon/monclient: update "unable to obtain rotating service k...
- 07:16 PM Bug #58496: osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty())
- /a/yuriw-2023-03-03_17:39:09-rados-reef-distro-default-smithi/7193142
- 07:14 PM Bug #47838: mon/test_mon_osdmap_prune.sh: first_pinned != trim_to
- /a/yuriw-2023-03-03_17:39:09-rados-reef-distro-default-smithi/7193126...
- 06:23 PM Bug #58739: "Leak_IndirectlyLost" valgrind report on mon.a
- The quincy backport of the auth key rotation (https://github.com/ceph/ceph/pull/48093) got merged on 8 Feb. However, ...
- 06:09 PM Bug #58915: map eXX had wrong heartbeat front addr
- I wonder whether this is a fallout from the public_bind changes (for the overlapping IP problem), but it looks like the bra...
- 06:01 PM Bug #55141 (In Progress): thrashers/fastread: assertion failure: rollback_info_trimmed_to == head
- 02:36 PM Bug #55141: thrashers/fastread: assertion failure: rollback_info_trimmed_to == head
- Since this is an EC pool, the NO_SHARD is confusing; we are not maintaining rollback_info_trimmed_to on replicas, lookin...
- 01:22 PM Bug #55141: thrashers/fastread: assertion failure: rollback_info_trimmed_to == head
- I'm probably missing something here, but I'll try to summarize my finds...
- 05:53 PM Backport #56602 (Resolved): quincy: ceph report missing osdmap_clean_epochs if answered by peon
- 05:49 PM Feature #54280 (Resolved): support truncation sequences in sparse reads
- 05:48 PM Bug #54509 (Resolved): FAILED ceph_assert due to issue manifest API to the original object
- 05:47 PM Bug #54558 (Resolved): malformed json in a Ceph RESTful API call can stop all ceph-mon services
- 05:46 PM Backport #55296 (Resolved): pacific: malformed json in a Ceph RESTful API call can stop all ceph-...
- 05:46 PM Backport #55298 (Resolved): octopus: malformed json in a Ceph RESTful API call can stop all ceph-...
- 05:45 PM Backport #55297 (Resolved): quincy: malformed json in a Ceph RESTful API call can stop all ceph-m...
- 05:43 PM Bug #54994 (Resolved): osd: add scrub duration for scrubs after recovery
- 05:43 PM Backport #55282 (Resolved): quincy: osd: add scrub duration for scrubs after recovery
- 05:42 PM Bug #55088 (Resolved): Manager is failing to keep updated metadata in daemon_state for upgraded M...
- 05:42 PM Backport #55305 (Resolved): quincy: Manager is failing to keep updated metadata in daemon_state f...
- 05:39 PM Backport #55542 (Rejected): octopus: should use TCMalloc for better performance
- 05:32 PM Bug #57017: mon-stretched_cluster: degraded stretched mode lead to Monitor crash
- The quincy backport is important and needed.
- 05:30 PM Bug #57533 (Resolved): Able to modify the mclock reservation, weight and limit parameters when bu...
- 05:29 PM Backport #58708 (Resolved): quincy: Able to modify the mclock reservation, weight and limit param...
- 05:28 PM Fix #57577 (Resolved): osd: Improve osd bench accuracy by using buffers with random patterns
- 05:28 PM Backport #58214 (Resolved): quincy: osd: Improve osd bench accuracy by using buffers with random ...
- 05:26 PM Backport #58638 (Resolved): pacific: Mon fail to send pending metadata through MMgrUpdate after a...
- 05:24 PM Bug #57859 (Resolved): bail from handle_command() if _generate_command_map() fails
- 05:24 PM Backport #58007 (Resolved): pacific: bail from handle_command() if _generate_command_map() fails
- 05:24 PM Bug #57698 (Resolved): osd/scrub: "scrub a chunk" requests are sent to the wrong set of replicas
- 05:24 PM Backport #58006 (Resolved): quincy: bail from handle_command() if _generate_command_map() fails
- 05:22 PM Fix #57963 (Resolved): osd: Misleading information displayed for the running configuration of osd...
- 05:22 PM Backport #58186 (Resolved): quincy: osd: Misleading information displayed for the running configu...
- 05:14 PM Backport #58872 (Rejected): octopus: ClsLock.TestExclusiveEphemeralStealEphemeral failed
- 05:13 PM Bug #44092 (Resolved): mon: config commands do not accept whitespace style config name
- 05:09 PM Backport #57346 (Resolved): quincy: expected valgrind issues and found none
- 05:00 PM Bug #56101 (Resolved): Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- 04:52 PM Backport #58586 (Resolved): quincy: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in funct...
- 03:34 PM Bug #57977: osd:tick checking mon for new map
- The unwanted nonce match causes @OSDMonitor::preprocess_boot()@ to return @true@, and thus prevents @OSDMonitor::p...
03/03/2023
- 10:20 PM Bug #54750: crash: PeeringState::Crashed::Crashed(boost::statechart::state<PeeringState::Crashed,...
- /a/yuriw-2023-02-22_20:55:15-rados-wip-yuri4-testing-2023-02-22-0817-quincy-distro-default-smithi/7184685...
- 10:03 PM Bug #58915 (Fix Under Review): map eXX had wrong heartbeat front addr
- Occurred during "Unwinding manager ceph" task.
/a/yuriw-2023-02-22_20:55:15-rados-wip-yuri4-testing-2023-02-22-081...
- 05:59 PM Bug #51904 (Resolved): test_pool_min_size:AssertionError:wait_for_clean:failed before timeout exp...
- 05:58 PM Backport #57026 (Resolved): pacific: test_pool_min_size:AssertionError:wait_for_clean:failed befo...
03/02/2023
- 08:52 PM Bug #43887: ceph_test_rados_delete_pools_parallel failure
- Kamoltat (Junior) Sirivadhna wrote:
> Encountered this error in: yuriw-2023-03-02_00:09:05-rados-wip-yuri11-testing-...
- 08:18 PM Bug #43887: ceph_test_rados_delete_pools_parallel failure
- Encountered this error in: yuriw-2023-03-02_00:09:05-rados-wip-yuri11-testing-2023-03-01-1424-distro-default-smithi/7...
- 07:40 PM Bug #58739: "Leak_IndirectlyLost" valgrind report on mon.a
- HIT in /a/yuriw-2023-03-02_00:09:05-rados-wip-yuri11-testing-2023-03-01-1424-distro-default-smithi/7191392/remote/smi...
- 05:48 PM Bug #58739: "Leak_IndirectlyLost" valgrind report on mon.a
- https://github.com/ceph/ceph/pull/48641 is already merged. If we don't see new occurrences over some time (a few mont...
- 07:31 PM Bug #55141: thrashers/fastread: assertion failure: rollback_info_trimmed_to == head
- /a/yuriw-2023-03-02_00:09:05-rados-wip-yuri11-testing-2023-03-01-1424-distro-default-smithi/7191380
- 05:52 PM Bug #50637: OSD slow ops warning stuck after OSD fail
- Bump up + ping.
- 02:18 PM Bug #57977: osd:tick checking mon for new map
- Radoslaw Zarzynski wrote:
> Thanks, yite gu!
> The fix is: https://github.com/ceph/ceph/pull/50344/commits/fb868d4e...
- 12:01 PM Bug #57977 (Fix Under Review): osd:tick checking mon for new map
- Thanks, yite gu!
The fix is: https://github.com/ceph/ceph/pull/50344/commits/fb868d4e71d3871cbd17cfbd4a536470e5c023f...
- 01:06 PM Bug #58884 (In Progress): ceph: osd blocklist does not accept v2/v1: prefix for addr
- looks like addr type is CephEntityAddr, which means it will accept "CephEntityAddr: CephIPAddr + optional '/nonce'"
<...
03/01/2023
- 08:46 PM Bug #58894 (Fix Under Review): [pg-autoscaler][mgr] does not throw warn to increase PG count on p...
- 08:32 PM Bug #58894 (Pending Backport): [pg-autoscaler][mgr] does not throw warn to increase PG count on p...
- Here pools test 1-3 should be emitting health warnings like: PG TOO FEW PLEASE SCALE....
- 07:41 PM Bug #58893: test_map_discontinuity: AssertionError: wait_for_clean: failed before timeout expired
- Marking this as related to #51076 since there was a case of `test_map_discontinuity` logged there.
- 07:40 PM Bug #58893 (New): test_map_discontinuity: AssertionError: wait_for_clean: failed before timeout e...
- /a/yuriw-2023-02-24_17:50:19-rados-main-distro-default-smithi/7186711...
- 03:34 PM Bug #57977: osd:tick checking mon for new map
- ...
- 12:44 PM Bug #57977: osd:tick checking mon for new map
- osd.0 happened to restart, but since then it has never rejoined the cluster. I uploaded the osd boot log.
- 11:27 AM Bug #57977: osd:tick checking mon for new map
- Radoslaw Zarzynski wrote:
> Thanks for the update! Yeah, it might get stuck there. To confirm we would need logs with increas...
- 02:44 PM Bug #58288 (Fix Under Review): quincy: mon: pg_num_check() according to crush rule
- The revert is merged: https://github.com/ceph/ceph/pull/49465.
PR #50327 was pushed as the actual fix.
02/28/2023
- 11:14 PM Bug #49428: ceph_test_rados_api_snapshots fails with "rados_mon_command osd pool create failed wi...
- Bunch of tests from LibRadosIoEC failing from "rados_mon_command osd pool create failed with error -22"
/a/yuriw-2...
- 10:57 PM Bug #58052: Empty Pool (zero objects) shows usage.
- Well, I need to move on, so I am deleting the pools. I may try to re-create this in a lab later. If I do, I will tr...
- 07:26 PM Feature #58885 (New): [pg-autoscaler] include warning and explanation in ceph -s when there's ove...
- Currently, we only warn the user about overlapping roots in the mgr log.
Since there have been cases where the user file...
- 05:39 PM Bug #58884 (Pending Backport): ceph: osd blocklist does not accept v2/v1: prefix for addr
- ...
- 03:10 PM Bug #57105 (Resolved): quincy: ceph osd pool set <pool> size math error
- 03:10 PM Bug #54188 (Resolved): Setting too many PGs leads error handling overflow
- 02:47 PM Bug #58141 (Resolved): mon/MonCommands: Support dump_historic_slow_ops
- 02:47 PM Backport #58143 (Resolved): quincy: mon/MonCommands: Support dump_historic_slow_ops
- 02:46 PM Backport #58144 (Resolved): pacific: mon/MonCommands: Support dump_historic_slow_ops
- 02:44 PM Bug #49689 (Resolved): osd/PeeringState.cc: ceph_abort_msg("past_interval start interval mismatch...
- 02:44 PM Bug #55549 (Resolved): OSDs crashing
- 10:53 AM Backport #58872 (In Progress): octopus: ClsLock.TestExclusiveEphemeralStealEphemeral failed
- 10:03 AM Backport #58872 (Rejected): octopus: ClsLock.TestExclusiveEphemeralStealEphemeral failed
- https://github.com/ceph/ceph/pull/50303
- 10:47 AM Backport #58869 (In Progress): quincy: rados/test.sh: api_watch_notify failures
- 10:02 AM Backport #58869 (In Progress): quincy: rados/test.sh: api_watch_notify failures
- https://github.com/ceph/ceph/pull/49938
- 10:47 AM Backport #58868 (In Progress): pacific: rados/test.sh: api_watch_notify failures
- 10:02 AM Backport #58868 (In Progress): pacific: rados/test.sh: api_watch_notify failures
- https://github.com/ceph/ceph/pull/49943
- 10:03 AM Backport #58871 (New): quincy: ClsLock.TestExclusiveEphemeralStealEphemeral failed
- 10:02 AM Backport #58870 (New): pacific: ClsLock.TestExclusiveEphemeralStealEphemeral failed
- 10:02 AM Bug #38357 (Pending Backport): ClsLock.TestExclusiveEphemeralStealEphemeral failed
- 10:01 AM Bug #50042 (Pending Backport): rados/test.sh: api_watch_notify failures
02/27/2023
- 07:00 PM Bug #58379: no active mgr after ~1 hour
- Review-in-progress.
- 06:55 PM Bug #58837: mgr/test_progress.py: test_osd_healthy_recovery fails after timeout
- Hi Junior! Would you find some time for it?
- 06:52 PM Bug #44400 (Won't Fix): Marking OSD out causes primary-affinity 0 to be ignored when up_set has n...
- The discussion's outcome is that the fix would likely do more harm (for sure: bring more complexity) than the s...
- 06:50 PM Bug #57977 (In Progress): osd:tick checking mon for new map
- 06:45 PM Bug #49428: ceph_test_rados_api_snapshots fails with "rados_mon_command osd pool create failed wi...
- Seems like a similar failure:
/a/yuriw-2023-02-16_22:44:43-rados-wip-yuri-testing-2023-02-16-0839-distro-default-s...
- 06:18 PM Bug #58797: scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpectedly low ...
- /a/lflores-2023-02-20_21:22:20-rados-wip-yuri-testing-2023-02-16-0839-distro-default-smithi/7181477...
- 04:11 PM Bug #58797 (Resolved): scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpe...
- 06:09 PM Bug #49961: scrub/osd-recovery-scrub.sh: TEST_recovery_scrub_1 failed
- /a/yuriw-2023-02-16_22:44:43-rados-wip-yuri-testing-2023-02-16-0839-distro-default-smithi/7177204...
02/23/2023
- 08:44 PM Bug #51729: Upmap verification fails for multi-level crush rule
- Hi Chris, yes, I will post another update soon with my findings.
- 07:07 PM Bug #58797 (Fix Under Review): scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR...
- 06:53 PM Bug #58837: mgr/test_progress.py: test_osd_healthy_recovery fails after timeout
- Seen in the mgr logs: 2 pgs stuck in recovery...
- 06:31 PM Bug #58837 (New): mgr/test_progress.py: test_osd_healthy_recovery fails after timeout
- /a/yuriw-2023-02-22_20:55:15-rados-wip-yuri4-testing-2023-02-22-0817-quincy-distro-default-smithi/7184746...
02/22/2023
- 09:00 PM Backport #58708: quincy: Able to modify the mclock reservation, weight and limit parameters when ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50097
merged
- 01:11 PM Bug #21592 (Fix Under Review): LibRadosCWriteOps.CmpExt got 0 instead of -4095-1
02/21/2023
- 10:30 PM Bug #51729: Upmap verification fails for multi-level crush rule
- Is there any news on this? Thanks.
- 04:17 PM Bug #58797 (In Progress): scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Une...
- This is an unintended side effect of https://github.com/ceph/ceph/pull/44749. I will create a fix.
Explanation:
...
- 12:12 PM Feature #55169 (In Progress): crush: should validate rule outputs osds
- 12:11 PM Backport #58816 (New): quincy: ceph versions : mds : remove empty list entries from ceph versions
- 12:10 PM Backport #58815 (New): quincy: Set single compression algorithm as a default value in ms_osd_comp...
- 12:09 PM Bug #57585 (Pending Backport): ceph versions : mds : remove empty list entries from ceph versions
- 12:07 PM Bug #58410 (Pending Backport): Set single compression algorithm as a default value in ms_osd_comp...
02/20/2023
- 10:30 PM Bug #58410: Set single compression algorithm as a default value in ms_osd_compression_algorithm i...
- https://github.com/ceph/ceph/pull/49843 merged
- 08:24 PM Bug #58587 (Resolved): test_dedup_tool.sh: test_dedup_object fails when pool 'dedup_chunk_pool' d...
- 06:30 PM Bug #58797: scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpectedly low ...
- Also seen in a testing wip, but none of the PRs in the batch have been merged yet:
/a/lflores-2023-02-17_17:48:50-ra...
- 06:28 PM Bug #58797 (Pending Backport): scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR...
- /a/yuriw-2023-02-17_20:31:15-rados-main-distro-default-smithi/7179124...
- 02:49 AM Bug #58052: Empty Pool (zero objects) shows usage.
- Radoslaw Zarzynski wrote:
> Could you please provide results of @ceph pg 7.3e query@?
Did that cover what you nee...
02/19/2023
- 07:00 AM Bug #58739: "Leak_IndirectlyLost" valgrind report on mon.a
- Since osds 5/6/7 are hitting other valgrind leak errors related to https://github.com/ceph/ceph/pull/49522, I thin...
02/17/2023
- 04:13 AM Documentation #58752 (New): doc/rados/configuration/mon-lookup-dns.rst: considers only Messenger ...
- The MON lookup using DNS SRV records documentation at https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-...
02/16/2023
- 04:34 PM Backport #58144: pacific: mon/MonCommands: Support dump_historic_slow_ops
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49233
merged
- 04:33 PM Backport #58007: pacific: bail from handle_command() if _generate_command_map() fails
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48846
merged
- 03:45 PM Backport #58214: quincy: osd: Improve osd bench accuracy by using buffers with random patterns
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49323
merged
- 12:25 AM Bug #57105: quincy: ceph osd pool set <pool> size math error
- https://github.com/ceph/ceph/pull/49465 merged
- 12:23 AM Backport #58186: quincy: osd: Misleading information displayed for the running configuration of o...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49281
merged
- 12:22 AM Backport #58143: quincy: mon/MonCommands: Support dump_historic_slow_ops
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49232
merged
02/15/2023
- 10:46 PM Bug #58739 (New): "Leak_IndirectlyLost" valgrind report on mon.a
- /a/yuriw-2023-02-13_21:53:12-rados-wip-yuri-testing-2023-02-06-1155-quincy-distro-default-smithi/7171896/remote/smith...
- 11:43 AM Bug #56097 (Fix Under Review): Timeout on `sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtes...
- 08:32 AM Bug #56097: Timeout on `sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ...
- The thrasher will try to inject args into live osds; that doesn't mean they are up yet, and if they are not up, we will end ...
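A rough illustration of that race (a hedged sketch only, not the actual qa/tasks thrasher code; the helper names are hypothetical): gate injectargs on the OSD actually being reported up in the osdmap before sending the tell.
<pre>
import json
import subprocess
import time

def osd_is_up(osd_id: int) -> bool:
    # "up" here means up in the osdmap, not merely "not killed by the thrasher".
    dump = json.loads(subprocess.check_output(
        ["ceph", "osd", "dump", "--format", "json"]))
    return any(o["osd"] == osd_id and o["up"] == 1 for o in dump["osds"])

def inject_args_when_up(osd_id: int, args: str, timeout: float = 120.0) -> None:
    # Wait until the OSD is up before sending injectargs, so the tell does not
    # stall against a daemon that is still booting.
    deadline = time.monotonic() + timeout
    while not osd_is_up(osd_id):
        if time.monotonic() > deadline:
            raise TimeoutError(f"osd.{osd_id} never came up")
        time.sleep(5)
    subprocess.check_call(["ceph", "tell", f"osd.{osd_id}", "injectargs", args])
</pre>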
02/14/2023
- 06:17 AM Bug #57977: osd:tick checking mon for new map
- Radoslaw Zarzynski wrote:
> Per comment #9 there are 2 hypotheses at the moment:
>
> 1) the nonce issue (small ...
- 12:20 AM Bug #58496: osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty())
- 2023-01-13T14:56:19.349 INFO:tasks.ceph.osd.4.smithi120.stderr:/build/ceph-18.0.0-1762-gcb17f286/src/osd/PeeringStat...
02/13/2023
- 10:43 PM Feature #55169: crush: should validate rule outputs osds
- Shreyansh Sancheti wrote:
> Dan van der Ster wrote:
> > Shreyansh Sancheti wrote:
> > > Need more info on this!
>...
- 10:31 AM Feature #55169: crush: should validate rule outputs osds
- Dan van der Ster wrote:
> Shreyansh Sancheti wrote:
> > Need more info on this!
>
> Sure, what do you need to kn...
- 10:01 PM Bug #58690: thrashosds: IndexError: Cannot choose from an empty sequence
- While thrashing OSDs in _do_thrash() (qa/tasks/ceph_manager.py), the task didn't add OSDs back into the cluster after...
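"Cannot choose from an empty sequence" is the message Python's random.choice() raises on an empty list, so the traceback points at the thrasher drawing from a candidate list that has been drained. A minimal reproduction (the variable name is illustrative, not the actual ceph_manager.py code):
<pre>
import random

live_osds = []  # every OSD removed and none added back yet
try:
    victim = random.choice(live_osds)
except IndexError as e:
    print(e)  # prints: Cannot choose from an empty sequence
</pre>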
- 06:18 PM Bug #58690: thrashosds: IndexError: Cannot choose from an empty sequence
- ...
- 06:16 PM Bug #58690: thrashosds: IndexError: Cannot choose from an empty sequence
- ...
- 08:19 PM Bug #58052: Empty Pool (zero objects) shows usage.
- Scrubbing is almost caught up (though scrubbing doesn't seem to target the oldest first, not sure if that could be im...
- 08:17 PM Bug #58052: Empty Pool (zero objects) shows usage.
- Not 100% sure what command you are wanting, "ceph pg 7.3e" isn't complete.
Do you mean this?
# ceph pg ls 7 | gre...
- 07:17 PM Bug #58052: Empty Pool (zero objects) shows usage.
- From the attached @mgr-server1.log@ (the big log):...
- 07:04 PM Bug #56097: Timeout on `sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ...
- To avoid confusion: @osd.0@ from comment #9 is about the run mentioned in #7....
- 06:53 PM Bug #51688: "stuck peering for" warning is misleading
- Bump up! Need to decide on the @needs-test@.
- 06:50 PM Bug #58467 (Closed): osd: Only have one osd daemon no reply heartbeat on one node
- Closing per the comment #11.
- 06:47 PM Bug #57977: osd:tick checking mon for new map
- It's the nonce issue for sure. Per the @ceph osd dump@ from the description:...
- 06:40 PM Bug #57977: osd:tick checking mon for new map
- Per comment #9 there are 2 hypotheses at the moment:
1) the nonce issue (small should be visible in log entries ...
- 06:31 PM Bug #50637: OSD slow ops warning stuck after OSD fail
- Bug scrub comment: bump up.
- 06:11 PM Bug #47838 (In Progress): mon/test_mon_osdmap_prune.sh: first_pinned != trim_to
- 02:00 PM Bug #47838: mon/test_mon_osdmap_prune.sh: first_pinned != trim_to
- From the monitor logs during OSD map trim: ...
- 02:58 PM Backport #58708 (In Progress): quincy: Able to modify the mclock reservation, weight and limit pa...
- 02:44 PM Backport #58708 (Resolved): quincy: Able to modify the mclock reservation, weight and limit param...
- https://github.com/ceph/ceph/pull/50097
- 02:37 PM Bug #57533 (Pending Backport): Able to modify the mclock reservation, weight and limit parameters...
- 07:20 AM Fix #6109: pg <pgid> mark_unfound_lost fails if a completely-gone OSD still in map
- Hello,
I just had a customer facing this same issue, and to have it on the record, at least since luminous markin...
02/09/2023
- 11:58 PM Bug #58690: thrashosds: IndexError: Cannot choose from an empty sequence
- This failure seems sporadic. I reran the same job that failed 50 times, and all succeeded except one, which failed fo...
- 11:57 PM Bug #58690 (New): thrashosds: IndexError: Cannot choose from an empty sequence
- /a/lflores-2023-02-08_20:25:06-rados-wip-lflores-testing-2023-02-06-1529-distro-default-smithi/7162184...
- 11:00 PM Bug #57600: thrash-erasure-code: AssertionError: wait_for_recovery timeout due to "active+recover...
- /a/lflores-2023-02-09_19:24:50-rados-wip-lflores-testing-2023-02-06-1529-distro-default-smithi/7164248
- 07:19 PM Bug #52316: qa/tasks/mon_thrash.py: _do_thrash AssertionError len(s['quorum']) == len(mons)
- /a/lflores-2023-02-09_16:38:16-rados-wip-lflores-testing-2023-02-06-1529-distro-default-smithi/7164042
- 07:12 PM Bug #47838: mon/test_mon_osdmap_prune.sh: first_pinned != trim_to
- /a/lflores-2023-02-08_20:25:06-rados-wip-lflores-testing-2023-02-06-1529-distro-default-smithi/7162262...
- 06:53 PM Bug #58496 (Fix Under Review): osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.emp...
- 05:08 AM Bug #58496: osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty())
- https://github.com/ceph/ceph/pull/49959
- 04:09 PM Feature #55169: crush: should validate rule outputs osds
- Shreyansh Sancheti wrote:
> Need more info on this!
Sure, what do you need to know?
- 08:34 AM Feature #55169 (Need More Info): crush: should validate rule outputs osds
- Need more info on this!
- 02:38 AM Bug #56192: crash: virtual Monitor::~Monitor(): assert(session_map.sessions.empty())
- Looks like http://qa-proxy.ceph.com/teuthology/yuriw-2023-02-02_19:29:06-powercycle-wip-yuri6-testing-2023-01-26-0941...
02/08/2023
- 11:33 PM Bug #52221: crash: void OSD::handle_osd_map(MOSDMap*): assert(p != added_maps_bl.end())
- /a/yuriw-2023-01-27_16:33:50-rados-wip-yuri2-testing-2023-01-26-1532-distro-default-smithi/7142354...
02/07/2023
- 01:25 PM Bug #49689 (Fix Under Review): osd/PeeringState.cc: ceph_abort_msg("past_interval start interval ...
- 09:00 AM Feature #58038 (Resolved): osd: add created_at and ceph_version_when_created metadata
- 08:59 AM Backport #58040: quincy: osd: add created_at and ceph_version_when_created metadata
- https://github.com/ceph/ceph/pull/49159
- 08:59 AM Backport #58040 (Resolved): quincy: osd: add created_at and ceph_version_when_created metadata
02/06/2023
- 10:37 PM Bug #58052: Empty Pool (zero objects) shows usage.
- Not sure if this helps at all, but I was writing a script that generates a histogram of scrub times, and I noticed th...
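A hedged sketch of the kind of script described above, assuming @ceph pg dump pgs --format json@ returns a @pg_stats@ array with a @last_deep_scrub_stamp@ per PG (the exact JSON layout differs between releases):
<pre>
import json
import subprocess
from collections import Counter
from datetime import datetime, timezone

raw = subprocess.check_output(["ceph", "pg", "dump", "pgs", "--format", "json"])
pg_stats = json.loads(raw)["pg_stats"]  # assumed top-level key; varies by release

now = datetime.now(timezone.utc)
histogram = Counter()
for pg in pg_stats:
    # Stamps look like "2023-02-13T21:53:12.000000+0000" (older releases use a
    # space instead of "T"); keep only the seconds-resolution prefix.
    stamp = pg["last_deep_scrub_stamp"][:19].replace(" ", "T")
    scrubbed = datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
    histogram[(now - scrubbed).days] += 1

for days in sorted(histogram):
    print(f"{days:>3}d ago: {'#' * histogram[days]}")
</pre>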
- 07:09 PM Documentation #58650 (Need More Info): Write an overview of multisite.rst that explains to first-...
- After the line-editing of all ~1500 lines of doc/radosgw/multisite.rst is finished, a text of no more than three-hund...
- 07:04 PM Bug #54829: crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t*) con...
- /a/yuriw-2023-01-27_16:33:50-rados-wip-yuri2-testing-2023-01-26-1532-distro-default-smithi/7142297...
- 06:26 PM Bug #58496: osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty())
- /a/yuriw-2023-01-27_01:09:38-rados-wip-yuri2-testing-2023-01-26-1532-distro-default-smithi/7140509
- 05:50 PM Backport #58637 (Resolved): pacific: osd/scrub: "scrub a chunk" requests are sent to the wrong se...
- 05:49 PM Backport #58636 (Resolved): quincy: osd/scrub: "scrub a chunk" requests are sent to the wrong set...