Activity
From 02/01/2023 to 03/02/2023
03/02/2023
- 08:52 PM Bug #43887: ceph_test_rados_delete_pools_parallel failure
- Kamoltat (Junior) Sirivadhna wrote:
> Encountered this error in: yuriw-2023-03-02_00:09:05-rados-wip-yuri11-testing-...
- 08:18 PM Bug #43887: ceph_test_rados_delete_pools_parallel failure
- Encountered this error in: yuriw-2023-03-02_00:09:05-rados-wip-yuri11-testing-2023-03-01-1424-distro-default-smithi/7...
- 07:40 PM Bug #58739: "Leak_IndirectlyLost" valgrind report on mon.a
- HIT in /a/yuriw-2023-03-02_00:09:05-rados-wip-yuri11-testing-2023-03-01-1424-distro-default-smithi/7191392/remote/smi...
- 05:48 PM Bug #58739: "Leak_IndirectlyLost" valgrind report on mon.a
- https://github.com/ceph/ceph/pull/48641 is already merged. If we don't see new reproductions over some time (a few mont...
- 07:31 PM Bug #55141: thrashers/fastread: assertion failure: rollback_info_trimmed_to == head
- /a/yuriw-2023-03-02_00:09:05-rados-wip-yuri11-testing-2023-03-01-1424-distro-default-smithi/7191380
- 05:52 PM Bug #50637: OSD slow ops warning stuck after OSD fail
- Bump up + ping.
- 02:18 PM Bug #57977: osd:tick checking mon for new map
- Radoslaw Zarzynski wrote:
> Thanks, yite gu!
> The fix is: https://github.com/ceph/ceph/pull/50344/commits/fb868d4e...
- 12:01 PM Bug #57977 (Fix Under Review): osd:tick checking mon for new map
- Thanks, yite gu!
The fix is: https://github.com/ceph/ceph/pull/50344/commits/fb868d4e71d3871cbd17cfbd4a536470e5c023f...
- 01:06 PM Bug #58884 (In Progress): ceph: osd blocklist does not accept v2/v1: prefix for addr
- looks like the addr type is CephEntityAddr, which means it will accept "CephEntityAddr: CephIPAddr + optional '/nonce'"
<...
03/01/2023
- 08:46 PM Bug #58894 (Fix Under Review): [pg-autoscaler][mgr] does not throw warn to increase PG count on p...
- 08:32 PM Bug #58894 (Resolved): [pg-autoscaler][mgr] does not throw warn to increase PG count on pools wit...
- Here, pools test 1-3 should be emitting health warnings like: PG TOO FEW PLEASE SCALE....
- 07:41 PM Bug #58893: test_map_discontinuity: AssertionError: wait_for_clean: failed before timeout expired
- Marking this as related to #51076 since there was a case of `test_map_discontinuity` logged there.
- 07:40 PM Bug #58893 (New): test_map_discontinuity: AssertionError: wait_for_clean: failed before timeout e...
- /a/yuriw-2023-02-24_17:50:19-rados-main-distro-default-smithi/7186711...
- 03:34 PM Bug #57977: osd:tick checking mon for new map
- ...
- 12:44 PM Bug #57977: osd:tick checking mon for new map
- osd.0 happened to restart, but since then it has never rejoined the cluster. I uploaded the osd boot log.
- 11:27 AM Bug #57977: osd:tick checking mon for new map
- Radoslaw Zarzynski wrote:
> Thanks for the update! Yeah, it might get stuck there. To confirm, we would need logs with increas...
- 02:44 PM Bug #58288 (Fix Under Review): quincy: mon: pg_num_check() according to crush rule
- The revert is merged: https://github.com/ceph/ceph/pull/49465.
PR #50327 was pushed as the actual fix.
02/28/2023
- 11:14 PM Bug #49428: ceph_test_rados_api_snapshots fails with "rados_mon_command osd pool create failed wi...
- A bunch of tests from LibRadosIoEC are failing with "rados_mon_command osd pool create failed with error -22"
/a/yuriw-2...
- 10:57 PM Bug #58052: Empty Pool (zero objects) shows usage.
- Well, I need to move on, so I am deleting the pools. I may try to re-create this in a lab later. If I do, I will tr...
- 07:26 PM Feature #58885 (New): [pg-autoscaler] include warning and explanation in ceph -s when there's ove...
- Currently, we only warn the user about overlapping roots in the mgr log.
Since there have been cases where the user file...
- 05:39 PM Bug #58884 (Resolved): ceph: osd blocklist does not accept v2/v1: prefix for addr
- ...
- 03:10 PM Bug #57105 (Resolved): quincy: ceph osd pool set <pool> size math error
- 03:10 PM Bug #54188 (Resolved): Setting too many PGs leads error handling overflow
- 02:47 PM Bug #58141 (Resolved): mon/MonCommands: Support dump_historic_slow_ops
- 02:47 PM Backport #58143 (Resolved): quincy: mon/MonCommands: Support dump_historic_slow_ops
- 02:46 PM Backport #58144 (Resolved): pacific: mon/MonCommands: Support dump_historic_slow_ops
- 02:44 PM Bug #49689 (Resolved): osd/PeeringState.cc: ceph_abort_msg("past_interval start interval mismatch...
- 02:44 PM Bug #55549 (Resolved): OSDs crashing
- 10:53 AM Backport #58872 (In Progress): octopus: ClsLock.TestExclusiveEphemeralStealEphemeral failed
- 10:03 AM Backport #58872 (Rejected): octopus: ClsLock.TestExclusiveEphemeralStealEphemeral failed
- https://github.com/ceph/ceph/pull/50303
- 10:47 AM Backport #58869 (In Progress): quincy: rados/test.sh: api_watch_notify failures
- 10:02 AM Backport #58869 (Resolved): quincy: rados/test.sh: api_watch_notify failures
- https://github.com/ceph/ceph/pull/49938
- 10:47 AM Backport #58868 (In Progress): pacific: rados/test.sh: api_watch_notify failures
- 10:02 AM Backport #58868 (In Progress): pacific: rados/test.sh: api_watch_notify failures
- https://github.com/ceph/ceph/pull/49943
- 10:03 AM Backport #58871 (New): quincy: ClsLock.TestExclusiveEphemeralStealEphemeral failed
- 10:02 AM Backport #58870 (New): pacific: ClsLock.TestExclusiveEphemeralStealEphemeral failed
- 10:02 AM Bug #38357 (Pending Backport): ClsLock.TestExclusiveEphemeralStealEphemeral failed
- 10:01 AM Bug #50042 (Pending Backport): rados/test.sh: api_watch_notify failures
02/27/2023
- 07:00 PM Bug #58379: no active mgr after ~1 hour
- Review-in-progress.
- 06:55 PM Bug #58837: mgr/test_progress.py: test_osd_healthy_recovery fails after timeout
- Hi Junior! Would you find some time for it?
- 06:52 PM Bug #44400 (Won't Fix): Marking OSD out causes primary-affinity 0 to be ignored when up_set has n...
- The discussion's outcome is that the fix would likely do more harm (for sure: bring more complexity) than the s...
- 06:50 PM Bug #57977 (In Progress): osd:tick checking mon for new map
- 06:45 PM Bug #49428: ceph_test_rados_api_snapshots fails with "rados_mon_command osd pool create failed wi...
- Seems like a similar failure:
/a/yuriw-2023-02-16_22:44:43-rados-wip-yuri-testing-2023-02-16-0839-distro-default-s...
- 06:18 PM Bug #58797: scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpectedly low ...
- /a/lflores-2023-02-20_21:22:20-rados-wip-yuri-testing-2023-02-16-0839-distro-default-smithi/7181477...
- 04:11 PM Bug #58797 (Resolved): scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpe...
- 06:09 PM Bug #49961: scrub/osd-recovery-scrub.sh: TEST_recovery_scrub_1 failed
- /a/yuriw-2023-02-16_22:44:43-rados-wip-yuri-testing-2023-02-16-0839-distro-default-smithi/7177204...
02/23/2023
- 08:44 PM Bug #51729: Upmap verification fails for multi-level crush rule
- Hi Chris, yes, I will post another update soon with my findings.
- 07:07 PM Bug #58797 (Fix Under Review): scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR...
- 06:53 PM Bug #58837: mgr/test_progress.py: test_osd_healthy_recovery fails after timeout
- Seen in the mgr logs: 2 pgs stuck in recovery...
- 06:31 PM Bug #58837 (New): mgr/test_progress.py: test_osd_healthy_recovery fails after timeout
- /a/yuriw-2023-02-22_20:55:15-rados-wip-yuri4-testing-2023-02-22-0817-quincy-distro-default-smithi/7184746...
02/22/2023
- 09:00 PM Backport #58708: quincy: Able to modify the mclock reservation, weight and limit parameters when ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50097
merged
- 01:11 PM Bug #21592 (Fix Under Review): LibRadosCWriteOps.CmpExt got 0 instead of -4095-1
02/21/2023
- 10:30 PM Bug #51729: Upmap verification fails for multi-level crush rule
- Is there any news on this? Thanks.
- 04:17 PM Bug #58797 (In Progress): scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Une...
- This is an unintended side effect of https://github.com/ceph/ceph/pull/44749. I will create a fix.
Explanation:
...
- 12:12 PM Feature #55169 (In Progress): crush: should validate rule outputs osds
- 12:11 PM Backport #58816 (In Progress): quincy: ceph versions : mds : remove empty list entries from ceph ...
- 12:10 PM Backport #58815 (New): quincy: Set single compression algorithm as a default value in ms_osd_comp...
- 12:09 PM Bug #57585 (Pending Backport): ceph versions : mds : remove empty list entries from ceph versions
- 12:07 PM Bug #58410 (Pending Backport): Set single compression algorithm as a default value in ms_osd_comp...
02/20/2023
- 10:30 PM Bug #58410: Set single compression algorithm as a default value in ms_osd_compression_algorithm i...
- https://github.com/ceph/ceph/pull/49843 merged
- 08:24 PM Bug #58587 (Resolved): test_dedup_tool.sh: test_dedup_object fails when pool 'dedup_chunk_pool' d...
- 06:30 PM Bug #58797: scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpectedly low ...
- Also seen in a testing wip, but none of the PRs in the batch have been merged yet:
/a/lflores-2023-02-17_17:48:50-ra...
- 06:28 PM Bug #58797 (Pending Backport): scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR...
- /a/yuriw-2023-02-17_20:31:15-rados-main-distro-default-smithi/7179124...
- 02:49 AM Bug #58052: Empty Pool (zero objects) shows usage.
- Radoslaw Zarzynski wrote:
> Could you please provide results of @ceph pg 7.3e query@?
Did that cover what you nee...
02/19/2023
- 07:00 AM Bug #58739: "Leak_IndirectlyLost" valgrind report on mon.a
- Since osds 5/6/7 are hitting other valgrind leak errors related to https://github.com/ceph/ceph/pull/49522, I thin...
02/17/2023
- 04:13 AM Documentation #58752 (New): doc/rados/configuration/mon-lookup-dns.rst: considers only Messenger ...
- The MON lookup using DNS SRV records documentation at https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-...
02/16/2023
- 04:34 PM Backport #58144: pacific: mon/MonCommands: Support dump_historic_slow_ops
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49233
merged
- 04:33 PM Backport #58007: pacific: bail from handle_command() if _generate_command_map() fails
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48846
merged
- 03:45 PM Backport #58214: quincy: osd: Improve osd bench accuracy by using buffers with random patterns
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49323
merged
- 12:25 AM Bug #57105: quincy: ceph osd pool set <pool> size math error
- https://github.com/ceph/ceph/pull/49465 merged
- 12:23 AM Backport #58186: quincy: osd: Misleading information displayed for the running configuration of o...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49281
merged
- 12:22 AM Backport #58143: quincy: mon/MonCommands: Support dump_historic_slow_ops
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49232
merged
02/15/2023
- 10:46 PM Bug #58739 (New): "Leak_IndirectlyLost" valgrind report on mon.a
- /a/yuriw-2023-02-13_21:53:12-rados-wip-yuri-testing-2023-02-06-1155-quincy-distro-default-smithi/7171896/remote/smith...
- 11:43 AM Bug #56097 (Fix Under Review): Timeout on `sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtes...
- 08:32 AM Bug #56097: Timeout on `sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ...
- The thrasher will try to inject args into live osds, but that doesn't mean they are up yet, and if they are not up, we will end ...
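To make the scenario concrete, here is a minimal Python sketch (hypothetical helper names, not the actual qa/tasks/ceph_manager.py code) of the kind of "wait until the OSD is up, then inject" guard being described:
<pre>
import json
import subprocess
import time

def osd_is_up(osd_id):
    """Check 'ceph osd dump --format=json' for the given OSD's up flag."""
    out = subprocess.check_output(["ceph", "osd", "dump", "--format=json"])
    dump = json.loads(out)
    return any(o["osd"] == osd_id and o["up"] == 1 for o in dump["osds"])

def inject_args_when_up(osd_id, option, value, timeout=120.0, interval=5.0):
    """Wait until the OSD reports up, then inject the option via 'ceph tell'."""
    deadline = time.time() + timeout
    while not osd_is_up(osd_id):
        if time.time() > deadline:
            raise TimeoutError("osd.%d never came up; skipping injectargs" % osd_id)
        time.sleep(interval)
    subprocess.check_call(
        ["ceph", "tell", "osd.%d" % osd_id, "injectargs", "--%s=%s" % (option, value)])
</pre>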
02/14/2023
- 06:17 AM Bug #57977: osd:tick checking mon for new map
- Radoslaw Zarzynski wrote:
> Per comment #9 there are 2 hypotheses at the moment:
>
> 1) the nonce issue (small ...
- 12:20 AM Bug #58496: osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty())
- 2023-01-13T14:56:19.349 INFO:tasks.ceph.osd.4.smithi120.stderr:/build/ceph-18.0.0-1762-gcb17f286/src/osd/PeeringStat...
02/13/2023
- 10:43 PM Feature #55169: crush: should validate rule outputs osds
- Shreyansh Sancheti wrote:
> Dan van der Ster wrote:
> > Shreyansh Sancheti wrote:
> > > Need more info on this!
>...
- 10:31 AM Feature #55169: crush: should validate rule outputs osds
- Dan van der Ster wrote:
> Shreyansh Sancheti wrote:
> > Need more info on this!
>
> Sure, what do you need to kn...
- 10:01 PM Bug #58690: thrashosds: IndexError: Cannot choose from an empty sequence
- While thrashing OSDs in _do_thrash() (qa/tasks/ceph_manager.py), the task didn't add OSDs back into the cluster after...
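For illustration only, a hypothetical Python sketch (not the real _do_thrash() code) of the kind of guard that avoids random.choice() raising "IndexError: Cannot choose from an empty sequence" when no OSD candidates remain:
<pre>
import random

def pick_osd_to_revive(dead_osds):
    """Pick an OSD to bring back in, guarding against an empty candidate list."""
    if not dead_osds:
        # Nothing to revive: skip this iteration instead of letting
        # random.choice() raise IndexError on an empty sequence.
        return None
    return random.choice(dead_osds)

def pick_osd_to_kill(live_osds, min_live=1):
    """Only take an OSD out if enough live ones remain to choose from later."""
    if len(live_osds) <= min_live:
        return None
    return random.choice(live_osds)
</pre>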
- 06:18 PM Bug #58690: thrashosds: IndexError: Cannot choose from an empty sequence
- ...
- 06:16 PM Bug #58690: thrashosds: IndexError: Cannot choose from an empty sequence
- ...
- 08:19 PM Bug #58052: Empty Pool (zero objects) shows usage.
- Scrubbing is almost caught up (though scrubbing doesn't seem to target the oldest first, not sure if that could be im...
- 08:17 PM Bug #58052: Empty Pool (zero objects) shows usage.
- Not 100% sure what command you are wanting, "ceph pg 7.3e" isn't complete.
Do you mean this?
# ceph pg ls 7 | gre...
- 07:17 PM Bug #58052: Empty Pool (zero objects) shows usage.
- From the attached @mgr-server1.log@ (the big log):...
- 07:04 PM Bug #56097: Timeout on `sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ...
- To avoid confusion: @osd.0@ from comment #9 is about the run mentioned in #7....
- 06:53 PM Bug #51688: "stuck peering for" warning is misleading
- Bump up! Need to decide on the @needs-test@.
- 06:50 PM Bug #58467 (Closed): osd: Only have one osd daemon no reply heartbeat on one node
- Closing per the comment #11.
- 06:47 PM Bug #57977: osd:tick checking mon for new map
- It's the nonce issue for sure. Per the @ceph osd dump@ from the description:...
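For context, the nonce is the number after the '/' in an entity address as printed by @ceph osd dump@ (e.g. v2:10.1.2.3:6800/32621); a tiny illustrative parse, not Ceph code:
<pre>
def split_addr(addr):
    """Split 'v2:10.1.2.3:6800/32621' into ('v2:10.1.2.3:6800', 32621)."""
    base, _, nonce = addr.partition("/")
    return base, int(nonce) if nonce else 0

assert split_addr("v2:10.1.2.3:6800/32621") == ("v2:10.1.2.3:6800", 32621)
</pre>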
- 06:40 PM Bug #57977: osd:tick checking mon for new map
- Per comment #9 there are 2 hypotheses at the moment:
1) the nonce issue (small should be visible in log entries ...
- 06:31 PM Bug #50637: OSD slow ops warning stuck after OSD fail
- Bug scrub comment: bump up.
- 06:11 PM Bug #47838 (In Progress): mon/test_mon_osdmap_prune.sh: first_pinned != trim_to
- 02:00 PM Bug #47838: mon/test_mon_osdmap_prune.sh: first_pinned != trim_to
- From the monitor logs during OSD map trim: ...
- 02:58 PM Backport #58708 (In Progress): quincy: Able to modify the mclock reservation, weight and limit pa...
- 02:44 PM Backport #58708 (Resolved): quincy: Able to modify the mclock reservation, weight and limit param...
- https://github.com/ceph/ceph/pull/50097
- 02:37 PM Bug #57533 (Pending Backport): Able to modify the mclock reservation, weight and limit parameters...
- 07:20 AM Fix #6109: pg <pgid> mark_unfound_lost fails if a completely-gone OSD still in map
- Hello,
I just had a customer facing this same issue, and to have it on the record, at least since luminous markin...
02/09/2023
- 11:58 PM Bug #58690: thrashosds: IndexError: Cannot choose from an empty sequence
- This failure seems sporadic. I reran the failed job 50 times, and all runs succeeded except one, which failed fo...
- 11:57 PM Bug #58690 (New): thrashosds: IndexError: Cannot choose from an empty sequence
- /a/lflores-2023-02-08_20:25:06-rados-wip-lflores-testing-2023-02-06-1529-distro-default-smithi/7162184...
- 11:00 PM Bug #57600: thrash-erasure-code: AssertionError: wait_for_recovery timeout due to "active+recover...
- /a/lflores-2023-02-09_19:24:50-rados-wip-lflores-testing-2023-02-06-1529-distro-default-smithi/7164248
- 07:19 PM Bug #52316: qa/tasks/mon_thrash.py: _do_thrash AssertionError len(s['quorum']) == len(mons)
- /a/lflores-2023-02-09_16:38:16-rados-wip-lflores-testing-2023-02-06-1529-distro-default-smithi/7164042
- 07:12 PM Bug #47838: mon/test_mon_osdmap_prune.sh: first_pinned != trim_to
- /a/lflores-2023-02-08_20:25:06-rados-wip-lflores-testing-2023-02-06-1529-distro-default-smithi/7162262...
- 06:53 PM Bug #58496 (Fix Under Review): osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.emp...
- 05:08 AM Bug #58496: osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty())
- https://github.com/ceph/ceph/pull/49959
- 04:09 PM Feature #55169: crush: should validate rule outputs osds
- Shreyansh Sancheti wrote:
> Need more info on this!
Sure, what do you need to know?
- 08:34 AM Feature #55169 (Need More Info): crush: should validate rule outputs osds
- Need more info on this!
- 02:38 AM Bug #56192: crash: virtual Monitor::~Monitor(): assert(session_map.sessions.empty())
- Looks like http://qa-proxy.ceph.com/teuthology/yuriw-2023-02-02_19:29:06-powercycle-wip-yuri6-testing-2023-01-26-0941...
02/08/2023
- 11:33 PM Bug #52221: crash: void OSD::handle_osd_map(MOSDMap*): assert(p != added_maps_bl.end())
- /a/yuriw-2023-01-27_16:33:50-rados-wip-yuri2-testing-2023-01-26-1532-distro-default-smithi/7142354...
02/07/2023
- 01:25 PM Bug #49689 (Fix Under Review): osd/PeeringState.cc: ceph_abort_msg("past_interval start interval ...
- 09:00 AM Feature #58038 (Resolved): osd: add created_at and ceph_version_when_created metadata
- 08:59 AM Backport #58040: quincy: osd: add created_at and ceph_version_when_created metadata
- https://github.com/ceph/ceph/pull/49159
- 08:59 AM Backport #58040 (Resolved): quincy: osd: add created_at and ceph_version_when_created metadata
02/06/2023
- 10:37 PM Bug #58052: Empty Pool (zero objects) shows usage.
- Not sure if this helps at all, but I was writing a script that generates a histogram of scrub times, and I noticed th...
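As a rough illustration of such a script (not the one mentioned above), a Python sketch that assumes @ceph pg dump --format=json@ exposes a last_scrub_stamp per PG with an ISO-like timestamp:
<pre>
import json
import subprocess
from collections import Counter
from datetime import datetime, timezone

def scrub_age_histogram(bucket_hours=24):
    """Bucket PGs by how many hours ago they were last scrubbed."""
    out = subprocess.check_output(["ceph", "pg", "dump", "--format=json"])
    pg_stats = json.loads(out)["pg_map"]["pg_stats"]
    now = datetime.now(timezone.utc)
    hist = Counter()
    for pg in pg_stats:
        # Assumed stamp format: "2023-02-06T10:37:12.123456+0000".
        stamp = datetime.strptime(pg["last_scrub_stamp"], "%Y-%m-%dT%H:%M:%S.%f%z")
        hours = (now - stamp).total_seconds() / 3600
        hist[int(hours // bucket_hours) * bucket_hours] += 1
    return hist

for bucket, count in sorted(scrub_age_histogram().items()):
    print("%4dh+ : %d pgs" % (bucket, count))
</pre>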
- 07:09 PM Documentation #58650 (Need More Info): Write an overview of multisite.rst that explains to first-...
- After the line-editing of all ~1500 lines of doc/radosgw/multisite.rst is finished, a text of no more than three-hund...
- 07:04 PM Bug #54829: crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t*) con...
- /a/yuriw-2023-01-27_16:33:50-rados-wip-yuri2-testing-2023-01-26-1532-distro-default-smithi/7142297...
- 06:26 PM Bug #58496: osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty())
- /a/yuriw-2023-01-27_01:09:38-rados-wip-yuri2-testing-2023-01-26-1532-distro-default-smithi/7140509
- 05:50 PM Backport #58637 (Resolved): pacific: osd/scrub: "scrub a chunk" requests are sent to the wrong se...
- 05:49 PM Backport #58636 (Resolved): quincy: osd/scrub: "scrub a chunk" requests are sent to the wrong set...
02/03/2023
- 10:01 PM Backport #58639 (In Progress): quincy: Mon fail to send pending metadata through MMgrUpdate after...
- 07:08 PM Backport #58639 (Resolved): quincy: Mon fail to send pending metadata through MMgrUpdate after an...
- https://github.com/ceph/ceph/pull/49989
- 09:30 PM Backport #58638 (In Progress): pacific: Mon fail to send pending metadata through MMgrUpdate afte...
- 07:07 PM Backport #58638 (Resolved): pacific: Mon fail to send pending metadata through MMgrUpdate after a...
- https://github.com/ceph/ceph/pull/49988
- 07:04 PM Bug #57678 (Pending Backport): Mon fail to send pending metadata through MMgrUpdate after an upgr...
- 07:01 PM Bug #58052: Empty Pool (zero objects) shows usage.
- I am concerned that this could be a bigger issue, kinda like a memory leak, but for storage. And that this could con...
- 06:59 PM Bug #58052: Empty Pool (zero objects) shows usage.
- Radoslaw Zarzynski wrote:
> Downloading manually. Neha is testing ceph-post-file.
I kinda want to kill these pool...
- 04:58 PM Backport #58637 (Resolved): pacific: osd/scrub: "scrub a chunk" requests are sent to the wrong se...
- https://github.com/ceph/ceph/pull/48544
- 04:58 PM Backport #58636 (Resolved): quincy: osd/scrub: "scrub a chunk" requests are sent to the wrong set...
- https://github.com/ceph/ceph/pull/48543
- 09:05 AM Bug #50637: OSD slow ops warning stuck after OSD fail
- I tried to reproduce this issue on a (latest main) vstart cluster by setting osd_op_complaint_time to 1 second and runn...
- 08:36 AM Bug #58607 (Fix Under Review): osd: PushOp and PullOp costs for mClock don't reflect the size of ...
- 08:35 AM Bug #58606 (Fix Under Review): osd: osd_recovery_cost with mClockScheduler enabled doesn't reflec...
- 08:34 AM Bug #58529 (Fix Under Review): osd: very slow recovery due to delayed push reply messages
- 06:35 AM Bug #57977: osd:tick checking mon for new map
- Prashant D wrote:
> yite gu wrote:
> > Radoslaw Zarzynski wrote:
> > > Per the comment #11 I'm redirecting Prashan...
- 03:59 AM Bug #57977: osd:tick checking mon for new map
- yite gu wrote:
> Radoslaw Zarzynski wrote:
> > Per the comment #11 I'm redirecting Prashant's questions from commen...
- 06:27 AM Bug #58467: osd: Only have one osd daemon no reply heartbeat on one node
- hi, Radoslaw
my osd pod network uses Cilium, so I used `cilium monitor -t drop` to capture packets on the osd.16 pod,...
- 02:13 AM Backport #56135 (Resolved): pacific: scrub starts message missing in cluster log
- https://github.com/ceph/ceph/pull/48070
- 02:10 AM Backport #56134 (Resolved): quincy: scrub starts message missing in cluster log
02/02/2023
- 04:13 PM Bug #51688 (Fix Under Review): "stuck peering for" warning is misleading
- 09:55 AM Bug #57940: ceph osd crashes with FAILED ceph_assert(clone_overlap.count(clone)) when nobackfill ...
- Hi,
I've set the pool to size=1 and ran a data scraper to back up most of the data.
Then I've deleted the pool...
02/01/2023
- 03:38 PM Bug #58239 (Resolved): pacific: src/mon/Monitor.cc: FAILED ceph_assert(osdmon()->is_writeable())
- My mistake, this issue is resolved because we have reverted https://github.com/ceph/ceph/pull/48803
Revert PR: htt...
- 03:23 PM Documentation #58625 (Need More Info): 16.2.11 BlueFS log changes make 16.2.11 incompatible with ...
- This tracker will track the documentation and announcement of the change introduced in https://github.com/ceph/cep...