Activity
From 01/05/2023 to 02/03/2023
02/03/2023
- 10:01 PM Backport #58639 (In Progress): quincy: Mon fail to send pending metadata through MMgrUpdate after...
- 07:08 PM Backport #58639 (Resolved): quincy: Mon fail to send pending metadata through MMgrUpdate after an...
- https://github.com/ceph/ceph/pull/49989
- 09:30 PM Backport #58638 (In Progress): pacific: Mon fail to send pending metadata through MMgrUpdate afte...
- 07:07 PM Backport #58638 (Resolved): pacific: Mon fail to send pending metadata through MMgrUpdate after a...
- https://github.com/ceph/ceph/pull/49988
- 07:04 PM Bug #57678 (Pending Backport): Mon fail to send pending metadata through MMgrUpdate after an upgr...
- 07:01 PM Bug #58052: Empty Pool (zero objects) shows usage.
- I am concerned that this could be a bigger issue, kinda like a memory leak, but for storage. And that this could con...
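If it helps triage, one quick way to cross-check the reported usage against the actual object count (a minimal sketch; @testpool@ is only a placeholder name):
<pre>
# per-pool usage as reported by the cluster
ceph df detail
# actual object count in the suspect pool (should print 0 for an empty pool)
rados -p testpool ls | wc -l
</pre>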
- 06:59 PM Bug #58052: Empty Pool (zero objects) shows usage.
- Radoslaw Zarzynski wrote:
> Downloading manually. Neha is testing ceph-post-file.
I kinda want to kill these pool...
- 04:58 PM Backport #58637 (Resolved): pacific: osd/scrub: "scrub a chunk" requests are sent to the wrong se...
- https://github.com/ceph/ceph/pull/48544
- 04:58 PM Backport #58636 (Resolved): quincy: osd/scrub: "scrub a chunk" requests are sent to the wrong set...
- https://github.com/ceph/ceph/pull/48543
- 09:05 AM Bug #50637: OSD slow ops warning stuck after OSD fail
- I tried to reproduce this issue on (latest main) vstart cluster by setting osd_op_complaint_time to 1 second and runn...
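For reference, a minimal sketch of that kind of reproduction attempt (the option name is real; the pool name and exact workload are placeholders):
<pre>
# lower the slow-op complaint threshold so warnings trigger easily
ceph config set osd osd_op_complaint_time 1
# create a throwaway pool and generate some write load
ceph osd pool create slowops-test 8
rados bench -p slowops-test 30 write --no-cleanup
# stop one OSD daemon, then watch whether the warning ever clears
ceph health detail
</pre>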
- 08:36 AM Bug #58607 (Fix Under Review): osd: PushOp and PullOp costs for mClock don't reflect the size of ...
- 08:35 AM Bug #58606 (Fix Under Review): osd: osd_recovery_cost with mClockScheduler enabled doesn't reflec...
- 08:34 AM Bug #58529 (Fix Under Review): osd: very slow recovery due to delayed push reply messages
- 06:35 AM Bug #57977: osd:tick checking mon for new map
- Prashant D wrote:
> yite gu wrote:
> > Radoslaw Zarzynski wrote:
> > > Per the comment #11 I'm redirecting Prashan...
- 03:59 AM Bug #57977: osd:tick checking mon for new map
- yite gu wrote:
> Radoslaw Zarzynski wrote:
> > Per the comment #11 I'm redirecting Prashant's questions from commen...
- 06:27 AM Bug #58467: osd: Only have one osd daemon no reply heartbeat on one node
- Hi, Radoslaw,
my OSD pod network uses cilium, so I used the command `cilium monitor -t drop` to capture packets on the osd.16 pod,...
- 02:13 AM Backport #56135 (Resolved): pacific: scrub starts message missing in cluster log
- https://github.com/ceph/ceph/pull/48070
- 02:10 AM Backport #56134 (Resolved): quincy: scrub starts message missing in cluster log
02/02/2023
- 04:13 PM Bug #51688 (Fix Under Review): "stuck peering for" warning is misleading
- 09:55 AM Bug #57940: ceph osd crashes with FAILED ceph_assert(clone_overlap.count(clone)) when nobackfill ...
- Hi,
I've put the pool at size=1 and ran a data scraper to back up most of the data.
Then I've deleted the pool...
02/01/2023
- 03:38 PM Bug #58239 (Resolved): pacific: src/mon/Monitor.cc: FAILED ceph_assert(osdmon()->is_writeable())
- My mistake, this issue is resolved because we have reverted https://github.com/ceph/ceph/pull/48803
Revert PR: htt...
- 03:23 PM Documentation #58625 (Need More Info): 16.2.11 BlueFS log changes make 16.2.11 incompatible with ...
- This tracker will track the documentation of and announcement of the change introduced in https://github.com/ceph/cep...
01/31/2023
- 03:33 PM Bug #49689 (In Progress): osd/PeeringState.cc: ceph_abort_msg("past_interval start interval misma...
- Marking as "In Progress" again until requested changes are ready for review.
- 01:47 PM Bug #56097: Timeout on `sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ...
- Cluster log shows:...
- 08:02 AM Bug #58467: osd: Only have one osd daemon no reply heartbeat on one node
- osd.12 received no more packets from other OSDs after osd.11 sent a packet at 2023-01-19T08:00:42...
- 06:02 AM Backport #58611 (In Progress): pacific: api_watch_notify_pp: LibRadosWatchNotifyPPTests/LibRadosW...
- 05:41 AM Backport #58612 (In Progress): quincy: api_watch_notify_pp: LibRadosWatchNotifyPPTests/LibRadosWa...
- 05:39 AM Backport #58613 (In Progress): pacific: pglog growing unbounded on EC with copy by ref
- 05:36 AM Backport #58614 (In Progress): quincy: pglog growing unbounded on EC with copy by ref
01/30/2023
- 07:38 PM Bug #58467: osd: Only have one osd daemon no reply heartbeat on one node
- Radoslaw Zarzynski wrote:
> Thanks for the log!
>
> I think it's not just about heartbeats but rather a general sl...
- 07:03 PM Bug #58467: osd: Only have one osd daemon no reply heartbeat on one node
- Thanks for the log!
- I think it's not just about heartbeats but rather a general slowness. Client IO is affected as ...
- 06:45 PM Bug #58505 (Need More Info): Wrong calculate free space OSD and PG used bytes
- Could you please provide a verbose log (@debug_osd=20@) from one of those affected OSDs?
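For convenience, one way to raise that debug level on a single OSD at runtime (a sketch; @osd.3@ stands in for one of the affected OSDs):
<pre>
# raise OSD debug logging on one daemon only
ceph tell osd.3 config set debug_osd 20/20
# ...reproduce the problem, then grab the OSD log (typically /var/log/ceph/ceph-osd.3.log)...
# restore the default afterwards
ceph tell osd.3 config set debug_osd 1/5
</pre>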
- 06:25 PM Backport #58614 (In Progress): quincy: pglog growing unbounded on EC with copy by ref
- https://github.com/ceph/ceph/pull/49936
- 06:25 PM Backport #58613 (In Progress): pacific: pglog growing unbounded on EC with copy by ref
- https://github.com/ceph/ceph/pull/49937
- 06:23 PM Bug #56707 (Pending Backport): pglog growing unbounded on EC with copy by ref
- 06:15 PM Bug #57900: mon/crush_ops.sh: mons out of quorum
- > @Radek so the suggestion is to give the mons more time to reboot?
Yes, exactly that.
- 06:13 PM Backport #58612 (In Progress): quincy: api_watch_notify_pp: LibRadosWatchNotifyPPTests/LibRadosWa...
- https://github.com/ceph/ceph/pull/49938
- 06:13 PM Backport #58611 (In Progress): pacific: api_watch_notify_pp: LibRadosWatchNotifyPPTests/LibRadosW...
- https://github.com/ceph/ceph/pull/49943
- 06:13 PM Bug #52385 (Closed): a possible data loss due to recovery_unfound PG after restarting all nodes
- Closing per the prev comment.
- 01:18 AM Bug #52385: a possible data loss due to recovery_unfound PG after restarting all nodes
- I have tried to reproduce this issue in 17.2.5 for two weeks. However, nothing happened. Please close this ticket.
- 06:11 PM Bug #58098 (Resolved): qa/workunits/rados/test_crash.sh: crashes are never posted
- 06:09 PM Bug #45615 (Pending Backport): api_watch_notify_pp: LibRadosWatchNotifyPPTests/LibRadosWatchNotif...
- 06:08 PM Bug #49689: osd/PeeringState.cc: ceph_abort_msg("past_interval start interval mismatch") start
- Looks like it's ready for merge now. Yuri has been pinged.
- 06:06 PM Bug #58496 (In Progress): osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty())
- Per the prev comment.
- 04:25 PM Bug #58607 (Fix Under Review): osd: PushOp and PullOp costs for mClock don't reflect the size of ...
- Currently, PullOp cost is set to the following:...
- 03:56 PM Bug #58587: test_dedup_tool.sh: test_dedup_object fails when pool 'dedup_chunk_pool' does not exist
- Thanks Myoungwon!
- 03:56 PM Bug #58587 (Fix Under Review): test_dedup_tool.sh: test_dedup_object fails when pool 'dedup_chunk...
- 02:09 AM Bug #58587: test_dedup_tool.sh: test_dedup_object fails when pool 'dedup_chunk_pool' does not exist
- https://github.com/ceph/ceph/pull/49910
- 02:29 PM Bug #49524 (Resolved): ceph_test_rados_delete_pools_parallel didn't start
- Should be fixed by PR 49109
- 02:24 PM Bug #54122 (Resolved): Validate monitor ID provided with ok-to-stop similar to ok-to-rm
- Merged
- 02:05 PM Bug #58606 (Fix Under Review): osd: osd_recovery_cost with mClockScheduler enabled doesn't reflec...
- Currently _osd_recovery_cost_ is set to a static value equivalent to 20 MiB.
This cost is set regardless of the size...
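For context, the static default can be inspected (and, for experiments, overridden) with the usual config commands; a minimal sketch, where 20971520 is simply 20 MiB expressed in bytes:
<pre>
# show the built-in description and default of the option
ceph config help osd_recovery_cost
# show the value currently in effect for OSDs (20 MiB = 20971520 bytes by default)
ceph config get osd osd_recovery_cost
</pre>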
- 04:30 AM Bug #57940: ceph osd crashes with FAILED ceph_assert(clone_overlap.count(clone)) when nobackfill ...
- Thomas Le Gentil wrote:
> In fact, this did not work for some reason :( The osd did not crash for several days, then ...
- 04:21 AM Bug #56772: crash: uint64_t SnapSet::get_clone_bytes(snapid_t) const: assert(clone_overlap.count(...
- Hi, the OSD crashes whenever it tries to backfill to the target OSDs. If the situation persists, it may cause data loss....
01/29/2023
- 09:36 AM Documentation #58590: osd_op_thread_suicide_timeout is not documented
- common/options/osd.yaml.in:- name: osd_op_thread_suicide_timeout
common/options/rgw.yaml.in:- name: rgw_op_thread_su...
01/28/2023
- 09:47 PM Documentation #58590: osd_op_thread_suicide_timeout is not documented
- https://github.com/ceph/ceph/pull/49905
- 08:06 PM Documentation #58590: osd_op_thread_suicide_timeout is not documented
- ceph/src/common/options/global.yaml.in is the file in which these variables are documented.
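Until the documentation lands, the option's built-in help text is already visible from a running cluster; a minimal sketch (@osd.0@ is a placeholder):
<pre>
# print the description, type and default straight from the options table
ceph config help osd_op_thread_suicide_timeout
# print the value a specific OSD is currently running with
ceph tell osd.0 config get osd_op_thread_suicide_timeout
</pre>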
- 07:58 PM Documentation #58590: osd_op_thread_suicide_timeout is not documented
- https://old.ceph.com/planet/dealing-with-some-osd-timeouts/ (from an email from Neha Ojha to Zac Dover)
https://do...
- 07:09 PM Documentation #58590: osd_op_thread_suicide_timeout is not documented
- [zdover@fedora doc]$ grep -ir "osd_op_thread_suicide_timeout" *
[zdover@fedora doc]$ grep -ir "suicide" *
changelog...
- 07:22 PM Documentation #58595 (New): Refine https://docs.ceph.com/en/latest/dev/developer_guide/debugging-...
- https://docs.ceph.com/en/latest/dev/developer_guide/debugging-gdb/
The "GDB - GNU Project Debugger" page isn't bad...
01/27/2023
- 03:36 PM Bug #58587: test_dedup_tool.sh: test_dedup_object fails when pool 'dedup_chunk_pool' does not exist
- Thank you!
01/26/2023
- 05:03 AM Documentation #58590 (New): osd_op_thread_suicide_timeout is not documented
- There are plenty of references to configuration option @osd_op_thread_suicide_timeout@ but this config variable is no...
- 12:59 AM Bug #58587: test_dedup_tool.sh: test_dedup_object fails when pool 'dedup_chunk_pool' does not exist
- Ok, I'll take a look
- 12:15 AM Backport #58586 (In Progress): quincy: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in fu...
- 12:02 AM Bug #57900: mon/crush_ops.sh: mons out of quorum
- /a/yuriw-2023-01-24_22:20:59-rados-wip-yuri-testing-2023-01-23-0926-distro-default-smithi/7136648
Time to revisit ...
01/25/2023
- 11:53 PM Bug #58587: test_dedup_tool.sh: test_dedup_object fails when pool 'dedup_chunk_pool' does not exist
- Hey Myoungwon, would you be able to take a look at this?
- 11:47 PM Bug #58587 (Pending Backport): test_dedup_tool.sh: test_dedup_object fails when pool 'dedup_chunk...
- /a/yuriw-2023-01-21_17:58:46-rados-wip-yuri6-testing-2023-01-20-0728-distro-default-smithi/7132613...
- 11:25 PM Backport #58586 (Resolved): quincy: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in funct...
- https://github.com/ceph/ceph/pull/49881
- 11:21 PM Bug #56101 (Pending Backport): Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function s...
- 04:01 PM Bug #58239 (In Progress): pacific: src/mon/Monitor.cc: FAILED ceph_assert(osdmon()->is_writeable())
- This bug is not yet resolved; also removing the PR number, since 49412 is a revert PR.
- 02:26 PM Bug #58496: osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty())
- I think I know where the bug is. Will update.
01/24/2023
- 05:20 PM Bug #58141: mon/MonCommands: Support dump_historic_slow_ops
- https://github.com/ceph/ceph/pull/48972 merged
- 05:16 PM Bug #56707: pglog growing unbounded on EC with copy by ref
- https://github.com/ceph/ceph/pull/47332 merged
- 07:55 AM Bug #58496: osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty())
- /a/yuriw-2023-01-23_17:16:25-rados-wip-yuri6-testing-2023-01-22-0750-distro-default-smithi/7134021
01/22/2023
- 07:58 AM Bug #56097: Timeout on `sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ...
- /a/yuriw-2023-01-21_17:58:46-rados-wip-yuri6-testing-2023-01-20-0728-distro-default-smithi/7132857
01/20/2023
- 08:48 PM Bug #58529: osd: very slow recovery due to delayed push reply messages
- I've opened this bug to track the slow backfill behavior from https://tracker.ceph.com/issues/58498, which appears to...
- 08:47 PM Bug #58529 (Fix Under Review): osd: very slow recovery due to delayed push reply messages
- I took a look at the logs for pg114.d6 attached to this tracker. The cost for the push replies is calculated to over
...
- 04:46 AM Bug #58505: Wrong calculate free space OSD and PG used bytes
- NOT quite sure, but it looks like it's calculated here (./src/osd/OSD.cc #1070):...
01/19/2023
- 11:04 AM Bug #58505 (Need More Info): Wrong calculate free space OSD and PG used bytes
- I added a new OSD node to the cluster. Now I'm adding several disks each. After a short balancing time, the fol...
- 09:06 AM Bug #58467: osd: Only have one osd daemon no reply heartbeat on one node
- It is recommended to adjust the upload file size limit to 10M :)
- 09:04 AM Bug #58467: osd: Only have one osd daemon no reply heartbeat on one node
- osd.12 log file with debug_ms=5
- 08:49 AM Bug #58467: osd: Only have one osd daemon no reply heartbeat on one node
- This problem happened again, but this time the problem OSD is osd.12. Other OSDs report heartbeat no-reply as below:
osd.15
...
- 02:58 AM Bug #58467: osd: Only have one osd daemon no reply heartbeat on one node
- Radoslaw Zarzynski wrote:
> This is what struck me at first glance:
>
> [...]
>
> So @osd.9@ is seeing slow op...
- 05:41 AM Bug #58379 (Fix Under Review): no active mgr after ~1 hour
- 03:09 AM Bug #58370: OSD crash
- Radoslaw Zarzynski wrote:
> OK, then it's susceptible to the nonce issue. Would a @debug_ms=5@ log be possible?
Ok, but I'm...
01/18/2023
- 09:16 PM Bug #58496: osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty())
- Didn't mean to change those fields.
- 09:15 PM Bug #58496: osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty())
- ...
- 07:53 PM Bug #58496: osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty())
- ...
- 07:09 PM Bug #58496 (Pending Backport): osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.emp...
- /a/yuriw-2023-01-12_20:11:41-rados-main-distro-default-smithi/7138659...
- 07:34 PM Bug #58370: OSD crash
- OK, then it's susceptible to the nonce issue. Would a @debug_ms=5@ log be possible?
- 07:32 PM Bug #58467 (Need More Info): osd: Only have one osd daemon no reply heartbeat on one node
- 07:32 PM Bug #58467: osd: Only have one osd daemon no reply heartbeat on one node
- This is what struck me at first glance:...
- 07:19 PM Bug #50637: OSD slow ops warning stuck after OSD fail
- > Prashant, would you mind taking a look at time?
Sure Radoslaw. I will have a look at this.
- 07:17 PM Bug #50637: OSD slow ops warning stuck after OSD fail
- I think the problem is that we lack machinery for clearing the slow-ops status when a monitor marks an OSD down.
- 07:03 PM Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- bump up
- 07:01 PM Bug #49689: osd/PeeringState.cc: ceph_abort_msg("past_interval start interval mismatch") start
- bump up
- 01:00 PM Bug #56028: thrash_cache_writeback_proxy_none: FAILED ceph_assert(version == old_value.version) i...
- This looks like a cache tier issue; it is causing the version to be incorrect.
- 09:11 AM Bug #45615 (Fix Under Review): api_watch_notify_pp: LibRadosWatchNotifyPPTests/LibRadosWatchNotif...
- 08:10 AM Bug #44400 (Fix Under Review): Marking OSD out causes primary-affinity 0 to be ignored when up_se...
01/17/2023
- 10:33 PM Bug #57632 (Closed): test_envlibrados_for_rocksdb: free(): invalid pointer
- I'm going to "close" this since my PR was more of a workaround rather than a true solution.
- 05:30 PM Bug #58098: qa/workunits/rados/test_crash.sh: crashes are never posted
- Bumping this up, since it's still occurring in main:
/a/yuriw-2023-01-12_20:11:41-rados-main-distro-default-smithi...
- 11:23 AM Bug #44400 (In Progress): Marking OSD out causes primary-affinity 0 to be ignored when up_set has...
- Our function OSDMap::_apply_primary_affinity will set an OSD as primary even if its primary affinity is set to 0; we are ...
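For anyone trying to reproduce this, primary affinity is set per OSD; a minimal sketch (the OSD ids are placeholders):
<pre>
# osd.0 should never be chosen as primary after this
ceph osd primary-affinity osd.0 0
# mark another OSD out and check which OSDs end up acting as primaries
ceph osd out 1
ceph pg dump pgs_brief | head
</pre>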
- 09:22 AM Documentation #58469: "ceph config set mgr" command -- how to set it in ceph.conf
- <bl___> zdover, I don't know if this is about the same config: https://docs.ceph.com/en/quincy/dev/config-key/ I've s...
- 09:21 AM Documentation #58469: "ceph config set mgr" command -- how to set it in ceph.conf
- <zdover> bl___, your question about how to set options in ceph.conf that can be set with "ceph config set mgr" comman...
01/16/2023
- 10:53 AM Documentation #58469 (In Progress): "ceph config set mgr" command -- how to set it in ceph.conf
- <bl___> confusing. if I have configuration command like `ceph config set mgr mgr/cephadm/daemon_cache_timeout` how co...
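If it helps the eventual doc text: with the unified config (Nautilus and later), a mgr module option set with @ceph config set mgr ...@ can presumably also be given in @ceph.conf@ under the @[mgr]@ section. This mapping is an assumption to verify before documenting; a sketch (the value 600 is only a placeholder):
<pre>
# CLI form (stored in the mon config database):
ceph config set mgr mgr/cephadm/daemon_cache_timeout 600

# presumed ceph.conf equivalent (to be verified):
#   [mgr]
#       mgr/cephadm/daemon_cache_timeout = 600
</pre>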
- 10:46 AM Documentation #58354 (Resolved): doc/ceph-volume/lvm/encryption.rst is inaccurate -- LUKS version...
- 10:26 AM Documentation #58468: cephadm installation guide -- refine and correct
- root@RX570:~# ceph health detail
HEALTH_WARN failed to probe daemons or devices; OSD count 0 < osd_pool_default_size...
- 10:26 AM Documentation #58468: cephadm installation guide -- refine and correct
- root@RX570:~# ceph orch daemon add osd RX570:/dev/sdl
Error EINVAL: Traceback (most recent call last):
File "/usr... - 10:26 AM Documentation #58468: cephadm installation guide -- refine and correct
- Ubuntu Jammy | Purged all docker/ceph packages and files from system. Starting from scratch.
Following: https://do...
- 10:25 AM Documentation #58468 (New): cephadm installation guide -- refine and correct
- <trevorksmith> zdover, I am following these instructions. - https://docs.ceph.com/en/quincy/cephadm/install/ These a...
- 09:07 AM Bug #50637: OSD slow ops warning stuck after OSD fail
- We just observed this exact behavior with a dying server and its OSDs down:...
- 08:26 AM Bug #58467 (Closed): osd: Only have one osd daemon no reply heartbeat on one node
- osd.9 log file:...
01/15/2023
- 09:54 PM Documentation #58462 (New): Installation Documentation - indicate which strings are specified by ...
- <IcePic> Also, if we can wake up zdover, it would be nice if the installation docs could have a different color or so...
- 09:52 PM Documentation #58354 (Fix Under Review): doc/ceph-volume/lvm/encryption.rst is inaccurate -- LUKS...
- doc/ceph-volume/lvm/encryption.rst is currently written informally. At some future time, the English in that file sho...
01/14/2023
- 08:27 AM Bug #58461 (Fix Under Review): osd/scrub: replica-response timeout is handled without locking the PG
- 08:25 AM Bug #58461 (Fix Under Review): osd/scrub: replica-response timeout is handled without locking the PG
- In ReplicaReservations::no_reply_t, a callback calls handle_no_reply_timeout()
without first locking the PG.
Intr...
01/13/2023
- 09:41 AM Bug #58370: OSD crash
- Radoslaw Zarzynski wrote:
> PG the was 2.50:
>
> [...]
>
> The PG was the @Deleting@ substate:
>
> [...]
>...
01/12/2023
- 08:48 PM Bug #58436 (Fix Under Review): ceph cluster log reporting log level in numeric format for the clo...
- 08:43 PM Bug #58436 (Fix Under Review): ceph cluster log reporting log level in numeric format for the clo...
- The cluster log is now reporting the log level as an integer value instead of the human-readable log level, e.g. DBG, INF, etc.
16735...
- 01:20 PM Bug #51194: PG recovery_unfound after scrub repair failed on primary
- We had another occurrence of this on Pacific v16.2.9
- 11:28 AM Backport #58040 (In Progress): quincy: osd: add created_at and ceph_version_when_created metadata
01/10/2023
- 02:12 PM Bug #58410: Set single compression algorithm as a default value in ms_osd_compression_algorithm i...
- BZ link: https://bugzilla.redhat.com/show_bug.cgi?id=2155380
- 02:10 PM Bug #58410 (Pending Backport): Set single compression algorithm as a default value in ms_osd_comp...
- Description of problem:
The default value for the compression parameter "ms_osd_compression_algorithm" is assigned t...
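For reference, the current default and a single-algorithm override can be checked from the CLI; a minimal sketch (@snappy@ is just an example choice):
<pre>
# show the option's built-in help text and the value currently in effect
ceph config help ms_osd_compression_algorithm
ceph config get osd ms_osd_compression_algorithm
# pin the on-wire compression algorithm to a single value
ceph config set global ms_osd_compression_algorithm snappy
</pre>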
- 01:38 AM Bug #57977: osd:tick checking mon for new map
- Radoslaw Zarzynski wrote:
> Per the comment #11 I'm redirecting Prashant's questions from comment #9 to the reporter...
- 12:50 AM Documentation #58401 (Resolved): cephadm's "Replacing an OSD" instructions work better than RADOS...
01/09/2023
- 06:50 PM Bug #58370: OSD crash
- The PG was 2.50:...
- 06:36 PM Bug #57852 (In Progress): osd: unhealthy osd cannot be marked down in time
- 06:35 PM Bug #57977: osd:tick checking mon for new map
- Per the comment #11 I'm redirecting Prashant's questions from comment #9 to the reporter.
@yite gu: is the deploym...
- 02:24 PM Bug #57977: osd:tick checking mon for new map
- @Prashant, I was thinking about this further. Although it is a containerized env, hostpid=true so the PIDs should be ...
- 06:29 PM Bug #49689: osd/PeeringState.cc: ceph_abort_msg("past_interval start interval mismatch") start
- Label assigned but blocked due to the lab issue. Bump up.
- 06:27 PM Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- Still blocked due to the lab issue. Bump up.
- 06:12 PM Documentation #58401: cephadm's "Replacing an OSD" instructions work better than RADOS's "Replaci...
- https://github.com/ceph/ceph/pull/49677
- 05:43 PM Documentation #58401 (Resolved): cephadm's "Replacing an OSD" instructions work better than RADOS...
- <Infinoid> For posterity, https://docs.ceph.com/en/quincy/cephadm/services/osd/#replacing-an-osd seems to be working ...
- 02:40 PM Bug #58379 (In Progress): no active mgr after ~1 hour
01/06/2023
- 11:06 PM Bug #44400: Marking OSD out causes primary-affinity 0 to be ignored when up_set has no common OSD...
- Just confirming this is still present in pacific:...
- 05:55 PM Documentation #58374 (Resolved): crushtool flags remain undocumented in the crushtool manpage
- 05:55 PM Documentation #58374: crushtool flags remain undocumented in the crushtool manpage
- https://github.com/ceph/ceph/pull/49653
- 05:37 PM Bug #57977: osd:tick checking mon for new map
- @Prashant - thanks! Yes, this is containerized, so that's certainly possible in our case.
- 03:20 AM Bug #57977: osd:tick checking mon for new map
- Radoslaw Zarzynski wrote:
> The issue during the upgrade looks awfully similar to a downstream Prashant has working ...
- 03:27 AM Bug #57852: osd: unhealthy osd cannot be marked down in time
- Sure Radek. Let me have a look at this.
- 01:50 AM Bug #58370: OSD crash
- Radoslaw Zarzynski wrote:
> Is there the related log available by any chance?
01/05/2023
- 04:00 PM Feature #58389 (New): CRUSH algorithm should support 1 copy on SSD/NVME and 2 copies on HDD (and ...
- Brad Fitzpatrick makes the following request to Zac Dover in private correspondence on 05 Jan 2023:
"I'm kinda dis...