Activity
From 03/29/2023 to 04/27/2023
04/27/2023
- 02:12 PM Bug #59504: 17.2.6: build fails with fmt 9.1.0
- Redirecting to general RADOS.
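A later comment on this bug (04/20/2023) notes the build can be fixed by adding -DFMT_DEPRECATED_OSTREAM to CXXFLAGS. A minimal sketch of that workaround (build directory and cmake options are assumptions, not the reporter's exact command):
```shell
# Pre-define FMT_DEPRECATED_OSTREAM so fmt 9.1.0 keeps the deprecated
# ostream operator<< support that the 17.2.6 sources still rely on.
# Build dir and build type are placeholders.
cd ceph/build
CXXFLAGS=-DFMT_DEPRECATED_OSTREAM cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
make -j"$(nproc)"
```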
- 01:16 PM Backport #52841 (In Progress): pacific: shard-threads cannot wakeup bug
- 01:15 PM Backport #53166 (In Progress): pacific: api_watch_notify: LibRadosWatchNotify.Watch3Timeout failed
- 01:13 PM Backport #53167 (Rejected): octopus: api_watch_notify: LibRadosWatchNotify.Watch3Timeout failed
- Octopus is EOL
- 01:13 PM Bug #52739 (Resolved): msg/async/ProtocalV2: recv_stamp of a message is set to a wrong value
- 01:13 PM Backport #52842 (Rejected): octopus: msg/async/ProtocalV2: recv_stamp of a message is set to a wr...
- Octopus is EOL
- 01:13 PM Backport #52840 (Rejected): octopus: shard-threads cannot wakeup bug
- Octopus is EOL
- 12:30 PM Backport #52307 (In Progress): pacific: doc: clarify use of `rados rm` command
- 12:30 PM Backport #52306 (Rejected): octopus: doc: clarify use of `rados rm` command
- Octopus is EOL
- 12:29 PM Backport #52557 (In Progress): pacific: pybind: rados.RadosStateError raised when closed watch ob...
- 12:28 PM Backport #52556 (Rejected): octopus: pybind: rados.RadosStateError raised when closed watch objec...
- Octopus is EOL
- 12:27 PM Backport #52596 (Rejected): octopus: make bufferlist::c_str() skip rebuild when it isn't necessary
- Octopus is EOL
- 12:26 PM Backport #51525 (Rejected): octopus: osd: Delay sending info to new backfill peer resetting last_...
- Octopus is EOL
- 12:26 PM Bug #50441 (Rejected): cephadm bootstrap on arm64 fails to start ceph/ceph-grafana service
- 12:26 PM Backport #51551 (Rejected): octopus: cephadm bootstrap on arm64 fails to start ceph/ceph-grafana ...
- Octopus is EOL
- 12:26 PM Bug #50393 (Resolved): CommandCrashedError: Command crashed: 'mkdir -p -- /home/ubuntu/cephtest/m...
- 12:25 PM Backport #51741 (Rejected): octopus: CommandCrashedError: Command crashed: 'mkdir -p -- /home/ubu...
- Octopus is EOL
- 12:23 PM Backport #56604 (In Progress): pacific: ceph report missing osdmap_clean_epochs if answered by peon
- 12:23 PM Backport #56603 (Rejected): octopus: ceph report missing osdmap_clean_epochs if answered by peon
- Octopus is EOL
- 12:22 PM Bug #48899 (Resolved): api_list: LibRadosList.EnumerateObjects and LibRadosList.EnumerateObjectsS...
- 12:22 PM Backport #55581 (Rejected): octopus: api_list: LibRadosList.EnumerateObjects and LibRadosList.Enu...
- Octopus is EOL
- 12:22 PM Backport #55066 (Rejected): pacific: osd_fast_shutdown_notify_mon option should be true by default
- Duplicate?
- 12:21 PM Backport #55067 (Rejected): octopus: osd_fast_shutdown_notify_mon option should be true by default
- Octopus is EOL
- 11:11 AM Bug #59080 (Fix Under Review): mclock-config.sh: TEST_profile_disallow_builtin_params_modify fail...
- The test script issue is related to the timing of a check after a change to the mon DB is made. Any changes to the mon DB conf...
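The timing issue described above suggests polling until the mon DB change propagates rather than asserting immediately; a hedged sketch (the option name, value, and timeout are assumptions, not the actual test code):
```shell
# A config change lands in the mon DB first and reaches the daemon
# asynchronously, so poll `ceph config show` instead of checking once.
ceph config set osd.0 osd_mclock_profile custom
for i in $(seq 1 30); do
    res=$(ceph config show osd.0 osd_mclock_profile)
    [ "$res" = "custom" ] && break
    sleep 1
done
[ "$res" = "custom" ] || echo "config change did not propagate in time"
```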
- 10:50 AM Backport #52892 (In Progress): pacific: ceph-kvstore-tool repair segmentfault without bluestore-kv
- 10:49 AM Backport #52893 (Rejected): octopus: ceph-kvstore-tool repair segmentfault without bluestore-kv
- Octopus is EOL
- 09:12 AM Bug #48843 (Resolved): Get more parallel scrubs within osd_max_scrubs limits
- 09:11 AM Backport #49776 (Rejected): octopus: Get more parallel scrubs within osd_max_scrubs limits
- Octopus is EOL
- 09:11 AM Backport #52839 (In Progress): pacific: rados: build minimally when "WITH_MGR" is off
- 09:10 AM Backport #52791 (In Progress): pacific: common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_...
- 09:10 AM Backport #52838 (Rejected): octopus: rados: build minimally when "WITH_MGR" is off
- Octopus is EOL
- 09:09 AM Backport #52792 (Rejected): octopus: common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_fli...
- Octopus is EOL
- 09:09 AM Bug #48959 (Resolved): Primary OSD crash caused corrupted object and further crashes during backf...
- 09:09 AM Backport #52937 (Rejected): octopus: Primary OSD crash caused corrupted object and further crashe...
- Octopus is EOL
- 09:07 AM Bug #45868 (Resolved): rados_api_tests: LibRadosWatchNotify.AioWatchNotify2 fails
- 09:07 AM Backport #55768 (Resolved): pacific: rados_api_tests: LibRadosWatchNotify.AioWatchNotify2 fails
- 09:06 AM Backport #55767 (Rejected): octopus: rados_api_tests: LibRadosWatchNotify.AioWatchNotify2 fails
- Octopus is EOL
- 09:06 AM Bug #53506 (Closed): mon: frequent cpu_tp had timed out messages
- 09:04 AM Backport #53719 (Resolved): octopus: mon: frequent cpu_tp had timed out messages
- 02:37 AM Bug #59510: osd crash
- The index pool is made of SSDs and the data pool of HDDs; the crash message comes from an HDD. Is there a way to avoid t...
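The DB compaction suggested in the 04/26/2023 12:14 PM comment on this bug can be sketched as follows (the OSD id and data path are placeholders):
```shell
# Offline compaction of one OSD's key-value DB; the OSD must be stopped first.
# OSD id (3) and data path are placeholders.
systemctl stop ceph-osd@3
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-3 compact
systemctl start ceph-osd@3

# Alternatively, trigger an online compaction through the cluster:
ceph tell osd.3 compact
```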
04/26/2023
- 07:59 PM Bug #53751: "N monitors have not enabled msgr2" is always shown for new clusters
- Hi Radoslaw, before that, a quick thing for your consideration I just found:
Running monmaptool is step 13 in http...
- 06:05 PM Bug #59564 (Pending Backport): Connection scores not populated properly on monitors post installa...
- ...
- 04:16 PM Bug #47838: mon/test_mon_osdmap_prune.sh: first_pinned != trim_to
- RCA by Aishwarya: https://gist.github.com/amathuria/26f5e9ecfc3f04a70c9795039fdf0c35?permalink_comment_id=4549186#gis...
- 12:14 PM Bug #59510: osd crash
- You might also want to compact this OSD's DB using ceph-kvstore-tool. There is a chance that the timeout is caused by ...
- 07:04 AM Bug #59510: osd crash
- like this?
*[6880136.695917] tp_osd_tp[6383]: segfault at 0 ip 00007ff38f003573 sp 00007ff36ba8a240 error 4 in libt...
- 11:50 AM Backport #59456 (In Progress): quincy: Monitors do not permit OSD to join after upgrading to Quincy
- 11:49 AM Backport #59455 (In Progress): pacific: Monitors do not permit OSD to join after upgrading to Quincy
- 07:00 AM Bug #57977: osd:tick checking mon for new map
- Radoslaw Zarzynski wrote:
> Yup, the patch does exactly that – it ensures that a random nonce is always used.
I h...
- 01:36 AM Bug #59532: quincy: cephadm.upgrade from 16.2.4 (related?) stuck with one OSD upgraded
- Radoslaw Zarzynski wrote:
> Hi Patrick!
> How reproducible this is? Is it constant or perhaps it happened just once...
04/25/2023
- 06:13 PM Bug #56393: failed to complete snap trimming before timeout
- Bump up.
- 06:12 PM Bug #59049 (In Progress): WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
- 06:11 PM Bug #59510 (Need More Info): osd crash
- It looks like the scan-for-backfill operation was taking a long time and triggered the thread heartbeat. This could be even ...
- 06:08 PM Bug #59531: quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.0...
- Hi Aishwarya! What do you think about Patrick's question: "Should we (fs suite) be setting a config to mute this WRN...
- 12:25 AM Bug #59531 (Pending Backport): quincy: "OSD bench result of 228617.361065 IOPS exceeded the thres...
- /ceph/teuthology-archive/pdonnell-2023-04-24_17:17:44-fs-wip-pdonnell-testing-20230420.183701-quincy-distro-default-s...
- 06:05 PM Bug #53751: "N monitors have not enabled msgr2" is always shown for new clusters
- Hello Niklas!
Thanks for getting back to it! Could you please collect monitor's logs with @debug_ms=20@ and @debug...
- 01:44 AM Bug #53751: "N monitors have not enabled msgr2" is always shown for new clusters
- The fundamental issue here seems to be that in my newly deployed test cluster, nothing listens on port 3300 even thou...
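For reference, the monitor-log collection requested in the 05:55 PM comment above (@debug_ms=20@) could be done roughly like this; the @debug_mon@ level and the restore values are assumptions, since the original comment is truncated:
```shell
# Raise the message-layer (and, assumed, monitor) debug levels for all mons.
ceph config set mon debug_ms 20
ceph config set mon debug_mon 20

# ... reproduce the issue, then collect /var/log/ceph/ceph-mon.*.log ...

# Restore defaults afterwards (assumed default levels).
ceph config set mon debug_ms 0
ceph config set mon debug_mon 1/5
```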
- 05:56 PM Bug #59333: PgScrubber: timeout on reserving replicas
- bump up
- 03:46 PM Bug #59333: PgScrubber: timeout on reserving replicas
- See the same on pacific 16.2.13 RC
http://qa-proxy.ceph.com/teuthology/yuriw-2023-04-25_14:15:06-smoke-pacific-rel...
- 05:46 PM Bug #57977: osd:tick checking mon for new map
- Yup, the patch does exactly that – it ensures that a random nonce is always used.
- 05:42 PM Bug #59532 (Need More Info): quincy: cephadm.upgrade from 16.2.4 (related?) stuck with one OSD up...
- Hi Patrick!
How reproducible this is? Is it constant or perhaps it happened just once? I'm asking because of the rec...
- 12:34 AM Bug #59532 (Closed): quincy: cephadm.upgrade from 16.2.4 (related?) stuck with one OSD upgraded
- ...
- 10:17 AM Backport #59538 (Rejected): pacific: osd/scrub: verify SnapMapper consistency not backported
- 10:17 AM Backport #59537 (Resolved): quincy: osd/scrub: verify SnapMapper consistency not backported
- https://github.com/ceph/ceph/pull/52182
- 10:12 AM Bug #59478: osd/scrub: verify SnapMapper consistency not backported
- @Wout, the bot should create backport tickets soon
- 10:11 AM Bug #59478 (Pending Backport): osd/scrub: verify SnapMapper consistency not backported
- 10:04 AM Bug #56147: snapshots will not be deleted after upgrade from nautilus to pacific
- Matan Breizman wrote:
> > For already-converted clusters: Separate PR will be issued to remove/update the malformed ...
04/24/2023
- 10:46 PM Backport #59179: pacific: [pg-autoscaler][mgr] does not throw warn to increase PG count on pools ...
- Kamoltat (Junior) Sirivadhna wrote:
> https://github.com/ceph/ceph/pull/50694
merged
04/23/2023
- 02:52 AM Bug #59510 (Need More Info): osd crash
- ...
04/21/2023
- 04:11 PM Bug #51729: Upmap verification fails for multi-level crush rule
- Hi Chris, this issue was actually discussed at Cephalocon. Looking at the verify_upmap code, it seems that we may nee...
- 04:06 PM Bug #51729: Upmap verification fails for multi-level crush rule
- Laura Flores wrote:
> Hi Chris, yes, I will post another update soon with my findings.
Pinging for updates....
- 06:24 AM Bug #59478: osd/scrub: verify SnapMapper consistency not backported
- I think the backport should go to at least pacific and quincy
04/20/2023
- 09:24 PM Bug #59504: 17.2.6: build fails with fmt 9.1.0
- I found that this issue can be fixed by adding -DFMT_DEPRECATED_OSTREAM to CXXFLAGS.
So after running `CXXFLAGS=-DFMT_DEP...
- 07:35 PM Bug #59504 (Need More Info): 17.2.6: build fails with fmt 9.1.0
- fmt 9.1.0
cmake settings...
- 03:44 AM Bug #57977: osd:tick checking mon for new map
- There are two conditions that can cause this problem:
1. The OSDmap version held by the MON is the same as the OSD's...
04/19/2023
- 12:02 PM Bug #57977: osd:tick checking mon for new map
- Radoslaw Zarzynski wrote:
> My understanding:
>
> 0. The OSD (as a process) got down BUT it was up **in the OSDMa...
04/18/2023
- 10:34 AM Bug #59478 (Closed): osd/scrub: verify SnapMapper consistency not backported
- We have a case where a cluster is suffering from malformed snapmapper keys due to bug https://tracker.ceph.com/issues...
04/17/2023
- 08:58 AM Bug #59049: WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
- According to PR https://github.com/ceph/ceph/pull/44050 we can ignore that warning; I'll add it to the ignore log list.
04/16/2023
- 09:05 AM Backport #59456 (Resolved): quincy: Monitors do not permit OSD to join after upgrading to Quincy
- https://github.com/ceph/ceph/pull/51102
- 09:05 AM Backport #59455 (Resolved): pacific: Monitors do not permit OSD to join after upgrading to Quincy
- https://github.com/ceph/ceph/pull/51382
- 09:02 AM Bug #58156 (Pending Backport): Monitors do not permit OSD to join after upgrading to Quincy
04/15/2023
- 06:14 AM Bug #57977: osd:tick checking mon for new map
- Radoslaw Zarzynski wrote:
> My understanding:
>
> 0. The OSD (as a process) got down BUT it was up **in the OSDMa...
04/13/2023
- 06:25 PM Bug #57977: osd:tick checking mon for new map
- My understanding:
0. The OSD (as a process) got down BUT it was up **in the OSDMap** -- these are 2 different thin...
- 06:15 PM Bug #59333: PgScrubber: timeout on reserving replicas
- Assigning for a screening of whether this is a real problem or not (a testing issue?).
If it is, we could reassign even...
- 06:10 PM Bug #59049: WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
- Not a high priority; good opportunity to learn.
- 06:06 PM Bug #56393: failed to complete snap trimming before timeout
- Bump up.
- 06:02 PM Bug #49810 (Need More Info): rados/singleton: with msgr-failures/none MON_DOWN due to haven't for...
- No re-occurrence has been recorded in over 2 years, so we would need to wait for one to get logs.
- 05:53 PM Bug #59192 (In Progress): cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an appl...
04/10/2023
- 03:52 PM Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- /a/lflores-2023-04-07_22:22:04-rados-wip-yuri4-testing-2023-04-07-1825-distro-default-smithi/7235344
- 10:21 AM Bug #49810: rados/singleton: with msgr-failures/none MON_DOWN due to haven't formed initial quoru...
- Does somebody know why?
04/09/2023
- 07:48 AM Backport #55792 (Rejected): octopus: CEPH Graylog Logging Missing "host" Field
- Octopus is EOL
04/07/2023
- 10:43 PM Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- /a/yuriw-2023-03-30_21:29:24-rados-wip-yuri2-testing-2023-03-30-0826-distro-default-smithi/7227539
- 05:54 PM Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- /a/yuriw-2023-04-04_15:24:40-rados-wip-yuri4-testing-2023-03-31-1237-distro-default-smithi/7231452
- 10:42 PM Bug #59057: rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_t...
- /a/yuriw-2023-03-30_21:29:24-rados-wip-yuri2-testing-2023-03-30-0826-distro-default-smithi/7227612
- 06:17 PM Bug #59057: rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_t...
- /a/yuriw-2023-04-04_15:24:40-rados-wip-yuri4-testing-2023-03-31-1237-distro-default-smithi/7231256
- 10:41 PM Bug #59080: mclock-config.sh: TEST_profile_disallow_builtin_params_modify fails when $res == $opt...
- /a/yuriw-2023-03-30_21:29:24-rados-wip-yuri2-testing-2023-03-30-0826-distro-default-smithi/7227588
- 05:58 PM Bug #56393: failed to complete snap trimming before timeout
- /a/yuriw-2023-04-04_15:24:40-rados-wip-yuri4-testing-2023-03-31-1237-distro-default-smithi/7231364
From rados/thra...
- 05:53 PM Bug #59049: WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event
- /a/yuriw-2023-04-04_15:24:40-rados-wip-yuri4-testing-2023-03-31-1237-distro-default-smithi/7231129
- 05:50 PM Bug #17945: ceph_test_rados_api_tier: failed to decode hitset in HitSetWrite test
- /a/yuriw-2023-04-04_15:24:40-rados-wip-yuri4-testing-2023-03-31-1237-distro-default-smithi/7231399
04/06/2023
- 11:29 PM Bug #57852 (Need More Info): osd: unhealthy osd cannot be marked down in time
- I am working on a probable fix for this issue, but I have not been able to reproduce it on a vstart cluster by blockin...
- 07:01 PM Bug #59271 (Fix Under Review): mon: FAILED ceph_assert(osdmon()->is_writeable())
04/05/2023
- 09:30 PM Bug #59333: PgScrubber: timeout on reserving replicas
- /a/yuriw-2023-03-29_17:58:41-rados-wip-yuri11-testing-2023-03-28-0950-distro-default-smithi/7225752
- 08:53 PM Bug #59333 (New): PgScrubber: timeout on reserving replicas
- /a/yuriw-2023-03-28_22:43:59-rados-wip-yuri11-testing-2023-03-28-0950-distro-default-smithi/7224215...
- 09:21 PM Backport #58613: pacific: pglog growing unbounded on EC with copy by ref
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49937
merged
- 08:49 PM Bug #59057: rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_t...
- /a/yuriw-2023-03-28_22:43:59-rados-wip-yuri11-testing-2023-03-28-0950-distro-default-smithi/7224527
- 08:47 PM Bug #59271: mon: FAILED ceph_assert(osdmon()->is_writeable())
- /a/yuriw-2023-03-28_22:43:59-rados-wip-yuri11-testing-2023-03-28-0950-distro-default-smithi/7224432/remote/smithi137/...
- 08:24 PM Bug #59080: mclock-config.sh: TEST_profile_disallow_builtin_params_modify fails when $res == $opt...
- /a/yuriw-2023-03-30_21:53:20-rados-wip-yuri7-testing-2023-03-29-1100-distro-default-smithi/7228106
- 08:22 PM Bug #47838: mon/test_mon_osdmap_prune.sh: first_pinned != trim_to
- /a/yuriw-2023-03-30_21:53:20-rados-wip-yuri7-testing-2023-03-29-1100-distro-default-smithi/7228118
- 08:20 PM Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- /a/yuriw-2023-03-30_21:53:20-rados-wip-yuri7-testing-2023-03-29-1100-distro-default-smithi/7227904
- 07:30 PM Bug #59192: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enable...
- /a/yuriw-2023-03-30_21:53:20-rados-wip-yuri7-testing-2023-03-29-1100-distro-default-smithi/7227986
- 07:00 PM Bug #59192: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enable...
- /a/yuriw-2023-03-16_21:59:27-rados-wip-yuri6-testing-2023-03-12-0918-pacific-distro-default-smithi/7211186
/a/yuriw-...
- 07:15 PM Bug #49525: found snap mapper error on pg 3.2s1 oid 3:4abe9991:::smithi10121515-14:e4 snaps missi...
- /a/yuriw-2023-03-13_19:57:13-rados-wip-yuri6-testing-2023-03-12-0918-pacific-distro-default-smithi/7205944
- 03:48 AM Bug #57977: osd:tick checking mon for new map
- Radoslaw Zarzynski wrote:
> That's a very good question. How about providing logs from both monitors and the problem...
04/03/2023
- 05:57 PM Bug #59291: pg_pool_t version compatibility issue
- Oops, it looks like @is_stretch_pool()@ really doesn't depend on @features@....
- 03:06 PM Bug #59291: pg_pool_t version compatibility issue
- mail : ...
- 02:06 PM Bug #59291: pg_pool_t version compatibility issue
- Ceph Version 17.2.5 also has the same problem
- 02:05 PM Bug #59291: pg_pool_t version compatibility issue
- ceph version 16.2.16
- 02:03 PM Bug #59291: pg_pool_t version compatibility issue
- Thank you for helping me modify the status
- 01:52 PM Bug #59291 (New): pg_pool_t version compatibility issue
- How is pg_pool_t version forward compatible? For example, if I want to add a new field, how should I modify it?
!i...
- 05:21 PM Bug #58940: src/osd/PrimaryLogPG.cc: 4284: ceph_abort_msg("out of order op")
- A note from bug scrub: this affects reef and main only.
- 03:07 PM Bug #59271: mon: FAILED ceph_assert(osdmon()->is_writeable())
- Let's keep this tracker, and I'll mark it as related to https://tracker.ceph.com/issues/57017
- 02:57 PM Bug #59271: mon: FAILED ceph_assert(osdmon()->is_writeable())
- Let's mark this one as a duplicate if you already have a Tracker open for the issue.
- 02:32 PM Bug #59271: mon: FAILED ceph_assert(osdmon()->is_writeable())
- Thanks for reporting this. I'm summarizing it for the record:
1. We had this bug which also happens downstream: http...
- 02:09 PM Bug #58132 (Resolved): qa/standalone/mon: --mon-initial-members setting causes us to populate rem...
- All backports are merged
- 02:09 PM Backport #58336 (Resolved): pacific: qa/standalone/mon: --mon-initial-members setting causes us t...
- Backport PR: https://github.com/ceph/ceph/pull/49312 Merged!
- 02:07 PM Backport #58335 (Resolved): quincy: qa/standalone/mon: --mon-initial-members setting causes us to...
- backport PR: https://github.com/ceph/ceph/pull/49311 merged!
03/31/2023
- 10:22 PM Bug #59286 (New): mon/test_mon_osdmap_prune.sh: test times out after 5+ hours
- On the initial run, this test ran for almost 4 hours before it timed out:
/a/yuriw-2023-03-14_20:10:47-rados-wip-yur...
- 09:50 PM Bug #59285 (New): mon/mon-last-epoch-clean.sh: TEST_mon_last_clean_epoch failure due to stuck pgs
- /a/yuriw-2023-03-17_23:38:21-rados-reef-distro-default-smithi/7212349...
- 09:35 PM Bug #59057: rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_t...
- /a/yuriw-2023-03-14_20:10:47-rados-wip-yuri-testing-2023-03-14-0714-reef-distro-default-smithi/7207197
- 09:33 PM Bug #58130: LibRadosAio.SimpleWrite hang and pkill
- /a/yuriw-2023-03-14_20:10:47-rados-wip-yuri-testing-2023-03-14-0714-reef-distro-default-smithi/7207033...
- 09:11 PM Bug #56393: failed to complete snap trimming before timeout
- /a/yuriw-2023-03-14_20:10:47-rados-wip-yuri-testing-2023-03-14-0714-reef-distro-default-smithi/7207042
- 07:27 PM Bug #56393: failed to complete snap trimming before timeout
- /a/yuriw-2023-03-14_20:10:47-rados-wip-yuri-testing-2023-03-14-0714-reef-distro-default-smithi/7207193
- 03:27 PM Bug #59192: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enable...
- /a/yuriw-2023-03-27_23:05:54-rados-wip-yuri4-testing-2023-03-25-0714-distro-default-smithi/7221965
- 12:11 AM Bug #59192: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enable...
- this seems to happen exclusively against ubuntu 22.04:
https://pulpito.ceph.com/cbodley-2023-03-30_21:31:09-rgw:veri...
- 03:22 PM Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- Yes Radek, it is being investigated by Brad.
/a/yuriw-2023-03-27_23:05:54-rados-wip-yuri4-testing-2023-03-25-0714-...
- 03:20 PM Bug #59271: mon: FAILED ceph_assert(osdmon()->is_writeable())
- Junior maybe you have an idea? The last issue fixed like this on Pacific was https://tracker.ceph.com/issues/58239.
- 03:20 PM Bug #59271 (Resolved): mon: FAILED ceph_assert(osdmon()->is_writeable())
- /a/yuriw-2023-03-27_23:05:54-rados-wip-yuri4-testing-2023-03-25-0714-distro-default-smithi/7222122/remote/smithi181/l...
- 12:36 PM Bug #48750: ceph config set using osd/host mask not working
- I also have this same problem with v17.2.5.
Unfortunately, this makes osd_memory_target_autotune useless.
Happy...
03/30/2023
- 06:48 PM Bug #57782: [mon] high cpu usage by fn_monstore thread
- In the upcoming v17.2.6 we'll have the extra debugs (see https://github.com/ceph/ceph/pull/50406). Would you mind to ...
- 06:47 PM Backport #58169 (Resolved): quincy: extra debugs for: [mon] high cpu usage by fn_monstore thread
- 06:44 PM Bug #59080: mclock-config.sh: TEST_profile_disallow_builtin_params_modify fails when $res == $opt...
- Bump up.
- 06:43 PM Bug #56034: qa/standalone/osd/divergent-priors.sh fails in test TEST_divergent_3()
- Running the same command again on gibba gives a valid out, and now the PG is active+clean. So this probably has to do...
- 06:28 PM Bug #56034: qa/standalone/osd/divergent-priors.sh fails in test TEST_divergent_3()
- gibba also returns -1 for the same command...
- 06:22 PM Bug #56034: qa/standalone/osd/divergent-priors.sh fails in test TEST_divergent_3()
- This is where we got -1 from pg_stats...
- 06:21 PM Bug #56034: qa/standalone/osd/divergent-priors.sh fails in test TEST_divergent_3()
- The direct root cause of the failure is that we tried to start an OSD with ID set to @-1@:...
- 06:04 PM Bug #59172: test_pool_min_size: AssertionError: wait_for_clean: failed before timeout expired due...
- Would you mind taking a look as you already have the context?
- 06:02 PM Bug #36304: FAILED ceph_assert(p != pg_slots.end()) in OSDShard::register_and_wake_split_child(PG*)
- Hello Nathan!
Do you have a log or a coredump by any chance?
- 06:00 PM Bug #59196 (In Progress): ceph_test_lazy_omap_stats segfault while waiting for active+clean
- It looks like the problem is under investigation. Please correct me if I'm wrong.
- 05:55 PM Bug #58940 (Fix Under Review): src/osd/PrimaryLogPG.cc: 4284: ceph_abort_msg("out of order op")
- 03:15 PM Bug #59192: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enable...
- @Matan that's probably right, although I wonder what changed to make this pop up so frequently in the rados/rgw suite...
- 08:27 AM Bug #59192: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enable...
- Neha Ojha wrote:
> Looking at a previous run very similar to ... that had passed, it appears that the warning existe...
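For reference, the "pool(s) do not have an application enabled" warning discussed in these entries is normally cleared by tagging the pool with an application; a minimal sketch (pool and application names below are placeholders):
```shell
# Associate an application with the pool so the health check
# "1 pool(s) do not have an application enabled" clears.
# "test-pool" and "rados" are placeholder names.
ceph osd pool application enable test-pool rados
```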
03/29/2023
- 05:04 AM Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- Note that this tracker was originally #59058 until it was accidentally deleted by me.
Below is a summary of th...
- 04:59 AM Bug #59196 (Fix Under Review): ceph_test_lazy_omap_stats segfault while waiting for active+clean
- ...