Activity
From 03/25/2022 to 04/23/2022
04/23/2022
- 09:59 AM Bug #55383: monitor cluster logs(ceph.log) appear empty until rotated
- I suspect this issue is due to https://github.com/ceph/ceph/commit/7c84e06e6f846f6b4b6fd959218b4d474520f429 and have ...
- 12:04 AM Bug #55419 (In Progress): cephtool/test.sh: failure on blocklist testing
04/22/2022
- 10:00 PM Bug #52153: crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): abort
- I have also seen this crash on my monitor running 16.2.7.
- 09:24 PM Bug #55407: quincy osd's fail to boot and crash
- I saw the stacktrace. This time v17.2.0. Latest...
- 09:20 PM Bug #55407: quincy osd's fail to boot and crash
- Ok. This is the situation:
1.- OSD built from scratch in pacific. (docker pull ceph/daemon:latest-pacific)(
2.- U...
- 08:47 PM Bug #55407: quincy osd's fail to boot and crash
- Igor Fedotov wrote:
> >2022-04-22T13:34:42.419+0000 7fd5798ed080 -1 bluefs _replay 0x11000: stop: unrecognized op 12...
- 03:36 PM Bug #55407: quincy osd's fail to boot and crash
- >2022-04-22T13:34:42.419+0000 7fd5798ed080 -1 bluefs _replay 0x11000: stop: unrecognized op 12
@Gonzalo, AFAIU you...
- 01:38 PM Bug #55407: quincy osd's fail to boot and crash
- Neha Ojha wrote:
> Did you see the same segmentation fault in quincy and pacific? Were you testing a custom build of...
- 09:21 PM Bug #55419 (Resolved): cephtool/test.sh: failure on blocklist testing
- /a/yuriw-2022-04-22_13:56:48-rados-wip-yuri2-testing-2022-04-22-0500-distro-default-smithi/6800292...
- 07:52 PM Bug #24057 (Rejected): cbt fails to copy results to the archive dir
- 06:27 PM Bug #43189 (Resolved): pgs stuck in laggy state
- 06:27 PM Backport #43232 (Rejected): nautilus: pgs stuck in laggy state
- Nautilus is EOL
- 06:26 PM Bug #41385 (Resolved): osd/ReplicatedBackend.cc: 1349: FAILED ceph_assert(peer_missing.count(from...
- 06:26 PM Backport #41731 (Rejected): nautilus: osd/ReplicatedBackend.cc: 1349: FAILED ceph_assert(peer_mis...
- Nautilus is EOL
- 02:46 PM Backport #55405 (In Progress): quincy: librados C++ API requires C++17 to build
- https://github.com/ceph/ceph/pull/46005
- 02:41 PM Backport #55406 (In Progress): pacific: librados C++ API requires C++17 to build
- https://github.com/ceph/ceph/pull/46004
04/21/2022
- 11:11 PM Bug #55407 (Need More Info): quincy osd's fail to boot and crash
- Did you see the same segmentation fault in quincy and pacific? Were you testing a custom build of ceph (17.1.0 is a d...
- 08:20 PM Bug #55407 (Rejected): quincy osd's fail to boot and crash
- I have a cluster with pacific. One of the osd started to crash...
So I zapped the disk and recreated again. I foun...
- 08:21 PM Bug #53729: ceph-osd takes all memory before oom on boot
- Gonzalo Aguilar Delgado wrote:
> I suppose this thread can be closed as soon as the fix is in master. But just for r...
- 06:10 PM Bug #53729: ceph-osd takes all memory before oom on boot
- I suppose this thread can be closed as soon as the fix is in master. But just for reference, in case has something to...
- 05:01 PM Bug #53729: ceph-osd takes all memory before oom on boot
- Mykola Golub wrote:
> Gonzalo Aguilar Delgado wrote:
>
> > Mykola, specially thank you for doing the patch.
>
...
- 04:39 PM Bug #53729: ceph-osd takes all memory before oom on boot
- Gonzalo Aguilar Delgado wrote:
> Mykola, specially thank you for doing the patch.
I am not the author of the pa...
- 04:34 PM Bug #53729: ceph-osd takes all memory before oom on boot
- Yesssss!!! Great job team!
It's up & running. It purged dups, booted the ceph-osd and only 1/2Gb RAM full booted. ...
- 04:23 PM Bug #53729: ceph-osd takes all memory before oom on boot
- Mykola Golub wrote:
> Gonzalo Aguilar Delgado wrote:
>
> > CEPH_ARGS="--osd_pg_log_trim_max=10000 --osd_max_pg_lo...
- 07:30 AM Bug #53729: ceph-osd takes all memory before oom on boot
- Gonzalo Aguilar Delgado wrote:
> CEPH_ARGS="--osd_pg_log_trim_max=10000 --osd_max_pg_log_entries=2000 " LD_LIBRARY...
- 07:01 AM Bug #53729: ceph-osd takes all memory before oom on boot
- Nitzan Mordechai wrote:
> Gonzalo Aguilar Delgado wrote:
> > Nitzan Mordechai wrote:
> > > Can you please add the ...
- 07:28 PM Bug #55383: monitor cluster logs(ceph.log) appear empty until rotated
- Just for completeness: as expected, this issue happened again today in the gibba cluster during the log rotation window and...
- 05:14 PM Feature #54115 (In Progress): Log pglog entry size in OSD log if it exceeds certain size limit
- 04:25 PM Backport #55406 (Rejected): pacific: librados C++ API requires C++17 to build
- https://github.com/ceph/ceph/pull/46004
- 04:25 PM Backport #55405 (In Progress): quincy: librados C++ API requires C++17 to build
- 04:22 PM Bug #55233: librados C++ API requires C++17 to build
- The c++ api was created only for internal use. It should not be held to such a guarantee. At least, that's what I u...
- 04:20 PM Bug #55233 (Pending Backport): librados C++ API requires C++17 to build
- 03:31 PM Feature #53050 (Pending Backport): Support blocklisting a CIDR range
- 03:31 PM Feature #53050 (Resolved): Support blocklisting a CIDR range
- 12:21 PM Feature #55402 (New): rgw: Add dbstore & cloud-transition test-suites to teuthology
- Add new test-suites to teuthology for the RGW features below:
* cloud-transition
* dbstore backend
- 03:40 AM Bug #55355: osd thread deadlock
- Thanks for your reply @Radoslaw Zarzynski.
I checked the latest code and found that the code logic is the same. I th...
04/20/2022
- 09:08 PM Bug #51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC back...
- Looking further into this issue, it looks like the bug occurs whenever "prep_object_replica_pushes()" is called, and ...
- 08:22 PM Bug #55383: monitor cluster logs(ceph.log) appear empty until rotated
Neha noticed today that in the LRC cluster, even with this workaround still in place, this cluster, when it went through a log r...
- 06:50 PM Bug #49231: MONs unresponsive over extended periods of time
- Mimic is EOL :-(. Would you be able to upgrade soon?
- 06:43 PM Bug #55101 (New): mon has slow op
- 06:33 PM Bug #55255 (Need More Info): "ceph iostat" exception!
- 06:33 PM Bug #55255: "ceph iostat" exception!
- Hello! How do you synchronize the clocks in your cluster? Is NTP properly running?
I'm asking about that to ensure...
- 06:24 PM Bug #47025: rados/test.sh: api_watch_notify_pp LibRadosWatchNotifyECPP.WatchNotify failed
- @Laura, @Nitzan: the assertion failure in the comment #7 is about @reply_map.size()@ while other occurrences mention ...
- 06:06 PM Bug #50608: ceph_assert(is_primary()) in PrimaryLogPG::on_local_recover
- Hello! Looks like it's reproducible, which is good.
Would you be able to provide logs with extra debugs as mentioned in ht...
- 06:02 PM Bug #55355: osd thread deadlock
- Hello! @14.2.22@ is actually end-of-life. Would you be able to verify the issue on a newer release?
- 09:53 AM Bug #55355: osd thread deadlock
- I find that thread 45 wants to stop the connection, but the connection has already been stopped by thread 71
@
(gdb) f 5
#5 Async...
- 05:57 PM Bug #51463 (Resolved): blocked requests while stopping/starting OSDs
- 05:55 PM Bug #52948: osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
- Lowering the priority as the last reply is 5 months old.
- 05:51 PM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
- Good to know, and thanks for your testing!
Just for the record: leaving the bug in the @Need More Info@ state as the...
- 04:37 PM Bug #53729: ceph-osd takes all memory before oom on boot
- Gonzalo Aguilar Delgado wrote:
> Nitzan Mordechai wrote:
> > Can you please add the output of trim-pg-log ?
> > CE... - 04:23 PM Bug #53729: ceph-osd takes all memory before oom on boot
- Mykola Golub wrote:
> Just as information that might be useful for someone. Although ceph-objectstore-tool is a more...
- 04:21 PM Bug #53729: ceph-osd takes all memory before oom on boot
- Nitzan Mordechai wrote:
> Can you please add the output of trim-pg-log ?
> CEPH_ARGS="--osd_pg_log_trim_max=10000 -...
- 02:47 PM Bug #53729: ceph-osd takes all memory before oom on boot
- Just as information that might be useful for someone. Although ceph-objectstore-tool is a more reliable way to confir...
- 12:03 PM Bug #53729: ceph-osd takes all memory before oom on boot
- Can you please add the output of trim-pg-log ?
CEPH_ARGS="--osd_pg_log_trim_max=10000 --osd_max_pg_log_entries=2000 ...
- 11:06 AM Bug #53729: ceph-osd takes all memory before oom on boot
- After a while it crashed...
-34> 2022-04-20T11:02:25.218+0000 7f72b3b83640 5 rocksdb: commit_cache_size High Pri...
- 10:50 AM Bug #53729: ceph-osd takes all memory before oom on boot
- I've built the repo from git@github.com:NitzanMordhai/ceph.git branch origin/wip-nitzan-pglog-dups-not-trimmed
And...
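For readers following this thread: the offline pg-log trimming being discussed is normally done with ceph-objectstore-tool against a stopped OSD. A minimal sketch, not a confirmed procedure from this thread; the OSD id, data path, and pgid are placeholders, and the CEPH_ARGS values are the ones quoted in the comments above:

```shell
# Hypothetical example: trim the pg log of one PG on a stopped OSD.
# osd.2, its data path, and pgid 3.1f are placeholders; adjust to your cluster.
systemctl stop ceph-osd@2

CEPH_ARGS="--osd_pg_log_trim_max=10000 --osd_max_pg_log_entries=2000" \
  ceph-objectstore-tool \
    --data-path /var/lib/ceph/osd/ceph-2 \
    --pgid 3.1f \
    --op trim-pg-log

systemctl start ceph-osd@2
```

The tool needs exclusive access to the OSD's object store, which is why the daemon must be stopped first.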
04/19/2022
- 11:45 PM Bug #55383: monitor cluster logs(ceph.log) appear empty until rotated
- we found out that quincy has https://github.com/ceph/ceph/pull/40640 log_to_journald feature. When we set ...
- 05:48 PM Bug #55383: monitor cluster logs(ceph.log) appear empty until rotated
- Tim Wilkinson wrote:
> While executing tests on 17.1.0-203-g2c8d01fc, I see the ceph.log files on all MONs are zero ...
- 04:56 PM Bug #55383: monitor cluster logs(ceph.log) appear empty until rotated
- Gibba cluster quincy version `17.1.0-163-g4e244311`.
- 04:52 PM Bug #55383: monitor cluster logs(ceph.log) appear empty until rotated
- I don't think we have seen this issue in previous quincy builds; something changed recently.
- 04:51 PM Bug #55383: monitor cluster logs(ceph.log) appear empty until rotated
- Tim Wilkinson wrote:
> While executing tests on 17.1.0-203-g2c8d01fc, I see the ceph.log files on all MONs are zero ...
- 04:43 PM Bug #55383 (Resolved): monitor cluster logs(ceph.log) appear empty until rotated
- While executing tests on 17.1.0-203-g2c8d01fc, I see the ceph.log files on all MONs are zero length unless rotated.
... - 08:13 AM Backport #55019: octopus: partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark omap dirty
- Sorry for being a nag ... I initially reported https://tracker.ceph.com/issues/53663 and still observe the issues of ...
04/18/2022
- 12:55 PM Bug #55355 (Resolved): osd thread deadlock
- My ceph version is 14.2.22.
After a network disruption, the osd cannot join the cluster.
Then I find the osd th...
04/16/2022
- 08:18 AM Bug #50608: ceph_assert(is_primary()) in PrimaryLogPG::on_local_recover
- 2022-04-16T00:06:06.526+0200 7f6997402700 -1 osd.110 1166753 heartbeat_check: no reply from <censor>:6812 osd.109 sin...
- 08:15 AM Bug #50608: ceph_assert(is_primary()) in PrimaryLogPG::on_local_recover
- There are a lot of lines like this before the crash line in the log file
-24> 2022-04-16T00:06:02.105+0200 7fede...
- 08:14 AM Bug #50608: ceph_assert(is_primary()) in PrimaryLogPG::on_local_recover
- 2022-04-16T00:06:08.540+0200 7fedde264700 0 log_channel(cluster) log [WRN] : Monitor daemon marked osd.109 down, but...
- 08:09 AM Bug #50608: ceph_assert(is_primary()) in PrimaryLogPG::on_local_recover
- OSD crashed with this again
{
"archived": "2022-04-15 23:07:21.580173",
"assert_condition": "is_primary(...
04/15/2022
- 02:08 AM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
- the patch
osd/PeeringState: fix acting_set_writeable min_size check
can resolve the ceph v15.2.13 recovery_unfoun...
- 02:07 AM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
- jianwei zhang wrote:
> Radoslaw Zarzynski wrote:
> > > the all osds is up&in, so the case doesn't involve recovery_...
04/14/2022
- 07:58 AM Bug #52948: osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
- Radoslaw Zarzynski wrote:
> Hello Sridhar! Is there anything new? Have we discussed it already maybe?
Hello Radek...
- 07:58 AM Bug #51463: blocked requests while stopping/starting OSDs
- Yes. This is fixed by these two tasks:
https://tracker.ceph.com/issues/53327
https://tracker.ceph.com/issues/53326
04/13/2022
- 07:27 PM Bug #53789: CommandFailedError (rados/test_python.sh): "RADOS object not found" causes test_rados...
- /a/nojha-2022-04-13_16:47:41-rados-wip-yuri2-testing-2022-04-13-0703-distro-basic-smithi/6790486
- 06:58 PM Bug #47300: mount.ceph fails to understand AAAA records from SRV record
- Lowering the priority as there are no recent reports about the issue and assigning as it's a good way to learn about ...
- 06:50 PM Bug #52948: osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
- Hello Sridhar! Is there anything new? Have we discussed it already maybe?
- 06:48 PM Bug #50608: ceph_assert(is_primary()) in PrimaryLogPG::on_local_recover
- Lowering the priority further as there is no new info since the last time.
- 06:47 PM Bug #45318: Health check failed: 2/6 mons down, quorum b,a,c,e (MON_DOWN)" in cluster log running...
- Not seeing this very frequently, most likely a result of failure injection
- 06:32 PM Bug #54188: Setting too many PGs leads error handling overflow
- Getting this back to normal as the impact is the inability to create a new pool (no direct threat to data).
- 06:25 PM Bug #51463: blocked requests while stopping/starting OSDs
- We suspect this ticket is actually a duplicate of https://tracker.ceph.com/issues/53327.
If somebody could test the ...
- 06:21 PM Backport #55073 (Resolved): pacific: osd: osd_fast_shutdown_notify_mon not quite right
- 06:20 PM Backport #55075 (Resolved): quincy: osd: osd_fast_shutdown_notify_mon not quite right
- 06:15 PM Bug #43268: Restrict admin socket commands more from the Ceph tool
- A note from a bug scrub:
1. if somebody already has access to the monitors, they can do a lot.
2. no new comments o...
- 06:08 PM Bug #43887: ceph_test_rados_delete_pools_parallel failure
- Lowering the priority as the issue is neither:
* causing data loss,
* a frequent thing.
- 09:41 AM Backport #55296: pacific: malformed json in a Ceph RESTful API call can stop all ceph-mon services
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/45893
ceph-backport.sh versi...
- 09:40 AM Backport #55297: quincy: malformed json in a Ceph RESTful API call can stop all ceph-mon services
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/45892
ceph-backport.sh versi...
- 09:38 AM Backport #55298: octopus: malformed json in a Ceph RESTful API call can stop all ceph-mon services
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/45891
ceph-backport.sh versi...
- 08:40 AM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
- ...
- 02:58 AM Bug #47838: mon/test_mon_osdmap_prune.sh: first_pinned != trim_to
- Neha Ojha wrote:
> Aishwarya, can you please take a look at this bug? could be a test issue, but we should find out
...
04/12/2022
- 11:05 PM Backport #55309 (Resolved): pacific: prometheus metrics shows incorrect ceph version for upgraded...
- https://github.com/ceph/ceph/pull/47693
- 11:05 PM Backport #55308 (Resolved): pacific: Manager is failing to keep updated metadata in daemon_state ...
- https://github.com/ceph/ceph/pull/47692
- 10:40 PM Backport #55306 (Resolved): quincy: prometheus metrics shows incorrect ceph version for upgraded ...
- 10:40 PM Backport #55305 (Resolved): quincy: Manager is failing to keep updated metadata in daemon_state f...
- https://github.com/ceph/ceph/pull/46559
- 07:19 PM Bug #53895 (Resolved): Unable to format `ceph config dump` command output in yaml using `-f yaml`
- 05:45 PM Backport #55298 (Resolved): octopus: malformed json in a Ceph RESTful API call can stop all ceph-...
- 05:45 PM Backport #55297 (Resolved): quincy: malformed json in a Ceph RESTful API call can stop all ceph-m...
- 05:45 PM Backport #55296 (Resolved): pacific: malformed json in a Ceph RESTful API call can stop all ceph-...
- 05:43 PM Bug #54558 (Pending Backport): malformed json in a Ceph RESTful API call can stop all ceph-mon se...
- 12:37 PM Bug #54592: partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark omap dirty
- Verified it downstream by following the steps below.
1. Deployed multisite cluster with 16.2.0-152.el8cp version.
2. ...
- 08:53 AM Backport #55280 (In Progress): quincy: mon/OSDMonitor: properly set last_force_op_resend in stret...
- 08:52 AM Backport #55281 (In Progress): pacific: mon/OSDMonitor: properly set last_force_op_resend in stre...
- 12:11 AM Bug #55088 (Pending Backport): Manager is failing to keep updated metadata in daemon_state for up...
- 12:11 AM Bug #54611 (Pending Backport): prometheus metrics shows incorrect ceph version for upgraded ceph ...
04/11/2022
- 11:05 PM Backport #55282 (Resolved): quincy: osd: add scrub duration for scrubs after recovery
- https://github.com/ceph/ceph/pull/47926
- 11:05 PM Backport #55281 (Resolved): pacific: mon/OSDMonitor: properly set last_force_op_resend in stretch...
- https://github.com/ceph/ceph/pull/45870
- 11:05 PM Backport #55280 (Resolved): quincy: mon/OSDMonitor: properly set last_force_op_resend in stretch ...
- https://github.com/ceph/ceph/pull/45871
- 11:02 PM Bug #55158 (Pending Backport): mon/OSDMonitor: properly set last_force_op_resend in stretch mode
- 10:58 PM Bug #55158: mon/OSDMonitor: properly set last_force_op_resend in stretch mode
- https://github.com/ceph/ceph/pull/45744 merged
- 11:00 PM Bug #54994 (Pending Backport): osd: add scrub duration for scrubs after recovery
- 10:57 PM Bug #54994: osd: add scrub duration for scrubs after recovery
- https://github.com/ceph/ceph/pull/45599 merged
- 10:57 PM Bug #55088: Manager is failing to keep updated metadata in daemon_state for upgraded MON(s) and t...
- https://github.com/ceph/ceph/pull/45670 merged
- 10:56 PM Bug #54558: malformed json in a Ceph RESTful API call can stop all ceph-mon services
- https://github.com/ceph/ceph/pull/45547 merged
- 10:52 PM Bug #54611: prometheus metrics shows incorrect ceph version for upgraded ceph daemon
- https://github.com/ceph/ceph/pull/45505 merged
- 11:02 AM Bug #47025: rados/test.sh: api_watch_notify_pp LibRadosWatchNotifyECPP.WatchNotify failed
- The error in LibRadosWatchNotifyECPP.WatchNotify happened after calling watch, and I think that's the one that we are...
- 06:14 AM Bug #47025: rados/test.sh: api_watch_notify_pp LibRadosWatchNotifyECPP.WatchNotify failed
- Laura Flores wrote:
> Instance of LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3:
>
> /a/yuriw-202...
- 09:03 AM Bug #55255: "ceph iostat" exception!
- @nojha@redhat.com
@adking@redhat.com
@jdurgin@redhat.com
@kchai@redhat.com
- 08:56 AM Bug #55255: "ceph iostat" exception!
- @Neha Ojha
- 08:55 AM Bug #55255: "ceph iostat" exception!
- @dzafman
- 08:29 AM Bug #55255: "ceph iostat" exception!
- From test_utime.cc: when stamp_delta overflows, the test does not behave as expected:
Expected equality of these...
- 08:26 AM Bug #55255: "ceph iostat" exception!
- // src/test/test_utime.cc
TEST(utime_t, localtime)
{
utime_t stamp_delta = utime_t();
vector<ut...
- 08:07 AM Bug #55255: "ceph iostat" exception!
- When this occurs the clock is not synchronized; the clock-related log is "clock.log-20220402"
- 08:03 AM Bug #55255 (Need More Info): "ceph iostat" exception!
- ceph iostat does not execute correctly; it raises an exception!
- 06:04 AM Bug #50042: rados/test.sh: api_watch_notify failures
- It looks like we are too fast here - rados_watch2 didn't call the callback (we still do not have the print of watch_n...
- 05:03 AM Bug #45868 (Fix Under Review): rados_api_tests: LibRadosWatchNotify.AioWatchNotify2 fails
- 02:42 AM Bug #45868: rados_api_tests: LibRadosWatchNotify.AioWatchNotify2 fails
- Nitzan, could you check whether this fix could be expanded to resolve similar issues that are linked to #50042 please?
04/08/2022
- 05:12 PM Feature #54525 (In Progress): osd/mon: log memory usage during tick
- 04:51 PM Feature #54525: osd/mon: log memory usage during tick
- Areas to add this type of logging would be OSD::tick() and Monitor::tick().
- 11:42 AM Bug #52129: LibRadosWatchNotify.AioWatchDelete failed
- /a/yuriw-2022-04-07_20:00:39-rados-wip-yuri5-testing-2022-04-05-1720-distro-default-smithi/6781441...
- 11:24 AM Bug #47025: rados/test.sh: api_watch_notify_pp LibRadosWatchNotifyECPP.WatchNotify failed
- /a/yuriw-2022-04-07_20:00:39-rados-wip-yuri5-testing-2022-04-05-1720-distro-default-smithi/6781441...
- 05:20 AM Bug #55101: mon has slow op
- ceph tag v15.2.13
04/07/2022
- 10:57 PM Bug #55233 (Pending Backport): librados C++ API requires C++17 to build
- This is considered a bug as we still guarantee C++11 compatibility.
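One way to catch a regression like this is to compile a trivial librados client with -std=c++11. A sketch, assuming the librados development headers are installed; the file names are placeholders:

```shell
# probe.cc: minimal librados C++ client (placeholder file).
cat > probe.cc <<'EOF'
#include <rados/librados.hpp>
int main() { librados::Rados cluster; return 0; }
EOF

# If the public headers require C++17 features, this C++11 compile fails.
g++ -std=c++11 -c probe.cc -o probe.o
```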
- 09:30 PM Bug #52026 (Resolved): osd: pgs went back into snaptrim state after osd restart
- 09:27 PM Bug #52026: osd: pgs went back into snaptrim state after osd restart
- https://github.com/ceph/ceph/pull/45785 merged
- 09:30 PM Backport #55139 (Resolved): pacific: osd: pgs went back into snaptrim state after osd restart
- 08:50 PM Bug #51843: osd/scrub: OSD crashes at PG removal
- https://github.com/ceph/ceph/pull/45729 merged
- 04:49 PM Bug #45868: rados_api_tests: LibRadosWatchNotify.AioWatchNotify2 fails
- The rados_watch_check returned zero, but then at some point, until we got the replay buffer, we got -ENOTCONN on our w...
- 04:23 AM Bug #45868: rados_api_tests: LibRadosWatchNotify.AioWatchNotify2 fails
- ...
- 03:52 PM Bug #55141: thrashers/fastread: assertion failure: rollback_info_trimmed_to == head
- Leading up to the ceph_assert failure in osd.2:
/a/yuriw-2022-03-29_21:35:32-rados-wip-yuri5-testing-2022-03-29-11...
- 11:54 AM Backport #55219 (In Progress): quincy: Doc: Update mclock release notes regarding an existing iss...
- 11:25 AM Backport #55219 (Resolved): quincy: Doc: Update mclock release notes regarding an existing issue.
- https://github.com/ceph/ceph/pull/45048
- 11:21 AM Bug #55186 (Pending Backport): Doc: Update mclock release notes regarding an existing issue.
- 08:55 AM Bug #49231: MONs unresponsive over extended periods of time
- Yes, I observed this on mimic latest stable.
I think this issue is still present, but not easy to observe. My gues...
- 06:43 AM Bug #53855: rados/test.sh hangs while running LibRadosTwoPoolsPP.ManifestFlushDupCount
- /a/yuriw-2022-04-06_16:35:43-rados-wip-yuri5-testing-2022-04-05-1720-distro-default-smithi/6780098
- 05:38 AM Bug #55101: mon has slow op
- the attachment includes:
ceph-mon.a.log ceph-mon.b.log ceph-mon.c.log
b.ops (ceph daemon mon.b ops)
c.ops (ceph dae...
- 03:46 AM Bug #20960: ceph_test_rados: mismatched version (due to pg import/export)
- ...
- 03:25 AM Bug #52124: Invalid read of size 8 in handle_recovery_delete()
- /a/yuriw-2022-04-06_16:35:43-rados-wip-yuri5-testing-2022-04-05-1720-distro-default-smithi/6779876
04/06/2022
- 06:49 PM Bug #52488: Pacific mon won't join Octopus mons
- Hello!
Could you please provide the logs with extra debugs: @debug_mon = 20@ and @debug_ms = 1@ from: 1) the new, ...
- 06:39 PM Bug #55109: Export from lost OSD to another OSD - participants list
- This looks actually like a question for the ceph-users mailing list, not as a bug report.
- 06:35 PM Tasks #55159: stretch mode isn't covered in teuthology
- Proposed this as a topic for the next CDS (https://pad.ceph.com/p/cds-reef-rados).
- 06:31 PM Tasks #55159: stretch mode isn't covered in teuthology
- Agreed, making this a task to make incremental progress on this.
- 06:32 PM Feature #55213 (New): Implement code for Ceph to warn of clock SKU on other daemons
- RHBZ - https://bugzilla.redhat.com/show_bug.cgi?id=2072667
- 06:20 PM Feature #55169: crush: should validate rule outputs osds
- Adding the extra check makes sense, I think. Implementing the patch would be low-hanging fruit, but reviewing it will not be.
- 06:17 PM Bug #47838: mon/test_mon_osdmap_prune.sh: first_pinned != trim_to
- Aishwarya, can you please take a look at this bug? could be a test issue, but we should find out
- 06:08 PM Bug #49231: MONs unresponsive over extended periods of time
- Thanks for the update! Could you please say more about the version you've tested this on? Is it mimic, maybe?
- 06:00 PM Bug #53924 (Need More Info): EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
- > I'm thinking about the root cause described in this patch, will it cause the problem I reported ?
Yes, I think ...
- 09:06 AM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
- Radoslaw Zarzynski wrote:
> > the all osds is up&in, so the case doesn't involve recovery_unfound due to osd down.
... - 05:45 PM Bug #53767: qa/workunits/cls/test_cls_2pc_queue.sh: killing an osd during thrashing causes timeout
- Hmm, the last reoccurrence (@/a/yuriw-2022-03-04_00:56:58-rados-wip-yuri4-testing-2022-03-03-1448-distro-default-smith...
- 05:39 PM Bug #51846: rados/test.sh: LibRadosList.ListObjectsCursor did not complete.
- Kamoltat Sirivadhna wrote:
> /a/yuriw-2022-03-04_21:56:41-rados-wip-yuri4-testing-2022-03-03-1448-distro-default-smi...
- 05:00 PM Bug #53895 (Fix Under Review): Unable to format `ceph config dump` command output in yaml using `...
- 02:38 PM Backport #54466 (Resolved): pacific: Setting osd_pg_max_concurrent_snap_trims to 0 prematurely cl...
- 02:38 PM Backport #54467 (Resolved): quincy: Setting osd_pg_max_concurrent_snap_trims to 0 prematurely cle...
- 12:41 AM Bug #50659: Segmentation fault under Pacific 16.2.1 when using a custom crush location hook
- Andrew Davidoff wrote:
> I appreciate the work to get this bug squashed but I wonder if there's a schedule published...
- 12:11 AM Bug #50659: Segmentation fault under Pacific 16.2.1 when using a custom crush location hook
- I appreciate the work to get this bug squashed but I wonder if there's a schedule published somewhere that might indi...
04/05/2022
- 08:52 PM Bug #54515: mon/health-mute.sh: TEST_mute: return 1 (HEALTH WARN 3 mgr modules have failed depend...
- ...
- 08:28 PM Bug #54515: mon/health-mute.sh: TEST_mute: return 1 (HEALTH WARN 3 mgr modules have failed depend...
- /a/ksirivad-2022-04-05_19:51:49-rados:standalone:workloads-master-distro-basic-smithi/6778033/...
- 07:57 PM Backport #55139 (In Progress): pacific: osd: pgs went back into snaptrim state after osd restart
- 03:53 PM Bug #55186 (Fix Under Review): Doc: Update mclock release notes regarding an existing issue.
- 03:49 PM Bug #55186 (Resolved): Doc: Update mclock release notes regarding an existing issue.
- The issue mentioned in the release note is tracked by https://tracker.ceph.com/issues/55153
- 02:01 PM Bug #52124 (In Progress): Invalid read of size 8 in handle_recovery_delete()
- 02:00 PM Bug #43887 (In Progress): ceph_test_rados_delete_pools_parallel failure
- 01:16 PM Bug #49231: MONs unresponsive over extended periods of time
- Experimenting further, I found the value osd_map_message_max_bytes=16384 to be the best choice. With larger or smalle...
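If someone wants to try the same value, it can be applied through the config database on releases that support it. A sketch, not a recommendation from this thread; the value is the one quoted above:

```shell
# Apply cluster-wide; daemons pick it up from the mon config database.
ceph config set global osd_map_message_max_bytes 16384

# Confirm the active value on one daemon.
ceph config get osd.0 osd_map_message_max_bytes
```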
04/04/2022
- 10:26 PM Bug #52878 (Resolved): qa/tasks: python3 'dict' object has no attribute 'iterkeys' error
- 09:47 PM Backport #55013: pacific: librados: check latest osdmap on ENOENT in pool_reverse_lookup()
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45586
merged
- 09:28 PM Bug #54050 (Closed): OSD: move message to cluster log when osd hitting the pg hard limit
- dongdong tao wrote:
> PR: https://github.com/ceph/ceph/pull/44821
Closed in favor of https://github.com/ceph/ceph...
- 09:26 PM Bug #54180 (Resolved): In some cases osdmaptool takes forever to complete
- 05:48 PM Bug #47299 (Need More Info): Assertion in pg_missing_set: p->second.need <= v || p->second.is_del...
- Still need more info. See comment #8.
- 05:41 PM Bug #50743: *: crash in pthread_getname_np
- Still _Need More Info_ as the logs still aren't there after all these months.
- 05:33 PM Bug #36304 (Need More Info): FAILED ceph_assert(p != pg_slots.end()) in OSDShard::register_and_wa...
- Waiting for reproducing the issue. See comment #25.
- 05:31 PM Bug #54548 (Won't Fix): mon hang when run ceph -s command after execute "ceph osd in osd.<x>" com...
- This was discussed by Neha in ceph-users mailing list – keywords for Google: @mon hang when run ceph -s command after...
- 05:30 PM Bug #55178: osd-scrub-test.sh: TEST_scrub_extended_sleep times out
- Ronen-- assigning you in case you have an idea?
- 05:28 PM Bug #55178 (New): osd-scrub-test.sh: TEST_scrub_extended_sleep times out
- /a/yuriw-2022-04-01_17:44:32-rados-wip-yuri3-testing-2022-04-01-0659-distro-default-smithi/6772689...
- 05:28 PM Bug #49888: rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum ...
- A note from the bug scrub: although this is a client-side symptom (which might be a result of backend failure or an i...
- 05:23 PM Bug #55009: Scrubbing exits due to error reading object head
- I think we should check the OSD's log to verify the reason behind the ENOENT.
- 05:08 PM Bug #51904: test_pool_min_size:AssertionError:wait_for_clean:failed before timeout expired due to...
- Laura Flores wrote:
> /a/yuriw-2022-03-25_18:42:52-rados-wip-yuri7-testing-2022-03-24-1341-pacific-distro-default-sm...
- 05:06 PM Bug #55101 (Need More Info): mon has slow op
- Hello! We would need to take a look at mon.b's log, preferably also at the one preceding the restart.
- 05:04 PM Bug #54558 (Fix Under Review): malformed json in a Ceph RESTful API call can stop all ceph-mon se...
- 04:54 PM Bug #47025: rados/test.sh: api_watch_notify_pp LibRadosWatchNotifyECPP.WatchNotify failed
- Instance of LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3:
/a/yuriw-2022-04-01_17:44:32-rados-wip-...
- 04:25 PM Bug #47838: mon/test_mon_osdmap_prune.sh: first_pinned != trim_to
- /a/yuriw-2022-04-01_17:44:32-rados-wip-yuri3-testing-2022-04-01-0659-distro-default-smithi/6772697
- 10:23 AM Bug #38357 (Fix Under Review): ClsLock.TestExclusiveEphemeralStealEphemeral failed
- 09:16 AM Feature #55169 (In Progress): crush: should validate rule outputs osds
- In this thread https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/2ZUJN75RLL4YYD4EHAUS5I4IL37A7UUL/ a us...
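Until such validation exists in the monitor, a rule's output can be sanity-checked offline with crushtool. A sketch; the map file name, rule id, and replica count are placeholders:

```shell
# Export the current crush map, then test which OSDs a rule emits.
ceph osd getcrushmap -o crushmap.bin
crushtool -i crushmap.bin --test --rule 1 --num-rep 3 --show-mappings
```

Mappings that come back short (or empty) indicate the rule cannot output the requested number of OSDs.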
- 08:11 AM Bug #55140 (Duplicate): quincy OSD won't start: what(): void pg_stat_t::decode(ceph::buffer::v...
04/01/2022
- 06:42 PM Backport #54614: quincy: support truncation sequences in sparse reads
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45736
merged
- 05:15 PM Bug #51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC back...
- /a/yuriw-2022-04-01_01:23:52-rados-wip-yuri2-testing-2022-03-31-1523-pacific-distro-default-smithi/6771162
- 10:26 AM Bug #52124: Invalid read of size 8 in handle_recovery_delete()
- /a/yuriw-2022-03-31_21:45:19-rados-wip-yuri5-testing-2022-03-31-1158-quincy-distro-default-smithi/6770388
- 10:26 AM Tasks #55159 (New): stretch mode isn't covered in teuthology
- There seem to be virtually no teuthology tests for stretch mode (beyond just enabling the connectivity election mode?...
- 10:00 AM Bug #55158 (Fix Under Review): mon/OSDMonitor: properly set last_force_op_resend in stretch mode
- 09:21 AM Bug #55158 (Resolved): mon/OSDMonitor: properly set last_force_op_resend in stretch mode
- Setting last_force_op_resend but not last_force_op_resend_prenautilus and last_force_op_resend_preluminous doesn't ma...
- 08:16 AM Bug #51627: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soi...
- Saw the same assert failure here: /a/yuriw-2022-03-31_21:45:19-rados-wip-yuri5-testing-2022-03-31-1158-quincy-distro...
03/31/2022
- 10:39 PM Bug #54516 (Won't Fix): mon/config.sh: unrecognized config option 'debug asok'
- I don't think this is a real failure. The occurrences captured in sentry are from runs where the install sha1 and worku...
- 11:05 AM Bug #54516 (Fix Under Review): mon/config.sh: unrecognized config option 'debug asok'
- https://github.com/ceph/ceph/pull/45730
- 07:15 AM Bug #54516 (In Progress): mon/config.sh: unrecognized config option 'debug asok'
- 09:17 PM Backport #55157 (Resolved): quincy: mon: config commands do not accept whitespace style config name
- https://github.com/ceph/ceph/pull/47381
- 09:17 PM Backport #55156 (Resolved): pacific: mon: config commands do not accept whitespace style config name
- https://github.com/ceph/ceph/pull/47380
- 09:12 PM Bug #44092 (Pending Backport): mon: config commands do not accept whitespace style config name
- 04:55 PM Bug #55154 (Duplicate): Multiple OSD's during upgrade crashed with bluestore/simple_bitmap.cc: 54...
- Duplicate of https://tracker.ceph.com/issues/55145.
- 04:51 PM Bug #55154 (Duplicate): Multiple OSD's during upgrade crashed with bluestore/simple_bitmap.cc: 54...
- ...
- 04:02 PM Bug #55153 (Resolved): Make the mClock config options related to [res, wgt, lim] modifiable durin...
The individual config parameters related to reservation,
weight, and limit are not modifiable during runtime when
a b...
- 03:58 PM Backport #54614 (In Progress): quincy: support truncation sequences in sparse reads
- 02:10 PM Bug #55140: quincy OSD won't start: what(): void pg_stat_t::decode(ceph::buffer::v15_2_0::list...
- The fix for this went in yesterday https://tracker.ceph.com/issues/53923. If you upgrade to the latest Quincy version...
- 12:03 PM Feature #55147 (New): osd: allow remote write by calling cls method from within cls context
- This is a write version of #48182.
This new feature allows a cls method to remotely write to multiple objects in par...
- 10:04 AM Bug #55109: Export from lost OSD to another OSD - participants list
- Looks like peer_info shows only 5 (random?) peers. Checked the PG at each OSD to confirm and mark-complete with ceph-objec...
- 02:11 AM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
- [root@jianwei 10.128.130.71]# cat pg-unfound/3.356.pg.map
osdmap e1786 pg 3.356 (3.356) -> up [49,44,47,18,14,31] a...
- 02:04 AM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
- ...
- 02:00 AM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
- Sorry, the above log is from 2022-03-07 and too old.
The following log is from the problem timestamp.
- 01:45 AM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
- Radoslaw Zarzynski wrote:
> > the all osds is up&in, so the case doesn't involve recovery_unfound due to osd down.
...
- 01:44 AM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
- Radoslaw Zarzynski wrote:
> > the all osds is up&in, so the case doesn't involve recovery_unfound due to osd down.
...
03/30/2022
- 10:35 PM Bug #52488: Pacific mon won't join Octopus mons
- Also confirming the same problem attempting to add either new Octopus or Pacific monitor to existing Nautilus cluster...
- 09:46 PM Bug #52319: LibRadosWatchNotify.WatchNotify2 fails
- Yes, I mentioned that code in https://tracker.ceph.com/issues/50042#note-33 which has yet another incarnation.
- 05:57 PM Bug #52319: LibRadosWatchNotify.WatchNotify2 fails
- Brad Hubbard wrote:
> This is a bit different to #47719. In that case we got an ENOENT when we expected an ENOTCONN ...
- 07:04 PM Bug #55141 (In Progress): thrashers/fastread: assertion failure: rollback_info_trimmed_to == head
- From the @/home/teuthworker/archive/yuriw-2022-03-29_21:35:32-rados-wip-yuri5-testing-2022-03-29-1152-quincy-distro-d...
- 06:43 PM Bug #54511: test_pool_min_size: AssertionError: not clean before minsize thrashing starts
- Need to observe more @thrashers/minsize_recovery@ where this issue happens.
- 06:28 AM Bug #54511: test_pool_min_size: AssertionError: not clean before minsize thrashing starts
- /a/yuriw-2022-03-29_21:35:32-rados-wip-yuri5-testing-2022-03-29-1152-quincy-distro-default-smithi/6767633
- 06:27 PM Bug #54521 (Need More Info): daemon: Error while waiting for process to exit
- ...
- 06:21 PM Bug #54521: daemon: Error while waiting for process to exit
- Aishwarya Mathuria wrote:
> /a/yuriw-2022-03-29_21:35:01-rados-wip-yuri3-testing-2022-03-29-1133-distro-default-smit...
- 10:09 AM Bug #54521: daemon: Error while waiting for process to exit
- /a/yuriw-2022-03-29_21:35:01-rados-wip-yuri3-testing-2022-03-29-1133-distro-default-smithi/6767712
- 06:18 PM Bug #55140 (Duplicate): quincy OSD won't start: what(): void pg_stat_t::decode(ceph::buffer::v...
- My cluster has 3 control nodes running rawhide (mons, mgrs, mds).
1 physical server with 6 HDDs running 6 OSDs (fedo...
- 06:16 PM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
- > the all osds is up&in, so the case doesn't involve recovery_unfound due to osd down.
A note from the bug scrub: ...
- 06:00 PM Bug #53663 (Duplicate): Random scrub errors (omap_digest_mismatch) on pgs of RADOSGW metadata pools
- Marking this as a duplicate based on above comments.
- 05:52 PM Backport #55138 (Resolved): quincy: osd: pgs went back into snaptrim state after osd restart
- 05:52 PM Backport #55138 (In Progress): quincy: osd: pgs went back into snaptrim state after osd restart
- 05:51 PM Backport #55138 (Resolved): quincy: osd: pgs went back into snaptrim state after osd restart
- https://github.com/ceph/ceph/pull/45641
- 05:51 PM Backport #55139 (Resolved): pacific: osd: pgs went back into snaptrim state after osd restart
- https://github.com/ceph/ceph/pull/45785
- 05:47 PM Bug #53923 (Resolved): [Upgrade] mgr FAILED to decode MSG_PGSTATS
- 05:40 PM Bug #53923 (Pending Backport): [Upgrade] mgr FAILED to decode MSG_PGSTATS
- 02:46 PM Bug #53923: [Upgrade] mgr FAILED to decode MSG_PGSTATS
- https://github.com/ceph/ceph/pull/45695 merged
- 05:46 PM Backport #55137 (Resolved): quincy: [Upgrade] mgr FAILED to decode MSG_PGSTATS
- 05:45 PM Backport #55137 (Resolved): quincy: [Upgrade] mgr FAILED to decode MSG_PGSTATS
- https://github.com/ceph/ceph/pull/45695
- 05:45 PM Bug #52026 (Pending Backport): osd: pgs went back into snaptrim state after osd restart
- 05:41 PM Bug #51076 (In Progress): "wait_for_recovery: failed before timeout expired" during thrashosd tes...
- 09:26 AM Bug #51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC back...
- /a/yuriw-2022-03-29_21:35:01-rados-wip-yuri3-testing-2022-03-29-1133-distro-default-smithi/6767720
- 05:19 AM Bug #49777: test_pool_min_size: 'check for active or peered' reached maximum tries (5) after wait...
- /a/yuriw-2022-03-29_21:35:32-rados-wip-yuri5-testing-2022-03-29-1152-quincy-distro-default-smithi/6767532
- 05:13 AM Bug #52562: Thrashosds read error injection failed with error ENXIO
- /a/yuriw-2022-03-29_21:35:32-rados-wip-yuri5-testing-2022-03-29-1152-quincy-distro-default-smithi/6767850
03/29/2022
- 06:45 PM Bug #54263 (Resolved): cephadm upgrade pacific to quincy autoscaler is scaling pgs from 32 -> 327...
- 06:44 PM Backport #54527 (Resolved): quincy: cephadm upgrade pacific to quincy autoscaler is scaling pgs f...
- 06:43 PM Backport #54526 (Resolved): pacific: cephadm upgrade pacific to quincy autoscaler is scaling pgs ...
- 06:27 PM Backport #55020 (Resolved): pacific: partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark oma...
- 06:26 PM Backport #55018 (Resolved): quincy: partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark omap...
- 05:56 PM Bug #53923 (In Progress): [Upgrade] mgr FAILED to decode MSG_PGSTATS
- 01:05 PM Bug #55109 (New): Export from lost OSD to another OSD - participants list
- Hi,
I marked an OSD as lost, then extracted PGs from the broken OSD and imported them to a working OSD. PG query doesn't show...
- 07:06 AM Bug #55101 (New): mon has slow op
- There are 3 nodes in our cluster, with 84 OSDs per node.
After executing "systemctl restart ceph-osd.target" on node2, ther...
03/28/2022
- 09:55 PM Bug #52012: osd/scrub: src/osd/scrub_machine.cc: 55: FAILED ceph_assert(state_cast<const NotActiv...
- https://github.com/ceph/ceph/pull/45374 merged
- 04:28 PM Backport #55073: pacific: osd: osd_fast_shutdown_notify_mon not quite right
- @Nitzan https://github.com/ceph/ceph/pull/45654 already takes care of it. My fault, should have updated this Tracker.
- 12:09 PM Backport #55073 (In Progress): pacific: osd: osd_fast_shutdown_notify_mon not quite right
- 03:52 PM Backport #55020: pacific: partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark omap dirty
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45591
merged
- 03:49 PM Backport #54412: pacific:osd:add pg_num_max value
- Kamoltat Sirivadhna wrote:
> https://github.com/ceph/ceph/pull/45173
merged
- 03:49 PM Backport #54526: pacific: cephadm upgrade pacific to quincy autoscaler is scaling pgs from 32 -> ...
- Kamoltat Sirivadhna wrote:
> https://github.com/ceph/ceph/pull/45173
merged
- 03:48 PM Bug #54263: cephadm upgrade pacific to quincy autoscaler is scaling pgs from 32 -> 32768 for ceph...
- https://github.com/ceph/ceph/pull/45173 merged
- 03:37 PM Bug #52136: Valgrind reports memory "Leak_DefinitelyLost" errors.
- /a/yuriw-2022-03-26_19:43:35-rados-wip-yuri7-testing-2022-03-24-1341-pacific-distro-default-smithi/6762662
- 03:32 PM Bug #51904: test_pool_min_size:AssertionError:wait_for_clean:failed before timeout expired due to...
- /a/yuriw-2022-03-25_18:42:52-rados-wip-yuri7-testing-2022-03-24-1341-pacific-distro-default-smithi/6761700...
- 03:27 PM Bug #52124: Invalid read of size 8 in handle_recovery_delete()
- /a/yuriw-2022-03-25_18:42:52-rados-wip-yuri7-testing-2022-03-24-1341-pacific-distro-default-smithi/6761328
- 03:18 PM Bug #49777: test_pool_min_size: 'check for active or peered' reached maximum tries (5) after wait...
- /a/yuriw-2022-03-25_18:42:52-rados-wip-yuri7-testing-2022-03-24-1341-pacific-distro-default-smithi/676107 -- logs ava...
- 01:34 PM Bug #55088 (Fix Under Review): Manager is failing to keep updated metadata in daemon_state for up...
- 01:01 PM Bug #55088 (Resolved): Manager is failing to keep updated metadata in daemon_state for upgraded M...
- The ceph manager updates mon metadata through handle_mon_map which gets triggered
less frequently, mostly in the cas...
- 01:10 PM Bug #54296: OSDs using too much memory
- Dan van der Ster wrote:
> Ruben Kerkhof wrote:
> > > Excellent idea! I'll ask the customer and get back with the re...
- 06:29 AM Bug #55068 (Resolved): qa/standalone: Fix test_activate_osd() in ceph_helpers.sh
- 06:20 AM Bug #55068 (Pending Backport): qa/standalone: Fix test_activate_osd() in ceph_helpers.sh
- 06:29 AM Backport #55081 (Resolved): quincy: qa/standalone: Fix test_activate_osd() in ceph_helpers.sh
- 06:25 AM Backport #55081 (Resolved): quincy: qa/standalone: Fix test_activate_osd() in ceph_helpers.sh
- https://github.com/ceph/ceph/pull/45653
- 06:28 AM Backport #55080 (Resolved): pacific: qa/standalone: Fix test_activate_osd() in ceph_helpers.sh
- 06:25 AM Backport #55080 (Resolved): pacific: qa/standalone: Fix test_activate_osd() in ceph_helpers.sh
- https://github.com/ceph/ceph/pull/45654
03/26/2022
- 03:48 PM Support #53432 (Resolved): How to use and optimize ceph dpdk
- 09:59 AM Bug #54592: partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark omap dirty
- I opened https://tracker.ceph.com/issues/53663 which apparently is expected to be caused by this issue here.
How can...
- 12:03 AM Backport #53933 (Resolved): pacific: Stretch mode: peering can livelock with acting set changes s...
- 12:02 AM Backport #53944 (Resolved): pacific: [RFE] Limit slow request details to mgr log
03/25/2022
- 10:50 PM Bug #53544: src/test/osd/RadosModel.h: ceph_abort_msg("racing read got wrong version") in thrash_...
- /a/nojha-2022-03-25_19:43:39-rados-wip-quincy-fast-shutdown-backports-distro-basic-smithi/6762122...
- 08:59 PM Bug #51307 (Resolved): LibRadosWatchNotify.Watch2Delete fails
- 08:59 PM Backport #55021 (Resolved): quincy: LibRadosWatchNotify.Watch2Delete fails
- 07:35 PM Documentation #54619 (Resolved): Doc: Improve mClock config reference documentation
- 05:09 PM Documentation #54619 (Pending Backport): Doc: Improve mClock config reference documentation
- 07:35 PM Backport #55069 (Resolved): quincy: Doc: Improve mClock config reference documentation
- 05:12 PM Backport #55069 (In Progress): quincy: Doc: Improve mClock config reference documentation
- 05:10 PM Backport #55069 (Resolved): quincy: Doc: Improve mClock config reference documentation
- https://github.com/ceph/ceph/pull/45652
- 07:01 PM Backport #53944: pacific: [RFE] Limit slow request details to mgr log
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44771
merged
- 07:00 PM Backport #53933: pacific: Stretch mode: peering can livelock with acting set changes swapping pri...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44664
merged
- 06:03 PM Backport #55074 (In Progress): octopus: osd: osd_fast_shutdown_notify_mon not quite right
- 05:58 PM Backport #55074 (Resolved): octopus: osd: osd_fast_shutdown_notify_mon not quite right
- https://github.com/ceph/ceph/pull/45655
- 05:58 PM Backport #55075 (Resolved): quincy: osd: osd_fast_shutdown_notify_mon not quite right
- https://github.com/ceph/ceph/pull/45653
- 05:58 PM Backport #55073 (Resolved): pacific: osd: osd_fast_shutdown_notify_mon not quite right
- https://github.com/ceph/ceph/pull/45654
- 05:58 PM Bug #53327 (Pending Backport): osd: osd_fast_shutdown_notify_mon not quite right and enable osd_f...
- 05:01 PM Bug #55068 (Resolved): qa/standalone: Fix test_activate_osd() in ceph_helpers.sh
- Fix a bug in test_activate_osd() that was exposed by the changes
in PR: https://github.com/ceph/ceph/pull/44807.
...
- 04:56 PM Backport #55067 (Rejected): octopus: osd_fast_shutdown_notify_mon option should be true by default
- 04:56 PM Backport #55066 (Rejected): pacific: osd_fast_shutdown_notify_mon option should be true by default
- 04:55 PM Backport #55065 (Rejected): quincy: osd_fast_shutdown_notify_mon option should be true by default
- 04:51 PM Bug #53328 (Pending Backport): osd_fast_shutdown_notify_mon option should be true by default
- 04:12 PM Bug #52026 (Fix Under Review): osd: pgs went back into snaptrim state after osd restart
- 03:49 PM Bug #51904: test_pool_min_size:AssertionError:wait_for_clean:failed before timeout expired due to...
- /a/yuriw-2022-03-24_16:44:32-rados-wip-yuri-testing-2022-03-24-0726-distro-default-smithi/6758238
- 03:08 PM Backport #54233: octopus: devices: mon devices appear empty when scraping SMART metrics
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44960
merged
- 03:08 PM Backport #53719: octopus: mon: frequent cpu_tp had timed out messages
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44546
merged
- 02:37 AM Bug #17170: mon/monclient: update "unable to obtain rotating service keys when osd init" to sugge...
- This issue occurs in 15.2.13; the OSD log is attached as rotating_service_keys.png.
- 12:29 AM Bug #50042: rados/test.sh: api_watch_notify failures
- Laura Flores wrote:
> /a/yuriw-2022-03-24_14:35:45-rados-wip-yuri7-testing-2022-03-23-1332-quincy-distro-default-smi...