Activity
From 06/13/2022 to 07/12/2022
07/12/2022
- 10:29 PM Bug #56495 (Fix Under Review): Log at 1 when Throttle::get_or_fail() fails
- 01:57 PM Bug #56386: Writes to a cephfs after metadata pool snapshot causes inconsistent objects
- Greg Farnum wrote:
> That said, I wouldn’t expect anything useful from running this — pool snaps are hard to use wel...
- 01:06 PM Bug #56386: Writes to a cephfs after metadata pool snapshot causes inconsistent objects
- That said, I wouldn’t expect anything useful from running this — pool snaps are hard to use well. What were you tryin...
- 12:59 PM Bug #56386: Writes to a cephfs after metadata pool snapshot causes inconsistent objects
- AFAICT this is just a RADOS issue?
- 01:30 PM Backport #53339: pacific: src/osd/scrub_machine.cc: FAILED ceph_assert(state_cast<const NotActive...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46767
merged
- 12:41 PM Bug #56530: Quincy: High CPU and slow progress during backfill
- Thanks for looking at this. Answers to your questions:
1. Backfill started at around 4-5 objects per second, and t...
- 11:56 AM Bug #56530: Quincy: High CPU and slow progress during backfill
- While we look into this, I have a couple of questions:
1. Did the recovery rate stay at 1 object/sec throughout? I...
- 11:16 AM Bug #56530 (Resolved): Quincy: High CPU and slow progress during backfill
- I'm seeing a similar problem on a small cluster just upgraded from Pacific 16.2.9 to Quincy 17.2.1 (non-cephadm). The...
07/11/2022
- 09:18 PM Bug #54396 (Resolved): Setting osd_pg_max_concurrent_snap_trims to 0 prematurely clears the snapt...
- 09:17 PM Feature #55982 (Resolved): log the numbers of dups in PG Log
- 09:17 PM Backport #55985 (Resolved): octopus: log the numbers of dups in PG Log
- 01:35 PM Bug #54172: ceph version 16.2.7 PG scrubs not progressing
- https://github.com/ceph/ceph/pull/46845 merged
- 01:31 PM Backport #51287: pacific: LibRadosService.StatusFormat failed, Expected: (0) != (retry), actual: ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46677
merged
07/08/2022
- 05:27 AM Backport #56498 (In Progress): quincy: Make the mClock config options related to [res, wgt, lim] ...
- 04:50 AM Backport #56498 (Resolved): quincy: Make the mClock config options related to [res, wgt, lim] mod...
- https://github.com/ceph/ceph/pull/47020
- 04:46 AM Bug #55153 (Pending Backport): Make the mClock config options related to [res, wgt, lim] modifiab...
- 01:48 AM Bug #56495 (Resolved): Log at 1 when Throttle::get_or_fail() fails
- When trying to debug a throttle failure we currently need to set debug_ms=20 which can delay troubleshooting due to t...
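For context, a hedged sketch of the current workaround described above, i.e. temporarily raising debug_ms on a running daemon (osd.0 is a placeholder):
  ceph tell osd.0 config set debug_ms 20   # very verbose; the resulting log volume is the pain point noted above
  # ...reproduce the throttle failure, then lower the verbosity again...
  ceph tell osd.0 config set debug_ms 0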
- 01:00 AM Bug #54509: FAILED ceph_assert due to issue manifest API to the original object
- I'll take a look
07/07/2022
- 08:51 PM Bug #53294: rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush
- Potential Pacific occurrence? Although this one is catching on LibRadosTwoPoolsPP.CachePin rather than LibRadosTwoPoo...
- 03:44 PM Bug #55153: Make the mClock config options related to [res, wgt, lim] modifiable during runtime f...
- https://github.com/ceph/ceph/pull/46700 merged
- 01:21 AM Bug #56487: Error EPERM: problem getting command descriptions from mon, when execute "ceph -s".
- similar issue as:
https://tracker.ceph.com/issues/36300
- 01:20 AM Bug #56487: Error EPERM: problem getting command descriptions from mon, when execute "ceph -s".
- In this case, cephx is disabled. Test script as below:
#!/usr/bin/bash
while true
do
echo `date` >> /tmp/o.log
r...
- 01:18 AM Bug #56487 (New): Error EPERM: problem getting command descriptions from mon, when execute "ceph ...
- version 15.2.13
disable cephx, and execute "ceph -s" every 1 second,
A great chance to reproduce this error. log as ...
07/06/2022
- 11:19 PM Bug #36300: Clients receive "wrong fsid" error when CephX is disabled
- Can you make a new ticket with your details and link to this one? We may have recreated a similar issue but the detai...
- 03:14 PM Bug #54509: FAILED ceph_assert due to issue manifest API to the original object
- @Myoungwon Oh - can you take a look at
http://pulpito.front.sepia.ceph.com/rfriedma-2022-07-05_18:14:55-rados-wip-...
- 11:12 AM Bug #51904: test_pool_min_size:AssertionError:wait_for_clean:failed before timeout expired due to...
- Ronen Friedman wrote:
> Kamoltat Sirivadhna wrote:
> > /a/ksirivad-2022-07-01_21:00:49-rados:thrash-erasure-code-ma...
- 11:10 AM Bug #51904: test_pool_min_size:AssertionError:wait_for_clean:failed before timeout expired due to...
- Kamoltat Sirivadhna wrote:
> /a/ksirivad-2022-07-01_21:00:49-rados:thrash-erasure-code-main-distro-default-smithi/69...
- 10:47 AM Bug #51168: ceph-osd state machine crash during peering process
- ceph-osd log on crashed osd uploaded
07/05/2022
- 02:32 PM Bug #51904: test_pool_min_size:AssertionError:wait_for_clean:failed before timeout expired due to...
- /a/ksirivad-2022-07-01_21:00:49-rados:thrash-erasure-code-main-distro-default-smithi/6910169/
- 02:06 PM Bug #54511: test_pool_min_size: AssertionError: not clean before minsize thrashing starts
- /a/ksirivad-2022-07-01_21:00:49-rados:thrash-erasure-code-main-distro-default-smithi/6910103/
- 09:17 AM Bug #56386: Writes to a cephfs after metadata pool snapshot causes inconsistent objects
- Dan van der Ster wrote:
> Venky Shankar wrote:
> > Hi Dan,
> >
> > I need to check, but does the inconsistent ob...
- 07:24 AM Bug #55559: osd-backfill-stats.sh fails in TEST_backfill_ec_prim_out
- Looks like we don't have the correct primary (was osd.1, changed to osd.4, and after the wait_for_clean was back to o...
- 01:51 AM Bug #36300: Clients receive "wrong fsid" error when CephX is disabled
- #!/usr/bin/bash
while true
do
echo `date` >> /tmp/o.log
ret=`ceph -s >> /tmp/o.log 2>&1 `
sleep 1
echo '' >> /t...
- 01:28 AM Bug #36300: Clients receive "wrong fsid" error when CephX is disabled
- version 15.2.13
disable cephx, and execute "ceph -s" every 1 second,
A great chance to reproduce this error. log as ...
- 01:24 AM Bug #36300: Clients receive "wrong fsid" error when CephX is disabled
- Mon Jul 4 15:31:19 CST 2022
2022-07-04T15:31:20.219+0800 7f8595551700 10 monclient: get_monmap_and_config
2022-07-0...
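For readability, a hedged reconstruction of the truncated reproducer script quoted in the 01:51 AM comment above; the original preview cuts off after the last echo, so the final lines are assumptions:
  #!/usr/bin/bash
  # run "ceph -s" once per second with cephx disabled, appending output and timestamps to a log
  while true
  do
      echo `date` >> /tmp/o.log
      ret=`ceph -s >> /tmp/o.log 2>&1 `
      sleep 1
      echo '' >> /tmp/o.log   # assumed: blank separator line (truncated as "/t..." in the preview)
  done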
07/04/2022
- 08:54 PM Backport #55981 (Resolved): quincy: don't trim excessive PGLog::IndexedLog::dups entries on-line
- 08:18 PM Bug #56463 (Triaged): osd nodes with NVME try to run `smartctl` and `nvme` even when the tools ar...
- Using debian packages:
ceph-osd 17.2.1-1~bpo11+1
ceph-volume 17.2.1-1~bpo11+1
Every day some job runs wh...
- 07:53 PM Backport #55746 (Resolved): quincy: Support blocklisting a CIDR range
- 05:48 PM Feature #55693 (Fix Under Review): Limit the Health Detail MSG log size in cluster logs
07/03/2022
- 12:49 PM Bug #56147: snapshots will not be deleted after upgrade from nautilus to pacific
- Radoslaw Zarzynski wrote:
> Hello Matan! Does this snapshot issue ring a bell?
Introduced here:
https://github.c...
07/01/2022
- 05:36 PM Backport #54386: octopus: [RFE] Limit slow request details to mgr log
- Ponnuvel P wrote:
> please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/45154
...
- 04:17 PM Bug #56439 (New): mon/crush_ops.sh: Error ENOENT: no backward-compatible weight-set
- /a/yuriw-2022-06-23_16:06:40-rados-wip-yuri7-testing-2022-06-23-0725-octopus-distro-default-smithi/6894952...
- 01:51 PM Bug #56392: ceph build warning: comparison of integer expressions of different signedness
- Note: this warning was caused by merging https://github.com/ceph/ceph/pull/46029/
- 01:40 PM Bug #55435 (Pending Backport): mon/Elector: notify_ranked_removed() does not properly erase dead_...
- 01:17 PM Bug #55435 (Resolved): mon/Elector: notify_ranked_removed() does not properly erase dead_ping in ...
- 01:16 PM Bug #55708 (Fix Under Review): Reducing 2 Monitors Causes Stray Daemon
- 12:55 PM Bug #56438 (Need More Info): found snap mapper error on pg 3.bs0> oid 3:d81a0fb3:::smithi10749189...
- /a/yuriw-2022-06-29_18:22:37-rados-wip-yuri2-testing-2022-06-29-0820-distro-default-smithi/6906226
The error looks...
- 12:29 PM Bug #53342: Exiting scrub checking -- not all pgs scrubbed
- /a/yuriw-2022-06-29_18:22:37-rados-wip-yuri2-testing-2022-06-29-0820-distro-default-smithi/6906076
/a/yuriw-2022-06-...
- 09:21 AM Cleanup #52753 (Rejected): rbd cls : centos 8 warning
- 09:20 AM Cleanup #52753: rbd cls : centos 8 warning
- Looks like this warning is no longer there with a newer g++:
https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=...
- 12:12 AM Backport #55983 (Resolved): quincy: log the numbers of dups in PG Log
06/30/2022
- 07:53 PM Bug #56034: qa/standalone/osd/divergent-priors.sh fails in test TEST_divergent_3()
- /a/yuriw-2022-06-29_13:30:16-rados-wip-yuri3-testing-2022-06-28-1737-distro-default-smithi/6905537
- 07:41 PM Bug #50242 (New): test_repair_corrupted_obj fails with assert not inconsistent
- 07:41 PM Bug #50242: test_repair_corrupted_obj fails with assert not inconsistent
- /a/yuriw-2022-06-29_13:30:16-rados-wip-yuri3-testing-2022-06-28-1737-distro-default-smithi/6905523/
- 07:30 PM Bug #55001: rados/test.sh: Early exit right after LibRados global tests complete
- /a/yuriw-2022-06-29_13:30:16-rados-wip-yuri3-testing-2022-06-28-1737-distro-default-smithi/6905499
- 12:44 PM Bug #56147: snapshots will not be deleted after upgrade from nautilus to pacific
- Here I have a PR, which should fix the conversion on update
https://github.com/ceph/ceph/pull/46908
But what is w...
06/29/2022
- 06:29 PM Bug #50222: osd: 5.2s0 deep-scrub : stat mismatch
- Not a terribly high priority.
- 06:24 PM Bug #48029: Exiting scrub checking -- not all pgs scrubbed.
- The code that generated the exception is (from the @main@ branch):...
- 06:13 PM Bug #56392 (Fix Under Review): ceph build warning: comparison of integer expressions of different...
- 06:12 PM Bug #56393: failed to complete snap trimming before timeout
- Could it be scrub related?
- 06:08 PM Bug #56147 (New): snapshots will not be deleted after upgrade from nautilus to pacific
- Hello Matan! Does this snapshot issue ring a bell?
- 06:03 PM Bug #46889: librados: crashed in service_daemon_update_status
- Lowering the priority to match the BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2101415#c9.
- 05:55 PM Bug #52657: MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)'
- Yeah, this clearly looks like a race condition (likely around life time management).
Lowering to High as it happen...
- 05:50 PM Bug #56101 (Need More Info): Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function saf...
- Well, it seems the logs on @dell-per320-4.gsslab.pnq.redhat.com:/home/core/tracker56101@ are on the default levels. S...
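As an aside, a hedged example of how the relevant debug levels could be raised ahead of the next occurrence (applying them to all OSDs is an assumption):
  ceph config set osd debug_osd 20
  ceph config set osd debug_ms 1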
- 03:37 PM Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- A Telemetry contact was able to provide their OSD log. There was not a coredump available anymore, but they were able...
- 03:48 PM Bug #56420 (New): ceph-object-store: there is no chunking in --op log
- The current implementation assumes that huge amounts of memory are always available....
06/28/2022
- 08:06 PM Bug #53294: rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush
- Thank you Myoungwon!
- 08:01 PM Bug #53294 (Fix Under Review): rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuri...
- 05:13 AM Bug #53294: rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush
- https://github.com/ceph/ceph/pull/46866
I found that there is no reply if sending message with invalid pool inform...
- 02:22 AM Bug #53294: rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush
- I'll take a closer look.
- 07:23 PM Bug #49777: test_pool_min_size: 'check for active or peered' reached maximum tries (5) after wait...
- /a/lflores-2022-06-27_23:44:12-rados:thrash-erasure-code-wip-yuri2-testing-2022-04-26-1132-octopus-distro-default-smi...
- 04:31 PM Backport #54614 (Resolved): quincy: support truncation sequences in sparse reads
- 02:06 PM Backport #56408 (In Progress): quincy: ceph version 16.2.7 PG scrubs not progressing
- 02:00 PM Backport #56408 (Resolved): quincy: ceph version 16.2.7 PG scrubs not progressing
- https://github.com/ceph/ceph/pull/46844
- 02:05 PM Backport #56409 (In Progress): pacific: ceph version 16.2.7 PG scrubs not progressing
- 02:01 PM Backport #56409 (Resolved): pacific: ceph version 16.2.7 PG scrubs not progressing
- https://github.com/ceph/ceph/pull/46845
- 01:55 PM Bug #54172 (Pending Backport): ceph version 16.2.7 PG scrubs not progressing
- 01:10 PM Backport #50910 (Rejected): octopus: PGs always go into active+clean+scrubbing+deep+repair in the...
- Will not be fixed on Octopus.
For future ref:
Fixed in main branch by 41258.
- 12:36 PM Bug #56386: Writes to a cephfs after metadata pool snapshot causes inconsistent objects
- Venky Shankar wrote:
> Hi Dan,
>
> I need to check, but does the inconsistent object warning show up only after r...
- 10:01 AM Bug #56386: Writes to a cephfs after metadata pool snapshot causes inconsistent objects
- Hi Dan,
I need to check, but does the inconsistent object warning show up only after reducing max_mds?
- 11:02 AM Bug #56147: snapshots will not be deleted after upgrade from nautilus to pacific
- It seems to be a failure in the conversion after the upgrade
in the omap dump before the update with one deleted object in... - 06:25 AM Bug #46889: librados: crashed in service_daemon_update_status
- Josh Durgin wrote:
> Are there any logs or coredump available? What version was this?
Sorry, I think I have misse...
06/27/2022
- 07:32 PM Bug #51904: test_pool_min_size:AssertionError:wait_for_clean:failed before timeout expired due to...
- Found an instance where this does not occur with minsize_recovery. It's possible that it's a different root cause, bu...
- 07:09 PM Bug #53294: rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush
- Myoungwon Oh wrote:
> I think this is the same issue as https://tracker.ceph.com/issues/53855.
I thought so too, ...
- 06:53 PM Bug #56393 (New): failed to complete snap trimming before timeout
- Description: rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/...
- 06:19 PM Bug #56392 (Resolved): ceph build warning: comparison of integer expressions of different signedness
- ../src/mon/Elector.cc: In member function ‘void Elector::notify_rank_removed(int)’:
../src/mon/Elector.cc:733:20: wa...
06/24/2022
- 07:45 PM Bug #48029: Exiting scrub checking -- not all pgs scrubbed.
- /a/yuriw-2022-06-22_22:13:20-rados-wip-yuri3-testing-2022-06-22-1121-pacific-distro-default-smithi/6892691
Descrip...
- 04:18 PM Bug #45702: PGLog::read_log_and_missing: ceph_assert(miter == missing.get_items().end() || (miter...
- /a/yuriw-2022-06-23_21:29:45-rados-wip-yuri4-testing-2022-06-22-1415-pacific-distro-default-smithi/6895353
- 09:45 AM Bug #56386: Writes to a cephfs after metadata pool snapshot causes inconsistent objects
- > Removing the pool snap then deep scrubbing again removes the inconsistent objects.
This isn't true -- my quick t...
- 07:26 AM Bug #56386 (Can't reproduce): Writes to a cephfs after metadata pool snapshot causes inconsistent...
- If you take a snapshot of the meta pool, then decrease max_mds, metadata objects will be inconsistent.
Removing the ...
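A hedged sketch of the reproduction steps described above; the filesystem name, pool name, and pgid are placeholders rather than values from this report:
  ceph osd pool mksnap cephfs_metadata metasnap   # pool-level snapshot of the CephFS metadata pool
  ceph fs set cephfs max_mds 1                    # reduce max_mds so an active MDS is stopped
  # ...perform some metadata writes through a CephFS mount...
  ceph pg deep-scrub 2.0                          # deep-scrub a metadata-pool PG
  ceph health detail                              # reportedly flags inconsistent objects afterwards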
- 03:14 AM Bug #56377 (New): crash: MOSDRepOp::encode_payload(unsigned long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e94095d0cd0fcf3fd898984b...
- 03:13 AM Bug #56371 (Duplicate): crash: MOSDPGLog::encode_payload(unsigned long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2260a57d5917388881ad6b24...
- 03:13 AM Bug #56352 (New): crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3348f49ddb73c803861097dc...
- 03:13 AM Bug #56351 (New): crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3aca522a7914d781399c656e...
- 03:13 AM Bug #56350 (New): crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=279837e667d5bd5af7117e58...
- 03:12 AM Bug #56349 (New): crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0af06b2db676dc127cf14736...
- 03:12 AM Bug #56348 (New): crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2c87a7239d9493be78ec973d...
- 03:12 AM Bug #56347 (New): crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d89aad10db4ba24f32836f6b...
- 03:12 AM Bug #56341 (New): crash: __cxa_rethrow()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4ea8075453d1e75053186bfb...
- 03:12 AM Bug #56340 (New): crash: MOSDRepOp::encode_payload(unsigned long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=109df62e078655f21a42e939...
- 03:12 AM Bug #56337 (New): crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2966e604246718b92712c37f...
- 03:12 AM Bug #56336 (New): crash: MOSDPGScan::encode_payload(unsigned long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6a63e3ada81c75347510f5f6...
- 03:12 AM Bug #56333 (New): crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ea60d26b0fb86048ba4db78d...
- 03:12 AM Bug #56332 (New): crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c743495a46a830b11d21142d...
- 03:12 AM Bug #56331 (New): crash: MOSDPGLog::encode_payload(unsigned long)
*New crash events were reported via Telemetry with newer versions (['17.2.0']) than encountered in Tracker (0.0.0)....
- 03:12 AM Bug #56330 (New): crash: void pg_missing_set<TrackChanges>::got(const hobject_t&, eversion_t) [wi...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2c5246221b9449c92df232e9...
- 03:12 AM Bug #56329 (New): crash: rocksdb::DBImpl::CompactRange(rocksdb::CompactRangeOptions const&, rocks...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=bfc26738960c1795f6c7671a...
- 03:12 AM Bug #56326 (New): crash: void PeeringState::add_log_entry(const pg_log_entry_t&, bool): assert(e....
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2786349b0161a62145b35214...
- 03:11 AM Bug #56325 (New): crash: void PeeringState::add_log_entry(const pg_log_entry_t&, bool): assert(e....
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=86280552d1b3deaaae5d29d2...
- 03:11 AM Bug #56324 (New): crash: MOSDPGLog::encode_payload(unsigned long)
*New crash events were reported via Telemetry with newer versions (['17.2.0']) than encountered in Tracker (0.0.0)....
- 03:11 AM Bug #56320 (New): crash: int OSDMap::build_simple_optioned(ceph::common::CephContext*, epoch_t, u...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3d21b380da7e67a51e3c3ff5...
- 03:11 AM Bug #56319 (New): crash: int OSDMap::build_simple_optioned(ceph::common::CephContext*, epoch_t, u...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e1f3eef62cf680a2a1e12fe3...
- 03:11 AM Bug #56307 (New): crash: virtual void PrimaryLogPG::on_local_recover(const hobject_t&, const Obje...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1bffe92eb6037f0945a6822d...
- 03:11 AM Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=97cfb7606f247983cba0a9666...
- 03:11 AM Bug #56303 (New): crash: virtual bool PrimaryLogPG::should_send_op(pg_shard_t, const hobject_t&):...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=03f9b6cfdf027552d7733607...
- 03:10 AM Bug #56300 (New): crash: void MonitorDBStore::_open(const string&): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8fdacfdb77748da3299053f8...
- 03:10 AM Bug #56292 (New): crash: int OSD::shutdown(): assert(end_time - start_time_func < cct->_conf->osd...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0523fecb66e5a47efa4b27b4...
- 03:10 AM Bug #56289 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=657abbfaddca21e4f153180f...
- 03:09 AM Bug #56265 (New): crash: void MonitorDBStore::_open(const string&): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5a64b6a073f492ff2e80966e...
- 03:08 AM Bug #56247 (New): crash: BackfillInterval::pop_front()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b80697d3e5dc1d900a588df5...
- 03:08 AM Bug #56244 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3db4d3c60ed74d15ae58c626...
- 03:08 AM Bug #56243 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9b71de1eabc47b3fb6580322...
- 03:08 AM Bug #56242 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8da89c9532956acaff6e7f70...
- 03:08 AM Bug #56241 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=66ed6db515cc489439bc0f6a...
- 03:08 AM Bug #56238 (New): crash: non-virtual thunk to PrimaryLogPG::op_applied(eversion_t const&)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b4a30ba69ea953149454037b...
- 03:06 AM Bug #56207 (New): crash: void ECBackend::handle_sub_write_reply(pg_shard_t, const ECSubWriteReply...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7f6b4f1dc6564900d71f71e9...
- 03:06 AM Bug #56203 (New): crash: void pg_missing_set<TrackChanges>::got(const hobject_t&, eversion_t) [wi...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f69a87d11f743c4125be0748...
- 03:05 AM Bug #56201 (New): crash: void OSD::do_recovery(PG*, epoch_t, uint64_t, ThreadPool::TPHandle&): as...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e11765511e1628fcf2a52548...
- 03:05 AM Bug #56198 (New): crash: rocksdb::port::Mutex::Unlock()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2f6c08ed7f0db8e9480a0cde...
- 03:05 AM Bug #56194 (New): crash: OpTracker::~OpTracker(): assert((sharded_in_flight_list.back())->ops_in_...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e86e0b84f213b49af7dc5555...
- 03:05 AM Bug #56192 (Pending Backport): crash: virtual Monitor::~Monitor(): assert(session_map.sessions.em...
*New crash events were reported via Telemetry with newer versions (['16.2.9', '17.2.0']) than encountered in Tracke...
- 03:04 AM Bug #56191 (New): crash: std::vector<std::filesystem::path::_Cmpt, std::allocator<std::filesystem...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d7dee2fb28426f502f494906...
- 03:04 AM Bug #56188 (New): crash: void PGLog::IndexedLog::add(const pg_log_entry_t&, bool): assert(head.ve...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1ced48493cc0b5d62eb75f1e...
06/23/2022
- 08:44 PM Bug #50222: osd: 5.2s0 deep-scrub : stat mismatch
- /a/yuriw-2022-06-14_20:42:00-rados-wip-yuri2-testing-2022-06-14-0949-octopus-distro-default-smithi/6878271
- 08:43 PM Bug #52737 (Duplicate): osd/tests: stat mismatch
- 08:40 PM Bug #43584: MON_DOWN during mon_join process
- /a/yuriw-2022-06-14_20:42:00-rados-wip-yuri2-testing-2022-06-14-0949-octopus-distro-default-smithi/6878197
- 03:06 PM Bug #56147: snapshots will not be deleted after upgrade from nautilus to pacific
- Yes. For the debug logs I tested this with nautilus (14.2.22) to octopus (15.2.16). The behavior is the same as descri...
- 01:22 PM Bug #52416 (Resolved): devices: mon devices appear empty when scraping SMART metrics
- 01:22 PM Backport #54233 (Resolved): octopus: devices: mon devices appear empty when scraping SMART metrics
- 06:26 AM Fix #50574 (Resolved): qa/standalone: Modify/re-write failing standalone tests with mclock scheduler
- 06:18 AM Bug #53294: rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush
- I think this is the same issue as https://tracker.ceph.com/issues/53855.
- 06:09 AM Bug #53294: rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush
- OK, I'll take a look.
- 05:10 AM Bug #52657: MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)'
- I have been able to easily reproduce this by running the following test:
rados/verify/{centos_latest ceph clusters/...
06/22/2022
- 09:45 PM Bug #53969 (Resolved): BufferList.rebuild_aligned_size_and_memory failure
- 09:45 PM Backport #53972 (Resolved): pacific: BufferList.rebuild_aligned_size_and_memory failure
- 09:42 PM Bug #53677 (Resolved): qa/tasks/backfill_toofull.py: AssertionError: 2.0 not in backfilling
- 09:42 PM Bug #53308 (Resolved): pg-temp entries are not cleared for PGs that no longer exist
- 09:41 PM Bug #54593 (Resolved): librados: check latest osdmap on ENOENT in pool_reverse_lookup()
- 09:41 PM Backport #55012 (Resolved): octopus: librados: check latest osdmap on ENOENT in pool_reverse_look...
- 09:40 PM Backport #55013 (Resolved): pacific: librados: check latest osdmap on ENOENT in pool_reverse_look...
- 09:36 PM Bug #54592 (Resolved): partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark omap dirty
- 09:36 PM Backport #55019 (Resolved): octopus: partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark oma...
- 09:34 PM Backport #55984 (Resolved): pacific: log the numbers of dups in PG Log
- 08:52 PM Bug #56101 (New): Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- 08:51 PM Bug #56101 (Duplicate): Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- 05:54 PM Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?var-sig_v2=97cfb7606f247983cba0a9666bb882d9e1...
- 12:58 AM Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- Looking into this further in today's team meeting we discussed the fact that these segfaults appear to occur in pthre...
- 08:51 PM Bug #56102 (Duplicate): Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in - RocksDBStore::e...
- 08:36 PM Bug #53729 (In Progress): ceph-osd takes all memory before oom on boot
- 08:25 PM Bug #53789: CommandFailedError (rados/test_python.sh): "RADOS object not found" causes test_rados...
- /a/yuriw-2022-06-21_16:28:27-rados-wip-yuri4-testing-2022-06-21-0704-pacific-distro-default-smithi/6889549
- 06:39 PM Bug #53789: CommandFailedError (rados/test_python.sh): "RADOS object not found" causes test_rados...
- /a/yuriw-2022-06-18_00:01:31-rados-quincy-release-distro-default-smithi/6884838
- 06:24 PM Bug #53294: rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush
- Hi Myoungwon Oh, can you please help take a look at this issue?
- 06:17 PM Bug #56030 (Fix Under Review): frequently down and up a osd may cause recovery not in asynchronous
- 06:15 PM Bug #56147 (Need More Info): snapshots will not be deleted after upgrade from nautilus to pacific
- > Also, I could observe this on an update from nautilus to octopus.
Just to ensure: am I correct the issue is visi...
- 05:36 PM Bug #55695 (Fix Under Review): Shutting down a monitor forces Paxos to restart and sometimes disr...
- 03:34 PM Bug #56149: thrash-erasure-code: AssertionError: wait_for_recovery timeout due to "active+recover...
- According to some tests I ran on main and pacific, this kind of failure does not happen very frequently:
Main:
ht...
- 02:59 PM Bug #54172 (Fix Under Review): ceph version 16.2.7 PG scrubs not progressing
- 06:33 AM Feature #56153 (Resolved): add option to dump pg log to pg command
- Currently we need to stop the cluster and use ceph-objectstore-tool to dump the pg log.
Command: ceph pg n.n log
will...
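For context, a hedged sketch of the current offline workaround mentioned above (OSD id, data path, and pgid are placeholders):
  systemctl stop ceph-osd@3                      # the OSD must be down for offline access
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 --pgid 2.1f --op log > pg-2.1f-log.json
  systemctl start ceph-osd@3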
06/21/2022
- 10:46 PM Backport #53339 (In Progress): pacific: src/osd/scrub_machine.cc: FAILED ceph_assert(state_cast<c...
- 08:01 PM Bug #56151 (Resolved): mgr/DaemonServer:: adjust_pgs gap > max_pg_num_change should be gap >= max...
- Output should say that gap >= max_pg_num_change when a pg is trying to scale beyond pgp_num in mgr/DaemonServer::adjust...
- 07:31 PM Bug #51904: test_pool_min_size:AssertionError:wait_for_clean:failed before timeout expired due to...
- Oh interesting. Is there a reason that bug would only be affecting minsize_recovery?
- 08:03 AM Bug #51904: test_pool_min_size:AssertionError:wait_for_clean:failed before timeout expired due to...
- Talked with Ronen Friedman regarding that issue, it may be related to other bug that he is working on that the scrub ...
- 07:20 PM Bug #49777: test_pool_min_size: 'check for active or peered' reached maximum tries (5) after wait...
- Sridhar Seshasayee wrote:
> /a/yuriw-2022-06-15_18:29:33-rados-wip-yuri4-testing-2022-06-15-1000-pacific-distro-defa...
- 05:23 PM Bug #56149 (New): thrash-erasure-code: AssertionError: wait_for_recovery timeout due to "active+r...
- Description:
rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr...
- 05:04 PM Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- - OSD.392 logs are available in gibba037 following path:...
- 05:03 PM Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- - Looks like there is a commonality that this crash is happening in shutdown/restart so looks like some issue during ...
- 04:50 PM Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- - After upgrading the LRC cluster, the same crash was seen in one of the OSDs in LRC....
- 02:31 PM Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- ...
- 04:53 PM Bug #51945: qa/workunits/mon/caps.sh: Error: Expected return 13, got 0
- /a/yuriw-2022-06-16_19:58:30-rados-wip-yuri7-testing-2022-06-16-1051-pacific-distro-default-smithi/6882914
- 04:37 PM Backport #56099: pacific: rados/test.sh hangs while running LibRadosTwoPoolsPP.ManifestFlushDupCount
- https://github.com/ceph/ceph/pull/46748
- 04:35 PM Bug #56102: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in - RocksDBStore::estimate_pref...
- Wasn't sure if I should keep the files on here due to privacy, but Neha said it's okay.
- 04:27 PM Bug #56102: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in - RocksDBStore::estimate_pref...
- @Adam I have uploaded all of the relevant logs from gibba019, osd.31 here. I found the crash in ceph-osd.31.log-20220...
- 02:25 PM Bug #56147 (Resolved): snapshots will not be deleted after upgrade from nautilus to pacific
- After upgrading from 14.2.22 to 16.2.9, snapshot deletion does not remove "clones" from the pool
More precise: Objects in...
- 07:15 AM Bug #56136 (Fix Under Review): [Progress] Do not show NEW PG_NUM value for pool if autoscaler is ...
- 06:51 AM Bug #56136 (Resolved): [Progress] Do not show NEW PG_NUM value for pool if autoscaler is set to off
- When noautoscale is set, autoscale-status shows a NEW PG_NUM value if pool pg_num is more than 96.
$ ./bin/ceph osd p...
- 06:50 AM Backport #56135 (Resolved): pacific: scrub starts message missing in cluster log
- https://github.com/ceph/ceph/pull/48070
- 06:50 AM Backport #56134 (Resolved): quincy: scrub starts message missing in cluster log
- https://github.com/ceph/ceph/pull/47621
- 06:45 AM Bug #55798 (Pending Backport): scrub starts message missing in cluster log
06/20/2022
- 07:38 AM Bug #53855: rados/test.sh hangs while running LibRadosTwoPoolsPP.ManifestFlushDupCount
- https://github.com/ceph/ceph/pull/46748
- 06:48 AM Bug #56030: frequently down and up a osd may cause recovery not in asynchronous
- add more log
- 04:52 AM Backport #56059 (Resolved): pacific: Assertion failure (ceph_assert(have_pending)) when creating ...
06/19/2022
- 01:44 PM Bug #54172: ceph version 16.2.7 PG scrubs not progressing
- @CorySnider: thanks. Your suggestion is spot on. The suggested fix
solves one issue. There is another problem relate...
06/17/2022
- 08:39 PM Bug #56102 (Duplicate): Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in - RocksDBStore::e...
- ...
- 08:37 PM Bug #56101 (Resolved): Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- ...
- 08:26 PM Feature #55982: log the numbers of dups in PG Log
- https://github.com/ceph/ceph/pull/46608 merged
- 08:19 PM Bug #53294 (New): rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush
- There are still some occurrences of this type of failure in Quincy, which includes the backport of #53855. So, I am r...
- 08:06 PM Backport #56099 (Resolved): pacific: rados/test.sh hangs while running LibRadosTwoPoolsPP.Manifes...
- 08:02 PM Bug #53855 (Pending Backport): rados/test.sh hangs while running LibRadosTwoPoolsPP.ManifestFlush...
- 06:54 AM Bug #53855: rados/test.sh hangs while running LibRadosTwoPoolsPP.ManifestFlushDupCount
- ok
- 07:28 PM Bug #56097 (Fix Under Review): Timeout on `sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtes...
- /a/yuriw-2022-06-16_18:33:18-rados-wip-yuri5-testing-2022-06-16-0649-distro-default-smithi/6882594...
- 06:45 PM Bug #44595: cache tiering: Error: oid 48 copy_from 493 returned error code -2
- /a/yuriw-2022-06-16_18:33:18-rados-wip-yuri5-testing-2022-06-16-0649-distro-default-smithi/6882724
- 04:44 PM Backport #56059: pacific: Assertion failure (ceph_assert(have_pending)) when creating new OSDs du...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46691
merged
- 02:47 PM Bug #55355: osd thread deadlock
- Radoslaw Zarzynski wrote:
> Big, big thanks, jianwei zhang, for your analysis. It was extremely helpful!
Good news,...
- 12:31 PM Bug #55355: osd thread deadlock
- Big, big thanks, jianwei zhang, for your analysis. It was extremely helpful!
- 12:30 PM Bug #55355 (Fix Under Review): osd thread deadlock
- 08:39 AM Bug #54172: ceph version 16.2.7 PG scrubs not progressing
- We've experienced this issue as well, on both 16.2.6 and 16.2.7, and I've identified the cause. Here's the scenario:
...
06/16/2022
- 03:28 PM Bug #53685: Assertion `HAVE_FEATURE(features, SERVER_OCTOPUS)' failed.
- /a/yuriw-2022-06-11_02:24:12-rados-quincy-release-distro-default-smithi/6873771
- 12:40 PM Backport #53338: pacific: osd/scrub: src/osd/scrub_machine.cc: 55: FAILED ceph_assert(state_cast<...
- It looks like this backport has been merged in https://github.com/ceph/ceph/pull/45374, and released in 16.2.8, so I ...
- 09:38 AM Bug #55141: thrashers/fastread: assertion failure: rollback_info_trimmed_to == head
- +*Observed this in a pacific run:*+
/a/yuriw-2022-06-15_18:29:33-rados-wip-yuri4-testing-2022-06-15-1000-pacific-dis...
- 09:23 AM Bug #49777: test_pool_min_size: 'check for active or peered' reached maximum tries (5) after wait...
- /a/yuriw-2022-06-15_18:29:33-rados-wip-yuri4-testing-2022-06-15-1000-pacific-distro-default-smithi/6881131
Descrip...
- 09:14 AM Bug #52124: Invalid read of size 8 in handle_recovery_delete()
- /a/yuriw-2022-06-15_18:29:33-rados-wip-yuri4-testing-2022-06-15-1000-pacific-distro-default-smithi/6881215
- 08:05 AM Bug #55726: Drained OSDs are still ACTIVE_PRIMARY - casuing high IO latency on clients
- ...
- 05:04 AM Bug #55153 (Fix Under Review): Make the mClock config options related to [res, wgt, lim] modifiab...
- 01:54 AM Bug #55750: mon: slow request of very long time
- Neha Ojha wrote:
> yite gu wrote:
> > It appears that this mon request has been completed,but it have no erase from...
06/15/2022
- 06:56 PM Bug #51904: test_pool_min_size:AssertionError:wait_for_clean:failed before timeout expired due to...
- This `wait_for_clean` assertion failure is happening with the minsize_recovery thrasher, which is used by rados/thras...
- 06:21 PM Bug #55750: mon: slow request of very long time
- yite gu wrote:
> It appears that this mon request has been completed,but it have no erase from ops_in_flight_sharded...
- 06:11 PM Bug #55776: octopus: map exx had wrong cluster addr
- ...
- 05:54 PM Bug #55726: Drained OSDs are still ACTIVE_PRIMARY - casuing high IO latency on clients
- Could you please provide the output from @ceph osd lspools@ as well?
- 05:51 PM Bug #47300 (Resolved): mount.ceph fails to understand AAAA records from SRV record
- 05:50 PM Backport #55513 (Resolved): quincy: mount.ceph fails to understand AAAA records from SRV record
- 05:50 PM Backport #55514 (Resolved): pacific: mount.ceph fails to understand AAAA records from SRV record
- 05:06 PM Backport #55514: pacific: mount.ceph fails to understand AAAA records from SRV record
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46112
merged
- 05:03 PM Backport #55296: pacific: malformed json in a Ceph RESTful API call can stop all ceph-mon services
- nikhil kshirsagar wrote:
> please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/...
- 10:05 AM Bug #56057 (Fix Under Review): Add health error if one or more OSDs registered v1/v2 public ip ad...
- 07:13 AM Bug #56057 (Pending Backport): Add health error if one or more OSDs registered v1/v2 public ip ad...
- In a containerized environment, after an OSD node reboot, some OSDs registered their public v1/v2 addresses on cluster ...
- 09:10 AM Backport #56059 (In Progress): pacific: Assertion failure (ceph_assert(have_pending)) when creati...
- 08:55 AM Backport #56059 (Resolved): pacific: Assertion failure (ceph_assert(have_pending)) when creating ...
- https://github.com/ceph/ceph/pull/46691
- 09:07 AM Backport #56060 (In Progress): quincy: Assertion failure (ceph_assert(have_pending)) when creatin...
- 08:55 AM Backport #56060 (Resolved): quincy: Assertion failure (ceph_assert(have_pending)) when creating n...
- https://github.com/ceph/ceph/pull/46689
- 08:51 AM Bug #55773 (Pending Backport): Assertion failure (ceph_assert(have_pending)) when creating new OS...
06/14/2022
- 09:40 PM Bug #49777: test_pool_min_size: 'check for active or peered' reached maximum tries (5) after wait...
- Running some tests to try and reproduce the issue and get a sense of how frequently it fails. This has actually been ...
- 09:05 PM Backport #51287 (In Progress): pacific: LibRadosService.StatusFormat failed, Expected: (0) != (re...
- 08:08 PM Bug #53855: rados/test.sh hangs while running LibRadosTwoPoolsPP.ManifestFlushDupCount
- @Myoungwon Oh does this look like the same thing to you? Perhaps your fix needs to be backported to Pacific.
/a/yu...
- 03:03 PM Bug #52316: qa/tasks/mon_thrash.py: _do_thrash AssertionError len(s['quorum']) == len(mons)
- /a/yuriw-2022-06-13_16:36:31-rados-wip-yuri7-testing-2022-06-13-0706-distro-default-smithi/6876523
Description: ra...
- 02:37 PM Bug #56034: qa/standalone/osd/divergent-priors.sh fails in test TEST_divergent_3()
- Another detail to note is that this particular test has the pg autoscaler enabled, as opposed to TEST_divergent_2(), ...
- 10:48 AM Bug #56034 (Resolved): qa/standalone/osd/divergent-priors.sh fails in test TEST_divergent_3()
- /a/yuriw-2022-06-13_16:36:31-rados-wip-yuri7-testing-2022-06-13-0706-distro-default-smithi/6876516
Also historical...
- 06:22 AM Bug #56030: frequently down and up a osd may cause recovery not in asynchronous
- I set osd_async_recovery_min_cost = 0, hoping to force async recovery anyway.
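A hedged example of applying that setting cluster-wide through the mon config store (the commenter's exact method is not shown):
  ceph config set osd osd_async_recovery_min_cost 0   # treat every recovery as cheap enough for the async path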
- 03:57 AM Bug #56030 (Fix Under Review): frequently down and up a osd may cause recovery not in asynchronous
- ceph version: octopus 15.2.13
In my test cluster there are 6 osds: 3 for the bucket index pool, 3 for other pools; there ar...
06/13/2022
- 10:40 PM Bug #56028 (New): thrash_cache_writeback_proxy_none: FAILED ceph_assert(version == old_value.vers...
- This assertion is resurfacing in Pacific runs. The last fix for this was tracked in #46323, but this test branch incl...
- 10:27 PM Bug #52737: osd/tests: stat mismatch
- @Ronen I'm pretty sure this is a duplicate of #50222
- 10:26 PM Bug #50222: osd: 5.2s0 deep-scrub : stat mismatch
- /a/yuriw-2022-06-07_19:48:58-rados-wip-yuri6-testing-2022-06-07-0955-pacific-distro-default-smithi/6866688
- 03:12 AM Bug #52948: osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
- /a/yuriw-2022-06-09_03:58:30-smoke-quincy-release-distro-default-smithi/6869659/
Test description: smoke/basic/{clus...