Bug #64347

src/osd/PG.cc: FAILED ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps)

Added by Laura Flores 3 months ago. Updated 30 days ago.

Status:
Pending Backport
Priority:
Normal
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
backport_processed
Backport:
quincy,reef,squid
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

/a/lflores-2024-02-06_20:55:47-rados-wip-yuri4-testing-2024-02-05-0849-distro-default-smithi/7548965

2024-02-06T22:42:10.739 INFO:tasks.ceph.ceph_manager.ceph:no progress seen, keeping timeout for now
2024-02-06T22:42:10.739 DEBUG:teuthology.orchestra.run.smithi079:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json
2024-02-06T22:42:10.740 DEBUG:teuthology.orchestra.run.smithi113:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 30 ceph --cluster ceph --admin-daemon /var/run/ceph/ceph-osd.6.asok dump_ops_in_flight
2024-02-06T22:42:10.744 INFO:tasks.ceph.osd.7.smithi113.stderr:2024-02-06T22:42:10.642+0000 7f281b9e5640 -1 osd.7 pg_epoch: 3046 pg[104.2( v 2995'18 (2983'14,2995'18] local-lis/les=2562/2563 n=10 ec=296/296 lis/c=2562/2562 les/c/f=2563/2563/0 sis=2562) [7,0,6] r=0 lpr=2562 crt=2995'18 lcod 2991'16 mlcod 2991'16 active+clean+scrubbing [ 104.2:  ]  trimq=[3~4](3)] on_active_advmap removed_snaps already contains [3~1]
2024-02-06T22:42:10.744 INFO:tasks.ceph.osd.7.smithi113.stderr:./src/osd/PG.cc: In function 'virtual void PG::on_active_advmap(const OSDMapRef&)' thread 7f281b9e5640 time 2024-02-06T22:42:10.649362+0000
2024-02-06T22:42:10.744 INFO:tasks.ceph.osd.7.smithi113.stderr:./src/osd/PG.cc: 1901: FAILED ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps)
2024-02-06T22:42:10.745 INFO:tasks.ceph.osd.7.smithi113.stderr: ceph version 19.0.0-1269-g633ab857 (633ab857b9926af935a3e6291c3e1d9251aca357) squid (dev)
2024-02-06T22:42:10.745 INFO:tasks.ceph.osd.7.smithi113.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x118) [0x5580a3ff3dd8]
2024-02-06T22:42:10.745 INFO:tasks.ceph.osd.7.smithi113.stderr: 2: ceph-osd(+0x3f4f8f) [0x5580a3ff3f8f]
2024-02-06T22:42:10.745 INFO:tasks.ceph.osd.7.smithi113.stderr: 3: ceph-osd(+0x38e4b6) [0x5580a3f8d4b6]
2024-02-06T22:42:10.745 INFO:tasks.ceph.osd.7.smithi113.stderr: 4: (PeeringState::Active::react(PeeringState::AdvMap const&)+0x19e) [0x5580a43eb8be]
2024-02-06T22:42:10.745 INFO:tasks.ceph.osd.7.smithi113.stderr: 5: ceph-osd(+0x82dbc1) [0x5580a442cbc1]
2024-02-06T22:42:10.745 INFO:tasks.ceph.osd.7.smithi113.stderr: 6: (PeeringState::advance_map(std::shared_ptr<OSDMap const>, std::shared_ptr<OSDMap const>, std::vector<int, std::allocator<int> >&, int, std::vector<int, std::allocator<int> >&, int, PeeringCtx&)+0x266) [0x5580a43b96f6]
2024-02-06T22:42:10.745 INFO:tasks.ceph.osd.7.smithi113.stderr: 7: (PG::handle_advance_map(std::shared_ptr<OSDMap const>, std::shared_ptr<OSDMap const>, std::vector<int, std::allocator<int> >&, int, std::vector<int, std::allocator<int> >&, int, PeeringCtx&)+0xfb) [0x5580a41f3d1b]
2024-02-06T22:42:10.745 INFO:tasks.ceph.osd.7.smithi113.stderr: 8: (OSD::advance_pg(unsigned int, PG*, ThreadPool::TPHandle&, PeeringCtx&)+0x39a) [0x5580a4170c1a]
2024-02-06T22:42:10.745 INFO:tasks.ceph.osd.7.smithi113.stderr: 9: (OSD::dequeue_peering_evt(OSDShard*, PG*, std::shared_ptr<PGPeeringEvent>, ThreadPool::TPHandle&)+0x237) [0x5580a417d997]
2024-02-06T22:42:10.745 INFO:tasks.ceph.osd.7.smithi113.stderr: 10: (ceph::osd::scheduler::PGPeeringItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x51) [0x5580a43a8e81]
2024-02-06T22:42:10.745 INFO:tasks.ceph.osd.7.smithi113.stderr: 11: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xab3) [0x5580a4187493]
2024-02-06T22:42:10.745 INFO:tasks.ceph.osd.7.smithi113.stderr: 12: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x293) [0x5580a467f2d3]
2024-02-06T22:42:10.745 INFO:tasks.ceph.osd.7.smithi113.stderr: 13: ceph-osd(+0xa80834) [0x5580a467f834]
2024-02-06T22:42:10.746 INFO:tasks.ceph.osd.7.smithi113.stderr: 14: /lib/x86_64-linux-gnu/libc.so.6(+0x94b43) [0x7f2840062b43]
2024-02-06T22:42:10.746 INFO:tasks.ceph.osd.7.smithi113.stderr: 15: /lib/x86_64-linux-gnu/libc.so.6(+0x126a00) [0x7f28400f4a00]

Description: rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/rados_api_tests}

This test ran with the read balancer enabled (d-balancer/read), so it's worth checking whether that could be related.


Related issues 5 (3 open, 2 closed)

Related to RADOS - Bug #65559: src/osd/PG.cc: FAILED ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps) (Closed)

Has duplicate RADOS - Bug #64514: LibRadosTwoPoolsPP.PromoteSnapScrub test failed (Duplicate, Matan Breizman)

Copied to RADOS - Backport #65305: reef: src/osd/PG.cc: FAILED ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps) (In Progress, Matan Breizman)
Copied to RADOS - Backport #65306: squid: src/osd/PG.cc: FAILED ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps) (In Progress, Matan Breizman)
Copied to RADOS - Backport #65307: quincy: src/osd/PG.cc: FAILED ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps) (In Progress, Matan Breizman)
Actions #1

Updated by Radoslaw Zarzynski 3 months ago

void PG::on_active_advmap(const OSDMapRef &osdmap)
{
  const auto& new_removed_snaps = osdmap->get_new_removed_snaps();
  auto i = new_removed_snaps.find(get_pgid().pool());
  if (i != new_removed_snaps.end()) {
    bool bad = false;
    for (auto j : i->second) {
      if (snap_trimq.intersects(j.first, j.second)) {
        decltype(snap_trimq) added, overlap;
        added.insert(j.first, j.second);
        overlap.intersection_of(snap_trimq, added);
        derr << __func__ << " removed_snaps already contains " 
             << overlap << dendl;
        bad = true;
        snap_trimq.union_of(added);
      } else {
        snap_trimq.insert(j.first, j.second);
      }
    }
    dout(10) << __func__ << " new removed_snaps " << i->second
             << ", snap_trimq now " << snap_trimq << dendl;
    ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps);
  }

  const auto& new_purged_snaps = osdmap->get_new_purged_snaps();
  auto j = new_purged_snaps.find(get_pgid().pgid.pool());
  if (j != new_purged_snaps.end()) {
    bool bad = false;
    for (auto k : j->second) {
      if (!recovery_state.get_info().purged_snaps.contains(k.first, k.second)) {
        interval_set<snapid_t> rm, overlap;
        rm.insert(k.first, k.second);
        overlap.intersection_of(recovery_state.get_info().purged_snaps, rm);
        derr << __func__ << " purged_snaps does not contain " 
             << rm << ", only " << overlap << dendl;
        recovery_state.adjust_purged_snaps(
          [&overlap](auto &purged_snaps) {
            purged_snaps.subtract(overlap);
          });
        // This can currently happen in the normal (if unlikely) course of
        // events.  Because adding snaps to purged_snaps does not increase
        // the pg version or add a pg log entry, we don't reliably propagate
        // purged_snaps additions to other OSDs.
        // One example:
        //  - purge S
        //  - primary and replicas update purged_snaps
        //  - no object updates
        //  - pg mapping changes, new primary on different node
        //  - new primary pg version == eversion_t(), so info is not
        //    propagated.
        //bad = true;
      } else {
        recovery_state.adjust_purged_snaps(
          [&k](auto &purged_snaps) {
            purged_snaps.erase(k.first, k.second);
          });
      }
    }
    dout(10) << __func__ << " new purged_snaps " << j->second
             << ", now " << recovery_state.get_info().purged_snaps << dendl;
    ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps);
  }
}

Actions #2

Updated by Radoslaw Zarzynski 3 months ago

  • Assignee changed from Laura Flores to Matan Breizman

Hi Matan. Would you mind taking a look?

Actions #3

Updated by Matan Breizman 3 months ago

  • Status changed from New to In Progress

From osd.7 (relevant pg is 104.2):
Periodic scrub of purged snaps begins and finds 41 strays from snap 3, which queues a snap retrim:

2024-02-06T22:38:08.979+0000 7f2835218640 10 osd.7 2927 scrub_purged_snaps done queueing pgs, updating superblock
2024-02-06T22:38:08.979+0000 7f2835218640 10 osd.7 2927 scrub_purged_snaps done

2024-02-06T22:39:36.801+0000 7f2835218640 10 osd.7 2973 scrub_purged_snaps
2024-02-06T22:39:36.801+0000 7f2835218640 10 snap_mapper.run

...
2024-02-06T22:39:36.801+0000 7f2835218640 10 snap_mapper.run stray 104:0a89393a:test-rados-api-smithi079-33923-14::foo32:3 snap 3 in pool 104 shard 255 purged_snaps [2,4)
2024-02-06T22:39:36.801+0000 7f2835218640 10 snap_mapper.run stray 104:01d89abe:test-rados-api-smithi079-33923-14::foo25:3 snap 3 in pool 104 shard 255 purged_snaps [2,4)
2024-02-06T22:39:36.801+0000 7f2835218640 10 snap_mapper.run stray 104:8c467b10:test-rados-api-smithi079-33923-14::foo89:3 snap 3 in pool 104 shard 255 purged_snaps [2,4)
...

2024-02-06T22:39:36.801+0000 7f2835218640 10 snap_mapper.run end, found 41 stray

2024-02-06T22:39:36.801+0000 7f2835218640 10 osd.7 2973 scrub_purged_snaps requeue pg 104.2 0x5580a76cb000 snap 3

2024-02-06T22:39:36.801+0000 7f2835218640 20 osd.7 pg_epoch: 2973 pg[104.2( v 2965'10 (2946'6,2965'10] local-lis/les=2562/2563 n=6 ec=296/296 lis/c=2562/2562 les/c/f=2563/2563/0 sis=2562) [7,0,6] r=0 lpr=2562 crt=2965'10 lcod 2952'8 mlcod 2952'8 active+clean+scrubbing [ 104.2:  ]  trimq=[3~1](3)] queue_snap_retrim snap 3, trimq now [3~1], repeat 3

The trimming in pg 104.2 is blocked by the ongoing scrub:

2024-02-06T22:39:36.801+0000 7f2835218640 20 osd.7 pg_epoch: 2973 pg[104.2( v 2965'10 (2946'6,2965'10] local-lis/les=2562/2563 n=6 ec=296/296 lis/c=2562/2562 les/c/f=2563/2563/0 sis=2562) [7,0,6] r=0 lpr=2562 crt=2965'10 lcod 2952'8 mlcod 2952'8 active+clean+scrubbing [ 104.2:  ]  trimq=[3~1](3)] queue_snap_retrim snap 3, trimq now [3~1], repeat 3
2024-02-06T22:39:36.801+0000 7f2835218640 10 osd.7 pg_epoch: 2973 pg[104.2( v 2965'10 (2946'6,2965'10] local-lis/les=2562/2563 n=6 ec=296/296 lis/c=2562/2562 les/c/f=2563/2563/0 sis=2562) [7,0,6] r=0 lpr=2562 crt=2965'10 lcod 2952'8 mlcod 2952'8 active+clean+scrubbing [ 104.2:  ]  trimq=[3~1](3)] kick_snap_trim: clean and snaps to trim, kicking
2024-02-06T22:39:36.801+0000 7f2835218640 10 osd.7 pg_epoch: 2973 pg[104.2( v 2965'10 (2946'6,2965'10] local-lis/les=2562/2563 n=6 ec=296/296 lis/c=2562/2562 les/c/f=2563/2563/0 sis=2562) [7,0,6] r=0 lpr=2562 crt=2965'10 lcod 2952'8 mlcod 2952'8 active+clean+scrubbing [ 104.2:  ]  trimq=[3~1](3)] SnapTrimmer state<NotTrimming>: NotTrimming react KickTrim
2024-02-06T22:39:36.801+0000 7f2835218640 10 osd.7 pg_epoch: 2973 pg[104.2( v 2965'10 (2946'6,2965'10] local-lis/les=2562/2563 n=6 ec=296/296 lis/c=2562/2562 les/c/f=2563/2563/0 sis=2562) [7,0,6] r=0 lpr=2562 crt=2965'10 lcod 2952'8 mlcod 2952'8 active+clean+scrubbing [ 104.2:  ]  trimq=[3~1](3)] SnapTrimmer state<NotTrimming>:  scrubbing, will requeue snap_trimmer after
2024-02-06T22:39:36.801+0000 7f2835218640 20 osd.7 pg_epoch: 2973 pg[104.2( v 2965'10 (2946'6,2965'10] local-lis/les=2562/2563 n=6 ec=296/296 lis/c=2562/2562 les/c/f=2563/2563/0 sis=2562) [7,0,6] r=0 lpr=2562 crt=2965'10 lcod 2952'8 mlcod 2952'8 active+clean+scrubbing [ 104.2:  ]  trimq=[3~1](3)] exit NotTrimming
2024-02-06T22:39:36.801+0000 7f2835218640 20 osd.7 pg_epoch: 2973 pg[104.2( v 2965'10 (2946'6,2965'10] local-lis/les=2562/2563 n=6 ec=296/296 lis/c=2562/2562 les/c/f=2563/2563/0 sis=2562) [7,0,6] r=0 lpr=2562 crt=2965'10 lcod 2952'8 mlcod 2952'8 active+clean+scrubbing [ 104.2:  ]  trimq=[3~1](3)] enter WaitScrub

The PG scrubbing keeps blocking the trimming (for a long time). Later on, the PG receives snaps 6, 5, 4 and 3 as new removed_snaps; when receiving 3, we hit the assert since 3 was already in snap_trimq from earlier:

2024-02-06T22:37:45.252+0000 7f281f9ed640 10 osd.7 pg_epoch: 2911 pg[104.2( empty local-lis/les=2562/2563 n=0 ec=296/296 lis/c=2562/2562 les/c/f=2563/2563/0 sis=2562) [7,0,6] r=0 lpr=2562 crt=0'0 mlcod 0'0 active+clean+scrubbing [ 104.2:  ]  trimq=[2~1]] on_active_advmap new removed_snaps [2~1], snap_trimq now [2~1]
2024-02-06T22:42:07.626+0000 7f281f9ed640 10 osd.7 pg_epoch: 3043 pg[104.2( v 2995'18 (2983'14,2995'18] local-lis/les=2562/2563 n=10 ec=296/296 lis/c=2562/2562 les/c/f=2563/2563/0 sis=2562) [7,0,6] r=0 lpr=2562 crt=2995'18 lcod 2991'16 mlcod 2991'16 active+clean+scrubbing [ 104.2:  ]  trimq=[3~1,6~1](3)] on_active_advmap new removed_snaps [6~1], snap_trimq now [3~1,6~1]
2024-02-06T22:42:08.746+0000 7f281b9e5640 10 osd.7 pg_epoch: 3044 pg[104.2( v 2995'18 (2983'14,2995'18] local-lis/les=2562/2563 n=10 ec=296/296 lis/c=2562/2562 les/c/f=2563/2563/0 sis=2562) [7,0,6] r=0 lpr=2562 crt=2995'18 lcod 2991'16 mlcod 2991'16 active+clean+scrubbing [ 104.2:  ]  trimq=[3~1,5~2](3)] on_active_advmap new removed_snaps [5~1], snap_trimq now [3~1,5~2]
2024-02-06T22:42:09.542+0000 7f281b9e5640 10 osd.7 pg_epoch: 3045 pg[104.2( v 2995'18 (2983'14,2995'18] local-lis/les=2562/2563 n=10 ec=296/296 lis/c=2562/2562 les/c/f=2563/2563/0 sis=2562) [7,0,6] r=0 lpr=2562 crt=2995'18 lcod 2991'16 mlcod 2991'16 active+clean+scrubbing [ 104.2:  ]  trimq=[3~4](3)] on_active_advmap new removed_snaps [4~1], snap_trimq now [3~4]
2024-02-06T22:42:10.642+0000 7f281b9e5640 -1 osd.7 pg_epoch: 3046 pg[104.2( v 2995'18 (2983'14,2995'18] local-lis/les=2562/2563 n=10 ec=296/296 lis/c=2562/2562 les/c/f=2563/2563/0 sis=2562) [7,0,6] r=0 lpr=2562 crt=2995'18 lcod 2991'16 mlcod 2991'16 active+clean+scrubbing [ 104.2:  ]  trimq=[3~4](3)] on_active_advmap removed_snaps already contains [3~1]  <---
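
To make the failure mode concrete, here is a minimal, self-contained C++ sketch (not Ceph code) that replays the sequence above. A plain std::set of snap ids stands in for Ceph's interval_set<snapid_t>, and the osd_debug_verify_cached_snaps flag mirrors the debug option that must have been enabled in this run (otherwise the assert could not fire). The values are taken from the log lines above; everything else is illustrative.

// Toy replay of the duplicate-removed-snap sequence (illustrative only).
#include <cassert>
#include <cstdint>
#include <set>
#include <utility>
#include <vector>

int main() {
  std::set<uint64_t> snap_trimq;                  // stand-in for PG::snap_trimq
  bool osd_debug_verify_cached_snaps = true;      // assumed true in this QA run

  // scrub_purged_snaps found strays for snap 3 and re-queued it
  // ("queue_snap_retrim snap 3, trimq now [3~1]").
  snap_trimq.insert(3);

  // Later osdmaps deliver new_removed_snaps: [6~1], [5~1], [4~1], then [3~1].
  std::vector<std::pair<uint64_t, uint64_t>> new_removed = {
    {6, 1}, {5, 1}, {4, 1}, {3, 1}};

  bool bad = false;
  for (auto [start, len] : new_removed) {
    bool intersects = false;
    for (uint64_t s = start; s < start + len; ++s)
      intersects |= snap_trimq.count(s) > 0;
    if (intersects)
      bad = true;                                 // [3~1] is already queued
    for (uint64_t s = start; s < start + len; ++s)
      snap_trimq.insert(s);
  }

  // Mirrors ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps):
  // with the debug option on, the duplicated [3~1] aborts the OSD.
  assert(!bad || !osd_debug_verify_cached_snaps); // fires here
  return 0;
}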


Worth noting:

Before scrubbing the purged snaps, a (new) random thrash command was used (https://github.com/ceph/ceph/pull/53579):

2024-02-06T22:15:03.532+0000 7f476e823640  1 -- [v2:172.21.15.113:6800/994897033,v1:172.21.15.113:6801/994897033] <== client.34955 172.21.15.79:0/1041767495 2 ==== command(tid 2: {"prefix": "reset_purged_snaps_last"}) v1 ==== 61+0+0 (crc 0 0 0) 0x563ced53f1e0 con 0x563cecd8ec00

2024-02-06T22:15:37.811+0000 7f476e823640  1 -- [v2:172.21.15.113:6800/994897033,v1:172.21.15.113:6801/994897033] <== client.45954 172.21.15.79:0/3324902735 2 ==== command(tid 2: {"prefix": "reset_purged_snaps_last"}) v1 ==== 61+0+0 (crc 0 0 0) 0x563cedbafb80 con 0x563ced734800


Snap 3 is set as purged on the OSD side (OSD::handle_osd_map):

2024-02-06T22:38:21.107+0000 7f282dd54640 20 osd.7 2933 handle_osd_map pg_num_history pg_num_history(e2934 pg_nums {1={15=1},2={18=8},6={47=32},7={47=32},11={48=32},12={48=32},14={48=32},17={48=32},19={48=32},22={48=32},31={48=32},50={52=32},81={101=32},104={296=32},113={427=32},143={1221=32},159={1366=32},160={1370=32},175={1586=8},178={1669=32},218={1917=8},219={1972=32},221={2302=32},222={2375=8},223={2401=8},224={2411=8},225={2416=32},226={2421=8},227={2428=8},228={2435=8},229={2443=8},230={2450=8},231={2457=8},232={2528=32},233={2530=32},234={2532=32},235={2534=32},236={2535=32},237={2536=32},238={2539=32},239={2542=32},240={2546=32},241={2548=32},242={2561=8},243={2576=8},244={2658=8},245={2665=8},246={2672=8},247={2679=8},248={2686=8},249={2694=8},250={2702=8},251={2709=8},252={2751=8},253={2752=32},254={2791=32},255={2796=32},256={2800=32},257={2806=8},258={2843=32},259={2846=32},260={2850=8},261={2850=32},262={2871=8},263={2871=32},264={2903=32},265={2918=32}} deleted_pools 2301,143,2322,178,2352,221,2370,218,2396,222,2406,223,2413,224,2418,225,2423,226,2430,227,2438,228,2445,229,2452,230,2460,231,2527,159,2529,160,2531,232,2533,233,2534,234,2535,19,2537,235,2538,236,2541,238,2545,239,2547,240,2551,241,2558,237,2571,242,2633,243,2653,175,2660,244,2667,245,2674,246,2681,247,2689,248,2697,249,2704,250,2711,251,2748,7,2787,253,2788,252,2795,254,2799,255,2803,256,2808,257,2842,81,2845,50,2845,258,2847,6,2849,259,2856,260,2868,12,2868,31,2870,22,2875,262,2900,263,2902,219,2917,264)
2024-02-06T22:38:21.107+0000 7f282dd54640 10 snap_mapper.record_purged_snaps purged_snaps {2934={104=[2~1]}}
2024-02-06T22:38:21.107+0000 7f282dd54640 20 bluestore.OmapIteratorImpl(0x5580a73cb7a0) lower_bound to PSN__104_0000000000000003 key 0x0000000000000000C1A3FC6E0000000000000403'.PSN__104_0000000000000003'
2024-02-06T22:38:21.107+0000 7f282dd54640 20 bluestore.OmapIteratorImpl(0x5580a73cb7a0) valid is at 0x0000000000000000C1A3FC6E0000000000000403'.PSN__142_0000000000000003'
2024-02-06T22:38:21.107+0000 7f282dd54640 10 snap_mapper.record_purged_snaps [2,3) - join with later [2,4)
2024-02-06T22:38:21.107+0000 7f282dd54640 10 snap_mapper.record_purged_snaps rm 0 keys, set 1 keys

This looks off:

2024-02-06T22:38:21.107+0000 7f282dd54640 20 bluestore.OmapIteratorImpl(0x5580a73cb7a0) lower_bound to PSN__104_0000000000000003 key 0x0000000000000000C1A3FC6E0000000000000403'.PSN__104_0000000000000003'
2024-02-06T22:38:21.107+0000 7f282dd54640 20 bluestore.OmapIteratorImpl(0x5580a73cb7a0) valid is at 0x0000000000000000C1A3FC6E0000000000000403'.PSN__142_0000000000000003'

Looking up snaps in pool 104 returned snaps from pool 142, and this caused `scrub_purged_snaps` to add snap id 3 to the trim queue.
We are probably missing a pool != key_pool check, similar to the one on the mon side.

See: https://github.com/ceph/ceph/pull/28865/commits/0a48392ce066471233cc1e81e957b2999b9c411c

The same check should have been applied in SnapMapper::_lookup_purged_snap().
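
To illustrate the missing guard, here is a standalone sketch; it is not the actual SnapMapper code, and the helper names (parse_purged_snap_key, lookup_purged_snap_sketch) are hypothetical. It only decodes the PSN__<pool>_<snap-in-hex> key format visible in the log above, and shows where a pool mismatch on the lower_bound result should be treated as "not found" instead of being handed back to the caller (the actual fix belongs in SnapMapper::_lookup_purged_snap(), per the note above).

// Illustrative-only sketch of the missing pool != key_pool guard.
#include <cstdint>
#include <cstdio>
#include <map>
#include <optional>
#include <string>

struct PurgedSnapKey {
  int64_t pool;
  uint64_t begin_snap;     // e.g. 3 for "PSN__104_0000000000000003"
};

// Parse keys of the form "PSN__<pool>_<snap-in-hex>", as seen in the OSD log.
static std::optional<PurgedSnapKey> parse_purged_snap_key(const std::string& key) {
  const std::string prefix = "PSN__";
  if (key.rfind(prefix, 0) != 0) return std::nullopt;
  auto sep = key.find('_', prefix.size());
  if (sep == std::string::npos) return std::nullopt;
  PurgedSnapKey out;
  out.pool = std::stoll(key.substr(prefix.size(), sep - prefix.size()));
  out.begin_snap = std::stoull(key.substr(sep + 1), nullptr, 16);
  return out;
}

// lower_bound() over an ordered key space can land on the *next pool's* keys
// when the requested pool has no matching entry.  Without the pool check, the
// caller treats pool 142's [2,4) record as if it belonged to pool 104.
static std::optional<PurgedSnapKey>
lookup_purged_snap_sketch(const std::map<std::string, int>& omap,
                          int64_t pool, uint64_t snap) {
  char buf[64];
  std::snprintf(buf, sizeof(buf), "PSN__%lld_%016llx",
                (long long)pool, (unsigned long long)snap);
  auto it = omap.lower_bound(buf);
  if (it == omap.end()) return std::nullopt;
  auto parsed = parse_purged_snap_key(it->first);
  if (!parsed) return std::nullopt;
  if (parsed->pool != pool)
    return std::nullopt;   // the missing case: valid iterator, wrong pool
  return parsed;
}

int main() {
  // Only pool 142 has a purged-snaps record, as in the log excerpt.
  std::map<std::string, int> omap = {{"PSN__142_0000000000000003", 0}};
  auto hit = lookup_purged_snap_sketch(omap, 104, 3);
  return hit ? 1 : 0;      // without the pool check, this lookup would wrongly "hit"
}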

Actions #4

Updated by Matan Breizman 3 months ago

  • Status changed from In Progress to Fix Under Review
  • Backport set to quincy,reef
  • Pull request ID set to 55562
Actions #5

Updated by Radoslaw Zarzynski 2 months ago

Bump up – needs QA.

Actions #6

Updated by Matan Breizman about 2 months ago

  • Related to Bug #64514: LibRadosTwoPoolsPP.PromoteSnapScrub test failed added
Actions #7

Updated by Matan Breizman about 2 months ago

  • Related to deleted (Bug #64514: LibRadosTwoPoolsPP.PromoteSnapScrub test failed)
Actions #8

Updated by Matan Breizman about 2 months ago

  • Has duplicate Bug #64514: LibRadosTwoPoolsPP.PromoteSnapScrub test failed added
Actions #9

Updated by Aishwarya Mathuria about 2 months ago

/a/yuriw-2024-03-15_19:59:43-rados-wip-yuri6-testing-2024-03-15-0709-distro-default-smithi/7603610/

Actions #10

Updated by Radoslaw Zarzynski about 2 months ago

In QA.

Actions #11

Updated by Laura Flores about 1 month ago

/a/yuriw-2024-04-01_20:57:46-rados-wip-yuri3-testing-2024-04-01-0837-squid-distro-default-smithi/7634716

Actions #12

Updated by Laura Flores about 1 month ago

  • Backport changed from quincy,reef to quincy,reef,squid
Actions #13

Updated by Matan Breizman 30 days ago

  • Status changed from Fix Under Review to Pending Backport
Actions #14

Updated by Backport Bot 30 days ago

  • Copied to Backport #65305: reef: src/osd/PG.cc: FAILED ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps) added
Actions #15

Updated by Backport Bot 30 days ago

  • Copied to Backport #65306: squid: src/osd/PG.cc: FAILED ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps) added
Actions #16

Updated by Backport Bot 30 days ago

  • Copied to Backport #65307: quincy: src/osd/PG.cc: FAILED ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps) added
Actions #17

Updated by Backport Bot 30 days ago

  • Tags set to backport_processed
Actions #18

Updated by Laura Flores 16 days ago

  • Related to Bug #65559: src/osd/PG.cc: FAILED ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps) added