Ceph - Bug #9487: dumpling: snaptrimmer causes slow requests while backfilling. osd_snap_trim_sleep not helping
https://tracker.ceph.com/issues/9487
Update 2014-09-16T01:47:36Z by Dan van der Ster: https://tracker.ceph.com/issues/9487?journal_id=41162
<ul></ul><p>In case it wasn't clear, there is nothing special about osd.11. Each time I reweight 2 OSDs, the slow requests come from a different OSD that has become busy with snap trimming.</p>
<p>Update 2014-09-16T01:59:35Z by Dan van der Ster: https://tracker.ceph.com/issues/9487?journal_id=41163</p>
<ul></ul><blockquote>
<p>I was able to isolate the cause of the backfilling to one single OSD</p>
</blockquote>
<p>Typo: I was able to isolate the cause of the <em>slow requests</em> to one single OSD...</p>
<p>Update 2014-09-16T02:33:17Z by Dan van der Ster: https://tracker.ceph.com/issues/9487?journal_id=41164</p>
<ul></ul><p>Here is a bit more... I checked for "snap_trimmer entry" on other OSDs this morning. There were a few others, but all except osd.11 had only a handful (~5-10) of snaps to trim, e.g.:</p>
<p>/var/log/ceph/ceph-osd.252.log:2014-09-16 09:03:04.407862 7f0665871700 10 osd.252 pg_epoch: 94976 pg[2.174( v 74470'89792 (45121'86792,74470'89792] local-les=94976 n=0 ec=1 les/c 94976/94976 94971/94975/94975) [252,656,694] r=0 lpr=94975 lcod 0'0 mlcod 0'0 active+clean snaptrimq=[1~5]] snap_trimmer entry</p>
<p>/var/log/ceph/ceph-osd.279.log:2014-09-16 09:03:04.929522 7f23c9e2f700 10 osd.279 pg_epoch: 94976 pg[2.3d7( v 74470'98502 (6107'95502,74470'98502] local-les=94976 n=0 ec=1 les/c 94976/94976 94971/94975/94975) [279,496,1140] r=0 lpr=94975 lcod 0'0 mlcod 0'0 active+clean snaptrimq=[1~5]] snap_trimmer entry</p>
<p>/var/log/ceph/ceph-osd.1156.log:2014-09-16 09:03:03.381479 7fe40ff9b700 10 osd.1156 pg_epoch: 94975 pg[2.200( v 74470'95313 (6105'92313,74470'95313] local-les=94975 n=0 ec=1 les/c 94975/94975 94971/94974/94974) [1156,780,304] r=0 lpr=94974 lcod 0'0 mlcod 0'0 active+clean snaptrimq=[1~5]] snap_trimmer entry</p>
<p>/var/log/ceph/ceph-osd.220.log:2014-09-16 09:07:03.202399 7f0c98fbb700 10 osd.220 pg_epoch: 95014 pg[5.29( v 93278'1466 (0'0,93278'1466] local-les=94972 n=483 ec=357 les/c 94972/95014 94971/94971/94971) [220,385,169] r=0 lpr=94971 mlcod 0'0 active+clean snaptrimq=[7236~2,7239~4,723e~38]] snap_trimmer entry</p>
<p>But osd.11 had more than 28000 snaps to trim:</p>
<pre>
# grep 'trimming snap' ceph-osd.11.log | wc -l
28457
</pre>
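<p>As an aside on reading these logs: the snaptrimq values in the excerpts above ([1~5], [7236~2,7239~4,723e~38]) are interval sets printed as start~length pairs in hex, so the queue depth is the sum of the lengths. A small sketch of decoding them (a hypothetical helper for log analysis, not Ceph code):</p>

```python
def snaptrimq_count(q):
    """Count snaps in a snaptrimq string like '[7236~2,7239~4,723e~38]'.

    Each entry is 'start~length', both fields hex-encoded; the number of
    snaps still queued for trimming is the sum of the interval lengths.
    """
    body = q.strip("[]")
    if not body:
        return 0
    return sum(int(entry.split("~")[1], 16) for entry in body.split(","))

# The healthy OSDs above each had only a handful queued:
print(snaptrimq_count("[1~5]"))                    # 5
print(snaptrimq_count("[7236~2,7239~4,723e~38]"))  # 62
```

<p>By this measure the osd.220 queue holds 62 snaps, while the 28457 "trimming snap" lines on osd.11 reflect a far deeper queue.</p>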
<p>It looks like we need a way to do the trimming in chunks, say 16 at a time? Is that doable?</p>
<p>Update 2014-09-17T06:03:19Z by Dan van der Ster: https://tracker.ceph.com/issues/9487?journal_id=41253</p>
<ul></ul><p>I also noticed that before the snap trimmer starts, purged_snaps is [] for 5.318. Is that normal, or should (the complete) purged_snaps normally be copied along to the new OSD? Certainly if it had been, the small incremental trim (if any) would have completed very quickly. If, on the contrary, the missed purged_snaps update only happens rarely, that might explain why we don't see these slow requests after every backfill of a pool 5 PG completes.</p>
<p>I saw in the doc that "if the purged_snaps update is lost, we merely retrim a now empty snap." -- but I didn't find where the purged_snaps update occurs and how it could possibly be lost. In this case, "merely" retrimming takes a long time, so understanding where it got lost could also be a good solution to this issue.</p>
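<p>The relationship being described here, roughly: on activate, a PG's trim queue is the pool's removed snaps minus the PG's own purged_snaps, so if purged_snaps arrives empty on the backfilled OSD, every snap ever deleted in the pool is requeued for trimming. A simplified Python model of that set arithmetic (illustrative only; the real code uses interval_set&lt;snapid_t&gt;, not plain sets):</p>

```python
def compute_snap_trimq(pool_removed_snaps, purged_snaps):
    """Simplified model: snaps deleted pool-wide but not yet purged
    locally end up on the snap trimmer's queue."""
    return sorted(set(pool_removed_snaps) - set(purged_snaps))

removed = range(1, 7001)  # ~7000 snaps deleted over the pool's lifetime

# Healthy case: purged_snaps was carried over, only recent deletions queue up.
print(len(compute_snap_trimq(removed, range(1, 6996))))  # 5

# Bug case: purged_snaps starts out [], so all 7000 get retrimmed.
print(len(compute_snap_trimq(removed, [])))              # 7000
```

<p>With per-snap trim cost, the difference between 5 and 7000 queued snaps is exactly the difference between an unnoticeable trim and minutes of slow requests.</p>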
<p>I also wanted to mention that pool 5 is our glance images pool. The number of snaps to trim will only continue to rise over time, so the effect of this issue will get worse and worse.</p>
<p>Update 2014-09-18T02:52:00Z by Dan van der Ster: https://tracker.ceph.com/issues/9487?journal_id=41296</p>
<ul></ul><p>Please comment on <a class="external" href="https://github.com/ceph/ceph/pull/2516">https://github.com/ceph/ceph/pull/2516</a>.<br />Thanks!</p>
<p>Update 2014-09-18T14:45:44Z by Sage Weil, sage@newdream.net: https://tracker.ceph.com/issues/9487?journal_id=41334</p>
<ul><li><strong>Status</strong> changed from <i>New</i> to <i>In Progress</i></li><li><strong>Assignee</strong> set to <i>Sage Weil</i></li><li><strong>Priority</strong> changed from <i>High</i> to <i>Urgent</i></li></ul><p>Dan van der Ster wrote:</p>
<blockquote>
<p>I also noticed that before the snap trimmer starts, purged_snaps is [] for 5.318. Is that normal, or should (the complete) purged_snaps normally be copied along to the new OSD? Certainly if it had been, the small incremental trim (if any) would have completed very quickly. If, on the contrary, the missed purged_snaps update only happens rarely, that might explain why we don't see these slow requests after every backfill of a pool 5 PG completes.</p>
</blockquote>
<p>I think this is exactly the problem... purged_snaps shouldn't start out empty. That's why it is uselessly scanning all 7000 deleted snaps.</p>
<p>Trying to reproduce this...</p>
<p>Update 2014-09-18T15:08:17Z by Sage Weil, sage@newdream.net: https://tracker.ceph.com/issues/9487?journal_id=41340</p>
<ul></ul><p>Okay, I can't seem to reproduce this.</p>
<p>Dan or Florian, can you attach a log? What I need is debug ms = 1 and debug osd = 20 on the OSD that the PG is getting backfilled <strong>to</strong>, but starting before the backfill starts. I want to see why purged_snaps is not getting initialized properly.</p>
<p>Thanks!</p>
<p>Update 2014-09-18T15:21:15Z by Dan van der Ster: https://tracker.ceph.com/issues/9487?journal_id=41343</p>
<ul></ul><p>Thanks Sage. There's a log with debug_osd=20 attached to this issue. I'll try tomorrow to get one with debug_ms=1 too.</p>
<p>Update 2014-09-18T15:23:33Z by Sage Weil, sage@newdream.net: https://tracker.ceph.com/issues/9487?journal_id=41344</p>
<ul></ul><p>Nevermind, I've reproduced it!</p>
<p>Update 2014-09-18T17:01:09Z by Sage Weil, sage@newdream.net: https://tracker.ceph.com/issues/9487?journal_id=41349</p>
<ul><li><strong>Status</strong> changed from <i>In Progress</i> to <i>Fix Under Review</i></li></ul><p>wip-9487<br />wip-9487-dumpling for backport</p>
<p>Update 2014-09-19T01:39:37Z by Dan van der Ster: https://tracker.ceph.com/issues/9487?journal_id=41358</p>
<ul></ul><p>Hi Sage, thanks for the quick patch. I tried wip-9487-dumpling on our test cluster, and now there is no snap trimming at all after backfilling. If I perform some rmsnaps during the backfilling, the snap_trimq is never more than 1 or 2 snaps long. So I think this fixes it, thanks!</p>
<p>Update 2014-09-23T14:09:42Z by Samuel Just, sjust@redhat.com: https://tracker.ceph.com/issues/9487?journal_id=41630</p>
<ul><li><strong>Status</strong> changed from <i>Fix Under Review</i> to <i>12</i></li></ul><p>I had some comments on that pull request.</p>
<p>Update 2014-09-25T10:38:16Z by Samuel Just, sjust@redhat.com: https://tracker.ceph.com/issues/9487?journal_id=41841</p>
<ul><li><strong>Status</strong> changed from <i>12</i> to <i>7</i></li></ul>
<p>Update 2014-10-06T08:04:35Z by Dan van der Ster: https://tracker.ceph.com/issues/9487?journal_id=42574</p>
<ul></ul><p>Hi Sam,<br />Same as for <a class="issue tracker-1 status-3 priority-6 priority-high2 closed" title="Bug: Dumpling: removing many snapshots in a short time makes OSDs go berserk (Resolved)" href="https://tracker.ceph.com/issues/9503">#9503</a><br />I think this is fixed in master/giant... correct? Just a gentle reminder that we'd appreciate a backport in dumpling.<br />Cheers, Dan</p>
<p>Update 2014-10-15T17:15:02Z by Sage Weil, sage@newdream.net: https://tracker.ceph.com/issues/9487?journal_id=43211</p>
<ul><li><strong>Status</strong> changed from <i>7</i> to <i>Pending Backport</i></li></ul>
<p>Update 2014-11-04T02:31:33Z by Dan van der Ster: https://tracker.ceph.com/issues/9487?journal_id=43976</p>
<ul><li><strong>File</strong> <a href="/attachments/download/1514/ceph-osd.76.log.gz">ceph-osd.76.log.gz</a> added</li></ul><p>Hi Sage and Sam,<br />I've just tried wip-9113-9487-dumpling on our test cluster. (Using this build: <a class="external" href="http://gitbuilder.ceph.com/ceph-rpm-centos6_5-x86_64-basic/ref/wip-9113-9487-dumpling/x86_64/">http://gitbuilder.ceph.com/ceph-rpm-centos6_5-x86_64-basic/ref/wip-9113-9487-dumpling/x86_64/</a>)</p>
<pre>
osd.76: running {"version":"0.67.11-4-g496e561"}
</pre>
<p>This commit:</p>
<pre><code>ReplicatedPG: don't move on to the next snap immediately</code></pre>
<p>seems to work. I see a sleep between each trim operation now.</p>
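<p>For clarity on what that sleep buys us: with osd_snap_trim_sleep &gt; 0 the trimmer should yield between individual snaps instead of draining the whole queue in one pass, giving client I/O a chance to run. A toy model of the desired behavior (the sleep callback is injected so it is testable; names are illustrative, not the actual ReplicatedPG code):</p>

```python
from collections import deque

def trim_queue(snap_trimq, osd_snap_trim_sleep, sleep=lambda s: None):
    """Toy snap trimmer: trim one snap, then sleep before moving on to
    the next, rather than trimming the entire queue back-to-back."""
    events = []
    q = deque(snap_trimq)
    while q:
        snap = q.popleft()
        events.append(("trim", snap))
        if q and osd_snap_trim_sleep > 0:
            sleep(osd_snap_trim_sleep)          # no-op here; time.sleep in real life
            events.append(("sleep", osd_snap_trim_sleep))
    return events

events = trim_queue([1, 2, 3], osd_snap_trim_sleep=0.1)
print([e[0] for e in events])  # ['trim', 'sleep', 'trim', 'sleep', 'trim']
```

<p>The bug being fixed is effectively the version of this loop that never reaches the sleep, trimming every queued snap in one uninterrupted burst.</p>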
<p>But this fix:</p>
<pre><code>osd: initialize purged_snap on backfill start; restart backfill if change</code></pre>
<p>doesn't seem to work here. I still see</p>
<pre>
2014-11-04 11:11:50.037102 7f84cc75f700 5 osd.76 pg_epoch: 18200 pg[35.0( empty local-les=0 n=0 ec=18036 les/c 18151/18151 18188/18188/18188) [76,71,298] r=0 lpr=18188 pi=18148-18187/2 mlcod 0'0 inactive] enter Started/Primary/Active
2014-11-04 11:11:50.037123 7f84cc75f700 10 osd.76 pg_epoch: 18200 pg[35.0( empty local-les=0 n=0 ec=18036 les/c 18151/18151 18188/18188/18188) [76,71,298] r=0 lpr=18188 pi=18148-18187/2 mlcod 0'0 inactive] state<Started/Primary/Active>: In Active, about to call activate
2014-11-04 11:11:50.037138 7f84cc75f700 10 osd.76 pg_epoch: 18200 pg[35.0( empty local-les=18200 n=0 ec=18036 les/c 18151/18151 18188/18188/18188) [76,71,298] r=0 lpr=18188 pi=18148-18187/2 mlcod 0'0 active] check_local
2014-11-04 11:11:50.037152 7f84cc75f700 10 osd.76 pg_epoch: 18200 pg[35.0( empty local-les=18200 n=0 ec=18036 les/c 18151/18151 18188/18188/18188) [76,71,298] r=0 lpr=18188 pi=18148-18187/2 mlcod 0'0 active snaptrimq=[1~6e]] activate - snap_trimq [1~6e]
...
2014-11-04 11:11:50.037152 7f84cc75f700 10 osd.76 pg_epoch: 18200 pg[35.0( empty local-les=18200 n=0 ec=18036 les/c 18151/18151 18188/18188/18188) [76,71,298] r=0 lpr=18188 pi=18148-18187/2 mlcod 0'0 active snaptrimq=[1~6e]] activate - snap_trimq [1~6e]
2014-11-04 11:12:01.181457 7f84c955a700 10 osd.76 pg_epoch: 18206 pg[35.0( empty local-les=18200 n=0 ec=18036 les/c 18200/18206 18188/18188/18188) [76,71,298] r=0 lpr=18188 mlcod 0'0 active+clean snaptrimq=[2~6d]] SnapTrimmer state<Trimming/WaitingOnReplicas>: purged_snaps now [1~1], snap_trimq now [2~6d]
2014-11-04 11:12:01.370462 7f84c955a700 10 osd.76 pg_epoch: 18206 pg[35.0( empty local-les=18200 n=0 ec=18036 les/c 18200/18206 18188/18188/18188) [76,71,298] r=0 lpr=18188 mlcod 0'0 active+clean snaptrimq=[3~6c]] SnapTrimmer state<Trimming/WaitingOnReplicas>: purged_snaps now [1~2], snap_trimq now [3~6c]
2014-11-04 11:12:01.508636 7f84c955a700 10 osd.76 pg_epoch: 18206 pg[35.0( empty local-les=18200 n=0 ec=18036 les/c 18200/18206 18188/18188/18188) [76,71,298] r=0 lpr=18188 mlcod 0'0 active+clean snaptrimq=[4~6b]] SnapTrimmer state<Trimming/WaitingOnReplicas>: purged_snaps now [1~3], snap_trimq now [4~6b]
...
</pre>
<p>A full log is attached (of osd.76, which is receiving PG 35.0, a PG with a few trimmed snaps).</p>
<p>(This is strange because the older branch wip-9487-dumpling <em>does</em> fix the purged_snaps initialization problem -- we're still running that branch elsewhere -- and the relevant commit hasn't changed much since then).</p>
<p>Update 2014-11-04T06:54:19Z by Dan van der Ster: https://tracker.ceph.com/issues/9487?journal_id=43982</p>
<ul></ul><p>Looking more, I noticed that the pool 35 PGs are not entering the backfilling state -- only recovery. I'm bringing osd.76 in and out, also reweighting it from 2.72 to 0 to 2.72. I never see the line:</p>
<pre><code>pg 35.0 ... restarting backfill on osd.76</code></pre>
<p>Hence it never enters the block in PG.cc where the purged_snaps are copied over. Pool 35 is a test pool where I've made some snaps, rm'd some snaps, and written some random bench objects. Any idea? Am I doing something wrong in this test, or do you think the purged_snaps need to be copied for the recovery state as well?</p>
<p>Update 2014-11-14T10:12:41Z by Samuel Just, sjust@redhat.com: https://tracker.ceph.com/issues/9487?journal_id=44437</p>
<ul></ul><p>I think that's an annoying special case for snaps purged on an empty pg. Both the old primary which did the trim and the new primary which is empty have the same last update 0'0 and the same empty log. So, the new empty primary's info is chosen over the old empty primary's info, and thus purged_snaps goes back to empty. If the old primary had a longer log, it would have resulted in the new primary effectively starting with the old primary's info and purged_snaps. I think that's a separate bug. Are you sure the other patch did not result in this behavior?</p>
<p>Update 2014-11-17T03:17:43Z by Dan van der Ster: https://tracker.ceph.com/issues/9487?journal_id=44456</p>
<ul><li><strong>File</strong> <a href="/attachments/download/1521/ceph-osd.90.log.gz">ceph-osd.90.log.gz</a> added</li></ul><p>Well, the PG isn't empty -- I've been writing a bunch of data to it using rados bench. Basically, I'm having trouble getting a new test pool to enter backfilling at all. But maybe that's a different issue.</p>
<p>Anyway, I have other older pools that are entering backfilling and have a few purged_snaps. But alas I still have re-trimming of already purged snaps with 0.67.11-4-g496e561. I've attached a log for osd.90 -- PG's 3.2e7 and 3.7e are affected, e.g.</p>
<pre>
2014-11-17 11:54:38.981159 7f0cc119e700 10 osd.90 pg_epoch: 19542 pg[3.2e7( v 19531'36796 (18972'33790,19531'36796] local-les=19542 n=0 ec=256 les/c 19542/19542 19522/19541/19532) [90,7,99] r=0 lpr=19541 lcod 19531'36795 mlcod 0'0 active+clean snaptrimq=[1~14]] SnapTrimmer state<Trimming/WaitingOnReplicas>: WaitingOnReplicas: adding snap 1 to purged_snaps
2014-11-17 11:54:38.981177 7f0cc119e700 10 osd.90 pg_epoch: 19542 pg[3.2e7( v 19531'36796 (18972'33790,19531'36796] local-les=19542 n=0 ec=256 les/c 19542/19542 19522/19541/19532) [90,7,99] r=0 lpr=19541 lcod 19531'36795 mlcod 0'0 active+clean snaptrimq=[2~13]] SnapTrimmer state<Trimming/WaitingOnReplicas>: purged_snaps now [1~1], snap_trimq now [2~13]
2014-11-17 11:54:39.084524 7f0cc079d700 10 osd.90 pg_epoch: 19542 pg[3.2e7( v 19531'36796 (18972'33790,19531'36796] local-les=19542 n=0 ec=256 les/c 19542/19542 19522/19541/19532) [90,7,99] r=0 lpr=19541 lcod 19531'36795 mlcod 0'0 active+clean snaptrimq=[2~13]] SnapTrimmer state<Trimming/WaitingOnReplicas>: WaitingOnReplicas: adding snap 2 to purged_snaps
2014-11-17 11:54:39.084542 7f0cc079d700 10 osd.90 pg_epoch: 19542 pg[3.2e7( v 19531'36796 (18972'33790,19531'36796] local-les=19542 n=0 ec=256 les/c 19542/19542 19522/19541/19532) [90,7,99] r=0 lpr=19541 lcod 19531'36795 mlcod 0'0 active+clean snaptrimq=[3~12]] SnapTrimmer state<Trimming/WaitingOnReplicas>: purged_snaps now [1~2], snap_trimq now [3~12]
2014-11-17 11:54:39.189007 7f0cc119e700 10 osd.90 pg_epoch: 19542 pg[3.2e7( v 19531'36796 (18972'33790,19531'36796] local-les=19542 n=0 ec=256 les/c 19542/19542 19522/19541/19532) [90,7,99] r=0 lpr=19541 lcod 19531'36795 mlcod 0'0 active+clean snaptrimq=[3~12]] SnapTrimmer state<Trimming/WaitingOnReplicas>: WaitingOnReplicas: adding snap 3 to purged_snaps
2014-11-17 11:54:39.189121 7f0cc119e700 10 osd.90 pg_epoch: 19542 pg[3.2e7( v 19531'36796 (18972'33790,19531'36796] local-les=19542 n=0 ec=256 les/c 19542/19542 19522/19541/19532) [90,7,99] r=0 lpr=19541 lcod 19531'36795 mlcod 0'0 active+clean snaptrimq=[4~11]] SnapTrimmer state<Trimming/WaitingOnReplicas>: purged_snaps now [1~3], snap_trimq now [4~11]
...
</pre>
<p>The full log is attached. Is this yet another case?</p>
<p>Update 2014-11-17T10:53:24Z by Samuel Just, sjust@redhat.com: https://tracker.ceph.com/issues/9487?journal_id=44480</p>
<ul></ul><p>All other osds are running that branch, right? Also, which sha1 was it that you thought was working (the branches have been changed)?</p>
<p>Update 2014-11-17T14:49:35Z by Dan van der Ster: https://tracker.ceph.com/issues/9487?journal_id=44490</p>
<ul></ul><p>This test cluster is currently running 0.67.11-4-g496e561, mons and osds.</p>
<p>On our prod cluster we still run ceph-0.67.10-15.g23876d7.el6.x86_64 (from post 10 above). I haven't tried that older patch on the test cluster recently. I could revert to that and reconfirm that it's passing my current tests.</p>
<p>Update 2014-11-17T14:58:35Z by Samuel Just, sjust@redhat.com: https://tracker.ceph.com/issues/9487?journal_id=44493</p>
<ul></ul><p>Oddly, I'm able to reproduce it easily on v0.67.11, but not wip-9113-9487-dumpling (496e561d81f2dd1bf92d588fc3afc2431e0a5b98). Are you sure the other osds were running wip-9113-9487-dumpling? Can you trigger backfill again on this pg/osd combo (probably mark it out, wait for active+clean, and then back in, wait for active+clean) and capture logging on all of the osds in the old and new acting set?</p>
<p>Update 2014-11-21T08:03:45Z by Dan van der Ster: https://tracker.ceph.com/issues/9487?journal_id=44719</p>
<ul><li><strong>File</strong> <a href="/attachments/download/1539/cephversion.txt">cephversion.txt</a> <a class="icon-only icon-magnifier" title="View" href="/attachments/1539/cephversion.txt">View</a> added</li></ul><p>Today I restarted every mon and osd on the test cluster (again) and confirmed it is all running 0.67.11-4-g496e561. Now I cannot reproduce the issue anymore!! Great, but strange! (I still have a few PGs with the "annoying special case for snaps purged on an empty pg" issue, but the other issue I was reporting is no longer there).</p>
<p>This is very strange... I am sure that the OSDs have been running 0.67.11-4-g496e561 since Nov 4 -- I have the startup logs to prove it (attached). Perhaps there is something about this patch that implies that OSDs need to be restarted twice for it to fully work?</p>
<p>Anyway, from my side this looks good now to merge to dumpling (and firefly). I will definitely pay close attention when we are running this in production.</p>
<p>Update 2014-11-24T10:18:01Z by Sage Weil, sage@newdream.net: https://tracker.ceph.com/issues/9487?journal_id=44786</p>
<ul><li><strong>Status</strong> changed from <i>Pending Backport</i> to <i>Resolved</i></li></ul>
<p>Update 2014-11-25T06:55:31Z by Sage Weil, sage@newdream.net: https://tracker.ceph.com/issues/9487?journal_id=44833</p>
<ul><li><strong>Status</strong> changed from <i>Resolved</i> to <i>Pending Backport</i></li></ul><p>oops, still need firefly</p>
<p>Update 2014-12-02T13:22:43Z by Sage Weil, sage@newdream.net: https://tracker.ceph.com/issues/9487?journal_id=45074</p>
<ul><li><strong>Assignee</strong> changed from <i>Sage Weil</i> to <i>Samuel Just</i></li></ul>
<p>Update 2014-12-02T13:47:46Z by Samuel Just, sjust@redhat.com: https://tracker.ceph.com/issues/9487?journal_id=45098</p>
<ul><li><strong>Status</strong> changed from <i>Pending Backport</i> to <i>Resolved</i></li></ul>