<p>Ceph - Bug #9008: Objecter: pg listing can deadlock when throttling is in use (<a href="https://tracker.ceph.com/issues/9008">https://tracker.ceph.com/issues/9008</a>)</p>
<p><strong>Updated by Sage Weil (sage@newdream.net) on 2014-08-05T08:28:21Z</strong></p>
<p>Please query the admin socket for the process like so:</p>
<pre><code>ceph daemon /var/run/ceph/ceph-client.*.asok objecter_requests</code></pre>
<p>to see the in flight requests. That will tell you which request(s) are blocked and with which OSD they are outstanding. Then you can check the OSD with</p>
<pre><code>ceph daemon /var/run/ceph/ceph-osd.NNN.asok dump_ops_in_flight</code></pre>
<p>to see what their state is there. I'm guessing the request is hung on the OSD side of things...</p>
<p><strong>Updated by Sage Weil (sage@newdream.net) on 2014-08-05T08:28:29Z</strong></p>
<ul><li><strong>Status</strong> changed from <i>New</i> to <i>Need More Info</i></li></ul>
<p><strong>Updated by Guang Yang (yguang11@outlook.com) on 2014-08-05T18:15:21Z</strong></p>
<blockquote>
<p>I'm guessing the request is hung on the OSD side of things...</p>
</blockquote>
<p>Thanks Sage. Sadly, after restarting the radosgw daemon, I am not able to reproduce it for now (I will keep monitoring).</p>
<p>Even with an in-flight OP, shouldn't it time out after some time without a response? From radosgw's log, it seems it just hangs forever.</p>
<p>The following are logs from the same thread:</p>
<pre>
2014-08-05 06:09:35.448230 7faadee30700 1 heartbeat_map is_healthy 'RGWProcess::m_tp thread 0x7fa9dc7f3700' had timed out after 600
2014-08-05 06:09:40.448390 7faadee30700 1 heartbeat_map is_healthy 'RGWProcess::m_tp thread 0x7fa9dc7f3700' had timed out after 600
2014-08-05 06:09:45.448586 7faadee30700 1 heartbeat_map is_healthy 'RGWProcess::m_tp thread 0x7fa9dc7f3700' had timed out after 600
[... same message repeated every 5 seconds ...]
2014-08-05 06:13:40.458695 7faadee30700 1 heartbeat_map is_healthy 'RGWProcess::m_tp thread 0x7fa9dc7f3700' had timed out after 600
</pre>
<p><strong>Updated by Guang Yang (yguang11@outlook.com) on 2014-09-03T06:40:13Z</strong></p>
<p>@radosgw</p>
<pre>
$ ceph daemon /var/run/ceph/ceph-client.*.asok objecter_requests
{ "ops": [
      { "tid": 15582485,
        "pg": "3.ed2ef9fb",
        "osd": 146,
        "object_id": "default.7291.42_2546732403_de301a716f_o.jpg",
        "object_locator": "@3",
        "target_object_id": "default.7291.42_2546732403_de301a716f_o.jpg",
        "target_object_locator": "@3",
        "paused": 0,
        "used_replica": 0,
        "precalc_pgid": 0,
        "last_sent": "2014-09-03 12:48:31.135795",
        "attempts": 1,
        "snapid": "head",
        "snap_context": "0=[]",
        "mtime": "0.000000",
        "osd_ops": [
            "getxattrs",
            "stat",
            "read 0~5898240"]},
]...
</pre>
<p>@osd146</p>
<pre>
$ ceph daemon /var/run/ceph/ceph-osd.NNN.asok dump_ops_in_flight
{ "num_ops": 0,
  "ops": []}
</pre>
<p>And currently all radosgw threads hang in the same way.</p>
<p><strong>Updated by Sage Weil (sage@newdream.net) on 2014-09-03T08:19:22Z</strong></p>
<p>Guang Yang wrote:</p>
<blockquote>
<p>@radosgw</p>
<pre>
$ ceph daemon /var/run/ceph/ceph-client.*.asok objecter_requests
{ "ops": [
      { "tid": 15582485,
        "pg": "3.ed2ef9fb",
        "osd": 146,
        "object_id": "default.7291.42_2546732403_de301a716f_o.jpg",
        "object_locator": "@3",
        "target_object_id": "default.7291.42_2546732403_de301a716f_o.jpg",
        "target_object_locator": "@3",
        "paused": 0,
        "used_replica": 0,
        "precalc_pgid": 0,
        "last_sent": "2014-09-03 12:48:31.135795",
        "attempts": 1,
        "snapid": "head",
        "snap_context": "0=[]",
        "mtime": "0.000000",
        "osd_ops": [
            "getxattrs",
            "stat",
            "read 0~5898240"]},
]...
</pre>
<p>@osd146</p>
<pre>
$ ceph daemon /var/run/ceph/ceph-osd.NNN.asok dump_ops_in_flight
{ "num_ops": 0,
  "ops": []}
</pre>
<p>And currently all radosgw threads hang in the same way..</p>
</blockquote>
<p>Hmm, and this is reproducible? It may be the throttling in the msgr layer... debug ms = 20 might help, if you can reproduce this situation. You could also run <code>ceph pg &lt;pgid&gt; query</code> to confirm that the pg is peered and active. From this hung state, if you run <code>ceph osd down 146</code>, will things then re-peer and recover?</p>
<p><strong>Updated by Guang Yang (yguang11@outlook.com) on 2014-09-04T17:28:24Z</strong></p>
<blockquote>
<p>Sage Weil wrote:<br />Hmm, and this is reproducible? It may be the throttling in the msgr layer... debug ms = 20 might help, if you can reproduce this situation. You could also <code>ceph pg &lt;pgid&gt; query</code> to confirm that the pg is peered and active. From this hung state, if you ceph osd down 146, will things then re-peer and recover?</p>
</blockquote>
<ol>
<li>Yeah, it is very reproducible on v0.80.4; however, we didn't see this pattern before (prior to v0.80).</li>
<li>I can confirm that all PGs are active+clean during the time.</li>
<li>Let me increase the debug ms to 20 and check.</li>
</ol>
<p>Thanks!</p>
<p><strong>Updated by Guang Yang (yguang11@outlook.com) on 2014-09-10T07:17:26Z</strong></p>
<p>While in the process of testing with debug ms = 20, I have a couple of questions:</p>
<ol>
<li>What is the risk of adding a timeout for the radosgw op? My understanding is that the change should be around <code>librados::IoCtxImpl::operate_*</code>, adding a timeout to the conditional wait; is that correct?</li>
<li>Is it possible this was caused by network jitter (NIC error, etc.)? I assume not, since there is explicit message ack handling at the messenger layer, but it would be nice to get that confirmed.</li>
<li>Regarding the throttling you mentioned previously, can you elaborate on what type of throttling might cause this behavior?</li>
</ol>
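<p>The timeout idea in question 1 can be sketched with a small model. This is Python purely for illustration (the real code path is C++ in librados), and the <code>TimedOp</code> class and its methods are hypothetical names, not librados API:</p>
<pre><code>import threading

# Illustrative model, not librados: an operation completes by signalling
# a condition variable; the caller waits with a deadline instead of
# blocking forever, which is what adding a timeout to the conditional
# wait in operate_* would amount to.
class TimedOp:
    def __init__(self):
        self.done = False
        self.cond = threading.Condition()

    def complete(self):
        with self.cond:
            self.done = True
            self.cond.notify_all()

    def wait(self, timeout=None):
        """Return True if the op completed, False on timeout."""
        with self.cond:
            while not self.done:
                if not self.cond.wait(timeout):
                    return False  # timed out: caller can fail the op instead of hanging
            return True

op = TimedOp()
print(op.wait(timeout=0.1))  # no reply ever arrives -> False
</code></pre>
<p>The risk, of course, is choosing the timeout value: too short and slow-but-healthy OSD ops get failed spuriously; too long and the thread still appears hung for that duration.</p>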
<p>Thanks,<br />Guang</p>
<p><strong>Updated by Guang Yang (yguang11@outlook.com) on 2014-09-10T07:18:45Z</strong></p>
<p>For another hang we observed with debug ms = 1 on the radosgw side, it is confirmed that the osd_op does not have a matching osd_op_reply, which hangs the thread.</p>
<p><strong>Updated by Guang Yang (yguang11@outlook.com) on 2014-09-10T07:47:25Z</strong></p>
<ul><li><strong>File</strong> <a href="/attachments/download/1387/bug9008.log">bug9008.log</a> added</li></ul><p>radosgw side log before hang.</p>
<p><strong>Updated by Guang Yang (yguang11@outlook.com) on 2014-09-11T06:08:53Z</strong></p>
<p>I finally got some logs which revealed that the osd_op_reply message had been received by the Pipe, but it stayed in the DispatchQueue and no dispatcher tried to dispatch it (the dispatcher thread was throttled).</p>
<p><strong>Radosgw WQ thread logs</strong>:</p>
<pre>
2014-09-09 14:46:24.857510 7f657ad0a700 20 -- 10.214.140.210:0/1064300 submit_message osd_op(client.29459.0:174471939 [cmpxattr user.rgw.idtag (22) op 1 mode 1,create 0~0,delete,setxattr user.rgw.idtag (23),writefull 0~2784877,setxattr user.rgw.manifest (466),setxattr user.rgw.acl (127),setxattr user.rgw.content_type (25),setxattr user.rgw.etag (33),setxattr user.rgw.x-amz-meta-origin (56)] 3.3000ebaa ondisk+write e523) v4 remote, 10.214.141.21:6805/33707, have pipe.
2014-09-09 14:56:25.939603 7f66b0219700 1 heartbeat_map is_healthy 'RGWProcess::m_tp thread 0x7f657ad0a700' had timed out after 600
</pre>
<p><strong>reader/writer pipe logs</strong>:</p>
<pre>
2014-09-09 14:46:24.857650 7f63b9608700 20 -- 10.214.140.210:0/1064300 >> 10.214.141.21:6805/33707 pipe(0x7f669c148b00 sd=325 :35922 s=2 pgs=1474 cs=1 l=1 c=0x7f669c13b530).writer encoding 212897 features 17592186044415 0x7f647c02ac50 osd_op(client.29459.0:174471939 default.29329.25_7633301044_eeb7d3d282_o.jpg [cmpxattr user.rgw.idtag (22) op 1 mode 1,create 0~0,delete,setxattr user.rgw.idtag (23),writefull 0~2784877,setxattr user.rgw.manifest (466),setxattr user.rgw.acl (127),setxattr user.rgw.content_type (25),setxattr user.rgw.etag (33),setxattr user.rgw.x-amz-meta-origin (56)] 3.3000ebaa ondisk+write e523) v4
2014-09-09 14:46:24.857476 7f657ad0a700 1 -- 10.214.140.210:0/1064300 --> 10.214.141.21:6805/33707 -- osd_op(client.29459.0:174471939 default.29329.25_7633301044_eeb7d3d282_o.jpg [cmpxattr user.rgw.idtag (22) op 1 mode 1,create 0~0,delete,setxattr user.rgw.idtag (23),writefull 0~2784877,setxattr user.rgw.manifest (466),setxattr user.rgw.acl (127),setxattr user.rgw.content_type (25),setxattr user.rgw.etag (33),setxattr user.rgw.x-amz-meta-origin (56)] 3.3000ebaa ondisk+write e523) v4 -- ?+0 0x7f647c02ac50 con 0x7f669c13b530
2014-09-09 14:46:25.304788 7f63b48bb700 10 -- 10.214.140.210:0/1064300 >> 10.214.141.21:6805/33707 pipe(0x7f669c148b00 sd=325 :35922 s=2 pgs=1474 cs=1 l=1 c=0x7f669c13b530).reader got message 235178 0x7f64d80fb410 osd_op_reply(174471939 default.29329.25_7633301044_eeb7d3d282_o.jpg [cmpxattr (22) op 1 mode 1,create 0~0,delete,setxattr (23),writefull 0~2784877,setxattr (466),setxattr (127),setxattr (25),setxattr (33),setxattr (56)] v523'20313 uv20313 ondisk = 0) v6
2014-09-09 14:46:25.304821 7f63b48bb700 20 -- 10.214.140.210:0/1064300 queue 0x7f64d80fb410 prio 127
</pre>
<p>Searching further for 'done calling dispatch on 0x7f64d80fb410', nothing could be found.</p>
<p>So we can see that even though the message was received, it failed to be dispatched and thus could not be delivered to the upper layer (the radosgw WQ thread).</p>
<p>Another pattern I saw was that as those osd_op_reply messages didn't get dispatched, the objecter throttle hit its limit (since the tokens were released upon dispatch), and many subsequent radosgw WQ threads got stuck in another pattern (waiting on the throttler).</p>
<p><strong>However, the dispatcher thread was hung there as well</strong>:</p>
<pre>
2014-09-09 14:46:24.984694 7f66aee17700 1 -- 10.214.140.210:0/1064300 <== osd.107 10.214.140.25:6810/3812 540190 ==== osd_op_reply(174471909 [pgls start_epoch 0] v0'0 uv0 ondisk = 1) v6 ==== 167+0+44 (1640671103 0 139081063) 0x7f64f80bd3c0 con 0x7f669c0c46e0
2014-09-09 14:46:24.984742 7f66aee17700 2 throttle(objecter_bytes 0x204e468) _wait waiting...
</pre>
<p>After looking at the code, I failed to find the reason why the dispatcher would need to wait for throttle budget... It looks like it should only release budget, or did I miss anything obvious?</p>
<p><strong>Updated by Greg Farnum (gfarnum@redhat.com) on 2014-09-11T13:57:20Z</strong></p>
<p>Well, the dispatcher doesn't normally take budget directly, but it could be doing something else farther down the call chain that requires sending a new message. Do you have more complete logs you can share for somebody to look at?<br />I went over it briefly with Yehuda and we didn't see any obvious way that this should be able to deadlock, but that's my guess. You could test it by disabling objecter throttling and seeing if the rgw hangs disappear.</p>
<p><strong>Updated by Greg Farnum (gfarnum@redhat.com) on 2014-09-11T14:34:49Z</strong></p>
<ul><li><strong>Subject</strong> changed from <i>rgw: processing thread hangs forever if the pending OP does not get replied</i> to <i>Objecter: pg listing can deadlock when throttling is in use</i></li><li><strong>Status</strong> changed from <i>Need More Info</i> to <i>12</i></li></ul><p>Okay, this is because our object listing code is incorrect (in both Firefly and Giant-to-be). A pgls response has a context which <strong>directly</strong> triggers a new pgls message to go out — but if the Objecter doesn't have any available throttle, it blocks. And this happens in the dispatch thread which calls handle_osd_op_reply (in Firefly that's the main dispatch loop; in Giant it's the Pipe reader thread).<br />We might just be able to throw the callback into the Finisher thread instead — although then the Finisher thread can block, which isn't exactly ideal either. :/ The other option is to somehow have pg listing replies and outgoing continuation messages not drop and take op budget...</p>
<p><strong>Updated by Sage Weil (sage@newdream.net) on 2014-09-11T14:42:29Z</strong></p>
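<p>The cycle described above can be modeled in a few lines. This is a toy sketch (Python, hypothetical names), not Ceph code: budget is taken when a request is sent and only released when its reply finishes dispatching, and the pgls continuation runs inside the dispatch thread itself:</p>
<pre><code>import threading

# Toy model of the deadlock: op budget is taken on send and released
# when the reply is fully dispatched.
class Throttle:
    def __init__(self, capacity):
        self.sem = threading.Semaphore(capacity)

    def take(self, blocking=True):
        return self.sem.acquire(blocking=blocking)

    def put(self):
        self.sem.release()

throttle = Throttle(capacity=1)

def dispatch_pgls_reply():
    # The pgls reply's continuation runs *inside* the dispatch thread and
    # immediately tries to send the next pgls request, which needs budget.
    # The budget of the completed op is only released after dispatch
    # finishes, so with the throttle exhausted a blocking take() here
    # waits on a release that this same thread is supposed to perform.
    got = throttle.take(blocking=False)  # blocking=True would deadlock
    if not got:
        return "would deadlock"
    return "sent next pgls"

throttle.take()                # the in-flight pgls holds the only budget
print(dispatch_pgls_reply())   # -> "would deadlock"
</code></pre>
<p>The non-blocking <code>take</code> is only there so the sketch terminates; the real code performs the blocking equivalent, which is exactly the <code>throttle(objecter_bytes) _wait waiting...</code> line in the logs above.</p>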
<p>My vote is to make the pgls continuation hold onto its existing budget (and not take new budget). Is that feasible?</p>
<p><strong>Updated by Guang Yang (yguang11@outlook.com) on 2014-09-15T04:52:38Z</strong></p>
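<p>The budget-handoff idea can be sketched as follows (again an illustrative Python model with hypothetical names, not the actual patch): the continuation inherits the completed op's budget instead of dropping it and re-acquiring, so the dispatch thread never has to block on the throttle:</p>
<pre><code>import threading

class Throttle:
    def __init__(self, capacity):
        self.sem = threading.Semaphore(capacity)

    def take(self, blocking=True):
        return self.sem.acquire(blocking=blocking)

    def put(self):
        self.sem.release()

throttle = Throttle(capacity=1)

class PglsOp:
    def __init__(self, budget_held=False):
        # The first send must take budget; a continuation inherits it
        # (the `or` short-circuits, so no new acquisition happens).
        self.budget_held = budget_held or throttle.take(blocking=False)

    def send_continuation(self):
        # Hand the existing budget to the next listing op: no new take(),
        # so this is always safe to call from the dispatch thread.
        nxt = PglsOp(budget_held=self.budget_held)
        self.budget_held = False
        return nxt

op = PglsOp()             # takes the only unit of budget
cont = op.send_continuation()
print(cont.budget_held)   # -> True, without touching the throttle again
</code></pre>
<p>Since the continuation never goes back to the throttle, the dispatch thread cannot wedge on it even when all budget is in use.</p>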
<p>Please help to review: <a class="external" href="https://github.com/ceph/ceph/pull/2489">https://github.com/ceph/ceph/pull/2489</a></p>
<p><strong>Updated by Ian Colle (icolle@redhat.com) on 2014-09-23T09:25:15Z</strong></p>
<ul><li><strong>Project</strong> changed from <i>rgw</i> to <i>Ceph</i></li></ul>
<p><strong>Updated by Samuel Just (sjust@redhat.com) on 2014-09-23T13:27:49Z</strong></p>
<ul><li><strong>Status</strong> changed from <i>12</i> to <i>7</i></li></ul>
<p><strong>Updated by Samuel Just (sjust@redhat.com) on 2014-09-29T14:11:20Z</strong></p>
<ul><li><strong>Priority</strong> changed from <i>High</i> to <i>Urgent</i></li></ul>
<p><strong>Updated by Samuel Just (sjust@redhat.com) on 2014-10-07T11:28:27Z</strong></p>
<ul><li><strong>Status</strong> changed from <i>7</i> to <i>Pending Backport</i></li></ul>
<p><strong>Updated by Chris Armstrong (chris@opdemand.com) on 2014-12-19T16:05:57Z</strong></p>
<p>Hi folks, we're running into this on Giant. Is there any estimate as to when this'll be fixed in a maintenance release? Thanks!</p>
<p><strong>Updated by Nathan Cutler (ncutler@suse.cz) on 2015-06-14T07:50:23Z</strong></p>
<ul><li><strong>Assignee</strong> set to <i>Nathan Cutler</i></li><li><strong>Backport</strong> set to <i>firefly</i></li><li><strong>Regression</strong> set to <i>No</i></li></ul>
<p><strong>Updated by Loïc Dachary (loic@dachary.org) on 2015-09-04T15:19:26Z</strong></p>
<ul><li><strong>Status</strong> changed from <i>Pending Backport</i> to <i>Resolved</i></li></ul>