Ceph : Issues
https://tracker.ceph.com/
2017-06-28T04:48:22Z
Ceph - Backport #20443 (Resolved): kraken: osd: client IOPS drops to zero frequently
https://tracker.ceph.com/issues/20443
2017-06-28T04:48:22Z
Alexey Sheplyakov
asheplyakov@mirantis.com
https://github.com/ceph/ceph/pull/15962
Ceph - Backport #19928 (Resolved): kraken: mon crash on shutdown, lease_ack_timeout event
https://tracker.ceph.com/issues/19928
2017-05-15T09:32:40Z
Alexey Sheplyakov
asheplyakov@mirantis.com
https://github.com/ceph/ceph/pull/15084
Ceph - Backport #19916 (Resolved): kraken: osd/OSD.h: 706: FAILED assert(removed) in PG::unreg_ne...
https://tracker.ceph.com/issues/19916
2017-05-12T13:12:21Z
Alexey Sheplyakov
asheplyakov@mirantis.com
https://github.com/ceph/ceph/pull/15066
Ceph - Backport #19647 (Resolved): kraken: ceph-disk: directory-backed OSDs do not start on boot
https://tracker.ceph.com/issues/19647
2017-04-18T09:18:50Z
Alexey Sheplyakov
asheplyakov@mirantis.com
https://github.com/ceph/ceph/pull/14604
Ceph - Backport #19646 (Resolved): jewel: ceph-disk: directory-backed OSDs do not start on boot
https://tracker.ceph.com/issues/19646
2017-04-18T08:18:58Z
Alexey Sheplyakov
asheplyakov@mirantis.com
https://github.com/ceph/ceph/pull/14602
Ceph - Backport #19508 (Resolved): Upgrading from 0.94.6 to 10.2.6 can overload monitors (failed ...
https://tracker.ceph.com/issues/19508
2017-04-06T08:42:09Z
Alexey Sheplyakov
asheplyakov@mirantis.com
https://github.com/ceph/ceph/pull/14392
rgw - Backport #19322 (Resolved): kraken: multisite: possible infinite loop in RGWFetchAllMetaCR
https://tracker.ceph.com/issues/19322
2017-03-21T11:00:41Z
Alexey Sheplyakov
asheplyakov@mirantis.com
https://github.com/ceph/ceph/pull/14067
rgw - Backport #19321 (Resolved): jewel: multisite: possible infinite loop in RGWFetchAllMetaCR
https://tracker.ceph.com/issues/19321
2017-03-21T10:42:15Z
Alexey Sheplyakov
asheplyakov@mirantis.com
http://tracker.ceph.com/issues/17655
Ceph - Backport #19315 (Resolved): kraken: osd: pg log split does not rebuild index for parent or...
https://tracker.ceph.com/issues/19315
2017-03-20T13:14:23Z
Alexey Sheplyakov
asheplyakov@mirantis.com
https://github.com/ceph/ceph/pull/14048
Ceph - Backport #19314 (Resolved): jewel: osd: pg log split does not rebuild index for parent or ...
https://tracker.ceph.com/issues/19314
2017-03-20T12:33:06Z
Alexey Sheplyakov
asheplyakov@mirantis.com
https://github.com/ceph/ceph/pull/14047
rgw - Backport #19115 (Resolved): jewel: rgw_file: ensure valid_s3_object_name for directories
https://tracker.ceph.com/issues/19115
2017-03-01T07:03:14Z
Alexey Sheplyakov
asheplyakov@mirantis.com
https://github.com/ceph/ceph/pull/13717
rgw - Backport #18827 (Resolved): jewel: RGW leaking data
https://tracker.ceph.com/issues/18827
2017-02-06T08:17:28Z
Alexey Sheplyakov
asheplyakov@mirantis.com
https://github.com/ceph/ceph/pull/13358
Ceph - Backport #18729 (Resolved): jewel: ceph-disk: error on _bytes2str
https://tracker.ceph.com/issues/18729
2017-01-30T12:00:48Z
Alexey Sheplyakov
asheplyakov@mirantis.com
https://github.com/ceph/ceph/pull/13187
Ceph - Backport #18485 (Resolved): jewel: osd_recovery_incomplete: failed assert not manager.is_r...
https://tracker.ceph.com/issues/18485
2017-01-11T06:48:24Z
Alexey Sheplyakov
asheplyakov@mirantis.com
https://github.com/ceph/ceph/pull/12875
Ceph - Backport #17909 (Resolved): jewel: ReplicatedBackend::build_push_op: add a second config t...
https://tracker.ceph.com/issues/17909
2016-11-15T09:46:07Z
Alexey Sheplyakov
asheplyakov@mirantis.com
https://github.com/ceph/ceph/pull/11991
build_push_op assumes that reading 8 MB of omap entries costs about as much as reading 8 MB of object data, which is probably not true. Add a config option (osd_recovery_max_omap_entries_per_chunk ?) with a sane default (50k?) and change build_push_op to use it.
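For illustration only, a minimal ceph.conf sketch of how such a limit might be applied on OSD hosts; the option name and the ~50k value are the suggestions from this issue, not a confirmed final name or default:

[osd]
# Cap the number of omap entries pushed per recovery chunk
# (name and value as proposed in this backport, subject to change)
osd_recovery_max_omap_entries_per_chunk = 50000

With such a cap, an object whose omap is large would be recovered over several smaller push ops instead of one oversized chunk, keeping recovery reads from stalling client I/O.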