Ceph : Issues
https://tracker.ceph.com/
2017-06-27T11:13:38Z
Ceph
Redmine
Ceph - Backport #20428 (Resolved): jewel: osd: client IOPS drops to zero frequently
https://tracker.ceph.com/issues/20428
2017-06-27T11:13:38Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/15947">https://github.com/ceph/ceph/pull/15947</a></p>
Ceph - Backport #19926 (Resolved): jewel: mon crash on shutdown, lease_ack_timeout event
https://tracker.ceph.com/issues/19926
2017-05-15T08:16:42Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/15083">https://github.com/ceph/ceph/pull/15083</a></p>
Ceph - Backport #19915 (Resolved): jewel: osd/OSD.h: 706: FAILED assert(removed) in PG::unreg_nex...
https://tracker.ceph.com/issues/19915
2017-05-12T12:42:17Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/15065">https://github.com/ceph/ceph/pull/15065</a></p>
Ceph - Backport #19646 (Resolved): jewel: ceph-disk: directory-backed OSDs do not start on boot
https://tracker.ceph.com/issues/19646
2017-04-18T08:18:58Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/14602">https://github.com/ceph/ceph/pull/14602</a></p>
Ceph - Backport #19323 (Rejected): hammer: segfault in FileStore::fiemap()
https://tracker.ceph.com/issues/19323
2017-03-21T12:18:47Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/14069">https://github.com/ceph/ceph/pull/14069</a></p>
rgw - Backport #19321 (Resolved): jewel: multisite: possible infinite loop in RGWFetchAllMetaCR
https://tracker.ceph.com/issues/19321
2017-03-21T10:42:15Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="http://tracker.ceph.com/issues/17655">http://tracker.ceph.com/issues/17655</a></p>
Ceph - Backport #19314 (Resolved): jewel: osd: pg log split does not rebuild index for parent or ...
https://tracker.ceph.com/issues/19314
2017-03-20T12:33:06Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/14047">https://github.com/ceph/ceph/pull/14047</a></p>
Ceph - Backport #19265 (Resolved): jewel: An OSD was seen getting ENOSPC even with osd_failsafe_f...
https://tracker.ceph.com/issues/19265
2017-03-13T09:07:53Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/15050">https://github.com/ceph/ceph/pull/15050</a></p>
Ceph - Backport #18729 (Resolved): jewel: ceph-disk: error on _bytes2str
https://tracker.ceph.com/issues/18729
2017-01-30T12:00:48Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/13187">https://github.com/ceph/ceph/pull/13187</a></p>
Ceph - Backport #18581 (Rejected): jewel: osd: ENOENT on clone
https://tracker.ceph.com/issues/18581
2017-01-18T11:10:46Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/12978">https://github.com/ceph/ceph/pull/12978</a></p>
Ceph - Backport #18485 (Resolved): jewel: osd_recovery_incomplete: failed assert not manager.is_r...
https://tracker.ceph.com/issues/18485
2017-01-11T06:48:24Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/12875">https://github.com/ceph/ceph/pull/12875</a></p>
Ceph - Backport #18132 (Resolved): hammer: ReplicatedBackend::build_push_op: add a second config ...
https://tracker.ceph.com/issues/18132
2016-12-03T07:51:56Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/12417">https://github.com/ceph/ceph/pull/12417</a></p>
Ceph - Bug #17753 (Resolved): ceph-create-keys loops forever
https://tracker.ceph.com/issues/17753
2016-10-31T16:17:47Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p>ceph-create-keys got stuck while deploying a recent 10.2.4 build (5efb6b1c2c9eb68f479446e7b42cd8945a18dd53).<br />The syslog contains many repetitions of the following messages:</p>
<p>Oct 31 16:15:28 saceph-mon1 ceph-create-keys[4781]: INFO:ceph-create-keys:Cannot get or create admin key<br />Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: INFO:ceph-create-keys:Talking to monitor...<br />Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: no valid command found; 10 closest matches:<br />Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth rm <entity><br />Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth del <entity><br />Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth export {<entity>}<br />Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth get-or-create <entity> {<caps> [<caps>...]}<br />Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth caps <entity> <caps> [<caps>...]<br />Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth get <entity><br />Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth get-key <entity><br />Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth print-key <entity><br />Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth print_key <entity><br />Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth list<br />Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: Error EINVAL: invalid command</p>
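The failure mode above is an unbounded retry: the tool keeps re-asking the monitor for the admin key even though the monitor rejects the command every time. A hypothetical sketch (not the actual ceph-create-keys code) of a bounded retry that would surface the failure instead of looping forever:

```shell
# get_admin_key is a stand-in for the real "ceph ... auth get-or-create
# client.admin ..." call, which the monitor here always answers with
# "Error EINVAL: invalid command".
get_admin_key() {
    return 1
}

attempts=0
max_attempts=5
until get_admin_key; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge "$max_attempts" ]; then
        echo "Cannot get or create admin key: giving up after $attempts attempts" >&2
        break
    fi
    # the real tool sleeps here and retries indefinitely
done
```

With a retry cap like `max_attempts`, the deployment would fail fast with a diagnostic instead of filling syslog with the messages quoted above.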
sepia - Bug #17274 (Resolved): Access to sepia lab: Alexey Sheplyakov
https://tracker.ceph.com/issues/17274
2016-09-14T07:52:26Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p>1. I don't intend to run jobs manually<br />2. Username: asheplyakov<br />3. ssh key is attached (id_rsa.pub)<br />4. Hashed VPN password:<br />asheplyakov@asheplyakov.srt.mirantis.net wFW0ZgT4cNhKRAGXiUtevQ 1b11f0702b2db42a42aae6579737ece2caad3b80a8186b971686575cb76b3051</p>
Ceph - Backport #14231 (Rejected): hammer: ceph-disk fails to work with udev generated symlinks
https://tracker.ceph.com/issues/14231
2016-01-05T09:21:33Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<a name="description"></a>
<h3>description<a href="#description" class="wiki-anchor">¶</a></h3>
<p>~# ceph-deploy osd prepare node-9:/dev/sdc3:/dev/disk/by-id/ata-INTEL_SSDSC2BW240A4_PHDA410301812403GN-part3</p>
<p>fails here:</p>
<p>[node-9][WARNIN] DEBUG:ceph-disk:Journal /dev/disk/by-id/ata-INTEL_SSDSC2BW240A4_PHDA410301812403GN-part3 was previously prepared with ceph-disk. Reusing it.<br />[node-9][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk -i 2 /dev/disk/by-id/ata-INTEL_SSDSC<br />[node-9][WARNIN] Problem opening /dev/disk/by-id/ata-INTEL_SSDSC for reading! Error is 2.</p>
<a name="workaround"></a>
<h3>workaround<a href="#workaround" class="wiki-anchor">¶</a></h3>
<p>~# ceph-deploy osd prepare node-9:/dev/sdc3:$(ssh node-9 realpath /dev/disk/by-id/ata-INTEL_SSDSC2BW240A4_PHDA410301812403GN-part3)</p>
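The workaround resolves the udev-generated by-id symlink to the underlying device node before handing it to ceph-deploy, so ceph-disk never sees the symlink it mishandles. A minimal sketch of that resolution step, using made-up paths in a temporary directory (on a real node the symlink lives under /dev/disk/by-id and points at the partition node):

```shell
tmp=$(mktemp -d)
touch "$tmp/sdc3"                            # stand-in for the real partition node
ln -s "$tmp/sdc3" "$tmp/ata-EXAMPLE-part3"   # stand-in for the by-id symlink
journal=$(realpath "$tmp/ata-EXAMPLE-part3") # resolves the symlink to the node
echo "$journal"
rm -r "$tmp"
```

`realpath` (or `readlink -f`) canonicalizes the path, which is exactly what the `$(ssh node-9 realpath ...)` substitution in the workaround does on the target node.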