Activity
From 05/04/2017 to 06/02/2017
06/02/2017
- 03:01 PM Bug #20168 (Resolved): IO work queue does not process failed lock request
- If the attempt to request the exclusive lock fails, the IO work queue will not attempt to recover. For example, in Je...
- 07:36 AM Backport #20154 (Resolved): kraken: Potential IO hang if image is flattened while read request is...
- https://github.com/ceph/ceph/pull/16184
- 07:36 AM Backport #20153 (Resolved): jewel: Potential IO hang if image is flattened while read request is ...
- https://github.com/ceph/ceph/pull/15464
- 07:36 AM Backport #20152 (Rejected): hammer: Potential IO hang if image is flattened while read request is...
- https://github.com/ceph/ceph/pull/15980
06/01/2017
- 05:19 PM Bug #18963 (Fix Under Review): rbd-mirror: forced failover does not function when peer is unreach...
- 12:29 AM Bug #18367: Zombie image snapshot problem
- Jason Dillaman wrote:
> @daolong: can you please provide the output from "rados -p volumes listomapvals rbd_header.2...
05/31/2017
- 11:18 PM Bug #20111 (Rejected): Python RBD: diff_iterate_cb() does not acquire GIL before calling user-pro...
- Indeed -- the callback cdef functions have the "with gil" suffix to tell Cython to re-acquire the GIL.
- 09:03 AM Bug #20111: Python RBD: diff_iterate_cb() does not acquire GIL before calling user-provided callb...
- Cython internally ensures (and acquires) the GIL in such functions. Please close the bug. I verified that the GIL is locked ...
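As a quick illustration of why this is safe (a minimal sketch; the conffile path, pool name "rbd", and image name "img" are assumptions, not from the ticket): a plain Python callback handed to diff_iterate() works because the Cython binding re-acquires the GIL before invoking it.

    import rados
    import rbd

    def on_extent(offset, length, exists):
        # Invoked once per allocated/changed extent; runs under the GIL.
        print("extent @%d len=%d exists=%s" % (offset, length, exists))

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("rbd")
    try:
        with rbd.Image(ioctx, "img") as image:
            # from_snapshot=None diffs against the beginning of the image.
            image.diff_iterate(0, image.size(), None, on_extent)
    finally:
        ioctx.close()
        cluster.shutdown()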
- 08:21 PM Support #20120: libvirt create volume io very slow
- Definitely not enough information to attempt to address.
- 09:22 AM Support #20120 (Closed): libvirt create volume io very slow
- Creating a volume with the ceph client for testing: bw=260442KB/s, iops=65110,
but creating the volume via cloudstack: bw=9523.2KB/s...
- 06:12 PM Bug #18367 (Need More Info): Zombie image snapshot problem
- @daolong: can you please provide the output from "rados -p volumes listomapvals rbd_header.2eb9e622cdd48" and "rados ...
- 01:05 PM Bug #20110: RBD aio_ API does not provide awaiting of any completion from a list.
- @Марк: can you provide an example? Your ticket description clearly states "next I want to wait until any of them is c...
- 12:42 PM Bug #20110: RBD aio_ API does not provide awaiting of any completion from a list.
- So, how should I wait for completion of the whole transfer?
What is the difference between rbd_read2() and rbd_aio_read2(...
- 12:08 PM Bug #20110 (Need More Info): RBD aio_ API does not provide awaiting of any completion from a list.
- @Марк: I am probably not understanding your goal, but since you can associate a callback with a completion, and said ...
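To make that suggestion concrete, here is a minimal sketch (the conffile path, pool "rbd", image "img", chunk size, and the queue-based hand-off are illustrative assumptions): every aio_read() gets a callback that pushes its result onto a queue, so a single q.get() blocks until *any* in-flight read completes, and draining the queue waits for all of them.

    import queue

    import rados
    import rbd

    q = queue.Queue()

    def make_cb(index):
        def oncomplete(completion, data):
            # Runs in librbd's callback thread; data is None if the read failed.
            q.put((index, completion.get_return_value(), data))
        return oncomplete

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("rbd")
    try:
        with rbd.Image(ioctx, "img") as image:
            chunk = 4 * 1024 * 1024
            for i in range(10):
                image.aio_read(i * chunk, chunk, make_cb(i))
            for _ in range(10):  # a single q.get() alone waits for "any"
                index, ret, _data = q.get()
                print("chunk %d finished, ret=%d" % (index, ret))
    finally:
        ioctx.close()
        cluster.shutdown()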
- 09:45 AM Documentation #20119: Documentation of Python RBD API does not say that aio_* functions call thei...
- Also, it does not say that the `data` argument may be None, which signals a failed read operation (I'm not sure, figured ou...
- 09:35 AM Documentation #20119: Documentation of Python RBD API does not say that aio_* functions call thei...
- also, exceptions are silently ignored from these callbacks!
- 09:19 AM Documentation #20119 (Closed): Documentation of Python RBD API does not say that aio_* functions ...
- Documentation of the Python RBD API does not say that aio_* functions call their callbacks in a DIFFERENT (dummy) thread
...
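Given those three pitfalls (the callback runs in a non-main thread, data may be None on a failed read, and exceptions raised inside the callback are silently dropped), a defensive completion callback might look like this sketch; handle() is a hypothetical application function:

    import logging

    def safe_oncomplete(completion, data):
        try:
            if data is None:  # the read failed
                logging.error("aio_read failed: ret=%d",
                              completion.get_return_value())
                return
            handle(data)  # hypothetical hand-off to application code
        except Exception:
            # librbd silently ignores exceptions from callbacks, so log here.
            logging.exception("error in aio completion callback")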
05/30/2017
- 10:13 PM Bug #20111 (Rejected): Python RBD: diff_iterate_cb() does not acquire GIL before calling user-pro...
- rbd_diff_iterate2() is called with the GIL released. The callback it invokes must acquire the GIL.
Bug was not detected since Cy...
- 09:37 PM Bug #20110 (Closed): RBD aio_ API does not provide awaiting of any completion from a list.
- Suppose I want to copy an RBD image in 10 parallel streams. Well, I can run 10 aio_read() functions and associate them w...
05/29/2017
- 07:49 AM Feature #18430 (In Progress): Transparently support migrating images with minimal/zero downtime
05/26/2017
- 02:12 PM Bug #19832 (Pending Backport): Potential IO hang if image is flattened while read request is in-f...
- 02:12 PM Bug #19962 (Resolved): Discard related IO should skip op if object map marks object as non-existent
05/24/2017
05/23/2017
- 01:22 PM Bug #19962 (Fix Under Review): Discard related IO should skip op if object map marks object as no...
- PR: https://github.com/ceph/ceph/pull/15239
- 10:47 AM Bug #20054: librbd memory overhead when used with KVM
- Sorry, it looks like I had some formatting issues there. Here is the overhead table again:...
- 10:44 AM Bug #20054 (Resolved): librbd memory overhead when used with KVM
- Hi,
we are running a jewel ceph cluster which serves RBD volumes for our KVM
virtual machines. Recently we noticed...
- 10:17 AM Bug #19832 (Fix Under Review): Potential IO hang if image is flattened while read request is in-f...
- PR: https://github.com/ceph/ceph/pull/15234
05/22/2017
- 12:51 PM Bug #19970 (Resolved): Reduce the potential for erroneous blacklisting due to release lock race
- 10:29 AM Backport #20023 (Resolved): jewel: rbd-mirror replay fails on attempting to reclaim data to local...
- https://github.com/ceph/ceph/pull/15488
- 10:29 AM Backport #20022 (Resolved): kraken: rbd-mirror replay fails on attempting to reclaim data to loca...
- https://github.com/ceph/ceph/pull/15486
- 10:28 AM Backport #20017 (Resolved): jewel: rbd-nbd: kernel reported invalid device size (0, expected 1073...
- https://github.com/ceph/ceph/pull/15463
- 10:28 AM Backport #20016 (Rejected): kraken: rbd-nbd: kernel reported invalid device size (0, expected 107...
- 10:28 AM Backport #20009 (Rejected): jewel: rbd-mirror: admin socket path names collision
- 10:28 AM Backport #20008 (Rejected): kraken: rbd-mirror: admin socket path names collision
- 10:28 AM Backport #20005 (Rejected): kraken: Lock release requests not honored after watch is re-acquired
05/19/2017
- 12:32 AM Bug #19970 (Fix Under Review): Reduce the potential for erroneous blacklisting due to release loc...
- *PR*: https://github.com/ceph/ceph/pull/15162
05/18/2017
05/17/2017
- 08:50 PM Bug #19970 (Resolved): Reduce the potential for erroneous blacklisting due to release lock race
- There is a potential race when a client attempts to acquire the exclusive lock. If the current lock owner closes the ...
- 08:45 PM Bug #18963: rbd-mirror: forced failover does not function when peer is unreachable
- *PR*: https://github.com/ceph/ceph/pull/15140
- 03:43 PM Bug #16502 (Need More Info): fio ended up with soft lockup - CPU#1 stuck for 22s!
- AFAIK this could happen for a bunch of reasons, including an unresponsive cluster.
- 12:48 PM Bug #19962 (Resolved): Discard related IO should skip op if object map marks object as non-existent
- The discard-related ops should queue their completion if the object map is enabled and it detects that the object doe...
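A hypothetical sketch of that behaviour in simplified Python (not librbd's actual C++ code; the state values and helper names are assumptions): the discard can complete immediately when the object map already records the backing object as non-existent.

    OBJECT_NONEXISTENT = 0  # simplified object-map states
    OBJECT_EXISTS = 1

    def discard_object(object_map, object_no, do_discard):
        # Skip the op entirely when the object map says nothing is on disk.
        if object_map is not None and object_map[object_no] == OBJECT_NONEXISTENT:
            return 0
        return do_discard(object_no)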
- 09:14 AM Backport #19957 (Resolved): jewel: rbd: Lock release requests not honored after watch is re-acquired
- https://github.com/ceph/ceph/pull/17385
05/16/2017
- 04:40 PM Bug #19929 (Pending Backport): Lock release requests not honored after watch is re-acquired
- 03:17 PM Bug #19636: upgrade:client-upgrade/{hammer,jewel}-client-x/rbd failing in kraken 11.2.1 integrati...
- Seems similar in http://pulpito.ceph.com/teuthology-2017-05-13_05:45:02-upgrade:client-upgrade-kraken-distro-basic-sm...
- 03:12 PM Bug #19942 (Duplicate): "[ FAILED ] TestLibRBD.Metadata" in upgrade:client-upgrade-kraken-distr...
- Run: http://pulpito.ceph.com/teuthology-2017-05-13_05:45:02-upgrade:client-upgrade-kraken-distro-basic-smithi/
Job: ...
05/15/2017
- 04:13 PM Bug #19929 (Fix Under Review): Lock release requests not honored after watch is re-acquired
- *PR*: https://github.com/ceph/ceph/pull/15093
- 02:35 PM Bug #19929 (Resolved): Lock release requests not honored after watch is re-acquired
- After the watch fails, the lock owner client id is cleared and the re-acquire process is started. If the re-acquire i...
- 12:44 PM Bug #19907 (Pending Backport): rbd-mirror: admin socket path names collision
05/11/2017
- 03:17 PM Bug #19863 (Resolved): bluestore deferred write crc changes before write
- 11:31 AM Bug #19907 (Fix Under Review): rbd-mirror: admin socket path names collision
- PR: https://github.com/ceph/ceph/pull/15048
- 10:07 AM Bug #19907 (Resolved): rbd-mirror: admin socket path names collision
- For every pool replayer rbd-mirror initializes separate local and remote ceph contexts. During initialization they us...
- 11:30 AM Feature #17489 (Resolved): [iscsi]: add support for librbd via LIO TCMU userspace passthrough
- tcmu-runner now has support for librbd: https://github.com/open-iscsi/tcmu-runner/blob/master/rbd.c
- 09:39 AM Feature #17489: [iscsi]: add support for librbd via LIO TCMU userspace passthrough
- Is there a related commit or PR that could be linked here?
05/10/2017
- 06:14 PM Bug #19875 (Fix Under Review): rbd osd ops repeat alloc hint
- PR: https://github.com/ceph/ceph/pull/15037
- 06:06 PM Bug #19875 (In Progress): rbd osd ops repeat alloc hint
- 02:29 PM Bug #19897: rbd may hang at 99% when removing a clone image
- Note: op work threads are currently hard-coded to 1.
- 09:35 AM Bug #19897 (Fix Under Review): rbd may hang at 99% when removing a clone image
- 08:09 AM Bug #19897: rbd may hang at 99% when removing a clone image
- https://github.com/ceph/ceph/pull/15024
- 06:22 AM Bug #19897: rbd may hang at 99% when removing a clone image
- these are the logs with tp=15 enabled:
root@node1:jintang$ rbd rm test_pool/test_child
2017-05-10 10:37:43.889802 7...
- 06:16 AM Bug #19897 (Duplicate): rbd may hang at 99% when removing a clone image
- Prerequisite: rbd_op_threads is 3 and rbd_cache is disabled.
When rbd removes a clone image, it is possible that rbd...
05/09/2017
- 04:01 PM Bug #19889: rbd/compatibility: rbd import fails with Jewel client, Kraken OSDs (breaks rolling up...
- OK, any clues as to why this would only occur with the combination of a Jewel client and a Kraken cluster?
- 03:59 PM Bug #19889: rbd/compatibility: rbd import fails with Jewel client, Kraken OSDs (breaks rolling up...
- For some reason, the calls to update the object map never completed (even though they had worked a few milliseconds b...
- 03:49 PM Bug #19889 (In Progress): rbd/compatibility: rbd import fails with Jewel client, Kraken OSDs (bre...
- 01:05 PM Bug #19889: rbd/compatibility: rbd import fails with Jewel client, Kraken OSDs (breaks rolling up...
- I should add (on the last comment) that I only saw this deletion issue with one image out of three where import previ...
- 01:01 PM Bug #19889: rbd/compatibility: rbd import fails with Jewel client, Kraken OSDs (breaks rolling up...
- Additional update: good news and bad news.
The good news is that upgrading the client to Kraken fixes @rbd import@...
- 12:51 PM Bug #19889 (Closed): rbd/compatibility: rbd import fails with Jewel client, Kraken OSDs (breaks r...
- Client:...
- 03:42 PM Bug #19863 (Fix Under Review): bluestore deferred write crc changes before write
- *PR*: https://github.com/ceph/ceph/pull/15017
- 03:33 AM Bug #19863: bluestore deferred write crc changes before write
- It's an object-map object. The block in question looks like this:...
- 04:31 AM Subtask #18786 (In Progress): rbd-mirror A/A: create simple image distribution policy
05/08/2017
- 11:41 PM Bug #19871 (Pending Backport): rbd-nbd: kernel reported invalid device size (0, expected 1073741824)
- 09:39 PM Backport #17843 (New): jewel: object-map: batch updates during trim operation
- 06:08 PM Bug #19863 (In Progress): bluestore deferred write crc changes before write
- Both instances were rbd. Trying to reproduce with some additional logging that hexdumps the buffers.
- 02:57 PM Bug #19863: bluestore deferred write crc changes before write
- /a/sage-2017-05-05_22:43:52-rbd-wip-sage-testing---basic-smithi/1105005
05/05/2017
- 09:07 PM Bug #18938: Unable to build 11.2.0 under i686
- Hello Kefu,
I applied https://github.com/ceph/ceph/pull/14891.patch to ceph 11.2.0 tarball and that doesn't fix bu...
- 06:41 PM Bug #17195 (Resolved): There seems to be a thread waiting indefinitely in krbd.cc
- 05:46 PM Bug #19875: rbd osd ops repeat alloc hint
- Optimize the copy-up logic so that if multiple object requests are queued for the same backing object the hints shoul...
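A hypothetical illustration of that optimization in plain Python (not librbd's actual code; the tuple-based op encoding is an assumption): only the first request queued for a given backing object carries the set-alloc-hint.

    def build_ops(writes, object_size=4194304, write_size=4194304):
        """writes: list of (object_name, offset, data) tuples."""
        hinted = set()
        ops = []
        for obj, off, data in writes:
            if obj not in hinted:
                # Hint once per object instead of once per write request.
                ops.append(("set-alloc-hint", obj, object_size, write_size))
                hinted.add(obj)
            ops.append(("write", obj, off, data))
        return ops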
- 05:31 PM Bug #19875 (Resolved): rbd osd ops repeat alloc hint
- [set-alloc-hint object_size 4194304 write_size 4194304,write 0~565760,set-alloc-hint object_size 4194304 write_size 4...
- 02:47 PM Backport #19873 (In Progress): jewel: [rbd-mirror] failover and failback of unmodified image resu...
- 12:50 PM Backport #19873 (Resolved): jewel: [rbd-mirror] failover and failback of unmodified image results...
- https://github.com/ceph/ceph/pull/14977
- 02:46 PM Bug #19871 (Fix Under Review): rbd-nbd: kernel reported invalid device size (0, expected 1073741824)
- PR: https://github.com/ceph/ceph/pull/14976
- 01:30 PM Bug #19871 (In Progress): rbd-nbd: kernel reported invalid device size (0, expected 1073741824)
- 10:35 AM Bug #19871 (Resolved): rbd-nbd: kernel reported invalid device size (0, expected 1073741824)
- Before NBD_DO_IT, /sys/block/nbdX/size reports 0,
so check_device_size failed (see the sketch below).
log:
rbd-nbd: kernel reporte...
- 12:50 PM Backport #19872 (In Progress): kraken: [rbd-mirror] failover and failback of unmodified image res...
- 12:50 PM Backport #19872 (Resolved): kraken: [rbd-mirror] failover and failback of unmodified image result...
- https://github.com/ceph/ceph/pull/14974
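An aside on the rbd-nbd size check in #19871 above: the size the kernel reports for an nbd device can be read from sysfs, where /sys/block/nbdX/size counts 512-byte sectors; a minimal sketch (the device name "nbd0" is an assumption):

    def nbd_size_bytes(dev="nbd0"):
        # Reports 0 until the device is connected -- the root cause here was
        # running the check before NBD_DO_IT had set the size.
        with open("/sys/block/%s/size" % dev) as f:
            return int(f.read()) * 512

    print(nbd_size_bytes())  # expect 1073741824 for a 1 GiB image once mapped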
- 12:27 PM Bug #19858 (Pending Backport): [rbd-mirror] failover and failback of unmodified image results in ...
- 12:28 AM Bug #19858 (Fix Under Review): [rbd-mirror] failover and failback of unmodified image results in ...
- *PR*: https://github.com/ceph/ceph/pull/14963
05/04/2017
- 10:21 PM Bug #19863 (Resolved): bluestore deferred write crc changes before write
- write is set up......
- 06:46 PM Bug #19811 (Pending Backport): rbd-mirror replay fails on attempting to reclaim data to local sit...
- 01:39 AM Bug #19811 (Fix Under Review): rbd-mirror replay fails on attempting to reclaim data to local sit...
- *PR*: https://github.com/ceph/ceph/pull/14945
- 12:34 AM Bug #19811 (In Progress): rbd-mirror replay fails on attempting to reclaim data to local site (LS...
- @Eric: thanks, I see the issue now.
- 04:30 PM Bug #19858 (Resolved): [rbd-mirror] failover and failback of unmodified image results in split-brain
- If an image is gracefully failed over and immediately failed back (demote A/promote B -> demote B/promote A), the ima...
- 03:14 PM Bug #18938 (Resolved): Unable to build 11.2.0 under i686
- 02:46 AM Bug #18938: Unable to build 11.2.0 under i686
- the reason why we only need to guard ...
- 08:00 AM Bug #19413: Cannot delete some snapshots after upgrade from jewel to kraken
- I'm sorry for the late reply. Thank you very much indeed, Jason! This solved my problem too.