Activity
From 06/16/2015 to 07/15/2015
07/15/2015
- 02:23 PM Tasks #12029 (Resolved): add qemu erasure coded pool cache tier test
- 01:34 AM Bug #12333 (Won't Fix): librbd doesn't notice if exclusive lock is broken
- librbd doesn't notice if another client breaks its exclusive lock, even if that client sends a notify that it acquire...
07/14/2015
- 03:54 PM Bug #12018 (Fix Under Review): rbd and pool quota do not go well together
- 03:54 PM Tasks #12029 (Fix Under Review): add qemu erasure coded pool cache tier test
- *master PR*: https://github.com/ceph/ceph-qa-suite/pull/490
- 02:32 PM Backport #12238 (In Progress): [ FAILED ] TestLibRBD.ExclusiveLockTransition
- 02:24 PM Backport #12241: [ FAILED ] TestObjectMap.InvalidateFlagInMemoryOnly
- Part of the series introduced by #11791; assigning to Jason Dillaman, who is the original author
- 02:20 PM Backport #12239 (In Progress): librbd/internal.cc: 1967: FAILED assert(watchers.size() == 1)
- 02:08 PM Backport #12234: segfault: test_rbd.TestClone.test_unprotect_with_children
- Again facing issues trying to backport this one; better done by the original author
- 01:59 PM Backport #12236 (In Progress): Possible crash while concurrently writing and shrinking an image
- 01:36 PM Backport #12237: A client opening an image mid-resize can result in the object map being invalidated
- Got a merge conflict; this backport would be better done by the original author
07/09/2015
- 03:41 PM Feature #11286: add general journal
- -*master PR*: https://github.com/ceph/ceph/pull/5186-
- 02:37 PM Tasks #12029 (In Progress): add qemu erasure coded pool cache tier test
07/08/2015
- 02:51 PM Backport #11852: librbd: new config option for legacy blocking aio behavior
- For the record, the initial description was: https://github.com/ceph/ceph/pull/4827 Just the first and last commit t...
- 02:34 PM Backport #12240 (In Progress): librbd/internal.cc: 1967: FAILED assert(watchers.size() == 1)
- 12:34 PM Backport #12240 (Resolved): librbd/internal.cc: 1967: FAILED assert(watchers.size() == 1)
- https://github.com/ceph/ceph/pull/5171
- 01:31 PM Backport #11853 (Resolved): librbd: new config option for legacy blocking aio behavior
- 01:31 PM Backport #11770 (Resolved): librbd: aio calls may block
- 12:35 PM Backport #12241 (Resolved): [ FAILED ] TestObjectMap.InvalidateFlagInMemoryOnly
- https://github.com/ceph/ceph/pull/5279
- 12:33 PM Backport #12239 (Resolved): librbd/internal.cc: 1967: FAILED assert(watchers.size() == 1)
- https://github.com/ceph/ceph/pull/5243
- 12:32 PM Backport #12238 (Resolved): [ FAILED ] TestLibRBD.ExclusiveLockTransition
- https://github.com/ceph/ceph/pull/5241
- 12:31 PM Backport #12237 (Resolved): A client opening an image mid-resize can result in the object map bei...
- https://github.com/ceph/ceph/pull/5279
- 12:30 PM Backport #12236 (Resolved): Possible crash while concurrently writing and shrinking an image
- https://github.com/ceph/ceph/pull/5318
- 12:30 PM Backport #12235 (Resolved): librbd: crash when two clients try to write to an exclusive locked image
- https://github.com/ceph/ceph/pull/5319
- 12:28 PM Backport #12234 (Resolved): segfault: test_rbd.TestClone.test_unprotect_with_children
- https://github.com/ceph/ceph/pull/5279
- 12:21 PM Backport #12109: qa: test rbd notify-based proxying across versions
- I swapped the comment and the description so that it looks more like the other backport tickets. Purely cosmetic.
- 12:20 PM Backport #12109 (In Progress): qa: test rbd notify-based proxying across versions
- Jason Dillaman wrote: I think this is doable with the existing teuthology upgrade setups. Similar to the client-upgra...
07/07/2015
- 03:50 PM Feature #11287 (In Progress): librbd: write to journal
- 03:50 PM Feature #12218 (Duplicate): RBD journaling feature support
- See #11287
- 12:02 AM Bug #12069 (Resolved): ENOSPC hidden by cache not detected by callers of flatten
07/06/2015
- 11:55 PM Bug #12214 (Pending Backport): [ FAILED ] TestObjectMap.InvalidateFlagInMemoryOnly
- Introduced by #11791 -- which is pending backport
- 11:54 PM Bug #12215 (Pending Backport): segfault: test_rbd.TestClone.test_unprotect_with_children
- Introduced by #11791 -- which is pending backport
- 02:22 PM Bug #12219 (Resolved): rbd-fuse should respect standard Ceph configuration overrides and search p...
- rbd-fuse appears to use /etc/ceph/ceph.conf even when $CEPH_CONF is set in the environment.
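The expected precedence can be sketched as follows (a minimal Python illustration only; `find_ceph_conf` and `DEFAULT_CONF` are hypothetical names, and the real search path is implemented inside librados, not rbd-fuse itself):

```python
import os

DEFAULT_CONF = "/etc/ceph/ceph.conf"

def find_ceph_conf(env=None) -> str:
    """Return the config path to use, letting $CEPH_CONF override the default."""
    env = os.environ if env is None else env
    # $CEPH_CONF must win over the compiled-in default path
    return env.get("CEPH_CONF") or DEFAULT_CONF
```

The bug report amounts to rbd-fuse behaving as if the second branch were taken unconditionally.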
- 01:52 PM Feature #12218 (Duplicate): RBD journaling feature support
- Add new RBD journaling feature and integrate with new journal library.
07/05/2015
- 03:10 PM Bug #12215 (Fix Under Review): segfault: test_rbd.TestClone.test_unprotect_with_children
- *next PR*: https://github.com/ceph/ceph/pull/5146
- 02:30 PM Bug #12215 (Resolved): segfault: test_rbd.TestClone.test_unprotect_with_children
- http://qa-proxy.ceph.com/teuthology/teuthology-2015-06-30_23:00:08-rbd-next-distro-basic-multi/956365/teuthology.log
...
- 02:50 PM Bug #12214 (Fix Under Review): [ FAILED ] TestObjectMap.InvalidateFlagInMemoryOnly
- *next PR*: https://github.com/ceph/ceph/pull/5145
- 02:26 PM Bug #12214 (Resolved): [ FAILED ] TestObjectMap.InvalidateFlagInMemoryOnly
- http://qa-proxy.ceph.com/teuthology/teuthology-2015-06-30_23:00:08-rbd-next-distro-basic-multi/956322/teuthology.log
...
07/02/2015
- 09:45 PM Feature #12113: rbd map should probably avoid creating duplicate devices for the same image
- I can have a look at this
- 05:04 PM Bug #12069 (Fix Under Review): ENOSPC hidden by cache not detected by callers of flatten
- *next PR*: https://github.com/ceph/ceph/pull/5131
- 03:50 PM Bug #12069: ENOSPC hidden by cache not detected by callers of flatten
- rbd_close returns a status that is hard-coded to 0. Will add an optional close method to librbd::Image.
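The hard-coded return can be illustrated with a toy Python model (purely illustrative; `Image`, `close_old`, and `close_new` are hypothetical names, and the real code is C++ in librbd):

```python
import errno

class Image:
    """Toy stand-in for librbd::Image; only models the flush-on-close path."""

    def _flush(self):
        # Pretend the cache flush hits ENOSPC (e.g. pool quota exceeded).
        raise OSError(errno.ENOSPC, "No space left on device")

    def close_old(self) -> int:
        # Current behavior: any flush error is swallowed, 0 is returned.
        try:
            self._flush()
        except OSError:
            pass
        return 0

    def close_new(self):
        # Proposed behavior: let the flush error propagate to the caller.
        self._flush()
```

With the old shape, callers of flatten that close the image implicitly can never observe the ENOSPC.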
- 03:42 PM Bug #12069 (In Progress): ENOSPC hidden by cache not detected by callers of flatten
06/30/2015
- 04:08 PM Bug #7693 (Closed): virsh domblkinfo fails with 'Bad file descriptor'
- Tracked in Bugzilla
06/26/2015
- 06:13 PM Bug #12165 (Pending Backport): [ FAILED ] TestLibRBD.ExclusiveLockTransition
- 06:12 PM Bug #12176 (Pending Backport): librbd/internal.cc: 1967: FAILED assert(watchers.size() == 1)
- 02:02 PM Bug #12176 (Fix Under Review): librbd/internal.cc: 1967: FAILED assert(watchers.size() == 1)
- *next PR*: https://github.com/ceph/ceph/pull/5090
- 02:02 PM Bug #12176: librbd/internal.cc: 1967: FAILED assert(watchers.size() == 1)
- librados bubbled up the following error:
librbd::ImageWatcher: 0x3bffcc0 image watch failed: 63381088, (110) Conne...
- 01:50 PM Bug #12176 (Resolved): librbd/internal.cc: 1967: FAILED assert(watchers.size() == 1)
- http://qa-proxy.ceph.com/teuthology/teuthology-2015-06-23_23:00:08-rbd-next-distro-basic-multi/947472/teuthology.log
06/25/2015
- 09:42 PM Bug #12165 (Fix Under Review): [ FAILED ] TestLibRBD.ExclusiveLockTransition
- 08:53 PM Bug #12165: [ FAILED ] TestLibRBD.ExclusiveLockTransition
- *next PR*: https://github.com/ceph/ceph/pull/5080
- 08:33 PM Bug #12165: [ FAILED ] TestLibRBD.ExclusiveLockTransition
- Appears to only occur when the RBD cache is disabled.
- 08:16 PM Bug #12165 (Resolved): [ FAILED ] TestLibRBD.ExclusiveLockTransition
- http://qa-proxy.ceph.com/teuthology/teuthology-2015-06-23_23:00:08-rbd-next-distro-basic-multi/947327/teuthology.log
...
06/24/2015
- 03:33 PM Bug #11791 (Pending Backport): A client opening an image mid-resize can result in the object map ...
- 03:33 PM Bug #11743 (Pending Backport): Possible crash while concurrently writing and shrinking an image
- 09:03 AM Bug #4243: rbd cli: usage confusing for snapshot operations
- Jason Dillaman wrote:
> Just a note: the rbd CLI help message includes the following:
>
> [...]
Thanks Jason ...
06/23/2015
- 06:23 PM Bug #11743 (Fix Under Review): Possible crash while concurrently writing and shrinking an image
- *next PR*: https://github.com/ceph/ceph/pull/5063
- 03:45 PM Bug #11587: inconsistency with rbd-replay utilities and --with-debug option
- Ken Dreyer wrote:
> And then make @/usr/bin/rbd-replay-prep@ no longer depend on the @--with-debug@ configure option...
- 03:27 PM Bug #11587: inconsistency with rbd-replay utilities and --with-debug option
- moving them to @ceph-common@ sounds like a good option to me. And then make @/usr/bin/rbd-replay-prep@ no longer depe...
- 02:16 PM Bug #11587 (Need More Info): inconsistency with rbd-replay utilities and --with-debug option
- 03:22 PM Bug #11791 (Fix Under Review): A client opening an image mid-resize can result in the object map ...
- *next PR*: https://github.com/ceph/ceph/pull/5061
- 02:09 PM Bug #4243: rbd cli: usage confusing for snapshot operations
- Just a note: the rbd CLI help message includes the following:...
- 08:50 AM Feature #12111 (In Progress): rbd: support size suffixes for all size-based options
- 08:50 AM Feature #12112 (In Progress): rbd: add --object-size option
06/22/2015
- 06:38 PM Feature #12113 (Resolved): rbd map should probably avoid creating duplicate devices for the same ...
- Perhaps the default should be that mapping an rbd image creates a new /dev/rbd* if one does not exist for the image, ...
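The proposed default can be sketched as idempotent mapping (a toy Python model; `rbd_map` and its bookkeeping dict are hypothetical, since real mappings are created by the kernel client):

```python
# Toy model of idempotent mapping for #12113; the bookkeeping here is
# hypothetical (real device allocation happens in the kernel rbd driver).
_mapped: dict = {}   # (pool, image) -> device path
_next_id = 0

def rbd_map(pool: str, image: str) -> str:
    """Return the existing /dev/rbd* device for an image, or allocate one."""
    global _next_id
    key = (pool, image)
    if key not in _mapped:
        _mapped[key] = f"/dev/rbd{_next_id}"
        _next_id += 1
    return _mapped[key]
```

Mapping the same image twice then yields the same device instead of a duplicate.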
- 06:31 PM Feature #12112 (Resolved): rbd: add --object-size option
- Object size can be specified when creating an image with the --order option, as a number of bits in the size.
An opt...
- 06:25 PM Feature #12111 (Resolved): rbd: support size suffixes for all size-based options
- These options should support them. They're all in bytes currently....
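The two feature requests above can be sketched together (illustrative Python only; `parse_size` and `object_size_from_order` are hypothetical helper names, not the actual rbd CLI code):

```python
# Hypothetical helpers for #12111/#12112; names are illustrative only.
SUFFIX_BITS = {"B": 0, "K": 10, "M": 20, "G": 30, "T": 40}

def parse_size(text: str) -> int:
    """Parse a size string such as '4096', '4K', or '1G' into bytes."""
    text = text.strip().upper()
    if text and text[-1] in SUFFIX_BITS:
        return int(text[:-1]) << SUFFIX_BITS[text[-1]]
    return int(text)

def object_size_from_order(order: int) -> int:
    """--order gives the object size as a power of two: 2**order bytes."""
    return 1 << order
```

For example, the default order of 22 corresponds to 4 MiB objects, so a `--object-size` of 4M would describe the same layout as `--order 22`.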
- 05:22 PM Bug #11537 (Pending Backport): librbd: crash when two clients try to write to an exclusive locked...
- Similar issue occurs in Hammer -- causes deadlock instead of a crash due to differences in locking addressed in PR4528
- 05:09 PM Bug #11579 (Resolved): Copyup operation for deep flatten cannot use current parent extents
- 05:07 PM Bug #12017: ERROR: test_rbd.TestImage.test_update_features
- Was this a failure from a teuthology test? If so, can you provide a link to the logs or the name of the suite? The ...
- 05:02 PM Bug #12035 (Duplicate): fsx detected zeroes instead of data
- Re-tested after #11579 merged and now unable to reproduce. Closing.
- 04:12 PM Bug #11938 (Resolved): Periodic failure of TestLibRBD.BlockingIO
- 03:20 PM Backport #12109: qa: test rbd notify-based proxying across versions
- *hammer PR*: https://github.com/ceph/ceph/pull/5046
- 03:14 PM Backport #12109 (Resolved): qa: test rbd notify-based proxying across versions
- https://github.com/ceph/ceph/pull/5046
- 03:12 PM Feature #11405 (Pending Backport): qa: test rbd notify-based proxying across versions
06/20/2015
- 12:30 PM Bug #4243 (Fix Under Review): rbd cli: usage confusing for snapshot operations
- 12:01 PM Bug #4243: rbd cli: usage confusing for snapshot operations
- Pull request : https://github.com/ceph/ceph/pull/5034
- 05:11 AM Bug #4243 (In Progress): rbd cli: usage confusing for snapshot operations
06/18/2015
- 05:20 AM Bug #12018: rbd and pool quota do not go well together
- Josh, thanks for pointing that out; I will rewrite the PR
- 12:37 AM Bug #12018: rbd and pool quota do not go well together
- I think we should handle this roughly the same way as we did with the cluster becoming full:
http://comments.gmane...
- 12:48 AM Bug #12069: ENOSPC hidden by cache not detected by callers of flatten
- Once #12018 is fixed this will block, but we should be able to detect other errors that arise due to bugs (generally ...
- 12:46 AM Bug #12069 (Resolved): ENOSPC hidden by cache not detected by callers of flatten
- Related to #12018, 'rbd flatten' doesn't detect the error flushing the cache, since it is implicit in closing the ima...
06/17/2015
- 02:26 PM Feature #11822 (Resolved): [rbd] support gb/tb units in rbd create/resize
- https://github.com/ceph/ceph/pull/4948, now merged.
- 08:09 AM Bug #12018: rbd and pool quota do not go well together
06/16/2015
- 10:30 PM Bug #11502 (Can't reproduce): "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
- If this happens again we can investigate. No trace of what went wrong from these logs.
- 10:22 PM Bug #11796 (Resolved): "rbd_get_flags() indicates object map is invalid" in smoke-master-distro-b...
- Seems to have been fixed. Reopen if it recurs.
- 06:10 PM Bug #12018: rbd and pool quota do not go well together
- I think this is a general issue with pool quota handling. It should be treated the same as cluster full handling, but...
- 07:17 AM Bug #12018: rbd and pool quota do not go well together
- the quota of rbd is 2G
[root@c8 ~]# ceph osd pool get-quota rbd
quotas for pool 'rbd':
max objects: N/A
max b...
- 07:11 AM Bug #12018: rbd and pool quota do not go well together
- can you run "ceph osd pool get-quota rbd" ?
- 05:54 PM Bug #11587: inconsistency with rbd-replay utilities and --with-debug option
- Should we move these utilities to ceph-common? They're generally useful rather than just for testing.
- 05:35 PM Bug #11743 (In Progress): Possible crash while concurrently writing and shrinking an image
- 05:34 PM Bug #12035: fsx detected zeroes instead of data
- Most likely caused by issue #11579
- 05:27 PM Bug #12035 (Duplicate): fsx detected zeroes instead of data
- This happened in several runs on next and master, with varying rbd cache and cache pool settings. Two such runs:
h...
- 12:52 PM Feature #11822 (In Progress): [rbd] support gb/tb units in rbd create/resize
- https://github.com/ceph/ceph/pull/4948
- 06:44 AM Bug #12017: ERROR: test_rbd.TestImage.test_update_features
- I used the master branch; after switching to the hammer branch, this failure is not observed.