Activity
From 08/06/2017 to 09/04/2017
09/03/2017
- 07:02 PM Bug #18315 (Resolved): Attempting to remove an image w/ incompatible features results in partial ...
- 07:02 PM Backport #18454 (Rejected): hammer: Attempting to remove an image w/ incompatible features result...
- Hammer is EOL.
- 07:02 PM Bug #18436 (Resolved): Qemu crash triggered by network issues
- 07:01 PM Backport #18774 (Rejected): hammer: Qemu crash triggered by network issues
- 06:58 PM Bug #21009 (Rejected): hammer:librbd: The qemu VMs hang occasionally after a snapshot is created.
- Hammer is EOL.
- 06:57 PM Bug #19832 (Resolved): Potential IO hang if image is flattened while read request is in-flight
- 06:57 PM Backport #20152 (Rejected): hammer: Potential IO hang if image is flattened while read request is...
- Hammer is EOL.
09/02/2017
- 03:17 PM Cleanup #17127 (Resolved): rbd-mirror: image sync should send NOCACHE advise flag
- 03:17 PM Backport #18137 (Resolved): jewel: rbd-mirror: image sync should send NOCACHE advise flag
- 03:16 PM Bug #20185 (Resolved): [cli] ensure positional arguments exist before casting
- 03:16 PM Backport #20265 (Resolved): jewel: [cli] ensure positional arguments exist before casting
09/01/2017
- 08:00 PM Bug #21217 (Resolved): "[ FAILED ] TestClsRbd.get_all_features" in upgrade:jewel-x-luminous
- Run: http://pulpito.ceph.com/teuthology-2017-09-01_04:23:18-upgrade:jewel-x-luminous-distro-basic-ovh/
Jobs: ['15858...
- 05:52 PM Feature #21216 (Closed): Method to release all rbd locks
- I ran into an issue when upgrading from Kraken to Luminous with Openstack. Existing volumes would have I/O errors, ho...
- 12:48 PM Support #20183: Ceph RBD image-feature
- Hi ,
rbd image can be mapped to a block device only if "--image-feature=layering" is set on the image.
This can ...
- 01:32 AM Backport #18704 (Fix Under Review): jewel: Prevent librbd from blacklisting the in-use librados c...
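The constraint described in Support #20183 above is a bitmask check: the kernel rbd client refuses to map an image whose feature bits include any it does not implement, which is why an image restricted to "--image-feature=layering" is mappable. A minimal Python sketch of that check (the feature bit values match librbd's features.h; the client-supported set shown is an illustrative assumption, since it varies by kernel version):

```python
# RBD image feature bits (as defined in librbd's features.h)
LAYERING       = 1 << 0
STRIPING_V2    = 1 << 1
EXCLUSIVE_LOCK = 1 << 2
OBJECT_MAP     = 1 << 3
FAST_DIFF      = 1 << 4
DEEP_FLATTEN   = 1 << 5
JOURNALING     = 1 << 6

# Assumed feature set of an older krbd client -- an illustration only;
# the real set depends on the kernel version in use.
KRBD_SUPPORTED = LAYERING

def unsupported_features(image_features, client_supported=KRBD_SUPPORTED):
    """Return the feature bits the client cannot handle (0 => mappable)."""
    return image_features & ~client_supported

# A layering-only image maps cleanly...
assert unsupported_features(LAYERING) == 0
# ...while the jewel-era default feature set does not.
defaults = LAYERING | EXCLUSIVE_LOCK | OBJECT_MAP | FAST_DIFF | DEEP_FLATTEN
assert unsupported_features(defaults) != 0
```

The same logic explains why `rbd create --image-feature layering` is the usual workaround for kernel mapping.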
08/31/2017
- 02:01 PM Backport #20515 (Fix Under Review): jewel: IO work queue does not process failed lock request
- 01:03 AM Backport #20515 (In Progress): jewel: IO work queue does not process failed lock request
- 01:13 PM Bug #18963: rbd-mirror: forced failover does not function when peer is unreachable
- @Nathan: it's a lot of code to attempt to backport which is why I yanked the backport label -- it's high risk.
- 12:02 PM Bug #18963: rbd-mirror: forced failover does not function when peer is unreachable
- @Jason, @Mykola: Is jewel backport feasible for this fix? Someone is requesting it.
- 01:02 AM Backport #19957 (Fix Under Review): jewel: rbd: Lock release requests not honored after watch is ...
- 12:42 AM Backport #20636 (Rejected): kraken: rbd-mirror: cluster watcher should ignore -EPERM errors again...
- Kraken is EoL
- 12:42 AM Backport #20514 (Rejected): kraken: IO work queue does not process failed lock request
- Kraken is EoL
- 12:41 AM Backport #20005 (Rejected): kraken: Lock release requests not honored after watch is re-acquired
- Kraken is EoL
08/30/2017
- 08:39 PM Documentation #20437 (Resolved): Convert downstream Ceph iSCSI documentation for upstream
- 03:48 PM Bug #21181 (Resolved): "'rbd/import_export.sh'" broken in upgrade:luminous-x:parallel-master
- Run: http://pulpito.ceph.com/yuriw-2017-08-29_17:01:34-upgrade:luminous-x:parallel-master-distro-basic-smithi/
Log: ...
- 01:53 PM Bug #20426: some generic options can not be passed by rbd-nbd
- Pan Liu wrote:
> Expect to fix it in this PR: https://github.com/ceph/ceph/pull/14135
A new clean fix opened: htt...
- 12:19 PM Bug #19413 (Resolved): Cannot delete some snapshots after upgrade from jewel to kraken
- 09:09 AM Bug #19413: Cannot delete some snapshots after upgrade from jewel to kraken
- Using an old client also fixed this issue for me. Glad this has been fixed in 11.2.1. Appreciate the info.
- 12:05 PM Bug #21179 (Resolved): [rbd] image-meta list does not return all entries
- If you have more than 64 key/value pairs on an image, the remainder will not be returned.
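The symptom in Bug #21179 above is the classic bug of a caller taking only the first page of a paginated listing. A minimal Python sketch of the difference between a one-shot listing and a proper pagination loop (the 64-entry page size comes from the report; the helper names and the simulated store are hypothetical):

```python
PAGE_SIZE = 64  # per-request limit mentioned in the report

def list_page(store, start_after, max_entries=PAGE_SIZE):
    """Simulate a server-side call returning at most max_entries
    key/value pairs with keys greater than start_after, in key order."""
    keys = sorted(k for k in store if k > start_after)
    return [(k, store[k]) for k in keys[:max_entries]]

def list_all(store):
    """Correct client loop: keep requesting pages until a short page."""
    result, last = [], ""
    while True:
        page = list_page(store, last)
        result.extend(page)
        if len(page) < PAGE_SIZE:
            return result
        last = page[-1][0]

meta = {"key%03d" % i: "val%d" % i for i in range(70)}
assert len(list_page(meta, "")) == 64  # one-shot listing drops 6 pairs
assert len(list_all(meta)) == 70       # pagination loop returns everything
```

With fewer than 65 pairs the two behave identically, which is why the bug only shows up on images carrying many metadata entries.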
08/29/2017
- 06:43 AM Bug #21009 (Fix Under Review): hammer:librbd: The qemu VMs hang occasionally after a snapshot is ...
08/28/2017
- 08:35 PM Feature #17356 (Resolved): object-map: batch updates during trim operation
- 08:35 PM Backport #17843 (Resolved): jewel: object-map: batch updates during trim operation
- 08:34 PM Bug #19811 (Resolved): rbd-mirror replay fails on attempting to reclaim data to local site (LS) f...
- 08:33 PM Backport #20023 (Resolved): jewel: rbd-mirror replay fails on attempting to reclaim data to local...
- 08:32 PM Bug #20175 (Resolved): test_librbd_api.sh fails in upgrade test
- 08:32 PM Backport #20532 (Resolved): jewel: test_librbd_api.sh fails in upgrade test
- 08:31 PM Bug #18888 (Resolved): rbd_clone_copy_on_read ineffective with exclusive-lock
- 08:31 PM Backport #19174 (Resolved): jewel: rbd_clone_copy_on_read ineffective with exclusive-lock
- 01:00 PM Bug #21008 (Need More Info): clone flatten is pending in 4% when it uses ec pool
- @Tang: the attached log shows that librbd is waiting for a response from the OSDs. Can you re-run with "--debug-rbd=2...
08/27/2017
- 04:11 PM Feature #17010 (Resolved): RBD default features should be negotiated with the OSD
- 04:11 PM Backport #19805 (Resolved): jewel: RBD default features should be negotiated with the OSD
- 04:09 PM Bug #19858 (Resolved): [rbd-mirror] failover and failback of unmodified image results in split-brain
- 04:09 PM Backport #19873 (Resolved): jewel: [rbd-mirror] failover and failback of unmodified image results...
- 04:08 PM Bug #19716 (Resolved): [test] test_notify.py: assert(not image.is_exclusive_lock_owner()) on line...
- 04:07 PM Backport #19795 (Resolved): jewel: [test] test_notify.py: assert(not image.is_exclusive_lock_owne...
- 03:52 PM Bug #19871 (Resolved): rbd-nbd: kernel reported invalid device size (0, expected 1073741824)
- 03:52 PM Backport #20016 (Rejected): kraken: rbd-nbd: kernel reported invalid device size (0, expected 107...
- 03:52 PM Backport #20016: kraken: rbd-nbd: kernel reported invalid device size (0, expected 1073741824)
- Too late for non-critical kraken backports.
- 03:52 PM Backport #20017 (Resolved): jewel: rbd-nbd: kernel reported invalid device size (0, expected 1073...
- 03:50 PM Backport #20153 (Resolved): jewel: Potential IO hang if image is flattened while read request is ...
- 12:44 PM Bug #21017 (Resolved): [dashboard] iSCSI summary page showing duplicate images
08/23/2017
- 10:57 PM Feature #21088 (Resolved): rbd-mirror: Allow a different data-pool to be used on the secondary cl...
- As mentioned in https://github.com/ceph/ceph/pull/17023#issuecomment-322392667, it would be nice to let the user spec...
- 06:57 PM Bug #19798 (Resolved): [test] remove hard-coded image name from TestLibRBD.Mirror
- 06:57 PM Backport #19808 (Resolved): jewel: [test] remove hard-coded image name from TestLibRBD.Mirror
- 06:57 PM Bug #19130 (Resolved): Enabling mirroring for a pool with clones may fail
- 06:57 PM Backport #19228 (Resolved): jewel: Enabling mirroring for a pool with clones may fail
- 06:36 AM Bug #21008: clone flatten is pending in 4% when it uses ec pool
- Tang Jin wrote:
> @Greg Farnum
> here is rbd flatten cmd hung log named "long_text_2017-08-22.txt"
what's the cl...
08/22/2017
- 02:14 AM Bug #21008: clone flatten is pending in 4% when it uses ec pool
- @Greg Farnum
here is rbd flatten cmd hung log named "long_text_2017-08-22.txt"
08/21/2017
- 04:13 PM Backport #21045 (Resolved): luminous: TestMirroringWatcher.ModeUpdated: periodic failure due to i...
- https://github.com/ceph/ceph/pull/17465
- 04:09 PM Bug #21008: clone flatten is pending in 4% when it uses ec pool
- Tang Jin wrote:
> @Greg Farnum
> can ceph rbd support this function (clone flatten from a ec pool)?
I'll try to ...
- 06:30 AM Bug #21008: clone flatten is pending in 4% when it uses ec pool
- @Greg Farnum
can ceph rbd support this function (clone flatten from a ec pool)?
- 11:04 AM Bug #21029 (Pending Backport): TestMirroringWatcher.ModeUpdated: periodic failure due to injected...
08/17/2017
- 10:40 PM Bug #21029 (Fix Under Review): TestMirroringWatcher.ModeUpdated: periodic failure due to injected...
- *PR*: https://github.com/ceph/ceph/pull/17078
- 10:28 PM Bug #21029 (Resolved): TestMirroringWatcher.ModeUpdated: periodic failure due to injected message...
- ...
- 10:26 PM Bug #20567 (Resolved): rbd-mirror does not support ec pools when the primary image uses an ec data pool.
- 07:47 PM Bug #20567 (Fix Under Review): rbd-mirror does not support ec pools when the primary image uses an ec d...
- *master PR*: https://github.com/ceph/ceph/pull/17073
- 04:17 PM Bug #20567 (Pending Backport): rbd-mirror does not support ec pools when the primary image uses an ec d...
- *luminous PR*: https://github.com/ceph/ceph/pull/17023
- 10:03 AM Documentation #15000: Need better documentation to describe RBD image features
- This one _definitely_ should find its way into the documentation! It is very hard to find any info on features, even ...
- 12:45 AM Bug #21017 (Fix Under Review): [dashboard] iSCSI summary page showing duplicate images
- *PR*: https://github.com/ceph/ceph/pull/17055
- 12:43 AM Bug #21017 (Resolved): [dashboard] iSCSI summary page showing duplicate images
- The unique id for service daemons was changed to "<hostname>:<pool>/<image>" recently to prevent duplicate service na...
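The "<hostname>:<pool>/<image>" id format quoted above can be sketched in a few lines; the dedup behaviour shown is an illustration of why a per-image id prevents duplicate entries (the function name and hostnames are hypothetical):

```python
def service_id(hostname, pool, image):
    """Build the unique id format quoted in the report:
    "<hostname>:<pool>/<image>"."""
    return "%s:%s/%s" % (hostname, pool, image)

# Re-registering the same daemon for the same image collapses into one
# id, while the same image exported from two hosts stays distinct.
ids = {
    service_id("gw1", "rbd", "disk1"),
    service_id("gw1", "rbd", "disk1"),  # duplicate registration
    service_id("gw2", "rbd", "disk1"),
}
assert ids == {"gw1:rbd/disk1", "gw2:rbd/disk1"}
```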
08/16/2017
- 11:23 AM Bug #21009: hammer:librbd: The qemu VMs hang occasionally after a snapshot is created.
- After investigating the backtrace and logs, we find a deadlock is possible in the following scenario:
1) OPs issue...
- 11:15 AM Bug #21009: hammer:librbd: The qemu VMs hang occasionally after a snapshot is created.
- Sorry for the repeat.
- 11:05 AM Bug #21009: hammer:librbd: The qemu VMs hang occasionally after a snapshot is created.
- After investigating the backtrace and logs, we find a deadlock is possible in the following scenario:
1) OPs issue...
- 09:00 AM Bug #21009 (Rejected): hammer:librbd: The qemu VMs hang occasionally after a snapshot is created.
- We're hosting hundreds of VMs with qemu and ceph as core infrastructure in the production environment. The ceph bas...
- 11:04 AM Bug #21008: clone flatten is pending in 4% when it uses ec pool
- If a clone is flattened when its pool is an ec pool, it hangs at 4%; checking the client log, it reads a data object from e...
- 08:41 AM Bug #21008 (Closed): clone flatten is pending in 4% when it uses ec pool
- If a clone is flattened when its pool is an ec pool, it stays pending at 4%; checking the client log, it reads a data object ...
08/14/2017
- 05:54 PM Bug #20567: rbd-mirror does not support ec pools when the primary image uses an ec data pool.
- I've submitted a PR that implements this: https://github.com/ceph/ceph/pull/17023
08/11/2017
- 07:29 PM Bug #20860 (Fix Under Review): [rbd-mirror] asok hook names not updated when image is renamed
- PR: https://github.com/ceph/ceph/pull/16998
08/09/2017
- 08:12 PM Bug #20954: test_admin_socket.sh qa test fails due to recent changes in vstart admin socket confi...
- luminous backport https://github.com/ceph/ceph/pull/16948 was merged prior to the v12.2.0 release.
- 02:33 PM Bug #20954 (Resolved): test_admin_socket.sh qa test fails due to recent changes in vstart admin s...
- 02:29 PM Bug #20954: test_admin_socket.sh qa test fails due to recent changes in vstart admin socket confi...
- Backport PR: https://github.com/ceph/ceph/pull/16946
- 02:06 PM Bug #20954 (Pending Backport): test_admin_socket.sh qa test fails due to recent changes in vstart...
- 07:00 AM Bug #20954 (Fix Under Review): test_admin_socket.sh qa test fails due to recent changes in vstart...
- PR: https://github.com/ceph/ceph/pull/16917
- 06:59 AM Bug #20954 (Resolved): test_admin_socket.sh qa test fails due to recent changes in vstart admin s...
- The way test_admin_socket.sh uses to determine the watcher admin socket is fragile and has started to fail after the ...
- 06:32 PM Backport #20964 (Resolved): luminous: [config] switch to new config option getter methods
- https://github.com/ceph/ceph/pull/17464
- 04:55 PM Cleanup #20737 (Pending Backport): [config] switch to new config option getter methods
- This will be a good candidate ticket for 12.2.1 or .2 (after some runtime to ensure I didn't break anything).
- 04:50 PM Cleanup #20737 (Resolved): [config] switch to new config option getter methods
- 12:45 PM Cleanup #20737 (Fix Under Review): [config] switch to new config option getter methods
- *PR*: https://github.com/ceph/ceph/pull/16737
- 10:33 AM Cleanup #20737: [config] switch to new config option getter methods
- Sorry, I didn't notice there was already a PR that fixed this problem.
I closed mine; here is the right PR: https:/...
- 09:29 AM Cleanup #20737: [config] switch to new config option getter methods
- PR: https://github.com/ceph/ceph/pull/16937
08/08/2017
- 06:15 PM Bug #20940: IO stall/hang with ceph 10.2.7 on Arch Linux
- @Jason indicated that he doesn't normally see ceph/rbd running under jemalloc. Digging into Arch's qemu package, I f...
- 06:15 PM Bug #20940 (Resolved): IO stall/hang with ceph 10.2.7 on Arch Linux
- Appears to be some sort of jemalloc / glibc issue on Arch -- running QEMU built under glibc malloc does not result in...
- 05:31 PM Bug #20940: IO stall/hang with ceph 10.2.7 on Arch Linux
- ...
- 05:30 PM Bug #20940: IO stall/hang with ceph 10.2.7 on Arch Linux
- That's odd; memory is not an issue on this host. It's got 32G of RAM with nothing else currently running on it.
$ ...
- 04:55 PM Bug #20940: IO stall/hang with ceph 10.2.7 on Arch Linux
- @Jamin: it looks like you have lots of threads hung inside jemalloc attempting to allocate memory. This thread in par...
- 04:04 PM Bug #20940: IO stall/hang with ceph 10.2.7 on Arch Linux
- I'm assuming you wanted me to attach to the qemu process.
- 02:35 AM Bug #20940: IO stall/hang with ceph 10.2.7 on Arch Linux
- One of the threads appears to hang:...
- 12:46 AM Bug #20940: IO stall/hang with ceph 10.2.7 on Arch Linux
- The only change being made is the ceph package version on the VM host.
I believe the cephx permissions can be rule...
- 12:06 AM Bug #20940 (Need More Info): IO stall/hang with ceph 10.2.7 on Arch Linux
- @Jamin: nothing unusual in that log from a librbd perspective -- it just looks like write requests were sent out but ...
- 05:53 PM Documentation #20701 (Resolved): [rbd-mirror] update mirroring docs for Luminous
- 04:49 PM Documentation #20701: [rbd-mirror] update mirroring docs for Luminous
- *PR*: https://github.com/ceph/ceph/pull/16908
- 03:26 PM Documentation #20701 (In Progress): [rbd-mirror] update mirroring docs for Luminous
- 03:26 PM Bug #20644 (Resolved): [rbd-mirror] assertion failure when mirrored pool is removed
- 03:25 PM Bug #20655 (Resolved): [rbd-mirror] demoting a primary image may result in the image being deleted
- 03:24 PM Bug #20918 (Resolved): TestInternal.FlattenNoEmptyObjects, TestInternal.TestCoR have mismatched o...
- 02:40 PM Bug #20918 (Pending Backport): TestInternal.FlattenNoEmptyObjects, TestInternal.TestCoR have mism...
- 01:58 PM Cleanup #20941 (Resolved): Disable 'rbd_localize_parent_reads' by default
- 12:45 PM Cleanup #20941 (Pending Backport): Disable 'rbd_localize_parent_reads' by default
- 12:02 PM Bug #20943 (Rejected): rbd: list of watchers not correspond to list of clients
- 11:51 AM Bug #20943: rbd: list of watchers not correspond to list of clients
- As of now that's the expected behavior -- the blacklist entry is expected to stay there and not go away (well, at lea...
- 09:29 AM Bug #20943 (Rejected): rbd: list of watchers not correspond to list of clients
- It is possible to remove a watcher shown by 'rbd status image', but
this does not prevent the removed client from using this r...
08/07/2017
- 10:55 PM Cleanup #20941 (Fix Under Review): Disable 'rbd_localize_parent_reads' by default
- *PR*: https://github.com/ceph/ceph/pull/16882
- 09:43 PM Cleanup #20941 (Resolved): Disable 'rbd_localize_parent_reads' by default
- The localization functionality is not well tested at the RADOS level. It would be safer to disable this by default.
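Until the default flip described above lands, the same effect can be applied per-client; a ceph.conf fragment (the option name is taken from the ticket title, and this placement under [client] is an assumption about how the deployment is configured):

```ini
[client]
# Disable parent-read localization: the replica-read path it relies on
# is not well tested at the RADOS level.
rbd localize parent reads = false
```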
- 09:22 PM Bug #20940 (Resolved): IO stall/hang with ceph 10.2.7 on Arch Linux
- On Arch Linux, starting with ceph 10.2.5 I noticed that VMs using ceph/rbd backed volumes experience a complete I/O s...
- 06:32 PM Bug #20918 (Fix Under Review): TestInternal.FlattenNoEmptyObjects, TestInternal.TestCoR have mism...
- *PR*: https://github.com/ceph/ceph/pull/16877/files
- 03:47 PM Bug #20918 (In Progress): TestInternal.FlattenNoEmptyObjects, TestInternal.TestCoR have mismatche...
- 02:08 PM Bug #20630 (Resolved): [test] rbd-mirror teuthology task doesn't start daemon in foreground mode
- 02:07 PM Backport #20635 (Resolved): jewel: [test] rbd-mirror teuthology task doesn't start daemon in fore...