Activity
From 08/11/2017 to 09/09/2017
09/09/2017
- 08:24 AM Bug #21181 (Pending Backport): "'rbd/import_export.sh'" broken in upgrade:luminous-x:parallel-master
09/08/2017
- 08:45 PM Bug #21319 (In Progress): [cli] mirror "getter" commands will fail if mirroring has never been en...
- 07:15 PM Bug #21319 (Resolved): [cli] mirror "getter" commands will fail if mirroring has never been enabled
- They should filter out the -ENOENT error from the missing rbd_mirroring object:...
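The fix described above can be sketched as follows. This is a plain-Python model, not the actual librbd code: `mirror_mode_get`, `_read_mirroring_object`, and the dict-backed store are illustrative stand-ins for the call that reads the `rbd_mirroring` object from the pool.

```python
import errno

def _read_mirroring_object(store, pool):
    # Hypothetical stand-in for reading the rbd_mirroring object; a
    # pool where mirroring was never enabled has no such object.
    try:
        return store[pool]["rbd_mirroring"]
    except KeyError:
        raise OSError(errno.ENOENT, "rbd_mirroring object missing")

def mirror_mode_get(store, pool):
    # Filter out the -ENOENT error: a missing rbd_mirroring object
    # means "mirroring disabled", not a failure of the getter command.
    try:
        return _read_mirroring_object(store, pool)["mode"]
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise
        return "disabled"
```

With this filtering, the mirror "getter" commands succeed on a pool where mirroring was never enabled instead of surfacing -ENOENT to the user.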
- 07:55 AM Feature #21305 (Rejected): Just discard changed data since snapshot in "rbd rollback" command
- Rolling back images is painfully slow. Yes, I know, "rbd clone", but this creates another image and automatic-snapsho...
09/07/2017
- 04:30 PM Backport #21298 (Closed): jewel: [test] various teuthology errors
- Needed jewel backport of one commit to be done via https://github.com/ceph/ceph/pull/17402
- 04:24 PM Backport #21298 (Closed): jewel: [test] various teuthology errors
- 04:25 PM Backport #21299 (Resolved): luminous: [rbd-mirror] asok hook names not updated when image is renamed
- https://github.com/ceph/ceph/pull/17860
- 04:24 PM Bug #21251: [test] various teuthology errors
- Jewel backport just 98061bb3d7ce6309ddb04ea4d7e9d44a7ecd09c6 -> will be done via https://github.com/ceph/ceph/pull/17402
- 08:34 AM Bug #21251 (Pending Backport): [test] various teuthology errors
- 02:03 PM Bug #19413: Cannot delete some snapshots after upgrade from jewel to kraken
- @Lionel: the bug was fixed in kraken 11.2.1 but if you had previously created any snapshots on kraken 11.2.0 using je...
- 01:52 PM Bug #19413: Cannot delete some snapshots after upgrade from jewel to kraken
- Hi,
FYI, we just have exactly the same issue when upgrading from Kraken to Luminous (four 0xFF).
It was fixed by ...
- 01:00 PM Backport #21289 (In Progress): luminous: [rbd] image-meta list does not return all entries
- 08:44 AM Backport #21289 (Resolved): luminous: [rbd] image-meta list does not return all entries
- https://github.com/ceph/ceph/pull/17561
- 12:58 PM Backport #21288 (In Progress): luminous: [test] various teuthology errors
- 08:44 AM Backport #21288 (Resolved): luminous: [test] various teuthology errors
- https://github.com/ceph/ceph/pull/17560
- 12:57 PM Bug #21181 (Fix Under Review): "'rbd/import_export.sh'" broken in upgrade:luminous-x:parallel-master
- *PR*: https://github.com/ceph/ceph/pull/17559
- 12:49 PM Bug #21181 (In Progress): "'rbd/import_export.sh'" broken in upgrade:luminous-x:parallel-master
- 12:57 PM Backport #21277 (In Progress): luminous: [cls] metadata_list API function does not honor `max_ret...
- 07:35 AM Backport #21277 (Resolved): luminous: [cls] metadata_list API function does not honor `max_return...
- https://github.com/ceph/ceph/pull/17558
- 12:55 PM Backport #21269 (In Progress): luminous: some generic options can not be passed by rbd-nbd
- 12:54 PM Backport #21265 (In Progress): luminous: [cli] rename of non-existent image results in seg fault
- 12:46 PM Backport #21282 (In Progress): kraken: "[ FAILED ] TestClsRbd.get_all_features" in upgrade:jewe...
- 07:35 AM Backport #21282 (Resolved): kraken: "[ FAILED ] TestClsRbd.get_all_features" in upgrade:jewel-x...
- https://github.com/ceph/ceph/pull/17553
- 12:44 PM Backport #21281 (Closed): hammer: "[ FAILED ] TestClsRbd.get_all_features" in upgrade:jewel-x-l...
- Not applicable to hammer
- 07:35 AM Backport #21281 (Closed): hammer: "[ FAILED ] TestClsRbd.get_all_features" in upgrade:jewel-x-l...
- 12:38 PM Backport #21279 (In Progress): jewel: "[ FAILED ] TestClsRbd.get_all_features" in upgrade:jewel...
- 07:35 AM Backport #21279 (Resolved): jewel: "[ FAILED ] TestClsRbd.get_all_features" in upgrade:jewel-x-...
- https://github.com/ceph/ceph/pull/17552
- 12:37 PM Backport #21280 (In Progress): luminous: "[ FAILED ] TestClsRbd.get_all_features" in upgrade:je...
- 07:35 AM Backport #21280 (Resolved): luminous: "[ FAILED ] TestClsRbd.get_all_features" in upgrade:jewel...
- https://github.com/ceph/ceph/pull/17551
- 12:34 PM Bug #20860 (Pending Backport): [rbd-mirror] asok hook names not updated when image is renamed
- 08:44 AM Backport #21290 (Resolved): jewel: [rbd] image-meta list does not return all entries
- https://github.com/ceph/ceph/pull/20281
- 08:32 AM Bug #21179 (Pending Backport): [rbd] image-meta list does not return all entries
- 05:33 AM Bug #21217 (Pending Backport): "[ FAILED ] TestClsRbd.get_all_features" in upgrade:jewel-x-lumi...
- 05:30 AM Bug #21247 (Pending Backport): [cls] metadata_list API function does not honor `max_return` param...
09/06/2017
- 09:31 PM Feature #21273 (New): [rbd-mirror] delay bootstrap creation of image until after sync slot is free
- If mirroring is enabled on a pool with lots of images, it might take a long time for the sync process to copy them al...
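The requested behavior can be sketched with a fixed pool of sync slots; the class and names below are illustrative, not rbd-mirror's actual implementation.

```python
import threading

class SyncThrottler:
    # Illustrative sketch: hold back per-image bootstrap (which creates
    # the local image) until one of a fixed number of sync slots frees
    # up, instead of creating every local image immediately.
    def __init__(self, max_concurrent_syncs=5):
        self._slots = threading.BoundedSemaphore(max_concurrent_syncs)

    def sync_image(self, bootstrap, sync):
        with self._slots:      # block here until a sync slot is free
            bootstrap()        # only now create the local image
            sync()             # then copy the image data
```

This keeps the number of half-synced local images bounded by the slot count rather than by the total number of images in the pool.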
- 08:16 PM Bug #21179 (Fix Under Review): [rbd] image-meta list does not return all entries
- *PR*: https://github.com/ceph/ceph/pull/17532
- 07:25 PM Bug #21179 (In Progress): [rbd] image-meta list does not return all entries
- 07:43 PM Backport #21268 (In Progress): luminous: rbd map should warn when creating duplicate devices for ...
- 07:39 PM Backport #21268 (Rejected): luminous: rbd map should warn when creating duplicate devices for the...
- https://github.com/ceph/ceph/pull/17529
- 07:40 PM Backport #21269 (Resolved): luminous: some generic options can not be passed by rbd-nbd
- https://github.com/ceph/ceph/pull/17557
- 07:39 PM Backport #21266 (Resolved): jewel: [cli] rename of non-existent image results in seg fault
- https://github.com/ceph/ceph/pull/20280
- 07:39 PM Backport #21265 (Resolved): luminous: [cli] rename of non-existent image results in seg fault
- https://github.com/ceph/ceph/pull/17556
- 07:38 PM Bug #20580 (Pending Backport): rbd map should warn when creating duplicate devices for the same i...
- 06:00 PM Bug #20580 (Resolved): rbd map should warn when creating duplicate devices for the same image
- 07:14 AM Bug #21248 (Pending Backport): [cli] rename of non-existent image results in seg fault
- 06:29 AM Bug #20426 (Pending Backport): some generic options can not be passed by rbd-nbd
- 04:22 AM Documentation #17723 (Resolved): snapshot flatten sample name
- 02:46 AM Documentation #17723 (Closed): snapshot flatten sample name
- https://github.com/ceph/ceph/pull/17436
- 01:40 AM Bug #21217: "[ FAILED ] TestClsRbd.get_all_features" in upgrade:jewel-x-luminous
- Note: since the ceph_test_cls_rbd application is run from older releases against newer releases, this change needs to...
- 01:38 AM Bug #21217 (Fix Under Review): "[ FAILED ] TestClsRbd.get_all_features" in upgrade:jewel-x-lumi...
- *PR*: https://github.com/ceph/ceph/pull/17509
09/05/2017
- 08:50 PM Bug #21251 (Fix Under Review): [test] various teuthology errors
- *PR*: https://github.com/ceph/ceph/pull/17504
- 07:55 PM Bug #21251 (Resolved): [test] various teuthology errors
- Addressing the issues present under the following test run: http://pulpito.ceph.com/teuthology-2017-09-05_02:01:01-rb...
- 04:12 PM Bug #21248 (Fix Under Review): [cli] rename of non-existent image results in seg fault
- *PR*: https://github.com/ceph/ceph/pull/17502
- 04:08 PM Bug #21248 (In Progress): [cli] rename of non-existent image results in seg fault
- 04:08 PM Bug #21248 (Resolved): [cli] rename of non-existent image results in seg fault
- 03:24 PM Bug #21247 (Fix Under Review): [cls] metadata_list API function does not honor `max_return` param...
- *PR*: https://github.com/ceph/ceph/pull/17499
- 02:36 PM Bug #21247 (Resolved): [cls] metadata_list API function does not honor `max_return` parameter.
- This was broken under commit d3de6f5e and needs to be fixed.
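The intended semantics can be modeled as follows; this is a plain-Python sketch of a paginated listing that honors `max_return`, not the actual cls_rbd code.

```python
def metadata_list(metadata, start_after="", max_return=64):
    # Model of the intended semantics: return at most max_return
    # key/value pairs in key order, starting after start_after, so a
    # caller can page through an arbitrarily large metadata set.
    keys = sorted(k for k in metadata if k > start_after)
    return {k: metadata[k] for k in keys[:max_return]}
```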
- 09:13 AM Backport #21045 (In Progress): luminous: TestMirroringWatcher.ModeUpdated: periodic failure due t...
- 09:10 AM Backport #20964 (In Progress): luminous: [config] switch to new config option getter methods
09/03/2017
- 07:02 PM Bug #18315 (Resolved): Attempting to remove an image w/ incompatible features results in partial ...
- 07:02 PM Backport #18454 (Rejected): hammer: Attempting to remove an image w/ incompatible features result...
- Hammer is EOL.
- 07:02 PM Bug #18436 (Resolved): Qemu crash triggered by network issues
- 07:01 PM Backport #18774 (Rejected): hammer: Qemu crash triggered by network issues
- 06:58 PM Bug #21009 (Rejected): hammer:librbd: The qemu VMs hang occasionally after a snapshot is created.
- Hammer is EOL.
- 06:57 PM Bug #19832 (Resolved): Potential IO hang if image is flattened while read request is in-flight
- 06:57 PM Backport #20152 (Rejected): hammer: Potential IO hang if image is flattened while read request is...
- Hammer is EOL.
09/02/2017
- 03:17 PM Cleanup #17127 (Resolved): rbd-mirror: image sync should send NOCACHE advise flag
- 03:17 PM Backport #18137 (Resolved): jewel: rbd-mirror: image sync should send NOCACHE advise flag
- 03:16 PM Bug #20185 (Resolved): [cli] ensure positional arguments exist before casting
- 03:16 PM Backport #20265 (Resolved): jewel: [cli] ensure positional arguments exist before casting
09/01/2017
- 08:00 PM Bug #21217 (Resolved): "[ FAILED ] TestClsRbd.get_all_features" in upgrade:jewel-x-luminous
- Run: http://pulpito.ceph.com/teuthology-2017-09-01_04:23:18-upgrade:jewel-x-luminous-distro-basic-ovh/
Jobs: ['15858...
- 05:52 PM Feature #21216 (Closed): Method to release all rbd locks
- I ran into an issue when upgrading from Kraken to Luminous with Openstack. Existing volumes would have I/O errors, ho...
- 12:48 PM Support #20183: Ceph RBD image-feature
- Hi,
rbd image can be mapped to a block device only if "--image-feature=layering" is set on the image.
This can ...
- 01:32 AM Backport #18704 (Fix Under Review): jewel: Prevent librbd from blacklisting the in-use librados c...
08/31/2017
- 02:01 PM Backport #20515 (Fix Under Review): jewel: IO work queue does not process failed lock request
- 01:03 AM Backport #20515 (In Progress): jewel: IO work queue does not process failed lock request
- 01:13 PM Bug #18963: rbd-mirror: forced failover does not function when peer is unreachable
- @Nathan: it's a lot of code to attempt to backport which is why I yanked the backport label -- it's high risk.
- 12:02 PM Bug #18963: rbd-mirror: forced failover does not function when peer is unreachable
- @Jason, @Mykola: Is jewel backport feasible for this fix? Someone is requesting it.
- 01:02 AM Backport #19957 (Fix Under Review): jewel: rbd: Lock release requests not honored after watch is ...
- 12:42 AM Backport #20636 (Rejected): kraken: rbd-mirror: cluster watcher should ignore -EPERM errors again...
- Kraken is EoL
- 12:42 AM Backport #20514 (Rejected): kraken: IO work queue does not process failed lock request
- Kraken is EoL
- 12:41 AM Backport #20005 (Rejected): kraken: Lock release requests not honored after watch is re-acquired
- Kraken is EoL
08/30/2017
- 08:39 PM Documentation #20437 (Resolved): Convert downstream Ceph iSCSI documentation for upstream
- 03:48 PM Bug #21181 (Resolved): "'rbd/import_export.sh'" broken in upgrade:luminous-x:parallel-master
- Run: http://pulpito.ceph.com/yuriw-2017-08-29_17:01:34-upgrade:luminous-x:parallel-master-distro-basic-smithi/
Log: ...
- 01:53 PM Bug #20426: some generic options can not be passed by rbd-nbd
- Pan Liu wrote:
> Expect to fix it in this PR: https://github.com/ceph/ceph/pull/14135
A new clean fix opened: htt...
- 12:19 PM Bug #19413 (Resolved): Cannot delete some snapshots after upgrade from jewel to kraken
- 09:09 AM Bug #19413: Cannot delete some snapshots after upgrade from jewel to kraken
- Using an old client also fixed this issue for me. Glad this has been fixed in 11.2.1. Appreciate the info.
- 12:05 PM Bug #21179 (Resolved): [rbd] image-meta list does not return all entries
- If you have more than 64 key/value pairs on an image, the remainder will not be returned.
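A client-side paging loop avoids dropping the remainder; this is a hypothetical sketch where `fetch_page` stands in for whatever call returns one capped page of metadata, not the real rbd API.

```python
def list_all_metadata(fetch_page, page_size=64):
    # Hypothetical client-side paging loop: repeatedly request pages of
    # at most page_size entries (matching the 64-entry cap described
    # above) until a short page marks the end, so nothing is dropped.
    entries, last_key = {}, ""
    while True:
        page = fetch_page(start_after=last_key, max_return=page_size)
        entries.update(page)
        if len(page) < page_size:
            return entries
        last_key = max(page)
```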
08/29/2017
- 06:43 AM Bug #21009 (Fix Under Review): hammer:librbd: The qemu VMs hang occasionally after a snapshot is ...
08/28/2017
- 08:35 PM Feature #17356 (Resolved): object-map: batch updates during trim operation
- 08:35 PM Backport #17843 (Resolved): jewel: object-map: batch updates during trim operation
- 08:34 PM Bug #19811 (Resolved): rbd-mirror replay fails on attempting to reclaim data to local site (LS) f...
- 08:33 PM Backport #20023 (Resolved): jewel: rbd-mirror replay fails on attempting to reclaim data to local...
- 08:32 PM Bug #20175 (Resolved): test_librbd_api.sh fails in upgrade test
- 08:32 PM Backport #20532 (Resolved): jewel: test_librbd_api.sh fails in upgrade test
- 08:31 PM Bug #18888 (Resolved): rbd_clone_copy_on_read ineffective with exclusive-lock
- 08:31 PM Backport #19174 (Resolved): jewel: rbd_clone_copy_on_read ineffective with exclusive-lock
- 01:00 PM Bug #21008 (Need More Info): clone flatten is pending in 4% when it uses ec pool
- @Tang: the attached log shows that librbd is waiting for a response from the OSDs. Can you re-run with "--debug-rbd=2...
08/27/2017
- 04:11 PM Feature #17010 (Resolved): RBD default features should be negotiated with the OSD
- 04:11 PM Backport #19805 (Resolved): jewel: RBD default features should be negotiated with the OSD
- 04:09 PM Bug #19858 (Resolved): [rbd-mirror] failover and failback of unmodified image results in split-brain
- 04:09 PM Backport #19873 (Resolved): jewel: [rbd-mirror] failover and failback of unmodified image results...
- 04:08 PM Bug #19716 (Resolved): [test] test_notify.py: assert(not image.is_exclusive_lock_owner()) on line...
- 04:07 PM Backport #19795 (Resolved): jewel: [test] test_notify.py: assert(not image.is_exclusive_lock_owne...
- 03:52 PM Bug #19871 (Resolved): rbd-nbd: kernel reported invalid device size (0, expected 1073741824)
- 03:52 PM Backport #20016 (Rejected): kraken: rbd-nbd: kernel reported invalid device size (0, expected 107...
- 03:52 PM Backport #20016: kraken: rbd-nbd: kernel reported invalid device size (0, expected 1073741824)
- Too late for non-critical kraken backports.
- 03:52 PM Backport #20017 (Resolved): jewel: rbd-nbd: kernel reported invalid device size (0, expected 1073...
- 03:50 PM Backport #20153 (Resolved): jewel: Potential IO hang if image is flattened while read request is ...
- 12:44 PM Bug #21017 (Resolved): [dashboard] iSCSI summary page showing duplicate images
08/23/2017
- 10:57 PM Feature #21088 (Resolved): rbd-mirror: Allow a different data-pool to be used on the secondary cl...
- As mentioned in https://github.com/ceph/ceph/pull/17023#issuecomment-322392667, it would be nice to let the user spec...
- 06:57 PM Bug #19798 (Resolved): [test] remove hard-coded image name from TestLibRBD.Mirror
- 06:57 PM Backport #19808 (Resolved): jewel: [test] remove hard-coded image name from TestLibRBD.Mirror
- 06:57 PM Bug #19130 (Resolved): Enabling mirroring for a pool with clones may fail
- 06:57 PM Backport #19228 (Resolved): jewel: Enabling mirroring for a pool with clones may fail
- 06:36 AM Bug #21008: clone flatten is pending in 4% when it uses ec pool
- Tang Jin wrote:
> @Greg Farnum
> here is rbd flatten cmd hung log named "long_text_2017-08-22.txt"
what's the cl...
08/22/2017
- 02:14 AM Bug #21008: clone flatten is pending in 4% when it uses ec pool
- @Greg Farnum
here is rbd flatten cmd hung log named "long_text_2017-08-22.txt"
08/21/2017
- 04:13 PM Backport #21045 (Resolved): luminous: TestMirroringWatcher.ModeUpdated: periodic failure due to i...
- https://github.com/ceph/ceph/pull/17465
- 04:09 PM Bug #21008: clone flatten is pending in 4% when it uses ec pool
- Tang Jin wrote:
> @Greg Farnum
> can ceph rbd support this function (clone flatten from a ec pool)?
I'll try to ...
- 06:30 AM Bug #21008: clone flatten is pending in 4% when it uses ec pool
- @Greg Farnum
can ceph rbd support this function (clone flatten from a ec pool)?
- 11:04 AM Bug #21029 (Pending Backport): TestMirroringWatcher.ModeUpdated: periodic failure due to injected...
08/17/2017
- 10:40 PM Bug #21029 (Fix Under Review): TestMirroringWatcher.ModeUpdated: periodic failure due to injected...
- *PR*: https://github.com/ceph/ceph/pull/17078
- 10:28 PM Bug #21029 (Resolved): TestMirroringWatcher.ModeUpdated: periodic failure due to injected message...
- ...
- 10:26 PM Bug #20567 (Resolved): rbd-mirror do not support ec pools when the primary image use ec data pool.
- 07:47 PM Bug #20567 (Fix Under Review): rbd-mirror do not support ec pools when the primary image use ec d...
- *master PR*: https://github.com/ceph/ceph/pull/17073
- 04:17 PM Bug #20567 (Pending Backport): rbd-mirror do not support ec pools when the primary image use ec d...
- *luminous PR*: https://github.com/ceph/ceph/pull/17023
- 10:03 AM Documentation #15000: Need better documentation to describe RBD image features
- This one _definitely_ should find its way into the documentation! It is very hard to find any info on features, even ...
- 12:45 AM Bug #21017 (Fix Under Review): [dashboard] iSCSI summary page showing duplicate images
- *PR*: https://github.com/ceph/ceph/pull/17055
- 12:43 AM Bug #21017 (Resolved): [dashboard] iSCSI summary page showing duplicate images
- The unique id for service daemons was changed to "<hostname>:<pool>/<image>" recently to prevent duplicate service na...
08/16/2017
- 11:23 AM Bug #21009: hammer:librbd: The qemu VMs hang occasionally after a snapshot is created.
- After investigating the backtrace and logs, we find a deadlock is possible in the following scenario:
1) OPs issue...
- 11:15 AM Bug #21009: hammer:librbd: The qemu VMs hang occasionally after a snapshot is created.
- Sorry for the repeat.
- 11:05 AM Bug #21009: hammer:librbd: The qemu VMs hang occasionally after a snapshot is created.
- After investigating the backtrace and logs, we find a deadlock is possible in the following scenario:
1) OPs issue...
- 09:00 AM Bug #21009 (Rejected): hammer:librbd: The qemu VMs hang occasionally after a snapshot is created.
- We're hosting hundreds of VMs with qemu and ceph as core infrastructure in the production environment. The ceph bas...
- 11:04 AM Bug #21008: clone flatten is pending in 4% when it uses ec pool
- If flatten a clone when pool is ec pool, it will hang in 4%, and I check the client log, it read a data object from e...
- 08:41 AM Bug #21008 (Closed): clone flatten is pending in 4% when it uses ec pool
- If flatten a clone when pool is ec pool, it will be pending in 4%, and I check the client log, it read a data object ...
08/14/2017
- 05:54 PM Bug #20567: rbd-mirror do not support ec pools when the primary image use ec data pool.
- I've submitted a PR that implements this: https://github.com/ceph/ceph/pull/17023
08/11/2017
- 07:29 PM Bug #20860 (Fix Under Review): [rbd-mirror] asok hook names not updated when image is renamed
- PR: https://github.com/ceph/ceph/pull/16998