Activity
From 04/03/2021 to 05/02/2021
05/02/2021
- 09:28 PM Bug #49504: qemu_dynamic_features.sh times out
- http://qa-proxy.ceph.com/teuthology/dis-2021-05-01_20:10:41-rbd-master-distro-basic-smithi/6088548/teuthology.log
- 09:26 PM Bug #50605 (Resolved): [test] qa/workunits/rbd: use bionic version of qemu-iotests for focal
- http://qa-proxy.ceph.com/teuthology/dis-2021-05-01_20:10:41-rbd-master-distro-basic-smithi/6088547/teuthology.log
...
04/29/2021
- 08:12 PM Bug #50589 (Fix Under Review): [pwl ssd] crash when reading a split log entry (after the log wraps)
- 07:52 PM Bug #50589 (Resolved): [pwl ssd] crash when reading a split log entry (after the log wraps)
- write_log_entries() will split a log entry at the end of the log; the remainder is written to the beginning at DATA_R...
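The wraparound described above is the classic circular-buffer split write. As a rough sketch only (the ring size, offsets, and function names below are illustrative, not the actual pwl ssd on-disk layout), the bug class in #50589 is a reader that forgets the second chunk:

```python
# Toy ring buffer illustrating a split log write and the matching read.
# All sizes and names are hypothetical; this is not librbd/pwl code.

RING_SIZE = 16        # total bytes in the toy ring
DATA_START = 0        # offset where wrapped data restarts

def write_wrapping(ring: bytearray, pos: int, entry: bytes) -> int:
    """Write `entry` at `pos`, wrapping past the end; return new position."""
    space = RING_SIZE - pos
    if len(entry) <= space:
        ring[pos:pos + len(entry)] = entry
        return (pos + len(entry)) % RING_SIZE
    # Split: the first chunk fills the space left at the end of the ring...
    ring[pos:RING_SIZE] = entry[:space]
    # ...and the remainder wraps to the beginning of the data area.
    remainder = entry[space:]
    ring[DATA_START:DATA_START + len(remainder)] = remainder
    return DATA_START + len(remainder)

def read_wrapping(ring: bytearray, pos: int, length: int) -> bytes:
    """Read `length` bytes from `pos`, reassembling a split entry.
    A reader that omits the second chunk (or reads past RING_SIZE)
    exhibits the crash-on-wrap failure mode described above."""
    space = RING_SIZE - pos
    if length <= space:
        return bytes(ring[pos:pos + length])
    tail = length - space
    return bytes(ring[pos:RING_SIZE]) + bytes(ring[DATA_START:DATA_START + tail])
```

For example, writing an 8-byte entry at position 12 of a 16-byte ring stores 4 bytes at the end and 4 at the start, and the read must stitch both chunks back together.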
- 07:40 AM Backport #50576 (Resolved): pacific: [pwl rwl] IO hang after a period of time I/O of different bl...
- https://github.com/ceph/ceph/pull/43772
- 07:35 AM Bug #49879 (Pending Backport): [pwl rwl] IO hang after a period of time I/O of different block sizes
04/28/2021
- 07:49 PM Bug #49876: [luks] sporadic failure in TestLibRBD.TestEncryptionLUKS2
- http://qa-proxy.ceph.com/teuthology/dis-2021-04-28_17:10:10-rbd-wip-dis-testing-distro-basic-smithi/6079915/teutholog...
- 04:13 PM Bug #50560 (Fix Under Review): [pwl ssd] indefinite I/O hang if cache is filled to capacity
- 01:45 PM Bug #50560 (Resolved): [pwl ssd] indefinite I/O hang if cache is filled to capacity
- 03:10 PM Bug #50522 (Fix Under Review): [rbd-nbd] default pool isn't picked up
- 01:41 PM Bug #49879 (Fix Under Review): [pwl rwl] IO hang after a period of time I/O of different block sizes
04/27/2021
- 10:43 AM Bug #48850: "FAILED ceph_assert(m_pending_ops == 0)" in TestImageReplayer/3.SnapshotUnprotect
- ...
04/26/2021
- 03:12 PM Bug #50525 (In Progress): FAILED ] DiffIterateTest/0.DiffIterate, where TypeParam = DiffIterateP...
- ...
- 12:57 PM Bug #50522 (Resolved): [rbd-nbd] default pool isn't picked up
- Assuming image "foo" in pool "rbd":
$ sudo rbd device map --device-type krbd foo
/dev/rbd0
$ sudo rbd device unm...
- 07:01 AM Bug #49504: qemu_dynamic_features.sh times out
- http://qa-proxy.ceph.com/teuthology/yuriw-2021-04-22_16:42:11-rbd-wip-yuri5-testing-2021-04-20-0819-pacific-distro-ba...
04/22/2021
- 06:01 AM Feature #18864: rbd export/import for consistent group
- Deepika Upadhyay wrote:
> @ShuaiChao Wang can you paste it's link for Pull Request?
OK, the link is as follows:
...
- 05:33 AM Feature #18864: rbd export/import for consistent group
- @ShuaiChao Wang can you paste it's link for Pull Request?
- 04:48 AM Bug #49558: make check: run-rbd-unit-tests-61.sh (Failed)
- run-rbd-unit-tests-1.sh...
04/21/2021
- 06:15 PM Backport #50469 (Resolved): pacific: librbd/crypto: Error loading non first keyslot
- https://github.com/ceph/ceph/pull/49413
- 06:10 PM Bug #50461 (Pending Backport): librbd/crypto: Error loading non first keyslot
- 11:52 AM Bug #50461 (Fix Under Review): librbd/crypto: Error loading non first keyslot
- 11:13 AM Bug #50461 (Resolved): librbd/crypto: Error loading non first keyslot
- This bug will show up only if the user tries to use any keyslot other than 0.
The only case for that currently is ...
- 09:01 AM Backport #49768 (Resolved): nautilus: [rbd] the "trash mv" operation should support an optional "...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40675
m...
- 08:54 AM Bug #50439 (Fix Under Review): librbd: removing a snapshot with multiple peer can go into an infi...
- 06:25 AM Backport #50231 (Resolved): pacific: packaging: require ceph-common for immutable object cache da...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40665
m...
04/20/2021
- 12:28 PM Bug #50439: librbd: removing a snapshot with multiple peer can go into an infinite loop
- I can't edit the issue, but here is my PR: https://github.com/ceph/ceph/pull/40937.
- 12:24 PM Bug #50439 (Resolved): librbd: removing a snapshot with multiple peer can go into an infinite loop
- When you have multiple peers, librbd may try to unlink a snapshot from a peer it's not linked against and eventually ...
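The failure mode described in #50439 can be sketched abstractly: a removal loop that treats "this peer was never linked" as a retryable failure never terminates. The names below are illustrative only, not the librbd API:

```python
# Illustrative sketch (not librbd code): unlinking a snapshot from its
# mirror peers during removal. The fix for the infinite-loop pattern is
# to treat a peer the snapshot is not linked against as already done,
# instead of retrying the unlink forever.

def remove_snapshot(linked_peers: set, all_peers: list) -> None:
    """Unlink the snapshot from every configured peer, idempotently."""
    for peer in all_peers:
        if peer not in linked_peers:
            continue          # never linked: a no-op, not a retryable error
        linked_peers.discard(peer)
```

With this guard, a snapshot linked only against one of several configured peers still converges to fully unlinked in a single pass.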
04/19/2021
- 06:39 AM Bug #46875: TestLibRBD.TestPendingAio: test_librbd.cc:4539: Failure or SIGSEGV
- Josh Durgin wrote:
> Happened again here https://jenkins.ceph.com/job/ceph-pull-requests/73665/consoleFull :
> [......
04/16/2021
- 03:47 PM Bug #46875: TestLibRBD.TestPendingAio: test_librbd.cc:4539: Failure or SIGSEGV
- Happened again here https://jenkins.ceph.com/job/ceph-pull-requests/73665/consoleFull :...
04/12/2021
- 03:18 PM Backport #49768: nautilus: [rbd] the "trash mv" operation should support an optional "--image-id"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40675
merged
- 09:50 AM Bug #47868 (Resolved): rbd-target-api / one of two service crash
- Fixed in ceph-iscsi 3.5.
- 09:46 AM Bug #47868: rbd-target-api / one of two service crash
- Mike, as Jason pointed out in your other ticket (https://github.com/ceph/ceph-iscsi/issues/221), a safety check has b...
04/08/2021
- 01:03 PM Backport #49768 (In Progress): nautilus: [rbd] the "trash mv" operation should support an optiona...
- 11:11 AM Backport #50231 (In Progress): pacific: packaging: require ceph-common for immutable object cache...
- 11:10 AM Backport #50231 (Resolved): pacific: packaging: require ceph-common for immutable object cache da...
- https://github.com/ceph/ceph/pull/40665
- 11:11 AM Backport #50232 (In Progress): octopus: packaging: require ceph-common for immutable object cache...
- 11:10 AM Backport #50232 (Resolved): octopus: packaging: require ceph-common for immutable object cache da...
- https://github.com/ceph/ceph/pull/40666
- 11:08 AM Bug #50207 (Pending Backport): packaging: require ceph-common for immutable object cache daemon
04/07/2021
- 11:13 AM Bug #50207 (Fix Under Review): packaging: require ceph-common for immutable object cache daemon
- 11:10 AM Bug #50207 (Resolved): packaging: require ceph-common for immutable object cache daemon
- systemd starts it with --setuser ceph --setgroup ceph. "ceph" user and group are created by ceph-common and won't be...
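The dependency described above presumably amounts to declaring ceph-common as a runtime requirement of the cache-daemon package, since privilege dropping fails if the "ceph" user and group do not exist yet. A sketch (unit path, binary name, and spec layout are illustrative and may differ from the actual packaging):

```ini
; Illustrative systemd unit fragment: the daemon drops privileges to the
; "ceph" user/group, which only exist once ceph-common is installed.
[Service]
ExecStart=/usr/bin/ceph-immutable-object-cache -f --setuser ceph --setgroup ceph
```

The packaging-side fix would then be a dependency declaration along the lines of `Requires: ceph-common` in the RPM spec (and the equivalent `Depends:` entry in the Debian control file) for the immutable object cache package.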