Activity
From 04/15/2021 to 05/14/2021
05/14/2021
- 08:57 PM Bug #49558: make check: run-rbd-unit-tests-61.sh (Failed)
- ...
- 03:48 PM Backport #50713 (In Progress): pacific: librbd: removing a snapshot with multiple peer can go int...
- 03:45 PM Backport #50712 (In Progress): octopus: librbd: removing a snapshot with multiple peer can go int...
05/13/2021
- 09:00 PM Bug #42248: rbd export-diff with --whole-object skips parent data for fast-diff enabled images
- The resulting patch appears to break snap-to-snap comparisons when the 'whole-object' option is appended. This option e...
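A minimal snap-to-snap repro sketch (image and snapshot names are hypothetical):
$ rbd snap create foo@snap1
# ... write some data to foo here ...
$ rbd snap create foo@snap2
$ rbd export-diff --from-snap snap1 --whole-object foo@snap2 foo.diff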
- 12:15 PM Bug #50525: FAILED ] DiffIterateTest/0.DiffIterate, where TypeParam = DiffIterateParams<false>
- /ceph/teuthology-archive/ideepika-2021-05-11_09:51:52-rbd-wip-49876-LUKS2-distro-basic-gibba/6109316/teuthology.log
05/12/2021
- 10:20 PM Bug #50787 (Resolved): rbd diff between two snapshots lists entire image content with 'whole-obje...
- Hi,
I believe that the fix introduced in response to https://tracker.ceph.com/issues/42248 has broken the 'whole-o...
- 12:31 PM Backport #50713: pacific: librbd: removing a snapshot with multiple peer can go into an infinite ...
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/41304
ceph-backport.sh versi...
- 11:27 AM Backport #50712: octopus: librbd: removing a snapshot with multiple peer can go into an infinite ...
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/41302
ceph-backport.sh versi...
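These comments are auto-generated by the backport helper script; its basic invocation is just the backport tracker issue number (a sketch from memory, check the script's built-in help for details):
$ ceph-backport.sh 50712   # roughly: cherry-picks the fix and opens the backport PR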
05/11/2021
- 11:24 AM Backport #50757 (In Progress): pacific: [pwl] "rbd status" output is incorrect
- 11:10 AM Backport #50757 (Resolved): pacific: [pwl] "rbd status" output is incorrect
- https://github.com/ceph/ceph/pull/41281
- 11:06 AM Bug #50613 (Pending Backport): [pwl] "rbd status" output is incorrect
05/10/2021
- 08:32 PM Bug #50675 (In Progress): [pwl ssd] cache larger than 4G will corrupt itself
- 07:10 PM Bug #50734 (Resolved): [pwl][test] make recovery.yaml actually trigger recovery
- ...
05/09/2021
- 08:08 PM Backport #50718 (In Progress): pacific: [pwl] cache can't be opened after a crash or power failure
- 07:55 PM Backport #50718 (Resolved): pacific: [pwl] cache can't be opened after a crash or power failure
- https://github.com/ceph/ceph/pull/41244
- 07:51 PM Bug #50668 (Pending Backport): [pwl] cache can't be opened after a crash or power failure
- 10:50 AM Backport #50716 (Rejected): pacific: Global config overrides do not apply to in-use images
- 10:50 AM Backport #50715 (Rejected): nautilus: Global config overrides do not apply to in-use images
- 10:50 AM Backport #50714 (Resolved): octopus: Global config overrides do not apply to in-use images
- https://github.com/ceph/ceph/pull/41763
- 10:48 AM Bug #48035 (Pending Backport): Global config overrides do not apply to in-use images
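For context, a global override of this kind would be set along these lines (a sketch; the option name is only an example):
$ rbd config global set global rbd_cache false
$ rbd config global get global rbd_cache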
- 10:45 AM Backport #50713 (Resolved): pacific: librbd: removing a snapshot with multiple peer can go into a...
- https://github.com/ceph/ceph/pull/41304
- 10:45 AM Backport #50712 (Resolved): octopus: librbd: removing a snapshot with multiple peer can go into a...
- https://github.com/ceph/ceph/pull/41302
- 10:44 AM Bug #50439 (Pending Backport): librbd: removing a snapshot with multiple peer can go into an infi...
- 02:55 AM Bug #48999: Data corruption with rbd_balance_parent_reads and rbd_balance_snap_reads set to true.
- Not sure if this is an issue with rbd snap handling or with RADOS or with user expectations around changing snapshots...
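For reference, the options in question would be enabled like this (a sketch; these are client-side librbd settings):
$ ceph config set client rbd_balance_snap_reads true
$ ceph config set client rbd_balance_parent_reads true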
05/07/2021
- 03:29 PM Bug #49592: "test_notify.py" is timing out in upgrade-clients:client-upgrade-nautilus-pacific-pa...
- ...
05/06/2021
- 11:57 AM Bug #50669: [pwl ssd] multiple crash / power failure recovery issues
- Mahati Chamarthy wrote:
> Ilya Dryomov wrote:
> > At the very least:
> >
> > - m_first_valid_entry and m_first_f...
- 11:20 AM Bug #50669: [pwl ssd] multiple crash / power failure recovery issues
- Ilya Dryomov wrote:
> At the very least:
>
> - m_first_valid_entry and m_first_free_entry aren't read from media ...
- 11:45 AM Bug #50675 (Resolved): [pwl ssd] cache larger than 4G will corrupt itself
- Unlike in rwl mode where head and tail pointers are log entry indexes and the number of log entries is limited to a m...
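A configuration sketch that would hit this (pool name, cache path, and the 8G size are illustrative):
$ rbd config pool set rbd rbd_plugins pwl_cache
$ rbd config pool set rbd rbd_persistent_cache_mode ssd
$ rbd config pool set rbd rbd_persistent_cache_path /mnt/pwl
$ rbd config pool set rbd rbd_persistent_cache_size 8G   # anything over 4G per the report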
- 09:35 AM Backport #50673 (In Progress): pacific: [test] qa/workunits/rbd: use bionic version of qemu-iotes...
- 09:20 AM Backport #50673 (Resolved): pacific: [test] qa/workunits/rbd: use bionic version of qemu-iotests ...
- https://github.com/ceph/ceph/pull/41195
- 09:23 AM Bug #50613 (Fix Under Review): [pwl] "rbd status" output is incorrect
- 09:15 AM Bug #50605 (Pending Backport): [test] qa/workunits/rbd: use bionic version of qemu-iotests for focal
05/05/2021
- 10:56 PM Bug #50668 (Fix Under Review): [pwl] cache can't be opened after a crash or power failure
- 08:54 PM Bug #50668 (Resolved): [pwl] cache can't be opened after a crash or power failure
- Opening the cache after a crash or power failure always fails with EINVAL because ImageCacheState::create_image_cache...
- 09:25 PM Bug #50670: [pwl ssd] head / tail pointer corruption
- I suspect this is related to AbstractWriteLog::check_allocation() logic and may be addressed by https://github.com/ce...
- 09:22 PM Bug #50670 (Resolved): [pwl ssd] head / tail pointer corruption
- After filling the ssd cache to capacity and maintaining the load (so that the cache is constantly retiring old and ge...
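One way to fill the cache and keep it retiring (a sketch; image name and sizes are illustrative):
$ rbd bench --io-type write --io-pattern rand --io-size 4K --io-total 20G foo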
- 09:05 PM Bug #50669 (Resolved): [pwl ssd] multiple crash / power failure recovery issues
- At the very least:
- m_first_valid_entry and m_first_free_entry aren't read from media (correct values are there b...
- 08:39 PM Bug #49504: qemu_dynamic_features.sh times out
- http://qa-proxy.ceph.com/teuthology/dis-2021-05-05_09:43:24-rbd-wip-dis-testing-distro-basic-smithi/6097180/teutholog...
- 08:36 PM Bug #50667 (New): [rbd-nbd] rbd-nbd.sh get_pid() failed to get pid for a newly mapped image
- http://qa-proxy.ceph.com/teuthology/dis-2021-05-04_20:50:55-rbd-wip-dis-testing-distro-basic-smithi/6095174/teutholog...
05/03/2021
- 10:54 PM Bug #49592: "test_notify.py" is timing out in upgrade-clients:client-upgrade-nautilus-pacific-pa...
- https://pulpito.ceph.com/teuthology-2021-04-26_01:20:02-upgrade-clients:client-upgrade-octopus-pacific-pacific-distro...
- 08:51 PM Bug #50207 (Resolved): packaging: require ceph-common for immutable object cache daemon
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:33 PM Bug #50618: qemu_xfstests_luks1 failed on xfstest 168
- ...
- 04:43 PM Bug #50618 (New): qemu_xfstests_luks1 failed on xfstest 168
- This is for the 16.2.2 release.
Run: https://pulpito.ceph.com/yuriw-2021-04-28_19:21:36-rbd-pacific-distro-basic-smithi...
- 04:19 PM Backport #50232 (Resolved): octopus: packaging: require ceph-common for immutable object cache da...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40666
m...
- 01:30 PM Backport #50615: pacific: [pwl ssd] crash when reading a split log entry (after the log wraps)
- Assigning to myself to hold the backport until ssd cache becomes testable.
- 01:30 PM Backport #50615 (Resolved): pacific: [pwl ssd] crash when reading a split log entry (after the lo...
- https://github.com/ceph/ceph/pull/43772
- 01:28 PM Bug #50589 (Pending Backport): [pwl ssd] crash when reading a split log entry (after the log wraps)
- 01:19 PM Feature #50614 (Resolved): [pwl] enhance "rbd status" output and periodically update it
- "Image cache state" section is very confusing because it is effectively a snapshot from the time the cache was loaded...
- 01:10 PM Bug #50613 (Resolved): [pwl] "rbd status" output is incorrect
- Several issues:
- the value of "present" field is lost to the value of "clean" -- if the cache is present but dirt...
- 12:14 PM Backport #50609: pacific: [pwl ssd] indefinite I/O hang if cache is filled to capacity
- Assigning to myself to hold the backport until ssd cache becomes testable.
- 12:10 PM Backport #50609 (Resolved): pacific: [pwl ssd] indefinite I/O hang if cache is filled to capacity
- https://github.com/ceph/ceph/pull/43772
- 12:06 PM Bug #50560 (Pending Backport): [pwl ssd] indefinite I/O hang if cache is filled to capacity
- 11:40 AM Bug #50605 (Fix Under Review): [test] qa/workunits/rbd: use bionic version of qemu-iotests for focal
05/02/2021
- 09:28 PM Bug #49504: qemu_dynamic_features.sh times out
- http://qa-proxy.ceph.com/teuthology/dis-2021-05-01_20:10:41-rbd-master-distro-basic-smithi/6088548/teuthology.log
- 09:26 PM Bug #50605 (Resolved): [test] qa/workunits/rbd: use bionic version of qemu-iotests for focal
- http://qa-proxy.ceph.com/teuthology/dis-2021-05-01_20:10:41-rbd-master-distro-basic-smithi/6088547/teuthology.log
...
04/29/2021
- 08:12 PM Bug #50589 (Fix Under Review): [pwl ssd] crash when reading a split log entry (after the log wraps)
- 07:52 PM Bug #50589 (Resolved): [pwl ssd] crash when reading a split log entry (after the log wraps)
- write_log_entries() will split a log entry at the end of the log; the remainder is written to the beginning at DATA_R...
- 07:40 AM Backport #50576 (Resolved): pacific: [pwl rwl] IO hang after a period of time I/O of different bl...
- https://github.com/ceph/ceph/pull/43772
- 07:35 AM Bug #49879 (Pending Backport): [pwl rwl] IO hang after a period of time I/O of different block sizes
04/28/2021
- 07:49 PM Bug #49876: [luks] sporadic failure in TestLibRBD.TestEncryptionLUKS2
- http://qa-proxy.ceph.com/teuthology/dis-2021-04-28_17:10:10-rbd-wip-dis-testing-distro-basic-smithi/6079915/teutholog...
- 04:13 PM Bug #50560 (Fix Under Review): [pwl ssd] indefinite I/O hang if cache is filled to capacity
- 01:45 PM Bug #50560 (Resolved): [pwl ssd] indefinite I/O hang if cache is filled to capacity
- 03:10 PM Bug #50522 (Fix Under Review): [rbd-nbd] default pool isn't picked up
- 01:41 PM Bug #49879 (Fix Under Review): [pwl rwl] IO hang after a period of time I/O of different block sizes
04/27/2021
- 10:43 AM Bug #48850: "FAILED ceph_assert(m_pending_ops == 0)" in TestImageReplayer/3.SnapshotUnprotect
- ...
04/26/2021
- 03:12 PM Bug #50525 (In Progress): FAILED ] DiffIterateTest/0.DiffIterate, where TypeParam = DiffIterateP...
- ...
- 12:57 PM Bug #50522 (Resolved): [rbd-nbd] default pool isn't picked up
- Assuming image "foo" in pool "rbd":
$ sudo rbd device map --device-type krbd foo
/dev/rbd0
$ sudo rbd device unm...
- 07:01 AM Bug #49504: qemu_dynamic_features.sh times out
- http://qa-proxy.ceph.com/teuthology/yuriw-2021-04-22_16:42:11-rbd-wip-yuri5-testing-2021-04-20-0819-pacific-distro-ba...
04/22/2021
- 06:01 AM Feature #18864: rbd export/import for consistent group
- Deepika Upadhyay wrote:
> @ShuaiChao Wang can you paste its link for the Pull Request?
OK, the link is as follows:
...
- 05:33 AM Feature #18864: rbd export/import for consistent group
- @ShuaiChao Wang can you paste its link for the Pull Request?
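For context, consistency groups and group snapshots already exist in the CLI; export/import would extend this (a sketch, names hypothetical):
$ rbd group create rbd/grp1
$ rbd group image add rbd/grp1 rbd/foo
$ rbd group snap create rbd/grp1@snap1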
- 04:48 AM Bug #49558: make check: run-rbd-unit-tests-61.sh (Failed)
- run-rbd-unit-tests-1.sh...
04/21/2021
- 06:15 PM Backport #50469 (Resolved): pacific: librbd/crypto: Error loading non first keyslot
- https://github.com/ceph/ceph/pull/49413
- 06:10 PM Bug #50461 (Pending Backport): librbd/crypto: Error loading non first keyslot
- 11:52 AM Bug #50461 (Fix Under Review): librbd/crypto: Error loading non first keyslot
- 11:13 AM Bug #50461 (Resolved): librbd/crypto: Error loading non first keyslot
- This bug will show up only if the user tries to use any keyslot other than 0.
The only case for that currently is ...
- 09:01 AM Backport #49768 (Resolved): nautilus: [rbd] the "trash mv" operation should support an optional "...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40675
m...
- 08:54 AM Bug #50439 (Fix Under Review): librbd: removing a snapshot with multiple peer can go into an infi...
- 06:25 AM Backport #50231 (Resolved): pacific: packaging: require ceph-common for immutable object cache da...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40665
m...
04/20/2021
- 12:28 PM Bug #50439: librbd: removing a snapshot with multiple peer can go into an infinite loop
- I can't edit the issue, but here is my PR: https://github.com/ceph/ceph/pull/40937.
- 12:24 PM Bug #50439 (Resolved): librbd: removing a snapshot with multiple peer can go into an infinite loop
- When you have multiple peers, librbd may try to unlink a snapshot from a peer it's not linked against and eventually ...
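A multi-peer setup of the kind described would look roughly like this (site and client names are hypothetical):
$ rbd mirror pool enable rbd image
$ rbd mirror pool peer add rbd client.mirror@site-b
$ rbd mirror pool peer add rbd client.mirror@site-c
$ rbd mirror image enable rbd/foo snapshot
$ rbd mirror image snapshot rbd/foo   # removing such a snapshot later can hit the loop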
04/19/2021
- 06:39 AM Bug #46875: make check: run-rbd-unit-tests: TestLibRBD.TestPendingAio: test_librbd.cc:4539: Failure
- Josh Durgin wrote:
> Happened again here https://jenkins.ceph.com/job/ceph-pull-requests/73665/consoleFull :
> [......
04/16/2021
- 03:47 PM Bug #46875: make check: run-rbd-unit-tests: TestLibRBD.TestPendingAio: test_librbd.cc:4539: Failure
- Happened again here https://jenkins.ceph.com/job/ceph-pull-requests/73665/consoleFull :...