Activity
From 10/31/2021 to 11/29/2021
11/29/2021
- 11:21 AM Bug #52675 (Resolved): [rbd-mirror] unbreak one-way snapshot-based mirroring
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:16 AM Backport #52733 (Resolved): pacific: [rbd-mirror] unbreak one-way snapshot-based mirroring
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43315
m...
- 11:15 AM Backport #53028 (Resolved): pacific: rbd diff between two snapshots lists entire image content wi...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43805
m...
- 07:56 AM Feature #53373: [pwl] old data may be lost if host IP changes after restarting
- The current design uses host IP to judge whether the existing cache file is valid, but in the distributed environment...
- 07:45 AM Backport #53421 (Resolved): pacific: librbd/crypto: fix various memory leaks
- https://github.com/ceph/ceph/pull/44998
- 07:36 AM Bug #53419 (Pending Backport): librbd/crypto: fix various memory leaks
- 07:34 AM Bug #53419 (Resolved): librbd/crypto: fix various memory leaks
- Fix all current (i.e. contained in master branch) errors that are reported by running:...
11/28/2021
- 01:38 PM Bug #53417 (Fix Under Review): librbd/crypto: Uninitialized image data may be gibberish
- 11:50 AM Bug #53417: librbd/crypto: Uninitialized image data may be gibberish
- To reproduce, you may need to read from an area where the relevant rados object exists, so first write a small amount...
- 11:21 AM Bug #53417 (Fix Under Review): librbd/crypto: Uninitialized image data may be gibberish
- By convention, librbd returns zeros when reading uninitialized image data.
When using encryption, this convention is...
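The interaction hinted at here can be sketched with a toy model (this is not librbd code; the XOR "cipher" and function names are purely illustrative): if the encryption layer naively decrypts the zeros returned for an unallocated extent, the caller sees gibberish instead of the zeros the convention promises.

```python
# Toy sketch, NOT librbd internals: why reading an unallocated extent
# through an encryption layer needs special-casing.

def keystream(length: int) -> bytes:
    # Stand-in cipher: a fixed XOR keystream (real crypto would be AES-based).
    return bytes((i * 37 + 11) % 256 for i in range(length))

def raw_read(allocated: dict, offset: int, length: int) -> bytes:
    # By convention, unallocated image data reads back as zeros.
    return allocated.get(offset, b"\x00" * length)

def naive_decrypt_read(allocated: dict, offset: int, length: int) -> bytes:
    # Bug pattern: decrypt whatever raw_read returned, even for holes.
    data = raw_read(allocated, offset, length)
    return bytes(a ^ b for a, b in zip(data, keystream(length)))

def correct_decrypt_read(allocated: dict, offset: int, length: int) -> bytes:
    # Fix pattern: pass zeros through untouched for unallocated extents.
    if offset not in allocated:
        return b"\x00" * length
    return naive_decrypt_read(allocated, offset, length)

image = {}  # nothing written at offset 0
assert raw_read(image, 0, 8) == b"\x00" * 8            # plain: zeros
assert naive_decrypt_read(image, 0, 8) != b"\x00" * 8  # gibberish
assert correct_decrypt_read(image, 0, 8) == b"\x00" * 8
```

The real bug also involves partially written rados objects, as the reproduction note above says; this sketch only captures the convention-vs-decryption mismatch.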
11/25/2021
- 09:34 AM Bug #49876: [luks] sporadic failure in TestLibRBD.TestEncryptionLUKS2
- I reproduced this issue with debug enabled on pwl as well.
Looks like that pwl is flushing both writes concurrently,...
11/24/2021
- 04:26 PM Backport #53387 (In Progress): octopus: rbd-mirror: TestMockMirrorStatusUpdater.RemoveImmediateUp...
- https://github.com/ceph/ceph/pull/43663
- 04:00 PM Backport #53387 (Resolved): octopus: rbd-mirror: TestMockMirrorStatusUpdater.RemoveImmediateUpdat...
- https://github.com/ceph/ceph/pull/43663
- 04:15 PM Backport #53386 (In Progress): pacific: rbd-mirror: TestMockMirrorStatusUpdater.RemoveImmediateUp...
- 04:00 PM Backport #53386 (Resolved): pacific: rbd-mirror: TestMockMirrorStatusUpdater.RemoveImmediateUpdat...
- https://github.com/ceph/ceph/pull/44094
- 03:55 PM Bug #53375 (Pending Backport): rbd-mirror: TestMockMirrorStatusUpdater.RemoveImmediateUpdate may ...
- 09:52 AM Bug #43274: unittest_rbd_mirror: Exception: SegFault
- ...
11/23/2021
- 02:32 PM Bug #53375 (Fix Under Review): rbd-mirror: TestMockMirrorStatusUpdater.RemoveImmediateUpdate may ...
- 12:10 PM Bug #53375 (Resolved): rbd-mirror: TestMockMirrorStatusUpdater.RemoveImmediateUpdate may get stuck
- https://jenkins.ceph.com/job/ceph-pull-requests/86004/console...
- 07:12 AM Feature #53373: [pwl] old data may be lost if host IP changes after restarting
- https://github.com/ceph/ceph/pull/43839 is a feasible solution. It adds an id to the root structure; we'd better use a string...
- 07:08 AM Feature #53373 (New): [pwl] old data may be lost if host IP changes after restarting
- If multiple clients successively write to the same image and go down halfway, and then the client that went down earl...
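The idea discussed in this feature, validating the cache by a stable per-client identifier rather than the host IP, can be sketched as follows. This is a hedged illustration: the file layout and names (`client_id`, `owner_id`) are hypothetical, not the PR's actual code.

```python
# Hypothetical sketch: validate a pwl cache file by a persistent client
# id instead of the host IP, which can change after a restart.
import json
import os
import uuid

def load_or_create_client_id(path: str) -> str:
    # The id survives restarts and IP changes because it lives on disk.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)["client_id"]
    cid = str(uuid.uuid4())
    with open(path, "w") as f:
        json.dump({"client_id": cid}, f)
    return cid

def cache_is_valid(cache_meta: dict, client_id: str) -> bool:
    # Compare against the stable id, not the current host IP.
    return cache_meta.get("owner_id") == client_id
```

With this shape, a client that comes back with a new IP still recognizes (or correctly rejects) its cache file, which is the failure mode the report describes.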
- 06:05 AM Bug #52277 (Closed): [pwl] IO hang when the single IO size * io_depth > cache size
- 06:04 AM Bug #52277: [pwl] IO hang when the single IO size * io_depth > cache size
- With the solution from https://tracker.ceph.com/issues/52599 applied, the current issue won't appear. Internal flush request(f...
- 02:18 AM Bug #53368 (Closed): rbd_cache configuration is meaningless
- @yunqing wang: closing, as I assume you resolved this by taking it out of the QEMU environment; if not, please reopen with mor...
- 01:31 AM Bug #53368: rbd_cache configuration is meaningless
- Sorry, I used it in QEMU, where this setting was overridden.
- 01:17 AM Bug #53368 (Closed): rbd_cache configuration is meaningless
- When rbd_cache = false is set, the cache is still enabled.
I tested in v15.2.15; it may also affect master.
11/22/2021
- 12:58 AM Bug #53352 (Closed): why can one rbd's data overwrite another rbd's data?!
- software version info:
ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
QEMU emula...
11/20/2021
- 08:24 PM Backport #52733: pacific: [rbd-mirror] unbreak one-way snapshot-based mirroring
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43315
merged
- 02:32 AM Bug #53350 (Closed): rbd content comes from another rbd
- ceph version is 12.2.12
Two VMs are based on rbd. Now the content of one object in one rbd image seems to come from another rbd image.
Anyone h...
11/19/2021
- 04:51 PM Bug #53243 (Fix Under Review): wrong encoding of snap protection record in exporting image
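The mismatch this bug describes, a protection flag that should occupy a single byte being written with an 8-byte width, can be illustrated with a hedged Python sketch. The function names and field widths here are illustrative only, not the actual export-diff wire format.

```python
# Illustrative sketch of a 1-byte vs 8-byte flag encoding mismatch;
# not the real rbd export-diff record layout.
import struct

def encode_flag_wrong(is_protected: bool) -> bytes:
    # Bug pattern: the flag serialized as a 64-bit little-endian integer.
    return struct.pack("<Q", int(is_protected))

def encode_flag_right(is_protected: bool) -> bytes:
    # The flag is a single byte on the wire.
    return struct.pack("<B", int(is_protected))

assert len(encode_flag_wrong(True)) == 8  # 7 stray bytes follow the flag
assert len(encode_flag_right(True)) == 1
assert encode_flag_right(True) == b"\x01"
```

A reader of the stream that expects one byte will misparse everything after the oversized field, which is why the exported image decodes incorrectly.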
- 02:33 PM Backport #53028: pacific: rbd diff between two snapshots lists entire image content with 'whole-o...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43805
merged
11/16/2021
- 07:20 AM Backport #53032 (Resolved): pacific: rbd-mirror: metadata of mirrored image are not properly clea...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43662
m... - 06:29 AM Bug #49876: [luks] sporadic failure in TestLibRBD.TestEncryptionLUKS2
- @Deepika The two writes at the end of the test should not race, but rather the second write is expected to begin only...
11/14/2021
- 02:41 PM Backport #53264 (Resolved): pacific: [pwl ssd] cache larger than 4G will corrupt itself
- https://github.com/ceph/ceph/pull/43918
- 02:36 PM Bug #50675 (Pending Backport): [pwl ssd] cache larger than 4G will corrupt itself
- 02:36 PM Bug #50675: [pwl ssd] cache larger than 4G will corrupt itself
- pacific backport: https://github.com/ceph/ceph/pull/43918
11/12/2021
- 05:26 PM Bug #53250 (Fix Under Review): [rbd_support] passing invalid interval removes entire schedule
- 05:14 PM Bug #53250 (Resolved): [rbd_support] passing invalid interval removes entire schedule
- If we provide a random string in the snapshot remove command, the entire schedule associated with the image is getting...
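A hedged sketch of the fix direction (this is not the rbd_support mgr module's actual code): validate the interval string up front and raise, so an invalid argument never falls through to "remove everything".

```python
# Illustrative interval validation for schedule removal; names and
# accepted forms are assumptions, not the real rbd_support module.
import re
from typing import Optional

def parse_interval(spec: str) -> int:
    # Accept forms like "30m", "12h", "1d"; return seconds.
    m = re.fullmatch(r"(\d+)([mhd])", spec)
    if not m:
        raise ValueError("invalid interval: %r" % spec)
    return int(m.group(1)) * {"m": 60, "h": 3600, "d": 86400}[m.group(2)]

def remove_schedule(schedule: dict, interval: Optional[str]) -> None:
    if interval is None:
        schedule.clear()  # removing the whole schedule must be explicit
        return
    # Garbage input now raises instead of falling through to clear().
    schedule.pop(parse_interval(interval), None)
```

With this shape, `remove_schedule(sched, "garbage")` raises ValueError and leaves the schedule intact, rather than silently wiping it.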
- 04:16 PM Bug #53247 (Rejected): rbd: ModuleNotFoundError: No module named 'tasks.qemu'
- ...
- 01:05 PM Bug #53243 (Resolved): wrong encoding of snap protection record in exporting image
- The size of the protection flag should be 1 byte, but it is encoded as 8.
src/tools/rbd/action/Export.cc
```
int do_export_diff_fd(li...
```
- 07:06 AM Bug #50675: [pwl ssd] cache larger than 4G will corrupt itself
- The effective_pool_size is 70% of the configured size, so 8GB is enough.
- 06:54 AM Bug #50675: [pwl ssd] cache larger than 4G will corrupt itself
- We can have a different yaml fragment, maybe testing with 8GB; how large does the cache actually need to be?
rbd...
- 06:48 AM Bug #50734: [pwl][test] make recovery.yaml actually trigger recovery
- @majinpeng @congminyin do you have context around this by any chance?
11/11/2021
- 08:22 AM Bug #53110: [rbd-mirror] handle_unregister_watch: error
- These watch (and other) errors are not the real problem for this test failure. They occur after the test is finished an...
11/10/2021
- 01:53 PM Bug #49876: [luks] sporadic failure in TestLibRBD.TestEncryptionLUKS2
- @Or does it relate to omap in any way https://github.com/ceph/ceph/pull/43127#issuecomment-924595267
- 11:40 AM Bug #49876: [luks] sporadic failure in TestLibRBD.TestEncryptionLUKS2
- Near the end of the test there are 2 writes.
One writes 512 (TEST_IO_SIZE) zeros to image offset 0.
The other write...
- 07:30 AM Bug #50675: [pwl ssd] cache larger than 4G will corrupt itself
- Deepika Upadhyay wrote:
> @Congminyin are the new test cases added for this use case or should we be adding them?
...
11/05/2021
- 11:19 AM Backport #53170 (Resolved): pacific: [pwl] deadlock on AbstractWriteLog::m_lock during shutdown
- https://github.com/ceph/ceph/pull/43772
- 11:17 AM Bug #52566 (Pending Backport): [pwl ssd] assert in _aio_stop() during shutdown
- 11:17 AM Bug #52235 (Pending Backport): [pwl] deadlock on AbstractWriteLog::m_lock during shutdown
- 01:52 AM Bug #53058 (Closed): [pwl] FAILED ceph_assert(initialized)
11/04/2021
- 02:18 PM Backport #53027 (In Progress): octopus: rbd diff between two snapshots lists entire image content...
- 02:17 PM Backport #53028 (In Progress): pacific: rbd diff between two snapshots lists entire image content...
- 10:08 AM Bug #46875: TestLibRBD.TestPendingAio: test_librbd.cc:4539: Failure or SIGSEGV
- https://jenkins.ceph.com/job/ceph-pull-requests/85058 (attached the log)
11/03/2021
- 07:07 PM Backport #53032: pacific: rbd-mirror: metadata of mirrored image are not properly cleaned up afte...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43662
merged
- 07:01 PM Bug #50905: [rbd-nbd] kernel lockup during rbd_fsx_nbd
- /ceph/teuthology-archive/yuriw-2021-11-01_19:22:06-rbd-wip-yuri3-testing-2021-11-01-0947-pacific-distro-basic-smithi/...
- 03:18 PM Tasks #53143 (New): [pwl] optimize the design of syncpoint generated by internal flush
- Keeping this as a reminder for us:
https://github.com/ceph/ceph/pull/43461#issuecomment-958622561
- 03:10 PM Backport #53141 (Resolved): pacific: [pwl] flush requests are dispatched in advance
- https://github.com/ceph/ceph/pull/43772
- 03:07 PM Bug #52599 (Pending Backport): [pwl] flush requests are dispatched in advance
- 03:03 PM Bug #50675: [pwl ssd] cache larger than 4G will corrupt itself
- @Congminyin are the new test cases added for this use case or should we be adding them?
- 12:53 PM Bug #53108: [pwl] TestMigration.Stress* failure with pwl cache
- ...
- 12:44 PM Bug #52566: [pwl ssd] assert in _aio_stop() during shutdown
- Was able to reproduce both the deadlock and this issue; the fix seems to work as well.
- 12:36 PM Bug #53110: [rbd-mirror] handle_unregister_watch: error
- /ceph/teuthology-archive/ideepika-2021-11-02_12:33:30-rbd-wip-ssd-cache-testing-distro-basic-smithi/6477630/teutholog...
- 07:18 AM Bug #53058: [pwl] FAILED ceph_assert(initialized)
- A TestLibRBD.CreateThickRemoveFullTry failure caused this assert. Currently the test case fails and can't do rbd_close...
11/01/2021
- 10:25 PM Bug #53125 (Need More Info): [doc] --pool is deprecated for import command
- Greetings,
While using the import command, I'm seeing the following warning message: 'rbd: -p [ --pool ] is deprecated, us...
- 11:23 AM Backport #53118 (Resolved): pacific: [pwl rwl] deadlock issue when pwl initialization failed
- https://github.com/ceph/ceph/pull/43772
- 11:22 AM Bug #53116 (Pending Backport): [pwl rwl] deadlock issue when pwl initialization failed
- 11:13 AM Bug #53116 (Resolved): [pwl rwl] deadlock issue when pwl initialization failed
- When pwl initialization fails, 'AbstractWriteLog' releases itself.
In the callback, it holds the guard lock and wants to g...
- 09:27 AM Backport #51670 (In Progress): pacific: [pwl ssd] first_free_entry corruption on media (segfault ...
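The deadlock pattern described for #53116, a callback that runs while the guard lock is held and then tries to take the same non-reentrant lock, can be sketched in Python. The names (`m_lock`, the callback) are illustrative stand-ins, not AbstractWriteLog's actual code.

```python
# Illustrative self-deadlock pattern: a callback fired while m_lock is
# held tries to take the same non-reentrant lock again.
import threading

m_lock = threading.Lock()  # non-reentrant, like a ceph::mutex

def callback_needing_lock() -> bool:
    # Uses a non-blocking acquire so the demo doesn't actually hang:
    # a blocking acquire here would never return while the caller
    # already holds m_lock.
    got = m_lock.acquire(blocking=False)
    if got:
        m_lock.release()
    return got

def shutdown_with_callback() -> bool:
    with m_lock:  # guard lock held across the callback, as in the bug
        return callback_needing_lock()

assert shutdown_with_callback() is False  # would deadlock if blocking
assert callback_needing_lock() is True    # fine once the lock is free
```

The usual fixes are to drop the lock before invoking completions or to queue the callback to run outside the locked region.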
- 09:25 AM Backport #51670 (New): pacific: [pwl ssd] first_free_entry corruption on media (segfault in buffe...
- 07:04 AM Backport #53114 (Resolved): pacific: [pwl ssd] flush causes io re-order to writeback layer
- https://github.com/ceph/ceph/pull/43772
- 07:04 AM Bug #52511 (Pending Backport): [pwl ssd] flush causes io re-order to writeback layer
- 06:14 AM Bug #53113 (Can't reproduce): rbd/cephadm: failure iscsi tests
- ...
- 06:00 AM Bug #53112 (Closed): qemu socket failure when trying to mount nfs
- ...
10/31/2021
- 07:02 PM Bug #53110 (New): [rbd-mirror] handle_unregister_watch: error
- ...
- 06:12 AM Bug #53108 (Duplicate): [pwl] TestMigration.Stress* failure with pwl cache
- ...
- 05:32 AM Bug #53057: [pwl] TestDeepCopy.NoSnaps failed w/ rbd pwl enabled.
- http://qa-proxy.ceph.com/teuthology/ideepika-2021-10-30_13:14:03-rbd-wip-deepika2-testing-distro-basic-smithi/6468908...