Activity
From 11/04/2021 to 12/03/2021
12/03/2021
- 05:20 PM Bug #48850: "FAILED ceph_assert(m_pending_ops == 0)" in TestImageReplayer/3.SnapshotUnprotect
- ...
- 10:50 AM Backport #53170 (Resolved): pacific: [pwl] deadlock on AbstractWriteLog::m_lock during shutdown
- https://github.com/ceph/ceph/pull/43772
- 10:05 AM Bug #52235 (Resolved): [pwl] deadlock on AbstractWriteLog::m_lock during shutdown
- 09:50 AM Backport #51669 (Resolved): pacific: [pwl ssd] segfault in AbstractWriteLog::get_context() during...
- https://github.com/ceph/ceph/pull/43772
- 09:50 AM Bug #50951 (Resolved): [pwl ssd] segfault in AbstractWriteLog::get_context() during fio workload
- backport https://github.com/ceph/ceph/pull/43772
- 09:20 AM Bug #53477: [rbd-mirror] mirror daemon crash
- The messages reported in the description are not actually an issue. They are generated on cleanup by the "status" command, ...
- 07:14 AM Bug #53477: [rbd-mirror] mirror daemon crash
- https://pulpito.ceph.com/ideepika-2021-12-02_12:23:25-rbd-wip-deepika-testing-2021-12-02-1334-distro-default-smithi/6...
12/02/2021
- 09:00 PM Bug #53477 (New): [rbd-mirror] mirror daemon crash
- ...
- 08:32 AM Bug #53460 (New): rbd-mirror: split-brain after failover if rbd-mirror is started only after promote
- The following scenario leads to the split-brain failure:
1) start rbd-mirror on site1 (site1 is local and site2 is...
12/01/2021
- 05:29 AM Bug #53434 (Fix Under Review): DiffIterateTest/0.DiffIterate failed w/ librbd pwl cache.
- 05:27 AM Bug #53108 (Fix Under Review): [pwl] TestMigration.Stress* failure with pwl cache
11/30/2021
- 08:18 PM Bug #53440 (New): TestLibRBD.ConcurentOperations hang
- https://jenkins.ceph.com/job/ceph-pull-requests/86489/consoleFull#20912794656733401c-e9d0-4737-9832-6594c5da0afa
<pr...
- 07:30 AM Bug #53434: DiffIterateTest/0.DiffIterate failed w/ librbd pwl cache.
- PR https://github.com/ceph/ceph/pull/44144 fixes this bug
- 07:18 AM Bug #53434 (Resolved): DiffIterateTest/0.DiffIterate failed w/ librbd pwl cache.
- [ RUN ] DiffIterateTest/0.DiffIterate
using new format!
wrote [4104167~28361,7507937~35127,8211521~20835,1064...
- 07:27 AM Bug #51531: rbd_fsx_nbd "Size error" after resize with a timed out notification
- ...
11/29/2021
- 11:21 AM Bug #52675 (Resolved): [rbd-mirror] unbreak one-way snapshot-based mirroring
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:16 AM Backport #52733 (Resolved): pacific: [rbd-mirror] unbreak one-way snapshot-based mirroring
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43315
m...
- 11:15 AM Backport #53028 (Resolved): pacific: rbd diff between two snapshots lists entire image content wi...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43805
m...
- 07:56 AM Feature #53373: [pwl] old data may be lost if host IP changes after restarting
- The current design uses host IP to judge whether the existing cache file is valid, but in the distributed environment...
- 07:45 AM Backport #53421 (Resolved): pacific: librbd/crypto: fix various memory leaks
- https://github.com/ceph/ceph/pull/44998
- 07:36 AM Bug #53419 (Pending Backport): librbd/crypto: fix various memory leaks
- 07:34 AM Bug #53419 (Resolved): librbd/crypto: fix various memory leaks
- Fix all current (i.e. contained in master branch) errors that are reported by running:...
11/28/2021
- 01:38 PM Bug #53417 (Fix Under Review): librbd/crypto: Uninitialized image data may be gibberish
- 11:50 AM Bug #53417: librbd/crypto: Uninitialized image data may be gibberish
- To reproduce, you may need to read from an area where the relevant rados object exists, so first write a small amount...
- 11:21 AM Bug #53417 (Fix Under Review): librbd/crypto: Uninitialized image data may be gibberish
- By convention, librbd returns zeros when reading uninitialized image data.
When using encryption, this convention is...
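As a toy illustration (not librbd code, and not the cipher librbd actually uses), decrypting the all-zero data that rados returns for an unwritten extent does not yield zeros, which is why plaintext reads of uninitialized areas can come back as gibberish:

```python
# Toy single-byte XOR "cipher" -- purely illustrative.
key = 0x5A
backing = bytes(8)                           # unwritten extent: rados returns zeros
plaintext = bytes(b ^ key for b in backing)  # what a naive decrypt would hand back
print(plaintext)                             # b'ZZZZZZZZ' -- not zeros
```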
11/25/2021
- 09:34 AM Bug #49876: [luks] sporadic failure in TestLibRBD.TestEncryptionLUKS2
- I reproduced this issue with debug enabled on pwl as well.
Looks like pwl is flushing both writes concurrently,...
11/24/2021
- 04:26 PM Backport #53387 (In Progress): octopus: rbd-mirror: TestMockMirrorStatusUpdater.RemoveImmediateUp...
- https://github.com/ceph/ceph/pull/43663
- 04:00 PM Backport #53387 (Resolved): octopus: rbd-mirror: TestMockMirrorStatusUpdater.RemoveImmediateUpdat...
- https://github.com/ceph/ceph/pull/43663
- 04:15 PM Backport #53386 (In Progress): pacific: rbd-mirror: TestMockMirrorStatusUpdater.RemoveImmediateUp...
- 04:00 PM Backport #53386 (Resolved): pacific: rbd-mirror: TestMockMirrorStatusUpdater.RemoveImmediateUpdat...
- https://github.com/ceph/ceph/pull/44094
- 03:55 PM Bug #53375 (Pending Backport): rbd-mirror: TestMockMirrorStatusUpdater.RemoveImmediateUpdate may ...
- 09:52 AM Bug #43274: unittest_rbd_mirror: Exception: SegFault
- ...
11/23/2021
- 02:32 PM Bug #53375 (Fix Under Review): rbd-mirror: TestMockMirrorStatusUpdater.RemoveImmediateUpdate may ...
- 12:10 PM Bug #53375 (Resolved): rbd-mirror: TestMockMirrorStatusUpdater.RemoveImmediateUpdate may get stuck
- https://jenkins.ceph.com/job/ceph-pull-requests/86004/console...
- 07:12 AM Feature #53373: [pwl] old data may be lost if host IP changes after restarting
- https://github.com/ceph/ceph/pull/43839 is a feasible solution. Adding an id in the root structure, we'd better use string...
- 07:08 AM Feature #53373 (New): [pwl] old data may be lost if host IP changes after restarting
- If multiple clients successively write to the same image and go down halfway, and then the client that went down earl...
- 06:05 AM Bug #52277 (Closed): [pwl] IO hang when the single IO size * io_depth > cache size
- 06:04 AM Bug #52277: [pwl] IO hang when the single IO size * io_depth > cache size
- With the solution from https://tracker.ceph.com/issues/52599 applied, the current issue won't appear. Internal flush request(f...
- 02:18 AM Bug #53368 (Closed): rbd_cache configuration is meaningless
- @yunqing wang: closing, as I suppose you resolved this by taking it out of the QEMU environment; if not, please reopen with mor...
- 01:31 AM Bug #53368: rbd_cache configuration is meaningless
- Sorry, I used it in QEMU, which was overriding it.
- 01:17 AM Bug #53368 (Closed): rbd_cache configuration is meaningless
- When rbd_cache = false is set, the cache is still enabled.
I tested in v15.2.15, maybe also in master.
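For reference, a minimal sketch of the client-side setting involved (section name is the standard ceph.conf convention, not taken from this report):

```ini
# ceph.conf -- client-side RBD cache setting
[client]
rbd cache = false
```

Note that when librbd is used through QEMU, QEMU derives rbd_cache from its own drive cache mode (e.g. cache=none vs. cache=writeback), overriding the ceph.conf value, which matches how this ticket was resolved.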
11/22/2021
- 12:58 AM Bug #53352 (Closed): why one rbd's data can overwrite another rbd's data !!!
- software version info:
ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
QEMU emula...
11/20/2021
- 08:24 PM Backport #52733: pacific: [rbd-mirror] unbreak one-way snapshot-based mirroring
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43315
merged
- 02:32 AM Bug #53350 (Closed): rbd content comes from another rbd
- ceph version is 12.2.12
two VMs based on rbd. Now one object's content of one rbd seems to come from another rbd.
anyone h...
11/19/2021
- 04:51 PM Bug #53243 (Fix Under Review): wrong encoding of snap protection record in exporting image
- 02:33 PM Backport #53028: pacific: rbd diff between two snapshots lists entire image content with 'whole-o...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43805
merged
11/16/2021
- 07:20 AM Backport #53032 (Resolved): pacific: rbd-mirror: metadata of mirrored image are not properly clea...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43662
m...
- 06:29 AM Bug #49876: [luks] sporadic failure in TestLibRBD.TestEncryptionLUKS2
- @Deepika The two writes at the end of the test should not race, but rather the second write is expected to begin only...
11/14/2021
- 02:41 PM Backport #53264 (Resolved): pacific: [pwl ssd] cache larger than 4G will corrupt itself
- https://github.com/ceph/ceph/pull/43918
- 02:36 PM Bug #50675 (Pending Backport): [pwl ssd] cache larger than 4G will corrupt itself
- 02:36 PM Bug #50675: [pwl ssd] cache larger than 4G will corrupt itself
- pacific backport: https://github.com/ceph/ceph/pull/43918
11/12/2021
- 05:26 PM Bug #53250 (Fix Under Review): [rbd_support] passing invalid interval removes entire schedule
- 05:14 PM Bug #53250 (Resolved): [rbd_support] passing invalid interval removes entire schedule
- If we provide a random string in the snapshot remove command, the entire schedule associated with the image is getting...
- 04:16 PM Bug #53247 (Rejected): rbd: ModuleNotFoundError: No module named 'tasks.qemu'
- ...
- 01:05 PM Bug #53243 (Resolved): wrong encoding of snap protection record in exporting image
- The size of the protection flag should be 1 but is 8.
src/tools/rbd/action/Export.cc
```
int do_export_diff_fd(li...
- 07:06 AM Bug #50675: [pwl ssd] cache larger than 4G will corrupt itself
- The effective_pool_size is 70% of the configured size, so 8GB is enough.
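The sizing argument above can be checked quickly (the 70% factor is taken from the comment; the 4 GiB boundary is the one in the ticket title):

```python
# pwl ssd cache sizing check: 70% of an 8 GiB pool still exceeds the
# 4 GiB boundary this bug is about.
configured_gib = 8
effective_gib = configured_gib * 0.70
print(effective_gib)        # 5.6
print(effective_gib > 4)    # True
```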
- 06:54 AM Bug #50675: [pwl ssd] cache larger than 4G will corrupt itself
- We can have a different yaml fragment, maybe testing with 8GB; how much cache size is actually desired?
rbd...
- 06:48 AM Bug #50734: [pwl][test] make recovery.yaml actually trigger recovery
- @majinpeng @congminyin did you have context around this by any chance?
11/11/2021
- 08:22 AM Bug #53110: [rbd-mirror] handle_unregister_watch: error
- These watch (and other) errors are not the real problem for this test failure. They are after the test is finished an...
11/10/2021
- 01:53 PM Bug #49876: [luks] sporadic failure in TestLibRBD.TestEncryptionLUKS2
- @Or does it relate to omap in any way https://github.com/ceph/ceph/pull/43127#issuecomment-924595267
- 11:40 AM Bug #49876: [luks] sporadic failure in TestLibRBD.TestEncryptionLUKS2
- Near the end of the test there are 2 writes.
One writes 512 (TEST_IO_SIZE) zeros to image offset 0.
The other write...
- 07:30 AM Bug #50675: [pwl ssd] cache larger than 4G will corrupt itself
- Deepika Upadhyay wrote:
> @Congminyin are the new test cases added for this use case or should we be adding them?
...
11/05/2021
- 11:19 AM Backport #53170 (Resolved): pacific: [pwl] deadlock on AbstractWriteLog::m_lock during shutdown
- https://github.com/ceph/ceph/pull/43772
- 11:17 AM Bug #52566 (Pending Backport): [pwl ssd] assert in _aio_stop() during shutdown
- 11:17 AM Bug #52235 (Pending Backport): [pwl] deadlock on AbstractWriteLog::m_lock during shutdown
- 01:52 AM Bug #53058 (Closed): [pwl] FAILED ceph_assert(initialized)
11/04/2021
- 02:18 PM Backport #53027 (In Progress): octopus: rbd diff between two snapshots lists entire image content...
- 02:17 PM Backport #53028 (In Progress): pacific: rbd diff between two snapshots lists entire image content...
- 10:08 AM Bug #46875: TestLibRBD.TestPendingAio: test_librbd.cc:4539: Failure or SIGSEGV
- https://jenkins.ceph.com/job/ceph-pull-requests/85058 (attached the log)