Activity
From 03/31/2019 to 04/29/2019
04/29/2019
- 12:41 PM Feature #24065: [fast-diff] interlock object-map/fast-diff features together
- Opened tracker ticket #39521
- 12:14 PM Feature #24065: [fast-diff] interlock object-map/fast-diff features together
- @Ricardo: it sounds like a bug if you can disable fast-diff and leave object-map enabled. The dashboard should probab...
- 11:30 AM Feature #24065: [fast-diff] interlock object-map/fast-diff features together
- My main question here is, from a Ceph Dashboard point of view, can we simply drop the `fast-diff` checkbox from the R...
- 10:56 AM Feature #24065: [fast-diff] interlock object-map/fast-diff features together
- In Nautilus, I cannot create an image with `object-map` and without `fast-diff` because `object-map` will now implici...
- 12:41 PM Bug #39521 (Resolved): Fast-diff can be disabled w/o disabling object-map
- Related to the changes from https://tracker.ceph.com/issues/24065. Running "rbd feature disable <image> fast-diff" wi...
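The interlock requested in #24065 and the gap reported in #39521 can be pictured with a short sketch. This is illustrative Python, not librbd code; only the feature names come from the tickets, the function names and set-based model are assumptions. The idea: object-map and fast-diff are toggled as a unit, so disabling fast-diff cannot leave object-map enabled.

```python
# Illustrative model (NOT librbd's implementation): object-map and
# fast-diff are enabled/disabled together, per #24065 and #39521.
INTERLOCKED = {"object-map", "fast-diff"}

def enable_feature(enabled, feature):
    """Enable `feature`; enabling either interlocked feature enables both."""
    enabled = set(enabled)
    enabled.add(feature)
    if feature in INTERLOCKED:
        enabled |= INTERLOCKED
    return enabled

def disable_feature(enabled, feature):
    """Disable `feature`; disabling either interlocked feature disables both."""
    enabled = set(enabled)
    enabled.discard(feature)
    if feature in INTERLOCKED:
        enabled -= INTERLOCKED
    return enabled
```

Under this model the dashboard question answers itself: a separate fast-diff checkbox is redundant, since the pair always moves together.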
04/26/2019
- 03:45 PM Backport #39501 (Rejected): mimic: snapshot object maps can go inconsistent during copyup
- 03:45 PM Backport #39500 (Rejected): luminous: snapshot object maps can go inconsistent during copyup
- 03:45 PM Backport #39499 (Resolved): nautilus: snapshot object maps can go inconsistent during copyup
- https://github.com/ceph/ceph/pull/29722
- 12:12 PM Bug #39435 (Pending Backport): snapshot object maps can go inconsistent during copyup
04/25/2019
- 02:55 PM Bug #39447 (Fix Under Review): [tests] "rbd" teuthology suite has no coverage of "rbd diff"
- 07:43 AM Backport #39462 (Resolved): nautilus: [rbd-mirror] "bad crc in data" error when listing large pools
- https://github.com/ceph/ceph/pull/28122
- 07:43 AM Backport #39461 (Resolved): mimic: [rbd-mirror] "bad crc in data" error when listing large pools
- https://github.com/ceph/ceph/pull/28123
- 07:43 AM Backport #39460 (Resolved): luminous: [rbd-mirror] "bad crc in data" error when listing large pools
- https://github.com/ceph/ceph/pull/28124
04/24/2019
- 07:54 PM Bug #39455 (Need More Info): "rbd diff" reports diff on freshly-created image
- > My expectation based on reading the rbd manpage ("diff - Dump a list of byte extents in the image that have changed...
- 04:54 PM Bug #39455 (Rejected): "rbd diff" reports diff on freshly-created image
- While working on #39447 I was surprised to find that "rbd diff" reports a diff in the following case:
Starting fro...
- 05:53 PM Bug #39435 (Fix Under Review): snapshot object maps can go inconsistent during copyup
- 03:40 PM Bug #39021: Several race conditions are possible between io::ObjectRequest and io::CopyupRequest
- .... also include https://github.com/ceph/ceph/pull/27757
- 03:17 PM Backport #39450 (Resolved): librbd cannot open image against Jewel cluster
- The RefreshRequest state machine attempts to invoke the OSD cls method 'rbd.get_snapshot_timestamp', which doesn't ex...
- 01:47 PM Bug #39447 (In Progress): [tests] "rbd" teuthology suite has no coverage of "rbd diff"
- 11:48 AM Bug #39447 (Resolved): [tests] "rbd" teuthology suite has no coverage of "rbd diff"
- $SUBJ says it all
- 12:41 PM Bug #24668 (Pending Backport): [test] qemu-iotests tests fail under latest Ubuntu kernel
- 12:31 PM Bug #39448 (Resolved): [test] qemu-iotests test case 005 fails due to msgr error message
- ...
- 10:01 AM Bug #39407 (Pending Backport): [rbd-mirror] "bad crc in data" error when listing large pools
04/23/2019
- 02:50 PM Bug #39435 (Resolved): snapshot object maps can go inconsistent during copyup
- If the data read from the parent is all zeros, deep copyup isn't performed. However, snapshot object maps are updated...
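A minimal sketch of the inconsistency in #39435, using an assumed data model (the constant names resemble librbd's object-map states, but everything else is illustrative): the snapshot object maps should only be flagged when the deep copyup actually wrote the object.

```python
# Illustrative sketch (NOT librbd code): skip deep copyup for all-zero
# parent data, and only then keep the snapshot object maps untouched --
# the bug was updating them even when nothing was written.
OBJECT_NONEXISTENT, OBJECT_EXISTS = 0, 1

def copyup(parent_data, snap_object_maps, object_no):
    """Return True if the object was written into the snapshots."""
    wrote_object = any(b != 0 for b in parent_data)
    if wrote_object:
        for om in snap_object_maps:
            # only mark the object as existing when it was really written
            om[object_no] = OBJECT_EXISTS
    return wrote_object
```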
- 01:07 PM Bug #36626: couldn't rewatch after network was blocked and client blacklisted
- The provided log doesn't show any attempt to write IO while the client is blacklisted. Was that part just snipped out...
- 01:06 PM Backport #39429 (Resolved): mimic: 'rbd mirror status --verbose' will occasionally seg fault
- https://github.com/ceph/ceph/pull/28125
- 01:06 PM Backport #39428 (Resolved): nautilus: 'rbd mirror status --verbose' will occasionally seg fault
- https://github.com/ceph/ceph/pull/28121
- 01:06 PM Backport #39427 (Resolved): luminous: 'rbd mirror status --verbose' will occasionally seg fault
- https://github.com/ceph/ceph/pull/28126
- 01:05 PM Backport #39423 (Resolved): nautilus: Drop "ceph_test_librbd_api" target
- https://github.com/ceph/ceph/pull/28091
- 12:26 PM Bug #39407 (Fix Under Review): [rbd-mirror] "bad crc in data" error when listing large pools
- 12:23 PM Bug #39407 (Resolved): [rbd-mirror] "bad crc in data" error when listing large pools
- If a pool has more than 1024 images, rbd-mirror will issue multiple "mirror_image_list" commands to the OSD. The subs...
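The chunked listing described in #39407 can be sketched generically. The helper names and the plain-string cursor are assumptions, not rbd-mirror's actual code; the real daemon drives the OSD class method "mirror_image_list" in batches.

```python
# Generic pagination sketch: fetch at most MAX_RETURN entries per call
# and resume after the last id returned by the previous call.
MAX_RETURN = 1024

def list_all(query):
    """query(start_after, max_return) -> next batch of sorted image ids."""
    result, start_after = [], ""
    while True:
        chunk = query(start_after, MAX_RETURN)
        result.extend(chunk)
        if len(chunk) < MAX_RETURN:
            return result
        start_after = chunk[-1]
```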
04/22/2019
- 05:16 PM Bug #39031 (Pending Backport): 'rbd mirror status --verbose' will occasionally seg fault
- 05:16 PM Cleanup #39386 (Resolved): [test] add 'writearound' cache policy test cases to rbd suite
- 05:22 AM Cleanup #39072 (Pending Backport): Drop "ceph_test_librbd_api" target
04/19/2019
- 06:21 PM Feature #24235 (Rejected): Add new command - ceph rbd-mirror status like ceph fs(mds) status
- 06:19 PM Cleanup #39025 (In Progress): Simplify ImageCtx locking where possible
- 06:18 PM Cleanup #39072 (Fix Under Review): Drop "ceph_test_librbd_api" target
- 06:17 PM Cleanup #39072 (In Progress): Drop "ceph_test_librbd_api" target
- 06:13 PM Cleanup #39386 (Fix Under Review): [test] add 'writearound' cache policy test cases to rbd suite
- 06:04 PM Cleanup #39386 (In Progress): [test] add 'writearound' cache policy test cases to rbd suite
04/18/2019
- 06:13 PM Bug #24668 (Fix Under Review): [test] qemu-iotests tests fail under latest Ubuntu kernel
- 05:54 PM Bug #24668 (In Progress): [test] qemu-iotests tests fail under latest Ubuntu kernel
- 05:56 PM Cleanup #39386 (Resolved): [test] add 'writearound' cache policy test cases to rbd suite
- The suite already tests cache disabled, writearound, and writethrough -- so a new writearound workload should be added w...
- 05:54 PM Bug #20484 (Won't Fix): snap can be removed while it is been using by rbd-nbd device
- 05:53 PM Bug #12333 (Won't Fix): librbd doesn't notice if exclusive lock is broken
- 05:52 PM Bug #38260 (Won't Fix): "Segmentation fault (core dumped)" in upgrade:client-upgrade-kraken-luminous
- 05:22 PM Bug #39031 (Fix Under Review): 'rbd mirror status --verbose' will occasionally seg fault
- 03:15 PM Bug #39031 (In Progress): 'rbd mirror status --verbose' will occasionally seg fault
- 09:11 AM Feature #37849 (Resolved): [io] simple scheduler plugin for object dispatcher layer
04/17/2019
- 09:47 AM Documentation #39351 (New): perf image iotop and rbd perf image are not documented
- For example, "perf image iotop" handles keys "<" and ">" for changing column sorting.
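A toy model of that key handling, for illustration only; the column list and the wrap-around at the ends are assumptions, not taken from the actual tool.

```python
# Hypothetical '<' / '>' sort-column cycling for an iotop-style display.
COLUMNS = ["WR", "RD", "WR_BYTES", "RD_BYTES", "WR_LAT", "RD_LAT"]

def handle_sort_key(current, key):
    """Move the active sort column left ('<') or right ('>')."""
    i = COLUMNS.index(current)
    if key == "<":
        i = (i - 1) % len(COLUMNS)
    elif key == ">":
        i = (i + 1) % len(COLUMNS)
    return COLUMNS[i]
```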
04/16/2019
- 10:54 AM Backport #38977 (Resolved): nautilus: return ETIMEDOUT if we meet a timeout in poll
- 08:01 AM Backport #39316 (Resolved): mimic: krbd: fix rbd map hang due to udev return subsystem unordered
- https://github.com/ceph/ceph/pull/30176
- 08:01 AM Backport #39315 (Resolved): nautilus: krbd: fix rbd map hang due to udev return subsystem unordered
- https://github.com/ceph/ceph/pull/28019
- 08:01 AM Backport #39314 (Resolved): luminous: krbd: fix rbd map hang due to udev return subsystem unordered
- https://github.com/ceph/ceph/pull/31360
04/15/2019
- 08:02 PM Backport #38977: nautilus: return ETIMEDOUT if we meet a timeout in poll
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27539
merged
- 08:41 AM Bug #39089 (Pending Backport): krbd: fix rbd map hang due to udev return subsystem unordered
- 06:22 AM Backport #38976 (In Progress): mimic: return ETIMEDOUT if we meet a timeout in poll
- https://github.com/ceph/ceph/pull/27588
04/14/2019
- 07:17 AM Backport #39288 (Resolved): nautilus: [rbd-mirror] image replayer should periodically flush IO an...
- https://github.com/ceph/ceph/pull/27937
- 07:10 AM Bug #39257 (Pending Backport): [rbd-mirror] image replayer should periodically flush IO and commi...
04/13/2019
- 07:10 PM Bug #38895 (Resolved): "cannot move migrating image to trash" error should return EBUSY
- 07:09 PM Backport #38968 (Resolved): nautilus: "cannot move migrating image to trash" error should return ...
04/12/2019
- 08:12 PM Backport #38968: nautilus: "cannot move migrating image to trash" error should return EBUSY
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27475
merged
- 11:57 AM Bug #39269 (Fix Under Review): rbd-nbd should return the correct error message when the device doesn't match
- 09:13 AM Bug #39269: rbd-nbd should return the correct error message when the device doesn't match
- https://github.com/ceph/ceph/pull/27484
- 09:12 AM Bug #39269 (Resolved): rbd-nbd should return the correct error message when the device doesn't match
- When executing: rbd-nbd map rbd/image --device /dev/image
The error message is:
rbd-nbd: failed to open device...
- 09:09 AM Bug #36626: couldn't rewatch after network was blocked and client blacklisted
- It still occurs even when rbd_cache was false.
The log file is a bit big and I deleted some content....
- 06:56 AM Backport #38977 (In Progress): nautilus: return ETIMEDOUT if we meet a timeout in poll
- https://github.com/ceph/ceph/pull/27539
- 01:03 AM Backport #38975 (In Progress): luminous: return ETIMEDOUT if we meet a timeout in poll
- https://github.com/ceph/ceph/pull/27536
04/11/2019
- 09:03 PM Bug #39257 (Fix Under Review): [rbd-mirror] image replayer should periodically flush IO and commi...
- 05:03 PM Bug #39257 (Resolved): [rbd-mirror] image replayer should periodically flush IO and commit positions
- With the cache enabled, the commit position is only updated after enough IO is issued. Therefore, if there is no addi...
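A sketch of the fix direction in #39257; the class shape, the interval value, and the tick-based clock are illustrative assumptions, not rbd-mirror's implementation. Even when no new IO arrives, the replayer periodically flushes and commits its journal position so the reported position cannot fall arbitrarily far behind.

```python
# Illustrative periodic flush/commit for an image replayer (assumed names).
import time

class ImageReplayer:
    FLUSH_INTERVAL = 30.0  # seconds; assumed value, not the real default

    def __init__(self, now=time.monotonic):
        self._now = now
        self._last_flush = now()
        self._committed = 0  # last committed journal position
        self._applied = 0    # last applied journal position

    def handle_entry(self, position):
        """Apply a replayed journal entry."""
        self._applied = position
        self._maybe_flush()

    def tick(self):
        """Called periodically even when the journal is idle."""
        self._maybe_flush()

    def _maybe_flush(self):
        if self._now() - self._last_flush >= self.FLUSH_INTERVAL:
            self._committed = self._applied  # flush IO, then commit position
            self._last_flush = self._now()
```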
- 05:01 PM Bug #12383 (Won't Fix): "Segmentation fault rbd()"
- 05:00 PM Bug #12545 (Can't reproduce): CEPH_QA_SUITE/AARCH64: Command failed (workunit test rbd/map-snapsh...
- 05:00 PM Bug #12535 (Can't reproduce): CEPH_QA_SUITE/AARCH64: segfault in rbd-fuse
- 05:00 PM Bug #22059 (Closed): It's impossible to add rbd image-meta key/value on opened image
- Closing due to a lack of activity
- 05:00 PM Bug #22660 (Won't Fix): Inconsistency raised while performing multiple "image rename" in parallel.
- 04:59 PM Bug #23189 (Closed): snapshot size 0 and image size 0
- Closing due to a lack of activity
- 04:59 PM Bug #23263 (Closed): Journaling feature causes cluster to have slow requests and inconsistent PG
- Closing due to a lack of activity
- 04:59 PM Bug #24102 (Closed): snapshot of RBD image is found to be all zero.
- Closing due to a lack of activity
- 04:59 PM Bug #24106 (Closed): fail to create rbd device when the clusters' health is ok
- Closing due to a lack of activity
- 04:58 PM Bug #24425 (Closed): create iscsi gateway stop with "The first gateway defined must be the local ...
- Closing due to a lack of activity
- 04:58 PM Bug #24528 (Closed): Missing snapshot after upgrade from Kraken (11.2.0) to Luminous (12.2.5)
- 04:58 PM Bug #5591 (Won't Fix): rbd-fuse crashes repeatedly under light load
- 04:56 PM Bug #36626 (Need More Info): couldn't rewatch after network was blocked and client blacklisted
- Can you provide any logs from step 6? I would expect that all the IOs would be failing w/ EBLACKLISTED. Perhaps it's ...
- 04:54 PM Bug #38553 (Need More Info): rbd: race condition in rbd removing
- 04:54 PM Bug #38553: rbd: race condition in rbd removing
- Do you still hit this on Nautilus? It moves images to the trash as the first step before removing, so I would think i...
- 04:51 PM Bug #38308 (Can't reproduce): segfault when deleting rbd in python bindings
- Closing due to lack of feedback
- 04:48 PM Bug #36699 (Closed): the remote cluster could be stop sync after 3 or 4 days
- 04:08 PM Bug #38928 (Resolved): non-default namespace images ignore pool level config overrides
- 04:08 PM Backport #38961 (Resolved): nautilus: non-default namespace images ignore pool level config overr...
- 02:48 PM Backport #38961: nautilus: non-default namespace images ignore pool level config overrides
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27423
merged
04/10/2019
- 09:56 PM Bug #38814 (Resolved): backport krbd discard qa fixes to nautilus
- 09:56 PM Backport #38861 (Resolved): nautilus: backport krbd discard qa fixes to nautilus
- 09:45 PM Bug #38660 (Resolved): librbd: avoid aggregate-initializing IsWriteOpVisitor
- 09:45 PM Backport #38684 (Resolved): mimic: librbd: avoid aggregate-initializing IsWriteOpVisitor
- 09:45 PM Bug #38659 (Resolved): librbd: avoid aggregate-initializing any static_visitor
- 09:45 PM Backport #38685 (Resolved): mimic: librbd: avoid aggregate-initializing any static_visitor
- 09:29 PM Bug #37932 (Resolved): Throttle.cc: 194: FAILED assert(c >= 0) due to invalid ceph_osd_op union
- 09:28 PM Backport #37987 (Resolved): luminous: Throttle.cc: 194: FAILED assert(c >= 0) due to invalid ceph...
- 09:08 PM Backport #39226 (Resolved): nautilus: [sparsify] verify that image isn't using an EC data pool
- https://github.com/ceph/ceph/pull/27903
- 09:08 PM Backport #39224 (Resolved): nautilus: deep cp a migration prepared image will result in an assert
- https://github.com/ceph/ceph/pull/27882
- 09:04 PM Backport #39196 (Rejected): mimic: Several race conditions are possible between io::ObjectRequest...
- 09:04 PM Backport #39195 (Resolved): nautilus: Several race conditions are possible between io::ObjectRequ...
- https://github.com/ceph/ceph/pull/28132
- 09:04 PM Backport #39194 (Rejected): luminous: Several race conditions are possible between io::ObjectRequ...
- 09:03 PM Backport #39186 (Resolved): mimic: [cli] 'rbd list -l' with non-user snapshots results in "-ENOEN...
- https://github.com/ceph/ceph/pull/28138
- 03:15 AM Backport #38968 (In Progress): nautilus: "cannot move migrating image to trash" error should retu...
- https://github.com/ceph/ceph/pull/27475
04/09/2019
- 05:35 PM Bug #39021 (Pending Backport): Several race conditions are possible between io::ObjectRequest and...
- 03:04 PM Backport #38956 (Resolved): nautilus: backport krbd discard qa fixes to stable branches
- 02:37 PM Backport #38956: nautilus: backport krbd discard qa fixes to stable branches
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27239
merged
- 12:59 PM Bug #38661 (Pending Backport): deep cp a migration prepared image will result in an assert
- 12:58 PM Bug #38364 (Pending Backport): [sparsify] verify that image isn't using an EC data pool
04/08/2019
- 05:34 AM Backport #38954 (In Progress): luminous: backport krbd discard qa fixes to stable branches
- -https://github.com/ceph/ceph/pull/27425-
- 03:37 AM Backport #38961 (In Progress): nautilus: non-default namespace images ignore pool level config ov...
- https://github.com/ceph/ceph/pull/27423
04/05/2019
- 02:11 AM Backport #38955 (In Progress): mimic: backport krbd discard qa fixes to stable branches
- https://github.com/ceph/ceph/pull/27391
04/04/2019
- 07:47 PM Backport #38861: nautilus: backport krbd discard qa fixes to nautilus
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27258
merged
04/03/2019
- 08:07 PM Backport #38684: mimic: librbd: avoid aggregate-initializing IsWriteOpVisitor
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27039
merged
- 07:37 PM Backport #38685: mimic: librbd: avoid aggregate-initializing any static_visitor
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27041
merged
- 09:50 AM Bug #39089: krbd: fix rbd map hang due to udev return subsystem unordered
- https://github.com/ceph/ceph/pull/27339
- 09:47 AM Bug #39089 (Resolved): krbd: fix rbd map hang due to udev return subsystem unordered
- Recently we found that 'rbd map' hangs forever, but the rbd device can work fine after terminating 'rbd map' or waiting until it timed...
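One way to picture the ordering problem behind #39089 (a simplified event model, not the actual krbd/udev code): the map path waits for udev events from more than one subsystem, and assuming a fixed arrival order can hang when the kernel delivers them the other way round, so the wait should accept either order.

```python
# Illustrative order-independent wait (assumed event model): complete once
# an event from every expected subsystem has been seen, in any order.
def wait_for_map(events, expected=("rbd", "block")):
    pending = set(expected)
    for subsystem in events:
        pending.discard(subsystem)
        if not pending:
            return True
    return False  # the real tool would block here, i.e. the reported hang
```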
04/02/2019
- 03:59 PM Backport #37987: luminous: Throttle.cc: 194: FAILED assert(c >= 0) due to invalid ceph_osd_op union
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26064
merged
- 12:32 PM Bug #39081 (Pending Backport): [cli] 'rbd list -l' with non-user snapshots results in "-ENOENT" e...
- 12:30 PM Bug #39081 (Resolved): [cli] 'rbd list -l' with non-user snapshots results in "-ENOENT" errors
- ...
04/01/2019
- 04:13 PM Cleanup #39072 (Resolved): Drop "ceph_test_librbd_api" target
- The librados C++ API is not stable, so it can no longer be used.