Activity
From 07/11/2016 to 08/09/2016
08/09/2016
- 07:39 PM Bug #16974 (Resolved): rbd-mirror: force-promoted image will remain R/O until local rbd-mirror da...
- If the local rbd-mirror daemon was replicating events from a remote primary image to a local non-primary image and an...
- 01:19 PM Bug #16956 (Pending Backport): rbd-mirror: FAILED assert(m_on_update_status_finish == nullptr)
- 12:14 PM Bug #16970 (Fix Under Review): ImageReplayer::is_replaying does not include flush state
- *PR*: https://github.com/ceph/ceph/pull/10627
- 12:06 PM Bug #16970 (Resolved): ImageReplayer::is_replaying does not include flush state
- 06:45 AM Backport #16484 (Resolved): jewel: ExclusiveLock object leaked when switching to snapshot
- 06:31 AM Backport #16315 (Resolved): jewel: When journaling is enabled, a flush request shouldn't flush th...
- 06:31 AM Backport #16371 (Resolved): jewel: rbd-mirror: ensure replay status formatter has completed befor...
- 06:31 AM Backport #16372 (Resolved): jewel: Unable to disable journaling feature if in unexpected mirror s...
- 06:31 AM Backport #16423 (Resolved): jewel: Journal duplicate op detection can cause lockdep error
- 06:31 AM Backport #16424 (Resolved): jewel: Journal needs to handle duplicate maintenance op tids
- 06:31 AM Backport #16425 (Resolved): jewel: rbd-mirror: potential race condition accessing local image jou...
- 06:31 AM Backport #16426 (Resolved): jewel: Possible race condition during journal transition from replay ...
- 06:31 AM Backport #16459 (Resolved): jewel: rbd-mirror should disable proxied maintenance ops for non-prim...
- 06:31 AM Backport #16460 (Resolved): jewel: Crash when utilizing advisory locking API functions
- 06:31 AM Backport #16482 (Resolved): jewel: Timeout sending mirroring notification shouldn't result in fai...
- 06:31 AM Backport #16483 (Resolved): jewel: Close journal and object map before flagging exclusive lock as...
- 06:30 AM Backport #16485 (Resolved): jewel: Whitelist EBUSY error from "snap unprotect" for journal replay
- 06:30 AM Backport #16486 (Resolved): jewel: Object map/fast-diff invalidated if journal replays the same s...
- 06:30 AM Backport #16514 (Resolved): jewel: Image removal doesn't necessarily clean up all rbd_mirroring e...
- 02:06 AM Bug #16394 (Rejected): Ceph RBD
- After an offline discussion, this turned out to be a problem in the user's code.
- 12:32 AM Bug #16394: Ceph RBD
- @Junming: ping
- 12:30 AM Bug #16179: rbd-mirror: image sync object map reload logs message
- 12:30 AM Bug #16741 (Need More Info): io getting stuck after advancing journal object set
- 12:27 AM Bug #16967 (Resolved): rbd bench-write: seg fault when "--io-size" is larger than image size
08/08/2016
- 10:05 PM Subtask #15239 (Resolved): Throttle in-flight image syncs to only a X concurrent
- *PR*: https://github.com/ceph/ceph/pull/9623
- 10:02 PM Subtask #14414 (Closed): Add new "exclusive lock released" journal event to librbd
- Resolved issue by tracking demotion/promotion via journal tags.
- 09:59 PM Subtask #15108 (Resolved): Periodically update the sync point object number during sync
- *PR*: https://github.com/ceph/ceph/pull/9699
- 08:01 PM Bug #16962 (Resolved): rbd-mirror: snap protect of non-layered image results in split-brain
- Attempting to protect a snapshot against an image that doesn't support layering results in an error:...
- 07:57 PM Bug #16887: ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or directory"
- @Kjetil: unfortunately we cannot just increase the length rbd_image_info_t::block_name_prefix [1] since that would br...
- 05:31 PM Bug #16887: ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or directory"
- Reading into this - librbd/rbd having generated slightly out-of-spec ids is our mess to deal with? (We get to maint...
- 06:42 PM Bug #16956 (Fix Under Review): rbd-mirror: FAILED assert(m_on_update_status_finish == nullptr)
- *PR*: https://github.com/ceph/ceph/pull/10613
- 02:38 PM Bug #16956 (Resolved): rbd-mirror: FAILED assert(m_on_update_status_finish == nullptr)
- 01:25 PM Bug #16855 (In Progress): rbd mirror: after promote, the mirror image often be up+error
- Actually, I was able to repeat the issue. Thanks.
- 01:09 PM Bug #16855: rbd mirror: after promote, the mirror image often be up+error
- @de lan: it doesn't look like debugging was enabled for the first log. Can you please provide the exact steps you per...
- 01:05 PM Bug #16179 (In Progress): rbd-mirror: image sync object map reload logs message
- 01:00 PM Feature #15706 (Resolved): Optionally limit the maximum number of snapshots for an image
- 12:46 PM Bug #16176 (Need More Info): objectmap does not show object existence correctly
- @Xinxin: I've tried to repeat your findings without success. Can you repeat with "debug rbd = 20" in your ceph client...
- 08:42 AM Bug #16519 (Resolved): librbd: potential use after free on refresh error
- 08:42 AM Bug #16517 (Resolved): TaskFinisher: cancel all tasks wait until finisher done
- 08:33 AM Bug #15225 (Resolved): Linking to -lrbd causes process startup times to balloon
- 08:33 AM Bug #15121 (Resolved): Protect against excessively large object map sizes
- 08:31 AM Backport #16952 (Resolved): hammer: ceph 10.2.2 rbd status on image format 2 returns "(2) No such...
- https://github.com/ceph/ceph/pull/10987
- 08:31 AM Backport #16951 (Resolved): jewel: ceph 10.2.2 rbd status on image format 2 returns "(2) No such ...
- https://github.com/ceph/ceph/pull/10652
- 08:31 AM Backport #16950 (Resolved): jewel: librbd/ExclusiveLock.cc: 197: FAILED assert(m_watch_handle != 0)
- https://github.com/ceph/ceph/pull/10827
- 08:28 AM Backport #15359 (Rejected): infernalis: Linking to -lrbd causes process startup times to balloon
- 08:28 AM Backport #15128 (Rejected): infernalis: Protect against excessively large object map sizes
- 08:20 AM Backport #16518 (Resolved): jewel: TaskFinisher: cancel all tasks wait until finisher done
- 08:20 AM Backport #16520 (Resolved): jewel: librbd: potential use after free on refresh error
08/07/2016
- 10:35 AM Bug #16923 (Pending Backport): librbd/ExclusiveLock.cc: 197: FAILED assert(m_watch_handle != 0)
- 10:34 AM Bug #16887 (Pending Backport): ceph 10.2.2 rbd status on image format 2 returns "(2) No such file...
08/06/2016
- 02:09 PM Bug #16176 (In Progress): objectmap does not show object existence correctly
- 02:08 PM Bug #16019 (Resolved): Failure in TestJournalReplay.Rename after injected socket failure
- Believe this is resolved by issue #16404
- 02:07 PM Bug #15947 (Resolved): Sporadic TestImageReplayer.NextTag failure
- Flagging as resolved by ticket #16708
08/05/2016
- 02:12 PM Bug #15947 (In Progress): Sporadic TestImageReplayer.NextTag failure
- 12:36 PM Bug #16921: rbd-nbd IO hang
- [ 3799.647869] nbd: registered device at major 43
[ 4068.478821] block nbd0: NBD_DISCONNECT
[ 4068.478895] block nb...
- 01:01 AM Bug #16887 (Fix Under Review): ceph 10.2.2 rbd status on image format 2 returns "(2) No such file...
- *PR*: https://github.com/ceph/ceph/pull/10581
Won't solve the issue for existing images since the API uses fixed w...
- 01:00 AM Bug #16887: ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or directory"
- This is a very old issue that could hit if the combination of a client's (global) instance id concatenated with a pot...
- 12:51 AM Bug #16887: ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or directory"
- My assertion that this is jewel-specific should be taken with a grain of salt; it may well be that we tipped over...
- 12:24 AM Bug #16887 (In Progress): ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or d...
- 12:21 AM Bug #16889 (Need More Info): Ceph 10.2.2 meet a Segmentation fault after rename a image with form...
- @de lan: can you install the debuginfo packages so that the backtrace can resolve the full call stack? It quite possi...
08/04/2016
- 08:27 PM Bug #16921 (New): rbd-nbd IO hang
- Running fsx_nbd on my local trusty VM works just fine. Examining the test logs, they appear to be missing the first c...
- 06:13 PM Bug #16921 (In Progress): rbd-nbd IO hang
- 02:45 PM Bug #16921 (Resolved): rbd-nbd IO hang
- It can be reproduced on jewel with *rbd/thrash/{base/install.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml ...
- 06:11 PM Bug #16923 (Fix Under Review): librbd/ExclusiveLock.cc: 197: FAILED assert(m_watch_handle != 0)
- *PR*: https://github.com/ceph/ceph/pull/10574
- 04:47 PM Bug #16923 (Resolved): librbd/ExclusiveLock.cc: 197: FAILED assert(m_watch_handle != 0)
- ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)
1: (()+0x24c4bb) [0x7fdebcbde4bb]
2: (()+0x...
08/03/2016
- 05:45 AM Backport #16904 (Resolved): jewel: journal should prefetch small chunks of the object during replay
- https://github.com/ceph/ceph/pull/10684
- 05:44 AM Backport #16903 (Resolved): jewel: Non-primary image is recording journal events during image sync
- https://github.com/ceph/ceph/pull/10797
- 05:44 AM Backport #16902 (Resolved): jewel: rbd-mirror: image deleter should use pool id + global image uu...
- https://github.com/ceph/ceph/pull/11433
- 03:25 AM Bug #16898 (Resolved): "TestLibRBD.UpdateFeatures" tests failed in upgrade:client-upgrade-inferna...
- run http://pulpito.ceph.com/kchai-2016-08-02_07:44:12-upgrade-wip-16507-jewel---basic-mira/
jobs: 346861
logs: http...
08/02/2016
- 07:16 PM Bug #16887: ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or directory"
- Patch is for illustrative purposes - not intended as a solution.
- 07:15 PM Bug #16887: ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or directory"
- Less broken formatting:...
- 07:14 PM Bug #16887: ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or directory"
- To un-break my previous comment slightly:
librbd's create_v2 makes the block prefix / id as: bid_ss << std::hex <<...
- 05:45 PM Bug #16887: ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or directory"
- Actually - if I read this correctly - somehow we consistently end up with block_prefix (including rbd_data.) that's 2...
- 01:59 AM Bug #16887: ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or directory"
- And - it's not consistent. I have one 10.2.2 cluster where this is a problem and one where it's not. Is the length of...
- 05:52 PM Bug #16227 (Resolved): rbd-mirror: volume rename followed by delete, not deleted on secondary
- 05:51 PM Bug #16538 (Pending Backport): rbd-mirror: image deleter should use pool id + global image uuid f...
- 05:50 PM Bug #16478 (Pending Backport): Non-primary image is recording journal events during image sync
- 10:48 AM Bug #16889: Ceph 10.2.2 meet a Segmentation fault after rename a image with format 1
- de lan wrote:
> my test result of v10.2.2:http://www.daisycloud.org:9091/teuthology-2016-08-02_16:01:23-rbd:cli-v10....
- 10:12 AM Bug #16889: Ceph 10.2.2 meet a Segmentation fault after rename a image with format 1
- my test result of v10.2.2:http://www.daisycloud.org:9091/teuthology-2016-08-02_16:01:23-rbd:cli-v10.2.2---basic-plana...
- 09:07 AM Bug #16889 (Can't reproduce): Ceph 10.2.2 meet a Segmentation fault after rename a image with for...
- Hi! When I test Ceph 10.2.2, it often hits a segmentation fault after renaming an image with format 1.
The CI test...
- 06:30 AM Bug #16855: rbd mirror: after promote, the mirror image often be up+error
- @Jason Dillaman
Hi.
I have reproduced it and taken the log.
It shows that the image is split-brained, and it didn't...
08/01/2016
- 11:54 PM Bug #16887: ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or directory"
- Summary: It looks like the block_name_prefix/rbd_id may accidentally have been extended by one byte, "rbd status" doe...
- 11:35 PM Bug #16887: ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or directory"
- For anybody else with the same problem, really-dirty-hack full of awful assumptions: rados -p rbd listwatchers rbd_he...
- 09:58 PM Bug #16887: ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or directory"
- As for - why this impacts us. We use/abuse a combination of advisory locking (hint) and watchers to paper over mandat...
- 08:43 PM Bug #16887: ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or directory"
- With --debug-rados=20/20
Ok - so there seems to be some disagreement about what the rbd_header. object should be n...
- 08:05 PM Bug #16887 (Resolved): ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or dire...
- Not quite sure when this started failing, it only ended up noticeable now. So, I can't tell you exactly when, but I'm...
07/30/2016
- 04:34 PM Bug #16223 (Pending Backport): journal should prefetch small chunks of the object during replay
- 03:25 AM Backport #16869 (Resolved): jewel: Discard hangs when 'rbd_skip_partial_discard' is enabled
- https://github.com/ceph/ceph/pull/10797
- 03:25 AM Backport #16868 (Resolved): jewel: Prevent the creation of a clone from a non-primary mirrored image
- https://github.com/ceph/ceph/pull/10650
- 03:24 AM Backport #16867 (Resolved): jewel: mkfs.xfs slow performance with discards and object map
- https://github.com/ceph/ceph/pull/10649
07/29/2016
- 03:56 PM Bug #15353 (Rejected): librbd: disable optimizations that result in pipelining guarded writes mix...
- Yes, now that we store write errors in the pg log this shouldn't be an issue.
- 03:04 PM Bug #15353 (Need More Info): librbd: disable optimizations that result in pipelining guarded writ...
- @Josh: do the recent PG log changes make this ticket obsolete?
- 03:07 PM Bug #15871 (Resolved): Replay of snap remove journal event caused assertion failure
- Fixed under ticket #16114 and was backported via same ticket.
- 03:05 PM Bug #15561 (Closed): RBD image can be listed but not opened
- @Ke Ke: feel free to re-open if you are still experiencing this issue.
- 02:55 PM Bug #16740 (In Progress): Cannot disable journaling or remove non-mirrored, "non-primary" image
- 01:31 PM Bug #16689 (Pending Backport): mkfs.xfs slow performance with discards and object map
- 12:34 PM Bug #16855 (Need More Info): rbd mirror: after promote, the mirror image often be up+error
- @de lan: can you please provide a debug log from the rbd-mirror daemon executing against cluster 'cluster1'? You need...
- 07:28 AM Bug #16855 (Resolved): rbd mirror: after promote, the mirror image often be up+error
- Hi!
When I do some demote and promote operations, the mirror image often becomes up+error....
- 04:28 AM Feature #13186: I hope retain snapshot of the rbd block, after rbd block export and import
- zouming zou wrote:
> (1)I have a rbd block foo_c01,that contain a snapshot foo_c01_s01.then,I export foo_c01 to foo_...
07/28/2016
- 08:46 PM Bug #16227 (Fix Under Review): rbd-mirror: volume rename followed by delete, not deleted on secon...
- *PR*: https://github.com/ceph/ceph/pull/10484
(will be backported with ticket #16538)
- 08:25 PM Bug #16227 (In Progress): rbd-mirror: volume rename followed by delete, not deleted on secondary
- 08:46 PM Bug #16538 (Fix Under Review): rbd-mirror: image deleter should use pool id + global image uuid f...
- *PR*: https://github.com/ceph/ceph/pull/10484
- 07:19 PM Feature #16171 (Fix Under Review): Request exclusive lock if owner sends -ENOTSUPP for proxied ma...
- *PR*: https://github.com/ceph/ceph/pull/10481
- 03:38 PM Feature #16171 (In Progress): Request exclusive lock if owner sends -ENOTSUPP for proxied mainten...
- 06:04 PM Bug #16386 (Pending Backport): Discard hangs when 'rbd_skip_partial_discard' is enabled
07/27/2016
- 12:23 PM Feature #15388 (Resolved): Inherit the Parent Image properties while Cloning a rbd Image
- 12:22 PM Feature #6626 (Resolved): openstack: cinder: allow users to delete snapshots that have clones
07/26/2016
- 02:56 PM Bug #16717: "[ FAILED ] TestLibRBD.TestCreateLsDeletePP" in upgrade:client-upgrade-jewel-distro...
- run: http://pulpito.ceph.com/teuthology-2016-07-20_02:45:02-upgrade:client-upgrade-jewel-distro-basic-smithi/
Jobs: ...
07/25/2016
- 06:35 PM Bug #16811 (New): [udev] /dev/rbd/<poolname>/<imagename> symlink should have an fsid in it
- It's possible to map images from different clusters on the same box. Currently, if (poolname, imagename) pair happen...
- 05:03 PM Bug #16478 (In Progress): Non-primary image is recording journal events during image sync
- 05:02 PM Bug #16741: io getting stuck after advancing journal object set
- @Mykola: do you think this is still an issue after the recent changes introduced in the reduce memory footprint branch?
- 04:50 PM Bug #16708 (Fix Under Review): Sporadic failure in TestImageReplayer.StartReplayAndWrite
- *PR*: https://github.com/ceph/ceph/pull/10432
- 05:58 AM Bug #16394: Ceph RBD
- junming rao wrote:
> @Jason
> These are debug logs from librbd;
> thanks.
- 05:56 AM Bug #16394: Ceph RBD
- @Jason
These are debug logs from librbd;
thanks.
07/24/2016
- 03:13 PM Cleanup #16130 (Resolved): Proxied operations shouldn't result in error messages if replayed
- 03:13 PM Bug #16449 (Pending Backport): Prevent the creation of a clone from a non-primary mirrored image
- 03:10 PM Bug #16717 (In Progress): "[ FAILED ] TestLibRBD.TestCreateLsDeletePP" in upgrade:client-upgrad...
- @Mykola: instead of waiting (possibly for a very long time) for the backport, perhaps just change the test [1] to ove...
07/23/2016
- 09:47 PM Bug #16799 (Duplicate): hammer-backports: rbd import fails in cache tier test
- 08:17 PM Bug #16799 (Duplicate): hammer-backports: rbd import fails in cache tier test
- test: rados/singleton-nomsgr/{all/export-after-evict.yaml}
what happens:...
07/22/2016
- 10:17 PM Bug #16708: Sporadic failure in TestImageReplayer.StartReplayAndWrite
- also in https://jenkins.ceph.com/job/ceph-pull-requests/9476/consoleFull#-1053578855d63714d2-c8d8-41fc-a9d4-8dee30be4c32
- 01:10 PM Bug #16708 (In Progress): Sporadic failure in TestImageReplayer.StartReplayAndWrite
- 06:21 AM Bug #16708: Sporadic failure in TestImageReplayer.StartReplayAndWrite
- spotted again in https://jenkins.ceph.com/job/ceph-pull-requests/9474/consoleFull#-1053578855d63714d2-c8d8-41fc-a9d4-...
- 08:37 PM Backport #16796 (Resolved): jewel: Renaming old format image results in "Transport endpoint is no...
- https://github.com/ceph/ceph/pull/10684
- 04:30 PM Feature #16780 (Resolved): rbd-mirror: use sparse read during image sync
- If 1 byte is used in a primary image backing object, the image sync process will read and write a full object size ch...
- 07:03 AM Bug #16773 (Resolved): FAILED assert(m_image_ctx.journal == nullptr)
- Hi!
When I test the CI suite rbd:valgrind/{base/install.yaml clusters/{fixed-1.yaml openstack.yaml} fs/xfs.yaml va...
07/21/2016
- 09:15 PM Bug #16529: "[ FAILED ] TestClsRbd.mirror_image" in upgrade:jewel-x-master-distro-basic-vps
- http://qa-proxy.ceph.com/teuthology/teuthology-2016-07-20_04:20:03-upgrade:jewel-x-master-distro-basic-vps/325448/teu...
- 06:56 PM Bug #16223 (Fix Under Review): journal should prefetch small chunks of the object during replay
- 03:48 PM Bug #15947: Sporadic TestImageReplayer.NextTag failure
- Another instance: https://jenkins.ceph.com/job/ceph-pull-requests/9411/console...
- 10:51 AM Bug #16555 (In Progress): librbd should permit removal of image being bootstrapped by rbd-mirror
- 08:05 AM Feature #15632 (Fix Under Review): Expose librbd API methods to directly acquire and release the ...
- PR: https://github.com/ceph/ceph/pull/9592
- 08:04 AM Bug #16321 (Pending Backport): Renaming old format image results in "Transport endpoint is not co...
- 07:58 AM Feature #14738 (Fix Under Review): Optionally unregister "laggy" journal clients
- PR: https://github.com/ceph/ceph/pull/10378
07/20/2016
- 05:16 AM Backport #16747 (Resolved): jewel: rbd-mirror: snap rename does not correctly replicate
- https://github.com/ceph/ceph/pull/10684
07/19/2016
- 06:49 PM Bug #16741 (Resolved): io getting stuck after advancing journal object set
- I can easily reproduce this issue by running a write bench on an image whose journal has a small object size:...
- 05:19 PM Bug #16622 (Pending Backport): rbd-mirror: snap rename does not correctly replicate
- 04:22 PM Bug #16740 (Resolved): Cannot disable journaling or remove non-mirrored, "non-primary" image
- Use rbd-mirror to create a non-primary image. If the 'rbd_mirroring' object is removed, this image will now be treate...
- 12:24 PM Bug #16529: "[ FAILED ] TestClsRbd.mirror_image" in upgrade:jewel-x-master-distro-basic-vps
- The upgrade test installs 10.2.0, upgrades the OSDs to a point release but keeps the test_cls_rbd at 10.2.0. There wa...
- 11:23 AM Bug #16707 (Fix Under Review): rbd-replay-prep doesn't record discard IO events
- *PR*: https://github.com/ceph/ceph/pull/10332
- 09:17 AM Bug #16717 (Fix Under Review): "[ FAILED ] TestLibRBD.TestCreateLsDeletePP" in upgrade:client-u...
- PR: https://github.com/ceph/ceph/pull/10348
- 07:37 AM Bug #16717 (In Progress): "[ FAILED ] TestLibRBD.TestCreateLsDeletePP" in upgrade:client-upgrad...
- 07:35 AM Bug #16717: "[ FAILED ] TestLibRBD.TestCreateLsDeletePP" in upgrade:client-upgrade-jewel-distro...
- I guess running hammer ceph_test_librbd_api with jewel librbd is intentional (to run only hammer tests using jewel li...
- 07:27 AM Bug #16717: "[ FAILED ] TestLibRBD.TestCreateLsDeletePP" in upgrade:client-upgrade-jewel-distro...
- The teuthology suite upgrades the client host from hammer to jewel and reverts ceph_test_librbd_api to hammer using this p...
- 07:27 AM Backport #16735 (Resolved): jewel: rbd-nbd does not properly handle resize notifications
- https://github.com/ceph/ceph/pull/10679
- 05:15 AM Bug #16223: journal should prefetch small chunks of the object during replay
- *PR*: https://github.com/ceph/ceph/pull/10341
- 05:07 AM Bug #15715 (Pending Backport): rbd-nbd does not properly handle resize notifications
07/18/2016
- 06:07 PM Bug #16689 (Fix Under Review): mkfs.xfs slow performance with discards and object map
- *PR*: https://github.com/ceph/ceph/pull/10332
- 05:37 PM Bug #16689 (In Progress): mkfs.xfs slow performance with discards and object map
- 02:38 PM Bug #16689: mkfs.xfs slow performance with discards and object map
- Attaching the raw lttng files for librbd runs. The new image is 10G instead of the 100G in the previous case, but all...
- 04:51 PM Bug #16717 (Resolved): "[ FAILED ] TestLibRBD.TestCreateLsDeletePP" in upgrade:client-upgrade-j...
- Run: http://pulpito.ceph.com/teuthology-2016-07-13_02:45:02-upgrade:client-upgrade-jewel-distro-basic-smithi/
Job: 3...
- 12:26 PM Bug #16708 (Resolved): Sporadic failure in TestImageReplayer.StartReplayAndWrite
- From Mykola:...
- 12:22 PM Bug #16223 (In Progress): journal should prefetch small chunks of the object during replay
- 11:53 AM Bug #16707 (Resolved): rbd-replay-prep doesn't record discard IO events
- 07:07 AM Bug #16529: "[ FAILED ] TestClsRbd.mirror_image" in upgrade:jewel-x-master-distro-basic-vps
- http://pulpito.ceph.com/teuthology-2016-07-17_04:20:03-upgrade:jewel-x-master-distro-basic-vps/319486/
http://pulpit...
07/17/2016
- 12:29 AM Bug #16689: mkfs.xfs slow performance with discards and object map
- I am attaching 2 files for lttng trace run captured while running: _mkfs.xfs -s size=4096 -f /dev/sda_.
* replay.1...
07/16/2016
- 02:28 PM Feature #15468: Feature : cephx user management for RBD images
- A few more things: I really want to hide as much information about the cluster as possible from such a user. Right now anyo...
07/15/2016
- 06:10 PM Documentation #16704 (Duplicate): RBD replay needs to document new configuration option
- The "rbd_tracing" configuration option needs to be set to true to enable tracing.
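For reference, the option is enabled on the client side in `ceph.conf` (a minimal sketch; the `[client]` placement assumes a standard client configuration):

```ini
[client]
    # required for librbd LTTng tracepoints, and hence for
    # rbd-replay-prep to have any events to consume
    rbd tracing = true
```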
- 09:36 AM Backport #16701 (Resolved): jewel: rbd-mirror: image sync throttle needs to use pool id + image i...
- https://github.com/ceph/ceph/pull/10678
07/14/2016
- 09:38 PM Bug #16689 (Resolved): mkfs.xfs slow performance with discards and object map
- Examples:
object-map enabled:
time mkfs.xfs -s size=4096 -f /dev/sda
real 9m10.882s
user 0m0.000s
sys 0m0.012s...
- 02:41 AM Bug #16623: segfault in unittest_rbd_mirror
- Always some problems when on the bleeding edge.
- 01:42 AM Bug #16623 (Resolved): segfault in unittest_rbd_mirror
- Damn, I thought about a problem in the gmock/gtest code but dismissed it as "unlikely".
Sure enough, it works fine ...
07/13/2016
- 04:43 PM Bug #16536 (Pending Backport): rbd-mirror: image sync throttle needs to use pool id + image id to...
- 03:16 PM Bug #16623: segfault in unittest_rbd_mirror
- @Brad: Josh just merged the upgraded googletest/googlemock changes. It works for me now, but can you pull the latest ...
- 03:05 PM Bug #16623: segfault in unittest_rbd_mirror
- Work for switching to the newer googletest framework was already in-progress:
*PR*: https://github.com/ceph/ceph/p...
- 02:58 PM Bug #16623: segfault in unittest_rbd_mirror
- Using the latest gmock/gtest environment appears to fix the issue.
- 02:45 PM Bug #16623: segfault in unittest_rbd_mirror
- Yup -- able to instantly reproduce under F24.
- 01:02 PM Bug #16623: segfault in unittest_rbd_mirror
- Turns out that I am on F23 -- I thought I had already upgraded. I found a similar bug report where GCC 6 + F24 resul...
- 10:07 AM Bug #16623: segfault in unittest_rbd_mirror
- I spent a lot of time looking at this today; I couldn't pin it down, but I do have some findings.
The following ...
- 02:45 AM Bug #16623: segfault in unittest_rbd_mirror
- Hi @Jason,
Following our discussion on IRC I did the following.
# git clone --recursive https://github.com/ceph...
07/12/2016
- 11:01 PM Bug #16623: segfault in unittest_rbd_mirror
- @jason yes, I can
- 01:13 PM Bug #16623 (Need More Info): segfault in unittest_rbd_mirror
- @Brad: are you able to repeat this issue?
- 03:01 PM Bug #16538 (In Progress): rbd-mirror: image deleter should use pool id + global image uuid for key
- 02:33 PM Backport #16658 (Resolved): jewel: rbd-mirror: gracefully handle being blacklisted
- https://github.com/ceph/ceph/pull/10684
- 04:12 AM Bug #16536 (Fix Under Review): rbd-mirror: image sync throttle needs to use pool id + image id to...
- *PR*: https://github.com/ceph/ceph/pull/10254
- 01:16 AM Bug #16536 (In Progress): rbd-mirror: image sync throttle needs to use pool id + image id to form...
- 03:10 AM Bug #16654: the option 'rbd_cache_writethrough_until_flush=true' doesn't work
- Sorry, the correct test data is:
1. Test in the VM with 'rbd_cache_writethrough_until_flush=false', the randwrit...
- 03:04 AM Bug #16654 (Resolved): the option 'rbd_cache_writethrough_until_flush=true' doesn't work
- Env: my Ceph cluster with 252 SATA OSDs. The test VM kernel version is '3.13.0-86-generic'.
test cmd: sudo fio -...
07/11/2016
- 08:48 PM Bug #16622 (Fix Under Review): rbd-mirror: snap rename does not correctly replicate
- *PR*: https://github.com/ceph/ceph/pull/10249