Activity
From 03/04/2018 to 04/02/2018
04/02/2018
- 03:48 PM Bug #23516: [rbd-mirror] entries_behind_master will not be zero after mirror over
- We'll get this status artifact fixed. You can use the admin socket of the rbd-mirror daemon to force a flush if desir...
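For reference, a flush can be requested through the rbd-mirror daemon's admin socket; the socket path and the per-image "rbd mirror flush" command below are assumptions for a typical deployment, so check the daemon's "help" output first:
$ ceph --admin-daemon /var/run/ceph/ceph-client.rbd-mirror.<id>.asok help
$ ceph --admin-daemon /var/run/ceph/ceph-client.rbd-mirror.<id>.asok rbd mirror flush poolclz/rbdclz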
- 02:17 AM Bug #23516: [rbd-mirror] entries_behind_master will not be zero after mirror over
- Another question: if I use fio to write for 60s, is there any other parameter to indicate that the mirror is over?...
- 02:09 AM Bug #23516: [rbd-mirror] entries_behind_master will not be zero after mirror over
- Thank you! @Jason Dillaman
I have another two questions. I find the non-primary image's used volume is equal to the primary ...
03/31/2018
- 07:14 PM Bug #23526 (Fix Under Review): "Message too long" error when appending journal
- PR: https://github.com/ceph/ceph/pull/21157
03/30/2018
- 08:36 PM Bug #22932 (Resolved): [rbd-mirror] infinite loop is possible when formatting the status message
- 08:35 PM Backport #22965 (Resolved): jewel: [rbd-mirror] infinite loop is possible when formatting the sta...
- 08:35 PM Bug #11502 (Resolved): "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
- 08:34 PM Backport #23065 (Resolved): jewel: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-m...
- 08:33 PM Bug #22120 (Resolved): possible deadlock in various maintenance operations
- 08:33 PM Backport #22175 (Resolved): jewel: possible deadlock in various maintenance operations
- 02:08 PM Bug #23528 (Resolved): rbd-nbd: EBUSY when do map
- When doing rbd-nbd map, if the Ceph service is not available, the code will wait on rados.connect(), unless killing...
- 02:00 PM Bug #23516: [rbd-mirror] entries_behind_master will not be zero after mirror over
- This is actually just an artifact of IO batching within rbd-mirror. The commit position is only updated after every 3...
- 02:57 AM Bug #23516 (Resolved): [rbd-mirror] entries_behind_master will not be zero after mirror over
- I have two ceph-12.2.4 clusters. The rbd-mirror daemon runs on cluster two. poolclz/rbdclz on cluster one is the primary image ...
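As a rough sketch of how the lag is observed: the standard "rbd mirror image status" command run against the secondary cluster reports the journal position (including entries_behind_master) in its description field; the cluster name below is illustrative:
$ rbd --cluster cluster-two mirror image status poolclz/rbdclz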
- 07:17 AM Bug #23526 (Resolved): "Message too long" error when appending journal
- When appending to a journal object the number of appends sent in one rados operation is not limited and we may hit os...
- 05:04 AM Backport #23525 (Resolved): jewel: is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic...
- https://github.com/ceph/ceph/pull/21207
- 05:04 AM Backport #23524 (Resolved): luminous: is_qemu_running in qemu_rebuild_object_map.sh and qemu_dyna...
- https://github.com/ceph/ceph/pull/21192
03/29/2018
- 08:22 PM Feature #23515 (New): [api] lock_acquire should expose setting the optional lock description
- This lock description can be used by iSCSI to describe the lock owner in a manner which can be programmatically interp...
- 08:20 PM Feature #23514 (New): [api] image-meta needs to support compare-and-write operation
- iSCSI would like to re-use image-meta to store port state and persistent group reservations. In the case of PGRs, it ...
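For context, image-meta today only offers unconditional get/set, so two gateways doing a read-modify-write can race; the key name below is purely illustrative:
$ rbd image-meta get rbd/image pr_state
$ rbd image-meta set rbd/image pr_state <new-value>
A compare-and-write primitive would let the second writer fail instead of silently overwriting the first.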
- 07:45 PM Bug #23502 (Pending Backport): is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_fea...
- 07:01 PM Bug #23502 (Fix Under Review): is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_fea...
- PR: https://github.com/ceph/ceph/pull/21131
- 10:22 AM Bug #23502 (In Progress): is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_features...
- 08:34 AM Bug #23502 (Resolved): is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_features.sh...
- http://qa-proxy.ceph.com/teuthology/smithfarm-2018-03-28_12:50:32-rbd-wip-jewel-backports-distro-basic-smithi/2330979...
- 03:44 PM Bug #23512 (Resolved): Allow removal of RBD images even if the journal is corrupt
- Allow removal of RBD images even if the journal is corrupt
Red Hat bug - https://bugzilla.redhat.com/show_bug.cgi?id...
- 02:55 PM Bug #18768 (Closed): rbd rm on empty volumes 2/3 sec per volume
- 02:26 PM Bug #18768: rbd rm on empty volumes 2/3 sec per volume
- This issue was submitted against kernel RBD in Ceph Jewel, but the kernel RBD implementation has changed. The object...
- 12:25 PM Backport #23508 (In Progress): jewel: test_admin_socket.sh may fail on wait_for_clean
- 12:22 PM Backport #23508 (Resolved): jewel: test_admin_socket.sh may fail on wait_for_clean
- https://github.com/ceph/ceph/pull/21125
- 12:24 PM Backport #23507 (In Progress): luminous: test_admin_socket.sh may fail on wait_for_clean
- 12:22 PM Backport #23507 (Resolved): luminous: test_admin_socket.sh may fail on wait_for_clean
- https://github.com/ceph/ceph/pull/21124
- 12:18 PM Bug #23499 (Pending Backport): test_admin_socket.sh may fail on wait_for_clean
- 09:40 AM Bug #23499 (Fix Under Review): test_admin_socket.sh may fail on wait_for_clean
- PR: https://github.com/ceph/ceph/pull/21116
- 08:11 AM Bug #23499 (Resolved): test_admin_socket.sh may fail on wait_for_clean
- See e.g: http://qa-proxy.ceph.com/teuthology/smithfarm-2018-03-28_12:50:32-rbd-wip-jewel-backports-distro-basic-smith...
- 12:14 PM Feature #23505: rbd zero copy
- Hi Jason,
do you mean the following commands?
rbd snap create
rbd snap protect
rbd clone
rbd flatten
rbd s...
- 11:54 AM Feature #23505: rbd zero copy
- This is basically implemented via "rbd clone".
- 10:03 AM Feature #23505 (Rejected): rbd zero copy
- rbd zero copy would just ask the Ceph cluster to "copy $rbd image" to a new image. This saves a lot of bandwidth back ...
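A minimal sketch of that clone-based flow, using hypothetical image names in the rbd pool:
$ rbd snap create rbd/src@copy
$ rbd snap protect rbd/src@copy
$ rbd clone rbd/src@copy rbd/dst
$ rbd flatten rbd/dst        # optional: detach the new image from its parent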
03/27/2018
- 09:34 PM Bug #22819 (Resolved): librbd::object_map::InvalidateRequest: 0x7fbd100beed0 should_complete: r=0
- 09:34 PM Backport #22857 (Resolved): luminous: librbd::object_map::InvalidateRequest: 0x7fbd100beed0 shoul...
- 03:15 PM Backport #22857: luminous: librbd::object_map::InvalidateRequest: 0x7fbd100beed0 should_complete:...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20253
merged
- 09:34 PM Backport #22964 (Resolved): luminous: [rbd-mirror] infinite loop is possible when formatting the ...
- 03:14 PM Backport #22964: luminous: [rbd-mirror] infinite loop is possible when formatting the status message
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20416
merged
- 09:33 PM Backport #23011 (Resolved): luminous: [journal] allocating a new tag after acquiring the lock sho...
- 03:13 PM Backport #23011: luminous: [journal] allocating a new tag after acquiring the lock should use on-...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20454
merged
- 09:33 PM Backport #23064 (Resolved): luminous: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basi...
- 03:13 PM Backport #23064: luminous: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20550
merged
- 03:12 PM Backport #23064: luminous: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20550
merged
- 09:33 PM Bug #22362 (Resolved): cluster resource agent ocf:ceph:rbd - wrong permissions
- 09:32 PM Backport #23152 (Resolved): luminous: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- 03:10 PM Backport #23152: luminous: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20628
merged
- 09:32 PM Bug #22961 (Resolved): [test] OpenStack tempest test is failing across all branches (again)
- 09:32 PM Backport #23177 (Resolved): luminous: [test] OpenStack tempest test is failing across all branche...
- 03:09 PM Backport #23177: luminous: [test] OpenStack tempest test is failing across all branches (again)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20715
merged
- 09:31 PM Bug #23388 (Resolved): [cls] rbd.group_image_list is incorrectly flagged as R/W
- 09:31 PM Backport #23407 (Resolved): luminous: [cls] rbd.group_image_list is incorrectly flagged as R/W
- 03:06 PM Backport #23407: luminous: [cls] rbd.group_image_list is incorrectly flagged as R/W
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20967
merged
- 09:30 PM Feature #23422 (Resolved): librados/snap_set_diff: don't assert on empty snapset
- 09:30 PM Backport #23423 (Resolved): luminous: librados/snap_set_diff: don't assert on empty snapset
- 03:05 PM Backport #23423: luminous: librados/snap_set_diff: don't assert on empty snapset
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20991
merged
- 03:26 PM Backport #23304 (Resolved): luminous: parent blocks are still seen after a whole-object discard
- 03:09 PM Backport #23304: luminous: parent blocks are still seen after a whole-object discard
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20860
merged
03/26/2018
- 12:29 PM Support #23461 (New): which rpm package src/journal/ is in?
- I want to write some log messages in src/journal and I want to know which rpm package I should replace after I rebuild the package.
Thank you!
- 09:53 AM Support #23456 (Resolved): where is the log of src\journal\JournalPlayer.cc
- 09:19 AM Support #23456: where is the log of src\journal\JournalPlayer.cc
- @Mykola Golub OK, thank you!
03/25/2018
- 06:53 AM Support #23456: where is the log of src\journal\JournalPlayer.cc
- The currently configured log location can be found running:
ceph-conf log_file
or
ceph --show-config | gre...
- 03:44 AM Support #23457 (Closed): rbd mirror: entries_behind_master will not be zero after mirror over
- I have two ceph-12.2.2 clusters. The rbd-mirror daemon runs on cluster two. poolclz/rbdclz on cluster one is the primary image ...
03/24/2018
- 01:36 PM Support #23456: where is the log of src\journal\JournalPlayer.cc
- Sorry, this is not a bug but a support request. I can't change the label now.
- 01:16 PM Support #23456 (Resolved): where is the log of src\journal\JournalPlayer.cc
- When I add "debug_journal = 20/20 debug_journaler = 20/20" in ceph.conf and restart the ceph-mon daemon, the ceph-mgr daemo...
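Note that src/journal (including JournalPlayer.cc) runs inside the librbd/rbd-mirror client process rather than in ceph-mon or ceph-mgr, so these debug settings only take effect for that client. A sketch of a ceph.conf fragment, with the log path shown only as an example:
[client]
    debug journal = 20/20
    debug journaler = 20/20
    log file = /var/log/ceph/$name.$pid.log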
03/23/2018
- 01:57 PM Bug #20054: librbd memory overhead when used with KVM
- Li Yichao wrote:
> I've done 3 experiments and think the overhead is not due to rbd cache.
>
> * Experiment is do...
- 01:49 PM Bug #20054: librbd memory overhead when used with KVM
- I've done 3 experiments and think the overhead is not due to rbd cache.
* Experiment is done based on the question...
- 02:58 AM Feature #23445 (Resolved): Flatten operation should use object map
- If the object is known to exist in the image, the copy-up operation can be skipped for that object.
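A sketch of the intended benefit, assuming an existing clone rbd/clone: with object-map enabled, flatten can consult the map and skip the copy-up for objects it already knows exist.
$ rbd feature enable rbd/clone object-map      # requires exclusive-lock on the image
$ rbd flatten rbd/clone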
03/21/2018
- 05:30 PM Backport #23423: luminous: librados/snap_set_diff: don't assert on empty snapset
- PR: https://github.com/ceph/ceph/pull/20991
- 06:03 AM Feature #23399 (Resolved): [clone v2] add snapshot-by-id API methods and rbd CLI support
03/20/2018
- 01:02 PM Backport #23423 (In Progress): luminous: librados/snap_set_diff: don't assert on empty snapset
- 12:36 PM Backport #23423 (Resolved): luminous: librados/snap_set_diff: don't assert on empty snapset
- https://github.com/ceph/ceph/pull/20991
- 11:51 AM Feature #23422 (Resolved): librados/snap_set_diff: don't assert on empty snapset
- master PR: https://github.com/ceph/ceph/pull/20648
- 05:43 AM Support #23401: rbd mirror lead to a potential risk that primary image can be remove from a remot...
- Understood, thank you very much.
- 05:36 AM Support #23401 (Closed): rbd mirror lead to a potential risk that primary image can be remove fro...
- It's not possible since the remote rbd-mirror daemon needs to be able to (1) register with the journal and (2) create...
- 04:59 AM Backport #23407 (In Progress): luminous: [cls] rbd.group_image_list is incorrectly flagged as R/W
- https://github.com/ceph/ceph/pull/20967
- 04:46 AM Feature #23399 (Fix Under Review): [clone v2] add snapshot-by-id API methods and rbd CLI support
- *PR*: https://github.com/ceph/ceph/pull/20966
03/19/2018
- 04:42 PM Backport #23407 (Resolved): luminous: [cls] rbd.group_image_list is incorrectly flagged as R/W
- https://github.com/ceph/ceph/pull/20967
- 10:01 AM Feature #22787 (In Progress): [librbd] deep copy should optionally support flattening a cloned image
- 07:38 AM Support #23401 (Closed): rbd mirror lead to a potential risk that primary image can be remove fro...
- When we use rbd mirror we must get class-write authority. But with this authority we can remove the primary rbd imag...
- 03:28 AM Feature #23399 (In Progress): [clone v2] add snapshot-by-id API methods and rbd CLI support
- 12:40 AM Feature #23399 (Resolved): [clone v2] add snapshot-by-id API methods and rbd CLI support
- A user should be able to set the snapshot by id for use w/ "rbd children". This is required to be able to list th...
- 12:37 AM Feature #23398 (Resolved): [clone v2] auto-delete trashed snapshot upon release of last child
- The "DetachChildRequest" state machine should be updated to release the self-managed snapshot if it was the last user...
03/17/2018
03/16/2018
- 06:41 PM Feature #20762 (New): rbdmap should support other block devices
- PR 19711 was for a different issue.
- 01:00 PM Bug #23388 (Fix Under Review): [cls] rbd.group_image_list is incorrectly flagged as R/W
- *PR*: https://github.com/ceph/ceph/pull/20939
- 12:57 PM Bug #23388 (Resolved): [cls] rbd.group_image_list is incorrectly flagged as R/W
- R/W operations cannot return any data as a payload. I suspect this is the cause of the transient failures like the fo...
03/14/2018
- 10:55 PM Bug #23184: rbd workunit return 0 response code for fail
- @Vasu: what's the status here?
- 02:55 AM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- Jason, is there a way to trigger a ceph health on a detection of slow operation? I realize this can be a logwatch ty...
03/13/2018
- 11:43 AM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- @Alex: it might not have been osd.4 that had any blocked ops. Hopefully "ceph health" should tell you which specific ...
- 01:22 AM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- Hi Jason, all dump_blocked_ops are zero. I ran them in a script against all OSDs, maybe too much time has passed?
"ops...
- 09:22 AM Backport #23304 (In Progress): luminous: parent blocks are still seen after a whole-object discard
- https://github.com/ceph/ceph/pull/20860
03/12/2018
- 07:53 PM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- ... and missing the log from osd.4 which was the only one mentioned in your problem description.
- 07:51 PM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- @Alex: can you please run "ceph daemon osd.<X> dump_blocked_ops"?
- 06:24 PM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- last set of OSD logs
- 06:23 PM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- second set of logs, looks like tracker stops at 10
- 06:23 PM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- Hi Jason, issue impeded by https://tracker.ceph.com/issues/23205#change-108877 - OSDs are not showing anything that I...
- 06:01 PM Bug #23263 (Need More Info): Journaling feature causes cluster to have slow requests and inconsis...
- @Alex: can you please dump out the slow requests from the OSDs to see what object is causing the issue?
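A quick sketch of how to collect that, assuming default admin-socket locations:
$ ceph health detail                      # names the OSDs reporting blocked/slow requests
$ ceph daemon osd.<N> dump_blocked_ops    # run on the host where osd.<N> is running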
- 09:14 AM Backport #23305 (Resolved): jewel: parent blocks are still seen after a whole-object discard
- https://github.com/ceph/ceph/pull/21219
- 09:14 AM Backport #23304 (Resolved): luminous: parent blocks are still seen after a whole-object discard
- https://github.com/ceph/ceph/pull/20860
03/11/2018
- 01:35 AM Bug #23285 (Pending Backport): parent blocks are still seen after a whole-object discard
- 01:26 AM Bug #23137: [upstream] rbd-nbd does not resize on Ubuntu
- I went ahead and built a custom kernel reverting the change https://github.com/torvalds/linux/commit/639812a1ed9bf49a...
03/09/2018
- 12:12 PM Cleanup #22960 (Resolved): [librbd] provide plug-in object-based cache interface
- 09:16 AM Bug #23285 (Fix Under Review): parent blocks are still seen after a whole-object discard
- https://github.com/ceph/ceph/pull/20809
- 09:15 AM Bug #23285 (Resolved): parent blocks are still seen after a whole-object discard
03/08/2018
- 06:48 PM Bug #22872: "rbd trash purge --threshold" should support data pool
- It's related to where the image data is stored -- which would be the bulk storage usage source for a trashed image.
- 06:46 PM Bug #22872: "rbd trash purge --threshold" should support data pool
- Well, what's the difference between the base pool and the data pool? I couldn't find anything that would tell me ...
- 03:20 PM Bug #22872: "rbd trash purge --threshold" should support data pool
- Create and fill images that utilize a data pool (i.e. rbd create --size 10G --data-pool=datapool rbd/image). If you m...
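Expanding that recipe into a rough end-to-end reproduction sketch (image name, fill method and threshold value are illustrative):
$ rbd create --size 10G --data-pool datapool rbd/image
$ rbd bench --io-type write rbd/image     # push data into the separate data pool
$ rbd trash mv rbd/image
$ rbd trash purge --threshold 0.9 rbd     # reportedly only accounts for the base pool's usage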
- 03:13 PM Bug #22872: "rbd trash purge --threshold" should support data pool
- I would like to try fixing this bug. Can I get a recipe to reproduce it?
- 01:42 PM Cleanup #16989: 'rbd mirror pool' commands should report error if action not executed
- ... and make sure you test all "rbd mirror pool XYZ" commands, not just the three listed cases.
- 12:59 PM Cleanup #16989: 'rbd mirror pool' commands should report error if action not executed
- Sorry, I've checked against master and not jewel.
- 12:46 PM Cleanup #16989: 'rbd mirror pool' commands should report error if action not executed
- Looks like it's already resolved -
$ ./bin/rbd mirror pool enable rbd pool
$ ./bin/rbd mirror pool enable rbd poo...
03/07/2018
- 08:53 PM Cleanup #22738 (Resolved): [test] separate v1 format tests from v2 format tests under teuthology
- 01:21 PM Bug #23263 (Closed): Journaling feature causes cluster to have slow requests and inconsistent PG
- First noticed this problem in our ESXi/iSCSI cluster, but now I can replicate it in lab with just Ubuntu:
1. Creat...
- 12:18 PM Bug #12219: rbd-fuse should respect standard Ceph configuration overrides and search paths
- Besides, ./ceph.conf and ~/.ceph/ceph.conf are also not sought when /etc/ceph/ceph.conf is missing.
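As a possible interim workaround (assuming rbd-fuse's -c and -p options behave like other Ceph tools), the config can be passed explicitly instead of relying on the search path; the mountpoint below is a placeholder:
$ rbd-fuse -c ./ceph.conf -p rbd /mnt/rbdfuse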
03/06/2018
03/05/2018
- 10:29 PM Feature #22873 (Resolved): [clone v2] removing an image should automatically delete snapshots in ...
- 10:28 PM Subtask #19298 (New): rbd-mirror scrub: new CLI action to request image verification
- Delayed pending the ability for the OSDs to deeply delete an object (and all associated snapshot revisions).
- 10:27 PM Cleanup #22960 (Fix Under Review): [librbd] provide plug-in object-based cache interface
- *PR*: https://github.com/ceph/ceph/pull/20682
- 10:25 PM Cleanup #22738 (Fix Under Review): [test] separate v1 format tests from v2 format tests under teu...
- *PR*: https://github.com/ceph/ceph/pull/20729
- 11:57 AM Backport #23177 (In Progress): luminous: [test] OpenStack tempest test is failing across all bran...
- https://github.com/ceph/ceph/pull/20715