Activity

From 03/04/2018 to 04/02/2018

04/02/2018

03:48 PM Bug #23516: [rbd-mirror] entries_behind_master will not be zero after mirror over
We'll get this status artifact fixed. You can use the admin socket of the rbd-mirror daemon to force a flush if desir... Jason Dillaman
02:17 AM Bug #23516: [rbd-mirror] entries_behind_master will not be zero after mirror over
Another question: if I use fio to write for 60s, is there any other parameter to indicate that the mirror is over?
...
liuzhong chen
02:09 AM Bug #23516: [rbd-mirror] entries_behind_master will not be zero after mirror over
Thank you, @Jason Dillaman!
I have another two questions. I find the non-primary image's used volume is equal to the primary ...
liuzhong chen
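The thread above checks whether mirroring has caught up by watching `entries_behind_master`. A minimal sketch of automating that check, assuming only that the status description contains a token of the form `entries_behind_master=N` (the field name matches `rbd mirror image status` output; the surrounding text is a placeholder):

```python
import re

def entries_behind_master(description: str) -> int:
    """Extract the entries_behind_master counter from an rbd-mirror
    status description string (assumed to contain 'entries_behind_master=N')."""
    m = re.search(r"entries_behind_master=(\d+)", description)
    if m is None:
        raise ValueError("entries_behind_master not found in description")
    return int(m.group(1))

# Hypothetical description line from `rbd mirror image status`:
desc = "master_position=[...], mirror_position=[...], entries_behind_master=561"
print(entries_behind_master(desc))  # 561
```

A script polling the status could treat a sustained value of 0 (after a flush) as "mirroring caught up".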

03/31/2018

07:14 PM Bug #23526 (Fix Under Review): "Message too long" error when appending journal
PR: https://github.com/ceph/ceph/pull/21157 Mykola Golub

03/30/2018

08:36 PM Bug #22932 (Resolved): [rbd-mirror] infinite loop is possible when formatting the status message
Nathan Cutler
08:35 PM Backport #22965 (Resolved): jewel: [rbd-mirror] infinite loop is possible when formatting the sta...
Nathan Cutler
08:35 PM Bug #11502 (Resolved): "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
Nathan Cutler
08:34 PM Backport #23065 (Resolved): jewel: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-m...
Nathan Cutler
08:33 PM Bug #22120 (Resolved): possible deadlock in various maintenance operations
Nathan Cutler
08:33 PM Backport #22175 (Resolved): jewel: possible deadlock in various maintenance operations
Nathan Cutler
02:08 PM Bug #23528 (Resolved): rbd-nbd: EBUSY when do map
When doing rbd-nbd map, if the Ceph service is not available,
the code will wait on rados.connect(), unless killing...
Li Wang
02:00 PM Bug #23516: [rbd-mirror] entries_behind_master will not be zero after mirror over
This is actually just an artifact of IO batching within rbd-mirror. The commit position is only updated after every 3... Jason Dillaman
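A toy illustration of the batching artifact described above: if the commit position only advances at batch boundaries (the batch size here is purely hypothetical, not rbd-mirror's actual value), a non-zero lag is reported even after every entry has been replayed, until a flush forces the position forward:

```python
def reported_lag(entries_replayed: int, entries_written: int, batch: int = 30) -> int:
    """Commit position advances only in multiples of `batch`; the
    remainder shows up as entries_behind_master until a flush."""
    committed = (entries_replayed // batch) * batch
    return entries_written - committed

print(reported_lag(100, 100))  # 10: still reported 'behind' despite full replay
```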
02:57 AM Bug #23516 (Resolved): [rbd-mirror] entries_behind_master will not be zero after mirror over
I have two ceph-12.2.4 clusters. The rbd-mirror daemon runs on cluster two. poolclz/rbdclz on cluster one is primary image ... liuzhong chen
07:17 AM Bug #23526 (Resolved): "Message too long" error when appending journal
When appending to a journal object the number of appends sent in one rados operation is not limited and we may hit os... Mykola Golub
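The fix direction implies capping how much is sent per rados operation. A language-agnostic sketch of the idea (the helper name and limit are illustrative, not Ceph's actual code):

```python
def split_appends(appends, max_bytes):
    """Group pending journal appends into batches whose total payload
    stays under max_bytes, so no single rados op exceeds the OSD's
    maximum message size. An oversized single append gets its own batch."""
    batches, current, size = [], [], 0
    for a in appends:
        if current and size + len(a) > max_bytes:
            batches.append(current)
            current, size = [], 0
        current.append(a)
        size += len(a)
    return batches + ([current] if current else [])

print(split_appends([b"aa", b"bbb", b"c"], 4))  # [[b'aa'], [b'bbb', b'c']]
```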
05:04 AM Backport #23525 (Resolved): jewel: is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic...
https://github.com/ceph/ceph/pull/21207 Nathan Cutler
05:04 AM Backport #23524 (Resolved): luminous: is_qemu_running in qemu_rebuild_object_map.sh and qemu_dyna...
https://github.com/ceph/ceph/pull/21192 Nathan Cutler

03/29/2018

08:22 PM Feature #23515 (New): [api] lock_acquire should expose setting the optional lock description
This lock description can be used by iSCSI to describe the lock owner in a manner which can be programmatically interp... Jason Dillaman
08:20 PM Feature #23514 (New): [api] image-meta needs to support compare-and-write operation
iSCSI would like to re-use image-meta to store port state and persistent group reservations. In the case of PGRs, it ... Jason Dillaman
07:45 PM Bug #23502 (Pending Backport): is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_fea...
Jason Dillaman
07:01 PM Bug #23502 (Fix Under Review): is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_fea...
PR: https://github.com/ceph/ceph/pull/21131 Mykola Golub
10:22 AM Bug #23502 (In Progress): is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_features...
Mykola Golub
08:34 AM Bug #23502 (Resolved): is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_features.sh...
http://qa-proxy.ceph.com/teuthology/smithfarm-2018-03-28_12:50:32-rbd-wip-jewel-backports-distro-basic-smithi/2330979... Mykola Golub
03:44 PM Bug #23512 (Resolved): Allow removal of RBD images even if the journal is corrupt
Allow removal of RBD images even if the journal is corrupt
Red Hat bug - https://bugzilla.redhat.com/show_bug.cgi?id...
Vikhyat Umrao
02:55 PM Bug #18768 (Closed): rbd rm on empty volumes 2/3 sec per volume
Jason Dillaman
02:26 PM Bug #18768: rbd rm on empty volumes 2/3 sec per volume
This issue was submitted against kernel RBD in Ceph Jewel, but the kernel RBD implementation has changed. The object... Ben England
12:25 PM Backport #23508 (In Progress): jewel: test_admin_socket.sh may fail on wait_for_clean
Nathan Cutler
12:22 PM Backport #23508 (Resolved): jewel: test_admin_socket.sh may fail on wait_for_clean
https://github.com/ceph/ceph/pull/21125 Nathan Cutler
12:24 PM Backport #23507 (In Progress): luminous: test_admin_socket.sh may fail on wait_for_clean
Nathan Cutler
12:22 PM Backport #23507 (Resolved): luminous: test_admin_socket.sh may fail on wait_for_clean
https://github.com/ceph/ceph/pull/21124 Nathan Cutler
12:18 PM Bug #23499 (Pending Backport): test_admin_socket.sh may fail on wait_for_clean
Jason Dillaman
09:40 AM Bug #23499 (Fix Under Review): test_admin_socket.sh may fail on wait_for_clean
PR: https://github.com/ceph/ceph/pull/21116 Mykola Golub
08:11 AM Bug #23499 (Resolved): test_admin_socket.sh may fail on wait_for_clean
See e.g: http://qa-proxy.ceph.com/teuthology/smithfarm-2018-03-28_12:50:32-rbd-wip-jewel-backports-distro-basic-smith... Mykola Golub
12:14 PM Feature #23505: rbd zero copy
Hi Jason,
do you mean the following commands?
rbd snap create
rbd snap protect
rbd clone
rbd flatten
rbd s...
Stefan Kooman
11:54 AM Feature #23505: rbd zero copy
This is basically implemented via "rbd clone". Jason Dillaman
10:03 AM Feature #23505 (Rejected): rbd zero copy
rbd zero copy would just ask the Ceph cluster to "copy $rbd image" to a new image. This saves a lot of bandwidth back ... Stefan Kooman
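The snapshot/clone workflow Stefan lists can be sketched end to end. Image and snapshot names below are placeholders; this is the standard clone mechanism the rejection points to, with the data handled server-side:

```shell
rbd snap create rbd/source@snap1        # point-in-time snapshot of the source
rbd snap protect rbd/source@snap1       # required before cloning (pre clone-v2)
rbd clone rbd/source@snap1 rbd/copy     # instant copy-on-write clone
rbd flatten rbd/copy                    # optional: detach the clone from its parent
```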

03/27/2018

09:34 PM Bug #22819 (Resolved): librbd::object_map::InvalidateRequest: 0x7fbd100beed0 should_complete: r=0
Nathan Cutler
09:34 PM Backport #22857 (Resolved): luminous: librbd::object_map::InvalidateRequest: 0x7fbd100beed0 shoul...
Nathan Cutler
03:15 PM Backport #22857: luminous: librbd::object_map::InvalidateRequest: 0x7fbd100beed0 should_complete:...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20253
merged
Yuri Weinstein
09:34 PM Backport #22964 (Resolved): luminous: [rbd-mirror] infinite loop is possible when formatting the ...
Nathan Cutler
03:14 PM Backport #22964: luminous: [rbd-mirror] infinite loop is possible when formatting the status message
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20416
merged
Yuri Weinstein
09:33 PM Backport #23011 (Resolved): luminous: [journal] allocating a new tag after acquiring the lock sho...
Nathan Cutler
03:13 PM Backport #23011: luminous: [journal] allocating a new tag after acquiring the lock should use on-...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20454
merged
Yuri Weinstein
09:33 PM Backport #23064 (Resolved): luminous: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basi...
Nathan Cutler
03:13 PM Backport #23064: luminous: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20550
merged
Yuri Weinstein
03:12 PM Backport #23064: luminous: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20550
merged
Yuri Weinstein
09:33 PM Bug #22362 (Resolved): cluster resource agent ocf:ceph:rbd - wrong permissions
Nathan Cutler
09:32 PM Backport #23152 (Resolved): luminous: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
Nathan Cutler
03:10 PM Backport #23152: luminous: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20628
merged
Yuri Weinstein
09:32 PM Bug #22961 (Resolved): [test] OpenStack tempest test is failing across all branches (again)
Nathan Cutler
09:32 PM Backport #23177 (Resolved): luminous: [test] OpenStack tempest test is failing across all branche...
Nathan Cutler
03:09 PM Backport #23177: luminous: [test] OpenStack tempest test is failing across all branches (again)
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20715
merged
Yuri Weinstein
09:31 PM Bug #23388 (Resolved): [cls] rbd.group_image_list is incorrectly flagged as R/W
Nathan Cutler
09:31 PM Backport #23407 (Resolved): luminous: [cls] rbd.group_image_list is incorrectly flagged as R/W
Nathan Cutler
03:06 PM Backport #23407: luminous: [cls] rbd.group_image_list is incorrectly flagged as R/W
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20967
merged
Yuri Weinstein
09:30 PM Feature #23422 (Resolved): librados/snap_set_diff: don't assert on empty snapset
Nathan Cutler
09:30 PM Backport #23423 (Resolved): luminous: librados/snap_set_diff: don't assert on empty snapset
Nathan Cutler
03:05 PM Backport #23423: luminous: librados/snap_set_diff: don't assert on empty snapset
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20991
merged
Yuri Weinstein
03:26 PM Backport #23304 (Resolved): luminous: parent blocks are still seen after a whole-object discard
Nathan Cutler
03:09 PM Backport #23304: luminous: parent blocks are still seen after a whole-object discard
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20860
merged
Yuri Weinstein

03/26/2018

12:29 PM Support #23461 (New): which rpm package src/journal/ is in?
I want to write some logs in src/journal, and I want to know which RPM package I should replace after I repack.
Thank you!
liuzhong chen
09:53 AM Support #23456 (Resolved): where is the log of src\journal\JournalPlayer.cc
Mykola Golub
09:19 AM Support #23456: where is the log of src\journal\JournalPlayer.cc
@Mykola Golub OK, thank you! liuzhong chen

03/25/2018

06:53 AM Support #23456: where is the log of src\journal\JournalPlayer.cc
The currently configured log location can be found running:
ceph-conf log_file
or
ceph --show-config | gre...
Mykola Golub
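The commands above are truncated in the feed; one way to query the configured log location (flag names assumed from the ceph-conf and ceph CLIs of that era, so treat them as a sketch):

```shell
# Print the effective log_file setting:
ceph-conf --show-config-value log_file
# Or dump the full effective configuration and filter it:
ceph --show-config | grep log_file
```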
03:44 AM Support #23457 (Closed): rbd mirror: entries_behind_master will not be zero after mirror over
I have two ceph-12.2.2 clusters. The rbd-mirror daemon runs on cluster two. poolclz/rbdclz on cluster one is primary image ... liuzhong chen

03/24/2018

01:36 PM Support #23456: where is the log of src\journal\JournalPlayer.cc
Sorry, this is not a bug but a support request. I can't change the label now. liuzhong chen
01:16 PM Support #23456 (Resolved): where is the log of src\journal\JournalPlayer.cc
When I add "debug_journal = 20/20 debug_journaler = 20/20" in ceph.conf and restart the ceph-mon daemon, ceph-mgr deamo... liuzhong chen

03/23/2018

01:57 PM Bug #20054: librbd memory overhead when used with KVM
Li Yichao wrote:
> I've done 3 experiments and think the overhead is not due to rbd cache.
>
> * Experiment is do...
Li Yichao
01:49 PM Bug #20054: librbd memory overhead when used with KVM
I've done 3 experiments and think the overhead is not due to rbd cache.
* Experiment is done based on the question...
Li Yichao
02:58 AM Feature #23445 (Resolved): Flatten operation should use object map
If the object is known to exist in the image, the copy-up operation can be skipped for that object. Jason Dillaman
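A toy sketch of the optimization described: consult the object map first and only issue copy-ups for objects not already known to exist in the child image (the state constants and names are illustrative, not librbd's):

```python
OBJECT_NONEXISTENT, OBJECT_EXISTS = 0, 1  # illustrative object-map states

def objects_needing_copyup(object_map):
    """During flatten, objects already present in the image can skip
    the copy-up from the parent entirely; only the rest need work."""
    return [idx for idx, state in enumerate(object_map)
            if state == OBJECT_NONEXISTENT]

print(objects_needing_copyup([1, 0, 1, 0]))  # [1, 3]
```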

03/21/2018

05:30 PM Backport #23423: luminous: librados/snap_set_diff: don't assert on empty snapset
PR: https://github.com/ceph/ceph/pull/20991 Mykola Golub
06:03 AM Feature #23399 (Resolved): [clone v2] add snapshot-by-id API methods and rbd CLI support
Mykola Golub

03/20/2018

01:02 PM Backport #23423 (In Progress): luminous: librados/snap_set_diff: don't assert on empty snapset
Mykola Golub
12:36 PM Backport #23423 (Resolved): luminous: librados/snap_set_diff: don't assert on empty snapset
https://github.com/ceph/ceph/pull/20991 Nathan Cutler
11:51 AM Feature #23422 (Resolved): librados/snap_set_diff: don't assert on empty snapset
master PR: https://github.com/ceph/ceph/pull/20648 Nathan Cutler
05:43 AM Support #23401: rbd mirror lead to a potential risk that primary image can be remove from a remot...
Understood, thank you very much. liuzhong chen
05:36 AM Support #23401 (Closed): rbd mirror lead to a potential risk that primary image can be remove fro...
It's not possible since the remote rbd-mirror daemon needs to be able to (1) register with the journal and (2) create... Jason Dillaman
04:59 AM Backport #23407 (In Progress): luminous: [cls] rbd.group_image_list is incorrectly flagged as R/W
https://github.com/ceph/ceph/pull/20967 Prashant D
04:46 AM Feature #23399 (Fix Under Review): [clone v2] add snapshot-by-id API methods and rbd CLI support
*PR*: https://github.com/ceph/ceph/pull/20966 Jason Dillaman

03/19/2018

04:42 PM Backport #23407 (Resolved): luminous: [cls] rbd.group_image_list is incorrectly flagged as R/W
https://github.com/ceph/ceph/pull/20967 Nathan Cutler
10:01 AM Feature #22787 (In Progress): [librbd] deep copy should optionally support flattening a cloned image
Mykola Golub
07:38 AM Support #23401 (Closed): rbd mirror lead to a potential risk that primary image can be remove fro...
when we use rbd mirror we must get class-write authority. But if we get this authority we can remove primary rbd imag... liuzhong chen
03:28 AM Feature #23399 (In Progress): [clone v2] add snapshot-by-id API methods and rbd CLI support
Jason Dillaman
12:40 AM Feature #23399 (Resolved): [clone v2] add snapshot-by-id API methods and rbd CLI support
A user should be able to set the snapshot by id for use w/ "rbd children". This is required to be able to list th... Jason Dillaman
12:37 AM Feature #23398 (Resolved): [clone v2] auto-delete trashed snapshot upon release of last child
The "DetachChildRequest" state machine should be updated to release the self-managed snapshot if it was the last user... Jason Dillaman

03/17/2018

08:17 PM Bug #23388 (Pending Backport): [cls] rbd.group_image_list is incorrectly flagged as R/W
Mykola Golub

03/16/2018

06:41 PM Feature #20762 (New): rbdmap should support other block devices
PR 19711 was for a different issue. Mykola Golub
01:00 PM Bug #23388 (Fix Under Review): [cls] rbd.group_image_list is incorrectly flagged as R/W
*PR*: https://github.com/ceph/ceph/pull/20939 Jason Dillaman
12:57 PM Bug #23388 (Resolved): [cls] rbd.group_image_list is incorrectly flagged as R/W
R/W operations cannot return any data as a payload. I suspect this is the cause of the transient failures like the fo... Jason Dillaman

03/14/2018

10:55 PM Bug #23184: rbd workunit return 0 response code for fail
@Vasu: what's the status here? Jason Dillaman
02:55 AM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
Jason, is there a way to trigger a ceph health on a detection of slow operation? I realize this can be a logwatch ty... Alex Gorbachev

03/13/2018

11:43 AM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
@Alex: it might not have been osd.4 that had any blocked ops. Hopefully "ceph health" should tell you which specific ... Jason Dillaman
01:22 AM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
Hi Jason, all dump_blocked_ops are zero, I ran them in a script against all OSDs, maybe too much time has passed?
"ops...
Alex Gorbachev
09:22 AM Backport #23304 (In Progress): luminous: parent blocks are still seen after a whole-object discard
https://github.com/ceph/ceph/pull/20860 Prashant D

03/12/2018

07:53 PM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
... and missing the log from osd.4 which was the only one mentioned in your problem description. Jason Dillaman
07:51 PM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
@Alex: can you please run "ceph daemon osd.<X> dump_blocked_ops"? Jason Dillaman
06:24 PM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
last set of OSD logs Alex Gorbachev
06:23 PM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
second set of logs, looks like tracker stops at 10 Alex Gorbachev
06:23 PM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
Hi Jason, issue impeded by https://tracker.ceph.com/issues/23205#change-108877 - OSDs are not showing anything that I... Alex Gorbachev
06:01 PM Bug #23263 (Need More Info): Journaling feature causes cluster to have slow requests and inconsis...
@Alex: can you please dump out the slow requests from the OSDs to see what object is causing the issue? Jason Dillaman
09:14 AM Backport #23305 (Resolved): jewel: parent blocks are still seen after a whole-object discard
https://github.com/ceph/ceph/pull/21219 Nathan Cutler
09:14 AM Backport #23304 (Resolved): luminous: parent blocks are still seen after a whole-object discard
https://github.com/ceph/ceph/pull/20860 Nathan Cutler

03/11/2018

01:35 AM Bug #23285 (Pending Backport): parent blocks are still seen after a whole-object discard
Jason Dillaman
01:26 AM Bug #23137: [upstream] rbd-nbd does not resize on Ubuntu
I went ahead and built a custom kernel reverting the change https://github.com/torvalds/linux/commit/639812a1ed9bf49a... Alex Gorbachev

03/09/2018

12:12 PM Cleanup #22960 (Resolved): [librbd] provide plug-in object-based cache interface
Mykola Golub
09:16 AM Bug #23285 (Fix Under Review): parent blocks are still seen after a whole-object discard
https://github.com/ceph/ceph/pull/20809 Ilya Dryomov
09:15 AM Bug #23285 (Resolved): parent blocks are still seen after a whole-object discard
Ilya Dryomov

03/08/2018

06:48 PM Bug #22872: "rbd trash purge --threshold" should support data pool
It's related to where the image data is stored -- which would be the bulk storage usage source for a trashed image. Jason Dillaman
06:46 PM Bug #22872: "rbd trash purge --threshold" should support data pool
Well, what's the difference between the base pool and the data pool? I couldn't find anything that would tell me ... Rishabh Dave
03:20 PM Bug #22872: "rbd trash purge --threshold" should support data pool
Create and fill images that utilize a data pool (i.e. rbd create --size 10G --data-pool=datapool rbd/image). If you m... Jason Dillaman
03:13 PM Bug #22872: "rbd trash purge --threshold" should support data pool
I would like to try fixing this bug. Can I get a recipe to reproduce it? Rishabh Dave
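The point of this bug: a trashed image's bulk data may live in a separate data pool, so a purge threshold computed from the base pool alone undercounts real usage. A toy illustration (function and parameter names are hypothetical, not the rbd CLI's internals):

```python
def purge_needed(base_pool_used, data_pool_used, capacity, threshold):
    """The --threshold check should count usage from both the base pool
    (image headers/metadata) and the data pool (bulk image data);
    counting only the base pool can miss the threshold entirely."""
    used_ratio = (base_pool_used + data_pool_used) / capacity
    return used_ratio >= threshold

# 1 GiB of metadata but 600 GiB of image data in the data pool:
print(purge_needed(1, 600, 1000, 0.5))  # True; the base pool alone would say False
```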
01:42 PM Cleanup #16989: 'rbd mirror pool' commands should report error if action not executed
... and make sure you test all "rbd mirror pool XYZ" commands, not just the three listed cases. Jason Dillaman
12:59 PM Cleanup #16989: 'rbd mirror pool' commands should report error if action not executed
Sorry, I've checked against master and not jewel. Rishabh Dave
12:46 PM Cleanup #16989: 'rbd mirror pool' commands should report error if action not executed
Looks like it's already resolved -
$ ./bin/rbd mirror pool enable rbd pool
$ ./bin/rbd mirror pool enable rbd poo...
Rishabh Dave

03/07/2018

08:53 PM Cleanup #22738 (Resolved): [test] separate v1 format tests from v2 format tests under teuthology
Mykola Golub
01:21 PM Bug #23263 (Closed): Journaling feature causes cluster to have slow requests and inconsistent PG
First noticed this problem in our ESXi/iSCSI cluster, but now I can replicate it in a lab with just Ubuntu:
1. Creat...
Alex Gorbachev
12:18 PM Bug #12219: rbd-fuse should respect standard Ceph configuration overrides and search paths
Besides, ./ceph.conf and ~/.ceph/ceph.conf are also not searched when /etc/ceph/ceph.conf is missing. Rishabh Dave

03/06/2018

08:43 PM Bug #23143 (Resolved): rbd-nbd can deadlock in logging thread
Sage Weil

03/05/2018

10:29 PM Feature #22873 (Resolved): [clone v2] removing an image should automatically delete snapshots in ...
Jason Dillaman
10:28 PM Subtask #19298 (New): rbd-mirror scrub: new CLI action to request image verification
Delayed pending the ability for the OSDs to deeply delete an object (and all associated snapshot revisions). Jason Dillaman
10:27 PM Cleanup #22960 (Fix Under Review): [librbd] provide plug-in object-based cache interface
*PR*: https://github.com/ceph/ceph/pull/20682 Jason Dillaman
10:25 PM Cleanup #22738 (Fix Under Review): [test] separate v1 format tests from v2 format tests under teu...
*PR*: https://github.com/ceph/ceph/pull/20729 Jason Dillaman
11:57 AM Backport #23177 (In Progress): luminous: [test] OpenStack tempest test is failing across all bran...
https://github.com/ceph/ceph/pull/20715 Prashant D