Activity
From 05/21/2018 to 06/19/2018
06/19/2018
- 09:32 PM Bug #24545 (Pending Backport): yet another case when deep copying a clone may result in invalid o...
- 07:49 PM Bug #24528 (Need More Info): Missing snapshot after upgrade from Kraken (11.2.0) to Luminous (12....
- @Remi: can you please provide the output from the following command?...
- 06:04 PM Bug #23853: Inefficent implementation - very long query time for "rbd ls -l" queries
- @Marc: was it much faster w/ the RBD cache disabled? If so, this issue has been addressed in Mimic since we automatic...
- 05:49 PM Bug #24479 (Rejected): import_export.sh fails on master
- @Kefu: this seems like it's a bug in your RADOS test since it installed v12.2.5 (sha1 cad919881333ac92274171586c827e0...
- 05:22 PM Bug #24106 (Need More Info): fail to create rbd device when the clusters' health is ok
- It appears your client cannot connect to host 192.168.159.140, port 6800. If you run 'rbd create disk01 --size 512 --...
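A quick way to confirm the connectivity problem described above (a sketch; the host and port come from the error message, and these are generic checks, not commands the reporter ran):

```shell
# Sketch: verify basic reachability before retrying 'rbd create'.
# 192.168.159.140:6800 is the OSD address from the error above.
ceph -s                       # can the client reach the monitors at all?
ceph osd tree                 # are the OSDs up from the cluster's view?
nc -zv 192.168.159.140 6800   # is the OSD's TCP port reachable from this client?
```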
06/18/2018
- 10:46 PM Subtask #24558 (Resolved): [namespaces] update create image state machine to support namespaces
- 10:45 PM Subtask #24410 (Fix Under Review): [namespaces] management APIs and CLI commands
- *PR*: https://github.com/ceph/ceph/pull/22608
06/16/2018
- 11:25 AM Bug #24545 (Fix Under Review): yet another case when deep copying a clone may result in invalid o...
- PR: https://github.com/ceph/ceph/pull/22587
- 10:42 AM Bug #24545 (Resolved): yet another case when deep copying a clone may result in invalid object map
- The case:
clone
discard an object at offset X
create snap1
shrink to a size less than X
create sna...
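The truncated case above might be reproduced with the rbd CLI roughly as follows (a sketch only: names, sizes, and the steps after the shrink, a second snapshot and a deep copy, are assumptions; the discard goes through rbd-nbd since the rbd CLI has no direct discard command):

```shell
# Assumed reproduction sketch; names and sizes are placeholders, and the
# steps after the shrink (snap2, deep copy) are guesses based on the title.
rbd create --size 64M parent
rbd snap create parent@base
rbd snap protect parent@base
rbd clone parent@base child                    # clone
DEV=$(sudo rbd-nbd map child)                  # map so we can issue a discard
sudo blkdiscard --offset $((32 * 1024 * 1024)) --length $((4 * 1024 * 1024)) "$DEV"
sudo rbd-nbd unmap "$DEV"
rbd snap create child@snap1                    # create snap1
rbd resize --allow-shrink --size 16M child     # shrink below the discard offset
rbd snap create child@snap2                    # create snap2 (assumed)
rbd deep copy child child-copy                 # deep copy (assumed final step)
rbd object-map check child-copy                # object map should come back clean
```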
06/15/2018
- 03:46 AM Bug #24528 (Closed): Missing snapshot after upgrade from Kraken (11.2.0) to Luminous (12.2.5)
- I just upgraded from Kraken to Luminous. But after the upgrade, I started noticing that openstack-compute-node servic...
06/14/2018
- 08:15 PM Bug #24102: snapshot of RBD image is found to be all zero.
- I cannot reproduce what you are seeing. Can you please provide exact CLI commands and CLI output?
06/13/2018
- 08:31 PM Backport #24519 (Resolved): mimic: [rbd-mirror] simple image map policy doesn't always level-load...
- https://github.com/ceph/ceph/pull/22892
- 08:22 PM Feature #24065 (Resolved): [fast-diff] interlock object-map/fast-diff features together
- 08:21 PM Bug #24161 (Pending Backport): [rbd-mirror] simple image map policy doesn't always level-load ins...
- 08:18 PM Bug #24516 (Resolved): [rbd-mirror] object map is getting invalidated during rbd-mirror-fsx-worku...
- ...
- 05:21 PM Bug #23955 (Resolved): librbd::Watcher's handle_rewatch_complete might fire after object destroyed
- 05:20 PM Backport #23985 (Resolved): luminous: librbd::Watcher's handle_rewatch_complete might fire after ...
- 05:20 PM Backport #24084 (Resolved): luminous: [rbd-mirror] bootstrap should not raise -EREMOTEIO if local...
- 05:19 PM Backport #24086 (Resolved): luminous: [rbd-mirror] potential races during PoolReplayer shut-down
- 02:51 PM Backport #23607 (Resolved): luminous: import-diff failed: (33) Numerical argument out of domain -...
- 02:50 PM Backport #23640 (Resolved): luminous: rbd: import with option --export-format fails to protect sn...
- 02:48 PM Backport #24155 (Resolved): mimic: [rbd-mirror] potential deadlock when running asok 'flush' command
- 02:44 PM Backport #24391 (In Progress): mimic: [rbd-mirror] entries_behind_master will not be zero after m...
- 02:37 AM Bug #24102: snapshot of RBD image is found to be all zero.
- Yes, this is it.
Jason Dillaman wrote:
> So you are saying that if you remove image X via 'rbd rm <base-tier-po...
06/12/2018
- 02:02 PM Bug #24506 (New): [qemu] use 'rbd_open_read_only' when media is flagged as read-only
- 12:50 PM Bug #24102: snapshot of RBD image is found to be all zero.
- So you are saying that if you remove image X via 'rbd rm <base-tier-pool>/<image-name>', running "ceph osd pool ls de...
- 06:44 AM Bug #24102: snapshot of RBD image is found to be all zero.
- This issue can be triggered by deleting an RBD image from a cache tiering pool. But the probability of the occurrence ...
- 08:01 AM Backport #24499 (Resolved): mimic: invalid object map when deep copying a resized (expanded) clone
- https://github.com/ceph/ceph/pull/22768
- 08:01 AM Backport #24498 (Resolved): luminous: "invalid object map" flag may be not stored on disk
- https://github.com/ceph/ceph/pull/22753
- 08:01 AM Backport #24497 (Closed): jewel: "invalid object map" flag may be not stored on disk
- 08:00 AM Backport #24496 (Resolved): mimic: "invalid object map" flag may be not stored on disk
- https://github.com/ceph/ceph/pull/22754
06/11/2018
- 02:17 PM Bug #24008 (Resolved): [rbd-mirror] potential races during PoolReplayer shut-down
- 02:16 PM Backport #23985: luminous: librbd::Watcher's handle_rewatch_complete might fire after object dest...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21938
merged
- 02:16 PM Backport #24084: luminous: [rbd-mirror] bootstrap should not raise -EREMOTEIO if local image stil...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22142
merged
- 02:15 PM Backport #24086: luminous: [rbd-mirror] potential races during PoolReplayer shut-down
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22172
merged
- 12:48 PM Bug #24102: snapshot of RBD image is found to be all zero.
- OK, then it's not enabled.
At this point, I don't have enough information to assist trying to determine how your s...
- 12:36 PM Bug #24102: snapshot of RBD image is found to be all zero.
- How to check whether the data-pool feature is enabled?
I use the command "rbd info reppool/<image-id>" on the imag...
- 12:12 PM Bug #24102: snapshot of RBD image is found to be all zero.
- Are you using the RBD data-pool feature on this image? The initial version of Luminous had a bug where the snapshots ...
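For anyone following along, the feature can be checked from 'rbd info' output (a sketch; the pool and image names are placeholders):

```shell
# Sketch: with a separate data pool, 'rbd info' lists a 'data-pool' feature
# bit and a 'data_pool:' line. Pool/image names are placeholders.
rbd info reppool/myimage | grep -E 'features|data_pool'
```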
- 09:57 AM Bug #24102: snapshot of RBD image is found to be all zero.
- On the OSD side, the snapshot is marked as "removed" (in the removed_snaps set). So the find_object_context function ...
06/10/2018
- 07:01 AM Bug #24102: snapshot of RBD image is found to be all zero.
- I fetched the rbd_object_map for the image and for the snapshot; the two object maps are different.
The comman...
06/09/2018
- 03:32 PM Bug #24399 (Pending Backport): invalid object map when deep copying a resized (expanded) clone
- 03:32 PM Bug #24434 (Pending Backport): "invalid object map" flag may be not stored on disk
- 12:35 PM Bug #24479 (Rejected): import_export.sh fails on master
- ...
- 11:42 AM Bug #24221 (Resolved): "rbd/import_export.sh" errors in upgrade:client-upgrade-luminous-mimic upg...
- 11:19 AM Backport #24476 (Resolved): mimic: "rbd trash purge --threshold" should support data pool
- https://github.com/ceph/ceph/pull/22891
06/08/2018
- 12:52 PM Bug #24102: snapshot of RBD image is found to be all zero.
- Yes, I am using cache tiering.
The Ceph version is 12.2.2.
The attachment is the log of running the command "...
06/07/2018
- 12:04 PM Backport #23607: luminous: import-diff failed: (33) Numerical argument out of domain - if image s...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21316
merged
- 12:03 PM Backport #24156: luminous: [rbd-mirror] potential deadlock when running asok 'flush' command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22180
merged
- 12:01 PM Backport #24378: luminous: [rbd-mirror] entries_behind_master will not be zero after mirror over
- Jason Dillaman wrote:
> https://github.com/ceph/ceph/pull/22370
merged
- 12:07 AM Bug #24425: create iscsi gateway stop with "The first gateway defined must be the local machine"
- Adding the 2nd gateway on the host itself works!
06/06/2018
- 09:03 PM Bug #24434 (Fix Under Review): "invalid object map" flag may be not stored on disk
- PR: https://github.com/ceph/ceph/pull/22444
- 01:27 PM Bug #24434 (Resolved): "invalid object map" flag may be not stored on disk
- The case:
- Create an image with object map enabled.
- Make the image object map smaller than expected (e.g. usin...
- 09:44 AM Bug #24433 (Rejected): caps doesn't support mix of "profile rbd" and "allow rw"
- I am trying to create a user who has the capabilities to access different RBD pools and a CephFS pool.
I put the caps like...
06/05/2018
- 08:00 PM Feature #23398 (In Progress): [clone v2] auto-delete trashed snapshot upon release of last child
- 07:17 PM Bug #24399 (Fix Under Review): invalid object map when deep copying a resized (expanded) clone
- PR: https://github.com/ceph/ceph/pull/22415
- 05:39 PM Bug #24425: create iscsi gateway stop with "The first gateway defined must be the local machine"
- Next, adding the 2nd gateway failed:
/iscsi-target...-igw/gateways> create node0019 192.168.57.17 skipchecks=true
OS v...
- 05:26 PM Bug #24425: create iscsi gateway stop with "The first gateway defined must be the local machine"
- Changing "ceph-gw-1" to the local hostname "node0018" made it work!
/iscsi-target...-igw/gateways> create node...
- 05:16 PM Bug #24425 (Closed): create iscsi gateway stop with "The first gateway defined must be the local ...
- [root@node0018-isci-gateway ~]# gwcli
/iscsi-target...-igw/gateways> ls
o- gateways ..................................
06/04/2018
- 05:35 PM Subtask #24411: [namespaces] rbd CLI should support "--namespace" optional and "[pool-name[/names...
- ... also "rbd ls", "rbd trash XYZ", "rbd group XYZ"
- 05:33 PM Subtask #24411 (Resolved): [namespaces] rbd CLI should support "--namespace" optional and "[pool-...
- Any image-level commands should support specifying a default librados::IoCtx namespace.
- 05:34 PM Subtask #24412 (Resolved): [namespaces] support v2 cloning across namespaces / disallow v1 clonin...
- 05:32 PM Subtask #24410 (Resolved): [namespaces] management APIs and CLI commands
- Namespaces need to be defined by the storage administrator prior to use. Create new librbd APIs and rbd CLI actions t...
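The management surface under discussion might look roughly like the following (a sketch of the proposed interface; the subcommand names and the pool/namespace/image addressing are assumptions, not the merged result):

```shell
# Hypothetical CLI sketch for namespace management; not the final interface.
rbd namespace create rbd/ns1           # admin defines a namespace in pool 'rbd'
rbd namespace ls rbd                   # list namespaces within the pool
rbd create --size 1G rbd/ns1/image1    # image commands address pool/namespace/image
rbd ls rbd/ns1                         # list images scoped to the namespace
rbd namespace remove rbd/ns1           # only valid once the namespace is empty
```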
- 05:29 PM Tasks #24409 (New): [namespaces] implement namespace support within RBD
- 01:38 PM Bug #22872 (Pending Backport): "rbd trash purge --threshold" should support data pool
- 04:59 AM Bug #24399 (Resolved): invalid object map when deep copying a resized (expanded) clone
- Clone an image, resize to a large size, deep copy. The destination image will have invalidated object map, and the er...
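The reported sequence can be sketched as (names and sizes are placeholders):

```shell
# Sketch of the reported sequence; names/sizes are placeholders.
rbd create --size 1G parent
rbd snap create parent@base
rbd snap protect parent@base
rbd clone parent@base child
rbd resize --size 4G child        # expand the clone
rbd deep copy child child-copy
rbd object-map check child-copy   # reported to find the destination map invalid
```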
06/02/2018
- 07:40 AM Support #23677: rbd mirror: is there a method to calculate data size that have not been mirror to...
- This support ticket can be closed if needed.
- 07:05 AM Backport #24392 (Closed): jewel: [rbd-mirror] entries_behind_master will not be zero after mirror...
- 07:05 AM Backport #24391 (Resolved): mimic: [rbd-mirror] entries_behind_master will not be zero after mirr...
- https://github.com/ceph/ceph/pull/22549
- 07:04 AM Backport #24390 (Resolved): mimic: [rbd-mirror] daemon failed to stop on active/passive test case
- https://github.com/ceph/ceph/pull/22667
- 06:50 AM Bug #23516 (Pending Backport): [rbd-mirror] entries_behind_master will not be zero after mirror over
- 06:48 AM Bug #24169 (Pending Backport): [rbd-mirror] daemon failed to stop on active/passive test case
06/01/2018
- 05:45 PM Backport #24388 (Resolved): mimic: Allow removal of RBD images even if the journal is corrupt
- https://github.com/ceph/ceph/pull/22662
- 05:45 PM Backport #24387 (Resolved): luminous: Allow removal of RBD images even if the journal is corrupt
- https://github.com/ceph/ceph/pull/23595
- 03:21 PM Bug #23853: Inefficent implementation - very long query time for "rbd ls -l" queries
- Sorry, I missed that mail.
I am currently on vacation until 9.6.2018.
Nevertheless I created an additional trace f...
- 01:26 PM Bug #23512 (Pending Backport): Allow removal of RBD images even if the journal is corrupt
- 01:09 PM Bug #23516 (Fix Under Review): [rbd-mirror] entries_behind_master will not be zero after mirror over
- 01:07 PM Bug #23516 (Pending Backport): [rbd-mirror] entries_behind_master will not be zero after mirror over
- 01:08 PM Backport #24378 (In Progress): luminous: [rbd-mirror] entries_behind_master will not be zero afte...
- 01:08 PM Backport #24378 (Resolved): luminous: [rbd-mirror] entries_behind_master will not be zero after m...
- https://github.com/ceph/ceph/pull/22370
05/31/2018
- 11:20 PM Bug #24169 (Fix Under Review): [rbd-mirror] daemon failed to stop on active/passive test case
- *PR*: https://github.com/ceph/ceph/pull/22348
- 07:59 PM Bug #23189 (Need More Info): snapshot size 0 and image size 0
- Is this still an issue on the latest version of Jewel clients?
- 07:58 PM Bug #23184 (Can't reproduce): rbd workunit return 0 response code for fail
- 07:58 PM Bug #22363 (Resolved): Watchers are lost on active RBD image with running client
- I believe this was resolved by #19957
- 07:55 PM Bug #18716 (Won't Fix): xfs/005 failure with rbd on qemu + ubuntu 14.04
- Ubuntu 14.04 is no longer supported.
- 07:55 PM Bug #16019 (Resolved): Failure in TestJournalReplay.Rename after injected socket failure
- I believe this is resolved by #23068
- 07:51 PM Bug #9380 (Resolved): rbd cache sizing is per image
- In Mimic, only a single, merged cache layer is instantiated.
- 07:50 PM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- @Alex: any update on this issue? Any chance you had a large RBD writeback cache configured (see #23526)?
- 07:46 PM Bug #23853: Inefficent implementation - very long query time for "rbd ls -l" queries
- @Marc: any update?
- 01:32 PM Bug #23516 (Fix Under Review): [rbd-mirror] entries_behind_master will not be zero after mirror over
- *PR*: https://github.com/ceph/ceph/pull/22342
- 10:21 AM Bug #24161 (Fix Under Review): [rbd-mirror] simple image map policy doesn't always level-load ins...
05/30/2018
- 08:57 PM Bug #20860 (Resolved): [rbd-mirror] asok hook names not updated when image is renamed
- 08:57 PM Support #22649 (Closed): rbd-mirror use ceph public_network
- 08:56 PM Bug #23516 (In Progress): [rbd-mirror] entries_behind_master will not be zero after mirror over
- 08:54 PM Feature #18765 (Resolved): rbd-mirror: add support for active/active daemon instances
- 08:53 PM Bug #24169 (In Progress): [rbd-mirror] daemon failed to stop on active/passive test case
- 08:53 PM Bug #24309 (Duplicate): [rbd-mirror] daemon does not always exit when sending TERM signal
- Whoops -- duplicate of #24169
- 02:37 PM Bug #23512 (Fix Under Review): Allow removal of RBD images even if the journal is corrupt
- *PR*: https://github.com/ceph/ceph/pull/22327
- 01:34 PM Bug #23512 (In Progress): Allow removal of RBD images even if the journal is corrupt
05/29/2018
- 03:30 PM Bug #24161: [rbd-mirror] simple image map policy doesn't always level-load instances
- PR https://github.com/ceph/ceph/pull/22304
Adding images after a bunch of image removals should pick an instance w...
- 12:16 PM Feature #24235: Add new command - ceph rbd-mirror status like ceph fs(mds) status
- "ceph status" already lists which rbd-mirror daemons are running. There is no notion of persistent rbd-mirror daemons...
05/28/2018
- 04:56 PM Feature #24235: Add new command - ceph rbd-mirror status like ceph fs(mds) status
- Thanks, Jason, for the feedback. Actually, the motive is to identify how many rbd-mirror daemons are running in this cluster and ...
05/25/2018
- 05:26 PM Bug #24309 (Duplicate): [rbd-mirror] daemon does not always exit when sending TERM signal
- The rbd-mirror-thrash agent in teuthology randomly and periodically sends a TERM signal to one or more daemons. It's ...
- 05:16 PM Bug #24221: "rbd/import_export.sh" errors in upgrade:client-upgrade-luminous-mimic upgrade:client...
- https://github.com/ceph/ceph/pull/22230
05/24/2018
- 02:33 PM Backport #24203 (In Progress): mimic: Prevent the use of internal feature bits from outside cls/rbd
- https://github.com/ceph/ceph/pull/22222
05/23/2018
- 02:16 PM Backport #24156 (In Progress): luminous: [rbd-mirror] potential deadlock when running asok 'flush...
- https://github.com/ceph/ceph/pull/22180
- 12:42 PM Feature #24235 (Need More Info): Add new command - ceph rbd-mirror status like ceph fs(mds) status
- What's the end-goal here? The issue is that in your example, those all query status via the monitor. We really don't ...
- 08:30 AM Backport #24086 (In Progress): luminous: [rbd-mirror] potential races during PoolReplayer shut-down
- https://github.com/ceph/ceph/pull/22172
05/22/2018
- 08:52 PM Feature #24235 (Rejected): Add new command - ceph rbd-mirror status like ceph fs(mds) status
- Add new command - ceph rbd-mirror status like ceph fs(mds) status
For more information please check - https://tracke...
- 12:32 PM Feature #24226 (Resolved): Image remove state machine should move image to trash as first step
- ... if the cluster minimum client release is set to luminous or later (i.e. trash is guaranteed to be supported). Thi...
- 03:44 AM Backport #24084 (In Progress): luminous: [rbd-mirror] bootstrap should not raise -EREMOTEIO if lo...
- https://github.com/ceph/ceph/pull/22142
- 01:43 AM Bug #24221: "rbd/import_export.sh" errors in upgrade:client-upgrade-luminous-mimic upgrade:client...
- @Jason: can you please take a look?
- 01:43 AM Bug #24221 (Resolved): "rbd/import_export.sh" errors in upgrade:client-upgrade-luminous-mimic upg...
- Runs (assuming the same root issue):
http://pulpito.ceph.com/teuthology-2018-05-22_00:21:15-upgrade:client-upgrade-l...
05/21/2018
- 08:03 PM Bug #23789 (Resolved): luminous: "cluster [WRN] Manager daemon x is unresponsive. No standby daem...
- 07:39 PM Bug #23789 (Fix Under Review): luminous: "cluster [WRN] Manager daemon x is unresponsive. No stan...
- https://github.com/ceph/ceph/pull/22128
- 12:41 PM Bug #24182 (Closed): luminous->mimic: qa/workunits/rbd/test_librbd_python.sh failure during upgrade
- Tests were pulling from an out-of-date luminous branch:
> 2018-05-18T20:44:09.237 INFO:teuthology.orchestra.run.ov...
- 12:30 PM Bug #24182 (In Progress): luminous->mimic: qa/workunits/rbd/test_librbd_python.sh failure during ...
- 08:48 AM Backport #24203 (Resolved): mimic: Prevent the use of internal feature bits from outside cls/rbd
- https://github.com/ceph/ceph/pull/22222
- 07:00 AM Bug #24165 (Pending Backport): Prevent the use of internal feature bits from outside cls/rbd