Activity
From 06/19/2019 to 07/18/2019
07/18/2019
- 09:05 PM Bug #39405 (Pending Backport): ceph_volume_client: python program embedded in test_volume_client....
- 09:04 PM Bug #39510 (Pending Backport): test_volume_client: test_put_object_versioned is unreliable
- 06:06 PM Feature #40036 (Resolved): mgr / volumes: support asynchronous subvolume deletes
- 06:06 PM Backport #40796 (Resolved): nautilus: mgr / volumes: support asynchronous subvolume deletes
- 05:35 PM Bug #40821 (Resolved): osdc: objecter ops output does not have useful time information
- ...
- 01:22 PM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- kernel data structure for this...
- 07:52 AM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- "echo 2 > /proc/sys/vm/drop_caches" can release the reference and work around the issue....
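The workaround above is the standard Linux drop-caches knob: writing "2" asks the kernel to drop reclaimable dentries and inodes, and "3" would also drop the page cache. As a sketch only (not Ceph code), a Python equivalent of the one-liner, with the target path parameterized so it can be exercised without root:

```python
from pathlib import Path

# Real procfs control file on Linux; writing to it requires root.
DROP_CACHES = Path("/proc/sys/vm/drop_caches")

def drop_dentry_inode_caches(ctl: Path = DROP_CACHES) -> None:
    # "2" drops reclaimable dentries and inodes, releasing cached inode
    # references like the ones pinned in this bug; "3" would additionally
    # drop the page cache.
    ctl.write_text("2\n")
```

In practice one would run `sync` first so dirty pages are flushed before dropping caches; dropping caches is non-destructive but only frees clean, reclaimable objects.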
- 06:16 AM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- Analyzing the log further, it seems there is an overflow in ll_ref.
From the log below, it is pretty clear the pattern is 2 _ll...
- 01:00 PM Feature #40811 (Fix Under Review): mds: add command that modify session metadata
- 07:34 AM Feature #40811 (Resolved): mds: add command that modify session metadata
- 10:52 AM Feature #40617 (Fix Under Review): mgr/volumes: Add `ceph fs subvolumegroup getpath` command
07/17/2019
- 07:56 PM Backport #40807 (In Progress): luminous: mds: msg weren't destroyed before handle_client_reconnec...
- 07:53 PM Backport #40807 (Resolved): luminous: mds: msg weren't destroyed before handle_client_reconnect r...
- https://github.com/ceph/ceph/pull/29097
- 07:45 PM Backport #39233: mimic: kclient: nofail option not supported
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28090
merged
- 07:44 PM Backport #39472: mimic: mds: fail to resolve snapshot name contains '_'
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28186
merged
- 07:44 PM Backport #39669: mimic: mds: output lock state in format dump
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28274
merged
- 07:44 PM Backport #39679: mimic: pybind: add the lseek() function to pybind of cephfs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28337
merged
- 07:43 PM Backport #39689: mimic: mds: error "No space left on device" when create a large number of dirs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28381
merged
- 07:43 PM Backport #40168: mimic: client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the nano...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28501
merged
- 07:41 PM Backport #40342: mimic: mds: fix corner case of replaying open sessions
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28579
merged
- 07:40 PM Backport #40042: mimic: avoid trimming too many log segments after mds failover
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28650
merged
- 10:40 AM Bug #40800 (Fix Under Review): ceph_volume_client: to_bytes converts NoneType object str
- 10:04 AM Bug #40800 (Resolved): ceph_volume_client: to_bytes converts NoneType object str
- Precisely, it happens here - https://github.com/ceph/ceph/blob/master/src/pybind/ceph_volume_client.py#L29-L32
IMO...
- 10:12 AM Backport #40796 (In Progress): nautilus: mgr / volumes: support asynchronous subvolume deletes
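The Bug #40800 report above boils down to a bytes-conversion helper that stringifies its argument before encoding, so None becomes the literal bytes b'None' instead of staying None. A minimal sketch of the buggy pattern and a guarded variant (hypothetical code illustrating the report, not the actual ceph_volume_client implementation):

```python
def to_bytes_buggy(param):
    # Falling back to str() silently turns None into b'None'.
    if isinstance(param, str):
        return param.encode()
    return str(param).encode()

def to_bytes_fixed(param):
    # Preserve None so callers can distinguish "no value" from a real string.
    if param is None:
        return None
    if isinstance(param, bytes):
        return param
    if isinstance(param, str):
        return param.encode()
    return str(param).encode()

print(to_bytes_buggy(None))   # b'None'
print(to_bytes_fixed(None))   # None
```

The key design point is that None must pass through untouched; downstream C bindings typically treat None as "argument absent", while b'None' is a perfectly valid (and wrong) path component.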
- 02:39 AM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- The affected inode is a symlink...
07/16/2019
- 08:41 PM Backport #40796 (Resolved): nautilus: mgr / volumes: support asynchronous subvolume deletes
- https://github.com/ceph/ceph/pull/29079
- 08:40 PM Feature #40036 (Pending Backport): mgr / volumes: support asynchronous subvolume deletes
- 12:57 PM Cleanup #40787 (Fix Under Review): mds: reorg CInode header
- 12:35 PM Cleanup #40787 (Resolved): mds: reorg CInode header
- 11:25 AM Bug #40695 (Fix Under Review): mds: rework PurgeQueue on_error handler to avoid mds_lock state check
- 07:35 AM Bug #40784 (Resolved): mds: metadata changes may be lost when MDS is restarted
- Assume a client copied some object to another location in Ceph. When the early reply was received, the cp command would ...
- 04:07 AM Documentation #39620 (Resolved): doc: MDS and metadata pool hardware requirements/recommendations
07/15/2019
- 02:03 PM Cleanup #40694 (Resolved): mds: move MDSDaemon conf change handling to MDSRank finisher
- 01:58 PM Cleanup #40694 (Fix Under Review): mds: move MDSDaemon conf change handling to MDSRank finisher
- 01:49 PM Bug #40613 (Need More Info): kclient: .handle_message_footer got old message 1 <= 648 0x558ceadea...
- Waiting to see if this happens again.
- 01:46 PM Bug #40476 (Fix Under Review): cephfs-shell: cd with no args has no effect
- 01:45 PM Bug #40603 (Fix Under Review): mds: disallow setting ceph.dir.pin value exceeding max rank id
07/13/2019
- 03:41 PM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- hmm, interesting.
-2147483646 = 0x80000002, it is more like a memory corruption?
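The overflow hypothesis in the comments above can be checked with a small sketch (illustration only, not Ceph code): 0x80000002 reinterpreted as a signed 32-bit integer is -2147483646, exactly the kind of value a reference counter shows after wrapping past INT_MAX.

```python
import ctypes

def as_int32(value: int) -> int:
    """Interpret a raw 32-bit pattern as a signed int, the way a C int would."""
    return ctypes.c_int32(value).value

# 0x80000002 read as a signed 32-bit integer is a large negative number,
# consistent with a counter that wrapped rather than random corruption.
print(as_int32(0x80000002))      # -2147483646

# Incrementing a signed 32-bit counter at INT_MAX wraps to INT_MIN:
print(as_int32(0x7FFFFFFF + 1))  # -2147483648
```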
- 03:13 PM Bug #40775 (Resolved): /src/include/xlist.h: 77: FAILED assert(_size == 0)
- It seems like we handle the inode ref wrongly? The number looks like an overflow.
-12> 2019-07-13 00:49:44.582 7...
- 03:12 AM Bug #40746: client: removing dir reports "not empty" issue due to client side filled wrong dir of...
- I don't see any problem. The last parameter of fill_dirent() should be the offset for the next readdir. With your change, offset o...
- 12:17 AM Bug #40472 (Pending Backport): MDSMonitor: use stringstream instead of dout for mds repaired
- 12:16 AM Bug #40489 (Pending Backport): cephfs-shell: name 'files' is not defined error in do_rm()
- 12:15 AM Bug #40615 (Pending Backport): ceph-fuse: mount does not support the fallocate()
- 12:14 AM Bug #40679 (Pending Backport): cephfs-shell: TypeError in poutput
07/12/2019
- 11:16 PM Bug #40773 (Resolved): qa: 'ceph osd require-osd-release nautilus' fails
- ...
- 05:27 PM Bug #40746 (Fix Under Review): client: removing dir reports "not empty" issue due to client side ...
- 08:30 AM Bug #40746 (Resolved): client: removing dir reports "not empty" issue due to client side filled w...
- Recently, while using nfs-ganesha + cephfs, we found some "directory not empty" errors when removing
existing directories....
07/11/2019
- 07:50 PM Cleanup #40742 (Resolved): mds: reorg CDir header
- 07:30 PM Feature #40401 (Fix Under Review): mgr/volumes: allow/deny r/rw access of auth IDs to subvolume a...
- 02:53 PM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- Zheng Yan wrote:
> is cephfs exported to nfs
No, it is ceph-fuse (13.2.5).
It seems like customer has ~10 nod...
- 01:35 PM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- is cephfs exported to nfs
- 03:55 AM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- I think we need to try to reclaim the caps in this case. I am seeing num_stray accumulate in my env due to the inodes ...
- 03:44 AM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- not a big issue even if the inode comes to have caps
- 01:09 PM Backport #40343 (Resolved): luminous: mds: fix corner case of replaying open sessions
07/10/2019
- 07:17 PM Feature #40401 (In Progress): mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and su...
- 02:53 PM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- ...
- 02:52 PM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- It seems not a valid fix as this is not the only path to have the error.
We hit this in another way, not due to fr...
07/09/2019
- 08:01 PM Cleanup #40694 (In Progress): mds: move MDSDaemon conf change handling to MDSRank finisher
- 05:45 PM Feature #40563 (Fix Under Review): client: query a single cache information, for example print a ...
- Jos Collin wrote:
> @Patrick,
>
> Seems wenpengLi is already working on this?
> https://github.com/ceph/ceph/pul...
- 04:22 AM Feature #40563: client: query a single cache information, for example print a single inode cache ...
- @Patrick,
Seems wenpengLi is already working on this?
https://github.com/ceph/ceph/pull/28853
07/08/2019
- 05:07 PM Bug #40695 (Resolved): mds: rework PurgeQueue on_error handler to avoid mds_lock state check
- See also: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/6QL7XW72O4NJBZGQEPX6SOBXSTUZOZOZ/
- 05:05 PM Cleanup #40694 (Resolved): mds: move MDSDaemon conf change handling to MDSRank finisher
- In order to avoid checking the mds_lock state.
- 04:24 PM Cleanup #40578 (In Progress): mds: reorganize class members in headers to follow coding guidelines
- 12:53 PM Backport #40222 (In Progress): mimic: mds: reset heartbeat during long-running loops in recovery
- 12:44 PM Backport #40041 (Resolved): luminous: avoid trimming too many log segments after mds failover
- 12:43 PM Backport #40221 (Resolved): luminous: mds: reset heartbeat during long-running loops in recovery
- 10:47 AM Documentation #40689 (Resolved): mgr/volumes: document mgr fs volumes CLI
- Document ceph-mgr FS volumes CLI
07/07/2019
- 05:17 PM Feature #40681 (Fix Under Review): mds: show total number of opened files beneath a directory
07/06/2019
- 07:30 AM Feature #40681 (Rejected): mds: show total number of opened files beneath a directory
- In our online clusters, occasionally there exist some clients that open massive numbers of files/dirs under a directory. So, w...
07/05/2019
- 04:39 PM Bug #40679 (Fix Under Review): cephfs-shell: TypeError in poutput
- 04:14 PM Bug #40679 (Resolved): cephfs-shell: TypeError in poutput
- Recent changes in the signature of the poutput method from the cmd2 module cause the following error....
07/03/2019
- 06:20 PM Bug #40611 (Rejected): can I upload missing rpm package from my build to: https://download.ceph....
- Please repost to ceph-users or the dev list.
- 06:08 PM Feature #40285: mds: support hierarchical layout transformations on files
- Patrick Donnelly wrote:
> The main goal of this feature is to support moving whole trees to cheaper storage hardware...
- 04:46 AM Bug #40584: kernel build failure in kernel_untar_build.sh
- Patrick Donnelly wrote:
> Perhaps related to a new distro being used with luminous builds?
yeh, probably. this is...
- 04:15 AM Bug #40584: kernel build failure in kernel_untar_build.sh
- Perhaps related to a new distro being used with luminous builds?
- 04:14 AM Bug #40582 (Rejected): cephfs-journal-tool: Error 22 ((22) Invalid argument)
- Please seek help on the ceph-users mailing list.
07/02/2019
- 09:31 PM Feature #40633 (Resolved): mds: dump recent log events for extraordinary events
- When major events happen like client eviction, we often want to get an idea what went wrong but production clusters u...
- 01:55 PM Bug #40615 (Fix Under Review): ceph-fuse: mount does not support the fallocate()
- 07:12 AM Bug #40615: ceph-fuse: mount does not support the fallocate()
- You can see that libfuse already supports the fallocate() function call in version 2.9,
see https://github.com/libfu...
- 06:36 AM Bug #40615 (Resolved): ceph-fuse: mount does not support the fallocate()
- ceph version: 14.2.1
fuse version: 2.9.2-6
err info:...
- 10:22 AM Bug #40283 (In Progress): qa: add testing for lazyio
- 09:52 AM Feature #40617 (Resolved): mgr/volumes: Add `ceph fs subvolumegroup getpath` command
- ... analogous to `ceph fs subvolume getpath`. This will return the path of the `fs subvolumegroup`.
- 04:42 AM Bug #36029: ceph-fuse assert failed when try to do file lock
- We hit the bug as well; is there any PR targeting this bug somewhere? It seems the related code _update_lock_state...
07/01/2019
- 10:35 PM Feature #17309 (Resolved): qa: mon_thrash test for CephFS
- 10:25 PM Bug #40613 (New): kclient: .handle_message_footer got old message 1 <= 648 0x558ceadeaac0 client_...
- Got this assertion with the testing kernel. We haven't seen this type of failure in a while. Last time was #18690.
...
- 10:10 PM Bug #40612 (New): qa: multimds suite MDS behind on trimming
- ...
- 10:01 PM Bug #40611 (Rejected): can I upload missing rpm package from my build to: https://download.ceph....
- Hi there,
Not sure who is the project manager for nfs-ganesha, need your help.
when I am working on NeoKylin/Ce...
- 09:57 PM Bug #38326 (Pending Backport): mds: evict stale client when one of its write caps are stolen
- 09:52 PM Bug #40305: qa: spurious unresponsive client causes eviction due to valgrind/multimds
- /ceph/teuthology-archive/pdonnell-2019-06-21_01:51:23-multimds-wip-pdonnell-testing-20190620.220400-distro-basic-smit...
- 09:16 PM Feature #40563: client: query a single cache information, for example print a single inode cache ...
- MDS has a command to print the inode. Should be straightforward to add to the client.
- 06:24 PM Bug #37681 (Fix Under Review): qa: power off still resulted in client sending session close
- 06:18 PM Bug #37681 (In Progress): qa: power off still resulted in client sending session close
- The correct ipmitool command to simulate pulling the power plug is "power reset". "power off" will permit graceful sh...
- 06:13 PM Bug #37681: qa: power off still resulted in client sending session close
- Still happening:...
- 04:10 PM Backport #40343: luminous: mds: fix corner case of replaying open sessions
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28536
merged
- 04:10 PM Backport #40041: luminous: avoid trimming too many log segments after mds failover
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28543
merged
- 04:09 PM Backport #40221: luminous: mds: reset heartbeat during long-running loops in recovery
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/28544
merged
- 02:57 PM Bug #40608 (Duplicate): mds: assert after `delete gather` in C_Drop_Cache::recall_client_state
- While performing an mds cache drop I had an MDS assert.
Command was:...
- 03:04 AM Bug #40603 (Resolved): mds: disallow setting ceph.dir.pin value exceeding max rank id
- Currently we allow setting the ceph.dir.pin value to any number. If it is larger than the current max id, this dir will stay i...
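The fix described in #40603 amounts to a range check on the export pin. A hypothetical sketch of that validation (names and range assumed for illustration: -1 means unpinned, and valid rank ids run from 0 to max_mds - 1):

```python
def validate_export_pin(pin: int, max_mds: int) -> None:
    """Reject ceph.dir.pin values outside the valid range.

    Hypothetical check mirroring the fix described above: -1 (unpinned)
    through max_mds - 1 (highest possible rank id) are accepted.
    """
    if pin < -1 or pin >= max_mds:
        raise ValueError(
            f"invalid export pin {pin}: must be in [-1, {max_mds - 1}]")

validate_export_pin(-1, max_mds=3)  # ok: unpinned
validate_export_pin(2, max_mds=3)   # ok: highest rank id
```

With such a check, a pin pointing past the highest rank is rejected up front instead of leaving the directory pinned to a rank that may never exist.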
06/28/2019
- 05:13 PM Bug #40584 (New): kernel build failure in kernel_untar_build.sh
- Have been seeing `qa/workunits/kernel_untar_build.sh` failures in luminous lately. See:
http://qa-proxy.ceph.c...
- 03:02 PM Bug #40582 (Rejected): cephfs-journal-tool: Error 22 ((22) Invalid argument)
- For unknown reason journal export stopped working.
journal is 23438084784916~692721059
2019-06-28 17:00:02.692533...
- 06:10 AM Feature #40299 (Resolved): mgr/volumes: allow setting mode on fs subvol, subvol group
- 06:10 AM Bug #40431 (Resolved): mgr/volumes: allow setting data pool layout for fs subvolumes
- 06:10 AM Bug #40429 (Resolved): mgr/volumes: subvolume.py calls Exceptions with too few arguments.
- 06:10 AM Backport #40571 (Resolved): nautilus: mgr/volumes: allow setting mode on fs subvol, subvol group
- 06:09 AM Backport #40570 (Resolved): nautilus: mgr/volumes: allow setting data pool layout for fs subvolumes
- 06:09 AM Backport #40569 (Resolved): nautilus: mgr/volumes: subvolume.py calls Exceptions with too few arg...
- 02:38 AM Cleanup #40578 (Resolved): mds: reorganize class members in headers to follow coding guidelines
- Guide here: https://google.github.io/styleguide/cppguide.html#Declaration_Order
A past commit that has improved th...
06/27/2019
- 04:07 PM Backport #40571 (In Progress): nautilus: mgr/volumes: allow setting mode on fs subvol, subvol group
- 04:00 PM Backport #40571 (Resolved): nautilus: mgr/volumes: allow setting mode on fs subvol, subvol group
- https://github.com/ceph/ceph/pull/28767
- 03:59 PM Backport #40570 (In Progress): nautilus: mgr/volumes: allow setting data pool layout for fs subvo...
- 03:59 PM Backport #40570 (Resolved): nautilus: mgr/volumes: allow setting data pool layout for fs subvolumes
- https://github.com/ceph/ceph/pull/28767
- 03:58 PM Backport #40569 (In Progress): nautilus: mgr/volumes: subvolume.py calls Exceptions with too few ...
- 03:57 PM Backport #40569 (Resolved): nautilus: mgr/volumes: subvolume.py calls Exceptions with too few arg...
- https://github.com/ceph/ceph/pull/28767
- 02:10 PM Bug #40431 (Pending Backport): mgr/volumes: allow setting data pool layout for fs subvolumes
- 02:09 PM Feature #40299 (Pending Backport): mgr/volumes: allow setting mode on fs subvol, subvol group
- 02:09 PM Bug #40429 (Pending Backport): mgr/volumes: subvolume.py calls Exceptions with too few arguments.
- 06:25 AM Feature #40563 (Fix Under Review): client: query a single cache information, for example print a ...
- I want to query a single piece of cache information, for example print a single inode's cache information, but the client cache i...
06/26/2019
- 10:50 AM Backport #38686 (Resolved): luminous: kcephfs TestClientLimits.test_client_pin fails with "client...
- 10:49 AM Backport #38445 (Resolved): luminous: mds: drop cache does not timeout as expected
- 10:48 AM Backport #38340 (Resolved): luminous: mds: may leak gather during cache drop
- 10:48 AM Bug #37726 (Pending Backport): mds: high debug logging with many subtrees is slow
- mimic backport is still open
- 10:47 AM Backport #38877 (Resolved): luminous: mds: high debug logging with many subtrees is slow
- 10:47 AM Bug #39026 (Resolved): mds: crash during mds restart
- 10:46 AM Backport #39191 (Resolved): luminous: mds: crash during mds restart
- 10:46 AM Bug #38994 (Resolved): mds: we encountered "No space left on device" when moving huge number of f...
- 10:46 AM Backport #39198 (Resolved): luminous: mds: we encountered "No space left on device" when moving h...
- 10:46 AM Backport #39208 (Resolved): luminous: mds: mds_cap_revoke_eviction_timeout is not used to initial...
- 08:58 AM Bug #39266 (Resolved): There is no punctuation mark or blank between tid and client_id in the ou...
- 08:58 AM Backport #39468 (Resolved): luminous: There is no punctuation mark or blank between tid and clie...
- 08:18 AM Backport #39221 (Resolved): luminous: mds: behind on trimming and "[dentry] was purgeable but no ...
- 08:17 AM Backport #39231 (Resolved): luminous: kclient: nofail option not supported
- 08:16 AM Backport #40160 (Resolved): luminous: FSAL_CEPH assertion failed in Client::_lookup_name: "parent...
- 08:15 AM Backport #39213 (Resolved): luminous: mds: there is an assertion when calling Beacon::shutdown()
- 05:10 AM Bug #40476: cephfs-shell: cd with no args has no effect
- Rishabh Dave wrote:
> Patrick Donnelly said:
> > What commit/branch are you testing? I thought I just changed this ...
- 05:07 AM Bug #40182 (Resolved): luminous: pybind: luminous volume client breaks against nautilus cluster
06/25/2019
- 04:31 PM Backport #38686: luminous: kcephfs TestClientLimits.test_client_pin fails with "client caps fell ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27040
merged
- 04:30 PM Backport #38445: luminous: mds: drop cache does not timeout as expected
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27342
merged
- 04:30 PM Backport #38340: luminous: mds: may leak gather during cache drop
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27342
merged
- 04:29 PM Backport #38877: luminous: mds: high debug logging with many subtrees is slow
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27679
merged
- 04:29 PM Backport #39191: luminous: mds: crash during mds restart
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27737
merged
- 04:29 PM Backport #39198: luminous: mds: we encountered "No space left on device" when moving huge number ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27801
merged
- 04:28 PM Backport #39208: luminous: mds: mds_cap_revoke_eviction_timeout is not used to initialize Server:...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27840
merged
- 04:28 PM Backport #39468: luminous: There is no punctuation mark or blank between tid and client_id in th...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27848
merged
- 04:27 PM Backport #39221: luminous: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28432
merged
- 04:27 PM Backport #39231: luminous: kclient: nofail option not supported
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28436
merged
- 04:26 PM Backport #40160: luminous: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28437
merged
- 04:26 PM Backport #39213: luminous: mds: there is an assertion when calling Beacon::shutdown()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28438
merged
- 04:25 PM Bug #40182: luminous: pybind: luminous volume client breaks against nautilus cluster
- Jan Fajerski wrote:
> proposed fix: https://github.com/ceph/ceph/pull/28445
merged
- 04:14 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Going over the userland code today to see what's there and what can be reused. Some notes:
struct ceph_mds_request...
- 08:20 AM Bug #40476: cephfs-shell: cd with no args has no effect
- Patrick Donnelly said:
> What commit/branch are you testing? I thought I just changed this to cd into the root direc...
06/24/2019
- 08:20 PM Bug #40429: mgr/volumes: subvolume.py calls Exceptions with too few arguments.
- Ramana Raja wrote:
> > +pybind/mgr/volumes/fs/subvolume.py: note: In member "_get_ancestor_xattr" of class "SubVolum...
- 01:15 PM Bug #40429: mgr/volumes: subvolume.py calls Exceptions with too few arguments.
- > +pybind/mgr/volumes/fs/subvolume.py: note: In member "_get_ancestor_xattr" of class "SubVolume":
+pybind/mgr/volum...
- 12:19 PM Bug #40429 (Fix Under Review): mgr/volumes: subvolume.py calls Exceptions with too few arguments.
- 01:13 PM Bug #40476: cephfs-shell: cd with no args has no effect
- I'd consider this to be NOTABUG. We don't really have the concept of a home directory in cephfs shell, so why should ...
- 12:14 PM Bug #40431 (Fix Under Review): mgr/volumes: allow setting data pool layout for fs subvolumes
- 12:14 PM Feature #40299 (Fix Under Review): mgr/volumes: allow setting mode on fs subvol, subvol group
- 10:19 AM Cleanup #39717 (Resolved): cephfs-shell: Fix flake8 warnings and errors
- 10:19 AM Backport #40471 (Resolved): nautilus: cephfs-shell: Fix flake8 warnings and errors
- 10:19 AM Bug #39404 (Resolved): cephfs-shell: fix string decode for ls command
- 10:18 AM Backport #39678 (Resolved): nautilus: cephfs-shell: fix string decode for ls command
- 10:18 AM Bug #39165 (Resolved): cephfs-shell: add commands to manipulate quotas
- 10:18 AM Backport #39936 (Resolved): nautilus: cephfs-shell: add commands to manipulate quotas
- 10:18 AM Feature #38829 (Resolved): cephfs-shell: add a "stat" command
- 10:18 AM Backport #39937 (Resolved): nautilus: cephfs-shell: add a "stat" command
- 10:18 AM Cleanup #40191 (Resolved): cephfs-shell: Fix flake8 errors
- 10:17 AM Backport #40217 (Resolved): nautilus: cephfs-shell: Fix flake8 errors
- 10:17 AM Bug #40244 (Resolved): cephfs-shell: 'lls' command errors
- 10:17 AM Backport #40313 (Resolved): nautilus: cephfs-shell: 'lls' command errors
- 10:17 AM Bug #40243 (Resolved): cephfs-shell: Incorrect error message is printed in 'lcd' command
- 10:17 AM Backport #40314 (Resolved): nautilus: cephfs-shell: Incorrect error message is printed in 'lcd' c...
- 10:17 AM Bug #40418 (Resolved): cephfs-shell: test only python3 and assert python3 in cephfs-shell
- 10:16 AM Bug #40455 (Resolved): cephfs-shell: fix unecessary usage of to_bytes for file paths
- 10:16 AM Backport #40469 (Resolved): nautilus: cephfs-shell: test only python3 and assert python3 in cephf...
- 10:16 AM Backport #40470 (Resolved): nautilus: cephfs-shell: fix unecessary usage of to_bytes for file paths
- 10:03 AM Backport #40495 (Resolved): nautilus: test_volume_client: declare only one default for python ver...
- https://github.com/ceph/ceph/pull/30030
- 10:03 AM Backport #40494 (Resolved): mimic: test_volume_client: declare only one default for python version
- https://github.com/ceph/ceph/pull/30110
- 10:03 AM Backport #40493 (Rejected): luminous: test_volume_client: declare only one default for python ver...
- 08:36 AM Bug #40489 (Fix Under Review): cephfs-shell: name 'files' is not defined error in do_rm()
- 08:30 AM Bug #40489 (Resolved): cephfs-shell: name 'files' is not defined error in do_rm()
- ...
06/22/2019
- 03:00 AM Bug #40476 (Need More Info): cephfs-shell: cd with no args has no effect
- What commit/branch are you testing? I thought I just changed this to cd into the root directory (of CephFS).
- 02:56 AM Bug #40472 (Fix Under Review): MDSMonitor: use stringstream instead of dout for mds repaired
- 02:45 AM Bug #40286 (In Progress): luminous: qa: remove ubuntu 14.04 testing
- 02:42 AM Documentation #39620 (In Progress): doc: MDS and metadata pool hardware requirements/recommendations
06/21/2019
- 07:22 PM Bug #40374 (Resolved): nautilus: qa: disable "HEALTH_WARN Legacy BlueStore stats reporting..."
- 07:22 PM Bug #40373 (Resolved): nautilus: qa: still testing simple messenger
- 06:45 PM Backport #40471: nautilus: cephfs-shell: Fix flake8 warnings and errors
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 06:44 PM Backport #39678: nautilus: cephfs-shell: fix string decode for ls command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 06:44 PM Backport #39936: nautilus: cephfs-shell: add commands to manipulate quotas
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 06:44 PM Backport #39937: nautilus: cephfs-shell: add a "stat" command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 06:44 PM Backport #40217: nautilus: cephfs-shell: Fix flake8 errors
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 06:44 PM Backport #40313: nautilus: cephfs-shell: 'lls' command errors
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 06:44 PM Backport #40314: nautilus: cephfs-shell: Incorrect error message is printed in 'lcd' command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 06:44 PM Backport #40469: nautilus: cephfs-shell: test only python3 and assert python3 in cephfs-shell
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 06:43 PM Backport #40470: nautilus: cephfs-shell: fix unecessary usage of to_bytes for file paths
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 09:04 AM Bug #40477 (Fix Under Review): mds: cleanup truncating inodes when standby replay mds trim log se...
- 09:02 AM Bug #40477 (Resolved): mds: cleanup truncating inodes when standby replay mds trim log segments
- 08:54 AM Bug #40476 (Resolved): cephfs-shell: cd with no args has no effect
- Issuing the cd command with no args implies "cd $HOME" in bash, but in the CephFS shell it has no effect; it leads to an error ...
- 08:21 AM Bug #40474 (Fix Under Review): client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- 08:15 AM Bug #40474 (Resolved): client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- client may set the CEPH_CLIENT_CAPS_PENDING_CAPSNAP flag even when there is no further cap snap flush. This may confuse mds an...
- 03:37 AM Bug #40472: MDSMonitor: use stringstream instead of dout for mds repaired
- https://github.com/ceph/ceph/pull/28683
- 03:33 AM Bug #40472 (Resolved): MDSMonitor: use stringstream instead of dout for mds repaired
- Use stringstream instead of dout for mds repaired to get the result directly from the command line.
06/20/2019
- 10:43 PM Backport #39936 (In Progress): nautilus: cephfs-shell: add commands to manipulate quotas
- 10:43 PM Backport #39937 (In Progress): nautilus: cephfs-shell: add a "stat" command
- 10:42 PM Backport #40217 (In Progress): nautilus: cephfs-shell: Fix flake8 errors
- 10:42 PM Backport #40313 (In Progress): nautilus: cephfs-shell: 'lls' command errors
- 10:42 PM Backport #40314 (In Progress): nautilus: cephfs-shell: Incorrect error message is printed in 'lcd...
- 10:42 PM Backport #40469 (In Progress): nautilus: cephfs-shell: test only python3 and assert python3 in ce...
- 06:10 PM Backport #40469 (Resolved): nautilus: cephfs-shell: test only python3 and assert python3 in cephf...
- https://github.com/ceph/ceph/pull/28681
- 10:42 PM Backport #40470 (In Progress): nautilus: cephfs-shell: fix unecessary usage of to_bytes for file ...
- 06:10 PM Backport #40470 (Resolved): nautilus: cephfs-shell: fix unecessary usage of to_bytes for file paths
- https://github.com/ceph/ceph/pull/28681
- 10:38 PM Backport #40471 (In Progress): nautilus: cephfs-shell: Fix flake8 warnings and errors
- 10:38 PM Backport #40471 (Resolved): nautilus: cephfs-shell: Fix flake8 warnings and errors
- https://github.com/ceph/ceph/pull/28681
- 10:38 PM Cleanup #39717 (Pending Backport): cephfs-shell: Fix flake8 warnings and errors
- 10:16 PM Backport #40040 (Resolved): nautilus: avoid trimming too many log segments after mds failover
- 04:38 PM Backport #40040: nautilus: avoid trimming too many log segments after mds failover
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28582
merged
- 10:16 PM Backport #40223 (Resolved): nautilus: mds: reset heartbeat during long-running loops in recovery
- 04:38 PM Backport #40223: nautilus: mds: reset heartbeat during long-running loops in recovery
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28611
merged
- 10:16 PM Bug #40061 (Resolved): mds: blacklisted clients eviction is broken
- 10:15 PM Backport #40236 (Resolved): nautilus: mds: blacklisted clients eviction is broken
- 10:15 PM Backport #40344 (Resolved): nautilus: mds: fix corner case of replaying open sessions
- 04:39 PM Backport #40344: nautilus: mds: fix corner case of replaying open sessions
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28580
merged
- 06:10 PM Bug #40455 (Pending Backport): cephfs-shell: fix unecessary usage of to_bytes for file paths
- 03:18 PM Bug #40431 (In Progress): mgr/volumes: allow setting data pool layout for fs subvolumes
- 01:26 PM Bug #40460 (Pending Backport): test_volume_client: declare only one default for python version
- 06:48 AM Bug #40460 (Resolved): test_volume_client: declare only one default for python version
- test_volume_client.py declares the default python version in more than one place.
- 01:09 PM Bug #40418 (Pending Backport): cephfs-shell: test only python3 and assert python3 in cephfs-shell
- 09:18 AM Backport #39670 (Resolved): nautilus: mds: output lock state in format dump
- 09:18 AM Feature #39969 (Resolved): mgr / volume: refactor volume module
- 09:18 AM Backport #40378 (Resolved): nautilus: mgr / volume: refactor volume module
- 09:17 AM Backport #40164 (Resolved): nautilus: mount: key parsing fail when doing a remount
- 09:15 AM Backport #40220 (Resolved): nautilus: TestMisc.test_evict_client fails
- 09:14 AM Backport #40161 (Resolved): nautilus: FSAL_CEPH assertion failed in Client::_lookup_name: "parent...
- 08:43 AM Bug #39526 (Resolved): cephfs-shell: teuthology tests
- 08:43 AM Backport #39935 (Resolved): nautilus: cephfs-shell: teuthology tests
- 08:42 AM Bug #39507 (Resolved): cephfs-shell: mkdir error for relative path
- 08:42 AM Backport #39960 (Resolved): nautilus: cephfs-shell: mkdir error for relative path
- 08:17 AM Bug #39395 (Fix Under Review): ceph: ceph fs auth fails
- 07:56 AM Bug #39395 (In Progress): ceph: ceph fs auth fails
06/19/2019
- 08:08 PM Bug #40455 (Fix Under Review): cephfs-shell: fix unecessary usage of to_bytes for file paths
- 08:03 PM Bug #40455 (Resolved): cephfs-shell: fix unecessary usage of to_bytes for file paths
- 06:56 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- I think we are going to need this after all. If we don't do this, we'll have to delay writing to newly-created files ...
- 06:04 PM Backport #40445 (Resolved): nautilus: mds: MDCache::cow_inode does not cleanup unneeded client_sn...
- https://github.com/ceph/ceph/pull/29344
- 06:04 PM Backport #40444 (Resolved): mimic: mds: MDCache::cow_inode does not cleanup unneeded client_snap_...
- https://github.com/ceph/ceph/pull/30234
- 06:04 PM Backport #40443 (Resolved): nautilus: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when o...
- https://github.com/ceph/ceph/pull/29343
- 06:04 PM Backport #40442 (Resolved): mimic: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when oper...
- https://github.com/ceph/ceph/pull/30108
- 06:03 PM Backport #40440 (Resolved): nautilus: mds: cannot switch mds state from standby-replay to active
- https://github.com/ceph/ceph/pull/29233
- 06:03 PM Backport #40438 (Resolved): nautilus: getattr on snap inode stuck
- https://github.com/ceph/ceph/pull/29231
- 06:03 PM Backport #40437 (Resolved): mimic: getattr on snap inode stuck
- https://github.com/ceph/ceph/pull/29230
- 02:49 PM Feature #40299 (In Progress): mgr/volumes: allow setting mode on fs subvol, subvol group
- 12:04 PM Bug #40431 (Resolved): mgr/volumes: allow setting data pool layout for fs subvolumes
- This is required by the CephFS CSI driver. Allow setting the data pool layout for fs subvolumes,
$ ceph fs subvolume crea...
- 11:02 AM Bug #40430 (Fix Under Review): cephfs-shell: No error message is printed on ls of invalid directo...
- 10:50 AM Bug #40430 (Resolved): cephfs-shell: No error message is printed on ls of invalid directories
- For any invalid ls command, no error message is printed....
- 10:04 AM Backport #40042 (In Progress): mimic: avoid trimming too many log segments after mds failover
- https://github.com/ceph/ceph/pull/28650
- 09:45 AM Bug #40429 (Resolved): mgr/volumes: subvolume.py calls Exceptions with too few arguments.
- mypy revealed...
- 06:51 AM Bug #38326 (Fix Under Review): mds: evict stale client when one of its write caps are stolen
- incremental patches: https://github.com/ceph/ceph/pull/28642
- 03:30 AM Backport #39260: nautilus: ls -S command produces AttributeError: 'str' object has no attribute '...
- Follow-up for missing commit in backport: https://github.com/ceph/ceph/pull/28641
- 01:35 AM Bug #39987 (Pending Backport): mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- 01:33 AM Bug #40213 (Pending Backport): mds: cannot switch mds state from standby-replay to active
- 01:30 AM Bug #40361 (Pending Backport): getattr on snap inode stuck
- 01:29 AM Bug #40101 (Pending Backport): libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when operatin...
- 12:43 AM Backport #39670: nautilus: mds: output lock state in format dump
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28233
merged
- 12:42 AM Backport #40378: nautilus: mgr / volume: refactor volume module
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28595
merged
- 12:42 AM Backport #40164: nautilus: mount: key parsing fail when doing a remount
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28610
merged
- 12:41 AM Backport #40220: nautilus: TestMisc.test_evict_client fails
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28613
merged
- 12:41 AM Backport #40161: nautilus: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28612
merged
- 12:40 AM Backport #39935: nautilus: cephfs-shell: teuthology tests
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28614
merged
- 12:39 AM Backport #39960: nautilus: cephfs-shell: mkdir error for relative path
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28616
merged