Activity
From 05/16/2019 to 06/14/2019
06/14/2019
- 09:52 PM Backport #40378: nautilus: mgr / volume: refactor volume module
- Ramana, please do this backport.
- 09:51 PM Backport #40378 (Resolved): nautilus: mgr / volume: refactor volume module
- https://github.com/ceph/ceph/pull/28595
- 09:51 PM Feature #39969 (Pending Backport): mgr / volume: refactor volume module
- 07:43 PM Backport #40338: nautilus: mgr/volumes: Name 'sub_name' is not defined
- Ramana Raja wrote:
> https://github.com/ceph/ceph/pull/28429
merged
- 07:42 PM Backport #40321: nautilus: test: extend mgr/volume test to cover new interfaces
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28429
merged
- 07:42 PM Backport #40158: nautilus: mgr/volumes: unable to set quota on fs subvolumes
- Ramana Raja wrote:
> https://github.com/ceph/ceph/pull/28429
merged
- 07:42 PM Backport #40157: nautilus: mgr/volumes: cannot create subvolumes with py3 libraries
- Ramana Raja wrote:
> https://github.com/ceph/ceph/pull/28429
merged
- 07:42 PM Backport #39934: nautilus: mgr/volumes: add CephFS subvolumes library
- Ramana Raja wrote:
> https://github.com/ceph/ceph/pull/28429
merged
- 07:30 PM Backport #40169: nautilus: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.40055...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28499
merged
- 07:29 PM Backport #40167: nautilus: client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the n...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28500
merged
- 07:26 PM Backport #39686: nautilus: ceph-fuse: client hang because its bad session PipeConnection to mds
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28375
merged
- 07:26 PM Backport #39690: nautilus: mds: error "No space left on device" when create a large number of dirs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28394
merged
- 06:40 PM Bug #40374 (Fix Under Review): nautilus: qa: disable "HEALTH_WARN Legacy BlueStore stats reportin...
- 06:35 PM Bug #40374 (Resolved): nautilus: qa: disable "HEALTH_WARN Legacy BlueStore stats reporting..."
- ...
- 06:29 PM Bug #40373 (Fix Under Review): nautilus: qa: still testing simple messenger
- 06:25 PM Bug #40373 (Resolved): nautilus: qa: still testing simple messenger
- ...
- 05:57 PM Bug #40369 (Fix Under Review): ceph_volume_client: fs_name must be converted to string before usi...
- 03:52 PM Bug #40369 (Resolved): ceph_volume_client: fs_name must be converted to string before using it
- "fs_name":https://github.com/ceph/ceph/blob/master/src/pybind/ceph_volume_client.py#L255 would normally be assigned a...
- 05:56 PM Bug #40371 (Fix Under Review): cephfs-shell: du must ignore non-directory files
- 05:38 PM Bug #40371 (Resolved): cephfs-shell: du must ignore non-directory files
- cephfs-shell's du command crashes if it comes across files that are not directories, since it tries to get 'ce...
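A minimal sketch of the guard the fix needs (not the actual cephfs-shell code; it assumes the cephfs binding's stat()/getxattr() calls and the ceph.dir.rbytes vxattr):

    import stat

    def disk_usage(fs, path):
        # fs is assumed to be a mounted cephfs.LibCephFS handle
        st = fs.stat(path)
        if stat.S_ISDIR(st.st_mode):
            # the recursive-size vxattr only exists on directories
            return int(fs.getxattr(path, 'ceph.dir.rbytes'))
        return st.st_size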
- 09:49 AM Bug #40361 (Fix Under Review): getattr on snap inode stuck
- 09:32 AM Bug #40361 (Resolved): getattr on snap inode stuck
- from the mailing list
On Wed, Jun 12, 2019 at 3:26 PM Hector Martin <hector@marcansoft.com> wrote:
>
> Hi list,
...
- 04:24 AM Bug #40101 (Fix Under Review): libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when operatin...
- 04:22 AM Backport #40221 (In Progress): luminous: mds: reset heartbeat during long-running loops in recovery
- https://github.com/ceph/ceph/pull/28544
- 03:21 AM Backport #40041 (In Progress): luminous: avoid trimming too many log segments after mds failover
06/13/2019
- 07:36 PM Backport #40343 (In Progress): luminous: mds: fix corner case of replaying open sessions
- 07:35 PM Backport #40343 (Resolved): luminous: mds: fix corner case of replaying open sessions
- https://github.com/ceph/ceph/pull/28536
- 07:35 PM Backport #40344 (Resolved): nautilus: mds: fix corner case of replaying open sessions
- https://github.com/ceph/ceph/pull/28580
- 07:35 PM Backport #40342 (Resolved): mimic: mds: fix corner case of replaying open sessions
- https://github.com/ceph/ceph/pull/28579
- 07:35 PM Bug #40211 (Pending Backport): mds: fix corner case of replaying open sessions
- 06:22 PM Bug #40213 (Fix Under Review): mds: cannot switch mds state from standby-replay to active
- 03:44 PM Backport #40321 (In Progress): nautilus: test: extend mgr/volume test to cover new interfaces
- 02:43 PM Backport #40321: nautilus: test: extend mgr/volume test to cover new interfaces
- https://github.com/ceph/ceph/pull/28429
- 10:24 AM Backport #40321 (Resolved): nautilus: test: extend mgr/volume test to cover new interfaces
- https://github.com/ceph/ceph/pull/28429
- 02:48 PM Backport #40338 (In Progress): nautilus: mgr/volumes: Name 'sub_name' is not defined
- https://github.com/ceph/ceph/pull/28429
- 02:47 PM Backport #40338 (Resolved): nautilus: mgr/volumes: Name 'sub_name' is not defined
- https://github.com/ceph/ceph/pull/28429
- 02:39 PM Bug #40014 (Pending Backport): mgr/volumes: Name 'sub_name' is not defined
- 12:24 PM Feature #17434 (Fix Under Review): qa: background rsync task for FS workunits
- 10:26 AM Backport #40327 (Resolved): mimic: mds: evict stale client when one of its write caps are stolen
- https://github.com/ceph/ceph/pull/28585
- 10:26 AM Backport #40326 (Resolved): nautilus: mds: evict stale client when one of its write caps are stolen
- https://github.com/ceph/ceph/pull/28583
- 10:25 AM Backport #40325 (Rejected): mimic: ceph_volume_client: d_name needs to be converted to string bef...
- https://github.com/ceph/ceph/pull/29766
- 10:25 AM Backport #40324 (Resolved): nautilus: ceph_volume_client: d_name needs to be converted to string ...
- https://github.com/ceph/ceph/pull/28609
- 10:24 AM Backport #40323 (Rejected): luminous: ceph_volume_client: d_name needs to be converted to string ...
- 10:22 AM Backport #40314 (Resolved): nautilus: cephfs-shell: Incorrect error message is printed in 'lcd' c...
- https://github.com/ceph/ceph/pull/28681
- 10:22 AM Backport #40313 (Resolved): nautilus: cephfs-shell: 'lls' command errors
- https://github.com/ceph/ceph/pull/28681
06/12/2019
- 09:33 PM Bug #40288 (Closed): mds: lost mds journal when hot-standby mds switch occurs
- 12:58 PM Bug #40288: mds: lost mds journal when hot-standby mds switch occurs
- Sorry, there doesn't seem to be any problem; it was my misunderstanding. Please close this issue, thank you!
- 02:51 AM Bug #40288 (Closed): mds: lost mds journal when hot-standby mds switch occurs
- ceph version: jewel 10.2.2
mds mode: hot-standby
There is a risk that the mds loses some events because it wakes up waiters...
- 09:31 PM Bug #40243 (Pending Backport): cephfs-shell: Incorrect error message is printed in 'lcd' command
- 09:30 PM Bug #40244 (Pending Backport): cephfs-shell: 'lls' command errors
- 09:27 PM Bug #39949 (Pending Backport): test: extend mgr/volume test to cover new interfaces
- 09:17 PM Bug #39406 (Pending Backport): ceph_volume_client: d_name needs to be converted to string before ...
- 09:07 PM Bug #38326 (Pending Backport): mds: evict stale client when one of its write caps are stolen
- Zheng, any issues backporting this?
- 08:41 PM Bug #40305 (New): qa: spurious unresponsive client causes eviction due to valgrind/multimds
- ...
- 04:16 PM Bug #40014: mgr/volumes: Name 'sub_name' is not defined
- Ramana Raja wrote:
> Venky Shankar wrote:
> > Ramana, I think we should just mention that this issue will be fixed ...
- 03:06 PM Bug #40093: qa: client mount cannot be forcibly unmounted when all MDS are down
- /ceph/teuthology-archive/pdonnell-2019-06-11_01:05:56-fs-wip-pdonnell-testing-20190610.220401-distro-basic-smithi/402...
- 01:47 PM Feature #24880: pybind/mgr/volumes: restore from snapshot
- ceph-csi ticket: https://github.com/ceph/ceph-csi/issues/411
Ramana, I'm reassigning this to you. We need this d...
- 01:37 PM Bug #40297 (Fix Under Review): cephfs-shell: Produces TypeError on passing '*' pattern to ls, rm ...
- 12:58 PM Bug #40297 (Resolved): cephfs-shell: Produces TypeError on passing '*' pattern to ls, rm or rmdir
- ...
- 01:37 PM Bug #40298 (Fix Under Review): cephfs-shell: 'rmdir *' does not remove all directories
- 01:15 PM Bug #40298 (Resolved): cephfs-shell: 'rmdir *' does not remove all directories
- ...
- 01:30 PM Feature #40299 (Resolved): mgr/volumes: allow setting mode on fs subvol, subvol group
- Allow setting mode bits (directory permissions) when creating fs subvolumes and fs subvolume groups through the CLI.
...
- 01:18 PM Bug #22038: ceph-volume-client: rados.Error: command not known
- Note: luminous backport is tracked by #40182, where cbbdd0da7d40e4e5def5cc0b9a9250348e71019f is also being backported...
- 01:05 PM Bug #40182: luminous: pybind: luminous volume client breaks against nautilus cluster
- Patrick Donnelly wrote:
> Are there any other issues?
A couple more. PR is updated.
- 09:53 AM Backport #40169: nautilus: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.40055...
- Oops! Sorry, I didn't notice this was assigned to David.
- 09:30 AM Backport #39209 (Resolved): nautilus: mds: mds_cap_revoke_eviction_timeout is not used to initial...
06/11/2019
- 10:58 PM Bug #40286 (Resolved): luminous: qa: remove ubuntu 14.04 testing
- pdonnell@icewind ~/ceph/qa$ git grep 14.04
distros/all/ubuntu_14.04.yaml:os_version: "14.04"
distros/all/ubuntu_14....
- 10:34 PM Feature #40285 (New): mds: support hierarchical layout transformations on files
- The main goal of this feature is to support moving whole trees to cheaper storage hardware. This can be done manually...
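For reference, the manual route alluded to above is essentially repointing a directory's layout at a cheaper data pool via the layout vxattr (the pool name and mount point below are made up); only newly written files follow the new layout, which is what makes an automated tree migration attractive:

    import os

    # new files under this subtree go to the (assumed) cheaper pool; existing
    # file data stays put until it is rewritten or copied
    os.setxattr('/mnt/cephfs/archive', 'ceph.dir.layout.pool', b'cheap_ec_pool')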
- 09:57 PM Bug #11314 (Duplicate): qa: MDS crashed and the runs hung without ever timing out
- 09:57 PM Feature #10369 (Fix Under Review): qa-suite: detect unexpected MDS failovers and daemon crashes
- 09:56 PM Feature #5486: kclient: make it work with selinux
- Targeting Octopus so it shows up in searches.
- 08:59 PM Bug #40284 (New): kclient: evaluate/fix/add lazyio support in the kernel
- ceph-fuse now supports lazyio [2, #20598] but I don't believe we ever checked what needed to be done for the kernel c...
- 08:59 PM Bug #40283 (Resolved): qa: add testing for lazyio
- I'm distressed we have no tests for client behavior (via libcephfs) with lazyio. : /
In particular, verify behavio...
- 08:50 PM Backport #39470 (Resolved): nautilus: There is no punctuation mark or blank between tid and clie...
- 08:47 PM Backport #39473 (Resolved): nautilus: mds: fail to resolve snapshot name contains '_'
- 08:36 PM Feature #36397: mds: support real state reclaim
- Raising priority on this. We forgot to finish this and I'd like Zheng to work on it while the problem is still fresh ...
- 08:07 PM Feature #40261 (New): mds: permit executing scripts from various file system events
- Potential uses:
- automatic gzip of closed files meeting some criteria
- automatic archival of unlinked files
- ...
- 08:06 PM Backport #39232 (Resolved): nautilus: kclient: nofail option not supported
- 08:05 PM Backport #39214 (Resolved): nautilus: mds: there is an assertion when calling Beacon::shutdown()
- 08:05 PM Backport #39211 (Resolved): nautilus: MDSTableServer.cc: 83: FAILED assert(version == tid)
- 08:04 PM Backport #39222 (Resolved): nautilus: mds: behind on trimming and "[dentry] was purgeable but no ...
- 08:03 PM Bug #37726 (Resolved): mds: high debug logging with many subtrees is slow
- 08:02 PM Backport #38876 (Resolved): nautilus: mds: high debug logging with many subtrees is slow
- 07:55 PM Backport #40166 (In Progress): luminous: client: ceph.dir.rctime xattr value incorrectly prefixes...
- 07:54 PM Backport #40168 (In Progress): mimic: client: ceph.dir.rctime xattr value incorrectly prefixes "0...
- 07:52 PM Backport #40167 (In Progress): nautilus: client: ceph.dir.rctime xattr value incorrectly prefixes...
- 07:48 PM Backport #40169 (In Progress): nautilus: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 ...
- 05:41 PM Backport #40169: nautilus: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.40055...
- David Disseldorp wrote:
> The backport introducing this bug has now been merged into Nautilus: https://github.com/ce...
- 03:55 PM Backport #40169: nautilus: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.40055...
- The backport introducing this bug has now been merged into Nautilus: https://github.com/ceph/ceph/pull/27901 .
A f...
- 07:07 PM Bug #38946 (Resolved): ceph_volume_client: Too many arguments for "WriteOpCtx"
- 07:06 PM Backport #39050 (Resolved): nautilus: ceph_volume_client: Too many arguments for "WriteOpCtx"
- 06:04 PM Bug #40182: luminous: pybind: luminous volume client breaks against nautilus cluster
- Jan Fajerski wrote:
> Patrick Donnelly wrote:
> > Let's treat this as a backport. Please cherry-pick the commits fr...
- 09:56 AM Bug #40182: luminous: pybind: luminous volume client breaks against nautilus cluster
- Patrick Donnelly wrote:
> Let's treat this as a backport. Please cherry-pick the commits from here:
>
> https://g...
- 06:00 PM Bug #40101: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when operating on .snap directory
- I have just run into another problem which may be related:
'ls .snap' now hangs for a long time (indefinitely?) an...
- 04:52 PM Bug #40014: mgr/volumes: Name 'sub_name' is not defined
- Venky Shankar wrote:
> Ramana, I think we should just mention that this issue will be fixed w/ subvolume refactor an...
06/10/2019
- 09:19 PM Bug #40197 (Fix Under Review): The command 'node ls' sometimes output some incorrect information ...
- 03:01 PM Bug #40244 (Fix Under Review): cephfs-shell: 'lls' command errors
- 02:56 PM Bug #40244 (Resolved): cephfs-shell: 'lls' command errors
- lls need not print the current working directory. It does not print the correct path for relative paths.
- 02:49 PM Bug #40243 (Fix Under Review): cephfs-shell: Incorrect error message is printed in 'lcd' command
- 02:43 PM Bug #40243 (Resolved): cephfs-shell: Incorrect error message is printed in 'lcd' command
- For different types of incorrect arguments passed, the appropriate error message is not printed.
- 01:24 PM Bug #40200 (Fix Under Review): luminous: mds: does fails assert(session->get_nref() == 1) when ba...
- 02:05 AM Bug #40200: luminous: mds: does fails assert(session->get_nref() == 1) when balancing
- your patch should fix the issue. Thanks for tracking it down. Could you please create a PR?
- 12:01 PM Bug #38739 (Resolved): cephfs-shell: python traceback with mkdir inside inexistant directory
- 12:01 PM Backport #39379 (Resolved): nautilus: cephfs-shell: python traceback with mkdir inside inexistant...
- 12:01 PM Feature #38740 (Resolved): cephfs-shell: support mkdir with non-octal mode
- 12:01 PM Backport #39378 (Resolved): nautilus: cephfs-shell: support mkdir with non-octal mode
- 12:01 PM Bug #38741 (Resolved): cephfs-shell: python traceback with mkdir when reattempt of mkdir
- 12:01 PM Backport #39377 (Resolved): nautilus: cephfs-shell: python traceback with mkdir when reattempt of...
- 12:00 PM Bug #38743 (Resolved): cephfs-shell: mkdir creates directory with invalid octal mode
- 12:00 PM Backport #39376 (Resolved): nautilus: cephfs-shell: mkdir creates directory with invalid octal mode
- 12:00 PM Bug #38996 (Resolved): cephfs-shell: ls command produces error: no "colorize" attribute found error
- 12:00 PM Backport #39197 (Resolved): nautilus: cephfs-shell: ls command produces error: no "colorize" attr...
- 11:59 AM Backport #39192 (Resolved): nautilus: mds: crash during mds restart
- 11:58 AM Backport #39199 (Resolved): nautilus: mds: we encountered "No space left on device" when moving h...
- 10:37 AM Bug #22524: NameError: global name 'get_mds_map' is not defined
- Note: luminous backport is tracked by #40182, where cbbdd0da7d40e4e5def5cc0b9a9250348e71019f is also being backported...
- 10:28 AM Backport #40236 (Resolved): nautilus: mds: blacklisted clients eviction is broken
- https://github.com/ceph/ceph/pull/28618
- 10:27 AM Backport #40223 (Resolved): nautilus: mds: reset heartbeat during long-running loops in recovery
- https://github.com/ceph/ceph/pull/28611
- 10:27 AM Backport #40222 (Resolved): mimic: mds: reset heartbeat during long-running loops in recovery
- https://github.com/ceph/ceph/pull/28918
- 10:27 AM Backport #40221 (Resolved): luminous: mds: reset heartbeat during long-running loops in recovery
- https://github.com/ceph/ceph/pull/28544
- 10:26 AM Backport #40220 (Resolved): nautilus: TestMisc.test_evict_client fails
- https://github.com/ceph/ceph/pull/28613
- 10:26 AM Backport #40219 (Resolved): mimic: TestMisc.test_evict_client fails
- https://github.com/ceph/ceph/pull/29228
- 10:26 AM Backport #40218 (Resolved): luminous: TestMisc.test_evict_client fails
- https://github.com/ceph/ceph/pull/29229
- 10:26 AM Backport #40217 (Resolved): nautilus: cephfs-shell: Fix flake8 errors
- https://github.com/ceph/ceph/pull/28681
- 10:22 AM Bug #38803 (Resolved): qa: test_sessionmap assumes simple messenger
- 10:22 AM Backport #39430 (Resolved): nautilus: qa: test_sessionmap assumes simple messenger
- 03:20 AM Bug #40213 (Resolved): mds: cannot switch mds state from standby-replay to active
- if a standby-replay mds runs for a long time, there are too many inodes in its cache. In the rejoin phase, the mds server ...
06/09/2019
- 11:24 PM Bug #40200: luminous: mds: does fails assert(session->get_nref() == 1) when balancing
- We have seen 3 identical crashes so far. (Logs of the crashed MDSs are at ceph-post-file: a74beec8-0a68-44c1-bfc5-56d...
- 03:51 AM Bug #39987: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- https://github.com/ceph/ceph/pull/28190 is incomplete
https://github.com/ceph/ceph/pull/28459
- 03:24 AM Bug #39987 (Fix Under Review): mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
06/08/2019
- 06:17 PM Bug #24072 (Resolved): mds: race with new session from connection and imported session
- 04:32 PM Backport #39379: nautilus: cephfs-shell: python traceback with mkdir inside inexistant directory
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27677
merged
- 04:31 PM Backport #39378: nautilus: cephfs-shell: support mkdir with non-octal mode
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27677
merged
- 04:31 PM Backport #39377: nautilus: cephfs-shell: python traceback with mkdir when reattempt of mkdir
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27677
merged
- 04:31 PM Backport #39376: nautilus: cephfs-shell: mkdir creates directory with invalid octal mode
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27677
merged
- 04:31 PM Backport #39197: nautilus: cephfs-shell: ls command produces error: no "colorize" attribute found...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27677
merged
- 04:30 PM Backport #39192: nautilus: mds: crash during mds restart
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27714
merged
- 04:30 PM Backport #39199: nautilus: mds: we encountered "No space left on device" when moving huge number ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27736
merged
- 04:29 PM Backport #39209: nautilus: mds: mds_cap_revoke_eviction_timeout is not used to initialize Server:...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27842
merged
- 04:29 PM Backport #39470: nautilus: There is no punctuation mark or blank between tid and client_id in th...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27846
merged
- 04:29 PM Backport #39473: nautilus: mds: fail to resolve snapshot name contains '_'
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27849
merged
- 03:57 PM Bug #40001: mds cache oversize after restart
- Zheng Yan wrote:
> please check if these dirfrag fetches are from open_file_table
How can I figure out if they are from...
- 01:06 PM Bug #40211 (Fix Under Review): mds: fix corner case of replaying open sessions
- 12:01 PM Bug #40211 (Resolved): mds: fix corner case of replaying open sessions
- Marking a session dirty may flush all existing dirty sessions. MDS
calls Server::finish_force_open_sessions() for lo...
- 04:19 AM Bug #39987 (Pending Backport): mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- 04:16 AM Bug #40061 (Pending Backport): mds: blacklisted clients eviction is broken
- 04:14 AM Feature #40121 (Resolved): mds: count purge queue items left in journal
- 04:13 AM Bug #40171 (Pending Backport): mds: reset heartbeat during long-running loops in recovery
- 04:12 AM Bug #40173 (Pending Backport): TestMisc.test_evict_client fails
- 04:12 AM Cleanup #40191 (Pending Backport): cephfs-shell: Fix flake8 errors
- 02:14 AM Bug #40210 (New): mds: stuck in up:clientreplay during thrashing
- ...
06/07/2019
- 07:17 PM Bug #40182 (Fix Under Review): luminous: pybind: luminous volume client breaks against nautilus c...
- Let's treat this as a backport. Please cherry-pick the commits from here:
https://github.com/ceph/ceph/pull/17266/...
- 07:20 AM Bug #40182: luminous: pybind: luminous volume client breaks against nautilus cluster
- proposed fix: https://github.com/ceph/ceph/pull/28445
- 04:35 PM Backport #39232: nautilus: kclient: nofail option not supported
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27851
merged
- 03:44 PM Backport #39214: nautilus: mds: there is an assertion when calling Beacon::shutdown()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27852
merged
- 03:44 PM Backport #39211: nautilus: MDSTableServer.cc: 83: FAILED assert(version == tid)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27853
merged
- 03:43 PM Backport #39222: nautilus: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27879
merged
- 03:43 PM Backport #38876: nautilus: mds: high debug logging with many subtrees is slow
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27892
merged
- 03:42 PM Backport #39471: nautilus: Expose CephFS snapshot creation time to clients
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27901
merged
- 12:01 PM Bug #40202 (Fix Under Review): cephfs-shell: Error messages are printed to stdout
- 11:38 AM Bug #40202 (Resolved): cephfs-shell: Error messages are printed to stdout
- The error messages are mixed with other output messages.
- 08:52 AM Bug #40200 (Rejected): luminous: mds: does fails assert(session->get_nref() == 1) when balancing
- We've seen this assertion twice after upgrading MDS's from v12.2.11 to v12.2.12 and due to #40190 it can be disruptiv...
- 02:49 AM Bug #40197 (Fix Under Review): The command 'node ls' sometimes output some incorrect information ...
- Env: my ceph cluster has three nodes. Each node has one monitor, one mds, and some osds.
test command: ceph node ls
...
- 12:06 AM Backport #39050: nautilus: ceph_volume_client: Too many arguments for "WriteOpCtx"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27893
merged
06/06/2019
- 09:49 PM Bug #39436 (Resolved): qa: upgrade task fails from mimic to master
- 07:46 PM Backport #39213 (In Progress): luminous: mds: there is an assertion when calling Beacon::shutdown()
- 07:44 PM Backport #40160 (In Progress): luminous: FSAL_CEPH assertion failed in Client::_lookup_name: "par...
- 12:46 AM Backport #40160: luminous: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
- Jeff, can you do this backport please?
- 07:44 PM Backport #39231 (In Progress): luminous: kclient: nofail option not supported
- 05:21 PM Bug #40182: luminous: pybind: luminous volume client breaks against nautilus cluster
- I think adopting `fs dump` instead of `mds dump` is the right thing to do.
- 07:22 AM Bug #40182 (Resolved): luminous: pybind: luminous volume client breaks against nautilus cluster
- Due to the removal of the 'ceph mds dump' command in nautilus, a luminous ceph_volume_client does not work against a ...
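A rough sketch of the suggested direction (plain rados Python binding rather than the volume client's own helpers; the conf path is an assumption): issue 'fs dump', which both luminous and nautilus monitors understand, instead of the removed 'mds dump':

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    # 'fs dump' exists on luminous and nautilus; 'mds dump' was removed in nautilus
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "fs dump", "format": "json"}), b'')
    fsmap = json.loads(outbuf) if ret == 0 else None
    cluster.shutdown()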
- 04:03 PM Cleanup #40191 (Fix Under Review): cephfs-shell: Fix flake8 errors
- 03:52 PM Cleanup #40191 (Resolved): cephfs-shell: Fix flake8 errors
- Fix the following errors:
* E303 too many blank lines
* E722 do not use bare 'except'
* E501 line too long
* F632...
- 02:17 PM Backport #39221 (In Progress): luminous: mds: behind on trimming and "[dentry] was purgeable but ...
- 12:38 AM Backport #39221: luminous: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- Zheng, please do this backport.
- 12:30 PM Bug #40014: mgr/volumes: Name 'sub_name' is not defined
- Ramana, I think we should just mention that this issue will be fixed w/ subvolume refactor and mark as resolved once ...
- 11:10 AM Backport #40158 (In Progress): nautilus: mgr/volumes: unable to set quota on fs subvolumes
- https://github.com/ceph/ceph/pull/28429
- 11:09 AM Backport #40157 (In Progress): nautilus: mgr/volumes: cannot create subvolumes with py3 libraries
- https://github.com/ceph/ceph/pull/28429
- 11:09 AM Backport #39934 (In Progress): nautilus: mgr/volumes: add CephFS subvolumes library
- https://github.com/ceph/ceph/pull/28429
- 12:33 AM Feature #38153: client: proactively release caps it is not using
- Status on this Zheng?
06/05/2019
- 09:34 PM Feature #40121 (Fix Under Review): mds: count purge queue items left in journal
- 07:55 PM Backport #39430: nautilus: qa: test_sessionmap assumes simple messenger
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27772
merged
- 06:20 PM Bug #40159 (Fix Under Review): mds: openfiletable prefetching large amounts of inodes lead to mds...
- 06:31 AM Bug #40159 (Fix Under Review): mds: openfiletable prefetching large amounts of inodes lead to mds...
- Recently, we found that both mdses of one of our clusters can't boot to up:active
After debugging, we believe this ...
- 02:07 PM Bug #40173 (Fix Under Review): TestMisc.test_evict_client fails
- 02:01 PM Bug #40173 (Resolved): TestMisc.test_evict_client fails
- /ceph/teuthology-archive/pdonnell-2019-06-04_03:15:58-fs-wip-pdonnell-testing-20190603.231819-distro-basic-smithi/400...
- 01:30 PM Bug #36370: add information about active scrubs to "ceph -s" (and elsewhere)
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Patrick Donnelly wrote:
> > > Venky, status on this ticket?
> ...
- 12:41 PM Bug #40014 (Fix Under Review): mgr/volumes: Name 'sub_name' is not defined
- 10:17 AM Bug #40171 (Fix Under Review): mds: reset heartbeat during long-running loops in recovery
- 09:30 AM Bug #40171 (Resolved): mds: reset heartbeat during long-running loops in recovery
- 08:43 AM Bug #39949: test: extend mgr/volume test to cover new interfaces
- Backporting note: this will probably need to be done by a CephFS developer because it will be part of a series of com...
- 08:42 AM Feature #39969: mgr / volume: refactor volume module
- Backporting note: this will probably need to be done by a CephFS developer because it will be part of a series of com...
- 08:38 AM Backport #40158 (Need More Info): nautilus: mgr/volumes: unable to set quota on fs subvolumes
- feature backport - commits need to be cherry-picked in the correct order
reassigning to the developer
- 08:37 AM Backport #40157 (Need More Info): nautilus: mgr/volumes: cannot create subvolumes with py3 libraries
- feature backport - commits need to be cherry-picked in the correct order
reassigning to the developer
- 08:36 AM Backport #39934 (Need More Info): nautilus: mgr/volumes: add CephFS subvolumes library
- feature backport - commits need to be cherry-picked in the correct order
reassigning to the developer
- 06:45 AM Backport #40169 (Resolved): nautilus: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:...
- https://github.com/ceph/ceph/pull/28499
- 06:45 AM Backport #40168 (Resolved): mimic: client: ceph.dir.rctime xattr value incorrectly prefixes "09" ...
- https://github.com/ceph/ceph/pull/28501
- 06:44 AM Backport #40167 (Resolved): nautilus: client: ceph.dir.rctime xattr value incorrectly prefixes "0...
- https://github.com/ceph/ceph/pull/28500
- 06:44 AM Backport #40166 (Resolved): luminous: client: ceph.dir.rctime xattr value incorrectly prefixes "0...
- https://github.com/ceph/ceph/pull/28502
- 06:44 AM Backport #40165 (Resolved): mimic: mount: key parsing fail when doing a remount
- https://github.com/ceph/ceph/pull/29225
- 06:44 AM Backport #40164 (Resolved): nautilus: mount: key parsing fail when doing a remount
- https://github.com/ceph/ceph/pull/28610
- 06:44 AM Backport #40163 (Resolved): luminous: mount: key parsing fail when doing a remount
- https://github.com/ceph/ceph/pull/29226
- 06:44 AM Backport #40162 (Resolved): mimic: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->i...
- https://github.com/ceph/ceph/pull/29609
- 06:44 AM Backport #40161 (Resolved): nautilus: FSAL_CEPH assertion failed in Client::_lookup_name: "parent...
- https://github.com/ceph/ceph/pull/28612
- 06:43 AM Backport #40160 (Resolved): luminous: FSAL_CEPH assertion failed in Client::_lookup_name: "parent...
- https://github.com/ceph/ceph/pull/28437
- 12:54 AM Backport #39690 (In Progress): nautilus: mds: error "No space left on device" when create a larg...
- https://github.com/ceph/ceph/pull/28394
- 12:38 AM Bug #40085 (Pending Backport): FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_di...
- 12:36 AM Bug #39705 (Pending Backport): qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.4...
- 12:36 AM Bug #39943 (Pending Backport): client: ceph.dir.rctime xattr value incorrectly prefixes "09" to t...
06/04/2019
- 11:23 PM Bug #39951 (Pending Backport): mount: key parsing fail when doing a remount
- 10:36 PM Backport #40158 (In Progress): nautilus: mgr/volumes: unable to set quota on fs subvolumes
- 10:35 PM Backport #40158 (Resolved): nautilus: mgr/volumes: unable to set quota on fs subvolumes
- 10:32 PM Backport #40157 (In Progress): nautilus: mgr/volumes: cannot create subvolumes with py3 libraries
- 10:26 PM Backport #40157 (Resolved): nautilus: mgr/volumes: cannot create subvolumes with py3 libraries
- 10:31 PM Backport #39934 (In Progress): nautilus: mgr/volumes: add CephFS subvolumes library
- 05:48 PM Bug #40152 (Pending Backport): mgr/volumes: unable to set quota on fs subvolumes
- 04:12 PM Bug #40152 (Fix Under Review): mgr/volumes: unable to set quota on fs subvolumes
- 02:38 PM Bug #40152 (Resolved): mgr/volumes: unable to set quota on fs subvolumes
- Setting quota on fs subvolumes fails in master. Tested on a vstart cluster.
build]$ ./bin/ceph fs subvolume create... - 04:09 PM Bug #39750 (Pending Backport): mgr/volumes: cannot create subvolumes with py3 libraries
- 12:54 PM Bug #39750 (Fix Under Review): mgr/volumes: cannot create subvolumes with py3 libraries
- 12:54 PM Bug #39750: mgr/volumes: cannot create subvolumes with py3 libraries
- Thanks, Nathan! I could reproduce this issue just with -DWITH_PYTHON3=ON.
- 10:32 AM Bug #39750: mgr/volumes: cannot create subvolumes with py3 libraries
- To successfully reproduce, -DWITH_PYTHON2=OFF may also be needed (in addition to the options shown in the bug descrip...
- 12:55 PM Backport #39689 (In Progress): mimic: mds: error "No space left on device" when create a large n...
- https://github.com/ceph/ceph/pull/28381
- 10:33 AM Backport #40131 (Resolved): nautilus: Document behaviour of fsync-after-close
- https://github.com/ceph/ceph/pull/30025
- 10:33 AM Backport #40130 (Resolved): mimic: Document behaviour of fsync-after-close
- https://github.com/ceph/ceph/pull/29765
- 08:46 AM Feature #40121 (Resolved): mds: count purge queue items left in journal
- The MDS purge queue didn't have a perf counter to record how many items are still left in the journal. Even when the MDS restarted, t...
- 06:22 AM Backport #39686 (In Progress): nautilus: ceph-fuse: client hang because its bad session PipeConne...
- https://github.com/ceph/ceph/pull/28375
06/03/2019
- 10:29 PM Bug #36370: add information about active scrubs to "ceph -s" (and elsewhere)
- Venky Shankar wrote:
> Patrick Donnelly wrote:
> > Venky, status on this ticket?
>
> For this ticket: scrub stat... - 09:34 PM Bug #40116 (Resolved): nautilus: qa: cannot schedule kcephfs/multimds
- 09:10 PM Bug #40116: nautilus: qa: cannot schedule kcephfs/multimds
- merged https://github.com/ceph/ceph/pull/28369
- 09:01 PM Bug #40116 (Fix Under Review): nautilus: qa: cannot schedule kcephfs/multimds
- 08:46 PM Bug #40116 (Resolved): nautilus: qa: cannot schedule kcephfs/multimds
- ...
- 06:44 PM Bug #40034: mds: stuck in clientreplay
- Nathan Fish wrote:
> Patrick Donnelly wrote:
> > None of us see why the MDS was stuck in clientreplay. How long do ...
- 05:24 PM Bug #40034: mds: stuck in clientreplay
- Patrick Donnelly wrote:
> None of us see why the MDS was stuck in clientreplay. How long do you think it was in that...
- 05:23 PM Bug #40034 (Need More Info): mds: stuck in clientreplay
- 05:04 PM Bug #40034: mds: stuck in clientreplay
- None of us see why the MDS was stuck in clientreplay. How long do you think it was in that state?
- 06:40 PM Bug #40085 (Fix Under Review): FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_di...
- 05:44 PM Bug #40085: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
- This set should fix it:
https://github.com/ceph/ceph/pull/28324
- 06:28 PM Documentation #24641 (Pending Backport): Document behaviour of fsync-after-close
- 06:27 PM Documentation #24641 (Fix Under Review): Document behaviour of fsync-after-close
- 02:45 PM Bug #39750: mgr/volumes: cannot create subvolumes with py3 libraries
- I can't reproduce the issue. I get the following warning when I run the do_cmake.sh as shown in the report -
CMake...
06/02/2019
- 02:28 AM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
- it's a longstanding bug, fixed by "ceph: use ceph_evict_inode to cleanup inode's resource" in https://github.com/ceph/c...
06/01/2019
- 09:53 AM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
- it's kernel BUG at fs/ceph/mds_client.c:1500!
BUG_ON(session->s_nr_caps > 0);
No idea how it can happen
05/31/2019
- 10:28 PM Feature #24463: kclient: add btime support
- I've spent the last couple of days working on this. The btime piece happens to be pretty simple, but it shares a feat...
- 06:14 PM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
- Another: /ceph/teuthology-archive/yuriw-2019-05-30_20:50:30-kcephfs-mimic_v13.2.6_QE-testing-basic-smithi/3989013/teu...
- 06:12 PM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
- Another: /ceph/teuthology-archive/yuriw-2019-05-30_20:50:30-kcephfs-mimic_v13.2.6_QE-testing-basic-smithi/3989039/teu...
- 06:11 PM Bug #40102 (Resolved): qa: probable kernel deadlock/oops during umount on testing branch
- ...
- 04:51 PM Bug #40101 (Resolved): libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when operating on .sn...
- When I make an nfs-ganesha export of a cephfs using FSAL_CEPH, the NFS client receives ESTALE when attempting to stat...
- 09:27 AM Bug #36370: add information about active scrubs to "ceph -s" (and elsewhere)
- Patrick Donnelly wrote:
> Venky, status on this ticket?
For this ticket: scrub status commands have been added vi...
- 01:27 AM Backport #39679 (In Progress): mimic: pybind: add the lseek() function to pybind of cephfs
- https://github.com/ceph/ceph/pull/28337
05/30/2019
- 11:11 PM Backport #39680 (In Progress): nautilus: pybind: add the lseek() function to pybind of cephfs
- 08:28 PM Bug #40093 (Can't reproduce): qa: client mount cannot be forcibly unmounted when all MDS are down
- ...
- 05:07 PM Bug #36370: add information about active scrubs to "ceph -s" (and elsewhere)
- Venky, status on this ticket?
- 04:52 PM Feature #5486 (In Progress): kclient: make it work with selinux
- [PATCH 1/2] ceph: rename struct ceph_acls_info to ceph_acl_sec_ctx
[PATCH 2/2] ceph: add selinux support
- 02:42 PM Bug #40085 (Resolved): FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
- A customer reported a crash in nfs-ganesha that indicated a problem down in libcephfs:...
- 08:23 AM Bug #40001: mds cache oversize after restart
- please check if these dirfrag fetches are from open_file_table
- 05:16 AM Bug #40001: mds cache oversize after restart
- Patrick Donnelly wrote:
> Are you using snapshots? Can you tell us more about how the cluster is being used like # o...
05/29/2019
- 09:47 PM Bug #40001: mds cache oversize after restart
- Are you using snapshots? Can you tell us more about how the cluster is being used like # of clients and versions.
- 06:49 PM Documentation #24641: Document behaviour of fsync-after-close
- Proposed documentation update here:
https://github.com/ceph/ceph/pull/28300
Niklas, please take a look and let ...
- 06:21 PM Bug #40034: mds: stuck in clientreplay
- Here's ganesha.log, not sure if there's anything useful:
https://termbin.com/7ni9
Is it really intended for an md...
- 06:15 PM Bug #40034: mds: stuck in clientreplay
- Logs from nfs-ganesha would be helpful too if you have them.
- 01:53 PM Bug #39987 (Fix Under Review): mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- 12:09 PM Bug #40061 (Fix Under Review): mds: blacklisted clients eviction is broken
- https://github.com/ceph/ceph/pull/28293
- 12:01 PM Bug #40061 (Resolved): mds: blacklisted clients eviction is broken
- 02:11 AM Backport #39669 (In Progress): mimic: mds: output lock state in format dump
- https://github.com/ceph/ceph/pull/28274
05/28/2019
- 07:03 PM Bug #17594: cephfs: permission checking not working (MDS should enforce POSIX permissions)
- Jeff Layton wrote:
> Reconfirming that I think this is a problem. Here's Client::mkdir():
>
> [...]
>
> There ...
- 06:44 PM Bug #17594: cephfs: permission checking not working (MDS should enforce POSIX permissions)
- Reconfirming that I think this is a problem. Here's Client::mkdir():...
- 06:19 PM Documentation #24641: Document behaviour of fsync-after-close
- > Second your answer sounds kcephfs specific -- do the same guarantees still hold for ceph-fuse?
FUSE just farms o...
- 06:16 PM Documentation #24641: Document behaviour of fsync-after-close
- Niklas replied via email:
> I think it makes sense to document it the way you say it, e.g. "kcephfs's guarantees i...
- 03:13 PM Documentation #24641: Document behaviour of fsync-after-close
- Niklas Hambuechen wrote:
> The following should be documented:
>
> Does close()/re-open()/fsync() provide the sam...
- 04:17 PM Bug #40002: mds: not trim log under heavy load
- Zheng Yan wrote:
> multiple-active mds?
yes
- 08:22 AM Bug #40002: mds: not trim log under heavy load
- multiple-active mds?
- 10:51 AM Backport #40042 (Resolved): mimic: avoid trimming too many log segments after mds failover
- https://github.com/ceph/ceph/pull/28650
- 10:51 AM Backport #40041 (Resolved): luminous: avoid trimming too many log segments after mds failover
- https://github.com/ceph/ceph/pull/28543
- 10:50 AM Backport #40040 (Resolved): nautilus: avoid trimming too many log segments after mds failover
- https://github.com/ceph/ceph/pull/28582
- 09:25 AM Feature #40036 (Fix Under Review): mgr / volumes: support asynchronous subvolume deletes
- 09:25 AM Feature #40036: mgr / volumes: support asynchronous subvolume deletes
- see: https://github.com/ceph/ceph/blob/master/src/pybind/mgr/volumes/module.py#L393
- 09:00 AM Feature #40036 (Resolved): mgr / volumes: support asynchronous subvolume deletes
- Currently, removing a subvolume does an in-band directory removal. This can cause the operation to run for long for huge su...
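A minimal local-filesystem sketch of the idea (not the mgr/volumes implementation): rename the subvolume directory into a trash area in-band so the command returns quickly, then purge it from a background worker:

    import os
    import shutil
    import threading
    import uuid

    def remove_subvolume_async(subvol_path, trash_dir):
        os.makedirs(trash_dir, exist_ok=True)
        # cheap, atomic in-band step: move the tree out of the way
        trashed = os.path.join(trash_dir, uuid.uuid4().hex)
        os.rename(subvol_path, trashed)
        # expensive out-of-band step: actually delete the tree
        threading.Thread(target=shutil.rmtree, args=(trashed,), daemon=True).start()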
- 07:59 AM Bug #40028 (Pending Backport): mds: avoid trimming too many log segments after mds failover
- 07:57 AM Bug #40034: mds: stuck in clientreplay
- ...
05/27/2019
- 04:13 PM Bug #40034 (Need More Info): mds: stuck in clientreplay
- When I came in on Monday morning, our cluster's cephfs was stuck in clientreplay, and nfs mount through nfs-ganesha h...
- 12:12 PM Bug #40028 (Resolved): mds: avoid trimming too many log segments after mds failover
- If mds was behind on trim before failover, the new mds may trim too many log segments at the same time, and cause unh...
05/24/2019
- 01:23 PM Bug #39947 (Fix Under Review): cephfs-shell: add CI testing with flake8
- 02:55 AM Bug #40019 (New): mds: crash at ms_dispatch thread
- Env: ceph 14.2.1 3 mds
I enabled the ceph crash module, so pasting the meta here
meta:...
- 12:29 AM Backport #39670 (In Progress): nautilus: mds: output lock state in format dump
05/23/2019
- 04:00 PM Bug #40001: mds cache oversize after restart
- I set debug_mds to 20/20 and almost all of the log is like...
- 10:11 AM Bug #40014 (Resolved): mgr/volumes: Name 'sub_name' is not defined
- I'm getting a new mypy error in master:...
05/22/2019
- 03:55 PM Bug #40002 (Fix Under Review): mds: not trim log under heavy load
- ceph version 14.2.1
we have 3 mds under a heavy load (create 8k files per second)
we find the mds log grows very fast...
- 03:46 PM Bug #40001 (Rejected): mds cache oversize after restart
- ceph version 14.2.1
we have 3 mds under a heavy load (create 8k files per second)
all 3 mds are under 30G mem...
- 12:52 PM Cleanup #4744 (New): mds: pass around LogSegments via std::shared_ptr
- 12:19 PM Feature #38153 (New): client: proactively release caps it is not using
- 12:05 PM Feature #358 (Rejected): mds: efficient revert to snapshot
- There's no RADOS support for reverting to an older snapshot so I don't see this getting fixed in any near-future time...
- 12:01 PM Feature #15066: multifs: Allow filesystems to be assigned RADOS namespace as well as pool for met...
- Needs the ability to delete a RADOS namespace. See also: https://www.spinics.net/lists/ceph-devel/msg36695.html
- 11:48 AM Tasks #39998 (New): client: audit ACL
- Look for race conditions involved with client checks and releasing caps. Jeff wants to help with this.
- 11:27 AM Feature #17835 (Fix Under Review): mds: enable killpoint tests for MDS-MDS subtree export
- 02:35 AM Feature #39098: mds: lock caching for asynchronous unlink
- Jeff Layton wrote:
> Zheng Yan wrote:
> >
> > Yes, this can causes inconsistency. But it's not unique to link cou...
05/21/2019
- 02:20 PM Feature #39982 (Duplicate): cephfs client periodically report cache utilisation to MDS server
- Yup, this is something we're working on for Octopus. Thanks Stefan!
- 12:18 PM Bug #39947 (In Progress): cephfs-shell: add CI testing with flake8
- 09:58 AM Bug #39987 (Resolved): mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- A user reported a bug where the mds couldn't finish freezing a dirfrag. The cache dump includes the following entries....
- 03:02 AM Backport #39472 (In Progress): mimic: mds: fail to resolve snapshot name contains '_'
- https://github.com/ceph/ceph/pull/28186
05/20/2019
- 05:54 PM Feature #39982 (Duplicate): cephfs client periodically report cache utilisation to MDS server
- After seeing Gregory's talk "What are "caps"? (And Why Won't my Client Drop Them?)", where he explained that the MDS servers ne...
- 07:01 AM Feature #39969 (Fix Under Review): mgr / volume: refactor volume module
- 03:02 AM Feature #39969 (In Progress): mgr / volume: refactor volume module
- 03:02 AM Feature #39969 (Resolved): mgr / volume: refactor volume module
- Now, with the addition of submodule commands (interfaces), volume commands live in the main module source while submo...
05/19/2019
- 08:39 AM Bug #39951 (Fix Under Review): mount: key parsing fail when doing a remount
- 08:24 AM Feature #20 (Fix Under Review): client: recover from a killed session (w/ blacklist)
05/17/2019
- 01:25 PM Backport #39960 (Resolved): nautilus: cephfs-shell: mkdir error for relative path
- https://github.com/ceph/ceph/pull/28616
05/16/2019
- 11:16 AM Bug #39951: mount: key parsing fail when doing a remount
- Here's the link to a PR:
https://github.com/ceph/ceph/pull/28148
- 11:06 AM Bug #39951 (Resolved): mount: key parsing fail when doing a remount
- When doing a CephFS remount (-o remount) the secret is parsed from procfs and we get '<hidden>' as a result and the m...
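A quick way to see what a remount ends up re-parsing (standard Linux paths assumed): dump the ceph entries from procfs; the kernel only ever reports the secret as '<hidden>' there:

    # print the options '-o remount' re-reads; for a cephfs mount the
    # secret= option shows up only as '<hidden>'
    with open('/proc/mounts') as mounts:
        for line in mounts:
            if line.split()[2] == 'ceph':
                print(line.strip())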
- 05:30 AM Bug #39949 (Fix Under Review): test: extend mgr/volume test to cover new interfaces
- 04:57 AM Bug #39949 (Resolved): test: extend mgr/volume test to cover new interfaces
- extend `qa/workunits/fs/test-volumes.sh` tests to cover the newly introduced subvolume/subvolumegroup interfaces.