Activity
From 01/22/2020 to 02/20/2020
02/20/2020
- 02:18 PM Bug #43248 (Fix Under Review): cephfs-shell: do not drop into shell after running command-line co...
- 11:37 AM Bug #43964: qa: Test failure: test_acls
- See - https://tracker.ceph.com/issues/43486#note-2
- 11:37 AM Bug #43964: qa: Test failure: test_acls
- btrfs-progs-devel wasn't available on "CentOS 8 too":https://tracker.ceph.com/issues/43486 after which "we added a fi...
- 10:01 AM Bug #43964 (Fix Under Review): qa: Test failure: test_acls
- 08:09 AM Bug #44117: vstart_runner.py: align LocalRemote.run with teuthology's run
- I was able to reproduce the bug; see /ceph/teuthology-archive/rishabh-2020-02-20_07:10:45-fs-wip-rishabh-dummy-test-d...
02/19/2020
- 06:59 PM Bug #44176 (Resolved): qa: "Error EINVAL: 'Module' object has no attribute 'remove_mds'"
- 05:40 PM Fix #44171 (In Progress): pybind/cephfs: audit for unimplemented bindings for libcephfs
- 02:23 PM Fix #44171: pybind/cephfs: audit for unimplemented bindings for libcephfs
- The initial PR has been sent. It doesn't cover all the missing bindings yet.
- 05:01 PM Feature #44214 (Resolved): mount.ceph: add "fs" alias for "mds_namespace"
- I feel "mds_namespace" is not an intuitive name. Let's keep it for backwards compatibility but introduce a cleaner na...
- 05:00 PM Feature #44212 (Resolved): client: add alias client_fs for client_mds_namespace
- I feel "mds_namespace" is not an intuitive name. Let's keep client_mds_namespace for backwards compatibility but intr...
- 04:55 PM Feature #44211 (Resolved): mount.ceph: stop printing warning message about mds_namespace
- We see:...
- 02:31 PM Bug #44207 (Fix Under Review): mgr/volumes: deadlock when trying to purge large number of trash e...
- 12:34 PM Bug #44207 (In Progress): mgr/volumes: deadlock when trying to purge large number of trash entries
- 12:31 PM Bug #44207 (Resolved): mgr/volumes: deadlock when trying to purge large number of trash entries
- There's a subtle deadlock when purge tasks (via the generic async job machinery) try to fetch the next job to execu...
- 12:37 PM Bug #44208 (Resolved): mgr/volumes: support canceling in-progress/pending clone operations.
- This is useful when a user wants to interrupt a long-running clone operation.
$ ceph fs clone cancel <volume> <clo...
- 12:20 AM Bug #44100: cephfs rsync kworker high load.
- none none wrote:
> Zheng Yan wrote:
> > could you check if this still happen with upstream 5.5 kernel
>
> From w...
02/18/2020
- 11:18 PM Bug #43750 (Fix Under Review): mds: add perf counters for openfiletable
- 11:06 PM Bug #44133 (Rejected): Using VIM in a file system is very slow
- Please seek help on the ceph-users mailing list.
- 11:06 PM Bug #44172 (Triaged): cephfs-journal-tool: cannot set --dry_run arg
- 11:04 PM Bug #43964: qa: Test failure: test_acls
- From master: /ceph/teuthology-archive/teuthology-2020-01-28_03:15:03-fs-master-distro-basic-smithi/4712989/teuthology...
- 10:54 PM Feature #44191: cephfs: geo-replication
- Found the ticket #41074. Maybe we can close this.
- 10:39 PM Feature #44191 (Resolved): cephfs: geo-replication
- This is a skeleton ticket for geo-replication of subvolumes.
- 10:49 PM Feature #44193 (Resolved): pybind/mgr/volumes: add API to manage NFS-Ganesha gateway clusters in ...
- 10:40 PM Feature #44192 (Resolved): mds: stable multimds scrub
- Remove warnings/guards for multimds scrub when blockers complete.
- 10:37 PM Feature #44190 (New): qa: thrash file systems during workload tests
- Verify that creating/deleting file systems does not interfere with ongoing workloads. This should be a background task...
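A minimal sketch of such a background thrasher, assuming the standard ceph CLI; the "scratch" file system name, pool names, and timing are hypothetical placeholders:
<pre>
import subprocess
import time

def sh(*cmd):
    # Run a ceph CLI command and raise if it fails.
    subprocess.run(cmd, check=True)

def thrash_fs_once():
    # Create a throwaway file system, let it run alongside the workload,
    # then tear it down. Deleting pools also requires
    # mon_allow_pool_delete=true on the cluster.
    sh("ceph", "osd", "pool", "create", "scratch_meta", "8")
    sh("ceph", "osd", "pool", "create", "scratch_data", "8")
    sh("ceph", "fs", "new", "scratch", "scratch_meta", "scratch_data")
    time.sleep(30)  # let MDS daemons pick it up while the workload runs
    sh("ceph", "fs", "fail", "scratch")
    sh("ceph", "fs", "rm", "scratch", "--yes-i-really-mean-it")
    sh("ceph", "osd", "pool", "rm", "scratch_meta", "scratch_meta",
       "--yes-i-really-really-mean-it")
    sh("ceph", "osd", "pool", "rm", "scratch_data", "scratch_data",
       "--yes-i-really-really-mean-it")

for _ in range(10):
    thrash_fs_once()
</pre>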
- 10:03 PM Feature #38951: client: implement asynchronous unlink/create
- Moving this to Zheng since he's working on the libcephfs part of this.
- 10:02 PM Feature #38951 (In Progress): client: implement asynchronous unlink/create
- 09:40 PM Feature #24725: mds: propagate rstats from the leaf dirs up to the specified directory
- Zheng has a follow-on PR: https://github.com/ceph/ceph/pull/32126
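For reference, the rstats being propagated are what clients can read back through CephFS's virtual extended attributes; a small illustration (the mount point and directory below are hypothetical):
<pre>
import os

path = "/mnt/cephfs/some/dir"  # hypothetical kernel/FUSE mount
# ceph.dir.rbytes and ceph.dir.rfiles expose the recursive statistics
# that this rstat propagation work keeps up to date.
rbytes = int(os.getxattr(path, "ceph.dir.rbytes"))
rfiles = int(os.getxattr(path, "ceph.dir.rfiles"))
print(f"{path}: {rbytes} bytes across {rfiles} files (recursive)")
</pre>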
- 12:32 PM Bug #43039: client: shutdown race fails with status 141
- I took a look at the logs but there is nothing conclusive there. Again, I suspect that this is a problem down in the ...
- 12:12 PM Bug #44176 (In Progress): qa: "Error EINVAL: 'Module' object has no attribute 'remove_mds'"
- 08:38 AM Feature #44044: qa: add network namespaces to kernel/ceph-fuse mounts for partition testing
- From Jeff's idea and comments on the first version to fulfill the "halt" mount option, which will try to close all th...
- 12:39 AM Bug #42602 (Resolved): client: missing const SEEK_DATA and SEEK_HOLE on ALPINE LINUX
- 12:37 AM Cleanup #40578 (Resolved): mds: reorganize class members in headers to follow coding guidelines
- 12:37 AM Cleanup #43426 (Resolved): mds: reorg mdstypes header
- 12:16 AM Bug #44097: nautilus: "cluster [WRN] Health check failed: 1 clients failing to respond to capabil...
- Maybe related, also ffsb: /ceph/teuthology-archive/pdonnell-2020-02-16_17:35:17-kcephfs-wip-pdonnell-testing-20200215...
02/17/2020
- 11:54 PM Bug #44176 (Resolved): qa: "Error EINVAL: 'Module' object has no attribute 'remove_mds'"
- ...
- 11:50 PM Bug #43039 (New): client: shutdown race fails with status 141
- /ceph/teuthology-archive/pdonnell-2020-02-15_16:51:06-fs-wip-pdonnell-testing-20200215.033325-distro-basic-smithi/476...
- 04:45 PM Bug #44172 (Resolved): cephfs-journal-tool: cannot set --dry_run arg
- cephfs-journal-tool seems to support a --dry_run argument but I'm not able to pass it to the tool.
I believe I tri...
- 03:20 PM Fix #44171 (Need More Info): pybind/cephfs: audit for unimplemented bindings for libcephfs
- Recently we've added some missing bindings:...
- 02:58 PM Bug #44127: cephfs-shell: read config options from ceph.conf and from ceph config command
- Following are the cephfs-shell options the description talks about -...
- 02:55 PM Bug #44114: test_cephfs_shell.TestDU test fail unexpectedly
- Nope, none of my recent PRs modify anything around TestDU. Besides, after splitting this PR I couldn't reproduce this...
- 02:49 PM Bug #44114 (Need More Info): test_cephfs_shell.TestDU test fail unexpectedly
- Is this not related to your PR? I have not seen this in my testing.
- 02:47 PM Bug #44132 (Triaged): mds: assertion failure due to blacklist
02/14/2020
- 08:47 AM Bug #44139 (New): mds: check all on-disk metadata is versioned
- 06:41 AM Bug #44133: Using VIM in a file system is very slow
- The process in the ssh terminal is stuck when I use vim to edit a python or txt file and save it to cephfs (mounted in kernel m...
- 05:06 AM Bug #44133 (Rejected): Using VIM in a file system is very slow
- Using VIM in a file system is very slow
- 06:18 AM Backport #42441 (In Progress): nautilus: mds: create a configurable snapshot limit
- 04:54 AM Backport #42160 (In Progress): luminous: osdc: objecter ops output does not have useful time info...
- 04:49 AM Backport #42123 (In Progress): luminous: client: no method to handle SEEK_HOLE and SEEK_DATA in l...
- 04:36 AM Backport #41857 (In Progress): luminous: client: removing dir reports "not empty" issue due to cl...
- 02:51 AM Bug #43909 (Fix Under Review): mds: SIGSEGV in Migrator::export_sessions_flushed
- 12:08 AM Bug #43392 (Resolved): MDSMonitor: support automatic failover to standbys with stronger affinity
02/13/2020
- 11:17 PM Bug #44132 (Resolved): mds: assertion failure due to blacklist
- ...
- 10:35 PM Bug #42467: mds: daemon crashes while updating blacklist
- ceph-post-file: 44655e58-39e0-4fff-a2fc-2645b131c594 for the crash listed above (13.2.8)
- 05:30 PM Bug #42467 (New): mds: daemon crashes while updating blacklist
- A new report of this coming from ceph-users:
"[ceph-users] Ceph MDS ASSERT In function 'MDRequestRef'"
Differen...
- 07:19 PM Bug #44127 (Resolved): cephfs-shell: read config options from ceph.conf and from ceph config com...
- cephfs-shell by default has the following options -...
- 06:39 PM Bug #44100: cephfs rsync kworker high load.
- Zheng Yan wrote:
> could you check if this still happen with upstream 5.5 kernel
From where should I get it? elre...
- 02:15 AM Bug #44100: cephfs rsync kworker high load.
- could you check if this still happen with upstream 5.5 kernel
- 02:00 PM Bug #43943 (Fix Under Review): qa: "[WRN] evicting unresponsive client smithi131:z (6314), after ...
- 12:27 PM Bug #44117 (Resolved): vstart_runner.py: align LocalRemote.run with teuthology's run
- teuthology's run uses keyword arguments, but that's not the case for vstart_runner.py. This gets the test passing with ...
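Roughly, the mismatch looks like the following simplified sketch; the real teuthology signature carries more parameters than shown here:
<pre>
import subprocess

class LocalRemote:
    """Sketch of LocalRemote.run() accepting teuthology-style keywords."""

    def run(self, args=None, check_status=True, wait=True, **kwargs):
        # teuthology callers invoke remote.run(args=[...], check_status=False),
        # so `args` must be accepted as a keyword rather than positionally.
        proc = subprocess.run(args, capture_output=True, text=True)
        if check_status and proc.returncode != 0:
            raise RuntimeError(f"command failed: {args}")
        return proc

LocalRemote().run(args=["true"])  # keyword call style, as in teuthology
</pre>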
- 12:09 PM Bug #42986 (Resolved): qa: Test failure: test_drop_cache_command_dead (tasks.cephfs.test_misc.Tes...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:08 PM Bug #43649 (Resolved): mount.ceph fails with ERANGE if name= option is longer than 37 characters
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:00 PM Backport #43780 (Resolved): nautilus: qa: Test failure: test_drop_cache_command_dead (tasks.cephf...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32919
m...
- 11:59 AM Backport #43790 (Resolved): nautilus: RuntimeError: Files in flight high water is unexpectedly lo...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33115
m...
- 11:58 AM Backport #43784 (Resolved): nautilus: fs: OpenFileTable object shards have too many k/v pairs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32921
m...
- 11:58 AM Backport #43777 (Resolved): nautilus: qa: test_full racy check: AssertionError: 29 not greater th...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32918
m...
- 11:58 AM Backport #43733 (Resolved): nautilus: qa: ffsb suite causes SLOW_OPS warnings
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32917
m...
- 11:57 AM Backport #43348 (Resolved): nautilus: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32756
m...
- 11:51 AM Backport #43729 (Resolved): nautilus: client: chdir does not raise error if a file is passed
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32916
m...
- 11:51 AM Backport #43770 (Resolved): nautilus: mount.ceph fails with ERANGE if name= option is longer than...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32807
m...
- 11:51 AM Backport #43503 (Resolved): nautilus: mount.ceph: give a hint message when no mds is up or cluste...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32910
m...
- 11:51 AM Backport #42951: nautilus: Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test_volumes.Te...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33122
m...
- 11:51 AM Backport #43271: nautilus: qa/tasks: Fix raises that doesn't re-raise in test_volumes.py
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33122
m...
- 11:51 AM Backport #43338: nautilus: qa/tasks: add remaining tests for fs volume
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33122
m...
- 11:51 AM Backport #43629: nautilus: mgr/volumes: provision subvolumes with config metadata storage in cephfs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33122
m...
- 11:50 AM Backport #43724: nautilus: mgr/volumes: subvolumes with snapshots can be deleted
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33122
m...
- 11:50 AM Backport #44020: pybind/mgr/volumes: restore from snapshot
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33122
m...
- 10:24 AM Bug #44114 (Need More Info): test_cephfs_shell.TestDU test fail unexpectedly
- I suspect it's because one of the test machines was pretty laggy. This has been reported multiple times before -
h...
- 09:18 AM Bug #44113 (Fix Under Review): cephfs-shell: set proper return value for the tool
- Actually, code for this was already present on the PR, just separated the commit; marking "Fix Under Review".
- 09:15 AM Bug #44113 (Resolved): cephfs-shell: set proper return value for the tool
- Currently, the cephfs-shell tool returns zero all the time, whether the shell is in interactive or non-interactive mode -...
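The shape of the fix is to propagate a real exit status instead of discarding it; a minimal hypothetical sketch, not the actual cephfs-shell code:
<pre>
import sys

def run_command(cmd):
    # Stand-in for dispatching to a cephfs-shell command handler;
    # returns False when the command fails.
    return not cmd.startswith("put /missing")

def main(commands):
    exit_code = 0
    for cmd in commands:
        if not run_command(cmd):
            exit_code = 1  # remember the failure instead of dropping it
    return exit_code

if __name__ == "__main__":
    # Non-interactive mode: exit non-zero if any command failed.
    sys.exit(main(["ls /", "put /missing /x"]))
</pre>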
02/12/2020
- 07:38 PM Backport #43780: nautilus: qa: Test failure: test_drop_cache_command_dead (tasks.cephfs.test_misc...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32919
merged
- 06:44 PM Backport #43790: nautilus: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/33115
merged
- 06:44 PM Backport #43784: nautilus: fs: OpenFileTable object shards have too many k/v pairs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32921
merged
- 06:43 PM Backport #43777: nautilus: qa: test_full racy check: AssertionError: 29 not greater than or equal...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32918
merged
- 06:41 PM Backport #43733: nautilus: qa: ffsb suite causes SLOW_OPS warnings
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32917
merged
- 06:39 PM Backport #43729: nautilus: client: chdir does not raise error if a file is passed
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32916
merged
- 06:37 PM Bug #44100: cephfs rsync kworker high load.
- Tried to work around this by not using multiple cephfs mounts. First process finished quickly, 2nd had kworker high l...
- 06:17 PM Bug #44100: cephfs rsync kworker high load.
- If I unmount the cephfs and mount it again, the problems seem to be gone.
- 06:08 PM Bug #44100: cephfs rsync kworker high load.
- none none wrote:
> PS. this kworker is only appearing with the 2nd rsync process. I already have a rsync session run...
- 03:54 PM Bug #44100: cephfs rsync kworker high load.
- Maybe it is related to renewing capabilities? Or is it normal that the mds is asked to renew 132k caps so often?
{...
- 03:30 PM Bug #44100: cephfs rsync kworker high load.
- PS. this kworker is only appearing with the 2nd rsync process. I already have a rsync session running copying from di...
- 02:31 PM Bug #44100: cephfs rsync kworker high load.
- Did another test rsyncing 2 files, 16 minutes???
### concurr. link backup test2 ###
### start:15:06:13 ###
...
- 02:05 PM Bug #44100 (Resolved): cephfs rsync kworker high load.
- I have an rsync backup running which has grown to 10 hours. When I test with one rsync instance, it looks like it processes...
- 06:36 PM Backport #43770: nautilus: mount.ceph fails with ERANGE if name= option is longer than 37 characters
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32807
merged
- 06:36 PM Backport #43503: nautilus: mount.ceph: give a hint message when no mds is up or cluster is laggy
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32910
merged
- 04:42 PM Bug #44101 (New): nautilus: qa: df pool accounting incomplete
- ...
- 04:30 PM Feature #24880 (Resolved): pybind/mgr/volumes: restore from snapshot
- 04:29 PM Bug #43645 (Resolved): mgr/volumes: subvolumes with snapshots can be deleted
- 04:29 PM Backport #43724 (Resolved): nautilus: mgr/volumes: subvolumes with snapshots can be deleted
- Merged https://github.com/ceph/ceph/pull/33122/
- 04:27 PM Feature #43349 (Resolved): mgr/volumes: provision subvolumes with config metadata storage in cephfs
- 04:27 PM Backport #43629 (Resolved): nautilus: mgr/volumes: provision subvolumes with config metadata stor...
- Merged https://github.com/ceph/ceph/pull/33122/
- 04:26 PM Bug #42872 (Resolved): qa/tasks: add remaining tests for fs volume
- 04:25 PM Backport #43338 (Resolved): nautilus: qa/tasks: add remaining tests for fs volume
- Merged https://github.com/ceph/ceph/pull/33122/
- 04:24 PM Bug #41694 (Resolved): qa/tasks: Fix raises that doesn't re-raise in test_volumes.py
- 04:24 PM Backport #43271 (Resolved): nautilus: qa/tasks: Fix raises that doesn't re-raise in test_volumes.py
- Merged https://github.com/ceph/ceph/pull/33122
- 04:23 PM Bug #42646 (Resolved): Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test_volumes.TestVo...
- 04:23 PM Backport #42951 (Resolved): nautilus: Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test...
- Merged https://github.com/ceph/ceph/pull/33122
- 04:21 PM Backport #44020 (Resolved): pybind/mgr/volumes: restore from snapshot
- Merged https://github.com/ceph/ceph/pull/33122/
- 03:29 PM Bug #43567: qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_without_target_directory
- Here's the root cause behind the error - https://github.com/ceph/ceph/pull/32612#discussion_r366312713
- 02:13 PM Bug #43905 (Closed): qa: test_rebuild_inotable infinite loop
- It's a bug in the test branch.
- 02:12 PM Bug #43908 (Resolved): mds: FAILED ceph_assert(!p.is_remote_wrlock())
- 01:42 PM Bug #43598 (In Progress): mds: PurgeQueue does not handle objecter errors
- 12:37 PM Backport #43137: nautilus: pybind/mgr/volumes: idle connection drop is not working
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33116
m...
- 10:18 AM Backport #43137 (Resolved): nautilus: pybind/mgr/volumes: idle connection drop is not working
- https://github.com/ceph/ceph/pull/33116 merged
- 12:11 PM Bug #44097: nautilus: "cluster [WRN] Health check failed: 1 clients failing to respond to capabil...
- ...
- 12:07 PM Bug #44097 (Can't reproduce): nautilus: "cluster [WRN] Health check failed: 1 clients failing to ...
- 10:19 AM Bug #43113 (Resolved): pybind/mgr/volumes: idle connection drop is not working
- 07:58 AM Backport #42441: nautilus: mds: create a configurable snapshot limit
- Yet to backport.
02/11/2020
- 09:56 PM Bug #40784 (Resolved): mds: metadata changes may be lost when MDS is restarted
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:56 PM Bug #41329 (Resolved): mds: reject sessionless messages
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:55 PM Bug #42088 (Resolved): 'ceph -s' does not show standbys if there are no filesystems
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:52 PM Bug #43484 (Resolved): mds: note features client has when rejecting client due to feature incompat
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:52 PM Bug #43514 (Resolved): qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:49 PM Backport #43568 (Resolved): nautilus: qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32912
m...
- 04:42 PM Backport #43568: nautilus: qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32912
merged
- 09:49 PM Backport #43509 (Resolved): nautilus: 'ceph -s' does not show standbys if there are no filesystems
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32912
m...
- 04:41 PM Backport #43509: nautilus: 'ceph -s' does not show standbys if there are no filesystems
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32912
merged
- 09:48 PM Backport #43628 (Resolved): nautilus: client: disallow changing fuse_default_permissions option a...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32915
m...
- 04:03 PM Backport #43628: nautilus: client: disallow changing fuse_default_permissions option at runtime
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32915
merged
- 09:48 PM Backport #43624 (Resolved): nautilus: mds: note features client has when rejecting client due to ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32914
m...
- 04:02 PM Backport #43624: nautilus: mds: note features client has when rejecting client due to feature inc...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32914
merged
- 09:47 PM Backport #43573 (Resolved): nautilus: cephfs-journal-tool: will crash without any extra argument
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32913
m...
- 04:01 PM Backport #43573: nautilus: cephfs-journal-tool: will crash without any extra argument
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32913
merged
- 09:47 PM Backport #43343 (Resolved): nautilus: mds: client does not respond to cap revoke After session s...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32909
m...
- 04:00 PM Backport #43343: nautilus: mds: client does not respond to cap revoke After session stale->resum...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32909
merged
- 09:46 PM Backport #43558 (Resolved): nautilus: mds: reject forward scrubs when cluster has multiple active...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32602
m...
- 03:59 PM Backport #43558: nautilus: mds: reject forward scrubs when cluster has multiple active MDS (more ...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/32602
merged
- 09:46 PM Backport #43506 (Resolved): nautilus: MDSMonitor: warn if a new file system is being created with...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32600
m...
- 03:59 PM Backport #43506: nautilus: MDSMonitor: warn if a new file system is being created with an EC defa...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32600
merged
- 09:45 PM Backport #43345 (Resolved): nautilus: mds: metadata changes may be lost when MDS is restarted
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30843
m...
- 03:58 PM Backport #43345: nautilus: mds: metadata changes may be lost when MDS is restarted
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30843
merged
- 09:45 PM Backport #41853 (Resolved): nautilus: mds: reject sessionless messages
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30843
m...
- 03:58 PM Backport #41853: nautilus: mds: reject sessionless messages
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30843
merged
- 06:46 AM Bug #44071 (In Progress): kclient: reconfigure superblock parameters does not work
- 06:46 AM Bug #44071 (Fix Under Review): kclient: reconfigure superblock parameters does not work
- The '-o remount,ceph_optX' does not work.
- 03:27 AM Bug #43392 (Fix Under Review): MDSMonitor: support automatic failover to standbys with stronger a...
- 02:10 AM Bug #43964 (New): qa: Test failure: test_acls
- http://pulpito.front.sepia.ceph.com/jcollin-2020-02-05_00:06:17-fs-inter-mds-testing5-distro-basic-smithi/
02/10/2020
- 07:55 PM Backport #41854 (Rejected): mimic: mds: reject sessionless messages
- Not necessary since #43344 will not be backported.
- 07:55 PM Backport #43344 (Rejected): mimic: mds: metadata changes may be lost when MDS is restarted
- Too complicated for Mimic.
- 04:44 PM Bug #43964 (Need More Info): qa: Test failure: test_acls
- clicked wrong status
- 02:52 PM Bug #43964 (Fix Under Review): qa: Test failure: test_acls
- I believe this is from an older run where the test suites were misconfigured to use CentOS 7. Can you check? There's ...
- 02:11 PM Feature #44044: qa: add network namespaces to kernel/ceph-fuse mounts for partition testing
- Will add one mount option "suspend=<on|off>" to suspend the specified mount point.
Currently the remount is not wo...
- 02:08 PM Feature #44044 (In Progress): qa: add network namespaces to kernel/ceph-fuse mounts for partition...
- 06:01 AM Bug #44030: mds: "mds daemon damaged" after restarting MDS - Filesystem DOWN
- Today tried:
stop all mds again.
- Forced mds up with ceph mds repaired
- starting mds leads to another crash, wit...
- 01:54 AM Bug #44030: mds: "mds daemon damaged" after restarting MDS - Filesystem DOWN
- dvanders is helping me with further investigation since our filesystem has now been down for more than 48 hours. It s...
02/08/2020
- 06:20 PM Bug #44023: MDS continuously crashing on v14.2.7
- Managed to mess around and recover by adding wipe_sessions to ceph.conf, sorry for the false alarm. This can be closed.
02/07/2020
- 11:45 PM Feature #44044 (Resolved): qa: add network namespaces to kernel/ceph-fuse mounts for partition te...
- In teuthology, we want to shut down the kernel mount without any kind of cleanup, like sending SIGKILL to ceph-fuse. We...
- 09:27 PM Bug #43968 (Resolved): qa: multimds suite using centos7
- 05:33 PM Bug #44023: MDS continuously crashing on v14.2.7
- Rolling back to 14.2.6 did not fix the issue.
- 04:10 PM Bug #44023: MDS continuously crashing on v14.2.7
- It looks like the MDSes are not being assigned a rank when they come up, ceph fs get cephfs shows:
Filesystem 'cephf...
- 12:09 AM Bug #44023: MDS continuously crashing on v14.2.7
- I have tried resetting the MDS map to no avail. Also have tried failing the filesystem and then setting it joinable w...
- 01:51 PM Feature #36707 (Resolved): client: support getfattr ceph.dir.pin extended attribute
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:50 PM Bug #38324 (Resolved): mds: decoded LogEvent may leak during shutdown
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:19 AM Backport #43338 (In Progress): nautilus: qa/tasks: add remaining tests for fs volume
- 11:18 AM Backport #43629 (In Progress): nautilus: mgr/volumes: provision subvolumes with config metadata s...
- 11:18 AM Backport #43724 (In Progress): nautilus: mgr/volumes: subvolumes with snapshots can be deleted
- 11:17 AM Backport #44020 (In Progress): pybind/mgr/volumes: restore from snapshot
- 08:50 AM Bug #44030: mds: "mds daemon damaged" after restarting MDS - Filesystem DOWN
- ceph.log
ceph-post-file: 2c9c6886-840f-4270-b4c1-323343c9efa4
ceph-mon.log
ceph-post-file: 2c9c6886-840f-4270-b4...
- 08:42 AM Bug #44030: mds: "mds daemon damaged" after restarting MDS - Filesystem DOWN
- SECOND CRASH LOG:
{
"crash_id": "2020-02-07_03:38:59.667251Z_18b5e608-2954-4c6f-b205-d6d6d52d65c3",
"times... - 08:32 AM Bug #44030 (New): mds: "mds daemon damaged" after restarting MDS - Filesystem DOWN
- CRASH LOG:
{
"crash_id": "2020-02-07_06:44:44.534106Z_bfebeb65-8d38-49ed-b811-731f0152325f",
"timestamp"... - 05:55 AM Bug #43965 (Fix Under Review): mgr/volumes: synchronize ownership (for symlinks) and inode timest...
- 03:57 AM Bug #43905: qa: test_rebuild_inotable infinite loop
- ...
- 03:27 AM Bug #43905: qa: test_rebuild_inotable infinite loop
- Zheng Yan wrote:
> (2 << 40) is correct because inode numbers for rank 1 start at (2 << 40)
That's pretty weird. It...
- 03:28 AM Cleanup #43425 (Resolved): mds: reorg snap header
02/06/2020
- 10:48 PM Backport #43137 (In Progress): nautilus: pybind/mgr/volumes: idle connection drop is not working
- 10:26 PM Bug #44023 (New): MDS continuously crashing on v14.2.7
- I have max mds set to 2, though I have tried fiddling with the values since hitting the crash. Ceph status indicates ...
- 10:07 PM Backport #43790 (In Progress): nautilus: RuntimeError: Files in flight high water is unexpectedly...
- 08:42 PM Backport #38350 (Rejected): luminous: mds: decoded LogEvent may leak during shutdown
- Leaks during MDS shutdown; not essential.
- 08:42 PM Backport #38349 (Rejected): mimic: mds: decoded LogEvent may leak during shutdown
- Leaks only during MDS shutdown; not essential.
- 08:41 PM Backport #37637 (Rejected): luminous: client: support getfattr ceph.dir.pin extended attribute
- Cancelling this backport; it's not essential.
- 08:40 PM Backport #37636 (Rejected): mimic: client: support getfattr ceph.dir.pin extended attribute
- Cancelling this backport. It's not essential.
- 07:01 PM Bug #43061: ceph fs add_data_pool doesn't set pool metadata properly
- Status on this Ramana?
- 06:59 PM Bug #43750 (Triaged): mds: add perf counters for openfiletable
- 04:53 PM Bug #44021 (Resolved): client: bad error handling in Client::_lseek
- The SEEK_HOLE and SEEK_DATA error handling looks broken in the userland client:...
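For context, the POSIX contract the client should mirror: SEEK_DATA/SEEK_HOLE either return a valid offset or fail with ENXIO once the offset reaches EOF. A quick demonstration against a local file, using Python's binding to the same interface:
<pre>
import errno
import os
import tempfile

with tempfile.NamedTemporaryFile() as f:
    f.write(b"x" * 4096)
    f.flush()
    fd = f.fileno()
    # Within the file, SEEK_DATA yields the next data offset.
    assert os.lseek(fd, 0, os.SEEK_DATA) == 0
    # At or past EOF it must fail with ENXIO -- the error-path case
    # that looks mishandled in Client::_lseek.
    try:
        os.lseek(fd, 1 << 20, os.SEEK_DATA)
    except OSError as e:
        assert e.errno == errno.ENXIO
</pre>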
- 04:47 PM Backport #44020 (Resolved): pybind/mgr/volumes: restore from snapshot
- https://github.com/ceph/ceph/pull/33122/
- 04:46 PM Feature #24880 (Pending Backport): pybind/mgr/volumes: restore from snapshot
- 03:06 PM Bug #43960: MDS: incorrectly issues Fc for new opens when there is an existing writer
- Greg Farnum wrote:
> Okay, I dove into this a bit today. No final conclusions but reminding myself about how some of...
- 12:11 AM Bug #43960: MDS: incorrectly issues Fc for new opens when there is an existing writer
- Okay, I dove into this a bit today. No final conclusions but reminding myself about how some of this works and severa...
- 09:18 AM Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclu...
- most unexpected cap revokes were because of a fuse API limitation. lookup and getattr always want CEPH_STAT_CAP_INODE_...
- 09:16 AM Feature #7333: client: evaluate multiple O_APPEND writers
- This is one fix about this O_APPEND & O_DIRECT:
In O_APPEND & O_DIRECT mode, the data from different writers will
...
02/05/2020
- 05:09 AM Bug #43968 (Fix Under Review): qa: multimds suite using centos7
- 05:04 AM Bug #43968 (Resolved): qa: multimds suite using centos7
- http://pulpito.ceph.com/teuthology-2020-02-01_04:15:02-multimds-master-testing-basic-smithi/4723526/
- 04:40 AM Bug #43796 (Fix Under Review): qa: test_version_splitting
- 03:58 AM Bug #43965 (Resolved): mgr/volumes: synchronize ownership (for symlinks) and inode timestamps for...
- `lchown()` and `[l]utimes()` python binding calls need to be implemented. The async cloner module in mgr/volumes would s...
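A rough sketch of how the cloner might use such bindings once they exist; the python-cephfs signatures and the path below are assumptions, not the final API:
<pre>
import cephfs

fs = cephfs.LibCephFS(conffile="")  # default ceph.conf search path
fs.mount()
path = b"/volumes/group/subvol/somelink"  # hypothetical symlink in a clone
st = fs.lstat(path)
# Mirror ownership and timestamps from the source without following the
# symlink -- the lchown()/lutimes() bindings this ticket asks for.
fs.lchown(path, st.st_uid, st.st_gid)
fs.lutimes(path, (st.st_atime, st.st_mtime))
fs.unmount()
fs.shutdown()
</pre>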
- 12:53 AM Bug #43964 (Resolved): qa: Test failure: test_acls
- This shows up in all runs:...
02/04/2020
- 04:04 PM Bug #43960: MDS: incorrectly issues Fc for new opens when there is an existing writer
- From the original ticket:
Jeff Layton wrote:
> Greg pointed out some things in a face-to-face discussion the other ...
- 03:33 PM Bug #43960 (Triaged): MDS: incorrectly issues Fc for new opens when there is an existing writer
- Cloned from #43748, to cover the MDS-side issue. (Note that I have changed much of the text below to correct a few de...
- 02:57 PM Bug #43909: mds: SIGSEGV in Migrator::export_sessions_flushed
- ...
- 02:51 PM Bug #43901: qa: fsx: fatal error: libaio.h: No such file or directory
- What distro is this? For RHEL/CentOS you probably just need to ensure that libaio-devel is installed. For Ubuntu, lib...
- 02:11 PM Bug #43943: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 seconds"
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > client: 172.21.15.131:0/4191323679 (cephfs instance), registers ...
- 01:00 PM Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclu...
- Greg pointed out some things in a face-to-face discussion the other day that led me to question whether this ought t...
02/03/2020
- 08:08 PM Bug #43943: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 seconds"
- Venky Shankar wrote:
> client: 172.21.15.131:0/4191323679 (cephfs instance), registers its addrs with ceph-mgr:
>
...
- 04:36 PM Bug #43943: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 seconds"
- client: 172.21.15.131:0/4191323679 (cephfs instance), registers its addrs with ceph-mgr:...
02/02/2020
- 02:57 PM Feature #42530 (Resolved): cephfs-shell: add setxattr and getxattr
- 02:52 PM Bug #40861 (Resolved): cephfs-shell: -p doesn't work for rmdir
- 02:52 PM Bug #40863 (Resolved): cephfs-shell: rmdir with -p attempts to delete non-dir files as well
02/01/2020
- 12:56 PM Bug #40867 (Resolved): mgr: failover during qa testing causes unresponsive client warnings
- Moving this back to resolved. Opened #43943
- 12:56 PM Bug #43943 (Resolved): qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 ...
- /a/sage-2020-01-28_03:52:05-rados-wip-sage2-testing-2020-01-27-1839-distro-basic-smithi/4713589
description: rados/m...
01/31/2020
- 01:13 PM Bug #43905: qa: test_rebuild_inotable infinite loop
- It's a bug revealed by 'mds: cleanup '* -> excl' check in Locker::file_eval()'
- 09:47 AM Bug #43905: qa: test_rebuild_inotable infinite loop
- (2 << 40) is correct because inode numbers for rank 1 start at (2 << 40)
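Spelled out, the convention this comment describes gives each rank a fixed inode-number window; illustrative arithmetic only, not the MDS code:
<pre>
def rank_ino_base(rank: int) -> int:
    # Rank 1's inode numbers start at (2 << 40), so rank N's window
    # begins at (N + 1) << 40, per the comment above.
    return (rank + 1) << 40

assert rank_ino_base(1) == (2 << 40)
print(hex(rank_ino_base(0)), hex(rank_ino_base(1)))
</pre>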
- 08:26 AM Bug #43908 (Fix Under Review): mds: FAILED ceph_assert(!p.is_remote_wrlock())
- Nothing to do with the async dirops PR
- 03:55 AM Bug #40867 (In Progress): mgr: failover during qa testing causes unresponsive client warnings
- Another one:
/a/sage-2020-01-30_22:27:29-rados-wip-sage-testing-2020-01-30-1230-distro-basic-smithi/4719492
01/30/2020
- 03:21 PM Bug #43208 (Resolved): mds: unsafe req may result in data remaining in the datapool
- 03:17 PM Bug #43909 (Resolved): mds: SIGSEGV in Migrator::export_sessions_flushed
- ...
- 03:15 PM Bug #43908 (Resolved): mds: FAILED ceph_assert(!p.is_remote_wrlock())
- ...
- 03:06 PM Cleanup #43408 (Resolved): mds: reorg StrayManager header
- 03:04 PM Bug #43905 (Closed): qa: test_rebuild_inotable infinite loop
- ...
- 01:55 PM Bug #43763 (Resolved): cephfs-shell: ls long listing (ls -l) fails when executed outside root (/)
- 01:48 PM Bug #43902 (Triaged): qa: mon_thrash: timeout "ceph quorum_status"
- ...
- 01:45 PM Bug #43901 (Resolved): qa: fsx: fatal error: libaio.h: No such file or directory
- ...
- 10:18 AM Bug #43761 (Triaged): mon/MDSMonitor: "ceph fs authorize cephfs client.test /test rw" does not gi...
- Ramana, I'm assigning this to you. The bug is arguably in ceph-ansible because it's enabling the application but not ...
- 09:53 AM Bug #43596: mds: crash when enable msgr v2 due to lost contact
- We just updated our 2nd cluster to nautilus and saw the exact same mds respawn at the moment we enabled msgr2:
<pr...
- 08:58 AM Bug #40867: mgr: failover during qa testing causes unresponsive client warnings
- Sage Weil wrote:
> another instance of this on master,
> [...]
> /a/sage-2020-01-28_03:52:05-rados-wip-sage2-testi...
01/29/2020
- 02:52 PM Bug #41759 (Can't reproduce): mgr/volumes: test_async_subvolume_rm fails since purge threads did ...
- Patrick Donnelly wrote:
> Venky, is this still a problem?
haven't seen it lately. moving to "can't reproduce"
...
- 01:57 PM Bug #43761: mon/MDSMonitor: "ceph fs authorize cephfs client.test /test rw" does not give the nec...
- Hello,
From the mailing list : https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/23FDDSYBCDVMYGCUTAL...
01/28/2020
- 08:05 PM Bug #40867: mgr: failover during qa testing causes unresponsive client warnings
- another instance of this on master,...
- 02:32 PM Bug #43762: pybind/mgr/volumes: create fails with TypeError
- Adding more context to this
This happened after creating a second volume. I had to enable creation of multi filesy...
- 11:53 AM Backport #43568: nautilus: qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- Nathan Cutler wrote:
> @Ramana - please feel free to take any backport issue that is in state "New" or "Need More In...
01/27/2020
- 10:23 PM Bug #43827 (Duplicate): decode fail in SessionMapStore::decode_legacy on upgrade
- 04:20 PM Bug #43827: decode fail in SessionMapStore::decode_legacy on upgrade
- I think it's a RADOS bug. The omap header/keys got lost after the upgrade.
- 06:01 AM Bug #43827: decode fail in SessionMapStore::decode_legacy on upgrade
- mimic does not use the legacy session format. Looks like that mds got a zero-length omap header, so it tried loading sess...
- 07:44 PM Backport #43790 (New): nautilus: RuntimeError: Files in flight high water is unexpectedly low (0 ...
- 05:21 PM Backport #43790 (Need More Info): nautilus: RuntimeError: Files in flight high water is unexpecte...
- seems to be complicated by the lack of https://github.com/ceph/ceph/pull/31596 in nautilus?
- 05:16 PM Backport #43784 (In Progress): nautilus: fs: OpenFileTable object shards have too many k/v pairs
- 04:54 PM Backport #43780 (In Progress): nautilus: qa: Test failure: test_drop_cache_command_dead (tasks.ce...
- 04:53 PM Backport #43777 (In Progress): nautilus: qa: test_full racy check: AssertionError: 29 not greater...
- 04:51 PM Backport #43733 (In Progress): nautilus: qa: ffsb suite causes SLOW_OPS warnings
- 04:48 PM Backport #43729 (In Progress): nautilus: client: chdir does not raise error if a file is passed
- 04:47 PM Backport #43628 (In Progress): nautilus: client: disallow changing fuse_default_permissions optio...
- 04:46 PM Backport #43724 (Need More Info): nautilus: mgr/volumes: subvolumes with snapshots can be deleted
- leaving mgr/volumes backports to the developers
- 04:45 PM Backport #43624 (In Progress): nautilus: mds: note features client has when rejecting client due ...
- 04:44 PM Backport #43573 (In Progress): nautilus: cephfs-journal-tool: will crash without any extra argument
- 04:37 PM Backport #43568 (In Progress): nautilus: qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- @Ramana - please feel free to take any backport issue that is in state "New" or "Need More Info".
If you can possi...
- 04:37 PM Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclu...
- Changing this to just a client bug. The kernel driver issue will be tracked via: https://bugzilla.redhat.com/show_bug...
- 04:33 PM Backport #43509 (In Progress): nautilus: 'ceph -s' does not show standbys if there are no filesys...
- 04:32 PM Backport #43502 (In Progress): mimic: mount.ceph: give a hint message when no mds is up or cluste...
- 04:30 PM Backport #43503 (In Progress): nautilus: mount.ceph: give a hint message when no mds is up or clu...
- 04:21 PM Backport #43343 (In Progress): nautilus: mds: client does not response to cap revoke After sessio...
- 04:08 PM Backport #43629 (Need More Info): nautilus: mgr/volumes: provision subvolumes with config metadat...
- 04:07 PM Backport #43338 (Need More Info): nautilus: qa/tasks: add remaining tests for fs volume
- 04:06 PM Backport #43137 (Need More Info): nautilus: pybind/mgr/volumes: idle connection drop is not working
- 11:58 AM Feature #40811 (Resolved): mds: add command that modify session metadata
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
01/26/2020
- 05:22 PM Bug #43827 (Duplicate): decode fail in SessionMapStore::decode_legacy on upgrade
- /a/sage-2020-01-26_15:00:33-upgrade:cephfs-wip-sage2-testing-2020-01-24-1408-distro-basic-smithi/4709313 (and the who...
- 11:07 AM Backport #43143 (Resolved): nautilus: mds: tolerate no snaprealm encoded in on-disk root inode
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32079
m... - 11:06 AM Backport #41106 (Resolved): nautilus: mds: add command that modify session metadata
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32245
m...
01/25/2020
- 11:40 AM Cleanup #40694 (Resolved): mds: move MDSDaemon conf change handling to MDSRank finisher
- 11:38 AM Cleanup #40694 (Pending Backport): mds: move MDSDaemon conf change handling to MDSRank finisher
- 11:31 AM Bug #43336 (Resolved): qa: test_unmount_for_evicted_client hangs
- 11:26 AM Bug #43336 (Pending Backport): qa: test_unmount_for_evicted_client hangs
- 11:25 AM Backport #43345 (In Progress): nautilus: mds: metadata changes may be lost when MDS is restarted
01/24/2020
- 11:54 PM Bug #43090 (Closed): mds:check if oldin is null before accessing its member
- 11:52 PM Bug #42827 (Won't Fix): mds: when mounting the extra slash(es) at the end of server path will be ...
- 11:51 PM Bug #41242 (Closed): mds: re-introduce mds_log_max_expiring to control expiring concurrency manually
- 11:33 PM Bug #43817 (Resolved): mds: update cephfs octopus feature bit
- 2020-02-06 After discussion at the CDM, we will stop naming releases for the CephFS min_compat_client bits. The oper...
- 11:19 PM Feature #43423 (In Progress): mds: collect and show the dentry lease metric
- 11:18 PM Feature #39098 (Resolved): mds: lock caching for asynchronous unlink
- 11:17 PM Bug #42770 (Closed): Regulary trim inode in memory
- 11:17 PM Bug #41651 (Closed): dbench: command not found
- 11:13 PM Cleanup #37931 (New): MDSMonitor: rename `mds repaired` to `fs repaired`
- 11:11 PM Feature #12274 (New): mds: start forward scrubs from all subtree roots, skip non-auth metadata
- 11:06 PM Bug #26863 (Can't reproduce): qa: test_full_different_file "dd: error writing 'large_file': No sp...
- 11:05 PM Bug #41759: mgr/volumes: test_async_subvolume_rm fails since purge threads did not cleanup trash ...
- Venky, is this still a problem?
- 10:05 PM Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclu...
- I see a couple of other potential fixes:
1) we could not ask for those caps on an OPEN/CREATE and just rely on the...
- 10:02 PM Bug #43800 (Duplicate): FAILED ceph_assert(omap_num_objs <= MAX_OBJECTS) - primary and standby MD...
- super xor wrote:
> seems to exist already, my bad: https://tracker.ceph.com/issues/43348
no worries! Thanks for t...
- 08:46 AM Bug #43800: FAILED ceph_assert(omap_num_objs <= MAX_OBJECTS) - primary and standby MDS failure
- seems to exist already, my bad: https://tracker.ceph.com/issues/43348
- 06:33 AM Bug #43800 (Duplicate): FAILED ceph_assert(omap_num_objs <= MAX_OBJECTS) - primary and standby MD...
- We had a complete cephfs failure tonight caused by crashes of all active and standby MDS....
- 09:10 PM Fix #41782: mds: allow stray directories to fragment and switch from 10 stray directories to 1
- We'll look at merging this at the beginning of the Pacific release cycle.
- 04:39 PM Backport #43143: nautilus: mds: tolerate no snaprealm encoded in on-disk root inode
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32079
merged
- 04:07 PM Backport #41106: nautilus: mds: add command that modify session metadata
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/32245
merged
- 12:51 AM Bug #43762 (Need More Info): pybind/mgr/volumes: create fails with TypeError
- This needs more information, as Victoria is checking if this happens because of their configuration/python version.
01/23/2020
- 11:53 PM Bug #43459 (Resolved): qa: FATAL ERROR: libtool does not seem to be installed.
- 11:29 PM Cleanup #43387 (Resolved): mds: reorg SnapServer header
- 11:28 PM Bug #43660 (Resolved): mds: null pointer dereference in Server::handle_client_link
- 11:12 PM Bug #43796 (Resolved): qa: test_version_splitting
- ...
- 08:43 PM Bug #43762 (Triaged): pybind/mgr/volumes: create fails with TypeError
- 11:15 AM Bug #43762 (Closed): pybind/mgr/volumes: create fails with TypeError
- ...
- 05:02 PM Backport #43791 (Rejected): mimic: RuntimeError: Files in flight high water is unexpectedly low (...
- 05:02 PM Backport #43790 (Resolved): nautilus: RuntimeError: Files in flight high water is unexpectedly lo...
- https://github.com/ceph/ceph/pull/33115
- 04:57 PM Backport #43785 (Rejected): mimic: fs: OpenFileTable object shards have too many k/v pairs
- 04:57 PM Backport #43784 (Resolved): nautilus: fs: OpenFileTable object shards have too many k/v pairs
- https://github.com/ceph/ceph/pull/32921
- 04:56 PM Backport #43780 (Resolved): nautilus: qa: Test failure: test_drop_cache_command_dead (tasks.cephf...
- https://github.com/ceph/ceph/pull/32919
- 04:55 PM Backport #43778 (Rejected): mimic: qa: test_full racy check: AssertionError: 29 not greater than ...
- 04:55 PM Backport #43777 (Resolved): nautilus: qa: test_full racy check: AssertionError: 29 not greater th...
- https://github.com/ceph/ceph/pull/32918
- 04:46 PM Backport #43770 (In Progress): nautilus: mount.ceph fails with ERANGE if name= option is longer t...
- 04:40 PM Backport #43770 (Resolved): nautilus: mount.ceph fails with ERANGE if name= option is longer than...
- https://github.com/ceph/ceph/pull/32807
- 04:38 PM Feature #3244: qa: integrate Ganesha into teuthology testing to regularly exercise Ganesha CephFS...
- Patrick Donnelly wrote:
> Nathan, are you still planning to work on this?
Yes. Sorry for the latency!
- 01:21 AM Feature #3244: qa: integrate Ganesha into teuthology testing to regularly exercise Ganesha CephFS...
- Nathan, are you still planning to work on this?
- 01:06 PM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- I am seeing similar issues on our cluster. I had the Ganesha node running on the same node as the MONs just for conve...
- 01:05 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- What I have is mostly working now, but I'm occasionally seeing an async create come back with -EEXIST when running xf...
- 12:07 PM Bug #43763 (Resolved): cephfs-shell: ls long listing (ls -l) fails when executed outside root (/)
- cephfs-shell: ls long listing (ls -l) fails when executed outside root (/)
stat fails with No such file or directory
- 10:22 AM Bug #43761 (Resolved): mon/MDSMonitor: "ceph fs authorize cephfs client.test /test rw" does not g...
- Hello,
I noticed a regression in the "ceph fs authorize" command: it is no longer enough to give the right access to be ...
- 01:30 AM Bug #43644 (Fix Under Review): mds: Empty directory check is done on the importer side (at import...
- 01:27 AM Bug #36078 (Can't reproduce): mds: 9 active MDS cluster stuck during fsstress
- 01:24 AM Bug #43600 (Triaged): qa: workunits/suites/iozone.sh: line 5: iozone: command not found
- 01:23 AM Feature #17852 (Resolved): mds: when starting forward scrub, return handle or stamp/version which...
- 01:17 AM Bug #43517 (Triaged): qa: random subvolumegroup collision
- 01:16 AM Feature #41302 (Fix Under Review): mds: add ephemeral random and distributed export pins
- 01:15 AM Bug #38203 (Can't reproduce): ceph-mds segfault during migrator nicely exporting
01/22/2020
- 10:41 PM Bug #43513 (Resolved): qa: filelock_interrupt.py hang
- 04:23 PM Bug #42986 (Pending Backport): qa: Test failure: test_drop_cache_command_dead (tasks.cephfs.test_...
- 04:01 AM Cleanup #42867 (Resolved): mds: reorg Server header
- 04:01 AM Bug #42515 (Pending Backport): fs: OpenFileTable object shards have too many k/v pairs
- 03:58 AM Feature #39129 (Resolved): create mechanism to delegate ranges of inode numbers to client
- 03:55 AM Cleanup #43369 (Resolved): mds: reorg SnapClient header
- 03:55 AM Bug #43649 (Pending Backport): mount.ceph fails with ERANGE if name= option is longer than 37 cha...
- 03:40 AM Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclu...
- Zheng Yan wrote:
> > Now, send SIGKILL to the standby ceph-fuse client. This will cause I/O to halt for the first cl... - 03:14 AM Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclu...
> Now, send SIGKILL to the standby ceph-fuse client. This will cause I/O to halt for the first client until the MDS...