Activity
From 06/06/2020 to 07/05/2020
07/05/2020
- 11:24 AM Bug #46360 (Resolved): mgr/volumes: fs subvolume clones stuck in progress when libcephfs hits cer...
- During `fs subvolume clone`, libcephfs hit the "Disk quota exceeded" error, which caused the subvolume clone to be stuc...
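For reference, the state of an in-progress or stuck clone can be inspected from the mgr/volumes CLI; a minimal sketch (the volume and clone names below are placeholders):
    # Illustrative only: poll a subvolume clone's state via `ceph fs clone status`.
    import json
    import subprocess

    def clone_state(volume, clone):
        out = subprocess.run(
            ["ceph", "fs", "clone", "status", volume, clone, "--format=json"],
            check=True, capture_output=True, text=True,
        ).stdout
        # e.g. "in-progress", "complete", ...
        return json.loads(out)["status"]["state"]

    print(clone_state("cephfs", "clone1"))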
07/04/2020
- 08:33 PM Bug #43817 (Resolved): mds: update cephfs octopus feature bit
- 04:12 PM Bug #46357 (New): qa: Error downloading packages
- ...
- 12:44 PM Bug #42688: Standard CephFS caps do not allow certain dot files to be written
- The problem persists in Octopus!
Just did a fresh MAAS/JUJU cephfs install.
I believe at least the documentation ...
- 03:28 AM Bug #46355 (Resolved): client: directory inode can not call release_callback
- I use Ganesha + CEPH to test release_callback feature, I have merged the relevant modification code:
https://github....
07/03/2020
- 03:37 PM Backport #46349 (Rejected): nautilus: qa/tasks: make sh() in vstart_runner.py identical with teut...
- 03:37 PM Backport #46348 (Resolved): octopus: qa/tasks: make sh() in vstart_runner.py identical with teuth...
- https://github.com/ceph/ceph/pull/36044
- 12:42 PM Backport #46201: octopus: mds: add ephemeral random and distributed export pins
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35759
m...
- 05:38 AM Bug #46282: qa: multiclient connection interruptions by stopping one client
- Patrick Donnelly wrote:
> [...]
>
>
> From: /ceph/teuthology-archive/pdonnell-2020-06-29_23:19:06-fs-wip-pdonne...
- 01:01 AM Bug #46282 (In Progress): qa: multiclient connection interruptions by stopping one client
- 01:51 AM Bug #46273: mds: deleting a large number of files in a directory causes the file system to read only
- The original dirfrag is closed through close_dirfrag, and then the merged basedirfrag commits it.
More detailed log:...
07/02/2020
- 04:48 PM Bug #41541 (Resolved): mgr/volumes: ephemerally pin volumes
- 04:47 PM Backport #46201 (Resolved): octopus: mds: add ephemeral random and distributed export pins
- 04:47 PM Feature #41302 (Resolved): mds: add ephemeral random and distributed export pins
- 04:46 PM Backport #46315 (Resolved): octopus: mgr/volumes: ephemerally pin volumes
- 11:24 AM Bug #46158 (In Progress): pybind/mgr/volumes: Persist snapshot size on snapshot creation
- 06:27 AM Bug #46069 (Pending Backport): qa/tasks: make sh() in vstart_runner.py identical with teuthology....
07/01/2020
- 08:15 PM Bug #43039 (New): client: shutdown race fails with status 141
- /ceph/teuthology-archive/pdonnell-2020-07-01_06:37:23-fs-wip-pdonnell-testing-20200701.033411-distro-basic-smithi/519...
- 03:59 PM Backport #46315 (Resolved): octopus: mgr/volumes: ephemerally pin volumes
- https://github.com/ceph/ceph/pull/35759
- 03:56 PM Backport #46311 (Resolved): octopus: qa/tasks/cephfs/test_snapshots.py: Command failed with statu...
- https://github.com/ceph/ceph/pull/36043
- 03:56 PM Backport #46310 (Resolved): nautilus: qa/tasks/cephfs/test_snapshots.py: Command failed with stat...
- https://github.com/ceph/ceph/pull/36172
- 01:57 PM Bug #46302 (Resolved): mds: optimize ephemeral rand pin
- there can be two optimizations:
1. get_ephemeral_rand() is called for each loaded inode of dirfrag fetch. all calls ge...
- 03:04 AM Backport #46289 (In Progress): octopus: mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
- https://github.com/ceph/ceph/pull/35499
- 02:51 AM Backport #46289 (Resolved): octopus: mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
- https://github.com/ceph/ceph/pull/35499
- 03:04 AM Backport #46291 (In Progress): octopus: mgr/volumes/nfs: Add interface for get and list exports
- https://github.com/ceph/ceph/pull/35499
- 02:51 AM Backport #46291 (Resolved): octopus: mgr/volumes/nfs: Add interface for get and list exports
- https://github.com/ceph/ceph/pull/35499
- 03:03 AM Backport #46292 (In Progress): octopus: mgr/nfs: Check cluster exists before creating exports and...
- https://github.com/ceph/ceph/pull/35499
- 02:51 AM Backport #46292 (Resolved): octopus: mgr/nfs: Check cluster exists before creating exports and ma...
- https://github.com/ceph/ceph/pull/35499
- 02:51 AM Backport #46290 (Resolved): octopus: mgr/nfs: Add interface for listing cluster
- https://github.com/ceph/ceph/pull/35499
- 02:50 AM Bug #45744 (Pending Backport): mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
- 02:50 AM Bug #45740 (Pending Backport): mgr/nfs: Check cluster exists before creating exports and make exp...
- 02:50 AM Feature #45741 (Pending Backport): mgr/volumes/nfs: Add interface for get and list exports
- 02:50 AM Feature #45742 (Pending Backport): mgr/nfs: Add interface for listing cluster
06/30/2020
- 06:14 PM Feature #44279: client: provide asok commands to getattr an inode with desired caps
- Jeff Layton wrote:
> Do we need an asok interface for this? If you're planning to write testcases that link in libce...
- 01:40 PM Feature #44279: client: provide asok commands to getattr an inode with desired caps
- Do we need an asok interface for this? If you're planning to write testcases that link in libcephfs directly, then yo...
- 05:37 PM Bug #45530 (Pending Backport): qa/tasks/cephfs/test_snapshots.py: Command failed with status 1: [...
- 04:51 PM Bug #46282 (Resolved): qa: multiclient connection interruptions by stopping one client
- ...
- 03:50 PM Bug #46278 (Resolved): mds: Subvolume snapshot directory does not save attribute "ceph.quota.max_...
- NOTE: This is applicable even without using subvolume snapshots
A snapshot of a directory with quota set on it, vi...
- 03:35 PM Bug #46277 (Fix Under Review): pybind/mgr/volumes: get_pool_names may indicate volume does not ex...
- 03:32 PM Bug #46277 (Resolved): pybind/mgr/volumes: get_pool_names may indicate volume does not exist if m...
- ...
- 03:15 PM Bug #46269 (Fix Under Review): ceph-fuse: ceph-fuse process is terminated by the logratote task a...
- 06:18 AM Bug #46269 (Resolved): ceph-fuse: ceph-fuse process is terminated by the logratote task and what ...
- *1. reproduce the scene as shown below:*
(1) step 1:
Open the terminal_1, and
Prepare the cmd: "killall -q -1 ce...
- 03:14 PM Bug #46273 (Fix Under Review): mds: deleting a large number of files in a directory causes the fi...
- 11:32 AM Bug #46273 (Resolved): mds: deleting a large number of files in a directory causes the file syste...
- Log as follow:...
- 07:31 AM Backport #46190 (In Progress): octopus: mds: cap revoking requests didn't success when the client...
- 07:30 AM Backport #46191 (In Progress): nautilus: mds: cap revoking requests didn't success when the clien...
06/29/2020
- 10:03 PM Bug #46056 (Resolved): assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- The main patch that fixes this in ganesha is here:
https://github.com/nfs-ganesha/nfs-ganesha/commit/e45743b47...
- 09:52 PM Bug #41541 (Pending Backport): mgr/volumes: ephemerally pin volumes
- 04:28 PM Bug #37725 (Can't reproduce): mds: stopping MDS with subtrees pinnned cannot finish stopping
- Xiubo Li wrote:
> Patrick Donnelly wrote:
> > I believe this is fixed already but needs double-checked. It may also...
- 10:01 AM Bug #37725: mds: stopping MDS with subtrees pinnned cannot finish stopping
- Patrick Donnelly wrote:
> I believe this is fixed already but needs double-checked. It may also be fixed by the ephe...
- 01:42 PM Bug #46158 (Triaged): pybind/mgr/volumes: Persist snapshot size on snapshot creation
- 01:41 PM Bug #46218 (Triaged): mds: Add inter MDS messages to the corpus and enforce versioning
- 07:44 AM Feature #46059 (Fix Under Review): vstart_runner.py: optionally rotate logs between tests
06/26/2020
- 03:35 PM Backport #46235 (Resolved): nautilus: pybind/mgr/volumes: volume deletion not always removes the ...
- https://github.com/ceph/ceph/pull/36167
- 03:35 PM Backport #46234 (Resolved): octopus: pybind/mgr/volumes: volume deletion not always removes the a...
- https://github.com/ceph/ceph/pull/36327
- 05:29 AM Bug #46218 (Triaged): mds: Add inter MDS messages to the corpus and enforce versioning
- Now that the current and the new inter-MDS messages are "guaranteed to be of type MMDSOp":https://tracker.ceph.com/is...
06/25/2020
- 09:37 PM Bug #41565 (Resolved): mds: detect MDS<->MDS messages that are not versioned
- 09:33 PM Bug #45910 (Pending Backport): pybind/mgr/volumes: volume deletion not always removes the associa...
- 09:33 PM Bug #46101 (Resolved): qa: set omit_sudo to False for cmds executed with sudo
- 08:31 PM Bug #46213 (Rejected): qa: pjd test reports odd EIO errors
- Probably caused by: https://github.com/ceph/ceph/pull/35725/files
- 06:00 PM Bug #46213 (Rejected): qa: pjd test reports odd EIO errors
- ...
- 12:41 PM Feature #45729 (In Progress): pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes...
- 12:40 PM Feature #45371 (Fix Under Review): mgr/volumes: `protect` and `clone` operation in a single trans...
- 09:41 AM Bug #23421 (Closed): ceph-fuse: stop ceph-fuse if no root permissions?
- 05:38 AM Bug #46163 (Fix Under Review): mgr/volumes: Clone operation uses source subvolume root directory ...
- 02:59 AM Backport #46201 (In Progress): octopus: mds: add ephemeral random and distributed export pins
- 01:26 AM Backport #46201 (Resolved): octopus: mds: add ephemeral random and distributed export pins
- https://github.com/ceph/ceph/pull/35759
- 01:26 AM Feature #41302 (Pending Backport): mds: add ephemeral random and distributed export pins
06/24/2020
- 11:03 PM Feature #45371: mgr/volumes: `protect` and `clone` operation in a single transaction
- Based on #note-3 reassigning it to myself.
- 08:45 PM Backport #46200 (Resolved): nautilus: qa: "[WRN] evicting unresponsive client smithi131:z (6314),...
- https://github.com/ceph/ceph/pull/36171
- 08:45 PM Backport #46199 (Resolved): octopus: qa: "[WRN] evicting unresponsive client smithi131:z (6314), ...
- https://github.com/ceph/ceph/pull/36042
- 08:43 PM Backport #46191 (Resolved): nautilus: mds: cap revoking requests didn't success when the client d...
- https://github.com/ceph/ceph/pull/35841
- 08:43 PM Backport #46190 (Resolved): octopus: mds: cap revoking requests didn't success when the client do...
- https://github.com/ceph/ceph/pull/35842
- 08:43 PM Backport #46189 (Resolved): nautilus: mds: EMetablob replay too long will cause mds restart
- https://github.com/ceph/ceph/pull/36170
- 08:43 PM Backport #46188 (Resolved): octopus: mds: EMetablob replay too long will cause mds restart
- https://github.com/ceph/ceph/pull/36040
- 08:42 PM Backport #46187 (Resolved): nautilus: client: fix snap directory atime
- https://github.com/ceph/ceph/pull/36169
- 08:42 PM Backport #46186 (Resolved): octopus: client: fix snap directory atime
- https://github.com/ceph/ceph/pull/36039
- 08:42 PM Backport #46185 (Resolved): octopus: cephadm: mds permissions for osd are unnecessarily permissive
- https://github.com/ceph/ceph/pull/35898
- 06:48 PM Bug #46100 (Resolved): vstart_runner.py: check for Raw instance before treating as iterable
- 06:35 PM Fix #46070 (Pending Backport): client: fix snap directory atime
- 06:34 PM Bug #45935 (Pending Backport): mds: cap revoking requests didn't success when the client doing re...
- 06:34 PM Bug #45815 (Resolved): vstart_runner.py: set stdout and stderr to None by default
- 06:33 PM Bug #43943 (Pending Backport): qa: "[WRN] evicting unresponsive client smithi131:z (6314), after ...
- 06:31 PM Bug #46057 (Resolved): qa/cephfs: run_as_user must args list instead of str
- 06:30 PM Bug #46042 (Pending Backport): mds: EMetablob replay too long will cause mds restart
- 06:30 PM Bug #46023 (Resolved): mds: MetricAggregator.cc: 178: FAILED ceph_assert(rm)
- 03:33 PM Bug #46084: client: supplying ceph_fsetxattr with no value unsets xattr
- Jeff Layton wrote:
> Looks like a legit bug. The Linux kernel does this in __vfs_setxattr:
>
> [...]
>
> ...so...
- 03:10 PM Bug #46056: assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- Yeah, the latest scheme just ensures that handles (which are somewhat analogous to inodes) are never shared between e...
- 02:59 PM Bug #46056: assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- Jeff Layton wrote:
> I started working on this, but there is a bit of a dilemma. My approach to fixing this was to s...
- 01:06 PM Bug #46081 (Pending Backport): cephadm: mds permissions for osd are unnecessarily permissive
- 09:43 AM Feature #45743 (Fix Under Review): mgr/nfs: Add interface to show cluster information
- 05:00 AM Bug #46167 (Rejected): pybind/mgr/volumes: xlist.h: 144: FAILED ceph_assert((bool)_front == (bool...
- Caused by https://github.com/ceph/ceph/pull/35410
- 04:56 AM Bug #46167 (Rejected): pybind/mgr/volumes: xlist.h: 144: FAILED ceph_assert((bool)_front == (bool...
- ...
- 03:03 AM Feature #46166 (Resolved): mds: store symlink target as xattr in data pool inode for disaster rec...
- Currently, the MDS only stores the symlink inode's backtrace information in the data pool. During disaster recovery o...
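For context, the existing backtrace for any inode (symlinks included) can be read straight from the data pool, where it is stored as the "parent" xattr on the inode's first object; a rough sketch using the Python rados binding (pool name, conf path and inode number are placeholders):
    # Sketch only: read the encoded backtrace ("parent" xattr) of an inode's
    # first object (<ino in hex>.00000000) from the filesystem's data pool.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("cephfs_data")
        backtrace = ioctx.get_xattr("10000000001.00000000", "parent")
        print(len(backtrace), "bytes of encoded backtrace")
        ioctx.close()
    finally:
        cluster.shutdown()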
- 01:56 AM Bug #46163 (Triaged): mgr/volumes: Clone operation uses source subvolume root directory mode and ...
06/23/2020
- 11:53 PM Bug #46163 (Resolved): mgr/volumes: Clone operation uses source subvolume root directory mode and...
- If a subvolume's mode or uid/gid values are changed after a snapshot, and a clone of a snapshot prior to the change is ...
- 09:25 PM Backport #46155 (In Progress): octopus: Test failure: test_create_multiple_exports (tasks.cephfs....
- 02:41 PM Backport #46155 (Fix Under Review): octopus: Test failure: test_create_multiple_exports (tasks.ce...
- 02:40 PM Backport #46155 (Resolved): octopus: Test failure: test_create_multiple_exports (tasks.cephfs.tes...
- https://github.com/ceph/ceph/pull/35499
- 09:23 PM Bug #46104: Test failure: test_export_create_and_delete (tasks.cephfs.test_nfs.TestNFS)
- Note: the Orchestrator project doesn't have a Backport tracker. Moved to "fs" Project which is where all the other NF...
- 01:44 PM Bug #46104 (Pending Backport): Test failure: test_export_create_and_delete (tasks.cephfs.test_nfs...
- 09:22 PM Backport #46156 (In Progress): octopus: Test failure: test_export_create_and_delete (tasks.cephfs...
- 02:42 PM Backport #46156 (Fix Under Review): octopus: Test failure: test_export_create_and_delete (tasks.c...
- 02:42 PM Backport #46156 (Resolved): octopus: Test failure: test_export_create_and_delete (tasks.cephfs.te...
- https://github.com/ceph/ceph/pull/35499
- 05:51 PM Bug #46158 (Closed): pybind/mgr/volumes: Persist snapshot size on snapshot creation
- Due to issue [1], the subvolume snapshot info command returns incorrect snapshot size if it's requested after the cor...
- 01:47 PM Bug #46058 (Duplicate): qa: test_scrub_pause_and_resume KeyError: 'a'
- Thanks for tracking that down Venky!
- 05:48 AM Bug #46058: qa: test_scrub_pause_and_resume KeyError: 'a'
- This is due to: https://tracker.ceph.com/issues/44638
The PR got merged but the tracker status was never updated.
- 01:46 PM Bug #46046 (Pending Backport): Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs....
- 01:29 PM Bug #46084: client: supplying ceph_fsetxattr with no value unsets xattr
- Sidharth Anupkrishnan wrote:
> Jeff Layton wrote:
> > The kernel seems to key its behavior on the size parameter. ...
- 12:27 PM Bug #46084: client: supplying ceph_fsetxattr with no value unsets xattr
- Jeff Layton wrote:
> The kernel seems to key its behavior on the size parameter. When it's 0, the pointer passed in ...
- 10:26 AM Bug #46084: client: supplying ceph_fsetxattr with no value unsets xattr
- The kernel seems to key its behavior on the size parameter. When it's 0, the pointer passed in is ignored and an empt...
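In other words, a zero-length value should set an empty xattr rather than unset it, and removal should stay a separate, explicit call. A hedged sketch of those semantics through the Python cephfs binding (assumes a mounted LibCephFS handle, an existing file "/f", and the setxattr/getxattr/removexattr wrappers from cephfs.pyx):
    # Sketch of the intended behaviour being discussed, not the fix itself.
    import cephfs

    fs = cephfs.LibCephFS(conffile="/etc/ceph/ceph.conf")
    fs.mount()
    fs.setxattr("/f", "user.empty", b"", 0)    # zero-length value: sets an empty xattr
    print(fs.getxattr("/f", "user.empty"))     # expected: b"", not a removal/ENODATA
    fs.removexattr("/f", "user.empty")         # unsetting should require an explicit call
    fs.unmount()
    fs.shutdown()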
- 11:38 AM Feature #45371: mgr/volumes: `protect` and `clone` operation in a single transaction
- With subvolume and snapshot decoupling feature[1], snapshot protect and unprotect would no longer be required.
[1]...
- 09:00 AM Backport #46152 (Resolved): octopus: test_scrub_pause_and_resume (tasks.cephfs.test_scrub_checks....
- https://github.com/ceph/ceph/pull/36038
- 09:00 AM Backport #46151 (Resolved): nautilus: test_scrub_pause_and_resume (tasks.cephfs.test_scrub_checks...
- https://github.com/ceph/ceph/pull/36168
- 09:00 AM Bug #45071 (Resolved): cephfs-shell: CI testing does not detect flake8 errors
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:59 AM Feature #45289 (Resolved): mgr/volumes: create fs subvolumes with isolated RADOS namespaces
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:59 AM Bug #45300 (Resolved): qa/tasks/vstart_runner.py: TypeError: mount() got an unexpected keyword ar...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:59 AM Bug #45396 (Resolved): ceph-fuse: building the source code failed with libfuse3.5 or higher versions
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:58 AM Bug #45866 (Resolved): ceph-fuse build failure against libfuse v3.9.1
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:57 AM Backport #46001: octopus: pybind/mgr/volumes: add command to return metadata regarding a subvolum...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35670
m...
- 06:36 AM Backport #46001 (Resolved): octopus: pybind/mgr/volumes: add command to return metadata regarding...
- 08:57 AM Backport #45849: octopus: mgr/volumes: create fs subvolumes with isolated RADOS namespaces
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35671
m...
- 06:34 AM Backport #45849 (Resolved): octopus: mgr/volumes: create fs subvolumes with isolated RADOS namesp...
- 08:57 AM Backport #46013 (Resolved): octopus: qa: commit 9f6c764f10f break qa code in several places
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35600
m...
- 08:57 AM Backport #45851 (Resolved): octopus: mds: scrub on directory with recently created files may fail...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35555
m...
- 08:56 AM Backport #45848 (Resolved): octopus: qa/tasks/vstart_runner.py: TypeError: mount() got an unexpec...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35554
m...
- 08:56 AM Backport #45846 (Resolved): octopus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35451
m...
- 08:56 AM Backport #45941 (Resolved): octopus: ceph-fuse build failure against libfuse v3.9.1
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35450
m...
- 08:56 AM Backport #45845 (Resolved): octopus: ceph-fuse: building the source code failed with libfuse3.5 o...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35450
m...
- 08:56 AM Backport #45842 (Resolved): octopus: ceph-fuse: the -d option couldn't enable the debug mode in l...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35449
m...
- 08:55 AM Backport #45476 (Resolved): octopus: cephfs-shell: CI testing does not detect flake8 errors
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34998
m...
- 08:55 AM Backport #45888 (Resolved): octopus: client: fails to reconnect to MDS
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35616
m...
- 08:55 AM Backport #45838 (Resolved): octopus: mds may start to fragment dirfrag before rollback finishes
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35448
m...
- 05:48 AM Bug #44638 (Pending Backport): test_scrub_pause_and_resume (tasks.cephfs.test_scrub_checks.TestSc...
06/22/2020
- 11:34 PM Bug #46084 (Triaged): client: supplying ceph_fsetxattr with no value unsets xattr
- Jeff suggests the behavior should be to convert the nullptr to an empty string, to match the behavior of the kernel c...
- 01:54 PM Bug #46084: client: supplying ceph_fsetxattr with no value unsets xattr
- Patrick Donnelly wrote:
> Interesting design choice here. Is it causing issues for some application of yours? I'm in... - 01:51 PM Bug #46084: client: supplying ceph_fsetxattr with no value unsets xattr
- Looks like a legit bug. The Linux kernel does this in __vfs_setxattr:...
- 01:44 PM Bug #46084: client: supplying ceph_fsetxattr with no value unsets xattr
- Interesting design choice here. Is it causing issues for some application of yours? I'm inclined not to change the be...
- 09:15 PM Bug #23421: ceph-fuse: stop ceph-fuse if no root permissions?
- The issue is right there in the log?
2020-06-19T14:49:24.230+0530 7fba997fa700 -1 client.4311 failed to remount fo...
- 05:45 PM Backport #46001: octopus: pybind/mgr/volumes: add command to return metadata regarding a subvolum...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/35670
merged
- 05:45 PM Backport #45849: octopus: mgr/volumes: create fs subvolumes with isolated RADOS namespaces
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35671
merged
- 05:43 PM Backport #46013: octopus: qa: commit 9f6c764f10f break qa code in several places
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35600
merged
- 05:41 PM Backport #45851: octopus: mds: scrub on directory with recently created files may fail to load ba...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35555
merged
- 05:40 PM Backport #45848: octopus: qa/tasks/vstart_runner.py: TypeError: mount() got an unexpected keyword...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35554
merged
- 05:37 PM Backport #45846: octopus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections is absent
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35451
merged
- 05:36 PM Backport #45941: octopus: ceph-fuse build failure against libfuse v3.9.1
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35450
merged
- 05:36 PM Backport #45845: octopus: ceph-fuse: building the source code failed with libfuse3.5 or higher ve...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35450
merged
- 05:34 PM Backport #45842: octopus: ceph-fuse: the -d option couldn't enable the debug mode in libfuse
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35449
merged
- 05:34 PM Backport #45476: octopus: cephfs-shell: CI testing does not detect flake8 errors
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34998
merged
- 05:32 PM Backport #45888: octopus: client: fails to reconnect to MDS
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35616
merged
- 05:28 PM Backport #45838: octopus: mds may start to fragment dirfrag before rollback finishes
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35448
merged
- 04:11 PM Bug #41541 (Fix Under Review): mgr/volumes: ephemerally pin volumes
- 04:09 PM Bug #37725 (Triaged): mds: stopping MDS with subtrees pinnned cannot finish stopping
- I believe this is fixed already but needs double-checked. It may also be fixed by the ephemeral pinning branch for #4...
- 01:47 PM Feature #46074 (Triaged): mds: provide altrenatives to increase the total cephfs subvolume snapsh...
- 01:41 PM Bug #46129 (Fix Under Review): mds: fix hang issue when accessing a file under a lost parent dire...
- 02:45 AM Bug #46129 (Resolved): mds: fix hang issue when accessing a file under a lost parent directory
- Once in a while we encountered a serious problem that resulted in some metadata being lost. After we brought the MDS up...
- 01:03 PM Bug #46100 (Fix Under Review): vstart_runner.py: check for Raw instance before treating as iterable
- 01:03 PM Bug #46101 (Fix Under Review): qa: set omit_sudo to False for cmds executed with sudo
- 12:47 PM Bug #46056: assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- Ok, I have a patchset that fixes this for ganesha. It turns out not to be terribly invasive, but it does need more te...
- 11:56 AM Bug #46140 (Resolved): mds: couldn't see the logs in log file before the daemon get aborted
- It seems the **assert()** call doesn't flush the log buffer to the relevant log files before aborting the daemons,...
- 11:30 AM Bug #45593 (Rejected): qa: removing network bridge appears to cause dropped packets
- This is not a bug in the ceph qa test suite; the root cause is that the node itself got lost.
06/20/2020
- 09:55 PM Backport #45773 (Resolved): octopus: vstart_runner: LocalFuseMount.mount should set set.mounted t...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35447
m...
- 09:54 PM Backport #45680: octopus: mgr/volumes: Not able to resize cephfs subvolume with ceph fs subvolume...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35256
m...
06/19/2020
- 04:37 PM Bug #45332 (Resolved): qa: TestExports is failure under new Python3 runtime
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:37 PM Bug #45398 (Resolved): mgr/volumes: Not able to resize cephfs subvolume with ceph fs subvolume cr...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:46 PM Bug #46056: assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- I think what I can do is FSAL_CEPH embed the export ID inside the handle-key that it uses as the cache key, without e...
- 12:58 PM Bug #46056: assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- sepia-liu also proposed a patch to ganesha to fix this: https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/495911
.... - 02:17 PM Bug #45666 (Resolved): qa: AssertionError: '1' != b'1'
- 02:17 PM Backport #45886 (Resolved): octopus: qa: AssertionError: '1' != b'1'
- 01:04 PM Backport #46085 (In Progress): octopus: handle multiple ganesha.nfsd's appropriately in vstart.sh
- https://github.com/ceph/ceph/pull/35499
- 10:50 AM Backport #46002 (In Progress): nautilus: pybind/mgr/volumes: add command to return metadata regar...
- 10:47 AM Feature #44193 (Pending Backport): pybind/mgr/volumes: add API to manage NFS-Ganesha gateway clus...
- 10:46 AM Backport #46106 (Resolved): octopus: pybind/mgr/volumes: add API to manage NFS-Ganesha gateway cl...
- https://github.com/ceph/ceph/pull/35499
- 10:40 AM Backport #45849 (In Progress): octopus: mgr/volumes: create fs subvolumes with isolated RADOS nam...
- 10:37 AM Backport #46001 (In Progress): octopus: pybind/mgr/volumes: add command to return metadata regard...
- 09:39 AM Backport #46011 (Resolved): octopus: qa: TestExports is failure under new Python3 runtime
- Already backported via https://github.com/ceph/ceph/commit/bb8f7b0907a2557f4ef5b0455c0fce66aa896e4b
- 09:30 AM Bug #23421 (New): ceph-fuse: stop ceph-fuse if no root permissions?
- 09:29 AM Bug #23421: ceph-fuse: stop ceph-fuse if no root permissions?
- Patrick Donnelly wrote:
> That would indicate ceph-fuse maybe crashed. Please check the logs.
2020-06-19T14:49:24...
- 09:10 AM Bug #46046 (Fix Under Review): Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs....
- 09:09 AM Bug #46104 (Fix Under Review): Test failure: test_export_create_and_delete (tasks.cephfs.test_nfs...
- 08:47 AM Bug #46104 (Resolved): Test failure: test_export_create_and_delete (tasks.cephfs.test_nfs.TestNFS)
- ...
- 06:49 AM Bug #46101 (Resolved): qa: set omit_sudo to False for cmds executed with sudo
- There are 2 sets of distinct commands -- the first are commands that set up and tear down network namespaces and the second are ...
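A minimal sketch of the intended fix (not the actual qa patch; assumes the run helpers accept omit_sudo the way vstart_runner's LocalRemote.run does, and `self.mount_a` comes from a cephfs qa test):
    # Commands that genuinely need root should keep their sudo prefix, so pass
    # omit_sudo=False instead of letting the runner strip "sudo" from args.
    self.mount_a.client_remote.run(
        args=['sudo', 'ip', 'netns', 'list'],
        omit_sudo=False,
    )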
- 06:41 AM Bug #46100 (Resolved): vstart_runner.py: check for Raw instance before treating as iterable
- ...
- 05:05 AM Fix #46070 (Fix Under Review): client: fix snap directory atime
- 03:46 AM Bug #43762: pybind/mgr/volumes: create fails with TypeError
- Victoria Martinez de la Cruz wrote:
> Adding more context to this
>
> This happened after creating a second volum...
- 02:49 AM Backport #45680 (Resolved): octopus: mgr/volumes: Not able to resize cephfs subvolume with ceph f...
- 02:40 AM Feature #43435: kclient:send client provided metric flags in client metadata
- V1: https://patchwork.kernel.org/project/ceph-devel/list/?series=303647
V2: https://patchwork.kernel.org/project/cep...
06/18/2020
- 10:23 PM Backport #45773: octopus: vstart_runner: LocalFuseMount.mount should set set.mounted to True
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35447
merged
- 10:22 PM Backport #45680: octopus: mgr/volumes: Not able to resize cephfs subvolume with ceph fs subvolume...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35256
merged
- 09:27 PM Bug #46056: assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- I started working on this, but there is a bit of a dilemma. My approach to fixing this was to share the ceph client b...
- 11:29 AM Bug #46056: assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- Ok, I think I may see what's going on now.
Ganesha encodes the export ID into the filehandle on the wire, but that...
- 09:09 PM Backport #46011: octopus: qa: TestExports is failure under new Python3 runtime
- yet
- 08:07 PM Backport #46094 (Rejected): octopus: cephfs-shell: set proper return value for the tool
- 08:06 PM Backport #46085 (Resolved): octopus: handle multiple ganesha.nfsd's appropriately in vstart.sh
- https://github.com/ceph/ceph/pull/35499
- 07:47 PM Bug #46084: client: supplying ceph_fsetxattr with no value unsets xattr
- Forgot to mention this: I tested this on both an octopus build and a nautilus build.
- 07:09 PM Bug #46084 (Resolved): client: supplying ceph_fsetxattr with no value unsets xattr
- While working on [1] I noticed an unexpected behavior when trying to use ceph_fsetxattr with an empty (null) value. I...
- 05:01 PM Bug #46081 (Fix Under Review): cephadm: mds permissions for osd are unnecessarily permissive
- 04:58 PM Bug #46081 (Resolved): cephadm: mds permissions for osd are unnecessarily permissive
- ...
- 04:06 PM Bug #46079 (Pending Backport): handle multiple ganesha.nfsd's appropriately in vstart.sh
- 04:03 PM Bug #46079 (Fix Under Review): handle multiple ganesha.nfsd's appropriately in vstart.sh
- 03:53 PM Bug #46079 (Resolved): handle multiple ganesha.nfsd's appropriately in vstart.sh
- Currently, daemons started later will overwrite files for ones started earlier. Also, we could use a way to run alter...
- 01:34 PM Bug #46075 (Resolved): ceph-fuse: mount -a on already mounted folder should be ignored
- The expected behaviour of `mount -a` is to mount the paths written in /etc/fstab, and ignore those that are already m...
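A rough illustration of that expectation (not the ceph-fuse change itself): check /proc/mounts and skip mountpoints that are already mounted.
    # Sketch: mimic the documented `mount -a` behaviour of ignoring paths
    # that are already mounted.
    def is_mounted(mountpoint):
        with open("/proc/mounts") as f:
            return any(line.split()[1] == mountpoint for line in f)

    if is_mounted("/mnt/cephfs"):
        print("/mnt/cephfs already mounted; mount -a should ignore it")
    else:
        print("would attempt the ceph-fuse mount here")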
- 01:04 PM Feature #46074 (Resolved): mds: provide altrenatives to increase the total cephfs subvolume snaps...
- Issue is originally discussed here: https://github.com/ceph/ceph-csi/issues/1133
This bug is filed to provide disc...
- 12:58 PM Feature #44193: pybind/mgr/volumes: add API to manage NFS-Ganesha gateway clusters in exporting s...
- Varsha, I see that the PR has been merged. Can we set 'Backport' field to Octopus, and create backport ticket?
- 12:50 PM Backport #45953 (In Progress): octopus: vstart: Support deployment of ganesha daemon by cephadm w...
- 12:48 PM Backport #46003 (In Progress): octopus: vstart: set $CEPH_CONF when calling ganesha-rados-grace c...
- 10:03 AM Bug #46046: Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS)
- https://github.com/ceph/ceph/pull/35644
- 02:41 AM Bug #46046: Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS)
- I dug into this more. Here's where the mgr notifies:...
- 09:29 AM Fix #46070 (Resolved): client: fix snap directory atime
- The fuse client gets almost all the .snap directory timestamps from its parent. The exception is atime, which is ke...
- 09:00 AM Bug #46069 (Resolved): qa/tasks: make sh() in vstart_runner.py identical with teuthology.orchestr...
- teuthology.orchestra.remote.sh has changed. So make it identical in vstart_runner.py for compatibility.
This also fi...
- 08:36 AM Bug #44947: Hung ops for evicted CephFS clients do not get cleaned up fully
- We haven't seen this again since I raised the ticket. We've upgraded to 14.2.9 recently; I'll keep an eye out for thi...
- 08:13 AM Bug #46068 (Closed): qa/tasks/cephfs/nfs: AssertionError in test_export_create_and_delete
- Not a bug
- 07:53 AM Bug #46068 (Closed): qa/tasks/cephfs/nfs: AssertionError in test_export_create_and_delete
- http://qa-proxy.ceph.com/teuthology/kchai-2020-06-18_05:48:03-rados-wip-kefu2-testing-2020-06-18-0822-distro-basic-sm...
06/17/2020
- 10:51 PM Bug #43943: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 seconds"
- http://pulpito.ceph.com/teuthology-2020-06-17_14:23:02-upgrade:nautilus-x-master-distro-basic-smithi/
- 09:29 PM Bug #46046: Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS)
- Varsha, I recommend you change task/test_orch_cli.yaml to add rados client debugging ("debug rados = 10") so it's eas...
- 02:03 PM Bug #46046: Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS)
- mgr log...
- 01:56 PM Bug #46046: Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS)
- http://pulpito.ceph.com/swagner-2020-06-17_13:30:53-rados:cephadm-wip-swagner-testing-2020-06-17-1044-distro-basic-sm...
- 11:24 AM Bug #46046: Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS)
- ...
- 11:15 AM Bug #46046: Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS)
- Varsha, could you take a look?
- 11:13 AM Bug #46046 (Resolved): Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS)
- rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_orch_cli.yaml}...
- 08:18 PM Bug #46056: assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- Patrick suggested that ganesha may be using an Inode pointer from one cephfs client and is trying to hand that pointe...
- 07:02 PM Bug #46056: assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- My current theory is that Ganesha is replacing one mount's Inode with the other because the inode numbers are identic...
- 06:37 PM Bug #46056: assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- Tried modifying the MulticlientSimple test to more closely follow ganesha's usage, but it didn't reproduce the issue....
- 05:00 PM Bug #46056 (Resolved): assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- Testing a simple vstart setup with these commands. ganesha is configured with 2 exports that are exporting the same f...
- 06:39 PM Feature #46059 (Resolved): vstart_runner.py: optionally rotate logs between tests
- When running lots of tests with debugging, the size of the logs quickly reaches 10s of gigabytes. Have vstart_runner....
- 05:43 PM Bug #46058 (Duplicate): qa: test_scrub_pause_and_resume KeyError: 'a'
- ...
- 05:41 PM Bug #46057 (Resolved): qa/cephfs: run_as_user must args list instead of str
- Running @self.mount_a.run_as_root(args='ls ~')@ leads to following error -...
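For clarity, the helper expects an argv-style list (sketch based on the call quoted above; `self.mount_a` comes from a cephfs qa test, and '~' needs a shell, so the path is spelled out):
    # self.mount_a.run_as_root(args='ls ~')        # string form that triggers the error above
    self.mount_a.run_as_root(args=['ls', '/root'])  # argv-list form the helper expects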
- 05:15 PM Bug #45749 (Won't Fix): client: num_caps shows number of caps received
- Whoops, sorry this was fixed in master a long time ago but was missed for backport to Luminous. Fixed in
8859ccf2...
- 05:08 PM Bug #46042 (Fix Under Review): mds: EMetablob replay too long will cause mds restart
- 07:07 AM Bug #46042: mds: EMetablob replay too long will cause mds restart
- We encountered the warning 'mdlog behind on trimming' and MDS crashed. Then the standby MDS recovers its journal and ...
- 06:15 AM Bug #46042 (Resolved): mds: EMetablob replay too long will cause mds restart
- 04:42 PM Bug #43191 (Resolved): test_cephfs_shell: set `colors` to Never for cephfs-shell
- 04:32 PM Feature #46041 (Resolved): mds/metric: if client send the metrics to old ceph, the mds session co...
- 03:45 AM Feature #46041 (Fix Under Review): mds/metric: if client send the metrics to old ceph, the mds se...
- 03:03 AM Feature #46041 (In Progress): mds/metric: if client send the metrics to old ceph, the mds session...
- 03:02 AM Feature #46041: mds/metric: if client send the metrics to old ceph, the mds session connection wi...
- When the mgr receives an unknown type of message it will close the socket connection directly:
In dmesg log we can ...
- 03:01 AM Feature #46041 (Resolved): mds/metric: if client send the metrics to old ceph, the mds session co...
- We need a way to tell whether the mds supports metric collection or not.
- 01:44 PM Bug #46023: mds: MetricAggregator.cc: 178: FAILED ceph_assert(rm)
- Note that the fix in the PR is to "patch" the sequence number in the tracking map. I didn't want to do away with the ...
- 01:15 PM Bug #46023 (Fix Under Review): mds: MetricAggregator.cc: 178: FAILED ceph_assert(rm)
- 04:24 AM Bug #46023: mds: MetricAggregator.cc: 178: FAILED ceph_assert(rm)
- This happens when a rank 0 MDS goes offline after handling metrics for a client from another rank (say, to rank 1) fo...
- 10:16 AM Bug #45958 (Resolved): nautilus: ERROR: test_get_authorized_ids (tasks.cephfs.test_volume_client....
- 10:16 AM Bug #45966 (Resolved): nautilus: qa/tasks: NameError: global name 'StringIO' is not defined
- 09:06 AM Backport #45888 (In Progress): octopus: client: fails to reconnect to MDS
- 07:39 AM Bug #45829 (Resolved): fs: ceph_test_libcephfs abort in TestUtime
- 07:10 AM Bug #45829 (Fix Under Review): fs: ceph_test_libcephfs abort in TestUtime
- Fixed in teuthology: https://github.com/ceph/teuthology/pull/1508
- 05:03 AM Bug #45829: fs: ceph_test_libcephfs abort in TestUtime
- http://pulpito.ceph.com/xiubli-2020-06-17_03:41:52-fs:basic_workload-wip-fs-fderr-2020-06-17-1011-distro-basic-smithi...
- 12:50 AM Bug #45829: fs: ceph_test_libcephfs abort in TestUtime
- http://pulpito.ceph.com/xiubli-2020-06-16_14:44:06-fs:basic_workload-wip-fs-ulimit-2020-06-16-1511-distro-basic-smith...
- 02:29 AM Bug #44113 (Pending Backport): cephfs-shell: set proper return value for the tool
- 02:29 AM Bug #43248 (Resolved): cephfs-shell: do not drop into shell after running command-line command
- Backport managed by #44113.
06/16/2020
- 03:30 PM Bug #45958: nautilus: ERROR: test_get_authorized_ids (tasks.cephfs.test_volume_client.TestVolumeC...
- https://github.com/ceph/ceph/pull/35520 merged
- 06:23 AM Bug #45958 (Fix Under Review): nautilus: ERROR: test_get_authorized_ids (tasks.cephfs.test_volume...
- 03:30 PM Bug #45966: nautilus: qa/tasks: NameError: global name 'StringIO' is not defined
- https://github.com/ceph/ceph/pull/35520 merged
- 01:28 PM Backport #46012 (In Progress): nautilus: qa: commit 9f6c764f10f break qa code in several places
- 01:26 PM Backport #46013 (In Progress): octopus: qa: commit 9f6c764f10f break qa code in several places
- 10:37 AM Feature #45741 (Fix Under Review): mgr/volumes/nfs: Add interface for get and list exports
- 10:36 AM Bug #45744 (Fix Under Review): mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
- 04:10 AM Bug #46025 (Fix Under Review): client: release the client_lock before copying data in read
- 01:57 AM Bug #46025 (Resolved): client: release the client_lock before copying data in read
- There is a copy from the bufferlist to the out buffer in the client read path. In the case of large file reads, other requests will ...
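A minimal, language-agnostic sketch of the locking pattern being proposed (this is not the Client.cc change itself): grab a reference to the buffered data under the lock, then perform the large copy with the lock released so other requests are not blocked.
    import threading

    client_lock = threading.Lock()
    cache = {"blob": bytes(64 * 1024 * 1024)}   # stands in for the read bufferlist

    def read_blob():
        with client_lock:
            src = cache["blob"]   # cheap under the lock: just take a reference
        return bytes(src)         # expensive copy runs with the lock released

    print(len(read_blob()))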
06/15/2020
- 10:12 PM Bug #46023 (Resolved): mds: MetricAggregator.cc: 178: FAILED ceph_assert(rm)
- ...
- 10:05 PM Bug #45434 (Triaged): qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- 10:05 PM Bug #45434: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- /ceph/teuthology-archive/pdonnell-2020-06-12_09:37:27-kcephfs-wip-pdonnell-testing-20200612.063208-distro-basic-smith...
- 10:03 PM Bug #45100: qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)
- and master: /ceph/teuthology-archive/pdonnell-2020-06-12_09:37:27-kcephfs-wip-pdonnell-testing-20200612.063208-distro...
- 10:01 PM Bug #46022 (New): qa: test_strays num_purge_ops violates threshold 34/16
- ...
- 07:29 PM Feature #12334 (Resolved): nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:24 PM Backport #46013 (Resolved): octopus: qa: commit 9f6c764f10f break qa code in several places
- https://github.com/ceph/ceph/pull/35600
- 07:24 PM Backport #46012 (Resolved): nautilus: qa: commit 9f6c764f10f break qa code in several places
- https://github.com/ceph/ceph/pull/35601
- 07:23 PM Backport #46011 (Resolved): octopus: qa: TestExports is failure under new Python3 runtime
- 07:22 PM Bug #45521 (Resolved): mds: layout parser does not handle [-.] in pool names
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:20 PM Backport #46003 (Resolved): octopus: vstart: set $CEPH_CONF when calling ganesha-rados-grace comm...
- https://github.com/ceph/ceph/pull/35499
- 07:20 PM Bug #44579: qa: commit 9f6c764f10f break qa code in several places
- 9f6c764f10f is in the process of being backported to octopus as well
- 07:18 PM Bug #44579: qa: commit 9f6c764f10f break qa code in several places
- yes, 9f6c764f10f was backported to nautilus via 6e00035ab44f23f3b3ff3242ac81e69dc9f174fc
- 07:16 PM Bug #44579 (Pending Backport): qa: commit 9f6c764f10f break qa code in several places
- 04:59 PM Backport #46002 (Resolved): nautilus: pybind/mgr/volumes: add command to return metadata regardin...
- https://github.com/ceph/ceph/pull/35672
- 04:59 PM Backport #46001 (Resolved): octopus: pybind/mgr/volumes: add command to return metadata regarding...
- https://github.com/ceph/ceph/pull/35670
- 03:27 PM Bug #45997 (Fix Under Review): nautilus: ceph_volume_client.py: UnicodeEncodeError exception whil...
- -Found a fix being worked on in GitHub.- Thanks for the link Dan. Updated.
- 01:54 PM Bug #45997: nautilus: ceph_volume_client.py: UnicodeEncodeError exception while removing volume w...
- (We have a PR incoming)
- 01:54 PM Bug #45997 (Triaged): nautilus: ceph_volume_client.py: UnicodeEncodeError exception while removin...
- 01:09 PM Bug #45997 (Resolved): nautilus: ceph_volume_client.py: UnicodeEncodeError exception while removi...
- While deleting a Manila share, we get this backtrace:...
- 08:14 AM Bug #45829: fs: ceph_test_libcephfs abort in TestUtime
- http://pulpito.ceph.com/xiubli-2020-06-15_05:16:00-fs:basic_workload-wip-xiubli-fs-testing-2020-06-12-1613-distro-bas...
- 07:50 AM Bug #45829: fs: ceph_test_libcephfs abort in TestUtime
- From one of my test case's teuthology.log, we can see that, it is aborted when doing the mount(), in create_file_even...
06/13/2020
- 08:16 PM Bug #45663: luminous to nautilus upgrade
I think the xlock causes this
2020-06-13 17:32:24.920 7fb5edd82700 0 log_channel(cluster) log [WRN] :
slow re...
06/12/2020
- 09:14 PM Fix #44171 (Resolved): pybind/cephfs: audit for unimplemented bindings for libcephfs
- 09:10 PM Feature #45237 (Pending Backport): pybind/mgr/volumes: add command to return metadata regarding a...
- 09:07 PM Bug #45866 (Pending Backport): ceph-fuse build failure against libfuse v3.9.1
- 02:00 PM Bug #45990 (New): Add MDS Daemon with ceph orch
- I am trying to get cephfs up & running:
root@ceph01:~# cephadm shell --fsid 5436dd5d-83d4-4dc8-a93b-60ab5db145df...
- 09:50 AM Backport #45851 (In Progress): octopus: mds: scrub on directory with recently created files may f...
- 09:34 AM Backport #45848 (In Progress): octopus: qa/tasks/vstart_runner.py: TypeError: mount() got an unex...
- 07:43 AM Bug #45935: mds: cap revoking requests didn't success when the client doing reconnection ...
- LOCK_MIX is a transition state in which multiple MDSs can do read/write
at the same time, but the Fcb caps are not allow...
06/11/2020
- 05:28 PM Backport #45679 (Resolved): nautilus: mds: layout parser does not handle [-.] in pool names
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35391
m...
- 03:43 PM Backport #45679: nautilus: mds: layout parser does not handle [-.] in pool names
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35391
merged - 05:28 PM Backport #45974 (Resolved): nautilus: qa: AssertionError: '1' != b'1'
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35535
m...
- 05:28 PM Backport #45974 (In Progress): nautilus: qa: AssertionError: '1' != b'1'
- 10:18 AM Backport #45974 (Resolved): nautilus: qa: AssertionError: '1' != b'1'
- https://github.com/ceph/ceph/pull/35535
- 05:28 PM Backport #45967 (Resolved): nautilus: qa: TestExports is failure under new Python3 runtime
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35535
m...
- 05:26 PM Backport #45689 (Resolved): nautilus: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35393
m...
- 05:26 PM Backport #45686 (Resolved): nautilus: mds: FAILED assert(locking == lock) in MutationImpl::finish...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35392
m...
- 05:25 PM Backport #45681: nautilus: mgr/volumes: Not able to resize cephfs subvolume with ceph fs subvolum...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35482
m...
- 04:06 PM Backport #45681 (Resolved): nautilus: mgr/volumes: Not able to resize cephfs subvolume with ceph ...
- 05:24 PM Backport #45850: nautilus: mgr/volumes: create fs subvolumes with isolated RADOS namespaces
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35482
m...
- 04:09 PM Backport #45850 (Resolved): nautilus: mgr/volumes: create fs subvolumes with isolated RADOS names...
- 01:14 PM Bug #44415 (Fix Under Review): cephfs.pyx: passing empty string is fine but passing None is not t...
- 10:01 AM Bug #44579: qa: commit 9f6c764f10f break qa code in several places
- this needs backport to nautilus.
In nautilus:...
- 05:48 AM Feature #45742 (Fix Under Review): mgr/nfs: Add interface for listing cluster
- 05:13 AM Bug #45935: mds: cap revoking requests didn't success when the client doing reconnection ...
- Xiubo Li wrote:
> From the mds.c logs, we can see that:
>
> client_caps(revoke ino 0x10000000392 65 seq 1 ==> ...
- 03:38 AM Bug #45829 (In Progress): fs: ceph_test_libcephfs abort in TestUtime
- I can access the lab now and will continue working on this.
06/10/2020
- 08:59 PM Bug #45971 (Pending Backport): vstart: set $CEPH_CONF when calling ganesha-rados-grace commands
- 08:58 PM Bug #45971 (Fix Under Review): vstart: set $CEPH_CONF when calling ganesha-rados-grace commands
- 07:46 PM Bug #45971 (Resolved): vstart: set $CEPH_CONF when calling ganesha-rados-grace commands
- I had a machine with /etc/ceph/ceph.conf file and started vstart on it. The ganesha-rados-grace object got created on...
- 03:33 PM Backport #45689: nautilus: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35393
merged
- 03:32 PM Backport #45686: nautilus: mds: FAILED assert(locking == lock) in MutationImpl::finish_locking
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35392
merged
- 02:39 PM Bug #45338: find leads to recursive output with nfs mount
- Vasu? Were you able to upgrade this and did it help? If so, what versions did you go to?
- 12:28 PM Bug #45815 (Fix Under Review): vstart_runner.py: set stdout and stderr to None by default
- 11:53 AM Bug #45966 (Fix Under Review): nautilus: qa/tasks: NameError: global name 'StringIO' is not defined
- 11:07 AM Bug #45966 (In Progress): nautilus: qa/tasks: NameError: global name 'StringIO' is not defined
- 10:53 AM Bug #45966 (Resolved): nautilus: qa/tasks: NameError: global name 'StringIO' is not defined
- See a number of job failures in nautilus kcephfs suite due to incomplete py2 to py3 qa/tasks transition in nautilus.
...
- 11:52 AM Backport #45967 (In Progress): nautilus: qa: TestExports is failure under new Python3 runtime
- 11:39 AM Backport #45967 (Resolved): nautilus: qa: TestExports is failure under new Python3 runtime
- https://github.com/ceph/ceph/pull/35535
- 11:38 AM Bug #45332 (Pending Backport): qa: TestExports is failure under new Python3 runtime
- 11:37 AM Bug #45960 (Duplicate): nautilus: ERROR: test_export_pin_getfattr (tasks.cephfs.test_exports.Test...
- Duplicate of https://tracker.ceph.com/issues/45332
- 10:07 AM Bug #45342: qa/tasks/vstart_runner.py: RuntimeError: Fuse mount failed to populate /sys/ after 31...
- Rebased and resolved the conflicts.
- 08:28 AM Bug #45935: mds: cap revoking requests didn't success when the client doing reconnection ...
- Xiubo Li wrote:
> From the mds.c logs, we can see that:
>
> client_caps(revoke ino 0x10000000392 65 seq 1 ==> ...
- 08:23 AM Bug #45935 (Fix Under Review): mds: cap revoking requests didn't success when the client doing re...
- 06:55 AM Backport #45825 (In Progress): luminous: MDS config reference lists mds log max expiring
- 06:54 AM Backport #45826 (In Progress): mimic: MDS config reference lists mds log max expiring
06/09/2020
- 05:22 PM Bug #45960 (Duplicate): nautilus: ERROR: test_export_pin_getfattr (tasks.cephfs.test_exports.Test...
- http://pulpito.ceph.com/yuriw-2020-06-08_17:05:22-multimds-wip-yuri5-testing-2020-06-08-1602-nautilus-distro-basic-sm...
- 03:40 PM Bug #45958 (Resolved): nautilus: ERROR: test_get_authorized_ids (tasks.cephfs.test_volume_client....
- http://pulpito.ceph.com/yuriw-2020-06-08_17:06:44-fs-wip-yuri5-testing-2020-06-08-1602-nautilus-distro-basic-smithi/5...
- 10:10 AM Backport #45953 (Resolved): octopus: vstart: Support deployment of ganesha daemon by cephadm with...
- https://github.com/ceph/ceph/pull/35499
- 08:49 AM Bug #45935: mds: cap revoking requests didn't success when the client doing reconnection ...
- From the mds.c logs, we can see that:
client_caps(revoke ino 0x10000000392 65 seq 1 ==> pending pAsLsXs was pAs...
- 12:56 AM Bug #42365: client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- 14.2.9/15.2.2 are also the victims
06/08/2020
- 11:07 PM Feature #43423 (Fix Under Review): mds: collect and show the dentry lease metric
- 11:02 PM Bug #45935: mds: cap revoking requests didn't success when the client doing reconnection ...
- Patrick Donnelly wrote:
> Xiubo, which teuthology test is this from?
Sorry, forgot to copy it, this the second is...
- 02:14 PM Bug #45935: mds: cap revoking requests didn't success when the client doing reconnection ...
- Xiubo, which teuthology test is this from?
- 01:32 PM Bug #45935 (In Progress): mds: cap revoking requests didn't success when the client doing reconne...
- 01:32 PM Bug #45935 (Resolved): mds: cap revoking requests didn't success when the client doing reconnecti...
- The kclient node log:...
- 08:30 PM Backport #45687 (In Progress): luminous: mds: FAILED assert(locking == lock) in MutationImpl::fin...
- 06:25 PM Feature #45830 (Pending Backport): vstart: Support deployment of ganesha daemon by cephadm with N...
- 06:05 PM Bug #45866 (Fix Under Review): ceph-fuse build failure against libfuse v3.9.1
- 06:03 PM Bug #45866 (Pending Backport): ceph-fuse build failure against libfuse v3.9.1
- temporarily setting Pending Backport to get a backport issue created
- 06:04 PM Backport #45941 (In Progress): octopus: ceph-fuse build failure against libfuse v3.9.1
- 06:03 PM Backport #45941 (Resolved): octopus: ceph-fuse build failure against libfuse v3.9.1
- https://github.com/ceph/ceph/pull/35450
- 05:47 PM Backport #45886 (In Progress): octopus: qa: AssertionError: '1' != b'1'
- 04:46 PM Backport #45850 (In Progress): nautilus: mgr/volumes: create fs subvolumes with isolated RADOS na...
- 04:32 PM Backport #45681 (In Progress): nautilus: mgr/volumes: Not able to resize cephfs subvolume with ce...
- 01:45 PM Feature #45906 (Fix Under Review): mds: make threshold for MDS_TRIM warning configurable
- 01:43 PM Bug #45829 (Triaged): fs: ceph_test_libcephfs abort in TestUtime
- 12:48 PM Bug #45834: cephadm: "fs volume create cephfs" overwrites existing placement specification
- An easy solution would be to validate the existence of this CephFS before starting the MDS daemons.
06/06/2020
- 09:10 AM Backport #45846 (In Progress): octopus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connectio...
- 08:58 AM Backport #45845 (In Progress): octopus: ceph-fuse: building the source code failed with libfuse3....
- 08:57 AM Backport #45842 (In Progress): octopus: ceph-fuse: the -d option couldn't enable the debug mode i...
- 08:57 AM Bug #43516 (Resolved): qa: verify sub-suite does not define os_version
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:56 AM Bug #43968 (Resolved): qa: multimds suite using centos7
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:55 AM Backport #45838 (In Progress): octopus: mds may start to fragment dirfrag before rollback finishes
- 08:43 AM Backport #44330 (Resolved): nautilus: qa: multimds suite using centos7
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35184
m...
- 08:43 AM Backport #45804 (Resolved): nautilus: qa: verify sub-suite does not define os_version
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35184
m...
- 08:39 AM Backport #45773 (In Progress): octopus: vstart_runner: LocalFuseMount.mount should set set.mounte...