Activity
From 05/11/2020 to 06/09/2020
06/09/2020
- 05:22 PM Bug #45960 (Duplicate): nautilus: ERROR: test_export_pin_getfattr (tasks.cephfs.test_exports.Test...
- http://pulpito.ceph.com/yuriw-2020-06-08_17:05:22-multimds-wip-yuri5-testing-2020-06-08-1602-nautilus-distro-basic-sm...
- 03:40 PM Bug #45958 (Resolved): nautilus: ERROR: test_get_authorized_ids (tasks.cephfs.test_volume_client....
- http://pulpito.ceph.com/yuriw-2020-06-08_17:06:44-fs-wip-yuri5-testing-2020-06-08-1602-nautilus-distro-basic-smithi/5...
- 10:10 AM Backport #45953 (Resolved): octopus: vstart: Support deployment of ganesha daemon by cephadm with...
- https://github.com/ceph/ceph/pull/35499
- 08:49 AM Bug #45935: mds: cap revoking requests didn't succeed when the client is doing reconnection ...
- From the mds.c logs, we can see that:
client_caps(revoke ino 0x10000000392 65 seq 1 ==> pending pAsLsXs was pAs...
- 12:56 AM Bug #42365: client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- 14.2.9/15.2.2 are also the victims
06/08/2020
- 11:07 PM Feature #43423 (Fix Under Review): mds: collect and show the dentry lease metric
- 11:02 PM Bug #45935: mds: cap revoking requests didn't succeed when the client is doing reconnection ...
- Patrick Donnelly wrote:
> Xiubo, which teuthology test is this from?
Sorry, forgot to copy it, this the second is...
- 02:14 PM Bug #45935: mds: cap revoking requests didn't succeed when the client is doing reconnection ...
- Xiubo, which teuthology test is this from?
- 01:32 PM Bug #45935 (In Progress): mds: cap revoking requests didn't succeed when the client is doing reconne...
- 01:32 PM Bug #45935 (Resolved): mds: cap revoking requests didn't succeed when the client is doing reconnecti...
- The kclient node log:...
- 08:30 PM Backport #45687 (In Progress): luminous: mds: FAILED assert(locking == lock) in MutationImpl::fin...
- 06:25 PM Feature #45830 (Pending Backport): vstart: Support deployment of ganesha daemon by cephadm with N...
- 06:05 PM Bug #45866 (Fix Under Review): ceph-fuse build failure against libfuse v3.9.1
- 06:03 PM Bug #45866 (Pending Backport): ceph-fuse build failure against libfuse v3.9.1
- temporarily setting Pending Backport to get a backport issue created
- 06:04 PM Backport #45941 (In Progress): octopus: ceph-fuse build failure against libfuse v3.9.1
- 06:03 PM Backport #45941 (Resolved): octopus: ceph-fuse build failure against libfuse v3.9.1
- https://github.com/ceph/ceph/pull/35450
- 05:47 PM Backport #45886 (In Progress): octopus: qa: AssertionError: '1' != b'1'
- 04:46 PM Backport #45850 (In Progress): nautilus: mgr/volumes: create fs subvolumes with isolated RADOS na...
- 04:32 PM Backport #45681 (In Progress): nautilus: mgr/volumes: Not able to resize cephfs subvolume with ce...
- 01:45 PM Feature #45906 (Fix Under Review): mds: make threshold for MDS_TRIM warning configurable
- 01:43 PM Bug #45829 (Triaged): fs: ceph_test_libcephfs abort in TestUtime
- 12:48 PM Bug #45834: cephadm: "fs volume create cephfs" overwrites existing placement specification
- An easy solution would be to validate the existence of this CephFS before starting the MDS daemons.
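A rough sketch of the suggested validation, assuming a check against the real @ceph fs ls --format json@ output (the helper below is illustrative, not the actual cephadm code):
<pre><code class="python">
import json
import subprocess

def cephfs_exists(fs_name):
    # `ceph fs ls --format json` returns a list of {"name": ...} entries.
    out = subprocess.check_output(["ceph", "fs", "ls", "--format", "json"])
    return any(fs["name"] == fs_name for fs in json.loads(out))

# Refuse to re-apply an MDS placement for a volume that already exists,
# so an existing placement specification is not silently overwritten.
if cephfs_exists("cephfs"):
    raise RuntimeError("fs volume 'cephfs' already exists")
</code></pre>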
06/06/2020
- 09:10 AM Backport #45846 (In Progress): octopus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connectio...
- 08:58 AM Backport #45845 (In Progress): octopus: ceph-fuse: building the source code failed with libfuse3....
- 08:57 AM Backport #45842 (In Progress): octopus: ceph-fuse: the -d option couldn't enable the debug mode i...
- 08:57 AM Bug #43516 (Resolved): qa: verify sub-suite does not define os_version
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:56 AM Bug #43968 (Resolved): qa: multimds suite using centos7
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:55 AM Backport #45838 (In Progress): octopus: mds may start to fragment dirfrag before rollback finishes
- 08:43 AM Backport #44330 (Resolved): nautilus: qa: multimds suite using centos7
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35184
m...
- 08:43 AM Backport #45804 (Resolved): nautilus: qa: verify sub-suite does not define os_version
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35184
m...
- 08:39 AM Backport #45773 (In Progress): octopus: vstart_runner: LocalFuseMount.mount should set self.mounte...
06/05/2020
- 09:55 PM Bug #45749 (In Progress): client: num_caps shows number of caps received
- 06:20 PM Bug #45910 (Fix Under Review): pybind/mgr/volumes: volume deletion not always removes the associa...
- 05:43 PM Bug #45910 (In Progress): pybind/mgr/volumes: volume deletion not always removes the associated o...
- 05:38 PM Bug #45910 (Resolved): pybind/mgr/volumes: volume deletion not always removes the associated osd ...
- ...
- 03:45 PM Backport #44330: nautilus: qa: multimds suite using centos7
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35184
merged
- 03:45 PM Backport #45804: nautilus: qa: verify sub-suite does not define os_version
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/35184
merged
- 01:49 PM Feature #45729 (New): pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes indepen...
- After some discussion and agreeing on the approach, below is the proposed design:
*Direct Addressing Scheme for Sn...
- 01:45 PM Bug #45740 (Fix Under Review): mgr/nfs: Check cluster exists before creating exports and make exp...
- 12:01 PM Feature #45906 (Resolved): mds: make threshold for MDS_TRIM warning configurable
- The MDS_TRIM health warning currently triggers at a hard-coded factor-2 threshold. I've got a setup here that runs so...
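For context, a minimal sketch of the threshold check being made configurable (the function and parameter names here are assumptions for illustration, not the actual MDS code):
<pre><code class="python">
def mds_trim_warning(num_segments, max_segments, warn_factor=2.0):
    # MDS_TRIM fires when the journal holds more segments than allowed.
    # Previously the factor of 2 was effectively hard-coded; the request
    # is to expose it as a tunable like warn_factor here.
    return num_segments > warn_factor * max_segments
</code></pre>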
- 11:32 AM Feature #45741 (In Progress): mgr/volumes/nfs: Add interface for get and list exports
- 11:32 AM Bug #45744 (In Progress): mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
06/04/2020
- 02:36 PM Backport #45887 (In Progress): nautilus: client: fails to reconnect to MDS
- 11:56 AM Backport #45887 (Resolved): nautilus: client: fails to reconnect to MDS
- https://github.com/ceph/ceph/pull/35403
- 02:13 PM Backport #45898 (In Progress): nautilus: mds: add config to require forward to auth MDS
- 02:12 PM Backport #45898 (Resolved): nautilus: mds: add config to require forward to auth MDS
- https://github.com/ceph/ceph/pull/35377
- 02:12 PM Bug #45875 (Pending Backport): mds: add config to require forward to auth MDS
- 09:22 AM Bug #45875 (Resolved): mds: add config to require forward to auth MDS
- This is a backport tracker ticket for https://github.com/ceph/ceph/pull/29995
- 02:04 PM Backport #45854 (In Progress): nautilus: cephfs-journal-tool: NetHandler create_socket couldn't c...
- 02:01 PM Backport #45852 (In Progress): nautilus: mds: scrub on directory with recently created files may ...
- 01:56 PM Backport #45847 (In Progress): nautilus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connecti...
- 01:56 PM Backport #45843 (In Progress): nautilus: ceph-fuse: the -d option couldn't enable the debug mode ...
- 01:55 PM Backport #45839 (In Progress): nautilus: mds may start to fragment dirfrag before rollback finishes
- 01:38 PM Backport #45774 (In Progress): nautilus: vstart_runner: LocalFuseMount.mount should set self.mount...
- 01:33 PM Backport #45709 (In Progress): nautilus: mds: wrong link count under certain circumstance
- 01:09 PM Backport #45689 (In Progress): nautilus: nfs-ganesha: handle client cache pressure in NFS Ganesha...
- 12:59 PM Backport #45686 (In Progress): nautilus: mds: FAILED assert(locking == lock) in MutationImpl::fin...
- 12:56 PM Backport #45679 (In Progress): nautilus: mds: layout parser does not handle [-.] in pool names
- 11:56 AM Bug #45590 (Resolved): qa: TypeError: unsupported operand type(s) for +: 'range' and 'range'
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:56 AM Backport #45888 (Resolved): octopus: client: fails to reconnect to MDS
- https://github.com/ceph/ceph/pull/35616
- 11:56 AM Backport #45886 (Resolved): octopus: qa: AssertionError: '1' != b'1'
- https://github.com/ceph/ceph/pull/35364
06/03/2020
- 06:55 PM Bug #45866 (Fix Under Review): ceph-fuse build failure against libfuse v3.9.1
- 04:00 PM Bug #45866 (Resolved): ceph-fuse build failure against libfuse v3.9.1
- I got this, building master today against libfuse v3.9.1:...
- 04:07 PM Backport #45845: octopus: ceph-fuse: building the source code failed with libfuse3.5 or higher ve...
- Needs fix for: #45866
- 12:19 PM Backport #45845 (Resolved): octopus: ceph-fuse: building the source code failed with libfuse3.5 o...
- https://github.com/ceph/ceph/pull/35450
- 02:59 PM Bug #45817 (Fix Under Review): qa: Command failed with status 2: ['sudo', 'bash', '-c', 'ip addr ...
- 02:43 PM Bug #45666 (Pending Backport): qa: AssertionError: '1' != b'1'
- 02:39 PM Bug #45665 (Pending Backport): client: fails to reconnect to MDS
- 02:35 PM Bug #45835 (Triaged): mds: OpenFileTable::prefetch_inodes during rejoin can cause out-of-memory
- 10:33 AM Bug #45835 (Resolved): mds: OpenFileTable::prefetch_inodes during rejoin can cause out-of-memory
- We just upgraded from mimic v13.2.6 to nautilus v14.2.9 and the single active MDS was going out-of-memory during the ...
- 01:49 PM Feature #42447: add basic client setup page
- This was merged in commit 85df3a5fb2d388.
- 01:46 PM Bug #45532 (Resolved): cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- The patch for this went into v5.7 and should trickle out to v5.4 and v5.6 stable series kernels soon.
- 01:45 PM Bug #45338: find leads to recursive output with nfs mount
- Note: similar-sounding bug here that was fixed with an upgrade (though he doesn't say to which version):
https://g...
- 01:16 PM Backport #45675 (Resolved): nautilus: qa: TypeError: unsupported operand type(s) for +: 'range' a...
- https://github.com/ceph/ceph/pull/34171
- 12:23 PM Backport #45854 (Resolved): nautilus: cephfs-journal-tool: NetHandler create_socket couldn't crea...
- https://github.com/ceph/ceph/pull/35401
- 12:23 PM Backport #45853 (Resolved): octopus: cephfs-journal-tool: NetHandler create_socket couldn't creat...
- https://github.com/ceph/ceph/pull/40762
- 12:22 PM Backport #45852 (Resolved): nautilus: mds: scrub on directory with recently created files may fai...
- https://github.com/ceph/ceph/pull/35400
- 12:21 PM Backport #45851 (Resolved): octopus: mds: scrub on directory with recently created files may fail...
- https://github.com/ceph/ceph/pull/35555
- 12:21 PM Bug #43598 (Resolved): mds: PurgeQueue does not handle objecter errors
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:21 PM Bug #44380 (Resolved): mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" == g_c...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:21 PM Bug #44389 (Resolved): client: fuse mount will print call trace with incorrect options
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:20 PM Bug #44680 (Resolved): mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:20 PM Bug #44962 (Resolved): "ceph fs status" command outputs to stderr instead of stdout when json for...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:20 PM Bug #44963 (Resolved): fix MClientCaps::FLAG_SYNC in check_caps
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:20 PM Bug #45090 (Resolved): mds: inode's xattr_map may reference a large memory.
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:20 PM Bug #45141 (Resolved): some obsolete "ceph mds" sub commands are suggested by bash completion
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:19 PM Backport #45850 (Resolved): nautilus: mgr/volumes: create fs subvolumes with isolated RADOS names...
- https://github.com/ceph/ceph/pull/35482
- 12:19 PM Backport #45849 (Resolved): octopus: mgr/volumes: create fs subvolumes with isolated RADOS namesp...
- https://github.com/ceph/ceph/pull/35671
- 12:19 PM Backport #45848 (Resolved): octopus: qa/tasks/vstart_runner.py: TypeError: mount() got an unexpec...
- https://github.com/ceph/ceph/pull/35554
- 12:19 PM Backport #45847 (Resolved): nautilus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections...
- https://github.com/ceph/ceph/pull/35399
- 12:19 PM Backport #45846 (Resolved): octopus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections ...
- https://github.com/ceph/ceph/pull/35451
- 12:18 PM Backport #45843 (Resolved): nautilus: ceph-fuse: the -d option couldn't enable the debug mode in ...
- https://github.com/ceph/ceph/pull/35398
- 12:18 PM Backport #45842 (Resolved): octopus: ceph-fuse: the -d option couldn't enable the debug mode in l...
- https://github.com/ceph/ceph/pull/35449
- 12:18 PM Backport #45839 (Resolved): nautilus: mds may start to fragment dirfrag before rollback finishes
- https://github.com/ceph/ceph/pull/35397
- 12:17 PM Backport #45838 (Resolved): octopus: mds may start to fragment dirfrag before rollback finishes
- https://github.com/ceph/ceph/pull/35448
- 12:12 PM Bug #45662 (Fix Under Review): pybind/mgr/volumes: volume deletion should check mon_allow_pool_de...
- 09:10 AM Bug #45834 (Closed): cephadm: "fs volume create cephfs" overwrites existing placement specification
- The orchestrator behaves unexpectedly with @apply mds@. Consider the following:
I have a ceph cluster running and wa...
- 04:54 AM Feature #45830 (Fix Under Review): vstart: Support deployment of ganesha daemon by cephadm with N...
- 04:54 AM Feature #45830 (Resolved): vstart: Support deployment of ganesha daemon by cephadm with NFS option
- 01:58 AM Bug #43543 (Pending Backport): mds: scrub on directory with recently created files may fail to lo...
- 12:48 AM Bug #45396 (Pending Backport): ceph-fuse: building the source code failed with libfuse3.5 or high...
- 12:45 AM Feature #45289 (Pending Backport): mgr/volumes: create fs subvolumes with isolated RADOS namespaces
- 12:43 AM Bug #45304 (Pending Backport): qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections is absent
- 12:41 AM Bug #41034 (Pending Backport): cephfs-journal-tool: NetHandler create_socket couldn't create socket
- 12:40 AM Bug #45524 (Pending Backport): ceph-fuse: the -d option couldn't enable the debug mode in libfuse
- 12:35 AM Bug #45699 (Pending Backport): mds may start to fragment dirfrag before rollback finishes
- 12:25 AM Bug #45829: fs: ceph_test_libcephfs abort in TestUtime
- Another in a different master test branch:
/ceph/teuthology-archive/pdonnell-2020-06-02_19:51:19-fs-wip-pdonnell-t...
06/02/2020
- 08:56 PM Feature #36253 (Resolved): cephfs: clients should send usage metadata to MDSs for administration/...
- \o/
- 08:47 PM Bug #45829 (Resolved): fs: ceph_test_libcephfs abort in TestUtime
- ...
- 07:40 PM Backport #45827 (Resolved): nautilus: MDS config reference lists mds log max expiring
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35278
m...
- 07:38 PM Backport #45827 (In Progress): nautilus: MDS config reference lists mds log max expiring
- 07:38 PM Backport #45827 (Resolved): nautilus: MDS config reference lists mds log max expiring
- https://github.com/ceph/ceph/pull/35278
- 07:38 PM Backport #45826 (Rejected): mimic: MDS config reference lists mds log max expiring
- https://github.com/ceph/ceph/pull/35515
- 07:38 PM Backport #45825 (Resolved): luminous: MDS config reference lists mds log max expiring
- https://github.com/ceph/ceph/pull/35516
- 07:37 PM Documentation #45730 (Pending Backport): MDS config reference lists mds log max expiring
- 07:02 PM Bug #45339 (Resolved): qa/cephfs: run nsenter commands with superuser privileges
- 02:40 PM Bug #45300 (Pending Backport): qa/tasks/vstart_runner.py: TypeError: mount() got an unexpected ke...
- 02:36 PM Bug #45806: qa/task/vstart_runner.py: setting the network namespace "ceph-ns--tmp-tmpq1pg2pz7-mnt...
- Patrick Donnelly wrote:
> Xiubo Li wrote:
> > If the previous test cases failed, the netnses will not be removed/cl...
- 02:35 PM Bug #45806: qa/task/vstart_runner.py: setting the network namespace "ceph-ns--tmp-tmpq1pg2pz7-mnt...
- Xiubo Li wrote:
> If the previous test cases failed, the netnses will not be removed/cleaned up, and we cannot be su...
- 04:15 AM Bug #45806 (Fix Under Review): qa/task/vstart_runner.py: setting the network namespace "ceph-ns--...
- If the previous test cases failed, the netnses will not be removed/cleaned up, and we cannot be sure what state they ...
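A sketch of the kind of defensive cleanup this implies, using the stock @ip netns@ commands (the prefix and helper are illustrative, not the actual vstart_runner code):
<pre><code class="python">
import subprocess

def cleanup_stale_netns(prefix="ceph-ns-"):
    # Delete leftover test namespaces from a previously failed run so a
    # fresh mount cannot inherit their unknown state.
    out = subprocess.check_output(["ip", "netns", "list"]).decode()
    for line in out.splitlines():
        if not line.strip():
            continue
        name = line.split()[0]  # "ip netns list" may append "(id: N)"
        if name.startswith(prefix):
            subprocess.check_call(["sudo", "ip", "netns", "delete", name])
</code></pre>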
- 04:11 AM Bug #45806 (Resolved): qa/task/vstart_runner.py: setting the network namespace "ceph-ns--tmp-tmpq...
- ...
- 02:30 PM Bug #45817 (Resolved): qa: Command failed with status 2: ['sudo', 'bash', '-c', 'ip addr add 192....
- ...
- 01:32 PM Bug #45813 (Duplicate): qa/cephfs: tests kclient crash with "unexpected keyword" error
- Closing in favour of https://tracker.ceph.com/issues/45300
- 12:10 PM Bug #45813 (Fix Under Review): qa/cephfs: tests kclient crash with "unexpected keyword" error
- 10:59 AM Bug #45813 (Duplicate): qa/cephfs: tests kclient crash with "unexpected keyword" error
- @mount_wait()@ was added to mount.py with the assumption that all mount methods accept mountpoint in "this commit":ht...
- 12:36 PM Bug #45815 (Resolved): vstart_runner.py: set stdout and stderr to None by default
- Right now both are set to BytesIO() in LocalRemoteProcess.__init__() when values are not passed. See - https://github...
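A simplified sketch of the proposed default (the shape of the change only, not the real class):
<pre><code class="python">
from io import BytesIO

class LocalRemoteProcess:
    # Proposed: default to None and let callers opt in to capture,
    # instead of always allocating BytesIO buffers.
    def __init__(self, args, stdout=None, stderr=None):
        self.args = args
        self.stdout = stdout
        self.stderr = stderr

# A caller that wants captured output opts in explicitly:
proc = LocalRemoteProcess(["true"], stdout=BytesIO())
</code></pre>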
- 04:31 AM Bug #45593: qa: removing network bridge appears to cause dropped packets
- The netns was only set up on smithi114, but the connection to the smithi038 test node was lost (node 'smithi038' is offline...
06/01/2020
- 07:49 PM Backport #45708 (Resolved): octopus: mds: wrong link count under certain circumstance
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35253
m...
- 07:15 PM Backport #45708: octopus: mds: wrong link count under certain circumstance
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35253
merged
- 07:48 PM Backport #45685 (Resolved): octopus: mds: FAILED assert(locking == lock) in MutationImpl::finish_...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35252
m...
- 07:15 PM Backport #45685: octopus: mds: FAILED assert(locking == lock) in MutationImpl::finish_locking
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35252
merged
- 07:48 PM Backport #45678 (Resolved): octopus: mds: layout parser does not handle [-.] in pool names
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35251
m...
- 07:14 PM Backport #45678: octopus: mds: layout parser does not handle [-.] in pool names
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35251
merged
- 07:48 PM Backport #45674 (Resolved): octopus: qa: TypeError: unsupported operand type(s) for +: 'range' an...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35250
m...
- 07:13 PM Backport #45674: octopus: qa: TypeError: unsupported operand type(s) for +: 'range' and 'range'
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35250
merged
- 07:47 PM Backport #45688: octopus: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35150
m...
- 07:40 PM Backport #45603: octopus: mds: PurgeQueue does not handle objecter errors
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35148
m...
- 07:40 PM Backport #45601: octopus: mds: inode's xattr_map may reference a large memory.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35147
m...
- 07:40 PM Backport #45495 (Resolved): octopus: client: fuse mount will print call trace with incorrect options
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34999
m...
- 07:39 PM Backport #45477 (Resolved): octopus: fix MClientCaps::FLAG_SYNC in check_caps
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34997
m...
- 07:39 PM Backport #45473 (Resolved): octopus: some obsolete "ceph mds" sub commands are suggested by bash ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34996
m...
- 07:38 PM Backport #45251 (Resolved): octopus: "ceph fs status" command outputs to stderr instead of stdout...
- 07:37 PM Backport #45028 (Resolved): octopus: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- 07:34 PM Backport #45600 (Resolved): nautilus: mds: inode's xattr_map may reference a large memory.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35199
m...
- 07:10 PM Backport #45600: nautilus: mds: inode's xattr_map may reference a large memory.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35199
merged
- 07:34 PM Backport #45497 (Resolved): nautilus: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35185
m...
- 07:10 PM Backport #45497: nautilus: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" ==...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35185
merged
- 07:33 PM Backport #45602 (Resolved): nautilus: mds: PurgeQueue does not handle objecter errors
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35149
m...
- 07:05 PM Backport #45602: nautilus: mds: PurgeQueue does not handle objecter errors
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35149
merged
- 07:33 PM Backport #45478 (Resolved): nautilus: fix MClientCaps::FLAG_SYNC in check_caps
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35118
m...
- 07:04 PM Backport #45478: nautilus: fix MClientCaps::FLAG_SYNC in check_caps
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35118
merged
- 07:33 PM Backport #45474 (Resolved): nautilus: some obsolete "ceph mds" sub commands are suggested by bash...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35117
m...
- 07:03 PM Backport #45474: nautilus: some obsolete "ceph mds" sub commands are suggested by bash completion
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35117
merged
- 07:32 PM Backport #45804 (In Progress): nautilus: qa: verify sub-suite does not define os_version
- 06:29 PM Backport #45804 (Resolved): nautilus: qa: verify sub-suite does not define os_version
- https://github.com/ceph/ceph/pull/35184
- 06:29 PM Bug #43516 (Pending Backport): qa: verify sub-suite does not define os_version
- 01:49 PM Bug #45665 (Fix Under Review): client: fails to reconnect to MDS
- 02:06 AM Bug #45665: client: fails to reconnect to MDS
- If the ceph-fuse client needs to flush the caps and does a sync wait, the umount() will just return successfully, then t...
- 01:49 PM Documentation #45730 (Fix Under Review): MDS config reference lists mds log max expiring
- 03:52 AM Feature #20196: mds: early reintegration of strays on hardlink deletion
- Zheng Yan wrote:
> we can more files in strays now, https://github.com/ceph/ceph/pull/33479
Hi zheng
I anti-...
05/30/2020
05/29/2020
- 06:23 PM Feature #45746: mgr/nfs: Add interface to update export
- The export create command will be strictly idempotent. Exports will be updated from a JSON file.
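A sketch of what strict idempotency could mean here (the in-memory store and helper are assumptions for illustration, not the mgr/nfs code):
<pre><code class="python">
def create_export(exports, export_id, spec):
    # Re-running create with an identical spec is a no-op; a conflicting
    # spec is an error; otherwise the export is created.
    existing = exports.get(export_id)
    if existing == spec:
        return existing
    if existing is not None:
        raise ValueError("export %r exists with a different spec" % export_id)
    exports[export_id] = spec
    return spec
</code></pre>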
- 05:30 PM Backport #45774 (Resolved): nautilus: vstart_runner: LocalFuseMount.mount should set self.mounted ...
- https://github.com/ceph/ceph/pull/35396
- 05:29 PM Backport #45773 (Resolved): octopus: vstart_runner: LocalFuseMount.mount should set self.mounted t...
- https://github.com/ceph/ceph/pull/35447
- 05:02 PM Feature #45743: mgr/nfs: Add interface to show cluster information
- Patrick Donnelly wrote:
> There's two blocking issues we see:
>
> * cephadm does not permit deploying Ganesha wit...
- 04:31 PM Feature #45743: mgr/nfs: Add interface to show cluster information
- There's two blocking issues we see:
* cephadm does not permit deploying Ganesha with non-standard ports (i.e. not ...
- 03:36 AM Bug #45665: client: fails to reconnect to MDS
- When the mds daemon is restarting or trying to reconnect the client, while the client is trying to umount and waitin...
05/28/2020
- 05:54 PM Bug #45749 (Won't Fix): client: num_caps shows number of caps received
- It should be the number of outstanding caps to match the MDS asok info.
- 04:55 PM Bug #45745 (Triaged): mgr/nfs: Move enable pool to cephadm
- 02:16 PM Bug #45745 (Rejected): mgr/nfs: Move enable pool to cephadm
- 04:55 PM Bug #45744 (Triaged): mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
- 02:15 PM Bug #45744 (Resolved): mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
- 04:52 PM Bug #45740 (Triaged): mgr/nfs: Check cluster exists before creating exports and make exports pers...
- 01:56 PM Bug #45740 (Resolved): mgr/nfs: Check cluster exists before creating exports and make exports per...
- Check if cluster exists before creating exports and add tests for it.
vstart needs to be updated. As we are using te...
- 03:27 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Andrej Filipcic wrote:
>
> I have checked 5.6.15 kernel patches, but this fix does not seem to be there yet?
>
...
- 03:13 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Hi,
I have checked 5.6.15 kernel patches, but this fix does not seem to be there yet?
Cheers,
Andrej
- 02:19 PM Feature #45747 (Resolved): pybind/mgr/nfs: add interface for adding user defined configuration
- The common config in RADOS (which presently just links to each export config) should also include a user-defined conf...
- 02:17 PM Feature #45746 (Resolved): mgr/nfs: Add interface to update export
- 02:03 PM Feature #45743 (Resolved): mgr/nfs: Add interface to show cluster information
- ...
- 02:00 PM Feature #45742 (Resolved): mgr/nfs: Add interface for listing cluster
- ceph nfs cluster list
- 01:58 PM Feature #45741 (Resolved): mgr/volumes/nfs: Add interface for get and list exports
- ...
- 02:18 AM Bug #43943: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 seconds"
- /a/yuriw-2020-05-24_19:30:40-rados-wip-yuri-master_5.24.20-distro-basic-smithi/5087753
05/27/2020
- 03:33 PM Bug #45723 (Pending Backport): vstart_runner: LocalFuseMount.mount should set self.mounted to True
- 09:31 AM Bug #45723 (Fix Under Review): vstart_runner: LocalFuseMount.mount should set self.mounted to True
- 09:15 AM Bug #45723 (Resolved): vstart_runner: LocalFuseMount.mount should set self.mounted to True
- When it is not set to True, the cleanup doesn't run on teardown since cleanup methods just exit when @self.mounted@ is s...
- 02:41 PM Documentation #45730 (Resolved): MDS config reference lists mds log max expiring
- Seems like this option was removed in mimic and backported to luminous.
- 01:53 PM Feature #45729 (Need More Info): pybind/mgr/volumes: Add the ability to keep snapshots of subvolu...
- 12:41 PM Feature #45729 (Resolved): pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes in...
- From the perspective of CSI and its volume life cycle management, a snapshot of a volume is expected to survive beyon...
- 01:42 PM Bug #45663 (Triaged): luminous to nautilus upgrade
- 01:42 PM Bug #45665 (Triaged): client: fails to reconnect to MDS
- 08:31 AM Bug #45283 (Closed): Kernel log flood "ceph: Failed to find inode for 1"
- Closing, issue is being handled by the ubuntu kernel team in the launchpad URL (comment #7).
- 02:26 AM Backport #45680 (In Progress): octopus: mgr/volumes: Not able to resize cephfs subvolume with cep...
- 02:18 AM Backport #45601 (Resolved): octopus: mds: inode's xattr_map may reference a large memory.
- 02:18 AM Backport #45603 (Resolved): octopus: mds: PurgeQueue does not handle objecter errors
- 02:17 AM Backport #45688 (Resolved): octopus: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
05/26/2020
- 10:16 PM Backport #45688: octopus: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35150
merged
- 10:15 PM Backport #45603: octopus: mds: PurgeQueue does not handle objecter errors
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35148
merged
- 10:14 PM Backport #45601: octopus: mds: inode's xattr_map may reference a large memory.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35147
merged
- 10:13 PM Backport #45495: octopus: client: fuse mount will print call trace with incorrect options
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34999
merged
- 10:10 PM Backport #45477: octopus: fix MClientCaps::FLAG_SYNC in check_caps
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34997
merged
- 10:09 PM Backport #45473: octopus: some obsolete "ceph mds" sub commands are suggested by bash completion
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34996
merged
- 10:08 PM Backport #45251: octopus: "ceph fs status" command outputs to stderr instead of stdout when json ...
- Kotresh Hiremath Ravishankar wrote:
> https://github.com/ceph/ceph/pull/34727
merged
- 10:07 PM Backport #45028: octopus: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34509
merged
- 09:25 PM Bug #45434: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- http://pulpito.ceph.com/yuriw-2020-05-21_22:43:47-kcephfs-wip-yuri-testing-2020-05-21-2001-octopus-distro-basic-smith...
- 07:56 PM Backport #45708 (In Progress): octopus: mds: wrong link count under certain circumstance
- 12:15 PM Backport #45708 (Resolved): octopus: mds: wrong link count under certain circumstance
- https://github.com/ceph/ceph/pull/35253
- 07:44 PM Backport #45685 (In Progress): octopus: mds: FAILED assert(locking == lock) in MutationImpl::fini...
- 07:43 PM Backport #45678 (In Progress): octopus: mds: layout parser does not handle [-.] in pool names
- 07:41 PM Backport #45674 (In Progress): octopus: qa: TypeError: unsupported operand type(s) for +: 'range'...
- 12:15 PM Backport #45709 (Resolved): nautilus: mds: wrong link count under certain circumstance
- https://github.com/ceph/ceph/pull/35394
05/25/2020
- 06:45 PM Bug #45024 (Pending Backport): mds: wrong link count under certain circumstance
- 02:28 PM Bug #44172 (Resolved): cephfs-journal-tool: cannot set --dry_run arg
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:54 PM Bug #45699 (Fix Under Review): mds may start to fragment dirfrag before rollback finishes
- 01:46 PM Bug #45699 (Resolved): mds may start to fragment dirfrag before rollback finishes
- /ceph/teuthology-archive/pdonnell-2020-05-21_21:34:09-kcephfs-wip-pdonnell-testing-20200520.182104-distro-basic-smith...
- 11:57 AM Bug #45114 (Resolved): client: make cache shrinking callbacks available via libcephfs
- backports will be handled via #12334 of which this appears to be a duplicate?
* octopus backport issue: #45688
- 11:54 AM Backport #45688 (In Progress): octopus: nfs-ganesha: handle client cache pressure in NFS Ganesha ...
- 11:07 AM Backport #45496 (Resolved): nautilus: client: fuse mount will print call trace with incorrect opt...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35000
m...
- 11:07 AM Backport #45221 (Resolved): nautilus: cephfs-journal-tool: cannot set --dry_run arg
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34784
m...
- 11:07 AM Backport #45217: nautilus: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34783
m...
- 11:07 AM Backport #44483: nautilus: mds: assertion failure due to blacklist
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34435
m...
- 11:07 AM Backport #44478: nautilus: mds: assert(p != active_requests.end())
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34338
m...
05/24/2020
- 09:10 PM Backport #45689 (Resolved): nautilus: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- https://github.com/ceph/ceph/pull/35393
- 09:10 PM Backport #45688 (Resolved): octopus: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- https://github.com/ceph/ceph/pull/35150
- 09:07 PM Bug #44132 (Resolved): mds: assertion failure due to blacklist
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:06 PM Bug #44382 (Resolved): qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:05 PM Backport #45687 (Resolved): luminous: mds: FAILED assert(locking == lock) in MutationImpl::finish...
- https://github.com/ceph/ceph/pull/35345
- 09:05 PM Backport #45686 (Resolved): nautilus: mds: FAILED assert(locking == lock) in MutationImpl::finish...
- https://github.com/ceph/ceph/pull/35392
- 09:05 PM Backport #45685 (Resolved): octopus: mds: FAILED assert(locking == lock) in MutationImpl::finish_...
- https://github.com/ceph/ceph/pull/35252
- 09:05 PM Backport #45681 (Resolved): nautilus: mgr/volumes: Not able to resize cephfs subvolume with ceph ...
- https://github.com/ceph/ceph/pull/35482
- 09:04 PM Backport #45680 (Resolved): octopus: mgr/volumes: Not able to resize cephfs subvolume with ceph f...
- https://github.com/ceph/ceph/pull/35256
- 09:04 PM Backport #45679 (Resolved): nautilus: mds: layout parser does not handle [-.] in pool names
- https://github.com/ceph/ceph/pull/35391
- 09:04 PM Backport #45678 (Resolved): octopus: mds: layout parser does not handle [-.] in pool names
- https://github.com/ceph/ceph/pull/35251
- 09:04 PM Backport #45675 (Resolved): nautilus: qa: TypeError: unsupported operand type(s) for +: 'range' a...
- https://github.com/ceph/ceph/pull/34171
- 09:03 PM Backport #45674 (Resolved): octopus: qa: TypeError: unsupported operand type(s) for +: 'range' an...
- https://github.com/ceph/ceph/pull/35250
- 03:26 AM Backport #44478 (Resolved): nautilus: mds: assert(p != active_requests.end())
- 03:26 AM Backport #44483 (Resolved): nautilus: mds: assertion failure due to blacklist
- 03:26 AM Backport #45217 (Resolved): nautilus: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterF...
05/22/2020
- 09:24 PM Bug #45398 (Pending Backport): mgr/volumes: Not able to resize cephfs subvolume with ceph fs subv...
- 09:21 PM Bug #45521 (Pending Backport): mds: layout parser does not handle [-.] in pool names
- 09:19 PM Bug #45261 (Pending Backport): mds: FAILED assert(locking == lock) in MutationImpl::finish_locking
- 09:16 PM Bug #45666 (Fix Under Review): qa: AssertionError: '1' != b'1'
- 09:12 PM Bug #45666 (Resolved): qa: AssertionError: '1' != b'1'
- ...
- 08:57 PM Bug #45665 (Resolved): client: fails to reconnect to MDS
- ...
- 08:47 PM Bug #45664 (New): libcephfs: FAILED LibCephFS.LazyIOMultipleWritersOneReader
- ...
- 08:09 PM Bug #45663: luminous to nautilus upgrade
- Related to this issue
https://tracker.ceph.com/issues/44100
- 08:08 PM Bug #45663 (Triaged): luminous to nautilus upgrade
- I have been using snapshots on cephfs since luminous, 1xfs and
1xactivemds and used an rsync on it for backup (mo...
- 05:46 PM Bug #45662 (Resolved): pybind/mgr/volumes: volume deletion should check mon_allow_pool_delete
- ...
- 05:35 PM Bug #45590 (Pending Backport): qa: TypeError: unsupported operand type(s) for +: 'range' and 'range'
- 05:32 PM Bug #45648 (Duplicate): qa/tasks/mds_thrash.py fails when trying to trash max_mds
- 06:36 AM Bug #45648 (Fix Under Review): qa/tasks/mds_thrash.py fails when trying to trash max_mds
- 06:35 AM Bug #45648 (Duplicate): qa/tasks/mds_thrash.py fails when trying to trash max_mds
- ...
- 04:51 PM Feature #12334 (Pending Backport): nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- No, the backport release list just needs to be updated.
- 01:48 PM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Since target version is set to 16.0.0 and the status was changed to "Resolved", I guess backports are not needed (?)
- 03:30 PM Backport #45496: nautilus: client: fuse mount will print call trace with incorrect options
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35000
merged
- 03:29 PM Backport #45221: nautilus: cephfs-journal-tool: cannot set --dry_run arg
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34784
merged
- 03:28 PM Backport #45217: nautilus: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34783
merged
- 03:28 PM Backport #44483: nautilus: mds: assertion failure due to blacklist
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34435
merged
- 03:27 PM Backport #44478: nautilus: mds: assert(p != active_requests.end())
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34338
merged
- 01:36 PM Bug #45434: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- See this in nautilus testing too,
http://pulpito.ceph.com/yuriw-2020-05-21_00:08:14-kcephfs-wip-yuri3-testing-2020-0...
- 12:44 PM Backport #45600 (In Progress): nautilus: mds: inode's xattr_map may reference a large memory.
05/21/2020
- 05:51 PM Backport #45497 (In Progress): nautilus: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rs...
- 05:50 PM Backport #44330 (In Progress): nautilus: qa: multimds suite using centos7
- 07:54 AM Bug #45531 (Resolved): qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Ope...
05/20/2020
- 04:15 PM Bug #44127 (Resolved): cephfs-shell: read config options from ceph.conf and from ceph config com...
- 03:07 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Yes, it fixes the issue for me. Fast now. I have tested it on 5.6.13 kernel. You got the email right.
Many thanks ...
- 02:47 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Ok, this patch (suggested by Zheng) seems to fix it.
Andrej, can you test it out and confirm whether it fixes it f...
- 01:07 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Ok, threw in a little printk debugging and it looks like the lease generations are not matching up like I'd expect. S...
- 12:21 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Thanks for running the bisect. I can confirm that if I set the d_delete op to NULL, that this problem goes away. That...
- 10:42 AM Feature #12334 (Resolved): nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Ceph patches were merged.
- 08:27 AM Backport #45602 (In Progress): nautilus: mds: PurgeQueue does not handle objecter errors
- 08:25 AM Backport #45603 (In Progress): octopus: mds: PurgeQueue does not handle objecter errors
- 08:18 AM Backport #45601 (In Progress): octopus: mds: inode's xattr_map may reference a large memory.
05/19/2020
- 11:04 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- This one is problematic:...
- 08:03 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Ok, if you feel ambitious, there is a function in fs/ceph/mds_client.c called schedule_delayed(). That requeues the d...
- 07:30 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- I have tried the commit e3ec8d6898f71636a067dae683174ef9bf81bc96 on 5.0.21 kernel, where it applies cleanly, and it w...
- 05:32 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Andrej Filipcic wrote:
> The main issue is that this 5s delay remains forever, eg hours or more, even if the clien...
- 05:03 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- To me it seems it's only related to the directories. This file test is fast enough, and it does not show any differen...
- 04:24 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Ok, I reproduced this today and it seems like it's just that when an inode is created, we generally give a full set o...
- 09:35 AM Backport #45603 (Resolved): octopus: mds: PurgeQueue does not handle objecter errors
- https://github.com/ceph/ceph/pull/35148
- 09:35 AM Backport #45602 (Resolved): nautilus: mds: PurgeQueue does not handle objecter errors
- https://github.com/ceph/ceph/pull/35149
- 09:33 AM Backport #45601 (Resolved): octopus: mds: inode's xattr_map may reference a large memory.
- https://github.com/ceph/ceph/pull/35147
- 09:33 AM Backport #45600 (Resolved): nautilus: mds: inode's xattr_map may reference a large memory.
- https://github.com/ceph/ceph/pull/35199
- 06:12 AM Backport #45478 (In Progress): nautilus: fix MClientCaps::FLAG_SYNC in check_caps
- 06:09 AM Backport #45474 (In Progress): nautilus: some obsolete "ceph mds" sub commands are suggested by b...
- 01:50 AM Bug #45593 (In Progress): qa: removing network bridge appears to cause dropped packets
- It seems it is not an issue with removing the NAT rule; this began very early and has lasted for minutes already:...
05/18/2020
- 10:14 PM Bug #45552 (Resolved): qa/task/vstart_runner.py: admin_socket: exception getting command descript...
- 10:12 PM Bug #45090 (Pending Backport): mds: inode's xattr_map may reference a large memory.
- 10:11 PM Bug #45090 (Resolved): mds: inode's xattr_map may reference a large memory.
- 10:10 PM Bug #43598 (Pending Backport): mds: PurgeQueue does not handle objecter errors
- 10:08 PM Bug #45114 (Pending Backport): client: make cache shrinking callbacks available via libcephfs
- 10:01 PM Bug #45373 (Resolved): cephfs-shell: OSError type exceptions throw object has no attribute 'get_e...
- 09:58 PM Bug #45430 (Resolved): qa/cephfs: cleanup() and cleanup_netns() needs to be run even FS was not m...
- 09:29 PM Bug #45593 (Rejected): qa: removing network bridge appears to cause dropped packets
- ...
- 07:59 PM Bug #45590 (Fix Under Review): qa: TypeError: unsupported operand type(s) for +: 'range' and 'range'
- 07:54 PM Bug #45590 (Resolved): qa: TypeError: unsupported operand type(s) for +: 'range' and 'range'
- ...
- 05:59 PM Bug #45521: mds: layout parser does not handle [-.] in pool names
- Zheng Yan wrote:
>
> is this behavior related to this issue?
>
Not at all -- I put this in the wrong tracker....
- 01:48 PM Bug #45532 (Triaged): cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- 01:46 PM Bug #45553 (Duplicate): mds: rstats on snapshot are updated by changes to HEAD
- 01:12 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Okay, I will work on writing up a new ticket for the slow requests problem and at the moment not do anything to trou...
- 12:58 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Unfortunately, ganesha doesn't have great instrumentation in this area. There is a ganeshactl program that ships with...
- 12:22 PM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Ganesha patches are merged and have been for over a week. The libcephfs bits are also still ready, but testing is tak...
- 04:15 AM Bug #45575 (Resolved): cephfs-journal-tool: incorrect read_offset after finding missing objects
- In JournalScanner::scan_events(), read_offset is not increased when missing objects are found, which will lead to a wrong r...
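A simplified sketch of the scan loop and the fix implied above (a Python stand-in for the C++ JournalScanner, assuming a fixed object size):
<pre><code class="python">
def scan_objects(objects, object_size):
    # `objects` maps object index -> True (present) / False (missing).
    read_offset = 0
    for idx in sorted(objects):
        if not objects[idx]:
            # The fix: advance read_offset past the missing object;
            # leaving it unchanged made every later read start from a
            # stale, wrong offset.
            read_offset = (idx + 1) * object_size
            continue
        # ... decode the events in this object, consuming object_size ...
        read_offset += object_size
    return read_offset
</code></pre>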
05/16/2020
- 07:43 PM Documentation #45573 (New): doc: client: client_reconnect_stale=1
- The existing documentation is out of date: https://docs.ceph.com/docs/mimic/cephfs/eviction/#advanced-un-blacklisting...
- 02:21 AM Cleanup #45525 (Resolved): qa/task/cephfs/mount.py: skip saving/restoring the previous value for ...
05/15/2020
- 03:18 PM Bug #45398 (Fix Under Review): mgr/volumes: Not able to resize cephfs subvolume with ceph fs subv...
- 08:41 AM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Okay I have left the nfs and old ceph cluster connected and haven't seen a cache pressure message. Added a client mou...
- 03:36 AM Bug #45530 (Fix Under Review): qa/tasks/cephfs/test_snapshots.py: Command failed with status 1: [...
- 03:35 AM Bug #45531 (Fix Under Review): qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD fai...
- This PR is fixing it: https://github.com/ceph/ceph-build/pull/1569
- 12:42 AM Bug #45531: qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Operation not ...
- Jeff Layton wrote:
> Got it, so basically you just need to go through and vet all of those and convert the ones that...
- 03:15 AM Cleanup #45525 (Fix Under Review): qa/task/cephfs/mount.py: skip saving/restoring the previous va...
- taking this -- it's particularly disruptive for qa testing and the fix is simple.
05/14/2020
- 08:43 PM Bug #45538 (Triaged): qa: Fix string/byte comparison mismatch in test_exports
- 03:33 PM Bug #45531: qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Operation not ...
- Got it, so basically you just need to go through and vet all of those and convert the ones that were configured as "m...
- 02:44 PM Bug #45531: qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Operation not ...
- Hi Jeff,
You are right. Checked it again: ...
- 02:37 PM Bug #45531: qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Operation not ...
- Xiubo also asked on IRC:...
- 02:34 PM Bug #45531: qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Operation not ...
- I dropped my PR as Ilya (rightly) pointed out that updating the configs from a distro kernel would pull in a bunch of...
- 12:13 AM Bug #45531: qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Operation not ...
- We are missing some kconfig contents:...
- 11:40 AM Bug #45553 (Duplicate): mds: rstats on snapshot are updated by changes to HEAD
- The "ceph.dir.rbytes" xattr on .snap/<snap_name> directory is getting updated on already taken snapshots.
Check the ...
- 10:18 AM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Much simpler test case:...
- 09:26 AM Bug #45552 (Fix Under Review): qa/task/vstart_runner.py: admin_socket: exception getting command ...
- 08:53 AM Bug #45552 (In Progress): qa/task/vstart_runner.py: admin_socket: exception getting command descr...
- 08:53 AM Bug #45552 (Resolved): qa/task/vstart_runner.py: admin_socket: exception getting command descript...
- ...
- 05:19 AM Bug #45304 (Fix Under Review): qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections is absent
05/13/2020
- 10:49 PM Bug #45100: qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)
- Octopus too: /ceph/teuthology-archive/yuriw-2020-05-09_21:30:44-kcephfs-wip-yuri-octopus_15.2.2_RC0-distro-basic-smit...
- 08:50 PM Bug #43515 (Resolved): qa: SyntaxError: invalid token
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:49 PM Feature #44277 (Resolved): pybind/mgr/volumes: add command to return metadata regarding a subvolume
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:49 PM Bug #44801 (Resolved): client: write stuck at waiting for larger max_size
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:59 PM Bug #45538 (Triaged): qa: Fix string/byte comparison mismatch in test_exports
- mount.getfattr() returns string rather than bytes after https://github.com/ceph/ceph/pull/34941. This produces assert...
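A sketch of the kind of normalization that resolves the mismatch (the helper and attribute below are illustrative, not the exact test_exports fix):
<pre><code class="python">
def assert_pin(mount, path, expected="1"):
    # mount.getfattr() returns str after ceph PR 34941 and bytes before;
    # normalize so the test never asserts '1' != b'1'.
    val = mount.getfattr(path, "ceph.dir.pin")
    if isinstance(val, bytes):
        val = val.decode("utf-8")
    assert val == expected, "%r != %r" % (val, expected)
</code></pre>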
- 04:47 PM Bug #45531: qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Operation not ...
- We are missing some kconfig contents:
CONFIG_NF_TABLES/CONFIG_NF_TABLES_IPV4/CONFIG_NF_TABLES_ARP/CONFIG_NF_TABLE...
- 09:25 AM Bug #45531 (Resolved): qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Ope...
- ...
- 01:46 PM Bug #45521: mds: layout parser does not handle [-.] in pool names
- Jeff Layton wrote:
> The kernel client only copies off the layout when given Fw or Fr caps.
>
is this behavi...
- 01:04 PM Bug #45521 (In Progress): mds: layout parser does not handle [-.] in pool names
- 12:47 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- We are officially moved off of the old cluster so now I can mess around with the old one without any worries (still u...
- 11:15 AM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Cephfs for our cluster has grown significantly in the number of clients due to joining a large HTC grid. I will not e...
- 12:27 PM Bug #45532 (Resolved): cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Hi,
a simple test script (testdirs.sh), demonstrating the problem described below, creating 10k files in ~1000 dir... - 08:54 AM Bug #45530 (Resolved): qa/tasks/cephfs/test_snapshots.py: Command failed with status 1: ['cd', '|...
- ...
- 04:03 AM Cleanup #45525: qa/task/cephfs/mount.py: skip saving/restoring the previous value for ip_forward
- ...
- 03:31 AM Cleanup #45525 (In Progress): qa/task/cephfs/mount.py: skip saving/restoring the previous value f...
- 03:30 AM Cleanup #45525 (Resolved): qa/task/cephfs/mount.py: skip saving/restoring the previous value for ...
- Skip saving/restoring the previous value for ip_forward and just hardcode it to 1.
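A sketch of the simplified setup this describes (the @remote.run@-style call mirrors the teuthology API but is an assumption here):
<pre><code class="python">
def enable_ip_forward(remote):
    # Unconditionally enable forwarding instead of saving and restoring
    # the sysctl's previous value.
    remote.run(args=["sudo", "bash", "-c",
                     "echo 1 > /proc/sys/net/ipv4/ip_forward"])
</code></pre>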
- 03:38 AM Bug #45524 (Fix Under Review): ceph-fuse: the -d option couldn't enable the debug mode in libfuse
- 03:25 AM Bug #45524 (In Progress): ceph-fuse: the -d option couldn't enable the debug mode in libfuse
- 03:24 AM Bug #45524 (Resolved): ceph-fuse: the -d option couldn't enable the debug mode in libfuse
- ...
05/12/2020
- 07:15 PM Bug #45521: mds: layout parser does not handle [-.] in pool names
- The kernel client only copies off the layout when given Fw or Fr caps.
We could change the MDS to gratuitously...
- 05:47 PM Bug #45521 (Resolved): mds: layout parser does not handle [-.] in pool names
- ...
- 07:03 PM Bug #45459 (Resolved): qa/task/cephfs/mount.py: Error: Connection activation failed: Activation f...
- 02:07 PM Feature #20196: mds: early reintegration of strays on hardlink deletion
- we can more files in strays now, https://github.com/ceph/ceph/pull/33479
- 02:35 AM Bug #45446 (Resolved): vstart_runner.py: using python3 leads to TypeError: unhashable type: 'Raw'
05/11/2020
- 07:56 PM Bug #45430 (Fix Under Review): qa/cephfs: cleanup() and cleanup_netns() needs to be run even FS w...
- 07:45 PM Feature #45371 (In Progress): mgr/volumes: `protect` and `clone` operation in a single transaction
- 07:32 PM Bug #45332 (Resolved): qa: TestExports is failure under new Python3 runtime
- 07:15 PM Bug #45283 (Triaged): Kernel log flood "ceph: Failed to find inode for 1"
- 06:05 PM Backport #45212 (Resolved): nautilus: client: write stuck at waiting for larger max_size
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34767
m...
- 02:55 PM Backport #45212: nautilus: client: write stuck at waiting for larger max_size
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34767
merged
- 06:05 PM Backport #45181 (Resolved): nautilus: pybind/mgr/volumes: add command to return metadata regardin...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34679
m...
- 06:03 PM Backport #44655: nautilus: qa: SyntaxError: invalid token
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34470
m...
- 04:16 PM Backport #44655 (Resolved): nautilus: qa: SyntaxError: invalid token
- 02:53 PM Backport #44655: nautilus: qa: SyntaxError: invalid token
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34470
merged
- 04:17 PM Backport #45496 (In Progress): nautilus: client: fuse mount will print call trace with incorrect ...
- 02:26 PM Backport #45496 (Resolved): nautilus: client: fuse mount will print call trace with incorrect opt...
- https://github.com/ceph/ceph/pull/35000
- 04:13 PM Backport #45495 (In Progress): octopus: client: fuse mount will print call trace with incorrect o...
- 02:26 PM Backport #45495 (Resolved): octopus: client: fuse mount will print call trace with incorrect options
- https://github.com/ceph/ceph/pull/34999
- 04:11 PM Bug #44645 (Resolved): cephfs-shell: Fix flake8 errors (E302, E502, E128, F821, W605, E128 and E122)
- being backported via https://tracker.ceph.com/issues/45476
- 04:10 PM Bug #44657 (Resolved): cephfs-shell: Fix flake8 errors (F841, E302, E502, E128, E305 and E222)
- will be backported via https://tracker.ceph.com/issues/45476
- 04:06 PM Backport #45476 (In Progress): octopus: cephfs-shell: CI testing does not detect flake8 errors
- 02:21 PM Backport #45476 (Resolved): octopus: cephfs-shell: CI testing does not detect flake8 errors
- https://github.com/ceph/ceph/pull/34998
- 04:05 PM Backport #45477 (In Progress): octopus: fix MClientCaps::FLAG_SYNC in check_caps
- 02:22 PM Backport #45477 (Resolved): octopus: fix MClientCaps::FLAG_SYNC in check_caps
- https://github.com/ceph/ceph/pull/34997
- 04:05 PM Backport #45473 (In Progress): octopus: some obsolete "ceph mds" sub commands are suggested by ba...
- 02:21 PM Backport #45473 (Resolved): octopus: some obsolete "ceph mds" sub commands are suggested by bash ...
- https://github.com/ceph/ceph/pull/34996
- 02:35 PM Bug #16881 (Resolved): RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:35 PM Bug #24823 (Resolved): mds: deadlock when setting config value via admin socket
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:35 PM Bug #36189 (Resolved): ceph-fuse client can't read or write due to backward cap_gen
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:34 PM Feature #37678 (Resolved): mds: log new client sessions with various metadata
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:34 PM Bug #38020 (Resolved): mds: remove cache drop admin socket command
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:34 PM Bug #38137 (Resolved): mds: may leak gather during cache drop
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:34 PM Bug #38348 (Resolved): mds: drop cache does not timeout as expected
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:34 PM Bug #38677 (Resolved): qa: kclient unmount hangs after file system goes down
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:34 PM Bug #38704 (Resolved): qa: "[WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN) in cl...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:33 PM Fix #38801 (Resolved): qa: ignore "ceph.dir.pin: No such attribute" for (old) kernel client
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:33 PM Bug #39305 (Resolved): ceph-fuse: client hang because its bad session PipeConnection to mds
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:33 PM Bug #39405 (Resolved): ceph_volume_client: python program embedded in test_volume_client.py use p...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:33 PM Bug #39406 (Resolved): ceph_volume_client: d_name needs to be converted to string before using
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:33 PM Bug #39510 (Resolved): test_volume_client: test_put_object_versioned is unreliable
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:33 PM Documentation #39620 (Resolved): doc: MDS and metadata pool hardware requirements/recommendations
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:32 PM Bug #40460 (Resolved): test_volume_client: declare only one default for python version
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:32 PM Bug #40800 (Resolved): ceph_volume_client: to_bytes converts NoneType object str
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:32 PM Bug #40936 (Resolved): tools/cephfs: memory leak in cephfs/Resetter.cc
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:31 PM Bug #41031 (Resolved): qa: malformed job
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:31 PM Bug #41141 (Resolved): mds: recall capabilities more regularly when under cache pressure
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:31 PM Feature #41209 (Resolved): mds: create a configurable snapshot limit
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:31 PM Bug #41799 (Resolved): client: FAILED assert(cap == in->auth_cap)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:31 PM Bug #41800 (Resolved): qa: logrotate should tolerate connection resets
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:30 PM Bug #41836 (Resolved): qa: "cluster [ERR] Error recovering journal 0x200: (2) No such file or d...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:30 PM Bug #42213 (Resolved): test_reconnect_eviction fails with "RuntimeError: MDS in reject state up:a...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:30 PM Bug #42251 (Resolved): mds: no assert on frozen dir when scrub path
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:30 PM Fix #42450 (Resolved): MDSMonitor: warn if a new file system is being created with an EC default ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:30 PM Bug #42515 (Resolved): fs: OpenFileTable object shards have too many k/v pairs
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:29 PM Bug #42637 (Resolved): qa: ffsb suite causes SLOW_OPS warnings
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:29 PM Bug #42675 (Resolved): mds: tolerate no snaprealm encoded in on-disk root inode
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:29 PM Bug #42759 (Resolved): mds: inode lock stuck at unstable state after evicting client
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:29 PM Bug #42826 (Resolved): mds: client does not response to cap revoke After session stale->resume ci...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:29 PM Bug #42829 (Resolved): tools/cephfs: linkages injected by cephfs-data-scan have first == head
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:29 PM Bug #42938 (Resolved): mds: free heap memory may grow too large for some workloads
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:29 PM Bug #43061 (Resolved): ceph fs add_data_pool doesn't set pool metadata properly
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:28 PM Bug #43362 (Resolved): client: disallow changing fuse_default_permissions option at runtime
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:28 PM Bug #43438 (Resolved): cephfs-journal-tool: will crash without any extra argument
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:28 PM Bug #43440 (Resolved): client: chdir does not raise error if a file is passed
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:28 PM Bug #43483 (Resolved): mds: reject forward scrubs when cluster has multiple active MDS (more than...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:28 PM Bug #43554 (Resolved): qa: test_full racy check: AssertionError: 29 not greater than or equal to 30
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:28 PM Bug #43567 (Resolved): qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_without_target_di...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:27 PM Bug #43761 (Resolved): mon/MDSMonitor: "ceph fs authorize cephfs client.test /test rw" does not g...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:27 PM Bug #43909 (Resolved): mds: SIGSEGV in Migrator::export_sessions_flushed
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:27 PM Bug #44021 (Resolved): client: bad error handling in Client::_lseek
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:26 PM Bug #44295 (Resolved): mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:26 PM Backport #45497 (Resolved): nautilus: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat...
- https://github.com/ceph/ceph/pull/35185
- 02:26 PM Bug #44408 (Resolved): qa: after the cephfs qa test case quit the mountpoints still exist
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:25 PM Bug #44437 (Resolved): qa:test_config_session_timeout failed with incorrect options
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:25 PM Bug #44525 (Resolved): LibCephFS::RecalledGetattr test failed
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:24 PM Bug #44677 (Resolved): stale scrub status entry from a failed mds shows up in `ceph status`
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:23 PM Bug #44771 (Resolved): ceph-fuse: ceph::__ceph_abort(): ceph-fuse killed by SIGABRT in Client::_d...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:23 PM Bug #44885 (Resolved): enable 'big_writes' fuse option if ceph-fuse is linked to libfuse < 3.0
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:22 PM Backport #45478 (Resolved): nautilus: fix MClientCaps::FLAG_SYNC in check_caps
- https://github.com/ceph/ceph/pull/35118
- 02:21 PM Backport #45474 (Resolved): nautilus: some obsolete "ceph mds" sub commands are suggested by bash...
- https://github.com/ceph/ceph/pull/35117
- 02:20 PM Bug #45387 (Resolved): qa: install task runs twice with double unwind causing fatal errors
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
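The run of notices above describes the "backport-create-issue" script's --resolve-parent mode: once every backport tracker of a parent issue has been closed, the parent itself is marked Resolved. The following is only a minimal sketch of that logic, assuming the python-redmine library, a "copied_to" relation between parent and backport issues, and an illustrative status id; the actual script may work differently:

    # Sketch only -- not the real backport-create-issue implementation.
    from redminelib import Redmine

    CLOSED = {"Resolved", "Rejected"}  # statuses treated as terminal (assumption)
    RESOLVED_STATUS_ID = 3             # Redmine id for "Resolved" (assumption)

    def maybe_resolve_parent(redmine, parent_id):
        """Mark the parent issue Resolved once all of its backports are closed."""
        parent = redmine.issue.get(parent_id, include='relations')
        # Backport trackers are linked to the parent as "Copied to" relations.
        backports = [redmine.issue.get(rel.issue_to_id)
                     for rel in parent.relations
                     if rel.relation_type == 'copied_to']
        if backports and all(bp.status.name in CLOSED for bp in backports):
            redmine.issue.update(parent_id, status_id=RESOLVED_STATUS_ID)

    if __name__ == '__main__':
        redmine = Redmine('https://tracker.ceph.com', key='REDMINE_API_KEY')
        maybe_resolve_parent(redmine, 45387)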
- 07:23 AM Bug #45459: qa/task/cephfs/mount.py: Error: Connection activation failed: Activation failed becau...
- Fall back to using brctl instead of nmcli to set up the bridge for now, on all Ubuntu releases (see the sketch after this issue's entries below).
- 07:23 AM Bug #45459 (Fix Under Review): qa/task/cephfs/mount.py: Error: Connection activation failed: Acti...
- 07:12 AM Bug #45459 (Resolved): qa/task/cephfs/mount.py: Error: Connection activation failed: Activation f...
- ...
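The 07:23 AM comment above proposes replacing nmcli with brctl for bridge setup in the qa mount task. Below is a minimal sketch of such a fallback, with the bridge name "ceph-brx" and member interface "eth0" assumed for illustration; it is not the actual qa/tasks/cephfs/mount.py change:

    # Sketch of a brctl-based bridge setup (names are assumptions).
    import subprocess

    def run(*cmd):
        # check=True raises CalledProcessError so a failed command surfaces early.
        subprocess.run(cmd, check=True)

    def setup_bridge(bridge='ceph-brx', iface='eth0'):
        run('sudo', 'brctl', 'addbr', bridge)           # create the bridge device
        run('sudo', 'brctl', 'addif', bridge, iface)    # enslave the interface
        run('sudo', 'ip', 'link', 'set', bridge, 'up')  # bring the bridge up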