Activity
From 05/17/2020 to 06/15/2020
06/15/2020
- 10:12 PM Bug #46023 (Resolved): mds: MetricAggregator.cc: 178: FAILED ceph_assert(rm)
- ...
- 10:05 PM Bug #45434 (Triaged): qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- 10:05 PM Bug #45434: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- /ceph/teuthology-archive/pdonnell-2020-06-12_09:37:27-kcephfs-wip-pdonnell-testing-20200612.063208-distro-basic-smith...
- 10:03 PM Bug #45100: qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)
- and master: /ceph/teuthology-archive/pdonnell-2020-06-12_09:37:27-kcephfs-wip-pdonnell-testing-20200612.063208-distro...
- 10:01 PM Bug #46022 (New): qa: test_strays num_purge_ops violates threshold 34/16
- ...
- 07:29 PM Feature #12334 (Resolved): nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:24 PM Backport #46013 (Resolved): octopus: qa: commit 9f6c764f10f break qa code in several places
- https://github.com/ceph/ceph/pull/35600
- 07:24 PM Backport #46012 (Resolved): nautilus: qa: commit 9f6c764f10f break qa code in several places
- https://github.com/ceph/ceph/pull/35601
- 07:23 PM Backport #46011 (Resolved): octopus: qa: TestExports is failure under new Python3 runtime
- 07:22 PM Bug #45521 (Resolved): mds: layout parser does not handle [-.] in pool names
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:20 PM Backport #46003 (Resolved): octopus: vstart: set $CEPH_CONF when calling ganesha-rados-grace comm...
- https://github.com/ceph/ceph/pull/35499
- 07:20 PM Bug #44579: qa: commit 9f6c764f10f break qa code in several places
- 9f6c764f10f is in the process of being backported to octopus as well
- 07:18 PM Bug #44579: qa: commit 9f6c764f10f break qa code in several places
- yes, 9f6c764f10f was backported to nautilus via 6e00035ab44f23f3b3ff3242ac81e69dc9f174fc
- 07:16 PM Bug #44579 (Pending Backport): qa: commit 9f6c764f10f break qa code in several places
- 04:59 PM Backport #46002 (Resolved): nautilus: pybind/mgr/volumes: add command to return metadata regardin...
- https://github.com/ceph/ceph/pull/35672
- 04:59 PM Backport #46001 (Resolved): octopus: pybind/mgr/volumes: add command to return metadata regarding...
- https://github.com/ceph/ceph/pull/35670
- 03:27 PM Bug #45997 (Fix Under Review): nautilus: ceph_volume_client.py: UnicodeEncodeError exception whil...
- -Found a fix being worked on in GitHub.- Thanks for the link Dan. Updated.
- 01:54 PM Bug #45997: nautilus: ceph_volume_client.py: UnicodeEncodeError exception while removing volume w...
- (We have a PR incoming)
- 01:54 PM Bug #45997 (Triaged): nautilus: ceph_volume_client.py: UnicodeEncodeError exception while removin...
- 01:09 PM Bug #45997 (Resolved): nautilus: ceph_volume_client.py: UnicodeEncodeError exception while removi...
- While deleting a Manila share, we get this backtrace:...
- 08:14 AM Bug #45829: fs: ceph_test_libcephfs abort in TestUtime
- http://pulpito.ceph.com/xiubli-2020-06-15_05:16:00-fs:basic_workload-wip-xiubli-fs-testing-2020-06-12-1613-distro-bas...
- 07:50 AM Bug #45829: fs: ceph_test_libcephfs abort in TestUtime
- From one of my test cases' teuthology.log, we can see that it aborts when doing the mount(), in create_file_even...
06/13/2020
- 08:16 PM Bug #45663: luminous to nautilus upgrade
I think the xlock causes this
2020-06-13 17:32:24.920 7fb5edd82700 0 log_channel(cluster) log [WRN] :
slow re...
06/12/2020
- 09:14 PM Fix #44171 (Resolved): pybind/cephfs: audit for unimplemented bindings for libcephfs
- 09:10 PM Feature #45237 (Pending Backport): pybind/mgr/volumes: add command to return metadata regarding a...
- 09:07 PM Bug #45866 (Pending Backport): ceph-fuse build failure against libfuse v3.9.1
- 02:00 PM Bug #45990 (New): Add MDS Daemon with ceph orch
- I am trying to get cephfs up & running:
root@ceph01:~# cephadm shell --fsid 5436dd5d-83d4-4dc8-a93b-60ab5db145df...
- 09:50 AM Backport #45851 (In Progress): octopus: mds: scrub on directory with recently created files may f...
- 09:34 AM Backport #45848 (In Progress): octopus: qa/tasks/vstart_runner.py: TypeError: mount() got an unex...
- 07:43 AM Bug #45935: mds: cap revoking requests didn't success when the client doing reconnection ...
- LOCK_MIX is a transition state in which multiple MDSs can do read/write
at the same time, but the Fcb caps are not allow...
06/11/2020
- 05:28 PM Backport #45679 (Resolved): nautilus: mds: layout parser does not handle [-.] in pool names
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35391
m...
- 03:43 PM Backport #45679: nautilus: mds: layout parser does not handle [-.] in pool names
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35391
merged
- 05:28 PM Backport #45974 (Resolved): nautilus: qa: AssertionError: '1' != b'1'
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35535
m...
- 05:28 PM Backport #45974 (In Progress): nautilus: qa: AssertionError: '1' != b'1'
- 10:18 AM Backport #45974 (Resolved): nautilus: qa: AssertionError: '1' != b'1'
- https://github.com/ceph/ceph/pull/35535
- 05:28 PM Backport #45967 (Resolved): nautilus: qa: TestExports is failure under new Python3 runtime
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35535
m...
- 05:26 PM Backport #45689 (Resolved): nautilus: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35393
m...
- 05:26 PM Backport #45686 (Resolved): nautilus: mds: FAILED assert(locking == lock) in MutationImpl::finish...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35392
m...
- 05:25 PM Backport #45681: nautilus: mgr/volumes: Not able to resize cephfs subvolume with ceph fs subvolum...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35482
m...
- 04:06 PM Backport #45681 (Resolved): nautilus: mgr/volumes: Not able to resize cephfs subvolume with ceph ...
- 05:24 PM Backport #45850: nautilus: mgr/volumes: create fs subvolumes with isolated RADOS namespaces
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35482
m...
- 04:09 PM Backport #45850 (Resolved): nautilus: mgr/volumes: create fs subvolumes with isolated RADOS names...
- 01:14 PM Bug #44415 (Fix Under Review): cephfs.pyx: passing empty string is fine but passing None is not t...
- 10:01 AM Bug #44579: qa: commit 9f6c764f10f break qa code in several places
- this needs backport to nautilus.
In nautilus:...
- 05:48 AM Feature #45742 (Fix Under Review): mgr/nfs: Add interface for listing cluster
- 05:13 AM Bug #45935: mds: cap revoking requests didn't success when the client doing reconnection ...
- Xiubo Li wrote:
> From the mds.c logs, we can see that:
>
> client_caps(revoke ino 0x10000000392 65 seq 1 ==> ...
- 03:38 AM Bug #45829 (In Progress): fs: ceph_test_libcephfs abort in TestUtime
- I can access the lab now and will continue working on this.
06/10/2020
- 08:59 PM Bug #45971 (Pending Backport): vstart: set $CEPH_CONF when calling ganesha-rados-grace commands
- 08:58 PM Bug #45971 (Fix Under Review): vstart: set $CEPH_CONF when calling ganesha-rados-grace commands
- 07:46 PM Bug #45971 (Resolved): vstart: set $CEPH_CONF when calling ganesha-rados-grace commands
- I had a machine with /etc/ceph/ceph.conf file and started vstart on it. The ganesha-rados-grace object got created on...
- 03:33 PM Backport #45689: nautilus: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35393
merged
- 03:32 PM Backport #45686: nautilus: mds: FAILED assert(locking == lock) in MutationImpl::finish_locking
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35392
merged
- 02:39 PM Bug #45338: find leads to recursive output with nfs mount
- Vasu? Were you able to upgrade this and did it help? If so, what versions did you go to?
- 12:28 PM Bug #45815 (Fix Under Review): vstart_runner.py: set stdout and stderr to None by default
- 11:53 AM Bug #45966 (Fix Under Review): nautilus: qa/tasks: NameError: global name 'StringIO' is not defined
- 11:07 AM Bug #45966 (In Progress): nautilus: qa/tasks: NameError: global name 'StringIO' is not defined
- 10:53 AM Bug #45966 (Resolved): nautilus: qa/tasks: NameError: global name 'StringIO' is not defined
- See a number of job failures in nautilus kcephfs suite due to incomplete py2 to py3 qa/tasks transition in nautilus.
...
- 11:52 AM Backport #45967 (In Progress): nautilus: qa: TestExports is failure under new Python3 runtime
- 11:39 AM Backport #45967 (Resolved): nautilus: qa: TestExports is failure under new Python3 runtime
- https://github.com/ceph/ceph/pull/35535
- 11:38 AM Bug #45332 (Pending Backport): qa: TestExports is failure under new Python3 runtime
- 11:37 AM Bug #45960 (Duplicate): nautilus: ERROR: test_export_pin_getfattr (tasks.cephfs.test_exports.Test...
- Duplicate of https://tracker.ceph.com/issues/45332
- 10:07 AM Bug #45342: qa/tasks/vstart_runner.py: RuntimeError: Fuse mount failed to populate /sys/ after 31...
- Rebased and resolved the conflicts.
- 08:28 AM Bug #45935: mds: cap revoking requests didn't success when the client doing reconnection ...
- Xiubo Li wrote:
> From the mds.c logs, we can see that:
>
> client_caps(revoke ino 0x10000000392 65 seq 1 ==> ...
- 08:23 AM Bug #45935 (Fix Under Review): mds: cap revoking requests didn't success when the client doing re...
- 06:55 AM Backport #45825 (In Progress): luminous: MDS config reference lists mds log max expiring
- 06:54 AM Backport #45826 (In Progress): mimic: MDS config reference lists mds log max expiring
06/09/2020
- 05:22 PM Bug #45960 (Duplicate): nautilus: ERROR: test_export_pin_getfattr (tasks.cephfs.test_exports.Test...
- http://pulpito.ceph.com/yuriw-2020-06-08_17:05:22-multimds-wip-yuri5-testing-2020-06-08-1602-nautilus-distro-basic-sm...
- 03:40 PM Bug #45958 (Resolved): nautilus: ERROR: test_get_authorized_ids (tasks.cephfs.test_volume_client....
- http://pulpito.ceph.com/yuriw-2020-06-08_17:06:44-fs-wip-yuri5-testing-2020-06-08-1602-nautilus-distro-basic-smithi/5...
- 10:10 AM Backport #45953 (Resolved): octopus: vstart: Support deployment of ganesha daemon by cephadm with...
- https://github.com/ceph/ceph/pull/35499
- 08:49 AM Bug #45935: mds: cap revoking requests didn't success when the client doing reconnection ...
- From the mds.c logs, we can see that:
client_caps(revoke ino 0x10000000392 65 seq 1 ==> pending pAsLsXs was pAs...
- 12:56 AM Bug #42365: client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- 14.2.9/15.2.2 are also affected
06/08/2020
- 11:07 PM Feature #43423 (Fix Under Review): mds: collect and show the dentry lease metric
- 11:02 PM Bug #45935: mds: cap revoking requests didn't success when the client doing reconnection ...
- Patrick Donnelly wrote:
> Xiubo, which teuthology test is this from?
Sorry, forgot to copy it, this the second is...
- 02:14 PM Bug #45935: mds: cap revoking requests didn't success when the client doing reconnection ...
- Xiubo, which teuthology test is this from?
- 01:32 PM Bug #45935 (In Progress): mds: cap revoking requests didn't success when the client doing reconne...
- 01:32 PM Bug #45935 (Resolved): mds: cap revoking requests didn't success when the client doing reconnecti...
- The kclient node log:...
- 08:30 PM Backport #45687 (In Progress): luminous: mds: FAILED assert(locking == lock) in MutationImpl::fin...
- 06:25 PM Feature #45830 (Pending Backport): vstart: Support deployment of ganesha daemon by cephadm with N...
- 06:05 PM Bug #45866 (Fix Under Review): ceph-fuse build failure against libfuse v3.9.1
- 06:03 PM Bug #45866 (Pending Backport): ceph-fuse build failure against libfuse v3.9.1
- temporarily setting Pending Backport to get a backport issue created
- 06:04 PM Backport #45941 (In Progress): octopus: ceph-fuse build failure against libfuse v3.9.1
- 06:03 PM Backport #45941 (Resolved): octopus: ceph-fuse build failure against libfuse v3.9.1
- https://github.com/ceph/ceph/pull/35450
- 05:47 PM Backport #45886 (In Progress): octopus: qa: AssertionError: '1' != b'1'
- 04:46 PM Backport #45850 (In Progress): nautilus: mgr/volumes: create fs subvolumes with isolated RADOS na...
- 04:32 PM Backport #45681 (In Progress): nautilus: mgr/volumes: Not able to resize cephfs subvolume with ce...
- 01:45 PM Feature #45906 (Fix Under Review): mds: make threshold for MDS_TRIM warning configurable
- 01:43 PM Bug #45829 (Triaged): fs: ceph_test_libcephfs abort in TestUtime
- 12:48 PM Bug #45834: cephadm: "fs volume create cephfs" overwrites existing placement specification
- An easy solution would be to validate the existence of this CephFS before starting the MDS daemons.
06/06/2020
- 09:10 AM Backport #45846 (In Progress): octopus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connectio...
- 08:58 AM Backport #45845 (In Progress): octopus: ceph-fuse: building the source code failed with libfuse3....
- 08:57 AM Backport #45842 (In Progress): octopus: ceph-fuse: the -d option couldn't enable the debug mode i...
- 08:57 AM Bug #43516 (Resolved): qa: verify sub-suite does not define os_version
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:56 AM Bug #43968 (Resolved): qa: multimds suite using centos7
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:55 AM Backport #45838 (In Progress): octopus: mds may start to fragment dirfrag before rollback finishes
- 08:43 AM Backport #44330 (Resolved): nautilus: qa: multimds suite using centos7
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35184
m...
- 08:43 AM Backport #45804 (Resolved): nautilus: qa: verify sub-suite does not define os_version
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35184
m...
- 08:39 AM Backport #45773 (In Progress): octopus: vstart_runner: LocalFuseMount.mount should set set.mounte...
06/05/2020
- 09:55 PM Bug #45749 (In Progress): client: num_caps shows number of caps received
- 06:20 PM Bug #45910 (Fix Under Review): pybind/mgr/volumes: volume deletion not always removes the associa...
- 05:43 PM Bug #45910 (In Progress): pybind/mgr/volumes: volume deletion not always removes the associated o...
- 05:38 PM Bug #45910 (Resolved): pybind/mgr/volumes: volume deletion not always removes the associated osd ...
- ...
- 03:45 PM Backport #44330: nautilus: qa: multimds suite using centos7
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35184
merged
- 03:45 PM Backport #45804: nautilus: qa: verify sub-suite does not define os_version
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/35184
merged
- 01:49 PM Feature #45729 (New): pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes indepen...
- After some discussion and agreeing on the approach, below is the proposed design:
*Direct Addressing Scheme for Sn...
- 01:45 PM Bug #45740 (Fix Under Review): mgr/nfs: Check cluster exists before creating exports and make exp...
- 12:01 PM Feature #45906 (Resolved): mds: make threshold for MDS_TRIM warning configurable
- The MDS_TRIM health warning currently triggers on a hard-coded factor 2 threshold, I've got a setup here that runs so...
- 11:32 AM Feature #45741 (In Progress): mgr/volumes/nfs: Add interface for get and list exports
- 11:32 AM Bug #45744 (In Progress): mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
06/04/2020
- 02:36 PM Backport #45887 (In Progress): nautilus: client: fails to reconnect to MDS
- 11:56 AM Backport #45887 (Resolved): nautilus: client: fails to reconnect to MDS
- https://github.com/ceph/ceph/pull/35403
- 02:13 PM Backport #45898 (In Progress): nautilus: mds: add config to require forward to auth MDS
- 02:12 PM Backport #45898 (Resolved): nautilus: mds: add config to require forward to auth MDS
- https://github.com/ceph/ceph/pull/35377
- 02:12 PM Bug #45875 (Pending Backport): mds: add config to require forward to auth MDS
- 09:22 AM Bug #45875 (Resolved): mds: add config to require forward to auth MDS
- This is a backport tracker ticket for https://github.com/ceph/ceph/pull/29995
- 02:04 PM Backport #45854 (In Progress): nautilus: cephfs-journal-tool: NetHandler create_socket couldn't c...
- 02:01 PM Backport #45852 (In Progress): nautilus: mds: scrub on directory with recently created files may ...
- 01:56 PM Backport #45847 (In Progress): nautilus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connecti...
- 01:56 PM Backport #45843 (In Progress): nautilus: ceph-fuse: the -d option couldn't enable the debug mode ...
- 01:55 PM Backport #45839 (In Progress): nautilus: mds may start to fragment dirfrag before rollback finishes
- 01:38 PM Backport #45774 (In Progress): nautilus: vstart_runner: LocalFuseMount.mount should set set.mount...
- 01:33 PM Backport #45709 (In Progress): nautilus: mds: wrong link count under certain circumstance
- 01:09 PM Backport #45689 (In Progress): nautilus: nfs-ganesha: handle client cache pressure in NFS Ganesha...
- 12:59 PM Backport #45686 (In Progress): nautilus: mds: FAILED assert(locking == lock) in MutationImpl::fin...
- 12:56 PM Backport #45679 (In Progress): nautilus: mds: layout parser does not handle [-.] in pool names
- 11:56 AM Bug #45590 (Resolved): qa: TypeError: unsupported operand type(s) for +: 'range' and 'range'
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:56 AM Backport #45888 (Resolved): octopus: client: fails to reconnect to MDS
- https://github.com/ceph/ceph/pull/35616
- 11:56 AM Backport #45886 (Resolved): octopus: qa: AssertionError: '1' != b'1'
- https://github.com/ceph/ceph/pull/35364
06/03/2020
- 06:55 PM Bug #45866 (Fix Under Review): ceph-fuse build failure against libfuse v3.9.1
- 04:00 PM Bug #45866 (Resolved): ceph-fuse build failure against libfuse v3.9.1
- I got this, building master today against libfuse v3.9.1:...
- 04:07 PM Backport #45845: octopus: ceph-fuse: building the source code failed with libfuse3.5 or higher ve...
- Needs fix for: #45866
- 12:19 PM Backport #45845 (Resolved): octopus: ceph-fuse: building the source code failed with libfuse3.5 o...
- https://github.com/ceph/ceph/pull/35450
- 02:59 PM Bug #45817 (Fix Under Review): qa: Command failed with status 2: ['sudo', 'bash', '-c', 'ip addr ...
- 02:43 PM Bug #45666 (Pending Backport): qa: AssertionError: '1' != b'1'
- 02:39 PM Bug #45665 (Pending Backport): client: fails to reconnect to MDS
- 02:35 PM Bug #45835 (Triaged): mds: OpenFileTable::prefetch_inodes during rejoin can cause out-of-memory
- 10:33 AM Bug #45835 (Resolved): mds: OpenFileTable::prefetch_inodes during rejoin can cause out-of-memory
- We just upgraded from mimic v13.2.6 to nautilus v14.2.9 and the single active MDS was going out-of-memory during the ...
- 01:49 PM Feature #42447: add basic client setup page
- This was merged in commit 85df3a5fb2d388.
- 01:46 PM Bug #45532 (Resolved): cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- The patch for this went into v5.7 and should trickle out to v5.4 and v5.6 stable series kernels soon.
- 01:45 PM Bug #45338: find leads to recursive output with nfs mount
- Note: similar-sounding bug here that was fixed with an upgrade (though he doesn't say to which version):
https://g...
- 01:16 PM Backport #45675 (Resolved): nautilus: qa: TypeError: unsupported operand type(s) for +: 'range' a...
- https://github.com/ceph/ceph/pull/34171
- 12:23 PM Backport #45854 (Resolved): nautilus: cephfs-journal-tool: NetHandler create_socket couldn't crea...
- https://github.com/ceph/ceph/pull/35401
- 12:23 PM Backport #45853 (Resolved): octopus: cephfs-journal-tool: NetHandler create_socket couldn't creat...
- https://github.com/ceph/ceph/pull/40762
- 12:22 PM Backport #45852 (Resolved): nautilus: mds: scrub on directory with recently created files may fai...
- https://github.com/ceph/ceph/pull/35400
- 12:21 PM Backport #45851 (Resolved): octopus: mds: scrub on directory with recently created files may fail...
- https://github.com/ceph/ceph/pull/35555
- 12:21 PM Bug #43598 (Resolved): mds: PurgeQueue does not handle objecter errors
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:21 PM Bug #44380 (Resolved): mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" == g_c...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:21 PM Bug #44389 (Resolved): client: fuse mount will print call trace with incorrect options
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:20 PM Bug #44680 (Resolved): mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:20 PM Bug #44962 (Resolved): "ceph fs status" command outputs to stderr instead of stdout when json for...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:20 PM Bug #44963 (Resolved): fix MClientCaps::FLAG_SYNC in check_caps
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:20 PM Bug #45090 (Resolved): mds: inode's xattr_map may reference a large memory.
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:20 PM Bug #45141 (Resolved): some obsolete "ceph mds" sub commands are suggested by bash completion
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:19 PM Backport #45850 (Resolved): nautilus: mgr/volumes: create fs subvolumes with isolated RADOS names...
- https://github.com/ceph/ceph/pull/35482
- 12:19 PM Backport #45849 (Resolved): octopus: mgr/volumes: create fs subvolumes with isolated RADOS namesp...
- https://github.com/ceph/ceph/pull/35671
- 12:19 PM Backport #45848 (Resolved): octopus: qa/tasks/vstart_runner.py: TypeError: mount() got an unexpec...
- https://github.com/ceph/ceph/pull/35554
- 12:19 PM Backport #45847 (Resolved): nautilus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections...
- https://github.com/ceph/ceph/pull/35399
- 12:19 PM Backport #45846 (Resolved): octopus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections ...
- https://github.com/ceph/ceph/pull/35451
- 12:18 PM Backport #45843 (Resolved): nautilus: ceph-fuse: the -d option couldn't enable the debug mode in ...
- https://github.com/ceph/ceph/pull/35398
- 12:18 PM Backport #45842 (Resolved): octopus: ceph-fuse: the -d option couldn't enable the debug mode in l...
- https://github.com/ceph/ceph/pull/35449
- 12:18 PM Backport #45839 (Resolved): nautilus: mds may start to fragment dirfrag before rollback finishes
- https://github.com/ceph/ceph/pull/35397
- 12:17 PM Backport #45838 (Resolved): octopus: mds may start to fragment dirfrag before rollback finishes
- https://github.com/ceph/ceph/pull/35448
- 12:12 PM Bug #45662 (Fix Under Review): pybind/mgr/volumes: volume deletion should check mon_allow_pool_de...
- 09:10 AM Bug #45834 (Closed): cephadm: "fs volume create cephfs" overwrites existing placement specification
- The orchestrator behaves unexpectedly with apply mds. Consider the following:
I have a ceph cluster running and wa...
- 04:54 AM Feature #45830 (Fix Under Review): vstart: Support deployment of ganesha daemon by cephadm with N...
- 04:54 AM Feature #45830 (Resolved): vstart: Support deployment of ganesha daemon by cephadm with NFS option
- 01:58 AM Bug #43543 (Pending Backport): mds: scrub on directory with recently created files may fail to lo...
- 12:48 AM Bug #45396 (Pending Backport): ceph-fuse: building the source code failed with libfuse3.5 or high...
- 12:45 AM Feature #45289 (Pending Backport): mgr/volumes: create fs subvolumes with isolated RADOS namespaces
- 12:43 AM Bug #45304 (Pending Backport): qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections is absent
- 12:41 AM Bug #41034 (Pending Backport): cephfs-journal-tool: NetHandler create_socket couldn't create socket
- 12:40 AM Bug #45524 (Pending Backport): ceph-fuse: the -d option couldn't enable the debug mode in libfuse
- 12:35 AM Bug #45699 (Pending Backport): mds may start to fragment dirfrag before rollback finishes
- 12:25 AM Bug #45829: fs: ceph_test_libcephfs abort in TestUtime
- Another in a different master test branch:
/ceph/teuthology-archive/pdonnell-2020-06-02_19:51:19-fs-wip-pdonnell-t...
06/02/2020
- 08:56 PM Feature #36253 (Resolved): cephfs: clients should send usage metadata to MDSs for administration/...
- \o/
- 08:47 PM Bug #45829 (Resolved): fs: ceph_test_libcephfs abort in TestUtime
- ...
- 07:40 PM Backport #45827 (Resolved): nautilus: MDS config reference lists mds log max expiring
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35278
m...
- 07:38 PM Backport #45827 (In Progress): nautilus: MDS config reference lists mds log max expiring
- 07:38 PM Backport #45827 (Resolved): nautilus: MDS config reference lists mds log max expiring
- https://github.com/ceph/ceph/pull/35278
- 07:38 PM Backport #45826 (Rejected): mimic: MDS config reference lists mds log max expiring
- https://github.com/ceph/ceph/pull/35515
- 07:38 PM Backport #45825 (Resolved): luminous: MDS config reference lists mds log max expiring
- https://github.com/ceph/ceph/pull/35516
- 07:37 PM Documentation #45730 (Pending Backport): MDS config reference lists mds log max expiring
- 07:02 PM Bug #45339 (Resolved): qa/cephfs: run nsenter commands with superuser privileges
- 02:40 PM Bug #45300 (Pending Backport): qa/tasks/vstart_runner.py: TypeError: mount() got an unexpected ke...
- 02:36 PM Bug #45806: qa/task/vstart_runner.py: setting the network namespace "ceph-ns--tmp-tmpq1pg2pz7-mnt...
- Patrick Donnelly wrote:
> Xiubo Li wrote:
> > If the previous test cases failed, the netnses will not be removed/cl...
- 02:35 PM Bug #45806: qa/task/vstart_runner.py: setting the network namespace "ceph-ns--tmp-tmpq1pg2pz7-mnt...
- Xiubo Li wrote:
> If the previous test cases failed, the netnses will not be removed/cleaned up, and we cannot be su...
- 04:15 AM Bug #45806 (Fix Under Review): qa/task/vstart_runner.py: setting the network namespace "ceph-ns--...
- If the previous test cases failed, the netnses will not be removed/cleaned up, and we cannot be sure what state they ...
- 04:11 AM Bug #45806 (Resolved): qa/task/vstart_runner.py: setting the network namespace "ceph-ns--tmp-tmpq...
- ...
- 02:30 PM Bug #45817 (Resolved): qa: Command failed with status 2: ['sudo', 'bash', '-c', 'ip addr add 192....
- ...
- 01:32 PM Bug #45813 (Duplicate): qa/cephfs: tests kclient crash with "unexpected keyword" error
- Closing in favour of https://tracker.ceph.com/issues/45300
- 12:10 PM Bug #45813 (Fix Under Review): qa/cephfs: tests kclient crash with "unexpected keyword" error
- 10:59 AM Bug #45813 (Duplicate): qa/cephfs: tests kclient crash with "unexpected keyword" error
- @mount_wait()@ was added to mount.py with the assumption that all mount methods accept mountpoint in "this commit":ht...
- 12:36 PM Bug #45815 (Resolved): vstart_runner.py: set stdout and stderr to None by default
- Right now both are set to BytesIO() in LocalRemoteProcess.__init__() when values are not passed. See - https://github...
- 04:31 AM Bug #45593: qa: removing network bridge appears to cause dropped packets
- The netns was only set up on smithi114, but the connection to the smithi038 test node was lost (node 'smithi038' is offline...
06/01/2020
- 07:49 PM Backport #45708 (Resolved): octopus: mds: wrong link count under certain circumstance
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35253
m...
- 07:15 PM Backport #45708: octopus: mds: wrong link count under certain circumstance
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35253
merged
- 07:48 PM Backport #45685 (Resolved): octopus: mds: FAILED assert(locking == lock) in MutationImpl::finish_...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35252
m...
- 07:15 PM Backport #45685: octopus: mds: FAILED assert(locking == lock) in MutationImpl::finish_locking
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35252
merged
- 07:48 PM Backport #45678 (Resolved): octopus: mds: layout parser does not handle [-.] in pool names
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35251
m...
- 07:14 PM Backport #45678: octopus: mds: layout parser does not handle [-.] in pool names
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35251
merged
- 07:48 PM Backport #45674 (Resolved): octopus: qa: TypeError: unsupported operand type(s) for +: 'range' an...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35250
m...
- 07:13 PM Backport #45674: octopus: qa: TypeError: unsupported operand type(s) for +: 'range' and 'range'
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35250
merged
- 07:47 PM Backport #45688: octopus: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35150
m...
- 07:40 PM Backport #45603: octopus: mds: PurgeQueue does not handle objecter errors
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35148
m...
- 07:40 PM Backport #45601: octopus: mds: inode's xattr_map may reference a large memory.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35147
m...
- 07:40 PM Backport #45495 (Resolved): octopus: client: fuse mount will print call trace with incorrect options
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34999
m...
- 07:39 PM Backport #45477 (Resolved): octopus: fix MClientCaps::FLAG_SYNC in check_caps
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34997
m...
- 07:39 PM Backport #45473 (Resolved): octopus: some obsolete "ceph mds" sub commands are suggested by bash ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34996
m...
- 07:38 PM Backport #45251 (Resolved): octopus: "ceph fs status" command outputs to stderr instead of stdout...
- 07:37 PM Backport #45028 (Resolved): octopus: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- 07:34 PM Backport #45600 (Resolved): nautilus: mds: inode's xattr_map may reference a large memory.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35199
m... - 07:10 PM Backport #45600: nautilus: mds: inode's xattr_map may reference a large memory.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35199
merged
- 07:34 PM Backport #45497 (Resolved): nautilus: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35185
m...
- 07:10 PM Backport #45497: nautilus: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" ==...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35185
merged
- 07:33 PM Backport #45602 (Resolved): nautilus: mds: PurgeQueue does not handle objecter errors
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35149
m...
- 07:05 PM Backport #45602: nautilus: mds: PurgeQueue does not handle objecter errors
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35149
merged
- 07:33 PM Backport #45478 (Resolved): nautilus: fix MClientCaps::FLAG_SYNC in check_caps
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35118
m...
- 07:04 PM Backport #45478: nautilus: fix MClientCaps::FLAG_SYNC in check_caps
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35118
merged
- 07:33 PM Backport #45474 (Resolved): nautilus: some obsolete "ceph mds" sub commands are suggested by bash...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35117
m...
- 07:03 PM Backport #45474: nautilus: some obsolete "ceph mds" sub commands are suggested by bash completion
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35117
merged
- 07:32 PM Backport #45804 (In Progress): nautilus: qa: verify sub-suite does not define os_version
- 06:29 PM Backport #45804 (Resolved): nautilus: qa: verify sub-suite does not define os_version
- https://github.com/ceph/ceph/pull/35184
- 06:29 PM Bug #43516 (Pending Backport): qa: verify sub-suite does not define os_version
- 01:49 PM Bug #45665 (Fix Under Review): client: fails to reconnect to MDS
- 02:06 AM Bug #45665: client: fails to reconnect to MDS
- If the ceph-fuse client needs to flush the caps and does a sync wait, the umount() will just return successfully, then t...
- 01:49 PM Documentation #45730 (Fix Under Review): MDS config reference lists mds log max expiring
- 03:52 AM Feature #20196: mds: early reintegration of strays on hardlink deletion
- Zheng Yan wrote:
> we can more files in strays now, https://github.com/ceph/ceph/pull/33479
Hi zheng
I anti-...
05/30/2020
05/29/2020
- 06:23 PM Feature #45746: mgr/nfs: Add interface to update export
- The export create command will be strictly idempotent. Exports can be updated from a JSON file.
- 05:30 PM Backport #45774 (Resolved): nautilus: vstart_runner: LocalFuseMount.mount should set set.mounted ...
- https://github.com/ceph/ceph/pull/35396
- 05:29 PM Backport #45773 (Resolved): octopus: vstart_runner: LocalFuseMount.mount should set set.mounted t...
- https://github.com/ceph/ceph/pull/35447
- 05:02 PM Feature #45743: mgr/nfs: Add interface to show cluster information
- Patrick Donnelly wrote:
> There's two blocking issues we see:
>
> * cephadm does not permit deploying Ganesha wit...
- 04:31 PM Feature #45743: mgr/nfs: Add interface to show cluster information
- There's two blocking issues we see:
* cephadm does not permit deploying Ganesha with non-standard ports (i.e. not ...
- 03:36 AM Bug #45665: client: fails to reconnect to MDS
- When the mds daemon is restarting or trying to reconnect the client, while the client was trying to umount and waitin...
05/28/2020
- 05:54 PM Bug #45749 (Won't Fix): client: num_caps shows number of caps received
- It should be the number of outstanding caps to match the MDS asok info.
- 04:55 PM Bug #45745 (Triaged): mgr/nfs: Move enable pool to cephadm
- 02:16 PM Bug #45745 (Rejected): mgr/nfs: Move enable pool to cephadm
- 04:55 PM Bug #45744 (Triaged): mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
- 02:15 PM Bug #45744 (Resolved): mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
- 04:52 PM Bug #45740 (Triaged): mgr/nfs: Check cluster exists before creating exports and make exports pers...
- 01:56 PM Bug #45740 (Resolved): mgr/nfs: Check cluster exists before creating exports and make exports per...
- Check if cluster exists before creating exports and add tests for it.
vstart needs to be updated. As we are using te...
- 03:27 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Andrej Filipcic wrote:
>
> I have checked 5.6.15 kernel patches, but this fix does not seem to be there yet?
>
...
- 03:13 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Hi,
I have checked 5.6.15 kernel patches, but this fix does not seem to be there yet?
Cheers,
Andrej
- 02:19 PM Feature #45747 (Resolved): pybind/mgr/nfs: add interface for adding user defined configuration
- The common config in RADOS (which presently just links to each export config) should also include a user-defined conf...
- 02:17 PM Feature #45746 (Resolved): mgr/nfs: Add interface to update export
- 02:03 PM Feature #45743 (Resolved): mgr/nfs: Add interface to show cluster information
- ...
- 02:00 PM Feature #45742 (Resolved): mgr/nfs: Add interface for listing cluster
- ceph nfs cluster list
- 01:58 PM Feature #45741 (Resolved): mgr/volumes/nfs: Add interface for get and list exports
- ...
- 02:18 AM Bug #43943: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 seconds"
- /a/yuriw-2020-05-24_19:30:40-rados-wip-yuri-master_5.24.20-distro-basic-smithi/5087753
05/27/2020
- 03:33 PM Bug #45723 (Pending Backport): vstart_runner: LocalFuseMount.mount should set set.mounted to True
- 09:31 AM Bug #45723 (Fix Under Review): vstart_runner: LocalFuseMount.mount should set set.mounted to True
- 09:15 AM Bug #45723 (Resolved): vstart_runner: LocalFuseMount.mount should set set.mounted to True
- When not set to True, the cleanup doesn't run on teardown since the cleanup methods just exit when @self.mounted@ is s...
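The flag-guard pattern behind this bug can be sketched as follows. This is an illustrative Python sketch, not the actual vstart_runner code; the class and method names are hypothetical. The point is that teardown methods typically return early when a "mounted" flag is False, so a mount helper that forgets to record its own success silently disables cleanup.

```python
# Hypothetical sketch of the vstart_runner bug: cleanup() guards on a
# "mounted" flag, so mount() must set it after a successful mount or
# teardown becomes a no-op and the mount point leaks.

class FuseMountSketch:
    def __init__(self):
        self.mounted = False
        self.cleaned_up = False

    def mount(self):
        # ... perform the actual FUSE mount here ...
        self.mounted = True  # the fix: record that the mount succeeded

    def cleanup(self):
        if not self.mounted:
            return  # guard: nothing to clean up if we never mounted
        # ... unmount and remove the mount point ...
        self.mounted = False
        self.cleaned_up = True

m = FuseMountSketch()
m.mount()
m.cleanup()
print(m.cleaned_up)  # True only because mount() set self.mounted
```

Without the `self.mounted = True` line in `mount()`, `cleanup()` returns immediately and `cleaned_up` stays False, which matches the described teardown failure.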
- 02:41 PM Documentation #45730 (Resolved): MDS config reference lists mds log max expiring
- Seems like this option was removed in mimic and backported to luminous.
- 01:53 PM Feature #45729 (Need More Info): pybind/mgr/volumes: Add the ability to keep snapshots of subvolu...
- 12:41 PM Feature #45729 (Resolved): pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes in...
- From the perspective of CSI and its volume life cycle management, a snapshot of a volume is expected to survive beyon...
- 01:42 PM Bug #45663 (Triaged): luminous to nautilus upgrade
- 01:42 PM Bug #45665 (Triaged): client: fails to reconnect to MDS
- 08:31 AM Bug #45283 (Closed): Kernel log flood "ceph: Failed to find inode for 1"
- Closing, issue is being handled by the ubuntu kernel team in the launchpad URL (comment #7).
- 02:26 AM Backport #45680 (In Progress): octopus: mgr/volumes: Not able to resize cephfs subvolume with cep...
- 02:18 AM Backport #45601 (Resolved): octopus: mds: inode's xattr_map may reference a large memory.
- 02:18 AM Backport #45603 (Resolved): octopus: mds: PurgeQueue does not handle objecter errors
- 02:17 AM Backport #45688 (Resolved): octopus: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
05/26/2020
- 10:16 PM Backport #45688: octopus: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35150
merged
- 10:15 PM Backport #45603: octopus: mds: PurgeQueue does not handle objecter errors
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35148
merged
- 10:14 PM Backport #45601: octopus: mds: inode's xattr_map may reference a large memory.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35147
merged
- 10:13 PM Backport #45495: octopus: client: fuse mount will print call trace with incorrect options
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34999
merged
- 10:10 PM Backport #45477: octopus: fix MClientCaps::FLAG_SYNC in check_caps
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34997
merged
- 10:09 PM Backport #45473: octopus: some obsolete "ceph mds" sub commands are suggested by bash completion
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34996
merged
- 10:08 PM Backport #45251: octopus: "ceph fs status" command outputs to stderr instead of stdout when json ...
- Kotresh Hiremath Ravishankar wrote:
> https://github.com/ceph/ceph/pull/34727
merged
- 10:07 PM Backport #45028: octopus: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34509
merged
- 09:25 PM Bug #45434: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- http://pulpito.ceph.com/yuriw-2020-05-21_22:43:47-kcephfs-wip-yuri-testing-2020-05-21-2001-octopus-distro-basic-smith...
- 07:56 PM Backport #45708 (In Progress): octopus: mds: wrong link count under certain circumstance
- 12:15 PM Backport #45708 (Resolved): octopus: mds: wrong link count under certain circumstance
- https://github.com/ceph/ceph/pull/35253
- 07:44 PM Backport #45685 (In Progress): octopus: mds: FAILED assert(locking == lock) in MutationImpl::fini...
- 07:43 PM Backport #45678 (In Progress): octopus: mds: layout parser does not handle [-.] in pool names
- 07:41 PM Backport #45674 (In Progress): octopus: qa: TypeError: unsupported operand type(s) for +: 'range'...
- 12:15 PM Backport #45709 (Resolved): nautilus: mds: wrong link count under certain circumstance
- https://github.com/ceph/ceph/pull/35394
05/25/2020
- 06:45 PM Bug #45024 (Pending Backport): mds: wrong link count under certain circumstance
- 02:28 PM Bug #44172 (Resolved): cephfs-journal-tool: cannot set --dry_run arg
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:54 PM Bug #45699 (Fix Under Review): mds may start to fragment dirfrag before rollback finishes
- 01:46 PM Bug #45699 (Resolved): mds may start to fragment dirfrag before rollback finishes
- /ceph/teuthology-archive/pdonnell-2020-05-21_21:34:09-kcephfs-wip-pdonnell-testing-20200520.182104-distro-basic-smith...
- 11:57 AM Bug #45114 (Resolved): client: make cache shrinking callbacks available via libcephfs
- backports will be handled via #12334 of which this appears to be a duplicate?
* octopus backport issue: #45688
- 11:54 AM Backport #45688 (In Progress): octopus: nfs-ganesha: handle client cache pressure in NFS Ganesha ...
- 11:07 AM Backport #45496 (Resolved): nautilus: client: fuse mount will print call trace with incorrect opt...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35000
m... - 11:07 AM Backport #45221 (Resolved): nautilus: cephfs-journal-tool: cannot set --dry_run arg
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34784
m... - 11:07 AM Backport #45217: nautilus: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34783
m... - 11:07 AM Backport #44483: nautilus: mds: assertion failure due to blacklist
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34435
m... - 11:07 AM Backport #44478: nautilus: mds: assert(p != active_requests.end())
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34338
m...
05/24/2020
- 09:10 PM Backport #45689 (Resolved): nautilus: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- https://github.com/ceph/ceph/pull/35393
- 09:10 PM Backport #45688 (Resolved): octopus: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- https://github.com/ceph/ceph/pull/35150
- 09:07 PM Bug #44132 (Resolved): mds: assertion failure due to blacklist
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:06 PM Bug #44382 (Resolved): qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:05 PM Backport #45687 (Resolved): luminous: mds: FAILED assert(locking == lock) in MutationImpl::finish...
- https://github.com/ceph/ceph/pull/35345
- 09:05 PM Backport #45686 (Resolved): nautilus: mds: FAILED assert(locking == lock) in MutationImpl::finish...
- https://github.com/ceph/ceph/pull/35392
- 09:05 PM Backport #45685 (Resolved): octopus: mds: FAILED assert(locking == lock) in MutationImpl::finish_...
- https://github.com/ceph/ceph/pull/35252
- 09:05 PM Backport #45681 (Resolved): nautilus: mgr/volumes: Not able to resize cephfs subvolume with ceph ...
- https://github.com/ceph/ceph/pull/35482
- 09:04 PM Backport #45680 (Resolved): octopus: mgr/volumes: Not able to resize cephfs subvolume with ceph f...
- https://github.com/ceph/ceph/pull/35256
- 09:04 PM Backport #45679 (Resolved): nautilus: mds: layout parser does not handle [-.] in pool names
- https://github.com/ceph/ceph/pull/35391
- 09:04 PM Backport #45678 (Resolved): octopus: mds: layout parser does not handle [-.] in pool names
- https://github.com/ceph/ceph/pull/35251
- 09:04 PM Backport #45675 (Resolved): nautilus: qa: TypeError: unsupported operand type(s) for +: 'range' a...
- https://github.com/ceph/ceph/pull/34171
- 09:03 PM Backport #45674 (Resolved): octopus: qa: TypeError: unsupported operand type(s) for +: 'range' an...
- https://github.com/ceph/ceph/pull/35250
- 03:26 AM Backport #44478 (Resolved): nautilus: mds: assert(p != active_requests.end())
- 03:26 AM Backport #44483 (Resolved): nautilus: mds: assertion failure due to blacklist
- 03:26 AM Backport #45217 (Resolved): nautilus: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterF...
05/22/2020
- 09:24 PM Bug #45398 (Pending Backport): mgr/volumes: Not able to resize cephfs subvolume with ceph fs subv...
- 09:21 PM Bug #45521 (Pending Backport): mds: layout parser does not handle [-.] in pool names
- 09:19 PM Bug #45261 (Pending Backport): mds: FAILED assert(locking == lock) in MutationImpl::finish_locking
- 09:16 PM Bug #45666 (Fix Under Review): qa: AssertionError: '1' != b'1'
- 09:12 PM Bug #45666 (Resolved): qa: AssertionError: '1' != b'1'
- ...
- 08:57 PM Bug #45665 (Resolved): client: fails to reconnect to MDS
- ...
- 08:47 PM Bug #45664 (New): libcephfs: FAILED LibCephFS.LazyIOMultipleWritersOneReader
- ...
- 08:09 PM Bug #45663: luminous to nautilus upgrade
- Related to this issue
https://tracker.ceph.com/issues/44100
- 08:08 PM Bug #45663 (Triaged): luminous to nautilus upgrade
I have been using snapshots on cephfs since luminous, 1xfs and
1xactivemds and used an rsync on it for backup (mo...
- 05:46 PM Bug #45662 (Resolved): pybind/mgr/volumes: volume deletion should check mon_allow_pool_delete
- ...
- 05:35 PM Bug #45590 (Pending Backport): qa: TypeError: unsupported operand type(s) for +: 'range' and 'range'
- 05:32 PM Bug #45648 (Duplicate): qa/tasks/mds_thrash.py fails when trying to trash max_mds
- 06:36 AM Bug #45648 (Fix Under Review): qa/tasks/mds_thrash.py fails when trying to trash max_mds
- 06:35 AM Bug #45648 (Duplicate): qa/tasks/mds_thrash.py fails when trying to trash max_mds
- ...
- 04:51 PM Feature #12334 (Pending Backport): nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- No, the backport release list just needs updated.
- 01:48 PM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Since target version is set to 16.0.0 and the status was changed to "Resolved", I guess backports are not needed (?)
- 03:30 PM Backport #45496: nautilus: client: fuse mount will print call trace with incorrect options
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35000
merged
- 03:29 PM Backport #45221: nautilus: cephfs-journal-tool: cannot set --dry_run arg
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34784
merged
- 03:28 PM Backport #45217: nautilus: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34783
merged
- 03:28 PM Backport #44483: nautilus: mds: assertion failure due to blacklist
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34435
merged
- 03:27 PM Backport #44478: nautilus: mds: assert(p != active_requests.end())
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34338
merged
- 01:36 PM Bug #45434: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- See this in nautilus testing too,
http://pulpito.ceph.com/yuriw-2020-05-21_00:08:14-kcephfs-wip-yuri3-testing-2020-0...
- 12:44 PM Backport #45600 (In Progress): nautilus: mds: inode's xattr_map may reference a large memory.
05/21/2020
- 05:51 PM Backport #45497 (In Progress): nautilus: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rs...
- 05:50 PM Backport #44330 (In Progress): nautilus: qa: multimds suite using centos7
- 07:54 AM Bug #45531 (Resolved): qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Ope...
05/20/2020
- 04:15 PM Bug #44127 (Resolved): cephfs-shell: read config options from cephf.conf and from ceph config com...
- 03:07 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Yes, it fixes the issue for me. Fast now. I have tested it on 5.6.13 kernel. You got the email right.
Many thanks ...
- 02:47 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Ok, this patch (suggested by Zheng) seems to fix it.
Andrej, can you test it out and confirm whether it fixes it f...
- 01:07 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Ok, threw in a little printk debugging and it looks like the lease generations are not matching up like I'd expect. S...
- 12:21 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Thanks for running the bisect. I can confirm that if I set the d_delete op to NULL, that this problem goes away. That...
- 10:42 AM Feature #12334 (Resolved): nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Ceph patches were merged.
- 08:27 AM Backport #45602 (In Progress): nautilus: mds: PurgeQueue does not handle objecter errors
- 08:25 AM Backport #45603 (In Progress): octopus: mds: PurgeQueue does not handle objecter errors
- 08:18 AM Backport #45601 (In Progress): octopus: mds: inode's xattr_map may reference a large memory.
05/19/2020
- 11:04 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- This one is problematic:...
- 08:03 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Ok, if you feel ambitious, there is a function in fs/ceph/mds_client.c called schedule_delayed(). That requeues the d...
- 07:30 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- I have tried the commit e3ec8d6898f71636a067dae683174ef9bf81bc96 on 5.0.21 kernel, where it applies cleanly, and it w...
- 05:32 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Andrej Filipcic wrote:
> The main issue is that this 5s delay remains forever, eg hours or more, even if the clien...
- 05:03 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- To me it seems it's only related to the directories. This file test is fast enough, and it does not show any differen...
- 04:24 PM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Ok, I reproduced this today and it seems like it's just that when an inode is created, we generally give a full set o...
- 09:35 AM Backport #45603 (Resolved): octopus: mds: PurgeQueue does not handle objecter errors
- https://github.com/ceph/ceph/pull/35148
- 09:35 AM Backport #45602 (Resolved): nautilus: mds: PurgeQueue does not handle objecter errors
- https://github.com/ceph/ceph/pull/35149
- 09:33 AM Backport #45601 (Resolved): octopus: mds: inode's xattr_map may reference a large memory.
- https://github.com/ceph/ceph/pull/35147
- 09:33 AM Backport #45600 (Resolved): nautilus: mds: inode's xattr_map may reference a large memory.
- https://github.com/ceph/ceph/pull/35199
- 06:12 AM Backport #45478 (In Progress): nautilus: fix MClientCaps::FLAG_SYNC in check_caps
- 06:09 AM Backport #45474 (In Progress): nautilus: some obsolete "ceph mds" sub commands are suggested by b...
- 01:50 AM Bug #45593 (In Progress): qa: removing network bridge appears to cause dropped packets
- It seems it is not the NAT rule removal's issue; this began very early and has lasted for minutes already:...
05/18/2020
- 10:14 PM Bug #45552 (Resolved): qa/task/vstart_runner.py: admin_socket: exception getting command descript...
- 10:12 PM Bug #45090 (Pending Backport): mds: inode's xattr_map may reference a large memory.
- 10:11 PM Bug #45090 (Resolved): mds: inode's xattr_map may reference a large memory.
- 10:10 PM Bug #43598 (Pending Backport): mds: PurgeQueue does not handle objecter errors
- 10:08 PM Bug #45114 (Pending Backport): client: make cache shrinking callbacks available via libcephfs
- 10:01 PM Bug #45373 (Resolved): cephfs-shell: OSError type exceptions throw object has no attribute 'get_e...
- 09:58 PM Bug #45430 (Resolved): qa/cephfs: cleanup() and cleanup_netns() needs to be run even FS was not m...
- 09:29 PM Bug #45593 (Rejected): qa: removing network bridge appears to cause dropped packets
- ...
- 07:59 PM Bug #45590 (Fix Under Review): qa: TypeError: unsupported operand type(s) for +: 'range' and 'range'
- 07:54 PM Bug #45590 (Resolved): qa: TypeError: unsupported operand type(s) for +: 'range' and 'range'
- ...
- 05:59 PM Bug #45521: mds: layout parser does not handle [-.] in pool names
- Zheng Yan wrote:
>
> is this behavior related to this issue?
>
Not at all -- I put this in the wrong tracker....
- 01:48 PM Bug #45532 (Triaged): cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- 01:46 PM Bug #45553 (Duplicate): mds: rstats on snapshot are updated by changes to HEAD
- 01:12 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Okay, I will work on writing up a new ticket for the slow requests problem and at the moment not do anything to trou...
- 12:58 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Unfortunately, ganesha doesn't have great instrumentation in this area. There is a ganeshactl program that ships with...
- 12:22 PM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Ganesha patches are merged and have been for over a week. The libcephfs bits are also still ready, but testing is tak...
- 04:15 AM Bug #45575 (Resolved): cephfs-journal-tool: incorrect read_offset after finding missing objects
- in JournalScanner::scan_events(), read_offset is not increased when finding missing objects, which will lead to a wrong r...
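The scanner-offset bug pattern described above can be sketched as follows. This is a hypothetical Python model, not the real C++ JournalScanner: the object size, names, and layout are all illustrative. It shows why a scan loop must advance its read offset even for objects it cannot find, otherwise later objects are attributed to the wrong journal position.

```python
# Hypothetical sketch of the read_offset bug: a journal scan must advance
# the offset past missing objects, or subsequent reads use a stale offset.
# OBJECT_SIZE and the dict-of-objects model are illustrative only.

OBJECT_SIZE = 4  # pretend each journal object holds 4 bytes

def scan(objects, object_count):
    """Scan journal objects in order, skipping missing ones but always
    advancing read_offset by the object size."""
    read_offset = 0
    events = []
    for i in range(object_count):
        data = objects.get(i)  # None models a missing journal object
        if data is None:
            read_offset += OBJECT_SIZE  # the fix: still advance the offset
            continue
        events.append((read_offset, data))
        read_offset += OBJECT_SIZE
    return events

# Object 1 is missing; object 2 must be recorded at offset 8, not 4.
print(scan({0: "a", 2: "c"}, 3))  # [(0, 'a'), (8, 'c')]
```

Without the advance in the missing-object branch, object 2 would be recorded at offset 4, i.e. the "wrong read offset" the report describes.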