Activity
From 11/03/2020 to 12/02/2020
12/02/2020
- 08:18 PM Bug #48422 (Fix Under Review): mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.count(md...
- 08:08 AM Bug #48422 (Resolved): mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.count(mds->get_n...
- ...
- 08:12 PM Bug #48439: fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) client sess...
- Jeff Layton wrote:
> Answering my own question, looks like: 4.18.0-240.1.1.el8_3.x86_64. I'd be interested to see if...
- 08:08 PM Bug #48439: fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) client sess...
- Answering my own question, looks like: 4.18.0-240.1.1.el8_3.x86_64. I'd be interested to see if this is still a probl...
- 08:01 PM Bug #48439: fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) client sess...
- I wonder if this is the same problem as https://tracker.ceph.com/issues/47563? What kernel was the client running?
- 08:00 PM Bug #48439: fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) client sess...
- relevant ECONNRESET:...
- 07:56 PM Bug #48439 (Resolved): fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) ...
- ...
- 08:09 PM Bug #48203 (Resolved): qa: quota failure
- 08:08 PM Bug #48206 (Pending Backport): client: fix crash when doing remount in none fuse case
- 08:07 PM Fix #15134 (Resolved): multifs: test case exercising mds_thrash for multiple filesystems
- 07:48 PM Feature #6373: kcephfs: qa: test fscache
- Patch to add arbitrary mount options to kclient:
https://github.com/ceph/ceph/pull/38407
- 11:09 AM Bug #44415 (Resolved): cephfs.pyx: passing empty string is fine but passing None is not to arg co...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:07 AM Bug #47565 (Resolved): qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:07 AM Bug #47806 (Resolved): mon/MDSMonitor: divide mds identifier and mds real name with dot
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:07 AM Bug #47833 (Resolved): mds FAILED ceph_assert(sessions != 0) in function 'void SessionMap::hit_se...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:06 AM Bug #47881 (Resolved): mon/MDSMonitor: stop all MDS processes in the cluster at the same time. So...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:06 AM Bug #47918 (Resolved): cephfs client and nfs-ganesha have inconsistent reference count after rele...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
12/01/2020
- 09:31 PM Feature #6373: kcephfs: qa: test fscache
- https://github.com/ceph/ceph-cm-ansible/pull/592
https://github.com/ceph/ceph-cm-ansible/pull/593
- 01:48 PM Feature #6373: kcephfs: qa: test fscache
- One other catch. If we want to do testing with fscache, then it would be ideal if we could provision the clients with...
- 07:44 PM Backport #47990 (Resolved): nautilus: qa: "client.4606 isn't responding to mclientcaps(revoke), i...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37840
m...
- 07:43 PM Backport #47988 (Resolved): nautilus: cephfs client and nfs-ganesha have inconsistent reference c...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37838
m...
- 07:43 PM Backport #47957 (Resolved): nautilus: mon/MDSMonitor: stop all MDS processes in the cluster at th...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37822
m...
- 07:43 PM Backport #47939 (Resolved): nautilus: mon/MDSMonitor: divide mds identifier and mds real name wit...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37821
m...
- 07:43 PM Backport #47935 (Resolved): nautilus: mds FAILED ceph_assert(sessions != 0) in function 'void Ses...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37820
m...
- 07:42 PM Backport #46611 (Resolved): nautilus: cephfs.pyx: passing empty string is fine but passing None i...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37725
m...
- 02:47 PM Bug #48411 (Resolved): tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError: rank all fail...
- I got this failure when doing some testing with the draft fscache rework. It looks unrelated to the kernel changes, a...
- 04:05 AM Bug #48242 (Resolved): qa: add debug information for client address for kclient
- 03:46 AM Bug #47786 (Resolved): mds: log [ERR] : failed to commit dir 0x100000005f1.1010* object, errno -2
- 03:44 AM Bug #46769: qa: Refactor cephfs creation/removal code.
- Actually, here's a test failure where we get:...
- 03:38 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- /ceph/teuthology-archive/pdonnell-2020-11-24_19:01:27-fs-wip-pdonnell-testing-20201123.213848-distro-basic-smithi/565...
11/30/2020
- 10:31 PM Backport #47990: nautilus: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37840
merged
- 10:31 PM Backport #47988: cephfs client and nfs-ganesha have inconsistent reference count after ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37838
merged
- 10:29 PM Backport #47957: nautilus: mon/MDSMonitor: stop all MDS processes in the cluster at the same time...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37822
merged
- 10:29 PM Backport #47939: nautilus: mon/MDSMonitor: divide mds identifier and mds real name with dot
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37821
merged
- 10:28 PM Backport #47935: nautilus: mds FAILED ceph_assert(sessions != 0) in function 'void SessionMap::hi...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37820
merged
- 10:27 PM Backport #46611: nautilus: cephfs.pyx: passing empty string is fine but passing None is not to ar...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37725
merged
- 10:11 PM Bug #48313 (Fix Under Review): client: ceph.dir.entries does not acquire necessary caps
- 07:03 PM Feature #48404: client: add a ceph.caps vxattr
- So (e.g.):...
- 06:51 PM Feature #48404 (Resolved): client: add a ceph.caps vxattr
- We recently added a new vxattr to the kernel client, to help support some testing and to generally improve visibility...
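As a rough illustration (not from the tracker), such a vxattr is read through the ordinary xattr interface; the path below is hypothetical and the exact output format of ceph.caps is an assumption:
<pre>
import os

# Hypothetical path on a CephFS mount; "ceph.caps" is the vxattr this
# tracker names. The format of the returned value is an assumption.
path = "/mnt/cephfs/somefile"
print(os.getxattr(path, "ceph.caps").decode())
</pre>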
- 06:34 PM Bug #48403 (Resolved): mds: fix recall defaults based on feedback from production clusters
- They are too low and often cause the MDS to OOM.
- 03:59 PM Fix #47931 (Fix Under Review): Directory quota optimization
- 02:18 PM Backport #48370 (In Progress): octopus: mds: dir->mark_new should together with dir->mark_dirty
- 01:04 PM Backport #48196 (Need More Info): octopus: mgr/volumes: allow/deny r/rw access of auth IDs to sub...
- conflict
- 12:34 PM Backport #48129 (In Progress): octopus: some clients may return failure in the scenario where mul...
- 06:36 AM Feature #48394 (In Progress): mds: defer storing the OpenFileTable journal
- 06:36 AM Feature #48394 (Fix Under Review): mds: defer storing the OpenFileTable journal
- Flushing the OpenFileTable journal to OSD disk may take a bit
longer; if we hold the mds_lock or other locks, i...
11/26/2020
- 11:16 AM Backport #48376 (Resolved): nautilus: libcephfs allows calling ftruncate on a file open read-only
- https://github.com/ceph/ceph/pull/39129
- 11:16 AM Backport #48375 (Resolved): octopus: libcephfs allows calling ftruncate on a file open read-only
- https://github.com/ceph/ceph/pull/38424
- 11:15 AM Backport #48374 (Resolved): nautilus: client: dump which fs is used by client for multiple-fs
- https://github.com/ceph/ceph/pull/38552
- 11:15 AM Backport #48372 (Resolved): octopus: client: dump which fs is used by client for multiple-fs
- https://github.com/ceph/ceph/pull/38551
- 11:15 AM Backport #48371 (Resolved): nautilus: mds: dir->mark_new should together with dir->mark_dirty
- https://github.com/ceph/ceph/pull/39128
- 11:15 AM Backport #48370 (Resolved): octopus: mds: dir->mark_new should together with dir->mark_dirty
- https://github.com/ceph/ceph/pull/38352
- 03:53 AM Feature #46866 (Fix Under Review): kceph: add metric for number of pinned capabilities
- 03:53 AM Feature #46866: kceph: add metric for number of pinned capabilities
- The patchwork link: https://patchwork.kernel.org/project/ceph-devel/patch/20201126034743.1151342-1-xiubli@redhat.com/...
11/25/2020
- 09:30 PM Cleanup #48235 (Resolved): client: do not unset the client_debug_inject_tick_delay in libcephfs
- 09:29 PM Bug #48249 (Pending Backport): mds: dir->mark_new should together with dir->mark_dirty
- 09:28 PM Bug #48202 (Pending Backport): libcephfs allows calling ftruncate on a file open read-only
- 09:27 PM Feature #48246 (Pending Backport): client: dump which fs is used by client for multiple-fs
- 09:24 PM Bug #48365 (Resolved): qa: ffsb build failure on CentOS 8.2
- ...
11/24/2020
- 09:22 AM Feature #48337 (Fix Under Review): client: add ceph.cluster_fsid/ceph.client_id vxattr support in...
- 07:09 AM Feature #48337 (In Progress): client: add ceph.cluster_fsid/ceph.client_id vxattr support in libc...
- 07:08 AM Feature #48337 (Resolved): client: add ceph.cluster_fsid/ceph.client_id vxattr support in libcephfs
- For more detail, see: https://tracker.ceph.com/issues/44340
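A minimal sketch of how these vxattrs would be consumed, assuming a hypothetical mount at /mnt/cephfs; the vxattr names come from the tracker title, the output format is an assumption:
<pre>
import os

mnt = "/mnt/cephfs"  # hypothetical mount point
fsid = os.getxattr(mnt, "ceph.cluster_fsid").decode()
client_id = os.getxattr(mnt, "ceph.client_id").decode()
# Handy for correlating a mount with MDS-side session listings.
print(fsid, client_id)
</pre>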
11/23/2020
- 02:40 PM Bug #48318 (Won't Fix): Client: the directory's capacity will not be updated after write data int...
- rstats are propagated lazily. Try doing an fsync.
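A sketch of the suggested workaround (paths are hypothetical, and even with an fsync the rstat propagation timing is not strictly guaranteed):
<pre>
import os

d = "/mnt/cephfs/dir"  # hypothetical directory on a CephFS mount
with open(os.path.join(d, "data"), "wb") as f:
    f.write(b"x" * 4096)
    f.flush()
    os.fsync(f.fileno())  # flush dirty data and caps first
# ceph.dir.rbytes is the recursive-size rstat; it is updated lazily.
print(os.getxattr(d, "ceph.dir.rbytes").decode())
</pre>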
11/21/2020
- 03:01 AM Bug #48318 (Resolved): Client: the directory's capacity will not be updated after write data into...
- The reproduction steps are as follows:...
11/20/2020
- 11:28 PM Bug #48313 (In Progress): client: ceph.dir.entries does not acquire necessary caps
- My mistake -- fix isn't quite ready yet. We might want to roll in a fix that gets the same caps when we look for the ...
- 11:23 PM Bug #48313: client: ceph.dir.entries does not acquire necessary caps
- @Jeff, you marked this as "Fix under review" but where is the PR?
- 03:38 PM Bug #48313 (Resolved): client: ceph.dir.entries does not acquire necessary caps
- Cloned from linux kernel client tracker #48104. The userland client needs the same change to take Fs caps for dirstat...
- 09:08 PM Feature #6373: kcephfs: qa: test fscache
- I started testing fscache in my home environment about a year ago and found that it was pretty horribly broken. David...
- 03:48 AM Backport #48111 (In Progress): octopus: doc: document MDS recall configurations
11/18/2020
- 09:54 PM Backport #48286 (Resolved): nautilus: rados/upgrade/nautilus-x-singleton fails due to cluster [WR...
- https://github.com/ceph/ceph/pull/39706
- 09:54 PM Backport #48285 (Resolved): octopus: rados/upgrade/nautilus-x-singleton fails due to cluster [WRN...
- https://github.com/ceph/ceph/pull/38422
- 06:57 PM Bug #47689: rados/upgrade/nautilus-x-singleton fails due to cluster [WRN] evicting unresponsive c...
- Backport because of these failures: https://pulpito.ceph.com/teuthology-2020-11-17_03:29:52-upgrade:nautilus-p2p-naut...
- 06:57 PM Bug #47689 (Pending Backport): rados/upgrade/nautilus-x-singleton fails due to cluster [WRN] evic...
- 02:56 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- I have a candidate fix in Teuthology at the moment, just as an update.
11/17/2020
- 03:34 PM Feature #12274 (Resolved): mds: start forward scrubs from all subtree roots, skip non-auth metadata
- 04:51 AM Cleanup #48235 (Fix Under Review): client: do not unset the client_debug_inject_tick_delay in lib...
- 04:12 AM Cleanup #48235 (In Progress): client: do not unset the client_debug_inject_tick_delay in libcephfs
11/16/2020
- 06:01 PM Bug #48203: qa: quota failure
- Created pull-request https://github.com/ceph/ceph/pull/38112 that simply reverts the fuse-client commit b8954e5734b3 ...
- 05:32 PM Bug #48203: qa: quota failure
- Luis Henriques wrote:
> Patrick Donnelly wrote:
> > Luis Henriques wrote:
> > > After discussing this with Jeff on...
- 11:02 AM Bug #48203: qa: quota failure
- Patrick Donnelly wrote:
> Luis Henriques wrote:
> > After discussing this with Jeff on the mailing-list[1] we agree...
- 05:37 PM Bug #48249 (Fix Under Review): mds: dir->mark_new should together with dir->mark_dirty
- 02:40 PM Bug #48249 (Resolved): mds: dir->mark_new should together with dir->mark_dirty
- 02:52 PM Bug #43493 (Can't reproduce): osdc: fix null pointer caused program crash
- 02:38 PM Feature #48246 (Fix Under Review): client: dump which fs is used by client for multiple-fs
- 10:00 AM Feature #48246 (Resolved): client: dump which fs is used by client for multiple-fs
- In a multiple-fs scenario, we may need to quickly find out which fs is used by a client when debugging online issues.
- 08:23 AM Backport #48192 (In Progress): nautilus: mds: throttle workloads which acquire caps faster than t...
- 07:31 AM Backport #48191 (In Progress): octopus: mds: throttle workloads which acquire caps faster than th...
- 05:53 AM Bug #48242 (Fix Under Review): qa: add debug information for client address for kclient
- 04:46 AM Bug #48242: qa: add debug information for client address for kclient
- The libcephfs will be:...
- 04:40 AM Bug #48242 (Resolved): qa: add debug information for client address for kclient
- The kernel-related tracker is
https://tracker.ceph.com/issues/48057
11/15/2020
- 08:39 PM Bug #45342 (Resolved): qa/tasks/vstart_runner.py: RuntimeError: Fuse mount failed to populate /sy...
- 08:38 PM Bug #47842 (Resolved): qa: "fsstress.sh: line 16: 28870 Bus error (core dumped) "$B...
- 08:37 PM Bug #48147 (Resolved): qa: vstart_runner crashes when run with kernel client
- 08:36 PM Bug #48207 (Resolved): qa: switch to 'osdop_read' instead of 'op_r' for test_readahead
- 01:52 AM Bug #47786 (Fix Under Review): mds: log [ERR] : failed to commit dir 0x100000005f1.1010* object, ...
11/14/2020
- 10:10 AM Bug #42299 (Resolved): mgr/volumes: cleanup libcephfs handles on mgr shutdown
- 10:08 AM Backport #42738 (Resolved): nautilus: mgr/volumes: cleanup libcephfs handles on mgr shutdown
- 01:00 AM Cleanup #48235 (Resolved): client: do not unset the client_debug_inject_tick_delay in libcephfs
- The related link: https://github.com/ceph/ceph/pull/37746#discussion_r520950057
11/13/2020
- 08:31 PM Bug #48203: qa: quota failure
- Luis Henriques wrote:
> After discussing this with Jeff on the mailing-list[1] we agreed that the best thing to do i...
- 08:24 PM Bug #48231 (Resolved): qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- ...
11/12/2020
- 08:09 PM Bug #48125 (Fix Under Review): qa: test_subvolume_snapshot_clone_cancel_in_progress failure
- 08:08 PM Bug #47998 (Fix Under Review): cephfs kernel client hung
- 06:29 PM Bug #47998: cephfs kernel client hung
- Ok, I sent a patch to the mailing list (and you) this morning that may help this. The basic idea is to move the iget5...
- 03:27 PM Bug #48203: qa: quota failure
- After discussing this with Jeff on the mailing-list[1] we agreed that the best thing to do is to simply revert to ret...
- 10:17 AM Fix #48053 (Fix Under Review): qa: update test_readahead to work with the kernel
- 07:52 AM Feature #46746 (New): mgr/nfs: Add interface to accept yaml file for creating clusters
- 01:20 AM Bug #48207 (Fix Under Review): qa: switch to 'osdop_read' instead of 'op_r' for test_readahead
- 01:19 AM Bug #48207 (Resolved): qa: switch to 'osdop_read' instead of 'op_r' for test_readahead
- The 'op_r' counter just accounts for the CEPH_OSD_FLAG_READ flag, which will
include some other opcodes that aren't real data reads, like...
- 01:17 AM Bug #48206 (Fix Under Review): client: fix crash when doing remount in none fuse case
- 01:12 AM Bug #48206 (Resolved): client: fix crash when doing remount in none fuse case
- g_conf() will try to dereference `g_ceph_context` to get `_conf`, but `g_ceph_context` is not set in ...
11/11/2020
- 08:48 PM Bug #48202 (Fix Under Review): libcephfs allows calling ftruncate on a file open read-only
- 07:37 PM Bug #48202 (In Progress): libcephfs allows calling ftruncate on a file open read-only
- 06:50 PM Bug #48202 (Resolved): libcephfs allows calling ftruncate on a file open read-only
- When calling ceph_ftruncate on an "fd" open read-only, using the O_RDONLY flag, libcephfs does not return an error a...
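For reference, a reproducer sketch through a kernel mount (path hypothetical): POSIX requires ftruncate to fail with EINVAL when the descriptor is not open for writing, and the libcephfs ceph_ftruncate call should mirror that behavior:
<pre>
import errno
import os

fd = os.open("/mnt/cephfs/file", os.O_RDONLY)  # hypothetical path
try:
    os.ftruncate(fd, 0)
    print("BUG: ftruncate on an O_RDONLY fd succeeded")
except OSError as e:
    assert e.errno in (errno.EINVAL, errno.EBADF)  # fd not open for writing
finally:
    os.close(fd)
</pre>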
- 08:46 PM Bug #48203: qa: quota failure
- In response to: https://tracker.ceph.com/issues/36593#note-14
Yes, there is not an easy solution here. I guess we ...
- 08:44 PM Bug #48203 (Resolved): qa: quota failure
- ...
- 08:45 PM Bug #36593: qa: quota failure caused by clients stepping on each other
- Thanks for checking, Luis. I made a new ticket here: https://tracker.ceph.com/issues/48203
Let's move the discussio...
- 03:07 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Adam Emerson wrote:
> Patrick Donnelly wrote:
> > Another: /ceph/teuthology-archive/pdonnell-2020-09-26_05:47:56-fs...
- 02:25 PM Backport #48196 (Resolved): octopus: mgr/volumes: allow/deny r/rw access of auth IDs to subvolume...
- https://github.com/ceph/ceph/pull/39390
- 02:25 PM Backport #48195 (Resolved): nautilus: mgr/volumes: allow/deny r/rw access of auth IDs to subvolum...
- https://github.com/ceph/ceph/pull/39292
- 02:22 PM Bug #45575 (Resolved): cephfs-journal-tool: incorrect read_offset after finding missing objects
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:20 PM Backport #48192 (Resolved): nautilus: mds: throttle workloads which acquire caps faster than the ...
- https://github.com/ceph/ceph/pull/38101
- 02:20 PM Backport #48191 (Resolved): octopus: mds: throttle workloads which acquire caps faster than the c...
- https://github.com/ceph/ceph/pull/38095
- 02:19 PM Bug #47783 (Resolved): mgr/nfs: Pseudo path prints wrong error message
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:55 PM Backport #47940 (Resolved): octopus: mon/MDSMonitor: divide mds identifier and mds real name with...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37857
m...
- 01:55 PM Backport #47936 (Resolved): octopus: mds FAILED ceph_assert(sessions != 0) in function 'void Sess...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37856
m...
- 01:55 PM Backport #47891 (Resolved): octopus: mgr/nfs: Pseudo path prints wrong error message
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37855
m...
- 01:54 PM Backport #46959 (Resolved): octopus: cephfs-journal-tool: incorrect read_offset after finding mis...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37854
m...
- 01:54 PM Backport #47991 (Resolved): octopus: qa: "client.4606 isn't responding to mclientcaps(revoke), in...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37841
m...
- 01:40 PM Backport #47989 (Resolved): octopus: cephfs client and nfs-ganesha have inconsistent reference co...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37839
m...
- 01:40 PM Backport #46610 (Resolved): octopus: cephfs.pyx: passing empty string is fine but passing None is...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37724
m...
- 01:40 PM Backport #47824: octopus: pybind/mgr/volumes: Make number of cloner threads configurable
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37671
m...
- 12:41 PM Bug #47563 (Resolved): qa: kernel client closes session improperly causing eviction due to timeout
- Patrick Donnelly wrote:
> Jeff, this is just waiting on kcephfs patches now right?
Yes. The patch was just merged...
- 02:46 AM Bug #47563: qa: kernel client closes session improperly causing eviction due to timeout
- Jeff, this is just waiting on kcephfs patches now right?
- 04:04 AM Bug #47998: cephfs kernel client hung
- Jeff Layton wrote:
> My guess is that it probably won't help, actually. I think that you're correct that this is a s...
- 01:19 AM Feature #46866: kceph: add metric for number of pinned capabilities
- Patrick Donnelly wrote:
> Xiubo, can we close this?
The feature of sending the pinned-cap metric to the MDS is not finis...
- 01:15 AM Feature #46865 (Fix Under Review): client: add metric for number of pinned capabilities
- Patrick Donnelly wrote:
> Status on this?
Sorry, I forgot to update it; the PR is under review.
11/10/2020
- 06:02 PM Bug #47689 (Resolved): rados/upgrade/nautilus-x-singleton fails due to cluster [WRN] evicting unr...
- 06:00 PM Bug #42271 (Resolved): client: ceph-fuse which had been blacklisted couldn't auto reconnect after...
- 05:51 PM Bug #44288 (Won't Fix): MDSMap encoder "ev" (extended version) is not checked for validity when d...
- 05:48 PM Bug #46616 (Rejected): client: avoid adding inode already in the caps delayed list
- 05:46 PM Fix #47983 (Closed): mds: use proper gather for inode commit ops
- 05:42 PM Feature #46865: client: add metric for number of pinned capabilities
- Status on this?
- 05:41 PM Feature #46866: kceph: add metric for number of pinned capabilities
- Xiubo, can we close this?
- 05:40 PM Feature #38951 (Resolved): client: implement asynchronous unlink/create
- 05:38 PM Feature #40681 (Rejected): mds: show total number of opened files beneath a directory
- 05:38 PM Feature #42831 (Resolved): mds: add config to deny all client reconnects
- 05:37 PM Feature #38052 (New): mds: provide interface to control/view internal operations
- 05:35 PM Feature #12274 (Fix Under Review): mds: start forward scrubs from all subtree roots, skip non-aut...
- 03:47 PM Bug #36593: qa: quota failure caused by clients stepping on each other
- The kernel client code is optimized to buffer the new file size when doing the truncate syscall:...
- 02:23 PM Bug #36593: qa: quota failure caused by clients stepping on each other
- Patrick Donnelly wrote:
> /ceph/teuthology-archive/pdonnell-2020-11-04_17:39:34-fs-wip-pdonnell-testing-20201103.210...
11/09/2020
- 10:18 PM Bug #48125: qa: test_subvolume_snapshot_clone_cancel_in_progress failure
- Jeff Layton wrote:
> I assume that "_deleting" is a directory and that this test is expecting to see a particular li...
- 06:06 PM Bug #48125: qa: test_subvolume_snapshot_clone_cancel_in_progress failure
- I assume that "_deleting" is a directory and that this test is expecting to see a particular link count on the direct...
- 02:38 PM Bug #48125 (Triaged): qa: test_subvolume_snapshot_clone_cancel_in_progress failure
- 09:03 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Patrick Donnelly wrote:
> Another: /ceph/teuthology-archive/pdonnell-2020-09-26_05:47:56-fs-wip-pdonnell-testing-202...
- 06:13 PM Bug #44821: Can not move directory when CephFS is lower layer for OverlayFS
- After some more digging with a different report [1], I found out that the problem may actually be related to the fa...
- 11:19 AM Bug #44821: Can not move directory when CephFS is lower layer for OverlayFS
- Could it be that you're using tmpfs without xattrs support? You can check this in your kernel config, if it has CONF...
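A quick check along those lines, assuming the truncated option is CONFIG_TMPFS_XATTR and that the distro ships the running kernel's config under /boot:
<pre>
import os

# Assumption: the option referred to above is CONFIG_TMPFS_XATTR.
cfg = "/boot/config-" + os.uname().release
with open(cfg) as f:
    print([line.strip() for line in f if "CONFIG_TMPFS_XATTR" in line])
</pre>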
- 04:34 PM Bug #47981 (Resolved): mds: count error of modified dentries
- 01:48 PM Bug #47981: mds: count error of modified dentries
- Allegedly fixes a regression introduced by (the fix for) #47148, which is not currently slated for backport.
- 03:54 PM Backport #47095 (New): octopus: mds: provide alternatives to increase the total cephfs subvolume ...
- 09:39 AM Backport #47095: octopus: mds: provide alternatives to increase the total cephfs subvolume snapsh...
- Is there any news on when the backport to octopus will happen? Without this backport, snapshotting is unfortunately not ...
- 02:47 PM Backport #48090 (Rejected): octopus: mds: count error of modified dentries
- That commit is not slated to be backported: https://tracker.ceph.com/issues/47148
Closing this.
- 07:57 AM Backport #48090 (Need More Info): octopus: mds: count error of modified dentries
- octopus does not have the following commits:...
- 02:47 PM Backport #48089 (Rejected): nautilus: mds: count error of modified dentries
- That commit is not slated to be backported: https://tracker.ceph.com/issues/47148
Closing this.
- 08:00 AM Backport #48089 (Need More Info): nautilus: mds: count error of modified dentries
- nautilus does not have the following commits:...
- 02:41 PM Bug #48075 (Triaged): qa: AssertionError: 12582912 != 'infinite'
- Need to undo this commit to test: 319dfe9119a7858f458c1e897e13fdb11231694a
- 02:36 PM Bug #48148 (Triaged): mds: Server.cc:6764 FAILED assert(in->filelock.can_read(mdr->get_client()))
- 09:06 AM Bug #48148 (Triaged): mds: Server.cc:6764 FAILED assert(in->filelock.can_read(mdr->get_client()))
- In my cluster with a single MDS (ceph version 12.2.13), the assert is hit when a large number of deletion ...
- 01:50 PM Feature #47148: mds: get rid of the mds_lock when storing the inode backtrace to meta pool
- Allegedly the fix for this issue introduced a regression, #47981.
That being the case, this fix should be backport...
- 11:51 AM Bug #47998: cephfs kernel client hung
- Jeff Layton wrote:
> Yeah, those look like they are stuck on spinlocks. This bug is more about sleeping locks (mutex...
- 09:18 AM Backport #48110 (In Progress): nautilus: client: ::_read fails to advance pos at EOF checking
- 08:38 AM Backport #48109 (In Progress): octopus: client: ::_read fails to advance pos at EOF checking
- 08:34 AM Backport #48097 (In Progress): nautilus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- 08:24 AM Backport #48098 (In Progress): octopus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- 08:13 AM Backport #47824 (Resolved): octopus: pybind/mgr/volumes: Make number of cloner threads configurable
- 08:11 AM Backport #48095 (In Progress): nautilus: mds: fix file recovery crash after replaying delayed req...
- 08:03 AM Backport #48096 (In Progress): octopus: mds: fix file recovery crash after replaying delayed requ...
- 06:20 AM Bug #48147 (Resolved): qa: vstart_runner crashes when run with kernel client
- Addition of the line @self.rbytes = config.get("rbytes", False)@ to kernel_mount.py leads to a crash when testcases are run...
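Assuming the crash is @config@ being None when kernel_mount.py is driven by vstart_runner (the report above is truncated, so this is a guess), a defensive sketch:
<pre>
class KernelMountStub:
    """Hypothetical stand-in for kernel_mount.KernelMount's config handling."""
    def __init__(self, config=None):
        # If config is None, config.get(...) raises AttributeError;
        # defaulting to an empty dict avoids that.
        self.rbytes = (config or {}).get("rbytes", False)

print(KernelMountStub().rbytes)                  # False
print(KernelMountStub({"rbytes": True}).rbytes)  # True
</pre>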
- 04:02 AM Fix #48053: qa: update test_readahead to work with the kernel
- To make this work for kclient we need some patches to support counting the read/write ops.
11/07/2020
- 04:52 AM Feature #40401 (Pending Backport): mgr/volumes: allow/deny r/rw access of auth IDs to subvolume a...
- 04:51 AM Bug #47307 (Pending Backport): mds: throttle workloads which acquire caps faster than the client ...
11/06/2020
- 07:41 PM Bug #47998: cephfs kernel client hung
- Yeah, those look like they are stuck on spinlocks. This bug is more about sleeping locks (mutexes, rwsems and the pag...
- 05:11 PM Bug #47998: cephfs kernel client hung
- Luis Henriques wrote:
...
> I've started looking at this yesterday (I wasn't aware of this bug), and my theory was t...
- 07:06 PM Backport #47940: octopus: mon/MDSMonitor: divide mds identifier and mds real name with dot
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37857
merged
- 07:06 PM Backport #47936: octopus: mds FAILED ceph_assert(sessions != 0) in function 'void SessionMap::hit...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37856
merged
- 07:05 PM Backport #47891: octopus: mgr/nfs: Pseudo path prints wrong error message
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37855
merged
- 07:05 PM Backport #46959: octopus: cephfs-journal-tool: incorrect read_offset after finding missing objects
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37854
merged
- 07:04 PM Backport #47991: octopus: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x2000000...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37841
merged
- 07:04 PM Backport #47989: octopus: cephfs client and nfs-ganesha have inconsistent reference count after r...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37839
merged
- 07:02 PM Backport #46610: octopus: cephfs.pyx: passing empty string is fine but passing None is not to arg...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37724
merged
- 07:01 PM Backport #47824: octopus: pybind/mgr/volumes: Make number of cloner threads configurable
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37671
merged
- 06:37 PM Bug #48143 (Won't Fix - EOL): octopus: qa: statfs command timeout is too short
- ...
- 11:26 AM Bug #43762 (Closed): pybind/mgr/volumes: create fails with TypeError
- Closing this, as this bug is fixed via https://tracker.ceph.com/issues/46360 and also backported to nautilus.
11/05/2020
- 09:43 PM Bug #47998: cephfs kernel client hung
- Jeff Layton wrote:
> My guess is that it probably won't help, actually. I think that you're correct that this is a s...
- 06:02 PM Bug #47998: cephfs kernel client hung
- My guess is that it probably won't help, actually. I think that you're correct that this is a similar problem but not...
- 05:57 PM Backport #47958 (Resolved): octopus: mon/MDSMonitor: stop all MDS processes in the cluster at the...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37858
m...
- 04:31 PM Backport #47958: octopus: mon/MDSMonitor: stop all MDS processes in the cluster at the same time....
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37858
merged
- 02:38 PM Backport #48130 (Resolved): nautilus: some clients may return failure in the scenario where multi...
- https://github.com/ceph/ceph/pull/39127
- 02:38 PM Backport #48129 (Resolved): octopus: some clients may return failure in the scenario where multip...
- https://github.com/ceph/ceph/pull/38349
- 01:24 PM Bug #45338 (Closed): find leads to recursive output with nfs mount
- No response in several months. Closing bug.
- 09:20 AM Feature #44931 (In Progress): mgr/volumes: get the list of auth IDs that have been granted access...
- 05:03 AM Bug #45100 (Resolved): qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)
- 05:01 AM Bug #47854 (Pending Backport): some clients may return failure in the scenario where multiple cli...
- 05:01 AM Bug #47844 (Resolved): mds: only update the requesting metrics
- 05:00 AM Feature #43423 (Resolved): mds: collect and show the dentry lease metric
- 04:00 AM Bug #48125 (Resolved): qa: test_subvolume_snapshot_clone_cancel_in_progress failure
- ...
11/04/2020
- 10:06 PM Documentation #43031 (Closed): CephFS Documentation Sprint 3
- 07:05 PM Fix #48121 (Resolved): qa: merge fs/multimds suites
- 07:04 PM Cleanup #23718 (Resolved): qa: merge fs/kcephfs suites
- 06:44 PM Bug #36593: qa: quota failure caused by clients stepping on each other
- /ceph/teuthology-archive/pdonnell-2020-11-04_17:39:34-fs-wip-pdonnell-testing-20201103.210407-distro-basic-smithi/559...
- 02:41 PM Fix #41782 (Resolved): mds: allow stray directories to fragment and switch from 10 stray director...
- 09:25 AM Backport #48112 (Rejected): nautilus: doc: document MDS recall configurations
- 09:25 AM Backport #48111 (Resolved): octopus: doc: document MDS recall configurations
- https://github.com/ceph/ceph/pull/38202
- 09:24 AM Backport #48110 (Resolved): nautilus: client: ::_read fails to advance pos at EOF checking
- https://github.com/ceph/ceph/pull/37991
- 09:24 AM Backport #48109 (Resolved): octopus: client: ::_read fails to advance pos at EOF checking
- https://github.com/ceph/ceph/pull/37989
- 04:23 AM Feature #47005 (Resolved): kceph: add metric for number of pinned capabilities and number of dirs...
- 02:45 AM Bug #47998: cephfs kernel client hung
- Jeff Layton wrote:
> I'm now wondering whether you might be hitting the problem in 3e1d0452edcee after all. Have you...
11/03/2020
- 09:06 PM Documentation #48010 (Pending Backport): doc: document MDS recall configurations
- 08:54 PM Bug #48076 (Pending Backport): client: ::_read fails to advance pos at EOF checking
- 08:49 PM Backport #47823: nautilus: pybind/mgr/volumes: Make number of cloner threads configurable
- Kotresh Hiremath Ravishankar wrote:
> It's a config tunable that aids performance testing and could also be useful a...
- 01:55 PM Backport #47823 (In Progress): nautilus: pybind/mgr/volumes: Make number of cloner threads config...
- It's a config tunable that aids performance testing and could also be useful in production. Hence it was marked for a ba...
- 04:10 PM Bug #47998: cephfs kernel client hung
- I'm now wondering whether you might be hitting the problem in 3e1d0452edcee after all. Have you tested a kernel with ...
- 02:44 PM Bug #47998: cephfs kernel client hung
- Oh, a friend pointed out that this patch may end up giving us "Busy inodes after umount" problems if the thing is unm...
- 02:37 PM Bug #47998 (In Progress): cephfs kernel client hung
- 01:52 PM Bug #47998: cephfs kernel client hung
- geng jichao wrote:
> iget5_locked, called by ceph_get_inode, may be blocked in some cases. So when ceph_get_inode is c...
- 01:00 PM Bug #47998: cephfs kernel client hung
- iget5_locked, called by ceph_get_inode, may be blocked in some cases. So when ceph_get_inode is called, we should not hold ...
- 12:39 PM Bug #47998: cephfs kernel client hung
- geng jichao wrote:
> I guess this is the case:
> stacks A and B are processing the same inode, but stack C is a different one....
- 07:18 AM Bug #47998: cephfs kernel client hung
- I guess this is the case:
stacks A and B are processing the same inode, but stack C is a different one. The inode corresponding...
- 11:27 AM Backport #48098 (Resolved): octopus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- https://github.com/ceph/ceph/pull/37987
- 11:27 AM Backport #48097 (Resolved): nautilus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- https://github.com/ceph/ceph/pull/37988
- 11:27 AM Bug #46559 (Resolved): Create NFS Ganesha Cluster instructions are misleading
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:26 AM Backport #48096 (Resolved): octopus: mds: fix file recovery crash after replaying delayed requests
- https://github.com/ceph/ceph/pull/37985
- 11:26 AM Backport #48095 (Resolved): nautilus: mds: fix file recovery crash after replaying delayed requests
- https://github.com/ceph/ceph/pull/37986
- 11:24 AM Backport #48090 (Rejected): octopus: mds: count error of modified dentries
- 11:24 AM Backport #48089 (Rejected): nautilus: mds: count error of modified dentries
- 10:21 AM Backport #47942 (Resolved): octopus: octopus: client: hang after statfs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37530
m...
- 10:20 AM Backport #47249 (Resolved): octopus: mon: deleting a CephFS and its pools causes MONs to crash
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37256
m...