Activity
From 10/13/2020 to 11/11/2020
11/11/2020
- 08:48 PM Bug #48202 (Fix Under Review): libcephfs allows calling ftruncate on a file open read-only
- 07:37 PM Bug #48202 (In Progress): libcephfs allows calling ftruncate on a file open read-only
- 06:50 PM Bug #48202 (Resolved): libcephfs allows calling ftruncate on a file open read-only
- When calling ceph_ftruncate on an "fd" open read-only, using the O_RDONLY flag, libcephfs does not return an error a...
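The expected POSIX semantics can be illustrated locally with an ordinary file (a minimal sketch that does not use libcephfs itself): ftruncate() on a descriptor opened O_RDONLY must fail, with EINVAL on Linux (POSIX also permits EBADF), which is the behavior ceph_ftruncate should mirror.

```python
# Sketch of the POSIX behavior the bug violates: truncating through a
# read-only descriptor must fail rather than silently succeed.
import errno
import os
import tempfile

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "f")
with open(path, "w") as f:
    f.write("data")

fd = os.open(path, os.O_RDONLY)
try:
    os.ftruncate(fd, 0)
    raise AssertionError("ftruncate on a read-only fd unexpectedly succeeded")
except OSError as e:
    # Linux returns EINVAL; POSIX also allows EBADF here.
    assert e.errno in (errno.EINVAL, errno.EBADF)
finally:
    os.close(fd)
```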
- 08:46 PM Bug #48203: qa: quota failure
- In response to: https://tracker.ceph.com/issues/36593#note-14
Yes, there is not an easy solution here. I guess we ...
- 08:44 PM Bug #48203 (Resolved): qa: quota failure
- ...
- 08:45 PM Bug #36593: qa: quota failure caused by clients stepping on each other
- Thanks for checking Luis, I made a new ticket here: https://tracker.ceph.com/issues/48203
Let's move the discussio...
- 03:07 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Adam Emerson wrote:
> Patrick Donnelly wrote:
> > Another: /ceph/teuthology-archive/pdonnell-2020-09-26_05:47:56-fs...
- 02:25 PM Backport #48196 (Resolved): octopus: mgr/volumes: allow/deny r/rw access of auth IDs to subvolume...
- https://github.com/ceph/ceph/pull/39390
- 02:25 PM Backport #48195 (Resolved): nautilus: mgr/volumes: allow/deny r/rw access of auth IDs to subvolum...
- https://github.com/ceph/ceph/pull/39292
- 02:22 PM Bug #45575 (Resolved): cephfs-journal-tool: incorrect read_offset after finding missing objects
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:20 PM Backport #48192 (Resolved): nautilus: mds: throttle workloads which acquire caps faster than the ...
- https://github.com/ceph/ceph/pull/38101
- 02:20 PM Backport #48191 (Resolved): octopus: mds: throttle workloads which acquire caps faster than the c...
- https://github.com/ceph/ceph/pull/38095
- 02:19 PM Bug #47783 (Resolved): mgr/nfs: Pseudo path prints wrong error message
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:55 PM Backport #47940 (Resolved): octopus: mon/MDSMonitor: divide mds identifier and mds real name with...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37857
m...
- 01:55 PM Backport #47936 (Resolved): octopus: mds FAILED ceph_assert(sessions != 0) in function 'void Sess...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37856
m...
- 01:55 PM Backport #47891 (Resolved): octopus: mgr/nfs: Pseudo path prints wrong error message
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37855
m...
- 01:54 PM Backport #46959 (Resolved): octopus: cephfs-journal-tool: incorrect read_offset after finding mis...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37854
m...
- 01:54 PM Backport #47991 (Resolved): octopus: qa: "client.4606 isn't responding to mclientcaps(revoke), in...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37841
m...
- 01:40 PM Backport #47989 (Resolved): octopus: cephfs client and nfs-ganesha have inconsistent reference co...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37839
m...
- 01:40 PM Backport #46610 (Resolved): octopus: cephfs.pyx: passing empty string is fine but passing None is...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37724
m...
- 01:40 PM Backport #47824: octopus: pybind/mgr/volumes: Make number of cloner threads configurable
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37671
m...
- 12:41 PM Bug #47563 (Resolved): qa: kernel client closes session improperly causing eviction due to timeout
- Patrick Donnelly wrote:
> Jeff, this is just waiting on kcephfs patches now right?
Yes. The patch was just merged...
- 02:46 AM Bug #47563: qa: kernel client closes session improperly causing eviction due to timeout
- Jeff, this is just waiting on kcephfs patches now right?
- 04:04 AM Bug #47998: cephfs kernel client hung
- Jeff Layton wrote:
> My guess is that it probably won't help, actually. I think that you're correct that this is a s...
- 01:19 AM Feature #46866: kceph: add metric for number of pinned capabilities
- Patrick Donnelly wrote:
> Xiubo, can we close this?
The feature of sending the pinned cap metric to the MDS is not finis...
- 01:15 AM Feature #46865 (Fix Under Review): client: add metric for number of pinned capabilities
- Patrick Donnelly wrote:
> Status on this?
Sorry, I forgot to update it; the PR is under review.
11/10/2020
- 06:02 PM Bug #47689 (Resolved): rados/upgrade/nautilus-x-singleton fails due to cluster [WRN] evicting unr...
- 06:00 PM Bug #42271 (Resolved): client: ceph-fuse which had been blacklisted couldn't auto reconnect after...
- 05:51 PM Bug #44288 (Won't Fix): MDSMap encoder "ev" (extended version) is not checked for validity when d...
- 05:48 PM Bug #46616 (Rejected): client: avoid adding inode already in the caps delayed list
- 05:46 PM Fix #47983 (Closed): mds: use proper gather for inode commit ops
- 05:42 PM Feature #46865: client: add metric for number of pinned capabilities
- Status on this?
- 05:41 PM Feature #46866: kceph: add metric for number of pinned capabilities
- Xiubo, can we close this?
- 05:40 PM Feature #38951 (Resolved): client: implement asynchronous unlink/create
- 05:38 PM Feature #40681 (Rejected): mds: show total number of opened files beneath a directory
- 05:38 PM Feature #42831 (Resolved): mds: add config to deny all client reconnects
- 05:37 PM Feature #38052 (New): mds: provide interface to control/view internal operations
- 05:35 PM Feature #12274 (Fix Under Review): mds: start forward scrubs from all subtree roots, skip non-aut...
- 03:47 PM Bug #36593: qa: quota failure caused by clients stepping on each other
- The kernel client code is optimized to buffer the new file size when doing the truncate syscall:...
- 02:23 PM Bug #36593: qa: quota failure caused by clients stepping on each other
- Patrick Donnelly wrote:
> /ceph/teuthology-archive/pdonnell-2020-11-04_17:39:34-fs-wip-pdonnell-testing-20201103.210...
11/09/2020
- 10:18 PM Bug #48125: qa: test_subvolume_snapshot_clone_cancel_in_progress failure
- Jeff Layton wrote:
> I assume that "_deleting" is a directory and that this test is expecting to see a particular li...
- 06:06 PM Bug #48125: qa: test_subvolume_snapshot_clone_cancel_in_progress failure
- I assume that "_deleting" is a directory and that this test is expecting to see a particular link count on the direct...
- 02:38 PM Bug #48125 (Triaged): qa: test_subvolume_snapshot_clone_cancel_in_progress failure
- 09:03 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Patrick Donnelly wrote:
> Another: /ceph/teuthology-archive/pdonnell-2020-09-26_05:47:56-fs-wip-pdonnell-testing-202...
- 06:13 PM Bug #44821: Can not move directory when CephFS is lower layer for OverlayFS
- After some more digging with a different report [1], I found out that the problem may actually be related to the fa...
- 11:19 AM Bug #44821: Can not move directory when CephFS is lower layer for OverlayFS
- Could it be that you're using tmpfs without xattrs support? You can check this in your kernel config, if it has CONF...
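A quick way to perform that check is something like the following (a sketch; CONFIG_TMPFS_XATTR is the assumed option name, and kernel config file locations vary by distro):

```shell
# Sketch: look for tmpfs xattr support in a kernel config file.
# CONFIG_TMPFS_XATTR is the assumed option name; common config
# locations are /boot/config-$(uname -r) and /proc/config.gz.
has_tmpfs_xattr() {
  grep '^CONFIG_TMPFS_XATTR=y' "$1"
}

# On a live system you would typically run one of:
#   has_tmpfs_xattr "/boot/config-$(uname -r)"
#   zcat /proc/config.gz | grep '^CONFIG_TMPFS_XATTR=y'
```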
- 04:34 PM Bug #47981 (Resolved): mds: count error of modified dentries
- 01:48 PM Bug #47981: mds: count error of modified dentries
- Allegedly fixes a regression introduced by (the fix for) #47148, which is not currently slated for backport.
- 03:54 PM Backport #47095 (New): octopus: mds: provide altrenatives to increase the total cephfs subvolume ...
- 09:39 AM Backport #47095: octopus: mds: provide altrenatives to increase the total cephfs subvolume snapsh...
- Is there any news on when the backport to octopus will happen? Without this backport, snapshotting is unfortunately not ...
- 02:47 PM Backport #48090 (Rejected): octopus: mds: count error of modified dentries
- That commit is not slated to be backported: https://tracker.ceph.com/issues/47148
Closing this.
- 07:57 AM Backport #48090 (Need More Info): octopus: mds: count error of modified dentries
- octopus does not have the following commits:...
- 02:47 PM Backport #48089 (Rejected): nautilus: mds: count error of modified dentries
- That commit is not slated to be backported: https://tracker.ceph.com/issues/47148
Closing this.
- 08:00 AM Backport #48089 (Need More Info): nautilus: mds: count error of modified dentries
- nautilus does not have the following commits:...
- 02:41 PM Bug #48075 (Triaged): qa: AssertionError: 12582912 != 'infinite'
- Need to undo this commit to test: 319dfe9119a7858f458c1e897e13fdb11231694a
- 02:36 PM Bug #48148 (Triaged): mds: Server.cc:6764 FAILED assert(in->filelock.can_read(mdr->get_client()))
- 09:06 AM Bug #48148 (Triaged): mds: Server.cc:6764 FAILED assert(in->filelock.can_read(mdr->get_client()))
- In my cluster with a single MDS (ceph version 12.2.13), the assert is hit when a large number of deletion ...
- 01:50 PM Feature #47148: mds: get rid of the mds_lock when storing the inode backtrace to meta pool
- Allegedly the fix for this issue introduced a regression, #47981.
That being the case, this fix should be backport...
- 11:51 AM Bug #47998: cephfs kernel client hung
- Jeff Layton wrote:
> Yeah, those look like they are stuck on spinlocks. This bug is more about sleeping locks (mutex...
- 09:18 AM Backport #48110 (In Progress): nautilus: client: ::_read fails to advance pos at EOF checking
- 08:38 AM Backport #48109 (In Progress): octopus: client: ::_read fails to advance pos at EOF checking
- 08:34 AM Backport #48097 (In Progress): nautilus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- 08:24 AM Backport #48098 (In Progress): octopus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- 08:13 AM Backport #47824 (Resolved): octopus: pybind/mgr/volumes: Make number of cloner threads configurable
- 08:11 AM Backport #48095 (In Progress): nautilus: mds: fix file recovery crash after replaying delayed req...
- 08:03 AM Backport #48096 (In Progress): octopus: mds: fix file recovery crash after replaying delayed requ...
- 06:20 AM Bug #48147 (Resolved): qa: vstart_runner crashes when run with kernel client
- Addition of the line @self.rbytes = config.get("rbytes", False)@ to kernel_mount.py leads to a crash when test cases are run...
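One plausible failure mode (an assumption on my part; the report does not say why it crashes) is that config can be None for kernel mounts, so config.get() raises AttributeError. A hypothetical guard, with parse_rbytes as an illustrative stand-in, not the vstart_runner code:

```python
# Hypothetical sketch: if config may be None, calling config.get()
# raises AttributeError; falling back to an empty dict avoids that.
def parse_rbytes(config):
    config = config or {}              # tolerate config=None
    return config.get("rbytes", False)

assert parse_rbytes(None) is False
assert parse_rbytes({"rbytes": True}) is True
```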
- 04:02 AM Fix #48053: qa: update test_readahead to work with the kernel
- To make this work for kclient we need some patches to support counting the read/write ops.
11/07/2020
- 04:52 AM Feature #40401 (Pending Backport): mgr/volumes: allow/deny r/rw access of auth IDs to subvolume a...
- 04:51 AM Bug #47307 (Pending Backport): mds: throttle workloads which acquire caps faster than the client ...
11/06/2020
- 07:41 PM Bug #47998: cephfs kernel client hung
- Yeah, those look like they are stuck on spinlocks. This bug is more about sleeping locks (mutexes, rwsems and the pag...
- 05:11 PM Bug #47998: cephfs kernel client hung
- Luis Henriques wrote:
...
> I've started looking at this yesterday (I wasn't aware of this bug), and my theory was t...
- 07:06 PM Backport #47940: octopus: mon/MDSMonitor: divide mds identifier and mds real name with dot
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37857
merged
- 07:06 PM Backport #47936: octopus: mds FAILED ceph_assert(sessions != 0) in function 'void SessionMap::hit...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37856
merged
- 07:05 PM Backport #47891: octopus: mgr/nfs: Pseudo path prints wrong error message
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37855
merged
- 07:05 PM Backport #46959: octopus: cephfs-journal-tool: incorrect read_offset after finding missing objects
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37854
merged
- 07:04 PM Backport #47991: octopus: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x2000000...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37841
merged
- 07:04 PM Backport #47989: octopus: cephfs client and nfs-ganesha have inconsistent reference count after r...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37839
merged
- 07:02 PM Backport #46610: octopus: cephfs.pyx: passing empty string is fine but passing None is not to arg...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37724
merged
- 07:01 PM Backport #47824: octopus: pybind/mgr/volumes: Make number of cloner threads configurable
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37671
merged
- 06:37 PM Bug #48143 (Won't Fix - EOL): octopus: qa: statfs command timeout is too short
- ...
- 11:26 AM Bug #43762 (Closed): pybind/mgr/volumes: create fails with TypeError
- Closing this, as this bug is fixed via https://tracker.ceph.com/issues/46360 and also backported to nautilus.
11/05/2020
- 09:43 PM Bug #47998: cephfs kernel client hung
- Jeff Layton wrote:
> My guess is that it probably won't help, actually. I think that you're correct that this is a s...
- 06:02 PM Bug #47998: cephfs kernel client hung
- My guess is that it probably won't help, actually. I think that you're correct that this is a similar problem but not...
- 05:57 PM Backport #47958 (Resolved): octopus: mon/MDSMonitor: stop all MDS processes in the cluster at the...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37858
m...
- 04:31 PM Backport #47958: octopus: mon/MDSMonitor: stop all MDS processes in the cluster at the same time....
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37858
merged
- 02:38 PM Backport #48130 (Resolved): nautilus: some clients may return failure in the scenario where multi...
- https://github.com/ceph/ceph/pull/39127
- 02:38 PM Backport #48129 (Resolved): octopus: some clients may return failure in the scenario where multip...
- https://github.com/ceph/ceph/pull/38349
- 01:24 PM Bug #45338 (Closed): find leads to recursive output with nfs mount
- No response in several months. Closing bug.
- 09:20 AM Feature #44931 (In Progress): mgr/volumes: get the list of auth IDs that have been granted access...
- 05:03 AM Bug #45100 (Resolved): qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)
- 05:01 AM Bug #47854 (Pending Backport): some clients may return failure in the scenario where multiple cli...
- 05:01 AM Bug #47844 (Resolved): mds: only update the requesting metrics
- 05:00 AM Feature #43423 (Resolved): mds: collect and show the dentry lease metric
- 04:00 AM Bug #48125 (Resolved): qa: test_subvolume_snapshot_clone_cancel_in_progress failure
- ...
11/04/2020
- 10:06 PM Documentation #43031 (Closed): CephFS Documentation Sprint 3
- 07:05 PM Fix #48121 (Resolved): qa: merge fs/multimds suites
- 07:04 PM Cleanup #23718 (Resolved): qa: merge fs/kcephfs suites
- 06:44 PM Bug #36593: qa: quota failure caused by clients stepping on each other
- /ceph/teuthology-archive/pdonnell-2020-11-04_17:39:34-fs-wip-pdonnell-testing-20201103.210407-distro-basic-smithi/559...
- 02:41 PM Fix #41782 (Resolved): mds: allow stray directories to fragment and switch from 10 stray director...
- 09:25 AM Backport #48112 (Rejected): nautilus: doc: document MDS recall configurations
- 09:25 AM Backport #48111 (Resolved): octopus: doc: document MDS recall configurations
- https://github.com/ceph/ceph/pull/38202
- 09:24 AM Backport #48110 (Resolved): nautilus: client: ::_read fails to advance pos at EOF checking
- https://github.com/ceph/ceph/pull/37991
- 09:24 AM Backport #48109 (Resolved): octopus: client: ::_read fails to advance pos at EOF checking
- https://github.com/ceph/ceph/pull/37989
- 04:23 AM Feature #47005 (Resolved): kceph: add metric for number of pinned capabilities and number of dirs...
- 02:45 AM Bug #47998: cephfs kernel client hung
- Jeff Layton wrote:
> I'm now wondering whether you might be hitting the problem in 3e1d0452edcee after all. Have you...
11/03/2020
- 09:06 PM Documentation #48010 (Pending Backport): doc: document MDS recall configurations
- 08:54 PM Bug #48076 (Pending Backport): client: ::_read fails to advance pos at EOF checking
- 08:49 PM Backport #47823: nautilus: pybind/mgr/volumes: Make number of cloner threads configurable
- Kotresh Hiremath Ravishankar wrote:
> It's config tune-able that aids performance testing and also could be useful a...
- 01:55 PM Backport #47823 (In Progress): nautilus: pybind/mgr/volumes: Make number of cloner threads config...
- It's a config tunable that aids performance testing and could also be useful in production. Hence it was marked for a ba...
- 04:10 PM Bug #47998: cephfs kernel client hung
- I'm now wondering whether you might be hitting the problem in 3e1d0452edcee after all. Have you tested a kernel with ...
- 02:44 PM Bug #47998: cephfs kernel client hung
- Oh, a friend pointed out that this patch may end up giving us "Busy inodes after umount" problems if the thing is unm...
- 02:37 PM Bug #47998 (In Progress): cephfs kernel client hung
- 01:52 PM Bug #47998: cephfs kernel client hung
- geng jichao wrote:
> iget5_locked, called by ceph_get_inode, may be blocked in some cases, so when ceph_get_inode is c...
- 01:00 PM Bug #47998: cephfs kernel client hung
- iget5_locked, called by ceph_get_inode, may be blocked in some cases, so when ceph_get_inode is called, it should not hold ...
- 12:39 PM Bug #47998: cephfs kernel client hung
- geng jichao wrote:
> I guess this is the case:
> stack A and B are processing the same inode, but stack C is other.... - 07:18 AM Bug #47998: cephfs kernel client hung
- I guess this is the case:
stack A and B are processing the same inode, but stack C is other. The inode corresponding...
- 11:27 AM Backport #48098 (Resolved): octopus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- https://github.com/ceph/ceph/pull/37987
- 11:27 AM Backport #48097 (Resolved): nautilus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- https://github.com/ceph/ceph/pull/37988
- 11:27 AM Bug #46559 (Resolved): Create NFS Ganesha Cluster instructions are misleading
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:26 AM Backport #48096 (Resolved): octopus: mds: fix file recovery crash after replaying delayed requests
- https://github.com/ceph/ceph/pull/37985
- 11:26 AM Backport #48095 (Resolved): nautilus: mds: fix file recovery crash after replaying delayed requests
- https://github.com/ceph/ceph/pull/37986
- 11:24 AM Backport #48090 (Rejected): octopus: mds: count error of modified dentries
- 11:24 AM Backport #48089 (Rejected): nautilus: mds: count error of modified dentries
- 10:21 AM Backport #47942 (Resolved): octopus: octopus: client: hang after statfs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37530
m...
- 10:20 AM Backport #47249 (Resolved): octopus: mon: deleting a CephFS and its pools causes MONs to crash
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37256
m...
11/02/2020
- 07:57 PM Bug #48076 (Resolved): client: ::_read fails to advance pos at EOF checking
- /ceph/teuthology-archive/pdonnell-2020-11-02_02:29:29-fs:volumes-master-distro-basic-smithi/5581167/teuthology.log
...
- 06:13 PM Bug #48075 (Triaged): qa: AssertionError: 12582912 != 'infinite'
- ...
- 04:24 PM Bug #46434 (Pending Backport): osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- 02:38 PM Bug #46906 (Pending Backport): mds: fix file recovery crash after replaying delayed requests
- 01:57 PM Bug #47998: cephfs kernel client hung
- Also, to answer your earlier question: Yes, later kernels should work with earlier cluster versions. I'd still like t...
- 12:19 PM Bug #47998: cephfs kernel client hung
- Thanks for the analysis -- that looks like a real problem. I don't immediately see a way to fix it however. We could ...
- 05:55 AM Bug #41133 (Closed): qa/tasks: update thrasher design
- These changes don't seem all that appealing based on the review comments received and the latest implementation/u...
- 05:20 AM Fix #48053 (In Progress): qa: update test_readahead to work with the kernel
10/30/2020
- 04:39 PM Fix #48053 (Resolved): qa: update test_readahead to work with the kernel
- The new stats patches by Xiubo should allow us to get the number of read ops performed on RADOS. Jeff is including th...
- 10:19 AM Bug #47998: cephfs kernel client hung
- Through further repetition and analysis, I have made new discoveries. It is similar to the problem that has been solved ...
- 10:11 AM Bug #47389 (Need More Info): ceph fs volume create fails to create pool
- 02:15 AM Cleanup #23718 (Fix Under Review): qa: merge fs/kcephfs suites
10/29/2020
- 08:08 PM Backport #47942: octopus: octopus: client: hang after statfs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37530
merged
- 08:06 PM Backport #47249: octopus: mon: deleting a CephFS and its pools causes MONs to crash
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37256
merged
- 07:36 PM Bug #48009: MDS crashes on locking assert in /build/ceph-14.2.11/src/mds/ScatterLock.h: In functi...
- Both issues involve Scatterlock, but the crash seems to be at a different line. Here is a log of one event. This happ...
- 05:03 PM Bug #47981 (Pending Backport): mds: count error of modified dentries
- 03:12 PM Bug #46985: common: validate type CephBool cause 'invalid command json'
- Neha Ojha wrote:
> https://github.com/ceph/ceph/pull/37098 fixes a bug in https://github.com/ceph/ceph/pull/36459 an...
10/28/2020
- 02:11 PM Fix #48027 (Resolved): qa: add cephadm tests for CephFS in QA
- 01:43 PM Bug #48009: MDS crashes on locking assert in /build/ceph-14.2.11/src/mds/ScatterLock.h: In functi...
- looks like dup of https://tracker.ceph.com/issues/46906
- 01:36 PM Bug #48009 (Need More Info): MDS crashes on locking assert in /build/ceph-14.2.11/src/mds/Scatter...
- Do you have any logs from the event? Has it been repeatable?
- 01:37 PM Bug #47981 (Fix Under Review): mds: count error of modified dentries
- 09:05 AM Bug #47998: cephfs kernel client hung
- Thank you for your reply. Unfortunately, when I repeated this problem, another problem occurred. The kernel directly o...
- 08:15 AM Fix #47983: mds: use proper gather for inode commit ops
- Erqi Chen wrote:
> New idea: optimize via PR 37828 (optimize batch backtrace store); old PR 37778 is closed and this i...
- 08:14 AM Fix #47983: mds: use proper gather for inode commit ops
- New idea: optimize via PR #37828 (optimize batch backtrace store); old PR #37778 is closed and this issue can also be c...
- 05:31 AM Bug #45100 (Fix Under Review): qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.Te...
- 05:23 AM Bug #45100: qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)
- For the kclient:
When we first do the:...
10/27/2020
- 05:00 PM Backport #47877 (Resolved): octopus: Create NFS Ganesha Cluster instructions are misleading
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37691
m... - 02:12 PM Documentation #48017 (Resolved): snap-schedule doc
- Add documentation for the snap_schedule module
- 09:42 AM Backport #47958 (In Progress): octopus: mon/MDSMonitor: stop all MDS processes in the cluster at ...
- 09:37 AM Backport #47940 (In Progress): octopus: mon/MDSMonitor: divide mds identifier and mds real name w...
- 09:36 AM Backport #47936 (In Progress): octopus: mds FAILED ceph_assert(sessions != 0) in function 'void S...
- 09:36 AM Backport #47891 (In Progress): octopus: mgr/nfs: Pseudo path prints wrong error message
- 09:35 AM Backport #46959 (In Progress): octopus: cephfs-journal-tool: incorrect read_offset after finding ...
- 08:31 AM Backport #47991 (In Progress): octopus: qa: "client.4606 isn't responding to mclientcaps(revoke),...
- 08:30 AM Backport #47990 (In Progress): nautilus: qa: "client.4606 isn't responding to mclientcaps(revoke)...
- 08:29 AM Backport #47989 (In Progress): octopus: cephfs client and nfs-ganesha have inconsistent reference...
- 08:28 AM Backport #47988 (In Progress): nautilus: cephfs client and nfs-ganesha have inconsistent referenc...
10/26/2020
- 11:23 PM Documentation #48010 (Resolved): doc: document MDS recall configurations
- 09:46 PM Backport #47957 (In Progress): nautilus: mon/MDSMonitor: stop all MDS processes in the cluster at...
- 09:46 PM Backport #47941 (Need More Info): nautilus: octopus: client: hang after statfs
- 09:44 PM Backport #47939 (In Progress): nautilus: mon/MDSMonitor: divide mds identifier and mds real name ...
- 09:21 PM Backport #47935 (In Progress): nautilus: mds FAILED ceph_assert(sessions != 0) in function 'void ...
- 09:16 PM Backport #47823 (Need More Info): nautilus: pybind/mgr/volumes: Make number of cloner threads con...
- non-trivial feature backport
- 08:46 PM Bug #48009 (Need More Info): MDS crashes on locking assert in /build/ceph-14.2.11/src/mds/Scatter...
- Twice this week, an MDS has crashed in this function. Failover occurred as normal so disruption was brief. The issue ...
- 02:33 PM Bug #47998: cephfs kernel client hung
- Note that exporting kcephfs via knfsd is not a well-tested configuration. Most folks that want to export cephfs via N...
- 12:12 PM Bug #47998: cephfs kernel client hung
- When the kernel client is used to mount, it hangs about once a day. The kernel stack when this happens is as f...
- 12:01 PM Bug #47998 (Resolved): cephfs kernel client hung
- 01:53 PM Fix #47983 (Fix Under Review): mds: use proper gather for inode commit ops
- 07:08 AM Fix #47983 (Closed): mds: use proper gather for inode commit ops
- Improvement for CInode::_commit_ops.
The MDS journal handles inodes concurrently via CInode::_commit_ops; fin is C_Gath...
- 10:33 AM Backport #47991 (Resolved): octopus: qa: "client.4606 isn't responding to mclientcaps(revoke), in...
- https://github.com/ceph/ceph/pull/37841
- 10:33 AM Backport #47990 (Resolved): nautilus: qa: "client.4606 isn't responding to mclientcaps(revoke), i...
- https://github.com/ceph/ceph/pull/37840
- 10:33 AM Backport #47989 (Resolved): octopus: cephfs client and nfs-ganesha have inconsistent reference co...
- https://github.com/ceph/ceph/pull/37839
- 10:33 AM Backport #47988 (Resolved): nautilus: cephfs client and nfs-ganesha have inconsistent reference c...
- https://github.com/ceph/ceph/pull/37838
- 08:11 AM Bug #47515 (Resolved): pybind/snap_schedule: deactivating a schedule is ineffective
- 04:21 AM Bug #45100 (In Progress): qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDam...
- 03:31 AM Bug #47977 (Fix Under Review): fs: "./bin/ceph daemon client.admin.133423 config show" do not work
- 03:30 AM Bug #47981 (Resolved): mds: count error of modified dentries
- Head and snap items should be summed to count modified dentries when fragmenting a new dir, rather than combined with a logical AND.
<pr...
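The arithmetic difference is easy to illustrate (a toy sketch with made-up counts, not the MDS code):

```python
# Toy illustration: counting modified dentries must sum the head and
# snap items; a logical AND collapses the count to 0 or 1.
head_items, snap_items = 3, 2

wrong = int(bool(head_items) and bool(snap_items))  # logical AND -> 1
right = head_items + snap_items                     # arithmetic sum -> 5

assert wrong == 1
assert right == 5
```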
10/25/2020
- 11:29 PM Bug #47973 (Resolved): Clang does not see names as variables in lambda lists
- 11:28 PM Bug #47918 (Pending Backport): cephfs client and nfs-ganesha have inconsistent reference count af...
- 11:27 PM Bug #47565 (Pending Backport): qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x20...
- 11:26 PM Bug #46883 (Resolved): kclient: ghost kernel mount
- 11:22 PM Bug #47786: mds: log [ERR] : failed to commit dir 0x100000005f1.1010* object, errno -2
- /ceph/teuthology-archive/pdonnell-2020-10-24_06:29:44-multimds-wip-pdonnell-testing-20201024.032205-distro-basic-smit...
- 11:16 PM Bug #47979 (Can't reproduce): qa: test_ephemeral_pin_distribution failure
- ...
- 11:12 PM Bug #45434: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- More: https://pulpito.ceph.com/pdonnell-2020-10-24_06:26:58-kcephfs-wip-pdonnell-testing-20201024.032205-distro-basic...
- 11:11 PM Bug #45100: qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)
- https://pulpito.ceph.com/pdonnell-2020-10-24_06:26:58-kcephfs-wip-pdonnell-testing-20201024.032205-distro-basic-smith...
- 04:10 PM Bug #47977 (Resolved): fs: "./bin/ceph daemon client.admin.133423 config show" do not work
- "client.admin.133423" is a ceph-fuse client for CephFS:...
10/24/2020
10/23/2020
- 11:52 PM Bug #47973 (Resolved): Clang does not see names as variables in lambda lists
- home/jenkins/workspace/ceph-master-compile/src/tools/cephfs_mirror/Mirror.cc:529:33: error: 'mirror_action' in captur...
- 05:45 PM Bug #47964: ceph-fuse RPM package must same-version ceph rpm
- Frédéric Schaer wrote:
> Hi,
>
> I just had a hard time debugging why a ceph-fuse client suddenly was not able t...
- 08:54 AM Bug #47964 (New): ceph-fuse RPM package must same-version ceph rpm
- Hi,
I just had a hard time debugging why a ceph-fuse client suddenly was not able to mount cephfs.
Not only that...
- 07:33 AM Bug #46360 (Resolved): mgr/volumes: fs subvolume clones stuck in progress when libcephfs hits cer...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:33 AM Bug #46765 (Resolved): mds: segv in MDCache::wait_for_uncommitted_fragments
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:32 AM Bug #46766 (Resolved): mds: memory leak during cache drop
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:32 AM Bug #46830 (Resolved): mds: do not raise "client failing to respond to cap release" when client w...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:32 AM Bug #46832 (Resolved): client: static dirent for readdir is not thread-safe
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:32 AM Fix #46851 (Resolved): qa: add debugging for volumes plugin use of libcephfs
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:32 AM Bug #46891 (Resolved): mds: kcephfs parse dirfrag's ndist is always 0
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:32 AM Bug #46976 (Resolved): After restarting an mds, its standy-replay mds remained in the "resolve" s...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:32 AM Bug #46984 (Resolved): mds: recover files after normal session close
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:31 AM Feature #46989 (Resolved): pybind/mgr/nfs: Test mounting of exports created with nfs export command
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:31 AM Fix #47149 (Resolved): pybind/mgr/volumes: add debugging for global lock
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:31 AM Bug #47202 (Resolved): qa: Replacing daemon mds.a as rank 0 with standby daemon mds.b" in cluster...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:30 AM Backport #47958 (Resolved): octopus: mon/MDSMonitor: stop all MDS processes in the cluster at the...
- https://github.com/ceph/ceph/pull/37858
- 07:30 AM Backport #47957 (Resolved): nautilus: mon/MDSMonitor: stop all MDS processes in the cluster at th...
- https://github.com/ceph/ceph/pull/37822
- 06:15 AM Bug #46926 (Resolved): mds: fix the decode version
- 12:17 AM Cleanup #46620 (Resolved): client: add command_lock support
10/22/2020
- 07:58 PM Bug #47881 (Pending Backport): mon/MDSMonitor: stop all MDS processes in the cluster at the same ...
- 07:55 PM Backport #47942 (In Progress): octopus: octopus: client: hang after statfs
- 07:35 AM Backport #47942 (Resolved): octopus: octopus: client: hang after statfs
- https://github.com/ceph/ceph/pull/37530
- 07:54 PM Bug #46426 (Resolved): mds: MMDSPing is not an MMDSOp type
- 05:55 PM Backport #47147 (Resolved): octopus: pybind/mgr/nfs: Test mounting of exports created with nfs ex...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37365
m... - 05:55 PM Backport #47247 (Resolved): octopus: qa: Replacing daemon mds.a as rank 0 with standby daemon mds...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37367
m... - 05:55 PM Backport #47151 (Resolved): octopus: pybind/mgr/volumes: add debugging for global lock
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37366
m... - 05:54 PM Backport #47089 (Resolved): octopus: After restarting an mds, its standy-replay mds remained in t...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37363
m... - 05:54 PM Backport #47083 (Resolved): octopus: mds: 'forward loop' when forward_all_requests_to_auth is set
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37360
m... - 05:54 PM Backport #46940 (Resolved): octopus: mds: memory leak during cache drop
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37354
m... - 05:54 PM Backport #47021 (Resolved): octopus: client: shutdown race fails with status 141
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37358
m... - 05:54 PM Backport #47018 (Resolved): octopus: mds: kcephfs parse dirfrag's ndist is always 0
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37357
m... - 05:53 PM Backport #47016 (Resolved): octopus: mds: fix the decode version
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37356
m... - 05:53 PM Backport #46942 (Resolved): octopus: mds: segv in MDCache::wait_for_uncommitted_fragments
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37355
m... - 05:53 PM Backport #46859 (Resolved): octopus: mds: do not raise "client failing to respond to cap release"...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37353
m... - 05:53 PM Backport #46857 (Resolved): octopus: qa: add debugging for volumes plugin use of libcephfs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37352
m... - 05:53 PM Backport #46855 (Resolved): octopus: client: static dirent for readdir is not thread-safe
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37351
m... - 05:52 PM Backport #46463 (Resolved): octopus: mgr/volumes: fs subvolume clones stuck in progress when libc...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37350
m... - 05:52 PM Backport #47087 (Resolved): octopus: mds: recover files after normal session close
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37334
m... - 05:52 PM Backport #46786: octopus: client: in _open() the open ref maybe decreased twice, but only increas...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37249
m... - 05:52 PM Backport #46783: octopus: mds/CInode: Optimize only pinned by subtrees check
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37248
m... - 05:52 PM Backport #46637: octopus: mds: optimize ephemeral rand pin
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37247
m... - 05:51 PM Backport #46636: octopus: mds: null pointer dereference in MDCache::finish_rollback
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37243
m... - 05:51 PM Backport #46634: octopus: mds forwarding request 'no_available_op_found'
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37240
m... - 10:29 AM Bug #47842 (Fix Under Review): qa: "fsstress.sh: line 16: 28870 Bus error (core dum...
- 04:51 AM Bug #47842: qa: "fsstress.sh: line 16: 28870 Bus error (core dumped) "$BIN" -d "$T"...
- From remote/smithi132/log/ceph-client.0.26463.log.gz, the client didn't send any cap renew requests to the MDSs for more t...
- 04:08 AM Bug #47842: qa: "fsstress.sh: line 16: 28870 Bus error (core dumped) "$BIN" -d "$T"...
- Xiubo Li wrote:
> The client in smithi132 was blocklisted:
>
> [...]
The mds.a evicted the client:... - 07:37 AM Bug #46302 (Resolved): mds: optimize ephemeral rand pin
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:37 AM Bug #46533 (Resolved): mds: null pointer dereference in MDCache::finish_rollback
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:37 AM Bug #46543 (Resolved): mds forwarding request 'no_available_op_found'
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:37 AM Bug #46664 (Resolved): client: in _open() the open ref maybe decreased twice, but only increases ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:36 AM Fix #46727 (Resolved): mds/CInode: Optimize only pinned by subtrees check
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:35 AM Backport #47941 (Rejected): nautilus: octopus: client: hang after statfs
- 07:35 AM Backport #47940 (Resolved): octopus: mon/MDSMonitor: divide mds identifier and mds real name with...
- https://github.com/ceph/ceph/pull/37857
- 07:35 AM Backport #47939 (Resolved): nautilus: mon/MDSMonitor: divide mds identifier and mds real name wit...
- https://github.com/ceph/ceph/pull/37821
- 07:35 AM Backport #47936 (Resolved): octopus: mds FAILED ceph_assert(sessions != 0) in function 'void Sess...
- https://github.com/ceph/ceph/pull/37856
- 07:35 AM Backport #47935 (Resolved): nautilus: mds FAILED ceph_assert(sessions != 0) in function 'void Ses...
- https://github.com/ceph/ceph/pull/37820
- 03:19 AM Fix #47931 (Fix Under Review): Directory quota optimization
- If a directory is set to a quota of 100G and 80G of data has already been written to it, then when the directory is re-q...
10/21/2020
- 08:33 PM Documentation #43028 (Resolved): doc: cephfs-shell options
- 04:25 PM Backport #47147: octopus: pybind/mgr/nfs: Test mounting of exports created with nfs export command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37365
merged - 02:46 PM Bug #47918 (Fix Under Review): cephfs client and nfs-ganesha have inconsistent reference count af...
- 07:25 AM Bug #47918: cephfs client and nfs-ganesha have inconsistent reference count after release cache
- Tracing the client log, ll_ref is incremented twice each time the parent directory (..) is looked up....
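The imbalance described here (each lookup of the parent adds two references, while the release path drops only one) is easy to see in a deliberately simplified sketch; `Inode`, `buggy_lookup_parent`, and `forget` are hypothetical stand-ins for illustration, not the libcephfs internals:

```python
# Hypothetical refcounting sketch: if a lookup of ".." takes two
# references but the matching forget() drops only one, inodes stay
# pinned long after the upper layer (here, nfs-ganesha) believes it
# has released everything it held.
class Inode:
    def __init__(self):
        self.ll_ref = 0

def buggy_lookup_parent(ino):
    ino.ll_ref += 2   # the suspected bug: two refs taken per lookup

def forget(ino):
    ino.ll_ref -= 1   # caller releases what it believes is one ref

ino = Inode()
for _ in range(100):
    buggy_lookup_parent(ino)
    forget(ino)

# 100 leaked references -> the inode can never be unpinned
assert ino.ll_ref == 100
```

Each lookup/forget pair should leave the count unchanged; here every pair leaks one reference, which matches the "many inodes still pinned after cache release" symptom.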
- 03:21 AM Bug #47918 (Resolved): cephfs client and nfs-ganesha have inconsistent reference count after rele...
- After nfs-ganesha has released its cache, the cephfs client still holds many inodes in a pinned state.
The number of caches i... - 09:12 AM Bug #47842: qa: "fsstress.sh: line 16: 28870 Bus error (core dumped) "$BIN" -d "$T"...
- The client in smithi132 was blocklisted:...
- 07:43 AM Backport #46786 (Resolved): octopus: client: in _open() the open ref maybe decreased twice, but o...
- 07:42 AM Backport #46783 (Resolved): octopus: mds/CInode: Optimize only pinned by subtrees check
- 07:42 AM Backport #46637 (Resolved): octopus: mds: optimize ephemeral rand pin
- 07:42 AM Backport #46636 (Resolved): octopus: mds: null pointer dereference in MDCache::finish_rollback
- 07:41 AM Backport #46634 (Resolved): octopus: mds forwarding request 'no_available_op_found'
- 03:48 AM Tasks #47920 (Won't Fix): client: get rid of the client_lock for mdsmap
- More detail in https://github.com/ceph/ceph/pull/36204#discussion_r507977982.
- 01:50 AM Bug #46434 (Fix Under Review): osdc: FAILED ceph_assert(bh->waitfor_read.empty())
10/20/2020
- 07:10 PM Bug #46434 (In Progress): osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- 07:02 PM Bug #46434: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- Seems this started sometime around June 10th:...
- 03:46 PM Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclu...
- https://github.com/ceph/ceph/pull/34057#issuecomment-693409380
- 03:35 PM Backport #47247: octopus: qa: Replacing daemon mds.a as rank 0 with standby daemon mds.b" in clus...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37367
merged - 03:34 PM Backport #47089: octopus: After restarting an mds, its standy-replay mds remained in the "resolve...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37363
merged - 03:33 PM Backport #47083: octopus: mds: 'forward loop' when forward_all_requests_to_auth is set
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37360
merged - 03:32 PM Backport #46940: octopus: mds: memory leak during cache drop
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37354
merged - 03:27 PM Backport #47021: octopus: client: shutdown race fails with status 141
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37358
merged - 03:27 PM Backport #47018: octopus: mds: kcephfs parse dirfrag's ndist is always 0
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37357
merged - 03:26 PM Backport #47016: octopus: mds: fix the decode version
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37356
merged - 03:26 PM Bug #47881 (Fix Under Review): mon/MDSMonitor: stop all MDS processes in the cluster at the same ...
- 08:23 AM Bug #47881: mon/MDSMonitor: stop all MDS processes in the cluster at the same time. Some MDS cann...
- Patrick Donnelly wrote:
> Would `ceph fs fail <fs_name>` not be the command you want?
"ceph mds fail <role_or_gid... - 03:25 PM Backport #46942: octopus: mds: segv in MDCache::wait_for_uncommitted_fragments
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37355
merged - 03:25 PM Backport #46859: octopus: mds: do not raise "client failing to respond to cap release" when clien...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/37353
merged - 03:24 PM Backport #46857: octopus: qa: add debugging for volumes plugin use of libcephfs
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/37352
merged - 03:23 PM Backport #46855: octopus: client: static dirent for readdir is not thread-safe
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/37351
merged - 03:23 PM Documentation #46884 (Resolved): pybind/mgr/mds_autoscaler: add documentation
- 03:22 PM Backport #46463: octopus: mgr/volumes: fs subvolume clones stuck in progress when libcephfs hits ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37350
merged - 03:21 PM Backport #47087: octopus: mds: recover files after normal session close
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37334
merged - 03:21 PM Backport #46786: octopus: client: in _open() the open ref maybe decreased twice, but only increas...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37249
merged - 03:20 PM Backport #46783: octopus: mds/CInode: Optimize only pinned by subtrees check
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37248
merged - 03:20 PM Backport #46637: octopus: mds: optimize ephemeral rand pin
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37247
merged - 03:19 PM Backport #46636: octopus: mds: null pointer dereference in MDCache::finish_rollback
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37243
merged - 03:19 PM Backport #46634: octopus: mds forwarding request 'no_available_op_found'
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37240
merged - 02:18 PM Backport #46611 (In Progress): nautilus: cephfs.pyx: passing empty string is fine but passing Non...
- 02:11 PM Backport #46610 (In Progress): octopus: cephfs.pyx: passing empty string is fine but passing None...
- 11:56 AM Bug #46671 (Need More Info): nautilus:tasks/cfuse_workunit_suites_fsstress: "kernel: watchdog: BU...
- Sorry for the long delay. This one slipped through the cracks.
It looks like this is probably stuck waiting on the... - 11:23 AM Documentation #43028 (In Progress): doc: cephfs-shell options
- 08:03 AM Bug #47849 (Fix Under Review): qa/vstart_runner: LocalRemote.run can't take multiple commands
- 04:45 AM Bug #46426 (Fix Under Review): mds: MMDSPing is not an MMDSOp type
- 03:29 AM Bug #47565: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending p...
- /ceph/teuthology-archive/yuriw-2020-10-07_19:13:19-multimds-wip-yuri5-testing-2020-10-07-1021-octopus-distro-basic-sm...
- 03:29 AM Bug #47565: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending p...
- /ceph/teuthology-archive/yuriw-2020-10-07_19:13:19-multimds-wip-yuri5-testing-2020-10-07-1021-octopus-distro-basic-sm...
- 02:28 AM Bug #47833 (Pending Backport): mds FAILED ceph_assert(sessions != 0) in function 'void SessionMap...
- 02:28 AM Bug #47806 (Pending Backport): mon/MDSMonitor: divide mds identifier and mds real name with dot
- 02:27 AM Bug #47734 (Pending Backport): client: hang after statfs
10/19/2020
- 07:33 PM Bug #45100: qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)
- /ceph/teuthology-archive/pdonnell-2020-10-13_22:14:10-kcephfs-wip-pdonnell-testing-20201013.174240-distro-basic-smith...
- 05:21 PM Bug #47786: mds: log [ERR] : failed to commit dir 0x100000005f1.1010* object, errno -2
- $ grep 'failed to commit dir' pdonnell-2020-10-*/*/teu*
Binary file pdonnell-2020-10-07_03:30:19-multimds-wip-pdonne... - 02:02 AM Bug #47786: mds: log [ERR] : failed to commit dir 0x100000005f1.1010* object, errno -2
- For /ceph/teuthology-archive/pdonnell-2020-10-08_01:40:56-multimds-wip-pdonnell-testing-20201007.214100-distro-basic-...
- 04:54 PM Bug #46434: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- Multiple failures in https://pulpito.ceph.com/yuriw-2020-10-12_15:45:53-powercycle-nautilus-distro-basic-smithi/
- 04:11 PM Bug #47843 (Fix Under Review): mds: stuck in resolve when restarting MDS and reducing max_mds
- 01:41 PM Bug #47842 (Triaged): qa: "fsstress.sh: line 16: 28870 Bus error (core dumped) "$BI...
- 01:40 PM Bug #47881 (Need More Info): mon/MDSMonitor: stop all MDS processes in the cluster at the same ti...
- Would `ceph fs fail <fs_name>` not be the command you want?
- 06:43 AM Bug #47881: mon/MDSMonitor: stop all MDS processes in the cluster at the same time. Some MDS cann...
- Zheng Yan wrote:
> this is by design. monitor never marks laggy mds failed if there is no replacement
Pull Request... - 12:21 AM Bug #47881: mon/MDSMonitor: stop all MDS processes in the cluster at the same time. Some MDS cann...
- this is by design. monitor never marks laggy mds failed if there is no replacement
- 01:21 PM Fix #15134 (In Progress): multifs: test case exercising mds_thrash for multiple filesystems
- 11:45 AM Documentation #46884 (In Progress): pybind/mgr/mds_autoscaler: add documentation
- 08:33 AM Backport #47891 (Resolved): octopus: mgr/nfs: Pseudo path prints wrong error message
- https://github.com/ceph/ceph/pull/37855
10/17/2020
- 07:25 AM Bug #47881 (Resolved): mon/MDSMonitor: stop all MDS processes in the cluster at the same time. So...
- Stop all MDS processes in the cluster at the same time. After all MDS processes exit, some MDS are still in the "act...
10/16/2020
- 09:35 AM Documentation #45730 (Resolved): MDS config reference lists mds log max expiring
- 09:34 AM Backport #45826 (Rejected): mimic: MDS config reference lists mds log max expiring
- mimic EOL
- 09:12 AM Backport #47877 (In Progress): octopus: Create NFS Ganesha Cluster instructions are misleading
- 09:12 AM Backport #47877 (Resolved): octopus: Create NFS Ganesha Cluster instructions are misleading
- https://github.com/ceph/ceph/pull/37691
- 09:11 AM Bug #46559 (Pending Backport): Create NFS Ganesha Cluster instructions are misleading
10/15/2020
- 12:38 PM Bug #36171: mds: ctime should not use client provided ctime/mtime
- IMHO ctime should always be `ceph_clock_now()` rather than any time from the client.
Here's an XFS demo. Note how ... - 08:41 AM Backport #47824 (In Progress): octopus: pybind/mgr/volumes: Make number of cloner threads configu...
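The ctime argument in the #36171 comment above can be checked on any local POSIX filesystem; this is a generic sketch, not the XFS demo the comment refers to:

```python
import os
import tempfile
import time

# Create a file and backdate its atime/mtime by an hour.
fd, path = tempfile.mkstemp()
os.close(fd)
past = time.time() - 3600
os.utime(path, (past, past))   # utime() sets atime/mtime, never ctime

# A metadata change: the kernel stamps ctime with its own clock,
# ignoring any caller-supplied timestamp -- the behavior the comment
# argues the MDS should match by using ceph_clock_now().
os.chmod(path, 0o600)

st = os.stat(path)
assert abs(st.st_mtime - past) < 2   # mtime kept the backdated value
assert st.st_ctime - past > 3000     # ctime is "now", not the old time
os.remove(path)
```

The local kernel never lets a caller choose ctime, which is the precedent the comment cites for not trusting client-provided times.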
- 08:27 AM Bug #46883: kclient: ghost kernel mount
- From: /ceph/teuthology-archive/pdonnell-2020-08-08_02:19:19-kcephfs-wip-pdonnell-testing-20200808.001303-distro-basic...
10/14/2020
- 07:06 PM Bug #46883: kclient: ghost kernel mount
- I'm not a fan of this noshare option. That seems like a hacky workaround for a problem that I'm not sure any of us fu...
- 02:22 PM Bug #47854 (Fix Under Review): some clients may return failure in the scenario where multiple cli...
- 03:13 AM Bug #47854 (Resolved): some clients may return failure in the scenario where multiple clients cre...
- The issue can be reproduced by the following steps:
(1) ceph version: 14.2.10, multimds, multiple clients mount the ...
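Assuming the failure mode in #47854 is clients racing to create the same directory, the race reproduces locally with threads standing in for clients; treating EEXIST as success is what keeps the race losers from reporting a spurious failure (a plain-Python sketch on a local filesystem, not the cephfs client code):

```python
import os
import tempfile
import threading

# Eight "clients" race to mkdir the same path. mkdir is atomic, so
# exactly one wins; the rest get EEXIST (FileExistsError), which a
# robust caller must treat as success for this workload.
def make_dir(path, results, idx):
    try:
        os.mkdir(path)
        results[idx] = "created"
    except FileExistsError:
        results[idx] = "already-existed"  # not a failure

base = tempfile.mkdtemp()
target = os.path.join(base, "shared")
results = [None] * 8
threads = [threading.Thread(target=make_dir, args=(target, results, i))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert results.count("created") == 1
assert all(r in ("created", "already-existed") for r in results)
```

If the client (or anything between it and the application) surfaces the loser's EEXIST as a hard error, some callers see failure even though the directory now exists, matching the reported symptom.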
10/13/2020
- 05:27 PM Bug #47783 (Pending Backport): mgr/nfs: Pseudo path prints wrong error message
- 05:26 PM Documentation #47784 (Resolved): nfs: Remove doc on creating cephfs exports using rook
- 04:49 PM Bug #42365 (Resolved): client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:46 PM Bug #47011 (Resolved): client: Client::open() pass wrong cap mask to path_walk
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:46 PM Bug #47125 (Resolved): mds: fix possible crash when the MDS is stopping
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:45 PM Bug #47224 (Resolved): various quota failures
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:45 PM Bug #47353 (Resolved): mds: purge_queue's _calculate_ops is inaccurate
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:45 PM Bug #47512 (Resolved): mgr/nfs: Cluster creation throws 'NoneType' object has no attribute 'repla...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:30 PM Fix #46696 (Resolved): mds: pre-fragment distributed ephemeral pin directories to distribute the ...
- 04:15 PM Backport #47608 (Resolved): octopus: mds: OpenFileTable::prefetch_inodes during rejoin can cause ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37383
m... - 04:15 PM Backport #47604 (Resolved): octopus: mds: purge_queue's _calculate_ops is inaccurate
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37372
m... - 04:15 PM Backport #47601 (Resolved): octopus: mgr/nfs: Cluster creation throws 'NoneType' object has no at...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37371
m... - 04:14 PM Backport #47260 (Resolved): octopus: client: FAILED assert(dir->readdir_cache[dirp->cache_index] ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37370
m... - 04:14 PM Backport #47623 (Resolved): octopus: various quota failures
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37369
m... - 04:14 PM Backport #47255 (Resolved): octopus: client: Client::open() pass wrong cap mask to path_walk
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37369
m... - 04:13 PM Backport #47253 (Resolved): octopus: mds: fix possible crash when the MDS is stopping
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37368
m... - 03:55 PM Bug #47787: mgr/nfs: exercise host-level HA of NFS-Ganesha by killing the process
- Varsha Rao wrote:
> Patrick Donnelly wrote:
> > In my own testing, the process is not respawned and the NFS client ... - 06:06 AM Bug #47787: mgr/nfs: exercise host-level HA of NFS-Ganesha by killing the process
- Patrick Donnelly wrote:
> In my own testing, the process is not respawned and the NFS client hangs. I suspect there'... - 03:20 PM Bug #46883 (Fix Under Review): kclient: ghost kernel mount
- 03:11 PM Bug #47833 (Fix Under Review): mds FAILED ceph_assert(sessions != 0) in function 'void SessionMap...
- 12:16 PM Bug #47833: mds FAILED ceph_assert(sessions != 0) in function 'void SessionMap::hit_session(Sessi...
- I tested the fix in the same env, with the stale fh clients, and now the mds does not crash while stopping.
- 01:53 PM Bug #47849 (Resolved): qa/vstart_runner: LocalRemote.run can't take multiple commands
- The issue is caused by this commit - https://github.com/ceph/ceph/pull/36457/commits/a177b470aa48a84e5346b310efa4fd62...
- 11:43 AM Bug #47844 (Fix Under Review): mds: only update the requesting metrics
- 11:40 AM Bug #47844 (In Progress): mds: only update the requesting metrics
- 11:40 AM Bug #47844 (Resolved): mds: only update the requesting metrics
- Currently, for MDSs whose global metrics do not need to be refreshed,
the global metric counters will be zero and th...
- Closing this as duplicate of https://tracker.ceph.com/issues/46360
- 10:16 AM Bug #47798: pybind/mgr/volumes: TypeError: bad operand type for unary -: 'str' for errno ETIMEDOUT
- The PR https://github.com/ceph/ceph/pull/35934 has already fixed this issue. The issue is tracked by https...
- 08:01 AM Bug #47843 (Fix Under Review): mds: stuck in resolve when restarting MDS and reducing max_mds
- In a multi-MDS Ceph cluster, first reduce max_mds; before this step completes, restart one or more MDS immediately. T...
- 03:34 AM Bug #47652 (Resolved): teuthology's misc.sudo_write_file is incompatible with vstart_runner
- Wasn't aware that Ramana too was working on the same issue. The fix was merged in commit https://github.com/ceph/ceph...
- 01:57 AM Bug #47842 (Resolved): qa: "fsstress.sh: line 16: 28870 Bus error (core dumped) "$B...
- ...
- 01:51 AM Feature #24285 (Resolved): mgr: add module which displays current usage of file system (`fs top`)