Activity
From 08/22/2023 to 09/20/2023
09/20/2023
- 05:51 PM Feature #57481 (Fix Under Review): mds: enhance scrub to fragment/merge dirfrags
- 03:44 PM Backport #62696 (Rejected): reef: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have a...
- 03:44 PM Backport #62695 (Rejected): quincy: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have...
- 03:44 PM Bug #62482 (Resolved): qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an applicati...
- Canceling backports since this doesn't fix the issue.
- 03:40 PM Backport #52029 (Rejected): pacific: mgr/nfs :update pool name to '.nfs' in vstart.sh
- 03:39 PM Backport #61994 (Rejected): pacific: mds/MDSRank: op_tracker of mds have slow op alway.
- I don't think this is urgent for about-to-be-EOL pacific.
- 03:38 PM Bug #58109: ceph-fuse: doesn't work properly when the version of libfuse is 3.1 or later
- Do we still want to backport this Venky?
- 03:38 PM Backport #52968 (Rejected): pacific: mgr/nfs: add 'nfs cluster config get'
- 03:38 PM Feature #52942 (Resolved): mgr/nfs: add 'nfs cluster config get'
- I don't see a need for this to go into Pacific.
- 03:37 PM Backport #53443 (Rejected): pacific: qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731...
- 03:36 PM Backport #57776 (In Progress): pacific: Clarify security implications of path-restricted cephx ca...
- 03:35 PM Backport #57777 (In Progress): quincy: Clarify security implications of path-restricted cephx cap...
- 03:34 PM Backport #62733 (In Progress): reef: mds: add TrackedOp event for batching getattr/lookup
- 03:33 PM Backport #62732 (In Progress): quincy: mds: add TrackedOp event for batching getattr/lookup
- 03:32 PM Backport #62731 (In Progress): pacific: mds: add TrackedOp event for batching getattr/lookup
- 03:31 PM Backport #62897 (In Progress): pacific: client: evicted warning because client completes unmount ...
- 12:37 PM Backport #62897 (In Progress): pacific: client: evicted warning because client completes unmount ...
- https://github.com/ceph/ceph/pull/53555
- 03:30 PM Backport #62898 (In Progress): quincy: client: evicted warning because client completes unmount b...
- 12:37 PM Backport #62898 (In Progress): quincy: client: evicted warning because client completes unmount b...
- https://github.com/ceph/ceph/pull/53554
- 03:29 PM Backport #62899 (In Progress): reef: client: evicted warning because client completes unmount bef...
- 12:38 PM Backport #62899 (In Progress): reef: client: evicted warning because client completes unmount bef...
- https://github.com/ceph/ceph/pull/53553
- 03:28 PM Backport #62906 (In Progress): pacific: mds,qa: some balancer debug messages (<=5) not printed wh...
- 02:55 PM Backport #62906 (In Progress): pacific: mds,qa: some balancer debug messages (<=5) not printed wh...
- https://github.com/ceph/ceph/pull/53552
- 03:27 PM Backport #62907 (In Progress): quincy: mds,qa: some balancer debug messages (<=5) not printed whe...
- 02:55 PM Backport #62907 (In Progress): quincy: mds,qa: some balancer debug messages (<=5) not printed whe...
- https://github.com/ceph/ceph/pull/53551
- 03:26 PM Backport #62902 (In Progress): pacific: mds: log a message when exiting due to asok "exit" command
- 01:01 PM Backport #62902 (In Progress): pacific: mds: log a message when exiting due to asok "exit" command
- https://github.com/ceph/ceph/pull/53550
- 03:24 PM Backport #62900 (In Progress): quincy: mds: log a message when exiting due to asok "exit" command
- 01:00 PM Backport #62900 (In Progress): quincy: mds: log a message when exiting due to asok "exit" command
- https://github.com/ceph/ceph/pull/53549
- 03:22 PM Backport #62901 (In Progress): reef: mds: log a message when exiting due to asok "exit" command
- 01:01 PM Backport #62901 (In Progress): reef: mds: log a message when exiting due to asok "exit" command
- https://github.com/ceph/ceph/pull/53548
- 02:54 PM Backport #62905 (Rejected): quincy: Test failure: test_journal_migration (tasks.cephfs.test_journ...
- 02:54 PM Backport #62904 (Rejected): reef: Test failure: test_journal_migration (tasks.cephfs.test_journal...
- 02:54 PM Backport #62903 (Rejected): pacific: Test failure: test_journal_migration (tasks.cephfs.test_jour...
- 02:53 PM Bug #58219 (Pending Backport): Test failure: test_journal_migration (tasks.cephfs.test_journal_mi...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/49842#issuecomment-1727875012
Definitely. Probably mi...
- 02:44 PM Bug #58219: Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournal...
- https://github.com/ceph/ceph/pull/49842#issuecomment-1727875012
- 02:50 PM Bug #55858: Pacific 16.2.7 MDS constantly crashing
- We've been experiencing this off and on for over a year. Cannot reproduce though Singularity has often been pushed wi...
- 02:48 PM Bug #55165: client: validate pool against pool ids as well as pool names
- Need someone to take over https://github.com/ceph/ceph/pull/45329
- 02:47 PM Bug #55980 (Pending Backport): mds,qa: some balancer debug messages (<=5) not printed when debug_...
- 02:46 PM Bug #56067 (New): Cephfs data loss with root_squash enabled
- Upstream PR was closed.
- 02:45 PM Bug #57071: mds: consider mds_cap_revoke_eviction_timeout for get_late_revoking_clients()
- What's the status of this ticket? Should it be closed?
- 02:42 PM Bug #56694 (Rejected): qa: avoid blocking forever on hung umount
- Going to fix this with stdin-killer instead: https://github.com/ceph/ceph/pull/53255
- 01:42 PM Bug #58726: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- Hi,
I'm seeing this warning on a custom cluster, with ceph running in a qemu virtual machine with virtio-scsi disk...
- 12:57 PM Bug #62577 (Pending Backport): mds: log a message when exiting due to asok "exit" command
- 12:55 PM Bug #62863: Slowness or deadlock in ceph-fuse causes teuthology job to hang and fail
- I don't think it's related either. I was probably trying to link a different ticket but I don't recall which.
- 09:25 AM Bug #62863: Slowness or deadlock in ceph-fuse causes teuthology job to hang and fail
- There are 3 processes of the compiler that seem to be in a deadlock:...
- 09:07 AM Bug #62863: Slowness or deadlock in ceph-fuse causes teuthology job to hang and fail
- I agree with Venky, this doesn't seem to be related to the linked issue. @Patrick, would you mind clarifying why you ...
- 06:22 AM Bug #62863: Slowness or deadlock in ceph-fuse causes teuthology job to hang and fail
- I don't think this is related to #62682.
- 12:31 PM Bug #62579 (Pending Backport): client: evicted warning because client completes unmount before th...
- 10:27 AM Bug #62658: error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
- okay, so there **are** a few handle_fragment_notify logs but not as many handle_fragment_notify_ack logs.
that seems...
- 07:25 AM Feature #62892 (New): mgr/snap_schedule: restore scheduling for subvols and groups
- Tracker to hold discussions on restoring functionality to help users set snap-schedules for subvols and also for non-...
- 06:21 AM Bug #62848 (Triaged): qa: fail_fs upgrade scenario hanging
- 02:14 AM Bug #62739 (Fix Under Review): cephfs-shell: remove distutils Version classes because they're dep...
09/19/2023
- 05:25 PM Feature #62856: cephfs: persist an audit log in CephFS
- We discussed this in standup today. We are now considering a design with a new "audit" module in the ceph-mgr.
- 05:09 PM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Venky Shankar wrote:
> Patrick Donnelly wrote:
> > Greg Farnum wrote:
> > > Patrick Donnelly wrote:
> > > > If we...
- 09:50 AM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Patrick Donnelly wrote:
> Greg Farnum wrote:
> > Patrick Donnelly wrote:
> > > If we are going to move the metadat...
- 02:48 PM Feature #62882 (Pending Backport): mds: create an admin socket command for raising a signal
- This is useful for testing e.g. with SIGSTOP (mds is still "alive" but unresponsive) but this can be difficult to sen...
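For context, a minimal sketch of how this is done by hand today, outside any asok command (daemon selection is illustrative only):
    # pause the MDS so the process stays up but stops responding
    kill -STOP $(pidof ceph-mds)
    # ... exercise the failure handling under test ...
    kill -CONT $(pidof ceph-mds)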
- 02:36 PM Bug #62482: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled ...
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 02:36 PM Bug #53859: qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 02:35 PM Bug #59413: cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 02:35 PM Bug #59344: qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 02:35 PM Bug #61243: qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 02:34 PM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 02:34 PM Bug #59348: qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.T...
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 02:33 PM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 02:33 PM Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 01:04 PM Bug #62847 (Triaged): mds: blogbench requests stuck (5mds+scrub+snaps-flush)
- 12:56 PM Bug #62863 (Triaged): Slowness or deadlock in ceph-fuse causes teuthology job to hang and fail
- 12:52 PM Bug #62873 (Triaged): qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limi...
- 07:14 AM Bug #62873: qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestCli...
- https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/7394932
http...
- 06:59 AM Bug #62873 (Pending Backport): qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_cl...
- http://qa-proxy.ceph.com/teuthology/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 11:31 AM Backport #62879 (Resolved): pacific: cephfs-shell: update path to cephfs-shell since its location...
- https://github.com/ceph/ceph/pull/54144
- 11:31 AM Backport #62878 (In Progress): quincy: cephfs-shell: update path to cephfs-shell since its locati...
- https://github.com/ceph/ceph/pull/54186
- 11:26 AM Bug #58795 (Pending Backport): cephfs-shell: update path to cephfs-shell since its location has c...
- 11:09 AM Bug #62739 (In Progress): cephfs-shell: remove distutils Version classes because they're deprecated
- 08:46 AM Bug #62739: cephfs-shell: remove distutils Version classes because they're deprecated
- Dhairya Parmar wrote:
> python 3.10 deprecated distutils [0]. LooseVersion is used at many places in cephfs-shell.py...
- 10:02 AM Bug #62876 (Fix Under Review): qa: use unique name for CephFS created during testing
- The CephFS created during testing is named "cephfs". This isn't a good name because it makes it impossible to track this n...
- 09:59 AM Feature #62364: support dumping rstats on a particular path
- Greg, nothing is required for this since the details are available as mentioned in the above note. Good to close?
- 09:36 AM Bug #58878: mds: FAILED ceph_assert(trim_to > trimming_pos)
- Reopened for investigating a possible bug in the MDS that causes bogus values to be persisted in the journal header.
- 06:02 AM Bug #58878 (New): mds: FAILED ceph_assert(trim_to > trimming_pos)
09/18/2023
- 08:29 PM Bug #62870 (Resolved): test_nfs task fails due to no orch backend set
- This test is likely intended to have cephadm set as the orch backend, but for whatever reason, it seems to not be set...
- 02:54 PM Feature #62849 (In Progress): mds/FSMap: add field indicating the birth time of the epoch
- 12:48 PM Backport #62867 (In Progress): quincy: cephfs: qa snaptest-git-ceph.sh failed with "got remote pr...
- https://github.com/ceph/ceph/pull/53629
- 12:48 PM Backport #62866 (In Progress): reef: cephfs: qa snaptest-git-ceph.sh failed with "got remote proc...
- https://github.com/ceph/ceph/pull/53628
- 12:48 PM Backport #62865 (In Progress): pacific: cephfs: qa snaptest-git-ceph.sh failed with "got remote p...
- https://github.com/ceph/ceph/pull/53630
- 12:37 PM Bug #59413 (Pending Backport): cephfs: qa snaptest-git-ceph.sh failed with "got remote process re...
- 12:47 AM Bug #59413: cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
- Patrick Donnelly wrote:
> /teuthology/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-def...
- 10:25 AM Bug #62863 (Can't reproduce): Slowness or deadlock in ceph-fuse causes teuthology job to hang and...
- https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/7394785/
I...
- 02:22 AM Backport #62860 (In Progress): reef: mds: deadlock between unlink and linkmerge
- 12:43 AM Backport #62860 (Resolved): reef: mds: deadlock between unlink and linkmerge
- https://github.com/ceph/ceph/pull/53497
- 02:19 AM Backport #62858 (In Progress): quincy: mds: deadlock between unlink and linkmerge
- 12:42 AM Backport #62858 (In Progress): quincy: mds: deadlock between unlink and linkmerge
- https://github.com/ceph/ceph/pull/53496
- 02:14 AM Backport #62859 (In Progress): pacific: mds: deadlock between unlink and linkmerge
- 12:42 AM Backport #62859 (Resolved): pacific: mds: deadlock between unlink and linkmerge
- https://github.com/ceph/ceph/pull/53495
- 01:28 AM Bug #62861: mds: _submit_entry ELid(0) crashed the MDS
- It's a use-after-free bug for the stray CInodes.
- 01:28 AM Bug #62861 (Fix Under Review): mds: _submit_entry ELid(0) crashed the MDS
- 01:03 AM Bug #62861 (Resolved): mds: _submit_entry ELid(0) crashed the MDS
- /teuthology/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/7395153/teuthol...
- 12:28 AM Bug #61818 (Pending Backport): mds: deadlock between unlink and linkmerge
09/16/2023
- 04:46 PM Bug #62658: error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
- This is a job with scrubbing on dir frags on a set of replicas.
Interestingly there's no trace of handle_fragment_no...
- 12:44 AM Feature #62856 (New): cephfs: persist an audit log in CephFS
- ... for quickly learning what disaster tools and commands have been run on the file system.
Too often we see a cl...
09/15/2023
- 07:54 PM Backport #62854 (In Progress): pacific: qa: "cluster [ERR] MDS abort because newly corrupt dentry...
- 06:45 PM Backport #62854 (In Progress): pacific: qa: "cluster [ERR] MDS abort because newly corrupt dentry...
- https://github.com/ceph/ceph/pull/53486
- 07:52 PM Backport #62853 (In Progress): quincy: qa: "cluster [ERR] MDS abort because newly corrupt dentry ...
- 06:44 PM Backport #62853 (In Progress): quincy: qa: "cluster [ERR] MDS abort because newly corrupt dentry ...
- https://github.com/ceph/ceph/pull/53485
- 07:51 PM Backport #62852 (In Progress): reef: qa: "cluster [ERR] MDS abort because newly corrupt dentry to...
- 06:44 PM Backport #62852 (In Progress): reef: qa: "cluster [ERR] MDS abort because newly corrupt dentry to...
- https://github.com/ceph/ceph/pull/53484
- 06:38 PM Bug #62164 (Pending Backport): qa: "cluster [ERR] MDS abort because newly corrupt dentry to be co...
- 04:35 PM Feature #62849 (In Progress): mds/FSMap: add field indicating the birth time of the epoch
- So you can easily see when the FSMap epoch was published (real time) without looking at each file system's mdsmap. In...
- 04:03 PM Bug #62848 (Duplicate): qa: fail_fs upgrade scenario hanging
- ...
- 04:00 PM Bug #62847 (Triaged): mds: blogbench requests stuck (5mds+scrub+snaps-flush)
- ...
09/14/2023
- 04:07 PM Bug #59413: cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
- /teuthology/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/7395153/teuthol...
- 04:04 PM Bug #59344: qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
- /teuthology/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/7395114/teuthol...
- 04:02 PM Bug #62067: ffsb.sh failure "Resource temporarily unavailable"
- /teuthology/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/7395112/teuthol...
- 04:02 PM Bug #62067: ffsb.sh failure "Resource temporarily unavailable"
- Venky Shankar wrote:
> Duplicate of #62484
Is it? This one gets EAGAIN while #62484 gets EIO. That's interesting...
- 12:26 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Venky Shankar wrote:
> Xiubo/Dhairya, if the MDS can ensure that the sessionmap is persisted _before_ the client sta...
- 05:22 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Xiubo/Dhairya, if the MDS can ensure that the sessionmap is persisted _before_ the client starts using the prellocate...
- 04:53 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Dhairya Parmar wrote:
> Greg Farnum wrote:
> > Xiubo Li wrote:
> > > Greg Farnum wrote:
> > > > I was talking to ...
- 10:17 AM Backport #62843 (New): pacific: Lack of consistency in time format
- 10:17 AM Backport #62842 (New): reef: Lack of consistency in time format
- 10:17 AM Backport #62841 (New): quincy: Lack of consistency in time format
- 10:11 AM Bug #62494 (Pending Backport): Lack of consistency in time format
- 06:30 AM Bug #62698: qa: fsstress.sh fails with error code 124
- Rishabh, have you seen this in any of your very recent runs?
- 06:29 AM Bug #62706 (Can't reproduce): qa: ModuleNotFoundError: No module named XXXXXX
- Please reopen if this shows up again.
- 05:36 AM Backport #62835 (In Progress): quincy: cephfs-top: enhance --dump code to include the missing fields
- 04:20 AM Backport #62835 (Resolved): quincy: cephfs-top: enhance --dump code to include the missing fields
- https://github.com/ceph/ceph/pull/53454
- 04:42 AM Bug #59768 (Duplicate): crash: void EMetaBlob::replay(MDSRank*, LogSegment*, MDPeerUpdate*): asse...
- Duplicate of https://tracker.ceph.com/issues/58489
- 04:34 AM Backport #62834 (In Progress): pacific: cephfs-top: enhance --dump code to include the missing fi...
- 04:19 AM Backport #62834 (Resolved): pacific: cephfs-top: enhance --dump code to include the missing fields
- https://github.com/ceph/ceph/pull/53453
- 04:11 AM Bug #61397 (Pending Backport): cephfs-top: enhance --dump code to include the missing fields
- Venky Shankar wrote:
> Jos, this needs backports, yes?
Yes, needs backport. https://tracker.ceph.com/issues/57014...
- 03:59 AM Bug #61397: cephfs-top: enhance --dump code to include the missing fields
- Jos, this needs backports, yes?
09/13/2023
- 04:55 PM Bug #62793: client: setfattr -x ceph.dir.pin: No such attribute
- It'll be nice if we can handle this just from the MDS side. It may require changes to ceph-fuse and the kclient to pa...
- 01:33 PM Bug #62580: testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
- https://pulpito.ceph.com/yuriw-2023-08-21_21:30:15-fs-wip-yuri2-testing-2023-08-21-0910-pacific-distro-default-smithi/
- 01:32 PM Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- https://pulpito.ceph.com/yuriw-2023-08-21_21:30:15-fs-wip-yuri2-testing-2023-08-21-0910-pacific-distro-default-smithi
- 12:45 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Greg Farnum wrote:
> Xiubo Li wrote:
> > Greg Farnum wrote:
> > > I was talking to Dhairya about this today and am...
- 12:23 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Greg Farnum wrote:
> Xiubo Li wrote:
> > Greg Farnum wrote:
> > > I was talking to Dhairya about this today and am...
- 10:14 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Greg Farnum wrote:
> Xiubo Li wrote:
> > Greg Farnum wrote:
> > > I was talking to Dhairya about this today and am...
- 09:51 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- (I got to this rather late - so excuse me for any discussion that were already resolved).
Dhairya Parmar wrote:
>...
- 11:55 AM Bug #59768: crash: void EMetaBlob::replay(MDSRank*, LogSegment*, MDPeerUpdate*): assert(g_conf()-...
- Neeraj Pratap Singh wrote:
> While I was debugging this issue, it seemed that the issue doesn't exist anymore.
> An...
- 11:32 AM Bug #59768: crash: void EMetaBlob::replay(MDSRank*, LogSegment*, MDPeerUpdate*): assert(g_conf()-...
- While I was debugging this issue, it seemed that the issue doesn't exist anymore.
And I found this PR: https://githu...
- 04:53 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- /a/https://pulpito.ceph.com/vshankar-2023-09-12_06:47:30-fs-wip-vshankar-testing-20230908.065909-testing-default-smit...
- 04:45 AM Bug #61574: qa: build failure for mdtest project
- Rishabh, this requires changes similar to tracker #61399?
- 04:43 AM Bug #61399 (Resolved): qa: build failure for ior
- Rishabh, this change does not need backport, yes?
09/12/2023
- 01:23 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Xiubo Li wrote:
> Greg Farnum wrote:
> > I was talking to Dhairya about this today and am not quite sure I understa...
- 01:05 PM Bug #61584: mds: the parent dir's mtime will be overwrote by update cap request when making dirs
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > I reproduced it by creating *dirk4444/dirk5555* a...
- 12:21 PM Bug #61584: mds: the parent dir's mtime will be overwrote by update cap request when making dirs
- Venky Shankar wrote:
> Xiubo Li wrote:
> > I reproduced it by creating *dirk4444/dirk5555* and found the root cause...
- 09:41 AM Bug #61584: mds: the parent dir's mtime will be overwrote by update cap request when making dirs
- Xiubo Li wrote:
> I reproduced it by creating *dirk4444/dirk5555* and found the root cause:
>
> [...]
>
>
> ...
- 12:56 PM Feature #61866 (In Progress): MDSMonitor: require --yes-i-really-mean-it when failing an MDS with...
- I will take a look Venky.
- 12:29 PM Feature #57481 (In Progress): mds: enhance scrub to fragment/merge dirfrags
- 10:26 AM Bug #62482: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled ...
- Additional PR: https://github.com/ceph/ceph/pull/53418
- 04:19 AM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Patrick,
> >
> > Going by the description here, I assume this...
- 03:07 AM Bug #62810: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again
- The old commits will be reverted in https://github.com/ceph/ceph/pull/52199 and this will need to be fixed again.
- 03:07 AM Bug #62810 (New): Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fi...
- 03:07 AM Bug #62810 (New): Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fi...
- https://pulpito.ceph.com/vshankar-2022-04-11_12:24:06-fs-wip-vshankar-testing1-20220411-144044-testing-default-smithi...
09/11/2023
- 06:02 PM Backport #62807 (In Progress): pacific: doc: write cephfs commands in full
- 05:53 PM Backport #62807 (Resolved): pacific: doc: write cephfs commands in full
- https://github.com/ceph/ceph/pull/53403
- 05:57 PM Backport #62806 (In Progress): reef: doc: write cephfs commands in full
- 05:53 PM Backport #62806 (Resolved): reef: doc: write cephfs commands in full
- https://github.com/ceph/ceph/pull/53402
- 05:55 PM Backport #62805 (In Progress): quincy: doc: write cephfs commands in full
- 05:53 PM Backport #62805 (Resolved): quincy: doc: write cephfs commands in full
- https://github.com/ceph/ceph/pull/53401
- 05:51 PM Documentation #62791 (Pending Backport): doc: write cephfs commands in full
- 04:54 PM Documentation #62791 (Resolved): doc: write cephfs commands in full
- 09:57 AM Documentation #62791 (Resolved): doc: write cephfs commands in full
- In @doc/cephfs/administration.rst@ we don't write CephFS commands fully. Example: @ceph fs rename@ is written as @fs...
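For illustration, the kind of change being requested (placeholders, not the exact doc text):
    # abbreviated form currently used in the docs
    fs rename <fs_name> <new_fs_name>
    # full form being asked for
    ceph fs rename <fs_name> <new_fs_name>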
- 04:23 PM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Venky Shankar wrote:
> Patrick,
>
> Going by the description here, I assume this change is only for the volumes p...
- 03:03 PM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Patrick,
Going by the description here, I assume this change is only for the volumes plugin. In case the changes a...
- 03:00 PM Backport #62799 (In Progress): quincy: qa: run nfs related tests with fs suite
- https://github.com/ceph/ceph/pull/53907
- 03:00 PM Backport #62798 (Rejected): pacific: qa: run nfs related tests with fs suite
- https://github.com/ceph/ceph/pull/53905
- 03:00 PM Backport #62797 (Resolved): reef: qa: run nfs related tests with fs suite
- https://github.com/ceph/ceph/pull/53906
- 02:56 PM Bug #62236 (Pending Backport): qa: run nfs related tests with fs suite
- 02:55 PM Bug #62793: client: setfattr -x ceph.dir.pin: No such attribute
- Chris, please take this one.
- 12:12 PM Bug #62793 (Fix Under Review): client: setfattr -x ceph.dir.pin: No such attribute
- I've come across documents which suggest removing ceph.dir.pin to disable export pins, but it looks like it does not ...
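A minimal illustration of the mismatch (path is hypothetical); the documented way to disable an export pin is to set it to -1:
    setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/dir   # works: disables the export pin
    setfattr -x ceph.dir.pin /mnt/cephfs/dir         # fails with "No such attribute"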
- 12:31 PM Bug #62658: error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
- Milind, PTAL. I vaguely recall a similar issue you were looking into a while back.
- 12:28 PM Bug #62673: cephfs subvolume resize does not accept 'unit'
- Dhairya, I presume this is a similar change to the one you worked on a while back.
- 12:26 PM Bug #62465 (Can't reproduce): pacific (?): LibCephFS.ShutdownRace segmentation fault
- 12:15 PM Bug #62567: postgres workunit times out - MDS_SLOW_REQUEST in logs
- Xiubo, this might be related to the slow rename issue you have a PR for. Could you please check?
- 12:13 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Xiubo Li wrote:
> Greg Farnum wrote:
> > I was talking to Dhairya about this today and am not quite sure I understa...
- 11:06 AM Bug #62126: test failure: suites/blogbench.sh stops running
- Another instance: https://pulpito.ceph.com/vshankar-2023-09-08_07:03:01-fs-wip-vshankar-testing-20230830.153114-testi...
- 11:04 AM Bug #62682: mon: no mdsmap broadcast after "fs set joinable" is set to true
- Probably another instance - https://pulpito.ceph.com/vshankar-2023-09-08_07:03:01-fs-wip-vshankar-testing-20230830.15...
- 08:01 AM Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_ve...
- Venky Shankar wrote:
> FWIW, logs hint at missing (RADOS) objects:
>
> [...]
>
> I'm not certain yet if this i...
- 06:04 AM Feature #61866: MDSMonitor: require --yes-i-really-mean-it when failing an MDS with MDS_HEALTH_TR...
- Manish, please take this one on prio.
09/10/2023
- 08:50 AM Bug #52723 (Resolved): mds: improve mds_bal_fragment_size_max config option
- 08:50 AM Backport #53122 (Rejected): pacific: mds: improve mds_bal_fragment_size_max config option
- 08:48 AM Backport #57111 (In Progress): quincy: mds: handle deferred client request core when mds reboot
- 08:48 AM Backport #57110 (In Progress): pacific: mds: handle deferred client request core when mds reboot
- 08:46 AM Bug #58651 (Resolved): mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 08:46 AM Backport #59409 (Resolved): reef: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 08:46 AM Backport #59002 (Resolved): quincy: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 08:46 AM Backport #59003 (Resolved): pacific: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 08:46 AM Backport #59032 (Resolved): pacific: Test failure: test_client_cache_size (tasks.cephfs.test_clie...
- 08:45 AM Backport #59719 (Resolved): reef: client: read wild pointer when reconnect to mds
- 08:44 AM Backport #59718 (Resolved): quincy: client: read wild pointer when reconnect to mds
- 08:44 AM Backport #59720 (Resolved): pacific: client: read wild pointer when reconnect to mds
- 08:43 AM Backport #61841 (Resolved): pacific: mds: do not evict clients if OSDs are laggy
- 08:35 AM Backport #62005 (In Progress): quincy: client: readdir_r_cb: get rstat for dir only if using rbyt...
- 08:35 AM Backport #62004 (In Progress): reef: client: readdir_r_cb: get rstat for dir only if using rbytes...
- 08:33 AM Backport #61992 (In Progress): quincy: mds/MDSRank: op_tracker of mds have slow op alway.
- 08:32 AM Backport #61993 (In Progress): reef: mds/MDSRank: op_tracker of mds have slow op alway.
- 08:29 AM Backport #62372 (Resolved): pacific: Consider setting "bulk" autoscale pool flag when automatical...
- 08:28 AM Bug #61907 (Resolved): api tests fail from "MDS_CLIENTS_LAGGY" warning
- 08:28 AM Backport #62443 (Resolved): reef: api tests fail from "MDS_CLIENTS_LAGGY" warning
- 08:28 AM Backport #62441 (Resolved): quincy: api tests fail from "MDS_CLIENTS_LAGGY" warning
- 08:28 AM Backport #62442 (Resolved): pacific: api tests fail from "MDS_CLIENTS_LAGGY" warning
- 08:26 AM Backport #62421 (Resolved): pacific: mds: adjust cap acquistion throttle defaults
09/08/2023
- 04:08 PM Backport #62724 (In Progress): reef: mon/MDSMonitor: optionally forbid to use standby for another...
- 03:59 PM Backport #59405 (In Progress): reef: MDS allows a (kernel) client to exceed the xattrs key/value ...
- 10:52 AM Backport #62738 (In Progress): reef: quota: accept values in human readable format as well
- https://github.com/ceph/ceph/pull/53333
- 10:23 AM Feature #57481: mds: enhance scrub to fragment/merge dirfrags
- Chris, please take this one.
- 07:12 AM Bug #62482: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled ...
- Revert change: https://github.com/ceph/ceph/pull/53331
- 05:54 AM Backport #62585 (In Progress): quincy: mds: enforce a limit on the size of a session in the sessi...
- 05:38 AM Backport #62583 (In Progress): reef: mds: enforce a limit on the size of a session in the sessionmap
- 04:10 AM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Greg Farnum wrote:
> Patrick Donnelly wrote:
> > If we are going to move the metadata out of CephFS, I think it sho...
- 03:21 AM Bug #62537 (Fix Under Review): cephfs scrub command will crash the standby-replay MDSs
- 01:36 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Greg Farnum wrote:
> I was talking to Dhairya about this today and am not quite sure I understand.
>
> Xiubo, Ven...
09/07/2023
- 07:53 PM Bug #62764 (New): qa: use stdin-killer for kclient mounts
- To reduce the number of dead jobs caused by, e.g., an umount command stuck in uninterruptible sleep.
- 07:52 PM Bug #62763 (Fix Under Review): qa: use stdin-killer for ceph-fuse mounts
- To reduce the number of dead jobs caused by, e.g., an umount command stuck in uninterruptible sleep.
- 07:44 PM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Greg Farnum wrote:
> Patrick Donnelly wrote:
> > If we are going to move the metadata out of CephFS, I think it sho...
- 03:21 PM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Patrick Donnelly wrote:
> If we are going to move the metadata out of CephFS, I think it should go in cephsqlite. Th...
- 02:59 PM Bug #61399: qa: build failure for ior
- What fixed this issue is using latest version of ior project as well as purging and then again installing mpich packa...
- 02:47 PM Bug #61399: qa: build failure for ior
- The PR has been merged just now, I'll check with Venky if this needs to be backported.
- 02:47 PM Bug #61399 (Fix Under Review): qa: build failure for ior
- 01:41 PM Bug #62729 (Resolved): src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘...
- 01:10 PM Feature #47264 (Resolved): "fs authorize" subcommand should work for multiple FSs too
- 12:25 PM Feature #62364: support dumping rstats on a particular path
- Venky Shankar wrote:
> Venky Shankar wrote:
> > Greg Farnum wrote:
> > > Venky Shankar wrote:
> > > > Greg Farnum...
- 12:05 PM Feature #62364: support dumping rstats on a particular path
- Venky Shankar wrote:
> Greg Farnum wrote:
> > Venky Shankar wrote:
> > > Greg Farnum wrote:
> > > > Especially no...
- 11:07 AM Bug #62739 (Pending Backport): cephfs-shell: remove distutils Version classes because they're dep...
- python 3.10 deprecated distutils [0]. LooseVersion is used at many places in cephfs-shell.py, suggest switching to pa...
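A rough sketch of the suggested replacement, assuming packaging.version is the target (the actual call sites in cephfs-shell.py will differ):
    # deprecated since Python 3.10
    python3 -c 'from distutils.version import LooseVersion; print(LooseVersion("3.9") < LooseVersion("3.10"))'
    # suggested replacement (requires the packaging module)
    python3 -c 'from packaging.version import Version; print(Version("3.9") < Version("3.10"))'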
- 10:16 AM Backport #62738 (In Progress): reef: quota: accept values in human readable format as well
- 10:07 AM Feature #55940 (Pending Backport): quota: accept values in human readable format as well
09/06/2023
- 09:53 PM Bug #62729: src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘ffs’?
- Last successful main shaman build: https://shaman.ceph.com/builds/ceph/main/794f4d16c6c8bf35729045062d24322d30b5aa14/...
- 09:32 PM Bug #62729: src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘ffs’?
- Laura suspected merging https://github.com/ceph/ceph/pull/51942 led to this issue. I've built the PR branch (@wip-61...
- 07:48 PM Bug #62729: src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘ffs’?
- https://shaman.ceph.com/builds/ceph/main/f9a01cf3851ffa2c51b5fb84e304c1481f35fe03/
- 07:48 PM Bug #62729 (Resolved): src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘...
- https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos8,DIST=centos8,MAC...
- 08:49 PM Backport #62733 (Resolved): reef: mds: add TrackedOp event for batching getattr/lookup
- https://github.com/ceph/ceph/pull/53558
- 08:49 PM Backport #62732 (Resolved): quincy: mds: add TrackedOp event for batching getattr/lookup
- https://github.com/ceph/ceph/pull/53557
- 08:49 PM Backport #62731 (Resolved): pacific: mds: add TrackedOp event for batching getattr/lookup
- https://github.com/ceph/ceph/pull/53556
- 08:44 PM Bug #62057 (Pending Backport): mds: add TrackedOp event for batching getattr/lookup
- 06:47 PM Backport #62419 (Resolved): reef: mds: adjust cap acquistion throttle defaults
- https://github.com/ceph/ceph/pull/52972#issuecomment-1708910842
- 05:30 PM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Dhairya Parmar wrote:
> Venky Shankar wrote:
>
> > which prompted a variety of code changes to workaround the pro...
- 02:21 PM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Dhairya Parmar wrote:
> Venky Shankar wrote:
>
> > which prompted a variety of code changes to workaround the pro...
- 09:02 AM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Venky Shankar wrote:
> which prompted a variety of code changes to workaround the problem. This all carries a size...
- 06:14 AM Feature #62715 (New): mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- A bit of history: The subvolume thing started out as a directory structure in the file system (and that is still the ...
- 03:09 PM Backport #62726 (New): quincy: mon/MDSMonitor: optionally forbid to use standby for another fs as...
- 03:09 PM Backport #62725 (Rejected): pacific: mon/MDSMonitor: optionally forbid to use standby for another...
- 03:09 PM Backport #62724 (In Progress): reef: mon/MDSMonitor: optionally forbid to use standby for another...
- https://github.com/ceph/ceph/pull/53340
- 03:00 PM Feature #61599 (Pending Backport): mon/MDSMonitor: optionally forbid to use standby for another f...
- 09:58 AM Bug #62706: qa: ModuleNotFoundError: No module named XXXXXX
- I too ran into this in one of my runs. I believe this is an env thing since a bunch of other tests from my run had is...
- 09:57 AM Bug #62501: pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snaps...
- Dhairya, please take this one.
- 09:45 AM Bug #62674: cephfs snapshot remains visible in nfs export after deletion and new snaps not shown
- https://tracker.ceph.com/issues/58376 is the one reported by a community user.
- 08:55 AM Bug #62674 (Duplicate): cephfs snapshot remains visible in nfs export after deletion and new snap...
- Hi Paul,
You are probably running into https://tracker.ceph.com/issues/59041 - at least for the part for listing s...
- 09:41 AM Bug #62682 (Triaged): mon: no mdsmap broadcast after "fs set joinable" is set to true
- The upgrade process uses `fail_fs` which fails the file system and upgrades the MDSs without reducing max_mds to 1. I...
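Roughly, the fail_fs path reduces to the following (file system name is a placeholder):
    ceph fs fail <fs_name>                # fail the fs without reducing max_mds to 1
    # ... upgrade and restart the MDS daemons ...
    ceph fs set <fs_name> joinable true   # the step after which the mdsmap broadcast is reportedly not seen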
- 09:34 AM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Patrick Donnelly wrote:
> Even with the recent changes to the ceph-mgr (#51177) to have a separate finisher thread f...
- 08:46 AM Feature #62668: qa: use teuthology scripts to test dozens of clients
- Patrick Donnelly wrote:
> We have one small suite for integration testing of multiple clients:
>
> https://github...
- 08:32 AM Feature #62670: [RFE] cephfs should track and expose subvolume usage and quota
- Paul Cuzner wrote:
> Subvolumes may be queried independently, but at scale we need a way for subvolume usage and quo...
- 06:57 AM Bug #62720 (New): mds: identify selinux relabelling and generate health warning
- This request has come up from folks in the field. A recursive relabel on a file system brings the mds down to its kne...
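For context, the trigger is typically a client-side recursive relabel (mount point hypothetical), which turns into a setxattr of security.selinux on every inode:
    restorecon -R /mnt/cephfs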
- 05:54 AM Backport #59200 (Rejected): reef: qa: add testing in fs:workload for different kinds of subvolumes
- Available in reef.
- 05:53 AM Backport #59201 (Resolved): quincy: qa: add testing in fs:workload for different kinds of subvolumes
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50974
Merged.
- 12:48 AM Bug #62700: postgres workunit failed with "PQputline failed"
- Another one https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/73...
- 12:47 AM Fix #51177 (Resolved): pybind/mgr/volumes: investigate moving calls which may block on libcephfs ...
- 12:47 AM Backport #59417 (Resolved): pacific: pybind/mgr/volumes: investigate moving calls which may block...
09/05/2023
- 09:54 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- /a/yuriw-2023-09-01_19:14:47-rados-wip-batrick-testing-20230831.124848-pacific-distro-default-smithi/7386551
- 08:09 PM Fix #62712 (New): pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when unde...
- Even with the recent changes to the ceph-mgr (#51177) to have a separate finisher thread for each module, the request...
- 07:42 PM Backport #62268 (Resolved): pacific: qa: _test_stale_caps does not wait for file flush before stat
- 07:42 PM Backport #62517 (Resolved): pacific: mds: inode snaplock only acquired for open in create codepath
- 07:41 PM Backport #62662 (Resolved): pacific: mds: deadlock when getattr changes inode lockset
- 02:53 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- PR https://github.com/ceph/ceph/pull/52924 has been merged for fixing this issue. Original PR https://github.com/ceph...
- 02:51 PM Bug #62084 (Resolved): task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph...
- 01:36 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- I was talking to Dhairya about this today and am not quite sure I understand.
Xiubo, Venky, are we contending the ...
- 12:56 PM Feature #61863: mds: issue a health warning with estimated time to complete replay
- Manish Yathnalli wrote:
> https://github.com/ceph/ceph/pull/52527
Manish, the PR id is linked in the "Pull reques...
- 12:42 PM Feature #61863 (Fix Under Review): mds: issue a health warning with estimated time to complete re...
- https://github.com/ceph/ceph/pull/52527
- 12:42 PM Bug #62265 (Fix Under Review): cephfs-mirror: use monotonic clocks in cephfs mirror daemon
- https://github.com/ceph/ceph/pull/53283
- 11:41 AM Bug #62706 (Pending Backport): qa: ModuleNotFoundError: No module named XXXXXX
- https://pulpito.ceph.com/rishabh-2023-08-10_20:13:47-fs-wip-rishabh-2023aug3-b4-testing-default-smithi/7365558/
...
- 05:27 AM Bug #62702 (Fix Under Review): MDS slow requests for the internal 'rename' requests
- 04:43 AM Bug #62702 (Pending Backport): MDS slow requests for the internal 'rename' requests
- https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/7378922
<...
- 04:34 AM Bug #56397 (In Progress): client: `df` will show incorrect disk size if the quota size is not ali...
- 04:34 AM Bug #56397: client: `df` will show incorrect disk size if the quota size is not aligned to 4MB
- Revert PR: https://github.com/ceph/ceph/pull/53153
- 01:03 AM Bug #62700 (Fix Under Review): postgres workunit failed with "PQputline failed"
- The scale factor will depend on the node's performance and disk sizes being used to run the test, and 500 seems too l...
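For reference, the scale factor here is pgbench's initialization parameter; the exact invocation in the workunit may differ, but the change amounts to something like:
    pgbench -i -s 500   # current setting, too heavy for smaller test nodes/disks
    pgbench -i -s 100   # a lower scale factor (value illustrative) keeps the dataset manageable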
- 12:53 AM Bug #62700 (Resolved): postgres workunit failed with "PQputline failed"
- https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/7365718/teutho...
09/04/2023
- 03:29 PM Bug #62698: qa: fsstress.sh fails with error code 124
- Copying following log entries on behalf of Radoslaw -...
- 03:21 PM Bug #62698: qa: fsstress.sh fails with error code 124
- These messages mean there was not even a single successful exchange of network heartbeat messages between osd.5 and (o...
- 02:58 PM Bug #62698 (Can't reproduce): qa: fsstress.sh fails with error code 124
- https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/7379296
The...
- 02:26 PM Backport #62696 (Rejected): reef: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have a...
- 02:26 PM Backport #62695 (Rejected): quincy: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have...
- 02:18 PM Bug #62482 (Pending Backport): qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an a...
- 11:15 AM Feature #1680: support reflink (cheap file copy/clone)
- This feature would really be appreciated. We would like to switch to Ceph for our cluster storage, but we rely heavil...
- 09:36 AM Bug #62676: cephfs-mirror: 'peer_bootstrap import' hangs
- If this is just a perception issue then a message to the user like "You need to wait for 5 minutes for this command t...
- 07:39 AM Bug #62676 (Closed): cephfs-mirror: 'peer_bootstrap import' hangs
- This is not a bug, it seems. It waits for 5 minutes for the secrets to expire. Don't press Ctrl+C, just wait for 5 min...
- 08:56 AM Bug #62494 (In Progress): Lack of consistency in time format
- 08:53 AM Backport #59408 (In Progress): reef: cephfs_mirror: local and remote dir root modes are not same
- 08:52 AM Backport #59001 (In Progress): pacific: cephfs_mirror: local and remote dir root modes are not same
- 06:31 AM Bug #62682 (Resolved): mon: no mdsmap broadcast after "fs set joinable" is set to true
- ...
- 12:45 AM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
- Sudhin Bengeri wrote:
> Hi Xuibo,
>
> Here is the uname -a output from the nodes:
> Linux wkhd 6.3.0-rc4+ #6 SMP...
09/03/2023
- 02:40 PM Bug #61950: mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_scrub_large_omap_o...
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Patrick - I can take this one if you haven't started on it yet.
...
09/02/2023
- 09:05 PM Backport #62569 (In Progress): pacific: ceph_fs.h: add separate owner_{u,g}id fields
- 09:05 PM Backport #62570 (In Progress): reef: ceph_fs.h: add separate owner_{u,g}id fields
- 09:05 PM Backport #62571 (In Progress): quincy: ceph_fs.h: add separate owner_{u,g}id fields
09/01/2023
- 06:58 PM Bug #50250 (New): mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/cli...
- https://pulpito.ceph.com/pdonnell-2023-08-31_15:31:51-fs-wip-batrick-testing-20230831.124848-pacific-distro-default-s...
- 09:05 AM Bug #62676 (Closed): cephfs-mirror: 'peer_bootstrap import' hangs
- 'peer_bootstrap import' command hangs subsequent to using wrong/invalid token to import. If we use an invalid token i...
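For reference, the command in question (file system name and token are placeholders):
    ceph fs snapshot mirror peer_bootstrap import <fs_name> <bootstrap_token>
    # with an invalid token this appears to hang; per the later note it is actually
    # waiting roughly 5 minutes for the token/secrets to expire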
08/31/2023
- 11:01 PM Bug #62674 (Duplicate): cephfs snapshot remains visible in nfs export after deletion and new snap...
- When a snapshot is taken of the subvolume, the .snap directory shows the snapshot when viewed from the NFS mount and ...
- 10:26 PM Bug #62673 (New): cephfs subvolume resize does not accept 'unit'
- Specifying the quota or resize for a subvolume requires the value in bytes. This value should be accepted as <num><un...
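For illustration (names are placeholders; the unit-suffix form is the requested enhancement, not current behaviour):
    ceph fs subvolume resize <vol_name> <subvol_name> 10737418240   # accepted today: bytes only
    ceph fs subvolume resize <vol_name> <subvol_name> 10G           # requested: <num><unit> form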
- 10:06 PM Feature #62670 (Need More Info): [RFE] cephfs should track and expose subvolume usage and quota
- Subvolumes may be queried independently, but at scale we need a way for subvolume usage and quota thresholds to drive...
- 06:34 PM Feature #62668 (New): qa: use teuthology scripts to test dozens of clients
- We have one small suite for integration testing of multiple clients:
https://github.com/ceph/ceph/tree/9d7c1825783...
- 03:35 PM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
- Hi Xuibo,
Here is the uname -a output from the nodes:
Linux wkhd 6.3.0-rc4+ #6 SMP PREEMPT_DYNAMIC Mon May 22 22:...
- 03:44 AM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
- Sudhin Bengeri wrote:
> Hi Xiubo,
>
> Thanks for your response.
>
> Are you saying that cephfs does not suppo...
- 12:35 PM Backport #62662 (In Progress): pacific: mds: deadlock when getattr changes inode lockset
- 12:02 PM Backport #62662 (Resolved): pacific: mds: deadlock when getattr changes inode lockset
- https://github.com/ceph/ceph/pull/53243
- 12:34 PM Backport #62660 (In Progress): quincy: mds: deadlock when getattr changes inode lockset
- 12:01 PM Backport #62660 (In Progress): quincy: mds: deadlock when getattr changes inode lockset
- https://github.com/ceph/ceph/pull/53242
- 12:34 PM Bug #62664 (New): ceph-fuse: failed to remount for kernel dentry trimming; quitting!
- Hi,
While #62604 is being addressed I wanted to try the ceph-fuse client. I'm using the same setup with kernel 6.4...
- 12:34 PM Backport #62661 (In Progress): reef: mds: deadlock when getattr changes inode lockset
- 12:01 PM Backport #62661 (In Progress): reef: mds: deadlock when getattr changes inode lockset
- https://github.com/ceph/ceph/pull/53241
- 12:31 PM Bug #62663 (Can't reproduce): MDS: inode nlink value is -1 causing MDS to continuously crash
- All MDS daemons are continuously crashing. The logs are reporting an inode nlink value is set to -1. I have included ...
- 11:56 AM Bug #62052 (Pending Backport): mds: deadlock when getattr changes inode lockset
- 09:41 AM Bug #62580 (Fix Under Review): testing: Test failure: test_snapshot_remove (tasks.cephfs.test_str...
- 08:57 AM Bug #62580 (In Progress): testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.T...
- Xiubo Li wrote:
> This should duplicate with https://tracker.ceph.com/issues/61892, which hasn't been backported to ...
- 05:30 AM Bug #62580 (Duplicate): testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.Tes...
- This should duplicate with https://tracker.ceph.com/issues/61892, which hasn't been backported to Pacific yet.
- 09:30 AM Bug #62658 (Pending Backport): error during scrub thrashing: reached maximum tries (31) after wai...
- /a/vshankar-2023-08-24_07:29:19-fs-wip-vshankar-testing-20230824.045828-testing-default-smithi/7378338...
- 07:10 AM Bug #62653 (New): qa: unimplemented fcntl command: 1036 with fsstress
- /a/vshankar-2023-08-24_07:29:19-fs-wip-vshankar-testing-20230824.045828-testing-default-smithi/7378422
Happens wit...
08/30/2023
- 08:54 PM Bug #62648 (New): pybind/mgr/volumes: volume rm freeze waiting for async job on fs to complete
- ...
- 02:16 PM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
- Hi Xiubo,
Thanks for your response.
Are you saying that cephfs does not support fscrypt? I am not exactly sure...
- 05:35 AM Feature #45021 (In Progress): client: new asok commands for diagnosing cap handling issues
08/29/2023
- 12:18 PM Bug #62626: mgr/nfs: include pseudo in JSON output when nfs export apply -i fails
- Dhairya, could you link the commit which started causing this? (I recall we discussed a bit about this)
- 10:49 AM Bug #62626 (In Progress): mgr/nfs: include pseudo in JSON output when nfs export apply -i fails
- Currently, when export update fails, this is the response:...
- 09:56 AM Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_ve...
- Venky Shankar wrote:
> FWIW, logs hint at missing (RADOS) objects:
>
> [...]
>
> I'm not certain yet if this i...
- 09:39 AM Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_ve...
- FWIW, logs hint at missing (RADOS) objects:...
- 09:40 AM Backport #61987 (Resolved): reef: mds: session ls command appears twice in command listing
- 09:40 AM Backport #61988 (Resolved): quincy: mds: session ls command appears twice in command listing
- 05:46 AM Feature #61904: pybind/mgr/volumes: add more introspection for clones
- Rishabh, please take this one (along the same lines as https://tracker.ceph.com/issues/61905).
08/28/2023
- 01:33 PM Backport #62517 (In Progress): pacific: mds: inode snaplock only acquired for open in create code...
- 01:32 PM Backport #62516 (In Progress): quincy: mds: inode snaplock only acquired for open in create codepath
- 01:32 PM Backport #62518 (In Progress): reef: mds: inode snaplock only acquired for open in create codepath
- 01:17 PM Backport #62539 (Rejected): reef: qa: Health check failed: 1 pool(s) do not have an application e...
- 01:17 PM Backport #62538 (Rejected): quincy: qa: Health check failed: 1 pool(s) do not have an application...
- 01:17 PM Bug #62508 (Duplicate): qa: Health check failed: 1 pool(s) do not have an application enabled (PO...
- 12:24 PM Documentation #62605: cephfs-journal-tool: update parts of code that need mandatory --rank
- Good catch.
- 12:14 PM Documentation #62605 (New): cephfs-journal-tool: update parts of code that need mandatory --rank
- For instance, if someone refers to [0] to export the journal to a file, it says to run ...
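Presumably the gap is that the documented invocation predates the mandatory rank argument; the current form looks like (fs name and rank illustrative):
    cephfs-journal-tool --rank=<fs_name>:0 journal export backup.bin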
- 12:16 PM Bug #62537: cephfs scrub command will crash the standby-replay MDSs
- Neeraj, please take this one.
- 12:09 PM Tasks #62159 (In Progress): qa: evaluate mds_partitioner
- 12:08 PM Bug #62067 (Duplicate): ffsb.sh failure "Resource temporarily unavailable"
- Duplicate of #62484
- 12:06 PM Feature #62157 (In Progress): mds: working set size tracker
- Hi Yongseok,
Assigning this to you since I presume this is being worked on alongside the partitioner module.
- 11:59 AM Feature #62215 (Rejected): libcephfs: Allow monitoring for any file changes like inotify
- Nothing planned for the foreseeable future related to this feature request.
- 11:11 AM Backport #62443: reef: api tests fail from "MDS_CLIENTS_LAGGY" warning
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53005
the above PR has been closed and the commit has been ...
- 11:08 AM Backport #62441: quincy: api tests fail from "MDS_CLIENTS_LAGGY" warning
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53006
the above PR has been closed and the commit has bee...
- 09:19 AM Bug #59413 (Fix Under Review): cephfs: qa snaptest-git-ceph.sh failed with "got remote process re...
- 08:46 AM Bug #62510 (Duplicate): snaptest-git-ceph.sh failure with fs/thrash
- Xiubo Li wrote:
> Venky Shankar wrote:
> > /a/vshankar-2023-08-16_11:13:33-fs-wip-vshankar-testing-20230809.035933-...
- 06:46 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Another one, but with kclient
> >
> > > https://pulpito.ceph.com/vsha...
- 02:41 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Venky Shankar wrote:
> Another one, but with kclient
>
> > https://pulpito.ceph.com/vshankar-2023-08-23_03:59:53-...
- 02:29 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Venky Shankar wrote:
> /a/vshankar-2023-08-16_11:13:33-fs-wip-vshankar-testing-20230809.035933-testing-default-smith...
- 06:22 AM Bug #62278: pybind/mgr/volumes: pending_subvolume_deletions count is always zero in fs volume inf...
- Backport note: also include commit(s) from https://github.com/ceph/ceph/pull/52940
08/27/2023
- 09:06 AM Backport #62572 (In Progress): pacific: mds: add cap acquisition throttled event to MDR
- https://github.com/ceph/ceph/pull/53169
- 09:05 AM Backport #62573 (In Progress): reef: mds: add cap acquisition throttled event to MDR
- https://github.com/ceph/ceph/pull/53168
- 09:05 AM Backport #62574 (In Progress): quincy: mds: add cap acquisition throttled event to MDR
- https://github.com/ceph/ceph/pull/53167
08/25/2023
- 01:22 PM Backport #62585 (In Progress): quincy: mds: enforce a limit on the size of a session in the sessi...
- https://github.com/ceph/ceph/pull/53330
- 01:22 PM Backport #62584 (In Progress): pacific: mds: enforce a limit on the size of a session in the sess...
- https://github.com/ceph/ceph/pull/53634
- 01:21 PM Backport #62583 (Resolved): reef: mds: enforce a limit on the size of a session in the sessionmap
- https://github.com/ceph/ceph/pull/53329
- 01:17 PM Bug #61947 (Pending Backport): mds: enforce a limit on the size of a session in the sessionmap
- 02:54 AM Bug #62580: testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
- I will work on it.
- 01:42 AM Bug #62580 (Fix Under Review): testing: Test failure: test_snapshot_remove (tasks.cephfs.test_str...
- ...
- 02:38 AM Bug #62510 (In Progress): snaptest-git-ceph.sh failure with fs/thrash
- 01:35 AM Bug #62579 (Fix Under Review): client: evicted warning because client completes unmount before th...
- 01:32 AM Bug #62579 (Pending Backport): client: evicted warning because client completes unmount before th...
- ...
08/24/2023
- 08:02 PM Bug #62577 (Fix Under Review): mds: log a message when exiting due to asok "exit" command
- 07:43 PM Bug #62577 (Pending Backport): mds: log a message when exiting due to asok "exit" command
- So it's clear what caused the call to suicide.
- 03:27 PM Backport #61691: quincy: mon failed to return metadata for mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52228
merged
- 12:29 PM Bug #62381 (In Progress): mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() ...
- 12:15 PM Backport #62574 (Resolved): quincy: mds: add cap acquisition throttled event to MDR
- 12:14 PM Backport #62573 (Resolved): reef: mds: add cap acquisition throttled event to MDR
- 12:14 PM Backport #62572 (Resolved): pacific: mds: add cap acquisition throttled event to MDR
- https://github.com/ceph/ceph/pull/53169
- 12:14 PM Backport #62571 (Resolved): quincy: ceph_fs.h: add separate owner_{u,g}id fields
- https://github.com/ceph/ceph/pull/53139
- 12:14 PM Backport #62570 (Resolved): reef: ceph_fs.h: add separate owner_{u,g}id fields
- https://github.com/ceph/ceph/pull/53138
- 12:14 PM Backport #62569 (Rejected): pacific: ceph_fs.h: add separate owner_{u,g}id fields
- https://github.com/ceph/ceph/pull/53137
- 12:07 PM Bug #62217 (Pending Backport): ceph_fs.h: add separate owner_{u,g}id fields
- 12:06 PM Bug #59067 (Pending Backport): mds: add cap acquisition throttled event to MDR
- 12:03 PM Bug #62567 (Won't Fix): postgres workunit times out - MDS_SLOW_REQUEST in logs
- /a/vshankar-2023-08-23_03:59:53-fs-wip-vshankar-testing-20230822.060131-testing-default-smithi/7377197...
- 10:09 AM Bug #62484: qa: ffsb.sh test failure
- Another instance in main branch: /a/vshankar-2023-08-23_03:59:53-fs-wip-vshankar-testing-20230822.060131-testing-defa...
- 10:07 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Another one, but with kclient
> https://pulpito.ceph.com/vshankar-2023-08-23_03:59:53-fs-wip-vshankar-testing-2023...
- 12:47 AM Bug #62435 (Need More Info): Pod unable to mount fscrypt encrypted cephfs PVC when it moves to an...
- Hi Sudhin,
This is not cephfs *fscrypt*. You are encrypting from the disk layer, not the filesystem layer. My unde...
08/23/2023
- 09:07 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- /a/yuriw-2023-08-21_23:10:07-rados-pacific-release-distro-default-smithi/7375005
- 07:01 PM Bug #62556 (Resolved): qa/cephfs: dependencies listed in xfstests_dev.py are outdated
- @python2@ is one of the dependencies for @xfstests-dev@ that is listed in @xfstests_dev.py@ and @python2@ is not avai...
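For context, the package simply no longer exists on current distros, which is where the error in the related fscrypt tracker comes from:
    sudo dnf install -y python2
    # Error: Unable to find a match: python2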
- 12:18 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- /a/yuriw-2023-08-22_18:16:03-rados-wip-yuri10-testing-2023-08-17-1444-distro-default-smithi/7376742
- 05:44 AM Backport #62539 (In Progress): reef: qa: Health check failed: 1 pool(s) do not have an applicatio...
- https://github.com/ceph/ceph/pull/54380
- 05:43 AM Backport #62538 (In Progress): quincy: qa: Health check failed: 1 pool(s) do not have an applicat...
- https://github.com/ceph/ceph/pull/53863
- 05:38 AM Bug #62508 (Pending Backport): qa: Health check failed: 1 pool(s) do not have an application enab...
- 02:46 AM Bug #62537 (Fix Under Review): cephfs scrub command will crash the standby-replay MDSs
- ...
08/22/2023
- 06:23 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- /a/yuriw-2023-08-17_21:18:20-rados-wip-yuri11-testing-2023-08-17-0823-distro-default-smithi/7372041
- 12:30 PM Bug #61399: qa: build failure for ior
- https://github.com/ceph/ceph/pull/52416 was merged accidentally (and then reverted). I've opened new PR for same patc...
- 08:54 AM Cleanup #4744 (In Progress): mds: pass around LogSegments via std::shared_ptr
- 08:03 AM Backport #62524 (Resolved): reef: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLO...
- https://github.com/ceph/ceph/pull/53661
- 08:03 AM Backport #62523 (Resolved): pacific: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_...
- https://github.com/ceph/ceph/pull/53662
- 08:03 AM Backport #62522 (Resolved): quincy: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_X...
- https://github.com/ceph/ceph/pull/53663
- 08:03 AM Backport #62521 (Resolved): reef: client: FAILED ceph_assert(_size == 0)
- https://github.com/ceph/ceph/pull/53666
- 08:03 AM Backport #62520 (In Progress): pacific: client: FAILED ceph_assert(_size == 0)
- https://github.com/ceph/ceph/pull/53981
- 08:03 AM Backport #62519 (Resolved): quincy: client: FAILED ceph_assert(_size == 0)
- https://github.com/ceph/ceph/pull/53664
- 08:02 AM Backport #62518 (In Progress): reef: mds: inode snaplock only acquired for open in create codepath
- https://github.com/ceph/ceph/pull/53183
- 08:02 AM Backport #62517 (Resolved): pacific: mds: inode snaplock only acquired for open in create codepath
- https://github.com/ceph/ceph/pull/53185
- 08:02 AM Backport #62516 (In Progress): quincy: mds: inode snaplock only acquired for open in create codepath
- https://github.com/ceph/ceph/pull/53184
- 08:02 AM Backport #62515 (Resolved): reef: Error: Unable to find a match: python2 with fscrypt tests
- https://github.com/ceph/ceph/pull/53624
- 08:02 AM Backport #62514 (Rejected): pacific: Error: Unable to find a match: python2 with fscrypt tests
- https://github.com/ceph/ceph/pull/53625
- 08:02 AM Backport #62513 (In Progress): quincy: Error: Unable to find a match: python2 with fscrypt tests
- https://github.com/ceph/ceph/pull/53626
- 07:57 AM Bug #44565 (Pending Backport): src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK ...
- 07:56 AM Bug #56698 (Pending Backport): client: FAILED ceph_assert(_size == 0)
- 07:55 AM Bug #62058 (Pending Backport): mds: inode snaplock only acquired for open in create codepath
- 07:54 AM Bug #62277 (Pending Backport): Error: Unable to find a match: python2 with fscrypt tests
- 07:17 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Venky Shankar wrote:
> Xiubo, please take this one.
Sure.
- 06:56 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Xiubo, please take this one.
- 06:55 AM Bug #62510 (Duplicate): snaptest-git-ceph.sh failure with fs/thrash
- /a/vshankar-2023-08-16_11:13:33-fs-wip-vshankar-testing-20230809.035933-testing-default-smithi/7369825...
- 07:02 AM Bug #62511 (New): src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
- /a/vshankar-2023-08-09_05:46:29-fs-wip-vshankar-testing-20230809.035933-testing-default-smithi/7363998...
- 06:22 AM Feature #58877 (Rejected): mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
- Update from internal discussion:
Given the complexities involved with the details mentioned in note-6, it's risky t...
- 06:16 AM Bug #62508 (Fix Under Review): qa: Health check failed: 1 pool(s) do not have an application enab...
- 06:12 AM Bug #62508 (Duplicate): qa: Health check failed: 1 pool(s) do not have an application enabled (PO...
- https://pulpito.ceph.com/yuriw-2023-08-18_20:13:47-fs-main-distro-default-smithi/
Fallout of https://github.com/ce...
- 05:49 AM Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
- Neeraj Pratap Singh wrote:
> @vshankar @kotresh Since, I was on sick leave yesterday. I saw the discussion made on t...
- 05:37 AM Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
- @vshankar @kotresh Since, I was on sick leave yesterday. I saw the discussion made on the PR today. Seeing the final ...
- 05:31 AM Bug #62246: qa/cephfs: test_mount_mon_and_osd_caps_present_mds_caps_absent fails
- Rishabh, were you able to push a fix for this?
- 05:31 AM Bug #62188: AttributeError: 'RemoteProcess' object has no attribute 'read'
- https://pulpito.ceph.com/vshankar-2023-08-16_11:14:57-fs-wip-vshankar-testing-20230809.035933-testing-default-smithi/...
- 04:21 AM Backport #59202 (Resolved): pacific: qa: add testing in fs:workload for different kinds of subvol...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51509
Merged.