Activity
From 08/27/2023 to 09/25/2023
09/25/2023
- 08:35 PM Bug #62793 (In Progress): client: setfattr -x ceph.dir.pin: No such attribute
- 06:15 PM Bug #62663: MDS: inode nlink value is -1 causing MDS to continuously crash
- The issue originally occurred on 15.2.17. Apologies for the confusion on that; the cluster during the troubleshooting....
- 01:03 PM Bug #62962: mds: standby-replay daemon crashes on replay
- Milind, please update the description with the crash backtrace and debug status as much as possible.
- 05:38 AM Bug #62962 (Duplicate): mds: standby-replay daemon crashes on replay
- Standby-replay daemon crashes during replay when accessing inode map.
Ref: BZ2218759...
- 11:53 AM Bug #62968 (Fix Under Review): mgr/volumes: fix `subvolume group rm` command error message
- 10:53 AM Bug #62968 (Pending Backport): mgr/volumes: fix `subvolume group rm` command error message
- Currently, if we try to delete subvolumegroup using `fs subvolumegroup rm`
when there's one or more subvolume(s) pre...
- 10:50 AM Backport #62288 (In Progress): pacific: ceph_test_libcephfs_reclaim crashes during test
- 10:35 AM Backport #62289 (In Progress): quincy: ceph_test_libcephfs_reclaim crashes during test
- 09:42 AM Backport #61805 (In Progress): reef: Better help message for cephfs-journal-tool -help command fo...
- 09:41 AM Backport #61803 (In Progress): pacific: Better help message for cephfs-journal-tool -help command...
- 09:39 AM Backport #61804 (In Progress): quincy: Better help message for cephfs-journal-tool -help command ...
- 07:49 AM Backport #61989 (Resolved): pacific: snap-schedule: allow retention spec to specify max number of...
- 07:47 AM Bug #62236: qa: run nfs related tests with fs suite
- Backport blocked - additionally requires https://github.com/ceph/ceph/pull/53594
- 07:13 AM Backport #62949 (In Progress): pacific: cephfs-mirror: do not run concurrent C_RestartMirroring c...
- 07:07 AM Backport #62948 (In Progress): quincy: cephfs-mirror: do not run concurrent C_RestartMirroring co...
- 07:03 AM Backport #62950 (In Progress): reef: cephfs-mirror: do not run concurrent C_RestartMirroring context
- 06:41 AM Backport #62905 (Rejected): quincy: Test failure: test_journal_migration (tasks.cephfs.test_journ...
- 06:41 AM Backport #62904 (Rejected): reef: Test failure: test_journal_migration (tasks.cephfs.test_journal...
- 06:41 AM Backport #62903 (Rejected): pacific: Test failure: test_journal_migration (tasks.cephfs.test_jour...
- 06:41 AM Bug #58219 (Resolved): Test failure: test_journal_migration (tasks.cephfs.test_journal_migration....
- 06:34 AM Backport #62842: reef: Lack of consistency in time format
- Sorry, reassigning back since it conflicts with one of the existing backports. Please backport this once that is merged.
- 06:01 AM Backport #62842: reef: Lack of consistency in time format
- Milind, I'm taking this one.
- 05:59 AM Backport #62287 (In Progress): reef: ceph_test_libcephfs_reclaim crashes during test
- 05:57 AM Backport #62584 (In Progress): pacific: mds: enforce a limit on the size of a session in the sess...
- 04:13 AM Bug #62682: mon: no mdsmap broadcast after "fs set joinable" is set to true
- Patrick Donnelly wrote:
> Patrick Donnelly wrote:
> > - This upgrade test is going from pacific to main. This is an...
- 04:11 AM Bug #62953: qa: fs:upgrade needs updated to upgrade only from N-2, N-1 releases (i.e. reef/quincy)
- We have to do this change whenever we branch out a release (branch) with the main branch tracking the next release - ...
- 01:31 AM Backport #62865 (In Progress): pacific: cephfs: qa snaptest-git-ceph.sh failed with "got remote p...
- 01:30 AM Backport #62867 (In Progress): quincy: cephfs: qa snaptest-git-ceph.sh failed with "got remote pr...
- 01:26 AM Backport #62866 (In Progress): reef: cephfs: qa snaptest-git-ceph.sh failed with "got remote proc...
- 01:21 AM Backport #62946 (Rejected): quincy: postgres workunit failed with "PQputline failed"
- No need to backport to quincy, because the dependency commit hasn't been backported to quincy yet.
- 01:20 AM Backport #62947 (In Progress): reef: postgres workunit failed with "PQputline failed"
- 01:14 AM Backport #62513 (In Progress): quincy: Error: Unable to find a match: python2 with fscrypt tests
- 01:13 AM Backport #62515 (In Progress): reef: Error: Unable to find a match: python2 with fscrypt tests
- 01:13 AM Backport #62514 (In Progress): pacific: Error: Unable to find a match: python2 with fscrypt tests
09/23/2023
- 09:24 AM Bug #61394 (Resolved): qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), af...
- 02:48 AM Bug #62663: MDS: inode nlink value is -1 causing MDS to continuously crash
- Austin Axworthy wrote:
> All MDS daemons are continuously crashing. The logs are reporting an inode nlink value is s...
09/22/2023
- 09:59 PM Bug #62915 (Duplicate): qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`...
- already have a tracker https://tracker.ceph.com/issues/62870
- 09:59 PM Bug #62870 (Fix Under Review): test_nfs task fails due to no orch backend set
- 06:10 PM Bug #62682: mon: no mdsmap broadcast after "fs set joinable" is set to true
- Patrick Donnelly wrote:
> - This upgrade test is going from pacific to main. This is an N-3 to N upgrade.
https...
- 06:08 PM Bug #62682 (Fix Under Review): mon: no mdsmap broadcast after "fs set joinable" is set to true
- 06:02 PM Bug #62682: mon: no mdsmap broadcast after "fs set joinable" is set to true
- Milind Changire wrote:
> [...]
>
> The command for *fs set joinable true* when executed by the mgr reaches the mo...
- 06:10 PM Bug #62953 (Fix Under Review): qa: fs:upgrade needs updated to upgrade only from N-2, N-1 release...
- 03:21 PM Bug #61574 (Pending Backport): qa: build failure for mdtest project
- 06:10 AM Bug #61574: qa: build failure for mdtest project
- https://tracker.ceph.com/issues/61399#note-15 (applies to this as well)
- 03:17 PM Backport #62952 (In Progress): reef: kernel/fuse client using ceph ID with uid restricted MDS cap...
- https://github.com/ceph/ceph/pull/54468
- 03:17 PM Backport #62951 (In Progress): quincy: kernel/fuse client using ceph ID with uid restricted MDS c...
- https://github.com/ceph/ceph/pull/54469
- 03:12 PM Bug #57154 (Pending Backport): kernel/fuse client using ceph ID with uid restricted MDS caps cann...
- 03:11 PM Bug #62357 (Resolved): tools/cephfs_mirror: only perform actions if init succeed
- 02:49 PM Backport #62950 (Resolved): reef: cephfs-mirror: do not run concurrent C_RestartMirroring context
- https://github.com/ceph/ceph/pull/53638
- 02:49 PM Backport #62949 (Resolved): pacific: cephfs-mirror: do not run concurrent C_RestartMirroring context
- https://github.com/ceph/ceph/pull/53640
- 02:49 PM Backport #62948 (Resolved): quincy: cephfs-mirror: do not run concurrent C_RestartMirroring context
- https://github.com/ceph/ceph/pull/53639
- 02:49 PM Backport #62947 (Resolved): reef: postgres workunit failed with "PQputline failed"
- https://github.com/ceph/ceph/pull/53627
- 02:49 PM Backport #62946 (Rejected): quincy: postgres workunit failed with "PQputline failed"
- 02:48 PM Bug #62482: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled ...
- Venky Shankar wrote:
> Patrick Donnelly wrote:
> > Canceling backports since this doesn't fix the issue.
>
> Pat...
- 02:44 PM Bug #62482: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled ...
- Patrick Donnelly wrote:
> Canceling backports since this doesn't fix the issue.
Patrick, the fixes that add the w...
- 02:44 PM Bug #62482: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled ...
- https://github.com/ceph/ceph/pull/53418
- 02:43 PM Bug #62072 (Pending Backport): cephfs-mirror: do not run concurrent C_RestartMirroring context
- 02:42 PM Bug #62700 (Pending Backport): postgres workunit failed with "PQputline failed"
- 11:15 AM Bug #62126: test failure: suites/blogbench.sh stops running
- https://pulpito.ceph.com/vshankar-2023-09-20_10:42:39-fs-wip-vshankar-testing-20230920.072635-testing-default-smithi/...
- 07:41 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > Venky Shankar wrote:
> > > > Xiubo, this is stil...
- 07:32 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > Xiubo, this is still waiting on fix to https...
- 07:13 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo, this is still waiting on fix to https://tracker.ceph.com/issues/4...
- 06:47 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Venky Shankar wrote:
> Xiubo, this is still waiting on fix to https://tracker.ceph.com/issues/48640, yes?
No, thi...
- 06:12 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Xiubo, this is still waiting on fix to https://tracker.ceph.com/issues/48640, yes?
- 07:32 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- The kernel fixing patchwork link: https://patchwork.kernel.org/project/ceph-devel/patch/20230511100911.361132-1-xiubl...
- 05:30 AM Bug #59413: cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
- Milind Changire wrote:
> http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20...
- 05:03 AM Bug #62936: Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring....
- Probably same as: https://tracker.ceph.com/issues/61831
- 05:02 AM Bug #62936 (Duplicate): Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.tes...
- /a/vshankar-2023-09-20_10:42:39-fs-wip-vshankar-testing-20230920.072635-testing-default-smithi/7399153
The cephfs-...
- 04:37 AM Bug #62658: error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
- 3 instances from my run:
- https://pulpito.ceph.com/vshankar-2023-09-20_10:42:39-fs-wip-vshankar-testing-20230920....
- 04:09 AM Bug #62658: error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
- Milind Changire wrote:
> okay, so there **are** a few handle_fragment_notify logs but not as many handle_fragment_no...
- 01:13 AM Bug #61009 (In Progress): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) ...
- Dhairya, I'm taking this one.
09/21/2023
- 11:37 PM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Neeraj Pratap Singh wrote:
> Patrick Donnelly wrote:
> > Venky Shankar wrote:
> > > Neeraj Pratap Singh wrote:
> ...
- 06:36 PM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Neeraj Pratap Singh wrote:
> > > Venky Shankar wrote:
> > > > ...
- 12:58 PM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Venky Shankar wrote:
> Neeraj Pratap Singh wrote:
> > Venky Shankar wrote:
> > > Patrick Donnelly wrote:
> > > > ...
- 12:27 PM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Neeraj Pratap Singh wrote:
> Venky Shankar wrote:
> > Patrick Donnelly wrote:
> > > Venky Shankar wrote:
> > > > ...
- 11:40 AM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Venky Shankar wrote:
> Patrick Donnelly wrote:
> > Venky Shankar wrote:
> > > Patrick,
> > >
> > > Going by the...
- 05:37 PM Feature #62925 (New): cephfs-journal-tool: Add preventive measures in the tool to avoid corrupting...
- The cephfs-journal-tool should be used by experts who have knowledge of CephFS internals. Though we have a clear wa...
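  For context, illustrative invocations only (rank and fs names are placeholders): the concern is that read-only inspection and destructive operations live side by side in the same tool:
      cephfs-journal-tool --rank=<fs_name>:0 journal inspect                  # read-only check
      cephfs-journal-tool --rank=<fs_name>:0 event recover_dentries summary   # writes to the metadata pool
      cephfs-journal-tool --rank=<fs_name>:0 journal reset                    # destructive; discards journal entries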
- 01:54 PM Bug #62915: qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while runn...
- The actual backtrace is this...
- 11:35 AM Bug #62915: qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while runn...
- http://pulpito.front.sepia.ceph.com/dparmar-2023-09-21_10:52:02-orch:cephadm-fix-nfs-apply-err-reporting-distro-defau...
- 10:04 AM Bug #62915 (Duplicate): qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`...
- Running nfs test suite fails in the first ever test case test_cluster_info() with teuthology logs filled with:...
- 12:18 PM Backport #62916 (In Progress): pacific: client: syncfs flush is only fast with a single MDS
- https://github.com/ceph/ceph/pull/53981
- 12:15 PM Bug #62326 (Resolved): pybind/mgr/cephadm: stop disabling fsmap sanity checks during upgrade
- 12:13 PM Bug #44916 (Pending Backport): client: syncfs flush is only fast with a single MDS
- 09:03 AM Backport #55580: pacific: snap_schedule: avoid throwing traceback for bad or missing arguments
- After resolving conflicts during backporting, I get an empty commit.
These changes have mostly been picked up via ht...
- 08:56 AM Backport #57158 (In Progress): quincy: doc: update snap-schedule notes regarding 'start' time
- 08:55 AM Backport #57157 (In Progress): pacific: doc: update snap-schedule notes regarding 'start' time
- 08:00 AM Backport #62406 (In Progress): pacific: pybind/mgr/volumes: pending_subvolume_deletions count is ...
- 07:52 AM Backport #62404 (In Progress): quincy: pybind/mgr/volumes: pending_subvolume_deletions count is a...
- 07:48 AM Backport #62405 (In Progress): reef: pybind/mgr/volumes: pending_subvolume_deletions count is alw...
- 06:21 AM Backport #53993 (Rejected): pacific: qa: begin grepping kernel logs for kclient warnings/failures...
09/20/2023
- 05:51 PM Feature #57481 (Fix Under Review): mds: enhance scrub to fragment/merge dirfrags
- 03:44 PM Backport #62696 (Rejected): reef: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have a...
- 03:44 PM Backport #62695 (Rejected): quincy: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have...
- 03:44 PM Bug #62482 (Resolved): qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an applicati...
- Canceling backports since this doesn't fix the issue.
- 03:40 PM Backport #52029 (Rejected): pacific: mgr/nfs :update pool name to '.nfs' in vstart.sh
- 03:39 PM Backport #61994 (Rejected): pacific: mds/MDSRank: op_tracker of mds have slow op alway.
- I don't think this is urgent for about-to-be-EOL pacific.
- 03:38 PM Bug #58109: ceph-fuse: doesn't work properly when the version of libfuse is 3.1 or later
- Do we still want to backport this Venky?
- 03:38 PM Backport #52968 (Rejected): pacific: mgr/nfs: add 'nfs cluster config get'
- 03:38 PM Feature #52942 (Resolved): mgr/nfs: add 'nfs cluster config get'
- I don't see a need for this to go into Pacific.
- 03:37 PM Backport #53443 (Rejected): pacific: qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731...
- 03:36 PM Backport #57776 (In Progress): pacific: Clarify security implications of path-restricted cephx ca...
- 03:35 PM Backport #57777 (In Progress): quincy: Clarify security implications of path-restricted cephx cap...
- 03:34 PM Backport #62733 (In Progress): reef: mds: add TrackedOp event for batching getattr/lookup
- 03:33 PM Backport #62732 (In Progress): quincy: mds: add TrackedOp event for batching getattr/lookup
- 03:32 PM Backport #62731 (In Progress): pacific: mds: add TrackedOp event for batching getattr/lookup
- 03:31 PM Backport #62897 (In Progress): pacific: client: evicted warning because client completes unmount ...
- 12:37 PM Backport #62897 (In Progress): pacific: client: evicted warning because client completes unmount ...
- https://github.com/ceph/ceph/pull/53555
- 03:30 PM Backport #62898 (In Progress): quincy: client: evicted warning because client completes unmount b...
- 12:37 PM Backport #62898 (In Progress): quincy: client: evicted warning because client completes unmount b...
- https://github.com/ceph/ceph/pull/53554
- 03:29 PM Backport #62899 (In Progress): reef: client: evicted warning because client completes unmount bef...
- 12:38 PM Backport #62899 (In Progress): reef: client: evicted warning because client completes unmount bef...
- https://github.com/ceph/ceph/pull/53553
- 03:28 PM Backport #62906 (In Progress): pacific: mds,qa: some balancer debug messages (<=5) not printed wh...
- 02:55 PM Backport #62906 (In Progress): pacific: mds,qa: some balancer debug messages (<=5) not printed wh...
- https://github.com/ceph/ceph/pull/53552
- 03:27 PM Backport #62907 (In Progress): quincy: mds,qa: some balancer debug messages (<=5) not printed whe...
- 02:55 PM Backport #62907 (In Progress): quincy: mds,qa: some balancer debug messages (<=5) not printed whe...
- https://github.com/ceph/ceph/pull/53551
- 03:26 PM Backport #62902 (In Progress): pacific: mds: log a message when exiting due to asok "exit" command
- 01:01 PM Backport #62902 (In Progress): pacific: mds: log a message when exiting due to asok "exit" command
- https://github.com/ceph/ceph/pull/53550
- 03:24 PM Backport #62900 (In Progress): quincy: mds: log a message when exiting due to asok "exit" command
- 01:00 PM Backport #62900 (In Progress): quincy: mds: log a message when exiting due to asok "exit" command
- https://github.com/ceph/ceph/pull/53549
- 03:22 PM Backport #62901 (In Progress): reef: mds: log a message when exiting due to asok "exit" command
- 01:01 PM Backport #62901 (In Progress): reef: mds: log a message when exiting due to asok "exit" command
- https://github.com/ceph/ceph/pull/53548
- 02:54 PM Backport #62905 (Rejected): quincy: Test failure: test_journal_migration (tasks.cephfs.test_journ...
- 02:54 PM Backport #62904 (Rejected): reef: Test failure: test_journal_migration (tasks.cephfs.test_journal...
- 02:54 PM Backport #62903 (Rejected): pacific: Test failure: test_journal_migration (tasks.cephfs.test_jour...
- 02:53 PM Bug #58219 (Pending Backport): Test failure: test_journal_migration (tasks.cephfs.test_journal_mi...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/49842#issuecomment-1727875012
Definitely. Probably mi...
- 02:44 PM Bug #58219: Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournal...
- https://github.com/ceph/ceph/pull/49842#issuecomment-1727875012
- 02:50 PM Bug #55858: Pacific 16.2.7 MDS constantly crashing
- We've been experiencing this off and on for over a year. Cannot reproduce though Singularity has often been pushed wi...
- 02:48 PM Bug #55165: client: validate pool against pool ids as well as pool names
- Need someone to take over https://github.com/ceph/ceph/pull/45329
- 02:47 PM Bug #55980 (Pending Backport): mds,qa: some balancer debug messages (<=5) not printed when debug_...
- 02:46 PM Bug #56067 (New): Cephfs data loss with root_squash enabled
- Upstream PR was closed.
- 02:45 PM Bug #57071: mds: consider mds_cap_revoke_eviction_timeout for get_late_revoking_clients()
- What's the status of this ticket? Should it be closed?
- 02:42 PM Bug #56694 (Rejected): qa: avoid blocking forever on hung umount
- Going to fix this with stdin-killer instead: https://github.com/ceph/ceph/pull/53255
- 01:42 PM Bug #58726: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- Hi,
I'm seeing this warning on a custom cluster, with ceph running in a qemu virtual machine with virtio-scsi disk...
- 12:57 PM Bug #62577 (Pending Backport): mds: log a message when exiting due to asok "exit" command
- 12:55 PM Bug #62863: Slowness or deadlock in ceph-fuse causes teuthology job to hang and fail
- I don't think it's related either. I was probably trying to link a different ticket but I don't recall which.
- 09:25 AM Bug #62863: Slowness or deadlock in ceph-fuse causes teuthology job to hang and fail
- There are 3 processes of the compiler that seem to be in a deadlock:...
- 09:07 AM Bug #62863: Slowness or deadlock in ceph-fuse causes teuthology job to hang and fail
- I agree with Venky, this doesn't seem to be related to the linked issue. @Patrick, would you mind clarifying why you ...
- 06:22 AM Bug #62863: Slowness or deadlock in ceph-fuse causes teuthology job to hang and fail
- I don't think this is related to #62682.
- 12:31 PM Bug #62579 (Pending Backport): client: evicted warning because client completes unmount before th...
- 10:27 AM Bug #62658: error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
- okay, so there **are** a few handle_fragment_notify logs but not as many handle_fragment_notify_ack logs.
that seems...
- 07:25 AM Feature #62892 (New): mgr/snap_schedule: restore scheduling for subvols and groups
- Tracker to hold discussions on restoring functionality to help users set snap-schedules for subvols and also for non-...
- 06:21 AM Bug #62848 (Triaged): qa: fail_fs upgrade scenario hanging
- 02:14 AM Bug #62739 (Fix Under Review): cephfs-shell: remove distutils Version classes because they're dep...
09/19/2023
- 05:25 PM Feature #62856: cephfs: persist an audit log in CephFS
- We discussed this in standup today. We are now considering a design with a new "audit" module in the ceph-mgr.
- 05:09 PM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Venky Shankar wrote:
> Patrick Donnelly wrote:
> > Greg Farnum wrote:
> > > Patrick Donnelly wrote:
> > > > If we...
- 09:50 AM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Patrick Donnelly wrote:
> Greg Farnum wrote:
> > Patrick Donnelly wrote:
> > > If we are going to move the metadat...
- 02:48 PM Feature #62882 (Pending Backport): mds: create an admin socket command for raising a signal
- This is useful for testing e.g. with SIGSTOP (mds is still "alive" but unresponsive) but this can be difficult to sen...
- 02:36 PM Bug #62482: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled ...
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 02:36 PM Bug #53859: qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 02:35 PM Bug #59413: cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 02:35 PM Bug #59344: qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 02:35 PM Bug #61243: qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 02:34 PM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 02:34 PM Bug #59348: qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.T...
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 02:33 PM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 02:33 PM Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
- http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 01:04 PM Bug #62847 (Triaged): mds: blogbench requests stuck (5mds+scrub+snaps-flush)
- 12:56 PM Bug #62863 (Triaged): Slowness or deadlock in ceph-fuse causes teuthology job to hang and fail
- 12:52 PM Bug #62873 (Triaged): qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limi...
- 07:14 AM Bug #62873: qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestCli...
- https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/7394932
http...
- 06:59 AM Bug #62873 (Pending Backport): qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_cl...
- http://qa-proxy.ceph.com/teuthology/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-defa...
- 11:31 AM Backport #62879 (Resolved): pacific: cephfs-shell: update path to cephfs-shell since its location...
- https://github.com/ceph/ceph/pull/54144
- 11:31 AM Backport #62878 (In Progress): quincy: cephfs-shell: update path to cephfs-shell since its locati...
- https://github.com/ceph/ceph/pull/54186
- 11:26 AM Bug #58795 (Pending Backport): cephfs-shell: update path to cephfs-shell since its location has c...
- 11:09 AM Bug #62739 (In Progress): cephfs-shell: remove distutils Version classes because they're deprecated
- 08:46 AM Bug #62739: cephfs-shell: remove distutils Version classes because they're deprecated
- Dhairya Parmar wrote:
> python 3.10 deprecated distutils [0]. LooseVersion is used at many places in cephfs-shell.py...
- 10:02 AM Bug #62876 (Fix Under Review): qa: use unique name for CephFS created during testing
- CephFS created during testing is named "cephfs". This isn't a good name because it makes it impossible to track this n...
- 09:59 AM Feature #62364: support dumping rstats on a particular path
- Greg, nothing is required for this since the details are available as mentioned in the above note. Good to close?
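  For reference, the per-path details mentioned above are already exposed as virtual xattrs on directories, e.g. (mount path illustrative):
      getfattr -n ceph.dir.rbytes   /mnt/cephfs/some/dir    # recursive byte count
      getfattr -n ceph.dir.rfiles   /mnt/cephfs/some/dir    # recursive regular-file count
      getfattr -n ceph.dir.rentries /mnt/cephfs/some/dir    # recursive entry count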
- 09:36 AM Bug #58878: mds: FAILED ceph_assert(trim_to > trimming_pos)
- Reopened for investigating a possible bug in the MDS that causes bogus values to be persisted in the journal header.
- 06:02 AM Bug #58878 (New): mds: FAILED ceph_assert(trim_to > trimming_pos)
09/18/2023
- 08:29 PM Bug #62870 (Resolved): test_nfs task fails due to no orch backend set
- This test is likely intended to have cephadm set as the orch backend, but for whatever reason, it seems to not be set...
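  A rough sketch of what the task presumably expects to be in place before the nfs tests run (the exact setup is handled by the qa task, so this is illustrative only):
      ceph mgr module enable cephadm    # make the cephadm orchestrator module available
      ceph orch set backend cephadm     # select it as the orchestrator backend
      ceph orch status                  # should no longer report "No orchestrator configured"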
- 02:54 PM Feature #62849 (In Progress): mds/FSMap: add field indicating the birth time of the epoch
- 12:48 PM Backport #62867 (In Progress): quincy: cephfs: qa snaptest-git-ceph.sh failed with "got remote pr...
- https://github.com/ceph/ceph/pull/53629
- 12:48 PM Backport #62866 (In Progress): reef: cephfs: qa snaptest-git-ceph.sh failed with "got remote proc...
- https://github.com/ceph/ceph/pull/53628
- 12:48 PM Backport #62865 (In Progress): pacific: cephfs: qa snaptest-git-ceph.sh failed with "got remote p...
- https://github.com/ceph/ceph/pull/53630
- 12:37 PM Bug #59413 (Pending Backport): cephfs: qa snaptest-git-ceph.sh failed with "got remote process re...
- 12:47 AM Bug #59413: cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
- Patrick Donnelly wrote:
> /teuthology/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-def...
- 10:25 AM Bug #62863 (Can't reproduce): Slowness or deadlock in ceph-fuse causes teuthology job to hang and...
- https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/7394785/
I...
- 02:22 AM Backport #62860 (In Progress): reef: mds: deadlock between unlink and linkmerge
- 12:43 AM Backport #62860 (Resolved): reef: mds: deadlock between unlink and linkmerge
- https://github.com/ceph/ceph/pull/53497
- 02:19 AM Backport #62858 (In Progress): quincy: mds: deadlock between unlink and linkmerge
- 12:42 AM Backport #62858 (In Progress): quincy: mds: deadlock between unlink and linkmerge
- https://github.com/ceph/ceph/pull/53496
- 02:14 AM Backport #62859 (In Progress): pacific: mds: deadlock between unlink and linkmerge
- 12:42 AM Backport #62859 (Resolved): pacific: mds: deadlock between unlink and linkmerge
- https://github.com/ceph/ceph/pull/53495
- 01:28 AM Bug #62861: mds: _submit_entry ELid(0) crashed the MDS
- It's a use-after-free bug for the stray CInodes.
- 01:28 AM Bug #62861 (Fix Under Review): mds: _submit_entry ELid(0) crashed the MDS
- 01:03 AM Bug #62861 (Resolved): mds: _submit_entry ELid(0) crashed the MDS
- /teuthology/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/7395153/teuthol...
- 12:28 AM Bug #61818 (Pending Backport): mds: deadlock between unlink and linkmerge
09/16/2023
- 04:46 PM Bug #62658: error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
- This is a job with scrubbing on dir frags on a set of replicas.
Interestingly there's no trace of handle_fragment_no...
- 12:44 AM Feature #62856 (New): cephfs: persist an audit log in CephFS
- ... for quickly learning what disaster tools and commands have been run on the file system.
Too often we see a cl...
09/15/2023
- 07:54 PM Backport #62854 (In Progress): pacific: qa: "cluster [ERR] MDS abort because newly corrupt dentry...
- 06:45 PM Backport #62854 (In Progress): pacific: qa: "cluster [ERR] MDS abort because newly corrupt dentry...
- https://github.com/ceph/ceph/pull/53486
- 07:52 PM Backport #62853 (In Progress): quincy: qa: "cluster [ERR] MDS abort because newly corrupt dentry ...
- 06:44 PM Backport #62853 (In Progress): quincy: qa: "cluster [ERR] MDS abort because newly corrupt dentry ...
- https://github.com/ceph/ceph/pull/53485
- 07:51 PM Backport #62852 (In Progress): reef: qa: "cluster [ERR] MDS abort because newly corrupt dentry to...
- 06:44 PM Backport #62852 (In Progress): reef: qa: "cluster [ERR] MDS abort because newly corrupt dentry to...
- https://github.com/ceph/ceph/pull/53484
- 06:38 PM Bug #62164 (Pending Backport): qa: "cluster [ERR] MDS abort because newly corrupt dentry to be co...
- 04:35 PM Feature #62849 (In Progress): mds/FSMap: add field indicating the birth time of the epoch
- So you can easily see when the FSMap epoch was published (real time) without looking at each file system's mdsmap. In...
- 04:03 PM Bug #62848 (Duplicate): qa: fail_fs upgrade scenario hanging
- ...
- 04:00 PM Bug #62847 (Triaged): mds: blogbench requests stuck (5mds+scrub+snaps-flush)
- ...
09/14/2023
- 04:07 PM Bug #59413: cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
- /teuthology/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/7395153/teuthol...
- 04:04 PM Bug #59344: qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
- /teuthology/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/7395114/teuthol...
- 04:02 PM Bug #62067: ffsb.sh failure "Resource temporarily unavailable"
- /teuthology/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/7395112/teuthol...
- 04:02 PM Bug #62067: ffsb.sh failure "Resource temporarily unavailable"
- Venky Shankar wrote:
> Duplicate of #62484
Is it? This one gets EAGAIN while #62484 gets EIO. That's interesting...
- 12:26 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Venky Shankar wrote:
> Xiubo/Dhairya, if the MDS can ensure that the sessionmap is persisted _before_ the client sta...
- 05:22 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Xiubo/Dhairya, if the MDS can ensure that the sessionmap is persisted _before_ the client starts using the prellocate...
- 04:53 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Dhairya Parmar wrote:
> Greg Farnum wrote:
> > Xiubo Li wrote:
> > > Greg Farnum wrote:
> > > > I was talking to ...
- 10:17 AM Backport #62843 (New): pacific: Lack of consistency in time format
- 10:17 AM Backport #62842 (New): reef: Lack of consistency in time format
- 10:17 AM Backport #62841 (New): quincy: Lack of consistency in time format
- 10:11 AM Bug #62494 (Pending Backport): Lack of consistency in time format
- 06:30 AM Bug #62698: qa: fsstress.sh fails with error code 124
- Rishabh, have you seen this in any of your very recent runs?
- 06:29 AM Bug #62706 (Can't reproduce): qa: ModuleNotFoundError: No module named XXXXXX
- Please reopen if this shows up again.
- 05:36 AM Backport #62835 (In Progress): quincy: cephfs-top: enhance --dump code to include the missing fields
- 04:20 AM Backport #62835 (Resolved): quincy: cephfs-top: enhance --dump code to include the missing fields
- https://github.com/ceph/ceph/pull/53454
- 04:42 AM Bug #59768 (Duplicate): crash: void EMetaBlob::replay(MDSRank*, LogSegment*, MDPeerUpdate*): asse...
- Duplicate of https://tracker.ceph.com/issues/58489
- 04:34 AM Backport #62834 (In Progress): pacific: cephfs-top: enhance --dump code to include the missing fi...
- 04:19 AM Backport #62834 (Resolved): pacific: cephfs-top: enhance --dump code to include the missing fields
- https://github.com/ceph/ceph/pull/53453
- 04:11 AM Bug #61397 (Pending Backport): cephfs-top: enhance --dump code to include the missing fields
- Venky Shankar wrote:
> Jos, this needs backports, yes?
Yes, needs backport. https://tracker.ceph.com/issues/57014...
- 03:59 AM Bug #61397: cephfs-top: enhance --dump code to include the missing fields
- Jos, this needs backports, yes?
09/13/2023
- 04:55 PM Bug #62793: client: setfattr -x ceph.dir.pin: No such attribute
- It'll be nice if we can handle this just from the MDS side. It may require changes to ceph-fuse and the kclient to pa...
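  For reference, the behaviour in question (directory path illustrative): removing the xattr fails, while the commonly documented workaround is to reset the pin to -1 instead:
      setfattr -x ceph.dir.pin /mnt/cephfs/dir          # fails with "No such attribute"
      setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/dir    # workaround: disable the export pin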
- 01:33 PM Bug #62580: testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
- https://pulpito.ceph.com/yuriw-2023-08-21_21:30:15-fs-wip-yuri2-testing-2023-08-21-0910-pacific-distro-default-smithi/
- 01:32 PM Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- https://pulpito.ceph.com/yuriw-2023-08-21_21:30:15-fs-wip-yuri2-testing-2023-08-21-0910-pacific-distro-default-smithi
- 12:45 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Greg Farnum wrote:
> Xiubo Li wrote:
> > Greg Farnum wrote:
> > > I was talking to Dhairya about this today and am...
- 12:23 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Greg Farnum wrote:
> Xiubo Li wrote:
> > Greg Farnum wrote:
> > > I was talking to Dhairya about this today and am...
- 10:14 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Greg Farnum wrote:
> Xiubo Li wrote:
> > Greg Farnum wrote:
> > > I was talking to Dhairya about this today and am...
- 09:51 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- (I got to this rather late - so excuse me for any discussion that were already resolved).
Dhairya Parmar wrote:
>...
- 11:55 AM Bug #59768: crash: void EMetaBlob::replay(MDSRank*, LogSegment*, MDPeerUpdate*): assert(g_conf()-...
- Neeraj Pratap Singh wrote:
> While I was debugging this issue, it seemed that the issue doesn't exist anymore.
> An...
- 11:32 AM Bug #59768: crash: void EMetaBlob::replay(MDSRank*, LogSegment*, MDPeerUpdate*): assert(g_conf()-...
- While I was debugging this issue, it seemed that the issue doesn't exist anymore.
And I found this PR: https://githu...
- 04:53 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- /a/https://pulpito.ceph.com/vshankar-2023-09-12_06:47:30-fs-wip-vshankar-testing-20230908.065909-testing-default-smit...
- 04:45 AM Bug #61574: qa: build failure for mdtest project
- Rishabh, this requires changes similar to tracker #61399?
- 04:43 AM Bug #61399 (Resolved): qa: build failure for ior
- Rishabh, this change does not need backport, yes?
09/12/2023
- 01:23 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Xiubo Li wrote:
> Greg Farnum wrote:
> > I was talking to Dhairya about this today and am not quite sure I understa...
- 01:05 PM Bug #61584: mds: the parent dir's mtime will be overwrote by update cap request when making dirs
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > I reproduced it by creating *dirk4444/dirk5555* a...
- 12:21 PM Bug #61584: mds: the parent dir's mtime will be overwrote by update cap request when making dirs
- Venky Shankar wrote:
> Xiubo Li wrote:
> > I reproduced it by creating *dirk4444/dirk5555* and found the root cause...
- 09:41 AM Bug #61584: mds: the parent dir's mtime will be overwrote by update cap request when making dirs
- Xiubo Li wrote:
> I reproduced it by creating *dirk4444/dirk5555* and found the root cause:
>
> [...]
>
>
> ...
- 12:56 PM Feature #61866 (In Progress): MDSMonitor: require --yes-i-really-mean-it when failing an MDS with...
- I will take a look Venky.
- 12:29 PM Feature #57481 (In Progress): mds: enhance scrub to fragment/merge dirfrags
- 10:26 AM Bug #62482: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled ...
- Additional PR: https://github.com/ceph/ceph/pull/53418
- 04:19 AM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Patrick,
> >
> > Going by the description here, I assume this...
- 03:07 AM Bug #62810: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again
- The old commits will be reverted in https://github.com/ceph/ceph/pull/52199 and this needs to be fixed again.
- 03:07 AM Bug #62810 (New): Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fi...
- 03:07 AM Bug #62810 (New): Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fi...
- https://pulpito.ceph.com/vshankar-2022-04-11_12:24:06-fs-wip-vshankar-testing1-20220411-144044-testing-default-smithi...
09/11/2023
- 06:02 PM Backport #62807 (In Progress): pacific: doc: write cephfs commands in full
- 05:53 PM Backport #62807 (Resolved): pacific: doc: write cephfs commands in full
- https://github.com/ceph/ceph/pull/53403
- 05:57 PM Backport #62806 (In Progress): reef: doc: write cephfs commands in full
- 05:53 PM Backport #62806 (Resolved): reef: doc: write cephfs commands in full
- https://github.com/ceph/ceph/pull/53402
- 05:55 PM Backport #62805 (In Progress): quincy: doc: write cephfs commands in full
- 05:53 PM Backport #62805 (Resolved): quincy: doc: write cephfs commands in full
- https://github.com/ceph/ceph/pull/53401
- 05:51 PM Documentation #62791 (Pending Backport): doc: write cephfs commands in full
- 04:54 PM Documentation #62791 (Resolved): doc: write cephfs commands in full
- 09:57 AM Documentation #62791 (Resolved): doc: write cephfs commands in full
- In @doc/cephfs/administration.rst@ we don't write CephFS commands fully. Example: @ceph fs rename@ is written as @fs...
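  The gist of the requested doc change, using the example from the report (arguments elided):
      fs rename ...        # abbreviated form currently used in the docs
      ceph fs rename ...   # full form the docs should show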
- 04:23 PM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Venky Shankar wrote:
> Patrick,
>
> Going by the description here, I assume this change is only for the volumes p...
- 03:03 PM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Patrick,
Going by the description here, I assume this change is only for the volumes plugin. In case the changes a...
- 03:00 PM Backport #62799 (In Progress): quincy: qa: run nfs related tests with fs suite
- https://github.com/ceph/ceph/pull/53907
- 03:00 PM Backport #62798 (Rejected): pacific: qa: run nfs related tests with fs suite
- https://github.com/ceph/ceph/pull/53905
- 03:00 PM Backport #62797 (Resolved): reef: qa: run nfs related tests with fs suite
- https://github.com/ceph/ceph/pull/53906
- 02:56 PM Bug #62236 (Pending Backport): qa: run nfs related tests with fs suite
- 02:55 PM Bug #62793: client: setfattr -x ceph.dir.pin: No such attribute
- Chris, please take this one.
- 12:12 PM Bug #62793 (Fix Under Review): client: setfattr -x ceph.dir.pin: No such attribute
- I've come across documents which suggest removing ceph.dir.pin to disable export pins, but it looks like it does not ...
- 12:31 PM Bug #62658: error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
- 12:31 PM Bug #62658: error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
- Milind, PTAL. I vaguely recall a similar issue you were looking into a while back.
- 12:28 PM Bug #62673: cephfs subvolume resize does not accept 'unit'
- Dhairya, I presume this is a similar change to the one you worked on a while back.
- 12:26 PM Bug #62465 (Can't reproduce): pacific (?): LibCephFS.ShutdownRace segmentation fault
- 12:15 PM Bug #62567: postgres workunit times out - MDS_SLOW_REQUEST in logs
- Xiubo, this might be related to the slow rename issue you have a PR for. Could you please check?
- 12:13 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Xiubo Li wrote:
> Greg Farnum wrote:
> > I was talking to Dhairya about this today and am not quite sure I understa...
- 11:06 AM Bug #62126: test failure: suites/blogbench.sh stops running
- Another instance: https://pulpito.ceph.com/vshankar-2023-09-08_07:03:01-fs-wip-vshankar-testing-20230830.153114-testi...
- 11:04 AM Bug #62682: mon: no mdsmap broadcast after "fs set joinable" is set to true
- Probably another instance - https://pulpito.ceph.com/vshankar-2023-09-08_07:03:01-fs-wip-vshankar-testing-20230830.15...
- 08:01 AM Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_ve...
- Venky Shankar wrote:
> FWIW, logs hint at missing (RADOS) objects:
>
> [...]
>
> I'm not certain yet if this i...
- 06:04 AM Feature #61866: MDSMonitor: require --yes-i-really-mean-it when failing an MDS with MDS_HEALTH_TR...
- Manish, please take this one on prio.
09/10/2023
- 08:50 AM Bug #52723 (Resolved): mds: improve mds_bal_fragment_size_max config option
- 08:50 AM Backport #53122 (Rejected): pacific: mds: improve mds_bal_fragment_size_max config option
- 08:48 AM Backport #57111 (In Progress): quincy: mds: handle deferred client request core when mds reboot
- 08:48 AM Backport #57110 (In Progress): pacific: mds: handle deferred client request core when mds reboot
- 08:46 AM Bug #58651 (Resolved): mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 08:46 AM Backport #59409 (Resolved): reef: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 08:46 AM Backport #59002 (Resolved): quincy: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 08:46 AM Backport #59003 (Resolved): pacific: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 08:46 AM Backport #59032 (Resolved): pacific: Test failure: test_client_cache_size (tasks.cephfs.test_clie...
- 08:45 AM Backport #59719 (Resolved): reef: client: read wild pointer when reconnect to mds
- 08:44 AM Backport #59718 (Resolved): quincy: client: read wild pointer when reconnect to mds
- 08:44 AM Backport #59720 (Resolved): pacific: client: read wild pointer when reconnect to mds
- 08:43 AM Backport #61841 (Resolved): pacific: mds: do not evict clients if OSDs are laggy
- 08:35 AM Backport #62005 (In Progress): quincy: client: readdir_r_cb: get rstat for dir only if using rbyt...
- 08:35 AM Backport #62004 (In Progress): reef: client: readdir_r_cb: get rstat for dir only if using rbytes...
- 08:33 AM Backport #61992 (In Progress): quincy: mds/MDSRank: op_tracker of mds have slow op alway.
- 08:32 AM Backport #61993 (In Progress): reef: mds/MDSRank: op_tracker of mds have slow op alway.
- 08:29 AM Backport #62372 (Resolved): pacific: Consider setting "bulk" autoscale pool flag when automatical...
- 08:28 AM Bug #61907 (Resolved): api tests fail from "MDS_CLIENTS_LAGGY" warning
- 08:28 AM Backport #62443 (Resolved): reef: api tests fail from "MDS_CLIENTS_LAGGY" warning
- 08:28 AM Backport #62441 (Resolved): quincy: api tests fail from "MDS_CLIENTS_LAGGY" warning
- 08:28 AM Backport #62442 (Resolved): pacific: api tests fail from "MDS_CLIENTS_LAGGY" warning
- 08:26 AM Backport #62421 (Resolved): pacific: mds: adjust cap acquistion throttle defaults
09/08/2023
- 04:08 PM Backport #62724 (In Progress): reef: mon/MDSMonitor: optionally forbid to use standby for another...
- 03:59 PM Backport #59405 (In Progress): reef: MDS allows a (kernel) client to exceed the xattrs key/value ...
- 10:52 AM Backport #62738 (In Progress): reef: quota: accept values in human readable format as well
- https://github.com/ceph/ceph/pull/53333
- 10:23 AM Feature #57481: mds: enhance scrub to fragment/merge dirfrags
- Chris, please take this one.
- 07:12 AM Bug #62482: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled ...
- Revert change: https://github.com/ceph/ceph/pull/53331
- 05:54 AM Backport #62585 (In Progress): quincy: mds: enforce a limit on the size of a session in the sessi...
- 05:38 AM Backport #62583 (In Progress): reef: mds: enforce a limit on the size of a session in the sessionmap
- 04:10 AM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Greg Farnum wrote:
> Patrick Donnelly wrote:
> > If we are going to move the metadata out of CephFS, I think it sho... - 03:21 AM Bug #62537 (Fix Under Review): cephfs scrub command will crash the standby-replay MDSs
- 01:36 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Greg Farnum wrote:
> I was talking to Dhairya about this today and am not quite sure I understand.
>
> Xiubo, Ven...
09/07/2023
- 07:53 PM Bug #62764 (New): qa: use stdin-killer for kclient mounts
- To reduce the number of dead jobs caused by, e.g., an umount command stuck in uninterruptible sleep.
- 07:52 PM Bug #62763 (Fix Under Review): qa: use stdin-killer for ceph-fuse mounts
- To reduce the number of dead jobs caused by, e.g., an umount command stuck in uninterruptible sleep.
- 07:44 PM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Greg Farnum wrote:
> Patrick Donnelly wrote:
> > If we are going to move the metadata out of CephFS, I think it sho...
- 03:21 PM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Patrick Donnelly wrote:
> If we are going to move the metadata out of CephFS, I think it should go in cephsqlite. Th...
- 02:59 PM Bug #61399: qa: build failure for ior
- What fixed this issue was using the latest version of the ior project as well as purging and then reinstalling the mpich packa...
- 02:47 PM Bug #61399: qa: build failure for ior
- The PR has been merged just now, I'll check with Venky if this needs to be backported.
- 02:47 PM Bug #61399 (Fix Under Review): qa: build failure for ior
- 01:41 PM Bug #62729 (Resolved): src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘...
- 01:10 PM Feature #47264 (Resolved): "fs authorize" subcommand should work for multiple FSs too
- 12:25 PM Feature #62364: support dumping rstats on a particular path
- Venky Shankar wrote:
> Venky Shankar wrote:
> > Greg Farnum wrote:
> > > Venky Shankar wrote:
> > > > Greg Farnum...
- 12:05 PM Feature #62364: support dumping rstats on a particular path
- Venky Shankar wrote:
> Greg Farnum wrote:
> > Venky Shankar wrote:
> > > Greg Farnum wrote:
> > > > Especially no...
- 11:07 AM Bug #62739 (Resolved): cephfs-shell: remove distutils Version classes because they're deprecated
- python 3.10 deprecated distutils [0]. LooseVersion is used at many places in cephfs-shell.py, suggest switching to pa...
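  A quick illustration of the motivation, assuming Python >= 3.10 and the third-party packaging module as the suggested replacement (commands illustrative, not the actual cephfs-shell patch):
      python3 -W error::DeprecationWarning -c "from distutils.version import LooseVersion"                 # errors on 3.10+: distutils is deprecated
      python3 -c "from packaging.version import Version; print(Version('0.10.1') >= Version('0.10.0'))"   # prints True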
- 10:16 AM Backport #62738 (In Progress): reef: quota: accept values in human readable format as well
- 10:07 AM Feature #55940 (Pending Backport): quota: accept values in human readable format as well
09/06/2023
- 09:53 PM Bug #62729: src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘ffs’?
- Last successful main shaman build: https://shaman.ceph.com/builds/ceph/main/794f4d16c6c8bf35729045062d24322d30b5aa14/...
- 09:32 PM Bug #62729: src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘ffs’?
- Laura suspected merging https://github.com/ceph/ceph/pull/51942 led to this issue. I've built the PR branch (@wip-61...
- 07:48 PM Bug #62729: src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘ffs’?
- https://shaman.ceph.com/builds/ceph/main/f9a01cf3851ffa2c51b5fb84e304c1481f35fe03/
- 07:48 PM Bug #62729 (Resolved): src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘...
- https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos8,DIST=centos8,MAC...
- 08:49 PM Backport #62733 (Resolved): reef: mds: add TrackedOp event for batching getattr/lookup
- https://github.com/ceph/ceph/pull/53558
- 08:49 PM Backport #62732 (Resolved): quincy: mds: add TrackedOp event for batching getattr/lookup
- https://github.com/ceph/ceph/pull/53557
- 08:49 PM Backport #62731 (Resolved): pacific: mds: add TrackedOp event for batching getattr/lookup
- https://github.com/ceph/ceph/pull/53556
- 08:44 PM Bug #62057 (Pending Backport): mds: add TrackedOp event for batching getattr/lookup
- 06:47 PM Backport #62419 (Resolved): reef: mds: adjust cap acquistion throttle defaults
- https://github.com/ceph/ceph/pull/52972#issuecomment-1708910842
- 05:30 PM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Dhairya Parmar wrote:
> Venky Shankar wrote:
>
> > which prompted a variety of code changes to workaround the pro...
- 02:21 PM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Dhairya Parmar wrote:
> Venky Shankar wrote:
>
> > which prompted a variety of code changes to workaround the pro...
- 09:02 AM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Venky Shankar wrote:
> which prompted a variety of code changes to workaround the problem. This all carries a size...
- 06:14 AM Feature #62715 (New): mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- A bit of history: The subvolume thing started out as a directory structure in the file system (and that is still the ...
- 03:09 PM Backport #62726 (New): quincy: mon/MDSMonitor: optionally forbid to use standby for another fs as...
- 03:09 PM Backport #62725 (Rejected): pacific: mon/MDSMonitor: optionally forbid to use standby for another...
- 03:09 PM Backport #62724 (In Progress): reef: mon/MDSMonitor: optionally forbid to use standby for another...
- https://github.com/ceph/ceph/pull/53340
- 03:00 PM Feature #61599 (Pending Backport): mon/MDSMonitor: optionally forbid to use standby for another f...
- 09:58 AM Bug #62706: qa: ModuleNotFoundError: No module named XXXXXX
- I too ran into this in one of my runs. I believe this is an env thing since a bunch of other tests from my run had is...
- 09:57 AM Bug #62501: pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snaps...
- Dhairya, please take this one.
- 09:45 AM Bug #62674: cephfs snapshot remains visible in nfs export after deletion and new snaps not shown
- https://tracker.ceph.com/issues/58376 is the one reported by a community user.
- 08:55 AM Bug #62674 (Duplicate): cephfs snapshot remains visible in nfs export after deletion and new snap...
- Hi Paul,
You are probably running into https://tracker.ceph.com/issues/59041 - at least for the part for listing s...
- 09:41 AM Bug #62682 (Triaged): mon: no mdsmap broadcast after "fs set joinable" is set to true
- The upgrade process uses `fail_fs` which fails the file system and upgrades the MDSs without reducing max_mds to 1. I...
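  A rough outline of the fail_fs-style flow being discussed (names are placeholders; the qa task automates this):
      ceph fs fail <fs_name>                 # fail all ranks and mark the fs not joinable, without reducing max_mds
      # ... upgrade/restart the MDS daemons ...
      ceph fs set <fs_name> joinable true    # let the upgraded MDSs rejoin; the mon should then broadcast the new mdsmap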
- 09:34 AM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Patrick Donnelly wrote:
> Even with the recent changes to the ceph-mgr (#51177) to have a separate finisher thread f...
- 08:46 AM Feature #62668: qa: use teuthology scripts to test dozens of clients
- Patrick Donnelly wrote:
> We have one small suite for integration testing of multiple clients:
>
> https://github...
- 08:32 AM Feature #62670: [RFE] cephfs should track and expose subvolume usage and quota
- Paul Cuzner wrote:
> Subvolumes may be queried independently, but at scale we need a way for subvolume usage and quo...
- 06:57 AM Bug #62720 (New): mds: identify selinux relabelling and generate health warning
- This request has come up from folks in the field. A recursive relabel on a file system brings the mds down to its kne...
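  The workload in question is typically a recursive relabel over the CephFS mount, e.g. (mount point illustrative), which touches the security xattr on every file and can generate a very large setxattr load on the MDS:
      restorecon -R /mnt/cephfs    # recursive SELinux relabel of the whole mount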
- 05:54 AM Backport #59200 (Rejected): reef: qa: add testing in fs:workload for different kinds of subvolumes
- Available in reef.
- 05:53 AM Backport #59201 (Resolved): quincy: qa: add testing in fs:workload for different kinds of subvolumes
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50974
Merged.
- 12:48 AM Bug #62700: postgres workunit failed with "PQputline failed"
- Another one https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/73...
- 12:47 AM Fix #51177 (Resolved): pybind/mgr/volumes: investigate moving calls which may block on libcephfs ...
- 12:47 AM Backport #59417 (Resolved): pacific: pybind/mgr/volumes: investigate moving calls which may block...
09/05/2023
- 09:54 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- /a/yuriw-2023-09-01_19:14:47-rados-wip-batrick-testing-20230831.124848-pacific-distro-default-smithi/7386551
- 08:09 PM Fix #62712 (New): pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when unde...
- Even with the recent changes to the ceph-mgr (#51177) to have a separate finisher thread for each module, the request...
- 07:42 PM Backport #62268 (Resolved): pacific: qa: _test_stale_caps does not wait for file flush before stat
- 07:42 PM Backport #62517 (Resolved): pacific: mds: inode snaplock only acquired for open in create codepath
- 07:41 PM Backport #62662 (Resolved): pacific: mds: deadlock when getattr changes inode lockset
- 02:53 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- PR https://github.com/ceph/ceph/pull/52924 has been merged for fixing this issue. Original PR https://github.com/ceph...
- 02:51 PM Bug #62084 (Resolved): task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph...
- 01:36 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- I was talking to Dhairya about this today and am not quite sure I understand.
Xiubo, Venky, are we contending the ...
- 12:56 PM Feature #61863: mds: issue a health warning with estimated time to complete replay
- Manish Yathnalli wrote:
> https://github.com/ceph/ceph/pull/52527
Manish, the PR id is linked in the "Pull reques...
- 12:42 PM Feature #61863 (Fix Under Review): mds: issue a health warning with estimated time to complete re...
- https://github.com/ceph/ceph/pull/52527
- 12:42 PM Bug #62265 (Fix Under Review): cephfs-mirror: use monotonic clocks in cephfs mirror daemon
- https://github.com/ceph/ceph/pull/53283
- 11:41 AM Bug #62706 (Pending Backport): qa: ModuleNotFoundError: No module named XXXXXX
- https://pulpito.ceph.com/rishabh-2023-08-10_20:13:47-fs-wip-rishabh-2023aug3-b4-testing-default-smithi/7365558/
... - 05:27 AM Bug #62702 (Fix Under Review): MDS slow requests for the internal 'rename' requests
- 04:43 AM Bug #62702 (Pending Backport): MDS slow requests for the internal 'rename' requests
- https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/7378922
<...
- 04:34 AM Bug #56397 (In Progress): client: `df` will show incorrect disk size if the quota size is not ali...
- 04:34 AM Bug #56397: client: `df` will show incorrect disk size if the quota size is not aligned to 4MB
- Revert PR: https://github.com/ceph/ceph/pull/53153
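  For reference, CephFS quotas are set via virtual xattrs; the report is about quota values that are not 4 MiB aligned, e.g. (path and value illustrative):
      setfattr -n ceph.quota.max_bytes -v 1000000 /mnt/cephfs/dir    # 1000000 is not a multiple of 4 MiB
      df -h /mnt/cephfs/dir                                          # reported size may be wrong per this bug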
- 01:03 AM Bug #62700 (Fix Under Review): postgres workunit failed with "PQputline failed"
- The scale factor will depend on the node's performance and disk sizes being used to run the test, and 500 seems too l...
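  Assuming the workunit drives pgbench (so the scale factor above maps to pgbench's -s option), the initialization step looks roughly like:
      pgbench -i -s 500 <dbname>    # -s 500 sizes pgbench_accounts at ~50 million rows; too large for slower test nodes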
- 12:53 AM Bug #62700 (Resolved): postgres workunit failed with "PQputline failed"
- https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/7365718/teutho...
09/04/2023
- 03:29 PM Bug #62698: qa: fsstress.sh fails with error code 124
- Copying following log entries on behalf of Radoslaw -...
- 03:21 PM Bug #62698: qa: fsstress.sh fails with error code 124
- These messages mean there was not even a single successful exchange of network heartbeat messages between osd.5 and (o...
- 02:58 PM Bug #62698 (Can't reproduce): qa: fsstress.sh fails with error code 124
- https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/7379296
The...
- 02:26 PM Backport #62696 (Rejected): reef: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have a...
- 02:26 PM Backport #62695 (Rejected): quincy: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have...
- 02:18 PM Bug #62482 (Pending Backport): qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an a...
- 11:15 AM Feature #1680: support reflink (cheap file copy/clone)
- This feature would really be appreciated. We would like to switch to Ceph for our cluster storage, but we rely heavil...
- 09:36 AM Bug #62676: cephfs-mirror: 'peer_bootstrap import' hangs
- If this is just a perception issue, then a message to the user like "You need to wait for 5 minutes for this command t...
- 07:39 AM Bug #62676 (Closed): cephfs-mirror: 'peer_bootstrap import' hangs
- This does not appear to be a bug. It waits for 5 minutes for the secrets to expire. Don't press Ctrl+C; just wait for 5 min...
- 08:56 AM Bug #62494 (In Progress): Lack of consistency in time format
- 08:53 AM Backport #59408 (In Progress): reef: cephfs_mirror: local and remote dir root modes are not same
- 08:52 AM Backport #59001 (In Progress): pacific: cephfs_mirror: local and remote dir root modes are not same
- 06:31 AM Bug #62682 (Resolved): mon: no mdsmap broadcast after "fs set joinable" is set to true
- ...
- 12:45 AM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
- Sudhin Bengeri wrote:
> Hi Xiubo,
>
> Here is the uname -a output from the nodes:
> Linux wkhd 6.3.0-rc4+ #6 SMP...
09/03/2023
- 02:40 PM Bug #61950: mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_scrub_large_omap_o...
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Patrick - I can take this one if you haven't started on it yet.
...
09/02/2023
- 09:05 PM Backport #62569 (In Progress): pacific: ceph_fs.h: add separate owner_{u,g}id fields
- 09:05 PM Backport #62570 (In Progress): reef: ceph_fs.h: add separate owner_{u,g}id fields
- 09:05 PM Backport #62571 (In Progress): quincy: ceph_fs.h: add separate owner_{u,g}id fields
09/01/2023
- 06:58 PM Bug #50250 (New): mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/cli...
- https://pulpito.ceph.com/pdonnell-2023-08-31_15:31:51-fs-wip-batrick-testing-20230831.124848-pacific-distro-default-s...
- 09:05 AM Bug #62676 (Closed): cephfs-mirror: 'peer_bootstrap import' hangs
- The 'peer_bootstrap import' command hangs after a wrong/invalid token is used for the import. If we use an invalid token i...
08/31/2023
- 11:01 PM Bug #62674 (Duplicate): cephfs snapshot remains visible in nfs export after deletion and new snap...
- When a snapshot is taken of the subvolume, the .snap directory shows the snapshot when viewed from the NFS mount and ...
- 10:26 PM Bug #62673 (New): cephfs subvolume resize does not accept 'unit'
- Specifying the quota or resize for a subvolume requires the value in bytes. This value should be accepted as <num><un...
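A sketch of the current invocation versus the requested one (volume/subvolume names and sizes are illustrative):
  # current behaviour: the new size must be given in bytes
  ceph fs subvolume resize cephfs sub0 10737418240
  # requested behaviour (not yet implemented): accept a unit suffix
  ceph fs subvolume resize cephfs sub0 10G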
- 10:06 PM Feature #62670 (Need More Info): [RFE] cephfs should track and expose subvolume usage and quota
- Subvolumes may be queried independently, but at scale we need a way for subvolume usage and quota thresholds to drive...
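For context, usage and quota can currently be fetched only per subvolume, e.g. (names illustrative):
  # JSON output includes bytes_used, bytes_quota and bytes_pcent
  ceph fs subvolume info cephfs sub0
The RFE asks for this data to be tracked and exposed in aggregate rather than queried one subvolume at a time.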
- 06:34 PM Feature #62668 (New): qa: use teuthology scripts to test dozens of clients
- We have one small suite for integration testing of multiple clients:
https://github.com/ceph/ceph/tree/9d7c1825783...
- 03:35 PM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
- Hi Xiubo,
Here is the uname -a output from the nodes:
Linux wkhd 6.3.0-rc4+ #6 SMP PREEMPT_DYNAMIC Mon May 22 22:...
- 03:44 AM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
- Sudhin Bengeri wrote:
> Hi Xiubo,
>
> Thanks for your response.
>
> Are you saying that cephfs does not suppo...
- 12:35 PM Backport #62662 (In Progress): pacific: mds: deadlock when getattr changes inode lockset
- 12:02 PM Backport #62662 (Resolved): pacific: mds: deadlock when getattr changes inode lockset
- https://github.com/ceph/ceph/pull/53243
- 12:34 PM Backport #62660 (In Progress): quincy: mds: deadlock when getattr changes inode lockset
- 12:01 PM Backport #62660 (In Progress): quincy: mds: deadlock when getattr changes inode lockset
- https://github.com/ceph/ceph/pull/53242
- 12:34 PM Bug #62664 (New): ceph-fuse: failed to remount for kernel dentry trimming; quitting!
- Hi,
While #62604 is being addressed, I wanted to try the ceph-fuse client. I'm using the same setup with kernel 6.4...
- 12:34 PM Backport #62661 (In Progress): reef: mds: deadlock when getattr changes inode lockset
- 12:01 PM Backport #62661 (In Progress): reef: mds: deadlock when getattr changes inode lockset
- https://github.com/ceph/ceph/pull/53241
- 12:31 PM Bug #62663 (Can't reproduce): MDS: inode nlink value is -1 causing MDS to continuously crash
- All MDS daemons are continuously crashing. The logs report that an inode nlink value is set to -1. I have included ...
- 11:56 AM Bug #62052 (Pending Backport): mds: deadlock when getattr changes inode lockset
- 09:41 AM Bug #62580 (Fix Under Review): testing: Test failure: test_snapshot_remove (tasks.cephfs.test_str...
- 08:57 AM Bug #62580 (In Progress): testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.T...
- Xiubo Li wrote:
> This should duplicate with https://tracker.ceph.com/issues/61892, which hasn't been backported to ...
- 05:30 AM Bug #62580 (Duplicate): testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.Tes...
- This appears to be a duplicate of https://tracker.ceph.com/issues/61892, which hasn't been backported to Pacific yet.
- 09:30 AM Bug #62658 (Pending Backport): error during scrub thrashing: reached maximum tries (31) after wai...
- /a/vshankar-2023-08-24_07:29:19-fs-wip-vshankar-testing-20230824.045828-testing-default-smithi/7378338...
- 07:10 AM Bug #62653 (New): qa: unimplemented fcntl command: 1036 with fsstress
- /a/vshankar-2023-08-24_07:29:19-fs-wip-vshankar-testing-20230824.045828-testing-default-smithi/7378422
Happens wit...
08/30/2023
- 08:54 PM Bug #62648 (New): pybind/mgr/volumes: volume rm freeze waiting for async job on fs to complete
- ...
- 02:16 PM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
- Hi Xiubo,
Thanks for your response.
Are you saying that cephfs does not support fscrypt? I am not exactly sure...
- 05:35 AM Feature #45021 (In Progress): client: new asok commands for diagnosing cap handling issues
08/29/2023
- 12:18 PM Bug #62626: mgr/nfs: include pseudo in JSON output when nfs export apply -i fails
- Dhairya, could you link the commit which started causing this? (I recall we discussed a bit about this)
- 10:49 AM Bug #62626 (In Progress): mgr/nfs: include pseudo in JSON output when nfs export apply -i fails
- Currently, when export update fails, this is the response:...
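For context, the update in question is driven by the export apply command, roughly (cluster id and file name are illustrative):
  ceph nfs export apply mynfs -i export.json
The request is that the JSON error output identify the pseudo path of the export that failed to apply.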
- 09:56 AM Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_ve...
- Venky Shankar wrote:
> FWIW, logs hint at missing (RADOS) objects:
>
> [...]
>
> I'm not certain yet if this i...
- 09:39 AM Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_ve...
- FWIW, logs hint at missing (RADOS) objects:...
- 09:40 AM Backport #61987 (Resolved): reef: mds: session ls command appears twice in command listing
- 09:40 AM Backport #61988 (Resolved): quincy: mds: session ls command appears twice in command listing
- 05:46 AM Feature #61904: pybind/mgr/volumes: add more introspection for clones
- Rishabh, please take this one (along the same lines as https://tracker.ceph.com/issues/61905).
08/28/2023
- 01:33 PM Backport #62517 (In Progress): pacific: mds: inode snaplock only acquired for open in create code...
- 01:32 PM Backport #62516 (In Progress): quincy: mds: inode snaplock only acquired for open in create codepath
- 01:32 PM Backport #62518 (In Progress): reef: mds: inode snaplock only acquired for open in create codepath
- 01:17 PM Backport #62539 (Rejected): reef: qa: Health check failed: 1 pool(s) do not have an application e...
- 01:17 PM Backport #62538 (Rejected): quincy: qa: Health check failed: 1 pool(s) do not have an application...
- 01:17 PM Bug #62508 (Duplicate): qa: Health check failed: 1 pool(s) do not have an application enabled (PO...
- 12:24 PM Documentation #62605: cephfs-journal-tool: update parts of code that need mandatory --rank
- Good catch.
- 12:14 PM Documentation #62605 (New): cephfs-journal-tool: update parts of code that need mandatory --rank
- For instance, if someone refers to [0] to export the journal to a file, it says to run ...
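A hedged example of the invocation the docs should show, with the mandatory --rank (filesystem name and output file are illustrative):
  cephfs-journal-tool --rank=cephfs:0 journal export backup.bin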
- 12:16 PM Bug #62537: cephfs scrub command will crash the standby-replay MDSs
- Neeraj, please take this one.
- 12:09 PM Tasks #62159 (In Progress): qa: evaluate mds_partitioner
- 12:08 PM Bug #62067 (Duplicate): ffsb.sh failure "Resource temporarily unavailable"
- Duplicate of #62484
- 12:06 PM Feature #62157 (In Progress): mds: working set size tracker
- Hi Yongseok,
Assigning this to you since I presume this is being worked on alongside the partitioner module.
- 11:59 AM Feature #62215 (Rejected): libcephfs: Allow monitoring for any file changes like inotify
- Nothing planned for the foreseeable future related to this feature request.
- 11:11 AM Backport #62443: reef: api tests fail from "MDS_CLIENTS_LAGGY" warning
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53005
the above PR has been closed and the commit has been ...
- 11:08 AM Backport #62441: quincy: api tests fail from "MDS_CLIENTS_LAGGY" warning
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53006
the above PR has been closed and the commit has bee...
- 09:19 AM Bug #59413 (Fix Under Review): cephfs: qa snaptest-git-ceph.sh failed with "got remote process re...
- 08:46 AM Bug #62510 (Duplicate): snaptest-git-ceph.sh failure with fs/thrash
- Xiubo Li wrote:
> Venky Shankar wrote:
> > /a/vshankar-2023-08-16_11:13:33-fs-wip-vshankar-testing-20230809.035933-...
- 06:46 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Another one, but with kclient
> >
> > > https://pulpito.ceph.com/vsha...
- 02:41 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Venky Shankar wrote:
> Another one, but with kclient
>
> > https://pulpito.ceph.com/vshankar-2023-08-23_03:59:53-...
- 02:29 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Venky Shankar wrote:
> /a/vshankar-2023-08-16_11:13:33-fs-wip-vshankar-testing-20230809.035933-testing-default-smith...
- 06:22 AM Bug #62278: pybind/mgr/volumes: pending_subvolume_deletions count is always zero in fs volume inf...
- Backport note: also include commit(s) from https://github.com/ceph/ceph/pull/52940
08/27/2023
- 09:06 AM Backport #62572 (In Progress): pacific: mds: add cap acquisition throttled event to MDR
- https://github.com/ceph/ceph/pull/53169
- 09:05 AM Backport #62573 (In Progress): reef: mds: add cap acquisition throttled event to MDR
- https://github.com/ceph/ceph/pull/53168
- 09:05 AM Backport #62574 (In Progress): quincy: mds: add cap acquisition throttled event to MDR
- https://github.com/ceph/ceph/pull/53167