Activity
From 07/08/2023 to 08/06/2023
08/06/2023
- 08:46 AM Feature #61595 (Fix Under Review): Consider setting "bulk" autoscale pool flag when automatically...
- 08:46 AM Bug #58394 (Fix Under Review): nofail option in fstab not supported
- After further research I've arrived at the conclusion that stripping the option in the mount.fuse.ceph helper is the ...
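A minimal Python sketch of the option-stripping idea mentioned above, assuming the helper receives the comma-separated fstab option string; the function name and the exact option set are illustrative, not the actual mount.fuse.ceph implementation:
  # Illustrative only: drop fstab-only options (e.g. "nofail") that the FUSE
  # layer rejects, before building the ceph-fuse command line.
  UNSUPPORTED = {"nofail", "_netdev"}  # assumed set, for illustration

  def strip_unsupported(options: str) -> str:
      kept = [o for o in options.split(",")
              if o and o.split("=")[0] not in UNSUPPORTED]
      return ",".join(kept)

  assert strip_unsupported("rw,noatime,nofail") == "rw,noatime"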
08/05/2023
- 11:14 AM Backport #62054 (Resolved): reef: qa: test_rebuild_moved_file (tasks/data-scan) fails because mds...
- https://github.com/ceph/ceph/pull/52512 merged
08/04/2023
- 09:35 PM Bug #47292: cephfs-shell: test_df_for_valid_file failure
- This can be due to some sort of race (in the test I suppose). Tests for @du@ cephfs-shell command also suffered simil...
- 08:51 PM Bug #47292 (In Progress): cephfs-shell: test_df_for_valid_file failure
- 08:50 PM Bug #47292: cephfs-shell: test_df_for_valid_file failure
- Inspecting methods (in test_cephfs_shell.py) @test_df_for_valid_file@ and @validate_df@, this seems to be a bug not i...
- 06:42 PM Bug #47292: cephfs-shell: test_df_for_valid_file failure
- The bug on this ticket can't be reproduced with teuthology either - https://pulpito.ceph.com/rishabh-2023-08-02_13:...
- 08:38 PM Backport #62337 (In Progress): pacific: MDSAuthCaps: use g_ceph_context directly
- 07:56 PM Backport #62337 (Resolved): pacific: MDSAuthCaps: use g_ceph_context directly
- https://github.com/ceph/ceph/pull/52821
- 08:36 PM Backport #62336 (In Progress): quincy: MDSAuthCaps: use g_ceph_context directly
- 07:56 PM Backport #62336 (In Progress): quincy: MDSAuthCaps: use g_ceph_context directly
- https://github.com/ceph/ceph/pull/52820
- 08:33 PM Backport #62335 (In Progress): reef: MDSAuthCaps: use g_ceph_context directly
- 07:56 PM Backport #62335 (In Progress): reef: MDSAuthCaps: use g_ceph_context directly
- https://github.com/ceph/ceph/pull/52819
- 07:55 PM Bug #62334 (Pending Backport): mds: use g_ceph_context directly
- Creating this ticket because Venky wants me to backport the PR for this ticket.
Summary of this PR -
Variable @...
- 07:52 PM Backport #62333 (New): quincy: MDSAuthCaps: minor improvements
- 07:51 PM Backport #62332 (In Progress): reef: MDSAuthCaps: minor improvements
- https://github.com/ceph/ceph/pull/54185
- 07:51 PM Backport #62331 (Rejected): pacific: MDSAuthCaps: minor improvements
- https://github.com/ceph/ceph/pull/54143
- 07:47 PM Bug #62329 (Rejected): MDSAuthCaps: minor improvements
- Creating this ticket because Venky wants me to backport the PR for this ticket.
Summary of the PR -
1. Import std...
- 06:51 PM Bug #62328 (New): qa: test_acls fails due to dependency package was not found
- ...
- 06:31 PM Bug #62243: qa/cephfs: test_snapshots.py fails because of a missing method
- https://pulpito.ceph.com/rishabh-2023-08-03_21:57:58-fs-wip-rishabh-2023Aug1-3-testing-default-smithi/7359973
- 05:58 PM Bug #62326 (Resolved): pybind/mgr/cephadm: stop disabling fsmap sanity checks during upgrade
- This was necessary due to #53155. We should not continue disabling the sanity checks forever during upgrades which he...
- 04:42 PM Bug #59107: MDS imported_inodes metric is not updated.
- https://github.com/ceph/ceph/pull/51698 merged
- 04:35 PM Backport #59410: reef: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52578
merged
- 04:34 PM Backport #61987: reef: mds: session ls command appears twice in command listing
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52515
merged
- 04:34 PM Bug #61201: qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific
- https://github.com/ceph/ceph/pull/52512 merged
- 04:31 PM Backport #62011: reef: client: dir->dentries inconsistent, both newname and oldname points to sam...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52504
merged
- 04:30 PM Backport #62041: reef: client: do not send metrics until the MDS rank is ready
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52501
merged
- 04:28 PM Backport #62044: reef: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52497
merged
- 04:26 PM Backport #59373: reef: qa: test_join_fs_unset failure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52235
merged
- 03:16 PM Backport #61898: quincy: pybind/cephfs: holds GIL during rmdir
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52322
merged
- 03:14 PM Backport #61425: quincy: mon/MDSMonitor: daemon booting may get failed if mon handles up:boot bea...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52243
merged
- 03:13 PM Backport #61415: quincy: mon/MDSMonitor: do not trigger propose on error from prepare_update
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52239
merged
- 03:13 PM Backport #61412: quincy: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52234
merged
- 09:22 AM Bug #62217: ceph_fs.h: add separate owner_{u,g}id fields
- Xiubo proposed [1] to add one extra field to the struct ceph_mds_request_head.
This field will be filled by client w...
08/03/2023
- 09:28 PM Bug #62208 (New): mds: use MDSRank::abort to ceph_abort so necessary sync is done
- updating the status to New until the API becomes available
- 02:17 PM Bug #62208: mds: use MDSRank::abort to ceph_abort so necessary sync is done
- Leonid Usov wrote:
> OK thanks Patrick! So this will wait until you submit the linked issue that introduces the new ...
- 01:10 PM Bug #62208 (In Progress): mds: use MDSRank::abort to ceph_abort so necessary sync is done
- OK thanks Patrick! So this will wait until you submit the linked issue that introduces the new method.
What are ot...
- 12:56 PM Bug #62208 (New): mds: use MDSRank::abort to ceph_abort so necessary sync is done
- 08:16 PM Bug #58394: nofail option in fstab not supported
- Here is the commit that introduces the "support" for the option to fuse3: https://github.com/libfuse/libfuse/commit/a...
- 10:10 AM Bug #58394: nofail option in fstab not supported
- A "suggestion for Debian":https://github.com/libfuse/libfuse/issues/691, for example, is to @apt install fuse3@
- 10:05 AM Bug #58394: nofail option in fstab not supported
- https://github.com/libfuse/libfuse/blob/master/ChangeLog.rst#libfuse-322-2018-03-31
> libfuse 3.2.2 (2018-03-31)
...
- 08:34 AM Bug #58394: nofail option in fstab not supported
- Okay, so the research so far shows that
* The lack of @nofail@ support by fuse has been reported multiple times as...
- 02:22 AM Bug #58394: nofail option in fstab not supported
- Brian Woods wrote:
> Venky Shankar wrote:
> > Were you able to check why.
>
> Not sure what you are asking. Che...
- 06:53 PM Feature #62215: libcephfs: Allow monitoring for any file changes like inotify
- Venky Shankar wrote:
> Changes originating from the localhost would obviously be notified to the watcher, but *not* ...
- 12:07 PM Feature #62215: libcephfs: Allow monitoring for any file changes like inotify
- Anagh Kumar Baranwal wrote:
> Hi Venky,
>
> Venky Shankar wrote:
> > Hi Anagh,
> >
> > Anagh Kumar Baranwal w...
- 11:52 AM Feature #62215: libcephfs: Allow monitoring for any file changes like inotify
- Hi Venky,
Venky Shankar wrote:
> Hi Anagh,
>
> Anagh Kumar Baranwal wrote:
> libcephfs does not have such ...
- 09:36 AM Feature #62215: libcephfs: Allow monitoring for any file changes like inotify
- Hi Anagh,
Anagh Kumar Baranwal wrote:
> I wanted to add a Ceph backend for rclone (https://rclone.org/) but it tu...
- 04:11 PM Feature #61595 (In Progress): Consider setting "bulk" autoscale pool flag when automatically crea...
- 03:51 AM Feature #59714 (Fix Under Review): mgr/volumes: Support to reject CephFS clones if cloner threads...
08/02/2023
- 09:23 PM Bug #62228 (Resolved): "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- Resolved in https://tracker.ceph.com/issues/57206.
- 08:36 PM Backport #61410 (Resolved): reef: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
- 07:07 PM Bug #58394: nofail option in fstab not supported
- Venky Shankar wrote:
> Were you able to check why.
Not sure what you are asking. Check why the ceph mount allowe...
- 05:22 PM Bug #62208 (Need More Info): mds: use MDSRank::abort to ceph_abort so necessary sync is done
- Hi Venky!
I've started looking at this one, but the ticket doesn't provide sufficient information so I don't know ...
- 04:17 AM Bug #62208: mds: use MDSRank::abort to ceph_abort so necessary sync is done
- Leonid, please take this one.
- 04:58 PM Bug #61409: qa: _test_stale_caps does not wait for file flush before stat
- Casey Bodley wrote:
> backports should include the flake8 cleanup from https://github.com/ceph/ceph/pull/52732
Ap...
- 04:39 PM Feature #58154 (Resolved): mds: add minor segment boundaries
- Not a backport candidate as the change needs more bake time in main.
- 04:37 PM Backport #62289 (Resolved): quincy: ceph_test_libcephfs_reclaim crashes during test
- https://github.com/ceph/ceph/pull/53647
- 04:36 PM Backport #62288 (Rejected): pacific: ceph_test_libcephfs_reclaim crashes during test
- https://github.com/ceph/ceph/pull/53648
- 04:36 PM Backport #62287 (In Progress): reef: ceph_test_libcephfs_reclaim crashes during test
- https://github.com/ceph/ceph/pull/53635
- 04:36 PM Bug #57206 (Pending Backport): ceph_test_libcephfs_reclaim crashes during test
- 06:47 AM Bug #57206 (Fix Under Review): ceph_test_libcephfs_reclaim crashes during test
- 01:52 PM Bug #47292: cephfs-shell: test_df_for_valid_file failure
- Can't reproduce this bug with vstart_runner.py....
- 01:48 PM Bug #62243 (Resolved): qa/cephfs: test_snapshots.py fails because of a missing method
- 08:51 AM Bug #62243: qa/cephfs: test_snapshots.py fails because of a missing method
- Venky Shankar wrote:
> I think this _doesn't_ need backport, Rishabh?
No need because the PR that introduced this...
- 12:33 PM Bug #62278 (Fix Under Review): pybind/mgr/volumes: pending_subvolume_deletions count is always ze...
- 10:32 AM Bug #62278 (Pending Backport): pybind/mgr/volumes: pending_subvolume_deletions count is always ze...
- Even after deleting multiple subvolumes, the pending_subvolume_deletions count is always zero...
- 10:27 AM Bug #62246: qa/cephfs: test_mount_mon_and_osd_caps_present_mds_caps_absent fails
- Rishabh Dave wrote:
> This failure might be same as the one reported here - https://tracker.ceph.com/issues/62188. I...
- 09:53 AM Feature #55215 (Resolved): mds: fragment directory snapshots
- 08:55 AM Bug #62188: AttributeError: 'RemoteProcess' object has no attribute 'read'
- Venky Shankar wrote:
> Rishabh Dave wrote:
> > I spent a good amount of time with this ticket. The reason for this ...
- 08:18 AM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > _test_create_cluster() in test_nfs demanded strerr to be looked at...
- 07:57 AM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- Dhairya Parmar wrote:
> _test_create_cluster() in test_nfs demanded strerr to be looked at; therefore I had created ...
- 07:34 AM Bug #62074: cephfs-shell: ls command has help message of cp command
- https://github.com/ceph/ceph/pull/52756
- 07:20 AM Bug #59785: crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state == LOCK_XLOCK...
- Milind Changire wrote:
> Venky Shankar wrote:
> > Milind Changire wrote:
> > > Venky,
> > > What versions do I ba...
- 07:11 AM Bug #59785: crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state == LOCK_XLOCK...
- Venky Shankar wrote:
> Milind Changire wrote:
> > Venky,
> > What versions do I backport PR#48743 to?
> > That PR...
- 06:49 AM Bug #59785: crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state == LOCK_XLOCK...
- Milind Changire wrote:
> Venky,
> What versions do I backport PR#48743 to?
> That PR is only available in version ...
- 07:19 AM Bug #59833 (Fix Under Review): crash: void MDLog::trim(int): assert(segments.size() >= pre_segmen...
- 06:48 AM Bug #62262 (Won't Fix): workunits/fs/test_o_trunc.sh failed with timedout
- This is not an issue upstream; it is just caused by the new PR: https://github.com/ceph/ceph/pull/45073.
Will close it.
- 06:47 AM Bug #62262: workunits/fs/test_o_trunc.sh failed with timedout
- Just copied my analysis from https://github.com/ceph/ceph/pull/45073/commits/f064cd9a78ae475c574d1d46db18748fe9c001...
- 06:38 AM Backport #61793 (In Progress): pacific: mgr/snap_schedule: catch all exceptions to avoid crashing...
- 06:37 AM Backport #61795 (In Progress): quincy: mgr/snap_schedule: catch all exceptions to avoid crashing ...
- 06:37 AM Backport #61794 (In Progress): reef: mgr/snap_schedule: catch all exceptions to avoid crashing mo...
- 06:33 AM Backport #61989 (In Progress): pacific: snap-schedule: allow retention spec to specify max number...
- 06:27 AM Backport #61991 (In Progress): quincy: snap-schedule: allow retention spec to specify max number ...
- 06:27 AM Backport #61990 (In Progress): reef: snap-schedule: allow retention spec to specify max number of...
- 05:01 AM Feature #61595: Consider setting "bulk" autoscale pool flag when automatically creating a data po...
- Leonid, please take this one.
- 04:54 AM Bug #62077: mgr/nfs: validate path when modifying cephfs export
- Venky Shankar wrote:
> Dhairya, this should be straightforward with the path validation helper you introduced, right...
- 04:30 AM Bug #56003 (Duplicate): client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
- 03:04 AM Bug #62277 (Fix Under Review): Error: Unable to find a match: python2 with fscrypt tests
- 12:51 AM Bug #62277 (Pending Backport): Error: Unable to find a match: python2 with fscrypt tests
- http://qa-proxy.ceph.com/teuthology/yuriw-2023-07-29_14:02:18-fs-reef-release-distro-default-smithi/7356824/teutholog...
- 03:01 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > Oh, or maybe its trying to install python2
...
- 02:43 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Oh, or maybe its trying to install python2
> >
> > [...]
> >
> > T...
- 12:51 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
- Venky Shankar wrote:
> Oh, or maybe its trying to install python2
>
> [...]
>
> This should be trying python3....
- 01:22 AM Bug #62227 (Fix Under Review): Error "dbench: command not found" in smoke on reef
- The fixing PR in *ceph-cm-ansible*: https://github.com/ceph/ceph-cm-ansible/pull/746 and https://github.com/ceph/ceph...
- 01:08 AM Bug #62227: Error "dbench: command not found" in smoke on reef
- This should be the same issue as https://tracker.ceph.com/issues/62187, which is *iozone* missing instead in *centos ...
- 01:03 AM Backport #62268 (In Progress): pacific: qa: _test_stale_caps does not wait for file flush before ...
- 01:02 AM Backport #62270 (In Progress): quincy: qa: _test_stale_caps does not wait for file flush before stat
- 01:00 AM Backport #62269 (In Progress): reef: qa: _test_stale_caps does not wait for file flush before stat
08/01/2023
- 08:29 PM Bug #61409: qa: _test_stale_caps does not wait for file flush before stat
- backports should include the flake8 cleanup from https://github.com/ceph/ceph/pull/52732
- 03:09 PM Bug #61409 (Pending Backport): qa: _test_stale_caps does not wait for file flush before stat
- 03:31 PM Backport #62273 (Rejected): pacific: Error: Unable to find a match: userspace-rcu-devel libedit-d...
- 03:31 PM Backport #62272 (Rejected): quincy: Error: Unable to find a match: userspace-rcu-devel libedit-de...
- 03:31 PM Backport #62271 (Resolved): reef: Error: Unable to find a match: userspace-rcu-devel libedit-deve...
- https://github.com/ceph/ceph/pull/52843
- 03:27 PM Feature #61903: pybind/mgr/volumes: add config to turn off subvolume deletion
- Rishabh, please take this one.
- 01:52 PM Feature #61903: pybind/mgr/volumes: add config to turn off subvolume deletion
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Patrick Donnelly wrote:
> > > Sometimes we want to be able to t...
- 03:25 PM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel d...
- Oh, or maybe its trying to install python2...
- 03:24 PM Bug #59683 (Pending Backport): Error: Unable to find a match: userspace-rcu-devel libedit-devel d...
- https://pulpito.ceph.com/yuriw-2023-07-29_14:02:18-fs-reef-release-distro-default-smithi/7356824/
- 03:15 PM Backport #62270 (In Progress): quincy: qa: _test_stale_caps does not wait for file flush before stat
- https://github.com/ceph/ceph/pull/52743
- 03:15 PM Backport #62269 (In Progress): reef: qa: _test_stale_caps does not wait for file flush before stat
- https://github.com/ceph/ceph/pull/52742
- 03:15 PM Backport #62268 (Resolved): pacific: qa: _test_stale_caps does not wait for file flush before stat
- https://github.com/ceph/ceph/pull/52744
- 02:31 PM Feature #61905: pybind/mgr/volumes: add more introspection for recursive unlink threads
- Rishabh, please take this one.
- 02:23 PM Bug #62265 (Fix Under Review): cephfs-mirror: use monotonic clocks in cephfs mirror daemon
- There are places where the mirror daemon uses realtime clocks which are prone to clock shifts (system time). Switch t...
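The general point, illustrated with a minimal Python sketch (the daemon itself is C++; this is not cephfs-mirror code): intervals measured with a monotonic clock are immune to system-time steps, unlike wall-clock deltas.
  import time
  start_wall = time.time()       # realtime clock, affected by clock shifts
  start_mono = time.monotonic()  # monotonic clock, immune to clock shifts
  # ... wait for some event ...
  elapsed_wall = time.time() - start_wall       # unreliable across an NTP step
  elapsed_mono = time.monotonic() - start_mono  # true elapsed duration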
- 12:56 PM Bug #59785: crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state == LOCK_XLOCK...
- Venky,
What versions do I backport PR#48743 to?
That PR is only available in version 18.x
This is the older trac...
- 09:12 AM Bug #62262 (Won't Fix): workunits/fs/test_o_trunc.sh failed with timedout
- https://pulpito.ceph.com/vshankar-2023-07-25_11:29:34-fs-wip-vshankar-testing-20230725.043804-testing-default-smithi/...
- 07:26 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- The difference I can see in src/test/libcephfs/CMakeLists.txt is
for, say, test.cc:...
- 06:50 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Laura Flores wrote:
> Possibly more helpful, here's the last instance of it passing on ubuntu 20.04 main:
> https:/...
- 04:43 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Thanks for the details, Laura. I'll have a look.
- 06:33 AM Backport #62242 (In Progress): pacific: mds: linkmerge assert check is incorrect in rename codepath
- 06:32 AM Backport #62241 (In Progress): quincy: mds: linkmerge assert check is incorrect in rename codepath
- 06:31 AM Backport #62240 (In Progress): reef: mds: linkmerge assert check is incorrect in rename codepath
- 05:28 AM Bug #62257: mds: blocklist clients that are not advancing `oldest_client_tid`
- Venky Shankar wrote:
> Xiubo Li wrote:
> > There is a ceph-user mail thread about this https://www.spinics.net/list...
- 05:22 AM Bug #62257: mds: blocklist clients that are not advancing `oldest_client_tid`
- Xiubo Li wrote:
> There is a ceph-user mail thread about this https://www.spinics.net/lists/ceph-users/msg78109.html...
- 05:12 AM Bug #62257: mds: blocklist clients that are not advancing `oldest_client_tid`
- There is a ceph-user mail thread about this https://www.spinics.net/lists/ceph-users/msg78109.html.
As a workaroun...
- 05:05 AM Bug #62257 (New): mds: blocklist clients that are not advancing `oldest_client_tid`
- The size of the session map becomes huge, thereby exceeding the max write size for a RADOS operation, thereby resulti...
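A toy Python model of the growth described above (illustrative only, not MDS code): per-session state is retained until the client reports a higher oldest_client_tid, so a client that never advances it makes the persisted session record grow without bound.
  def retained_after(n_requests: int, advances_tid: bool) -> int:
      retained, oldest_client_tid = [], 0
      for tid in range(1, n_requests + 1):
          retained.append(tid)
          if advances_tid:
              oldest_client_tid = tid          # well-behaved client
          retained = [t for t in retained if t > oldest_client_tid]
      return len(retained)

  print(retained_after(10000, advances_tid=True))   # 0 -> state stays bounded
  print(retained_after(10000, advances_tid=False))  # 10000 and still growing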
- 05:14 AM Bug #62245 (Fix Under Review): qa/workunits/libcephfs/test.sh failed: [ FAILED ] LibCephFS.Dele...
- 04:14 AM Bug #62243 (Fix Under Review): qa/cephfs: test_snapshots.py fails because of a missing method
- I think this _doesn't_ need backport, Rishabh?
- 04:10 AM Bug #61957 (Duplicate): test_client_limits.TestClientLimits.test_client_release_bug fails
- Xiubo Li wrote:
> Venky, this seems the same issue with https://tracker.ceph.com/issues/62229 and I have one PR to f...
07/31/2023
- 10:18 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Possibly more helpful, here's the last instance of it passing on ubuntu 20.04 main:
https://pulpito.ceph.com/teuthol...
- 08:09 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Here's a job where the test passed on ubuntu. Could provide some clues to what changed:
https://pulpito.ceph.com/yur...
- 04:44 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- A similar issue was reported and solved in https://tracker.ceph.com/issues/57050. A note from Casey:...
- 04:12 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Found an instance of this in the smoke suite and got some more information from the coredump here: https://tracker.ce...
- 06:50 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- https://pulpito.ceph.com/yuriw-2023-07-28_14:23:59-fs-wip-yuri-testing-2023-07-25-0833-reef-distro-default-smithi/735...
- 08:54 PM Bug #62228: "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- Possibly more helpful, here's the last instance of it passing on ubuntu 20.04 main:
https://pulpito.ceph.com/teuthol...
- 08:45 PM Bug #62228: "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- Instance from June of this year: https://pulpito.ceph.com/yuriw-2023-06-24_13:58:32-smoke-reef-distro-default-smithi/...
- 04:01 PM Bug #62228: "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- Marked it as "related" rather than a dupe to keep visibility in the smoke suite.
- 04:00 PM Bug #62228 (New): "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- 03:59 PM Bug #62228: "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- From Venky's comment on the original tracker, it seems to pop up "once in awhile". So that could explain it. Best to ...
- 03:55 PM Bug #62228: "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- Why didn't it pop up in smoke before then?
- 03:53 PM Bug #62228 (Duplicate): "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- 03:53 PM Bug #62228: "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- This actually looks like a dupe of https://tracker.ceph.com/issues/57206.
- 03:37 PM Bug #62228: "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- Installed more debug symbols to get a clearer picture:...
- 04:03 PM Bug #62188: AttributeError: 'RemoteProcess' object has no attribute 'read'
- Rishabh Dave wrote:
> I spent a good amount of time with this ticket. The reason for this failure is unclear from lo...
- 01:45 PM Bug #62188: AttributeError: 'RemoteProcess' object has no attribute 'read'
- I spent a good amount of time with this ticket. The reason for this failure is unclear from logs. There's no tracebac...
- 02:09 PM Bug #62246: qa/cephfs: test_mount_mon_and_osd_caps_present_mds_caps_absent fails
- This failure might be the same as the one reported here - https://tracker.ceph.com/issues/62188. If the cause of both the...
- 02:05 PM Bug #62246 (Fix Under Review): qa/cephfs: test_mount_mon_and_osd_caps_present_mds_caps_absent fails
- Test method test_mount_mon_and_osd_caps_present_mds_caps_absent (in test_multifs_auth.TestClientsWithoutAuth) fails. ...
- 01:11 PM Bug #62245: qa/workunits/libcephfs/test.sh failed: [ FAILED ] LibCephFS.DelegTimeout
- The *DelegTimeout* test case itself is buggy:...
- 01:04 PM Bug #62245 (Fix Under Review): qa/workunits/libcephfs/test.sh failed: [ FAILED ] LibCephFS.Dele...
- /teuthology/vshankar-2023-07-25_11:29:34-fs-wip-vshankar-testing-20230725.043804-testing-default-smithi/7350738
...
- 12:57 PM Bug #62208: mds: use MDSRank::abort to ceph_abort so necessary sync is done
- backport note - we _might_ need to pull in additional commits for p/q.
- 12:45 PM Bug #62243 (In Progress): qa/cephfs: test_snapshots.py fails because of a missing method
- 12:35 PM Bug #62243 (Resolved): qa/cephfs: test_snapshots.py fails because of a missing method
- @test_disallow_monitor_managed_snaps_for_fs_pools@ (in @test_snapshot.TestMonSnapsAndFsPools@) fails because method "...
- 12:42 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- Rishabh Dave wrote:
> Venky Shankar wrote:
> > /a/vshankar-2023-07-26_04:54:56-fs-wip-vshankar-testing-20230725.053...
- 12:26 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- Venky Shankar wrote:
> /a/vshankar-2023-07-26_04:54:56-fs-wip-vshankar-testing-20230725.053049-testing-default-smith...
- 12:35 PM Bug #61957: test_client_limits.TestClientLimits.test_client_release_bug fails
- Venky, this seems to be the same issue as https://tracker.ceph.com/issues/62229 and I have one PR to fix it.
[EDIT] S...
- 12:28 PM Bug #62236 (Fix Under Review): qa: run nfs related tests with fs suite
- 09:26 AM Bug #62236 (Pending Backport): qa: run nfs related tests with fs suite
- Right now, it's part of the orch suite (orch:cephadm), which Patrick said is due to legacy reasons. Needs to be under fs ...
- 12:15 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- /a/yuriw-2023-07-26_15:54:22-rados-wip-yuri6-testing-2023-07-24-0819-pacific-distro-default-smithi/7353337
/a/yuriw-...
- 11:38 AM Bug #62221 (In Progress): Test failure: test_add_ancestor_and_child_directory (tasks.cephfs.test_...
- 06:23 AM Bug #62221: Test failure: test_add_ancestor_and_child_directory (tasks.cephfs.test_mirroring.Test...
- Jos, please take this one.
- 06:23 AM Bug #62221: Test failure: test_add_ancestor_and_child_directory (tasks.cephfs.test_mirroring.Test...
- https://pulpito.ceph.com/yuriw-2023-07-28_14:23:59-fs-wip-yuri-testing-2023-07-25-0833-reef-distro-default-smithi/735...
- 11:29 AM Bug #51964 (In Progress): qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- 10:53 AM Backport #62242 (Resolved): pacific: mds: linkmerge assert check is incorrect in rename codepath
- https://github.com/ceph/ceph/pull/52726
- 10:53 AM Backport #62241 (Resolved): quincy: mds: linkmerge assert check is incorrect in rename codepath
- https://github.com/ceph/ceph/pull/52725
- 10:53 AM Backport #62240 (Resolved): reef: mds: linkmerge assert check is incorrect in rename codepath
- https://github.com/ceph/ceph/pull/52724
- 10:51 AM Bug #61879 (Pending Backport): mds: linkmerge assert check is incorrect in rename codepath
- 07:07 AM Bug #50821: qa: untar_snap_rm failure during mds thrashing
- This popped up again with centos 9.stream, but I don't think it has anything to do with the distro. ref: /a/yuriw-2023-07-26...
- 06:55 AM Bug #59067: mds: add cap acquisition throttled event to MDR
- Leonid Usov wrote:
> Sorry for the confusion, Dhairya.
> Venky has assigned this to me but at that time I hadn't y...
- 04:24 AM Bug #62126: test failure: suites/blogbench.sh stops running
- Patrick Donnelly wrote:
> Seen here and probably elsewhere: /teuthology/yuriw-2023-07-10_00:47:51-fs-reef-distro-def...
07/29/2023
- 08:04 AM Bug #59067: mds: add cap acquisition throttled event to MDR
- Sorry for the confusion, Dhairya.
Venky has assigned this to me but at that time I hadn't yet been a member of the ...
- 08:01 PM Bug #59067 (Fix Under Review): mds: add cap acquisition throttled event to MDR
- 02:47 AM Bug #62218 (Fix Under Review): mgr/snap_schedule: missing fs argument on command-line gives unexp...
- 02:42 AM Bug #62229 (Fix Under Review): log_channel(cluster) log [WRN] : client.7719 does not advance its ...
- The *test_client_oldest_tid* test case was triggered first:...
- 02:31 AM Bug #62229 (Fix Under Review): log_channel(cluster) log [WRN] : client.7719 does not advance its ...
- https://pulpito.ceph.com/vshankar-2023-07-25_11:29:34-fs-wip-vshankar-testing-20230725.043804-testing-default-smithi/...
07/28/2023
- 11:27 PM Bug #62228: "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- From gdb:...
- 08:31 PM Bug #62228 (Resolved): "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- This is for 18.2.0
Run: https://pulpito.ceph.com/yuriw-2023-07-28_18:16:55-smoke-reef-release-distro-default-smith...
- 08:57 PM Bug #62227: Error "dbench: command not found" in smoke on reef
- Seen in the fs suite as well
https://pulpito.ceph.com/yuriw-2023-07-26_14:34:38-fs-reef-release-distro-default-smi...
- 08:41 PM Bug #62227: Error "dbench: command not found" in smoke on reef
- Comments from a chat discussion...
- 08:27 PM Bug #62227 (Fix Under Review): Error "dbench: command not found" in smoke on reef
- This is for 18.2.0
Run: https://pulpito.ceph.com/yuriw-2023-07-28_18:16:55-smoke-reef-release-distro-default-smith...
- 04:35 PM Backport #62147: reef: qa: adjust fs:upgrade to use centos_8 yaml
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52618
merged
- 04:04 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- /a/yuriw-2023-07-19_14:33:14-rados-wip-yuri11-testing-2023-07-18-0927-pacific-distro-default-smithi/7343428
- 11:19 AM Bug #62221 (In Progress): Test failure: test_add_ancestor_and_child_directory (tasks.cephfs.test_...
- /a/yuriw-2023-07-26_14:34:38-fs-reef-release-distro-default-smithi/7353194...
- 10:24 AM Bug #59067: mds: add cap acquisition throttled event to MDR
- ah, didn't know you took this tracker :)
- 08:59 AM Bug #62217: ceph_fs.h: add separate owner_{u,g}id fields
- Stéphane Graber pointed out [2] that there are users who want to use cephfs idmapped mounts
with MDS versions which don'...
- 06:32 AM Bug #62217 (Resolved): ceph_fs.h: add separate owner_{u,g}id fields
- This task is about adding separate fields to pass inode owner's UID/GID for operations which create new inodes:
CEPH...
- 08:05 AM Bug #62218 (Pending Backport): mgr/snap_schedule: missing fs argument on command-line gives unexp...
- The first fs in the fsmap is taken as the default fs for all snap_schedule commands.
This leads to unexpected result...
- 06:43 AM Documentation #62216 (Closed): doc: snapshot_clone_delay is not documented
- 06:02 AM Documentation #62216 (Closed): doc: snapshot_clone_delay is not documented
- mgr/volumes/snapshot_clone_delay can also be configured; it is missing in the docs
- 05:51 AM Feature #62215 (Rejected): libcephfs: Allow monitoring for any file changes like inotify
- I wanted to add a Ceph backend for rclone (https://rclone.org/) but it turns out that there is no way to monitor for ...
- 02:06 AM Backport #62191 (In Progress): quincy: mds: replay thread does not update some essential perf cou...
- 02:05 AM Backport #62190 (In Progress): pacific: mds: replay thread does not update some essential perf co...
- 02:04 AM Backport #62189 (In Progress): reef: mds: replay thread does not update some essential perf counters
07/27/2023
- 01:32 PM Bug #62208 (Fix Under Review): mds: use MDSRank::abort to ceph_abort so necessary sync is done
- The MDS calls ceph_abort("msg") in various places. If there is any pending cluster log messages to be sent to the mon...
- 12:06 PM Feature #62207 (New): Report cephfs-nfs service on ceph -s
- 11:10 AM Backport #62202 (Resolved): pacific: crash: MDSRank::send_message_client(boost::intrusive_ptr<Mes...
- https://github.com/ceph/ceph/pull/52844
- 11:10 AM Backport #62201 (Resolved): reef: crash: MDSRank::send_message_client(boost::intrusive_ptr<Messag...
- https://github.com/ceph/ceph/pull/52846
- 11:10 AM Backport #62200 (Resolved): quincy: crash: MDSRank::send_message_client(boost::intrusive_ptr<Mess...
- https://github.com/ceph/ceph/pull/52845
- 11:10 AM Backport #62199 (Duplicate): pacific: mds: couldn't successfully calculate the locker caps
- 11:10 AM Backport #62198 (Duplicate): reef: mds: couldn't successfully calculate the locker caps
- 11:09 AM Backport #62197 (Duplicate): quincy: mds: couldn't successfully calculate the locker caps
- 11:09 AM Bug #60625 (Pending Backport): crash: MDSRank::send_message_client(boost::intrusive_ptr<Message> ...
- 11:09 AM Bug #61781 (Pending Backport): mds: couldn't successfully calculate the locker caps
- 08:15 AM Backport #62194 (Resolved): quincy: ceph: corrupt snap message from mds1
- https://github.com/ceph/ceph/pull/52849
- 08:15 AM Backport #62193 (Resolved): pacific: ceph: corrupt snap message from mds1
- https://github.com/ceph/ceph/pull/52848
- 08:15 AM Backport #62192 (Resolved): reef: ceph: corrupt snap message from mds1
- https://github.com/ceph/ceph/pull/52847
- 08:15 AM Backport #62191 (Resolved): quincy: mds: replay thread does not update some essential perf counters
- https://github.com/ceph/ceph/pull/52683
- 08:15 AM Backport #62190 (Resolved): pacific: mds: replay thread does not update some essential perf counters
- https://github.com/ceph/ceph/pull/52682
- 08:15 AM Backport #62189 (Resolved): reef: mds: replay thread does not update some essential perf counters
- https://github.com/ceph/ceph/pull/52681
- 08:09 AM Bug #61217 (Pending Backport): ceph: corrupt snap message from mds1
- Xiubo, this needs backporting to p/q/r, yes?
- 08:08 AM Bug #61864 (Pending Backport): mds: replay thread does not update some essential perf counters
- 08:00 AM Bug #62187 (Fix Under Review): iozone: command not found
- 06:32 AM Bug #62187 (Fix Under Review): iozone: command not found
- /a/vshankar-2023-07-26_04:54:56-fs-wip-vshankar-testing-20230725.053049-testing-default-smithi/7352594...
- 07:58 AM Bug #62188: AttributeError: 'RemoteProcess' object has no attribute 'read'
- FWIW - this seems to be happening with multifs-auth tests in fs suite
/a/vshankar-2023-07-26_04:54:56-fs-wip-vshan...
- 07:55 AM Bug #62188 (New): AttributeError: 'RemoteProcess' object has no attribute 'read'
- /a/vshankar-2023-07-26_04:54:56-fs-wip-vshankar-testing-20230725.053049-testing-default-smithi/7352553...
- 06:19 AM Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
- Kotresh Hiremath Ravishankar wrote:
> Neeraj Pratap Singh wrote:
> > I am thinking to move ahead with this approach...
- 06:13 AM Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
- Neeraj Pratap Singh wrote:
> I am thinking to move ahead with this approach: Allow the cloning only when (pending_cl...
- 06:09 AM Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
- I am thinking to move ahead with this approach: Allow the cloning only when (pending_clones + in-progress_clones) <= ...
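A hypothetical Python sketch of the proposed check (names are illustrative, not the actual mgr/volumes code); the bound in the comment is truncated, so the configured number of cloner threads is assumed here, per the ticket title:
  def can_accept_clone(pending_clones: int, in_progress_clones: int,
                       cloner_threads: int) -> bool:
      # reject a new clone when the cloner threads are already saturated
      return (pending_clones + in_progress_clones) <= cloner_threads

  print(can_accept_clone(2, 2, cloner_threads=4))  # True  -> accept
  print(can_accept_clone(3, 2, cloner_threads=4))  # False -> reject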
- 05:43 AM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- /a/vshankar-2023-07-26_04:54:56-fs-wip-vshankar-testing-20230725.053049-testing-default-smithi/7352573...
- 04:50 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- Jos, please take this one.
07/26/2023
- 07:01 PM Bug #62126: test failure: suites/blogbench.sh stops running
- Seen here and probably elsewhere: /teuthology/yuriw-2023-07-10_00:47:51-fs-reef-distro-default-smithi/7331743/teuthol...
- 10:50 AM Backport #62178 (In Progress): reef: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror da...
- 10:38 AM Backport #62178 (Resolved): reef: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemo...
- https://github.com/ceph/ceph/pull/52656
- 10:48 AM Backport #62177 (In Progress): pacific: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror...
- 10:38 AM Backport #62177 (Resolved): pacific: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror da...
- https://github.com/ceph/ceph/pull/52654
- 10:46 AM Backport #62176 (In Progress): quincy: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror ...
- 10:38 AM Backport #62176 (In Progress): quincy: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror ...
- https://github.com/ceph/ceph/pull/52653
- 10:31 AM Bug #61182 (Pending Backport): qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon a...
- 10:05 AM Feature #61908 (Fix Under Review): mds: provide configuration for trim rate of the journal
- 09:33 AM Bug #52439 (Can't reproduce): qa: acls does not compile on centos stream
- 08:43 AM Bug #62036 (Fix Under Review): src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
- 06:32 AM Bug #62036: src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
- The mds became *up:active* before receiving the last *cache_rejoin ack*:...
- 05:39 AM Bug #62036 (In Progress): src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
- 08:08 AM Backport #59264 (Resolved): pacific: pacific scrub ~mds_dir causes stray related ceph_assert, abo...
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50814
Merged.
- 08:08 AM Backport #59261 (Resolved): pacific: mds: stray directories are not purged when all past parents ...
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50814
Merged.
- 06:26 AM Backport #61346 (Resolved): pacific: mds: fsstress.sh hangs with multimds (deadlock between unlin...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51686
Merged.
- 04:15 AM Bug #57014 (Resolved): cephfs-top: add an option to dump the computed values to stdout
- 04:13 AM Bug #58823 (Resolved): cephfs-top: navigate to home screen when no fs
- 04:12 AM Bug #59553 (Resolved): cephfs-top: fix help text for delay
- 04:12 AM Bug #58677 (Resolved): cephfs-top: test the current python version is supported
- 04:10 AM Documentation #57673 (Resolved): doc: document the relevance of mds_namespace mount option
- 04:09 AM Backport #58408 (Resolved): pacific: doc: document the relevance of mds_namespace mount option
- 04:03 AM Backport #59482 (Resolved): pacific: cephfs-top, qa: test the current python version is supported
- 04:02 AM Backport #58984 (Resolved): pacific: cephfs-top: navigate to home screen when no fs
07/25/2023
- 07:09 PM Bug #62164 (Fix Under Review): qa: "cluster [ERR] MDS abort because newly corrupt dentry to be co...
- 02:13 PM Bug #62164 (Pending Backport): qa: "cluster [ERR] MDS abort because newly corrupt dentry to be co...
- /teuthology/yuriw-2023-07-20_14:36:46-fs-wip-yuri-testing-2023-07-19-1340-pacific-distro-default-smithi/7344784/1$
...
- 04:38 PM Bug #58813 (Resolved): cephfs-top: Sort menu doesn't show 'No filesystem available' screen when a...
- 04:38 PM Bug #58814 (Resolved): cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 04:37 PM Backport #58865 (Resolved): quincy: cephfs-top: Sort menu doesn't show 'No filesystem available' ...
- 03:07 PM Backport #58865: quincy: cephfs-top: Sort menu doesn't show 'No filesystem available' screen when...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50365
merged
- 04:37 PM Backport #58985 (Resolved): quincy: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 03:08 PM Backport #58985: quincy: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50595
merged
- 03:55 PM Bug #52386 (Resolved): client: fix dump mds twice
- 03:55 PM Backport #52442 (Resolved): pacific: client: fix dump mds twice
- 03:15 PM Backport #52442: pacific: client: fix dump mds twice
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51247
merged
- 03:32 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- /a/yuriw-2023-07-21_02:03:58-rados-wip-yuri7-testing-2023-07-20-0727-distro-default-smithi/7346244
- 05:06 AM Bug #62084 (Fix Under Review): task/test_nfs: AttributeError: 'TestNFS' object has no attribute '...
- 03:20 PM Bug #59107: MDS imported_inodes metric is not updated.
- https://github.com/ceph/ceph/pull/51699 merged
- 03:19 PM Backport #59725: pacific: mds: allow entries to be removed from lost+found directory
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51687
merged
- 03:17 PM Backport #59721: pacific: qa: run scrub post disaster recovery procedure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51610
merged
- 03:17 PM Backport #61235: pacific: mds: a few simple operations crash mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51609
merged
- 03:16 PM Backport #59482: pacific: cephfs-top, qa: test the current python version is supported
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51353
merged
- 03:15 PM Backport #59017: pacific: snap-schedule: handle non-existent path gracefully during snapshot crea...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51246
merged
- 03:12 PM Backport #58984: pacific: cephfs-top: navigate to home screen when no fs
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50737
merged
- 03:09 PM Backport #59021: quincy: mds: warning `clients failing to advance oldest client/flush tid` seen w...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50785
merged
- 03:08 PM Backport #59016: quincy: snap-schedule: handle non-existent path gracefully during snapshot creation
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50780
merged
- 10:40 AM Bug #62160 (Duplicate): mds: MDS abort because newly corrupt dentry to be committed
- /a/yuriw-2023-07-20_14:36:46-fs-wip-yuri-testing-2023-07-19-1340-pacific-distro-default-smithi/7344784...
- 09:57 AM Tasks #62159 (In Progress): qa: evaluate mds_partitioner
- Evaluation types
* Various workloads using benchmark tools to mimic realistic scenarios
* unittest
* qa suite for ...
- 09:51 AM Bug #62158 (New): mds: quick suspend or abort metadata migration
- This feature has been discussed in the CDS Squid CephFS session https://pad.ceph.com/p/cds-squid-mds-partitioner-2023...
- 09:41 AM Feature #62157 (In Progress): mds: working set size tracker
- This feature has been discussed in the CDS Squid CephFS session https://pad.ceph.com/p/cds-squid-mds-partitioner-2023...
- 07:32 AM Backport #62147 (In Progress): reef: qa: adjust fs:upgrade to use centos_8 yaml
- 07:27 AM Backport #62147 (In Progress): reef: qa: adjust fs:upgrade to use centos_8 yaml
- https://github.com/ceph/ceph/pull/52618
- 07:24 AM Bug #62146 (Pending Backport): qa: adjust fs:upgrade to use centos_8 yaml
- 04:20 AM Bug #62146 (Fix Under Review): qa: adjust fs:upgrade to use centos_8 yaml
- 04:19 AM Bug #62146 (Pending Backport): qa: adjust fs:upgrade to use centos_8 yaml
- Since n/o/p release packages aren't built for centos_9, those tests are failing with package issues.
- 06:04 AM Cleanup #61482: mgr/nfs: remove deprecated `nfs export delete` and `nfs cluster delete` interfaces
- Dhairya, let's get the deprecated warning in place and plan to remove the interface a couple of releases down.
- 05:14 AM Bug #62126: test failure: suites/blogbench.sh stops running
- Rishabh, can you run blogbench with a verbose flag (if any) to see which operation exactly it gets stuck in?
- 05:12 AM Bug #61909 (Can't reproduce): mds/fsmap: fs fail cause to mon crash
- > Yes, there's really no other way, because have client use rbd storage in this cluster, I am in a hurry to recover c...
- 05:04 AM Bug #62073 (Duplicate): AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- Duplicate of https://tracker.ceph.com/issues/62084
07/24/2023
- 06:37 PM Bug #48673: High memory usage on standby replay MDS
- I've confirmed that `fs set auxtel allow_standby_replay false` does free the memory leak in the standby mds but doesn...
- 06:20 PM Bug #48673: High memory usage on standby replay MDS
- This issue triggered again this morning for the first time in 2 weeks. What's noteworthy is that the active mds seem...
- 04:19 PM Backport #61900 (Resolved): pacific: pybind/cephfs: holds GIL during rmdir
- 03:03 PM Backport #61900: pacific: pybind/cephfs: holds GIL during rmdir
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52323
merged
- 03:08 PM Bug #52439: qa: acls does not compile on centos stream
- I had a conversation with Patrick last week about this ticket. He doesn't remember what this ticket was even about. I...
- 12:39 PM Bug #62126 (New): test failure: suites/blogbench.sh stops running
- I found this failure while running integration tests for a few CephFS PRs. This failure occurred even after running th...
- 11:40 AM Bug #61182 (Fix Under Review): qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon a...
- 08:22 AM Bug #62123 (New): mds: detect out-of-order locking
- From Patrick's comments in https://github.com/ceph/ceph/pull/52522#discussion_r1269575242.
We need to make sure th...
- 04:56 AM Feature #61908: mds: provide configuration for trim rate of the journal
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > OK, this is what I have in mind:
> >
> > Introduce an MDS con...
07/21/2023
- 07:38 PM Backport #58991 (In Progress): quincy: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- 07:38 PM Backport #58992 (In Progress): pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- 07:20 PM Backport #62028 (In Progress): pacific: mds/MDSAuthCaps: "fsname", path, root_squash can't be in ...
- 07:07 PM Backport #62027 (In Progress): quincy: mds/MDSAuthCaps: "fsname", path, root_squash can't be in s...
- 06:45 PM Backport #62026 (In Progress): reef: mds/MDSAuthCaps: "fsname", path, root_squash can't be in sam...
- 06:34 PM Backport #59015 (In Progress): pacific: Command failed (workunit test fs/quota/quota.sh) on smith...
- 06:21 PM Backport #59014 (In Progress): quincy: Command failed (workunit test fs/quota/quota.sh) on smithi...
- 06:04 PM Backport #59410 (In Progress): reef: Command failed (workunit test fs/quota/quota.sh) on smithi08...
- 05:11 PM Bug #61947: mds: enforce a limit on the size of a session in the sessionmap
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > This one's interesting. I did mention in the standup yesterday t...
- 01:21 AM Bug #61947: mds: enforce a limit on the size of a session in the sessionmap
- Venky Shankar wrote:
> This one's interesting. I did mention in the standup yesterday that I've seen this earlier an...
- 04:00 PM Bug #62114 (Fix Under Review): mds: adjust cap acquisition throttle defaults
- 03:53 PM Bug #62114 (Pending Backport): mds: adjust cap acquisition throttle defaults
- They are too conservative and rarely trigger in production clusters.
- 08:47 AM Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- Milind Changire wrote:
> Venky,
> The upstream user has also sent across debug (level 20) logs for ceph-fuse as wel...
- 08:45 AM Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- Venky,
The upstream user has also sent across debug (level 20) logs for ceph-fuse as well as mds.
Unfortunately, th...
- 04:41 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Jos, as per https://tracker.ceph.com/issues/61182#note-31, please check if the volume deletions (and probably creatio...
- 01:33 AM Backport #61797 (Resolved): reef: client: only wait for write MDS OPs when unmounting
- 01:22 AM Bug #61897 (Duplicate): qa: rados:mgr fails with MDS_CLIENTS_LAGGY
07/20/2023
- 11:12 PM Backport #61797: reef: client: only wait for write MDS OPs when unmounting
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52302
merged
- 09:18 AM Backport #61347 (Resolved): reef: mds: fsstress.sh hangs with multimds (deadlock between unlink a...
- 09:17 AM Backport #59708 (Resolved): reef: Mds crash and fails with assert on prepare_new_inode
- 06:28 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- So, here is the order of tasks unwinding:
HA workunit finishes:...
- 06:15 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Greg Farnum wrote:
> Venky Shankar wrote:
> > Greg Farnum wrote:
> > > Oh, I guess the daemons are created via the...
- 05:47 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Venky Shankar wrote:
> Greg Farnum wrote:
> > Oh, I guess the daemons are created via the qa/suites/fs/mirror-ha/ce...
- 05:31 AM Bug #62072: cephfs-mirror: do not run concurrent C_RestartMirroring context
- (discussion continued on the PR)
- 05:22 AM Bug #62072: cephfs-mirror: do not run concurrent C_RestartMirroring context
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > If cephfs-mirror daemon faces any issues connecting to the cluster...
- 05:17 AM Bug #62072: cephfs-mirror: do not run concurrent C_RestartMirroring context
- Dhairya Parmar wrote:
> If cephfs-mirror daemon faces any issues connecting to the cluster or error accessing local ...
- 02:14 AM Backport #61735 (Resolved): reef: mgr/stats: exception ValueError :invalid literal for int() with...
- 02:14 AM Backport #61694 (Resolved): reef: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't i...
- 12:33 AM Bug #62096 (Duplicate): mds: infinite rename recursion on itself
- https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/7337403
I don...
07/19/2023
- 06:29 PM Feature #61908: mds: provide configuration for trim rate of the journal
- Venky Shankar wrote:
> OK, this is what I have in mind:
>
> Introduce an MDS config key that controls the rate of...
- 06:34 AM Feature #61908: mds: provide configuration for trim rate of the journal
- OK, this is what I have in mind:
Introduce an MDS config key that controls the rate of trimming - number of log se...
- 04:13 PM Feature #62086 (Fix Under Review): mds: print locks when dumping ops
- 04:09 PM Feature #62086 (Pending Backport): mds: print locks when dumping ops
- To help identify where an operation is stuck obtaining locks.
- 03:53 PM Backport #61959: reef: mon: block osd pool mksnap for fs pools
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52399
merged
- 03:52 PM Backport #61424: reef: mon/MDSMonitor: daemon booting may get failed if mon handles up:boot beaco...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52242
merged
- 03:52 PM Backport #61413: reef: mon/MDSMonitor: do not trigger propose on error from prepare_update
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52238
merged
- 03:51 PM Backport #61410: reef: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52232
merged
- 03:50 PM Backport #61759: reef: tools/cephfs/first-damage: unicode decode errors break iteration
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52231
merged
- 03:48 PM Backport #61693: reef: mon failed to return metadata for mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52229
merged
- 03:40 PM Backport #61735: reef: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52126
merged
- 03:40 PM Backport #61694: reef: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install the ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52073
merged
- 03:39 PM Backport #61347: reef: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegr...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51684
merged
- 03:39 PM Backport #59724: reef: mds: allow entries to be removed from lost+found directory
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51607
merged
- 03:38 PM Feature #61863: mds: issue a health warning with estimated time to complete replay
- Venky Shankar wrote:
> Patrick, the "MDS behind trimming" warning during up:replay is kind of expected in cases wher...
- 03:37 PM Backport #59708: reef: Mds crash and fails with assert on prepare_new_inode
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51506
merged
- 03:36 PM Backport #59719: reef: client: read wild pointer when reconnect to mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51484
merged
- 03:20 PM Bug #62084 (Resolved): task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph...
- ...
- 02:52 PM Feature #62083 (In Progress): CephFS multi-client guaranteed-consistent snapshots
- This tracker is to discuss and implement guaranteed-consistent snapshots of subdirectories when using CephFS across m...
- 01:58 PM Bug #62077: mgr/nfs: validate path when modifying cephfs export
- Dhairya, this should be straightforward with the path validation helper you introduced, right?
- 11:04 AM Bug #62077 (In Progress): mgr/nfs: validate path when modifying cephfs export
- ...
- 01:27 PM Bug #62052: mds: deadlock when getattr changes inode lockset
- Added a few more notes about reproduction.
- 11:35 AM Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- Milind Changire wrote:
> "Similar crash report in ceph-users mailing list":https://lists.ceph.io/hyperkitty/list/cep... - 06:59 AM Backport #62068 (In Progress): pacific: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_check...
- Commits appended in https://github.com/ceph/ceph/pull/50814
- 06:58 AM Backport #62069 (In Progress): reef: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.T...
- commits appended in https://github.com/ceph/ceph/pull/50813
- 06:58 AM Backport #62070 (In Progress): quincy: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks...
- Commits appended in https://github.com/ceph/ceph/pull/50815
- 06:57 AM Bug #61897 (Resolved): qa: rados:mgr fails with MDS_CLIENTS_LAGGY
- 06:57 AM Bug #61897: qa: rados:mgr fails with MDS_CLIENTS_LAGGY
- Fixed in https://tracker.ceph.com/issues/61907
- 06:55 AM Bug #61781: mds: couldn't successfully calculate the locker caps
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > Xiubo - similar failure here: /a/vshankar-20...
- 06:44 AM Bug #62074 (Resolved): cephfs-shell: ls command has help message of cp command
- CephFS:~/>>> help ls
usage: ls [-h] [-l] [-r] [-H] [-a] [-S] [paths [paths ...]]...
- 06:32 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- From GChat:...
- 05:17 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Venky Shankar wrote:
> Xiubo, this needs backported to reef, yes?
It's already in reef.
- 04:52 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- A bit unrelated, but mentioning here for completeness:
/a/yuriw-2023-07-14_23:37:57-fs-wip-yuri8-testing-2023-07-1...
- 04:23 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Xiubo, this needs backported to reef, yes?
- 03:04 AM Bug #56698 (Fix Under Review): client: FAILED ceph_assert(_size == 0)
- 02:42 AM Bug #56698: client: FAILED ceph_assert(_size == 0)
- Venky Shankar wrote:
> Xiubo, do we have the core for this crash. If you have the debug env, then figuring out which...
- 03:03 AM Bug #61913 (Closed): client: crash the client more gracefully
- Will fix this in https://tracker.ceph.com/issues/56698.
- 12:54 AM Bug #62073: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- /a/yuriw-2023-07-15_23:37:56-rados-wip-yuri2-testing-2023-07-15-0802-distro-default-smithi/7340872
07/18/2023
- 08:49 PM Bug #62073 (Duplicate): AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- /a/yuriw-2023-07-17_14:37:31-rados-wip-yuri-testing-2023-07-14-1641-distro-default-smithi/7341551...
- 03:40 PM Bug #62072 (Resolved): cephfs-mirror: do not run concurrent C_RestartMirroring context
- If the cephfs-mirror daemon faces any issues connecting to the cluster, or errors accessing the local pool or mounting the fs, then ...
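A minimal Python sketch of the "no concurrent restart" guard implied by the fix title; the class and method names are illustrative, not the actual cephfs-mirror C_RestartMirroring implementation:
  import threading

  class Restarter:
      def __init__(self):
          self._lock = threading.Lock()
          self._in_progress = False

      def request_restart(self) -> bool:
          with self._lock:
              if self._in_progress:   # a restart is already running:
                  return False        # don't dispatch another one
              self._in_progress = True
          try:
              self._do_restart()      # reconnect / remount work goes here
              return True
          finally:
              with self._lock:
                  self._in_progress = False

      def _do_restart(self):
          pass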
- 03:15 PM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Jos Collin wrote:
> Venky Shankar wrote:
> > Out of the 3 replayer threads, only two exited when the mirror daemon ...
- 03:01 PM Bug #56698: client: FAILED ceph_assert(_size == 0)
- Xiubo, do we have the core for this crash. If you have the debug env, then figuring out which xlist member in MetaSes...
- 02:48 PM Backport #62070 (In Progress): quincy: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks...
- 02:48 PM Backport #62069 (Resolved): reef: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.Test...
- 02:48 PM Backport #62068 (Resolved): pacific: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.T...
- 02:47 PM Bug #59350 (Pending Backport): qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScr...
- 02:43 PM Bug #62067 (New): ffsb.sh failure "Resource temporarily unavailable"
- /a/vshankar-2023-07-12_07:14:06-fs-wip-vshankar-testing-20230712.041849-testing-default-smithi/7334808
Des...
- 02:04 PM Bug #62052 (Fix Under Review): mds: deadlock when getattr changes inode lockset
- 12:36 PM Bug #62052: mds: deadlock when getattr changes inode lockset
- Patrick Donnelly wrote:
> Xiubo Li wrote:
> > Patrick, maybe we should add detailed events when acquiring each loc...
- 12:33 PM Bug #62052: mds: deadlock when getattr changes inode lockset
- Xiubo Li wrote:
> Patrick, maybe we should add detailed events when acquiring each lock? Then it will be easier t...
- 03:36 AM Bug #62052: mds: deadlock when getattr changes inode lockset
- Patrick, maybe we should add detailed events when acquiring each lock? Then it will be easier to find the root cau...
- 03:34 AM Bug #62052: mds: deadlock when getattr changes inode lockset
- So the deadlock is between *getattr* and *create* requests.
- 01:56 AM Bug #62052: mds: deadlock when getattr changes inode lockset
- I have a fix I'm polishing to push for a PR. It'll be up soon.
- 01:55 AM Bug #62052 (Pending Backport): mds: deadlock when getattr changes inode lockset
- Under heavy request contention for locks, it's possible for getattr to change the requested locks for the target ...
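This is not the MDS locking code, just a generic Python sketch of the two-lock ordering problem described above: one path takes lock A then B, the other takes B then A, and both end up waiting on each other.
<pre>
import threading, time

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker(name, first, second):
    with first:
        time.sleep(0.2)                      # widen the race window
        if second.acquire(timeout=1):
            second.release()
            print(f"{name}: got both locks")
        else:
            print(f"{name}: stuck waiting for its second lock (deadlock)")

# "getattr" takes A then B; "create" takes B then A: the classic deadlock
t1 = threading.Thread(target=worker, args=("getattr", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("create", lock_b, lock_a))
t1.start(); t2.start(); t1.join(); t2.join()
</pre>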
- 12:45 PM Bug #62058 (Fix Under Review): mds: inode snaplock only acquired for open in create codepath
- 12:43 PM Bug #62058 (Pending Backport): mds: inode snaplock only acquired for open in create codepath
- https://github.com/ceph/ceph/blob/236f8b632fbddcfe9dcdb484561c0fede717fd2f/src/mds/Server.cc#L4612-L4615
It doesn'...
- 12:38 PM Bug #62057 (Fix Under Review): mds: add TrackedOp event for batching getattr/lookup
- 12:36 PM Bug #62057 (Resolved): mds: add TrackedOp event for batching getattr/lookup
- 12:27 PM Bug #61781: mds: couldn't successfully calculate the locker caps
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo - similar failure here: /a/vshankar-2023-07-12_07:14:06-fs-wip-vsh...
- 12:09 PM Bug #61781: mds: couldn't successfully calculate the locker caps
- Venky Shankar wrote:
> Xiubo - similar failure here: /a/vshankar-2023-07-12_07:14:06-fs-wip-vshankar-testing-2023071...
- 11:42 AM Bug #61781: mds: couldn't successfully calculate the locker caps
- Xiubo - similar failure here: /a/vshankar-2023-07-12_07:14:06-fs-wip-vshankar-testing-20230712.041849-testing-default...
- 11:47 AM Backport #61986 (In Progress): pacific: mds: session ls command appears twice in command listing
- 11:45 AM Backport #61988 (In Progress): quincy: mds: session ls command appears twice in command listing
- 11:43 AM Backport #61987 (In Progress): reef: mds: session ls command appears twice in command listing
- 11:05 AM Backport #62056 (In Progress): quincy: qa: test_rebuild_moved_file (tasks/data-scan) fails becaus...
- 10:42 AM Backport #62056 (Resolved): quincy: qa: test_rebuild_moved_file (tasks/data-scan) fails because m...
- https://github.com/ceph/ceph/pull/52514
- 11:03 AM Backport #62055 (In Progress): pacific: qa: test_rebuild_moved_file (tasks/data-scan) fails becau...
- 10:42 AM Backport #62055 (Resolved): pacific: qa: test_rebuild_moved_file (tasks/data-scan) fails because ...
- https://github.com/ceph/ceph/pull/52513
- 11:00 AM Backport #62054 (In Progress): reef: qa: test_rebuild_moved_file (tasks/data-scan) fails because ...
- 10:41 AM Backport #62054 (Resolved): reef: qa: test_rebuild_moved_file (tasks/data-scan) fails because mds...
- https://github.com/ceph/ceph/pull/52512
- 10:41 AM Bug #61201 (Pending Backport): qa: test_rebuild_moved_file (tasks/data-scan) fails because mds cr...
- 05:18 AM Bug #61924: tar: file changed as we read it (unless cephfs mounted with norbytes)
- Venky Shankar wrote:
> Hi Harry,
>
> Harry Coin wrote:
> > Ceph: Pacific. When using tar heavily (such as compi...
- 03:17 AM Backport #61985 (In Progress): quincy: mds: cap revoke and cap update's seqs mismatched
- 03:14 AM Backport #61984 (In Progress): reef: mds: cap revoke and cap update's seqs mismatched
- 03:12 AM Backport #61983 (In Progress): pacific: mds: cap revoke and cap update's seqs mismatched
- 03:05 AM Backport #62012 (In Progress): pacific: client: dir->dentries inconsistent, both newname and oldn...
- 02:59 AM Backport #62010 (In Progress): quincy: client: dir->dentries inconsistent, both newname and oldna...
- 02:59 AM Backport #62011 (In Progress): reef: client: dir->dentries inconsistent, both newname and oldname...
- 02:45 AM Backport #62042 (In Progress): quincy: client: do not send metrics until the MDS rank is ready
- 02:42 AM Backport #62041 (In Progress): reef: client: do not send metrics until the MDS rank is ready
- 02:41 AM Backport #62040 (In Progress): pacific: client: do not send metrics until the MDS rank is ready
- 02:30 AM Backport #62043 (In Progress): pacific: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- 02:23 AM Backport #62045 (In Progress): quincy: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- 02:20 AM Backport #62044 (In Progress): reef: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
07/17/2023
- 01:11 PM Bug #60669: crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in->firs...
- Unassigning since it's a duplicate and we are waiting for this crash to be reproduced in a teuthology run.
- 11:34 AM Feature #61778: mgr/mds_partitioner: add MDS partitioner module in MGR
- Just FYI - https://github.com/ceph/ceph/pull/52196 disables the balancer by default since it has been a source of per...
- 08:32 AM Backport #62045 (Resolved): quincy: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- https://github.com/ceph/ceph/pull/52498
- 08:32 AM Backport #62044 (Resolved): reef: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- https://github.com/ceph/ceph/pull/52497
- 08:32 AM Backport #62043 (Resolved): pacific: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- https://github.com/ceph/ceph/pull/52499
- 08:32 AM Bug #54460 (Resolved): snaptest-multiple-capsnaps.sh test failure
- https://tracker.ceph.com/issues/59343 is the other ticket attached to the backport.
- 08:32 AM Backport #62042 (Resolved): quincy: client: do not send metrics until the MDS rank is ready
- https://github.com/ceph/ceph/pull/52502
- 08:32 AM Backport #62041 (Resolved): reef: client: do not send metrics until the MDS rank is ready
- https://github.com/ceph/ceph/pull/52501
- 08:31 AM Backport #62040 (Resolved): pacific: client: do not send metrics until the MDS rank is ready
- https://github.com/ceph/ceph/pull/52500
- 08:30 AM Bug #59343 (Pending Backport): qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- 08:29 AM Bug #61523 (Pending Backport): client: do not send metrics until the MDS rank is ready
- 08:26 AM Bug #62036: src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
- BTW, I did not debug into this as it was unrelated to the PRs in the test branch.
This needs triage and RCA.
- 06:47 AM Bug #62036 (Fix Under Review): src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
- /a/vshankar-2023-07-04_11:59:45-fs-wip-vshankar-testing-20230704.040136-testing-default-smithi/7326619...
07/16/2023
07/15/2023
- 02:46 AM Backport #62028 (In Progress): pacific: mds/MDSAuthCaps: "fsname", path, root_squash can't be in ...
- https://github.com/ceph/ceph/pull/52583
- 02:46 AM Backport #62027 (In Progress): quincy: mds/MDSAuthCaps: "fsname", path, root_squash can't be in s...
- https://github.com/ceph/ceph/pull/52582
- 02:46 AM Backport #62026 (In Progress): reef: mds/MDSAuthCaps: "fsname", path, root_squash can't be in sam...
- https://github.com/ceph/ceph/pull/52581
- 02:37 AM Feature #59388 (Pending Backport): mds/MDSAuthCaps: "fsname", path, root_squash can't be in same ...
- 02:37 AM Feature #59388 (Resolved): mds/MDSAuthCaps: "fsname", path, root_squash can't be in same cap with...
07/14/2023
- 09:06 PM Bug #62021 (Fix Under Review): mds: unnecessary second lock on snaplock
- 06:44 PM Bug #62021 (Fix Under Review): mds: unnecessary second lock on snaplock
- https://github.com/ceph/ceph/blob/3ca0f45de9fa00088fc670b19a3ebd8d5e778b3b/src/mds/Server.cc#L4612-L4615...
- 02:26 PM Backport #61234 (Resolved): reef: mds: a few simple operations crash mds
- 02:26 PM Backport #61233 (Resolved): quincy: mds: a few simple operations crash mds
- 10:53 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Venky Shankar wrote:
> Out of the 3 replayer threads, only two exited when the mirror daemon was shutting down:
>
...
- 12:56 AM Backport #62012 (Resolved): pacific: client: dir->dentries inconsistent, both newname and oldname...
- https://github.com/ceph/ceph/pull/52505
- 12:56 AM Backport #62011 (Resolved): reef: client: dir->dentries inconsistent, both newname and oldname po...
- https://github.com/ceph/ceph/pull/52504
- 12:56 AM Backport #62010 (Resolved): quincy: client: dir->dentries inconsistent, both newname and oldname ...
- https://github.com/ceph/ceph/pull/52503
- 12:35 AM Bug #49912 (Pending Backport): client: dir->dentries inconsistent, both newname and oldname point...
- 12:35 AM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- Rishabh Dave wrote:
> The PR has been merged. Should this PR be backported?
Yeah, it should be.
07/13/2023
- 06:48 PM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- The PR has been merged. Should this PR be backported?
- 12:45 PM Backport #62005 (In Progress): quincy: client: readdir_r_cb: get rstat for dir only if using rbyt...
- https://github.com/ceph/ceph/pull/53360
- 12:45 PM Backport #62004 (In Progress): reef: client: readdir_r_cb: get rstat for dir only if using rbytes...
- https://github.com/ceph/ceph/pull/53359
- 12:45 PM Backport #62003 (Rejected): pacific: client: readdir_r_cb: get rstat for dir only if using rbytes...
- https://github.com/ceph/ceph/pull/54179
- 12:34 PM Bug #61999 (Pending Backport): client: readdir_r_cb: get rstat for dir only if using rbytes for size
- 08:42 AM Bug #61999 (Rejected): client: readdir_r_cb: get rstat for dir only if using rbytes for size
- When client_dirsize_rbytes is off, there should be no need to get rstat on readdir operations. This fixes perfor...
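client_dirsize_rbytes is the client option named in the ticket; the rest of this Python sketch (Entry, fetch_rstat) is invented purely to illustrate skipping the recursive-stat work when the option is off, and is not the libcephfs readdir code:
<pre>
from dataclasses import dataclass

@dataclass
class Entry:
    name: str
    size: int = 0

def fetch_rstat(name):
    # stands in for the expensive recursive-stat lookup the ticket refers to
    return 0

def readdir(entries, client_dirsize_rbytes: bool):
    for e in entries:
        if client_dirsize_rbytes:
            # only pay for rstat when directory sizes are reported as rbytes
            e.size = fetch_rstat(e.name)
    return entries

print(readdir([Entry("a"), Entry("b")], client_dirsize_rbytes=False))
</pre>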
- 11:11 AM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- _test_create_cluster() in test_nfs required stderr to be inspected; therefore I created a new helper _nfs_complet...
- 10:36 AM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- The log is full of lines complaining that it could not find the nfs cluster daemon...
- 07:33 AM Backport #61994 (Rejected): pacific: mds/MDSRank: op_tracker of mds have slow op alway.
- 07:33 AM Backport #61993 (In Progress): reef: mds/MDSRank: op_tracker of mds have slow op alway.
- https://github.com/ceph/ceph/pull/53357
- 07:33 AM Backport #61992 (In Progress): quincy: mds/MDSRank: op_tracker of mds have slow op alway.
- https://github.com/ceph/ceph/pull/53358
- 07:31 AM Bug #61749 (Pending Backport): mds/MDSRank: op_tracker of mds have slow op alway.
- 05:52 AM Backport #61991 (Resolved): quincy: snap-schedule: allow retention spec to specify max number of ...
- https://github.com/ceph/ceph/pull/52749
- 05:52 AM Backport #61990 (Resolved): reef: snap-schedule: allow retention spec to specify max number of sn...
- https://github.com/ceph/ceph/pull/52748
- 05:51 AM Backport #61989 (Resolved): pacific: snap-schedule: allow retention spec to specify max number of...
- https://github.com/ceph/ceph/pull/52750
- 05:51 AM Backport #61988 (Resolved): quincy: mds: session ls command appears twice in command listing
- https://github.com/ceph/ceph/pull/52516
- 05:51 AM Backport #61987 (Resolved): reef: mds: session ls command appears twice in command listing
- https://github.com/ceph/ceph/pull/52515
- 05:51 AM Backport #61986 (Rejected): pacific: mds: session ls command appears twice in command listing
- https://github.com/ceph/ceph/pull/52517
- 05:51 AM Backport #61985 (Resolved): quincy: mds: cap revoke and cap update's seqs mismatched
- https://github.com/ceph/ceph/pull/52508
- 05:51 AM Backport #61984 (Resolved): reef: mds: cap revoke and cap update's seqs mismatched
- https://github.com/ceph/ceph/pull/52507
- 05:51 AM Backport #61983 (Resolved): pacific: mds: cap revoke and cap update's seqs mismatched
- https://github.com/ceph/ceph/pull/52506
- 05:49 AM Bug #61444 (Pending Backport): mds: session ls command appears twice in command listing
- 05:48 AM Bug #59582 (Pending Backport): snap-schedule: allow retention spec to specify max number of snaps...
- 05:43 AM Bug #61782 (Pending Backport): mds: cap revoke and cap update's seqs mismatched
- 05:01 AM Bug #61982 (New): Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_v...
- /a/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/7326482...
- 03:02 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Greg Farnum wrote:
> Oh, I guess the daemons are created via the qa/suites/fs/mirror-ha/cephfs-mirror/three-per-clus...
- 02:40 AM Bug #61978 (In Progress): cephfs-mirror: support fan out setups
- Currently, adding multiple file system peers in a fan out fashion which looks something like: fs-local(site-a) -> fs-...
07/12/2023
- 08:41 PM Bug #61399 (In Progress): qa: build failure for ior
- 03:59 PM Bug #61950: mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_scrub_large_omap_o...
- Venky Shankar wrote:
> Patrick - I can take this one if you haven't started on it yet.
https://github.com/ceph/ce...
- 02:51 PM Bug #61950: mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_scrub_large_omap_o...
- Venky Shankar wrote:
> Patrick - I can take this one if you haven't started on it yet.
I have started on it. Shou...
- 02:38 PM Bug #61950: mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_scrub_large_omap_o...
- Patrick - I can take this one if you haven't started on it yet.
- 02:25 PM Bug #61950 (In Progress): mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_scru...
- 12:59 PM Bug #61950: mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_scrub_large_omap_o...
- Ok, so an off-by-one error - should be relatively easy to figure...
- 02:48 PM Bug #61947: mds: enforce a limit on the size of a session in the sessionmap
- This one's interesting. I did mention in the standup yesterday that I've seen this earlier and that cluster too had N...
- 02:35 PM Backport #61187 (Resolved): reef: qa: ignore cluster warning encountered in test_refuse_client_se...
- 02:35 PM Backport #61165: reef: [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ino 0x10...
- Xiubo, please backport the changes.
- 02:33 PM Backport #59723 (Resolved): reef: qa: run scrub post disaster recovery procedure
- 01:35 PM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Oh, I guess the daemons are created via the qa/suites/fs/mirror-ha/cephfs-mirror/three-per-cluster.yaml fragment. Loo...
- 01:23 PM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- I talked about this with Jos today and see that when the cephfs_mirror_thrash.py joins the background thread, the do_...
- 07:37 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- The thread (7f677beaa700) was blocked on a file system call to build snap mapping (local vs remote)...
- 05:45 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Out of the 3 replayer threads, only two exited when the mirror daemon was shutting down:...
- 05:37 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Jos Collin wrote:
> @Venky,
>
> As discussed, attaching job [1] and the mirror daemon log, which I've been referri...
- 12:40 PM Bug #61967 (Duplicate): mds: "SimpleLock.h: 417: FAILED ceph_assert(state == LOCK_XLOCK || state ...
- 01:21 AM Bug #61967 (Duplicate): mds: "SimpleLock.h: 417: FAILED ceph_assert(state == LOCK_XLOCK || state ...
- ...
- 12:24 PM Feature #61863: mds: issue a health warning with estimated time to complete replay
- Patrick, the "MDS behind trimming" warning during up:replay is kind of expected in cases where there are a lot of jou...
- 12:23 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- Laura Flores wrote:
> See https://trello.com/c/qQnRTrLO/1792-wip-yuri8-testing-2023-06-22-1309-pacific-old-wip-yuri8...
- 12:14 PM Documentation #61902: Recommend pinning _deleting directory to another rank for certain use-cases
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Also, I think there is a catch to this feature. Commit aae7a70ed...
- 11:37 AM Bug #61972 (Duplicate): cephfs/tools: cephfs-data-scan "cleanup" operation is not parallelised
- Duplicate of https://tracker.ceph.com/issues/61357
- 11:07 AM Bug #61972 (Duplicate): cephfs/tools: cephfs-data-scan "cleanup" operation is not parallelised
- https://docs.ceph.com/en/latest/cephfs/disaster-recovery-experts/#recovery-from-missing-metadata-objects
scan_exte...
- 04:53 AM Bug #60629: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Which means, the [start, len] in `inos_to_free` and/or `inos_to_purge` are not present in prealloc_inos for the clien...
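The real container is the C++ interval_set; this toy Python model only illustrates the invariant behind the assertion, namely that a range can be erased only if it is entirely present in the set:
<pre>
class ToyIntervalSet:
    def __init__(self):
        self.ranges = []                       # list of (start, length)

    def insert(self, start, length):
        self.ranges.append((start, length))

    def contains(self, start, length):
        return any(s <= start and start + length <= s + l
                   for s, l in self.ranges)

    def erase(self, start, length):
        # mirrors the assertion in the crash: the erased range must exist
        assert self.contains(start, length), "range not present"

prealloc_inos = ToyIntervalSet()
prealloc_inos.insert(0x1000, 16)
prealloc_inos.erase(0x1000, 8)                 # fine: fully contained
try:
    prealloc_inos.erase(0x2000, 4)             # not preallocated, so it asserts
except AssertionError as e:
    print("assert:", e)
</pre>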
- 04:50 AM Bug #60629: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Dhairya, the interval set operation that's asserting is possibly here:...
- 04:52 AM Bug #61907 (Resolved): api tests fail from "MDS_CLIENTS_LAGGY" warning
- 04:38 AM Bug #61186 (Fix Under Review): mgr/nfs: hitting incomplete command returns same suggestion twice
07/11/2023
- 07:44 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- See https://trello.com/c/qQnRTrLO/1792-wip-yuri8-testing-2023-06-22-1309-pacific-old-wip-yuri8-testing-2023-06-22-100...
- 07:16 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- Laura Flores wrote:
> Dhairya Parmar wrote:
> > @laura this isn't seen in quincy or reef, is it?
>
> Right. But ...
- 02:25 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- Dhairya Parmar wrote:
> @laura this isn't seen in quincy or reef, is it?
Right. But since it occurs in pacific, i...
- 11:12 AM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- @laura this isn't seen in quincy or reef, is it?
- 04:51 PM Backport #61899 (Resolved): reef: pybind/cephfs: holds GIL during rmdir
- 02:34 PM Backport #61899: reef: pybind/cephfs: holds GIL during rmdir
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52321
merged
- 04:31 PM Bug #61907 (Fix Under Review): api tests fail from "MDS_CLIENTS_LAGGY" warning
- 04:07 PM Backport #61959 (In Progress): reef: mon: block osd pool mksnap for fs pools
- 04:01 PM Backport #61959 (Resolved): reef: mon: block osd pool mksnap for fs pools
- https://github.com/ceph/ceph/pull/52399
- 04:06 PM Backport #61960 (In Progress): quincy: mon: block osd pool mksnap for fs pools
- 04:01 PM Backport #61960 (Resolved): quincy: mon: block osd pool mksnap for fs pools
- https://github.com/ceph/ceph/pull/52398
- 04:05 PM Backport #61961 (In Progress): pacific: mon: block osd pool mksnap for fs pools
- 04:01 PM Backport #61961 (Resolved): pacific: mon: block osd pool mksnap for fs pools
- https://github.com/ceph/ceph/pull/52397
- 04:00 PM Bug #59552 (Pending Backport): mon: block osd pool mksnap for fs pools
- 03:57 PM Bug #59552 (Fix Under Review): mon: block osd pool mksnap for fs pools
- 03:32 PM Bug #61958 (New): mds: add debug logs for handling setxattr for ceph.dir.subvolume
- * add debug logs for EINVAL return case
* add subvolume status during inode dump
- 02:27 PM Feature #61903: pybind/mgr/volumes: add config to turn off subvolume deletion
- Venky Shankar wrote:
> Patrick Donnelly wrote:
> > Sometimes we want to be able to turn off asynchronous subvolume ...
- 10:27 AM Feature #61903: pybind/mgr/volumes: add config to turn off subvolume deletion
- Patrick Donnelly wrote:
> Sometimes we want to be able to turn off asynchronous subvolume deletion during cluster re...
- 02:23 PM Documentation #61902: Recommend pinning _deleting directory to another rank for certain use-cases
- Venky Shankar wrote:
> Also, I think there is a catch to this feature. Commit aae7a70ed2cf9c32684cfdaf701778a05f229e...
- 02:21 PM Documentation #61902: Recommend pinning _deleting directory to another rank for certain use-cases
- Venky Shankar wrote:
> I didn't know that the balancer would re-export to rank-0 (from rank-N) if a directory become...
- 10:59 AM Documentation #61902: Recommend pinning _deleting directory to another rank for certain use-cases
- Also, I think there is a catch to this feature. Commit aae7a70ed2cf9c32684cfdaf701778a05f229e09 introduces per subvol...
- 10:30 AM Documentation #61902: Recommend pinning _deleting directory to another rank for certain use-cases
- Patrick Donnelly wrote:
> The _deleting directory can often get sudden large volumes to recursively unlink. Rank 0 i...
- 01:38 PM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Rishabh Dave wrote:
> rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smith...
- 12:42 PM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/7328210/
- 11:07 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Jos Collin wrote:
> @Venky,
>
> As discussed, attaching job [1] and the mirror daemon log, which I've been referri...
- 10:14 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- @Venky,
As discussed, attaching job [1] and the mirror daemon log, which I've been referring to.
[1] http://pulp...
- 01:30 PM Bug #61957 (Duplicate): test_client_limits.TestClientLimits.test_client_release_bug fails
- ...
- 07:16 AM Feature #61778: mgr/mds_partitioner: add MDS partitioner module in MGR
- Venky Shankar wrote:
> Another suggestion/feedback - Should the module also persist (say) the last 10 partitioning s...
- 07:12 AM Feature #61778: mgr/mds_partitioner: add MDS partitioner module in MGR
- Hi Venky,
Venky Shankar wrote:
> Hi Yongseok,
>
> Yongseok Oh wrote:
> > This idea is based on our presentati...
- 04:15 AM Feature #61778: mgr/mds_partitioner: add MDS partitioner module in MGR
- Another suggestion/feedback - Should the module also persist (say) the last 10 partitioning strategies? I presume whe...
07/10/2023
- 09:22 PM Bug #61950 (Need More Info): mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_s...
- The changes implemented in [1] should make sure that we never have openfiletable objects with omap keys above osd_deep_...
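A back-of-the-envelope Python sketch of the intent only; the values and the clamping step are illustrative, not the OpenFileTable implementation:
<pre>
# Only the relationship matters: keep the per-object omap item cap at or
# below the deep-scrub large-omap threshold so scrub never flags
# openfiletable objects. Values below are illustrative.
osd_deep_scrub_large_omap_object_key_threshold = 200_000   # OSD setting
requested_items_per_obj = 1_048_576                        # hypothetical cap

max_items_per_obj = min(requested_items_per_obj,
                        osd_deep_scrub_large_omap_object_key_threshold)
assert max_items_per_obj <= osd_deep_scrub_large_omap_object_key_threshold
print("MAX_ITEMS_PER_OBJ =", max_items_per_obj)
</pre>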
- 07:13 PM Bug #61947 (Pending Backport): mds: enforce a limit on the size of a session in the sessionmap
- If the session's "completed_requests" vector gets too large, the session can get to a size where the MDS goes read-on...
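A rough Python sketch of the kind of limit being asked for; the threshold name, its value, and the bounded deque are assumptions, not the SessionMap code:
<pre>
from collections import deque

COMPLETED_REQUESTS_LIMIT = 100_000            # illustrative threshold

class Session:
    def __init__(self):
        # bounded container: the oldest entries are dropped once the limit
        # is hit, so a single session cannot grow without bound
        self.completed_requests = deque(maxlen=COMPLETED_REQUESTS_LIMIT)
        self.warned = False

    def add_completed_request(self, tid):
        self.completed_requests.append(tid)
        if len(self.completed_requests) == COMPLETED_REQUESTS_LIMIT and not self.warned:
            self.warned = True
            print("warning: session completed_requests at limit; "
                  "client may not be trimming them")

s = Session()
for tid in range(COMPLETED_REQUESTS_LIMIT + 5):
    s.add_completed_request(tid)
print(len(s.completed_requests))              # stays at the limit
</pre>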
- 02:39 PM Feature #61778: mgr/mds_partitioner: add MDS partitioner module in MGR
- Hi Yongseok,
Yongseok Oh wrote:
> This idea is based on our presentation in Cephalocon2023. (Please refer to the ...
- 02:08 PM Feature #61778: mgr/mds_partitioner: add MDS partitioner module in MGR
- Venky Shankar wrote:
> Thanks for the feature proposal. CephFS team will go through the proposal asap.
I'm going ...
- 02:18 PM Bug #61909: mds/fsmap: fs fail cause to mon crash
- Patrick Donnelly wrote:
> yite gu wrote:
> > yite gu wrote:
> > > Patrick Donnelly wrote:
> > > > yite gu wrote:
...
- 12:56 PM Bug #61909: mds/fsmap: fs fail cause to mon crash
- yite gu wrote:
> yite gu wrote:
> > Patrick Donnelly wrote:
> > > yite gu wrote:
> > > > any way to recover this ...
- 01:47 PM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Jos Collin wrote:
> In quincy branch, this is consistently reproducible:
>
> http://pulpito.front.sepia.ceph.com/...
- 11:16 AM Bug #61182 (In Progress): qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after ...
- In quincy branch, this is consistently reproducible:
http://pulpito.front.sepia.ceph.com/jcollin-2023-07-10_04:22:...
- 01:45 PM Bug #61924: tar: file changed as we read it (unless cephfs mounted with norbytes)
- Hi Harry,
Harry Coin wrote:
> Ceph: Pacific. When using tar heavily (such as compiling a linux kernel into distr...
- 01:25 PM Bug #60629: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Dhairya Parmar wrote:
> I'm trying to think out loud and this is just a hypothesis:
>
> Server::_session_logged()...
- 12:51 PM Bug #61945 (Triaged): LibCephFS.DelegTimeout failure
- 12:19 PM Bug #61945 (Triaged): LibCephFS.DelegTimeout failure
- /a/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/7326413...
- 12:15 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- Laura Flores wrote:
> Occurs quite a bit. Perhaps from a recent regression?
>
> See http://pulpito.front.sepia.ce...
- 07:04 AM Bug #60625 (Fix Under Review): crash: MDSRank::send_message_client(boost::intrusive_ptr<Message> ...
- 02:41 AM Cleanup #51383 (Fix Under Review): mgr/volumes/fs/exception.py: fix various flake8 issues
- 02:41 AM Cleanup #51401 (Fix Under Review): mgr/volumes/fs/operations/versions/metadata_manager.py: fix va...
- 02:41 AM Cleanup #51404 (Fix Under Review): mgr/volumes/fs/operations/versions/subvolume_v1.py: fix variou...
- 02:40 AM Cleanup #51405 (Fix Under Review): mgr/volumes/fs/operations/versions/subvolume_v2.py: fix variou...
- 02:39 AM Cleanup #51386 (Fix Under Review): mgr/volumes/fs/volume.py: fix various flake8 issues
- 02:38 AM Cleanup #51388 (Fix Under Review): mgr/volumes/fs/operations/index.py: add extra blank line
- 02:38 AM Cleanup #51389 (Fix Under Review): mgr/volumes/fs/operations/rankevicter.py: fix various flake8 i...
- 02:38 AM Cleanup #51394 (Fix Under Review): mgr/volumes/fs/operations/pin_util.py: fix various flake8 issues
- 02:37 AM Cleanup #51395 (Fix Under Review): mgr/volumes/fs/operations/lock.py: fix various flake8 issues
- 02:37 AM Cleanup #51397 (Fix Under Review): mgr/volumes/fs/operations/volume.py: fix various flake8 issues
- 02:37 AM Cleanup #51399 (Fix Under Review): mgr/volumes/fs/operations/template.py: fix various flake8 issues
- 02:08 AM Fix #52068 (Resolved): qa: add testing for "ms_mode" mount option
- 02:08 AM Backport #52440 (Resolved): pacific: qa: add testing for "ms_mode" mount option
- 02:07 AM Backport #59707 (Resolved): quincy: Mds crash and fails with assert on prepare_new_inode
07/09/2023
- 01:44 PM Feature #10679: Add support for the chattr +i command (immutable file)
- I'm claiming this ticket.
- 01:07 PM Documentation #61865 (Resolved): add doc on how to expedite MDS recovery with a lot of log segments
07/08/2023