Activity
From 07/16/2023 to 08/14/2023
08/14/2023
- 09:18 PM Bug #62435 (Need More Info): Pod unable to mount fscrypt encrypted cephfs PVC when it moves to an...
- Here is our setup:
Kubernetes: 1.27.3
rook: 1.11.9
ceph: 17.2.6
OS: Ubuntu 20.04 modified kernel to support fscry...
- 02:44 PM Backport #61801 (Rejected): pacific: mon/MDSMonitor: plug PAXOS when evicting an MDS
- EOL
- 02:41 PM Backport #61799 (In Progress): quincy: mon/MDSMonitor: plug PAXOS when evicting an MDS
- 02:40 PM Bug #62057 (Fix Under Review): mds: add TrackedOp event for batching getattr/lookup
- 02:34 PM Bug #62057 (Pending Backport): mds: add TrackedOp event for batching getattr/lookup
- (Testing a change to the backport script.)
- 02:30 PM Backport #62373 (New): quincy: Consider setting "bulk" autoscale pool flag when automatically cre...
- 02:28 PM Backport #61800 (In Progress): reef: mon/MDSMonitor: plug PAXOS when evicting an MDS
- 01:49 PM Backport #62424 (Rejected): pacific: mds: print locks when dumping ops
- EOL
- 12:37 PM Backport #62424 (Rejected): pacific: mds: print locks when dumping ops
- 01:49 PM Backport #62423 (In Progress): quincy: mds: print locks when dumping ops
- 12:37 PM Backport #62423 (In Progress): quincy: mds: print locks when dumping ops
- https://github.com/ceph/ceph/pull/52976
- 01:45 PM Backport #62422 (In Progress): reef: mds: print locks when dumping ops
- 12:36 PM Backport #62422 (In Progress): reef: mds: print locks when dumping ops
- https://github.com/ceph/ceph/pull/52975
- 01:40 PM Backport #62421 (In Progress): pacific: mds: adjust cap acquisition throttle defaults
- 12:36 PM Backport #62421 (Resolved): pacific: mds: adjust cap acquisition throttle defaults
- https://github.com/ceph/ceph/pull/52974
- 01:37 PM Backport #62420 (In Progress): quincy: mds: adjust cap acquisition throttle defaults
- 12:36 PM Backport #62420 (Resolved): quincy: mds: adjust cap acquisition throttle defaults
- https://github.com/ceph/ceph/pull/52973
- 01:35 PM Backport #62419 (In Progress): reef: mds: adjust cap acquisition throttle defaults
- 12:36 PM Backport #62419 (Resolved): reef: mds: adjust cap acquisition throttle defaults
- https://github.com/ceph/ceph/pull/52972
- 12:37 PM Backport #62427 (In Progress): pacific: nofail option in fstab not supported
- https://github.com/ceph/ceph/pull/52987
- 12:37 PM Backport #62426 (In Progress): quincy: nofail option in fstab not supported
- 12:37 PM Backport #62425 (In Progress): reef: nofail option in fstab not supported
- 12:23 PM Feature #62086 (Pending Backport): mds: print locks when dumping ops
- 12:22 PM Bug #62114 (Pending Backport): mds: adjust cap acquisition throttle defaults
- 12:21 PM Bug #58394 (Pending Backport): nofail option in fstab not supported
08/13/2023
- 06:39 AM Bug #48502: ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
- /a/yuriw-2023-08-09_19:52:16-rados-wip-yuri5-testing-2023-08-08-0807-quincy-distro-default-smithi/7364400/
08/12/2023
- 02:33 AM Feature #61903: pybind/mgr/volumes: add config to turn off subvolume deletion
- Venky Shankar wrote:
> From GChat
>
> > Rishabh Dave, 1:40 PM
> >https://tracker.ceph.com/issues/61903
> >Shoul...
08/11/2023
- 02:34 PM Feature #61903: pybind/mgr/volumes: add config to turn off subvolume deletion
- From GChat
> Rishabh Dave, 1:40 PM
>https://tracker.ceph.com/issues/61903
>Should I add an option to turn off th...
- 11:32 AM Bug #61947 (Fix Under Review): mds: enforce a limit on the size of a session in the sessionmap
- 09:12 AM Bug #62407 (Pending Backport): pybind/mgr/volumes: Document a possible deadlock after a volume de...
- When a cephfs volume is deleted, the mgr threads (cloner, purge threads) could take a corresponding thread lock
and ...
- 06:18 AM Backport #62406 (In Progress): pacific: pybind/mgr/volumes: pending_subvolume_deletions count is ...
- https://github.com/ceph/ceph/pull/53574
- 06:18 AM Backport #62405 (In Progress): reef: pybind/mgr/volumes: pending_subvolume_deletions count is alw...
- https://github.com/ceph/ceph/pull/53572
- 06:18 AM Backport #62404 (Resolved): quincy: pybind/mgr/volumes: pending_subvolume_deletions count is alwa...
- https://github.com/ceph/ceph/pull/53573
- 06:12 AM Bug #62278 (Pending Backport): pybind/mgr/volumes: pending_subvolume_deletions count is always ze...
- 12:58 AM Bug #61732 (Fix Under Review): pacific: test_cluster_info fails from "No daemons reported"
08/10/2023
- 09:35 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- /a/yuriw-2023-08-02_20:21:03-rados-wip-yuri3-testing-2023-08-01-0825-pacific-distro-default-smithi/7358531
- 07:31 PM Bug #62096: mds: infinite rename recursion on itself
- Xiubo Li wrote:
> Patrick,
>
> This should be the same issue with:
>
> https://tracker.ceph.com/issues/58340
...
- 12:00 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Xiubo Li wrote:
> This should be a similar issue with https://tracker.ceph.com/issues/58489. Just the *openc/mknod/m...
- 10:35 AM Feature #58877: mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
- @Neeraj,
Could you please check point 3 mentioned in comment 6 above?
-Kotresh H R
- 07:17 AM Feature #58877: mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
Venky, Neeraj, and I had a meeting about this; please find the meeting minutes below:
1. When the subvolume...
08/09/2023
- 06:20 PM Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_ve...
- The attached file contains log snippets with apparently relevant information for a few crashes as well as intermediat...
- 06:17 PM Bug #62381 (In Progress): mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() ...
- Despite https://tracker.ceph.com/issues/53597 being marked as resolved we could still face the problem in v17.2.5
...
- 03:21 PM Backport #62373 (In Progress): quincy: Consider setting "bulk" autoscale pool flag when automatic...
- 10:03 AM Backport #62373: quincy: Consider setting "bulk" autoscale pool flag when automatically creating ...
- https://github.com/ceph/ceph/pull/52902
- 09:26 AM Backport #62373 (Resolved): quincy: Consider setting "bulk" autoscale pool flag when automaticall...
- 03:21 PM Backport #62372 (In Progress): pacific: Consider setting "bulk" autoscale pool flag when automati...
- 10:04 AM Backport #62372: pacific: Consider setting "bulk" autoscale pool flag when automatically creating...
- https://github.com/ceph/ceph/pull/52900
- 09:26 AM Backport #62372 (Resolved): pacific: Consider setting "bulk" autoscale pool flag when automatical...
- https://github.com/ceph/ceph/pull/52900
- 03:21 PM Backport #62374 (In Progress): reef: Consider setting "bulk" autoscale pool flag when automatical...
- 10:04 AM Backport #62374: reef: Consider setting "bulk" autoscale pool flag when automatically creating a ...
- https://github.com/ceph/ceph/pull/52899
- 09:26 AM Backport #62374 (Resolved): reef: Consider setting "bulk" autoscale pool flag when automatically ...
- 12:22 PM Bug #62123: mds: detect out-of-order locking
- Bumping priority since we have places where the MDS could deadlock due to out-of-order locking.
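For readers scanning this entry: "out-of-order locking" means two code paths take the same pair of locks in opposite orders. A minimal Python sketch (illustrative only, not MDS code) of why that can deadlock, using an acquire timeout so the demo itself does not hang:

    import threading, time

    lock_a = threading.Lock()
    lock_b = threading.Lock()

    def worker(first, second, name):
        with first:
            time.sleep(0.1)                    # widen the race window
            if second.acquire(timeout=1):      # give up instead of hanging forever
                second.release()
                print(f"{name}: completed")
            else:
                print(f"{name}: would have deadlocked waiting for the other thread's lock")

    t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "thread-1"))
    t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "thread-2"))  # opposite order
    t1.start(); t2.start()
    t1.join(); t2.join()

Enforcing a single global lock order, or detecting violations as this tracker proposes, removes the cycle.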
- 12:06 PM Backport #62040 (Resolved): pacific: client: do not send metrics until the MDS rank is ready
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52500
Merged.
- 12:05 PM Backport #62177 (Resolved): pacific: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror da...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52654
Merged.
- 09:21 AM Feature #61595 (Pending Backport): Consider setting "bulk" autoscale pool flag when automatically...
- 08:05 AM Bug #54833: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&): assert(lock...
- Venky Shankar wrote:
> If the auth is sending `LOCK_AC_LOCK` then in the replica it should be handled here:
>
> [...
- 07:58 AM Bug #54833: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&): assert(lock...
- If the auth is sending `LOCK_AC_LOCK` then in the replica it should be handled here:...
- 07:33 AM Bug #54833 (In Progress): crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&...
- 07:33 AM Bug #54833: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&): assert(lock...
- There are no logs; I just went through the MDS locker code, and it seems buggy here:...
- 06:39 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- Hi Jos,
Jos Collin wrote:
> In the run https://pulpito.ceph.com/vshankar-2023-07-26_04:54:56-fs-wip-vshankar-test...
- 04:39 AM Bug #61867 (Fix Under Review): mgr/volumes: async threads should periodically check for work
- 04:32 AM Backport #62012 (Resolved): pacific: client: dir->dentries inconsistent, both newname and oldname...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52505
Merged.
- 04:31 AM Backport #61983 (Resolved): pacific: mds: cap revoke and cap update's seqs mismatched
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52506
Merged.
- 04:31 AM Backport #62055 (Resolved): pacific: qa: test_rebuild_moved_file (tasks/data-scan) fails because ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52513
Merged.
- 04:30 AM Backport #62043 (Resolved): pacific: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52499
Merged.
- 03:54 AM Backport #61960 (Resolved): quincy: mon: block osd pool mksnap for fs pools
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52398
Merged.
- 03:27 AM Bug #59785 (Closed): crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state == L...
- This issue won't be seen in latest builds for pacific v16.2.12 and later ... and quincy ceph-17.2.6-2 and later.
- 01:30 AM Backport #61696 (Resolved): pacific: CephFS: Debian cephfs-mirror package in the Ceph repo doesn'...
- 01:29 AM Backport #61734 (Resolved): pacific: mgr/stats: exception ValueError :invalid literal for int() w...
- 01:00 AM Bug #52280 (Resolved): Mds crash and fails with assert on prepare_new_inode
- 12:59 AM Backport #59706 (Resolved): pacific: Mds crash and fails with assert on prepare_new_inode
- 12:59 AM Backport #61798 (Resolved): pacific: client: only wait for write MDS OPs when unmounting
- 12:58 AM Bug #62096: mds: infinite rename recursion on itself
- Patrick,
This should be the same issue with:
https://tracker.ceph.com/issues/58340
https://tracker.ceph.com/is... - 12:36 AM Feature #62364 (New): support dumping rstats on a particular path
- Especially now that we have rstats disabled by default, we need an easy way to dump rstats (primarily rbytes, though ...
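For context, CephFS already exposes recursive stats on directories as virtual extended attributes; a minimal sketch of reading them from a client mount (the mount path below is hypothetical):

    import os

    path = "/mnt/cephfs/some/dir"   # hypothetical CephFS mount point

    # Recursive stats are exposed as virtual xattrs on directories.
    rbytes = int(os.getxattr(path, "ceph.dir.rbytes"))   # recursive byte count
    rfiles = int(os.getxattr(path, "ceph.dir.rfiles"))   # recursive file count
    print(f"{path}: rbytes={rbytes} rfiles={rfiles}")

The feature request above asks for an easier way to dump these numbers for a given path.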
08/08/2023
- 06:20 PM Backport #61961: pacific: mon: block osd pool mksnap for fs pools
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52397
merged
- 06:20 PM Backport #61798: pacific: client: only wait for write MDS OPs when unmounting
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52304
merged
- 06:19 PM Backport #61426: pacific: mon/MDSMonitor: daemon booting may get failed if mon handles up:boot be...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52244
merged
- 06:19 PM Backport #61414: pacific: mon/MDSMonitor: do not trigger propose on error from prepare_update
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52240
merged
- 06:18 PM Backport #59372: pacific: qa: test_join_fs_unset failure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52237
merged
- 06:18 PM Backport #61411: pacific: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52233
merged
- 06:17 PM Backport #61692: pacific: mon failed to return metadata for mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52230
merged
- 06:17 PM Backport #61734: pacific: mgr/stats: exception ValueError :invalid literal for int() with base 16...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52125
merged
- 06:16 PM Backport #61696: pacific: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install t...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52075
merged
- 06:15 PM Backport #59706: pacific: Mds crash and fails with assert on prepare_new_inode
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51508
merged
- 12:39 PM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- In the run https://pulpito.ceph.com/vshankar-2023-07-26_04:54:56-fs-wip-vshankar-testing-20230725.053049-testing-defa...
- 04:49 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- Jos Collin wrote:
> @Venky:
>
> This bug couldn't be reproduced on main with consecutive runs of test_cephfs_mirr...
- 04:31 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- @Venky:
This bug couldn't be reproduced on main with consecutive runs of test_cephfs_mirror_restart_sync_on_blockl...
- 11:43 AM Bug #62077 (In Progress): mgr/nfs: validate path when modifying cephfs export
- 10:12 AM Feature #61595 (Fix Under Review): Consider setting "bulk" autoscale pool flag when automatically...
- The resolved status was a bit premature. See - https://github.com/ceph/ceph/pull/52792#issuecomment-1669259541
Fur...
- 08:25 AM Feature #61595 (Resolved): Consider setting "bulk" autoscale pool flag when automatically creatin...
- 09:58 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- This should be a similar issue with https://tracker.ceph.com/issues/58489. Just the *openc/mknod/mkdir/symblink* even...
- 09:29 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- It aborted in Line#1623. The *session->take_ino()* may return *0* if the *used_preallocated_ino* doesn't exist. Then ...
- 09:27 AM Bug #62356 (Duplicate): mds: src/include/interval_set.h: 538: FAILED ceph_assert(p->first <= start)
- 08:40 AM Bug #62356: mds: src/include/interval_set.h: 538: FAILED ceph_assert(p->first <= start)
- I just realized there is already existing one tracker, which has the same issue https://tracker.ceph.com/issues/61009.
- 08:32 AM Bug #62356 (Duplicate): mds: src/include/interval_set.h: 538: FAILED ceph_assert(p->first <= start)
- ...
- 08:43 AM Bug #54943 (Duplicate): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [w...
- 08:33 AM Bug #62357 (Resolved): tools/cephfs_mirror: only perform actions if init succeed
- address non-zero return code first and then perform further actions.
- 08:30 AM Bug #62355 (Closed): cephfs-mirror: do not run concurrent C_RestartMirroring context
- Closing since I added the summary to the existing tracker (which wasn't meant to address C_RestartMirroring contexts but ...
- 08:20 AM Bug #62355 (Closed): cephfs-mirror: do not run concurrent C_RestartMirroring context
- This was majorly discussed in tracker https://tracker.ceph.com/issues/62072 and PR https://github.com/ceph/ceph/pull/...
- 08:28 AM Bug #62072 (Fix Under Review): cephfs-mirror: do not run concurrent C_RestartMirroring context
- Quick summary:
After digging deeper, the issue turned out to be much more than anticipated; kudos to Venky for figuring it out ...
- 05:39 AM Bug #61717: CephFS flock blocked on itself
- Greg Farnum wrote:
> I think there must be more going on here than is understood. The MDS is blocked on getting some... - 05:24 AM Bug #61717 (Can't reproduce): CephFS flock blocked on itself
- I think there must be more going on here than is understood. The MDS is blocked on getting some other internal locks ...
- 12:58 AM Backport #61984 (Resolved): reef: mds: cap revoke and cap update's seqs mismatched
08/07/2023
- 04:41 PM Bug #61947: mds: enforce a limit on the size of a session in the sessionmap
- Sure, no problem.
- 12:45 PM Bug #61947: mds: enforce a limit on the size of a session in the sessionmap
- Leonid, I forgot to update the tracker assignee after our sync. I've completed about 50% of the work.
- 03:34 PM Feature #58550 (Resolved): mds: add perf counter to track (relatively) larger log events
- Not planning to backport this the minor log segment PR.
- 03:32 PM Bug #61869 (Resolved): pybind/cephfs: holds GIL during rmdir
- 03:32 PM Backport #61898 (Resolved): quincy: pybind/cephfs: holds GIL during rmdir
- 03:25 PM Bug #62326 (Fix Under Review): pybind/mgr/cephadm: stop disabling fsmap sanity checks during upgrade
- 02:53 PM Backport #61984: reef: mds: cap revoke and cap update's seqs mismatched
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52507
merged
- 02:52 PM Backport #59263: reef: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50813
merged
- 02:52 PM Backport #59260: reef: mds: stray directories are not purged when all past parents are clear
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50813
merged
- 12:38 PM Backport #61759 (Resolved): reef: tools/cephfs/first-damage: unicode decode errors break iteration
- 07:37 AM Bug #62344 (New): tools/cephfs_mirror: mirror daemon logs reports initialisation failure for fs a...
- This is hard to reproduce; for now one needs to perform some steps manually and run a qa test case using vstart_runner...
- 05:13 AM Feature #62207: Report cephfs-nfs service on ceph -s
- Dhairya, please take this one.
- 04:44 AM Backport #61166 (In Progress): pacific: [WRN] : client.408214273 isn't responding to mclientcaps(...
- 04:42 AM Backport #61167 (In Progress): quincy: [WRN] : client.408214273 isn't responding to mclientcaps(r...
- 04:38 AM Backport #61165 (In Progress): reef: [WRN] : client.408214273 isn't responding to mclientcaps(rev...
- 04:28 AM Backport #62197 (Duplicate): quincy: mds: couldn't successfully calculate the locker caps
- Will backport this together with https://tracker.ceph.com/issues/61167.
- 04:27 AM Backport #62199 (Duplicate): pacific: mds: couldn't successfully calculate the locker caps
- Will backport this together with https://tracker.ceph.com/issues/61166.
- 04:27 AM Backport #62198 (Duplicate): reef: mds: couldn't successfully calculate the locker caps
- Will backport this together with https://tracker.ceph.com/issues/61165.
- 04:22 AM Backport #62192 (In Progress): reef: ceph: corrupt snap message from mds1
- 04:22 AM Backport #62193 (In Progress): pacific: ceph: corrupt snap message from mds1
- 04:22 AM Backport #62194 (In Progress): quincy: ceph: corrupt snap message from mds1
- 03:26 AM Backport #62202 (In Progress): pacific: crash: MDSRank::send_message_client(boost::intrusive_ptr<...
- 03:26 AM Backport #62200 (In Progress): quincy: crash: MDSRank::send_message_client(boost::intrusive_ptr<M...
- 03:26 AM Backport #62201 (In Progress): reef: crash: MDSRank::send_message_client(boost::intrusive_ptr<Mes...
- 02:47 AM Backport #62271 (In Progress): reef: Error: Unable to find a match: userspace-rcu-devel libedit-d...
- 02:37 AM Backport #62273 (Rejected): pacific: Error: Unable to find a match: userspace-rcu-devel libedit-d...
- The dependent PR https://github.com/ceph/ceph/pull/48628 wasn't backported to pacific, so this one isn't needed.
- 02:37 AM Backport #62272 (Rejected): quincy: Error: Unable to find a match: userspace-rcu-devel libedit-de...
- The dependent PR https://github.com/ceph/ceph/pull/48628 wasn't backported to quincy, so this one isn't needed.
- 02:24 AM Backport #62044 (Resolved): reef: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- 02:24 AM Backport #62041 (Resolved): reef: client: do not send metrics until the MDS rank is ready
- 02:23 AM Backport #62011 (Resolved): reef: client: dir->dentries inconsistent, both newname and oldname po...
08/06/2023
- 08:46 AM Feature #61595 (Fix Under Review): Consider setting "bulk" autoscale pool flag when automatically...
- 08:46 AM Bug #58394 (Fix Under Review): nofail option in fstab not supported
- After further research I've arrived at the conclusion that stripping the option in the mount.fuse.ceph helper is the ...
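As a rough sketch of what stripping the option in a mount helper could look like (hypothetical function, not the actual mount.fuse.ceph code):

    def strip_unsupported_options(options: str, unsupported=("nofail",)) -> str:
        """Drop mount options that the FUSE client would reject.

        'options' is the comma-separated -o string from fstab,
        e.g. "rw,noatime,nofail,_netdev".
        """
        kept = [o for o in options.split(",") if o and o not in unsupported]
        return ",".join(kept)

    print(strip_unsupported_options("rw,noatime,nofail,_netdev"))
    # -> rw,noatime,_netdev

The idea is that nofail only matters to mount/systemd for boot-time error handling, so dropping it before the options reach the FUSE client keeps fstab entries working.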
08/05/2023
- 11:14 AM Backport #62054 (Resolved): reef: qa: test_rebuild_moved_file (tasks/data-scan) fails because mds...
- https://github.com/ceph/ceph/pull/52512 merged
08/04/2023
- 09:35 PM Bug #47292: cephfs-shell: test_df_for_valid_file failure
- This can be due to some sort of race (in the test I suppose). Tests for @du@ cephfs-shell command also suffered simil...
- 08:51 PM Bug #47292 (In Progress): cephfs-shell: test_df_for_valid_file failure
- 08:50 PM Bug #47292: cephfs-shell: test_df_for_valid_file failure
- Inspecting methods (in test_cephfs_shell.py) @test_df_for_valid_file@ and @validate_df@, this seems to be a bug not i...
- 06:42 PM Bug #47292: cephfs-shell: test_df_for_valid_file failure
- The bug on this ticket can't be reproduced with teuthology either - https://pulpito.ceph.com/rishabh-2023-08-02_13:...
- 08:38 PM Backport #62337 (In Progress): pacific: MDSAuthCaps: use g_ceph_context directly
- 07:56 PM Backport #62337 (Resolved): pacific: MDSAuthCaps: use g_ceph_context directly
- https://github.com/ceph/ceph/pull/52821
- 08:36 PM Backport #62336 (In Progress): quincy: MDSAuthCaps: use g_ceph_context directly
- 07:56 PM Backport #62336 (In Progress): quincy: MDSAuthCaps: use g_ceph_context directly
- https://github.com/ceph/ceph/pull/52820
- 08:33 PM Backport #62335 (In Progress): reef: MDSAuthCaps: use g_ceph_context directly
- 07:56 PM Backport #62335 (In Progress): reef: MDSAuthCaps: use g_ceph_context directly
- https://github.com/ceph/ceph/pull/52819
- 07:55 PM Bug #62334 (Pending Backport): mds: use g_ceph_context directly
- Creating this ticket because Venky wants me to backport the PR for this ticket.
Summary of this PR -
Variable @...
- 07:52 PM Backport #62333 (New): quincy: MDSAuthCaps: minor improvements
- 07:51 PM Backport #62332 (In Progress): reef: MDSAuthCaps: minor improvements
- https://github.com/ceph/ceph/pull/54185
- 07:51 PM Backport #62331 (Rejected): pacific: MDSAuthCaps: minor improvements
- https://github.com/ceph/ceph/pull/54143
- 07:47 PM Bug #62329 (Rejected): MDSAuthCaps: minor improvements
- Creating this ticket because Venky wants me to backport the PR for this ticket.
Summary of the PR -
1. Import std...
- 06:51 PM Bug #62328 (New): qa: test_acls fails due to dependency package was not found
- ...
- 06:31 PM Bug #62243: qa/cephfs: test_snapshots.py fails because of a missing method
- https://pulpito.ceph.com/rishabh-2023-08-03_21:57:58-fs-wip-rishabh-2023Aug1-3-testing-default-smithi/7359973
- 05:58 PM Bug #62326 (Resolved): pybind/mgr/cephadm: stop disabling fsmap sanity checks during upgrade
- This was necessary due to #53155. We should not continue disabling the sanity checks forever during upgrades which he...
- 04:42 PM Bug #59107: MDS imported_inodes metric is not updated.
- https://github.com/ceph/ceph/pull/51698 merged
- 04:35 PM Backport #59410: reef: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52578
merged
- 04:34 PM Backport #61987: reef: mds: session ls command appears twice in command listing
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52515
merged
- 04:34 PM Bug #61201: qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific
- https://github.com/ceph/ceph/pull/52512 merged
- 04:31 PM Backport #62011: reef: client: dir->dentries inconsistent, both newname and oldname points to sam...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52504
merged
- 04:30 PM Backport #62041: reef: client: do not send metrics until the MDS rank is ready
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52501
merged
- 04:28 PM Backport #62044: reef: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52497
merged
- 04:26 PM Backport #59373: reef: qa: test_join_fs_unset failure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52235
merged
- 03:16 PM Backport #61898: quincy: pybind/cephfs: holds GIL during rmdir
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52322
merged
- 03:14 PM Backport #61425: quincy: mon/MDSMonitor: daemon booting may get failed if mon handles up:boot bea...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52243
merged
- 03:13 PM Backport #61415: quincy: mon/MDSMonitor: do not trigger propose on error from prepare_update
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52239
merged
- 03:13 PM Backport #61412: quincy: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52234
merged
- 09:22 AM Bug #62217: ceph_fs.h: add separate owner_{u,g}id fields
- Xiubo proposed [1] to add one extra field to the struct ceph_mds_request_head.
This field will be filled by client w...
08/03/2023
- 09:28 PM Bug #62208 (New): mds: use MDSRank::abort to ceph_abort so necessary sync is done
- updating the status to New until the API becomes available
- 02:17 PM Bug #62208: mds: use MDSRank::abort to ceph_abort so necessary sync is done
- Leonid Usov wrote:
> OK thanks Patrick! So this will wait until you submit the linked issue that introduces the new ...
- 01:10 PM Bug #62208 (In Progress): mds: use MDSRank::abort to ceph_abort so necessary sync is done
- OK thanks Patrick! So this will wait until you submit the linked issue that introduces the new method.
What are ot...
- 12:56 PM Bug #62208 (New): mds: use MDSRank::abort to ceph_abort so necessary sync is done
- 08:16 PM Bug #58394: nofail option in fstab not supported
- Here is the commit that introduces the "support" for the option to fuse3: https://github.com/libfuse/libfuse/commit/a...
- 10:10 AM Bug #58394: nofail option in fstab not supported
- A "suggestion for Debian":https://github.com/libfuse/libfuse/issues/691, for example, is to @apt install fuse3@
- 10:05 AM Bug #58394: nofail option in fstab not supported
- https://github.com/libfuse/libfuse/blob/master/ChangeLog.rst#libfuse-322-2018-03-31
> libfuse 3.2.2 (2018-03-31)
...
- 08:34 AM Bug #58394: nofail option in fstab not supported
- Okay, so the research so far shows that
* The lack of @nofail@ support by fuse has been reported multiple times as...
- 02:22 AM Bug #58394: nofail option in fstab not supported
- Brian Woods wrote:
> Venky Shankar wrote:
> > Were you able to check why.
>
> Not sure what you are asking. Che...
- 06:53 PM Feature #62215: libcephfs: Allow monitoring for any file changes like inotify
- Venky Shankar wrote:
> Changes originating from the localhost would obviously be notified to the watcher, but *not* ... - 12:07 PM Feature #62215: libcephfs: Allow monitoring for any file changes like inotify
- Anagh Kumar Baranwal wrote:
> Hi Venky,
>
> Venky Shankar wrote:
> > Hi Anagh,
> >
> > Anagh Kumar Baranwal w...
- 11:52 AM Feature #62215: libcephfs: Allow monitoring for any file changes like inotify
- Hi Venky,
Venky Shankar wrote:
> Hi Anagh,
>
> Anagh Kumar Baranwal wrote:
> libcephfs does not have such ...
- 09:36 AM Feature #62215: libcephfs: Allow monitoring for any file changes like inotify
- Hi Anagh,
Anagh Kumar Baranwal wrote:
> I wanted to add a Ceph backend for rclone (https://rclone.org/) but it tu...
- 04:11 PM Feature #61595 (In Progress): Consider setting "bulk" autoscale pool flag when automatically crea...
- 03:51 AM Feature #59714 (Fix Under Review): mgr/volumes: Support to reject CephFS clones if cloner threads...
08/02/2023
- 09:23 PM Bug #62228 (Resolved): "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- Resolved in https://tracker.ceph.com/issues/57206.
- 08:36 PM Backport #61410 (Resolved): reef: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
- 07:07 PM Bug #58394: nofail option in fstab not supported
- Venky Shankar wrote:
> Were you able to check why.
Not sure what you are asking. Check why the ceph mount allowe...
- 05:22 PM Bug #62208 (Need More Info): mds: use MDSRank::abort to ceph_abort so necessary sync is done
- Hi Venky!
I've started looking at this one, but the ticket doesn't provide sufficient information so I don't know ...
- 04:17 AM Bug #62208: mds: use MDSRank::abort to ceph_abort so necessary sync is done
- Leonid, please take this one.
- 04:58 PM Bug #61409: qa: _test_stale_caps does not wait for file flush before stat
- Casey Bodley wrote:
> backports should include the flake8 cleanup from https://github.com/ceph/ceph/pull/52732
Ap...
- 04:39 PM Feature #58154 (Resolved): mds: add minor segment boundaries
- Not a backport candidate as the change needs more bake time in main.
- 04:37 PM Backport #62289 (Resolved): quincy: ceph_test_libcephfs_reclaim crashes during test
- https://github.com/ceph/ceph/pull/53647
- 04:36 PM Backport #62288 (Rejected): pacific: ceph_test_libcephfs_reclaim crashes during test
- https://github.com/ceph/ceph/pull/53648
- 04:36 PM Backport #62287 (In Progress): reef: ceph_test_libcephfs_reclaim crashes during test
- https://github.com/ceph/ceph/pull/53635
- 04:36 PM Bug #57206 (Pending Backport): ceph_test_libcephfs_reclaim crashes during test
- 06:47 AM Bug #57206 (Fix Under Review): ceph_test_libcephfs_reclaim crashes during test
- 01:52 PM Bug #47292: cephfs-shell: test_df_for_valid_file failure
- Can't reproduce this bug with vstart_runner.py....
- 01:48 PM Bug #62243 (Resolved): qa/cephfs: test_snapshots.py fails because of a missing method
- 08:51 AM Bug #62243: qa/cephfs: test_snapshots.py fails because of a missing method
- Venky Shankar wrote:
> I think this _doesn't_ need backport, Rishabh?
No need because the PR that introduced this...
- 12:33 PM Bug #62278 (Fix Under Review): pybind/mgr/volumes: pending_subvolume_deletions count is always ze...
- 10:32 AM Bug #62278 (Pending Backport): pybind/mgr/volumes: pending_subvolume_deletions count is always ze...
- Even after deleting multiple subvolumes, the pending_subvolume_deletions count is always zero...
- 10:27 AM Bug #62246: qa/cephfs: test_mount_mon_and_osd_caps_present_mds_caps_absent fails
- Rishabh Dave wrote:
> This failure might be same as the one reported here - https://tracker.ceph.com/issues/62188. I...
- 09:53 AM Feature #55215 (Resolved): mds: fragment directory snapshots
- 08:55 AM Bug #62188: AttributeError: 'RemoteProcess' object has no attribute 'read'
- Venky Shankar wrote:
> Rishabh Dave wrote:
> > I spent a good amount of time with this ticket. The reason for this ...
- 08:18 AM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > _test_create_cluster() in test_nfs demanded strerr to be looked at...
- 07:57 AM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- Dhairya Parmar wrote:
> _test_create_cluster() in test_nfs demanded strerr to be looked at; therefore I had created ...
- 07:34 AM Bug #62074: cephfs-shell: ls command has help message of cp command
- https://github.com/ceph/ceph/pull/52756
- 07:20 AM Bug #59785: crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state == LOCK_XLOCK...
- Milind Changire wrote:
> Venky Shankar wrote:
> > Milind Changire wrote:
> > > Venky,
> > > What versions do I ba...
- 07:11 AM Bug #59785: crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state == LOCK_XLOCK...
- Venky Shankar wrote:
> Milind Changire wrote:
> > Venky,
> > What versions do I backport PR#48743 to?
> > That PR...
- 06:49 AM Bug #59785: crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state == LOCK_XLOCK...
- Milind Changire wrote:
> Venky,
> What versions do I backport PR#48743 to?
> That PR is only available in version ...
- 07:19 AM Bug #59833 (Fix Under Review): crash: void MDLog::trim(int): assert(segments.size() >= pre_segmen...
- 06:48 AM Bug #62262 (Won't Fix): workunits/fs/test_o_trunc.sh failed with timedout
- This is not an issue upstream; it is just caused by the new PR: https://github.com/ceph/ceph/pull/45073.
Will close it.
- 06:47 AM Bug #62262: workunits/fs/test_o_trunc.sh failed with timedout
- Just copied my analysis from https://github.com/ceph/ceph/pull/45073/commits/f064cd9a78ae475c574d1d46db18748fe9c001...
- 06:38 AM Backport #61793 (In Progress): pacific: mgr/snap_schedule: catch all exceptions to avoid crashing...
- 06:37 AM Backport #61795 (In Progress): quincy: mgr/snap_schedule: catch all exceptions to avoid crashing ...
- 06:37 AM Backport #61794 (In Progress): reef: mgr/snap_schedule: catch all exceptions to avoid crashing mo...
- 06:33 AM Backport #61989 (In Progress): pacific: snap-schedule: allow retention spec to specify max number...
- 06:27 AM Backport #61991 (In Progress): quincy: snap-schedule: allow retention spec to specify max number ...
- 06:27 AM Backport #61990 (In Progress): reef: snap-schedule: allow retention spec to specify max number of...
- 05:01 AM Feature #61595: Consider setting "bulk" autoscale pool flag when automatically creating a data po...
- Leonid, please take this one.
- 04:54 AM Bug #62077: mgr/nfs: validate path when modifying cephfs export
- Venky Shankar wrote:
> Dhairya, this should be straightforward with the path validation helper you introduced, right...
- 04:30 AM Bug #56003 (Duplicate): client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
- 03:04 AM Bug #62277 (Fix Under Review): Error: Unable to find a match: python2 with fscrypt tests
- 12:51 AM Bug #62277 (Pending Backport): Error: Unable to find a match: python2 with fscrypt tests
- http://qa-proxy.ceph.com/teuthology/yuriw-2023-07-29_14:02:18-fs-reef-release-distro-default-smithi/7356824/teutholog...
- 03:01 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > Oh, or maybe its trying to install python2
...
- 02:43 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Oh, or maybe its trying to install python2
> >
> > [...]
> >
> > T...
- 12:51 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
- Venky Shankar wrote:
> Oh, or maybe its trying to install python2
>
> [...]
>
> This should be trying python3....
- 01:22 AM Bug #62227 (Fix Under Review): Error "dbench: command not found" in smoke on reef
- The fixing PR in *ceph-cm-ansible*: https://github.com/ceph/ceph-cm-ansible/pull/746 and https://github.com/ceph/ceph...
- 01:08 AM Bug #62227: Error "dbench: command not found" in smoke on reef
- This should be a same issue with https://tracker.ceph.com/issues/62187, which is *iozone* missing instead in *centos ...
- 01:03 AM Backport #62268 (In Progress): pacific: qa: _test_stale_caps does not wait for file flush before ...
- 01:02 AM Backport #62270 (In Progress): quincy: qa: _test_stale_caps does not wait for file flush before stat
- 01:00 AM Backport #62269 (In Progress): reef: qa: _test_stale_caps does not wait for file flush before stat
08/01/2023
- 08:29 PM Bug #61409: qa: _test_stale_caps does not wait for file flush before stat
- backports should include the flake8 cleanup from https://github.com/ceph/ceph/pull/52732
- 03:09 PM Bug #61409 (Pending Backport): qa: _test_stale_caps does not wait for file flush before stat
- 03:31 PM Backport #62273 (Rejected): pacific: Error: Unable to find a match: userspace-rcu-devel libedit-d...
- 03:31 PM Backport #62272 (Rejected): quincy: Error: Unable to find a match: userspace-rcu-devel libedit-de...
- 03:31 PM Backport #62271 (Resolved): reef: Error: Unable to find a match: userspace-rcu-devel libedit-deve...
- https://github.com/ceph/ceph/pull/52843
- 03:27 PM Feature #61903: pybind/mgr/volumes: add config to turn off subvolume deletion
- Rishabh, please take this one.
- 01:52 PM Feature #61903: pybind/mgr/volumes: add config to turn off subvolume deletion
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Patrick Donnelly wrote:
> > > Sometimes we want to be able to t... - 03:25 PM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
- Oh, or maybe it's trying to install python2...
- 03:24 PM Bug #59683 (Pending Backport): Error: Unable to find a match: userspace-rcu-devel libedit-devel d...
- https://pulpito.ceph.com/yuriw-2023-07-29_14:02:18-fs-reef-release-distro-default-smithi/7356824/
- 03:15 PM Backport #62270 (In Progress): quincy: qa: _test_stale_caps does not wait for file flush before stat
- https://github.com/ceph/ceph/pull/52743
- 03:15 PM Backport #62269 (In Progress): reef: qa: _test_stale_caps does not wait for file flush before stat
- https://github.com/ceph/ceph/pull/52742
- 03:15 PM Backport #62268 (Resolved): pacific: qa: _test_stale_caps does not wait for file flush before stat
- https://github.com/ceph/ceph/pull/52744
- 02:31 PM Feature #61905: pybind/mgr/volumes: add more introspection for recursive unlink threads
- Rishabh, please take this one.
- 02:23 PM Bug #62265 (Fix Under Review): cephfs-mirror: use monotonic clocks in cephfs mirror daemon
- There are places where the mirror daemon uses realtime clocks which are prone to clock shifts (system time). Switch t...
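The distinction, illustrated in Python rather than the daemon's C++ (purely to show the behaviour being relied on): wall-clock timestamps can jump when the system time is changed, while a monotonic clock only moves forward, so intervals computed from it are immune to clock shifts.

    import time

    start_real = time.time()        # wall clock: affected by NTP steps / admin changes
    start_mono = time.monotonic()   # monotonic: unaffected by system time changes

    time.sleep(0.5)                 # ... work, during which the clock might be stepped

    elapsed_real = time.time() - start_real        # could be negative or wildly off
    elapsed_mono = time.monotonic() - start_mono   # always ~0.5s here
    print(f"realtime delta: {elapsed_real:.3f}s, monotonic delta: {elapsed_mono:.3f}s")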
- 12:56 PM Bug #59785: crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state == LOCK_XLOCK...
- Venky,
What versions do I backport PR#48743 to?
That PR is only available in version 18.x
This is the older trac...
- 09:12 AM Bug #62262 (Won't Fix): workunits/fs/test_o_trunc.sh failed with timedout
- https://pulpito.ceph.com/vshankar-2023-07-25_11:29:34-fs-wip-vshankar-testing-20230725.043804-testing-default-smithi/...
- 07:26 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- The difference I can see in src/test/libcephfs/CMakeLists.txt is
for, say, test.cc:...
- 06:50 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Laura Flores wrote:
> Possibly more helpful, here's the last instance of it passing on ubuntu 20.04 main:
> https:/...
- 04:43 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Thanks for the details, Laura. I'll have a look.
- 06:33 AM Backport #62242 (In Progress): pacific: mds: linkmerge assert check is incorrect in rename codepath
- 06:32 AM Backport #62241 (In Progress): quincy: mds: linkmerge assert check is incorrect in rename codepath
- 06:31 AM Backport #62240 (In Progress): reef: mds: linkmerge assert check is incorrect in rename codepath
- 05:28 AM Bug #62257: mds: blocklist clients that are not advancing `oldest_client_tid`
- Venky Shankar wrote:
> Xiubo Li wrote:
> > There is a ceph-user mail thread about this https://www.spinics.net/list...
- 05:22 AM Bug #62257: mds: blocklist clients that are not advancing `oldest_client_tid`
- Xiubo Li wrote:
> There is a ceph-user mail thread about this https://www.spinics.net/lists/ceph-users/msg78109.html... - 05:12 AM Bug #62257: mds: blocklist clients that are not advancing `oldest_client_tid`
- There is a ceph-user mail thread about this https://www.spinics.net/lists/ceph-users/msg78109.html.
As a workaroun...
- 05:05 AM Bug #62257 (New): mds: blocklist clients that are not advancing `oldest_client_tid`
- The size of the session map becomes huge, thereby exceeding the max write size for a RADOS operation, thereby resulti...
- 05:14 AM Bug #62245 (Fix Under Review): qa/workunits/libcephfs/test.sh failed: [ FAILED ] LibCephFS.Dele...
- 04:14 AM Bug #62243 (Fix Under Review): qa/cephfs: test_snapshots.py fails because of a missing method
- I think this _doesn't_ need backport, Rishabh?
- 04:10 AM Bug #61957 (Duplicate): test_client_limits.TestClientLimits.test_client_release_bug fails
- Xiubo Li wrote:
> Venky, this seems the same issue with https://tracker.ceph.com/issues/62229 and I have one PR to f...
07/31/2023
- 10:18 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Possibly more helpful, here's the last instance of it passing on ubuntu 20.04 main:
https://pulpito.ceph.com/teuthol...
- 08:09 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Here's a job where the test passed on ubuntu. Could provide some clues to what changed:
https://pulpito.ceph.com/yur...
- 04:44 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- A similar issue was reported and solved in https://tracker.ceph.com/issues/57050. A note from Casey:...
- 04:12 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Found an instance of this in the smoke suite and got some more information from the coredump here: https://tracker.ce...
- 06:50 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- https://pulpito.ceph.com/yuriw-2023-07-28_14:23:59-fs-wip-yuri-testing-2023-07-25-0833-reef-distro-default-smithi/735...
- 08:54 PM Bug #62228: "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- Possibly more helpful, here's the last instance of it passing on ubuntu 20.04 main:
https://pulpito.ceph.com/teuthol...
- 08:45 PM Bug #62228: "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- Instance from June of this year: https://pulpito.ceph.com/yuriw-2023-06-24_13:58:32-smoke-reef-distro-default-smithi/...
- 04:01 PM Bug #62228: "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- Marked it as "related" rather than a dupe to keep visibility in the smoke suite.
- 04:00 PM Bug #62228 (New): "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- 03:59 PM Bug #62228: "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- From Venky's comment on the original tracker, it seems to pop up "once in awhile". So that could explain it. Best to ...
- 03:55 PM Bug #62228: "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- Why did it not pop up in smoke before then?
- 03:53 PM Bug #62228 (Duplicate): "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- 03:53 PM Bug #62228: "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- This actually looks like a dupe of https://tracker.ceph.com/issues/57206.
- 03:37 PM Bug #62228: "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- Installed more debug symbols to get a clearer picture:...
- 04:03 PM Bug #62188: AttributeError: 'RemoteProcess' object has no attribute 'read'
- Rishabh Dave wrote:
> I spent a good amount of time with this ticket. The reason for this failure is unclear from lo...
- 01:45 PM Bug #62188: AttributeError: 'RemoteProcess' object has no attribute 'read'
- I spent a good amount of time with this ticket. The reason for this failure is unclear from logs. There's no tracebac...
- 02:09 PM Bug #62246: qa/cephfs: test_mount_mon_and_osd_caps_present_mds_caps_absent fails
- This failure might be same as the one reported here - https://tracker.ceph.com/issues/62188. If the cause of both the...
- 02:05 PM Bug #62246 (Fix Under Review): qa/cephfs: test_mount_mon_and_osd_caps_present_mds_caps_absent fails
- Test method test_mount_mon_and_osd_caps_present_mds_caps_absent (in test_multifs_auth.TestClientsWithoutAuth) fails. ...
- 01:11 PM Bug #62245: qa/workunits/libcephfs/test.sh failed: [ FAILED ] LibCephFS.DelegTimeout
- The *DelegTimeout* test case itself is buggy:...
- 01:04 PM Bug #62245 (Fix Under Review): qa/workunits/libcephfs/test.sh failed: [ FAILED ] LibCephFS.Dele...
- /teuthology/vshankar-2023-07-25_11:29:34-fs-wip-vshankar-testing-20230725.043804-testing-default-smithi/7350738
<p...
- 12:57 PM Bug #62208: mds: use MDSRank::abort to ceph_abort so necessary sync is done
- backport note - we _might_ need to pull in additional commits for p/q.
- 12:45 PM Bug #62243 (In Progress): qa/cephfs: test_snapshots.py fails because of a missing method
- 12:35 PM Bug #62243 (Resolved): qa/cephfs: test_snapshots.py fails because of a missing method
- @test_disallow_monitor_managed_snaps_for_fs_pools@ (in @test_snapshot.TestMonSnapsAndFsPools@) fails because method "...
- 12:42 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- Rishabh Dave wrote:
> Venky Shankar wrote:
> > /a/vshankar-2023-07-26_04:54:56-fs-wip-vshankar-testing-20230725.053...
- 12:26 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- Venky Shankar wrote:
> /a/vshankar-2023-07-26_04:54:56-fs-wip-vshankar-testing-20230725.053049-testing-default-smith...
- 12:35 PM Bug #61957: test_client_limits.TestClientLimits.test_client_release_bug fails
- Venky, this seems to be the same issue as https://tracker.ceph.com/issues/62229, and I have a PR to fix it.
[EDIT] S...
- 12:28 PM Bug #62236 (Fix Under Review): qa: run nfs related tests with fs suite
- 09:26 AM Bug #62236 (Pending Backport): qa: run nfs related tests with fs suite
- Right now, its a part or orch suite (orch:cephadm) which Patrick told is due to legacy reasons. Needs to be under fs ...
- 12:15 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- /a/yuriw-2023-07-26_15:54:22-rados-wip-yuri6-testing-2023-07-24-0819-pacific-distro-default-smithi/7353337
/a/yuriw-...
- 11:38 AM Bug #62221 (In Progress): Test failure: test_add_ancestor_and_child_directory (tasks.cephfs.test_...
- 06:23 AM Bug #62221: Test failure: test_add_ancestor_and_child_directory (tasks.cephfs.test_mirroring.Test...
- Jos, please take this one.
- 06:23 AM Bug #62221: Test failure: test_add_ancestor_and_child_directory (tasks.cephfs.test_mirroring.Test...
- https://pulpito.ceph.com/yuriw-2023-07-28_14:23:59-fs-wip-yuri-testing-2023-07-25-0833-reef-distro-default-smithi/735...
- 11:29 AM Bug #51964 (In Progress): qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- 10:53 AM Backport #62242 (Resolved): pacific: mds: linkmerge assert check is incorrect in rename codepath
- https://github.com/ceph/ceph/pull/52726
- 10:53 AM Backport #62241 (Resolved): quincy: mds: linkmerge assert check is incorrect in rename codepath
- https://github.com/ceph/ceph/pull/52725
- 10:53 AM Backport #62240 (Resolved): reef: mds: linkmerge assert check is incorrect in rename codepath
- https://github.com/ceph/ceph/pull/52724
- 10:51 AM Bug #61879 (Pending Backport): mds: linkmerge assert check is incorrect in rename codepath
- 07:07 AM Bug #50821: qa: untar_snap_rm failure during mds thrashing
- This popped up again with centos 9.stream, but I don't think it has anything to do with the distro. ref: /a/yuriw-2023-07-26...
- 06:55 AM Bug #59067: mds: add cap acquisition throttled event to MDR
- Leonid Usov wrote:
> Sorry for the confusion, Dhairya.
> Venky has assigned this to me but at that time I hadn't y...
- 04:24 AM Bug #62126: test failure: suites/blogbench.sh stops running
- Patrick Donnelly wrote:
> Seen here and probably elsewhere: /teuthology/yuriw-2023-07-10_00:47:51-fs-reef-distro-def...
07/29/2023
- 08:04 AM Bug #59067: mds: add cap acquisition throttled event to MDR
- Sorry for the confusion, Dhairya.
Venky has assigned this to me but at that time I hadn't yet been a member of the ...
- 08:01 AM Bug #59067 (Fix Under Review): mds: add cap acquisition throttled event to MDR
- 02:47 AM Bug #62218 (Fix Under Review): mgr/snap_schedule: missing fs argument on command-line gives unexp...
- 02:42 AM Bug #62229 (Fix Under Review): log_channel(cluster) log [WRN] : client.7719 does not advance its ...
- The *test_client_oldest_tid* test case was triggered first:...
- 02:31 AM Bug #62229 (Fix Under Review): log_channel(cluster) log [WRN] : client.7719 does not advance its ...
- https://pulpito.ceph.com/vshankar-2023-07-25_11:29:34-fs-wip-vshankar-testing-20230725.043804-testing-default-smithi/...
07/28/2023
- 11:27 PM Bug #62228: "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- From gdb:...
- 08:31 PM Bug #62228 (Resolved): "Segmentation fault" (['libcephfs/test.sh']) in smoke on reef
- This is for 18.2.0
Run: https://pulpito.ceph.com/yuriw-2023-07-28_18:16:55-smoke-reef-release-distro-default-smith...
- 08:57 PM Bug #62227: Error "dbench: command not found" in smoke on reef
- Seen in the fs suite as well
https://pulpito.ceph.com/yuriw-2023-07-26_14:34:38-fs-reef-release-distro-default-smi...
- 08:41 PM Bug #62227: Error "dbench: command not found" in smoke on reef
- Comments from a chat discussion...
- 08:27 PM Bug #62227 (Fix Under Review): Error "dbench: command not found" in smoke on reef
- This is for 18.2.0
Run: https://pulpito.ceph.com/yuriw-2023-07-28_18:16:55-smoke-reef-release-distro-default-smith...
- 04:35 PM Backport #62147: reef: qa: adjust fs:upgrade to use centos_8 yaml
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52618
merged
- 04:04 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- /a/yuriw-2023-07-19_14:33:14-rados-wip-yuri11-testing-2023-07-18-0927-pacific-distro-default-smithi/7343428
- 11:19 AM Bug #62221 (In Progress): Test failure: test_add_ancestor_and_child_directory (tasks.cephfs.test_...
- /a/yuriw-2023-07-26_14:34:38-fs-reef-release-distro-default-smithi/7353194...
- 10:24 AM Bug #59067: mds: add cap acquisition throttled event to MDR
- ah, didn't know you took this tracker :)
- 08:59 AM Bug #62217: ceph_fs.h: add separate owner_{u,g}id fields
- Stéphane Graber pointed out [2] that there are users who want to use cephfs idmapped mounts
with MDS versions which don'...
- 06:32 AM Bug #62217 (Resolved): ceph_fs.h: add separate owner_{u,g}id fields
- This task is about adding separate fields to pass inode owner's UID/GID for operations which create new inodes:
CEPH...
- 08:05 AM Bug #62218 (Pending Backport): mgr/snap_schedule: missing fs argument on command-line gives unexp...
- The first fs in the fsmap is taken as the default fs for all snap_schedule commands.
This leads to unexpected result...
- 06:43 AM Documentation #62216 (Closed): doc: snapshot_clone_delay is not documented
- 06:02 AM Documentation #62216 (Closed): doc: snapshot_clone_delay is not documented
- mgr/volumes/snapshot_clone_delay can also be configured, but it is missing from the docs.
- 05:51 AM Feature #62215 (Rejected): libcephfs: Allow monitoring for any file changes like inotify
- I wanted to add a Ceph backend for rclone (https://rclone.org/) but it turns out that there is no way to monitor for ...
- 02:06 AM Backport #62191 (In Progress): quincy: mds: replay thread does not update some essential perf cou...
- 02:05 AM Backport #62190 (In Progress): pacific: mds: replay thread does not update some essential perf co...
- 02:04 AM Backport #62189 (In Progress): reef: mds: replay thread does not update some essential perf counters
07/27/2023
- 01:32 PM Bug #62208 (Fix Under Review): mds: use MDSRank::abort to ceph_abort so necessary sync is done
- The MDS calls ceph_abort("msg") in various places. If there are any pending cluster log messages to be sent to the mon... - 12:06 PM Feature #62207 (New): Report cephfs-nfs service on ceph -s
- 12:06 PM Feature #62207 (New): Report cephfs-nfs service on ceph -s
- 11:10 AM Backport #62202 (Resolved): pacific: crash: MDSRank::send_message_client(boost::intrusive_ptr<Mes...
- https://github.com/ceph/ceph/pull/52844
- 11:10 AM Backport #62201 (Resolved): reef: crash: MDSRank::send_message_client(boost::intrusive_ptr<Messag...
- https://github.com/ceph/ceph/pull/52846
- 11:10 AM Backport #62200 (Resolved): quincy: crash: MDSRank::send_message_client(boost::intrusive_ptr<Mess...
- https://github.com/ceph/ceph/pull/52845
- 11:10 AM Backport #62199 (Duplicate): pacific: mds: couldn't successfully calculate the locker caps
- 11:10 AM Backport #62198 (Duplicate): reef: mds: couldn't successfully calculate the locker caps
- 11:09 AM Backport #62197 (Duplicate): quincy: mds: couldn't successfully calculate the locker caps
- 11:09 AM Bug #60625 (Pending Backport): crash: MDSRank::send_message_client(boost::intrusive_ptr<Message> ...
- 11:09 AM Bug #61781 (Pending Backport): mds: couldn't successfully calculate the locker caps
- 08:15 AM Backport #62194 (Resolved): quincy: ceph: corrupt snap message from mds1
- https://github.com/ceph/ceph/pull/52849
- 08:15 AM Backport #62193 (Resolved): pacific: ceph: corrupt snap message from mds1
- https://github.com/ceph/ceph/pull/52848
- 08:15 AM Backport #62192 (Resolved): reef: ceph: corrupt snap message from mds1
- https://github.com/ceph/ceph/pull/52847
- 08:15 AM Backport #62191 (Resolved): quincy: mds: replay thread does not update some essential perf counters
- https://github.com/ceph/ceph/pull/52683
- 08:15 AM Backport #62190 (Resolved): pacific: mds: replay thread does not update some essential perf counters
- https://github.com/ceph/ceph/pull/52682
- 08:15 AM Backport #62189 (Resolved): reef: mds: replay thread does not update some essential perf counters
- https://github.com/ceph/ceph/pull/52681
- 08:09 AM Bug #61217 (Pending Backport): ceph: corrupt snap message from mds1
- Xiubo, this needs backporting to p/q/r, yes?
- 08:08 AM Bug #61864 (Pending Backport): mds: replay thread does not update some essential perf counters
- 08:00 AM Bug #62187 (Fix Under Review): iozone: command not found
- 06:32 AM Bug #62187 (Fix Under Review): iozone: command not found
- /a/vshankar-2023-07-26_04:54:56-fs-wip-vshankar-testing-20230725.053049-testing-default-smithi/7352594...
- 07:58 AM Bug #62188: AttributeError: 'RemoteProcess' object has no attribute 'read'
- FWIW - this seems to be happening with multifs-auth tests in fs suite
/a/vshankar-2023-07-26_04:54:56-fs-wip-vshan...
- 07:55 AM Bug #62188 (New): AttributeError: 'RemoteProcess' object has no attribute 'read'
- /a/vshankar-2023-07-26_04:54:56-fs-wip-vshankar-testing-20230725.053049-testing-default-smithi/7352553...
- 06:19 AM Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
- Kotresh Hiremath Ravishankar wrote:
> Neeraj Pratap Singh wrote:
> > I am thinking to move ahead with this approach...
- 06:13 AM Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
- Neeraj Pratap Singh wrote:
> I am thinking to move ahead with this approach: Allow the cloning only when (pending_cl...
- 06:09 AM Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
- I am thinking to move ahead with this approach: Allow the cloning only when (pending_clones + in-progress_clones) <= ...
- 05:43 AM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- /a/vshankar-2023-07-26_04:54:56-fs-wip-vshankar-testing-20230725.053049-testing-default-smithi/7352573...
- 04:50 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- Jos, please take this one.
07/26/2023
- 07:01 PM Bug #62126: test failure: suites/blogbench.sh stops running
- Seen here and probably elsewhere: /teuthology/yuriw-2023-07-10_00:47:51-fs-reef-distro-default-smithi/7331743/teuthol...
- 10:50 AM Backport #62178 (In Progress): reef: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror da...
- 10:38 AM Backport #62178 (Resolved): reef: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemo...
- https://github.com/ceph/ceph/pull/52656
- 10:48 AM Backport #62177 (In Progress): pacific: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror...
- 10:38 AM Backport #62177 (Resolved): pacific: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror da...
- https://github.com/ceph/ceph/pull/52654
- 10:46 AM Backport #62176 (In Progress): quincy: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror ...
- 10:38 AM Backport #62176 (In Progress): quincy: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror ...
- https://github.com/ceph/ceph/pull/52653
- 10:31 AM Bug #61182 (Pending Backport): qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon a...
- 10:05 AM Feature #61908 (Fix Under Review): mds: provide configuration for trim rate of the journal
- 09:33 AM Bug #52439 (Can't reproduce): qa: acls does not compile on centos stream
- 08:43 AM Bug #62036 (Fix Under Review): src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
- 06:32 AM Bug #62036: src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
- The mds became *up:active* before receiving the last *cache_rejoin ack*:...
- 05:39 AM Bug #62036 (In Progress): src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
- 08:08 AM Backport #59264 (Resolved): pacific: pacific scrub ~mds_dir causes stray related ceph_assert, abo...
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50814
Merged.
- 08:08 AM Backport #59261 (Resolved): pacific: mds: stray directories are not purged when all past parents ...
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50814
Merged.
- 06:26 AM Backport #61346 (Resolved): pacific: mds: fsstress.sh hangs with multimds (deadlock between unlin...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51686
Merged.
- 04:15 AM Bug #57014 (Resolved): cephfs-top: add an option to dump the computed values to stdout
- 04:13 AM Bug #58823 (Resolved): cephfs-top: navigate to home screen when no fs
- 04:12 AM Bug #59553 (Resolved): cephfs-top: fix help text for delay
- 04:12 AM Bug #58677 (Resolved): cephfs-top: test the current python version is supported
- 04:10 AM Documentation #57673 (Resolved): doc: document the relevance of mds_namespace mount option
- 04:09 AM Backport #58408 (Resolved): pacific: doc: document the relevance of mds_namespace mount option
- 04:03 AM Backport #59482 (Resolved): pacific: cephfs-top, qa: test the current python version is supported
- 04:02 AM Backport #58984 (Resolved): pacific: cephfs-top: navigate to home screen when no fs
07/25/2023
- 07:09 PM Bug #62164 (Fix Under Review): qa: "cluster [ERR] MDS abort because newly corrupt dentry to be co...
- 02:13 PM Bug #62164 (Pending Backport): qa: "cluster [ERR] MDS abort because newly corrupt dentry to be co...
- /teuthology/yuriw-2023-07-20_14:36:46-fs-wip-yuri-testing-2023-07-19-1340-pacific-distro-default-smithi/7344784/1$
...
- 04:38 PM Bug #58813 (Resolved): cephfs-top: Sort menu doesn't show 'No filesystem available' screen when a...
- 04:38 PM Bug #58814 (Resolved): cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 04:37 PM Backport #58865 (Resolved): quincy: cephfs-top: Sort menu doesn't show 'No filesystem available' ...
- 03:07 PM Backport #58865: quincy: cephfs-top: Sort menu doesn't show 'No filesystem available' screen when...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50365
merged
- 04:37 PM Backport #58985 (Resolved): quincy: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 03:08 PM Backport #58985: quincy: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50595
merged
- 03:55 PM Bug #52386 (Resolved): client: fix dump mds twice
- 03:55 PM Backport #52442 (Resolved): pacific: client: fix dump mds twice
- 03:15 PM Backport #52442: pacific: client: fix dump mds twice
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51247
merged
- 03:32 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- /a/yuriw-2023-07-21_02:03:58-rados-wip-yuri7-testing-2023-07-20-0727-distro-default-smithi/7346244
- 05:06 AM Bug #62084 (Fix Under Review): task/test_nfs: AttributeError: 'TestNFS' object has no attribute '...
- 03:20 PM Bug #59107: MDS imported_inodes metric is not updated.
- https://github.com/ceph/ceph/pull/51699 merged
- 03:19 PM Backport #59725: pacific: mds: allow entries to be removed from lost+found directory
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51687
merged
- 03:17 PM Backport #59721: pacific: qa: run scrub post disaster recovery procedure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51610
merged
- 03:17 PM Backport #61235: pacific: mds: a few simple operations crash mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51609
merged
- 03:16 PM Backport #59482: pacific: cephfs-top, qa: test the current python version is supported
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51353
merged
- 03:15 PM Backport #59017: pacific: snap-schedule: handle non-existent path gracefully during snapshot crea...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51246
merged
- 03:12 PM Backport #58984: pacific: cephfs-top: navigate to home screen when no fs
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50737
merged
- 03:09 PM Backport #59021: quincy: mds: warning `clients failing to advance oldest client/flush tid` seen w...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50785
merged
- 03:08 PM Backport #59016: quincy: snap-schedule: handle non-existent path gracefully during snapshot creation
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50780
merged
- 10:40 AM Bug #62160 (Duplicate): mds: MDS abort because newly corrupt dentry to be committed
- /a/yuriw-2023-07-20_14:36:46-fs-wip-yuri-testing-2023-07-19-1340-pacific-distro-default-smithi/7344784...
- 09:57 AM Tasks #62159 (In Progress): qa: evaluate mds_partitioner
- Evaluation types
* Various workloads using benchmark tools to mimic realistic scenarios
* unittest
* qa suite for ...
- 09:51 AM Bug #62158 (New): mds: quick suspend or abort metadata migration
- This feature has been discussed in the CDS Squid CephFS session https://pad.ceph.com/p/cds-squid-mds-partitioner-2023...
- 09:41 AM Feature #62157 (In Progress): mds: working set size tracker
- This feature has been discussed in the CDS Squid CephFS session https://pad.ceph.com/p/cds-squid-mds-partitioner-2023...
- 07:32 AM Backport #62147 (In Progress): reef: qa: adjust fs:upgrade to use centos_8 yaml
- 07:27 AM Backport #62147 (In Progress): reef: qa: adjust fs:upgrade to use centos_8 yaml
- https://github.com/ceph/ceph/pull/52618
- 07:24 AM Bug #62146 (Pending Backport): qa: adjust fs:upgrade to use centos_8 yaml
- 04:20 AM Bug #62146 (Fix Under Review): qa: adjust fs:upgrade to use centos_8 yaml
- 04:19 AM Bug #62146 (Pending Backport): qa: adjust fs:upgrade to use centos_8 yaml
- Since n/o/p release packages aren't built for centos_9, those tests are failing with package issues.
- 06:04 AM Cleanup #61482: mgr/nfs: remove deprecated `nfs export delete` and `nfs cluster delete` interfaces
- Dhairya, let's get the deprecated warning in place and plan to remove the interface a couple of releases down.
- 05:14 AM Bug #62126: test failure: suites/blogbench.sh stops running
- Rishabh, can you run blogbench with a verbose flag (if any) to see exactly which operation it gets stuck in?
- 05:12 AM Bug #61909 (Can't reproduce): mds/fsmap: fs fail cause to mon crash
- > Yes, there's really no other way, because have client use rbd storage in this cluster, I am in a hurry to recover c...
- 05:04 AM Bug #62073 (Duplicate): AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- Duplicate of https://tracker.ceph.com/issues/62084
07/24/2023
- 06:37 PM Bug #48673: High memory usage on standby replay MDS
- I've confirmed that `fs set auxtel allow_standby_replay false` does free the memory leak in the standby mds but doesn...
- 06:20 PM Bug #48673: High memory usage on standby replay MDS
- This issue triggered again this morning for the first time in 2 weeks. What's noteworthy is that the active mds seem...
- 04:19 PM Backport #61900 (Resolved): pacific: pybind/cephfs: holds GIL during rmdir
- 03:03 PM Backport #61900: pacific: pybind/cephfs: holds GIL during rmdir
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52323
merged
- 03:08 PM Bug #52439: qa: acls does not compile on centos stream
- I had a conversation with Patrick last week about this ticket. He doesn't remember what this ticket was even about. I...
- 12:39 PM Bug #62126 (New): test failure: suites/blogbench.sh stops running
- I found this failure while running integration tests for a few CephFS PRs. This failure occurred even after running th...
- 11:40 AM Bug #61182 (Fix Under Review): qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon a...
- 08:22 AM Bug #62123 (New): mds: detect out-of-order locking
- From Patrick's comments in https://github.com/ceph/ceph/pull/52522#discussion_r1269575242.
We need to make sure th...
- 04:56 AM Feature #61908: mds: provide configuration for trim rate of the journal
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > OK, this is what I have in mind:
> >
> > Introduce an MDS con...
07/21/2023
- 07:38 PM Backport #58991 (In Progress): quincy: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- 07:38 PM Backport #58992 (In Progress): pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- 07:20 PM Backport #62028 (In Progress): pacific: mds/MDSAuthCaps: "fsname", path, root_squash can't be in ...
- 07:07 PM Backport #62027 (In Progress): quincy: mds/MDSAuthCaps: "fsname", path, root_squash can't be in s...
- 06:45 PM Backport #62026 (In Progress): reef: mds/MDSAuthCaps: "fsname", path, root_squash can't be in sam...
- 06:34 PM Backport #59015 (In Progress): pacific: Command failed (workunit test fs/quota/quota.sh) on smith...
- 06:21 PM Backport #59014 (In Progress): quincy: Command failed (workunit test fs/quota/quota.sh) on smithi...
- 06:04 PM Backport #59410 (In Progress): reef: Command failed (workunit test fs/quota/quota.sh) on smithi08...
- 05:11 PM Bug #61947: mds: enforce a limit on the size of a session in the sessionmap
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > This one's interesting. I did mention in the standup yesterday t...
- 01:21 AM Bug #61947: mds: enforce a limit on the size of a session in the sessionmap
- Venky Shankar wrote:
> This one's interesting. I did mention in the standup yesterday that I've seen this earlier an...
- 04:00 PM Bug #62114 (Fix Under Review): mds: adjust cap acquistion throttle defaults
- 03:53 PM Bug #62114 (Pending Backport): mds: adjust cap acquistion throttle defaults
- They are too conservative and rarely trigger in production clusters.
- 08:47 AM Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- Milind Changire wrote:
> Venky,
> The upstream user has also sent across debug (level 20) logs for ceph-fuse as wel...
- 08:45 AM Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- Venky,
The upstream user has also sent across debug (level 20) logs for ceph-fuse as well as mds.
Unfortunately, th...
- 04:41 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Jos, as per https://tracker.ceph.com/issues/61182#note-31, please check if the volume deletions (and probably creatio...
- 01:33 AM Backport #61797 (Resolved): reef: client: only wait for write MDS OPs when unmounting
- 01:22 AM Bug #61897 (Duplicate): qa: rados:mgr fails with MDS_CLIENTS_LAGGY
07/20/2023
- 11:12 PM Backport #61797: reef: client: only wait for write MDS OPs when unmounting
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52302
merged
- 09:18 AM Backport #61347 (Resolved): reef: mds: fsstress.sh hangs with multimds (deadlock between unlink a...
- 09:17 AM Backport #59708 (Resolved): reef: Mds crash and fails with assert on prepare_new_inode
- 06:28 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- So, here is the order of tasks unwinding:
HA workunit finishes:...
- 06:15 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Greg Farnum wrote:
> Venky Shankar wrote:
> > Greg Farnum wrote:
> > > Oh, I guess the daemons are created via the...
- 05:47 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Venky Shankar wrote:
> Greg Farnum wrote:
> > Oh, I guess the daemons are created via the qa/suites/fs/mirror-ha/ce...
- 05:31 AM Bug #62072: cephfs-mirror: do not run concurrent C_RestartMirroring context
- (discussion continued on the PR)
- 05:22 AM Bug #62072: cephfs-mirror: do not run concurrent C_RestartMirroring context
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > If cephfs-mirror daemon faces any issues connecting to the cluster...
- 05:17 AM Bug #62072: cephfs-mirror: do not run concurrent C_RestartMirroring context
- Dhairya Parmar wrote:
> If cephfs-mirror daemon faces any issues connecting to the cluster or error accessing local ...
- 02:14 AM Backport #61735 (Resolved): reef: mgr/stats: exception ValueError :invalid literal for int() with...
- 02:14 AM Backport #61694 (Resolved): reef: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't i...
- 12:33 AM Bug #62096 (Duplicate): mds: infinite rename recursion on itself
- https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/7337403
I don...
07/19/2023
- 06:29 PM Feature #61908: mds: provide configuration for trim rate of the journal
- Venky Shankar wrote:
> OK, this is what I have in mind:
>
> Introduce an MDS config key that controls the rate of...
- 06:34 AM Feature #61908: mds: provide configuration for trim rate of the journal
- OK, this is what I have in mind:
Introduce an MDS config key that controls the rate of trimming - number of log se...
- 04:13 PM Feature #62086 (Fix Under Review): mds: print locks when dumping ops
- 04:09 PM Feature #62086 (Pending Backport): mds: print locks when dumping ops
- To help identify where an operation is stuck obtaining locks.
- 03:53 PM Backport #61959: reef: mon: block osd pool mksnap for fs pools
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52399
merged - 03:52 PM Backport #61424: reef: mon/MDSMonitor: daemon booting may get failed if mon handles up:boot beaco...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52242
merged - 03:52 PM Backport #61413: reef: mon/MDSMonitor: do not trigger propose on error from prepare_update
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52238
merged - 03:51 PM Backport #61410: reef: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52232
merged - 03:50 PM Backport #61759: reef: tools/cephfs/first-damage: unicode decode errors break iteration
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52231
merged - 03:48 PM Backport #61693: reef: mon failed to return metadata for mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52229
merged - 03:40 PM Backport #61735: reef: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52126
merged - 03:40 PM Backport #61694: reef: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install the ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52073
merged - 03:39 PM Backport #61347: reef: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegr...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51684
merged - 03:39 PM Backport #59724: reef: mds: allow entries to be removed from lost+found directory
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51607
merged - 03:38 PM Feature #61863: mds: issue a health warning with estimated time to complete replay
- Venky Shankar wrote:
> Patrick, the "MDS behind trimming" warning during up:replay is kind of expected in cases wher... - 03:37 PM Backport #59708: reef: Mds crash and fails with assert on prepare_new_inode
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51506
merged
- 03:36 PM Backport #59719: reef: client: read wild pointer when reconnect to mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51484
merged
- 03:20 PM Bug #62084 (Resolved): task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph...
- ...
- 02:52 PM Feature #62083 (In Progress): CephFS multi-client guaranteed-consistent snapshots
- This tracker is to discuss and implement guaranteed-consistent snapshots of subdirectories, when using CephFS across m...
- 01:58 PM Bug #62077: mgr/nfs: validate path when modifying cephfs export
- Dhairya, this should be straightforward with the path validation helper you introduced, right?
- 11:04 AM Bug #62077 (In Progress): mgr/nfs: validate path when modifying cephfs export
- ...
- 01:27 PM Bug #62052: mds: deadlock when getattr changes inode lockset
- Added a few more notes about reproduction.
- 11:35 AM Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- Milind Changire wrote:
> "Similar crash report in ceph-users mailing list":https://lists.ceph.io/hyperkitty/list/cep... - 06:59 AM Backport #62068 (In Progress): pacific: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_check...
- Commits appended in https://github.com/ceph/ceph/pull/50814
- 06:58 AM Backport #62069 (In Progress): reef: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.T...
- commits appended in https://github.com/ceph/ceph/pull/50813
- 06:58 AM Backport #62070 (In Progress): quincy: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks...
- Commits appended in https://github.com/ceph/ceph/pull/50815
- 06:57 AM Bug #61897 (Resolved): qa: rados:mgr fails with MDS_CLIENTS_LAGGY
- 06:57 AM Bug #61897: qa: rados:mgr fails with MDS_CLIENTS_LAGGY
- Fixed in https://tracker.ceph.com/issues/61907
- 06:55 AM Bug #61781: mds: couldn't successfully calculate the locker caps
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > Xiubo - similar failure here: /a/vshankar-20...
- 06:44 AM Bug #62074 (Resolved): cephfs-shell: ls command has help message of cp command
- CephFS:~/>>> help ls
usage: ls [-h] [-l] [-r] [-H] [-a] [-S] [paths [paths ...]]...
- 06:32 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- From GChat:...
- 05:17 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Venky Shankar wrote:
> Xiubo, this needs backported to reef, yes?
It's already in reef.
- 04:52 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- A bit unrelated, but mentioning here for completeness:
/a/yuriw-2023-07-14_23:37:57-fs-wip-yuri8-testing-2023-07-1... - 04:23 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Xiubo, this needs backported to reef, yes?
- 03:04 AM Bug #56698 (Fix Under Review): client: FAILED ceph_assert(_size == 0)
- 02:42 AM Bug #56698: client: FAILED ceph_assert(_size == 0)
- Venky Shankar wrote:
> Xiubo, do we have the core for this crash. If you have the debug env, then figuring out which...
- 03:03 AM Bug #61913 (Closed): client: crash the client more gracefully
- Will fix this in https://tracker.ceph.com/issues/56698.
- 12:54 AM Bug #62073: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- /a/yuriw-2023-07-15_23:37:56-rados-wip-yuri2-testing-2023-07-15-0802-distro-default-smithi/7340872
07/18/2023
- 08:49 PM Bug #62073 (Duplicate): AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- /a/yuriw-2023-07-17_14:37:31-rados-wip-yuri-testing-2023-07-14-1641-distro-default-smithi/7341551...
- 03:40 PM Bug #62072 (Resolved): cephfs-mirror: do not run concurrent C_RestartMirroring context
- If cephfs-mirror daemon faces any issues connecting to the cluster or error accessing local pool or mounting fs then ...
- 03:15 PM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Jos Collin wrote:
> Venky Shankar wrote:
> > Out of the 3 replayer threads, only two exited when the mirror daemon ...
- 03:01 PM Bug #56698: client: FAILED ceph_assert(_size == 0)
- Xiubo, do we have the core for this crash. If you have the debug env, then figuring out which xlist member in MetaSes...
- 02:48 PM Backport #62070 (In Progress): quincy: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks...
- 02:48 PM Backport #62069 (Resolved): reef: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.Test...
- 02:48 PM Backport #62068 (Resolved): pacific: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.T...
- 02:47 PM Bug #59350 (Pending Backport): qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScr...
- 02:43 PM Bug #62067 (New): ffsb.sh failure "Resource temporarily unavailable"
- /a/vshankar-2023-07-12_07:14:06-fs-wip-vshankar-testing-20230712.041849-testing-default-smithi/7334808
Des...
- 02:04 PM Bug #62052 (Fix Under Review): mds: deadlock when getattr changes inode lockset
- 12:36 PM Bug #62052: mds: deadlock when getattr changes inode lockset
- Patrick Donnelly wrote:
> Xiubo Li wrote:
> > Patrick, maybe we should add the detail event when acquiring each loc...
- 12:33 PM Bug #62052: mds: deadlock when getattr changes inode lockset
- Xiubo Li wrote:
> Patrick, maybe we should add the detail event when acquiring each locks ? Then it will be easier t...
- 03:36 AM Bug #62052: mds: deadlock when getattr changes inode lockset
- Patrick, maybe we should add the detail event when acquiring each locks ? Then it will be easier to find the root cau...
- 03:34 AM Bug #62052: mds: deadlock when getattr changes inode lockset
- So the deadlock is between *getattr* and *create* requests.
- 01:56 AM Bug #62052: mds: deadlock when getattr changes inode lockset
- I have a fix I'm polishing to push for a PR. It'll be up soon.
- 01:55 AM Bug #62052 (Pending Backport): mds: deadlock when getattr changes inode lockset
- During a lot of request contention for locks, it's possible for getattr to change the requested locks for the target ...
- 12:45 PM Bug #62058 (Fix Under Review): mds: inode snaplock only acquired for open in create codepath
- 12:43 PM Bug #62058 (Pending Backport): mds: inode snaplock only acquired for open in create codepath
- https://github.com/ceph/ceph/blob/236f8b632fbddcfe9dcdb484561c0fede717fd2f/src/mds/Server.cc#L4612-L4615
It doesn'...
- 12:38 PM Bug #62057 (Fix Under Review): mds: add TrackedOp event for batching getattr/lookup
- 12:36 PM Bug #62057 (Resolved): mds: add TrackedOp event for batching getattr/lookup
- 12:27 PM Bug #61781: mds: couldn't successfully calculate the locker caps
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo - similar failure here: /a/vshankar-2023-07-12_07:14:06-fs-wip-vsh...
- 12:09 PM Bug #61781: mds: couldn't successfully calculate the locker caps
- Venky Shankar wrote:
> Xiubo - similar failure here: /a/vshankar-2023-07-12_07:14:06-fs-wip-vshankar-testing-2023071...
- 11:42 AM Bug #61781: mds: couldn't successfully calculate the locker caps
- Xiubo - similar failure here: /a/vshankar-2023-07-12_07:14:06-fs-wip-vshankar-testing-20230712.041849-testing-default...
- 11:47 AM Backport #61986 (In Progress): pacific: mds: session ls command appears twice in command listing
- 11:47 AM Backport #61986 (In Progress): pacific: mds: session ls command appears twice in command listing
- 11:45 AM Backport #61988 (In Progress): quincy: mds: session ls command appears twice in command listing
- 11:43 AM Backport #61987 (In Progress): reef: mds: session ls command appears twice in command listing
- 11:05 AM Backport #62056 (In Progress): quincy: qa: test_rebuild_moved_file (tasks/data-scan) fails becaus...
- 10:42 AM Backport #62056 (Resolved): quincy: qa: test_rebuild_moved_file (tasks/data-scan) fails because m...
- https://github.com/ceph/ceph/pull/52514
- 11:03 AM Backport #62055 (In Progress): pacific: qa: test_rebuild_moved_file (tasks/data-scan) fails becau...
- 10:42 AM Backport #62055 (Resolved): pacific: qa: test_rebuild_moved_file (tasks/data-scan) fails because ...
- https://github.com/ceph/ceph/pull/52513
- 11:00 AM Backport #62054 (In Progress): reef: qa: test_rebuild_moved_file (tasks/data-scan) fails because ...
- 10:41 AM Backport #62054 (Resolved): reef: qa: test_rebuild_moved_file (tasks/data-scan) fails because mds...
- https://github.com/ceph/ceph/pull/52512
- 10:41 AM Bug #61201 (Pending Backport): qa: test_rebuild_moved_file (tasks/data-scan) fails because mds cr...
- 05:18 AM Bug #61924: tar: file changed as we read it (unless cephfs mounted with norbytes)
- Venky Shankar wrote:
> Hi Harry,
>
> Harry Coin wrote:
> > Ceph: Pacific. When using tar heavily (such as compi... - 03:17 AM Backport #61985 (In Progress): quincy: mds: cap revoke and cap update's seqs mismatched
- 03:14 AM Backport #61984 (In Progress): reef: mds: cap revoke and cap update's seqs mismatched
- 03:12 AM Backport #61983 (In Progress): pacific: mds: cap revoke and cap update's seqs mismatched
- 03:05 AM Backport #62012 (In Progress): pacific: client: dir->dentries inconsistent, both newname and oldn...
- 02:59 AM Backport #62010 (In Progress): quincy: client: dir->dentries inconsistent, both newname and oldna...
- 02:59 AM Backport #62011 (In Progress): reef: client: dir->dentries inconsistent, both newname and oldname...
- 02:45 AM Backport #62042 (In Progress): quincy: client: do not send metrics until the MDS rank is ready
- 02:42 AM Backport #62041 (In Progress): reef: client: do not send metrics until the MDS rank is ready
- 02:41 AM Backport #62040 (In Progress): pacific: client: do not send metrics until the MDS rank is ready
- 02:30 AM Backport #62043 (In Progress): pacific: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- 02:23 AM Backport #62045 (In Progress): quincy: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- 02:20 AM Backport #62044 (In Progress): reef: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
07/17/2023
- 01:11 PM Bug #60669: crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in->firs...
- Unassigning since it's a duplicate and we are waiting for this crash to be reproduced in a teuthology run.
- 11:34 AM Feature #61778: mgr/mds_partitioner: add MDS partitioner module in MGR
- Just FYI - https://github.com/ceph/ceph/pull/52196 disables the balancer by default since it has been a source of per...
- 08:32 AM Backport #62045 (Resolved): quincy: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- https://github.com/ceph/ceph/pull/52498
- 08:32 AM Backport #62044 (Resolved): reef: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- https://github.com/ceph/ceph/pull/52497
- 08:32 AM Backport #62043 (Resolved): pacific: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- https://github.com/ceph/ceph/pull/52499
- 08:32 AM Bug #54460 (Resolved): snaptest-multiple-capsnaps.sh test failure
- https://tracker.ceph.com/issues/59343 is the other ticket attached to the backport.
- 08:32 AM Backport #62042 (Resolved): quincy: client: do not send metrics until the MDS rank is ready
- https://github.com/ceph/ceph/pull/52502
- 08:32 AM Backport #62041 (Resolved): reef: client: do not send metrics until the MDS rank is ready
- https://github.com/ceph/ceph/pull/52501
- 08:31 AM Backport #62040 (Resolved): pacific: client: do not send metrics until the MDS rank is ready
- https://github.com/ceph/ceph/pull/52500
- 08:30 AM Bug #59343 (Pending Backport): qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- 08:29 AM Bug #61523 (Pending Backport): client: do not send metrics until the MDS rank is ready
- 08:26 AM Bug #62036: src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
- BTW, I did not debug into this as it was unrelated to the PRs in the test branch.
This needs triage and RCA.
- 06:47 AM Bug #62036 (Fix Under Review): src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
- /a/vshankar-2023-07-04_11:59:45-fs-wip-vshankar-testing-20230704.040136-testing-default-smithi/7326619...
07/16/2023