Activity
From 02/19/2024 to 03/19/2024
03/18/2024
- 01:25 PM Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- Dhairya Parmar wrote:
> We have https://github.com/ceph/ceph/blob/main/src/common/ceph_releases.h which I'm using to...
- 12:16 PM Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- We have https://github.com/ceph/ceph/blob/main/src/common/ceph_releases.h which I'm using to fetch the last two versi...
- 12:43 PM Bug #64937 (Resolved): reef: qa: AttributeError: 'TestSnapSchedulesSubvolAndGroupArguments' objec...
- The problem was fixed and the PR is merged.
- 08:36 AM Bug #59582 (Resolved): snap-schedule: allow retention spec to specify max number of snaps to retain
- 07:58 AM Bug #64961 (In Progress): ceph-fuse: crash when try to open & trunc a encrypted file
- Mount a kclient and encrypt it, then have a ceph-fuse client try to open & trunc it; though it returned failure w...
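The open-and-truncate step described above can be sketched as follows. This is a hedged illustration, not the actual test case from this tracker: `MOUNT` is a hypothetical ceph-fuse mount of the encrypted directory, and on an ordinary local filesystem the same sequence simply succeeds and leaves an empty file.

```python
# Hedged sketch of the open & trunc operation under test; on an
# fscrypt-encrypted directory reached via ceph-fuse the report says this
# crashes the client, while on a local path it just truncates the file.
import os

def open_and_trunc(path):
    # O_TRUNC on open is the operation at issue
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    os.close(fd)
    return os.stat(path).st_size  # 0 after a successful truncate

# MOUNT = "/mnt/ceph-fuse"  # hypothetical ceph-fuse mount point
# open_and_trunc(MOUNT + "/encdir/file")
```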
- 05:53 AM Bug #55148 (Closed): snap_schedule: remove subvolume(-group) interfaces
- This tracker is no longer relevant since the subvolume and subvolumegroup interfaces have been re-added to snap-schedule.
- 05:38 AM Backport #55579 (In Progress): quincy: snap_schedule: avoid throwing traceback for bad or missing...
- backport is available in quincy
- 05:30 AM Backport #58599 (Resolved): quincy: mon: prevent allocating snapids allocated for CephFS
- commit is available in quincy
- 02:37 AM Feature #58057 (Resolved): cephfs-top: enhance fstop tests to cover testing displayed data
- 02:36 AM Bug #61397 (Resolved): cephfs-top: enhance --dump code to include the missing fields
- 02:35 AM Backport #63553 (Resolved): reef: cephfs-top: enhance --dump code to include the missing fields
- 02:13 AM Bug #62580: testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
- https://pulpito.ceph.com/vshankar-2024-03-14_16:52:41-fs-wip-vshankar-testing1-quincy-2024-03-14-0655-quincy-testing-...
03/17/2024
03/16/2024
- 04:30 PM Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- Venky Shankar wrote:
> lei liu wrote:
> > We recently encountered a similar issue, may I ask if there is a solution...
03/15/2024
- 02:19 PM Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- I think this happens when there are concurrent lookups and deletes under a directory. _readdir_cache_cb() has code li...
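The hypothesis above (cached dentries vanishing under the callback's feet) can be illustrated schematically. This is plain Python with invented names, not the actual `Client::_readdir_cache_cb` code; it only shows why each cached entry must be re-validated before being dereferenced when deletes race with the walk.

```python
# Schematic sketch of the suspected race: a readdir callback walks a
# snapshot of the cached dentry list while concurrent unlinks remove
# entries, so each entry is re-checked before use.
cache = {"a": 1, "b": 2, "c": 3}   # dirent -> inode, stand-in for the dentry cache
snapshot = list(cache)             # what the callback started iterating over

def concurrent_unlink(name):
    cache.pop(name, None)          # a delete racing with the readdir

def readdir_cached(on_entry):
    served = []
    for name in snapshot:
        if name not in cache:      # validity check: entry vanished mid-walk
            continue               # skipping avoids dereferencing a dead entry
        served.append(on_entry(name, cache[name]))
    return served

concurrent_unlink("b")             # the delete lands between snapshot and walk
entries = readdir_cached(lambda n, ino: (n, ino))
```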
- 02:01 PM Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- lei liu wrote:
> We recently encountered a similar issue, may I ask if there is a solution?
For now, restart ceph...
- 01:36 PM Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- Venky Shankar wrote:
> lei liu wrote:
> > We recently encountered a similar issue, may I ask if there is a solution...
- 01:26 PM Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- lei liu wrote:
> We recently encountered a similar issue, may I ask if there is a solution?
Which version was thi...
- 12:10 PM Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- We recently encountered a similar issue, may I ask if there is a solution?
- 01:52 PM Backport #64223: reef: qa: flush journal may cause timeouts of `scrub status`
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55915
merged
- 01:51 PM Backport #64582: reef: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55829
merged
- 01:50 PM Backport #64222: reef: Test failure: test_filesystem_sync_stuck_for_around_5s (tasks.cephfs.test_...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55746
merged
- 01:50 PM Backport #64075: reef: testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.Test...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55743
merged
- 01:49 PM Backport #64045: reef: mds: use explicitly sized types for network and disk encoding
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55742
merged
- 01:48 PM Backport #63553: reef: cephfs-top: enhance --dump code to include the missing fields
- Jos Collin wrote:
> https://github.com/ceph/ceph/pull/54520
merged
- 01:47 PM Backport #62026: reef: mds/MDSAuthCaps: "fsname", path, root_squash can't be in same cap with uid...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52581
merged
- 01:38 PM Bug #64947 (Fix Under Review): qa: fix continued use of log-whitelist
- 01:18 PM Bug #64947 (Fix Under Review): qa: fix continued use of log-whitelist
- 12:01 PM Backport #64896 (In Progress): squid: mds: QuiesceDb to manage subvolume quiesce state
- PR: https://github.com/ceph/ceph/pull/56202
- 10:39 AM Bug #64616: selinux denials with centos9.stream
- https://pulpito.ceph.com/yuriw-2024-03-08_16:17:06-fs-wip-yuri10-testing-2024-03-07-1242-reef-distro-default-smithi/7...
- 10:30 AM Bug #64946 (New): qa: unable to locate package libcephfs1
- https://pulpito.ceph.com/yuriw-2024-03-08_16:17:06-fs-wip-yuri10-testing-2024-03-07-1242-reef-distro-default-smithi/7...
- 09:43 AM Backport #64919 (In Progress): reef: qa: enhance labeled perf counters test for cephfs-mirror
- 09:29 AM Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- Dhairya Parmar wrote:
> Venky Shankar wrote:
> > Dhairya Parmar wrote:
> > > im in strong favour of having this do...
- 09:02 AM Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > im in strong favour of having this done, considering the caveat di... - 09:17 AM Backport #64918 (In Progress): squid: qa: enhance labeled perf counters test for cephfs-mirror
- 06:18 AM Backport #64941 (New): quincy: qa: Add multifs root_squash testcase
- 06:18 AM Backport #64940 (New): reef: qa: Add multifs root_squash testcase
- 06:17 AM Backport #64939 (New): squid: qa: Add multifs root_squash testcase
- 06:07 AM Bug #64641 (Pending Backport): qa: Add multifs root_squash testcase
- 04:47 AM Support #57952: Pacific: the buffer_anon_bytes of ceph-mds is too large
- Dan van der Ster wrote:
> There was a similar case back in nautilus:
> * https://lists.ceph.io/hyperkitty/list/ceph...
- 12:42 AM Bug #48562 (New): qa: scrub - object missing on disk; some files may be lost
- https://pulpito.ceph.com/yuriw-2024-03-12_14:59:27-fs-wip-yuri11-testing-2024-03-11-0838-reef-distro-default-smithi/7...
- 12:40 AM Bug #62221: Test failure: test_add_ancestor_and_child_directory (tasks.cephfs.test_mirroring.Test...
- /teuthology/yuriw-2024-03-12_14:59:27-fs-wip-yuri11-testing-2024-03-11-0838-reef-distro-default-smithi/7593782/teutho...
- 12:33 AM Bug #64937 (Resolved): reef: qa: AttributeError: 'TestSnapSchedulesSubvolAndGroupArguments' objec...
- ...
03/14/2024
- 04:51 PM Backport #64926 (In Progress): reef: mds: disable defer_client_eviction_on_laggy_osds by default
- 12:56 PM Backport #64926 (In Progress): reef: mds: disable defer_client_eviction_on_laggy_osds by default
- https://github.com/ceph/ceph/pull/56196
- 04:51 PM Bug #62653: qa: unimplemented fcntl command: 1036 with fsstress
- In rados run: /a/yuriw-2024-03-13_19:26:09-rados-wip-yuri-testing-2024-03-12-1240-reef-distro-default-smithi/7597989
- 04:50 PM Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- Dhairya Parmar wrote:
> im in strong favour of having this done, considering the caveat discussed above i still feel...
- 04:41 PM Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- Dhairya Parmar wrote:
> Venky Shankar wrote:
> > Dhairya Parmar wrote:
> > > Github docs for rate limits especiall...
- 02:45 PM Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- I'm in strong favour of having this done; considering the caveat discussed above I still feel the risk:reward is signi...
- 02:42 PM Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > Github docs for rate limits especially unauthenticated requests to...
- 02:07 PM Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- Dhairya Parmar wrote:
> Github docs for rate limits especially unauthenticated requests to the raw content API is no...
- 01:38 PM Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- GitHub docs for rate limits, especially for unauthenticated requests to the raw content API, are non-existent, but since gi...
- 12:27 PM Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- I'm thinking of making use of the ceph repo's raw-content access to src/ceph_release, i.e. [0]. This is still a GitHub API ca...
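A minimal sketch of the idea above, assuming the raw-content URL shape and a three-line layout for src/ceph_release (version, release name, release type); none of this is the actual qa code, and unauthenticated raw-content requests may be rate-limited as the discussion notes.

```python
# Hedged sketch: resolve the release name from the ceph repo's
# raw-content endpoint. URL shape and file layout are assumptions.
import urllib.request

RAW_URL = "https://raw.githubusercontent.com/ceph/ceph/{ref}/src/ceph_release"

def release_url(ref="main"):
    return RAW_URL.format(ref=ref)

def parse_release(text):
    # assumed layout of src/ceph_release: version, release name, release type
    version, name, rtype = [l.strip() for l in text.splitlines() if l.strip()][:3]
    return {"version": version, "name": name, "type": rtype}

def fetch_release(ref="main", timeout=10):
    # network call; subject to (undocumented) unauthenticated rate limits
    with urllib.request.urlopen(release_url(ref), timeout=timeout) as resp:
        return parse_release(resp.read().decode())
```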
- 04:47 PM Backport #64925 (In Progress): quincy: mds: disable defer_client_eviction_on_laggy_osds by default
- 12:55 PM Backport #64925 (In Progress): quincy: mds: disable defer_client_eviction_on_laggy_osds by default
- https://github.com/ceph/ceph/pull/56195
- 04:46 PM Backport #64924 (In Progress): squid: mds: disable defer_client_eviction_on_laggy_osds by default
- 12:55 PM Backport #64924 (In Progress): squid: mds: disable defer_client_eviction_on_laggy_osds by default
- https://github.com/ceph/ceph/pull/56194
- 04:28 PM Feature #63664 (Fix Under Review): mds: add quiesce protocol for halting I/O on subvolumes
- 04:27 PM Tasks #63706 (Closed): mds: Integrate the QuiesceDbManager and the QuiesceAgent into the MDS rank
- Folding this into #63664.
- 04:26 PM Tasks #63709 (Closed): mds: Plug the QuiesceProtocol implementation into the QuiesceAgent control...
- Folding this into #63664.
- 02:48 PM Bug #64927 (Fix Under Review): qa/cephfs: test_cephfs_mirror_blocklist raises "KeyError: 'rados_i...
- 01:06 PM Bug #64927 (Fix Under Review): qa/cephfs: test_cephfs_mirror_blocklist raises "KeyError: 'rados_i...
- Saw this failure occur a couple of times in recent QA run for CephFS PRs -
https://pulpito.ceph.com/rishabh-2024-0...
- 12:56 PM Bug #64615: tools/first-damage: Skips root and lost+found inode
- We don't backport first-damage tool fixes to releases.
- 12:32 PM Bug #64615 (Resolved): tools/first-damage: Skips root and lost+found inode
- We don't backport first-damage tool fixes, do we?
- 12:54 PM Bug #64685 (Pending Backport): mds: disable defer_client_eviction_on_laggy_osds by default
- 11:25 AM Backport #64922 (New): reef: qa: Command failed (workunit test fs/snaps/untar_snap_rm.sh)
- 11:25 AM Backport #64921 (New): quincy: qa: Command failed (workunit test fs/snaps/untar_snap_rm.sh)
- 11:25 AM Backport #64920 (New): squid: qa: Command failed (workunit test fs/snaps/untar_snap_rm.sh)
- 11:25 AM Backport #64919 (In Progress): reef: qa: enhance labeled perf counters test for cephfs-mirror
- https://github.com/ceph/ceph/pull/56211
- 11:25 AM Backport #64918 (In Progress): squid: qa: enhance labeled perf counters test for cephfs-mirror
- https://github.com/ceph/ceph/pull/56210
- 11:23 AM Bug #64058 (Pending Backport): qa: Command failed (workunit test fs/snaps/untar_snap_rm.sh)
- 11:22 AM Bug #64486 (Pending Backport): qa: enhance labeled perf counters test for cephfs-mirror
- 07:37 AM Bug #64912 (Fix Under Review): make check: QuiesceDbTest.MultiRankRecovery Failed
- From: https://jenkins.ceph.com/job/ceph-pull-requests/131349/console...
- 04:24 AM Bug #64659: mds: switch to using xlists instead of elists
- Dhairya Parmar wrote:
> Venky Shankar wrote:
> > Dhairya Parmar wrote:
> > > Patrick Donnelly wrote:
> > > > Dhai...
- 12:35 AM Bug #50719: xattr returning from the dead (sic!)
- Matthew Hutchinson wrote:
> Is there anything in the logs you saw that could be causing this issue? I am eager to ge...
03/13/2024
- 01:51 PM Backport #64218 (In Progress): reef: fs/cephadm/renamevolume: volume rename failure
- 01:50 PM Backport #64217 (In Progress): quincy: fs/cephadm/renamevolume: volume rename failure
- 01:49 PM Backport #64047 (In Progress): reef: qa: test_fragmented_injection (tasks.cephfs.test_data_scan.T...
- 01:49 PM Backport #64046 (In Progress): quincy: qa: test_fragmented_injection (tasks.cephfs.test_data_scan...
- 01:48 PM Backport #64250 (In Progress): reef: smoke test fails from "NameError: name 'DEBUGFS_META_DIR' is...
- 01:47 PM Backport #64249 (In Progress): quincy: smoke test fails from "NameError: name 'DEBUGFS_META_DIR' ...
- 01:15 PM Backport #64896 (In Progress): squid: mds: QuiesceDb to manage subvolume quiesce state
- PR: https://github.com/ceph/ceph/pull/56202
- 01:09 PM Tasks #63708 (Resolved): mds: MDS message transport for inter-rank QuiesceDbManager communications
- Backport tracking in https://tracker.ceph.com/issues/63665
- 01:09 PM Feature #63666 (Resolved): mds: QuiesceAgent to execute quiesce operations on an MDS rank
- Backport tracking in https://tracker.ceph.com/issues/63665
- 01:09 PM Feature #63668 (Resolved): pybind/mgr/volumes: add quiesce protocol API
- Backport tracking in https://tracker.ceph.com/issues/63665
- 01:09 PM Tasks #63707 (Resolved): mds: AdminSocket command to control the QuiesceDbManager
- Backport tracking in https://tracker.ceph.com/issues/63665
- 01:08 PM Feature #63665 (Pending Backport): mds: QuiesceDb to manage subvolume quiesce state
- 01:01 PM Bug #50719: xattr returning from the dead (sic!)
- Matthew Hutchinson wrote:
> Is there anything in the logs you saw that could be causing this issue? I am eager to ge...
- 12:57 PM Bug #50719: xattr returning from the dead (sic!)
- Is there anything in the logs you saw that could be causing this issue? I am eager to get this resolved for all of ou...
- 12:55 PM Backport #64809 (Rejected): pacific: mds: add debug logs for handling setxattr for ceph.dir.subvo...
- 12:52 PM Bug #58244: Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
- I hit this while testing one of my PRs https://github.com/ceph/ceph/pull/56153
https://pulpito.ceph.com/khiremat-2024-...
- 12:08 PM Bug #63265: qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
- https://pulpito.ceph.com/rishabh-2024-03-08_19:54:47-fs-rishabh-2024mar8-testing-default-smithi/7588250.
Same erro...
- 11:43 AM Bug #63699 (Fix Under Review): qa: failed cephfs-shell test_reading_conf
- 09:56 AM Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Venky Shankar wrote:
> Xiubo Li wrote:
> > I have finally reproduced by by pulling the latest ceph code. I believe ...
- 09:12 AM Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Xiubo Li wrote:
> I have finally reproduced by by pulling the latest ceph code. I believe there is one commit in MDS...
- 09:10 AM Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Rerunning with the updated testing kernel
https://pulpito.ceph.com/vshankar-2024-03-13_05:41:30-fs-wip-vshankar-te...
- 02:55 AM Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- I have finally reproduced it by pulling the latest ceph code. I believe there is one commit in the MDS that has improved the ...
- 05:07 AM Bug #64751 (Fix Under Review): cephfs-mirror coredumped when acquiring pthread mutex
- 04:03 AM Bug #64852: MDS hangs on "joining batch getattr" when client does statx
- @Xiubo Li Could you briefly summarise them for me? For the second one, it's marked as Resolved but there seems to be ...
03/12/2024
- 12:55 PM Bug #64751 (In Progress): cephfs-mirror coredumped when acquiring pthread mutex
- 12:52 PM Bug #63907 (Duplicate): cephfs-mirror: Mirror::update_fs_mirrors crashes while taking lock
- https://tracker.ceph.com/issues/64751
- 12:38 PM Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > Venky Shankar wrote:
> > > Dhairya Parmar wrote:
> > > > Venky S...
- 12:10 PM Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Dhairya Parmar wrote:
> the old client can simply be put on hold - revoke caps and pause I/O. Wait for the time auto...
- 12:09 PM Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Dhairya Parmar wrote:
> Venky Shankar wrote:
> > Dhairya Parmar wrote:
> > > Venky Shankar wrote:
> > > > Dhairya...
- 11:50 AM Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > Venky Shankar wrote:
> > > Dhairya Parmar wrote:
> > > > Patrick...
- 11:32 AM Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Dhairya Parmar wrote:
> Venky Shankar wrote:
> > Dhairya Parmar wrote:
> > > Patrick Donnelly wrote:
> > > > Dhai...
- 12:06 PM Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Locally I have tried all the possible cases with the upstream ceph code ...
- 09:22 AM Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Xiubo Li wrote:
> Locally I have tried all the possible cases with the upstream ceph code and couldn't reproduce it,...
- 07:53 AM Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Locally I have tried all the possible cases with the upstream ceph code and couldn't reproduce it, and have partially...
- 02:04 AM Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > Xiubo Li wrote:
> > > > Venky Shankar wrote...
- 01:25 AM Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > Venky Shankar wrote:
> > > > Xiubo,
> > > >
>...
- 12:01 PM Bug #64729: mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadat...
- Discussed a bit in standup - this could be a fallout from the recently introduced mdlog trim decay counter.
- 10:25 AM Bug #64042 (Fix Under Review): mgr/snap_schedule: Adding retention which already exists gives imp...
- 08:19 AM Bug #64852: MDS hangs on "joining batch getattr" when client does statx
- Niklas Hambuechen wrote:
> Are these possibly related?
>
> * https://tracker.ceph.com/issues/63364 - MDS_CLIENT_O...
- 04:57 AM Bug #64852: MDS hangs on "joining batch getattr" when client does statx
- Are these possibly related?
* https://tracker.ceph.com/issues/63364 - MDS_CLIENT_OLDEST_TID: 15 clients failing to...
- 04:55 AM Bug #64852: MDS hangs on "joining batch getattr" when client does statx
- Some logs from `/var/log/ceph/ceph-mds.mycluster-node-4.log` to show that the problematic op hung for multiple hours:...
- 04:42 AM Bug #64852: MDS hangs on "joining batch getattr" when client does statx
- The log this is triggering is this:
https://github.com/ceph/ceph/commit/5cf60960b642f999ce08d404a6b6e14c1eb434ca
...
- 04:37 AM Bug #64852 (New): MDS hangs on "joining batch getattr" when client does statx
- Every couple days, our CephFS hangs on one specific directory:...
- 08:15 AM Bug #64856 (New): mds crashes when extracting from a tar is cancelled
- On fresh @vstart@ cluster following commands were run -...
- 06:28 AM Bug #63514 (Fix Under Review): mds: avoid sending inode/stray counters as part of health warning ...
- 05:39 AM Bug #64717: MDS stuck in replay/resolve use
- Hi Abhishek,
This tracker was discussed in yesterday's cephfs standup. In particular, the mds crash backtrace that...
- 04:58 AM Bug #61947: mds: enforce a limit on the size of a session in the sessionmap
- Potentially related issue:
* https://tracker.ceph.com/issues/64852 - MDS hangs on "joining batch getattr" when cli...
- 04:58 AM Bug #63364: MDS_CLIENT_OLDEST_TID: 15 clients failing to advance oldest client/flush tid
- Potentially related issue:
* https://tracker.ceph.com/issues/64852 - MDS hangs on "joining batch getattr" when cli...
- 04:37 AM Bug #62847: mds: blogbench requests stuck (5mds+scrub+snaps-flush)
- Potentially related: https://tracker.ceph.com/issues/64852
- 01:18 AM Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Venky Shankar wrote:
> > > Venky Shankar wrote:
> > > > This i...
03/11/2024
- 07:10 PM Bug #64846 (New): CephFS: support read_from_replica everywhere
- We would like to be able to use read-from-replica throughout the CephFS stack. Right now, there's a libcephfs::ceph_l...
- 05:37 PM Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Venky Shankar wrote:
> Venky Shankar wrote:
> > Venky Shankar wrote:
> > > This is not as bad as it looks. The cep...
- 11:36 AM Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Venky Shankar wrote:
> Venky Shankar wrote:
> > This is not as bad as it looks. The ceph-fuse process seems to be e...
- 02:19 PM Backport #64738 (In Progress): squid: Memory leak detected when accessing a CephFS volume from Sa...
- 02:18 PM Backport #64737 (In Progress): reef: Memory leak detected when accessing a CephFS volume from Sam...
- 02:14 PM Backport #64736 (In Progress): quincy: Memory leak detected when accessing a CephFS volume from S...
- 01:12 PM Bug #64729 (Triaged): mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report sl...
- 01:08 PM Bug #64730 (Triaged): fs/misc/multiple_rsync.sh workunit times out
- 01:08 PM Bug #59413 (Duplicate): cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
- 01:08 PM Bug #64748 (Duplicate): reef: snaptest-git-ceph.sh failure
- 12:32 PM Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > Patrick Donnelly wrote:
> > > Dhairya Parmar wrote:
> > > > I'm ...
- 12:27 PM Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- The old client can simply be put on hold - revoke caps and pause I/O. Wait until autoclose arrives (default 300s); ...
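The hold-then-evict flow proposed above can be sketched schematically. This is a hedged illustration with invented names, not MDS code; the 300s constant stands in for the session autoclose timeout mentioned in the comment.

```python
# Schematic sketch of the proposed flow: put a laggy session on hold
# (revoke caps, pause I/O), then evict only once the autoclose window
# has elapsed. Names and structure are illustrative assumptions.
AUTOCLOSE = 300.0  # seconds, stand-in for the autoclose default noted above

def tick(session, now):
    if session["laggy"] and not session.get("held"):
        session["held"] = True          # revoke caps and pause I/O
        session["held_since"] = now
    if session.get("held") and now - session["held_since"] >= AUTOCLOSE:
        session["evicted"] = True       # autoclose arrived; evict the client
    return session
```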
- 12:24 PM Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Venky Shankar wrote:
> Patrick Donnelly wrote:
> > Dhairya Parmar wrote:
> > > I'm waiting for patrick/venky's res...
- 12:15 PM Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Patrick Donnelly wrote:
> Dhairya Parmar wrote:
> > I'm waiting for patrick/venky's response on this since they had...
- 12:00 PM Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Dhairya Parmar wrote:
> Patrick Donnelly wrote:
> > Dhairya Parmar wrote:
> > > I'm waiting for patrick/venky's re...
- 11:23 AM Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Patrick Donnelly wrote:
> Dhairya Parmar wrote:
> > I'm waiting for patrick/venky's response on this since they had...
- 12:06 PM Bug #64659: mds: switch to using xlists instead of elists
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > Patrick Donnelly wrote:
> > > Dhairya Parmar wrote:
> > > > Patr...
- 11:16 AM Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
- Abhishek Lekshmanan wrote:
> Hi Venky, Patrick
>
> further to our talk, we saw the MDS growing with a lot of log ...
- 10:33 AM Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
- Hi Andras,
Patrick has a proposed fix that optimizes the iteration - https://github.com/ceph/ceph/pull/55768
I ...
- 10:54 AM Bug #64711 (Fix Under Review): Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks...
- 09:31 AM Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > Xiubo,
> > >
> > > I see the following co...
- 09:27 AM Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo,
> >
> > I see the following commit in the testing kernel:
> >...
- 07:45 AM Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Venky Shankar wrote:
> Xiubo,
>
> I see the following commit in the testing kernel:
>
> [...]
>
> The inter...
- 06:18 AM Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Xiubo,
I see the following commit in the testing kernel:...
- 06:14 AM Bug #64752: cephfs-mirror: valgrind report leaks
- The test test_peer_commands_with_mirroring_disabled passes, but then in the unwinding process, there's a CommandFaile...
- 04:54 AM Bug #63830: MDS fails to start
- 04:54 AM Bug #63830: MDS fails to start
- Heðin Ejdesgaard Møller wrote:
> Milind Changire wrote:
> > Heðin Ejdesgaard Møller wrote:
> > > I have made a cor...
- 04:32 AM Bug #64717: MDS stuck in replay/resolve use
- Hi Abhishek,
Abhishek Lekshmanan wrote:
> We have a cephfs cluster where we ran a lot of metadata intensive workl...
- 04:15 AM Backport #64780 (Rejected): squid: qa/fscrypt: switch to postmerge fragment to distiguish the mou...
- The *squid* release has already included this.
03/08/2024
- 07:11 PM Feature #61866 (Fix Under Review): MDSMonitor: require --yes-i-really-mean-it when failing an MDS...
- 07:08 PM Tasks #64819 (In Progress): data corruption during rmw after lseek
- There's data corruption during a rmw after a seek.
reproducer...
- 06:15 PM Tasks #64723: ffsb configure issues (gcc fails)
- This is an issue where metadata is not updated with the new appended file size.
- 04:49 PM Bug #63830: MDS fails to start
- Milind Changire wrote:
> Heðin Ejdesgaard Møller wrote:
> > I have made a coredump of the mds service, but it's siz...
- 10:42 AM Backport #64811 (In Progress): reef: mds: add debug logs for handling setxattr for ceph.dir.subvo...
- 10:05 AM Backport #64811 (In Progress): reef: mds: add debug logs for handling setxattr for ceph.dir.subvo...
- https://github.com/ceph/ceph/pull/56062
- 10:41 AM Backport #64810 (In Progress): quincy: mds: add debug logs for handling setxattr for ceph.dir.sub...
- 10:05 AM Backport #64810 (In Progress): quincy: mds: add debug logs for handling setxattr for ceph.dir.sub...
- https://github.com/ceph/ceph/pull/56061
- 10:04 AM Backport #64809 (Rejected): pacific: mds: add debug logs for handling setxattr for ceph.dir.subvo...
- 09:55 AM Bug #61958 (Pending Backport): mds: add debug logs for handling setxattr for ceph.dir.subvolume
- 07:15 AM Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Venky Shankar wrote:
> This is not as bad as it looks. The ceph-fuse process seems to be exiting as usual - its some...
- 04:48 AM Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- This is not as bad as it looks. The ceph-fuse process seems to be exiting as usual - its somewhere in the qa world wh...
- 02:01 AM Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Continuing on this today - fusermount(1) is basically invoking umount2(2). Will try to see what's going on.
- 12:39 AM Backport #64585 (In Progress): squid: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cr...
- 12:37 AM Backport #64586 (In Progress): quincy: crash: void Locker::handle_file_lock(ScatterLock*, ceph::c...
- 12:33 AM Backport #64584 (In Progress): reef: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cre...
03/07/2024
- 06:50 PM Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Dhairya Parmar wrote:
> I'm waiting for patrick/venky's response on this since they had discussed some approach rega... - 10:32 AM Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- I'm waiting for patrick/venky's response on this since they had discussed some approach regarding changes to some pro...
- 06:40 PM Support #57952: Pacific: the buffer_anon_bytes of ceph-mds is too large
- There was a similar case back in nautilus:
* https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/TNXNORN...
- 03:30 PM Bug #64008 (Fix Under Review): mds: CInode::item_caps used in two different lists
- 12:56 PM Backport #64778 (In Progress): squid: mds: add per-client perf counters (w/ label) support
- 06:59 AM Backport #64778 (In Progress): squid: mds: add per-client perf counters (w/ label) support
- https://github.com/ceph/ceph/pull/56035
- 12:55 PM Backport #64779 (In Progress): squid: cephfs_mirror: add perf counters (w/ label) support
- 06:59 AM Backport #64779 (In Progress): squid: cephfs_mirror: add perf counters (w/ label) support
- https://github.com/ceph/ceph/pull/56035
- 11:44 AM Backport #64619 (In Progress): quincy: mds: check the layout in Server::handle_client_mknod
- 11:44 AM Bug #64786 (New): mds: make ceph.dir.subvolume availabile via getfattr
- * vxattr ceph.dir.subvolume can't be fetched for a subvolume
* there's no integration test to test presence of ceph....
- 11:42 AM Backport #64618 (In Progress): reef: mds: check the layout in Server::handle_client_mknod
- 11:39 AM Backport #64617 (In Progress): squid: mds: check the layout in Server::handle_client_mknod
- 11:17 AM Bug #64659: mds: switch to using xlists instead of elists
- Dhairya Parmar wrote:
> Patrick Donnelly wrote:
> > Dhairya Parmar wrote:
> > > Patrick Donnelly wrote:
> > > > >...
- 07:39 AM Bug #51197: qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation...
- Milind, I'm taking this one.
- 07:38 AM Bug #64730: fs/misc/multiple_rsync.sh workunit times out
- Xiubo Li wrote:
> Venky,
>
> Have you ever seen the *"On-disk backtrace is divergent or newer"* error ?
>
> [...
- 06:59 AM Bug #64730: fs/misc/multiple_rsync.sh workunit times out
- Xiubo Li wrote:
> Venky,
>
> Have you ever seen the *"On-disk backtrace is divergent or newer"* error ?
>
> [...
- 06:50 AM Bug #64730: fs/misc/multiple_rsync.sh workunit times out
- Venky,
Have you ever seen the *"On-disk backtrace is divergent or newer"* error ?... - 06:29 AM Bug #64730: fs/misc/multiple_rsync.sh workunit times out
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > It seems metadata damaged:
> >
> > Right. I sa...
- 05:59 AM Bug #64730: fs/misc/multiple_rsync.sh workunit times out
- Venky Shankar wrote:
> Xiubo Li wrote:
> > It seems metadata damaged:
>
> Right. I saw that in the mds log but l...
- 03:59 AM Bug #64730: fs/misc/multiple_rsync.sh workunit times out
- Xiubo Li wrote:
> It seems metadata damaged:
Right. I saw that in the mds log but left that out while creating th...
- 03:23 AM Bug #64730: fs/misc/multiple_rsync.sh workunit times out
- It seems metadata damaged:...
- 07:09 AM Backport #64780 (Rejected): squid: qa/fscrypt: switch to postmerge fragment to distiguish the mou...
- 06:33 AM Backport #64776 (In Progress): quincy: qa/cephfs: add MON_DOWN and `deprecated feature inline_dat...
- 06:28 AM Backport #64776 (In Progress): quincy: qa/cephfs: add MON_DOWN and `deprecated feature inline_dat...
- https://github.com/ceph/ceph/pull/56023
- 06:21 AM Backport #64763 (In Progress): reef: qa/cephfs: add MON_DOWN and `deprecated feature inline_data'...
- 01:49 AM Backport #64763 (In Progress): reef: qa/cephfs: add MON_DOWN and `deprecated feature inline_data'...
- https://github.com/ceph/ceph/pull/56022
- 06:16 AM Backport #64762 (In Progress): squid: qa/cephfs: add MON_DOWN and `deprecated feature inline_data...
- 01:49 AM Backport #64762 (In Progress): squid: qa/cephfs: add MON_DOWN and `deprecated feature inline_data...
- https://github.com/ceph/ceph/pull/56021
- 05:47 AM Backport #64757 (In Progress): quincy: selinux denials with centos9.stream
- 01:37 AM Backport #64757 (In Progress): quincy: selinux denials with centos9.stream
- https://github.com/ceph/ceph/pull/56020
- 05:32 AM Backport #64756 (In Progress): reef: selinux denials with centos9.stream
- 01:37 AM Backport #64756 (In Progress): reef: selinux denials with centos9.stream
- https://github.com/ceph/ceph/pull/56019
- 04:42 AM Backport #64755 (In Progress): squid: selinux denials with centos9.stream
- 01:37 AM Backport #64755 (In Progress): squid: selinux denials with centos9.stream
- https://github.com/ceph/ceph/pull/56018
- 04:41 AM Backport #64758 (In Progress): squid: osdc/Journaler: better handle ENOENT during replay as up:st...
- 01:37 AM Backport #64758 (In Progress): squid: osdc/Journaler: better handle ENOENT during replay as up:st...
- https://github.com/ceph/ceph/pull/56017
- 04:40 AM Backport #64759 (In Progress): reef: osdc/Journaler: better handle ENOENT during replay as up:sta...
- 01:37 AM Backport #64759 (In Progress): reef: osdc/Journaler: better handle ENOENT during replay as up:sta...
- https://github.com/ceph/ceph/pull/56016
- 04:39 AM Backport #64760 (In Progress): quincy: osdc/Journaler: better handle ENOENT during replay as up:s...
- 01:37 AM Backport #64760 (In Progress): quincy: osdc/Journaler: better handle ENOENT during replay as up:s...
- https://github.com/ceph/ceph/pull/56015
- 03:12 AM Bug #64748: reef: snaptest-git-ceph.sh failure
- Venky,
These two failures are both caused by the *EOF* issue; there is an existing tracker for this, please see ... - 02:06 AM Bug #63830: MDS fails to start
- Heðin Ejdesgaard Møller wrote:
> I have made a coredump of the mds service, but it's size is ~10MB, so I'm unable to... - 01:46 AM Bug #64761 (New): cephfs-mirror: add throttling to mirror daemon ops
- Right now, there is no control over the number of concurrent in-flight operations. Introduce a mechanism to throttle o...
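A throttle like the one proposed can be sketched with a counting semaphore. Everything below, including the class and method names, is illustrative and not the cephfs-mirror daemon's actual code:

```python
import threading
import time

class OpThrottle:
    """Bound the number of concurrent in-flight operations.

    Hypothetical sketch: names are illustrative, not the mirror
    daemon's actual API."""

    def __init__(self, max_inflight):
        self._sem = threading.BoundedSemaphore(max_inflight)

    def start_op(self):
        self._sem.acquire()   # blocks until a slot frees up

    def finish_op(self):
        self._sem.release()

throttle = OpThrottle(max_inflight=4)
lock = threading.Lock()
inflight = 0
peak = 0

def sync_one_snapshot():
    global inflight, peak
    throttle.start_op()
    try:
        with lock:
            inflight += 1
            peak = max(peak, inflight)
        time.sleep(0.01)      # stand-in for the actual sync work
    finally:
        with lock:
            inflight -= 1
        throttle.finish_op()

workers = [threading.Thread(target=sync_one_snapshot) for _ in range(16)]
for w in workers:
    w.start()
for w in workers:
    w.join()

assert peak <= 4 and inflight == 0   # never more than 4 ops in flight
```

A BoundedSemaphore (rather than a plain Semaphore) also catches mismatched release calls, which is a cheap sanity check in long-running daemons.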
- 01:37 AM Bug #64746 (Pending Backport): qa/cephfs: add MON_DOWN and `deprecated feature inline_data' to he...
- 01:36 AM Bug #64616 (Pending Backport): selinux denials with centos9.stream
- 01:34 AM Bug #57048 (Pending Backport): osdc/Journaler: better handle ENOENT during replay as up:standby-r...
03/06/2024
- 04:41 PM Tasks #64723: ffsb configure issues (gcc fails)
- The sleep interval is directly tied to client_caps_release_delay.
- 04:06 PM Bug #64752 (New): cephfs-mirror: valgrind report leaks
- /a/yuriw-2024-03-01_20:51:20-fs-squid-distro-default-smithi/7578146...
- 04:01 PM Backport #64098: reef: mount command returning misleading error message
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55300
merged - 03:59 PM Backport #63262: reef: MDS slow requests for the internal 'rename' requests
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/54467
merged - 03:33 PM Bug #64751 (Fix Under Review): cephfs-mirror coredumped when acquiring pthread mutex
- /a/yuriw-2024-03-01_20:51:20-fs-squid-distro-default-smithi/7578112
Log: ./remote/smithi134/log/ceph-client.mirror... - 03:16 PM Feature #61866: MDSMonitor: require --yes-i-really-mean-it when failing an MDS with MDS_HEALTH_TR...
- Rishabh, please take this one.
- 01:20 PM Bug #64748 (Duplicate): reef: snaptest-git-ceph.sh failure
- - /a/vshankar-2024-03-05_07:34:10-fs-wip-vshankar-testing1-reef-2024-03-05-1017-testing-default-smithi/7582083
- /a/... - 01:20 PM Backport #64741 (In Progress): squid: client: do not proceed with I/O if filehandle is invalid
- https://github.com/ceph/ceph/pull/55997
- 08:47 AM Backport #64741 (In Progress): squid: client: do not proceed with I/O if filehandle is invalid
- 01:10 PM Backport #64739: quincy: client: do not proceed with I/O if filehandle is invalid
- ah, this will be a bit tricky, src/test/client/nonblocking.cc is not present in reef or quincy
- 08:46 AM Backport #64739 (New): quincy: client: do not proceed with I/O if filehandle is invalid
- 01:10 PM Backport #64740: reef: client: do not proceed with I/O if filehandle is invalid
- ah, this will be a bit tricky, src/test/client/nonblocking.cc is not present in reef or quincy
- 08:46 AM Backport #64740 (New): reef: client: do not proceed with I/O if filehandle is invalid
- 12:13 PM Bug #64659: mds: switch to using xlists instead of elists
- Patrick Donnelly wrote:
> Dhairya Parmar wrote:
> > Patrick Donnelly wrote:
> > > > working with elist might lead ... - 11:02 AM Bug #64747 (New): postgresql pkg install failure
- /a/vshankar-2024-03-05_07:34:10-fs-wip-vshankar-testing1-reef-2024-03-05-1017-testing-default-smithi/7582129...
- 10:36 AM Bug #64711: Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirror...
- ...
- 05:09 AM Bug #64711: Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirror...
- ...
- 04:23 AM Bug #64711 (In Progress): Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.ceph...
- 09:49 AM Bug #64746 (Fix Under Review): qa/cephfs: add MON_DOWN and `deprecated feature inline_data' to he...
- 09:42 AM Bug #64746 (Pending Backport): qa/cephfs: add MON_DOWN and `deprecated feature inline_data' to he...
- Probably a fallout from https://github.com/ceph/ceph/pull/54312
- 09:42 AM Backport #64744 (In Progress): squid: mds: `dump dir` command should indicate that a dir is not c...
- 08:47 AM Backport #64744 (In Progress): squid: mds: `dump dir` command should indicate that a dir is not c...
- https://github.com/ceph/ceph/pull/55989
- 09:36 AM Backport #64743 (In Progress): reef: mds: `dump dir` command should indicate that a dir is not ca...
- 08:47 AM Backport #64743 (In Progress): reef: mds: `dump dir` command should indicate that a dir is not ca...
- https://github.com/ceph/ceph/pull/55987
- 09:34 AM Backport #64742 (In Progress): quincy: mds: `dump dir` command should indicate that a dir is not ...
- 08:47 AM Backport #64742 (In Progress): quincy: mds: `dump dir` command should indicate that a dir is not ...
- https://github.com/ceph/ceph/pull/55986
- 08:46 AM Backport #64738 (In Progress): squid: Memory leak detected when accessing a CephFS volume from Sa...
- https://github.com/ceph/ceph/pull/56123
- 08:46 AM Backport #64737 (In Progress): reef: Memory leak detected when accessing a CephFS volume from Sam...
- https://github.com/ceph/ceph/pull/56122
- 08:46 AM Backport #64736 (In Progress): quincy: Memory leak detected when accessing a CephFS volume from S...
- https://github.com/ceph/ceph/pull/56121
- 08:45 AM Bug #63093 (Pending Backport): mds: `dump dir` command should indicate that a dir is not cached
- Jos, please update the original backport PR with the additional commit.
- 08:41 AM Bug #64313 (Pending Backport): client: do not proceed with I/O if filehandle is invalid
- 08:40 AM Bug #64479 (Pending Backport): Memory leak detected when accessing a CephFS volume from Samba usi...
- 07:32 AM Bug #64572 (Fix Under Review): workunits/fsx.sh failure
- 04:00 AM Bug #64572: workunits/fsx.sh failure
- Xiubo Li wrote:
> The *XFS_IOC_FREESP64* and *XFS_IOC_ALLOCSP64* macros are from */usr/include/xfs/xfs_fs.h*, which ... - 02:24 AM Bug #64572: workunits/fsx.sh failure
- The *XFS_IOC_FREESP64* and *XFS_IOC_ALLOCSP64* macros are from */usr/include/xfs/xfs_fs.h*, which is from *xfsprogs-d...
- 05:35 AM Bug #64730 (Triaged): fs/misc/multiple_rsync.sh workunit times out
- /a/vshankar-2024-03-04_08:26:39-fs-wip-vshankar-testing-20240304.042522-testing-default-smithi/7580882...
- 04:28 AM Bug #64729 (Triaged): mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report sl...
- /a/vshankar-2024-03-04_08:26:39-fs-wip-vshankar-testing-20240304.042522-testing-default-smithi/7580913...
- 01:23 AM Bug #64641 (Triaged): qa: Add multifs root_squash testcase
03/05/2024
- 11:09 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- /a/yuriw-2024-03-05_15:31:54-smoke-reef-release-distro-default-smithi/7582350
- 09:29 PM Tasks #64723 (In Progress): ffsb configure issues (gcc fails)
- Isolated the failure, see below
When appending to a file and an @ls@ is performed on it at or before 5s, it appears stale ... - 09:28 PM Tasks #64166 (Resolved): RMW issue with xfstest ffsb
- This is resolved by commit 9a083b09355; cherry-picked into wip-fscrypt branch.
- 01:55 PM Bug #64659: mds: switch to using xlists instead of elists
- Dhairya Parmar wrote:
> Patrick Donnelly wrote:
> > > working with elist might lead to severe consequences at times... - 09:05 AM Bug #64659: mds: switch to using xlists instead of elists
- Patrick Donnelly wrote:
> > working with elist might lead to severe consequences at times if the same class member i... - 01:49 PM Tasks #64691: Symlink target not set correctly in unencrypted dir
- Christopher Hoffman wrote:
> in->symlink_plain wasn't being set in case of non-fscrypt.
>
> [...]
is this pat... - 01:28 PM Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
- Hi Venky, Patrick
further to our talk, we saw the MDS growing with a lot of log segments and crashing in the up:re... - 01:26 PM Bug #64717 (New): MDS stuck in replay/resolve use
- We have a cephfs cluster where we ran a lot of metadata-intensive workloads with snapshots enabled. In our monitoring...
- 09:53 AM Bug #64711: Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirror...
- The check_peer_snap_in_progress() doesn't wait for 'syncing' to appear. It just checks the state at the moment and re...
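A fix along these lines typically replaces the one-shot state check with a poll-until-timeout loop. The helper below is a generic sketch, not the actual qa code:

```python
import time

def wait_until(predicate, timeout=30, interval=1):
    """Poll `predicate` until it returns True or `timeout` elapses.

    Generic sketch of the usual qa-style fix (not the actual test
    code): instead of sampling a peer's state exactly once, keep
    retrying so a transient state such as 'syncing' is not missed."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# toy usage: the observed state flips to 'syncing' after a short delay
state = {"value": "idle"}
start = time.monotonic()

def peer_is_syncing():
    if time.monotonic() - start > 0.05:
        state["value"] = "syncing"
    return state["value"] == "syncing"

assert wait_until(peer_is_syncing, timeout=5, interval=0.01)
```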
- 09:35 AM Bug #64711 (Fix Under Review): Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks...
- /a/vshankar-2024-03-04_08:26:39-fs-wip-vshankar-testing-20240304.042522-testing-default-smithi/7580933
Probably a ... - 09:23 AM Bug #63949: leak in mds.c detected by valgrind during CephFS QA run
- https://pulpito.ceph.com/vshankar-2024-03-04_08:26:39-fs-wip-vshankar-testing-20240304.042522-testing-default-smithi/...
- 09:23 AM Bug #64149: valgrind+mds/client: gracefully shutdown the mds during valgrind tests
- https://pulpito.ceph.com/vshankar-2024-03-04_08:26:39-fs-wip-vshankar-testing-20240304.042522-testing-default-smithi/...
- 07:33 AM Bug #64707 (New): suites/fsstress.sh hangs on one client - test times out
- https://pulpito.ceph.com/vshankar-2024-03-04_08:26:39-fs-wip-vshankar-testing-20240304.042522-testing-default-smithi/...
- 07:20 AM Bug #64679 (Fix Under Review): cephfs: removexattr should always return -ENODATA when xattr doesn...
- 06:05 AM Bug #64572: workunits/fsx.sh failure
- I looked at this closely, and it seems that the compilation failure is deliberately triggered from the xfstest code wh...
- 04:40 AM Backport #64704 (In Progress): quincy: Test failure: test_mount_all_caps_absent (tasks.cephfs.tes...
- 04:10 AM Backport #64704 (In Progress): quincy: Test failure: test_mount_all_caps_absent (tasks.cephfs.tes...
- https://github.com/ceph/ceph/pull/55944
- 04:40 AM Backport #64706 (Rejected): squid: mount command returning misleading error message
- Already in squid when branching...
- 04:10 AM Backport #64706 (Rejected): squid: mount command returning misleading error message
- 04:39 AM Backport #64703 (Rejected): squid: Test failure: test_mount_all_caps_absent (tasks.cephfs.test_mu...
- Already in squid when branching...
- 04:10 AM Backport #64703 (Rejected): squid: Test failure: test_mount_all_caps_absent (tasks.cephfs.test_mu...
- 04:36 AM Backport #64705 (In Progress): reef: Test failure: test_mount_all_caps_absent (tasks.cephfs.test_...
- 04:10 AM Backport #64705 (In Progress): reef: Test failure: test_mount_all_caps_absent (tasks.cephfs.test_...
- https://github.com/ceph/ceph/pull/55943
- 04:04 AM Bug #64700 (Pending Backport): Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multif...
- 01:37 AM Bug #64700 (Pending Backport): Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multif...
- The actual error is:...
- 03:36 AM Backport #64701 (In Progress): squid: mgr/volumes: Support to reject CephFS clones if cloner thre...
- 01:42 AM Backport #64701 (In Progress): squid: mgr/volumes: Support to reject CephFS clones if cloner thre...
- https://github.com/ceph/ceph/pull/55940
- 01:41 AM Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
- Backport note: required additional commits from https://github.com/ceph/ceph/pull/55930
- 01:09 AM Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Another debug session with @set detach-on-fork on@ which is supposed to let gdb debug both parent and child processes...
03/04/2024
- 08:11 PM Documentation #51428 (Resolved): mgr/nfs: move nfs doc from cephfs to mgr
- 08:11 PM Backport #51790 (Rejected): pacific: mgr/nfs: move nfs doc from cephfs to mgr
- pacific is EOL
- 08:09 PM Tasks #64691 (Resolved): Symlink target not set correctly in unencrypted dir
- in->symlink_plain wasn't being set in case of non-fscrypt. ...
- 04:17 PM Tasks #64691 (Resolved): Symlink target not set correctly in unencrypted dir
- Symlink does not work outside of an unencrypted dir. The target does not get set...
- 02:02 PM Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- qa test case reproducer: https://github.com/ceph/ceph/pull/55784
- 01:52 PM Bug #64572 (Triaged): workunits/fsx.sh failure
- 01:51 PM Bug #64572: workunits/fsx.sh failure
- Venky Shankar wrote:
> https://pulpito.ceph.com/vshankar-2024-02-26_05:44:42-fs:workload-wip-vshankar-testing-202402... - 01:31 PM Bug #64685 (Fix Under Review): mds: disable defer_client_eviction_on_laggy_osds by default
- 01:28 PM Bug #64685 (Pending Backport): mds: disable defer_client_eviction_on_laggy_osds by default
- This config can result in a single client preventing the MDS from servicing other clients, since once a client is deferred fro...
- 12:31 PM Feature #58057: cephfs-top: enhance fstop tests to cover testing displayed data
- The progress of this req is tracked at
[1] https://tracker.ceph.com/issues/61397
[2] https://tracker.ceph.com/issue... - 12:13 PM Bug #57594 (Can't reproduce): pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_da...
- 10:19 AM Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Finally with the infra and the kclient issues set aside, I was able to gdb the ceph-fuse process, add a breakpoint in...
- 09:58 AM Backport #64224 (In Progress): quincy: qa: flush journal may cause timeouts of `scrub status`
- 09:48 AM Backport #64223 (In Progress): reef: qa: flush journal may cause timeouts of `scrub status`
- 07:10 AM Bug #64486 (Fix Under Review): qa: enhance labeled perf counters test for cephfs-mirror
- 07:05 AM Bug #64679 (Fix Under Review): cephfs: removexattr should always return -ENODATA when xattr doesn...
- This issue is from https://github.com/ceph/ceph/pull/55087.
POSIX says we should return -ENODATA when the corr... - 01:26 AM Bug #64616 (Fix Under Review): selinux denials with centos9.stream
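The removexattr semantics asked for above can be modeled with a toy in-memory store. This is a hypothetical class, not CephFS code: removing an absent xattr fails with ENODATA regardless of which layer notices the absence first.

```python
import errno

class XattrStore:
    """Toy in-memory model of the requested semantics (hypothetical
    class, not CephFS code): removing an xattr that does not exist
    fails with ENODATA."""

    def __init__(self):
        self._xattrs = {}

    def setxattr(self, name, value):
        self._xattrs[name] = value

    def removexattr(self, name):
        if name not in self._xattrs:
            raise OSError(errno.ENODATA, "no such xattr", name)
        del self._xattrs[name]

store = XattrStore()
store.setxattr("user.foo", b"bar")
store.removexattr("user.foo")        # first removal succeeds

failed_errno = None
try:
    store.removexattr("user.foo")    # attribute is gone now
except OSError as e:
    failed_errno = e.errno

assert failed_errno == errno.ENODATA
```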
03/03/2024
- 03:00 PM Feature #64677 (New): Enhance Message with a generic method that can be used to delay payload dec...
- Message payload has a serialized version of the message content. Using the standard `decode_payload` suggests that th...
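The idea of deferring payload decoding can be sketched as a lazy property over the raw bytes. The class below is a hypothetical Python illustration; Ceph's Message is C++ with its own encoding, and JSON is only a stand-in here:

```python
import json

class LazyMessage:
    """Hypothetical sketch of deferred payload decoding: keep the
    serialized payload untouched and decode it only on first access,
    so a message that is merely queued or forwarded never pays the
    decode cost."""

    def __init__(self, raw_payload: bytes):
        self._raw = raw_payload
        self._decoded = None

    @property
    def payload(self):
        if self._decoded is None:    # decode at most once, on demand
            self._decoded = json.loads(self._raw)
        return self._decoded

msg = LazyMessage(b'{"op": "quiesce", "rank": 3}')
assert msg._decoded is None          # nothing decoded yet
assert msg.payload["rank"] == 3      # first access triggers the decode
```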
03/01/2024
- 08:34 PM Bug #64659: mds: switch to using xlists instead of elists
- > working with elist might lead to severe consequences at times if the same class member is used to initialise multip...
- 12:16 PM Bug #64659 (New): mds: switch to using xlists instead of elists
- ...
- 01:46 PM Bug #50719: xattr returning from the dead (sic!)
- Xiubo Li wrote:
> Matthew Hutchinson wrote:
> > We are currently working on recreating this issue internally as thi... - 09:53 AM Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Venky Shankar wrote:
> Dhairya,
>
> Before we go into improving the lagginess detection infrastructure, let's ver... - 05:48 AM Backport #64656 (Rejected): quincy: qa/fscrypt: switch to postmerge fragment to distiguish the mo...
- fscrypt isn't supported yet in quincy.
- 03:23 AM Backport #64656 (Rejected): quincy: qa/fscrypt: switch to postmerge fragment to distiguish the mo...
- 05:48 AM Backport #64655 (In Progress): reef: qa/fscrypt: switch to postmerge fragment to distiguish the m...
- 03:23 AM Backport #64655 (In Progress): reef: qa/fscrypt: switch to postmerge fragment to distiguish the m...
- https://github.com/ceph/ceph/pull/55857
- 03:13 AM Bug #64654 (Duplicate): fscrypt: add mount-syntax/v2 test for fscrypt
- This has been coincidentally fixed by https://tracker.ceph.com/issues/59195, and we just need to backport it to quincy a...
- 03:07 AM Bug #64654 (Duplicate): fscrypt: add mount-syntax/v2 test for fscrypt
- We missed the v2 test for fscrypt.
- 03:12 AM Bug #59195 (Pending Backport): qa/fscrypt: switch to postmerge fragment to distiguish the mounter...
02/29/2024
- 02:17 PM Tasks #64413: File size is not correct after rmw
- Spoke to Chris regarding this.
Chris, if you can attach the debug client/mds logs from the two runs you mention (o... - 02:08 PM Bug #58090: Non-existent pending clone shows up in snapshot info
- Neeraj and I had a discussion regarding this.
We fixed a bunch of issues around clones and dangling index symlinks... - 08:04 AM Bug #64616: selinux denials with centos9.stream
- Dan Mick wrote:
> I bet you didn't mean to change the project to Calamari, which is long-dead
Oh god. I meant to ... - 07:58 AM Bug #64616: selinux denials with centos9.stream
- I bet you didn't mean to change the project to Calamari, which is long-dead
- 07:37 AM Bug #64641 (Pending Backport): qa: Add multifs root_squash testcase
- Multifs root_squash test is missing. Add it.
- 06:12 AM Backport #64583 (In Progress): squid: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FU...
- 06:10 AM Backport #64582 (In Progress): reef: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FUL...
- 06:08 AM Backport #64581 (In Progress): quincy: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_F...
- 04:24 AM Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Venky Shankar wrote:
> Started running into
>
> > ceph: stderr Error: OCI runtime error: crun: bpf create ``: In... - 04:03 AM Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- Dhairya, please take this one (on prio).
- 12:53 AM Bug #50719: xattr returning from the dead (sic!)
- Matthew Hutchinson wrote:
> We are currently working on recreating this issue internally as this was a customer clus...
02/28/2024
- 07:53 PM Tasks #63669: qa: add teuthology tests for quiescing a group of subvolumes
- We shouldn't overload. The background quiesce is an important test, but it is not functional e2e testing. For that we...
- 07:03 PM Bug #64616: selinux denials with centos9.stream
- Venky Shankar wrote:
> Patrick, I saw you working around with selinux denials in @qa/suites/fs/workload/tasks/5-work... - 03:32 PM Bug #64616: selinux denials with centos9.stream
- Patrick, I saw you working around with selinux denials in @qa/suites/fs/workload/tasks/5-workunit/postgres.yaml@, how...
- 02:55 PM Bug #64616 (Pending Backport): selinux denials with centos9.stream
- /a/vshankar-2024-02-26_10:07:12-fs-wip-vshankar-testing-20240226.064629-testing-default-smithi/7573529...
- 04:55 PM Bug #64615 (Fix Under Review): tools/first-damage: Skips root and lost+found inode
- 12:04 PM Bug #64615 (Resolved): tools/first-damage: Skips root and lost+found inode
- The 'first-damage.py' tool skips both the root and lost+found inodes; as
a result, the tool can't be used to repair/remove ... - 03:49 PM Backport #64619 (In Progress): quincy: mds: check the layout in Server::handle_client_mknod
- https://github.com/ceph/ceph/pull/56032
- 03:49 PM Backport #64618 (In Progress): reef: mds: check the layout in Server::handle_client_mknod
- https://github.com/ceph/ceph/pull/56031
- 03:49 PM Backport #64617 (In Progress): squid: mds: check the layout in Server::handle_client_mknod
- https://github.com/ceph/ceph/pull/56030
- 03:45 PM Bug #64061 (Pending Backport): mds: check the layout in Server::handle_client_mknod
- 03:40 PM Bug #64058 (Fix Under Review): qa: Command failed (workunit test fs/snaps/untar_snap_rm.sh)
- 03:40 PM Bug #64058: qa: Command failed (workunit test fs/snaps/untar_snap_rm.sh)
- Expanding this a bit:...
- 03:40 PM Bug #64290 (Closed): mds: erroneous "MDS abort because newly corrupt dentry to be committed" beca...
- I'm not sure why I forked #64058.
- 02:02 PM Backport #64565 (In Progress): reef: Difference in error code returned while removing system xatt...
- 06:21 AM Backport #64565: reef: Difference in error code returned while removing system xattrs using remov...
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/55803
ceph-backport.sh versi... - 01:00 PM Bug #50719: xattr returning from the dead (sic!)
- We are currently working on recreating this issue internally as this was a customer cluster that was having the issue...
- 11:38 AM Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Started running into
> ceph: stderr Error: OCI runtime error: crun: bpf create ``: Invalid argument
again in
... - 08:50 AM Bug #64611 (New): Inconsistent usage of the return codes in the MDS code base
- A Ceph cluster may comprise daemons running on different platforms with "incompatible numeric values of the errno de...
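A minimal illustration of the hazard and one common remedy, assuming a hypothetical wire encoding keyed by the symbolic errno name; this is not Ceph's actual protocol:

```python
import errno

# Hypothetical sketch, not Ceph's wire format: numeric errno values
# differ between platforms (ECANCELED is 125 on Linux but has a
# different value on the BSDs), so an encoding keyed by the symbolic
# name stays portable, mapped back to the local value on receipt.

def encode_errno(local_errno: int) -> str:
    return errno.errorcode[local_errno]   # e.g. 2 -> "ENOENT"

def decode_errno(name: str) -> int:
    return getattr(errno, name)           # back to this host's value

wire = encode_errno(errno.ECANCELED)
assert wire == "ECANCELED"
assert decode_errno(wire) == errno.ECANCELED
assert decode_errno("ENOENT") == errno.ENOENT
```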
- 06:23 AM Backport #64566: squid: Difference in error code returned while removing system xattrs using remo...
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/55805
ceph-backport.sh versi... - 06:17 AM Backport #64564: quincy: Difference in error code returned while removing system xattrs using rem...
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/55802
ceph-backport.sh versi...
02/27/2024
- 06:33 PM Bug #64602 (Fix Under Review): tools/cephfs: cephfs-journal-tool does not recover dentries with a...
- 06:31 PM Bug #64602 (Fix Under Review): tools/cephfs: cephfs-journal-tool does not recover dentries with a...
- https://github.com/ceph/ceph/blob/4a1c26b52121803d1bd0f8c1c06eb856f2add307/src/tools/cephfs/JournalTool.cc#L867-L870
... - 02:17 PM Tasks #63669: qa: add teuthology tests for quiescing a group of subvolumes
- Leonid Usov wrote:
> >> If that is the case, we could cope with a background script config
> > Didn't understand th... - 02:10 PM Tasks #63669: qa: add teuthology tests for quiescing a group of subvolumes
- > then we didn't even need a dedicated thrasher script, just a yaml config file with the shell loop that issues the qu...
- 10:14 AM Backport #64204 (In Progress): quincy: task/test_nfs: AttributeError: 'TestNFS' object has no att...
- 10:07 AM Backport #64205 (In Progress): reef: task/test_nfs: AttributeError: 'TestNFS' object has no attri...
- 09:56 AM Backport #64205: reef: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph...
- Adam, I'm taking this one and the other release backports for this change.
- 09:28 AM Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Trying my luck with this today - hopefully no infra issues show up.
- 05:58 AM Backport #64586 (In Progress): quincy: crash: void Locker::handle_file_lock(ScatterLock*, ceph::c...
- https://github.com/ceph/ceph/pull/56050
- 05:58 AM Backport #64585 (In Progress): squid: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cr...
- https://github.com/ceph/ceph/pull/56051
- 05:58 AM Backport #64584 (In Progress): reef: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cre...
- https://github.com/ceph/ceph/pull/56049
- 05:55 AM Bug #62077 (Fix Under Review): mgr/nfs: validate path when modifying cephfs export
- 05:51 AM Bug #54833 (Pending Backport): crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<ML...
- 05:51 AM Backport #64583 (In Progress): squid: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FU...
- https://github.com/ceph/ceph/pull/55830
- 05:51 AM Backport #64582 (In Progress): reef: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FUL...
- https://github.com/ceph/ceph/pull/55829
- 05:51 AM Backport #64581 (In Progress): quincy: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_F...
- https://github.com/ceph/ceph/pull/55828
- 05:48 AM Bug #63132 (Pending Backport): qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
02/26/2024
- 04:10 PM Bug #64572 (Fix Under Review): workunits/fsx.sh failure
- https://pulpito.ceph.com/vshankar-2024-02-26_05:44:42-fs:workload-wip-vshankar-testing-20240216.060239-testing-defaul...
- 02:51 PM Tasks #63669: qa: add teuthology tests for quiescing a group of subvolumes
- >> If that is the case, we could cope with a background script config
> Didn't understand this question.
I meant ... - 02:48 PM Tasks #63669: qa: add teuthology tests for quiescing a group of subvolumes
- Leonid Usov wrote:
> @Patrick, I have several discussion points wrt the approach
>
> 1. Should we add more client... - 02:32 PM Tasks #63669: qa: add teuthology tests for quiescing a group of subvolumes
- @Patrick, I have several discussion points wrt the approach
1. Should we add more clients and/or more mountpoints ... - 02:17 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- @here - curious if https://tracker.ceph.com/issues/60241 is actually related to this ticket. The former has got compl...
- 01:59 PM Bug #64563 (Triaged): mds: enhance laggy clients detections due to laggy OSDs
- 01:58 PM Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Dhairya,
Before we go into improving the lagginess detection infrastructure, let's verify if there isn't a (corner... - 11:52 AM Bug #64563 (Triaged): mds: enhance laggy clients detections due to laggy OSDs
- Right now the code simply assumes that if there is any laggy OSD and a client got laggy, then it must be due to the O...
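The refinement being discussed, checking for an actual dependency on a laggy OSD before deferring eviction, can be sketched as follows; all names are hypothetical, not MDS code:

```python
# Hypothetical sketch (illustrative names, not MDS code): instead of
# excusing any laggy client whenever some OSD is laggy, only defer
# eviction when the client actually has requests in flight against
# one of the laggy OSDs.

def should_defer_eviction(client, laggy_osds):
    return any(osd in laggy_osds for osd in client["inflight_osds"])

laggy_osds = {3, 7}
client_blocked = {"id": 1, "inflight_osds": {3}}    # waiting on laggy osd.3
client_unrelated = {"id": 2, "inflight_osds": {5}}  # laggy for other reasons

assert should_defer_eviction(client_blocked, laggy_osds)
assert not should_defer_eviction(client_unrelated, laggy_osds)
```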
- 01:39 PM Feature #63936 (Closed): client, libcephfs: enable sparse read capability in libcephfs I/O code p...
- Closed due to the fixes not being targeted anytime soon.
- 01:31 PM Backport #64566 (New): squid: Difference in error code returned while removing system xattrs usin...
- 01:31 PM Backport #64565 (In Progress): reef: Difference in error code returned while removing system xatt...
- https://github.com/ceph/ceph/pull/55803
- 01:31 PM Backport #64564 (New): quincy: Difference in error code returned while removing system xattrs usi...
- 01:30 PM Bug #64542 (Pending Backport): Difference in error code returned while removing system xattrs usi...
- 12:53 PM Bug #64008: mds: CInode::item_caps used in two different lists
- It seems like only the MDS code is using elist, so why not just switch to using xlist? That way we completely avoid t...
- 12:37 PM Bug #64486 (In Progress): qa: enhance labeled perf counters test for cephfs-mirror
- 12:27 PM Bug #61182 (Resolved): qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the...
- 12:26 PM Backport #62176 (Resolved): quincy: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror dae...
- 11:23 AM Bug #62925 (Fix Under Review): cephfs-journal-tool: Add preventive measures in the tool to avoid ...
- 07:36 AM Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- oh well, infra issues now :/...
- 04:41 AM Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- I am working on this. The plan is to gdb the ceph-fuse process after fusermount. Will update by EOD today.
- 05:24 AM Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
- Hi Andras,
Nice to hear from you in the User/Dev monthly meet-up. You had a question related to what exactly is Sn... - 04:48 AM Support #64442: Ceph stripe parallel write
- Hi Nishit,
Nishit Khosla wrote:
> Hello,
>
> We are trying to do performance troubleshooting for cephfs and ex... - 02:37 AM Backport #64222 (In Progress): reef: Test failure: test_filesystem_sync_stuck_for_around_5s (task...
- 02:34 AM Backport #64221 (In Progress): quincy: Test failure: test_filesystem_sync_stuck_for_around_5s (ta...
- 02:28 AM Backport #64076 (In Progress): quincy: testing: Test failure: test_snapshot_remove (tasks.cephfs....
- 02:25 AM Backport #64075 (In Progress): reef: testing: Test failure: test_snapshot_remove (tasks.cephfs.te...
- 02:19 AM Backport #64043 (In Progress): quincy: mds: use explicitly sized types for network and disk encoding
- 02:19 AM Backport #64045 (In Progress): reef: mds: use explicitly sized types for network and disk encoding
02/22/2024
- 06:08 PM Bug #64542 (Pending Backport): Difference in error code returned while removing system xattrs usi...
- During a removexattr() operation for xattrs in the "system." namespace, the kernel client returns ENOTSUP in an early stag...
- 05:47 PM Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Also affects pacific and reef v18.2.0 (possibly v18.2.1 too):
https://pulpito.ceph.com/yuriw-2024-02-21_23:06:32-f... - 05:40 PM Tasks #63708 (Fix Under Review): mds: MDS message transport for inter-rank QuiesceDbManager commu...
- 02:58 PM Feature #63668 (Fix Under Review): pybind/mgr/volumes: add quiesce protocol API
- 12:31 PM Bug #64538 (Fix Under Review): cephfs-shell: hangs and then aborts
- When cephfs-shell is launched, it prints a deprecation warning, hangs, and then aborts -...
- 11:53 AM Bug #64537 (New): mds: lower the log level when rejecting a session reclaim request
- I'm seeing a case where an old NFS Ganesha client got evicted but not due to a reclaim request by the new incarnation...
- 11:13 AM Bug #62925 (In Progress): cephfs-journal-tool: Add preventive measures in the tool to avoid corru...
- 08:29 AM Bug #64534 (New): qa: test_cephfs_mirror_cancel_sync fails in a 100 jobs run of fs:mirror suite
- test_cephfs_mirror_cancel_sync fails in a 100 jobs run of fs:mirror suite
https://pulpito.ceph.com/jcollin-2024-02-... - 06:31 AM Bug #54834 (Duplicate): crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&):...
- 05:24 AM Feature #64531 (New): mds,mgr: identify metadata heavy workloads
- This is coming from the folks in the field - apparently it helps to know early on before the MDS starts throwing up c...
02/21/2024
- 06:13 PM Tasks #64413: File size is not correct after rmw
- In the case of O_TRUNC, I've updated update_inode_file_size() to set effective_size to 0 when size is 0....
- 02:40 PM Backport #64518 (In Progress): reef: mgr/volumes: Support to reject CephFS clones if cloner threa...
- 07:17 AM Backport #64518 (In Progress): reef: mgr/volumes: Support to reject CephFS clones if cloner threa...
- https://github.com/ceph/ceph/pull/55692
- 02:19 PM Backport #64517 (In Progress): quincy: mgr/volumes: Support to reject CephFS clones if cloner thr...
- 07:17 AM Backport #64517 (In Progress): quincy: mgr/volumes: Support to reject CephFS clones if cloner thr...
- https://github.com/ceph/ceph/pull/55690
- 07:08 AM Feature #59714 (Pending Backport): mgr/volumes: Support to reject CephFS clones if cloner threads...
- 01:57 AM Bug #50719: xattr returning from the dead (sic!)
- Austin Axworthy wrote:
> Hello,
>
> I've come across this Ceph issue and noticed it hasn't been updated in 9 mont...
02/20/2024
- 07:04 PM Bug #50719: xattr returning from the dead (sic!)
- Hello,
I've come across this Ceph issue and noticed it hasn't been updated in 9 months. I aim to shed light on thi... - 05:23 PM Tasks #63669 (In Progress): qa: add teuthology tests for quiescing a group of subvolumes
- 05:21 PM Feature #64507 (New): pybind/mgr/snap_schedule: support crash-consistent snapshots
- Right now the module maintains a 1-1 mapping of schedule to subvolume (or path). The module should be enhanced...
- 03:42 PM Feature #64506 (New): qa: update fs:upgrade to test from reef/squid to main
- 02:59 PM Backport #64505 (In Progress): reef: mds: reversed encoding of MDSMap max_xattr_size/bal_rank_mas...
- 02:44 PM Backport #64505 (In Progress): reef: mds: reversed encoding of MDSMap max_xattr_size/bal_rank_mas...
- https://github.com/ceph/ceph/pull/55669
- 02:54 PM Bug #64440: mds: reversed encoding of MDSMap max_xattr_size/bal_rank_mask v18.2.1 <-> main
- Breaking relation to #62724 to allow backport script to work.
- 02:38 PM Bug #64440 (Pending Backport): mds: reversed encoding of MDSMap max_xattr_size/bal_rank_mask v18....
- 02:51 PM Bug #64477 (New): pacific: rados/cephadm/mgr-nfs-upgrade: [WRN] client session with duplicated se...
- Unassigning myself; this is not related to the MDSMap encoding changes.
However, it does look like we're now seein...
- 02:41 PM Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Unassigning myself to return to other high priority tasks.
This issue is only revealed by the fix for i64440 which...
- 02:15 AM Bug #64502 (New): pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Every ceph-fuse mount for quincy fails to unmount for reef->main:
https://pulpito.ceph.com/pdonnell-2024-02-19_18:...
- 02:39 PM Backport #62724 (Resolved): reef: mon/MDSMonitor: optionally forbid to use standby for another fs...
- 05:01 AM Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
- Backport note: additional include commits from https://github.com/ceph/ceph/pull/55660
- 02:18 AM Bug #64503 (Fix Under Review): client: log message when unmount call is received
- 02:16 AM Bug #64503 (Fix Under Review): client: log message when unmount call is received
02/19/2024
- 04:07 PM Bug #64478: Upgrading mon from v18.2.1 to latest-reef-devel image is causing mon to fail when dec...
- Great to hear you're already investigating, thanks!
- 04:01 PM Bug #64490 (Fix Under Review): mds: some request errors come from errno.h rather than fs_types.h
- 03:59 PM Bug #64490 (Fix Under Review): mds: some request errors come from errno.h rather than fs_types.h
- (See future PR for where.)
- 02:02 PM Bug #64477 (In Progress): pacific: rados/cephadm/mgr-nfs-upgrade: [WRN] client session with dupli...
- 01:28 PM Bug #64284 (Won't Fix): client: align get/put caps with kclient
- Dhairya Parmar wrote:
> > While having similar implementation in the kclient and user-space is desired, I don't thin...
- 12:51 PM Bug #62265: cephfs-mirror: use monotonic clocks in cephfs mirror daemon
- Jos, please continue where Manish left off.
- 12:50 PM Bug #62720: mds: identify selinux relabelling and generate health warning
- Chris, please take this one whenever you get some time off from the fscrypt work :)
- 12:42 PM Backport #64484 (In Progress): reef: mds: add per-client perf counters (w/ label) support
- 08:51 AM Backport #64484 (In Progress): reef: mds: add per-client perf counters (w/ label) support
- https://github.com/ceph/ceph/pull/55640
- 12:39 PM Backport #64485 (In Progress): reef: cephfs_mirror: add perf counters (w/ label) support
- 08:51 AM Backport #64485 (In Progress): reef: cephfs_mirror: add perf counters (w/ label) support
- https://github.com/ceph/ceph/pull/55640
- 12:07 PM Bug #57048 (Fix Under Review): osdc/Journaler: better handle ENOENT during replay as up:standby-r...
- 09:33 AM Bug #64486 (Pending Backport): qa: enhance labeled perf counters test for cephfs-mirror
- In particular, verify peer metric counters.
- 08:49 AM Feature #64387 (Pending Backport): mds: add per-client perf counters (w/ label) support
- 08:49 AM Feature #63945 (Pending Backport): cephfs_mirror: add perf counters (w/ label) support
- 08:47 AM Documentation #54551: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work
- I've made the notes a little clearer here: https://github.com/ceph/ceph/pull/55637
I've tried to help the reader d...
- 08:39 AM Documentation #64483 (In Progress): doc: document labelled perf metrics for mds/cephfs-mirror
- 07:56 AM Documentation #64483 (In Progress): doc: document labelled perf metrics for mds/cephfs-mirror
- 07:19 AM Bug #63700: qa: test_cd_with_args failure
- Neeraj, PTAL asap.
- 07:19 AM Bug #63699: qa: failed cephfs-shell test_reading_conf
- Neeraj, PTAL asap.
- 07:13 AM Bug #64149: valgrind+mds/client: gracefully shutdown the mds during valgrind tests
- Kotresh, please take this one (spoke to Milind regarding this before reassigning).
- 04:21 AM Bug #64479 (Fix Under Review): Memory leak detected when accessing a CephFS volume from Samba usi...