Activity
From 08/16/2023 to 09/14/2023
09/14/2023
- 04:07 PM Bug #59413: cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
- /teuthology/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/7395153/teuthol...
- 04:04 PM Bug #59344: qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
- /teuthology/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/7395114/teuthol...
- 04:02 PM Bug #62067: ffsb.sh failure "Resource temporarily unavailable"
- /teuthology/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/7395112/teuthol...
- 04:02 PM Bug #62067: ffsb.sh failure "Resource temporarily unavailable"
- Venky Shankar wrote:
> Duplicate of #62484
Is it? This one gets EAGAIN while #62484 gets EIO. That's interesting...
- 12:26 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Venky Shankar wrote:
> Xiubo/Dhairya, if the MDS can ensure that the sessionmap is persisted _before_ the client sta...
- 05:22 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Xiubo/Dhairya, if the MDS can ensure that the sessionmap is persisted _before_ the client starts using the preallocate...
- 04:53 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Dhairya Parmar wrote:
> Greg Farnum wrote:
> > Xiubo Li wrote:
> > > Greg Farnum wrote:
> > > > I was talking to ...
- 10:17 AM Backport #62843 (New): pacific: Lack of consistency in time format
- 10:17 AM Backport #62842 (New): reef: Lack of consistency in time format
- 10:17 AM Backport #62841 (New): quincy: Lack of consistency in time format
- 10:11 AM Bug #62494 (Pending Backport): Lack of consistency in time format
- 06:30 AM Bug #62698: qa: fsstress.sh fails with error code 124
- Rishabh, have you seen this in any of your very recent runs?
- 06:29 AM Bug #62706 (Can't reproduce): qa: ModuleNotFoundError: No module named XXXXXX
- Please reopen if this shows up again.
- 05:36 AM Backport #62835 (In Progress): quincy: cephfs-top: enhance --dump code to include the missing fields
- 04:20 AM Backport #62835 (Resolved): quincy: cephfs-top: enhance --dump code to include the missing fields
- https://github.com/ceph/ceph/pull/53454
- 04:42 AM Bug #59768 (Duplicate): crash: void EMetaBlob::replay(MDSRank*, LogSegment*, MDPeerUpdate*): asse...
- Duplicate of https://tracker.ceph.com/issues/58489
- 04:34 AM Backport #62834 (In Progress): pacific: cephfs-top: enhance --dump code to include the missing fi...
- 04:19 AM Backport #62834 (Resolved): pacific: cephfs-top: enhance --dump code to include the missing fields
- https://github.com/ceph/ceph/pull/53453
- 04:11 AM Bug #61397 (Pending Backport): cephfs-top: enhance --dump code to include the missing fields
- Venky Shankar wrote:
> Jos, this needs backports, yes?
Yes, needs backport. https://tracker.ceph.com/issues/57014...
- 03:59 AM Bug #61397: cephfs-top: enhance --dump code to include the missing fields
- Jos, this needs backports, yes?
09/13/2023
- 04:55 PM Bug #62793: client: setfattr -x ceph.dir.pin: No such attribute
- It'll be nice if we can handle this just from the MDS side. It may require changes to ceph-fuse and the kclient to pa...
- 01:33 PM Bug #62580: testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
- https://pulpito.ceph.com/yuriw-2023-08-21_21:30:15-fs-wip-yuri2-testing-2023-08-21-0910-pacific-distro-default-smithi/
- 01:32 PM Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- https://pulpito.ceph.com/yuriw-2023-08-21_21:30:15-fs-wip-yuri2-testing-2023-08-21-0910-pacific-distro-default-smithi
- 12:45 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Greg Farnum wrote:
> Xiubo Li wrote:
> > Greg Farnum wrote:
> > > I was talking to Dhairya about this today and am...
- 12:23 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Greg Farnum wrote:
> Xiubo Li wrote:
> > Greg Farnum wrote:
> > > I was talking to Dhairya about this today and am...
- 10:14 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Greg Farnum wrote:
> Xiubo Li wrote:
> > Greg Farnum wrote:
> > > I was talking to Dhairya about this today and am...
- 09:51 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- (I got to this rather late - so excuse me for any discussion that were already resolved).
Dhairya Parmar wrote:
>...
- 11:55 AM Bug #59768: crash: void EMetaBlob::replay(MDSRank*, LogSegment*, MDPeerUpdate*): assert(g_conf()-...
- Neeraj Pratap Singh wrote:
> While I was debugging this issue, it seemed that the issue doesn't exist anymore.
> An...
- 11:32 AM Bug #59768: crash: void EMetaBlob::replay(MDSRank*, LogSegment*, MDPeerUpdate*): assert(g_conf()-...
- While I was debugging this issue, it seemed that the issue doesn't exist anymore.
And I found this PR: https://githu...
- 04:53 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- /a/https://pulpito.ceph.com/vshankar-2023-09-12_06:47:30-fs-wip-vshankar-testing-20230908.065909-testing-default-smit...
- 04:45 AM Bug #61574: qa: build failure for mdtest project
- Rishabh, this requires changes similar to tracker #61399?
- 04:43 AM Bug #61399 (Resolved): qa: build failure for ior
- Rishabh, this change does not need backport, yes?
09/12/2023
- 01:23 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Xiubo Li wrote:
> Greg Farnum wrote:
> > I was talking to Dhairya about this today and am not quite sure I understa... - 01:05 PM Bug #61584: mds: the parent dir's mtime will be overwrote by update cap request when making dirs
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > I reproduced it by creating *dirk4444/dirk5555* a...
- 12:21 PM Bug #61584: mds: the parent dir's mtime will be overwrote by update cap request when making dirs
- Venky Shankar wrote:
> Xiubo Li wrote:
> > I reproduced it by creating *dirk4444/dirk5555* and found the root cause...
- 09:41 AM Bug #61584: mds: the parent dir's mtime will be overwrote by update cap request when making dirs
- Xiubo Li wrote:
> I reproduced it by creating *dirk4444/dirk5555* and found the root cause:
>
> [...]
>
>
> ...
- 12:56 PM Feature #61866 (In Progress): MDSMonitor: require --yes-i-really-mean-it when failing an MDS with...
- I will take a look Venky.
- 12:29 PM Feature #57481 (In Progress): mds: enhance scrub to fragment/merge dirfrags
- 10:26 AM Bug #62482: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled ...
- Additional PR: https://github.com/ceph/ceph/pull/53418
- 04:19 AM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Patrick,
> >
> > Going by the description here, I assume this...
- 03:07 AM Bug #62810: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again
- The old commits will be reverted in https://github.com/ceph/ceph/pull/52199 and this needs to be fixed again.
- 03:07 AM Bug #62810 (New): Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fi...
- 03:07 AM Bug #62810 (New): Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fi...
- https://pulpito.ceph.com/vshankar-2022-04-11_12:24:06-fs-wip-vshankar-testing1-20220411-144044-testing-default-smithi...
09/11/2023
- 06:02 PM Backport #62807 (In Progress): pacific: doc: write cephfs commands in full
- 05:53 PM Backport #62807 (Resolved): pacific: doc: write cephfs commands in full
- https://github.com/ceph/ceph/pull/53403
- 05:57 PM Backport #62806 (In Progress): reef: doc: write cephfs commands in full
- 05:53 PM Backport #62806 (Resolved): reef: doc: write cephfs commands in full
- https://github.com/ceph/ceph/pull/53402
- 05:55 PM Backport #62805 (In Progress): quincy: doc: write cephfs commands in full
- 05:53 PM Backport #62805 (Resolved): quincy: doc: write cephfs commands in full
- https://github.com/ceph/ceph/pull/53401
- 05:51 PM Documentation #62791 (Pending Backport): doc: write cephfs commands in full
- 04:54 PM Documentation #62791 (Resolved): doc: write cephfs commands in full
- 09:57 AM Documentation #62791 (Resolved): doc: write cephfs commands in full
- In @doc/cephfs/administration.rst@ we don't write CephFS commands in full. Example: @ceph fs rename@ is written as @fs...
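A minimal sketch of the convention being requested (file system names are placeholders, and the --yes-i-really-mean-it flag is assumed from the current CLI):

    # abbreviated form currently used in the docs
    fs rename cephfs newfs --yes-i-really-mean-it
    # full form being requested
    ceph fs rename cephfs newfs --yes-i-really-mean-it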
- 04:23 PM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Venky Shankar wrote:
> Patrick,
>
> > Going by the description here, I assume this change is only for the volumes p...
- 03:03 PM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Patrick,
Going by the description here, I assume this change is only for the volumes plugin. In case the changes a...
- 03:00 PM Backport #62799 (In Progress): quincy: qa: run nfs related tests with fs suite
- https://github.com/ceph/ceph/pull/53907
- 03:00 PM Backport #62798 (Rejected): pacific: qa: run nfs related tests with fs suite
- https://github.com/ceph/ceph/pull/53905
- 03:00 PM Backport #62797 (Resolved): reef: qa: run nfs related tests with fs suite
- https://github.com/ceph/ceph/pull/53906
- 02:56 PM Bug #62236 (Pending Backport): qa: run nfs related tests with fs suite
- 02:55 PM Bug #62793: client: setfattr -x ceph.dir.pin: No such attribute
- Chris, please take this one.
- 12:12 PM Bug #62793 (Fix Under Review): client: setfattr -x ceph.dir.pin: No such attribute
- I've come across documents which suggest removing ceph.dir.pin to disable export pins, but it looks like it does not ...
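For context, a hedged sketch of the commands involved (the mount path is a placeholder): export pins are set via the ceph.dir.pin vxattr, the documented way to disable one is to set it to -1, and removing the xattr outright is what this bug is about:

    # pin a directory subtree to MDS rank 1
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/dir
    # documented way to disable the export pin
    setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/dir
    # removing the vxattr is what currently returns "No such attribute"
    setfattr -x ceph.dir.pin /mnt/cephfs/dir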
- 12:31 PM Bug #62658: error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
- Milind, PTAL. I vaguely recall a similar issue you were looking into a while back.
- 12:28 PM Bug #62673: cephfs subvolume resize does not accept 'unit'
- Dhairya, I presume this is a similar change to the one you worked on a while back.
- 12:26 PM Bug #62465 (Can't reproduce): pacific (?): LibCephFS.ShutdownRace segmentation fault
- 12:15 PM Bug #62567: postgres workunit times out - MDS_SLOW_REQUEST in logs
- Xiubo, this might be related to the slow rename issue you have a PR for. Could you please check?
- 12:13 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Xiubo Li wrote:
> Greg Farnum wrote:
> > I was talking to Dhairya about this today and am not quite sure I understa...
- 11:06 AM Bug #62126: test failure: suites/blogbench.sh stops running
- Another instance: https://pulpito.ceph.com/vshankar-2023-09-08_07:03:01-fs-wip-vshankar-testing-20230830.153114-testi...
- 11:04 AM Bug #62682: mon: no mdsmap broadcast after "fs set joinable" is set to true
- Probably another instance - https://pulpito.ceph.com/vshankar-2023-09-08_07:03:01-fs-wip-vshankar-testing-20230830.15...
- 08:01 AM Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_ve...
- Venky Shankar wrote:
> FWIW, logs hint at missing (RADOS) objects:
>
> [...]
>
> I'm not certain yet if this i...
- 06:04 AM Feature #61866: MDSMonitor: require --yes-i-really-mean-it when failing an MDS with MDS_HEALTH_TR...
- Manish, please take this one on prio.
09/10/2023
- 08:50 AM Bug #52723 (Resolved): mds: improve mds_bal_fragment_size_max config option
- 08:50 AM Backport #53122 (Rejected): pacific: mds: improve mds_bal_fragment_size_max config option
- 08:48 AM Backport #57111 (In Progress): quincy: mds: handle deferred client request core when mds reboot
- 08:48 AM Backport #57110 (In Progress): pacific: mds: handle deferred client request core when mds reboot
- 08:46 AM Bug #58651 (Resolved): mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 08:46 AM Backport #59409 (Resolved): reef: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 08:46 AM Backport #59002 (Resolved): quincy: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 08:46 AM Backport #59003 (Resolved): pacific: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 08:46 AM Backport #59032 (Resolved): pacific: Test failure: test_client_cache_size (tasks.cephfs.test_clie...
- 08:45 AM Backport #59719 (Resolved): reef: client: read wild pointer when reconnect to mds
- 08:44 AM Backport #59718 (Resolved): quincy: client: read wild pointer when reconnect to mds
- 08:44 AM Backport #59720 (Resolved): pacific: client: read wild pointer when reconnect to mds
- 08:43 AM Backport #61841 (Resolved): pacific: mds: do not evict clients if OSDs are laggy
- 08:35 AM Backport #62005 (In Progress): quincy: client: readdir_r_cb: get rstat for dir only if using rbyt...
- 08:35 AM Backport #62004 (In Progress): reef: client: readdir_r_cb: get rstat for dir only if using rbytes...
- 08:33 AM Backport #61992 (In Progress): quincy: mds/MDSRank: op_tracker of mds have slow op alway.
- 08:32 AM Backport #61993 (In Progress): reef: mds/MDSRank: op_tracker of mds have slow op alway.
- 08:29 AM Backport #62372 (Resolved): pacific: Consider setting "bulk" autoscale pool flag when automatical...
- 08:28 AM Bug #61907 (Resolved): api tests fail from "MDS_CLIENTS_LAGGY" warning
- 08:28 AM Backport #62443 (Resolved): reef: api tests fail from "MDS_CLIENTS_LAGGY" warning
- 08:28 AM Backport #62441 (Resolved): quincy: api tests fail from "MDS_CLIENTS_LAGGY" warning
- 08:28 AM Backport #62442 (Resolved): pacific: api tests fail from "MDS_CLIENTS_LAGGY" warning
- 08:26 AM Backport #62421 (Resolved): pacific: mds: adjust cap acquistion throttle defaults
09/08/2023
- 04:08 PM Backport #62724 (In Progress): reef: mon/MDSMonitor: optionally forbid to use standby for another...
- 03:59 PM Backport #59405 (In Progress): reef: MDS allows a (kernel) client to exceed the xattrs key/value ...
- 10:52 AM Backport #62738 (In Progress): reef: quota: accept values in human readable format as well
- https://github.com/ceph/ceph/pull/53333
- 10:23 AM Feature #57481: mds: enhance scrub to fragment/merge dirfrags
- Chris, please take this one.
- 07:12 AM Bug #62482: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled ...
- Revert change: https://github.com/ceph/ceph/pull/53331
- 05:54 AM Backport #62585 (In Progress): quincy: mds: enforce a limit on the size of a session in the sessi...
- 05:38 AM Backport #62583 (In Progress): reef: mds: enforce a limit on the size of a session in the sessionmap
- 04:10 AM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Greg Farnum wrote:
> Patrick Donnelly wrote:
> > If we are going to move the metadata out of CephFS, I think it sho...
- 03:21 AM Bug #62537 (Fix Under Review): cephfs scrub command will crash the standby-replay MDSs
- 01:36 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Greg Farnum wrote:
> I was talking to Dhairya about this today and am not quite sure I understand.
>
> Xiubo, Ven...
09/07/2023
- 07:53 PM Bug #62764 (New): qa: use stdin-killer for kclient mounts
- To reduce the number of dead jobs caused by, e.g., an umount command stuck in uninterruptible sleep.
- 07:52 PM Bug #62763 (Fix Under Review): qa: use stdin-killer for ceph-fuse mounts
- To reduce the number of dead jobs caused by, e.g., an umount command stuck in uninterruptible sleep.
- 07:44 PM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Greg Farnum wrote:
> Patrick Donnelly wrote:
> > If we are going to move the metadata out of CephFS, I think it sho...
- 03:21 PM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Patrick Donnelly wrote:
> If we are going to move the metadata out of CephFS, I think it should go in cephsqlite. Th...
- 02:59 PM Bug #61399: qa: build failure for ior
- What fixed this issue is using the latest version of the ior project as well as purging and then re-installing the mpich packa...
- 02:47 PM Bug #61399: qa: build failure for ior
- The PR has been merged just now, I'll check with Venky if this needs to be backported.
- 02:47 PM Bug #61399 (Fix Under Review): qa: build failure for ior
- 01:41 PM Bug #62729 (Resolved): src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘...
- 01:10 PM Feature #47264 (Resolved): "fs authorize" subcommand should work for multiple FSs too
- 12:25 PM Feature #62364: support dumping rstats on a particular path
- Venky Shankar wrote:
> Venky Shankar wrote:
> > Greg Farnum wrote:
> > > Venky Shankar wrote:
> > > > Greg Farnum...
- 12:05 PM Feature #62364: support dumping rstats on a particular path
- Venky Shankar wrote:
> Greg Farnum wrote:
> > Venky Shankar wrote:
> > > Greg Farnum wrote:
> > > > Especially no...
- 11:07 AM Bug #62739 (Pending Backport): cephfs-shell: remove distutils Version classes because they're dep...
- python 3.10 deprecated distutils [0]. LooseVersion is used at many places in cephfs-shell.py, suggest switching to pa...
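A minimal sketch of the suggested switch, assuming the packaging module is available in the environment where cephfs-shell runs (the version strings are illustrative):

    # deprecated since Python 3.10 (PEP 632), removed in 3.12
    from distutils.version import LooseVersion
    assert LooseVersion("17.2.6") >= LooseVersion("17.2.0")

    # suggested replacement
    from packaging.version import Version
    assert Version("17.2.6") >= Version("17.2.0")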
- 10:16 AM Backport #62738 (In Progress): reef: quota: accept values in human readable format as well
- 10:07 AM Feature #55940 (Pending Backport): quota: accept values in human readable format as well
09/06/2023
- 09:53 PM Bug #62729: src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘ffs’?
- Last successful main shaman build: https://shaman.ceph.com/builds/ceph/main/794f4d16c6c8bf35729045062d24322d30b5aa14/...
- 09:32 PM Bug #62729: src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘ffs’?
- Laura suspected merging https://github.com/ceph/ceph/pull/51942 led to this issue. I've built the PR branch (@wip-61...
- 07:48 PM Bug #62729: src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘ffs’?
- https://shaman.ceph.com/builds/ceph/main/f9a01cf3851ffa2c51b5fb84e304c1481f35fe03/
- 07:48 PM Bug #62729 (Resolved): src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘...
- https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos8,DIST=centos8,MAC...
- 08:49 PM Backport #62733 (Resolved): reef: mds: add TrackedOp event for batching getattr/lookup
- https://github.com/ceph/ceph/pull/53558
- 08:49 PM Backport #62732 (Resolved): quincy: mds: add TrackedOp event for batching getattr/lookup
- https://github.com/ceph/ceph/pull/53557
- 08:49 PM Backport #62731 (Resolved): pacific: mds: add TrackedOp event for batching getattr/lookup
- https://github.com/ceph/ceph/pull/53556
- 08:44 PM Bug #62057 (Pending Backport): mds: add TrackedOp event for batching getattr/lookup
- 06:47 PM Backport #62419 (Resolved): reef: mds: adjust cap acquistion throttle defaults
- https://github.com/ceph/ceph/pull/52972#issuecomment-1708910842
- 05:30 PM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Dhairya Parmar wrote:
> Venky Shankar wrote:
>
> > which prompted a variety of code changes to workaround the pro...
- 02:21 PM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Dhairya Parmar wrote:
> Venky Shankar wrote:
>
> > which prompted a variety of code changes to workaround the pro...
- 09:02 AM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- Venky Shankar wrote:
> which prompted a variety of code changes to workaround the problem. This all carries a size...
- 06:14 AM Feature #62715 (New): mgr/volumes: switch to storing subvolume metadata in libcephsqlite
- A bit of history: The subvolume thing started out as a directory structure in the file system (and that is still the ...
- 03:09 PM Backport #62726 (New): quincy: mon/MDSMonitor: optionally forbid to use standby for another fs as...
- 03:09 PM Backport #62725 (Rejected): pacific: mon/MDSMonitor: optionally forbid to use standby for another...
- 03:09 PM Backport #62724 (In Progress): reef: mon/MDSMonitor: optionally forbid to use standby for another...
- https://github.com/ceph/ceph/pull/53340
- 03:00 PM Feature #61599 (Pending Backport): mon/MDSMonitor: optionally forbid to use standby for another f...
- 09:58 AM Bug #62706: qa: ModuleNotFoundError: No module named XXXXXX
- I too ran into this in one of my runs. I believe this is an env thing since a bunch of other tests from my run had is...
- 09:57 AM Bug #62501: pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snaps...
- Dhairya, please take this one.
- 09:45 AM Bug #62674: cephfs snapshot remains visible in nfs export after deletion and new snaps not shown
- https://tracker.ceph.com/issues/58376 is the one reported by a community user.
- 08:55 AM Bug #62674 (Duplicate): cephfs snapshot remains visible in nfs export after deletion and new snap...
- Hi Paul,
You are probably running into https://tracker.ceph.com/issues/59041 - at least for the part for listing s...
- 09:41 AM Bug #62682 (Triaged): mon: no mdsmap broadcast after "fs set joinable" is set to true
- The upgrade process uses `fail_fs` which fails the file system and upgrades the MDSs without reducing max_mds to 1. I...
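For reference, a hedged sketch of the sequence this corresponds to when done by hand (the file system name is a placeholder); the report is that the mon does not broadcast the updated mdsmap after the last step:

    # fail the file system without first reducing max_mds to 1
    ceph fs fail cephfs
    # ... upgrade/restart the MDS daemons ...
    # allow MDSs to join the file system again
    ceph fs set cephfs joinable true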
- 09:34 AM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
- Patrick Donnelly wrote:
> Even with the recent changes to the ceph-mgr (#51177) to have a separate finisher thread f...
- 08:46 AM Feature #62668: qa: use teuthology scripts to test dozens of clients
- Patrick Donnelly wrote:
> We have one small suite for integration testing of multiple clients:
>
> https://github...
- 08:32 AM Feature #62670: [RFE] cephfs should track and expose subvolume usage and quota
- Paul Cuzner wrote:
> Subvolumes may be queried independently, but at scale we need a way for subvolume usage and quo...
- 06:57 AM Bug #62720 (New): mds: identify selinux relabelling and generate health warning
- This request has come up from folks in the field. A recursive relabel on a file system brings the mds down to its kne...
- 05:54 AM Backport #59200 (Rejected): reef: qa: add testing in fs:workload for different kinds of subvolumes
- Available in reef.
- 05:53 AM Backport #59201 (Resolved): quincy: qa: add testing in fs:workload for different kinds of subvolumes
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50974
Merged.
- 12:48 AM Bug #62700: postgres workunit failed with "PQputline failed"
- Another one https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/73...
- 12:47 AM Fix #51177 (Resolved): pybind/mgr/volumes: investigate moving calls which may block on libcephfs ...
- 12:47 AM Backport #59417 (Resolved): pacific: pybind/mgr/volumes: investigate moving calls which may block...
09/05/2023
- 09:54 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- /a/yuriw-2023-09-01_19:14:47-rados-wip-batrick-testing-20230831.124848-pacific-distro-default-smithi/7386551
- 08:09 PM Fix #62712 (New): pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when unde...
- Even with the recent changes to the ceph-mgr (#51177) to have a separate finisher thread for each module, the request...
- 07:42 PM Backport #62268 (Resolved): pacific: qa: _test_stale_caps does not wait for file flush before stat
- 07:42 PM Backport #62517 (Resolved): pacific: mds: inode snaplock only acquired for open in create codepath
- 07:41 PM Backport #62662 (Resolved): pacific: mds: deadlock when getattr changes inode lockset
- 02:53 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- PR https://github.com/ceph/ceph/pull/52924 has been merged for fixing this issue. Original PR https://github.com/ceph...
- 02:51 PM Bug #62084 (Resolved): task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph...
- 01:36 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- I was talking to Dhairya about this today and am not quite sure I understand.
Xiubo, Venky, are we contending the ...
- 12:56 PM Feature #61863: mds: issue a health warning with estimated time to complete replay
- Manish Yathnalli wrote:
> https://github.com/ceph/ceph/pull/52527
Manish, the PR id is linked in the "Pull reques...
- 12:42 PM Feature #61863 (Fix Under Review): mds: issue a health warning with estimated time to complete re...
- https://github.com/ceph/ceph/pull/52527
- 12:42 PM Bug #62265 (Fix Under Review): cephfs-mirror: use monotonic clocks in cephfs mirror daemon
- https://github.com/ceph/ceph/pull/53283
- 11:41 AM Bug #62706 (Pending Backport): qa: ModuleNotFoundError: No module named XXXXXX
- https://pulpito.ceph.com/rishabh-2023-08-10_20:13:47-fs-wip-rishabh-2023aug3-b4-testing-default-smithi/7365558/
...
- 05:27 AM Bug #62702 (Fix Under Review): MDS slow requests for the internal 'rename' requests
- 04:43 AM Bug #62702 (Pending Backport): MDS slow requests for the internal 'rename' requests
- https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/7378922
<...
- 04:34 AM Bug #56397 (In Progress): client: `df` will show incorrect disk size if the quota size is not ali...
- 04:34 AM Bug #56397: client: `df` will show incorrect disk size if the quota size is not aligned to 4MB
- Revert PR: https://github.com/ceph/ceph/pull/53153
- 01:03 AM Bug #62700 (Fix Under Review): postgres workunit failed with "PQputline failed"
- The scale factor will depend on the node's performance and disk sizes being used to run the test, and 500 seems too l...
- 12:53 AM Bug #62700 (Resolved): postgres workunit failed with "PQputline failed"
- https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/7365718/teutho...
09/04/2023
- 03:29 PM Bug #62698: qa: fsstress.sh fails with error code 124
- Copying following log entries on behalf of Radoslaw -...
- 03:21 PM Bug #62698: qa: fsstress.sh fails with error code 124
- These messages mean there was not even a single successful exchange of network heartbeat messages between osd.5 and (o...
- 02:58 PM Bug #62698 (Can't reproduce): qa: fsstress.sh fails with error code 124
- https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/7379296
The...
- 02:26 PM Backport #62696 (Rejected): reef: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have a...
- 02:26 PM Backport #62695 (Rejected): quincy: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have...
- 02:18 PM Bug #62482 (Pending Backport): qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an a...
- 11:15 AM Feature #1680: support reflink (cheap file copy/clone)
- This feature would really be appreciated. We would like to switch to Ceph for our cluster storage, but we rely heavil...
- 09:36 AM Bug #62676: cephfs-mirror: 'peer_bootstrap import' hangs
- If this is just a perception issue then a message to the user like "You need to wait for 5 minutes for this command t...
- 07:39 AM Bug #62676 (Closed): cephfs-mirror: 'peer_bootstrap import' hangs
- This is not a bug, it seems. It waits for 5 minutes for the secrets to expire. Don't press Ctrl+C; just wait for 5 min...
- 08:56 AM Bug #62494 (In Progress): Lack of consistency in time format
- 08:53 AM Backport #59408 (In Progress): reef: cephfs_mirror: local and remote dir root modes are not same
- 08:52 AM Backport #59001 (In Progress): pacific: cephfs_mirror: local and remote dir root modes are not same
- 06:31 AM Bug #62682 (Resolved): mon: no mdsmap broadcast after "fs set joinable" is set to true
- ...
- 12:45 AM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
- Sudhin Bengeri wrote:
> Hi Xiubo,
>
> Here is the uname -a output from the nodes:
> Linux wkhd 6.3.0-rc4+ #6 SMP...
09/03/2023
- 02:40 PM Bug #61950: mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_scrub_large_omap_o...
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Patrick - I can take this one if you haven't started on it yet.
...
09/02/2023
- 09:05 PM Backport #62569 (In Progress): pacific: ceph_fs.h: add separate owner_{u,g}id fields
- 09:05 PM Backport #62570 (In Progress): reef: ceph_fs.h: add separate owner_{u,g}id fields
- 09:05 PM Backport #62571 (In Progress): quincy: ceph_fs.h: add separate owner_{u,g}id fields
09/01/2023
- 06:58 PM Bug #50250 (New): mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/cli...
- https://pulpito.ceph.com/pdonnell-2023-08-31_15:31:51-fs-wip-batrick-testing-20230831.124848-pacific-distro-default-s...
- 09:05 AM Bug #62676 (Closed): cephfs-mirror: 'peer_bootstrap import' hangs
- 'peer_bootstrap import' command hangs subsequent to using wrong/invalid token to import. If we use an invalid token i...
08/31/2023
- 11:01 PM Bug #62674 (Duplicate): cephfs snapshot remains visible in nfs export after deletion and new snap...
- When a snapshot is taken of the subvolume, the .snap directory shows the snapshot when viewed from the NFS mount and ...
- 10:26 PM Bug #62673 (New): cephfs subvolume resize does not accept 'unit'
- Specifying the quota or resize for a subvolume requires the value in bytes. This value should be accepted as <num><un...
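For illustration, a sketch of the current versus requested usage (volume and subvolume names are placeholders; only the plain byte count is accepted today, and the suffixed form is the behaviour being asked for):

    # accepted today: the new size must be given in bytes (10 GiB here)
    ceph fs subvolume resize vol1 subvol1 10737418240
    # requested: also accept <num><unit>
    ceph fs subvolume resize vol1 subvol1 10G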
- 10:06 PM Feature #62670 (Need More Info): [RFE] cephfs should track and expose subvolume usage and quota
- Subvolumes may be queried independently, but at scale we need a way for subvolume usage and quota thresholds to drive...
- 06:34 PM Feature #62668 (New): qa: use teuthology scripts to test dozens of clients
- We have one small suite for integration testing of multiple clients:
https://github.com/ceph/ceph/tree/9d7c1825783...
- 03:35 PM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
- Hi Xiubo,
Here is the uname -a output from the nodes:
Linux wkhd 6.3.0-rc4+ #6 SMP PREEMPT_DYNAMIC Mon May 22 22:...
- 03:44 AM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
- Sudhin Bengeri wrote:
> Hi Xiubo,
>
> Thanks for your response.
>
> Are you saying that cephfs does not suppo...
- 12:35 PM Backport #62662 (In Progress): pacific: mds: deadlock when getattr changes inode lockset
- 12:02 PM Backport #62662 (Resolved): pacific: mds: deadlock when getattr changes inode lockset
- https://github.com/ceph/ceph/pull/53243
- 12:34 PM Backport #62660 (In Progress): quincy: mds: deadlock when getattr changes inode lockset
- 12:01 PM Backport #62660 (In Progress): quincy: mds: deadlock when getattr changes inode lockset
- https://github.com/ceph/ceph/pull/53242
- 12:34 PM Bug #62664 (New): ceph-fuse: failed to remount for kernel dentry trimming; quitting!
- Hi,
While #62604 is being addressed I wanted to try the ceph-fuse client. I'm using the same setup with kernel 6.4...
- 12:34 PM Backport #62661 (In Progress): reef: mds: deadlock when getattr changes inode lockset
- 12:01 PM Backport #62661 (In Progress): reef: mds: deadlock when getattr changes inode lockset
- https://github.com/ceph/ceph/pull/53241
- 12:31 PM Bug #62663 (Can't reproduce): MDS: inode nlink value is -1 causing MDS to continuously crash
- All MDS daemons are continuously crashing. The logs are reporting an inode nlink value is set to -1. I have included ...
- 11:56 AM Bug #62052 (Pending Backport): mds: deadlock when getattr changes inode lockset
- 09:41 AM Bug #62580 (Fix Under Review): testing: Test failure: test_snapshot_remove (tasks.cephfs.test_str...
- 08:57 AM Bug #62580 (In Progress): testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.T...
- Xiubo Li wrote:
> This should duplicate with https://tracker.ceph.com/issues/61892, which hasn't been backported to ...
- 05:30 AM Bug #62580 (Duplicate): testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.Tes...
- This should duplicate with https://tracker.ceph.com/issues/61892, which hasn't been backported to Pacific yet.
- 09:30 AM Bug #62658 (Pending Backport): error during scrub thrashing: reached maximum tries (31) after wai...
- /a/vshankar-2023-08-24_07:29:19-fs-wip-vshankar-testing-20230824.045828-testing-default-smithi/7378338...
- 07:10 AM Bug #62653 (New): qa: unimplemented fcntl command: 1036 with fsstress
- /a/vshankar-2023-08-24_07:29:19-fs-wip-vshankar-testing-20230824.045828-testing-default-smithi/7378422
Happens wit...
08/30/2023
- 08:54 PM Bug #62648 (New): pybind/mgr/volumes: volume rm freeze waiting for async job on fs to complete
- ...
- 02:16 PM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
- Hi Xiubo,
Thanks for your response.
Are you saying that cephfs does not support fscrypt? I am not exactly sure...
- 05:35 AM Feature #45021 (In Progress): client: new asok commands for diagnosing cap handling issues
08/29/2023
- 12:18 PM Bug #62626: mgr/nfs: include pseudo in JSON output when nfs export apply -i fails
- Dhairya, could you link the commit which started causing this? (I recall we discussed a bit about this)
- 10:49 AM Bug #62626 (In Progress): mgr/nfs: include pseudo in JSON output when nfs export apply -i fails
- Currently, when export update fails, this is the response:...
- 09:56 AM Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_ve...
- Venky Shankar wrote:
> FWIW, logs hint at missing (RADOS) objects:
>
> [...]
>
> I'm not certain yet if this i...
- 09:39 AM Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_ve...
- FWIW, logs hint at missing (RADOS) objects:...
- 09:40 AM Backport #61987 (Resolved): reef: mds: session ls command appears twice in command listing
- 09:40 AM Backport #61988 (Resolved): quincy: mds: session ls command appears twice in command listing
- 05:46 AM Feature #61904: pybind/mgr/volumes: add more introspection for clones
- Rishabh, please take this one (along the same lines as https://tracker.ceph.com/issues/61905).
08/28/2023
- 01:33 PM Backport #62517 (In Progress): pacific: mds: inode snaplock only acquired for open in create code...
- 01:32 PM Backport #62516 (In Progress): quincy: mds: inode snaplock only acquired for open in create codepath
- 01:32 PM Backport #62518 (In Progress): reef: mds: inode snaplock only acquired for open in create codepath
- 01:17 PM Backport #62539 (Rejected): reef: qa: Health check failed: 1 pool(s) do not have an application e...
- 01:17 PM Backport #62538 (Rejected): quincy: qa: Health check failed: 1 pool(s) do not have an application...
- 01:17 PM Bug #62508 (Duplicate): qa: Health check failed: 1 pool(s) do not have an application enabled (PO...
- 12:24 PM Documentation #62605: cephfs-journal-tool: update parts of code that need mandatory --rank
- Good catch.
- 12:14 PM Documentation #62605 (New): cephfs-journal-tool: update parts of code that need mandatory --rank
- For instance, if someone refers to [0] to export the journal to a file, it says to run ...
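For example, a hedged sketch of the invocation in question (file system name, rank and output path are placeholders); the --rank argument is mandatory and the docs should spell it out:

    # errors out asking for the mandatory --rank argument
    cephfs-journal-tool journal export backup.bin
    # expected form with the rank specified
    cephfs-journal-tool --rank=cephfs:0 journal export backup.bin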
- 12:16 PM Bug #62537: cephfs scrub command will crash the standby-replay MDSs
- Neeraj, please take this one.
- 12:09 PM Tasks #62159 (In Progress): qa: evaluate mds_partitioner
- 12:08 PM Bug #62067 (Duplicate): ffsb.sh failure "Resource temporarily unavailable"
- Duplicate of #62484
- 12:06 PM Feature #62157 (In Progress): mds: working set size tracker
- Hi Yongseok,
Assigning this to you since I presume this is being worked on alongside the partitioner module.
- 11:59 AM Feature #62215 (Rejected): libcephfs: Allow monitoring for any file changes like inotify
- Nothing planned for the foreseeable future related to this feature request.
- 11:11 AM Backport #62443: reef: api tests fail from "MDS_CLIENTS_LAGGY" warning
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53005
the above PR has been closed and the commit has been ...
- 11:08 AM Backport #62441: quincy: api tests fail from "MDS_CLIENTS_LAGGY" warning
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53006
the above PR has been closed and the commit has bee...
- 09:19 AM Bug #59413 (Fix Under Review): cephfs: qa snaptest-git-ceph.sh failed with "got remote process re...
- 08:46 AM Bug #62510 (Duplicate): snaptest-git-ceph.sh failure with fs/thrash
- Xiubo Li wrote:
> Venky Shankar wrote:
> > /a/vshankar-2023-08-16_11:13:33-fs-wip-vshankar-testing-20230809.035933-...
- 06:46 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Another one, but with kclient
> >
> > > https://pulpito.ceph.com/vsha...
- 02:41 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Venky Shankar wrote:
> Another one, but with kclient
>
> > https://pulpito.ceph.com/vshankar-2023-08-23_03:59:53-...
- 02:29 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Venky Shankar wrote:
> /a/vshankar-2023-08-16_11:13:33-fs-wip-vshankar-testing-20230809.035933-testing-default-smith...
- 06:22 AM Bug #62278: pybind/mgr/volumes: pending_subvolume_deletions count is always zero in fs volume inf...
- Backport note: also include commit(s) from https://github.com/ceph/ceph/pull/52940
08/27/2023
- 09:06 AM Backport #62572 (In Progress): pacific: mds: add cap acquisition throttled event to MDR
- https://github.com/ceph/ceph/pull/53169
- 09:05 AM Backport #62573 (In Progress): reef: mds: add cap acquisition throttled event to MDR
- https://github.com/ceph/ceph/pull/53168
- 09:05 AM Backport #62574 (In Progress): quincy: mds: add cap acquisition throttled event to MDR
- https://github.com/ceph/ceph/pull/53167
08/25/2023
- 01:22 PM Backport #62585 (In Progress): quincy: mds: enforce a limit on the size of a session in the sessi...
- https://github.com/ceph/ceph/pull/53330
- 01:22 PM Backport #62584 (In Progress): pacific: mds: enforce a limit on the size of a session in the sess...
- https://github.com/ceph/ceph/pull/53634
- 01:21 PM Backport #62583 (Resolved): reef: mds: enforce a limit on the size of a session in the sessionmap
- https://github.com/ceph/ceph/pull/53329
- 01:17 PM Bug #61947 (Pending Backport): mds: enforce a limit on the size of a session in the sessionmap
- 02:54 AM Bug #62580: testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
- I will work on it.
- 01:42 AM Bug #62580 (Fix Under Review): testing: Test failure: test_snapshot_remove (tasks.cephfs.test_str...
- ...
- 02:38 AM Bug #62510 (In Progress): snaptest-git-ceph.sh failure with fs/thrash
- 01:35 AM Bug #62579 (Fix Under Review): client: evicted warning because client completes unmount before th...
- 01:32 AM Bug #62579 (Pending Backport): client: evicted warning because client completes unmount before th...
- ...
08/24/2023
- 08:02 PM Bug #62577 (Fix Under Review): mds: log a message when exiting due to asok "exit" command
- 07:43 PM Bug #62577 (Pending Backport): mds: log a message when exiting due to asok "exit" command
- So it's clear what caused the call to suicide.
- 03:27 PM Backport #61691: quincy: mon failed to return metadata for mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52228
merged
- 12:29 PM Bug #62381 (In Progress): mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() ...
- 12:15 PM Backport #62574 (Resolved): quincy: mds: add cap acquisition throttled event to MDR
- 12:14 PM Backport #62573 (Resolved): reef: mds: add cap acquisition throttled event to MDR
- 12:14 PM Backport #62572 (Resolved): pacific: mds: add cap acquisition throttled event to MDR
- https://github.com/ceph/ceph/pull/53169
- 12:14 PM Backport #62571 (Resolved): quincy: ceph_fs.h: add separate owner_{u,g}id fields
- https://github.com/ceph/ceph/pull/53139
- 12:14 PM Backport #62570 (Resolved): reef: ceph_fs.h: add separate owner_{u,g}id fields
- https://github.com/ceph/ceph/pull/53138
- 12:14 PM Backport #62569 (Rejected): pacific: ceph_fs.h: add separate owner_{u,g}id fields
- https://github.com/ceph/ceph/pull/53137
- 12:07 PM Bug #62217 (Pending Backport): ceph_fs.h: add separate owner_{u,g}id fields
- 12:06 PM Bug #59067 (Pending Backport): mds: add cap acquisition throttled event to MDR
- 12:03 PM Bug #62567 (Won't Fix): postgres workunit times out - MDS_SLOW_REQUEST in logs
- /a/vshankar-2023-08-23_03:59:53-fs-wip-vshankar-testing-20230822.060131-testing-default-smithi/7377197...
- 10:09 AM Bug #62484: qa: ffsb.sh test failure
- Another instance in main branch: /a/vshankar-2023-08-23_03:59:53-fs-wip-vshankar-testing-20230822.060131-testing-defa...
- 10:07 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Another one, but with kclient
> https://pulpito.ceph.com/vshankar-2023-08-23_03:59:53-fs-wip-vshankar-testing-2023...
- 12:47 AM Bug #62435 (Need More Info): Pod unable to mount fscrypt encrypted cephfs PVC when it moves to an...
- Hi Sudhin,
This is not cephfs *fscrypt*. You are encrypting from the disk layer, not the filesystem layer. My unde...
08/23/2023
- 09:07 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- /a/yuriw-2023-08-21_23:10:07-rados-pacific-release-distro-default-smithi/7375005
- 07:01 PM Bug #62556 (Resolved): qa/cephfs: dependencies listed in xfstests_dev.py are outdated
- @python2@ is one of the dependencies for @xfstests-dev@ that is listed in @xfstests_dev.py@ and @python2@ is not avai...
- 12:18 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- /a/yuriw-2023-08-22_18:16:03-rados-wip-yuri10-testing-2023-08-17-1444-distro-default-smithi/7376742
- 05:44 AM Backport #62539 (In Progress): reef: qa: Health check failed: 1 pool(s) do not have an applicatio...
- https://github.com/ceph/ceph/pull/54380
- 05:43 AM Backport #62538 (In Progress): quincy: qa: Health check failed: 1 pool(s) do not have an applicat...
- https://github.com/ceph/ceph/pull/53863
- 05:38 AM Bug #62508 (Pending Backport): qa: Health check failed: 1 pool(s) do not have an application enab...
- 02:46 AM Bug #62537 (Fix Under Review): cephfs scrub command will crash the standby-replay MDSs
- ...
08/22/2023
- 06:23 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- /a/yuriw-2023-08-17_21:18:20-rados-wip-yuri11-testing-2023-08-17-0823-distro-default-smithi/7372041
- 12:30 PM Bug #61399: qa: build failure for ior
- https://github.com/ceph/ceph/pull/52416 was merged accidentally (and then reverted). I've opened a new PR for the same patc...
- 08:54 AM Cleanup #4744 (In Progress): mds: pass around LogSegments via std::shared_ptr
- 08:03 AM Backport #62524 (Resolved): reef: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLO...
- https://github.com/ceph/ceph/pull/53661
- 08:03 AM Backport #62523 (Resolved): pacific: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_...
- https://github.com/ceph/ceph/pull/53662
- 08:03 AM Backport #62522 (Resolved): quincy: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_X...
- https://github.com/ceph/ceph/pull/53663
- 08:03 AM Backport #62521 (Resolved): reef: client: FAILED ceph_assert(_size == 0)
- https://github.com/ceph/ceph/pull/53666
- 08:03 AM Backport #62520 (In Progress): pacific: client: FAILED ceph_assert(_size == 0)
- https://github.com/ceph/ceph/pull/53981
- 08:03 AM Backport #62519 (Resolved): quincy: client: FAILED ceph_assert(_size == 0)
- https://github.com/ceph/ceph/pull/53664
- 08:02 AM Backport #62518 (In Progress): reef: mds: inode snaplock only acquired for open in create codepath
- https://github.com/ceph/ceph/pull/53183
- 08:02 AM Backport #62517 (Resolved): pacific: mds: inode snaplock only acquired for open in create codepath
- https://github.com/ceph/ceph/pull/53185
- 08:02 AM Backport #62516 (In Progress): quincy: mds: inode snaplock only acquired for open in create codepath
- https://github.com/ceph/ceph/pull/53184
- 08:02 AM Backport #62515 (Resolved): reef: Error: Unable to find a match: python2 with fscrypt tests
- https://github.com/ceph/ceph/pull/53624
- 08:02 AM Backport #62514 (Rejected): pacific: Error: Unable to find a match: python2 with fscrypt tests
- https://github.com/ceph/ceph/pull/53625
- 08:02 AM Backport #62513 (In Progress): quincy: Error: Unable to find a match: python2 with fscrypt tests
- https://github.com/ceph/ceph/pull/53626
- 07:57 AM Bug #44565 (Pending Backport): src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK ...
- 07:56 AM Bug #56698 (Pending Backport): client: FAILED ceph_assert(_size == 0)
- 07:55 AM Bug #62058 (Pending Backport): mds: inode snaplock only acquired for open in create codepath
- 07:54 AM Bug #62277 (Pending Backport): Error: Unable to find a match: python2 with fscrypt tests
- 07:17 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Venky Shankar wrote:
> Xiubo, please take this one.
Sure.
- 06:56 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Xiubo, please take this one.
- 06:55 AM Bug #62510 (Duplicate): snaptest-git-ceph.sh failure with fs/thrash
- /a/vshankar-2023-08-16_11:13:33-fs-wip-vshankar-testing-20230809.035933-testing-default-smithi/7369825...
- 07:02 AM Bug #62511 (New): src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
- /a/vshankar-2023-08-09_05:46:29-fs-wip-vshankar-testing-20230809.035933-testing-default-smithi/7363998...
- 06:22 AM Feature #58877 (Rejected): mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
- Update from internal discussion:
Given the complexities involved with the details mentioned in note-6, it's risky t...
- 06:16 AM Bug #62508 (Fix Under Review): qa: Health check failed: 1 pool(s) do not have an application enab...
- 06:12 AM Bug #62508 (Duplicate): qa: Health check failed: 1 pool(s) do not have an application enabled (PO...
- https://pulpito.ceph.com/yuriw-2023-08-18_20:13:47-fs-main-distro-default-smithi/
Fallout of https://github.com/ce...
- 05:49 AM Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
- Neeraj Pratap Singh wrote:
> @vshankar @kotresh Since, I was on sick leave yesterday. I saw the discussion made on t... - 05:37 AM Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
- @vshankar @kotresh Since, I was on sick leave yesterday. I saw the discussion made on the PR today. Seeing the final ...
- 05:31 AM Bug #62246: qa/cephfs: test_mount_mon_and_osd_caps_present_mds_caps_absent fails
- Rishabh, were you able to push a fix for this?
- 05:31 AM Bug #62188: AttributeError: 'RemoteProcess' object has no attribute 'read'
- https://pulpito.ceph.com/vshankar-2023-08-16_11:14:57-fs-wip-vshankar-testing-20230809.035933-testing-default-smithi/...
- 04:21 AM Backport #59202 (Resolved): pacific: qa: add testing in fs:workload for different kinds of subvol...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51509
Merged.
08/21/2023
- 04:07 PM Backport #62421: pacific: mds: adjust cap acquistion throttle defaults
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52974
merged
- 04:06 PM Backport #61841: pacific: mds: do not evict clients if OSDs are laggy
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/52270
merged
- 03:36 PM Backport #61793: pacific: mgr/snap_schedule: catch all exceptions to avoid crashing module
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52753
merged
- 03:30 PM Bug #62501 (New): pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume...
- Probably a misconfiguration allowing OSDs to actually run out of space during the test instead of the OSD refusing further ...
- 02:15 PM Feature #61334: cephfs-mirror: use snapdiff api for efficient tree traversal
- Jos,
The crux of the changes will be in PeerReplayer::do_synchronize() which if you see does:...
- 02:00 PM Feature #62364: support dumping rstats on a particular path
- Greg Farnum wrote:
> Venky Shankar wrote:
> > Greg Farnum wrote:
> > > Especially now that we have rstats disabled...
- 01:03 PM Feature #62364: support dumping rstats on a particular path
- Venky Shankar wrote:
> Greg Farnum wrote:
> > Especially now that we have rstats disabled by default,
>
> When d...
- 01:50 PM Bug #62485: quincy (?): pybind/mgr/volumes: subvolume rm timeout
- > 2023-08-09T05:20:40.495+0000 7f0e05951700 0 [volumes DEBUG mgr_util] locking <locked _thread.lock object at 0x7f0e...
- 01:28 PM Bug #62494: Lack of consistency in time format
- Eugen Block wrote:
> I wanted to test cephfs snapshots in latest Reef and noticed a discrepancy when it comes to tim...
- 09:52 AM Bug #62494 (Pending Backport): Lack of consistency in time format
- I wanted to test cephfs snapshots in latest Reef and noticed a discrepancy when it comes to time format, for example ...
- 01:10 PM Bug #62484 (Triaged): qa: ffsb.sh test failure
- 10:07 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Venky Shankar wrote:
> Xiubo Li wrote:
> > This should be a similar issue with https://tracker.ceph.com/issues/5848...
- 09:43 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Xiubo Li wrote:
> This should be a similar issue with https://tracker.ceph.com/issues/58489. Just the *openc/mknod/m...
- 05:34 AM Bug #62074 (Resolved): cephfs-shell: ls command has help message of cp command
- 05:23 AM Feature #61777 (Fix Under Review): mds: add ceph.dir.bal.mask vxattr
- 02:12 AM Backport #59199 (Resolved): quincy: cephfs: qa enables kclient for newop test
- 02:12 AM Bug #59343 (Resolved): qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- 02:12 AM Backport #62045 (Resolved): quincy: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- 02:11 AM Bug #49912 (Resolved): client: dir->dentries inconsistent, both newname and oldname points to sam...
- 02:11 AM Backport #62010 (Resolved): quincy: client: dir->dentries inconsistent, both newname and oldname ...
- 01:21 AM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
- Venky Shankar wrote:
> Xiubo, please take this one.
Sure.
- 01:20 AM Bug #58340 (Resolved): mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegr...
- 01:20 AM Backport #61348 (Resolved): quincy: mds: fsstress.sh hangs with multimds (deadlock between unlink...
- 01:20 AM Bug #59705 (Resolved): client: only wait for write MDS OPs when unmounting
- 01:20 AM Backport #61796 (Resolved): quincy: client: only wait for write MDS OPs when unmounting
- 01:20 AM Bug #61523 (Resolved): client: do not send metrics until the MDS rank is ready
- 01:19 AM Backport #62042 (Resolved): quincy: client: do not send metrics until the MDS rank is ready
- 01:19 AM Bug #61782 (Resolved): mds: cap revoke and cap update's seqs mismatched
- 01:19 AM Backport #61985 (Resolved): quincy: mds: cap revoke and cap update's seqs mismatched
- 01:18 AM Backport #62193 (Resolved): pacific: ceph: corrupt snap message from mds1
- 01:18 AM Backport #62202 (Resolved): pacific: crash: MDSRank::send_message_client(boost::intrusive_ptr<Mes...
- 12:49 AM Bug #62096: mds: infinite rename recursion on itself
- Patrick Donnelly wrote:
> Xiubo Li wrote:
> > Patrick,
> >
> > This should be the same issue with:
> >
> > ht...
08/18/2023
- 01:05 AM Bug #59682 (Resolved): CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install the ...
- 01:04 AM Backport #61695 (Resolved): quincy: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't...
- 12:34 AM Bug #59551 (Resolved): mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
- 12:33 AM Backport #61736 (Resolved): quincy: mgr/stats: exception ValueError :invalid literal for int() wi...
- 12:29 AM Bug #61201 (Resolved): qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in...
- 12:27 AM Backport #62056 (Resolved): quincy: qa: test_rebuild_moved_file (tasks/data-scan) fails because m...
- https://github.com/ceph/ceph/pull/52514 merged
08/17/2023
- 09:37 PM Backport #61988: quincy: mds: session ls command appears twice in command listing
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52516
merged
- 09:36 PM Bug #61201: qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific
- https://github.com/ceph/ceph/pull/52514 merged
- 09:35 PM Backport #61985: quincy: mds: cap revoke and cap update's seqs mismatched
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52508
merged - 09:33 PM Backport #62042: quincy: client: do not send metrics until the MDS rank is ready
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52502
merged - 09:31 PM Backport #61796: quincy: client: only wait for write MDS OPs when unmounting
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52303
merged - 09:30 PM Backport #59303: quincy: cephfs: tooling to identify inode (metadata) corruption
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52245
merged - 09:30 PM Backport #59558: quincy: qa: RuntimeError: more than one file system available
- Rishabh Dave wrote:
> https://github.com/ceph/ceph/pull/52241
merged - 09:29 PM Backport #59371: quincy: qa: test_join_fs_unset failure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52236
merged - 09:28 PM Backport #61736: quincy: mgr/stats: exception ValueError :invalid literal for int() with base 16:...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52127
merged - 09:28 PM Backport #61695: quincy: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install th...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52074
merged - 09:27 PM Bug #59107: MDS imported_inodes metric is not updated.
- https://github.com/ceph/ceph/pull/51697 merged
- 09:26 PM Backport #59722: quincy: qa: run scrub post disaster recovery procedure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51690
merged - 09:25 PM Backport #61348: quincy: mds: fsstress.sh hangs with multimds (deadlock between unlink and reinte...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51685
merged - 09:23 PM Backport #59367: quincy: qa: test_rebuild_simple checks status on wrong file system
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50922
merged - 09:22 PM Backport #59265: quincy: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50815
merged - 09:22 PM Backport #59262: quincy: mds: stray directories are not purged when all past parents are clear
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50815
merged - 09:21 PM Backport #59244: quincy: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid'
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50807
merged - 09:21 PM Backport #59247: quincy: qa: intermittent nfs test failures at nfs cluster creation
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50807
merged - 09:21 PM Backport #59250: quincy: mgr/nfs: disallow non-existent paths when creating export
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50807
merged - 09:20 PM Backport #59002: quincy: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50786
merged - 06:43 PM Backport #62070 (In Progress): quincy: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks...
- 05:32 PM Bug #62485 (New): quincy (?): pybind/mgr/volumes: subvolume rm timeout
- ...
- 05:04 PM Bug #62484 (Triaged): qa: ffsb.sh test failure
- ...
- 04:18 PM Backport #59720: pacific: client: read wild pointer when reconnect to mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51487
merged - 03:36 PM Backport #62372: pacific: Consider setting "bulk" autoscale pool flag when automatically creating...
- Leonid Usov wrote:
> https://github.com/ceph/ceph/pull/52900
merged - 03:35 PM Backport #62193: pacific: ceph: corrupt snap message from mds1
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52848
merged - 03:35 PM Backport #62202: pacific: crash: MDSRank::send_message_client(boost::intrusive_ptr<Message> const...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52844
merged - 03:35 PM Backport #62242: pacific: mds: linkmerge assert check is incorrect in rename codepath
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52726
merged - 03:34 PM Backport #62190: pacific: mds: replay thread does not update some essential perf counters
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52682
merged - 01:35 PM Bug #62482 (Fix Under Review): qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an a...
- 01:26 PM Bug #62482 (Resolved): qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an applicati...
- /teuthology/vshankar-2023-08-16_10:55:44-fs-wip-vshankar-testing-20230816.054905-testing-default-smithi/7369639/teuth...
- 01:03 PM Bug #61399 (In Progress): qa: build failure for ior
- had to revert the changes - https://github.com/ceph/ceph/pull/53036
- 12:30 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- /a/yuriw-2023-08-16_22:40:18-rados-wip-yuri2-testing-2023-08-16-1142-pacific-distro-default-smithi/7370706/
- 08:47 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- I have a concern about this code; if p->first is still greater than start even after decrementing, then it means a sing...
- 08:41 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Xiubo Li wrote:
> It aborted in Line#1623. The *session->take_ino()* may return *0* if the *used_preallocated_ino* d...
- 07:49 AM Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_ve...
- Igor Fedotov wrote:
> The attached file contains log snippets with apparently relevant information for a few crashes...
- 07:44 AM Bug #62435 (Triaged): Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
- 07:44 AM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
- Xiubo, please take this one.
- 06:06 AM Bug #62265 (In Progress): cephfs-mirror: use monotonic clocks in cephfs mirror daemon
08/16/2023
- 04:22 PM Backport #59003: pacific: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51039
merged
- 03:56 PM Bug #62465 (Can't reproduce): pacific (?): LibCephFS.ShutdownRace segmentation fault
- ...
- 01:55 PM Feature #62364: support dumping rstats on a particular path
- Greg Farnum wrote:
> Especially now that we have rstats disabled by default,
When did this happen? Or, do you mea...
- 01:13 PM Feature #62215: libcephfs: Allow monitoring for any file changes like inotify
- Anagh Kumar Baranwal wrote:
> Venky Shankar wrote:
> > Changes originating from the localhost would obviously be no...
- 01:07 PM Backport #62460 (In Progress): reef: pybind/mgr/volumes: Document a possible deadlock after a vol...
- 01:03 PM Backport #62460 (In Progress): reef: pybind/mgr/volumes: Document a possible deadlock after a vol...
- https://github.com/ceph/ceph/pull/52946
- 01:07 PM Backport #62459 (In Progress): quincy: pybind/mgr/volumes: Document a possible deadlock after a v...
- 01:03 PM Backport #62459 (In Progress): quincy: pybind/mgr/volumes: Document a possible deadlock after a v...
- https://github.com/ceph/ceph/pull/52947
- 12:56 PM Bug #62407 (Pending Backport): pybind/mgr/volumes: Document a possible deadlock after a volume de...
- 12:55 PM Bug #62407 (Fix Under Review): pybind/mgr/volumes: Document a possible deadlock after a volume de...
- 12:19 PM Backport #62373 (In Progress): quincy: Consider setting "bulk" autoscale pool flag when automatic...
- 12:18 PM Bug #62208 (Fix Under Review): mds: use MDSRank::abort to ceph_abort so necessary sync is done
- 12:18 PM Bug #62208 (In Progress): mds: use MDSRank::abort to ceph_abort so necessary sync is done
- 11:47 AM Feature #61334 (In Progress): cephfs-mirror: use snapdiff api for efficient tree traversal
- 07:28 AM Bug #61399 (Resolved): qa: build failure for ior
- Rishabh, does this need backporting?