Activity

From 08/08/2023 to 09/06/2023

09/06/2023

09:53 PM Bug #62729: src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘ffs’?
Last successful main shaman build: https://shaman.ceph.com/builds/ceph/main/794f4d16c6c8bf35729045062d24322d30b5aa14/... Laura Flores
09:32 PM Bug #62729: src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘ffs’?
Laura suspected merging https://github.com/ceph/ceph/pull/51942 led to this issue. I've built the PR branch (@wip-61... Rishabh Dave
07:48 PM Bug #62729: src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘ffs’?
https://shaman.ceph.com/builds/ceph/main/f9a01cf3851ffa2c51b5fb84e304c1481f35fe03/ Laura Flores
07:48 PM Bug #62729 (Resolved): src/mon/FSCommands.cc: ‘fs’ was not declared in this scope; did you mean ‘...
https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos8,DIST=centos8,MAC... Laura Flores
08:49 PM Backport #62733 (Resolved): reef: mds: add TrackedOp event for batching getattr/lookup
https://github.com/ceph/ceph/pull/53558 Backport Bot
08:49 PM Backport #62732 (Resolved): quincy: mds: add TrackedOp event for batching getattr/lookup
https://github.com/ceph/ceph/pull/53557 Backport Bot
08:49 PM Backport #62731 (Resolved): pacific: mds: add TrackedOp event for batching getattr/lookup
https://github.com/ceph/ceph/pull/53556 Backport Bot
08:44 PM Bug #62057 (Pending Backport): mds: add TrackedOp event for batching getattr/lookup
Rishabh Dave
06:47 PM Backport #62419 (Resolved): reef: mds: adjust cap acquistion throttle defaults
https://github.com/ceph/ceph/pull/52972#issuecomment-1708910842 Patrick Donnelly
05:30 PM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
Dhairya Parmar wrote:
> Venky Shankar wrote:
>
> > which prompted a variety of code changes to workaround the pro...
Patrick Donnelly
02:21 PM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
Dhairya Parmar wrote:
> Venky Shankar wrote:
>
> > which prompted a variety of code changes to workaround the pro...
Venky Shankar
09:02 AM Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
Venky Shankar wrote:
> which prompted a variety of code changes to workaround the problem. This all carries a size...
Dhairya Parmar
06:14 AM Feature #62715 (New): mgr/volumes: switch to storing subvolume metadata in libcephsqlite
A bit of history: The subvolume thing started out as a directory structure in the file system (and that is still the ... Venky Shankar
03:09 PM Backport #62726 (New): quincy: mon/MDSMonitor: optionally forbid to use standby for another fs as...
Backport Bot
03:09 PM Backport #62725 (Rejected): pacific: mon/MDSMonitor: optionally forbid to use standby for another...
Backport Bot
03:09 PM Backport #62724 (In Progress): reef: mon/MDSMonitor: optionally forbid to use standby for another...
https://github.com/ceph/ceph/pull/53340 Backport Bot
03:00 PM Feature #61599 (Pending Backport): mon/MDSMonitor: optionally forbid to use standby for another f...
Mykola Golub
09:58 AM Bug #62706: qa: ModuleNotFoundError: No module named XXXXXX
I too ran into this in one of my runs. I believe this is an env thing, since a bunch of other tests from my run had is... Venky Shankar
09:57 AM Bug #62501: pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snaps...
Dhairya, please take this one. Venky Shankar
09:45 AM Bug #62674: cephfs snapshot remains visible in nfs export after deletion and new snaps not shown
https://tracker.ceph.com/issues/58376 is the one reported by a community user. Venky Shankar
08:55 AM Bug #62674 (Duplicate): cephfs snapshot remains visible in nfs export after deletion and new snap...
Hi Paul,
You are probably running into https://tracker.ceph.com/issues/59041 - at least for the part for listing s...
Venky Shankar
09:41 AM Bug #62682 (Triaged): mon: no mdsmap broadcast after "fs set joinable" is set to true
The upgrade process uses `fail_fs`, which fails the file system and upgrades the MDSs without reducing max_mds to 1. I... Venky Shankar
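For context, the `fail_fs` upgrade flow mentioned in this entry roughly corresponds to the following sequence (a sketch only; the file system name @cephfs@ is a placeholder, and in QA this is driven by the teuthology upgrade task rather than run by hand):

```shell
# Fail the file system so all MDS daemons stop, allowing them to be
# upgraded without first reducing max_mds to 1.
ceph fs fail cephfs

# ... upgrade the MDS packages/containers on all MDS hosts ...

# Allow MDS daemons to join the file system again. Per this bug, the
# mons are expected to broadcast the updated mdsmap at this point.
ceph fs set cephfs joinable true
```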
09:34 AM Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
Patrick Donnelly wrote:
> Even with the recent changes to the ceph-mgr (#51177) to have a separate finisher thread f...
Venky Shankar
08:46 AM Feature #62668: qa: use teuthology scripts to test dozens of clients
Patrick Donnelly wrote:
> We have one small suite for integration testing of multiple clients:
>
> https://github...
Venky Shankar
08:32 AM Feature #62670: [RFE] cephfs should track and expose subvolume usage and quota
Paul Cuzner wrote:
> Subvolumes may be queried independently, but at scale we need a way for subvolume usage and quo...
Venky Shankar
06:57 AM Bug #62720 (New): mds: identify selinux relabelling and generate health warning
This request has come up from folks in the field. A recursive relabel on a file system brings the mds down to its kne... Venky Shankar
05:54 AM Backport #59200 (Rejected): reef: qa: add testing in fs:workload for different kinds of subvolumes
Available in reef. Venky Shankar
05:53 AM Backport #59201 (Resolved): quincy: qa: add testing in fs:workload for different kinds of subvolumes
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50974
Merged.
Venky Shankar
12:48 AM Bug #62700: postgres workunit failed with "PQputline failed"
Another one https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/73... Xiubo Li
12:47 AM Fix #51177 (Resolved): pybind/mgr/volumes: investigate moving calls which may block on libcephfs ...
Patrick Donnelly
12:47 AM Backport #59417 (Resolved): pacific: pybind/mgr/volumes: investigate moving calls which may block...
Patrick Donnelly

09/05/2023

09:54 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
/a/yuriw-2023-09-01_19:14:47-rados-wip-batrick-testing-20230831.124848-pacific-distro-default-smithi/7386551 Laura Flores
08:09 PM Fix #62712 (New): pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when unde...
Even with the recent changes to the ceph-mgr (#51177) to have a separate finisher thread for each module, the request... Patrick Donnelly
07:42 PM Backport #62268 (Resolved): pacific: qa: _test_stale_caps does not wait for file flush before stat
Patrick Donnelly
07:42 PM Backport #62517 (Resolved): pacific: mds: inode snaplock only acquired for open in create codepath
Patrick Donnelly
07:41 PM Backport #62662 (Resolved): pacific: mds: deadlock when getattr changes inode lockset
Patrick Donnelly
02:53 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
PR https://github.com/ceph/ceph/pull/52924 has been merged for fixing this issue. Original PR https://github.com/ceph... Rishabh Dave
02:51 PM Bug #62084 (Resolved): task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph...
Rishabh Dave
01:36 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
I was talking to Dhairya about this today and am not quite sure I understand.
Xiubo, Venky, are we contending the ...
Greg Farnum
12:56 PM Feature #61863: mds: issue a health warning with estimated time to complete replay
Manish Yathnalli wrote:
> https://github.com/ceph/ceph/pull/52527
Manish, the PR id is linked in the "Pull reques...
Venky Shankar
12:42 PM Feature #61863 (Fix Under Review): mds: issue a health warning with estimated time to complete re...
https://github.com/ceph/ceph/pull/52527 Manish Yathnalli
12:42 PM Bug #62265 (Fix Under Review): cephfs-mirror: use monotonic clocks in cephfs mirror daemon
https://github.com/ceph/ceph/pull/53283 Manish Yathnalli
11:41 AM Bug #62706 (Pending Backport): qa: ModuleNotFoundError: No module named XXXXXX
https://pulpito.ceph.com/rishabh-2023-08-10_20:13:47-fs-wip-rishabh-2023aug3-b4-testing-default-smithi/7365558/
...
Rishabh Dave
05:27 AM Bug #62702 (Fix Under Review): MDS slow requests for the internal 'rename' requests
Xiubo Li
04:43 AM Bug #62702 (Pending Backport): MDS slow requests for the internal 'rename' requests
https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/7378922
<...
Xiubo Li
04:34 AM Bug #56397 (In Progress): client: `df` will show incorrect disk size if the quota size is not ali...
Venky Shankar
04:34 AM Bug #56397: client: `df` will show incorrect disk size if the quota size is not aligned to 4MB
Revert PR: https://github.com/ceph/ceph/pull/53153 Venky Shankar
01:03 AM Bug #62700 (Fix Under Review): postgres workunit failed with "PQputline failed"
The scale factor will depend on the node's performance and disk sizes being used to run the test, and 500 seems too l... Xiubo Li
12:53 AM Bug #62700 (Resolved): postgres workunit failed with "PQputline failed"
https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/7365718/teutho... Xiubo Li

09/04/2023

03:29 PM Bug #62698: qa: fsstress.sh fails with error code 124
Copying following log entries on behalf of Radoslaw -... Rishabh Dave
03:21 PM Bug #62698: qa: fsstress.sh fails with error code 124
These messages mean there was not even a single successful exchange of network heartbeat messages between osd.5 and (o... Radoslaw Zarzynski
02:58 PM Bug #62698 (Can't reproduce): qa: fsstress.sh fails with error code 124
https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/7379296
The...
Rishabh Dave
02:26 PM Backport #62696 (Rejected): reef: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have a...
Backport Bot
02:26 PM Backport #62695 (Rejected): quincy: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have...
Backport Bot
02:18 PM Bug #62482 (Pending Backport): qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an a...
Venky Shankar
11:15 AM Feature #1680: support reflink (cheap file copy/clone)
This feature would really be appreciated. We would like to switch to Ceph for our cluster storage, but we rely heavil... Ole salscheider
09:36 AM Bug #62676: cephfs-mirror: 'peer_bootstrap import' hangs
If this is just a perception issue, then a message to the user like "You need to wait for 5 minutes for this command t... Milind Changire
07:39 AM Bug #62676 (Closed): cephfs-mirror: 'peer_bootstrap import' hangs
This is not a bug, it seems. It waits for 5 minutes for the secrets to expire. Don't press Ctrl+C; just wait for 5 min... Jos Collin
08:56 AM Bug #62494 (In Progress): Lack of consistency in time format
Milind Changire
08:53 AM Backport #59408 (In Progress): reef: cephfs_mirror: local and remote dir root modes are not same
Milind Changire
08:52 AM Backport #59001 (In Progress): pacific: cephfs_mirror: local and remote dir root modes are not same
Milind Changire
06:31 AM Bug #62682 (Resolved): mon: no mdsmap broadcast after "fs set joinable" is set to true
... Milind Changire
12:45 AM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
Sudhin Bengeri wrote:
> Hi Xiubo,
>
> Here is the uname -a output from the nodes:
> Linux wkhd 6.3.0-rc4+ #6 SMP...
Xiubo Li

09/03/2023

02:40 PM Bug #61950: mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_scrub_large_omap_o...
Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Patrick - I can take this one if you haven't started on it yet.
...
Venky Shankar

09/02/2023

09:05 PM Backport #62569 (In Progress): pacific: ceph_fs.h: add separate owner_{u,g}id fields
Konstantin Shalygin
09:05 PM Backport #62570 (In Progress): reef: ceph_fs.h: add separate owner_{u,g}id fields
Konstantin Shalygin
09:05 PM Backport #62571 (In Progress): quincy: ceph_fs.h: add separate owner_{u,g}id fields
Konstantin Shalygin

09/01/2023

06:58 PM Bug #50250 (New): mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/cli...
https://pulpito.ceph.com/pdonnell-2023-08-31_15:31:51-fs-wip-batrick-testing-20230831.124848-pacific-distro-default-s... Patrick Donnelly
09:05 AM Bug #62676 (Closed): cephfs-mirror: 'peer_bootstrap import' hangs
'peer_bootstrap import' command hangs subsequent to using wrong/invalid token to import. If we use an invalid token i... Jos Collin

08/31/2023

11:01 PM Bug #62674 (Duplicate): cephfs snapshot remains visible in nfs export after deletion and new snap...
When a snapshot is taken of the subvolume, the .snap directory shows the snapshot when viewed from the NFS mount and ... Paul Cuzner
10:26 PM Bug #62673 (New): cephfs subvolume resize does not accept 'unit'
Specifying the quota or resize for a subvolume requires the value in bytes. This value should be accepted as <num><un... Paul Cuzner
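As a concrete illustration of the behaviour described in this entry (a sketch; @vol@ and @sub@ are placeholder names), the resize command today only accepts a raw byte count:

```shell
# Current behaviour: the new size must be given in bytes (10 GiB here).
ceph fs subvolume resize vol sub 10737418240

# Requested behaviour (NOT supported today): accept <num><unit> forms, e.g.
#   ceph fs subvolume resize vol sub 10G
```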
10:06 PM Feature #62670 (Need More Info): [RFE] cephfs should track and expose subvolume usage and quota
Subvolumes may be queried independently, but at scale we need a way for subvolume usage and quota thresholds to drive... Paul Cuzner
06:34 PM Feature #62668 (New): qa: use teuthology scripts to test dozens of clients
We have one small suite for integration testing of multiple clients:
https://github.com/ceph/ceph/tree/9d7c1825783...
Patrick Donnelly
03:35 PM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
Hi Xiubo,
Here is the uname -a output from the nodes:
Linux wkhd 6.3.0-rc4+ #6 SMP PREEMPT_DYNAMIC Mon May 22 22:...
Sudhin Bengeri
03:44 AM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
Sudhin Bengeri wrote:
> Hi Xiubo,
>
> Thanks for your response.
>
> Are you saying that cephfs does not suppo...
Xiubo Li
12:35 PM Backport #62662 (In Progress): pacific: mds: deadlock when getattr changes inode lockset
Patrick Donnelly
12:02 PM Backport #62662 (Resolved): pacific: mds: deadlock when getattr changes inode lockset
https://github.com/ceph/ceph/pull/53243 Backport Bot
12:34 PM Backport #62660 (In Progress): quincy: mds: deadlock when getattr changes inode lockset
Patrick Donnelly
12:01 PM Backport #62660 (In Progress): quincy: mds: deadlock when getattr changes inode lockset
https://github.com/ceph/ceph/pull/53242 Backport Bot
12:34 PM Bug #62664 (New): ceph-fuse: failed to remount for kernel dentry trimming; quitting!
Hi,
While #62604 is being addressed I wanted to try the ceph-fuse client. I'm using the same setup with kernel 6.4...
Rodrigo Arias
12:34 PM Backport #62661 (In Progress): reef: mds: deadlock when getattr changes inode lockset
Patrick Donnelly
12:01 PM Backport #62661 (In Progress): reef: mds: deadlock when getattr changes inode lockset
https://github.com/ceph/ceph/pull/53241 Backport Bot
12:31 PM Bug #62663 (Can't reproduce): MDS: inode nlink value is -1 causing MDS to continuously crash
All MDS daemons are continuously crashing. The logs are reporting an inode nlink value is set to -1. I have included ... Austin Axworthy
11:56 AM Bug #62052 (Pending Backport): mds: deadlock when getattr changes inode lockset
Venky Shankar
09:41 AM Bug #62580 (Fix Under Review): testing: Test failure: test_snapshot_remove (tasks.cephfs.test_str...
Xiubo Li
08:57 AM Bug #62580 (In Progress): testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.T...
Xiubo Li wrote:
> This should duplicate with https://tracker.ceph.com/issues/61892, which hasn't been backported to ...
Xiubo Li
05:30 AM Bug #62580 (Duplicate): testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.Tes...
This should duplicate with https://tracker.ceph.com/issues/61892, which hasn't been backported to Pacific yet. Xiubo Li
09:30 AM Bug #62658 (Pending Backport): error during scrub thrashing: reached maximum tries (31) after wai...
/a/vshankar-2023-08-24_07:29:19-fs-wip-vshankar-testing-20230824.045828-testing-default-smithi/7378338... Venky Shankar
07:10 AM Bug #62653 (New): qa: unimplemented fcntl command: 1036 with fsstress
/a/vshankar-2023-08-24_07:29:19-fs-wip-vshankar-testing-20230824.045828-testing-default-smithi/7378422
Happens wit...
Venky Shankar

08/30/2023

08:54 PM Bug #62648 (New): pybind/mgr/volumes: volume rm freeze waiting for async job on fs to complete
... Patrick Donnelly
02:16 PM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
Hi Xiubo,
Thanks for your response.
Are you saying that cephfs does not support fscrypt? I am not exactly sure...
Sudhin Bengeri
05:35 AM Feature #45021 (In Progress): client: new asok commands for diagnosing cap handling issues
Venky Shankar

08/29/2023

12:18 PM Bug #62626: mgr/nfs: include pseudo in JSON output when nfs export apply -i fails
Dhairya, could you link the commit which started causing this? (I recall we discussed a bit about this) Venky Shankar
10:49 AM Bug #62626 (In Progress): mgr/nfs: include pseudo in JSON output when nfs export apply -i fails
Currently, when export update fails, this is the response:... Dhairya Parmar
09:56 AM Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_ve...
Venky Shankar wrote:
> FWIW, logs hint at missing (RADOS) objects:
>
> [...]
>
> I'm not certain yet if this i...
Igor Fedotov
09:39 AM Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_ve...
FWIW, logs hint at missing (RADOS) objects:... Venky Shankar
09:40 AM Backport #61987 (Resolved): reef: mds: session ls command appears twice in command listing
Neeraj Pratap Singh
09:40 AM Backport #61988 (Resolved): quincy: mds: session ls command appears twice in command listing
Neeraj Pratap Singh
05:46 AM Feature #61904: pybind/mgr/volumes: add more introspection for clones
Rishabh, please take this one (along the same lines as https://tracker.ceph.com/issues/61905). Venky Shankar

08/28/2023

01:33 PM Backport #62517 (In Progress): pacific: mds: inode snaplock only acquired for open in create code...
Patrick Donnelly
01:32 PM Backport #62516 (In Progress): quincy: mds: inode snaplock only acquired for open in create codepath
Patrick Donnelly
01:32 PM Backport #62518 (In Progress): reef: mds: inode snaplock only acquired for open in create codepath
Patrick Donnelly
01:17 PM Backport #62539 (Rejected): reef: qa: Health check failed: 1 pool(s) do not have an application e...
Patrick Donnelly
01:17 PM Backport #62538 (Rejected): quincy: qa: Health check failed: 1 pool(s) do not have an application...
Patrick Donnelly
01:17 PM Bug #62508 (Duplicate): qa: Health check failed: 1 pool(s) do not have an application enabled (PO...
Patrick Donnelly
12:24 PM Documentation #62605: cephfs-journal-tool: update parts of code that need mandatory --rank
Good catch. Venky Shankar
12:14 PM Documentation #62605 (New): cephfs-journal-tool: update parts of code that need mandatory --rank
For instance If someone refers [0] to export journal to a file, it says to run ... Dhairya Parmar
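For reference, with @--rank@ now mandatory, the export invocation this entry refers to takes the form below (a sketch; the file system name @cephfs@ and rank @0@ are placeholders):

```shell
# Export the journal of rank 0 of file system "cephfs" to a file.
# Omitting --rank now results in an error instead of defaulting.
cephfs-journal-tool --rank=cephfs:0 journal export backup.bin
```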
12:16 PM Bug #62537: cephfs scrub command will crash the standby-replay MDSs
Neeraj, please take this one. Venky Shankar
12:09 PM Tasks #62159 (In Progress): qa: evaluate mds_partitioner
Venky Shankar
12:08 PM Bug #62067 (Duplicate): ffsb.sh failure "Resource temporarily unavailable"
Duplicate of #62484 Venky Shankar
12:06 PM Feature #62157 (In Progress): mds: working set size tracker
Hi Yongseok,
Assigning this to you since I presume this being worked on along side the partitioner module.
Venky Shankar
11:59 AM Feature #62215 (Rejected): libcephfs: Allow monitoring for any file changes like inotify
Nothing planned for the foreseeable future related to this feature request. Venky Shankar
11:11 AM Backport #62443: reef: api tests fail from "MDS_CLIENTS_LAGGY" warning
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53005
the above PR has been closed and the commit has been ...
Dhairya Parmar
11:08 AM Backport #62441: quincy: api tests fail from "MDS_CLIENTS_LAGGY" warning
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53006
the above PR has been closed and the commit has bee...
Dhairya Parmar
09:19 AM Bug #59413 (Fix Under Review): cephfs: qa snaptest-git-ceph.sh failed with "got remote process re...
Xiubo Li
08:46 AM Bug #62510 (Duplicate): snaptest-git-ceph.sh failure with fs/thrash
Xiubo Li wrote:
> Venky Shankar wrote:
> > /a/vshankar-2023-08-16_11:13:33-fs-wip-vshankar-testing-20230809.035933-...
Xiubo Li
06:46 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
Xiubo Li wrote:
> Venky Shankar wrote:
> > Another one, but with kclient
> >
> > > https://pulpito.ceph.com/vsha...
Xiubo Li
02:41 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
Venky Shankar wrote:
> Another one, but with kclient
>
> > https://pulpito.ceph.com/vshankar-2023-08-23_03:59:53-...
Xiubo Li
02:29 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
Venky Shankar wrote:
> /a/vshankar-2023-08-16_11:13:33-fs-wip-vshankar-testing-20230809.035933-testing-default-smith...
Xiubo Li
06:22 AM Bug #62278: pybind/mgr/volumes: pending_subvolume_deletions count is always zero in fs volume inf...
Backport note: also include commit(s) from https://github.com/ceph/ceph/pull/52940 Venky Shankar

08/27/2023

09:06 AM Backport #62572 (In Progress): pacific: mds: add cap acquisition throttled event to MDR
https://github.com/ceph/ceph/pull/53169 Leonid Usov
09:05 AM Backport #62573 (In Progress): reef: mds: add cap acquisition throttled event to MDR
https://github.com/ceph/ceph/pull/53168 Leonid Usov
09:05 AM Backport #62574 (In Progress): quincy: mds: add cap acquisition throttled event to MDR
https://github.com/ceph/ceph/pull/53167 Leonid Usov

08/25/2023

01:22 PM Backport #62585 (In Progress): quincy: mds: enforce a limit on the size of a session in the sessi...
https://github.com/ceph/ceph/pull/53330 Backport Bot
01:22 PM Backport #62584 (In Progress): pacific: mds: enforce a limit on the size of a session in the sess...
https://github.com/ceph/ceph/pull/53634 Backport Bot
01:21 PM Backport #62583 (Resolved): reef: mds: enforce a limit on the size of a session in the sessionmap
https://github.com/ceph/ceph/pull/53329 Backport Bot
01:17 PM Bug #61947 (Pending Backport): mds: enforce a limit on the size of a session in the sessionmap
Venky Shankar
02:54 AM Bug #62580: testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
I will work on it. Xiubo Li
01:42 AM Bug #62580 (Fix Under Review): testing: Test failure: test_snapshot_remove (tasks.cephfs.test_str...
... Patrick Donnelly
02:38 AM Bug #62510 (In Progress): snaptest-git-ceph.sh failure with fs/thrash
Xiubo Li
01:35 AM Bug #62579 (Fix Under Review): client: evicted warning because client completes unmount before th...
Patrick Donnelly
01:32 AM Bug #62579 (Pending Backport): client: evicted warning because client completes unmount before th...
... Patrick Donnelly

08/24/2023

08:02 PM Bug #62577 (Fix Under Review): mds: log a message when exiting due to asok "exit" command
Patrick Donnelly
07:43 PM Bug #62577 (Pending Backport): mds: log a message when exiting due to asok "exit" command
So it's clear what caused the call to suicide. Patrick Donnelly
03:27 PM Backport #61691: quincy: mon failed to return metadata for mds
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52228
merged
Yuri Weinstein
12:29 PM Bug #62381 (In Progress): mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() ...
Venky Shankar
12:15 PM Backport #62574 (Resolved): quincy: mds: add cap acquisition throttled event to MDR
Backport Bot
12:14 PM Backport #62573 (Resolved): reef: mds: add cap acquisition throttled event to MDR
Backport Bot
12:14 PM Backport #62572 (Resolved): pacific: mds: add cap acquisition throttled event to MDR
https://github.com/ceph/ceph/pull/53169 Backport Bot
12:14 PM Backport #62571 (Resolved): quincy: ceph_fs.h: add separate owner_{u,g}id fields
https://github.com/ceph/ceph/pull/53139 Backport Bot
12:14 PM Backport #62570 (Resolved): reef: ceph_fs.h: add separate owner_{u,g}id fields
https://github.com/ceph/ceph/pull/53138 Backport Bot
12:14 PM Backport #62569 (Rejected): pacific: ceph_fs.h: add separate owner_{u,g}id fields
https://github.com/ceph/ceph/pull/53137 Backport Bot
12:07 PM Bug #62217 (Pending Backport): ceph_fs.h: add separate owner_{u,g}id fields
Venky Shankar
12:06 PM Bug #59067 (Pending Backport): mds: add cap acquisition throttled event to MDR
Venky Shankar
12:03 PM Bug #62567 (Won't Fix): postgres workunit times out - MDS_SLOW_REQUEST in logs
/a/vshankar-2023-08-23_03:59:53-fs-wip-vshankar-testing-20230822.060131-testing-default-smithi/7377197... Venky Shankar
10:09 AM Bug #62484: qa: ffsb.sh test failure
Another instance in main branch: /a/vshankar-2023-08-23_03:59:53-fs-wip-vshankar-testing-20230822.060131-testing-defa... Venky Shankar
10:07 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
Another one, but with kclient
> https://pulpito.ceph.com/vshankar-2023-08-23_03:59:53-fs-wip-vshankar-testing-2023...
Venky Shankar
12:47 AM Bug #62435 (Need More Info): Pod unable to mount fscrypt encrypted cephfs PVC when it moves to an...
Hi Sudhin,
This is not cephfs *fscrypt*. You are encrypting from the disk layer, not the filesystem layer. My unde...
Xiubo Li

08/23/2023

09:07 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
/a/yuriw-2023-08-21_23:10:07-rados-pacific-release-distro-default-smithi/7375005 Laura Flores
07:01 PM Bug #62556 (Resolved): qa/cephfs: dependencies listed in xfstests_dev.py are outdated
@python2@ is one of the dependencies for @xfstests-dev@ that is listed in @xfstests_dev.py@ and @python2@ is not avai... Rishabh Dave
12:18 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
/a/yuriw-2023-08-22_18:16:03-rados-wip-yuri10-testing-2023-08-17-1444-distro-default-smithi/7376742 Matan Breizman
05:44 AM Backport #62539 (In Progress): reef: qa: Health check failed: 1 pool(s) do not have an applicatio...
https://github.com/ceph/ceph/pull/54380 Backport Bot
05:43 AM Backport #62538 (In Progress): quincy: qa: Health check failed: 1 pool(s) do not have an applicat...
https://github.com/ceph/ceph/pull/53863 Backport Bot
05:38 AM Bug #62508 (Pending Backport): qa: Health check failed: 1 pool(s) do not have an application enab...
Venky Shankar
02:46 AM Bug #62537 (Fix Under Review): cephfs scrub command will crash the standby-replay MDSs
... Xiubo Li

08/22/2023

06:23 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
/a/yuriw-2023-08-17_21:18:20-rados-wip-yuri11-testing-2023-08-17-0823-distro-default-smithi/7372041 Laura Flores
12:30 PM Bug #61399: qa: build failure for ior
https://github.com/ceph/ceph/pull/52416 was merged accidentally (and then reverted). I've opened new PR for same patc... Rishabh Dave
08:54 AM Cleanup #4744 (In Progress): mds: pass around LogSegments via std::shared_ptr
Leonid Usov
08:03 AM Backport #62524 (Resolved): reef: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLO...
https://github.com/ceph/ceph/pull/53661 Backport Bot
08:03 AM Backport #62523 (Resolved): pacific: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_...
https://github.com/ceph/ceph/pull/53662 Backport Bot
08:03 AM Backport #62522 (Resolved): quincy: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_X...
https://github.com/ceph/ceph/pull/53663 Backport Bot
08:03 AM Backport #62521 (Resolved): reef: client: FAILED ceph_assert(_size == 0)
https://github.com/ceph/ceph/pull/53666 Backport Bot
08:03 AM Backport #62520 (In Progress): pacific: client: FAILED ceph_assert(_size == 0)
https://github.com/ceph/ceph/pull/53981 Backport Bot
08:03 AM Backport #62519 (Resolved): quincy: client: FAILED ceph_assert(_size == 0)
https://github.com/ceph/ceph/pull/53664 Backport Bot
08:02 AM Backport #62518 (In Progress): reef: mds: inode snaplock only acquired for open in create codepath
https://github.com/ceph/ceph/pull/53183 Backport Bot
08:02 AM Backport #62517 (Resolved): pacific: mds: inode snaplock only acquired for open in create codepath
https://github.com/ceph/ceph/pull/53185 Backport Bot
08:02 AM Backport #62516 (In Progress): quincy: mds: inode snaplock only acquired for open in create codepath
https://github.com/ceph/ceph/pull/53184 Backport Bot
08:02 AM Backport #62515 (Resolved): reef: Error: Unable to find a match: python2 with fscrypt tests
https://github.com/ceph/ceph/pull/53624 Backport Bot
08:02 AM Backport #62514 (Rejected): pacific: Error: Unable to find a match: python2 with fscrypt tests
https://github.com/ceph/ceph/pull/53625 Backport Bot
08:02 AM Backport #62513 (In Progress): quincy: Error: Unable to find a match: python2 with fscrypt tests
https://github.com/ceph/ceph/pull/53626 Backport Bot
07:57 AM Bug #44565 (Pending Backport): src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK ...
Venky Shankar
07:56 AM Bug #56698 (Pending Backport): client: FAILED ceph_assert(_size == 0)
Venky Shankar
07:55 AM Bug #62058 (Pending Backport): mds: inode snaplock only acquired for open in create codepath
Venky Shankar
07:54 AM Bug #62277 (Pending Backport): Error: Unable to find a match: python2 with fscrypt tests
Venky Shankar
07:17 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
Venky Shankar wrote:
> Xiubo, please take this one.
Sure.
Xiubo Li
06:56 AM Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
Xiubo, please take this one. Venky Shankar
06:55 AM Bug #62510 (Duplicate): snaptest-git-ceph.sh failure with fs/thrash
/a/vshankar-2023-08-16_11:13:33-fs-wip-vshankar-testing-20230809.035933-testing-default-smithi/7369825... Venky Shankar
07:02 AM Bug #62511 (New): src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
/a/vshankar-2023-08-09_05:46:29-fs-wip-vshankar-testing-20230809.035933-testing-default-smithi/7363998... Venky Shankar
06:22 AM Feature #58877 (Rejected): mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
Update from internal discussion:
Given the complexities involved with the details mentioned in note-6, its risky t...
Venky Shankar
06:16 AM Bug #62508 (Fix Under Review): qa: Health check failed: 1 pool(s) do not have an application enab...
Venky Shankar
06:12 AM Bug #62508 (Duplicate): qa: Health check failed: 1 pool(s) do not have an application enabled (PO...
https://pulpito.ceph.com/yuriw-2023-08-18_20:13:47-fs-main-distro-default-smithi/
Fallout of https://github.com/ce...
Venky Shankar
05:49 AM Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
Neeraj Pratap Singh wrote:
> @vshankar @kotresh Since, I was on sick leave yesterday. I saw the discussion made on t...
Venky Shankar
05:37 AM Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
@vshankar @kotresh Since I was on sick leave yesterday, I saw the discussion made on the PR today. Seeing the final ... Neeraj Pratap Singh
05:31 AM Bug #62246: qa/cephfs: test_mount_mon_and_osd_caps_present_mds_caps_absent fails
Rishabh, were you able to push a fix for this? Venky Shankar
05:31 AM Bug #62188: AttributeError: 'RemoteProcess' object has no attribute 'read'
https://pulpito.ceph.com/vshankar-2023-08-16_11:14:57-fs-wip-vshankar-testing-20230809.035933-testing-default-smithi/... Venky Shankar
04:21 AM Backport #59202 (Resolved): pacific: qa: add testing in fs:workload for different kinds of subvol...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51509
Merged.
Venky Shankar

08/21/2023

04:07 PM Backport #62421: pacific: mds: adjust cap acquistion throttle defaults
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52974
merged
Yuri Weinstein
04:06 PM Backport #61841: pacific: mds: do not evict clients if OSDs are laggy
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/52270
merged
Yuri Weinstein
03:36 PM Backport #61793: pacific: mgr/snap_schedule: catch all exceptions to avoid crashing module
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52753
merged
Yuri Weinstein
03:30 PM Bug #62501 (New): pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume...
Probably misconfiguration allowing OSDs to actually run out of space during test instead of the OSD refusing further ... Patrick Donnelly
02:15 PM Feature #61334: cephfs-mirror: use snapdiff api for efficient tree traversal
Jos,
The crux of the changes will be in PeerReplayer::do_synchronize(), which, as you can see, does:...
Venky Shankar
02:00 PM Feature #62364: support dumping rstats on a particular path
Greg Farnum wrote:
> Venky Shankar wrote:
> > Greg Farnum wrote:
> > > Especially now that we have rstats disabled...
Venky Shankar
01:03 PM Feature #62364: support dumping rstats on a particular path
Venky Shankar wrote:
> Greg Farnum wrote:
> > Especially now that we have rstats disabled by default,
>
> When d...
Greg Farnum
01:50 PM Bug #62485: quincy (?): pybind/mgr/volumes: subvolume rm timeout
> 2023-08-09T05:20:40.495+0000 7f0e05951700 0 [volumes DEBUG mgr_util] locking <locked _thread.lock object at 0x7f0e... Venky Shankar
01:28 PM Bug #62494: Lack of consistency in time format
Eugen Block wrote:
> I wanted to test cephfs snapshots in latest Reef and noticed a discrepancy when it comes to tim...
Venky Shankar
09:52 AM Bug #62494 (Pending Backport): Lack of consistency in time format
I wanted to test cephfs snapshots in latest Reef and noticed a discrepancy when it comes to time format, for example ... Eugen Block
01:10 PM Bug #62484 (Triaged): qa: ffsb.sh test failure
Venky Shankar
10:07 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
Venky Shankar wrote:
> Xiubo Li wrote:
> > This should be a similar issue with https://tracker.ceph.com/issues/5848...
Dhairya Parmar
09:43 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
Xiubo Li wrote:
> This should be a similar issue with https://tracker.ceph.com/issues/58489. Just the *openc/mknod/m...
Venky Shankar
05:34 AM Bug #62074 (Resolved): cephfs-shell: ls command has help message of cp command
Venky Shankar
05:23 AM Feature #61777 (Fix Under Review): mds: add ceph.dir.bal.mask vxattr
Venky Shankar
02:12 AM Backport #59199 (Resolved): quincy: cephfs: qa enables kclient for newop test
Xiubo Li
02:12 AM Bug #59343 (Resolved): qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
Xiubo Li
02:12 AM Backport #62045 (Resolved): quincy: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
Xiubo Li
02:11 AM Bug #49912 (Resolved): client: dir->dentries inconsistent, both newname and oldname points to sam...
Xiubo Li
02:11 AM Backport #62010 (Resolved): quincy: client: dir->dentries inconsistent, both newname and oldname ...
Xiubo Li
01:21 AM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
Venky Shankar wrote:
> Xiubo, please take this one.
Sure.
Xiubo Li
01:20 AM Bug #58340 (Resolved): mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegr...
Xiubo Li
01:20 AM Backport #61348 (Resolved): quincy: mds: fsstress.sh hangs with multimds (deadlock between unlink...
Xiubo Li
01:20 AM Bug #59705 (Resolved): client: only wait for write MDS OPs when unmounting
Xiubo Li
01:20 AM Backport #61796 (Resolved): quincy: client: only wait for write MDS OPs when unmounting
Xiubo Li
01:20 AM Bug #61523 (Resolved): client: do not send metrics until the MDS rank is ready
Xiubo Li
01:19 AM Backport #62042 (Resolved): quincy: client: do not send metrics until the MDS rank is ready
Xiubo Li
01:19 AM Bug #61782 (Resolved): mds: cap revoke and cap update's seqs mismatched
Xiubo Li
01:19 AM Backport #61985 (Resolved): quincy: mds: cap revoke and cap update's seqs mismatched
Xiubo Li
01:18 AM Backport #62193 (Resolved): pacific: ceph: corrupt snap message from mds1
Xiubo Li
01:18 AM Backport #62202 (Resolved): pacific: crash: MDSRank::send_message_client(boost::intrusive_ptr<Mes...
Xiubo Li
12:49 AM Bug #62096: mds: infinite rename recursion on itself
Patrick Donnelly wrote:
> Xiubo Li wrote:
> > Patrick,
> >
> > This should be the same issue with:
> >
> > ht...
Xiubo Li

08/18/2023

01:05 AM Bug #59682 (Resolved): CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install the ...
Jos Collin
01:04 AM Backport #61695 (Resolved): quincy: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't...
Jos Collin
12:34 AM Bug #59551 (Resolved): mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
Jos Collin
12:33 AM Backport #61736 (Resolved): quincy: mgr/stats: exception ValueError :invalid literal for int() wi...
Jos Collin
12:29 AM Bug #61201 (Resolved): qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in...
Jos Collin
12:27 AM Backport #62056 (Resolved): quincy: qa: test_rebuild_moved_file (tasks/data-scan) fails because m...
https://github.com/ceph/ceph/pull/52514 merged
Jos Collin

08/17/2023

09:37 PM Backport #61988: quincy: mds: session ls command appears twice in command listing
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52516
merged
Yuri Weinstein
09:36 PM Bug #61201: qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific
https://github.com/ceph/ceph/pull/52514 merged Yuri Weinstein
09:35 PM Backport #61985: quincy: mds: cap revoke and cap update's seqs mismatched
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52508
merged
Yuri Weinstein
09:33 PM Backport #62042: quincy: client: do not send metrics until the MDS rank is ready
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52502
merged
Yuri Weinstein
09:31 PM Backport #61796: quincy: client: only wait for write MDS OPs when unmounting
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52303
merged
Yuri Weinstein
09:30 PM Backport #59303: quincy: cephfs: tooling to identify inode (metadata) corruption
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52245
merged
Yuri Weinstein
09:30 PM Backport #59558: quincy: qa: RuntimeError: more than one file system available
Rishabh Dave wrote:
> https://github.com/ceph/ceph/pull/52241
merged
Yuri Weinstein
09:29 PM Backport #59371: quincy: qa: test_join_fs_unset failure
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52236
merged
Yuri Weinstein
09:28 PM Backport #61736: quincy: mgr/stats: exception ValueError :invalid literal for int() with base 16:...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52127
merged
Yuri Weinstein
09:28 PM Backport #61695: quincy: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install th...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52074
merged
Yuri Weinstein
09:27 PM Bug #59107: MDS imported_inodes metric is not updated.
https://github.com/ceph/ceph/pull/51697 merged Yuri Weinstein
09:26 PM Backport #59722: quincy: qa: run scrub post disaster recovery procedure
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51690
merged
Yuri Weinstein
09:25 PM Backport #61348: quincy: mds: fsstress.sh hangs with multimds (deadlock between unlink and reinte...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51685
merged
Yuri Weinstein
09:23 PM Backport #59367: quincy: qa: test_rebuild_simple checks status on wrong file system
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50922
merged
Yuri Weinstein
09:22 PM Backport #59265: quincy: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50815
merged
Yuri Weinstein
09:22 PM Backport #59262: quincy: mds: stray directories are not purged when all past parents are clear
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50815
merged
Yuri Weinstein
09:21 PM Backport #59244: quincy: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid'
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50807
merged
Yuri Weinstein
09:21 PM Backport #59247: quincy: qa: intermittent nfs test failures at nfs cluster creation
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50807
merged
Yuri Weinstein
09:21 PM Backport #59250: quincy: mgr/nfs: disallow non-existent paths when creating export
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50807
merged
Yuri Weinstein
09:20 PM Backport #59002: quincy: mgr/volumes: avoid returning ESHUTDOWN for cli commands
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50786
merged
Yuri Weinstein
06:43 PM Backport #62070 (In Progress): quincy: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks...
Patrick Donnelly
05:32 PM Bug #62485 (New): quincy (?): pybind/mgr/volumes: subvolume rm timeout
... Patrick Donnelly
05:04 PM Bug #62484 (Triaged): qa: ffsb.sh test failure
... Patrick Donnelly
04:18 PM Backport #59720: pacific: client: read wild pointer when reconnect to mds
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51487
merged
Yuri Weinstein
03:36 PM Backport #62372: pacific: Consider setting "bulk" autoscale pool flag when automatically creating...
Leonid Usov wrote:
> https://github.com/ceph/ceph/pull/52900
merged
Yuri Weinstein
03:35 PM Backport #62193: pacific: ceph: corrupt snap message from mds1
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52848
merged
Yuri Weinstein
03:35 PM Backport #62202: pacific: crash: MDSRank::send_message_client(boost::intrusive_ptr<Message> const...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52844
merged
Yuri Weinstein
03:35 PM Backport #62242: pacific: mds: linkmerge assert check is incorrect in rename codepath
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52726
merged
Yuri Weinstein
03:34 PM Backport #62190: pacific: mds: replay thread does not update some essential perf counters
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52682
merged
Yuri Weinstein
01:35 PM Bug #62482 (Fix Under Review): qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an a...
Patrick Donnelly
01:26 PM Bug #62482 (Resolved): qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an applicati...
/teuthology/vshankar-2023-08-16_10:55:44-fs-wip-vshankar-testing-20230816.054905-testing-default-smithi/7369639/teuth... Patrick Donnelly
01:03 PM Bug #61399 (In Progress): qa: build failure for ior
Had to revert the changes - https://github.com/ceph/ceph/pull/53036 Venky Shankar
12:30 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
/a/yuriw-2023-08-16_22:40:18-rados-wip-yuri2-testing-2023-08-16-1142-pacific-distro-default-smithi/7370706/ Aishwarya Mathuria
08:47 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
I have a concern about this code; if p->first is still greater than start even after decrementing, then it means a sing... Dhairya Parmar
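The erase behaviour being discussed can be illustrated with a simplified sketch (plain Python, not Ceph's C++ interval_set; the function name and tuple representation here are illustrative only): removing a sub-range from an interval set may leave up to two remainder pieces of a containing interval, which is the kind of splitting logic the boundary checks above have to get right.

```python
# Simplified model of erasing [start, start+length) from an interval set.
# Not Ceph's interval_set<T, C> (which, among other differences, asserts
# the erased range is actually contained); names are illustrative.

def erase(intervals, start, length):
    """intervals: sorted list of (first, len) pairs; erase [start, start+length)."""
    end = start + length
    out = []
    for first, ln in intervals:
        last = first + ln
        if last <= start or first >= end:
            out.append((first, ln))             # no overlap: keep as-is
            continue
        if first < start:
            out.append((first, start - first))  # left remainder survives
        if last > end:
            out.append((end, last - end))       # right remainder survives
    return out
```

For example, erasing 4 units at offset 3 from a single interval `(0, 10)` splits it into `(0, 3)` and `(7, 3)`.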
08:41 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
Xiubo Li wrote:
> It aborted in Line#1623. The *session->take_ino()* may return *0* if the *used_preallocated_ino* d...
Dhairya Parmar
07:49 AM Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_ve...
Igor Fedotov wrote:
> The attached file contains log snippets with apparently relevant information for a few crashes...
Venky Shankar
07:44 AM Bug #62435 (Triaged): Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
Venky Shankar
07:44 AM Bug #62435: Pod unable to mount fscrypt encrypted cephfs PVC when it moves to another node
Xiubo, please take this one. Venky Shankar
06:06 AM Bug #62265 (In Progress): cephfs-mirror: use monotonic clocks in cephfs mirror daemon
Manish Yathnalli

08/16/2023

04:22 PM Backport #59003: pacific: mgr/volumes: avoid returning ESHUTDOWN for cli commands
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51039
merged
Yuri Weinstein
03:56 PM Bug #62465 (Can't reproduce): pacific (?): LibCephFS.ShutdownRace segmentation fault
... Patrick Donnelly
01:55 PM Feature #62364: support dumping rstats on a particular path
Greg Farnum wrote:
> Especially now that we have rstats disabled by default,
When did this happen? Or, do you mea...
Venky Shankar
01:13 PM Feature #62215: libcephfs: Allow monitoring for any file changes like inotify
Anagh Kumar Baranwal wrote:
> Venky Shankar wrote:
> > Changes originating from the localhost would obviously be no...
Venky Shankar
01:07 PM Backport #62460 (In Progress): reef: pybind/mgr/volumes: Document a possible deadlock after a vol...
Kotresh Hiremath Ravishankar
01:03 PM Backport #62460 (In Progress): reef: pybind/mgr/volumes: Document a possible deadlock after a vol...
https://github.com/ceph/ceph/pull/52946 Backport Bot
01:07 PM Backport #62459 (In Progress): quincy: pybind/mgr/volumes: Document a possible deadlock after a v...
Kotresh Hiremath Ravishankar
01:03 PM Backport #62459 (In Progress): quincy: pybind/mgr/volumes: Document a possible deadlock after a v...
https://github.com/ceph/ceph/pull/52947 Backport Bot
12:56 PM Bug #62407 (Pending Backport): pybind/mgr/volumes: Document a possible deadlock after a volume de...
Kotresh Hiremath Ravishankar
12:55 PM Bug #62407 (Fix Under Review): pybind/mgr/volumes: Document a possible deadlock after a volume de...
Kotresh Hiremath Ravishankar
12:19 PM Backport #62373 (In Progress): quincy: Consider setting "bulk" autoscale pool flag when automatic...
Leonid Usov
12:18 PM Bug #62208 (Fix Under Review): mds: use MDSRank::abort to ceph_abort so necessary sync is done
Leonid Usov
12:18 PM Bug #62208 (In Progress): mds: use MDSRank::abort to ceph_abort so necessary sync is done
Leonid Usov
11:47 AM Feature #61334 (In Progress): cephfs-mirror: use snapdiff api for efficient tree traversal
Jos Collin
07:28 AM Bug #61399 (Resolved): qa: build failure for ior
Rishabh, does this need backporting? Venky Shankar

08/15/2023

10:10 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
/a/yuriw-2023-08-11_02:49:40-rados-wip-yuri4-testing-2023-08-10-1739-distro-default-smithi/7367034 Laura Flores
08:06 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
/a/yuriw-2023-08-10_20:19:11-rados-wip-yuri2-testing-2023-08-08-0755-pacific-distro-default-smithi/7366072 Laura Flores
06:47 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
/a/yuriw-2023-08-08_14:45:33-rados-wip-yuri6-testing-2023-08-03-0807-pacific-distro-default-smithi/7362839 Laura Flores
06:07 PM Backport #62442 (In Progress): pacific: api tests fail from "MDS_CLIENTS_LAGGY" warning
Laura Flores
06:03 PM Backport #62442 (Resolved): pacific: api tests fail from "MDS_CLIENTS_LAGGY" warning
https://github.com/ceph/ceph/pull/52270 Backport Bot
06:06 PM Backport #62441 (In Progress): quincy: api tests fail from "MDS_CLIENTS_LAGGY" warning
Laura Flores
06:03 PM Backport #62441 (Resolved): quincy: api tests fail from "MDS_CLIENTS_LAGGY" warning
https://github.com/ceph/ceph/pull/53006 Backport Bot
06:05 PM Backport #62443 (In Progress): reef: api tests fail from "MDS_CLIENTS_LAGGY" warning
Laura Flores
06:03 PM Backport #62443 (Resolved): reef: api tests fail from "MDS_CLIENTS_LAGGY" warning
https://github.com/ceph/ceph/pull/52268 Backport Bot
06:02 PM Bug #61907 (Pending Backport): api tests fail from "MDS_CLIENTS_LAGGY" warning
Laura Flores
10:54 AM Backport #62425 (In Progress): reef: nofail option in fstab not supported
https://github.com/ceph/ceph/pull/52985 Leonid Usov
10:54 AM Backport #62426 (In Progress): quincy: nofail option in fstab not supported
https://github.com/ceph/ceph/pull/52986 Leonid Usov
10:54 AM Backport #62427 (In Progress): pacific: nofail option in fstab not supported
https://github.com/ceph/ceph/pull/52987 Leonid Usov
10:07 AM Bug #55041 (Resolved): mgr/volumes: display in-progress clones for a snapshot
Konstantin Shalygin
10:07 AM Backport #56469 (Resolved): quincy: mgr/volumes: display in-progress clones for a snapshot
Konstantin Shalygin
10:07 AM Backport #56468 (Resolved): pacific: mgr/volumes: display in-progress clones for a snapshot
Konstantin Shalygin
10:06 AM Bug #55583 (Resolved): Intermittent ParsingError failure in mgr/volumes module during "clone can...
Konstantin Shalygin
10:06 AM Backport #57113 (Resolved): pacific: Intermittent ParsingError failure in mgr/volumes module dur...
Konstantin Shalygin
10:06 AM Bug #55134 (Resolved): ceph pacific fails to perform fs/mirror test
Konstantin Shalygin
10:06 AM Backport #57193 (Resolved): quincy: ceph pacific fails to perform fs/mirror test
Konstantin Shalygin
10:06 AM Backport #57194 (Resolved): pacific: ceph pacific fails to perform fs/mirror test
Konstantin Shalygin
10:05 AM Bug #56632 (Resolved): qa: test_subvolume_snapshot_clone_quota_exceeded fails CommandFailedError
Konstantin Shalygin
10:05 AM Backport #57554 (Resolved): quincy: qa: test_subvolume_snapshot_clone_quota_exceeded fails Comman...
Konstantin Shalygin
10:05 AM Backport #57555 (Resolved): pacific: qa: test_subvolume_snapshot_clone_quota_exceeded fails Comma...
Konstantin Shalygin
10:04 AM Backport #57719 (Resolved): quincy: Test failure: test_subvolume_group_ls_filter_internal_directo...
Konstantin Shalygin
10:04 AM Backport #57718 (Resolved): pacific: Test failure: test_subvolume_group_ls_filter_internal_direct...
Konstantin Shalygin
10:04 AM Backport #57729 (Resolved): quincy: Quincy 17.2.3 pybind/mgr/status: assert metadata failed
Konstantin Shalygin
10:04 AM Backport #57728 (Resolved): pacific: Quincy 17.2.3 pybind/mgr/status: assert metadata failed
Konstantin Shalygin
10:04 AM Bug #48812 (Resolved): qa: test_scrub_pause_and_resume_with_abort failure
Konstantin Shalygin
10:04 AM Backport #57760 (Resolved): quincy: qa: test_scrub_pause_and_resume_with_abort failure
Konstantin Shalygin
10:03 AM Backport #57761 (Resolved): pacific: qa: test_scrub_pause_and_resume_with_abort failure
Konstantin Shalygin
10:03 AM Bug #57589 (Resolved): cephfs-data-scan: scan_links is not verbose enough
Konstantin Shalygin
10:03 AM Backport #57820 (Resolved): quincy: cephfs-data-scan: scan_links is not verbose enough
Konstantin Shalygin
10:02 AM Backport #57821 (Resolved): pacific: cephfs-data-scan: scan_links is not verbose enough
Konstantin Shalygin
08:59 AM Bug #57084 (Resolved): Permissions of the .snap directory do not inherit ACLs
Konstantin Shalygin
08:59 AM Backport #57874 (Resolved): quincy: Permissions of the .snap directory do not inherit ACLs
Konstantin Shalygin
08:59 AM Backport #57875 (Resolved): pacific: Permissions of the .snap directory do not inherit ACLs
Konstantin Shalygin
08:58 AM Backport #59021 (Resolved): quincy: mds: warning `clients failing to advance oldest client/flush ...
Konstantin Shalygin
08:58 AM Bug #57985 (Resolved): mds: warning `clients failing to advance oldest client/flush tid` seen wit...
Konstantin Shalygin
08:57 AM Backport #59023 (Resolved): pacific: mds: warning `clients failing to advance oldest client/flush...
Konstantin Shalygin
08:57 AM Bug #58029 (Resolved): cephfs-data-scan: multiple data pools are not supported
Konstantin Shalygin
08:57 AM Backport #59020 (Resolved): reef: cephfs-data-scan: multiple data pools are not supported
Konstantin Shalygin
08:57 AM Backport #59018 (Resolved): quincy: cephfs-data-scan: multiple data pools are not supported
Konstantin Shalygin
08:56 AM Backport #59019 (Resolved): pacific: cephfs-data-scan: multiple data pools are not supported
Konstantin Shalygin
08:56 AM Bug #58095 (Resolved): snap-schedule: handle non-existent path gracefully during snapshot creation
Konstantin Shalygin
08:56 AM Backport #59016 (Resolved): quincy: snap-schedule: handle non-existent path gracefully during sna...
Konstantin Shalygin
08:56 AM Backport #59411 (Resolved): reef: snap-schedule: handle non-existent path gracefully during snaps...
Konstantin Shalygin
08:56 AM Backport #59017 (Resolved): pacific: snap-schedule: handle non-existent path gracefully during sn...
Konstantin Shalygin
08:55 AM Bug #58294 (Resolved): MDS: scan_stray_dir doesn't walk through all stray inode fragment
Konstantin Shalygin
08:55 AM Backport #58349 (Resolved): pacific: MDS: scan_stray_dir doesn't walk through all stray inode fra...
Konstantin Shalygin
08:55 AM Bug #57764 (Resolved): Thread md_log_replay is hanged for ever.
Konstantin Shalygin
08:54 AM Backport #58346 (Resolved): pacific: Thread md_log_replay is hanged for ever.
Konstantin Shalygin
08:54 AM Bug #57210 (Resolved): NFS client unable to see newly created files when listing directory conten...
Konstantin Shalygin
08:54 AM Backport #57879 (Resolved): quincy: NFS client unable to see newly created files when listing dir...
Konstantin Shalygin
08:54 AM Backport #57880 (Resolved): pacific: NFS client unable to see newly created files when listing di...
Konstantin Shalygin
08:48 AM Backport #59430 (Resolved): reef: Test failure: test_client_cache_size (tasks.cephfs.test_client_...
Konstantin Shalygin
08:48 AM Backport #59246 (Resolved): pacific: qa: fix testcase 'test_cluster_set_user_config_with_non_exis...
Konstantin Shalygin
08:47 AM Bug #58744 (Resolved): qa: intermittent nfs test failures at nfs cluster creation
Konstantin Shalygin
08:47 AM Backport #59247 (Resolved): quincy: qa: intermittent nfs test failures at nfs cluster creation
Konstantin Shalygin
08:47 AM Backport #59248 (Resolved): reef: qa: intermittent nfs test failures at nfs cluster creation
Konstantin Shalygin
08:47 AM Backport #59249 (Resolved): pacific: qa: intermittent nfs test failures at nfs cluster creation
Konstantin Shalygin
08:46 AM Backport #59251 (Resolved): reef: mgr/nfs: disallow non-existent paths when creating export
Konstantin Shalygin
08:46 AM Backport #59252 (Resolved): pacific: mgr/nfs: disallow non-existent paths when creating export
Konstantin Shalygin
08:45 AM Backport #59373 (Resolved): reef: qa: test_join_fs_unset failure
Konstantin Shalygin
08:45 AM Backport #59372 (Resolved): pacific: qa: test_join_fs_unset failure
Konstantin Shalygin
08:45 AM Backport #59721 (Resolved): pacific: qa: run scrub post disaster recovery procedure
Konstantin Shalygin
08:45 AM Bug #59569 (Resolved): mds: allow entries to be removed from lost+found directory
Konstantin Shalygin
08:44 AM Backport #59724 (Resolved): reef: mds: allow entries to be removed from lost+found directory
Konstantin Shalygin
08:44 AM Backport #59725 (Resolved): pacific: mds: allow entries to be removed from lost+found directory
Konstantin Shalygin
08:44 AM Backport #61204 (Resolved): reef: MDS imported_inodes metric is not updated.
Konstantin Shalygin
08:43 AM Backport #61202 (Resolved): pacific: MDS imported_inodes metric is not updated.
Konstantin Shalygin
08:43 AM Bug #58411 (Resolved): mds: a few simple operations crash mds
Konstantin Shalygin
08:42 AM Backport #61235 (Resolved): pacific: mds: a few simple operations crash mds
Konstantin Shalygin
08:42 AM Bug #59691 (Resolved): mon/MDSMonitor: may lookup non-existent fs in current MDSMap
Konstantin Shalygin
08:41 AM Backport #61412 (Resolved): quincy: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
Konstantin Shalygin
08:41 AM Backport #61411 (Resolved): pacific: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
Konstantin Shalygin
08:41 AM Backport #61413 (Resolved): reef: mon/MDSMonitor: do not trigger propose on error from prepare_up...
Konstantin Shalygin
08:41 AM Backport #61415 (Resolved): quincy: mon/MDSMonitor: do not trigger propose on error from prepare_...
Konstantin Shalygin
08:39 AM Backport #61414 (Resolved): pacific: mon/MDSMonitor: do not trigger propose on error from prepare...
Konstantin Shalygin
08:36 AM Bug #59350 (Resolved): qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks...
Konstantin Shalygin
08:35 AM Backport #62069 (Resolved): reef: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.Test...
Konstantin Shalygin
08:35 AM Backport #62070 (Resolved): quincy: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.Te...
Konstantin Shalygin
08:35 AM Backport #62068 (Resolved): pacific: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.T...
Konstantin Shalygin
08:34 AM Bug #59552 (Resolved): mon: block osd pool mksnap for fs pools
Konstantin Shalygin
08:34 AM Backport #61959 (Resolved): reef: mon: block osd pool mksnap for fs pools
Konstantin Shalygin
08:33 AM Backport #61961 (Resolved): pacific: mon: block osd pool mksnap for fs pools
Konstantin Shalygin
08:24 AM Bug #47172 (Resolved): mgr/nfs: Add support for RGW export
Konstantin Shalygin
08:23 AM Backport #51421 (Rejected): pacific: mgr/nfs: Add support for RGW export
Konstantin Shalygin

08/14/2023

09:18 PM Bug #62435 (Need More Info): Pod unable to mount fscrypt encrypted cephfs PVC when it moves to an...
Here is our setup:
Kubernetes: 1.27.3
rook: 1.11.9
ceph: 17.2.6
OS: Ubuntu 20.04, with a modified kernel to support fscry...
Sudhin Bengeri
02:44 PM Backport #61801 (Rejected): pacific: mon/MDSMonitor: plug PAXOS when evicting an MDS
EOL Patrick Donnelly
02:41 PM Backport #61799 (In Progress): quincy: mon/MDSMonitor: plug PAXOS when evicting an MDS
Patrick Donnelly
02:40 PM Bug #62057 (Fix Under Review): mds: add TrackedOp event for batching getattr/lookup
Patrick Donnelly
02:34 PM Bug #62057 (Pending Backport): mds: add TrackedOp event for batching getattr/lookup
(Testing a change to the backport script.) Patrick Donnelly
02:30 PM Backport #62373 (New): quincy: Consider setting "bulk" autoscale pool flag when automatically cre...
Leonid Usov
02:28 PM Backport #61800 (In Progress): reef: mon/MDSMonitor: plug PAXOS when evicting an MDS
Patrick Donnelly
01:49 PM Backport #62424 (Rejected): pacific: mds: print locks when dumping ops
EOL Patrick Donnelly
12:37 PM Backport #62424 (Rejected): pacific: mds: print locks when dumping ops
Backport Bot
01:49 PM Backport #62423 (In Progress): quincy: mds: print locks when dumping ops
Patrick Donnelly
12:37 PM Backport #62423 (In Progress): quincy: mds: print locks when dumping ops
https://github.com/ceph/ceph/pull/52976 Backport Bot
01:45 PM Backport #62422 (In Progress): reef: mds: print locks when dumping ops
Patrick Donnelly
12:36 PM Backport #62422 (In Progress): reef: mds: print locks when dumping ops
https://github.com/ceph/ceph/pull/52975 Backport Bot
01:40 PM Backport #62421 (In Progress): pacific: mds: adjust cap acquistion throttle defaults
Patrick Donnelly
12:36 PM Backport #62421 (Resolved): pacific: mds: adjust cap acquistion throttle defaults
https://github.com/ceph/ceph/pull/52974 Backport Bot
01:37 PM Backport #62420 (In Progress): quincy: mds: adjust cap acquistion throttle defaults
Patrick Donnelly
12:36 PM Backport #62420 (Resolved): quincy: mds: adjust cap acquistion throttle defaults
https://github.com/ceph/ceph/pull/52973 Backport Bot
01:35 PM Backport #62419 (In Progress): reef: mds: adjust cap acquistion throttle defaults
Patrick Donnelly
12:36 PM Backport #62419 (Resolved): reef: mds: adjust cap acquistion throttle defaults
https://github.com/ceph/ceph/pull/52972 Backport Bot
12:37 PM Backport #62427 (In Progress): pacific: nofail option in fstab not supported
https://github.com/ceph/ceph/pull/52987 Backport Bot
12:37 PM Backport #62426 (In Progress): quincy: nofail option in fstab not supported
Backport Bot
12:37 PM Backport #62425 (In Progress): reef: nofail option in fstab not supported
Backport Bot
12:23 PM Feature #62086 (Pending Backport): mds: print locks when dumping ops
Venky Shankar
12:22 PM Bug #62114 (Pending Backport): mds: adjust cap acquistion throttle defaults
Venky Shankar
12:21 PM Bug #58394 (Pending Backport): nofail option in fstab not supported
Venky Shankar

08/13/2023

06:39 AM Bug #48502: ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
/a/yuriw-2023-08-09_19:52:16-rados-wip-yuri5-testing-2023-08-08-0807-quincy-distro-default-smithi/7364400/ Nitzan Mordechai

08/12/2023

02:33 AM Feature #61903: pybind/mgr/volumes: add config to turn off subvolume deletion
Venky Shankar wrote:
> From GChat
>
> > Rishabh Dave, 1:40 PM
> >https://tracker.ceph.com/issues/61903
> >Shoul...
Venky Shankar

08/11/2023

02:34 PM Feature #61903: pybind/mgr/volumes: add config to turn off subvolume deletion
From GChat
> Rishabh Dave, 1:40 PM
>https://tracker.ceph.com/issues/61903
>Should I add an option to turn off th...
Venky Shankar
11:32 AM Bug #61947 (Fix Under Review): mds: enforce a limit on the size of a session in the sessionmap
Venky Shankar
09:12 AM Bug #62407 (Pending Backport): pybind/mgr/volumes: Document a possible deadlock after a volume de...
When a cephfs volume is deleted, the mgr threads (cloner, purge threads) could take a corresponding thread lock
and ...
Kotresh Hiremath Ravishankar
06:18 AM Backport #62406 (In Progress): pacific: pybind/mgr/volumes: pending_subvolume_deletions count is ...
https://github.com/ceph/ceph/pull/53574 Backport Bot
06:18 AM Backport #62405 (In Progress): reef: pybind/mgr/volumes: pending_subvolume_deletions count is alw...
https://github.com/ceph/ceph/pull/53572 Backport Bot
06:18 AM Backport #62404 (Resolved): quincy: pybind/mgr/volumes: pending_subvolume_deletions count is alwa...
https://github.com/ceph/ceph/pull/53573 Backport Bot
06:12 AM Bug #62278 (Pending Backport): pybind/mgr/volumes: pending_subvolume_deletions count is always ze...
Venky Shankar
12:58 AM Bug #61732 (Fix Under Review): pacific: test_cluster_info fails from "No daemons reported"
Venky Shankar

08/10/2023

09:35 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
/a/yuriw-2023-08-02_20:21:03-rados-wip-yuri3-testing-2023-08-01-0825-pacific-distro-default-smithi/7358531 Laura Flores
07:31 PM Bug #62096: mds: infinite rename recursion on itself
Xiubo Li wrote:
> Patrick,
>
> This should be the same issue with:
>
> https://tracker.ceph.com/issues/58340
...
Patrick Donnelly
12:00 PM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
Xiubo Li wrote:
> This should be a similar issue with https://tracker.ceph.com/issues/58489. Just the *openc/mknod/m...
Venky Shankar
10:35 AM Feature #58877: mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
@Neeraj,
Could you please check point 3 mentioned in comment 6 above?
-Kotresh H R
Kotresh Hiremath Ravishankar
07:17 AM Feature #58877: mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
Venky, Neeraj and I had a meeting regarding this; please find the meeting minutes below:
1. When the subvolume...
Kotresh Hiremath Ravishankar

08/09/2023

06:20 PM Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_ve...
The attached file contains log snippets with apparently relevant information for a few crashes as well as intermediat... Igor Fedotov
06:17 PM Bug #62381 (In Progress): mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() ...
Despite https://tracker.ceph.com/issues/53597 being marked as resolved, we could still face the problem in v17.2.5
...
Igor Fedotov
03:21 PM Backport #62373 (In Progress): quincy: Consider setting "bulk" autoscale pool flag when automatic...
Leonid Usov
10:03 AM Backport #62373: quincy: Consider setting "bulk" autoscale pool flag when automatically creating ...
https://github.com/ceph/ceph/pull/52902 Leonid Usov
09:26 AM Backport #62373 (Resolved): quincy: Consider setting "bulk" autoscale pool flag when automaticall...
Backport Bot
03:21 PM Backport #62372 (In Progress): pacific: Consider setting "bulk" autoscale pool flag when automati...
Leonid Usov
10:04 AM Backport #62372: pacific: Consider setting "bulk" autoscale pool flag when automatically creating...
https://github.com/ceph/ceph/pull/52900 Leonid Usov
09:26 AM Backport #62372 (Resolved): pacific: Consider setting "bulk" autoscale pool flag when automatical...
https://github.com/ceph/ceph/pull/52900 Backport Bot
03:21 PM Backport #62374 (In Progress): reef: Consider setting "bulk" autoscale pool flag when automatical...
Leonid Usov
10:04 AM Backport #62374: reef: Consider setting "bulk" autoscale pool flag when automatically creating a ...
https://github.com/ceph/ceph/pull/52899 Leonid Usov
09:26 AM Backport #62374 (Resolved): reef: Consider setting "bulk" autoscale pool flag when automatically ...
Backport Bot
12:22 PM Bug #62123: mds: detect out-of-order locking
Bumping priority since we have places where the MDS could deadlock due to out-of-order locking. Venky Shankar
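For context, a minimal toy sketch (not MDS code; all names here are illustrative) of why out-of-order locking deadlocks and how imposing a fixed global acquisition order avoids it:

```python
import threading

def acquire_in_order(*locks):
    """Acquire locks in a fixed global order (here: by id()) so two
    threads taking the same pair of locks can never deadlock by
    grabbing them in opposite orders."""
    ordered = sorted(locks, key=id)
    for lk in ordered:
        lk.acquire()
    return ordered

def release_all(held):
    # release in reverse acquisition order
    for lk in reversed(held):
        lk.release()

a, b = threading.Lock(), threading.Lock()
completed = []

def worker(first, second):
    held = acquire_in_order(first, second)  # order normalized inside
    completed.append(True)
    release_all(held)

# callers pass the locks in opposite orders, yet no deadlock occurs
t1 = threading.Thread(target=worker, args=(a, b))
t2 = threading.Thread(target=worker, args=(b, a))
t1.start(); t2.start()
t1.join(); t2.join()
```

Without the normalization, t1 holding `a` while t2 holds `b` could leave each waiting on the other forever; the MDS analogue is locking inodes/dentries in a canonical order.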
12:06 PM Backport #62040 (Resolved): pacific: client: do not send metrics until the MDS rank is ready
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52500
Merged.
Venky Shankar
12:05 PM Backport #62177 (Resolved): pacific: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror da...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52654
Merged.
Venky Shankar
09:21 AM Feature #61595 (Pending Backport): Consider setting "bulk" autoscale pool flag when automatically...
Venky Shankar
08:05 AM Bug #54833: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&): assert(lock...
Venky Shankar wrote:
> If the auth is sending `LOCK_AC_LOCK` then in the replica it should be handled here:
>
> [...
Xiubo Li
07:58 AM Bug #54833: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&): assert(lock...
If the auth is sending `LOCK_AC_LOCK` then in the replica it should be handled here:... Venky Shankar
07:33 AM Bug #54833 (In Progress): crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&...
Xiubo Li
07:33 AM Bug #54833: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&): assert(lock...
There are no logs and I just went through the MDS locker code; it seems buggy here:... Xiubo Li
06:39 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
Hi Jos,
Jos Collin wrote:
> In the run https://pulpito.ceph.com/vshankar-2023-07-26_04:54:56-fs-wip-vshankar-test...
Venky Shankar
04:39 AM Bug #61867 (Fix Under Review): mgr/volumes: async threads should periodically check for work
Venky Shankar
04:32 AM Backport #62012 (Resolved): pacific: client: dir->dentries inconsistent, both newname and oldname...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52505
Merged.
Venky Shankar
04:31 AM Backport #61983 (Resolved): pacific: mds: cap revoke and cap update's seqs mismatched
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52506
Merged.
Venky Shankar
04:31 AM Backport #62055 (Resolved): pacific: qa: test_rebuild_moved_file (tasks/data-scan) fails because ...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52513
Merged.
Venky Shankar
04:30 AM Backport #62043 (Resolved): pacific: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52499
Merged.
Venky Shankar
03:54 AM Backport #61960 (Resolved): quincy: mon: block osd pool mksnap for fs pools
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52398
Merged.
Venky Shankar
03:27 AM Bug #59785 (Closed): crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state == L...
This issue won't be seen in the latest builds for pacific v16.2.12 and later ... and quincy ceph-17.2.6-2 and later. Milind Changire
01:30 AM Backport #61696 (Resolved): pacific: CephFS: Debian cephfs-mirror package in the Ceph repo doesn'...
Jos Collin
01:29 AM Backport #61734 (Resolved): pacific: mgr/stats: exception ValueError :invalid literal for int() w...
Jos Collin
01:00 AM Bug #52280 (Resolved): Mds crash and fails with assert on prepare_new_inode
Xiubo Li
12:59 AM Backport #59706 (Resolved): pacific: Mds crash and fails with assert on prepare_new_inode
Xiubo Li
12:59 AM Backport #61798 (Resolved): pacific: client: only wait for write MDS OPs when unmounting
Xiubo Li
12:58 AM Bug #62096: mds: infinite rename recursion on itself
Patrick,
This should be the same issue with:
https://tracker.ceph.com/issues/58340
https://tracker.ceph.com/is...
Xiubo Li
12:36 AM Feature #62364 (New): support dumping rstats on a particular path
Especially now that we have rstats disabled by default, we need an easy way to dump rstats (primarily rbytes, though ... Greg Farnum
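As a practical note: on an existing CephFS mount, recursive stats are already exposed through virtual xattrs such as `ceph.dir.rbytes` and `ceph.dir.rfiles`. A minimal hedged helper (the path below is illustrative; the call only succeeds on a CephFS path, and `os.getxattr` is Linux-only):

```python
import os

def dir_rbytes(path):
    """Return the recursive byte count of a CephFS directory by reading
    the ceph.dir.rbytes virtual xattr. On non-CephFS filesystems the
    xattr does not exist and os.getxattr raises OSError."""
    return int(os.getxattr(path, "ceph.dir.rbytes"))

# e.g. dir_rbytes("/mnt/cephfs/some/dir")  # hypothetical mount point
```

The feature request above would presumably make the same information dumpable server-side for an arbitrary path, without needing a mounted client.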

08/08/2023

06:20 PM Backport #61961: pacific: mon: block osd pool mksnap for fs pools
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52397
merged
Yuri Weinstein
06:20 PM Backport #61798: pacific: client: only wait for write MDS OPs when unmounting
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52304
merged
Yuri Weinstein
06:19 PM Backport #61426: pacific: mon/MDSMonitor: daemon booting may get failed if mon handles up:boot be...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52244
merged
Yuri Weinstein
06:19 PM Backport #61414: pacific: mon/MDSMonitor: do not trigger propose on error from prepare_update
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52240
merged
Yuri Weinstein
06:18 PM Backport #59372: pacific: qa: test_join_fs_unset failure
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52237
merged
Yuri Weinstein
06:18 PM Backport #61411: pacific: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52233
merged
Yuri Weinstein
06:17 PM Backport #61692: pacific: mon failed to return metadata for mds
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52230
merged
Yuri Weinstein
06:17 PM Backport #61734: pacific: mgr/stats: exception ValueError :invalid literal for int() with base 16...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52125
merged
Yuri Weinstein
06:16 PM Backport #61696: pacific: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install t...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52075
merged
Yuri Weinstein
06:15 PM Backport #59706: pacific: Mds crash and fails with assert on prepare_new_inode
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51508
merged
Yuri Weinstein
12:39 PM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
In the run https://pulpito.ceph.com/vshankar-2023-07-26_04:54:56-fs-wip-vshankar-testing-20230725.053049-testing-defa... Jos Collin
04:49 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
Jos Collin wrote:
> @Venky:
>
> This bug couldn't be reproduced on main with consecutive runs of test_cephfs_mirr...
Venky Shankar
04:31 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
@Venky:
This bug couldn't be reproduced on main with consecutive runs of test_cephfs_mirror_restart_sync_on_blockl...
Jos Collin
11:43 AM Bug #62077 (In Progress): mgr/nfs: validate path when modifying cephfs export
Dhairya Parmar
10:12 AM Feature #61595 (Fix Under Review): Consider setting "bulk" autoscale pool flag when automatically...
The resolved status was a bit premature. See - https://github.com/ceph/ceph/pull/52792#issuecomment-1669259541
Fur...
Venky Shankar
08:25 AM Feature #61595 (Resolved): Consider setting "bulk" autoscale pool flag when automatically creatin...
Leonid Usov
09:58 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
This should be a similar issue with https://tracker.ceph.com/issues/58489. Just the *openc/mknod/mkdir/symblink* even... Xiubo Li
09:29 AM Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
It aborted in Line#1623. The *session->take_ino()* may return *0* if the *used_preallocated_ino* doesn't exist. Then ... Xiubo Li
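To illustrate the failure mode with a toy model (not Ceph's actual `interval_set`): if `take_ino()` hands back 0 because the preallocated ino was never recorded, a later erase of that value lands outside every interval and trips the containment assertion, the analogue of `ceph_assert(p->first <= start)`:

```python
class ToyIntervalSet:
    """Minimal stand-in for interval_set<inodeno_t>: intervals kept as
    (start, length) pairs; erase() asserts the value is covered."""
    def __init__(self):
        self.intervals = []

    def insert(self, start, length):
        self.intervals.append((start, length))

    def erase(self, value):
        for i, (s, l) in enumerate(self.intervals):
            if s <= value < s + l:
                # split the interval around the erased value
                new = []
                if value > s:
                    new.append((s, value - s))
                if value + 1 < s + l:
                    new.append((value + 1, s + l - value - 1))
                self.intervals[i:i+1] = new
                return
        # toy analogue of the FAILED ceph_assert in the report
        raise AssertionError(f"erase({value}): value not in any interval")

prealloc = ToyIntervalSet()
prealloc.insert(1000, 100)   # inos 1000..1099 preallocated
prealloc.erase(1000)         # normal path: ino was in the set
try:
    prealloc.erase(0)        # take_ino() returned 0: never allocated
    crashed = False
except AssertionError:
    crashed = True
```

In the real code the assertion aborts the MDS, which is why guarding against a zero/absent `used_preallocated_ino` before erasing matters.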
09:27 AM Bug #62356 (Duplicate): mds: src/include/interval_set.h: 538: FAILED ceph_assert(p->first <= start)
Xiubo Li
08:40 AM Bug #62356: mds: src/include/interval_set.h: 538: FAILED ceph_assert(p->first <= start)
I just realized there is already an existing tracker which has the same issue: https://tracker.ceph.com/issues/61009. Xiubo Li
08:32 AM Bug #62356 (Duplicate): mds: src/include/interval_set.h: 538: FAILED ceph_assert(p->first <= start)
... Xiubo Li
08:43 AM Bug #54943 (Duplicate): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [w...
Xiubo Li
08:33 AM Bug #62357 (Resolved): tools/cephfs_mirror: only perform actions if init succeed
Address the non-zero return code first and then perform further actions. Dhairya Parmar
08:30 AM Bug #62355 (Closed): cephfs-mirror: do not run concurrent C_RestartMirroring context
Closing since the summary was added to the existing tracker (which wasn't meant to address C_RestartMirroring contexts but ... Dhairya Parmar
08:20 AM Bug #62355 (Closed): cephfs-mirror: do not run concurrent C_RestartMirroring context
This was majorly discussed in tracker https://tracker.ceph.com/issues/62072 and PR https://github.com/ceph/ceph/pull/... Dhairya Parmar
08:28 AM Bug #62072 (Fix Under Review): cephfs-mirror: do not run concurrent C_RestartMirroring context
Quick summary:
After digging deep, the issue turned out to be much more than anticipated; kudos to Venky for figuring it out ...
Dhairya Parmar
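A sketch of the general idea (toy Python, not the cephfs-mirror C++ code; the class and flag names are made up): a guard so only one restart context can be in flight, with concurrent triggers coalesced instead of racing:

```python
import threading

class MirrorRestarter:
    """Toy model: serialize restarts so that concurrent triggers
    (e.g. a blocklist event plus a config change) collapse into one
    in-flight restart instead of running two contexts at once."""
    def __init__(self):
        self._lock = threading.Lock()
        self._restarting = False
        self.restart_count = 0

    def request_restart(self):
        with self._lock:
            if self._restarting:
                return False          # coalesce: one already in flight
            self._restarting = True
        try:
            self.restart_count += 1   # stands in for shutdown + re-init
            return True
        finally:
            with self._lock:
                self._restarting = False

r = MirrorRestarter()
first = r.request_restart()   # runs the restart
```

A second trigger arriving while `_restarting` is set would return False and rely on the in-flight restart, which is the behavior the tracker discussion converges on.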
05:39 AM Bug #61717: CephFS flock blocked on itself
Greg Farnum wrote:
> I think there must be more going on here than is understood. The MDS is blocked on getting some...
Venky Shankar
05:24 AM Bug #61717 (Can't reproduce): CephFS flock blocked on itself
I think there must be more going on here than is understood. The MDS is blocked on getting some other internal locks ... Greg Farnum
12:58 AM Backport #61984 (Resolved): reef: mds: cap revoke and cap update's seqs mismatched
Xiubo Li