Activity
From 06/27/2023 to 07/26/2023
07/26/2023
- 07:01 PM Bug #62126: test failure: suites/blogbench.sh stops running
- Seen here and probably elsewhere: /teuthology/yuriw-2023-07-10_00:47:51-fs-reef-distro-default-smithi/7331743/teuthol...
- 10:50 AM Backport #62178 (In Progress): reef: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror da...
- 10:38 AM Backport #62178 (Resolved): reef: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemo...
- https://github.com/ceph/ceph/pull/52656
- 10:48 AM Backport #62177 (In Progress): pacific: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror...
- 10:38 AM Backport #62177 (Resolved): pacific: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror da...
- https://github.com/ceph/ceph/pull/52654
- 10:46 AM Backport #62176 (In Progress): quincy: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror ...
- 10:38 AM Backport #62176 (In Progress): quincy: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror ...
- https://github.com/ceph/ceph/pull/52653
- 10:31 AM Bug #61182 (Pending Backport): qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon a...
- 10:05 AM Feature #61908 (Fix Under Review): mds: provide configuration for trim rate of the journal
- 09:33 AM Bug #52439 (Can't reproduce): qa: acls does not compile on centos stream
- 08:43 AM Bug #62036 (Fix Under Review): src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
- 06:32 AM Bug #62036: src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
- The mds became *up:active* before receiving the last *cache_rejoin ack*:...
- 05:39 AM Bug #62036 (In Progress): src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
- 08:08 AM Backport #59264 (Resolved): pacific: pacific scrub ~mds_dir causes stray related ceph_assert, abo...
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50814
Merged.
- 08:08 AM Backport #59261 (Resolved): pacific: mds: stray directories are not purged when all past parents ...
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50814
Merged.
- 06:26 AM Backport #61346 (Resolved): pacific: mds: fsstress.sh hangs with multimds (deadlock between unlin...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51686
Merged.
- 04:15 AM Bug #57014 (Resolved): cephfs-top: add an option to dump the computed values to stdout
- 04:13 AM Bug #58823 (Resolved): cephfs-top: navigate to home screen when no fs
- 04:12 AM Bug #59553 (Resolved): cephfs-top: fix help text for delay
- 04:12 AM Bug #58677 (Resolved): cephfs-top: test the current python version is supported
- 04:10 AM Documentation #57673 (Resolved): doc: document the relevance of mds_namespace mount option
- 04:09 AM Backport #58408 (Resolved): pacific: doc: document the relevance of mds_namespace mount option
- 04:03 AM Backport #59482 (Resolved): pacific: cephfs-top, qa: test the current python version is supported
- 04:02 AM Backport #58984 (Resolved): pacific: cephfs-top: navigate to home screen when no fs
07/25/2023
- 07:09 PM Bug #62164 (Fix Under Review): qa: "cluster [ERR] MDS abort because newly corrupt dentry to be co...
- 02:13 PM Bug #62164 (Pending Backport): qa: "cluster [ERR] MDS abort because newly corrupt dentry to be co...
- /teuthology/yuriw-2023-07-20_14:36:46-fs-wip-yuri-testing-2023-07-19-1340-pacific-distro-default-smithi/7344784/1$
...
- 04:38 PM Bug #58813 (Resolved): cephfs-top: Sort menu doesn't show 'No filesystem available' screen when a...
- 04:38 PM Bug #58814 (Resolved): cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 04:37 PM Backport #58865 (Resolved): quincy: cephfs-top: Sort menu doesn't show 'No filesystem available' ...
- 03:07 PM Backport #58865: quincy: cephfs-top: Sort menu doesn't show 'No filesystem available' screen when...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50365
merged
- 04:37 PM Backport #58985 (Resolved): quincy: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 03:08 PM Backport #58985: quincy: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50595
merged
- 03:55 PM Bug #52386 (Resolved): client: fix dump mds twice
- 03:55 PM Backport #52442 (Resolved): pacific: client: fix dump mds twice
- 03:15 PM Backport #52442: pacific: client: fix dump mds twice
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51247
merged
- 03:32 PM Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- /a/yuriw-2023-07-21_02:03:58-rados-wip-yuri7-testing-2023-07-20-0727-distro-default-smithi/7346244
- 05:06 AM Bug #62084 (Fix Under Review): task/test_nfs: AttributeError: 'TestNFS' object has no attribute '...
- 03:20 PM Bug #59107: MDS imported_inodes metric is not updated.
- https://github.com/ceph/ceph/pull/51699 merged
- 03:19 PM Backport #59725: pacific: mds: allow entries to be removed from lost+found directory
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51687
merged
- 03:17 PM Backport #59721: pacific: qa: run scrub post disaster recovery procedure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51610
merged
- 03:17 PM Backport #61235: pacific: mds: a few simple operations crash mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51609
merged
- 03:16 PM Backport #59482: pacific: cephfs-top, qa: test the current python version is supported
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51353
merged
- 03:15 PM Backport #59017: pacific: snap-schedule: handle non-existent path gracefully during snapshot crea...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51246
merged
- 03:12 PM Backport #58984: pacific: cephfs-top: navigate to home screen when no fs
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50737
merged
- 03:09 PM Backport #59021: quincy: mds: warning `clients failing to advance oldest client/flush tid` seen w...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50785
merged
- 03:08 PM Backport #59016: quincy: snap-schedule: handle non-existent path gracefully during snapshot creation
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50780
merged
- 10:40 AM Bug #62160 (Duplicate): mds: MDS abort because newly corrupt dentry to be committed
- /a/yuriw-2023-07-20_14:36:46-fs-wip-yuri-testing-2023-07-19-1340-pacific-distro-default-smithi/7344784...
- 09:57 AM Tasks #62159 (In Progress): qa: evaluate mds_partitioner
- Evaluation types
* Various workloads using benchmark tools to mimic realistic scenarios
* unittest
* qa suite for ...
- 09:51 AM Bug #62158 (New): mds: quick suspend or abort metadata migration
- This feature has been discussed in the CDS Squid CephFS session https://pad.ceph.com/p/cds-squid-mds-partitioner-2023...
- 09:41 AM Feature #62157 (In Progress): mds: working set size tracker
- This feature has been discussed in the CDS Squid CephFS session https://pad.ceph.com/p/cds-squid-mds-partitioner-2023...
- 07:32 AM Backport #62147 (In Progress): reef: qa: adjust fs:upgrade to use centos_8 yaml
- 07:27 AM Backport #62147 (In Progress): reef: qa: adjust fs:upgrade to use centos_8 yaml
- https://github.com/ceph/ceph/pull/52618
- 07:24 AM Bug #62146 (Pending Backport): qa: adjust fs:upgrade to use centos_8 yaml
- 04:20 AM Bug #62146 (Fix Under Review): qa: adjust fs:upgrade to use centos_8 yaml
- 04:19 AM Bug #62146 (Pending Backport): qa: adjust fs:upgrade to use centos_8 yaml
- Since n/o/p release packages aren't built for centos_9, those tests are failing with package issues.
- 06:04 AM Cleanup #61482: mgr/nfs: remove deprecated `nfs export delete` and `nfs cluster delete` interfaces
- Dhairya, let's get the deprecation warning in place and plan to remove the interface a couple of releases down.
- 05:14 AM Bug #62126: test failure: suites/blogbench.sh stops running
- Rishabh, can you run blogbench with a verbose flag (if any) to see which operation it gets stuck in exactly?
- 05:12 AM Bug #61909 (Can't reproduce): mds/fsmap: fs fail cause to mon crash
- > Yes, there's really no other way, because clients use rbd storage in this cluster and I am in a hurry to recover c...
- 05:04 AM Bug #62073 (Duplicate): AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- Duplicate of https://tracker.ceph.com/issues/62084
07/24/2023
- 06:37 PM Bug #48673: High memory usage on standby replay MDS
- I've confirmed that `fs set auxtel allow_standby_replay false` does free the leaked memory in the standby mds but doesn...
- 06:20 PM Bug #48673: High memory usage on standby replay MDS
- This issue triggered again this morning for the first time in 2 weeks. What's noteworthy is that the active mds seem...
- 04:19 PM Backport #61900 (Resolved): pacific: pybind/cephfs: holds GIL during rmdir
- 03:03 PM Backport #61900: pacific: pybind/cephfs: holds GIL during rmdir
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52323
merged
- 03:08 PM Bug #52439: qa: acls does not compile on centos stream
- I had a conversation with Patrick last week about this ticket. He doesn't remember what this ticket was even about. I...
- 12:39 PM Bug #62126 (New): test failure: suites/blogbench.sh stops running
- I found this failure while running integration tests for a few CephFS PRs. This failure occurred even after running th...
- 11:40 AM Bug #61182 (Fix Under Review): qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon a...
- 08:22 AM Bug #62123 (New): mds: detect out-of-order locking
- From Patrick's comments in https://github.com/ceph/ceph/pull/52522#discussion_r1269575242.
We need to make sure th...
- 04:56 AM Feature #61908: mds: provide configuration for trim rate of the journal
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > OK, this is what I have in mind:
> >
> > Introduce an MDS con...
07/21/2023
- 07:38 PM Backport #58991 (In Progress): quincy: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- 07:38 PM Backport #58992 (In Progress): pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- 07:20 PM Backport #62028 (In Progress): pacific: mds/MDSAuthCaps: "fsname", path, root_squash can't be in ...
- 07:07 PM Backport #62027 (In Progress): quincy: mds/MDSAuthCaps: "fsname", path, root_squash can't be in s...
- 06:45 PM Backport #62026 (In Progress): reef: mds/MDSAuthCaps: "fsname", path, root_squash can't be in sam...
- 06:34 PM Backport #59015 (In Progress): pacific: Command failed (workunit test fs/quota/quota.sh) on smith...
- 06:21 PM Backport #59014 (In Progress): quincy: Command failed (workunit test fs/quota/quota.sh) on smithi...
- 06:04 PM Backport #59410 (In Progress): reef: Command failed (workunit test fs/quota/quota.sh) on smithi08...
- 05:11 PM Bug #61947: mds: enforce a limit on the size of a session in the sessionmap
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > This one's interesting. I did mention in the standup yesterday t...
- 01:21 AM Bug #61947: mds: enforce a limit on the size of a session in the sessionmap
- Venky Shankar wrote:
> This one's interesting. I did mention in the standup yesterday that I've seen this earlier an...
- 04:00 PM Bug #62114 (Fix Under Review): mds: adjust cap acquisition throttle defaults
- 03:53 PM Bug #62114 (Pending Backport): mds: adjust cap acquisition throttle defaults
- They are too conservative and rarely trigger in production clusters.
- 08:47 AM Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- Milind Changire wrote:
> Venky,
> The upstream user has also sent across debug (level 20) logs for ceph-fuse as wel...
- 08:45 AM Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- Venky,
The upstream user has also sent across debug (level 20) logs for ceph-fuse as well as mds.
Unfortunately, th...
- 04:41 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Jos, as per https://tracker.ceph.com/issues/61182#note-31, please check if the volume deletions (and probably creatio...
- 01:33 AM Backport #61797 (Resolved): reef: client: only wait for write MDS OPs when unmounting
- 01:22 AM Bug #61897 (Duplicate): qa: rados:mgr fails with MDS_CLIENTS_LAGGY
07/20/2023
- 11:12 PM Backport #61797: reef: client: only wait for write MDS OPs when unmounting
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52302
merged
- 09:18 AM Backport #61347 (Resolved): reef: mds: fsstress.sh hangs with multimds (deadlock between unlink a...
- 09:17 AM Backport #59708 (Resolved): reef: Mds crash and fails with assert on prepare_new_inode
- 06:28 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- So, here is the order of tasks unwinding:
HA workunit finishes:...
- 06:15 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Greg Farnum wrote:
> Venky Shankar wrote:
> > Greg Farnum wrote:
> > > Oh, I guess the daemons are created via the...
- 05:47 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Venky Shankar wrote:
> Greg Farnum wrote:
> > Oh, I guess the daemons are created via the qa/suites/fs/mirror-ha/ce...
- 05:31 AM Bug #62072: cephfs-mirror: do not run concurrent C_RestartMirroring context
- (discussion continued on the PR)
- 05:22 AM Bug #62072: cephfs-mirror: do not run concurrent C_RestartMirroring context
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > If cephfs-mirror daemon faces any issues connecting to the cluster...
- 05:17 AM Bug #62072: cephfs-mirror: do not run concurrent C_RestartMirroring context
- Dhairya Parmar wrote:
> If cephfs-mirror daemon faces any issues connecting to the cluster or error accessing local ...
- 02:14 AM Backport #61735 (Resolved): reef: mgr/stats: exception ValueError :invalid literal for int() with...
- 02:14 AM Backport #61694 (Resolved): reef: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't i...
- 12:33 AM Bug #62096 (Duplicate): mds: infinite rename recursion on itself
- https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/7337403
I don...
07/19/2023
- 06:29 PM Feature #61908: mds: provide configuration for trim rate of the journal
- Venky Shankar wrote:
> OK, this is what I have in mind:
>
> Introduce an MDS config key that controls the rate of...
- 06:34 AM Feature #61908: mds: provide configuration for trim rate of the journal
- OK, this is what I have in mind:
Introduce an MDS config key that controls the rate of trimming - number of log se...
- 04:13 PM Feature #62086 (Fix Under Review): mds: print locks when dumping ops
- 04:09 PM Feature #62086 (Pending Backport): mds: print locks when dumping ops
- To help identify where an operation is stuck obtaining locks.
- 03:53 PM Backport #61959: reef: mon: block osd pool mksnap for fs pools
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52399
merged
- 03:52 PM Backport #61424: reef: mon/MDSMonitor: daemon booting may get failed if mon handles up:boot beaco...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52242
merged
- 03:52 PM Backport #61413: reef: mon/MDSMonitor: do not trigger propose on error from prepare_update
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52238
merged
- 03:51 PM Backport #61410: reef: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52232
merged
- 03:50 PM Backport #61759: reef: tools/cephfs/first-damage: unicode decode errors break iteration
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52231
merged
- 03:48 PM Backport #61693: reef: mon failed to return metadata for mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52229
merged
- 03:40 PM Backport #61735: reef: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52126
merged
- 03:40 PM Backport #61694: reef: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install the ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52073
merged
- 03:39 PM Backport #61347: reef: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegr...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51684
merged
- 03:39 PM Backport #59724: reef: mds: allow entries to be removed from lost+found directory
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51607
merged
- 03:38 PM Feature #61863: mds: issue a health warning with estimated time to complete replay
- Venky Shankar wrote:
> Patrick, the "MDS behind trimming" warning during up:replay is kind of expected in cases wher...
- 03:37 PM Backport #59708: reef: Mds crash and fails with assert on prepare_new_inode
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51506
merged
- 03:36 PM Backport #59719: reef: client: read wild pointer when reconnect to mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51484
merged
- 03:20 PM Bug #62084 (Resolved): task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph...
- ...
- 02:52 PM Feature #62083 (In Progress): CephFS multi-client guaranteed-consistent snapshots
- This tracker is to discuss and implement guaranteed-consistent snapshots of subdirectories, when using CephFS across m...
- 01:58 PM Bug #62077: mgr/nfs: validate path when modifying cephfs export
- Dhairya, this should be straightforward with the path validation helper you introduced, right?
- 11:04 AM Bug #62077 (In Progress): mgr/nfs: validate path when modifying cephfs export
- ...
- 01:27 PM Bug #62052: mds: deadlock when getattr changes inode lockset
- Added a few more notes about reproduction.
- 11:35 AM Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- Milind Changire wrote:
> "Similar crash report in ceph-users mailing list":https://lists.ceph.io/hyperkitty/list/cep... - 06:59 AM Backport #62068 (In Progress): pacific: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_check...
- Commits appended in https://github.com/ceph/ceph/pull/50814
- 06:58 AM Backport #62069 (In Progress): reef: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.T...
- commits appended in https://github.com/ceph/ceph/pull/50813
- 06:58 AM Backport #62070 (In Progress): quincy: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks...
- Commits appended in https://github.com/ceph/ceph/pull/50815
- 06:57 AM Bug #61897 (Resolved): qa: rados:mgr fails with MDS_CLIENTS_LAGGY
- 06:57 AM Bug #61897: qa: rados:mgr fails with MDS_CLIENTS_LAGGY
- Fixed in https://tracker.ceph.com/issues/61907
- 06:55 AM Bug #61781: mds: couldn't successfully calculate the locker caps
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > Xiubo - similar failure here: /a/vshankar-20...
- 06:44 AM Bug #62074 (Resolved): cephfs-shell: ls command has help message of cp command
- CephFS:~/>>> help ls
usage: ls [-h] [-l] [-r] [-H] [-a] [-S] [paths [paths ...]]...
- 06:32 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- From GChat:...
- 05:17 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Venky Shankar wrote:
> Xiubo, this needs backported to reef, yes?
It's already in reef.
- 04:52 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- A bit unrelated, but mentioning here for completeness:
/a/yuriw-2023-07-14_23:37:57-fs-wip-yuri8-testing-2023-07-1...
- 04:23 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Xiubo, this needs to be backported to reef, yes?
- 03:04 AM Bug #56698 (Fix Under Review): client: FAILED ceph_assert(_size == 0)
- 02:42 AM Bug #56698: client: FAILED ceph_assert(_size == 0)
- Venky Shankar wrote:
> Xiubo, do we have the core for this crash. If you have the debug env, then figuring out which...
- 03:03 AM Bug #61913 (Closed): client: crash the client more gracefully
- Will fix this in https://tracker.ceph.com/issues/56698.
- 12:54 AM Bug #62073: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- /a/yuriw-2023-07-15_23:37:56-rados-wip-yuri2-testing-2023-07-15-0802-distro-default-smithi/7340872
07/18/2023
- 08:49 PM Bug #62073 (Duplicate): AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
- /a/yuriw-2023-07-17_14:37:31-rados-wip-yuri-testing-2023-07-14-1641-distro-default-smithi/7341551...
- 03:40 PM Bug #62072 (Resolved): cephfs-mirror: do not run concurrent C_RestartMirroring context
- If the cephfs-mirror daemon faces any issues connecting to the cluster, or errors accessing the local pool or mounting the fs, then ...
- 03:15 PM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Jos Collin wrote:
> Venky Shankar wrote:
> > Out of the 3 replayer threads, only two exited when the mirror daemon ...
- 03:01 PM Bug #56698: client: FAILED ceph_assert(_size == 0)
- Xiubo, do we have the core for this crash. If you have the debug env, then figuring out which xlist member in MetaSes...
- 02:48 PM Backport #62070 (In Progress): quincy: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks...
- 02:48 PM Backport #62069 (Resolved): reef: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.Test...
- 02:48 PM Backport #62068 (Resolved): pacific: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.T...
- 02:47 PM Bug #59350 (Pending Backport): qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScr...
- 02:43 PM Bug #62067 (New): ffsb.sh failure "Resource temporarily unavailable"
- /a/vshankar-2023-07-12_07:14:06-fs-wip-vshankar-testing-20230712.041849-testing-default-smithi/7334808
Des...
- 02:04 PM Bug #62052 (Fix Under Review): mds: deadlock when getattr changes inode lockset
- 12:36 PM Bug #62052: mds: deadlock when getattr changes inode lockset
- Patrick Donnelly wrote:
> Xiubo Li wrote:
> > Patrick, maybe we should add the detail event when acquiring each loc...
- 12:33 PM Bug #62052: mds: deadlock when getattr changes inode lockset
- Xiubo Li wrote:
> Patrick, maybe we should add the detail event when acquiring each locks ? Then it will be easier t...
- 03:36 AM Bug #62052: mds: deadlock when getattr changes inode lockset
- Patrick, maybe we should add a detailed event when acquiring each lock? Then it will be easier to find the root cau...
- 03:34 AM Bug #62052: mds: deadlock when getattr changes inode lockset
- So the deadlock is between *getattr* and *create* requests.
- 01:56 AM Bug #62052: mds: deadlock when getattr changes inode lockset
- I have a fix I'm polishing to push for a PR. It'll be up soon.
- 01:55 AM Bug #62052 (Pending Backport): mds: deadlock when getattr changes inode lockset
- Under heavy request contention for locks, it's possible for getattr to change the requested locks for the target ...
- 12:45 PM Bug #62058 (Fix Under Review): mds: inode snaplock only acquired for open in create codepath
- 12:43 PM Bug #62058 (Pending Backport): mds: inode snaplock only acquired for open in create codepath
- https://github.com/ceph/ceph/blob/236f8b632fbddcfe9dcdb484561c0fede717fd2f/src/mds/Server.cc#L4612-L4615
It doesn'...
- 12:38 PM Bug #62057 (Fix Under Review): mds: add TrackedOp event for batching getattr/lookup
- 12:36 PM Bug #62057 (Resolved): mds: add TrackedOp event for batching getattr/lookup
- 12:27 PM Bug #61781: mds: couldn't successfully calculate the locker caps
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo - similar failure here: /a/vshankar-2023-07-12_07:14:06-fs-wip-vsh...
- 12:09 PM Bug #61781: mds: couldn't successfully calculate the locker caps
- Venky Shankar wrote:
> Xiubo - similar failure here: /a/vshankar-2023-07-12_07:14:06-fs-wip-vshankar-testing-2023071...
- 11:42 AM Bug #61781: mds: couldn't successfully calculate the locker caps
- Xiubo - similar failure here: /a/vshankar-2023-07-12_07:14:06-fs-wip-vshankar-testing-20230712.041849-testing-default...
- 11:47 AM Backport #61986 (In Progress): pacific: mds: session ls command appears twice in command listing
- 11:45 AM Backport #61988 (In Progress): quincy: mds: session ls command appears twice in command listing
- 11:43 AM Backport #61987 (In Progress): reef: mds: session ls command appears twice in command listing
- 11:05 AM Backport #62056 (In Progress): quincy: qa: test_rebuild_moved_file (tasks/data-scan) fails becaus...
- 10:42 AM Backport #62056 (Resolved): quincy: qa: test_rebuild_moved_file (tasks/data-scan) fails because m...
- https://github.com/ceph/ceph/pull/52514
- 11:03 AM Backport #62055 (In Progress): pacific: qa: test_rebuild_moved_file (tasks/data-scan) fails becau...
- 10:42 AM Backport #62055 (Resolved): pacific: qa: test_rebuild_moved_file (tasks/data-scan) fails because ...
- https://github.com/ceph/ceph/pull/52513
- 11:00 AM Backport #62054 (In Progress): reef: qa: test_rebuild_moved_file (tasks/data-scan) fails because ...
- 10:41 AM Backport #62054 (Resolved): reef: qa: test_rebuild_moved_file (tasks/data-scan) fails because mds...
- https://github.com/ceph/ceph/pull/52512
- 10:41 AM Bug #61201 (Pending Backport): qa: test_rebuild_moved_file (tasks/data-scan) fails because mds cr...
- 05:18 AM Bug #61924: tar: file changed as we read it (unless cephfs mounted with norbytes)
- Venky Shankar wrote:
> Hi Harry,
>
> Harry Coin wrote:
> > Ceph: Pacific. When using tar heavily (such as compi...
- 03:17 AM Backport #61985 (In Progress): quincy: mds: cap revoke and cap update's seqs mismatched
- 03:14 AM Backport #61984 (In Progress): reef: mds: cap revoke and cap update's seqs mismatched
- 03:12 AM Backport #61983 (In Progress): pacific: mds: cap revoke and cap update's seqs mismatched
- 03:05 AM Backport #62012 (In Progress): pacific: client: dir->dentries inconsistent, both newname and oldn...
- 02:59 AM Backport #62010 (In Progress): quincy: client: dir->dentries inconsistent, both newname and oldna...
- 02:59 AM Backport #62011 (In Progress): reef: client: dir->dentries inconsistent, both newname and oldname...
- 02:45 AM Backport #62042 (In Progress): quincy: client: do not send metrics until the MDS rank is ready
- 02:42 AM Backport #62041 (In Progress): reef: client: do not send metrics until the MDS rank is ready
- 02:41 AM Backport #62040 (In Progress): pacific: client: do not send metrics until the MDS rank is ready
- 02:30 AM Backport #62043 (In Progress): pacific: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- 02:23 AM Backport #62045 (In Progress): quincy: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- 02:20 AM Backport #62044 (In Progress): reef: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
07/17/2023
- 01:11 PM Bug #60669: crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in->firs...
- Unassigning since it's a duplicate and we're waiting for this crash to be reproduced in a teuthology run.
- 11:34 AM Feature #61778: mgr/mds_partitioner: add MDS partitioner module in MGR
- Just FYI - https://github.com/ceph/ceph/pull/52196 disables the balancer by default since it has been a source of per...
- 08:32 AM Backport #62045 (Resolved): quincy: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- https://github.com/ceph/ceph/pull/52498
- 08:32 AM Backport #62044 (Resolved): reef: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- https://github.com/ceph/ceph/pull/52497
- 08:32 AM Backport #62043 (Resolved): pacific: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- https://github.com/ceph/ceph/pull/52499
- 08:32 AM Bug #54460 (Resolved): snaptest-multiple-capsnaps.sh test failure
- https://tracker.ceph.com/issues/59343 is the other ticket attached to the backport.
- 08:32 AM Backport #62042 (Resolved): quincy: client: do not send metrics until the MDS rank is ready
- https://github.com/ceph/ceph/pull/52502
- 08:32 AM Backport #62041 (Resolved): reef: client: do not send metrics until the MDS rank is ready
- https://github.com/ceph/ceph/pull/52501
- 08:31 AM Backport #62040 (Resolved): pacific: client: do not send metrics until the MDS rank is ready
- https://github.com/ceph/ceph/pull/52500
- 08:30 AM Bug #59343 (Pending Backport): qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- 08:29 AM Bug #61523 (Pending Backport): client: do not send metrics until the MDS rank is ready
- 08:26 AM Bug #62036: src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
- BTW, I did not debug into this as it was unrelated to the PRs in the test branch.
This needs to be triaged and RCA'd.
- 06:47 AM Bug #62036 (Fix Under Review): src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
- /a/vshankar-2023-07-04_11:59:45-fs-wip-vshankar-testing-20230704.040136-testing-default-smithi/7326619...
07/16/2023
07/15/2023
- 02:46 AM Backport #62028 (In Progress): pacific: mds/MDSAuthCaps: "fsname", path, root_squash can't be in ...
- https://github.com/ceph/ceph/pull/52583
- 02:46 AM Backport #62027 (In Progress): quincy: mds/MDSAuthCaps: "fsname", path, root_squash can't be in s...
- https://github.com/ceph/ceph/pull/52582
- 02:46 AM Backport #62026 (In Progress): reef: mds/MDSAuthCaps: "fsname", path, root_squash can't be in sam...
- https://github.com/ceph/ceph/pull/52581
- 02:37 AM Feature #59388 (Pending Backport): mds/MDSAuthCaps: "fsname", path, root_squash can't be in same ...
- 02:37 AM Feature #59388 (Resolved): mds/MDSAuthCaps: "fsname", path, root_squash can't be in same cap with...
07/14/2023
- 09:06 PM Bug #62021 (Fix Under Review): mds: unnecessary second lock on snaplock
- 06:44 PM Bug #62021 (Fix Under Review): mds: unnecessary second lock on snaplock
- https://github.com/ceph/ceph/blob/3ca0f45de9fa00088fc670b19a3ebd8d5e778b3b/src/mds/Server.cc#L4612-L4615...
- 02:26 PM Backport #61234 (Resolved): reef: mds: a few simple operations crash mds
- 02:26 PM Backport #61233 (Resolved): quincy: mds: a few simple operations crash mds
- 10:53 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Venky Shankar wrote:
> Out of the 3 replayer threads, only two exited when the mirror daemon was shutting down:
>
...
- 12:56 AM Backport #62012 (Resolved): pacific: client: dir->dentries inconsistent, both newname and oldname...
- https://github.com/ceph/ceph/pull/52505
- 12:56 AM Backport #62011 (Resolved): reef: client: dir->dentries inconsistent, both newname and oldname po...
- https://github.com/ceph/ceph/pull/52504
- 12:56 AM Backport #62010 (Resolved): quincy: client: dir->dentries inconsistent, both newname and oldname ...
- https://github.com/ceph/ceph/pull/52503
- 12:35 AM Bug #49912 (Pending Backport): client: dir->dentries inconsistent, both newname and oldname point...
- 12:35 AM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- Rishabh Dave wrote:
> The PR has been merged. Should this PR be backported?
Yeah, it should be.
07/13/2023
- 06:48 PM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- The PR has been merged. Should this PR be backported?
- 12:45 PM Backport #62005 (In Progress): quincy: client: readdir_r_cb: get rstat for dir only if using rbyt...
- https://github.com/ceph/ceph/pull/53360
- 12:45 PM Backport #62004 (In Progress): reef: client: readdir_r_cb: get rstat for dir only if using rbytes...
- https://github.com/ceph/ceph/pull/53359
- 12:45 PM Backport #62003 (Rejected): pacific: client: readdir_r_cb: get rstat for dir only if using rbytes...
- https://github.com/ceph/ceph/pull/54179
- 12:34 PM Bug #61999 (Pending Backport): client: readdir_r_cb: get rstat for dir only if using rbytes for size
- 08:42 AM Bug #61999 (Rejected): client: readdir_r_cb: get rstat for dir only if using rbytes for size
- When client_dirsize_rbytes is off, there should be no need to fetch rstat on readdir operations. This fixes perfor...
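For context, a minimal Python sketch (the mount point and directory below are hypothetical) of how the recursive size shows up on the client; ceph.dir.rbytes is the standard CephFS vxattr, while whether a directory's st_size reflects it is governed by the client_dirsize_rbytes option referenced above:
    import os

    path = "/mnt/cephfs/some/dir"  # hypothetical mount point and directory

    # With client_dirsize_rbytes enabled, a directory's st_size reports the
    # recursive byte count (rbytes); with it disabled, stat/readdir need not
    # fetch rstat at all, which is the performance win described above.
    print(os.stat(path).st_size)

    # The recursive byte count remains available explicitly via the vxattr.
    print(int(os.getxattr(path, "ceph.dir.rbytes")))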
- 11:11 AM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- _test_create_cluster() in test_nfs required stderr to be inspected; therefore I had created a new helper _nfs_complet...
- 10:36 AM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- The log is full of lines complaining that it could not find the nfs cluster daemon...
- 07:33 AM Backport #61994 (Rejected): pacific: mds/MDSRank: op_tracker of mds have slow op alway.
- 07:33 AM Backport #61993 (In Progress): reef: mds/MDSRank: op_tracker of mds have slow op alway.
- https://github.com/ceph/ceph/pull/53357
- 07:33 AM Backport #61992 (In Progress): quincy: mds/MDSRank: op_tracker of mds have slow op alway.
- https://github.com/ceph/ceph/pull/53358
- 07:31 AM Bug #61749 (Pending Backport): mds/MDSRank: op_tracker of mds have slow op alway.
- 05:52 AM Backport #61991 (Resolved): quincy: snap-schedule: allow retention spec to specify max number of ...
- https://github.com/ceph/ceph/pull/52749
- 05:52 AM Backport #61990 (Resolved): reef: snap-schedule: allow retention spec to specify max number of sn...
- https://github.com/ceph/ceph/pull/52748
- 05:51 AM Backport #61989 (Resolved): pacific: snap-schedule: allow retention spec to specify max number of...
- https://github.com/ceph/ceph/pull/52750
- 05:51 AM Backport #61988 (Resolved): quincy: mds: session ls command appears twice in command listing
- https://github.com/ceph/ceph/pull/52516
- 05:51 AM Backport #61987 (Resolved): reef: mds: session ls command appears twice in command listing
- https://github.com/ceph/ceph/pull/52515
- 05:51 AM Backport #61986 (Rejected): pacific: mds: session ls command appears twice in command listing
- https://github.com/ceph/ceph/pull/52517
- 05:51 AM Backport #61985 (Resolved): quincy: mds: cap revoke and cap update's seqs mismatched
- https://github.com/ceph/ceph/pull/52508
- 05:51 AM Backport #61984 (Resolved): reef: mds: cap revoke and cap update's seqs mismatched
- https://github.com/ceph/ceph/pull/52507
- 05:51 AM Backport #61983 (Resolved): pacific: mds: cap revoke and cap update's seqs mismatched
- https://github.com/ceph/ceph/pull/52506
- 05:49 AM Bug #61444 (Pending Backport): mds: session ls command appears twice in command listing
- 05:48 AM Bug #59582 (Pending Backport): snap-schedule: allow retention spec to specify max number of snaps...
- 05:43 AM Bug #61782 (Pending Backport): mds: cap revoke and cap update's seqs mismatched
- 05:01 AM Bug #61982 (New): Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_v...
- /a/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/7326482...
- 03:02 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Greg Farnum wrote:
> Oh, I guess the daemons are created via the qa/suites/fs/mirror-ha/cephfs-mirror/three-per-clus...
- 02:40 AM Bug #61978 (In Progress): cephfs-mirror: support fan out setups
- Currently, adding multiple file system peers in a fan out fashion which looks something like: fs-local(site-a) -> fs-...
07/12/2023
- 08:41 PM Bug #61399 (In Progress): qa: build failure for ior
- 03:59 PM Bug #61950: mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_scrub_large_omap_o...
- Venky Shankar wrote:
> Patrick - I can take this one if you haven't started on it yet.
https://github.com/ceph/ce...
- 02:51 PM Bug #61950: mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_scrub_large_omap_o...
- Venky Shankar wrote:
> Patrick - I can take this one if you haven't started on it yet.
I have started on it. Shou...
- 02:38 PM Bug #61950: mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_scrub_large_omap_o...
- Patrick - I can take this one if you haven't started on it yet.
- 02:25 PM Bug #61950 (In Progress): mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_scru...
- 12:59 PM Bug #61950: mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_scrub_large_omap_o...
- Ok, so an off-by-one error - should be relatively easy to figure...
- 02:48 PM Bug #61947: mds: enforce a limit on the size of a session in the sessionmap
- This one's interesting. I did mention in the standup yesterday that I've seen this earlier and that cluster too had N...
- 02:35 PM Backport #61187 (Resolved): reef: qa: ignore cluster warning encountered in test_refuse_client_se...
- 02:35 PM Backport #61165: reef: [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ino 0x10...
- Xiubo, please backport the changes.
- 02:33 PM Backport #59723 (Resolved): reef: qa: run scrub post disaster recovery procedure
- 01:35 PM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Oh, I guess the daemons are created via the qa/suites/fs/mirror-ha/cephfs-mirror/three-per-cluster.yaml fragment. Loo...
- 01:23 PM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- I talked about this with Jos today and see that when the cephfs_mirror_thrash.py joins the background thread, the do_...
- 07:37 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- The thread (7f677beaa700) was blocked on a file system call to build snap mapping (local vs remote)...
- 05:45 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Out of the 3 replayer threads, only two exited when the mirror daemon was shutting down:...
- 05:37 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Jos Collin wrote:
> @Venky,
>
> As discussed attaching job [1] and the mirror daemon log, which I've been referri...
- 12:40 PM Bug #61967 (Duplicate): mds: "SimpleLock.h: 417: FAILED ceph_assert(state == LOCK_XLOCK || state ...
- 01:21 AM Bug #61967 (Duplicate): mds: "SimpleLock.h: 417: FAILED ceph_assert(state == LOCK_XLOCK || state ...
- ...
- 12:24 PM Feature #61863: mds: issue a health warning with estimated time to complete replay
- Patrick, the "MDS behind trimming" warning during up:replay is kind of expected in cases where there are a lot of jou...
- 12:23 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- Laura Flores wrote:
> See https://trello.com/c/qQnRTrLO/1792-wip-yuri8-testing-2023-06-22-1309-pacific-old-wip-yuri8...
- 12:14 PM Documentation #61902: Recommend pinning _deleting directory to another rank for certain use-cases
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Also, I think there is a catch to this feature. Commit aae7a70ed...
- 11:37 AM Bug #61972 (Duplicate): cephfs/tools: cephfs-data-scan "cleanup" operation is not parallelised
- Duplicate of https://tracker.ceph.com/issues/61357
- 11:07 AM Bug #61972 (Duplicate): cephfs/tools: cephfs-data-scan "cleanup" operation is not parallelised
- https://docs.ceph.com/en/latest/cephfs/disaster-recovery-experts/#recovery-from-missing-metadata-objects
scan_exte...
- 04:53 AM Bug #60629: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Which means, the [start, len] in `inos_to_free` and/or `inos_to_purge` are not present in prealloc_inos for the clien...
- 04:50 AM Bug #60629: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Dhairya, the interval set operation that's asserting is possibly here:...
- 04:52 AM Bug #61907 (Resolved): api tests fail from "MDS_CLIENTS_LAGGY" warning
- 04:38 AM Bug #61186 (Fix Under Review): mgr/nfs: hitting incomplete command returns same suggestion twice
07/11/2023
- 07:44 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- See https://trello.com/c/qQnRTrLO/1792-wip-yuri8-testing-2023-06-22-1309-pacific-old-wip-yuri8-testing-2023-06-22-100...
- 07:16 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- Laura Flores wrote:
> Dhairya Parmar wrote:
> > @laura this isn't seen in quincy or reef, is it?
>
> Right. But ...
- 02:25 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- Dhairya Parmar wrote:
> @laura this isn't seen in quincy or reef, is it?
Right. But since it occurs in pacific, i...
- 11:12 AM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- @laura this isn't seen in quincy or reef, is it?
- 04:51 PM Backport #61899 (Resolved): reef: pybind/cephfs: holds GIL during rmdir
- 02:34 PM Backport #61899: reef: pybind/cephfs: holds GIL during rmdir
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52321
merged
- 04:31 PM Bug #61907 (Fix Under Review): api tests fail from "MDS_CLIENTS_LAGGY" warning
- 04:07 PM Backport #61959 (In Progress): reef: mon: block osd pool mksnap for fs pools
- 04:01 PM Backport #61959 (Resolved): reef: mon: block osd pool mksnap for fs pools
- https://github.com/ceph/ceph/pull/52399
- 04:06 PM Backport #61960 (In Progress): quincy: mon: block osd pool mksnap for fs pools
- 04:01 PM Backport #61960 (Resolved): quincy: mon: block osd pool mksnap for fs pools
- https://github.com/ceph/ceph/pull/52398
- 04:05 PM Backport #61961 (In Progress): pacific: mon: block osd pool mksnap for fs pools
- 04:01 PM Backport #61961 (Resolved): pacific: mon: block osd pool mksnap for fs pools
- https://github.com/ceph/ceph/pull/52397
- 04:00 PM Bug #59552 (Pending Backport): mon: block osd pool mksnap for fs pools
- 03:57 PM Bug #59552 (Fix Under Review): mon: block osd pool mksnap for fs pools
- 03:32 PM Bug #61958 (New): mds: add debug logs for handling setxattr for ceph.dir.subvolume
- * add debug logs for EINVAL return case
* add subvolume status during inode dump
- 02:27 PM Feature #61903: pybind/mgr/volumes: add config to turn off subvolume deletion
- Venky Shankar wrote:
> Patrick Donnelly wrote:
> > Sometimes we want to be able to turn off asynchronous subvolume ...
- 10:27 AM Feature #61903: pybind/mgr/volumes: add config to turn off subvolume deletion
- Patrick Donnelly wrote:
> Sometimes we want to be able to turn off asynchronous subvolume deletion during cluster re...
- 02:23 PM Documentation #61902: Recommend pinning _deleting directory to another rank for certain use-cases
- Venky Shankar wrote:
> Also, I think there is a catch to this feature. Commit aae7a70ed2cf9c32684cfdaf701778a05f229e...
- 02:21 PM Documentation #61902: Recommend pinning _deleting directory to another rank for certain use-cases
- Venky Shankar wrote:
> I didn't know that the balancer would re-export to rank-0 (from rank-N) if a directory become...
- 10:59 AM Documentation #61902: Recommend pinning _deleting directory to another rank for certain use-cases
- Also, I think there is a catch to this feature. Commit aae7a70ed2cf9c32684cfdaf701778a05f229e09 introduces per subvol...
- 10:30 AM Documentation #61902: Recommend pinning _deleting directory to another rank for certain use-cases
- Patrick Donnelly wrote:
> The _deleting directory can often get sudden large volumes to recursively unlink. Rank 0 i...
- 01:38 PM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Rishabh Dave wrote:
> rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smith...
- 12:42 PM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/7328210/
- 11:07 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Jos Collin wrote:
> @Venky,
>
> As discussed attaching job [1] and the mirror daemon log, which I've been referri...
- 10:14 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- @Venky,
As discussed attaching job [1] and the mirror daemon log, which I've been referring to.
[1] http://pulp...
- 01:30 PM Bug #61957 (Duplicate): test_client_limits.TestClientLimits.test_client_release_bug fails
- ...
- 07:16 AM Feature #61778: mgr/mds_partitioner: add MDS partitioner module in MGR
- Venky Shankar wrote:
> Another suggestion/feedback - Should the module also persist (say) the last 10 partitioning s...
- 07:12 AM Feature #61778: mgr/mds_partitioner: add MDS partitioner module in MGR
- Hi Venky,
Venky Shankar wrote:
> Hi Yongseok,
>
> Yongseok Oh wrote:
> > This idea is based on our presentati...
- 04:15 AM Feature #61778: mgr/mds_partitioner: add MDS partitioner module in MGR
- Another suggestion/feedback - Should the module also persist (say) the last 10 partitioning strategies? I presume whe...
07/10/2023
- 09:22 PM Bug #61950 (Need More Info): mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_s...
- The changes implemented in [1] should make sure that we never have an openfiletable object's omap keys above osd_deep_...
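As a rough, hedged illustration of the mismatch described above (the numbers and names below are assumptions for the sketch, not the shipped defaults; the real logic lives in src/mds/OpenFileTable.cc):
    # Assumed values for illustration only.
    large_omap_key_threshold = 200_000  # stand-in for osd_deep_scrub_large_omap_object_key_threshold
    max_items_per_obj = 200_000         # stand-in for the MDS cap on openfiletable items per object

    def triggers_large_omap_warning(num_omap_keys: int) -> bool:
        # Deep scrub flags an object once its omap key count exceeds the threshold.
        return num_omap_keys > large_omap_key_threshold

    # Items plus a few bookkeeping keys per object can land just over the
    # threshold unless the cap is kept strictly below it.
    print(triggers_large_omap_warning(max_items_per_obj + 1))  # True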
- 07:13 PM Bug #61947 (Pending Backport): mds: enforce a limit on the size of a session in the sessionmap
- If the session's "completed_requests" vector gets too large, the session can get to a size where the MDS goes read-on...
- 02:39 PM Feature #61778: mgr/mds_partitioner: add MDS partitioner module in MGR
- Hi Yongseok,
Yongseok Oh wrote:
> This idea is based on our presentation in Cephalocon2023. (Please refer to the ...
- 02:08 PM Feature #61778: mgr/mds_partitioner: add MDS partitioner module in MGR
- Venky Shankar wrote:
> Thanks for the feature proposal. CephFS team will go through the proposal asap.
I'm going ...
- 02:18 PM Bug #61909: mds/fsmap: fs fail cause to mon crash
- Patrick Donnelly wrote:
> yite gu wrote:
> > yite gu wrote:
> > > Patrick Donnelly wrote:
> > > > yite gu wrote:
...
- 12:56 PM Bug #61909: mds/fsmap: fs fail cause to mon crash
- yite gu wrote:
> yite gu wrote:
> > Patrick Donnelly wrote:
> > > yite gu wrote:
> > > > any way to recover this ...
- 01:47 PM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- Jos Collin wrote:
> In quincy branch, this is consistently reproducible:
>
> http://pulpito.front.sepia.ceph.com/...
- 11:16 AM Bug #61182 (In Progress): qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after ...
- In quincy branch, this is consistently reproducible:
http://pulpito.front.sepia.ceph.com/jcollin-2023-07-10_04:22:...
- 01:45 PM Bug #61924: tar: file changed as we read it (unless cephfs mounted with norbytes)
- Hi Harry,
Harry Coin wrote:
> Ceph: Pacific. When using tar heavily (such as compiling a linux kernel into distr...
- 01:25 PM Bug #60629: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- Dhairya Parmar wrote:
> I'm trying to think out loud and this is just a hypothesis:
>
> Server::_session_logged()...
- 12:51 PM Bug #61945 (Triaged): LibCephFS.DelegTimeout failure
- 12:19 PM Bug #61945 (Triaged): LibCephFS.DelegTimeout failure
- /a/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/7326413...
- 12:15 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- Laura Flores wrote:
> Occurs quite a bit. Perhaps from a recent regression?
>
> See http://pulpito.front.sepia.ce...
- 07:04 AM Bug #60625 (Fix Under Review): crash: MDSRank::send_message_client(boost::intrusive_ptr<Message> ...
- 02:41 AM Cleanup #51383 (Fix Under Review): mgr/volumes/fs/exception.py: fix various flake8 issues
- 02:41 AM Cleanup #51401 (Fix Under Review): mgr/volumes/fs/operations/versions/metadata_manager.py: fix va...
- 02:41 AM Cleanup #51404 (Fix Under Review): mgr/volumes/fs/operations/versions/subvolume_v1.py: fix variou...
- 02:40 AM Cleanup #51405 (Fix Under Review): mgr/volumes/fs/operations/versions/subvolume_v2.py: fix variou...
- 02:39 AM Cleanup #51386 (Fix Under Review): mgr/volumes/fs/volume.py: fix various flake8 issues
- 02:38 AM Cleanup #51388 (Fix Under Review): mgr/volumes/fs/operations/index.py: add extra blank line
- 02:38 AM Cleanup #51389 (Fix Under Review): mgr/volumes/fs/operations/rankevicter.py: fix various flake8 i...
- 02:38 AM Cleanup #51394 (Fix Under Review): mgr/volumes/fs/operations/pin_util.py: fix various flake8 issues
- 02:37 AM Cleanup #51395 (Fix Under Review): mgr/volumes/fs/operations/lock.py: fix various flake8 issues
- 02:37 AM Cleanup #51397 (Fix Under Review): mgr/volumes/fs/operations/volume.py: fix various flake8 issues
- 02:37 AM Cleanup #51399 (Fix Under Review): mgr/volumes/fs/operations/template.py: fix various flake8 issues
- 02:08 AM Fix #52068 (Resolved): qa: add testing for "ms_mode" mount option
- 02:08 AM Backport #52440 (Resolved): pacific: qa: add testing for "ms_mode" mount option
- 02:07 AM Backport #59707 (Resolved): quincy: Mds crash and fails with assert on prepare_new_inode
07/09/2023
- 01:44 PM Feature #10679: Add support for the chattr +i command (immutable file)
- I'm claiming this ticket.
- 01:07 PM Documentation #61865 (Resolved): add doc on how to expedite MDS recovery with a lot of log segments
07/08/2023
07/07/2023
- 07:31 PM Bug #61897: qa: rados:mgr fails with MDS_CLIENTS_LAGGY
- This one seems to be the same as https://tracker.ceph.com/issues/61907, but I'm putting it here since it came from te...
- 05:42 PM Bug #60629: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- I'm trying to think out loud and this is just a hypothesis:
Server::_session_logged() has this part of code where ...
- 04:49 PM Bug #48673: High memory usage on standby replay MDS
- I believe that I have observed this issue while trying to reproduce a different mds problem. It manifests by the sta...
- 02:41 PM Bug #61909: mds/fsmap: fs fail cause to mon crash
- yite gu wrote:
> Patrick Donnelly wrote:
> > yite gu wrote:
> > > any way to recover this bug?
> >
> > I would ...
- 01:42 PM Bug #61909: mds/fsmap: fs fail cause to mon crash
- Patrick Donnelly wrote:
> yite gu wrote:
> > any way to recover this bug?
>
> I would reset the MDSMap:
>
> h...
- 12:32 PM Bug #61909: mds/fsmap: fs fail cause to mon crash
- yite gu wrote:
> any way to recover this bug?
I would reset the MDSMap:
https://docs.ceph.com/en/latest/cephfs...
- 10:33 AM Bug #61909: mds/fsmap: fs fail cause to mon crash
- any way to recover this bug?
- 09:14 AM Bug #61909: mds/fsmap: fs fail cause to mon crash
- Patrick Donnelly wrote:
> yite gu wrote:
> > multi active should be as blow:
> > max_mds 2
> > up {0=11123,...
- 11:33 AM Bug #61182 (Can't reproduce): qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon af...
- This doesn't happen consistently in main and in reef branch. I ran the fs:mirror-ha test suite multiple times on the ...
07/06/2023
- 09:23 PM Bug #61924 (New): tar: file changed as we read it (unless cephfs mounted with norbytes)
- Ceph: Pacific. When using tar heavily (such as compiling a linux kernel into distro-specific packages), when the com...
- 05:54 PM Documentation #61865 (Fix Under Review): add doc on how to expedite MDS recovery with a lot of lo...
- 01:30 AM Documentation #61865 (In Progress): add doc on how to expedite MDS recovery with a lot of log seg...
- 05:52 PM Bug #61907: api tests fail from "MDS_CLIENTS_LAGGY" warning
- Laura Flores wrote:
> Would it be better then to somehow let the MDS know about intentional restarts / recovery situ...
- 04:50 PM Bug #61907: api tests fail from "MDS_CLIENTS_LAGGY" warning
- Patrick Donnelly wrote:
> Ignoring this may not be what we want. The MDS would normally evict these clients but does...
- 04:18 PM Bug #61907: api tests fail from "MDS_CLIENTS_LAGGY" warning
- Would it be better then to somehow let the MDS know about intentional restarts / recovery situations?
- 03:35 PM Bug #61907: api tests fail from "MDS_CLIENTS_LAGGY" warning
- Ignoring this may not be what we want. The MDS would normally evict these clients but does not because the OSDs are "...
- 03:23 PM Bug #61907: api tests fail from "MDS_CLIENTS_LAGGY" warning
- Testing that out here: https://github.com/ceph/ceph/pull/52342
Marked as a draft since I want to check if the api ...
- 03:17 PM Bug #61907: api tests fail from "MDS_CLIENTS_LAGGY" warning
- Pretty sure that this yaml file is what whitelists messages in the api tests: https://github.com/ceph/ceph/blob/main/...
- 02:59 PM Bug #61907: api tests fail from "MDS_CLIENTS_LAGGY" warning
- I think it makes sense to whitelist this warning in the testing environment since we expect laggy OSDs in some situat...
- 10:08 AM Bug #61907: api tests fail from "MDS_CLIENTS_LAGGY" warning
- Seems like some OSDs get laggy in the testing environment; in this case either we can globally ignore the line (like addin...
- 03:32 PM Bug #61909: mds/fsmap: fs fail cause to mon crash
- yite gu wrote:
> multi active should be as below:
> max_mds 2
> up {0=11123,1=11087}
> [mds.vees-root-cephfs...
- 02:55 PM Bug #61909: mds/fsmap: fs fail cause to mon crash
- multi active should be as below:
max_mds 2
up {0=11123,1=11087}
[mds.vees-root-cephfs-c{0:11123} state up:act...
- 02:26 PM Bug #61909: mds/fsmap: fs fail cause to mon crash
- ...
- 12:13 PM Bug #61909: mds/fsmap: fs fail cause to mon crash
- Dhairya Parmar wrote:
> yite gu wrote:
> > Dhairya Parmar wrote:
> > > I see the scrub status is IDLE:
> > > [......
- 12:01 PM Bug #61909: mds/fsmap: fs fail cause to mon crash
- yite gu wrote:
> Dhairya Parmar wrote:
> > I see the scrub status is IDLE:
> > [...]
> >
> > Since how long has...
- 06:56 AM Bug #61909: mds/fsmap: fs fail cause to mon crash
- The active mds is vees-root-cephfs-a (gid 20804286), but the gid of vees-root-cephfs-b was used,
- 06:45 AM Bug #61909: mds/fsmap: fs fail cause to mon crash
- Dhairya Parmar wrote:
> I see the scrub status is IDLE:
> [...]
>
> Since how long has this been like this? This...
- 06:02 AM Bug #61909: mds/fsmap: fs fail cause to mon crash
- I see the scrub status is IDLE:...
- 03:04 AM Bug #61909: mds/fsmap: fs fail cause to mon crash
- Supplement key logs:...
- 02:52 AM Bug #61909 (Can't reproduce): mds/fsmap: fs fail cause to mon crash
- ceph health ok before run `ceph fs fail <fs_name>`...
- 09:06 AM Bug #61914 (Fix Under Review): client: improve the libcephfs when MDS is stopping
- 08:32 AM Bug #61914 (Fix Under Review): client: improve the libcephfs when MDS is stopping
- When an MDS is stopping, the client could receive the corresponding mdsmap and usually this MDS will take some ti...
- 08:03 AM Bug #56698 (In Progress): client: FAILED ceph_assert(_size == 0)
- 07:47 AM Bug #56698: client: FAILED ceph_assert(_size == 0)
- I have gone through all the *xlist* in the *MetaSession*:...
- 07:51 AM Bug #56003: client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
- Venky,
This should be the same as https://tracker.ceph.com/issues/56698.
- 07:50 AM Bug #61913 (Fix Under Review): client: crash the client more gracefully
- 07:41 AM Bug #61913 (Closed): client: crash the client more gracefully
- Instead of crashing the client in *xlist<T>::~xlist()* it will be easier to understand exactly which *xlist* triggers...
- 01:15 AM Feature #61908 (Fix Under Review): mds: provide configuration for trim rate of the journal
- Sometimes the journal trimming is not fast enough. Provide configurations to tune it without requiring changing the m...
07/05/2023
- 10:24 PM Bug #61907: api tests fail from "MDS_CLIENTS_LAGGY" warning
- Hey @Dhairya Parmar can you take a look?
- 10:24 PM Bug #61907: api tests fail from "MDS_CLIENTS_LAGGY" warning
- Full logs are available for all api jobs, e.g. https://jenkins.ceph.com/job/ceph-api/57735/artifact/build/out/.
- 10:19 PM Bug #61907 (Resolved): api tests fail from "MDS_CLIENTS_LAGGY" warning
- Main api tests are failing. Upon investigation, I noticed that they are not all failing on the same api test each tim...
- 09:10 PM Feature #61905 (New): pybind/mgr/volumes: add more introspection for recursive unlink threads
- Similar to #61904, add a command to get more information about the status of the module's unlink threads. In particul...
- 08:53 PM Feature #61904 (New): pybind/mgr/volumes: add more introspection for clones
- `ceph fs clone status` should include information like how many files/directories have been copied so it can be regul...
- 08:34 PM Feature #61903 (New): pybind/mgr/volumes: add config to turn off subvolume deletion
- Sometimes we want to be able to turn off asynchronous subvolume deletion during cluster recovery scenarios. Add a con...
- 08:08 PM Documentation #61902 (New): Recommend pinning _deleting directory to another rank for certain use...
- The _deleting directory can often get sudden large volumes to recursively unlink. Rank 0 is not an ideal default targ...
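A minimal sketch of what such a pin could look like from a client mount (the mount point and _deleting location below are assumptions; ceph.dir.pin is the standard CephFS export-pin vxattr, and this is illustrative rather than guidance from the ticket):
    import os

    MOUNT = "/mnt/cephfs"                                   # hypothetical mount point
    DELETING = os.path.join(MOUNT, "volumes", "_deleting")  # assumed location of the _deleting directory

    # Pin the _deleting tree to rank 1 so the recursive unlink load does not
    # land on rank 0; setting the value to "-1" would clear the pin again.
    os.setxattr(DELETING, "ceph.dir.pin", b"1")

    print(os.getxattr(DELETING, "ceph.dir.pin"))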
- 06:04 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- Occurs quite a bit. Perhaps from a recent regression?
See http://pulpito.front.sepia.ceph.com/lflores-2023-07-05_1...
- 05:01 PM Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- /a/yuriw-2023-06-23_20:51:14-rados-wip-yuri8-testing-2023-06-22-1309-pacific-distro-default-smithi/7314160
- 05:37 PM Backport #61900 (In Progress): pacific: pybind/cephfs: holds GIL during rmdir
- 05:20 PM Backport #61900 (Resolved): pacific: pybind/cephfs: holds GIL during rmdir
- https://github.com/ceph/ceph/pull/52323
- 05:33 PM Backport #61898 (In Progress): quincy: pybind/cephfs: holds GIL during rmdir
- 05:19 PM Backport #61898 (Resolved): quincy: pybind/cephfs: holds GIL during rmdir
- https://github.com/ceph/ceph/pull/52322
- 05:31 PM Backport #61899 (In Progress): reef: pybind/cephfs: holds GIL during rmdir
- 05:20 PM Backport #61899 (Resolved): reef: pybind/cephfs: holds GIL during rmdir
- https://github.com/ceph/ceph/pull/52321
- 05:10 PM Bug #61869 (Pending Backport): pybind/cephfs: holds GIL during rmdir
- 05:03 PM Bug #61897 (Duplicate): qa: rados:mgr fails with MDS_CLIENTS_LAGGY
- https://pulpito.ceph.com/pdonnell-2023-07-05_12:59:11-rados:mgr-wip-pdonnell-testing-20230705.003205-distro-default-s...
- 09:05 AM Bug #56698: client: FAILED ceph_assert(_size == 0)
- Xiubo, I spent half a day working on a failure caused by this issue. I've not spent any time with this ticket recently an...
- 08:41 AM Bug #56698: client: FAILED ceph_assert(_size == 0)
- https://pulpito.ceph.com/rishabh-2023-06-19_18:26:08-fs-wip-rishabh-2023June18-testing-default-smithi/7307845/
- 06:42 AM Backport #59726 (Resolved): quincy: mds: allow entries to be removed from lost+found directory
07/04/2023
- 10:44 AM Feature #58072 (Fix Under Review): enable 'ceph fs new' use 'ceph fs set' options
- 10:43 AM Backport #61158 (Resolved): reef: client: fix dump mds twice
- 10:43 AM Backport #59620 (Resolved): quincy: client: fix dump mds twice
- 01:51 AM Backport #61798 (In Progress): pacific: client: only wait for write MDS OPs when unmounting
- 01:48 AM Backport #61796 (In Progress): quincy: client: only wait for write MDS OPs when unmounting
- 01:46 AM Backport #61797 (In Progress): reef: client: only wait for write MDS OPs when unmounting
- 01:06 AM Feature #61863 (In Progress): mds: issue a health warning with estimated time to complete replay
- 12:43 AM Feature #55554 (In Progress): cephfs-shell: 'rm' cmd needs -r and -f options
- 12:42 AM Documentation #43033 (In Progress): doc: directory fragmentation section on config options
07/03/2023
- 01:27 PM Bug #61831: qa: test_mirroring_init_failure_with_recovery failure
- I think the mds is down as part of cleanup, but the mirror status is failed. Need to debug this further....
- 01:02 PM Bug #61831: qa: test_mirroring_init_failure_with_recovery failure
- Looks like mds were down...
- 01:01 PM Bug #61831: qa: test_mirroring_init_failure_with_recovery failure
- Kotresh said that he saw no active MDSs. Please RCA, Kotresh.
- 01:19 PM Feature #61778: mgr/mds_partitioner: add MDS partitioner module in MGR
- Thanks for the feature proposal. The CephFS team will go through the proposal as soon as possible.
- 07:39 AM Bug #61879 (Fix Under Review): mds: linkmerge assert check is incorrect in rename codepath
- 07:38 AM Bug #61879 (Resolved): mds: linkmerge assert check is incorrect in rename codepath
- Let's say there is a hardlink created as below....
07/02/2023
- 04:06 PM Bug #61869 (Fix Under Review): pybind/cephfs: holds GIL during rmdir
- 04:02 PM Bug #61869 (Resolved): pybind/cephfs: holds GIL during rmdir
- https://github.com/ceph/ceph/blob/c42efbf5874de8454e4c7cb3c22bd41bcc0e71f5/src/pybind/cephfs/cephfs.pyx#L1356
- 05:40 AM Bug #61867 (Fix Under Review): mgr/volumes: async threads should periodically check for work
- Right now:...
07/01/2023
- 11:59 PM Feature #61866 (In Progress): MDSMonitor: require --yes-i-really-mean-it when failing an MDS with...
- If an MDS is already having issues with getting behind on trimming its journal or an oversized cache, restarting it m...
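A sketch of the intended workflow; `ceph mds fail` and the health codes exist today, while gating it behind `--yes-i-really-mean-it` is the behavior proposed here, not current CLI:
```
ceph health detail                                    # e.g. MDS_TRIM or MDS_CACHE_OVERSIZED present
ceph mds fail <role_or_name>                          # proposal: refuse and warn in this situation
ceph mds fail <role_or_name> --yes-i-really-mean-it   # proposed explicit override
```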
06/30/2023
- 11:42 PM Documentation #61865 (Resolved): add doc on how to expedite MDS recovery with a lot of log segments
- notes from my head:
* reduce debugging to 0
* deny_all_reconnect
* mon mds_beacon_grace 3600
* tick interval 1
...
- 05:02 PM Bug #61864 (Fix Under Review): mds: replay thread does not update some essential perf counters
- 04:42 PM Bug #61864 (Resolved): mds: replay thread does not update some essential perf counters
- including wrpos, num events, and expire pos
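For reference, these counters live in the mds_log section of the admin-socket perf dump (wrpos, rdpos, expos, ev are existing counter names); the bug is that replay leaves them stale, so a query like the one below shows no progress during up:replay:
```
# Dump the journal counters on the replaying MDS; normally rdpos advancing toward
# wrpos gives a rough sense of progress and expos is the expire position.
ceph daemon mds.<id> perf dump mds_log
```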
- 04:31 PM Feature #61863 (Fix Under Review): mds: issue a health warning with estimated time to complete re...
- When the MDS is in up:replay, it does not give any indication to the operator when it will complete. We do have this ...
- 11:20 AM Backport #61840 (In Progress): quincy: mds: do not evict clients if OSDs are laggy
- https://github.com/ceph/ceph/pull/52271
- 11:07 AM Backport #61841 (In Progress): pacific: mds: do not evict clients if OSDs are laggy
- https://github.com/ceph/ceph/pull/52270
- 10:26 AM Backport #61842 (In Progress): reef: mds: do not evict clients if OSDs are laggy
- 10:25 AM Backport #61842: reef: mds: do not evict clients if OSDs are laggy
- https://github.com/ceph/ceph/pull/52268
- 10:12 AM Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- "Similar crash report in ceph-users mailing list":https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/A3J...
- 10:01 AM Feature #58072 (In Progress): enable 'ceph fs new' use 'ceph fs set' options
- 03:35 AM Bug #61749: mds/MDSRank: op_tracker of mds have slow op alway.
- yite gu wrote:
> https://github.com/ceph/ceph/pull/52258
Commented in error, please ignore.
- 03:33 AM Bug #61749: mds/MDSRank: op_tracker of mds have slow op alway.
- https://github.com/ceph/ceph/pull/52258
06/28/2023
- 05:28 PM Backport #59366 (Resolved): reef: qa: test_rebuild_simple checks status on wrong file system
- 04:30 PM Backport #61800 (New): reef: mon/MDSMonitor: plug PAXOS when evicting an MDS
- 04:29 PM Backport #61800 (In Progress): reef: mon/MDSMonitor: plug PAXOS when evicting an MDS
- 04:28 PM Bug #59183 (Resolved): cephfs-data-scan: does not scan_links for lost+found
- 04:27 PM Bug #58482 (Resolved): mds: catch damage to CDentry's first member before persisting
- 04:27 PM Bug #57677 (Resolved): qa: "1 MDSs behind on trimming (MDS_TRIM)"
- 04:27 PM Backport #57713 (Resolved): quincy: qa: "1 MDSs behind on trimming (MDS_TRIM)"
- 04:26 PM Bug #57657 (Resolved): mds: scrub locates mismatch between child accounted_rstats and self rstats
- 04:26 PM Bug #57598 (Resolved): qa: test_recovery_pool uses wrong recovery procedure
- 04:26 PM Bug #57597 (Resolved): qa: data-scan/journal-tool do not output debugging in upstream testing
- 04:26 PM Backport #57720 (Resolved): quincy: qa: data-scan/journal-tool do not output debugging in upstrea...
- 04:26 PM Backport #57721 (Resolved): pacific: qa: data-scan/journal-tool do not output debugging in upstre...
- 04:25 PM Bug #57586 (Resolved): first-damage.sh does not handle dentries with spaces
- 04:25 PM Bug #57249 (Resolved): mds: damage table only stores one dentry per dirfrag
- 04:25 PM Feature #57091 (Resolved): mds: modify scrub to catch dentry corruption
- 04:24 PM Backport #59303 (In Progress): quincy: cephfs: tooling to identify inode (metadata) corruption
- 04:23 PM Feature #55470 (Resolved): qa: postgresql test suite workunit
- 04:23 PM Backport #57745 (Rejected): quincy: qa: postgresql test suite workunit
- Will just keep this in reef.
- 04:22 PM Bug #52677 (Resolved): qa: test_simple failure
- 04:22 PM Bug #23724 (Resolved): qa: broad snapshot functionality testing across clients
- 03:45 PM Backport #61426 (In Progress): pacific: mon/MDSMonitor: daemon booting may get failed if mon hand...
- 03:44 PM Backport #61425 (In Progress): quincy: mon/MDSMonitor: daemon booting may get failed if mon handl...
- 03:43 PM Backport #61424 (In Progress): reef: mon/MDSMonitor: daemon booting may get failed if mon handles...
- 03:43 PM Backport #59560 (Resolved): pacific: qa: RuntimeError: more than one file system available
- 03:42 PM Backport #59558 (In Progress): quincy: qa: RuntimeError: more than one file system available
- 03:41 PM Backport #61414 (In Progress): pacific: mon/MDSMonitor: do not trigger propose on error from prep...
- 03:40 PM Backport #61415 (In Progress): quincy: mon/MDSMonitor: do not trigger propose on error from prepa...
- 03:39 PM Backport #61413 (In Progress): reef: mon/MDSMonitor: do not trigger propose on error from prepare...
- 03:38 PM Backport #59372 (In Progress): pacific: qa: test_join_fs_unset failure
- 03:38 PM Backport #59227 (Resolved): reef: cephfs-data-scan: does not scan_links for lost+found
- 03:38 PM Backport #59228 (Resolved): quincy: cephfs-data-scan: does not scan_links for lost+found
- 03:38 PM Backport #59371 (In Progress): quincy: qa: test_join_fs_unset failure
- 03:37 PM Backport #59373 (In Progress): reef: qa: test_join_fs_unset failure
- 03:35 PM Backport #59221 (Resolved): quincy: mds: catch damage to CDentry's first member before persisting
- 03:35 PM Backport #61412 (In Progress): quincy: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
- 03:34 PM Backport #57715 (Resolved): quincy: mds: scrub locates mismatch between child accounted_rstats an...
- 03:34 PM Backport #57744 (Resolved): quincy: qa: test_recovery_pool uses wrong recovery procedure
- 03:34 PM Backport #61411 (In Progress): pacific: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
- 03:33 PM Backport #57671 (Resolved): pacific: mds: damage table only stores one dentry per dirfrag
- 03:33 PM Backport #59225 (Resolved): quincy: mds: modify scrub to catch dentry corruption
- 03:33 PM Backport #61410 (In Progress): reef: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
- 03:32 PM Backport #61759 (In Progress): reef: tools/cephfs/first-damage: unicode decode errors break itera...
- 03:30 PM Bug #52995 (Resolved): qa: test_standby_count_wanted failure
- 03:30 PM Backport #52854 (Resolved): pacific: qa: test_simple failure
- 03:29 PM Feature #51333 (Resolved): qa: use cephadm to provision cephfs for fs:workloads
- 03:29 PM Backport #61692 (In Progress): pacific: mon failed to return metadata for mds
- 03:28 PM Backport #61693 (In Progress): reef: mon failed to return metadata for mds
- 03:27 PM Backport #61691 (In Progress): quincy: mon failed to return metadata for mds
- 03:26 PM Bug #57248 (Resolved): qa: mirror tests should cleanup fs during unwind
- 03:26 PM Backport #57824 (Resolved): quincy: qa: mirror tests should cleanup fs during unwind
- 03:26 PM Backport #57825 (Resolved): pacific: qa: mirror tests should cleanup fs during unwind
- 02:33 PM Bug #61201 (Fix Under Review): qa: test_rebuild_moved_file (tasks/data-scan) fails because mds cr...
- 07:15 AM Bug #61394: qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726...
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo - PTAL at https://pulpito.ceph.com/vshankar-2023-06-20_10:07:44-fs...
- 05:04 AM Backport #61842 (In Progress): reef: mds: do not evict clients if OSDs are laggy
- 05:04 AM Backport #61841 (Resolved): pacific: mds: do not evict clients if OSDs are laggy
- 05:04 AM Backport #61840 (In Progress): quincy: mds: do not evict clients if OSDs are laggy
- 04:55 AM Fix #58023 (Pending Backport): mds: do not evict clients if OSDs are laggy
- 01:24 AM Bug #58489 (Resolved): mds stuck in 'up:replay' and crashed.
- 01:23 AM Backport #59399 (Resolved): reef: cephfs: qa enables kclient for newop test
- 01:23 AM Backport #59404 (Resolved): reef: mds stuck in 'up:replay' and crashed.
- 01:23 AM Feature #58680 (Resolved): libcephfs: clear the suid/sgid for fallocate
- 01:22 AM Backport #59266 (Resolved): quincy: libcephfs: clear the suid/sgid for fallocate
- 01:22 AM Bug #58717 (Resolved): client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- 01:22 AM Backport #58993 (Resolved): quincy: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- 01:22 AM Bug #56695 (Resolved): [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- 01:22 AM Backport #59386 (Resolved): pacific: [RHEL stock] pjd test failures(a bug that need to wait the u...
- 01:21 AM Backport #59385 (Resolved): quincy: [RHEL stock] pjd test failures(a bug that need to wait the un...
- 01:21 AM Backport #59384 (Resolved): reef: [RHEL stock] pjd test failures(a bug that need to wait the unli...
- 01:21 AM Backport #59407 (Resolved): reef: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- 01:21 AM Backport #59267 (Resolved): reef: libcephfs: clear the suid/sgid for fallocate
06/27/2023
- 04:19 PM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- Hi Venky,
I'm getting back to investigating the ceph-fuse forever looping problems again. I now have a way to rep...
- 02:47 PM Backport #61234: reef: mds: a few simple operations crash mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51608
merged
- 02:46 PM Backport #59723: reef: qa: run scrub post disaster recovery procedure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51606
merged
- 02:45 PM Backport #61187: reef: qa: ignore cluster warning encountered in test_refuse_client_session_on_re...
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/51515
merged
- 02:45 PM Backport #59412: reef: libcephfs: client needs to update the mtime and change attr when snaps are...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51041
merged
- 02:44 PM Backport #59409: reef: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51040
merged
- 02:44 PM Backport #59404: reef: mds stuck in 'up:replay' and crashed.
- Xiubo Li wrote:
> https://github.com/ceph/ceph/pull/50997
merged
- 02:43 PM Backport #59399: reef: cephfs: qa enables kclient for newop test
- Xiubo Li wrote:
> https://github.com/ceph/ceph/pull/50990
merged
- 02:42 PM Backport #59407: reef: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50987
merged
- 02:42 PM Backport #59267: reef: libcephfs: clear the suid/sgid for fallocate
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50987
merged
- 02:42 PM Backport #59384: reef: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50984
merged
- 12:42 PM Bug #61831 (New): qa: test_mirroring_init_failure_with_recovery failure
- This failure was first reported here - https://tracker.ceph.com/issues/50224#note-13.
Seeing this failure again - ...
- 12:23 PM Backport #61830 (In Progress): quincy: qa: test_join_fs_vanilla is racy
- https://github.com/ceph/ceph/pull/54038
- 12:23 PM Backport #61829 (In Progress): pacific: qa: test_join_fs_vanilla is racy
- https://github.com/ceph/ceph/pull/54039
- 12:23 PM Backport #61828 (Resolved): reef: qa: test_join_fs_vanilla is racy
- https://github.com/ceph/ceph/pull/54037
- 12:22 PM Bug #61764 (Pending Backport): qa: test_join_fs_vanilla is racy
- 09:43 AM Cleanup #51397 (In Progress): mgr/volumes/fs/operations/volume.py: fix various flake8 issues
- 09:42 AM Cleanup #51395 (In Progress): mgr/volumes/fs/operations/lock.py: fix various flake8 issues
- 09:42 AM Cleanup #51394 (In Progress): mgr/volumes/fs/operations/pin_util.py: fix various flake8 issues
- 09:41 AM Cleanup #51389 (In Progress): mgr/volumes/fs/operations/rankevicter.py: fix various flake8 issues
- 09:40 AM Bug #61394: qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726...
- Venky Shankar wrote:
> Xiubo - PTAL at https://pulpito.ceph.com/vshankar-2023-06-20_10:07:44-fs-wip-vshankar-testing...
- 09:40 AM Cleanup #51388 (In Progress): mgr/volumes/fs/operations/index.py: add extra blank line
- 09:40 AM Cleanup #51386 (In Progress): mgr/volumes/fs/volume.py: fix various flake8 issues
- 07:39 AM Bug #61574: qa: build failure for mdtest project
- reef:
http://pulpito.front.sepia.ceph.com/yuriw-2023-06-23_16:20:29-fs-wip-yuri11-testing-2023-06-19-1232-reef-distr...
- 07:06 AM Bug #61574: qa: build failure for mdtest project
- reef:
http://pulpito.front.sepia.ceph.com/yuriw-2023-06-23_16:20:29-fs-wip-yuri11-testing-2023-06-19-1232-reef-distr...
- 07:38 AM Bug #61399: qa: build failure for ior
- reef:
http://pulpito.front.sepia.ceph.com/yuriw-2023-06-23_16:20:29-fs-wip-yuri11-testing-2023-06-19-1232-reef-distr...
- 06:55 AM Bug #61399: qa: build failure for ior
- reef:
http://pulpito.front.sepia.ceph.com/yuriw-2023-06-23_16:20:29-fs-wip-yuri11-testing-2023-06-19-1232-reef-distr...
- 07:33 AM Bug #59344: qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
- ...
- 07:27 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- reef:
http://pulpito.front.sepia.ceph.com/yuriw-2023-06-23_16:20:29-fs-wip-yuri11-testing-2023-06-19-1232-reef-distr...
- 07:01 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- reef:
http://pulpito.front.sepia.ceph.com/yuriw-2023-06-23_16:20:29-fs-wip-yuri11-testing-2023-06-19-1232-reef-distr...
- 07:26 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- reef:
http://pulpito.front.sepia.ceph.com/yuriw-2023-06-23_16:20:29-fs-wip-yuri11-testing-2023-06-19-1232-reef-distr...
- 07:17 AM Bug #48773: qa: scrub does not complete
- reef:
http://pulpito.front.sepia.ceph.com/yuriw-2023-06-23_16:20:29-fs-wip-yuri11-testing-2023-06-19-1232-reef-distr...
- 06:36 AM Bug #61818 (Fix Under Review): mds: deadlock between unlink and linkmerge
- 05:52 AM Bug #61818: mds: deadlock between unlink and linkmerge
- Xiubo Li wrote:
> https://pulpito.ceph.com/xiubli-2023-06-26_02:38:43-fs:functional-wip-lxb-xlock-20230619-0716-dist...
- 04:28 AM Bug #61818 (Pending Backport): mds: deadlock between unlink and linkmerge
- https://pulpito.ceph.com/xiubli-2023-06-26_02:38:43-fs:functional-wip-lxb-xlock-20230619-0716-distro-default-smithi/7...
- 06:23 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- reef:
http://pulpito.front.sepia.ceph.com/yuriw-2023-06-23_16:20:29-fs-wip-yuri11-testing-2023-06-19-1232-reef-distr...