Activity
From 03/21/2023 to 04/19/2023
04/19/2023
- 09:25 AM Backport #59481 (In Progress): reef: cephfs-top, qa: test the current python version is supported
- 06:07 AM Backport #59481 (Resolved): reef: cephfs-top, qa: test the current python version is supported
- https://github.com/ceph/ceph/pull/51142
- 09:20 AM Backport #59483 (In Progress): quincy: cephfs-top, qa: test the current python version is supported
- 06:08 AM Backport #59483 (Resolved): quincy: cephfs-top, qa: test the current python version is supported
- https://github.com/ceph/ceph/pull/51354
- 09:09 AM Backport #59482 (In Progress): pacific: cephfs-top, qa: test the current python version is supported
- 06:08 AM Backport #59482 (Resolved): pacific: cephfs-top, qa: test the current python version is supported
- https://github.com/ceph/ceph/pull/51353
- 06:41 AM Feature #45021: client: new asok commands for diagnosing cap handling issues
- Kotresh, I'm taking this one and 44279
- 06:38 AM Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
- Did we RCA this, Kotresh?
- 06:02 AM Bug #58677: cephfs-top: test the current python version is supported
- Jos, the PR id was incorrect, yes? (I just fixed it).
- 06:01 AM Bug #58677 (Pending Backport): cephfs-top: test the current python version is supported
- 05:44 AM Backport #55749 (Resolved): quincy: snap_schedule: remove subvolume(-group) interfaces
04/18/2023
- 03:04 PM Backport #58986: pacific: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50597
merged
- 02:31 PM Bug #59349 (Fix Under Review): qa: FAIL: test_subvolume_group_quota_exceeded_subvolume_removal_re...
- 09:27 AM Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > cmd scrub status dumped the following JSON:
> >
> > [...]
> >
> ...
- 09:00 AM Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
- Dhairya Parmar wrote:
> cmd scrub status dumped the following JSON:
>
> [...]
>
> while it should've something lik...
- 07:03 AM Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
- ran yuri's branch on fs suite with scrub yaml: http://pulpito.front.sepia.ceph.com/dparmar-2023-04-17_19:11:31-fs:fun...
- 08:51 AM Feature #57481: mds: enhance scrub to fragment/merge dirfrags
- Rishabh, please take this one.
- 05:17 AM Bug #59394: ACLs not fully supported.
- With the root mount point being /CephFS.
I do have several folders with specific EC and replication pools (hence P...
- 05:15 AM Bug #59394: ACLs not fully supported.
- The paths given were for illustration only. Exact paths are something closer to:...
- 04:07 AM Bug #59394: ACLs not fully supported.
- Brian,
* Should /CephFS be assumed to be the mount point on the host system at which the cephfs is mounted?
* What wa...
- 03:18 AM Bug #59342: qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
- Venky Shankar wrote:
> The lkml link says:
>
> > Sure, but I'll hold that request for a while. I updated to binut...
04/17/2023
- 02:34 PM Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
- cmd scrub status dumped the following JSON:...
- 12:45 PM Bug #58878 (Can't reproduce): mds: FAILED ceph_assert(trim_to > trimming_pos)
- This was suspected due to various metadata inconsistencies which probably surfaced due to destructive tools being run...
- 12:41 PM Bug #59413 (Triaged): cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
- 12:40 PM Bug #59463 (Triaged): mgr/nfs: Setting NFS export config using -i option is not working
- 11:57 AM Bug #59463 (Closed): mgr/nfs: Setting NFS export config using -i option is not working
- Unable to set NFS export configuration using config.conf
Steps followed...
04/14/2023
- 03:42 PM Backport #52440: pacific: qa: add testing for "ms_mode" mount option
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50712
merged
- 01:33 PM Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
- Latest run of tasks/scrub http://pulpito.front.sepia.ceph.com/dparmar-2023-04-14_12:28:32-fs:functional-wip-dparmar-M...
- 11:57 AM Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
- I found an identical job from my run last month that passed with ease
http://pulpito.front.sepia.ceph.com/dparmar-...
04/13/2023
- 07:07 PM Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
- This shouldn't have failed, as it has been tested by me (multiple times, both on vstart as well as teuthology) as well as venk...
- 06:10 AM Bug #59345 (Need More Info): qa/workunits/fs/test_python.sh failed with "error in rmdir /dir-1: D...
04/12/2023
- 12:58 PM Backport #59032 (In Progress): pacific: Test failure: test_client_cache_size (tasks.cephfs.test_c...
- 12:49 PM Backport #59030 (In Progress): quincy: Test failure: test_client_cache_size (tasks.cephfs.test_cl...
- 11:57 AM Backport #59430 (In Progress): reef: Test failure: test_client_cache_size (tasks.cephfs.test_clie...
- 09:12 AM Backport #59430 (Resolved): reef: Test failure: test_client_cache_size (tasks.cephfs.test_client_...
- https://github.com/ceph/ceph/pull/51047
- 10:08 AM Backport #59417 (In Progress): pacific: pybind/mgr/volumes: investigate moving calls which may bl...
- 10:03 AM Backport #59416 (In Progress): quincy: pybind/mgr/volumes: investigate moving calls which may blo...
- 09:48 AM Backport #59415 (In Progress): reef: pybind/mgr/volumes: investigate moving calls which may block...
- 09:17 AM Backport #59412 (In Progress): reef: libcephfs: client needs to update the mtime and change attr ...
- 09:06 AM Backport #59409 (In Progress): reef: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 09:01 AM Backport #59003 (In Progress): pacific: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 05:31 AM Bug #59346 (Fix Under Review): qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQu...
- 01:09 AM Backport #59407 (In Progress): reef: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
04/11/2023
- 02:18 PM Backport #59368: pacific: qa: test_rebuild_simple checks status on wrong file system
- https://tracker.ceph.com/issues/59425
- 02:18 PM Backport #59367: quincy: qa: test_rebuild_simple checks status on wrong file system
- https://tracker.ceph.com/issues/59425
- 02:18 PM Backport #59366: reef: qa: test_rebuild_simple checks status on wrong file system
- https://tracker.ceph.com/issues/59425
- 02:17 PM Bug #59425 (Fix Under Review): qa: RuntimeError: more than one file system available
- 02:10 PM Bug #59425 (Pending Backport): qa: RuntimeError: more than one file system available
- ...
- 12:14 PM Bug #59343 (Duplicate): qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- 12:13 PM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- Venky Shankar wrote:
> Xiubo - I think this is https://tracker.ceph.com/issues/54460 which is assigned to Milind.
...
- 11:57 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- Xiubo - I think this is https://tracker.ceph.com/issues/54460 which is assigned to Milind.
- 08:34 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- https://pulpito.ceph.com/vshankar-2023-03-31_06:27:29-fs-wip-vshankar-testing-20230330.125245-testing-default-smithi/...
- 12:14 PM Feature #58877: mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
- Venky Shankar wrote:
> Kotresh Hiremath Ravishankar wrote:
> > Neeraj Pratap Singh wrote:
> > > I did try to repro...
- 10:05 AM Backport #59406 (In Progress): reef: cephfs-top: navigate to home screen when no fs
- 09:03 AM Backport #59406 (Resolved): reef: cephfs-top: navigate to home screen when no fs
- https://github.com/ceph/ceph/pull/51003
- 09:45 AM Backport #59417 (Resolved): pacific: pybind/mgr/volumes: investigate moving calls which may block...
- https://github.com/ceph/ceph/pull/51045
- 09:45 AM Backport #59416 (Resolved): quincy: pybind/mgr/volumes: investigate moving calls which may block ...
- https://github.com/ceph/ceph/pull/51044
- 09:45 AM Backport #59415 (Resolved): reef: pybind/mgr/volumes: investigate moving calls which may block on...
- https://github.com/ceph/ceph/pull/51042
- 09:38 AM Fix #51177 (Pending Backport): pybind/mgr/volumes: investigate moving calls which may block on li...
- 09:34 AM Bug #59413 (Pending Backport): cephfs: qa snaptest-git-ceph.sh failed with "got remote process re...
- https://pulpito.ceph.com/vshankar-2023-03-31_06:27:29-fs-wip-vshankar-testing-20230330.125245-testing-default-smithi/...
- 09:13 AM Backport #59412 (Resolved): reef: libcephfs: client needs to update the mtime and change attr whe...
- https://github.com/ceph/ceph/pull/51041
- 09:13 AM Backport #59411 (Resolved): reef: snap-schedule: handle non-existent path gracefully during snaps...
- https://github.com/ceph/ceph/pull/51248
- 09:13 AM Backport #59410 (In Progress): reef: Command failed (workunit test fs/quota/quota.sh) on smithi08...
- https://github.com/ceph/ceph/pull/52578
- 09:13 AM Backport #59409 (Resolved): reef: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- https://github.com/ceph/ceph/pull/51040
- 09:13 AM Backport #59408 (Resolved): reef: cephfs_mirror: local and remote dir root modes are not same
- https://github.com/ceph/ceph/pull/53271
- 09:03 AM Backport #59407 (Resolved): reef: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- https://github.com/ceph/ceph/pull/50987
- 08:54 AM Backport #59405 (In Progress): reef: MDS allows a (kernel) client to exceed the xattrs key/value ...
- https://github.com/ceph/ceph/pull/53339
- 08:45 AM Bug #58760: kclient: xfstests-dev generic/317 failed
- Xiubo, does this need backporting? (and other xfstests-dev related changes)
- 08:15 AM Backport #59398 (In Progress): pacific: cephfs-top: cephfs-top -d <seconds> not working as expected
- 02:39 AM Backport #59398 (Resolved): pacific: cephfs-top: cephfs-top -d <seconds> not working as expected
- https://github.com/ceph/ceph/pull/50715
- 07:57 AM Backport #59396 (In Progress): quincy: cephfs-top: cephfs-top -d <seconds> not working as expected
- 02:38 AM Backport #59396 (Resolved): quincy: cephfs-top: cephfs-top -d <seconds> not working as expected
- https://github.com/ceph/ceph/pull/50717
- 07:55 AM Backport #59397 (In Progress): reef: cephfs-top: cephfs-top -d <seconds> not working as expected
- 02:38 AM Backport #59397 (Resolved): reef: cephfs-top: cephfs-top -d <seconds> not working as expected
- https://github.com/ceph/ceph/pull/50998
- 07:36 AM Backport #59404 (In Progress): reef: mds stuck in 'up:replay' and crashed.
- 07:32 AM Backport #59404 (Resolved): reef: mds stuck in 'up:replay' and crashed.
- https://github.com/ceph/ceph/pull/50997
- 05:35 AM Bug #59342: qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
- The lkml link says:
> Sure, but I'll hold that request for a while. I updated to binutils-2.36 on Monday and I'm p...
- 05:34 AM Bug #59345: qa/workunits/fs/test_python.sh failed with "error in rmdir /dir-1: Directory not empt...
- Rishabh, could you confirm if this is the same issue (now fixed) you ran into a while back related to a quota related...
- 04:25 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Laura Flores wrote:
> @Venky relevant thread: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/GA77DL...
- 03:48 AM Backport #59198 (In Progress): pacific: cephfs: qa enables kclient for newop test
- 03:46 AM Backport #59199 (In Progress): quincy: cephfs: qa enables kclient for newop test
- 03:17 AM Backport #59399 (In Progress): reef: cephfs: qa enables kclient for newop test
- 02:55 AM Backport #59399 (Resolved): reef: cephfs: qa enables kclient for newop test
- https://github.com/ceph/ceph/pull/50990
- 02:42 AM Backport #59266 (In Progress): quincy: libcephfs: clear the suid/sgid for fallocate
- 02:35 AM Bug #59188 (Pending Backport): cephfs-top: cephfs-top -d <seconds> not working as expected
- 02:33 AM Backport #59268 (In Progress): pacific: libcephfs: clear the suid/sgid for fallocate
- 02:28 AM Backport #59267 (In Progress): reef: libcephfs: clear the suid/sgid for fallocate
- 01:52 AM Bug #56397: client: `df` will show incorrect disk size if the quota size is not aligned to 4MB
- This will wait for https://github.com/ceph/ceph/pull/50910.
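A minimal sketch of the symptom, assuming a CephFS mount at /mnt/cephfs (the directory and quota value are illustrative):

    # Set a quota that is not a multiple of 4 MiB, then compare what
    # statvfs()/df reports against the configured quota.
    import os

    d = "/mnt/cephfs/dir"          # assumed CephFS directory
    quota = 6 * 1024 * 1024        # 6 MiB, deliberately not 4 MiB-aligned
    os.setxattr(d, "ceph.quota.max_bytes", str(quota).encode())

    st = os.statvfs(d)
    print("df-visible size:", st.f_blocks * st.f_frsize, "quota:", quota)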
- 01:50 AM Backport #59386 (In Progress): pacific: [RHEL stock] pjd test failures(a bug that need to wait th...
- 01:48 AM Backport #59385 (In Progress): quincy: [RHEL stock] pjd test failures(a bug that need to wait the...
- 01:43 AM Backport #59384 (In Progress): reef: [RHEL stock] pjd test failures(a bug that need to wait the u...
04/10/2023
- 11:20 PM Bug #59394 (New): ACLs not fully supported.
- Attempting to set the default user or group on a CephFS volume returns an error:...
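For reference, a default (inherited) POSIX ACL is what setfacl's -d flag sets; a hypothetical reproducer along these lines (the mount path and user are placeholders):

    # Hypothetical reproducer: set a *default* ACL entry on a CephFS
    # directory; per the report this returns an error.
    import subprocess

    subprocess.run(
        ["setfacl", "-d", "-m", "u:alice:rwx", "/CephFS/projects"],
        check=True,  # raises CalledProcessError if setfacl fails
    )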
- 09:32 PM Backport #58350 (Resolved): quincy: MDS: scan_stray_dir doesn't walk through all stray inode frag...
- 07:33 PM Bug #58489: mds stuck in 'up:replay' and crashed.
- @Venky relevant thread: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/GA77DLSQXCXZVJ4BYQ6KDW4DLU5IF...
- 07:08 PM Backport #59037 (In Progress): quincy: MDS allows a (kernel) client to exceed the xattrs key/valu...
- 04:38 PM Feature #59388 (In Progress): mds/MDSAuthCaps: "fsname", path, root_squash can't be in same cap w...
- 04:27 PM Feature #59388 (Pending Backport): mds/MDSAuthCaps: "fsname", path, root_squash can't be in same ...
- MDS capabilities can take 5 parameters: FS name, path, root squash, UID and GIDs. It's possible to have the first 3 toget...
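For illustration, a cap combining the first three parameters might be set like the sketch below (the entity, fs name and path are placeholders; the exact grammar is what this feature tracks):

    # Illustrative only: cap syntax per the MDS caps documentation.
    import subprocess

    cap = "allow rw fsname=cephfs path=/volumes root_squash"
    subprocess.run(
        ["ceph", "auth", "caps", "client.foo", "mon", "allow r", "mds", cap],
        check=True,
    )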
- 03:52 PM Bug #54108: qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.ex...
- Venky,
This failure occurred in Pacific backport runs - https://pulpito.ceph.com/yuriw-2023-04-04_15:06:57-fs-wip-y...
- 03:00 PM Backport #58345 (Resolved): quincy: Thread md_log_replay is hanged for ever.
- 01:47 PM Backport #59200: reef: qa: add testing in fs:workload for different kinds of subvolumes
- already available in reef
- 01:43 PM Backport #59201 (In Progress): quincy: qa: add testing in fs:workload for different kinds of subv...
- 01:40 PM Backport #59202 (In Progress): pacific: qa: add testing in fs:workload for different kinds of sub...
- 01:15 PM Bug #59301 (Triaged): pacific (?): test_full_fsync: RuntimeError: Timed out waiting for MDS daemo...
- 01:15 PM Bug #59301: pacific (?): test_full_fsync: RuntimeError: Timed out waiting for MDS daemons to beco...
- No backport target set since this is currently being debugged to assess the scope of the issue.
- 12:54 PM Bug #59350 (Triaged): qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks)...
- 09:58 AM Bug #58962: ftruncate fails with EACCES on a read-only file created with write permissions
- And BTW, the linux kernel driver does not have this issue.
- 09:57 AM Bug #58962: ftruncate fails with EACCES on a read-only file created with write permissions
- ftruncate ends up calling Client::ll_setattr() with size=0 in ceph-fuse. Note that there is no file handle (fh) that's...
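A minimal reproducer for the reported behavior might look like this (the mount point is an assumption):

    # Assumed ceph-fuse mount at /mnt/cephfs. POSIX permits ftruncate() on
    # any descriptor opened for writing, regardless of the file's mode
    # bits; the report says ceph-fuse returns EACCES here instead.
    import os

    path = "/mnt/cephfs/readonly-file"
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o444)  # read-only mode bits
    try:
        os.ftruncate(fd, 0)  # fails with EACCES on ceph-fuse per this bug
    finally:
        os.close(fd)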
- 06:45 AM Backport #59386 (Resolved): pacific: [RHEL stock] pjd test failures(a bug that need to wait the u...
- https://github.com/ceph/ceph/pull/50986
- 06:45 AM Backport #59385 (Resolved): quincy: [RHEL stock] pjd test failures(a bug that need to wait the un...
- https://github.com/ceph/ceph/pull/50985
- 06:44 AM Backport #59384 (Resolved): reef: [RHEL stock] pjd test failures(a bug that need to wait the unli...
- https://github.com/ceph/ceph/pull/50984
- 06:37 AM Bug #56695 (Pending Backport): [RHEL stock] pjd test failures(a bug that need to wait the unlink ...
04/07/2023
- 10:34 PM Fix #58758: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid'
- /a/yuriw-2023-03-30_21:29:24-rados-wip-yuri2-testing-2023-03-30-0826-distro-default-smithi/7227514
- 10:25 PM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- /a/yuriw-2023-03-30_21:29:24-rados-wip-yuri2-testing-2023-03-30-0826-distro-default-smithi/7227398
04/06/2023
- 05:51 PM Backport #59368 (Resolved): pacific: qa: test_rebuild_simple checks status on wrong file system
- 03:26 PM Backport #59368 (In Progress): pacific: qa: test_rebuild_simple checks status on wrong file system
- 02:48 PM Backport #59368 (Resolved): pacific: qa: test_rebuild_simple checks status on wrong file system
- https://github.com/ceph/ceph/pull/50923
- 05:50 PM Backport #57743 (Resolved): pacific: qa: test_recovery_pool uses wrong recovery procedure
- 05:49 PM Backport #59229 (Resolved): pacific: cephfs-data-scan: does not scan_links for lost+found
- 05:48 PM Backport #59373 (Resolved): reef: qa: test_join_fs_unset failure
- https://github.com/ceph/ceph/pull/52235
- 05:48 PM Backport #59223 (Resolved): pacific: mds: catch damage to CDentry's first member before persisting
- 05:48 PM Backport #59226 (Resolved): pacific: mds: modify scrub to catch dentry corruption
- 05:48 PM Backport #59372 (Resolved): pacific: qa: test_join_fs_unset failure
- https://github.com/ceph/ceph/pull/52237
- 05:48 PM Backport #59371 (In Progress): quincy: qa: test_join_fs_unset failure
- https://github.com/ceph/ceph/pull/52236
- 05:47 PM Backport #57714 (Resolved): pacific: mds: scrub locates mismatch between child accounted_rstats a...
- 05:46 PM Backport #53162 (Resolved): pacific: qa: test_standby_count_wanted failure
- 05:45 PM Backport #57712 (Resolved): pacific: qa: "1 MDSs behind on trimming (MDS_TRIM)"
- 05:42 PM Bug #59297 (Pending Backport): qa: test_join_fs_unset failure
- 03:47 PM Bug #59348 (Duplicate): qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs....
- 09:45 AM Bug #59348: qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.T...
- Reason is this commit: https://github.com/ceph/ceph/commit/8679e0c2eb624efa3ab66f2238546629a3e3a339.
Because this ...
- 08:03 AM Bug #59348: qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.T...
- from teuthology log...
- 06:49 AM Bug #59348 (Duplicate): qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs....
- https://pulpito.ceph.com/yuriw-2023-04-05_14:51:06-fs-wip-yuri5-testing-2023-04-04-0814-distro-default-smithi/7232817...
- 03:25 PM Backport #59366 (In Progress): reef: qa: test_rebuild_simple checks status on wrong file system
- 02:48 PM Backport #59366 (Resolved): reef: qa: test_rebuild_simple checks status on wrong file system
- https://github.com/ceph/ceph/pull/50921
- 03:25 PM Backport #59367 (In Progress): quincy: qa: test_rebuild_simple checks status on wrong file system
- 02:48 PM Backport #59367 (In Progress): quincy: qa: test_rebuild_simple checks status on wrong file system
- https://github.com/ceph/ceph/pull/50922
- 02:46 PM Bug #59332 (Pending Backport): qa: test_rebuild_simple checks status on wrong file system
- 07:05 AM Bug #59350 (Resolved): qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks...
- https://pulpito.ceph.com/yuriw-2023-04-05_14:51:06-fs-wip-yuri5-testing-2023-04-04-0814-distro-default-smithi/7232828...
- 07:02 AM Bug #59349 (Resolved): qa: FAIL: test_subvolume_group_quota_exceeded_subvolume_removal_retained_s...
- https://pulpito.ceph.com/yuriw-2023-04-05_14:51:06-fs-wip-yuri5-testing-2023-04-04-0814-distro-default-smithi/7232813...
- 06:55 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- https://pulpito.ceph.com/yuriw-2023-04-05_14:51:06-fs-wip-yuri5-testing-2023-04-04-0814-distro-default-smithi/7232754/
- 06:07 AM Bug #59346: qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not rai...
- The quota was successfully set:...
- 05:56 AM Bug #59346 (Fix Under Review): qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQu...
- https://pulpito.ceph.com/yuriw-2023-04-05_14:51:06-fs-wip-yuri5-testing-2023-04-04-0814-distro-default-smithi/7232889...
- 05:14 AM Bug #59345 (Need More Info): qa/workunits/fs/test_python.sh failed with "error in rmdir /dir-1: D...
- https://pulpito.ceph.com/yuriw-2023-04-05_14:51:06-fs-wip-yuri5-testing-2023-04-04-0814-distro-default-smithi/7232889...
- 05:07 AM Bug #59344 (Fix Under Review): qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Inva...
- 04:45 AM Bug #59344 (Fix Under Review): qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Inva...
- ...
- 03:00 AM Bug #59343 (Resolved): qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- https://pulpito.ceph.com/yuriw-2023-04-05_14:51:06-fs-wip-yuri5-testing-2023-04-04-0814-distro-default-smithi/7232845...
- 02:14 AM Bug #59342 (Duplicate): qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
- https://pulpito.ceph.com/yuriw-2023-04-05_14:51:06-fs-wip-yuri5-testing-2023-04-04-0814-distro-default-smithi/7232784...
- 12:48 AM Bug #59314 (Fix Under Review): mon/MDSMonitor: plug PAXOS when evicting an MDS
04/05/2023
- 08:18 PM Bug #59332 (Fix Under Review): qa: test_rebuild_simple checks status on wrong file system
- 07:24 PM Bug #59332 (Pending Backport): qa: test_rebuild_simple checks status on wrong file system
- ...
- 05:38 PM Bug #58962: ftruncate fails with EACCES on a read-only file created with write permissions
- This happens when setattr is called on truncate; it looks like an incorrect check in inode_permission()....
- 02:08 PM Bug #24403: mon failed to return metadata for mds
- #59318 may also be related somehow but I'm not sure.
- 02:15 AM Bug #24403 (Fix Under Review): mon failed to return metadata for mds
- 02:08 PM Bug #59318 (Fix Under Review): mon/MDSMonitor: daemon booting may get failed if mon handles up:bo...
- 02:10 AM Bug #59318 (Resolved): mon/MDSMonitor: daemon booting may get failed if mon handles up:boot beaco...
- If the leader handles two up:boot beacons from a new MDS, it may fail the new MDS if the two beacon updates are batch...
- 12:46 PM Feature #58072: enable 'ceph fs new' use 'ceph fs set' options
- Patrick Donnelly wrote:
> Rishabh Dave wrote:
> > Patrick Donnelly wrote:
> > > I think at this point we should co...
- 12:44 PM Feature #58072: enable 'ceph fs new' use 'ceph fs set' options
- Rishabh Dave wrote:
> Patrick Donnelly wrote:
> > I think at this point we should consider making it possible to se...
- 08:31 AM Bug #58757 (Duplicate): qa: Command failed (workunit test suites/fsstress.sh)
- 08:22 AM Bug #58746 (Duplicate): quincy: qa: VersionNotFoundError: Failed to fetch package version
04/04/2023
- 06:17 PM Bug #59314 (Pending Backport): mon/MDSMonitor: plug PAXOS when evicting an MDS
- Various paths which call MDSMonitor::fail_mds_gid should also plug PAXOS so that the pending FSMap is batched with th...
- 01:17 PM Feature #58072: enable 'ceph fs new' use 'ceph fs set' options
- Patrick Donnelly wrote:
> I think at this point we should consider making it possible to set arbitrary settings on a...
- 01:09 PM Feature #58072: enable 'ceph fs new' use 'ceph fs set' options
- Dhairya Parmar wrote:
> Patrick Donnelly wrote:
> > I think at this point we should consider making it possible to ...
- 12:23 PM Feature #58072: enable 'ceph fs new' use 'ceph fs set' options
- Patrick Donnelly wrote:
> I think at this point we should consider making it possible to set arbitrary settings on a...
- 12:52 PM Backport #57743 (In Progress): pacific: qa: test_recovery_pool uses wrong recovery procedure
- 08:08 AM Backport #59306 (New): quincy: client: `df` will show incorrect disk size if the quota size is no...
- 08:08 AM Backport #59305 (New): reef: client: `df` will show incorrect disk size if the quota size is not ...
- 08:08 AM Backport #59304 (Rejected): pacific: client: `df` will show incorrect disk size if the quota size...
- 08:01 AM Bug #56397 (Pending Backport): client: `df` will show incorrect disk size if the quota size is no...
- Rishabh Dave wrote:
> Xiubo, the PR has been merged but I am leaving status of this ticket unchanged because the baa...
- 02:04 AM Feature #56140: cephfs: tooling to identify inode (metadata) corruption
- Ken Dreyer wrote:
> https://github.com/ceph/ceph/pull/47542 will be in reef, and we need this in quincy.
The reas...
04/03/2023
- 07:25 PM Backport #59303 (In Progress): quincy: cephfs: tooling to identify inode (metadata) corruption
- https://github.com/ceph/ceph/pull/52245
- 07:24 PM Feature #56140 (Pending Backport): cephfs: tooling to identify inode (metadata) corruption
- https://github.com/ceph/ceph/pull/47542 will be in reef, and we need this in quincy.
- 06:41 PM Bug #59301 (Triaged): pacific (?): test_full_fsync: RuntimeError: Timed out waiting for MDS daemo...
- ...
- 04:11 PM Bug #59297 (Fix Under Review): qa: test_join_fs_unset failure
- 04:09 PM Bug #59297 (Pending Backport): qa: test_join_fs_unset failure
- ...
- 01:26 PM Bug #59230 (Duplicate): Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
- https://tracker.ceph.com/issues/59163
03/31/2023
- 09:14 PM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- /a/yuriw-2023-03-14_20:10:47-rados-wip-yuri-testing-2023-03-14-0714-reef-distro-default-smithi/7206976
- 03:24 PM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- /a/yuriw-2023-03-27_23:05:54-rados-wip-yuri4-testing-2023-03-25-0714-distro-default-smithi/7221904
- 06:45 PM Bug #56397: client: `df` will show incorrect disk size if the quota size is not aligned to 4MB
- Xiubo, the PR has been merged but I am leaving status of this ticket unchanged because the backport field is not set...
- 10:20 AM Backport #59265 (In Progress): quincy: pacific scrub ~mds_dir causes stray related ceph_assert, a...
- https://github.com/ceph/ceph/pull/50815
- 08:40 AM Backport #59265 (In Progress): quincy: pacific scrub ~mds_dir causes stray related ceph_assert, a...
- 10:20 AM Backport #59262 (In Progress): quincy: mds: stray directories are not purged when all past parent...
- https://github.com/ceph/ceph/pull/50815
- 08:39 AM Backport #59262 (In Progress): quincy: mds: stray directories are not purged when all past parent...
- 10:17 AM Backport #59264 (In Progress): pacific: pacific scrub ~mds_dir causes stray related ceph_assert, ...
- https://github.com/ceph/ceph/pull/50814
- 08:40 AM Backport #59264 (Resolved): pacific: pacific scrub ~mds_dir causes stray related ceph_assert, abo...
- 10:16 AM Backport #59261 (In Progress): pacific: mds: stray directories are not purged when all past paren...
- https://github.com/ceph/ceph/pull/50814
- 08:39 AM Backport #59261 (Resolved): pacific: mds: stray directories are not purged when all past parents ...
- 10:12 AM Backport #59263 (In Progress): reef: pacific scrub ~mds_dir causes stray related ceph_assert, abo...
- https://github.com/ceph/ceph/pull/50813
- 08:39 AM Backport #59263 (In Progress): reef: pacific scrub ~mds_dir causes stray related ceph_assert, abo...
- 10:12 AM Backport #59260 (In Progress): reef: mds: stray directories are not purged when all past parents ...
- https://github.com/ceph/ceph/pull/50813
- 08:39 AM Backport #59260 (In Progress): reef: mds: stray directories are not purged when all past parents ...
- 10:01 AM Backport #59023 (In Progress): pacific: mds: warning `clients failing to advance oldest client/fl...
- 09:23 AM Backport #59022 (Duplicate): pacific: mds: warning `clients failing to advance oldest client/flus...
- Duplicate of https://tracker.ceph.com/issues/59023
- 09:19 AM Backport #59031 (Duplicate): pacific: Test failure: test_client_cache_size (tasks.cephfs.test_cli...
- Duplicate of https://tracker.ceph.com/issues/59032
- 09:13 AM Backport #59268 (Resolved): pacific: libcephfs: clear the suid/sgid for fallocate
- https://github.com/ceph/ceph/pull/50988
- 09:13 AM Backport #59267 (Resolved): reef: libcephfs: clear the suid/sgid for fallocate
- https://github.com/ceph/ceph/pull/50987
- 09:13 AM Backport #59266 (Resolved): quincy: libcephfs: clear the suid/sgid for fallocate
- https://github.com/ceph/ceph/pull/50989
- 09:11 AM Feature #58680 (Pending Backport): libcephfs: clear the suid/sgid for fallocate
- Backport note: include https://github.com/ceph/ceph/pull/50793.
- 08:37 AM Bug #51824 (Pending Backport): pacific scrub ~mds_dir causes stray related ceph_assert, abort and...
- 08:37 AM Bug #53724 (Pending Backport): mds: stray directories are not purged when all past parents are clear
- 08:31 AM Backport #59246 (In Progress): pacific: qa: fix testcase 'test_cluster_set_user_config_with_non_e...
- https://github.com/ceph/ceph/pull/50809
- 04:06 AM Backport #59246 (Resolved): pacific: qa: fix testcase 'test_cluster_set_user_config_with_non_exis...
- 08:31 AM Backport #59249 (In Progress): pacific: qa: intermittent nfs test failures at nfs cluster creation
- https://github.com/ceph/ceph/pull/50809
- 04:07 AM Backport #59249 (Resolved): pacific: qa: intermittent nfs test failures at nfs cluster creation
- 08:30 AM Backport #59245 (In Progress): reef: qa: fix testcase 'test_cluster_set_user_config_with_non_exis...
- https://github.com/ceph/ceph/pull/50808
- 04:06 AM Backport #59245 (In Progress): reef: qa: fix testcase 'test_cluster_set_user_config_with_non_exis...
- 08:30 AM Backport #59248 (In Progress): reef: qa: intermittent nfs test failures at nfs cluster creation
- https://github.com/ceph/ceph/pull/50808
- 04:07 AM Backport #59248 (Resolved): reef: qa: intermittent nfs test failures at nfs cluster creation
- 08:29 AM Backport #59244 (In Progress): quincy: qa: fix testcase 'test_cluster_set_user_config_with_non_ex...
- https://github.com/ceph/ceph/pull/50807
- 04:06 AM Backport #59244 (In Progress): quincy: qa: fix testcase 'test_cluster_set_user_config_with_non_ex...
- 08:29 AM Backport #59247 (In Progress): quincy: qa: intermittent nfs test failures at nfs cluster creation
- https://github.com/ceph/ceph/pull/50807
- 04:07 AM Backport #59247 (Resolved): quincy: qa: intermittent nfs test failures at nfs cluster creation
- 08:23 AM Backport #59252 (In Progress): pacific: mgr/nfs: disallow non-existent paths when creating export
- https://github.com/ceph/ceph/pull/50809
- 04:07 AM Backport #59252 (Resolved): pacific: mgr/nfs: disallow non-existent paths when creating export
- 08:15 AM Backport #59251 (In Progress): reef: mgr/nfs: disallow non-existent paths when creating export
- https://github.com/ceph/ceph/pull/50808
- 04:07 AM Backport #59251 (Resolved): reef: mgr/nfs: disallow non-existent paths when creating export
- 08:11 AM Backport #59250 (In Progress): quincy: mgr/nfs: disallow non-existent paths when creating export
- https://github.com/ceph/ceph/pull/50807
- 04:07 AM Backport #59250 (In Progress): quincy: mgr/nfs: disallow non-existent paths when creating export
- 04:04 AM Fix #58758 (Pending Backport): qa: fix testcase 'test_cluster_set_user_config_with_non_existing_c...
- 04:03 AM Bug #58744 (Pending Backport): qa: intermittent nfs test failures at nfs cluster creation
- 04:03 AM Bug #58228 (Pending Backport): mgr/nfs: disallow non-existent paths when creating export
03/30/2023
- 05:52 PM Bug #58219: Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournal...
- Very strange. I just saw this show up in an RGW job against Ubuntu 20.04.
I booted up an old focal vm to test under...
- 03:10 PM Bug #58576 (Rejected): do not allow invalid flags with cmd 'scrub start'
- Alright, so the cmd works as expected; it's just that it considers the first arg as the scrub tag. If given more args then...
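For context, scrub start takes optional positional scrubops and tag arguments, so a stray token is consumed as the tag rather than rejected; a sketch of the invocation (the daemon target, path and tag are placeholders):

    # Command shape: scrub start <path> [scrubops] [tag].
    import subprocess

    subprocess.run(
        ["ceph", "tell", "mds.cephfs:0", "scrub", "start", "/",
         "recursive,force", "mytag"],
        check=True,
    )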
- 01:27 PM Bug #58244: Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
- Rishabh, please take this one.
- 10:49 AM Bug #51824 (Resolved): pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- 10:49 AM Bug #53724 (Resolved): mds: stray directories are not purged when all past parents are clear
- 09:20 AM Feature #58129 (Fix Under Review): mon/FSCommands: support swapping file systems by name
- 09:08 AM Bug #59230 (Duplicate): Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
- https://pulpito.ceph.com/vshankar-2023-03-18_05:01:47-fs-wip-vshankar-testing-20230317.095222-testing-default-smithi/...
- 06:24 AM Documentation #57062: Document access patterns that have good/pathological performance on CephFS
- Venky Shankar wrote:
> Niklas Hambuechen wrote:
> > Hi Venky, I'm using the kclient on Linux 5.10.88 in this cluste...
- 04:59 AM Backport #59030: quincy: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.Te...
- Venky Shankar wrote:
> Milind, handing this over to you since it needs to be backported after backporting https://tr...
- 04:36 AM Backport #59030: quincy: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.Te...
- Milind, handing this over to you since it needs to be backported after backporting https://tracker.ceph.com/issues/54317
- 04:55 AM Backport #59002 (In Progress): quincy: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 04:53 AM Backport #59021 (In Progress): quincy: mds: warning `clients failing to advance oldest client/flu...
- 04:42 AM Backport #59041 (In Progress): quincy: libcephfs: client needs to update the mtime and change att...
- 04:38 AM Backport #59032: pacific: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.T...
- Milind, please include this when backporting https://tracker.ceph.com/issues/54317
- 04:38 AM Backport #59036 (Duplicate): pacific: MDS allows a (kernel) client to exceed the xattrs key/value...
- 04:37 AM Backport #59036: pacific: MDS allows a (kernel) client to exceed the xattrs key/value limits
- https://tracker.ceph.com/issues/59035
- 04:37 AM Backport #59039 (Duplicate): pacific: libcephfs: client needs to update the mtime and change attr...
- https://tracker.ceph.com/issues/59040
- 03:07 AM Backport #59229 (In Progress): pacific: cephfs-data-scan: does not scan_links for lost+found
- 03:04 AM Backport #59229 (Resolved): pacific: cephfs-data-scan: does not scan_links for lost+found
- https://github.com/ceph/ceph/pull/50784
- 03:06 AM Backport #59228 (In Progress): quincy: cephfs-data-scan: does not scan_links for lost+found
- 03:03 AM Backport #59228 (Resolved): quincy: cephfs-data-scan: does not scan_links for lost+found
- https://github.com/ceph/ceph/pull/50783
- 03:05 AM Backport #59227 (In Progress): reef: cephfs-data-scan: does not scan_links for lost+found
- 03:03 AM Backport #59227 (Resolved): reef: cephfs-data-scan: does not scan_links for lost+found
- https://github.com/ceph/ceph/pull/50782
- 03:02 AM Backport #57631 (Rejected): quincy: first-damage.sh does not handle dentries with spaces
- not backporting this script
- 03:01 AM Bug #59183 (Pending Backport): cephfs-data-scan: does not scan_links for lost+found
- 02:59 AM Backport #59226 (In Progress): pacific: mds: modify scrub to catch dentry corruption
- 02:47 AM Backport #59226 (Resolved): pacific: mds: modify scrub to catch dentry corruption
- https://github.com/ceph/ceph/pull/50781
- 02:45 AM Backport #59112 (Resolved): reef: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 02:36 AM Backport #59225 (In Progress): quincy: mds: modify scrub to catch dentry corruption
- 02:18 AM Backport #59225 (Resolved): quincy: mds: modify scrub to catch dentry corruption
- https://github.com/ceph/ceph/pull/50779
- 02:35 AM Backport #59223 (In Progress): pacific: mds: catch damage to CDentry's first member before persis...
- 02:26 AM Backport #59016 (In Progress): quincy: snap-schedule: handle non-existent path gracefully during ...
- 02:09 AM Feature #57091 (Pending Backport): mds: modify scrub to catch dentry corruption
- Quincy backport: https://github.com/ceph/ceph/pull/50779
- 02:08 AM Backport #59221 (In Progress): quincy: mds: catch damage to CDentry's first member before persisting
- 01:19 AM Backport #57714 (In Progress): pacific: mds: scrub locates mismatch between child accounted_rstat...
- 01:17 AM Backport #57715 (In Progress): quincy: mds: scrub locates mismatch between child accounted_rstats...
- 01:15 AM Backport #57721 (In Progress): pacific: qa: data-scan/journal-tool do not output debugging in ups...
- 01:14 AM Backport #57720 (In Progress): quincy: qa: data-scan/journal-tool do not output debugging in upst...
- 01:05 AM Backport #57713 (In Progress): quincy: qa: "1 MDSs behind on trimming (MDS_TRIM)"
- 01:03 AM Backport #57744 (In Progress): quincy: qa: test_recovery_pool uses wrong recovery procedure
- 01:00 AM Backport #57824 (In Progress): quincy: qa: mirror tests should cleanup fs during unwind
- 12:59 AM Backport #57825 (In Progress): pacific: qa: mirror tests should cleanup fs during unwind
- 12:57 AM Cleanup #51543 (Resolved): mds: improve debugging for mksnap denial
- 12:57 AM Bug #53641 (Resolved): mds: recursive scrub does not trigger stray reintegration
- 12:56 AM Bug #53619 (Resolved): mds: fails to reintegrate strays if destdn's directory is full (ENOSPC)
03/29/2023
- 09:57 PM Backport #53162 (In Progress): pacific: qa: test_standby_count_wanted failure
- 09:53 PM Backport #54234 (Resolved): quincy: qa: use cephadm to provision cephfs for fs:workloads
- 09:50 PM Backport #57746 (Duplicate): quincy: qa: broad snapshot functionality testing across clients
- 09:12 PM Backport #57712 (In Progress): pacific: qa: "1 MDSs behind on trimming (MDS_TRIM)"
- 09:10 PM Backport #57630 (Rejected): pacific: first-damage.sh does not handle dentries with spaces
- Not backporting this script to pacific.
- 09:08 PM Backport #52854 (In Progress): pacific: qa: test_simple failure
- 09:05 PM Backport #50024 (Rejected): octopus: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_n...
- 09:04 PM Bug #49834 (Won't Fix - EOL): octopus: qa: test_statfs_on_deleted_fs failure
- 09:02 PM Bug #48524 (Resolved): octopus: run_shell() got an unexpected keyword argument 'timeout'
- 09:00 PM Bug #48143 (Won't Fix - EOL): octopus: qa: statfs command timeout is too short
- 08:52 PM Backport #57670 (Resolved): quincy: mds: damage table only stores one dentry per dirfrag
- 08:49 PM Backport #59222 (In Progress): reef: mds: catch damage to CDentry's first member before persisting
- 08:23 PM Backport #59222 (Resolved): reef: mds: catch damage to CDentry's first member before persisting
- https://github.com/ceph/ceph/pull/50755
- 08:24 PM Backport #59223 (Resolved): pacific: mds: catch damage to CDentry's first member before persisting
- https://github.com/ceph/ceph/pull/50781
- 08:23 PM Backport #59221 (Resolved): quincy: mds: catch damage to CDentry's first member before persisting
- https://github.com/ceph/ceph/pull/50779
- 08:17 PM Bug #58482 (Pending Backport): mds: catch damage to CDentry's first member before persisting
- 03:09 PM Bug #58938: qa: xfstests-dev's generic test suite has 7 failures with kclient
- > Rishabh, could you please check if https://pulpito.ceph.com/vshankar-2023-03-18_05:01:47-fs-wip-vshankar-testing-20...
- 04:55 AM Bug #58938: qa: xfstests-dev's generic test suite has 7 failures with kclient
- Rishabh, could you please check if https://pulpito.ceph.com/vshankar-2023-03-18_05:01:47-fs-wip-vshankar-testing-2023...
- 11:20 AM Backport #58598 (Resolved): pacific: mon: prevent allocating snapids allocated for CephFS
- 09:54 AM Backport #58984 (In Progress): pacific: cephfs-top: navigate to home screen when no fs
- 08:59 AM Backport #58983 (In Progress): quincy: cephfs-top: navigate to home screen when no fs
- 06:58 AM Backport #58826 (In Progress): pacific: mds: make num_fwd and num_retry to __u32
- 06:57 AM Backport #59202 (Resolved): pacific: qa: add testing in fs:workload for different kinds of subvol...
- https://github.com/ceph/ceph/pull/51509
- 06:57 AM Backport #59201 (Resolved): quincy: qa: add testing in fs:workload for different kinds of subvolumes
- https://github.com/ceph/ceph/pull/50974
- 06:56 AM Backport #59200 (Rejected): reef: qa: add testing in fs:workload for different kinds of subvolumes
- 06:50 AM Fix #54317 (Pending Backport): qa: add testing in fs:workload for different kinds of subvolumes
- Milind, I think we should backport this. WDYT? There is merit in doing so (more tests :).
- 06:50 AM Backport #58880 (In Progress): quincy: mds: Jenkins fails with skipping unrecognized type MClient...
- 06:48 AM Backport #58825 (In Progress): quincy: mds: make num_fwd and num_retry to __u32
- 06:47 AM Backport #58881 (In Progress): pacific: mds: Jenkins fails with skipping unrecognized type MClien...
- 06:10 AM Backport #59199 (Resolved): quincy: cephfs: qa enables kclient for newop test
- https://github.com/ceph/ceph/pull/50991
- 06:10 AM Backport #59198 (Rejected): pacific: cephfs: qa enables kclient for newop test
- https://github.com/ceph/ceph/pull/50992
- 06:10 AM Feature #59197 (Fix Under Review): qa: mds_upgrade_sequence switch to merge fragment to filter th...
- 06:04 AM Feature #59197 (Fix Under Review): qa: mds_upgrade_sequence switch to merge fragment to filter th...
- PR https://github.com/ceph/ceph/pull/48183 has added the merge fragment support. This will switch to using the fragment to s...
- 06:07 AM Bug #57591 (Pending Backport): cephfs: qa enables kclient for newop test
- 06:06 AM Bug #59195 (Fix Under Review): qa/fscrypt: switch to postmerge fragment to distinguish the mounter...
- 04:54 AM Bug #59195 (Fix Under Review): qa/fscrypt: switch to postmerge fragment to distinguish the mounter...
- https://tracker.ceph.com/issues/57591 has introduced the postmerge fragment; we can reuse it here.
- 03:56 AM Backport #58994 (In Progress): pacific: client: fix CEPH_CAP_FILE_WR caps reference leakage in _w...
- 03:55 AM Backport #58993 (In Progress): quincy: client: fix CEPH_CAP_FILE_WR caps reference leakage in _wr...
- 03:11 AM Backport #59007 (In Progress): pacific: mds stuck in 'up:replay' and crashed.
- 03:10 AM Backport #59006 (In Progress): quincy: mds stuck in 'up:replay' and crashed.
- 03:01 AM Bug #57580 (Resolved): Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
- 03:01 AM Backport #57822 (Rejected): quincy: Test failure: test_newops_getvxattr (tasks.cephfs.test_newops...
- Quincy also doesn't support the dependency, so there is no need to backport it.
- 02:59 AM Backport #57823 (Rejected): pacific: Test failure: test_newops_getvxattr (tasks.cephfs.test_newop...
- Pacific hasn't backported the dependency, so there is no need to fix it.
03/28/2023
- 05:38 PM Bug #48502: ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
- /a/lflores-2023-03-27_20:42:09-rados-wip-aclamk-bs-elastic-shared-blob-quincy-distro-default-smithi/7221650
- 05:12 PM Fix #58758: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid'
- /a/lflores-2023-03-27_02:17:31-rados-wip-aclamk-bs-elastic-shared-blob-save-25.03.2023-a-distro-default-smithi/7221061
- 01:02 PM Backport #58808 (In Progress): quincy: cephfs-top: add an option to dump the computed values to s...
- Backport PR: https://github.com/ceph/ceph/pull/50717
- 12:08 PM Backport #59024 (Duplicate): quincy: mds: warning `clients failing to advance oldest client/flush...
- https://tracker.ceph.com/issues/59021
- 11:38 AM Backport #59033 (Duplicate): quincy: Test failure: test_client_cache_size (tasks.cephfs.test_clie...
- https://tracker.ceph.com/issues/59030
- 11:37 AM Backport #59034 (Duplicate): quincy: MDS allows a (kernel) client to exceed the xattrs key/value ...
- https://tracker.ceph.com/issues/59037
- 11:35 AM Backport #59038 (Duplicate): quincy: libcephfs: client needs to update the mtime and change attr ...
- https://tracker.ceph.com/issues/59041 (not sure why backport bot created two backport tickets for the same release)
- 11:31 AM Backport #59037: quincy: MDS allows a (kernel) client to exceed the xattrs key/value limits
- Rishabh, please take this one.
- 10:32 AM Bug #59188 (Fix Under Review): cephfs-top: cephfs-top -d <seconds> not working as expected
- 08:20 AM Bug #59188 (Resolved): cephfs-top: cephfs-top -d <seconds> not working as expected
- `cephfs-top -d [--delay]` raises an exception for float values due to the introduction of `curses.halfdelay()` in cephfs-top
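A minimal sketch of the failure mode (assumed behavior, not the actual cephfs-top code): curses.halfdelay() takes an integer count of tenths of a second (1..255), so a fractional -d value must be converted before the call.

    import curses

    def main(stdscr, delay_seconds=0.5):
        tenths = max(1, min(255, int(round(delay_seconds * 10))))
        curses.halfdelay(tenths)  # halfdelay(0.5) itself would raise
        stdscr.getch()            # returns -1 once the interval expires

    curses.wrapper(main)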
- 10:03 AM Backport #58807 (In Progress): pacific: cephfs-top: add an option to dump the computed values to ...
- Backport PR: https://github.com/ceph/ceph/pull/50715.
- 09:26 AM Backport #57571 (Resolved): pacific: client: do not uninline data for read
- 09:25 AM Bug #56553 (Resolved): client: do not uninline data for read
- 09:22 AM Bug #54461 (Resolved): ffsb.sh test failure
- 09:22 AM Bug #50057 (Resolved): client: openned inodes counter is inconsistent
- 09:21 AM Backport #50184 (Rejected): octopus: client: openned inodes counter is inconsistent
- Nathan Cutler wrote:
> This ticket is for tracking the octopus backport of a follow-on fix for #46865 which was back...
- 09:12 AM Bug #50744 (Resolved): mds: journal recovery thread is possibly asserting with mds_lock not locked
- 09:11 AM Backport #50847 (Resolved): octopus: mds: journal recovery thread is possibly asserting with mds_...
- 09:09 AM Bug #58000 (Resolved): mds: switch submit_mutex to fair mutex for MDLog
- 09:08 AM Backport #58343 (Resolved): pacific: mds: switch submit_mutex to fair mutex for MDLog
- 09:04 AM Bug #50433 (Resolved): mds: Error ENOSYS: mds.a started profiler
- 09:03 AM Backport #50631 (Resolved): octopus: mds: Error ENOSYS: mds.a started profiler
- 09:01 AM Bug #50808 (Resolved): qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Items in the...
- 08:59 AM Backport #51323 (Resolved): octopus: qa: test_data_scan.TestDataScan.test_pg_files AssertionError...
- 08:56 AM Backport #52440 (In Progress): pacific: qa: add testing for "ms_mode" mount option
- 05:52 AM Backport #58409 (Resolved): quincy: doc: document the relevance of mds_namespace mount option
- 05:45 AM Feature #55197 (Resolved): cephfs-top: make cephfs-top display scrollable like top
- 05:23 AM Backport #57974 (Resolved): pacific: cephfs-top: make cephfs-top display scrollable like top
- 05:19 AM Bug #58677 (Fix Under Review): cephfs-top: test the current python version is supported
- 05:10 AM Bug #58663 (Resolved): cephfs-top: drop curses.A_ITALIC
- 05:03 AM Backport #58667 (Resolved): quincy: cephfs-top: drop curses.A_ITALIC
- 05:00 AM Backport #58668 (Resolved): pacific: cephfs-top: drop curses.A_ITALIC
- 12:27 AM Bug #59185 (Fix Under Review): MDSMonitor: should batch propose osdmap/mdsmap changes via some fs...
- 12:23 AM Bug #59185 (Rejected): MDSMonitor: should batch propose osdmap/mdsmap changes via some fs commands
- Especially `fs fail`. Otherwise, you may see the MDS complain about blocklisting before it has a reasonable chance to...
03/27/2023
- 06:58 PM Bug #59183 (Fix Under Review): cephfs-data-scan: does not scan_links for lost+found
- 06:46 PM Bug #59183 (Resolved): cephfs-data-scan: does not scan_links for lost+found
- Importantly, scan_links corrects the placeholder SNAP_HEAD for the first dentry metadata. If lost+found is skipped, t...
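For context, scan_links is the final pass of the usual cephfs-data-scan recovery sequence; a sketch of that order (the data pool name is a placeholder):

    # Documented recovery order; the fix tracked here is about scan_links
    # also visiting lost+found.
    import subprocess

    for step in (["scan_extents", "cephfs_data"],
                 ["scan_inodes", "cephfs_data"],
                 ["scan_links"]):
        subprocess.run(["cephfs-data-scan", *step], check=True)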
- 01:52 PM Bug #58597: The MDS crashes when deleting a specific file
- Tobias Reinhard wrote:
> Venky Shankar wrote:
> > Hi Tobias,
> >
> > Any update on using the tool? Were you able...
- 10:03 AM Bug #59169 (New): Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirr...
- Seen in Yuri's quincy run: https://pulpito.ceph.com/yuriw-2023-03-23_15:21:23-fs-quincy-release-distro-default-smithi...
03/24/2023
- 07:29 PM Bug #59163 (New): mds: stuck in up:rejoin when it cannot "open" missing directory inode
- tasks.cephfs.test_damage.TestDamage.test_object_deletion tests for damage when no clients are in the session list (fo...
- 10:36 AM Bug #58597: The MDS crashes when deleting a specific file
- Venky Shankar wrote:
> Hi Tobias,
>
> Any update on using the tool? Were you able to get the file system back onl...
- 09:59 AM Bug #57682: client: ERROR: test_reconnect_after_blocklisted
- Another instance - https://pulpito.ceph.com/pdonnell-2023-03-23_18:44:22-fs-wip-pdonnell-testing-20230323.162417-dist...
- 06:01 AM Feature #58216 (Rejected): cephfs: Add quota.max_files limit check in MDS side
- 02:41 AM Bug #48678: client: spins on tick interval
- This spin happens in:...
03/23/2023
- 01:39 PM Bug #58597: The MDS crashes when deleting a specific file
- Hi Tobias,
Any update on using the tool? Were you able to get the file system back online?
- 12:49 PM Bug #58949 (Rejected): qa: test_cephfs.test_disk_quota_exceeeded_error is racy
- Closing per previous comment.
- 12:45 PM Bug #59134 (Duplicate): mds: deadlock during unlink with multimds (postgres)
- 01:04 AM Bug #59134: mds: deadlock during unlink with multimds (postgres)
- This should be the same as https://tracker.ceph.com/issues/58340.
- 08:49 AM Bug #58962: ftruncate fails with EACCES on a read-only file created with write permissions
- Apologies for looking into this rather late, Bruno.
03/22/2023
03/21/2023
- 10:12 PM Bug #53615: qa: upgrade test fails with "timeout expired in wait_until_healthy"
- Another instance during this upgrade test:
/a/yuriw-2023-03-14_21:33:13-upgrade:octopus-x-quincy-release-distro-defa...
- 08:32 PM Bug #59119 (New): mds: segmentation fault during replay of snaptable updates
- For a standby-replay daemon:...
- 09:40 AM Backport #59112 (In Progress): reef: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 09:22 AM Backport #59112 (Resolved): reef: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- https://github.com/ceph/ceph/pull/50604