Activity
From 02/07/2023 to 03/08/2023
03/08/2023
- 04:52 PM Bug #58795 (Resolved): cephfs-shell: update path to cephfs-shell since its location has changed
- 02:38 PM Bug #58795 (Fix Under Review): cephfs-shell: update path to cephfs-shell since its location has c...
- 02:32 PM Bug #58795 (Resolved): cephfs-shell: update path to cephfs-shell since its location has changed
- 02:37 PM Bug #55354 (Resolved): cephfs: xfstests-dev can't be run against fuse mounted cephfs
- 02:32 PM Bug #58726 (Pending Backport): Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- 02:16 PM Bug #58938 (New): qa: xfstests-dev's generic test suite has 7 failures with kclient
- "PR #45960":https://github.com/ceph/ceph/pull/45960 enables running tests from xfstests-dev against CephFS. For kerne...
- 07:08 AM Feature #55940 (Resolved): quota: accept values in human readable format as well
- 02:43 AM Bug #55725 (Pending Backport): MDS allows a (kernel) client to exceed the xattrs key/value limits
- 02:39 AM Bug #57985 (Pending Backport): mds: warning `clients failing to advance oldest client/flush tid` ...
- 02:37 AM Bug #56446 (Pending Backport): Test failure: test_client_cache_size (tasks.cephfs.test_client_lim...
- 02:35 AM Bug #58934: snaptest-git-ceph.sh failure with ceph-fuse
- Hmmm... similar to https://tracker.ceph.com/issues/17172
- 02:31 AM Bug #58934 (Duplicate): snaptest-git-ceph.sh failure with ceph-fuse
- https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/...
- 02:04 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
- Milind, PTAL.
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testin...
03/07/2023
- 02:13 PM Bug #57064 (Need More Info): qa: test_add_ancestor_and_child_directory failure
- Looking at the logs, the mirror daemon is missing and thus the command failed...
- 08:52 AM Bug #57064: qa: test_add_ancestor_and_child_directory failure
- Dhairya Parmar wrote:
> I tried digging into this failure, while looking at teuthology log, I see
> [...]
>
> I...
- 02:59 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Rishabh Dave wrote:
> http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-...
03/06/2023
- 04:25 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/...
- 04:18 PM Bug #58564: workunit suites/dbench.sh fails with error code 1
- http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/...
- 03:17 PM Bug #56830 (Can't reproduce): crash: cephfs::mirror::PeerReplayer::pick_directory()
- After thoroughly assessing the issue with the limited available data in the tracker, it's hard to tell what led to t...
- 03:16 PM Bug #56830: crash: cephfs::mirror::PeerReplayer::pick_directory()
- Dhairya Parmar wrote:
> Issue seems to be at:
> [...]
> @ https://github.com/ceph/ceph/blob/main/src/tools/cephfs_...
- 01:49 PM Bug #56830 (Fix Under Review): crash: cephfs::mirror::PeerReplayer::pick_directory()
- See updates in the PR.
- 09:07 AM Feature #58877: mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
- I did try to reproduce the issue mentioned in the tracker, due to which the feature is required. I found that:
1. If ...
- 04:15 AM Bug #58029 (Pending Backport): cephfs-data-scan: multiple data pools are not supported
- 12:53 AM Backport #58609 (Resolved): quincy: cephfs:filesystem became read only after Quincy upgrade
- 12:52 AM Backport #58602 (Resolved): quincy: client stalls during vstart_runner test
03/03/2023
- 08:01 AM Backport #58865 (In Progress): quincy: cephfs-top: Sort menu doesn't show 'No filesystem availabl...
- 07:40 AM Bug #57280 (Resolved): qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fe...
- backport merged
- 07:38 AM Backport #58604 (Resolved): quincy: qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails -...
- 07:35 AM Backport #58253 (Resolved): quincy: mds/PurgeQueue: don't consider filer_max_purge_ops when _calc...
- 07:33 AM Bug #58138 (Resolved): "ceph nfs cluster info" shows junk data for non-existent cluster
- backport merged
- 06:59 AM Backport #58348 (Resolved): quincy: "ceph nfs cluster info" shows junk data for non-existent clus...
03/02/2023
- 10:43 PM Backport #58599: quincy: mon: prevent allocating snapids allocated for CephFS
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50090
merged
- 10:40 PM Backport #58604: quincy: qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to ...
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49957
merged
- 10:39 PM Backport #58602: quincy: client stalls during vstart_runner test
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49942
merged
- 10:39 PM Backport #58609: quincy: cephfs:filesystem became read only after Quincy upgrade
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49939
merged
- 10:38 PM Backport #58253: quincy: mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49655
merged
- 10:37 PM Backport #58348: quincy: "ceph nfs cluster info" shows junk data for non-existent cluster
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49654
merged
- 10:31 PM Backport #57970: quincy: cephfs-top: new options to limit and order-by
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50151
merged
- 01:38 PM Feature #57090 (Resolved): MDSMonitor,mds: add MDSMap flag to prevent clients from connecting
- 10:10 AM Bug #58727: quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)
- Not reproducible in main branch (wip-vshankar-testing-20230228.105516 is just a couple of test PRs on top of main bra...
- 07:05 AM Bug #58727: quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)
- ... and the kclient sees the error:...
- 06:48 AM Bug #58727: quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)
- The MDS did reply back an ENOSPC to the client:...
- 07:42 AM Bug #58760 (Resolved): kclient: xfstests-dev generic/317 failed
- 05:37 AM Bug #58760: kclient: xfstests-dev generic/317 failed
- qa changes are: https://github.com/ceph/ceph/pull/50217
03/01/2023
- 05:59 PM Bug #56830: crash: cephfs::mirror::PeerReplayer::pick_directory()
- Issue seems to be at:...
- 10:43 AM Feature #57090: MDSMonitor,mds: add MDSMap flag to prevent clients from connecting
- should we backport this?
- 10:08 AM Bug #57064: qa: test_add_ancestor_and_child_directory failure
- I tried digging into this failure, while looking at teuthology log, I see ...
02/28/2023
- 11:03 PM Fix #58758: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid'
- /a/yuriw-2023-02-24_17:50:19-rados-main-distro-default-smithi/7186744
- 04:05 PM Bug #44100: cephfs rsync kworker high load.
- Has this been released? I believe I'm hitting it with Ceph 17.2.5 and kernel 5.4.0-136 on Ubuntu 20.04.
- 02:12 PM Backport #58600 (Resolved): quincy: mds/Server: -ve values cause unexpected client eviction while...
- https://github.com/ceph/ceph/pull/48252, which contained the relevant commit, has been merged
- 02:09 PM Backport #58881 (Resolved): pacific: mds: Jenkins fails with skipping unrecognized type MClientRe...
- https://github.com/ceph/ceph/pull/50733
- 02:09 PM Backport #58880 (Resolved): quincy: mds: Jenkins fails with skipping unrecognized type MClientReq...
- https://github.com/ceph/ceph/pull/50732
- 02:00 PM Bug #58853 (Pending Backport): mds: Jenkins fails with skipping unrecognized type MClientRequest:...
- 02:00 PM Bug #58853: mds: Jenkins fails with skipping unrecognized type MClientRequest::Release
- Backport note: delay the backport till enough tests have been run on main for this change.
- 01:14 PM Bug #58878 (New): mds: FAILED ceph_assert(trim_to > trimming_pos)
- One of the MDSs crashed with the following backtrace:...
- 01:09 PM Feature #58877 (Rejected): mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
- In situations where the subvolume metadata is missing/corrupted/untrustworthy, having a way to regenerate it would be h...
- 01:06 PM Bug #56522 (Resolved): Do not abort MDS on unknown messages
- PR merged into main; both backport PRs merged into their respective branches.
- 01:05 PM Backport #57665 (Resolved): pacific: Do not abort MDS on unknown messages
- 01:04 PM Backport #57666 (Resolved): quincy: Do not abort MDS on unknown messages
- PR merged
- 01:04 PM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- Hey Andras,
Andras Pataki wrote:
> I'm experimenting on reproducing the problem on demand. Once I have a way to ...
- 12:24 PM Feature #58835 (Fix Under Review): mds: add an asok command to dump export states
- 07:21 AM Backport #58866 (Resolved): pacific: cephfs-top: Sort menu doesn't show 'No filesystem available'...
- https://github.com/ceph/ceph/pull/50596
- 07:21 AM Backport #58865 (Resolved): quincy: cephfs-top: Sort menu doesn't show 'No filesystem available' ...
- https://github.com/ceph/ceph/pull/50365
- 07:14 AM Bug #58813 (Pending Backport): cephfs-top: Sort menu doesn't show 'No filesystem available' scree...
02/27/2023
- 04:59 PM Backport #51936 (Rejected): octopus: mds: improve debugging for mksnap denial
- EOL
- 04:58 PM Backport #53715 (Resolved): octopus: mds: fails to reintegrate strays if destdn's directory is fu...
- 04:58 PM Backport #53735 (Rejected): octopus: mds: recursive scrub does not trigger stray reintegration
- EOL
- 04:57 PM Bug #51905 (Resolved): qa: "error reading sessionmap 'mds1_sessionmap'"
- 04:15 PM Bug #53194 (Resolved): mds: opening connection to up:replay/up:creating daemon causes message drop
- 04:15 PM Backport #53446 (Rejected): octopus: mds: opening connection to up:replay/up:creating daemon caus...
- EOL
- 04:14 PM Bug #56666 (Resolved): mds: standby-replay daemon always removed in MDSMonitor::prepare_beacon
- 04:14 PM Bug #49605 (Resolved): pybind/mgr/volumes: deadlock on async job hangs finisher thread
- 05:40 AM Bug #53970 (Rejected): qa/vstart_runner: run_python() functions interface are not same
- 05:14 AM Bug #58853 (Fix Under Review): mds: Jenkins fails with skipping unrecognized type MClientRequest:...
- 05:13 AM Bug #58853 (Resolved): mds: Jenkins fails with skipping unrecognized type MClientRequest::Release
- https://jenkins.ceph.com/job/ceph-pull-requests/111277/console
https://jenkins.ceph.com/job/ceph-pull-requests/11127...
- 01:22 AM Bug #55332 (Resolved): Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- 01:21 AM Backport #57836 (Resolved): pacific: Failure in snaptest-git-ceph.sh (it's an async unlink/create...
- 01:20 AM Backport #57837 (Resolved): quincy: Failure in snaptest-git-ceph.sh (it's an async unlink/create ...
02/25/2023
- 04:45 PM Backport #57837: quincy: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48452
merged
02/23/2023
- 06:51 PM Feature #58129 (In Progress): mon/FSCommands: support swapping file systems by name
- 06:22 PM Backport #58251: quincy: mount.ceph: will fail with old kernels
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49404
merged
- 02:59 AM Feature #58835 (Fix Under Review): mds: add an asok command to dump export states
- The task to export a subtree may be blocked; use this command to find out what's going on.
- 02:28 AM Bug #58760: kclient: xfstests-dev generic/317 failed
- I reproduced it by:...
02/22/2023
- 08:52 PM Bug #58744: qa: intermittent nfs test failures at nfs cluster creation
- /a/yuriw-2023-02-18_15:53:03-rados-wip-yuri5-testing-2023-02-17-1400-quincy-distro-default-smithi/7180649
- 12:47 PM Bug #58744 (Fix Under Review): qa: intermittent nfs test failures at nfs cluster creation
- 03:00 PM Bug #56830: crash: cephfs::mirror::PeerReplayer::pick_directory()
- Dhairya Parmar wrote:
> Venky Shankar wrote:
> > Dhairya,
> >
> > Please take a look at this. I think there is s...
- 01:00 PM Bug #56830: crash: cephfs::mirror::PeerReplayer::pick_directory()
- Venky Shankar wrote:
> Dhairya,
>
> Please take a look at this. I think there is some sort of race that is causin...
- 12:46 PM Fix #58758 (Fix Under Review): qa: fix testcase 'test_cluster_set_user_config_with_non_existing_c...
- 09:16 AM Backport #58826 (Resolved): pacific: mds: make num_fwd and num_retry to __u32
- https://github.com/ceph/ceph/pull/50733
- 09:16 AM Backport #58825 (Resolved): quincy: mds: make num_fwd and num_retry to __u32
- https://github.com/ceph/ceph/pull/50732
- 09:14 AM Bug #57854 (Pending Backport): mds: make num_fwd and num_retry to __u32
- 09:03 AM Bug #58394: nofail option in fstab not supported
- Brian Woods wrote:
> Interesting update, I noticed that even without the nofail, systems will continue to boot even ...
- 08:20 AM Bug #58823 (Resolved): cephfs-top: navigate to home screen when no fs
- Return to the home (All Filesystem Info) screen when all the filesystems are removed while waiting for a key in...
02/21/2023
- 02:01 PM Bug #58754: qa: Test failure: test_subvolume_snapshot_info_if_orphan_clone (tasks.cephfs.test_vol...
- Duplicate of https://tracker.ceph.com/issues/57446
- 01:48 PM Bug #58754 (Duplicate): qa: Test failure: test_subvolume_snapshot_info_if_orphan_clone (tasks.cep...
- 01:55 PM Bug #58746 (Triaged): quincy: qa: VersionNotFoundError: Failed to fetch package version
- Maybe we require this - https://github.com/ceph/ceph/pull/49957
- 01:48 PM Bug #58756 (Duplicate): qa: error during scrub thrashing
- 01:46 PM Bug #58757 (Triaged): qa: Command failed (workunit test suites/fsstress.sh)
- 01:46 PM Bug #58757: qa: Command failed (workunit test suites/fsstress.sh)
- Maybe related to https://tracker.ceph.com/issues/58340
- 01:05 PM Bug #58813 (Fix Under Review): cephfs-top: Sort menu doesn't show 'No filesystem available' scree...
- 09:44 AM Bug #58813 (Resolved): cephfs-top: Sort menu doesn't show 'No filesystem available' screen when a...
- [1] https://github.com/ceph/ceph/blob/01ad87ef30f99cae14baa152cc0b7bdf6ec0a114/src/tools/cephfs/top/cephfs-top#L476
...
- 11:19 AM Bug #58814 (Fix Under Review): cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 09:59 AM Bug #58814 (Resolved): cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- Currently, the fields/metrics which are of `METRIC_TYPE_NONE` do not get sorted when selected from the sort menu
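A small, purely illustrative Python sketch of the idea behind handling such fields (not the actual cephfs-top patch; the row and field names are assumptions): a sort key that tolerates missing metric values instead of skipping the sort.

```python
# Illustrative sketch only (not the cephfs-top change): sort rows by a
# metric even when some rows have no usable value for it (modelled here
# as None, standing in for METRIC_TYPE_NONE fields).
def sort_clients(rows, field, descending=True):
    """Sort client rows by `field`, keeping rows with missing values last."""
    def key(row):
        value = row.get(field)
        missing = value is None
        v = 0 if missing else value
        # (missing?, value) groups None entries after real numbers,
        # regardless of sort direction.
        return (missing, -v if descending else v)
    return sorted(rows, key=key)

rows = [{"chit": 10}, {"chit": None}, {"chit": 42}]
print(sort_clients(rows, "chit"))   # -> 42, 10, then the None row
```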
- 05:12 AM Backport #58808: quincy: cephfs-top: add an option to dump the computed values to stdout
- This backport is waiting for https://tracker.ceph.com/issues/57970 to get merged to resolve cherry-pick conflicts.
- 04:07 AM Backport #58808 (Resolved): quincy: cephfs-top: add an option to dump the computed values to stdout
- 05:07 AM Backport #58807: pacific: cephfs-top: add an option to dump the computed values to stdout
- This backport is waiting for https://tracker.ceph.com/issues/57971 to get merged to resolve cherry-pick conflicts.
- 04:06 AM Backport #58807 (Resolved): pacific: cephfs-top: add an option to dump the computed values to stdout
- 04:06 AM Bug #57014 (Pending Backport): cephfs-top: add an option to dump the computed values to stdout
02/20/2023
- 01:30 PM Backport #55749 (In Progress): quincy: snap_schedule: remove subvolume(-group) interfaces
- 12:29 PM Backport #55749 (New): quincy: snap_schedule: remove subvolume(-group) interfaces
- re-doing backport
- 08:55 AM Bug #57244: [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ino 0x10000000003 p...
- As a workaround to dismiss the warnings, please disable *mds_cap_revoke_eviction_timeout* and restart or fail over the ...
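A minimal sketch of the workaround described above, assuming the standard ceph CLI is available on the admin host; setting the option to 0 disables the timeout, and the restart/failover step depends on the deployment.

```python
# Hedged sketch of the workaround above: disable the cap-revoke eviction
# timeout cluster-wide via the ceph CLI (a value of 0 disables it).
import subprocess

subprocess.run(
    ["ceph", "config", "set", "mds", "mds_cap_revoke_eviction_timeout", "0"],
    check=True,
)
# Then restart or fail over the active MDS as the comment suggests, e.g.
# with "ceph mds fail <rank>" or via your orchestrator (not shown here).
```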
- 04:08 AM Bug #57244: [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ino 0x10000000003 p...
- Usually this is a false alarm by default. But if *mds_cap_revoke_eviction_timeout* is enabled, the corresponding c...
- 05:03 AM Bug #58795 (Pending Backport): cephfs-shell: update path to cephfs-shell since its location has c...
- Commit @dc69033763cc116c6ccdf1f97149a74248691042@ changes the location of @cephfs-shell@ from @<CEPH-REPO-ROOT>/src/tools...
02/18/2023
- 12:54 AM Backport #57946 (Resolved): quincy: cephfs-top: make cephfs-top display scrollable like top
- 12:30 AM Backport #57572 (Resolved): quincy: client: do not uninline data for read
- 12:30 AM Bug #57126 (Resolved): client: abort the client daemons when we couldn't invalidate the dentry ca...
- 12:29 AM Bug #56249 (Resolved): crash: int Client::_do_remount(bool): abort
- 12:29 AM Backport #57394 (Resolved): quincy: crash: int Client::_do_remount(bool): abort
- 12:29 AM Backport #57392 (Resolved): quincy: client: abort the client daemons when we couldn't invalidate ...
02/17/2023
- 04:58 PM Backport #58079 (In Progress): quincy: cephfs-top: Sorting doesn't work when the filesystems are ...
- 04:58 PM Backport #58074 (In Progress): quincy: cephfs-top: sorting/limit excepts when the filesystems are...
- 04:57 PM Backport #57970 (In Progress): quincy: cephfs-top: new options to limit and order-by
- 03:37 PM Backport #57946: quincy: cephfs-top: make cephfs-top display scrollable like top
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48677
merged
- 03:36 PM Backport #57874: quincy: Permissions of the .snap directory do not inherit ACLs
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48563
merged
- 03:35 PM Backport #57879: quincy: NFS client unable to see newly created files when listing directory cont...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48520
merged
- 03:34 PM Backport #57820: quincy: cephfs-data-scan: scan_links is not verbose enough
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48442
merged
- 03:32 PM Backport #57670: quincy: mds: damage table only stores one dentry per dirfrag
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48261
merged
- 03:31 PM Backport #57572: quincy: client: do not uninline data for read
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48132
merged
- 03:30 PM Backport #57392: quincy: client: abort the client daemons when we couldn't invalidate the dentry ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48110
merged
- 03:30 PM Backport #57394: quincy: crash: int Client::_do_remount(bool): abort
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48107
merged
- 01:28 PM Bug #58760 (Resolved): kclient: xfstests-dev generic/317 failed
- http://qa-proxy.ceph.com/teuthology/xiubli-2023-02-16_01:24:45-fs:fscrypt-wip-fscrypt-20230215-0834-distro-default-sm...
- 12:56 PM Bug #58756: qa: error during scrub thrashing
- This should be a known issue with https://tracker.ceph.com/issues/58564:...
- 08:46 AM Bug #58756 (Duplicate): qa: error during scrub thrashing
- error during scrub thrashing in [1]
[1] https://pulpito.ceph.com/yuriw-2023-02-16_19:08:52-fs-wip-yuri3-testing-20...
- 10:18 AM Fix #58758 (Pending Backport): qa: fix testcase 'test_cluster_set_user_config_with_non_existing_c...
- http://pulpito.front.sepia.ceph.com/dparmar-2023-02-15_20:03:50-orch:cephadm-wip-58228-distro-default-smithi/7175071/...
- 09:55 AM Fix #58023 (In Progress): mds: do not evict clients if OSDs are laggy
- 09:23 AM Bug #58757 (Duplicate): qa: Command failed (workunit test suites/fsstress.sh)
- Command failed (workunit test suites/fsstress.sh) in [1]. Couldn't get more details as the editor hangs because of th...
- 06:20 AM Bug #58726 (Fix Under Review): Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- 05:43 AM Bug #58754 (Duplicate): qa: Test failure: test_subvolume_snapshot_info_if_orphan_clone (tasks.cep...
- test_subvolume_snapshot_info_if_orphan_clone fails in [1]
[1] https://pulpito.ceph.com/yuriw-2023-02-16_19:08:52-f...
02/16/2023
- 10:02 PM Bug #58394: nofail option in fstab not supported
- Interesting update, I noticed that even without the nofail, systems will continue to boot even if it can't complete a...
- 03:43 PM Backport #58350: quincy: MDS: scan_stray_dir doesn't walk through all stray inode fragment
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49670
merged
- 12:52 PM Bug #58746 (Duplicate): quincy: qa: VersionNotFoundError: Failed to fetch package version
- VersionNotFoundError: Failed to fetch package version in [1]
[1] http://qa-proxy.ceph.com/teuthology/yuriw-2023-02...
- 11:43 AM Bug #58744 (Resolved): qa: intermittent nfs test failures at nfs cluster creation
- While working on https://github.com/ceph/ceph/pull/49460, I found any random test would fail with "AssertionError: NF...
- 10:49 AM Bug #58220 (Fix Under Review): Command failed (workunit test fs/quota/quota.sh) on smithi081 with...
- 07:41 AM Bug #58220 (In Progress): Command failed (workunit test fs/quota/quota.sh) on smithi081 with stat...
- 09:54 AM Bug #58645 (Fix Under Review): Unclear error when creating new subvolume when subvolumegroup has ...
- 07:45 AM Backport #58322 (Resolved): quincy: mds crashed "assert_condition": "state == LOCK_XLOCK || stat...
- 12:35 AM Backport #58322: quincy: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_X...
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/49539
merged
- 12:26 AM Backport #58322: quincy: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_X...
- https://github.com/ceph/ceph/pull/49884 merged
- 04:36 AM Backport #57242 (Resolved): quincy: mgr/volumes: Clone operations are failing with Assertion Error
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47894
Merged.
- 04:36 AM Backport #57241 (Resolved): pacific: mgr/volumes: Clone operations are failing with Assertion Error
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47112
Merged.
- 01:12 AM Backport #58344 (Resolved): quincy: mds: switch submit_mutex to fair mutex for MDLog
- 12:36 AM Backport #58344: quincy: mds: switch submit_mutex to fair mutex for MDLog
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49633
merged
- 12:40 AM Backport #58347: quincy: mds: fragment directory snapshots
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49673
merged
- 12:38 AM Backport #58345: quincy: Thread md_log_replay is hanged for ever.
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49672
merged
- 12:33 AM Backport #58249: quincy: mds: avoid ~mdsdir's scrubbing and reporting damage health status
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49473
merged
- 12:29 AM Backport #57760: quincy: qa: test_scrub_pause_and_resume_with_abort failure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49459
merged
02/15/2023
- 01:22 PM Bug #58564 (In Progress): workunit suites/dbench.sh fails with error code 1
- 01:22 PM Bug #58564: workunit suites/dbench.sh fails with error code 1
- Reported this to linux kernel mail list: https://lore.kernel.org/lkml/768be93b-a401-deab-600c-f946e0bd27fa@redhat.com...
- 02:01 AM Bug #58564: workunit suites/dbench.sh fails with error code 1
- It's a kernel cgroup core deadlock bug:...
- 10:49 AM Bug #58727 (New): quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)
- /a/yuriw-2023-02-13_20:43:24-fs-wip-yuri5-testing-2023-02-07-0850-quincy-distro-default-smithi/7171571...
- 10:48 AM Bug #58726 (Pending Backport): Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- /a/yuriw-2023-02-13_20:43:24-fs-wip-yuri5-testing-2023-02-07-0850-quincy-distro-default-smithi/7171607...
02/14/2023
- 12:49 PM Bug #58717 (Resolved): client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- If __setattrx() fails, it leaves the CEPH_CAP_FILE_WR caps reference held (leaked).
02/13/2023
- 09:39 AM Backport #58599 (In Progress): quincy: mon: prevent allocating snapids allocated for CephFS
02/10/2023
- 01:01 PM Bug #53246: rhel 8.4 and centos stream unable to install cephfs-java
- /a/sseshasa-2023-02-10_10:52:51-rados-wip-sseshasa-quincy-2023-02-10-mclk-cost-fixes-1-distro-default-smithi/7167432/
- 10:03 AM Bug #57014: cephfs-top: add an option to dump the computed values to stdout
- This feature has become really important given that (backport) bugs creep in due to the lack of automated tests.
- 12:51 AM Feature #58680: libcephfs: clear the suid/sgid for fallocate
- Usually, when a file is changed by unprivileged users, the *suid/sgid* bits should be cleared to avoid possible attacks from ...
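A minimal illustration (my sketch, not the verification steps from the tracker) of how the intended behaviour could be checked from an unprivileged owner on a CephFS mount; the mount path is an assumption and the final assertion only holds once the fix is in place.

```python
# Hedged illustration: check that fallocate by an unprivileged owner clears
# the suid/sgid bits, which is what the fix intends.
import os
import stat

path = "/mnt/cephfs/fallocate-suid-test"   # hypothetical CephFS mount path

fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
try:
    os.fchmod(fd, 0o6755)                      # set suid + sgid + rwxr-xr-x
    before = stat.S_IMODE(os.fstat(fd).st_mode)

    os.posix_fallocate(fd, 0, 4096)            # modify the file's allocation

    after = stat.S_IMODE(os.fstat(fd).st_mode)
    print(f"mode before: {before:o}, after: {after:o}")
    # With the fix, suid/sgid should be gone after the fallocate call.
    assert not (after & (stat.S_ISUID | stat.S_ISGID)), "suid/sgid not cleared"
finally:
    os.close(fd)
```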
02/09/2023
- 05:04 PM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- I'm experimenting on reproducing the problem on demand. Once I have a way to make this bad looping ceph-fuse behavio...
- 04:13 AM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- Hi Andras,
Andras Pataki wrote:
> There definitely are config changes that are different from the defaults.
> Al...
- 02:10 PM Feature #58680: libcephfs: clear the suid/sgid for fallocate
- The steps to verify this:...
- 02:05 PM Feature #58680 (Fix Under Review): libcephfs: clear the suid/sgid for fallocate
- 02:00 PM Feature #58680 (Resolved): libcephfs: clear the suid/sgid for fallocate
- ...
- 11:00 AM Backport #58598 (In Progress): pacific: mon: prevent allocating snapids allocated for CephFS
- 10:03 AM Bug #58678 (Resolved): cephfs_mirror: local and remote dir root modes are not same
- The top-level dir modes of the local snap dir root and the remote snap dir root don't match
- 09:00 AM Bug #58677 (Resolved): cephfs-top: test the current python version is supported
- Test whether the current Python version is supported. Many curses constants and APIs are introduced in newer versions of py...
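A tiny illustrative sketch of the kind of guard this implies (not the actual cephfs-top code): bail out early on an unsupported interpreter and fall back when a newer curses constant such as A_ITALIC (added in Python 3.7) is missing. The minimum version shown is an assumption.

```python
# Illustrative sketch only: guard against missing curses features on older
# Python builds. The minimum version below is an assumption.
import curses
import sys

MIN_PYTHON = (3, 6)   # assumed minimum; the real requirement may differ

if sys.version_info[:2] < MIN_PYTHON:
    sys.exit(f"cephfs-top needs python >= {MIN_PYTHON[0]}.{MIN_PYTHON[1]}")

# curses.A_ITALIC only exists from Python 3.7 onwards; degrade to A_BOLD.
HEADER_ATTR = getattr(curses, "A_ITALIC", curses.A_BOLD)
```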
02/08/2023
- 02:56 PM Bug #56270 (Duplicate): crash: File "mgr/snap_schedule/module.py", in __init__: self.client = Sna...
- Based on this appearing to have been resolved, I'm closing this as a duplicate of #56269.
- 02:53 PM Bug #56269 (Resolved): crash: File "mgr/snap_schedule/module.py", in __init__: self.client = Snap...
- 02:38 PM Backport #58667 (In Progress): quincy: cephfs-top: drop curses.A_ITALIC
- 01:28 PM Backport #58667 (Resolved): quincy: cephfs-top: drop curses.A_ITALIC
- https://github.com/ceph/ceph/pull/48677
- 01:48 PM Backport #58668 (In Progress): pacific: cephfs-top: drop curses.A_ITALIC
- 01:28 PM Backport #58668 (Resolved): pacific: cephfs-top: drop curses.A_ITALIC
- https://github.com/ceph/ceph/pull/50029
- 01:23 PM Bug #58663 (Pending Backport): cephfs-top: drop curses.A_ITALIC
- 10:22 AM Bug #58663 (Fix Under Review): cephfs-top: drop curses.A_ITALIC
- 10:15 AM Bug #58663 (Resolved): cephfs-top: drop curses.A_ITALIC
- Drop curses.A_ITALIC, used in formatting the "Filesystem:" header, as it's not supported in older Python versions.
A_BOLD...
- 10:06 AM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- I see trim messages from object cacher...
02/07/2023
- 04:26 PM Bug #58564: workunit suites/dbench.sh fails with error code 1
- Hi, Xiubo. I didn't check dmesg so I don't know if it had a call trace in it.
- 02:33 PM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- There definitely are config changes that are different from the defaults.
All these objects are in a 6+3 erasure cod...
- 02:08 PM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- It’s also interesting that these appear to all be full-object reads, and the objects are larger than normal — 24 MiB,...
- 10:33 AM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- There seem to be a lot of cache misses for objects in ObjectCacher. The retry is coming from:...
- 04:10 AM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- Andras Pataki wrote:
> I've uploaded the full 1 minute ceph-fuse trace as:
> ceph-post-file: d56ebc47-4ef7-4f01-952...
- 04:57 AM Bug #58651 (Fix Under Review): mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 04:49 AM Bug #58651 (Resolved): mgr/volumes: avoid returning ESHUTDOWN for cli commands