Activity
From 02/13/2023 to 03/14/2023
03/14/2023
- 05:18 PM Bug #58008 (Resolved): mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
- 05:18 PM Backport #58254 (Resolved): pacific: mds/PurgeQueue: don't consider filer_max_purge_ops when _cal...
- 03:49 PM Backport #58254: pacific: mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49656
merged
- 05:11 PM Bug #57359 (Resolved): mds/Server: -ve values cause unexpected client eviction while handling cli...
- 05:11 PM Backport #58601 (Resolved): pacific: mds/Server: -ve values cause unexpected client eviction whil...
- 02:57 PM Backport #58601: pacific: mds/Server: -ve values cause unexpected client eviction while handling ...
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49956
merged
- 04:50 PM Bug #58619 (In Progress): mds: client evict [-h|--help] evicts ALL clients
- 04:48 PM Bug #58030 (Resolved): mds: avoid ~mdsdir's scrubbing and reporting damage health status
- 04:48 PM Bug #58028 (Resolved): cephfs-top: Sorting doesn't work when the filesystems are removed and created
- 04:47 PM Bug #58031 (Resolved): cephfs-top: sorting/limit excepts when the filesystems are removed and cre...
- 04:47 PM Feature #55121 (Resolved): cephfs-top: new options to limit and order-by
- 04:47 PM Bug #57620 (Resolved): mgr/volumes: addition of human-readable flag to volume info command
- 04:46 PM Bug #55234 (Resolved): snap_schedule: replace .snap with the client configured snap dir name
- 04:45 PM Backport #58079 (Resolved): quincy: cephfs-top: Sorting doesn't work when the filesystems are rem...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50151
merged
- 04:45 PM Backport #58074 (Resolved): quincy: cephfs-top: sorting/limit excepts when the filesystems are re...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50151
merged
- 04:44 PM Backport #57849 (Resolved): quincy: mgr/volumes: addition of human-readable flag to volume info c...
- 04:43 PM Backport #57849: quincy: mgr/volumes: addition of human-readable flag to volume info command
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48466
merged
- 04:42 PM Backport #58249 (Resolved): quincy: mds: avoid ~mdsdir's scrubbing and reporting damage health st...
- 04:41 PM Backport #57970 (Resolved): quincy: cephfs-top: new options to limit and order-by
- 04:40 PM Backport #57971 (Resolved): pacific: cephfs-top: new options to limit and order-by
- 03:43 PM Backport #57971: pacific: cephfs-top: new options to limit and order-by
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49303
merged
- 04:39 PM Backport #58073 (Resolved): pacific: cephfs-top: sorting/limit excepts when the filesystems are r...
- 03:43 PM Backport #58073: pacific: cephfs-top: sorting/limit excepts when the filesystems are removed and ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49303
merged
- 04:39 PM Backport #58078 (Resolved): pacific: cephfs-top: Sorting doesn't work when the filesystems are re...
- 03:44 PM Backport #58078: pacific: cephfs-top: Sorting doesn't work when the filesystems are removed and c...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49303
merged
- 04:38 PM Backport #58250 (Resolved): pacific: mds: avoid ~mdsdir's scrubbing and reporting damage health s...
- 03:45 PM Backport #58250: pacific: mds: avoid ~mdsdir's scrubbing and reporting damage health status
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49440
merged
- 04:37 PM Backport #57201 (Resolved): pacific: snap_schedule: replace .snap with the client configured snap...
- 04:14 PM Backport #57201: pacific: snap_schedule: replace .snap with the client configured snap dir name
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47726
merged
- 03:52 PM Backport #58349: pacific: MDS: scan_stray_dir doesn't walk through all stray inode fragment
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49669
merged
- 03:46 PM Backport #58323: pacific: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_...
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/49538
merged
- 03:45 PM Backport #57761: pacific: qa: test_scrub_pause_and_resume_with_abort failure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49458
merged
- 03:20 PM Bug #59067 (Resolved): mds: add cap acquisition throttled event to MDR
- Otherwise a blocked op won't show it's being blocked by the cap acquisition throttle.
Write a test that verifies t...
- 03:04 PM Backport #58598: pacific: mon: prevent allocating snapids allocated for CephFS
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50050
merged
- 02:59 PM Backport #58668: pacific: cephfs-top: drop curses.A_ITALIC
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50029
merged
- 02:58 PM Backport #57728: pacific: Quincy 17.2.3 pybind/mgr/status: assert metadata failed
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49966
merged
- 02:58 PM Bug #58082 (Resolved): cephfs:filesystem became read only after Quincy upgrade
- 02:57 PM Backport #58608 (Resolved): pacific: cephfs:filesystem became read only after Quincy upgrade
- 02:56 PM Backport #58608: pacific: cephfs:filesystem became read only after Quincy upgrade
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49941
merged
- 02:57 PM Backport #58603: pacific: client stalls during vstart_runner test
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49944
merged
- 02:53 PM Backport #58346: pacific: Thread md_log_replay is hanged for ever.
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49671
merged
- 02:44 PM Feature #18475 (Resolved): qa: run xfstests in the nightlies
- Related tickets -
https://tracker.ceph.com/issues/58945
https://tracker.ceph.com/issues/58938
- 12:46 PM Backport #59000 (In Progress): quincy: cephfs_mirror: local and remote dir root modes are not same
- 10:19 AM Backport #59020 (In Progress): reef: cephfs-data-scan: multiple data pools are not supported
- 09:56 AM Backport #59020 (New): reef: cephfs-data-scan: multiple data pools are not supported
- 09:42 AM Backport #59020 (Duplicate): reef: cephfs-data-scan: multiple data pools are not supported
- 09:51 AM Backport #59019 (In Progress): pacific: cephfs-data-scan: multiple data pools are not supported
- 09:44 AM Backport #59018 (In Progress): quincy: cephfs-data-scan: multiple data pools are not supported
03/13/2023
- 04:54 PM Backport #59041 (In Progress): quincy: libcephfs: client needs to update the mtime and change att...
- https://github.com/ceph/ceph/pull/50730
- 04:54 PM Backport #59040 (Rejected): pacific: libcephfs: client needs to update the mtime and change attr ...
- 04:53 PM Backport #59039 (Duplicate): pacific: libcephfs: client needs to update the mtime and change attr...
- 04:53 PM Backport #59038 (Duplicate): quincy: libcephfs: client needs to update the mtime and change attr ...
- 04:53 PM Backport #59037 (In Progress): quincy: MDS allows a (kernel) client to exceed the xattrs key/valu...
- https://github.com/ceph/ceph/pull/50981
- 04:53 PM Backport #59036 (Duplicate): pacific: MDS allows a (kernel) client to exceed the xattrs key/value...
- 04:53 PM Backport #59035 (New): pacific: MDS allows a (kernel) client to exceed the xattrs key/value limits
- 04:53 PM Backport #59034 (Duplicate): quincy: MDS allows a (kernel) client to exceed the xattrs key/value ...
- 04:52 PM Backport #59033 (Duplicate): quincy: Test failure: test_client_cache_size (tasks.cephfs.test_clie...
- 04:52 PM Backport #59032 (Resolved): pacific: Test failure: test_client_cache_size (tasks.cephfs.test_clie...
- https://github.com/ceph/ceph/pull/51509
- 04:52 PM Backport #59031 (Duplicate): pacific: Test failure: test_client_cache_size (tasks.cephfs.test_cli...
- 04:52 PM Backport #59030 (In Progress): quincy: Test failure: test_client_cache_size (tasks.cephfs.test_cl...
- https://github.com/ceph/ceph/pull/51049
- 04:51 PM Backport #59024 (Duplicate): quincy: mds: warning `clients failing to advance oldest client/flush...
- 04:51 PM Backport #59023 (Resolved): pacific: mds: warning `clients failing to advance oldest client/flush...
- https://github.com/ceph/ceph/pull/50811
- 04:51 PM Backport #59022 (Duplicate): pacific: mds: warning `clients failing to advance oldest client/flus...
- 04:51 PM Backport #59021 (Resolved): quincy: mds: warning `clients failing to advance oldest client/flush ...
- https://github.com/ceph/ceph/pull/50785
- 04:51 PM Backport #59020 (Resolved): reef: cephfs-data-scan: multiple data pools are not supported
- https://github.com/ceph/ceph/pull/50524
- 04:50 PM Backport #59019 (Resolved): pacific: cephfs-data-scan: multiple data pools are not supported
- https://github.com/ceph/ceph/pull/50523
- 04:50 PM Backport #59018 (Resolved): quincy: cephfs-data-scan: multiple data pools are not supported
- https://github.com/ceph/ceph/pull/50522
- 04:50 PM Backport #59017 (Resolved): pacific: snap-schedule: handle non-existent path gracefully during sn...
- https://github.com/ceph/ceph/pull/51246
- 04:50 PM Backport #59016 (Resolved): quincy: snap-schedule: handle non-existent path gracefully during sna...
- https://github.com/ceph/ceph/pull/50780
- 04:49 PM Backport #59015 (Rejected): pacific: Command failed (workunit test fs/quota/quota.sh) on smithi08...
- https://github.com/ceph/ceph/pull/52580
- 04:49 PM Backport #59014 (In Progress): quincy: Command failed (workunit test fs/quota/quota.sh) on smithi...
- https://github.com/ceph/ceph/pull/52579
- 04:47 PM Backport #59007 (Resolved): pacific: mds stuck in 'up:replay' and crashed.
- https://github.com/ceph/ceph/pull/50725
- 04:47 PM Backport #59006 (Resolved): quincy: mds stuck in 'up:replay' and crashed.
- https://github.com/ceph/ceph/pull/50724
- 04:46 PM Backport #59003 (Resolved): pacific: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- https://github.com/ceph/ceph/pull/51039
- 04:46 PM Backport #59002 (Resolved): quincy: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- https://github.com/ceph/ceph/pull/50786
- 04:46 PM Backport #59001 (Resolved): pacific: cephfs_mirror: local and remote dir root modes are not same
- https://github.com/ceph/ceph/pull/53270
- 04:46 PM Backport #59000 (Resolved): quincy: cephfs_mirror: local and remote dir root modes are not same
- https://github.com/ceph/ceph/pull/50528
- 04:45 PM Backport #58994 (Resolved): pacific: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- https://github.com/ceph/ceph/pull/50988
- 04:45 PM Backport #58993 (Resolved): quincy: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- https://github.com/ceph/ceph/pull/50989
- 04:44 PM Backport #58992 (Rejected): pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- https://github.com/ceph/ceph/pull/52584
- 04:44 PM Backport #58991 (In Progress): quincy: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- https://github.com/ceph/ceph/pull/52585
- 04:43 PM Backport #58986 (Resolved): pacific: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- https://github.com/ceph/ceph/pull/50597
- 04:43 PM Backport #58985 (Resolved): quincy: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- https://github.com/ceph/ceph/pull/50595
- 04:43 PM Backport #58984 (Resolved): pacific: cephfs-top: navigate to home screen when no fs
- https://github.com/ceph/ceph/pull/50737
- 04:43 PM Backport #58983 (Resolved): quincy: cephfs-top: navigate to home screen when no fs
- https://github.com/ceph/ceph/pull/50731
- 04:18 PM Bug #58971 (Fix Under Review): mon/MDSMonitor: do not trigger propose on error from prepare_update
- 04:16 PM Bug #58971 (Pending Backport): mon/MDSMonitor: do not trigger propose on error from prepare_update
- https://github.com/ceph/ceph/pull/50404#discussion_r1133791746
- 02:24 PM Feature #55940: quota: accept values in human readable format as well
- Just FYI - follow up PR: https://github.com/ceph/ceph/pull/50493
- 02:00 PM Bug #54501 (Pending Backport): libcephfs: client needs to update the mtime and change attr when s...
- 01:55 PM Bug #58489 (Pending Backport): mds stuck in 'up:replay' and crashed.
- 01:53 PM Bug #58678 (Pending Backport): cephfs_mirror: local and remote dir root modes are not same
- 09:32 AM Bug #58962: ftruncate fails with EACCES on a read-only file created with write permissions
- Forgot to mention that this has been recently verified on v15.2.17 and v16.2.11.
- 08:35 AM Bug #58962 (New): ftruncate fails with EACCES on a read-only file created with write permissions
- When creating a new file opened for writing but with the mode set to read-only, such as 400 or 444, ftruncate fails with ...
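A minimal reproduction sketch based on the description above (an editor's assumption, not taken from the tracker; the mount point and file name are placeholders):
<pre>
// Create a new file opened for writing but with read-only mode bits (0400), then
// call ftruncate() on the still-open descriptor; per the report this fails with
// EACCES on the affected versions even though the descriptor is writable.
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>

int main() {
  const char *path = "/mnt/cephfs/truncate-test";            // placeholder CephFS path
  int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0400);   // read-only mode bits
  if (fd < 0) {
    perror("open");
    return 1;
  }
  if (ftruncate(fd, 0) < 0)
    std::printf("ftruncate failed: %s\n", std::strerror(errno));
  else
    std::printf("ftruncate succeeded\n");
  close(fd);
  return 0;
}
</pre>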
03/10/2023
- 12:42 PM Bug #58651 (Pending Backport): mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 11:50 AM Bug #58949: qa: test_cephfs.test_disk_quota_exceeeded_error is racy
- Venky, this is most likely not a race condition. Your testing branch had a patch that fixes the quota issue. See - https://gi...
- 11:24 AM Bug #58949 (Rejected): qa: test_cephfs.test_disk_quota_exceeeded_error is racy
- test_cephfs.test_disk_quota_exceeeded_error's failure has been reported here before - https://tracker.ceph.com/issues...
- 11:32 AM Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
- Venky Shankar wrote:
> Rishabh Dave wrote:
> > @test_disk_quota_exceeeded_error@ from @src/test/pybind/test_cephfs....
- 04:28 AM Bug #58220 (Pending Backport): Command failed (workunit test fs/quota/quota.sh) on smithi081 with...
- 03:53 AM Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
- Rishabh Dave wrote:
> @test_disk_quota_exceeeded_error@ from @src/test/pybind/test_cephfs.py@ fails on this teutholo...
- 11:22 AM Bug #53573 (Resolved): qa: test new clients against older Ceph clusters
- 11:17 AM Bug #58095 (Pending Backport): snap-schedule: handle non-existent path gracefully during snapshot...
- 04:42 AM Bug #58717 (Pending Backport): client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- 02:26 AM Bug #50016: qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
- See this again in http://qa-proxy.ceph.com/teuthology/yuriw-2023-03-08_20:32:29-fs-wip-yuri3-testing-2023-03-08-0800-...
03/09/2023
- 06:41 PM Bug #58945: qa: xfstests-dev's generic test suite has 20 failures with fuse client
- Related ticket - https://tracker.ceph.com/issues/58742
- 06:40 PM Bug #58945 (New): qa: xfstests-dev's generic test suite has 20 failures with fuse client
- "PR #45960":https://github.com/ceph/ceph/pull/45960 enables running tests from xfstests-dev against CephFS. For FUSE ...
- 03:38 PM Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
- @test_disk_quota_exceeeded_error@ from @src/test/pybind/test_cephfs.py@ fails on this teuthology run - http://pulpito...
- 02:22 PM Bug #58814 (Pending Backport): cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 01:07 PM Bug #57087: qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
- Seen in main branch integration test: https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-2...
- 12:54 PM Bug #58823 (Pending Backport): cephfs-top: navigate to home screen when no fs
03/08/2023
- 04:52 PM Bug #58795 (Resolved): cephfs-shell: update path to cephfs-shell since its location has changed
- 02:38 PM Bug #58795 (Fix Under Review): cephfs-shell: update path to cephfs-shell since its location has c...
- 02:32 PM Bug #58795 (Resolved): cephfs-shell: update path to cephfs-shell since its location has changed
- 02:37 PM Bug #55354 (Resolved): cephfs: xfstests-dev can't be run against fuse mounted cephfs
- 02:32 PM Bug #58726 (Pending Backport): Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- 02:16 PM Bug #58938 (New): qa: xfstests-dev's generic test suite has 7 failures with kclient
- "PR #45960":https://github.com/ceph/ceph/pull/45960 enables running tests from xfstests-dev against CephFS. For kerne...
- 07:08 AM Feature #55940 (Resolved): quota: accept values in human readable format as well
- 02:43 AM Bug #55725 (Pending Backport): MDS allows a (kernel) client to exceed the xattrs key/value limits
- 02:39 AM Bug #57985 (Pending Backport): mds: warning `clients failing to advance oldest client/flush tid` ...
- 02:37 AM Bug #56446 (Pending Backport): Test failure: test_client_cache_size (tasks.cephfs.test_client_lim...
- 02:35 AM Bug #58934: snaptest-git-ceph.sh failure with ceph-fuse
- Hmmm... similar to https://tracker.ceph.com/issues/17172
- 02:31 AM Bug #58934 (Duplicate): snaptest-git-ceph.sh failure with ceph-fuse
- https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/...
- 02:04 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
- Milind, PTAL.
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testin...
03/07/2023
- 02:13 PM Bug #57064 (Need More Info): qa: test_add_ancestor_and_child_directory failure
- Looking at the logs, the mirror daemon is missing and thus the command failed...
- 08:52 AM Bug #57064: qa: test_add_ancestor_and_child_directory failure
- Dhairya Parmar wrote:
> I tried digging into this failure, while looking at teuthology log, I see
> [...]
>
> I...
- 02:59 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Rishabh Dave wrote:
> http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-...
03/06/2023
- 04:25 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/...
- 04:18 PM Bug #58564: workunit suites/dbench.sh fails with error code 1
- http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/...
- 03:17 PM Bug #56830 (Can't reproduce): crash: cephfs::mirror::PeerReplayer::pick_directory()
- After thoroughly assessing the issue with the limited available data in the tracker, it's hard to tell what led to t...
- 03:16 PM Bug #56830: crash: cephfs::mirror::PeerReplayer::pick_directory()
- Dhairya Parmar wrote:
> Issue seems to be at:
> [...]
> @ https://github.com/ceph/ceph/blob/main/src/tools/cephfs_...
- 01:49 PM Bug #56830 (Fix Under Review): crash: cephfs::mirror::PeerReplayer::pick_directory()
- See the update in the PR.
- 09:07 AM Feature #58877: mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
- I did try to reproduce the issue mentioned in the tracker, due to which the feature is required. I found that:
1. If ...
- 04:15 AM Bug #58029 (Pending Backport): cephfs-data-scan: multiple data pools are not supported
- 12:53 AM Backport #58609 (Resolved): quincy: cephfs:filesystem became read only after Quincy upgrade
- 12:52 AM Backport #58602 (Resolved): quincy: client stalls during vstart_runner test
03/03/2023
- 08:01 AM Backport #58865 (In Progress): quincy: cephfs-top: Sort menu doesn't show 'No filesystem availabl...
- 07:40 AM Bug #57280 (Resolved): qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fe...
- backport merged
- 07:38 AM Backport #58604 (Resolved): quincy: qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails -...
- 07:35 AM Backport #58253 (Resolved): quincy: mds/PurgeQueue: don't consider filer_max_purge_ops when _calc...
- 07:33 AM Bug #58138 (Resolved): "ceph nfs cluster info" shows junk data for non-existent cluster
- backport merged
- 06:59 AM Backport #58348 (Resolved): quincy: "ceph nfs cluster info" shows junk data for non-existent clus...
03/02/2023
- 10:43 PM Backport #58599: quincy: mon: prevent allocating snapids allocated for CephFS
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50090
merged
- 10:40 PM Backport #58604: quincy: qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to ...
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49957
merged
- 10:39 PM Backport #58602: quincy: client stalls during vstart_runner test
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49942
merged
- 10:39 PM Backport #58609: quincy: cephfs:filesystem became read only after Quincy upgrade
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49939
merged
- 10:38 PM Backport #58253: quincy: mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49655
merged
- 10:37 PM Backport #58348: quincy: "ceph nfs cluster info" shows junk data for non-existent cluster
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49654
merged
- 10:31 PM Backport #57970: quincy: cephfs-top: new options to limit and order-by
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50151
merged
- 01:38 PM Feature #57090 (Resolved): MDSMonitor,mds: add MDSMap flag to prevent clients from connecting
- 10:10 AM Bug #58727: quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)
- Not reproducible in main branch (wip-vshankar-testing-20230228.105516 is just a couple of test PRs on top of main bra...
- 07:05 AM Bug #58727: quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)
- ... and the kclient sees the error:...
- 06:48 AM Bug #58727: quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)
- The MDS did reply back with ENOSPC to the client:...
- 07:42 AM Bug #58760 (Resolved): kclient: xfstests-dev generic/317 failed
- 05:37 AM Bug #58760: kclient: xfstests-dev generic/317 failed
- qa changes are: https://github.com/ceph/ceph/pull/50217
03/01/2023
- 05:59 PM Bug #56830: crash: cephfs::mirror::PeerReplayer::pick_directory()
- Issue seems to be at:...
- 10:43 AM Feature #57090: MDSMonitor,mds: add MDSMap flag to prevent clients from connecting
- should we backport this?
- 10:08 AM Bug #57064: qa: test_add_ancestor_and_child_directory failure
- I tried digging into this failure, while looking at teuthology log, I see ...
02/28/2023
- 11:03 PM Fix #58758: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid'
- /a/yuriw-2023-02-24_17:50:19-rados-main-distro-default-smithi/7186744
- 04:05 PM Bug #44100: cephfs rsync kworker high load.
- Has this been released? I believe I'm hitting it with Ceph 17.2.5 and kernel 5.4.0-136 on Ubuntu 20.04.
- 02:12 PM Backport #58600 (Resolved): quincy: mds/Server: -ve values cause unexpected client eviction while...
- https://github.com/ceph/ceph/pull/48252, which contained the relevant commit, has been merged
- 02:09 PM Backport #58881 (Resolved): pacific: mds: Jenkins fails with skipping unrecognized type MClientRe...
- https://github.com/ceph/ceph/pull/50733
- 02:09 PM Backport #58880 (Resolved): quincy: mds: Jenkins fails with skipping unrecognized type MClientReq...
- https://github.com/ceph/ceph/pull/50732
- 02:00 PM Bug #58853 (Pending Backport): mds: Jenkins fails with skipping unrecognized type MClientRequest:...
- 02:00 PM Bug #58853: mds: Jenkins fails with skipping unrecognized type MClientRequest::Release
- Backport note: delay the backport till enough tests have been run on main for this change.
- 01:14 PM Bug #58878 (New): mds: FAILED ceph_assert(trim_to > trimming_pos)
- One of the MDSs crashed with the following backtrace:...
- 01:09 PM Feature #58877 (Rejected): mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
- In situations where the subvolume metadata is missing/corrupted/untrustworthy, having a way to regenerate it would be h...
- 01:06 PM Bug #56522 (Resolved): Do not abort MDS on unknown messages
- PR merged into main; both backport PRs merged into their respective branches.
- 01:05 PM Backport #57665 (Resolved): pacific: Do not abort MDS on unknown messages
- 01:04 PM Backport #57666 (Resolved): quincy: Do not abort MDS on unknown messages
- PR merged
- 01:04 PM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- Hey Andras,
Andras Pataki wrote:
> I'm experimenting on reproducing the problem on demand. Once I have a way to ...
- 12:24 PM Feature #58835 (Fix Under Review): mds: add an asok command to dump export states
- 07:21 AM Backport #58866 (Resolved): pacific: cephfs-top: Sort menu doesn't show 'No filesystem available'...
- https://github.com/ceph/ceph/pull/50596
- 07:21 AM Backport #58865 (Resolved): quincy: cephfs-top: Sort menu doesn't show 'No filesystem available' ...
- https://github.com/ceph/ceph/pull/50365
- 07:14 AM Bug #58813 (Pending Backport): cephfs-top: Sort menu doesn't show 'No filesystem available' scree...
02/27/2023
- 04:59 PM Backport #51936 (Rejected): octopus: mds: improve debugging for mksnap denial
- EOL
- 04:58 PM Backport #53715 (Resolved): octopus: mds: fails to reintegrate strays if destdn's directory is fu...
- 04:58 PM Backport #53735 (Rejected): octopus: mds: recursive scrub does not trigger stray reintegration
- EOL
- 04:57 PM Bug #51905 (Resolved): qa: "error reading sessionmap 'mds1_sessionmap'"
- 04:15 PM Bug #53194 (Resolved): mds: opening connection to up:replay/up:creating daemon causes message drop
- 04:15 PM Backport #53446 (Rejected): octopus: mds: opening connection to up:replay/up:creating daemon caus...
- EOL
- 04:14 PM Bug #56666 (Resolved): mds: standby-replay daemon always removed in MDSMonitor::prepare_beacon
- 04:14 PM Bug #49605 (Resolved): pybind/mgr/volumes: deadlock on async job hangs finisher thread
- 05:40 AM Bug #53970 (Rejected): qa/vstart_runner: run_python() functions interface are not same
- 05:14 AM Bug #58853 (Fix Under Review): mds: Jenkins fails with skipping unrecognized type MClientRequest:...
- 05:13 AM Bug #58853 (Resolved): mds: Jenkins fails with skipping unrecognized type MClientRequest::Release
- https://jenkins.ceph.com/job/ceph-pull-requests/111277/console
https://jenkins.ceph.com/job/ceph-pull-requests/11127...
- 01:22 AM Bug #55332 (Resolved): Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- 01:21 AM Backport #57836 (Resolved): pacific: Failure in snaptest-git-ceph.sh (it's an async unlink/create...
- 01:20 AM Backport #57837 (Resolved): quincy: Failure in snaptest-git-ceph.sh (it's an async unlink/create ...
02/25/2023
- 04:45 PM Backport #57837: quincy: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48452
merged
02/23/2023
- 06:51 PM Feature #58129 (In Progress): mon/FSCommands: support swapping file systems by name
- 06:22 PM Backport #58251: quincy: mount.ceph: will fail with old kernels
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49404
merged
- 02:59 AM Feature #58835 (Fix Under Review): mds: add an asok command to dump export states
- A task to export a subtree may be blocked; use this command to find out what's going on.
- 02:28 AM Bug #58760: kclient: xfstests-dev generic/317 failed
- I reproduced it by:...
02/22/2023
- 08:52 PM Bug #58744: qa: intermittent nfs test failures at nfs cluster creation
- /a/yuriw-2023-02-18_15:53:03-rados-wip-yuri5-testing-2023-02-17-1400-quincy-distro-default-smithi/7180649
- 12:47 PM Bug #58744 (Fix Under Review): qa: intermittent nfs test failures at nfs cluster creation
- 03:00 PM Bug #56830: crash: cephfs::mirror::PeerReplayer::pick_directory()
- Dhairya Parmar wrote:
> Venky Shankar wrote:
> > Dhairya,
> >
> > Please take a look at this. I think there is s...
- 01:00 PM Bug #56830: crash: cephfs::mirror::PeerReplayer::pick_directory()
- Venky Shankar wrote:
> Dhairya,
>
> Please take a look at this. I think there is some sort of race that is causin...
- 12:46 PM Fix #58758 (Fix Under Review): qa: fix testcase 'test_cluster_set_user_config_with_non_existing_c...
- 09:16 AM Backport #58826 (Resolved): pacific: mds: make num_fwd and num_retry to __u32
- https://github.com/ceph/ceph/pull/50733
- 09:16 AM Backport #58825 (Resolved): quincy: mds: make num_fwd and num_retry to __u32
- https://github.com/ceph/ceph/pull/50732
- 09:14 AM Bug #57854 (Pending Backport): mds: make num_fwd and num_retry to __u32
- 09:03 AM Bug #58394: nofail option in fstab not supported
- Brian Woods wrote:
> Interesting update, I noticed that even without the nofail, systems will continue to boot even ...
- 08:20 AM Bug #58823 (Resolved): cephfs-top: navigate to home screen when no fs
- Return to the home (All Filesystem Info) screen when all the filesystems are removed while waiting for a key in...
02/21/2023
- 02:01 PM Bug #58754: qa: Test failure: test_subvolume_snapshot_info_if_orphan_clone (tasks.cephfs.test_vol...
- Duplicate of https://tracker.ceph.com/issues/57446
- 01:48 PM Bug #58754 (Duplicate): qa: Test failure: test_subvolume_snapshot_info_if_orphan_clone (tasks.cep...
- 01:55 PM Bug #58746 (Triaged): quincy: qa: VersionNotFoundError: Failed to fetch package version
- Maybe we require this - https://github.com/ceph/ceph/pull/49957
- 01:48 PM Bug #58756 (Duplicate): qa: error during scrub thrashing
- 01:46 PM Bug #58757 (Triaged): qa: Command failed (workunit test suites/fsstress.sh)
- 01:46 PM Bug #58757: qa: Command failed (workunit test suites/fsstress.sh)
- Maybe related to https://tracker.ceph.com/issues/58340
- 01:05 PM Bug #58813 (Fix Under Review): cephfs-top: Sort menu doesn't show 'No filesystem available' scree...
- 09:44 AM Bug #58813 (Resolved): cephfs-top: Sort menu doesn't show 'No filesystem available' screen when a...
- [1] https://github.com/ceph/ceph/blob/01ad87ef30f99cae14baa152cc0b7bdf6ec0a114/src/tools/cephfs/top/cephfs-top#L476
...
- 11:19 AM Bug #58814 (Fix Under Review): cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 09:59 AM Bug #58814 (Resolved): cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- Currently, fields/metrics of type `METRIC_TYPE_NONE` are not sorted when selected from the sort menu
- 05:12 AM Backport #58808: quincy: cephfs-top: add an option to dump the computed values to stdout
- This backport is waiting for https://tracker.ceph.com/issues/57970 to get merged to resolve cherry-pick conflicts.
- 04:07 AM Backport #58808 (Resolved): quincy: cephfs-top: add an option to dump the computed values to stdout
- 05:07 AM Backport #58807: pacific: cephfs-top: add an option to dump the computed values to stdout
- This backport is waiting for https://tracker.ceph.com/issues/57971 to get merged to resolve cherry-pick conflicts.
- 04:06 AM Backport #58807 (Resolved): pacific: cephfs-top: add an option to dump the computed values to stdout
- 04:06 AM Bug #57014 (Pending Backport): cephfs-top: add an option to dump the computed values to stdout
02/20/2023
- 01:30 PM Backport #55749 (In Progress): quincy: snap_schedule: remove subvolume(-group) interfaces
- 12:29 PM Backport #55749 (New): quincy: snap_schedule: remove subvolume(-group) interfaces
- re-doing backport
- 08:55 AM Bug #57244: [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ino 0x10000000003 p...
- As a workaround to dismiss the warnings, please disable *mds_cap_revoke_eviction_timeout* and restart or failover the ...
- 04:08 AM Bug #57244: [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ino 0x10000000003 p...
- Usually this is a false alarm. But if *mds_cap_revoke_eviction_timeout* is enabled, the corresponding c...
- 05:03 AM Bug #58795 (Pending Backport): cephfs-shell: update path to cephfs-shell since its location has c...
- Commit @dc69033763cc116c6ccdf1f97149a74248691042@ changes the location of @cephfs-shell@ from @<CEPH-REPO-ROOT>/src/tools...
02/18/2023
- 12:54 AM Backport #57946 (Resolved): quincy: cephfs-top: make cephfs-top display scrollable like top
- 12:30 AM Backport #57572 (Resolved): quincy: client: do not uninline data for read
- 12:30 AM Bug #57126 (Resolved): client: abort the client daemons when we couldn't invalidate the dentry ca...
- 12:29 AM Bug #56249 (Resolved): crash: int Client::_do_remount(bool): abort
- 12:29 AM Backport #57394 (Resolved): quincy: crash: int Client::_do_remount(bool): abort
- 12:29 AM Backport #57392 (Resolved): quincy: client: abort the client daemons when we couldn't invalidate ...
02/17/2023
- 04:58 PM Backport #58079 (In Progress): quincy: cephfs-top: Sorting doesn't work when the filesystems are ...
- 04:58 PM Backport #58074 (In Progress): quincy: cephfs-top: sorting/limit excepts when the filesystems are...
- 04:57 PM Backport #57970 (In Progress): quincy: cephfs-top: new options to limit and order-by
- 03:37 PM Backport #57946: quincy: cephfs-top: make cephfs-top display scrollable like top
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48677
merged
- 03:36 PM Backport #57874: quincy: Permissions of the .snap directory do not inherit ACLs
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48563
merged
- 03:35 PM Backport #57879: quincy: NFS client unable to see newly created files when listing directory cont...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48520
merged
- 03:34 PM Backport #57820: quincy: cephfs-data-scan: scan_links is not verbose enough
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48442
merged
- 03:32 PM Backport #57670: quincy: mds: damage table only stores one dentry per dirfrag
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48261
merged
- 03:31 PM Backport #57572: quincy: client: do not uninline data for read
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48132
merged
- 03:30 PM Backport #57392: quincy: client: abort the client daemons when we couldn't invalidate the dentry ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48110
merged
- 03:30 PM Backport #57394: quincy: crash: int Client::_do_remount(bool): abort
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48107
merged
- 01:28 PM Bug #58760 (Resolved): kclient: xfstests-dev generic/317 failed
- http://qa-proxy.ceph.com/teuthology/xiubli-2023-02-16_01:24:45-fs:fscrypt-wip-fscrypt-20230215-0834-distro-default-sm...
- 12:56 PM Bug #58756: qa: error during scrub thrashing
- This should be a known issue with https://tracker.ceph.com/issues/58564:...
- 08:46 AM Bug #58756 (Duplicate): qa: error during scrub thrashing
- error during scrub thrashing in [1]
[1] https://pulpito.ceph.com/yuriw-2023-02-16_19:08:52-fs-wip-yuri3-testing-20...
- 10:18 AM Fix #58758 (Pending Backport): qa: fix testcase 'test_cluster_set_user_config_with_non_existing_c...
- http://pulpito.front.sepia.ceph.com/dparmar-2023-02-15_20:03:50-orch:cephadm-wip-58228-distro-default-smithi/7175071/...
- 09:55 AM Fix #58023 (In Progress): mds: do not evict clients if OSDs are laggy
- 09:23 AM Bug #58757 (Duplicate): qa: Command failed (workunit test suites/fsstress.sh)
- Command failed (workunit test suites/fsstress.sh) in [1]. Couldn't get more details as the editor hangs because of th...
- 06:20 AM Bug #58726 (Fix Under Review): Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- 05:43 AM Bug #58754 (Duplicate): qa: Test failure: test_subvolume_snapshot_info_if_orphan_clone (tasks.cep...
- test_subvolume_snapshot_info_if_orphan_clone fails in [1]
[1] https://pulpito.ceph.com/yuriw-2023-02-16_19:08:52-f...
02/16/2023
- 10:02 PM Bug #58394: nofail option in fstab not supported
- Interesting update, I noticed that even without the nofail, systems will continue to boot even if it can't complete a...
- 03:43 PM Backport #58350: quincy: MDS: scan_stray_dir doesn't walk through all stray inode fragment
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49670
merged
- 12:52 PM Bug #58746 (Duplicate): quincy: qa: VersionNotFoundError: Failed to fetch package version
- VersionNotFoundError: Failed to fetch package version in [1]
[1] http://qa-proxy.ceph.com/teuthology/yuriw-2023-02...
- 11:43 AM Bug #58744 (Resolved): qa: intermittent nfs test failures at nfs cluster creation
- While working on https://github.com/ceph/ceph/pull/49460, I found any random test would fail with "AssertionError: NF...
- 10:49 AM Bug #58220 (Fix Under Review): Command failed (workunit test fs/quota/quota.sh) on smithi081 with...
- 07:41 AM Bug #58220 (In Progress): Command failed (workunit test fs/quota/quota.sh) on smithi081 with stat...
- 09:54 AM Bug #58645 (Fix Under Review): Unclear error when creating new subvolume when subvolumegroup has ...
- 07:45 AM Backport #58322 (Resolved): quincy: mds crashed "assert_condition": "state == LOCK_XLOCK || stat...
- 12:35 AM Backport #58322: quincy: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_X...
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/49539
merged
- 12:26 AM Backport #58322: quincy: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_X...
- https://github.com/ceph/ceph/pull/49884 merged
- 04:36 AM Backport #57242 (Resolved): quincy: mgr/volumes: Clone operations are failing with Assertion Error
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47894
Merged.
- 04:36 AM Backport #57241 (Resolved): pacific: mgr/volumes: Clone operations are failing with Assertion Error
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47112
Merged.
- 01:12 AM Backport #58344 (Resolved): quincy: mds: switch submit_mutex to fair mutex for MDLog
- 12:36 AM Backport #58344: quincy: mds: switch submit_mutex to fair mutex for MDLog
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49633
merged - 12:40 AM Backport #58347: quincy: mds: fragment directory snapshots
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49673
merged - 12:38 AM Backport #58345: quincy: Thread md_log_replay is hanged for ever.
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49672
merged - 12:33 AM Backport #58249: quincy: mds: avoid ~mdsdir's scrubbing and reporting damage health status
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49473
merged - 12:29 AM Backport #57760: quincy: qa: test_scrub_pause_and_resume_with_abort failure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49459
merged
02/15/2023
- 01:22 PM Bug #58564 (In Progress): workunit suites/dbench.sh fails with error code 1
- 01:22 PM Bug #58564: workunit suites/dbench.sh fails with error code 1
- Reported this to the Linux kernel mailing list: https://lore.kernel.org/lkml/768be93b-a401-deab-600c-f946e0bd27fa@redhat.com...
- 02:01 AM Bug #58564: workunit suites/dbench.sh fails with error code 1
- It's a kernel cgroup core deadlock bug:...
- 10:49 AM Bug #58727 (New): quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)
- /a/yuriw-2023-02-13_20:43:24-fs-wip-yuri5-testing-2023-02-07-0850-quincy-distro-default-smithi/7171571...
- 10:48 AM Bug #58726 (Pending Backport): Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- /a/yuriw-2023-02-13_20:43:24-fs-wip-yuri5-testing-2023-02-07-0850-quincy-distro-default-smithi/7171607...
02/14/2023
- 12:49 PM Bug #58717 (Resolved): client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- If __setattrx() fails, it leaves the CEPH_CAP_FILE_WR caps reference held.
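A minimal sketch of the leak pattern being fixed, using hypothetical helper names rather than the actual Client::_write()/__setattrx() code: the write-cap reference taken at the start must also be dropped on the failure path.
<pre>
// Self-contained illustration: the reference count must return to zero even
// when the setattr step fails; omitting the release on that path is the leak.
#include <cassert>

struct Inode {
  int wr_cap_refs = 0;                 // stand-in for the client's CEPH_CAP_FILE_WR refs
};

static void get_wr_cap_ref(Inode &in) { ++in.wr_cap_refs; }
static void put_wr_cap_ref(Inode &in) { --in.wr_cap_refs; }
static int  do_setattrx(Inode &)      { return -1; }   // simulate the setattr step failing

static int write_sketch(Inode &in) {
  get_wr_cap_ref(in);                  // reference acquired before the write
  if (do_setattrx(in) < 0) {
    put_wr_cap_ref(in);                // the fix: release on the failure path too,
    return -1;                         // otherwise the WR caps reference leaks
  }
  // ... write path would go here ...
  put_wr_cap_ref(in);
  return 0;
}

int main() {
  Inode in;
  write_sketch(in);
  assert(in.wr_cap_refs == 0);         // holds only when the failure path releases
  return 0;
}
</pre>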
02/13/2023