Activity
From 02/27/2023 to 03/28/2023
03/28/2023
- 05:38 PM Bug #48502: ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
- /a/lflores-2023-03-27_20:42:09-rados-wip-aclamk-bs-elastic-shared-blob-quincy-distro-default-smithi/7221650
- 05:12 PM Fix #58758: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid'
- /a/lflores-2023-03-27_02:17:31-rados-wip-aclamk-bs-elastic-shared-blob-save-25.03.2023-a-distro-default-smithi/7221061
- 01:02 PM Backport #58808 (In Progress): quincy: cephfs-top: add an option to dump the computed values to s...
- Backport PR: https://github.com/ceph/ceph/pull/50717
- 12:08 PM Backport #59024 (Duplicate): quincy: mds: warning `clients failing to advance oldest client/flush...
- https://tracker.ceph.com/issues/59021
- 11:38 AM Backport #59033 (Duplicate): quincy: Test failure: test_client_cache_size (tasks.cephfs.test_clie...
- https://tracker.ceph.com/issues/59030
- 11:37 AM Backport #59034 (Duplicate): quincy: MDS allows a (kernel) client to exceed the xattrs key/value ...
- https://tracker.ceph.com/issues/59037
- 11:35 AM Backport #59038 (Duplicate): quincy: libcephfs: client needs to update the mtime and change attr ...
- https://tracker.ceph.com/issues/59041 (not sure why backport bot created two backport tickets for the same release)
- 11:31 AM Backport #59037: quincy: MDS allows a (kernel) client to exceed the xattrs key/value limits
- Rishabh, please take this one.
- 10:32 AM Bug #59188 (Fix Under Review): cephfs-top: cephfs-top -d <seconds> not working as expected
- 08:20 AM Bug #59188 (Resolved): cephfs-top: cephfs-top -d <seconds> not working as expected
- `cephfs-top -d [--delay]` raises an exception for float values due to the introduction of `curses.halfdelay()` in cephfs-top
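For context, `curses.halfdelay()` accepts only an integer number of tenths of a second (1-255), so a float `--delay` value must be converted before the call. A minimal sketch of such a conversion (illustrative only; the function name is hypothetical, not taken from the actual fix):

```python
import curses

def set_refresh_delay(delay_seconds: float) -> None:
    # curses.halfdelay() takes an int in tenths of a second (1..255);
    # passing a float raises an exception, which is the failure seen
    # with e.g. `cephfs-top -d 0.5`. Clamp and convert first.
    # (Assumes curses.initscr() has already been called.)
    tenths = max(1, min(255, int(delay_seconds * 10)))
    curses.halfdelay(tenths)
```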
- 10:03 AM Backport #58807 (In Progress): pacific: cephfs-top: add an option to dump the computed values to ...
- Backport PR: https://github.com/ceph/ceph/pull/50715.
- 09:26 AM Backport #57571 (Resolved): pacific: client: do not uninline data for read
- 09:25 AM Bug #56553 (Resolved): client: do not uninline data for read
- 09:22 AM Bug #54461 (Resolved): ffsb.sh test failure
- 09:22 AM Bug #50057 (Resolved): client: openned inodes counter is inconsistent
- 09:21 AM Backport #50184 (Rejected): octopus: client: openned inodes counter is inconsistent
- Nathan Cutler wrote:
> This ticket is for tracking the octopus backport of a follow-on fix for #46865 which was back...
- 09:12 AM Bug #50744 (Resolved): mds: journal recovery thread is possibly asserting with mds_lock not locked
- 09:11 AM Backport #50847 (Resolved): octopus: mds: journal recovery thread is possibly asserting with mds_...
- 09:09 AM Bug #58000 (Resolved): mds: switch submit_mutex to fair mutex for MDLog
- 09:08 AM Backport #58343 (Resolved): pacific: mds: switch submit_mutex to fair mutex for MDLog
- 09:04 AM Bug #50433 (Resolved): mds: Error ENOSYS: mds.a started profiler
- 09:03 AM Backport #50631 (Resolved): octopus: mds: Error ENOSYS: mds.a started profiler
- 09:01 AM Bug #50808 (Resolved): qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Items in the...
- 08:59 AM Backport #51323 (Resolved): octopus: qa: test_data_scan.TestDataScan.test_pg_files AssertionError...
- 08:56 AM Backport #52440 (In Progress): pacific: qa: add testing for "ms_mode" mount option
- 05:52 AM Backport #58409 (Resolved): quincy: doc: document the relevance of mds_namespace mount option
- 05:45 AM Feature #55197 (Resolved): cephfs-top: make cephfs-top display scrollable like top
- 05:23 AM Backport #57974 (Resolved): pacific: cephfs-top: make cephfs-top display scrollable like top
- 05:19 AM Bug #58677 (Fix Under Review): cephfs-top: test the current python version is supported
- 05:10 AM Bug #58663 (Resolved): cephfs-top: drop curses.A_ITALIC
- 05:03 AM Backport #58667 (Resolved): quincy: cephfs-top: drop curses.A_ITALIC
- 05:00 AM Backport #58668 (Resolved): pacific: cephfs-top: drop curses.A_ITALIC
- 12:27 AM Bug #59185 (Fix Under Review): MDSMonitor: should batch propose osdmap/mdsmap changes via some fs...
- 12:23 AM Bug #59185 (Rejected): MDSMonitor: should batch propose osdmap/mdsmap changes via some fs commands
- Especially `fs fail`. Otherwise, you may see the MDS complain about blocklisting before it has a reasonable chance to...
03/27/2023
- 06:58 PM Bug #59183 (Fix Under Review): cephfs-data-scan: does not scan_links for lost+found
- 06:46 PM Bug #59183 (Resolved): cephfs-data-scan: does not scan_links for lost+found
- Importantly, scan_links corrects the placeholder SNAP_HEAD for the first dentry metadata. If lost+found is skipped, t...
- 01:52 PM Bug #58597: The MDS crashes when deleting a specific file
- Tobias Reinhard wrote:
> Venky Shankar wrote:
> > Hi Tobias,
> >
> > Any update on using the tool? Were you able...
- 10:03 AM Bug #59169 (New): Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirr...
- Seen in Yuri's quincy run: https://pulpito.ceph.com/yuriw-2023-03-23_15:21:23-fs-quincy-release-distro-default-smithi...
03/24/2023
- 07:29 PM Bug #59163 (New): mds: stuck in up:rejoin when it cannot "open" missing directory inode
- tasks.cephfs.test_damage.TestDamage.test_object_deletion tests for damage when no clients are in the session list (fo...
- 10:36 AM Bug #58597: The MDS crashes when deleting a specific file
- Venky Shankar wrote:
> Hi Tobias,
>
> Any update on using the tool? Were you able to get the file system back onl...
- 09:59 AM Bug #57682: client: ERROR: test_reconnect_after_blocklisted
- Another instance - https://pulpito.ceph.com/pdonnell-2023-03-23_18:44:22-fs-wip-pdonnell-testing-20230323.162417-dist...
- 06:01 AM Feature #58216 (Rejected): cephfs: Add quota.max_files limit check in MDS side
- 02:41 AM Bug #48678: client: spins on tick interval
- This spin happens in:...
03/23/2023
- 01:39 PM Bug #58597: The MDS crashes when deleting a specific file
- Hi Tobias,
Any update on using the tool? Were you able to get the file system back online?
- 12:49 PM Bug #58949 (Rejected): qa: test_cephfs.test_disk_quota_exceeeded_error is racy
- Closing per previous comment.
- 12:45 PM Bug #59134 (Duplicate): mds: deadlock during unlink with multimds (postgres)
- 01:04 AM Bug #59134: mds: deadlock during unlink with multimds (postgres)
- This should be the same with https://tracker.ceph.com/issues/58340.
- 08:49 AM Bug #58962: ftruncate fails with EACCES on a read-only file created with write permissions
- Apologies for looking into this rather late, Bruno.
03/22/2023
03/21/2023
- 10:12 PM Bug #53615: qa: upgrade test fails with "timeout expired in wait_until_healthy"
- Another instance during this upgrade test:
/a/yuriw-2023-03-14_21:33:13-upgrade:octopus-x-quincy-release-distro-defa...
- 08:32 PM Bug #59119 (New): mds: segmentation fault during replay of snaptable updates
- For a standby-replay daemon:...
- 09:40 AM Backport #59112 (In Progress): reef: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 09:22 AM Backport #59112 (Resolved): reef: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- https://github.com/ceph/ceph/pull/50604
03/20/2023
- 01:13 PM Feature #44274: mds: disconnect file data from inode number
- @Patrick do you think this is something we still need to carry on its own, in light of https://tracker.ceph.com/issue...
- 11:35 AM Backport #58986 (In Progress): pacific: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 11:30 AM Backport #58866 (In Progress): pacific: cephfs-top: Sort menu doesn't show 'No filesystem availab...
- 11:19 AM Backport #58985 (In Progress): quincy: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 10:38 AM Bug #59107 (Pending Backport): MDS imported_inodes metric is not updated.
- ceph daemon mds.$(hostname) perf dump | grep imported
"imported": 29013,
"imported_inodes": 0, - 08:19 AM Feature #58877: mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
- Kotresh Hiremath Ravishankar wrote:
> Neeraj Pratap Singh wrote:
> > I did try to reproduce the issue mentioned in ...
03/16/2023
- 10:30 AM Feature #58877: mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
- Neeraj Pratap Singh wrote:
> I did try to reproduce the issue mentioned in the tracker due to which, the feature is ...
03/15/2023
- 04:33 PM Bug #54730 (Resolved): crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state ==...
- 04:33 PM Bug #49132 (Resolved): mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_XLO...
- 04:32 PM Backport #58323 (Resolved): pacific: mds crashed "assert_condition": "state == LOCK_XLOCK || sta...
03/14/2023
- 05:18 PM Bug #58008 (Resolved): mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
- 05:18 PM Backport #58254 (Resolved): pacific: mds/PurgeQueue: don't consider filer_max_purge_ops when _cal...
- 03:49 PM Backport #58254: pacific: mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49656
merged
- 05:11 PM Bug #57359 (Resolved): mds/Server: -ve values cause unexpected client eviction while handling cli...
- 05:11 PM Backport #58601 (Resolved): pacific: mds/Server: -ve values cause unexpected client eviction whil...
- 02:57 PM Backport #58601: pacific: mds/Server: -ve values cause unexpected client eviction while handling ...
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49956
merged
- 04:50 PM Bug #58619 (In Progress): mds: client evict [-h|--help] evicts ALL clients
- 04:48 PM Bug #58030 (Resolved): mds: avoid ~mdsdir's scrubbing and reporting damage health status
- 04:48 PM Bug #58028 (Resolved): cephfs-top: Sorting doesn't work when the filesystems are removed and created
- 04:47 PM Bug #58031 (Resolved): cephfs-top: sorting/limit excepts when the filesystems are removed and cre...
- 04:47 PM Feature #55121 (Resolved): cephfs-top: new options to limit and order-by
- 04:47 PM Bug #57620 (Resolved): mgr/volumes: addition of human-readable flag to volume info command
- 04:46 PM Bug #55234 (Resolved): snap_schedule: replace .snap with the client configured snap dir name
- 04:45 PM Backport #58079 (Resolved): quincy: cephfs-top: Sorting doesn't work when the filesystems are rem...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50151
merged
- 04:45 PM Backport #58074 (Resolved): quincy: cephfs-top: sorting/limit excepts when the filesystems are re...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50151
merged
- 04:44 PM Backport #57849 (Resolved): quincy: mgr/volumes: addition of human-readable flag to volume info c...
- 04:43 PM Backport #57849: quincy: mgr/volumes: addition of human-readable flag to volume info command
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48466
merged
- 04:42 PM Backport #58249 (Resolved): quincy: mds: avoid ~mdsdir's scrubbing and reporting damage health st...
- 04:41 PM Backport #57970 (Resolved): quincy: cephfs-top: new options to limit and order-by
- 04:40 PM Backport #57971 (Resolved): pacific: cephfs-top: new options to limit and order-by
- 03:43 PM Backport #57971: pacific: cephfs-top: new options to limit and order-by
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49303
merged
- 04:39 PM Backport #58073 (Resolved): pacific: cephfs-top: sorting/limit excepts when the filesystems are r...
- 03:43 PM Backport #58073: pacific: cephfs-top: sorting/limit excepts when the filesystems are removed and ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49303
merged
- 04:39 PM Backport #58078 (Resolved): pacific: cephfs-top: Sorting doesn't work when the filesystems are re...
- 03:44 PM Backport #58078: pacific: cephfs-top: Sorting doesn't work when the filesystems are removed and c...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49303
merged
- 04:38 PM Backport #58250 (Resolved): pacific: mds: avoid ~mdsdir's scrubbing and reporting damage health s...
- 03:45 PM Backport #58250: pacific: mds: avoid ~mdsdir's scrubbing and reporting damage health status
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49440
merged
- 04:37 PM Backport #57201 (Resolved): pacific: snap_schedule: replace .snap with the client configured snap...
- 04:14 PM Backport #57201: pacific: snap_schedule: replace .snap with the client configured snap dir name
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47726
merged
- 03:52 PM Backport #58349: pacific: MDS: scan_stray_dir doesn't walk through all stray inode fragment
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49669
merged
- 03:46 PM Backport #58323: pacific: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_...
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/49538
merged
- 03:45 PM Backport #57761: pacific: qa: test_scrub_pause_and_resume_with_abort failure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49458
merged
- 03:20 PM Bug #59067 (Resolved): mds: add cap acquisition throttled event to MDR
- Otherwise a blocked op won't show it's being blocked by the cap acquisition throttle.
Write a test that verifies t...
- 03:04 PM Backport #58598: pacific: mon: prevent allocating snapids allocated for CephFS
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50050
merged
- 02:59 PM Backport #58668: pacific: cephfs-top: drop curses.A_ITALIC
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50029
merged
- 02:58 PM Backport #57728: pacific: Quincy 17.2.3 pybind/mgr/status: assert metadata failed
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49966
merged
- 02:58 PM Bug #58082 (Resolved): cephfs:filesystem became read only after Quincy upgrade
- 02:57 PM Backport #58608 (Resolved): pacific: cephfs:filesystem became read only after Quincy upgrade
- 02:56 PM Backport #58608: pacific: cephfs:filesystem became read only after Quincy upgrade
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49941
merged
- 02:57 PM Backport #58603: pacific: client stalls during vstart_runner test
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49944
merged
- 02:53 PM Backport #58346: pacific: Thread md_log_replay is hanged for ever.
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49671
merged
- 02:44 PM Feature #18475 (Resolved): qa: run xfstests in the nightlies
- Related tickets -
https://tracker.ceph.com/issues/58945
https://tracker.ceph.com/issues/58938
- 12:46 PM Backport #59000 (In Progress): quincy: cephfs_mirror: local and remote dir root modes are not same
- 10:19 AM Backport #59020 (In Progress): reef: cephfs-data-scan: multiple data pools are not supported
- 09:56 AM Backport #59020 (New): reef: cephfs-data-scan: multiple data pools are not supported
- 09:42 AM Backport #59020 (Duplicate): reef: cephfs-data-scan: multiple data pools are not supported
- 09:51 AM Backport #59019 (In Progress): pacific: cephfs-data-scan: multiple data pools are not supported
- 09:44 AM Backport #59018 (In Progress): quincy: cephfs-data-scan: multiple data pools are not supported
03/13/2023
- 04:54 PM Backport #59041 (In Progress): quincy: libcephfs: client needs to update the mtime and change att...
- https://github.com/ceph/ceph/pull/50730
- 04:54 PM Backport #59040 (Rejected): pacific: libcephfs: client needs to update the mtime and change attr ...
- 04:53 PM Backport #59039 (Duplicate): pacific: libcephfs: client needs to update the mtime and change attr...
- 04:53 PM Backport #59038 (Duplicate): quincy: libcephfs: client needs to update the mtime and change attr ...
- 04:53 PM Backport #59037 (In Progress): quincy: MDS allows a (kernel) client to exceed the xattrs key/valu...
- https://github.com/ceph/ceph/pull/50981
- 04:53 PM Backport #59036 (Duplicate): pacific: MDS allows a (kernel) client to exceed the xattrs key/value...
- 04:53 PM Backport #59035 (New): pacific: MDS allows a (kernel) client to exceed the xattrs key/value limits
- 04:53 PM Backport #59034 (Duplicate): quincy: MDS allows a (kernel) client to exceed the xattrs key/value ...
- 04:52 PM Backport #59033 (Duplicate): quincy: Test failure: test_client_cache_size (tasks.cephfs.test_clie...
- 04:52 PM Backport #59032 (Resolved): pacific: Test failure: test_client_cache_size (tasks.cephfs.test_clie...
- https://github.com/ceph/ceph/pull/51509
- 04:52 PM Backport #59031 (Duplicate): pacific: Test failure: test_client_cache_size (tasks.cephfs.test_cli...
- 04:52 PM Backport #59030 (In Progress): quincy: Test failure: test_client_cache_size (tasks.cephfs.test_cl...
- https://github.com/ceph/ceph/pull/51049
- 04:51 PM Backport #59024 (Duplicate): quincy: mds: warning `clients failing to advance oldest client/flush...
- 04:51 PM Backport #59023 (Resolved): pacific: mds: warning `clients failing to advance oldest client/flush...
- https://github.com/ceph/ceph/pull/50811
- 04:51 PM Backport #59022 (Duplicate): pacific: mds: warning `clients failing to advance oldest client/flus...
- 04:51 PM Backport #59021 (Resolved): quincy: mds: warning `clients failing to advance oldest client/flush ...
- https://github.com/ceph/ceph/pull/50785
- 04:51 PM Backport #59020 (Resolved): reef: cephfs-data-scan: multiple data pools are not supported
- https://github.com/ceph/ceph/pull/50524
- 04:50 PM Backport #59019 (Resolved): pacific: cephfs-data-scan: multiple data pools are not supported
- https://github.com/ceph/ceph/pull/50523
- 04:50 PM Backport #59018 (Resolved): quincy: cephfs-data-scan: multiple data pools are not supported
- https://github.com/ceph/ceph/pull/50522
- 04:50 PM Backport #59017 (Resolved): pacific: snap-schedule: handle non-existent path gracefully during sn...
- https://github.com/ceph/ceph/pull/51246
- 04:50 PM Backport #59016 (Resolved): quincy: snap-schedule: handle non-existent path gracefully during sna...
- https://github.com/ceph/ceph/pull/50780
- 04:49 PM Backport #59015 (Rejected): pacific: Command failed (workunit test fs/quota/quota.sh) on smithi08...
- https://github.com/ceph/ceph/pull/52580
- 04:49 PM Backport #59014 (In Progress): quincy: Command failed (workunit test fs/quota/quota.sh) on smithi...
- https://github.com/ceph/ceph/pull/52579
- 04:47 PM Backport #59007 (Resolved): pacific: mds stuck in 'up:replay' and crashed.
- https://github.com/ceph/ceph/pull/50725
- 04:47 PM Backport #59006 (Resolved): quincy: mds stuck in 'up:replay' and crashed.
- https://github.com/ceph/ceph/pull/50724
- 04:46 PM Backport #59003 (Resolved): pacific: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- https://github.com/ceph/ceph/pull/51039
- 04:46 PM Backport #59002 (Resolved): quincy: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- https://github.com/ceph/ceph/pull/50786
- 04:46 PM Backport #59001 (Resolved): pacific: cephfs_mirror: local and remote dir root modes are not same
- https://github.com/ceph/ceph/pull/53270
- 04:46 PM Backport #59000 (Resolved): quincy: cephfs_mirror: local and remote dir root modes are not same
- https://github.com/ceph/ceph/pull/50528
- 04:45 PM Backport #58994 (Resolved): pacific: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- https://github.com/ceph/ceph/pull/50988
- 04:45 PM Backport #58993 (Resolved): quincy: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- https://github.com/ceph/ceph/pull/50989
- 04:44 PM Backport #58992 (Rejected): pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- https://github.com/ceph/ceph/pull/52584
- 04:44 PM Backport #58991 (In Progress): quincy: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- https://github.com/ceph/ceph/pull/52585
- 04:43 PM Backport #58986 (Resolved): pacific: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- https://github.com/ceph/ceph/pull/50597
- 04:43 PM Backport #58985 (Resolved): quincy: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- https://github.com/ceph/ceph/pull/50595
- 04:43 PM Backport #58984 (Resolved): pacific: cephfs-top: navigate to home screen when no fs
- https://github.com/ceph/ceph/pull/50737
- 04:43 PM Backport #58983 (Resolved): quincy: cephfs-top: navigate to home screen when no fs
- https://github.com/ceph/ceph/pull/50731
- 04:18 PM Bug #58971 (Fix Under Review): mon/MDSMonitor: do not trigger propose on error from prepare_update
- 04:16 PM Bug #58971 (Pending Backport): mon/MDSMonitor: do not trigger propose on error from prepare_update
- https://github.com/ceph/ceph/pull/50404#discussion_r1133791746
- 02:24 PM Feature #55940: quota: accept values in human readable format as well
- Just FYI - follow up PR: https://github.com/ceph/ceph/pull/50493
- 02:00 PM Bug #54501 (Pending Backport): libcephfs: client needs to update the mtime and change attr when s...
- 01:55 PM Bug #58489 (Pending Backport): mds stuck in 'up:replay' and crashed.
- 01:53 PM Bug #58678 (Pending Backport): cephfs_mirror: local and remote dir root modes are not same
- 09:32 AM Bug #58962: ftruncate fails with EACCES on a read-only file created with write permissions
- Forgot to mention that this has been recently verified on v15.2.17 and v16.2.11.
- 08:35 AM Bug #58962 (New): ftruncate fails with EACCES on a read-only file created with write permissions
- When creating a new file with write permissions, with mode set to read-only such as 400 or 444, ftruncate fails with ...
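For reference, the reported scenario boils down to the following minimal sketch (a hypothetical reproducer, not the reporter's exact test); POSIX permits `ftruncate()` on any descriptor opened for writing, regardless of the file's mode bits:

```python
import os

# Create a file with a read-only mode (0400) through a descriptor opened
# for writing, then truncate via that same descriptor. Per POSIX this
# should succeed, since the fd itself is writable; the report says it
# instead fails with EACCES on the affected versions.
fd = os.open("testfile", os.O_CREAT | os.O_WRONLY, 0o400)
try:
    os.ftruncate(fd, 0)
finally:
    os.close(fd)
```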
03/10/2023
- 12:42 PM Bug #58651 (Pending Backport): mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 11:50 AM Bug #58949: qa: test_cephfs.test_disk_quota_exceeeded_error is racy
- Venky, this most likely is not a race condition. Your testing branch had a patch that fixes the quota issue. See - https://gi...
- 11:24 AM Bug #58949 (Rejected): qa: test_cephfs.test_disk_quota_exceeeded_error is racy
- test_cephfs.test_disk_quota_exceeeded_error's failure has been reported here before - https://tracker.ceph.com/issues...
- 11:32 AM Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
- Venky Shankar wrote:
> Rishabh Dave wrote:
> > @test_disk_quota_exceeeded_error@ from @src/test/pybind/test_cephfs....
- 04:28 AM Bug #58220 (Pending Backport): Command failed (workunit test fs/quota/quota.sh) on smithi081 with...
- 03:53 AM Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
- Rishabh Dave wrote:
> @test_disk_quota_exceeeded_error@ from @src/test/pybind/test_cephfs.py@ fails on this teutholo...
- 11:22 AM Bug #53573 (Resolved): qa: test new clients against older Ceph clusters
- 11:17 AM Bug #58095 (Pending Backport): snap-schedule: handle non-existent path gracefully during snapshot...
- 04:42 AM Bug #58717 (Pending Backport): client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- 02:26 AM Bug #50016: qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
- See this again in http://qa-proxy.ceph.com/teuthology/yuriw-2023-03-08_20:32:29-fs-wip-yuri3-testing-2023-03-08-0800-...
03/09/2023
- 06:41 PM Bug #58945: qa: xfstests-dev's generic test suite has 20 failures with fuse client
- Related ticket - https://tracker.ceph.com/issues/58742
- 06:40 PM Bug #58945 (New): qa: xfstests-dev's generic test suite has 20 failures with fuse client
- "PR #45960":https://github.com/ceph/ceph/pull/45960 enables running tests from xfstests-dev against CephFS. For FUSE ...
- 03:38 PM Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
- @test_disk_quota_exceeeded_error@ from @src/test/pybind/test_cephfs.py@ fails on this teuthology run - http://pulpito...
- 02:22 PM Bug #58814 (Pending Backport): cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 01:07 PM Bug #57087: qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
- Seen in main branch integration test: https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-2...
- 12:54 PM Bug #58823 (Pending Backport): cephfs-top: navigate to home screen when no fs
03/08/2023
- 04:52 PM Bug #58795 (Resolved): cephfs-shell: update path to cephfs-shell since its location has changed
- 02:38 PM Bug #58795 (Fix Under Review): cephfs-shell: update path to cephfs-shell since its location has c...
- 02:32 PM Bug #58795 (Resolved): cephfs-shell: update path to cephfs-shell since its location has changed
- 02:37 PM Bug #55354 (Resolved): cephfs: xfstests-dev can't be run against fuse mounted cephfs
- 02:32 PM Bug #58726 (Pending Backport): Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- 02:16 PM Bug #58938 (New): qa: xfstests-dev's generic test suite has 7 failures with kclient
- "PR #45960":https://github.com/ceph/ceph/pull/45960 enables running tests from xfstests-dev against CephFS. For kerne...
- 07:08 AM Feature #55940 (Resolved): quota: accept values in human readable format as well
- 02:43 AM Bug #55725 (Pending Backport): MDS allows a (kernel) client to exceed the xattrs key/value limits
- 02:39 AM Bug #57985 (Pending Backport): mds: warning `clients failing to advance oldest client/flush tid` ...
- 02:37 AM Bug #56446 (Pending Backport): Test failure: test_client_cache_size (tasks.cephfs.test_client_lim...
- 02:35 AM Bug #58934: snaptest-git-ceph.sh failure with ceph-fuse
- Hmmm... similar to https://tracker.ceph.com/issues/17172
- 02:31 AM Bug #58934 (Duplicate): snaptest-git-ceph.sh failure with ceph-fuse
- https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/...
- 02:04 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
- Milind, PTAL.
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testin...
03/07/2023
- 02:13 PM Bug #57064 (Need More Info): qa: test_add_ancestor_and_child_directory failure
- Looking at the logs, the mirror daemon is missing and thus the command failed...
- 08:52 AM Bug #57064: qa: test_add_ancestor_and_child_directory failure
- Dhairya Parmar wrote:
> I tried digging into this failure, while looking at teuthology log, I see
> [...]
>
> I...
- 02:59 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Rishabh Dave wrote:
> http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-...
03/06/2023
- 04:25 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/...
- 04:18 PM Bug #58564: workunit suites/dbench.sh fails with error code 1
- http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/...
- 03:17 PM Bug #56830 (Can't reproduce): crash: cephfs::mirror::PeerReplayer::pick_directory()
- After thoroughly assessing the issue with the limited available data in the tracker, it's hard to tell what led to t...
- 03:16 PM Bug #56830: crash: cephfs::mirror::PeerReplayer::pick_directory()
- Dhairya Parmar wrote:
> Issue seems to be at:
> [...]
> @ https://github.com/ceph/ceph/blob/main/src/tools/cephfs_...
- 01:49 PM Bug #56830 (Fix Under Review): crash: cephfs::mirror::PeerReplayer::pick_directory()
- See updated in PR.
- 09:07 AM Feature #58877: mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
- I did try to reproduce the issue mentioned in the tracker, due to which the feature is required. I found that:
1. If ...
- 04:15 AM Bug #58029 (Pending Backport): cephfs-data-scan: multiple data pools are not supported
- 12:53 AM Backport #58609 (Resolved): quincy: cephfs:filesystem became read only after Quincy upgrade
- 12:52 AM Backport #58602 (Resolved): quincy: client stalls during vstart_runner test
03/03/2023
- 08:01 AM Backport #58865 (In Progress): quincy: cephfs-top: Sort menu doesn't show 'No filesystem availabl...
- 07:40 AM Bug #57280 (Resolved): qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fe...
- backport merged
- 07:38 AM Backport #58604 (Resolved): quincy: qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails -...
- 07:35 AM Backport #58253 (Resolved): quincy: mds/PurgeQueue: don't consider filer_max_purge_ops when _calc...
- 07:33 AM Bug #58138 (Resolved): "ceph nfs cluster info" shows junk data for non-existent cluster
- backport merged
- 06:59 AM Backport #58348 (Resolved): quincy: "ceph nfs cluster info" shows junk data for non-existent clus...
03/02/2023
- 10:43 PM Backport #58599: quincy: mon: prevent allocating snapids allocated for CephFS
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50090
merged
- 10:40 PM Backport #58604: quincy: qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to ...
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49957
merged
- 10:39 PM Backport #58602: quincy: client stalls during vstart_runner test
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49942
merged
- 10:39 PM Backport #58609: quincy: cephfs:filesystem became read only after Quincy upgrade
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49939
merged
- 10:38 PM Backport #58253: quincy: mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49655
merged
- 10:37 PM Backport #58348: quincy: "ceph nfs cluster info" shows junk data for non-existent cluster
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49654
merged
- 10:31 PM Backport #57970: quincy: cephfs-top: new options to limit and order-by
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50151
merged
- 01:38 PM Feature #57090 (Resolved): MDSMonitor,mds: add MDSMap flag to prevent clients from connecting
- 10:10 AM Bug #58727: quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)
- Not reproducible in main branch (wip-vshankar-testing-20230228.105516 is just a couple of test PRs on top of main bra...
- 07:05 AM Bug #58727: quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)
- ... and the kclient sees the error:...
- 06:48 AM Bug #58727: quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)
- The MDS did reply back an ENOSPC to the client:...
- 07:42 AM Bug #58760 (Resolved): kclient: xfstests-dev generic/317 failed
- 05:37 AM Bug #58760: kclient: xfstests-dev generic/317 failed
- qa changes are: https://github.com/ceph/ceph/pull/50217
03/01/2023
- 05:59 PM Bug #56830: crash: cephfs::mirror::PeerReplayer::pick_directory()
- Issue seems to be at:...
- 10:43 AM Feature #57090: MDSMonitor,mds: add MDSMap flag to prevent clients from connecting
- should we backport this?
- 10:08 AM Bug #57064: qa: test_add_ancestor_and_child_directory failure
- I tried digging into this failure, while looking at teuthology log, I see ...
02/28/2023
- 11:03 PM Fix #58758: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid'
- /a/yuriw-2023-02-24_17:50:19-rados-main-distro-default-smithi/7186744
- 04:05 PM Bug #44100: cephfs rsync kworker high load.
- Has this been released? I believe I'm hitting it with Ceph 17.2.5 and kernel 5.4.0-136 on Ubuntu 20.04.
- 02:12 PM Backport #58600 (Resolved): quincy: mds/Server: -ve values cause unexpected client eviction while...
- https://github.com/ceph/ceph/pull/48252, which contained the relevant commit, has been merged
- 02:09 PM Backport #58881 (Resolved): pacific: mds: Jenkins fails with skipping unrecognized type MClientRe...
- https://github.com/ceph/ceph/pull/50733
- 02:09 PM Backport #58880 (Resolved): quincy: mds: Jenkins fails with skipping unrecognized type MClientReq...
- https://github.com/ceph/ceph/pull/50732
- 02:00 PM Bug #58853 (Pending Backport): mds: Jenkins fails with skipping unrecognized type MClientRequest:...
- 02:00 PM Bug #58853: mds: Jenkins fails with skipping unrecognized type MClientRequest::Release
- Backport note: delay the backport till enough tests have been run on main for this change.
- 01:14 PM Bug #58878 (New): mds: FAILED ceph_assert(trim_to > trimming_pos)
- One of the MDS crash with the following backtrace:...
- 01:09 PM Feature #58877 (Rejected): mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
- In situations where the subvolume metadata is missing/corrupted/untrustable, having a way to regenerate it would be h...
- 01:06 PM Bug #56522 (Resolved): Do not abort MDS on unknown messages
- PR merged into main; both backport PRs merged into their respective branches.
- 01:05 PM Backport #57665 (Resolved): pacific: Do not abort MDS on unknown messages
- 01:04 PM Backport #57666 (Resolved): quincy: Do not abort MDS on unknown messages
- PR merged
- 01:04 PM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- Hey Andras,
Andras Pataki wrote:
> I'm experimenting on reproducing the problem on demand. Once I have a way to ...
- 12:24 PM Feature #58835 (Fix Under Review): mds: add an asok command to dump export states
- 07:21 AM Backport #58866 (Resolved): pacific: cephfs-top: Sort menu doesn't show 'No filesystem available'...
- https://github.com/ceph/ceph/pull/50596
- 07:21 AM Backport #58865 (Resolved): quincy: cephfs-top: Sort menu doesn't show 'No filesystem available' ...
- https://github.com/ceph/ceph/pull/50365
- 07:14 AM Bug #58813 (Pending Backport): cephfs-top: Sort menu doesn't show 'No filesystem available' scree...
02/27/2023
- 04:59 PM Backport #51936 (Rejected): octopus: mds: improve debugging for mksnap denial
- EOL
- 04:58 PM Backport #53715 (Resolved): octopus: mds: fails to reintegrate strays if destdn's directory is fu...
- 04:58 PM Backport #53735 (Rejected): octopus: mds: recursive scrub does not trigger stray reintegration
- EOL
- 04:57 PM Bug #51905 (Resolved): qa: "error reading sessionmap 'mds1_sessionmap'"
- 04:15 PM Bug #53194 (Resolved): mds: opening connection to up:replay/up:creating daemon causes message drop
- 04:15 PM Backport #53446 (Rejected): octopus: mds: opening connection to up:replay/up:creating daemon caus...
- EOL
- 04:14 PM Bug #56666 (Resolved): mds: standby-replay daemon always removed in MDSMonitor::prepare_beacon
- 04:14 PM Bug #49605 (Resolved): pybind/mgr/volumes: deadlock on async job hangs finisher thread
- 05:40 AM Bug #53970 (Rejected): qa/vstart_runner: run_python() functions interface are not same
- 05:14 AM Bug #58853 (Fix Under Review): mds: Jenkins fails with skipping unrecognized type MClientRequest:...
- 05:13 AM Bug #58853 (Resolved): mds: Jenkins fails with skipping unrecognized type MClientRequest::Release
- https://jenkins.ceph.com/job/ceph-pull-requests/111277/console
https://jenkins.ceph.com/job/ceph-pull-requests/11127...
- 01:22 AM Bug #55332 (Resolved): Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- 01:21 AM Backport #57836 (Resolved): pacific: Failure in snaptest-git-ceph.sh (it's an async unlink/create...
- 01:20 AM Backport #57837 (Resolved): quincy: Failure in snaptest-git-ceph.sh (it's an async unlink/create ...