Activity
From 03/02/2023 to 03/31/2023
03/31/2023
- 09:14 PM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- /a/yuriw-2023-03-14_20:10:47-rados-wip-yuri-testing-2023-03-14-0714-reef-distro-default-smithi/7206976
- 03:24 PM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- /a/yuriw-2023-03-27_23:05:54-rados-wip-yuri4-testing-2023-03-25-0714-distro-default-smithi/7221904
- 06:45 PM Bug #56397: client: `df` will show incorrect disk size if the quota size is not aligned to 4MB
- Xiubo, the PR has been merged but I am leaving the status of this ticket unchanged because the backport field is not set...
- 10:20 AM Backport #59265 (In Progress): quincy: pacific scrub ~mds_dir causes stray related ceph_assert, a...
- https://github.com/ceph/ceph/pull/50815
- 08:40 AM Backport #59265 (In Progress): quincy: pacific scrub ~mds_dir causes stray related ceph_assert, a...
- 10:20 AM Backport #59262 (In Progress): quincy: mds: stray directories are not purged when all past parent...
- https://github.com/ceph/ceph/pull/50815
- 08:39 AM Backport #59262 (In Progress): quincy: mds: stray directories are not purged when all past parent...
- 10:17 AM Backport #59264 (In Progress): pacific: pacific scrub ~mds_dir causes stray related ceph_assert, ...
- https://github.com/ceph/ceph/pull/50814
- 08:40 AM Backport #59264 (Resolved): pacific: pacific scrub ~mds_dir causes stray related ceph_assert, abo...
- 10:16 AM Backport #59261 (In Progress): pacific: mds: stray directories are not purged when all past paren...
- https://github.com/ceph/ceph/pull/50814
- 08:39 AM Backport #59261 (Resolved): pacific: mds: stray directories are not purged when all past parents ...
- 10:12 AM Backport #59263 (In Progress): reef: pacific scrub ~mds_dir causes stray related ceph_assert, abo...
- https://github.com/ceph/ceph/pull/50813
- 08:39 AM Backport #59263 (In Progress): reef: pacific scrub ~mds_dir causes stray related ceph_assert, abo...
- 10:12 AM Backport #59260 (In Progress): reef: mds: stray directories are not purged when all past parents ...
- https://github.com/ceph/ceph/pull/50813
- 08:39 AM Backport #59260 (In Progress): reef: mds: stray directories are not purged when all past parents ...
- 10:01 AM Backport #59023 (In Progress): pacific: mds: warning `clients failing to advance oldest client/fl...
- 09:23 AM Backport #59022 (Duplicate): pacific: mds: warning `clients failing to advance oldest client/flus...
- Duplicate of https://tracker.ceph.com/issues/59023
- 09:19 AM Backport #59031 (Duplicate): pacific: Test failure: test_client_cache_size (tasks.cephfs.test_cli...
- Duplicate of https://tracker.ceph.com/issues/59032
- 09:13 AM Backport #59268 (Resolved): pacific: libcephfs: clear the suid/sgid for fallocate
- https://github.com/ceph/ceph/pull/50988
- 09:13 AM Backport #59267 (Resolved): reef: libcephfs: clear the suid/sgid for fallocate
- https://github.com/ceph/ceph/pull/50987
- 09:13 AM Backport #59266 (Resolved): quincy: libcephfs: clear the suid/sgid for fallocate
- https://github.com/ceph/ceph/pull/50989
- 09:11 AM Feature #58680 (Pending Backport): libcephfs: clear the suid/sgid for fallocate
- Backport note: include https://github.com/ceph/ceph/pull/50793.
- 08:37 AM Bug #51824 (Pending Backport): pacific scrub ~mds_dir causes stray related ceph_assert, abort and...
- 08:37 AM Bug #53724 (Pending Backport): mds: stray directories are not purged when all past parents are clear
- 08:31 AM Backport #59246 (In Progress): pacific: qa: fix testcase 'test_cluster_set_user_config_with_non_e...
- https://github.com/ceph/ceph/pull/50809
- 04:06 AM Backport #59246 (Resolved): pacific: qa: fix testcase 'test_cluster_set_user_config_with_non_exis...
- 08:31 AM Backport #59249 (In Progress): pacific: qa: intermittent nfs test failures at nfs cluster creation
- https://github.com/ceph/ceph/pull/50809
- 04:07 AM Backport #59249 (Resolved): pacific: qa: intermittent nfs test failures at nfs cluster creation
- 08:30 AM Backport #59245 (In Progress): reef: qa: fix testcase 'test_cluster_set_user_config_with_non_exis...
- https://github.com/ceph/ceph/pull/50808
- 04:06 AM Backport #59245 (In Progress): reef: qa: fix testcase 'test_cluster_set_user_config_with_non_exis...
- 08:30 AM Backport #59248 (In Progress): reef: qa: intermittent nfs test failures at nfs cluster creation
- https://github.com/ceph/ceph/pull/50808
- 04:07 AM Backport #59248 (Resolved): reef: qa: intermittent nfs test failures at nfs cluster creation
- 08:29 AM Backport #59244 (In Progress): quincy: qa: fix testcase 'test_cluster_set_user_config_with_non_ex...
- https://github.com/ceph/ceph/pull/50807
- 04:06 AM Backport #59244 (In Progress): quincy: qa: fix testcase 'test_cluster_set_user_config_with_non_ex...
- 08:29 AM Backport #59247 (In Progress): quincy: qa: intermittent nfs test failures at nfs cluster creation
- https://github.com/ceph/ceph/pull/50807
- 04:07 AM Backport #59247 (Resolved): quincy: qa: intermittent nfs test failures at nfs cluster creation
- 08:23 AM Backport #59252 (In Progress): pacific: mgr/nfs: disallow non-existent paths when creating export
- https://github.com/ceph/ceph/pull/50809
- 04:07 AM Backport #59252 (Resolved): pacific: mgr/nfs: disallow non-existent paths when creating export
- 08:15 AM Backport #59251 (In Progress): reef: mgr/nfs: disallow non-existent paths when creating export
- https://github.com/ceph/ceph/pull/50808
- 04:07 AM Backport #59251 (Resolved): reef: mgr/nfs: disallow non-existent paths when creating export
- 08:11 AM Backport #59250 (In Progress): quincy: mgr/nfs: disallow non-existent paths when creating export
- https://github.com/ceph/ceph/pull/50807
- 04:07 AM Backport #59250 (In Progress): quincy: mgr/nfs: disallow non-existent paths when creating export
- 04:04 AM Fix #58758 (Pending Backport): qa: fix testcase 'test_cluster_set_user_config_with_non_existing_c...
- 04:03 AM Bug #58744 (Pending Backport): qa: intermittent nfs test failures at nfs cluster creation
- 04:03 AM Bug #58228 (Pending Backport): mgr/nfs: disallow non-existent paths when creating export
03/30/2023
- 05:52 PM Bug #58219: Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournal...
- very strange. i just saw this show up in an rgw job against ubuntu 20.04
i booted up an old focal vm to test under...
- 03:10 PM Bug #58576 (Rejected): do not allow invalid flags with cmd 'scrub start'
- Alright, so the cmd works as expected; it's just that it considers the first arg as the scrub tag. If given more args then...
- 01:27 PM Bug #58244: Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
- Rishabh, please take this one.
- 10:49 AM Bug #51824 (Resolved): pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- 10:49 AM Bug #53724 (Resolved): mds: stray directories are not purged when all past parents are clear
- 09:20 AM Feature #58129 (Fix Under Review): mon/FSCommands: support swapping file systems by name
- 09:08 AM Bug #59230 (Duplicate): Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
- https://pulpito.ceph.com/vshankar-2023-03-18_05:01:47-fs-wip-vshankar-testing-20230317.095222-testing-default-smithi/...
- 06:24 AM Documentation #57062: Document access patterns that have good/pathological performance on CephFS
- Venky Shankar wrote:
> Niklas Hambuechen wrote:
> > Hi Venky, I'm using the kclient on Linux 5.10.88 in this cluste...
- 04:59 AM Backport #59030: quincy: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.Te...
- Venky Shankar wrote:
> Milind, handing this over to you since it needs to be backported after backporting https://tr...
- 04:36 AM Backport #59030: quincy: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.Te...
- Milind, handing this over to you since it needs to be backported after backporting https://tracker.ceph.com/issues/54317
- 04:55 AM Backport #59002 (In Progress): quincy: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 04:53 AM Backport #59021 (In Progress): quincy: mds: warning `clients failing to advance oldest client/flu...
- 04:42 AM Backport #59041 (In Progress): quincy: libcephfs: client needs to update the mtime and change att...
- 04:38 AM Backport #59032: pacific: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.T...
- Milind, please include this when backporting https://tracker.ceph.com/issues/54317
- 04:38 AM Backport #59036 (Duplicate): pacific: MDS allows a (kernel) client to exceed the xattrs key/value...
- 04:37 AM Backport #59036: pacific: MDS allows a (kernel) client to exceed the xattrs key/value limits
- https://tracker.ceph.com/issues/59035
- 04:37 AM Backport #59039 (Duplicate): pacific: libcephfs: client needs to update the mtime and change attr...
- https://tracker.ceph.com/issues/59040
- 03:07 AM Backport #59229 (In Progress): pacific: cephfs-data-scan: does not scan_links for lost+found
- 03:04 AM Backport #59229 (Resolved): pacific: cephfs-data-scan: does not scan_links for lost+found
- https://github.com/ceph/ceph/pull/50784
- 03:06 AM Backport #59228 (In Progress): quincy: cephfs-data-scan: does not scan_links for lost+found
- 03:03 AM Backport #59228 (Resolved): quincy: cephfs-data-scan: does not scan_links for lost+found
- https://github.com/ceph/ceph/pull/50783
- 03:05 AM Backport #59227 (In Progress): reef: cephfs-data-scan: does not scan_links for lost+found
- 03:03 AM Backport #59227 (Resolved): reef: cephfs-data-scan: does not scan_links for lost+found
- https://github.com/ceph/ceph/pull/50782
- 03:02 AM Backport #57631 (Rejected): quincy: first-damage.sh does not handle dentries with spaces
- not backporting this script
- 03:01 AM Bug #59183 (Pending Backport): cephfs-data-scan: does not scan_links for lost+found
- 02:59 AM Backport #59226 (In Progress): pacific: mds: modify scrub to catch dentry corruption
- 02:47 AM Backport #59226 (Resolved): pacific: mds: modify scrub to catch dentry corruption
- https://github.com/ceph/ceph/pull/50781
- 02:45 AM Backport #59112 (Resolved): reef: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 02:36 AM Backport #59225 (In Progress): quincy: mds: modify scrub to catch dentry corruption
- 02:18 AM Backport #59225 (Resolved): quincy: mds: modify scrub to catch dentry corruption
- https://github.com/ceph/ceph/pull/50779
- 02:35 AM Backport #59223 (In Progress): pacific: mds: catch damage to CDentry's first member before persis...
- 02:26 AM Backport #59016 (In Progress): quincy: snap-schedule: handle non-existent path gracefully during ...
- 02:09 AM Feature #57091 (Pending Backport): mds: modify scrub to catch dentry corruption
- Quincy backport: https://github.com/ceph/ceph/pull/50779
- 02:08 AM Backport #59221 (In Progress): quincy: mds: catch damage to CDentry's first member before persisting
- 01:19 AM Backport #57714 (In Progress): pacific: mds: scrub locates mismatch between child accounted_rstat...
- 01:17 AM Backport #57715 (In Progress): quincy: mds: scrub locates mismatch between child accounted_rstats...
- 01:15 AM Backport #57721 (In Progress): pacific: qa: data-scan/journal-tool do not output debugging in ups...
- 01:14 AM Backport #57720 (In Progress): quincy: qa: data-scan/journal-tool do not output debugging in upst...
- 01:05 AM Backport #57713 (In Progress): quincy: qa: "1 MDSs behind on trimming (MDS_TRIM)"
- 01:03 AM Backport #57744 (In Progress): quincy: qa: test_recovery_pool uses wrong recovery procedure
- 01:00 AM Backport #57824 (In Progress): quincy: qa: mirror tests should cleanup fs during unwind
- 12:59 AM Backport #57825 (In Progress): pacific: qa: mirror tests should cleanup fs during unwind
- 12:57 AM Cleanup #51543 (Resolved): mds: improve debugging for mksnap denial
- 12:57 AM Bug #53641 (Resolved): mds: recursive scrub does not trigger stray reintegration
- 12:56 AM Bug #53619 (Resolved): mds: fails to reintegrate strays if destdn's directory is full (ENOSPC)
03/29/2023
- 09:57 PM Backport #53162 (In Progress): pacific: qa: test_standby_count_wanted failure
- 09:53 PM Backport #54234 (Resolved): quincy: qa: use cephadm to provision cephfs for fs:workloads
- 09:50 PM Backport #57746 (Duplicate): quincy: qa: broad snapshot functionality testing across clients
- 09:12 PM Backport #57712 (In Progress): pacific: qa: "1 MDSs behind on trimming (MDS_TRIM)"
- 09:10 PM Backport #57630 (Rejected): pacific: first-damage.sh does not handle dentries with spaces
- Not backporting this script to pacific.
- 09:08 PM Backport #52854 (In Progress): pacific: qa: test_simple failure
- 09:05 PM Backport #50024 (Rejected): octopus: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_n...
- 09:04 PM Bug #49834 (Won't Fix - EOL): octopus: qa: test_statfs_on_deleted_fs failure
- 09:02 PM Bug #48524 (Resolved): octopus: run_shell() got an unexpected keyword argument 'timeout'
- 09:00 PM Bug #48143 (Won't Fix - EOL): octopus: qa: statfs command timeout is too short
- 08:52 PM Backport #57670 (Resolved): quincy: mds: damage table only stores one dentry per dirfrag
- 08:49 PM Backport #59222 (In Progress): reef: mds: catch damage to CDentry's first member before persisting
- 08:23 PM Backport #59222 (Resolved): reef: mds: catch damage to CDentry's first member before persisting
- https://github.com/ceph/ceph/pull/50755
- 08:24 PM Backport #59223 (Resolved): pacific: mds: catch damage to CDentry's first member before persisting
- https://github.com/ceph/ceph/pull/50781
- 08:23 PM Backport #59221 (Resolved): quincy: mds: catch damage to CDentry's first member before persisting
- https://github.com/ceph/ceph/pull/50779
- 08:17 PM Bug #58482 (Pending Backport): mds: catch damage to CDentry's first member before persisting
- 03:09 PM Bug #58938: qa: xfstests-dev's generic test suite has 7 failures with kclient
- > Rishabh, could you please check if https://pulpito.ceph.com/vshankar-2023-03-18_05:01:47-fs-wip-vshankar-testing-20...
- 04:55 AM Bug #58938: qa: xfstests-dev's generic test suite has 7 failures with kclient
- Rishabh, could you please check if https://pulpito.ceph.com/vshankar-2023-03-18_05:01:47-fs-wip-vshankar-testing-2023...
- 11:20 AM Backport #58598 (Resolved): pacific: mon: prevent allocating snapids allocated for CephFS
- 09:54 AM Backport #58984 (In Progress): pacific: cephfs-top: navigate to home screen when no fs
- 08:59 AM Backport #58983 (In Progress): quincy: cephfs-top: navigate to home screen when no fs
- 06:58 AM Backport #58826 (In Progress): pacific: mds: make num_fwd and num_retry to __u32
- 06:57 AM Backport #59202 (Resolved): pacific: qa: add testing in fs:workload for different kinds of subvol...
- https://github.com/ceph/ceph/pull/51509
- 06:57 AM Backport #59201 (Resolved): quincy: qa: add testing in fs:workload for different kinds of subvolumes
- https://github.com/ceph/ceph/pull/50974
- 06:56 AM Backport #59200 (Rejected): reef: qa: add testing in fs:workload for different kinds of subvolumes
- 06:50 AM Fix #54317 (Pending Backport): qa: add testing in fs:workload for different kinds of subvolumes
- Milind, I think we should backport this. WDYT? There is merit in doing so (more tests :).
- 06:50 AM Backport #58880 (In Progress): quincy: mds: Jenkins fails with skipping unrecognized type MClient...
- 06:48 AM Backport #58825 (In Progress): quincy: mds: make num_fwd and num_retry to __u32
- 06:47 AM Backport #58881 (In Progress): pacific: mds: Jenkins fails with skipping unrecognized type MClien...
- 06:10 AM Backport #59199 (Resolved): quincy: cephfs: qa enables kclient for newop test
- https://github.com/ceph/ceph/pull/50991
- 06:10 AM Backport #59198 (Rejected): pacific: cephfs: qa enables kclient for newop test
- https://github.com/ceph/ceph/pull/50992
- 06:10 AM Feature #59197 (Fix Under Review): qa: mds_upgrade_sequence switch to merge fragment to filter th...
- 06:04 AM Feature #59197 (Fix Under Review): qa: mds_upgrade_sequence switch to merge fragment to filter th...
- PR https://github.com/ceph/ceph/pull/48183 has added merge fragment support. This will switch to using the fragment to s...
- 06:07 AM Bug #57591 (Pending Backport): cephfs: qa enables kclient for newop test
- 06:06 AM Bug #59195 (Fix Under Review): qa/fscrypt: switch to postmerge fragment to distiguish the mounter...
- 04:54 AM Bug #59195 (Fix Under Review): qa/fscrypt: switch to postmerge fragment to distiguish the mounter...
- https://tracker.ceph.com/issues/57591 introduced the postmerge fragment; we can reuse it here.
- 03:56 AM Backport #58994 (In Progress): pacific: client: fix CEPH_CAP_FILE_WR caps reference leakage in _w...
- 03:55 AM Backport #58993 (In Progress): quincy: client: fix CEPH_CAP_FILE_WR caps reference leakage in _wr...
- 03:11 AM Backport #59007 (In Progress): pacific: mds stuck in 'up:replay' and crashed.
- 03:10 AM Backport #59006 (In Progress): quincy: mds stuck in 'up:replay' and crashed.
- 03:01 AM Bug #57580 (Resolved): Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
- 03:01 AM Backport #57822 (Rejected): quincy: Test failure: test_newops_getvxattr (tasks.cephfs.test_newops...
- Quincy also doesn't support the dependency, so there is no need to backport it.
- 02:59 AM Backport #57823 (Rejected): pacific: Test failure: test_newops_getvxattr (tasks.cephfs.test_newop...
- Pacific hasn't backported the dependency, so there is no need to fix it.
03/28/2023
- 05:38 PM Bug #48502: ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
- /a/lflores-2023-03-27_20:42:09-rados-wip-aclamk-bs-elastic-shared-blob-quincy-distro-default-smithi/7221650
- 05:12 PM Fix #58758: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid'
- /a/lflores-2023-03-27_02:17:31-rados-wip-aclamk-bs-elastic-shared-blob-save-25.03.2023-a-distro-default-smithi/7221061
- 01:02 PM Backport #58808 (In Progress): quincy: cephfs-top: add an option to dump the computed values to s...
- Backport PR: https://github.com/ceph/ceph/pull/50717
- 12:08 PM Backport #59024 (Duplicate): quincy: mds: warning `clients failing to advance oldest client/flush...
- https://tracker.ceph.com/issues/59021
- 11:38 AM Backport #59033 (Duplicate): quincy: Test failure: test_client_cache_size (tasks.cephfs.test_clie...
- https://tracker.ceph.com/issues/59030
- 11:37 AM Backport #59034 (Duplicate): quincy: MDS allows a (kernel) client to exceed the xattrs key/value ...
- https://tracker.ceph.com/issues/59037
- 11:35 AM Backport #59038 (Duplicate): quincy: libcephfs: client needs to update the mtime and change attr ...
- https://tracker.ceph.com/issues/59041 (not sure why backport bot created two backport tickets for the same release)
- 11:31 AM Backport #59037: quincy: MDS allows a (kernel) client to exceed the xattrs key/value limits
- Rishabh, please take this one.
- 10:32 AM Bug #59188 (Fix Under Review): cephfs-top: cephfs-top -d <seconds> not working as expected
- 08:20 AM Bug #59188 (Resolved): cephfs-top: cephfs-top -d <seconds> not working as expected
- `cephfs-top -d [--delay]` raises an exception for float values due to the introduction of `curses.halfdelay()` in cephfs-top
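A minimal sketch of the underlying constraint (illustrative only, not the actual cephfs-top code; the set_refresh_interval helper and the 2.5 value are hypothetical): curses.halfdelay() only accepts an integer number of tenths of a second, so a fractional --delay value has to be converted before it is passed through.

    import curses

    def set_refresh_interval(delay_seconds):
        # curses.halfdelay(tenths) expects an int in the range 1..255;
        # passing a float such as 2.5 directly raises an exception, so the
        # seconds value is converted to tenths and clamped to that range.
        tenths = max(1, min(255, int(delay_seconds * 10)))
        curses.halfdelay(tenths)

    def main(stdscr):
        set_refresh_interval(2.5)  # hypothetical --delay value
        stdscr.getch()             # returns -1 once the delay elapses

    if __name__ == "__main__":
        curses.wrapper(main)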
- 10:03 AM Backport #58807 (In Progress): pacific: cephfs-top: add an option to dump the computed values to ...
- Backport PR: https://github.com/ceph/ceph/pull/50715.
- 09:26 AM Backport #57571 (Resolved): pacific: client: do not uninline data for read
- 09:25 AM Bug #56553 (Resolved): client: do not uninline data for read
- 09:22 AM Bug #54461 (Resolved): ffsb.sh test failure
- 09:22 AM Bug #50057 (Resolved): client: openned inodes counter is inconsistent
- 09:21 AM Backport #50184 (Rejected): octopus: client: openned inodes counter is inconsistent
- Nathan Cutler wrote:
> This ticket is for tracking the octopus backport of a follow-on fix for #46865 which was back...
- 09:12 AM Bug #50744 (Resolved): mds: journal recovery thread is possibly asserting with mds_lock not locked
- 09:11 AM Backport #50847 (Resolved): octopus: mds: journal recovery thread is possibly asserting with mds_...
- 09:09 AM Bug #58000 (Resolved): mds: switch submit_mutex to fair mutex for MDLog
- 09:08 AM Backport #58343 (Resolved): pacific: mds: switch submit_mutex to fair mutex for MDLog
- 09:04 AM Bug #50433 (Resolved): mds: Error ENOSYS: mds.a started profiler
- 09:03 AM Backport #50631 (Resolved): octopus: mds: Error ENOSYS: mds.a started profiler
- 09:01 AM Bug #50808 (Resolved): qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Items in the...
- 08:59 AM Backport #51323 (Resolved): octopus: qa: test_data_scan.TestDataScan.test_pg_files AssertionError...
- 08:56 AM Backport #52440 (In Progress): pacific: qa: add testing for "ms_mode" mount option
- 05:52 AM Backport #58409 (Resolved): quincy: doc: document the relevance of mds_namespace mount option
- 05:45 AM Feature #55197 (Resolved): cephfs-top: make cephfs-top display scrollable like top
- 05:23 AM Backport #57974 (Resolved): pacific: cephfs-top: make cephfs-top display scrollable like top
- 05:19 AM Bug #58677 (Fix Under Review): cephfs-top: test the current python version is supported
- 05:10 AM Bug #58663 (Resolved): cephfs-top: drop curses.A_ITALIC
- 05:03 AM Backport #58667 (Resolved): quincy: cephfs-top: drop curses.A_ITALIC
- 05:00 AM Backport #58668 (Resolved): pacific: cephfs-top: drop curses.A_ITALIC
- 12:27 AM Bug #59185 (Fix Under Review): MDSMonitor: should batch propose osdmap/mdsmap changes via some fs...
- 12:23 AM Bug #59185 (Rejected): MDSMonitor: should batch propose osdmap/mdsmap changes via some fs commands
- Especially `fs fail`. Otherwise, you may see the MDS complain about blocklisting before it has a reasonable chance to...
03/27/2023
- 06:58 PM Bug #59183 (Fix Under Review): cephfs-data-scan: does not scan_links for lost+found
- 06:46 PM Bug #59183 (Resolved): cephfs-data-scan: does not scan_links for lost+found
- Importantly, scan_links corrects the placeholder SNAP_HEAD for the first dentry metadata. If lost+found is skipped, t...
- 01:52 PM Bug #58597: The MDS crashes when deleting a specific file
- Tobias Reinhard wrote:
> Venky Shankar wrote:
> > Hi Tobias,
> >
> > Any update on using the tool? Were you able...
- 10:03 AM Bug #59169 (New): Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirr...
- Seen in Yuri's quincy run: https://pulpito.ceph.com/yuriw-2023-03-23_15:21:23-fs-quincy-release-distro-default-smithi...
03/24/2023
- 07:29 PM Bug #59163 (New): mds: stuck in up:rejoin when it cannot "open" missing directory inode
- tasks.cephfs.test_damage.TestDamage.test_object_deletion tests for damage when no clients are in the session list (fo...
- 10:36 AM Bug #58597: The MDS crashes when deleting a specific file
- Venky Shankar wrote:
> Hi Tobias,
>
> Any update on using the tool? Were you able to get the file system back onl...
- 09:59 AM Bug #57682: client: ERROR: test_reconnect_after_blocklisted
- Another instance - https://pulpito.ceph.com/pdonnell-2023-03-23_18:44:22-fs-wip-pdonnell-testing-20230323.162417-dist...
- 06:01 AM Feature #58216 (Rejected): cephfs: Add quota.max_files limit check in MDS side
- 02:41 AM Bug #48678: client: spins on tick interval
- This spin happens in:...
03/23/2023
- 01:39 PM Bug #58597: The MDS crashes when deleting a specific file
- Hi Tobias,
Any update on using the tool? Were you able to get the file system back online?
- 12:49 PM Bug #58949 (Rejected): qa: test_cephfs.test_disk_quota_exceeeded_error is racy
- Closing per previous comment.
- 12:45 PM Bug #59134 (Duplicate): mds: deadlock during unlink with multimds (postgres)
- 01:04 AM Bug #59134: mds: deadlock during unlink with multimds (postgres)
- This should be the same with https://tracker.ceph.com/issues/58340.
- 08:49 AM Bug #58962: ftruncate fails with EACCES on a read-only file created with write permissions
- Apologies for looking into this rather late, Bruno.
03/22/2023
03/21/2023
- 10:12 PM Bug #53615: qa: upgrade test fails with "timeout expired in wait_until_healthy"
- Another instance during this upgrade test:
/a/yuriw-2023-03-14_21:33:13-upgrade:octopus-x-quincy-release-distro-defa...
- 08:32 PM Bug #59119 (New): mds: segmentation fault during replay of snaptable updates
- For a standby-replay daemon:...
- 09:40 AM Backport #59112 (In Progress): reef: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 09:22 AM Backport #59112 (Resolved): reef: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- https://github.com/ceph/ceph/pull/50604
03/20/2023
- 01:13 PM Feature #44274: mds: disconnect file data from inode number
- @Patrick do you think this is something we still need to carry on its own, in light of https://tracker.ceph.com/issue...
- 11:35 AM Backport #58986 (In Progress): pacific: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 11:30 AM Backport #58866 (In Progress): pacific: cephfs-top: Sort menu doesn't show 'No filesystem availab...
- 11:19 AM Backport #58985 (In Progress): quincy: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 10:38 AM Bug #59107 (Pending Backport): MDS imported_inodes metric is not updated.
- ceph daemon mds.$(hostname) perf dump | grep imported
"imported": 29013,
"imported_inodes": 0,
- 08:19 AM Feature #58877: mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
- Kotresh Hiremath Ravishankar wrote:
> Neeraj Pratap Singh wrote:
> > I did try to reproduce the issue mentioned in ...
03/16/2023
- 10:30 AM Feature #58877: mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
- Neeraj Pratap Singh wrote:
> I did try to reproduce the issue mentioned in the tracker due to which, the feature is ...
03/15/2023
- 04:33 PM Bug #54730 (Resolved): crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state ==...
- 04:33 PM Bug #49132 (Resolved): mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_XLO...
- 04:32 PM Backport #58323 (Resolved): pacific: mds crashed "assert_condition": "state == LOCK_XLOCK || sta...
03/14/2023
- 05:18 PM Bug #58008 (Resolved): mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
- 05:18 PM Backport #58254 (Resolved): pacific: mds/PurgeQueue: don't consider filer_max_purge_ops when _cal...
- 03:49 PM Backport #58254: pacific: mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49656
merged
- 05:11 PM Bug #57359 (Resolved): mds/Server: -ve values cause unexpected client eviction while handling cli...
- 05:11 PM Backport #58601 (Resolved): pacific: mds/Server: -ve values cause unexpected client eviction whil...
- 02:57 PM Backport #58601: pacific: mds/Server: -ve values cause unexpected client eviction while handling ...
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49956
merged
- 04:50 PM Bug #58619 (In Progress): mds: client evict [-h|--help] evicts ALL clients
- 04:48 PM Bug #58030 (Resolved): mds: avoid ~mdsdir's scrubbing and reporting damage health status
- 04:48 PM Bug #58028 (Resolved): cephfs-top: Sorting doesn't work when the filesystems are removed and created
- 04:47 PM Bug #58031 (Resolved): cephfs-top: sorting/limit excepts when the filesystems are removed and cre...
- 04:47 PM Feature #55121 (Resolved): cephfs-top: new options to limit and order-by
- 04:47 PM Bug #57620 (Resolved): mgr/volumes: addition of human-readable flag to volume info command
- 04:46 PM Bug #55234 (Resolved): snap_schedule: replace .snap with the client configured snap dir name
- 04:45 PM Backport #58079 (Resolved): quincy: cephfs-top: Sorting doesn't work when the filesystems are rem...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50151
merged
- 04:45 PM Backport #58074 (Resolved): quincy: cephfs-top: sorting/limit excepts when the filesystems are re...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50151
merged
- 04:44 PM Backport #57849 (Resolved): quincy: mgr/volumes: addition of human-readable flag to volume info c...
- 04:43 PM Backport #57849: quincy: mgr/volumes: addition of human-readable flag to volume info command
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48466
merged
- 04:42 PM Backport #58249 (Resolved): quincy: mds: avoid ~mdsdir's scrubbing and reporting damage health st...
- 04:41 PM Backport #57970 (Resolved): quincy: cephfs-top: new options to limit and order-by
- 04:40 PM Backport #57971 (Resolved): pacific: cephfs-top: new options to limit and order-by
- 03:43 PM Backport #57971: pacific: cephfs-top: new options to limit and order-by
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49303
merged
- 04:39 PM Backport #58073 (Resolved): pacific: cephfs-top: sorting/limit excepts when the filesystems are r...
- 03:43 PM Backport #58073: pacific: cephfs-top: sorting/limit excepts when the filesystems are removed and ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49303
merged
- 04:39 PM Backport #58078 (Resolved): pacific: cephfs-top: Sorting doesn't work when the filesystems are re...
- 03:44 PM Backport #58078: pacific: cephfs-top: Sorting doesn't work when the filesystems are removed and c...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49303
merged
- 04:38 PM Backport #58250 (Resolved): pacific: mds: avoid ~mdsdir's scrubbing and reporting damage health s...
- 03:45 PM Backport #58250: pacific: mds: avoid ~mdsdir's scrubbing and reporting damage health status
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49440
merged
- 04:37 PM Backport #57201 (Resolved): pacific: snap_schedule: replace .snap with the client configured snap...
- 04:14 PM Backport #57201: pacific: snap_schedule: replace .snap with the client configured snap dir name
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47726
merged
- 03:52 PM Backport #58349: pacific: MDS: scan_stray_dir doesn't walk through all stray inode fragment
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49669
merged
- 03:46 PM Backport #58323: pacific: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_...
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/49538
merged
- 03:45 PM Backport #57761: pacific: qa: test_scrub_pause_and_resume_with_abort failure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49458
merged
- 03:20 PM Bug #59067 (Resolved): mds: add cap acquisition throttled event to MDR
- Otherwise a blocked op won't show it's being blocked by the cap acquisition throttle.
Write a test that verifies t...
- 03:04 PM Backport #58598: pacific: mon: prevent allocating snapids allocated for CephFS
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50050
merged
- 02:59 PM Backport #58668: pacific: cephfs-top: drop curses.A_ITALIC
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50029
merged
- 02:58 PM Backport #57728: pacific: Quincy 17.2.3 pybind/mgr/status: assert metadata failed
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49966
merged
- 02:58 PM Bug #58082 (Resolved): cephfs:filesystem became read only after Quincy upgrade
- 02:57 PM Backport #58608 (Resolved): pacific: cephfs:filesystem became read only after Quincy upgrade
- 02:56 PM Backport #58608: pacific: cephfs:filesystem became read only after Quincy upgrade
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49941
merged
- 02:57 PM Backport #58603: pacific: client stalls during vstart_runner test
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49944
merged
- 02:53 PM Backport #58346: pacific: Thread md_log_replay is hanged for ever.
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49671
merged
- 02:44 PM Feature #18475 (Resolved): qa: run xfstests in the nightlies
- Related tickets -
https://tracker.ceph.com/issues/58945
https://tracker.ceph.com/issues/58938
- 12:46 PM Backport #59000 (In Progress): quincy: cephfs_mirror: local and remote dir root modes are not same
- 10:19 AM Backport #59020 (In Progress): reef: cephfs-data-scan: multiple data pools are not supported
- 09:56 AM Backport #59020 (New): reef: cephfs-data-scan: multiple data pools are not supported
- 09:42 AM Backport #59020 (Duplicate): reef: cephfs-data-scan: multiple data pools are not supported
- 09:51 AM Backport #59019 (In Progress): pacific: cephfs-data-scan: multiple data pools are not supported
- 09:44 AM Backport #59018 (In Progress): quincy: cephfs-data-scan: multiple data pools are not supported
03/13/2023
- 04:54 PM Backport #59041 (In Progress): quincy: libcephfs: client needs to update the mtime and change att...
- https://github.com/ceph/ceph/pull/50730
- 04:54 PM Backport #59040 (Rejected): pacific: libcephfs: client needs to update the mtime and change attr ...
- 04:53 PM Backport #59039 (Duplicate): pacific: libcephfs: client needs to update the mtime and change attr...
- 04:53 PM Backport #59038 (Duplicate): quincy: libcephfs: client needs to update the mtime and change attr ...
- 04:53 PM Backport #59037 (In Progress): quincy: MDS allows a (kernel) client to exceed the xattrs key/valu...
- https://github.com/ceph/ceph/pull/50981
- 04:53 PM Backport #59036 (Duplicate): pacific: MDS allows a (kernel) client to exceed the xattrs key/value...
- 04:53 PM Backport #59035 (New): pacific: MDS allows a (kernel) client to exceed the xattrs key/value limits
- 04:53 PM Backport #59034 (Duplicate): quincy: MDS allows a (kernel) client to exceed the xattrs key/value ...
- 04:52 PM Backport #59033 (Duplicate): quincy: Test failure: test_client_cache_size (tasks.cephfs.test_clie...
- 04:52 PM Backport #59032 (Resolved): pacific: Test failure: test_client_cache_size (tasks.cephfs.test_clie...
- https://github.com/ceph/ceph/pull/51509
- 04:52 PM Backport #59031 (Duplicate): pacific: Test failure: test_client_cache_size (tasks.cephfs.test_cli...
- 04:52 PM Backport #59030 (In Progress): quincy: Test failure: test_client_cache_size (tasks.cephfs.test_cl...
- https://github.com/ceph/ceph/pull/51049
- 04:51 PM Backport #59024 (Duplicate): quincy: mds: warning `clients failing to advance oldest client/flush...
- 04:51 PM Backport #59023 (Resolved): pacific: mds: warning `clients failing to advance oldest client/flush...
- https://github.com/ceph/ceph/pull/50811
- 04:51 PM Backport #59022 (Duplicate): pacific: mds: warning `clients failing to advance oldest client/flus...
- 04:51 PM Backport #59021 (Resolved): quincy: mds: warning `clients failing to advance oldest client/flush ...
- https://github.com/ceph/ceph/pull/50785
- 04:51 PM Backport #59020 (Resolved): reef: cephfs-data-scan: multiple data pools are not supported
- https://github.com/ceph/ceph/pull/50524
- 04:50 PM Backport #59019 (Resolved): pacific: cephfs-data-scan: multiple data pools are not supported
- https://github.com/ceph/ceph/pull/50523
- 04:50 PM Backport #59018 (Resolved): quincy: cephfs-data-scan: multiple data pools are not supported
- https://github.com/ceph/ceph/pull/50522
- 04:50 PM Backport #59017 (Resolved): pacific: snap-schedule: handle non-existent path gracefully during sn...
- https://github.com/ceph/ceph/pull/51246
- 04:50 PM Backport #59016 (Resolved): quincy: snap-schedule: handle non-existent path gracefully during sna...
- https://github.com/ceph/ceph/pull/50780
- 04:49 PM Backport #59015 (Rejected): pacific: Command failed (workunit test fs/quota/quota.sh) on smithi08...
- https://github.com/ceph/ceph/pull/52580
- 04:49 PM Backport #59014 (In Progress): quincy: Command failed (workunit test fs/quota/quota.sh) on smithi...
- https://github.com/ceph/ceph/pull/52579
- 04:47 PM Backport #59007 (Resolved): pacific: mds stuck in 'up:replay' and crashed.
- https://github.com/ceph/ceph/pull/50725
- 04:47 PM Backport #59006 (Resolved): quincy: mds stuck in 'up:replay' and crashed.
- https://github.com/ceph/ceph/pull/50724
- 04:46 PM Backport #59003 (Resolved): pacific: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- https://github.com/ceph/ceph/pull/51039
- 04:46 PM Backport #59002 (Resolved): quincy: mgr/volumes: avoid returning ESHUTDOWN for cli commands
- https://github.com/ceph/ceph/pull/50786
- 04:46 PM Backport #59001 (Resolved): pacific: cephfs_mirror: local and remote dir root modes are not same
- https://github.com/ceph/ceph/pull/53270
- 04:46 PM Backport #59000 (Resolved): quincy: cephfs_mirror: local and remote dir root modes are not same
- https://github.com/ceph/ceph/pull/50528
- 04:45 PM Backport #58994 (Resolved): pacific: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- https://github.com/ceph/ceph/pull/50988
- 04:45 PM Backport #58993 (Resolved): quincy: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- https://github.com/ceph/ceph/pull/50989
- 04:44 PM Backport #58992 (Rejected): pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- https://github.com/ceph/ceph/pull/52584
- 04:44 PM Backport #58991 (In Progress): quincy: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- https://github.com/ceph/ceph/pull/52585
- 04:43 PM Backport #58986 (Resolved): pacific: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- https://github.com/ceph/ceph/pull/50597
- 04:43 PM Backport #58985 (Resolved): quincy: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- https://github.com/ceph/ceph/pull/50595
- 04:43 PM Backport #58984 (Resolved): pacific: cephfs-top: navigate to home screen when no fs
- https://github.com/ceph/ceph/pull/50737
- 04:43 PM Backport #58983 (Resolved): quincy: cephfs-top: navigate to home screen when no fs
- https://github.com/ceph/ceph/pull/50731
- 04:18 PM Bug #58971 (Fix Under Review): mon/MDSMonitor: do not trigger propose on error from prepare_update
- 04:16 PM Bug #58971 (Pending Backport): mon/MDSMonitor: do not trigger propose on error from prepare_update
- https://github.com/ceph/ceph/pull/50404#discussion_r1133791746
- 02:24 PM Feature #55940: quota: accept values in human readable format as well
- Just FYI - follow up PR: https://github.com/ceph/ceph/pull/50493
- 02:00 PM Bug #54501 (Pending Backport): libcephfs: client needs to update the mtime and change attr when s...
- 01:55 PM Bug #58489 (Pending Backport): mds stuck in 'up:replay' and crashed.
- 01:53 PM Bug #58678 (Pending Backport): cephfs_mirror: local and remote dir root modes are not same
- 09:32 AM Bug #58962: ftruncate fails with EACCES on a read-only file created with write permissions
- Forgot to mention that this has been recently verified on v15.2.17 and v16.2.11.
- 08:35 AM Bug #58962 (New): ftruncate fails with EACCES on a read-only file created with write permissions
- When creating a new file opened with write permissions but with its mode set to read-only, such as 400 or 444, ftruncate fails with ...
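A minimal Python reproducer sketch of the scenario described above (the file name is illustrative, and it assumes the working directory is on a CephFS mount): POSIX allows ftruncate() on any descriptor opened for writing regardless of the file's mode bits, so an EACCES here points at the reported bug.

    import os

    path = "readonly-but-writable"  # hypothetical test file on a CephFS mount
    fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o400)
    try:
        os.write(fd, b"some data")  # works: the descriptor is open for writing
        os.ftruncate(fd, 0)         # reported to fail with EACCES on CephFS
    except PermissionError as err:
        print("unexpected EACCES:", err)
    finally:
        os.close(fd)
        os.unlink(path)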
03/10/2023
- 12:42 PM Bug #58651 (Pending Backport): mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 11:50 AM Bug #58949: qa: test_cephfs.test_disk_quota_exceeeded_error is racy
- Venky, this most likely is not a race condition. Your testing branch had a patch that fixes the quota issue. See - https://gi...
- 11:24 AM Bug #58949 (Rejected): qa: test_cephfs.test_disk_quota_exceeeded_error is racy
- test_cephfs.test_disk_quota_exceeeded_error's failure has been reported here before - https://tracker.ceph.com/issues...
- 11:32 AM Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
- Venky Shankar wrote:
> Rishabh Dave wrote:
> > @test_disk_quota_exceeeded_error@ from @src/test/pybind/test_cephfs....
- 04:28 AM Bug #58220 (Pending Backport): Command failed (workunit test fs/quota/quota.sh) on smithi081 with...
- 03:53 AM Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
- Rishabh Dave wrote:
> @test_disk_quota_exceeeded_error@ from @src/test/pybind/test_cephfs.py@ fails on this teutholo...
- 11:22 AM Bug #53573 (Resolved): qa: test new clients against older Ceph clusters
- 11:17 AM Bug #58095 (Pending Backport): snap-schedule: handle non-existent path gracefully during snapshot...
- 04:42 AM Bug #58717 (Pending Backport): client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- 02:26 AM Bug #50016: qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
- See this again in http://qa-proxy.ceph.com/teuthology/yuriw-2023-03-08_20:32:29-fs-wip-yuri3-testing-2023-03-08-0800-...
03/09/2023
- 06:41 PM Bug #58945: qa: xfstests-dev's generic test suite has 20 failures with fuse client
- Related ticket - https://tracker.ceph.com/issues/58742
- 06:40 PM Bug #58945 (New): qa: xfstests-dev's generic test suite has 20 failures with fuse client
- "PR #45960":https://github.com/ceph/ceph/pull/45960 enables running tests from xfstests-dev against CephFS. For FUSE ...
- 03:38 PM Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
- @test_disk_quota_exceeeded_error@ from @src/test/pybind/test_cephfs.py@ fails on this teuthology run - http://pulpito...
- 02:22 PM Bug #58814 (Pending Backport): cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
- 01:07 PM Bug #57087: qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
- Seen in main branch integration test: https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-2...
- 12:54 PM Bug #58823 (Pending Backport): cephfs-top: navigate to home screen when no fs
03/08/2023
- 04:52 PM Bug #58795 (Resolved): cephfs-shell: update path to cephfs-shell since its location has changed
- 02:38 PM Bug #58795 (Fix Under Review): cephfs-shell: update path to cephfs-shell since its location has c...
- 02:32 PM Bug #58795 (Resolved): cephfs-shell: update path to cephfs-shell since its location has changed
- 02:37 PM Bug #55354 (Resolved): cephfs: xfstests-dev can't be run against fuse mounted cephfs
- 02:32 PM Bug #58726 (Pending Backport): Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- 02:16 PM Bug #58938 (New): qa: xfstests-dev's generic test suite has 7 failures with kclient
- "PR #45960":https://github.com/ceph/ceph/pull/45960 enables running tests from xfstests-dev against CephFS. For kerne...
- 07:08 AM Feature #55940 (Resolved): quota: accept values in human readable format as well
- 02:43 AM Bug #55725 (Pending Backport): MDS allows a (kernel) client to exceed the xattrs key/value limits
- 02:39 AM Bug #57985 (Pending Backport): mds: warning `clients failing to advance oldest client/flush tid` ...
- 02:37 AM Bug #56446 (Pending Backport): Test failure: test_client_cache_size (tasks.cephfs.test_client_lim...
- 02:35 AM Bug #58934: snaptest-git-ceph.sh failure with ceph-fuse
- Hmmm... similar to https://tracker.ceph.com/issues/17172
- 02:31 AM Bug #58934 (Duplicate): snaptest-git-ceph.sh failure with ceph-fuse
- https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/...
- 02:04 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
- Milind, PTAL.
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testin...
03/07/2023
- 02:13 PM Bug #57064 (Need More Info): qa: test_add_ancestor_and_child_directory failure
- Looking at the logs, mirror daemon is missing and thus the command failed...
- 08:52 AM Bug #57064: qa: test_add_ancestor_and_child_directory failure
- Dhairya Parmar wrote:
> I tried digging into this failure, while looking at teuthology log, I see
> [...]
>
> I...
- 02:59 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Rishabh Dave wrote:
> http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-...
03/06/2023
- 04:25 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/...
- 04:18 PM Bug #58564: workunit suites/dbench.sh fails with error code 1
- http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/...
- 03:17 PM Bug #56830 (Can't reproduce): crash: cephfs::mirror::PeerReplayer::pick_directory()
- After thoroughly assessing the issue with the limited available data in the tracker, it's hard to tell what led to t...
- 03:16 PM Bug #56830: crash: cephfs::mirror::PeerReplayer::pick_directory()
- Dhairya Parmar wrote:
> Issue seems to be at:
> [...]
> @ https://github.com/ceph/ceph/blob/main/src/tools/cephfs_...
- 01:49 PM Bug #56830 (Fix Under Review): crash: cephfs::mirror::PeerReplayer::pick_directory()
- See updated in PR.
- 09:07 AM Feature #58877: mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
- I did try to reproduce the issue mentioned in the tracker due to which the feature is required. I found that:
1. If ...
- 04:15 AM Bug #58029 (Pending Backport): cephfs-data-scan: multiple data pools are not supported
- 12:53 AM Backport #58609 (Resolved): quincy: cephfs:filesystem became read only after Quincy upgrade
- 12:52 AM Backport #58602 (Resolved): quincy: client stalls during vstart_runner test
03/03/2023
- 08:01 AM Backport #58865 (In Progress): quincy: cephfs-top: Sort menu doesn't show 'No filesystem availabl...
- 07:40 AM Bug #57280 (Resolved): qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fe...
- backport merged
- 07:38 AM Backport #58604 (Resolved): quincy: qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails -...
- 07:35 AM Backport #58253 (Resolved): quincy: mds/PurgeQueue: don't consider filer_max_purge_ops when _calc...
- 07:33 AM Bug #58138 (Resolved): "ceph nfs cluster info" shows junk data for non-existent cluster
- backport merged
- 06:59 AM Backport #58348 (Resolved): quincy: "ceph nfs cluster info" shows junk data for non-existent clus...
03/02/2023
- 10:43 PM Backport #58599: quincy: mon: prevent allocating snapids allocated for CephFS
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50090
merged
- 10:40 PM Backport #58604: quincy: qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to ...
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49957
merged
- 10:39 PM Backport #58602: quincy: client stalls during vstart_runner test
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49942
merged
- 10:39 PM Backport #58609: quincy: cephfs:filesystem became read only after Quincy upgrade
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49939
merged
- 10:38 PM Backport #58253: quincy: mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49655
merged
- 10:37 PM Backport #58348: quincy: "ceph nfs cluster info" shows junk data for non-existent cluster
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/49654
merged
- 10:31 PM Backport #57970: quincy: cephfs-top: new options to limit and order-by
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50151
merged
- 01:38 PM Feature #57090 (Resolved): MDSMonitor,mds: add MDSMap flag to prevent clients from connecting
- 10:10 AM Bug #58727: quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)
- Not reproducible in main branch (wip-vshankar-testing-20230228.105516 is just a couple of test PRs on top of main bra...
- 07:05 AM Bug #58727: quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)
- ... and the kclient sees the error:...
- 06:48 AM Bug #58727: quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)
- The MDS did reply back an ENOSPC to the client:...
- 07:42 AM Bug #58760 (Resolved): kclient: xfstests-dev generic/317 failed
- 05:37 AM Bug #58760: kclient: xfstests-dev generic/317 failed
- qa changes are: https://github.com/ceph/ceph/pull/50217