Activity
From 01/18/2023 to 02/16/2023
02/16/2023
- 10:02 PM Bug #58394: nofail option in fstab not supported
- Interesting update: I noticed that even without the nofail option, systems will continue to boot even if it can't complete a...
- 03:43 PM Backport #58350: quincy: MDS: scan_stray_dir doesn't walk through all stray inode fragment
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49670
merged
- 12:52 PM Bug #58746 (Duplicate): quincy: qa: VersionNotFoundError: Failed to fetch package version
- VersionNotFoundError: Failed to fetch package version in [1]
[1] http://qa-proxy.ceph.com/teuthology/yuriw-2023-02...
- 11:43 AM Fix #58744 (Pending Backport): qa: intermittent nfs test failures at nfs cluster creation
- While working on https://github.com/ceph/ceph/pull/49460, I found that any random test would fail with "AssertionError: NF...
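A minimal sketch (not the actual fix from the PR) of the kind of wait-and-retry a qa test can do while the NFS cluster comes up; run_ceph_cmd is a hypothetical helper assumed to return the command's stdout:
<pre>
import json
import time


def wait_for_nfs_cluster(run_ceph_cmd, cluster_id, timeout=60, interval=5):
    """Poll 'ceph nfs cluster info <id>' until the cluster reports back.

    run_ceph_cmd is assumed to run a ceph CLI command and return its stdout;
    the real qa helpers differ, so treat this purely as an illustration.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = run_ceph_cmd("nfs cluster info {}".format(cluster_id))
        try:
            info = json.loads(out) if out else {}
        except ValueError:
            info = {}
        if info.get(cluster_id):  # an entry for the cluster means creation finished
            return info[cluster_id]
        time.sleep(interval)
    raise AssertionError("NFS cluster {} not ready after {}s".format(cluster_id, timeout))
</pre>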
- 10:49 AM Bug #58220 (Fix Under Review): Command failed (workunit test fs/quota/quota.sh) on smithi081 with...
- 07:41 AM Bug #58220 (In Progress): Command failed (workunit test fs/quota/quota.sh) on smithi081 with stat...
- 09:54 AM Bug #58645 (Fix Under Review): Unclear error when creating new subvolume when subvolumegroup has ...
- 07:45 AM Backport #58322 (Resolved): quincy: mds crashed "assert_condition": "state == LOCK_XLOCK || stat...
- 12:35 AM Backport #58322: quincy: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_X...
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/49539
merged
- 12:26 AM Backport #58322: quincy: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_X...
- https://github.com/ceph/ceph/pull/49884 merged
- 04:36 AM Backport #57242 (Resolved): quincy: mgr/volumes: Clone operations are failing with Assertion Error
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47894
Merged.
- 04:36 AM Backport #57241 (Resolved): pacific: mgr/volumes: Clone operations are failing with Assertion Error
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47112
Merged.
- 01:12 AM Backport #58344 (Resolved): quincy: mds: switch submit_mutex to fair mutex for MDLog
- 12:36 AM Backport #58344: quincy: mds: switch submit_mutex to fair mutex for MDLog
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49633
merged
- 12:40 AM Backport #58347: quincy: mds: fragment directory snapshots
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49673
merged
- 12:38 AM Backport #58345: quincy: Thread md_log_replay is hanged for ever.
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49672
merged
- 12:33 AM Backport #58249: quincy: mds: avoid ~mdsdir's scrubbing and reporting damage health status
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49473
merged
- 12:29 AM Backport #57760: quincy: qa: test_scrub_pause_and_resume_with_abort failure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49459
merged
02/15/2023
- 01:22 PM Bug #58564 (In Progress): workunit suites/dbench.sh fails with error code 1
- 01:22 PM Bug #58564: workunit suites/dbench.sh fails with error code 1
- Reported this to linux kernel mail list: https://lore.kernel.org/lkml/768be93b-a401-deab-600c-f946e0bd27fa@redhat.com...
- 02:01 AM Bug #58564: workunit suites/dbench.sh fails with error code 1
- It's a kernel cgroup core deadlock bug:...
- 10:49 AM Bug #58727 (New): quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)
- /a/yuriw-2023-02-13_20:43:24-fs-wip-yuri5-testing-2023-02-07-0850-quincy-distro-default-smithi/7171571...
- 10:48 AM Bug #58726 (Pending Backport): Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- /a/yuriw-2023-02-13_20:43:24-fs-wip-yuri5-testing-2023-02-07-0850-quincy-distro-default-smithi/7171607...
02/14/2023
- 12:49 PM Bug #58717 (Pending Backport): client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- If __setattrx() fails, it will leave the CEPH_CAP_FILE_WR caps reference held, leaking it.
02/13/2023
- 09:39 AM Backport #58599 (In Progress): quincy: mon: prevent allocating snapids allocated for CephFS
02/10/2023
- 01:01 PM Bug #53246: rhel 8.4 and centos stream unable to install cephfs-java
- /a/sseshasa-2023-02-10_10:52:51-rados-wip-sseshasa-quincy-2023-02-10-mclk-cost-fixes-1-distro-default-smithi/7167432/
- 10:03 AM Bug #57014: cephfs-top: add an option to dump the computed values to stdout
- This feature has become really important given that (backport) bugs creep in due to the lack of automated tests.
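A rough sketch of what such an option could look like (hypothetical flag and data layout, not the actual cephfs-top change): dump the already-computed values once as JSON instead of drawing the curses screen.
<pre>
import argparse
import json
import sys


def dump_metrics(metrics):
    """Print the computed values once and exit (illustrative only).

    metrics is assumed to be a plain dict such as
    {"a": {"client.4305": {"chit": 98.0, "rlatavg": 0.4}}}; the real structure
    inside cephfs-top may differ.
    """
    json.dump(metrics, sys.stdout, indent=2, sort_keys=True)
    sys.stdout.write("\n")


parser = argparse.ArgumentParser(prog="cephfs-top")
parser.add_argument("--dump", action="store_true",
                    help="dump computed values to stdout instead of the curses UI")
</pre>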
- 12:51 AM Feature #58680: libcephfs: clear the suid/sgid for fallocate
- Usually, when a file is changed by unprivileged users, the *suid/sgid* bits should be cleared to avoid a possible attack from ...
02/09/2023
- 05:04 PM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- I'm experimenting with reproducing the problem on demand. Once I have a way to make this bad looping ceph-fuse behavio...
- 04:13 AM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- Hi Andras,
Andras Pataki wrote:
> There definitely are config changes that are different from the defaults.
> Al...
- 02:10 PM Feature #58680: libcephfs: clear the suid/sgid for fallocate
- The steps to verify this:...
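The verification steps are truncated above; a self-contained sketch of one way to check the behavior (it assumes a writable CephFS mount at /mnt/cephfs and that it is run as an unprivileged user, both of which are assumptions here):
<pre>
import os
import stat
import tempfile

MOUNT = "/mnt/cephfs"  # assumed mount point; adjust to the CephFS mount under test

fd, path = tempfile.mkstemp(dir=MOUNT)
try:
    # Give the file the setuid/setgid bits, then allocate space into it.
    os.fchmod(fd, 0o755 | stat.S_ISUID | stat.S_ISGID)
    before = stat.S_IMODE(os.fstat(fd).st_mode)
    os.posix_fallocate(fd, 0, 1 << 20)  # allocate 1 MiB
    after = stat.S_IMODE(os.fstat(fd).st_mode)
    print("mode before fallocate: {}, after: {}".format(oct(before), oct(after)))
    # Per this tracker, an unprivileged writer should see S_ISUID/S_ISGID cleared.
finally:
    os.close(fd)
    os.unlink(path)
</pre>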
- 02:05 PM Feature #58680 (Fix Under Review): libcephfs: clear the suid/sgid for fallocate
- 02:00 PM Feature #58680 (Pending Backport): libcephfs: clear the suid/sgid for fallocate
- ...
- 11:00 AM Backport #58598 (In Progress): pacific: mon: prevent allocating snapids allocated for CephFS
- 10:03 AM Bug #58678 (Pending Backport): cephfs_mirror: local and remote dir root modes are not same
- The top-level dir modes of the local snap dir root and the remote snap dir root don't match.
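A small sketch of how the mismatch can be observed from the client side (the mount points below are assumptions; the real paths depend on the mirror setup):
<pre>
import os
import stat

# Hypothetical mount points for the primary (local) and mirrored (remote) filesystems.
LOCAL_DIR_ROOT = "/mnt/cephfs-local/mirrored/dir"
REMOTE_DIR_ROOT = "/mnt/cephfs-remote/mirrored/dir"

local_mode = stat.S_IMODE(os.stat(LOCAL_DIR_ROOT).st_mode)
remote_mode = stat.S_IMODE(os.stat(REMOTE_DIR_ROOT).st_mode)
print("local: {}  remote: {}".format(oct(local_mode), oct(remote_mode)))
# Expected once fixed: cephfs-mirror keeps the top-level dir root modes identical.
assert local_mode == remote_mode
</pre>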
- 09:00 AM Bug #58677 (Pending Backport): cephfs-top: test the current python version is supported
- Test whether the current python version is supported. Many curses constants and APIs are introduced in newer versions of py...
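A minimal sketch of such a guard (the minimum version and the fallback here are assumptions, not the actual cephfs-top patch):
<pre>
import curses
import sys

MIN_PYTHON = (3, 6)  # assumed minimum; use whatever version the tool actually targets

if sys.version_info < MIN_PYTHON:
    sys.exit("cephfs-top requires python >= {}.{}".format(*MIN_PYTHON))

# curses.A_ITALIC only exists on python >= 3.7; degrade to A_BOLD elsewhere
# (which is also what dropping A_ITALIC, as in the related bug, boils down to).
ITALIC_OR_BOLD = getattr(curses, "A_ITALIC", curses.A_BOLD)
</pre>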
02/08/2023
- 02:56 PM Bug #56270 (Duplicate): crash: File "mgr/snap_schedule/module.py", in __init__: self.client = Sna...
- Based on this appearing to have been resolved, I'm closing this as a duplicate of #56269.
- 02:53 PM Bug #56269 (Resolved): crash: File "mgr/snap_schedule/module.py", in __init__: self.client = Snap...
- 02:38 PM Backport #58667 (In Progress): quincy: cephfs-top: drop curses.A_ITALIC
- 01:28 PM Backport #58667 (Resolved): quincy: cephfs-top: drop curses.A_ITALIC
- https://github.com/ceph/ceph/pull/48677
- 01:48 PM Backport #58668 (In Progress): pacific: cephfs-top: drop curses.A_ITALIC
- 01:28 PM Backport #58668 (Resolved): pacific: cephfs-top: drop curses.A_ITALIC
- https://github.com/ceph/ceph/pull/50029
- 01:23 PM Bug #58663 (Pending Backport): cephfs-top: drop curses.A_ITALIC
- 10:22 AM Bug #58663 (Fix Under Review): cephfs-top: drop curses.A_ITALIC
- 10:15 AM Bug #58663 (Resolved): cephfs-top: drop curses.A_ITALIC
- Drop curses.A_ITALIC used in formatting "Filesystem:" header as it's not supported in older python versions.
A_BOLD...
- 10:06 AM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- I see trim messages from object cacher...
02/07/2023
- 04:26 PM Bug #58564: workunit suites/dbench.sh fails with error code 1
- Hi, Xiubo. I didn't check dmesg so I don't know if it had a call trace in it.
- 02:33 PM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- There definitely are config changes that are different from the defaults.
All these objects are in a 6+3 erasure cod...
- 02:08 PM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- It’s also interesting that these appear to all be full-object reads, and the objects are larger than normal — 24 MiB,...
- 10:33 AM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- There seem to be a lot of cache misses for objects in ObjectCacher. The retry is coming from:...
- 04:10 AM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- Andras Pataki wrote:
> I've uploaded the full 1 minute ceph-fuse trace as:
> ceph-post-file: d56ebc47-4ef7-4f01-952...
- 04:57 AM Bug #58651 (Fix Under Review): mgr/volumes: avoid returning ESHUTDOWN for cli commands
- 04:49 AM Bug #58651 (Pending Backport): mgr/volumes: avoid returning ESHUTDOWN for cli commands
02/06/2023
- 01:59 PM Bug #58617 (Triaged): mds: "Failed to authpin,subtree is being exported" results in large number ...
- 01:56 PM Bug #58640 (Triaged): ceph-fuse in infinite loop reading objects without client requests
- 01:52 PM Bug #58645 (Triaged): Unclear error when creating new subvolume when subvolumegroup has ceph.dir....
- 11:01 AM Bug #58645 (Fix Under Review): Unclear error when creating new subvolume when subvolumegroup has ...
- When an empty name is given while creating a subvolume, this will turn the subvolumegroup into a subvolume instead of...
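An illustrative guard (not the actual mgr/volumes patch) for the scenario described: reject an empty subvolume name up front instead of silently operating on the group path itself.
<pre>
import errno


def validate_subvolume_name(sub_name):
    """Raise EINVAL for empty or blank names (illustrative only)."""
    if not sub_name or not sub_name.strip():
        raise OSError(errno.EINVAL, "subvolume name cannot be empty")
    return sub_name.strip()
</pre>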
- 10:37 AM Bug #53126 (Closed): In the 5.4.0 kernel, the mount of ceph-fuse fails
- 10:27 AM Bug #53126: In the 5.4.0 kernel, the mount of ceph-fuse fails
- Ok, I'm planning this; I think I can close this bug.
02/03/2023
- 08:25 PM Bug #58640: ceph-fuse in infinite loop reading objects without client requests
- I've uploaded the full 1 minute ceph-fuse trace as:
ceph-post-file: d56ebc47-4ef7-4f01-952f-8569c8c92982
- 08:23 PM Bug #58640 (Triaged): ceph-fuse in infinite loop reading objects without client requests
- We've been running into a strange issue with ceph-fuse on some nodes lately. After some job runs on the node (and fi...
- 02:02 PM Bug #58411: mds: a few simple operations crash mds
- This bug is only triggered by the kernel client; the ceph-fuse client doesn't have this problem.
02/02/2023
- 11:42 AM Backport #58600 (In Progress): quincy: mds/Server: -ve values cause unexpected client eviction wh...
- Since this tracker depends on https://github.com/ceph/ceph/pull/48252, which is pending merge, I've cherry-pick...
- 07:53 AM Bug #58489 (Fix Under Review): mds stuck in 'up:replay' and crashed.
- 07:34 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- The *sessionmap* will be persisted when expiring any MDLog Segment in *LogSegment::try_to_expire()->sessionmap.save()...
- 06:41 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > Xiubo Li wrote:
> > > > [...]
> > > > > > ...
- 05:51 AM Backport #57729 (In Progress): quincy: Quincy 17.2.3 pybind/mgr/status: assert metadata failed
- 05:48 AM Backport #57728 (In Progress): pacific: Quincy 17.2.3 pybind/mgr/status: assert metadata failed
02/01/2023
- 11:34 PM Feature #58550 (Fix Under Review): mds: add perf counter to track (relatively) larger log events
- 08:14 PM Feature #58550 (In Progress): mds: add perf counter to track (relatively) larger log events
- 03:31 PM Documentation #51459 (Resolved): doc: document what kinds of damage forward scrub can repair
- Neeraj, next time please set the "Pending Backport" status on the issue - the bot creates the tickets that you, or somebody, can b...
- 03:09 PM Documentation #51459 (Pending Backport): doc: document what kinds of damage forward scrub can repair
- 02:37 PM Documentation #51459: doc: document what kinds of damage forward scrub can repair
- Quincy backport: https://github.com/ceph/ceph/pull/49932
Pacific backport: https://github.com/ceph/ceph/pull/49933
- 03:29 PM Backport #58623 (Resolved): pacific: doc: document what kinds of damage forward scrub can repair
- https://github.com/ceph/ceph/pull/49933
- 03:20 PM Backport #58623 (Resolved): pacific: doc: document what kinds of damage forward scrub can repair
- 03:28 PM Backport #58624 (Resolved): quincy: doc: document what kinds of damage forward scrub can repair
- https://github.com/ceph/ceph/pull/49932
- 03:20 PM Backport #58624 (Resolved): quincy: doc: document what kinds of damage forward scrub can repair
- 10:25 AM Bug #51824 (Fix Under Review): pacific scrub ~mds_dir causes stray related ceph_assert, abort and...
- 10:20 AM Backport #58604 (In Progress): quincy: qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fail...
- https://github.com/ceph/ceph/pull/49957
- 10:20 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > [...]
> > > > > > Why would the session map vers...
- 09:42 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Xiubo Li wrote:
> Thomas Widhalm wrote:
> > Xiubo Li wrote:
> > > Thomas Widhalm wrote:
> > > > In the meantime I...
- 09:23 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Thomas Widhalm wrote:
> Xiubo Li wrote:
> > Thomas Widhalm wrote:
> > > In the meantime I rebooted my hosts for re...
- 09:13 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Xiubo Li wrote:
> Thomas Widhalm wrote:
> > In the meantime I rebooted my hosts for regular maintenance (rolling re...
- 09:09 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Venky Shankar wrote:
> Xiubo Li wrote:
> > [...]
> > > > > Why would the session map version get reset to 0 after ...
- 08:49 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Xiubo Li wrote:
> [...]
> > > > Why would the session map version get reset to 0 after a mds failover? Maybe I'm mi...
- 08:23 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- [...]
> > > Why would the session map version get reset to 0 after a mds failover? Maybe I'm missing something somew...
- 07:02 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > Venky Shankar wrote:
> > > > Xiubo Li wrote:
> ...
- 06:52 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > Xiubo Li wrote:
> > [...]
> > > > >
> > ...
- 06:42 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> [...]
> > > >
> > > > Case B:
> > > >
> > > > I...
- 06:25 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Venky Shankar wrote:
> Xiubo Li wrote:
[...]
> > >
> > > Case B:
> > >
> > > If the MDS crashes and leaving t...
- 06:20 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Venky Shankar wrote:
> Xiubo Li wrote:
[...]
> > > Wouldn't the ESessions log event update the sessionmap versio...
- 06:10 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Xiubo Li wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > Xiubo Li wrote:
> > > > There is one case could ...
- 06:09 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > There is one case could trigger IMO:
> > >
> >...
- 05:14 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > There is one case could trigger IMO:
> > >
> >...
- 03:20 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Thomas Widhalm wrote:
> In the meantime I rebooted my hosts for regular maintenance (rolling reboot with only one no...
- 10:13 AM Backport #58600: quincy: mds/Server: -ve values cause unexpected client eviction while handling c...
- backport not possible until https://github.com/ceph/ceph/pull/48252 is merged because it brings in a test file that t...
- 10:02 AM Backport #58601 (In Progress): pacific: mds/Server: -ve values cause unexpected client eviction w...
- 10:01 AM Backport #58601: pacific: mds/Server: -ve values cause unexpected client eviction while handling ...
- https://github.com/ceph/ceph/pull/49956
- 04:21 AM Bug #58340 (Fix Under Review): mds: fsstress.sh hangs with multimds (deadlock between unlink and ...
- 02:16 AM Bug #58564: workunit suites/dbench.sh fails with error code 1
- Rishabh,
BTW, is there any other call trace in the dmesg? Currently I can't access the Sepia nodes, so I couldn...
01/31/2023
- 01:26 PM Documentation #58620 (New): document asok commands
- Some of the asok commands can be found here and there in the docs, but most lack documentation; it would be good to ha...
- 09:50 AM Bug #58597: The MDS crashes when deleting a specific file
- Tobias Reinhard wrote:
> Venky Shankar wrote:
> > Hi Tobias,
> >
> > > The crash happens, whenever I delete the ...
- 08:32 AM Bug #58597: The MDS crashes when deleting a specific file
- Venky Shankar wrote:
> Hi Tobias,
>
> > The crash happens, whenever I delete the file - every time.
> >
> > I ... - 05:19 AM Bug #58597: The MDS crashes when deleting a specific file
- Hi Tobias,
> The crash happens, whenever I delete the file - every time.
>
> I don't know how or when the corru...
- 07:49 AM Bug #58617: mds: "Failed to authpin,subtree is being exported" results in large number of blocked...
- 1. The first commit in PR 49940 is for the case where the cluster hung in state EXPORT_WARNING, which is consistent with the log.
...
- 06:37 AM Bug #58617: mds: "Failed to authpin,subtree is being exported" results in large number of blocked...
- zhikuo du wrote:
> https://tracker.ceph.com/issues/42338
> There is another tracker for "Failed to authpin,subtree ...
- 06:30 AM Bug #58617: mds: "Failed to authpin,subtree is being exported" results in large number of blocked...
- https://tracker.ceph.com/issues/42338
There is another tracker for "Failed to authpin,subtree is being exported".
...
- 06:17 AM Bug #58617: mds: "Failed to authpin,subtree is being exported" results in large number of blocked...
- There is another tracker also stuck for *Failed to authpin,subtree is being exported*:
https://tracker.ceph.com/is...
- 05:55 AM Bug #58617 (Triaged): mds: "Failed to authpin,subtree is being exported" results in large number ...
- A problem: the cluster (octopus 15.2.16) has a large number of blocked requests. The error associated with the block is...
- 06:32 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Venky Shankar wrote:
> Xiubo Li wrote:
> > There is one case could trigger IMO:
> >
> > For example, there are t...
- 06:19 AM Bug #58619 (In Progress): mds: client evict [-h|--help] evicts ALL clients
- ceph --admin-daemon $socketfile client evict [-h|--help] evicts ALL clients.
It is observed that adding "--help|-h" ...
- 06:04 AM Backport #58603 (In Progress): pacific: client stalls during vstart_runner test
- 06:00 AM Backport #58602 (In Progress): quincy: client stalls during vstart_runner test
- 05:48 AM Backport #58608 (In Progress): pacific: cephfs:filesystem became read only after Quincy upgrade
- 05:46 AM Backport #58609 (In Progress): quincy: cephfs:filesystem became read only after Quincy upgrade
- 04:19 AM Bug #58219 (Fix Under Review): Test failure: test_journal_migration (tasks.cephfs.test_journal_mi...
- 04:03 AM Documentation #51459 (Resolved): doc: document what kinds of damage forward scrub can repair
01/30/2023
- 04:37 PM Backport #58609 (Resolved): quincy: cephfs:filesystem became read only after Quincy upgrade
- https://github.com/ceph/ceph/pull/49939
- 04:37 PM Backport #58608 (Resolved): pacific: cephfs:filesystem became read only after Quincy upgrade
- https://github.com/ceph/ceph/pull/49941
- 04:32 PM Bug #58082 (Pending Backport): cephfs:filesystem became read only after Quincy upgrade
- 02:45 PM Documentation #51459 (In Progress): doc: document what kinds of damage forward scrub can repair
- 02:33 PM Bug #56446 (Fix Under Review): Test failure: test_client_cache_size (tasks.cephfs.test_client_lim...
- 01:37 PM Bug #58597: The MDS crashes when deleting a specific file
- Venky Shankar wrote:
> Tobias Reinhard wrote:
> > [...]
> >
> >
> > This is perfectly reproducible, unfortunat...
- 01:13 PM Bug #58597: The MDS crashes when deleting a specific file
- Tobias Reinhard wrote:
> [...]
>
>
> This is perfectly reproducible, unfortunately on a Production System.
T...
- 11:42 AM Backport #58604 (Resolved): quincy: qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails -...
- 11:36 AM Bug #57280 (Pending Backport): qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Fail...
- 11:27 AM Backport #58603 (Resolved): pacific: client stalls during vstart_runner test
- https://github.com/ceph/ceph/pull/49944
- 11:27 AM Backport #58602 (Resolved): quincy: client stalls during vstart_runner test
- https://github.com/ceph/ceph/pull/49942
- 11:20 AM Bug #56532 (Pending Backport): client stalls during vstart_runner test
- 11:02 AM Bug #56532 (Resolved): client stalls during vstart_runner test
- 11:19 AM Backport #58601 (Resolved): pacific: mds/Server: -ve values cause unexpected client eviction whil...
- 11:19 AM Backport #58600 (Resolved): quincy: mds/Server: -ve values cause unexpected client eviction while...
- 11:14 AM Bug #57359 (Pending Backport): mds/Server: -ve values cause unexpected client eviction while hand...
- 11:10 AM Backport #58599 (In Progress): quincy: mon: prevent allocating snapids allocated for CephFS
- https://github.com/ceph/ceph/pull/50090
- 11:09 AM Backport #58598 (Resolved): pacific: mon: prevent allocating snapids allocated for CephFS
- https://github.com/ceph/ceph/pull/50050
- 11:05 AM Feature #16745 (Pending Backport): mon: prevent allocating snapids allocated for CephFS
- 09:35 AM Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
- Latest instance - https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testi...
- 09:18 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Seeing this in my recent run - https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125...
01/29/2023
01/27/2023
- 02:22 PM Bug #58489: mds stuck in 'up:replay' and crashed.
- In the meantime I rebooted my hosts for regular maintenance (rolling reboot with only one node down). Since then I ca...
- 07:56 AM Bug #58434 (Rejected): Widespread metadata corruption
01/25/2023
- 09:06 PM Bug #58434: Widespread metadata corruption
- Venky Shankar wrote:
> Nathan Fish wrote:
> > Further evidence indicates the issue may not be with Ceph; possibly a... - 01:21 PM Bug #58576 (Rejected): do not allow invalid flags with cmd 'scrub start'
- Currently, the 'scrub start' cmd accepts any flag and, looking at the logs, it does actually start the scrubbing on the p...
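An illustrative sketch, in Python for brevity, of the validation this asks for (the real 'scrub start' handling lives in the MDS, and the flag set below is an assumption, not the authoritative list):
<pre>
VALID_SCRUB_FLAGS = {"recursive", "repair", "force", "scrub_mdsdir"}  # assumed set


def parse_scrub_flags(flags):
    """Split a comma-separated flag string and reject anything unknown."""
    requested = {f.strip() for f in flags.split(",") if f.strip()}
    unknown = requested - VALID_SCRUB_FLAGS
    if unknown:
        raise ValueError("unknown scrub flags: {}".format(", ".join(sorted(unknown))))
    return requested
</pre>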
- 11:21 AM Backport #58573 (In Progress): pacific: mds: fragment directory snapshots
- 10:30 AM Backport #58573 (Resolved): pacific: mds: fragment directory snapshots
- https://github.com/ceph/ceph/pull/49867
01/24/2023
- 02:27 PM Bug #58564 (In Progress): workunit suites/dbench.sh fails with error code 1
- This dbench test job fails with error code 1 and error message @write failed on handle 11133 (Resource temporarily un...
- 12:33 PM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
- Rishabh - I'm taking this one since it's blocking testing for https://tracker.ceph.com/issues/57985
- 12:10 PM Bug #57985: mds: warning `clients failing to advance oldest client/flush tid` seen with some work...
- Enforce (stricter) client-id check in client limit test - https://github.com/ceph/ceph/pull/49844
- 08:55 AM Cleanup #58561 (New): cephfs-top: move FSTopBase to fstopbase.py
- Move FSTopBase to fstopbase.py and import it into the cephfs-top script. This is easy to do. But the issue is when it's don...
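A bare sketch of the easy half of this refactor (illustrative stubs; the complication hinted at above is truncated and not reproduced here). Since the cephfs-top script has no .py extension, anything that needs to import the script itself can load it by path:
<pre>
# fstopbase.py would export the shared base class:
class FSTopBase(object):
    """Stub standing in for the real FSTopBase."""


# The cephfs-top script would then simply do:
#   from fstopbase import FSTopBase
#
# Loading the extensionless script itself (hypothetical usage, e.g. from tests):
import importlib.machinery
import importlib.util

loader = importlib.machinery.SourceFileLoader("cephfs_top", "./cephfs-top")
spec = importlib.util.spec_from_loader("cephfs_top", loader)
module = importlib.util.module_from_spec(spec)
# loader.exec_module(module)  # executes the script body; commented out here
</pre>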
01/23/2023
- 03:12 PM Feature #58550 (Fix Under Review): mds: add perf counter to track (relatively) larger log events
- Logging of a large (in size) subtreemap over and over again in the log segment is a well-known scalability-limiting fact...
- 03:07 PM Feature #58488: mds: avoid encoding srnode for each ancestor in an EMetaBlob log event
- Greg mentioned that this be worked on before it's actually proved that this is causing slowness in the MDS - I agree. ...
- 01:48 PM Bug #58489 (Triaged): mds stuck in 'up:replay' and crashed.
- 01:18 PM Bug #58434: Widespread metadata corruption
- Nathan Fish wrote:
> Further evidence indicates the issue may not be with Ceph; possibly a bug in account management...
01/20/2023
- 05:50 AM Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
- The client *should* receive an updated snap trace from the MDS. Using that some of the snap inodes should be invalida...
01/19/2023
- 02:12 PM Bug #58411 (Triaged): mds: a few simple operations crash mds
- Good catch!
- 01:15 PM Bug #58489: mds stuck in 'up:replay' and crashed.
- Hi,
Thanks for the help. Here are the debug logs of my two active MDS. I thought, I'll tar the whole bunch for you...
- 01:11 PM Bug #58489: mds stuck in 'up:replay' and crashed.
- Xiubo Li wrote:
> There is one case could trigger IMO:
>
> For example, there are two MDLog entries:
>
> ESess...
- 12:51 PM Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
- The client log analysis. The lookup on *'.snap/snapshot1'* is successful from the dentry cache. The caps and the leas...
01/18/2023
- 11:47 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- There is one case that could trigger this, IMO:
For example, there are two MDLog entries:
ESessions entry --> {..., versio...
- 11:33 AM Bug #58489 (Pending Backport): mds stuck in 'up:replay' and crashed.
- The issue was reported by an upstream community user.
The cluster had two filesystems and the active mds of both the f...
- 09:48 AM Feature #58488: mds: avoid encoding srnode for each ancestor in an EMetaBlob log event
- More detail about this:
For example for */AAAA/BBBB/CCCC/* we create snapshots under */* and */AAAA/BBBB/*, and la...
- 09:39 AM Feature #58488 (New): mds: avoid encoding srnode for each ancestor in an EMetaBlob log event
- This happens via MDCache::predirty_journal_parents() where MDCache::journal_dirty_inode() is called for each ancestor...
- 02:56 AM Bug #58482 (Fix Under Review): mds: catch damage to CDentry's first member before persisting
- 02:54 AM Bug #58482 (Pending Backport): mds: catch damage to CDentry's first member before persisting