Activity
From 12/22/2022 to 01/20/2023
01/20/2023
- 05:50 AM Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
- The client *should* receive an updated snap trace from the MDS. Using that some of the snap inodes should be invalida...
01/19/2023
- 02:12 PM Bug #58411 (Triaged): mds: a few simple operations crash mds
- Good catch!
- 01:15 PM Bug #58489: mds stuck in 'up:replay' and crashed.
- Hi,
Thanks for the help. Here are the debug logs of my two active MDS. I thought, I'll tar the whole bunch for you...
- 01:11 PM Bug #58489: mds stuck in 'up:replay' and crashed.
- Xiubo Li wrote:
> There is one case could trigger IMO:
>
> For example, there are two MDLog entries:
>
> ESess...
- 12:51 PM Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
- The client log analysis: the lookup on *'.snap/snapshot1'* is successful from the dentry cache. The caps and the leas...
01/18/2023
- 11:47 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- There is one case that could trigger this, IMO:
For example, there are two MDLog entries:
ESessions entry --> {..., versio...
- 11:33 AM Bug #58489 (Resolved): mds stuck in 'up:replay' and crashed.
- The issue was reported by an upstream community user.
The cluster had two filesystems and the active mds of both the f...
- 09:48 AM Feature #58488: mds: avoid encoding srnode for each ancestor in an EMetaBlob log event
- More detail about this:
For example, for */AAAA/BBBB/CCCC/* we create snapshots under */* and */AAAA/BBBB/*, and la...
- 09:39 AM Feature #58488 (New): mds: avoid encoding srnode for each ancestor in an EMetaBlob log event
- This happens via MDCache::predirty_journal_parents() where MDCache::journal_dirty_inode() is called for each ancestor...
- 02:56 AM Bug #58482 (Fix Under Review): mds: catch damage to CDentry's first member before persisting
- 02:54 AM Bug #58482 (Resolved): mds: catch damage to CDentry's first member before persisting
01/17/2023
- 12:01 PM Bug #57985: mds: warning `clients failing to advance oldest client/flush tid` seen with some work...
- partial fix (debug aid): https://github.com/ceph/ceph/pull/49766
01/13/2023
- 06:04 PM Bug #58434: Widespread metadata corruption
- Further evidence indicates the issue may not be with Ceph; possibly a bug in account management. Still investigating....
01/12/2023
- 07:38 PM Bug #58434 (Rejected): Widespread metadata corruption
- One of our CephFS volumes has become corrupt, with user files being substituted for each other with no clear pattern;...
- 01:21 PM Bug #57985: mds: warning `clients failing to advance oldest client/flush tid` seen with some work...
- Venky Shankar wrote:
> https://bugzilla.redhat.com/show_bug.cgi?id=2134709
>
> Generally seen when the MDS is hea...
- 05:28 AM Bug #58082: cephfs:filesystem became read only after Quincy upgrade
- Prayank Saxena wrote:
> okay i see, thanks Xiubo li
>
> i was going through the link and found reset of journal a...
01/11/2023
- 10:55 AM Bug #58219: Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournal...
- https://github.com/ceph/ceph/pull/48418#issuecomment-1378478510
- 09:47 AM Bug #58394 (Triaged): nofail option in fstab not supported
- 09:47 AM Bug #58394: nofail option in fstab not supported
- Dhairya, please take this one. I think we would want to get the semantics right for `nofail` option with ceph-fuse an...
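For reference, a ceph-fuse fstab entry using nofail might look roughly like the following sketch (the mount point, client id, and conf path are placeholders, not taken from this report):
none  /mnt/cephfs  fuse.ceph  ceph.id=admin,ceph.conf=/etc/ceph/ceph.conf,_netdev,nofail  0  0
The intent of nofail is that boot continues even if the Ceph cluster is unreachable at mount time.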
- 07:59 AM Bug #54017 (Duplicate): Problem with ceph fs snapshot mirror and read-only folders
- Duplicate of https://tracker.ceph.com/issues/55313
01/10/2023
- 02:22 PM Bug #58411 (Resolved): mds: a few simple operations crash mds
- Mount a cephfs with multiple active mds daemons on '/mnt/cephfs' and then do the following operations in order:
(1...
- 01:18 PM Bug #58219: Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournal...
- Tried with a recent build - https://pulpito.ceph.com/vshankar-2022-12-08_04:33:46-fs-wip-vshankar-testing-20221130.04...
- 06:57 AM Bug #58219: Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournal...
- This is a build issue which has started to show up recently.
- 06:09 AM Bug #58219: Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournal...
- Most likely, this is similar to https://bugzilla.redhat.com/show_bug.cgi?id=1569391 (memory is allocated via libc whe...
- 11:50 AM Backport #58409 (In Progress): quincy: doc: document the relevance of mds_namespace mount option
- 11:14 AM Backport #58409 (Resolved): quincy: doc: document the relevance of mds_namespace mount option
- https://github.com/ceph/ceph/pull/49689
- 11:45 AM Backport #58408 (In Progress): pacific: doc: document the relevance of mds_namespace mount option
- 11:14 AM Backport #58408 (Resolved): pacific: doc: document the relevance of mds_namespace mount option
- https://github.com/ceph/ceph/pull/49688
- 11:12 AM Documentation #57673 (Pending Backport): doc: document the relevance of mds_namespace mount option
- 05:11 AM Bug #58392 (Rejected): mds:when want_auth is true, path_traverse should ensure the return Inode i...
- Closing this in favor of https://tracker.ceph.com/issues/58395
- 04:32 AM Bug #58392 (Triaged): mds:when want_auth is true, path_traverse should ensure the return Inode is...
- 04:38 AM Feature #58133 (Resolved): qa: add test cases for fscrypt feature in kernel CephFS client
01/09/2023
- 02:11 PM Bug #58394: nofail option in fstab not supported
- FWIW, a similar issue was attempted to be fixed for the kclient - https://github.com/ceph/ceph/pull/26992, but all th...
- 02:57 AM Bug #58394: nofail option in fstab not supported
- Results from a non-debug mount:...
- 02:56 AM Bug #58394 (Pending Backport): nofail option in fstab not supported
- There are several old bug reports on this from 2019, but they should have all been resolved. However, testing on a 1...
- 02:06 PM Documentation #58393: nfs docs should mention idmap issue, refer to external tools/setup
- (temporarily assigning to Jeff, so that he gets notified)
- 02:06 PM Documentation #58393: nfs docs should mention idmap issue, refer to external tools/setup
- Dan Mick wrote:
> I tried to use an NFS mount set up by others, and the files in it were owned by root:root in the u...
- 01:50 PM Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
- Kotresh Hiremath Ravishankar wrote:
> > Now you can still copy the snapshot file from /mnt
> > [kotresh@fedora buil...
- 01:48 PM Bug #58376 (Triaged): CephFS Snapshots are accessible even when it's deleted from the other client
- 01:41 PM Bug #58395 (Triaged): mds:in openc, if unlink is not finished we should reintegrate the dentry be...
- 08:58 AM Bug #58395: mds:in openc, if unlink is not finished we should reintegrate the dentry before conti...
- the crash log
- 08:57 AM Bug #58395 (Triaged): mds:in openc, if unlink is not finished we should reintegrate the dentry be...
- In openc, if unlink is not finished we should reintegrate the dentry before continuing further.
If not, cooperate ...
- 01:30 PM Bug #58340 (Triaged): mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegra...
- 10:40 AM Backport #58347 (In Progress): quincy: mds: fragment directory snapshots
- 10:35 AM Backport #58345 (In Progress): quincy: Thread md_log_replay is hanged for ever.
- 10:30 AM Backport #58346 (In Progress): pacific: Thread md_log_replay is hanged for ever.
- 10:23 AM Backport #58350 (In Progress): quincy: MDS: scan_stray_dir doesn't walk through all stray inode f...
- 10:16 AM Backport #58349 (In Progress): pacific: MDS: scan_stray_dir doesn't walk through all stray inode ...
- 09:05 AM Bug #58392: mds:when want_auth is true, path_traverse should ensure the return Inode is auth.
- cooperate with: https://tracker.ceph.com/issues/58395
01/07/2023
- 03:18 AM Documentation #58393 (New): nfs docs should mention idmap issue, refer to external tools/setup
- I tried to use an NFS mount set up by others, and the files in it were owned by root:root in the underlying cephfs. ...
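As background, NFSv4 id mapping generally requires the client and the server to agree on an idmap domain; a minimal /etc/idmapd.conf sketch (the domain name is a placeholder) would be:
[General]
Domain = example.com
This is only an illustration of the kind of external tooling/setup the docs could point to; the exact configuration depends on the deployment.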
01/06/2023
- 05:27 PM Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
> Now you can still copy the snapshot file from /mnt
> [kotresh@fedora build]$ cp -p /mnt/dir1/.snap/snapshot1/fil...
- 01:22 PM Backport #58254 (In Progress): pacific: mds/PurgeQueue: don't consider filer_max_purge_ops when _...
- https://github.com/ceph/ceph/pull/49656
- 01:19 PM Backport #58253 (In Progress): quincy: mds/PurgeQueue: don't consider filer_max_purge_ops when _c...
- https://github.com/ceph/ceph/pull/49655
- 01:11 PM Backport #58348 (In Progress): quincy: "ceph nfs cluster info" shows junk data for non-existent c...
- 01:11 PM Backport #58348: quincy: "ceph nfs cluster info" shows junk data for non-existent cluster
- https://github.com/ceph/ceph/pull/49654
- 10:58 AM Bug #58392 (Rejected): mds:when want_auth is true, path_traverse should ensure the return Inode i...
- When want_auth is true (whether want_dentry is true or false), we should ensure the returned Inode is auth.
If not, it can cras...
01/05/2023
- 01:28 PM Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
- Venky Shankar wrote:
> Kotresh Hiremath Ravishankar wrote:
> > The issue is seen by upstream user. The snapshot is ...
- 09:32 AM Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
- Kotresh Hiremath Ravishankar wrote:
> The issue is seen by upstream user. The snapshot is still accessible from a cl...
- 07:36 AM Bug #58082: cephfs:filesystem became read only after Quincy upgrade
- Okay, I see, thanks Xiubo Li.
I was going through the link and found that resetting the journal and session resolved the issue...
- 05:21 AM Bug #58082: cephfs:filesystem became read only after Quincy upgrade
- Prayank Saxena wrote:
> Thanks Xiubo Li for the update
>
> We are facing similar issue currently where client I/O...
- 05:03 AM Bug #58082: cephfs:filesystem became read only after Quincy upgrade
- Thanks Xiubo Li for the update
We are currently facing a similar issue where client I/O is not visible in ceph statu...
- 04:09 AM Backport #58254: pacific: mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
- Dhairya, please take this one.
- 04:05 AM Backport #58344 (In Progress): quincy: mds: switch submit_mutex to fair mutex for MDLog
- 04:04 AM Backport #58343 (In Progress): pacific: mds: switch submit_mutex to fair mutex for MDLog
01/04/2023
- 10:47 AM Documentation #57673 (In Progress): doc: document the relevance of mds_namespace mount option
- 10:04 AM Bug #58376 (Triaged): CephFS Snapshots are accessible even when it's deleted from the other client
- The issue was reported by an upstream user. The snapshot is still accessible from a client which was copying it even when it'...
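For context, CephFS snapshots are managed through the hidden .snap directory; a rough sketch of the reported scenario (paths and names are placeholders):
mkdir /mnt/dir1/.snap/snapshot1    # client A creates the snapshot
rmdir /mnt/dir1/.snap/snapshot1    # client A later deletes it
cp -p /mnt/dir1/.snap/snapshot1/file /tmp/    # client B, mid-copy, can still read it until its snap trace/caps are updated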
- 06:19 AM Bug #58082: cephfs:filesystem became read only after Quincy upgrade
- Prayank Saxena wrote:
> Prayank Saxena wrote:
> > Hello Team,
> >
> > We got the issue similar to 'mds read-only...
- 05:04 AM Bug #58082: cephfs:filesystem became read only after Quincy upgrade
- Prayank Saxena wrote:
> Hello Team,
>
> We got the issue similar to 'mds read-only' in Pacific 16.2.9 where one w...
- 04:56 AM Bug #58082: cephfs:filesystem became read only after Quincy upgrade
- Hello Team,
We got the issue similar to 'mds read-only' in Pacific 16.2.9 where one write commit failed and made t...
- 04:53 AM Bug #52260: 1 MDSs are read only | pacific 16.2.5
- Hello Team,
We got the issue similar to 'mds read-only' in Pacific 16.2.9 where one write commit failed and made t...
12/22/2022
- 06:02 PM Backport #58350 (Resolved): quincy: MDS: scan_stray_dir doesn't walk through all stray inode frag...
- https://github.com/ceph/ceph/pull/49670
- 06:02 PM Backport #58349 (Resolved): pacific: MDS: scan_stray_dir doesn't walk through all stray inode fra...
- https://github.com/ceph/ceph/pull/49669
- 05:54 PM Bug #58294 (Pending Backport): MDS: scan_stray_dir doesn't walk through all stray inode fragment
- 04:55 PM Backport #58348 (Resolved): quincy: "ceph nfs cluster info" shows junk data for non-existent clus...
- 04:51 PM Bug #58138 (Pending Backport): "ceph nfs cluster info" shows junk data for non-existent cluster
- 04:24 PM Backport #58347 (Resolved): quincy: mds: fragment directory snapshots
- https://github.com/ceph/ceph/pull/49673
- 04:19 PM Feature #55215 (Pending Backport): mds: fragment directory snapshots
- 02:43 PM Backport #58346 (Resolved): pacific: Thread md_log_replay is hanged for ever.
- https://github.com/ceph/ceph/pull/49671
- 02:43 PM Backport #58345 (Resolved): quincy: Thread md_log_replay is hanged for ever.
- https://github.com/ceph/ceph/pull/49672
- 02:39 PM Bug #57764 (Pending Backport): Thread md_log_replay is hanged for ever.
- 02:35 PM Backport #58344 (Resolved): quincy: mds: switch submit_mutex to fair mutex for MDLog
- https://github.com/ceph/ceph/pull/49633
- 02:35 PM Backport #58343 (Resolved): pacific: mds: switch submit_mutex to fair mutex for MDLog
- https://github.com/ceph/ceph/pull/49632
- 02:34 PM Bug #58000 (Pending Backport): mds: switch submit_mutex to fair mutex for MDLog
- 02:19 PM Backport #58342 (New): quincy: ceph-fuse: doesn't work properly when the version of libfuse is 3....
- 02:19 PM Backport #58341 (Rejected): pacific: ceph-fuse: doesn't work properly when the version of libfuse...
- 02:17 PM Bug #58109 (Pending Backport): ceph-fuse: doesn't work properly when the version of libfuse is 3....
- 02:01 PM Bug #58340 (Resolved): mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegr...
- /a/vshankar-2022-12-21_14:01:01-fs-wip-vshankar-testing-20221215.112736-testing-default-smithi/7123518
The MDS is ...
- 08:50 AM Backport #58322 (In Progress): quincy: mds crashed "assert_condition": "state == LOCK_XLOCK || s...
- https://github.com/ceph/ceph/pull/49539
- 08:47 AM Backport #58323 (In Progress): pacific: mds crashed "assert_condition": "state == LOCK_XLOCK || ...
- https://github.com/ceph/ceph/pull/49538