Activity
From 01/02/2023 to 01/31/2023
01/31/2023
- 01:26 PM Documentation #58620 (New): document asok commands
- Some of the asok commands can be found here and there in the docs, but most lack documentation; it would be good to ha...
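For context, a minimal sketch of how a daemon's available asok commands can be listed today (the daemon name mds.a and the socket path below are hypothetical; adjust them to the deployment):
    ceph daemon mds.a help
    ceph --admin-daemon /var/run/ceph/ceph-mds.a.asok help
The output of help is the per-daemon command list that such documentation would need to cover.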
- 09:50 AM Bug #58597: The MDS crashes when deleting a specific file
- Tobias Reinhard wrote:
> Venky Shankar wrote:
> > Hi Tobias,
> >
> > > The crash happens, whenever I delete the ...
- 08:32 AM Bug #58597: The MDS crashes when deleting a specific file
- Venky Shankar wrote:
> Hi Tobias,
>
> > The crash happens, whenever I delete the file - every time.
> >
> > I ...
- 05:19 AM Bug #58597: The MDS crashes when deleting a specific file
- Hi Tobias,
> The crash happens, whenever I delete the file - every time.
>
> I don't know how or when the corru...
- 07:49 AM Bug #58617: mds: "Failed to authpin,subtree is being exported" results in large number of blocked...
- 1. The first commit in PR 49940 is for the case where the cluster hung in the EXPORT_WARNING state, which is consistent with the log.
...
- 06:37 AM Bug #58617: mds: "Failed to authpin,subtree is being exported" results in large number of blocked...
- zhikuo du wrote:
> https://tracker.ceph.com/issues/42338
> There is another tracker for "Failed to authpin,subtree ...
- 06:30 AM Bug #58617: mds: "Failed to authpin,subtree is being exported" results in large number of blocked...
- https://tracker.ceph.com/issues/42338
There is another tracker for "Failed to authpin,subtree is being exported".
...
- 06:17 AM Bug #58617: mds: "Failed to authpin,subtree is being exported" results in large number of blocked...
- There is another tracker also stuck for *Failed to authpin,subtree is being exported*:
https://tracker.ceph.com/is...
- 05:55 AM Bug #58617 (Triaged): mds: "Failed to authpin,subtree is being exported" results in large number ...
- A problem: the cluster (octopus 15.2.16) has a large number of blocked requests. The error associated with the block is...
- 06:32 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- Venky Shankar wrote:
> Xiubo Li wrote:
> > There is one case that could trigger this, IMO:
> >
> > For example, there are t...
- 06:19 AM Bug #58619 (Fix Under Review): mds: client evict [-h|--help] evicts ALL clients
- ceph --admin-daemon $socketfile client evict [-h|--help] evicts ALL clients.
It is observed that adding "--help|-h" ...
- 06:04 AM Backport #58603 (In Progress): pacific: client stalls during vstart_runner test
- 06:00 AM Backport #58602 (In Progress): quincy: client stalls during vstart_runner test
- 05:48 AM Backport #58608 (In Progress): pacific: cephfs:filesystem became read only after Quincy upgrade
- 05:46 AM Backport #58609 (In Progress): quincy: cephfs:filesystem became read only after Quincy upgrade
- 04:19 AM Bug #58219 (Fix Under Review): Test failure: test_journal_migration (tasks.cephfs.test_journal_mi...
- 04:03 AM Documentation #51459 (Resolved): doc: document what kinds of damage forward scrub can repair
01/30/2023
- 04:37 PM Backport #58609 (Resolved): quincy: cephfs:filesystem became read only after Quincy upgrade
- https://github.com/ceph/ceph/pull/49939
- 04:37 PM Backport #58608 (Resolved): pacific: cephfs:filesystem became read only after Quincy upgrade
- https://github.com/ceph/ceph/pull/49941
- 04:32 PM Bug #58082 (Pending Backport): cephfs:filesystem became read only after Quincy upgrade
- 02:45 PM Documentation #51459 (In Progress): doc: document what kinds of damage forward scrub can repair
- 02:33 PM Bug #56446 (Fix Under Review): Test failure: test_client_cache_size (tasks.cephfs.test_client_lim...
- 01:37 PM Bug #58597: The MDS crashes when deleting a specific file
- Venky Shankar wrote:
> Tobias Reinhard wrote:
> > [...]
> >
> >
> > This is perfectly reproducible, unfortunat...
- 01:13 PM Bug #58597: The MDS crashes when deleting a specific file
- Tobias Reinhard wrote:
> [...]
>
>
> This is perfectly reproducible, unfortunately on a Production System.
T...
- 11:42 AM Backport #58604 (Resolved): quincy: qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails -...
- 11:36 AM Bug #57280 (Pending Backport): qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Fail...
- 11:27 AM Backport #58603 (Resolved): pacific: client stalls during vstart_runner test
- https://github.com/ceph/ceph/pull/49944
- 11:27 AM Backport #58602 (Resolved): quincy: client stalls during vstart_runner test
- https://github.com/ceph/ceph/pull/49942
- 11:20 AM Bug #56532 (Pending Backport): client stalls during vstart_runner test
- 11:02 AM Bug #56532 (Resolved): client stalls during vstart_runner test
- 11:19 AM Backport #58601 (Resolved): pacific: mds/Server: -ve values cause unexpected client eviction whil...
- 11:19 AM Backport #58600 (Resolved): quincy: mds/Server: -ve values cause unexpected client eviction while...
- 11:14 AM Bug #57359 (Pending Backport): mds/Server: -ve values cause unexpected client eviction while hand...
- 11:10 AM Backport #58599 (In Progress): quincy: mon: prevent allocating snapids allocated for CephFS
- https://github.com/ceph/ceph/pull/50090
- 11:09 AM Backport #58598 (Resolved): pacific: mon: prevent allocating snapids allocated for CephFS
- https://github.com/ceph/ceph/pull/50050
- 11:05 AM Feature #16745 (Pending Backport): mon: prevent allocating snapids allocated for CephFS
- 09:35 AM Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
- Latest instance - https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testi...
- 09:18 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Seeing this in my recent run - https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125...
01/27/2023
- 02:22 PM Bug #58489: mds stuck in 'up:replay' and crashed.
- In the meantime I rebooted my hosts for regular maintenance (rolling reboot with only one node down). Since then I ca...
- 07:56 AM Bug #58434 (Rejected): Widespread metadata corruption
01/25/2023
- 09:06 PM Bug #58434: Widespread metadata corruption
- Venky Shankar wrote:
> Nathan Fish wrote:
> > Further evidence indicates the issue may not be with Ceph; possibly a...
- 01:21 PM Bug #58576 (Rejected): do not allow invalid flags with cmd 'scrub start'
- Currently, the 'scrub start' command accepts any flag, and looking at the logs, it does actually start the scrubbing on the p...
- 11:21 AM Backport #58573 (In Progress): pacific: mds: fragment directory snapshots
- 10:30 AM Backport #58573 (Resolved): pacific: mds: fragment directory snapshots
- https://github.com/ceph/ceph/pull/49867
01/24/2023
- 02:27 PM Bug #58564 (In Progress): workunit suites/dbench.sh fails with error code 1
- This dbench test job fails with error code 1 and error message @write failed on handle 11133 (Resource temporarily un...
- 12:33 PM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
- Rishabh - I'm taking this one since it's blocking testing for https://tracker.ceph.com/issues/57985
- 12:10 PM Bug #57985: mds: warning `clients failing to advance oldest client/flush tid` seen with some work...
- Enforce (stricter) client-id check in client limit test - https://github.com/ceph/ceph/pull/49844
- 08:55 AM Cleanup #58561 (New): cephfs-top: move FSTopBase to fstopbase.py
- move FSTopBase to fstopbase.py and import it into the cephfs-top script. This is easy to do. But the issue is when it's don...
01/23/2023
- 03:12 PM Feature #58550 (Resolved): mds: add perf counter to track (relatively) larger log events
- Logging a large (in size) subtreemap over and over again in the log segment is a well-known scalability limiting fact...
- 03:07 PM Feature #58488: mds: avoid encoding srnode for each ancestor in an EMetaBlob log event
- Greg mentioned that this be worked on before it's actually proved that this is causing slowness in the MDS - I agree. ...
- 01:48 PM Bug #58489 (Triaged): mds stuck in 'up:replay' and crashed.
- 01:18 PM Bug #58434: Widespread metadata corruption
- Nathan Fish wrote:
> Further evidence indicates the issue may not be with Ceph; possibly a bug in account management...
01/20/2023
- 05:50 AM Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
- The client *should* receive an updated snap trace from the MDS. Using that some of the snap inodes should be invalida...
01/19/2023
- 02:12 PM Bug #58411 (Triaged): mds: a few simple operations crash mds
- Good catch!
- 01:15 PM Bug #58489: mds stuck in 'up:replay' and crashed.
- Hi,
Thanks for the help. Here are the debug logs of my two active MDS. I thought, I'll tar the whole bunch for you...
- 01:11 PM Bug #58489: mds stuck in 'up:replay' and crashed.
- Xiubo Li wrote:
> There is one case that could trigger this, IMO:
>
> For example, there are two MDLog entries:
>
> ESess...
- 12:51 PM Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
- Client log analysis: the lookup on *'.snap/snapshot1'* is successful from the dentry cache. The caps and the leas...
01/18/2023
- 11:47 AM Bug #58489: mds stuck in 'up:replay' and crashed.
- There is one case that could trigger this, IMO:
For example, there are two MDLog entries:
ESessions entry --> {..., versio...
- 11:33 AM Bug #58489 (Resolved): mds stuck in 'up:replay' and crashed.
- The issue is reported by an upstream community user.
The cluster had two filesystems and the active mds of both the f...
- 09:48 AM Feature #58488: mds: avoid encoding srnode for each ancestor in an EMetaBlob log event
- More detail about this:
For example, for */AAAA/BBBB/CCCC/* we create snapshots under */* and */AAAA/BBBB/*, and la...
- 09:39 AM Feature #58488 (New): mds: avoid encoding srnode for each ancestor in an EMetaBlob log event
- This happens via MDCache::predirty_journal_parents() where MDCache::journal_dirty_inode() is called for each ancestor...
- 02:56 AM Bug #58482 (Fix Under Review): mds: catch damage to CDentry's first member before persisting
- 02:54 AM Bug #58482 (Resolved): mds: catch damage to CDentry's first member before persisting
01/17/2023
- 12:01 PM Bug #57985: mds: warning `clients failing to advance oldest client/flush tid` seen with some work...
- partial fix (debug aid): https://github.com/ceph/ceph/pull/49766
01/13/2023
- 06:04 PM Bug #58434: Widespread metadata corruption
- Further evidence indicates the issue may not be with Ceph; possibly a bug in account management. Still investigating....
01/12/2023
- 07:38 PM Bug #58434 (Rejected): Widespread metadata corruption
- One of our CephFS volumes has become corrupt, with user files being substituted for each other with no clear pattern;...
- 01:21 PM Bug #57985: mds: warning `clients failing to advance oldest client/flush tid` seen with some work...
- Venky Shankar wrote:
> https://bugzilla.redhat.com/show_bug.cgi?id=2134709
>
> Generally seen when the MDS is hea...
- 05:28 AM Bug #58082: cephfs:filesystem became read only after Quincy upgrade
- Prayank Saxena wrote:
> Okay, I see. Thanks, Xiubo Li.
>
> I was going through the link and found that a reset of the journal a...
01/11/2023
- 10:55 AM Bug #58219: Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournal...
- https://github.com/ceph/ceph/pull/48418#issuecomment-1378478510
- 09:47 AM Bug #58394 (Triaged): nofail option in fstab not supported
- 09:47 AM Bug #58394: nofail option in fstab not supported
- Dhairya, please take this one. I think we would want to get the semantics right for `nofail` option with ceph-fuse an...
- 07:59 AM Bug #54017 (Duplicate): Problem with ceph fs snapshot mirror and read-only folders
- Duplicate of https://tracker.ceph.com/issues/55313
01/10/2023
- 02:22 PM Bug #58411 (Resolved): mds: a few simple operations crash mds
- mount a cephfs with multiple active mds daemons on '/mnt/cephfs' and then do the following operations in order:
(1...
- 01:18 PM Bug #58219: Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournal...
- Tried with a recent build - https://pulpito.ceph.com/vshankar-2022-12-08_04:33:46-fs-wip-vshankar-testing-20221130.04...
- 06:57 AM Bug #58219: Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournal...
- This is a build issue which has started to show up recently.
- 06:09 AM Bug #58219: Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournal...
- Most likely, this is similar to https://bugzilla.redhat.com/show_bug.cgi?id=1569391 (memory is allocated via libc whe...
- 11:50 AM Backport #58409 (In Progress): quincy: doc: document the relevance of mds_namespace mount option
- 11:14 AM Backport #58409 (Resolved): quincy: doc: document the relevance of mds_namespace mount option
- https://github.com/ceph/ceph/pull/49689
- 11:45 AM Backport #58408 (In Progress): pacific: doc: document the relevance of mds_namespace mount option
- 11:14 AM Backport #58408 (Resolved): pacific: doc: document the relevance of mds_namespace mount option
- https://github.com/ceph/ceph/pull/49688
- 11:12 AM Documentation #57673 (Pending Backport): doc: document the relevance of mds_namespace mount option
- 05:11 AM Bug #58392 (Rejected): mds:when want_auth is true, path_traverse should ensure the return Inode i...
- Closing this in favor of https://tracker.ceph.com/issues/58395
- 04:32 AM Bug #58392 (Triaged): mds:when want_auth is true, path_traverse should ensure the return Inode is...
- 04:38 AM Feature #58133 (Resolved): qa: add test cases for fscrypt feature in kernel CephFS client
01/09/2023
- 02:11 PM Bug #58394: nofail option in fstab not supported
- FWIW, a similar issue was attempted to be fixed for the kclient - https://github.com/ceph/ceph/pull/26992, but all th...
- 02:57 AM Bug #58394: nofail option in fstab not supported
- Results from a non-debug mount:...
- 02:56 AM Bug #58394 (Pending Backport): nofail option in fstab not supported
- There are several old bug reports on this from 2019, but they should have all been resolved. However, testing on a 1...
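For context, a minimal sketch of the kind of fstab entry being discussed, assuming a ceph-fuse mount with a hypothetical client id "admin" and mount point /mnt/cephfs:
    none  /mnt/cephfs  fuse.ceph  ceph.id=admin,_netdev,nofail  0  0
With nofail, boot should proceed even if the mount fails; the report here is that ceph-fuse does not handle that option as expected.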
- 02:06 PM Documentation #58393: nfs docs should mention idmap issue, refer to external tools/setup
- (temporarily assigning to Jeff, so that he's notified)
- 02:06 PM Documentation #58393: nfs docs should mention idmap issue, refer to external tools/setup
- Dan Mick wrote:
> I tried to use an NFS mount set up by others, and the files in it were owned by root:root in the u...
- 01:50 PM Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
- Kotresh Hiremath Ravishankar wrote:
> > Now you can still copy the snapshot file from /mnt
> > [kotresh@fedora buil...
- 01:48 PM Bug #58376 (Triaged): CephFS Snapshots are accessible even when it's deleted from the other client
- 01:41 PM Bug #58395 (Triaged): mds:in openc, if unlink is not finished we should reintegrate the dentry be...
- 08:58 AM Bug #58395: mds:in openc, if unlink is not finished we should reintegrate the dentry before conti...
- the crash log
- 08:57 AM Bug #58395 (Triaged): mds:in openc, if unlink is not finished we should reintegrate the dentry be...
- In openc, if unlink is not finished we should reintegrate the dentry before continuing further.
If not, cooperate ...
- 01:30 PM Bug #58340 (Triaged): mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegra...
- 10:40 AM Backport #58347 (In Progress): quincy: mds: fragment directory snapshots
- 10:35 AM Backport #58345 (In Progress): quincy: Thread md_log_replay is hanged for ever.
- 10:30 AM Backport #58346 (In Progress): pacific: Thread md_log_replay is hanged for ever.
- 10:23 AM Backport #58350 (In Progress): quincy: MDS: scan_stray_dir doesn't walk through all stray inode f...
- 10:16 AM Backport #58349 (In Progress): pacific: MDS: scan_stray_dir doesn't walk through all stray inode ...
- 09:05 AM Bug #58392: mds:when want_auth is true, path_traverse should ensure the return Inode is auth.
- cooperate with: https://tracker.ceph.com/issues/58395
01/07/2023
- 03:18 AM Documentation #58393 (New): nfs docs should mention idmap issue, refer to external tools/setup
- I tried to use an NFS mount set up by others, and the files in it were owned by root:root in the underlying cephfs. ...
01/06/2023
- 05:27 PM Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
> Now you can still copy the snapshot file from /mnt
> [kotresh@fedora build]$ cp -p /mnt/dir1/.snap/snapshot1/fil...
- 01:22 PM Backport #58254 (In Progress): pacific: mds/PurgeQueue: don't consider filer_max_purge_ops when _...
- https://github.com/ceph/ceph/pull/49656
- 01:19 PM Backport #58253 (In Progress): quincy: mds/PurgeQueue: don't consider filer_max_purge_ops when _c...
- https://github.com/ceph/ceph/pull/49655
- 01:11 PM Backport #58348 (In Progress): quincy: "ceph nfs cluster info" shows junk data for non-existent c...
- 01:11 PM Backport #58348: quincy: "ceph nfs cluster info" shows junk data for non-existent cluster
- https://github.com/ceph/ceph/pull/49654
- 10:58 AM Bug #58392 (Rejected): mds:when want_auth is true, path_traverse should ensure the return Inode i...
- When want_auth is true (whether want_dentry is true or false), we should ensure the returned Inode is auth.
If not, it can cras...
01/05/2023
- 01:28 PM Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
- Venky Shankar wrote:
> Kotresh Hiremath Ravishankar wrote:
> > The issue is seen by an upstream user. The snapshot is ...
- 09:32 AM Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
- Kotresh Hiremath Ravishankar wrote:
> The issue is seen by an upstream user. The snapshot is still accessible from a cl...
- 07:36 AM Bug #58082: cephfs:filesystem became read only after Quincy upgrade
- Okay, I see. Thanks, Xiubo Li.
I was going through the link and found that a reset of the journal and session resolved the issue...
- 05:21 AM Bug #58082: cephfs:filesystem became read only after Quincy upgrade
- Prayank Saxena wrote:
> Thanks Xiubo Li for the update
>
> We are facing a similar issue currently where client I/O...
- 05:03 AM Bug #58082: cephfs:filesystem became read only after Quincy upgrade
- Thanks Xiubo Li for the update
We are facing a similar issue currently where client I/O is not visible in ceph statu...
- 04:09 AM Backport #58254: pacific: mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
- Dhairya, please take this one.
- 04:05 AM Backport #58344 (In Progress): quincy: mds: switch submit_mutex to fair mutex for MDLog
- 04:04 AM Backport #58343 (In Progress): pacific: mds: switch submit_mutex to fair mutex for MDLog
01/04/2023
- 10:47 AM Documentation #57673 (In Progress): doc: document the relevance of mds_namespace mount option
- 10:04 AM Bug #58376 (Triaged): CephFS Snapshots are accessible even when it's deleted from the other client
- The issue is seen by an upstream user. The snapshot is still accessible from a client which was copying it even when it'...
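For context, a minimal sketch of the snapshot operations involved, assuming a hypothetical mount at /mnt and a directory dir1 (CephFS snapshots are created and removed through the special .snap directory):
    mkdir /mnt/dir1/.snap/snapshot1    # create the snapshot
    rmdir /mnt/dir1/.snap/snapshot1    # delete it from the other client
The report is that a second client which was copying files from the snapshot can still access it after the rmdir.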
- 06:19 AM Bug #58082: cephfs:filesystem became read only after Quincy upgrade
- Prayank Saxena wrote:
> Prayank Saxena wrote:
> > Hello Team,
> >
> > We got an issue similar to 'mds read-only...
- 05:04 AM Bug #58082: cephfs:filesystem became read only after Quincy upgrade
- Prayank Saxena wrote:
> Hello Team,
>
> We got an issue similar to 'mds read-only' in Pacific 16.2.9 where one w...
- 04:56 AM Bug #58082: cephfs:filesystem became read only after Quincy upgrade
- Hello Team,
We got an issue similar to 'mds read-only' in Pacific 16.2.9 where one write commit failed and made t...
- 04:53 AM Bug #52260: 1 MDSs are read only | pacific 16.2.5
- Hello Team,
We got an issue similar to 'mds read-only' in Pacific 16.2.9 where one write commit failed and made t...