Activity
From 07/14/2022 to 08/12/2022
08/12/2022
- 05:22 PM Documentation #57062: Document access patterns that have good/pathological performance on CephFS
- I think that a good place for this info to be added would be https://docs.ceph.com/en/quincy/cephfs/app-best-practice...
- 12:03 PM Documentation #57115 (New): Explanation for cache pressure
- Following up on the "thread":https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/DOUQNI5YQ74YB3FS5ZOQI2MS...
- 09:54 AM Bug #56996: Transient data read corruption from other machine
- Confirmed with Venky, when the **CInode::filelock** is in **LOCK_MIX** state we won't guarantee the data consistency ...
- 09:31 AM Feature #40633 (Resolved): mds: dump recent log events for extraordinary events
- 09:23 AM Backport #57113 (Resolved): pacific: Intermittent ParsingError failure in mgr/volumes module dur...
- https://github.com/ceph/ceph/pull/47112
- 09:23 AM Backport #57112 (In Progress): quincy: Intermittent ParsingError failure in mgr/volumes module d...
- https://github.com/ceph/ceph/pull/47747
- 09:12 AM Bug #55583 (Pending Backport): Intermittent ParsingError failure in mgr/volumes module during "c...
- 09:11 AM Backport #57111 (Resolved): quincy: mds: handle deferred client request core when mds reboot
- https://github.com/ceph/ceph/pull/53363
- 09:11 AM Backport #57110 (Resolved): pacific: mds: handle deferred client request core when mds reboot
- https://github.com/ceph/ceph/pull/53362
- 09:10 AM Bug #56116 (Pending Backport): mds: handle deferred client request core when mds reboot
- 05:54 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
- Milind Changire wrote:
> client-type: fuse
>
> * Iteratively running shell scripts under *qa/workunits/fs/snaps/*...
08/11/2022
- 04:08 PM Bug #57048: osdc/Journaler: better handle ENOENT during replay as up:standby-replay
- Greg Farnum wrote:
> Patrick Donnelly wrote:
> > Venky Shankar wrote:
> > > Patrick,
> > >
> > > Do you mean a ...
- 03:12 PM Bug #57048: osdc/Journaler: better handle ENOENT during replay as up:standby-replay
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Patrick,
> >
> > Do you mean a standby-replay MDS should tole...
- 01:37 PM Bug #57048: osdc/Journaler: better handle ENOENT during replay as up:standby-replay
- Venky Shankar wrote:
> Patrick,
>
> Do you mean a standby-replay MDS should tolerate missing journal objects?
...
- 01:40 PM Backport #51337 (Rejected): nautilus: mds: avoid journaling overhead for setxattr("ceph.dir.subvo...
- 06:56 AM Bug #57087: qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
- Note that the test successfully passed on the re-run
https://pulpito.ceph.com/yuriw-2022-08-10_20:34:29-fs-wip-yuri6...
- 03:44 AM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Unfortunately I must report that I'm still hitting this issue even with Ceph 16.2.7 and...
08/10/2022
- 05:30 PM Feature #56140 (Fix Under Review): cephfs: tooling to identify inode (metadata) corruption
- 05:20 PM Feature #57091 (Resolved): mds: modify scrub to catch dentry corruption
- Such as "first" snapshot being an invalid value.
- 05:01 PM Feature #57090 (Resolved): MDSMonitor,mds: add MDSMap flag to prevent clients from connecting
- During some recovery situations, it would be useful to have MDS up but prevent clients from establishing sessions. Us...
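  For illustration only, a sketch of what such an admin toggle could look like; the flag name below is an assumption for the sake of example, not a command that exists at the time of this entry:

  ```
  # Hypothetical interface (assumed name): set an MDSMap flag so the MDS
  # stays up but refuses new client sessions during recovery.
  ceph fs set <fs_name> refuse_client_session true
  # ... perform recovery work ...
  ceph fs set <fs_name> refuse_client_session false
  ```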
- 03:18 PM Backport #56979: quincy: mgr/volumes: Subvolume creation failed on FIPs enabled system
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47368
merged
Reviewed-by: Ramana Raja <rraja@redhat.com>
- 02:34 PM Bug #55216 (Resolved): cephfs-shell: creates directories in local file system even if file not found
- PR along with backport PRs merged. Marking as resolved.
- 02:30 PM Backport #55627 (Resolved): pacific: cephfs-shell: creates directories in local file system even ...
- merged
- 01:59 PM Feature #55715 (Resolved): pybind/mgr/cephadm/upgrade: allow upgrades without reducing max_mds
- 11:13 AM Bug #54271: mds/OpenFileTable.cc: 777: FAILED ceph_assert(omap_num_objs == num_objs)
- We will wait to see whether this happens in recent versions.
- 11:11 AM Bug #54271: mds/OpenFileTable.cc: 777: FAILED ceph_assert(omap_num_objs == num_objs)
- Lowering the priority as this is seen only in nautilus and not seen in supported versions.
- 10:48 AM Bug #56644: qa: test_rapid_creation fails with "No space left on device"
- https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-default-smithi...
- 10:35 AM Bug #57087 (Pending Backport): qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDat...
- Seen in https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-defaul...
- 10:01 AM Bug #51276 (Resolved): mds: avoid journaling overhead for setxattr("ceph.dir.subvolume") for no-o...
- 10:00 AM Backport #51337 (Resolved): nautilus: mds: avoid journaling overhead for setxattr("ceph.dir.subvo...
- Nautilus is EOL
- 09:49 AM Bug #51267: CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps...
- Seen in https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-defaul...
- 08:18 AM Bug #57083 (Fix Under Review): ceph-fuse: monclient(hunting): handle_auth_bad_method server allow...
- 07:56 AM Bug #57083: ceph-fuse: monclient(hunting): handle_auth_bad_method server allowed_methods [2] but ...
- **nautilus** uses **python2**, while the **pacific** qa suite uses **python3**, and the qa test suite seem...
- 07:37 AM Bug #57083: ceph-fuse: monclient(hunting): handle_auth_bad_method server allowed_methods [2] but ...
- From **remote/smithi029/log/ceph-mon.a.log.gz**: ...
- 07:26 AM Bug #57083: ceph-fuse: monclient(hunting): handle_auth_bad_method server allowed_methods [2] but ...
- The root cause is that in **nautilus** the **qa/workunits/fs/upgrade/volume_client** script is using **python2** to r...
- 07:21 AM Bug #57083: ceph-fuse: monclient(hunting): handle_auth_bad_method server allowed_methods [2] but ...
- From **remote/smithi029/log/ceph-mon.a.log.gz**: ...
- 07:10 AM Bug #57083 (Resolved): ceph-fuse: monclient(hunting): handle_auth_bad_method server allowed_metho...
- From https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-s...
- 07:54 AM Bug #53360 (Duplicate): pacific: client: "handle_auth_bad_method server allowed_methods [2] but i...
- Missed this existing tracker. Will track this in https://tracker.ceph.com/issues/57083. Have found root cause...
- 07:37 AM Bug #57084 (Resolved): Permissions of the .snap directory do not inherit ACLs
- When using CephFS with POSIX ACLs I noticed that the .snap directory does not inherit the ACLs from its parent but on...
- 07:26 AM Backport #53861: pacific: qa: tasks.cephfs.fuse_mount:mount command failed
- Created a new tracker to fix it https://tracker.ceph.com/issues/57083.
- 06:50 AM Backport #53861: pacific: qa: tasks.cephfs.fuse_mount:mount command failed
- Xiubo Li wrote:
> Kotresh Hiremath Ravishankar wrote:
> > Xiubo,
> >
> > Looks like this is seen again in this p...
- 07:23 AM Bug #55572: qa/cephfs: omit_sudo doesn't have effect when passed to run_shell()
- I think this needs to be backported. Nikhil mentioned that the PR https://github.com/ceph/ceph/pull/47112 in pacific ...
- 07:08 AM Bug #57071 (Fix Under Review): mds: consider mds_cap_revoke_eviction_timeout for get_late_revokin...
08/09/2022
- 04:13 PM Backport #56527: pacific: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47111
merged
- 04:12 PM Backport #56152: pacific: mgr/snap_schedule: schedule updates are not persisted across mgr restart
- https://github.com/ceph/ceph/pull/46797 merged
- 12:55 PM Bug #56529: ceph-fs crashes on getfattr
- Frank Schilder wrote:
> Thanks for the quick answer. Then, I guess, the patch to the ceph-fs clients will handle thi...
- 12:54 PM Bug #56529: ceph-fs crashes on getfattr
- Frank Schilder wrote:
> Thanks for the quick answer. Then, I guess, the patch to the ceph-fs clients will handle thi...
- 12:47 PM Bug #56529: ceph-fs crashes on getfattr
- Thanks for the quick answer. Then, I guess, the patch to the ceph-fs clients will handle this once it is approved? I ...
- 12:40 PM Bug #56529: ceph-fs crashes on getfattr
- Frank Schilder wrote:
> Hi all,
>
> this story continues, this time with a _valid_ vxattr name. I just observed e...
- 12:33 PM Bug #56529: ceph-fs crashes on getfattr
- Hi all,
this story continues, this time with a _valid_ vxattr name. I just observed exactly the same problem now w... - 11:40 AM Bug #57072 (Pending Backport): Quincy 17.2.3 pybind/mgr/status: assert metadata failed
- `ceph fs status` returns an AssertionError:
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/m...
- 10:24 AM Backport #53861: pacific: qa: tasks.cephfs.fuse_mount:mount command failed
- Kotresh Hiremath Ravishankar wrote:
> Xiubo,
>
> Looks like this is seen again in this pacific run ?
>
> https...
- 10:13 AM Backport #53861: pacific: qa: tasks.cephfs.fuse_mount:mount command failed
- Xiubo,
Looks like this is seen again in this pacific run ?
https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-f...
- 10:24 AM Bug #56644: qa: test_rapid_creation fails with "No space left on device"
- Seen in recent pacific run https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pa...
- 09:27 AM Bug #57071 (Fix Under Review): mds: consider mds_cap_revoke_eviction_timeout for get_late_revokin...
- Even though mds_cap_revoke_eviction_timeout is set to zero, ceph-mon reports some clients failing to respond to capab...
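For context on the report above, the setting and the associated health warning can be checked with standard commands; a minimal sketch, assuming an admin keyring is available:

```shell
# Show the effective cap-revoke eviction timeout for MDS daemons
# (0 is supposed to disable eviction, per the report above).
ceph config get mds mds_cap_revoke_eviction_timeout

# List clients the cluster currently flags as late releasing caps.
ceph health detail | grep -i "failing to respond to capability release"
```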
- 09:01 AM Bug #57048: osdc/Journaler: better handle ENOENT during replay as up:standby-replay
- Patrick,
Do you mean a standby-replay MDS should tolerate missing journal objects? How can it end up in such a sit...
- 08:58 AM Bug #56808: crash: LogSegment* MDLog::get_current_segment(): assert(!segments.empty())
- Looks similar to https://tracker.ceph.com/issues/51589 which was fixed a while ago.
Kotresh, please RCA this.
- 08:16 AM Backport #57058 (In Progress): pacific: mgr/volumes: Handle internal metadata directories under '...
- 08:06 AM Backport #57057 (In Progress): quincy: mgr/volumes: Handle internal metadata directories under '/...
- 07:07 AM Bug #54462: Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status...
- This may be a duplicate of https://tracker.ceph.com/issues/55332.
- 06:55 AM Bug #54462: Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status...
- Seen in this run too.
https://pulpito.ceph.com/yuriw-2022-08-02_21:20:37-fs-wip-yuri7-testing-2022-07-27-0808-quin...
- 06:51 AM Bug #56592: mds: crash when mounting a client during the scrub repair is going on
- More info:
I was just simulating the cu case we hit by just removing one object of the directory from the metadata...
- 06:47 AM Bug #56592: mds: crash when mounting a client during the scrub repair is going on
- Venky Shankar wrote:
> Xiubo,
>
> Were you trying to mount /mydir when it was getting repaired?
No, I was just...
- 06:23 AM Bug #56592: mds: crash when mounting a client during the scrub repair is going on
- Xiubo,
Were you trying to mount /mydir when it was getting repaired?
- 06:47 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- Seen in this quincy run https://pulpito.ceph.com/yuriw-2022-08-02_21:20:37-fs-wip-yuri7-testing-2022-07-27-0808-quinc...
- 06:30 AM Bug #56830: crash: cephfs::mirror::PeerReplayer::pick_directory()
- Dhairya,
Please take a look at this. I think there is some sort of race that is causing this crash while iterating...
- 06:25 AM Bug #57014: cephfs-top: add an option to dump the computed values to stdout
- Jos, please take this one.
- 05:54 AM Bug #56996 (In Progress): Transient data read corruption from other machine
- 04:48 AM Bug #56996: Transient data read corruption from other machine
- Witold Baryluk wrote:
> Ok. I still do not understand why this can happen:
>
> writer: write("a"); write("b"); wr...
- 04:59 AM Bug #57065 (Closed): qa: test_query_client_ip_filter fails with latest 'perf stats' structure cha...
- test_query_client_ip_filter fails with the below error in tests [1] and [2]. This happens when PR [3] is tested.
<...
- 04:41 AM Bug #57064 (Need More Info): qa: test_add_ancestor_and_child_directory failure
- Seen in recent quincy run https://pulpito.ceph.com/yuriw-2022-08-04_11:54:20-fs-wip-yuri8-testing-2022-08-03-1028-qui...
- 02:47 AM Bug #56067: Cephfs data loss with root_squash enabled
- Patrick Donnelly wrote:
> Please open a PR for discussion.
https://github.com/ceph/ceph/pull/47506 . Please take ...
- 02:45 AM Bug #56067 (Fix Under Review): Cephfs data loss with root_squash enabled
08/08/2022
- 07:03 PM Documentation #57062 (New): Document access patterns that have good/pathological performance on C...
- I have a CephFS 16.2.7 with 200 M small files (between 1 KB and 100 KB; there are a few larger ones up to 200 MB) and ...
- 03:28 PM Bug #56048: ceph.mirror.info is not removed from target FS when mirroring is disabled
- Hi Venky,
I tried it again, now with 17.2.1, and I could reproduce the issue. The mgr debug log is below.
As fa...
- 01:08 PM Bug #56048: ceph.mirror.info is not removed from target FS when mirroring is disabled
- Andreas Teuchert wrote:
> When disabling mirroring on a FS with "ceph fs snapshot mirror disable <source-fs>" the "c...
- 02:32 PM Bug #56996: Transient data read corruption from other machine
- Ok. I still do not understand why this can happen:
writer: write("a"); write("b"); write("c");
reader (other cl...
- 06:33 AM Bug #56996: Transient data read corruption from other machine
- Witold Baryluk wrote:
> What about when there is one writer and one reader?
This will depend on whether they are ...
- 01:18 PM Feature #56643: scrub: add one subcommand or option to add the missing objects back
- Venky Shankar wrote:
> Xiubo Li wrote:
> > When we are scrub repairing the metadatas and some objects may get lost ...
- 01:02 PM Feature #56643: scrub: add one subcommand or option to add the missing objects back
- Xiubo Li wrote:
> When we are scrub repairing the metadatas and some objects may get lost due to some reasons. After...
- 01:01 PM Bug #56249: crash: int Client::_do_remount(bool): abort
- Xiubo Li wrote:
> Should be fixed by https://tracker.ceph.com/issues/54049.
Looks the same. However, I'm not sure...
- 09:41 AM Bug #56506: pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDa...
- Milind Changire wrote:
> Adding any more condition to the assertion expression and passing the assertion is not goin...
- 08:07 AM Bug #56506: pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDa...
- Adding any more condition to the assertion expression and passing the assertion is not going to do any good.
Since M...
- 05:37 AM Bug #56506: pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDa...
- Never mind - I see the err coming from JournalPointer. If the MDS is respawning/shutting down could that condition ad...
- 05:29 AM Bug #56506: pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDa...
- Milind Changire wrote:
> This seems to be a race between an mds respawn and the MDLog::_recovery_thread()
> In Paci...
- 08:55 AM Backport #57058 (Resolved): pacific: mgr/volumes: Handle internal metadata directories under '/vo...
- https://github.com/ceph/ceph/pull/47512
- 08:55 AM Backport #57057 (Resolved): quincy: mgr/volumes: Handle internal metadata directories under '/vol...
- https://github.com/ceph/ceph/pull/47511
- 08:54 AM Bug #55762 (Pending Backport): mgr/volumes: Handle internal metadata directories under '/volumes'...
08/05/2022
- 09:26 PM Bug #56067: Cephfs data loss with root_squash enabled
- Greg Farnum wrote:
>
>
> But now I have another question -- does this mean that a kclient which only has access ...
- 04:21 PM Bug #56506: pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDa...
- This seems to be a race between an mds respawn and the MDLog::_recovery_thread()
In Pacific, the MDLog::_recovery_th...
- 01:15 PM Bug #57048 (Pending Backport): osdc/Journaler: better handle ENOENT during replay as up:standby-r...
- ...
- 06:37 AM Backport #57042 (In Progress): quincy: pybind/mgr/volumes: interface to check the presence of sub...
- 04:42 AM Bug #48673: High memory usage on standby replay MDS
- We seem to be running into this pretty frequently and easily with standby-replay configuration.
08/04/2022
- 11:43 PM Bug #57044 (Fix Under Review): mds: add some debug logs for "crash during construction of interna...
- 11:42 PM Bug #57044 (Resolved): mds: add some debug logs for "crash during construction of internal request"
- ...
- 07:26 PM Bug #56802 (Duplicate): crash: void MDLog::_submit_entry(LogEvent*, MDSLogContextBase*): assert(!...
- 03:33 PM Bug #55897: test_nfs: update of export's access type should not trigger NFS service restart
- /a/yuriw-2022-08-03_20:33:43-rados-wip-yuri8-testing-2022-08-03-1028-quincy-distro-default-smithi/6957515
- 02:37 PM Backport #57041 (In Progress): pacific: pybind/mgr/volumes: interface to check the presence of su...
- 01:15 PM Backport #57041 (Resolved): pacific: pybind/mgr/volumes: interface to check the presence of subvo...
- https://github.com/ceph/ceph/pull/47460
- 01:15 PM Backport #57042 (Resolved): quincy: pybind/mgr/volumes: interface to check the presence of subvol...
- https://github.com/ceph/ceph/pull/47474
- 01:10 PM Feature #55821 (Pending Backport): pybind/mgr/volumes: interface to check the presence of subvolu...
- 12:19 PM Bug #56996: Transient data read corruption from other machine
- What about when there is one writer and one reader?
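The one-writer/one-reader question above can be phrased as a prefix-consistency invariant: a reader should never observe a later write without all earlier ones. A minimal sketch of that check, using a local file and a thread as stand-ins for the two CephFS clients (so it only illustrates the invariant being discussed, not the cross-machine bug itself):

```python
import tempfile
import threading

DATA = b"abc"

def writer(path):
    # Append one byte at a time, as in write("a"); write("b"); write("c").
    with open(path, "ab", buffering=0) as f:
        for i in range(len(DATA)):
            f.write(DATA[i:i + 1])

def snapshots(path, n=200):
    # Re-open and read the whole file repeatedly, like a polling reader.
    out = []
    for _ in range(n):
        with open(path, "rb") as f:
            out.append(f.read())
    return out

with tempfile.TemporaryDirectory() as d:
    path = d + "/data"
    open(path, "wb").close()
    t = threading.Thread(target=writer, args=(path,))
    t.start()
    seen = snapshots(path)
    t.join()
    seen += snapshots(path, 1)  # final state after the writer is done

# Prefix consistency: every observed content is a prefix of DATA,
# and the final read sees the complete data.
assert all(DATA.startswith(s) for s in seen)
assert seen[-1] == DATA
```

On a local POSIX filesystem these assertions hold; the report is that over CephFS, with the filelock in LOCK_MIX, a second client can transiently violate them.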
- 12:36 AM Bug #56996: Transient data read corruption from other machine
- I am not very sure this is a bug.
If there are multiple clients and they are in any of:... - 10:59 AM Fix #51177: pybind/mgr/volumes: investigate moving calls which may block on libcephfs into anothe...
- Kotresh, please take a look at this.
08/03/2022
- 02:46 PM Bug #56644: qa: test_rapid_creation fails with "No space left on device"
- Rishabh,
Do we know why the space issue started to show up recently? - 02:19 PM Bug #56517 (Resolved): fuse_ll.cc: error: expected identifier before ‘{’ token 1379 | {
- 10:36 AM Bug #57014 (Resolved): cephfs-top: add an option to dump the computed values to stdout
- It would be nice if cephfs-top dumps its computed values to stdout in json format. The json should contain all the f...
- 08:16 AM Backport #56462 (In Progress): pacific: mds: crash due to seemingly unrecoverable metadata error
- 08:15 AM Backport #56462 (Need More Info): pacific: mds: crash due to seemingly unrecoverable metadata error
- 08:12 AM Backport #56461 (In Progress): quincy: mds: crash due to seemingly unrecoverable metadata error
- 06:13 AM Bug #56506: pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDa...
- Milind, please RCA this.
- 12:04 AM Fix #51177: pybind/mgr/volumes: investigate moving calls which may block on libcephfs into anothe...
- Downstream BZ - https://bugzilla.redhat.com/show_bug.cgi?id=2114615
08/02/2022
- 02:09 PM Bug #56626 (Closed): "ceph fs volume create" fails with error ERANGE
- Closing the bug. Changes in devstack-plugin-ceph, https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/85152...
- 02:03 PM Bug #55858: Pacific 16.2.7 MDS constantly crashing
- I've noticed a commonality when this is being triggered: Singularity is being used https://en.wikipedia.org/wiki/Sing...
- 08:15 AM Bug #56802: crash: void MDLog::_submit_entry(LogEvent*, MDSLogContextBase*): assert(!mds->is_any_...
- Maybe this is relevant information to reproduce the crash:
I have NFS Ganesha running to export CephFS and when I ... - 06:47 AM Bug #56988: mds: memory leak suspected
- Here is a graph of the memory summary without and with the automated restart.
- 06:34 AM Bug #56988: mds: memory leak suspected
- I have automated restarting a single MDS-Server when MDS memory consumption is 80GB (roughly twice the configured mds...
- 06:28 AM Bug #56695 (Fix Under Review): [RHEL stock] pjd test failures(a bug that need to wait the unlink ...
- 05:42 AM Bug #56695: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- Patrick Donnelly wrote:
> [...]
>
> /ceph/teuthology-archive/pdonnell-2022-07-22_19:42:58-fs-wip-pdonnell-testing... - 02:50 AM Bug #56695: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- Xiubo Li wrote:
> Tried **4.18.0-348.20.1.el8_5.x86_64** and couldn't reproduce it.
>
> Will try the exact same ... - 02:37 AM Bug #56695: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- Tried **4.18.0-348.20.1.el8_5.x86_64** and couldn't reproduce it.
Will try the exact same version of **kernel-4.1...
08/01/2022
- 04:34 PM Bug #56996 (In Progress): Transient data read corruption from other machine
- Kernel cephfs on both sides.
* ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)
*... - 09:47 AM Bug #56695: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- Test this with the latest **testing** kclient branch, I couldn't reproduce it.
Will switch to use the distro kerne...
- 09:46 AM Bug #56695: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- Xiubo Li wrote:
> Currently the kclient's **testing** branch has merged the fscryption name related patches, which w...
- 09:10 AM Bug #56695 (In Progress): [RHEL stock] pjd test failures(a bug that need to wait the unlink to fi...
- Currently the kclient's **testing** branch has merged the fscryption name related patches, which will limit the **MAX...
- 09:08 AM Bug #56633 (Need More Info): mds: crash during construction of internal request
- Locally I couldn't reproduce it. And by reading the code I couldn't figure out in which case the internal reques...
- 08:59 AM Bug #53573: qa: test new clients against older Ceph clusters
- Xiubo Li wrote:
> The tracker [1] has done the test for new clients with nautilus ceph simultaneously.
>
> [1] ht...
- 08:51 AM Bug #53573: qa: test new clients against older Ceph clusters
- The tracker [1] has done the test for new clients with nautilus ceph simultaneously.
[1] https://tracker.ceph.com/...
- 07:01 AM Bug #56988 (Need More Info): mds: memory leak suspected
- We are running a cephfs pacific cluster in production:
MDS version: ceph version 16.2.9 (4c3647a322c0ff5a1dd2344...
07/30/2022
- 12:39 PM Backport #56978 (In Progress): pacific: mgr/volumes: Subvolume creation failed on FIPs enabled sy...
- 11:45 AM Backport #56978 (Resolved): pacific: mgr/volumes: Subvolume creation failed on FIPs enabled system
- https://github.com/ceph/ceph/pull/47369
- 12:31 PM Backport #56979 (In Progress): quincy: mgr/volumes: Subvolume creation failed on FIPs enabled system
- 11:45 AM Backport #56979 (Resolved): quincy: mgr/volumes: Subvolume creation failed on FIPs enabled system
- https://github.com/ceph/ceph/pull/47368
- 11:45 AM Backport #56980 (Rejected): octopus: mgr/volumes: Subvolume creation failed on FIPs enabled system
- 11:41 AM Bug #56727 (Pending Backport): mgr/volumes: Subvolume creation failed on FIPs enabled system
07/29/2022
- 02:55 PM Bug #56626: "ceph fs volume create" fails with error ERANGE
- Once we confirm that removing osd_pool_default_pgp_num and osd_pool_default_pg_num in devstack-plugin-ceph works, we ...
- 07:17 AM Bug #56626: "ceph fs volume create" fails with error ERANGE
- Tested the deployment with "osd_pool_default_pg_autoscale_mode = off" in bootstrap_conf and it seems to fix the issue. H...
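  The workaround described in this comment corresponds to a bootstrap configuration fragment along these lines (a sketch; the exact section and how devstack-plugin-ceph writes its ceph.conf are assumptions):

  ```
  [global]
  osd_pool_default_pg_autoscale_mode = off
  ```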
- 01:01 PM Backport #50126 (Rejected): octopus: pybind/mgr/volumes: deadlock on async job hangs finisher thread
07/28/2022
- 09:00 PM Bug #56626 (Need More Info): "ceph fs volume create" fails with error ERANGE
- This does not seem like a bug in the mgr/volumes code. The mgr/volumes module creates FS pools using `osd pool create...
- 01:29 PM Backport #53714 (Resolved): pacific: mds: fails to reintegrate strays if destdn's directory is fu...
- 01:26 PM Bug #56633: mds: crash during construction of internal request
- Xiubo volunteered yesterday and said he's started work on this in standup today.
- 07:53 AM Bug #46140 (Resolved): mds: couldn't see the logs in log file before the daemon get aborted
- Checked the code; all the **assert()/abort()** calls have been fixed. Closing it.
- 07:37 AM Bug #46140 (New): mds: couldn't see the logs in log file before the daemon get aborted
- 07:37 AM Bug #46140: mds: couldn't see the logs in log file before the daemon get aborted
- I recalled it: we need to switch `assert()` to `ceph_assert()`, and `ceph_assert()` will help dump the recent log...
- 02:20 AM Bug #56830 (Can't reproduce): crash: cephfs::mirror::PeerReplayer::pick_directory()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=70db1b6eecab75317a1e77bd...
- 02:19 AM Bug #56802 (Duplicate): crash: void MDLog::_submit_entry(LogEvent*, MDSLogContextBase*): assert(!...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=902003e195a320e2927d5e39...
- 02:16 AM Bug #56774 (Duplicate): crash: Client::_get_vino(Inode*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=90a2f49686f20a5d71a3cdc3...
- 01:41 AM Bug #56067: Cephfs data loss with root_squash enabled
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=90a2f49686f20a5d71a3cdc3...- 01:41 AM Bug #56067: Cephfs data loss with root_squash enabled
- Ramana Raja wrote:
> I made the following change to the Locker code, and then checked how kclient and fuse client be...
- 01:15 AM Bug #56605 (In Progress): Snapshot and xattr scanning in cephfs-data-scan
- Greg Farnum wrote:
> Xiubo Li wrote:
> > Our purpose here is to recover the snaprealms and snaptable from the data ...
07/27/2022
- 04:59 PM Bug #56727 (Fix Under Review): mgr/volumes: Subvolume creation failed on FIPs enabled system
- 11:06 AM Bug #56727 (Resolved): mgr/volumes: Subvolume creation failed on FIPs enabled system
The subvolume creation hits the following traceback on fips enabled system....
- 02:02 PM Bug #56067: Cephfs data loss with root_squash enabled
- Please open a PR for discussion.
- 12:29 PM Bug #56067: Cephfs data loss with root_squash enabled
- I made the following change to the Locker code, and then checked how kclient and fuse client behaved with root_squash...
- 03:59 AM Bug #56067: Cephfs data loss with root_squash enabled
- Patrick Donnelly wrote:
> Good work tracking that down Ramana! I don't think it's reasonable to try to require the c...
- 01:44 PM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Xiubo Li wrote:
> Our purpose here is to recover the snaprealms and snaptable from the data pool. It's hard to do th...
- 08:17 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- The **listsnaps** could list the snapids of the objects:...
- 07:32 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- > We should be able to see that we're missing snapshots by listing snaps on objects?
Yeah. If a file was snapshote...
- 07:10 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Greg Farnum wrote:
> Xiubo Li wrote:
> > Here is my test case locally https://github.com/lxbsz/ceph/tree/wip-56605-...
- 05:42 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Xiubo Li wrote:
> Here is my test case locally https://github.com/lxbsz/ceph/tree/wip-56605-draft.
>
> By using:
...
- 01:30 PM Feature #55121: cephfs-top: new options to limit and order-by
- Jos Collin wrote:
> Greg Farnum wrote:
> > Can't fs top already change the sort order? I thought that was done in N...
- 01:11 PM Documentation #56730: doc: update snap-schedule notes regarding 'start' time
- Adding chat discussion from #cephfs IRC channel :
<gauravsitlani> Hi team i have a quick question regarding : http...
- 01:06 PM Documentation #56730 (Resolved): doc: update snap-schedule notes regarding 'start' time
- Add notes to snap-schedule mgr plugin documentation about the handling of time zone for the 'start' time.
Primary ...
- 12:55 PM Bug #46140 (Closed): mds: couldn't see the logs in log file before the daemon get aborted
- After a brief discussion with @Xiubo Li, we decided to close this tracker as this issue was encountered while debuggi...
- 11:50 AM Bug #55112 (Resolved): cephfs-shell: saving files doesn't work as expected
- 11:49 AM Backport #55629 (Resolved): pacific: cephfs-shell: saving files doesn't work as expected
- 11:49 AM Bug #55242 (Resolved): cephfs-shell: put command should accept both path mandatorily and validate...
- 11:49 AM Backport #55625 (Resolved): pacific: cephfs-shell: put command should accept both path mandatoril...
- 11:36 AM Bug #40860 (Resolved): cephfs-shell: raises incorrect error when regfiles are passed to be deleted
- 11:36 AM Documentation #54551 (Resolved): docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds ca...
- 11:35 AM Backport #55238 (Resolved): pacific: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-md...
- 10:04 AM Bug #56659: mgr: crash after upgrade pacific to main
- Patrick,
Your patch to fix the libsqlite3-mod-ceph dependency and the eventual crash has worked to resolve the crash...
07/26/2022
- 08:44 PM Bug #56659 (Duplicate): mgr: crash after upgrade pacific to main
- 02:31 PM Backport #56712 (In Progress): pacific: mds: standby-replay daemon always removed in MDSMonitor::...
- 01:05 PM Backport #56712 (Resolved): pacific: mds: standby-replay daemon always removed in MDSMonitor::pre...
- https://github.com/ceph/ceph/pull/47282
- 02:30 PM Backport #56713 (In Progress): quincy: mds: standby-replay daemon always removed in MDSMonitor::p...
- 01:05 PM Backport #56713 (Resolved): quincy: mds: standby-replay daemon always removed in MDSMonitor::prep...
- https://github.com/ceph/ceph/pull/47281
- 01:03 PM Bug #56666 (Pending Backport): mds: standby-replay daemon always removed in MDSMonitor::prepare_b...
- 12:14 PM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Here is my test case locally https://github.com/lxbsz/ceph/tree/wip-56605-draft.
By using:...
07/25/2022
- 03:20 PM Bug #56698 (Resolved): client: FAILED ceph_assert(_size == 0)
- ...
- 03:17 PM Bug #56697 (New): qa: fs/snaps fails for fuse
- ...
- 02:46 PM Bug #56695 (Resolved): [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- ...
- 02:38 PM Bug #56694 (Fix Under Review): qa: avoid blocking forever on hung umount
- 02:34 PM Bug #56694 (Rejected): qa: avoid blocking forever on hung umount
- /ceph/teuthology-archive/pdonnell-2022-07-22_19:42:58-fs-wip-pdonnell-testing-20220721.235756-distro-default-smithi/6...
- 11:18 AM Bug #56626 (In Progress): "ceph fs volume create" fails with error ERANGE
- 11:16 AM Bug #56626: "ceph fs volume create" fails with error ERANGE
- Hi Victoria,
I am not very familiar with the osd configs but as per code if 'osd_pool_default_pg_autoscale_mode' i... - 06:39 AM Bug #55858 (Need More Info): Pacific 16.2.7 MDS constantly crashing
- 04:59 AM Backport #56469 (In Progress): quincy: mgr/volumes: display in-progress clones for a snapshot
07/24/2022
- 06:20 PM Bug #56067: Cephfs data loss with root_squash enabled
- Patrick Donnelly wrote:
> I don't think it's reasonable to try to require the client mount to keep track of which ap...
07/23/2022
- 05:27 PM Bug #55759 (Resolved): mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
- 05:27 PM Bug #55822 (Resolved): mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command
- 05:25 PM Backport #56103 (Resolved): quincy: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
07/22/2022
- 06:18 PM Feature #50470 (Resolved): cephfs-top: multiple file system support
- 06:17 PM Bug #52982 (Resolved): client: Inode::hold_caps_until should be a time from a monotonic clock
- 06:17 PM Backport #55937 (Resolved): pacific: client: Inode::hold_caps_until should be a time from a monot...
- 05:31 PM Bug #55971 (Resolved): LibRadosMiscConnectFailure.ConnectFailure test failure
- 05:30 PM Backport #56005 (Resolved): pacific: LibRadosMiscConnectFailure.ConnectFailure test failure
- 05:30 PM Backport #56004 (Resolved): quincy: LibRadosMiscConnectFailure.ConnectFailure test failure
- 04:37 PM Backport #55936 (Resolved): quincy: client: Inode::hold_caps_until should be a time from a monoto...
- 12:07 PM Backport #55936: quincy: client: Inode::hold_caps_until should be a time from a monotonic clock
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46563
merged
- 04:37 PM Backport #56013 (Resolved): quincy: quota support for subvolumegroup
- 12:10 PM Backport #56013: quincy: quota support for subvolumegroup
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46667
merged
- 04:37 PM Backport #56108 (Resolved): quincy: mgr/volumes: Remove incorrect 'size' in the output of 'snapsh...
- 12:12 PM Backport #56108: quincy: mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' co...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46804
merged
- 04:36 PM Bug #56067: Cephfs data loss with root_squash enabled
- Good work tracking that down Ramana! I don't think it's reasonable to try to require the client mount to keep track o...
- 12:13 PM Backport #56103: quincy: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46805
merged
- 12:09 PM Backport #54578: quincy: pybind/cephfs: Add mapping for Ernno 13:Permission Denied and adding pat...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46647
merged
- 02:58 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Greg Farnum wrote:
> Matan Breizman wrote:
> > Meaning,
> > > We can see the 1000098a1a5.00000000 object is still...
- 02:52 AM Bug #56605 (Need More Info): Snapshot and xattr scanning in cephfs-data-scan
- Matan Breizman wrote:
> Meaning,
> > We can see the 1000098a1a5.00000000 object is still in the data pool: ...
> ...
- 12:33 AM Bug #56638: Restore the AT_NO_ATTR_SYNC define in libcephfs
- John Mulligan wrote:
> I'm setting the backport field now for pacific & quincy. I hope I am setting it properly. Ple...
- 12:21 AM Bug #56666 (Fix Under Review): mds: standby-replay daemon always removed in MDSMonitor::prepare_b...
07/21/2022
- 10:25 PM Bug #56067: Cephfs data loss with root_squash enabled
- Greg Farnum wrote:
> Hmm. Is the kernel client just losing track of root_squash when flushing caps? That is a differ...
- 12:49 PM Bug #56067: Cephfs data loss with root_squash enabled
- Hmm. Is the kernel client just losing track of root_squash when flushing caps? That is a different path than the more...
- 12:36 PM Bug #56067 (In Progress): Cephfs data loss with root_squash enabled
- 02:14 AM Bug #56067: Cephfs data loss with root_squash enabled
- With vstart cluster (ceph main branch), I was able to reproduce the issue with a kernel client (5.17.11-200.fc35.x86_...
- 08:19 PM Bug #56666 (Resolved): mds: standby-replay daemon always removed in MDSMonitor::prepare_beacon
- If a standby-replay daemon's beacon makes it to MDSMonitor::prepare_beacon (rarely), it's automatically removed by th...
- 02:54 PM Bug #56638: Restore the AT_NO_ATTR_SYNC define in libcephfs
- I'm setting the backport field now for pacific & quincy. I hope I am setting it properly. Please correct it if I've f...
- 12:09 PM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Hi Xiubo, Thank you for the detailed information!
From a RADOS standpoint everything is working as expected.
We a...
- 10:22 AM Bug #54283: qa/cephfs: is_mounted() depends on a mutable variable
- Rishabh Dave wrote:
> The PR for this ticket needed fix for "ticket 56476":https://tracker.ceph.com/issues/56476 in ...
- 08:48 AM Bug #56659 (Duplicate): mgr: crash after upgrade pacific to main
- ...
07/20/2022
- 05:50 PM Bug #56605 (In Progress): Snapshot and xattr scanning in cephfs-data-scan
- 01:10 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Let me describe how cephfs acts here:
**1**, For the directory and its contents, which are all metadata in...
- 01:25 PM Bug #55858: Pacific 16.2.7 MDS constantly crashing
- Hi Mike,
We would need more information on this to proceed further.
1. Output of 'ceph fs dump' ?
2. Was multi...
- 09:03 AM Bug #56063: Snapshot retention config lost after mgr restart
- After updating to 17.2.1 I'm not observing the issue anymore. Now, after failing over the mgr, the retention policy i...
- 08:04 AM Bug #56507 (Duplicate): pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.Te...
- 08:04 AM Bug #56644: qa: test_rapid_creation fails with "No space left on device"
- h3. From https://tracker.ceph.com/issues/56507 -
https://pulpito.ceph.com/yuriw-2022-07-06_13:57:53-fs-wip-yuri4-t...
- 07:01 AM Bug #56644 (Triaged): qa: test_rapid_creation fails with "No space left on device"
- http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default...
- 07:49 AM Bug #55716 (Resolved): cephfs-shell: Cmd2ArgparseError is imported without version check
- The PR was merged by Venky a couple months ago - https://github.com/ceph/ceph/pull/46337#event-6657873439
- 07:32 AM Bug #56416 (Resolved): qa/cephfs: delete path from cmd args after use
- 06:01 AM Feature #56643 (New): scrub: add one subcommand or option to add the missing objects back
- When scrub repairs the metadata, some objects may be lost for various reasons. After the repair finishe...
- 01:45 AM Bug #56638 (Fix Under Review): Restore the AT_NO_ATTR_SYNC define in libcephfs
- 01:37 AM Bug #56638 (In Progress): Restore the AT_NO_ATTR_SYNC define in libcephfs
07/19/2022
- 11:43 PM Backport #55928 (In Progress): quincy: mds: FAILED ceph_assert(dir->get_projected_version() == di...
- Hit this in downstream too.
- 11:40 PM Backport #55929 (In Progress): pacific: mds: FAILED ceph_assert(dir->get_projected_version() == d...
- Patrick Donnelly wrote:
> Xiubo, please do this backport.
Done.
- 04:12 PM Backport #55929 (Need More Info): pacific: mds: FAILED ceph_assert(dir->get_projected_version() =...
- Xiubo, please do this backport.
- 06:14 PM Bug #56632: qa: test_subvolume_snapshot_clone_quota_exceeded fails CommandFailedError
- This test passed on main branch - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-19_12:12:03-fs:volumes-main-dis...
- 04:03 PM Bug #56632 (Resolved): qa: test_subvolume_snapshot_clone_quota_exceeded fails CommandFailedError
- 100% reproducible so far.
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2...
- 05:46 PM Bug #56638 (Resolved): Restore the AT_NO_ATTR_SYNC define in libcephfs
- While working on an unrelated topic but building against the current 'quincy' branch - but not a released quincy - we...
- 04:34 PM Bug #56634 (New): qa: workunit snaptest-intodir.sh fails with MDS crash
- http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default...
- 04:16 PM Bug #56633 (Need More Info): mds: crash during construction of internal request
- ...
- 02:20 PM Bug #56626 (Triaged): "ceph fs volume create" fails with error ERANGE
- 02:20 PM Bug #56626: "ceph fs volume create" fails with error ERANGE
- Kotresh, PTAL.
- 01:43 PM Bug #56626 (Closed): "ceph fs volume create" fails with error ERANGE
- Trying to create a CephFS filesystem within a cluster deployed with cephadm fails
Steps followed
1. sudo cephad...
- 05:02 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Thinking about this more: even if the *xattrs* are not lost, we still couldn't recover the snapshots from the data pool. ...
- 01:41 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Greg Farnum wrote:
> Do you have logs/shell output or can you reproduce this, demonstrating the presence of the xatt...
- 12:53 AM Bug #43216 (Resolved): MDSMonitor: removes MDS coming out of quorum election
- 12:53 AM Backport #52636 (Resolved): pacific: MDSMonitor: removes MDS coming out of quorum election
07/18/2022
- 08:38 PM Documentation #49406: Exceeding osd nearfull ratio causes write throttle.
- After wondering for a long time why my clusters get slow at some point, I finally found this as well.
It would be ...
- 03:24 PM Cleanup #4744: mds: pass around LogSegments via std::shared_ptr
- For those following along, most MDS operations involve something like "mut->ls = get_current_segment()", and the poss...
- 10:40 AM Cleanup #4744: mds: pass around LogSegments via std::shared_ptr
- >
> Yeah, IMO it should be a good habit to use the shared_ptr to avoid potential use-after-free bugs as we hit in c...
- 09:42 AM Cleanup #4744: mds: pass around LogSegments via std::shared_ptr
- Tamar Shacked wrote:
> The issue suggests spreading LogSegment* as shared_ptr while class MDLog manages those ptrs l...
- 09:26 AM Cleanup #4744: mds: pass around LogSegments via std::shared_ptr
- The issue suggests spreading LogSegment* as shared_ptr while class MDLog manages those ptrs lifetime (creates/stores/...
- 03:23 PM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Do you have logs/shell output or can you reproduce this, demonstrating the presence of the xattr before taking the sn...
- 02:33 PM Bug #56605 (In Progress): Snapshot and xattr scanning in cephfs-data-scan
- We are doing the recovery step by step with an *alternate metadata pool*; for more detail please see https://docs.ceph.com/en/...
- 06:36 AM Bug #56592 (Triaged): mds: crash when mounting a client during the scrub repair is going on
- ...
- 06:30 AM Feature #55715 (Fix Under Review): pybind/mgr/cephadm/upgrade: allow upgrades without reducing ma...
- 03:46 AM Fix #55567 (Resolved): cephfs-shell: rm returns just the error code and not proper error msg
- 03:46 AM Backport #56591 (Rejected): pacific: qa: iogen workunit: "The following counters failed to be set...
- 03:45 AM Backport #56590 (New): quincy: qa: iogen workunit: "The following counters failed to be set on md...
- 03:45 AM Feature #48911 (Resolved): cephfs-shell needs "ln" command equivalent
- 03:43 AM Bug #54108 (Pending Backport): qa: iogen workunit: "The following counters failed to be set on md...
- 01:37 AM Bug #55778 (Resolved): client: choose auth MDS for getxattr with the Xs caps
- 01:37 AM Backport #56109 (Resolved): quincy: client: choose auth MDS for getxattr with the Xs caps
- 01:37 AM Bug #55824 (Resolved): ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
- 01:36 AM Backport #56106 (Resolved): quincy: ceph-fuse[88614]: ceph mount failed with (65536) Unknown erro...
- 01:35 AM Bug #53504 (Resolved): client: infinite loop "got ESTALE" after mds recovery
- 01:35 AM Backport #55934 (Resolved): quincy: client: infinite loop "got ESTALE" after mds recovery
- 01:35 AM Bug #55253 (Resolved): client: switch to glibc's STATX macros
- 01:35 AM Backport #55994 (Resolved): quincy: client: switch to glibc's STATX macros
- 01:34 AM Bug #53741 (Resolved): crash just after MDS become active
- 01:34 AM Backport #56015 (Resolved): quincy: crash just after MDS become active
07/15/2022
- 08:30 PM Bug #56577 (Pending Backport): mds: client request may complete without queueing next replay request
- We received a report of a cluster with a single active MDS stuck in up:clientreplay. The status was:
...
- 03:29 PM Bug #52430: mds: fast async create client mount breaks racy test
- Copying tracebacks for convenience (recently saw same test fail for different reason) -...
- 02:43 PM Backport #56106: quincy: ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46801
merged
- 02:42 PM Backport #56109: quincy: client: choose auth MDS for getxattr with the Xs caps
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46800
merged
- 02:41 PM Backport #56015: quincy: crash just after MDS become active
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46681
merged
- 02:40 PM Backport #55994: quincy: client: switch to glibc's STATX macros
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46680
merged
- 02:39 PM Backport #55926: quincy: Unexpected file access behavior using ceph-fuse
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46595
merged
- 02:39 PM Backport #55933: quincy: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*, ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46566
merged
- 02:38 PM Backport #55934: quincy: client: infinite loop "got ESTALE" after mds recovery
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46558
merged
- 10:05 AM Bug #56532 (Fix Under Review): client stalls during vstart_runner test
- 01:02 AM Bug #56532: client stalls during vstart_runner test
- From Milind's reproducer logs, there are two different error codes, *1* and *32*:...
- 05:49 AM Backport #56468 (In Progress): pacific: mgr/volumes: display in-progress clones for a snapshot
- 02:46 AM Backport #56527 (In Progress): pacific: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ ...
- 02:44 AM Backport #56526 (In Progress): quincy: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ a...
07/14/2022
- 01:00 PM Bug #56537 (Fix Under Review): cephfs-top: wrong/infinitely changing wsp values
- 11:18 AM Bug #48773: qa: scrub does not complete
- Saw this in my Quincy backport reviews as well -
https://pulpito.ceph.com/yuriw-2022-07-08_17:05:01-fs-wip-yuri2-tes...
- 10:46 AM Backport #56152 (In Progress): pacific: mgr/snap_schedule: schedule updates are not persisted acr...
- 10:40 AM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
- Rishabh, did you get to RCA this?
- 06:09 AM Bug #56522: Do not abort MDS on unknown messages
- Milind Changire wrote:
> Xiubo Li wrote:
> > Milind Changire wrote:
> > > I had started the GETVXATTR RPC implemen...
- 05:31 AM Bug #56522: Do not abort MDS on unknown messages
- Milind Changire wrote:
> Xiubo Li wrote:
> > Milind Changire wrote:
> > > I had started the GETVXATTR RPC implemen...
- 05:14 AM Bug #56522: Do not abort MDS on unknown messages
- Xiubo Li wrote:
> Milind Changire wrote:
> > I had started the GETVXATTR RPC implementation with the introduction o...
- 04:20 AM Bug #56522: Do not abort MDS on unknown messages
- Milind Changire wrote:
> I had started the GETVXATTR RPC implementation with the introduction of a feature bit for t... - 01:29 AM Bug #56553 (Fix Under Review): client: do not uninline data for read
- 01:20 AM Bug #56553 (Resolved): client: do not uninline data for read
- We don't even ask for, and cannot be sure we have been granted, the Fw caps when reading, so we shouldn't write contents ...
Also available in: Atom