Activity
From 10/19/2022 to 11/17/2022
11/17/2022
- 12:05 PM Support #57952: Pacific: the buffer_anon_bytes of ceph-mds is too large
- Hey,
Thanks for the update. You should try adjusting `mds_session_cache_liveness_decay_rate` to a lower value (def...
- 10:17 AM Bug #58041: mds: src/mds/Server.cc: 3231: FAILED ceph_assert(straydn->get_name() == straydname)
- and another side note, the crash was seen when a directory pin was removed from rank-0 mds. Pinning it back again cea...
- 10:16 AM Bug #58041: mds: src/mds/Server.cc: 3231: FAILED ceph_assert(straydn->get_name() == straydname)
- oh, and btw this was seen in ceph-16.2.8.
- 10:15 AM Bug #58041 (Duplicate): mds: src/mds/Server.cc: 3231: FAILED ceph_assert(straydn->get_name() == s...
- ...
- 09:21 AM Feature #55215 (Fix Under Review): mds: fragment directory snapshots
11/15/2022
- 01:49 PM Bug #58031 (Resolved): cephfs-top: sorting/limit excepts when the filesystems are removed and cre...
- This happens in the main branch. Please check.
1. cephfs-top is launched and the clients are sorted by 'mlatavg(ms...
- 01:42 PM Bug #58000 (Fix Under Review): mds: switch submit_mutex to fair mutex for MDLog
- 01:41 PM Bug #58008 (Fix Under Review): mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate...
- 01:41 PM Bug #58028 (Triaged): cephfs-top: Sorting doesn't work when the filesystems are removed and created
- 10:12 AM Bug #58028 (Resolved): cephfs-top: Sorting doesn't work when the filesystems are removed and created
- Sorting doesn't work in the following scenario
1. cephfs-top is launched and the clients are sorted by 'mlatavg(ms...
- 11:08 AM Bug #58030 (Resolved): mds: avoid ~mdsdir's scrubbing and reporting damage health status
- We are supposed to handle the case of mdsdir, which does not
actually have any backtrace. We should prevent the
...
- 10:49 AM Bug #58029 (Fix Under Review): cephfs-data-scan: multiple data pools are not supported
- 10:46 AM Bug #58029 (Resolved): cephfs-data-scan: multiple data pools are not supported
- The tool cannot properly recover if a fs has extra data pools. We need access to all data pools on `scan_extents` ste...
11/14/2022
- 09:32 PM Fix #58023 (Pending Backport): mds: do not evict clients if OSDs are laggy
- Monitoring perf dumps from the MDS can sometimes show that OSDs are laggy, "objecter.op_laggy" and "objecter.osd_lagg...
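- A hedged illustration of checking those counters via the MDS admin socket (the mds id "a" is illustrative, and this needs a live cluster, so it is an operational fragment rather than a runnable example):

```shell
# Dump the objecter perf counters and look for the laggy indicators
# referenced above ("objecter.op_laggy", "objecter.osd_laggy").
ceph daemon mds.a perf dump objecter | grep -E '"(op|osd)_laggy"'
```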
- 01:27 PM Bug #58018 (Fix Under Review): mount.ceph: will fail with old kernels
- 10:09 AM Bug #58018 (Pending Backport): mount.ceph: will fail with old kernels
- ...
11/11/2022
- 02:11 PM Support #57952: Pacific: the buffer_anon_bytes of ceph-mds is too large
- Venky Shankar wrote:
> xianpao chen wrote:
> > Venky Shankar wrote:
> > > Could you share the output of
> > >
>... - 01:02 PM Support #57952: Pacific: the buffer_anon_bytes of ceph-mds is too large
- xianpao chen wrote:
> Venky Shankar wrote:
> > Could you share the output of
> >
> > [...]
> >
> > Also, does... - 09:14 AM Bug #58008: mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
- When increasing filer_max_purge_ops on a pacific version mds, pq_executing_ops/pq_executing_ops_high_water of purge_q...
- 09:13 AM Bug #58008 (Resolved): mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
- Having _calculate_ops rely on a config option that can be modified on the fly causes a bug, e.g.
# A file has 20 objects...
11/10/2022
- 08:18 AM Support #57952: Pacific: the buffer_anon_bytes of ceph-mds is too large
- Venky Shankar wrote:
> BTW, are you *not* seeing any "oversized cache" warning for the MDS?
There is no "oversize...
- 04:06 AM Support #57952: Pacific: the buffer_anon_bytes of ceph-mds is too large
- BTW, are you *not* seeing any "oversized cache" warning for the MDS?
- 02:42 AM Support #57952: Pacific: the buffer_anon_bytes of ceph-mds is too large
- Do you have lots of small files and frequently scan them?
- 01:12 AM Support #57952: Pacific: the buffer_anon_bytes of ceph-mds is too large
- Venky Shankar wrote:
> Have you tried running `heap release`?
Yes, but it didn't seem to work.
- 01:45 AM Bug #58000: mds: switch submit_mutex to fair mutex for MDLog
- From Patrick's comment in https://github.com/ceph/ceph/pull/44180#pullrequestreview-1174516711.
- 01:44 AM Bug #58000 (Resolved): mds: switch submit_mutex to fair mutex for MDLog
- Implementations of a mutex (e.g. std::mutex in C++) do not
guarantee fairness; they do not guarantee that the ...
11/09/2022
- 07:08 PM Feature #57090 (Fix Under Review): MDSMonitor,mds: add MDSMap flag to prevent clients from connec...
- 01:22 PM Support #57952: Pacific: the buffer_anon_bytes of ceph-mds is too large
- Have you tried running `heap release`?
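- For reference, `heap release` is issued through the MDS admin interface; a typical invocation (the mds id "a" is illustrative, and this needs a live cluster):

```shell
# Inspect allocator stats, then ask tcmalloc to return freed memory to the OS
ceph tell mds.a heap stats
ceph tell mds.a heap release
```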
- 09:35 AM Support #57952: Pacific: the buffer_anon_bytes of ceph-mds is too large
- Venky Shankar wrote:
> Could you share the output of
>
> [...]
>
> Also, does running
>
> [...]
>
> redu... - 09:23 AM Support #57952: Pacific: the buffer_anon_bytes of ceph-mds is too large
- Venky Shankar wrote:
> Could you share the output of
>
> [...]
>
> Also, does running
>
> [...]
>
> redu... - 08:56 AM Support #57952: Pacific: the buffer_anon_bytes of ceph-mds is too large
- Could you share the output of...
11/07/2022
- 01:48 PM Bug #57985 (Triaged): mds: warning `clients failing to advance oldest client/flush tid` seen with...
- 09:06 AM Bug #57985 (Pending Backport): mds: warning `clients failing to advance oldest client/flush tid` ...
- https://bugzilla.redhat.com/show_bug.cgi?id=2134709
Generally seen when the MDS is heavily loaded with I/Os. Inter...
11/04/2022
- 07:48 PM Bug #49132: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_XLOCKDONE",
- Alternative fix is available at https://github.com/ceph/ceph/pull/48743
- 08:54 AM Backport #57974 (In Progress): pacific: cephfs-top: make cephfs-top display scrollable like top
- 08:46 AM Backport #57974 (Resolved): pacific: cephfs-top: make cephfs-top display scrollable like top
- https://github.com/ceph/ceph/pull/48734
- 03:51 AM Backport #57971 (Resolved): pacific: cephfs-top: new options to limit and order-by
- https://github.com/ceph/ceph/pull/49303
- 03:50 AM Backport #57970 (Resolved): quincy: cephfs-top: new options to limit and order-by
- https://github.com/ceph/ceph/pull/50151
- 03:25 AM Feature #55121 (Pending Backport): cephfs-top: new options to limit and order-by
11/03/2022
- 12:45 PM Feature #44455 (In Progress): cephfs: add recursive unlink RPC
- 09:30 AM Feature #57090 (In Progress): MDSMonitor,mds: add MDSMap flag to prevent clients from connecting
- 07:34 AM Feature #57090: MDSMonitor,mds: add MDSMap flag to prevent clients from connecting
- Patrick Donnelly wrote:
> Dhairya, status on this?
Hi Patrick, i'm on this completely now. Will try bring somethi... - 09:20 AM Bug #56270: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- If you're running into this bug after upgrading from Pacific to Quincy, you can manually delete the legacy schedule D...
- 08:49 AM Bug #56270: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- {"log":"debug 2022-11-03T08:38:12.502+0000 7f46270f5700 -1 mgr load Failed to construct class in 'snap_schedule'\n","...
- 08:46 AM Bug #56270: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- How to fix it?
11/01/2022
- 02:32 AM Support #57952 (New): Pacific: the buffer_anon_bytes of ceph-mds is too large
- The buffer_anon_bytes will reach 200+GB, then run out of machine memory. It does not seem to be able to effectively fr...
10/31/2022
- 09:50 AM Bug #57920: mds:ESubtreeMap event size is too large
- Venky Shankar wrote:
> Hi,
>
> Could the list of PRs that try to address this issue be linked? (so, that we don't... - 09:36 AM Bug #57920: mds:ESubtreeMap event size is too large
- Hi,
Could the list of PRs that try to address this issue be linked? (so that we don't lose track of them).
As... - 04:50 AM Bug #57920: mds:ESubtreeMap event size is too large
- zhikuo du wrote:
> > I am afraid this won't work. As I remembered from my test before, the size of ESubtreeMap could... - 02:47 AM Bug #57920: mds:ESubtreeMap event size is too large
- > I am afraid this won't work. As I remembered from my test before, the size of ESubtreeMap could reach up to several...
- 02:37 AM Bug #57920: mds:ESubtreeMap event size is too large
- zhikuo du wrote:
> > May I ask you a question:
> > What factors decide how many event must have a ESubtreeMap e... - 04:35 AM Backport #57946 (In Progress): quincy: cephfs-top: make cephfs-top display scrollable like top
- 04:26 AM Backport #57946 (Resolved): quincy: cephfs-top: make cephfs-top display scrollable like top
- https://github.com/ceph/ceph/pull/48677
- 04:21 AM Feature #55197 (Pending Backport): cephfs-top: make cephfs-top display scrollable like top
10/30/2022
- 02:18 PM Bug #57920: mds:ESubtreeMap event size is too large
- > @Xiubo Li @Venky Shankar
>
> I readed the codes about: how the segment is trimmed and how ESubtreeMap/EImportSt...
- 01:10 PM Bug #57920: mds:ESubtreeMap event size is too large
- > May I ask you a question:
> What factors decide how many event must have a ESubtreeMap event? And what is the...
10/29/2022
10/28/2022
- 04:25 PM Bug #53509 (Resolved): quota support for subvolumegroup
- 04:25 PM Bug #53848 (Resolved): mgr/volumes: Failed to create clones if the source snapshot's quota is exc...
- 07:11 AM Backport #57723: pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
- Backport of https://github.com/ceph/ceph/pull/48642 is also included with this
10/27/2022
- 01:22 PM Bug #55804 (Duplicate): qa failure: pjd link tests failed
- 01:21 PM Bug #55804: qa failure: pjd link tests failed
- This issue is probably fixed by PR: https://github.com/ceph/ceph/pull/46331 ("mds: wait unlink to finish to avoid con...
- 12:55 PM Bug #57446: qa: test_subvolume_snapshot_info_if_orphan_clone fails
- Fixed another possible failure with this test
https://github.com/ceph/ceph/pull/48642
- 12:27 PM Bug #51278: mds: "FAILED ceph_assert(!segments.empty())"
- Venky Shankar wrote:
> Latest occurrence with similar backtrace - https://pulpito.ceph.com/vshankar-2022-06-03_10:03...
- 02:56 AM Bug #57920: mds:ESubtreeMap event size is too large
- zhikuo du wrote:
> Xiubo Li wrote:
> > zhikuo du wrote:
> > [...]
> > > 4,I think this problem will seriously aff...
10/26/2022
- 02:13 PM Backport #57717 (Resolved): quincy: libcephfs: incorrectly showing the size for snapdirs when sta...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48414
Merged.
- 10:09 AM Bug #57920: mds:ESubtreeMap event size is too large
- Xiubo Li wrote:
> zhikuo du wrote:
> [...]
> > 4,I think this problem will seriously affect the IOPS of write and ...
- 03:49 AM Bug #57920: mds:ESubtreeMap event size is too large
- zhikuo du wrote:
> Xiubo Li wrote:
> > zhikuo du wrote:
> > [...]
> > > 4,I think this problem will seriously aff...
- 01:42 AM Bug #57920: mds:ESubtreeMap event size is too large
- Xiubo Li wrote:
> zhikuo du wrote:
> [...]
> > 4,I think this problem will seriously affect the IOPS of write and ... - 12:42 AM Bug #57920: mds:ESubtreeMap event size is too large
- zhikuo du wrote:
[...]
> 4,I think this problem will seriously affect the IOPS of write and read.
>
> 5, @Xiubo ...
- 10:05 AM Bug #57856 (Closed): cephfs-top: Skip refresh when the perf stats query shows no metrics
- Closing this, as refreshes are optimised in a better way in https://github.com/ceph/ceph/pull/48090.
- 06:25 AM Backport #57929 (In Progress): quincy: qa: test_dump_loads fails with JSONDecodeError
- https://github.com/ceph/ceph/pull/54187
- 06:18 AM Bug #57299 (Pending Backport): qa: test_dump_loads fails with JSONDecodeError
10/25/2022
- 12:52 PM Feature #57090: MDSMonitor,mds: add MDSMap flag to prevent clients from connecting
- Dhairya, status on this?
- 09:32 AM Bug #57920 (New): mds:ESubtreeMap event size is too large
- In our production environment, we have a problem: the ESubtreeMap event size is too large.
1,The ESubtreeMap event siz...
10/21/2022
- 04:52 PM Backport #57719: quincy: Test failure: test_subvolume_group_ls_filter_internal_directories (tasks...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48327
Merged.
10/20/2022
- 06:33 AM Bug #54557 (Fix Under Review): scrub repair does not clear earlier damage health status
- 05:28 AM Backport #57716 (Resolved): pacific: libcephfs: incorrectly showing the size for snapdirs when st...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48413
Merged.
- 04:54 AM Backport #57874 (In Progress): quincy: Permissions of the .snap directory do not inherit ACLs
- 04:17 AM Backport #57723 (Resolved): pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48417
Merged.
10/19/2022
- 02:16 PM Backport #57875 (In Progress): pacific: Permissions of the .snap directory do not inherit ACLs
- 09:13 AM Bug #57882: Kernel Oops, kernel NULL pointer dereference
- Xiubo Li wrote:
> It's a known bug and I will check this today or this week.
Oh my! I did search for anything pr...