Activity
From 04/30/2019 to 05/29/2019
05/29/2019
- 09:47 PM Bug #40001: mds cache oversize after restart
- Are you using snapshots? Can you tell us more about how the cluster is being used, like the # of clients and versions?
- 06:49 PM Documentation #24641: Document behaviour of fsync-after-close
- Proposed documentation update here:
https://github.com/ceph/ceph/pull/28300
Niklas, please take a look and let ...
- 06:21 PM Bug #40034: mds: stuck in clientreplay
- Here's ganesha.log, not sure if there's anything useful:
https://termbin.com/7ni9
Is it really intended for an md...
- 06:15 PM Bug #40034: mds: stuck in clientreplay
- Logs from nfs-ganesha would be helpful too if you have them.
- 01:53 PM Bug #39987 (Fix Under Review): mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- 12:09 PM Bug #40061 (Fix Under Review): mds: blacklisted clients eviction is broken
- https://github.com/ceph/ceph/pull/28293
- 12:01 PM Bug #40061 (Resolved): mds: blacklisted clients eviction is broken
- 02:11 AM Backport #39669 (In Progress): mimic: mds: output lock state in format dump
- https://github.com/ceph/ceph/pull/28274
05/28/2019
- 07:03 PM Bug #17594: cephfs: permission checking not working (MDS should enforce POSIX permissions)
- Jeff Layton wrote:
> Reconfirming that I think this is a problem. Here's Client::mkdir():
>
> [...]
>
> There ...
- 06:44 PM Bug #17594: cephfs: permission checking not working (MDS should enforce POSIX permissions)
- Reconfirming that I think this is a problem. Here's Client::mkdir():...
- 06:19 PM Documentation #24641: Document behaviour of fsync-after-close
- > Second your answer sounds kcephfs specific -- do the same guarantees still hold for ceph-fuse?
FUSE just farms o...
- 06:16 PM Documentation #24641: Document behaviour of fsync-after-close
- Niklas replied via email:
> I think it makes sense to document it the way you say it, e.g. "kcephfs's guarantees i...
- 03:13 PM Documentation #24641: Document behaviour of fsync-after-close
- Niklas Hambuechen wrote:
> The following should be documented:
>
> Does close()/re-open()/fsync() provide the sam...
- 04:17 PM Bug #40002: mds: not trim log under heavy load
- Zheng Yan wrote:
> multiple-active mds?
yes
- 08:22 AM Bug #40002: mds: not trim log under heavy load
- multiple-active mds?
- 10:51 AM Backport #40042 (Resolved): mimic: avoid trimming too many log segments after mds failover
- https://github.com/ceph/ceph/pull/28650
- 10:51 AM Backport #40041 (Resolved): luminous: avoid trimming too many log segments after mds failover
- https://github.com/ceph/ceph/pull/28543
- 10:50 AM Backport #40040 (Resolved): nautilus: avoid trimming too many log segments after mds failover
- https://github.com/ceph/ceph/pull/28582
- 09:25 AM Feature #40036 (Fix Under Review): mgr / volumes: support asynchronous subvolume deletes
- 09:25 AM Feature #40036: mgr / volumes: support asynchronous subvolume deletes
- see: https://github.com/ceph/ceph/blob/master/src/pybind/mgr/volumes/module.py#L393
- 09:00 AM Feature #40036 (Resolved): mgr / volumes: support asynchronous subvolume deletes
- Currently, removing a subvolume does an in-band directory removal. This can cause the operation to run for a long time for huge su...
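A minimal sketch of the general pattern being proposed here (hypothetical names and paths, not the mgr/volumes implementation): make the user-visible removal cheap by renaming the subvolume directory into a trash area and purging it asynchronously, rather than doing the recursive delete in-band.
<pre>
#include <filesystem>
#include <thread>

namespace fs = std::filesystem;

// Sketch only: the rename is fast, so the caller returns immediately while
// the expensive recursive purge runs in the background.
void remove_subvolume_async(const fs::path& subvol, const fs::path& trash) {
  fs::create_directories(trash);
  const fs::path tomb = trash / subvol.filename();
  fs::rename(subvol, tomb);            // cheap: a single directory rename
  std::thread([tomb] {
    fs::remove_all(tomb);              // slow recursive purge, off the user path
  }).detach();
}

int main() {
  // Usage sketch (paths are illustrative):
  // remove_subvolume_async("/volumes/group0/subvol0", "/volumes/_trash");
}
</pre>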
- 07:59 AM Bug #40028 (Pending Backport): mds: avoid trimming too many log segments after mds failover
- 07:57 AM Bug #40034: mds: stuck in clientreplay
- ...
05/27/2019
- 04:13 PM Bug #40034 (Need More Info): mds: stuck in clientreplay
- When I came in on Monday morning, our cluster's cephfs was stuck in clientreplay, and nfs mount through nfs-ganesha h...
- 12:12 PM Bug #40028 (Resolved): mds: avoid trimming too many log segments after mds failover
- If mds was behind on trim before failover, the new mds may trim too many log segments at the same time, and cause unh...
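A rough illustration of the described fix (hypothetical names, not the Ceph MDLog API): cap the number of expired segments trimmed per pass, so an MDS that was behind on trim before the failover drains its backlog over several ticks instead of all at once.
<pre>
#include <cstddef>
#include <deque>

// Hypothetical stand-in for an MDS journal log segment.
struct LogSegment {
  bool expired = true;
};

// Trim at most max_per_tick expired segments per pass; the caller re-invokes
// this on subsequent ticks until the backlog is drained.
std::size_t trim_some(std::deque<LogSegment>& segments, std::size_t max_per_tick) {
  std::size_t trimmed = 0;
  while (!segments.empty() && segments.front().expired && trimmed < max_per_tick) {
    segments.pop_front();
    ++trimmed;
  }
  return trimmed;
}

int main() {
  std::deque<LogSegment> backlog(100);   // e.g. 100 expired segments left over
  while (!backlog.empty())
    trim_some(backlog, 10);              // drained over 10 passes, not one
}
</pre>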
05/24/2019
- 01:23 PM Bug #39947 (Fix Under Review): cephfs-shell: add CI testing with flake8
- 02:55 AM Bug #40019 (New): mds: crash at ms_dispatch thread
- Env: ceph 14.2.1, 3 mds
I enabled the ceph crash module, so I'm pasting the meta here
meta:...
- 12:29 AM Backport #39670 (In Progress): nautilus: mds: output lock state in format dump
05/23/2019
- 04:00 PM Bug #40001: mds cache oversize after restart
- I set debug_mds to 20/20 and almost all of the log is like...
- 10:11 AM Bug #40014 (Resolved): mgr/volumes: Name 'sub_name' is not defined
- I'm getting a new mypy error in master:...
05/22/2019
- 03:55 PM Bug #40002 (Fix Under Review): mds: not trim log under heavy load
- ceph version 14.2.1
we have 3 mds under a heavy load (create 8k files per second)
we find the mds log grows very fast...
- 03:46 PM Bug #40001 (Rejected): mds cache oversize after restart
- ceph version 14.2.1
we have 3 mds under a heavy load (create 8k files per second)
all 3 mds are under 30G mem...
- 12:52 PM Cleanup #4744 (New): mds: pass around LogSegments via std::shared_ptr
- 12:19 PM Feature #38153 (New): client: proactively release caps it is not using
- 12:05 PM Feature #358 (Rejected): mds: efficient revert to snapshot
- There's no RADOS support for reverting to an older snapshot so I don't see this getting fixed in any near-future time...
- 12:01 PM Feature #15066: multifs: Allow filesystems to be assigned RADOS namespace as well as pool for met...
- Needs the ability to delete a RADOS namespace. See also: https://www.spinics.net/lists/ceph-devel/msg36695.html
- 11:48 AM Tasks #39998 (New): client: audit ACL
- Look for race conditions involved with client checks and releasing caps. Jeff wants to help with this.
- 11:27 AM Feature #17835 (Fix Under Review): mds: enable killpoint tests for MDS-MDS subtree export
- 02:35 AM Feature #39098: mds: lock caching for asynchronous unlink
- Jeff Layton wrote:
> Zheng Yan wrote:
> >
> > Yes, this can cause inconsistency. But it's not unique to link cou...
05/21/2019
- 02:20 PM Feature #39982 (Duplicate): cephfs client periodically report cache utilisation to MDS server
- Yup, this is something we're working on for Octopus. Thanks Stefan!
- 12:18 PM Bug #39947 (In Progress): cephfs-shell: add CI testing with flake8
- 09:58 AM Bug #39987 (Resolved): mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- A user reported a bug where the MDS couldn't finish freezing a dirfrag. The cache dump includes the following entries....
- 03:02 AM Backport #39472 (In Progress): mimic: mds: fail to resolve snapshot name contains '_'
- https://github.com/ceph/ceph/pull/28186
05/20/2019
- 05:54 PM Feature #39982 (Duplicate): cephfs client periodically report cache utilisation to MDS server
- After seeing Gregory's talk "What are "caps"? (And Why Won't my Client Drop Them?)" he explained that the MDS servers ne...
- 07:01 AM Feature #39969 (Fix Under Review): mgr / volume: refactor volume module
- 03:02 AM Feature #39969 (In Progress): mgr / volume: refactor volume module
- 03:02 AM Feature #39969 (Resolved): mgr / volume: refactor volume module
- Now, with the addition of submodule commands (interfaces), volume commands live in the main module source while submo...
05/19/2019
- 08:39 AM Bug #39951 (Fix Under Review): mount: key parsing fail when doing a remount
- 08:24 AM Feature #20 (Fix Under Review): client: recover from a killed session (w/ blacklist)
05/17/2019
- 01:25 PM Backport #39960 (Resolved): nautilus: cephfs-shell: mkdir error for relative path
- https://github.com/ceph/ceph/pull/28616
05/16/2019
- 11:16 AM Bug #39951: mount: key parsing fail when doing a remount
- Here's the link to a PR:
https://github.com/ceph/ceph/pull/28148
- 11:06 AM Bug #39951 (Resolved): mount: key parsing fail when doing a remount
- When doing a CephFS remount (-o remount) the secret is parsed from procfs and we get '<hidden>' as a result and the m...
- 05:30 AM Bug #39949 (Fix Under Review): test: extend mgr/volume test to cover new interfaces
- 04:57 AM Bug #39949 (Resolved): test: extend mgr/volume test to cover new interfaces
- Extend `qa/workunits/fs/test-volumes.sh` tests to cover the newly introduced subvolume/subvolumegroup interfaces.
05/15/2019
- 10:49 PM Bug #39947 (Resolved): cephfs-shell: add CI testing with flake8
- See discussion here: https://github.com/ceph/ceph/pull/28080#issuecomment-492387844
- 10:46 PM Bug #39507 (Pending Backport): cephfs-shell: mkdir error for relative path
- 05:05 PM Bug #39943 (Fix Under Review): client: ceph.dir.rctime xattr value incorrectly prefixes "09" to t...
- 01:46 PM Bug #39943 (Resolved): client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the nanos...
- This bug was found while investigating https://tracker.ceph.com/issues/39705 .
The following kernel logic is used ...
- 01:15 PM Bug #39705: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.400554 vs 2019-05-09...
- This bug is due to incorrect placement of the pad/width specifier in:
11809 size_t Client::_vxattrcb_snap_btime(In...
- 10:37 AM Backport #39937 (Resolved): nautilus: cephfs-shell: add a "stat" command
https://github.com/ceph/ceph/pull/28681
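Regarding the pad/width specifier issue described in the Bug #39705 / #39943 comments above, a standalone sketch (not the actual Ceph source) of how a misplaced "09" ends up printed literally in front of the nanoseconds instead of zero-padding them to nine digits:
<pre>
#include <cstdio>

int main() {
  unsigned long sec = 1557444789;   // seconds since the epoch (example value)
  unsigned long nsec = 400554;      // nanoseconds component (example value)

  char bad[64], good[64];
  // Misplaced pad/width specifier: the "09" sits outside the conversion,
  // so it is printed literally in front of the nanoseconds.
  std::snprintf(bad, sizeof(bad), "%lu.09%lu", sec, nsec);
  // Intended form: "%09lu" zero-pads the nanoseconds field to 9 digits.
  std::snprintf(good, sizeof(good), "%lu.%09lu", sec, nsec);

  std::printf("bad:  %s\n", bad);    // 1557444789.09400554
  std::printf("good: %s\n", good);   // 1557444789.000400554
  return 0;
}
</pre>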
- 10:36 AM Backport #39936 (Resolved): nautilus: cephfs-shell: add commands to manipulate quotas
- https://github.com/ceph/ceph/pull/28681
- 10:35 AM Backport #39935 (Resolved): nautilus: cephfs-shell: teuthology tests
- https://github.com/ceph/ceph/pull/28614
- 10:35 AM Backport #39934 (Resolved): nautilus: mgr/volumes: add CephFS subvolumes library
- 09:14 AM Bug #39395: ceph: ceph fs auth fails
- This issue is fixed in the latest version. On luminous, I get the same error.
05/14/2019
- 08:05 PM Feature #39610 (Pending Backport): mgr/volumes: add CephFS subvolumes library
- 07:53 PM Bug #39165 (Pending Backport): cephfs-shell: add commands to manipulate quotas
- 07:51 PM Feature #38829 (Pending Backport): cephfs-shell: add a "stat" command
- 07:50 PM Bug #39526 (Pending Backport): cephfs-shell: teuthology tests
- 07:44 PM Bug #39438: workunit fails with EPERM during thrashing
- /ceph/teuthology-archive/pdonnell-2019-05-11_00:01:05-multimds-wip-pdonnell-testing-20190510.182613-distro-basic-smit...
- 07:43 PM Bug #39752 (New): qa: dual workunit on client but one fails to compile
- ...
- 06:29 PM Bug #39704 (Won't Fix): When running multiple filesystems, directories do not fragment
- 03:20 PM Bug #39704: When running multiple filesystems, directories do not fragment
- Zheng Yan wrote:
> the log shows you were creating files in the root directory. The MDS never fragments the root directory.
I s...
- 08:26 AM Bug #39704: When running multiple filesystems, directories do not fragment
- the log shows you were creating files in the root directory. The MDS never fragments the root directory.
- 06:28 PM Bug #39722 (Duplicate): pybind: ceph_volume_client py3 error
- 01:14 PM Bug #39722: pybind: ceph_volume_client py3 error
- If I am looking at this correctly, you have reported this before - http://tracker.ceph.com/issues/39406#note-2.
Fi...
- 03:35 PM Bug #39750 (Resolved): mgr/volumes: cannot create subvolumes with py3 libraries
- Built ceph, master branch with python 3 enabled,...
- 02:50 AM Backport #39233 (In Progress): mimic: kclient: nofail option not supported
- https://github.com/ceph/ceph/pull/28090
05/13/2019
- 11:01 PM Bug #39722: pybind: ceph_volume_client py3 error
- Rishabh, please investigate.
- 11:01 PM Bug #39722 (Duplicate): pybind: ceph_volume_client py3 error
- ...
- 10:10 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Jeff Layton wrote:
> We may not need this after all. The kernel client at least doesn't care a lot about the inode n...
- 07:01 PM Bug #39704: When running multiple filesystems, directories do not fragment
- Patrick Donnelly wrote:
> This is with Nautilus v14.2.1? Can you bump up debugging on the MDS during the event and s...
- 01:55 PM Bug #39704 (Need More Info): When running multiple filesystems, directories do not fragment
- This is with Nautilus v14.2.1? Can you bump up debugging on the MDS during the event and share the log?
- 02:59 PM Bug #39705: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.400554 vs 2019-05-09...
- ...
- 12:34 PM Bug #39705: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.400554 vs 2019-05-09...
- Patrick Donnelly wrote:
> [...]
>
> From: /ceph/teuthology-archive/nojha-2019-05-09_22:58:42-fs:basic_workload-wi...
- 01:50 PM Bug #39511 (Need More Info): Cannot remove CephFS snapshot with leading underscore (_)
- This looks like you're deleting a snapshot name in a child directory which was not the original directory where the s...
- 01:47 PM Bug #39510 (Fix Under Review): test_volume_client: test_put_object_versioned is unreliable
- 01:44 PM Bug #39395: ceph: ceph fs auth fails
- src/mon/AuthMonitor.cc src/mds/MDSMonitor.cc
- 01:42 PM Bug #39329 (Won't Fix): ceph fs set <xxxxx> min_compat_client <yyyy> DOES NOT warn about connecte...
- 12:30 PM Cleanup #39717 (Resolved): cephfs-shell: Fix flake8 warnings and errors
- Flake8 generates the following warnings and errors:
* E722 do not use bare 'except'
* E303 too many blank lines
* W605 ...
- 10:07 AM Bug #38520 (Resolved): qa: fsstress with valgrind may timeout
- 10:07 AM Backport #38540 (Resolved): mimic: qa: fsstress with valgrind may timeout
- 10:06 AM Backport #39469 (Resolved): mimic: There is no punctuation mark or blank between tid and client_...
- 10:06 AM Backport #38736 (Resolved): mimic: qa: "[WRN] Health check failed: 1/3 mons down, quorum b,c (MON...
- 10:05 AM Backport #39193 (Resolved): mimic: mds: crash during mds restart
- 10:04 AM Backport #39200 (Resolved): mimic: mds: we encountered "No space left on device" when moving huge...
- 09:40 AM Bug #39715 (Resolved): client: optimize rename operation under different quota root
- We had many source directories with more than ten million files. It took a very long time to move one such directory t...
05/11/2019
- 04:21 PM Backport #38540: mimic: qa: fsstress with valgrind may timeout
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27432
merged
- 04:20 PM Backport #39469: mimic: There is no punctuation mark or blank between tid and client_id in the o...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27847
merged
- 04:20 PM Backport #38736: mimic: qa: "[WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN) in c...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27906
merged
- 04:19 PM Backport #39193: mimic: mds: crash during mds restart
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27916
merged
- 04:19 PM Backport #39200: mimic: mds: we encountered "No space left on device" when moving huge number of ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27917
merged
- 01:37 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- We may not need this after all. The kernel client at least doesn't care a lot about the inode number. We can do prett...
05/10/2019
- 06:32 PM Bug #39705 (Resolved): qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.400554 vs...
- ...
- 03:21 PM Bug #39704 (Won't Fix): When running multiple filesystems, directories do not fragment
- Nautilus, Ubuntu 18.04.2, HWE kernel 4.18.0-18-generic.
I have created multiple ceph filesystems:
root@mc-3015-20...
- 10:58 AM Backport #39691 (Resolved): luminous: mds: error "No space left on device" when create a large n...
- https://github.com/ceph/ceph/pull/29829
- 10:58 AM Backport #39690 (Resolved): nautilus: mds: error "No space left on device" when create a large n...
- https://github.com/ceph/ceph/pull/28394
- 10:57 AM Backport #39689 (Resolved): mimic: mds: error "No space left on device" when create a large numb...
- https://github.com/ceph/ceph/pull/28381
- 10:57 AM Backport #39687 (Rejected): luminous: ceph-fuse: client hang because its bad session PipeConnecti...
- 10:57 AM Backport #39686 (Resolved): nautilus: ceph-fuse: client hang because its bad session PipeConnecti...
- https://github.com/ceph/ceph/pull/28375
- 10:57 AM Backport #39685 (Resolved): mimic: ceph-fuse: client hang because its bad session PipeConnection ...
- https://github.com/ceph/ceph/pull/29200
- 10:56 AM Backport #39680 (Resolved): nautilus: pybind: add the lseek() function to pybind of cephfs
- https://github.com/ceph/ceph/pull/28333
- 10:56 AM Backport #39679 (Resolved): mimic: pybind: add the lseek() function to pybind of cephfs
- https://github.com/ceph/ceph/pull/28337
- 10:56 AM Backport #39678 (Resolved): nautilus: cephfs-shell: fix string decode for ls command
- https://github.com/ceph/ceph/pull/28681
- 10:55 AM Backport #39670 (Resolved): nautilus: mds: output lock state in format dump
- https://github.com/ceph/ceph/pull/28233
- 10:55 AM Backport #39669 (Resolved): mimic: mds: output lock state in format dump
- https://github.com/ceph/ceph/pull/28274
05/09/2019
- 07:32 PM Bug #39645 (Pending Backport): mds: output lock state in format dump
- 09:06 AM Bug #39645: mds: output lock state in format dump
- https://github.com/ceph/ceph/pull/27717
- 09:06 AM Bug #39645 (Resolved): mds: output lock state in format dump
- Dumping the cache in plain text prints lock state, but the JSON format dump does not. It is not convenient to debug some...
- 01:38 PM Bug #39651 (In Progress): qa: test_kill_mdstable fails unexpectedly
- I get the following traceback while running test_kill_mdstable: https://github.com/ceph/ceph/blob/master/qa/tasks/cep...
- 12:16 PM Feature #38951: client: implement asynchronous unlink/create
- Found it. The problem is actually in ceph_mdsc_build_path. When passed a positive dentry, that function will return a...
- 08:05 AM Bug #39641 (Fix Under Review): cephfs-shell: 'du' command produces incorrect results
- 08:01 AM Bug #39641 (Resolved): cephfs-shell: 'du' command produces incorrect results
- Errors observed in the following cases:
# No error message printed for invalid directories.
# When directory name is gre...
05/08/2019
- 10:57 PM Feature #39403 (Pending Backport): pybind: add the lseek() function to pybind of cephfs
- 09:51 PM Bug #39305 (Pending Backport): ceph-fuse: client hang because its bad session PipeConnection to mds
- 09:41 PM Bug #39166 (Pending Backport): mds: error "No space left on device" when create a large number o...
- 06:17 PM Bug #39634 (Fix Under Review): qa: test_full_same_file timeout
- ...
- 04:39 PM Feature #38951: client: implement asynchronous unlink/create
- Jeff Layton wrote:
> Doing more testing today with my patchset. I doctored up a version of Zheng's MDS locking rewor... - 03:22 PM Feature #38951: client: implement asynchronous unlink/create
> Doing more testing today with my patchset. I doctored up a version of Zheng's MDS locking rewor...
- 03:22 PM Feature #38951: client: implement asynchronous unlink/create
- 04:06 PM Bug #39404 (Pending Backport): cephfs-shell: fix string decode for ls command
- 10:09 AM Bug #39617 (Duplicate): cephfs-shell dumps backtrace on "ls"
- Hi Patrick,
Please merge this PR https://github.com/ceph/ceph/pull/27716. It resolves the issue.
05/07/2019
- 06:40 PM Documentation #39620 (Resolved): doc: MDS and metadata pool hardware requirements/recommendations
- We get asked this all the time.
- 04:41 PM Bug #39617: cephfs-shell dumps backtrace on "ls"
- This is on F30, fwiw. I backed out this patch, and it seems to fix the issue:...
- 04:30 PM Bug #39617 (Duplicate): cephfs-shell dumps backtrace on "ls"
- Built ceph based on today's master branch (2d410b5a2e428232dc7d6f3abc006da5e9128e77), using this cmake command:
<p...
- 12:01 PM Feature #39610 (Resolved): mgr/volumes: add CephFS subvolumes library
- The FS subvolumes library module will borrow heavily from ceph_volume_client. It'll be used to provision FS subv...
05/06/2019
- 12:05 PM Fix #38801 (Fix Under Review): qa: ignore "ceph.dir.pin: No such attribute" for (old) kernel client
05/02/2019
- 08:35 AM Backport #38877: luminous: mds: high debug logging with many subtrees is slow
- PR - https://github.com/ceph/ceph/pull/27679
- 06:24 AM Backport #39200 (In Progress): mimic: mds: we encountered "No space left on device" when moving h...
- https://github.com/ceph/ceph/pull/27917
- 04:02 AM Backport #39193 (In Progress): mimic: mds: crash during mds restart
- https://github.com/ceph/ceph/pull/27916
05/01/2019
- 08:26 PM Bug #39437 (Resolved): osd: PriorityCache.cc: 265: FAILED ceph_assert(mem_avail >= 0)
- 06:30 PM Bug #39438: workunit fails with EPERM during thrashing
- /ceph/teuthology-archive/pdonnell-2019-04-25_02:44:21-multimds-wip-pdonnell-testing-20190424.232741-distro-basic-smit...
- 09:58 AM Documentation #38729 (Resolved): doc: add LAZYIO
- 09:58 AM Backport #39051 (Resolved): nautilus: doc: add LAZYIO
- 12:58 AM Backport #39051 (In Progress): nautilus: doc: add LAZYIO
- 09:41 AM Documentation #39130 (Resolved): doc: add documentation for `fs set min_compat_client`
- 09:41 AM Backport #39176 (Resolved): nautilus: doc: add documentation for `fs set min_compat_client`
- 01:03 AM Backport #39176 (In Progress): nautilus: doc: add documentation for `fs set min_compat_client`
- 09:28 AM Bug #36384 (Resolved): src/osdc/Journaler.cc: 420: FAILED ceph_assert(!r)
- 09:28 AM Backport #38448 (Resolved): mimic: src/osdc/Journaler.cc: 420: FAILED ceph_assert(!r)
- 09:28 AM Bug #38518 (Resolved): qa: tasks.cephfs.test_misc.TestMisc.test_fs_new hangs because clients cann...
- 09:27 AM Backport #38542 (Resolved): mimic: qa: tasks.cephfs.test_misc.TestMisc.test_fs_new hangs because ...
- 09:27 AM Bug #38487 (Resolved): qa: "Loading libcephfs-jni: Failure!"
- 09:27 AM Backport #38544 (Resolved): mimic: qa: "Loading libcephfs-jni: Failure!"
- 09:26 AM Bug #38723 (Resolved): qa: tolerate longer heartbeat timeouts when using valgrind
- 09:26 AM Backport #38734 (Resolved): mimic: qa: tolerate longer heartbeat timeouts when using valgrind
- 09:26 AM Bug #38491 (Resolved): "log [WRN] : Health check failed: 1 clients failing to respond to capabili...
- 09:24 AM Backport #38670 (Resolved): mimic: "log [WRN] : Health check failed: 1 clients failing to respond...
- 09:24 AM Feature #11172 (Resolved): mds: inode filtering on 'dump cache' asok
- 09:23 AM Backport #38689 (Resolved): mimic: mds: inode filtering on 'dump cache' asok
- 01:21 AM Backport #39471 (In Progress): nautilus: Expose CephFS snapshot creation time to clients
04/30/2019
- 07:28 PM Feature #39098: mds: lock caching for asynchronous unlink
- Zheng Yan wrote:
>
> Yes, this can cause inconsistency. But it's not unique to link count. For example, one clien...
- 02:46 PM Bug #39543 (Fix Under Review): cephfs-shell: df command does not always produce correct output
- 01:22 PM Bug #39543 (Resolved): cephfs-shell: df command does not always produce correct output
- Correct output is not produced in the following cases:
1] For non-existing files, there is no error message
2] Whe...
- 02:45 PM Backport #39050 (In Progress): nautilus: ceph_volume_client: Too many arguments for "WriteOpCtx"
- 02:35 PM Backport #38876 (In Progress): nautilus: mds: high debug logging with many subtrees is slow
- 04:11 AM Backport #39222 (In Progress): nautilus: mds: behind on trimming and "[dentry] was purgeable but ...
- https://github.com/ceph/ceph/pull/27879