Activity

From 09/16/2021 to 10/15/2021

10/15/2021

03:20 PM Backport #52954 (Rejected): pacific: qa/xfstest-dev.py: update to include centos stream
https://github.com/ceph/ceph/pull/54184 Backport Bot
03:16 PM Bug #52822 (Resolved): qa: failed pacific install on fs:upgrade
Patrick Donnelly
03:15 PM Bug #52821 (Pending Backport): qa/xfstest-dev.py: update to include centos stream
Patrick Donnelly
03:15 PM Backport #52953 (Resolved): octopus: mds: crash when journaling during replay
https://github.com/ceph/ceph/pull/43842 Backport Bot
03:15 PM Backport #52952 (Resolved): pacific: mds: crash when journaling during replay
https://github.com/ceph/ceph/pull/43841 Backport Bot
03:15 PM Backport #52951 (Rejected): octopus: qa: skip internal metadata directory when scanning ceph debu...
Backport Bot
03:15 PM Backport #52950 (Rejected): pacific: qa: skip internal metadata directory when scanning ceph debu...
Backport Bot
03:13 PM Fix #52824 (Pending Backport): qa: skip internal metadata directory when scanning ceph debugfs di...
Patrick Donnelly
03:12 PM Bug #51589 (Pending Backport): mds: crash when journaling during replay
Patrick Donnelly
03:10 PM Bug #52949 (Fix Under Review): RuntimeError: The following counters failed to be set on mds daemo...
Patrick Donnelly
03:03 PM Bug #52949 (Resolved): RuntimeError: The following counters failed to be set on mds daemons: {'md...
... Patrick Donnelly
07:40 AM Bug #52820: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
Patrick Donnelly wrote:
> dongdong tao wrote:
> > Do we know why it can succeed on 16.2.5 but fails on 16.2.6?
> ...
dongdong tao
01:02 AM Bug #52820: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
dongdong tao wrote:
> Do we know why it can succeed on 16.2.5 but fails on 16.2.6?
The code in MDSMonitor::tick ...
Patrick Donnelly
02:24 AM Backport #52679 (In Progress): pacific: "cluster [WRN] 1 slow requests" in smoke pacific
Xiubo Li

10/14/2021

10:15 PM Feature #52942 (Resolved): mgr/nfs: add 'nfs cluster config get'
Varsha Rao
05:36 PM Fix #52916: mds,client: formally remove inline data support
Patrick mentioned that we should probably have the scrubber just uninline any inodes it detects are inlined... Jeff Layton
12:46 PM Bug #51589: mds: crash when journaling during replay
Partially fixed with https://github.com/ceph/ceph/pull/43382 Venky Shankar
07:07 AM Bug #52820: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
Do we know why it can succeed on 16.2.5 but fails on 16.2.6?
dongdong tao

10/13/2021

02:04 PM Fix #52916 (In Progress): mds,client: formally remove inline data support
This feature was added and only half implemented several years ago, and we made a decision to start deprecating it in... Jeff Layton
05:40 AM Fix #52715: mds: reduce memory usage during scrubbing
Greg Farnum wrote:
> I'm a bit confused by this ticket; AFAIK scrub is a depth-first search.
IIRC, Zheng changed ...
Venky Shankar

10/12/2021

09:59 PM Fix #52715: mds: reduce memory usage during scrubbing
I'm a bit confused by this ticket; AFAIK scrub is a depth-first search. Greg Farnum
06:41 PM Bug #52874 (Fix Under Review): Monitor might crash after upgrade from ceph to 16.2.6
Patrick Donnelly
06:06 PM Bug #52874: Monitor might crash after upgrade from ceph to 16.2.6
You can work around this problem by setting the following in ceph.conf (for the mons):... Patrick Donnelly
01:41 PM Bug #52874 (Triaged): Monitor might crash after upgrade from ceph to 16.2.6
Patrick Donnelly
06:02 PM Bug #52820 (Fix Under Review): Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
Patrick Donnelly
04:36 PM Bug #52820 (In Progress): Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
Patrick Donnelly
01:42 PM Bug #52821 (Fix Under Review): qa/xfstest-dev.py: update to include centos stream
Rishabh Dave

10/11/2021

12:44 PM Bug #52887 (Resolved): qa: Test failure: test_perf_counters (tasks.cephfs.test_openfiletable.Open...
The teuthology test: https://pulpito.ceph.com/yuriw-2021-10-02_15:12:58-fs-wip-yuri2-testing-2021-10-01-0902-pacific-... Xiubo Li
09:22 AM Bug #44139 (New): mds: check all on-disk metadata is versioned
Jos Collin

10/09/2021

03:18 AM Bug #52876 (Fix Under Review): pacific: cluster [WRN] evicting unresponsive client smithi121 (912...
It's caused by forgetting to shut down the mounter after the test finishes; this was introduced when resolving the conflicts when ba... Xiubo Li
03:11 AM Bug #52876 (Resolved): pacific: cluster [WRN] evicting unresponsive client smithi121 (9126), afte...
The teuthology test: https://pulpito.ceph.com/yuriw-2021-10-02_15:12:58-fs-wip-yuri2-testing-2021-10-01-0902-pacific-... Xiubo Li

10/08/2021

06:40 PM Backport #52875 (Resolved): pacific: qa: test_dirfrag_limit
https://github.com/ceph/ceph/pull/45565 Backport Bot
06:38 PM Bug #52606 (Pending Backport): qa: test_dirfrag_limit
Patrick Donnelly
01:49 PM Bug #52874 (Resolved): Monitor might crash after upgrade from ceph to 16.2.6
The following assertion might pop up:
void FSMap::sanity() const
{
...
if (info.state != MDSMap::STATE_STAND...
Igor Fedotov
01:34 PM Backport #52627: pacific: cephfs-mirror: cephfs-mirror daemon status for a particular FS is not s...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43199
merged
Yuri Weinstein
01:33 PM Backport #52444: pacific: cephfs-mirror: terminating a mirror daemon can cause a crash at times
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43198
merged
Yuri Weinstein
01:32 PM Backport #52441: pacific: mds: slow performance on parallel rm operations for multiple kclients
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43148
merged
Yuri Weinstein
10:54 AM Bug #24030 (Closed): ceph-fuse: double dash meaning
Closing this because:
* In the review, disabling -- is not encouraged.
* When I look at this issue now, this does...
Jos Collin

10/07/2021

06:42 PM Bug #43960: MDS: incorrectly issues Fc for new opens when there is an existing writer
Xiubo Li wrote:
> Whenever switching to a different lock state the MDS will try to issue the allowed caps to the cli...
Greg Farnum
01:20 PM Backport #52854 (Resolved): pacific: qa: test_simple failure
https://github.com/ceph/ceph/pull/50756 Backport Bot
01:17 PM Bug #52677 (Pending Backport): qa: test_simple failure
Patrick Donnelly
11:48 AM Feature #1276 (Resolved): client: expose mds partition via virtual xattrs
Patch was merged for v5.15. Jeff Layton
09:15 AM Bug #52820: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
I was able to complete the upgrade by switching to version 16.2.5.
After that I tried to upgrade from version 16....
Daniel Keller
06:35 AM Bug #48711 (Need More Info): mds: standby-replay mds abort when replay metablob
Hi haitao,
I don't see a segmentation fault here, and the attached logs don't have more information about the cra...
Jos Collin

10/06/2021

01:07 PM Bug #52829: cephfs/mirroring: peer_bootstrap import failed with EPERM
Attaching mgr logs split into two files Kotresh Hiremath Ravishankar
12:55 PM Bug #52829 (New): cephfs/mirroring: peer_bootstrap import failed with EPERM
Tried setting up mirroring on a vstart cluster as below, but peer_bootstrap import failed with EPERM.
#Create vstart ...
Kotresh Hiremath Ravishankar
11:22 AM Bug #52821: qa/xfstest-dev.py: update to include centos stream
rishabh-2021-10-05_09:25:11-fs-wip-rishabh-vr-run-multiple-cmds-distro-basic-smithi/6423166/... Rishabh Dave
05:46 AM Fix #52824 (Closed): qa: skip internal metadata directory when scanning ceph debugfs directory
kclient patchset:
https://patchwork.kernel.org/project/ceph-devel/list/?series=556049
introduces meta direc...
Venky Shankar
12:17 AM Cleanup #51390 (Resolved): mgr/volumes/fs/operations/access.py: fix various flake8 issues
Patrick Donnelly
12:17 AM Cleanup #51407 (Resolved): mgr/volumes/fs/operations/versions/subvolume_attrs.py: fix various fla...
Patrick Donnelly
12:16 AM Cleanup #51381 (Resolved): mgr/volumes/fs/async_job.py: fix various flake8 issues
Patrick Donnelly
12:16 AM Cleanup #51403 (Resolved): mgr/volumes/fs/operations/versions/auth_metadata.py: fix various flake...
Patrick Donnelly
12:15 AM Cleanup #51398 (Resolved): mgr/volumes/fs/operations/subvolume.py: fix various flake8 issues
Patrick Donnelly
12:15 AM Cleanup #51384 (Resolved): mgr/volumes/fs/vol_spec.py: fix various flake8 issues
Patrick Donnelly
12:15 AM Backport #52823 (Resolved): pacific: mgr/nfs: add more log messages
Backport included here:
https://github.com/ceph/ceph/pull/43682
Backport Bot
12:14 AM Cleanup #51396 (Resolved): mgr/volumes/fs/operations/clone_index.py: fix various flake8 issues
Patrick Donnelly
12:14 AM Cleanup #51380 (Resolved): mgr/volumes/module.py: fix various flake8 issues
Patrick Donnelly
12:13 AM Cleanup #51392 (Resolved): mgr/volumes/fs/operations/snapshot_util.py: add extra blank line
Patrick Donnelly
12:13 AM Bug #52274 (Pending Backport): mgr/nfs: add more log messages
Patrick Donnelly
12:02 AM Bug #52822 (Fix Under Review): qa: failed pacific install on fs:upgrade
Patrick Donnelly

10/05/2021

11:59 PM Bug #52822 (Resolved): qa: failed pacific install on fs:upgrade
https://pulpito.ceph.com/pdonnell-2021-10-04_23:25:27-fs-wip-pdonnell-testing-20211002.163337-distro-basic-smithi/642... Patrick Donnelly
06:56 PM Bug #52821 (Pending Backport): qa/xfstest-dev.py: update to include centos stream
Rishabh Dave
05:05 PM Bug #52820 (Resolved): Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
I tried to upgrade my Ceph cluster from 15.2.14 to 16.2.6 on my Proxmox 7.0 servers.
After updating the packages I ...
Daniel Keller

10/02/2021

12:18 AM Cleanup #51384 (Fix Under Review): mgr/volumes/fs/vol_spec.py: fix various flake8 issues
Varsha Rao

10/01/2021

11:23 PM Cleanup #51402 (Fix Under Review): mgr/volumes/fs/operations/versions/subvolume_base.py: fix vari...
Varsha Rao
10:52 PM Cleanup #51390 (Fix Under Review): mgr/volumes/fs/operations/access.py: fix various flake8 issues
Varsha Rao
10:50 PM Cleanup #51381 (Fix Under Review): mgr/volumes/fs/async_job.py: fix various flake8 issues
Varsha Rao
10:49 PM Cleanup #51407 (Fix Under Review): mgr/volumes/fs/operations/versions/subvolume_attrs.py: fix var...
Varsha Rao
10:48 PM Cleanup #51382 (Fix Under Review): mgr/volumes/fs/async_cloner.py: fix various flake8 issues
Varsha Rao
10:27 PM Cleanup #51403 (Fix Under Review): mgr/volumes/fs/operations/versions/auth_metadata.py: fix vario...
Varsha Rao
10:05 PM Cleanup #51398 (Fix Under Review): mgr/volumes/fs/operations/subvolume.py: fix various flake8 issues
Varsha Rao
09:03 PM Cleanup #51396 (Fix Under Review): mgr/volumes/fs/operations/clone_index.py: fix various flake8 i...
Varsha Rao
08:54 PM Cleanup #51400 (Fix Under Review): mgr/volumes/fs/operations/trash.py: fix various flake8 issues
Varsha Rao
08:25 PM Bug #51630 (Fix Under Review): mgr/snap_schedule: don't throw traceback on non-existent fs
Varsha Rao
08:10 PM Cleanup #51380 (Fix Under Review): mgr/volumes/module.py: fix various flake8 issues
Varsha Rao
09:54 AM Feature #46166: mds: store symlink target as xattr in data pool inode for disaster recovery
The link in the description is broken. Here is the working link https://docs.ceph.com/en/latest/cephfs/disaster-recov... Kotresh Hiremath Ravishankar
03:14 AM Cleanup #51392 (Fix Under Review): mgr/volumes/fs/operations/snapshot_util.py: add extra blank line
Varsha Rao

09/30/2021

08:52 PM Cleanup #51392: mgr/volumes/fs/operations/snapshot_util.py: add extra blank line
This PR fixes it: https://github.com/ceph/ceph/pull/43375 Varsha Rao
04:50 PM Cleanup #51391 (Resolved): mgr/volumes/fs/operations/resolver.py: add extra blank line
Patrick Donnelly
04:50 PM Cleanup #51387 (Resolved): mgr/volumes/fs/purge_queue.py: add extra blank line
Patrick Donnelly
01:27 PM Bug #52581 (Can't reproduce): Dangling fs snapshots on data pool after change of directory layout
Closing this issue based on the previous comment. Please re-open it if this happens on a supported version. Kotresh Hiremath Ravishankar
11:21 AM Bug #52581: Dangling fs snapshots on data pool after change of directory layout
The Ceph version in this issue, 'ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)', is no... Kotresh Hiremath Ravishankar
08:43 AM Bug #51589: mds: crash when journaling during replay
Venky Shankar wrote:
> ...
> ...
> Patrick suggested that we could defer journaling blocklisted clients in reconne...
Venky Shankar

09/29/2021

12:56 PM Bug #51589: mds: crash when journaling during replay
I was able to reproduce this in master branch. The crash happens when a standby mds takes over as active and there ar... Venky Shankar
10:41 AM Bug #52581: Dangling fs snapshots on data pool after change of directory layout
Hi Frank,
I tried reproducing this issue on master by changing the root data pool to the new one, but couldn't achi...
Kotresh Hiremath Ravishankar

09/28/2021

01:03 AM Cleanup #51391 (Fix Under Review): mgr/volumes/fs/operations/resolver.py: add extra blank line
Varsha Rao

09/27/2021

11:36 PM Backport #51199 (In Progress): octopus: msg: active_connections regression
Patrick Donnelly
07:15 AM Backport #51199: octopus: msg: active_connections regression
Backport PR: https://github.com/ceph/ceph/pull/43310
This regression issue is blocking https://tracker.ceph.com/...
gerald yang
02:48 PM Cleanup #51387 (Fix Under Review): mgr/volumes/fs/purge_queue.py: add extra blank line
Varsha Rao
01:43 PM Bug #52723 (Fix Under Review): mds: improve mds_bal_fragment_size_max config option
Patrick Donnelly
01:42 PM Feature #52725 (Fix Under Review): qa: mds_dir_max_entries workunit test case
Patrick Donnelly
01:41 PM Feature #52720 (Fix Under Review): mds: mds_bal_rank_mask config option
Patrick Donnelly

09/24/2021

10:30 AM Feature #52725 (Resolved): qa: mds_dir_max_entries workunit test case
mds_dir_max_entries workunit test case Yongseok Oh
06:04 AM Bug #52723 (Resolved): mds: improve mds_bal_fragment_size_max config option
Maintain mds_bal_fragment_size_max as a member variable in Server.cc Yongseok Oh

09/23/2021

04:55 PM Feature #51162 (Fix Under Review): mgr/volumes: `fs volume rename` command
Ramana Raja
10:27 AM Feature #52720 (Resolved): mds: mds_bal_rank_mask config option
Hexadecimal bitmask of the active MDS ranks to rebalance on. The MDS balancer dynamically redistributes subtrees within c... Yongseok Oh
08:24 AM Fix #52715 (New): mds: reduce memory usage during scrubbing
Breadth-first search may queue lots of inodes. Change the scrub traversal to depth-first search.
The new scrub code no ...
Erqi Chen
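The discussion on this ticket (Greg's and Venky's comments above) hinges on why traversal order matters for scrub memory usage. A minimal, Ceph-agnostic sketch of the effect, using a hypothetical directory tree rather than Ceph's actual ScrubStack: a breadth-first queue can grow as wide as the largest level of the tree, while a depth-first stack stays roughly depth * (fan-out - 1).

```python
from collections import deque

def bfs_peak(root, children):
    """Breadth-first walk; returns the largest number of nodes
    queued at once (grows with the widest level of the tree)."""
    queue, peak = deque([root]), 1
    while queue:
        queue.extend(children(queue.popleft()))
        peak = max(peak, len(queue))
    return peak

def dfs_peak(root, children):
    """Depth-first walk; the pending set is bounded by roughly
    depth * (fan-out - 1), independent of how wide a level is."""
    stack, peak = [root], 1
    while stack:
        stack.extend(children(stack.pop()))
        peak = max(peak, len(stack))
    return peak

# Hypothetical tree: fan-out 4, depth 5; a node is modeled by its depth.
children = lambda d: [d + 1] * 4 if d < 5 else []
print(bfs_peak(0, children))  # 1024: the whole widest level sits in the queue
print(dfs_peak(0, children))  # 16: only one path plus pending siblings
```

With the same tree, BFS holds the entire 1024-node bottom level in memory at its peak, while DFS never holds more than 16 pending nodes, which is the intuition behind switching the scrub traversal.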

09/22/2021

07:31 PM Backport #50126 (In Progress): octopus: pybind/mgr/volumes: deadlock on async job hangs finisher ...
Cory Snyder
06:01 AM Backport #52632 (In Progress): octopus: mds,client: add flag to MClientSession for reject reason
Kotresh Hiremath Ravishankar
05:52 AM Backport #52633 (In Progress): pacific: mds,client: add flag to MClientSession for reject reason
Kotresh Hiremath Ravishankar

09/21/2021

01:23 PM Bug #52688 (New): mds: possibly corrupted entry in journal (causes replay failure with file syste...
Replay failed after a standby took over as active. This marks the file system as damaged.
The journal entry for in...
Venky Shankar
11:55 AM Bug #52642 (Fix Under Review): snap scheduler: cephfs snapshot schedule status doesn't list the s...
Venky Shankar
12:50 AM Backport #52680 (Resolved): pacific: Add option in `fs new` command to start rank 0 in failed state
https://github.com/ceph/ceph/pull/45565 Backport Bot
12:50 AM Backport #52679 (Resolved): pacific: "cluster [WRN] 1 slow requests" in smoke pacific
https://github.com/ceph/ceph/pull/43562 Backport Bot
12:50 AM Backport #52678 (Resolved): pacific: qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnap...
https://github.com/ceph/ceph/pull/43702 Backport Bot
12:48 AM Bug #52625 (Pending Backport): qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
Patrick Donnelly
12:47 AM Bug #52572 (Pending Backport): "cluster [WRN] 1 slow requests" in smoke pacific
Patrick Donnelly
12:46 AM Feature #51716 (Pending Backport): Add option in `fs new` command to start rank 0 in failed state
Patrick Donnelly
12:09 AM Bug #52677 (Fix Under Review): qa: test_simple failure
Patrick Donnelly
12:03 AM Bug #52677 (Resolved): qa: test_simple failure
... Patrick Donnelly

09/20/2021

04:15 PM Bug #52607 (Duplicate): qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data...
Patrick Donnelly
01:48 PM Fix #52591: mds: mds_oft_prefetch_dirfrags = false is not qa tested
Patrick suggested that we have this setting in the thrasher suite. Ramana Raja
01:42 PM Bug #52641 (Triaged): snap scheduler: Traceback seen when snapshot schedule remove command is pas...
Patrick Donnelly
01:41 PM Bug #52642 (Triaged): snap scheduler: cephfs snapshot schedule status doesn't list the snapshot c...
Patrick Donnelly
01:38 PM Bug #52643 (Triaged): snap scheduler: cephfs snapshot created with schedules stopped on nfs volum...
Patrick Donnelly
11:40 AM Backport #52629 (In Progress): octopus: pybind/mgr/volumes: first subvolume permissions set perms...
Kotresh Hiremath Ravishankar
11:09 AM Backport #52628 (In Progress): pacific: pybind/mgr/volumes: first subvolume permissions set perms...
Kotresh Hiremath Ravishankar

09/17/2021

09:39 AM Feature #51518 (Resolved): client: flush the mdlog in unsafe requests' relevant and auth MDSes only
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Loïc Dachary
09:34 AM Backport #51833 (Resolved): pacific: client: flush the mdlog in unsafe requests' relevant and aut...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42925
m...
Loïc Dachary
09:34 AM Backport #51937: pacific: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42923
m...
Loïc Dachary
09:33 AM Backport #51977 (Resolved): pacific: client: make sure only to update dir dist from auth mds
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42937
m...
Loïc Dachary
06:59 AM Bug #52643: snap scheduler: cephfs snapshot created with schedules stopped on nfs volume after cr...
Moving to High Priority since the Python traceback causes loss of functionality.
Milind Changire
06:36 AM Bug #52643 (Closed): snap scheduler: cephfs snapshot created with schedules stopped on nfs volume...
... Milind Changire
06:33 AM Bug #52642 (Resolved): snap scheduler: cephfs snapshot schedule status doesn't list the snapshot ...
... Milind Changire
06:28 AM Bug #52641 (Resolved): snap scheduler: Traceback seen when snapshot schedule remove command is pa...
# ceph fs snap-schedule remove
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/mgr/mgr_mod...
Milind Changire
01:58 AM Backport #52639 (In Progress): pacific: MDSMonitor: handle damaged state from standby-replay
Patrick Donnelly
01:50 AM Backport #52639 (Resolved): pacific: MDSMonitor: handle damaged state from standby-replay
https://github.com/ceph/ceph/pull/43200 Backport Bot
01:48 AM Bug #52565 (Pending Backport): MDSMonitor: handle damaged state from standby-replay
Patrick Donnelly
01:47 AM Bug #51278: mds: "FAILED ceph_assert(!segments.empty())"
Might be related to: https://tracker.ceph.com/issues/51589 Venky Shankar
01:05 AM Backport #52627 (In Progress): pacific: cephfs-mirror: cephfs-mirror daemon status for a particul...
Venky Shankar
01:00 AM Backport #52444 (In Progress): pacific: cephfs-mirror: terminating a mirror daemon can cause a cr...
Venky Shankar

09/16/2021

08:58 PM Bug #52280: Mds crash and fails with assert on prepare_new_inode
@xiubo Li
Hi Li,
Thanks again.
- What are the recommended values for mds_log_segment_size and mds_log_segment_size?...
Yael Azulay
01:31 PM Bug #52280 (Fix Under Review): Mds crash and fails with assert on prepare_new_inode
Yael Azulay wrote:
> @xiubo Li
> Thanks much, Li, for your analysis
> I didn't change mds_log_segment_size and mds_...
Xiubo Li
02:36 PM Backport #52444 (Need More Info): pacific: cephfs-mirror: terminating a mirror daemon can cause a...
Patrick Donnelly
02:32 PM Backport #52627 (Need More Info): pacific: cephfs-mirror: cephfs-mirror daemon status for a parti...
Patrick Donnelly
02:35 AM Backport #52627 (Resolved): pacific: cephfs-mirror: cephfs-mirror daemon status for a particular ...
https://github.com/ceph/ceph/pull/43199 Backport Bot
01:01 PM Fix #52591: mds: mds_oft_prefetch_dirfrags = false is not qa tested
Thanks for bringing this up Dan. We'll try to have someone work on this. Patrick Donnelly
02:42 AM Backport #52636 (Resolved): pacific: MDSMonitor: removes MDS coming out of quorum election
https://github.com/ceph/ceph/pull/43698 Backport Bot
02:40 AM Backport #52635 (Resolved): pacific: mds sends cap updates with btime zeroed out
https://github.com/ceph/ceph/pull/45163 Backport Bot
02:40 AM Backport #52634 (Resolved): octopus: mds sends cap updates with btime zeroed out
https://github.com/ceph/ceph/pull/45164 Backport Bot
02:40 AM Backport #52633 (Resolved): pacific: mds,client: add flag to MClientSession for reject reason
https://github.com/ceph/ceph/pull/43251 Backport Bot
02:40 AM Backport #52632 (Resolved): octopus: mds,client: add flag to MClientSession for reject reason
https://github.com/ceph/ceph/pull/43252 Backport Bot
02:40 AM Backport #52631 (Resolved): pacific: mds: add max_mds_entries_per_dir config option
https://github.com/ceph/ceph/pull/44512 Backport Bot
02:37 AM Feature #52491 (Pending Backport): mds: add max_mds_entries_per_dir config option
Patrick Donnelly
02:36 AM Bug #43216 (Pending Backport): MDSMonitor: removes MDS coming out of quorum election
Patrick Donnelly
02:35 AM Bug #52382 (Pending Backport): mds,client: add flag to MClientSession for reject reason
Patrick Donnelly
02:35 AM Backport #52629 (Resolved): octopus: pybind/mgr/volumes: first subvolume permissions set perms on...
https://github.com/ceph/ceph/pull/43224 Backport Bot
02:35 AM Backport #52628 (Resolved): pacific: pybind/mgr/volumes: first subvolume permissions set perms on...
https://github.com/ceph/ceph/pull/43223 Backport Bot
02:35 AM Bug #52123 (Pending Backport): mds sends cap updates with btime zeroed out
Patrick Donnelly
02:34 AM Bug #51870 (Pending Backport): pybind/mgr/volumes: first subvolume permissions set perms on /volu...
Patrick Donnelly
02:33 AM Bug #51989 (Pending Backport): cephfs-mirror: cephfs-mirror daemon status for a particular FS is ...
Patrick Donnelly
02:27 AM Bug #52626 (Triaged): mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
Patrick Donnelly
02:26 AM Bug #52626 (Triaged): mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
... Patrick Donnelly
02:21 AM Bug #52625 (Fix Under Review): qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
Patrick Donnelly
02:20 AM Bug #52625 (Resolved): qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
... Patrick Donnelly