Activity

From 10/11/2021 to 11/09/2021

11/09/2021

04:35 PM Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
I've done some further debugging to understand the MDS performance problem that has been impacting us more. The fini... Andras Pataki
01:14 AM Bug #52975 (Fix Under Review): MDSMonitor: no active MDS after cluster deployment
Patrick Donnelly

11/08/2021

10:07 PM Bug #53194 (Fix Under Review): mds: opening connection to up:replay/up:creating daemon causes mes...
Patrick Donnelly
05:49 PM Bug #53194 (Resolved): mds: opening connection to up:replay/up:creating daemon causes message drop
Found a QA run where MDS was stuck in up:resolve:
https://pulpito.ceph.com/pdonnell-2021-11-05_19:13:39-fs:upgrade...
Patrick Donnelly
04:52 PM Bug #53192 (Fix Under Review): High cephfs MDS latency and CPU load with snapshots and unlink ope...
We have recently enabled snapshots on our large Nautilus cluster (running 14.2.20) and our fairly smooth running ceph... Andras Pataki
02:35 PM Backport #52953 (In Progress): octopus: mds: crash when journaling during replay
Venky Shankar
02:26 PM Backport #52952 (In Progress): pacific: mds: crash when journaling during replay
Venky Shankar
02:23 PM Bug #52975 (In Progress): MDSMonitor: no active MDS after cluster deployment
Patrick Donnelly
04:21 AM Bug #52975: MDSMonitor: no active MDS after cluster deployment
Thanks for the reproducer Igor.
commit cbd9a7b354abb06cd395753f93564bdc687cdb04 ("mon,mds: use per-MDS compat to i...
Venky Shankar
01:07 AM Bug #49132: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_XLOCKDONE",
Andras Pataki wrote:
> I haven't had luck keeping the MDS running well with higher log levels unfortunately. Howeve...
Xiubo Li

11/06/2021

02:54 AM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
We are triggering the new warning: https://tracker.ceph.com/issues/53180 玮文 胡
12:54 AM Bug #53179 (Duplicate): Crash when unlink in corrupted cephfs
We have a corrupted cephfs that, after repair, breaks again every time files are removed.... Daniel Poelzleithner

11/05/2021

08:14 PM Backport #53006 (In Progress): pacific: RuntimeError: The following counters failed to be set on ...
Patrick Donnelly
08:02 PM Backport #53006 (Need More Info): pacific: RuntimeError: The following counters failed to be set ...
I will need to work on this because it pulls in some commits that don't have a tracker assigned. Patrick Donnelly
03:17 AM Backport #53163 (In Progress): octopus: mds: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)...
Xiubo Li
02:56 AM Backport #53164 (In Progress): pacific: mds: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)...
Xiubo Li

11/04/2021

09:00 PM Backport #53165 (Rejected): pacific: qa/vstart_runner: tests crash due to incompatibility
https://github.com/ceph/ceph/pull/54183 Backport Bot
08:56 PM Backport #53164 (Resolved): pacific: mds: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)+0xf3)
https://github.com/ceph/ceph/pull/43815 Backport Bot
08:56 PM Backport #53163 (Resolved): octopus: mds: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)+0xf3)
https://github.com/ceph/ceph/pull/43816 Backport Bot
08:55 PM Backport #53162 (Resolved): pacific: qa: test_standby_count_wanted failure
https://github.com/ceph/ceph/pull/50760 Backport Bot
08:55 PM Bug #53043 (Pending Backport): qa/vstart_runner: tests crash due to incompatibility
Patrick Donnelly
08:52 PM Bug #52995 (Pending Backport): qa: test_standby_count_wanted failure
Patrick Donnelly
08:51 PM Bug #51023 (Pending Backport): mds: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)+0xf3)
Patrick Donnelly
08:29 PM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
/ceph/teuthology-archive/pdonnell-2021-11-04_15:43:53-fs-wip-pdonnell-testing-20211103.023355-distro-basic-smithi/648... Patrick Donnelly
02:27 PM Bug #53155 (Fix Under Review): MDSMonitor: assertion during upgrade to v16.2.5+
Patrick Donnelly
02:21 PM Bug #53155 (Resolved): MDSMonitor: assertion during upgrade to v16.2.5+
... Patrick Donnelly

11/03/2021

11:44 PM Bug #53150 (Fix Under Review): pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade s...
Patrick Donnelly
08:38 PM Bug #53150 (Resolved): pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddlin...
If a v16.2.4 or older MDS fails and rejoins, the compat set assigned to it is the empty set (because it sends no comp... Patrick Donnelly
11:49 AM Backport #52823 (In Progress): pacific: mgr/nfs: add more log messages
Alfonso Martínez
11:43 AM Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED ...
@Neha hit this one in RBD suite: ... Deepika Upadhyay
11:43 AM Bug #53074 (Resolved): pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
Sebastian Wagner
10:42 AM Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
Main Name wrote:
> Same issue with roughly 1.6M folders.
>
> * Generated a Folder tree with 1611111 Folders
> * ...
Main Name
09:52 AM Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
Same issue with roughly 1.6M folders.
* Generated a Folder tree with 1611111 Folders
* Make snapshot
* Delete Fo...
Main Name
09:19 AM Bug #49132: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_XLOCKDONE",
I haven't had luck keeping the MDS running well with higher log levels unfortunately. However, I do have one more da... Andras Pataki
06:57 AM Bug #53126 (Triaged): In the 5.4.0 kernel, the mount of ceph-fuse fails
Venky Shankar
06:54 AM Bug #52487 (In Progress): qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragm...
The check here[0] results in `num_strays` being zero _right after_ the journal was flushed::... Venky Shankar
06:36 AM Feature #50372 (Fix Under Review): test: Implement cephfs-mirror thrasher test for HA active/active
Venky Shankar
06:35 AM Backport #51415 (In Progress): octopus: mds: "FAILED ceph_assert(r == 0 || r == -2)"
Xiubo Li
02:55 AM Backport #53121 (In Progress): pacific: mds: collect I/O sizes from client for cephfs-top
Xiubo Li
02:53 AM Backport #53120 (In Progress): pacific: client: do not defer releasing caps when revoking
Xiubo Li

11/02/2021

10:15 PM Bug #50622 (Resolved): msg: active_connections regression
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Loïc Dachary
10:09 PM Bug #52572 (Resolved): "cluster [WRN] 1 slow requests" in smoke pacific
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Loïc Dachary
10:08 PM Bug #52820 (Resolved): Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Loïc Dachary
10:07 PM Bug #52874 (Resolved): Monitor might crash after upgrade from ceph to 16.2.6
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Loïc Dachary
10:05 PM Backport #52999 (Resolved): pacific: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43615
m...
Loïc Dachary
01:34 PM Backport #52999: pacific: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43615
merged
Yuri Weinstein
10:04 PM Backport #52998 (Resolved): pacific: Monitor might crash after upgrade from ceph to 16.2.6
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43614
m...
Loïc Dachary
01:33 PM Backport #52998: pacific: Monitor might crash after upgrade from ceph to 16.2.6
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43614
merged
Yuri Weinstein
10:03 PM Backport #52679: pacific: "cluster [WRN] 1 slow requests" in smoke pacific
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43562
m...
Loïc Dachary
09:54 PM Backport #51199 (Resolved): octopus: msg: active_connections regression
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43310
m...
Loïc Dachary
03:25 PM Bug #53126: In the 5.4.0 kernel, the mount of ceph-fuse fails
Might be related to #53082 Venky Shankar
06:41 AM Bug #53126 (Closed): In the 5.4.0 kernel, the mount of ceph-fuse fails
Hello everyone,
I use an Ubuntu 18.04.5 server and the Ceph version is 14.2.22.
After upgrading the kernel to 5.4.0, th...
Jiang Yu
01:41 PM Bug #53082: ceph-fuse: segmentation fault in Client::handle_mds_map
Venky, I will take it. Xiubo Li
03:19 AM Bug #52887 (Fix Under Review): qa: Test failure: test_perf_counters (tasks.cephfs.test_openfileta...
Xiubo Li
03:02 AM Bug #52887: qa: Test failure: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
The `self.wait_until_true(lambda: self._check_oft_counter('omap_total_removes', 1), timeout=30)` last check was at `2... Xiubo Li

11/01/2021

06:50 PM Backport #53122 (Rejected): pacific: mds: improve mds_bal_fragment_size_max config option
Backport Bot
06:48 PM Bug #52723 (Pending Backport): mds: improve mds_bal_fragment_size_max config option
Patrick Donnelly
04:41 PM Backport #53121 (Resolved): pacific: mds: collect I/O sizes from client for cephfs-top
https://github.com/ceph/ceph/pull/43784 Backport Bot
04:36 PM Feature #49811 (Pending Backport): mds: collect I/O sizes from client for cephfs-top
Patrick Donnelly
04:35 PM Cleanup #51402 (Resolved): mgr/volumes/fs/operations/versions/subvolume_base.py: fix various flak...
Patrick Donnelly
04:35 PM Backport #53120 (Resolved): pacific: client: do not defer releasing caps when revoking
https://github.com/ceph/ceph/pull/43562 Backport Bot
04:33 PM Bug #52994 (Pending Backport): client: do not defer releasing caps when revoking
Patrick Donnelly
05:12 AM Bug #52887 (In Progress): qa: Test failure: test_perf_counters (tasks.cephfs.test_openfiletable.O...
Xiubo Li
04:55 AM Feature #46866 (Resolved): kceph: add metric for number of pinned capabilities
Xiubo Li
04:48 AM Backport #52679 (Resolved): pacific: "cluster [WRN] 1 slow requests" in smoke pacific
Xiubo Li

10/30/2021

07:51 PM Bug #53096 (Pending Backport): mgr/nfs: handle `radosgw-admin` timeout exceptions
Sage Weil

10/29/2021

05:41 PM Bug #53096 (Fix Under Review): mgr/nfs: handle `radosgw-admin` timeout exceptions
Michael Fritch
05:38 PM Bug #53096 (Pending Backport): mgr/nfs: handle `radosgw-admin` timeout exceptions
Timeout of the `radosgw-admin` command during nfs export create fails with a cryptic message:... Michael Fritch
07:24 AM Bug #53082: ceph-fuse: segmentation fault in Client::handle_mds_map
There are some logs before the corruption:... Xiubo Li
02:11 AM Bug #53082 (Resolved): ceph-fuse: segmentation fault in Client::handle_mds_map
... Patrick Donnelly

10/28/2021

03:10 PM Backport #52678 (In Progress): pacific: qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestS...
Cory Snyder
03:09 PM Backport #53006 (In Progress): pacific: RuntimeError: The following counters failed to be set on ...
Cory Snyder
01:00 PM Backport #52636 (In Progress): pacific: MDSMonitor: removes MDS coming out of quorum election
Cory Snyder
12:36 AM Bug #53074 (Fix Under Review): pybind/mgr/cephadm: upgrade sequence does not continue if no MDS a...
Patrick Donnelly
12:19 AM Bug #53074 (Resolved): pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
... Patrick Donnelly

10/27/2021

01:48 PM Bug #52876 (Resolved): pacific: cluster [WRN] evicting unresponsive client smithi121 (9126), afte...
Patrick Donnelly
01:13 PM Bug #52876: pacific: cluster [WRN] evicting unresponsive client smithi121 (9126), after 303.909 s...
https://github.com/ceph/ceph/pull/43475 merged Yuri Weinstein
01:16 PM Backport #52679: pacific: "cluster [WRN] 1 slow requests" in smoke pacific
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43562
merged
Yuri Weinstein
01:51 AM Documentation #53054: ceph-fuse seems to need root permissions to mount (ceph-fuse-15.2.14-1.fc33...
Userspace ceph-fuse must be root to remount itself to flush dentries. Unfortunately the documentation, yes, should be... Patrick Donnelly
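As a rough illustration of why root is needed here (a sketch under assumptions, not ceph-fuse's actual code): the tracker comment above says ceph-fuse remounts itself to flush dentries, and a remount via mount(2) with MS_REMOUNT requires CAP_SYS_ADMIN, so an unprivileged process fails with EPERM. The mountpoint path below is hypothetical.

    // Sketch only: a remount (used here to stand in for "make the kernel drop
    // cached dentries") needs CAP_SYS_ADMIN; an unprivileged caller gets EPERM.
    #include <sys/mount.h>
    #include <cerrno>
    #include <cstdio>
    #include <cstring>

    int main(int argc, char **argv) {
        const char *mountpoint = argc > 1 ? argv[1] : "/mnt/cephfs";  // hypothetical path
        if (mount(nullptr, mountpoint, nullptr, MS_REMOUNT, nullptr) != 0) {
            std::fprintf(stderr, "remount of %s failed: %s\n",
                         mountpoint, std::strerror(errno));
            return 1;
        }
        std::puts("remount succeeded");
        return 0;
    }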
12:22 AM Documentation #53054: ceph-fuse seems to need root permissions to mount (ceph-fuse-15.2.14-1.fc33...
client is fedora-33:
@[~] $ cat /etc/os-release
NAME=Fedora
VERSION="33 (Workstation Edition)"
ID=fedora
VERSI...
Tom H
12:20 AM Documentation #53054 (Resolved): ceph-fuse seems to need root permissions to mount (ceph-fuse-15....
I am running a minimal ceph cluster and am able to mount the filesystem using both kernel and ceph-fuse (as root)
...
Tom H

10/26/2021

08:31 PM Backport #51199: octopus: msg: active_connections regression
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43310
merged
Yuri Weinstein
02:28 PM Documentation #53004 (In Progress): Improve API documentation for struct ceph_client_callback_args
Jeff Layton
12:50 PM Bug #53045 (New): stat->fsid is not unique among filesystems exported by the ceph server
Jeff Layton
12:50 PM Bug #53045: stat->fsid is not unique among filesystems exported by the ceph server
There is a kernel patch for this in flight at the moment, but we need libcephfs to follow suit. See:
https://lore....
Jeff Layton
12:48 PM Bug #53045 (Resolved): stat->fsid is not unique among filesystems exported by the ceph server
We are working on a new kubernetes operator to export ceph filesystem using samba. To do this, we mount the filesyste... Jeff Layton
11:17 AM Bug #52982 (In Progress): client: Inode::hold_caps_until should be a time from a monotonic clock
Neeraj Pratap Singh
10:38 AM Bug #53043 (Fix Under Review): qa/vstart_runner: tests crash due to incompatibility
Rishabh Dave
10:31 AM Bug #53043 (Resolved): qa/vstart_runner: tests crash due to incompatibility
The incompatible code is - @output = self.ctx.managers[self.cluster_name].raw_cluster_cmd("fs", "ls")@. The cause for... Rishabh Dave

10/25/2021

01:47 PM Feature #52942: mgr/nfs: add 'nfs cluster config get'
@Sebastian -- Guess the orch team is taking care of backports for mgr/nfs. Venky Shankar
01:44 PM Bug #52996 (Duplicate): qa: test_perf_counters via test_openfiletable
Venky Shankar
01:36 PM Bug #52996: qa: test_perf_counters via test_openfiletable
This one should be marked as a duplicate of https://tracker.ceph.com/issues/52887. Xiubo Li
01:05 PM Bug #48502: ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
http://pulpito.front.sepia.ceph.com/adking-2021-10-21_19:20:35-rados:cephadm-wip-adk-testing-2021-10-21-1228-distro-b... Sebastian Wagner
05:59 AM Bug #43960: MDS: incorrectly issues Fc for new opens when there is an existing writer
Greg Farnum wrote:
> Xiubo Li wrote:
> > Whenever switching to a different lock state the MDS will try to issue the...
Xiubo Li
01:33 AM Bug #52280: Mds crash and fails with assert on prepare_new_inode
Yael Azulay wrote:
> @xiubo Li
> Hi Li
> Thanks again
> - What are the recommended values for mds_log_segment_siz...
Xiubo Li

10/21/2021

02:30 PM Backport #53006 (Resolved): pacific: RuntimeError: The following counters failed to be set on mds...
https://github.com/ceph/ceph/pull/43828 Backport Bot
02:27 PM Backport #52875: pacific: qa: test_dirfrag_limit
Note to backporters: include fix for https://tracker.ceph.com/issues/52949 Patrick Donnelly
02:27 PM Bug #52949 (Pending Backport): RuntimeError: The following counters failed to be set on mds daemo...
Patrick Donnelly
02:29 AM Bug #52949 (Fix Under Review): RuntimeError: The following counters failed to be set on mds daemo...
Patrick Donnelly
02:28 AM Bug #52949 (Resolved): RuntimeError: The following counters failed to be set on mds daemons: {'md...
Patrick Donnelly
02:04 PM Documentation #53004 (Pending Backport): Improve API documentation for struct ceph_client_callbac...
In the go-ceph project, an issue was recently raised regarding cache pressure on libcephfs clients [1]. Jeff Layton s... John Mulligan
06:00 AM Bug #51722 (Resolved): mds: slow performance on parallel rm operations for multiple kclients
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Loïc Dachary
06:00 AM Bug #51989 (Resolved): cephfs-mirror: cephfs-mirror daemon status for a particular FS is not showing
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Loïc Dachary
05:59 AM Bug #52062 (Resolved): cephfs-mirror: terminating a mirror daemon can cause a crash at times
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Loïc Dachary
05:58 AM Bug #52565 (Resolved): MDSMonitor: handle damaged state from standby-replay
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Loïc Dachary
05:48 AM Backport #52639 (Resolved): pacific: MDSMonitor: handle damaged state from standby-replay
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43200
m...
Loïc Dachary
05:45 AM Backport #52627 (Resolved): pacific: cephfs-mirror: cephfs-mirror daemon status for a particular ...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43199
m...
Loïc Dachary
05:45 AM Backport #52444 (Resolved): pacific: cephfs-mirror: terminating a mirror daemon can cause a crash...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43198
m...
Loïc Dachary
05:44 AM Backport #52441 (Resolved): pacific: mds: slow performance on parallel rm operations for multiple...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43148
m...
Loïc Dachary
02:34 AM Backport #52999 (In Progress): pacific: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
Patrick Donnelly
02:30 AM Backport #52999 (Resolved): pacific: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
https://github.com/ceph/ceph/pull/43615 Backport Bot
02:32 AM Backport #52998 (In Progress): pacific: Monitor might crash after upgrade from ceph to 16.2.6
Patrick Donnelly
02:30 AM Backport #52998 (Resolved): pacific: Monitor might crash after upgrade from ceph to 16.2.6
https://github.com/ceph/ceph/pull/43614 Backport Bot
02:27 AM Bug #52874 (Pending Backport): Monitor might crash after upgrade from ceph to 16.2.6
Patrick Donnelly
02:26 AM Bug #52820 (Pending Backport): Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
Patrick Donnelly
02:25 AM Feature #48736 (Resolved): qa: enable debug loglevel kclient test suits
Patrick Donnelly
02:12 AM Bug #48772 (Need More Info): qa: pjd: not ok 9, 44, 80
/ceph/teuthology-archive/pdonnell-2021-10-19_04:32:14-fs-wip-pdonnell-testing-20211019.013028-distro-basic-smithi/645... Patrick Donnelly
02:05 AM Bug #52996 (Duplicate): qa: test_perf_counters via test_openfiletable
... Patrick Donnelly
01:44 AM Bug #52995 (Fix Under Review): qa: test_standby_count_wanted failure
Patrick Donnelly
01:43 AM Bug #52995 (Resolved): qa: test_standby_count_wanted failure
... Patrick Donnelly
01:03 AM Bug #52994 (Fix Under Review): client: do not defer releasing caps when revoking
Patrick Donnelly
12:46 AM Bug #52994: client: do not defer releasing caps when revoking
The fix will check the cap immediately instead of queueing and deferring it when revoking caps. Xiubo Li
12:43 AM Bug #52994 (Resolved): client: do not defer releasing caps when revoking
When revoking caps, we currently queue them and defer the release for 5s,
or client_caps_release_delay. What if when the c...
Xiubo Li
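A hypothetical sketch of the behavioral change described in #52994 (names and structure are illustrative, not the actual Ceph Client code): instead of parking the inode on a delayed-release queue for client_caps_release_delay, the revoke handler checks right away which of the revoked caps are no longer wanted and drops them immediately.

    // Hypothetical sketch only -- not the real Ceph client implementation.
    #include <deque>

    struct Inode {
        unsigned issued = 0;   // caps currently held by the client
        unsigned wanted = 0;   // caps the client still wants to keep
    };

    struct Client {
        std::deque<Inode*> cap_release_queue;  // old path: revisit after a delay

        // Deferred behavior: revoked caps stay held until the queue is
        // drained, up to client_caps_release_delay (5s by default).
        void revoke_deferred(Inode *in) {
            cap_release_queue.push_back(in);
        }

        // Immediate behavior: release whatever is revoked and not wanted now,
        // so the MDS does not wait out the delay for caps nobody needs.
        void revoke_immediate(Inode *in, unsigned revoking) {
            unsigned unneeded = in->issued & revoking & ~in->wanted;
            in->issued &= ~unneeded;
        }
    };

    int main() {
        Inode in; in.issued = 0b11; in.wanted = 0b01;
        Client c;
        c.revoke_immediate(&in, 0b10);  // revoked and unwanted cap is dropped at once
        return in.issued == 0b01 ? 0 : 1;
    }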

10/20/2021

03:35 PM Backport #52639: pacific: MDSMonitor: handle damaged state from standby-replay
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43200
merged
Yuri Weinstein
02:42 PM Bug #52982 (Resolved): client: Inode::hold_caps_until should be a time from a monotonic clock
The use of the real clock is vulnerable to system clock changes that could prevent release of any caps.
Use ceph::...
Patrick Donnelly
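A minimal sketch of the clock issue behind #52982 (illustrative only, not the Client code): a deadline such as hold_caps_until stored as wall-clock time moves with the system clock, so stepping the clock backwards can keep "now >= deadline" from ever becoming true, whereas a monotonic clock only advances.

    // Sketch: wall-clock deadlines shift with clock steps; monotonic ones do not.
    #include <chrono>

    using namespace std::chrono;

    bool wall_deadline_reached(system_clock::time_point deadline) {
        // Vulnerable: an administrator or NTP stepping the clock backwards
        // pushes "now" behind the deadline again, postponing the release.
        return system_clock::now() >= deadline;
    }

    bool mono_deadline_reached(steady_clock::time_point deadline) {
        // Safe: steady_clock is unaffected by changes to the system time.
        return steady_clock::now() >= deadline;
    }

    int main() {
        auto wall = system_clock::now() + seconds(5);
        auto mono = steady_clock::now() + seconds(5);
        // Immediately after arming, neither deadline has passed yet.
        return (!wall_deadline_reached(wall) && !mono_deadline_reached(mono)) ? 0 : 1;
    }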

10/19/2021

02:28 PM Bug #52975: MDSMonitor: no active MDS after cluster deployment
This behavior isn't present in 16.2.5. Igor Fedotov
02:27 PM Bug #52975 (Resolved): MDSMonitor: no active MDS after cluster deployment
This happens starting with v16.2.6 if CephFS volume creation and setting allow_standby_replay mode occur before MDS daemon... Igor Fedotov
09:29 AM Feature #47490: Integration of dashboard with volume/nfs module
Can we close this/mark it as duplicate of https://tracker.ceph.com/issues/46493 (where all pacific backporting will t... Ernesto Puerta
09:18 AM Feature #47490 (Pending Backport): Integration of dashboard with volume/nfs module
Ernesto Puerta

10/18/2021

07:05 PM Backport #52968 (Rejected): pacific: mgr/nfs: add 'nfs cluster config get'
Backport Bot
07:03 PM Feature #52942 (Pending Backport): mgr/nfs: add 'nfs cluster config get'
Sage Weil
01:47 PM Feature #46166 (In Progress): mds: store symlink target as xattr in data pool inode for disaster ...
Kotresh Hiremath Ravishankar
01:44 PM Fix #52916 (In Progress): mds,client: formally remove inline data support
Patrick Donnelly
12:53 PM Bug #52581: Dangling fs snapshots on data pool after change of directory layout
Sorry, just noticed the status is set to "Can't reproduce". This is OK.
I would like to help build a reproducer. Fo...
Frank Schilder
12:50 PM Bug #52581: Dangling fs snapshots on data pool after change of directory layout
Please don't close an issue without providing an actual fix; that you can't reproduce it with a simple test doesn't m... Frank Schilder

10/15/2021

03:20 PM Backport #52954 (Rejected): pacific: qa/xfstest-dev.py: update to include centos stream
https://github.com/ceph/ceph/pull/54184 Backport Bot
03:16 PM Bug #52822 (Resolved): qa: failed pacific install on fs:upgrade
Patrick Donnelly
03:15 PM Bug #52821 (Pending Backport): qa/xfstest-dev.py: update to include centos stream
Patrick Donnelly
03:15 PM Backport #52953 (Resolved): octopus: mds: crash when journaling during replay
https://github.com/ceph/ceph/pull/43842 Backport Bot
03:15 PM Backport #52952 (Resolved): pacific: mds: crash when journaling during replay
https://github.com/ceph/ceph/pull/43841 Backport Bot
03:15 PM Backport #52951 (Rejected): octopus: qa: skip internal metadata directory when scanning ceph debu...
Backport Bot
03:15 PM Backport #52950 (Rejected): pacific: qa: skip internal metadata directory when scanning ceph debu...
Backport Bot
03:13 PM Fix #52824 (Pending Backport): qa: skip internal metadata directory when scanning ceph debugfs di...
Patrick Donnelly
03:12 PM Bug #51589 (Pending Backport): mds: crash when journaling during replay
Patrick Donnelly
03:10 PM Bug #52949 (Fix Under Review): RuntimeError: The following counters failed to be set on mds daemo...
Patrick Donnelly
03:03 PM Bug #52949 (Resolved): RuntimeError: The following counters failed to be set on mds daemons: {'md...
... Patrick Donnelly
07:40 AM Bug #52820: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
Patrick Donnelly wrote:
> dongdong tao wrote:
> > Do we know why it can succeed on 16.2.5 but failed on 16.2.6?
> ...
dongdong tao
01:02 AM Bug #52820: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
dongdong tao wrote:
> Do we know why it can succeed on 16.2.5 but failed on 16.2.6?
The code in MDSMonitor::tick ...
Patrick Donnelly
02:24 AM Backport #52679 (In Progress): pacific: "cluster [WRN] 1 slow requests" in smoke pacific
Xiubo Li

10/14/2021

10:15 PM Feature #52942 (Resolved): mgr/nfs: add 'nfs cluster config get'
Varsha Rao
05:36 PM Fix #52916: mds,client: formally remove inline data support
Patrick mentioned that we should probably have the scrubber just uninline any inodes that it detects are inlined... Jeff Layton
12:46 PM Bug #51589: mds: crash when journaling during replay
Partially fixed with https://github.com/ceph/ceph/pull/43382 Venky Shankar
07:07 AM Bug #52820: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
Do we know why it succeeds on 16.2.5 but fails on 16.2.6?
dongdong tao

10/13/2021

02:04 PM Fix #52916 (In Progress): mds,client: formally remove inline data support
This feature was added and only half implemented several years ago, and we made a decision to start deprecating it in... Jeff Layton
05:40 AM Fix #52715: mds: reduce memory usage during scrubbing
Greg Farnum wrote:
> I'm a bit confused by this ticket; AFAIK scrub is a depth-first search.
IIRC, Zheng changed ...
Venky Shankar

10/12/2021

09:59 PM Fix #52715: mds: reduce memory usage during scrubbing
I'm a bit confused by this ticket; AFAIK scrub is a depth-first search. Greg Farnum
06:41 PM Bug #52874 (Fix Under Review): Monitor might crash after upgrade from ceph to 16.2.6
Patrick Donnelly
06:06 PM Bug #52874: Monitor might crash after upgrade from ceph to 16.2.6
You can get around this problem by setting in ceph.conf (for the mons):... Patrick Donnelly
01:41 PM Bug #52874 (Triaged): Monitor might crash after upgrade from ceph to 16.2.6
Patrick Donnelly
06:02 PM Bug #52820 (Fix Under Review): Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
Patrick Donnelly
04:36 PM Bug #52820 (In Progress): Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
Patrick Donnelly
01:42 PM Bug #52821 (Fix Under Review): qa/xfstest-dev.py: update to include centos stream
Rishabh Dave

10/11/2021

12:44 PM Bug #52887 (Resolved): qa: Test failure: test_perf_counters (tasks.cephfs.test_openfiletable.Open...
The teuthology test: https://pulpito.ceph.com/yuriw-2021-10-02_15:12:58-fs-wip-yuri2-testing-2021-10-01-0902-pacific-... Xiubo Li
09:22 AM Bug #44139 (New): mds: check all on-disk metadata is versioned
Jos Collin