Activity
From 10/08/2021 to 11/06/2021
11/06/2021
- 02:54 AM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- We are triggering the new warning: https://tracker.ceph.com/issues/53180
- 12:54 AM Bug #53179 (Duplicate): Crash when unlink in corrupted cephfs
- We have a corrupted cephfs that breaks every time after the repair when files are removed....
11/05/2021
- 08:14 PM Backport #53006 (In Progress): pacific: RuntimeError: The following counters failed to be set on ...
- 08:02 PM Backport #53006 (Need More Info): pacific: RuntimeError: The following counters failed to be set ...
- I will need to work on this because it pulls in some commits that don't have a tracker assigned.
- 03:17 AM Backport #53163 (In Progress): octopus: mds: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)...
- 02:56 AM Backport #53164 (In Progress): pacific: mds: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)...
11/04/2021
- 09:00 PM Backport #53165 (In Progress): pacific: qa/vstart_runner: tests crashes due incompatiblity
- https://github.com/ceph/ceph/pull/54183
- 08:56 PM Backport #53164 (Resolved): pacific: mds: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)+0xf3)
- https://github.com/ceph/ceph/pull/43815
- 08:56 PM Backport #53163 (Resolved): octopus: mds: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)+0xf3)
- https://github.com/ceph/ceph/pull/43816
- 08:55 PM Backport #53162 (Resolved): pacific: qa: test_standby_count_wanted failure
- https://github.com/ceph/ceph/pull/50760
- 08:55 PM Bug #53043 (Pending Backport): qa/vstart_runner: tests crashes due incompatiblity
- 08:52 PM Bug #52995 (Pending Backport): qa: test_standby_count_wanted failure
- 08:51 PM Bug #51023 (Pending Backport): mds: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)+0xf3)
- 08:29 PM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- /ceph/teuthology-archive/pdonnell-2021-11-04_15:43:53-fs-wip-pdonnell-testing-20211103.023355-distro-basic-smithi/648...
- 02:27 PM Bug #53155 (Fix Under Review): MDSMonitor: assertion during upgrade to v16.2.5+
- 02:21 PM Bug #53155 (Resolved): MDSMonitor: assertion during upgrade to v16.2.5+
- ...
11/03/2021
- 11:44 PM Bug #53150 (Fix Under Review): pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade s...
- 08:38 PM Bug #53150 (Resolved): pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddlin...
- If a v16.2.4 or older MDS fails and rejoins, the compat set assigned to it is the empty set (because it sends no comp...
- 11:49 AM Backport #52823 (In Progress): pacific: mgr/nfs: add more log messages
- 11:43 AM Bug #53074 (Resolved): pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
- 10:42 AM Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- Main Name wrote:
> Same issue with roughly 1.6M folders.
>
> * Generated a Folder tree with 1611111 Folders
> * ...
- 09:52 AM Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- Same issue with roughly 1.6M folders.
* Generated a Folder tree with 1611111 Folders
* Make snapshot
* Delete Fo...
- 09:19 AM Bug #49132: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_XLOCKDONE",
- I haven't had luck keeping the MDS running well with higher log levels unfortunately. However, I do have one more da...
- 06:57 AM Bug #53126 (Triaged): In the 5.4.0 kernel, the mount of ceph-fuse fails
- 06:54 AM Bug #52487 (In Progress): qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragm...
- The check here[0] results in `num_strays` being zero _right after_ the journal was flushed::...
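A minimal sketch of the idea, not the actual fix: poll the mds_cache perf counters until num_strays settles, rather than asserting right after the journal flush. It assumes the usual qa Filesystem helper mds_asok; the helper name wait_for_strays is an illustration only.
import time

def wait_for_strays(fs, expected, timeout=30, interval=2):
    # Poll the active MDS perf counters until num_strays reaches `expected`
    # or the timeout expires, instead of checking immediately after the flush.
    waited = 0
    while waited < timeout:
        strays = fs.mds_asok(['perf', 'dump', 'mds_cache'])['mds_cache']['num_strays']
        if strays >= expected:
            return strays
        time.sleep(interval)
        waited += interval
    raise RuntimeError("num_strays never reached %d within %ds" % (expected, timeout))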
- 06:36 AM Feature #50372 (Fix Under Review): test: Implement cephfs-mirror trasher test for HA active/active
- 06:35 AM Backport #51415 (In Progress): octopus: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- 02:55 AM Backport #53121 (In Progress): pacific: mds: collect I/O sizes from client for cephfs-top
- 02:53 AM Backport #53120 (In Progress): pacific: client: do not defer releasing caps when revoking
11/02/2021
- 10:15 PM Bug #50622 (Resolved): msg: active_connections regression
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:09 PM Bug #52572 (Resolved): "cluster [WRN] 1 slow requests" in smoke pacific
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:08 PM Bug #52820 (Resolved): Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:07 PM Bug #52874 (Resolved): Monitor might crash after upgrade from ceph to 16.2.6
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:05 PM Backport #52999 (Resolved): pacific: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43615
m...
- 01:34 PM Backport #52999: pacific: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43615
merged - 10:04 PM Backport #52998 (Resolved): pacific: Monitor might crash after upgrade from ceph to 16.2.6
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43614
m...
- 01:33 PM Backport #52998: pacific: Monitor might crash after upgrade from ceph to 16.2.6
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43614
merged - 10:03 PM Backport #52679: pacific: "cluster [WRN] 1 slow requests" in smoke pacific
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43562
m...
- 09:54 PM Backport #51199 (Resolved): octopus: msg: active_connections regression
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43310
m...
- 03:25 PM Bug #53126: In the 5.4.0 kernel, the mount of ceph-fuse fails
- Might be related to #53082
- 06:41 AM Bug #53126 (Closed): In the 5.4.0 kernel, the mount of ceph-fuse fails
- Hello everyone,
I use ubuntu18.04.5 server and the ceph version is 14.2.22.
After upgrading the kernel to 5.4.0, th...
- 01:41 PM Bug #53082: ceph-fuse: segmenetation fault in Client::handle_mds_map
- Venky, I will take it.
- 03:19 AM Bug #52887 (Fix Under Review): qa: Test failure: test_perf_counters (tasks.cephfs.test_openfileta...
- 03:02 AM Bug #52887: qa: Test failure: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
- The `self.wait_until_true(lambda: self._check_oft_counter('omap_total_removes', 1), timeout=30)` last check was at `2...
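For illustration only: the same check with a longer window, since 30s may simply be too tight for the open file table commit to land; the larger value is an assumption, not the agreed fix.
self.wait_until_true(
    lambda: self._check_oft_counter('omap_total_removes', 1),
    timeout=120,
)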
11/01/2021
- 06:50 PM Backport #53122 (Rejected): pacific: mds: improve mds_bal_fragment_size_max config option
- 06:48 PM Bug #52723 (Pending Backport): mds: improve mds_bal_fragment_size_max config option
- 04:41 PM Backport #53121 (Resolved): pacific: mds: collect I/O sizes from client for cephfs-top
- https://github.com/ceph/ceph/pull/43784
- 04:36 PM Feature #49811 (Pending Backport): mds: collect I/O sizes from client for cephfs-top
- 04:35 PM Cleanup #51402 (Resolved): mgr/volumes/fs/operations/versions/subvolume_base.py: fix various flak...
- 04:35 PM Backport #53120 (Resolved): pacific: client: do not defer releasing caps when revoking
- https://github.com/ceph/ceph/pull/43562
- 04:33 PM Bug #52994 (Pending Backport): client: do not defer releasing caps when revoking
- 05:12 AM Bug #52887 (In Progress): qa: Test failure: test_perf_counters (tasks.cephfs.test_openfiletable.O...
- 04:55 AM Feature #46866 (Resolved): kceph: add metric for number of pinned capabilities
- 04:48 AM Backport #52679 (Resolved): pacific: "cluster [WRN] 1 slow requests" in smoke pacific
10/30/2021
10/29/2021
- 05:41 PM Bug #53096 (Fix Under Review): mgr/nfs: handle `radosgw-admin` timeout exceptions
- 05:38 PM Bug #53096 (Pending Backport): mgr/nfs: handle `radosgw-admin` timeout exceptions
- Timeout of the `radosgw-admin` command during nfs export create fails with a cryptic message:...
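A hedged sketch of the general approach, not the mgr/nfs implementation: wrap the external radosgw-admin call so a timeout surfaces as a clear error rather than a raw traceback. The helper name and error wording are assumptions.
import subprocess

def run_radosgw_admin(args, timeout=60):
    # Run radosgw-admin and turn a hang into an explicit, readable failure.
    cmd = ['radosgw-admin'] + args
    try:
        return subprocess.run(cmd, capture_output=True, text=True,
                              timeout=timeout, check=True)
    except subprocess.TimeoutExpired:
        raise RuntimeError("radosgw-admin %s timed out after %ds; "
                           "check that the RGW/RADOS backend is reachable"
                           % (' '.join(args), timeout))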
- 07:24 AM Bug #53082: ceph-fuse: segmenetation fault in Client::handle_mds_map
- There are some logs before the corruption:...
- 02:11 AM Bug #53082 (Resolved): ceph-fuse: segmenetation fault in Client::handle_mds_map
- ...
10/28/2021
- 03:10 PM Backport #52678 (In Progress): pacific: qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestS...
- 03:09 PM Backport #53006 (In Progress): pacific: RuntimeError: The following counters failed to be set on ...
- 01:00 PM Backport #52636 (In Progress): pacific: MDSMonitor: removes MDS coming out of quorum election
- 12:36 AM Bug #53074 (Fix Under Review): pybind/mgr/cephadm: upgrade sequence does not continue if no MDS a...
- 12:19 AM Bug #53074 (Resolved): pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
- ...
10/27/2021
- 01:48 PM Bug #52876 (Resolved): pacific: cluster [WRN] evicting unresponsive client smithi121 (9126), afte...
- 01:13 PM Bug #52876: pacific: cluster [WRN] evicting unresponsive client smithi121 (9126), after 303.909 s...
- https://github.com/ceph/ceph/pull/43475 merged
- 01:16 PM Backport #52679: pacific: "cluster [WRN] 1 slow requests" in smoke pacific
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43562
merged
- 01:51 AM Documentation #53054: ceph-fuse seems to need root permissions to mount (ceph-fuse-15.2.14-1.fc33...
- Userspace ceph-fuse must be root to remount itself to flush dentries. Unfortunately the documentation, yes, should be...
- 12:22 AM Documentation #53054: ceph-fuse seems to need root permissions to mount (ceph-fuse-15.2.14-1.fc33...
- client is fedora-33:
@[~] $ cat /etc/os-release
NAME=Fedora
VERSION="33 (Workstation Edition)"
ID=fedora
VERSI...
- 12:20 AM Documentation #53054 (Resolved): ceph-fuse seems to need root permissions to mount (ceph-fuse-15....
- I am running a minimal ceph cluster and am able to mount the filesystem using both kernel and ceph-fuse (as root)
...
10/26/2021
- 08:31 PM Backport #51199: octopus: msg: active_connections regression
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43310
merged
- 02:28 PM Documentation #53004 (In Progress): Improve API documentation for struct ceph_client_callback_args
- 12:50 PM Bug #53045 (New): stat->fsid is not unique among filesystems exported by the ceph server
- 12:50 PM Bug #53045: stat->fsid is not unique among filesystems exported by the ceph server
- There is a kernel patch for this in flight at the moment, but we need libcephfs to follow suit. See:
https://lore....
- 12:48 PM Bug #53045 (Resolved): stat->fsid is not unique among filesystems exported by the ceph server
- We are working on a new Kubernetes operator to export a Ceph filesystem using Samba. To do this, we mount the filesyste...
- 11:17 AM Bug #52982 (In Progress): client: Inode::hold_caps_until should be a time from a monotonic clock
- 10:38 AM Bug #53043 (Fix Under Review): qa/vstart_runner: tests crashes due incompatiblity
- 10:31 AM Bug #53043 (Pending Backport): qa/vstart_runner: tests crashes due incompatiblity
- The incompatible code is - @output = self.ctx.managers[self.cluster_name].raw_cluster_cmd("fs", "ls")@. The cause for...
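Purely as an illustration of one way to tolerate the divergence between the teuthology and vstart_runner managers; this is not the actual patch, and run_cluster_cmd as the alternative entry point is an assumption.
def fs_ls(manager):
    # Prefer the newer helper when the manager provides it, otherwise
    # fall back to the older raw_cluster_cmd interface.
    if hasattr(manager, 'run_cluster_cmd'):
        return manager.run_cluster_cmd(args=['fs', 'ls'])
    return manager.raw_cluster_cmd('fs', 'ls')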
10/25/2021
- 01:47 PM Feature #52942: mgr/nfs: add 'nfs cluster config get'
- @Sebastian -- Guess the orch team is taking care of backports for mgr/nfs.
- 01:44 PM Bug #52996 (Duplicate): qa: test_perf_counters via test_openfiletable
- 01:36 PM Bug #52996: qa: test_perf_counters via test_openfiletable
- This one should be marked as a duplicate of https://tracker.ceph.com/issues/52887.
- 01:05 PM Bug #48502: ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
- http://pulpito.front.sepia.ceph.com/adking-2021-10-21_19:20:35-rados:cephadm-wip-adk-testing-2021-10-21-1228-distro-b...
- 05:59 AM Bug #43960: MDS: incorrectly issues Fc for new opens when there is an existing writer
- Greg Farnum wrote:
> Xiubo Li wrote:
> > Whenever switching to a different lock state the MDS will try to issue the...
- 01:33 AM Bug #52280: Mds crash and fails with assert on prepare_new_inode
- Yael Azulay wrote:
> @xiubo Li
> Hi Li
> Thanks again
> - What are the recommended values for mds_log_segment_siz...
10/21/2021
- 02:30 PM Backport #53006 (Resolved): pacific: RuntimeError: The following counters failed to be set on mds...
- https://github.com/ceph/ceph/pull/43828
- 02:27 PM Backport #52875: pacific: qa: test_dirfrag_limit
- Note to backporters: include fix for https://tracker.ceph.com/issues/52949
- 02:27 PM Bug #52949 (Pending Backport): RuntimeError: The following counters failed to be set on mds daemo...
- 02:29 AM Bug #52949 (Fix Under Review): RuntimeError: The following counters failed to be set on mds daemo...
- 02:28 AM Bug #52949 (Resolved): RuntimeError: The following counters failed to be set on mds daemons: {'md...
- 02:04 PM Documentation #53004 (Pending Backport): Improve API documentation for struct ceph_client_callbac...
- In the go-ceph project, an issue was recently raised regarding cache pressure on libcephfs clients [1]. Jeff Layton s...
- 06:00 AM Bug #51722 (Resolved): mds: slow performance on parallel rm operations for multiple kclients
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:00 AM Bug #51989 (Resolved): cephfs-mirror: cephfs-mirror daemon status for a particular FS is not showing
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 05:59 AM Bug #52062 (Resolved): cephfs-mirror: terminating a mirror daemon can cause a crash at times
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 05:58 AM Bug #52565 (Resolved): MDSMonitor: handle damaged state from standby-replay
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 05:48 AM Backport #52639 (Resolved): pacific: MDSMonitor: handle damaged state from standby-replay
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43200
m...
- 05:45 AM Backport #52627 (Resolved): pacific: cephfs-mirror: cephfs-mirror daemon status for a particular ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43199
m...
- 05:45 AM Backport #52444 (Resolved): pacific: cephfs-mirror: terminating a mirror daemon can cause a crash...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43198
m...
- 05:44 AM Backport #52441 (Resolved): pacific: mds: slow performance on parallel rm operations for multiple...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43148
m...
- 02:34 AM Backport #52999 (In Progress): pacific: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- 02:30 AM Backport #52999 (Resolved): pacific: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- https://github.com/ceph/ceph/pull/43615
- 02:32 AM Backport #52998 (In Progress): pacific: Monitor might crash after upgrade from ceph to 16.2.6
- 02:30 AM Backport #52998 (Resolved): pacific: Monitor might crash after upgrade from ceph to 16.2.6
- https://github.com/ceph/ceph/pull/43614
- 02:27 AM Bug #52874 (Pending Backport): Monitor might crash after upgrade from ceph to 16.2.6
- 02:26 AM Bug #52820 (Pending Backport): Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- 02:25 AM Feature #48736 (Resolved): qa: enable debug loglevel kclient test suits
- 02:12 AM Bug #48772 (Need More Info): qa: pjd: not ok 9, 44, 80
- /ceph/teuthology-archive/pdonnell-2021-10-19_04:32:14-fs-wip-pdonnell-testing-20211019.013028-distro-basic-smithi/645...
- 02:05 AM Bug #52996 (Duplicate): qa: test_perf_counters via test_openfiletable
- ...
- 01:44 AM Bug #52995 (Fix Under Review): qa: test_standby_count_wanted failure
- 01:43 AM Bug #52995 (Resolved): qa: test_standby_count_wanted failure
- ...
- 01:03 AM Bug #52994 (Fix Under Review): client: do not defer releasing caps when revoking
- 12:46 AM Bug #52994: client: do not defer releasing caps when revoking
- The fix will check the caps immediately when revoking, instead of queuing the release and deferring it.
- 12:43 AM Bug #52994 (Resolved): client: do not defer releasing caps when revoking
- When revoking caps, we currently queue the release and defer it for 5s or client_caps_release_delay. What if when the c...
10/20/2021
- 03:35 PM Backport #52639: pacific: MDSMonitor: handle damaged state from standby-replay
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43200
merged
- 02:42 PM Bug #52982 (Resolved): client: Inode::hold_caps_until should be a time from a monotonic clock
- The use of the real clock is vulnerable to system clock changes that could prevent release of any caps.
Use ceph::...
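A generic illustration of the reasoning in plain Python (not Ceph code): a wall-clock deadline breaks when the system clock is stepped, while a monotonic deadline does not, which is why hold_caps_until should come from a monotonic clock.
import time

deadline_wall = time.time() + 5        # invalidated if the wall clock is changed
deadline_mono = time.monotonic() + 5   # unaffected by wall-clock changes

def caps_hold_expired():
    # Compare against the monotonic deadline, never the wall clock.
    return time.monotonic() >= deadline_mono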
10/19/2021
- 02:28 PM Bug #52975: MDSMonitor: no active MDS after cluster deployment
- This behavior isn't present in 16.2.5.
- 02:27 PM Bug #52975 (Resolved): MDSMonitor: no active MDS after cluster deployment
- This happens starting v16.2.6 if CephFS volume creation and setting allow_standby_replay mode occur before MDS daemon...
- 09:29 AM Feature #47490: Integration of dashboard with volume/nfs module
- Can we close this/mark it as duplicate of https://tracker.ceph.com/issues/46493 (where all pacific backporting will t...
- 09:18 AM Feature #47490 (Pending Backport): Integration of dashboard with volume/nfs module
10/18/2021
- 07:05 PM Backport #52968 (Rejected): pacific: mgr/nfs: add 'nfs cluster config get'
- 07:03 PM Feature #52942 (Pending Backport): mgr/nfs: add 'nfs cluster config get'
- 01:47 PM Feature #46166 (In Progress): mds: store symlink target as xattr in data pool inode for disaster ...
- 01:44 PM Fix #52916 (In Progress): mds,client: formally remove inline data support
- 12:53 PM Bug #52581: Dangling fs snapshots on data pool after change of directory layout
- Sorry, just noticed status is set to "Can't reproduce". This is OK.
I would like to help build a reproducer. Fo...
- 12:50 PM Bug #52581: Dangling fs snapshots on data pool after change of directory layout
- Please don't close an issue without providing an actual fix; that you can't reproduce it with a simple test doesn't m...
10/15/2021
- 03:20 PM Backport #52954 (Rejected): pacific: qa/xfstest-dev.py: update to include centos stream
- https://github.com/ceph/ceph/pull/54184
- 03:16 PM Bug #52822 (Resolved): qa: failed pacific install on fs:upgrade
- 03:15 PM Bug #52821 (Pending Backport): qa/xfstest-dev.py: update to include centos stream
- 03:15 PM Backport #52953 (Resolved): octopus: mds: crash when journaling during replay
- https://github.com/ceph/ceph/pull/43842
- 03:15 PM Backport #52952 (Resolved): pacific: mds: crash when journaling during replay
- https://github.com/ceph/ceph/pull/43841
- 03:15 PM Backport #52951 (Rejected): octopus: qa: skip internal metadata directory when scanning ceph debu...
- 03:15 PM Backport #52950 (Rejected): pacific: qa: skip internal metadata directory when scanning ceph debu...
- 03:13 PM Fix #52824 (Pending Backport): qa: skip internal metadata directory when scanning ceph debugfs di...
- 03:12 PM Bug #51589 (Pending Backport): mds: crash when journaling during replay
- 03:10 PM Bug #52949 (Fix Under Review): RuntimeError: The following counters failed to be set on mds daemo...
- 03:03 PM Bug #52949 (Resolved): RuntimeError: The following counters failed to be set on mds daemons: {'md...
- ...
- 07:40 AM Bug #52820: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- Patrick Donnelly wrote:
> dongdong tao wrote:
> > Do we know why it can succeed on 16.2.5 but failed on 16.2.6?
> ...
- 01:02 AM Bug #52820: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- dongdong tao wrote:
> Do we know why it can succeed on 16.2.5 but failed on 16.2.6?
The code in MDSMonitor::tick ...
- 02:24 AM Backport #52679 (In Progress): pacific: "cluster [WRN] 1 slow requests" in smoke pacific
10/14/2021
- 10:15 PM Feature #52942 (Resolved): mgr/nfs: add 'nfs cluster config get'
- 05:36 PM Fix #52916: mds,client: formally remove inline data support
- Patrick mentioned that we should probably have the scrubber just uninline any inodes that it detects that are inlined...
- 12:46 PM Bug #51589: mds: crash when journaling during replay
- Partially fixed with https://github.com/ceph/ceph/pull/43382
- 07:07 AM Bug #52820: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- Do we know why it can succeed on 16.2.5 but failed on 16.2.6?
10/13/2021
- 02:04 PM Fix #52916 (In Progress): mds,client: formally remove inline data support
- This feature was added and only half implemented several years ago, and we made a decision to start deprecating it in...
- 05:40 AM Fix #52715: mds: reduce memory usage during scrubbing
- Greg Farnum wrote:
> I'm a bit confused by this ticket; AFAIK scrub is a depth-first search.
IIRC, Zheng changed ...
10/12/2021
- 09:59 PM Fix #52715: mds: reduce memory usage during scrubbing
- I'm a bit confused by this ticket; AFAIK scrub is a depth-first search.
- 06:41 PM Bug #52874 (Fix Under Review): Monitor might crash after upgrade from ceph to 16.2.6
- 06:06 PM Bug #52874: Monitor might crash after upgrade from ceph to 16.2.6
- You can work around this problem by setting the following in ceph.conf (for the mons):...
- 01:41 PM Bug #52874 (Triaged): Monitor might crash after upgrade from ceph to 16.2.6
- 06:02 PM Bug #52820 (Fix Under Review): Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- 04:36 PM Bug #52820 (In Progress): Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- 01:42 PM Bug #52821 (Fix Under Review): qa/xfstest-dev.py: update to include centos stream
10/11/2021
- 12:44 PM Bug #52887 (Resolved): qa: Test failure: test_perf_counters (tasks.cephfs.test_openfiletable.Open...
- The teuthology test: https://pulpito.ceph.com/yuriw-2021-10-02_15:12:58-fs-wip-yuri2-testing-2021-10-01-0902-pacific-...
- 09:22 AM Bug #44139 (New): mds: check all on-disk metadata is versioned
10/09/2021
- 03:18 AM Bug #52876 (Fix Under Review): pacific: cluster [WRN] evicting unresponsive client smithi121 (912...
- It's due to forgetting to shut down the mounter after the test finishes; this was introduced when resolving the conflicts when ba...
- 03:11 AM Bug #52876 (Resolved): pacific: cluster [WRN] evicting unresponsive client smithi121 (9126), afte...
- The teuthology test: https://pulpito.ceph.com/yuriw-2021-10-02_15:12:58-fs-wip-yuri2-testing-2021-10-01-0902-pacific-...
10/08/2021
- 06:40 PM Backport #52875 (Resolved): pacific: qa: test_dirfrag_limit
- https://github.com/ceph/ceph/pull/45565
- 06:38 PM Bug #52606 (Pending Backport): qa: test_dirfrag_limit
- 01:49 PM Bug #52874 (Resolved): Monitor might crash after upgrade from ceph to 16.2.6
- The following assertion might pop up
void FSMap::sanity() const
{
...
if (info.state != MDSMap::STATE_STAND...
- 01:34 PM Backport #52627: pacific: cephfs-mirror: cephfs-mirror daemon status for a particular FS is not s...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43199
merged
- 01:33 PM Backport #52444: pacific: cephfs-mirror: terminating a mirror daemon can cause a crash at times
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43198
merged
- 01:32 PM Backport #52441: pacific: mds: slow performance on parallel rm operations for multiple kclients
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43148
merged
- 10:54 AM Bug #24030 (Closed): ceph-fuse: double dash meaning
- Closing this because:
* In the review, disabling -- is not encouraged.
* When I look at this issue now, this does...