Activity
From 09/19/2021 to 10/18/2021
10/18/2021
- 07:05 PM Backport #52968 (Rejected): pacific: mgr/nfs: add 'nfs cluster config get'
- 07:03 PM Feature #52942 (Pending Backport): mgr/nfs: add 'nfs cluster config get'
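For illustration, a plausible invocation of the command named in the entry above (the cluster name "mynfs" and the exact CLI syntax are assumptions, not taken from the tracker):
    # Print the NFS-Ganesha configuration stored for a named cluster via the mgr/nfs module.
    ceph nfs cluster config get mynfs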
- 01:47 PM Feature #46166 (In Progress): mds: store symlink target as xattr in data pool inode for disaster ...
- 01:44 PM Fix #52916 (In Progress): mds,client: formally remove inline data support
- 12:53 PM Bug #52581: Dangling fs snapshots on data pool after change of directory layout
- Sorry, just noticed the status is set to "Can't reproduce". This is OK.
I would like to help build a reproducer. Fo...
- 12:50 PM Bug #52581: Dangling fs snapshots on data pool after change of directory layout
- Please don't close an issue without providing an actual fix; that you can't reproduce it with a simple test doesn't m...
10/15/2021
- 03:20 PM Backport #52954 (Rejected): pacific: qa/xfstest-dev.py: update to include centos stream
- https://github.com/ceph/ceph/pull/54184
- 03:16 PM Bug #52822 (Resolved): qa: failed pacific install on fs:upgrade
- 03:15 PM Bug #52821 (Pending Backport): qa/xfstest-dev.py: update to include centos stream
- 03:15 PM Backport #52953 (Resolved): octopus: mds: crash when journaling during replay
- https://github.com/ceph/ceph/pull/43842
- 03:15 PM Backport #52952 (Resolved): pacific: mds: crash when journaling during replay
- https://github.com/ceph/ceph/pull/43841
- 03:15 PM Backport #52951 (Rejected): octopus: qa: skip internal metadata directory when scanning ceph debu...
- 03:15 PM Backport #52950 (Rejected): pacific: qa: skip internal metadata directory when scanning ceph debu...
- 03:13 PM Fix #52824 (Pending Backport): qa: skip internal metadata directory when scanning ceph debugfs di...
- 03:12 PM Bug #51589 (Pending Backport): mds: crash when journaling during replay
- 03:10 PM Bug #52949 (Fix Under Review): RuntimeError: The following counters failed to be set on mds daemo...
- 03:03 PM Bug #52949 (Resolved): RuntimeError: The following counters failed to be set on mds daemons: {'md...
- ...
- 07:40 AM Bug #52820: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- Patrick Donnelly wrote:
> dongdong tao wrote:
> > Do we know why it can succeed on 16.2.5 but fails on 16.2.6?
> ...
- 01:02 AM Bug #52820: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- dongdong tao wrote:
> Do we know why it can succeed on 16.2.5 but fails on 16.2.6?
The code in MDSMonitor::tick ...
- 02:24 AM Backport #52679 (In Progress): pacific: "cluster [WRN] 1 slow requests" in smoke pacific
10/14/2021
- 10:15 PM Feature #52942 (Resolved): mgr/nfs: add 'nfs cluster config get'
- 05:36 PM Fix #52916: mds,client: formally remove inline data support
- Patrick mentioned that we should probably have the scrubber just uninline any inodes it detects are inlined...
- 12:46 PM Bug #51589: mds: crash when journaling during replay
- Partially fixed with https://github.com/ceph/ceph/pull/43382
- 07:07 AM Bug #52820: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- Do we know why it can succeed on 16.2.5 but fails on 16.2.6?
10/13/2021
- 02:04 PM Fix #52916 (In Progress): mds,client: formally remove inline data support
- This feature was added and only half implemented several years ago, and we made a decision to start deprecating it in...
- 05:40 AM Fix #52715: mds: reduce memory usage during scrubbing
- Greg Farnum wrote:
> I'm a bit confused by this ticket; AFAIK scrub is a depth-first search.
IIRC, Zheng changed ...
10/12/2021
- 09:59 PM Fix #52715: mds: reduce memory usage during scrubbing
- I'm a bit confused by this ticket; AFAIK scrub is a depth-first search.
- 06:41 PM Bug #52874 (Fix Under Review): Monitor might crash after upgrade from ceph to 16.2.6
- 06:06 PM Bug #52874: Monitor might crash after upgrade from ceph to 16.2.6
- You can get around this problem by setting the following in ceph.conf (for the mons): ...
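The actual option is truncated in the comment above; a hypothetical sketch of the kind of override meant, assuming the workaround was the mon_mds_skip_sanity setting (an assumption, not confirmed by this excerpt):
    # Hypothetical workaround sketch: relax the FSMap sanity assertion on the
    # monitors until the upgrade completes, then drop the override again.
    printf '[mon]\nmon_mds_skip_sanity = true\n' >> /etc/ceph/ceph.conf
    systemctl restart ceph-mon.target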
- 01:41 PM Bug #52874 (Triaged): Monitor might crash after upgrade from ceph to 16.2.6
- 06:02 PM Bug #52820 (Fix Under Review): Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- 04:36 PM Bug #52820 (In Progress): Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- 01:42 PM Bug #52821 (Fix Under Review): qa/xfstest-dev.py: update to include centos stream
10/11/2021
- 12:44 PM Bug #52887 (Resolved): qa: Test failure: test_perf_counters (tasks.cephfs.test_openfiletable.Open...
- The teuthology test: https://pulpito.ceph.com/yuriw-2021-10-02_15:12:58-fs-wip-yuri2-testing-2021-10-01-0902-pacific-...
- 09:22 AM Bug #44139 (New): mds: check all on-disk metadata is versioned
10/09/2021
- 03:18 AM Bug #52876 (Fix Under Review): pacific: cluster [WRN] evicting unresponsive client smithi121 (912...
- It's due to forgetting to shut down the mounter after the test finishes; this was introduced when resolving the conflicts when ba...
- 03:11 AM Bug #52876 (Resolved): pacific: cluster [WRN] evicting unresponsive client smithi121 (9126), afte...
- The teuthology test: https://pulpito.ceph.com/yuriw-2021-10-02_15:12:58-fs-wip-yuri2-testing-2021-10-01-0902-pacific-...
10/08/2021
- 06:40 PM Backport #52875 (Resolved): pacific: qa: test_dirfrag_limit
- https://github.com/ceph/ceph/pull/45565
- 06:38 PM Bug #52606 (Pending Backport): qa: test_dirfrag_limit
- 01:49 PM Bug #52874 (Resolved): Monitor might crash after upgrade from ceph to 16.2.6
- The following assertion might pop up
void FSMap::sanity() const
{
...
if (info.state != MDSMap::STATE_STAND...
- 01:34 PM Backport #52627: pacific: cephfs-mirror: cephfs-mirror daemon status for a particular FS is not s...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43199
merged
- 01:33 PM Backport #52444: pacific: cephfs-mirror: terminating a mirror daemon can cause a crash at times
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43198
merged
- 01:32 PM Backport #52441: pacific: mds: slow performance on parallel rm operations for multiple kclients
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43148
merged
- 10:54 AM Bug #24030 (Closed): ceph-fuse: double dash meaning
- Closing this because:
* In the review, disabling -- is not encouraged.
* When I look at this issue now, this does...
10/07/2021
- 06:42 PM Bug #43960: MDS: incorrectly issues Fc for new opens when there is an existing writer
- Xiubo Li wrote:
> Whenever switching to a different lock state the MDS will try to issue the allowed caps to the cli...
- 01:20 PM Backport #52854 (Resolved): pacific: qa: test_simple failure
- https://github.com/ceph/ceph/pull/50756
- 01:17 PM Bug #52677 (Pending Backport): qa: test_simple failure
- 11:48 AM Feature #1276 (Resolved): client: expose mds partition via virtual xattrs
- Patch was merged for v5.15.
- 09:15 AM Bug #52820: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- I was able to complete the upgrade by switching to version 16.2.5.
After that I tried to upgrade from version 16....
- 06:35 AM Bug #48711 (Need More Info): mds: standby-replay mds abort when replay metablob
- Hi haitao,
I don't see a segmentation fault here and the attached logs don't have more information about the cra...
10/06/2021
- 01:07 PM Bug #52829: cephfs/mirroring: peer_bootstrap import failed with EPERM
- Attaching mgr logs split into two files
- 12:55 PM Bug #52829 (New): cephfs/mirroring: peer_bootstrap import failed with EPERM
- Tried setting up mirroring on vstart cluster as below but peer_bootstrap import failed with EPERM.
#Create vstart ...
- 11:22 AM Bug #52821: qa/xfstest-dev.py: update to include centos stream
- rishabh-2021-10-05_09:25:11-fs-wip-rishabh-vr-run-multiple-cmds-distro-basic-smithi/6423166/...
- 05:46 AM Fix #52824 (Closed): qa: skip internal metadata directory when scanning ceph debugfs directory
- kclient patchset:
https://patchwork.kernel.org/project/ceph-devel/list/?series=556049
introduces meta direc...
- 12:17 AM Cleanup #51390 (Resolved): mgr/volumes/fs/operations/access.py: fix various flake8 issues
- 12:17 AM Cleanup #51407 (Resolved): mgr/volumes/fs/operations/versions/subvolume_attrs.py: fix various fla...
- 12:16 AM Cleanup #51381 (Resolved): mgr/volumes/fs/async_job.py: fix various flake8 issues
- 12:16 AM Cleanup #51403 (Resolved): mgr/volumes/fs/operations/versions/auth_metadata.py: fix various flake...
- 12:15 AM Cleanup #51398 (Resolved): mgr/volumes/fs/operations/subvolume.py: fix various flake8 issues
- 12:15 AM Cleanup #51384 (Resolved): mgr/volumes/fs/vol_spec.py: fix various flake8 issues
- 12:15 AM Backport #52823 (Resolved): pacific: mgr/nfs: add more log messages
- Backport included here:
https://github.com/ceph/ceph/pull/43682
- 12:14 AM Cleanup #51396 (Resolved): mgr/volumes/fs/operations/clone_index.py: fix various flake8 issues
- 12:14 AM Cleanup #51380 (Resolved): mgr/volumes/module.py: fix various flake8 issues
- 12:13 AM Cleanup #51392 (Resolved): mgr/volumes/fs/operations/snapshot_util.py: add extra blank line
- 12:13 AM Bug #52274 (Pending Backport): mgr/nfs: add more log messages
- 12:02 AM Bug #52822 (Fix Under Review): qa: failed pacific install on fs:upgrade
10/05/2021
- 11:59 PM Bug #52822 (Resolved): qa: failed pacific install on fs:upgrade
- https://pulpito.ceph.com/pdonnell-2021-10-04_23:25:27-fs-wip-pdonnell-testing-20211002.163337-distro-basic-smithi/642...
- 06:56 PM Bug #52821 (Pending Backport): qa/xfstest-dev.py: update to include centos stream
- 05:05 PM Bug #52820 (Resolved): Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- I tried to upgrade my Ceph cluster from 15.2.14 to 16.2.6 on my Proxmox 7.0 servers.
After updating the packages I ...
10/02/2021
10/01/2021
- 11:23 PM Cleanup #51402 (Fix Under Review): mgr/volumes/fs/operations/versions/subvolume_base.py: fix vari...
- 10:52 PM Cleanup #51390 (Fix Under Review): mgr/volumes/fs/operations/access.py: fix various flake8 issues
- 10:50 PM Cleanup #51381 (Fix Under Review): mgr/volumes/fs/async_job.py: fix various flake8 issues
- 10:49 PM Cleanup #51407 (Fix Under Review): mgr/volumes/fs/operations/versions/subvolume_attrs.py: fix var...
- 10:48 PM Cleanup #51382 (Fix Under Review): mgr/volumes/fs/async_cloner.py: fix various flake8 issues
- 10:27 PM Cleanup #51403 (Fix Under Review): mgr/volumes/fs/operations/versions/auth_metadata.py: fix vario...
- 10:05 PM Cleanup #51398 (Fix Under Review): mgr/volumes/fs/operations/subvolume.py: fix various flake8 issues
- 09:03 PM Cleanup #51396 (Fix Under Review): mgr/volumes/fs/operations/clone_index.py: fix various flake8 i...
- 08:54 PM Cleanup #51400 (Fix Under Review): mgr/volumes/fs/operations/trash.py: fix various flake8 issues
- 08:25 PM Bug #51630 (Fix Under Review): mgr/snap_schedule: don't throw traceback on non-existent fs
- 08:10 PM Cleanup #51380 (Fix Under Review): mgr/volumes/module.py: fix various flake8 issues
- 09:54 AM Feature #46166: mds: store symlink target as xattr in data pool inode for disaster recovery
- The link in the description is broken. Here is the working link https://docs.ceph.com/en/latest/cephfs/disaster-recov...
- 03:14 AM Cleanup #51392 (Fix Under Review): mgr/volumes/fs/operations/snapshot_util.py: add extra blank line
09/30/2021
- 08:52 PM Cleanup #51392: mgr/volumes/fs/operations/snapshot_util.py: add extra blank line
- This PR fixes it: https://github.com/ceph/ceph/pull/43375
- 04:50 PM Cleanup #51391 (Resolved): mgr/volumes/fs/operations/resolver.py: add extra blank line
- 04:50 PM Cleanup #51387 (Resolved): mgr/volumes/fs/purge_queue.py: add extra blank line
- 01:27 PM Bug #52581 (Can't reproduce): Dangling fs snapshots on data pool after change of directory layout
- Closing this issue based on the previous comment. Please re-open the issue if it happens on supported versions.
- 11:21 AM Bug #52581: Dangling fs snapshots on data pool after change of directory layout
- The ceph version of this issue 'ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)' is no...
- 08:43 AM Bug #51589: mds: crash when journaling during replay
- Venky Shankar wrote:
> ...
> ...
> Patrick suggested that we could defer journaling blocklisted clients in reconne...
09/29/2021
- 12:56 PM Bug #51589: mds: crash when journaling during replay
- I was able to reproduce this in master branch. The crash happens when a standby mds takes over as active and there ar...
- 10:41 AM Bug #52581: Dangling fs snapshots on data pool after change of directory layout
- Hi Frank,
I tried reproducing this issue on the master by changing the root data pool to the new one but couldn't achi...
09/28/2021
- 01:03 AM Cleanup #51391 (Fix Under Review): mgr/volumes/fs/operations/resolver.py: add extra blank line
09/27/2021
- 11:36 PM Backport #51199 (In Progress): octopus: msg: active_connections regression
- 07:15 AM Backport #51199: octopus: msg: active_connections regression
- Backport PR: https://github.com/ceph/ceph/pull/43310
This regression issue is blocking https://tracker.ceph.com/...
- 02:48 PM Cleanup #51387 (Fix Under Review): mgr/volumes/fs/purge_queue.py: add extra blank line
- 01:43 PM Bug #52723 (Fix Under Review): mds: improve mds_bal_fragment_size_max config option
- 01:42 PM Feature #52725 (Fix Under Review): qa: mds_dir_max_entries workunit test case
- 01:41 PM Feature #52720 (Fix Under Review): mds: mds_bal_rank_mask config option
09/24/2021
- 10:30 AM Feature #52725 (Resolved): qa: mds_dir_max_entries workunit test case
- mds_dir_max_entries workunit test case
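A minimal sketch of what such a workunit could look like (the option name comes from the related entries; the limit value, paths, and exact failure behaviour are assumptions):
    #!/bin/sh -ex
    # Hypothetical workunit sketch: set a small per-directory entry limit, then
    # create files until a create fails, showing the limit is actually enforced.
    ceph config set mds mds_dir_max_entries 100
    mkdir -p limit_test
    for i in $(seq 1 200); do
        if ! touch limit_test/file_$i 2>/dev/null; then
            echo "create failed at entry $i, limit enforced"
            exit 0
        fi
    done
    echo "limit was never enforced" >&2
    exit 1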
- 06:04 AM Bug #52723 (Resolved): mds: improve mds_bal_fragment_size_max config option
- Maintain mds_bal_fragment_size_max as a member variable in Server.cc
09/23/2021
- 04:55 PM Feature #51162 (Fix Under Review): mgr/volumes: `fs volume rename` command
- 10:27 AM Feature #52720 (Resolved): mds: mds_bal_rank_mask config option
- Hexadecimal bitmask of the active MDS ranks to rebalance on. The MDS balancer dynamically redistributes subtrees within c...
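As a worked example of the mask semantics described above (the `ceph config set` form is an assumption; the option name comes from this entry): a mask of 0x5 is binary 101, so only ranks 0 and 2 would take part in rebalancing.
    # Hypothetical usage: restrict the balancer to MDS ranks 0 and 2 (0x5 = 0b101).
    ceph config set mds mds_bal_rank_mask 0x5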
- 08:24 AM Fix #52715 (New): mds: reduce memory usage during scrubbing
- Breadth-first search may queue lots of inodes. Change the scrub traversal to depth-first search.
The new scrub code no ...
09/22/2021
- 07:31 PM Backport #50126 (In Progress): octopus: pybind/mgr/volumes: deadlock on async job hangs finisher ...
- 06:01 AM Backport #52632 (In Progress): octopus: mds,client: add flag to MClientSession for reject reason
- 05:52 AM Backport #52633 (In Progress): pacific: mds,client: add flag to MClientSession for reject reason
09/21/2021
- 01:23 PM Bug #52688 (New): mds: possibly corrupted entry in journal (causes replay failure with file syste...
- Failed replay after a standby took over as active. This marks the file system as damaged.
The journal entry for in...
- 11:55 AM Bug #52642 (Fix Under Review): snap scheduler: cephfs snapshot schedule status doesn't list the s...
- 12:50 AM Backport #52680 (Resolved): pacific: Add option in `fs new` command to start rank 0 in failed state
- https://github.com/ceph/ceph/pull/45565
- 12:50 AM Backport #52679 (Resolved): pacific: "cluster [WRN] 1 slow requests" in smoke pacific
- https://github.com/ceph/ceph/pull/43562
- 12:50 AM Backport #52678 (Resolved): pacific: qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnap...
- https://github.com/ceph/ceph/pull/43702
- 12:48 AM Bug #52625 (Pending Backport): qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
- 12:47 AM Bug #52572 (Pending Backport): "cluster [WRN] 1 slow requests" in smoke pacific
- 12:46 AM Feature #51716 (Pending Backport): Add option in `fs new` command to start rank 0 in failed state
- 12:09 AM Bug #52677 (Fix Under Review): qa: test_simple failure
- 12:03 AM Bug #52677 (Resolved): qa: test_simple failure
- ...
09/20/2021
- 04:15 PM Bug #52607 (Duplicate): qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data...
- 01:48 PM Fix #52591: mds: mds_oft_prefetch_dirfrags = false is not qa tested
- Patrick suggested that we have this setting in the thrasher suite.
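For reference, the setting under discussion, toggled the way the thrasher would need to exercise it (a hypothetical illustration; how it gets wired into the thrashing suite is not covered by this entry):
    # Disable open-file-table dirfrag prefetch on the MDS daemons.
    ceph config set mds mds_oft_prefetch_dirfrags false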
- 01:42 PM Bug #52641 (Triaged): snap scheduler: Traceback seen when snapshot schedule remove command is pas...
- 01:41 PM Bug #52642 (Triaged): snap scheduler: cephfs snapshot schedule status doesn't list the snapshot c...
- 01:38 PM Bug #52643 (Triaged): snap scheduler: cephfs snapshot created with schedules stopped on nfs volum...
- 11:40 AM Backport #52629 (In Progress): octopus: pybind/mgr/volumes: first subvolume permissions set perms...
- 11:09 AM Backport #52628 (In Progress): pacific: pybind/mgr/volumes: first subvolume permissions set perms...