Activity
From 01/24/2022 to 02/22/2022
02/22/2022
- 05:03 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- I see this in the current batch of logs (mds.test.cephadm1.xlapqu.log):...
- 03:59 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- FWIW, I also turned up kernel debug logs and did this:...
- 02:27 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- I reproduced this again this morning and gathered the MDS logs:
ceph-post-file: bf0318cc-3e34-4d61-8895-03dfdae86c25
- 01:18 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Thx! I'll try that.
- 01:16 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Venky Shankar wrote:
> Also, I could not reproduce it running generic/070. Would it be possible to share kernel buff...
- 01:06 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Venky Shankar wrote:
> But, I cannot see the unlink request coming in or a failure with -EACCES for the path in ques...
- 09:25 AM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Jeff,
For this unlink request...
- 02:02 PM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Hey Venky,
yes, the workaround fixes my Ceph 13 cluster (until the next restart).
Whether it should be marked a... - 09:43 AM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Niklas,
Were you able to get things to a stable state after following your note https://tracker.ceph.com/issues/54...
- 11:17 AM Bug #48673: High memory usage on standby replay MDS
- Yongseok/Mykola - Patrick is on PTO - I'll try to make progress on this issue.
Yongseok, you mention https://githu... - 09:22 AM Bug #54052 (In Progress): mgr/snap-schedule: scheduled snapshots are not created after ceph-mgr r...
- 04:29 AM Cleanup #54362 (Fix Under Review): client: do not release the global snaprealm until unmounting
- 04:26 AM Cleanup #54362 (Resolved): client: do not release the global snaprealm until unmounting
- The global snaprealm is created and then destroyed immediately every time it is updated.
02/21/2022
- 03:08 PM Bug #54345 (Fix Under Review): mds: try to reset heartbeat when fetching or committing.
- 03:05 PM Bug #54345 (Resolved): mds: try to reset heartbeat when fetching or committing.
- When there are too many dentries to load, the heartbeat may not get a chance to be reset.
- 01:38 PM Bug #54271 (Triaged): mds/OpenFileTable.cc: 777: FAILED ceph_assert(omap_num_objs == num_objs)
- 10:05 AM Backport #54217 (In Progress): pacific: client: client session state stuck in opening and hang al...
- 10:02 AM Backport #54220 (In Progress): pacific: mds: seg fault in expire_recursive
- 09:59 AM Backport #54216 (In Progress): quincy: client: client session state stuck in opening and hang all...
- 09:55 AM Backport #54218 (In Progress): quincy: mds: seg fault in expire_recursive
- 09:10 AM Backport #54336 (Resolved): quincy: mgr/volumes: A deleted subvolumegroup when listed using "ceph...
- https://github.com/ceph/ceph/pull/45165
- 09:10 AM Backport #54335 (Resolved): pacific: mgr/volumes: A deleted subvolumegroup when listed using "cep...
- https://github.com/ceph/ceph/pull/45205
- 09:10 AM Backport #54334 (Rejected): octopus: mgr/volumes: A deleted subvolumegroup when listed using "cep...
- 09:10 AM Backport #54333 (Resolved): quincy: mgr/volumes: File Quota attributes not getting inherited to t...
- https://github.com/ceph/ceph/pull/45165
- 09:10 AM Backport #54332 (Resolved): pacific: mgr/volumes: File Quota attributes not getting inherited to ...
- https://github.com/ceph/ceph/pull/45205
- 09:10 AM Backport #54331 (Rejected): octopus: mgr/volumes: File Quota attributes not getting inherited to ...
- 09:07 AM Bug #54121 (Pending Backport): mgr/volumes: File Quota attributes not getting inherited to the cl...
- 09:07 AM Bug #54099 (Pending Backport): mgr/volumes: A deleted subvolumegroup when listed using "ceph fs s...
02/17/2022
- 05:52 PM Fix #54317 (Pending Backport): qa: add testing in fs:workload for different kinds of subvolumes
- Notably:
- subvolume with isolated namespace
- subvolume with (large) quota
- 09:10 AM Bug #54283 (Fix Under Review): qa/cephfs: is_mounted() depends on a mutable variable
02/16/2022
- 08:22 PM Backport #54234: quincy: qa: use cephadm to provision cephfs for fs:workloads
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44982
merged
- 02:50 PM Backport #54219 (In Progress): octopus: mds: seg fault in expire_recursive
02/15/2022
- 04:20 PM Backport #54160: quincy: mon/MDSMonitor: sanity assert when inline data turned on in MDSMap from ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44909
merged
- 04:19 PM Backport #54123: quincy:mgr/volumes: Failed to create clones if the source snapshot's quota is ex...
- Kotresh Hiremath Ravishankar wrote:
> https://github.com/ceph/ceph/pull/44875
merged
- 12:51 PM Bug #54111 (Fix Under Review): data pool attached to a file system can be attached to another fil...
- 12:37 PM Bug #44100 (Fix Under Review): cephfs rsync kworker high load.
- The patchwork: https://patchwork.kernel.org/project/ceph-devel/list/?series=614517
- 11:17 AM Bug #54285 (New): make stop.sh clear the evicted clients too
- Patch [1] doesn't seem to clear the evicted/blocklisted clients, which means `df` still hangs.
Steps to reproduc...
- 09:37 AM Bug #54271: mds/OpenFileTable.cc: 777: FAILED ceph_assert(omap_num_objs == num_objs)
- Just FYI - the workaround is to remove the mds<>_openfiles objects from the metadata pool.
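For reference, a minimal sketch of that workaround, assuming a single active MDS (rank 0) and a metadata pool named cephfs_metadata (both are placeholders; adjust to your cluster, and only do this while the rank is stopped):
# list the OpenFileTable objects for rank 0
rados -p cephfs_metadata ls | grep '^mds0_openfiles'
# remove them, then restart the MDS
rados -p cephfs_metadata rm mds0_openfiles.0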
- 09:36 AM Bug #54271: mds/OpenFileTable.cc: 777: FAILED ceph_assert(omap_num_objs == num_objs)
- This seems to be happening in nautilus. Should check if it can be hit in master (quincy, etc.).
- 06:15 AM Bug #52641 (Resolved): snap scheduler: Traceback seen when snapshot schedule remove command is pa...
- 05:14 AM Bug #52641: snap scheduler: Traceback seen when snapshot schedule remove command is passed withou...
- Also tried to reproduce the issue after adding a snap-schedule. Still no traceback observed.
[nshelke@fedora build]$ ./b...
- 04:55 AM Bug #52641: snap scheduler: Traceback seen when snapshot schedule remove command is passed withou...
- Tried the command provided in the tracker. No traceback is seen if the command is passed without the required parameters
[nshelk...
- 05:30 AM Bug #54283 (Resolved): qa/cephfs: is_mounted() depends on a mutable variable
- This can lead to bugs when the variable's value is stale or incorrect. It's safer to actually check if the file system is m...
02/14/2022
- 03:49 PM Bug #54271 (Triaged): mds/OpenFileTable.cc: 777: FAILED ceph_assert(omap_num_objs == num_objs)
- See: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/NPRA7OGSNQUI32WPZQMKH3SJNPPOSBRK/?sort=date
M...
- 09:26 AM Backport #54196 (In Progress): quincy: mds: mds_oft_prefetch_dirfrags default to false
- 09:24 AM Backport #54194 (In Progress): pacific: mds: mds_oft_prefetch_dirfrags default to false
- 09:21 AM Backport #54195 (In Progress): octopus: mds: mds_oft_prefetch_dirfrags default to false
- 04:48 AM Backport #54257: quincy: mgr/volumes: uid/gid of the clone is incorrect
- Kotresh, please post the backport.
- 04:47 AM Backport #54256: pacific: mgr/volumes: uid/gid of the clone is incorrect
- Kotresh, please post the backport.
02/11/2022
- 11:38 AM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Thanks Dan, I will add it to the config of the Ceph 16 cluster.
Unfortunately I can't use it for the source cluste...
- 08:26 AM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Niklas, you don't have to wait for that PR -- just do `ceph config set mds mds_oft_prefetch_dirfrags false` now.
F...
- 02:11 AM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Niklas Hambuechen wrote:
> Today I had a multi-hour CephFS outage due to a bug that I believe was discussed in vario...
- 01:31 AM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Thanks! That sounds like it might, yes.
From that, it seems related bugs are (I think I don't have Redmine permissio...
- 01:16 AM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- This PR may help you https://github.com/ceph/ceph/pull/44667.
- 12:54 AM Bug #54253 (New): Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Today I had a multi-hour CephFS outage due to a bug that I believe was discussed in various mailing lists and posts a...
- 07:45 AM Backport #54257 (Resolved): quincy: mgr/volumes: uid/gid of the clone is incorrect
- https://github.com/ceph/ceph/pull/45165
- 07:45 AM Backport #54256 (Resolved): pacific: mgr/volumes: uid/gid of the clone is incorrect
- https://github.com/ceph/ceph/pull/45205
- 07:41 AM Bug #54066 (Pending Backport): mgr/volumes: uid/gid of the clone is incorrect
- 02:16 AM Bug #44100 (In Progress): cephfs rsync kworker high load.
- Will work on it.
- 01:47 AM Bug #52438 (Fix Under Review): qa: ffsb timeout
02/10/2022
- 05:36 PM Backport #54234 (In Progress): quincy: qa: use cephadm to provision cephfs for fs:workloads
- 12:48 PM Backport #54242 (In Progress): octopus: mds: clients can send a "new" op (file operation) and cra...
- 12:42 PM Backport #54242 (Resolved): octopus: mds: clients can send a "new" op (file operation) and crash ...
- https://github.com/ceph/ceph/pull/44976
- 12:46 PM Backport #54241 (In Progress): pacific: mds: clients can send a "new" op (file operation) and cra...
- 12:41 PM Backport #54241 (Resolved): pacific: mds: clients can send a "new" op (file operation) and crash ...
- https://github.com/ceph/ceph/pull/44975
- 06:46 AM Bug #54066 (Fix Under Review): mgr/volumes: uid/gid of the clone is incorrect
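For context, a hedged sketch of how the uid/gid mismatch in #54066 can be reproduced (volume, subvolume, snapshot, and clone names below are made up):
# create a subvolume with explicit ownership, snapshot it, and clone the snapshot
ceph fs subvolume create vol1 sub1 --uid 1000 --gid 1000
ceph fs subvolume snapshot create vol1 sub1 snap1
ceph fs subvolume snapshot clone vol1 sub1 snap1 clone1
# the clone's uid/gid should match the source subvolume (1000/1000)
ceph fs subvolume info vol1 clone1 | grep -E '"uid"|"gid"'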
- 05:47 AM Bug #54066: mgr/volumes: uid/gid of the clone is incorrect
- This is the regression caused by commit 18b85c53a (https://tracker.ceph.com/issues/53848)
Modifying the subject ac...
- 05:08 AM Bug #52982 (Fix Under Review): client: Inode::hold_caps_until should be a time from a monotonic c...
- 02:58 AM Bug #44100: cephfs rsync kworker high load.
- Hmm. IIRC the equivalent code on the user space client has much less trouble because the SnapRealm has a list of inod...
02/09/2022
- 08:21 PM Bug #54237 (Resolved): pybind/cephfs: Add mapping for Ernno 13:Permission Denied and adding path ...
- opendir() in src/pybind/cephfs/cephfs.pyx returns a generic cephfs.OSError: Errno 13 for this tracker issue https://t...
- 06:51 PM Bug #54236 (Resolved): qa/cephfs: change default timeout from 900 secs to 300
- 15 minutes is unnecessarily large as a default timeout for a command. Not having to wait unnecessarily on a...
- 02:41 PM Backport #54234 (Resolved): quincy: qa: use cephadm to provision cephfs for fs:workloads
- https://github.com/ceph/ceph/pull/44982
- 02:35 PM Feature #51333 (Pending Backport): qa: use cephadm to provision cephfs for fs:workloads
- 08:59 AM Bug #52438: qa: ffsb timeout
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Recent failures:
> >
> > - https://pulpito.ceph.com/vshankar-2022-02-... - 08:14 AM Bug #52438: qa: ffsb timeout
- Venky Shankar wrote:
> Recent failures:
>
> - https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshank...
- 05:54 AM Bug #52438: qa: ffsb timeout
- Recent failures:
- https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-t...
- 06:12 AM Backport #54223 (Rejected): pacific: mgr/volumes: `fs volume rename` command
- https://github.com/ceph/ceph/pull/45542
- 06:12 AM Backport #54222 (Rejected): octopus: mgr/volumes: `fs volume rename` command
- 06:11 AM Backport #54221 (Resolved): quincy: mgr/volumes: `fs volume rename` command
- https://github.com/ceph/ceph/pull/45541
- 06:10 AM Backport #54220 (Resolved): pacific: mds: seg fault in expire_recursive
- https://github.com/ceph/ceph/pull/45099
- 06:10 AM Backport #54219 (Resolved): octopus: mds: seg fault in expire_recursive
- https://github.com/ceph/ceph/pull/45055
- 06:10 AM Backport #54218 (In Progress): quincy: mds: seg fault in expire_recursive
- https://github.com/ceph/ceph/pull/45097
- 06:10 AM Backport #54217 (Resolved): pacific: client: client session state stuck in opening and hang all t...
- https://github.com/ceph/ceph/pull/45100
- 06:10 AM Backport #54216 (Resolved): quincy: client: client session state stuck in opening and hang all th...
- https://github.com/ceph/ceph/pull/45098
- 06:09 AM Bug #53805 (Pending Backport): mds: seg fault in expire_recursive
- 06:07 AM Bug #53911 (Pending Backport): client: client session state stuck in opening and hang all the time
- 06:05 AM Feature #51162 (Pending Backport): mgr/volumes: `fs volume rename` command
- 06:01 AM Feature #53903 (Resolved): mount: add option to support fake mounts
02/08/2022
- 07:32 PM Bug #54107 (Fix Under Review): kclient: hang during umount
- 05:14 PM Feature #54205 (New): hard links: explore using a "referent" inode whenever hard linking
- https://docs.ceph.com/en/latest/dev/cephfs-snapshots/#hard-links
Part of the reason we have to snapshot hard linke...
- 01:10 PM Backport #54196 (Resolved): quincy: mds: mds_oft_prefetch_dirfrags default to false
- https://github.com/ceph/ceph/pull/45017
- 01:10 PM Backport #54195 (Resolved): octopus: mds: mds_oft_prefetch_dirfrags default to false
- https://github.com/ceph/ceph/pull/45015
- 01:10 PM Backport #54194 (Resolved): pacific: mds: mds_oft_prefetch_dirfrags default to false
- https://github.com/ceph/ceph/pull/45016
- 01:07 PM Bug #53952 (Pending Backport): mds: mds_oft_prefetch_dirfrags default to false
02/07/2022
- 08:54 AM Bug #52641: snap scheduler: Traceback seen when snapshot schedule remove command is passed withou...
- Nikhil, please take a look.
- 04:52 AM Bug #54066: mgr/volumes: uid/gid of the clone is incorrect
- https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/...
- 02:53 AM Bug #53859: qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Patrick Donnelly wrote:
> > > That explains the current behavior but wh...
02/05/2022
- 02:19 PM Backport #54161 (In Progress): pacific: mon/MDSMonitor: sanity assert when inline data turned on ...
- 12:40 PM Backport #54161 (Resolved): pacific: mon/MDSMonitor: sanity assert when inline data turned on in ...
- https://github.com/ceph/ceph/pull/44910
- 02:17 PM Backport #54160 (In Progress): quincy: mon/MDSMonitor: sanity assert when inline data turned on i...
- 12:40 PM Backport #54160 (Resolved): quincy: mon/MDSMonitor: sanity assert when inline data turned on in M...
- https://github.com/ceph/ceph/pull/44909
- 12:35 PM Bug #54081 (Pending Backport): mon/MDSMonitor: sanity assert when inline data turned on in MDSMap...
02/04/2022
- 10:02 PM Bug #54107: kclient: hang during umount
- Jeff Layton wrote:
> It really looks like we just never got a FLUSH_ACK for this cap:
>
> [...]
>
> It really ...
- 06:39 PM Bug #54107: kclient: hang during umount
- Venky, Patrick: any thoughts on the above? Either way, it might be nice to revise the dout() messages in Locker::han...
- 09:16 PM Bug #44100: cephfs rsync kworker high load.
- I really don't see a great fix for this anywhere. So much of this work requires holding coarse-grained mutexes that I...
- 08:25 PM Bug #54106: kclient: hang during workunit cleanup
- 08:25 PM Bug #54106: kclient: hang during workunit cleanup
- First, note that this is an 8.4 kernel, which is about a year old. Some of this code has seen s
So, 7 mins between ...
- 08:20 AM Bug #54111: data pool attached to a file system can be attached to another file system
- Also, a metadata pool can't be reused as a metadata pool for another file-system since there's a check to ensure that...
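For illustration, a hedged sketch of the data-pool side of #54111 (fs and pool names are placeholders); per the bug report, the last command is incorrectly accepted even though the pool already belongs to fs1:
ceph fs flag set enable_multiple true
ceph fs new fs1 fs1_meta fs1_data
ceph fs new fs2 fs2_meta fs2_data
# fs1_data is already attached to fs1, so this should be rejected
ceph fs add_data_pool fs2 fs1_data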
02/03/2022
- 03:26 PM Bug #54111 (Triaged): data pool attached to a file system can be attached to another file system
- 03:24 PM Backport #52953 (Resolved): octopus: mds: crash when journaling during replay
- 10:09 AM Backport #54123 (In Progress): quincy:mgr/volumes: Failed to create clones if the source snapshot...
- 10:07 AM Backport #54123 (Resolved): quincy:mgr/volumes: Failed to create clones if the source snapshot's ...
- https://github.com/ceph/ceph/pull/44875
- 07:06 AM Bug #54121 (Fix Under Review): mgr/volumes: File Quota attributes not getting inherited to the cl...
- 06:45 AM Bug #54121 (In Progress): mgr/volumes: File Quota attributes not getting inherited to the cloned ...
- 06:45 AM Bug #54121 (Resolved): mgr/volumes: File Quota attributes not getting inherited to the cloned volume
- File Quota attributes not getting inherited to the cloned volume
Version-Release number of selected component (if ...
- 07:00 AM Bug #54049 (Fix Under Review): ceph-fuse: If nonroot user runs ceph-fuse mount on then path is no...
02/02/2022
- 05:27 PM Bug #54064: pacific: qa: mon assertion failure during upgrade
- https://github.com/ceph/ceph/pull/44840 merged
- 06:47 AM Bug #54111 (Resolved): data pool attached to a file system can be attached to another file system
- ...
02/01/2022
- 08:12 PM Bug #54107: kclient: hang during umount
- It really looks like we just never got a FLUSH_ACK for this cap:...
- 06:56 PM Bug #54107 (In Progress): kclient: hang during umount
- 06:48 PM Bug #54107: kclient: hang during umount
- Maybe related on testing branch: /ceph/teuthology-archive/pdonnell-2022-01-28_01:59:55-fs:workload-wip-pdonnell-testi...
- 06:34 PM Bug #54107 (Resolved): kclient: hang during umount
- ...
- 06:52 PM Bug #54108 (Pending Backport): qa: iogen workunit: "The following counters failed to be set on md...
- https://pulpito.ceph.com/pdonnell-2022-01-29_01:47:41-fs:workload-wip-pdonnell-testing-20220127.171526-distro-default...
- 06:35 PM Bug #54106: kclient: hang during workunit cleanup
- May be related to #44100, per Jeff.
- 06:33 PM Bug #54106 (Duplicate): kclient: hang during workunit cleanup
- ...
- 03:48 PM Backport #52953: octopus: mds: crash when journaling during replay
- This is merged
- 01:55 PM Bug #46671: nautilus:tasks/cfuse_workunit_suites_fsstress: "kernel: watchdog: BUG: soft lockup - ...
- https://tracker.ceph.com/issues/46284
- 01:55 PM Bug #46671 (Duplicate): nautilus:tasks/cfuse_workunit_suites_fsstress: "kernel: watchdog: BUG: so...
- 11:42 AM Bug #54099 (Fix Under Review): mgr/volumes: A deleted subvolumegroup when listed using "ceph fs s...
- 11:36 AM Bug #54099 (Resolved): mgr/volumes: A deleted subvolumegroup when listed using "ceph fs subvolume...
- Steps :
--------
* Created a subvolume group with pool_layout option.
* Created a few subvolumes, mounted the subvol...
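A hedged sketch of those steps with placeholder names (the pool must be added as a data pool before it can be used as a group layout):
ceph osd pool create group_pool
ceph fs add_data_pool vol1 group_pool
ceph fs subvolumegroup create vol1 group1 --pool_layout group_pool
# ... create/mount/remove subvolumes in the group as described above ...
ceph fs subvolumegroup rm vol1 group1
# a removed group should no longer show up in the listing
ceph fs subvolumegroup ls vol1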
01/31/2022
- 11:06 PM Bug #54081 (Fix Under Review): mon/MDSMonitor: sanity assert when inline data turned on in MDSMap...
- 07:05 PM Bug #54081 (Resolved): mon/MDSMonitor: sanity assert when inline data turned on in MDSMap from v1...
- https://www.spinics.net/lists/ceph-users/msg70110.html
- 04:42 PM Bug #53857: qa: fs:upgrade test fails mds count check
- Same test seems to fail in the rados suite too:
/a/yuriw-2022-01-24_17:43:02-rados-wip-yuri2-testing-2022-01-21-0949...
- 02:29 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Logfiles are in a tarball at:
ceph-post-file: 20043459-1412-4880-8844-8f575228d1fb
- 02:16 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- ...
- 01:41 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Jeff Layton wrote:
> Yes. I have max_mds set to 3, and I have 4 per fs (with 1 standby).
>
> The good news is tha...
- 01:48 PM Backport #53994: quincy: qa: begin grepping kernel logs for kclient warnings/failures to fail a test
- https://github.com/ceph/teuthology/pull/1666
- 01:48 PM Backport #53994 (Rejected): quincy: qa: begin grepping kernel logs for kclient warnings/failures ...
- Fix is in teuthology.
- 01:45 PM Bug #53996 (Triaged): qa: update fs:upgrade tasks to upgrade from pacific instead of octopus, or ...
- 01:43 PM Bug #54017 (Triaged): Problem with ceph fs snapshot mirror and read-only folders
- 12:40 PM Bug #48473 (Resolved): fs perf stats command crashes
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:38 PM Bug #51705 (Resolved): qa: tasks.cephfs.fuse_mount:mount command failed
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:37 PM Feature #52491 (Resolved): mds: add max_mds_entries_per_dir config option
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:35 PM Bug #53726 (Resolved): mds: crash when `ceph tell mds.0 dump tree ''`
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:34 PM Bug #53862 (Resolved): mds: remove the duplicated or incorrect respond when the pool is full
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:23 PM Backport #53908 (Resolved): pacific: mds: remove the duplicated or incorrect respond when the poo...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/44623
m...
- 12:23 PM Backport #53860 (Resolved): pacific: mds: crash when `ceph tell mds.0 dump tree ''`
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/44622
m...
- 12:22 PM Backport #53861 (Resolved): pacific: qa: tasks.cephfs.fuse_mount:mount command failed
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/44621
m...
- 12:22 PM Backport #53864 (Resolved): pacific: mds: FAILED ceph_assert(mut->is_wrlocked(&pin->filelock))
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/44620
m...
- 12:22 PM Backport #53777 (Resolved): pacific: fs perf stats command crashes
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/44516
m...
- 12:22 PM Backport #53736 (Resolved): pacific: mds: recursive scrub does not trigger stray reintegration
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/44514
m...
- 12:22 PM Backport #52631 (Resolved): pacific: mds: add max_mds_entries_per_dir config option
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/44512
m...
- 12:08 PM Bug #44100: cephfs rsync kworker high load.
- Some thoughts...
This is a pretty nasty problem, in that the MDS is essentially handing the client an unbounded am...
- 09:55 AM Bug #54066: mgr/volumes: uid/gid of the clone is incorrect
- Another instance: https://pulpito.ceph.com/vshankar-2022-01-21_07:36:21-fs-wip-vshankar-fscrypt-20220121-095846-testi...
- 09:53 AM Bug #54066 (Resolved): mgr/volumes: uid/gid of the clone is incorrect
- https://pulpito.ceph.com/vshankar-2022-01-27_12:23:50-fs-wip-vshankar-fscrypt-20220121-095846-testing-default-smithi/...
- 07:26 AM Bug #54064 (Resolved): pacific: qa: mon assertion failure during upgrade
- ...
01/28/2022
- 01:28 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- FWIW none of our users have noticed this -- all have path/namespace restricted caps. (octopus multi-mds, pacific sing...
- 12:34 PM Bug #54052: mgr/snap-schedule: scheduled snapshots are not created after ceph-mgr restart
- 12:34 PM Bug #54052: mgr/snap-schedule: scheduled snapshots are not created after ceph-mgr restart
- Milind, please take a look.
- 12:34 PM Bug #54052 (Resolved): mgr/snap-schedule: scheduled snapshots are not created after ceph-mgr restart
- From ceph-user - https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/SLR3JIGNUV3TNMRJKPSEZUXJK7XBA3HC/
...
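A hedged sketch of the scenario with example values (the path, schedule, and mgr restart method are placeholders):
ceph mgr module enable snap_schedule
ceph fs snap-schedule add / 1h
ceph fs snap-schedule status /
# restart the active mgr (e.g. fail it over), then check again
ceph mgr fail
# after the restart, the schedule should still be active and snapshots should keep being created
ceph fs snap-schedule status /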
01/27/2022
- 09:53 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Yes. I have max_mds set to 3, and I have 4 per fs (with 1 standby).
The good news is that this seems to be pretty ...
- 09:14 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- This issue should be pretty apparent from MDS logs where we can trace out the incoming client requests.
Jeff, I th...
- 05:48 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- The MDS I'm testing against has the latest fscrypt patchset based on top of this commit from ceph mainline:
9e...
- 05:40 PM Bug #54046 (Resolved): unaccessible dentries after fsstress run with namespace-restricted caps
- I did an xfstests run in a subvolume with namespace restricted caps. Test generic/070 does an fsstress run, and then ...
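For context, a hedged example of the kind of namespace-restricted caps involved; the client name, path, pool, and RADOS namespace below are placeholders:
ceph auth get-or-create client.fstest \
  mon 'allow r' \
  mds 'allow rw path=/volumes/_nogroup/sub1' \
  osd 'allow rw pool=cephfs_data namespace=fsvolumens_sub1'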
- 06:54 PM Bug #54049 (Resolved): ceph-fuse: If nonroot user runs ceph-fuse mount on then path is not expect...
- As per the documentation, the ceph-fuse command requires superuser privileges to mount CephFS.
If a non-root user tries to mount ...
- 08:00 AM Backport #53947 (In Progress): octopus: mgr/volumes: Failed to create clones if the source snapsh...
- 07:59 AM Backport #51201 (In Progress): octopus: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
01/26/2022
- 09:13 AM Bug #54017 (Duplicate): Problem with ceph fs snapshot mirror and read-only folders
- I want to mirror a snapshot in Ceph v16.2.6 deployed with cephadm using the stock quay.io images. My source file syst...
01/25/2022
- 07:54 PM Backport #53714: pacific: mds: fails to reintegrate strays if destdn's directory is full (ENOSPC)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44513
merged
- 04:04 PM Backport #53912: pacific: qa: fs:upgrade test fails mds count check
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44639
merged
- 04:03 PM Backport #53908: pacific: mds: remove the duplicated or incorrect respond when the pool is full
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44623
merged
- 04:03 PM Backport #53860: pacific: mds: crash when `ceph tell mds.0 dump tree ''`
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44622
merged
- 04:02 PM Backport #53861: pacific: qa: tasks.cephfs.fuse_mount:mount command failed
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44621
merged
- 04:01 PM Backport #53864: pacific: mds: FAILED ceph_assert(mut->is_wrlocked(&pin->filelock))
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44620
merged
- 04:00 PM Backport #53777: pacific: fs perf stats command crashes
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44516
merged
- 04:00 PM Backport #53736: pacific: mds: recursive scrub does not trigger stray reintegration
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44514
merged
- 03:59 PM Backport #52631: pacific: mds: add max_mds_entries_per_dir config option
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44512
merged
- 07:39 AM Bug #53911: client: client session state stuck in opening and hang all the time
- Formatting for readability
Version: 14.2.5
MDS: single MDS
Description:
Recently, there has been an inconsistent ...
01/24/2022
- 02:47 PM Bug #53996 (Resolved): qa: update fs:upgrade tasks to upgrade from pacific instead of octopus, or...
- Yearly (by release) update...
- 02:36 PM Backport #53995 (Rejected): octopus: qa: begin grepping kernel logs for kclient warnings/failures...
- 02:36 PM Backport #53994 (Rejected): quincy: qa: begin grepping kernel logs for kclient warnings/failures ...
- 02:36 PM Backport #53993 (Rejected): pacific: qa: begin grepping kernel logs for kclient warnings/failures...
- 02:33 PM Feature #50150 (Pending Backport): qa: begin grepping kernel logs for kclient warnings/failures t...
- 01:15 PM Bug #53859: qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
- Xiubo Li wrote:
> Patrick Donnelly wrote:
> > That explains the current behavior but why did this test not fail in ...
- 11:11 AM Backport #53948 (In Progress): pacific: mgr/volumes: Failed to create clones if the source snapsh...
- 10:54 AM Feature #53949 (Fix Under Review): mgr/volumes: Show subvolume snapshot clone progress