Activity
From 09/07/2021 to 10/06/2021
10/06/2021
- 01:07 PM Bug #52829: cephfs/mirroring: peer_bootstrap import failed with EPERM
- Attaching mgr logs split into two files
- 12:55 PM Bug #52829 (New): cephfs/mirroring: peer_bootstrap import failed with EPERM
- Tried setting up mirroring on vstart cluster as below but peer_bootstrap import failed with EPERM.
#Create vstart ...
- 11:22 AM Bug #52821: qa/xfstest-dev.py: update to include centos stream
- rishabh-2021-10-05_09:25:11-fs-wip-rishabh-vr-run-multiple-cmds-distro-basic-smithi/6423166/...
- 05:46 AM Fix #52824 (Closed): qa: skip internal metadata directory when scanning ceph debugfs directory
- kclient patchset:
https://patchwork.kernel.org/project/ceph-devel/list/?series=556049
introduces meta direc...
- 12:17 AM Cleanup #51390 (Resolved): mgr/volumes/fs/operations/access.py: fix various flake8 issues
- 12:17 AM Cleanup #51407 (Resolved): mgr/volumes/fs/operations/versions/subvolume_attrs.py: fix various fla...
- 12:16 AM Cleanup #51381 (Resolved): mgr/volumes/fs/async_job.py: fix various flake8 issues
- 12:16 AM Cleanup #51403 (Resolved): mgr/volumes/fs/operations/versions/auth_metadata.py: fix various flake...
- 12:15 AM Cleanup #51398 (Resolved): mgr/volumes/fs/operations/subvolume.py: fix various flake8 issues
- 12:15 AM Cleanup #51384 (Resolved): mgr/volumes/fs/vol_spec.py: fix various flake8 issues
- 12:15 AM Backport #52823 (Resolved): pacific: mgr/nfs: add more log messages
- Backport included here:
https://github.com/ceph/ceph/pull/43682
- 12:14 AM Cleanup #51396 (Resolved): mgr/volumes/fs/operations/clone_index.py: fix various flake8 issues
- 12:14 AM Cleanup #51380 (Resolved): mgr/volumes/module.py: fix various flake8 issues
- 12:13 AM Cleanup #51392 (Resolved): mgr/volumes/fs/operations/snapshot_util.py: add extra blank line
- 12:13 AM Bug #52274 (Pending Backport): mgr/nfs: add more log messages
- 12:02 AM Bug #52822 (Fix Under Review): qa: failed pacific install on fs:upgrade
10/05/2021
- 11:59 PM Bug #52822 (Resolved): qa: failed pacific install on fs:upgrade
- https://pulpito.ceph.com/pdonnell-2021-10-04_23:25:27-fs-wip-pdonnell-testing-20211002.163337-distro-basic-smithi/642...
- 06:56 PM Bug #52821 (Pending Backport): qa/xfstest-dev.py: update to include centos stream
- 05:05 PM Bug #52820 (Resolved): Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
- I tried to upgrade my Ceph cluster from 15.2.14 to 16.2.6 on my Proxmox 7.0 servers.
After updating the packages I ...
10/02/2021
10/01/2021
- 11:23 PM Cleanup #51402 (Fix Under Review): mgr/volumes/fs/operations/versions/subvolume_base.py: fix vari...
- 10:52 PM Cleanup #51390 (Fix Under Review): mgr/volumes/fs/operations/access.py: fix various flake8 issues
- 10:50 PM Cleanup #51381 (Fix Under Review): mgr/volumes/fs/async_job.py: fix various flake8 issues
- 10:49 PM Cleanup #51407 (Fix Under Review): mgr/volumes/fs/operations/versions/subvolume_attrs.py: fix var...
- 10:48 PM Cleanup #51382 (Fix Under Review): mgr/volumes/fs/async_cloner.py: fix various flake8 issues
- 10:27 PM Cleanup #51403 (Fix Under Review): mgr/volumes/fs/operations/versions/auth_metadata.py: fix vario...
- 10:05 PM Cleanup #51398 (Fix Under Review): mgr/volumes/fs/operations/subvolume.py: fix various flake8 issues
- 09:03 PM Cleanup #51396 (Fix Under Review): mgr/volumes/fs/operations/clone_index.py: fix various flake8 i...
- 08:54 PM Cleanup #51400 (Fix Under Review): mgr/volumes/fs/operations/trash.py: fix various flake8 issues
- 08:25 PM Bug #51630 (Fix Under Review): mgr/snap_schedule: don't throw traceback on non-existent fs
- 08:10 PM Cleanup #51380 (Fix Under Review): mgr/volumes/module.py: fix various flake8 issues
- 09:54 AM Feature #46166: mds: store symlink target as xattr in data pool inode for disaster recovery
- The link in the description is broken. Here is the working link https://docs.ceph.com/en/latest/cephfs/disaster-recov...
- 03:14 AM Cleanup #51392 (Fix Under Review): mgr/volumes/fs/operations/snapshot_util.py: add extra blank line
09/30/2021
- 08:52 PM Cleanup #51392: mgr/volumes/fs/operations/snapshot_util.py: add extra blank line
- This PR fixes it: https://github.com/ceph/ceph/pull/43375
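For context, the "add extra blank line" cleanups in this batch are flake8 E302 fixes (two blank lines required between top-level definitions). An illustrative shape, with hypothetical function names rather than the actual mgr/volumes source:

```python
# Illustrative only -- not the actual mgr/volumes code. flake8 E302
# ("expected 2 blank lines") is satisfied by the two blank lines
# separating the top-level definitions below.

def snapshot_name(prefix, n):
    return "{}_{}".format(prefix, n)


def snapshot_path(root, name):
    # The two blank lines above this def are the whole fix.
    return "{}/.snap/{}".format(root, name)

print(snapshot_path("/vol", snapshot_name("snap", 1)))  # /vol/.snap/snap_1
```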
- 04:50 PM Cleanup #51391 (Resolved): mgr/volumes/fs/operations/resolver.py: add extra blank line
- 04:50 PM Cleanup #51387 (Resolved): mgr/volumes/fs/purge_queue.py: add extra blank line
- 01:27 PM Bug #52581 (Can't reproduce): Dangling fs snapshots on data pool after change of directory layout
- Closing this issue based on previous comment. Please re-open the issue if it happens on the supported versions.
- 11:21 AM Bug #52581: Dangling fs snapshots on data pool after change of directory layout
- The ceph version of this issue 'ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)' is no...
- 08:43 AM Bug #51589: mds: crash when journaling during replay
- Venky Shankar wrote:
> ...
> ...
> Patrick suggested that we could defer journaling blocklisted clients in reconne...
09/29/2021
- 12:56 PM Bug #51589: mds: crash when journaling during replay
- I was able to reproduce this in master branch. The crash happens when a standby mds takes over as active and there ar...
- 10:41 AM Bug #52581: Dangling fs snapshots on data pool after change of directory layout
- Hi Frank,
I tried reproducing this issue on the master by changing root data pool to the new one but couldn't achi...
09/28/2021
- 01:03 AM Cleanup #51391 (Fix Under Review): mgr/volumes/fs/operations/resolver.py: add extra blank line
09/27/2021
- 11:36 PM Backport #51199 (In Progress): octopus: msg: active_connections regression
- 07:15 AM Backport #51199: octopus: msg: active_connections regression
- Backport PR: https://github.com/ceph/ceph/pull/43310
This regression issue is blocking https://tracker.ceph.com/...
- 02:48 PM Cleanup #51387 (Fix Under Review): mgr/volumes/fs/purge_queue.py: add extra blank line
- 01:43 PM Bug #52723 (Fix Under Review): mds: improve mds_bal_fragment_size_max config option
- 01:42 PM Feature #52725 (Fix Under Review): qa: mds_dir_max_entries workunit test case
- 01:41 PM Feature #52720 (Fix Under Review): mds: mds_bal_rank_mask config option
09/24/2021
- 10:30 AM Feature #52725 (Resolved): qa: mds_dir_max_entries workunit test case
- mds_dir_max_entries workunit test case
- 06:04 AM Bug #52723 (Resolved): mds: improve mds_bal_fragment_size_max config option
- Maintain mds_bal_fragment_size_max as a member variable in Server.cc
09/23/2021
- 04:55 PM Feature #51162 (Fix Under Review): mgr/volumes: `fs volume rename` command
- 10:27 AM Feature #52720 (Resolved): mds: mds_bal_rank_mask config option
- Hexadecimal bitmask of the active MDS ranks to rebalance on. The MDS balancer dynamically redistributes subtrees within c...
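A small sketch of how such a hex rank mask would select ranks; the helper is hypothetical and not the actual Ceph implementation:

```python
# Hypothetical helper (not Ceph code) showing how a hexadecimal rank
# mask such as the proposed mds_bal_rank_mask selects the MDS ranks
# that participate in rebalancing: bit r set -> rank r included.
def ranks_in_mask(mask_hex, max_ranks=64):
    mask = int(mask_hex, 16)  # accepts "0x5" or "5"
    return [r for r in range(max_ranks) if mask & (1 << r)]

print(ranks_in_mask("0x5"))  # [0, 2] -> only ranks 0 and 2 rebalance
print(ranks_in_mask("0xf"))  # [0, 1, 2, 3]
```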
- 08:24 AM Fix #52715 (New): mds: reduce memory usage during scrubbing
- Breadth-first search may queue lots of inodes. Change the scrub traversal to depth-first search.
The new scrub code no ...
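The memory argument can be illustrated with a toy traversal (illustrative Python, not the MDS ScrubStack code):

```python
# Toy traversal: on a wide tree a BFS queue can hold a whole level at
# once, while a DFS stack holds roughly one root-to-leaf path plus
# siblings -- which is why the fix above switches scrub to DFS.
def max_frontier(children, root, dfs):
    """Peak number of pending nodes while walking the tree."""
    frontier = [root]
    peak = 1
    while frontier:
        node = frontier.pop() if dfs else frontier.pop(0)
        frontier.extend(children.get(node, []))
        peak = max(peak, len(frontier))
    return peak

# Complete binary tree with 10 levels of internal nodes:
tree = {n: [2 * n + 1, 2 * n + 2] for n in range(2 ** 10 - 1)}
print(max_frontier(tree, 0, dfs=False))  # 1024 (an entire level queued)
print(max_frontier(tree, 0, dfs=True))   # 11   (one path plus siblings)
```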
09/22/2021
- 07:31 PM Backport #50126 (In Progress): octopus: pybind/mgr/volumes: deadlock on async job hangs finisher ...
- 06:01 AM Backport #52632 (In Progress): octopus: mds,client: add flag to MClientSession for reject reason
- 05:52 AM Backport #52633 (In Progress): pacific: mds,client: add flag to MClientSession for reject reason
09/21/2021
- 01:23 PM Bug #52688 (New): mds: possibly corrupted entry in journal (causes replay failure with file syste...
- Failed replay after a standby took over as active. This marks the file system as damaged.
The journal entry for in...
- 11:55 AM Bug #52642 (Fix Under Review): snap scheduler: cephfs snapshot schedule status doesn't list the s...
- 12:50 AM Backport #52680 (Resolved): pacific: Add option in `fs new` command to start rank 0 in failed state
- https://github.com/ceph/ceph/pull/45565
- 12:50 AM Backport #52679 (Resolved): pacific: "cluster [WRN] 1 slow requests" in smoke pacific
- https://github.com/ceph/ceph/pull/43562
- 12:50 AM Backport #52678 (Resolved): pacific: qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnap...
- https://github.com/ceph/ceph/pull/43702
- 12:48 AM Bug #52625 (Pending Backport): qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
- 12:47 AM Bug #52572 (Pending Backport): "cluster [WRN] 1 slow requests" in smoke pacific
- 12:46 AM Feature #51716 (Pending Backport): Add option in `fs new` command to start rank 0 in failed state
- 12:09 AM Bug #52677 (Fix Under Review): qa: test_simple failure
- 12:03 AM Bug #52677 (Resolved): qa: test_simple failure
- ...
09/20/2021
- 04:15 PM Bug #52607 (Duplicate): qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data...
- 01:48 PM Fix #52591: mds: mds_oft_prefetch_dirfrags = false is not qa tested
- Patrick suggested that we have this setting in the thrasher suite.
- 01:42 PM Bug #52641 (Triaged): snap scheduler: Traceback seen when snapshot schedule remove command is pas...
- 01:41 PM Bug #52642 (Triaged): snap scheduler: cephfs snapshot schedule status doesn't list the snapshot c...
- 01:38 PM Bug #52643 (Triaged): snap scheduler: cephfs snapshot created with schedules stopped on nfs volum...
- 11:40 AM Backport #52629 (In Progress): octopus: pybind/mgr/volumes: first subvolume permissions set perms...
- 11:09 AM Backport #52628 (In Progress): pacific: pybind/mgr/volumes: first subvolume permissions set perms...
09/17/2021
- 09:39 AM Feature #51518 (Resolved): client: flush the mdlog in unsafe requests' relevant and auth MDSes only
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:34 AM Backport #51833 (Resolved): pacific: client: flush the mdlog in unsafe requests' relevant and aut...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42925
m...
- 09:34 AM Backport #51937: pacific: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42923
m...
- 09:33 AM Backport #51977 (Resolved): pacific: client: make sure only to update dir dist from auth mds
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42937
m...
- 06:59 AM Bug #52643: snap scheduler: cephfs snapshot created with schedules stopped on nfs volume after cr...
- Moving to High Priority since python traceback causes loss of functionality.
- 06:36 AM Bug #52643 (Closed): snap scheduler: cephfs snapshot created with schedules stopped on nfs volume...
- ...
- 06:33 AM Bug #52642 (Resolved): snap scheduler: cephfs snapshot schedule status doesn't list the snapshot ...
- ...
- 06:28 AM Bug #52641 (Resolved): snap scheduler: Traceback seen when snapshot schedule remove command is pa...
- # ceph fs snap-schedule remove
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/mgr/mgr_mod...
- 01:58 AM Backport #52639 (In Progress): pacific: MDSMonitor: handle damaged state from standby-replay
- 01:50 AM Backport #52639 (Resolved): pacific: MDSMonitor: handle damaged state from standby-replay
- https://github.com/ceph/ceph/pull/43200
- 01:48 AM Bug #52565 (Pending Backport): MDSMonitor: handle damaged state from standby-replay
- 01:47 AM Bug #51278: mds: "FAILED ceph_assert(!segments.empty())"
- Might be related to: https://tracker.ceph.com/issues/51589
- 01:05 AM Backport #52627 (In Progress): pacific: cephfs-mirror: cephfs-mirror daemon status for a particul...
- 01:00 AM Backport #52444 (In Progress): pacific: cephfs-mirror: terminating a mirror daemon can cause a cr...
09/16/2021
- 08:58 PM Bug #52280: Mds crash and fails with assert on prepare_new_inode
- @xiubo Li
Hi Li
Thanks again
- What are the recommended values for mds_log_segment_size and mds_log_segment_size.?...
- 01:31 PM Bug #52280 (Fix Under Review): Mds crash and fails with assert on prepare_new_inode
- Yael Azulay wrote:
> @xiubo Li
> Thanks much, Li, for your analysis
> I didnt change mds_log_segment_size and mds_...
- 02:36 PM Backport #52444 (Need More Info): pacific: cephfs-mirror: terminating a mirror daemon can cause a...
- 02:32 PM Backport #52627 (Need More Info): pacific: cephfs-mirror: cephfs-mirror daemon status for a parti...
- 02:35 AM Backport #52627 (Resolved): pacific: cephfs-mirror: cephfs-mirror daemon status for a particular ...
- https://github.com/ceph/ceph/pull/43199
- 01:01 PM Fix #52591: mds: mds_oft_prefetch_dirfrags = false is not qa tested
- Thanks for bringing this up Dan. We'll try to have someone work on this.
- 02:42 AM Backport #52636 (Resolved): pacific: MDSMonitor: removes MDS coming out of quorum election
- https://github.com/ceph/ceph/pull/43698
- 02:40 AM Backport #52635 (Resolved): pacific: mds sends cap updates with btime zeroed out
- https://github.com/ceph/ceph/pull/45163
- 02:40 AM Backport #52634 (Resolved): octopus: mds sends cap updates with btime zeroed out
- https://github.com/ceph/ceph/pull/45164
- 02:40 AM Backport #52633 (Resolved): pacific: mds,client: add flag to MClientSession for reject reason
- https://github.com/ceph/ceph/pull/43251
- 02:40 AM Backport #52632 (Resolved): octopus: mds,client: add flag to MClientSession for reject reason
- https://github.com/ceph/ceph/pull/43252
- 02:40 AM Backport #52631 (Resolved): pacific: mds: add max_mds_entries_per_dir config option
- https://github.com/ceph/ceph/pull/44512
- 02:37 AM Feature #52491 (Pending Backport): mds: add max_mds_entries_per_dir config option
- 02:36 AM Bug #43216 (Pending Backport): MDSMonitor: removes MDS coming out of quorum election
- 02:35 AM Bug #52382 (Pending Backport): mds,client: add flag to MClientSession for reject reason
- 02:35 AM Backport #52629 (Resolved): octopus: pybind/mgr/volumes: first subvolume permissions set perms on...
- https://github.com/ceph/ceph/pull/43224
- 02:35 AM Backport #52628 (Resolved): pacific: pybind/mgr/volumes: first subvolume permissions set perms on...
- https://github.com/ceph/ceph/pull/43223
- 02:35 AM Bug #52123 (Pending Backport): mds sends cap updates with btime zeroed out
- 02:34 AM Bug #51870 (Pending Backport): pybind/mgr/volumes: first subvolume permissions set perms on /volu...
- 02:33 AM Bug #51989 (Pending Backport): cephfs-mirror: cephfs-mirror daemon status for a particular FS is ...
- 02:27 AM Bug #52626 (Triaged): mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
- 02:26 AM Bug #52626 (Triaged): mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
- ...
- 02:21 AM Bug #52625 (Fix Under Review): qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
- 02:20 AM Bug #52625 (Resolved): qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
- ...
09/15/2021
- 02:16 PM Bug #52280: Mds crash and fails with assert on prepare_new_inode
- @xiubo Li
Thanks much, Li, for your analysis
I didnt change mds_log_segment_size and mds_log_segment_size.
To what... - 05:34 AM Bug #52280: Mds crash and fails with assert on prepare_new_inode
- Hi Yael,
BTW, have you ever changed the "mds_log_segment_size" and "mds_log_events_per_segment" options ?
If no...
09/14/2021
- 05:22 PM Bug #51866 (Can't reproduce): mds daemon damaged after outage
- > rados_osd_op_timeout: 30
yeah that'll do it. thanks for clarifying the root cause!
- 04:45 PM Bug #51866: mds daemon damaged after outage
- ...and I've just found that we're globally setting:
```
rados_osd_op_timeout: 30
rados_mon_op_timeout: 30
```
...
- 04:11 PM Bug #51866: mds daemon damaged after outage
- Hi again - another update on this one.
We deploy on both OpenStack and VMware hosted VMs, with the intention that ... - 05:07 PM Bug #52572 (Fix Under Review): "cluster [WRN] 1 slow requests" in smoke pacific
- 05:00 PM Bug #52572 (In Progress): "cluster [WRN] 1 slow requests" in smoke pacific
- 03:03 PM Bug #52607 (Duplicate): qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data...
- /ceph/teuthology-archive/yuriw-2021-09-13_14:09:59-fs-pacific-distro-basic-smithi/6387141/teuthology.log
- 02:59 PM Bug #52606 (Fix Under Review): qa: test_dirfrag_limit
- 02:55 PM Bug #52606 (Resolved): qa: test_dirfrag_limit
- ...
- 10:52 AM Bug #52382 (Fix Under Review): mds,client: add flag to MClientSession for reject reason
09/13/2021
- 03:05 PM Fix #52591 (Resolved): mds: mds_oft_prefetch_dirfrags = false is not qa tested
- We'd like to use `mds_oft_prefetch_dirfrags = false` in production; in our tests it demonstrates a massive speedup wh...
- 01:43 PM Bug #52581 (Triaged): Dangling fs snapshots on data pool after change of directory layout
- 09:34 AM Bug #52581: Dangling fs snapshots on data pool after change of directory layout
- Related ceph-users thread, relevant part towards the end: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/th...
- 09:31 AM Bug #52581 (New): Dangling fs snapshots on data pool after change of directory layout
- ...
- 01:14 PM Backport #52441 (In Progress): pacific: mds: slow performance on parallel rm operations for multi...
- 05:44 AM Backport #52441: pacific: mds: slow performance on parallel rm operations for multiple kclients
- I will work on this.
- 09:26 AM Bug #52280: Mds crash and fails with assert on prepare_new_inode
- I found some memories are not counted into the MDCache memories, such as the inode_map map in the MDCache class.
- 09:17 AM Bug #52280: Mds crash and fails with assert on prepare_new_inode
- Hi Yael,
BTW, did you see any memory killer logs in the /var/log/message* log files ? It should be OS's the memori... - 08:58 AM Bug #52280: Mds crash and fails with assert on prepare_new_inode
- Yael Azulay wrote:
> Xiubo Li wrote:
> > Hi Yael,
> >
> > Do you have the mds side logs ? Thanks.
>
> Hi Xiub... - 06:54 AM Backport #51937 (Resolved): pacific: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull)...
09/10/2021
- 06:18 PM Bug #52572: "cluster [WRN] 1 slow requests" in smoke pacific
- (08:28:21 AM) neha: yuriw: looks that test has been failing since July due to slow requests on the mds https://pulp...
- 06:18 PM Bug #52572 (Resolved): "cluster [WRN] 1 slow requests" in smoke pacific
- Run: https://pulpito.ceph.com/yuriw-2021-09-10_14:17:02-smoke-pacific-distro-basic-smithi/
Run: 6382881
Logs: http:... - 10:00 AM Bug #50719: xattr returning from the dead (sic!)
- Ralph Böhme wrote:
> I'm going to work on trying to reproduce this with a local simple test program in the next days... - 01:48 AM Support #52551: ceph osd pool set-quota
- Patrick Donnelly wrote:
> These kinds of questions should go on ceph-users mailing list, please.
I'm sorry for that.
- 12:37 AM Bug #52565 (Fix Under Review): MDSMonitor: handle damaged state from standby-replay
09/09/2021
- 11:37 PM Bug #52565 (Resolved): MDSMonitor: handle damaged state from standby-replay
- After the addition of join_fscid, the state change validation in this code:
https://github.com/ceph/ceph/blob/ac46... - 01:00 PM Support #52551 (Rejected): ceph osd pool set-quota
- These kinds of questions should go on ceph-users mailing list, please.
- 02:41 AM Support #52551 (Rejected): ceph osd pool set-quota
- hi~, I ask for some question about "df -h" after "ceph osd pool set-quota"
CentOS Linux release 8.4.2105
ceph ver...
- 05:35 AM Bug #52260: 1 MDSs are read only | pacific 16.2.5
- We didn't change the mclock profile to custom, it is set to high io client.
I am not sure the op_queue is the prob... - 03:07 AM Bug #52508: nfs-ganesha crash when calls libcephfs, it triggers __ceph_assert_fail
- Patrick Donnelly wrote:
> le le wrote:
> > The exception because of compiler’s optimization ?
>
> Probably there...
09/08/2021
- 09:54 PM Backport #51833: pacific: client: flush the mdlog in unsafe requests' relevant and auth MDSes only
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42925
merged
- 09:53 PM Backport #51937: pacific: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42923
merged
- 07:54 PM Bug #52280: Mds crash and fails with assert on prepare_new_inode
- Xiubo Li wrote:
> Hi Yael,
>
> Do you have the mds side logs ? Thanks.
Hi Xiubo
We reinstalled the setup and...
- 08:56 AM Bug #52280: Mds crash and fails with assert on prepare_new_inode
- Hi Yael,
Do you have the mds side logs ? Thanks.
- 02:49 PM Bug #51866: mds daemon damaged after outage
- I should mention, I've only had this reproduce once after we added the fix, when previously it was 100% reproducible,...
- 02:10 PM Bug #51866: mds daemon damaged after outage
- Hi, I'm working with David and am now looking at this issue. Unfortunately it seems the fix hasn't worked.
I've ju... - 02:38 PM Bug #52531 (Triaged): Quotas smaller than 4MB on subdirs do not have any effect
- 12:41 PM Bug #52508: nfs-ganesha crash when calls libcephfs, it triggers __ceph_assert_fail
- le le wrote:
> The exception because of compiler’s optimization ?
Probably there is a race condition not protecte... - 08:28 AM Bug #51756: crash: std::_Rb_tree_insert_and_rebalance(bool, std::_Rb_tree_node_base*, std::_Rb_tr...
- For context, we hit this nearly every time we stop an active MDS in 14.2.20. (We see the #52207 variant of this).
- 06:34 AM Bug #43960: MDS: incorrectly issues Fc for new opens when there is an existing writer
- Could we use the socket keepalive to detect the socket connection peer's aliveness in the caps_tick() if there has re...
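The keepalive idea above can be sketched with plain sockets (not Ceph messenger code); the tunable names are the Linux ones, so they are set only where the platform exposes them:

```python
import socket

# Sketch of the suggestion above: with TCP keepalive enabled the
# kernel probes an idle peer, so a dead connection is detected even
# when no application traffic flows. Parameter values are arbitrary
# examples, not recommended MDS settings.
def enable_keepalive(sock, idle=60, interval=10, probes=5):
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    for opt, val in (("TCP_KEEPIDLE", idle),       # silence before first probe
                     ("TCP_KEEPINTVL", interval),  # seconds between probes
                     ("TCP_KEEPCNT", probes)):     # missed probes before reset
        if hasattr(socket, opt):  # Linux-specific options
            sock.setsockopt(socket.IPPROTO_TCP, getattr(socket, opt), val)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0)  # True
```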
- 06:16 AM Bug #43960: MDS: incorrectly issues Fc for new opens when there is an existing writer
- Whenever switching to a different lock state the MDS will try to issue the allowed caps to the clients, even some cap...
09/07/2021
- 08:02 PM Backport #51977: pacific: client: make sure only to update dir dist from auth mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42937
merged
- 05:11 PM Bug #52260: 1 MDSs are read only | pacific 16.2.5
- cephuser2345 user wrote:
> 1. we chaned osd_opqueue to mclock_scheduler right after upgrading to pacific. that was o...
- 04:38 PM Bug #52531 (Triaged): Quotas smaller than 4MB on subdirs do not have any effect
- This doesn’t work:
root@mon0:~# setfattr -n ceph.quota.max_bytes -v $((4*1024*1024-1)) /mnt/cephfs/
root@mon0:~# ge...
- 03:24 PM Bug #51757 (New): crash: /lib/x86_64-linux-gnu/libpthread.so.0(
- Not sure why this tracker is marked as a duplicate. Moving back to new.
- 02:02 PM Feature #52491 (Fix Under Review): mds: add max_mds_entries_per_dir config option
- 01:49 PM Feature #52459 (Need More Info): mds: add failed connections warning
- xinyu wang wrote:
> Greg Farnum wrote:
> > What specific scenario are you trying to avoid here? Messenger-level fai... - 01:46 PM Bug #52487 (Triaged): qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragmenta...
- 01:45 PM Bug #52508 (Triaged): nfs-ganesha crash when calls libcephfs, it triggers __ceph_assert_fail