Activity
From 06/03/2018 to 07/02/2018
07/02/2018
- 09:48 PM Cleanup #24745 (Won't Fix): Spurious empty files in CephFS root pool when multiple pools associated
- Hi there.
I have an issue with cephfs and multiple datapools inside. I have like SIX datapools inside the cephfs, ...
- 09:38 PM Feature #24724 (Fix Under Review): client: put instance/addr information in status asok command
- https://github.com/ceph/ceph/pull/22801
- 07:28 PM Documentation #24642 (New): doc: visibility semantics to other clients
- Niklas Hambuechen wrote:
> I have upgraded to Mimic now and will check out if I see three minute delays again.
>
...
- 04:52 PM Documentation #24726 (Pending Backport): Documentation about CephFS snapshots in experimental fea...
- https://github.com/ceph/ceph/pull/22656
- 04:51 PM Backport #24716: mimic: blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-smithi
- Abhishek Lekshmanan wrote:
> https://github.com/ceph/ceph/pull/22775
merged
- 01:52 PM Bug #24732 (Rejected): lsattr is not useful for ceph-fuse
- lsattr is not useful for ceph-fuse
- 01:48 PM Bug #24730: Client::_invalidate_kernel_dcache causes NFS lookup “deleted” dentry
- dup of http://tracker.ceph.com/issues/21423
- 01:46 PM Bug #24730: Client::_invalidate_kernel_dcache causes NFS lookup “deleted” dentry
- Set the client_try_dentry_invalidate config option of ceph-fuse to false (a sketch of the setting follows below).
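A minimal sketch of that workaround, assuming the option goes in the [client] section of ceph.conf on the ceph-fuse host and that ceph-fuse is restarted afterwards (option name taken from the comment above; verify it against your ceph-fuse version):

    [client]
        # keep ceph-fuse from issuing the dentry-invalidate upcall that
        # confuses NFS lookups of deleted dentries
        client_try_dentry_invalidate = false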
- 12:49 PM Bug #24730 (Duplicate): Client::_invalidate_kernel_dcache causes NFS lookup “deleted” dentry
- We exported an NFS directory via a mounted ceph-fuse directory named "testshawn", and then started a write/read task ...
- 11:41 AM Backport #24534: mimic: client: _ll_drop_pins travel inode_map may access invalid ‘next’ iterator
- 07:16 AM Backport #24534 (In Progress): mimic: client: _ll_drop_pins travel inode_map may access invalid ‘...
- https://github.com/ceph/ceph/pull/22791
- 11:40 AM Backport #24535 (In Progress): luminous: client: _ll_drop_pins travel inode_map may access invali...
- https://github.com/ceph/ceph/pull/22786
- 03:09 AM Backport #24540 (In Progress): luminous: multimds pjd open test fails
- https://github.com/ceph/ceph/pull/22783
- 02:32 AM Backport #23989 (In Progress): luminous: mds: don't report slow request for blocked filelock request
- https://github.com/ceph/ceph/pull/22782
07/01/2018
- 08:54 PM Documentation #24726 (Resolved): Documentation about CephFS snapshots in experimental features is...
- http://docs.ceph.com/docs/mimic/cephfs/experimental-features/
- 09:27 AM Feature #24725 (Fix Under Review): mds: propagate rstats from the leaf dirs up to the specified d...
- To make rsync "rstats" aware, we need to implement a way to propagate all rstats up through the subtree that is to be...
- 12:52 AM Feature #24724 (Resolved): client: put instance/addr information in status asok command
- Purpose is to make later lookups in the blacklist possible.
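A hedged sketch of how that could then be used for a blacklist lookup (the admin socket path is a typical default and the exact output fields are assumptions, not something this ticket confirms):

    # query the ceph-fuse client's status, which would now report its instance/addr
    ceph daemon /var/run/ceph/ceph-client.admin.asok status
    # then check whether that addr shows up in the OSD blacklist
    ceph osd blacklist ls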
- 12:39 AM Documentation #24642: doc: visibility semantics to other clients
- I have upgraded to Mimic now and will check out if I see three minute delays again.
I would like to ask you to reo...
06/30/2018
- 12:36 AM Backport #24540: luminous: multimds pjd open test fails
- Zheng, please take this one. It has non-trivial conflicts.
- 12:34 AM Backport #24295 (In Progress): luminous: repeated eviction of idle client until some IO happens
- 12:26 AM Backport #24538 (In Progress): luminous: common/DecayCounter: set last_decay to current time when...
- 12:24 AM Backport #23989: luminous: mds: don't report slow request for blocked filelock request
- Zheng, this has non-trivial conflicts. Please backport.
06/29/2018
- 11:02 PM Bug #24721 (Resolved): mds: accept an inode number in hex for dump_inode command
- See also: https://github.com/ceph/ceph/pull/22569#issuecomment-398420009
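A short hedged illustration of the request (mds.a is a placeholder daemon name; the decimal form already works via the MDS admin socket, the hex form is what this ticket asks for):

    # existing: decimal inode number
    ceph daemon mds.a dump inode 1099511627776
    # requested: the same inode given in hex
    ceph daemon mds.a dump inode 0x10000000000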
- 10:50 PM Documentation #24642 (Closed): doc: visibility semantics to other clients
- Niklas Hambuechen wrote:
> Thanks everyone for the quick answers.
>
> Patrick Donnelly wrote:
> > It sounds like...
- 03:27 PM Documentation #24642: doc: visibility semantics to other clients
- Thanks everyone for the quick answers.
Patrick Donnelly wrote:
> It sounds like you may have found a bug (possibl...
- 05:29 PM Backport #24719 (Resolved): mimic: client: returning garbage (?) for readdir
- https://github.com/ceph/ceph/pull/22956
- 05:24 PM Backport #24718 (Resolved): luminous: client: returning garbage (?) for readdir
- https://github.com/ceph/ceph/pull/22955
- 05:24 PM Bug #24579 (Pending Backport): client: returning garbage (?) for readdir
- 05:03 PM Backport #24717: luminous: blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-smithi
- https://github.com/ceph/ceph/pull/22774
- 05:00 PM Backport #24717 (Resolved): luminous: blogbench.sh failed in upgrade:luminous-x-mimic-distro-basi...
- https://github.com/ceph/ceph/pull/22774
- 04:59 PM Backport #24716 (Resolved): mimic: blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-s...
- https://github.com/ceph/ceph/pull/22775
- 03:17 PM Documentation #24641: Document behaviour of fsync-after-close
- Great, that certainly makes things easier from the application perspective.
It would be great if we could write it...
- 12:57 PM Bug #24679: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- kernel source is too old for gcc 7
06/28/2018
- 08:11 PM Backport #24706 (In Progress): mimic: qa: support picking a random distro using new teuthology $
- 08:09 PM Backport #24706 (Resolved): mimic: qa: support picking a random distro using new teuthology $
- https://github.com/ceph/ceph/pull/22700
- 08:09 PM Backport #24705 (Resolved): mimic: cephfs: allow prohibiting user snapshots in CephFS
- https://github.com/ceph/ceph/pull/22812
- 08:09 PM Backport #24704 (Resolved): mimic: mds: low wrlock efficiency due to dirfrags traversal
- https://github.com/ceph/ceph/pull/22884
- 08:09 PM Backport #24703 (Resolved): mimic: PurgeQueue sometimes ignores Journaler errors
- https://github.com/ceph/ceph/pull/22810
- 08:04 PM Backport #24696 (Resolved): luminous: mds: low wrlock efficiency due to dirfrags traversal
- https://github.com/ceph/ceph/pull/22885
- 08:04 PM Backport #24695 (Rejected): jewel: PurgeQueue sometimes ignores Journaler errors
- 08:04 PM Backport #24694 (Resolved): luminous: PurgeQueue sometimes ignores Journaler errors
- https://github.com/ceph/ceph/pull/22811
- 05:32 AM Feature #24643 (Fix Under Review): libcephfs: add ceph_futimens support
- PR https://github.com/ceph/ceph/pull/22751
- 04:08 AM Bug #23262: kclient: nofail option not supported
- It seems the mount command itself cannot recognize the nofail option; it is not ceph or the ceph kernel client that rejects it.
What version of util-linux do...
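For reference, a hedged example of the kind of fstab line involved (monitor address, mount point and secret file are placeholders); nofail and _netdev are interpreted by mount/systemd rather than by the ceph kernel client:

    192.168.0.1:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,nofail,_netdev  0  2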
06/27/2018
- 09:19 PM Bug #24284 (Pending Backport): cephfs: allow prohibiting user snapshots in CephFS
- 08:37 PM Feature #13231 (Duplicate): kclient: support SELinux
- 06:27 PM Bug #24680 (Fix Under Review): qa: iogen.sh: line 7: cd: too many arguments
- https://github.com/ceph/ceph/pull/22741
- 06:22 PM Bug #24680 (Resolved): qa: iogen.sh: line 7: cd: too many arguments
- ...
- 06:11 PM Bug #24679 (Resolved): qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- ...
- 09:11 AM Bug #24665: qa: TestStrays.test_hardlink_reintegration fails self.assertTrue(self.get_backtrace_p...
- the remote dentry was in mds cache when unlink happened, so reintegration started immediately. There were two unexpec...
06/26/2018
- 11:06 PM Bug #24522: blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-smithi
- https://github.com/ceph/ceph/pull/22725
- 09:08 PM Bug #21848 (Resolved): client: re-expand admin_socket metavariables in child process
- 08:32 PM Bug #24533 (Pending Backport): PurgeQueue sometimes ignores Journaler errors
- 08:32 PM Bug #24557: client: segmentation fault in handle_client_reply
- Problem only in mimic/master. Introduced by 4f2fa427f483a29df168053d0021ee35c1aa207d.
- 08:26 PM Backport #23833 (Resolved): luminous: MDSMonitor: crash after assigning standby-replay daemon in ...
- 07:05 PM Bug #24665 (Closed): qa: TestStrays.test_hardlink_reintegration fails self.assertTrue(self.get_ba...
- ...
- 12:37 PM Bug #24579 (Fix Under Review): client: returning garbage (?) for readdir
- https://github.com/ceph/ceph/pull/22718
06/25/2018
- 11:29 PM Bug #24138: qa: support picking a random distro using new teuthology $
- mimic backport https://github.com/ceph/ceph/pull/22700
- 10:48 PM Bug #24138 (Pending Backport): qa: support picking a random distro using new teuthology $
- 01:56 PM Documentation #24641: Document behaviour of fsync-after-close
- They are the same. In CephFS, there is no dirty data/metadata associated with a file handle.
- 01:47 PM Documentation #24642: doc: visibility semantics to other clients
- You found a bug; this may be http://tracker.ceph.com/issues/23894
- 01:46 PM Documentation #24642: doc: visibility semantics to other clients
- Niklas Hambuechen wrote:
> I believe I have just run into a situation where one CephFS (fuse) mount created a file,...
- 01:41 PM Bug #24644 (Fix Under Review): cephfs-journal-tool: wrong layout info used
- 02:29 AM Bug #24644: cephfs-journal-tool: wrong layout info used
- The fix is to save the layout info to the header during journal export.
When importing the journal, first try to get the layout from c...
- 02:28 AM Bug #24644 (Resolved): cephfs-journal-tool: wrong layout info used
- When cephfs-journal-tool imports a journal, it uses the default layout
to get object_size; this is wrong, because the default o...
06/24/2018
- 10:28 PM Feature #24643 (Resolved): libcephfs: add ceph_futimens support
- We have ceph_utime but that takes a path. We should provide a call for the file descriptor version too.
- 10:22 PM Documentation #24642: doc: visibility semantics to other clients
- User `SeanR` on freenode `#ceph` reports:
> nh2, I found (depending on mount options) if I don't call fsync() then ...
- 01:14 AM Documentation #24642 (In Progress): doc: visibility semantics to other clients
- I believe I have just run into a situation where one CephFS (fuse) mount created a file, and another CephFS mount st...
- 01:08 AM Documentation #24641 (Resolved): Document behaviour of fsync-after-close
- The following should be documented:
Does close()/re-open()/fsync() provide the same durability and visibility-to-o...
06/22/2018
- 11:25 PM Bug #24467 (Pending Backport): mds: low wrlock efficiency due to dirfrags traversal
- 01:18 PM Bug #24004 (Fix Under Review): mds: curate priority of perf counters sent to mgr
- PR https://github.com/ceph/ceph/pull/22668
06/21/2018
- 12:53 PM Feature #24604 (Resolved): Implement "cephfs-journal-tool event splice" equivalent for purge queue
- cephfs-journal-tool recently got the ability to scan the purge queue via the --journal=purge_queue argument.
Howev...
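A hedged sketch of the existing scan versus the requested operation (the inspect form follows the --journal=purge_queue usage mentioned above and may also need --rank depending on version; the splice invocation is hypothetical and only illustrates the feature being asked for):

    # already possible: scan the purge queue instead of the MDS log
    cephfs-journal-tool --journal=purge_queue journal inspect
    # requested: an "event splice"-style operation against the purge queue, e.g.
    cephfs-journal-tool --journal=purge_queue event splice summary   # hypothetical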
06/20/2018
- 11:06 PM Bug #24522 (New): blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-smithi
- Whoops!
- 09:58 PM Bug #24522: blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-smithi
- Patrick, this is a different script (blogbench.sh), not pjd.sh, so it might not be a duplicate.
- 09:00 PM Bug #24522 (Duplicate): blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-smithi
- Thanks Neha!
- 08:39 PM Bug #24522: blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-smithi
- Following is the problem:...
- 10:01 PM Bug #24137 (Resolved): client: segfault in trim_caps
- 10:00 PM Backport #24185 (Resolved): luminous: client: segfault in trim_caps
- 07:59 PM Backport #24185: luminous: client: segfault in trim_caps
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22201
merged
- 10:00 PM Backport #24331 (Resolved): luminous: mon: mds health metrics sent to cluster log independently
- 07:58 PM Backport #24331: luminous: mon: mds health metrics sent to cluster log independently
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22558
merged
- 07:57 PM Backport #23792: luminous: MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21732
merged
- 05:10 PM Feature #17854 (Fix Under Review): mds: only evict an unresponsive client when another client wan...
- 08:22 AM Bug #24579: client: returning garbage (?) for readdir
- This is an ffsb issue; the patch below can fix it. I don't know how to update http://download.ceph.com/qa/ffsb.tar.bz2...
- 07:57 AM Bug #24512: Raw used space leak
- some additional info:
- mounted with 'mount -t ceph'
- default config but:
--- 2 mds servers active
--- ram per O...
06/19/2018
- 07:22 PM Documentation #24580 (Resolved): doc: complete documentation for `ceph fs` administration commands
- Current skeleton: http://docs.ceph.com/docs/luminous/cephfs/administration/
- 07:22 PM Bug #24240: qa: 1 mutations had unexpected outcomes
- not indicated for backport to luminous because, as per Zheng, luminous does not have open file table.
- 06:56 PM Bug #24512: Raw used space leak
- I checked 'stored' vs. 'allocated' counters under bluestore section. 'stored' is the actual amount written to bluesto...
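A hedged way to pull those counters for comparison (osd.3 is a placeholder; exact counter names can vary by release, so check the bluestore section of the dump):

    # dump all perf counters from one OSD's admin socket
    ceph daemon osd.3 perf dump > osd.3-perf.json
    # compare the stored vs allocated values in the bluestore section
    grep -E '"(bluestore_)?(stored|allocated)"' osd.3-perf.json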
- 12:28 PM Bug #24512: Raw used space leak
- Here they are from 3 hosts (the link with hosts is in the df tree)!
Thanks!
- 11:13 AM Bug #24512: Raw used space leak
- Would you share performance counters dump for several (3-5) OSDs, preferably from different nodes? And corresponding ...
- 08:08 AM Bug #24512: Raw used space leak
- sorry, wrong ceph version: 12.2.5-407 (luminous stable)
I'm still very interested by any answer. If I try filestor... - 06:09 PM Bug #24579: client: returning garbage (?) for readdir
- This seems to only happen on Ubuntu 18.04:...
- 06:05 PM Bug #24579: client: returning garbage (?) for readdir
- Here too:
/ceph/teuthology-archive/teuthology-2018-06-18_20:06:42-powercycle-master-distro-basic-smithi/2678660
...
- 06:03 PM Bug #24579 (Resolved): client: returning garbage (?) for readdir
- ...
- 04:18 PM Bug #24441 (Closed): Ceph fs new cephfs command failed when meta pool already contains some objects
- This is not a bug -- the check was added to avoid people accidentally getting corrupt filesystems by trying to use a ...
- 03:37 AM Feature #24464: cephfs: file-level snapshots
- I think using rados snapshots to support this would be too expensive.
- 02:55 AM Bug #24557 (Fix Under Review): client: segmentation fault in handle_client_reply
- https://github.com/ceph/ceph/pull/22611
06/18/2018
- 09:19 PM Bug #24557 (Resolved): client: segmentation fault in handle_client_reply
- ...
- 07:21 PM Backport #23833 (In Progress): luminous: MDSMonitor: crash after assigning standby-replay daemon ...
- 05:45 PM Feature #17230: ceph_volume_client: py3 compatible
- There are high-level pushes to py3 in future (minor-)releases of Ceph/Openstack. RHCS 3.X (Luminous) will need to be ...
- 05:35 PM Feature #17230: ceph_volume_client: py3 compatible
- Question for all, and particularly for Patrick: why is a luminous backport of this needed, and is it worth the risk?
- 05:42 PM Bug #24518 (Duplicate): "pjd.sh: line 7: cd: too many arguments" in fs suite
- Yuri, you're using an old qa-suite branch. Thanks to Neha for noticing the cause.
- 05:29 PM Bug #24518: "pjd.sh: line 7: cd: too many arguments" in fs suite
- Line in question: https://github.com/ceph/ceph/blob/2d2293948066cae8d656dfe91bdb6695958a52e9/qa/workunits/suites/pjd....
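As a generic illustration (not the actual pjd.sh fix, which here was just using an up-to-date qa-suite branch), an unquoted cd argument is what produces this shell error:

    dir="/tmp/path with spaces"
    cd $dir      # word-splits into several arguments -> "cd: too many arguments"
    cd "$dir"    # quoted, passed as a single argument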
06/15/2018
- 04:01 PM Backport #24541 (Resolved): mimic: qa: 1 mutations had unexpected outcomes
- https://github.com/ceph/ceph/pull/22841
- 04:01 PM Backport #24540 (Resolved): luminous: multimds pjd open test fails
- https://github.com/ceph/ceph/pull/22783
- 04:01 PM Backport #24539 (Resolved): mimic: multimds pjd open test fails
- https://github.com/ceph/ceph/pull/22819
- 04:01 PM Backport #24538 (Resolved): luminous: common/DecayCounter: set last_decay to current time when de...
- https://github.com/ceph/ceph/pull/22779
- 04:01 PM Backport #24537 (Resolved): mimic: common/DecayCounter: set last_decay to current time when decod...
- https://github.com/ceph/ceph/pull/22816
- 04:01 PM Backport #24536 (Rejected): jewel: client: _ll_drop_pins travel inode_map may access invalid ‘nex...
- 04:01 PM Backport #24535 (Resolved): luminous: client: _ll_drop_pins travel inode_map may access invalid ‘...
- https://github.com/ceph/ceph/pull/22786
- 04:01 PM Backport #24534 (Resolved): mimic: client: _ll_drop_pins travel inode_map may access invalid ‘nex...
- https://github.com/ceph/ceph/pull/22791
- 03:45 PM Bug #24533 (Fix Under Review): PurgeQueue sometimes ignores Journaler errors
- https://github.com/ceph/ceph/pull/22580
- 03:33 PM Bug #24533 (Resolved): PurgeQueue sometimes ignores Journaler errors
- We check journaler.get_error() in PurgeQueue::_recover, but never later in _consume -- if something like a decode err...
- 02:06 PM Bug #24491 (Pending Backport): client: _ll_drop_pins travel inode_map may access invalid ‘next’ i...
- 02:04 PM Bug #24440 (Pending Backport): common/DecayCounter: set last_decay to current time when decoding ...
- 02:03 PM Bug #24269 (Pending Backport): multimds pjd open test fails
- 02:02 PM Bug #24240 (Pending Backport): qa: 1 mutations had unexpected outcomes
06/14/2018
- 10:02 AM Bug #24284 (Fix Under Review): cephfs: allow prohibiting user snapshots in CephFS
- https://github.com/ceph/ceph/pull/22560
- 05:52 AM Backport #24331 (In Progress): luminous: mon: mds health metrics sent to cluster log independently
- https://github.com/ceph/ceph/pull/22558
- 01:44 AM Backport #24330: mimic: mon: mds health metrics sent to cluster log independently
- The mimic backport PR https://github.com/ceph/ceph/pull/22265 is already open, so closing PR#22540.
- 12:24 AM Bug #24306: mds: use intrusive_ptr to manage Message life-time
- https://github.com/ceph/ceph/pull/22555
06/13/2018
- 09:53 PM Bug #24522 (Resolved): blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-smithi
- Seems bionic specific
Run: http://pulpito.ceph.com/yuriw-2018-06-12_20:54:23-upgrade:luminous-x-mimic-distro-basic-s...
- 09:05 PM Bug #24518: "pjd.sh: line 7: cd: too many arguments" in fs suite
- Also seems like in run:
http://pulpito.ceph.com/yuriw-2018-06-12_21:34:02-powercycle-mimic-distro-basic-smithi/
Job...
- 08:27 PM Bug #24518 (Duplicate): "pjd.sh: line 7: cd: too many arguments" in fs suite
- This seems to be bionic specific
Run: http://pulpito.ceph.com/yuriw-2018-06-12_21:09:43-fs-master-distro-basic-smith...
- 08:54 PM Bug #24520 (Duplicate): "[WRN] MDS health message (mds.0): 2 slow requests are blocked > 30 sec""...
- Run: http://pulpito.ceph.com/yuriw-2018-06-12_21:34:02-powercycle-mimic-distro-basic-smithi/
Jobs: '2660103', '26600...
- 08:22 PM Bug #24517 (Duplicate): "Loading libcephfs-jni: Failure!" in fs suite
- This seems to be rhel specific
Run: http://pulpito.ceph.com/yuriw-2018-06-12_21:09:43-fs-master-distro-basic-smithi/...
- 05:16 PM Bug #23697 (Resolved): mds: load balancer fixes
- 05:15 PM Backport #23698 (Resolved): luminous: mds: load balancer fixes
- 05:15 PM Bug #21745 (Resolved): mds: MDBalancer using total (all time) request count in load statistics
- 05:13 PM Backport #23671 (Resolved): luminous: mds: MDBalancer using total (all time) request count in loa...
- 05:11 PM Feature #23695 (Resolved): VolumeClient: allow ceph_volume_client to create 'volumes' without nam...
- 05:11 PM Backport #24055 (Resolved): luminous: VolumeClient: allow ceph_volume_client to create 'volumes' ...
- 02:57 PM Feature #21571: mds: limit number of snapshots (global and subtree)
- There should be a global limit (if necessary for performance) and subtree limits (from #24429) so that operators can ...
- 02:56 PM Feature #24429 (Duplicate): fs: implement snapshot count limit by subtree
- 02:55 PM Backport #24296: mimic: repeated eviction of idle client until some IO happens
- Zheng Yan wrote:
> just replace 'cbegin()' with begin()
Thanks, Zheng. Did just that.
- 02:54 PM Backport #24296 (In Progress): mimic: repeated eviction of idle client until some IO happens
- 02:06 PM Bug #19438 (Won't Fix): ceph mds error "No space left on device"
- dirfrags are not stable on jewel. Closing this.
- 01:44 PM Bug #24512 (New): Raw used space leak
- Hello
I'm testing a setup of cephfs over an EC pool with 21 data + 3 coding chunks ([EC_]stripe_unit of 16k).
All...
- 12:52 PM Feature #24465 (Fix Under Review): client: allow client to leave state intact on MDS when tearing...
- https://github.com/ceph/ceph/pull/22543
- 04:14 AM Backport #24330 (In Progress): mimic: mon: mds health metrics sent to cluster log independently
- -https://github.com/ceph/ceph/pull/22540-
06/12/2018
- 07:50 AM Bug #23665 (Resolved): ceph-fuse: return proper exit code
- 07:49 AM Bug #22933 (Resolved): client: add option descriptions and review levels (e.g. LEVEL_DEV)
- 02:23 AM Bug #24491 (Fix Under Review): client: _ll_drop_pins travel inode_map may access invalid ‘next’ i...
- https://github.com/ceph/ceph/pull/22512
- 02:09 AM Bug #24491: client: _ll_drop_pins travel inode_map may access invalid ‘next’ iterator
- Thanks for reporting this. Could you fix this issue in a way similar to https://github.com/ceph/ceph/pull/22073?
06/11/2018
- 03:50 PM Bug #24491 (Resolved): client: _ll_drop_pins travel inode_map may access invalid ‘next’ iterator
- We have encountered a process crash when using libcephfs.
The call stack is below:
#0 0x00007fdef24941f7 in raise ...
- 01:42 PM Bug #24400 (Can't reproduce): CephFS - All MDS went offline and required repair of filesystem
- reopen this ticket if you encounter this issue again
- 01:39 PM Bug #24369 (Resolved): luminous: checking quota while holding cap ref may deadlock
06/10/2018
06/09/2018
- 02:24 PM Backport #23698: luminous: mds: load balancer fixes
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/21412
merged
- 02:24 PM Backport #24055: luminous: VolumeClient: allow ceph_volume_client to create 'volumes' without nam...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/21897
merged
- 02:23 PM Bug #24369: luminous: checking quota while holding cap ref may deadlock
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/22354
merged
- 11:36 AM Bug #23815 (Resolved): client: avoid second lock on client_lock
- 11:35 AM Bug #23829 (Resolved): qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < phase1_ops * 1.25)
- 11:21 AM Bug #20549 (Resolved): cephfs-journal-tool: segfault during journal reset
- 11:20 AM Bug #23923 (Resolved): mds: stopping rank 0 cannot shutdown until log is trimmed
- 11:17 AM Bug #23919 (Resolved): mds: stuck during up:stopping
- 11:16 AM Bug #23960 (Resolved): mds: scrub on fresh file system fails
- 11:15 AM Bug #23812 (Resolved): mds: may send LOCK_SYNC_MIX message to starting MDS
- 11:14 AM Bug #23855 (Resolved): mds: MClientCaps should carry inode's dirstat
- 11:13 AM Bug #23894 (Resolved): ceph-fuse: missing dentries in readdir result
- 11:12 AM Bug #23518 (Resolved): mds: crash when failover
- 11:11 AM Bug #24073 (Resolved): PurgeQueue::_consume() could return true when there were no purge queue it...
- 11:11 AM Bug #24047 (Resolved): MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
- 07:51 AM Bug #24467 (Fix Under Review): mds: low wrlock efficiency due to dirfrags traversal
- https://github.com/ceph/ceph/pull/22486
- 07:03 AM Bug #24467 (Resolved): mds: low wrlock efficiency due to dirfrags traversal
- Recently, when trying to create/remove a massive number of files/dirs (7x10^6) within a common directory, we found that as the cre...
06/08/2018
- 09:08 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- We've talked about this quite a lot in the past. I thought we had a tracker ticket for it, but on searching the most ...
- 06:21 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Neat. NFS and SMB have directory delegations/leases, but I haven't studied the topic in detail.
So the idea is to ... - 05:10 PM Feature #24461 (Resolved): cephfs: improve file create performance buffering file unlink/create o...
- **Serialized single-client** file creation (e.g. untar/rsync) is an area CephFS (and most distributed file systems) c...
- 07:08 PM Feature #24465 (Resolved): client: allow client to leave state intact on MDS when tearing down ob...
- When ganesha shuts down cleanly, it'll tear down all of its filehandle objects and release the files that it has open...
- 05:50 PM Feature #24464 (New): cephfs: file-level snapshots
- Use-case is to support dropbox-style versioning of files.
- 05:46 PM Feature #24463 (Resolved): kclient: add btime support
- 05:43 PM Feature #24462 (New): MDSMonitor: check for mixed version MDS
- And create a health error if it detects this.
- 09:00 AM Bug #24173 (In Progress): ceph_volume_client: allow atomic update of RADOS objects
- https://github.com/ceph/ceph/pull/22455
06/07/2018
- 01:30 PM Backport #24296: mimic: repeated eviction of idle client until some IO happens
- just replace 'cbegin()' with begin()
- 01:07 PM Backport #24296 (Need More Info): mimic: repeated eviction of idle client until some IO happens
- While backporting changes related to tracker 24052, I am getting a "cbegin not found" compilation error:
/home/pdvian/backpo...
- 01:11 PM Bug #24435 (Resolved): doc: incorrect snaprealm format upgrade process in mimic release note
- 01:07 PM Bug #24435 (Pending Backport): doc: incorrect snaprealm format upgrade process in mimic release note
- 01:11 PM Backport #24451 (Rejected): mimic: doc: incorrect snaprealm format upgrade process in mimic relea...
- Nevermind, this doc doesn't exist in mimic.
- 01:08 PM Backport #24451 (Rejected): mimic: doc: incorrect snaprealm format upgrade process in mimic relea...
- 08:23 AM Feature #24444 (Resolved): cephfs: make InodeStat, DirStat, LeaseStat versioned
- Make InodeStat/DirStat/LeaseStat versioned, so client can decode InodeStat in request reply without checking mds feat...
- 07:34 AM Feature #20598 (Fix Under Review): mds: revisit LAZY_IO
- https://github.com/ceph/ceph/pull/22450
- 06:31 AM Bug #24441: Ceph fs new cephfs command failed when meta pool already contains some objects
- ceph version 10.2.10:
When the meta pool has objects, running ceph fs new cephfs meta data can still create the fs successfully.
...
- 06:23 AM Bug #24441 (Closed): Ceph fs new cephfs command failed when meta pool already contains some objects
- ceph fs new cephfs meta4 data
Error EINVAL: pool 'meta4' already contains some objects. Use an empty pool instead.
- 03:04 AM Bug #24440: common/DecayCounter: set last_decay to current time when decoding decay counter
- https://github.com/ceph/ceph/pull/22357
- 03:03 AM Bug #24440 (Resolved): common/DecayCounter: set last_decay to current time when decoding decay co...
- Recently we found mds load might become zero on another MDS under multi-MDSes scenario. The ceph version is Luminous....
06/06/2018
- 10:28 PM Documentation #24093 (Resolved): doc: Update *remove a metadata server*
- 09:23 PM Bug #24435 (Fix Under Review): doc: incorrect snaprealm format upgrade process in mimic release note
- https://github.com/ceph/ceph/pull/22445
- 09:17 PM Bug #24435 (In Progress): doc: incorrect snaprealm format upgrade process in mimic release note
- 01:55 PM Bug #24435 (Resolved): doc: incorrect snaprealm format upgrade process in mimic release note
- The commands to upgrade snaprealm format in release note are
ceph daemon <mds of rank 0> scrub_path /
ceph daemon...
- 08:49 AM Bug #24028: CephFS flock() on a directory is broken
- In a fuse filesystem, flock on a directory is handled by the VFS; there is nothing ceph-fuse can do.
- 08:12 AM Bug #24028: CephFS flock() on a directory is broken
- In that case the flock() syscall over a FUSEd directory should return ENOTSUPP? In any case we must not allow unsafe lo...
- 07:46 AM Bug #24028: CephFS flock() on a directory is broken
- ceph-fuse does not support file locks on directories. It's a limitation of the fuse kernel module.
- 07:12 AM Bug #24400: CephFS - All MDS went offline and required repair of filesystem
- http://tracker.ceph.com/issues/17177 can explain this issue. full filesystem scrub should repair incorrect dirstat/rs...
- 06:24 AM Bug #24400: CephFS - All MDS went offline and required repair of filesystem
- Zheng Yan wrote:
> there are lots of inodes that have incorrect dirstat/rstat. Have you ever run 'journal reset' before t...
- 02:16 AM Bug #24400: CephFS - All MDS went offline and required repair of filesystem
- There are lots of inodes that have incorrect dirstat/rstat. Have you ever run 'journal reset' before the crash?
- 02:07 AM Feature #24430 (Resolved): libcephfs: provide API to change umask
- The current use-case will be the CephFS shell.
06/05/2018
- 09:05 PM Feature #24429 (Duplicate): fs: implement snapshot count limit by subtree
- e.g. don't let a subtree have more than 7 snapshots. This should be configurable via an xattr.
Idea is from Dan va...
- 06:06 PM Feature #24426 (New): mds: add second level cache backed by local SSD or NVRAM
- Idea is to have a second level to the MDS cache to improve access time and reduce reads on the metadata pool. This wo...
- 02:47 PM Bug #24284: cephfs: allow prohibiting user snapshots in CephFS
- > change default of mds_snap_max_uid to 0
Use-cases such as Manila let the users mount with root so this will be i...
- 02:19 AM Bug #24284: cephfs: allow prohibiting user snapshots in CephFS
- maybe we can use 'auth string'
- 10:42 AM Bug #24403: mon failed to return metadata for mds
- I first updated telegeo02, with no different result (the mds on telegeo02 was standby, as it was the last one rebooted).
The...
- 09:14 AM Feature #22446: mds: ask idle client to trim more caps
- Can I get a few implementation-specific details to get started working on this issue?
And for clarity on my side, we...
- 08:27 AM Bug #24400: CephFS - All MDS went offline and required repair of filesystem
- Zheng Yan wrote:
> do you have the full log (from the time the mds started replay to the mds crash)? thanks
Full MDS log starting...
- 12:50 AM Bug #23032 (Resolved): mds: underwater dentry check in CDir::_omap_fetched is racy
- 12:49 AM Backport #23157 (Resolved): luminous: mds: underwater dentry check in CDir::_omap_fetched is racy
- 12:49 AM Backport #22696 (Resolved): luminous: client: dirty caps may never get the chance to flush
06/04/2018
- 10:57 PM Bug #24400: CephFS - All MDS went offline and required repair of filesystem
- Do you have the full log (from the time the mds started replay to the mds crash)? Thanks
- 02:06 PM Bug #24400: CephFS - All MDS went offline and required repair of filesystem
- Zheng Yan wrote:
> do you have the mds log just before the crash?
Excellent timing - we've just finished trawling thro...
- 01:55 PM Bug #24400: CephFS - All MDS went offline and required repair of filesystem
- Do you have the mds log just before the crash?
- 08:02 AM Bug #24400: CephFS - All MDS went offline and required repair of filesystem
- Forgot to say - one of the logs was taken with debug enabled (thus the size). Can provide whole log if needed
- 07:45 AM Bug #24400 (Can't reproduce): CephFS - All MDS went offline and required repair of filesystem
- Hi,
Raising this in case we can get some more insight and/or it helps others.
We have a 12.2.5 cluster provising...
- 09:14 PM Bug #24241 (New): NFS-Ganesha libcephfs: Assert failure in object_locator_to_pg
- 06:15 PM Feature #24263: client/mds: create a merkle tree of objects to allow efficient generation of diff...
- Sage Weil wrote:
> A few questions:
>
> - What is the sha1 of? The object's content? That isn't necessarily kno...
- 05:59 PM Feature #24263: client/mds: create a merkle tree of objects to allow efficient generation of diff...
- John Spray wrote:
> Patrick Donnelly wrote:
> > John Spray wrote:
> > > I'm a fan. Questions that spring to mind:...
- 02:20 PM Bug #24403: mon failed to return metadata for mds
- The "sen2agriprod" server actually runs on centOS7 (kernel 3.10.0) which is in the recommended platforms.
If you t... - 01:30 PM Bug #24403: mon failed to return metadata for mds
- please try newer kernel
- 10:04 AM Bug #24403 (Resolved): mon failed to return metadata for mds
- Hello,
Re-raising an error found on the ceph-users mailing list: http://lists.ceph.com/pipermail/ceph-users-ceph....
- 01:41 PM Bug #24306 (In Progress): mds: use intrusive_ptr to manage Message life-time
- 09:34 AM Bug #24172 (Resolved): client: fails to respond cap revoke from non-auth mds
- 05:39 AM Bug #23214 (Resolved): doc: Fix -d option in ceph-fuse doc
- 05:36 AM Bug #23248 (Resolved): ceph-fuse: trim ceph-fuse -V output
- 01:24 AM Backport #23704 (Resolved): luminous: ceph-fuse: broken directory permission checking
- 01:24 AM Backport #23770 (Resolved): luminous: ceph-fuse: return proper exit code
- 01:22 AM Backport #23818 (Resolved): luminous: client: add option descriptions and review levels (e.g. LEV...
- 01:22 AM Backport #23475 (Resolved): luminous: ceph-fuse: trim ceph-fuse -V output
- 01:21 AM Backport #23835 (Resolved): luminous: mds: fix occasional dir rstat inconsistency between multi-M...
- 01:21 AM Backport #23638 (Resolved): luminous: ceph-fuse: getgroups failure causes exception
- 01:20 AM Backport #23933 (Resolved): luminous: client: avoid second lock on client_lock
- 01:17 AM Backport #23931 (Resolved): luminous: qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < ...
- 01:16 AM Backport #23936 (Resolved): luminous: cephfs-journal-tool: segfault during journal reset
- 01:16 AM Backport #23950 (Resolved): luminous: mds: stopping rank 0 cannot shutdown until log is trimmed
- 01:15 AM Backport #23951 (Resolved): luminous: mds: stuck during up:stopping
- 01:15 AM Backport #23984 (Resolved): luminous: mds: scrub on fresh file system fails
- 01:14 AM Backport #23935 (Resolved): luminous: mds: may send LOCK_SYNC_MIX message to starting MDS
- 01:13 AM Backport #23991 (Resolved): luminous: client: hangs on umount if it had an MDS session evicted
- 01:13 AM Backport #24050 (Resolved): luminous: mds: MClientCaps should carry inode's dirstat
- 01:12 AM Backport #24049 (Resolved): luminous: ceph-fuse: missing dentries in readdir result
- 01:12 AM Backport #23946 (Resolved): luminous: mds: crash when failover
- 01:10 AM Backport #24107 (Resolved): luminous: PurgeQueue::_consume() could return true when there were no...
- 01:09 AM Backport #24108 (Resolved): luminous: MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
- 01:03 AM Backport #24130 (Resolved): luminous: mds: race with new session from connection and imported ses...
- 01:02 AM Backport #24188 (Resolved): luminous: kceph: umount on evicted client blocks forever
- 01:01 AM Backport #24201 (Resolved): luminous: client: fails to respond cap revoke from non-auth mds
- 01:00 AM Backport #24207 (Resolved): luminous: client: deleted inode's Bufferhead which was in STATE::Tx w...
- 12:59 AM Bug #24289 (Resolved): mds memory leak
- 12:57 AM Backport #23982 (Resolved): luminous: qa: TestVolumeClient.test_lifecycle needs updated for new e...
- 12:55 AM Backport #24205 (Resolved): luminous: mds: broadcast quota to relevant clients when quota is expl...
- 12:53 AM Backport #24189 (Resolved): luminous: qa: kernel_mount.py umount must handle timeout arg
- 12:52 AM Backport #24341 (Resolved): luminous: mds memory leak