Activity

From 05/05/2019 to 06/03/2019

06/03/2019

10:29 PM Bug #36370: add information about active scrubs to "ceph -s" (and elsewhere)
Venky Shankar wrote:
> Patrick Donnelly wrote:
> > Venky, status on this ticket?
>
> For this ticket: scrub stat...
Patrick Donnelly
09:34 PM Bug #40116 (Resolved): nautilus: qa: cannot schedule kcephfs/multimds
Yuri Weinstein
09:10 PM Bug #40116: nautilus: qa: cannot schedule kcephfs/multimds
merged https://github.com/ceph/ceph/pull/28369 Yuri Weinstein
09:01 PM Bug #40116 (Fix Under Review): nautilus: qa: cannot schedule kcephfs/multimds
Patrick Donnelly
08:46 PM Bug #40116 (Resolved): nautilus: qa: cannot schedule kcephfs/multimds
... Patrick Donnelly
06:44 PM Bug #40034: mds: stuck in clientreplay
Nathan Fish wrote:
> Patrick Donnelly wrote:
> > None of us see why the MDS was stuck in clientreplay. How long do ...
Patrick Donnelly
05:24 PM Bug #40034: mds: stuck in clientreplay
Patrick Donnelly wrote:
> None of us see why the MDS was stuck in clientreplay. How long do you think it was in that...
Nathan Fish
05:23 PM Bug #40034 (Need More Info): mds: stuck in clientreplay
Patrick Donnelly
05:04 PM Bug #40034: mds: stuck in clientreplay
None of us see why the MDS was stuck in clientreplay. How long do you think it was in that state? Patrick Donnelly
06:40 PM Bug #40085 (Fix Under Review): FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_di...
Patrick Donnelly
05:44 PM Bug #40085: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
This set should fix it:
https://github.com/ceph/ceph/pull/28324
Jeff Layton
06:28 PM Documentation #24641 (Pending Backport): Document behaviour of fsync-after-close
Patrick Donnelly
06:27 PM Documentation #24641 (Fix Under Review): Document behaviour of fsync-after-close
Patrick Donnelly
02:45 PM Bug #39750: mgr/volumes: cannot create subvolumes with py3 libraries
I can't reproduce the issue. I get the following warning when I run do_cmake.sh as shown in the report -
CMake...
Rishabh Dave

06/02/2019

02:28 AM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
It's a longstanding bug, fixed by "ceph: use ceph_evict_inode to cleanup inode's resource" in https://github.com/ceph/c... Zheng Yan

06/01/2019

09:53 AM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
It's a kernel BUG at fs/ceph/mds_client.c:1500!
BUG_ON(session->s_nr_caps > 0);
No idea how it can happen.
Zheng Yan

05/31/2019

10:28 PM Feature #24463: kclient: add btime support
I've spent the last couple of days working on this. The btime piece happens to be pretty simple, but it shares a feat... Jeff Layton
06:14 PM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
Another: /ceph/teuthology-archive/yuriw-2019-05-30_20:50:30-kcephfs-mimic_v13.2.6_QE-testing-basic-smithi/3989013/teu... Patrick Donnelly
06:12 PM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
Another: /ceph/teuthology-archive/yuriw-2019-05-30_20:50:30-kcephfs-mimic_v13.2.6_QE-testing-basic-smithi/3989039/teu... Patrick Donnelly
06:11 PM Bug #40102 (Resolved): qa: probable kernel deadlock/oops during umount on testing branch
... Patrick Donnelly
04:51 PM Bug #40101 (Resolved): libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when operating on .sn...
When I make an nfs-ganesha export of a cephfs using FSAL_CEPH, the NFS client receives ESTALE when attempting to stat... Nathan Fish
09:27 AM Bug #36370: add information about active scrubs to "ceph -s" (and elsewhere)
Patrick Donnelly wrote:
> Venky, status on this ticket?
For this ticket: scrub status commands have been added vi...
Venky Shankar
01:27 AM Backport #39679 (In Progress): mimic: pybind: add the lseek() function to pybind of cephfs
https://github.com/ceph/ceph/pull/28337 Prashant D

05/30/2019

11:11 PM Backport #39680 (In Progress): nautilus: pybind: add the lseek() function to pybind of cephfs
Prashant D
08:28 PM Bug #40093 (Can't reproduce): qa: client mount cannot be forcibly unmounted when all MDS are down
... Patrick Donnelly
05:07 PM Bug #36370: add information about active scrubs to "ceph -s" (and elsewhere)
Venky, status on this ticket? Patrick Donnelly
04:52 PM Feature #5486 (In Progress): kclient: make it work with selinux
[PATCH 1/2] ceph: rename struct ceph_acls_info to ceph_acl_sec_ctx
[PATCH 2/2] ceph: add selinux support
Patrick Donnelly
02:42 PM Bug #40085 (Resolved): FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
A customer reported a crash in nfs-ganesha that indicated a problem down in libcephfs:... Jeff Layton
08:23 AM Bug #40001: mds cache oversize after restart
please check if these dirfrag fetches are from the open_file_table Zheng Yan
05:16 AM Bug #40001: mds cache oversize after restart
Patrick Donnelly wrote:
> Are you using snapshots? Can you tell us more about how the cluster is being used like # o...
Yunzhi Cheng

05/29/2019

09:47 PM Bug #40001: mds cache oversize after restart
Are you using snapshots? Can you tell us more about how the cluster is being used like # of clients and versions. Patrick Donnelly
06:49 PM Documentation #24641: Document behaviour of fsync-after-close
Proposed documentation update here:
https://github.com/ceph/ceph/pull/28300
Niklas, please take a look and let ...
Jeff Layton
06:21 PM Bug #40034: mds: stuck in clientreplay
Here's ganesha.log, not sure if there's anything useful:
https://termbin.com/7ni9
Is it really intended for an md...
Nathan Fish
06:15 PM Bug #40034: mds: stuck in clientreplay
Logs from nfs-ganesha would be helpful too if you have them. Patrick Donnelly
01:53 PM Bug #39987 (Fix Under Review): mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
Zheng Yan
12:09 PM Bug #40061 (Fix Under Review): mds: blacklisted clients eviction is broken
https://github.com/ceph/ceph/pull/28293 Zheng Yan
12:01 PM Bug #40061 (Resolved): mds: blacklisted clients eviction is broken
Zheng Yan
02:11 AM Backport #39669 (In Progress): mimic: mds: output lock state in format dump
https://github.com/ceph/ceph/pull/28274 Prashant D

05/28/2019

07:03 PM Bug #17594: cephfs: permission checking not working (MDS should enforce POSIX permissions)
Jeff Layton wrote:
> Reconfirming that I think this is a problem. Here's Client::mkdir():
>
> [...]
>
> There ...
Frank Filz
06:44 PM Bug #17594: cephfs: permission checking not working (MDS should enforce POSIX permissions)
Reconfirming that I think this is a problem. Here's Client::mkdir():... Jeff Layton
06:19 PM Documentation #24641: Document behaviour of fsync-after-close
> Second your answer sounds kcephfs specific -- do the same guarantees still hold for ceph-fuse?
FUSE just farms o...
Jeff Layton
06:16 PM Documentation #24641: Document behaviour of fsync-after-close
Niklas replied via email:
> I think it makes sense to document it the way you say it, e.g. "kcephfs's guarantees i...
Jeff Layton
03:13 PM Documentation #24641: Document behaviour of fsync-after-close
Niklas Hambuechen wrote:
> The following should be documented:
>
> Does close()/re-open()/fsync() provide the sam...
Jeff Layton
04:17 PM Bug #40002: mds: not trim log under heavy load
Zheng Yan wrote:
> multiple-active mds?
yes
Yunzhi Cheng
08:22 AM Bug #40002: mds: not trim log under heavy load
multiple-active mds? Zheng Yan
10:51 AM Backport #40042 (Resolved): mimic: avoid trimming too many log segments after mds failover
https://github.com/ceph/ceph/pull/28650 Nathan Cutler
10:51 AM Backport #40041 (Resolved): luminous: avoid trimming too many log segments after mds failover
https://github.com/ceph/ceph/pull/28543 Nathan Cutler
10:50 AM Backport #40040 (Resolved): nautilus: avoid trimming too many log segments after mds failover
https://github.com/ceph/ceph/pull/28582 Nathan Cutler
09:25 AM Feature #40036 (Fix Under Review): mgr / volumes: support asynchronous subvolume deletes
Venky Shankar
09:25 AM Feature #40036: mgr / volumes: support asynchronous subvolume deletes
see: https://github.com/ceph/ceph/blob/master/src/pybind/mgr/volumes/module.py#L393 Venky Shankar
09:00 AM Feature #40036 (Resolved): mgr / volumes: support asynchronous subvolume deletes
Currently, removing a subvolume does an in-band directory removal. This can cause the operation to run for a long time for huge su... Venky Shankar
07:59 AM Bug #40028 (Pending Backport): mds: avoid trimming too many log segments after mds failover
Zheng Yan
07:57 AM Bug #40034: mds: stuck in clientreplay
... Zheng Yan

05/27/2019

04:13 PM Bug #40034 (Need More Info): mds: stuck in clientreplay
When I came in on Monday morning, our cluster's cephfs was stuck in clientreplay, and nfs mount through nfs-ganesha h... Nathan Fish
12:12 PM Bug #40028 (Resolved): mds: avoid trimming too many log segments after mds failover
If mds was behind on trim before failover, the new mds may trim too many log segments at the same time, and cause unh... Zheng Yan

05/24/2019

01:23 PM Bug #39947 (Fix Under Review): cephfs-shell: add CI testing with flake8
Varsha Rao
02:55 AM Bug #40019 (New): mds: crash at ms_dispatch thread
Env: ceph 14.2.1, 3 mds
I enabled the ceph crash module, so I paste the meta here
meta:...
Yunzhi Cheng
12:29 AM Backport #39670 (In Progress): nautilus: mds: output lock state in format dump
Prashant D

05/23/2019

04:00 PM Bug #40001: mds cache oversize after restart
I set debug_mds to 20/20 and almost all of the log is like... Yunzhi Cheng
10:11 AM Bug #40014 (Resolved): mgr/volumes: Name 'sub_name' is not defined
I'm getting a new mypy error in master:... Sebastian Wagner

05/22/2019

03:55 PM Bug #40002 (Fix Under Review): mds: not trim log under heavy load
ceph version 14.2.1
we have 3 mds under heavy load (creating 8k files per second)
we find the mds log grows very fast...
Yunzhi Cheng
03:46 PM Bug #40001 (Rejected): mds cache oversize after restart
ceph version 14.2.1
we have 3 mds under heavy load (creating 8k files per second)
all 3 mds are under 30G mem...
Yunzhi Cheng
12:52 PM Cleanup #4744 (New): mds: pass around LogSegments via std::shared_ptr
Patrick Donnelly
12:19 PM Feature #38153 (New): client: proactively release caps it is not using
Patrick Donnelly
12:05 PM Feature #358 (Rejected): mds: efficient revert to snapshot
There's no RADOS support for reverting to an older snapshot so I don't see this getting fixed in any near-future time... Patrick Donnelly
12:01 PM Feature #15066: multifs: Allow filesystems to be assigned RADOS namespace as well as pool for met...
Needs the ability to delete a RADOS namespace. See also: https://www.spinics.net/lists/ceph-devel/msg36695.html Patrick Donnelly
11:48 AM Tasks #39998 (New): client: audit ACL
Look for race conditions involved with client checks and releasing caps. Jeff wants to help with this. Patrick Donnelly
11:27 AM Feature #17835 (Fix Under Review): mds: enable killpoint tests for MDS-MDS subtree export
Patrick Donnelly
02:35 AM Feature #39098: mds: lock caching for asynchronous unlink
Jeff Layton wrote:
> Zheng Yan wrote:
> >
> > Yes, this can causes inconsistency. But it's not unique to link cou...
Zheng Yan

05/21/2019

02:20 PM Feature #39982 (Duplicate): cephfs client periodically report cache utilisation to MDS server
Yup, this is something we're working on for Octopus. Thanks Stefan! Patrick Donnelly
12:18 PM Bug #39947 (In Progress): cephfs-shell: add CI testing with flake8
Varsha Rao
09:58 AM Bug #39987 (Resolved): mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps

A user reported a bug where the mds couldn't finish freezing a dirfrag. The cache dump includes the following entries....
Zheng Yan
03:02 AM Backport #39472 (In Progress): mimic: mds: fail to resolve snapshot name contains '_'
https://github.com/ceph/ceph/pull/28186 Prashant D

05/20/2019

05:54 PM Feature #39982 (Duplicate): cephfs client periodically report cache utilisation to MDS server
After seeing Gregory's talk "What are "caps"? (And Why Won't my Client Drop Them?)", where he explained that the MDS servers ne... Stefan Kooman
07:01 AM Feature #39969 (Fix Under Review): mgr / volume: refactor volume module
Venky Shankar
03:02 AM Feature #39969 (In Progress): mgr / volume: refactor volume module
Venky Shankar
03:02 AM Feature #39969 (Resolved): mgr / volume: refactor volume module
Now, with the addition of submodule commands (interfaces), volume commands live in the main module source while submo... Venky Shankar

05/19/2019

08:39 AM Bug #39951 (Fix Under Review): mount: key parsing fail when doing a remount
Patrick Donnelly
08:24 AM Feature #20 (Fix Under Review): client: recover from a killed session (w/ blacklist)
Patrick Donnelly

05/17/2019

01:25 PM Backport #39960 (Resolved): nautilus: cephfs-shell: mkdir error for relative path
https://github.com/ceph/ceph/pull/28616 Nathan Cutler

05/16/2019

11:16 AM Bug #39951: mount: key parsing fail when doing a remount
Here's the link to a PR:
https://github.com/ceph/ceph/pull/28148
Luis Henriques
11:06 AM Bug #39951 (Resolved): mount: key parsing fail when doing a remount
When doing a CephFS remount (-o remount) the secret is parsed from procfs and we get '<hidden>' as a result and the m... Luis Henriques
05:30 AM Bug #39949 (Fix Under Review): test: extend mgr/volume test to cover new interfaces
Venky Shankar
04:57 AM Bug #39949 (Resolved): test: extend mgr/volume test to cover new interfaces
Extend `qa/workunits/fs/test-volumes.sh` tests to cover the newly introduced subvolume/subvolumegroup interfaces. Venky Shankar

05/15/2019

10:49 PM Bug #39947 (Resolved): cephfs-shell: add CI testing with flake8
See discussion here: https://github.com/ceph/ceph/pull/28080#issuecomment-492387844 Patrick Donnelly
10:46 PM Bug #39507 (Pending Backport): cephfs-shell: mkdir error for relative path
Patrick Donnelly
05:05 PM Bug #39943 (Fix Under Review): client: ceph.dir.rctime xattr value incorrectly prefixes "09" to t...
Patrick Donnelly
01:46 PM Bug #39943 (Resolved): client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the nanos...
This bug was found while investigating https://tracker.ceph.com/issues/39705 .
The following kernel logic is used ...
David Disseldorp
01:15 PM Bug #39705: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.400554 vs 2019-05-09...
This bug is due to incorrect placement of the pad/width specifier in:
11809 size_t Client::_vxattrcb_snap_btime(In...
David Disseldorp
10:37 AM Backport #39937 (Resolved): nautilus: cephfs-shell: add a "stat" command
https://github.com/ceph/ceph/pull/28681 Nathan Cutler
10:36 AM Backport #39936 (Resolved): nautilus: cephfs-shell: add commands to manipulate quotas
https://github.com/ceph/ceph/pull/28681 Nathan Cutler
10:35 AM Backport #39935 (Resolved): nautilus: cephfs-shell: teuthology tests
https://github.com/ceph/ceph/pull/28614 Nathan Cutler
10:35 AM Backport #39934 (Resolved): nautilus: mgr/volumes: add CephFS subvolumes library
Nathan Cutler
09:14 AM Bug #39395: ceph: ceph fs auth fails
This issue is fixed in the latest version. On luminous, I get the same error. Varsha Rao

05/14/2019

08:05 PM Feature #39610 (Pending Backport): mgr/volumes: add CephFS subvolumes library
Patrick Donnelly
07:53 PM Bug #39165 (Pending Backport): cephfs-shell: add commands to manipulate quotas
Patrick Donnelly
07:51 PM Feature #38829 (Pending Backport): cephfs-shell: add a "stat" command
Patrick Donnelly
07:50 PM Bug #39526 (Pending Backport): cephfs-shell: teuthology tests
Patrick Donnelly
07:44 PM Bug #39438: workunit fails with EPERM during thrashing
/ceph/teuthology-archive/pdonnell-2019-05-11_00:01:05-multimds-wip-pdonnell-testing-20190510.182613-distro-basic-smit... Patrick Donnelly
07:43 PM Bug #39752 (New): qa: dual workunit on client but one fails to compile
... Patrick Donnelly
06:29 PM Bug #39704 (Won't Fix): When running multiple filesystems, directories do not fragment
Patrick Donnelly
03:20 PM Bug #39704: When running multiple filesystems, directories do not fragment
Zheng Yan wrote:
> The log shows you were creating files in the root directory. The mds never fragments the root directory.
I s...
Nathan Fish
08:26 AM Bug #39704: When running multiple filesystems, directories do not fragment
The log shows you were creating files in the root directory. The mds never fragments the root directory. Zheng Yan
06:28 PM Bug #39722 (Duplicate): pybind: ceph_volume_client py3 error
Patrick Donnelly
01:14 PM Bug #39722: pybind: ceph_volume_client py3 error
If I am looking at this correctly, you have reported this before - http://tracker.ceph.com/issues/39406#note-2.
Fi...
Rishabh Dave
03:35 PM Bug #39750 (Resolved): mgr/volumes: cannot create subvolumes with py3 libraries
Built ceph, master branch with python 3 enabled,... Ramana Raja
02:50 AM Backport #39233 (In Progress): mimic: kclient: nofail option not supported
https://github.com/ceph/ceph/pull/28090 Prashant D

05/13/2019

11:01 PM Bug #39722: pybind: ceph_volume_client py3 error
Rishabh, please investigate. Patrick Donnelly
11:01 PM Bug #39722 (Duplicate): pybind: ceph_volume_client py3 error
... Patrick Donnelly
10:10 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
Jeff Layton wrote:
> We may not need this after all. The kernel client at least doesn't care a lot about the inode n...
Patrick Donnelly
07:01 PM Bug #39704: When running multiple filesystems, directories do not fragment
Patrick Donnelly wrote:
> This is with Nautilus v14.2.1? Can you bump up debugging on the MDS during the event and s...
Nathan Fish
01:55 PM Bug #39704 (Need More Info): When running multiple filesystems, directories do not fragment
This is with Nautilus v14.2.1? Can you bump up debugging on the MDS during the event and share the log? Patrick Donnelly
02:59 PM Bug #39705: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.400554 vs 2019-05-09...
... Zheng Yan
12:34 PM Bug #39705: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.400554 vs 2019-05-09...
Patrick Donnelly wrote:
> [...]
>
> From: /ceph/teuthology-archive/nojha-2019-05-09_22:58:42-fs:basic_workload-wi...
David Disseldorp
01:50 PM Bug #39511 (Need More Info): Cannot remove CephFS snapshot with leading underscore (_)
This looks like you're deleting a snapshot name in a child directory which was not the original directory where the s... Patrick Donnelly
01:47 PM Bug #39510 (Fix Under Review): test_volume_client: test_put_object_versioned is unreliable
Patrick Donnelly
01:44 PM Bug #39395: ceph: ceph fs auth fails
src/mon/AuthMonitor.cc src/mds/MDSMonitor.cc Patrick Donnelly
01:42 PM Bug #39329 (Won't Fix): ceph fs set <xxxxx> min_compat_client <yyyy> DOES NOT warn about connecte...
Patrick Donnelly
12:30 PM Cleanup #39717 (Resolved): cephfs-shell: Fix flake8 warnings and errors
Flake8 generates following warning and errors:
* E722 do not use bare 'except'
* E303 too many blank lines
* W605 ...
Varsha Rao
10:07 AM Bug #38520 (Resolved): qa: fsstress with valgrind may timeout
Nathan Cutler
10:07 AM Backport #38540 (Resolved): mimic: qa: fsstress with valgrind may timeout
Nathan Cutler
10:06 AM Backport #39469 (Resolved): mimic: There is no punctuation mark or blank between tid and client_...
Nathan Cutler
10:06 AM Backport #38736 (Resolved): mimic: qa: "[WRN] Health check failed: 1/3 mons down, quorum b,c (MON...
Nathan Cutler
10:05 AM Backport #39193 (Resolved): mimic: mds: crash during mds restart
Nathan Cutler
10:04 AM Backport #39200 (Resolved): mimic: mds: we encountered "No space left on device" when moving huge...
Nathan Cutler
09:40 AM Bug #39715 (Resolved): client: optimize rename operation under different quota root
We had many source directories with more than ten million files. It took a very long time to move one such directory t... Zhi Zhang

05/11/2019

04:21 PM Backport #38540: mimic: qa: fsstress with valgrind may timeout
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27432
merged
Yuri Weinstein
04:20 PM Backport #39469: mimic: There is no punctuation mark or blank between tid and client_id in the o...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27847
merged
Yuri Weinstein
04:20 PM Backport #38736: mimic: qa: "[WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN) in c...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27906
merged
Yuri Weinstein
04:19 PM Backport #39193: mimic: mds: crash during mds restart
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27916
merged
Yuri Weinstein
04:19 PM Backport #39200: mimic: mds: we encountered "No space left on device" when moving huge number of ...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27917
merged
Yuri Weinstein
01:37 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
We may not need this after all. The kernel client at least doesn't care a lot about the inode number. We can do prett... Jeff Layton

05/10/2019

06:32 PM Bug #39705 (Resolved): qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.400554 vs...
... Patrick Donnelly
03:21 PM Bug #39704 (Won't Fix): When running multiple filesystems, directories do not fragment
Nautilus, Ubuntu 18.04.2, HWE kernel 4.18.0-18-generic.
I have created multiple ceph filesystems:
root@mc-3015-20...
Nathan Fish
10:58 AM Backport #39691 (Resolved): luminous: mds: error "No space left on device" when create a large n...
https://github.com/ceph/ceph/pull/29829 Nathan Cutler
10:58 AM Backport #39690 (Resolved): nautilus: mds: error "No space left on device" when create a large n...
https://github.com/ceph/ceph/pull/28394 Nathan Cutler
10:57 AM Backport #39689 (Resolved): mimic: mds: error "No space left on device" when create a large numb...
https://github.com/ceph/ceph/pull/28381 Nathan Cutler
10:57 AM Backport #39687 (Rejected): luminous: ceph-fuse: client hang because its bad session PipeConnecti...
Nathan Cutler
10:57 AM Backport #39686 (Resolved): nautilus: ceph-fuse: client hang because its bad session PipeConnecti...
https://github.com/ceph/ceph/pull/28375 Nathan Cutler
10:57 AM Backport #39685 (Resolved): mimic: ceph-fuse: client hang because its bad session PipeConnection ...
https://github.com/ceph/ceph/pull/29200 Nathan Cutler
10:56 AM Backport #39680 (Resolved): nautilus: pybind: add the lseek() function to pybind of cephfs
https://github.com/ceph/ceph/pull/28333 Nathan Cutler
10:56 AM Backport #39679 (Resolved): mimic: pybind: add the lseek() function to pybind of cephfs
https://github.com/ceph/ceph/pull/28337 Nathan Cutler
10:56 AM Backport #39678 (Resolved): nautilus: cephfs-shell: fix string decode for ls command
https://github.com/ceph/ceph/pull/28681 Nathan Cutler
10:55 AM Backport #39670 (Resolved): nautilus: mds: output lock state in format dump
https://github.com/ceph/ceph/pull/28233 Nathan Cutler
10:55 AM Backport #39669 (Resolved): mimic: mds: output lock state in format dump
https://github.com/ceph/ceph/pull/28274 Nathan Cutler

05/09/2019

07:32 PM Bug #39645 (Pending Backport): mds: output lock state in format dump
Patrick Donnelly
09:06 AM Bug #39645: mds: output lock state in format dump
https://github.com/ceph/ceph/pull/27717 Zhi Zhang
09:06 AM Bug #39645 (Resolved): mds: output lock state in format dump
Dumping the cache in plain text will print the lock state, but the json format dump won't. It is not convenient to debug some... Zhi Zhang
01:38 PM Bug #39651 (In Progress): qa: test_kill_mdstable fails unexpectedly
I get the following traceback while running test_kill_mdstable: https://github.com/ceph/ceph/blob/master/qa/tasks/cep... Rishabh Dave
12:16 PM Feature #38951: client: implement asynchronous unlink/create
Found it. The problem is actually in ceph_mdsc_build_path. When passed a positive dentry, that function will return a... Jeff Layton
08:05 AM Bug #39641 (Fix Under Review): cephfs-shell: 'du' command produces incorrect results
Varsha Rao
08:01 AM Bug #39641 (Resolved): cephfs-shell: 'du' command produces incorrect results
Errors observed in the following cases:
# No error message printed for invalid directories.
# When directory name is gre...
Varsha Rao

05/08/2019

10:57 PM Feature #39403 (Pending Backport): pybind: add the lseek() function to pybind of cephfs
Patrick Donnelly
09:51 PM Bug #39305 (Pending Backport): ceph-fuse: client hang because its bad session PipeConnection to mds
Patrick Donnelly
09:41 PM Bug #39166 (Pending Backport): mds: error "No space left on device" when create a large number o...
Patrick Donnelly
06:17 PM Bug #39634 (Fix Under Review): qa: test_full_same_file timeout
... Patrick Donnelly
04:39 PM Feature #38951: client: implement asynchronous unlink/create
Jeff Layton wrote:
> Doing more testing today with my patchset. I doctored up a version of Zheng's MDS locking rewor...
Patrick Donnelly
03:22 PM Feature #38951: client: implement asynchronous unlink/create
Doing more testing today with my patchset. I doctored up a version of Zheng's MDS locking rework branch with some pat... Jeff Layton
04:06 PM Bug #39404 (Pending Backport): cephfs-shell: fix string decode for ls command
Patrick Donnelly
10:09 AM Bug #39617 (Duplicate): cephfs-shell dumps backtrace on "ls"
Hi Patrick,
Please merge this PR https://github.com/ceph/ceph/pull/27716. It resolves the issue.
Varsha Rao

05/07/2019

06:40 PM Documentation #39620 (Resolved): doc: MDS and metadata pool hardware requirements/recommendations
We get asked this all the time. Patrick Donnelly
04:41 PM Bug #39617: cephfs-shell dumps backtrace on "ls"
This is on F30, fwiw. I backed out this patch, and it seems to fix the issue:... Jeff Layton
04:30 PM Bug #39617 (Duplicate): cephfs-shell dumps backtrace on "ls"
Built ceph based on today's master branch (2d410b5a2e428232dc7d6f3abc006da5e9128e77), using this cmake command:
<p...
Jeff Layton
12:01 PM Feature #39610 (Resolved): mgr/volumes: add CephFS subvolumes library
The FS subvolumes library module will borrow heavily from ceph_volume_client. It'll be used to provision FS subv... Ramana Raja

05/06/2019

12:05 PM Fix #38801 (Fix Under Review): qa: ignore "ceph.dir.pin: No such attribute" for (old) kernel client
Sidharth Anupkrishnan
 
