Activity
From 04/17/2019 to 05/16/2019
05/16/2019
- 11:16 AM Bug #39951: mount: key parsing fail when doing a remount
- Here's the link to a PR:
https://github.com/ceph/ceph/pull/28148
- 11:06 AM Bug #39951 (Resolved): mount: key parsing fail when doing a remount
- When doing a CephFS remount (-o remount) the secret is parsed from procfs and we get '<hidden>' as a result and the m...
- 05:30 AM Bug #39949 (Fix Under Review): test: extend mgr/volume test to cover new interfaces
- 04:57 AM Bug #39949 (Resolved): test: extend mgr/volume test to cover new interfaces
- Extend `qa/workunits/fs/test-volumes.sh` tests to cover the newly introduced subvolume/subvolumegroup interfaces.
05/15/2019
- 10:49 PM Bug #39947 (Resolved): cephfs-shell: add CI testing with flake8
- See discussion here: https://github.com/ceph/ceph/pull/28080#issuecomment-492387844
- 10:46 PM Bug #39507 (Pending Backport): cephfs-shell: mkdir error for relative path
- 05:05 PM Bug #39943 (Fix Under Review): client: ceph.dir.rctime xattr value incorrectly prefixes "09" to t...
- 01:46 PM Bug #39943 (Resolved): client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the nanos...
- This bug was found while investigating https://tracker.ceph.com/issues/39705 .
The following kernel logic is used ...
- 01:15 PM Bug #39705: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.400554 vs 2019-05-09...
- This bug is due to incorrect placement of the pad/width specifier in:
11809 size_t Client::_vxattrcb_snap_btime(In...
- 10:37 AM Backport #39937 (Resolved): nautilus: cephfs-shell: add a "stat" command
- https://github.com/ceph/ceph/pull/28681
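The misplaced pad/width specifier behind #39943 and #39705 above can be illustrated with printf-style formatting. This is a minimal sketch, not the actual Client::_vxattrcb_snap_btime C++ code; the timestamp values are picked to mirror the 2019-05-09 23:33:09.400554 report:

```python
# Illustrative sketch of the "09"-prefix bug (not the actual Ceph code).
sec, nsec = 1557444789, 400554

# Buggy: the "09" sits outside the conversion specifier, so it is printed
# literally, and the nanoseconds come out unpadded.
bad = "%d.09%d" % (sec, nsec)

# Fixed: the zero-pad width belongs inside the conversion specifier.
good = "%d.%09d" % (sec, nsec)

print(bad)   # 1557444789.09400554
print(good)  # 1557444789.000400554
```

The buggy form makes every timestamp appear to have nanoseconds starting with "09", which is exactly the kind of comparison failure the qa test tripped on.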
- 10:36 AM Backport #39936 (Resolved): nautilus: cephfs-shell: add commands to manipulate quotas
- https://github.com/ceph/ceph/pull/28681
- 10:35 AM Backport #39935 (Resolved): nautilus: cephfs-shell: teuthology tests
- https://github.com/ceph/ceph/pull/28614
- 10:35 AM Backport #39934 (Resolved): nautilus: mgr/volumes: add CephFS subvolumes library
- 09:14 AM Bug #39395: ceph: ceph fs auth fails
- This issue is fixed in the latest version. On luminous, I get the same error.
05/14/2019
- 08:05 PM Feature #39610 (Pending Backport): mgr/volumes: add CephFS subvolumes library
- 07:53 PM Bug #39165 (Pending Backport): cephfs-shell: add commands to manipulate quotas
- 07:51 PM Feature #38829 (Pending Backport): cephfs-shell: add a "stat" command
- 07:50 PM Bug #39526 (Pending Backport): cephfs-shell: teuthology tests
- 07:44 PM Bug #39438: workunit fails with EPERM during thrashing
- /ceph/teuthology-archive/pdonnell-2019-05-11_00:01:05-multimds-wip-pdonnell-testing-20190510.182613-distro-basic-smit...
- 07:43 PM Bug #39752 (New): qa: dual workunit on client but one fails to compile
- ...
- 06:29 PM Bug #39704 (Won't Fix): When running multiple filesystems, directories do not fragment
- 03:20 PM Bug #39704: When running multiple filesystems, directories do not fragment
- Zheng Yan wrote:
> the log show you were creating files in root directory. mds never fragment root directory.
I s...
- 08:26 AM Bug #39704: When running multiple filesystems, directories do not fragment
- the log show you were creating files in root directory. mds never fragment root directory.
- 06:28 PM Bug #39722 (Duplicate): pybind: ceph_volume_client py3 error
- 01:14 PM Bug #39722: pybind: ceph_volume_client py3 error
- If I am looking at this correctly, you have reported this before - http://tracker.ceph.com/issues/39406#note-2.
Fi...
- 03:35 PM Bug #39750 (Resolved): mgr/volumes: cannot create subvolumes with py3 libraries
- Built ceph, master branch with python 3 enabled,...
- 02:50 AM Backport #39233 (In Progress): mimic: kclient: nofail option not supported
- https://github.com/ceph/ceph/pull/28090
05/13/2019
- 11:01 PM Bug #39722: pybind: ceph_volume_client py3 error
- Rishabh, please investigate.
- 11:01 PM Bug #39722 (Duplicate): pybind: ceph_volume_client py3 error
- ...
- 10:10 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Jeff Layton wrote:
> We may not need this after all. The kernel client at least doesn't care a lot about the inode n...
- 07:01 PM Bug #39704: When running multiple filesystems, directories do not fragment
- Patrick Donnelly wrote:
> This is with Nautilus v14.2.1? Can you bump up debugging on the MDS during the event and s...
- 01:55 PM Bug #39704 (Need More Info): When running multiple filesystems, directories do not fragment
- This is with Nautilus v14.2.1? Can you bump up debugging on the MDS during the event and share the log?
- 02:59 PM Bug #39705: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.400554 vs 2019-05-09...
- ...
- 12:34 PM Bug #39705: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.400554 vs 2019-05-09...
- Patrick Donnelly wrote:
> [...]
>
> From: /ceph/teuthology-archive/nojha-2019-05-09_22:58:42-fs:basic_workload-wi...
- 01:50 PM Bug #39511 (Need More Info): Cannot remove CephFS snapshot with leading underscore (_)
- This looks like you're deleting a snapshot name in a child directory which was not the original directory where the s...
- 01:47 PM Bug #39510 (Fix Under Review): test_volume_client: test_put_object_versioned is unreliable
- 01:44 PM Bug #39395: ceph: ceph fs auth fails
- src/mon/AuthMonitor.cc src/mds/MDSMonitor.cc
- 01:42 PM Bug #39329 (Won't Fix): ceph fs set <xxxxx> min_compat_client <yyyy> DOES NOT warn about connecte...
- 12:30 PM Cleanup #39717 (Resolved): cephfs-shell: Fix flake8 warnings and errors
- Flake8 generates the following warnings and errors:
* E722 do not use bare 'except'
* E303 too many blank lines
* W605 ...
- 10:07 AM Bug #38520 (Resolved): qa: fsstress with valgrind may timeout
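Typical fixes for the flake8 codes listed under #39717 look like the following. This is an illustrative sketch, not the actual cephfs-shell patch; the helper names are made up:

```python
import re

# W605 (invalid escape sequence): use a raw string for regex patterns.
pattern = re.compile(r"\d+")      # not "\d+"


# E722 (bare 'except'): catch the specific exception you expect instead.
def to_int(s):
    try:
        return int(s)
    except ValueError:            # not: except:
        return 0


print(pattern.findall("mds.0 rank 1"), to_int("42"), to_int("x"))  # ['0', '1'] 42 0
```

E303 (too many blank lines) is purely a layout issue: keep at most two blank lines between top-level definitions, as above.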
- 10:07 AM Backport #38540 (Resolved): mimic: qa: fsstress with valgrind may timeout
- 10:06 AM Backport #39469 (Resolved): mimic: There is no punctuation mark or blank between tid and client_...
- 10:06 AM Backport #38736 (Resolved): mimic: qa: "[WRN] Health check failed: 1/3 mons down, quorum b,c (MON...
- 10:05 AM Backport #39193 (Resolved): mimic: mds: crash during mds restart
- 10:04 AM Backport #39200 (Resolved): mimic: mds: we encountered "No space left on device" when moving huge...
- 09:40 AM Bug #39715 (Resolved): client: optimize rename operation under different quota root
- We had many source directories with more than ten million files. It took a very long time to move one such directory t...
05/11/2019
- 04:21 PM Backport #38540: mimic: qa: fsstress with valgrind may timeout
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27432
merged
- 04:20 PM Backport #39469: mimic: There is no punctuation mark or blank between tid and client_id in the o...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27847
merged
- 04:20 PM Backport #38736: mimic: qa: "[WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN) in c...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27906
merged
- 04:19 PM Backport #39193: mimic: mds: crash during mds restart
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27916
merged
- 04:19 PM Backport #39200: mimic: mds: we encountered "No space left on device" when moving huge number of ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27917
merged
- 01:37 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- We may not need this after all. The kernel client at least doesn't care a lot about the inode number. We can do prett...
05/10/2019
- 06:32 PM Bug #39705 (Resolved): qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.400554 vs...
- ...
- 03:21 PM Bug #39704 (Won't Fix): When running multiple filesystems, directories do not fragment
- Nautilus, Ubuntu 18.04.2, HWE kernel 4.18.0-18-generic.
I have created multiple ceph filesystems:
root@mc-3015-20...
- 10:58 AM Backport #39691 (Resolved): luminous: mds: error "No space left on device" when create a large n...
- https://github.com/ceph/ceph/pull/29829
- 10:58 AM Backport #39690 (Resolved): nautilus: mds: error "No space left on device" when create a large n...
- https://github.com/ceph/ceph/pull/28394
- 10:57 AM Backport #39689 (Resolved): mimic: mds: error "No space left on device" when create a large numb...
- https://github.com/ceph/ceph/pull/28381
- 10:57 AM Backport #39687 (Rejected): luminous: ceph-fuse: client hang because its bad session PipeConnecti...
- 10:57 AM Backport #39686 (Resolved): nautilus: ceph-fuse: client hang because its bad session PipeConnecti...
- https://github.com/ceph/ceph/pull/28375
- 10:57 AM Backport #39685 (Resolved): mimic: ceph-fuse: client hang because its bad session PipeConnection ...
- https://github.com/ceph/ceph/pull/29200
- 10:56 AM Backport #39680 (Resolved): nautilus: pybind: add the lseek() function to pybind of cephfs
- https://github.com/ceph/ceph/pull/28333
- 10:56 AM Backport #39679 (Resolved): mimic: pybind: add the lseek() function to pybind of cephfs
- https://github.com/ceph/ceph/pull/28337
- 10:56 AM Backport #39678 (Resolved): nautilus: cephfs-shell: fix string decode for ls command
- https://github.com/ceph/ceph/pull/28681
- 10:55 AM Backport #39670 (Resolved): nautilus: mds: output lock state in format dump
- https://github.com/ceph/ceph/pull/28233
- 10:55 AM Backport #39669 (Resolved): mimic: mds: output lock state in format dump
- https://github.com/ceph/ceph/pull/28274
05/09/2019
- 07:32 PM Bug #39645 (Pending Backport): mds: output lock state in format dump
- 09:06 AM Bug #39645: mds: output lock state in format dump
- https://github.com/ceph/ceph/pull/27717
- 09:06 AM Bug #39645 (Resolved): mds: output lock state in format dump
- Dumping the cache in plain text prints lock state, but the JSON format dump does not. It is not convenient to debug some...
- 01:38 PM Bug #39651 (In Progress): qa: test_kill_mdstable fails unexpectedly
- I get the following traceback while running test_kill_mdstable: https://github.com/ceph/ceph/blob/master/qa/tasks/cep...
- 12:16 PM Feature #38951: client: implement asynchronous unlink/create
- Found it. The problem is actually in ceph_mdsc_build_path. When passed a positive dentry, that function will return a...
- 08:05 AM Bug #39641 (Fix Under Review): cephfs-shell: 'du' command produces incorrect results
- 08:01 AM Bug #39641 (Resolved): cephfs-shell: 'du' command produces incorrect results
- Errors observed in the following cases:
# No error message printed for invalid directories.
# When directory name is gre...
05/08/2019
- 10:57 PM Feature #39403 (Pending Backport): pybind: add the lseek() function to pybind of cephfs
- 09:51 PM Bug #39305 (Pending Backport): ceph-fuse: client hang because its bad session PipeConnection to mds
- 09:41 PM Bug #39166 (Pending Backport): mds: error "No space left on device" when create a large number o...
- 06:17 PM Bug #39634 (Fix Under Review): qa: test_full_same_file timeout
- ...
- 04:39 PM Feature #38951: client: implement asynchronous unlink/create
- Jeff Layton wrote:
> Doing more testing today with my patchset. I doctored up a version of Zheng's MDS locking rewor... - 03:22 PM Feature #38951: client: implement asynchronous unlink/create
- Doing more testing today with my patchset. I doctored up a version of Zheng's MDS locking rework branch with some pat...
- 04:06 PM Bug #39404 (Pending Backport): cephfs-shell: fix string decode for ls command
- 10:09 AM Bug #39617 (Duplicate): cephfs-shell dumps backtrace on "ls"
- Hi Patrick,
Please merge this PR https://github.com/ceph/ceph/pull/27716. It resolves the issue.
05/07/2019
- 06:40 PM Documentation #39620 (Resolved): doc: MDS and metadata pool hardware requirements/recommendations
- We get asked this all the time.
- 04:41 PM Bug #39617: cephfs-shell dumps backtrace on "ls"
- This is on F30, fwiw. I backed out this patch, and it seems to fix the issue:...
- 04:30 PM Bug #39617 (Duplicate): cephfs-shell dumps backtrace on "ls"
- Built ceph based on today's master branch (2d410b5a2e428232dc7d6f3abc006da5e9128e77), using this cmake command:
<p...
- 12:01 PM Feature #39610 (Resolved): mgr/volumes: add CephFS subvolumes library
- The FS subvolumes library module will borrow heavily from ceph_volume_client. It'll be used to provision FS subv...
05/06/2019
- 12:05 PM Fix #38801 (Fix Under Review): qa: ignore "ceph.dir.pin: No such attribute" for (old) kernel client
05/02/2019
- 08:35 AM Backport #38877: luminous: mds: high debug logging with many subtrees is slow
- PR - https://github.com/ceph/ceph/pull/27679
- 06:24 AM Backport #39200 (In Progress): mimic: mds: we encountered "No space left on device" when moving h...
- https://github.com/ceph/ceph/pull/27917
- 04:02 AM Backport #39193 (In Progress): mimic: mds: crash during mds restart
- https://github.com/ceph/ceph/pull/27916
05/01/2019
- 08:26 PM Bug #39437 (Resolved): osd: PriorityCache.cc: 265: FAILED ceph_assert(mem_avail >= 0)
- 06:30 PM Bug #39438: workunit fails with EPERM during thrashing
- /ceph/teuthology-archive/pdonnell-2019-04-25_02:44:21-multimds-wip-pdonnell-testing-20190424.232741-distro-basic-smit...
- 09:58 AM Documentation #38729 (Resolved): doc: add LAZYIO
- 09:58 AM Backport #39051 (Resolved): nautilus: doc: add LAZYIO
- 12:58 AM Backport #39051 (In Progress): nautilus: doc: add LAZYIO
- 09:41 AM Documentation #39130 (Resolved): doc: add documentation for `fs set min_compat_client`
- 09:41 AM Backport #39176 (Resolved): nautilus: doc: add documentation for `fs set min_compat_client`
- 01:03 AM Backport #39176 (In Progress): nautilus: doc: add documentation for `fs set min_compat_client`
- 09:28 AM Bug #36384 (Resolved): src/osdc/Journaler.cc: 420: FAILED ceph_assert(!r)
- 09:28 AM Backport #38448 (Resolved): mimic: src/osdc/Journaler.cc: 420: FAILED ceph_assert(!r)
- 09:28 AM Bug #38518 (Resolved): qa: tasks.cephfs.test_misc.TestMisc.test_fs_new hangs because clients cann...
- 09:27 AM Backport #38542 (Resolved): mimic: qa: tasks.cephfs.test_misc.TestMisc.test_fs_new hangs because ...
- 09:27 AM Bug #38487 (Resolved): qa: "Loading libcephfs-jni: Failure!"
- 09:27 AM Backport #38544 (Resolved): mimic: qa: "Loading libcephfs-jni: Failure!"
- 09:26 AM Bug #38723 (Resolved): qa: tolerate longer heartbeat timeouts when using valgrind
- 09:26 AM Backport #38734 (Resolved): mimic: qa: tolerate longer heartbeat timeouts when using valgrind
- 09:26 AM Bug #38491 (Resolved): "log [WRN] : Health check failed: 1 clients failing to respond to capabili...
- 09:24 AM Backport #38670 (Resolved): mimic: "log [WRN] : Health check failed: 1 clients failing to respond...
- 09:24 AM Feature #11172 (Resolved): mds: inode filtering on 'dump cache' asok
- 09:23 AM Backport #38689 (Resolved): mimic: mds: inode filtering on 'dump cache' asok
- 01:21 AM Backport #39471 (In Progress): nautilus: Expose CephFS snapshot creation time to clients
04/30/2019
- 07:28 PM Feature #39098: mds: lock caching for asynchronous unlink
- Zheng Yan wrote:
>
> Yes, this can causes inconsistency. But it's not unique to link count. For example, one clien...
- 02:46 PM Bug #39543 (Fix Under Review): cephfs-shell: df command does not always produce correct output
- 01:22 PM Bug #39543 (Resolved): cephfs-shell: df command does not always produce correct output
- Correct output is not produced in the following cases:
1] For non-existing files, there is no error message
2] Whe...
- 02:45 PM Backport #39050 (In Progress): nautilus: ceph_volume_client: Too many arguments for "WriteOpCtx"
- 02:35 PM Backport #38876 (In Progress): nautilus: mds: high debug logging with many subtrees is slow
- 04:11 AM Backport #39222 (In Progress): nautilus: mds: behind on trimming and "[dentry] was purgeable but ...
- https://github.com/ceph/ceph/pull/27879
04/29/2019
- 08:41 PM Bug #39406 (Fix Under Review): ceph_volume_client: d_name needs to be converted to string before ...
- 08:40 PM Bug #39405 (Fix Under Review): ceph_volume_client: python program embedded in test_volume_client....
- 06:50 PM Bug #39526 (Fix Under Review): cephfs-shell: teuthology tests
- 06:44 PM Bug #39526 (Resolved): cephfs-shell: teuthology tests
- for:
* mkdir
* get
* put
- 04:43 PM Backport #38448: mimic: src/osdc/Journaler.cc: 420: FAILED ceph_assert(!r)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26643
merged
- 04:42 PM Backport #38542: mimic: qa: tasks.cephfs.test_misc.TestMisc.test_fs_new hangs because clients can...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26804
merged
- 04:42 PM Backport #38544: mimic: qa: "Loading libcephfs-jni: Failure!"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26807
merged
- 04:41 PM Backport #38734: mimic: qa: tolerate longer heartbeat timeouts when using valgrind
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26963
merged
- 04:41 PM Backport #38670: mimic: "log [WRN] : Health check failed: 1 clients failing to respond to capabil...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27023
merged
- 04:40 PM Backport #38689: mimic: mds: inode filtering on 'dump cache' asok
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27058
merged
- 10:29 AM Backport #39211 (In Progress): nautilus: MDSTableServer.cc: 83: FAILED assert(version == tid)
- 10:19 AM Backport #39214 (In Progress): nautilus: mds: there is an assertion when calling Beacon::shutdown()
- 10:11 AM Backport #39232 (In Progress): nautilus: kclient: nofail option not supported
- 09:59 AM Backport #39473 (In Progress): nautilus: mds: fail to resolve snapshot name contains '_'
- 09:55 AM Bug #39511 (Rejected): Cannot remove CephFS snapshot with leading underscore (_)
- Due to a misconfigured cronjob I ended up with CephFS snapshots with leading _
rmdir failed with error "No such file...
- 09:54 AM Backport #39468 (In Progress): luminous: There is no punctuation mark or blank between tid and c...
- 09:51 AM Backport #39469 (In Progress): mimic: There is no punctuation mark or blank between tid and clie...
- 09:48 AM Backport #39470 (In Progress): nautilus: There is no punctuation mark or blank between tid and c...
- 08:13 AM Backport #39209 (In Progress): nautilus: mds: mds_cap_revoke_eviction_timeout is not used to init...
- https://github.com/ceph/ceph/pull/27842
- 12:54 AM Backport #39208 (In Progress): luminous: mds: mds_cap_revoke_eviction_timeout is not used to init...
- https://github.com/ceph/ceph/pull/27840
04/28/2019
- 04:12 PM Bug #39510 (Resolved): test_volume_client: test_put_object_versioned is unreliable
- test_put_object_versioned in test_volume_client.py succeeds if it receives a CommandFailedError exception from the em...
04/26/2019
- 06:11 PM Bug #39507 (Fix Under Review): cephfs-shell: mkdir error for relative path
- 05:50 PM Bug #39507 (Resolved): cephfs-shell: mkdir error for relative path
- mkdir does not create the directory when a relative path is specified.
- 01:23 AM Backport #39198 (In Progress): luminous: mds: we encountered "No space left on device" when movin...
- https://github.com/ceph/ceph/pull/27801
04/25/2019
- 07:47 AM Backport #39430 (In Progress): nautilus: qa: test_sessionmap assumes simple messenger
- 07:45 AM Backport #39473 (Resolved): nautilus: mds: fail to resolve snapshot name contains '_'
- https://github.com/ceph/ceph/pull/27849
- 07:45 AM Backport #39472 (Resolved): mimic: mds: fail to resolve snapshot name contains '_'
- https://github.com/ceph/ceph/pull/28186
- 07:45 AM Backport #39471 (Resolved): nautilus: Expose CephFS snapshot creation time to clients
- https://github.com/ceph/ceph/pull/27901
- 07:44 AM Backport #39470 (Resolved): nautilus: There is no punctuation mark or blank between tid and clie...
- https://github.com/ceph/ceph/pull/27846
- 07:44 AM Backport #39469 (Resolved): mimic: There is no punctuation mark or blank between tid and client_...
- https://github.com/ceph/ceph/pull/27847
- 07:44 AM Backport #39468 (Resolved): luminous: There is no punctuation mark or blank between tid and clie...
- https://github.com/ceph/ceph/pull/27848
04/24/2019
- 07:16 PM Bug #39437 (Fix Under Review): osd: PriorityCache.cc: 265: FAILED ceph_assert(mem_avail >= 0)
- 06:34 PM Bug #39437: osd: PriorityCache.cc: 265: FAILED ceph_assert(mem_avail >= 0)
- https://github.com/ceph/ceph/pull/27763
- 03:42 PM Bug #39437 (In Progress): osd: PriorityCache.cc: 265: FAILED ceph_assert(mem_avail >= 0)
- Mark Nelson wrote:
> Patrick Donnelly wrote:
> > [...]
> >
> > /ceph/teuthology-archive/pdonnell-2019-04-17_06:0...
- 03:14 PM Bug #39437: osd: PriorityCache.cc: 265: FAILED ceph_assert(mem_avail >= 0)
- Patrick Donnelly wrote:
> [...]
>
> /ceph/teuthology-archive/pdonnell-2019-04-17_06:07:08-multimds-wip-pdonnell-t...
- 04:05 PM Feature #3244 (In Progress): qa: integrate Ganesha into teuthology testing to regularly exercise ...
- 03:00 PM Feature #39098: mds: lock caching for asynchronous unlink
- Zheng Yan wrote:
> Yes, this can causes inconsistency. But it's not unique to link count. For example, one client do...
- 07:01 AM Feature #39098: mds: lock caching for asynchronous unlink
- Jeff Layton wrote:
> Zheng Yan wrote:
> > Jeff Layton wrote:
> > > Zheng Yan wrote:
> > > > Sorry, I mean we don'...
- 02:33 PM Feature #38829 (Fix Under Review): cephfs-shell: add a "stat" command
- 08:14 AM Backport #39191 (In Progress): luminous: mds: crash during mds restart
- https://github.com/ceph/ceph/pull/27737
- 08:12 AM Backport #39199 (In Progress): nautilus: mds: we encountered "No space left on device" when movin...
- https://github.com/ceph/ceph/pull/27736
04/23/2019
- 08:22 PM Feature #39403 (Fix Under Review): pybind: add the lseek() function to pybind of cephfs
- 06:58 AM Feature #39403 (Resolved): pybind: add the lseek() function to pybind of cephfs
- cephfs pybind: add the lseek() function to pybind of cephfs
- 08:17 PM Bug #39436 (Fix Under Review): qa: upgrade task fails from mimic to master
- 05:07 PM Bug #39436 (Resolved): qa: upgrade task fails from mimic to master
- ...
- 05:53 PM Bug #39266 (Pending Backport): There is no punctuation mark or blank between tid and client_id i...
- 05:52 PM Bug #38832 (Pending Backport): mds: fail to resolve snapshot name contains '_'
- 05:51 PM Feature #38838 (Pending Backport): Expose CephFS snapshot creation time to clients
- 05:41 PM Bug #39438 (New): workunit fails with EPERM during thrashing
- ...
- 05:26 PM Bug #39437 (Resolved): osd: PriorityCache.cc: 265: FAILED ceph_assert(mem_avail >= 0)
- ...
- 05:16 PM Bug #39406: ceph_volume_client: d_name needs to be converted to string before using
- Also seen in QA:...
- 09:35 AM Bug #39406 (Resolved): ceph_volume_client: d_name needs to be converted to string before using
- The "d_name in DirEntry":https://github.com/ceph/ceph/blob/master/src/pybind/cephfs/cephfs.pyx#L723 is obtained by ce...
- 01:06 PM Backport #39430 (Resolved): nautilus: qa: test_sessionmap assumes simple messenger
- https://github.com/ceph/ceph/pull/27772
- 08:51 AM Bug #39405 (Resolved): ceph_volume_client: python program embedded in test_volume_client.py use p...
- These embedded programs must use python3 instead.
- 07:35 AM Bug #39404 (Fix Under Review): cephfs-shell: fix string decode for ls command
- 07:33 AM Bug #39404: cephfs-shell: fix string decode for ls command
- Updated with PR ID
- 07:18 AM Bug #39404 (Resolved): cephfs-shell: fix string decode for ls command
- CephFS:~/>>> ls -l
startswith first arg must be bytes or a tuple of bytes, not str
Traceback (most recent call last...
- 06:03 AM Backport #39192 (In Progress): nautilus: mds: crash during mds restart
- https://github.com/ceph/ceph/pull/27714
04/22/2019
- 10:03 PM Bug #39329: ceph fs set <xxxxx> min_compat_client <yyyy> DOES NOT warn about connected incompatib...
- I would add the flag to both these commands.
- 09:43 PM Bug #39329: ceph fs set <xxxxx> min_compat_client <yyyy> DOES NOT warn about connected incompatib...
- Марк Коренберг wrote:
> The command ALREADY HAS this flag, but it does destructive actions without requiring it.
...
- 09:37 PM Bug #39329: ceph fs set <xxxxx> min_compat_client <yyyy> DOES NOT warn about connected incompatib...
- The command ALREADY HAS this flag, but it does destructive actions without requiring it.
- 09:15 PM Bug #39329: ceph fs set <xxxxx> min_compat_client <yyyy> DOES NOT warn about connected incompatib...
- Sorry, what's the ask here? You want there to be a `--yes-i-really-mean-it` flag?
- 06:50 PM Bug #38803 (Pending Backport): qa: test_sessionmap assumes simple messenger
- Because the fix to http://tracker.ceph.com/issues/38676 is in nautilus, this must be backported too.
- 06:09 PM Feature #39098: mds: lock caching for asynchronous unlink
- Zheng Yan wrote:
> Jeff Layton wrote:
> > Zheng Yan wrote:
> > > Sorry, I mean we don't need Lx
> >
> > I'm not...
- 12:11 PM Bug #39350 (Resolved): df command error
- 12:03 PM Feature #38829 (In Progress): cephfs-shell: add a "stat" command
04/20/2019
- 05:34 PM Bug #39395 (Resolved): ceph: ceph fs auth fails
- Since 12.2.12, the documentation-backed example invocation of @ceph fs authorize@ fails all the time:...
04/19/2019
- 03:15 PM Feature #39098: mds: lock caching for asynchronous unlink
- Jeff Layton wrote:
> Zheng Yan wrote:
> > Sorry, I mean we don't need Lx
>
> I'm not sure I understand. What if ... - 11:37 AM Feature #39098: mds: lock caching for asynchronous unlink
- Zheng Yan wrote:
> Sorry, I mean we don't need Lx
I'm not sure I understand. What if other clients have Ls on the...
- 01:19 AM Feature #39098: mds: lock caching for asynchronous unlink
- Sorry, I mean we don't need Lx
04/18/2019
- 09:15 PM Feature #39098: mds: lock caching for asynchronous unlink
- Zheng Yan wrote:
> With change in http://tracker.ceph.com/issues/39354, Fx is not needed for async unlink. (Other re...
- 07:36 AM Feature #39098: mds: lock caching for asynchronous unlink
- With change in http://tracker.ceph.com/issues/39354, Lx is not needed for async unlink. (Other request will release x...
- 11:06 AM Backport #39379 (In Progress): nautilus: cephfs-shell: python traceback with mkdir inside inexist...
- 09:20 AM Backport #39379 (Resolved): nautilus: cephfs-shell: python traceback with mkdir inside inexistant...
- https://github.com/ceph/ceph/pull/27677
- 11:05 AM Backport #39378 (In Progress): nautilus: cephfs-shell: support mkdir with non-octal mode
- 09:20 AM Backport #39378 (Resolved): nautilus: cephfs-shell: support mkdir with non-octal mode
- https://github.com/ceph/ceph/pull/27677
- 11:05 AM Backport #39377 (In Progress): nautilus: cephfs-shell: python traceback with mkdir when reattempt...
- 09:20 AM Backport #39377 (Resolved): nautilus: cephfs-shell: python traceback with mkdir when reattempt of...
- https://github.com/ceph/ceph/pull/27677
- 11:05 AM Backport #39376 (In Progress): nautilus: cephfs-shell: mkdir creates directory with invalid octal...
- 09:20 AM Backport #39376 (Resolved): nautilus: cephfs-shell: mkdir creates directory with invalid octal mode
- https://github.com/ceph/ceph/pull/27677
- 11:03 AM Backport #39197 (In Progress): nautilus: cephfs-shell: ls command produces error: no "colorize" a...
- 08:24 AM Bug #39349 (Closed): mds: cap revokes leak
- 08:03 AM Bug #39349: mds: cap revokes leak
- Zheng Yan wrote:
> already fixed by https://github.com/ceph/ceph/pull/26713
yes, this is already fixed, please cl...
- 07:14 AM Bug #39349: mds: cap revokes leak
- already fixed by https://github.com/ceph/ceph/pull/26713
- 05:35 AM Backport #38877 (In Progress): luminous: mds: high debug logging with many subtrees is slow
- 12:08 AM Feature #38740 (Pending Backport): cephfs-shell: support mkdir with non-octal mode
- 12:08 AM Bug #38739 (Pending Backport): cephfs-shell: python traceback with mkdir inside inexistant directory
- 12:08 AM Bug #38741 (Pending Backport): cephfs-shell: python traceback with mkdir when reattempt of mkdir
- 12:08 AM Bug #38743 (Pending Backport): cephfs-shell: mkdir creates directory with invalid octal mode
04/17/2019
- 08:59 PM Feature #39354: mds: derive wrlock from excl caps
- https://github.com/ceph/ceph/pull/27648
- 01:07 PM Feature #39354 (In Progress): mds: derive wrlock from excl caps
- 12:49 PM Feature #39354 (Closed): mds: derive wrlock from excl caps
- preparation for buffered create/unlink
- 08:42 PM Bug #39349 (Fix Under Review): mds: cap revokes leak
- 07:50 AM Bug #39349: mds: cap revokes leak
- The following possibilities exist:
1. some req make mds do simple_xlock on a filelock that's in LOCK_xlock...
- 07:25 AM Bug #39349 (Closed): mds: cap revokes leak
- Recently, one of our clusters, after updating to 12.2.11, occasionally reports that "XXX clients failing to respond t...
- 09:31 AM Bug #11314 (In Progress): qa: MDS crashed and the runs hung without ever timing out
- 09:31 AM Feature #5520 (New): osdc: should handle namespaces
- 08:01 AM Bug #39350 (Resolved): df command error
- Commit 417836d causes df to produce incorrect output....
- 06:01 AM Bug #39078 (Resolved): fs: we lack a feature bit for nautilus
- 06:01 AM Backport #39187 (Resolved): nautilus: fs: we lack a feature bit for nautilus