Activity
From 06/04/2019 to 07/03/2019
07/03/2019
- 06:20 PM Bug #40611 (Rejected): can I upload missing rpm package from my build to: https://download.ceph....
- Please repost to ceph-users or the dev list.
- 06:08 PM Feature #40285: mds: support hierarchical layout transformations on files
- Patrick Donnelly wrote:
> The main goal of this feature is to support moving whole trees to cheaper storage hardware...
- 04:46 AM Bug #40584: kernel build failure in kernel_untar_build.sh
- Patrick Donnelly wrote:
> Perhaps related to a new distro being used with luminous builds?
Yeah, probably. This is...
- 04:15 AM Bug #40584: kernel build failure in kernel_untar_build.sh
- Perhaps related to a new distro being used with luminous builds?
- 04:14 AM Bug #40582 (Rejected): cephfs-journal-tool: Error 22 ((22) Invalid argument)
- Please seek help on the ceph-users mailing list.
07/02/2019
- 09:31 PM Feature #40633 (Resolved): mds: dump recent log events for extraordinary events
- When major events happen, like client eviction, we often want to get an idea of what went wrong, but production clusters u...
- 01:55 PM Bug #40615 (Fix Under Review): ceph-fuse: mount does not support the fallocate()
- 07:12 AM Bug #40615: ceph-fuse: mount does not support the fallocate()
- You can see that libfuse already supports the fallocate() function call in version 2.9,
see https://github.com/libfu...
- 06:36 AM Bug #40615 (Resolved): ceph-fuse: mount does not support the fallocate()
- ceph version: 14.2.1
fuse version: 2.9.2-6
err info:...
- 10:22 AM Bug #40283 (In Progress): qa: add testing for lazyio
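The ceph-fuse fallocate() entries above can be illustrated with a small, self-contained sketch. This hypothetical reproduction calls posix_fallocate() on a temporary file; it succeeds on an ordinary local filesystem, whereas on an affected ceph-fuse mount (before the fix) the same call would be expected to fail because the FUSE fallocate handler was missing. The helper name is my own, not from the ticket:

```python
import os
import tempfile

def preallocate_size(nbytes: int) -> int:
    """Preallocate nbytes in a temp file via posix_fallocate() and return st_size."""
    fd, path = tempfile.mkstemp()
    try:
        # This is the call an affected ceph-fuse mount rejected.
        os.posix_fallocate(fd, 0, nbytes)
        return os.fstat(fd).st_size
    finally:
        os.close(fd)
        os.remove(path)

print(preallocate_size(4096))  # 4096
```

Running the same preallocation against a file on the ceph-fuse mount is what surfaced the bug.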
- 09:52 AM Feature #40617 (Resolved): mgr/volumes: Add `ceph fs subvolumegroup getpath` command
- ... analogous to `ceph fs subvolume getpath`. This will return the path of the `fs subvolumegroup`.
- 04:42 AM Bug #36029: ceph-fuse assert failed when try to do file lock
- We hit the bug as well; is there any PR targeting this bug somewhere? It seems the related code _update_lock_state...
07/01/2019
- 10:35 PM Feature #17309 (Resolved): qa: mon_thrash test for CephFS
- 10:25 PM Bug #40613 (New): kclient: .handle_message_footer got old message 1 <= 648 0x558ceadeaac0 client_...
- Got this assertion with the testing kernel. We haven't seen this type of failure in a while. Last time was #18690.
... - 10:10 PM Bug #40612 (New): qa: multimds suite MDS behind on trimming
- ...
- 10:01 PM Bug #40611 (Rejected): can I upload missing rpm package from my build to: https://download.ceph....
- Hi there,
Not sure who the project manager for nfs-ganesha is; I need your help.
When I am working on NeoKylin/Ce...
- 09:57 PM Bug #38326 (Pending Backport): mds: evict stale client when one of its write caps are stolen
- 09:52 PM Bug #40305: qa: spurious unresponsive client causes eviction due to valgrind/multimds
- /ceph/teuthology-archive/pdonnell-2019-06-21_01:51:23-multimds-wip-pdonnell-testing-20190620.220400-distro-basic-smit...
- 09:16 PM Feature #40563: client: query a single cache information, for example print a single inode cache ...
- The MDS has a command to print the inode. It should be straightforward to add to the client.
- 06:24 PM Bug #37681 (Fix Under Review): qa: power off still resulted in client sending session close
- 06:18 PM Bug #37681 (In Progress): qa: power off still resulted in client sending session close
- The correct ipmitool command to simulate pulling the power plug is "power reset". "power off" will permit graceful sh...
- 06:13 PM Bug #37681: qa: power off still resulted in client sending session close
- Still happening:...
- 04:10 PM Backport #40343: luminous: mds: fix corner case of replaying open sessions
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28536
merged
- 04:10 PM Backport #40041: luminous: avoid trimming too many log segments after mds failover
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28543
merged
- 04:09 PM Backport #40221: luminous: mds: reset heartbeat during long-running loops in recovery
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/28544
merged
- 02:57 PM Bug #40608 (Duplicate): mds: assert after `delete gather` in C_Drop_Cache::recall_client_state
- While performing an mds cache drop I had an MDS assert.
Command was:...
- 03:04 AM Bug #40603 (Resolved): mds: disallow setting ceph.dir.pin value exceeding max rank id
- Currently we allow setting the ceph.dir.pin value to any number. If it is larger than the current max id, this dir will stay i...
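The export-pin check proposed in Bug #40603 can be modeled in a few lines. This is a hypothetical Python sketch of the validation (the real check lives in the C++ MDS code); it assumes -1 means "unpinned" and valid ranks run from 0 to max_mds - 1:

```python
def valid_dir_pin(pin: int, max_mds: int) -> bool:
    """Hypothetical model of the proposed ceph.dir.pin validation:
    -1 means 'unpinned'; anything above the highest rank id is rejected."""
    return -1 <= pin <= max_mds - 1

print(valid_dir_pin(0, 2))   # True: rank 0 exists
print(valid_dir_pin(5, 2))   # False: exceeds the max rank id
```

Under this model a pin of 5 on a 2-rank filesystem would be refused up front instead of leaving the directory pinned to a rank that does not exist.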
06/28/2019
- 05:13 PM Bug #40584 (New): kernel build failure in kernel_untar_build.sh
- Have been seeing `qa/workunits/kernel_untar_build.sh` failures in luminous lately. See:
http://qa-proxy.ceph.c...
- 03:02 PM Bug #40582 (Rejected): cephfs-journal-tool: Error 22 ((22) Invalid argument)
- For an unknown reason, journal export stopped working.
journal is 23438084784916~692721059
2019-06-28 17:00:02.692533...
- 06:10 AM Feature #40299 (Resolved): mgr/volumes: allow setting mode on fs subvol, subvol group
- 06:10 AM Bug #40431 (Resolved): mgr/volumes: allow setting data pool layout for fs subvolumes
- 06:10 AM Bug #40429 (Resolved): mgr/volumes: subvolume.py calls Exceptions with too few arguments.
- 06:10 AM Backport #40571 (Resolved): nautilus: mgr/volumes: allow setting mode on fs subvol, subvol group
- 06:09 AM Backport #40570 (Resolved): nautilus: mgr/volumes: allow setting data pool layout for fs subvolumes
- 06:09 AM Backport #40569 (Resolved): nautilus: mgr/volumes: subvolume.py calls Exceptions with too few arg...
- 02:38 AM Cleanup #40578 (Resolved): mds: reorganize class members in headers to follow coding guidelines
- Guide here: https://google.github.io/styleguide/cppguide.html#Declaration_Order
A past commit that has improved th...
06/27/2019
- 04:07 PM Backport #40571 (In Progress): nautilus: mgr/volumes: allow setting mode on fs subvol, subvol group
- 04:00 PM Backport #40571 (Resolved): nautilus: mgr/volumes: allow setting mode on fs subvol, subvol group
- https://github.com/ceph/ceph/pull/28767
- 03:59 PM Backport #40570 (In Progress): nautilus: mgr/volumes: allow setting data pool layout for fs subvo...
- 03:59 PM Backport #40570 (Resolved): nautilus: mgr/volumes: allow setting data pool layout for fs subvolumes
- https://github.com/ceph/ceph/pull/28767
- 03:58 PM Backport #40569 (In Progress): nautilus: mgr/volumes: subvolume.py calls Exceptions with too few ...
- 03:57 PM Backport #40569 (Resolved): nautilus: mgr/volumes: subvolume.py calls Exceptions with too few arg...
- https://github.com/ceph/ceph/pull/28767
- 02:10 PM Bug #40431 (Pending Backport): mgr/volumes: allow setting data pool layout for fs subvolumes
- 02:09 PM Feature #40299 (Pending Backport): mgr/volumes: allow setting mode on fs subvol, subvol group
- 02:09 PM Bug #40429 (Pending Backport): mgr/volumes: subvolume.py calls Exceptions with too few arguments.
- 06:25 AM Feature #40563 (Fix Under Review): client: query a single cache information, for example print a ...
- I want to query a single piece of cache information, for example print a single inode's cache information, but the client cache i...
06/26/2019
- 10:50 AM Backport #38686 (Resolved): luminous: kcephfs TestClientLimits.test_client_pin fails with "client...
- 10:49 AM Backport #38445 (Resolved): luminous: mds: drop cache does not timeout as expected
- 10:48 AM Backport #38340 (Resolved): luminous: mds: may leak gather during cache drop
- 10:48 AM Bug #37726 (Pending Backport): mds: high debug logging with many subtrees is slow
- The mimic backport is still open.
- 10:47 AM Backport #38877 (Resolved): luminous: mds: high debug logging with many subtrees is slow
- 10:47 AM Bug #39026 (Resolved): mds: crash during mds restart
- 10:46 AM Backport #39191 (Resolved): luminous: mds: crash during mds restart
- 10:46 AM Bug #38994 (Resolved): mds: we encountered "No space left on device" when moving huge number of f...
- 10:46 AM Backport #39198 (Resolved): luminous: mds: we encountered "No space left on device" when moving h...
- 10:46 AM Backport #39208 (Resolved): luminous: mds: mds_cap_revoke_eviction_timeout is not used to initial...
- 08:58 AM Bug #39266 (Resolved): There is no punctuation mark or blank between tid and client_id in the ou...
- 08:58 AM Backport #39468 (Resolved): luminous: There is no punctuation mark or blank between tid and clie...
- 08:18 AM Backport #39221 (Resolved): luminous: mds: behind on trimming and "[dentry] was purgeable but no ...
- 08:17 AM Backport #39231 (Resolved): luminous: kclient: nofail option not supported
- 08:16 AM Backport #40160 (Resolved): luminous: FSAL_CEPH assertion failed in Client::_lookup_name: "parent...
- 08:15 AM Backport #39213 (Resolved): luminous: mds: there is an assertion when calling Beacon::shutdown()
- 05:10 AM Bug #40476: cephfs-shell: cd with no args has no effect
- Rishabh Dave wrote:
> Patrick Donnelly said:
> > What commit/branch are you testing? I thought I just changed this ...
- 05:07 AM Bug #40182 (Resolved): luminous: pybind: luminous volume client breaks against nautilus cluster
06/25/2019
- 04:31 PM Backport #38686: luminous: kcephfs TestClientLimits.test_client_pin fails with "client caps fell ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27040
merged
- 04:30 PM Backport #38445: luminous: mds: drop cache does not timeout as expected
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27342
merged
- 04:30 PM Backport #38340: luminous: mds: may leak gather during cache drop
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27342
merged
- 04:29 PM Backport #38877: luminous: mds: high debug logging with many subtrees is slow
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27679
merged
- 04:29 PM Backport #39191: luminous: mds: crash during mds restart
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27737
merged
- 04:29 PM Backport #39198: luminous: mds: we encountered "No space left on device" when moving huge number ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27801
merged
- 04:28 PM Backport #39208: luminous: mds: mds_cap_revoke_eviction_timeout is not used to initialize Server:...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27840
merged
- 04:28 PM Backport #39468: luminous: There is no punctuation mark or blank between tid and client_id in th...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27848
merged
- 04:27 PM Backport #39221: luminous: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28432
merged
- 04:27 PM Backport #39231: luminous: kclient: nofail option not supported
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28436
merged
- 04:26 PM Backport #40160: luminous: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28437
merged
- 04:26 PM Backport #39213: luminous: mds: there is an assertion when calling Beacon::shutdown()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28438
merged
- 04:25 PM Bug #40182: luminous: pybind: luminous volume client breaks against nautilus cluster
- Jan Fajerski wrote:
> proposed fix: https://github.com/ceph/ceph/pull/28445
merged
- 04:14 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Going over the userland code today to see what's there and what can be reused. Some notes:
struct ceph_mds_request...
- 08:20 AM Bug #40476: cephfs-shell: cd with no args has no effect
- Patrick Donnelly said:
> What commit/branch are you testing? I thought I just changed this to cd into the root direc...
06/24/2019
- 08:20 PM Bug #40429: mgr/volumes: subvolume.py calls Exceptions with too few arguments.
- Ramana Raja wrote:
> > +pybind/mgr/volumes/fs/subvolume.py: note: In member "_get_ancestor_xattr" of class "SubVolum...
- 01:15 PM Bug #40429: mgr/volumes: subvolume.py calls Exceptions with too few arguments.
- > +pybind/mgr/volumes/fs/subvolume.py: note: In member "_get_ancestor_xattr" of class "SubVolume":
+pybind/mgr/volum...
- 12:19 PM Bug #40429 (Fix Under Review): mgr/volumes: subvolume.py calls Exceptions with too few arguments.
- 01:13 PM Bug #40476: cephfs-shell: cd with no args has no effect
- I'd consider this to be NOTABUG. We don't really have the concept of a home directory in cephfs shell, so why should ...
- 12:14 PM Bug #40431 (Fix Under Review): mgr/volumes: allow setting data pool layout for fs subvolumes
- 12:14 PM Feature #40299 (Fix Under Review): mgr/volumes: allow setting mode on fs subvol, subvol group
- 10:19 AM Cleanup #39717 (Resolved): cephfs-shell: Fix flake8 warnings and errors
- 10:19 AM Backport #40471 (Resolved): nautilus: cephfs-shell: Fix flake8 warnings and errors
- 10:19 AM Bug #39404 (Resolved): cephfs-shell: fix string decode for ls command
- 10:18 AM Backport #39678 (Resolved): nautilus: cephfs-shell: fix string decode for ls command
- 10:18 AM Bug #39165 (Resolved): cephfs-shell: add commands to manipulate quotas
- 10:18 AM Backport #39936 (Resolved): nautilus: cephfs-shell: add commands to manipulate quotas
- 10:18 AM Feature #38829 (Resolved): cephfs-shell: add a "stat" command
- 10:18 AM Backport #39937 (Resolved): nautilus: cephfs-shell: add a "stat" command
- 10:18 AM Cleanup #40191 (Resolved): cephfs-shell: Fix flake8 errors
- 10:17 AM Backport #40217 (Resolved): nautilus: cephfs-shell: Fix flake8 errors
- 10:17 AM Bug #40244 (Resolved): cephfs-shell: 'lls' command errors
- 10:17 AM Backport #40313 (Resolved): nautilus: cephfs-shell: 'lls' command errors
- 10:17 AM Bug #40243 (Resolved): cephfs-shell: Incorrect error message is printed in 'lcd' command
- 10:17 AM Backport #40314 (Resolved): nautilus: cephfs-shell: Incorrect error message is printed in 'lcd' c...
- 10:17 AM Bug #40418 (Resolved): cephfs-shell: test only python3 and assert python3 in cephfs-shell
- 10:16 AM Bug #40455 (Resolved): cephfs-shell: fix unecessary usage of to_bytes for file paths
- 10:16 AM Backport #40469 (Resolved): nautilus: cephfs-shell: test only python3 and assert python3 in cephf...
- 10:16 AM Backport #40470 (Resolved): nautilus: cephfs-shell: fix unecessary usage of to_bytes for file paths
- 10:03 AM Backport #40495 (Resolved): nautilus: test_volume_client: declare only one default for python ver...
- https://github.com/ceph/ceph/pull/30030
- 10:03 AM Backport #40494 (Resolved): mimic: test_volume_client: declare only one default for python version
- https://github.com/ceph/ceph/pull/30110
- 10:03 AM Backport #40493 (Rejected): luminous: test_volume_client: declare only one default for python ver...
- 08:36 AM Bug #40489 (Fix Under Review): cephfs-shell: name 'files' is not defined error in do_rm()
- 08:30 AM Bug #40489 (Resolved): cephfs-shell: name 'files' is not defined error in do_rm()
- ...
06/22/2019
- 03:00 AM Bug #40476 (Need More Info): cephfs-shell: cd with no args has no effect
- What commit/branch are you testing? I thought I just changed this to cd into the root directory (of CephFS).
- 02:56 AM Bug #40472 (Fix Under Review): MDSMonitor: use stringstream instead of dout for mds repaired
- 02:45 AM Bug #40286 (In Progress): luminous: qa: remove ubuntu 14.04 testing
- 02:42 AM Documentation #39620 (In Progress): doc: MDS and metadata pool hardware requirements/recommendations
06/21/2019
- 07:22 PM Bug #40374 (Resolved): nautilus: qa: disable "HEALTH_WARN Legacy BlueStore stats reporting..."
- 07:22 PM Bug #40373 (Resolved): nautilus: qa: still testing simple messenger
- 06:45 PM Backport #40471: nautilus: cephfs-shell: Fix flake8 warnings and errors
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 06:44 PM Backport #39678: nautilus: cephfs-shell: fix string decode for ls command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 06:44 PM Backport #39936: nautilus: cephfs-shell: add commands to manipulate quotas
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 06:44 PM Backport #39937: nautilus: cephfs-shell: add a "stat" command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 06:44 PM Backport #40217: nautilus: cephfs-shell: Fix flake8 errors
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 06:44 PM Backport #40313: nautilus: cephfs-shell: 'lls' command errors
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 06:44 PM Backport #40314: nautilus: cephfs-shell: Incorrect error message is printed in 'lcd' command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 06:44 PM Backport #40469: nautilus: cephfs-shell: test only python3 and assert python3 in cephfs-shell
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 06:43 PM Backport #40470: nautilus: cephfs-shell: fix unecessary usage of to_bytes for file paths
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28681
merged
- 09:04 AM Bug #40477 (Fix Under Review): mds: cleanup truncating inodes when standby replay mds trim log se...
- 09:02 AM Bug #40477 (Resolved): mds: cleanup truncating inodes when standby replay mds trim log segments
- 08:54 AM Bug #40476 (Resolved): cephfs-shell: cd with no args has no effect
- Issuing the cd command with no args implies "cd $HOME" in bash, but in the CephFS shell it has no effect; it leads to an error ...
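A minimal sketch of the behavior under discussion, assuming the eventual fix treats a bare `cd` as `cd /` (CephFS has no per-user home directory); the helper name is hypothetical and not from the cephfs-shell code:

```python
def resolve_cd_target(arg: str) -> str:
    """Hypothetical helper: a bare `cd` falls back to the CephFS root."""
    return arg.strip() or "/"

print(resolve_cd_target(""))          # "/"
print(resolve_cd_target("/volumes"))  # "/volumes"
```

With this rule, `cd` with no argument is well-defined instead of erroring out.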
- 08:21 AM Bug #40474 (Fix Under Review): client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- 08:15 AM Bug #40474 (Resolved): client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- The client may set the CEPH_CLIENT_CAPS_PENDING_CAPSNAP flag even when there is no further cap snap flush. This may confuse the mds an...
- 03:37 AM Bug #40472: MDSMonitor: use stringstream instead of dout for mds repaired
- https://github.com/ceph/ceph/pull/28683
- 03:33 AM Bug #40472 (Resolved): MDSMonitor: use stringstream instead of dout for mds repaired
- Use stringstream instead of dout for mds repaired, to get the result directly from the command line.
06/20/2019
- 10:43 PM Backport #39936 (In Progress): nautilus: cephfs-shell: add commands to manipulate quotas
- 10:43 PM Backport #39937 (In Progress): nautilus: cephfs-shell: add a "stat" command
- 10:42 PM Backport #40217 (In Progress): nautilus: cephfs-shell: Fix flake8 errors
- 10:42 PM Backport #40313 (In Progress): nautilus: cephfs-shell: 'lls' command errors
- 10:42 PM Backport #40314 (In Progress): nautilus: cephfs-shell: Incorrect error message is printed in 'lcd...
- 10:42 PM Backport #40469 (In Progress): nautilus: cephfs-shell: test only python3 and assert python3 in ce...
- 06:10 PM Backport #40469 (Resolved): nautilus: cephfs-shell: test only python3 and assert python3 in cephf...
- https://github.com/ceph/ceph/pull/28681
- 10:42 PM Backport #40470 (In Progress): nautilus: cephfs-shell: fix unecessary usage of to_bytes for file ...
- 06:10 PM Backport #40470 (Resolved): nautilus: cephfs-shell: fix unecessary usage of to_bytes for file paths
- https://github.com/ceph/ceph/pull/28681
- 10:38 PM Backport #40471 (In Progress): nautilus: cephfs-shell: Fix flake8 warnings and errors
- 10:38 PM Backport #40471 (Resolved): nautilus: cephfs-shell: Fix flake8 warnings and errors
- https://github.com/ceph/ceph/pull/28681
- 10:38 PM Cleanup #39717 (Pending Backport): cephfs-shell: Fix flake8 warnings and errors
- 10:16 PM Backport #40040 (Resolved): nautilus: avoid trimming too many log segments after mds failover
- 04:38 PM Backport #40040: nautilus: avoid trimming too many log segments after mds failover
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28582
merged
- 10:16 PM Backport #40223 (Resolved): nautilus: mds: reset heartbeat during long-running loops in recovery
- 04:38 PM Backport #40223: nautilus: mds: reset heartbeat during long-running loops in recovery
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28611
merged
- 10:16 PM Bug #40061 (Resolved): mds: blacklisted clients eviction is broken
- 10:15 PM Backport #40236 (Resolved): nautilus: mds: blacklisted clients eviction is broken
- 10:15 PM Backport #40344 (Resolved): nautilus: mds: fix corner case of replaying open sessions
- 04:39 PM Backport #40344: nautilus: mds: fix corner case of replaying open sessions
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28580
merged
- 06:10 PM Bug #40455 (Pending Backport): cephfs-shell: fix unecessary usage of to_bytes for file paths
- 03:18 PM Bug #40431 (In Progress): mgr/volumes: allow setting data pool layout for fs subvolumes
- 01:26 PM Bug #40460 (Pending Backport): test_volume_client: declare only one default for python version
- 06:48 AM Bug #40460 (Resolved): test_volume_client: declare only one default for python version
- test_volume_client.py declares the default python version in more than one place.
- 01:09 PM Bug #40418 (Pending Backport): cephfs-shell: test only python3 and assert python3 in cephfs-shell
- 09:18 AM Backport #39670 (Resolved): nautilus: mds: output lock state in format dump
- 09:18 AM Feature #39969 (Resolved): mgr / volume: refactor volume module
- 09:18 AM Backport #40378 (Resolved): nautilus: mgr / volume: refactor volume module
- 09:17 AM Backport #40164 (Resolved): nautilus: mount: key parsing fail when doing a remount
- 09:15 AM Backport #40220 (Resolved): nautilus: TestMisc.test_evict_client fails
- 09:14 AM Backport #40161 (Resolved): nautilus: FSAL_CEPH assertion failed in Client::_lookup_name: "parent...
- 08:43 AM Bug #39526 (Resolved): cephfs-shell: teuthology tests
- 08:43 AM Backport #39935 (Resolved): nautilus: cephfs-shell: teuthology tests
- 08:42 AM Bug #39507 (Resolved): cephfs-shell: mkdir error for relative path
- 08:42 AM Backport #39960 (Resolved): nautilus: cephfs-shell: mkdir error for relative path
- 08:17 AM Bug #39395 (Fix Under Review): ceph: ceph fs auth fails
- 07:56 AM Bug #39395 (In Progress): ceph: ceph fs auth fails
06/19/2019
- 08:08 PM Bug #40455 (Fix Under Review): cephfs-shell: fix unecessary usage of to_bytes for file paths
- 08:03 PM Bug #40455 (Resolved): cephfs-shell: fix unecessary usage of to_bytes for file paths
- 06:56 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- I think we are going to need this after all. If we don't do this, we'll have to delay writing to newly-created files ...
- 06:04 PM Backport #40445 (Resolved): nautilus: mds: MDCache::cow_inode does not cleanup unneeded client_sn...
- https://github.com/ceph/ceph/pull/29344
- 06:04 PM Backport #40444 (Resolved): mimic: mds: MDCache::cow_inode does not cleanup unneeded client_snap_...
- https://github.com/ceph/ceph/pull/30234
- 06:04 PM Backport #40443 (Resolved): nautilus: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when o...
- https://github.com/ceph/ceph/pull/29343
- 06:04 PM Backport #40442 (Resolved): mimic: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when oper...
- https://github.com/ceph/ceph/pull/30108
- 06:03 PM Backport #40440 (Resolved): nautilus: mds: cannot switch mds state from standby-replay to active
- https://github.com/ceph/ceph/pull/29233
- 06:03 PM Backport #40438 (Resolved): nautilus: getattr on snap inode stuck
- https://github.com/ceph/ceph/pull/29231
- 06:03 PM Backport #40437 (Resolved): mimic: getattr on snap inode stuck
- https://github.com/ceph/ceph/pull/29230
- 02:49 PM Feature #40299 (In Progress): mgr/volumes: allow setting mode on fs subvol, subvol group
- 12:04 PM Bug #40431 (Resolved): mgr/volumes: allow setting data pool layout for fs subvolumes
- This is required by the CephFS CSI driver. Allow setting the data pool layout for fs subvolumes,
$ ceph fs subvolume crea...
- 11:02 AM Bug #40430 (Fix Under Review): cephfs-shell: No error message is printed on ls of invalid directo...
- 10:50 AM Bug #40430 (Resolved): cephfs-shell: No error message is printed on ls of invalid directories
- For any invalid ls command, no error message is printed....
- 10:04 AM Backport #40042 (In Progress): mimic: avoid trimming too many log segments after mds failover
- https://github.com/ceph/ceph/pull/28650
- 09:45 AM Bug #40429 (Resolved): mgr/volumes: subvolume.py calls Exceptions with too few arguments.
- mypy revealed...
- 06:51 AM Bug #38326 (Fix Under Review): mds: evict stale client when one of its write caps are stolen
- Incremental patch: https://github.com/ceph/ceph/pull/28642
- 03:30 AM Backport #39260: nautilus: ls -S command produces AttributeError: 'str' object has no attribute '...
- Follow-up for missing commit in backport: https://github.com/ceph/ceph/pull/28641
- 01:35 AM Bug #39987 (Pending Backport): mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- 01:33 AM Bug #40213 (Pending Backport): mds: cannot switch mds state from standby-replay to active
- 01:30 AM Bug #40361 (Pending Backport): getattr on snap inode stuck
- 01:29 AM Bug #40101 (Pending Backport): libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when operatin...
- 12:43 AM Backport #39670: nautilus: mds: output lock state in format dump
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28233
merged
- 12:42 AM Backport #40378: nautilus: mgr / volume: refactor volume module
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28595
merged
- 12:42 AM Backport #40164: nautilus: mount: key parsing fail when doing a remount
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28610
merged
- 12:41 AM Backport #40220: nautilus: TestMisc.test_evict_client fails
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28613
merged
- 12:41 AM Backport #40161: nautilus: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28612
merged
- 12:40 AM Backport #39935: nautilus: cephfs-shell: teuthology tests
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28614
merged
- 12:39 AM Backport #39960: nautilus: cephfs-shell: mkdir error for relative path
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28616
merged
06/18/2019
- 10:21 PM Bug #40418 (Fix Under Review): cephfs-shell: test only python3 and assert python3 in cephfs-shell
- 09:34 PM Bug #40418 (Resolved): cephfs-shell: test only python3 and assert python3 in cephfs-shell
- No reason to support python2 for this new tool. See this failure:...
- 11:59 AM Bug #40411 (Fix Under Review): pybind: Add standard error message and fix print of path as byte o...
- 11:47 AM Bug #40411 (Resolved): pybind: Add standard error message and fix print of path as byte object in...
- Previously, the following message was printed....
06/17/2019
- 09:59 PM Feature #40390 (Rejected): Add support for import/export volumes and snapshots (similar to btrfs/...
- We're planning something like this: http://tracker.ceph.com/projects/ceph/wiki/CDM_05-JUN-2019
Closing this ticket...
- 06:54 AM Feature #40390 (Rejected): Add support for import/export volumes and snapshots (similar to btrfs/...
- For disaster recovery and other purposes, it'd be awesome if there was an equivalent to rbd import/export (or similar...
- 08:24 PM Backport #39960 (In Progress): nautilus: cephfs-shell: mkdir error for relative path
- 08:22 PM Backport #39678 (In Progress): nautilus: cephfs-shell: fix string decode for ls command
- 08:18 PM Backport #39935 (In Progress): nautilus: cephfs-shell: teuthology tests
- 08:14 PM Backport #40220 (In Progress): nautilus: TestMisc.test_evict_client fails
- 08:12 PM Backport #40161 (In Progress): nautilus: FSAL_CEPH assertion failed in Client::_lookup_name: "par...
- 08:10 PM Backport #40223 (In Progress): nautilus: mds: reset heartbeat during long-running loops in recovery
- 08:06 PM Backport #40164 (In Progress): nautilus: mount: key parsing fail when doing a remount
- 08:04 PM Backport #40324 (In Progress): nautilus: ceph_volume_client: d_name needs to be converted to stri...
- 08:02 PM Backport #40236 (In Progress): nautilus: mds: blacklisted clients eviction is broken
- 07:57 PM Feature #38838 (Resolved): Expose CephFS snapshot creation time to clients
- 07:57 PM Backport #39471 (Resolved): nautilus: Expose CephFS snapshot creation time to clients
- 07:56 PM Backport #39680 (Resolved): nautilus: pybind: add the lseek() function to pybind of cephfs
- 07:02 PM Bug #40373: nautilus: qa: still testing simple messenger
- https://github.com/ceph/ceph/pull/28562 merged
- 07:00 PM Bug #40374: nautilus: qa: disable "HEALTH_WARN Legacy BlueStore stats reporting..."
- https://github.com/ceph/ceph/pull/28563 merged
- 05:41 PM Bug #40283: qa: add testing for lazyio
- Wrong Sidharth, sorry!
- 01:50 PM Bug #40305: qa: spurious unresponsive client causes eviction due to valgrind/multimds
- Zheng is going to take a look.
- 12:57 PM Bug #40014 (Resolved): mgr/volumes: Name 'sub_name' is not defined
- 12:53 PM Feature #40401 (Resolved): mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and subvo...
- Using the ceph CLI, authorize/deauthorize cephx auth IDs read/read-write access to fs subvolumes and subvolume groups...
- 12:52 PM Backport #40338 (Resolved): nautilus: mgr/volumes: Name 'sub_name' is not defined
- 12:52 PM Bug #39949 (Resolved): test: extend mgr/volume test to cover new interfaces
- 12:52 PM Backport #40321 (Resolved): nautilus: test: extend mgr/volume test to cover new interfaces
- 12:51 PM Bug #40152 (Resolved): mgr/volumes: unable to set quota on fs subvolumes
- 12:00 PM Bug #40152 (Pending Backport): mgr/volumes: unable to set quota on fs subvolumes
- 11:44 AM Bug #40152 (Resolved): mgr/volumes: unable to set quota on fs subvolumes
- 12:51 PM Backport #40158 (Resolved): nautilus: mgr/volumes: unable to set quota on fs subvolumes
- 11:58 AM Backport #40158 (In Progress): nautilus: mgr/volumes: unable to set quota on fs subvolumes
- 11:43 AM Backport #40158 (Resolved): nautilus: mgr/volumes: unable to set quota on fs subvolumes
- 12:51 PM Bug #39750 (Resolved): mgr/volumes: cannot create subvolumes with py3 libraries
- 12:50 PM Backport #40157 (Resolved): nautilus: mgr/volumes: cannot create subvolumes with py3 libraries
- 12:28 PM Backport #40378 (In Progress): nautilus: mgr / volume: refactor volume module
- 07:28 AM Backport #38097 (In Progress): mimic: mds: optimize revoking stale caps
- replaced by https://tracker.ceph.com/issues/40327
- 07:27 AM Backport #40327 (In Progress): mimic: mds: evict stale client when one of its write caps are stolen
- https://github.com/ceph/ceph/pull/28585
- 07:23 AM Backport #40326 (In Progress): nautilus: mds: evict stale client when one of its write caps are s...
- 07:15 AM Backport #40040 (In Progress): nautilus: avoid trimming too many log segments after mds failover
- 01:51 AM Backport #40344 (In Progress): nautilus: mds: fix corner case of replaying open sessions
- 01:43 AM Backport #40342 (In Progress): mimic: mds: fix corner case of replaying open sessions
- https://github.com/ceph/ceph/pull/28579
06/15/2019
- 10:08 AM Feature #39610 (Resolved): mgr/volumes: add CephFS subvolumes library
- 10:08 AM Backport #39934 (Resolved): nautilus: mgr/volumes: add CephFS subvolumes library
- 10:07 AM Bug #39705 (Resolved): qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.400554 vs...
- 10:05 AM Backport #40169 (Resolved): nautilus: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:...
- 10:04 AM Backport #40167 (Resolved): nautilus: client: ceph.dir.rctime xattr value incorrectly prefixes "0...
- 10:03 AM Backport #39686 (Resolved): nautilus: ceph-fuse: client hang because its bad session PipeConnecti...
- 10:02 AM Backport #39690 (Resolved): nautilus: mds: error "No space left on device" when create a large n...
06/14/2019
- 09:52 PM Backport #40378: nautilus: mgr / volume: refactor volume module
- Ramana, please do this backport.
- 09:51 PM Backport #40378 (Resolved): nautilus: mgr / volume: refactor volume module
- https://github.com/ceph/ceph/pull/28595
- 09:51 PM Feature #39969 (Pending Backport): mgr / volume: refactor volume module
- 07:43 PM Backport #40338: nautilus: mgr/volumes: Name 'sub_name' is not defined
- Ramana Raja wrote:
> https://github.com/ceph/ceph/pull/28429
merged - 07:42 PM Backport #40321: nautilus: test: extend mgr/volume test to cover new interfaces
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28429
merged - 07:42 PM Backport #40158: nautilus: mgr/volumes: unable to set quota on fs subvolumes
- Ramana Raja wrote:
> https://github.com/ceph/ceph/pull/28429
merged - 07:42 PM Backport #40157: nautilus: mgr/volumes: cannot create subvolumes with py3 libraries
- Ramana Raja wrote:
> https://github.com/ceph/ceph/pull/28429
merged - 07:42 PM Backport #39934: nautilus: mgr/volumes: add CephFS subvolumes library
- Ramana Raja wrote:
> https://github.com/ceph/ceph/pull/28429
merged - 07:30 PM Backport #40169: nautilus: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.40055...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28499
merged - 07:29 PM Backport #40167: nautilus: client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the n...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28500
merged - 07:26 PM Backport #39686: nautilus: ceph-fuse: client hang because its bad session PipeConnection to mds
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28375
merged - 07:26 PM Backport #39690: nautilus: mds: error "No space left on device" when create a large number of dirs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28394
merged - 06:40 PM Bug #40374 (Fix Under Review): nautilus: qa: disable "HEALTH_WARN Legacy BlueStore stats reportin...
- 06:35 PM Bug #40374 (Resolved): nautilus: qa: disable "HEALTH_WARN Legacy BlueStore stats reporting..."
- ...
- 06:29 PM Bug #40373 (Fix Under Review): nautilus: qa: still testing simple messenger
- 06:25 PM Bug #40373 (Resolved): nautilus: qa: still testing simple messenger
- ...
- 05:57 PM Bug #40369 (Fix Under Review): ceph_volume_client: fs_name must be converted to string before usi...
- 03:52 PM Bug #40369 (Resolved): ceph_volume_client: fs_name must be converted to string before using it
- "fs_name":https://github.com/ceph/ceph/blob/master/src/pybind/ceph_volume_client.py#L255 would normally be assigned a...
- 05:56 PM Bug #40371 (Fix Under Review): cephfs-shell: du must ignore non-directory files
- 05:38 PM Bug #40371 (Resolved): cephfs-shell: du must ignore non-directory files
- cephfs-shell's du command crashes if it comes across files that are not directories, since it tries to get 'ce...
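The kind of guard such a du implementation needs can be sketched as follows (a hypothetical helper, not the cephfs-shell code itself): check the mode bits before querying directory-only metadata.

```python
import os
import stat

def dirs_only(paths):
    # Yield only the entries that are directories; plain files are
    # skipped instead of being queried for directory-only metadata.
    for p in paths:
        if stat.S_ISDIR(os.lstat(p).st_mode):
            yield p
```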
- 09:49 AM Bug #40361 (Fix Under Review): getattr on snap inode stuck
- 09:32 AM Bug #40361 (Resolved): getattr on snap inode stuck
- from mailing list
On Wed, Jun 12, 2019 at 3:26 PM Hector Martin <hector@marcansoft.com> wrote:
>
> Hi list,
... - 04:24 AM Bug #40101 (Fix Under Review): libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when operatin...
- 04:22 AM Backport #40221 (In Progress): luminous: mds: reset heartbeat during long-running loops in recovery
- https://github.com/ceph/ceph/pull/28544
- 03:21 AM Backport #40041 (In Progress): luminous: avoid trimming too many log segments after mds failover
06/13/2019
- 07:36 PM Backport #40343 (In Progress): luminous: mds: fix corner case of replaying open sessions
- 07:35 PM Backport #40343 (Resolved): luminous: mds: fix corner case of replaying open sessions
- https://github.com/ceph/ceph/pull/28536
- 07:35 PM Backport #40344 (Resolved): nautilus: mds: fix corner case of replaying open sessions
- https://github.com/ceph/ceph/pull/28580
- 07:35 PM Backport #40342 (Resolved): mimic: mds: fix corner case of replaying open sessions
- https://github.com/ceph/ceph/pull/28579
- 07:35 PM Bug #40211 (Pending Backport): mds: fix corner case of replaying open sessions
- 06:22 PM Bug #40213 (Fix Under Review): mds: cannot switch mds state from standby-replay to active
- 03:44 PM Backport #40321 (In Progress): nautilus: test: extend mgr/volume test to cover new interfaces
- 02:43 PM Backport #40321: nautilus: test: extend mgr/volume test to cover new interfaces
- https://github.com/ceph/ceph/pull/28429
- 10:24 AM Backport #40321 (Resolved): nautilus: test: extend mgr/volume test to cover new interfaces
- https://github.com/ceph/ceph/pull/28429
- 02:48 PM Backport #40338 (In Progress): nautilus: mgr/volumes: Name 'sub_name' is not defined
- https://github.com/ceph/ceph/pull/28429
- 02:47 PM Backport #40338 (Resolved): nautilus: mgr/volumes: Name 'sub_name' is not defined
- https://github.com/ceph/ceph/pull/28429
- 02:39 PM Bug #40014 (Pending Backport): mgr/volumes: Name 'sub_name' is not defined
- 12:24 PM Feature #17434 (Fix Under Review): qa: background rsync task for FS workunits
- 10:26 AM Backport #40327 (Resolved): mimic: mds: evict stale client when one of its write caps are stolen
- https://github.com/ceph/ceph/pull/28585
- 10:26 AM Backport #40326 (Resolved): nautilus: mds: evict stale client when one of its write caps are stolen
- https://github.com/ceph/ceph/pull/28583
- 10:25 AM Backport #40325 (Rejected): mimic: ceph_volume_client: d_name needs to be converted to string bef...
- https://github.com/ceph/ceph/pull/29766
- 10:25 AM Backport #40324 (Resolved): nautilus: ceph_volume_client: d_name needs to be converted to string ...
- https://github.com/ceph/ceph/pull/28609
- 10:24 AM Backport #40323 (Rejected): luminous: ceph_volume_client: d_name needs to be converted to string ...
- 10:22 AM Backport #40314 (Resolved): nautilus: cephfs-shell: Incorrect error message is printed in 'lcd' c...
- https://github.com/ceph/ceph/pull/28681
- 10:22 AM Backport #40313 (Resolved): nautilus: cephfs-shell: 'lls' command errors
- https://github.com/ceph/ceph/pull/28681
06/12/2019
- 09:33 PM Bug #40288 (Closed): mds: lost mds journal when hot-standby mds switch occurs
- 12:58 PM Bug #40288: mds: lost mds journal when hot-standby mds switch occurs
- Sorry, there doesn't seem to be any problem; it was my misunderstanding. Please close this issue, thank you!
- 02:51 AM Bug #40288 (Closed): mds: lost mds journal when hot-standby mds switch occurs
- ceph version: jewel 10.2.2
mds mode: hot-standby
There is a risk the mds loses some events because it wakes up waiters... - 09:31 PM Bug #40243 (Pending Backport): cephfs-shell: Incorrect error message is printed in 'lcd' command
- 09:30 PM Bug #40244 (Pending Backport): cephfs-shell: 'lls' command errors
- 09:27 PM Bug #39949 (Pending Backport): test: extend mgr/volume test to cover new interfaces
- 09:17 PM Bug #39406 (Pending Backport): ceph_volume_client: d_name needs to be converted to string before ...
- 09:07 PM Bug #38326 (Pending Backport): mds: evict stale client when one of its write caps are stolen
- Zheng, any issues backporting this?
- 08:41 PM Bug #40305 (New): qa: spurious unresponsive client causes eviction due to valgrind/multimds
- ...
- 04:16 PM Bug #40014: mgr/volumes: Name 'sub_name' is not defined
- Ramana Raja wrote:
> Venky Shankar wrote:
> > Ramana, I think we should just mention that this issue will be fixed ... - 03:06 PM Bug #40093: qa: client mount cannot be forcibly unmounted when all MDS are down
- /ceph/teuthology-archive/pdonnell-2019-06-11_01:05:56-fs-wip-pdonnell-testing-20190610.220401-distro-basic-smithi/402...
- 01:47 PM Feature #24880: pybind/mgr/volumes: restore from snapshot
- ceph-csi ticket: https://github.com/ceph/ceph-csi/issues/411
Ramana, I'm reassigning this to you. We need this d... - 01:37 PM Bug #40297 (Fix Under Review): cephfs-shell: Produces TypeError on passing '*' pattern to ls, rm ...
- 12:58 PM Bug #40297 (Resolved): cephfs-shell: Produces TypeError on passing '*' pattern to ls, rm or rmdir
- ...
- 01:37 PM Bug #40298 (Fix Under Review): cephfs-shell: 'rmdir *' does not remove all directories
- 01:15 PM Bug #40298 (Resolved): cephfs-shell: 'rmdir *' does not remove all directories
- ...
- 01:30 PM Feature #40299 (Resolved): mgr/volumes: allow setting mode on fs subvol, subvol group
- Allow setting mode bits (directory permissions) when creating fs subvolume, and fs subvolume group through the CLI.
... - 01:18 PM Bug #22038: ceph-volume-client: rados.Error: command not known
- Note: luminous backport is tracked by #40182, where cbbdd0da7d40e4e5def5cc0b9a9250348e71019f is also being backported...
- 01:05 PM Bug #40182: luminous: pybind: luminous volume client breaks against nautilus cluster
- Patrick Donnelly wrote:
> Are there any other issues?
A couple more. PR is updated. - 09:53 AM Backport #40169: nautilus: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.40055...
- Oops! Sorry, I didn't notice this was assigned to David.
- 09:30 AM Backport #39209 (Resolved): nautilus: mds: mds_cap_revoke_eviction_timeout is not used to initial...
06/11/2019
- 10:58 PM Bug #40286 (Resolved): luminous: qa: remove ubuntu 14.04 testing
- pdonnell@icewind ~/ceph/qa$ git grep 14.04
distros/all/ubuntu_14.04.yaml:os_version: "14.04"
distros/all/ubuntu_14.... - 10:34 PM Feature #40285 (New): mds: support hierarchical layout transformations on files
- The main goal of this feature is to support moving whole trees to cheaper storage hardware. This can be done manually...
- 09:57 PM Bug #11314 (Duplicate): qa: MDS crashed and the runs hung without ever timing out
- 09:57 PM Feature #10369 (Fix Under Review): qa-suite: detect unexpected MDS failovers and daemon crashes
- 09:56 PM Feature #5486: kclient: make it work with selinux
- Targeting Octopus so it shows up in searches.
- 08:59 PM Bug #40284 (New): kclient: evaluate/fix/add lazyio support in the kernel
- ceph-fuse now supports lazyio [2, #20598] but I don't believe we ever checked what needed to be done for the kernel c...
- 08:59 PM Bug #40283 (Resolved): qa: add testing for lazyio
- I'm distressed we have no tests for client behavior (via libcephfs) with lazyio. : /
In particular, verify behavio... - 08:50 PM Backport #39470 (Resolved): nautilus: There is no punctuation mark or blank between tid and clie...
- 08:47 PM Backport #39473 (Resolved): nautilus: mds: fail to resolve snapshot name contains '_'
- 08:36 PM Feature #36397: mds: support real state reclaim
- Raising priority on this. We forgot to finish this and I'd like Zheng to work on it while the problem is still fresh ...
- 08:07 PM Feature #40261 (New): mds: permit executing scripts from various file system events
- Potential uses:
- automatic gzip of closed files meeting some criteria
- automatic archival of unlinked files
- ... - 08:06 PM Backport #39232 (Resolved): nautilus: kclient: nofail option not supported
- 08:05 PM Backport #39214 (Resolved): nautilus: mds: there is an assertion when calling Beacon::shutdown()
- 08:05 PM Backport #39211 (Resolved): nautilus: MDSTableServer.cc: 83: FAILED assert(version == tid)
- 08:04 PM Backport #39222 (Resolved): nautilus: mds: behind on trimming and "[dentry] was purgeable but no ...
- 08:03 PM Bug #37726 (Resolved): mds: high debug logging with many subtrees is slow
- 08:02 PM Backport #38876 (Resolved): nautilus: mds: high debug logging with many subtrees is slow
- 07:55 PM Backport #40166 (In Progress): luminous: client: ceph.dir.rctime xattr value incorrectly prefixes...
- 07:54 PM Backport #40168 (In Progress): mimic: client: ceph.dir.rctime xattr value incorrectly prefixes "0...
- 07:52 PM Backport #40167 (In Progress): nautilus: client: ceph.dir.rctime xattr value incorrectly prefixes...
- 07:48 PM Backport #40169 (In Progress): nautilus: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 ...
- 05:41 PM Backport #40169: nautilus: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.40055...
- David Disseldorp wrote:
> The backport introducing this bug has now been merged into Nautilus: https://github.com/ce... - 03:55 PM Backport #40169: nautilus: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.40055...
- The backport introducing this bug has now been merged into Nautilus: https://github.com/ceph/ceph/pull/27901 .
A f... - 07:07 PM Bug #38946 (Resolved): ceph_volume_client: Too many arguments for "WriteOpCtx"
- 07:06 PM Backport #39050 (Resolved): nautilus: ceph_volume_client: Too many arguments for "WriteOpCtx"
- 06:04 PM Bug #40182: luminous: pybind: luminous volume client breaks against nautilus cluster
- Jan Fajerski wrote:
> Patrick Donnelly wrote:
> > Let's treat this as a backport. Please cherry-pick the commits fr... - 09:56 AM Bug #40182: luminous: pybind: luminous volume client breaks against nautilus cluster
- Patrick Donnelly wrote:
> Let's treat this as a backport. Please cherry-pick the commits from here:
>
> https://g... - 06:00 PM Bug #40101: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when operating on .snap directory
- I have just run into another problem which may be related:
'ls .snap' now hangs for a long time (indefinitely?) an... - 04:52 PM Bug #40014: mgr/volumes: Name 'sub_name' is not defined
- Venky Shankar wrote:
> Ramana, I think we should just mention that this issue will be fixed w/ subvolume refactor an...
06/10/2019
- 09:19 PM Bug #40197 (Fix Under Review): The command 'node ls' sometimes output some incorrect information ...
- 03:01 PM Bug #40244 (Fix Under Review): cephfs-shell: 'lls' command errors
- 02:56 PM Bug #40244 (Resolved): cephfs-shell: 'lls' command errors
- lls need not print the current working directory. It does not print the correct path for relative paths.
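The expected behaviour can be sketched as a small argument resolver (a hypothetical helper, not the cephfs-shell code itself): a relative argument is expanded to an absolute local path, and no argument means the current working directory.

```python
import os

def resolve_lls_arg(path=None):
    # With no argument, list the current working directory;
    # otherwise expand a possibly relative path to an absolute one.
    if not path:
        return os.getcwd()
    return os.path.abspath(os.path.expanduser(path))
```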
- 02:49 PM Bug #40243 (Fix Under Review): cephfs-shell: Incorrect error message is printed in 'lcd' command
- 02:43 PM Bug #40243 (Resolved): cephfs-shell: Incorrect error message is printed in 'lcd' command
- For different types of incorrect arguments passed, the appropriate error message is not printed.
- 01:24 PM Bug #40200 (Fix Under Review): luminous: mds: does fails assert(session->get_nref() == 1) when ba...
- 02:05 AM Bug #40200: luminous: mds: does fails assert(session->get_nref() == 1) when balancing
- your patch should fix the issue. Thanks for tracking it down. Could you please create PR ?
- 12:01 PM Bug #38739 (Resolved): cephfs-shell: python traceback with mkdir inside inexistant directory
- 12:01 PM Backport #39379 (Resolved): nautilus: cephfs-shell: python traceback with mkdir inside inexistant...
- 12:01 PM Feature #38740 (Resolved): cephfs-shell: support mkdir with non-octal mode
- 12:01 PM Backport #39378 (Resolved): nautilus: cephfs-shell: support mkdir with non-octal mode
- 12:01 PM Bug #38741 (Resolved): cephfs-shell: python traceback with mkdir when reattempt of mkdir
- 12:01 PM Backport #39377 (Resolved): nautilus: cephfs-shell: python traceback with mkdir when reattempt of...
- 12:00 PM Bug #38743 (Resolved): cephfs-shell: mkdir creates directory with invalid octal mode
- 12:00 PM Backport #39376 (Resolved): nautilus: cephfs-shell: mkdir creates directory with invalid octal mode
- 12:00 PM Bug #38996 (Resolved): cephfs-shell: ls command produces error: no "colorize" attribute found error
- 12:00 PM Backport #39197 (Resolved): nautilus: cephfs-shell: ls command produces error: no "colorize" attr...
- 11:59 AM Backport #39192 (Resolved): nautilus: mds: crash during mds restart
- 11:58 AM Backport #39199 (Resolved): nautilus: mds: we encountered "No space left on device" when moving h...
- 10:37 AM Bug #22524: NameError: global name 'get_mds_map' is not defined
- Note: luminous backport is tracked by #40182, where cbbdd0da7d40e4e5def5cc0b9a9250348e71019f is also being backported...
- 10:28 AM Backport #40236 (Resolved): nautilus: mds: blacklisted clients eviction is broken
- https://github.com/ceph/ceph/pull/28618
- 10:27 AM Backport #40223 (Resolved): nautilus: mds: reset heartbeat during long-running loops in recovery
- https://github.com/ceph/ceph/pull/28611
- 10:27 AM Backport #40222 (Resolved): mimic: mds: reset heartbeat during long-running loops in recovery
- https://github.com/ceph/ceph/pull/28918
- 10:27 AM Backport #40221 (Resolved): luminous: mds: reset heartbeat during long-running loops in recovery
- https://github.com/ceph/ceph/pull/28544
- 10:26 AM Backport #40220 (Resolved): nautilus: TestMisc.test_evict_client fails
- https://github.com/ceph/ceph/pull/28613
- 10:26 AM Backport #40219 (Resolved): mimic: TestMisc.test_evict_client fails
- https://github.com/ceph/ceph/pull/29228
- 10:26 AM Backport #40218 (Resolved): luminous: TestMisc.test_evict_client fails
- https://github.com/ceph/ceph/pull/29229
- 10:26 AM Backport #40217 (Resolved): nautilus: cephfs-shell: Fix flake8 errors
- https://github.com/ceph/ceph/pull/28681
- 10:22 AM Bug #38803 (Resolved): qa: test_sessionmap assumes simple messenger
- 10:22 AM Backport #39430 (Resolved): nautilus: qa: test_sessionmap assumes simple messenger
- 03:20 AM Bug #40213 (Resolved): mds: cannot switch mds state from standby-replay to active
- If a standby-replay mds runs for a long time, there are too many inodes in cache. In the rejoin phase, the mds server ...
06/09/2019
- 11:24 PM Bug #40200: luminous: mds: does fails assert(session->get_nref() == 1) when balancing
- We have seen 3 identical crashes so far. (Logs of the crashed MDSs are at ceph-post-file: a74beec8-0a68-44c1-bfc5-56d...
- 03:51 AM Bug #39987: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- https://github.com/ceph/ceph/pull/28190 is incomplete
https://github.com/ceph/ceph/pull/28459 - 03:24 AM Bug #39987 (Fix Under Review): mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
06/08/2019
- 06:17 PM Bug #24072 (Resolved): mds: race with new session from connection and imported session
- 04:32 PM Backport #39379: nautilus: cephfs-shell: python traceback with mkdir inside inexistant directory
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27677
merged - 04:31 PM Backport #39378: nautilus: cephfs-shell: support mkdir with non-octal mode
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27677
merged - 04:31 PM Backport #39377: nautilus: cephfs-shell: python traceback with mkdir when reattempt of mkdir
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27677
merged - 04:31 PM Backport #39376: nautilus: cephfs-shell: mkdir creates directory with invalid octal mode
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27677
merged - 04:31 PM Backport #39197: nautilus: cephfs-shell: ls command produces error: no "colorize" attribute found...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27677
merged - 04:30 PM Backport #39192: nautilus: mds: crash during mds restart
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27714
merged - 04:30 PM Backport #39199: nautilus: mds: we encountered "No space left on device" when moving huge number ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27736
merged - 04:29 PM Backport #39209: nautilus: mds: mds_cap_revoke_eviction_timeout is not used to initialize Server:...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27842
merged - 04:29 PM Backport #39470: nautilus: There is no punctuation mark or blank between tid and client_id in th...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27846
merged - 04:29 PM Backport #39473: nautilus: mds: fail to resolve snapshot name contains '_'
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27849
merged - 03:57 PM Bug #40001: mds cache oversize after restart
- Zheng Yan wrote:
> please check if these dirfrag fetches are from open_file_table
How can I figure out if they are from... - 01:06 PM Bug #40211 (Fix Under Review): mds: fix corner case of replaying open sessions
- 12:01 PM Bug #40211 (Resolved): mds: fix corner case of replaying open sessions
- Marking a session dirty may flush all existing dirty sessions. MDS
calls Server::finish_force_open_sessions() for lo... - 04:19 AM Bug #39987 (Pending Backport): mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- 04:16 AM Bug #40061 (Pending Backport): mds: blacklisted clients eviction is broken
- 04:14 AM Feature #40121 (Resolved): mds: count purge queue items left in journal
- 04:13 AM Bug #40171 (Pending Backport): mds: reset heartbeat during long-running loops in recovery
- 04:12 AM Bug #40173 (Pending Backport): TestMisc.test_evict_client fails
- 04:12 AM Cleanup #40191 (Pending Backport): cephfs-shell: Fix flake8 errors
- 02:14 AM Bug #40210 (New): mds: stuck in up:clientreplay during thrashing
- ...
06/07/2019
- 07:17 PM Bug #40182 (Fix Under Review): luminous: pybind: luminous volume client breaks against nautilus c...
- Let's treat this as a backport. Please cherry-pick the commits from here:
https://github.com/ceph/ceph/pull/17266/... - 07:20 AM Bug #40182: luminous: pybind: luminous volume client breaks against nautilus cluster
- proposed fix: https://github.com/ceph/ceph/pull/28445
- 04:35 PM Backport #39232: nautilus: kclient: nofail option not supported
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27851
merged - 03:44 PM Backport #39214: nautilus: mds: there is an assertion when calling Beacon::shutdown()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27852
merged - 03:44 PM Backport #39211: nautilus: MDSTableServer.cc: 83: FAILED assert(version == tid)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27853
merged - 03:43 PM Backport #39222: nautilus: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27879
merged - 03:43 PM Backport #38876: nautilus: mds: high debug logging with many subtrees is slow
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27892
merged - 03:42 PM Backport #39471: nautilus: Expose CephFS snapshot creation time to clients
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27901
merged - 12:01 PM Bug #40202 (Fix Under Review): cephfs-shell: Error messages are printed to stdout
- 11:38 AM Bug #40202 (Resolved): cephfs-shell: Error messages are printed to stdout
- The error messages are mixed with other output messages.
- 08:52 AM Bug #40200 (Rejected): luminous: mds: does fails assert(session->get_nref() == 1) when balancing
- We've seen this assertion twice after upgrading MDS's from v12.2.11 to v12.2.12 and due to #40190 it can be disruptiv...
- 02:49 AM Bug #40197 (Fix Under Review): The command 'node ls' sometimes output some incorrect information ...
- Env: my ceph cluster has three nodes. Each node has one monitor, one mds, and some osds.
test command: ceph node ls
... - 12:06 AM Backport #39050: nautilus: ceph_volume_client: Too many arguments for "WriteOpCtx"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27893
merged
06/06/2019
- 09:49 PM Bug #39436 (Resolved): qa: upgrade task fails from mimic to master
- 07:46 PM Backport #39213 (In Progress): luminous: mds: there is an assertion when calling Beacon::shutdown()
- 07:44 PM Backport #40160 (In Progress): luminous: FSAL_CEPH assertion failed in Client::_lookup_name: "par...
- 12:46 AM Backport #40160: luminous: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
- Jeff, can you do this backport please?
- 07:44 PM Backport #39231 (In Progress): luminous: kclient: nofail option not supported
- 05:21 PM Bug #40182: luminous: pybind: luminous volume client breaks against nautilus cluster
- I think adopting `fs dump` instead of `mds dump` is the right thing to do.
- 07:22 AM Bug #40182 (Resolved): luminous: pybind: luminous volume client breaks against nautilus cluster
- Due to the removal of the 'ceph mds dump' command in nautilus, a luminous ceph_volume_client does not work against a ...
- 04:03 PM Cleanup #40191 (Fix Under Review): cephfs-shell: Fix flake8 errors
- 03:52 PM Cleanup #40191 (Resolved): cephfs-shell: Fix flake8 errors
- Fix the following errors:
* E303 too many blank lines
* E722 do not use bare 'except'
* E501 line too long
* F632... - 02:17 PM Backport #39221 (In Progress): luminous: mds: behind on trimming and "[dentry] was purgeable but ...
- 12:38 AM Backport #39221: luminous: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- Zheng, please do this backport.
- 12:30 PM Bug #40014: mgr/volumes: Name 'sub_name' is not defined
- Ramana, I think we should just mention that this issue will be fixed w/ subvolume refactor and mark as resolved once ...
- 11:10 AM Backport #40158 (In Progress): nautilus: mgr/volumes: unable to set quota on fs subvolumes
- https://github.com/ceph/ceph/pull/28429
- 11:09 AM Backport #40157 (In Progress): nautilus: mgr/volumes: cannot create subvolumes with py3 libraries
- https://github.com/ceph/ceph/pull/28429
- 11:09 AM Backport #39934 (In Progress): nautilus: mgr/volumes: add CephFS subvolumes library
- https://github.com/ceph/ceph/pull/28429
- 12:33 AM Feature #38153: client: proactively release caps it is not using
- Status on this Zheng?
06/05/2019
- 09:34 PM Feature #40121 (Fix Under Review): mds: count purge queue items left in journal
- 07:55 PM Backport #39430: nautilus: qa: test_sessionmap assumes simple messenger
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27772
merged - 06:20 PM Bug #40159 (Fix Under Review): mds: openfiletable prefetching large amounts of inodes lead to mds...
- 06:31 AM Bug #40159 (Fix Under Review): mds: openfiletable prefetching large amounts of inodes lead to mds...
- Recently, we found that both mdses of one of our clusters can't boot to up:active.
After debugging, we believe this ... - 02:07 PM Bug #40173 (Fix Under Review): TestMisc.test_evict_client fails
- 02:01 PM Bug #40173 (Resolved): TestMisc.test_evict_client fails
- /ceph/teuthology-archive/pdonnell-2019-06-04_03:15:58-fs-wip-pdonnell-testing-20190603.231819-distro-basic-smithi/400...
- 01:30 PM Bug #36370: add information about active scrubs to "ceph -s" (and elsewhere)
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Patrick Donnelly wrote:
> > > Venky, status on this ticket?
> ... - 12:41 PM Bug #40014 (Fix Under Review): mgr/volumes: Name 'sub_name' is not defined
- 10:17 AM Bug #40171 (Fix Under Review): mds: reset heartbeat during long-running loops in recovery
- 09:30 AM Bug #40171 (Resolved): mds: reset heartbeat during long-running loops in recovery
- 08:43 AM Bug #39949: test: extend mgr/volume test to cover new interfaces
- Backporting note: this will probably need to be done by a CephFS developer because it will be part of a series of com...
- 08:42 AM Feature #39969: mgr / volume: refactor volume module
- Backporting note: this will probably need to be done by a CephFS developer because it will be part of a series of com...
- 08:38 AM Backport #40158 (Need More Info): nautilus: mgr/volumes: unable to set quota on fs subvolumes
- feature backport - commits need to be cherry-picked in the correct order
reassigning to the developer - 08:37 AM Backport #40157 (Need More Info): nautilus: mgr/volumes: cannot create subvolumes with py3 libraries
- feature backport - commits need to be cherry-picked in the correct order
reassigning to the developer - 08:36 AM Backport #39934 (Need More Info): nautilus: mgr/volumes: add CephFS subvolumes library
- feature backport - commits need to be cherry-picked in the correct order
reassigning to the developer - 06:45 AM Backport #40169 (Resolved): nautilus: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:...
- https://github.com/ceph/ceph/pull/28499
- 06:45 AM Backport #40168 (Resolved): mimic: client: ceph.dir.rctime xattr value incorrectly prefixes "09" ...
- https://github.com/ceph/ceph/pull/28501
- 06:44 AM Backport #40167 (Resolved): nautilus: client: ceph.dir.rctime xattr value incorrectly prefixes "0...
- https://github.com/ceph/ceph/pull/28500
- 06:44 AM Backport #40166 (Resolved): luminous: client: ceph.dir.rctime xattr value incorrectly prefixes "0...
- https://github.com/ceph/ceph/pull/28502
- 06:44 AM Backport #40165 (Resolved): mimic: mount: key parsing fail when doing a remount
- https://github.com/ceph/ceph/pull/29225
- 06:44 AM Backport #40164 (Resolved): nautilus: mount: key parsing fail when doing a remount
- https://github.com/ceph/ceph/pull/28610
- 06:44 AM Backport #40163 (Resolved): luminous: mount: key parsing fail when doing a remount
- https://github.com/ceph/ceph/pull/29226
- 06:44 AM Backport #40162 (Resolved): mimic: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->i...
- https://github.com/ceph/ceph/pull/29609
- 06:44 AM Backport #40161 (Resolved): nautilus: FSAL_CEPH assertion failed in Client::_lookup_name: "parent...
- https://github.com/ceph/ceph/pull/28612
- 06:43 AM Backport #40160 (Resolved): luminous: FSAL_CEPH assertion failed in Client::_lookup_name: "parent...
- https://github.com/ceph/ceph/pull/28437
- 12:54 AM Backport #39690 (In Progress): nautilus: mds: error "No space left on device" when create a larg...
- https://github.com/ceph/ceph/pull/28394
- 12:38 AM Bug #40085 (Pending Backport): FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_di...
- 12:36 AM Bug #39705 (Pending Backport): qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.4...
- 12:36 AM Bug #39943 (Pending Backport): client: ceph.dir.rctime xattr value incorrectly prefixes "09" to t...
06/04/2019
- 11:23 PM Bug #39951 (Pending Backport): mount: key parsing fail when doing a remount
- 10:36 PM Backport #40158 (In Progress): nautilus: mgr/volumes: unable to set quota on fs subvolumes
- 10:35 PM Backport #40158 (Resolved): nautilus: mgr/volumes: unable to set quota on fs subvolumes
- 10:32 PM Backport #40157 (In Progress): nautilus: mgr/volumes: cannot create subvolumes with py3 libraries
- 10:26 PM Backport #40157 (Resolved): nautilus: mgr/volumes: cannot create subvolumes with py3 libraries
- 10:31 PM Backport #39934 (In Progress): nautilus: mgr/volumes: add CephFS subvolumes library
- 05:48 PM Bug #40152 (Pending Backport): mgr/volumes: unable to set quota on fs subvolumes
- 04:12 PM Bug #40152 (Fix Under Review): mgr/volumes: unable to set quota on fs subvolumes
- 02:38 PM Bug #40152 (Resolved): mgr/volumes: unable to set quota on fs subvolumes
- Setting quota on fs subvolumes fails in master. Tested on a vstart cluster.
build]$ ./bin/ceph fs subvolume create... - 04:09 PM Bug #39750 (Pending Backport): mgr/volumes: cannot create subvolumes with py3 libraries
- 12:54 PM Bug #39750 (Fix Under Review): mgr/volumes: cannot create subvolumes with py3 libraries
- 12:54 PM Bug #39750: mgr/volumes: cannot create subvolumes with py3 libraries
- Thanks, Nathan! I could reproduce this issue just with -DWITH_PYTHON3=ON.
- 10:32 AM Bug #39750: mgr/volumes: cannot create subvolumes with py3 libraries
- To successfully reproduce, -DWITH_PYTHON2=OFF may also be needed (in addition to the options shown in the bug descrip...
- 12:55 PM Backport #39689 (In Progress): mimic: mds: error "No space left on device" when create a large n...
- https://github.com/ceph/ceph/pull/28381
- 10:33 AM Backport #40131 (Resolved): nautilus: Document behaviour of fsync-after-close
- https://github.com/ceph/ceph/pull/30025
- 10:33 AM Backport #40130 (Resolved): mimic: Document behaviour of fsync-after-close
- https://github.com/ceph/ceph/pull/29765
- 08:46 AM Feature #40121 (Resolved): mds: count purge queue items left in journal
- The MDS purge queue didn't have a perf counter to record how many items are still left in the journal. Even when MDS restarted, t...
- 06:22 AM Backport #39686 (In Progress): nautilus: ceph-fuse: client hang because its bad session PipeConne...
- https://github.com/ceph/ceph/pull/28375