Activity
From 11/11/2019 to 12/10/2019
12/10/2019
- 11:37 PM Bug #43247 (Fix Under Review): qa: test_cephfs_shell.TestSnapshots.test_snap FAIL
- 10:58 PM Bug #43247: qa: test_cephfs_shell.TestSnapshots.test_snap FAIL
- master: http://pulpito.ceph.com/pdonnell-2019-12-10_20:51:09-fs-master-distro-basic-smithi/
- 08:52 PM Bug #43247 (Resolved): qa: test_cephfs_shell.TestSnapshots.test_snap FAIL
- ...
- 11:22 PM Bug #43249 (Resolved): cephfs-shell: exit failure when non-interactive command fails
- If a one-shot command fails, the cephfs-shell should exit with a non-zero status:...
- 11:20 PM Bug #43248 (Resolved): cephfs-shell: do not drop into shell after running command-line command
- e.g....
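A minimal sketch of the behavior the two cephfs-shell entries above ask for, assuming cephfs-shell accepts a one-shot command as positional arguments (the invocation and path below are illustrative):

    # Hedged sketch: a failing one-shot command should exit non-zero and
    # must not fall through into the interactive shell afterwards.
    import subprocess

    proc = subprocess.run(["cephfs-shell", "ls", "/no/such/dir"],
                          capture_output=True, text=True, timeout=60)
    assert proc.returncode != 0, "one-shot failure should exit non-zero"
    # If cephfs-shell dropped into an interactive prompt instead of
    # exiting, run() would block here until the timeout fired.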
- 09:34 PM Cleanup #42468 (Resolved): mds: reorg MDSTable header
- 09:33 PM Cleanup #42564 (Resolved): mds: reorg Migrator header
- 09:31 PM Cleanup #42793 (Resolved): mds: reorg PurgeQueue header
- 09:03 AM Documentation #43222 (Resolved): doc: mention multimds in dev guide's list of integration test su...
- 07:16 AM Documentation #43220 (In Progress): doc: clarify difference fs and kcephfs suite in dev guide
- 07:12 AM Documentation #43220 (Resolved): doc: clarify difference fs and kcephfs suite in dev guide
- 05:57 AM Backport #43219 (In Progress): nautilus: mgr/volumes: ERROR: test_subvolume_create_with_desired_u...
- 05:49 AM Backport #43219 (Resolved): nautilus: mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_...
- https://github.com/ceph/ceph/pull/31741
- 05:44 AM Bug #43038 (Pending Backport): mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_gid (ta...
- 05:42 AM Bug #43218 (Rejected): kclient: when looking up the snap dirs sometime will hit WARN_ON
- Hit this twice in 30 minutes, the following are the warning:
76 <7>[ 3254.346712] ceph: readdir fetching 100...
- 05:22 AM Feature #4386: kclient: Mount error message when no MDS present
- And maybe we could return -ESTALE or some other specific errnos to the userland mount.ceph, and then the mou...
- 05:20 AM Feature #4386: kclient: Mount error message when no MDS present
- Checked the new mount API; we still need the fix for when the mount request times out because no MDS is up or ...
- 03:52 AM Documentation #22204 (Resolved): doc: scrub_path is missing in the docs
- 12:37 AM Documentation #42016 (Resolved): doc: layout rest of intro page
12/09/2019
- 11:59 PM Bug #43216 (Resolved): MDSMonitor: removes MDS coming out of quorum election
- Event sequence:
- 2019-12-07T12:26:26.854 mon_thrash kills mon.a(leader)
2019-12-07T12:27:07.843 mon_thrash rev...
- 10:12 PM Bug #43133 (Fix Under Review): vstop.sh: Mounts are not cleaned up
- 06:55 PM Feature #26996 (Fix Under Review): cephfs: get capability cache hits by clients to provide intros...
- 06:29 PM Bug #43191 (Fix Under Review): test_cephfs_shell: set `colors` to Never for cephfs-shell
- 06:17 AM Bug #43191 (Resolved): test_cephfs_shell: set `colors` to Never for cephfs-shell
- Originally, the plan was to use setUpClass and tearDownClass for tests[1] but I missed pushing that modification befo...
- 03:13 PM Documentation #43210 (In Progress): doc: MDS config reference improvements
- https://docs.ceph.com/docs/master/cephfs/mds-config-ref/
Add details on how to apply a configuration option, fetch...
- 03:06 PM Backport #43085 (In Progress): nautilus: pybind / cephfs: remove static typing in LibCephFS.chown
- 02:56 PM Backport #43085 (New): nautilus: pybind / cephfs: remove static typing in LibCephFS.chown
- Reopening. I will update this PR with this fix: https://github.com/ceph/ceph/pull/31741
- 02:57 PM Feature #40929 (In Progress): pybind/mgr/mds_autoscaler: create mgr plugin to deploy and configur...
- 02:56 PM Fix #41782 (Fix Under Review): mds: allow stray directories to fragment and switch from 10 stray ...
- Update:
Stray dirs are not being dropped from 10 to 1. Zheng recommended having more stray dirs.
Only fragmentation...
- 02:44 PM Bug #43208 (Fix Under Review): mds: unsafe req may result in data remaining in the datapool
- 12:56 PM Bug #43208 (Resolved): mds: unsafe req may result in data remaining in the datapool
- When a client creates a file, if early_reply is set to true, the metadata has not yet been written to the journal while the file data is succe...
- 02:42 PM Bug #43039: client: shutdown race fails with status 141
- (Handing back to Patrick for now)
Is this problem still occurring in teuthology?
- 11:55 AM Feature #36253 (Fix Under Review): cephfs: clients should send usage metadata to MDSs for adminis...
- 11:54 AM Feature #24285 (Fix Under Review): mgr: add module which displays current usage of file system (`...
12/08/2019
12/07/2019
- 12:14 AM Feature #43182 (Resolved): mds: increase default cache size to 4GB
- 1GB is too low as a default and usually results in cache size warnings at that size; the MDS will struggle to maintai...
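Until the new default lands, the limit can be raised explicitly through the config database; a minimal sketch using the existing mds_cache_memory_limit option (the 4 GiB value mirrors the proposed default):

    # Hedged sketch: raise the MDS cache memory limit to 4 GiB cluster-wide.
    import subprocess

    subprocess.run(["ceph", "config", "set", "mds",
                    "mds_cache_memory_limit", str(4 * 1024**3)], check=True)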
12/06/2019
- 10:50 PM Backport #42440: mimic: mds: create a configurable snapshot limit
- Nathan Cutler wrote:
> feature backport - does it need a release note?
Yes.
- 12:58 PM Backport #42440 (Need More Info): mimic: mds: create a configurable snapshot limit
- feature backport - does it need a release note?
- 10:50 PM Backport #42441: nautilus: mds: create a configurable snapshot limit
- Nathan Cutler wrote:
> feature backport - does it need a release note?
Yes.
- 12:58 PM Backport #42441 (Need More Info): nautilus: mds: create a configurable snapshot limit
- feature backport - does it need a release note?
- 01:32 PM Backport #43143 (In Progress): nautilus: mds: tolerate no snaprealm encoded in on-disk root inode
- 01:32 PM Backport #43141 (In Progress): nautilus: tools/cephfs: linkages injected by cephfs-data-scan have...
- 01:31 PM Backport #43138 (In Progress): nautilus: mds: reports unrecognized message for mgrclient messages
- 01:29 PM Backport #43137 (In Progress): nautilus: pybind/mgr/volumes: idle connection drop is not working
- 01:27 PM Bug #42923 (Resolved): pybind / cephfs: remove static typing in LibCephFS.chown
- 01:26 PM Backport #43085 (Rejected): nautilus: pybind / cephfs: remove static typing in LibCephFS.chown
- The code being fixed does not exist in nautilus.
- 01:25 PM Backport #43001 (In Progress): nautilus: qa: ignore "ceph.dir.pin: No such attribute" for (old) k...
- 01:21 PM Backport #42951 (In Progress): nautilus: Test failure: test_subvolume_snapshot_ls (tasks.cephfs.t...
- 01:21 PM Backport #42949 (In Progress): nautilus: mds: inode lock stuck at unstable state after evicting c...
- 01:20 PM Backport #43170 (In Progress): nautilus: nautilus: qa: ignore RECENT_CRASH for multimds snapshot ...
- 01:20 PM Backport #43170 (Resolved): nautilus: nautilus: qa: ignore RECENT_CRASH for multimds snapshot tes...
- https://github.com/ceph/ceph/pull/32072
- 01:19 PM Bug #42922 (Pending Backport): nautilus: qa: ignore RECENT_CRASH for multimds snapshot testing
- 01:18 PM Backport #42738 (Need More Info): nautilus: mgr/volumes: cleanup libcephfs handles on mgr shutdown
- 01:18 PM Backport #42713 (Need More Info): nautilus: mgr: daemon state for mds not available
- 01:17 PM Backport #42650 (In Progress): nautilus: mds: no assert on frozen dir when scrub path
- 12:58 PM Backport #42631 (In Progress): nautilus: client: FAILED assert(cap == in->auth_cap)
- 10:11 AM Bug #39947 (Resolved): cephfs-shell: add CI testing with flake8
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:11 AM Bug #40202 (Resolved): cephfs-shell: Error messages are printed to stdout
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:10 AM Bug #40430 (Resolved): cephfs-shell: No error message is printed on ls of invalid directories
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:10 AM Bug #40476 (Resolved): cephfs-shell: cd with no args has no effect
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:10 AM Bug #40836 (Resolved): cephfs-shell: flake8 blank line and indentation error
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:10 AM Cleanup #40992 (Resolved): cephfs-shell: Multiple flake8 errors
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:09 AM Bug #41164 (Resolved): cephfs-shell: onecmd throws TypeError
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
12/05/2019
- 11:58 PM Bug #42643 (Resolved): vstart.sh: highlight presence of stray conf file
- 10:01 PM Backport #41283 (Rejected): nautilus: cephfs-shell: No error message is printed on ls of invalid ...
- We're not backporting cephfs-shell fixes to Nautilus anymore.
- 10:01 PM Backport #41268 (Rejected): nautilus: cephfs-shell: onecmd throws TypeError
- We're not backporting cephfs-shell fixes to Nautilus anymore.
- 10:01 PM Backport #41118 (Rejected): nautilus: cephfs-shell: add CI testing with flake8
- We're not backporting cephfs-shell fixes to Nautilus anymore.
- 10:01 PM Backport #41112 (Rejected): nautilus: cephfs-shell: cd with no args has no effect
- We're not backporting cephfs-shell fixes to Nautilus anymore.
- 10:01 PM Backport #41105 (Rejected): nautilus: cephfs-shell: flake8 blank line and indentation error
- We're not backporting cephfs-shell fixes to Nautilus anymore.
- 10:01 PM Backport #41089 (Rejected): nautilus: cephfs-shell: Multiple flake8 errors
- We're not backporting cephfs-shell fixes to Nautilus anymore.
- 10:00 PM Backport #40898 (Rejected): nautilus: cephfs-shell: Error messages are printed to stdout
- 09:45 PM Bug #42894 (Fix Under Review): kclient: if there has at least one MDS still not laggy the mount w...
- 09:45 PM Bug #42760 (Fix Under Review): kclient: get random mds not work as expected
- 09:45 PM Bug #42515 (Fix Under Review): fs: OpenFileTable object shards have too many k/v pairs
- 09:37 PM Bug #42088 (New): 'ceph -s' does not show standbys if there are no filesystems
- 09:36 PM Bug #26901 (New): mds: no throttlers set on incoming messages
- 09:36 PM Bug #21507 (New): mds: debug logs near respawn are not flushed
- 09:36 PM Bug #21058 (New): mds: remove UNIX file permissions binary dependency
- 09:35 PM Bug #20938 (New): CephFS: concurrent access to file from multiple nodes blocks for seconds
- 09:35 PM Bug #19812 (New): client: not swapping directory caps efficiently leads to very slow create chains
- 09:35 PM Bug #18883 (New): qa: failures in samba suite
- 09:35 PM Bug #17847 (New): "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- 09:35 PM Bug #17594 (New): cephfs: permission checking not working (MDS should enforce POSIX permissions)
- 09:35 PM Bug #16881 (New): RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- 09:35 PM Bug #16920 (New): mds.inodes* perf counters sound like the number of inodes but they aren't
- 09:35 PM Bug #16556 (New): LibCephFS.InterProcessLocking failing on master and jewel
- 09:35 PM Bug #9105 (New): ~ObjectCacher behaves poorly on EBLACKLISTED
- 09:35 PM Bug #9101 (New): multimds: unlinked file is not pruned from replica mds caches
- 09:34 PM Bug #4023 (New): kclient: d_revalidate is abusing d_parent
- 09:34 PM Bug #2277 (New): qa: flock test broken
- 09:23 PM Bug #42252 (Rejected): mimic: test_21501 (tasks.cephfs.test_volume_client.TestVolumeClient) failure
- Fixed by https://github.com/ceph/ceph/pull/31017
- 06:56 PM Documentation #43162 (Resolved): doc: "adding an MDS" in deployment is out-of-date
- https://docs.ceph.com/docs/master/cephfs/add-remove-mds/#adding-an-mds
See: https://github.com/ceph/ceph/pull/32...
- 06:13 PM Documentation #42016 (Fix Under Review): doc: layout rest of intro page
- 06:11 PM Documentation #43155 (Closed): CephFS Documentation Sprint 4
- 05:20 PM Documentation #43154 (Resolved): doc: migrate best practice recommendations to relevant docs
- Best practices doc:
https://docs.ceph.com/docs/master/cephfs/best-practices/
Should just put these recommendati...
- 03:01 PM Bug #43149 (In Progress): kclient: umount will stuck for around 1 minutes sometimes
- During umount, if the last request reply is only a safe one without an unsafe one, the umount won't have any chance to...
- 02:55 PM Bug #43149 (Resolved): kclient: umount will stuck for around 1 minutes sometimes
- While running some tests: in one terminal, run a script that creates/deletes/lists a large number of directories wit...
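A hedged repro sketch of that scenario (mount point and iteration count are illustrative); run it against a kernel-client mount while issuing umount from another terminal:

    # Hedged repro sketch: churn directories on a kclient mount so that the
    # last request reply seen at umount time is a "safe" one.
    import os, shutil

    MNT = "/mnt/cephfs"   # assumed kclient mount point
    for i in range(100000):
        d = os.path.join(MNT, "dir.%d" % i)
        os.makedirs(d)
        os.listdir(MNT)
        shutil.rmtree(d)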
- 02:54 PM Feature #38851 (Rejected): mount.ceph.fuse: support secretfile option
- 02:53 PM Bug #43061: ceph fs add_data_pool doesn't set pool metadata properly
- Ramana Raja wrote:
> [...]
> `add_data_pool` sets the pool's meta data properly if the pool's application metadata ...
- 11:10 AM Backport #43144 (Rejected): mimic: mds: tolerate no snaprealm encoded in on-disk root inode
- 11:10 AM Backport #43143 (Resolved): nautilus: mds: tolerate no snaprealm encoded in on-disk root inode
- https://github.com/ceph/ceph/pull/32079
- 11:10 AM Backport #43142 (Rejected): mimic: tools/cephfs: linkages injected by cephfs-data-scan have first...
- 11:07 AM Backport #43141 (Resolved): nautilus: tools/cephfs: linkages injected by cephfs-data-scan have fi...
- https://github.com/ceph/ceph/pull/32078
- 11:07 AM Backport #43138 (Resolved): nautilus: mds: reports unrecognized message for mgrclient messages
- https://github.com/ceph/ceph/pull/32077
- 11:07 AM Backport #43137 (Resolved): nautilus: pybind/mgr/volumes: idle connection drop is not working
- https://github.com/ceph/ceph/pull/33116
- 08:58 AM Bug #36094: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- For the sake of completeness, here is the crash log with the extra debug output:...
- 02:20 AM Bug #36094 (Fix Under Review): mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- 06:45 AM Bug #43133 (Resolved): vstop.sh: Mounts are not cleaned up
- When vstop is run while CephFS is mounted, mount processes are retained and can't be killed.
And also the mount ...
12/04/2019
- 10:59 PM Bug #43129 (New): qa: `fs dump` fails during snaptests
- ...
- 10:54 PM Bug #42636 (Resolved): qa: AttributeError: can't set attribute
- 10:53 PM Bug #42636 (Pending Backport): qa: AttributeError: can't set attribute
- 10:50 PM Bug #43036 (Pending Backport): mds: reports unrecognized message for mgrclient messages
- 10:18 PM Bug #42675 (Pending Backport): mds: tolerate no snaprealm encoded in on-disk root inode
- 10:03 PM Bug #43125: qa: ceph_volume_client not available "ModuleNotFoundError: No module named 'ceph_volu...
- Can't seem to reproduce on master:
http://pulpito.ceph.com/pdonnell-2019-12-04_20:54:30-fs-master-distro-basic-smi...
- 08:46 PM Bug #43125 (Can't reproduce): qa: ceph_volume_client not available "ModuleNotFoundError: No modul...
- ...
- 09:58 PM Bug #42829 (Pending Backport): tools/cephfs: linkages injected by cephfs-data-scan have first == ...
- 09:57 PM Bug #38452 (Resolved): mds: assert crash loop while unlinking file
- 09:45 PM Bug #43113 (Pending Backport): pybind/mgr/volumes: idle connection drop is not working
- 03:57 PM Documentation #16300 (Resolved): doc: fuse_disable_pagecache
12/03/2019
- 10:35 PM Bug #43113 (Fix Under Review): pybind/mgr/volumes: idle connection drop is not working
- 09:24 PM Bug #43113 (Resolved): pybind/mgr/volumes: idle connection drop is not working
- after creating a subvolume:...
- 02:51 PM Feature #38851: mount.ceph.fuse: support secretfile option
- Yes. I think this is intentional.
The secretfile thing was really for the kernel client, which had a very primitiv...
- 12:27 PM Bug #43061 (In Progress): ceph fs add_data_pool doesn't set pool metadata properly
- ...
- 10:45 AM Bug #43038 (Fix Under Review): mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_gid (ta...
- 12:48 AM Bug #42894: kclient: if there has at least one MDS still not laggy the mount will fail
- 12:46 AM Feature #7333 (In Progress): client: evaluate multiple O_APPEND writers
- 12:45 AM Feature #4386: kclient: Mount error message when no MDS present
- An extra patch has been posted; it is based on the current (old) mount API.
There is a new mount API for cephfs, and ...
12/02/2019
- 09:31 PM Feature #118: kclient: clean pages when throwing out dirty metadata on session teardown
- I'm not sure what this tracker is really asking for, tbqh.
Hmm...now that I look, I do see this:
> commit 6c99f...
- 07:53 AM Feature #118: kclient: clean pages when throwing out dirty metadata on session teardown
- If I have understood it correctly, based on the current code and my tests we have already implemented it.
There has...
- 04:43 PM Feature #16468 (Resolved): kclient: Exclude ceph.* xattr namespace in listxattr
- Thanks for verifying Xiubo!
- 02:12 PM Feature #16468: kclient: Exclude ceph.* xattr namespace in listxattr
- 02:11 PM Feature #16468: kclient: Exclude ceph.* xattr namespace in listxattr
- The ceph.* xattr has been removed, so this has been fixed:
>
> commit e09580b343aa117fd07c1bb7f7dfc5bc630a2953
...
- 02:46 PM Bug #42986 (Triaged): qa: Test failure: test_drop_cache_command_dead (tasks.cephfs.test_misc.Test...
- 02:45 PM Bug #43038 (In Progress): mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_gid (tasks.c...
- 02:44 PM Bug #43061 (Triaged): ceph fs add_data_pool doesn't set pool metadata properly
- 02:41 PM Bug #43090 (Fix Under Review): mds:check if oldin is null before accessing its member
- 02:41 PM Bug #43090 (Need More Info): mds:check if oldin is null before accessing its member
- Can you share your cluster version, logs, and backtrace?
- 01:58 PM Bug #43090 (Closed): mds:check if oldin is null before accessing its member
- In mds/Server.cc, handle_client_rename():
CInode *oldin = 0;
If destdnl->is_null(), then oldin will still be 0;
...
- 12:41 PM Backport #43085 (Resolved): nautilus: pybind / cephfs: remove static typing in LibCephFS.chown
- https://github.com/ceph/ceph/pull/31741
11/28/2019
- 11:26 PM Bug #43061 (Resolved): ceph fs add_data_pool doesn't set pool metadata properly
- maybe related to https://tracker.ceph.com/issues/36028...
- 01:38 AM Feature #15066 (Rejected): multifs: Allow filesystems to be assigned RADOS namespace as well as p...
- Now that pg_autoscaler exists with pg merging, this feature is not compelling. Using separate pools for multifs has a...
- 01:38 AM Feature #5520 (Rejected): osdc: should handle namespaces
- Now that pg_autoscaler exists with pg merging, this feature is not compelling. Using separate pools for multifs has a...
- 01:36 AM Feature #15070: mon: client: multifs: auth caps on client->mon connections to limit their access ...
- Giving this to Rishabh as discussed.
- 01:35 AM Feature #22478 (Rejected): multifs: support snapshots for shared data pool
- Now that pg_autoscaler exists with pg merging, this feature is not compelling. Using separate pools for multifs has a...
11/27/2019
- 10:15 PM Feature #15070: mon: client: multifs: auth caps on client->mon connections to limit their access ...
- also see branch wip-djf-15070-rebase on https://github.com/fullerdj/ceph/
- 05:46 PM Bug #43041 (Rejected): ceph-fuse client reported "No space left on device" when from cluster copy...
- Sorry, we don't consider bugs on clusters this old. Please upgrade!
- 05:35 AM Bug #43041 (Rejected): ceph-fuse client reported "No space left on device" when from cluster copy...
- cluster version: 0.94.9
client version: 0.94.9
ceph-fuse client err info:
2019-11-27 11:04:06.800947 7fddb0dfa7...
- 10:33 AM Bug #43039: client: shutdown race fails with status 141
- I think that's probably indicative of a SIGPIPE error, which probably means some task was writing to a pipe that did ...
- 12:24 AM Bug #43039 (Resolved): client: shutdown race fails with status 141
- ...
- 09:10 AM Cleanup #41951 (Fix Under Review): mds: obsolete mds_cache_size
11/26/2019
- 11:56 PM Bug #42923 (Pending Backport): pybind / cephfs: remove static typing in LibCephFS.chown
- 11:55 PM Bug #43038 (Resolved): mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_gid (tasks.ceph...
- ...
- 08:20 PM Bug #43036 (Fix Under Review): mds: reports unrecognized message for mgrclient messages
- 07:22 PM Bug #43036 (Resolved): mds: reports unrecognized message for mgrclient messages
- ...
- 08:17 PM Bug #43035 (Rejected): qa: Test failure: test_ceph_config_show (tasks.cephfs.test_admin.TestConfi...
- Closed in favor of #43035.
- 07:12 PM Bug #43035 (Rejected): qa: Test failure: test_ceph_config_show (tasks.cephfs.test_admin.TestConfi...
- http://pulpito.ceph.com/pdonnell-2019-11-26_04:58:35-fs-wip-pdonnell-testing-20191126.005014-distro-basic-smithi/4543...
- 07:00 PM Documentation #43034 (New): doc: document large omap warning for directory fragmentation
- https://docs.ceph.com/docs/master/cephfs/health-messages/
and
https://docs.ceph.com/docs/master/cephfs/dirfrags...
- 06:56 PM Documentation #43033 (In Progress): doc: directory fragmentation section on config options
- https://docs.ceph.com/docs/master/cephfs/dirfrags/
Add section on advanced (not dev) config options for the MDS.
- 06:55 PM Documentation #43032 (New): doc: directory fragmentation omap cost/benefits
- https://docs.ceph.com/docs/master/cephfs/dirfrags/
* Discussion of rationale for directory fragmentation: object o...
- 06:52 PM Documentation #23897 (In Progress): doc: create snapshot user doc
- 06:52 PM Documentation #37746 (In Progress): doc: how to mount a subdir with ceph-fuse/kclient
- 06:52 PM Documentation #16300 (In Progress): doc: fuse_disable_pagecache
- 06:52 PM Documentation #22204 (In Progress): doc: scrub_path is missing in the docs
- 06:43 PM Documentation #42407 (In Progress): doc: add a doc for libcephfs
- 06:43 PM Documentation #41688 (In Progress): doc: client config reference improvements
- 06:41 PM Documentation #24642 (In Progress): doc: visibility semantics to other clients
- 06:37 PM Documentation #41999 (Resolved): CephFS Documentation Sprint 2
- 06:37 PM Documentation #42016 (In Progress): doc: layout rest of intro page
- 06:37 PM Documentation #43031 (Closed): CephFS Documentation Sprint 3
- 05:57 PM Bug #38681 (Resolved): cephfs-shell: add commands to manipulate snapshots
- 03:45 PM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Zoltan Arnold Nagy wrote:
> What info can I provide?
I think it'd be best to open a new tracker ticket for the...
- 03:21 PM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- I do see this on a new mds setup, with 14.2.4, having the right ganesha setup:
@root@c10n5:~# cat /etc/ganesha/gan...
- 02:52 PM Documentation #43028 (Resolved): doc: cephfs-shell options
- Like what's in https://docs.ceph.com/docs/master/cephfs/client-config-ref/
- 02:49 PM Feature #42447 (Fix Under Review): add basic client setup page
- 02:36 PM Bug #42872 (Fix Under Review): qa/tasks: add remaining tests for fs volume
- 10:12 AM Bug #42872 (In Progress): qa/tasks: add remaining tests for fs volume
- 02:33 PM Documentation #41825 (Resolved): CephFS Documentation Sprint 1
- 12:50 PM Bug #36348 (Resolved): luminous(?): blogbench I/O with two kernel clients; one stalls
- The patches were merged into -rc7 kernel, so this should be resolved now.
11/25/2019
- 10:10 PM Bug #42940 (Fix Under Review): client: trim_cache not invalidate kernel cache
- 06:29 PM Bug #42872 (New): qa/tasks: add remaining tests for fs volume
- Jos Collin wrote:
> We cannot test this with accuracy.
>
> Because:
>
> `ceph fs volume ls` would list the al...
- 10:01 AM Bug #42872 (Closed): qa/tasks: add remaining tests for fs volume
- We cannot test this with accuracy.
Because:
`ceph fs volume ls` would list the already existing volumes and th... - 01:22 PM Feature #118 (In Progress): kclient: clean pages when throwing out dirty metadata on session tear...
- 09:47 AM Backport #43002 (Rejected): mimic: qa: ignore "ceph.dir.pin: No such attribute" for (old) kernel ...
- 09:47 AM Backport #43001 (Resolved): nautilus: qa: ignore "ceph.dir.pin: No such attribute" for (old) kern...
- https://github.com/ceph/ceph/pull/32075
- 09:47 AM Backport #43000 (Rejected): luminous: qa: ignore "ceph.dir.pin: No such attribute" for (old) kern...
- 03:00 AM Bug #42986 (Resolved): qa: Test failure: test_drop_cache_command_dead (tasks.cephfs.test_misc.Tes...
- ...
11/23/2019
- 06:06 AM Fix #38801 (Pending Backport): qa: ignore "ceph.dir.pin: No such attribute" for (old) kernel client
- Whoops, forgot to move this.
11/22/2019
- 11:57 PM Bug #42894 (Fix Under Review): kclient: if there has at least one MDS still not laggy the mount w...
- 08:30 AM Backport #42951 (Resolved): nautilus: Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test...
- https://github.com/ceph/ceph/pull/33122
- 08:30 AM Backport #42950 (Rejected): mimic: mds: inode lock stuck at unstable state after evicting client
- 08:30 AM Backport #42949 (Resolved): nautilus: mds: inode lock stuck at unstable state after evicting client
- https://github.com/ceph/ceph/pull/32073
- 04:09 AM Backport #42943 (In Progress): nautilus: mds: free heap memory may grow too large for some workloads
- 04:03 AM Backport #42943 (Resolved): nautilus: mds: free heap memory may grow too large for some workloads
- https://github.com/ceph/ceph/pull/31802
- 04:02 AM Backport #42942 (Rejected): mimic: mds: free heap memory may grow too large for some workloads
- 04:02 AM Bug #42938 (Pending Backport): mds: free heap memory may grow too large for some workloads
- 03:51 AM Bug #42887: tasks.cephfs.test_volume_client.TestVolumeClient: test_21501 No such file or directory
- Log: [[http://qa-proxy.ceph.com/teuthology/yuriw-2019-11-09_19:10:09-fs-wip-yuri-mimic_13.2.7_RC2-distro-basic-smithi...
- 02:51 AM Bug #42941 (Fix Under Review): mds: stuck "waiting for osdmap 273 (which blacklists prior instance)"
- 02:42 AM Bug #42941 (In Progress): mds: stuck "waiting for osdmap 273 (which blacklists prior instance)"
- I see the issue.
- 02:37 AM Bug #42941 (Rejected): mds: stuck "waiting for osdmap 273 (which blacklists prior instance)"
- ...
- 01:22 AM Bug #42940 (Fix Under Review): client: trim_cache not invalidate kernel cache
11/21/2019
- 09:26 PM Bug #42707 (Resolved): Kernel 5.0 CephFS client hang
- Looks like the updates have trickled out to ubuntu repos. Let's call this resolved. Please reopen if you see it again...
- 09:24 PM Bug #42842 (Resolved): CephFS linux kernel hang, v4.15
- Glad to hear it. We'll call this one resolved.
- 07:43 PM Bug #42842: CephFS linux kernel hang, v4.15
- I am no longer seeing the problem on -70.79. Had a number of kernel versions installed and must have gotten confused.
- 03:00 PM Bug #42842: CephFS linux kernel hang, v4.15
- -66.75 is definitely bad, but -70.79 should be ok. Can you validate that you still see the problem on that kernel?
- 07:15 PM Bug #42835: qa: test_scrub_abort fails during check_task_status("idle")
- Nautilus backport will be tracked by #42738.
- 07:12 PM Backport #42738: nautilus: mgr/volumes: cleanup libcephfs handles on mgr shutdown
- and this: #42835
- 06:04 PM Bug #42938 (Resolved): mds: free heap memory may grow too large for some workloads
- MDS should periodically release heap free space to the kernel as part of cache trimming.
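For reference, on glibc the relevant primitive is malloc_trim(3); a hedged illustration (not the actual MDS change) via Python's ctypes:

    # Hedged illustration: hand free heap pages back to the kernel. The
    # ticket proposes the MDS do the equivalent during cache trimming.
    import ctypes

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    libc.malloc_trim(0)  # returns 1 if memory was actually released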
- 05:36 PM Backport #41283 (New): nautilus: cephfs-shell: No error message is printed on ls of invalid direc...
- 05:36 PM Backport #41268 (New): nautilus: cephfs-shell: onecmd throws TypeError
- 05:35 PM Backport #41118 (New): nautilus: cephfs-shell: add CI testing with flake8
- 05:35 PM Backport #41112 (New): nautilus: cephfs-shell: cd with no args has no effect
- 05:35 PM Backport #41105 (New): nautilus: cephfs-shell: flake8 blank line and indentation error
- 05:34 PM Backport #41089 (New): nautilus: cephfs-shell: Multiple flake8 errors
- 05:33 PM Backport #40898 (New): nautilus: cephfs-shell: Error messages are printed to stdout
- 02:55 PM Feature #42831 (Fix Under Review): mds: add config to deny all client reconnects
- 02:54 PM Bug #42917 (Duplicate): ceph: task status not available
- 02:52 PM Bug #42872 (Need More Info): qa/tasks: add remaining tests for fs volume
- 02:51 PM Bug #42887 (Need More Info): tasks.cephfs.test_volume_client.TestVolumeClient: test_21501 No such...
- 04:22 AM Bug #42923 (Fix Under Review): pybind / cephfs: remove static typing in LibCephFS.chown
- 04:21 AM Bug #42923 (In Progress): pybind / cephfs: remove static typing in LibCephFS.chown
- 04:21 AM Bug #42923 (Resolved): pybind / cephfs: remove static typing in LibCephFS.chown
- ...
- 02:06 AM Bug #42894: kclient: if there has at least one MDS still not laggy the mount will fail
- The following commits should fix it.
https://github.com/ceph/ceph-client/commit/2f35ef362bc14f25dac6738472180d9a4a...
- 01:59 AM Backport #41890: nautilus: mount.ceph: enable consumption of ceph keyring files
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30521
m...
- 01:43 AM Backport #41890 (Resolved): nautilus: mount.ceph: enable consumption of ceph keyring files
- 01:43 AM Feature #16656 (Resolved): mount.ceph: enable consumption of ceph keyring files
- 01:31 AM Bug #42922 (Resolved): nautilus: qa: ignore RECENT_CRASH for multimds snapshot testing
- https://github.com/ceph/ceph/pull/29911
needs backport.
11/20/2019
- 11:33 PM Bug #42646 (Pending Backport): Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test_volume...
- 11:32 PM Bug #42020 (Resolved): qa: fuse_mount should check if mounted in umount_wait
- DaemonWatchdog is not in mimic/nautilus.
- 11:31 PM Bug #42020 (Pending Backport): qa: fuse_mount should check if mounted in umount_wait
- 11:26 PM Bug #42746 (Resolved): mds crashed in MDCache::request_forward
- 11:25 PM Bug #42759 (Pending Backport): mds: inode lock stuck at unstable state after evicting client
- 10:37 PM Bug #42920 (New): mds: removed from map due to dropped (?) beacons
- ...
- 10:30 PM Bug #42919 (New): mds: heartbeat timeout during large scale git-clone/rm workload
- ...
- 10:03 PM Bug #42917 (Duplicate): ceph: task status not available
- ...
- 10:21 AM Bug #24679 (Resolved): qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:14 AM Bug #41031 (Resolved): qa: malformed job
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:56 AM Bug #42894 (Resolved): kclient: if there has at least one MDS still not laggy the mount will fail
- In case:
# ceph fs dump
[...]
max_mds 3
in 0,1,2
up {0=5139,1=4837,2=4985}
failed
damaged
stoppe...
- 12:31 AM Bug #42827 (Fix Under Review): mds: when mounting the extra slash(es) at the end of server path w...
11/19/2019
- 08:41 PM Bug #42887: tasks.cephfs.test_volume_client.TestVolumeClient: test_21501 No such file or directory
- Please link the source teuthology log. Add html "pre" markup around the log so it's readable.
- 04:39 PM Bug #42887 (Won't Fix): tasks.cephfs.test_volume_client.TestVolumeClient: test_21501 No such file...
- ...
- 07:18 PM Backport #42678 (Resolved): luminous: qa: malformed job
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31449
m...
- 04:33 PM Backport #42678: luminous: qa: malformed job
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31449
merged
- 07:18 PM Backport #42672 (Resolved): luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31450
m...
- 04:32 PM Backport #42672: luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/31450
merged
- 07:18 PM Backport #42774 (Resolved): luminous: mds: add command that modify session metadata
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31573
m...
- 04:31 PM Backport #42774: luminous: mds: add command that modify session metadata
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/31573
merged
https://trello.com/c/YlSLupiJ
- 04:02 PM Backport #42886 (In Progress): nautilus: mgr/volumes: allow setting uid, gid of subvolume and sub...
- 03:54 PM Backport #42886 (Resolved): nautilus: mgr/volumes: allow setting uid, gid of subvolume and subvol...
- ... by passing optional arguments --uid and --gid to `ceph fs subvolume/subvolume group create` commands.
https://...
- 10:43 AM Bug #42252: mimic: test_21501 (tasks.cephfs.test_volume_client.TestVolumeClient) failure
- Rishabh Dave wrote:
> Couldn't reproduce this issue locally; test_21501 passed for me.
with python3? also, I thin...
- 10:42 AM Bug #42252: mimic: test_21501 (tasks.cephfs.test_volume_client.TestVolumeClient) failure
- Couldn't reproduce this issue locally; test_21501 passed for me.
- 10:25 AM Feature #40959 (Pending Backport): mgr/volumes: allow setting uid, gid of subvolume and subvolume...
- 09:03 AM Bug #40877 (Resolved): client: client should return EIO when it's unsafe reqs have been dropped w...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:03 AM Bug #40967 (Resolved): qa: race in test_standby_replay_singleton_fail
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:03 AM Bug #40968 (Resolved): qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stale_write_...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:02 AM Bug #40999 (Resolved): qa: AssertionError: u'open' != 'stale'
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:01 AM Bug #41585 (Resolved): mds: client evicted twice in one tick
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:38 AM Backport #41886 (Resolved): nautilus: mds: client evicted twice in one tick
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30951
m...
- 08:36 AM Backport #41488 (Resolved): nautilus: client: client should return EIO when it's unsafe reqs have...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30043
m...
- 08:34 AM Backport #41095 (Resolved): nautilus: qa: race in test_standby_replay_singleton_fail
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29832
m...
- 08:33 AM Backport #41093 (Resolved): nautilus: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.te...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29811
m...
- 08:33 AM Backport #41087 (Resolved): nautilus: qa: AssertionError: u'open' != 'stale'
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29750
m...
- 06:02 AM Feature #42875 (New): mgr/volumes: user credentials for ListVolumes, GetCapacity and ValidateVolu...
- Validate user credentials for the following API/Commands:
ValidateVolumeCapabilities
GetCapacity
ListVolumes
- 05:57 AM Feature #42874 (New): mgr/volumes: add ValidateVolumeCapabilities API/command for `fs volume`
- add ValidateVolumeCapabilities API/command for `fs volume` as mentioned in [1]
[1] https://github.com/container-st...
- 05:55 AM Feature #42873 (New): mgr/volumes: add GetCapacity API/command for `fs volume`
- add `fs volume getcapacity` command as suggested in [1].
[1] https://github.com/container-storage-interface/spec/i...
- 05:06 AM Bug #42872 (Resolved): qa/tasks: add remaining tests for fs volume
- There are missing tests for `fs volume` in test_volumes.py. Only test_volume_rm is available. Where are the tests for...
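A hedged sketch of one of the missing tests, in the helper style test_volumes.py already uses; _gen_vol_name() is a hypothetical stand-in for a random-name helper:

    # Hedged sketch of a missing `fs volume create` test, written as a
    # method body for a TestVolumes-style class in test_volumes.py.
    import json

    def test_volume_create(self):
        volname = self._gen_vol_name()          # hypothetical helper
        self._fs_cmd("volume", "create", volname)
        volumes = json.loads(self._fs_cmd("volume", "ls"))
        self.assertIn(volname, [v["name"] for v in volumes])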
- 01:32 AM Bug #42827: mds: when mounting the extra slash(es) at the end of server path will be wrongly pars...
- This should fix it: https://github.com/ceph/ceph/pull/31713
11/18/2019
- 04:52 PM Backport #41886: nautilus: mds: client evicted twice in one tick
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30951
merged
- 04:52 PM Backport #41488: nautilus: client: client should return EIO when it's unsafe reqs have been dropp...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30043
merged
- 04:51 PM Backport #41095: nautilus: qa: race in test_standby_replay_singleton_fail
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29832
merged
- 04:50 PM Backport #41093: nautilus: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stale_wr...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29811
merged
- 04:50 PM Backport #41087: nautilus: qa: AssertionError: u'open' != 'stale'
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29750
merged
- 04:32 PM Cleanup #42867 (Fix Under Review): mds: reorg Server header
- 03:11 PM Cleanup #42867 (Resolved): mds: reorg Server header
- 03:59 PM Cleanup #42866 (Fix Under Review): mds: reorg ScrubStack header
- 03:10 PM Cleanup #42866 (Resolved): mds: reorg ScrubStack header
- 03:54 PM Backport #42155 (Resolved): nautilus: mds: infinite loop in Locker::file_update_finish()
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31079
m...
- 02:54 PM Backport #42155: nautilus: mds: infinite loop in Locker::file_update_finish()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31079
merged - 03:41 PM Cleanup #42865 (Fix Under Review): mds: reorg ScrubHeader header
- 03:09 PM Cleanup #42865 (Resolved): mds: reorg ScrubHeader header
- 03:30 PM Backport #42738: nautilus: mgr/volumes: cleanup libcephfs handles on mgr shutdown
Note to backporters: there are follow-up bugs in the process of being fixed: https://tracker.ceph.com/issues/42744
- 03:19 PM Cleanup #42864 (Fix Under Review): mds: reorg ScatterLock header
- 03:08 PM Cleanup #42864 (Resolved): mds: reorg ScatterLock header
11/16/2019
- 04:56 PM Bug #41228 (Duplicate): mon: deleting a CephFS and its pools causes MONs to crash
- 06:40 AM Bug #40773 (Resolved): qa: 'ceph osd require-osd-release nautilus' fails
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:34 AM Backport #41495 (Resolved): nautilus: qa: 'ceph osd require-osd-release nautilus' fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31040
m...
11/15/2019
- 10:40 PM Backport #41495: nautilus: qa: 'ceph osd require-osd-release nautilus' fails
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31040
merged
- 09:15 PM Bug #42842 (Resolved): CephFS linux kernel hang, v4.15
- Simple file system operations like df and ls hang and show a status of D+ when running ps. dmesg logs sometimes show ...
- 09:05 PM Bug #41841 (Resolved): mgr/volumes: missing protection for `fs volume rm` command
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:05 PM Feature #41842 (Resolved): mgr/volumes: list FS subvolumes, subvolume groups, and their snapshots
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:04 PM Bug #42096 (Resolved): mgr/volumes: creating subvolume and subvolume group snapshot fails
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 05:39 PM Bug #42837: qa: test_ops_throttle failed with `RuntimeError: Ops in flight high water is unexpect...
- might be same as: https://tracker.ceph.com/issues/16881
but the test case is different, so not marking as dup.
- 04:23 PM Bug #42837 (New): qa: test_ops_throttle failed with `RuntimeError: Ops in flight high water is un...
- ...
- 05:21 PM Bug #42829 (Fix Under Review): tools/cephfs: linkages injected by cephfs-data-scan have first == ...
- 07:18 AM Bug #42829 (Resolved): tools/cephfs: linkages injected by cephfs-data-scan have first == head
- something like
[inode 0x100000367e5 [head,head] /pg_xlog_archives/9.6/smobile/000000200000002C000000BB.00000028.ba... - 03:22 PM Bug #42835 (Resolved): qa: test_scrub_abort fails during check_task_status("idle")
- ...
- 10:09 AM Feature #42831 (Resolved): mds: add config to deny all client reconnects
- This helps reduce mds failover time.
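A hedged sketch of how such a switch might be used around a planned failover; the option name mds_deny_all_reconnect is an assumption based on this ticket, not a confirmed interface:

    # Hedged sketch: flip an assumed deny-reconnect option before failing
    # over, then restore it once the replacement MDS is active.
    import subprocess

    def set_deny_reconnect(value: bool) -> None:
        subprocess.run(["ceph", "config", "set", "mds",
                        "mds_deny_all_reconnect", str(value).lower()],
                       check=True)

    set_deny_reconnect(True)   # reconnect phase is skipped on next failover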
- 06:08 AM Bug #42760: kclient: get random mds not work as expected
- Should be fixed by https://github.com/ceph/ceph-client/commit/b570777a96d5dd15b556e73d90177e20cd0b453b
- 05:59 AM Bug #42827 (In Progress): mds: when mounting the extra slash(es) at the end of server path will b...
- 05:58 AM Bug #42827 (Won't Fix): mds: when mounting the extra slash(es) at the end of server path will be ...
- This bug is copied from https://tracker.ceph.com/issues/42771 and needs to be fixed in the MDS.
This will be very re...
- 05:51 AM Bug #42720 (Resolved): client: remove useless variable for ceph::mutex and ceph::condition_variable
- 03:53 AM Bug #42826 (Fix Under Review): mds: client does not response to cap revoke After session stale->r...
- 03:45 AM Bug #42826 (Resolved): mds: client does not response to cap revoke After session stale->resume ci...
- /a/pdonnell-2019-11-11_21:12:02-multimds-wip-pdonnell-testing-20191111.154849-distro-basic-smithi/4497461
11/14/2019
- 07:13 PM Bug #42806 (Resolved): test_cephfs_shell: stderr is uninitialized for run_cephfs_shell)_cmd
- 07:12 PM Bug #42806 (Fix Under Review): test_cephfs_shell: stderr is uninitialized for run_cephfs_shell)_cmd
- 08:24 AM Bug #42806 (Resolved): test_cephfs_shell: stderr is uninitialized for run_cephfs_shell)_cmd
- This can break tests accessing stderr on teuthology without breaking them on vstart_cluster.
- 07:06 PM Fix #42508 (Resolved): cephfs-shell: print a helpful message instead of a Python backtrace when n...
- 06:15 PM Backport #42239: nautilus: mgr/volumes: list FS subvolumes, subvolume groups, and their snapshots
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30827
m...
- 05:37 PM Backport #42239 (Resolved): nautilus: mgr/volumes: list FS subvolumes, subvolume groups, and thei...
- 06:15 PM Backport #42180: nautilus: mgr/volumes: creating subvolume and subvolume group snapshot fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31076
m...
- 05:35 PM Backport #42180 (Resolved): nautilus: mgr/volumes: creating subvolume and subvolume group snapsho...
- 06:15 PM Backport #42149: nautilus: mgr/volumes: missing protection for `fs volume rm` command
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30768
m...
- 05:33 PM Backport #42149 (Resolved): nautilus: mgr/volumes: missing protection for `fs volume rm` command
- 03:51 PM Bug #40867 (Resolved): mgr: failover during in qa testing causes unresponsive client warnings
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:33 PM Backport #40944 (Resolved): nautilus: mgr: failover during in qa testing causes unresponsive clie...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29649
m...
- 02:29 PM Cleanup #42813 (Fix Under Review): mds: reorg RecoveryQueue header
- 01:54 PM Cleanup #42813 (Resolved): mds: reorg RecoveryQueue header
- 02:16 PM Documentation #42205 (Resolved): doc: update "mount using FUSE" page
- 02:16 PM Documentation #42220 (Resolved): doc: rearrange mounting with kernel doc
- 02:15 PM Documentation #42298 (Resolved): doc: move mount automation part from mounting doc to fstab doc
- 02:15 PM Documentation #42601 (Resolved): doc: separate "system managed mount" vs. "manual mount" for diff...
- 12:47 PM Bug #40863 (Fix Under Review): cephfs-shell: rmdir with -p attempts to delete non-dir files as well
- 12:46 PM Bug #40861 (Fix Under Review): cephfs-shell: -p doesn't work for rmdir
- 09:33 AM Bug #39651: qa: test_kill_mdstable fails unexpectedly
- I talked with Zheng. He told me that many tests cannot be executed successfully with vstart cluster and this is one o...
11/13/2019
- 10:58 PM Bug #42602 (Fix Under Review): client: missing const SEEK_DATA and SEEK_HOLE on ALPINE LINUX
- 08:13 PM Backport #40944: nautilus: mgr: failover during in qa testing causes unresponsive client warnings
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29649
merged
- 07:32 PM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- Ok, I posted a couple of patches to the mailing list this morning. The first one addresses this problem, and the seco...
- 06:08 PM Bug #39651 (In Progress): qa: test_kill_mdstable fails unexpectedly
- 06:08 PM Bug #39651: qa: test_kill_mdstable fails unexpectedly
- Part of the problem is that the pipe character wasn't trimmed from output while extracting the path -...
- 11:10 AM Cleanup #42792 (Fix Under Review): mds: reorg OpenFileTable header
- 10:30 AM Cleanup #42792 (Resolved): mds: reorg OpenFileTable header
- 11:04 AM Cleanup #42793 (Fix Under Review): mds: reorg PurgeQueue header
- 11:00 AM Cleanup #42793 (Resolved): mds: reorg PurgeQueue header
- 10:21 AM Backport #42790 (In Progress): nautilus: mgr/volumes: add `fs subvolume resize infinite` command
- 08:12 AM Backport #42790 (Resolved): nautilus: mgr/volumes: add `fs subvolume resize infinite` command
- https://github.com/ceph/ceph/pull/31332
- 08:40 AM Bug #42707: Kernel 5.0 CephFS client hang
- 5.0.0-33.35~18.04.1 seems to fix this issue. I'm installing and testing now.
11/12/2019
- 09:32 PM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- I got lucky and reproduced it once, but haven't been able to do so since.
Still, I think I may understand what's h...
- 05:42 PM Bug #36348 (In Progress): luminous(?): blogbench I/O with two kernel clients; one stalls
- Ran crash on the live (stuck) kernel. Most of the "blogbench" threads are stuck trying to acquire inode->i_rwsem for ...
- 05:06 PM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- I set up 2 kclients and kicked off a blogbench run on each with both pointed at the same directory on cephfs. They bo...
- 05:55 PM Bug #40863 (In Progress): cephfs-shell: rmdir with -p attempts to delete non-dir files as well
- 05:54 PM Bug #40861 (In Progress): cephfs-shell: -p doesn't work for rmdir
- 05:25 PM Feature #42479 (Pending Backport): mgr/volumes: add `fs subvolume resize infinite` command
- 05:06 PM Bug #42759 (Fix Under Review): mds: inode lock stuck at unstable state after evicting client
- 03:33 AM Bug #42759 (Resolved): mds: inode lock stuck at unstable state after evicting client
- 05:05 PM Bug #42770 (Fix Under Review): Regulary trim inode in memory
- 12:29 PM Bug #42770 (Closed): Regulary trim inode in memory
- Inodes are currently trimmed only when the cache reaches its limit or when they sit at the bottom of the LRU. Too many inodes in memory would lead to...
- 04:11 PM Backport #42774 (In Progress): luminous: mds: add command that modify session metadata
- https://github.com/ceph/ceph/pull/31573
- 02:09 PM Backport #42774 (Resolved): luminous: mds: add command that modify session metadata
- https://github.com/ceph/ceph/pull/31573
- 04:09 PM Bug #42602: client: missing const SEEK_DATA and SEEK_HOLE on ALPINE LINUX
- Patrick Donnelly wrote:
> Better would be to wrap the usage of SEEK_DATA/SEEK_HOLE in #ifdefs. Would you like to sub...
- 03:08 PM Bug #42365 (New): client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- 01:43 PM Feature #24880: pybind/mgr/volumes: restore from snapshot
- clone operation design & interface:
. Interface
Introduce `clone` sub-command in `subvolume snapshot` command
...
- 05:12 AM Bug #42760 (In Progress): kclient: get random mds not work as expected
- 05:11 AM Bug #42760 (Resolved): kclient: get random mds not work as expected
- When getting a random MDS from the mdsmap, e.g. when there are 5 MDS servers and only one is in the up state, like:
mds = [-1, -1...
- 04:56 AM Feature #4386 (In Progress): kclient: Mount error message when no MDS present
- Currently from my test this has been fixed by e9e427f0a14f7.
Will go through the related code and test it more to ma...
- 12:01 AM Bug #42707 (In Progress): Kernel 5.0 CephFS client hang
- 12:00 AM Bug #42707: Kernel 5.0 CephFS client hang
- There was a bad backport that crept into a stable release and it looks like this ubuntu kernel pulled it in:
h...
11/11/2019
- 11:21 PM Documentation #42195 (Resolved): Add doc for exporting cephfs over nfs server deployed using rook
- 10:48 PM Bug #42720 (Fix Under Review): client: remove useless variable for ceph::mutex and ceph::conditio...
- 10:47 PM Bug #42602: client: missing const SEEK_DATA and SEEK_HOLE on ALPINE LINUX
- Better would be to wrap the usage of SEEK_DATA/SEEK_HOLE in #ifdefs. Would you like to submit a PR?
- 08:22 PM Documentation #42406 (Resolved): doc: update mount.ceph man page
- 08:09 PM Documentation #42300 (Resolved): doc/ceph-fuse: -n missing in man page
- 06:47 PM Bug #42101 (Resolved): test_cephfs_shell: test_help doesn't test help
- 05:24 AM Bug #42101 (Fix Under Review): test_cephfs_shell: test_help doesn't test help
- 06:46 PM Bug #42100 (Resolved): cephfs-shell: always returns zero, even when a command has failed
- 05:25 AM Bug #42100 (Fix Under Review): cephfs-shell: always returns zero, even when a command has failed
- 03:46 PM Documentation #37746 (New): doc: how to mount a subdir with ceph-fuse/kclient
- Okay I see. This is not addressed in
https://github.com/ceph/ceph/pull/30754
either. We'll work on this.
- 03:17 PM Documentation #37746: doc: how to mount a subdir with ceph-fuse/kclient
- No, please reopen. Nothing has been changed in that direction.
- 03:09 PM Documentation #37746 (Rejected): doc: how to mount a subdir with ceph-fuse/kclient
- I believe the current documentation already shows how to mount a subdir. Please reopen if you can cite the specific p...
- 03:41 PM Bug #42746: mds crashed in MDCache::request_forward
- Is this from a QA run or local testing?
- 03:34 PM Bug #42746 (Fix Under Review): mds crashed in MDCache::request_forward
- 03:26 PM Bug #42746 (Resolved): mds crashed in MDCache::request_forward
- ...
- 03:18 PM Documentation #23611 (Need More Info): doc: add description of new fs-client auth profile
- 02:11 PM Backport #42672 (In Progress): luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- 02:03 PM Backport #42678 (In Progress): luminous: qa: malformed job
- 12:35 PM Backport #42738 (Resolved): nautilus: mgr/volumes: cleanup libcephfs handles on mgr shutdown
- https://github.com/ceph/ceph/pull/33122
- 11:39 AM Fix #41782: mds: allow stray directories to fragment and switch from 10 stray directories to 1
- Patrick Donnelly wrote:
> Milind Changire wrote:
> > please see attachment out.tar.bz which includes ceph.conf as t...
- 09:05 AM Bug #42061 (Need More Info): volume_client: AssertionError: 237 != 8
- 02:14 AM Bug #42724 (Won't Fix): pybind/mgr/volumes: confirm backwards-compatibility of ceph_volume_client.py
- It is expected that Manila may be upgraded before an existing Ceph cluster in OpenStack. It is necessary to con...
- 02:12 AM Bug #42723 (Resolved): pybind/mgr/volumes: add upgrade testing
- We need testing for the volumes plugin consuming volumes configured using the old ceph_volume_client.py interface.
...