Activity
From 10/24/2019 to 11/22/2019
11/22/2019
- 11:57 PM Bug #42894 (Fix Under Review): kclient: if there has at least one MDS still not laggy the mount w...
- 08:30 AM Backport #42951 (Resolved): nautilus: Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test...
- https://github.com/ceph/ceph/pull/33122
- 08:30 AM Backport #42950 (Rejected): mimic: mds: inode lock stuck at unstable state after evicting client
- 08:30 AM Backport #42949 (Resolved): nautilus: mds: inode lock stuck at unstable state after evicting client
- https://github.com/ceph/ceph/pull/32073
- 04:09 AM Backport #42943 (In Progress): nautilus: mds: free heap memory may grow too large for some workloads
- 04:03 AM Backport #42943 (Resolved): nautilus: mds: free heap memory may grow too large for some workloads
- https://github.com/ceph/ceph/pull/31802
- 04:02 AM Backport #42942 (Rejected): mimic: mds: free heap memory may grow too large for some workloads
- 04:02 AM Bug #42938 (Pending Backport): mds: free heap memory may grow too large for some workloads
- 03:51 AM Bug #42887: tasks.cephfs.test_volume_client.TestVolumeClient: test_21501 No such file or directory
- Log: http://qa-proxy.ceph.com/teuthology/yuriw-2019-11-09_19:10:09-fs-wip-yuri-mimic_13.2.7_RC2-distro-basic-smithi...
- 02:51 AM Bug #42941 (Fix Under Review): mds: stuck "waiting for osdmap 273 (which blacklists prior instance)"
- 02:42 AM Bug #42941 (In Progress): mds: stuck "waiting for osdmap 273 (which blacklists prior instance)"
- I see the issue.
- 02:37 AM Bug #42941 (Rejected): mds: stuck "waiting for osdmap 273 (which blacklists prior instance)"
- ...
- 01:22 AM Bug #42940 (Fix Under Review): client: trim_cache not invalidate kernel cache
11/21/2019
- 09:26 PM Bug #42707 (Resolved): Kernel 5.0 CephFS client hang
- Looks like the updates have trickled out to ubuntu repos. Let's call this resolved. Please reopen if you see it again...
- 09:24 PM Bug #42842 (Resolved): CephFS linux kernel hang, v4.15
- Glad to hear it. We'll call this one resolved.
- 07:43 PM Bug #42842: CephFS linux kernel hang, v4.15
- I am no longer seeing the problem on -70.79. Had a number of kernel versions installed and must have gotten confused.
- 03:00 PM Bug #42842: CephFS linux kernel hang, v4.15
- -66.75 is definitely bad, but -70.79 should be ok. Can you validate that you still see the problem on that kernel?
- 07:15 PM Bug #42835: qa: test_scrub_abort fails during check_task_status("idle")
- Nautilus backport will be tracked by #42738.
- 07:12 PM Backport #42738: nautilus: mgr/volumes: cleanup libcephfs handles on mgr shutdown
- and this: #42835
- 06:04 PM Bug #42938 (Resolved): mds: free heap memory may grow too large for some workloads
- MDS should periodically release heap free space to the kernel as part of cache trimming.
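Until that lands, heap usage can be inspected and free pages released manually through the admin socket; a minimal sketch (mds.a is a placeholder daemon name):
$ ceph tell mds.a heap stats     # show tcmalloc heap usage
$ ceph tell mds.a heap release   # return free heap pages to the OS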
- 05:36 PM Backport #41283 (New): nautilus: cephfs-shell: No error message is printed on ls of invalid direc...
- 05:36 PM Backport #41268 (New): nautilus: cephfs-shell: onecmd throws TypeError
- 05:35 PM Backport #41118 (New): nautilus: cephfs-shell: add CI testing with flake8
- 05:35 PM Backport #41112 (New): nautilus: cephfs-shell: cd with no args has no effect
- 05:35 PM Backport #41105 (New): nautilus: cephfs-shell: flake8 blank line and indentation error
- 05:34 PM Backport #41089 (New): nautilus: cephfs-shell: Multiple flake8 errors
- 05:33 PM Backport #40898 (New): nautilus: cephfs-shell: Error messages are printed to stdout
- 02:55 PM Feature #42831 (Fix Under Review): mds: add config to deny all client reconnects
- 02:54 PM Bug #42917 (Duplicate): ceph: task status not available
- 02:52 PM Bug #42872 (Need More Info): qa/tasks: add remaining tests for fs volume
- 02:51 PM Bug #42887 (Need More Info): tasks.cephfs.test_volume_client.TestVolumeClient: test_21501 No such...
- 04:22 AM Bug #42923 (Fix Under Review): pybind / cephfs: remove static typing in LibCephFS.chown
- 04:21 AM Bug #42923 (In Progress): pybind / cephfs: remove static typing in LibCephFS.chown
- 04:21 AM Bug #42923 (Resolved): pybind / cephfs: remove static typing in LibCephFS.chown
- ...
- 02:06 AM Bug #42894: kclient: if there has at least one MDS still not laggy the mount will fail
- The following commits should fix it.
https://github.com/ceph/ceph-client/commit/2f35ef362bc14f25dac6738472180d9a4a...
- 01:59 AM Backport #41890: nautilus: mount.ceph: enable consumption of ceph keyring files
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30521
m...
- 01:43 AM Backport #41890 (Resolved): nautilus: mount.ceph: enable consumption of ceph keyring files
- 01:43 AM Feature #16656 (Resolved): mount.ceph: enable consumption of ceph keyring files
- 01:31 AM Bug #42922 (Resolved): nautilus: qa: ignore RECENT_CRASH for multimds snapshot testing
- https://github.com/ceph/ceph/pull/29911
needs backport.
11/20/2019
- 11:33 PM Bug #42646 (Pending Backport): Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test_volume...
- 11:32 PM Bug #42020 (Resolved): qa: fuse_mount should check if mounted in umount_wait
- DaemonWatchdog is not in mimic/nautilus.
- 11:31 PM Bug #42020 (Pending Backport): qa: fuse_mount should check if mounted in umount_wait
- 11:26 PM Bug #42746 (Resolved): mds crashed in MDCache::request_forward
- 11:25 PM Bug #42759 (Pending Backport): mds: inode lock stuck at unstable state after evicting client
- 10:37 PM Bug #42920 (New): mds: removed from map due to dropped (?) beacons
- ...
- 10:30 PM Bug #42919 (New): mds: heartbeat timeout during large scale git-clone/rm workload
- ...
- 10:03 PM Bug #42917 (Duplicate): ceph: task status not available
- ...
- 10:21 AM Bug #24679 (Resolved): qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:14 AM Bug #41031 (Resolved): qa: malformed job
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:56 AM Bug #42894 (Resolved): kclient: if there has at least one MDS still not laggy the mount will fail
- In case:
# ceph fs dump
[...]
max_mds 3
in 0,1,2
up {0=5139,1=4837,2=4985}
failed
damaged
stoppe...
- 12:31 AM Bug #42827 (Fix Under Review): mds: when mounting the extra slash(es) at the end of server path w...
11/19/2019
- 08:41 PM Bug #42887: tasks.cephfs.test_volume_client.TestVolumeClient: test_21501 No such file or directory
- Please link the source teuthology log. Add html "pre" markup around the log so it's readable.
- 04:39 PM Bug #42887 (Won't Fix): tasks.cephfs.test_volume_client.TestVolumeClient: test_21501 No such file...
- ...
- 07:18 PM Backport #42678 (Resolved): luminous: qa: malformed job
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31449
m...
- 04:33 PM Backport #42678: luminous: qa: malformed job
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31449
merged
- 07:18 PM Backport #42672 (Resolved): luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31450
m...
- 04:32 PM Backport #42672: luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/31450
merged
- 07:18 PM Backport #42774 (Resolved): luminous: mds: add command that modify session metadata
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31573
m...
- 04:31 PM Backport #42774: luminous: mds: add command that modify session metadata
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/31573
merged
https://trello.com/c/YlSLupiJ
- 04:02 PM Backport #42886 (In Progress): nautilus: mgr/volumes: allow setting uid, gid of subvolume and sub...
- 03:54 PM Backport #42886 (Resolved): nautilus: mgr/volumes: allow setting uid, gid of subvolume and subvol...
- ... by passing optional arguments --uid and --gid to `ceph fs subvolume/subvolume group create` commands.
https://...
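For example, a hedged sketch of the new options (volume, group, and id values are placeholders):
$ ceph fs subvolumegroup create vol1 group1 --uid 1000 --gid 1000
$ ceph fs subvolume create vol1 subvol1 --group_name group1 --uid 1000 --gid 1000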
- 10:43 AM Bug #42252: mimic: test_21501 (tasks.cephfs.test_volume_client.TestVolumeClient) failure
- Rishabh Dave wrote:
> Couldn't reproduce this issue locally; test_21501 passed for me.
with python3? also, I thin...
- 10:42 AM Bug #42252: mimic: test_21501 (tasks.cephfs.test_volume_client.TestVolumeClient) failure
- Couldn't reproduce this issue locally; test_21501 passed for me.
- 10:25 AM Feature #40959 (Pending Backport): mgr/volumes: allow setting uid, gid of subvolume and subvolume...
- 09:03 AM Bug #40877 (Resolved): client: client should return EIO when it's unsafe reqs have been dropped w...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:03 AM Bug #40967 (Resolved): qa: race in test_standby_replay_singleton_fail
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:03 AM Bug #40968 (Resolved): qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stale_write_...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:02 AM Bug #40999 (Resolved): qa: AssertionError: u'open' != 'stale'
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:01 AM Bug #41585 (Resolved): mds: client evicted twice in one tick
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:38 AM Backport #41886 (Resolved): nautilus: mds: client evicted twice in one tick
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30951
m...
- 08:36 AM Backport #41488 (Resolved): nautilus: client: client should return EIO when it's unsafe reqs have...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30043
m...
- 08:34 AM Backport #41095 (Resolved): nautilus: qa: race in test_standby_replay_singleton_fail
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29832
m...
- 08:33 AM Backport #41093 (Resolved): nautilus: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.te...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29811
m...
- 08:33 AM Backport #41087 (Resolved): nautilus: qa: AssertionError: u'open' != 'stale'
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29750
m...
- 06:02 AM Feature #42875 (New): mgr/volumes: user credentials for ListVolumes, GetCapacity and ValidateVolu...
- Validate user credentials for the following API/Commands:
ValidateVolumeCapabilities
GetCapacity
ListVolumes
- 05:57 AM Feature #42874 (New): mgr/volumes: add ValidateVolumeCapabilities API/command for `fs volume`
- add ValidateVolumeCapabilities API/command for `fs volume` as mentioned in [1]
[1] https://github.com/container-st...
- 05:55 AM Feature #42873 (New): mgr/volumes: add GetCapacity API/command for `fs volume`
- add `fs volume getcapacity` command as suggested in [1].
[1] https://github.com/container-storage-interface/spec/i...
- 05:06 AM Bug #42872 (Resolved): qa/tasks: add remaining tests for fs volume
- There are missing tests for `fs volume` in test_volumes.py. Only test_volume_rm is available. Where are the tests for...
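A sketch of the basic command set such tests would need to cover (the volume name is a placeholder):
$ ceph fs volume create vol1
$ ceph fs volume ls
$ ceph fs volume rm vol1 --yes-i-really-mean-it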
- 01:32 AM Bug #42827: mds: when mounting the extra slash(es) at the end of server path will be wrongly pars...
- This should fix it: https://github.com/ceph/ceph/pull/31713
11/18/2019
- 04:52 PM Backport #41886: nautilus: mds: client evicted twice in one tick
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30951
merged
- 04:52 PM Backport #41488: nautilus: client: client should return EIO when it's unsafe reqs have been dropp...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30043
merged
- 04:51 PM Backport #41095: nautilus: qa: race in test_standby_replay_singleton_fail
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29832
merged
- 04:50 PM Backport #41093: nautilus: qa: tasks.cephfs.test_client_recovery.test_stale_wr...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29811
merged
- 04:50 PM Backport #41087: nautilus: qa: AssertionError: u'open' != 'stale'
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29750
merged
- 04:32 PM Cleanup #42867 (Fix Under Review): mds: reorg Server header
- 03:11 PM Cleanup #42867 (Resolved): mds: reorg Server header
- 03:59 PM Cleanup #42866 (Fix Under Review): mds: reorg ScrubStack header
- 03:10 PM Cleanup #42866 (Resolved): mds: reorg ScrubStack header
- 03:54 PM Backport #42155 (Resolved): nautilus: mds: infinite loop in Locker::file_update_finish()
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31079
m...
- 02:54 PM Backport #42155: nautilus: mds: infinite loop in Locker::file_update_finish()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31079
merged
- 03:41 PM Cleanup #42865 (Fix Under Review): mds: reorg ScrubHeader header
- 03:09 PM Cleanup #42865 (Resolved): mds: reorg ScrubHeader header
- 03:30 PM Backport #42738: nautilus: mgr/volumes: cleanup libcephfs handles on mgr shutdown
- Note to backporters: there are follow-up bugs in progress of being fixed: https://tracker.ceph.com/issues/42744
- 03:19 PM Cleanup #42864 (Fix Under Review): mds: reorg ScatterLock header
- 03:08 PM Cleanup #42864 (Resolved): mds: reorg ScatterLock header
11/16/2019
- 04:56 PM Bug #41228 (Duplicate): mon: deleting a CephFS and its pools causes MONs to crash
- 06:40 AM Bug #40773 (Resolved): qa: 'ceph osd require-osd-release nautilus' fails
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:34 AM Backport #41495 (Resolved): nautilus: qa: 'ceph osd require-osd-release nautilus' fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31040
m...
11/15/2019
- 10:40 PM Backport #41495: nautilus: qa: 'ceph osd require-osd-release nautilus' fails
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31040
merged
- 09:15 PM Bug #42842 (Resolved): CephFS linux kernel hang, v4.15
- Simple file system operations like df and ls hang and show a status of D+ when running ps. dmesg logs sometimes show ...
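A rough way to confirm the symptom described above (not from the original report):
$ ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'   # list uninterruptible (D-state) tasks
$ dmesg | grep -i 'hung task'                       # kernel hung-task warnings, if any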
- 09:05 PM Bug #41841 (Resolved): mgr/volumes: missing protection for `fs volume rm` command
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:05 PM Feature #41842 (Resolved): mgr/volumes: list FS subvolumes, subvolume groups, and their snapshots
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:04 PM Bug #42096 (Resolved): mgr/volumes: creating subvolume and subvolume group snapshot fails
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 05:39 PM Bug #42837: qa: test_ops_throttle failed with `RuntimeError: Ops in flight high water is unexpect...
- might be same as: https://tracker.ceph.com/issues/16881
but the test case is different, so not marking as dup.
- 04:23 PM Bug #42837 (New): qa: test_ops_throttle failed with `RuntimeError: Ops in flight high water is un...
- ...
- 05:21 PM Bug #42829 (Fix Under Review): tools/cephfs: linkages injected by cephfs-data-scan have first == ...
- 07:18 AM Bug #42829 (Resolved): tools/cephfs: linkages injected by cephfs-data-scan have first == head
- something like
[inode 0x100000367e5 [head,head] /pg_xlog_archives/9.6/smobile/000000200000002C000000BB.00000028.ba...
- 03:22 PM Bug #42835 (Resolved): qa: test_scrub_abort fails during check_task_status("idle")
- ...
- 10:09 AM Feature #42831 (Resolved): mds: add config to deny all client reconnects
- This helps reduce mds failover time.
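Assuming the option lands as mds_deny_all_reconnect (the name is not final in this ticket), usage during a planned failover might look like:
$ ceph config set mds mds_deny_all_reconnect true    # drop client reconnects during failover
$ ceph config set mds mds_deny_all_reconnect false   # restore normal behaviour afterwards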
- 06:08 AM Bug #42760: kclient: get random mds not work as expected
- Should be fixed by https://github.com/ceph/ceph-client/commit/b570777a96d5dd15b556e73d90177e20cd0b453b
- 05:59 AM Bug #42827 (In Progress): mds: when mounting the extra slash(es) at the end of server path will b...
- 05:58 AM Bug #42827 (Won't Fix): mds: when mounting the extra slash(es) at the end of server path will be ...
- This bug is copied from https://tracker.ceph.com/issues/42771 and needs to be fixed in the MDS.
This will be very re...
- 05:51 AM Bug #42720 (Resolved): client: remove useless variable for ceph::mutex and ceph::condition_variable
- 03:53 AM Bug #42826 (Fix Under Review): mds: client does not response to cap revoke After session stale->r...
- 03:45 AM Bug #42826 (Resolved): mds: client does not response to cap revoke After session stale->resume ci...
- /a/pdonnell-2019-11-11_21:12:02-multimds-wip-pdonnell-testing-20191111.154849-distro-basic-smithi/4497461
11/14/2019
- 07:13 PM Bug #42806 (Resolved): test_cephfs_shell: stderr is uninitialized for run_cephfs_shell)_cmd
- 07:12 PM Bug #42806 (Fix Under Review): test_cephfs_shell: stderr is uninitialized for run_cephfs_shell)_cmd
- 08:24 AM Bug #42806 (Resolved): test_cephfs_shell: stderr is uninitialized for run_cephfs_shell)_cmd
- This can break tests accessing stderr on teuthology without breaking them on vstart_cluster.
- 07:06 PM Fix #42508 (Resolved): cephfs-shell: print a helpful message instead of a Python backtrace when n...
- 06:15 PM Backport #42239: nautilus: mgr/volumes: list FS subvolumes, subvolume groups, and their snapshots
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30827
m...
- 05:37 PM Backport #42239 (Resolved): nautilus: mgr/volumes: list FS subvolumes, subvolume groups, and thei...
- 06:15 PM Backport #42180: nautilus: mgr/volumes: creating subvolume and subvolume group snapshot fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31076
m...
- 05:35 PM Backport #42180 (Resolved): nautilus: mgr/volumes: creating subvolume and subvolume group snapsho...
- 06:15 PM Backport #42149: nautilus: mgr/volumes: missing protection for `fs volume rm` command
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30768
m...
- 05:33 PM Backport #42149 (Resolved): nautilus: mgr/volumes: missing protection for `fs volume rm` command
- 03:51 PM Bug #40867 (Resolved): mgr: failover during in qa testing causes unresponsive client warnings
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:33 PM Backport #40944 (Resolved): nautilus: mgr: failover during in qa testing causes unresponsive clie...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29649
m...
- 02:29 PM Cleanup #42813 (Fix Under Review): mds: reorg RecoveryQueue header
- 01:54 PM Cleanup #42813 (Resolved): mds: reorg RecoveryQueue header
- 02:16 PM Documentation #42205 (Resolved): doc: update "mount using FUSE" page
- 02:16 PM Documentation #42220 (Resolved): doc: rearrange mounting with kernel doc
- 02:15 PM Documentation #42298 (Resolved): doc: move mount automation part from mounting doc to fstab doc
- 02:15 PM Documentation #42601 (Resolved): doc: separate "system managed mount" vs. "manual mount" for diff...
- 12:47 PM Bug #40863 (Fix Under Review): cephfs-shell: rmdir with -p attempts to delete non-dir files as well
- 12:46 PM Bug #40861 (Fix Under Review): cephfs-shell: -p doesn't work for rmdir
- 09:33 AM Bug #39651: qa: test_kill_mdstable fails unexpectedly
- I talked with Zheng. He told me that many tests cannot be executed successfully with vstart cluster and this is one o...
11/13/2019
- 10:58 PM Bug #42602 (Fix Under Review): client: missing const SEEK_DATA and SEEK_HOLE on ALPINE LINUX
- 08:13 PM Backport #40944: nautilus: mgr: failover during in qa testing causes unresponsive client warnings
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29649
merged
- 07:32 PM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- Ok, I posted a couple of patches to the mailing list this morning. The first one addresses this problem, and the seco...
- 06:08 PM Bug #39651 (In Progress): qa: test_kill_mdstable fails unexpectedly
- 06:08 PM Bug #39651: qa: test_kill_mdstable fails unexpectedly
- Part of the problem is that the pipe character wasn't trimmed from output while extracting the path -...
- 11:10 AM Cleanup #42792 (Fix Under Review): mds: reorg OpenFileTable header
- 10:30 AM Cleanup #42792 (Resolved): mds: reorg OpenFileTable header
- 11:04 AM Cleanup #42793 (Fix Under Review): mds: reorg PurgeQueue header
- 11:00 AM Cleanup #42793 (Resolved): mds: reorg PurgeQueue header
- 10:21 AM Backport #42790 (In Progress): nautilus: mgr/volumes: add `fs subvolume resize infinite` command
- 08:12 AM Backport #42790 (Resolved): nautilus: mgr/volumes: add `fs subvolume resize infinite` command
- https://github.com/ceph/ceph/pull/31332
- 08:40 AM Bug #42707: Kernel 5.0 CephFS client hang
- 5.0.0-33.35~18.04.1 seems to fix this issue. I'm installing and testing now.
11/12/2019
- 09:32 PM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- I got lucky and reproduced it once, but haven't been able to do so since.
Still, I think I may understand what's h...
- 05:42 PM Bug #36348 (In Progress): luminous(?): blogbench I/O with two kernel clients; one stalls
- Ran crash on the live (stuck) kernel. Most of the "blogbench" threads are stuck trying to acquire inode->i_rwsem for ...
- 05:06 PM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- I set up 2 kclients and kicked off a blogbench run on each with both pointed at the same directory on cephfs. They bo...
- 05:55 PM Bug #40863 (In Progress): cephfs-shell: rmdir with -p attempts to delete non-dir files as well
- 05:55 PM Bug #40863 (In Progress): cephfs-shell: rmdir with -p attempts to delete non-dir files as well
- 05:54 PM Bug #40861 (In Progress): cephfs-shell: -p doesn't work for rmdir
- 05:25 PM Feature #42479 (Pending Backport): mgr/volumes: add `fs subvolume resize infinite` command
- 05:06 PM Bug #42759 (Fix Under Review): mds: inode lock stuck at unstable state after evicting client
- 03:33 AM Bug #42759 (Resolved): mds: inode lock stuck at unstable state after evicting client
- 05:05 PM Bug #42770 (Fix Under Review): Regulary trim inode in memory
- 12:29 PM Bug #42770 (Closed): Regulary trim inode in memory
- Inodes are currently trimmed only when the cache reaches its limit or when they sit at the bottom of the LRU. Too many inodes in memory would lead to...
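For reference, the knobs that bound the MDS cache today (values and daemon name are examples only):
$ ceph config set mds mds_cache_memory_limit 4294967296   # 4 GiB cache target
$ ceph daemon mds.a cache status                          # inspect current cache usage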
- 04:11 PM Backport #42774 (In Progress): luminous: mds: add command that modify session metadata
- https://github.com/ceph/ceph/pull/31573
- 02:09 PM Backport #42774 (Resolved): luminous: mds: add command that modify session metadata
- https://github.com/ceph/ceph/pull/31573
- 04:09 PM Bug #42602: client: missing const SEEK_DATA and SEEK_HOLE on ALPINE LINUX
- Patrick Donnelly wrote:
> Better would be to wrap the usage of SEEK_DATA/SEEK_HOLE in #ifdefs. Would you like to sub...
- 03:08 PM Bug #42365 (New): client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- 01:43 PM Feature #24880: pybind/mgr/volumes: restore from snapshot
- clone operation design & interface:
. Interface
Introduce `clone` sub-command in `subvolume snapshot` command
...
- 05:12 AM Bug #42760 (In Progress): kclient: get random mds not work as expected
- 05:11 AM Bug #42760 (Resolved): kclient: get random mds not work as expected
- When getting a random mds from the mdsmap, e.g. when there are 5 mds servers and only one is in the up state, like:
mds = [-1, -1...
- 04:56 AM Feature #4386 (In Progress): kclient: Mount error message when no MDS present
- From my testing, this has been fixed by e9e427f0a14f7.
Will go through the related code and test it more to ma...
- 12:01 AM Bug #42707 (In Progress): Kernel 5.0 CephFS client hang
- 12:00 AM Bug #42707: Kernel 5.0 CephFS client hang
- There was a bad backport that crept into a stable release and it looks like this ubuntu kernel pulled it in:
h...
11/11/2019
- 11:21 PM Documentation #42195 (Resolved): Add doc for exporting cephfs over nfs server deployed using rook
- 10:48 PM Bug #42720 (Fix Under Review): client: remove useless variable for ceph::mutex and ceph::conditio...
- 10:47 PM Bug #42602: client: missing const SEEK_DATA and SEEK_HOLE on ALPINE LINUX
- Better would be to wrap the usage of SEEK_DATA/SEEK_HOLE in #ifdefs. Would you like to submit a PR?
- 08:22 PM Documentation #42406 (Resolved): doc: update mount.ceph man page
- 08:09 PM Documentation #42300 (Resolved): doc/ceph-fuse: -n missing in man page
- 06:47 PM Bug #42101 (Resolved): test_cephfs_shell: test_help doesn't test help
- 05:24 AM Bug #42101 (Fix Under Review): test_cephfs_shell: test_help doesn't test help
- 06:46 PM Bug #42100 (Resolved): cephfs-shell: always returns zero, even when a command has failed
- 05:25 AM Bug #42100 (Fix Under Review): cephfs-shell: always returns zero, even when a command has failed
- 03:46 PM Documentation #37746 (New): doc: how to mount a subdir with ceph-fuse/kclient
- Okay I see. This is not addressed in
https://github.com/ceph/ceph/pull/30754
either. We'll work on this.
- 03:17 PM Documentation #37746: doc: how to mount a subdir with ceph-fuse/kclient
- No, please reopen. Nothing has been changed in that direction.
- 03:09 PM Documentation #37746 (Rejected): doc: how to mount a subdir with ceph-fuse/kclient
- I believe the current documentation already shows how to mount a subdir. Please reopen if you can cite the specific p...
- 03:41 PM Bug #42746: mds crashed in MDCache::request_forward
- Is this from a QA run or local testing?
- 03:34 PM Bug #42746 (Fix Under Review): mds crashed in MDCache::request_forward
- 03:26 PM Bug #42746 (Resolved): mds crashed in MDCache::request_forward
- ...
- 03:18 PM Documentation #23611 (Need More Info): doc: add description of new fs-client auth profile
- 02:11 PM Backport #42672 (In Progress): luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- 02:03 PM Backport #42678 (In Progress): luminous: qa: malformed job
- 12:35 PM Backport #42738 (Resolved): nautilus: mgr/volumes: cleanup libcephfs handles on mgr shutdown
- https://github.com/ceph/ceph/pull/33122
- 11:39 AM Fix #41782: mds: allow stray directories to fragment and switch from 10 stray directories to 1
- Patrick Donnelly wrote:
> Milind Changire wrote:
> > please see attachment out.tar.bz which includes ceph.conf as t...
- 09:05 AM Bug #42061 (Need More Info): volume_client: AssertionError: 237 != 8
- 02:14 AM Bug #42724 (Won't Fix): pybind/mgr/volumes: confirm backwards-compatibility of ceph_volume_client.py
- It is expected that Manila may need to upgrade prior to an existing Ceph cluster in Openstack. It is necessary to con...
- 02:12 AM Bug #42723 (Resolved): pybind/mgr/volumes: add upgrade testing
- We need testing for the volumes plugin consuming volumes configured using the old ceph_volume_client.py interface.
...
11/10/2019
- 03:58 AM Bug #42720 (Resolved): client: remove useless variable for ceph::mutex and ceph::condition_variable
- ceph::mutex flock = ceph::make_mutex("Client::_read_sync flock");
ceph::condition_variable cond
the flock and cond ...
11/08/2019
- 07:50 PM Fix #42450 (In Progress): MDSMonitor: warn if a new file system is being created with an EC defau...
- 06:43 PM Bug #42299 (Pending Backport): mgr/volumes: cleanup libcephfs handles on mgr shutdown
- 06:42 PM Cleanup #42461 (Resolved): mds: reorg MDSTableClient header
- 06:28 PM Backport #42713 (New): nautilus: mgr: daemon state for mds not available
- 06:22 PM Backport #42713 (In Progress): nautilus: mgr: daemon state for mds not available
- 06:14 PM Backport #42713 (Resolved): nautilus: mgr: daemon state for mds not available
- https://github.com/ceph/ceph/pull/30704
- 06:07 PM Bug #41538 (Resolved): mds: wrong compat can cause MDS to be added daemon registry on mgr but not...
- 06:07 PM Bug #42635 (Pending Backport): mgr: daemon state for mds not available
- 05:29 PM Bug #20735: mds: stderr:gzip: /var/log/ceph/ceph-mds.f.log: file size changed while zipping
- The same happened here with the mon, under valgrind....
- 02:55 PM Bug #42707 (Resolved): Kernel 5.0 CephFS client hang
- $ uname -a
Linux Dell-Latitude-ideco 5.0.0-32-generic #34~18.04.2-Ubuntu SMP Thu Oct 10 10:36:02 UTC 2019 x86_64 x86...
- 12:33 PM Bug #42061: volume_client: AssertionError: 237 != 8
- Couldn't reproduce this locally -...
- 08:21 AM Cleanup #42690 (Fix Under Review): mds: reorg Mutation header
- 08:18 AM Cleanup #42690 (Resolved): mds: reorg Mutation header
- 04:56 AM Bug #41565 (In Progress): mds: detect MDS<->MDS messages that are not versioned
11/07/2019
- 05:44 PM Bug #24679 (Pending Backport): qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- 04:30 AM Bug #24679: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- Needed in Luminous since apparently we're testing with 18.04 there too now.
https://tracker.ceph.com/issues/42672
- 05:44 PM Backport #42672 (Need More Info): luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu...
- 04:30 AM Backport #42672 (Resolved): luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- https://github.com/ceph/ceph/pull/31450
- 05:37 PM Bug #42688 (Triaged): Standard CephFS caps do not allow certain dot files to be written
- I have repeatedly setup a Ceph Nautilus cluster via MAAS/Juju (openstack-charmers charms), using the latest Ubuntu cl...
- 02:42 PM Fix #42508 (In Progress): cephfs-shell: print a helpful message instead of a Python backtrace whe...
- 11:37 AM Backport #40892: luminous: mds: cleanup truncating inodes when standby replay mds trim log segments
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31286
m...
- 04:42 AM Backport #40892 (Resolved): luminous: mds: cleanup truncating inodes when standby replay mds trim...
- 11:12 AM Bug #42636 (Fix Under Review): qa: AttributeError: can't set attribute
- 11:02 AM Bug #40477 (Resolved): mds: cleanup truncating inodes when standby replay mds trim log segments
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:01 AM Backport #42678 (Resolved): luminous: qa: malformed job
- https://github.com/ceph/ceph/pull/31449
- 08:20 AM Bug #42675 (Fix Under Review): mds: tolerate no snaprealm encoded in on-disk root inode
- 07:45 AM Bug #42675 (Resolved): mds: tolerate no snaprealm encoded in on-disk root inode
- cephfs-data-scan of luminous and prior versions may update on-disk root inode without encoding snaprealm (cephfs-data...
- 04:01 AM Bug #41031 (Pending Backport): qa: malformed job
- This got into luminous.
- 02:33 AM Feature #16468: kclient: Exclude ceph.* xattr namespace in listxattr
- Not sure if we fixed this recently. There was some discussion a month or so ago about removing the ceph.* xattrs but ...
11/06/2019
11/05/2019
- 03:07 PM Bug #42646 (Fix Under Review): Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test_volume...
- 12:44 PM Bug #42646 (Resolved): Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test_volumes.TestVo...
- Test cases in TestVolumes create subvolumes/groups/snapshots in format `<string>_<random_number>`. Some test cases wi...
- 01:13 PM Backport #42650 (Resolved): nautilus: mds: no assert on frozen dir when scrub path
- https://github.com/ceph/ceph/pull/32071
- 01:13 PM Backport #42649 (Rejected): mimic: mds: no assert on frozen dir when scrub path
- 01:09 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- Still seeing the issue after unmounting and restarting :-(
- 12:40 PM Bug #42642 (Fix Under Review): mds: MDCache.h compile warnings
- 08:23 AM Bug #42642 (Resolved): mds: MDCache.h compile warnings
- ...
- 10:57 AM Bug #42636 (In Progress): qa: AttributeError: can't set attribute
- 04:07 AM Bug #42636 (Resolved): qa: AttributeError: can't set attribute
- Looks like #42478 is not fixed....
- 10:55 AM Bug #42643 (Fix Under Review): vstart.sh: highlight presence of stray conf file
- 10:45 AM Bug #42643 (Resolved): vstart.sh: highlight presence of stray conf file
- If there's a stray conf file at /etc/ceph/ceph.conf, it leads to a misbehaving cluster. Probably an unre...
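A minimal sketch of the kind of warning vstart.sh could emit (hypothetical snippet, not the merged fix):
if [ -e /etc/ceph/ceph.conf ]; then
    echo "WARNING: stray /etc/ceph/ceph.conf found; it may override vstart settings" >&2
fi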
- 08:40 AM Bug #41538 (Fix Under Review): mds: wrong compat can cause MDS to be added daemon registry on mgr...
- Backport will be tracked in #42635.
- 08:40 AM Bug #42635 (Fix Under Review): mgr: daemon state for mds not available
- 04:02 AM Bug #42635 (Resolved): mgr: daemon state for mds not available
- ...
- 06:30 AM Bug #42251 (Pending Backport): mds: no assert on frozen dir when scrub path
- 06:28 AM Cleanup #42311 (Resolved): mds: reorg MDSAuthCaps header
- 06:23 AM Cleanup #42191 (Resolved): mds: reorg MDCache header
- 04:40 AM Bug #42637 (Resolved): qa: ffsb suite causes SLOW_OPS warnings
- ...
11/04/2019
- 09:47 PM Backport #42632 (Rejected): mimic: client: FAILED assert(cap == in->auth_cap)
- 09:47 PM Backport #42631 (Resolved): nautilus: client: FAILED assert(cap == in->auth_cap)
- https://github.com/ceph/ceph/pull/32065
- 06:31 PM Backport #42159 (In Progress): mimic: osdc: objecter ops output does not have useful time informa...
- 06:30 PM Backport #42159: mimic: osdc: objecter ops output does not have useful time information
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/31384
ceph-backport.sh versi...
- 06:21 PM Backport #42148 (In Progress): mimic: mds: mds returns -5 error when the deleted file does not exist
- 06:19 PM Backport #42146 (In Progress): mimic: client: return error when someone passes bad whence value t...
- 06:16 PM Backport #42143 (In Progress): mimic: mds:split the dir if the op makes it oversized, because som...
- 05:48 PM Backport #42327 (In Progress): nautilus: cephfs-shell: not compatible with cmd2 versions after 0....
- 02:18 PM Documentation #42205 (Fix Under Review): doc: update "mount using FUSE" page
- 02:18 PM Documentation #42220 (Fix Under Review): doc: rearrange mounting with kernel doc
- 02:15 PM Documentation #42601 (In Progress): doc: separate "system managed mount" vs. "manual mount" for d...
- 12:58 PM Backport #42615: nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands
- ceph-backport.sh version 15.0.0.6612: attempting to link this Backport tracker issue with GitHub PR https://github.co...
- 12:48 PM Backport #42615 (In Progress): nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands
- 12:46 PM Backport #42615 (Resolved): nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands
- https://github.com/ceph/ceph/pull/31332
- 12:46 PM Feature #41182 (Pending Backport): mgr/volumes: add `fs subvolume extend/shrink` commands
- 09:30 AM Bug #41319 (Can't reproduce): ceph.in: pool creation fails with "AttributeError: 'str' object has...
11/03/2019
- 10:35 PM Bug #42602 (Resolved): client: missing const SEEK_DATA and SEEK_HOLE on ALPINE LINUX
- The non-POSIX-conforming constants SEEK_DATA and SEEK_HOLE are missing on Alpine Linux / musl libc, so you can't compile src...
- 08:39 AM Documentation #42196 (Resolved): doc: Document inter-mds export process
- 08:18 AM Documentation #42601 (Resolved): doc: separate "system managed mount" vs. "manual mount" for diff...
- Client docs show manual commands for performing the mount. We should also suggest the systemd/fstab commands to setup...
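For instance, a hedged sketch of an fstab entry the docs could show (monitor address and secret path are placeholders):
192.168.0.1:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0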
- 08:04 AM Documentation #42190 (Resolved): doc: document MDS journal event types
11/02/2019
- 03:23 PM Backport #41489: luminous: client: client should return EIO when it's unsafe reqs have been dropp...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30242
m...
- 01:23 AM Feature #41182 (In Progress): mgr/volumes: add `fs subvolume extend/shrink` commands
- 01:17 AM Feature #41182: mgr/volumes: add `fs subvolume extend/shrink` commands
- Nautilus Backport: https://github.com/ceph/ceph/pull/31332
11/01/2019
- 11:08 PM Cleanup #42329 (Resolved): mds: reorg MDSCacheObject header
- 11:04 PM Bug #39715 (Resolved): client: optimize rename operation under different quota root
- 11:02 PM Feature #41182 (Pending Backport): mgr/volumes: add `fs subvolume extend/shrink` commands
- 10:55 PM Bug #41799 (Pending Backport): client: FAILED assert(cap == in->auth_cap)
- 10:50 PM Cleanup #42371 (Resolved): mds: reorg MDSDaemon header
- 10:38 PM Bug #42478 (Resolved): qa: AttributeError: can't set attribute
- 10:38 PM Bug #42062 (Resolved): qa: AttributeError: 'MonitorThrasher' object has no attribute 'fs'
- 08:28 PM Bug #42597 (Fix Under Review): mon and mds ok-to-stop commands should validate input names exist ...
- "ceph osd ok-to-stop" accepts only integers, "any", and "all". However, the "mon" and "mds" versions accept any strin...
- 04:58 PM Bug #37378 (Resolved): truncate_seq ordering issues with object creation
- 02:11 PM Bug #42022: mgr/volumes: "Timed out after 30s waiting for ./volumes/_deleting to become empty fro...
- Rishabh Dave wrote:
> Couldn't reproduce this locally and on teuthology. On teuthology the test passed -
> [...]
>...
- 02:10 PM Bug #41415 (Can't reproduce): mgr/volumes: AssertionError: '33' != 'new_pool'
- I haven't seen this since this report. I'll re-open the issue if it comes up again.
- 09:25 AM Bug #40873 (Duplicate): qa: expected MDS_CLIENT_LATE_RELEASE in tasks.cephfs.test_client_recovery...
- dup of https://tracker.ceph.com/issues/40968
- 08:20 AM Feature #39098 (Fix Under Review): mds: lock caching for asynchronous unlink
- 08:15 AM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- Still the same issue. Logs show there are inode locks in 'snap->sync' states. Try unmounting all clients and restarting al...
- 02:53 AM Bug #24088 (Duplicate): mon: slow remove_snaps op reported in cluster health log
- seems like dup of https://tracker.ceph.com/issues/37568
10/31/2019
- 11:44 PM Documentation #42414 (Resolved): doc: hide page contents for Ceph Internals
- 11:43 PM Fix #41782: mds: allow stray directories to fragment and switch from 10 stray directories to 1
- Milind Changire wrote:
> please see attachment out.tar.bz which includes ceph.conf as to why `ceph status` command h... - 02:26 PM Fix #41782: mds: allow stray directories to fragment and switch from 10 stray directories to 1
- please see attachment out.tar.bz which includes ceph.conf as to why `ceph status` command hangs on Fedora 30 laptop.
- 11:01 PM Bug #42515: fs: OpenFileTable object shards have too many k/v pairs
- 06:15 PM Backport #42142 (In Progress): nautilus: mds:split the dir if the op makes it oversized, because ...
- ceph-backport.sh version 15.0.0.6612: attempting to link this Backport tracker issue with GitHub PR https://github.co...
- 03:28 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- I've restarted the mds at 15:06, didn't take any snapshots and dumped the cache with the slow requests around 16:00. ...
- 02:55 PM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- The kernel stack trace just looks like the client is hung waiting for the inode's i_rwsem to become free, which means...
- 02:37 PM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- Venky Shankar wrote:
> saw this with luminous: http://qa-proxy.ceph.com/teuthology/yuriw-2019-10-24_18:14:12-multimd...
- 11:31 AM Backport #42155: nautilus: mds: infinite loop in Locker::file_update_finish()
- https://github.com/ceph/ceph/pull/31287
- 11:30 AM Backport #40892 (In Progress): luminous: mds: cleanup truncating inodes when standby replay mds t...
- 11:04 AM Backport #41489: luminous: client: client should return EIO when it's unsafe reqs have been dropp...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30242
m...
- 10:57 AM Backport #41489 (Resolved): luminous: client: client should return EIO when it's unsafe reqs have...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30242
m...
- 11:02 AM Backport #37906 (In Progress): mimic: make cephfs-data-scan reconstruct snaptable
- 11:01 AM Backport #38643 (In Progress): mimic: fs: "log [WRN] : failed to reconnect caps for missing inodes"
- 11:00 AM Backport #41114 (In Progress): mimic: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- 11:00 AM Backport #42156 (In Progress): mimic: mds: infinite loop in Locker::file_update_finish()
10/30/2019
- 08:02 PM Feature #42530: cephfs-shell: add setxattr and getxattr
- ...and listxattr
- 07:24 PM Bug #42494 (Resolved): ceph: config show can't locate mds
- Backport will be managed by #41525.
- 03:24 PM Backport #41489: luminous: client: client should return EIO when it's unsafe reqs have been dropp...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30242
merged
- 12:26 PM Cleanup #42564 (Fix Under Review): mds: reorg Migrator header
- 11:54 AM Cleanup #42564 (Resolved): mds: reorg Migrator header
- 11:53 AM Cleanup #42563 (Fix Under Review): mds: reorg MDSTableServer header
- 11:26 AM Cleanup #42563 (Resolved): mds: reorg MDSTableServer header
- 10:15 AM Feature #39354 (Closed): mds: derive wrlock from excl caps
- This is obsoleted by the new method to implement async create/unlink.
10/29/2019
- 10:32 PM Documentation #41738 (Resolved): Add documentation for that 'client direct access to data pool'
- 09:51 PM Bug #37723 (Resolved): mds: stopping MDS with a large cache (40+GB) causes it to miss heartbeats
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:50 PM Feature #38022 (Resolved): mds: provide a limit for the maximum number of caps a client may have
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:49 PM Bug #39166 (Resolved): mds: error "No space left on device" when create a large number of dirs
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:48 PM Bug #39943 (Resolved): client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the nanos...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:48 PM Bug #39951 (Resolved): mount: key parsing fail when doing a remount
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:48 PM Bug #40173 (Resolved): TestMisc.test_evict_client fails
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:46 PM Bug #40960 (Resolved): client: failed to drop dn and release caps causing mds stary stacking.
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:39 PM Backport #38129 (Resolved): mimic: mds: provide a limit for the maximum number of caps a client m...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28452
m...
- 07:37 PM Backport #38129: mimic: mds: provide a limit for the maximum number of caps a client may have
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28452
merged
- 09:39 PM Backport #38131 (Resolved): mimic: mds: stopping MDS with a large cache (40+GB) causes it to miss...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28452
m...
- 07:36 PM Backport #38131: mimic: mds: stopping MDS with a large cache (40+GB) causes it to miss heartbeats
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28452
merged
- 09:38 PM Backport #41885 (Resolved): mimic: mds: client evicted twice in one tick
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30950
m...
- 07:36 PM Backport #41885: mimic: mds: client evicted twice in one tick
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30950
merged
- 09:35 PM Backport #40166 (Resolved): luminous: client: ceph.dir.rctime xattr value incorrectly prefixes "0...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28502
m...
- 03:35 PM Backport #40166: luminous: client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the n...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28502
merged
- 09:35 PM Backport #40807 (Resolved): luminous: mds: msg weren't destroyed before handle_client_reconnect r...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29097
m...
- 03:34 PM Backport #40807: luminous: mds: msg weren't destroyed before handle_client_reconnect returned, if...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29097
merged
- 09:35 PM Backport #40163 (Resolved): luminous: mount: key parsing fail when doing a remount
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29226
m...
- 03:33 PM Backport #40163: luminous: mount: key parsing fail when doing a remount
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29226
merged
- 09:35 PM Backport #40218 (Resolved): luminous: TestMisc.test_evict_client fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29229
m...
- 03:33 PM Backport #40218: luminous: TestMisc.test_evict_client fails
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29229
merged
Reviewed-by: Venky Shankar <vshankar@redhat....
- 09:34 PM Backport #39691 (Resolved): luminous: mds: error "No space left on device" when create a large n...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29829
m...
- 03:32 PM Backport #39691: luminous: mds: error "No space left on device" when create a large number of dirs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29829
merged
- 09:34 PM Backport #41000 (Resolved): luminous: client: failed to drop dn and release caps causing mds star...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29830
m...
- 03:32 PM Backport #41000: luminous: client: failed to drop dn and release caps causing mds stary stacking.
- Xiaoxi Chen wrote:
> https://github.com/ceph/ceph/pull/29830
merged
- 09:34 PM Bug #40286 (Resolved): luminous: qa: remove ubuntu 14.04 testing
- 03:29 PM Bug #40286: luminous: qa: remove ubuntu 14.04 testing
- https://github.com/ceph/ceph/pull/28701 merged
- 09:33 PM Backport #42039 (Resolved): luminous: client: _readdir_cache_cb() may use the readdir_cache alrea...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30934
m...
- 03:28 PM Backport #42039: luminous: client: _readdir_cache_cb() may use the readdir_cache already clear
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30934
merged
- 08:25 PM Bug #42515 (Fix Under Review): fs: OpenFileTable object shards have too many k/v pairs
- 02:23 AM Bug #42515 (In Progress): fs: OpenFileTable object shards have too many k/v pairs
- 07:26 PM Bug #42494 (Fix Under Review): ceph: config show can't locate mds
- 01:47 PM Bug #42494: ceph: config show can't locate mds
- Sage, assigning you since I believe you wanted to look into this.
- 06:22 PM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
- 5.0.0-32 introduced the bad backport, -33 reverted it:
http://changelogs.ubuntu.com/changelogs/pool/main/l/linux/l...
- 05:19 PM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
- Was 5.0.32 actually fixed?
- 03:45 PM Feature #5520 (New): osdc: should handle namespaces
- 02:01 PM Feature #42530 (Resolved): cephfs-shell: add setxattr and getxattr
- Allow cephfs-shell to set and fetch xattrs. This would be nice for testing selinux, for instance.
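Until cephfs-shell grows these commands, the equivalent on a mounted file system, as a sketch (paths are placeholders):
$ setfattr -n user.test -v hello /mnt/cephfs/somefile   # set an xattr
$ getfattr -n user.test /mnt/cephfs/somefile            # read it back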
- 01:50 PM Bug #42466 (Duplicate): Missing subvolumegroup commands
- The nautilus backport is still in review, https://tracker.ceph.com/issues/42239
`subvolumegroup ls` should be availa...
- 01:09 PM Bug #42478 (Fix Under Review): qa: AttributeError: can't set attribute
- 12:17 PM Bug #42478 (In Progress): qa: AttributeError: can't set attribute
- https://github.com/ceph/ceph-ci/blob/0772e8a667e86de7945704f53c601d09a49232f1/qa/tasks/mds_thrash.py#L21
https://git...
- 12:57 PM Feature #24880: pybind/mgr/volumes: restore from snapshot
- This feature will be used by Ceph CSI to create a PVC from a snapshot [1], and by OpenStack Manila to create a share ...
- 12:26 PM Bug #40197: The command 'node ls' sometimes output some incorrect information about mds.
- Min Shi wrote:
> I repeated your steps, but the phenomenon is a little different. When I test the command `ceph node l...
- 09:02 AM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- saw this with luminous: http://qa-proxy.ceph.com/teuthology/yuriw-2019-10-24_18:14:12-multimds-wip-yuri8-testing-2019...
- 06:23 AM Bug #42062 (Fix Under Review): qa: AttributeError: 'MonitorThrasher' object has no attribute 'fs'
- 03:37 AM Bug #42434 (Resolved): qa: TOO_FEW_PGS in mimic during upgrade suite tests
10/28/2019
- 10:09 PM Bug #42516 (Resolved): mds: some mutations have initiated (TrackedOp) set to 0
- From Brad:
> I was looking for tracker ops that had been created with 'initiated'
> set to zero and came across t...
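The in-flight ops and their "initiated" timestamps can be inspected via the admin socket (mds.a is a placeholder):
$ ceph daemon mds.a ops   # dump in-flight ops, including when each was initiated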
- 09:09 PM Bug #42515: fs: OpenFileTable object shards have too many k/v pairs
- ceph-users - http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-October/037076.html
- 08:50 PM Bug #42515 (Resolved): fs: OpenFileTable object shards have too many k/v pairs
- Since #40583 lowered the omap k/v limit to 200k, we've been seeing messages from deep scrubs showing the open file ta...
- 04:04 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- please dump cache and share it again
- 03:48 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- I restarted all mds and did not create a snapshot after that, but still seeing those slow requests..
- 03:52 PM Tasks #42085 (Fix Under Review): qa: create tests for new recover_session=clean option
- 12:50 PM Fix #42508 (Resolved): cephfs-shell: print a helpful message instead of a Python backtrace when n...
- Currently, running @cephfs-shell@ on a blank system without any configuration file fails with a Python backtrace:
<p...
- 04:19 AM Bug #42478: qa: AttributeError: can't set attribute
- Jos Collin wrote:
> This happens when there is no setter in thrasher.py. Can you show me your thrasher.py and mds_th...
- 03:54 AM Tasks #39998: client: audit ACL
- I think we decided this one needs to be tabled for now. Fixing it will likely require a lot of changes to the cephfs p...
10/27/2019
- 03:59 PM Bug #42365: client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- ...
- 06:29 AM Bug #40197: The command 'node ls' sometimes output some incorrect information about mds.
- I repeated your steps, but the phenomenon is a little different. When I test the command `ceph node ls`, it only shows t...
- 12:11 AM Feature #42479 (Fix Under Review): mgr/volumes: add `fs subvolume resize infinite` command
10/25/2019
- 09:41 PM Bug #42494 (Resolved): ceph: config show can't locate mds
- ...
- 04:14 PM Bug #42478 (Need More Info): qa: AttributeError: can't set attribute
- This happens when there is no setter in thrasher.py. Can you show me your thrasher.py and mds_thrash.py, so that I ge...
- 02:40 PM Bug #42491: "probably no MDS server is up?" in upgrade:jewel-x-wip-yuri-luminous_10.22.19
- and in this job http://pulpito.ceph.com/yuriw-2019-10-23_19:22:44-upgrade:jewel-x-wip-yuri-luminous_10.22.19-distro-b...
- 02:39 PM Bug #42491 (New): "probably no MDS server is up?" in upgrade:jewel-x-wip-yuri-luminous_10.22.19
- http://qa-proxy.ceph.com/teuthology/yuriw-2019-10-23_19:22:44-upgrade:jewel-x-wip-yuri-luminous_10.22.19-distro-basic...
- 10:31 AM Feature #42479 (In Progress): mgr/volumes: add `fs subvolume resize infinite` command
- 01:05 AM Feature #42479 (Resolved): mgr/volumes: add `fs subvolume resize infinite` command
- Add a resize infinite command to unset the quota for a subvolume.
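Expected usage, as a hedged sketch (volume and subvolume names are placeholders):
$ ceph fs subvolume resize vol1 subvol1 infinite   # unset the quota on subvol1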
- 06:22 AM Bug #40371 (Resolved): cephfs-shell: du must ignore non-directory files
- 06:07 AM Backport #41861 (Rejected): nautilus: cephfs-shell: du must ignore non-directory files
- I talked with Patrick; it's fine to cancel this ticket, so I am marking it as "Rejected".
- 12:16 AM Bug #42388: mimic: Test failure: test_full_different_file (tasks.cephfs.test_full.TestQuotaFull)
- Venky Shankar wrote:
> I could not reproduce this with vstart_runner. One way to sneak in a write (+ fsync) was to n...
10/24/2019
- 11:09 PM Bug #42365 (Need More Info): client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- Can you also share any surrounding debug log messages.
- 10:38 PM Bug #42478 (Resolved): qa: AttributeError: can't set attribute
- ...
- 10:18 PM Bug #42436 (Resolved): qa: tasks.cephfs.test_volume_client.TestVolumeClient test_data_isolated fa...
- 03:58 PM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
- Ilya Dryomov wrote:
> The backport to 4.19 was incorrect, 4.19.76 is busted. Fixed in 4.19.77.
This goes for Ubu...
- 01:20 PM Cleanup #42468 (Fix Under Review): mds: reorg MDSTable header
- 01:11 PM Cleanup #42468 (Resolved): mds: reorg MDSTable header
- 01:14 PM Bug #42388: mimic: Test failure: test_full_different_file (tasks.cephfs.test_full.TestQuotaFull)
- I could not reproduce this with vstart_runner. One way to sneak in a write (+ fsync) was to not wait for the mons to ...
- 01:01 PM Cleanup #42465 (Fix Under Review): mds: reorg MDSRank header
- 11:39 AM Cleanup #42465 (Resolved): mds: reorg MDSRank header
- 12:31 PM Bug #42467 (Duplicate): mds: daemon crashes while updating blacklist
- Ubuntu 18.04.3 LTS
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus (stable)
We have setup...
- 12:00 PM Bug #42466 (Duplicate): Missing subvolumegroup commands
- When I run the command:
ceph fs subvolumegroup ls <vol_name>
It says "Error EINVAL: invalid command"
Here is...
- 11:11 AM Cleanup #42464 (Fix Under Review): mds: reorg MDSMap header
- 11:00 AM Cleanup #42464 (Resolved): mds: reorg MDSMap header
- 10:44 AM Backport #42462 (In Progress): nautilus: doc: MDS and metadata pool hardware requirements/recomme...
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 10:42 AM Backport #42462 (Resolved): nautilus: doc: MDS and metadata pool hardware requirements/recommenda...
- https://github.com/ceph/ceph/pull/31116
- 10:42 AM Backport #42463 (Rejected): mimic: doc: MDS and metadata pool hardware requirements/recommendations
- 10:42 AM Documentation #39620 (Pending Backport): doc: MDS and metadata pool hardware requirements/recomme...
- 10:04 AM Cleanup #42461 (Fix Under Review): mds: reorg MDSTableClient header
- 09:55 AM Cleanup #42461 (Resolved): mds: reorg MDSTableClient header
- 06:52 AM Bug #24403: mon failed to return metadata for mds
- Did you restart the mds on sen2agriprod, or did you restart all of the mds daemons? We have a similar case, losing all the mds's ...
- 12:55 AM Feature #42451 (Resolved): mds: add root_squash
- Allow a root squash mode via the MDS capability. The purpose here is not so much to prevent a true adversary (the cli...
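A possible cap shape, purely illustrative since the syntax is not settled in this ticket:
$ ceph fs authorize cephfs client.foo / rw root_squash   # hypothetical: rw access with root squashed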