Activity
From 10/18/2019 to 11/16/2019
11/16/2019
- 04:56 PM Bug #41228 (Duplicate): mon: deleting a CephFS and its pools causes MONs to crash
- 06:40 AM Bug #40773 (Resolved): qa: 'ceph osd require-osd-release nautilus' fails
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:34 AM Backport #41495 (Resolved): nautilus: qa: 'ceph osd require-osd-release nautilus' fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31040
m...
11/15/2019
- 10:40 PM Backport #41495: nautilus: qa: 'ceph osd require-osd-release nautilus' fails
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31040
merged
- 09:15 PM Bug #42842 (Resolved): CephFS linux kernel hang, v4.15
- Simple file system operations like df and ls hang and show a status of D+ when running ps. dmesg logs sometimes show ...
- 09:05 PM Bug #41841 (Resolved): mgr/volumes: missing protection for `fs volume rm` command
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:05 PM Feature #41842 (Resolved): mgr/volumes: list FS subvolumes, subvolume groups, and their snapshots
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:04 PM Bug #42096 (Resolved): mgr/volumes: creating subvolume and subvolume group snapshot fails
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 05:39 PM Bug #42837: qa: test_ops_throttle failed with `RuntimeError: Ops in flight high water is unexpect...
- might be same as: https://tracker.ceph.com/issues/16881
but the test case is different, so not marking as dup.
- 04:23 PM Bug #42837 (New): qa: test_ops_throttle failed with `RuntimeError: Ops in flight high water is un...
- ...
- 05:21 PM Bug #42829 (Fix Under Review): tools/cephfs: linkages injected by cephfs-data-scan have first == ...
- 07:18 AM Bug #42829 (Resolved): tools/cephfs: linkages injected by cephfs-data-scan have first == head
- something like
[inode 0x100000367e5 [head,head] /pg_xlog_archives/9.6/smobile/000000200000002C000000BB.00000028.ba...
- 03:22 PM Bug #42835 (Resolved): qa: test_scrub_abort fails during check_task_status("idle")
- ...
- 10:09 AM Feature #42831 (Resolved): mds: add config to deny all client reconnects
- This helps reduce mds failover time.
- 06:08 AM Bug #42760: kclient: get random mds not work as expected
- Should be fixed by https://github.com/ceph/ceph-client/commit/b570777a96d5dd15b556e73d90177e20cd0b453b
- 05:59 AM Bug #42827 (In Progress): mds: when mounting the extra slash(es) at the end of server path will b...
- 05:58 AM Bug #42827 (Won't Fix): mds: when mounting the extra slash(es) at the end of server path will be ...
- This bug is copied from https://tracker.ceph.com/issues/42771 and needs to be fixed in the MDS.
This will be very re...
- 05:51 AM Bug #42720 (Resolved): client: remove useless variable for ceph::mutex and ceph::condition_variable
- 03:53 AM Bug #42826 (Fix Under Review): mds: client does not respond to cap revoke after session stale->r...
- 03:45 AM Bug #42826 (Resolved): mds: client does not respond to cap revoke after session stale->resume ci...
- /a/pdonnell-2019-11-11_21:12:02-multimds-wip-pdonnell-testing-20191111.154849-distro-basic-smithi/4497461
11/14/2019
- 07:13 PM Bug #42806 (Resolved): test_cephfs_shell: stderr is uninitialized for run_cephfs_shell_cmd
- 07:12 PM Bug #42806 (Fix Under Review): test_cephfs_shell: stderr is uninitialized for run_cephfs_shell_cmd
- 08:24 AM Bug #42806 (Resolved): test_cephfs_shell: stderr is uninitialized for run_cephfs_shell_cmd
- This can break tests accessing stderr on teuthology without breaking them on vstart_cluster.
- 07:06 PM Fix #42508 (Resolved): cephfs-shell: print a helpful message instead of a Python backtrace when n...
- 06:15 PM Backport #42239: nautilus: mgr/volumes: list FS subvolumes, subvolume groups, and their snapshots
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30827
m...
- 05:37 PM Backport #42239 (Resolved): nautilus: mgr/volumes: list FS subvolumes, subvolume groups, and thei...
- 06:15 PM Backport #42180: nautilus: mgr/volumes: creating subvolume and subvolume group snapshot fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31076
m...
- 05:35 PM Backport #42180 (Resolved): nautilus: mgr/volumes: creating subvolume and subvolume group snapsho...
- 06:15 PM Backport #42149: nautilus: mgr/volumes: missing protection for `fs volume rm` command
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30768
m...
- 05:33 PM Backport #42149 (Resolved): nautilus: mgr/volumes: missing protection for `fs volume rm` command
- 03:51 PM Bug #40867 (Resolved): mgr: failover during in qa testing causes unresponsive client warnings
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:33 PM Backport #40944 (Resolved): nautilus: mgr: failover during in qa testing causes unresponsive clie...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29649
m...
- 02:29 PM Cleanup #42813 (Fix Under Review): mds: reorg RecoveryQueue header
- 01:54 PM Cleanup #42813 (Resolved): mds: reorg RecoveryQueue header
- 02:16 PM Documentation #42205 (Resolved): doc: update "mount using FUSE" page
- 02:16 PM Documentation #42220 (Resolved): doc: rearrange mounting with kernel doc
- 02:15 PM Documentation #42298 (Resolved): doc: move mount automation part from mounting doc to fstab doc
- 02:15 PM Documentation #42601 (Resolved): doc: separate "system managed mount" vs. "manual mount" for diff...
- 12:47 PM Bug #40863 (Fix Under Review): cephfs-shell: rmdir with -p attempts to delete non-dir files as well
- 12:46 PM Bug #40861 (Fix Under Review): cephfs-shell: -p doesn't work for rmdir
- 09:33 AM Bug #39651: qa: test_kill_mdstable fails unexpectedly
- I talked with Zheng. He told me that many tests cannot be executed successfully with vstart cluster and this is one o...
11/13/2019
- 10:58 PM Bug #42602 (Fix Under Review): client: missing const SEEK_DATA and SEEK_HOLE on ALPINE LINUX
- 08:13 PM Backport #40944: nautilus: mgr: failover during in qa testing causes unresponsive client warnings
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29649
merged
- 07:32 PM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- Ok, I posted a couple of patches to the mailing list this morning. The first one addresses this problem, and the seco...
- 06:08 PM Bug #39651 (In Progress): qa: test_kill_mdstable fails unexpectedly
- 06:08 PM Bug #39651: qa: test_kill_mdstable fails unexpectedly
- Part of the problem is that the pipe character wasn't trimmed from output while extracting the path -...
- 11:10 AM Cleanup #42792 (Fix Under Review): mds: reorg OpenFileTable header
- 10:30 AM Cleanup #42792 (Resolved): mds: reorg OpenFileTable header
- 11:04 AM Cleanup #42793 (Fix Under Review): mds: reorg PurgeQueue header
- 11:00 AM Cleanup #42793 (Resolved): mds: reorg PurgeQueue header
- 10:21 AM Backport #42790 (In Progress): nautilus: mgr/volumes: add `fs subvolume resize infinite` command
- 08:12 AM Backport #42790 (Resolved): nautilus: mgr/volumes: add `fs subvolume resize infinite` command
- https://github.com/ceph/ceph/pull/31332
- 08:40 AM Bug #42707: Kernel 5.0 CephFS client hang
- 5.0.0-33.35~18.04.1 seems to fix this issue. I'm installing and testing now.
11/12/2019
- 09:32 PM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- I got lucky and reproduced it once, but haven't been able to do so since.
Still, I think I may understand what's h...
- 05:42 PM Bug #36348 (In Progress): luminous(?): blogbench I/O with two kernel clients; one stalls
- Ran crash on the live (stuck) kernel. Most of the "blogbench" threads are stuck trying to acquire inode->i_rwsem for ...
- 05:06 PM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- I set up 2 kclients and kicked off a blogbench run on each with both pointed at the same directory on cephfs. They bo...
- 05:55 PM Bug #40863 (In Progress): cephfs-shell: rmdir with -p attempts to delete non-dir files as well
- 05:54 PM Bug #40861 (In Progress): cephfs-shell: -p doesn't work for rmdir
- 05:25 PM Feature #42479 (Pending Backport): mgr/volumes: add `fs subvolume resize infinite` command
- 05:06 PM Bug #42759 (Fix Under Review): mds: inode lock stuck at unstable state after evicting client
- 03:33 AM Bug #42759 (Resolved): mds: inode lock stuck at unstable state after evicting client
- 05:05 PM Bug #42770 (Fix Under Review): Regulary trim inode in memory
- 12:29 PM Bug #42770 (Closed): Regulary trim inode in memory
- Inodes are currently trimmed only when the cache reaches its limit or they are at the bottom of the LRU. Too many inodes in memory would lead to...
- 04:11 PM Backport #42774 (In Progress): luminous: mds: add command that modify session metadata
- https://github.com/ceph/ceph/pull/31573
- 02:09 PM Backport #42774 (Resolved): luminous: mds: add command that modify session metadata
- https://github.com/ceph/ceph/pull/31573
- 04:09 PM Bug #42602: client: missing const SEEK_DATA and SEEK_HOLE on ALPINE LINUX
- Patrick Donnelly wrote:
> Better would be to wrap the usage of SEEK_DATA/SEEK_HOLE in #ifdefs. Would you like to sub...
- 03:08 PM Bug #42365 (New): client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- 01:43 PM Feature #24880: pybind/mgr/volumes: restore from snapshot
- clone operation design & interface:
. Interface
Introduce `clone` sub-command in `subvolume snapshot` command
...
- 05:12 AM Bug #42760 (In Progress): kclient: get random mds not work as expected
- 05:11 AM Bug #42760 (Resolved): kclient: get random mds not work as expected
- When getting a random mds from the mdsmap, e.g. when there are 5 MDS servers and only one is in the up state, like:
mds = [-1, -1...
- 04:56 AM Feature #4386 (In Progress): kclient: Mount error message when no MDS present
- Currently from my test this has been fixed by e9e427f0a14f7.
Will go through the related code and test it more to ma...
- 12:01 AM Bug #42707 (In Progress): Kernel 5.0 CephFS client hang
- 12:00 AM Bug #42707: Kernel 5.0 CephFS client hang
- There was a bad backport that crept into a stable release and it looks like this ubuntu kernel pulled it in:
h...
11/11/2019
- 11:21 PM Documentation #42195 (Resolved): Add doc for exporting cephfs over nfs server deployed using rook
- 10:48 PM Bug #42720 (Fix Under Review): client: remove useless variable for ceph::mutex and ceph::conditio...
- 10:47 PM Bug #42602: client: missing const SEEK_DATA and SEEK_HOLE on ALPINE LINUX
- Better would be to wrap the usage of SEEK_DATA/SEEK_HOLE in #ifdefs. Would you like to submit a PR?
- 08:22 PM Documentation #42406 (Resolved): doc: update mount.ceph man page
- 08:09 PM Documentation #42300 (Resolved): doc/ceph-fuse: -n missing in man page
- 06:47 PM Bug #42101 (Resolved): test_cephfs_shell: test_help doesn't test help
- 05:24 AM Bug #42101 (Fix Under Review): test_cephfs_shell: test_help doesn't test help
- 06:46 PM Bug #42100 (Resolved): cephfs-shell: always returns zero, even when a command has failed
- 05:25 AM Bug #42100 (Fix Under Review): cephfs-shell: always returns zero, even when a command has failed
- 03:46 PM Documentation #37746 (New): doc: how to mount a subdir with ceph-fuse/kclient
- Okay I see. This is not addressed in
https://github.com/ceph/ceph/pull/30754
either. We'll work on this.
- 03:17 PM Documentation #37746: doc: how to mount a subdir with ceph-fuse/kclient
- No, please reopen. Nothing has been changed in that direction.
- 03:09 PM Documentation #37746 (Rejected): doc: how to mount a subdir with ceph-fuse/kclient
- I believe the current documentation already shows how to mount a subdir. Please reopen if you can cite the specific p...
- 03:41 PM Bug #42746: mds crashed in MDCache::request_forward
- Is this from a QA run or local testing?
- 03:34 PM Bug #42746 (Fix Under Review): mds crashed in MDCache::request_forward
- 03:26 PM Bug #42746 (Resolved): mds crashed in MDCache::request_forward
- ...
- 03:18 PM Documentation #23611 (Need More Info): doc: add description of new fs-client auth profile
- 02:11 PM Backport #42672 (In Progress): luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- 02:03 PM Backport #42678 (In Progress): luminous: qa: malformed job
- 12:35 PM Backport #42738 (Resolved): nautilus: mgr/volumes: cleanup libcephfs handles on mgr shutdown
- https://github.com/ceph/ceph/pull/33122
- 11:39 AM Fix #41782: mds: allow stray directories to fragment and switch from 10 stray directories to 1
- Patrick Donnelly wrote:
> Milind Changire wrote:
> > please see attachment out.tar.bz which includes ceph.conf as t...
- 09:05 AM Bug #42061 (Need More Info): volume_client: AssertionError: 237 != 8
- 02:14 AM Bug #42724 (Won't Fix): pybind/mgr/volumes: confirm backwards-compatibility of ceph_volume_client.py
- It is expected that Manila may need to upgrade prior to an existing Ceph cluster in OpenStack. It is necessary to con...
- 02:12 AM Bug #42723 (Resolved): pybind/mgr/volumes: add upgrade testing
- We need testing for the volumes plugin consuming volumes configured using the old ceph_volume_client.py interface.
...
11/10/2019
- 03:58 AM Bug #42720 (Resolved): client: remove useless variable for ceph::mutex and ceph::condition_variable
- ceph::mutex flock = ceph::make_mutex("Client::_read_sync flock");
ceph::condition_variable cond
the flock and cond ...
11/08/2019
- 07:50 PM Fix #42450 (In Progress): MDSMonitor: warn if a new file system is being created with an EC defau...
- 06:43 PM Bug #42299 (Pending Backport): mgr/volumes: cleanup libcephfs handles on mgr shutdown
- 06:42 PM Cleanup #42461 (Resolved): mds: reorg MDSTableClient header
- 06:28 PM Backport #42713 (New): nautilus: mgr: daemon state for mds not available
- 06:22 PM Backport #42713 (In Progress): nautilus: mgr: daemon state for mds not available
- 06:14 PM Backport #42713 (Resolved): nautilus: mgr: daemon state for mds not available
- https://github.com/ceph/ceph/pull/30704
- 06:07 PM Bug #41538 (Resolved): mds: wrong compat can cause MDS to be added daemon registry on mgr but not...
- 06:07 PM Bug #42635 (Pending Backport): mgr: daemon state for mds not available
- 05:29 PM Bug #20735: mds: stderr:gzip: /var/log/ceph/ceph-mds.f.log: file size changed while zipping
- Here the same happened with the mon, with valgrind....
- 02:55 PM Bug #42707 (Resolved): Kernel 5.0 CephFS client hang
- $ uname -a
Linux Dell-Latitude-ideco 5.0.0-32-generic #34~18.04.2-Ubuntu SMP Thu Oct 10 10:36:02 UTC 2019 x86_64 x86...
- 12:33 PM Bug #42061: volume_client: AssertionError: 237 != 8
- Couldn't reproduce this locally -...
- 08:21 AM Cleanup #42690 (Fix Under Review): mds: reorg Mutation header
- 08:18 AM Cleanup #42690 (Resolved): mds: reorg Mutation header
- 04:56 AM Bug #41565 (In Progress): mds: detect MDS<->MDS messages that are not versioned
11/07/2019
- 05:44 PM Bug #24679 (Pending Backport): qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- 04:30 AM Bug #24679: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- Needed in Luminous since apparently we're testing with 18.04 there too now.
https://tracker.ceph.com/issues/42672
- 05:44 PM Backport #42672 (Need More Info): luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu...
- 04:30 AM Backport #42672 (Resolved): luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- https://github.com/ceph/ceph/pull/31450
- 05:37 PM Bug #42688 (Triaged): Standard CephFS caps do not allow certain dot files to be written
- I have repeatedly setup a Ceph Nautilus cluster via MAAS/Juju (openstack-charmers charms), using the latest Ubuntu cl...
- 02:42 PM Fix #42508 (In Progress): cephfs-shell: print a helpful message instead of a Python backtrace whe...
- 11:37 AM Backport #40892: luminous: mds: cleanup truncating inodes when standby replay mds trim log segments
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31286
m...
- 04:42 AM Backport #40892 (Resolved): luminous: mds: cleanup truncating inodes when standby replay mds trim...
- 11:12 AM Bug #42636 (Fix Under Review): qa: AttributeError: can't set attribute
- 11:02 AM Bug #40477 (Resolved): mds: cleanup truncating inodes when standby replay mds trim log segments
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:01 AM Backport #42678 (Resolved): luminous: qa: malformed job
- https://github.com/ceph/ceph/pull/31449
- 08:20 AM Bug #42675 (Fix Under Review): mds: tolerate no snaprealm encoded in on-disk root inode
- 07:45 AM Bug #42675 (Resolved): mds: tolerate no snaprealm encoded in on-disk root inode
- cephfs-data-scan of luminous and prior versions may update on-disk root inode without encoding snaprealm (cephfs-data...
- 04:01 AM Bug #41031 (Pending Backport): qa: malformed job
- This got into luminous.
- 02:33 AM Feature #16468: kclient: Exclude ceph.* xattr namespace in listxattr
- Not sure if we fixed this recently. There was some discussion a month or so ago about removing the ceph.* xattrs but ...
11/05/2019
- 03:07 PM Bug #42646 (Fix Under Review): Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test_volume...
- 12:44 PM Bug #42646 (Resolved): Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test_volumes.TestVo...
- Test cases in TestVolumes create subvolumes/groups/snapshots in format `<string>_<random_number>`. Some test cases wi...
- 01:13 PM Backport #42650 (Resolved): nautilus: mds: no assert on frozen dir when scrub path
- https://github.com/ceph/ceph/pull/32071
- 01:13 PM Backport #42649 (Rejected): mimic: mds: no assert on frozen dir when scrub path
- 01:09 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- Still seeing the issue after unmounting and restarting :-(
- 12:40 PM Bug #42642 (Fix Under Review): mds: MDCache.h compile warnings
- 08:23 AM Bug #42642 (Resolved): mds: MDCache.h compile warnings
- ...
- 10:57 AM Bug #42636 (In Progress): qa: AttributeError: can't set attribute
- 04:07 AM Bug #42636 (Resolved): qa: AttributeError: can't set attribute
- Looks like #42478 is not fixed....
- 10:55 AM Bug #42643 (Fix Under Review): vstart.sh: highlight presence of stray conf file
- 10:45 AM Bug #42643 (Resolved): vstart.sh: highlight presence of stray conf file
- If there's a stray conf file present in /etc/ceph/ceph.conf, then it leads to a misbehaving cluster. Probably an unre...
- 08:40 AM Bug #41538 (Fix Under Review): mds: wrong compat can cause MDS to be added daemon registry on mgr...
- Backport will be tracked in #42635.
- 08:40 AM Bug #42635 (Fix Under Review): mgr: daemon state for mds not available
- 04:02 AM Bug #42635 (Resolved): mgr: daemon state for mds not available
- ...
- 06:30 AM Bug #42251 (Pending Backport): mds: no assert on frozen dir when scrub path
- 06:28 AM Cleanup #42311 (Resolved): mds: reorg MDSAuthCaps header
- 06:23 AM Cleanup #42191 (Resolved): mds: reorg MDCache header
- 04:40 AM Bug #42637 (Resolved): qa: ffsb suite causes SLOW_OPS warnings
- ...
11/04/2019
- 09:47 PM Backport #42632 (Rejected): mimic: client: FAILED assert(cap == in->auth_cap)
- 09:47 PM Backport #42631 (Resolved): nautilus: client: FAILED assert(cap == in->auth_cap)
- https://github.com/ceph/ceph/pull/32065
- 06:31 PM Backport #42159 (In Progress): mimic: osdc: objecter ops output does not have useful time informa...
- 06:30 PM Backport #42159: mimic: osdc: objecter ops output does not have useful time information
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/31384
ceph-backport.sh versi...
- 06:21 PM Backport #42148 (In Progress): mimic: mds: mds returns -5 error when the deleted file does not exist
- 06:19 PM Backport #42146 (In Progress): mimic: client: return error when someone passes bad whence value t...
- 06:16 PM Backport #42143 (In Progress): mimic: mds:split the dir if the op makes it oversized, because som...
- 05:48 PM Backport #42327 (In Progress): nautilus: cephfs-shell: not compatible with cmd2 versions after 0....
- 02:18 PM Documentation #42205 (Fix Under Review): doc: update "mount using FUSE" page
- 02:18 PM Documentation #42220 (Fix Under Review): doc: rearrange mounting with kernel doc
- 02:15 PM Documentation #42601 (In Progress): doc: separate "system managed mount" vs. "manual mount" for d...
- 12:58 PM Backport #42615: nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands
- ceph-backport.sh version 15.0.0.6612: attempting to link this Backport tracker issue with GitHub PR https://github.co...
- 12:48 PM Backport #42615 (In Progress): nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands
- 12:46 PM Backport #42615 (Resolved): nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands
- https://github.com/ceph/ceph/pull/31332
- 12:46 PM Feature #41182 (Pending Backport): mgr/volumes: add `fs subvolume extend/shrink` commands
- 09:30 AM Bug #41319 (Can't reproduce): ceph.in: pool creation fails with "AttributeError: 'str' object has...
11/03/2019
- 10:35 PM Bug #42602 (Resolved): client: missing const SEEK_DATA and SEEK_HOLE on ALPINE LINUX
- The non-POSIX-conforming constants SEEK_DATA and SEEK_HOLE are missing on Alpine Linux / musl libc, so you can't compile src...
- 08:39 AM Documentation #42196 (Resolved): doc: Document inter-mds export process
- 08:18 AM Documentation #42601 (Resolved): doc: separate "system managed mount" vs. "manual mount" for diff...
- Client docs show manual commands for performing the mount. We should also suggest the systemd/fstab commands to setup...
- 08:04 AM Documentation #42190 (Resolved): doc: document MDS journal event types
11/02/2019
- 03:23 PM Backport #41489: luminous: client: client should return EIO when it's unsafe reqs have been dropp...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30242
m...
- 01:23 AM Feature #41182 (In Progress): mgr/volumes: add `fs subvolume extend/shrink` commands
- 01:17 AM Feature #41182: mgr/volumes: add `fs subvolume extend/shrink` commands
- Nautilus Backport: https://github.com/ceph/ceph/pull/31332
11/01/2019
- 11:08 PM Cleanup #42329 (Resolved): mds: reorg MDSCacheObject header
- 11:04 PM Bug #39715 (Resolved): client: optimize rename operation under different quota root
- 11:02 PM Feature #41182 (Pending Backport): mgr/volumes: add `fs subvolume extend/shrink` commands
- 10:55 PM Bug #41799 (Pending Backport): client: FAILED assert(cap == in->auth_cap)
- 10:50 PM Cleanup #42371 (Resolved): mds: reorg MDSDaemon header
- 10:38 PM Bug #42478 (Resolved): qa: AttributeError: can't set attribute
- 10:38 PM Bug #42062 (Resolved): qa: AttributeError: 'MonitorThrasher' object has no attribute 'fs'
- 08:28 PM Bug #42597 (Fix Under Review): mon and mds ok-to-stop commands should validate input names exist ...
- "ceph osd ok-to-stop" accepts only integers, "any", and "all". However, the "mon" and "mds" versions accept any strin...
- 04:58 PM Bug #37378 (Resolved): truncate_seq ordering issues with object creation
- 02:11 PM Bug #42022: mgr/volumes: "Timed out after 30s waiting for ./volumes/_deleting to become empty fro...
- Rishabh Dave wrote:
> Couldn't reproduce this locally and on teuthology. On teuthology the test passed -
> [...]
>...
- 02:10 PM Bug #41415 (Can't reproduce): mgr/volumes: AssertionError: '33' != 'new_pool'
- I haven't seen this since this report. I'll re-open the issue if it comes up again.
- 09:25 AM Bug #40873 (Duplicate): qa: expected MDS_CLIENT_LATE_RELEASE in tasks.cephfs.test_client_recovery...
- dup of https://tracker.ceph.com/issues/40968
- 08:20 AM Feature #39098 (Fix Under Review): mds: lock caching for asynchronous unlink
- 08:15 AM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- still the same issue. logs show there are inode locks in 'snap->sync' states. try unmounting all clients and restart al...
- 02:53 AM Bug #24088 (Duplicate): mon: slow remove_snaps op reported in cluster health log
- seems like dup of https://tracker.ceph.com/issues/37568
10/31/2019
- 11:44 PM Documentation #42414 (Resolved): doc: hide page contents for Ceph Internals
- 11:43 PM Fix #41782: mds: allow stray directories to fragment and switch from 10 stray directories to 1
- Milind Changire wrote:
> please see attachment out.tar.bz which includes ceph.conf as to why `ceph status` command h...
- 02:26 PM Fix #41782: mds: allow stray directories to fragment and switch from 10 stray directories to 1
- please see attachment out.tar.bz which includes ceph.conf as to why `ceph status` command hangs on Fedora 30 laptop.
- 11:01 PM Bug #42515: fs: OpenFileTable object shards have too many k/v pairs
- 06:15 PM Backport #42142 (In Progress): nautilus: mds:split the dir if the op makes it oversized, because ...
- ceph-backport.sh version 15.0.0.6612: attempting to link this Backport tracker issue with GitHub PR https://github.co...
- 03:28 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- 03:28 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- I've restarted the mds at 15:06, didn't take any snapshots and dumped the cache with the slow requests around 16:00. ...
- 02:55 PM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- The kernel stack trace just looks like the client is hung waiting for the inode's i_rwsem to become free, which means...
- 02:37 PM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- Venky Shankar wrote:
> saw this with luminous: http://qa-proxy.ceph.com/teuthology/yuriw-2019-10-24_18:14:12-multimd...
- 11:31 AM Backport #42155: nautilus: mds: infinite loop in Locker::file_update_finish()
- https://github.com/ceph/ceph/pull/31287
- 11:30 AM Backport #40892 (In Progress): luminous: mds: cleanup truncating inodes when standby replay mds t...
- 11:04 AM Backport #41489: luminous: client: client should return EIO when it's unsafe reqs have been dropp...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30242
m...
- 10:57 AM Backport #41489 (Resolved): luminous: client: client should return EIO when it's unsafe reqs have...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30242
m...
- 11:02 AM Backport #37906 (In Progress): mimic: make cephfs-data-scan reconstruct snaptable
- 11:01 AM Backport #38643 (In Progress): mimic: fs: "log [WRN] : failed to reconnect caps for missing inodes"
- 11:00 AM Backport #41114 (In Progress): mimic: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- 11:00 AM Backport #42156 (In Progress): mimic: mds: infinite loop in Locker::file_update_finish()
10/30/2019
- 08:02 PM Feature #42530: cephfs-shell: add setxattr and getxattr
- ...and listxattr
- 07:24 PM Bug #42494 (Resolved): ceph: config show can't locate mds
- Backport will be managed by #41525.
- 03:24 PM Backport #41489: luminous: client: client should return EIO when it's unsafe reqs have been dropp...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30242
merged
- 12:26 PM Cleanup #42564 (Fix Under Review): mds: reorg Migrator header
- 11:54 AM Cleanup #42564 (Resolved): mds: reorg Migrator header
- 11:53 AM Cleanup #42563 (Fix Under Review): mds: reorg MDSTableServer header
- 11:26 AM Cleanup #42563 (Resolved): mds: reorg MDSTableServer header
- 10:15 AM Feature #39354 (Closed): mds: derive wrlock from excl caps
- Obsoleted by the new method to implement async create/unlink.
10/29/2019
- 10:32 PM Documentation #41738 (Resolved): Add documentation for that 'client direct access to data pool'
- 09:51 PM Bug #37723 (Resolved): mds: stopping MDS with a large cache (40+GB) causes it to miss heartbeats
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:50 PM Feature #38022 (Resolved): mds: provide a limit for the maximum number of caps a client may have
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:49 PM Bug #39166 (Resolved): mds: error "No space left on device" when create a large number of dirs
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:48 PM Bug #39943 (Resolved): client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the nanos...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:48 PM Bug #39951 (Resolved): mount: key parsing fail when doing a remount
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:48 PM Bug #40173 (Resolved): TestMisc.test_evict_client fails
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:46 PM Bug #40960 (Resolved): client: failed to drop dn and release caps causing mds stray stacking.
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:39 PM Backport #38129 (Resolved): mimic: mds: provide a limit for the maximum number of caps a client m...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28452
m...
- 07:37 PM Backport #38129: mimic: mds: provide a limit for the maximum number of caps a client may have
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28452
merged
- 09:39 PM Backport #38131 (Resolved): mimic: mds: stopping MDS with a large cache (40+GB) causes it to miss...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28452
m...
- 07:36 PM Backport #38131: mimic: mds: stopping MDS with a large cache (40+GB) causes it to miss heartbeats
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28452
merged
- 09:38 PM Backport #41885 (Resolved): mimic: mds: client evicted twice in one tick
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30950
m...
- 07:36 PM Backport #41885: mimic: mds: client evicted twice in one tick
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30950
merged
- 09:35 PM Backport #40166 (Resolved): luminous: client: ceph.dir.rctime xattr value incorrectly prefixes "0...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28502
m...
- 03:35 PM Backport #40166: luminous: client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the n...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28502
merged
- 09:35 PM Backport #40807 (Resolved): luminous: mds: msg weren't destroyed before handle_client_reconnect r...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29097
m...
- 03:34 PM Backport #40807: luminous: mds: msg weren't destroyed before handle_client_reconnect returned, if...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29097
merged
- 09:35 PM Backport #40163 (Resolved): luminous: mount: key parsing fail when doing a remount
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29226
m... - 03:33 PM Backport #40163: luminous: mount: key parsing fail when doing a remount
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29226
merged - 09:35 PM Backport #40218 (Resolved): luminous: TestMisc.test_evict_client fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29229
m... - 03:33 PM Backport #40218: luminous: TestMisc.test_evict_client fails
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29229
merged
Reviewed-by: Venky Shankar <vshankar@redhat.... - 09:34 PM Backport #39691 (Resolved): luminous: mds: error "No space left on device" when create a large n...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29829
m... - 03:32 PM Backport #39691: luminous: mds: error "No space left on device" when create a large number of dirs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29829
merged - 09:34 PM Backport #41000 (Resolved): luminous: client: failed to drop dn and release caps causing mds star...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29830
m... - 03:32 PM Backport #41000: luminous: client: failed to drop dn and release caps causing mds stary stacking.
- Xiaoxi Chen wrote:
> https://github.com/ceph/ceph/pull/29830
merged - 09:34 PM Bug #40286 (Resolved): luminous: qa: remove ubuntu 14.04 testing
- 03:29 PM Bug #40286: luminous: qa: remove ubuntu 14.04 testing
- https://github.com/ceph/ceph/pull/28701 merged
- 09:33 PM Backport #42039 (Resolved): luminous: client: _readdir_cache_cb() may use the readdir_cache alrea...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30934
m... - 03:28 PM Backport #42039: luminous: client: _readdir_cache_cb() may use the readdir_cache already clear
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30934
merged - 08:25 PM Bug #42515 (Fix Under Review): fs: OpenFileTable object shards have too many k/v pairs
- 02:23 AM Bug #42515 (In Progress): fs: OpenFileTable object shards have too many k/v pairs
- 07:26 PM Bug #42494 (Fix Under Review): ceph: config show can't locate mds
- 01:47 PM Bug #42494: ceph: config show can't locate mds
- Sage, assigning you since I believe you wanted to look into this.
- 06:22 PM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
- 5.0.0-32 introduced the bad backport, -33 reverted it:
http://changelogs.ubuntu.com/changelogs/pool/main/l/linux/l... - 05:19 PM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
- Was 5.0.32 actually fixed?
- 03:45 PM Feature #5520 (New): osdc: should handle namespaces
- 02:01 PM Feature #42530 (Resolved): cephfs-shell: add setxattr and getxattr
- Allow cephfs-shell to set and fetch xattrs. This would be nice for testing SELinux, for instance.
- 01:50 PM Bug #42466 (Duplicate): Missing subvolumegroup commands
- The nautilus backport is still in review, https://tracker.ceph.com/issues/42239
`subvolumegroup ls` should be availa... - 01:09 PM Bug #42478 (Fix Under Review): qa: AttributeError: can't set attribute
- 12:17 PM Bug #42478 (In Progress): qa: AttributeError: can't set attribute
- https://github.com/ceph/ceph-ci/blob/0772e8a667e86de7945704f53c601d09a49232f1/qa/tasks/mds_thrash.py#L21
https://git... - 12:57 PM Feature #24880: pybind/mgr/volumes: restore from snapshot
- This feature will be used by Ceph CSI to create a PVC from a snapshot [1], and by OpenStack Manila to create a share ...
- 12:26 PM Bug #40197: The command 'node ls' sometimes output some incorrect information about mds.
- Min Shi wrote:
> I repeated your steps, but the phenomenon is a little different. When I test the command `ceph node l...
- saw this with luminous: http://qa-proxy.ceph.com/teuthology/yuriw-2019-10-24_18:14:12-multimds-wip-yuri8-testing-2019...
- 06:23 AM Bug #42062 (Fix Under Review): qa: AttributeError: 'MonitorThrasher' object has no attribute 'fs'
- 03:37 AM Bug #42434 (Resolved): qa: TOO_FEW_PGS in mimic during upgrade suite tests
10/28/2019
- 10:09 PM Bug #42516 (Resolved): mds: some mutations have initiated (TrackedOp) set to 0
- From Brad:
> I was looking for tracker ops that had been created with 'initiated'
> set to zero and came across t... - 09:09 PM Bug #42515: fs: OpenFileTable object shards have too many k/v pairs
- ceph-users - http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-October/037076.html
- 08:50 PM Bug #42515 (Resolved): fs: OpenFileTable object shards have too many k/v pairs
- Since #40583 lowered the omap k/v limit to 200k, we've been seeing messages from deep scrubs showing the open file ta...
- 04:04 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- Please dump the cache and share it again.
- 03:48 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- I restarted all the MDS daemons and did not create a snapshot after that, but I am still seeing those slow requests...
- 03:52 PM Tasks #42085 (Fix Under Review): qa: create tests for new recover_session=clean option
- 12:50 PM Fix #42508 (Resolved): cephfs-shell: print a helpful message instead of a Python backtrace when n...
- Currently, running @cephfs-shell@ on a blank system without any configuration file fails with a Python backtrace:
<p... - 04:19 AM Bug #42478: qa: AttributeError: can't set attribute
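As an illustration of the intended behavior, the startup failure can be caught and reported in a single line instead of a traceback. The sketch below is hypothetical (the function and path names are stand-ins, not the actual cephfs-shell code):

```python
import sys

def load_ceph_config(path="/etc/ceph/ceph.conf"):
    # Stand-in for the libcephfs startup step that reads ceph.conf;
    # on a blank system this file does not exist and open() raises OSError.
    with open(path) as f:
        return f.read()

def main(conf_path="/etc/ceph/ceph.conf"):
    try:
        load_ceph_config(conf_path)
    except OSError as e:
        # One short, actionable line instead of a Python backtrace.
        print(f"cephfs-shell: cannot read Ceph configuration: {e}",
              file=sys.stderr)
        return 1
    return 0
```

Returning a nonzero status keeps the failure scriptable while hiding the traceback from interactive users.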
- Jos Collin wrote:
> This happens when there is no setter in thrasher.py. Can you show me your thrasher.py and mds_th... - 03:54 AM Tasks #39998: client: audit ACL
- I think we decided this one needs to be tabled for now. Fixing it will likely require a lot of changes to the cephfs p...
10/27/2019
- 03:59 PM Bug #42365: client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- ...
- 06:29 AM Bug #40197: The command 'node ls' sometimes output some incorrect information about mds.
- I repeated your steps, but the phenomenon is a little different. When I test the command `ceph node ls`, it only shows t...
- 12:11 AM Feature #42479 (Fix Under Review): mgr/volumes: add `fs subvolume resize infinite` command
10/25/2019
- 09:41 PM Bug #42494 (Resolved): ceph: config show can't locate mds
- ...
- 04:14 PM Bug #42478 (Need More Info): qa: AttributeError: can't set attribute
- This happens when there is no setter in thrasher.py. Can you show me your thrasher.py and mds_thrash.py, so that I ge...
- 02:40 PM Bug #42491: "probably no MDS server is up?" in upgrade:jewel-x-wip-yuri-luminous_10.22.19
- and in this job http://pulpito.ceph.com/yuriw-2019-10-23_19:22:44-upgrade:jewel-x-wip-yuri-luminous_10.22.19-distro-b...
- 02:39 PM Bug #42491 (New): "probably no MDS server is up?" in upgrade:jewel-x-wip-yuri-luminous_10.22.19
- http://qa-proxy.ceph.com/teuthology/yuriw-2019-10-23_19:22:44-upgrade:jewel-x-wip-yuri-luminous_10.22.19-distro-basic...
- 10:31 AM Feature #42479 (In Progress): mgr/volumes: add `fs subvolume resize infinite` command
- 01:05 AM Feature #42479 (Resolved): mgr/volumes: add `fs subvolume resize infinite` command
- Add a resize infinite command to unset the quota for a subvolume.
- 06:22 AM Bug #40371 (Resolved): cephfs-shell: du must ignore non-directory files
- 06:07 AM Backport #41861 (Rejected): nautilus: cephfs-shell: du must ignore non-directory files
- I talked with Patrick; it's fine to cancel this ticket, so I am marking it as "Rejected".
- 12:16 AM Bug #42388: mimic: Test failure: test_full_different_file (tasks.cephfs.test_full.TestQuotaFull)
- Venky Shankar wrote:
> I could not reproduce this with vstart_runner. One way to sneak in a write (+ fsync) was to n...
10/24/2019
- 11:09 PM Bug #42365 (Need More Info): client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- Can you also share any surrounding debug log messages?
- 10:38 PM Bug #42478 (Resolved): qa: AttributeError: can't set attribute
- ...
- 10:18 PM Bug #42436 (Resolved): qa: tasks.cephfs.test_volume_client.TestVolumeClient test_data_isolated fa...
- 03:58 PM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
- Ilya Dryomov wrote:
> The backport to 4.19 was incorrect, 4.19.76 is busted. Fixed in 4.19.77.
This goes for Ubu... - 01:20 PM Cleanup #42468 (Fix Under Review): mds: reorg MDSTable header
- 01:11 PM Cleanup #42468 (Resolved): mds: reorg MDSTable header
- 01:14 PM Bug #42388: mimic: Test failure: test_full_different_file (tasks.cephfs.test_full.TestQuotaFull)
- I could not reproduce this with vstart_runner. One way to sneak in a write (+ fsync) was to not wait for the mons to ...
- 01:01 PM Cleanup #42465 (Fix Under Review): mds: reorg MDSRank header
- 11:39 AM Cleanup #42465 (Resolved): mds: reorg MDSRank header
- 12:31 PM Bug #42467 (Duplicate): mds: daemon crashes while updating blacklist
- Ubuntu 18.04.3 LTS
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus (stable)
We have setup... - 12:00 PM Bug #42466 (Duplicate): Missing subvolumegroup commands
- When I run the command:
ceph fs subvolumegroup ls <vol_name>
It says "Error EINVAL: invalid command"
Here is... - 11:11 AM Cleanup #42464 (Fix Under Review): mds: reorg MDSMap header
- 11:00 AM Cleanup #42464 (Resolved): mds: reorg MDSMap header
- 10:44 AM Backport #42462 (In Progress): nautilus: doc: MDS and metadata pool hardware requirements/recomme...
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 10:42 AM Backport #42462 (Resolved): nautilus: doc: MDS and metadata pool hardware requirements/recommenda...
- https://github.com/ceph/ceph/pull/31116
- 10:42 AM Backport #42463 (Rejected): mimic: doc: MDS and metadata pool hardware requirements/recommendations
- 10:42 AM Documentation #39620 (Pending Backport): doc: MDS and metadata pool hardware requirements/recomme...
- 10:04 AM Cleanup #42461 (Fix Under Review): mds: reorg MDSTableClient header
- 09:55 AM Cleanup #42461 (Resolved): mds: reorg MDSTableClient header
- 06:52 AM Bug #24403: mon failed to return metadata for mds
- Did you restart the mds on sen2agriprod, or did you restart all of the MDS daemons? We have a similar case, losing all the mds's ...
- 12:55 AM Feature #42451 (Resolved): mds: add root_squash
- Allow a root squash mode via the MDS capability. The purpose here is not so much to prevent a true adversary (the cli...
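Conceptually this mirrors NFS root squash: operations arriving with uid/gid 0 are remapped to an unprivileged identity before permission checks. A minimal sketch of the semantics (illustrative only, not the MDS implementation):

```python
# Illustrative root-squash semantics: a client presenting uid/gid 0 is
# treated as an anonymous user, so client-side root cannot act as root
# on the filesystem. 65534 is the conventional "nobody" id.
NOBODY_UID = 65534
NOBODY_GID = 65534

def squash_root(uid: int, gid: int, root_squash: bool) -> tuple[int, int]:
    """Return the effective (uid, gid) the server should enforce."""
    if root_squash and uid == 0:
        uid = NOBODY_UID
    if root_squash and gid == 0:
        gid = NOBODY_GID
    return uid, gid
```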
10/23/2019
- 11:21 PM Fix #42450 (Resolved): MDSMonitor: warn if a new file system is being created with an EC default ...
- We do not recommend using an EC pool as the default data pool for many reasons, documented in [1].
`fs new` should... - 08:43 PM Feature #42447 (Resolved): add basic client setup page
- Add a page describing how to set up a client machine. Should cover (or refer to pages that cover):
# installation ... - 08:15 PM Bug #37726 (Resolved): mds: high debug logging with many subtrees is slow
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:15 PM Bug #38043 (Resolved): mds: optimize revoking stale caps
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:15 PM Bug #38326 (Resolved): mds: evict stale client when one of its write caps are stolen
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:06 PM Backport #40327 (Resolved): mimic: mds: evict stale client when one of its write caps are stolen
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28585
m... - 03:32 PM Backport #40327: mimic: mds: evict stale client when one of its write caps are stolen
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28585
merged - 08:05 PM Backport #38097 (Resolved): mimic: mds: optimize revoking stale caps
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28585
m... - 03:32 PM Backport #38097: mimic: mds: optimize revoking stale caps
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28585
merged - 08:02 PM Backport #42122 (Resolved): mimic: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30918
m... - 03:28 PM Backport #42122: mimic: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30918
merged - 08:01 PM Backport #38875 (Resolved): mimic: mds: high debug logging with many subtrees is slow
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29219
m... - 03:26 PM Backport #38875: mimic: mds: high debug logging with many subtrees is slow
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29219
merged - 08:00 PM Backport #42034 (Resolved): mimic: client: lseek function does not return the correct value.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30932
m... - 03:25 PM Backport #42034: mimic: client: lseek function does not return the correct value.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30932
merged - 07:59 PM Backport #42038 (Resolved): mimic: client: _readdir_cache_cb() may use the readdir_cache already ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30933
m... - 03:25 PM Backport #42038: mimic: client: _readdir_cache_cb() may use the readdir_cache already clear
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30933
merged - 06:15 PM Bug #42436 (Fix Under Review): qa: tasks.cephfs.test_volume_client.TestVolumeClient test_data_iso...
- 06:01 PM Feature #13999: client: richacl support
- Patrick, any update on this issue? You told me a few weeks ago that you'd add details to this issue so that I can get ...
- 05:59 PM Feature #13999 (New): client: richacl support
- 06:00 PM Tasks #39998: client: audit ACL
- Patrick, any update on this issue? You told me a few weeks ago that you'd add details to this issue so that I can get ...
- 05:56 PM Bug #42317 (Resolved): mimic: incomplete backport LibCephFS.mount in cephfs.pyx needs it's method...
- 03:11 PM Bug #42338 (Duplicate): file system keeps on deadlocking with unresolved slow requests (failed to...
- Dup of https://tracker.ceph.com/issues/39987; will be fixed by v14.2.5. You can avoid this bug by not creating new s...
- 01:46 PM Tasks #42085 (In Progress): qa: create tests for new recover_session=clean option
- 01:23 PM Bug #42388: mimic: Test failure: test_full_different_file (tasks.cephfs.test_full.TestQuotaFull)
- happens with kclient too on mimic: http://qa-proxy.ceph.com/teuthology/yuriw-2019-10-19_00:01:16-kcephfs-wip-yuri-mim...
- 12:39 PM Backport #42424 (In Progress): nautilus: qa: "cluster [ERR] Error recovering journal 0x200: (2)...
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 12:38 PM Backport #42422 (In Progress): nautilus: test_reconnect_eviction fails with "RuntimeError: MDS in...
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 12:37 PM Backport #42279 (In Progress): nautilus: qa: logrotate should tolerate connection resets
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 12:36 PM Backport #42158 (In Progress): nautilus: osdc: objecter ops output does not have useful time info...
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 12:35 PM Backport #42157 (In Progress): nautilus: cephfs-shell: rmdir doesn't complain when directory is n...
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 12:34 PM Backport #42155 (In Progress): nautilus: mds: infinite loop in Locker::file_update_finish()
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 12:29 PM Backport #41861 (Need More Info): nautilus: cephfs-shell: du must ignore non-directory files
- ...
- 11:59 AM Backport #42180 (In Progress): nautilus: mgr/volumes: creating subvolume and subvolume group snap...
- 11:03 AM Backport #42441 (Resolved): nautilus: mds: create a configurable snapshot limit
- https://github.com/ceph/ceph/pull/33295
- 11:03 AM Backport #42440 (Rejected): mimic: mds: create a configurable snapshot limit
- 03:57 AM Feature #41209 (Pending Backport): mds: create a configurable snapshot limit
- 03:14 AM Bug #42413 (Need More Info): AsyncConnection and Session cause memory leak
- 03:13 AM Bug #42413: AsyncConnection and Session cause memory leak
- A leak is technically possible but the link between the two is broken during connection event processing. In particul...
10/22/2019
- 11:22 PM Bug #42436 (Resolved): qa: tasks.cephfs.test_volume_client.TestVolumeClient test_data_isolated fa...
- ...
- 11:18 PM Bug #42435 (New): qa/suites/kcephfs: client I/O halts or cannot make sufficient progress during t...
- ...
- 11:06 PM Bug #42434 (Fix Under Review): qa: TOO_FEW_PGS in mimic during upgrade suite tests
- 11:03 PM Bug #42434 (Resolved): qa: TOO_FEW_PGS in mimic during upgrade suite tests
- ...
- 10:30 AM Backport #42424 (Resolved): nautilus: qa: "cluster [ERR] Error recovering journal 0x200: (2) No...
- https://github.com/ceph/ceph/pull/31084
- 10:30 AM Backport #42423 (Rejected): mimic: qa: "cluster [ERR] Error recovering journal 0x200: (2) No su...
- 10:29 AM Backport #42422 (Resolved): nautilus: test_reconnect_eviction fails with "RuntimeError: MDS in re...
- https://github.com/ceph/ceph/pull/31083
- 10:29 AM Backport #42421 (Rejected): mimic: test_reconnect_eviction fails with "RuntimeError: MDS in rejec...
- 08:21 AM Documentation #42414 (Resolved): doc: hide page contents for Ceph Internals
- Hide the contents section from the main page.
- 07:23 AM Bug #42413 (Need More Info): AsyncConnection and Session cause memory leak
- It occurred to me that there might be a memory leak in ceph-mds.
In function MDSDaemon::mds_handle... - 04:29 AM Bug #41415 (Need More Info): mgr/volumes: AssertionError: '33' != 'new_pool'
- 04:28 AM Bug #41415: mgr/volumes: AssertionError: '33' != 'new_pool'
- Couldn't reproduce on teuthology either - ...
- 04:18 AM Bug #41836 (Pending Backport): qa: "cluster [ERR] Error recovering journal 0x200: (2) No such f...
- 04:13 AM Bug #42213 (Pending Backport): test_reconnect_eviction fails with "RuntimeError: MDS in reject st...
- 04:07 AM Bug #42022 (Need More Info): mgr/volumes: "Timed out after 30s waiting for ./volumes/_deleting to...
- 04:07 AM Bug #42022: mgr/volumes: "Timed out after 30s waiting for ./volumes/_deleting to become empty fro...
- Couldn't reproduce this locally and on teuthology. On teuthology the test passed -...
10/21/2019
- 10:11 PM Backport #41495 (In Progress): nautilus: qa: 'ceph osd require-osd-release nautilus' fails
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 04:09 PM Documentation #42407 (In Progress): doc: add a doc for libcephfs
- 04:09 PM Documentation #42406 (Resolved): doc: update mount.ceph man page
- 03:48 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
I've sent it through Google Drive this time. Thanks again!
K
- 01:50 PM Bug #42338 (Need More Info): file system keeps on deadlocking with unresolved slow requests (fail...
- 01:34 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- Where did you send the log file? Please share the file with me (ukernel@gmail.com) through Google Drive.
- 02:41 PM Bug #36094: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- This patch by Yan, Zheng to get some extra debug statements:
diff --git a/src/mds/OpenFileTable.cc b/src/mds/OpenF... - 08:42 AM Bug #36094: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- Yan, Zheng suggested the following:
delete 'mdsX_openfiles.0' object from cephfs metadata pool. (X is rank
of the... - 02:12 PM Documentation #42195 (In Progress): Add doc for exporting cephfs over nfs server deployed using rook
- 01:49 PM Bug #42348 (Rejected): TestClientRecovery.test_dont_mark_unresponsive_client_stale failure
- Just needs another backport.
- 12:07 PM Bug #40369: ceph_volume_client: fs_name must be converted to string before using it
- The mimic backport caused a regression, #42317, which was caught during v13.2.7 release preparation, and was reverted...
- 12:05 PM Backport #40896 (Rejected): mimic: ceph_volume_client: fs_name must be converted to string before...
- 12:05 PM Backport #40896: mimic: ceph_volume_client: fs_name must be converted to string before using it
- This caused a regression, #42317, which was caught during v13.2.7 release preparation, and was reverted by https://gi...
- 12:04 PM Bug #42317: mimic: incomplete backport LibCephFS.mount in cephfs.pyx needs it's method signature ...
- Revert: https://github.com/ceph/ceph/pull/31017
- 09:11 AM Bug #42388: mimic: Test failure: test_full_different_file (tasks.cephfs.test_full.TestQuotaFull)
- test run here: http://qa-proxy.ceph.com/teuthology/yuriw-2019-10-19_00:01:32-fs-wip-yuri-mimic_10.18.19-testing-basic...
- 06:16 AM Bug #42388: mimic: Test failure: test_full_different_file (tasks.cephfs.test_full.TestQuotaFull)
- There's a similar problem with `test_full_fclose` with `fclose()` going through on a full pool (w/ quota).
- 05:44 AM Bug #42388 (In Progress): mimic: Test failure: test_full_different_file (tasks.cephfs.test_full.T...
- Started seeing this frequently in mimic test runs:
Things look fine initially:...
10/20/2019
- 06:05 AM Bug #36094: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- Our MDSs keep failing over until we enable debug output (debug_mds=10/10) ... MDS becomes active and stays active ......
10/19/2019
- 11:31 AM Bug #36094: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- And yet another crash ~ 5 hours later. We have adjusted the mds_cache_memory_limit from 150G -> 32G after the last cr...
- 07:35 AM Bug #36094: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- A search for this assert gave this thread: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-August/036702.htm...
- 07:26 AM Bug #36094: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- Today our active MDS crashed with an assert:
2019-10-19 08:14:50.645 7f7906cb7700 -1 /build/ceph-13.2.6/src/mds/Op...
10/18/2019
- 11:01 PM Bug #42381 (Rejected): cephfs: metadata pool cephx cap does not have permissions
- We had the syntax wrong:...
- 10:59 PM Bug #42381 (Rejected): cephfs: metadata pool cephx cap does not have permissions
- ...
- 09:57 PM Bug #42317: mimic: incomplete backport LibCephFS.mount in cephfs.pyx needs it's method signature ...
- https://github.com/ceph/ceph/pull/30238
should never have been approved :(. The test failure
http://pulpit... - 09:35 PM Bug #42348: TestClientRecovery.test_dont_mark_unresponsive_client_stale failure
- Venky Shankar wrote:
> This was not as straightforward as I suggested. However, it's due to PR https://github.com/ce... - 03:05 PM Bug #42348: TestClientRecovery.test_dont_mark_unresponsive_client_stale failure
- This was not as straightforward as I suggested. However, it's due to PR https://github.com/ceph/ceph/pull/28585 not b...
- 12:02 PM Backport #42130 (Resolved): mimic: doc/ceph-fuse: -k missing in man page
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30936
m... - 11:54 AM Bug #40213 (Resolved): mds: cannot switch mds state from standby-replay to active
- 11:54 AM Backport #42375 (Resolved): mimic: mds: cannot switch mds state from standby-replay to active
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29232
m... - 11:53 AM Backport #42375 (In Progress): mimic: mds: cannot switch mds state from standby-replay to active
- 11:38 AM Backport #42375 (Resolved): mimic: mds: cannot switch mds state from standby-replay to active
- https://github.com/ceph/ceph/pull/29232
- 11:54 AM Backport #42374 (Resolved): mimic: mds: cleanup truncating inodes when standby replay mds trim lo...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29232
m... - 11:53 AM Backport #42374 (In Progress): mimic: mds: cleanup truncating inodes when standby replay mds trim...
- 11:37 AM Backport #42374 (Resolved): mimic: mds: cleanup truncating inodes when standby replay mds trim lo...
- https://github.com/ceph/ceph/pull/29232
- 11:40 AM Bug #38679 (Resolved): mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:39 AM Bug #38822 (Resolved): mds: there is an assertion when calling Beacon::shutdown()
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:39 AM Bug #38835 (Resolved): MDSTableServer.cc: 83: FAILED assert(version == tid)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:39 AM Bug #38844 (Resolved): mds: mds_cap_revoke_eviction_timeout is not used to initialize Server::cap...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:38 AM Bug #40361 (Resolved): getattr on snap inode stuck
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:08 AM Cleanup #42371 (Fix Under Review): mds: reorg MDSDaemon header
- 10:01 AM Cleanup #42371 (Resolved): mds: reorg MDSDaemon header
- 09:12 AM Bug #42213 (Fix Under Review): test_reconnect_eviction fails with "RuntimeError: MDS in reject st...
- 08:58 AM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- Hi,
I didn't create a new snapshot while syncing data.
Files sent by mail, not able to post them here.
- 08:29 AM Backport #39223 (Resolved): mimic: mds: behind on trimming and "[dentry] was purgeable but no lon...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/292... - 08:27 AM Backport #41001 (Resolved): mimic: client: failed to drop dn and release caps causing mds stary s...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/294... - 08:27 AM Backport #38709 (Resolved): mimic: qa: kclient unmount hangs after file system goes down
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/292... - 08:27 AM Backport #39210 (Resolved): mimic: mds: mds_cap_revoke_eviction_timeout is not used to initialize...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/292... - 08:27 AM Backport #39212 (Resolved): mimic: MDSTableServer.cc: 83: FAILED assert(version == tid)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/292... - 08:26 AM Backport #39215 (Resolved): mimic: mds: there is an assertion when calling Beacon::shutdown()
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/292... - 08:26 AM Backport #40219 (Resolved): mimic: TestMisc.test_evict_client fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/292... - 08:26 AM Backport #40437 (Resolved): mimic: getattr on snap inode stuck
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/292... - 08:23 AM Bug #42289: mds: rejoin_gather_finish() core
- 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x85) [0x5646b7223935]
2: (MDCache::rejoi... - 06:21 AM Bug #42365 (Resolved): client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- ...
Also available in: Atom