Activity
From 10/12/2019 to 11/10/2019
11/10/2019
- 03:58 AM Bug #42720 (Resolved): client: remove useless variable for ceph::mutex and ceph::condition_variable
- ceph::mutex flock = ceph::make_mutex("Client::_read_sync flock");
ceph::condition_variable cond
the flock and cond ...
11/08/2019
- 07:50 PM Fix #42450 (In Progress): MDSMonitor: warn if a new file system is being created with an EC defau...
- 06:43 PM Bug #42299 (Pending Backport): mgr/volumes: cleanup libcephfs handles on mgr shutdown
- 06:42 PM Cleanup #42461 (Resolved): mds: reorg MDSTableClient header
- 06:28 PM Backport #42713 (New): nautilus: mgr: daemon state for mds not available
- 06:22 PM Backport #42713 (In Progress): nautilus: mgr: daemon state for mds not available
- 06:14 PM Backport #42713 (Resolved): nautilus: mgr: daemon state for mds not available
- https://github.com/ceph/ceph/pull/30704
- 06:07 PM Bug #41538 (Resolved): mds: wrong compat can cause MDS to be added daemon registry on mgr but not...
- 06:07 PM Bug #42635 (Pending Backport): mgr: daemon state for mds not available
- 05:29 PM Bug #20735: mds: stderr:gzip: /var/log/ceph/ceph-mds.f.log: file size changed while zipping
- Here the same happened with the mon, with valgrind....
- 02:55 PM Bug #42707 (Resolved): Kernel 5.0 CephFS client hang
- $ uname -a
Linux Dell-Latitude-ideco 5.0.0-32-generic #34~18.04.2-Ubuntu SMP Thu Oct 10 10:36:02 UTC 2019 x86_64 x86...
- 12:33 PM Bug #42061: volume_client: AssertionError: 237 != 8
- Couldn't reproduce this locally -...
- 08:21 AM Cleanup #42690 (Fix Under Review): mds: reorg Mutation header
- 08:18 AM Cleanup #42690 (Resolved): mds: reorg Mutation header
- 04:56 AM Bug #41565 (In Progress): mds: detect MDS<->MDS messages that are not versioned
11/07/2019
- 05:44 PM Bug #24679 (Pending Backport): qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- 04:30 AM Bug #24679: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- Needed in Luminous since apparently we're testing with 18.04 there too now.
https://tracker.ceph.com/issues/42672
- 05:44 PM Backport #42672 (Need More Info): luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu...
- 04:30 AM Backport #42672 (Resolved): luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- https://github.com/ceph/ceph/pull/31450
- 05:37 PM Bug #42688 (Triaged): Standard CephFS caps do not allow certain dot files to be written
- I have repeatedly set up a Ceph Nautilus cluster via MAAS/Juju (openstack-charmers charms), using the latest Ubuntu cl...
- 02:42 PM Fix #42508 (In Progress): cephfs-shell: print a helpful message instead of a Python backtrace whe...
- 11:37 AM Backport #40892: luminous: mds: cleanup truncating inodes when standby replay mds trim log segments
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31286
m...
- 04:42 AM Backport #40892 (Resolved): luminous: mds: cleanup truncating inodes when standby replay mds trim...
- 11:12 AM Bug #42636 (Fix Under Review): qa: AttributeError: can't set attribute
- 11:02 AM Bug #40477 (Resolved): mds: cleanup truncating inodes when standby replay mds trim log segments
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:01 AM Backport #42678 (Resolved): luminous: qa: malformed job
- https://github.com/ceph/ceph/pull/31449
- 08:20 AM Bug #42675 (Fix Under Review): mds: tolerate no snaprealm encoded in on-disk root inode
- 07:45 AM Bug #42675 (Resolved): mds: tolerate no snaprealm encoded in on-disk root inode
- cephfs-data-scan of luminous and prior versions may update on-disk root inode without encoding snaprealm (cephfs-data...
- 04:01 AM Bug #41031 (Pending Backport): qa: malformed job
- This got into luminous.
- 02:33 AM Feature #16468: kclient: Exclude ceph.* xattr namespace in listxattr
- Not sure if we fixed this recently. There was some discussion a month or so ago about removing the ceph.* xattrs but ...
11/06/2019
11/05/2019
- 03:07 PM Bug #42646 (Fix Under Review): Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test_volume...
- 12:44 PM Bug #42646 (Resolved): Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test_volumes.TestVo...
- Test cases in TestVolumes create subvolumes/groups/snapshots in format `<string>_<random_number>`. Some test cases wi...
- 01:13 PM Backport #42650 (Resolved): nautilus: mds: no assert on frozen dir when scrub path
- https://github.com/ceph/ceph/pull/32071
- 01:13 PM Backport #42649 (Rejected): mimic: mds: no assert on frozen dir when scrub path
- 01:09 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- Still seeing the issue after unmounting and restarting :-(
- 12:40 PM Bug #42642 (Fix Under Review): mds: MDCache.h compile warnings
- 08:23 AM Bug #42642 (Resolved): mds: MDCache.h compile warnings
- ...
- 10:57 AM Bug #42636 (In Progress): qa: AttributeError: can't set attribute
- 04:07 AM Bug #42636 (Resolved): qa: AttributeError: can't set attribute
- Looks like #42478 is not fixed....
- 10:55 AM Bug #42643 (Fix Under Review): vstart.sh: highlight presence of stray conf file
- 10:45 AM Bug #42643 (Resolved): vstart.sh: highlight presence of stray conf file
- If there's a stray conf file present in /etc/ceph/ceph.conf, then it leads to a misbehaving cluster. Probably an unre...
- 08:40 AM Bug #41538 (Fix Under Review): mds: wrong compat can cause MDS to be added daemon registry on mgr...
- Backport will be tracked in #42635.
- 08:40 AM Bug #42635 (Fix Under Review): mgr: daemon state for mds not available
- 04:02 AM Bug #42635 (Resolved): mgr: daemon state for mds not available
- ...
- 06:30 AM Bug #42251 (Pending Backport): mds: no assert on frozen dir when scrub path
- 06:28 AM Cleanup #42311 (Resolved): mds: reorg MDSAuthCaps header
- 06:23 AM Cleanup #42191 (Resolved): mds: reorg MDCache header
- 04:40 AM Bug #42637 (Resolved): qa: ffsb suite causes SLOW_OPS warnings
- ...
11/04/2019
- 09:47 PM Backport #42632 (Rejected): mimic: client: FAILED assert(cap == in->auth_cap)
- 09:47 PM Backport #42631 (Resolved): nautilus: client: FAILED assert(cap == in->auth_cap)
- https://github.com/ceph/ceph/pull/32065
- 06:31 PM Backport #42159 (In Progress): mimic: osdc: objecter ops output does not have useful time informa...
- 06:30 PM Backport #42159: mimic: osdc: objecter ops output does not have useful time information
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/31384
ceph-backport.sh versi...
- 06:21 PM Backport #42148 (In Progress): mimic: mds: mds returns -5 error when the deleted file does not exist
- 06:19 PM Backport #42146 (In Progress): mimic: client: return error when someone passes bad whence value t...
- 06:16 PM Backport #42143 (In Progress): mimic: mds:split the dir if the op makes it oversized, because som...
- 05:48 PM Backport #42327 (In Progress): nautilus: cephfs-shell: not compatible with cmd2 versions after 0....
- 02:18 PM Documentation #42205 (Fix Under Review): doc: update "mount using FUSE" page
- 02:18 PM Documentation #42220 (Fix Under Review): doc: rearrange mounting with kernel doc
- 02:15 PM Documentation #42601 (In Progress): doc: separate "system managed mount" vs. "manual mount" for d...
- 12:58 PM Backport #42615: nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands
- ceph-backport.sh version 15.0.0.6612: attempting to link this Backport tracker issue with GitHub PR https://github.co...
- 12:48 PM Backport #42615 (In Progress): nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands
- 12:46 PM Backport #42615 (Resolved): nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands
- https://github.com/ceph/ceph/pull/31332
- 12:46 PM Feature #41182 (Pending Backport): mgr/volumes: add `fs subvolume extend/shrink` commands
- 09:30 AM Bug #41319 (Can't reproduce): ceph.in: pool creation fails with "AttributeError: 'str' object has...
11/03/2019
- 10:35 PM Bug #42602 (Resolved): client: missing const SEEK_DATA and SEEK_HOLE on ALPINE LINUX
- The non-POSIX-conformant constants SEEK_DATA and SEEK_HOLE are missing on Alpine Linux / musl libc, so you can't compile src...
- 08:39 AM Documentation #42196 (Resolved): doc: Document inter-mds export process
- 08:18 AM Documentation #42601 (Resolved): doc: separate "system managed mount" vs. "manual mount" for diff...
- Client docs show manual commands for performing the mount. We should also suggest the systemd/fstab commands to setup...
- 08:04 AM Documentation #42190 (Resolved): doc: document MDS journal event types
11/02/2019
- 03:23 PM Backport #41489: luminous: client: client should return EIO when it's unsafe reqs have been dropp...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30242
m...
- 01:23 AM Feature #41182 (In Progress): mgr/volumes: add `fs subvolume extend/shrink` commands
- 01:17 AM Feature #41182: mgr/volumes: add `fs subvolume extend/shrink` commands
- Nautilus Backport: https://github.com/ceph/ceph/pull/31332
11/01/2019
- 11:08 PM Cleanup #42329 (Resolved): mds: reorg MDSCacheObject header
- 11:04 PM Bug #39715 (Resolved): client: optimize rename operation under different quota root
- 11:02 PM Feature #41182 (Pending Backport): mgr/volumes: add `fs subvolume extend/shrink` commands
- 10:55 PM Bug #41799 (Pending Backport): client: FAILED assert(cap == in->auth_cap)
- 10:50 PM Cleanup #42371 (Resolved): mds: reorg MDSDaemon header
- 10:38 PM Bug #42478 (Resolved): qa: AttributeError: can't set attribute
- 10:38 PM Bug #42062 (Resolved): qa: AttributeError: 'MonitorThrasher' object has no attribute 'fs'
- 08:28 PM Bug #42597 (Fix Under Review): mon and mds ok-to-stop commands should validate input names exist ...
- "ceph osd ok-to-stop" accepts only integers, "any", and "all". However, the "mon" and "mds" versions accept any strin...
- 04:58 PM Bug #37378 (Resolved): truncate_seq ordering issues with object creation
- 02:11 PM Bug #42022: mgr/volumes: "Timed out after 30s waiting for ./volumes/_deleting to become empty fro...
- Rishabh Dave wrote:
> Couldn't reproduce this locally and on teuthology. On teuthology the test passed -
> [...]
>...
- 02:10 PM Bug #41415 (Can't reproduce): mgr/volumes: AssertionError: '33' != 'new_pool'
- I haven't seen this since this report. I'll re-open the issue if it comes up again.
- 09:25 AM Bug #40873 (Duplicate): qa: expected MDS_CLIENT_LATE_RELEASE in tasks.cephfs.test_client_recovery...
- dup of https://tracker.ceph.com/issues/40968
- 08:20 AM Feature #39098 (Fix Under Review): mds: lock caching for asynchronous unlink
- 08:15 AM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- still the same issue. logs show there are inode locks in 'snap->sync' states. try unmounting all clients and restart al...
- 02:53 AM Bug #24088 (Duplicate): mon: slow remove_snaps op reported in cluster health log
- seems like dup of https://tracker.ceph.com/issues/37568
10/31/2019
- 11:44 PM Documentation #42414 (Resolved): doc: hide page contents for Ceph Internals
- 11:43 PM Fix #41782: mds: allow stray directories to fragment and switch from 10 stray directories to 1
- Milind Changire wrote:
> please see attachment out.tar.bz which includes ceph.conf as to why `ceph status` command h...
- 02:26 PM Fix #41782: mds: allow stray directories to fragment and switch from 10 stray directories to 1
- please see attachment out.tar.bz which includes ceph.conf as to why `ceph status` command hangs on Fedora 30 laptop.
- 11:01 PM Bug #42515: fs: OpenFileTable object shards have too many k/v pairs
- 06:15 PM Backport #42142 (In Progress): nautilus: mds:split the dir if the op makes it oversized, because ...
- ceph-backport.sh version 15.0.0.6612: attempting to link this Backport tracker issue with GitHub PR https://github.co...
- 03:28 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- I've restarted the mds at 15:06, didn't take any snapshots and dumped the cache with the slow requests around 16:00. ...
- 02:55 PM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- The kernel stack trace just looks like the client is hung waiting for the inode's i_rwsem to become free, which means...
- 02:37 PM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- Venky Shankar wrote:
> saw this with luminous: http://qa-proxy.ceph.com/teuthology/yuriw-2019-10-24_18:14:12-multimd...
- 11:31 AM Backport #42155: nautilus: mds: infinite loop in Locker::file_update_finish()
- https://github.com/ceph/ceph/pull/31287
- 11:30 AM Backport #40892 (In Progress): luminous: mds: cleanup truncating inodes when standby replay mds t...
- 11:04 AM Backport #41489: luminous: client: client should return EIO when it's unsafe reqs have been dropp...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30242
m...
- 10:57 AM Backport #41489 (Resolved): luminous: client: client should return EIO when it's unsafe reqs have...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30242
m...
- 11:02 AM Backport #37906 (In Progress): mimic: make cephfs-data-scan reconstruct snaptable
- 11:01 AM Backport #38643 (In Progress): mimic: fs: "log [WRN] : failed to reconnect caps for missing inodes"
- 11:00 AM Backport #41114 (In Progress): mimic: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- 11:00 AM Backport #42156 (In Progress): mimic: mds: infinite loop in Locker::file_update_finish()
10/30/2019
- 08:02 PM Feature #42530: cephfs-shell: add setxattr and getxattr
- ...and listxattr
- 07:24 PM Bug #42494 (Resolved): ceph: config show can't locate mds
- Backport will be managed by #41525.
- 03:24 PM Backport #41489: luminous: client: client should return EIO when it's unsafe reqs have been dropp...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30242
merged
- 12:26 PM Cleanup #42564 (Fix Under Review): mds: reorg Migrator header
- 11:54 AM Cleanup #42564 (Resolved): mds: reorg Migrator header
- 11:53 AM Cleanup #42563 (Fix Under Review): mds: reorg MDSTableServer header
- 11:26 AM Cleanup #42563 (Resolved): mds: reorg MDSTableServer header
- 10:15 AM Feature #39354 (Closed): mds: derive wrlock from excl caps
- New method to implement async create/unlink; this approach is now obsolete.
10/29/2019
- 10:32 PM Documentation #41738 (Resolved): Add documentation for that 'client direct access to data pool'
- 09:51 PM Bug #37723 (Resolved): mds: stopping MDS with a large cache (40+GB) causes it to miss heartbeats
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:50 PM Feature #38022 (Resolved): mds: provide a limit for the maximum number of caps a client may have
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:49 PM Bug #39166 (Resolved): mds: error "No space left on device" when create a large number of dirs
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:48 PM Bug #39943 (Resolved): client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the nanos...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:48 PM Bug #39951 (Resolved): mount: key parsing fail when doing a remount
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:48 PM Bug #40173 (Resolved): TestMisc.test_evict_client fails
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:46 PM Bug #40960 (Resolved): client: failed to drop dn and release caps causing mds stary stacking.
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:39 PM Backport #38129 (Resolved): mimic: mds: provide a limit for the maximum number of caps a client m...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28452
m...
- 07:37 PM Backport #38129: mimic: mds: provide a limit for the maximum number of caps a client may have
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28452
merged
- 09:39 PM Backport #38131 (Resolved): mimic: mds: stopping MDS with a large cache (40+GB) causes it to miss...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28452
m...
- 07:36 PM Backport #38131: mimic: mds: stopping MDS with a large cache (40+GB) causes it to miss heartbeats
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28452
merged
- 09:38 PM Backport #41885 (Resolved): mimic: mds: client evicted twice in one tick
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30950
m...
- 07:36 PM Backport #41885: mimic: mds: client evicted twice in one tick
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30950
merged
- 09:35 PM Backport #40166 (Resolved): luminous: client: ceph.dir.rctime xattr value incorrectly prefixes "0...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28502
m...
- 03:35 PM Backport #40166: luminous: client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the n...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28502
merged
- 09:35 PM Backport #40807 (Resolved): luminous: mds: msg weren't destroyed before handle_client_reconnect r...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29097
m...
- 03:34 PM Backport #40807: luminous: mds: msg weren't destroyed before handle_client_reconnect returned, if...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29097
merged
- 09:35 PM Backport #40163 (Resolved): luminous: mount: key parsing fail when doing a remount
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29226
m...
- 03:33 PM Backport #40163: luminous: mount: key parsing fail when doing a remount
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29226
merged
- 09:35 PM Backport #40218 (Resolved): luminous: TestMisc.test_evict_client fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29229
m...
- 03:33 PM Backport #40218: luminous: TestMisc.test_evict_client fails
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29229
merged
Reviewed-by: Venky Shankar <vshankar@redhat....
- 09:34 PM Backport #39691 (Resolved): luminous: mds: error "No space left on device" when create a large n...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29829
m...
- 03:32 PM Backport #39691: luminous: mds: error "No space left on device" when create a large number of dirs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29829
merged
- 09:34 PM Backport #41000 (Resolved): luminous: client: failed to drop dn and release caps causing mds star...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29830
m...
- 03:32 PM Backport #41000: luminous: client: failed to drop dn and release caps causing mds stary stacking.
- Xiaoxi Chen wrote:
> https://github.com/ceph/ceph/pull/29830
merged
- 09:34 PM Bug #40286 (Resolved): luminous: qa: remove ubuntu 14.04 testing
- 03:29 PM Bug #40286: luminous: qa: remove ubuntu 14.04 testing
- https://github.com/ceph/ceph/pull/28701 merged
- 09:33 PM Backport #42039 (Resolved): luminous: client: _readdir_cache_cb() may use the readdir_cache alrea...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30934
m...
- 03:28 PM Backport #42039: luminous: client: _readdir_cache_cb() may use the readdir_cache already clear
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30934
merged
- 08:25 PM Bug #42515 (Fix Under Review): fs: OpenFileTable object shards have too many k/v pairs
- 02:23 AM Bug #42515 (In Progress): fs: OpenFileTable object shards have too many k/v pairs
- 07:26 PM Bug #42494 (Fix Under Review): ceph: config show can't locate mds
- 01:47 PM Bug #42494: ceph: config show can't locate mds
- Sage, assigning you since I believe you wanted to look into this.
- 06:22 PM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
- 5.0.0-32 introduced the bad backport, -33 reverted it:
http://changelogs.ubuntu.com/changelogs/pool/main/l/linux/l...
- 05:19 PM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
- Was 5.0.32 actually fixed?
- 03:45 PM Feature #5520 (New): osdc: should handle namespaces
- 02:01 PM Feature #42530 (Resolved): cephfs-shell: add setxattr and getxattr
- Allow cephfs-shell to set and fetch xattrs. This would be nice for testing selinux, for instance.
- 01:50 PM Bug #42466 (Duplicate): Missing subvolumegroup commands
- The nautilus backport is still in review, https://tracker.ceph.com/issues/42239
`subvolumegroup ls` should be availa...
- 01:09 PM Bug #42478 (Fix Under Review): qa: AttributeError: can't set attribute
- 12:17 PM Bug #42478 (In Progress): qa: AttributeError: can't set attribute
- https://github.com/ceph/ceph-ci/blob/0772e8a667e86de7945704f53c601d09a49232f1/qa/tasks/mds_thrash.py#L21
https://git...
- 12:57 PM Feature #24880: pybind/mgr/volumes: restore from snapshot
- This feature will be used by Ceph CSI to create a PVC from a snapshot [1], and by OpenStack Manila to create a share ...
- 12:26 PM Bug #40197: The command 'node ls' sometimes output some incorrect information about mds.
- Min Shi wrote:
> I repeat your steps, but the phenomenon is a little different. When I test the command `ceph node l...
- 09:02 AM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- saw this with luminous: http://qa-proxy.ceph.com/teuthology/yuriw-2019-10-24_18:14:12-multimds-wip-yuri8-testing-2019...
- 06:23 AM Bug #42062 (Fix Under Review): qa: AttributeError: 'MonitorThrasher' object has no attribute 'fs'
- 03:37 AM Bug #42434 (Resolved): qa: TOO_FEW_PGS in mimic during upgrade suite tests
10/28/2019
- 10:09 PM Bug #42516 (Resolved): mds: some mutations have initiated (TrackedOp) set to 0
- From Brad:
> I was looking for tracker ops that had been created with 'initiated'
> set to zero and came across t...
- 09:09 PM Bug #42515: fs: OpenFileTable object shards have too many k/v pairs
- ceph-users - http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-October/037076.html
- 08:50 PM Bug #42515 (Resolved): fs: OpenFileTable object shards have too many k/v pairs
- Since #40583 lowered the omap k/v limit to 200k, we've been seeing messages from deep scrubs showing the open file ta...
- 04:04 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- please dump cache and share it again
- 03:48 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- I restarted all mds and did not create a snapshot after that, but still seeing those slow requests..
- 03:52 PM Tasks #42085 (Fix Under Review): qa: create tests for new recover_session=clean option
- 12:50 PM Fix #42508 (Resolved): cephfs-shell: print a helpful message instead of a Python backtrace when n...
- Currently, running @cephfs-shell@ on a blank system without any configuration file fails with a Python backtrace:
<p...
- 04:19 AM Bug #42478: qa: AttributeError: can't set attribute
- Jos Collin wrote:
> This happens when there is no setter in thrasher.py. Can you show me your thrasher.py and mds_th...
- 03:54 AM Tasks #39998: client: audit ACL
- I think we decided this one need to be tabled for now. Fixing it will likely require a lot of changes to the cephfs p...
10/27/2019
- 03:59 PM Bug #42365: client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- ...
- 06:29 AM Bug #40197: The command 'node ls' sometimes output some incorrect information about mds.
- I repeat your steps, but the phenomenon is a little different. When I test the command `ceph node ls`, it only show t...
- 12:11 AM Feature #42479 (Fix Under Review): mgr/volumes: add `fs subvolume resize infinite` command
10/25/2019
- 09:41 PM Bug #42494 (Resolved): ceph: config show can't locate mds
- ...
- 04:14 PM Bug #42478 (Need More Info): qa: AttributeError: can't set attribute
- This happens when there is no setter in thrasher.py. Can you show me your thrasher.py and mds_thrash.py, so that I ge...
- 02:40 PM Bug #42491: "probably no MDS server is up?" in upgrade:jewel-x-wip-yuri-luminous_10.22.19
and in this job http://pulpito.ceph.com/yuriw-2019-10-23_19:22:44-upgrade:jewel-x-wip-yuri-luminous_10.22.19-distro-b...
- 02:39 PM Bug #42491 (New): "probably no MDS server is up?" in upgrade:jewel-x-wip-yuri-luminous_10.22.19
- 02:39 PM Bug #42491 (New): "probably no MDS server is up?" in upgrade:jewel-x-wip-yuri-luminous_10.22.19
- http://qa-proxy.ceph.com/teuthology/yuriw-2019-10-23_19:22:44-upgrade:jewel-x-wip-yuri-luminous_10.22.19-distro-basic...
- 10:31 AM Feature #42479 (In Progress): mgr/volumes: add `fs subvolume resize infinite` command
- 01:05 AM Feature #42479 (Resolved): mgr/volumes: add `fs subvolume resize infinite` command
- Add a resize infinite command to unset the quota for a subvolume.
- 06:22 AM Bug #40371 (Resolved): cephfs-shell: du must ignore non-directory files
- 06:07 AM Backport #41861 (Rejected): nautilus: cephfs-shell: du must ignore non-directory files
- I talked with Patrick; it's fine to cancel this ticket, so I am marking it as "Rejected".
- 12:16 AM Bug #42388: mimic: Test failure: test_full_different_file (tasks.cephfs.test_full.TestQuotaFull)
- Venky Shankar wrote:
> I could not reproduce this with vstart_runner. One way to sneak in a write (+ fsync) was to n...
10/24/2019
- 11:09 PM Bug #42365 (Need More Info): client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- Can you also share any surrounding debug log messages.
- 10:38 PM Bug #42478 (Resolved): qa: AttributeError: can't set attribute
- ...
- 10:18 PM Bug #42436 (Resolved): qa: tasks.cephfs.test_volume_client.TestVolumeClient test_data_isolated fa...
- 03:58 PM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
- Ilya Dryomov wrote:
> The backport to 4.19 was incorrect, 4.19.76 is busted. Fixed in 4.19.77.
This goes for Ubu...
- 01:20 PM Cleanup #42468 (Fix Under Review): mds: reorg MDSTable header
- 01:11 PM Cleanup #42468 (Resolved): mds: reorg MDSTable header
- 01:14 PM Bug #42388: mimic: Test failure: test_full_different_file (tasks.cephfs.test_full.TestQuotaFull)
- I could not reproduce this with vstart_runner. One way to sneak in a write (+ fsync) was to not wait for the mons to ...
- 01:01 PM Cleanup #42465 (Fix Under Review): mds: reorg MDSRank header
- 11:39 AM Cleanup #42465 (Resolved): mds: reorg MDSRank header
- 12:31 PM Bug #42467 (Duplicate): mds: daemon crashes while updating blacklist
- Ubuntu 18.04.3 LTS
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus (stable)
We have setup...
- 12:00 PM Bug #42466 (Duplicate): Missing subvolumegroup commands
- When I run the command:
ceph fs subvolumegroup ls <vol_name>
It says "Error EINVAL: invalid command"
Here is...
- 11:11 AM Cleanup #42464 (Fix Under Review): mds: reorg MDSMap header
- 11:00 AM Cleanup #42464 (Resolved): mds: reorg MDSMap header
- 10:44 AM Backport #42462 (In Progress): nautilus: doc: MDS and metadata pool hardware requirements/recomme...
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 10:42 AM Backport #42462 (Resolved): nautilus: doc: MDS and metadata pool hardware requirements/recommenda...
- https://github.com/ceph/ceph/pull/31116
- 10:42 AM Backport #42463 (Rejected): mimic: doc: MDS and metadata pool hardware requirements/recommendations
- 10:42 AM Documentation #39620 (Pending Backport): doc: MDS and metadata pool hardware requirements/recomme...
- 10:04 AM Cleanup #42461 (Fix Under Review): mds: reorg MDSTableClient header
- 09:55 AM Cleanup #42461 (Resolved): mds: reorg MDSTableClient header
- 06:52 AM Bug #24403: mon failed to return metadata for mds
- Did you restart only the mds on sen2agriprod, or did you restart all mds? We have a similar case, losing all the mds's ...
- 12:55 AM Feature #42451 (Resolved): mds: add root_squash
- Allow a root squash mode via the MDS capability. The purpose here is not so much to prevent a true adversary (the cli...
10/23/2019
- 11:21 PM Fix #42450 (Resolved): MDSMonitor: warn if a new file system is being created with an EC default ...
- We do not recommend using an EC pool as the default data pool for many reasons, documented in [1].
`fs new` should...
- 08:43 PM Feature #42447 (Resolved): add basic client setup page
- Add a page describing how to set up a client machine. Should cover (or refer to pages that cover):
# installation ...
- 08:15 PM Bug #37726 (Resolved): mds: high debug logging with many subtrees is slow
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:15 PM Bug #38043 (Resolved): mds: optimize revoking stale caps
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:15 PM Bug #38326 (Resolved): mds: evict stale client when one of its write caps are stolen
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:06 PM Backport #40327 (Resolved): mimic: mds: evict stale client when one of its write caps are stolen
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28585
m...
- 03:32 PM Backport #40327: mimic: mds: evict stale client when one of its write caps are stolen
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28585
merged
- 08:05 PM Backport #38097 (Resolved): mimic: mds: optimize revoking stale caps
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28585
m...
- 03:32 PM Backport #38097: mimic: mds: optimize revoking stale caps
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28585
merged
- 08:02 PM Backport #42122 (Resolved): mimic: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30918
m...
- 03:28 PM Backport #42122: mimic: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30918
merged
- 08:01 PM Backport #38875 (Resolved): mimic: mds: high debug logging with many subtrees is slow
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29219
m...
- 03:26 PM Backport #38875: mimic: mds: high debug logging with many subtrees is slow
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29219
merged
- 08:00 PM Backport #42034 (Resolved): mimic: client: lseek function does not return the correct value.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30932
m...
- 03:25 PM Backport #42034: mimic: client: lseek function does not return the correct value.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30932
merged
- 07:59 PM Backport #42038 (Resolved): mimic: client: _readdir_cache_cb() may use the readdir_cache already ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30933
m...
- 03:25 PM Backport #42038: mimic: client: _readdir_cache_cb() may use the readdir_cache already clear
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30933
merged
- 06:15 PM Bug #42436 (Fix Under Review): qa: tasks.cephfs.test_volume_client.TestVolumeClient test_data_iso...
- 06:01 PM Feature #13999: client: richacl support
- Patrick, any update on this issue? You told me a few weeks ago that you'll add details to this issue so that I can get ...
- 05:59 PM Feature #13999 (New): client: richacl support
- 06:00 PM Tasks #39998: client: audit ACL
- Patrick, any update on this issue? You told me a few weeks ago that you'll add details to this issue so that I can get ...
- 05:56 PM Bug #42317 (Resolved): mimic: incomplete backport LibCephFS.mount in cephfs.pyx needs it's method...
- 03:11 PM Bug #42338 (Duplicate): file system keeps on deadlocking with unresolved slow requests (failed to...
- dup of https://tracker.ceph.com/issues/39987. will be fixed by v14.2.5. you can avoid this bug by not creating new s...
- 01:46 PM Tasks #42085 (In Progress): qa: create tests for new recover_session=clean option
- 01:23 PM Bug #42388: mimic: Test failure: test_full_different_file (tasks.cephfs.test_full.TestQuotaFull)
- happens with kclient too on mimic: http://qa-proxy.ceph.com/teuthology/yuriw-2019-10-19_00:01:16-kcephfs-wip-yuri-mim...
- 12:39 PM Backport #42424 (In Progress): nautilus: qa: "cluster [ERR] Error recovering journal 0x200: (2)...
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 12:38 PM Backport #42422 (In Progress): nautilus: test_reconnect_eviction fails with "RuntimeError: MDS in...
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 12:37 PM Backport #42279 (In Progress): nautilus: qa: logrotate should tolerate connection resets
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 12:36 PM Backport #42158 (In Progress): nautilus: osdc: objecter ops output does not have useful time info...
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 12:35 PM Backport #42157 (In Progress): nautilus: cephfs-shell: rmdir doesn't complain when directory is n...
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 12:34 PM Backport #42155 (In Progress): nautilus: mds: infinite loop in Locker::file_update_finish()
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 12:29 PM Backport #41861 (Need More Info): nautilus: cephfs-shell: du must ignore non-directory files
- ...
- 11:59 AM Backport #42180 (In Progress): nautilus: mgr/volumes: creating subvolume and subvolume group snap...
- 11:03 AM Backport #42441 (Resolved): nautilus: mds: create a configurable snapshot limit
- https://github.com/ceph/ceph/pull/33295
- 11:03 AM Backport #42440 (Rejected): mimic: mds: create a configurable snapshot limit
- 03:57 AM Feature #41209 (Pending Backport): mds: create a configurable snapshot limit
- 03:14 AM Bug #42413 (Need More Info): AsyncConnection and Session cause memory leak
- 03:13 AM Bug #42413: AsyncConnection and Session cause memory leak
- A leak is technically possible but the link between the two is broken during connection event processing. In particul...
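The analysis above can be illustrated with a minimal, hypothetical sketch. Ceph's actual AsyncConnection and Session are C++ objects using intrusive reference counting; the Python names below are stand-ins only. The point is the shape of the problem: as long as connection and session hold strong references to each other, neither is freed, and severing the link during connection teardown is what prevents the leak.

```python
import gc
import weakref

class Session:
    """Stand-in for an MDS Session (hypothetical, illustrative only)."""
    def __init__(self):
        self.connection = None          # strong ref to the connection

class Connection:
    """Stand-in for an AsyncConnection (hypothetical, illustrative only)."""
    def __init__(self, session):
        self.session = session          # strong ref back to the session
        session.connection = self

    def mark_down(self):
        # Sever the mutual reference during connection event processing;
        # without this step the pair keeps each other alive.
        if self.session is not None:
            self.session.connection = None
            self.session = None

gc.disable()                            # mimic pure reference counting
s = Session()
c = Connection(s)
probe = weakref.ref(s)

del s, c
assert probe() is not None              # cycle still alive: the leak shape

conn = probe().connection
conn.mark_down()                        # break the link
del conn
assert probe() is None                  # session freed once link is broken
gc.enable()
```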
10/22/2019
- 11:22 PM Bug #42436 (Resolved): qa: tasks.cephfs.test_volume_client.TestVolumeClient test_data_isolated fa...
- ...
- 11:18 PM Bug #42435 (New): qa/suites/kcephfs: client I/O halts or cannot make sufficient progress during t...
- ...
- 11:06 PM Bug #42434 (Fix Under Review): qa: TOO_FEW_PGS in mimic during upgrade suite tests
- 11:03 PM Bug #42434 (Resolved): qa: TOO_FEW_PGS in mimic during upgrade suite tests
- ...
- 10:30 AM Backport #42424 (Resolved): nautilus: qa: "cluster [ERR] Error recovering journal 0x200: (2) No...
- https://github.com/ceph/ceph/pull/31084
- 10:30 AM Backport #42423 (Rejected): mimic: qa: "cluster [ERR] Error recovering journal 0x200: (2) No su...
- 10:29 AM Backport #42422 (Resolved): nautilus: test_reconnect_eviction fails with "RuntimeError: MDS in re...
- https://github.com/ceph/ceph/pull/31083
- 10:29 AM Backport #42421 (Rejected): mimic: test_reconnect_eviction fails with "RuntimeError: MDS in rejec...
- 08:21 AM Documentation #42414 (Resolved): doc: hide page contents for Ceph Internals
- Hide the contents section from main page
- 07:23 AM Bug #42413 (Need More Info): AsyncConnection and Session cause memory leak
- It occurred to me that there might be a memory leak problem in ceph-mds.
In function MDSDaemon::mds_handle...
- 04:29 AM Bug #41415 (Need More Info): mgr/volumes: AssertionError: '33' != 'new_pool'
- 04:28 AM Bug #41415: mgr/volumes: AssertionError: '33' != 'new_pool'
- Couldn't reproduce on teuth as well - ...
- 04:18 AM Bug #41836 (Pending Backport): qa: "cluster [ERR] Error recovering journal 0x200: (2) No such f...
- 04:13 AM Bug #42213 (Pending Backport): test_reconnect_eviction fails with "RuntimeError: MDS in reject st...
- 04:07 AM Bug #42022 (Need More Info): mgr/volumes: "Timed out after 30s waiting for ./volumes/_deleting to...
- 04:07 AM Bug #42022: mgr/volumes: "Timed out after 30s waiting for ./volumes/_deleting to become empty fro...
- Couldn't reproduce this locally and on teuthology. On teuthology the test passed -...
10/21/2019
- 10:11 PM Backport #41495 (In Progress): nautilus: qa: 'ceph osd require-osd-release nautilus' fails
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 04:09 PM Documentation #42407 (In Progress): doc: add a doc for libcephfs
- 04:09 PM Documentation #42406 (Resolved): doc: update mount.ceph man page
- 03:48 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
I've sent it through google drive this time. Thanks again!
K
- 01:50 PM Bug #42338 (Need More Info): file system keeps on deadlocking with unresolved slow requests (fail...
- 01:34 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- where did you send the log file? please share the file to me (ukernel@gmail.com) through google drive.
- 02:41 PM Bug #36094: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- This patch by Yan, Zheng to get some extra debug statements:
diff --git a/src/mds/OpenFileTable.cc b/src/mds/OpenF...
- 08:42 AM Bug #36094: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- Yan, Zheng suggested the following:
delete 'mdsX_openfiles.0' object from cephfs metadata pool. (X is rank
of the...
- 02:12 PM Documentation #42195 (In Progress): Add doc for exporting cephfs over nfs server deployed using rook
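The #36094 workaround quoted above revolves around removing the rank's `mdsX_openfiles.0` omap object from the metadata pool. A hedged sketch of how the object name is derived, with the destructive `rados rm` step left commented out (the pool name `cephfs_metadata` and rank `0` are assumptions for illustration, and the rank's MDS must be stopped first):

```shell
#!/bin/sh
# Derive the open-file-table object name for an MDS rank
# ('X' in mdsX_openfiles.0 is the rank number).
rank=0
obj="mds${rank}_openfiles.0"
echo "object to remove: ${obj}"

# Destructive step -- run only with the rank's MDS stopped; pool name assumed:
# rados -p cephfs_metadata rm "${obj}"
```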
- 01:49 PM Bug #42348 (Rejected): TestClientRecovery.test_dont_mark_unresponsive_client_stale failure
- Just needs another backport.
- 12:07 PM Bug #40369: ceph_volume_client: fs_name must be converted to string before using it
- The mimic backport caused a regression, #42317, which was caught during v13.2.7 release preparation, and was reverted...
- 12:05 PM Backport #40896 (Rejected): mimic: ceph_volume_client: fs_name must be converted to string before...
- 12:05 PM Backport #40896: mimic: ceph_volume_client: fs_name must be converted to string before using it
- This caused a regression, #42317, which was caught during v13.2.7 release preparation, and was reverted by https://gi...
- 12:04 PM Bug #42317: mimic: incomplete backport LibCephFS.mount in cephfs.pyx needs it's method signature ...
- Revert: https://github.com/ceph/ceph/pull/31017
- 09:11 AM Bug #42388: mimic: Test failure: test_full_different_file (tasks.cephfs.test_full.TestQuotaFull)
- test run here: http://qa-proxy.ceph.com/teuthology/yuriw-2019-10-19_00:01:32-fs-wip-yuri-mimic_10.18.19-testing-basic...
- 06:16 AM Bug #42388: mimic: Test failure: test_full_different_file (tasks.cephfs.test_full.TestQuotaFull)
- There's a similar problem with `test_full_fclose` with `fclose()` going through on a full pool (w/ quota).
- 05:44 AM Bug #42388 (In Progress): mimic: Test failure: test_full_different_file (tasks.cephfs.test_full.T...
- Started seeing this frequently in mimic test runs:
Things look fine initially:...
10/20/2019
- 06:05 AM Bug #36094: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- Our MDSs keep failing over until we enable debug output (debug_mds=10/10) ... MDS becomes active and stays active ......
10/19/2019
- 11:31 AM Bug #36094: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- And yet another crash ~ 5 hours later. We have adjusted the mds_cache_memory_limit from 150G -> 32G after the last cr...
- 07:35 AM Bug #36094: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- A search for this assert gave this thread: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-August/036702.htm...
- 07:26 AM Bug #36094: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- Today our active MDS crashed with an assert:
2019-10-19 08:14:50.645 7f7906cb7700 -1 /build/ceph-13.2.6/src/mds/Op...
10/18/2019
- 11:01 PM Bug #42381 (Rejected): cephfs: metadata pool cephx cap does not have permissions
- We had the syntax wrong:...
- 10:59 PM Bug #42381 (Rejected): cephfs: metadata pool cephx cap does not have permissions
- ...
- 09:57 PM Bug #42317: mimic: incomplete backport LibCephFS.mount in cephfs.pyx needs it's method signature ...
- https://github.com/ceph/ceph/pull/30238
should never have been approved :(. The test failure
http://pulpit...
- 09:35 PM Bug #42348: TestClientRecovery.test_dont_mark_unresponsive_client_stale failure
- Venky Shankar wrote:
> This was not as straightforward as I suggested. However, it's due to PR https://github.com/ce...
- 03:05 PM Bug #42348: TestClientRecovery.test_dont_mark_unresponsive_client_stale failure
- This was not as straightforward as I suggested. However, it's due to PR https://github.com/ceph/ceph/pull/28585 not b...
- 12:02 PM Backport #42130 (Resolved): mimic: doc/ceph-fuse: -k missing in man page
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30936
m...
- 11:54 AM Bug #40213 (Resolved): mds: cannot switch mds state from standby-replay to active
- 11:54 AM Backport #42375 (Resolved): mimic: mds: cannot switch mds state from standby-replay to active
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29232
m...
- 11:53 AM Backport #42375 (In Progress): mimic: mds: cannot switch mds state from standby-replay to active
- 11:38 AM Backport #42375 (Resolved): mimic: mds: cannot switch mds state from standby-replay to active
- https://github.com/ceph/ceph/pull/29232
- 11:54 AM Backport #42374 (Resolved): mimic: mds: cleanup truncating inodes when standby replay mds trim lo...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29232
m...
- 11:53 AM Backport #42374 (In Progress): mimic: mds: cleanup truncating inodes when standby replay mds trim...
- 11:37 AM Backport #42374 (Resolved): mimic: mds: cleanup truncating inodes when standby replay mds trim lo...
- https://github.com/ceph/ceph/pull/29232
- 11:40 AM Bug #38679 (Resolved): mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:39 AM Bug #38822 (Resolved): mds: there is an assertion when calling Beacon::shutdown()
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:39 AM Bug #38835 (Resolved): MDSTableServer.cc: 83: FAILED assert(version == tid)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:39 AM Bug #38844 (Resolved): mds: mds_cap_revoke_eviction_timeout is not used to initialize Server::cap...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:38 AM Bug #40361 (Resolved): getattr on snap inode stuck
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:08 AM Cleanup #42371 (Fix Under Review): mds: reorg MDSDaemon header
- 10:01 AM Cleanup #42371 (Resolved): mds: reorg MDSDaemon header
- 09:12 AM Bug #42213 (Fix Under Review): test_reconnect_eviction fails with "RuntimeError: MDS in reject st...
- 08:58 AM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- Hi,
I didn't create a new snapshot while syncing data.
Files sent by mail, not able to post them here.
- 08:29 AM Backport #39223 (Resolved): mimic: mds: behind on trimming and "[dentry] was purgeable but no lon...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/292...
- 08:27 AM Backport #41001 (Resolved): mimic: client: failed to drop dn and release caps causing mds stary s...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/294...
- 08:27 AM Backport #38709 (Resolved): mimic: qa: kclient unmount hangs after file system goes down
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/292...
- 08:27 AM Backport #39210 (Resolved): mimic: mds: mds_cap_revoke_eviction_timeout is not used to initialize...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/292...
- 08:27 AM Backport #39212 (Resolved): mimic: MDSTableServer.cc: 83: FAILED assert(version == tid)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/292...
- 08:26 AM Backport #39215 (Resolved): mimic: mds: there is an assertion when calling Beacon::shutdown()
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/292...
- 08:26 AM Backport #40219 (Resolved): mimic: TestMisc.test_evict_client fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/292...
- 08:26 AM Backport #40437 (Resolved): mimic: getattr on snap inode stuck
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/292...
- 08:23 AM Bug #42289: mds: rejoin_gather_finish() core
- 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x85) [0x5646b7223935]
2: (MDCache::rejoi...
- 06:21 AM Bug #42365 (Resolved): client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- ...
10/17/2019
- 04:45 PM Backport #39223: mimic: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29224
merged
- 04:44 PM Backport #41001: mimic: client: failed to drop dn and release caps causing mds stary stacking.
- Xiaoxi Chen wrote:
> https://github.com/ceph/ceph/pull/29479
merged
- 04:04 PM Backport #38709: mimic: qa: kclient unmount hangs after file system goes down
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29218
merged
- 04:04 PM Backport #39210: mimic: mds: mds_cap_revoke_eviction_timeout is not used to initialize Server::ca...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29220
merged
- 04:04 PM Backport #39212: mimic: MDSTableServer.cc: 83: FAILED assert(version == tid)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29222
merged
- 03:58 PM Backport #39215: mimic: mds: there is an assertion when calling Beacon::shutdown()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29223
merged
- 03:57 PM Backport #40219: mimic: TestMisc.test_evict_client fails
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29228
merged
- 03:57 PM Backport #40437: mimic: getattr on snap inode stuck
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29230
merged
- 01:46 PM Bug #42213: test_reconnect_eviction fails with "RuntimeError: MDS in reject state up:active"
- there's one more instance of this in test_reconnect_eviction() -- need to fix that too. I'll push a PR.
- 01:24 PM Bug #41426 (Can't reproduce): mds: wrongly signals directory is empty when dentry is damaged?
- 01:22 PM Bug #41836 (Fix Under Review): qa: "cluster [ERR] Error recovering journal 0x200: (2) No such f...
- 12:36 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- did you create new snapshots while syncing data?
please run 'ceph daemon mds.x dump cache /tmp/cachedump.x' for al...
- 12:31 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- yes we have a few snapshots
Thanks
- 12:23 PM Bug #42338: file system keeps on deadlocking with unresolved slow requests (failed to authpin, su...
- do you use snapshots?
- 09:23 AM Bug #42348 (Rejected): TestClientRecovery.test_dont_mark_unresponsive_client_stale failure
- Saw this in mimic, seems to exist in master too: http://qa-proxy.ceph.com/teuthology/yuriw-2019-10-16_13:28:41-fs-wip...
10/16/2019
- 10:42 AM Backport #42339 (In Progress): nautilus: mds: move MDSDaemon conf change handling to MDSRank fini...
- 10:41 AM Backport #42339 (Resolved): nautilus: mds: move MDSDaemon conf change handling to MDSRank finisher
- https://github.com/ceph/ceph/pull/30761
- 10:41 AM Cleanup #40694 (Pending Backport): mds: move MDSDaemon conf change handling to MDSRank finisher
- 10:04 AM Bug #42338 (Duplicate): file system keeps on deadlocking with unresolved slow requests (failed to...
- While syncing data to cephfs , using both fuse or kclient, our file systems always got stuck. After I restart the mds...
- 08:27 AM Bug #42317: mimic: incomplete backport LibCephFS.mount in cephfs.pyx needs it's method signature ...
- Rishabh Dave wrote:
> Just cherry pick this commit and not the entire PR, right?
Glancing at the code, I don't se...
- 07:24 AM Bug #42317: mimic: incomplete backport LibCephFS.mount in cephfs.pyx needs it's method signature ...
- @Nathan
> @Rishabh Is it enough to cherry-pick 153a8cb025da7a500356d65f80792c1b51de71fe to mimic? I see it applies c...
10/15/2019
- 07:45 PM Backport #42162 (Rejected): mimic: qa: add testing for lazyio
- 11:46 AM Backport #42162 (Need More Info): mimic: qa: add testing for lazyio
- The PR "client: LAZY_IO support" https://github.com/ceph/ceph/pull/22450 was merged for nautilus and not backported t...
- 11:24 AM Backport #42162 (In Progress): mimic: qa: add testing for lazyio
- 07:43 PM Backport #41887 (Rejected): mimic: client: lazyio synchronize does not get file size
- 11:37 AM Backport #41887 (Need More Info): mimic: client: lazyio synchronize does not get file size
- PR "client: LAZY_IO support" https://github.com/ceph/ceph/pull/22450 was merged for nautilus and not backported to mi...
- 09:11 AM Backport #41887 (In Progress): mimic: client: lazyio synchronize does not get file size
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 04:51 PM Backport #41886 (In Progress): nautilus: mds: client evicted twice in one tick
- Nathan Cutler wrote:
> nautilus does not have 480fbc72f7da0e17329915321ce01f6e51ecef21 so I wonder if this fix is ap...
- 09:09 AM Backport #41886 (Need More Info): nautilus: mds: client evicted twice in one tick
- nautilus does not have 480fbc72f7da0e17329915321ce01f6e51ecef21 so I wonder if this fix is applicable...
- 04:51 PM Backport #41885 (In Progress): mimic: mds: client evicted twice in one tick
- Nathan Cutler wrote:
> mimic does not have 480fbc72f7da0e17329915321ce01f6e51ecef21 so I wonder if this fix is appli...
- 09:06 AM Backport #41885 (Need More Info): mimic: mds: client evicted twice in one tick
- mimic does not have 480fbc72f7da0e17329915321ce01f6e51ecef21 so I wonder if this fix is applicable...
- 02:22 PM Bug #42317: mimic: incomplete backport LibCephFS.mount in cephfs.pyx needs it's method signature ...
- @Rishabh Is it enough to cherry-pick 153a8cb025da7a500356d65f80792c1b51de71fe to mimic? I see it applies cleanly.
- 02:21 PM Bug #42317: mimic: incomplete backport LibCephFS.mount in cephfs.pyx needs it's method signature ...
- Indeed, https://github.com/ceph/ceph/commit/153a8cb025da7a500356d65f80792c1b51de71fe was merged for nautilus and I ca...
- 05:55 AM Bug #42317 (Resolved): mimic: incomplete backport LibCephFS.mount in cephfs.pyx needs it's method...
- I came across this issue while trying to reproduce the issue "here":https://github.com/ceph/ceph/pull/29766#issuecomm...
- 02:00 PM Cleanup #42329 (Fix Under Review): mds: reorg MDSCacheObject header
- 01:52 PM Cleanup #42329 (Resolved): mds: reorg MDSCacheObject header
- 11:45 AM Backport #42161 (In Progress): nautilus: qa: add testing for lazyio
- 11:45 AM Backport #42161 (Need More Info): nautilus: qa: add testing for lazyio
- 11:27 AM Backport #42161 (In Progress): nautilus: qa: add testing for lazyio
- 11:19 AM Backport #42130 (In Progress): mimic: doc/ceph-fuse: -k missing in man page
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 09:48 AM Backport #42327 (Rejected): nautilus: cephfs-shell: not compatible with cmd2 versions after 0.9.13
- 09:41 AM Backport #42039 (In Progress): luminous: client: _readdir_cache_cb() may use the readdir_cache al...
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 09:37 AM Backport #42038 (In Progress): mimic: client: _readdir_cache_cb() may use the readdir_cache alrea...
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 09:13 AM Backport #42034 (In Progress): mimic: client: lseek function does not return the correct value.
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 01:00 AM Bug #42228 (Resolved): mgr/dashboard: backend API test failure "test_access_permissions"
10/14/2019
- 09:19 PM Backport #42122 (In Progress): mimic: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 09:14 PM Bug #42274 (Need More Info): mds: FAILED assert(in->filelock.can_read(mdr->get_client()))
- Can you share more logs around the time of the event? What kind of clients do you have?
- 08:00 PM Cleanup #42192 (Resolved): mds: reorg MDLog header
- 06:33 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- That would make it a little simpler, but it's not really that big a deal to track an arbitrary interval_set. We have ...
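As a rough illustration of the interval-set bookkeeping mentioned here: Ceph's real type is `interval_set<inodeno_t>` in C++, and this Python stand-in only sketches the idea of tracking delegated inode-number ranges, merging adjacent delegations into one interval.

```python
# Minimal sketch of tracking inode-number ranges delegated to a client
# (illustrative stand-in for interval_set<inodeno_t>, not Ceph code).
class IntervalSet:
    def __init__(self):
        self.ranges = []  # sorted, non-overlapping [start, end) pairs

    def insert(self, start, length):
        self.ranges.append((start, start + length))
        self.ranges.sort()
        merged = []
        for s, e in self.ranges:
            if merged and s <= merged[-1][1]:      # touching or overlapping
                merged[-1] = (merged[-1][0], max(merged[-1][1], e))
            else:
                merged.append((s, e))
        self.ranges = merged

    def contains(self, ino):
        return any(s <= ino < e for s, e in self.ranges)

delegated = IntervalSet()
delegated.insert(0x1000, 256)   # MDS hands the client inodes 0x1000-0x10ff
delegated.insert(0x1100, 256)   # adjacent range is merged into one interval
assert delegated.ranges == [(0x1000, 0x1200)]
assert delegated.contains(0x10ff) and not delegated.contains(0x1200)
```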
- 06:11 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Jeff Layton wrote:
> I added a patch that makes the MDS encode it as part of the "extra" info that gets tacked onto ... - 05:57 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- I added a patch that makes the MDS encode it as part of the "extra" info that gets tacked onto the create reply. I th...
- 06:27 PM Bug #42057 (Pending Backport): cephfs-shell: not compatible with cmd2 versions after 0.9.13
- 05:41 PM Cleanup #42311 (Fix Under Review): mds: reorg MDSAuthCaps header
- 05:34 PM Cleanup #42311 (Resolved): mds: reorg MDSAuthCaps header
- 02:11 PM Documentation #42300 (Fix Under Review): doc/ceph-fuse: -n missing in man page
- 02:10 PM Documentation #42300 (Resolved): doc/ceph-fuse: -n missing in man page
- 02:01 PM Feature #21571 (Need More Info): mds: limit number of snapshots (global and subtree)
- Zheng, looking at #41209: do we need a snapshot limit by subtree or is a directory limit via config sufficient?
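For context, the directory-level limit referenced via #41209 is a plain MDS config option rather than a per-subtree mechanism; a sketch of what setting it might look like (the option name `mds_max_snaps_per_dir` and the value shown are assumptions based on that work and should be checked against the shipped default):

```
[mds]
# assumed option from the #41209 work: cap on snapshots per directory
mds_max_snaps_per_dir = 100
```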
- 02:00 PM Feature #21571 (Duplicate): mds: limit number of snapshots (global and subtree)
- 01:56 PM Tasks #4492 (In Progress): mds: Define kill points involved in clustered migration and recovery
- 01:47 PM Bug #42271 (Fix Under Review): client: ceph-fuse which had been blacklisted couldn't auto reconne...
- 01:45 PM Bug #42289 (Need More Info): mds: rejoin_gather_finish() core
- Can you share a coredump or backtrace?
Logs would be helpful too.
- 01:50 AM Bug #42289: mds: rejoin_gather_finish() core
- Recently we met the core during switching/restarting mds frequently, I found an osd fetch cost 5 minutes, rejoin_ack_...
- 01:36 PM Bug #42299 (Fix Under Review): mgr/volumes: cleanup libcephfs handles on mgr shutdown
- 01:35 PM Bug #42299 (Resolved): mgr/volumes: cleanup libcephfs handles on mgr shutdown
- We should be able to clean up on SIGTERM/SIGINT or when the plugin object is freed.
- 01:30 PM Bug #42213: test_reconnect_eviction fails with "RuntimeError: MDS in reject state up:active"
- Patrick Donnelly wrote:
> This looks like the same problem as #40999. Can't verify because there are no mds logs. T...
- 01:11 PM Backport #41854 (In Progress): mimic: mds: reject sessionless messages
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 01:04 PM Documentation #42298 (Resolved): doc: move mount automation part from mounting doc to fstab doc
- 12:56 PM Backport #41865 (In Progress): nautilus: mds: ask idle client to trim more caps
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 09:15 AM Bug #41799: client: FAILED assert(cap == in->auth_cap)
- Sorry for the delay
I understand what happened (two MDSes exported non-auth caps).
PR#30402 is a good workarou...
10/12/2019
- 07:22 AM Bug #42289 (Need More Info): mds: rejoin_gather_finish() core
- rejoin_ack_gather is empty, when rejoin_gather_finish() running, assert(rejoin_ack_gather.count(mds->get_nodeid())) c...