Activity
From 08/13/2018 to 09/11/2018
09/11/2018
- 11:12 PM Bug #23380 (Resolved): mds: ceph.dir.rctime follows dir ctime not inode ctime
- Forked this new fix into a separate issue.
- 07:02 AM Bug #23380 (Fix Under Review): mds: ceph.dir.rctime follows dir ctime not inode ctime
- https://github.com/ceph/ceph/pull/24022
- 11:11 PM Bug #35945 (Resolved): client: update ctime when modifying file content
- Follow-up fix to #23380.
https://github.com/ceph/ceph/pull/24022
- 10:53 PM Backport #26904 (In Progress): luminous: qa: reduce slow warnings arising due to limited testing ...
- 08:39 PM Backport #24841 (Resolved): mimic: qa: move mds/client config to qa from teuthology ceph.conf.tem...
- 08:39 PM Backport #26903 (Resolved): mimic: qa: reduce slow warnings arising due to limited testing hardware
- 07:16 PM Backport #35940 (Resolved): mimic: client: statfs inode count odd
- https://github.com/ceph/ceph/pull/24377
- 07:16 PM Backport #35939 (Resolved): luminous: client: statfs inode count odd
- https://github.com/ceph/ceph/pull/24376
- 07:16 PM Backport #35938 (Resolved): mimic: mds: add average session age (uptime) perf counter
- https://github.com/ceph/ceph/pull/24467
- 07:16 PM Backport #35937 (Resolved): luminous: mds: add average session age (uptime) perf counter
- https://github.com/ceph/ceph/pull/24421
- 07:15 PM Backport #35934 (Resolved): mimic: client: cannot list out files created by another ceph-fuse client
- https://github.com/ceph/ceph/pull/24295
- 07:15 PM Backport #35933 (Resolved): luminous: client: cannot list out files created by another ceph-fuse ...
- https://github.com/ceph/ceph/pull/24282
- 07:15 PM Backport #35932 (Resolved): mimic: mds: retry remounting in ceph-fuse on dcache invalidation
- https://github.com/ceph/ceph/pull/24695
- 07:15 PM Backport #35931 (Resolved): luminous: mds: retry remounting in ceph-fuse on dcache invalidation
- https://github.com/ceph/ceph/pull/24303
- 06:27 PM Bug #27657 (Pending Backport): mds: retry remounting in ceph-fuse on dcache invalidation
- 06:26 PM Bug #27051 (Pending Backport): client: cannot list out files created by another ceph-fuse client
- 06:26 PM Bug #24849 (Pending Backport): client: statfs inode count odd
- 06:22 PM Feature #25013 (Pending Backport): mds: add average session age (uptime) perf counter
- 05:44 PM Documentation #27209 (Resolved): doc: document state of kernel client feature parity with ceph-fuse
- 12:24 PM Bug #27237: unix df reports global raw storage as usable storage after mimic 13.2.1 update.
- Reporting raw available space was the historic behaviour of cephfs's fsstat implementation -- it's because in the gen...
- 10:41 AM Bug #34531: cephfs does not appear in 'df' if the quota size is too small
- Interesting... it's not obvious how we can fix this. If the quota is below one block, then it probably doesn't make se...
- 10:35 AM Support #35694: CephFS stops working after upgrade from 12.2.7 to 12.2.8
- Suggest setting "debug mds = 10" and gathering logs from the period when an MDS daemon goes active to the point that ...
- 08:22 AM Bug #26968: klient: mount fails during MDS failover
- It's a mount timeout. I think it's related to socket failure injection...
- 07:53 AM Bug #35829: qa: workunits/fs/misc/acl.sh failure from unexpected system.posix_acl_default attribute
- Looks like the test executes qa/workunits/fs/misc/acl.sh on two mounts at the same time. acl.sh is not designed to be ru...
- 07:15 AM Bug #35916: mds: rctime may go back
- /ceph/teuthology-archive/pdonnell-2018-09-08_06:58:28-kcephfs-wip-pdonnell-testing-20180908.044939-distro-basic-smith...
- 07:04 AM Bug #35916 (Fix Under Review): mds: rctime may go back
- https://github.com/ceph/ceph/pull/24023
- 02:11 AM Bug #35916 (Resolved): mds: rctime may go back
- Code like the below can cause rctime to go back:
  if (new_mtime > pi.inode.ctime)
    pi.inode.ctime = pi.inode.rstat.rct...
09/10/2018
- 09:19 PM Bug #35850: mds: runs out of file descriptors after several respawns
- https://github.com/ceph/ceph/pull/24020
- 01:41 PM Feature #35411 (In Progress): mds: store session birth time on-disk in session map
- 01:36 PM Bug #26901: mds: no throttlers set on incoming messages
- Zheng was working on this as part of his efforts to sort client requests by priority but ran into issues. Will revisi...
09/07/2018
- 11:19 PM Backport #35859 (In Progress): luminous: MDSMonitor: lookup of gid in prepare_beacon that has bee...
- 11:18 PM Backport #35859 (Resolved): luminous: MDSMonitor: lookup of gid in prepare_beacon that has been r...
- https://github.com/ceph/ceph/pull/23990
- 11:18 PM Backport #35858 (Resolved): mimic: MDSMonitor: lookup of gid in prepare_beacon that has been remo...
- https://github.com/ceph/ceph/pull/24272
- 11:18 PM Bug #35848 (Pending Backport): MDSMonitor: lookup of gid in prepare_beacon that has been removed ...
- 07:39 PM Bug #35848 (Fix Under Review): MDSMonitor: lookup of gid in prepare_beacon that has been removed ...
- https://github.com/ceph/ceph/pull/23984
- 07:04 PM Bug #35848: MDSMonitor: lookup of gid in prepare_beacon that has been removed will cause exception
- ...
- 05:30 PM Bug #35848 (Resolved): MDSMonitor: lookup of gid in prepare_beacon that has been removed will cau...
- ...
- 11:17 PM Bug #35850 (In Progress): mds: runs out of file descriptors after several respawns
- 10:36 PM Bug #35850 (Pending Backport): mds: runs out of file descriptors after several respawns
- 09:41 PM Bug #35850 (In Progress): mds: runs out of file descriptors after several respawns
- 07:05 PM Bug #35850 (Resolved): mds: runs out of file descriptors after several respawns
- ...
- 10:21 AM Backport #35841 (Resolved): mimic: client: segmentation fault in handle_client_reply
- https://github.com/ceph/ceph/pull/24187
- 10:21 AM Backport #35840 (Duplicate): luminous: unhealthy heartbeat map during subtree migration
- https://github.com/ceph/ceph/pull/23507
- 10:21 AM Backport #35839 (Duplicate): mimic: unhealthy heartbeat map during subtree migration
- https://github.com/ceph/ceph/pull/23506
- 10:21 AM Backport #35838 (Resolved): luminous: mds: use monotonic clock for beacon message timekeeping
- https://github.com/ceph/ceph/pull/24311
- 10:21 AM Backport #35837 (Resolved): mimic: mds: use monotonic clock for beacon message timekeeping
- https://github.com/ceph/ceph/pull/24467
- 08:02 AM Bug #23380: mds: ceph.dir.rctime follows dir ctime not inode ctime
- I just noticed that the qa test added in this PR doesn't check the >> case.
qa/workunits/fs: test for cephfs rs...
- 07:57 AM Bug #23380: mds: ceph.dir.rctime follows dir ctime not inode ctime
- Patrick Donnelly wrote:
> Dan van der Ster wrote:
> > Sorry but I think we need to re-open this.
> >
> > On v12....
- 01:53 AM Bug #15783 (New): client: enable acls by default
- 01:52 AM Bug #18532 (New): mds: forward scrub failing to repair dir stats (was: subdir with corrupted dirs...
- 01:51 AM Bug #24522 (Pending Backport): blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-smithi
- 01:50 AM Feature #26974: mds: provide mechanism to allow new instance of an application to cancel old MDS ...
- New PR: https://github.com/ceph/ceph/pull/23069
- 01:48 AM Bug #24881 (Pending Backport): unhealthy heartbeat map during subtree migration
- 01:48 AM Bug #24870 (Resolved): ceph-debug-docker: python3 libraries not installed in docker image
- 01:47 AM Bug #26959 (Pending Backport): mds: use monotonic clock for beacon message timekeeping
- 01:47 AM Documentation #22989 (Resolved): doc: add documentation for MDS states
- 01:37 AM Bug #24557 (Pending Backport): client: segmentation fault in handle_client_reply
09/06/2018
- 06:04 PM Bug #21777 (Need More Info): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
- Dropping priority on this as there have been no known recurrences.
- 05:59 PM Bug #26995 (Resolved): qa: specify distro/kernel matrix to test in kclient qa-suite
- 05:54 PM Bug #35829 (Rejected): qa: workunits/fs/misc/acl.sh failure from unexpected system.posix_acl_defa...
- ...
- 05:21 PM Bug #35828 (Resolved): qa: RuntimeError: FSCID 10 has no rank 1
- ...
- 06:51 AM Backport #32100 (In Progress): mimic: mds: optimize the way how max export size is enforced
- https://github.com/ceph/ceph/pull/23952
- 12:23 AM Backport #35722 (In Progress): mimic: evicting client session may block finisher thread
- https://github.com/ceph/ceph/pull/23945
- 12:11 AM Backport #35722 (Resolved): mimic: evicting client session may block finisher thread
- https://github.com/ceph/ceph/pull/23105
- 12:22 AM Backport #35721 (In Progress): luminous: evicting client session may block finisher thread
- https://github.com/ceph/ceph/pull/23946
- 12:09 AM Backport #35721 (Resolved): luminous: evicting client session may block finisher thread
- https://github.com/ceph/ceph/pull/23946
- 12:06 AM Bug #35720 (Resolved): evicting client session may block finisher thread
- Thread 11 (Thread 0x7f14e11c5700 (LWP 10777)):
#0 0x00007f14ec009995 in pthread_cond_wait@@GLIBC_2.3.2 () from /li...
09/05/2018
- 11:44 PM Backport #35719 (Resolved): mimic: mds: beacon spams is_laggy message
- https://github.com/ceph/ceph/pull/24161
- 11:44 PM Backport #35718 (Resolved): luminous: mds: beacon spams is_laggy message
- https://github.com/ceph/ceph/pull/24138
- 11:43 PM Bug #35250 (Pending Backport): mds: beacon spams is_laggy message
- 04:02 PM Bug #23380: mds: ceph.dir.rctime follows dir ctime not inode ctime
- Dan van der Ster wrote:
> Sorry but I think we need to re-open this.
>
> On v12.2.8 ceph-fuse client and server, ...
- 02:56 PM Bug #23380: mds: ceph.dir.rctime follows dir ctime not inode ctime
- Sorry but I think we need to re-open this.
On v12.2.8 ceph-fuse client and server, we see the exact same behaviour...
- 12:34 PM Support #35694 (Rejected): CephFS stops working after upgrade from 12.2.7 to 12.2.8
- Hi !
We have done an upgrade from 12.2.7 to 12.2.8 on Ubuntu 14.04.5 LTS (amd64).
In the order MON-OSD-MDS-MGR-RA...
- 05:31 AM Bug #24030 (New): ceph-fuse: double dash meaning
09/04/2018
- 10:00 AM Bug #27657 (Fix Under Review): mds: retry remounting in ceph-fuse on dcache invalidation
- PR https://github.com/ceph/ceph/pull/23908
09/03/2018
- 04:23 AM Feature #35411 (In Progress): mds: store session birth time on-disk in session map
- PR https://github.com/ceph/ceph/pull/23314 adds session birth time to track average session age. During MDS failover ...
09/02/2018
- 05:03 PM Backport #32084 (In Progress): luminous: mds: MDBalancer::try_rebalance() may stop prematurely
- 04:52 PM Backport #32086 (In Progress): mimic: mds: MDBalancer::try_rebalance() may stop prematurely
- 04:48 PM Backport #26991 (In Progress): mimic: mds: curate priority of perf counters sent to mgr
- 04:40 PM Backport #26977 (In Progress): luminous: cephfs-data-scan: print the max used ino
- 04:38 PM Backport #26978 (In Progress): mimic: cephfs-data-scan: print the max used ino
- 02:25 PM Backport #24863 (In Progress): mimic: ceph_volume_client: allow atomic update of RADOS objects
- 02:12 PM Backport #24842 (In Progress): luminous: qa: move mds/client config to qa from teuthology ceph.co...
- 02:10 PM Backport #24841 (In Progress): mimic: qa: move mds/client config to qa from teuthology ceph.conf....
- 01:52 AM Bug #35250 (Fix Under Review): mds: beacon spams is_laggy message
- https://github.com/ceph/ceph/pull/23851
- 01:50 AM Bug #35250 (Resolved): mds: beacon spams is_laggy message
- e.g....
08/30/2018
- 03:15 PM Bug #34531 (New): cephfs does not appear in 'df' if the quota size is too small
- cephfs does not appear in 'df' if the quota size is too small
If the quota is 10000000 (bytes), the mount point ap...
- 05:25 AM Backport #26989 (In Progress): mimic: Implement "cephfs-journal-tool event splice" equivalent for...
- https://github.com/ceph/ceph/pull/23818
08/29/2018
- 05:26 AM Backport #26983 (In Progress): luminous: client: requests that do name lookup may be sent to wron...
- https://github.com/ceph/ceph/pull/23793
- 05:24 AM Backport #26988 (In Progress): mimic: mds: explain delayed client_request due to subtree migration
- https://github.com/ceph/ceph/pull/23792
08/28/2018
- 11:42 PM Backport #32098 (In Progress): luminous: mds: optimize the way how max export size is enforced
- https://github.com/ceph/ceph/pull/23789
- 11:10 AM Backport #32098 (Resolved): luminous: mds: optimize the way how max export size is enforced
- https://github.com/ceph/ceph/pull/23789
- 10:14 PM Backport #26976 (In Progress): mimic: qa: kcephfs suite has kernel build failures
- 08:35 PM Backport #24931 (Resolved): mimic: client: put instance/addr information in status asok command
- 08:00 PM Backport #24931: mimic: client: put instance/addr information in status asok command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23109
merged
- 08:26 PM Bug #24856 (Resolved): mds may get discontinuous mdsmap
- 08:26 PM Backport #25047 (Resolved): mimic: mds may get discontinuous mdsmap
- 08:00 PM Backport #25047: mimic: mds may get discontinuous mdsmap
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/23180
merged
- 08:25 PM Bug #24852 (Resolved): mds: dump MDSMap epoch to log at low debug
- 08:25 PM Backport #25035 (Resolved): mimic: mds: dump MDSMap epoch to log at low debug
- 08:00 PM Backport #25035: mimic: mds: dump MDSMap epoch to log at low debug
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23196
merged
- 08:24 PM Backport #26905 (Resolved): mimic: MDSMonitor: consider raising priority of MMDSBeacons from MDS ...
- 07:59 PM Backport #26905: mimic: MDSMonitor: consider raising priority of MMDSBeacons from MDS so they are...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23565
merged
- 01:00 PM Bug #23958 (Resolved): mds: scrub doesn't always return JSON results
- 01:00 PM Backport #25037 (Resolved): mimic: mds: scrub doesn't always return JSON results
- 01:00 PM Bug #24853 (Resolved): mds: dump recent (memory) log messages before respawning due to being remo...
- 01:00 PM Backport #25040 (Resolved): mimic: mds: dump recent (memory) log messages before respawning due t...
- 12:59 PM Bug #24855 (Resolved): mds: reduce debugging for missing inodes during subtree migration
- 12:59 PM Backport #25042 (Resolved): mimic: mds: reduce debugging for missing inodes during subtree migration
- 12:54 PM Backport #25045 (Resolved): mimic: mds: create health warning if we detect metadata (journal) wri...
- 12:53 PM Backport #25044 (Resolved): mimic: overhead of g_conf->get_val<type>("config name") is high
- 12:51 PM Backport #26914 (Resolved): mimic: handle ceph_ll_close on unmounted filesystem without crashing
- 11:20 AM Backport #26956 (In Progress): mimic: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- included both 24679 and 26967:
https://github.com/ceph/ceph/pull/23769
- 11:13 AM Bug #26966 (Resolved): nfs-ganesha: epochs out of sync in cluster
- Patch merged.
- 11:11 AM Backport #32104 (Resolved): mimic: mds: allows client to create ".." and "." dirents
- https://github.com/ceph/ceph/pull/24384
- 11:11 AM Backport #32103 (Resolved): luminous: mds: allows client to create ".." and "." dirents
- https://github.com/ceph/ceph/pull/24329
- 11:10 AM Backport #32100 (Resolved): mimic: mds: optimize the way how max export size is enforced
- https://github.com/ceph/ceph/pull/23952
- 11:10 AM Backport #32092 (Resolved): mimic: mds: migrate strays part by part when shutdown mds
- https://github.com/ceph/ceph/pull/24435
- 11:10 AM Backport #32091 (Resolved): luminous: mds: migrate strays part by part when shutdown mds
- https://github.com/ceph/ceph/pull/24324
- 11:10 AM Backport #32090 (Resolved): mimic: mds: use monotonic clock for beacon sender thread waits
- https://github.com/ceph/ceph/pull/24467
- 11:10 AM Backport #32088 (Resolved): luminous: mds: use monotonic clock for beacon sender thread waits
- https://github.com/ceph/ceph/pull/24375
- 11:10 AM Backport #32086 (Resolved): mimic: mds: MDBalancer::try_rebalance() may stop prematurely
- https://github.com/ceph/ceph/pull/23883
- 11:10 AM Backport #32084 (Resolved): luminous: mds: MDBalancer::try_rebalance() may stop prematurely
- https://github.com/ceph/ceph/pull/23884
- 05:41 AM Bug #27657 (In Progress): mds: retry remounting in ceph-fuse on dcache invalidation
- 05:28 AM Bug #27216: kclient: usable space suddently decreased
- Patrick Donnelly wrote:
> Vasanth M wrote:
> > Zheng Yan wrote:
> > > what do you mean "decrease"
> > We had 19Tb...
08/27/2018
- 08:20 PM Backport #25037: mimic: mds: scrub doesn't always return JSON results
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23225
merged
- 08:20 PM Backport #25040: mimic: mds: dump recent (memory) log messages before respawning due to being rem...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23275
merged
- 08:19 PM Backport #25042: mimic: mds: reduce debugging for missing inodes during subtree migration
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23309
merged
- 08:19 PM Backport #25045: mimic: mds: create health warning if we detect metadata (journal) writes are slow
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23343
merged
- 08:19 PM Cleanup #24820: overhead of g_conf->get_val&lt;type&gt;("config name") is high
- https://github.com/ceph/ceph/pull/23407 merged
- 08:17 PM Backport #26914: mimic: handle ceph_ll_close on unmounted filesystem without crashing
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23603
merged
- 05:14 PM Bug #27216: kclient: usable space suddently decreased
- Vasanth M wrote:
> Zheng Yan wrote:
> > what do you mean "decrease"
> We had 19Tb of total storage , Totally we...
- 09:26 AM Bug #27657 (Resolved): mds: retry remounting in ceph-fuse on dcache invalidation
- Right now, failure in remounting on dcache invalidation results in the following crash:
> 1: (()+0x6d1ff4) [0x55f2...
08/25/2018
- 08:14 PM Feature #25131 (Pending Backport): mds: optimize the way how max export size is enforced
- 08:14 PM Bug #26926 (Pending Backport): mds: migrate strays part by part when shutdown mds
- 07:57 PM Bug #26973 (Pending Backport): mds: MDBalancer::try_rebalance() may stop prematurely
- 07:56 PM Bug #25113 (Pending Backport): mds: allows client to create ".." and "." dirents
- 07:51 PM Bug #26962 (Pending Backport): mds: use monotonic clock for beacon sender thread waits
- 01:02 AM Bug #27237 (Rejected): unix df reports global raw storage as usable storage after mimic 13.2.1 up...
- os: Ubuntu 18.04.1
ceph version: 13.2.1-1
repo: ceph stable for mimic - 'deb https://download.ceph.com/debian-mimic...
08/24/2018
- 10:29 AM Bug #27216: kclient: usable space suddently decreased
- Zheng Yan wrote:
> what do you mean "decrease"
We had 19Tb of total storage , Totally we have 4 osds each have ...
- 09:36 AM Bug #27216: kclient: usable space suddently decreased
- what do you mean "decrease"
- 06:49 AM Bug #27216 (New): kclient: usable space suddently decreased
- Hi,
its Suddenly happens in ceph fs mountpoint(client side ) to decrease , but the cluster overall storage avail...
- 09:36 AM Documentation #27209 (In Progress): doc: document state of kernel client feature parity with ceph...
- https://github.com/ceph/ceph/pull/23728
- 03:54 AM Backport #25205 (In Progress): luminous: CephVolumeClient: delay required after adding data pool ...
- 03:50 AM Backport #25206 (In Progress): mimic: CephVolumeClient: delay required after adding data pool to ...
08/23/2018
- 09:15 PM Bug #27051 (Fix Under Review): client: cannot list out files created by another ceph-fuse client
- 04:14 PM Documentation #27209 (Resolved): doc: document state of kernel client feature parity with ceph-fuse
- In particular, snapshots are now supported but as of which version? Quotas? etc.
- 08:19 AM Backport #26929 (In Progress): mimic: MDSMonitor: note ignored beacons/map changes at higher debu...
- https://github.com/ceph/ceph/pull/23704
- 08:16 AM Backport #26923 (In Progress): mimic: mds: mds got laggy because of MDSBeacon stuck in mqueue
- https://github.com/ceph/ceph/pull/23703
- 06:28 AM Backport #26984 (In Progress): mimic: client: requests that do name lookup may be sent to wrong mds
08/22/2018
- 01:33 PM Bug #27051: client: cannot list out files created by another ceph-fuse client
- my fix pull request is
https://github.com/ceph/ceph/pull/23691
- 01:22 PM Bug #27051 (Resolved): client: cannot list out files created by another ceph-fuse client
- Recently, in our cephfs (ceph-fuse client) online production environment, I found several rmdir failures due to not empty...
- 03:51 AM Backport #26924: luminous: mds: mds got laggy because of MDSBeacon stuck in mqueue
- Apologies Prashant, I forgot to update this ticket with the backport I made.
- 03:10 AM Backport #26924 (In Progress): luminous: mds: mds got laggy because of MDSBeacon stuck in mqueue
- -https://github.com/ceph/ceph/pull/23679-
- 02:53 AM Backport #26987 (In Progress): luminous: mds: explain delayed client_request due to subtree migra...
- https://github.com/ceph/ceph/pull/23678
- 02:46 AM Backport #26981 (In Progress): luminous: mds: crash when dumping ops in flight
- https://github.com/ceph/ceph/pull/23677
08/21/2018
- 09:58 PM Feature #26996 (Resolved): cephfs: get capability cache hits by clients to provide introspection ...
- This is potentially useful to identify workloads running on clients.
The idea is that the client will keep track o...
- 09:18 PM Feature #26974 (Fix Under Review): mds: provide mechanism to allow new instance of an application...
- 02:27 PM Feature #26974 (Resolved): mds: provide mechanism to allow new instance of an application to canc...
- This is a subset of the overall project to provide a way for client applications to reclaim state held by a previous ...
- 09:06 PM Feature #24854 (New): mds: if MDS fails internal heartbeat, then debugging should be increased to...
- 06:05 PM Bug #26995 (Fix Under Review): qa: specify distro/kernel matrix to test in kclient qa-suite
- https://github.com/ceph/ceph/pull/23673
- 06:02 PM Bug #26995 (Resolved): qa: specify distro/kernel matrix to test in kclient qa-suite
- Currently we always test the kernel client with `-k testing` via teuthology-suite. We should also test the distributi...
- 04:07 PM Backport #26982 (In Progress): mimic: mds: crash when dumping ops in flight
- 04:01 PM Backport #26982 (Resolved): mimic: mds: crash when dumping ops in flight
- https://github.com/ceph/ceph/pull/23672
- 04:02 PM Backport #26991 (Resolved): mimic: mds: curate priority of perf counters sent to mgr
- https://github.com/ceph/ceph/pull/24467
- 04:02 PM Backport #26990 (Resolved): luminous: mds: curate priority of perf counters sent to mgr
- https://github.com/ceph/ceph/pull/24089
- 04:02 PM Backport #26989 (Resolved): mimic: Implement "cephfs-journal-tool event splice" equivalent for pu...
- https://github.com/ceph/ceph/pull/23818
- 04:02 PM Backport #26988 (Resolved): mimic: mds: explain delayed client_request due to subtree migration
- https://github.com/ceph/ceph/pull/23792
- 04:02 PM Backport #26987 (Resolved): luminous: mds: explain delayed client_request due to subtree migration
- https://github.com/ceph/ceph/pull/23678
- 04:01 PM Backport #26984 (Resolved): mimic: client: requests that do name lookup may be sent to wrong mds
- https://github.com/ceph/ceph/pull/23700
- 04:01 PM Backport #26983 (Resolved): luminous: client: requests that do name lookup may be sent to wrong mds
- https://github.com/ceph/ceph/pull/23793
- 04:01 PM Backport #26981 (Resolved): luminous: mds: crash when dumping ops in flight
- https://github.com/ceph/ceph/pull/23677
- 04:01 PM Backport #26978 (Resolved): mimic: cephfs-data-scan: print the max used ino
- https://github.com/ceph/ceph/pull/23880
- 04:01 PM Backport #26977 (Resolved): luminous: cephfs-data-scan: print the max used ino
- https://github.com/ceph/ceph/pull/23881
- 04:01 PM Backport #26976 (Resolved): mimic: qa: kcephfs suite has kernel build failures
- https://github.com/ceph/ceph/pull/23769
- 03:50 PM Bug #26967 (Pending Backport): qa: kcephfs suite has kernel build failures
- 10:14 AM Bug #26966: nfs-ganesha: epochs out of sync in cluster
- Patch submitted:
https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/422894
- 08:15 AM Bug #26973 (Fix Under Review): mds: MDBalancer::try_rebalance() may stop prematurely
- https://github.com/ceph/ceph/pull/23413
- 08:04 AM Bug #26973 (Resolved): mds: MDBalancer::try_rebalance() may stop prematurely
08/20/2018
- 10:14 PM Bug #24004 (Pending Backport): mds: curate priority of perf counters sent to mgr
- 10:14 PM Bug #26860 (Pending Backport): client: requests that do name lookup may be sent to wrong mds
- 10:13 PM Feature #24604 (Pending Backport): Implement "cephfs-journal-tool event splice" equivalent for pu...
- 10:13 PM Feature #26925 (Pending Backport): cephfs-data-scan: print the max used ino
- 10:12 PM Bug #26894 (Pending Backport): mds: crash when dumping ops in flight
- 10:12 PM Bug #24840 (Pending Backport): mds: explain delayed client_request due to subtree migration
- 10:07 PM Bug #26969 (Need More Info): kclient: mount unexpectedly gets osdmap updates causing test to fail
- ...
- 09:52 PM Bug #26968 (New): klient: mount fails during MDS failover
- ...
- 09:01 PM Backport #26903 (In Progress): mimic: qa: reduce slow warnings arising due to limited testing har...
- 09:00 PM Backport #26956: mimic: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- See also fix for http://tracker.ceph.com/issues/26967
- 09:00 PM Bug #26967 (Fix Under Review): qa: kcephfs suite has kernel build failures
- https://github.com/ceph/ceph/pull/23658
- 08:58 PM Bug #26967 (Resolved): qa: kcephfs suite has kernel build failures
- http://pulpito.ceph.com/pdonnell-2018-08-17_17:43:14-kcephfs-master-testing-basic-smithi/...
- 04:18 PM Bug #26966 (Resolved): nfs-ganesha: epochs out of sync in cluster
- After starting a brand new nfs-ganesha cluster, I found the following set of recovery dbs:...
08/18/2018
- 06:36 PM Bug #26962 (Fix Under Review): mds: use monotonic clock for beacon sender thread waits
- https://github.com/ceph/ceph/pull/23640
- 06:34 PM Bug #26962 (Resolved): mds: use monotonic clock for beacon sender thread waits
- This guarantees that the sender thread cannot be disrupted by system clock changes.
08/17/2018
- 09:19 PM Bug #26961 (Resolved): mds: fix instances of wrongly sending client messages outside of MDSRank::...
- ...
- 07:01 PM Cleanup #26960 (New): mds: avoid modification of const Messages
- PR #22555 (https://github.com/ceph/ceph/pull/22555/) converted all Message handling to use const to avoid potential e...
- 06:55 PM Bug #26959 (Fix Under Review): mds: use monotonic clock for beacon message timekeeping
- https://github.com/ceph/ceph/pull/23635
- 06:54 PM Bug #26959 (Resolved): mds: use monotonic clock for beacon message timekeeping
- 06:32 PM Bug #26874 (Resolved): cephfs-shell: unable to copy files to a sub-directory from local file system.
- 09:33 AM Bug #21014 (Resolved): fs: reduce number of helper debug messages at level 5 for client
- 09:33 AM Backport #24190 (Resolved): luminous: fs: reduce number of helper debug messages at level 5 for c...
- 06:48 AM Backport #26956 (Resolved): mimic: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- https://github.com/ceph/ceph/pull/23769
- 01:49 AM Backport #26915 (In Progress): luminous: handle ceph_ll_close on unmounted filesystem without cra...
- https://github.com/ceph/ceph/pull/23617
08/16/2018
- 11:20 PM Bug #24306 (Resolved): mds: use intrusive_ptr to manage Message life-time
- 09:13 PM Bug #26869: mds: becomes stuck in up:replay during thrashing (not multimds)
- I think the potential right solution here is to make the thrasher more tolerant of slow recovery if the cluster healt...
- 04:57 PM Bug #24679 (Pending Backport): qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- 11:06 AM Backport #26914 (In Progress): mimic: handle ceph_ll_close on unmounted filesystem without crashing
- https://github.com/ceph/ceph/pull/23603
08/15/2018
- 01:00 PM Bug #24840 (Fix Under Review): mds: explain delayed client_request due to subtree migration
- https://github.com/ceph/ceph/pull/23589
- 07:36 AM Bug #26894 (Fix Under Review): mds: crash when dumping ops in flight
- https://github.com/ceph/ceph/pull/23584
08/14/2018
- 04:28 AM Backport #26905 (In Progress): mimic: MDSMonitor: consider raising priority of MMDSBeacons from M...
- https://github.com/ceph/ceph/pull/23565
08/13/2018
- 05:27 PM Backport #26906 (In Progress): luminous: MDSMonitor: consider raising priority of MMDSBeacons fro...
- 05:24 PM Backport #26930 (In Progress): luminous: MDSMonitor: note ignored beacons/map changes at higher d...
- 05:17 PM Backport #26930 (Resolved): luminous: MDSMonitor: note ignored beacons/map changes at higher debu...
- https://github.com/ceph/ceph/pull/23553
- 05:17 PM Backport #26929 (Resolved): mimic: MDSMonitor: note ignored beacons/map changes at higher debug l...
- https://github.com/ceph/ceph/pull/23704
- 05:16 PM Bug #26898 (Pending Backport): MDSMonitor: note ignored beacons/map changes at higher debug level
- 04:31 PM Bug #24679 (In Progress): qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- https://github.com/ceph/ceph/pull/23551
- 01:45 PM Bug #26867 (Closed): client: missing temporary file break rsync
- Bug probably fixed by #23088
- 11:25 AM Bug #26926 (Fix Under Review): mds: migrate strays part by part when shutdown mds
- https://github.com/ceph/ceph/pull/23548
- 11:20 AM Bug #26926 (Resolved): mds: migrate strays part by part when shutdown mds
- 09:18 AM Bug #26874 (Fix Under Review): cephfs-shell: unable to copy files to a sub-directory from local f...
- https://github.com/ceph/ceph/pull/23470
- 02:31 AM Feature #26925 (Fix Under Review): cephfs-data-scan: print the max used ino
- https://github.com/ceph/ceph/pull/23543
- 02:27 AM Feature #26925 (Resolved): cephfs-data-scan: print the max used ino
- 02:16 AM Bug #26900: qa: reduce slow warnings arising due to limited testing hardware
- Follow-up fix: https://github.com/ceph/ceph/pull/23542
- 02:08 AM Feature #26855: cephfs-shell: add support to execute commands from arguments
- https://github.com/ceph/ceph/pull/23444
- 02:08 AM Feature #26855 (Resolved): cephfs-shell: add support to execute commands from arguments
- 02:08 AM Feature #26853 (Resolved): cephfs-shell: add batch file processing