Activity
From 07/27/2018 to 08/25/2018
08/25/2018
- 08:14 PM Feature #25131 (Pending Backport): mds: optimize the way how max export size is enforced
- 08:14 PM Bug #26926 (Pending Backport): mds: migrate strays part by part when shutdown mds
- 07:57 PM Bug #26973 (Pending Backport): mds: MDBalancer::try_rebalance() may stop prematurely
- 07:56 PM Bug #25113 (Pending Backport): mds: allows client to create ".." and "." dirents
- 07:51 PM Bug #26962 (Pending Backport): mds: use monotonic clock for beacon sender thread waits
- 01:02 AM Bug #27237 (Rejected): unix df reports global raw storage as usable storage after mimic 13.2.1 up...
- os: Ubuntu 18.04.1
ceph version: 13.2.1-1
repo: ceph stable for mimic - 'deb https://download.ceph.com/debian-mimic...
08/24/2018
- 10:29 AM Bug #27216: kclient: usable space suddenly decreased
- Zheng Yan wrote:
> what do you mean "decrease"
We had 19 TB of total storage. In total we have 4 OSDs; each has ...
- 09:36 AM Bug #27216: kclient: usable space suddenly decreased
- what do you mean "decrease"
- 06:49 AM Bug #27216 (New): kclient: usable space suddenly decreased
- Hi,
It suddenly happens that usable space in the CephFS mountpoint (client side) decreases, but the cluster's overall storage avail...
- 09:36 AM Documentation #27209 (In Progress): doc: document state of kernel client feature parity with ceph...
- https://github.com/ceph/ceph/pull/23728
- 03:54 AM Backport #25205 (In Progress): luminous: CephVolumeClient: delay required after adding data pool ...
- 03:50 AM Backport #25206 (In Progress): mimic: CephVolumeClient: delay required after adding data pool to ...
08/23/2018
- 09:15 PM Bug #27051 (Fix Under Review): client: cannot list out files created by another ceph-fuse client
- 04:14 PM Documentation #27209 (Resolved): doc: document state of kernel client feature parity with ceph-fuse
- In particular, snapshots are now supported but as of which version? Quotas? etc.
- 08:19 AM Backport #26929 (In Progress): mimic: MDSMonitor: note ignored beacons/map changes at higher debu...
- https://github.com/ceph/ceph/pull/23704
- 08:16 AM Backport #26923 (In Progress): mimic: mds: mds got laggy because of MDSBeacon stuck in mqueue
- https://github.com/ceph/ceph/pull/23703
- 06:28 AM Backport #26984 (In Progress): mimic: client: requests that do name lookup may be sent to wrong mds
08/22/2018
- 01:33 PM Bug #27051: client: cannot list out files created by another ceph-fuse client
- my fix pull request is
https://github.com/ceph/ceph/pull/23691
- 01:22 PM Bug #27051 (Resolved): client: cannot list out files created by another ceph-fuse client
- Recently, in our CephFS (ceph-fuse client) online production environment, I found several rmdir calls fail due to not empty...
- 03:51 AM Backport #26924: luminous: mds: mds got laggy because of MDSBeacon stuck in mqueue
- Apologies Prashant, I forgot to update this ticket with the backport I made.
- 03:10 AM Backport #26924 (In Progress): luminous: mds: mds got laggy because of MDSBeacon stuck in mqueue
- -https://github.com/ceph/ceph/pull/23679-
- 02:53 AM Backport #26987 (In Progress): luminous: mds: explain delayed client_request due to subtree migra...
- https://github.com/ceph/ceph/pull/23678
- 02:46 AM Backport #26981 (In Progress): luminous: mds: crash when dumping ops in flight
- https://github.com/ceph/ceph/pull/23677
08/21/2018
- 09:58 PM Feature #26996 (Resolved): cephfs: get capability cache hits by clients to provide introspection ...
- This is potentially useful to identify workloads running on clients.
The idea is that the client will keep track o...
- 09:18 PM Feature #26974 (Fix Under Review): mds: provide mechanism to allow new instance of an application...
- 02:27 PM Feature #26974 (Resolved): mds: provide mechanism to allow new instance of an application to canc...
- This is a subset of the overall project to provide a way for client applications to reclaim state held by a previous ...
- 09:06 PM Feature #24854 (New): mds: if MDS fails internal heartbeat, then debugging should be increased to...
- 06:05 PM Bug #26995 (Fix Under Review): qa: specify distro/kernel matrix to test in kclient qa-suite
- https://github.com/ceph/ceph/pull/23673
- 06:02 PM Bug #26995 (Resolved): qa: specify distro/kernel matrix to test in kclient qa-suite
- Currently we always test the kernel client with `-k testing` via teuthology-suite. We should also test the distributi...
- 04:07 PM Backport #26982 (In Progress): mimic: mds: crash when dumping ops in flight
- 04:01 PM Backport #26982 (Resolved): mimic: mds: crash when dumping ops in flight
- https://github.com/ceph/ceph/pull/23672
- 04:02 PM Backport #26991 (Resolved): mimic: mds: curate priority of perf counters sent to mgr
- https://github.com/ceph/ceph/pull/24467
- 04:02 PM Backport #26990 (Resolved): luminous: mds: curate priority of perf counters sent to mgr
- https://github.com/ceph/ceph/pull/24089
- 04:02 PM Backport #26989 (Resolved): mimic: Implement "cephfs-journal-tool event splice" equivalent for pu...
- https://github.com/ceph/ceph/pull/23818
- 04:02 PM Backport #26988 (Resolved): mimic: mds: explain delayed client_request due to subtree migration
- https://github.com/ceph/ceph/pull/23792
- 04:02 PM Backport #26987 (Resolved): luminous: mds: explain delayed client_request due to subtree migration
- https://github.com/ceph/ceph/pull/23678
- 04:01 PM Backport #26984 (Resolved): mimic: client: requests that do name lookup may be sent to wrong mds
- https://github.com/ceph/ceph/pull/23700
- 04:01 PM Backport #26983 (Resolved): luminous: client: requests that do name lookup may be sent to wrong mds
- https://github.com/ceph/ceph/pull/23793
- 04:01 PM Backport #26981 (Resolved): luminous: mds: crash when dumping ops in flight
- https://github.com/ceph/ceph/pull/23677
- 04:01 PM Backport #26978 (Resolved): mimic: cephfs-data-scan: print the max used ino
- https://github.com/ceph/ceph/pull/23880
- 04:01 PM Backport #26977 (Resolved): luminous: cephfs-data-scan: print the max used ino
- https://github.com/ceph/ceph/pull/23881
- 04:01 PM Backport #26976 (Resolved): mimic: qa: kcephfs suite has kernel build failures
- https://github.com/ceph/ceph/pull/23769
- 03:50 PM Bug #26967 (Pending Backport): qa: kcephfs suite has kernel build failures
- 10:14 AM Bug #26966: nfs-ganesha: epochs out of sync in cluster
- Patch submitted:
https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/422894
- 08:15 AM Bug #26973 (Fix Under Review): mds: MDBalancer::try_rebalance() may stop prematurely
- https://github.com/ceph/ceph/pull/23413
- 08:04 AM Bug #26973 (Resolved): mds: MDBalancer::try_rebalance() may stop prematurely
08/20/2018
- 10:14 PM Bug #24004 (Pending Backport): mds: curate priority of perf counters sent to mgr
- 10:14 PM Bug #26860 (Pending Backport): client: requests that do name lookup may be sent to wrong mds
- 10:13 PM Feature #24604 (Pending Backport): Implement "cephfs-journal-tool event splice" equivalent for pu...
- 10:13 PM Feature #26925 (Pending Backport): cephfs-data-scan: print the max used ino
- 10:12 PM Bug #26894 (Pending Backport): mds: crash when dumping ops in flight
- 10:12 PM Bug #24840 (Pending Backport): mds: explain delayed client_request due to subtree migration
- 10:07 PM Bug #26969 (Need More Info): kclient: mount unexpectedly gets osdmap updates causing test to fail
- ...
- 09:52 PM Bug #26968 (New): kclient: mount fails during MDS failover
- ...
- 09:01 PM Backport #26903 (In Progress): mimic: qa: reduce slow warnings arising due to limited testing har...
- 09:00 PM Backport #26956: mimic: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- See also fix for http://tracker.ceph.com/issues/26967
- 09:00 PM Bug #26967 (Fix Under Review): qa: kcephfs suite has kernel build failures
- https://github.com/ceph/ceph/pull/23658
- 08:58 PM Bug #26967 (Resolved): qa: kcephfs suite has kernel build failures
- http://pulpito.ceph.com/pdonnell-2018-08-17_17:43:14-kcephfs-master-testing-basic-smithi/...
- 04:18 PM Bug #26966 (Resolved): nfs-ganesha: epochs out of sync in cluster
- After starting a brand new nfs-ganesha cluster, I found the following set of recovery dbs:...
08/18/2018
- 06:36 PM Bug #26962 (Fix Under Review): mds: use monotonic clock for beacon sender thread waits
- https://github.com/ceph/ceph/pull/23640
- 06:34 PM Bug #26962 (Resolved): mds: use monotonic clock for beacon sender thread waits
- This guarantees that the sender thread cannot be disrupted by system clock changes.
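For illustration of the technique (a minimal generic C++ sketch, not the actual MDS Beacon code), waiting on a std::chrono::steady_clock deadline keeps a periodic sender thread immune to wall-clock steps:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool stop = false;  // set under the lock to shut the sender down

// Hypothetical sender loop: wait_until() with a steady_clock time_point
// cannot fire early or late when NTP or an admin steps the system clock,
// unlike a system_clock (wall-clock) deadline.
void sender_loop(std::chrono::milliseconds interval) {
  std::unique_lock<std::mutex> lock(m);
  while (!stop) {
    auto deadline = std::chrono::steady_clock::now() + interval;
    cv.wait_until(lock, deadline, [] { return stop; });
    if (stop)
      break;
    // send_beacon();  // placeholder for the periodic work
  }
}
```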
08/17/2018
- 09:19 PM Bug #26961 (Resolved): mds: fix instances of wrongly sending client messages outside of MDSRank::...
- ...
- 07:01 PM Cleanup #26960 (New): mds: avoid modification of const Messages
- PR #22555 (https://github.com/ceph/ceph/pull/22555/) converted all Message handling to use const to avoid potential e...
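As a generic illustration of the hazard this cleanup targets (plain C++, not Ceph's Message types): passing shared messages by reference-to-const turns accidental mutation into a compile-time error:

```cpp
// A handler taking a non-const reference can silently mutate a message
// that other dispatchers still hold; a const reference makes the same
// write fail to compile.
struct Message { int seq = 0; };

void handle_mutable(Message& m) { m.seq = 42; }  // compiles, mutates shared state

void handle_const(const Message& m) {
  // m.seq = 42;  // error: assignment of member 'seq' in read-only object
  (void)m;
}
```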
- 06:55 PM Bug #26959 (Fix Under Review): mds: use monotonic clock for beacon message timekeeping
- https://github.com/ceph/ceph/pull/23635
- 06:54 PM Bug #26959 (Resolved): mds: use monotonic clock for beacon message timekeeping
- 06:32 PM Bug #26874 (Resolved): cephfs-shell: unable to copy files to a sub-directory from local file system.
- 09:33 AM Bug #21014 (Resolved): fs: reduce number of helper debug messages at level 5 for client
- 09:33 AM Backport #24190 (Resolved): luminous: fs: reduce number of helper debug messages at level 5 for c...
- 06:48 AM Backport #26956 (Resolved): mimic: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- https://github.com/ceph/ceph/pull/23769
- 01:49 AM Backport #26915 (In Progress): luminous: handle ceph_ll_close on unmounted filesystem without cra...
- https://github.com/ceph/ceph/pull/23617
08/16/2018
- 11:20 PM Bug #24306 (Resolved): mds: use intrusive_ptr to manage Message life-time
- 09:13 PM Bug #26869: mds: becomes stuck in up:replay during thrashing (not multimds)
- I think the potential right solution here is to make the thrasher more tolerant of slow recovery if the cluster healt...
- 04:57 PM Bug #24679 (Pending Backport): qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- 11:06 AM Backport #26914 (In Progress): mimic: handle ceph_ll_close on unmounted filesystem without crashing
- https://github.com/ceph/ceph/pull/23603
08/15/2018
- 01:00 PM Bug #24840 (Fix Under Review): mds: explain delayed client_request due to subtree migration
- https://github.com/ceph/ceph/pull/23589
- 07:36 AM Bug #26894 (Fix Under Review): mds: crash when dumping ops in flight
- https://github.com/ceph/ceph/pull/23584
08/14/2018
- 04:28 AM Backport #26905 (In Progress): mimic: MDSMonitor: consider raising priority of MMDSBeacons from M...
- https://github.com/ceph/ceph/pull/23565
08/13/2018
- 05:27 PM Backport #26906 (In Progress): luminous: MDSMonitor: consider raising priority of MMDSBeacons fro...
- 05:24 PM Backport #26930 (In Progress): luminous: MDSMonitor: note ignored beacons/map changes at higher d...
- 05:17 PM Backport #26930 (Resolved): luminous: MDSMonitor: note ignored beacons/map changes at higher debu...
- https://github.com/ceph/ceph/pull/23553
- 05:17 PM Backport #26929 (Resolved): mimic: MDSMonitor: note ignored beacons/map changes at higher debug l...
- https://github.com/ceph/ceph/pull/23704
- 05:16 PM Bug #26898 (Pending Backport): MDSMonitor: note ignored beacons/map changes at higher debug level
- 04:31 PM Bug #24679 (In Progress): qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- https://github.com/ceph/ceph/pull/23551
- 01:45 PM Bug #26867 (Closed): client: missing temporary file break rsync
- Bug probably fixed by PR #23088
- 11:25 AM Bug #26926 (Fix Under Review): mds: migrate strays part by part when shutdown mds
- https://github.com/ceph/ceph/pull/23548
- 11:20 AM Bug #26926 (Resolved): mds: migrate strays part by part when shutdown mds
- 09:18 AM Bug #26874 (Fix Under Review): cephfs-shell: unable to copy files to a sub-directory from local f...
- https://github.com/ceph/ceph/pull/23470
- 02:31 AM Feature #26925 (Fix Under Review): cephfs-data-scan: print the max used ino
- https://github.com/ceph/ceph/pull/23543
- 02:27 AM Feature #26925 (Resolved): cephfs-data-scan: print the max used ino
- 02:16 AM Bug #26900: qa: reduce slow warnings arising due to limited testing hardware
- Follow-up fix: https://github.com/ceph/ceph/pull/23542
- 02:08 AM Feature #26855: cephfs-shell: add support to execute commands from arguments
- https://github.com/ceph/ceph/pull/23444
- 02:08 AM Feature #26855 (Resolved): cephfs-shell: add support to execute commands from arguments
- 02:08 AM Feature #26853 (Resolved): cephfs-shell: add batch file processing
08/12/2018
- 09:25 PM Feature #24444 (Resolved): cephfs: make InodeStat, DirStat, LeaseStat versioned
- 09:17 PM Backport #26924 (Resolved): luminous: mds: mds got laggy because of MDSBeacon stuck in mqueue
- https://github.com/ceph/ceph/pull/23556
- 09:17 PM Backport #26923 (Resolved): mimic: mds: mds got laggy because of MDSBeacon stuck in mqueue
- https://github.com/ceph/ceph/pull/23703
- 09:16 PM Backport #26915 (Resolved): luminous: handle ceph_ll_close on unmounted filesystem without crashing
- https://github.com/ceph/ceph/pull/23617
- 09:16 PM Backport #26914 (Resolved): mimic: handle ceph_ll_close on unmounted filesystem without crashing
- https://github.com/ceph/ceph/pull/23603
- 09:16 PM Backport #26906 (Resolved): luminous: MDSMonitor: consider raising priority of MMDSBeacons from M...
- https://github.com/ceph/ceph/pull/23554
- 09:15 PM Backport #26905 (Resolved): mimic: MDSMonitor: consider raising priority of MMDSBeacons from MDS ...
- https://github.com/ceph/ceph/pull/23565
- 09:15 PM Backport #26904 (Resolved): luminous: qa: reduce slow warnings arising due to limited testing har...
- https://github.com/ceph/ceph/pull/23877
- 09:15 PM Backport #26903 (Resolved): mimic: qa: reduce slow warnings arising due to limited testing hardware
- https://github.com/ceph/ceph/pull/23659
- 09:15 PM Bug #26899 (Pending Backport): MDSMonitor: consider raising priority of MMDSBeacons from MDS so t...
- 09:14 PM Bug #23519 (Pending Backport): mds: mds got laggy because of MDSBeacon stuck in mqueue
- We've merged the fast dispatch fix but I want to point out for the record that the beacon replies from the monitors a...
- 09:10 PM Bug #26900 (Pending Backport): qa: reduce slow warnings arising due to limited testing hardware
- 12:02 AM Bug #26900: qa: reduce slow warnings arising due to limited testing hardware
- https://github.com/ceph/ceph/pull/23457
- 12:00 AM Bug #26900 (Resolved): qa: reduce slow warnings arising due to limited testing hardware
- Bump thresholds for slow ops/requests to be reported.
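For context, these thresholds are ordinary config options; a hedged sketch of what such a bump could look like in ceph.conf (the values below are illustrative, and the actual PR may adjust different QA-suite overrides):

```
[mds]
# slow-request complaint threshold, default 30 seconds (illustrative value)
mds_op_complaint_time = 180

[osd]
# slow-op complaint threshold, default 30 seconds (illustrative value)
osd_op_complaint_time = 180
```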
- 05:38 PM Bug #26901 (New): mds: no throttlers set on incoming messages
- This means aggressive clients can consume unbounded mds memory.
See mon/mgr/osd throttlers for comparison:...
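A minimal sketch of the throttling pattern referenced above (illustrative C++, not Ceph's actual Throttle class): dispatch acquires a byte budget before handling each message and releases it afterwards, so an aggressive client blocks instead of pinning unbounded memory:

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>

// Simplified byte-budget throttler (assumes each message is smaller than
// the total budget). get() blocks while too many message bytes are in
// flight; put() releases the budget once a message has been processed.
class DispatchThrottle {
  std::mutex m;
  std::condition_variable cv;
  const uint64_t max_bytes;
  uint64_t in_flight = 0;

public:
  explicit DispatchThrottle(uint64_t max) : max_bytes(max) {}

  void get(uint64_t bytes) {  // call before dispatching a message
    std::unique_lock<std::mutex> l(m);
    cv.wait(l, [&] { return in_flight + bytes <= max_bytes; });
    in_flight += bytes;
  }

  void put(uint64_t bytes) {  // call when the message has been handled
    std::lock_guard<std::mutex> l(m);
    in_flight -= bytes;
    cv.notify_all();
  }
};
```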
08/11/2018
- 11:59 PM Bug #24520 (Duplicate): "[WRN] MDS health message (mds.0): 2 slow requests are blocked > 30 sec""...
- 11:35 PM Bug #26852 (Resolved): cephfs-shell: add CMake directives to install the shell
- 11:13 PM Bug #26899 (Fix Under Review): MDSMonitor: consider raising priority of MMDSBeacons from MDS so t...
- https://github.com/ceph/ceph/pull/23538
- 05:55 PM Bug #26899 (Resolved): MDSMonitor: consider raising priority of MMDSBeacons from MDS so they are ...
- It's possible MDS beacons can get stuck in the queue long enough for an MDS to be removed from the MDSMap; increase t...
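To illustrate why message priority matters here (a self-contained C++ sketch, not the monitor's dispatch code; the numeric values mirror Ceph's conventional 127 default and 196 high priorities):

```cpp
#include <iostream>
#include <queue>
#include <string>
#include <vector>

// Messages tagged with a higher priority are popped first, so a beacon
// marked high cannot be starved behind a backlog of ordinary traffic.
struct Msg {
  int priority;
  std::string name;
};

struct ByPriority {
  bool operator()(const Msg& a, const Msg& b) const {
    return a.priority < b.priority;  // max-heap on priority
  }
};

int main() {
  std::priority_queue<Msg, std::vector<Msg>, ByPriority> q;
  q.push({127, "client_request"});
  q.push({196, "mds_beacon"});  // dispatched ahead of the requests
  q.push({127, "client_request"});
  while (!q.empty()) {
    std::cout << q.top().name << '\n';
    q.pop();
  }
}
```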
- 05:58 PM Bug #19706 (Can't reproduce): Laggy mon daemons causing MDS failover (symptom: failed to set coun...
- Last instance almost a year ago:
tail -n 100 /a/*2017-*fs*master*/*/teu* | grep -C150 'The following counters fail...
- 05:49 PM Bug #26898 (Fix Under Review): MDSMonitor: note ignored beacons/map changes at higher debug level
- https://github.com/ceph/ceph/pull/23536
- 01:15 AM Bug #26898 (Resolved): MDSMonitor: note ignored beacons/map changes at higher debug level
- https://github.com/ceph/ceph/blob/c6eb1c4a5aae28ded5cd30fb92c3e981bcbb9246/src/mon/MDSMonitor.cc#L402-L406
This ma...
08/10/2018
- 09:47 AM Bug #23519 (Fix Under Review): mds: mds got laggy because of MDSBeacon stuck in mqueue
- https://github.com/ceph/ceph/pull/23527
- 07:14 AM Bug #25228 (Resolved): mds: recovering mds receive export_cancel message
- 07:14 AM Backport #26836 (Closed): mimic: mds: recovering mds receive export_cancel message
- PR https://github.com/ceph/ceph/pull/23180, which caused the regression, hasn't been merged; I have added the fix to tha...
- 12:50 AM Bug #26894 (Resolved): mds: crash when dumping ops in flight
- http://perf1.perf.lab.eng.bos.redhat.com/pub/bengland/public/ceph/linode/logs/smf-2018-08-08-20-22/mds-fail/mds0/ceph...
08/09/2018
- 09:16 AM Backport #26888: mimic: mds: use self CPU usage to calculate load
- https://github.com/ceph/ceph/pull/23503
- 09:01 AM Backport #26888 (In Progress): mimic: mds: use self CPU usage to calculate load
- https://github.com/ceph/ceph/pull/23503
- 08:59 AM Backport #26888 (Resolved): mimic: mds: use self CPU usage to calculate load
- https://github.com/ceph/ceph/pull/23503
- 09:16 AM Backport #26889: luminous: mds: use self CPU usage to calculate load
- https://github.com/ceph/ceph/pull/23505
- 09:04 AM Backport #26889 (In Progress): luminous: mds: use self CPU usage to calculate load
- 09:03 AM Backport #26889 (Resolved): luminous: mds: use self CPU usage to calculate load
- https://github.com/ceph/ceph/pull/23505
- 09:15 AM Backport #26886: mimic: mds: reset heartbeat map at potential time-consuming places
- https://github.com/ceph/ceph/pull/23506
- 08:49 AM Backport #26886 (In Progress): mimic: mds: reset heartbeat map at potential time-consuming places
- 08:12 AM Backport #26886 (Resolved): mimic: mds: reset heartbeat map at potential time-consuming places
- https://github.com/ceph/ceph/pull/23506
- 09:14 AM Backport #26885: luminous: mds: reset heartbeat map at potential time-consuming places
- https://github.com/ceph/ceph/pull/23507
- 08:49 AM Backport #26885 (In Progress): luminous: mds: reset heartbeat map at potential time-consuming places
- 08:12 AM Backport #26885 (Resolved): luminous: mds: reset heartbeat map at potential time-consuming places
- https://github.com/ceph/ceph/pull/23507
- 08:10 AM Backport #25048 (Resolved): luminous: mds may get discontinuous mdsmap
- 04:42 AM Bug #26852 (Fix Under Review): cephfs-shell: add CMake directives to install the shell
- https://github.com/ceph/ceph/pull/23500
08/08/2018
- 04:38 PM Feature #26853: cephfs-shell: add batch file processing
- https://github.com/ceph/ceph/pull/23444
08/07/2018
- 01:16 PM Bug #26874 (Resolved): cephfs-shell: unable to copy files to a sub-directory from local file system.
- The put command does not allow copying files to sub-directories. This is not working....
- 01:16 PM Bug #25113 (Fix Under Review): mds: allows client to create ".." and "." dirents
- PR: https://github.com/ceph/ceph/pull/23469
- 11:52 AM Bug #24030 (In Progress): ceph-fuse: double dash meaning
- 10:12 AM Feature #24604: Implement "cephfs-journal-tool event splice" equivalent for purge queue
- I've just added a new PR: https://github.com/ceph/ceph/pull/23467, please take a look
- 09:06 AM Feature #24604 (New): Implement "cephfs-journal-tool event splice" equivalent for purge queue
- commit reverted
- 09:35 AM Bug #26869: mds: becomes stuck in up:replay during thrashing (not multimds)
- -1148> 2018-08-05 16:23:02.524 7fd0f6516700 1 -- 172.21.15.26:6817/673291871 --> 172.21.15.26:6813/22088 -- osd_op(...
- 08:42 AM Feature #26853 (Fix Under Review): cephfs-shell: add batch file processing
- 08:16 AM Bug #26867: client: missing temporary file break rsync
- likely fixed by https://github.com/ceph/ceph/pull/23088/commits/b17602a00c2519278e41ca1d7fa7b3cf341b7c17. No idea why...
- 08:01 AM Bug #26865 (Duplicate): mds: src/mds/CInode.cc: 2330: FAILED assert(!"unmatched rstat" == g_conf(...
- dup of #26865
08/06/2018
- 08:27 PM Backport #25033: luminous: "Health check failed: 1 MDSs report slow requests (MDS_SLOW_REQUEST)" ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23155
merged
- 06:47 PM Bug #26858: mds: reset heartbeat map at potential time-consuming places
- https://github.com/ceph/ceph/pull/22999
- 06:45 PM Bug #26858 (Pending Backport): mds: reset heartbeat map at potential time-consuming places
- 03:32 AM Bug #26858 (Fix Under Review): mds: reset heartbeat map at potential time-consuming places
- https://github.com/ceph/ceph/pull/22999
- 03:29 AM Bug #26858 (Resolved): mds: reset heartbeat map at potential time-consuming places
- 06:45 PM Feature #24604 (Pending Backport): Implement "cephfs-journal-tool event splice" equivalent for pu...
- 06:45 PM Bug #26834 (Pending Backport): mds: use self CPU usage to calculate load
- 01:47 PM Bug #26834 (Fix Under Review): mds: use self CPU usage to calculate load
- 06:44 PM Bug #25213 (Pending Backport): handle ceph_ll_close on unmounted filesystem without crashing
- 01:47 PM Bug #25213 (Fix Under Review): handle ceph_ll_close on unmounted filesystem without crashing
- https://github.com/ceph/ceph/pull/23370
- 06:35 PM Bug #26869 (New): mds: becomes stuck in up:replay during thrashing (not multimds)
- ...
- 05:23 PM Bug #26867 (Closed): client: missing temporary file break rsync
- ...
- 05:03 PM Backport #26833 (Resolved): luminous: mds: recovering mds receive export_cancel message
- 04:27 PM Backport #26833: luminous: mds: recovering mds receive export_cancel message
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23169
merged
- 04:29 PM Backport #24190: luminous: fs: reduce number of helper debug messages at level 5 for client
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23014
merged
- 04:29 PM Bug #26865 (Duplicate): mds: src/mds/CInode.cc: 2330: FAILED assert(!"unmatched rstat" == g_conf(...
- ...
- 04:27 PM Backport #25048: luminous: mds may get discontinuous mdsmap
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23169
merged
- 03:45 PM Bug #24177: qa: fsstress workunit does not execute in parallel on same host without clobbering files
- /ceph/teuthology-archive/yuriw-2018-08-04_04:21:34-multimds-wip-yuri5-testing-2018-08-03-2359-luminous-testing-basic-...
- 03:42 PM Bug #26863 (Can't reproduce): qa: test_full_different_file "dd: error writing 'large_file': No sp...
- ...
- 01:48 PM Bug #26860 (Fix Under Review): client: requests that do name lookup may be sent to wrong mds
- https://github.com/ceph/ceph/pull/23438
- 07:08 AM Bug #26860 (Resolved): client: requests that do name lookup may be sent to wrong mds
- 01:45 PM Bug #25113 (In Progress): mds: allows client to create ".." and "." dirents
- 01:45 PM Bug #25099 (Closed): mds: don't use dispatch queue length to calculate mds load
- 07:45 AM Feature #25188 (Fix Under Review): mds: configurable timeout for client eviction
- PR: https://github.com/ceph/ceph/pull/23439
- 04:04 AM Backport #26836 (In Progress): mimic: mds: recovering mds receive export_cancel message
- https://github.com/ceph/ceph/pull/23436
08/03/2018
- 09:56 PM Backport #26851: luminous: ceph_volume_client: py3 compatible
- Needs https://github.com/ceph/ceph/pull/23411 as well.
- 04:04 PM Backport #26851 (Resolved): luminous: ceph_volume_client: py3 compatible
- https://github.com/ceph/ceph/pull/24083
- 09:28 PM Backport #24929 (In Progress): luminous: qa: test_recovery_pool tries asok on wrong node
- 09:26 PM Backport #25041 (Resolved): luminous: mds: reduce debugging for missing inodes during subtree mig...
- 09:04 PM Backport #25041: luminous: mds: reduce debugging for missing inodes during subtree migration
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23214
merged
- 09:25 PM Backport #25039 (Resolved): luminous: mds: dump recent (memory) log messages before respawning du...
- 09:05 PM Backport #25039: luminous: mds: dump recent (memory) log messages before respawning due to being ...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23213
merged
- 09:24 PM Backport #25036 (Resolved): luminous: mds: dump MDSMap epoch to log at low debug
- 09:06 PM Backport #25036: luminous: mds: dump MDSMap epoch to log at low debug
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23212
merged
- 09:24 PM Bug #24467 (Resolved): mds: low wrlock efficiency due to dirfrags traversal
- 09:23 PM Backport #24696 (Resolved): luminous: mds: low wrlock efficiency due to dirfrags traversal
- 09:11 PM Backport #24696: luminous: mds: low wrlock efficiency due to dirfrags traversal
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22885
merged
- 09:20 PM Bug #23768 (Resolved): MDSMonitor: uncommitted state exposed to clients/mdss
- 09:20 PM Backport #24136 (Resolved): luminous: MDSMonitor: uncommitted state exposed to clients/mdss
- 09:10 PM Backport #24136: luminous: MDSMonitor: uncommitted state exposed to clients/mdss
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23013
merged
- 09:20 PM Backport #23790 (Resolved): luminous: mds: crash during shutdown_pass
- 09:20 PM Bug #23766 (Resolved): mds: crash during shutdown_pass
- 09:09 PM Bug #23766: mds: crash during shutdown_pass
- https://github.com/ceph/ceph/pull/23015 merged
- 06:11 PM Bug #26854 (Resolved): cephfs-shell: add support to set the ceph.conf file via command line argument
- This was completed as part of the original cephfs-shell merge, but the argument parsing still needs to be refactored to supp...
- 04:37 PM Bug #26854 (Resolved): cephfs-shell: add support to set the ceph.conf file via command line argument
- For example:...
- 04:45 PM Feature #24286: tools: create CephFS shell
- Followup fix: https://github.com/ceph/ceph/pull/23417
- 04:24 PM Feature #24286 (Resolved): tools: create CephFS shell
- 04:37 PM Feature #26855 (Resolved): cephfs-shell: add support to execute commands from arguments
- It would be convenient to support running simple commands via the arguments to the cephfs-shell:...
- 04:29 PM Feature #26853 (Resolved): cephfs-shell: add batch file processing
- The shell should be able to process a batch file instead of interactive commands. For example:...
- 04:26 PM Bug #26852 (Resolved): cephfs-shell: add CMake directives to install the shell
- Shell PR: https://github.com/ceph/ceph/pull/23158
- 04:04 PM Backport #26850 (Resolved): mimic: ceph_volume_client: py3 compatible
- https://github.com/ceph/ceph/pull/24443
- 04:02 PM Backport #26836 (Closed): mimic: mds: recovering mds receive export_cancel message
- https://github.com/ceph/ceph/pull/23436
- 03:22 PM Backport #24717: luminous: blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-smithi
- Abhishek Lekshmanan wrote:
> https://github.com/ceph/ceph/pull/22774
merged
- 12:02 PM Feature #24444 (Fix Under Review): cephfs: make InodeStat, DirStat, LeaseStat versioned
- 08:08 AM Bug #26834: mds: use self CPU usage to calculate load
- https://github.com/ceph/ceph/pull/23341
- 08:06 AM Bug #26834 (Resolved): mds: use self CPU usage to calculate load
- The current code uses system CPU load to calculate the mds load. That is inaccurate when the machine runs multiple tasks.
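A minimal sketch of the per-process alternative (POSIX C++; illustrative only, not the MDBalancer change): sample the process's own CPU time via getrusage() across a wall-clock interval instead of reading system-wide load:

```cpp
#include <sys/resource.h>
#include <sys/time.h>

#include <chrono>
#include <iostream>
#include <thread>

// CPU seconds (user + system) consumed by this process so far.
static double self_cpu_seconds() {
  struct rusage ru;
  getrusage(RUSAGE_SELF, &ru);
  auto secs = [](const timeval& t) { return t.tv_sec + t.tv_usec / 1e6; };
  return secs(ru.ru_utime) + secs(ru.ru_stime);
}

int main() {
  double cpu_before = self_cpu_seconds();
  auto wall_before = std::chrono::steady_clock::now();

  std::this_thread::sleep_for(std::chrono::seconds(1));  // stand-in for work

  double cpu = self_cpu_seconds() - cpu_before;
  double wall = std::chrono::duration<double>(
      std::chrono::steady_clock::now() - wall_before).count();
  std::cout << "self CPU utilization: " << cpu / wall << '\n';
}
```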
- 07:39 AM Backport #25043 (In Progress): luminous: overhead of g_conf->get_val<type>("config name") is high
- 07:39 AM Backport #25043: luminous: overhead of g_conf->get_val<type>("config name") is high
- https://github.com/ceph/ceph/pull/23408
- 07:05 AM Backport #25044 (In Progress): mimic: overhead of g_conf->get_val<type>("config name") is high
- https://github.com/ceph/ceph/pull/23407
- 01:28 AM Backport #26833 (In Progress): luminous: mds: recovering mds receive export_cancel message
- pushed fix to original PR https://github.com/ceph/ceph/pull/23169
08/02/2018
- 09:07 PM Backport #26833 (Resolved): luminous: mds: recovering mds receive export_cancel message
- https://github.com/ceph/ceph/pull/23169
- 09:05 PM Bug #25228 (Pending Backport): mds: recovering mds receive export_cancel message
- 02:15 AM Bug #25228 (Fix Under Review): mds: recovering mds receive export_cancel message
- https://github.com/ceph/ceph/pull/23381
- 02:00 AM Bug #25228: mds: recovering mds receive export_cancel message
- http://pulpito.ceph.com/pdonnell-2018-08-01_19:07:35-multimds-wip-pdonnell-testing-20180801.165617-testing-basic-smit...
- 02:00 AM Bug #25228 (Resolved): mds: recovering mds receive export_cancel message
- 05:55 PM Backport #25046: luminous: mds: create health warning if we detect metadata (journal) writes are ...
- @prashant, you can just skip OpenFileTable.cc patches.
- 03:58 PM Feature #26831 (New): mds: consider subtree setting to enable LAZYIO automatically (RFC)
- See discussion here: https://github.com/ceph/ceph/pull/22450#issuecomment-400614296
This should be one avenue to e...
- 04:43 AM Feature #17230 (Pending Backport): ceph_volume_client: py3 compatible
- 04:43 AM Feature #20598 (Resolved): mds: revisit LAZY_IO
- 04:42 AM Bug #25215 (Resolved): mds: changing mds_cache_memory_limit causes boost::bad_get exception
08/01/2018
- 04:54 PM Cleanup #25111 (Resolved): mds: use vector to manage Contexts rather than a list
- 03:06 PM Bug #25215 (Fix Under Review): mds: changing mds_cache_memory_limit causes boost::bad_get exception
- https://github.com/ceph/ceph/pull/23365
- 02:16 PM Bug #25215 (Resolved): mds: changing mds_cache_memory_limit causes boost::bad_get exception
- ...
- 12:57 PM Bug #25213 (Resolved): handle ceph_ll_close on unmounted filesystem without crashing
- Client::_unmount will unmap and tear down all of the open Fh objects before returning. Programs that use the lowlevel...
- 09:02 AM Feature #25131: mds: optimize the way how max export size is enforced
- When importing a large subtree, the mds can spend a long time sending cap import messages.
Importer...
07/31/2018
- 10:45 PM Backport #25206 (Resolved): mimic: CephVolumeClient: delay required after adding data pool to MDSMap
- https://github.com/ceph/ceph/pull/23725
- 10:45 PM Backport #25205 (Resolved): luminous: CephVolumeClient: delay required after adding data pool to ...
- https://github.com/ceph/ceph/pull/23726
- 10:35 PM Feature #25188: mds: configurable timeout for client eviction
- Changed wording.
- 04:18 AM Feature #25188 (Resolved): mds: configurable timeout for client eviction
- If a client refuses to release caps, the MDS should allow for a configurable timeout (default unlimited) where it aut...
- 09:51 PM Feature #24430 (Resolved): libcephfs: provide API to change umask
- 08:48 AM Bug #24307 (Resolved): pjd: cd: too many arguments
- 08:48 AM Backport #24311 (Resolved): luminous: pjd: cd: too many arguments
- 08:47 AM Bug #24579 (Resolved): client: returning garbage (?) for readdir
- 08:47 AM Bug #24680 (Resolved): qa: iogen.sh: line 7: cd: too many arguments
- 08:47 AM Backport #24718 (Resolved): luminous: client: returning garbage (?) for readdir
- 08:47 AM Backport #24828 (Resolved): luminous: qa: iogen.sh: line 7: cd: too many arguments
- 08:46 AM Bug #24239 (Resolved): cephfs-journal-tool: Importing a zero-length purge_queue journal breaks it...
- 08:46 AM Backport #24860 (Resolved): luminous: cephfs-journal-tool: Importing a zero-length purge_queue jo...
- 08:46 AM Bug #22683 (Resolved): client: coredump when nfs-ganesha use ceph_ll_get_inode()
- 08:45 AM Backport #23015 (Resolved): luminous: client: coredump when nfs-ganesha use ceph_ll_get_inode()
- 08:45 AM Backport #24932 (Resolved): luminous: client: put instance/addr information in status asok command
- 08:44 AM Backport #25038 (Resolved): luminous: mds: scrub doesn't always return JSON results
- 08:44 AM Bug #24440 (Resolved): common/DecayCounter: set last_decay to current time when decoding decay co...
- 08:44 AM Backport #24538 (Resolved): luminous: common/DecayCounter: set last_decay to current time when de...
- 08:11 AM Bug #24849 (Fix Under Review): client: statfs inode count odd
- 07:12 AM Backport #25045 (In Progress): mimic: mds: create health warning if we detect metadata (journal) ...
- https://github.com/ceph/ceph/pull/23343
- 02:40 AM Backport #25046 (Need More Info): luminous: mds: create health warning if we detect metadata (jou...
- Some files are missing in the luminous branch, e.g. src/mds/OpenFileTable.cc
$ git status
On branch wip-25046-luminous...
07/30/2018
- 11:11 PM Bug #25141 (Pending Backport): CephVolumeClient: delay required after adding data pool to MDSMap
- 09:28 PM Backport #25044: mimic: overhead of g_conf->get_val<type>("config name") is high
- Zheng, PTAL.
- 05:30 AM Backport #25044 (Need More Info): mimic: overhead of g_conf->get_val<type>("config name") is high
- The ConfigProxy is not defined in mimic and needs to be backported:
In file included from /home/pdvian/backport/ceph...
- 08:56 PM Bug #24052 (Resolved): repeated eviction of idle client until some IO happens
- 08:55 PM Backport #24295 (Resolved): luminous: repeated eviction of idle client until some IO happens
- 08:43 PM Backport #24295: luminous: repeated eviction of idle client until some IO happens
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22780
merged
- 08:55 PM Bug #22428 (Resolved): mds: don't report slow request for blocked filelock request
- 08:54 PM Backport #23989 (Resolved): luminous: mds: don't report slow request for blocked filelock request
- 08:41 PM Backport #23989: luminous: mds: don't report slow request for blocked filelock request
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22782
merged
- 08:54 PM Bug #24269 (Resolved): multimds pjd open test fails
- 08:53 PM Backport #24540 (Resolved): luminous: multimds pjd open test fails
- 08:39 PM Backport #24540: luminous: multimds pjd open test fails
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22783
merged
- 08:52 PM Backport #24535 (Resolved): luminous: client: _ll_drop_pins travel inode_map may access invalid ‘...
- 08:48 PM Backport #24694 (Resolved): luminous: PurgeQueue sometimes ignores Journaler errors
- 08:37 PM Backport #24694: luminous: PurgeQueue sometimes ignores Journaler errors
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22811
merged
- 08:48 PM Backport #24311: luminous: pjd: cd: too many arguments
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22883
merged
Reviewed-by: Patrick Donnelly <pdonnell@redh...
- 08:47 PM Backport #24718: luminous: client: returning garbage (?) for readdir
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22955
merged
- 08:47 PM Backport #24828: luminous: qa: iogen.sh: line 7: cd: too many arguments
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22955
merged
- 08:47 PM Backport #24860: luminous: cephfs-journal-tool: Importing a zero-length purge_queue journal break...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22980
merged
- 08:47 PM Backport #23015: luminous: client: coredump when nfs-ganesha use ceph_ll_get_inode()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23012
merged
- 08:46 PM Backport #24932: luminous: client: put instance/addr information in status asok command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23107
merged
- 08:45 PM Backport #25038: luminous: mds: scrub doesn't always return JSON results
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23222
merged
- 08:44 PM Backport #24538: luminous: common/DecayCounter: set last_decay to current time when decoding deca...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22779
merged
- 08:38 PM Backport #24534: mimic: client: _ll_drop_pins travel inode_map may access invalid ‘next’ iterator
- https://github.com/ceph/ceph/pull/22786 merged
- 02:35 PM Bug #24849: client: statfs inode count odd
- Assigns total number of files on FS (directories are counted as files) to f_files - https://github.com/ceph/ceph/pull...
- 01:22 PM Feature #12282 (In Progress): mds: progress/abort/pause interface for ongoing scrubs
- 01:14 PM Feature #24286 (Fix Under Review): tools: create CephFS shell
- 01:13 PM Feature #24286: tools: create CephFS shell
- https://github.com/ceph/ceph/pull/23158
- 09:25 AM Feature #25013 (Fix Under Review): mds: add average session age (uptime) perf counter
- PR https://github.com/ceph/ceph/pull/23314
- 04:26 AM Backport #25042 (In Progress): mimic: mds: reduce debugging for missing inodes during subtree mig...
- https://github.com/ceph/ceph/pull/23309
07/29/2018
- 04:02 PM Bug #25148 (Won't Fix - EOL): "ceph session ls" produces unparseable json when run against ceph-m...
- What?
Apparent race condition in upgrade test
Where?
It happened in the upgrade test @upgrade:lumino...
07/28/2018
- 12:04 AM Bug #25141 (Fix Under Review): CephVolumeClient: delay required after adding data pool to MDSMap
- https://github.com/ceph/ceph/pull/23297
- 12:00 AM Bug #25141 (In Progress): CephVolumeClient: delay required after adding data pool to MDSMap
07/27/2018
- 11:59 PM Bug #25141 (Resolved): CephVolumeClient: delay required after adding data pool to MDSMap
- https://github.com/ceph/ceph/blob/726ca169ee23df2f5521ca0c9677614c6e3a2145/src/pybind/ceph_volume_client.py#L632-L636...
- 09:54 AM Feature #24444: cephfs: make InodeStat, DirStat, LeaseStat versioned
- https://github.com/ceph/ceph/pull/23280
- 03:44 AM Backport #25040 (In Progress): mimic: mds: dump recent (memory) log messages before respawning du...
- https://github.com/ceph/ceph/pull/23275
- 01:57 AM Feature #25131 (Fix Under Review): mds: optimize the way how max export size is enforced
- https://github.com/ceph/ceph/pull/23088
- 01:55 AM Feature #25131 (Resolved): mds: optimize the way how max export size is enforced
- The current way of enforcing the export size is to check it after the subtree gets frozen. It may freeze a large subtree...
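A sketch of the general idea (illustrative pseudologic, not the MDS exporter code): accumulate directory fragments into the export set and stop at a size budget, instead of freezing an arbitrarily large subtree and only checking its size afterwards:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical structures for illustration only.
struct DirFrag {
  uint64_t approx_bytes;
  std::vector<DirFrag*> children;
};

void pick_export_set(DirFrag* root, uint64_t max_export_size,
                     std::vector<DirFrag*>& out) {
  uint64_t total = 0;
  std::vector<DirFrag*> stack{root};
  while (!stack.empty()) {
    DirFrag* d = stack.back();
    stack.pop_back();
    if (total + d->approx_bytes > max_export_size)
      break;  // budget reached: leave remaining fragments for a later export
    total += d->approx_bytes;
    out.push_back(d);
    for (DirFrag* c : d->children)
      stack.push_back(c);
  }
}
```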