Activity
From 07/18/2018 to 08/16/2018
08/16/2018
- 11:20 PM Bug #24306 (Resolved): mds: use intrusive_ptr to manage Message life-time
- 09:13 PM Bug #26869: mds: becomes stuck in up:replay during thrashing (not multimds)
- I think the right solution here is probably to make the thrasher more tolerant of slow recovery if the cluster healt...
- 04:57 PM Bug #24679 (Pending Backport): qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- 11:06 AM Backport #26914 (In Progress): mimic: handle ceph_ll_close on unmounted filesystem without crashing
- https://github.com/ceph/ceph/pull/23603
08/15/2018
- 01:00 PM Bug #24840 (Fix Under Review): mds: explain delayed client_request due to subtree migration
- https://github.com/ceph/ceph/pull/23589
- 07:36 AM Bug #26894 (Fix Under Review): mds: crash when dumping ops in flight
- https://github.com/ceph/ceph/pull/23584
08/14/2018
- 04:28 AM Backport #26905 (In Progress): mimic: MDSMonitor: consider raising priority of MMDSBeacons from M...
- https://github.com/ceph/ceph/pull/23565
08/13/2018
- 05:27 PM Backport #26906 (In Progress): luminous: MDSMonitor: consider raising priority of MMDSBeacons fro...
- 05:24 PM Backport #26930 (In Progress): luminous: MDSMonitor: note ignored beacons/map changes at higher d...
- 05:17 PM Backport #26930 (Resolved): luminous: MDSMonitor: note ignored beacons/map changes at higher debu...
- https://github.com/ceph/ceph/pull/23553
- 05:17 PM Backport #26929 (Resolved): mimic: MDSMonitor: note ignored beacons/map changes at higher debug l...
- https://github.com/ceph/ceph/pull/23704
- 05:16 PM Bug #26898 (Pending Backport): MDSMonitor: note ignored beacons/map changes at higher debug level
- 04:31 PM Bug #24679 (In Progress): qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- https://github.com/ceph/ceph/pull/23551
- 01:45 PM Bug #26867 (Closed): client: missing temporary file break rsync
- Bug probably fixed by https://github.com/ceph/ceph/pull/23088
- 11:25 AM Bug #26926 (Fix Under Review): mds: migrate strays part by part when shutdown mds
- https://github.com/ceph/ceph/pull/23548
- 11:20 AM Bug #26926 (Resolved): mds: migrate strays part by part when shutdown mds
- 09:18 AM Bug #26874 (Fix Under Review): cephfs-shell: unable to copy files to a sub-directory from local f...
- https://github.com/ceph/ceph/pull/23470
- 02:31 AM Feature #26925 (Fix Under Review): cephfs-data-scan: print the max used ino
- https://github.com/ceph/ceph/pull/23543
- 02:27 AM Feature #26925 (Resolved): cephfs-data-scan: print the max used ino
- 02:16 AM Bug #26900: qa: reduce slow warnings arising due to limited testing hardware
- Follow-up fix: https://github.com/ceph/ceph/pull/23542
- 02:08 AM Feature #26855: cephfs-shell: add support to execute commands from arguments
- https://github.com/ceph/ceph/pull/23444
- 02:08 AM Feature #26855 (Resolved): cephfs-shell: add support to execute commands from arguments
- 02:08 AM Feature #26853 (Resolved): cephfs-shell: add batch file processing
08/12/2018
- 09:25 PM Feature #24444 (Resolved): cephfs: make InodeStat, DirStat, LeaseStat versioned
- 09:17 PM Backport #26924 (Resolved): luminous: mds: mds got laggy because of MDSBeacon stuck in mqueue
- https://github.com/ceph/ceph/pull/23556
- 09:17 PM Backport #26923 (Resolved): mimic: mds: mds got laggy because of MDSBeacon stuck in mqueue
- https://github.com/ceph/ceph/pull/23703
- 09:16 PM Backport #26915 (Resolved): luminous: handle ceph_ll_close on unmounted filesystem without crashing
- https://github.com/ceph/ceph/pull/23617
- 09:16 PM Backport #26914 (Resolved): mimic: handle ceph_ll_close on unmounted filesystem without crashing
- https://github.com/ceph/ceph/pull/23603
- 09:16 PM Backport #26906 (Resolved): luminous: MDSMonitor: consider raising priority of MMDSBeacons from M...
- https://github.com/ceph/ceph/pull/23554
- 09:15 PM Backport #26905 (Resolved): mimic: MDSMonitor: consider raising priority of MMDSBeacons from MDS ...
- https://github.com/ceph/ceph/pull/23565
- 09:15 PM Backport #26904 (Resolved): luminous: qa: reduce slow warnings arising due to limited testing har...
- https://github.com/ceph/ceph/pull/23877
- 09:15 PM Backport #26903 (Resolved): mimic: qa: reduce slow warnings arising due to limited testing hardware
- https://github.com/ceph/ceph/pull/23659
- 09:15 PM Bug #26899 (Pending Backport): MDSMonitor: consider raising priority of MMDSBeacons from MDS so t...
- 09:14 PM Bug #23519 (Pending Backport): mds: mds got laggy because of MDSBeacon stuck in mqueue
- We've merged the fast dispatch fix but I want to point out for the record that the beacon replies from the monitors a...
- 09:10 PM Bug #26900 (Pending Backport): qa: reduce slow warnings arising due to limited testing hardware
- 12:02 AM Bug #26900: qa: reduce slow warnings arising due to limited testing hardware
- https://github.com/ceph/ceph/pull/23457
- 12:00 AM Bug #26900 (Resolved): qa: reduce slow warnings arising due to limited testing hardware
- Bump thresholds for slow ops/requests to be reported.
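The change above can be sketched as ceph.conf-style overrides; the option names below are the standard complaint-time settings, and the values are illustrative assumptions rather than the exact ones used by the PR:

```ini
[osd]
# report an op as "slow" only after this many seconds (default 30)
osd op complaint time = 180

[mds]
# likewise for slow MDS requests (default 30)
mds op complaint time = 180
```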
- 05:38 PM Bug #26901 (New): mds: no throttlers set on incoming messages
- This means aggressive clients can consume unbounded mds memory.
See mon/mgr/osd throttlers for comparison:...
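For comparison, the mon/mgr/osd throttlers cap the total bytes of in-flight messages; a minimal Python sketch of that policy follows (hypothetical names, not Ceph's actual Throttle API):

```python
class MessageThrottle:
    """Byte-budget throttle: admit a message only while the total size
    of messages currently being processed stays under max_bytes."""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.in_flight = 0

    def try_acquire(self, msg_bytes):
        # Refuse admission (the caller would stall the connection)
        # if accepting this message would exceed the budget.
        if self.in_flight + msg_bytes > self.max_bytes:
            return False
        self.in_flight += msg_bytes
        return True

    def release(self, msg_bytes):
        # Called once the message has been dispatched and freed.
        self.in_flight -= msg_bytes

t = MessageThrottle(max_bytes=100)
print(t.try_acquire(60), t.try_acquire(60))  # True False
t.release(60)
print(t.try_acquire(60))  # True
```

Without such a cap, every incoming message is queued regardless of how much memory is already tied up, which is the unbounded-growth problem described above.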
08/11/2018
- 11:59 PM Bug #24520 (Duplicate): "[WRN] MDS health message (mds.0): 2 slow requests are blocked > 30 sec""...
- 11:35 PM Bug #26852 (Resolved): cephfs-shell: add CMake directives to install the shell
- 11:13 PM Bug #26899 (Fix Under Review): MDSMonitor: consider raising priority of MMDSBeacons from MDS so t...
- https://github.com/ceph/ceph/pull/23538
- 05:55 PM Bug #26899 (Resolved): MDSMonitor: consider raising priority of MMDSBeacons from MDS so they are ...
- It's possible MDS beacons can get stuck in the queue long enough for an MDS to be removed from the MDSMap, increase t...
- 05:58 PM Bug #19706 (Can't reproduce): Laggy mon daemons causing MDS failover (symptom: failed to set coun...
- Last instance almost a year ago:
tail -n 100 /a/*2017-*fs*master*/*/teu* | grep -C150 'The following counters fail...
- 05:49 PM Bug #26898 (Fix Under Review): MDSMonitor: note ignored beacons/map changes at higher debug level
- https://github.com/ceph/ceph/pull/23536
- 01:15 AM Bug #26898 (Resolved): MDSMonitor: note ignored beacons/map changes at higher debug level
- https://github.com/ceph/ceph/blob/c6eb1c4a5aae28ded5cd30fb92c3e981bcbb9246/src/mon/MDSMonitor.cc#L402-L406
This ma...
08/10/2018
- 09:47 AM Bug #23519 (Fix Under Review): mds: mds got laggy because of MDSBeacon stuck in mqueue
- https://github.com/ceph/ceph/pull/23527
- 07:14 AM Bug #25228 (Resolved): mds: recovering mds receive export_cancel message
- 07:14 AM Backport #26836 (Closed): mimic: mds: recovering mds receive export_cancel message
- PR https://github.com/ceph/ceph/pull/23180, which caused the regression, hasn't been merged; I have added the fix to tha...
- 12:50 AM Bug #26894 (Resolved): mds: crash when dumping ops in flight
- http://perf1.perf.lab.eng.bos.redhat.com/pub/bengland/public/ceph/linode/logs/smf-2018-08-08-20-22/mds-fail/mds0/ceph...
08/09/2018
- 09:16 AM Backport #26888: mimic: mds: use self CPU usage to calculate load
- https://github.com/ceph/ceph/pull/23503
- 09:01 AM Backport #26888 (In Progress): mimic: mds: use self CPU usage to calculate load
- https://github.com/ceph/ceph/pull/23503
- 08:59 AM Backport #26888 (Resolved): mimic: mds: use self CPU usage to calculate load
- https://github.com/ceph/ceph/pull/23503
- 09:16 AM Backport #26889: luminous: mds: use self CPU usage to calculate load
- https://github.com/ceph/ceph/pull/23505
- 09:04 AM Backport #26889 (In Progress): luminous: mds: use self CPU usage to calculate load
- 09:03 AM Backport #26889 (Resolved): luminous: mds: use self CPU usage to calculate load
- https://github.com/ceph/ceph/pull/23505
- 09:15 AM Backport #26886: mimic: mds: reset heartbeat map at potential time-consuming places
- https://github.com/ceph/ceph/pull/23506
- 08:49 AM Backport #26886 (In Progress): mimic: mds: reset heartbeat map at potential time-consuming places
- 08:12 AM Backport #26886 (Resolved): mimic: mds: reset heartbeat map at potential time-consuming places
- https://github.com/ceph/ceph/pull/23506
- 09:14 AM Backport #26885: luminous: mds: reset heartbeat map at potential time-consuming places
- https://github.com/ceph/ceph/pull/23507
- 08:49 AM Backport #26885 (In Progress): luminous: mds: reset heartbeat map at potential time-consuming places
- 08:12 AM Backport #26885 (Resolved): luminous: mds: reset heartbeat map at potential time-consuming places
- https://github.com/ceph/ceph/pull/23507
- 08:10 AM Backport #25048 (Resolved): luminous: mds may get discontinuous mdsmap
- 04:42 AM Bug #26852 (Fix Under Review): cephfs-shell: add CMake directives to install the shell
- https://github.com/ceph/ceph/pull/23500
08/08/2018
- 04:38 PM Feature #26853: cephfs-shell: add batch file processing
- https://github.com/ceph/ceph/pull/23444
08/07/2018
- 01:16 PM Bug #26874 (Resolved): cephfs-shell: unable to copy files to a sub-directory from local file system.
- The put command does not allow copying files to sub-directories. This is not working....
- 01:16 PM Bug #25113 (Fix Under Review): mds: allows client to create ".." and "." dirents
- PR: https://github.com/ceph/ceph/pull/23469
- 11:52 AM Bug #24030 (In Progress): ceph-fuse: double dash meaning
- 10:12 AM Feature #24604: Implement "cephfs-journal-tool event splice" equivalent for purge queue
- I've just added a new PR: https://github.com/ceph/ceph/pull/23467, please take a look
- 09:06 AM Feature #24604 (New): Implement "cephfs-journal-tool event splice" equivalent for purge queue
- commit reverted
- 09:35 AM Bug #26869: mds: becomes stuck in up:replay during thrashing (not multimds)
- -1148> 2018-08-05 16:23:02.524 7fd0f6516700 1 -- 172.21.15.26:6817/673291871 --> 172.21.15.26:6813/22088 -- osd_op(...
- 08:42 AM Feature #26853 (Fix Under Review): cephfs-shell: add batch file processing
- 08:16 AM Bug #26867: client: missing temporary file break rsync
- likely fixed by https://github.com/ceph/ceph/pull/23088/commits/b17602a00c2519278e41ca1d7fa7b3cf341b7c17. No idea why...
- 08:01 AM Bug #26865 (Duplicate): mds: src/mds/CInode.cc: 2330: FAILED assert(!"unmatched rstat" == g_conf(...
- dup of #26865
08/06/2018
- 08:27 PM Backport #25033: luminous: "Health check failed: 1 MDSs report slow requests (MDS_SLOW_REQUEST)" ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23155
merged
- 06:47 PM Bug #26858: mds: reset heartbeat map at potential time-consuming places
- https://github.com/ceph/ceph/pull/22999
- 06:45 PM Bug #26858 (Pending Backport): mds: reset heartbeat map at potential time-consuming places
- 03:32 AM Bug #26858 (Fix Under Review): mds: reset heartbeat map at potential time-consuming places
- https://github.com/ceph/ceph/pull/22999
- 03:29 AM Bug #26858 (Resolved): mds: reset heartbeat map at potential time-consuming places
- 06:45 PM Feature #24604 (Pending Backport): Implement "cephfs-journal-tool event splice" equivalent for pu...
- 06:45 PM Bug #26834 (Pending Backport): mds: use self CPU usage to calculate load
- 01:47 PM Bug #26834 (Fix Under Review): mds: use self CPU usage to calculate load
- 06:44 PM Bug #25213 (Pending Backport): handle ceph_ll_close on unmounted filesystem without crashing
- 01:47 PM Bug #25213 (Fix Under Review): handle ceph_ll_close on unmounted filesystem without crashing
- https://github.com/ceph/ceph/pull/23370
- 06:35 PM Bug #26869 (New): mds: becomes stuck in up:replay during thrashing (not multimds)
- ...
- 05:23 PM Bug #26867 (Closed): client: missing temporary file break rsync
- ...
- 05:03 PM Backport #26833 (Resolved): luminous: mds: recovering mds receive export_cancel message
- 04:27 PM Backport #26833: luminous: mds: recovering mds receive export_cancel message
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23169
merged
- 04:29 PM Backport #24190: luminous: fs: reduce number of helper debug messages at level 5 for client
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23014
merged
- 04:29 PM Bug #26865 (Duplicate): mds: src/mds/CInode.cc: 2330: FAILED assert(!"unmatched rstat" == g_conf(...
- ...
- 04:27 PM Backport #25048: luminous: mds may get discontinuous mdsmap
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23169
merged
- 03:45 PM Bug #24177: qa: fsstress workunit does not execute in parallel on same host without clobbering files
- /ceph/teuthology-archive/yuriw-2018-08-04_04:21:34-multimds-wip-yuri5-testing-2018-08-03-2359-luminous-testing-basic-...
- 03:42 PM Bug #26863 (Can't reproduce): qa: test_full_different_file "dd: error writing 'large_file': No sp...
- ...
- 01:48 PM Bug #26860 (Fix Under Review): client: requests that do name lookup may be sent to wrong mds
- https://github.com/ceph/ceph/pull/23438
- 07:08 AM Bug #26860 (Resolved): client: requests that do name lookup may be sent to wrong mds
- 01:45 PM Bug #25113 (In Progress): mds: allows client to create ".." and "." dirents
- 01:45 PM Bug #25099 (Closed): mds: don't use dispatch queue length to calculate mds load
- 07:45 AM Feature #25188 (Fix Under Review): mds: configurable timeout for client eviction
- PR: https://github.com/ceph/ceph/pull/23439
- 04:04 AM Backport #26836 (In Progress): mimic: mds: recovering mds receive export_cancel message
- https://github.com/ceph/ceph/pull/23436
08/03/2018
- 09:56 PM Backport #26851: luminous: ceph_volume_client: py3 compatible
- Needs https://github.com/ceph/ceph/pull/23411 as well.
- 04:04 PM Backport #26851 (Resolved): luminous: ceph_volume_client: py3 compatible
- https://github.com/ceph/ceph/pull/24083
- 09:28 PM Backport #24929 (In Progress): luminous: qa: test_recovery_pool tries asok on wrong node
- 09:26 PM Backport #25041 (Resolved): luminous: mds: reduce debugging for missing inodes during subtree mig...
- 09:04 PM Backport #25041: luminous: mds: reduce debugging for missing inodes during subtree migration
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23214
merged
- 09:25 PM Backport #25039 (Resolved): luminous: mds: dump recent (memory) log messages before respawning du...
- 09:05 PM Backport #25039: luminous: mds: dump recent (memory) log messages before respawning due to being ...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23213
merged
- 09:24 PM Backport #25036 (Resolved): luminous: mds: dump MDSMap epoch to log at low debug
- 09:06 PM Backport #25036: luminous: mds: dump MDSMap epoch to log at low debug
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23212
merged
- 09:24 PM Bug #24467 (Resolved): mds: low wrlock efficiency due to dirfrags traversal
- 09:23 PM Backport #24696 (Resolved): luminous: mds: low wrlock efficiency due to dirfrags traversal
- 09:11 PM Backport #24696: luminous: mds: low wrlock efficiency due to dirfrags traversal
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22885
merged
- 09:20 PM Bug #23768 (Resolved): MDSMonitor: uncommitted state exposed to clients/mdss
- 09:20 PM Backport #24136 (Resolved): luminous: MDSMonitor: uncommitted state exposed to clients/mdss
- 09:10 PM Backport #24136: luminous: MDSMonitor: uncommitted state exposed to clients/mdss
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23013
merged
- 09:20 PM Backport #23790 (Resolved): luminous: mds: crash during shutdown_pass
- 09:20 PM Bug #23766 (Resolved): mds: crash during shutdown_pass
- 09:09 PM Bug #23766: mds: crash during shutdown_pass
- https://github.com/ceph/ceph/pull/23015 merged
- 06:11 PM Bug #26854 (Resolved): cephfs-shell: add support to set the ceph.conf file via command line argument
- This was completed as part of the original cephfs-shell merge, but the argument parsing still needs to be refactored to supp...
- 04:37 PM Bug #26854 (Resolved): cephfs-shell: add support to set the ceph.conf file via command line argument
- For example:...
- 04:45 PM Feature #24286: tools: create CephFS shell
- Followup fix: https://github.com/ceph/ceph/pull/23417
- 04:24 PM Feature #24286 (Resolved): tools: create CephFS shell
- 04:37 PM Feature #26855 (Resolved): cephfs-shell: add support to execute commands from arguments
- It would be convenient to support running simple commands via the arguments to the cephfs-shell:...
- 04:29 PM Feature #26853 (Resolved): cephfs-shell: add batch file processing
- The shell should be able to process a batch file instead of interactive commands. For example:...
- 04:26 PM Bug #26852 (Resolved): cephfs-shell: add CMake directives to install the shell
- Shell PR: https://github.com/ceph/ceph/pull/23158
- 04:04 PM Backport #26850 (Resolved): mimic: ceph_volume_client: py3 compatible
- https://github.com/ceph/ceph/pull/24443
- 04:02 PM Backport #26836 (Closed): mimic: mds: recovering mds receive export_cancel message
- https://github.com/ceph/ceph/pull/23436
- 03:22 PM Backport #24717: luminous: blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-smithi
- Abhishek Lekshmanan wrote:
> https://github.com/ceph/ceph/pull/22774
merged
- 12:02 PM Feature #24444 (Fix Under Review): cephfs: make InodeStat, DirStat, LeaseStat versioned
- 08:08 AM Bug #26834: mds: use self CPU usage to calculate load
- https://github.com/ceph/ceph/pull/23341
- 08:06 AM Bug #26834 (Resolved): mds: use self CPU usage to calculate load
- The current code uses the system CPU load to calculate MDS load, which is inaccurate when the machine runs multiple tasks.
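The distinction can be illustrated in Python: `os.getloadavg()` reflects the whole machine, while `os.times()` reports only this process's own CPU consumption. This is a sketch of the idea, not the MDS code:

```python
import os
import time

def process_cpu_delta(interval=0.2):
    """CPU seconds consumed by *this* process over `interval`,
    independent of what other tasks on the machine are doing."""
    t0 = os.times()
    end = time.monotonic() + interval
    while time.monotonic() < end:  # busy loop to generate CPU usage
        pass
    t1 = os.times()
    return (t1.user + t1.system) - (t0.user + t0.system)

print("system load avg:", os.getloadavg())    # whole machine
print("self CPU used:  ", process_cpu_delta())  # this process only
```

An unrelated job on the same host inflates the load average but not the per-process figure, which is why self CPU usage is the better balancing signal.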
- 07:39 AM Backport #25043 (In Progress): luminous: overhead of g_conf->get_val<type>("config name") is high
- 07:39 AM Backport #25043: luminous: overhead of g_conf->get_val<type>("config name") is high
- https://github.com/ceph/ceph/pull/23408
- 07:05 AM Backport #25044 (In Progress): mimic: overhead of g_conf->get_val<type>("config name") is high
- https://github.com/ceph/ceph/pull/23407
- 01:28 AM Backport #26833 (In Progress): luminous: mds: recovering mds receive export_cancel message
- pushed fix to original PR https://github.com/ceph/ceph/pull/23169
08/02/2018
- 09:07 PM Backport #26833 (Resolved): luminous: mds: recovering mds receive export_cancel message
- https://github.com/ceph/ceph/pull/23169
- 09:05 PM Bug #25228 (Pending Backport): mds: recovering mds receive export_cancel message
- 02:15 AM Bug #25228 (Fix Under Review): mds: recovering mds receive export_cancel message
- https://github.com/ceph/ceph/pull/23381
- 02:00 AM Bug #25228: mds: recovering mds receive export_cancel message
- http://pulpito.ceph.com/pdonnell-2018-08-01_19:07:35-multimds-wip-pdonnell-testing-20180801.165617-testing-basic-smit...
- 02:00 AM Bug #25228 (Resolved): mds: recovering mds receive export_cancel message
- 05:55 PM Backport #25046: luminous: mds: create health warning if we detect metadata (journal) writes are ...
- @prashant, you can just skip OpenFileTable.cc patches.
- 03:58 PM Feature #26831 (New): mds: consider subtree setting to enable LAZYIO automatically (RFC)
- See discussion here: https://github.com/ceph/ceph/pull/22450#issuecomment-400614296
This should be one avenue to e...
- 04:43 AM Feature #17230 (Pending Backport): ceph_volume_client: py3 compatible
- 04:43 AM Feature #20598 (Resolved): mds: revisit LAZY_IO
- 04:42 AM Bug #25215 (Resolved): mds: changing mds_cache_memory_limit causes boost::bad_get exception
08/01/2018
- 04:54 PM Cleanup #25111 (Resolved): mds: use vector to manage Contexts rather than a list
- 03:06 PM Bug #25215 (Fix Under Review): mds: changing mds_cache_memory_limit causes boost::bad_get exception
- https://github.com/ceph/ceph/pull/23365
- 02:16 PM Bug #25215 (Resolved): mds: changing mds_cache_memory_limit causes boost::bad_get exception
- ...
- 12:57 PM Bug #25213 (Resolved): handle ceph_ll_close on unmounted filesystem without crashing
- Client::_unmount will unmap and tear down all of the open Fh objects before returning. Programs that use the lowlevel...
- 09:02 AM Feature #25131: mds: optimize the way how max export size is enforced
- When importing a large subtree, the MDS can spend a long time sending cap import messages.
Importer...
07/31/2018
- 10:45 PM Backport #25206 (Resolved): mimic: CephVolumeClient: delay required after adding data pool to MDSMap
- https://github.com/ceph/ceph/pull/23725
- 10:45 PM Backport #25205 (Resolved): luminous: CephVolumeClient: delay required after adding data pool to ...
- https://github.com/ceph/ceph/pull/23726
- 10:35 PM Feature #25188: mds: configurable timeout for client eviction
- Changed wording.
- 04:18 AM Feature #25188 (Resolved): mds: configurable timeout for client eviction
- If a client refuses to release caps, the MDS should allow for a configurable timeout (default unlimited) where it aut...
- 09:51 PM Feature #24430 (Resolved): libcephfs: provide API to change umask
- 08:48 AM Bug #24307 (Resolved): pjd: cd: too many arguments
- 08:48 AM Backport #24311 (Resolved): luminous: pjd: cd: too many arguments
- 08:47 AM Bug #24579 (Resolved): client: returning garbage (?) for readdir
- 08:47 AM Bug #24680 (Resolved): qa: iogen.sh: line 7: cd: too many arguments
- 08:47 AM Backport #24718 (Resolved): luminous: client: returning garbage (?) for readdir
- 08:47 AM Backport #24828 (Resolved): luminous: qa: iogen.sh: line 7: cd: too many arguments
- 08:46 AM Bug #24239 (Resolved): cephfs-journal-tool: Importing a zero-length purge_queue journal breaks it...
- 08:46 AM Backport #24860 (Resolved): luminous: cephfs-journal-tool: Importing a zero-length purge_queue jo...
- 08:46 AM Bug #22683 (Resolved): client: coredump when nfs-ganesha use ceph_ll_get_inode()
- 08:45 AM Backport #23015 (Resolved): luminous: client: coredump when nfs-ganesha use ceph_ll_get_inode()
- 08:45 AM Backport #24932 (Resolved): luminous: client: put instance/addr information in status asok command
- 08:44 AM Backport #25038 (Resolved): luminous: mds: scrub doesn't always return JSON results
- 08:44 AM Bug #24440 (Resolved): common/DecayCounter: set last_decay to current time when decoding decay co...
- 08:44 AM Backport #24538 (Resolved): luminous: common/DecayCounter: set last_decay to current time when de...
- 08:11 AM Bug #24849 (Fix Under Review): client: statfs inode count odd
- 07:12 AM Backport #25045 (In Progress): mimic: mds: create health warning if we detect metadata (journal) ...
- https://github.com/ceph/ceph/pull/23343
- 02:40 AM Backport #25046 (Need More Info): luminous: mds: create health warning if we detect metadata (jou...
- Some files are missing in luminous branch, e.g src/mds/OpenFileTable.cc
$ git status
On branch wip-25046-luminous...
07/30/2018
- 11:11 PM Bug #25141 (Pending Backport): CephVolumeClient: delay required after adding data pool to MDSMap
- 09:28 PM Backport #25044: mimic: overhead of g_conf->get_val<type>("config name") is high
- Zheng, PTAL.
- 05:30 AM Backport #25044 (Need More Info): mimic: overhead of g_conf->get_val<type>("config name") is high
- The ConfigProxy is not defined in mimic and needs to be backported:
In file included from /home/pdvian/backport/ceph...
- 08:56 PM Bug #24052 (Resolved): repeated eviction of idle client until some IO happens
- 08:55 PM Backport #24295 (Resolved): luminous: repeated eviction of idle client until some IO happens
- 08:43 PM Backport #24295: luminous: repeated eviction of idle client until some IO happens
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22780
merged
- 08:55 PM Bug #22428 (Resolved): mds: don't report slow request for blocked filelock request
- 08:54 PM Backport #23989 (Resolved): luminous: mds: don't report slow request for blocked filelock request
- 08:41 PM Backport #23989: luminous: mds: don't report slow request for blocked filelock request
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22782
merged - 08:54 PM Bug #24269 (Resolved): multimds pjd open test fails
- 08:53 PM Backport #24540 (Resolved): luminous: multimds pjd open test fails
- 08:39 PM Backport #24540: luminous: multimds pjd open test fails
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22783
merged
- 08:52 PM Backport #24535 (Resolved): luminous: client: _ll_drop_pins travel inode_map may access invalid ‘...
- 08:48 PM Backport #24694 (Resolved): luminous: PurgeQueue sometimes ignores Journaler errors
- 08:37 PM Backport #24694: luminous: PurgeQueue sometimes ignores Journaler errors
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22811
merged
- 08:48 PM Backport #24311: luminous: pjd: cd: too many arguments
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22883
merged
Reviewed-by: Patrick Donnelly <pdonnell@redh...
- 08:47 PM Backport #24718: luminous: client: returning garbage (?) for readdir
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22955
merged
- 08:47 PM Backport #24828: luminous: qa: iogen.sh: line 7: cd: too many arguments
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22955
merged
- 08:47 PM Backport #24860: luminous: cephfs-journal-tool: Importing a zero-length purge_queue journal break...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22980
merged
- 08:47 PM Backport #23015: luminous: client: coredump when nfs-ganesha use ceph_ll_get_inode()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23012
merged
- 08:46 PM Backport #24932: luminous: client: put instance/addr information in status asok command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23107
merged
- 08:45 PM Backport #25038: luminous: mds: scrub doesn't always return JSON results
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23222
merged
- 08:44 PM Backport #24538: luminous: common/DecayCounter: set last_decay to current time when decoding deca...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22779
merged
- 08:38 PM Backport #24534: mimic: client: _ll_drop_pins travel inode_map may access invalid ‘next’ iterator
- https://github.com/ceph/ceph/pull/22786 merged
- 02:35 PM Bug #24849: client: statfs inode count odd
- Assigns total number of files on FS (directories are counted as files) to f_files - https://github.com/ceph/ceph/pull...
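The field in question is visible from Python's `os.statvfs`, which exposes the same statfs structure the fix populates:

```python
import os

st = os.statvfs(".")
# f_files is the filesystem-wide file/inode count; with the fix
# described above, CephFS reports the total number of files on the
# FS (directories counted as files) in this field.
print("f_files:", st.f_files, "f_ffree:", st.f_ffree)
```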
- 01:22 PM Feature #12282 (In Progress): mds: progress/abort/pause interface for ongoing scrubs
- 01:14 PM Feature #24286 (Fix Under Review): tools: create CephFS shell
- 01:13 PM Feature #24286: tools: create CephFS shell
- https://github.com/ceph/ceph/pull/23158
- 09:25 AM Feature #25013 (Fix Under Review): mds: add average session age (uptime) perf counter
- PR https://github.com/ceph/ceph/pull/23314
- 04:26 AM Backport #25042 (In Progress): mimic: mds: reduce debugging for missing inodes during subtree mig...
- https://github.com/ceph/ceph/pull/23309
07/29/2018
- 04:02 PM Bug #25148 (Won't Fix - EOL): "ceph session ls" produces unparseable json when run against ceph-m...
- h3. What?
Apparent race condition in upgrade test
h3. Where?
It happened in the upgrade test @upgrade:lumino...
07/28/2018
- 12:04 AM Bug #25141 (Fix Under Review): CephVolumeClient: delay required after adding data pool to MDSMap
- https://github.com/ceph/ceph/pull/23297
- 12:00 AM Bug #25141 (In Progress): CephVolumeClient: delay required after adding data pool to MDSMap
07/27/2018
- 11:59 PM Bug #25141 (Resolved): CephVolumeClient: delay required after adding data pool to MDSMap
- https://github.com/ceph/ceph/blob/726ca169ee23df2f5521ca0c9677614c6e3a2145/src/pybind/ceph_volume_client.py#L632-L636...
- 09:54 AM Feature #24444: cephfs: make InodeStat, DirStat, LeaseStat versioned
- https://github.com/ceph/ceph/pull/23280
- 03:44 AM Backport #25040 (In Progress): mimic: mds: dump recent (memory) log messages before respawning du...
- https://github.com/ceph/ceph/pull/23275
- 01:57 AM Feature #25131 (Fix Under Review): mds: optimize the way how max export size is enforced
- https://github.com/ceph/ceph/pull/23088
- 01:55 AM Feature #25131 (Resolved): mds: optimize the way how max export size is enforced
- The current way of enforcing the export size limit is to check the export size after the subtree gets frozen. It may freeze a large subtree...
07/26/2018
- 04:49 PM Bug #24823 (Fix Under Review): mds: deadlock when setting config value via admin socket
- 01:05 AM Bug #25113 (Resolved): mds: allows client to create ".." and "." dirents
- Pavani Rajula (our Outreachy Intern) found a fantastic bug. Apparently the MDS will happily allow the client to creat...
07/25/2018
- 11:05 PM Cleanup #25111: mds: use vector to manage Contexts rather than a list
- https://github.com/ceph/ceph/pull/23195
- 11:05 PM Cleanup #25111 (Resolved): mds: use vector to manage Contexts rather than a list
- This is all about the usual benefits of converting a list to a vector without any of the potential drawbacks:
* We...
- 02:19 PM Support #25089: Many slow requests
- Robert Sander wrote:
> Why was this rejected?
>
> I see slow requests when running more than one active MDS in Ce...
- 02:13 PM Support #25089: Many slow requests
- Why was this rejected?
I see slow requests when running more than one active MDS in Ceph Luminous. Ceph Luminous s...
- 02:11 PM Support #25089 (Rejected): Many slow requests
- Please seek further help on ceph-users first until it's more certain a bug is found.
- 01:46 PM Support #25089: Many slow requests
- Zheng Yan wrote:
> you need to run "ceph mds deactive xxx". see http://docs.ceph.com/docs/luminous/cephfs/multimds/
...
- 12:16 PM Support #25089: Many slow requests
- you need to run "ceph mds deactive xxx". see http://docs.ceph.com/docs/luminous/cephfs/multimds/
- 08:20 AM Support #25089 (Rejected): Many slow requests
- We see many slow requests on a multi MDS setup with 12.2.7.
With 12.2.4 this was not the case, at least not that m...
- 02:15 PM Bug #25099: mds: don't use dispatch queue length to calculate mds load
- Zheng, can you share the patch?
- 01:41 PM Bug #25099: mds: don't use dispatch queue length to calculate mds load
- two clients run "fsstress -d . -p 8 -n 100000 -f sync=0 -f write=0 -f dwrite=0", attached PNGs are graphs of mds loa...
- 01:35 PM Bug #25099 (Closed): mds: don't use dispatch queue length to calculate mds load
- 09:53 AM Backport #25037 (In Progress): mimic: mds: scrub doesn't always return JSON results
- https://github.com/ceph/ceph/pull/23225
- 09:31 AM Bug #22008: Processes stuck waiting for write with ceph-fuse
- Zheng Yan wrote:
> The second one is actually different from the first one. Seems like the first one was caused by '... - 08:26 AM Backport #25038 (In Progress): luminous: mds: scrub doesn't always return JSON results
- https://github.com/ceph/ceph/pull/23222
07/24/2018
- 07:12 PM Backport #25041 (In Progress): luminous: mds: reduce debugging for missing inodes during subtree ...
- 07:11 PM Backport #25039 (In Progress): luminous: mds: dump recent (memory) log messages before respawning...
- 07:09 PM Backport #25036 (In Progress): luminous: mds: dump MDSMap epoch to log at low debug
- https://github.com/ceph/ceph/pull/23212
- 03:34 PM Backport #25034 (Resolved): mimic: "Health check failed: 1 MDSs report slow requests (MDS_SLOW_RE...
- 03:25 AM Backport #25035 (In Progress): mimic: mds: dump MDSMap epoch to log at low debug
- https://github.com/ceph/ceph/pull/23196
07/23/2018
- 05:30 PM Bug #24780 (Fix Under Review): Some cephfs tool commands silently operate on only rank 0, even if...
- PR https://github.com/ceph/ceph/pull/23187
- 05:01 PM Bug #24900 (Rejected): fs: ceph.file.layout.stripe_count changed after modification
- vi/vim will create a new file anytime you save. From strace:...
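The behavior can be demonstrated outside of vi: a save implemented as write-temp-then-rename produces a brand-new inode, so per-file attributes set on the old inode (such as a CephFS file layout) don't carry over. A Python sketch:

```python
import os
import tempfile

d = tempfile.mkdtemp()
path = os.path.join(d, "data.txt")
with open(path, "w") as f:
    f.write("v1")
old_ino = os.stat(path).st_ino  # inode the layout would be set on

# vi-style save: write a new file, then rename it over the original
tmp = os.path.join(d, "data.txt.tmp")
with open(tmp, "w") as f:
    f.write("v2")
os.replace(tmp, path)

new_ino = os.stat(path).st_ino
print(old_ino != new_ino)  # True: same name, different inode
```

Since the new inode inherits its layout from the directory, the per-file stripe_count appears to have been "reset".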
- 04:22 PM Bug #25049 (Need More Info): MDS journal flush too often and resulted in backlog
- 01:52 PM Bug #25049: MDS journal flush too often and resulted in backlog
- I need some mds logs to check why the MDS flushes its journal so often.
- 01:48 PM Bug #24897 (Rejected): client: writes rejected by quota may truncate/zero parts of file
- 01:44 PM Bug #24897: client: writes rejected by quota may truncate/zero parts of file
- I think this is probably NOTABUG. The shell redirect will end up opening the destination file with O_TRUNC which will...
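The truncation happens at open() time, before any write is attempted: a shell redirect (or cp) opens the destination with O_WRONLY|O_CREAT|O_TRUNC, so the file is already zeroed even if the subsequent write is rejected by quota. A Python sketch:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "dst")
with open(path, "w") as f:
    f.write("existing contents")

# What `> dst` (or cp's destination open) does under the hood:
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
# The file is truncated to 0 bytes *here*, before any write() --
# so a write later refused by quota still leaves the file emptied.
os.close(fd)

print(os.path.getsize(path))  # 0
```

This is why the observed "truncate/zero" is standard POSIX semantics rather than a client bug.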
- 01:03 PM Backport #25047 (In Progress): mimic: mds may get discontinuous mdsmap
- https://github.com/ceph/ceph/pull/23180
- 08:12 AM Feature #17854: mds: only evict an unresponsive client when another client wants its caps
- Unresponsive/stale clients do not hold any caps[1]. Therefore, deferring their eviction would mean keeping them infin...
07/22/2018
- 10:24 AM Backport #25048: luminous: mds may get discontinuous mdsmap
- https://github.com/ceph/ceph/pull/23169
- 08:51 AM Feature #24430 (Fix Under Review): libcephfs: provide API to change umask
- https://github.com/ceph/ceph/pull/23157
07/21/2018
- 07:04 AM Bug #25049 (Need More Info): MDS journal flush too often and resulted in backlog
- When doing operations like rmtree, the MDS gets stuck because it flushes the journal too often and the OSDs for metadata hav...
- 03:55 AM Backport #25048 (In Progress): luminous: mds may get discontinuous mdsmap
- Zheng's WIP: https://github.com/ukernel/ceph/commits/luminous-24856
- 03:53 AM Backport #25048 (Resolved): luminous: mds may get discontinuous mdsmap
- https://github.com/ceph/ceph/pull/23169
- 03:53 AM Backport #25047 (Resolved): mimic: mds may get discontinuous mdsmap
- https://github.com/ceph/ceph/pull/23180
- 03:53 AM Bug #24856 (Pending Backport): mds may get discontinuous mdsmap
- 03:28 AM Backport #25046 (Resolved): luminous: mds: create health warning if we detect metadata (journal) ...
- https://github.com/ceph/ceph/pull/24171
- 03:28 AM Backport #25045 (Resolved): mimic: mds: create health warning if we detect metadata (journal) wri...
- https://github.com/ceph/ceph/pull/23343
- 03:28 AM Bug #24879 (Pending Backport): mds: create health warning if we detect metadata (journal) writes ...
- 03:25 AM Backport #25044 (Resolved): mimic: overhead of g_conf->get_val<type>("config name") is high
- https://github.com/ceph/ceph/pull/23407
- 03:25 AM Backport #25043 (Resolved): luminous: overhead of g_conf->get_val<type>("config name") is high
- https://github.com/ceph/ceph/pull/23408
- 03:24 AM Cleanup #24820 (Pending Backport): overhead of g_conf->get_val<type>("config name") is high
- 03:23 AM Backport #25042 (Resolved): mimic: mds: reduce debugging for missing inodes during subtree migration
- https://github.com/ceph/ceph/pull/23309
- 03:23 AM Backport #25041 (Resolved): luminous: mds: reduce debugging for missing inodes during subtree mig...
- https://github.com/ceph/ceph/pull/23214
- 03:23 AM Bug #24855 (Pending Backport): mds: reduce debugging for missing inodes during subtree migration
- 03:19 AM Backport #25040 (Resolved): mimic: mds: dump recent (memory) log messages before respawning due t...
- https://github.com/ceph/ceph/pull/23275
- 03:19 AM Backport #25039 (Resolved): luminous: mds: dump recent (memory) log messages before respawning du...
- https://github.com/ceph/ceph/pull/23213
- 03:18 AM Bug #24853 (Pending Backport): mds: dump recent (memory) log messages before respawning due to be...
- 03:18 AM Backport #25038 (Resolved): luminous: mds: scrub doesn't always return JSON results
- https://github.com/ceph/ceph/pull/23222
- 03:18 AM Backport #25037 (Resolved): mimic: mds: scrub doesn't always return JSON results
- https://github.com/ceph/ceph/pull/23225
- 03:17 AM Backport #25036 (Resolved): luminous: mds: dump MDSMap epoch to log at low debug
- https://github.com/ceph/ceph/pull/23212
- 03:17 AM Backport #25035 (Resolved): mimic: mds: dump MDSMap epoch to log at low debug
- https://github.com/ceph/ceph/pull/23196
- 03:17 AM Bug #24852 (Pending Backport): mds: dump MDSMap epoch to log at low debug
- 03:13 AM Bug #23958 (Pending Backport): mds: scrub doesn't always return JSON results
- 03:11 AM Bug #24893 (Resolved): client: add ceph_ll_fallocate
- 01:25 AM Backport #25033 (In Progress): luminous: "Health check failed: 1 MDSs report slow requests (MDS_S...
- 01:22 AM Backport #25033 (Resolved): luminous: "Health check failed: 1 MDSs report slow requests (MDS_SLOW...
- https://github.com/ceph/ceph/pull/23155
- 01:24 AM Backport #25034 (In Progress): mimic: "Health check failed: 1 MDSs report slow requests (MDS_SLOW...
- 01:22 AM Backport #25034 (Resolved): mimic: "Health check failed: 1 MDSs report slow requests (MDS_SLOW_RE...
- https://github.com/ceph/ceph/pull/23154
- 12:43 AM Bug #25008 (Pending Backport): "Health check failed: 1 MDSs report slow requests (MDS_SLOW_REQUES...
07/20/2018
- 10:23 PM Feature #25013 (In Progress): mds: add average session age (uptime) perf counter
- 06:14 AM Feature #25013 (Resolved): mds: add average session age (uptime) perf counter
- Add a performance counter in the MDS to capture average age (uptime) of sessions. This should reflect the average upt...
- 07:56 PM Bug #25008: "Health check failed: 1 MDSs report slow requests (MDS_SLOW_REQUEST)" in powercycle
- https://github.com/ceph/ceph/pull/23151
- 07:55 PM Bug #25008 (Fix Under Review): "Health check failed: 1 MDSs report slow requests (MDS_SLOW_REQUES...
- 03:49 AM Bug #24881: unhealthy heartbeat map during subtree migration
- limit size of export
https://github.com/ceph/ceph/pull/23088
- 03:22 AM Bug #24925 (Rejected): qa: pjd test failed ctime change unexpected result
- ...
- 02:43 AM Cleanup #24820: overhead of g_conf->get_val<type>("config name") is high
- https://github.com/ceph/ceph/pull/23139
- 12:57 AM Documentation #22989 (Fix Under Review): doc: add documentation for MDS states
07/19/2018
- 09:37 PM Bug #25008 (Resolved): "Health check failed: 1 MDSs report slow requests (MDS_SLOW_REQUEST)" in p...
- Run: http://pulpito.ceph.com/yuriw-2018-07-18_21:37:13-powercycle-mimic-distro-basic-smithi/
Jobs: 2796207
Logs: ht...
- 06:12 AM Backport #24934 (In Progress): luminous: cephfs-journal-tool: wrong layout info used
- -https://github.com/ceph/ceph/pull/23125-
- 06:10 AM Backport #24933 (In Progress): mimic: cephfs-journal-tool: wrong layout info used
- -https://github.com/ceph/ceph/pull/23124-
07/18/2018
- 03:37 PM Bug #24872 (Resolved): qa: client socket inaccessible without sudo
- 03:37 PM Backport #24904 (Resolved): mimic: qa: client socket inaccessible without sudo
- 02:25 PM Backport #24904: mimic: qa: client socket inaccessible without sudo
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23030
merged
- 01:26 PM Bug #24856: mds may get discontinuous mdsmap
- steps to reproduce mds stuck at reconnect state for mimic and later. (snap cache code gets confused)...
- 09:45 AM Bug #24823: mds: deadlock when setting config value via admin socket
- An alternate way would be to pass a reference to the lock to handle_conf_change() which could then acquire the locks ...
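The lock-passing idea in that comment can be sketched generically. This is a hypothetical illustration, not the actual Ceph fix: the `Daemon` type, the `already_locked` parameter, and the `applied` flag are invented for the demo. The point is that `handle_conf_change()` only acquires the lock when the caller does not already hold it, avoiding a self-deadlock on the admin-socket path.

```cpp
#include <mutex>

struct Daemon {
    std::mutex lock;
    bool applied = false;

    // already_locked tells us whether the caller already holds `lock`,
    // so we never try to re-acquire a non-recursive mutex we own.
    void handle_conf_change(bool already_locked) {
        std::unique_lock<std::mutex> l(lock, std::defer_lock);
        if (!already_locked)
            l.lock();            // take the lock only if the caller did not
        applied = true;          // apply the new config values under the lock
    }

    void admin_socket_command() {
        std::lock_guard<std::mutex> g(lock);  // admin socket path holds the lock
        handle_conf_change(true);             // would deadlock if it re-locked
    }
};
```

With an unconditional `lock.lock()` inside `handle_conf_change()`, the call from `admin_socket_command()` would block forever on a mutex the same thread already owns.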
- 04:01 AM Backport #24931 (In Progress): mimic: client: put instance/addr information in status asok command
- https://github.com/ceph/ceph/pull/23109
- 02:24 AM Backport #24932 (In Progress): luminous: client: put instance/addr information in status asok com...
- https://github.com/ceph/ceph/pull/23107
- 12:56 AM Backport #24914 (In Progress): mimic: mon: prevent older/incompatible clients from mounting the f...
- https://github.com/ceph/ceph/pull/23105