Activity
From 04/19/2017 to 05/18/2017
05/18/2017
- 10:46 PM Bug #17259 (Won't Fix): multimds: ranks >= max_mds may be assigned after reducing max_mds
- This affected kcephfs, which was fixed in https://github.com/ceph/ceph-client/commit/76201b6354bb3aa31c7ba2bd42b9cbb8d...
- 10:43 PM Bug #19240: multimds on linode: troubling op throughput scaling from 8 to 16 MDS in kernel build ...
- To close this we should confirm the hypothesis with the new op tracking from http://tracker.ceph.com/issues/19362
I'll do ...
- 04:07 PM Bug #19934 (Resolved): ceph fs set cephfs standby_count_wanted 0 fails on jewel upgrade
- https://github.com/ceph/ceph/pull/15126
- 03:53 AM Bug #19854: ceph-fuse write a big file,The file is only written in part
- Hi Zheng Yan:
The problem occurs when multiple clients read/write a file at the same time in our environment.
- 01:14 AM Bug #19955: Too many stat ops when MDS trying to probe a large file
- John Spray wrote:
> File size recovery should only happen when a client has failed to reconnect properly during the ...
- 12:14 AM Bug #19969 (Fix Under Review): CDir.cc: 909: FAILED assert(get_num_ref() == (state_test(STATE_STI...
- 12:14 AM Bug #19969: CDir.cc: 909: FAILED assert(get_num_ref() == (state_test(STATE_STICKY) ? 1:0))
- Fix is already in my wip-multimds-misc branch https://github.com/ceph/ceph/pull/14550/commits/b4e0d12c4d00e4994eff3d8...
05/17/2017
- 08:28 PM Bug #19969: CDir.cc: 909: FAILED assert(get_num_ref() == (state_test(STATE_STICKY) ? 1:0))
- MDS log: https://www.dropbox.com/s/hd7bk0lwznwxlcy/mds.a.log.bz2
- 08:20 PM Bug #19969 (Resolved): CDir.cc: 909: FAILED assert(get_num_ref() == (state_test(STATE_STICKY) ? 1...
- ...
- 03:54 PM Bug #16588 (Resolved): ceph mds dump show incorrect number of metadata pools.
- 11:20 AM Bug #19955 (Fix Under Review): Too many stat ops when MDS trying to probe a large file
- https://github.com/ceph/ceph/pull/15131
- 11:12 AM Bug #19955: Too many stat ops when MDS trying to probe a large file
- File size recovery should only happen when a client has failed to reconnect properly during the MDS restart. Is that...
- 03:29 AM Bug #19955 (Resolved): Too many stat ops when MDS trying to probe a large file
- When MDS recovers, it may emit tons of `stat` ops to OSDs trying to probe a large file, which as a result prevents cl...
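For context on the cost described above: size recovery works by statting a file's backing RADOS objects one by one, so the number of probes scales with file size divided by object size. A simplified sketch of the idea (the `stat_object` callback and this sequential, stop-at-first-hole loop are illustrative assumptions, not the actual C++ RecoveryQueue code, which also has to handle sparse files):

```python
OBJECT_SIZE = 4 * 2**20  # default CephFS layout: 4 MiB objects

def probe_file_size(stat_object, max_objects):
    """Recover a file's size by probing its backing objects in order.

    stat_object(idx) is a hypothetical callback returning the size in
    bytes of object idx, or None if that object does not exist.
    """
    size = 0
    for idx in range(max_objects):
        osz = stat_object(idx)
        if osz is None:
            break  # simplification: real recovery must handle holes
        size = idx * OBJECT_SIZE + osz
    return size
```

With 4 MiB objects, a multi-gigabyte file means thousands of stat ops, which is why a recovering MDS can flood the OSDs.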
- 10:49 AM Bug #19854: ceph-fuse write a big file,The file is only written in part
- Hi Zheng Yan:
Disabling the page cache at the client (fuse_disable_pagecache = true) makes the problem disappear.
- 09:06 AM Bug #19946 (Fix Under Review): CInode.cc: 2481: FAILED assert(s == nested_auth_pins)
- https://github.com/ceph/ceph/pull/15130
- 03:40 AM Bug #19946: CInode.cc: 2481: FAILED assert(s == nested_auth_pins)
- happens only when mds_debug_auth_pins is enabled. the new dirfrag hasn't been added to inode's frag map, which confus...
05/16/2017
- 07:51 PM Bug #19946: CInode.cc: 2481: FAILED assert(s == nested_auth_pins)
- Full MDS log: https://www.dropbox.com/s/skjrie44ngc27cj/mds.b.log.bz2
- 07:50 PM Bug #19946 (Resolved): CInode.cc: 2481: FAILED assert(s == nested_auth_pins)
- nested_auth_pins == 2
s == 1
2017-05-16 10:15:54.754892 7fea71adb700 -1 /home/dfuller/ceph/src/mds/CInode.cc:
I...
- 11:00 AM Bug #19934: ceph fs set cephfs standby_count_wanted 0 fails on jewel upgrade
- ...
- 02:19 AM Bug #19934 (Resolved): ceph fs set cephfs standby_count_wanted 0 fails on jewel upgrade
- ...
- 10:00 AM Bug #19892 (Fix Under Review): Test failure: test_purge_queue_op_rate fails
- 09:52 AM Bug #19893 (Resolved): test_rebuild_simple_altpool fails
- 09:45 AM Feature #19362 (Resolved): mds: add perf counters for each type of MDS operation
- 09:43 AM Bug #19890 (Resolved): src/test/pybind/test_cephfs.py fails
- 09:24 AM Feature #19820 (Resolved): ceph-fuse: use userspace permission check by default
- 09:14 AM Bug #19755 (Pending Backport): MDS became unresponsive when truncating a very large file
05/15/2017
- 09:31 PM Bug #19734 (Resolved): mds: subsystems like ceph_subsys_mds_balancer do not log correctly
- 08:40 PM Bug #19893 (Fix Under Review): test_rebuild_simple_altpool fails
- https://github.com/ceph/ceph/pull/15094
- 01:46 PM Bug #19892: Test failure: test_purge_queue_op_rate fails
- Let's tweak the numbers on this test to get more consistent behaviour (more files), or maybe set the concurrent purge...
- 12:25 PM Bug #19891 (Resolved): Test failure: test_full_different_file
- 12:18 PM Bug #19828 (Resolved): mds: valgrind InvalidRead detected in Locker
- 12:18 PM Bug #19903 (Resolved): LibCephFS.ClearSetuid fails
- 08:04 AM Bug #19854: ceph-fuse write a big file,The file is only written in part
- Zheng Yan wrote:
> can you reproduce this issue? (errors in the client.log are normal, they shouldn't cause this iss...
- 07:24 AM Bug #19854: ceph-fuse write a big file,The file is only written in part
- Zheng Yan wrote:
> can you reproduce this issue? (errors in the client.log are normal, they shouldn't cause this iss...
- 07:12 AM Bug #19854: ceph-fuse write a big file,The file is only written in part
- Hi Zheng Yan:
Reproducing this issue is easy, but I don't know what specific logs need to be opened at the client an...
05/12/2017
- 11:02 AM Bug #19706: Laggy mon daemons causing MDS failover (symptom: failed to set counters on mds daemon...
- The blacklisting happened here: ...
- 10:04 AM Bug #19706: Laggy mon daemons causing MDS failover (symptom: failed to set counters on mds daemon...
- So it turns out this is exposing an underlying issue (in at least some cases):
http://qa-proxy.ceph.com/teuthology/j...
- 09:41 AM Feature #19819: Add support of FS_IOC_FIEMAP ioctl on files, accessible through CephFS
- Please don't close, since the issue is valid, and maybe someone will eventually fix it.
- 09:29 AM Feature #19819 (Rejected): Add support of FS_IOC_FIEMAP ioctl on files, accessible through CephFS
- Okay. I'm going to close this, because implementing that ioctl efficiently isn't doable without major changes to how...
- 09:12 AM Feature #19819: Add support of FS_IOC_FIEMAP ioctl on files, accessible through CephFS
- This is a feature request. I don't have the skills or CephFS knowledge to implement this feature myself.
Motivation -- ...
- 09:01 AM Bug #19854: ceph-fuse write a big file,The file is only written in part
- can you reproduce this issue? (errors in the client.log are normal, they shouldn't cause this issue)
- 08:48 AM Bug #19912 (Fix Under Review): kcephfs: Test failure: test_trim_caps
- https://github.com/ceph/ceph/pull/15062
- 08:17 AM Bug #19912 (Resolved): kcephfs: Test failure: test_trim_caps
- http://qa-proxy.ceph.com/teuthology/teuthology-2017-05-11_05:20:03-kcephfs-kraken-testing-basic-smithi/1122976/
<p...
- 12:34 AM Bug #19903 (Fix Under Review): LibCephFS.ClearSetuid fails
- https://github.com/ceph/ceph/pull/15039
05/11/2017
- 06:46 PM Feature #17834 (Resolved): MDS Balancer overrides
- 06:24 PM Feature #19819: Add support of FS_IOC_FIEMAP ioctl on files, accessible through CephFS
- Марк: is this ticket describing something you want to work on, or is it just a request? In either case, what's the m...
- 09:46 AM Bug #19635 (Pending Backport): Deadlock on two ceph-fuse clients accessing the same file
- 09:12 AM Bug #19589 (Resolved): greedyspill.lua: :18: attempt to index a nil value (field '?')
- 07:44 AM Bug #19854: ceph-fuse write a big file,The file is only written in part
- Zheng Yan wrote:
> what does 'written in part' mean? application wrote ~23G, failed to write the rest, or applicatio...
05/10/2017
- 01:25 PM Bug #19903 (Resolved): LibCephFS.ClearSetuid fails
- all libcephfs/test.sh failures in http://pulpito.ceph.com/pdonnell-2017-05-10_02:32:15-fs-wip-pdonnell-integration-di...
- 08:59 AM Bug #19891 (Fix Under Review): Test failure: test_full_different_file
- https://github.com/ceph/ceph/pull/15026/
- 08:12 AM Bug #19891: Test failure: test_full_different_file
- The MDS didn't get OSD op replies for the purge queue journal prezero operations. It seems the OSD dropped these requests...
- 02:22 AM Bug #19892: Test failure: test_purge_queue_op_rate fails
- 02:21 AM Bug #19892: Test failure: test_purge_queue_op_rate fails
- Seems like a test case issue: the asok command can take several seconds, which is enough time for all files to be deleted.
- 01:02 AM Bug #19896 (Duplicate): client: test failure for O_RDWR file open
- 12:35 AM Bug #19890 (Fix Under Review): src/test/pybind/test_cephfs.py fails
- 12:34 AM Bug #19890: src/test/pybind/test_cephfs.py fails
- https://github.com/ceph/ceph/pull/15018
05/09/2017
- 09:47 PM Bug #19896: client: test failure for O_RDWR file open
- This problem also appears to be affecting a few other tests:...
- 09:42 PM Bug #19896 (Duplicate): client: test failure for O_RDWR file open
- We have a test failure in test_cephfs.test_open (and test_cephfs.test_mount_unmount): http://pulpito.ceph.com/pdonnel...
- 01:37 PM Bug #19450 (Resolved): PurgeQueue read journal crash
- 01:23 PM Bug #19893 (Resolved): test_rebuild_simple_altpool fails
- http://qa-proxy.ceph.com/teuthology/teuthology-2017-05-08_03:25:02-kcephfs-master-testing-basic-smithi/1113182/
- 01:15 PM Bug #19892 (Resolved): Test failure: test_purge_queue_op_rate fails
- http://qa-proxy.ceph.com/teuthology/teuthology-2017-05-06_03:25:02-kcephfs-master-testing-basic-smithi/1107011/
- 01:05 PM Bug #19891 (Resolved): Test failure: test_full_different_file
- http://qa-proxy.ceph.com/teuthology/teuthology-2017-05-06_03:15:02-fs-master---basic-smithi/1106624/
- 12:52 PM Bug #19890 (Resolved): src/test/pybind/test_cephfs.py fails
- http://qa-proxy.ceph.com/teuthology/teuthology-2017-05-08_03:15:05-fs-master---basic-smithi/1113004/teuthology.log
... - 02:49 AM Bug #19426 (Can't reproduce): knfs blogbench hang
05/08/2017
- 03:24 PM Backport #19846 (In Progress): jewel: write to cephfs mount hangs, ceph-fuse and kernel
- 02:00 PM Backport #19845 (In Progress): kraken: write to cephfs mount hangs, ceph-fuse and kernel
- 09:46 AM Bug #19426: knfs blogbench hang
- Jeff Layton wrote:
> Sorry I didn't see this sooner. Is this still cropping up?
>
> So what might be helpful the ...
- 09:38 AM Bug #19426: knfs blogbench hang
- It seems the crash no longer happens after rebasing the testing branch against the 4.11 kernel.
- 09:25 AM Bug #19828 (Fix Under Review): mds: valgrind InvalidRead detected in Locker
- https://github.com/ceph/ceph/pull/14991
- 03:27 AM Bug #19854: ceph-fuse write a big file,The file is only written in part
- what does 'written in part' mean? application wrote ~23G, failed to write the rest, or application wrote 26G but the ...
- 02:59 AM Bug #18798 (Resolved): FS activity hung, MDS reports client "failing to respond to capability rel...
- "ceph: try getting buffer capability for readahead/fadvise" has been backported into 4.9.x
05/04/2017
- 07:06 PM Feature #19862 (New): mds: add LTTnG tracepoints for each type of MDS operation
- It would be nice to know the latency of different file system operations so we can see which operations:
- scale poo...
- 09:47 AM Bug #19854: ceph-fuse write a big file,The file is only written in part
- upload client log file
- 09:41 AM Bug #19854 (Duplicate): ceph-fuse write a big file,The file is only written in part
- An application writes a big file (26GB) to CephFS, but the file is only written in part (23GB);
ceph version: 10.2.6 (656b5b6...
05/03/2017
- 08:20 PM Backport #19846 (Resolved): jewel: write to cephfs mount hangs, ceph-fuse and kernel
- https://github.com/ceph/ceph/pull/15000
- 08:20 PM Backport #19845 (Resolved): kraken: write to cephfs mount hangs, ceph-fuse and kernel
- https://github.com/ceph/ceph/pull/14998
- 05:46 PM Feature #19362: mds: add perf counters for each type of MDS operation
- https://github.com/ceph/ceph/pull/14938
- 03:22 PM Bug #18798: FS activity hung, MDS reports client "failing to respond to capability release"
- Zheng Yan wrote:
> elder one wrote:
> > One difference I noticed between 4.4 and 4.9 kernels
> > - with 4.4 kerne...
- 02:27 AM Bug #18798: FS activity hung, MDS reports client "failing to respond to capability release"
- elder one wrote:
> Just reporting that no errors after using patched (commit 2b1ac852) cephfs kernel module.
>
>...
- 02:06 AM Bug #18798: FS activity hung, MDS reports client "failing to respond to capability release"
- elder one wrote:
> One difference I noticed between 4.4 and 4.9 kernels
> - with 4.4 kernel on cephfs directory si...
- 11:04 AM Backport #18699 (Resolved): jewel: client: fix the cross-quota rename boundary check conditions
- 07:47 AM Bug #18872 (Pending Backport): write to cephfs mount hangs, ceph-fuse and kernel
- 05:25 AM Feature #19819: Add support of FS_IOC_FIEMAP ioctl on files, accessible through CephFS
- This is pretty infeasible right now -- nothing tracks which bits of a file are allocated or exist. The closest it com...
- 03:27 AM Bug #19828 (Resolved): mds: valgrind InvalidRead detected in Locker
- ...
- 01:55 AM Bug #17819 (Can't reproduce): MDS crashed while performing snapshot creation and deletion in a loop
- 01:53 AM Bug #17819: MDS crashed while performing snapshot creation and deletion in a loop
- Ran the test overnight; can't reproduce the crash.
05/02/2017
- 09:40 AM Bug #17408 (Can't reproduce): Possible un-needed wait on rstats when listing dir?
- Can't reproduce; probably fixed by ...
- 09:21 AM Bug #18798: FS activity hung, MDS reports client "failing to respond to capability release"
- Just reporting no errors after using the patched (commit 2b1ac852) cephfs kernel module.
Also upgraded ceph to 1... - 08:29 AM Feature #19820 (Fix Under Review): ceph-fuse: use userspace permission check by default
- https://github.com/ceph/ceph/pull/14907
- 07:39 AM Feature #19820 (Resolved): ceph-fuse: use userspace permission check by default
- 07:32 AM Bug #19812: client: not swapping directory caps efficiently leads to very slow create chains
- The reason for the slow creation/deletion is that ceph-fuse sends a getattr request (to check permission on the test directory) b...
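A toy model of why that per-create getattr hurts (illustrative only, not Ceph code; the class and counters here are invented for the example): if the client must make a round trip to re-check the parent directory before every create, an N-file create chain costs 2N round trips instead of N.

```python
class ToyClient:
    """Counts metadata-server round trips for a chain of creates.

    cache_parent_attrs models doing the permission check from cached
    attributes (userspace permission check) instead of a getattr RPC.
    """
    def __init__(self, cache_parent_attrs):
        self.cache_parent_attrs = cache_parent_attrs
        self.round_trips = 0

    def check_parent_perms(self):
        if not self.cache_parent_attrs:
            self.round_trips += 1  # getattr round trip to the MDS

    def create(self, name):
        self.check_parent_perms()
        self.round_trips += 1      # the create request itself

def round_trips_for(n_creates, cached):
    c = ToyClient(cached)
    for i in range(n_creates):
        c.create(f"file{i}")
    return c.round_trips
```

Under this model, 100 creates cost 100 round trips with cached attributes and 200 without, which matches the observed slowdown pattern.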
- 06:17 AM Feature #19819 (Rejected): Add support of FS_IOC_FIEMAP ioctl on files, accessible through CephFS
- The purpose of this is to have a standard interface for examining the allocated extents of files. For example, xfs_i...
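For readers unfamiliar with the interface being requested: FS_IOC_FIEMAP is the standard Linux ioctl for querying a file's allocated extents. A minimal caller sketch in Python (the struct layouts follow <linux/fiemap.h>; on a filesystem without FIEMAP support, as described for CephFS in this ticket, the ioctl raises OSError):

```python
import fcntl
import struct

# FS_IOC_FIEMAP = _IOWR('f', 11, struct fiemap), from <linux/fs.h>
FS_IOC_FIEMAP = 0xC020660B
FIEMAP_MAX_OFFSET = 0xFFFFFFFFFFFFFFFF

# struct fiemap header: fm_start, fm_length, fm_flags,
# fm_mapped_extents, fm_extent_count, fm_reserved
_HDR = struct.Struct('=QQIIII')
# struct fiemap_extent: fe_logical, fe_physical, fe_length,
# fe_reserved64[2], fe_flags, fe_reserved[3]
_EXT = struct.Struct('=QQQQQIIII')

def fiemap_extents(fd, max_extents=32):
    """Return [(logical_offset, length)] for up to max_extents extents.

    Raises OSError on filesystems that do not implement FIEMAP.
    """
    req = bytearray(_HDR.pack(0, FIEMAP_MAX_OFFSET, 0, 0, max_extents, 0))
    req += bytes(_EXT.size * max_extents)
    fcntl.ioctl(fd, FS_IOC_FIEMAP, req, True)  # kernel fills the buffer
    mapped = _HDR.unpack_from(req)[3]
    out = []
    for i in range(mapped):
        fields = _EXT.unpack_from(req, _HDR.size + i * _EXT.size)
        out.append((fields[0], fields[2]))  # (fe_logical, fe_length)
    return out
```

Tools like xfs_io's fiemap command exercise this same ioctl from the command line.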
04/28/2017
- 10:09 PM Bug #19812 (New): client: not swapping directory caps efficiently leads to very slow create chains
- https://www.mail-archive.com/ceph-users@lists.ceph.com/msg34200.html
In short: if you have a ceph-fuse and a kerne...
04/27/2017
- 07:27 AM Bug #18872: write to cephfs mount hangs, ceph-fuse and kernel
- PR: https://github.com/ceph/ceph/pull/14822
- 03:21 AM Bug #19789 (New): FAIL: test_evict_client (tasks.cephfs.test_misc.TestMisc)
- http://qa-proxy.ceph.com/teuthology/teuthology-2017-04-24_03:15:02-fs-master---basic-smithi/
04/25/2017
- 09:03 AM Bug #19755 (Fix Under Review): MDS became unresponsive when truncating a very large file
- https://github.com/ceph/ceph/pull/14769
- 03:39 AM Bug #19755 (Resolved): MDS became unresponsive when truncating a very large file
- We were trying to copy a very large file (7TB exactly) between two directories through Samba/CephFS, and cancelled it...
- 06:52 AM Backport #19763 (Resolved): kraken: non-local cephfs quota changes not visible until some IO is done
- https://github.com/ceph/ceph/pull/16108
- 06:52 AM Backport #19762 (Resolved): jewel: non-local cephfs quota changes not visible until some IO is done
- https://github.com/ceph/ceph/pull/15466
04/24/2017
- 09:14 PM Bug #19583 (Resolved): mds: change_attr not inc in Server::handle_set_vxattr
- 09:13 PM Fix #19691 (Resolved): Remove journaler_allow_split_entries option
- 09:13 PM Bug #18816 (Resolved): MDS crashes with log disabled
- 09:12 PM Bug #19306: fs: mount NFS to cephfs, and then ls a directory containing a large number of files, ...
- The userspace piece (https://github.com/ceph/ceph/pull/14317) has merged.
Zheng: please resolve the ticket when th...
- 09:11 PM Feature #17855 (Resolved): Don't evict a slow client if it's the only client
- 01:43 PM Bug #19706 (In Progress): Laggy mon daemons causing MDS failover (symptom: failed to set counters...
- We should disable this check for the misc workunit; it was mainly intended for the workunits that have more sustained lo...
- 12:27 PM Feature #18425 (Resolved): mds: add the option to use tcmalloc directly
- 10:13 AM Bug #17939 (Pending Backport): non-local cephfs quota changes not visible until some IO is done
- 07:08 AM Support #16884 (Closed): rename() doesn't work between directories
- 07:02 AM Support #16884: rename() doesn't work between directories
- 03:01 AM Bug #19635 (Fix Under Review): Deadlock on two ceph-fuse clients accessing the same file
- https://github.com/ceph/ceph/pull/14743
04/22/2017
- 02:10 AM Bug #19583 (Fix Under Review): mds: change_attr not inc in Server::handle_set_vxattr
- https://github.com/ceph/ceph/pull/14726
04/21/2017
- 03:41 AM Bug #19734 (Resolved): mds: subsystems like ceph_subsys_mds_balancer do not log correctly
- Our logging for subsystems is relying on a removed configuration macro:...
- 03:36 AM Bug #19589 (Fix Under Review): greedyspill.lua: :18: attempt to index a nil value (field '?')
- https://github.com/ceph/ceph/pull/14704
04/20/2017
- 09:54 PM Backport #19709 (In Progress): jewel: Enable MDS to start when session ino info is corrupt
- 12:13 PM Backport #19709 (Resolved): jewel: Enable MDS to start when session ino info is corrupt
- https://github.com/ceph/ceph/pull/14700
- 09:49 PM Backport #19679 (In Progress): jewel: MDS: damage reporting by ino number is useless
- 09:29 PM Backport #19677 (In Progress): jewel: Jewel ceph-fuse does not recover after lost connection to MDS
- 05:55 PM Backport #19466 (Need More Info): jewel: mds: log rotation doesn't work if mds has respawned
- Needs 3ba63063 1fb15a21
Of these two, 3ba63063 is non-trivial.
- 11:32 AM Backport #19466 (In Progress): jewel: mds: log rotation doesn't work if mds has respawned
- 05:06 PM Backport #19620 (Resolved): kraken: MDS server crashes due to inconsistent metadata.
- 05:06 PM Backport #19483 (Resolved): kraken: No output for "ceph mds rmfailed 0 --yes-i-really-mean-it" co...
- 05:05 PM Backport #19335 (Resolved): kraken: MDS heartbeat timeout during rejoin, when working with large ...
- 05:04 PM Backport #19045 (Resolved): kraken: buffer overflow in test LibCephFS.DirLs
- 05:03 PM Backport #18950 (Resolved): kraken: mds/StrayManager: avoid reusing deleted inode in StrayManager...
- 05:02 PM Backport #18899 (Resolved): kraken: Test failure: test_open_inode
- 05:01 PM Backport #18706 (Resolved): kraken: fragment space check can cause replayed request fail
- 04:59 PM Backport #18700 (Resolved): kraken: client: fix the cross-quota rename boundary check conditions
- 04:58 PM Bug #18306 (Resolved): segfault in handle_client_caps
- 04:58 PM Backport #18616 (Resolved): kraken: segfault in handle_client_caps
- 04:57 PM Bug #18179 (Resolved): MDS crashes on missing metadata object
- 04:57 PM Backport #18566 (Resolved): kraken: MDS crashes on missing metadata object
- 04:56 PM Bug #18396 (Resolved): Test Failure: kcephfs test_client_recovery.TestClientRecovery
- 07:57 AM Bug #18396: Test Failure: kcephfs test_client_recovery.TestClientRecovery
- http://qa-proxy.ceph.com/teuthology/teuthology-2017-04-13_05:20:02-kcephfs-kraken-testing-basic-smithi/1019312/
- 04:56 PM Backport #18562 (Resolved): kraken: Test Failure: kcephfs test_client_recovery.TestClientRecovery
- 04:55 PM Bug #18460 (Resolved): ceph-fuse crash during snapshot tests
- 04:55 PM Backport #18552 (Resolved): kraken: ceph-fuse crash during snapshot tests
- 02:53 PM Bug #19707 (Duplicate): Hadoop tests fail due to missing upstream tarball
- Indeed it is -- I hadn't seen that other ticket because it was in the wrong project.
- 02:04 PM Bug #19707: Hadoop tests fail due to missing upstream tarball
- Dup of #19456?
- 09:57 AM Bug #19707 (Duplicate): Hadoop tests fail due to missing upstream tarball
- http://pulpito.ceph.com/teuthology-2017-04-03_03:45:03-hadoop-master---basic-mira/...
- 01:49 PM Bug #19712 (New): some kcephfs tests become very slow
- http://qa-proxy.ceph.com/teuthology/teuthology-2017-04-16_04:20:02-kcephfs-jewel-testing-basic-smithi/
http://pulpit...
- 01:40 PM Backport #19675 (In Progress): jewel: cephfs: Test failure: test_data_isolated (tasks.cephfs.test...
- 01:24 PM Backport #19673 (In Progress): jewel: cephfs: mds is crushed, after I set about 400 64KB xattr kv...
- 01:22 PM Backport #19671 (In Progress): jewel: MDS assert failed when shutting down
- 01:15 PM Backport #19668 (In Progress): jewel: MDS goes readonly writing backtrace for a file whose data p...
- 01:04 PM Bug #18872 (Fix Under Review): write to cephfs mount hangs, ceph-fuse and kernel
- Turns out this is an issue of Ceph leaking arch-dependent flags on the wire. See kernel ml [PATCH] ceph: Fix file ope...
- 12:13 PM Backport #19710 (Resolved): kraken: Enable MDS to start when session ino info is corrupt
- https://github.com/ceph/ceph/pull/16107
- 12:08 PM Backport #19666 (In Progress): jewel: fs: The mount point break off when mds switch happened.
- 11:42 AM Backport #19665 (In Progress): jewel: C_MDSInternalNoop::complete doesn't free itself
- 11:37 AM Backport #19619 (In Progress): jewel: MDS server crashes due to inconsistent metadata.
- 11:34 AM Backport #19482 (In Progress): jewel: No output for "ceph mds rmfailed 0 --yes-i-really-mean-it" ...
- 10:57 AM Backport #19334 (In Progress): jewel: MDS heartbeat timeout during rejoin, when working with larg...
- 10:55 AM Backport #19044 (In Progress): jewel: buffer overflow in test LibCephFS.DirLs
- 10:54 AM Backport #18949 (In Progress): jewel: mds/StrayManager: avoid reusing deleted inode in StrayManag...
- 10:49 AM Backport #18900 (In Progress): jewel: Test failure: test_open_inode
- 10:47 AM Backport #18705 (In Progress): jewel: fragment space check can cause replayed request fail
- 10:44 AM Backport #18699 (In Progress): jewel: client: fix the cross-quota rename boundary check conditions
- 10:22 AM Bug #16842: mds: replacement MDS crashes on InoTable release
- Created ticket for the workaround and marked it for backport here: http://tracker.ceph.com/issues/19708
I should h...
- 10:21 AM Fix #19708 (Resolved): Enable MDS to start when session ino info is corrupt
- This was a mitigation for issue #16842, which is itself a mystery.
Creating ticket to backport it.
Fix on mas...
- 09:03 AM Bug #18816 (Fix Under Review): MDS crashes with log disabled
- I'm proposing that we rip out this configuration option; it's a trap for the unwary:
https://github.com/ceph/ceph/pu...
- 08:30 AM Bug #19706 (Can't reproduce): Laggy mon daemons causing MDS failover (symptom: failed to set coun...
- http://qa-proxy.ceph.com/teuthology/teuthology-2017-04-15_03:15:10-fs-master---basic-smithi/1027137/
04/19/2017
- 10:24 PM Feature #17834 (Fix Under Review): MDS Balancer overrides
- https://github.com/ceph/ceph/pull/14598
- 04:02 PM Bug #15467 (Won't Fix): After "mount -l", ceph-fuse does not work
- This appears to have happened on pre-jewel code, so it's unlikely anyone is interested in investigating.
- 10:11 AM Fix #19691 (Fix Under Review): Remove journaler_allow_split_entries option
- https://github.com/ceph/ceph/pull/14636
- 10:06 AM Fix #19691 (Resolved): Remove journaler_allow_split_entries option
- This has been broken in practice since at least when the MDS journal format (JournalStream etc) was changed, as the r...