Activity
From 05/29/2017 to 06/27/2017
06/27/2017
- 04:11 PM Bug #20072: TestStrays.test_snapshot_remove doesn't handle head whiteout in pgls results
- FWIW that is basically what we did with the rados api test cleanup failures (loop waiting for snaptrimmer to do its th...
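For reference, the loop-and-wait approach mentioned in that comment reduces to a timed poll. A minimal sketch (hypothetical helper and predicate names, not the actual teuthology code):

```python
import time

def wait_until(predicate, timeout=60, interval=5):
    """Poll predicate() until it returns True, or raise once timeout seconds elapse."""
    elapsed = 0
    while not predicate():
        if elapsed >= timeout:
            raise TimeoutError("condition not met within %s seconds" % timeout)
        time.sleep(interval)
        elapsed += interval

# e.g. wait for the snaptrimmer to finish purging objects (hypothetical counter):
# wait_until(lambda: data_pool_object_count() == 0, timeout=300)
```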
- 04:59 AM Bug #20424 (Resolved): doc: improve description of `mds deactivate` to better contrast with `mds ...
- Currently the help output is not very useful for `ceph mds deactivate`:...
- 02:21 AM Backport #20412: test_remote_update_write (tasks.cephfs.test_quota.TestQuota) fails in Jewel 10.2...
- Also: https://github.com/ceph/ceph/pull/15937
- 02:07 AM Backport #20412 (Fix Under Review): test_remote_update_write (tasks.cephfs.test_quota.TestQuota) ...
- John, that looks like the problem. Here's a PR:
https://github.com/ceph/ceph/pull/15936
06/26/2017
- 08:25 PM Backport #20027 (Resolved): jewel: Deadlock on two ceph-fuse clients accessing the same file
- 08:24 PM Backport #19846 (Resolved): jewel: write to cephfs mount hangs, ceph-fuse and kernel
- 08:21 PM Backport #20412: test_remote_update_write (tasks.cephfs.test_quota.TestQuota) fails in Jewel 10.2...
- Aargh, I think this might just be failing because this is a new test that was written for luminous, where client_quot...
- 07:49 PM Backport #20412 (In Progress): test_remote_update_write (tasks.cephfs.test_quota.TestQuota) fails...
- I dug into the logs. It looks like the MDS is not sending a quota update to the client. From a brief look at the code...
- 07:30 PM Bug #20337 (New): test_rebuild_simple_altpool triggers MDS assertion
- 06:54 PM Bug #20337: test_rebuild_simple_altpool triggers MDS assertion
- I'm not seeing how Filesystem.are_daemons_healthy is waiting for daemons outside the filesystem: it's inspecting daem...
- 06:42 PM Bug #20337 (Need More Info): test_rebuild_simple_altpool triggers MDS assertion
- wait_for_daemons should wait for every daemon regardless of filesystem. Is there a failure log I can look at?
- 05:26 AM Backport #20140 (Resolved): jewel: Journaler may execute on_safe contexts prematurely
06/25/2017
- 07:56 AM Backport #20412 (Resolved): test_remote_update_write (tasks.cephfs.test_quota.TestQuota) fails in...
- https://github.com/ceph/ceph/pull/15936
06/23/2017
- 08:14 PM Backport #20404 (Rejected): kraken: cephfs permission denied until second client accesses file
- 08:14 PM Backport #20403 (Resolved): jewel: cephfs permission denied until second client accesses file
- https://github.com/ceph/ceph/pull/16150
- 12:04 PM Backport #20148 (Resolved): jewel: Too many stat ops when MDS trying to probe a large file
- 08:05 AM Bug #20334: I/O become slowly when multi mds which subtree root has replica
- try uploading it somewhere else or send it to my email zyan@redhat.com
- 07:51 AM Bug #20334: I/O become slowly when multi mds which subtree root has replica
- yanmei ding wrote:
> yanmei ding wrote:
> > Zheng Yan wrote:
> > > please upload detailed log for the slow case.
...
- 07:47 AM Bug #20334: I/O become slowly when multi mds which subtree root has replica
- yanmei ding wrote:
> Zheng Yan wrote:
> > please upload detailed log for the slow case.
>
> This is a detailed l...
- 07:46 AM Bug #20334: I/O become slowly when multi mds which subtree root has replica
- Zheng Yan wrote:
> please upload detailed log for the slow case.
This is a detailed log.
Thank you!
- 06:43 AM Bug #20334: I/O become slowly when multi mds which subtree root has replica
- please upload detailed log for the slow case.
- 02:05 AM Bug #20334: I/O become slowly when multi mds which subtree root has replica
- John Spray wrote:
> In that case I suggest you wait for 12.1.0 to see if the issue is fixed there.
John Spray: I ...
- 03:54 AM Bug #20376 (Fix Under Review): last_epoch_(over|under) in MDBalancer should be updated if mds0 ha...
- 12:17 AM Bug #20376: last_epoch_(over|under) in MDBalancer should be updated if mds0 has failed
- There is a merge request for this bug fix: https://github.com/ceph/ceph/pull/15825. Could you review it? @Patrick
06/22/2017
- 10:51 PM Bug #20376: last_epoch_(over|under) in MDBalancer should be updated if mds0 has failed
- 02:20 AM Bug #20376 (Resolved): last_epoch_(over|under) in MDBalancer should be updated if mds0 has failed
- When mds0 has failed and started up again, it will reset beat_epoch to zero. In this case, other MDSes should update ...
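The bookkeeping described in that report can be illustrated with a toy model (field names are taken from the ticket text; this is not the actual MDBalancer code): when a heartbeat arrives with an epoch lower than the last one seen, mds0 must have restarted, so the cached last_epoch_over/last_epoch_under markers are stale and should be reset.

```python
class Balancer:
    """Toy model of the epoch bookkeeping described in Bug #20376."""
    def __init__(self):
        self.beat_epoch = 0
        self.last_epoch_over = 0
        self.last_epoch_under = 0

    def handle_heartbeat(self, epoch):
        if epoch < self.beat_epoch:
            # mds0 restarted and its epoch counter started over; without a
            # reset, the stale last_epoch_* values would compare as "recent".
            self.last_epoch_over = 0
            self.last_epoch_under = 0
        self.beat_epoch = epoch
```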
- 05:28 PM Bug #20122: Ceph MDS crash with assert failure
- Are you able to reliably reproduce this? Do you have any MDS logs during the failure?
- 11:08 AM Bug #20340 (Pending Backport): cephfs permission denied until second client accesses file
- 11:07 AM Bug #20338 (Resolved): mem leak in Journaler::_issue_read() in ceph-mds
- 11:06 AM Bug #20165 (Resolved): Deadlock during shutdown in PurgeQueue::_consume
- 10:57 AM Bug #19706: Laggy mon daemons causing MDS failover (symptom: failed to set counters on mds daemon...
- The fix is not working in at least some cases. Here's a smoking gun failure:
http://pulpito.ceph.com/jspray-2017-...
- 10:47 AM Feature #20196: mds: early reintegration of strays on hardlink deletion
- Zheng's patch for the special case (both links in cache at time of primary unlink) is merged for luminous -- hopefull...
06/21/2017
- 10:06 PM Bug #20212 (Fix Under Review): test_fs_new failure on race between pool creation and appearance i...
- https://github.com/ceph/ceph/pull/15822
- 08:36 PM Bug #20212 (In Progress): test_fs_new failure on race between pool creation and appearance in `df`
- 09:12 PM Bug #20254 (Fix Under Review): mds: coverity error in Server::_rename_prepare
- https://github.com/ceph/ceph/pull/15818
- 08:34 PM Bug #20318 (Fix Under Review): Race in TestExports.test_export_pin
- https://github.com/ceph/ceph/pull/15817
- 02:58 PM Bug #20334: I/O become slowly when multi mds which subtree root has replica
- In that case I suggest you wait for 12.1.0 to see if the issue is fixed there.
- 12:52 AM Bug #20334: I/O become slowly when multi mds which subtree root has replica
- John Spray wrote:
> yanmei ding: there have been fixes on master since 12.0.3, please could you retest with the tip ...
- 10:22 AM Bug #20340: cephfs permission denied until second client accesses file
- Thanks for this patch. It seems to fix the problem for our users.
- 07:57 AM Bug #20340 (Fix Under Review): cephfs permission denied until second client accesses file
- https://github.com/ceph/ceph/pull/15800
06/20/2017
- 06:25 PM Bug #20170 (Resolved): filelock_interrupt.py fails on multimds
- 06:21 PM Documentation #13311 (Resolved): explain user permission syntax, details
- 06:21 PM Documentation #13311: explain user permission syntax, details
- Not sure why this got made a FS ticket, but fortunately I wrote the docs for cephfs client auth caps a while ago so t...
- 06:20 PM Feature #8786 (Resolved): ceph kernel module for el7
- The CephFS kernel module has been in RHEL since 7.4. The kmod-* packages are discontinued, as per the note at https://github.co...
- 06:15 PM Bug #20060 (Resolved): segmentation fault in _do_cap_update
- 06:14 PM Bug #20131 (Resolved): mds/MDBalancer: update MDSRank export_targets according to current balance...
- 06:14 PM Bug #20335 (Resolved): test_migration_on_shutdown, test_grow_shrink failing
- 01:24 PM Bug #20335 (Fix Under Review): test_migration_on_shutdown, test_grow_shrink failing
- https://github.com/ceph/ceph/pull/15768
- 06:09 PM Bug #16914 (Fix Under Review): multimds: pathologically slow deletions in some tests
- It looks like this case is now working properly with the latest code, so flipping this ticket to need review and remo...
- 02:03 PM Bug #18641 (Can't reproduce): mds: stalled clients apparently due to stale sessions
- 02:02 PM Bug #17069 (Closed): multimds: slave rmdir assertion failure
- Closing because currently we know that snapshots+multimds is broken.
- 02:00 PM Bug #16925 (Can't reproduce): multimds: cfuse (?) hang on fsx.sh workunit
- 01:54 PM Bug #20334: I/O become slowly when multi mds which subtree root has replica
- yanmei ding: there have been fixes on master since 12.0.3, please could you retest with the tip of master?
- 01:08 PM Bug #20334: I/O become slowly when multi mds which subtree root has replica
- Zheng Yan wrote:
> yanmei ding wrote:
> > John Spray wrote:
> > > Yanmei Ding: can you be more specific about how ...
- 08:03 AM Bug #20334: I/O become slowly when multi mds which subtree root has replica
- yanmei ding wrote:
> John Spray wrote:
> > Yanmei Ding: can you be more specific about how to reproduce this or wha...
- 01:20 AM Bug #20334: I/O become slowly when multi mds which subtree root has replica
- John Spray wrote:
> Yanmei Ding: can you be more specific about how to reproduce this or what is going wrong interna...
- 01:49 PM Fix #20246 (In Progress): Make clog message on scrub errors friendlier.
- 01:47 PM Bug #20282 (Closed): qa: missing even trivial tests for many commands
- 01:44 PM Bug #20282: qa: missing even trivial tests for many commands
- The script just greps for anything that looks like a COMMAND and then greps for their existence in qa/ and src/tests...
- 01:44 PM Bug #20329 (Resolved): Ceph file system hang on Jewel
- Resolving, patch will show up in stable release as and when.
- 01:42 AM Bug #20329: Ceph file system hang on Jewel
- Eric Eastman wrote:
> The number in the first column changes. Here is the output running the command in a while loop...
- 11:05 AM Bug #20338 (Fix Under Review): mem leak in Journaler::_issue_read() in ceph-mds
- https://github.com/ceph/ceph/pull/15776
- 09:49 AM Bug #20340: cephfs permission denied until second client accesses file
- Yup, in this case the diri is_stray (it looks like this...
- 09:28 AM Bug #20340: cephfs permission denied until second client accesses file
- Ahh so it *is* related to path-restricted cap.
I tried as above with client B having the same client caps -- didn't ...
- 09:18 AM Bug #20340: cephfs permission denied until second client accesses file
- I've confirmed that none of these help resolve these EPERM files:
* restart the ceph-fuse on client A
* mount...
- 09:29 AM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- Just pinging this to say that there remain some issues with path-restricted caps, as shown in #20340.
06/19/2017
- 08:52 PM Backport #20350 (Rejected): kraken: df reports negative disk "used" value when quota exceed
- 08:52 PM Backport #20349 (Resolved): jewel: df reports negative disk "used" value when quota exceed
- https://github.com/ceph/ceph/pull/16151
- 03:54 PM Bug #20338: mem leak in Journaler::_issue_read() in ceph-mds
- Was there a teuthology run where this was happening?
- 10:57 AM Bug #20338 (Resolved): mem leak in Journaler::_issue_read() in ceph-mds
- ...
- 01:47 PM Bug #20341 (Duplicate): test_migration_on_shutdown fails on master
- 01:29 PM Bug #20341 (Duplicate): test_migration_on_shutdown fails on master
- http://qa-proxy.ceph.com/teuthology/teuthology-2017-06-19_03:15:05-fs-master-distro-basic-smithi/1300512/teuthology.l...
- 01:30 PM Bug #20340: cephfs permission denied until second client accesses file
- I should mention that while client A was a user that has a path-restricted mds cap, the client B that "fixes" the EPE...
- 01:25 PM Bug #20340 (Resolved): cephfs permission denied until second client accesses file
- Here is a file that client A gets permission denied during stat:...
- 12:49 PM Bug #20329: Ceph file system hang on Jewel
- Eric: that's a conversation to have with whoever is providing your kernel -- the kernel bits of Ceph are not part of ...
- 12:38 PM Bug #20329: Ceph file system hang on Jewel
- The number in the first column changes. Here is the output running the command in a while loop, once a second. Every ...
- 09:19 AM Bug #20329: Ceph file system hang on Jewel
- ...
- 12:41 PM Bug #20178 (Pending Backport): df reports negative disk "used" value when quota exceed
- 10:53 AM Bug #20282: qa: missing even trivial tests for many commands
- Greg: can you say which script you're looking to cover these commands in? Things like session kill would be pretty a...
- 10:50 AM Bug #20272 (Rejected): Ceph OSD & MDS Failure
- I don't think there's anything to be done with this right now -- feel free to reopen if there's some other evidence t...
- 10:47 AM Bug #20334: I/O become slowly when multi mds which subtree root has replica
- Yanmei Ding: can you be more specific about how to reproduce this or what is going wrong internally?
- 10:34 AM Bug #20337 (Resolved): test_rebuild_simple_altpool triggers MDS assertion
- Two things are going wrong here, I think:
* The test code is doing a self.fs.wait_for_daemons() (test_data_scan.py:4...
- 10:18 AM Bug #20335 (Resolved): test_migration_on_shutdown, test_grow_shrink failing
- Seems to be happening repeatedly since June 10. Latest fs-master failures:
http://pulpito.ceph.com/teuthology-201...
- 09:35 AM Bug #20313 (Fix Under Review): Assertion in handle_dir_update
- https://github.com/ceph/ceph/pull/15510/commits/1a5fd47880229d69a6ea484e662e8b8280ff5158
06/18/2017
- 05:47 PM Bug #20328 (Duplicate): Test failure: test_export_pin (tasks.cephfs.test_exports.TestExports)
- 02:24 PM Bug #20334 (Resolved): I/O become slowly when multi mds which subtree root has replica
06/16/2017
- 04:22 PM Bug #20329 (Resolved): Ceph file system hang on Jewel
- We are running Ceph 10.2.7 and after adding a new multi-threaded writer application we are seeing hangs accessing met...
- 01:24 PM Bug #20328 (Duplicate): Test failure: test_export_pin (tasks.cephfs.test_exports.TestExports)
- http://qa-proxy.ceph.com/teuthology/jspray-2017-06-15_02:50:24-multimds-wip-jcsp-testing-20170614-testing-basic-smith...
06/15/2017
- 02:51 PM Bug #19706 (Resolved): Laggy mon daemons causing MDS failover (symptom: failed to set counters on...
- 02:28 PM Bug #20318 (Resolved): Race in TestExports.test_export_pin
- Seen failure here:
http://pulpito.ceph.com/jspray-2017-06-15_02:50:24-multimds-wip-jcsp-testing-20170614-testing-bas...
- 02:13 PM Bug #20313 (Resolved): Assertion in handle_dir_update
- Seen in test branch that had the following PRs in it:
[15125] mds: miscellaneous multimds fixes part2
[15510] mds...
06/14/2017
- 03:30 PM Bug #20282: qa: missing even trivial tests for many commands
- The damage and client stuff is all exercised in tasks/cephfs/test_* stuff. Are you talking specifically about unit t...
- 02:16 PM Backport #20294 (In Progress): jewel: Populate DamageTable from forward scrub
- 02:13 PM Backport #20294 (Resolved): jewel: Populate DamageTable from forward scrub
- https://github.com/ceph/ceph/pull/14699
- 02:10 PM Feature #16016 (Pending Backport): Populate DamageTable from forward scrub
- 02:01 PM Backport #19334 (Resolved): jewel: MDS heartbeat timeout during rejoin, when working with large a...
- 01:43 PM Backport #19665 (Resolved): jewel: C_MDSInternalNoop::complete doesn't free itself
- 01:38 PM Backport #19677 (Resolved): jewel: Jewel ceph-fuse does not recover after lost connection to MDS
- 01:37 PM Backport #19762 (Resolved): jewel: non-local cephfs quota changes not visible until some IO is done
- 01:33 PM Backport #19709 (Resolved): jewel: Enable MDS to start when session ino info is corrupt
- 01:31 PM Backport #19675 (Resolved): jewel: cephfs: Test failure: test_data_isolated (tasks.cephfs.test_vo...
- 01:30 PM Backport #19673 (Resolved): jewel: cephfs: mds is crushed, after I set about 400 64KB xattr kv pa...
- 01:30 PM Backport #19671 (Resolved): jewel: MDS assert failed when shutting down
- 01:29 PM Backport #19668 (Resolved): jewel: MDS goes readonly writing backtrace for a file whose data pool...
- 01:27 PM Backport #19666 (Resolved): jewel: fs:The mount point break off when mds switch hanppened.
- 01:26 PM Backport #19619 (Resolved): jewel: MDS server crashes due to inconsistent metadata.
- 01:24 PM Backport #19482 (Resolved): jewel: No output for "ceph mds rmfailed 0 --yes-i-really-mean-it" com...
- 01:23 PM Backport #19044 (Resolved): jewel: buffer overflow in test LibCephFS.DirLs
- 01:23 PM Backport #18949 (Resolved): jewel: mds/StrayManager: avoid reusing deleted inode in StrayManager:...
- 01:22 PM Backport #18900 (Resolved): jewel: Test failure: test_open_inode
- 01:22 PM Backport #18705 (Resolved): jewel: fragment space check can cause replayed request fail
06/13/2017
- 11:57 PM Bug #20282 (Closed): qa: missing even trivial tests for many commands
- I wrote a trivial script to look for missing commands in tests (https://github.com/ceph/ceph/pull/15675/commits/3aad0...
- 01:17 PM Bug #20272: Ceph OSD & MDS Failure
- The MDS backtrace is just the same as the OSD one.
- 02:27 AM Bug #20272: Ceph OSD & MDS Failure
- You probably need to bump up the number of allowed thread/process IDs on your box if it's crashing there. But that sh...
- 08:08 AM Bug #20129: Client syncfs is slow (waits for next MDS tick)
- John Spray wrote:
> dongdong tao -- could you please open a pull request with your code change once it is working fo...
06/12/2017
- 10:43 PM Bug #20272 (Rejected): Ceph OSD & MDS Failure
- The following error from one of the OSDs in my cluster brought the Ceph MDS server down over the weekend:...
- 12:44 PM Bug #20254 (Resolved): mds: coverity error in Server::_rename_prepare
- ...
06/11/2017
- 11:47 AM Fix #20246 (Resolved): Make clog message on scrub errors friendlier.
- Currently it looks something like this:...
06/09/2017
- 04:18 PM Backport #18283 (Closed): kraken: monitor cannot start because of "FAILED assert(info.state == MD...
- 09:54 AM Bug #19955: Too many stat ops when MDS trying to probe a large file
- Later I found two more related problems that may need further discussion:
# Some tools (like fio) will set the size o...
06/07/2017
- 11:16 PM Bug #20072: TestStrays.test_snapshot_remove doesn't handle head whiteout in pgls results
- Yes, that's correct.
- 10:23 PM Bug #20072: TestStrays.test_snapshot_remove doesn't handle head whiteout in pgls results
- Just to check I understand -- the claim is that we are seeing the object in pgnls output because it has a snapshot th...
- 10:04 PM Bug #20072: TestStrays.test_snapshot_remove doesn't handle head whiteout in pgls results
- Agree with Greg. Also: making pgnls skip whiteout objects means we need to load the object_info_t, which is signific...
- 07:45 PM Bug #20072: TestStrays.test_snapshot_remove doesn't handle head whiteout in pgls results
- I don't think we want a plain "rados ls" with no flags (--all?) to output something that can't be "rados stat"ed. Suppo...
- 05:12 PM Bug #20072: TestStrays.test_snapshot_remove doesn't handle head whiteout in pgls results
- Hmm, listing and snapshots have never gotten along well, but that makes me think we should list stuff even if it does...
- 11:14 AM Bug #20212 (Resolved): test_fs_new failure on race between pool creation and appearance in `df`
- ...
- 10:27 AM Feature #20196: mds: early reintegration of strays on hardlink deletion
- PR for the incomplete solution https://github.com/ceph/ceph/pull/15548
- 08:42 AM Bug #20129: Client syncfs is slow (waits for next MDS tick)
- pull requests: https://github.com/ceph/ceph/pull/15544
06/06/2017
- 09:15 PM Bug #20072: TestStrays.test_snapshot_remove doesn't handle head whiteout in pgls results
- Ah, yes, this is the SnapSet refactor, not bluestore. This was a semi-intentional change.
Before, we pgnls would i...
- 10:30 AM Feature #20196 (New): mds: early reintegration of strays on hardlink deletion
The same symptom as http://tracker.ceph.com/issues/11950
If someone creates a large number of files (1M+), hardl...
- 10:12 AM Feature #9466 (Resolved): kclient: Extend CephFSTestCase tests to cover kclient
- This got done a while back.
06/05/2017
- 02:02 PM Bug #20129: Client syncfs is slow (waits for next MDS tick)
- John Spray wrote:
> dongdong tao -- could you please open a pull request with your code change once it is working fo...
- 01:46 PM Bug #20129: Client syncfs is slow (waits for next MDS tick)
- dongdong tao -- could you please open a pull request with your code change once it is working for you?
- 12:52 AM Bug #20129: Client syncfs is slow (waits for next MDS tick)
- dongdong tao wrote:
> Zheng Yan wrote:
> > the last cap message has CHECK_CAPS_SYNCHRONOUS flag. mds flushes mdlog ...
- 01:01 PM Bug #20131 (Fix Under Review): mds/MDBalancer: update MDSRank export_targets according to current...
- 10:35 AM Bug #20178 (Fix Under Review): df reports negative disk "used" value when quota exceed
- https://github.com/ceph/ceph/pull/15481
- 12:56 AM Bug #20178 (Resolved): df reports negative disk "used" value when quota exceed
- first, set the maxbytes quota for a directory in cephfs, for example a directory named test with a quota of 10G;
...
- 07:03 AM Backport #20148 (In Progress): jewel: Too many stat ops when MDS trying to probe a large file
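One plausible way the negative df value in Bug #20178 arises (an illustration of the symptom, not the actual client statfs code): if free space is derived as quota minus used without clamping, exceeding the quota drives the value negative, and unsigned block counters then misreport. A sketch of the clamped computation:

```python
def quota_statfs(quota_bytes, used_bytes, block_size=4096):
    """Hypothetical df-style report for a quota-limited mount.

    Values are clamped so that exceeding the quota never yields a
    negative field, mirroring the idea behind the fix.
    """
    total = quota_bytes // block_size
    used = min(used_bytes, quota_bytes) // block_size   # never exceed the quota
    free = max(quota_bytes - used_bytes, 0) // block_size  # never go negative
    return {"total": total, "used": used, "free": free}
```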
06/04/2017
- 07:26 PM Backport #20140 (In Progress): jewel: Journaler may execute on_safe contexts prematurely
- 09:45 AM Backport #19762 (In Progress): jewel: non-local cephfs quota changes not visible until some IO is...
06/03/2017
- 07:58 AM Backport #20025 (In Progress): jewel: MDS became unresponsive when truncating a very large file
- 06:40 AM Bug #20170 (Fix Under Review): filelock_interrupt.py fails on multimds
- https://github.com/ceph/ceph/pull/15440
- 03:15 AM Backport #20027 (In Progress): jewel: Deadlock on two ceph-fuse clients accessing the same file
06/02/2017
- 05:17 PM Bug #20170 (In Progress): filelock_interrupt.py fails on multimds
- 04:17 PM Bug #20170 (Resolved): filelock_interrupt.py fails on multimds
- 11:25 AM Bug #20165 (Fix Under Review): Deadlock during shutdown in PurgeQueue::_consume
- https://github.com/ceph/ceph/pull/15430
- 11:01 AM Bug #20165 (Resolved): Deadlock during shutdown in PurgeQueue::_consume
- PurgeQueue does this while holding its lock:...
- 07:36 AM Backport #20149 (Rejected): kraken: Too many stat ops when MDS trying to probe a large file
- 07:36 AM Backport #20148 (Resolved): jewel: Too many stat ops when MDS trying to probe a large file
- https://github.com/ceph/ceph/pull/15472
- 07:36 AM Backport #20141 (Rejected): kraken: Journaler may execute on_safe contexts prematurely
- 07:36 AM Backport #20140 (Rejected): jewel: Journaler may execute on_safe contexts prematurely
- https://github.com/ceph/ceph/pull/15468
- 03:15 AM Bug #20129: Client syncfs is slow (waits for next MDS tick)
- Zheng Yan wrote:
> the last cap message has CHECK_CAPS_SYNCHRONOUS flag. mds flushes mdlog when it sees the flag. Th... - 01:12 AM Bug #20129: Client syncfs is slow (waits for next MDS tick)
- the last cap message has CHECK_CAPS_SYNCHRONOUS flag. mds flushes mdlog when it sees the flag. The patch isn't perfec...
06/01/2017
- 03:32 PM Bug #20129: Client syncfs is slow (waits for next MDS tick)
- Zheng Yan wrote:
> please try below patch. If it still doesn't work for you(there is no dirty caps), you need to imp...
- 12:47 PM Bug #20129: Client syncfs is slow (waits for next MDS tick)
- please try below patch. If it still doesn't work for you(there is no dirty caps), you need to implement mechanism tha...
- 10:22 AM Bug #20129: Client syncfs is slow (waits for next MDS tick)
- This is the same behaviour you would see if you were running "sync" on a filesystem.
We handle this for fsync (on ...
- 03:42 AM Bug #20129 (Resolved): Client syncfs is slow (waits for next MDS tick)
- in function Client::unmount there is the following code:
-------------
while (!mds_requests.empty()) {
ldout(cct, 10...
- 03:05 PM Feature #18490: client: implement delegation support in userland cephfs
- Brief writeup of one way to implement this.
- 10:36 AM Feature #17980 (Resolved): MDS should reject connections from OSD-blacklisted clients
- 10:36 AM Feature #9754 (Resolved): A 'fence and evict' client eviction command
- 10:35 AM Bug #19239 (Resolved): mds: stray count remains static after workflows complete
- 10:35 AM Bug #19395 (Resolved): "Too many inodes in cache" warning can happen even when trimming is working
- 10:35 AM Bug #19630 (Resolved): StrayManager::num_stray is inaccurate
- 10:34 AM Bug #20055 (Pending Backport): Journaler may execute on_safe contexts prematurely
- 10:28 AM Bug #20076 (Resolved): Pass empty string to clear mantle balancer
- 10:24 AM Bug #20039 (Resolved): mds: replay of export pinned inode does not result in export
- 10:24 AM Bug #20083 (Resolved): TestExports.test_export_pin failing on master
- 07:33 AM Bug #20131: mds/MDBalancer: update MDSRank export_targets according to current balance state
- https://github.com/ceph/ceph/pull/15407
- 07:31 AM Bug #20131 (Resolved): mds/MDBalancer: update MDSRank export_targets according to current balance...
- I think this might be a regression issue introduced by https://github.com/ceph/ceph/commit/082e86c58f5abb93e2a912e603...
05/31/2017
- 06:34 PM Feature #18490: client: implement delegation support in userland cephfs
- I have a couple of patches to start implementing this, but I've not had the time to really do a good job of it. The p...
- 09:44 AM Bug #20122 (Need More Info): Ceph MDS crash with assert failure
- The cluster is running Kraken on CentOS 7.3 and has 3 MDS servers, 01 was up:active and is the one that crashed as pe...
- 05:01 AM Bug #20118 (Duplicate): Test failure: test_ops_throttle (tasks.cephfs.test_strays.TestStrays)
- http://qa-proxy.ceph.com/teuthology/teuthology-2017-05-30_05:10:01-fs-kraken---basic-smithi/1243987/teuthology.log
...