Activity
From 07/14/2016 to 08/12/2016
08/12/2016
- 11:18 AM Bug #16640: libcephfs: Java bindings failing to load on CentOS
So, the PR had a passing test run:
https://github.com/ceph/ceph-qa-suite/pull/1084
http://pulpito.ceph.com/jspray...
- 01:43 AM Bug #16983: mds: handle_client_open failing on open
- It's already fixed by https://github.com/ceph/ceph/pull/8778
08/11/2016
- 05:04 PM Bug #16013: Failing file operations on kernel based cephfs mount point leaves unaccessible file b...
- User seeing an assertion failure in the MDS in v10.2.1:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-A...
- 05:03 PM Bug #16983: mds: handle_client_open failing on open
- Zheng, I think you fixed this in 4d15eb12298e007744486e28924a6f0ae071bd06 from PR #8778.
Here's the issue from cep...
08/10/2016
- 06:18 PM Bug #16983 (Resolved): mds: handle_client_open failing on open
- Randy Orr reported an assertion failure on the ceph-users list:...
- 11:33 AM Feature #16419: add statx-like interface to libcephfs
- Test run mostly passed last night, with only failures for unrelated problems -- the known problem with valgrind on ce...
08/09/2016
- 08:04 PM Feature #15069 (Fix Under Review): MDS: multifs: enable two filesystems to point to same pools if...
- https://github.com/ceph/ceph/pull/10636
- 08:04 PM Feature #15068 (Fix Under Review): fsck: multifs: enable repair tools to read from one filesystem...
- 08:04 PM Feature #15068: fsck: multifs: enable repair tools to read from one filesystem and write to another
- https://github.com/ceph/ceph/pull/10636
- 07:47 PM Feature #16419: add statx-like interface to libcephfs
- Found it. I had transposed the size and change_attr args in one call to update_inode_file_bits. fsx now seems to be O...
- 03:22 PM Feature #16419: add statx-like interface to libcephfs
- Mostly working now, but I'm seeing occasional problems with truncating files. I bisected the problem down to a one li...
- 06:14 PM Feature #16973 (Resolved): Log path as well as ino when detecting metadata damage
- Currently our cluster log messages look like this:...
- 04:21 PM Bug #16909 (Fix Under Review): Stopping an MDS rank does not stop standby-replays for that rank
- https://github.com/ceph/ceph/pull/10628
- 04:20 PM Bug #16919 (Fix Under Review): MDS: Standby replay daemons don't drop purged strays
- https://github.com/ceph/ceph/pull/10606
- 01:29 PM Bug #16925: multimds: cfuse (?) hang on fsx.sh workunit
- this can be caused either by a hung MDS request or by a hung read/write (MDS does not properly issue Frw caps t...
- 10:42 AM Bug #16954: Metadata damage reported with snapshots+smallcache+dirfrags ("object missing on disk")
- Hmm, no failures in that re-run, so it's not quite completely reproducible.
08/08/2016
- 02:45 PM Bug #16926: multimds: kclient fails to mount
- (pass "-k testing" when scheduling runs that will use a kclient, to ensure you're getting a nice recent cephfs kernel)
- 01:49 PM Bug #16914: multimds: pathologically slow deletions in some tests
- retest with fuse default permissions set differently, because it's doing too many getattrs at the moment
- 12:51 PM Support #16884: rename() doesn't work between directories
- Donatas: currently, renaming files in and out of trees with different quotas is going to give you EXDEV. You can wor...
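A minimal illustration of the behavior described above, assuming the two paths live in trees with different quotas (paths are hypothetical):
    # rename(2) across quota trees returns EXDEV; mv(1) falls back to
    # copy+unlink on EXDEV, which is also the manual workaround:
    cp /mnt/cephfs/quota_a/file /mnt/cephfs/quota_b/file && rm /mnt/cephfs/quota_a/file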
- 06:41 AM Support #16884: rename() doesn't work between directories
- guys, so what's the summary about this 'feature'?
- 10:48 AM Bug #16954: Metadata damage reported with snapshots+smallcache+dirfrags ("object missing on disk")
- ...
- 10:36 AM Bug #16954: Metadata damage reported with snapshots+smallcache+dirfrags ("object missing on disk")
- Given that it happened twice in one job, there seems a decent chance it's reproducible; let's see:
http://pulpito.ceph.com...
- 10:31 AM Bug #16954 (New): Metadata damage reported with snapshots+smallcache+dirfrags ("object missing on...
http://pulpito.ceph.com/jspray-2016-08-07_16:42:13-fs-wip-prompt-frag-distro-basic-mira/353833...
- 08:44 AM Bug #14681 (Resolved): Wrong ceph get mdsmap assertion
- 08:44 AM Bug #14319 (Resolved): Double decreased the count to trim caps which will cause failing to respon...
- 08:42 AM Bug #16154 (Resolved): mds: lock waiters are not finished in the same order that they were added
- 08:42 AM Bug #15920 (Resolved): mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
- 08:42 AM Bug #15723 (Resolved): client: fstat cap release
- 08:42 AM Bug #15689 (Resolved): Confusing MDS log message when shut down with stalled journaler reads
- 08:42 AM Feature #15615 (Resolved): CephFSVolumeClient: List authorized IDs by share
- 08:41 AM Feature #15406 (Resolved): Add versioning to CephFSVolumeClient interface
- 08:34 AM Bug #11482: kclient: intermittent log warnings "client.XXXX isn't responding to mclientcaps(revoke)"
- infernalis is EOL
- 08:33 AM Bug #15050 (Resolved): deleting striped file in cephfs doesn't free up file's space
- 08:32 AM Bug #14144 (Resolved): standy-replay MDS does not cleanup finished replay threads
- 08:28 AM Backport #15281 (Rejected): infernalis: standy-replay MDS does not cleanup finished replay threads
- 08:28 AM Backport #15057 (Rejected): infernalis: deleting striped file in cephfs doesn't free up file's space
- 08:28 AM Backport #14843 (Rejected): infernalis: test_object_deletion fails (tasks.cephfs.test_damage.Test...
- 08:28 AM Backport #14761 (Rejected): infernalis: ceph-fuse does not mount at boot on Debian Jessie
- 08:28 AM Backport #14690 (Rejected): infernalis: Client::_fsync() on a given file does not wait unsafe req...
- 08:28 AM Backport #13890 (Rejected): infernalis: Race in TestSessionMap.test_version_splitting
- 08:25 AM Backport #16299: jewel: mds: fix SnapRealm::have_past_parents_open()
- https://github.com/ceph/ceph/pull/9447
- 08:22 AM Backport #14668 (Resolved): hammer: Wrong ceph get mdsmap assertion
- 08:22 AM Backport #15056 (Resolved): hammer: deleting striped file in cephfs doesn't free up file's space
- 08:21 AM Backport #15512 (Resolved): hammer: Double decreased the count to trim caps which will cause fail...
- 08:21 AM Backport #15898 (Resolved): jewel: Confusing MDS log message when shut down with stalled journale...
- 08:21 AM Backport #16041 (Resolved): jewel: mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
- 08:21 AM Backport #16082 (Resolved): hammer: mds: wrongly treat symlink inode as normal file/dir when syml...
- 08:21 AM Backport #16135 (Resolved): jewel: MDS: fix getattr starve setattr
- 08:21 AM Backport #16136 (Resolved): jewel: MDSMonitor fixes
- 08:21 AM Backport #16152 (Resolved): jewel: fs: client: fstat cap release
- 08:20 AM Backport #16626 (Resolved): hammer: Failing file operations on kernel based cephfs mount point le...
- 08:19 AM Backport #16830 (Resolved): jewel: CephFSVolumeClient: List authorized IDs by share
- 08:19 AM Backport #16831 (Resolved): jewel: Add versioning to CephFSVolumeClient interface
08/05/2016
- 09:04 PM Backport #16946 (Resolved): jewel: client: nlink count is not maintained correctly
- https://github.com/ceph/ceph/pull/10877
- 02:57 PM Bug #16919 (In Progress): MDS: Standby replay daemons don't drop purged strays
- 02:57 PM Bug #16909 (In Progress): Stopping an MDS rank does not stop standby-replays for that rank
08/04/2016
- 08:04 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- so, I tried to run ceph outside of docker to run gdb on ceph-mon, but I don't know what I'm supposed to see.
$ gdb /usr...
- 07:26 PM Bug #16926 (Rejected): multimds: kclient fails to mount
- In many test cases, the kernel client fails to mount with EIO:
http://pulpito.ceph.com/pdonnell-2016-08-03_12:43:1...
- 06:58 PM Bug #16925 (Can't reproduce): multimds: cfuse (?) hang on fsx.sh workunit
- http://pulpito.ceph.com/pdonnell-2016-07-18_20:02:54-multimds-master---basic-mira/321794/...
- 05:23 PM Bug #16924: Crash replaying EExport
- git blame points at b7e698a52bf7838f8e37842074c510a6561f165b from Zheng.
> mds: no bloom filter for replica dir
>...
- 04:49 PM Bug #16924 (Resolved): Crash replaying EExport
- ...
- 01:55 PM Bug #16919: MDS: Standby replay daemons don't drop purged strays
- The standby does have all the information about which files are open (since they get journaled), right? Or do we only...
- 09:59 AM Bug #16919 (Resolved): MDS: Standby replay daemons don't drop purged strays
This is not fatal, because the inodes will ultimately end up at the top of the LRU list and get trimmed, but it's a...
- 12:24 PM Documentation #16906: doc: clarify path restriction instructions
- @John Spray
fixup:
https://github.com/ceph/ceph/pull/10573/commits/d1277f116cd297bae8da7b3e1a7000d3f99c6a51
- 10:41 AM Bug #16920 (New): mds.inodes* perf counters sound like the number of inodes but they aren't
These counters actually reflect the LRU, which is a collection of dentries, not inodes.
mds_mem.ino on the other...
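A quick way to compare the two over the admin socket (mds.a is a placeholder daemon name; a sketch, not from the ticket):
    # dentry-LRU-backed mds.inodes* counters vs. the actual in-memory inode count
    ceph daemon mds.a perf dump | grep -E '"inodes|"ino"'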
08/03/2016
- 09:08 PM Bug #16914 (Resolved): multimds: pathologically slow deletions in some tests
- http://qa-proxy.ceph.com/teuthology/pdonnell-2016-07-18_20:02:54-multimds-master---basic-mira/321823/teuthology.log
...
- 08:52 PM Bug #16886: multimds: kclient hang (?) in tests
- Another blogbench:
http://pulpito.ceph.com/pdonnell-2016-07-29_08:28:00-multimds-master---basic-mira/339886/
...
- 08:01 PM Feature #16419: add statx-like interface to libcephfs
- I have a prototype set for this, but I now think that the handling of the change_attr is wrong for directories. I'm g...
- 01:56 PM Bug #16909 (Resolved): Stopping an MDS rank does not stop standby-replays for that rank
Run vstart with MDS=2 and -s flag
Set max_mds to 2
See that you get two active daemons and two standby-replays
S...
- 12:57 PM Bug #16640: libcephfs: Java bindings failing to load on CentOS
- Yes, you really don't want to load the unversioned library at runtime. It's possible that you'll end up picking up a ...
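A hedged illustration of the layout in question (exact paths vary by distro):
    # the unversioned .so is normally just a dev-package symlink; the versioned
    # soname is what should be resolved at runtime
    ls -l /usr/lib64/libcephfs_jni.so*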
- 11:21 AM Bug #16881: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- They are racy, but this particular case is a bit odd:...
- 11:08 AM Bug #16668 (Pending Backport): client: nlink count is not maintained correctly
- 11:02 AM Documentation #16906: doc: clarify path restriction instructions
- So there's no bug here as such, it's just that the instructions don't explicitly tell you to write out your client ke...
- 08:08 AM Documentation #16906 (Resolved): doc: clarify path restriction instructions
- I do path restriction follow:http://docs.ceph.com/docs/master/cephfs/client-auth/...
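A hedged sketch of the flow those instructions cover (the client name, path, and pool are placeholders), including the key write-out step the comment above alludes to:
    ceph auth get-or-create client.restricted mon 'allow r' mds 'allow rw path=/restricted' osd 'allow rw pool=cephfs_data'
    # the step the docs leave implicit: save the key where the mounting client can read it
    ceph auth get client.restricted -o /etc/ceph/ceph.client.restricted.keyring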
- 10:45 AM Bug #16880 (Duplicate): saw valgrind issue <kind>Leak_StillReachable</kind> in /var/log/ceph/va...
- See:
http://tracker.ceph.com/issues/14794
http://tracker.ceph.com/issues/15356
(aka The Mystery Of the Valgrind ...
- 10:41 AM Bug #16876 (Duplicate): java.lang.UnsatisfiedLinkError: Can't load library: /usr/lib/jni/libcephf...
- This should be fixed in master now (there was a backed-out change for 16640 in teuthology, then finally the fix was h...
- 10:39 AM Bug #16879 (Resolved): scrub: inode wrongly marked free: 0x10000000002
08/02/2016
- 07:08 PM Bug #16186 (Duplicate): kclient: drops requests without poking system calls on reconnect
- 07:08 PM Bug #16186: kclient: drops requests without poking system calls on reconnect
- I'm going to go ahead and close this out, and pursue the follow-up work in tracker #15255.
- 03:33 PM Bug #16668 (Resolved): client: nlink count is not maintained correctly
- Ok, PR is now merged!
- 11:13 AM Bug #16879 (Fix Under Review): scrub: inode wrongly marked free: 0x10000000002
- https://github.com/ceph/ceph-qa-suite/pull/1107
08/01/2016
- 11:15 PM Feature #10627: teuthology: qa: enable Samba runs on RHEL
- Passing this to John to watch.
- 08:50 PM Support #16884: rename() doesn't work between directories
- Zheng, what are the limits and requirements of that quota root EXDEV?
I think it's probably required and can't cha...
- 08:15 PM Support #16884: rename() doesn't work between directories
- Debug output is:
todir->snapid:-2 todir->quota.is_enable:0 fromdir->snapid:-2 fromdir->quota->max_files:20000 return...
- 08:12 PM Support #16884: rename() doesn't work between directories
- What about removing this block altogether? Or is it really required?
- 08:04 PM Support #16884: rename() doesn't work between directories
- Looks like this part is failing: https://github.com/ceph/ceph/blob/0080b6bc92cefdd2115c904fd0c83ae83c9c2f01/src/clien...
- 08:00 PM Support #16884: rename() doesn't work between directories
- More details, please. Cross-directory rename definitely works in general. What's the output of "mount"? What versions...
- 07:03 PM Support #16884 (Closed): rename() doesn't work between directories
- Hi folks!
looks like rename() just doesn't work between directories. Here is the snippet the FTP daemon runs:
#incl...
- 08:39 PM Bug #16886: multimds: kclient hang (?) in tests
- Updated title/description.
- 07:54 PM Bug #16886: multimds: kclient hang (?) in tests
- Well I feel silly. This is actually more general but wasn't obvious by how I had organized the failures. I'm going to...
- 07:32 PM Bug #16886 (Can't reproduce): multimds: kclient hang (?) in tests
- There are strange pauses which are showing up in several tests for the kclient:
* http://qa-proxy.ceph.com/teuthol...
- 06:57 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- root@ceph1:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
ceph 1 0 0 18:51 ? 00:...
- 05:42 AM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- I must admit that I have trouble putting it in place. I do not know enough how to use gdb, and as my ceph-mon is in a...
- 04:42 PM Bug #16879: scrub: inode wrongly marked free: 0x10000000002
- This test also fails with the master branch (as of earlier this morning):
http://pulpito.ceph.com/jlayton-2016...
- 03:08 PM Bug #16879: scrub: inode wrongly marked free: 0x10000000002
- Rebased onto current master branch, and still seeing the error. Rerunning the test now on a branch without any of my ...
- 01:00 PM Bug #16879: scrub: inode wrongly marked free: 0x10000000002
- Reran the test and it failed again: (btw: thanks Nathan for the pointer to how to filter out failures and rerun only ...
- 12:34 PM Bug #16879: scrub: inode wrongly marked free: 0x10000000002
- Ahh thanks, Nathan. Ok, this is a recently-added test and my local ceph-qa-suite was missing it. A git pull fixed tha...
- 11:36 AM Bug #16879: scrub: inode wrongly marked free: 0x10000000002
- I found it by looking at the "task" function in "tasks/cephfs_test_runner.py" - it says: ...
- 11:35 AM Bug #16879: scrub: inode wrongly marked free: 0x10000000002
- Hi Jeff, this:
https://github.com/ceph/ceph-qa-suite/blob/master/tasks/cephfs/test_forward_scrub.py
- 10:51 AM Bug #16879: scrub: inode wrongly marked free: 0x10000000002
- Message comes from CInode::validate_disk_state, but I haven't yet been able to figure out where the test itself comes...
- 10:23 AM Bug #16879 (Resolved): scrub: inode wrongly marked free: 0x10000000002
- I ran the "fs" testsuite on a branch that has a pile of small, userland client-side patches. One of the tests (tasks/...
- 01:34 PM Bug #16807: Crash in handle_slave_rename_prep
- 12:56 PM Bug #16876: java.lang.UnsatisfiedLinkError: Can't load library: /usr/lib/jni/libcephfs_jni.so
- Of course, it may be that I reached the box too late and the filesystem had been changed. I'm not sure how to tell. E...
- 10:57 AM Bug #16832 (Resolved): libcephfs failure at shutdown (Attempt to free invalid pointer)
- I haven't seen this in the latest test runs, so I'm going to go ahead and close this out under the assumption that is...
- 10:40 AM Bug #16881: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- Comments in test_strays.py seem to indicate that this test is racy anyway:...
- 10:39 AM Bug #16881 (Resolved): RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- During a test_files_throttle test run, I hit the following error:...
- 10:34 AM Bug #16880: saw valgrind issue <kind>Leak_StillReachable</kind> in /var/log/ceph/valgrind/clien...
- client.0.log is here:
http://qa-proxy.ceph.com/teuthology/jlayton-2016-07-29_18:51:42-fs-wip-jlayton-nlink---b...
- 10:27 AM Bug #16880 (Duplicate): saw valgrind issue <kind>Leak_StillReachable</kind> in /var/log/ceph/va...
- One of my "fs" test runs over the weekend failed with this:...
- 02:36 AM Bug #16768: multimds: check_rstat assertion failure
- Here's another instance of the assertion failure on a more recent master branch:
http://qa-proxy.ceph.com/teutholo...
07/31/2016
- 09:12 PM Bug #16876 (Duplicate): java.lang.UnsatisfiedLinkError: Can't load library: /usr/lib/jni/libcephf...
- I had a failed fs testsuite run, and a couple of the jobs failed with what looks like the error below:...
07/29/2016
- 10:18 PM Documentation #16743 (Resolved): client: config settings missing in documentation
- 12:18 PM Backport #16797 (In Progress): jewel: MDS Deadlock on shutdown active rank while busy with metada...
- 12:13 PM Bug #16842: mds: replacement MDS crashes on InoTable release
- Min Chen: can you describe the client part of how to reproduce this? What does the client have to be doing to reprod...
- 12:08 PM Backport #16621 (In Progress): jewel: mds: `session evict` tell command blocks forever with async...
- 11:48 AM Backport #16620 (In Progress): jewel: Fix shutting down mds timed-out due to deadlock
- 11:44 AM Backport #16299 (In Progress): jewel: mds: fix SnapRealm::have_past_parents_open()
- 11:00 AM Cleanup #15923 (Resolved): MDS: remove TMAP2OMAP check and move Objecter into MDSRank
- 10:58 AM Cleanup #16195 (Resolved): mds: Don't spam log with standby_replay_restart messages
- 10:44 AM Bug #16857 (Duplicate): Crash in Client::_invalidate_kernel_dcache
- ...
- 03:18 AM Bug #16556: LibCephFS.InterProcessLocking failing on master and jewel
- It's not only Ceph's locks, mutexes, etc. that we need to be aware of or concerned with. I have seen multiple occurre...
- 03:08 AM Bug #16556: LibCephFS.InterProcessLocking failing on master and jewel
- 03:05 AM Bug #16556: LibCephFS.InterProcessLocking failing on master and jewel
- John,
I marked PR#10472 as a fix of this issue, and it does fix it, but I would like to keep this issue open, because:
by ...
07/28/2016
- 06:33 PM Bug #16842: mds: replacement MDS crashes on InoTable release
- This looks more complicated than that to reproduce. The code that's crashing is timing out a client connection that d...
- 06:25 AM Bug #16842 (Can't reproduce): mds: replacement MDS crashes on InoTable release
- ceph version 10.2.0-2638-gf7fc985
reproduce step:
1. new fs and start mds.a
2. start mds.b
3. kill mds.a
fai...
- 01:58 PM Bug #16844 (Duplicate): hammer: libcephfs-java/test.sh fails
- 11:09 AM Bug #16844 (Duplicate): hammer: libcephfs-java/test.sh fails
- Failing consistently on hammer-backports branch:
http://pulpito.ceph.com/smithfarm-2016-07-25_05:09:12-fs-hammer-ba...
- 11:20 AM Bug #16556 (Fix Under Review): LibCephFS.InterProcessLocking failing on master and jewel
- https://github.com/ceph/ceph/pull/10472
- 08:40 AM Bug #16556: LibCephFS.InterProcessLocking failing on master and jewel
- ...
07/27/2016
- 09:07 PM Bug #16829: ceph-mds crashing constantly
- 1) Possibly, but we're more likely to just lock out pools which have data in them.
2) You mean you intermingled RB...
- 08:50 PM Bug #16829: ceph-mds crashing constantly
- So, two things:
1) can ceph-mds be made more resilient when finding data from non-existing filesystems?
2) I ca...
- 06:31 PM Bug #16829 (Closed): ceph-mds crashing constantly
- It looks like you did "fs rm" and "fs new" but kept the same metadata pool in RADOS. That doesn't work; you can resol...
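The truncated advice above presumably amounts to starting over with a fresh metadata pool; a hedged, destructive sketch (pool names are placeholders; only for a filesystem you are abandoning):
    ceph fs rm cephfs --yes-i-really-mean-it
    ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
    ceph osd pool create cephfs_metadata 64
    ceph fs new cephfs cephfs_metadata cephfs_data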
- 12:50 PM Bug #16829 (Closed): ceph-mds crashing constantly
- I'm using CEPH packages from Fedora 24: ceph-mds-10.2.2-2.fc24.x86_64
I've created simple cephfs once, stored some...
- 04:41 PM Bug #16832: libcephfs failure at shutdown (Attempt to free invalid pointer)
- @Jeff: Just in case you don't know it yet, here is a trick for rescheduling failed and dead jobs from a previous run ...
- 04:17 PM Bug #16832: libcephfs failure at shutdown (Attempt to free invalid pointer)
- The other two test failures -- one was a segfault in ceph_test_libcephfs:
(gdb) bt
#0 0x00007f771c5aaa63 in lock...
- 02:58 PM Bug #16832: libcephfs failure at shutdown (Attempt to free invalid pointer)
- > 1) some binary segfaulted, but I don't seem to be able to track down the core to see what actually failed:
<dgallo...
- 02:47 PM Bug #16832 (Resolved): libcephfs failure at shutdown (Attempt to free invalid pointer)
- My fs test run had 3 failures:
1) some binary segfaulted, but I don't seem to be able to track down the core to se...
- 02:33 PM Feature #15406 (Pending Backport): Add versioning to CephFSVolumeClient interface
- 02:32 PM Feature #15615 (Pending Backport): CephFSVolumeClient: List authorized IDs by share
- 01:54 PM Backport #16831 (In Progress): jewel: Add versioning to CephFSVolumeClient interface
- 01:53 PM Backport #16831 (Resolved): jewel: Add versioning to CephFSVolumeClient interface
- https://github.com/ceph/ceph/pull/10453
https://github.com/ceph/ceph-qa-suite/pull/1100
- 01:23 PM Backport #16830 (In Progress): jewel: CephFSVolumeClient: List authorized IDs by share
- 01:03 PM Backport #16830 (Resolved): jewel: CephFSVolumeClient: List authorized IDs by share
- https://github.com/ceph/ceph/pull/10453
https://github.com/ceph/ceph-qa-suite/pull/1100
- 11:58 AM Cleanup #16035 (Resolved): Remove "cephfs" CLI
- 11:01 AM Cleanup #16035: Remove "cephfs" CLI
- Mop-up *master PR*: https://github.com/ceph/ceph/pull/10444
- 11:00 AM Cleanup #16035 (Fix Under Review): Remove "cephfs" CLI
- 11:22 AM Bug #16556: LibCephFS.InterProcessLocking failing on master and jewel
- Looking into what's happening in the case of running LibCephFS.InterProcessLocking on its own, I see that the forked ...
- 05:52 AM Bug #16556: LibCephFS.InterProcessLocking failing on master and jewel
- Tested with the latest master; it still fails.
@greg, do we have a fix for this issue now?
- 10:59 AM Cleanup #16808 (Resolved): Merge "ceph-fs-common" into "ceph-common"
- 06:27 AM Cleanup #16808: Merge "ceph-fs-common" into "ceph-common"
- https://github.com/ceph/ceph-qa-suite/pull/1098 is still open
- 05:32 AM Cleanup #16808 (Resolved): Merge "ceph-fs-common" into "ceph-common"
07/26/2016
- 08:18 PM Bug #16754: mounting cephfs root and sub-directory on the same node makes the sub-directory inacc...
- Given the current state of CephFS kernel support I wouldn't expect so. The upstream community still asks users to run...
- 05:06 PM Bug #16754: mounting cephfs root and sub-directory on the same node makes the sub-directory inacc...
- @Greg does that mean there is a chance that 3.16 kernel may never have the fix back-ported ?
- 04:09 PM Bug #16754: mounting cephfs root and sub-directory on the same node makes the sub-directory inacc...
- Hmm, we aren't very good about tagging CephFS patches for stable trees. Distributors might have an easier time mainta...
- 03:37 PM Bug #16754: mounting cephfs root and sub-directory on the same node makes the sub-directory inacc...
- @Rohith new Ceph patches go onto the latest upstream linux kernel, and then it's up to linux distributions which thin...
- 03:18 PM Bug #16754: mounting cephfs root and sub-directory on the same node makes the sub-directory inacc...
- @John: For Ubuntu 14.04.02, the 3.16 kernel is the latest, and I am using the most recent version of 3.16. In that ca...
- 02:15 PM Bug #16754 (Can't reproduce): mounting cephfs root and sub-directory on the same node makes the s...
- Closing because this is happening on an old kernel. Please re-open if you can reproduce the issue on a recent 4.x ke...
- 02:15 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- Hmm, all we can tell from the backtrace is that a data structure got corrupted at some stage. The function where you...
- 02:02 PM Bug #16768: multimds: check_rstat assertion failure
- Zheng, I think we already have "debug mds = 20", right? From the config for this run: http://pulpito.ceph.com/pdonnel...
- 01:55 PM Bug #16768: multimds: check_rstat assertion failure
- please add a line "debug mds = 10" to ceph.conf
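Equivalently, the setting can likely be changed on a live daemon over the admin socket (mds.a is a placeholder name):
    ceph daemon mds.a config set debug_mds 20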
- 01:50 PM Bug #16739: Client::setxattr always sends setxattr request to MDS
- Zheng says the client also doesn't release the caps voluntarily, which makes this extra bad.
(Maybe that should be i... - 09:28 AM Bug #10944: Deadlock, MDS logs "slow request", getattr pAsLsXsFs failed to rdlock
- Oliver: on the mailing list it seemed like this was probably not a cephfs issue (there were very busy cache tiers).
- 02:10 AM Bug #10944: Deadlock, MDS logs "slow request", getattr pAsLsXsFs failed to rdlock
- Hi,
client kernel: 4.5.5
MDS Server kernel: 4.5.5
Only ONE client is accessing.
Only a specific directory wi... - 02:26 AM Bug #16764 (Resolved): ceph-fuse crash on force unmount with file open
07/25/2016
- 08:38 PM Cleanup #16808: Merge "ceph-fs-common" into "ceph-common"
- Oh, and for 16035 we definitely shouldn't backport, we should only remove things in new versions.
- 08:37 PM Cleanup #16808: Merge "ceph-fs-common" into "ceph-common"
- I don't feel strongly. My intuition would be that we should avoid changes in jewel that might confuse anyone, unless...
- 08:29 PM Cleanup #16808: Merge "ceph-fs-common" into "ceph-common"
- @John: Would it make sense to backport #16035 and this one to jewel?
- 08:13 PM Cleanup #16808 (Fix Under Review): Merge "ceph-fs-common" into "ceph-common"
- *master PRs*:
https://github.com/ceph/ceph/pull/10433 (ceph)
https://github.com/ceph/ceph-qa-suite/pull/1098 (cep...
- 02:52 PM Cleanup #16808 (Resolved): Merge "ceph-fs-common" into "ceph-common"
- After the merge of https://github.com/ceph/ceph/pull/10243 the ceph-fs-common package (which exists only in Debian) c...
- 07:11 PM Documentation #16743: client: config settings missing in documentation
- 07:10 PM Documentation #16743: client: config settings missing in documentation
- PR: https://github.com/ceph/ceph/pull/10434
- 02:48 PM Cleanup #16035 (Pending Backport): Remove "cephfs" CLI
- 02:16 PM Bug #16768: multimds: check_rstat assertion failure
- I've opened a separate ticket for the segfault, seems likely to be it's own issue (http://tracker.ceph.com/issues/16807)
- 02:13 PM Bug #16768: multimds: check_rstat assertion failure
- Zheng, which setting is that and how do I enable it? Sorry...
- 02:59 AM Bug #16768: multimds: check_rstat assertion failure
- please enable mds_debug
- 02:15 PM Bug #16807: Crash in handle_slave_rename_prep
- Added an assertion to see if this is happening when a rename points to a null dentry
https://github.com/ceph/ceph/pu... - 02:12 PM Bug #16807 (Resolved): Crash in handle_slave_rename_prep
- Opening from Patrick's http://tracker.ceph.com/issues/16768...
- 12:44 PM Bug #16592: Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(info.state == MDSMa...
- It is interesting that we're hitting this in maybe_promote_standby and *not* in sanity(). Sanity gets called after p...
- 10:13 AM Bug #16592: Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(info.state == MDSMa...
- Denis: you are seeing http://tracker.ceph.com/issues/15705, unrelated to this ticket.
- 12:41 PM Bug #16774 (Rejected): file: ceph.file.layout.pool_namespace: No such attribute
- Yes, pretty sure that version is simply too old.
- 07:07 AM Bug #16737 (Resolved): Mounting ceph fs on client leads to kernel crash
- 06:09 AM Bug #16737: Mounting ceph fs on client leads to kernel crash
- yes. the issue is not seen with newer kernels.
- 02:29 AM Bug #16737: Mounting ceph fs on client leads to kernel crash
- This seems like a duplicate of http://tracker.ceph.com/issues/15302. Updating your kernel should resolve this issue.
07/24/2016
- 09:04 PM Bug #16768: multimds: check_rstat assertion failure
- Segmentation fault in another test which may be related:...
- 09:03 PM Bug #16768: multimds: check_rstat assertion failure
- Another of the same:...
- 04:44 PM Cleanup #16035 (Fix Under Review): Remove "cephfs" CLI
- 04:44 PM Bug #16255 (Fix Under Review): ceph-create-keys: sometimes blocks forever if mds "allow" is set
- https://github.com/ceph/ceph/pull/10415
- 04:24 PM Cleanup #16195 (Fix Under Review): mds: Don't spam log with standby_replay_restart messages
- https://github.com/ceph/ceph/pull/10243/commits
- 04:23 PM Feature #16570 (Resolved): MDS health warning for failure to enforce cache size limit
- 04:22 PM Bug #16764 (Fix Under Review): ceph-fuse crash on force unmount with file open
- https://github.com/ceph/ceph/pull/10419
07/23/2016
- 09:59 PM Bug #16691: sepia LRC lost directories
- The offending dentries that point to damaged dirfrags have been removed (by removing the omap keys). The objects the...
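A hedged illustration of the repair primitive involved (pool, object, and key names are placeholders; dirfrag objects store one omap key per dentry, conventionally '<name>_head'):
    rados -p cephfs_metadata listomapkeys 10000000000.00000000
    rados -p cephfs_metadata rmomapkey 10000000000.00000000 lostdir_head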
07/22/2016
- 11:22 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- don't know if it's useful, but when I launch the command in debug mode 10 I have this log:
2016-07-22 23:18:43.43...
- 07:11 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- output of ceph -s
2016-07-22 19:08:42.844997 6c700470 0 -- :/1260326585 >> 192.168.100.151:6789/0 pipe(0x6c405b30 s...
- 07:03 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- sorry: /usr/bin/ceph-mds ceph -d -i mds-ceph1 --setuser ceph --setgroup ceph
- 06:56 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- the ceph mon crashes when I launch:
/usr/bin/ceph-mds --cluster ceph -d -i mds-ceph1 --setuser ceph --s
It's the first md...
- 01:09 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- Also, has this ever worked before for you? I don't know that we've ever done any cephfs testing at all on ARM builds.
- 01:08 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- Please provide more information.
What series of commands did you run that caused the crash?
Were there already ...
- 08:37 PM Backport #16797 (Resolved): jewel: MDS Deadlock on shutdown active rank while busy with metadata IO
- https://github.com/ceph/ceph/pull/10502
- 06:30 PM Feature #16775: MDS command for listing open files
- See also #15507.
- 11:18 AM Feature #16775 (Resolved): MDS command for listing open files
- 03:25 PM Bug #16255: ceph-create-keys: sometimes blocks forever if mds "allow" is set
- We've encountered the same issue when upgrading. The "on systemd-using systems will probably not be seeing this" is ...
- 02:22 PM Bug #16691: sepia LRC lost directories
- (Mainly for my reference) etherpad from repairing is here http://etherpad.corp.redhat.com/efev9SA7rn
- 08:19 AM Bug #16774: file: ceph.file.layout.pool_namespace: No such attribute
- Zheng Yan wrote:
> It seems client does not support pool namespace. which client are you using ? (kernel or ceph-fus...
- 07:34 AM Bug #16774: file: ceph.file.layout.pool_namespace: No such attribute
- It seems client does not support pool namespace. which client are you using ? (kernel or ceph-fuse, which version)
- 07:30 AM Bug #16774 (Rejected): file: ceph.file.layout.pool_namespace: No such attribute
- Hi!
when I test the ci: kcephfs/cephfs/{conf.yaml clusters/fixed-3-cephfs.yaml fs/btrfs.yaml inline/no.yaml tasks/kc...
- 06:10 AM Bug #16754: mounting cephfs root and sub-directory on the same node makes the sub-directory inacc...
- 3.16.0-76
- 02:35 AM Bug #16754: mounting cephfs root and sub-directory on the same node makes the sub-directory inacc...
- I can't reproduce this on a 4.5 kernel. Which version of the kernel are you using?
07/21/2016
- 09:11 PM Bug #16771 (New): mon crash in MDSMonitor::prepare_beacon on ARM
- ceph 10.2.2
ubuntu 16.10
in Docker version 1.11.1, build 5604cbe
on arch armhf (raspberry pi running hypriot)
...
- 08:11 PM Bug #16768 (Resolved): multimds: check_rstat assertion failure
- ...
- 04:31 PM Bug #16042 (Pending Backport): MDS Deadlock on shutdown active rank while busy with metadata IO
- 02:42 PM Bug #16668: client: nlink count is not maintained correctly
- Pull request with the fix is up here:
https://github.com/ceph/ceph/pull/10386
- 01:15 PM Bug #16764 (Resolved): ceph-fuse crash on force unmount with file open
Reproducing this in a vstart environment:
1. Mount a client
2. in python, do "f = open('mnt/foo.bin', 'w')"
3....
- 11:08 AM Bug #16397: nfsd selinux denials causing knfs tests to fail
- The other thing of note is the logs seem to indicate that these hosts are running pretty bleeding-edge kernels -- 4.7...
- 11:06 AM Bug #16397: nfsd selinux denials causing knfs tests to fail
- As Scott Mayhew pointed out, the version of nfs-utils that ships in RHEL7.2 uses fopen to open the channel file, and ...
- 12:11 AM Bug #4212: mds: open_snap_parents isn't called all the times it needs to be
- I had a misunderstanding about what data a SnapRealm/sr_t has directly.
So, yes, right now we need *all* past_pare...
07/20/2016
- 08:19 PM Feature #16757 (New): enable MDS replacement migration
- Right now, without multi-mds the only way we have to switch MDSes is to do a failover from the current active to some...
- 02:02 PM Feature #16468: kclient: Exclude ceph.* xattr namespace in listxattr
- I gave a shot at fixing this today (kclient only) as per the email thread.
listxattr() does not return internal xa...
- 02:00 PM Bug #16668: client: nlink count is not maintained correctly
- Ok, I have a couple of small patches that fix the testcase. One is a client-side patch to fix the ctime handling in f...
- 01:37 PM Support #16526: cephfs client side quotas - nfs-ganesha
- Oh, we just recently flipped the bit so quotas are enforced by default. This should work if you set "client quota = t...
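Presumably (the excerpt is truncated) that means a [client] section entry in ceph.conf on the Ganesha host, e.g.:
    # in ceph.conf on the host running the libcephfs client:
    [client]
    client quota = true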
- 09:21 AM Support #16526: cephfs client side quotas - nfs-ganesha
- For this test I was using the below versions:
ceph version 10.2.1 (3a66dd4f30852819c1bdaa8ec23c795d4ad77269)
nfs-...
- 12:15 PM Bug #16754 (Can't reproduce): mounting cephfs root and sub-directory on the same node makes the s...
- Steps to reproduce:
*********************************************************************
ems@host1: sudo mount -t ...
- 08:06 AM Bug #16737: Mounting ceph fs on client leads to kernel crash
- attaching full kernel log
- 06:15 AM Bug #3718: multi-client dbench gets stuck over NFS exported cephfs
- Probably not a bug any more?
- 06:14 AM Bug #1787 (Closed): mds: laggy oneshot replays pollute mdsmap
- One shot replay got zapped.
- 06:08 AM Bug #5864 (Closed): cfuse_workunit_suites_ffsb suite on Centos hangs with *** Got Signal Interrup...
- 06:08 AM Bug #4732 (Closed): uclient: client/Inode.cc: 126: FAILED assert(cap_refs[c] > 0)
- Things have changed.
- 05:59 AM Bug #4738 (Closed): libceph: unlink vs. readdir (and other dir orders)
- We have file locking and redid the listing code.
- 05:57 AM Bug #8432 (Closed): ceph-fuse process not dying
- These are definitely out of date, whatever the bug was.
- 05:52 AM Bug #9276: Client::get_file_extent_osds asserts in object_locator_to_pg if osd map is out of date
- This might be fixed now?
- 05:46 AM Bug #3845 (Closed): mds: standby_for_rank not getting cleared on takeover
- A bunch of this got rejiggered in John's multi-fs and follow-on work; it's probably gone.
- 05:42 AM Bug #9884 (Closed): too many files in /usr for multiple_rsync.sh
- Pretty sure we reduced the size and this isn't a problem any more.
- 05:41 AM Bug #10061: uclient: MDS: output cap data in messages
- This should also be exposed via the admin socket.
- 05:35 AM Bug #10542: ceph-fuse cap trimming fails with: mount: only root can use "--options" option
- I think this got resolved into one of the many fuse cache invalidate PRs, but I'm not sure.
- 05:28 AM Cleanup #11 (Resolved): mds: replace ALLOW_MESSAGES_FROM macro
- This got fixed up in the security stuff last summer.
- 12:29 AM Feature #16745 (Pending Backport): mon: prevent allocating snapids allocated for CephFS
- The MDS allocates its own snapids. In general, the monitor allocates self-managed snapids for librados users.
We n...
07/19/2016
- 11:29 PM Bug #11789 (Can't reproduce): knfs mount fails with "getfh failed: Function not implemented"
- 11:28 PM Bug #12209 (Won't Fix): CephFS should have a complete timeout mechanism to avoid endless waiting ...
- There's been no movement here and we didn't seem to like the idea.
- 11:26 PM Bug #13689 (Won't Fix): ceph-mds not build with libjemalloc
- We're switching to cmake so hopefully this is fixed.
- 11:23 PM Support #15268 (Resolved): CephFS mount blocks VM
- 11:22 PM Bug #15783: client: enable acls by default
- Zheng?
- 11:20 PM Documentation #3113 (Rejected): Ceph FS Options Could Use Some Additional Information
- The cephfs tool got zapped.
- 11:19 PM Fix #4286 (Rejected): SLES 11 - cfuse: disable 'big_writes'and 'atomic_o_trunc
- I think/hope we can ditch this now. There have been several SLES11 service packs and SLES12 is out now.
- 11:12 PM Bug #16322 (Need More Info): ceph mds getting killed for no reason
- 11:08 PM Bug #15502 (Resolved): files read or written with cephfs (fuse or kernel) on client drop all thei...
- I think this is all cleaned up now.
- 10:54 PM Bug #4212: mds: open_snap_parents isn't called all the times it needs to be
- See the email thread at http://www.spinics.net/lists/ceph-devel/msg12818.html
Unfortunately it doesn't include any...
- 08:39 PM Documentation #16743 (Resolved): client: config settings missing in documentation
- These include at least:
* client_cache_mid
* client_oc_size
* client_oc_max_objects
* client_oc_max_dirty
* cl...
- 08:05 PM Bug #16668: client: nlink count is not maintained correctly
- I think the actual bug here is that, as you note, ll_lookup calls fill_stat without checking that it has As (and what...
- 05:01 PM Bug #16668: client: nlink count is not maintained correctly
- Actually we could probably just always return the updated inode attrs on unlink. There's always the possibility that ...
- 04:46 PM Bug #16668: client: nlink count is not maintained correctly
- Ok, I think I sort of get it now. Here's my reproducer:...
- 04:13 PM Bug #16668: client: nlink count is not maintained correctly
- Successful test -- the lookup after the unlink calls into _do_lookup:...
- 03:41 PM Bug #16668: client: nlink count is not maintained correctly
- Tracked down the problem with the ctime and it appears to be a fairly simple bug in fill_stat(). It was only looking ...
- 01:56 PM Bug #16737: Mounting ceph fs on client leads to kernel crash
- the screenshot does not contain the full backtrace. Please set up netconsole to capture the full kernel message.
- 10:25 AM Bug #16737 (Resolved): Mounting ceph fs on client leads to kernel crash
- Mounting the cephfs on client side with IO running leads to the client crashing sometimes.
Client version:-
uname...
- 01:50 PM Bug #16739 (Resolved): Client::setxattr always sends setxattr request to MDS
- If the client has CEPH_CAP_AUTH_EXCL, it can update the xattr locally and mark CEPH_CAP_AUTH_EXCL dirty
- 01:43 PM Support #16738 (Closed): mount.ceph: unknown mount options: rbytes and norbytes
- Ceph: @v10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)@
Linux Kernel: @4.6.3-300.fc24.x86_64@
Hello,
When t...
- 01:32 PM Bug #16610: Jewel: segfault in ObjectCacher::FlusherThread
- https://github.com/ceph/ceph/pull/10304
- 01:32 PM Bug #16610 (Fix Under Review): Jewel: segfault in ObjectCacher::FlusherThread
- 05:07 AM Bug #16610: Jewel: segfault in ObjectCacher::FlusherThread
- Just to keep the full history in this issue, we have understood that the segfault only appears in VM with AMD62xx pro...
- 01:09 PM Cleanup #15923 (Fix Under Review): MDS: remove TMAP2OMAP check and move Objecter into MDSRank
- https://github.com/ceph/ceph/pull/10243
- 01:09 PM Cleanup #16035: Remove "cephfs" CLI
- https://github.com/ceph/ceph/pull/10243
- 11:33 AM Bug #16397: nfsd selinux denials causing knfs tests to fail
- I'm trying to get some clarification of what the application was doing when it got these AVC denials. In the meantime...
- 11:07 AM Bug #16397: nfsd selinux denials causing knfs tests to fail
- Just some notes. It looks like the machine has already been torn down and rebuilt, but the new machine is using the s...
- 05:08 AM Bug #16709: No output for "ceph mds rmfailed 0 --yes-i-really-mean-it" command
- Yes, that would be ideal. As of now we cannot be sure whether it has actually been removed or not.
- 04:46 AM Bug #16730 (Won't Fix): mds'dump display incomplete
- This is deliberate. "mds dump" dumps a specific filesystem (it defaults to the first one, but a client which is set u...
- 02:34 AM Bug #16730: mds'dump display incomplete
- mds dump" and "fs dump" are repeated,and "mds dump" display incomplete.
so delete "mds dump" I think is the best cho... - 01:32 AM Bug #16730 (Won't Fix): mds'dump display incomplete
- create "cephfs&&leadorfs2" fs when run "create fs flag set enable_multiple"...
07/18/2016
- 10:29 PM Bug #16397 (New): nfsd selinux denials causing knfs tests to fail
- Oh dear, this is happening again:
http://pulpito.ceph.com/teuthology-2016-07-13_02:25:02-knfs-jewel-testing-basic-...
- 08:20 PM Cleanup #16035: Remove "cephfs" CLI
- For additional info, quoting Sage from an internal RH bug (sorry this is restricted, not sure why. https://bugzilla.r...
- 03:18 PM Cleanup #16035: Remove "cephfs" CLI
- (Agreed with merging "ceph-fs-common" into "ceph-common". I've never found an explanation for why that was its own pa...
- 02:25 PM Cleanup #16035: Remove "cephfs" CLI
- After the cephfs tool is dropped, mount.ceph will be the only thing remaining in the (deb-only) "ceph-fs-common" pack...
- 08:19 PM Bug #16691: sepia LRC lost directories
- Well, I checked the code again and the tmap2omap path looks appropriately durable.
I did notice one thing that hel... - 01:53 PM Bug #16691: sepia LRC lost directories
- Plan is for greg to look into the TMAP2OMAP OSD code to look for what might have caused that.
Afterwards John+Doug ...
- 07:39 PM Bug #16709: No output for "ceph mds rmfailed 0 --yes-i-really-mean-it" command
- Without looking at the code, I would imagine that you're seeing EINVAL for rank 1 because there is no such rank (so i...
- 12:28 PM Bug #16709 (Resolved): No output for "ceph mds rmfailed 0 --yes-i-really-mean-it" command
- there is no output for the command ceph mds rmfailed 0 --yes-i-really-mean-it. The command is successful how many eve...
- 04:43 PM Bug #16592: Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(info.state == MDSMa...
- Dzianis reported that he upgraded to 10.2.2 without ever upgrading to 10.2.0 (and downgrading after, if that's even p...
- 04:42 PM Bug #16592: Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(info.state == MDSMa...
- I make (maybe wrong, but no way back) one-shot upgrade: stop all client, stop all ceph daemons (mds,osd,mon) and run ...
07/16/2016
- 05:08 PM Bug #16592: Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(info.state == MDSMa...
- PR for added assertions: https://github.com/ceph/ceph/pull/10316
07/15/2016
- 07:18 PM Bug #16668: client: nlink count is not maintained correctly
- I set up a ganesha + ceph test rig today and was able to reproduce the problem. Interestingly, it does not reproduce ...
- 04:24 PM Bug #16592: Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(info.state == MDSMa...
- So, rambling brain dump of my current thoughts on this:
I haven't been able to reproduce this problem. There are t...
- 03:19 PM Backport #16697 (Fix Under Review): jewel: ceph-fuse is not linked to libtcmalloc
- PR for jewel is https://github.com/ceph/ceph/pull/10303
- 09:36 AM Backport #16697 (Resolved): jewel: ceph-fuse is not linked to libtcmalloc
- https://github.com/ceph/ceph/pull/10303
- 03:28 AM Bug #16655: ceph-fuse is not linked to libtcmalloc
- https://github.com/ceph/ceph/pull/10303
- 02:25 AM Bug #16691: sepia LRC lost directories
- what do you mean they are old? what does 'rados stat xxxx' show?
07/14/2016
- 11:33 PM Documentation #16664 (Resolved): Standby Replay configuration doc is wrong
- 04:12 PM Documentation #16664: Standby Replay configuration doc is wrong
- Backport: https://github.com/ceph/ceph/pull/10298
I can't mark this issue as Resolved for some reason.
- 11:18 PM Bug #16655: ceph-fuse is not linked to libtcmalloc
- tcmalloc is also missing from @ldd /usr/bin/ceph-fuse@ in ceph-fuse-0.94.7-0.el7, FYI, so this has gone on for quite ...
- 01:39 PM Bug #16655 (Pending Backport): ceph-fuse is not linked to libtcmalloc
- 09:48 PM Bug #16691 (Resolved): sepia LRC lost directories
- If you log in to the sepia long-running cluster, it has 37 directories whose objects it lost.
I spot-checked one o...
- 05:39 PM Bug #16640 (New): libcephfs: Java bindings failing to load on CentOS
- Let's leave this open to work out if there is a change to the build we can make to avoid the java bindings requiring ...
- 04:01 PM Bug #16640: libcephfs: Java bindings failing to load on CentOS
- Noah, John, I'm guessing the Java bindings ought to link to the versioned libcephfs_jni.so.1.0.0 instead of the unver...
- 05:38 PM Feature #4139 (Resolved): MDS: forward scrub: add scrub_stamp infrastructure and a function to sc...
- I think Greg meant to mark this Resolved.
- 01:11 AM Feature #4139: MDS: forward scrub: add scrub_stamp infrastructure and a function to scrub a singl...
- This bit has been done forever: we have admin socket interfaces to scrub a dentry or recursive folder.
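If memory serves, those commands look roughly like this (the daemon name, path, and exact flag syntax are best-effort recollections):
    ceph daemon mds.a scrub_path /some/dir recursive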
- 01:44 PM Bug #16610: Jewel: segfault in ObjectCacher::FlusherThread
- looks like ObjectCacher::bh_write_adjacencies() passed an empty list to ObjectCacher::bh_write_scattered(). Maybe the...
- 01:25 PM Bug #16668: client: nlink count is not maintained correctly
- It also occurred to me yesterday that I was using the path-based calls, whereas ganesha would likely be using the ll ...
- 10:39 AM Bug #8255 (Resolved): mds: directory with missing object cannot be removed
- This kind of issue should be handled cleanly (MDS will raise 'damaged' health alert, specifics in "damage ls") as of ...
- 01:14 AM Feature #12275 (Duplicate): Handle metadata migration during forward scrub
- #4143 and #4144
- 01:03 AM Feature #12141: cephfs-data-scan: File size correction from backward scan
- This was discussed elsewhere, but we need to be able to disable file size correction as well – via a config option at...