Activity

From 07/04/2016 to 08/02/2016

08/02/2016

07:08 PM Bug #16186 (Duplicate): kclient: drops requests without poking system calls on reconnect
Jeff Layton
07:08 PM Bug #16186: kclient: drops requests without poking system calls on reconnect
I'm going to go ahead and close this out, and pursue the follow-up work in tracker #15255.
Jeff Layton
03:33 PM Bug #16668 (Resolved): client: nlink count is not maintained correctly
Ok, PR is now merged! Jeff Layton
11:13 AM Bug #16879 (Fix Under Review): scrub: inode wrongly marked free: 0x10000000002
https://github.com/ceph/ceph-qa-suite/pull/1107 John Spray

08/01/2016

11:15 PM Feature #10627: teuthology: qa: enable Samba runs on RHEL
Passing this to John to watch. Greg Farnum
08:50 PM Support #16884: rename() doesn't work between directories
Zheng, what are the limits and requirements of that quota root EXDEV?
I think it's probably required and can't cha...
Greg Farnum
08:15 PM Support #16884: rename() doesn't work between directories
Debug output is:
todir->snapid:-2 todir->quota.is_enable:0 fromdir->snapid:-2 fromdir->quota->max_files:20000 return...
Donatas Abraitis
08:12 PM Support #16884: rename() doesn't work between directories
What about removing this block entirely? Or is it strictly required? Donatas Abraitis
08:04 PM Support #16884: rename() doesn't work between directories
Looks like this part is failing: https://github.com/ceph/ceph/blob/0080b6bc92cefdd2115c904fd0c83ae83c9c2f01/src/clien... Donatas Abraitis
08:00 PM Support #16884: rename() doesn't work between directories
More details, please. Cross-directory rename definitely works in general. What's the output of "mount"? What versions... Greg Farnum
07:03 PM Support #16884 (Closed): rename() doesn't work between directories
Hi folks!
Looks like rename() just doesn't work between directories. Here is the snippet the FTP daemon does:
#incl...
Donatas Abraitis
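The truncated snippet above can't be recovered, but the behaviour under discussion — rename() failing across directories when CephFS returns EXDEV at a quota-root boundary — can be illustrated with a small, self-contained sketch. This is not the FTP daemon's actual code; it just shows the EXDEV fallback a caller would need (on a local filesystem the rename simply succeeds):

```python
import errno
import os
import shutil
import tempfile

def move(src, dst):
    """Try an atomic rename(); fall back to copy+unlink on EXDEV.

    CephFS can return EXDEV when src and dst fall under different
    quota roots (much like a cross-device move), so callers should
    be prepared to handle it.
    """
    try:
        os.rename(src, dst)
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
        shutil.copy2(src, dst)  # non-atomic fallback
        os.unlink(src)

base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "a"))
os.makedirs(os.path.join(base, "b"))
src = os.path.join(base, "a", "f.txt")
with open(src, "w") as f:
    f.write("data")
move(src, os.path.join(base, "b", "f.txt"))
print(os.path.exists(os.path.join(base, "b", "f.txt")))  # prints: True
```

This is essentially what shutil.move() already does, which is one reason the later comments question whether the client-side EXDEV check is needed at all.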
08:39 PM Bug #16886: multimds: kclient hang (?) in tests
Updated title/description. Patrick Donnelly
07:54 PM Bug #16886: multimds: kclient hang (?) in tests
Well I feel silly. This is actually more general but wasn't obvious by how I had organized the failures. I'm going to... Patrick Donnelly
07:32 PM Bug #16886 (Can't reproduce): multimds: kclient hang (?) in tests
There are strange pauses which are showing up in several tests for the kclient:
* http://qa-proxy.ceph.com/teuthol...
Patrick Donnelly
06:57 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
root@ceph1:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
ceph 1 0 0 18:51 ? 00:...
stephane beuret
05:42 AM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
I must admit that I'm having trouble setting it up. I don't know gdb well enough, and as my ceph-mon is in a... stephane beuret
04:42 PM Bug #16879: scrub: inode wrongly marked free: 0x10000000002
This test also fails with the master branch (as of earlier this morning):
http://pulpito.ceph.com/jlayton-2016...
Jeff Layton
03:08 PM Bug #16879: scrub: inode wrongly marked free: 0x10000000002
Rebased onto current master branch, and still seeing the error. Rerunning the test now on a branch without any of my ... Jeff Layton
01:00 PM Bug #16879: scrub: inode wrongly marked free: 0x10000000002
Reran the test and it failed again: (btw: thanks Nathan for the pointer to how to filter out failures and rerun only ... Jeff Layton
12:34 PM Bug #16879: scrub: inode wrongly marked free: 0x10000000002
Ahh thanks, Nathan. Ok, this is a recently-added test and my local ceph-qa-suite was missing it. A git pull fixed tha... Jeff Layton
11:36 AM Bug #16879: scrub: inode wrongly marked free: 0x10000000002
I found it by looking at the "task" function in "tasks/cephfs_test_runner.py" - it says: ... Nathan Cutler
11:35 AM Bug #16879: scrub: inode wrongly marked free: 0x10000000002
Hi Jeff, this:
https://github.com/ceph/ceph-qa-suite/blob/master/tasks/cephfs/test_forward_scrub.py
Nathan Cutler
10:51 AM Bug #16879: scrub: inode wrongly marked free: 0x10000000002
Message comes from CInode::validate_disk_state, but I haven't yet been able to figure out where the test itself comes... Jeff Layton
10:23 AM Bug #16879 (Resolved): scrub: inode wrongly marked free: 0x10000000002
I ran the "fs" testsuite on a branch that has a pile of small, userland client-side patches. One of the tests (tasks/... Jeff Layton
01:34 PM Bug #16807: Crash in handle_slave_rename_prep
Zheng Yan
12:56 PM Bug #16876: java.lang.UnsatisfiedLinkError: Can't load library: /usr/lib/jni/libcephfs_jni.so
Of course, it may be that I reached the box too late and the filesystem had been changed. I'm not sure how to tell. E... Jeff Layton
10:57 AM Bug #16832 (Resolved): libcephfs failure at shutdown (Attempt to free invalid pointer)
I haven't seen this in the latest test runs, so I'm going to go ahead and close this out under the assumption that is... Jeff Layton
10:40 AM Bug #16881: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
Comments in test_strays.py seem to indicate that this test is racy anyway:... Jeff Layton
10:39 AM Bug #16881 (Resolved): RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
During a test_files_throttle test run, I hit the following error:... Jeff Layton
10:34 AM Bug #16880: saw valgrind issue <kind>Leak_StillReachable</kind> in /var/log/ceph/valgrind/clien...
client.0.log is here:
http://qa-proxy.ceph.com/teuthology/jlayton-2016-07-29_18:51:42-fs-wip-jlayton-nlink---b...
Jeff Layton
10:27 AM Bug #16880 (Duplicate): saw valgrind issue <kind>Leak_StillReachable</kind> in /var/log/ceph/va...
One of my "fs" test runs over the weekend failed with this:... Jeff Layton
02:36 AM Bug #16768: multimds: check_rstat assertion failure
Here's another instance of the assertion failure on a more recent master branch:
http://qa-proxy.ceph.com/teutholo...
Patrick Donnelly

07/31/2016

09:12 PM Bug #16876 (Duplicate): java.lang.UnsatisfiedLinkError: Can't load library: /usr/lib/jni/libcephf...
I had a failed fs testsuite run, and a couple of the jobs failed with what looks like the error below:... Jeff Layton

07/29/2016

10:18 PM Documentation #16743 (Resolved): client: config settings missing in documentation
Patrick Donnelly
12:18 PM Backport #16797 (In Progress): jewel: MDS Deadlock on shutdown active rank while busy with metada...
Abhishek Varshney
12:13 PM Bug #16842: mds: replacement MDS crashes on InoTable release
Min Chen: can you describe the client part of how to reproduce this? What does the client have to be doing to reprod... John Spray
12:08 PM Backport #16621 (In Progress): jewel: mds: `session evict` tell command blocks forever with async...
Abhishek Varshney
11:48 AM Backport #16620 (In Progress): jewel: Fix shutting down mds timed-out due to deadlock
Abhishek Varshney
11:44 AM Backport #16299 (In Progress): jewel: mds: fix SnapRealm::have_past_parents_open()
Abhishek Varshney
11:00 AM Cleanup #15923 (Resolved): MDS: remove TMAP2OMAP check and move Objecter into MDSRank
John Spray
10:58 AM Cleanup #16195 (Resolved): mds: Don't spam log with standby_replay_restart messages
John Spray
10:44 AM Bug #16857 (Duplicate): Crash in Client::_invalidate_kernel_dcache
... John Spray
03:18 AM Bug #16556: LibCephFS.InterProcessLocking failing on master and jewel
It's not only Ceph's locks, mutexes, etc. that we need to be aware of or concerned with. I have seen multiple occurre... Brad Hubbard
03:08 AM Bug #16556: LibCephFS.InterProcessLocking failing on master and jewel
Kefu Chai
03:05 AM Bug #16556: LibCephFS.InterProcessLocking failing on master and jewel
John,
I marked PR#10472 as a fix of this issue (and it does fix it), but I would like to keep this issue open, because:
by ...
Kefu Chai

07/28/2016

06:33 PM Bug #16842: mds: replacement MDS crashes on InoTable release
This looks more complicated than that to reproduce. The code that's crashing is timing out a client connection that d... Greg Farnum
06:25 AM Bug #16842 (Can't reproduce): mds: replacement MDS crashes on InoTable release
ceph version 10.2.0-2638-gf7fc985
reproduce step:
1. new fs and start mds.a
2. start mds.b
3. kill mds.a
fai...
Min Chen
01:58 PM Bug #16844 (Duplicate): hammer: libcephfs-java/test.sh fails
Nathan Cutler
11:09 AM Bug #16844 (Duplicate): hammer: libcephfs-java/test.sh fails
Failing consistently on hammer-backports branch:
http://pulpito.ceph.com/smithfarm-2016-07-25_05:09:12-fs-hammer-ba...
Nathan Cutler
11:20 AM Bug #16556 (Fix Under Review): LibCephFS.InterProcessLocking failing on master and jewel
https://github.com/ceph/ceph/pull/10472 Kefu Chai
08:40 AM Bug #16556: LibCephFS.InterProcessLocking failing on master and jewel
... Kefu Chai

07/27/2016

09:07 PM Bug #16829: ceph-mds crashing constantly
1) Possibly, but we're more likely to just lock out pools which have data in them.
2) You mean you intermingled RB...
Greg Farnum
08:50 PM Bug #16829: ceph-mds crashing constantly
So, two things:
1) can ceph-mds be made more resilient when finding data from non-existing filesystems?
2) I ca...
Tomasz Torcz
06:31 PM Bug #16829 (Closed): ceph-mds crashing constantly
It looks like you did "fs rm" and "fs new" but kept the same metadata pool in RADOS. That doesn't work; you can resol... Greg Farnum
12:50 PM Bug #16829 (Closed): ceph-mds crashing constantly
I'm using Ceph packages from Fedora 24: ceph-mds-10.2.2-2.fc24.x86_64
I've created simple cephfs once, stored some...
Tomasz Torcz
04:41 PM Bug #16832: libcephfs failure at shutdown (Attempt to free invalid pointer)
@Jeff: Just in case you don't know it yet, here is a trick for rescheduling failed and dead jobs from a previous run ... Nathan Cutler
04:17 PM Bug #16832: libcephfs failure at shutdown (Attempt to free invalid pointer)
The other two test failures -- one was a segfault in ceph_test_libcephfs:
(gdb) bt
#0 0x00007f771c5aaa63 in lock...
Jeff Layton
02:58 PM Bug #16832: libcephfs failure at shutdown (Attempt to free invalid pointer)
> 1) some binary segfaulted, but I don't seem to be able to track down the core to see what actually failed:
<dgallo...
David Galloway
02:47 PM Bug #16832 (Resolved): libcephfs failure at shutdown (Attempt to free invalid pointer)
My fs test run had 3 failures:
1) some binary segfaulted, but I don't seem to be able to track down the core to se...
Jeff Layton
02:33 PM Feature #15406 (Pending Backport): Add versioning to CephFSVolumeClient interface
Ramana Raja
02:32 PM Feature #15615 (Pending Backport): CephFSVolumeClient: List authorized IDs by share
Ramana Raja
01:54 PM Backport #16831 (In Progress): jewel: Add versioning to CephFSVolumeClient interface
Ramana Raja
01:53 PM Backport #16831 (Resolved): jewel: Add versioning to CephFSVolumeClient interface
https://github.com/ceph/ceph/pull/10453
https://github.com/ceph/ceph-qa-suite/pull/1100
Ramana Raja
01:23 PM Backport #16830 (In Progress): jewel: CephFSVolumeClient: List authorized IDs by share
Ramana Raja
01:03 PM Backport #16830 (Resolved): jewel: CephFSVolumeClient: List authorized IDs by share
https://github.com/ceph/ceph/pull/10453
https://github.com/ceph/ceph-qa-suite/pull/1100
Ramana Raja
11:58 AM Cleanup #16035 (Resolved): Remove "cephfs" CLI
Nathan Cutler
11:01 AM Cleanup #16035: Remove "cephfs" CLI
Mop-up *master PR*: https://github.com/ceph/ceph/pull/10444 Nathan Cutler
11:00 AM Cleanup #16035 (Fix Under Review): Remove "cephfs" CLI
Nathan Cutler
11:22 AM Bug #16556: LibCephFS.InterProcessLocking failing on master and jewel
Looking into what's happening in the case of running LibCephFS.InterProcessLocking on its own, I see that the forked ... John Spray
05:52 AM Bug #16556: LibCephFS.InterProcessLocking failing on master and jewel
Tested with the latest master; it still fails.
@Greg, do we have a fix for this issue now?
Kefu Chai
10:59 AM Cleanup #16808 (Resolved): Merge "ceph-fs-common" into "ceph-common"
Nathan Cutler
06:27 AM Cleanup #16808: Merge "ceph-fs-common" into "ceph-common"
https://github.com/ceph/ceph-qa-suite/pull/1098 is still open Nathan Cutler
05:32 AM Cleanup #16808 (Resolved): Merge "ceph-fs-common" into "ceph-common"
Kefu Chai

07/26/2016

08:18 PM Bug #16754: mounting cephfs root and sub-directory on the same node makes the sub-directory inacc...
Given the current state of CephFS kernel support I wouldn't expect so. The upstream community still asks users to run... Greg Farnum
05:06 PM Bug #16754: mounting cephfs root and sub-directory on the same node makes the sub-directory inacc...
@Greg does that mean there is a chance that the 3.16 kernel may never have the fix backported? Rohith Radhakrishnan
04:09 PM Bug #16754: mounting cephfs root and sub-directory on the same node makes the sub-directory inacc...
Hmm, we aren't very good about tagging CephFS patches for stable trees. Distributors might have an easier time mainta... Greg Farnum
03:37 PM Bug #16754: mounting cephfs root and sub-directory on the same node makes the sub-directory inacc...
@Rohith new Ceph patches go onto the latest upstream linux kernel, and then it's up to linux distributions which thin... John Spray
03:18 PM Bug #16754: mounting cephfs root and sub-directory on the same node makes the sub-directory inacc...
@John: For Ubuntu 14.04.02, the 3.16 kernel is the latest and I am using the most recent version of 3.16. In that ca... Rohith Radhakrishnan
02:15 PM Bug #16754 (Can't reproduce): mounting cephfs root and sub-directory on the same node makes the s...
Closing because this is happening on an old kernel. Please re-open if you can reproduce the issue on a recent 4.x ke... John Spray
02:15 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
Hmm, all we can tell from the backtrace is that a data structure got corrupted at some stage. The function where you... John Spray
02:02 PM Bug #16768: multimds: check_rstat assertion failure
Zheng, I think we already have "debug mds = 20", right? From the config for this run: http://pulpito.ceph.com/pdonnel... Patrick Donnelly
01:55 PM Bug #16768: multimds: check_rstat assertion failure
please add a line "debug mds = 10" to ceph.conf Zheng Yan
01:50 PM Bug #16739: Client::setxattr always sends setxattr request to MDS
Zheng says the client also doesn't release the caps voluntarily, which makes this extra bad.
(Maybe that should be i...
Greg Farnum
09:28 AM Bug #10944: Deadlock, MDS logs "slow request", getattr pAsLsXsFs failed to rdlock
Oliver: on the mailing list it seemed like this was probably not a cephfs issue (there were very busy cache tiers). John Spray
02:10 AM Bug #10944: Deadlock, MDS logs "slow request", getattr pAsLsXsFs failed to rdlock
Hi,
client kernel: 4.5.5
MDS Server kernel: 4.5.5
Only ONE client is accessing.
Only a specific directory wi...
Oliver Dzombc
02:26 AM Bug #16764 (Resolved): ceph-fuse crash on force unmount with file open
Zheng Yan

07/25/2016

08:38 PM Cleanup #16808: Merge "ceph-fs-common" into "ceph-common"
Oh, and for 16035 we definitely shouldn't backport, we should only remove things in new versions. John Spray
08:37 PM Cleanup #16808: Merge "ceph-fs-common" into "ceph-common"
I don't feel strongly. My intuition would be that we should avoid changes in jewel that might confuse anyone, unless... John Spray
08:29 PM Cleanup #16808: Merge "ceph-fs-common" into "ceph-common"
@John: Would it make sense to backport #16035 and this one to jewel? Nathan Cutler
08:13 PM Cleanup #16808 (Fix Under Review): Merge "ceph-fs-common" into "ceph-common"
*master PRs*:
https://github.com/ceph/ceph/pull/10433 (ceph)
https://github.com/ceph/ceph-qa-suite/pull/1098 (cep...
Nathan Cutler
02:52 PM Cleanup #16808 (Resolved): Merge "ceph-fs-common" into "ceph-common"
After the merge of https://github.com/ceph/ceph/pull/10243 the ceph-fs-common package (which exists only in Debian) c... Nathan Cutler
07:11 PM Documentation #16743: client: config settings missing in documentation
Patrick Donnelly
07:10 PM Documentation #16743: client: config settings missing in documentation
PR: https://github.com/ceph/ceph/pull/10434 Patrick Donnelly
02:48 PM Cleanup #16035 (Pending Backport): Remove "cephfs" CLI
Nathan Cutler
02:16 PM Bug #16768: multimds: check_rstat assertion failure
I've opened a separate ticket for the segfault, as it seems likely to be its own issue (http://tracker.ceph.com/issues/16807) John Spray
02:13 PM Bug #16768: multimds: check_rstat assertion failure
Zheng, which setting is that and how do I enable it? Sorry... Patrick Donnelly
02:59 AM Bug #16768: multimds: check_rstat assertion failure
please enable mds_debug Zheng Yan
02:15 PM Bug #16807: Crash in handle_slave_rename_prep
Added an assertion to see if this is happening when a rename points to a null dentry
https://github.com/ceph/ceph/pu...
John Spray
02:12 PM Bug #16807 (Resolved): Crash in handle_slave_rename_prep
Opening from Patrick's http://tracker.ceph.com/issues/16768... John Spray
12:44 PM Bug #16592: Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(info.state == MDSMa...
It is interesting that we're hitting this in maybe_promote_standby and *not* in sanity(). Sanity gets called after p... John Spray
10:13 AM Bug #16592: Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(info.state == MDSMa...
Denis: you are seeing http://tracker.ceph.com/issues/15705, unrelated to this ticket. John Spray
12:41 PM Bug #16774 (Rejected): file: ceph.file.layout.pool_namespace: No such attribute
Yes, pretty sure that version is simply too old. John Spray
07:07 AM Bug #16737 (Resolved): Mounting ceph fs on client leads to kernel crash
Zheng Yan
06:09 AM Bug #16737: Mounting ceph fs on client leads to kernel crash
Yes. The issue is not seen with newer kernels. Rohith Radhakrishnan
02:29 AM Bug #16737: Mounting ceph fs on client leads to kernel crash
Seems like a duplicate of http://tracker.ceph.com/issues/15302. Updating your kernel should resolve this issue. Zheng Yan

07/24/2016

09:04 PM Bug #16768: multimds: check_rstat assertion failure
Segmentation fault in another test which may be related:... Patrick Donnelly
09:03 PM Bug #16768: multimds: check_rstat assertion failure
Another of the same:... Patrick Donnelly
04:44 PM Cleanup #16035 (Fix Under Review): Remove "cephfs" CLI
John Spray
04:44 PM Bug #16255 (Fix Under Review): ceph-create-keys: sometimes blocks forever if mds "allow" is set
https://github.com/ceph/ceph/pull/10415
John Spray
04:24 PM Cleanup #16195 (Fix Under Review): mds: Don't spam log with standby_replay_restart messages
https://github.com/ceph/ceph/pull/10243/commits John Spray
04:23 PM Feature #16570 (Resolved): MDS health warning for failure to enforce cache size limit
John Spray
04:22 PM Bug #16764 (Fix Under Review): ceph-fuse crash on force unmount with file open
https://github.com/ceph/ceph/pull/10419 John Spray

07/23/2016

09:59 PM Bug #16691: sepia LRC lost directories
The offending dentries that point to damaged dirfrags have been removed (by removing the omap keys). The objects the... John Spray

07/22/2016

11:22 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
Don't know if it's useful, but when I launch the command in debug mode 10 I have this log:
2016-07-22 23:18:43.43...
stephane beuret
07:11 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
output of ceph -s
2016-07-22 19:08:42.844997 6c700470 0 -- :/1260326585 >> 192.168.100.151:6789/0 pipe(0x6c405b30 s...
stephane beuret
07:03 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
sorry: /usr/bin/ceph-mds ceph -d -i mds-ceph1 --setuser ceph --setgroup ceph stephane beuret
06:56 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
The ceph mon crashes when I launch:
/usr/bin/ceph-mds --cluster ceph -d -i mds-ceph1 --setuser ceph --s
It's the first md...
stephane beuret
01:09 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
Also, has this ever worked before for you? I don't know that we've ever done any cephfs testing at all on ARM builds. John Spray
01:08 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
Please provide more information.
What series of commands did you run that caused the crash?
Were there already ...
John Spray
08:37 PM Backport #16797 (Resolved): jewel: MDS Deadlock on shutdown active rank while busy with metadata IO
https://github.com/ceph/ceph/pull/10502 Nathan Cutler
06:30 PM Feature #16775: MDS command for listing open files
See also #15507. Greg Farnum
11:18 AM Feature #16775 (Resolved): MDS command for listing open files

John Spray
03:25 PM Bug #16255: ceph-create-keys: sometimes blocks forever if mds "allow" is set
We've encountered the same issue when upgrading. The " on systemd-using systems will probably not be seeing this" is ... Alfredo Deza
02:22 PM Bug #16691: sepia LRC lost directories
(Mainly for my reference) etherpad from repairing is here http://etherpad.corp.redhat.com/efev9SA7rn John Spray
08:19 AM Bug #16774: file: ceph.file.layout.pool_namespace: No such attribute
Zheng Yan wrote:
> It seems client does not support pool namespace. which client are you using ? (kernel or ceph-fus...
de lan
07:34 AM Bug #16774: file: ceph.file.layout.pool_namespace: No such attribute
It seems the client does not support pool namespaces. Which client are you using? (kernel or ceph-fuse, and which version) Zheng Yan
07:30 AM Bug #16774 (Rejected): file: ceph.file.layout.pool_namespace: No such attribute
Hi!
when i test the ci: kcephfs/cephfs/{conf.yaml clusters/fixed-3-cephfs.yaml fs/btrfs.yaml inline/no.yaml tasks/kc...
de lan
06:10 AM Bug #16754: mounting cephfs root and sub-directory on the same node makes the sub-directory inacc...
3.16.0-76
Rohith Radhakrishnan
02:35 AM Bug #16754: mounting cephfs root and sub-directory on the same node makes the sub-directory inacc...
I can't reproduce this on a 4.5 kernel. Which version of the kernel are you using? Zheng Yan

07/21/2016

09:11 PM Bug #16771 (New): mon crash in MDSMonitor::prepare_beacon on ARM
ceph 10.2.2
ubuntu 16.10
in Docker version 1.11.1, build 5604cbe
on arch armhf (raspberry pi running hypriot)
<...
stephane beuret
08:11 PM Bug #16768 (Resolved): multimds: check_rstat assertion failure
... Patrick Donnelly
04:31 PM Bug #16042 (Pending Backport): MDS Deadlock on shutdown active rank while busy with metadata IO
Patrick Donnelly
02:42 PM Bug #16668: client: nlink count is not maintained correctly
Pull request with the fix is up here:
https://github.com/ceph/ceph/pull/10386
Jeff Layton
01:15 PM Bug #16764 (Resolved): ceph-fuse crash on force unmount with file open

Reproducing this in a vstart environment:
1. Mount a client
2. in python, do "f = open('mnt/foo.bin', 'w')"
3....
John Spray
11:08 AM Bug #16397: nfsd selinux denials causing knfs tests to fail
The other thing of note is the logs seem to indicate that these hosts are running pretty bleeding-edge kernels -- 4.7... Jeff Layton
11:06 AM Bug #16397: nfsd selinux denials causing knfs tests to fail
As Scott Mayhew pointed out, the version of nfs-utils that ships in RHEL7.2 uses fopen to open the channel file, and ... Jeff Layton
12:11 AM Bug #4212: mds: open_snap_parents isn't called all the times it needs to be
I had a misunderstanding about what data a SnapRealm/sr_t has directly.
So, yes, right now we need *all* past_pare...
Greg Farnum

07/20/2016

08:19 PM Feature #16757 (New): enable MDS replacement migration
Right now, without multi-mds the only way we have to switch MDSes is to do a failover from the current active to some... Greg Farnum
02:02 PM Feature #16468: kclient: Exclude ceph.* xattr namespace in listxattr
I gave a shot at fixing this today (kclient only) as per the email thread.
listxattr() does not return internal xa...
Venky Shankar
02:00 PM Bug #16668: client: nlink count is not maintained correctly
Ok, I have a couple of small patches that fix the testcase. One is a client-side patch to fix the ctime handling in f... Jeff Layton
01:37 PM Support #16526: cephfs client side quotas - nfs-ganesha
Oh, we just recently flipped the bit so quotas are enforced by default. This should work if you set "client quota = t... Greg Farnum
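For reference, that setting would live in the client section of the client host's ceph.conf; a minimal sketch, assuming the truncated option name completes as "client quota":

```ini
[client]
    client quota = true
```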
09:21 AM Support #16526: cephfs client side quotas - nfs-ganesha
For this test I was using the below versions:
ceph version 10.2.1 (3a66dd4f30852819c1bdaa8ec23c795d4ad77269)
nfs-...
sean redmond
12:15 PM Bug #16754 (Can't reproduce): mounting cephfs root and sub-directory on the same node makes the s...
Steps to reproduce:
*********************************************************************
ems@host1: sudo mount -t ...
Rohith Radhakrishnan
08:06 AM Bug #16737: Mounting ceph fs on client leads to kernel crash
attaching full kernel log Rohith Radhakrishnan
06:15 AM Bug #3718: multi-client dbench gets stuck over NFS exported cephfs
Probably not a bug any more? Greg Farnum
06:14 AM Bug #1787 (Closed): mds: laggy oneshot replays pollute mdsmap
One shot replay got zapped. Greg Farnum
06:08 AM Bug #5864 (Closed): cfuse_workunit_suites_ffsb suite on Centos hangs with *** Got Signal Interrup...
Greg Farnum
06:08 AM Bug #4732 (Closed): uclient: client/Inode.cc: 126: FAILED assert(cap_refs[c] > 0)
Things have changed. Greg Farnum
05:59 AM Bug #4738 (Closed): libceph: unlink vs. readdir (and other dir orders)
We have file locking and redid the listing code. Greg Farnum
05:57 AM Bug #8432 (Closed): ceph-fuse process not dying
These are definitely out of date, whatever the bug was. Greg Farnum
05:52 AM Bug #9276: Client::get_file_extent_osds asserts in object_locator_to_pg if osd map is out of date
This might be fixed now? Greg Farnum
05:46 AM Bug #3845 (Closed): mds: standby_for_rank not getting cleared on takeover
A bunch of this got rejiggered in John's multi-fs and follow-on work; it's probably gone. Greg Farnum
05:42 AM Bug #9884 (Closed): too many files in /usr for multiple_rsync.sh
Pretty sure we reduced the size and this isn't a problem any more. Greg Farnum
05:41 AM Bug #10061: uclient: MDS: output cap data in messages
This should also be exposed via the admin socket. Greg Farnum
05:35 AM Bug #10542: ceph-fuse cap trimming fails with: mount: only root can use "--options" option
I think this got resolved into one of the many fuse cache invalidate PRs, but I'm not sure. Greg Farnum
05:28 AM Cleanup #11 (Resolved): mds: replace ALLOW_MESSAGES_FROM macro
This got fixed up in the security stuff last summer. Greg Farnum
12:29 AM Feature #16745 (Pending Backport): mon: prevent allocating snapids allocated for CephFS
The MDS allocates its own snapids. In general, the monitor allocates self-managed snapids for librados users.
We n...
Greg Farnum

07/19/2016

11:29 PM Bug #11789 (Can't reproduce): knfs mount fails with "getfh failed: Function not implemented"
Greg Farnum
11:28 PM Bug #12209 (Won't Fix): CephFS should have a complete timeout mechanism to avoid endless waiting ...
There's been no movement here and we didn't seem to like the idea. Greg Farnum
11:26 PM Bug #13689 (Won't Fix): ceph-mds not build with libjemalloc
We're switching to cmake so hopefully this is fixed. Greg Farnum
11:23 PM Support #15268 (Resolved): CephFS mount blocks VM
Greg Farnum
11:22 PM Bug #15783: client: enable acls by default
Zheng? Greg Farnum
11:20 PM Documentation #3113 (Rejected): Ceph FS Options Could Use Some Additional Information
The cephfs tool got zapped. Greg Farnum
11:19 PM Fix #4286 (Rejected): SLES 11 - cfuse: disable 'big_writes'and 'atomic_o_trunc
I think/hope we can ditch this now. There have been several SLES11 service packs and SLES12 is out now. Greg Farnum
11:12 PM Bug #16322 (Need More Info): ceph mds getting killed for no reason
Greg Farnum
11:08 PM Bug #15502 (Resolved): files read or written with cephfs (fuse or kernel) on client drop all thei...
I think this is all cleaned up now. Greg Farnum
10:54 PM Bug #4212: mds: open_snap_parents isn't called all the times it needs to be
See the email thread at http://www.spinics.net/lists/ceph-devel/msg12818.html
Unfortunately it doesn't include any...
Greg Farnum
08:39 PM Documentation #16743 (Resolved): client: config settings missing in documentation
These include at least:
* client_cache_mid
* client_oc_size
* client_oc_max_objects
* client_oc_max_dirty
* cl...
Patrick Donnelly
08:05 PM Bug #16668: client: nlink count is not maintained correctly
I think the actual bug here is that, as you note, ll_lookup calls fill_stat without checking that it has As (and what... Greg Farnum
05:01 PM Bug #16668: client: nlink count is not maintained correctly
Actually we could probably just always return the updated inode attrs on unlink. There's always the possibility that ... Jeff Layton
04:46 PM Bug #16668: client: nlink count is not maintained correctly
Ok, I think I sort of get it now. Here's my reproducer:... Jeff Layton
04:13 PM Bug #16668: client: nlink count is not maintained correctly
Successful test -- the lookup after the unlink calls into _do_lookup:... Jeff Layton
03:41 PM Bug #16668: client: nlink count is not maintained correctly
Tracked down the problem with the ctime and it appears to be a fairly simple bug in fill_stat(). It was only looking ... Jeff Layton
01:56 PM Bug #16737: Mounting ceph fs on client leads to kernel crash
the screenshot does not contain full backtrace. please setup netconsole to get full kernel message Zheng Yan
10:25 AM Bug #16737 (Resolved): Mounting ceph fs on client leads to kernel crash
Mounting the cephfs on client side with IO running leads to the client crashing sometimes.
Client version:-
uname...
Rohith Radhakrishnan
01:50 PM Bug #16739 (Resolved): Client::setxattr always sends setxattr request to MDS
If the client has CEPH_CAP_AUTH_EXCL, it can update the xattr locally and mark CEPH_CAP_AUTH_EXCL dirty Zheng Yan
01:43 PM Support #16738 (Closed): mount.ceph: unknown mount options: rbytes and norbytes
Ceph: @v10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)@
Linux Kernel: @4.6.3-300.fc24.x86_64@
Hello,
When t...
Alexander Trost
01:32 PM Bug #16610: Jewel: segfault in ObjectCacher::FlusherThread
https://github.com/ceph/ceph/pull/10304 Zheng Yan
01:32 PM Bug #16610 (Fix Under Review): Jewel: segfault in ObjectCacher::FlusherThread
Zheng Yan
05:07 AM Bug #16610: Jewel: segfault in ObjectCacher::FlusherThread
Just to keep the full history in this issue, we have understood that the segfault only appears in VM with AMD62xx pro... Goncalo Borges
01:09 PM Cleanup #15923 (Fix Under Review): MDS: remove TMAP2OMAP check and move Objecter into MDSRank
https://github.com/ceph/ceph/pull/10243 John Spray
01:09 PM Cleanup #16035: Remove "cephfs" CLI
https://github.com/ceph/ceph/pull/10243 John Spray
11:33 AM Bug #16397: nfsd selinux denials causing knfs tests to fail
I'm trying to get some clarification of what the application was doing when it got these AVC denials. In the meantime... Jeff Layton
11:07 AM Bug #16397: nfsd selinux denials causing knfs tests to fail
Just some notes. It looks like the machine has already been torn down and rebuilt, but the new machine is using the s... Jeff Layton
05:08 AM Bug #16709: No output for "ceph mds rmfailed 0 --yes-i-really-mean-it" command
Yes, that would be ideal. As of now we cannot be sure whether it has actually been removed or not. Rohith Radhakrishnan
04:46 AM Bug #16730 (Won't Fix): mds'dump display incomplete
This is deliberate. "mds dump" dumps a specific filesystem (it defaults to the first one, but a client which is set u... Greg Farnum
02:34 AM Bug #16730: mds'dump display incomplete
"mds dump" and "fs dump" are redundant, and "mds dump" displays incomplete information,
so I think deleting "mds dump" is the best cho...
huanwen ren
01:32 AM Bug #16730 (Won't Fix): mds'dump display incomplete
create "cephfs&&leadorfs2" fs when run "create fs flag set enable_multiple"... huanwen ren

07/18/2016

10:29 PM Bug #16397 (New): nfsd selinux denials causing knfs tests to fail
Oh dear, this is happening again:
http://pulpito.ceph.com/teuthology-2016-07-13_02:25:02-knfs-jewel-testing-basic-...
John Spray
08:20 PM Cleanup #16035: Remove "cephfs" CLI
For additional info, quoting Sage from an internal RH bug (sorry this is restricted, not sure why. https://bugzilla.r... Ken Dreyer
03:18 PM Cleanup #16035: Remove "cephfs" CLI
(Agreed with merging "ceph-fs-common" into "ceph-common". I've never found an explanation for why that was its own pa... Ken Dreyer
02:25 PM Cleanup #16035: Remove "cephfs" CLI
After the cephfs tool is dropped, mount.ceph will be the only thing remaining in the (deb-only) "ceph-fs-common" pack... Nathan Cutler
08:19 PM Bug #16691: sepia LRC lost directories
Well, I checked the code again and the tmap2omap path looks appropriately durable.
I did notice one thing that hel...
Greg Farnum
01:53 PM Bug #16691: sepia LRC lost directories
Plan is for Greg to look into the TMAP2OMAP OSD code for what might have caused that.
Afterwards John+Doug ...
John Spray
07:39 PM Bug #16709: No output for "ceph mds rmfailed 0 --yes-i-really-mean-it" command
Without looking at the code, I would imagine that you're seeing EINVAL for rank 1 because there is no such rank (so i... John Spray
12:28 PM Bug #16709 (Resolved): No output for "ceph mds rmfailed 0 --yes-i-really-mean-it" command
there is no output for the command ceph mds rmfailed 0 --yes-i-really-mean-it. The command is successful how many eve... Rohith Radhakrishnan
04:43 PM Bug #16592: Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(info.state == MDSMa...
Dzianis reported that he upgraded to 10.2.2 without ever upgrading to 10.2.0 (and downgrading after, if that's even p... Patrick Donnelly
04:42 PM Bug #16592: Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(info.state == MDSMa...
I make (maybe wrong, but no way back) one-shot upgrade: stop all client, stop all ceph daemons (mds,osd,mon) and run ... Denis kaganovich

07/16/2016

05:08 PM Bug #16592: Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(info.state == MDSMa...
PR for added assertions: https://github.com/ceph/ceph/pull/10316 Patrick Donnelly

07/15/2016

07:18 PM Bug #16668: client: nlink count is not maintained correctly
I set up a ganesha + ceph test rig today and was able to reproduce the problem. Interestingly, it does not reproduce ... Jeff Layton
04:24 PM Bug #16592: Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(info.state == MDSMa...
So, rambling brain dump of my current thoughts on this:
I haven't been able to reproduce this problem. There are t...
Patrick Donnelly
03:19 PM Backport #16697 (Fix Under Review): jewel: ceph-fuse is not linked to libtcmalloc
PR for jewel is https://github.com/ceph/ceph/pull/10303 Ken Dreyer
09:36 AM Backport #16697 (Resolved): jewel: ceph-fuse is not linked to libtcmalloc
https://github.com/ceph/ceph/pull/10303 Nathan Cutler
03:28 AM Bug #16655: ceph-fuse is not linked to libtcmalloc
https://github.com/ceph/ceph/pull/10303 Zheng Yan
02:25 AM Bug #16691: sepia LRC lost directories
What do you mean they are old? What does 'rados stat xxxx' show? Zheng Yan

07/14/2016

11:33 PM Documentation #16664 (Resolved): Standby Replay configuration doc is wrong
Greg Farnum
04:12 PM Documentation #16664: Standby Replay configuration doc is wrong
Backport: https://github.com/ceph/ceph/pull/10298
I can't mark this issue as Resolved for some reason.
Patrick Donnelly
11:18 PM Bug #16655: ceph-fuse is not linked to libtcmalloc
tcmalloc is also missing from @ldd /usr/bin/ceph-fuse@ in ceph-fuse-0.94.7-0.el7, FYI, so this has gone on for quite ... Ken Dreyer
01:39 PM Bug #16655 (Pending Backport): ceph-fuse is not linked to libtcmalloc
Kefu Chai
09:48 PM Bug #16691 (Resolved): sepia LRC lost directories
If you log in to the sepia long-running cluster, it has 37 directories whose objects it lost.
I spot-checked one o...
Greg Farnum
05:39 PM Bug #16640 (New): libcephfs: Java bindings failing to load on CentOS
Let's leave this open to work out if there is a change to the build we can make to avoid the java bindings requiring ... John Spray
04:01 PM Bug #16640: libcephfs: Java bindings failing to load on CentOS
Noah, John, I'm guessing the Java bindings ought to link to the versioned libcephfs_jni.so.1.0.0 instead of the unver... Ken Dreyer
05:38 PM Feature #4139 (Resolved): MDS: forward scrub: add scrub_stamp infrastructure and a function to sc...
I think Greg meant to mark this Resolved. Nathan Cutler
01:11 AM Feature #4139: MDS: forward scrub: add scrub_stamp infrastructure and a function to scrub a singl...
This bit has been done forever: we have admin socket interfaces to scrub a dentry or recursive folder. Greg Farnum
01:44 PM Bug #16610: Jewel: segfault in ObjectCacher::FlusherThread
looks like ObjectCacher::bh_write_adjacencies() passed an empty list to ObjectCacher::bh_write_scattered(). Maybe the... Zheng Yan
01:25 PM Bug #16668: client: nlink count is not maintained correctly
It also occurred to me yesterday that I was using the path-based calls, whereas ganesha would likely be using the ll ... Jeff Layton
10:39 AM Bug #8255 (Resolved): mds: directory with missing object cannot be removed
This kind of issue should be handled cleanly (MDS will raise 'damaged' health alert, specifics in "damage ls") as of ... John Spray
01:14 AM Feature #12275 (Duplicate): Handle metadata migration during forward scrub
#4143 and #4144 Greg Farnum
01:03 AM Feature #12141: cephfs-data-scan: File size correction from backward scan
This was discussed elsewhere, but we need to be able to disable file size correction as well – via a config option at... Greg Farnum

07/13/2016

11:37 PM Bug #13271 (Resolved): Missing dentry in cache when doing readdirs under cache pressure (?????s i...
Zheng fixed this. Greg Farnum
11:28 PM Feature #14427: qa: run snapshot tests under thrashing
https://github.com/ceph/ceph/pull/9955 improves snapshots and https://github.com/ceph/ceph-qa-suite/pull/1073 enables... Greg Farnum
11:25 PM Bug #10834 (Closed): SAMBA VFS module: Timestamps revert back to 01-01-1970
Closing in favor of #16679, since this is really about birthtime and we're adding a real one. Greg Farnum
11:25 PM Bug #16679 (New): Samba: hook up to birthtime correctly
https://github.com/ceph/ceph/pull/9965 is adding birthtime to Ceph internally. Once done, we need to plug samba in to... Greg Farnum
11:20 PM Feature #12671: Enforce cache limit during dirfrag load during open_ino (during rejoin)
If we do #13688, we probably won't need this one or can put it off. Greg Farnum
11:17 PM Fix #5268 (New): mds: fix/clean up file size/mtime recovery code
Greg Farnum
11:15 PM Bug #15379 (Closed): ceph mds continiously crashes and going into laggy state (stray purging prob...
We have open tickets about improving purge, and the specific issue here seems to have been addressed. Greg Farnum
11:09 PM Feature #3314: client: client interfaces should take a set of group ids
This is a natural part of what I'm already doing for #16367. Greg Farnum
11:07 PM Bug #8090 (New): multimds: mds crash in check_rstats
Greg Farnum
05:28 AM Bug #8090: multimds: mds crash in check_rstats
There may no longer be an issue now that #8094 is resolved? Greg Farnum
11:06 PM Feature #7321 (Duplicate): qa: multimds thrasher
#10792 Greg Farnum
10:49 PM Bug #16668 (In Progress): client: nlink count is not maintained correctly
Noted on irc that cap handling that involves the root directory (so, anything in root and frequently things in its im... Greg Farnum
06:21 PM Bug #16668: client: nlink count is not maintained correctly
I rolled up a testcase for this:... Jeff Layton
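Jeff's actual testcase is elided above. As a rough illustration of the invariant under discussion, here is a hypothetical sketch in Python against a local POSIX filesystem (not the original CephFS testcase): link() and unlink() should be reflected in st_nlink immediately, which is exactly what a stale cached nlink in the client would break.

```python
import os
import tempfile

def nlink(path):
    """Return the current hard-link count for path."""
    return os.stat(path).st_nlink

# Hypothetical sketch on a local filesystem: the client bug was a
# stale cached nlink, so the invariant being tested is simply that
# each link()/unlink() is visible in st_nlink right away.
d = tempfile.mkdtemp()
f = os.path.join(d, "a")
open(f, "w").close()
counts = [nlink(f)]                    # freshly created file
os.link(f, os.path.join(d, "b"))
counts.append(nlink(f))                # after adding a hard link
os.unlink(os.path.join(d, "b"))
counts.append(nlink(f))                # after removing it again
print(counts)  # expected [1, 2, 1] on a correctly behaving client
```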
12:44 PM Bug #16668: client: nlink count is not maintained correctly
MDS revokes CEPH_CAP_LINK_EXCL when unlinking files. It's odd, but I can't see how it causes a problem. Zheng Yan
10:47 PM Feature #10498 (New): ObjectCacher: order wakeups when write calls block on throttling
Greg Farnum
10:38 PM Feature #15393 (Resolved): ceph-fuse: Request for logrotate for client side log files
ceph-fuse was included in Jewel! commit:98744fdf9bda9d3b14bbf7f528f05ba50a923f97 Greg Farnum
10:34 PM Feature #10060: uclient: warn about stuck cap flushes
This should be pretty simple by looking at each session->flushing_caps_tids! Greg Farnum
10:30 PM Feature #6511 (Rejected): MDS: add special purging options for testing
This is kind of vague now and will get caught up in our future purge fixes anyway. Greg Farnum
10:23 PM Feature #15067 (Resolved): mon: client: multifs: enable clients to map a filesystem name to a FSCID
Greg Farnum
10:22 PM Feature #15068 (In Progress): fsck: multifs: enable repair tools to read from one filesystem and ...
I think Doug is working on this as well as #15069?
(Reset if not.)
Greg Farnum
10:15 PM Bug #16640 (Resolved): libcephfs: Java bindings failing to load on CentOS
Greg Farnum
12:01 PM Bug #16640: libcephfs: Java bindings failing to load on CentOS
I suppose the convention of putting the unversioned libraries into -dev packages is based on the idea that built code... John Spray
09:57 PM Feature #6290 (Resolved): Journaler: warn and shut down if we hit end of journal too early
Looks like this got fixed in our Journaler refactor. Greg Farnum
09:50 PM Feature #16676: flush dirty data to journal on SIGTERM
We sort of assume that there's a standby and the client who will replay the op, but if we've lost the client it's (al... Greg Farnum
09:36 PM Feature #16676 (New): flush dirty data to journal on SIGTERM
When it receives SIGTERM, the MDS should commit unsafe data to its journal before terminating. Douglas Fuller
03:36 PM Feature #15615 (Fix Under Review): CephFSVolumeClient: List authorized IDs by share
https://github.com/ceph/ceph/pull/9864
https://github.com/ceph/ceph-qa-suite/pull/1080
Ramana Raja
03:31 PM Feature #15406 (Fix Under Review): Add versioning to CephFSVolumeClient interface
https://github.com/ceph/ceph/pull/9864
https://github.com/ceph/ceph-qa-suite/pull/1080
Ramana Raja
06:13 AM Bug #16610: Jewel: segfault in ObjectCacher::FlusherThread
Alas, the gdb log does not give us much more to go on.
Thread 1 (Thread 0x7f891cdfa700 (LWP 5467)):
#0 0x00007f8...
Brad Hubbard
05:48 AM Cleanup #13868 (Resolved): mds: MDCache::cap_import_paths is never used
This member no longer turns up when grepping. Greg Farnum
05:43 AM Bug #7206 (Can't reproduce): Ceph MDS Hang on hadoop workloads
If this was a time issue, we fixed a bunch of weird stuff in the switch to solely client-directed mtime updates. Greg Farnum
05:42 AM Bug #6458 (Can't reproduce): journaler: journal too short during replay
The journal format is different now too; this is probably not useful any more. Greg Farnum
05:39 AM Bug #8405: multimds: FAILED assert(dir->is_frozen_tree_root())
I don't think we've run many multi-mds tests in a while so this is probably still an issue? Greg Farnum
05:28 AM Fix #8094 (Resolved): MDS: be accurate about stats in check_rstats
Zheng fixed this ages ago. Greg Farnum
05:26 AM Bug #10996 (Can't reproduce): dumpling MDS: failed MDLog assert
Dumpling is old and we don't seem to have seen the error again. Greg Farnum
05:18 AM Bug #14641 (Duplicate): don't let users specify 0 on stripe count or object size
Greg Farnum
05:10 AM Bug #8255: mds: directory with missing object cannot be removed
John, much of this is handled now with the metadata damaged flags. What's left? Greg Farnum
01:12 AM Feature #13688: mds: performance: journal inodes with capabilities to limit rejoin time on failover
This might have already been done...Zheng, maybe? Greg Farnum
12:57 AM Cleanup #3677 (Closed): libcephfs, mds: test creation/addition of data pools, create policy
Things have changed a lot and we definitely test adding multiple data pools now. Greg Farnum
12:51 AM Bug #4023: kclient: d_revalidate is abusing d_parent
Is this still an issue? Greg Farnum
12:50 AM Bug #6770: ceph fscache: write file more than a page size to orignal file cause cachfiles bug on EOF
fscache has been through a lot of changes; anybody know if this is still a problem? Greg Farnum
12:41 AM Bug #5950 (Rejected): kcephfs: cephfs set_layout -p 4 gets EINVAL
I think we actually got rid of the cephfs tool at last. Greg Farnum
12:20 AM Bug #7685 (Can't reproduce): hung/failed teuthology test: cfuse_workunit_misc
Greg Farnum
12:19 AM Bug #11294: samba: DISCONNECTED inode warning
This doesn't look anything like #11835 to me; I've not been tracking closely enough to know if we're still seeing han... Greg Farnum
12:01 AM Bug #12895: Failure in TestClusterFull.test_barrier
Is this still a problem? It looks to me like the code is still there but I don't think the test has been failing. Greg Farnum

07/12/2016

11:52 PM Bug #5360 (Rejected): ceph-fuse: failing smbtorture tests
We have other tickets about smbtorture but we also fixed a bunch; who knows which one this was. Greg Farnum
11:51 PM Feature #4906 (Resolved): ceph-fuse: use the Preforker class
See auto-associated revision 66f0704c; this got done years ago. Greg Farnum
11:48 PM Bug #5731 (Can't reproduce): failed pjd link permissions check
So much stuff has changed and we haven't linked any other failures to this ticket. Greg Farnum
11:43 PM Bug #11499 (Can't reproduce): ceph-fuse: don't try and remount during shutdown
We haven't seen this again. Greg Farnum
11:42 PM Fix #13126 (Resolved): qa: ceph-fuse flushes very slowly in some workunits
I spot-checked one; wow, the ObjectCacher is coalescing IOs to a single object. It looks like things have gotten f... Greg Farnum
11:32 PM Bug #14735 (Resolved): ceph-fuse does not mount at boot on Debian Jessie
I don't think there are likely to be any more infernalis releases now that Jewel is out. Greg Farnum
11:28 PM Feature #16467: ceph-fuse: Exclude ceph.* xattr namespace in listxattr
This applies to the kernel client as well, right? Greg Farnum
11:16 PM Feature #15634 (Resolved): Enable fuse_use_invalidate_cb by default
This got merged beginning of June. Greg Farnum
09:33 PM Bug #16668: client: nlink count is not maintained correctly
I suspect the kclient has a similar problem. I'll test it out when I get a chance. I do agree that we probably ought ... Jeff Layton
08:53 PM Bug #16668 (Resolved): client: nlink count is not maintained correctly
Frank reported in #ceph-devel that we don't seem to update nlink correctly from the Client. Looking through the sourc... Greg Farnum
08:31 PM Bug #16655: ceph-fuse is not linked to libtcmalloc
Okay, I guess it was just introduced in some autotools refactor or update then. Thanks! Greg Farnum
08:23 PM Bug #16655: ceph-fuse is not linked to libtcmalloc
Greg Farnum wrote:
> But the fix is on top of master, which should already work?
It looks like the bug is in t...
Nathan Cutler
05:47 PM Bug #16655: ceph-fuse is not linked to libtcmalloc
The link is definitely missing in v10.2.2.... Ken Dreyer
05:35 PM Bug #16655: ceph-fuse is not linked to libtcmalloc
I'm a little confused about the cause here. Ken says
>confirmed that /usr/bin/ceph-fuse is linked to libtcmalloc.so....
Greg Farnum
10:12 AM Bug #16655: ceph-fuse is not linked to libtcmalloc
https://github.com/ceph/ceph/pull/10258 John Spray
08:27 AM Bug #16655 (Fix Under Review): ceph-fuse is not linked to libtcmalloc
Kefu Chai
03:43 AM Bug #16655 (Resolved): ceph-fuse is not linked to libtcmalloc
For ceph-fuse binary at http://download.ceph.com/rpm-jewel/el7/x86_64/ceph-fuse-10.2.2-0.el7.x86_64.rpm
[root@zh...
Zheng Yan
08:29 PM Documentation #16664 (Fix Under Review): Standby Replay configuration doc is wrong
Nathan Cutler
07:46 PM Documentation #16664: Standby Replay configuration doc is wrong
PR: https://github.com/ceph/ceph/pull/10268 Patrick Donnelly
07:33 PM Documentation #16664 (Resolved): Standby Replay configuration doc is wrong
The config settings here are wrong:
http://docs.ceph.com/docs/master/cephfs/standby/
The settings should be pre...
Patrick Donnelly
04:09 PM Bug #16640: libcephfs: Java bindings failing to load on CentOS
I saw this before with Debian. It looks like it's now showing up with rhelish stuff. The non-devel package includes... Noah Watkins
11:25 AM Bug #16610: Jewel: segfault in ObjectCacher::FlusherThread
Just another update after further investigation and discussion in the mailing list.
1. I have tried to run the app...
Goncalo Borges
10:42 AM Bug #16643 (Won't Fix): MDS memory leak in hammer integration testing
MDS leaks are ignored by default in the valgrind task, so presumably you're only seeing this because something else f... John Spray
10:32 AM Feature #16656: mount.ceph: enable consumption of ceph keyring files
I'll go ahead and grab this one. Not a high priority but definitely a nice-to-have from a usability perspective. Jeff Layton
09:42 AM Feature #16656 (Resolved): mount.ceph: enable consumption of ceph keyring files
Jeff pointed this out in doc review:
> we really ought to fix up the mount helper to use the same sort of keyring ...
John Spray
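Ceph keyring files are INI-style, with one section per entity and a "key =" line holding the base64 secret. A minimal sketch of the lookup the mount helper would need to learn (hypothetical code; the key value below is a made-up placeholder, not a real secret):

```python
import configparser

# A ceph keyring is INI-style: one [entity] section with a "key =" line.
# The secret below is an illustrative placeholder.
KEYRING = """\
[client.admin]
key = AQD3EXAMPLExxxxxxxxxxxxxxxxxxxxxxxxxxxw==
"""

def secret_for(entity, keyring_text):
    """Pull the base64 secret for an entity (e.g. 'client.admin')
    out of keyring text, as mount.ceph would need to do."""
    cp = configparser.ConfigParser()
    cp.read_string(keyring_text)
    return cp[entity]["key"].strip()

print(secret_for("client.admin", KEYRING))
```

The point of the feature is that users could then point mount.ceph at the same keyring file the rest of Ceph uses, instead of maintaining a separate plain secretfile.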
09:46 AM Feature #16570 (Fix Under Review): MDS health warning for failure to enforce cache size limit
https://github.com/ceph/ceph/pull/10245 John Spray

07/11/2016

06:46 PM Bug #16610: Jewel: segfault in ObjectCacher::FlusherThread
Just as a quick update, we're waiting on some more information from Goncalo concerning the possibility of nodes runni... Patrick Donnelly
01:54 PM Feature #15619 (In Progress): Repair InoTable during forward scrub
Vishal Kanaujia

07/09/2016

07:45 AM Bug #16643 (Won't Fix): MDS memory leak in hammer integration testing
Lots of Leak_PossiblyLost and Leak_DefinitelyLost in
smithfarm@teuthology:/a/smithfarm-2016-07-08_15:27:38-fs-ham...
Nathan Cutler

07/08/2016

10:36 PM Feature #16419: add statx-like interface to libcephfs
Possibly. The thing is that the btime should only ever change due to a deliberate setattr call. It's unlike the othe... Jeff Layton
10:20 PM Feature #16419: add statx-like interface to libcephfs
We need to be able to serve an accurate btime. I suppose we could break our rules and assume it won't get changed in ... Greg Farnum
09:54 PM Feature #16419: add statx-like interface to libcephfs
Aside from the stuff Greg noticed in his latest review pass, I noticed a number of flaws in the original patchset and... Jeff Layton
08:02 PM Feature #16419: add statx-like interface to libcephfs
Changing the description since this has ballooned a bit in scope. We want to add btime support and a change_attribute... Jeff Layton
10:21 PM Bug #16640 (Won't Fix): libcephfs: Java bindings failing to load on CentOS
http://qa-proxy.ceph.com/teuthology/jspray-2016-07-08_05:19:56-fs-master-distro-basic-mira/302088/teuthology.log
<...
John Spray
12:49 PM Feature #16631 (New): ObjectCacher cache size stats for ceph-fuse
Currently the perf stats from ObjectCacher don't include the actual size of the cache (get_stat_clean, get_stat_dirty... John Spray
08:26 AM Bug #16588 (Fix Under Review): ceph mds dump show incorrect number of metadata pools.
https://github.com/ceph/ceph/pull/10202 Xiaoxi Chen
07:28 AM Backport #16625 (In Progress): jewel: Failing file operations on kernel based cephfs mount point ...
Nathan Cutler
07:18 AM Backport #16625 (Resolved): jewel: Failing file operations on kernel based cephfs mount point lea...
https://github.com/ceph/ceph/pull/10199 Nathan Cutler
07:27 AM Backport #16626 (In Progress): hammer: Failing file operations on kernel based cephfs mount point...
Nathan Cutler
07:18 AM Backport #16626 (Resolved): hammer: Failing file operations on kernel based cephfs mount point le...
https://github.com/ceph/ceph/pull/10198 Nathan Cutler
07:06 AM Bug #16013: Failing file operations on kernel based cephfs mount point leaves unaccessible file b...
*master PR*: https://github.com/ceph/ceph/pull/8778 Nathan Cutler
07:05 AM Bug #16013 (Pending Backport): Failing file operations on kernel based cephfs mount point leaves ...
Nathan Cutler

07/07/2016

09:53 PM Backport #16621 (Resolved): jewel: mds: `session evict` tell command blocks forever with async me...
https://github.com/ceph/ceph/pull/10501 Loïc Dachary
09:53 PM Backport #16620 (Resolved): jewel: Fix shutting down mds timed-out due to deadlock
https://github.com/ceph/ceph/pull/10500 Loïc Dachary
08:58 PM Bug #16592: Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(info.state == MDSMa...
Should note that this is maybe related to: http://tracker.ceph.com/issues/15591 Patrick Donnelly
05:44 PM Bug #16610: Jewel: segfault in ObjectCacher::FlusherThread
Log is now here: /ceph/post/i16610/client.log Patrick Donnelly
02:04 PM Bug #16610 (Resolved): Jewel: segfault in ObjectCacher::FlusherThread
... Patrick Donnelly
03:10 PM Feature #15942: MDS: use FULL_TRY Objecter flag instead of relying on an exemption from full chec...
Related: https://github.com/ceph/ceph/pull/9087 John Spray
03:09 PM Cleanup #16144 (Resolved): Remove cephfs-data-scan tmap_upgrade
John Spray
03:08 PM Cleanup #16195 (In Progress): mds: Don't spam log with standby_replay_restart messages
John Spray
03:05 PM Bug #16288 (Pending Backport): mds: `session evict` tell command blocks forever with async messen...
John Spray
03:04 PM Bug #16396 (Pending Backport): Fix shutting down mds timed-out due to deadlock
John Spray
01:05 PM Feature #16570 (In Progress): MDS health warning for failure to enforce cache size limit
John Spray
01:04 PM Bug #15485 (Duplicate): drop /usr/bin/cephfs
John Spray
11:28 AM Bug #16588: ceph mds dump show incorrect number of metadata pools.
h3. original description
Ceph mds dump shows metadata pool count as 2, even though only one metadata pool is prese...
Nathan Cutler
08:54 AM Bug #16588: ceph mds dump show incorrect number of metadata pools.
Hi Xiaoxi,
You are right about the bug. The metadata_pool field should be left blank. I have changed the descripti...
Rohith Radhakrishnan
08:48 AM Bug #16588: ceph mds dump show incorrect number of metadata pools.
Rohith Radhakrishnan wrote:
> Ceph mds dump shows metadata_pool id as 0. When no FS is present, then metadata_pool ...
Rohith Radhakrishnan
08:34 AM Bug #16588: ceph mds dump show incorrect number of metadata pools.
Hmm, yes, this is because metadata_pool is initialized to 0; this seems worth fixing.
The bug is, when no FS pr...
Xiaoxi Chen
06:50 AM Bug #16588: ceph mds dump show incorrect number of metadata pools.
ceph osd pool stats
*there are no pools!*
ems@rack2-client-3:~$ ceph mds dump
dumped fsmap epoch 3
fs_name ceph...
Rohith Radhakrishnan
06:42 AM Bug #16588: ceph mds dump show incorrect number of metadata pools.
on what basis is the pool id generated? There are no existing pools. So shouldn't the count start with 0 or 1?
Als...
Rohith Radhakrishnan

07/06/2016

03:33 PM Feature #15406 (In Progress): Add versioning to CephFSVolumeClient interface
Ramana Raja
06:29 AM Bug #16588 (Rejected): ceph mds dump show incorrect number of metadata pools.
This is not a bug.
The numbers following "data_pools" and "metadata_pool" are not counts, but pool ids.
root...
Xiaoxi Chen
03:44 AM Bug #16588: ceph mds dump show incorrect number of metadata pools.
Xiaoxi Chen

07/05/2016

09:00 PM Bug #16042 (Fix Under Review): MDS Deadlock on shutdown active rank while busy with metadata IO
PR: https://github.com/ceph/ceph/pull/10142 Patrick Donnelly
05:44 PM Bug #16592 (Need More Info): Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(in...
We've seen a few reports on the ceph-user mailing lists of the latest jewel.... Greg Farnum
11:42 AM Bug #16588 (Resolved): ceph mds dump show incorrect number of metadata pools.
Ceph mds dump shows the metadata_pool id as 0. When no FS is present, the metadata_pool id should be left blank.
ceph...
Rohith Radhakrishnan
 
