Activity
From 08/02/2016 to 08/31/2016
08/31/2016
- 10:01 PM Support #17171: Ceph-fuse client hangs on unmount
- When are you doing this unmount? If it's on shutdown, and it happens to be unmounted after networking gets shut down,...
- 09:21 PM Bug #17184: "Segmentation fault" in samba-jewel---basic-mira
- Okay, there's also one at http://pulpito.ceph.com/teuthology-2016-08-14_02:35:02-samba-jewel---basic-mira/
That se...
- 08:19 PM Bug #17184: "Segmentation fault" in samba-jewel---basic-mira
- I'm not seeing this at all on master (just from browsing http://pulpito.ceph.com/?suite=samba#)
Jewel has a core d...
- 03:47 PM Bug #17184 (Rejected): "Segmentation fault" in samba-jewel---basic-mira
- This is for jewel 10.2.3 release
Seems to be verified by several last runs
Runs:
http://pulpito.ceph.com/teuthol...
- 09:03 PM Bug #16909 (Resolved): Stopping an MDS rank does not stop standby-replays for that rank
- 09:01 PM Bug #17172: Failure in snaptest-git-ceph.sh
- This also showed up in a testing branch of mine: http://pulpito.ceph.com/gregf-2016-08-29_04:30:16-fs-greg-fs-testing...
- 08:03 PM Support #17183: caught error when trying to handle auth request, probably malformed request
- You'll need to be a little more clear about the keyring involved; I imagine that's the problem. You should be able to...
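(As a rough sketch of the kind of check meant here — the MDS id "myhost", the paths, and the cap set are illustrative, so verify against the docs for your release:)
$ ceph auth get mds.myhost                                  # the key the monitors expect
$ sudo cat /var/lib/ceph/mds/ceph-myhost/keyring            # the key the daemon actually presents
# If they disagree, regenerate the on-disk keyring with the usual MDS caps:
$ ceph auth get-or-create mds.myhost mon 'allow profile mds' osd 'allow rwx' mds 'allow' \
      -o /var/lib/ceph/mds/ceph-myhost/keyring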
- 03:44 PM Support #17183 (New): caught error when trying to handle auth request, probably malformed request
- When trying to start up a new MDS server, I'm getting an authentication failure. Attached is a snippet of the authent...
- 04:01 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- I am using 4.4.8-040408-generic
- 01:29 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- I tried a pool quota on the 4.8-rc1 kernel. The kernel does recover from the hang when the quota is unset
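(For reference, a minimal sketch of setting and unsetting such a quota, assuming the data pool is named cephfs_data:)
$ ceph osd pool set-quota cephfs_data max_bytes 1073741824   # 1 GiB cap; writes pause once the pool reports full
$ ceph osd pool set-quota cephfs_data max_bytes 0            # remove the quota again; clients should then recover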
- 10:24 AM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- In this latest kernel, the warnings appear only when we try to unmount the FS. And the umount command hangs and fails...
- 09:48 AM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- This is the expected behaviour (otherwise cephfs would need to drop some dirty data silently). Does the kernel stop printing ...
- 08:19 AM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- Reproduced with below 4.8 kernel :-
rack6-client-5:~$ uname -a
Linux rack6-client-5 4.4.8-040408-generic #201604200...
- 03:13 PM Bug #17181 (Duplicate): "[ FAILED ] LibCephFS.ThreesomeInterProcessRecordLocking" in smoke
- Run: http://pulpito.ceph.com/teuthology-2016-08-31_05:00:01-smoke-master-testing-basic-vps/
Job: 394020
Logs: http:...
08/30/2016
- 12:20 PM Bug #17173 (Resolved): Duplicate damage table entries
- Seen on mira021 long-running MDS....
- 10:14 AM Bug #17172 (Resolved): Failure in snaptest-git-ceph.sh
- This run on master:
http://pulpito.ceph.com/jspray-2016-08-29_11:24:10-fs-master-testing-basic-mira/389772/...
- 09:32 AM Support #17171 (Closed): Ceph-fuse client hangs on unmount
- We use autofs/automount to mount/unmount ceph-fuse mounts and from time to time ceph-fuse client hangs on umount and ...
- 12:37 AM Bug #16255: ceph-create-keys: sometimes blocks forever if mds "allow" is set
- I've had the same issue when using ceph-deploy gatherkeys (jewel)
if I change "mds 'allow'" to "mds 'allow *'", it's t...
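(Illustrative commands for widening that cap — client.admin is just an example entity name:)
$ ceph auth get client.admin                                              # inspect the current caps
$ ceph auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *'   # replace mds 'allow' with mds 'allow *'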
08/29/2016
- 09:23 PM Bug #17105: multimds: allow_multimds not required when max_mds is set in ceph.conf at startup
- New PR: https://github.com/ceph/ceph/pull/10914
- 06:20 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- It would be helpful; we're still *surprised* that this is a problem. Just noting that we don't include it in our nigh...
- 02:22 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- @Greg: How do we proceed now? Is there a need to test with the 4.8 kernel now?
- 02:19 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- Rohith Radhakrishnan wrote:
> 4.4 is the latest for Ubuntu 14.04.5. But let me see if I can get hold of a 16.04 ...
- 02:17 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- 4.4 is the latest for Ubuntu 14.04.5. But let me see if I can get hold of a 16.04 machine with a 4.8 kernel and try...
- 02:16 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- Turns out we don't actually test the kernel against full pools; see #9466 for updates on it.
- 01:55 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- 4.4.0 is pretty old at this point, and there are some fixes that may help this that have gone upstream since then. Is...
- 02:15 PM Feature #9466: kclient: Extend CephFSTestCase tests to cover kclient
- Updating title to reflect that these days we have lots of tests (in fs/recovery, which is now a bit of a silly name f...
08/26/2016
- 04:14 PM Feature #9880 (Resolved): mds: more gracefully handle EIO on missing dir object
- I think we're good to go, then.
- 03:01 PM Feature #9880: mds: more gracefully handle EIO on missing dir object
- no specific suggestions
- 03:55 PM Bug #17113: MDS EImport crashing with mds/journal.cc: 2929: FAILED assert(mds->sessionmap.get_ver...
- I think the logs you've provided should be enough. Thanks!
- 10:43 AM Bug #17113: MDS EImport crashing with mds/journal.cc: 2929: FAILED assert(mds->sessionmap.get_ver...
- Will full logs be enough for diagnosis?
I'd like to start recovering this cluster, but if you would need me to run ad...
08/25/2016
- 10:47 PM Backport #17126 (Resolved): mds: fix double-unlock on shutdown
- 07:28 PM Feature #11172 (In Progress): mds: inode filtering on 'dump cache' asok
- 05:33 PM Feature #12274 (Fix Under Review): mds: start forward scrubs from all subtree roots, skip non-aut...
- https://github.com/ceph/ceph/pull/10876
- 05:31 PM Backport #16946: jewel: client: nlink count is not maintained correctly
- FYI: Github is annoying and does some kind of timestamp sort when displaying commits. I'm not sure if it's the origin...
- 05:17 PM Backport #16946: jewel: client: nlink count is not maintained correctly
- @Jeff this is a very unusual situation and I apologize for the noise. It turns out that github does not display the c...
- 03:13 PM Backport #16946 (In Progress): jewel: client: nlink count is not maintained correctly
- 03:13 PM Backport #16946: jewel: client: nlink count is not maintained correctly
- You want the latter approach, and you want to pick them in the order they were originally committed, in case we need ...
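(A sketch of that workflow, with a placeholder branch name and commit hashes:)
$ git checkout -b wip-16946-jewel upstream/jewel
$ git cherry-pick -x <sha1> <sha2> <sha3> <sha4>   # hashes listed in the order they landed on master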
- 02:54 PM Backport #16946 (Need More Info): jewel: client: nlink count is not maintained correctly
- Actually, you were right to ask, my question was about something else :-) It's good to know that the four commits are...
- 02:40 PM Backport #16946 (New): jewel: client: nlink count is not maintained correctly
- This is perfect, thank you !
- 02:38 PM Backport #16946 (In Progress): jewel: client: nlink count is not maintained correctly
- 12:11 PM Backport #16946: jewel: client: nlink count is not maintained correctly
- Yes. I think you'll want the entire patch pile from that PR. These 4 patches at least:
https://github.com/ceph/cep...
- 11:59 AM Backport #16946 (Need More Info): jewel: client: nlink count is not maintained correctly
- git cherry-pick -x https://github.com/ceph/ceph/pull/10386/commits/f3605d39e53b3ff777eb64538abfa62a5f98a4f2 which is ...
- 04:59 PM Bug #17074 (Closed): "SELinux denials" in knfs-master-testing-basic-smithi
- per IRC
(09:54:34 AM) yuriw: loicd dgalloway can we say that old tests for hammer ran in ovh never had SELinux enabl...
- 04:53 PM Bug #17074: "SELinux denials" in knfs-master-testing-basic-smithi
- the suite definitely passed in previous point releases
http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-04-24...
- 04:47 PM Bug #17074 (Need More Info): "SELinux denials" in knfs-master-testing-basic-smithi
- I don't think CephFS/knfs tests and SELinux ever worked on Hammer. Yuri, can you find evidence they did or else close...
- 04:55 PM Feature #4142 (Duplicate): MDS: forward scrub: Implement cross-MDS scrubbing
- 04:25 PM Bug #16592 (Need More Info): Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(in...
- Moving this down and setting Need More Info based on Patrick's investigation and the new asserts; let me know if that...
- 04:23 PM Bug #15903: smbtorture failing on pipe_number test
- We aren't seeing this in regular nightlies; marking it down.
- 03:28 PM Bug #17113: MDS EImport crashing with mds/journal.cc: 2929: FAILED assert(mds->sessionmap.get_ver...
- It's not super-likely the rebooting client actually caused this problem. If it did, it was only incidentally, and it'...
- 06:55 AM Bug #17113: MDS EImport crashing with mds/journal.cc: 2929: FAILED assert(mds->sessionmap.get_ver...
- Full log was uploaded ceph-post-file: 610fd186-9150-4e6b-8050-37dc314af39b
Before I recover, I'd really like to se...
- 12:35 PM Bug #16655 (Resolved): ceph-fuse is not linked to libtcmalloc
- 12:35 PM Bug #15705 (Resolved): ceph status mds output ignores active MDS when there is a standby replay
- 11:56 AM Backport #15968 (Resolved): jewel: ceph status mds output ignores active MDS when there is a stan...
- 11:54 AM Backport #15968 (In Progress): jewel: ceph status mds output ignores active MDS when there is a s...
- 11:56 AM Backport #16697 (Resolved): jewel: ceph-fuse is not linked to libtcmalloc
- 11:54 AM Backport #16697 (In Progress): jewel: ceph-fuse is not linked to libtcmalloc
- 11:56 AM Backport #17131 (In Progress): jewel: Jewel: segfault in ObjectCacher::FlusherThread
- 06:27 AM Backport #17131 (Resolved): jewel: Jewel: segfault in ObjectCacher::FlusherThread
- https://github.com/ceph/ceph/pull/10864
- 07:23 AM Bug #15702 (Resolved): mds: wrongly treat symlink inode as normal file/dir when symlink inode is ...
- 07:20 AM Backport #16083 (Resolved): jewel: mds: wrongly treat symlink inode as normal file/dir when symli...
- 01:11 AM Bug #16610 (Pending Backport): Jewel: segfault in ObjectCacher::FlusherThread
- This got merged to master forever ago. Guess it should get backported too.
08/24/2016
- 11:41 PM Bug #17105 (Fix Under Review): multimds: allow_multimds not required when max_mds is set in ceph....
- PR: https://github.com/ceph/ceph/pull/10848
- 09:55 PM Bug #17096 (Won't Fix): Pool name is not displayed after changing CephFS File layout using extend...
- I think this is just a result of not having the current OSDMap yet. If you're doing IO on the client, you're unlikely...
- 08:59 PM Backport #17126 (Resolved): mds: fix double-unlock on shutdown
- https://github.com/ceph/ceph/pull/10847
- 06:00 PM Bug #17113 (Need More Info): MDS EImport crashing with mds/journal.cc: 2929: FAILED assert(mds->s...
- It looks like you're running with multiple active MDSes, which is not currently recommended. We saw this in #16043 as...
- 09:44 AM Bug #17113 (Can't reproduce): MDS EImport crashing with mds/journal.cc: 2929: FAILED assert(mds->...
- I have a tiny Ceph cluster (3x mon, 8x osd, 2x mds) with ceph-mds-10.2.2-2.fc24.x86_64.
Recently, one of the clients usin...
- 04:24 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- We increased the pool size to a higher value, but the system is in the same state
Steps done:-
=========================...
- 01:36 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- These are warning (write blocked for too long) instead of panic. When pool is full, write osd requests get paused. If...
- 01:12 PM Bug #17115 (Resolved): kernel panic when running IO with cephfs and resource pool becomes full
- Steps:-
Create a data pool with limited quota size and start running IO from client. After the pool becomes full, ...
- 03:53 PM Bug #16288 (Resolved): mds: `session evict` tell command blocks forever with async messenger (Tes...
- 08:41 AM Support #17079: Io runs only on one pool even though 2 pools are attached to cephfs FS.
- You are right. I could do that.
- 07:17 AM Support #17079: Io runs only on one pool even though 2 pools are attached to cephfs FS.
- There is no option to do that. Your requirement is strange; why not enlarge the quota of the first pool?
- 05:58 AM Support #17079: Io runs only on one pool even though 2 pools are attached to cephfs FS.
- @Zheng: What I would like to achieve is after adding 2 pools to a ceph FS, I should be able to redirect the objects f...
08/23/2016
- 06:23 PM Bug #17105: multimds: allow_multimds not required when max_mds is set in ceph.conf at startup
- I think we want to force users to set multi-mds flags explicitly, not implicitly via the initial config. I'm fine wit...
- 06:02 PM Bug #17105 (Resolved): multimds: allow_multimds not required when max_mds is set in ceph.conf at ...
- Problem:...
- 04:08 PM Bug #17099 (Closed): MDS command for listing mds_cache_size
- The config option can be shown through the standard config interface. The counter values are exported via the perf co...
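(For reference, a sketch via the admin socket — "a" stands in for the MDS name:)
$ ceph daemon mds.a config get mds_cache_size   # the configured limit
$ ceph daemon mds.a perf dump                   # perf counters, including cache/inode statistics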
- 07:52 AM Bug #17099 (Closed): MDS command for listing mds_cache_size
- Not able to find mds_cache_size listed anywhere, e.g. in ceph mds dump or elsewhere. If currently there is no way ...
- 01:44 PM Backport #16621 (Resolved): jewel: mds: `session evict` tell command blocks forever with async me...
- 01:27 PM Bug #17096: Pool name is not displayed after changing CephFS File layout using extended attributes
- Just saw the note: *Note When reading layouts, the pool will usually be indicated by name. However, in rare cases whe...
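(Reading a layout back looks roughly like the following; the path, pool name and output line are illustrative:)
$ getfattr -n ceph.file.layout /mnt/cephfs/somefile
# ceph.file.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data"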
- 07:39 AM Bug #16396 (Resolved): Fix shutting down mds timed-out due to deadlock
- 07:39 AM Bug #16358 (Resolved): Session::check_access() is buggy
- 07:39 AM Bug #16164 (Resolved): mds: enforce a dirfrag limit on entries
- 07:39 AM Bug #16137 (Resolved): client: crash in unmount when fuse_use_invalidate_cb is enabled
- 07:39 AM Bug #16042 (Resolved): MDS Deadlock on shutdown active rank while busy with metadata IO
- 07:39 AM Bug #16022 (Resolved): MDSMonitor::check_subs() is very buggy
- 07:39 AM Bug #16013 (Resolved): Failing file operations on kernel based cephfs mount point leaves unaccess...
- 07:39 AM Bug #12653 (Resolved): fuse mounted file systems fails SAMBA CTDB ping_pong rw test with v9.0.2
- 06:51 AM Backport #16037 (Resolved): jewel: MDSMonitor::check_subs() is very buggy
- 06:51 AM Backport #16215 (Resolved): jewel: client: crash in unmount when fuse_use_invalidate_cb is enabled
- 06:51 AM Backport #16299 (Resolved): jewel: mds: fix SnapRealm::have_past_parents_open()
- 06:51 AM Backport #16320 (Resolved): jewel: fs: fuse mounted file systems fails SAMBA CTDB ping_pong rw te...
- 06:51 AM Backport #16515 (Resolved): jewel: Session::check_access() is buggy
- 06:50 AM Backport #16560 (Resolved): jewel: mds: enforce a dirfrag limit on entries
- 06:50 AM Backport #16620 (Resolved): jewel: Fix shutting down mds timed-out due to deadlock
- 06:50 AM Backport #16625 (Resolved): jewel: Failing file operations on kernel based cephfs mount point lea...
- 06:50 AM Backport #16797 (Resolved): jewel: MDS Deadlock on shutdown active rank while busy with metadata IO
08/22/2016
- 06:45 PM Bug #17096 (Won't Fix): Pool name is not displayed after changing CephFS File layout using extend...
- Steps-
1) Create a pool and a metadata pool and create a new cephfs using the pools and mount the file system from ...
- 11:47 AM Support #17079: Io runs only on one pool even though 2 pools are attached to cephfs FS.
- Tried setting a non-default pool using setfattr, but I am not able to set more than one pool to a directory at a tim...
08/19/2016
- 04:18 PM Bug #14716: "Thread.cc: 143: FAILED assert(status == 0)" in fs-hammer---basic-smithi
- Same in hammer 0.94.8
http://qa-proxy.ceph.com/teuthology/yuriw-2016-08-18_20:11:00-fs-master---basic-smithi/373246/...
- 01:18 PM Support #17079: Io runs only on one pool even though 2 pools are attached to cephfs FS.
- The first pool is the default pool. See http://docs.ceph.com/docs/master/cephfs/file-layouts/ for how to store files in no...
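(A rough sketch from those docs — the filesystem, pool, and path names are placeholders, and some older releases used "ceph mds add_data_pool" instead:)
$ ceph fs add_data_pool cephfs cephfs_data2                            # make the extra pool usable by the filesystem
$ setfattr -n ceph.dir.layout.pool -v cephfs_data2 /mnt/cephfs/dir2    # new files created under dir2 then go to cephfs_data2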
- 11:23 AM Support #17079 (New): Io runs only on one pool even though 2 pools are attached to cephfs FS.
- Steps:-
1) Create a pool and a metadata pool and create a new cephfs using the pools.
2) Now create another data ...
08/18/2016
- 08:44 PM Bug #17074: "SELinux denials" in knfs-master-testing-basic-smithi
- Not a result of an environmental issue or system misconfiguration.
- 08:21 PM Bug #17074 (Closed): "SELinux denials" in knfs-master-testing-basic-smithi
- This is from the point release tests for hammer 0.94.8
Run: http://pulpito.front.sepia.ceph.com/yuriw-2016-08-17_20:57:47-knfs-...
- 06:47 AM Bug #17069: multimds: slave rmdir assertion failure
- Strange. Have you ever used snapshots on the testing cluster?
08/17/2016
- 08:06 PM Bug #17069 (Closed): multimds: slave rmdir assertion failure
- ...
- 04:09 PM Backport #16946: jewel: client: nlink count is not maintained correctly
- https://github.com/ceph/ceph/pull/10386/commits/f3605d39e53b3ff777eb64538abfa62a5f98a4f2 conflicts
08/16/2016
- 02:20 PM Feature #16419: add statx-like interface to libcephfs
- Ok, smaller set of changes is now merged. Now we have the larger set to contend with. I've gone ahead and rolled some...
08/15/2016
- 06:48 PM Feature #12274 (In Progress): mds: start forward scrubs from all subtree roots, skip non-auth met...
08/13/2016
- 11:09 AM Feature #16419: add statx-like interface to libcephfs
- I have a PR up with a smaller set of changes here:
https://github.com/ceph/ceph/pull/10691
This is just cha...
08/12/2016
- 11:18 AM Bug #16640: libcephfs: Java bindings failing to load on CentOS
So, the PR had a passing test run:
https://github.com/ceph/ceph-qa-suite/pull/1084
http://pulpito.ceph.com/jspray...
- 01:43 AM Bug #16983: mds: handle_client_open failing on open
- It's already fixed by https://github.com/ceph/ceph/pull/8778
08/11/2016
- 05:04 PM Bug #16013: Failing file operations on kernel based cephfs mount point leaves unaccessible file b...
- User seeing an assertion failure in the MDS in v10.2.1:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-A...
- 05:03 PM Bug #16983: mds: handle_client_open failing on open
- Zheng, I think you fixed this in 4d15eb12298e007744486e28924a6f0ae071bd06 from PR #8778.
Here's the issue from cep...
08/10/2016
- 06:18 PM Bug #16983 (Resolved): mds: handle_client_open failing on open
- Randy Orr reported an assertion failure on the ceph-users list:...
- 11:33 AM Feature #16419: add statx-like interface to libcephfs
- Test run mostly passed last night, with only failures for unrelated problems -- the known problem with valgrind on ce...
08/09/2016
- 08:04 PM Feature #15069 (Fix Under Review): MDS: multifs: enable two filesystems to point to same pools if...
- https://github.com/ceph/ceph/pull/10636
- 08:04 PM Feature #15068 (Fix Under Review): fsck: multifs: enable repair tools to read from one filesystem...
- 08:04 PM Feature #15068: fsck: multifs: enable repair tools to read from one filesystem and write to another
- https://github.com/ceph/ceph/pull/10636
- 07:47 PM Feature #16419: add statx-like interface to libcephfs
- Found it. I had transposed the size and change_attr args in one call to update_inode_file_bits. fsx now seems to be O...
- 03:22 PM Feature #16419: add statx-like interface to libcephfs
- Mostly working now, but I'm seeing occasional problems with truncating files. I bisected the problem down to a one li...
- 06:14 PM Feature #16973 (Resolved): Log path as well as ino when detecting metadata damage
- Currently our cluster log messages look like this:...
- 04:21 PM Bug #16909 (Fix Under Review): Stopping an MDS rank does not stop standby-replays for that rank
- https://github.com/ceph/ceph/pull/10628
- 04:20 PM Bug #16919 (Fix Under Review): MDS: Standby replay daemons don't drop purged strays
- https://github.com/ceph/ceph/pull/10606
- 01:29 PM Bug #16925: multimds: cfuse (?) hang on fsx.sh workunit
- This can be caused either by a hung MDS request or by a hung read/write (MDS does not properly issue Frw caps t...
- 10:42 AM Bug #16954: Metadata damage reported with snapshots+smallcache+dirfrags ("object missing on disk")
- Hmm, no failures in that re-run, so it's not quite completely reproducible.
08/08/2016
- 02:45 PM Bug #16926: multimds: kclient fails to mount
- (pass "-k testing" when scheduling runs that will use a kclient, to ensure you're getting a nice recent cephfs kernel)
- 01:49 PM Bug #16914: multimds: pathologically slow deletions in some tests
- retest with fuse default permissions set differently because it's doing too many getattr at the moment
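(Assuming this refers to the fuse_default_permissions client option, the override would look roughly like the following; the exact spelling of the command-line form may vary by release:)
$ ceph-fuse /mnt/cephfs --fuse_default_permissions=false   # or set "fuse default permissions = false" under [client] in ceph.conf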
- 12:51 PM Support #16884: rename() doesn't work between directories
- Donatas: currently, renaming files in and out of trees with different quotas is going to give you EXDEV. You can wor...
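(A hypothetical illustration — the paths and quota values are made up; mv's copy+unlink fallback is the usual way around the EXDEV:)
$ setfattr -n ceph.quota.max_bytes -v 10000000 /mnt/cephfs/a
$ setfattr -n ceph.quota.max_bytes -v 20000000 /mnt/cephfs/b
$ mv /mnt/cephfs/a/file /mnt/cephfs/b/   # rename() returns EXDEV here, so mv falls back to copying and unlinking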
- 06:41 AM Support #16884: rename() doesn't work between directories
- guys, so what's the summary about this 'feature'?
- 10:48 AM Bug #16954: Metadata damage reported with snapshots+smallcache+dirfrags ("object missing on disk")
- ...
- 10:36 AM Bug #16954: Metadata damage reported with snapshots+smallcache+dirfrags ("object missing on disk")
- Given that it happened twice in one job, seems a decent chance it's reproducible, let's see:
http://pulpito.ceph.com...
- 10:31 AM Bug #16954 (New): Metadata damage reported with snapshots+smallcache+dirfrags ("object missing on...
http://pulpito.ceph.com/jspray-2016-08-07_16:42:13-fs-wip-prompt-frag-distro-basic-mira/353833...
- 08:44 AM Bug #14681 (Resolved): Wrong ceph get mdsmap assertion
- 08:44 AM Bug #14319 (Resolved): Double decreased the count to trim caps which will cause failing to respon...
- 08:42 AM Bug #16154 (Resolved): mds: lock waiters are not finished in the same order that they were added
- 08:42 AM Bug #15920 (Resolved): mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
- 08:42 AM Bug #15723 (Resolved): client: fstat cap release
- 08:42 AM Bug #15689 (Resolved): Confusing MDS log message when shut down with stalled journaler reads
- 08:42 AM Feature #15615 (Resolved): CephFSVolumeClient: List authorized IDs by share
- 08:41 AM Feature #15406 (Resolved): Add versioning to CephFSVolumeClient interface
- 08:34 AM Bug #11482: kclient: intermittent log warnings "client.XXXX isn't responding to mclientcaps(revoke)"
- infernalis is EOL
- 08:33 AM Bug #15050 (Resolved): deleting striped file in cephfs doesn't free up file's space
- 08:32 AM Bug #14144 (Resolved): standy-replay MDS does not cleanup finished replay threads
- 08:28 AM Backport #15281 (Rejected): infernalis: standy-replay MDS does not cleanup finished replay threads
- 08:28 AM Backport #15057 (Rejected): infernalis: deleting striped file in cephfs doesn't free up file's space
- 08:28 AM Backport #14843 (Rejected): infernalis: test_object_deletion fails (tasks.cephfs.test_damage.Test...
- 08:28 AM Backport #14761 (Rejected): infernalis: ceph-fuse does not mount at boot on Debian Jessie
- 08:28 AM Backport #14690 (Rejected): infernalis: Client::_fsync() on a given file does not wait unsafe req...
- 08:28 AM Backport #13890 (Rejected): infernalis: Race in TestSessionMap.test_version_splitting
- 08:25 AM Backport #16299: jewel: mds: fix SnapRealm::have_past_parents_open()
- https://github.com/ceph/ceph/pull/9447
- 08:22 AM Backport #14668 (Resolved): hammer: Wrong ceph get mdsmap assertion
- 08:22 AM Backport #15056 (Resolved): hammer: deleting striped file in cephfs doesn't free up file's space
- 08:21 AM Backport #15512 (Resolved): hammer: Double decreased the count to trim caps which will cause fail...
- 08:21 AM Backport #15898 (Resolved): jewel: Confusing MDS log message when shut down with stalled journale...
- 08:21 AM Backport #16041 (Resolved): jewel: mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
- 08:21 AM Backport #16082 (Resolved): hammer: mds: wrongly treat symlink inode as normal file/dir when syml...
- 08:21 AM Backport #16135 (Resolved): jewel: MDS: fix getattr starve setattr
- 08:21 AM Backport #16136 (Resolved): jewel: MDSMonitor fixes
- 08:21 AM Backport #16152 (Resolved): jewel: fs: client: fstat cap release
- 08:20 AM Backport #16626 (Resolved): hammer: Failing file operations on kernel based cephfs mount point le...
- 08:19 AM Backport #16830 (Resolved): jewel: CephFSVolumeClient: List authorized IDs by share
- 08:19 AM Backport #16831 (Resolved): jewel: Add versioning to CephFSVolumeClient interface
08/05/2016
- 09:04 PM Backport #16946 (Resolved): jewel: client: nlink count is not maintained correctly
- https://github.com/ceph/ceph/pull/10877
- 02:57 PM Bug #16919 (In Progress): MDS: Standby replay daemons don't drop purged strays
- 02:57 PM Bug #16909 (In Progress): Stopping an MDS rank does not stop standby-replays for that rank
08/04/2016
- 08:04 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- So, I tried to run Ceph outside of Docker to run gdb on ceph-mon, but I don't know what I'm supposed to see.
$ gdb /usr...
- 07:26 PM Bug #16926 (Rejected): multimds: kclient fails to mount
- In many test cases, the kernel client fails to mount with EIO:
http://pulpito.ceph.com/pdonnell-2016-08-03_12:43:1...
- 06:58 PM Bug #16925 (Can't reproduce): multimds: cfuse (?) hang on fsx.sh workunit
- http://pulpito.ceph.com/pdonnell-2016-07-18_20:02:54-multimds-master---basic-mira/321794/...
- 05:23 PM Bug #16924: Crash replaying EExport
- git blame points at b7e698a52bf7838f8e37842074c510a6561f165b from Zheng.
> mds: no bloom filter for replica dir
>...
- 04:49 PM Bug #16924 (Resolved): Crash replaying EExport
- ...
- 01:55 PM Bug #16919: MDS: Standby replay daemons don't drop purged strays
- The standby does have all the information about which files are open (since they get journaled), right? Or do we only...
- 09:59 AM Bug #16919 (Resolved): MDS: Standby replay daemons don't drop purged strays
This is not fatal, because the inodes will ultimately end up at the top of the LRU list and get trimmed, but it's a...
- 12:24 PM Documentation #16906: doc: clarify path restriction instructions
- @John Spray
fixup:
https://github.com/ceph/ceph/pull/10573/commits/d1277f116cd297bae8da7b3e1a7000d3f99c6a51
- 10:41 AM Bug #16920 (New): mds.inodes* perf counters sound like the number of inodes but they aren't
These counters actually reflect the LRU, which is a collection of dentries, not inodes.
mds_mem.ino on the other...
08/03/2016
- 09:08 PM Bug #16914 (Resolved): multimds: pathologically slow deletions in some tests
- http://qa-proxy.ceph.com/teuthology/pdonnell-2016-07-18_20:02:54-multimds-master---basic-mira/321823/teuthology.log
...
- 08:52 PM Bug #16886: multimds: kclient hang (?) in tests
- Another blogbench:
http://pulpito.ceph.com/pdonnell-2016-07-29_08:28:00-multimds-master---basic-mira/339886/
<p...
- 08:01 PM Feature #16419: add statx-like interface to libcephfs
- I have a prototype set for this, but I now think that the handling of the change_attr is wrong for directories. I'm g...
- 01:56 PM Bug #16909 (Resolved): Stopping an MDS rank does not stop standby-replays for that rank
Run vstart with MDS=2 and -s flag
Set max_mds to 2
See that you get two active daemons and two standby-replays
S...
- 12:57 PM Bug #16640: libcephfs: Java bindings failing to load on CentOS
- Yes, you really don't want to load the unversioned library at runtime. It's possible that you'll end up picking up a ...
- 11:21 AM Bug #16881: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- They are racy, but this particular case is a bit odd:...
- 11:08 AM Bug #16668 (Pending Backport): client: nlink count is not maintained correctly
- 11:02 AM Documentation #16906: doc: clarify path restriction instructions
- So there's no bug here as such, it's just that the instructions don't explicitly tell you to write out your client ke...
- 08:08 AM Documentation #16906 (Resolved): doc: clarify path restriction instructions
- I do path restriction follow:http://docs.ceph.com/docs/master/cephfs/client-auth/...
- 10:45 AM Bug #16880 (Duplicate): saw valgrind issue <kind>Leak_StillReachable</kind> in /var/log/ceph/va...
- See:
http://tracker.ceph.com/issues/14794
http://tracker.ceph.com/issues/15356
(aka The Mystery Of the Valgrind ... - 10:41 AM Bug #16876 (Duplicate): java.lang.UnsatisfiedLinkError: Can't load library: /usr/lib/jni/libcephf...
- This should be fixed in master now (there was a backed-out change for 16640 in teuthology, then finally the fix was h...
- 10:39 AM Bug #16879 (Resolved): scrub: inode wrongly marked free: 0x10000000002
08/02/2016
- 07:08 PM Bug #16186 (Duplicate): kclient: drops requests without poking system calls on reconnect
- 07:08 PM Bug #16186: kclient: drops requests without poking system calls on reconnect
- I'm going to go ahead and close this out, and pursue the follow-up work in tracker #15255.
- 03:33 PM Bug #16668 (Resolved): client: nlink count is not maintained correctly
- Ok, PR is now merged!
- 11:13 AM Bug #16879 (Fix Under Review): scrub: inode wrongly marked free: 0x10000000002
- https://github.com/ceph/ceph-qa-suite/pull/1107