Activity
From 08/07/2016 to 09/05/2016
09/05/2016
- 01:09 PM Bug #17212 (Resolved): Unable to remove symlink / fill_inode badness on ffff88025f049f88
- We have a symlink in our filesystem that we cannot remove.
Ceph MDS version: 10.2.2
Kernel version: 4.4.0-34-gene...
- 11:13 AM Bug #16842: mds: replacement MDS crashes on InoTable release
- Hmm, so the client session thinks it had something preallocated that the inotable thinks is already free.
I wonder... - 07:46 AM Bug #16842: mds: replacement MDS crashes on InoTable release
- I hit the same problem; the ceph version is 10.2.2.0.
The client just copies files to a ceph-fuse mounted directory.
...
09/02/2016
- 06:52 PM Backport #17206 (In Progress): jewel: ceph-fuse crash in Client::get_root_ino
- 06:44 PM Backport #17206 (Resolved): jewel: ceph-fuse crash in Client::get_root_ino
- https://github.com/ceph/ceph/pull/10921
- 06:44 PM Backport #17207 (Resolved): jewel: ceph-fuse crash on force unmount with file open
- https://github.com/ceph/ceph/pull/10958
- 09:04 AM Bug #17197 (Resolved): ceph-fuse crash in Client::get_root_ino
(this is a retroactively created ticket for a fix that was pushed without a ticket)
Failure:
http://pulpito.cep...
- 08:46 AM Bug #17172 (Fix Under Review): Failure in snaptest-git-ceph.sh
- 08:42 AM Bug #17172 (Resolved): Failure in snaptest-git-ceph.sh
- https://github.com/ceph/ceph/pull/10957
- 08:44 AM Bug #16764: ceph-fuse crash on force unmount with file open
- https://github.com/ceph/ceph/pull/10958
- 06:58 AM Bug #16764 (Pending Backport): ceph-fuse crash on force unmount with file open
- 06:59 AM Bug #17184: "Segmentation fault" in samba-jewel---basic-mira
- The area of code mentioned above was fixed in master for http://tracker.ceph.com/issues/16764
That wasn't marked f...
- 06:09 AM Bug #17184: "Segmentation fault" in samba-jewel---basic-mira
- Maybe this is a solved problem
> while (!ll_unclosed_fh_set.empty()) {
set<Fh*>::iterator it = ll_unclosed_fh...
- 04:42 AM Bug #17184: "Segmentation fault" in samba-jewel---basic-mira
- Can you fix that up and make sure it was just a backport error, Zheng? We aren't seeing this in master at all so I pr...
- 12:11 AM Bug #17184: "Segmentation fault" in samba-jewel---basic-mira
- ...
- 12:15 AM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- No, unless you can accept losing data (use umount -f /mnt/xxx)
09/01/2016
- 10:13 PM Bug #17184: "Segmentation fault" in samba-jewel---basic-mira
- Ran 6 times against commit:e400999a2cb0972919e35dd8510f8d85f48ceace (jewel-samba-1) and got zero failures. That's one...
- 08:42 PM Bug #17184: "Segmentation fault" in samba-jewel---basic-mira
- Well, I ran the test in question (samba-basic, btrfs, install, fuse, smbtorture) 9 times and got 4 failures. There ar...
- 05:31 PM Bug #17193: truncate can cause unflushed snapshot data lose
- Added some more debugging for this to my wip qa-suite branch https://github.com/ceph/ceph-qa-suite/pull/1156/commits/...
- 05:25 PM Bug #17193 (Resolved): truncate can cause unflushed snapshot data lose
Failure in test TestStrays.test_snapshot_remove
http://qa-proxy.ceph.com/teuthology/jspray-2016-08-30_12:07:21-kce...
- 05:14 PM Bug #17192: "SELinux denials" in knfs-master-testing-basic-smithi
- This is the same as http://tracker.ceph.com/issues/16397, right?
- 04:29 PM Bug #17192 (Duplicate): "SELinux denials" in knfs-master-testing-basic-smithi
- This is for the jewel 10.2.3 release.
See #17074, which we closed for hammer.
Run: http://pulpito.front.sepia.ceph.com/yuriw-2...
- 05:09 PM Support #17183: caught error when trying to handle auth request, probably malformed request
- I'm guessing blob_size=2 is never a reasonable thing for the MDS to be sending to the mon, so I'd suspect that someth...
- 03:47 PM Support #17183: caught error when trying to handle auth request, probably malformed request
- The keyring in question has mon "allow *" osd "allow *" mds "allow *" permissions, and is configured in the ceph.conf...
- 02:15 PM Feature #9466: kclient: Extend CephFSTestCase tests to cover kclient
- Updated test branch that only calls umount when really needed (https://github.com/ceph/ceph-qa-suite/pull/1156)
Lo...
- 10:13 AM Support #17171: Ceph-fuse client hangs on unmount
- Hmm, the logs are pretty useless at the moment; they only contain entries like:...
- 07:18 AM Support #17171: Ceph-fuse client hangs on unmount
- This happens when the automount/autofs process decides to unmount the filesystem because no process has been using it for a couple of...
- 09:16 AM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- Upgraded to 4.8.0-040800rc1-generic. The results are different now.
When the pool becomes full, we get the below message:
l...
08/31/2016
- 10:01 PM Support #17171: Ceph-fuse client hangs on unmount
- When are you doing this unmount? If it's on shutdown, and it happens to be unmounted after networking gets shut down,...
- 09:21 PM Bug #17184: "Segmentation fault" in samba-jewel---basic-mira
- Okay, there's also one at http://pulpito.ceph.com/teuthology-2016-08-14_02:35:02-samba-jewel---basic-mira/
That se...
- 08:19 PM Bug #17184: "Segmentation fault" in samba-jewel---basic-mira
- I'm not seeing this at all on master (just from browsing http://pulpito.ceph.com/?suite=samba#)
Jewel has a core d...
- 03:47 PM Bug #17184 (Rejected): "Segmentation fault" in samba-jewel---basic-mira
- This is for the jewel 10.2.3 release.
Seems to be confirmed by the last several runs.
Runs:
http://pulpito.ceph.com/teuthol...
- 09:03 PM Bug #16909 (Resolved): Stopping an MDS rank does not stop standby-replays for that rank
- 09:01 PM Bug #17172: Failure in snaptest-git-ceph.sh
- This also showed up in a testing branch of mine: http://pulpito.ceph.com/gregf-2016-08-29_04:30:16-fs-greg-fs-testing...
- 08:03 PM Support #17183: caught error when trying to handle auth request, probably malformed request
- You'll need to be a little more clear about the keyring involved; I imagine that's the problem. You should be able to...
- 03:44 PM Support #17183 (New): caught error when trying to handle auth request, probably malformed request
- When trying to start up a new MDS server, I'm getting an authentication failure. Attached is a snippet of the authent...
- 04:01 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- I am using 4.4.8-040408-generic.
- 01:29 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- I tried a pool quota on the 4.8-rc1 kernel; the kernel does recover from the hang when the quota is unset.
- 10:24 AM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- With this latest kernel, the warnings appear only when we try to unmount the FS, and the umount command hangs and fails...
- 09:48 AM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- This is the expected behaviour (otherwise cephfs would need to drop some dirty data silently). Does the kernel stop printing ...
- 08:19 AM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- Reproduced with the below 4.8 kernel:
rack6-client-5:~$ uname -a
Linux rack6-client-5 4.4.8-040408-generic #201604200...
- 03:13 PM Bug #17181 (Duplicate): "[ FAILED ] LibCephFS.ThreesomeInterProcessRecordLocking" in smoke
- Run: http://pulpito.ceph.com/teuthology-2016-08-31_05:00:01-smoke-master-testing-basic-vps/
Job: 394020
Logs: http:...
08/30/2016
- 12:20 PM Bug #17173 (Resolved): Duplicate damage table entries
- Seen on mira021 long-running MDS....
- 10:14 AM Bug #17172 (Resolved): Failure in snaptest-git-ceph.sh
- This run on master:
http://pulpito.ceph.com/jspray-2016-08-29_11:24:10-fs-master-testing-basic-mira/389772/...
- 09:32 AM Support #17171 (Closed): Ceph-fuse client hangs on unmount
- We use autofs/automount to mount/unmount ceph-fuse mounts and from time to time ceph-fuse client hangs on umount and ...
- 12:37 AM Bug #16255: ceph-create-keys: sometimes blocks forever if mds "allow" is set
- I've had the same issue when using ceph-deploy gatherkeys (jewel).
If I change "mds 'allow'" to "mds 'allow *'", it's t...
08/29/2016
- 09:23 PM Bug #17105: multimds: allow_multimds not required when max_mds is set in ceph.conf at startup
- New PR: https://github.com/ceph/ceph/pull/10914
- 06:20 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- It would be helpful; we're still *surprised* that this is a problem. Just noting that we don't include it in our nigh...
- 02:22 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- @Greg: How should we proceed now? Is there a need to test with the 4.8 kernel?
- 02:19 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- Rohith Radhakrishnan wrote:
> 4.4 is the latest for Ubuntu 14.04.5. But let me see if I can get hold of a 16.04 ...
- 02:17 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- 4.4 is the latest for Ubuntu 14.04.5. But let me see if I can get hold of a 16.04 machine with a 4.8 kernel and try...
- 02:16 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- Turns out we don't actually test the kernel against full pools; see #9466 for updates on it.
- 01:55 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- 4.4.0 is pretty old at this point, and there are some fixes that may help this that have gone upstream since then. Is...
- 02:15 PM Feature #9466: kclient: Extend CephFSTestCase tests to cover kclient
- Updating title to reflect that these days we have lots of tests (in fs/recovery, which is now a bit of a silly name f...
08/26/2016
- 04:14 PM Feature #9880 (Resolved): mds: more gracefully handle EIO on missing dir object
- I think we're good to go, then.
- 03:01 PM Feature #9880: mds: more gracefully handle EIO on missing dir object
- no specific suggestions
- 03:55 PM Bug #17113: MDS EImport crashing with mds/journal.cc: 2929: FAILED assert(mds->sessionmap.get_ver...
- I think the logs you've provided should be enough. Thanks!
- 10:43 AM Bug #17113: MDS EImport crashing with mds/journal.cc: 2929: FAILED assert(mds->sessionmap.get_ver...
- Will full logs be enough to diagnose this?
I'd like to start recovering this cluster, but if you would need me to run ad...
08/25/2016
- 10:47 PM Backport #17126 (Resolved): mds: fix double-unlock on shutdown
- 07:28 PM Feature #11172 (In Progress): mds: inode filtering on 'dump cache' asok
- 05:33 PM Feature #12274 (Fix Under Review): mds: start forward scrubs from all subtree roots, skip non-aut...
- https://github.com/ceph/ceph/pull/10876
- 05:31 PM Backport #16946: jewel: client: nlink count is not maintained correctly
- FYI: Github is annoying and does some kind of timestamp sort when displaying commits. I'm not sure if it's the origin...
- 05:17 PM Backport #16946: jewel: client: nlink count is not maintained correctly
- @Jeff this is a very unusual situation and I apologize for the noise. It turns out that github does not display the c...
- 03:13 PM Backport #16946 (In Progress): jewel: client: nlink count is not maintained correctly
- 03:13 PM Backport #16946: jewel: client: nlink count is not maintained correctly
- You want the latter approach, and you want to pick them in the order they were originally committed, in case we need ...
- 02:54 PM Backport #16946 (Need More Info): jewel: client: nlink count is not maintained correctly
- Actually, you were right to ask, my question was about something else :-) It's good to know that the four commits are...
- 02:40 PM Backport #16946 (New): jewel: client: nlink count is not maintained correctly
- This is perfect, thank you !
- 02:38 PM Backport #16946 (In Progress): jewel: client: nlink count is not maintained correctly
- 12:11 PM Backport #16946: jewel: client: nlink count is not maintained correctly
- Yes. I think you'll want the entire patch pile from that PR. These 4 patches at least:
https://github.com/ceph/cep...
- 11:59 AM Backport #16946 (Need More Info): jewel: client: nlink count is not maintained correctly
- git cherry-pick -x https://github.com/ceph/ceph/pull/10386/commits/f3605d39e53b3ff777eb64538abfa62a5f98a4f2 which is ...
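The backport workflow discussed above (cherry-pick each commit with -x, oldest first, so every backported commit records its source SHA) can be sketched against a throwaway repo. The repo, branch, and commit names here are made up for illustration; a real backport would pick the SHAs from the original PR:

```shell
#!/bin/sh
# Build a tiny repo with two fixes on the main branch, then backport them
# in their original order using cherry-pick -x.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo base > f; git add f; git commit -qm base
main=$(git symbolic-ref --short HEAD)   # "master" or "main", depending on git
git branch backport                     # backport branch forks at the base commit
echo one >> f; git commit -qam fix-1
echo two >> f; git commit -qam fix-2
first=$(git rev-parse "$main~1")
second=$(git rev-parse "$main")
git checkout -q backport
# Order matters: apply oldest first, matching the original history.
# -x appends "(cherry picked from commit <sha>)" to each message.
git cherry-pick -x "$first" "$second"
git log --oneline
```

Picking in original order keeps later fixes applying cleanly on top of the earlier ones and preserves history if a subsequent fixup also needs backporting.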
- 04:59 PM Bug #17074 (Closed): "SELinux denials" in knfs-master-testing-basic-smithi
- Per IRC:
(09:54:34 AM) yuriw: loicd dgalloway can we say that old tests for hammer ran in ovh never had SELinux enabl...
- 04:53 PM Bug #17074: "SELinux denials" in knfs-master-testing-basic-smithi
- The suite definitely passed in previous point releases:
http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-04-24...
- 04:47 PM Bug #17074 (Need More Info): "SELinux denials" in knfs-master-testing-basic-smithi
- I don't think CephFS/knfs tests and SELinux ever worked on Hammer. Yuri, can you find evidence they did or else close...
- 04:55 PM Feature #4142 (Duplicate): MDS: forward scrub: Implement cross-MDS scrubbing
- 04:25 PM Bug #16592 (Need More Info): Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(in...
- Moving this down and setting Need More Info based on Patrick's investigation and the new asserts; let me know if that...
- 04:23 PM Bug #15903: smbtorture failing on pipe_number test
- We aren't seeing this in regular nightlies; marking it down.
- 03:28 PM Bug #17113: MDS EImport crashing with mds/journal.cc: 2929: FAILED assert(mds->sessionmap.get_ver...
- It's not super-likely the rebooting client actually caused this problem. If it did, it was only incidentally, and it'...
- 06:55 AM Bug #17113: MDS EImport crashing with mds/journal.cc: 2929: FAILED assert(mds->sessionmap.get_ver...
- Full log was uploaded ceph-post-file: 610fd186-9150-4e6b-8050-37dc314af39b
Before I recover, I'd really like to se...
- 12:35 PM Bug #16655 (Resolved): ceph-fuse is not linked to libtcmalloc
- 12:35 PM Bug #15705 (Resolved): ceph status mds output ignores active MDS when there is a standby replay
- 11:56 AM Backport #15968 (Resolved): jewel: ceph status mds output ignores active MDS when there is a stan...
- 11:54 AM Backport #15968 (In Progress): jewel: ceph status mds output ignores active MDS when there is a s...
- 11:56 AM Backport #16697 (Resolved): jewel: ceph-fuse is not linked to libtcmalloc
- 11:54 AM Backport #16697 (In Progress): jewel: ceph-fuse is not linked to libtcmalloc
- 11:56 AM Backport #17131 (In Progress): jewel: Jewel: segfault in ObjectCacher::FlusherThread
- 06:27 AM Backport #17131 (Resolved): jewel: Jewel: segfault in ObjectCacher::FlusherThread
- https://github.com/ceph/ceph/pull/10864
- 07:23 AM Bug #15702 (Resolved): mds: wrongly treat symlink inode as normal file/dir when symlink inode is ...
- 07:20 AM Backport #16083 (Resolved): jewel: mds: wrongly treat symlink inode as normal file/dir when symli...
- 01:11 AM Bug #16610 (Pending Backport): Jewel: segfault in ObjectCacher::FlusherThread
- This got merged to master forever ago. Guess it should get backported too.
08/24/2016
- 11:41 PM Bug #17105 (Fix Under Review): multimds: allow_multimds not required when max_mds is set in ceph....
- PR: https://github.com/ceph/ceph/pull/10848
- 09:55 PM Bug #17096 (Won't Fix): Pool name is not displayed after changing CephFS File layout using extend...
- I think this is just a result of not having the current OSDMap yet. If you're doing IO on the client, you're unlikely...
- 08:59 PM Backport #17126 (Resolved): mds: fix double-unlock on shutdown
- https://github.com/ceph/ceph/pull/10847
- 06:00 PM Bug #17113 (Need More Info): MDS EImport crashing with mds/journal.cc: 2929: FAILED assert(mds->s...
- It looks like you're running with multiple active MDSes, which is not currently recommended. We saw this in #16043 as...
- 09:44 AM Bug #17113 (Can't reproduce): MDS EImport crashing with mds/journal.cc: 2929: FAILED assert(mds->...
- I have a tiny Ceph cluster (3x mon, 8x osd, 2x mds) with ceph-mds-10.2.2-2.fc24.x86_64.
Recently, one of the clients usin...
- 04:24 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- We increased the pool size, but the system is in the same state.
Steps done:
=========================...
- 01:36 PM Bug #17115: kernel panic when running IO with cephfs and resource pool becomes full
- These are warnings (write blocked for too long), not a panic. When the pool is full, OSD write requests get paused. If...
- 01:12 PM Bug #17115 (Resolved): kernel panic when running IO with cephfs and resource pool becomes full
- Steps:
Create a data pool with limited quota size and start running IO from the client. After the pool becomes full, ...
- 03:53 PM Bug #16288 (Resolved): mds: `session evict` tell command blocks forever with async messenger (Tes...
- 08:41 AM Support #17079: Io runs only on one pool even though 2 pools are attached to cephfs FS.
- You are right. I could do that.
- 07:17 AM Support #17079: Io runs only on one pool even though 2 pools are attached to cephfs FS.
- There is no option to do that. Your requirement is strange; why not enlarge the quota of the first pool?
- 05:58 AM Support #17079: Io runs only on one pool even though 2 pools are attached to cephfs FS.
- @Zheng: What I would like to achieve is after adding 2 pools to a ceph FS, I should be able to redirect the objects f...
08/23/2016
- 06:23 PM Bug #17105: multimds: allow_multimds not required when max_mds is set in ceph.conf at startup
- I think we want to force users to set multi-mds flags explicitly, not implicitly via the initial config. I'm fine wit...
- 06:02 PM Bug #17105 (Resolved): multimds: allow_multimds not required when max_mds is set in ceph.conf at ...
- Problem:...
- 04:08 PM Bug #17099 (Closed): MDS command for listing mds_cache_size
- The config option can be shown through the standard config interface. The counter values are exported via the perf co...
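The two interfaces mentioned above (the config interface for the option's value, perf counters for the live cache state) would be queried through the daemon's admin socket. Since that needs a running MDS, this sketch only assembles and prints the commands; the daemon name "mds.a" is a placeholder:

```shell
#!/bin/sh
# Placeholder daemon name; substitute your MDS's id.
daemon="mds.a"
# Config interface: shows the configured mds_cache_size value.
cfg_cmd="ceph daemon $daemon config get mds_cache_size"
# Perf counters: includes the current cache/inode counts, among others.
perf_cmd="ceph daemon $daemon perf dump"
echo "$cfg_cmd"
echo "$perf_cmd"
```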
- 07:52 AM Bug #17099 (Closed): MDS command for listing mds_cache_size
- Not able to find mds_cache_size listed anywhere. For e.g in ceph mds dump or elsewhere. If currently there is no way ...
- 01:44 PM Backport #16621 (Resolved): jewel: mds: `session evict` tell command blocks forever with async me...
- 01:27 PM Bug #17096: Pool name is not displayed after changing CephFS File layout using extended attributes
- Just saw the note: *Note When reading layouts, the pool will usually be indicated by name. However, in rare cases whe...
- 07:39 AM Bug #16396 (Resolved): Fix shutting down mds timed-out due to deadlock
- 07:39 AM Bug #16358 (Resolved): Session::check_access() is buggy
- 07:39 AM Bug #16164 (Resolved): mds: enforce a dirfrag limit on entries
- 07:39 AM Bug #16137 (Resolved): client: crash in unmount when fuse_use_invalidate_cb is enabled
- 07:39 AM Bug #16042 (Resolved): MDS Deadlock on shutdown active rank while busy with metadata IO
- 07:39 AM Bug #16022 (Resolved): MDSMonitor::check_subs() is very buggy
- 07:39 AM Bug #16013 (Resolved): Failing file operations on kernel based cephfs mount point leaves unaccess...
- 07:39 AM Bug #12653 (Resolved): fuse mounted file systems fails SAMBA CTDB ping_pong rw test with v9.0.2
- 06:51 AM Backport #16037 (Resolved): jewel: MDSMonitor::check_subs() is very buggy
- 06:51 AM Backport #16215 (Resolved): jewel: client: crash in unmount when fuse_use_invalidate_cb is enabled
- 06:51 AM Backport #16299 (Resolved): jewel: mds: fix SnapRealm::have_past_parents_open()
- 06:51 AM Backport #16320 (Resolved): jewel: fs: fuse mounted file systems fails SAMBA CTDB ping_pong rw te...
- 06:51 AM Backport #16515 (Resolved): jewel: Session::check_access() is buggy
- 06:50 AM Backport #16560 (Resolved): jewel: mds: enforce a dirfrag limit on entries
- 06:50 AM Backport #16620 (Resolved): jewel: Fix shutting down mds timed-out due to deadlock
- 06:50 AM Backport #16625 (Resolved): jewel: Failing file operations on kernel based cephfs mount point lea...
- 06:50 AM Backport #16797 (Resolved): jewel: MDS Deadlock on shutdown active rank while busy with metadata IO
08/22/2016
- 06:45 PM Bug #17096 (Won't Fix): Pool name is not displayed after changing CephFS File layout using extend...
- Steps:
1) Create a pool and a metadata pool and create a new cephfs using the pools and mount the file system from ...
- 11:47 AM Support #17079: Io runs only on one pool even though 2 pools are attached to cephfs FS.
- Tried setting a non-default pool using "setfattr", but I am not able to set more than one pool on a directory at a tim...
08/19/2016
- 04:18 PM Bug #14716: "Thread.cc: 143: FAILED assert(status == 0)" in fs-hammer---basic-smithi
- Same in hammer 0.94.8
http://qa-proxy.ceph.com/teuthology/yuriw-2016-08-18_20:11:00-fs-master---basic-smithi/373246/...
- 01:18 PM Support #17079: Io runs only on one pool even though 2 pools are attached to cephfs FS.
- the first pool is default pool. see http://docs.ceph.com/docs/master/cephfs/file-layouts/ for how to store file in no...
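The mechanism from the file-layouts doc linked above boils down to setting the layout xattr on a directory: files created under it afterwards are written to that pool, while existing files keep their old layout. The pool and directory names below are placeholders, and since the commands need a live CephFS mount this sketch only prints them:

```shell
#!/bin/sh
# Placeholder names; substitute a data pool already added to the FS and a
# directory under your CephFS mount.
pool="cephfs_data2"
dir="/mnt/cephfs/on-second-pool"
# Point the directory's layout at the non-default pool...
set_cmd="setfattr -n ceph.dir.layout.pool -v $pool $dir"
# ...and read the layout back to verify.
get_cmd="getfattr -n ceph.dir.layout $dir"
echo "$set_cmd"
echo "$get_cmd"
```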
- 11:23 AM Support #17079 (New): Io runs only on one pool even though 2 pools are attached to cephfs FS.
- Steps:
1) Create a pool and a metadata pool and create a new cephfs using the pools.
2) Now create another data ...
08/18/2016
- 08:44 PM Bug #17074: "SELinux denials" in knfs-master-testing-basic-smithi
- Not a result of an environmental issue or system misconfiguration.
- 08:21 PM Bug #17074 (Closed): "SELinux denials" in knfs-master-testing-basic-smithi
- These are point release tests for hammer 0.94.8.
Run: http://pulpito.front.sepia.ceph.com/yuriw-2016-08-17_20:57:47-knfs-...
- 06:47 AM Bug #17069: multimds: slave rmdir assertion failure
- Strange. Have you ever used snapshots on the testing cluster?
08/17/2016
- 08:06 PM Bug #17069 (Closed): multimds: slave rmdir assertion failure
- ...
- 04:09 PM Backport #16946: jewel: client: nlink count is not maintained correctly
- https://github.com/ceph/ceph/pull/10386/commits/f3605d39e53b3ff777eb64538abfa62a5f98a4f2 conflicts
08/16/2016
- 02:20 PM Feature #16419: add statx-like interface to libcephfs
- Ok, smaller set of changes is now merged. Now we have the larger set to contend with. I've gone ahead and rolled some...
08/15/2016
- 06:48 PM Feature #12274 (In Progress): mds: start forward scrubs from all subtree roots, skip non-auth met...
08/13/2016
- 11:09 AM Feature #16419: add statx-like interface to libcephfs
- I have a PR up with a smaller set of changes here:
https://github.com/ceph/ceph/pull/10691
This is just cha...
08/12/2016
- 11:18 AM Bug #16640: libcephfs: Java bindings failing to load on CentOS
So, the PR had a passing test run:
https://github.com/ceph/ceph-qa-suite/pull/1084
http://pulpito.ceph.com/jspray...
- 01:43 AM Bug #16983: mds: handle_client_open failing on open
- It's already fixed by https://github.com/ceph/ceph/pull/8778
08/11/2016
- 05:04 PM Bug #16013: Failing file operations on kernel based cephfs mount point leaves unaccessible file b...
- User seeing an assertion failure in the MDS in v10.2.1:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-A...
- 05:03 PM Bug #16983: mds: handle_client_open failing on open
- Zheng, I think you fixed this in 4d15eb12298e007744486e28924a6f0ae071bd06 from PR #8778.
Here's the issue from cep...
08/10/2016
- 06:18 PM Bug #16983 (Resolved): mds: handle_client_open failing on open
- Randy Orr reported an assertion failure on the ceph-users list:...
- 11:33 AM Feature #16419: add statx-like interface to libcephfs
- Test run mostly passed last night, with only failures for unrelated problems -- the known problem with valgrind on ce...
08/09/2016
- 08:04 PM Feature #15069 (Fix Under Review): MDS: multifs: enable two filesystems to point to same pools if...
- https://github.com/ceph/ceph/pull/10636
- 08:04 PM Feature #15068 (Fix Under Review): fsck: multifs: enable repair tools to read from one filesystem...
- 08:04 PM Feature #15068: fsck: multifs: enable repair tools to read from one filesystem and write to another
- https://github.com/ceph/ceph/pull/10636
- 07:47 PM Feature #16419: add statx-like interface to libcephfs
- Found it. I had transposed the size and change_attr args in one call to update_inode_file_bits. fsx now seems to be O...
- 03:22 PM Feature #16419: add statx-like interface to libcephfs
- Mostly working now, but I'm seeing occasional problems with truncating files. I bisected the problem down to a one li...
- 06:14 PM Feature #16973 (Resolved): Log path as well as ino when detecting metadata damage
- Currently our cluster log messages look like this:...
- 04:21 PM Bug #16909 (Fix Under Review): Stopping an MDS rank does not stop standby-replays for that rank
- https://github.com/ceph/ceph/pull/10628
- 04:20 PM Bug #16919 (Fix Under Review): MDS: Standby replay daemons don't drop purged strays
- https://github.com/ceph/ceph/pull/10606
- 01:29 PM Bug #16925: multimds: cfuse (?) hang on fsx.sh workunit
- this can either be caused by hang MDS request or be caused by hang read/write (MDS does not properly issue Frw caps t...
- 10:42 AM Bug #16954: Metadata damage reported with snapshots+smallcache+dirfrags ("object missing on disk")
- Hmm, no failures in that re-run, so it's not quite completely reproducible.
08/08/2016
- 02:45 PM Bug #16926: multimds: kclient fails to mount
- (pass "-k testing" when scheduling runs that will use a kclient, to ensure you're getting a nice recent cephfs kernel)
- 01:49 PM Bug #16914: multimds: pathologically slow deletions in some tests
- Retest with fuse default permissions set differently, because it's doing too many getattrs at the moment.
- 12:51 PM Support #16884: rename() doesn't work between directories
- Donatas: currently, renaming files in and out of trees with different quotas is going to give you EXDEV. You can wor...
- 06:41 AM Support #16884: rename() doesn't work between directories
- guys, so what's the summary about this 'feature'?
- 10:48 AM Bug #16954: Metadata damage reported with snapshots+smallcache+dirfrags ("object missing on disk")
- ...
- 10:36 AM Bug #16954: Metadata damage reported with snapshots+smallcache+dirfrags ("object missing on disk")
- Given that it happened twice in one job, there seems a decent chance it's reproducible; let's see:
http://pulpito.ceph.com...
- 10:31 AM Bug #16954 (New): Metadata damage reported with snapshots+smallcache+dirfrags ("object missing on...
http://pulpito.ceph.com/jspray-2016-08-07_16:42:13-fs-wip-prompt-frag-distro-basic-mira/353833...
- 08:44 AM Bug #14681 (Resolved): Wrong ceph get mdsmap assertion
- 08:44 AM Bug #14319 (Resolved): Double decreased the count to trim caps which will cause failing to respon...
- 08:42 AM Bug #16154 (Resolved): mds: lock waiters are not finished in the same order that they were added
- 08:42 AM Bug #15920 (Resolved): mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
- 08:42 AM Bug #15723 (Resolved): client: fstat cap release
- 08:42 AM Bug #15689 (Resolved): Confusing MDS log message when shut down with stalled journaler reads
- 08:42 AM Feature #15615 (Resolved): CephFSVolumeClient: List authorized IDs by share
- 08:41 AM Feature #15406 (Resolved): Add versioning to CephFSVolumeClient interface
- 08:34 AM Bug #11482: kclient: intermittent log warnings "client.XXXX isn't responding to mclientcaps(revoke)"
- infernalis is EOL
- 08:33 AM Bug #15050 (Resolved): deleting striped file in cephfs doesn't free up file's space
- 08:32 AM Bug #14144 (Resolved): standy-replay MDS does not cleanup finished replay threads
- 08:28 AM Backport #15281 (Rejected): infernalis: standy-replay MDS does not cleanup finished replay threads
- 08:28 AM Backport #15057 (Rejected): infernalis: deleting striped file in cephfs doesn't free up file's space
- 08:28 AM Backport #14843 (Rejected): infernalis: test_object_deletion fails (tasks.cephfs.test_damage.Test...
- 08:28 AM Backport #14761 (Rejected): infernalis: ceph-fuse does not mount at boot on Debian Jessie
- 08:28 AM Backport #14690 (Rejected): infernalis: Client::_fsync() on a given file does not wait unsafe req...
- 08:28 AM Backport #13890 (Rejected): infernalis: Race in TestSessionMap.test_version_splitting
- 08:25 AM Backport #16299: jewel: mds: fix SnapRealm::have_past_parents_open()
- https://github.com/ceph/ceph/pull/9447
- 08:22 AM Backport #14668 (Resolved): hammer: Wrong ceph get mdsmap assertion
- 08:22 AM Backport #15056 (Resolved): hammer: deleting striped file in cephfs doesn't free up file's space
- 08:21 AM Backport #15512 (Resolved): hammer: Double decreased the count to trim caps which will cause fail...
- 08:21 AM Backport #15898 (Resolved): jewel: Confusing MDS log message when shut down with stalled journale...
- 08:21 AM Backport #16041 (Resolved): jewel: mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
- 08:21 AM Backport #16082 (Resolved): hammer: mds: wrongly treat symlink inode as normal file/dir when syml...
- 08:21 AM Backport #16135 (Resolved): jewel: MDS: fix getattr starve setattr
- 08:21 AM Backport #16136 (Resolved): jewel: MDSMonitor fixes
- 08:21 AM Backport #16152 (Resolved): jewel: fs: client: fstat cap release
- 08:20 AM Backport #16626 (Resolved): hammer: Failing file operations on kernel based cephfs mount point le...
- 08:19 AM Backport #16830 (Resolved): jewel: CephFSVolumeClient: List authorized IDs by share
- 08:19 AM Backport #16831 (Resolved): jewel: Add versioning to CephFSVolumeClient interface