Activity
From 03/27/2014 to 04/25/2014
04/25/2014
- 07:45 PM Bug #7966 (Fix Under Review): ceph-mds respawn doesn't always work
- 07:02 PM Bug #8211: 0.80~rc1: MDS failed to respawn
- Debian GNU/Linux x86_64 (i.e. amd64)
- 10:23 AM Bug #8211: 0.80~rc1: MDS failed to respawn
- Yeah. What environment are you running under?
- 01:38 AM Bug #8211 (Resolved): 0.80~rc1: MDS failed to respawn
- From the MDS log (timestamps dropped):...
- 06:47 AM Bug #8200: failing kclient_workunit_kclient test
- http://qa-proxy.ceph.com/teuthology/teuthology-2014-04-23_23:01:45-kcephfs-master-testing-basic-plana/212542/
04/24/2014
- 11:30 AM Bug #8201 (Resolved): client: (optionally) crash/exit if we are refused reconnect to the mds
- Currently we hang, and there is no way for users of the fs to know that it is never going to unhang.
- 11:29 AM Bug #8200: failing kclient_workunit_kclient test
- teuthology-2014-04-20_23:04:17-kcephfs-master-testing-basic-plana/206396/
- 11:27 AM Bug #8200 (Resolved): failing kclient_workunit_kclient test
- http://qa-proxy.ceph.com/teuthology/teuthology-2014-04-22_23:05:45-kcephfs-firefly-testing-basic-plana/210650/
http:...
04/22/2014
- 07:16 PM Bug #8177 (Duplicate): Client: seg fault in verify_reply_trace on traceless reply
- 07:11 AM Bug #8177: Client: seg fault in verify_reply_trace on traceless reply
- No idea if they are the same.
- 06:52 AM Bug #8177: Client: seg fault in verify_reply_trace on traceless reply
- See #5021 and wip-5021; possibly related (or the same)?
The wip-5021 branch worked okay except that it caused a crash with sm...
04/21/2014
- 10:18 PM Bug #8177: Client: seg fault in verify_reply_trace on traceless reply
- 08:37 PM Bug #8177: Client: seg fault in verify_reply_trace on traceless reply
- Also /a/teuthology-2014-04-14_23:00:38-fs-master-testing-basic-plana/192241
- 08:31 PM Bug #8177 (Resolved): Client: seg fault in verify_reply_trace on traceless reply
- http://qa-proxy.ceph.com/teuthology/teuthology-2014-04-17_23:01:49-fs-firefly-distro-basic-plana/199687/...
- 03:48 PM Bug #8172 (Resolved): ceph_get_cap+0x2b/0x120
- commit b9baf44e (ceph: pre-allocate ceph_cap struct for ceph_add_cap())
- 02:44 PM Bug #8172 (Resolved): ceph_get_cap+0x2b/0x120
- ...
04/20/2014
04/19/2014
- 10:23 PM Bug #8140: 0.79: MDS / CephFS: unable to read directory
- Already using it. :) Thanks for the useful advice. Very helpful.
- 08:43 PM Bug #8140: 0.79: MDS / CephFS: unable to read directory
- 3.15. For now, please use the readdir_max_entries mount option
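For illustration, a kernel-client mount using that option might look like the following; the monitor address, secret file path, mount point, and entry count are placeholders, not taken from this ticket:
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,readdir_max_entries=512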
- 06:23 PM Bug #8140: 0.79: MDS / CephFS: unable to read directory
- Makes sense, thank you for explaining.
04/18/2014
- 06:36 AM Bug #8140: 0.79: MDS / CephFS: unable to read directory
- When -ENOMEM happens, the kclient does not properly release (cache-coherence-related) resources. That's why ceph-fuse...
- 05:04 AM Bug #8140: 0.79: MDS / CephFS: unable to read directory
- I found this commit in ceph-client: https://github.com/ceph/ceph-client/commit/54008399dc0ce511a07b87f1af3d1f5c791982a4
...
04/17/2014
- 11:28 PM Bug #8140: 0.79: MDS / CephFS: unable to read directory
- Sorry, that can't be right. First of all, I can't find this commit. Could you please use the correct commit ID? I'd like t...
- 11:13 PM Bug #8140: 0.79: MDS / CephFS: unable to read directory
- Which kernel version contains this fix?
- 05:14 PM Bug #8140 (Resolved): 0.79: MDS / CephFS: unable to read directory
- This issue should be fixed by commit 54008399 (ceph: preallocate buffer for readdir reply). For old kernels, you can a...
- 02:11 PM Bug #8140 (Resolved): 0.79: MDS / CephFS: unable to read directory
- With the kernel client I got the following error when I attempted to list files in a directory containing 1021 files:
...
- 09:21 PM Bug #8092 (Resolved): multimds ceph-fuse hang on write waiting for max size
- 10:05 AM Bug #7966 (Resolved): ceph-mds respawn doesn't always work
04/16/2014
- 01:21 AM Bug #8118 (Closed): MDS crashes
- Active MDS crashes (v0.79).
Log file attached.
Host did not run out of memory; the standby MDS took over successfully....
04/15/2014
04/14/2014
- 06:33 PM Bug #8092 (Need More Info): multimds ceph-fuse hang on write waiting for max size
- 04:20 PM Bug #8025: nfs-on-kclient: rm -r failed
- yes, commit 85f6def97b75b840d6be97671cb8bacd2a074a24 should fix this
- 01:37 PM Bug #8025: nfs-on-kclient: rm -r failed
- This turned up again on yesterday's run: http://qa-proxy.ceph.com/teuthology/teuthology-2014-04-13_23:01:11-knfs-mast...
- 09:39 AM Bug #8004 (Resolved): LibCephFS.HardlinkNoOriginal hang
- 07:18 AM Fix #8094: MDS: be accurate about stats in check_rstats
- 07:03 AM Fix #8094 (Resolved): MDS: be accurate about stats in check_rstats
- We see occasional crashes in the CDir::check_rstats function, on...
- 07:04 AM Bug #8090: multimds: mds crash in check_rstats
- I made a fix ticket for that: http://tracker.ceph.com/issues/8094
- 06:54 AM Bug #8090: multimds: mds crash in check_rstats
- It's a CDir::check_rstats() bug rather than rstats corruption.
04/13/2014
- 10:12 PM Bug #8092 (Resolved): multimds ceph-fuse hang on write waiting for max size
- ...
- 09:45 PM Bug #8090 (Duplicate): multimds: mds crash in check_rstats
- ubuntu@teuthology:/var/lib/teuthworker/archive/teuthology-2014-04-11_23:01:29-multimds-master-testing-basic-plana/187...
04/11/2014
- 10:32 PM Bug #8054 (Resolved): multimds hang on fsstress
- 10:34 AM Bug #8054 (In Progress): multimds hang on fsstress
- Reverted the commit; it breaks MDS startup in the trivial single-MDS case.
- 06:55 AM Bug #8054 (Resolved): multimds hang on fsstress
04/10/2014
- 06:50 PM Bug #8054: multimds hang on fsstress
- 06:54 AM Bug #8054 (Resolved): multimds hang on fsstress
- ceph-fuse:
ubuntu@teuthology:/var/lib/teuthworker/archive/teuthology-2014-04-09_23:01:37-multimds-master-testing-bas...
- 06:58 AM Bug #8055 (Can't reproduce): knfs: NFS: nfs4_discover_server_trunking unhandled error -5. Exiting...
- ...
04/08/2014
- 08:25 PM Bug #8004: LibCephFS.HardlinkNoOriginal hang
- 12:12 PM Bug #8026 (Resolved): shared pointer completely break multiple mds
- 07:04 AM Bug #8026: shared pointer completely break multiple mds
- 12:04 AM Bug #8026 (Resolved): shared pointer completely break multiple mds
- 06:59 AM Bug #3424: java: Add the correct JUnit package dependencies on supported platforms and ensure the...
- 06:51 AM Bug #8025: nfs-on-kclient: rm -r failed
04/07/2014
- 10:08 PM Bug #8025 (Resolved): nfs-on-kclient: rm -r failed
- teuthology-2014-04-06_23:01:11-knfs-master-testing-basic-plana/175859/...
- 09:26 PM Bug #7958 (Resolved): ceph-fuse+fsx umount hang on leaked inode reference
- 09:17 PM Feature #7319 (Resolved): qa: multimds, no failure
- 09:13 PM Bug #7739 (Resolved): mds: uninitialized field in message
- 04:18 PM Bug #8005 (Rejected): fuse hang
- No error in the client log; it looks like the MDS was killed by someone.
- 12:57 PM Bug #8010: It's impossible to remove unused filesystem pools from a cluster
- It's happening with
ceph version 0.78 (f6c746c314d7b87b8419b6e584c94bfe4511dbd4)
Linux access.car.dot.com 3.13.0-...
- 10:07 AM Bug #8010 (Resolved): It's impossible to remove unused filesystem pools from a cluster
- We've inadvertently made it impossible to remove a filesystem from a Ceph cluster. If there is no data in the FS, it...
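For reference, the operation this ticket is about is deleting the FS pools with the standard pool removal command; a sketch, with an illustrative pool name:
ceph osd pool delete metadata metadata --yes-i-really-really-mean-it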
04/06/2014
- 09:22 PM Bug #8005: fuse hang
- Still looks like the MDS was dead.
- 01:44 PM Bug #8005 (Rejected): fuse hang
- ...
- 05:40 PM Bug #8006 (Rejected): fuse hang on flush (icache branch)
- The flush hang is because ceph-fuse was unmounting (it received a signal). Unmounting can't finish because the MDS was dead at th...
- 01:49 PM Bug #8006 (Rejected): fuse hang on flush (icache branch)
- ...
- 05:29 PM Bug #8004: LibCephFS.HardlinkNoOriginal hang
- oh, and the 32-bit pointer thing is because ceph-fuse is running under valgrind.
- 05:27 PM Bug #8004: LibCephFS.HardlinkNoOriginal hang
- seems easy to reproduce, just hit this again with...
- 01:41 PM Bug #8004 (Resolved): LibCephFS.HardlinkNoOriginal hang
- ubuntu@teuthology:/var/lib/teuthworker/archive/sage-2014-04-05_15:44:13-multimds:verify-wip-ms-dump-testing-basic-pla...
- 01:22 PM Bug #7739: mds: uninitialized field in message
04/04/2014
- 07:28 PM Bug #7980: 0.78: MDS crash (segmentation fault) on client wake-up from suspend.
- Works as expected, problem solved, thank you.
- 12:18 AM Bug #7980: 0.78: MDS crash (segmentation fault) on client wake-up from suspend.
- Very nice, thank you. I'll test and confirm.
- 12:01 AM Bug #7980 (Resolved): 0.78: MDS crash (segmentation fault) on client wake-up from suspend.
- fixed by https://github.com/ceph/ceph/commit/fb72330fb3514be690dc60598242036aa560e023
04/03/2014
- 11:47 PM Bug #7980 (Resolved): 0.78: MDS crash (segmentation fault) on client wake-up from suspend.
- MDS crashes (segmentation fault) when I wake up a machine with CephFS (mounted using the kernel client) from suspend:
...
- 06:53 AM Bug #7958: ceph-fuse+fsx umount hang on leaked inode reference
- 05:32 AM Bug #7958: ceph-fuse+fsx umount hang on leaked inode reference
- I guess it's introduced by commit f1c7b4ef0 (client: pin Inode during readahead). Readahead raced with truncate. Objec...
04/02/2014
- 10:00 PM Bug #7958: ceph-fuse+fsx umount hang on leaked inode reference
- ubuntu@teuthology:/var/lib/teuthworker/archive/teuthology-2014-04-01_23:00:33-fs-firefly-distro-basic-plana/160589
- 10:23 AM Bug #7958 (Resolved): ceph-fuse+fsx umount hang on leaked inode reference
- ubuntu@teuthology:/var/lib/teuthworker/archive/teuthology-2014-03-31_23:00:36-fs-master-testing-basic-plana/157636
...
- 04:12 PM Bug #7966 (Resolved): ceph-mds respawn doesn't always work
- ...
03/29/2014
03/28/2014
- 03:56 PM Bug #7867 (Resolved): client/Client.cc: 2087: FAILED assert(!unclean)
- 03:47 PM Bug #7867 (Pending Backport): client/Client.cc: 2087: FAILED assert(!unclean)
- 08:25 AM Bug #7780 (Resolved): When full flag is set, even MDS writes are blocked
- The fix was merged as commit c647a03fffb2e1e997dbdb0ff128eeb6efc47deb.
03/27/2014
- 10:26 PM Bug #7880: multimds: directory gets rsynced twice
- 06:54 AM Bug #7880 (Resolved): multimds: directory gets rsynced twice
- Probably the mtime doesn't get set properly the first time?...
- 10:40 AM Bug #7867: client/Client.cc: 2087: FAILED assert(!unclean)