Activity
From 06/08/2014 to 07/07/2014
07/07/2014
- 01:24 PM Tasks #8535 (Closed): audit the journaler send-to-OSD ordering
- 07:30 AM Bug #8757 (Won't Fix): no need to hold write lock on hardlink's dir while creating anchortable entry
- I've finally figured out why creating hardlink farms takes so long on versions up to 0.80: we take a write lock on the dir t...
- 06:55 AM Bug #8749 (Duplicate): knfs: EBUSY on umount
- 06:55 AM Bug #8748 (Duplicate): knfs: mount failure
- 06:52 AM Bug #8708 (Resolved): kcephfs: direct_io tests failing
- 06:52 AM Bug #8745 (Resolved): ceph-fuse: pjd link 78 failure
- 01:16 AM Bug #8745: ceph-fuse: pjd link 78 failure
07/06/2014
- 01:54 AM Feature #8690: MDS: Allow some kind of recovery when pools are deleted out from underneath us
- Yes, "data_cache" was a tiered cache pool, but the EC pool behind it was dropped as well.
IMHO recovery shouldn't be t...
07/05/2014
- 02:40 PM Bug #8749 (Duplicate): knfs: EBUSY on umount
- ubuntu@teuthology:/a/teuthology-2014-07-04_23:10:02-knfs-master-testing-basic-plana/343740
- 02:39 PM Bug #8748 (Duplicate): knfs: mount failure
- Command failed on plana18 with status 32: 'sudo mount -o
rw,hard,intr,nfsvers=4
plana24.front.sepia.cep...
07/03/2014
- 09:47 PM Bug #8745 (Resolved): ceph-fuse: pjd link 78 failure
- ubuntu@teuthology:/var/lib/teuthworker/archive/teuthology-2014-07-02_23:04:01-fs-next-testing-basic-plana/339949...
07/02/2014
- 12:36 PM Tasks #8535: audit the journaler send-to-OSD ordering
- This looks fine — we send out the journal header at the same time as the blocks to write, but we update the write_pos...
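The ordering invariant being audited can be sketched roughly as follows. This is a hypothetical simplification (the real Journaler is C++ inside Ceph; names here are invented for illustration): the in-memory write_pos may advance as soon as data is queued, but the position recorded in the journal header must not run ahead of what the OSD has acknowledged.

```python
# Hedged sketch, NOT Ceph's real Journaler API: illustrates why the
# header's committed position may only advance after the OSD ack,
# even though the header and data blocks are sent out concurrently.

class JournalSketch:
    def __init__(self):
        self.write_pos = 0        # end of data queued by the client
        self.flushed_pos = 0      # end of data sent to the OSD
        self.safe_pos = 0         # end of data the OSD has acked
        self.header_committed = 0 # position the header may claim

    def append(self, nbytes):
        self.write_pos += nbytes

    def flush(self):
        # send [flushed_pos, write_pos) and the header at the same time
        self.flushed_pos = self.write_pos

    def on_osd_ack(self):
        # only now is it safe for the header to claim this position
        self.safe_pos = self.flushed_pos
        self.header_committed = self.safe_pos

j = JournalSketch()
j.append(4096)
j.flush()
assert j.header_committed == 0   # header must not run ahead of acks
j.on_osd_ack()
assert j.header_committed == 4096
```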
- 09:40 AM Bug #8725 (Resolved): mds crashed in upgrade:dumpling-x:stress-split-master-testing-basic-plana
- Logs are in http://qa-proxy.ceph.com/teuthology/ubuntu-2014-07-01_11:38:37-upgrade:dumpling-x:stress-split-master-tes...
- 07:02 AM Bug #8708: kcephfs: direct_io tests failing
- It's a new regression in the 3.16 rc, introduced by commit 2b777c9d (ceph_sync_read: stop poking into iov_iter guts)
07/01/2014
- 08:03 PM Bug #8677: multimds: pjd failures
- https://github.com/ceph/ceph-qa-suite/commit/88cc7c0e2d3e2d37750759762edc7b7d7f00ca11
- 06:54 AM Bug #8677 (In Progress): multimds: pjd failures
- 07:55 PM Bug #8708: kcephfs: direct_io tests failing
- fixed by commit 8102ce75 (ceph: pass proper page offset to copy_page_to_iter() )
- 08:32 AM Bug #8708 (Resolved): kcephfs: direct_io tests failing
- teuthology-2014-06-29_23:01:50-kcephfs-next-testing-basic-plana/334012...
- 07:53 PM Bug #8719 (Duplicate): failed test_sync_io workunit
- 07:53 PM Bug #8719: failed test_sync_io workunit
- dup #8708
- 03:32 PM Bug #8719 (Duplicate): failed test_sync_io workunit
- http://qa-proxy.ceph.com/teuthology/teuthology-2014-06-26_07:42:35-kcephfs-next-testing-basic-plana/327859/
http://q...
- 04:14 PM Bug #8010 (Resolved): It's impossible to remove unused filesystem pools from a cluster
- 02:32 PM Feature #8690: MDS: Allow some kind of recovery when pools are deleted out from underneath us
- Except that's not really sufficient; we'd need to identify it as a non-existent pool and deal with cases where the po...
- 02:11 PM Feature #8690: MDS: Allow some kind of recovery when pools are deleted out from underneath us
- Hmm, so to recover from this case I guess we could catch the case where we're writing to a data pool that no longer exists...
- 07:01 AM Bug #8255: mds: directory with missing object cannot be removed
- I think the remaining step is to eventually incorporate the ability to remove the last trace of the damaged directory.
- 06:57 AM Feature #8634 (In Progress): mds: admin commands list, evict, etc session
- 06:49 AM Bug #8624 (Resolved): monitor: disallow specifying an EC pool as a data or metadata pool
- PR was merged 4f7e26f2befed9bd3a77ad5aee650c08ffd1a366
06/30/2014
- 11:36 PM Bug #8677: multimds: pjd failures
- We have fuse_default_permissions = 0; it causes every permission check to fail
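For context, the option named in the comment is a ceph-fuse client setting; a hedged sketch of the relevant ceph.conf fragment (the explanatory comments are my reading of the behaviour described here, not authoritative documentation):

```ini
[client]
    # With fuse_default_permissions = 0, ceph-fuse performs its own
    # permission checking -- the path that was failing every check in
    # this multimds pjd run. With 1, permission checks are left to the
    # kernel's generic FUSE default_permissions handling.
    fuse_default_permissions = 0
```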
- 06:49 AM Bug #8677: multimds: pjd failures
- ubuntu@teuthology:/a/teuthology-2014-06-26_07:42:59-multimds-next-testing-basic-plana$ teuthology-ls --archive-dir ....
- 12:34 PM Bug #8542 (Resolved): kcephfs: fsx failure on read (expected 0's)
- 07:04 AM Bug #8542 (Fix Under Review): kcephfs: fsx failure on read (expected 0's)
- https://github.com/ceph/ceph/pull/2045
from our fsx's help:
-z: Do not use zero range calls
- 12:03 AM Bug #8542: kcephfs: fsx failure on read (expected 0's)
- The file system may choose to zero out the extent or do whatever, which will result in reading zeros from the range *while...
06/28/2014
- 08:54 AM Bug #8542: kcephfs: fsx failure on read (expected 0's)
- We can support it, just not "preferably without issuing data IO". It needs to iterate over the range and zero the ra...
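The fallback described above (iterating over the range and zeroing it with explicit writes, instead of a metadata-only FALLOC_FL_ZERO_RANGE) can be sketched in userspace; this is purely illustrative, not the kernel or Ceph implementation:

```python
import os
import tempfile

def zero_range_fallback(fd, offset, length, chunk=4096):
    """Zero [offset, offset+length) by writing zeros chunk by chunk.

    Illustrative fallback for filesystems without FALLOC_FL_ZERO_RANGE
    support: unlike the fallocate flag, this does issue data IO.
    """
    end = offset + length
    pos = offset
    while pos < end:
        n = min(chunk, end - pos)
        os.pwrite(fd, b"\0" * n, pos)
        pos += n

# demo: zero the middle of a file full of 0xff bytes
fd, path = tempfile.mkstemp()
os.pwrite(fd, b"\xff" * 8192, 0)
zero_range_fallback(fd, 1000, 3000)
data = os.pread(fd, 8192, 0)
assert data[999] == 0xFF          # byte before the range untouched
assert data[1000] == 0 and data[3999] == 0   # range is zeroed
assert data[4000] == 0xFF         # byte after the range untouched
os.close(fd)
os.remove(path)
```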
- 08:38 AM Bug #8542: kcephfs: fsx failure on read (expected 0's)
- There is no way we can support it
/*
* FALLOC_FL_ZERO_RANGE is used to convert a range of file to zeros preferab...
- 06:26 AM Feature #8690: MDS: Allow some kind of recovery when pools are deleted out from underneath us
- CephFS was practically unusable until I applied the following patch to MDS:...
06/27/2014
- 09:35 PM Feature #8690 (New): MDS: Allow some kind of recovery when pools are deleted out from underneath us
- I had a secondary (cache) pool once connected to CephFS directory as follows:...
- 04:25 PM Bug #8542: kcephfs: fsx failure on read (expected 0's)
- So...we need to implement that functionality, don't we? Just knowing the problem isn't a resolution, if we're no long...
06/26/2014
- 05:38 PM Bug #8677 (Resolved): multimds: pjd failures
- ubuntu@teuthology:/var/lib/teuthworker/archive/teuthology-2014-06-26_07:42:59-multimds-next-testing-basic-plana$ teut...
- 04:28 PM Bug #8542 (Resolved): kcephfs: fsx failure on read (expected 0's)
- ceph_fallocate() does not recognize FALLOC_FL_ZERO_RANGE
06/25/2014
- 12:36 PM Bug #8622: erasure-code: rados command does not enforce alignement constraints
- Needs to be backported along with https://github.com/ceph/ceph/pull/2020 which fixes a bug introduced by the fix :-/
- 12:57 AM Bug #8623: MDS crashes (unable to access CephFS) / mds/MDCache.cc: In function 'virtual void C_MD...
- Zheng Yan wrote:
> you can try removing that assertion from the source code, then recompile ceph.
Thanks for your...
06/24/2014
- 06:32 PM Bug #2825: File lock doesn't work properly
- The file was opened in O_APPEND mode; the client needs to ask for the file size before each write
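The O_APPEND behaviour referred to above can be seen locally: every write on an O_APPEND descriptor lands at the current end of file regardless of any seek, which is why a distributed client must learn the up-to-date size before each write. A minimal local illustration:

```python
import os
import tempfile

fd0, path = tempfile.mkstemp()
os.close(fd0)

# Writer A puts some data in the file.
with open(path, "wb") as f:
    f.write(b"hello")

# Writer B opened the file with O_APPEND: its write goes to the
# current EOF regardless of the seek below, so it must know the
# real file size to know where its data will land.
fd = os.open(path, os.O_WRONLY | os.O_APPEND)
os.lseek(fd, 0, os.SEEK_SET)   # the seek is ignored for O_APPEND writes
os.write(fd, b" world")
os.close(fd)

with open(path, "rb") as f:
    assert f.read() == b"hello world"
os.remove(path)
```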
- 08:25 AM Bug #2825: File lock doesn't work properly
- Yeah, please open a new ticket; this one's been closed for a while. :)
- 08:32 AM Bug #8651 (Need More Info): crashing mds in an active-active mds setup
- The MDS got blacklisted, presumably because it got overloaded and stopped heartbeating the monitor or its MDS peers. ...
- 02:15 AM Bug #8651 (Won't Fix): crashing mds in an active-active mds setup
- 2 active mds, crashing while writing 4 rsync streams to it with cephko
{ "mdsmap": { "epoch": 1428,
"flags"...
06/23/2014
- 06:43 PM Bug #2825: File lock doesn't work properly
- I've been using 0.80.1 on a vanilla 3.10.33 kernel. I am seeing this issue and can reproduce it reliably using the te...
- 04:59 PM Bug #8648: Standby MDS leaks memory over time
- I believe we're leaking CInodes in open_root_inode et al.
- 01:28 PM Bug #8648 (Resolved): Standby MDS leaks memory over time
- I've discovered in my Ceph cluster that the MDS will leak memory over time. In my case it usually takes a week or two or ...
- 01:24 PM Bug #8622 (Pending Backport): erasure-code: rados command does not enforce alignement constraints
- Loic - this needs to be backported to Firefly.
06/20/2014
- 10:49 AM Feature #8636 (Resolved): mds/libcephfs: read only mount
- 10:42 AM Feature #8634 (Resolved): mds: admin commands list, evict, etc session
- 10:33 AM Feature #7352 (Resolved): mds: make classes encode/decode-able
- 08:35 AM Bug #8624 (Fix Under Review): monitor: disallow specifying an EC pool as a data or metadata pool
- https://github.com/ceph/ceph/pull/2005
- 06:53 AM Bug #8624 (In Progress): monitor: disallow specifying an EC pool as a data or metadata pool
06/18/2014
- 02:08 PM Bug #8622: erasure-code: rados command does not enforce alignement constraints
- Would you have time to review / try this : https://github.com/ceph/ceph/pull/1987 ?
- 02:05 PM Bug #8622 (Resolved): erasure-code: rados command does not enforce alignement constraints
- This is fine, great work :-)
- 10:01 AM Bug #8622: erasure-code: rados command does not enforce alignement constraints
- Loic,
I had some problems squashing the commits and I created a new pull request:
https://github.com/ceph/ceph/pu...
- 09:20 AM Bug #8622 (In Progress): erasure-code: rados command does not enforce alignement constraints
- 12:02 AM Bug #8623: MDS crashes (unable to access CephFS) / mds/MDCache.cc: In function 'virtual void C_MD...
- Zheng Yan wrote:
> you can try removing that assertion from the source code, then recompile ceph.
Thanks for the ...
06/17/2014
- 11:58 PM Bug #8623: MDS crashes (unable to access CephFS) / mds/MDCache.cc: In function 'virtual void C_MD...
- you can try removing that assertion from the source code, then recompile ceph.
- 11:44 PM Bug #8623 (Won't Fix): MDS crashes (unable to access CephFS) / mds/MDCache.cc: In function 'virtu...
- 11:28 PM Bug #8623 (New): MDS crashes (unable to access CephFS) / mds/MDCache.cc: In function 'virtual voi...
- So the only way to use CephFS on EC pool is to stack CephFS on cache pool on top of EC pool? Will it work?
I got m...
- 11:01 PM Bug #8623 (Won't Fix): MDS crashes (unable to access CephFS) / mds/MDCache.cc: In function 'virtu...
- Yeah, EC pools need to be used underneath a cache pool (limited validation) or for RGW storage. CephFS definitely won...
- 09:16 PM Bug #8623: MDS crashes (unable to access CephFS) / mds/MDCache.cc: In function 'virtual void C_MD...
- I think EC pools do not support block storage either, because they do not support seeky writes.
- 07:44 PM Bug #8623: MDS crashes (unable to access CephFS) / mds/MDCache.cc: In function 'virtual void C_MD...
- Zheng Yan wrote:
> EC pool does not support truncate operation. One option is make MDS return (Operation not support...
- 07:14 PM Bug #8623: MDS crashes (unable to access CephFS) / mds/MDCache.cc: In function 'virtual void C_MD...
- The EC pool does not support the truncate operation. One option is to make the MDS return (Operation not supported) when the client want...
- 06:52 PM Bug #8623: MDS crashes (unable to access CephFS) / mds/MDCache.cc: In function 'virtual void C_MD...
- Thanks for quick reply.
Zheng Yan wrote:
> osd_op_reply(169108 100000cfe98.00000000 [trimtrunc 2@0] v0'0 uv0 ondi...
- 06:32 PM Bug #8623: MDS crashes (unable to access CephFS) / mds/MDCache.cc: In function 'virtual void C_MD...
- osd_op_reply(169108 100000cfe98.00000000 [trimtrunc 2@0] v0'0 uv0 ondisk = -95 ((95) Operation not supported))
Thi...
- 06:17 PM Bug #8623 (Won't Fix): MDS crashes (unable to access CephFS) / mds/MDCache.cc: In function 'virtu...
- All of a sudden I found all three MDS servers down and not starting (crashing):...
- 11:00 PM Bug #8624 (Resolved): monitor: disallow specifying an EC pool as a data or metadata pool
- Apparently you can, and things go horribly wrong when the MDS or clients try to write to it.
- 06:27 PM Bug #8622: erasure-code: rados command does not enforce alignement constraints
- I created a pull request for the second option.
https://github.com/ceph/ceph/pull/1981/
- 05:43 PM Bug #8622: erasure-code: rados command does not enforce alignement constraints
- After some debugging it seems to me that the problem is that the rados client reads and sends data in chunks of size (1<...
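A hedged sketch of the alignment constraint under discussion (the numbers are illustrative, not Ceph defaults): an erasure-coded pool with k data chunks effectively requires writes aligned to the stripe width k * chunk_size, so a client that buffers in fixed-size chunks has to round its write sizes to that alignment.

```python
def align_down(length, k, chunk_size):
    """Largest multiple of the stripe width (k * chunk_size) <= length.

    Illustrative only: mirrors the kind of alignment check the rados
    tool needed for EC pools, not Ceph's actual code.
    """
    stripe_width = k * chunk_size
    return (length // stripe_width) * stripe_width

# e.g. k=3 data chunks of 4096 bytes -> stripe width 12288;
# a 4 MiB buffer is NOT a multiple of it, so the tail must be
# carried over to the next write (or padded on the final one).
assert align_down(4 * 1024 * 1024, 3, 4096) == 341 * 12288
assert (4 * 1024 * 1024) % (3 * 4096) != 0
```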
- 03:55 PM Bug #8622 (Resolved): erasure-code: rados command does not enforce alignement constraints
- Original title for the record : "EC pool fails for certain (k,m) combinations for >4MB objs"
Steps to reproduce th...
- 06:51 AM Fix #7564 (Duplicate): synchronize MDS and client times in a way that makes pjd happy even under ...
- 06:49 AM Feature #3727 (Resolved): mds: refactor EMetablob encoding paths
- 06:49 AM Feature #8386 (Resolved): mds: use client (instead of mds) ctime/mtime
- 05:20 AM Bug #8575 (Rejected): linux kernel: possible circular locking dependency detected
06/12/2014
- 12:52 PM Bug #8576: teuthology: nfs tests failing on umount
- teuthology-2014-06-09_23:02:09-knfs-master-testing-basic-plana/303822/
teuthology-2014-06-09_23:02:09-knfs-master-te...
- 07:01 AM Fix #5399 (Resolved): timestamp changes on replayed mds request (pjd link 71)
- commit:f81c53a716f43c8a36d6b2aea88cbebe961502d8
06/10/2014
- 06:56 PM Bug #8542: kcephfs: fsx failure on read (expected 0's)
- /teuthology-2014-05-30_23:02:02-kcephfs-master-testing-basic-plana/283020/
- 06:46 PM Bug #8542: kcephfs: fsx failure on read (expected 0's)
- /a/teuthology-2014-06-01_23:02:07-kcephfs-next-testing-basic-plana/285055
- 06:10 PM Bug #8542: kcephfs: fsx failure on read (expected 0's)
- teuthology-2014-06-06_23:01:56-kcephfs-master-testing-basic-plana/297431/
- 06:22 PM Bug #8576 (Resolved): teuthology: nfs tests failing on umount
- Just a sample:
http://qa-proxy.ceph.com/teuthology/teuthology-2014-06-02_23:02:20-knfs-master-testing-basic-plana/28...
- 06:19 PM Bug #8575 (Rejected): linux kernel: possible circular locking dependency detected
- http://qa-proxy.ceph.com/teuthology/teuthology-2014-06-02_23:02:20-knfs-master-testing-basic-plana/289526/
Not sur...
- 06:00 PM Bug #8574 (Resolved): teuthology: NFS mounts on trusty are failing
- It looks like there are some changes that make Trusty's default NFS server settings incompatible wit...
06/09/2014
- 10:22 PM Feature #8563: mds: permit storing all metadata in metadata pool
- Yeah. I'm actually not sure why it would be sending out rados xattr requests on restart unless you'd lost clients, bu...
- 05:53 PM Feature #8563: mds: permit storing all metadata in metadata pool
- I observe the slow requests with ceph -w, or watching osd log files.
(i) The mds getxattr parent requests during r...
- 03:55 PM Feature #8563: mds: permit storing all metadata in metadata pool
- Can you talk about cases (i) and (ii) in a little more detail? And how you're observing the delayed xattr requests?
- 03:40 PM Feature #8563: mds: permit storing all metadata in metadata pool
- I like the notion of being able to recover at least part of the tree structure from data pools alone. Maybe the opti...
- 10:19 AM Feature #8563: mds: permit storing all metadata in metadata pool
- My concern with this is that part of the point is to provide a way to recover data into the hierarchy even if we've l...
06/08/2014
- 10:36 PM Feature #8563: mds: permit storing all metadata in metadata pool
- I like this idea too, but currently have no time to implement it.
- 09:57 PM Feature #8563 (New): mds: permit storing all metadata in metadata pool
- (this had originally been filed as issue #8230, which was hijacked into an unrelated issue)
I'm speaking specifica...