Activity
From 12/19/2012 to 01/17/2013
01/17/2013
- 10:14 PM Feature #1236: libceph: set layout via virtual xattrs (libceph/cfuse)
- 10:02 PM Feature #3857: mds: enforce unique mds names in mdsmap
- see wip-mds-names
- 09:36 PM Feature #3857 (Resolved): mds: enforce unique mds names in mdsmap
- Currently MDSs are uniquely identified by their addr (i.e., a unique instance of the process). The name is useful on...
- 12:28 PM Bug #3832 (Fix Under Review): client: does not observe O_SYNC
- Implemented in wip-3832. Needs review.
- 12:17 PM Bug #3845: mds: standby_for_rank not getting cleared on takeover
- I don't think it matters. It's a fixed lifecycle from standby -> active -> dead, so the leftover standby_ just te...
- 12:13 PM Bug #3845: mds: standby_for_rank not getting cleared on takeover
- This is a monitor thing; the MDS is only involved in relaying the config setting over on boot-up.
- 11:38 AM Bug #3845 (Closed): mds: standby_for_rank not getting cleared on takeover
- This is the mdsmap after mds.a was active and given rank 0, then killed, and another mds (mds.b-s-r0) that had standb...
- 11:34 AM Feature #3730: Support replication factor in Hadoop
- Sage Weil wrote:
> If there are more such cases, that is a separate bug!
It was a bug I had introduced in wip-cli...
- 09:51 AM Feature #3730: Support replication factor in Hadoop
- Noah Watkins wrote:
> In Client, osdmap is protected by client_lock? If so, new version of branch isn't broken..
...
- 08:55 AM Feature #3730: Support replication factor in Hadoop
- In Client, osdmap is protected by client_lock? If so, new version of branch isn't broken..
- 10:24 AM Bug #1435: mds: loss of layout policies upon mds restart
- wip-mds-layout2
needs to be rebased, reviewed, and tested!
- 09:08 AM Bug #3261 (Rejected): mds crashes in EMetaBlob::replay
- Understood. I'm sorry we weren't able to dig in when it happened. When you do get around to retesting, we should be ...
- 02:09 AM Bug #3261: mds crashes in EMetaBlob::replay
- Should I test the same btrfs volume with a new ceph? If so I might get to it in the next month. Please close with ins...
01/16/2013
- 05:59 PM Bug #3832 (Resolved): client: does not observe O_SYNC
- if the file was opened with O_SYNC we need to flush the io on every write call.
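The rule in this comment can be sketched as a tiny predicate; the helper name below is hypothetical and not the actual client code:

```c
#include <fcntl.h>

/* Hypothetical sketch of the rule above: a write() on a file
 * descriptor opened with O_SYNC must be followed by a flush of
 * the buffered io before the write call returns. */
static int write_needs_flush(int open_flags)
{
    return (open_flags & O_SYNC) != 0;
}
```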
- 05:34 PM Feature #3730: Support replication factor in Hadoop
- Oh right, libcephfs is not built on top of librados. Never mind, that's a whole different discussion we start occasio...
- 05:15 PM Feature #3730: Support replication factor in Hadoop
- I don't think libcephfs will give up an instance of the rados client, if that's what you mean by grant access to rado...
- 04:33 PM Feature #3730: Support replication factor in Hadoop
- Sorry to back this up a little, but I can't recall — does using libcephfs automatically grant a user access to the RA...
- 04:30 PM Feature #3730: Support replication factor in Hadoop
- This interface update is up for review in wip-client-pool-api
- 09:52 AM Feature #3730: Support replication factor in Hadoop
- From stand-up, stick with int64_t for userspace, and enforce 32-bit range.
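The stand-up decision above amounts to a simple range check; this is a sketch with a hypothetical helper name, not the actual libcephfs code:

```c
#include <stdint.h>

/* Hypothetical sketch: userspace APIs keep int64_t for pool ids,
 * but any value outside the non-negative 32-bit range (the width
 * used by ceph_file_layout) is rejected. */
static int pool_id_in_range(int64_t pool)
{
    return pool >= 0 && pool <= INT32_MAX;
}
```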
- 09:43 AM Feature #3730: Support replication factor in Hadoop
- The move from int32 -> int64 was misguided, and incomplete. At this point it's not really worth the effort to move a...
- 07:31 AM Feature #3730: Support replication factor in Hadoop
- It looks like in OSDMap there is some mixed usage of int64 and int for pool id, too. In Client::_create pool id is e...
- 06:40 AM Feature #3730: Support replication factor in Hadoop
- Can we change the type in libcephfs to uint64? We're the only ones calling ceph_get_file_pool() right now as far as ...
- 04:12 PM Bug #3828 (Rejected): seeing error: fault, server, going to standby whenever I run a ceph-syn loa...
- This is showing up on your MDS, about 15 minutes after a client completes accesses, right? This is associated with th...
- 04:01 PM Bug #3828 (Rejected): seeing error: fault, server, going to standby whenever I run a ceph-syn loa...
- While validating bug 520, I saw an interesting error. It may be a red herring, as I am seeing no problem with the wr...
- 03:47 PM Bug #520 (Closed): mds: change ifile state mix->sync on (many) lookups?
- 3 Node Cluster:
ceph version 0.56.1 (e4a541624df62ef353e754391cbbb707f54b16f7)
# cat /etc/ceph/ceph.conf
[global]...
- 02:51 PM Bug #520: mds: change ifile state mix->sync on (many) lookups?
- csyn is now called ceph-syn
and --debug-ms 1 to see those messages go by!
- 03:26 PM Bug #3261: mds crashes in EMetaBlob::replay
- This looks like a problem with what's in the journal, but so much MDS code has changed since then that I don't think...
- 03:24 PM Bug #1760 (Resolved): multiple_rsync workunit cannot remove non-empty directory intermittently
- This also looks like the tmap problem, commit:e52ebacb73747ef642aabdb3cc3cb2a328687a4c and the preceding 4 commits.
- 03:23 PM Bug #2380 (Rejected): kclient: aufs over a cephfs mount fails with Stale NFS file handle
- this is a generic problem with lookup by ino, see #3541 and other features
- 03:23 PM Bug #2092 (Can't reproduce): BUG at fs/ceph/caps.c:999
- commit:561cf283173360c39db19dc735da4a319be68ff6 fixes the multi-mds case. we haven't seen this again for single-mds.....
- 03:11 PM Feature #3826 (Resolved): uclient: Be more aggressive about checking for pools we can't write to
- Right now the client will happily buffer up writes to a pool that it can't actually write to. #2753 is going to make ...
- 03:06 PM Bug #3746 (Rejected): kclient mmap doesn't zero past EOF
- Run against bad code.
- 03:03 PM Bug #2444 (Can't reproduce): null pointer deference in ceph_d_prune inside kvm
- 03:00 PM Bug #2071 (Can't reproduce): kclient: pjd mkfifo failures
- 02:59 PM Bug #1770 (Can't reproduce): directory nonexistent on kernel_untar_build.sh
- 02:58 PM Bug #1749 (Can't reproduce): nonexistent directory in kclient_workunit_kernel_untar_build
- 02:55 PM Bug #1318 (Resolved): directories disappear across multiple rsyncs
- commit:e52ebacb73747ef642aabdb3cc3cb2a328687a4c and the 4 preceding patches fix up the TMAP bug that is the likely cause...
- 02:55 PM Bug #1511: fsstress failure with 3 active mds
- Sam thinks this works now! Adding to QA suite.
- 02:50 PM Bug #3625 (Resolved): client: EEXIST error on multiple clients to create
- commit:b4d3bd06d4083d780755f6ef506df1643932fa2f
- 02:49 PM Bug #3625: client: EEXIST error on multiple clients to create
- Maybe you already handled this?
- 02:11 PM Bug #3625 (Fix Under Review): client: EEXIST error on multiple clients to create
- 06:16 AM Bug #3625: client: EEXIST error on multiple clients to create
- The kernel side has been reviewed and tested, but needs to be merged. The fuse side has been tested, but I think it ...
- 02:48 PM Bug #2753: Writes to mounted Ceph FS fail silently if client has no write capability on data pool
- We should return an error code on fsync()... that is the quick fix.
a more polite feature will be opened to return ...
- 09:19 AM Bug #2753: Writes to mounted Ceph FS fail silently if client has no write capability on data pool
- This is clearly a bug, bureaucracy or not. It should not be a feature. We can do new development to fix a bug. If you...
- 02:46 PM Bug #3544: ./configure checks CFLAGS for jni.h if --with-hadoop is specified but also needs to ch...
- I think this can be closed. There is a bunch of autoconf changes for Java that have or will be merged.
- 02:41 PM Bug #3544: ./configure checks CFLAGS for jni.h if --with-hadoop is specified but also needs to ch...
- I just did a ./configure using CPPFLAGS to indicate where the jni headers were, and that worked just fine. Using C...
- 02:45 PM Bug #3254: mds: Replica inode's parent snaprealms are not open
- Multi-mds, currently low priority.
- 02:44 PM Bug #3637 (In Progress): client: not issuing caps for with clients doing shared writes
- 02:43 PM Bug #3637 (Fix Under Review): client: not issuing caps for with clients doing shared writes
- 02:40 PM Bug #3498 (Resolved): mds: mds assert failure during untar_kernel
- this was a msgr bug, long since fixed. commit:36c0fd220ef02b1ffd7a3ae0d98e0fdec6b55a5b or thereabouts
- 02:39 PM Bug #1666: hadoop: time-related meta-data problems
- http://www.mail-archive.com/ceph-devel@vger.kernel.org/msg10334.html
Also wip-mtime-incr in the ceph repo.
- 02:38 PM Bug #2218: CephFS "mismatch between child accounted_rstats and my rstats!"
- 02:32 PM Feature #3821 (New): qa: run backuppc as part of qa suite
- 02:32 PM Bug #2494 (Can't reproduce): mds: Cannot remove directory despite it being empty.
- The dupe inode suggests this is the problem fixed by Yan's tmap fixes.
- 02:29 PM Bug #2019 (Can't reproduce): mds: CInode::filelock stuck in sync->mix
- Presumably we'll see this again, but it hasn't turned up in our testing lately and we need more info to debug it.
- 02:27 PM Bug #1811 (Duplicate): 2 pjd chown tests failed on cfuse
- 02:22 PM Bug #1537 (Resolved): cmds 100% when copying lots of files, mds_cache_size and mds_bal_frag
- This is an optimization issue, which we'll get to!
- 02:21 PM Feature #3819 (Resolved): mds: re-add snaptests to qa suite
- 02:02 PM Bug #3818 (Duplicate): kclient: fsx fails in mapread
- With the fix in #3681, fsx fails in mapread with bad data. It looks like this is unrelated to the fix, and is a se...
- 11:09 AM Feature #3543 (In Progress): mds: new encoding
- Oh, this has been in progress all week.
- 10:35 AM Bug #3773 (Can't reproduce): mds crashed at LogEvent::decode
- I have been trying to reproduce this but have not hit it yet.
Will reopen the bug when needed.
- 06:04 AM Bug #3601: client: With multiple clients, file remove doesn't free up space
- Yeah, it's that the LRU doesn't have a timeout.
The mds could send an "enable timeout" message to clients once it se...
01/15/2013
- 08:53 PM Feature #3728 (Resolved): mds: draft design for lookup by ino
- 08:38 PM Feature #3730: Support replication factor in Hadoop
- pool ids are currently exposed via libcephfs from ceph_file_layout, which uses a 32bit integer for pool id. However, ...
- 08:34 PM Feature #3730: Support replication factor in Hadoop
- Someone could toss a 'ceph osd pool set size' Hadoop's way, so a static mapping between pg pool size and pool name co...
- 05:35 PM Bug #3254: mds: Replica inode's parent snaprealms are not open
- No. So far I'm focused on stabilizing basic fs function for the multiple-MDS setup, completely ignoring snapshots.
- 03:28 PM Bug #3254: mds: Replica inode's parent snaprealms are not open
- Hmm, did this get fixed by some of Zheng's later patches? I remember things about snaprealms and migration...
- 04:44 PM Feature #3289: ceph-fuse: somehow exert pressure on the VFS to remove dentries from the cache
- #3575 should be kept in mind while doing this/instead of this — there's a forget_multi as well.
- 04:44 PM Bug #3601 (New): client: With multiple clients, file remove doesn't free up space
- Whoops, didn't mean to change that status.
- 04:43 PM Bug #3601 (Duplicate): client: With multiple clients, file remove doesn't free up space
- The LRU actually already exists; check out Client::lru. (Unless I'm misunderstanding something?) So we might want to ...
- 04:37 PM Bug #925: mds: update replica snaprealm on rename
- De-prioritizing multi-MDS issues...
- 04:34 PM Bug #1117: mds: rename rollback broken on slaves during replay
- De-prioritizing multi-mds issues for now.
- 04:27 PM Bug #1435: mds: loss of layout policies upon mds restart
- I'm guessing we want to move this up the queue; will discuss in bug scrub tomorrow!
- 04:23 PM Bug #1511: fsstress failure with 3 active mds
- De-prioritizing multi-mds failures at this time.
- 04:23 PM Bug #1535: concurrent creating and removing directories crashes cmds
- De-prioritizing multi-MDS bugs at this time.
- 03:51 PM Bug #2753: Writes to mounted Ceph FS fail silently if client has no write capability on data pool
- Fair enough, but if I can just make a suggestion, perhaps you might want to explain these procedures somewhere in the...
- 03:45 PM Bug #2753: Writes to mounted Ceph FS fail silently if client has no write capability on data pool
- I agree it's a bug, but given the procedures we have now (ack! changing procedures coming alert!) I don't think we wa...
- 03:43 PM Bug #2753: Writes to mounted Ceph FS fail silently if client has no write capability on data pool
- No, please. A write pretending to succeed while actually not writing data _is_ a bug. The filesystem _not lying to it...
- 03:33 PM Bug #2753: Writes to mounted Ceph FS fail silently if client has no write capability on data pool
- This is a great suggestion but falls into feature rather than bug-fix category. My initial thought is keeping a list ...
- 03:42 PM Bug #1675 (Can't reproduce): mds: failed rstat assert
- The logs are long gone. This will presumably pop up again; it's a pretty common failure mode, but there's nothing in ...
- 03:38 PM Bug #1938: mds: snaptest-2 doesn't pass with 3 MDS system
- De-prioritizing all multi-MDS bugs for now.
- 03:27 PM Bug #3267: Multiple active MDSes stall when listing freshly created files
- Currently de-prioritizing multi-MDS bugs.
- 03:18 PM Bug #3625: client: EEXIST error on multiple clients to create
- I know you guys did a couple rounds on this one, what's the status?
- 01:25 PM Bug #3637: client: not issuing caps for with clients doing shared writes
- Sage has a different proposed fix than what's in the branch. Still needs to be tested.
- 12:50 PM Bug #3637: client: not issuing caps for with clients doing shared writes
- I don't remember where this ended up. Was the proposed fix problematic, or did it never get looked at?
- 11:39 AM Bug #3718: multi-client dbench gets stuck over NFS exported cephfs
- This apparently is only a problem under re-export, which I believe we are not focusing on right now.
- 11:35 AM Bug #3553: MDS core dumped running 0.48.2argonaut
- Given what we know so far (the Op got sent to the wrong OSD) this is a bug in the Objecter, not the MDS. Or possibly ...
01/14/2013
- 07:49 PM Bug #3544: ./configure checks CFLAGS for jni.h if --with-hadoop is specified but also needs to ch...
- Is this still an issue?
- 03:04 PM Documentation #3796 (Resolved): FUSE mount documentation needs some corrections for v0.56.x
- The FUSE instructions need to be updated for v0.56 and later
currently:
> http://ceph.com/docs/master/cephfs/fuse...
- 01:28 PM Feature #3749 (Resolved): Remove forced synchronization from Java bindings
- 07:00 AM Bug #2187: pjd chown/00.t failed test 97
- Happened again on Friday. Time to add the delay injection to the nightlies?
2013-01-11T07:32:37.489 INFO:teutholo...
01/12/2013
- 08:01 AM Feature #3749: Remove forced synchronization from Java bindings
- In libcephfs mount/unmount race against each other, and the test of the API (e.g. unmount racing against write). In C...
01/11/2013
- 02:45 PM Bug #3793: wrong size reported in some distributions/toolchains
- That makes this sound like a simple fix... we need to swap the frsize and bsize fields. Except that right now we ar...
- 02:39 PM Bug #3793: wrong size reported in some distributions/toolchains
- I spent a bit of time with gregaf trying to find authoritative sources for what the different values denote. While `...
- 01:40 PM Bug #3793: wrong size reported in some distributions/toolchains
- This coreutils commit may have useful data:
http://git.savannah.gnu.org/cgit/coreutils.git/commit/src?id=0863f018f0f...
- 01:38 PM Bug #3793 (Resolved): wrong size reported in some distributions/toolchains
- In ceph_statfs we set f_bsize to be 1MB in order to report very large available spaces. However, nowadays it is appar...
- 02:38 PM Feature #3749: Remove forced synchronization from Java bindings
- This needs more thought than just removing synchronization. We'd like to be segfault free in Java, even though you co...
- 01:39 PM Bug #3794 (Resolved): uclient: reports sizes wrong in some cases
- This is the counterpart to kernel bug #3793. See Client::statfs, in which we set f_bsize to 1MB but f_frsize to 4KB. ...
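For context on why the field mix-up matters: under POSIX statvfs semantics, capacity is derived from the fragment size f_frsize, while f_bsize is only the preferred I/O size. A minimal sketch of the arithmetic (hypothetical helper, not Ceph code):

```c
#include <stdint.h>

/* Per POSIX statvfs: total capacity is f_blocks * f_frsize (the
 * fragment size); f_bsize is only the preferred I/O block size.
 * Tools like df multiply by f_frsize when it is set, so inflating
 * f_bsize alone does not enlarge the space they report. */
static uint64_t fs_total_bytes(uint64_t f_blocks, uint64_t f_frsize)
{
    return f_blocks * f_frsize;
}
```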
- 10:52 AM Bug #3773: mds crashed at LogEvent::decode
- Sure Sage. I was running bonnie from client during upgrade.
I had debug ms=1 set, I will try to reproduce this with...
- 09:41 AM Bug #3773 (Need More Info): mds crashed at LogEvent::decode
- Tamil, I wonder if you can try to reproduce this with mds logging turned up from the start (debug mds = 20, debug ms ...
01/10/2013
- 05:06 PM Bug #3773: mds crashed at LogEvent::decode
- Okay, I gathered up a core file, a high-debug MDS log, and the log with the bad event (and the bad event itself) in t...
- 02:05 PM Bug #3773: mds crashed at LogEvent::decode
- I'll at least start this off.
- 09:55 AM Feature #3621 (Closed): qa: add knfsd reexport tests to qa suite
- 09:52 AM Feature #3621: qa: add knfsd reexport tests to qa suite
- commit:aaa03bbcd2549a38f962a61fc63be16cca3a6d90 in teuthology.git
01/09/2013
- 02:58 PM Bug #3773 (Can't reproduce): mds crashed at LogEvent::decode
- ceph version: 0.56.1 (e4a541624df62ef353e754391cbbb707f54b16f7)
I had a cluster [burnupi06, burnupi07, burnupi08] ...
- 12:05 PM Feature #3570 (In Progress): teuthology: mds thrasher
- 10:58 AM Bug #3681: kclient fsx fails nightly
- Proposed fix to set i_size before the setattr request:
This will resolve the above issue, because the cap flush on...
01/08/2013
- 04:29 PM Bug #3597: ceph-fuse: denying root access
- Is root actually a member of the fuse group? If not that would be correct behavior.
- 12:04 PM Feature #626 (Closed): qa: add IOR, rompio, or other parallel workloads suite
- Added tests to the _marginal_ qa suite that run IOR, mdtest, and fsx-mpi.
- 09:39 AM Feature #3543: mds: new encoding
- I'm going to get started on this (mostly just figuring out current state, probably) today.
01/07/2013
- 04:04 PM Feature #3749 (Resolved): Remove forced synchronization from Java bindings
- Remove "synchronized" keyword from native interface. This was originally added when we were seeing some pthread mutex...
- 03:26 PM Bug #3746 (Rejected): kclient mmap doesn't zero past EOF
- Error coming from fsx:
INFO:teuthology.orchestra.run.out:Mapped Write: non-zero data past EOF (0xb826) page offset...
- 12:19 PM Cleanup #3742 (Resolved): Remove old Hadoop wrappers and configuration options
- I think it's likely that the current Hadoop shim is at least at feature parity with the old wrappers.
- 10:02 AM Bug #3726 (Resolved): Enforce Ceph's minimum stripe size in the java bindings
- 10:02 AM Bug #3726 (Closed): Enforce Ceph's minimum stripe size in the java bindings
- 09:21 AM Bug #3738 (Resolved): kclient fsx truncate/write multi-client race
- This bug is similar to #3681, but occurs only in the non-exclusive case (multiple clients), where a truncate doesn'...
- 09:09 AM Bug #3681: kclient fsx fails nightly
- The race here is between a truncate down, and completion of osd write ops triggering a cap flush. The exact order th...
01/04/2013
- 07:54 PM Bug #3666 (Resolved): Segfault running test_libcephfs
- commit:3a9408742a8a6cbc870cba543a208285f1a6cec1
- 03:25 PM Bug #3666: Segfault running test_libcephfs
- I pushed a new wip-client-shutdown. This switches the clean-up order of client/messenger in libcephfs, rather than mo...
- 01:36 PM Bug #3666: Segfault running test_libcephfs
- Right, I think your fix will work, but it breaks the interface abstraction (messenger is created above the client, de...
- 01:16 PM Bug #3666: Segfault running test_libcephfs
- This is what I'm running to reproduce the error. It's been running now for an hour on wip-client-shutdown without any...
- 12:57 PM Bug #3666: Segfault running test_libcephfs
- Rather than moving messenger shutdown into client shutdown?
- 12:48 PM Bug #3666: Segfault running test_libcephfs
- A similar issue was just handled in the ceph_fuse.cc code. There we just delay deleting the client till the end. Yo...
- 10:41 AM Bug #3666: Segfault running test_libcephfs
- During unmount, the client is shutdown and free'd before the messenger. If any messages are delivered after the clien...
- 03:29 PM Feature #3730 (Closed): Support replication factor in Hadoop
- In order to support per-file replication values in Hadoop we need to specify that a new file should be generated in a...
- 01:54 PM Bug #3726: Enforce Ceph's minimum stripe size in the java bindings
- Also, name it something along the lines of get_stripe_granularity() and not .._min(imum)_ as that isn't entirely accu...
- 01:40 PM Bug #3726: Enforce Ceph's minimum stripe size in the java bindings
- After a discussion on jabber, the decision is to go with exposing a function call in libcephfs and then using that in...
- 11:09 AM Bug #3726 (Resolved): Enforce Ceph's minimum stripe size in the java bindings
- The Hadoop bindings are using the blocksize as the stripe size. If a block size is explicitly passed down, it ends up...
- 01:00 PM Bug #3718: multi-client dbench gets stuck over NFS exported cephfs
- Heads up, Zheng Yan's patches on the mds fix issues related to running multiclient dbench tests.
- 12:24 PM Feature #3626: mds: debug mode to generate traceless replies to clients
- Hmm, okay. I wasn't real clear on the previous bugs so I'll need to look at it more if I end up taking this, but soun...
- 11:46 AM Feature #3626: mds: debug mode to generate traceless replies to clients
- Greg Farnum wrote:
> Hurray, it is. Nobody except the client looks at the trace_bl and setting that is the only thin...
- 11:35 AM Feature #3626: mds: debug mode to generate traceless replies to clients
- Hurray, it is. Nobody except the client looks at the trace_bl and setting that is the only thing set_trace() does. Ex...
- 11:17 AM Feature #3626: mds: debug mode to generate traceless replies to clients
- Greg Farnum wrote:
> Am I reading it correctly that this is just going to be doing the config and wrapper work to no...
- 09:01 AM Feature #3626: mds: debug mode to generate traceless replies to clients
- Am I reading it correctly that this is just going to be doing the config and wrapper work to not call set_trace() in ...
- 12:20 PM Feature #3543: mds: new encoding
- 12:20 PM Feature #3728: mds: draft design for lookup by ino
- 12:14 PM Feature #3728 (Resolved): mds: draft design for lookup by ino
- 12:20 PM Feature #3570: teuthology: mds thrasher
- 12:06 PM Feature #3727 (Resolved): mds: refactor EMetablob encoding paths
- Right now, the EMetaBlob sub-structures — for performance reasons — use an encoding pattern that doesn't match anythi...
- 11:42 AM Cleanup #89: mds: put inode dirty fields in dirty_bits_t to reduce memory footprint
- Greg Farnum wrote:
> I briefly scanned the CInode and inode_t structs and it wasn't obvious to me what this should e...
- 09:34 AM Cleanup #89: mds: put inode dirty fields in dirty_bits_t to reduce memory footprint
- I briefly scanned the CInode and inode_t structs and it wasn't obvious to me what this should encompass. Are you talk...
- 11:41 AM Subtask #547: mds: define fsck strategy, required metadata
- This was a whiteboard discussion 2 years ago. Nothing was written down. We should reopen new and more detailed issu...
- 09:29 AM Subtask #547: mds: define fsck strategy, required metadata
- Where are the results of this bug? It's marked resolved but I don't see any fsck references in the git tree, and ther...
- 11:38 AM Cleanup #3677: libcephfs, mds: test creation/addition of data pools, create policy
- Greg Farnum wrote:
> Do we have a separate bug for the library calls this needs?
#685, which would take the clien...
- 09:27 AM Cleanup #3677: libcephfs, mds: test creation/addition of data pools, create policy
- Do we have a separate bug for the library calls this needs?
- 11:36 AM Feature #3244: qa: integrate Ganesha into teuthology testing to regularly exercise Ganesha CephFS...
- Greg Farnum wrote:
> And for this one as well: setting up Ganesha in teuthology, run tests against it? Not using the...
- 09:24 AM Feature #3244: qa: integrate Ganesha into teuthology testing to regularly exercise Ganesha CephFS...
- And for this one as well: setting up Ganesha in teuthology, run tests against it? Not using the Ceph shim or anything...
- 11:35 AM Feature #3243: qa: test samba reexport via libcephfs vfs plugin in teuthology
- Greg Farnum wrote:
> Is this a matter of setting up (via teuthology) a Samba server which sits on top of a Ceph moun...
- 09:24 AM Feature #3243: qa: test samba reexport via libcephfs vfs plugin in teuthology
- Is this a matter of setting up (via teuthology) a Samba server which sits on top of a Ceph mount and then running tes...
- 11:34 AM Feature #3426: ceph-fuse: build/run on os x
- Greg Farnum wrote:
> Noah has done some work on this in the wip-osx branch; last I heard you could compile and get a...
- 09:22 AM Feature #3426: ceph-fuse: build/run on os x
- Noah has done some work on this in the wip-osx branch; last I heard you could compile and get a cluster going with vs...
- 11:32 AM Feature #3542: mds: migration path for existing anchors, anchortables, etc.
- Greg Farnum wrote:
> What all does this encompass? Design? Implementation? Does it need to be an online switch or ca...
- 09:13 AM Feature #3542: mds: migration path for existing anchors, anchortables, etc.
- What all does this encompass? Design? Implementation? Does it need to be an online switch or can it be an offline job?
- 11:30 AM Feature #3541: mds: robust ino lookup using file backpointers
- Greg Farnum wrote:
> Is this bug supposed to encompass the anchor table replacement work as well? I wouldn't expect ...
- 09:12 AM Feature #3541: mds: robust ino lookup using file backpointers
- Is this bug supposed to encompass the anchor table replacement work as well? I wouldn't expect so, but the presence o...
- 11:23 AM Feature #3540: mds: maintain per-file backpointers on first file object
- Greg Farnum wrote:
> Do we have any kind of design for this? We've talked about it some and it's conceptually simple...
- 09:08 AM Feature #3540: mds: maintain per-file backpointers on first file object
- Do we have any kind of design for this? We've talked about it some and it's conceptually simple, but splitting up the...
- 11:15 AM Feature #626 (In Progress): qa: add IOR, rompio, or other parallel workloads suite
- Yeah, that's what slang's working on to enable this. Assigning this to him.
- 08:57 AM Feature #626: qa: add IOR, rompio, or other parallel workloads suite
- SamL has done some work on getting MPI going under teuthology, and on running some multi-client FS tests. I'm not sur...
- 11:13 AM Feature #3621 (Resolved): qa: add knfsd reexport tests to qa suite
- 09:43 AM Feature #3399: java: add accessor to Ceph version numbers
- Oh, those are librados specific numbers, aren't they. So this bug is to create and expose a libceph version, then. Wh...
- 09:35 AM Feature #3399: java: add accessor to Ceph version numbers
- In libcephfs there is a call to get Ceph version (yes, just expose this). But, I recall Sage mentioning that it might...
- 09:19 AM Feature #3399: java: add accessor to Ceph version numbers
- This is just exposing the librados version() function to Java, right?
- 09:41 AM Cleanup #660: mds: use helpers in mknod, mkdir, openc paths
- What kind of helpers are you talking about with this? inode fetchers and lock grabbers? In a quick scan over handle_c...
- 09:36 AM Feature #603: mds: repair directory hierarchy
- This is part of #82 fsck, right? Do we have a more detailed algorithm anywhere?
01/03/2013
- 01:59 PM Bug #3597: ceph-fuse: denying root access
- I believe that we can reproduce this error. We are running Ubuntu 12.04 LTS Server on both the client and on the Cep...
- 12:56 PM Bug #3719 (Can't reproduce): pjd test 145 failed in the nightly runs
- logs: ubuntu@teuthology:/a/teuthology-2013-01-02_19:00:03-regression-next-testing-basic/33621...
- 12:48 PM Bug #3718 (Rejected): multi-client dbench gets stuck over NFS exported cephfs
- When running qa/workunit dbench.sh the dbench 1 passes, but the dbench 10 gets hung up.
We should check this with ...
- 12:28 PM Feature #3621 (In Progress): qa: add knfsd reexport tests to qa suite
- 09:32 AM Bug #3681: kclient fsx fails nightly
- It's most likely all the same bug, but fsx fails in different ways each time (always because of a truncate down). The...
- 09:27 AM Feature #3543: mds: new encoding
- Right. About 80% complete, see wip-mds-encoding.
- 09:22 AM Feature #3543: mds: new encoding
- What is this task? Switching to use our versioned encoding scheme?
01/02/2013
- 09:45 AM Bug #3700: mds: FAILED assert(!item_session_list.is_on_list())
- Fixed by revert of the bad fix, see commit:6711a4c4038dbdf843f9dfe42c7809c5c37ae534
- 09:37 AM Bug #3700 (Resolved): mds: FAILED assert(!item_session_list.is_on_list())
12/30/2012
- 06:08 PM Fix #3630: mds: broken closed connection cleanup
- ...
- 06:06 PM Fix #3630: mds: broken closed connection cleanup
- The con re-use looks like this:
- client connects
- mds ms_verify_authorizer creates a new session
- msgr sees ex...
- 06:04 PM Bug #3696 (Resolved): mds: FAILED assert(session_map.count(s->inst.name) == 0)
- See #3630... let's fix this properly.
12/29/2012
- 02:39 PM Bug #3700 (Resolved): mds: FAILED assert(!item_session_list.is_on_list())
- logs: ubuntu@teuthology:/a/teuthology-2012-12-29_03:00:03-regression-master-testing-gcov/30039...
- 02:32 PM Bug #3696: mds: FAILED assert(session_map.count(s->inst.name) == 0)
- ubuntu@teuthology:/a/teuthology-2012-12-29_03:00:03-regression-master-testing-gcov/30036
- 09:43 AM Bug #3696: mds: FAILED assert(session_map.count(s->inst.name) == 0)
- Reverted the broken fix, reproducing the original problem again.
12/28/2012
- 09:11 PM Bug #3696: mds: FAILED assert(session_map.count(s->inst.name) == 0)
- 06:42 PM Bug #3696 (Resolved): mds: FAILED assert(session_map.count(s->inst.name) == 0)
- This occurred shortly after startup when trying to reproduce another bug on the master branch:...
- 06:21 PM Fix #3630: mds: broken closed connection cleanup
12/26/2012
- 09:59 AM Bug #3681 (Resolved): kclient fsx fails nightly
- ...
- 08:39 AM Feature #3679 (Closed): Any API to get metadata?
- Yep! See libcephfs. There is...
- 01:08 AM Feature #3679 (Closed): Any API to get metadata?
- Hello there.
I am wondering if there is any API to get the metadata of a file.
I have the ceph file system run by ...
- 01:10 AM Tasks #3680 (Rejected): deduplication in ceph
- I am wondering how to do deduplication in ceph... the big problem is how to get the metadata of a file
and how to mod...
12/24/2012
- 02:58 PM Feature #1448 (In Progress): test hadoop on sepia
- 02:58 PM Cleanup #814 (Resolved): hadoop: refactor hadoop shim in terms of java libceph bindings
12/23/2012
- 09:12 PM Cleanup #3677 (Closed): libcephfs, mds: test creation/addition of data pools, create policy
- The create data pool argument is tested only with the default pools. Once a lib is in place for the unit/functional...
- 09:06 PM Bug #3663 (Rejected): ceph kernel client is getting stuck on xstat* operations
- No worries. Let us know if you do come across behavior that looks like a bug!
- 08:59 PM Bug #3663: ceph kernel client is getting stuck on xstat* operations
- Hi Sage,
I am very sorry for taking your time with this issue, I feel like an idiot :(
The buggy client is runnin...
12/21/2012
- 02:39 PM Documentation #3672 (Resolved): doc: how to mount ceph-fuse from fstab
- There's a new mount helper in bobtail for this. It contains these comments:...
- 10:20 AM Bug #3666 (Resolved): Segfault running test_libcephfs
- ...
- 08:36 AM Bug #3655 (Can't reproduce): client: hang in fsstress
- I ran this test throughout the day yesterday and couldn't reproduce it, with message delays enabled. Marking as can'...
- 07:52 AM Bug #3663: ceph kernel client is getting stuck on xstat* operations
- Hi Roman-
The logging levels are right, but in both mds logs neither mds was ever active; both were in the up:stan...
12/20/2012
- 10:19 PM Bug #3663: ceph kernel client is getting stuck on xstat* operations
- Hello Sage,
Added 4 logs:
Screen output from the console of the laggy client. It ends up on 'jroger@pr02:~/data$ cp...
- 09:07 PM Bug #3663 (Need More Info): ceph kernel client is getting stuck on xstat* operations
- Hmm. It's actually just saying it's the oldest client; it's not actually too old (yet). The looping connect attempts...
- 08:48 PM Bug #3663 (Rejected): ceph kernel client is getting stuck on xstat* operations
- there are 2 kernel clients happily working with ceph. as soon as I try mounting ceph from the third client, it's gett...
12/19/2012
- 11:19 PM Bug #3655 (Can't reproduce): client: hang in fsstress
- fsstress stuck in _read_sync()
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../nptl/sysdeps/unix/sysv/linux/x86_6...
- 04:03 PM Bug #3637: client: not issuing caps for with clients doing shared writes
- Proposed fix in wip-3637. The client's max size request in MClientCaps gets dropped if the file lock is in a non-sta...
- 12:30 PM Bug #3625: client: EEXIST error on multiple clients to create
- Pushed fixes to wip-3625 (ceph and ceph-client repos) that implement #3 (mds sends back the created flag in reply to ...
- 12:29 PM Bug #3625: client: EEXIST error on multiple clients to create
- David and I have posted comments on github about the fix to allow multiple
clients opening the same file to get a va...