Activity
From 03/01/2016 to 03/30/2016
03/30/2016
- 11:54 PM Feature #15264: libcephfs: enable non-"ll" users to set their uid/gid
- Hmm, is setting config options really the interface we want for it? It's easy enough for the moment but I'm not sure ...
- 12:11 PM Feature #15264 (Rejected): libcephfs: enable non-"ll" users to set their uid/gid
- Oh, never mind, I was looking in the libcephfs API (where you can't set it), but not in the config opts (where you ca...
- 06:18 PM Bug #15309: Error: Creation of multiple filesystems is disabled. To enable this experimental fea...
- 10:45 AM Bug #15309 (Fix Under Review): Error: Creation of multiple filesystems is disabled. To enable th...
- https://github.com/ceph/ceph/pull/8372
- 01:31 PM Bug #15266 (In Progress): ceph_volume_client purge failing on non-ascii filenames
- 01:10 PM Bug #15045: CephFSVolumeClient.evict should be limited by path, not just auth ID
- I am working on this issue.
03/29/2016
- 05:35 PM Bug #15309: Error: Creation of multiple filesystems is disabled. To enable this experimental fea...
- Could be failing because default pool names changed, so this looks like a "create second" rather than "it already exi...
- 05:17 PM Bug #15309 (Resolved): Error: Creation of multiple filesystems is disabled. To enable this exper...
- 2016-03-28T14:22:24.439 INFO:tasks.rest_api.client.rest0.smithi037.stderr:127.0.0.1 - - [28/Mar/2016 21:22:24] "PUT /...
- 01:59 PM Bug #15303 (Fix Under Review): Client holds incorrect complete flag on dir after losing caps
- https://github.com/ceph/ceph/pull/8353
- 01:33 PM Bug #15303 (In Progress): Client holds incorrect complete flag on dir after losing caps
- Client::handle_cap_grant() passes the wrong parameter to Client::check_cap_issue()
- 01:29 PM Bug #15303: Client holds incorrect complete flag on dir after losing caps
- Reproducer is https://github.com/ceph/ceph-qa-suite/pull/917
- 11:07 AM Bug #15303 (Resolved): Client holds incorrect complete flag on dir after losing caps
We saw this from testing Manila. The symptom is that the Manila driver gets ENOTEMPTY when trying to remove a shar...
- 01:50 PM Bug #15304 (Resolved): qa: samba tests need to run on testing kernel
- I think some of our "random" samba failures are not so random. It looks like we no longer pass on any of our kclient ...
- 12:51 PM Bug #15270: "mount error 5 = Input/output error" in smoke-master-distro-basic-openstack
- the test uses a 3.13 kernel; maybe the failure was caused by an unsupported feature.
- 08:33 AM Support #15268: CephFS mount blocks VM
- Zheng Yan wrote:
> grab_super+0x2e is down_write(&s->s_umount). the backtrace shows there is already a mounted cephfs...
- 02:43 AM Support #15268: CephFS mount blocks VM
- grab_super+0x2e is down_write(&s->s_umount). The backtrace shows there is already a mounted cephfs; you tried mounting...
03/28/2016
- 08:48 PM Bug #14805 (Resolved): Hadoop tests failing with EPERM
- 01:54 PM Support #15268: CephFS mount blocks VM
- Joao Castro wrote:
> Trying to mount it on an EC2 machine
>
> Mar 27 10:37:25 ip-172-31-5-103 kernel: [ 1320.75613...
03/27/2016
- 10:41 AM Support #15268: CephFS mount blocks VM
- Trying to mount it on an EC2 machine
Mar 27 10:37:25 ip-172-31-5-103 kernel: [ 1320.756137] INFO: task mount.ceph:1...
03/25/2016
- 08:20 AM Backport #15281 (Rejected): infernalis: standby-replay MDS does not cleanup finished replay threads
- 01:02 AM Feature #15065 (Resolved): multifs: add standby_for_fscid setting on MDS and pass in MMDSBeacon
- 12:57 AM Feature #15063 (Resolved): multifs: option to disable sanity() calls
- 12:57 AM Feature #15063: multifs: option to disable sanity() calls
- commit:b296629f1ace2b2c394e9303b17e952fabb06696, merged in f474c23d7c619fe014bd4cf42cab12802f1e4e2c
- 12:57 AM Fix #15062 (Resolved): multifs cleanup: pass feature bits into MDSMap from Filesystem::encode
- commit:68398258e152803cf9166a5bc3c0b1e153062d1e, merged in f474c23d7c619fe014bd4cf42cab12802f1e4e2c
- 12:45 AM Bug #15167 (Resolved): CID 1355575: null CDir* can get pushed on ScrubStack?
- 12:44 AM Bug #15210 (Rejected): qa: snaptests-0.sh: file exists error after deleting+reusing name
- I thought each of these was running in their own subdirectory, but if that's not the case then this is expected behav...
03/24/2016
- 11:44 PM Support #15268: CephFS mount blocks VM
- Hmm, if it were newer features in userspace than in the kernel you should be getting errors. Zheng, any idea?
- 11:42 PM Support #15268: CephFS mount blocks VM
- Joao Castro wrote:
> Greg Farnum wrote:
> > You'll need to provide a little more context. You're just trying to mou...
- 11:41 PM Support #15268: CephFS mount blocks VM
- Greg Farnum wrote:
> You'll need to provide a little more context. You're just trying to mount CephFS inside of a VM...
- 10:33 PM Support #15268: CephFS mount blocks VM
- You'll need to provide a little more context. You're just trying to mount CephFS inside of a VM, and the mount is han...
- 03:47 PM Support #15268 (Resolved): CephFS mount blocks VM
- cat /proc/modules | grep -i ceph
ceph 315392 0 - Live 0xffffffffc0460000
libceph 241664 1 ceph, Live 0xffffffffc041...
- 10:59 PM Bug #15235: MDS : erroneous error message about reading config file
- Ok I didn't know it could be related. No problem in this case :)
- 10:52 PM Bug #15235 (Fix Under Review): MDS : erroneous error message about reading config file
- 10:51 PM Bug #15235: MDS : erroneous error message about reading config file
- Oh. I gather you're running into #14144, given https://github.com/ceph/ceph/pull/7060#issuecomment-200745759. The "un...
- 10:36 PM Bug #15235: MDS : erroneous error message about reading config file
- Yes I know, but it ends like this... :(
- 10:26 PM Bug #15235: MDS : erroneous error message about reading config file
- That paste doesn't contain an error of any kind in it. :) It seems to end with the respawn command getting invoked, b...
- 09:06 AM Bug #15235: MDS : erroneous error message about reading config file
- It should respawn itself, but fails on reading the configuration file. But as you can see in strace, it does not *try* to...
- 12:17 AM Bug #15235: MDS : erroneous error message about reading config file
- If you're failing an MDS from the monitor it's respawning itself, so maybe something weird happened there. Or maybe i...
- 10:27 PM Bug #14805: Hadoop tests failing with EPERM
- d'oh, right. Okay, I get the problem now. I've run this through a couple of times in my latest integration branch, bt...
- 12:09 PM Bug #14805: Hadoop tests failing with EPERM
- Greg Farnum wrote:
> Wait, can you expand on that John? I wasn't really looking at the python tests, although I know...
- 01:04 AM Bug #14805: Hadoop tests failing with EPERM
- Wait, can you expand on that John? I wasn't really looking at the python tests, although I know it involved root owne...
- 10:20 PM Bug #14144 (Pending Backport): standby-replay MDS does not cleanup finished replay threads
- 04:47 PM Bug #15270 (Resolved): "mount error 5 = Input/output error" in smoke-master-distro-basic-openstack
- Run: http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-03-24_05:00:02-smoke-master-distro-basic-openstack/
Jobs...
- 03:04 PM Bug #15266 (Resolved): ceph_volume_client purge failing on non-ascii filenames
- ...
- 01:59 PM Feature #15264 (Rejected): libcephfs: enable non-"ll" users to set their uid/gid
- Currently libcephfs mostly doesn't pass through a uid/gid to Client, and Client defaults to reading the uid/gid of th...
- 12:18 AM Bug #15008 (Resolved): fuse expects root inode number to be FUSE_ROOT_ID
03/23/2016
- 05:26 PM Feature #15252 (New): client: support fadvise DONTNEED
- This can apply to both our own caching, and be passed through to the OSDs. DONTNEED is great because it prevents poll...
03/22/2016
- 12:58 PM Bug #15235 (Closed): MDS : erroneous error message about reading config file
- Hi,
I have a bug on Infernalis with MDS.
When an MDS is failing and going to standby mode (ceph mds fail X), it ...
- 12:10 PM Feature #15065 (Fix Under Review): multifs: add standby_for_fscid setting on MDS and pass in MMDS...
- https://github.com/ceph/ceph/pull/8257
- 01:56 AM Bug #15210: qa: snaptests-0.sh: file exists error after deleting+reusing name
- two instances of snaptest-0.sh were executed in the root directory at the same time; that's why we saw EEXIST.
- 01:09 AM Bug #15210: qa: snaptests-0.sh: file exists error after deleting+reusing name
- Oh duh. I am having serious trouble setting up the qa-suite in a way that makes this test and all the others happy.
...
03/21/2016
03/18/2016
- 11:24 PM Bug #15210: qa: snaptests-0.sh: file exists error after deleting+reusing name
- n/m, I can't use my email client I guess.
- 11:20 PM Bug #15210 (Rejected): qa: snaptests-0.sh: file exists error after deleting+reusing name
- http://pulpito.ceph.com/gregf-2016-03-17_23:35:13-fs-greg-fs-testing-316---basic-mira/71116/...
03/17/2016
- 06:52 PM Bug #15156 (Can't reproduce): mds stuck in clientreplay
- I'm not sure if that patch is correct in Infernalis or not (some of the dispatch stuff changed slightly at one point ...
- 04:22 PM Bug #15156: mds stuck in clientreplay
- And do you think backporting http://tracker.ceph.com/issues/14357 as https://github.com/ceph/ceph/commit/24de350...
- 04:11 PM Bug #15156: mds stuck in clientreplay
- Yes, now healthy. It was not the first similar problem, but the first after upgrading to Infernalis/git. Kernels >=4.4.3 (preci...
- 06:58 AM Bug #15156 (Need More Info): mds stuck in clientreplay
- Is your cluster healthy now? This may be related to http://tracker.ceph.com/issues/14357, although we've also seen br...
- 12:07 PM Bug #15167 (Fix Under Review): CID 1355575: null CDir* can get pushed on ScrubStack?
- https://github.com/ceph/ceph/pull/8180
- 06:46 AM Bug #15167 (Resolved): CID 1355575: null CDir* can get pushed on ScrubStack?
- ...
- 07:19 AM Bug #15169 (Rejected): have trouble in mount.ceph in precise
- mount the ceph fs with the kernel driver using the command below on the admin node
mount -t ceph 192.168.0.164:6789:/ /mnt/myc...
- 07:13 AM Bug #15168 (Duplicate): have trouble in mount.ceph
- mount the ceph fs with the kernel driver using the command below on the admin node
mount -t ceph 192.168.0.164:6789:/ /mnt/my...
03/16/2016
- 08:19 AM Bug #15106 (Resolved): ceph.py does 'fs get ...' and breaks upgrade tests
- https://github.com/ceph/ceph-qa-suite/pull/875
- 05:27 AM Bug #15106: ceph.py does 'fs get ...' and breaks upgrade tests
- There are three instances of "fs get" in filesystem.py:...
- 08:18 AM Bug #15124 (Resolved): ceph.py 'fs get ...' doesn't handle old installed ceph version
- https://github.com/ceph/ceph-qa-suite/pull/875
- 05:46 AM Bug #15156 (Can't reproduce): mds stuck in clientreplay
- 3 mds, standby-replay mode,
1) active (b) - also question (usual state):
2016-03-16 04:16:48.713071 7f6752cd2700 ...
- 03:40 AM Bug #14685: dbench hang on native cifs mount
- http://teuthology.ovh.sepia.ceph.com/teuthology/teuthology-2016-03-13_23:14:01-samba-hammer---basic-openstack/15126/t...
03/15/2016
- 09:26 PM Backport #15056 (In Progress): hammer: deleting striped file in cephfs doesn't free up file's space
- 08:19 PM Backport #15056: hammer: deleting striped file in cephfs doesn't free up file's space
- Original PR has been merged.
- 09:26 PM Backport #15057 (In Progress): infernalis: deleting striped file in cephfs doesn't free up file's...
- 09:26 PM Backport #15057: infernalis: deleting striped file in cephfs doesn't free up file's space
- Original PR has been merged.
- 07:28 PM Bug #15050 (Pending Backport): deleting striped file in cephfs doesn't free up file's space
- It's been merged.
- 01:05 PM Bug #15106: ceph.py does 'fs get ...' and breaks upgrade tests
- Nevermind - jcsp and loicd answered my question out-of-band.
- 12:42 PM Bug #15106: ceph.py does 'fs get ...' and breaks upgrade tests
- I see in the test definition:...
- 11:57 AM Bug #15106: ceph.py does 'fs get ...' and breaks upgrade tests
- The command line for triggering this suite included ...
- 11:32 AM Bug #15106: ceph.py does 'fs get ...' and breaks upgrade tests
- @Kefu: reopening, because this issue is still affecting the upgrade tests. For example, this job:...
- 12:16 PM Fix #15134: multifs: test case exercising mds_thrash for multiple filesystems
- NB: while doing this, also check whether the upgrade suites need to run the thrasher on old versions of Ceph; if so th...
- 12:13 PM Fix #15134 (Resolved): multifs: test case exercising mds_thrash for multiple filesystems
Currently the MDS thrasher only acts on one filesystem. Still pretty useful for checking the multifs stuff didn't ...
03/14/2016
- 10:56 PM Bug #15124: ceph.py 'fs get ...' doesn't handle old installed ceph version
- The "fs new" case was handled, but this is probably the get_mds_map path trying to call "fs get"
- 09:25 PM Bug #15124: ceph.py 'fs get ...' doesn't handle old installed ceph version
- /a/sage-2016-03-14_14:05:31-rados-wip-sage-testing---basic-smithi/59145
- 09:25 PM Bug #15124 (Resolved): ceph.py 'fs get ...' doesn't handle old installed ceph version
- for example, this test
description: rados/singleton-nomsgr/{rados.yaml all/13234.yaml}
installs dumpling and ...
- 07:45 AM Bug #15075 (Resolved): "suspicious RCU usage" message in knfs, kcephfs tests
- test results http://pulpito.ceph.com/zyan-2016-03-14_00:15:28-kcephfs-jewel-testing-basic-multi/
- 03:38 AM Bug #15075: "suspicious RCU usage" message in knfs, kcephfs tests
- I should use rcu_dereference_protected() instead of rcu_dereference() when swapping pointers. Updated testing branch of...
- 06:29 AM Bug #15106 (Rejected): ceph.py does 'fs get ...' and breaks upgrade tests
- I ran into this problem also, see http://qa-proxy.ceph.com/teuthology/kchai-2016-03-11_19:18:43-rados-wip-kefu-testin...
03/13/2016
03/11/2016
- 05:01 PM Bug #15060 (Duplicate): client: Leak_StillReachable from boost::detail::get_once_per_thread_epoch()
- http://tracker.ceph.com/issues/14794, which got fixed yesterday.
- 11:18 AM Bug #15060 (Duplicate): client: Leak_StillReachable from boost::detail::get_once_per_thread_epoch()
- http://pulpito.ceph.com/teuthology-2016-03-09_14:03:10-fs-jewel---basic-smithi/49376/...
- 02:30 PM Bug #14805: Hadoop tests failing with EPERM
- Maybe this has the same issue as the python libcephfs tests did; they were creating files with mode 0, which used to work
- 02:23 PM Bug #15075 (Resolved): "suspicious RCU usage" message in knfs, kcephfs tests
- Zheng, does this look familiar to you?
http://pulpito.ceph.com/teuthology-2016-03-09_17:10:02-knfs-jewel-testing-b...
- 02:11 PM Bug #15061 (Duplicate): TestStrays.test_snapshot_remove failing, objects remain in data pool
- Ah, I thought it sounded familiar, but I couldn't find the ticket. Thanks.
- 11:57 AM Bug #15061: TestStrays.test_snapshot_remove failing, objects remain in data pool
- It's an OSD issue. dup of http://tracker.ceph.com/issues/14962
- 11:32 AM Bug #15061: TestStrays.test_snapshot_remove failing, objects remain in data pool
- ...
- 11:31 AM Bug #15061 (Duplicate): TestStrays.test_snapshot_remove failing, objects remain in data pool
http://pulpito.ceph.com/teuthology-2016-03-07_18:04:01-fs-master---basic-smithi/45775/
Reproduces locally on vst...
- 02:03 PM Feature #15074 (New): multifs: infer client's target filesystem based on its auth caps
- Usually, if a client subscribes to unqualified "mdsmap", it'll get whichever filesystem is marked as the legacy files...
- 01:53 PM Feature #15072 (New): mon: multifs: auth caps of MDS->mon connections to limit by FSCID
- MDSs already only receive a populated MDSMap once they have been assigned a rank.
These caps should be used by MDS...
- 01:51 PM Feature #15071 (New): mds: client: multifs: auth caps on client->MDS connections to limit by FSCID
- When a client is to be limited to a particular filesystem, it needs to not only be restricted to seeing that MDSMap, ...
- 01:49 PM Feature #15070 (Resolved): mon: client: multifs: auth caps on client->mon connections to limit th...
- Currently clients with 'mds allow r' capabilities can see any MDSMap.
We would like to be able to craft client aut...
- 01:47 PM Feature #15069 (Resolved): MDS: multifs: enable two filesystems to point to same pools if one of ...
- The 'damaged' flag on a filesystem prevents any MDS from being assigned a rank in that filesystem. While a filesyste...
- 01:43 PM Feature #15068 (Resolved): fsck: multifs: enable repair tools to read from one filesystem and wri...
- To create a workflow in which the user marks an existing filesystem damaged, and then goes through a repair process w...
- 01:41 PM Feature #15067 (Resolved): mon: client: multifs: enable clients to map a filesystem name to a FSCID
Currently clients have to specify the ID of the filesystem they want to connect to (or specify no ID to get the leg...
- 01:40 PM Feature #15066 (Rejected): multifs: Allow filesystems to be assigned RADOS namespace as well as p...
- Everywhere we accept a pool argument currently (e.g. in "ceph fs new"), we should additionally accept a RADOS namespa...
- 01:37 PM Feature #15065 (Resolved): multifs: add standby_for_fscid setting on MDS and pass in MMDSBeacon
- Currently in MDSMonitor::prepare_beacon:...
- 01:33 PM Fix #15064 (Closed): multifs: tweak text on "flag set enable multiple"
- (Requested by Greg https://github.com/ceph/ceph/pull/6953#issuecomment-194731565)
Currently when someone tries to ...
- 01:30 PM Feature #15063 (Resolved): multifs: option to disable sanity() calls
- (Requested by Greg https://github.com/ceph/ceph/pull/6953#discussion-diff-55642577R359)
The MDSMonitor calls FSMap...
- 01:28 PM Fix #15062 (Resolved): multifs cleanup: pass feature bits into MDSMap from Filesystem::encode
- Currently a static set of bits is passed when encoding MDSMap in Filesystem objects.
https://github.com/ceph/ceph/...
- 05:20 AM Backport #15057: infernalis: deleting striped file in cephfs doesn't free up file's space
- NOTE: Original PR not yet merged.
- 05:16 AM Backport #15057 (Rejected): infernalis: deleting striped file in cephfs doesn't free up file's space
- https://github.com/ceph/ceph/pull/8041
- 05:19 AM Backport #15056: hammer: deleting striped file in cephfs doesn't free up file's space
- NOTE: Original PR not yet merged.
- 05:16 AM Backport #15056 (Resolved): hammer: deleting striped file in cephfs doesn't free up file's space
- https://github.com/ceph/ceph/pull/8042
- 05:10 AM Bug #13268 (Resolved): Secondary groups are not read /w SAMBA 4.2.2 & CEPHFS VFS module
- 03:38 AM Bug #15050 (Fix Under Review): deleting striped file in cephfs doesn't free up file's space
- https://github.com/ceph/ceph/pull/8040
https://github.com/ceph/ceph/pull/8041
https://github.com/ceph/ceph/pull/8042
- 03:03 AM Bug #15050: deleting striped file in cephfs doesn't free up file's space
- Well spotted.
- 02:25 AM Bug #15050: deleting striped file in cephfs doesn't free up file's space
- we don't handle 'stripe_count > 1' properly when purging stray....
- 03:10 AM Backport #13809 (Resolved): hammer: Secondary groups are not read /w SAMBA 4.2.2 & CEPHFS VFS module
- 03:10 AM Backport #13813 (Resolved): hammer: Daily segfault ll_forget reader couldn't read tag
03/10/2016
- 11:55 PM Bug #15050: deleting striped file in cephfs doesn't free up file's space
- Can you describe in a little more detail how you were creating and deleting these files? You talk about them being 10...
- 07:08 PM Bug #15050 (Resolved): deleting striped file in cephfs doesn't free up file's space
- I've been creating large (16-64 GB) files in an otherwise empty test cephfs (mounted via the kernel client in linux 4...
- 01:48 PM Bug #15045 (Resolved): CephFSVolumeClient.evict should be limited by path, not just auth ID
- 11:08 AM Bug #15008: fuse expects root inode number to be FUSE_ROOT_ID
- *jewel PR*: https://github.com/ceph/ceph/pull/7976
- 09:05 AM Bug #12776 (Resolved): qa: standby MDS not shutting down, "reached maximum tries (50) after waiti...
- 09:04 AM Bug #13583 (Resolved): Client::_fsync() on a given file does not wait unsafe requests that create...
- 09:03 AM Bug #13675 (Resolved): Failure in LibCephFS.DirLs
- 09:02 AM Bug #13729 (Resolved): Daily segfault ll_forget reader couldn't read tag
- 09:00 AM Bug #14196 (Resolved): test_object_deletion fails (tasks.cephfs.test_damage.TestDamage)
- 08:59 AM Bug #14374 (Resolved): MDS asok handlers trigger lock cycle assertion if they take mds_lock
- 08:59 AM Bug #14380 (Resolved): "ceph mds setmap" crashes mon on invalid input
- 08:58 AM Bug #14379 (Resolved): Add confirmation flag to "ceph mds rmfailed"
- 08:52 AM Bug #10436 (Resolved): ceph-fuse: snapshot flushing from page cache to Client is not coherent
- 06:43 AM Bug #14996 (Resolved): libcephfs hangs on shutdown if an unclosed opendir handle exists
- 06:26 AM Bug #14800 (Resolved): [ceph-fuse] Fh ref might leak at umounting
- 06:24 AM Bug #14798 (Resolved): free fds being exhausted eventually because freed fds are never put back
- 06:12 AM Bug #15038 (Resolved): unittest_mds_types: inode_t.compare_equal fails
- 03:13 AM Bug #15038 (Fix Under Review): unittest_mds_types: inode_t.compare_equal fails
- https://github.com/ceph/ceph/pull/8014
- 01:11 AM Bug #15038 (Resolved): unittest_mds_types: inode_t.compare_equal fails
- This causes 'make check' to fail on master.
http://gitbuilder.sepia.ceph.com/gitbuilder-ceph-tarball-trusty-amd64-...
03/09/2016
- 02:05 PM Bug #14758 (Resolved): failed TestJournalRepair test on smithi
- In this case, mds.b was going into resolve (rank 0), while mds.a was remaining in standby. That's correct (rank 1 is...
- 11:53 AM Bug #14758 (New): failed TestJournalRepair test on smithi
- 11:52 AM Bug #14758 (Rejected): failed TestJournalRepair test on smithi
- The link in the report was wrong, it was actually:
http://pulpito.ceph.com/teuthology-2016-02-12_14:03:01-fs-jewel--...
- 06:34 AM Bug #14758: failed TestJournalRepair test on smithi
- Hmm, we haven't seen this since then, and there have been some smithi runs so it's apparently not just infrastructure...
- 11:50 AM Feature #14642: Validate layouts everywhere we load them
- Yes, and in testing I also found other places in the MDS that get upset with zeros, so the ticket probably either nee...
- 06:46 AM Feature #14642: Validate layouts everywhere we load them
- John, I think maybe you said in the user thread that we seem to guard against this on inputs, right? So we've no idea...
- 06:50 AM Bug #14255: qa: we are filling smithi disks with ffsb workloads
- This is still happening (http://pulpito.ceph.com/teuthology-2016-03-07_18:04:01-fs-master---basic-smithi/45723/, http...
- 06:45 AM Bug #14807 (Need More Info): MDS crashes repeatedly after upgrade to Infernalis from Hammer
- 06:44 AM Bug #14735: ceph-fuse does not mount at boot on Debian Jessie
- Bumping down priority as it's just the backport.
- 06:43 AM Bug #14608 (Need More Info): snaptests.yaml failure: [WRN] open_snap_parents has:" in cluster log
- For whatever reason we don't actually have the logs here (any more?), and it doesn't seem to have reproduced elsewher...
- 06:43 AM Bug #14996 (Fix Under Review): libcephfs hangs on shutdown if an unclosed opendir handle exists
- https://github.com/ceph/ceph/pull/7994
- 06:32 AM Bug #14196: test_object_deletion fails (tasks.cephfs.test_damage.TestDamage)
- Bumping down priority as it's just the backport now.
- 01:53 AM Bug #15008 (Fix Under Review): fuse expects root inode number to be FUSE_ROOT_ID
- https://jenkins.ceph.com/job/ceph-pull-requests/2751/
03/08/2016
- 03:27 PM Bug #15008 (Resolved): fuse expects root inode number to be FUSE_ROOT_ID
- it's not true when mounting into a subdir
- 05:09 AM Bug #9679: Ceph hadoop terasort job failure
- I forgot to mention, I tested both: Hadoop 2.7.1 and 2.7.2 with the same outcome.
64 bit Arch linux circa Jan 2015. ...
- 05:01 AM Bug #9679: Ceph hadoop terasort job failure
- Greg,
My setup was simple: ceph jni bindings were built from the ceph repo.
Ceph hadoop2 integration was built from h...
- 12:00 AM Bug #9679: Ceph hadoop terasort job failure
- Dmitry, can you elucidate more on your environment? Which Hadoop, and what bindings did you use to connect Ceph and H...
- 02:15 AM Bug #14805: Hadoop tests failing with EPERM
- old libcephfs only had a permission check for open. Now it has full permission checks (open, lookup, setattr ...)
03/07/2016
- 11:55 PM Bug #14805: Hadoop tests failing with EPERM
- Do you have any idea what about the client permission checking is busting Hadoop? We want to fix it properly (or at l...
- 02:25 PM Bug #14996: libcephfs hangs on shutdown if an unclosed opendir handle exists
- I think we should unify open file/directory handles
- 10:41 AM Bug #14996 (Resolved): libcephfs hangs on shutdown if an unclosed opendir handle exists
This will hang on shutdown:...
03/03/2016
- 02:38 PM Bug #14684 (Resolved): test_scrub_checks fails
- 02:08 PM Bug #14970 (Rejected): Post-new-layouts clients crash talking to older MDSs
- Ah, I see: we reused the version, makes sense. Thanks!
- 02:01 PM Bug #14970: Post-new-layouts clients crash talking to older MDSs
- I think this only affects older MDS with feature bit (1<<58)
- 01:48 PM Bug #14970 (Rejected): Post-new-layouts clients crash talking to older MDSs
- By sheer coincidence I had a vstart cluster created before rebasing on master, and a client compiled after rebasing o...
- 02:38 AM Bug #10944: Deadlock, MDS logs "slow request", getattr pAsLsXsFs failed to rdlock
- The file by default goes to a file called "cachedump.*", like it says. I think it tends to go in /, but I don't remem...
03/02/2016
- 05:28 AM Bug #14805: Hadoop tests failing with EPERM
- tests passed
http://pulpito.ceph.com/zyan-2016-03-01_20:05:19-hadoop-jewel-testing-basic-mira/