Bug #3726 (closed): Enforce Ceph's minimum stripe size in the Java bindings

Added by Anonymous over 11 years ago. Updated about 5 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Development
Component(FS): Hadoop/Java
Labels (FS): Java/Hadoop

Description

The Hadoop bindings are using the block size as the stripe size. If a block size is explicitly passed down, it eventually gets checked by ceph_file_layout_is_valid() in src/include/ceph_fs.cc. That function requires the stripe unit to be a multiple of 64K (CEPH_MIN_STRIPE_UNIT, to be precise). We need to adjust the block size in our Hadoop code and then log to the caller that we changed it.

Our current plan is to expose CEPH_MIN_STRIPE_UNIT via a JNI call and then enforce it as a minimum on the specified block size.
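
A minimal sketch of what that adjustment could look like on the Hadoop side, assuming the minimum stripe unit has already been fetched over JNI (the class and method names here are illustrative, not the actual binding API):

    // Illustrative sketch only; not the actual Hadoop/CephFS binding code.
    public class StripeSizeCheck {
        // Enforce the exposed minimum stripe unit on an explicitly requested
        // block size and report the change back to the caller via a log line.
        static long adjustBlockSize(long requestedBlockSize, long minStripeUnit) {
            if (requestedBlockSize < minStripeUnit) {
                System.err.println("block size " + requestedBlockSize
                        + " is below the minimum stripe unit; using " + minStripeUnit);
                return minStripeUnit;
            }
            return requestedBlockSize;
        }
    }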

#1

Updated by Anonymous over 11 years ago

After a discussion on jabber, the decision is to go with exposing a function call in libcephfs and then using that in the Java bindings. The call will return the #define CEPH_MIN_STRIPE_UNIT for now.
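
On the Java side, the exposed call might be declared roughly as follows; the class, method, and library names are placeholders, not the final binding API:

    // Placeholder sketch; the actual libcephfs function and JNI wiring may differ.
    public class CephStripeUnit {
        static {
            System.loadLibrary("cephfs_jni"); // assumed native library name
        }

        // Backed by the new libcephfs call; for now it would simply
        // return the CEPH_MIN_STRIPE_UNIT #define (64K).
        private static native long nativeGetStripeGranularity();

        public static long getStripeGranularity() {
            return nativeGetStripeGranularity();
        }
    }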

#2

Updated by Anonymous over 11 years ago

Also, name it something along the lines of get_stripe_granularity() and not ..._min(imum), as that isn't entirely accurate (the value must be both >= the #define and a multiple of it).
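
Given that second constraint, one way the bindings could satisfy it is to round a requested size up to the next multiple rather than just clamping it; a small sketch under that assumption:

    // Illustrative only: make a requested stripe unit both >= the granularity
    // and an exact multiple of it by rounding up to the next multiple.
    public class StripeRounding {
        static long roundUpToGranularity(long requested, long granularity) {
            if (requested <= granularity) {
                return granularity;
            }
            long remainder = requested % granularity;
            return remainder == 0 ? requested : requested + (granularity - remainder);
        }
    }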

#3

Updated by Noah Watkins over 11 years ago

  • Status changed from New to Closed
#4

Updated by Noah Watkins over 11 years ago

  • Status changed from Closed to Resolved
#5

Updated by Greg Farnum almost 8 years ago

  • Component(FS) Hadoop/Java added
#6

Updated by Patrick Donnelly about 5 years ago

  • Category deleted (48)
  • Labels (FS) Java/Hadoop added