Bug #3726
Enforce Ceph's minimum stripe size in the Java bindings
Status: Closed
Description
The Hadoop bindings use the block size as the stripe size. If a block size is explicitly passed down, it is eventually checked by ceph_file_layout_is_valid() in src/include/ceph_fs.cc. That function requires the stripe unit to be a multiple of 64 KB (more precisely, of CEPH_MIN_STRIPE_UNIT). We need to adjust the block size in our Hadoop code and then log a warning to the caller that the block size was changed.
Our current plan is to expose CEPH_MIN_STRIPE_UNIT via a JNI call and use it as a lower bound on the specified block size.
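The adjustment described above can be sketched as follows. This is a minimal illustration, not the actual bindings code: the constant and the class/method names are hypothetical stand-ins, and in the real plan the 64 KB value would come from a JNI call into libcephfs rather than a hard-coded literal.

```java
// Hypothetical sketch of the block-size adjustment: the value must be
// >= CEPH_MIN_STRIPE_UNIT and a multiple of it, so we round up.
public class StripeSizeCheck {
    // Placeholder for the value a JNI accessor would return
    // (CEPH_MIN_STRIPE_UNIT is 64 KB).
    static final long MIN_STRIPE_UNIT = 65536;

    // Clamp to the minimum, then round up to the next multiple.
    static long adjustBlockSize(long requested) {
        if (requested <= MIN_STRIPE_UNIT) {
            return MIN_STRIPE_UNIT;
        }
        long remainder = requested % MIN_STRIPE_UNIT;
        if (remainder != 0) {
            return requested + (MIN_STRIPE_UNIT - remainder);
        }
        return requested;
    }

    public static void main(String[] args) {
        System.out.println(adjustBlockSize(1000));   // 65536
        System.out.println(adjustBlockSize(100000)); // 131072
    }
}
```

A caller passing, say, a 100000-byte block size would have it silently rounded up to 131072 (2 × 65536), which is why the description calls for logging the change back to the caller.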
Updated by Anonymous over 11 years ago
After a discussion on jabber, the decision is to expose a function call in libcephfs and use it in the Java bindings. For now, the call will return the #define CEPH_MIN_STRIPE_UNIT.
Updated by Anonymous over 11 years ago
Also, name it something along the lines of get_stripe_granularity() rather than .._min(imum)_, as "minimum" isn't entirely accurate: the value must be both >= the #define and a multiple of it.
Updated by Noah Watkins over 11 years ago
- Status changed from Closed to Resolved
Updated by Patrick Donnelly about 5 years ago
- Category deleted (48)
- Labels (FS) Java/Hadoop added