
Bug #837

AuthAuthorizeHandler fails to build on s390

Added by Greg Farnum over 9 years ago. Updated over 9 years ago.

Status: Resolved
Priority: Low
Assignee: -
Category: -
Target version:
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature:

Description

Also from Laszlo:

It also fails on s390, on the same file but with a different error
message:
g++ -DHAVE_CONFIG_H -I. -Wall -D__CEPH__ -D_FILE_OFFSET_BITS=64 -D_REENTRANT -D_THREAD_SAFE -rdynamic -g -O2 -MT AuthAuthorizeHandler.o -MD -MP -MF .deps/AuthAuthorizeHandler.Tpo -c -o AuthAuthorizeHandler.o `test -f 'auth/AuthAuthorizeHandler.cc' || echo './'`auth/AuthAuthorizeHandler.cc
In file included from ./msg/msg_types.h:19,
from auth/Auth.h:19,
from auth/AuthAuthorizeHandler.cc:1:
./include/blobhash.h: In member function 'size_t blobhash::operator()(const char*, unsigned int)':
./include/blobhash.h:42: error: no match for call to '(rjhash<long unsigned int>) (size_t&)'
make[3]: *** [AuthAuthorizeHandler.o] Error 1

The full build log can be found at the same location [2]. I would be
grateful if someone with more knowledge of Ceph internals could look into
these.

Regards,
Laszlo/GCS
[1] https://buildd.debian.org/fetch.cgi?pkg=ceph&arch=armel&ver=0.24.3-2&stamp=1298712439&file=log&as=raw
[2] https://buildd.debian.org/fetch.cgi?pkg=ceph&arch=s390&ver=0.24.3-2&stamp=1298712185&file=log&as=raw

Associated revisions

Revision 68a2f46f (diff)
Added by Tommi Virtanen over 9 years ago

blobhash: Avoid size_t in templatized hash functions.

On S/390, the earlier rjhash<size_t> failed with
"no match for call to '(rjhash<long unsigned int>) (size_t&)'".
It seems the rjhash<size_t> logic was only enabled
on some architectures, and relied on some pretty deep
internals of the bit layout (LP64).

Use an explicitly 32-bit type as early as possible, and
convert back to size_t only when really needed. This
should work, and simplifies the code. In theory, we might
have a narrower output (size_t might be 64-bit, max value
we now output is 32-bit), but this doesn't matter as this
is only ever used for picking a slot in an in-memory hash
table, hash(key) modulo num_of_buckets, there won't be >4G
buckets.

Closes: #837

Signed-off-by: Tommi Virtanen <>

History

#1 Updated by Sage Weil over 9 years ago

  • Target version changed from v0.25 to v0.25.1

#2 Updated by Greg Farnum over 9 years ago

  • Assignee set to Anonymous

#3 Updated by Anonymous over 9 years ago

Current understanding:

  • it's the key hashing function for an in-RAM hash table

Plan:

  • rewrite the hash table code to use rjhash* directly, avoiding size_t in favor of u_int32_t etc.
  • remove blobhash from the source tree

Will fix next week.

#4 Updated by Sage Weil over 9 years ago

  • Priority changed from Normal to Low

#5 Updated by Sage Weil over 9 years ago

  • Target version changed from v0.25.1 to v0.25.2

#6 Updated by Sage Weil over 9 years ago

  • Story points set to 3
  • Position set to 554

#7 Updated by Sage Weil over 9 years ago

  • Position deleted (554)
  • Position set to 552

#8 Updated by Anonymous over 9 years ago

There's a stab in the dark that might solve this in branch tv-blobhash-837.

#9 Updated by Sage Weil over 9 years ago

  • Status changed from New to Resolved
  • Target version changed from v0.25.2 to v0.26

merging this for v0.26 to get some extra testing. no rush on s390 support :)

#10 Updated by Sage Weil over 9 years ago

  • Position deleted (574)
  • Position set to 573

#11 Updated by Sage Weil over 9 years ago

  • Story points changed from 3 to 1
  • Position deleted (573)
  • Position set to 573
