Bug #52799
Segmentation Fault in radosgw-admin period update --commit
Status: Closed
Description
On one of our Ceph clusters it is not possible to set up a basic radosgw/S3 configuration. The cluster had been running fine for several months, including a working S3 setup. While exploring new Ceph features such as multisite S3/sync, we decided to recreate S3 from scratch. Now even the simplest S3 setup fails.
I deleted all radosgw-related pools (the .rgw.root pool as well as the corresponding log, meta, index, and control pools),
deleted all zones, zonegroups, realms, and periods,
and then tried to create a new realm, a zonegroup, and a zone (the radosgw-admin ... create commands work fine).
However, "radosgw-admin period update --commit" always fails with a segmentation fault.
The attached file "commands" contains the commands used to create the fresh installation; "cephadm.log" contains the radosgw-admin period update run including the debug log.
I would be happy to provide further logs / debug output if necessary.
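For reference, a fresh single-zone setup along the lines described above looks roughly like this (a sketch; the realm/zonegroup/zone names are placeholders, not the ones from the attached "commands" file):

```shell
# Illustrative reproduction sketch; names are placeholders.
radosgw-admin realm create --rgw-realm=myrealm --default
radosgw-admin zonegroup create --rgw-zonegroup=myzg --master --default
radosgw-admin zone create --rgw-zonegroup=myzg --rgw-zone=myzone --master --default
# This is the step that crashes with a segmentation fault on the affected cluster:
radosgw-admin period update --commit
```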
Updated by Casey Bodley over 2 years ago
We've seen a similar crash from `radosgw-admin period update --commit` inside OpenSSL due to FIPS enforcement. Is FIPS enabled here?
Updated by Stefan Schueffler over 2 years ago
Yes, FIPS is enabled.
I just tested this with FIPS disabled, and it works.
So I have a workaround to get the config back to a running, working cluster (temporarily disable FIPS mode and reboot the nodes), but we need to re-enable FIPS mode due to customer requirements.
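As a side note, on most Linux distributions the kernel's current FIPS enforcement status can be read from procfs (a sketch; the flag file may be absent on kernels built without FIPS support):

```shell
# Print the kernel FIPS flag: 1 = FIPS mode enforced, 0 = disabled.
# Falls back to a message if the kernel exposes no such flag.
cat /proc/sys/crypto/fips_enabled 2>/dev/null \
  || echo "no FIPS flag (kernel without FIPS support)"
```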
Updated by Casey Bodley over 2 years ago
- Is duplicate of Bug #52900: segfault on FIPS enabled server as result of EVP_md5 disabled in openssl added
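The duplicate describes the mechanism: under FIPS enforcement OpenSSL refuses MD5 (EVP_md5), and the unhandled failure leads to the segfault. The restriction is easy to observe from the shell (a sketch assuming the openssl CLI is installed):

```shell
# On a FIPS-enforcing host this digest is rejected with an error such as
# "disabled for FIPS"; on a non-FIPS host it prints the MD5 of empty input,
# d41d8cd98f00b204e9800998ecf8427e.
echo -n "" | openssl dgst -md5 \
  || echo "MD5 rejected: FIPS enforcement is active"
```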