Bug #44926
dashboard: creating a new bucket causes InvalidLocationConstraint
Status: Closed
Description
In Octopus I created an object gateway as described in https://ceph.io/ceph-management/introducing-cephadm/, chapter "Deploying storage services". Afterwards I followed the instructions in https://docs.ceph.com/docs/master/mgr/dashboard/#enabling-the-object-gateway-management-frontend to create a system user (username: dashboard) and used its access and secret key to configure the dashboard. With that done, I can see the object gateways in the dashboard. Additionally, I created another user (system flag not set, username: rsasse) via the dashboard.
When trying to create a bucket for either user (dashboard or rsasse) and selecting the default placement, I get the error message shown in the attached screenshot. One of our developers managed to create a bucket using the API keys of user rsasse directly against the REST API, so I guess the problem is caused by the dashboard.
For completeness: at the moment I'm running version octopus-f8d5631-centos-8-x86_64-devel,
as suggested by Sage while debugging another dashboard issue.
Files
Updated by Andreas Haase about 4 years ago
Commands used to create the object gateway:
radosgw-admin realm create --rgw-realm=C4U --default
radosgw-admin zonegroup create --rgw-zonegroup=CHE --master --default
radosgw-admin zone create --rgw-zonegroup=CHE --rgw-zone=Villa_Hahn --master --default
ceph orch apply rgw C4U Villa_Hahn
ceph config set client.rgw.C4U.Villa_Hahn rgw_frontends "beast port=8080"
Commands used for creating the system user:
radosgw-admin user create --uid=dashboard --display-name=Dashboard --system
radosgw-admin user info --uid=dashboard   # looked for access key and secret key
ceph dashboard set-rgw-api-access-key <KEY>
ceph dashboard set-rgw-api-secret-key <KEY>
ceph dashboard set-rgw-api-ssl-verify False
Updated by Sebastian Wagner about 4 years ago
- Related to Feature #44605: cephadm: RGW: missing dashboard integration added
Updated by Sebastian Wagner about 4 years ago
- Project changed from mgr to Orchestrator
- Description updated (diff)
- Category set to cephadm
Updated by Alfonso Martínez about 4 years ago
- Related to Feature #43687: cephadm: haproxy (or lb) added
Updated by Sebastian Wagner about 4 years ago
- Related to Bug #38119: rgw can't create bucket, because can't find zonegroup? location constraint (default) can't be found. added
Updated by Alfonso Martínez about 4 years ago
- Related to deleted (Feature #43687: cephadm: haproxy (or lb))
Updated by Apely AGAMAKOU almost 4 years ago
Hi, I've got the same issue:
OS: Debian 10 (buster)
Ceph: Octopus (15.2.1)
Nodes: 3
Updated by Apely AGAMAKOU almost 4 years ago
Apely AGAMAKOU wrote:
Hi, I've got the same issue:
OS: Debian 10 (buster)
Ceph: Octopus (15.2.1)
Nodes: 3
radosgw-admin zonegroup get --rgw-zonegroup=default
{
    "id": "4a71b901-5166-4318-b605-1a4de2183d87",
    "name": "default",
    "api_name": "default",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "db0cfcd6-b7a2-47f3-b4c4-862a1e492a4c",
    "zones": [
        {
            "id": "db0cfcd6-b7a2-47f3-b4c4-862a1e492a4c",
            "name": "eu-west-1",
            "endpoints": [],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": [],
            "storage_classes": [
                "STANDARD"
            ]
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "1943f5e9-1e85-41d5-a004-7f7209a88d5e",
    "sync_policy": {
        "groups": []
    }
}
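For illustration, here is a minimal Python sketch (not Ceph code; the matching rule is my assumption, inferred from the "location constraint (default) can't be found" error) of the fields in the zonegroup output above that a bucket's LocationConstraint has to line up with:

```python
import json

# Abbreviated copy of the zonegroup JSON above; only the fields relevant
# to bucket placement are kept.
zonegroup = json.loads("""
{
    "name": "default",
    "api_name": "default",
    "placement_targets": [
        {"name": "default-placement", "tags": [], "storage_classes": ["STANDARD"]}
    ],
    "default_placement": "default-placement"
}
""")

# Assumption: a LocationConstraint of the form "<api_name>:<placement>"
# is accepted only when the first part matches the zonegroup's api_name
# and the second part names one of its placement targets.
constraint = "default:default-placement"
api_name, _, placement = constraint.partition(":")

assert api_name == zonegroup["api_name"]
assert placement in {t["name"] for t in zonegroup["placement_targets"]}
print("constraint resolvable against this zonegroup")
```

If the running gateways never loaded a period containing this zonegroup, the lookup fails even though the JSON above looks correct.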
Updated by Sebastian Wagner almost 4 years ago
- Project changed from Orchestrator to mgr
- Category changed from cephadm to 143
Updated by Sebastian Wagner almost 4 years ago
- Related to Bug #46667: mgr/dashboard: Handle buckets without a realm_id added
Updated by Stephan Müller almost 4 years ago
- Related to deleted (Bug #46667: mgr/dashboard: Handle buckets without a realm_id)
Updated by Stephan Müller almost 4 years ago
- Has duplicate Bug #46667: mgr/dashboard: Handle buckets without a realm_id added
Updated by Andy Gold almost 4 years ago
Hi, I've got the same issue:
OS: CentOS 8.2.2004
Ceph: Octopus (15.2.4)
Nodes: 3
Updated by Alfonso Martínez over 3 years ago
Andreas Haase wrote:
Commands used to create the object gateway:
[...]
Commands used for creating the systemuser:
[...]
After creating realms & users, you should update realm period:
radosgw-admin period update --rgw-realm=C4U --commit
See: https://docs.ceph.com/docs/master/radosgw/multisite/#update-the-period
It's recommended to update the period after any multisite change.
After that, you can create buckets through Dashboard (tested locally and it works).
The question is: should cephadm update the period as a final step when deploying rgw daemons?
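Putting this together with the deployment commands quoted above, the full sequence with the period commit added as the final step would look like this (a sketch reusing the realm/zonegroup/zone names from this ticket; requires a running cluster):

```shell
radosgw-admin realm create --rgw-realm=C4U --default
radosgw-admin zonegroup create --rgw-zonegroup=CHE --master --default
radosgw-admin zone create --rgw-zonegroup=CHE --rgw-zone=Villa_Hahn --master --default
# The step that was missing: commit the period so the gateways pick up
# the new realm/zonegroup/zone configuration.
radosgw-admin period update --rgw-realm=C4U --commit
ceph orch apply rgw C4U Villa_Hahn
ceph config set client.rgw.C4U.Villa_Hahn rgw_frontends "beast port=8080"
```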
Updated by Sebastian Wagner over 3 years ago
- Project changed from mgr to Orchestrator
- Category deleted (143)
- Status changed from New to Resolved
- Target version set to v15.2.5
- Pull request ID set to 36496
Updated by Jiffin Tony Thottan over 3 years ago
A similar issue is reported in rook upstream: https://github.com/rook/rook/issues/6210. I tried in my dev environment and it was easily reproduced.
radosgw-admin period update --rgw-realm=my-store --commit didn't work for me; the ceph version was v15.2.4.
Debug logs from rgw for create_bucket:
s3:create_bucket init permissions
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 20 get_system_obj_state: rctx=0x7f6b9b68d8c8 obj=my-store.rgw.meta:users.uid:jiffin state=0x55b3d08b7ba0 s->prefetch_data=0
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 10 cache get: name=my-store.rgw.meta+users.uid+jiffin : hit (requested=0x6, cached=0x17)
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 20 get_system_obj_state: s->obj_tag was set empty
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 20 Read xattr: user.rgw.idtag
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 10 cache get: name=my-store.rgw.meta+users.uid+jiffin : hit (requested=0x3, cached=0x17)
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 2 req 69 0s s3:create_bucket recalculating target
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 2 req 69 0s s3:create_bucket reading permissions
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 2 req 69 0s s3:create_bucket init op
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 2 req 69 0s s3:create_bucket verifying op mask
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 20 req 69 0s s3:create_bucket required_mask= 2 user.op_mask=7
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 2 req 69 0s s3:create_bucket verifying op permissions
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 20 WARNING: blocking librados call
debug 2020-09-15T05:54:46.174+0000 7f6bdc916700 2 req 69 0.000999956s s3:create_bucket verifying op params
debug 2020-09-15T05:54:46.174+0000 7f6bdc916700 2 req 69 0.000999956s s3:create_bucket pre-executing
debug 2020-09-15T05:54:46.174+0000 7f6bdc916700 2 req 69 0.000999956s s3:create_bucket executing
debug 2020-09-15T05:54:46.174+0000 7f6bdc916700 5 req 69 0.000999956s s3:create_bucket NOTICE: call to do_aws4_auth_completion
debug 2020-09-15T05:54:46.175+0000 7f6bdc916700 20 req 69 0.001999912s s3:create_bucket create bucket input data=<CreateBucketConfiguration><LocationConstraint>default:default-placement</LocationConstraint></CreateBucketConfiguration>
debug 2020-09-15T05:54:46.175+0000 7f6bdc916700 10 req 69 0.001999912s s3:create_bucket create bucket location constraint: default:default-placement
debug 2020-09-15T05:54:46.175+0000 7f6bdc916700 0 req 69 0.001999912s s3:create_bucket location constraint (default) can't be found.
debug 2020-09-15T05:54:46.175+0000 7f6bdc916700 2 req 69 0.001999912s s3:create_bucket completing
debug 2020-09-15T05:54:46.175+0000 7f6bdc916700 2 req 69 0.001999912s s3:create_bucket op status=-2208
debug 2020-09-15T05:54:46.175+0000 7f6bdc916700 2 req 69 0.001999912s s3:create_bucket http status=400
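For reference, the request body in that log can be picked apart with a few lines of plain Python (illustration only, not dashboard or RGW code):

```python
import xml.etree.ElementTree as ET

# Request body copied from the create_bucket debug log above.
body = ("<CreateBucketConfiguration>"
        "<LocationConstraint>default:default-placement</LocationConstraint>"
        "</CreateBucketConfiguration>")

constraint = ET.fromstring(body).findtext("LocationConstraint")
zonegroup, _, placement = constraint.partition(":")

# RGW rejects the request because no zonegroup answering to "default"
# exists in its current period, hence the log line
# "location constraint (default) can't be found."
print(zonegroup)   # default
print(placement)   # default-placement
```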
Updated by Alfonso Martínez over 3 years ago
- Related to Bug #47676: mgr/dashboard: do not rely on realm_id value when retrieving zone info added