Bug #44926 (closed)

dashboard: creating a new bucket causes InvalidLocationConstraint

Added by Andreas Haase about 4 years ago. Updated over 3 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
-
Target version:
% Done:
0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

In Octopus I created an object gateway as described in https://ceph.io/ceph-management/introducing-cephadm/, chapter "Deploying storage services". After that I followed the instructions in https://docs.ceph.com/docs/master/mgr/dashboard/#enabling-the-object-gateway-management-frontend to create a system user (username: dashboard) and used its access and secret key to configure the dashboard. With that done, I can see the object gateways in the dashboard. Additionally, I created another user (system flag not set, username: rsasse) via the dashboard.

When trying to create a bucket for either user (dashboard or rsasse) with the default placement selected, I get the error message shown in the attached screenshot. One of our developers managed to create a bucket with the API keys of user rsasse directly via the REST API, so I guess the problem is caused by the dashboard.

To be complete: at the moment I'm running version octopus-f8d5631-centos-8-x86_64-devel, as suggested by Sage while debugging another dashboard issue.
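
For reference, such a direct call can be approximated with awscli; endpoint, credentials, and bucket name below are placeholders, and the exact request used (in particular whether a LocationConstraint was passed) is not recorded here:

aws --endpoint-url http://<rgw-host>:8080 s3api create-bucket --bucket test-bucket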


Files

Screenshot of the InvalidLocationConstraint error message (attachment)
Related issues: 4 (0 open, 4 closed)

Related to Dashboard - Feature #44605: cephadm: RGW: missing dashboard integration (Resolved; Sage Weil)

Related to rgw - Bug #38119: rgw can't create bucket, because can't find zonegroup? location constraint (default) can't be found. (Closed)

Related to Dashboard - Bug #47676: mgr/dashboard: do not rely on realm_id value when retrieving zone info (Resolved; Alfonso Martínez)

Has duplicate Dashboard - Bug #46667: mgr/dashboard: Handle buckets without a realm_id (Resolved)

Actions #1

Updated by Andreas Haase about 4 years ago

Commands used to create the object gateway:

radosgw-admin realm create --rgw-realm=C4U --default
radosgw-admin zonegroup create --rgw-zonegroup=CHE --master --default
radosgw-admin zone create --rgw-zonegroup=CHE --rgw-zone=Villa_Hahn --master --default
ceph orch apply rgw C4U Villa_Hahn
ceph config set client.rgw.C4U.Villa_Hahn rgw_frontends "beast port=8080" 

Commands used for creating the system user:

radosgw-admin user create --uid=dashboard --display-name=Dashboard --system
radosgw-admin user info --uid=dashboard # looked up the access key and secret key
ceph dashboard set-rgw-api-access-key <KEY>
ceph dashboard set-rgw-api-secret-key <KEY>
ceph dashboard set-rgw-api-ssl-verify False
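
For reference, the resulting multisite configuration can be inspected afterwards; a quick sketch using the names from the commands above:

radosgw-admin realm list                         # realm names and the default realm id
radosgw-admin zonegroup get --rgw-zonegroup=CHE
radosgw-admin period get                         # the currently committed period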
Actions #2

Updated by Greg Farnum about 4 years ago

  • Project changed from Ceph to mgr
Actions #3

Updated by Sebastian Wagner about 4 years ago

  • Related to Feature #44605: cephadm: RGW: missing dashboard integration added
Actions #4

Updated by Sebastian Wagner about 4 years ago

  • Project changed from mgr to Orchestrator
  • Description updated (diff)
  • Category set to cephadm
Actions #5

Updated by Alfonso Martínez about 4 years ago

Actions #6

Updated by Sebastian Wagner about 4 years ago

  • Related to Bug #38119: rgw can't create bucket, because can't find zonegroup? location constraint (default) can't be found. added
Actions #7

Updated by Alfonso Martínez about 4 years ago

Actions #8

Updated by Apely AGAMAKOU almost 4 years ago

Hi, I've got the same issue:

OS: Debian 10 (buster)
Ceph: Octopus (15.2.1)
Nodes: 3

Actions #9

Updated by Apely AGAMAKOU almost 4 years ago

Apely AGAMAKOU wrote:

Hi, I've got the same issue:

OS: Debian 10 (buster)
Ceph: Octopus (15.2.1)
Nodes: 3

radosgw-admin zonegroup get --rgw-zonegroup=default
{
    "id": "4a71b901-5166-4318-b605-1a4de2183d87",
    "name": "default",
    "api_name": "default",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "db0cfcd6-b7a2-47f3-b4c4-862a1e492a4c",
    "zones": [
        {
            "id": "db0cfcd6-b7a2-47f3-b4c4-862a1e492a4c",
            "name": "eu-west-1",
            "endpoints": [],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": "" 
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": [],
            "storage_classes": [
                "STANDARD" 
            ]
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "1943f5e9-1e85-41d5-a004-7f7209a88d5e",
    "sync_policy": {
        "groups": []
    }
}
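
If it helps with cross-checking, the realm behind the realm_id above and the committed period can be inspected; a sketch:

radosgw-admin realm list   # shows the default realm id, to compare against realm_id
radosgw-admin period get   # the committed period, including the zonegroup map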
Actions #10

Updated by Sebastian Wagner almost 4 years ago

  • Project changed from Orchestrator to mgr
  • Category changed from cephadm to 143
Actions #11

Updated by Sebastian Wagner almost 4 years ago

  • Related to Bug #46667: mgr/dashboard: Handle buckets without a realm_id added
Actions #12

Updated by Stephan Müller almost 4 years ago

  • Related to deleted (Bug #46667: mgr/dashboard: Handle buckets without a realm_id)
Actions #13

Updated by Stephan Müller almost 4 years ago

  • Has duplicate Bug #46667: mgr/dashboard: Handle buckets without a realm_id added
Actions #14

Updated by Andy Gold almost 4 years ago

Hi, I've got the same issue:

OS: CentOS 8.2.2004
Ceph: Octopus (15.2.4)
Nodes: 3

Actions #15

Updated by Alfonso Martínez over 3 years ago

Andreas Haase wrote:

Commands used to create the object gateway:

[...]

Commands used for creating the systemuser:

[...]

After creating realms & users, you should update the realm period:

radosgw-admin period update --rgw-realm=C4U --commit

See: https://docs.ceph.com/docs/master/radosgw/multisite/#update-the-period
It's recommended to update the period after any multisite change.

After that, you can create buckets through the Dashboard (tested locally; it works).

The question is: should cephadm update the period as a final step when deploying rgw daemons?
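
Putting it together with the commands from comment #1, the deployment sequence would become (a sketch; only the period commit line is new):

radosgw-admin realm create --rgw-realm=C4U --default
radosgw-admin zonegroup create --rgw-zonegroup=CHE --master --default
radosgw-admin zone create --rgw-zonegroup=CHE --rgw-zone=Villa_Hahn --master --default
radosgw-admin period update --rgw-realm=C4U --commit
ceph orch apply rgw C4U Villa_Hahn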

Actions #16

Updated by Sebastian Wagner over 3 years ago

  • Project changed from mgr to Orchestrator
  • Category deleted (143)
  • Status changed from New to Resolved
  • Target version set to v15.2.5
  • Pull request ID set to 36496
Actions #17

Updated by Jiffin Tony Thottan over 3 years ago

A similar issue is reported in Rook upstream: https://github.com/rook/rook/issues/6210. I tried in my dev environment and it was easily reproduced.
radosgw-admin period update --rgw-realm=my-store --commit didn't work for me; the Ceph version was v15.2.4.

Debug logs from RGW for create_bucket:

s3:create_bucket init permissions
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 20 get_system_obj_state: rctx=0x7f6b9b68d8c8 obj=my-store.rgw.meta:users.uid:jiffin state=0x55b3d08b7ba0 s->prefetch_data=0
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 10 cache get: name=my-store.rgw.meta+users.uid+jiffin : hit (requested=0x6, cached=0x17)
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 20 get_system_obj_state: s->obj_tag was set empty
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 20 Read xattr: user.rgw.idtag
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 10 cache get: name=my-store.rgw.meta+users.uid+jiffin : hit (requested=0x3, cached=0x17)
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700  2 req 69 0s s3:create_bucket recalculating target
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700  2 req 69 0s s3:create_bucket reading permissions
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700  2 req 69 0s s3:create_bucket init op
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700  2 req 69 0s s3:create_bucket verifying op mask
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 20 req 69 0s s3:create_bucket required_mask= 2 user.op_mask=7
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700  2 req 69 0s s3:create_bucket verifying op permissions
debug 2020-09-15T05:54:46.173+0000 7f6bdc916700 20 WARNING: blocking librados call
debug 2020-09-15T05:54:46.174+0000 7f6bdc916700  2 req 69 0.000999956s s3:create_bucket verifying op params
debug 2020-09-15T05:54:46.174+0000 7f6bdc916700  2 req 69 0.000999956s s3:create_bucket pre-executing
debug 2020-09-15T05:54:46.174+0000 7f6bdc916700  2 req 69 0.000999956s s3:create_bucket executing
debug 2020-09-15T05:54:46.174+0000 7f6bdc916700  5 req 69 0.000999956s s3:create_bucket NOTICE: call to do_aws4_auth_completion
debug 2020-09-15T05:54:46.175+0000 7f6bdc916700 20 req 69 0.001999912s s3:create_bucket create bucket input data=<CreateBucketConfiguration><LocationConstraint>default:default-placement</LocationConstraint></CreateBucketConfiguration>
debug 2020-09-15T05:54:46.175+0000 7f6bdc916700 10 req 69 0.001999912s s3:create_bucket create bucket location constraint: default:default-placement
debug 2020-09-15T05:54:46.175+0000 7f6bdc916700  0 req 69 0.001999912s s3:create_bucket location constraint (default) can't be found.
debug 2020-09-15T05:54:46.175+0000 7f6bdc916700  2 req 69 0.001999912s s3:create_bucket completing
debug 2020-09-15T05:54:46.175+0000 7f6bdc916700  2 req 69 0.001999912s s3:create_bucket op status=-2208
debug 2020-09-15T05:54:46.175+0000 7f6bdc916700  2 req 69 0.001999912s s3:create_bucket http status=400
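
For what it's worth, the failing request can be reproduced outside the dashboard with awscli (endpoint and credentials are placeholders); the LocationConstraint matches the one in the log above and should fail the same way on an affected setup:

aws --endpoint-url http://<rgw-host>:8080 s3api create-bucket \
    --bucket test-bucket \
    --create-bucket-configuration LocationConstraint=default:default-placement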

Actions #18

Updated by Alfonso Martínez over 3 years ago

  • Related to Bug #47676: mgr/dashboard: do not rely on realm_id value when retrieving zone info added