Bug #47676

mgr/dashboard: do not rely on realm_id value when retrieving zone info

Added by Alfonso Martínez over 3 years ago. Updated about 3 years ago.

Status: Resolved
Priority: Normal
Category: Component - RGW
% Done: 0%
Backport: octopus
Regression: No
Severity: 3 - minor

Description

Deployed a Ceph cluster with rook:

  • minikube version: v1.13.1 (Kubernetes server: v1.19.2)
  • rook: v1.4.4
  • ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)

After creating a ceph object store with:
https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/object-test.yaml

The zonegroup has its realm_id correctly set, but the zone (which is the master zone of that zonegroup) has an empty realm_id:

[root@rook-ceph-tools-649c4dd574-hpb22 /]# radosgw-admin zonegroup get
{
    "id": "a8884b6c-4973-4dfe-98bf-6e38cb0442a8",
    "name": "my-store",
    "api_name": "my-store",
    "is_master": "true",
    "endpoints": [
        "http://10.109.211.89:80" 
    ],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "afc17554-7d24-42e7-b70c-1674e58a0d60",
    "zones": [
        {
            "id": "afc17554-7d24-42e7-b70c-1674e58a0d60",
            "name": "my-store",
            "endpoints": [
                "http://10.109.211.89:80" 
            ],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": "" 
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": [],
            "storage_classes": [
                "STANDARD" 
            ]
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "fd05302a-e699-48c9-8dc7-bdb015ab451c",
    "sync_policy": {
        "groups": []
    }
}

[root@rook-ceph-tools-649c4dd574-hpb22 /]# radosgw-admin zone get
{
    "id": "afc17554-7d24-42e7-b70c-1674e58a0d60",
    "name": "my-store",
    "domain_root": "my-store.rgw.meta:root",
    "control_pool": "my-store.rgw.control",
    "gc_pool": "my-store.rgw.log:gc",
    "lc_pool": "my-store.rgw.log:lc",
    "log_pool": "my-store.rgw.log",
    "intent_log_pool": "my-store.rgw.log:intent",
    "usage_log_pool": "my-store.rgw.log:usage",
    "roles_pool": "my-store.rgw.meta:roles",
    "reshard_pool": "my-store.rgw.log:reshard",
    "user_keys_pool": "my-store.rgw.meta:users.keys",
    "user_email_pool": "my-store.rgw.meta:users.email",
    "user_swift_pool": "my-store.rgw.meta:users.swift",
    "user_uid_pool": "my-store.rgw.meta:users.uid",
    "otp_pool": "my-store.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": "" 
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "my-store.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "my-store.rgw.buckets.data" 
                    }
                },
                "data_extra_pool": "my-store.rgw.buckets.non-ec",
                "index_type": 0
            }
        }
    ],
    "realm_id": "" 
}
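
The inconsistency between the two outputs above can be checked programmatically. A minimal sketch (the function name and the trimmed-down JSON samples are illustrative, not part of the dashboard code; in practice the inputs would be the full outputs of `radosgw-admin zonegroup get` and `radosgw-admin zone get`):

```python
import json

def find_realm_mismatch(zonegroup_json: str, zone_json: str):
    """Return (zonegroup_realm, zone_realm) when the zone's realm_id
    does not match its zonegroup's realm_id, else None."""
    zonegroup = json.loads(zonegroup_json)
    zone = json.loads(zone_json)
    zg_realm = zonegroup.get("realm_id", "")
    z_realm = zone.get("realm_id", "")
    if zg_realm != z_realm:
        return (zg_realm, z_realm)
    return None

# Trimmed-down versions of the outputs shown above:
zonegroup = '{"id": "a8884b6c", "realm_id": "fd05302a-e699-48c9-8dc7-bdb015ab451c"}'
zone = '{"id": "afc17554", "realm_id": ""}'
print(find_realm_mismatch(zonegroup, zone))
# → ('fd05302a-e699-48c9-8dc7-bdb015ab451c', '')
```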

Therefore, the dashboard code should no longer rely on this field, as doing so currently prevents bucket creation.

It should be investigated whether this is a bug in RGW or in the rook deployment workflow. Related to:
https://github.com/rook/rook/issues/6210
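
One way the dashboard could stop depending on the zone's realm_id is to fall back to the zonegroup's value when the zone reports an empty string. This is a hypothetical sketch of that fallback, not the actual dashboard fix; `effective_realm_id` and the dicts are assumptions for illustration:

```python
def effective_realm_id(zone: dict, zonegroup: dict) -> str:
    """Prefer the zone's realm_id, but fall back to the zonegroup's
    when the zone reports an empty string (as in the output above)."""
    return zone.get("realm_id") or zonegroup.get("realm_id", "")

zone = {"id": "afc17554", "realm_id": ""}
zonegroup = {"id": "a8884b6c", "realm_id": "fd05302a-e699-48c9-8dc7-bdb015ab451c"}
print(effective_realm_id(zone, zonegroup))
# → fd05302a-e699-48c9-8dc7-bdb015ab451c
```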


Related issues (2 total: 0 open, 2 closed)

Related to Orchestrator - Bug #44926: dashboard: creating a new bucket causes InvalidLocationConstraint (Resolved)

Copied to Dashboard - Backport #47811: octopus: mgr/dashboard: do not rely on realm_id value when retrieving zone info (Resolved, Alfonso Martínez)
