Bug #43172 (open)

Radosgw (rgw_build_bucket_policies) fails on old buckets

Added by Ingo Reimann over 4 years ago. Updated almost 3 years ago.

Status: Triaged
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source:
Tags: placement
Backport:
Regression: No
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite: rgw
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi,

We have an old cluster with pre-jewel buckets whose indices live in the data pool. The nautilus radosgw-admin commands handle all of these buckets fine, but radosgw fails on access. Buckets that have their indices in rgw.data.index are fine.

I tracked it down to the function rgw_build_bucket_policies in rgw/rgw_op.cc, where the placement rules are checked at the end:

    /* init dest placement -- only if bucket exists, otherwise request is either not relevant, or
     * it's a create_bucket request, in which case the op will deal with the placement later */
    if (s->bucket_exists) {
      s->dest_placement.storage_class = s->info.storage_class;
      s->dest_placement.inherit_from(s->bucket_info.placement_rule);

      if (!store->svc.zone->get_zone_params().valid_placement(s->dest_placement)) {
        ldpp_dout(s, 0) << "NOTICE: invalid dest placement: " << s->dest_placement.to_str() << dendl;
        return -EINVAL;
      }
    }

In the radosgw logs I find lines like "req 5 0.000s NOTICE: invalid dest placement:" — s->dest_placement.to_str() is empty.
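That empty string is consistent with how the placement rule is inherited: pre-jewel buckets carry an empty placement_rule alongside their explicit pool placement, so dest_placement never receives a rule name. From memory, the nautilus-era inherit_from looks approximately like this (a sketch, not verbatim source):

    // rgw_placement_rule::inherit_from, approximately as in nautilus
    // (sketch from memory): fields are only filled in when still empty,
    // so a bucket_info.placement_rule of "" leaves dest_placement empty,
    // and valid_placement() then rejects the empty rule.
    void rgw_placement_rule::inherit_from(const rgw_placement_rule& r) {
      if (name.empty()) {
        name = r.name;                   // stays "" for pre-jewel buckets
      }
      if (storage_class.empty()) {
        storage_class = r.storage_class;
      }
    }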

I tried adding a new placement rule, "pre-jewel", that reflects our old buckets, and even set it as the default for one user. That user can create and work with new buckets, but his old buckets are still inaccessible.

As we have lots of buckets in use, a solution would be nice so that we can finish our nautilus upgrade.

There was once a similar issue with luminous, which Yehuda discussed and solved here: https://tracker.ceph.com/issues/22928

kind regards,
Ingo

Actions #1

Updated by Casey Bodley over 4 years ago

could you please share the bucket instance metadata of one of these buckets?

radosgw-admin metadata get --metadata-key=bucket.instance:<bucket name and instance id>

Actions #2

Updated by Casey Bodley over 4 years ago

  • Status changed from New to Need More Info
Actions #3

Updated by Ingo Reimann over 4 years ago

Hi Casey,

here you are:

{
    "key": "bucket.instance:dcs:default.7518007.3",
    "ver": {
        "tag": "_rmiP5GK0UfT1b4mLcLE-Fu1",
        "ver": 1
    },
    "mtime": "2014-08-09 14:29:41.000000Z",
    "data": {
        "bucket_info": {
            "bucket": {
                "name": "dcs",
                "marker": "default.7518007.3",
                "bucket_id": "default.7518007.3",
                "tenant": "",
                "explicit_placement": {
                    "data_pool": "rgw.buckets",
                    "data_extra_pool": "",
                    "index_pool": "rgw.buckets" 
                }
            },
            "creation_time": "2014-08-09 14:29:41.000000Z",
            "owner": "Dunkel Archiv",
            "flags": 0,
            "zonegroup": "default",
            "placement_rule": "",
            "has_instance_obj": "true",
            "quota": {
                "enabled": false,
                "check_on_raw": false,
                "max_size": -1024,
                "max_size_kb": 0,
                "max_objects": -1
            },
            "num_shards": 0,
            "bi_shard_hash_type": 0,
            "requester_pays": "false",
            "has_website": "false",
            "swift_versioning": "false",
            "swift_ver_location": "",
            "index_type": 0,
            "mdsearch_config": [],
            "reshard_status": 0,
            "new_bucket_instance_id": "" 
        },
        "attrs": [
            {
                "key": "user.rgw.acl",
                "val": "AgK3AAAAAgIpAAAADQAAAER1bmtlbCBBcmNoaXYUAAAAQXJjaGl2IEFibGFnZSBEdW5rZWwDA4IAAAABAQAAAA0AAABEdW5rZWwgQXJjaGl2DwAAAAEAAAANAAAARHVua2VsIEFyY2hpdgMDSQAAAAICBAAAAAAAAAANAAAARHVua2VsIEFyY2hpdgAAAAAAAAAAAgIEAAAADwAAABQAAABBcmNoaXYgQWJsYWdlIER1bmtlbAAAAAAAAAAA" 
            }
        ]
    }
}


NB: In the meantime I patched radosgw-admin so that the placement_rule can be changed with "radosgw-admin metadata put". That fixed my issue, but it is not possible out of the box.
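For reference, the workaround flow with such a patched binary looks roughly like this (a sketch; stock nautilus radosgw-admin does not apply the placement_rule change, which is exactly what the patch enables):

    # dump the bucket instance metadata
    radosgw-admin metadata get --metadata-key=bucket.instance:dcs:default.7518007.3 > bucket.json

    # edit bucket.json: set "placement_rule" to a rule matching the explicit
    # pools, e.g. the "pre-jewel" rule described above

    # write the metadata back (requires the radosgw-admin patch mentioned above)
    radosgw-admin metadata put bucket.instance:dcs:default.7518007.3 < bucket.json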

Actions #4

Updated by Casey Bodley over 4 years ago

  • Status changed from Need More Info to New
  • Tags set to placement
Actions #5

Updated by Casey Bodley over 4 years ago

  • Status changed from New to Triaged
Actions #6

Updated by Casey Bodley about 4 years ago

if (!store->svc.zone->get_zone_params().valid_placement(s->dest_placement)) {

It looks like this needs to take the bucket's explicit_placement into account.
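One possible shape for that, sketched against the nautilus code quoted in the description (my sketch only, not a committed patch; explicit_placement and its data_pool field are the ones visible in the bucket instance metadata above):

    /* sketch: skip placement-rule validation for buckets that pin their
     * pools via pre-jewel explicit placement, since those carry an empty
     * placement_rule that can never match a zone placement target */
    if (s->bucket_exists) {
      s->dest_placement.storage_class = s->info.storage_class;
      s->dest_placement.inherit_from(s->bucket_info.placement_rule);

      const bool explicit_placement =
        !s->bucket_info.bucket.explicit_placement.data_pool.empty();

      if (!explicit_placement &&
          !store->svc.zone->get_zone_params().valid_placement(s->dest_placement)) {
        ldpp_dout(s, 0) << "NOTICE: invalid dest placement: "
                        << s->dest_placement.to_str() << dendl;
        return -EINVAL;
      }
    }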

Actions #7

Updated by Marcin Gibula almost 3 years ago

I've hit the same issue while upgrading a (very old) cluster from luminous to nautilus. I'm currently forced to keep the old radosgw binaries to keep production running. Was there any progress on this bug?
