Bug #24551 (closed): RGW Dynamic bucket index resharding keeps resharding all buckets

Added by Sander van Schie almost 6 years ago. Updated over 5 years ago.

Status: Resolved
Priority: Urgent
Target version: -
% Done: 0%
Regression: No
Severity: 3 - minor

Description

We're running into some problems with dynamic bucket index resharding. After an upgrade from Ceph 12.2.2 to 12.2.5, which fixed an issue with resharding when using tenants (which we do), the cluster was busy resharding for 2 days straight, resharding the same buckets over and over again.
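
For reference, dynamic resharding can be switched off while investigating; a minimal sketch with a placeholder RGW instance name, using the option name that also shows up later in this thread:

# in ceph.conf on the RGW nodes, followed by a radosgw restart
[client.rgw.<instance>]
rgw_dynamic_resharding = false

# or without a restart, via the admin socket (socket path varies per deployment)
$ sudo ceph daemon /var/run/ceph/ceph-client.rgw.<instance>.asok config set rgw_dynamic_resharding false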

After disabling it and re-enabling it a while later, it resharded all buckets again and then kept quiet for a bit. Later on it started resharding buckets over and over again, even buckets which didn't have any data added in the meantime. In the reshard list it always says 'old_num_shards: 1' for every bucket, even though I can confirm with 'bucket stats' that the desired number of shards is already present. It looks like the background process which scans buckets doesn't properly recognize the number of shards a bucket currently has. When I manually add a reshard job, it does properly recognize the current number of shards.
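
To illustrate the mismatch (hypothetical bucket name and made-up shard counts, but the pattern on our buckets looks the same), the queue entry and the bucket's actual state disagree:

$ sudo radosgw-admin reshard list
...
        "bucket_name": "somebucket",
        "old_num_shards": 1,
        "new_num_shards": 32
...

$ sudo radosgw-admin bucket limit check
...
                "bucket": "somebucket",
                "num_shards": 32,
...

So the scanner keeps queueing the bucket as if it still had a single shard.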

While Ceph was resharding buckets over and over again, the maximum available storage as reported by 'ceph df' also decreased by about 20%, while usage stayed the same; we have yet to find out where the missing storage went. The decrease stopped once we disabled resharding.
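
One thing we still need to check is whether the repeated reshards left old bucket index objects behind in the index pool; roughly along these lines (pool name is the default one, substitute your own; the instance id placeholder is hypothetical):

$ sudo ceph df detail
$ sudo radosgw-admin metadata list bucket.instance
$ sudo rados -p default.rgw.buckets.index ls | grep '<old bucket instance id>'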

On a side note, we had two buckets in the reshard list which had been removed a long while ago. We were unable to cancel the reshard jobs for those buckets. After recreating the users and buckets we were able to remove them from the list, so they are no longer present. Probably not relevant, but you never know.
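
For completeness, the cancel we tried was along these lines (it only worked after recreating the user and bucket):

$ sudo radosgw-admin reshard cancel --bucket=<bucket> --tenant=<tenant>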


Files

Ceph High IO.png (190 KB) - High IO - Aleksandr Rudenko, 07/18/2018 08:31 AM

Related issues: 1 (0 open, 1 closed)

Related to rgw - Bug #27219: lock in resharding may expires before the dynamic resharding completes (Resolved, J. Eric Ivancich, 08/24/2018)

Actions #1

Updated by Greg Farnum almost 6 years ago

  • Project changed from Ceph to rgw
Actions #2

Updated by Yehuda Sadeh almost 6 years ago

  • Priority changed from Normal to High
Actions #3

Updated by sean redmond almost 6 years ago

I also seem to have a case on 12.2.5 where buckets are in an endless attempt to reshard. If there is any useful data I can provide to help track this down, just let me know.

Actions #4

Updated by Orit Wasserman almost 6 years ago

  • Assignee set to Orit Wasserman
Actions #5

Updated by Yehuda Sadeh almost 6 years ago

  • Status changed from New to Triaged
  • Assignee deleted (Orit Wasserman)
Actions #6

Updated by Orit Wasserman almost 6 years ago

  • Assignee set to Orit Wasserman
Actions #7

Updated by Beom-Seok Park almost 6 years ago

This problem occurs with version-enabled buckets.

Test env: Ceph 12.2.5 with this patch applied: https://github.com/ceph/ceph/pull/21669
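
For reference, the version-enabled bucket (bucket2 below) was presumably set up by creating it and then enabling versioning before the puts; one way to do that against an RGW endpoint (hypothetical endpoint URL):

$ aws --endpoint-url http://rgw.example.com:8080 s3api put-bucket-versioning \
      --bucket bucket2 --versioning-configuration Status=Enabled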

non-versioned bucket

bucket name: bucket1
put 200k objects


$ sudo radosgw-admin bucket limit check
[
    {
        "user_id": "9f67fb8460ab483c9aa7f130d76ef81b",
        "buckets": [
            {
                "bucket": "bucket1",
                "tenant": "",
                "num_objects": 200000,
                "num_shards": 4,
                "objects_per_shard": 50000,
                "fill_status": "OK" 
            }
        ]
    }
]

$ sudo radosgw-admin bucket stats --bucket=bucket1
{
    "bucket": "bucket1",
    "zonegroup": "27314b14-86d8-459f-b88f-8ff63e421a82",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": "" 
    },
    "id": "a21e2c9e-7ae0-4175-b611-8a5781f13301.390908.1",
    "marker": "a21e2c9e-7ae0-4175-b611-8a5781f13301.390434.1",
    "index_type": "Normal",
    "owner": "9f67fb8460ab483c9aa7f130d76ef81b",
    "ver": "0#783,1#782,2#783,3#783",
    "master_ver": "0#0,1#0,2#0,3#0",
    "mtime": "2018-07-02 17:24:48.471156",
    "max_marker": "0#,1#,2#,3#",
    "usage": {
        "rgw.main": {
            "size": 3800000,
            "size_actual": 819200000,
            "size_utilized": 0,
            "size_kb": 3711,
            "size_kb_actual": 800000,
            "size_kb_utilized": 0,
            "num_objects": 200000
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}

version-enabled bucket

bucket name: bucket2
put 200k objects


$ sudo radosgw-admin bucket limit check
[
    {
        "user_id": "9f67fb8460ab483c9aa7f130d76ef81b",
        "buckets": [
            {
                "bucket": "bucket2",
                "tenant": "",
                "num_objects": 586076,
                "num_shards": 3,
                "objects_per_shard": 195358,
                "fill_status": "OVER 100.000000%" 
            }
        ]
    }
]

$ sudo radosgw-admin bucket stats --bucket=bucket2
{
    "bucket": "bucket2",
    "zonegroup": "27314b14-86d8-459f-b88f-8ff63e421a82",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": "" 
    },
    "id": "a21e2c9e-7ae0-4175-b611-8a5781f13301.426059.2",
    "marker": "a21e2c9e-7ae0-4175-b611-8a5781f13301.426059.1",
    "index_type": "Normal",
    "owner": "9f67fb8460ab483c9aa7f130d76ef81b",
    "ver": "0#11077,1#11121,2#11041",
    "master_ver": "0#0,1#0,2#0",
    "mtime": "2018-07-02 19:08:56.641405",
    "max_marker": "0#,1#,2#",
    "usage": {
        "rgw.none": {
            "size": 0,
            "size_actual": 0,
            "size_utilized": 0,
            "size_kb": 0,
            "size_kb_actual": 0,
            "size_kb_utilized": 0,
            "num_objects": 192945
        },
        "rgw.main": {
            "size": 7469489,
            "size_actual": 1610264576,
            "size_utilized": 134064,
            "size_kb": 7295,
            "size_kb_actual": 1572524,
            "size_kb_utilized": 131,
            "num_objects": 393131
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}

$ sudo radosgw-admin reshard status --bucket=bucket2
[
    {
        "reshard_status": 0,
        "new_bucket_instance_id": "",
        "num_shards": -1
    },
    {
        "reshard_status": 0,
        "new_bucket_instance_id": "",
        "num_shards": -1
    },
    {
        "reshard_status": 0,
        "new_bucket_instance_id": "",
        "num_shards": -1
    }
]

put 10 objects

$ sudo radosgw-admin bucket limit check
[
    {
        "user_id": "9f67fb8460ab483c9aa7f130d76ef81b",
        "buckets": [
            {
                "bucket": "bucket2",
                "tenant": "",
                "num_objects": 586086,
                "num_shards": 3,
                "objects_per_shard": 195362,
                "fill_status": "OVER 100.000000%" 
            }
        ]
    }
]

$ sudo radosgw-admin reshard list
[
    {
        "time": "2018-07-02 10:30:33.613614Z",
        "tenant": "",
        "bucket_name": "bucket2",
        "bucket_id": "a21e2c9e-7ae0-4175-b611-8a5781f13301.426059.2",
        "new_instance_id": "",
        "old_num_shards": 3,
        "new_num_shards": 11
    }
]

$ sudo radosgw-admin reshard process

$ sudo radosgw-admin bucket stats --bucket=bucket2
{
    "bucket": "bucket2",
    "zonegroup": "27314b14-86d8-459f-b88f-8ff63e421a82",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": "" 
    },
    "id": "a21e2c9e-7ae0-4175-b611-8a5781f13301.437290.1",
    "marker": "a21e2c9e-7ae0-4175-b611-8a5781f13301.426059.1",
    "index_type": "Normal",
    "owner": "9f67fb8460ab483c9aa7f130d76ef81b",
    "ver": "0#1142,1#1141,2#1138,3#1143,4#1140,5#1144,6#1138,7#1137,8#1136,9#1134,10#1130",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0",
    "mtime": "2018-07-02 19:33:29.865466",
    "max_marker": "0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#",
    "usage": {
        "rgw.none": {
            "size": 0,
            "size_actual": 0,
            "size_utilized": 0,
            "size_kb": 0,
            "size_kb_actual": 0,
            "size_kb_utilized": 0,
            "num_objects": 200017
        },
        "rgw.main": {
            "size": 7604066,
            "size_actual": 1639276544,
            "size_utilized": 0,
            "size_kb": 7426,
            "size_kb_actual": 1600856,
            "size_kb_utilized": 0,
            "num_objects": 400214
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}

$ sudo radosgw-admin bucket limit check
[
    {
        "user_id": "9f67fb8460ab483c9aa7f130d76ef81b",
        "buckets": [
            {
                "bucket": "bucket2",
                "tenant": "",
                "num_objects": 600231,
                "num_shards": 11,
                "objects_per_shard": 54566,
                "fill_status": "OK" 
            }
        ]
    }
]
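
One way to see where the inflated object count on the version-enabled bucket comes from is to dump the raw index entries; a sketch (a versioned bucket keeps olh and per-version instance entries alongside the plain entries, and 'bucket limit check' appears to count all of them, matching the rgw.none + rgw.main totals above):

$ sudo radosgw-admin bi list --bucket=bucket2 | head -60
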
Actions #8

Updated by Aleksandr Rudenko almost 6 years ago

I think I have the same problem: http://tracker.ceph.com/issues/24937

But some of the affected buckets do not have versioning enabled.

radosgw-admin reshard list
...
   {
        "time": "2018-07-17 11:08:20.336354Z",
        "tenant": "",
        "bucket_name": "bucket-name",
        "bucket_id": "default.32785769.2",
        "new_instance_id": "",
        "old_num_shards": 1,
        "new_num_shards": 161
    },
...
radosgw-admin bucket limit check
... 
           {
                "bucket": "bucket-name",
                "tenant": "",
                "num_objects": 20840702,
                "num_shards": 161,
                "objects_per_shard": 129445,
                "fill_status": "OVER 100.000000%" 
            },
...
Actions #9

Updated by Aleksandr Rudenko almost 6 years ago

And I have been seeing very high Ceph IO during the last few days.

Actions #10

Updated by Casey Bodley over 5 years ago

  • Assignee changed from Orit Wasserman to J. Eric Ivancich
  • Priority changed from High to Urgent
Actions #11

Updated by J. Eric Ivancich over 5 years ago

PR https://github.com/ceph/ceph/pull/24406 addresses issues that were described similarly.

Actions #12

Updated by J. Eric Ivancich over 5 years ago

This PR is a luminous backport of a fix for a downstream issue that seems to be related. I hope it will be in the next luminous release. Are you able to try your cluster with this code?

https://github.com/ceph/ceph/pull/24898

Actions #13

Updated by J. Eric Ivancich over 5 years ago

  • Status changed from Triaged to Pending Backport

The code in the above-listed PR (https://github.com/ceph/ceph/pull/24898) will allow resharding to complete if it's taking too long. It does this by periodically renewing the reshard lock. Previously the reshard lock could be lost and another reshard job started, thereby creating the problem described.
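
One way to verify this on a patched cluster is to reshard a large bucket manually and confirm the job runs to completion instead of being re-queued; a sketch with a hypothetical bucket name and shard count (on unpatched luminous the lock duration is governed by rgw_reshard_bucket_lock_duration, if I have the option name right):

$ radosgw-admin reshard add --bucket=<bucket> --num-shards=32
$ radosgw-admin reshard process
$ radosgw-admin reshard status --bucket=<bucket>   # should drop back to 0 / not-resharding when finished
$ radosgw-admin reshard list                        # the entry should be gone, not re-queued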

Actions #14

Updated by Nathan Cutler over 5 years ago

  • Status changed from Pending Backport to Resolved

Backports are going via #27219

Actions #15

Updated by Nathan Cutler over 5 years ago

  • Related to Bug #27219: lock in resharding may expires before the dynamic resharding completes added
Actions #16

Updated by Beom-Seok Park over 5 years ago

luminous commit 8157642b94a60dbfc3c88529a543a094d45d2b5e + https://github.com/ceph/ceph/pull/24898
rgw dynamic resharding = true
CentOS 7.5

NON-VERSIONED BUCKET

put 200k objects

# radosgw-admin bucket limit check
[
    {
        "user_id": "9f67fb8460ab483c9aa7f130d76ef81b",
        "buckets": [
            {
                "bucket": "testbucket1",
                "tenant": "",
                "num_objects": 200001,
                "num_shards": 2,
                "objects_per_shard": 100000,
                "fill_status": "OK" 
            }
        ]
    }
]

# radosgw-admin bucket stats --bucket=testbucket1
{
    "bucket": "testbucket1",
    "zonegroup": "7a9f2786-47b2-40eb-bd47-0a8ec72dc4ff",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": "" 
    },
    "id": "414ba8b0-b151-4c0c-8fc1-dd0979035249.4391.2",
    "marker": "414ba8b0-b151-4c0c-8fc1-dd0979035249.4391.1",
    "index_type": "Normal",
    "owner": "9f67fb8460ab483c9aa7f130d76ef81b",
    "ver": "0#88940,1#88921",
    "master_ver": "0#0,1#0",
    "mtime": "2018-11-07 16:25:59.458044",
    "max_marker": "0#,1#",
    "usage": {
        "rgw.none": {
            "size": 0,
            "size_actual": 0,
            "size_utilized": 0,
            "size_kb": 0,
            "size_kb_actual": 0,
            "size_kb_utilized": 0,
            "num_objects": 12
        },
        "rgw.main": {
            "size": 3799791,
            "size_actual": 819154944,
            "size_utilized": 3799791,
            "size_kb": 3711,
            "size_kb_actual": 799956,
            "size_kb_utilized": 3711,
            "num_objects": 199989
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}

VERSION-ENABLED BUCKET

put 200k objects

# radosgw-admin bucket limit check
[
    {
        "user_id": "9f67fb8460ab483c9aa7f130d76ef81b",
        "buckets": [
            {
                "bucket": "testbucket2",
                "tenant": "",
                "num_objects": 561662,
                "num_shards": 3,
                "objects_per_shard": 187220,
                "fill_status": "OVER 100.000000%" 
            }
        ]
    }
]

# radosgw-admin bucket stats --bucket=testbucket2
{
    "bucket": "testbucket2",
    "zonegroup": "7a9f2786-47b2-40eb-bd47-0a8ec72dc4ff",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": "" 
    },
    "id": "414ba8b0-b151-4c0c-8fc1-dd0979035249.4391.4",
    "marker": "414ba8b0-b151-4c0c-8fc1-dd0979035249.4391.3",
    "index_type": "Normal",
    "owner": "9f67fb8460ab483c9aa7f130d76ef81b",
    "ver": "0#22964,1#23309,2#22808",
    "master_ver": "0#0,1#0,2#0",
    "mtime": "2018-11-07 16:55:59.790519",
    "max_marker": "0#,1#,2#",
    "usage": {
        "rgw.none": {
            "size": 0,
            "size_actual": 0,
            "size_utilized": 0,
            "size_kb": 0,
            "size_kb_actual": 0,
            "size_kb_utilized": 0,
            "num_objects": 180743
        },
        "rgw.main": {
            "size": 7237461,
            "size_actual": 1560244224,
            "size_utilized": 7237461,
            "size_kb": 7068,
            "size_kb_actual": 1523676,
            "size_kb_utilized": 7068,
            "num_objects": 380919
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}

put 10 objects

# radosgw-admin reshard list
[
    {
        "time": "2018-11-07 08:10:55.962720Z",
        "tenant": "",
        "bucket_name": "testbucket2",
        "bucket_id": "414ba8b0-b151-4c0c-8fc1-dd0979035249.4391.4",
        "new_instance_id": "",
        "old_num_shards": 3,
        "new_num_shards": 11
    }
]

# radosgw-admin reshard process

# radosgw-admin bucket limit check
[
    {
        "user_id": "9f67fb8460ab483c9aa7f130d76ef81b",
        "buckets": [
            {
                "bucket": "testbucket2",
                "tenant": "",
                "num_objects": 600220,
                "num_shards": 11,
                "objects_per_shard": 54565,
                "fill_status": "OK" 
            }
        ]
    }
]

# radosgw-admin bucket stats --bucket=testbucket2
{
    "bucket": "testbucket2",
    "zonegroup": "7a9f2786-47b2-40eb-bd47-0a8ec72dc4ff",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": "" 
    },
    "id": "414ba8b0-b151-4c0c-8fc1-dd0979035249.4745.1",
    "marker": "414ba8b0-b151-4c0c-8fc1-dd0979035249.4391.3",
    "index_type": "Normal",
    "owner": "9f67fb8460ab483c9aa7f130d76ef81b",
    "ver": "0#1142,1#1141,2#1138,3#1143,4#1140,5#1144,6#1138,7#1137,8#1136,9#1134,10#1131",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0",
    "mtime": "2018-11-07 17:13:35.762290",
    "max_marker": "0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#",
    "usage": {
        "rgw.none": {
            "size": 0,
            "size_actual": 0,
            "size_utilized": 0,
            "size_kb": 0,
            "size_kb_actual": 0,
            "size_kb_utilized": 0,
            "num_objects": 200005
        },
        "rgw.main": {
            "size": 7604085,
            "size_actual": 1639280640,
            "size_utilized": 7604085,
            "size_kb": 7426,
            "size_kb_actual": 1600860,
            "size_kb_utilized": 7426,
            "num_objects": 400215
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}

Actions #17

Updated by J. Eric Ivancich over 5 years ago

@Beom-Seok Park

From all the output you displayed, it wasn't clear to me what issue you were reporting. The main part of this tracker issue is that rgw keeps resharding the same bucket. Are you seeing that issue or another one?

Also, please highlight what, in the output you're sharing, you believe to be anomalous. Thank you!

