Bug #52964


garbage collection doesn't remove gc list entries if the object's pool doesn't exist

Added by Semyon Poklad over 2 years ago. Updated over 2 years ago.

Status: Triaged
Priority: Normal
Target version: -
% Done: 0%
Source:
Tags: gc
Backport: octopus, pacific
Regression: No
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite: rgw
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

How it is reproduced:

Added a file to the bucket via S3 Browser.
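For reference, the same upload step can be reproduced from the command line. A minimal sketch, assuming the AWS CLI is configured with credentials for this RGW; the endpoint URL is a placeholder:

# Upload the ISO to the support-files bucket (endpoint URL is a placeholder).
# For a file this large the AWS CLI performs a multipart upload automatically,
# which is what creates the __multipart_/__shadow_ rados objects seen below.
aws --endpoint-url http://rgw.example.local:7480 \
    s3 cp ./debian-11.0.0-amd64-netinst.iso s3://support-files/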


radosgw-admin --bucket=support-files bucket radoslist | wc -l
96

ceph df

--- RAW STORAGE ---
CLASS    SIZE   AVAIL     USED  RAW USED  %RAW USED
hdd    44 TiB  44 TiB  4.7 GiB   4.7 GiB       0.01
TOTAL  44 TiB  44 TiB  4.7 GiB   4.7 GiB       0.01

--- POOLS ---
POOL                        ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
device_health_metrics       21    1  242 KiB        6  727 KiB      0     14 TiB
.rgw.root                   22   32  3.2 KiB        6   72 KiB      0     14 TiB
default.rgw.log             23   32   61 KiB      209  600 KiB      0     14 TiB
default.rgw.control         24   32      0 B        8      0 B      0     14 TiB
default.rgw.meta            25    8  1.9 KiB        9   96 KiB      0     14 TiB
default.rgw.buckets.index   26    8   47 KiB       22  142 KiB      0     14 TiB
default.rgw.buckets.data    27   32  377 MiB       96  1.1 GiB      0     14 TiB
default.rgw.buckets.non-ec  28   32  110 KiB        0  330 KiB      0     14 TiB
test.bucket.data            30   32      0 B        0      0 B      0     14 TiB

Removed the file from the bucket via S3 Browser.
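The delete step, likewise as a command-line sketch under the same assumptions:

# Delete the object; RGW is expected to put its tail objects onto the GC list.
aws --endpoint-url http://rgw.example.local:7480 \
    s3 rm s3://support-files/debian-11.0.0-amd64-netinst.iso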


radosgw-admin --bucket=support-files bucket radoslist | wc -l
0

ceph df

--- RAW STORAGE ---
CLASS    SIZE   AVAIL     USED  RAW USED  %RAW USED
hdd    44 TiB  44 TiB  4.7 GiB   4.7 GiB       0.01
TOTAL  44 TiB  44 TiB  4.7 GiB   4.7 GiB       0.01

--- POOLS ---
POOL                        ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
device_health_metrics       21    1  242 KiB        6  727 KiB      0     14 TiB
.rgw.root                   22   32  3.2 KiB        6   72 KiB      0     14 TiB
default.rgw.log             23   32   89 KiB      209  696 KiB      0     14 TiB
default.rgw.control         24   32      0 B        8      0 B      0     14 TiB
default.rgw.meta            25    8  1.9 KiB        9   96 KiB      0     14 TiB
default.rgw.buckets.index   26    8   47 KiB       22  142 KiB      0     14 TiB
default.rgw.buckets.data    27   32  377 MiB       95  1.1 GiB      0     14 TiB
default.rgw.buckets.non-ec  28   32  110 KiB        0  330 KiB      0     14 TiB
test.bucket.data            30   32      0 B        0      0 B      0     14 TiB

As you can see, the pool was not actually cleaned up.

radosgw-admin --bucket=support-files bucket check, radosgw-admin --bucket=support-files gc process, and radosgw-admin --bucket=support-files gc list all come back clean.
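For reference, a sketch of the GC checks as run on the admin node (note: gc list and gc process operate on the cluster-wide GC queue rather than per bucket; --include-all also covers entries whose expiration time has not been reached yet):

# List all GC entries, including not-yet-expired ones; empty output here.
radosgw-admin gc list --include-all
# Force-process the GC queue regardless of expiration time.
radosgw-admin gc process --include-all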

rados -p default.rgw.buckets.data ls | wc -l
95

rgw-orphan-list default.rgw.buckets.data
cat ./rados-20211018094020.intermediate | wc -l
95

Object name patterns in ./rados-20211018094020.intermediate:

<bucket-id>__shadow_<file_name>~* or
<bucket-id>__multipart_<file_name>~*
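A quick sanity check that every leftover object in the data pool matches one of these patterns:

# Count leftover rados objects whose names carry a multipart/shadow marker;
# if GC had actually cleaned up after the delete, this would be 0.
rados -p default.rgw.buckets.data ls | grep -cE '__(shadow|multipart)_'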

There were no errors in the RGW log during the deletion:

2021-10-18T13:09:06.536+0300 7fcf0564a700 1 ====== starting new request req=0x7fd0e446c820 =====

2021-10-18T13:09:06.608+0300 7fcf6ef1d700 1 ====== req done req=0x7fd0e446c820 op status=0 http_status=204 latency=0.072000369s ======

2021-10-18T13:09:06.608+0300 7fcf6ef1d700 1 beast: 0x7fd0e446c820: 10.77.1.185 - nextcloud [18/Oct/2021:13:09:06.536 +0300] "DELETE /support-files/debian-11.0.0-amd64-netinst.iso HTTP/1.1" 204 0 - "S3 Browser/10.0.9 (https://s3browser.com)" - latency=0.072000369s
