Bug #10946


RGW garbage collector does not clean Ceph objects from the .rgw.buckets pool if an OSD is down

Added by Artem Savinov about 9 years ago. Updated about 4 years ago.

Status:
Closed
Priority:
Normal
Assignee:
-
Target version:
-
% Done:

0%

Source:
other
Regression:
No
Severity:
3 - minor

Description

ceph version 0.87 (c51c8f9d80fa4e0168aa52685b8de40e42758578)

I've got an RGW, a MON, and an OSD on one node, and one OSD on each of the other two nodes; in total that makes 1 RGW, 1 MON, and 3 OSDs.

I tried to upload a file via s3cmd using multipart upload while one of the OSD nodes goes down.
s3cmd hangs for some time, then I get error 500; after that it re-uploads some parts of the file.
I delete the S3 object via s3cmd del,
wait rgw_gc_obj_min_wait seconds, and then launch the garbage collector.
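
For reference, a rough reproduction sketch; the bucket and file names below are placeholders rather than the reporter's actual values, and --multipart-chunk-size-mb assumes an s3cmd version with multipart support:

s3cmd --multipart-chunk-size-mb=15 put bigfile s3://testbucket/bigfile   # kill one OSD while the upload runs
s3cmd del s3://testbucket/bigfile                                        # queues the object's parts for GC
radosgw-admin gc list --include-all                                      # show pending GC entries, expired or not
radosgw-admin gc process                                                 # run a GC pass once rgw_gc_obj_min_wait has elapsed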

The expected result:
the GC deletes all Ceph objects related to the S3 object.

The actual result:
the GC does not delete any Ceph objects related to the S3 object.

To put the process into the down state, I use 'kill -9' or iptables rules:

iptables -A INPUT -m multiport -p tcp --dports 6800:7000 -j DROP
iptables -A OUTPUT -m multiport -p tcp --dports 6800:7000 -j DROP
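
A side note, not from the report: the OSD state can be checked with the standard status commands before deleting the object, and the same iptables rules can be removed again afterwards with -D in place of -A:

ceph osd tree   # the killed/blocked OSD should show as down
ceph -s         # cluster status should report the OSD as down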

source: a list of the initial Ceph objects in the RGW pools.
gc: a list of the Ceph objects in the RGW pools after the GC run.
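
Lists like source and gc can be regenerated per pool with rados; the pool name here assumes the default .rgw.buckets data pool:

rados -p .rgw.buckets ls | sort > source
radosgw-admin gc process
rados -p .rgw.buckets ls | sort > gc
diff source gc   # objects still listed after the GC pass are the ones left behind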

All logs attached.


Files

source (593 Bytes) - a list of initial Ceph objects in the RGW pools - Artem Savinov, 02/25/2015 08:51 AM
gc (918 Bytes) - a list of Ceph objects in the RGW pools after the GC run - Artem Savinov, 02/25/2015 08:51 AM
client.radosgw.set004.log (109 KB) - Artem Savinov, 02/25/2015 08:51 AM
ceph-osd.2.log (36.1 KB) - Artem Savinov, 02/25/2015 08:51 AM
ceph-osd.1.log (15.7 KB) - Artem Savinov, 02/25/2015 08:51 AM
ceph-osd.0.log (23.2 KB) - Artem Savinov, 02/25/2015 08:51 AM
ceph-mon.c.log (31.2 KB) - Artem Savinov, 02/25/2015 08:51 AM
ceph.log (21.7 KB) - Artem Savinov, 02/25/2015 08:51 AM
apache-access.log (1.6 KB) - Artem Savinov, 02/25/2015 08:51 AM
logs.tgz (33.6 KB) - all in one archive - Artem Savinov, 02/25/2015 08:51 AM
#1

Updated by Artem Savinov about 9 years ago

Sorry, a correction.
The actual result:
the GC doesn't delete all Ceph objects related to the S3 object (some objects remain in the .rgw.buckets pool).

To put the OSD into the down state, I use 'kill -9' or iptables rules:
iptables -A INPUT -m multiport -p tcp --dports 6800:7000 -j DROP
iptables -A OUTPUT -m multiport -p tcp --dports 6800:7000 -j DROP
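
For anyone reproducing this, the GC wait can be shortened so the test does not take hours. A ceph.conf sketch; the section name is taken from the attached client.radosgw.set004.log and the values are illustrative only:

[client.radosgw.set004]
    rgw gc obj min wait = 60       # seconds before a deleted object becomes eligible for GC
    rgw gc processor period = 60   # how often the GC thread runs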

#2

Updated by Casey Bodley about 4 years ago

  • Status changed from New to Closed
