Bug #10946
closed
RGW garbage collector does not clean Ceph objects from the .rgw.buckets pool if an OSD is down
Description
ceph version 0.87 (c51c8f9d80fa4e0168aa52685b8de40e42758578)
I have an RGW, a MON, and an OSD on one node, and one OSD on each of the other two nodes: 1 RGW, 1 MON, and 3 OSDs in total.
I upload a file via s3cmd using multipart upload, and during the upload one of the OSD nodes goes down.
s3cmd hangs for some time, then returns error 500 and re-uploads part of the file.
I then delete the S3 object via s3cmd del,
wait rgw_gc_obj_min_wait seconds, and then run the garbage collector.
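For reference, a minimal sketch of the reproduction sequence; the bucket name, file name, and chunk size are placeholders, and I assume the GC is triggered manually via radosgw-admin:

# Multipart upload of a large file (the chunk size forces several parts);
# take one OSD down mid-upload using the iptables rules below.
s3cmd put ./bigfile s3://test-bucket/bigfile --multipart-chunk-size-mb=15
# Delete the S3 object.
s3cmd del s3://test-bucket/bigfile
# After rgw_gc_obj_min_wait seconds, list the pending GC entries and process them.
radosgw-admin gc list --include-all
radosgw-admin gc process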
The expected result:
GC deletes all Ceph objects relating to the S3 object.
The actual result:
GC doesn't delete any Ceph objects relating to the S3 object.
To put the process into the down state, I use 'kill -9' or the following iptables rules:
iptables -A INPUT -m multiport -p tcp --dports 6800:7000 -j DROP
iptables -A OUTPUT -m multiport -p tcp --dports 6800:7000 -j DROP
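Running the same two commands with -D in place of -A deletes the DROP rules and restores the OSD's connectivity afterwards.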
source: a list of the initial Ceph objects in the RGW pools.
gc: a list of the Ceph objects in the RGW pools after the GC run.
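These lists can be captured with rados; a sketch assuming the default .rgw.buckets pool (the attached lists may also cover the other RGW pools):

rados -p .rgw.buckets ls | sort > source
# ... delete the object, wait rgw_gc_obj_min_wait seconds, run the GC ...
rados -p .rgw.buckets ls | sort > gc
diff source gc   # objects still listed after the GC run were not cleaned up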
All logs attached.
Updated by Artem Savinov about 9 years ago
I'm sorry, a fix:
The actual result:
GC doesn't delete all Ceph objects relating to the S3 object (some objects remain listed in the .rgw.buckets pool).
To put the OSD into the down state, I use 'kill -9' or the following iptables rules:
iptables -A INPUT -m multiport -p tcp --dports 6800:7000 -j DROP
iptables -A OUTPUT -m multiport -p tcp --dports 6800:7000 -j DROP