Bug #14820

Updated by Kefu Chai about 8 years ago

Hello, 

 When you use s3cmd with 'rb --recursive', the user stats counter total_entries is not decremented correctly. 
 And if you have a quota limit set (the reproduction below uses 'max-objects'), you can no longer put files. 
 The problem seems to come from the multi-object delete. 
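 The "multi-object delete" here refers to the S3 DeleteObjects API (a POST to the bucket with the ?delete subresource), which s3cmd uses for 'rb --recursive'. As a rough sketch of what such a request body looks like (the helper name and sample keys are illustrative, not taken from s3cmd's source): 

<pre>
# Sketch of the XML request body for an S3 multi-object delete
# (POST /bucket?delete). Element names follow the S3 DeleteObjects
# API; the function name and sample keys are illustrative only.
import xml.etree.ElementTree as ET

def build_multi_delete_body(keys, quiet=True):
    root = ET.Element("Delete")
    if quiet:
        # Quiet mode: the server only reports keys that failed to delete.
        ET.SubElement(root, "Quiet").text = "true"
    for key in keys:
        obj = ET.SubElement(root, "Object")
        ET.SubElement(obj, "Key").text = key
    return ET.tostring(root, encoding="unicode")

body = build_multi_delete_body(["files/file1", "files/file2"])
print(body)
</pre>

 One such request removes up to 1000 objects at a time, so a recursive bucket removal is a long series of these batches; the stats leak would then accumulate per batch. 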

 How to reproduce: 

 # create a user 
 <pre> 
 $> radosgw-admin user create --uid=johndoe --display-name="John Doe" 
 $> radosgw-admin user stats --uid=johndoe    --sync-stats 
 ... 
     "total_entries": 0, 
 ... 
 </pre> 
 

 # use s3cmd to sync the "files" directory to your bucket 
 <pre> 
 $> ls -l files | wc -l 
    30002 
 $> du -h files                          # 16k / file 
 469M 	 files 
 $> s3cmd mb s3://bucket 
 $> s3cmd sync files s3://bucket 
 ... 
 ... 
 ... 
 ~3000 files uploaded 
 -- interrupt with ^C 
 </pre> 
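 A test tree like the "files" directory above (~30000 entries of 16 KB each) can be generated with a small helper such as this (the helper name and demo sizes are illustrative): 

<pre>
# Create a directory of identical small files, like the tree used
# in the reproduction above. Count and size are parameters so a
# much smaller tree can be used for a quick check.
import os
import tempfile

def make_test_files(path, count, size=16 * 1024):
    os.makedirs(path, exist_ok=True)
    for i in range(count):
        with open(os.path.join(path, "file%d" % i), "wb") as f:
            f.write(b"\0" * size)

# Demo with a tiny tree; the reproduction above used ~30000 files.
demo = os.path.join(tempfile.mkdtemp(), "files")
make_test_files(demo, 5)
print(len(os.listdir(demo)))  # 5
</pre>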
 

 # check user stats 
 <pre> 
 $> radosgw-admin user stats --uid=johndoe    --sync-stats 
 ... 
     "total_entries": 3000, 
 ... 
 </pre> 
 # delete the bucket 
 <pre> 
 $> s3cmd rb --recursive s3://bucket 
 </pre> 
 # check user stats 
 <pre> 
 $> radosgw-admin user stats --uid=johndoe    --sync-stats 
 ... 
     "total_entries": 45, # random value but not 0 as expected 
 ... 
 </pre> 
 # check my buckets 
 <pre> 
 $> s3cmd ls s3://          # clean 
 </pre> 
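 A quick way to flag the leak is to parse the JSON that radosgw-admin prints and compare total_entries against the expected value. The sample document below mirrors the truncated output shown above; the exact field layout of the real output is assumed: 

<pre>
# Check total_entries in the JSON printed by
# 'radosgw-admin user stats --uid=... --sync-stats'.
# The sample document mirrors the truncated output above;
# the real output's field layout is an assumption here.
import json

def leaked_entries(stats_json, expected=0):
    stats = json.loads(stats_json)
    return stats["stats"]["total_entries"] - expected

sample = '{"stats": {"total_entries": 45, "total_bytes": 0}}'
print(leaked_entries(sample))  # 45 -- should be 0 once all buckets are gone
</pre>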
 

 # add max-objects limit 
 <pre> 
 $> radosgw-admin quota set --quota-scope=user --uid=johndoe --max-objects=10 
 $> s3cmd mb s3://bucket 
 $> s3cmd put files/file3 s3://bucket 
 'files/file3' -> 's3://bucket/file3'    [1 of 1] 
  10240 of 10240     100% in      0s       2.64 MB/s    done 
 ERROR: S3 error: 403 (QuotaExceeded) 
 </pre> 

 Tested with Hammer and Infernalis: 
 <pre> 
 ceph --version 
 ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299) 
 </pre> 
 <pre> 
 ceph --version 
 ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43) 
 </pre> 
 
