Bug #24505

Updated by Mark Kogan almost 6 years ago

On a Ceph cluster started with vstart:
<pre>
MON=1 OSD=1 MDS=0 MGR=1 RGW=1 ../src/vstart.sh -n -o "bluestore_block_size = 5000000000000" -o "rgw_enable_usage_log = true" | ccze -A -onolookups
</pre>

Using a COSBench S3 workload, writing 1 million 1 KB objects completes:
<pre>
<work type="prepare" workers="4" interval="5" rampup="5" config="content=zero;cprefix=s3cosbench;containers=r(1,1);objects=r(1,1000000);sizes=c(1)KB" />
</pre>
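
The prepare stage above enumerates every (container, object) pair in the given ranges. A minimal Python sketch of that expansion (the "myobjects" object-name prefix is an assumption about COSBench's naming convention, not taken from this report):

```python
def cosbench_prepare(cprefix, containers, objects, size_bytes):
    """Expand a COSBench prepare stage into (bucket, key, size) work items.

    containers/objects are inclusive (lo, hi) ranges, mirroring r(lo,hi).
    Bucket names follow the cprefix + index convention; the "myobjects"
    object prefix is an assumed default, not confirmed by the report.
    """
    for c in range(containers[0], containers[1] + 1):
        for o in range(objects[0], objects[1] + 1):
            yield (f"{cprefix}{c}", f"myobjects{o}", size_bytes)

# containers=r(1,1); objects=r(1,1000000); sizes=c(1)KB -> 1000 bytes each
items = list(cosbench_prepare("s3cosbench", (1, 1), (1, 1000000), 1000))
print(len(items))   # 1000000
print(items[0])     # ('s3cosbench1', 'myobjects1', 1000)
```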

"ceph df" shows 999999 OBJECTS:
<pre>
POOLS:
NAME                      ID  USED     %USED  MAX AVAIL  OBJECTS
default.rgw.buckets.data  6   954 MiB  0.02   4.4 TiB    999999
</pre>

but "radosgw-admin user stats" shows only 272721 entries:
<pre>
$ ./bin/radosgw-admin user stats --sync-stats --uid=testid
2018-06-12 08:06:24.588 7fa2f6410800 0 check_bucket_shards: resharding needed: stats.num_objects=272721 shard max_objects=100000
{
    "stats": {
        "total_entries": 272721,
        "total_bytes": 272721000,
        "total_bytes_rounded": 1117065216
    },
    "last_stats_sync": "2018-06-12 12:06:24.590311Z",
    "last_stats_update": "2018-06-12 12:06:24.586702Z"
}
</pre>

(Note the log line above: "2018-06-12 08:06:24.588 7fa2f6410800 0 check_bucket_shards: resharding needed: stats.num_objects=272721 shard max_objects=100000")
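
The threshold check behind that log line is simple arithmetic: resharding is flagged once the object count exceeds the per-shard limit times the shard count (a single-shard bucket index is assumed here; the 100000 limit is the "shard max_objects" value from the log itself):

```python
# Sketch of the check_bucket_shards condition from the log line above.
# num_shards=1 is an assumption; max_objs_per_shard comes from the log.
max_objs_per_shard = 100000   # "shard max_objects=100000"
num_shards = 1
stats_num_objects = 272721    # "stats.num_objects=272721"

resharding_needed = stats_num_objects > num_shards * max_objs_per_shard
print(resharding_needed)  # True
```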

Adding the "--reset-stats" parameter did not remedy this either:

<pre>
$ ./bin/radosgw-admin user stats --reset-stats --uid=testid
{
    "stats": {
        "total_entries": 272721,
        "total_bytes": 272721000,
        "total_bytes_rounded": 1117065216
    },
    "last_stats_sync": "0.000000",
    "last_stats_update": "2018-06-12 12:08:15.044129Z"
}
</pre>
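
The reported numbers are internally consistent with 272721 objects of 1000 bytes each, rounded up to a 4 KiB allocation unit (the 4 KiB granularity is an inference from the numbers, not stated in the output):

```python
# Sanity-check the user stats output: total_bytes is entries * 1000 bytes
# (COSBench sizes=c(1)KB), and total_bytes_rounded matches a 4 KiB
# per-object rounding -- an assumption consistent with the figures.
total_entries = 272721
object_size = 1000   # 1 KB per COSBench's decimal convention
alloc_unit = 4096    # assumed 4 KiB accounting granularity

print(total_entries * object_size)  # 272721000  == "total_bytes"
print(total_entries * alloc_unit)   # 1117065216 == "total_bytes_rounded"
```

So the stats machinery is self-consistent; the bug is that total_entries stalls at 272721 instead of reaching the ~1 million objects that "ceph df" reports.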
