Bug #21375 (closed)

Average latency output in rados bench is arithmetically flawed

Added by Florian Haas over 6 years ago. Updated over 6 years ago.

Status: Resolved
Priority: Low
Assignee: Mohamad Gebai
% Done: 0%
Regression: No
Severity: 4 - irritation

Description

Seen on v12.2.0-296-g706f78da1a on a virtualized test system:

sudo rados -p bench bench 30 write
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 30 seconds or 0 objects
Object prefix: benchmark_data_deploy_22718
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        58        42   167.914       168    0.734898    0.300907
    2      16       105        89   177.941       188    0.358432    0.322523
    3      16       154       138    183.95       196    0.351548    0.322912
    4      16       206       190   189.954       208    0.141235    0.319715
    5      16       254       238   190.358       192    0.156576     0.31991
    6      16       304       288   191.961       200    0.231245    0.323936
    7      16       348       332   189.678       176    0.120068    0.323501
    8      16       400       384   191.963       208    0.763719    0.323801
    9      16       452       436   193.742       208    0.336659    0.324155
   10      16       502       486   194.366       200    0.412668    0.323478
   11      16       550       534   194.135       192    0.318614    0.323487
   12      16       601       585   194.954       204    0.330679    0.323183
   13      16       650       634   195.032       196    0.546288    0.322944
   14      16       695       679   193.958       180    0.139749    0.322562
   15      16       779       763   203.424       336    0.638265 9.00648e+07
   16      16       828       812   202.958       196    0.492445 8.46299e+07
   17      16       877       861   202.548       196    0.164353 7.98136e+07
   18      16       923       907   201.516       184    0.254678 7.57657e+07
   19      16       976       960   202.066       212    0.237107 7.15828e+07
2017-09-12 16:20:08.253560 min lat: 0.0693709 max lat: 4.29497e+09 avg lat: 6.79718e+07
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
   20      16      1027      1011   202.162       204    0.219402 6.79718e+07
   21      16      1074      1058   201.485       188    0.194049 6.49522e+07
   22      16      1124      1108   201.416       200    0.254869 6.20212e+07
   23      16      1175      1159   201.527       204    0.530085  5.9292e+07
   24      16      1213      1197   199.462       152      0.5128 5.74098e+07
   25      16      1270      1254   200.602       228    0.212382 5.48002e+07
   26      16      1317      1301   200.116       188    0.161623 5.28205e+07
   27      16      1366      1350   199.963       196    0.183112 5.09033e+07
   28      16      1414      1398   199.677       192    0.281947 4.91556e+07
   29      16      1453      1437   198.171       156    0.362389 4.78215e+07
   30      16      1499      1483   197.698       184    0.186634 4.63382e+07
Total time run:         30.448048
Total writes made:      1500
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     197.057
Stddev Bandwidth:       30.4936
Max bandwidth (MB/sec): 336
Min bandwidth (MB/sec): 152
Average IOPS:           49
Stddev IOPS:            7
Max IOPS:               84
Min IOPS:               38
Average Latency(s):     4.5813e+07
Stddev Latency(s):      4.41357e+08
Max latency(s):         4.29497e+09
Min latency(s):         0.0693709

None of these numbers would be realistic on real hardware, but the average latency from second 15 onward (jumping from ~0.32 s to 9.00648e+07 s) is simply implausible. :)
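One hedged observation: the reported max latency, 4.29497e+09 s, is 2^32 to within rounding, which smells like a wrapped or garbage 32-bit time value. A plain arithmetic mean has no defense against such samples: a handful of them dominate it completely. The sketch below is illustrative only (the sample counts and the 0.32 s baseline are assumptions chosen to match the printed totals, not values taken from the rados bench code):

```python
# Sketch: a handful of ~2^32-second "latencies" swamp a plain mean.
# All sample values below are illustrative assumptions, not from rados bench.

def running_mean(samples):
    """Incremental arithmetic mean, as a benchmark tool might maintain it."""
    avg, n = 0.0, 0
    for x in samples:
        n += 1
        avg += (x - avg) / n  # standard incremental-mean update
    return avg

# 1500 writes total; suppose the 16 ops in flight around second 15 each
# recorded a wrapped 32-bit timestamp (2**32 ~= 4.29497e9 "seconds"),
# while the remaining 1484 saw a realistic ~0.32 s latency.
samples = [0.32] * 1484 + [2**32] * 16

print(f"{running_mean(samples):.4e}")  # ~4.5813e+07
```

Under those assumptions, 16 × 2^32 / 1500 ≈ 4.5813e+07 reproduces the reported Average Latency(s) to five significant figures, and 16 × 2^32 / 763 ≈ 9.0065e+07 matches the avg lat printed at second 15 — consistent with 16 bad samples entering at that point, though this is a back-of-the-envelope check, not a root-cause analysis.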
