Documentation #3466
Status: Closed
rados manpage: bench still documents "read" rather than "seq/rand"
Description
The rados bench "read" mode has been replaced with "seq" and "rand" (the latter of which is
still unimplemented), and since "bench write" now cleans up after itself,
the command has gained a --no-cleanup option. These changes need to be documented in the manpage.
Updated by James McClune almost 5 years ago
Dan Mick wrote:
The rados bench "read" mode has been replaced with "seq" and "rand" (the latter of which is
still unimplemented), and since "bench write" now cleans up after itself,
the command has gained a --no-cleanup option. These changes need to be documented in the manpage.
I think this ticket can be closed. The rados bench command has been documented per Dan's request.
See: http://docs.ceph.com/docs/master/man/8/rados/#pool-specific-commands
bench seconds mode [ -b objsize ] [ -t threads ]
Benchmark for seconds. The mode can be write, seq, or rand. seq and rand are read benchmarks, either sequential or random. Before running one of the read benchmarks, run a write benchmark with the --no-cleanup option. The default object size is 4 MB, and the default number of simulated threads (parallel writes) is 16. The --run-name <label> option is useful for benchmarking a workload test from multiple clients. The <label> is an arbitrary object name. It is "benchmark_last_metadata" by default, and is used as the underlying object name for "read" and "write" ops.
Note: the -b objsize option is valid only in write mode.
Note: write and seq must be run on the same host, otherwise the objects created by write will have names that cause seq to fail.
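For illustration, the write-then-read workflow the documentation describes might look like the following sketch. The pool name testpool is an assumption, and the trailing cleanup subcommand is only available in reasonably recent rados versions:

```shell
# Assumes a pool named "testpool" already exists (hypothetical name).
# Write for 10 seconds, keeping the benchmark objects on disk
# so that a subsequent read benchmark can find them.
rados bench -p testpool 10 write --no-cleanup

# Sequential read benchmark over the objects written above;
# must be run on the same host as the write, since seq looks up
# the objects by the names the write benchmark generated.
rados bench -p testpool 10 seq

# Remove the leftover benchmark objects when finished.
rados -p testpool cleanup
```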
Updated by Zac Dover over 4 years ago
- Status changed from New to Closed
This bug has been judged too old to fix, either because 1) it was raised against a version of Ceph prior to Luminous, or 2) it is simply so old, and has gone untouched for so long, that it is unlikely nowadays to represent a live documentation concern.
If you think that the closing of this bug is an error, raise another bug of a similar kind. If you think that the matter requires urgent attention, please let Zac Dover know at zac.dover@gmail.com.