We are using the Java Microbenchmark Harness (JMH) tool.
Last released version:

```bash
sbt "benchmarks-vprev/jmh:run -o proto-benchmark-results.txt -i 10 -wi 10 -f 2 -t 1 -r 1 -w 1 higherkindness.mu.rpc.benchmarks.ProtoBenchmark"
```

Next version:

```bash
sbt "benchmarks-vnext/jmh:run -o proto-benchmark-results-next.txt -i 10 -wi 10 -f 2 -t 1 -r 1 -w 1 higherkindness.mu.rpc.benchmarks.ProtoBenchmark"
```

Last released version:

```bash
sbt "benchmarks-vprev/jmh:run -o avro-benchmark-results.txt -i 10 -wi 10 -f 2 -t 1 -r 1 -w 1 higherkindness.mu.rpc.benchmarks.AvroBenchmark"
```

Next version:

```bash
sbt "benchmarks-vnext/jmh:run -o avro-benchmark-results-next.txt -i 10 -wi 10 -f 2 -t 1 -r 1 -w 1 higherkindness.mu.rpc.benchmarks.AvroBenchmark"
```

Last released version:

```bash
sbt "benchmarks-vprev/jmh:run -o avrowithschema-benchmark-results.txt -i 10 -wi 10 -f 2 -t 1 -r 1 -w 1 higherkindness.mu.rpc.benchmarks.AvroWithSchemaBenchmark"
```

Next version:

```bash
sbt "benchmarks-vnext/jmh:run -o avrowithschema-benchmark-results-next.txt -i 10 -wi 10 -f 2 -t 1 -r 1 -w 1 higherkindness.mu.rpc.benchmarks.AvroWithSchemaBenchmark"
```

In the commands above, the arguments mean: `-i 10` (10 measurement iterations), `-wi 10` (10 warmup iterations), `-f 2` (2 forks), and `-t 1` (1 thread). `-r` and `-w` specify the minimum time (in seconds) to spend on each measurement iteration and each warmup iteration, respectively.
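Those flags imply a lower bound on how long each benchmark run takes. A minimal Scala sketch of that arithmetic (it ignores fork startup and benchmark setup overhead, which add real time on top):

```scala
object BenchmarkTimeEstimate extends App {
  // Flag values taken from the commands above
  val forks              = 2  // -f 2
  val warmupIterations   = 10 // -wi 10
  val measuredIterations = 10 // -i 10
  val secondsPerWarmup   = 1  // -w 1: minimum seconds per warmup iteration
  val secondsPerMeasured = 1  // -r 1: minimum seconds per measurement iteration

  // Each fork runs all warmup and measurement iterations sequentially
  val minSeconds =
    forks * (warmupIterations * secondsPerWarmup + measuredIterations * secondsPerMeasured)

  println(s"Minimum time per benchmark: $minSeconds seconds") // 40 seconds
}
```

So with these settings, each benchmark needs at least 40 seconds of measurement time per run, before any JVM startup cost.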
The following command runs all the available benchmarks in the project:

```bash
benchmarks/run-benchmarks-all
```

Then, we can aggregate the results to compare the current (work in progress) version with the last released one:

```bash
benchmarks/aggregate /path/to/the/results/directory
```

These scripts are based on the ones from the great Monix library.
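To give an idea of what such a comparison involves, here is a hedged Scala sketch (not the project's actual `benchmarks/aggregate` script) that extracts the `Score` column from two JMH plain-text summaries, in the format `-o` writes, and prints the throughput ratio. The benchmark name and numbers are made up for illustration:

```scala
object CompareScores extends App {
  // Sample JMH plain-text summary lines; scores are invented for this example
  val prev =
    """Benchmark             Mode  Cnt    Score   Error  Units
      |ProtoBenchmark.ping  thrpt   20  100.000 ± 1.000  ops/s""".stripMargin

  val next =
    """Benchmark             Mode  Cnt    Score   Error  Units
      |ProtoBenchmark.ping  thrpt   20  110.000 ± 1.000  ops/s""".stripMargin

  // Parse (benchmark name -> score) pairs from result lines ending in a unit
  def scores(report: String): Map[String, Double] =
    report.linesIterator
      .map(_.trim)
      .filter(_.endsWith("ops/s"))
      .map { line =>
        // Columns: name, mode, cnt, score, ±, error, units
        val cols = line.split("\\s+")
        cols(0) -> cols(3).toDouble
      }
      .toMap

  for {
    (name, prevScore) <- scores(prev)
    nextScore         <- scores(next).get(name)
  } {
    val ratio = nextScore / prevScore
    println(f"$name: $ratio%.2fx throughput vs last release")
  }
}
```

This only handles throughput (`ops/s`) rows; the real scripts do more, but the core idea of lining up scores by benchmark name is the same.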