Low Level Elm Benchmarking API
This API exposes the raw tasks necessary to create higher-level benchmarking abstractions.
As a user, you're probably not going to need to use this library; take a look at Benchmark instead, which has the user-friendly primitives. If you do find yourself using this library often, please open an issue on elm-benchmark and we'll find a way to make your use case friendlier.
An operation to benchmark. Use operation to construct these.
operation : (() -> a) -> Operation
Make an Operation, given a function that runs the code you want to benchmark when given a unit (()).
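For example, here is a minimal sketch (buildList is our name, not part of the library). Wrapping the work in a thunk keeps it from running until the benchmark actually invokes it:

```elm
import Benchmark.LowLevel exposing (Operation, operation)

-- an Operation that measures building a 100-element list;
-- the anonymous function delays the work until the runner calls it
buildList : Operation
buildList =
    operation (\() -> List.range 0 100)
```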
warmup : Operation -> Task Error ()
Warm up the JIT for a benchmarking run. You should call this before calling findSampleSize or trusting the times coming out of measure.
If we don't warm up the JIT beforehand, it will slow down your benchmark and result in inaccurate data. (By the way, Mozilla has an excellent explanation of how this all works.)
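As a sketch of how these tasks chain together (prepare is our name, not part of the API), you might warm up and then pick a sample size like this:

```elm
import Benchmark.LowLevel exposing (Error, Operation, findSampleSize, warmup)
import Task exposing (Task)

-- warm up the JIT first, then determine an appropriate sample size
prepare : Operation -> Task Error Int
prepare op =
    warmup op
        |> Task.andThen (\_ -> findSampleSize op)
```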
findSampleSize : Operation -> Task Error Basics.Int
Find an appropriate sample size for benchmarking. This should be much greater than the clock resolution (5µs in the browser) to make sure we get good data.
We do this by starting at sample size 1. If that doesn't pass our threshold, we multiply by the golden ratio and try again until we get a large enough sample.
In addition, we want the sample size to be more-or-less the same across runs, despite small differences in measured fit. We do this by rounding to the nearest multiple of the sample size's order of magnitude: if the sample size is 1,234 we round to 1,000; if it's 8,800, we round to 9,000.
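A sketch of that growth-and-rounding logic in plain Elm (helper names are ours; the library's internals may differ):

```elm
goldenRatio : Float
goldenRatio =
    1.618033988749895

-- grow the candidate sample size by the golden ratio
nextSampleSize : Int -> Int
nextSampleSize current =
    ceiling (toFloat current * goldenRatio)

-- round to the nearest multiple of the order of magnitude:
-- 1234 -> 1000, 8800 -> 9000
roundToOrderOfMagnitude : Int -> Int
roundToOrderOfMagnitude n =
    let
        magnitude =
            10 ^ floor (logBase 10 (toFloat n))
    in
    round (toFloat n / toFloat magnitude) * magnitude
```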
sample : Basics.Int -> Operation -> Task Error Basics.Float
Run a benchmark a number of times. The returned value is the total time it took for the given number of runs.
In the browser, high-resolution timing data from these functions comes from the Performance API and is accurate to 5µs. If performance.now is unavailable, it will fall back to Date, accurate to 1ms.
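Since sample returns the total time for all runs, dividing by the run count gives the mean time per run (milliseconds when backed by performance.now). A small sketch (meanRunTime is our name, not part of the API):

```elm
import Benchmark.LowLevel exposing (Error, Operation, sample)
import Task exposing (Task)

-- total time for n runs divided by n: the mean time per run
meanRunTime : Int -> Operation -> Task Error Float
meanRunTime n op =
    sample n op
        |> Task.map (\total -> total / toFloat n)
```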
Error states that can terminate a sampling run.