[com.sagevisuals/fastester "0"]
com.sagevisuals/fastester {:mvn/version "0"}
(require '[fastester.define :refer [defbench]]
         '[fastester.display :refer [generate-documents]]
         '[fastester.measure :refer [run-benchmarks]])
Imagine: We notice that the zap function of version 11 of our library is sub-optimal. We improve the implementation so that zap executes faster.
In the version 12 changelog, we could mumble,
Function zap is faster.
Or instead, we could assert,
Version 12 of function zap is 20 to 30 percent faster than version 11 for integers spanning five orders of magnitude. This implementation change will improve performance for the vast majority of intended use cases.
| version | n = 1 | n = 10 | n = 100 | n = 1000 | n = 10000 |
|---|---|---|---|---|---|
| 12 | 1.8e-04±1.5e-06 | 1.8e-03±3.8e-05 | 1.9e-02±4.6e-04 | 1.8e-01±1.4e-03 | 1.8e+00±1.9e-02 |
| 11 | 2.6e-04±5.9e-06 | 2.7e-03±1.6e-04 | 2.6e-02±9.1e-04 | 2.7e-01±7.0e-03 | 2.6e+00±2.2e-02 |
The Fastester library streamlines the tasks of writing benchmarks for a function, objectively measuring evaluation times of different versions of that function, and concisely communicating how performance changes between versions.
We ought to strive to write fast software. Fast software respects other people's time and their computing resources. Fast software demonstrates skill and attention to detail. And fast software is just plain cool.
But, how fast is "fast"? It's not terribly convincing to say Our software is fast. We'd like some objective measure of fast. Fortunately, the Criterium library provides a handy group of benchmarking utilities that measure the evaluation time of a Clojure expression. We could use Criterium to learn that (zap inc [1 2 3]) requires 183±2 microseconds to evaluate.
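For instance, a direct measurement might look something like this sketch, where zap is our hypothetical function and quick-bench is one of Criterium's benchmarking macros.
(require '[criterium.core :refer [quick-bench]])

;; quick-bench evaluates the expression repeatedly and prints a report that
;; includes the mean execution time and its variance
(quick-bench (zap inc [1 2 3]))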
Is…that good? Difficult to say. What we'd really like to know is how 183 microseconds compares to some previous version. So if, for example, version 12 of zap evaluates in 183 microseconds, whereas version 11 required 264 microseconds, we have good reason to believe the later implementation is faster.
Another problem is that tossing out raw numbers like "183" and "264" requires people to perform mental arithmetic to figure out if version 12 is better. One hundred eighty-three divided by two hundred sixty-four is approximately eighteen divided by twenty-six, which is approximately… Not ideal.
To address these problems, Fastester aspires to generate an objective, relative, and comprehensible performance report.
Thanks to Criterium, we can measure, in concrete, real world time units, how long it takes to evaluate a function with a particular argument, (somewhat) independent of vagaries of the computing environment.
A single, isolated timing measurement doesn't mean much to a person, even if it is objective. People simply don't have everyday intuition for an event that occurs in a few nanoseconds or microseconds. So when we discuss the concept of 'fast', we're often implicitly speaking in relative terms.
Fastester focuses on comparing the speed of one function to a previous version of itself.
Humans are visually-oriented, and a straightforward two-dimensional chart is an excellent tool to convey relative performance changes between versions. A person ought to be able to glance at the performance report and immediately grasp the improvements, with details available as needed.
Fastester documents consist primarily of charts with accompanying text. A show/hide button reveals details as the reader desires.
The performance document is accreting. Once version 12 is benchmarked and released, it's there for good. Corrections are encouraged, and later additions that compare against some new feature are also okay. The data is version-controlled, and the html/markdown documents generated from the data are also under version control.
The performance data is objective, but people may interpret it to suit their tastes. 183 microseconds may be fast enough for one person, but not another. The accompanying commentary may express the library author's opinions. That's okay. The author is merely communicating that opinion to the person considering switching versions. The author may consider a particular version fast, but the person using the software may not.
We should probably consider a performance regression as a breaking change. Fastester can help estimate and communicate how much the performance regressed.
Let's review our imaginary scenario. We have previously profiled some execution path of version 11 of our library. We discovered a bottleneck in a function, zap, which just so happens to behave exactly like clojure.core/map: apply some function to every element of a collection.
(zap inc [1 2 3]) ;; => (2 3 4)
We then changed zap's implementation for better performance and objectively verified that this new implementation for version 12 provides shorter execution times than the previous version.
We're now ready to release version 12 with the updated zap. When we're writing the changelog/release notes, we want to include a performance report that demonstrates the improvement. After declaring and requiring the dependency, there are four steps to using Fastester.
1. Set the options.
2. Write benchmarks.
3. Run the benchmarks.
4. Generate an html document that displays the performance data.
Keep in mind that we don't need to do these steps for every function for every release, but only when a function's implementation changes with measurable effects on performance.
Follow along with this example options file and this example benchmark definition file.
We must first set the options that govern how Fastester behaves. Options live in a file (defaulting to resources/fastester_options.edn) as a Clojure map. One way to get up and running quickly is to copy-paste a sample options file and edit as needed; a minimal sketch appears after the two tables below.
The following options have default values.
| key | default | usage |
|---|---|---|
| :benchmarks | {} | Hashmap arranging a hierarchy of namespaces and benchmark definitions. Keys (quoted symbols) represent namespaces. Values (sets of quoted symbols) represent benchmark names. See discussion. Note: This setting only affects running benchmarks. It does not affect which data sets are used to generate the html documents. |
| :html-directory | "doc/" | Directory to write html document. |
| :html-filename | "performance.html" | Filename to write html document. |
| :img-subdirectory | "img/" | Under |
| :markdown-directory | "doc/" | Directory to write markdown files. |
| :markdown-filename | "performance.md" | Filename to write markdown document. |
| :results-url | "https://example.com" | Base URL for where to find benchmark data. For local filesystem, use something like |
| :results-directory | "resources/performance_entries/" | Directory to find benchmark data, appended to |
| :verbose? | false | A boolean that governs printing benchmark status. |
| :testing-thoroughness | :quick | Assigns Criterium benchmark settings. One of |
| :parallel? | false | A boolean that governs running benchmarks on multiple threads in parallel. Warning: Running benchmarks in parallel results in undesirable, high variance. Associate to |
| :save-benchmark-fn-results? | true | When assigned |
| :tidy-html? | false | Default setting causes html to be written to file with no line breaks. If set to |
| :preamble | [:div "..."] | A hiccup/html block inserted at the top of the results document. |
| :sort-comparator | clojure.core/compare | Comparator used for sorting versions in the performance document chart legends and table rows. The comparator must accept two strings representing version entries extracted from either a Leiningen 'project.clj' or a 'pom.xml'. Write custom comparators with caution. |
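For instance, if a project's version strings are dotted numerics like "9.0.0" and "10.0.0", a plain lexical compare sorts them incorrectly. A hypothetical comparator (a sketch, not part of Fastester; parse-long requires Clojure 1.11+) might compare the numeric segments instead.
(require '[clojure.string :as str])

;; split each version string on dots, parse the segments to longs, and let
;; clojure.core/compare order the resulting vectors (shorter vectors first,
;; otherwise element-wise)
(defn numeric-version-compare
  [a b]
  (compare (mapv parse-long (str/split a #"\."))
           (mapv parse-long (str/split b #"\."))))

(numeric-version-compare "9.0.0" "10.0.0") ;; => -1, so "9.0.0" sorts first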
The following options have no defaults.
| key | example | usage |
|---|---|---|
| :title | "Taffy Yo-yo Library performance" | A string providing the title for the performance document. |
| :responsible | {:name "Grace Hopper", :email "univac@example.com"} | A hashmap with |
| :copyright-holder | "Abraham Lincoln" | Copyright holder listed in the footer of the document. |
| :fastester-UUID | de280faa-ebc5-4d75-978a-0a2b2dd8558b | A version 4 Universally Unique ID listed in the footer of the document. To generate a new UUID, evaluate |
| :preferred-version-info | :pom-xml | Declares preference for source of project version. If |
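Pulling those keys together, a minimal options file might look something like the following sketch. The keys and values are taken from the tables above and from the :benchmarks example later in this ReadMe; treat the exact literal forms (e.g., the UUID and the quoted symbols) as assumptions rather than a verified file format.
;; resources/fastester_options.edn (sketch)
{:title "Taffy Yo-yo Library performance"
 :responsible {:name "Grace Hopper", :email "univac@example.com"}
 :copyright-holder "Abraham Lincoln"
 :fastester-UUID #uuid "de280faa-ebc5-4d75-978a-0a2b2dd8558b" ; literal form assumed
 :preferred-version-info :pom-xml

 ;; which benchmark definitions to run, arranged by namespace
 :benchmarks {'zap.benchmarks #{'zap-inc 'zap-uc}}

 ;; where to write and find things (the defaults from the table above)
 :html-directory "doc/"
 :html-filename "performance.html"
 :img-subdirectory "img/"
 :markdown-directory "doc/"
 :markdown-filename "performance.md"
 :results-url "https://example.com"
 :results-directory "resources/performance_entries/"

 ;; behavior toggles
 :verbose? false
 :testing-thoroughness :quick
 :parallel? false
 :save-benchmark-fn-results? true
 :tidy-html? false

 ;; document content
 :preamble [:div "..."]
 :sort-comparator clojure.core/compare}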
Before we start bashing the keyboard, let's think a little about how we want to test zap. We'd like to demonstrate zap's improved performance for a variety of argument types, over a wide range of argument sizes. To do that, we'll write two benchmarks.
The first benchmark will measure the evaluation times of incrementing increasingly-lengthy sequences of integers.
(benchmark (zap inc [1]))
(benchmark (zap inc [1 2]))
(benchmark (zap inc [1 2 3]))
(benchmark (zap inc [1 2 3 ...]))
We'll label this series of benchmark runs with the name zap-inc.
The second benchmark will measure the evaluation times of upper-casing ever-longer sequences of strings.
(benchmark (zap str/upper-case ["a"]))
(benchmark (zap str/upper-case ["a" "b"]))
(benchmark (zap str/upper-case ["a" "b" "c"]))
(benchmark (zap str/upper-case ["a" "b" "c" ...]))
We'll label this series of benchmark runs with the name zap-uc.
Writing benchmarks follows a similar pattern to writing unit tests. We create a file, perhaps named benchmarks.clj, topped with a namespace declaration. For organizing purposes, we may write more than one benchmarks file if, for example, we'd like to write one benchmark file per source namespace.
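A benchmarks file might open with a namespace declaration along these lines (a sketch: zap.benchmarks is the namespace name used later in this example, and the require for the namespace that actually provides zap is elided because zap is hypothetical).
(ns zap.benchmarks
  (:require [clojure.string :as str]               ; for str/upper-case below
            [fastester.define :refer [defbench]])) ; plus a require for zap's own namespace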
Within our benchmarks file, we use defbench to define a benchmark. Here is its signature.
(defbench name "group" fn-expression args)
For the first argument, we supply defbench with a name, an unquoted symbol. The name resolves the benchmark definition in a namespace. We've chosen zap-inc and zap-uc. The names don't have any functional significance. We could have named the benchmarks Romeo and Juliet without affecting the measurements, but like any other Clojure symbol, it's nice if the names have some semantic meaning.
So far, we have the following two incomplete benchmark definitions: defbench followed by a name.
(defbench zap-inc ...)
(defbench zap-uc ...)
When we evaluate a defbench expression, Fastester binds a hashmap to the name in the namespace where we evaluated the expression. If two expressions use the same name in the same namespace, the later-evaluated definition will overwrite the earlier. If we'd like to give the same name to two different benchmarks, we could isolate the definitions into two different namespaces. For this demonstration benchmarking zap, we've chosen two different names, so we won't worry about overwriting.
After the name, we supply a group, a string that associates one benchmark with other conceptually-related benchmarks. Later, while generating the html results document, Fastester will aggregate benchmarks sharing a group. For zap, we have our two related benchmarks. Let's assign both of those benchmarks to the "faster zap implementation" group.
Now, we have the following two incomplete benchmark definitions, with the addition of the group.
(defbench zap-inc "faster zap implementation" ...)
(defbench zap-uc "faster zap implementation" ...)
The final two arguments, fn-expression and args, do the heavy lifting. The next step of the workflow, running the benchmarks, involves serially supplying elements of args to the function expression.
The function expression is a 1-arity function that demonstrates some performance aspect of the new version of the function. We updated zap so that it processes elements faster. One way to demonstrate its improved performance is to increment a sequence of integers with inc. That particular function expression looks like this.
(fn [n] (zap inc (range n)))
In addition to incrementing integers, we wanted to demonstrate upper-casing strings. Clojure's clojure.string/upper-case performs that operation on a single string.
(require '[clojure.string :as str])
To create a sequence of strings, we can use cycle, and take the number of elements we desire.
(take 1 (cycle ["a" "b" "c"])) ;; => ("a")
(take 2 (cycle ["a" "b" "c"])) ;; => ("a" "b")
(take 3 (cycle ["a" "b" "c"])) ;; => ("a" "b" "c")
Our second function expression looks like this.
(fn [i] (zap str/upper-case (take i (cycle ["a" "b" "c"]))))
And with the addition of their respective function expressions, our two almost-complete benchmark definitions look like this.
(defbench zap-inc "faster zap implementation" (fn [n] (zap inc (range n))) ...)
(defbench zap-uc "faster zap implementation" (fn [i] (zap str/upper-case (take i (cycle ["a" "b" "c"])))) ...)
Note that there is nothing special about the function expression's parameter. zap-inc uses n, while zap-uc uses i.
'Running' a benchmark with those function expressions means that arguments are serially passed to the expression, measuring the evaluation times for each. The arguments are supplied by the final component of the benchmark definition, a sequence. For zap-inc, let's explore ranges from ten to one-hundred thousand.
An args sequence of five integers like this…
[10 100 1000 10000 100000]
…declares a series of five maximum values, producing the following series of five sequences to feed to zap for benchmarking.
[0 ... 9]
[0 ... 99]
[0 ... 999]
[0 ... 9999]
[0 ... 99999]
Ratcheting (range n) by powers of ten stresses zap's performance. Roughly speaking, we'll be doing this.
(benchmark (zap inc (range 10)))
(benchmark (zap inc (range 100)))
(benchmark (zap inc (range 1000)))
(benchmark (zap inc (range 10000)))
(benchmark (zap inc (range 100000)))
Altogether, that benchmark definition looks like this.
(defbench zap-inc
"faster zap implementation"
(fn [n] (zap inc (range n)))
[10 100 1000 10000 100000])
Similarly, we'd like zap-uc to exercise a span of strings.
(benchmark (zap str/upper-case (take 10 (cycle ["a" "b" "c"]))))
(benchmark (zap str/upper-case (take 100 (cycle ["a" "b" "c"]))))
(benchmark (zap str/upper-case (take 1000 (cycle ["a" "b" "c"]))))
(benchmark (zap str/upper-case (take 10000 (cycle ["a" "b" "c"]))))
(benchmark (zap str/upper-case (take 100000 (cycle ["a" "b" "c"]))))
That completed benchmark definition looks like this.
(defbench zap-uc
"faster zap implementation"
(fn [i] (zap str/upper-case (take i (cycle ["a" "b" "c"]))))
[10 100 1000 10000 100000])
However, there's a problem. The function expressions contain range and cycle. If we run these benchmarks as is, the evaluation times would include range's and cycle's processing times. We might want to do that in some other scenario, but in this case, it would be misleading. We want to focus solely on how fast zap can process its elements. Let's extract range to an external expression.
(def range-of-length-n
  (reduce #(assoc %1 %2 (range %2)) {} [10 100 1000 10000 100000]))

(defbench zap-inc
  "faster zap implementation"
  (fn [n] (zap inc (range-of-length-n n)))
  [10 100 1000 10000 100000])
range-of-length-n generates all the sequences ahead of time. With the sequences now created outside of the benchmark expression, the time measurement will mainly reflect the work done by zap itself.
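Because range-of-length-n is an ordinary Clojure hashmap, retrieving a pre-built sequence is just a map lookup.
(range-of-length-n 10)             ;; => (0 1 2 3 4 5 6 7 8 9)
(count (range-of-length-n 100000)) ;; => 100000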
If you extrapolated that zap behaves like map, perhaps you anticipated a remaining problem. If we were to run the zap-inc benchmarks as defined above, we'd notice that the evaluation times were suspiciously consistent, no matter how many integers the sequence contained. zap, like many core sequence functions, returns a lazy sequence. We must force the return sequence to be realized so that zap-inc measures zap actually doing work. doall is handy for that.
(defbench zap-inc
"faster zap implementation"
(fn [n] (doall (zap inc (range-of-length-n n))))
[10 100 1000 10000 100000])
We handle zap-uc similarly. First, we'll pre-compute the test sequences so that running the benchmark doesn't measure cycle. Then we'll wrap the zap expression in a doall.
(def abc-cycle-of-length-n
  (reduce #(assoc %1 %2 (take %2 (cycle ["a" "b" "c"]))) {} [10 100 1000 10000 100000]))

(defbench zap-uc
  "faster zap implementation"
  (fn [n] (doall (zap str/upper-case (abc-cycle-of-length-n n))))
  [10 100 1000 10000 100000])
So what happens when we evaluate a defbench expression? It binds the benchmark name to a hashmap of group, function expression, arguments, and some metadata. Let's evaluate the name zap-inc.
zap-inc
;; => {:fexpr (fn [n] (doall (zap inc (range-of-length-n n))))
;;     :group "faster zap implementation"
;;     :ns "zap.benchmarks"
;;     :name "zap-inc"
;;     :n [10 100 1000 10000 100000]
;;     :f #function[fn--8882]}
Yup. We can see an everyday Clojure hashmap containing all of the arguments we supplied to defbench (some stringified), plus the namespace and the repl's rendering of the function object.
Soon, in the run benchmarks step, Fastester will rip through the benchmark names declared in the options hashmap key :benchmarks and run a Criterium benchmark for every name.
Once we evaluate the two defbench expressions, the namespace contains two benchmark definitions that will demonstrate zap's performance: one incrementing sequences of integers, named zap-inc, and one upper-casing sequences of strings, named zap-uc.
Fastester provides a few helper utilities. If we want to see how a benchmark would work, we can invoke run-one-defined-benchmark.
(require '[fastester.measure :refer [run-one-defined-benchmark]])
(run-one-defined-benchmark zap-inc :quick)
;; => ...output elided for brevity...
In the course of writing benchmarks, we often need a sequence of exponentially-growing integers. For that, Fastester offers range-pow-10 and range-pow-2.
(require '[fastester.measure :refer [range-pow-2 range-pow-10]])
(range-pow-10 5) ;; => (1 10 100 1000 10000 100000)
(range-pow-2 5) ;; => (1 2 4 8 16 32)
Sometimes, we'll want to remove a defined benchmark, which we can do with clojure.core/ns-unmap.
(ns-unmap *ns* 'zap-something-else)
Before we go to the next step, running the benchmarks, let's double-check the options. We need Fastester to find our two benchmark definitions, so we must correctly set :benchmarks. This options key is associated to a hashmap. That nested hashmap's keys are symbols indicating the namespaces. In our example, we have one namespace, and therefore one key, 'zap.benchmarks. Associated to that one key is a set of simple symbols indicating the benchmark names, in our example, 'zap-inc and 'zap-uc. Altogether, that section of the options looks like this.
:benchmarks {'zap.benchmarks #{'zap-inc
                               'zap-uc}}
We should also be on guard: saving zap's results (e.g., one-hundred-thousand incremented integers) blows up the file sizes, so let's set :save-benchmark-fn-results? to false.
Now that we've written zap-inc and zap-uc, we can run the benchmarks in two ways. If we've got our editor open with an attached repl, we can invoke (run-benchmarks). If we're at the command line, invoking
$ lein run -m fastester.core :benchmarks
has the same effect. Later, we'll discuss a modification of this invocation that attempts to address a possible issue with contemporary CPUs.
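At the repl, the first route amounts to something like this sketch, assuming the options file is saved at its default location and the benchmark namespace has already been evaluated.
(require '[fastester.measure :refer [run-benchmarks]])

;; runs every benchmark listed under :benchmarks in the options file
(run-benchmarks)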
We should expect each benchmark to take about a minute with the default benchmark settings. To minimize timing variance, we ought to use a multi-core machine with minimal competing processes, network activity, etc.
We should see one edn file per function-expression/argument pairing. If not, double check the options hashmap to make sure all the namespace and name symbols within :benchmarks are complete and correct.
When the benchmarks are finished running, we can generate the performance report. Sometimes it's useful to have an html file to quickly view in the browser, and other times it's useful to have a markdown file (i.e., to show on GitHub), so Fastester generates one of each.
To generate the documents, we can invoke (generate-documents) at the repl, or
$ lein run -m fastester.core :documents
from the command line.
Note: Fastester uses all data files in the directory set by the options :results-directory. The :benchmarks setting has no effect on generating the documents.
When we look at the report, there's only version 12! We wanted a comparative report which shows how the performance of version 12's zap has improved relative to version 11's zap. To fix this, we use our version control system to roll back to the version 11 tag, and then we run the benchmarks with version 11. Once done, we roll forward again to version 12.
After a followup generate-documents invocation, the charts and tables show the version 12 benchmark measurements side-by-side with version 11's, similar to the introduction example. We can clearly see that the new zap implementation executes faster across a wide range of sequence lengths, both for incrementing integers and upper-casing strings.
The charts and tables present strong evidence, but a morsel of explanatory text enhances our story. Fastester provides two opportunities to add text. Near the top, between the table of contents and the first group's section, we can insert some introductory text by associating a hiccup/html block to the :preamble key in the options hashmap.
Also, we can insert text after each group's section heading by creating an entry in the :comments part of the options hashmap. The comments option is a nested hashmap whose keys are groups (strings) and whose values are hiccup/html blocks.
For example, what we read in the zap performance document derives from the :preamble and :comments options defined roughly like this.
:preamble [:div
           [:p "This page follows the "
            [:code "zap"]
            " function benchmark example from the "
            [:a {:href "https://github.com/blosavio/fastester"}
             "Fastester ReadMe"]
            ...]]

:comments {"faster zap implementation" [:div
                                        [:p "This is the comments section... "
                                         [:em "group"]
                                         ", 'faster zap implementation'..."
                                         [:code "zap-inc"]
                                         " and "]
                                        ...]}
For both the preamble and group comments, we can insert more than one html element by wrapping them with a [:div ...].
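For example, a preamble with two paragraphs might look like this minimal sketch.
:preamble [:div
           [:p "A first introductory paragraph."]
           [:p "A second paragraph."]]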
We must be particularly careful to define our benchmarks to test exactly and only what we intend to test. One danger is idiomatic Clojure patterns polluting our time measurements. It's typical to compose a sequence right at the spot where we require it, like this.
(map inc (repeatedly 99 #(rand)))
However, if we were to submit this expression to Criterium, intending to measure how long it takes map to increment the sequence, we'd also be benchmarking creating the sequence, which may be a non-negligible portion of the evaluation time. Instead, we should hoist the sequence creation out of the expression.
;; create the sequence
(def ninety-nine-rands (repeatedly 99 #(rand)))
;; use the pre-existing sequence
(map inc ninety-nine-rands)
The second expression now involves mostly the map action, and is more appropriate for benchmarking.
Another danger is that while we may be accurately timing an expression, the expression isn't calculating what we'd like to measure. map (and friends) returns a lazy sequence, which is almost certainly not what we were intending to benchmark. We must remember to force the realization of the lazy sequence, conveniently done with doall.
(doall (map inc ninety-nine-rands))
Regarding Fastester itself, three final gotchas will be familiar to Clojurists programming at the repl.
During development, it's typical to define and re-define benchmarks with defbench. It's not difficult for the namespace to get out of sync with the visual appearance of the text represented in the file. Maybe we renamed a benchmark, and the old benchmark is still floating around invisibly. Such an orphaned definition won't hurt anything because Fastester will only run benchmarks explicitly listed in the :benchmarks option. If we want to actively remove the benchmark, we can use clojure.core/ns-unmap.
Perhaps more dangerous, maybe we edited a defbench's textual expression, but failed to re-evaluate it. What we see with our eyes won't accurately reflect the benchmark definition that Fastester actually runs. To fix this problem, a quick re-evaluation of the entire text buffer redefines all the benchmarks currently expressed in the namespace.
Finally, we need to remember that when running from the command line, Fastester consults only the options and benchmark definitions from the file contents as they exist on disk. A repl-attached editor with unsaved options or definitions, even with a freshly-evaluated namespace, will not affect the results from a command line invocation. Saving the files to disk synchronizes what we see in the editor and what is consumed by command line-initiated actions.
When displaying relative performance comparisons, it's crucial to hold the environment as consistent as possible. If a round of benchmarks is run after the CPU, RAM, operating system, Java version, or Clojure version has changed, we need to re-run all previous benchmarks. Or, maybe better, we ought to make a new options file and generate a completely different performance document, while keeping the old one around.
Unresolved: Contemporary systems often use multiple, heterogeneous CPU cores, i.e., X efficiency cores running light tasks at low power and Y high-performance cores running intense tasks. Linux provides a utility, taskset, that explicitly sets CPU affinity. Invoking
$ taskset --cpu-list 3 lein run -m fastester.core :benchmarks
from the command line pins the benchmark process to the fourth CPU. Fastester does not provide a turn-key solution for setting CPU affinity for other operating systems such as Windows or MacOS.
Modern operating systems (OSes) and virtual machines (VMs) provide a perilous environment for accurate, reliable benchmarking. They both toss an uncountable number of non-deterministic confounders onto our laps. The OS may host other processes which contend for computing resources, interrupt for I/O or network events, etc. The Java VM may nondeterministically just-in-time (JIT) compile hot spots, making the code run faster (or slower!) after some unpredictable delay, and the garbage collector (GC) is quite skilled at messing with precise timing measurements.
So we must exercise great care when running the benchmarks and be very conservative with our claims when reporting the benchmark results.
Fastester delegates the benchmarking to Criterium, which fortunately goes to considerable effort to minimize this non-determinism. First, just before running the benchmark, Criterium forces the GC in order to minimize the chance of it running during the benchmark itself. Furthermore, Criterium includes a warm-up period to give the JIT compiler an opportunity to optimize the benchmarked code so that the evaluation times are more consistent.
To try to control for other sources of non-determinism, we should run each benchmark multiple times (default 60), and calculate statistics on those results, which helps suggest whether or not our benchmark data is consistent and significantly different.
Fastester, following Criterium's lead, focuses on the mean (average) evaluation time, not the minimum. This policy is intended to avoid over-emphasizing edge cases that coincidentally perform well, giving a less biased view.
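To make that distinction concrete, here is a toy calculation with hypothetical timings in microseconds; the minimum reflects one coincidentally lucky run, while the mean summarizes the typical cost.
(def sample-times [183 185 184 179 182 186]) ;; hypothetical measurements, µs

(double (/ (reduce + sample-times) (count sample-times))) ;; => ~183.2 (mean)
(apply min sample-times)                                   ;; => 179 (minimum)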
If our new implementation is only a few percent 'faster' than the old version, we ought to consider very carefully whether it is worth changing the implementation for what may or may not be an actual improvement.
clj-async-profiler A single-dependency, embedded high-precision performance profiler.
clojure-benchmarks Andy Fingerhut's project for benchmarking programs to compare amongst other languages.
Clojure Goes Fast A hub for news, docs, and tools related to Clojure and high performance.
Criterium Measures the computation time of an expression, addressing some of the pitfalls of benchmarking. Criterium provides the vital benchmarking engine of the Fastester library.
Java Microbenchmark Harness (JMH) For building, running, and analyzing nano/micro/milli/macro benchmarks written in Java and other languages targeting the JVM.
Laurence Tratt's benchmarking essays.
Minimum times tend to mislead when benchmarking
Virtual machine warmup blows hot and cold
Why aren’t more users more happy with our VMs? Part 1 Part 2
time+ A paste-and-go macro that measures an expression's evaluation time, useful for interactive development.
Fastester does not aspire to be:
A diagnostic profiler. Fastester will not locate bottlenecks. It is only intended to communicate performance changes when a new version behaves differently than a previous version. I.e., we've already located the bottlenecks and made it quicker. Fastester performs a release task, not a dev-time task.
A comparative profiler. Fastester doesn't address if My Clojure function runs faster than that OCaml function, and, in fact, isn't intended to demonstrate My Clojure function runs faster than someone else's Clojure function. Fastester focuses on comparing benchmark results of one particular function to a previous version of itself.
A general-purpose charting facility. Apart from choosing whether a chart axis is linear or logarithmic, any other charting option like color, marker shape, etc., will not be adjustable.
An artificial example that simulates performance changes of a few clojure.core functions across several versions, and demonstrates many of Fastester's features.
An example that follows along with benchmarking zap, the scenario presented above in this ReadMe.
An assembly of a 1-arity function expression, a sequence of arguments, a name, and a group. We define or write a benchmark. Criterium runs benchmarks.
An html or markdown file that contains benchmarks results consisting of charts, text, and tables.
One or more conceptually-related benchmarks, e.g., all benchmarks that demonstrate the performance of map.
A Clojure symbol that refers to a benchmark definition. defbench binds a name to a benchmark definition.
A notable release of software, labeled by a version number.
This program and the accompanying materials are made available under the terms of the MIT License.