Interface to the Criterium native agent for allocation tracking and call tracing.
This namespace provides functions for:
- Tracking JVM heap allocations during benchmark execution
- Tracing method calls to build call graphs
Key Features:
- Allocation Tracking: Capture detailed information about object allocations
- Call Tracing: Record method entry/exit to build hierarchical call trees
Agent Loading:
Due to JVMTI limitations, the agent MUST be loaded at JVM startup using
-agentpath for full functionality. Start your JVM with:
clojure -J-agentpath:/path/to/libcriterium.dylib -M:dev
Or use (jvm-opts) to get the correct path, then restart your JVM with that option.
Example usage:
```clojure
;; Allocation tracking
(let [[allocations result] (with-allocation-tracing
                             (your-code-here))]
  (let [thread-allocs (filter (allocation-on-thread?) allocations)]
    (allocations-summary thread-allocs)))

;; Call tracing
(let [[call-tree result] (with-call-tracing
                           (your-code-here))]
  ;; call-tree is a nested map with :class, :method, :call-count, :children
  call-tree)
```
For more details, see the README in projects/agent/.

Low-level interface to the Criterium native agent for allocation tracking.

This namespace provides the core implementation for interacting with the agent that tracks JVM heap allocations. It manages agent state, handles allocation records, and provides primitives for the high-level API.

Key Components:
- Native Agent Commands: Protocol for controlling agent behavior
- State Management: Track and validate agent state transitions
- Allocation Recording: Capture and store allocation events
- Data Processing: Transform raw allocation data into usable records

Implementation Notes:
- Uses wrapper namespace for zero-allocation Agent access
- Manages thread-local and global agent state
- Optimized for minimal allocation overhead during tracing
- Handles concurrent access to shared state
- Safe to load even when agent classes are not available

WARNING: The agent must be loaded before calling most functions in this namespace. Use criterium.agent.runtime/load-agent! or the -agentpath JVM argument.

This is an internal implementation namespace. Most users should use criterium.agent instead.
A non-native agent to access the Instrumentation interface.
Runtime extraction of bundled native agent binaries.

Extracts platform-specific agent binaries from JAR resources to a temporary directory for loading. Handles concurrent extraction safely using file locking and registers shutdown hooks for cleanup.

The extraction process:
1. Detects the current platform
2. Reads the SHA256 hash from the bundled .sha256 file
3. Checks if the agent is already extracted to the temp directory
4. If not, locks and extracts atomically
5. Sets appropriate file permissions (executable on Unix)
6. Verifies file permissions
7. Registers a cleanup hook on first extraction
8. Returns the absolute path to the extracted agent

Thread-safe and safe for concurrent JVM processes.
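The lock-and-extract step can be sketched as follows. This is a simplified illustration with hypothetical names (sha256-hex, extract-once!), not criterium's actual implementation:

```clojure
;; Minimal sketch of check-then-extract under an exclusive file lock.
;; All names here are illustrative, not criterium's API.
(import '[java.io File RandomAccessFile]
        '[java.nio.file Files]
        '[java.security MessageDigest])

(defn sha256-hex
  "Hex-encoded SHA-256 of a byte array."
  [^bytes bs]
  (let [digest (.digest (MessageDigest/getInstance "SHA-256") bs)]
    (apply str (map #(format "%02x" %) digest))))

(defn extract-once!
  "Write payload to target-file under an exclusive file lock, skipping
  the write when the file already holds bytes with the expected hash.
  Returns target-file."
  [^File target-file ^bytes payload expected-hash]
  (if (and (.exists target-file)
           (= expected-hash
              (sha256-hex (Files/readAllBytes (.toPath target-file)))))
    target-file
    (with-open [raf (RandomAccessFile. target-file "rw")]
      ;; the lock blocks other JVM processes extracting concurrently
      (let [lock (.lock (.getChannel raf))]
        (try
          (.write raf payload)
          (.setExecutable target-file true)
          target-file
          (finally (.release lock)))))))
```

A production version would re-check the hash after acquiring the lock, so a process that lost the race skips its own write.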
Platform detection for bundled native agent binaries.

Provides functions to detect the current operating system and architecture, mapping them to canonical platform identifiers used for locating bundled native libraries.

Supported platforms:
- linux-x64: Linux on x86-64 architecture
- macos-x64: macOS on x86-64 architecture
- macos-arm64: macOS on ARM64 architecture (Apple Silicon)

Returns nil for unsupported platforms to enable graceful degradation.
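Detection along these lines can be sketched from JVM system properties. This is a simplified illustration, not the namespace's actual code:

```clojure
;; Simplified platform detection from JVM system properties.
;; Returns nil for unsupported combinations, enabling graceful degradation.
(defn platform-id
  "Canonical platform keyword, or nil when unsupported."
  []
  (let [os   (.toLowerCase ^String (System/getProperty "os.name"))
        arch (.toLowerCase ^String (System/getProperty "os.arch"))]
    (cond
      (and (.contains os "linux") (#{"amd64" "x86_64"} arch)) :linux-x64
      (and (.contains os "mac")   (#{"amd64" "x86_64"} arch)) :macos-x64
      (and (.contains os "mac")   (= "aarch64" arch))         :macos-arm64)))
```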
Runtime agent loading and management without Agent class dependencies.

This namespace provides functions for loading the native agent at runtime without requiring the Agent classes to be available at compile time. Use this namespace to load the agent before requiring criterium.agent.core.

Key Functions:
- pid: Get current JVM process ID
- agent-path: Extract bundled agent to temp directory
- load-agent!: Programmatically load agent into running JVM
- loaded?: Check if agent is currently loaded

Example:

```clojure
;; Load agent at runtime
(require '[criterium.agent.runtime :as runtime])
(runtime/load-agent!)

;; Now safe to require core namespace
(require '[criterium.agent.core :as core])
```

The agent can also be loaded via the -agentpath JVM argument, in which case load-agent! will detect it's already loaded and skip loading.
Direct access wrappers for Agent class methods.
This namespace imports criterium.agent.Agent and provides Clojure functions that compile to direct field access and static method calls, avoiding reflection.
This namespace will fail to load if the Agent class is not available.
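The underlying pattern is ordinary Clojure type hinting. A sketch, using Math in place of the Agent class:

```clojure
;; Type hints make the compiler emit a direct static call instead of
;; reflective dispatch. Math/abs stands in for Agent's static methods.
(set! *warn-on-reflection* true)

(defn abs-direct
  "Direct (non-reflective) call to Math/abs."
  ^double [^double x]
  (Math/abs x))
```

Without the ^double hint on x, the compiler cannot pick among the Math/abs overloads at compile time and falls back to reflection.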
Allocation tracing for benchmarks.
Provides a wrapper around the native agent's allocation tracing that produces allocation trace maps conforming to :criterium/allocation-trace.
This namespace bridges the low-level agent API with the criterium pipeline architecture, capturing thread context and timing metadata alongside raw allocation records.
Analysis functions for allocation trace data.
Provides functions for summarizing allocations, detecting hotspots, and grouping by object type. Functions follow the pipeline pattern where each takes options and returns a transformer function that operates on a data-map.
Analysis functions for call graph data from method tracing.
Provides analysis functions that operate on call tree data collected via the method tracing agent.
Analysis methods for t-digest compressed sample data.
Unlike metrics-samples which stores raw sample values, digest-samples uses t-digest compression, which preserves quantile accuracy but loses individual sample identity. This affects outlier detection: medcouple cannot be computed from digest centroids, so this module uses standard symmetric boxplot thresholds instead of the adjusted boxplot method. For skewed distributions, consider using full sample collection if accurate outlier classification is important.
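The symmetric fences mentioned above come straight from the quartiles, which a t-digest can supply accurately. A small sketch:

```clojure
;; Symmetric boxplot outlier fences from Q1/Q3 at k IQRs
;; (k = 1.5 is the conventional threshold).
(defn boxplot-fences
  "Return [lower upper] outlier fences for quartiles q1 and q3."
  ([q1 q3] (boxplot-fences q1 q3 1.5))
  ([q1 q3 k]
   (let [iqr (- q3 q1)]
     [(- q1 (* k iqr)) (+ q3 (* k iqr))])))

(boxplot-fences 10.0 20.0)  ; => [-5.0 35.0]
```

The medcouple-adjusted method would instead scale the two fences asymmetrically by a skewness estimate, which requires the individual samples the digest has discarded.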
Analysis dispatch based on sample-map type.
Argument generation using test.check generators.
Provides macros for creating benchmarks with generated arguments and
for creating zero-arg functions that generate varied inputs.
Primary use cases:
- `measured`: Create a Measured with generated arguments for benchmarking
- `args-fn`: Create a warmup-args-fn for use with :warmup-args-fn option
The `args-fn` macro is particularly useful for creating warmup functions
that generate varied inputs, enabling more representative JIT optimization
during the warmup phase.
Example:

```clojure
(require '[criterium.arg-gen :as arg-gen]
         '[criterium.bench :refer [bench]]
         '[clojure.test.check.generators :as gen])

;; Use varied warmup inputs for better JIT optimization
(let [coll (vec (range 1000))]
  (bench (sort coll)
         :warmup-args-fn (arg-gen/args-fn {:size 200}
                           [v (gen/vector gen/small-integer)]
                           [v])))
```

Typed array collections using primitive arrays to avoid boxing.

Provides wrappers around primitive arrays (double-array, long-array) that can be used for efficient sample storage during benchmarking. Three types are provided for the three metric types:
- DoubleArray for :quantitative metrics
- LongArray for :event metrics
- ObjectArray for :nominal metrics
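A sketch of why primitive arrays matter here: areduce over a double-array sums elements without boxing any of them.

```clojure
;; Summing a double-array with areduce: the index, accumulator, and
;; elements all stay primitive, so no per-element boxing occurs.
(defn sum-doubles
  ^double [^doubles xs]
  (areduce xs i acc 0.0 (+ acc (aget xs i))))

(sum-doubles (double-array [1.0 2.0 3.0]))  ; => 6.0
```

The same fold over a vector of boxed Doubles would allocate on every step, which is exactly the garbage the sample-storage wrappers are designed to avoid.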
Interfaces for typed array collections with primitive support.

These interfaces define the contract for typed array wrappers that avoid boxing overhead when working with primitive arrays.

Interface hierarchy:
- ITypedArray: basic array metadata (element type, length)
- IDoubleArray/ILongArray: marker interfaces extending ITypedArray
- IResizable: resize operations for fixed-capacity arrays
- IDoubleFill/ILongFill/IObjectFill: fill operations
- IFold: generic object-returning fold
- IDoubleFold/ILongFold: primitive-in, primitive-out folds
- IDoubleObjectFold/ILongObjectFold: primitive-in, object-out folds
- IIndexed: indexed access to elements
- IArrayOps: type-specific operations (sum, getAt)
Resizable array types for algorithms where size isn't known upfront.

Provides mutable-size wrappers around primitive arrays that can be resized down (but not up) from their initial capacity. Use cases include algorithms like Knuth histogram binning where the final array size depends on the data.

Three types are provided:
- ResizableDoubleArray for double values
- ResizableLongArray for long values
- ResizableObjectArray for arbitrary objects

All operations respect the current size (not capacity). Map/filter operations return fixed arrays (DoubleArray, LongArray, ObjectArray).
Utilities for defining typed array interfaces.
Provides definterface+, an extension of Clojure's definterface that
supports interface inheritance via gen-interface's :extends option.
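The underlying mechanism can be sketched directly with gen-interface; the interface names below are illustrative, not the library's:

```clojure
;; gen-interface's :extends option creates an interface hierarchy at
;; runtime; definterface+ builds on this. Names here are illustrative.
(gen-interface
 :name example.ITypedArray
 :methods [[length [] long]])

;; A marker interface extending the base interface.
(gen-interface
 :name example.IDoubleArray
 :extends [example.ITypedArray])
```

Plain definterface offers no way to express the :extends relationship, which is why a wrapper macro is needed for the hierarchy listed above.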
Perform sound benchmarking of Clojure code.

Provides functions and macros for measuring code performance while accounting for:
- JVM warmup periods
- Garbage collection effects
- Statistical significance

Primary API:
- bench - Macro for benchmarking expressions
- bench-measured - Function for benchmarking pre-wrapped measurements
- last-bench - Access results from most recent benchmark
- set-default-viewer! - Set default output viewer
- default-viewer - Get current default viewer

Example:

```clojure
(bench (+ 1 1))                    ; Basic usage
(bench (+ 1 1) :viewer :pprint)    ; With pretty-printed output
(set-default-viewer! :kindly)      ; Set default for all bench calls
```
Provide pre-configured benchmark definitions.
Internal implementation details for criterium.bench namespace. Not intended for direct use by consumers of the library.
Namespace for composing and executing benchmarks from declarative specs. Provides functionality to construct benchmark functions from analysis and view configurations.
Blackhole for preventing dead code elimination (DCE) in benchmarks.
Two modes are supported:

- Compiler Blackhole (JVM 17+): Uses
  -XX:CompileCommand=blackhole,criterium.blackhole.Blackhole::consume to mark
  consume methods as compiler blackholes. Zero overhead.
- Runtime Blackhole (JVM < 17 or flag not present): Uses volatile fields and
  XOR-based impossible conditions. ~3ns overhead.

Mode is detected at namespace load time. Use mode to check which is active.
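The runtime mode can be illustrated with a simplified Clojure analogue of the JMH-style trick; this is a sketch, not the actual Blackhole class:

```clojure
;; A volatile sink guarded by a condition that can never hold in
;; practice: t1 and (bit-xor t1 -1) are never equal, but because both
;; fields are volatile the JIT cannot prove the branch dead, so the
;; consumed value must be kept alive.
(definterface IBlackhole
  (consume [v]))

(deftype RuntimeBlackhole [^:volatile-mutable obj-sink
                           ^:volatile-mutable ^long t1
                           ^:volatile-mutable ^long t2]
  IBlackhole
  (consume [_ v]
    (when (== t1 t2)
      (set! obj-sink v))
    nil))

(defn make-blackhole
  "Construct a blackhole whose guard fields are guaranteed unequal."
  []
  (let [r (long (rand-int Integer/MAX_VALUE))]
    (RuntimeBlackhole. nil r (bit-xor r -1))))
```

Used as (.consume bh result) at the end of each measured iteration; the volatile reads are the source of the ~3ns overhead the compiler mode avoids.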
Primary API for call tracing with call graph display.

Provides functions and macros for tracing method calls during expression evaluation and displaying the resulting call graph.

This is a standalone tracing feature (not integrated with benchmark collection), providing visibility into what code paths are exercised.

Primary API:
- bench - Macro for tracing expressions and displaying call graphs
- last-bench - Access results from most recent trace
- set-default-viewer! - Set default output viewer
- default-viewer - Get current default viewer

Filter Functions (re-exported from criterium.agent):
- filter-call-tree - Apply filters to call tree data
- jdk-filter - Predefined filter excluding JDK packages
- clojure-core-boundary-filter - Filter stopping at clojure.core boundary

Example:

```clojure
(bench (my-function args))                  ; Basic usage
(bench (my-function args) :viewer :portal)  ; With Portal visualization

(let [results (last-bench)]
  (filter-call-tree (:call-tree (:data results)) jdk-filter))
```
Pre-configured call graph analysis plans.
Call graph plans specify how to analyze and view call tracing results. Each plan is a map with:
:analyse - Vector of analysis specs to invoke
:view - Vector of view specs to invoke

Analysis specs are keywords that resolve to functions in criterium.analyse:
:most-called - Aggregate methods by total call count

View specs are keywords that resolve to multimethods in criterium.view:
:call-tree - ASCII tree or Vega hierarchical tree diagram
:call-flame - Flame chart where width represents call count
:most-called - Table or bar chart of most frequently called methods
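A plan in this shape might look like the following; the specific combination is illustrative:

```clojure
;; A hypothetical call graph plan: aggregate call counts, then render
;; the call tree and a most-called view.
(def most-called-plan
  {:analyse [:most-called]
   :view    [:call-tree :most-called]})
```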
Collect samples using a metrics collector.
Collection plan to control the collection of metrics from a measured.
Metrics collector.

A metrics collector collects metrics associated with executing a measured.

A metrics collector is a pipeline with two stages. It collects metrics into an array with an element for each metric, without creating any allocation garbage. The array is then transformed into a map, keyed by metric id.

The collect-array function takes a measured, a measured state, and an eval count. It returns an array of sample data. The array is allocated once, and all objects allocated during sampling are recorded in the array, in order to make the sample phase garbage free.

The pipeline transform takes the sample array, and returns a sample map. The transform is free to create garbage.

A pipeline is specified via keywords, which specify sample metrics to be collected, and a pipeline terminal function, which is responsible for actually calling the measured.

Each sample function can collect data before and after the measured's execution.
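The two-stage shape can be sketched as follows. Names are illustrative, and unlike the real collector this sketch boxes the elapsed long when storing it, so it is not actually allocation-free:

```clojure
;; Stage 1: fill a pre-allocated slot array during sampling.
;; Stage 2: transform the raw array into a map keyed by metric id.
(defn collect-array
  "Run f once, storing elapsed nanoseconds in slot 0. Returns slots."
  [^objects slots f]
  (let [t0 (System/nanoTime)]
    (f)
    (aset slots 0 (- (System/nanoTime) t0))
    slots))

(defn transform
  "Turn the raw slot array into a keyed sample map; may allocate freely."
  [^objects slots]
  {:elapsed-time-ns (aget slots 0)})

(def sample
  (let [slots (object-array 1)]
    (transform (collect-array slots #(reduce + (range 1000))))))
```

Separating the stages is what lets the sampling loop stay garbage-free while the map construction, which allocates, happens only after timing ends.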
Provide pre-configured collector configs
A pipeline function takes a sample, a measured state, and a measured, calls the next pipeline function and returns an updated sample state. It is usually called via the execute function.
A pipeline function can be composed with other pipeline functions and a pipeline terminal function, which is responsible for actually calling the measured.
Each pipeline function collects one or more metrics around the measured's invocation.
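The composition can be sketched as follows, with illustrative names and signatures rather than criterium's actual ones:

```clojure
;; A pipeline function wraps the next stage; the terminal stage
;; actually invokes the measured.
(defn with-elapsed
  "Stage that records elapsed nanoseconds around next-fn."
  [next-fn]
  (fn [sample state measured]
    (let [t0     (System/nanoTime)
          sample (next-fn sample state measured)]
      (assoc sample :elapsed-ns (- (System/nanoTime) t0)))))

(defn terminal
  "Terminal stage: call the measured with its state."
  [sample state measured]
  (assoc sample :result (measured state)))

(def pipeline (with-elapsed terminal))

(def out (pipeline {} [2 3] (fn [[a b]] (* a b))))
;; out contains :result 6 plus an :elapsed-ns measurement
```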
Criterium measures the computation time of an expression. It is designed to address some of the pitfalls of benchmarking, and benchmarking on the JVM in particular.

This includes:
- statistical processing of multiple evaluations
- inclusion of a warm-up period, designed to allow the JIT compiler to optimise its code
- purging of gc before testing, to isolate timings from GC state prior to testing
- a final forced GC after testing to estimate impact of cleanup on the timing results

Usage:

```clojure
(use 'criterium.core)
(bench (Thread/sleep 1000) :verbose)
(with-progress-reporting (bench (Thread/sleep 1000) :verbose))
(report-result (benchmark (Thread/sleep 1000)) :verbose)
(report-result (quick-bench (Thread/sleep 1000)))
```

References:
See http://www.ellipticgroup.com/html/benchmarkingArticle.html for a Java benchmarking library. The accompanying article describes many of the JVM benchmarking pitfalls.

See http://hackage.haskell.org/package/criterion for a Haskell benchmarking library that applies many of the same statistical techniques.
Abstraction for working with multiple related benchmark runs.
A domain is an immutable collection of benchmark runs indexed by coordinates.
It enables analysis of performance behaviour across a parameter space rather
than at a single point.
Supports:
- Scaling behaviour analysis (e.g., O(N)) with varying argument values
- Identifying parts of argument space with different metric behaviours
- Comparing different implementation behaviours
- Tracking behaviour over time (in-memory session)
Example domain structure:
{:type :criterium/domain
:runs [{:coord {:n 100} :data <bench-result>}
{:coord {:n 1000} :data <bench-result>}
{:coord {:n 100 :impl :foo} :data <bench-result>}]}
See also:
- criterium.domain.analysis for extract, compare-by, group-by-axis,
fit-complexity
- criterium.domain.builder for domain-builder and input generators

Pre-configured domain analysis plans.

Domain plans specify how to analyze and view benchmark results across a domain of runs. Each plan is a map with:

:analyse - Vector of analysis specs resolved from criterium.domain
:view - Vector of view specs resolved from criterium.view
:viewer - Keyword specifying output format (:print, :portal, :none)

Analysis specs are either keywords or vectors of [keyword options-map]. View specs follow the same pattern.
Analysis functions for domain data.
Provides functions for extracting metrics, comparing across dimensions, fitting complexity models, and composable analysis pipelines.
Domain builder for automated benchmark collection across parameter spaces.
Provides functions for building domains by running benchmarks with varying inputs, plus input sequence generators for scaling analysis.
Core domain types and operations.
A domain is an immutable collection of benchmark runs indexed by coordinates. This namespace provides the foundational data structures used by criterium.domain.builder and criterium.domain.analysis.
Instrumentation facilities for collecting performance samples from functions.
This namespace provides tools for measuring function performance
during normal execution, outside of criterium's direct control. It
works by wrapping functions with instrumentation code that collects
timing data while preserving the original function behavior.
Key features:
- Non-intrusive function wrapping that maintains original behavior
- Automatic sample collection during function execution
- Safe metadata management for storing/restoring original functions
- Integration with criterium's analysis pipeline
Example usage:
```clojure
(with-instrumentation [my-fn collector-config]
(some-code
(my-fn args)))
```
The instrumentation can also be manually controlled using
instrument!/uninstrument! for more fine-grained control over the
scope which is sampled.

First-class function instrumentation for performance sampling.

Provides functionality for wrapping functions with instrumentation code that collects performance data during execution. The instrumented functions are first-class objects that maintain their own sample collection state.
JVM monitoring and management interface.

Provides zero-garbage access to JVM metrics and controls via JMX management beans.

Core capabilities include:
- Time Management
- Memory Management
- Thread Management
- JMX Bean Access

Key design principles:
- Zero garbage sampling methods for performance measurement
- Thread-safe monitoring capabilities
- Consistent snapshot semantics
- High-precision timing functions

Performance characteristics:
- Sampling functions avoid allocation
- Low-overhead monitoring options
- Batch collection capabilities

Usage notes:
- Use -sample variants for time series collection
- Monitor allocation in performance-sensitive code
- Verify timing precision requirements
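The standard management beans these wrappers build on are available directly from java.lang.management; a sketch:

```clojure
;; Direct access to the standard JMX beans. Note that getHeapMemoryUsage
;; allocates a MemoryUsage snapshot, so a truly zero-garbage sampler
;; needs more care than this sketch.
(import '[java.lang.management ManagementFactory ThreadMXBean MemoryMXBean])

(def ^ThreadMXBean thread-bean (ManagementFactory/getThreadMXBean))
(def ^MemoryMXBean memory-bean (ManagementFactory/getMemoryMXBean))

(defn current-thread-cpu-ns
  "CPU time consumed by the calling thread, in ns (-1 if unsupported)."
  ^long []
  (.getCurrentThreadCpuTime thread-bean))

(defn heap-used-bytes
  "Bytes currently used in the heap."
  ^long []
  (.getUsed (.getHeapMemoryUsage memory-bean)))
```

Caching the beans in vars, as above, avoids repeating the ManagementFactory lookup on every sample.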
Implements the concept of a measured function for benchmarking.
Criterium's metric collection works on a Measured instance. A Measured represents a benchmarkable unit of code, consisting of:
- A function to execute and measure
- An arguments generator to prevent constant folding
- Optional symbolic representation for debugging
- Optional warmup arguments generator for JIT optimization
The Measured implements a timed, batch invocation interface that:
- Supports multiple evaluations per timing sample for fast expressions
- Guarantees zero garbage allocation during measurement
- Prevents constant folding optimization of inputs
Warmup Customization: Functions may have different complexities based on their inputs. If warmup always uses the same arguments, JIT may over-specialize for those inputs. The warmup-args-fn field enables using varied inputs during warmup for more representative JIT optimization.
Priority rule for warmup arguments:
1. The bench macro's :warmup-args-fn option (baked into Measured at compile time)
2. Measured-level warmup-args-fn (from measured constructor or with-warmup-args-fn)
3. Fall back to regular args-fn
While Criterium automatically creates Measured instances for expressions, you can also construct custom ones for special measurement needs.
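The priority rule for warmup arguments can be sketched as a simple fallback chain. This is illustrative only: the field names and accessors below are assumptions based on this docstring, not the library's actual internals.

```clojure
;; Illustrative sketch of the warmup-argument selection order.
;; Field names (:warmup-args-fn, :args-fn) are assumptions, not verified API.
(defn select-warmup-args-fn
  [measured bench-warmup-args-fn]
  (or bench-warmup-args-fn          ;; 1. bench macro's :warmup-args-fn option
      (:warmup-args-fn measured)    ;; 2. Measured-level warmup-args-fn
      (:args-fn measured)))         ;; 3. fall back to the regular args-fn
```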
Functions for working with metric configurations and definitions.
A metric represents a measurable value that can be collected during benchmarking. Each metric is described by a configuration map with the following structure:
{:type keyword ; The type of metric (e.g., :timing, :memory)
 :name string ; Human readable name of the metric
 :values [...] ; Collection of metric value configurations}
Metrics can be organized in groups using a metrics configuration map:
{:group-name {:values [...]} ; Direct metric values
 :other-group {:groups {:subgroup {:values [...]}}} ; Nested metric groups}
This namespace provides functions for querying and filtering metric configurations. It supports both flat and hierarchical metric organization structures.
Public API for the optimisation component.
Provides numerical optimisation algorithms:
- Linear regression for fitting linear models to data
Linear regression algorithms.
Platform characterisation
Primitive-typed versions of core Clojure functions.
These functions are properly type-hinted to work with .invokePrim, avoiding boxing overhead when passed to higher-order primitive functions like fold-double, fold-long, dmap, and lmap.
Use these instead of inline lambdas when the operation matches a standard function (add, min, max, etc.).
Naming convention: d-prefix for double, l-prefix for long.
Public API for the random component.
Provides pseudo-random number generation:
- WELL RNG 1024a algorithm for high-quality uniform random doubles
- Ziggurat algorithm for normal (Gaussian) random variates
Primary API (mutable RNGs for performance):
- `make-well-rng-1024a` - Create a uniform RNG
- `next-double!` - Generate next uniform random double in [0,1)
- `make-normal-rng` - Create a normal/Gaussian RNG
- `next-gaussian!` - Generate next standard normal variate
References:
- WELL RNG: Improved Long-Period Generators Based on Linear Recurrences Modulo 2, F. Panneton, P. L'Ecuyer and M. Matsumoto http://www.iro.umontreal.ca/~panneton/WELLRNG.html
- Ziggurat: An improved Ziggurat method to generate normal random samples, Doornik, 2005
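A minimal usage sketch of the primary API listed above. The zero-argument constructors are an assumption; the actual arities are not shown in this docstring.

```clojure
;; Sketch only: zero-argument constructors are an assumption.
(let [rng  (make-well-rng-1024a)   ;; uniform WELL 1024a RNG
      nrng (make-normal-rng)]      ;; Ziggurat-based normal RNG
  [(next-double! rng)              ;; uniform double in [0,1)
   (next-gaussian! nrng)])         ;; standard normal variate
```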
Ziggurat algorithm for generating normal random variates.
See: An improved Ziggurat method to generate normal random samples, Doornik, 2005
First-class function instrumentation for aggregated sampling.
Provides functionality for wrapping functions with instrumentation code that collects performance data during execution. The instrumented functions are first-class objects that maintain their own sample collection state.
Protocol and utilities for working with performance sampling state.
Provides a standard interface for components that collect and store performance metrics samples during execution.
Autocorrelation function (ACF) computation and related statistics.
Provides FFT-based ACF computation for detecting sample non-independence in benchmark results, along with derived statistics for quantifying the impact on statistical reliability.
Main functions:
- `acf` - Compute autocorrelation coefficients for all lags
- `ljung-box` - Ljung-Box Q statistic and p-value for independence testing
- `effective-sample-size` - Adjusted sample size accounting for autocorrelation
- `ci-inflation-factor` - Factor to widen confidence intervals
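For intuition, the standard effective-sample-size adjustment divides n by a factor built from the autocorrelation coefficients. A minimal sketch follows; the library's exact truncation and cutoff rules for the ACF sum are assumptions.

```clojure
;; Conceptual ESS formula: n / (1 + 2 * sum of positive-lag autocorrelations).
;; Not the library implementation; truncation of the ACF sum is an assumption.
(defn ess-sketch [n acf-coeffs]
  (/ n (+ 1.0 (* 2.0 (reduce + acf-coeffs)))))
```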
Bootstrap resampling and confidence interval estimation.
Provides core bootstrap algorithms for statistical inference:
- bootstrap-sample: Resampling with replacement
- bootstrap-estimate: Mean, variance and confidence intervals
- jacknife: Leave-one-out resampling
- bca-nonparametric: Bias-corrected and accelerated bootstrap
- bootstrap-bca: Bootstrap with BCa confidence intervals
References:
- Efron, B., & Tibshirani, R. J. (1993). An introduction to the bootstrap.
- http://lib.stat.cmu.edu/S/bootstrap.funs
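Resampling with replacement, the operation performed by bootstrap-sample, can be illustrated independently of the library API. The helper below is hypothetical, not the library's function.

```clojure
;; Hypothetical illustration of resampling with replacement.
;; rand-index is any function returning a random int below its argument.
(defn resample-with-replacement [xs rand-index]
  (mapv (fn [_] (nth xs (rand-index (count xs)))) xs))

;; e.g. (resample-with-replacement [1 2 3 4] rand-int)
```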
Chi-squared distribution functions.
The chi-squared distribution with k degrees of freedom is the distribution of a sum of squares of k independent standard normal random variables. It is a special case of the gamma distribution with shape = k/2 and scale = 2.
Core statistical functions: min, max, mean, sum, variance, median, quartiles, quantile.
All functions require typed arrays (ITypedArray) as input. Primitive-optimized implementations avoid boxing overhead.
Pure Clojure radix-2 Cooley-Tukey FFT implementation.
Provides O(n log n) Fast Fourier Transform for autocorrelation computation. Uses interleaved complex representation [re0 im0 re1 im1 ...] for cache efficiency. All operations use primitive double arrays with zero garbage allocation during transform execution.
Main functions:
- `fft!` / `fft` - Forward FFT (in-place / copying)
- `ifft!` / `ifft` - Inverse FFT (in-place / copying)
- `next-power-of-2` - Find smallest power of 2 >= n
- `zero-pad-real` - Zero-pad real signal to power-of-2 length
Complex arrays use interleaved format: [re0 im0 re1 im1 ...] Array length is 2*n where n is the number of complex samples.
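The interleaved layout described above can be built from a real signal like this. This helper is hypothetical, for illustration only, and is not part of the library API.

```clojure
;; Build the interleaved [re0 im0 re1 im1 ...] layout from a real signal.
;; Hypothetical helper for illustration; not a library function.
(defn real->interleaved [^doubles xs]
  (let [n   (alength xs)
        out (double-array (* 2 n))]       ;; length 2*n, imaginary parts 0.0
    (dotimes [i n]
      (aset out (* 2 i) (aget xs i)))     ;; real part at even indices
    out))
```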
Histogram computation utilities with multiple binning methods.
Supports:
- :freedman-diaconis (default) - Uses IQR-based bin width calculation
- :knuth - Bayesian optimal bin count selection
All functions require typed arrays (DoubleArray, LongArray).
Kernel Density Estimation utilities.
Provides ISJ (Improved Sheather-Jones) bandwidth selection, Gaussian kernel density estimation, bootstrap confidence bands, and mode finding.
All functions require typed arrays (DoubleArray, LongArray).
Kernel functions for density estimation.
Provides kernel weight functions and basic kernel density estimators for modal estimation and bandwidth selection.
Knuth's Bayesian histogram binning algorithm.
Implements optimal bin count selection by maximizing a log-posterior based on Knuth (2019) DOI: 10.1016/j.dsp.2019.102581
The algorithm finds the optimal number of equal-width bins M by maximizing: F(M|x,I) = n·log(M) + logΓ(M/2) - M·logΓ(1/2) - logΓ((2n+M)/2) + Σₖ₌₁ᴹ logΓ(nₖ + 1/2)
where n = sample count, nₖ = count in bin k.
All functions require typed arrays (DoubleArray, LongArray).
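The log-posterior above transcribes directly into code. A sketch follows, taking log-gamma as a parameter rather than assuming a specific var (the library's log-gamma lives in its probability functions, but the exact var name is an assumption).

```clojure
;; Direct transcription of F(M|x,I) from the formula above.
;; log-gamma is passed in rather than assuming a specific library var.
(defn knuth-log-posterior [n bin-counts m log-gamma]
  (+ (* n (Math/log m))
     (log-gamma (/ m 2.0))
     (- (* m (log-gamma 0.5)))
     (- (log-gamma (/ (+ (* 2 n) m) 2.0)))
     (reduce + (map #(log-gamma (+ % 0.5)) bin-counts))))
```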
Maximum Likelihood Estimation for statistical distributions.
Provides MLE fitting functions that return both parameter estimates and log-likelihood values for model comparison via AIC/BIC.
Distributions supported:
- Gamma: Minka's fast fixed-point approximation for shape
- Log-normal: Closed-form MLE
- Inverse Gaussian: Closed-form MLE
- Weibull: Newton-Raphson iteration for shape
All functions return maps with :params and :log-likelihood keys. All functions require typed arrays (DoubleArray, LongArray).
Moment-based parameter estimation and distribution suitability screening.
Provides method-of-moments initial parameter estimates for distributions and a prefilter to screen out distributions that are unsuitable for a given dataset based on sample statistics.
This is used before MLE fitting to quickly eliminate distributions where moment-based estimates yield invalid parameters (e.g., negative shape).
Outlier detection using boxplot thresholds.
Provides both standard symmetric boxplot and adjusted boxplot for skewed distributions using the medcouple statistic.
All functions require typed arrays (ITypedArray) as input.
Probability functions: log-gamma, error function, normal distribution, and common statistical distributions (gamma, weibull, lognormal, inverse-gaussian).
Sampling utilities: sample functions, confidence intervals.
All sampling functions take mutable uniform RNGs and call next-double! to generate random values.
T-digest streaming quantile estimation. Provides a wrapper API over the merging-digest implementation.
Implementation of the t-digest algorithm for streaming quantile estimation. Based on the MergingDigest variant from https://github.com/tdunning/t-digest
Scale functions for t-digest algorithm. These control how cluster sizes are determined and affect accuracy in different ways.
Tail statistics for extreme value analysis.
Provides functions for analyzing distribution tails, including:
- Hill estimator for tail index estimation
- Generalized Pareto Distribution (GPD) fitting and functions
- Mean residual life for threshold selection
- Tail ratios from percentiles
All functions requiring sample data accept typed arrays (ITypedArray).
References:
- Hill (1975), A Simple General Approach to Inference About the Tail of a Distribution
- Grimshaw (1993), Computing Maximum Likelihood Estimates for the GPD
- Coles (2001), An Introduction to Statistical Modeling of Extreme Values
Implementation types for primitive transducers.
Interfaces for primitive transducer operations.
Interface hierarchy:
- ILLLReducible: reduce with primitive long accumulator
- IDDDReducible: reduce with primitive double accumulator
- IOLOReducible: reduce long elements into array
- IODOReducible: reduce double elements into array
- ILongReducible: extends ILLLReducible, IOLOReducible
- IDoubleReducible: extends IDDDReducible, IODOReducible
- IPrimOps: transduce/reduce/into/range operations
No vars found in this namespace.
Provide a trigger for collecting elapsed time samples between trigger events.
The trigger maintains internal state about when it was last triggered and collects samples of elapsed time between trigger events.
Typical usage:
```clojure
(let [t (trigger)]
  (fire! t)           ;; Start timing
  (do-something)
  (fire! t)           ;; Record elapsed time
  (do-something-else)
  (fire! t)           ;; Record another sample
  (let [samples (sampler/samples-map t)] ;; Get samples and reset
    (analyze-samples samples)))
```
Blackhole wrapper for preventing dead code elimination. Delegates to criterium.blackhole.
Bootstrap statistics for criterium.
Core bootstrap algorithms are provided by criterium.stats.bootstrap. This namespace provides criterium-specific integration with metrics and collect plans.
Metric formatters
Control flow macros.
This namespace delegates to criterium.utils.interface for the core implementation and is retained for backward compatibility.
Criterium domain helpers and backward-compatible re-exports from utils.
Assertion macros inspired by truss.
This namespace delegates to criterium.utils.interface for the core implementation and is retained for backward compatibility.
Re-exports t-digest functionality from stats component for backward compatibility.
Re-exports merging-digest functions from stats component for backward compatibility.
Re-exports scale functions from stats component for backward compatibility.
Generic utility functions.
Public API for the utils component.
Provides generic utilities including:
- Assertion macros (have, have?)
- Control flow macros (cond*)
- Math utilities (sqr, sqrd, cubed, trunc)
- Collection utilities (update-vals, filter-map, deep-merge)
- Tree walking (walk, postwalk)
- Debugging (spy, report)
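A brief usage sketch of the truss-style assertion macro; the exact arity and behaviour here are assumptions based on truss conventions, not verified against this library.

```clojure
;; Sketch: truss-style `have` returns its value or throws on failure.
;; Exact arity/behaviour in this library is assumed, not verified.
(defn mean-of [xs]
  (let [xs (have seq xs)]          ;; assert non-empty, keep the value
    (/ (reduce + xs) (count xs))))
```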
Assertion macros inspired by truss.
Text viewer for call tree data from method tracing.
Renders call trees as ASCII trees with box-drawing characters, showing call counts and call count percentages.
Chart specifications for ACF (Autocorrelation Function) plots.
Provides:
- Vega-Lite point charts for graphical viewers (:portal, :kindly)
- ASCII bar chart rendering for text viewers (:print, :pprint)
Both chart types show autocorrelation coefficients for all lags, with severity-based coloring and threshold visualization.
Comparison chart functions for Vega-Lite charts.
Provides single-point bar/box charts and multi-point line charts for comparing implementations across benchmarks. Used by both domain extracts and comparison data structures.
Distribution overlay functions for Vega-Lite charts.
Provides KDE, PDF, and CDF visualization layers and complete chart specs for displaying distribution fits and comparing empirical vs theoretical distributions.
ASCII chart rendering for distribution analysis visualization.
Provides ASCII chart functions for:
- PDF (probability density function) line charts
- CDF (cumulative distribution function) step charts
- Q-Q (quantile-quantile) scatter plots
These functions are used by :print and :pprint viewers for terminal-based visualization of distribution fit results.
Profile visualization for call trees, treemaps, and most-called methods.
Provides Vega and Vega-Lite specs for:
- Allocation treemaps showing memory usage by type
- Call tree visualizations (hierarchical tree and flame chart)
- Most-called method bar charts
Q-Q plot visualization functions for Vega-Lite charts.
Provides Q-Q (quantile-quantile) plot generation for comparing sample distributions against fitted theoretical distributions. Q-Q plots show sample quantiles vs theoretical quantiles - points lying on the y=x diagonal indicate good fit.
Regression chart functions for Vega-Lite visualizations.
Provides scatter plots with fit lines, residual plots, and log-log regression charts for complexity analysis. Supports both single-implementation and multi-implementation comparison modes.
Sample visualization functions for Vega-Lite charts.
Provides scatter plots, histograms, event markers, and sample difference visualizations for criterium benchmark data.
Tail analysis visualization functions for Vega-Lite charts.
Provides chart generation for extreme value analysis including:
- Tail ratios table showing percentile ratios
- Hill plot showing tail index estimates across k values
- Mean residual life plot for threshold selection
- Zipf plot (complementary CDF on log-log scale)
- Q-Q plots comparing exceedances to exponential and GPD distributions
ASCII chart rendering for tail analysis visualization.
Provides ASCII chart functions for:
- Tail ratios bar chart
- Hill plot (tail index estimates vs k)
- MRL plot (mean residual life vs threshold)
- Zipf plot (complementary CDF on log-log scale)
- Exponential Q-Q plot
- GPD Q-Q plot
These functions are used by :print and :pprint viewers for terminal-based visualization of tail analysis results.
Shared helpers for common-charts sub-namespaces.
Provides utility functions used across multiple chart generation namespaces to avoid circular dependencies.
Allocation view helpers for formatting and rendering allocation data.
Provides functions for formatting call sites and object types, as well as ASCII treemap rendering for allocation visualization.
ASCII chart rendering for terminal-based visualization.
Provides LTTB (Largest Triangle Three Buckets) downsampling and ASCII line/scatter plot rendering for use in :print and :pprint viewers.
Main entry points:
- `lttb-downsample` - reduce points while preserving visual shape
- `render-chart` - render points as ASCII chart, returns vector of strings
Autocorrelation view helpers.
Provides functions for formatting autocorrelation data and iterating over metrics for display. Used by print and pprint viewers.
Bootstrap statistics view helpers.
Provides functions for formatting bootstrap estimates and building bootstrap statistics table rows.
Core utility functions for viewer data preparation.
This namespace provides foundational functions used across multiple viewer namespaces for formatting metrics, computing SI scaling, and preparing basic statistical data for display.
Common distribution formatting utilities for viewer implementations.
This namespace provides shared functions used by all viewers for formatting distribution fit results in tables and text output.
Domain comparison data preparation functions.
Provides functions to prepare domain-comparison data for various chart types including box plots, bar charts, and line charts, as well as table rendering.
Domain shape detection predicates for visualization strategy selection.
Provides functions to analyze the structure of domain extract and comparison data to determine the appropriate visualization strategy (box plot, line chart, or table).
Domain extract table preparation functions.
Provides functions to prepare domain-extract data for table rendering, including transposed tables for single-point multi-impl scenarios and grouped data tables.
Modal analysis view helpers.
Provides functions for formatting mode locations and iterating over multimodal metrics for display.
Domain regression data preparation functions.
Provides functions to prepare regression model data for table and chart rendering, including model fit data, log-log analysis, and an orchestration function for rendering regression views.
Shape statistics view helpers.
Provides functions for formatting and classifying shape statistics (skewness, kurtosis, CV) from bootstrap results.
Common tail analysis context extraction for portal and kindly viewers.
Provides shared data extraction that both graphical viewers need for tail analysis views including samples access for charts.
A viewer that outputs Kindly-annotated data structures for Clay notebooks.
Uses an accumulator pattern where view functions append Kindly-annotated values to an atom. The `flush-viewer` multimethod returns a `kind/fragment` combining all accumulated values.
No runtime dependency on scicloj/kindly - produces plain maps with appropriate `:kindly/kind` metadata.
Kindly viewer implementations for allocation profiling views.
Provides allocation-summary, allocation-hotspots, allocation-by-type, and allocation-treemap views that output Kindly-annotated tables and charts.
Kindly viewer for autocorrelation analysis results.
Provides view/* multimethod implementations for displaying autocorrelation diagnostics in Kindly notebooks, including ACF plots, classification tables, and effective sample size statistics.
Kindly viewer core functions for basic metrics display.
Provides Kindly-annotated output for:
- metrics, stats, extremes
- bootstrap statistics
- samples, histograms, KDE
- outlier counts and significance
- event stats (class loader, JIT, GC)
- quantiles, sample percentiles, sample diffs
- collect plan, OS, and runtime info
Uses an accumulator pattern where view functions append Kindly-annotated values to an atom. The `flush-viewer` multimethod returns a `kind/fragment` combining all accumulated values.
No runtime dependency on scicloj/kindly - produces plain maps with appropriate `:kindly/kind` metadata.
Distribution fit views for kindly viewer.
Contains views for:
- Distribution model comparison (AIC, BIC, goodness-of-fit tests)
- Parameter confidence intervals for best-fit models
- Distribution PDF, CDF, and Q-Q plot charts
Domain analysis views for Kindly viewer.
Provides views for:
- Domain extract tables and charts
- Domain grouped tables
- Domain comparison tables and charts
- Domain regression analysis with tables and charts
Modal analysis views for kindly viewer.
Contains multimodal distribution warning display.
Kindly viewer for shape statistics.
Displays skewness, kurtosis, and coefficient of variation (CV) for bootstrap results in a Kindly table.
Tail analysis views for kindly viewer.
Contains views for:
- Tail summary (GPD/Hill parameters)
- Tail ratios (p99/p95, p999/p99, p999/p95)
- High quantile estimates (GPD extrapolation)
- Chart views (tail ratio charts, Hill/MRL/Zipf plots, Q-Q plots)
A viewer that outputs to portal using tap>.
Core functionality (tap infrastructure, metrics, stats, extremes, bootstrap, samples, outliers, events, KDE) is in criterium.viewer.portal.core.
Domain analysis views (grouped, extract, comparison, regression, apply) are in criterium.viewer.portal.domain.
Allocation profiling views (summary, hotspots, by-type, treemap) are in criterium.viewer.portal.allocation.
Distribution fit views (models, parameter CIs, PDF, CDF, Q-Q charts) are in criterium.viewer.portal.distribution.
Tail analysis views (summary, ratios, high quantiles, charts) are in criterium.viewer.portal.tail.
Shape statistics views (skewness, kurtosis, CV) are in criterium.viewer.portal.shape.
Modal analysis views (multimodal warnings) are in criterium.viewer.portal.modal.
Autocorrelation analysis views (ACF plots, classification, ESS) are in criterium.viewer.portal.autocorrelation.
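The `tap>` plumbing this viewer builds on can be sketched with a stand-in handler; `capture` is hypothetical and takes the place of Portal's submit function, which would be registered the same way:

```clojure
;; Sketch of the tap> mechanism the Portal viewer relies on.
;; `capture` is a stand-in for Portal's submit function; with Portal on
;; the classpath you would register that via add-tap instead.
(def captured (atom []))

(defn capture [v] (swap! captured conj v))

(add-tap capture)                          ; route tap> values to capture
(tap> {:metrics {:elapsed-time-ns 1234}})  ; illustrative benchmark data
(Thread/sleep 200)                         ; tap> dispatches asynchronously
(remove-tap capture)
```

Registering and removing the handler around a benchmark run keeps the tap set clean for other tooling.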
Portal viewer functions for allocation profiling display.
Provides Portal output for:
- allocation summary (totals, counts, freed ratio)
- allocation hotspots (call sites with highest allocations)
- allocations by type (aggregated by object type)
- allocation treemap (Vega treemap visualization)
Portal viewer for autocorrelation analysis results.
Provides view/* multimethod implementations for displaying autocorrelation diagnostics in Portal, including ACF plots, classification tables, and effective sample size statistics.
Portal viewer core functions for basic metrics display.
Provides Portal output for:
- tap infrastructure (submit, flush)
- metrics, stats, extremes
- bootstrap statistics
- samples with outliers
- outlier counts and significance
- event stats (class loader, JIT, GC)
- histograms, KDE, quantiles
Distribution fit views for portal viewer.
Contains views for:
- Distribution model comparison (AIC, BIC, goodness-of-fit tests)
- Parameter confidence intervals for best-fit models
- Distribution PDF, CDF, and Q-Q plot charts
Portal viewer domain analysis views.
Provides Portal output for:
- domain-grouped views
- domain-extract tables and charts
- domain-comparison tables and charts
- domain-regression results
- domain-apply iteration
Modal analysis views for portal viewer.
Contains multimodal distribution warning display.
Portal viewer for shape statistics.
Displays skewness, kurtosis, and coefficient of variation (CV) for bootstrap results in a Portal table.
Tail analysis views for portal viewer.
Contains views for:
- Tail summary (GPD/Hill parameters)
- Tail ratios (p99/p95, p999/p99, p999/p95)
- High quantile estimates (GPD extrapolation)
- Chart views (tail ratio charts, Hill/MRL/Zipf plots, Q-Q plots)
A pretty print viewer
A print viewer
Core functionality (metrics, stats, extremes, bootstrap, samples, outliers, events, GC, OS, runtime) is in criterium.viewer.print.core.
Domain analysis (grouped, extract, comparison, regression, apply) is in criterium.viewer.print.domain.
Allocation profiling (summary, hotspots, by-type, treemap) is in criterium.viewer.print.allocation.
Distribution fit (models, parameter CIs) is in criterium.viewer.print.distribution.
Tail analysis (summary, ratios, high quantiles) is in criterium.viewer.print.tail.
Shape statistics (skewness, kurtosis, CV) is in criterium.viewer.print.shape.
Modal analysis (multimodal warnings) is in criterium.viewer.print.modal.
Autocorrelation analysis (lag analysis, effective sample size, classification) is in criterium.viewer.print.autocorrelation.
Print viewer functions for allocation profiling display.
Provides text output for:
- allocation summary (totals, counts, freed ratio)
- allocation hotspots (call sites with highest allocations)
- allocations by type (aggregated by object type)
- allocation treemap (ASCII tree visualization)
Print viewer for autocorrelation analysis.
Provides views for lag analysis, effective sample size, CI inflation factors, pattern classification, and ACF plots.
Print viewer core functions for basic metrics display.
Provides text output for:
- metrics, stats, extremes
- bootstrap statistics
- samples with outliers
- outlier counts and significance
- event stats (class loader, JIT, GC)
- final GC warnings
- OS and runtime info
- histograms, KDE, quantiles
Distribution fit views for print viewer.
Contains views for:
- Distribution model comparison (AIC, BIC, goodness-of-fit tests)
- Parameter confidence intervals for best-fit models
- ASCII chart views (PDF, CDF, Q-Q plots)
Print viewer domain analysis views.
Provides text output for:
- domain-grouped views
- domain-extract tables
- domain-comparison tables
- domain-regression results
- domain-apply iteration
Modal analysis views for print viewer.
Contains multimodal distribution warning display.
Print viewer for shape statistics.
Displays skewness, kurtosis, and coefficient of variation (CV) for bootstrap results.
Print viewer table formatting.
Provides generic table printing with box-drawing separators.
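Generic table printing with box-drawing separators can be sketched like this; `print-table` and `pad` are illustrative names, not the namespace's actual vars:

```clojure
(require '[clojure.string :as str])

;; Sketch of generic box-drawn table printing. Column widths are
;; computed from the widest cell in each column.
(defn- pad [s w] (format (str "%-" w "s") s))

(defn print-table
  "Print headers and rows as a table with box-drawing separators."
  [headers rows]
  (let [cells  (map #(map str %) (cons headers rows))
        widths (apply map (fn [& col] (apply max (map count col))) cells)
        rule   (fn [l m r]
                 (str l
                      (str/join m (map #(apply str (repeat (+ % 2) \─)) widths))
                      r))
        line   (fn [row]
                 (str "│ " (str/join " │ " (map pad row widths)) " │"))]
    (println (rule \┌ \┬ \┐))
    (println (line (first cells)))
    (println (rule \├ \┼ \┤))
    (doseq [row (rest cells)] (println (line row)))
    (println (rule \└ \┴ \┘))))
```

Usage: `(print-table ["name" "count"] [["alpha" 3] ["beta" 12]])` prints the two rows under a ruled header.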
Tail analysis views for print viewer.
Contains views for:
- Tail summary (GPD/Hill parameters)
- Tail ratios (p99/p95, p999/p99, p999/p95)
- High quantile estimates (GPD extrapolation)
- ASCII chart views (tail ratio charts, Hill/MRL/Zipf plots, Q-Q plots)