
org.soulspace.qclojure.ml.application.quantum-kernel

Production-ready quantum kernel methods for quantum machine learning.

Quantum kernels compute similarity measures between classical data points by encoding them into quantum states and measuring their overlap. This implementation provides hardware-compatible kernel computation using measurement-based approaches suitable for real quantum devices.

Key Features:

  • Hardware-compatible adjoint/fidelity circuits for overlap estimation
  • Support for multiple encoding strategies (angle, amplitude, basis, IQP)
  • Efficient kernel matrix computation using transients
  • Batched processing for large datasets
  • Integration with QClojure backend protocols
  • Production-ready error handling and validation

Algorithm:

  1. Encode classical data points into quantum states using feature maps
  2. Compute pairwise overlaps |⟨φ(x_i)|φ(x_j)⟩|² using adjoint/fidelity method
  3. Build kernel matrix for use with classical ML algorithms
  4. Support symmetric and asymmetric kernel computations

The adjoint method prepares |ψ⟩ = U_φ(x)|0⟩ then applies U†_φ(x') and measures P(|0⟩) = |⟨φ(x)|φ(x')⟩|², avoiding ancilla qubits and working correctly for feature-mapped superposition states (unlike SWAP test).
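For intuition, the adjoint method can be worked out by hand for one qubit with angle encoding U_φ(x) = Ry(x): applying Ry(x) and then Ry(-x') to |0⟩ leaves amplitude cos((x-x')/2) on |0⟩, so P(|0⟩) = cos²((x-x')/2) = |⟨φ(x)|φ(x')⟩|². A plain-Clojure sketch, no backend required:

```clojure
;; Analytic adjoint-method kernel value for a single qubit with Ry encoding:
;; Ry(-x') Ry(x) |0> has amplitude cos((x - x')/2) on |0>.
(defn angle-kernel-1q
  [x x']
  (let [d (/ (- x x') 2.0)]
    (* (Math/cos d) (Math/cos d))))

;; Identical points give kernel value 1; states a half-turn apart give ~0:
(angle-kernel-1q 0.3 0.3)     ;; => 1.0
(angle-kernel-1q 0.0 Math/PI) ;; => ~0.0
```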


analyze-kernel-matrix (clj)

(analyze-kernel-matrix kernel-matrix)

Analyze properties of computed kernel matrix.

Provides statistical analysis of the kernel matrix including:

  • Eigenvalue spectrum
  • Condition number
  • Symmetry verification
  • Positive semidefinite check

Parameters:

  • kernel-matrix: Computed kernel matrix

Returns: Map with analysis results
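Two of these checks (symmetry and, for fidelity kernels, a unit diagonal, since |⟨φ(x)|φ(x)⟩|² = 1) can be sketched in plain Clojure; the eigenvalue and PSD checks need a linear-algebra library:

```clojure
;; Symmetry check: K[i,j] must equal K[j,i] within tolerance.
(defn symmetric? [m tol]
  (every? (fn [[i j]]
            (< (Math/abs (- (get-in m [i j]) (get-in m [j i]))) tol))
          (for [i (range (count m)) j (range (count m))] [i j])))

;; Fidelity kernels have K[i,i] = 1 by construction.
(defn unit-diagonal? [m tol]
  (every? #(< (Math/abs (- 1.0 (get-in m [% %]))) tol)
          (range (count m))))

(symmetric? [[1.0 0.8] [0.8 1.0]] 1e-9)      ;; => true
(unit-diagonal? [[1.0 0.8] [0.8 1.0]] 1e-9)  ;; => true
```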


batch-kernel-computation (clj)

(batch-kernel-computation backend data-matrix config)
(batch-kernel-computation backend data-matrix config batch-size)

Compute kernel matrix using batched approach for memory efficiency.

For large datasets, this function computes the kernel matrix in batches to manage memory usage and provide progress monitoring.

Parameters:

  • backend: Quantum backend
  • data-matrix: Matrix of data vectors
  • config: Kernel configuration
  • batch-size: Number of kernel computations per batch (default: 100)

Returns: Complete kernel matrix computed in batches
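For n data points the symmetric kernel matrix needs n(n+1)/2 overlap evaluations; batching chunks these index pairs into fixed-size groups. A plain-Clojure sketch of the pair partitioning:

```clojure
;; Upper-triangle index pairs for an n-point kernel matrix.
(defn kernel-pairs [n]
  (for [i (range n) j (range i n)] [i j]))

;; Split the pairs into batches of at most batch-size computations.
(defn batches [n batch-size]
  (partition-all batch-size (kernel-pairs n)))

(count (kernel-pairs 5)) ;; => 15  (= 5*6/2)
(count (batches 5 4))    ;; => 4 batches
```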


calculate-trainable-parameter-count (clj)

(calculate-trainable-parameter-count num-qubits num-layers)

Calculate the number of trainable parameters needed for a parametrized feature map.

Parameters:

  • num-qubits: Number of qubits
  • num-layers: Number of trainable layers

Returns: Total number of trainable parameters
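The count presumably follows from the parametrized-feature-map architecture (two rotation gates, Ry and Rz, per qubit per layer). A hypothetical reimplementation under that assumption; the library's actual formula may differ:

```clojure
;; Assumed formula: 2 rotations (Ry, Rz) per qubit per trainable layer.
;; Illustrative only -- not the library's implementation.
(defn param-count [num-qubits num-layers]
  (* 2 num-qubits num-layers))

(param-count 4 2) ;; => 16
```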


compute-trainable-kernel-matrix (clj)

(compute-trainable-kernel-matrix backend data-matrix trainable-params config)

Compute quantum kernel matrix with trainable parameters.

Parameters:

  • backend: Quantum backend
  • data-matrix: Matrix of data vectors
  • trainable-params: Trainable parameters for feature map
  • config: Trainable kernel configuration

Returns: Kernel matrix computed with trainable feature maps


create-quantum-kernel (clj)

(create-quantum-kernel backend config)

Create a quantum kernel function for use with classical ML algorithms.

Returns a function that computes quantum kernel values between data points. This can be used as a drop-in replacement for classical kernels in ML pipelines.

Parameters:

  • backend: Quantum backend
  • config: Kernel configuration

Returns: Function (data-point1, data-point2) -> kernel-value
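The returned function has the same shape as any classical kernel. The sketch below illustrates that drop-in shape with a classical RBF stand-in for the quantum overlap (the real function dispatches circuits to the backend):

```clojure
;; Closure-returning-kernel pattern, with an RBF kernel standing in
;; for the quantum fidelity computation.
(defn make-kernel [gamma]
  (fn [x1 x2]
    (let [d2 (reduce + (map #(let [d (- %1 %2)] (* d d)) x1 x2))]
      (Math/exp (- (* gamma d2))))))

(def kernel-fn (make-kernel 0.5))
(kernel-fn [0.0 0.0] [0.0 0.0]) ;; => 1.0 (identical points)
```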


encode-data-for-kernel (clj)

(encode-data-for-kernel data-point encoding-type num-qubits options)

Encode classical data point using specified encoding strategy.

This function creates a quantum circuit that encodes a classical feature vector into a quantum state using one of the available encoding methods.

Parameters:

  • data-point: Classical feature vector
  • encoding-type: Type of encoding (:angle, :amplitude, :basis, :iqp)
  • num-qubits: Number of qubits for encoding
  • options: Encoding-specific options

Returns: Function that applies encoding to a circuit
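Angle encoding maps feature i to a rotation Ry(x_i) on qubit i, giving a product state with 2^n amplitudes. A plain-Clojure sketch of the resulting statevector:

```clojure
;; Ry(x)|0> = [cos(x/2), sin(x/2)] for a single qubit.
(defn ry-amplitudes [x]
  [(Math/cos (/ x 2.0)) (Math/sin (/ x 2.0))])

;; Tensor (Kronecker) product of two amplitude vectors.
(defn kron [a b]
  (vec (for [x a y b] (* x y))))

;; n features -> product state over n qubits with 2^n amplitudes.
(defn angle-encode [features]
  (reduce kron (map ry-amplitudes features)))

(count (angle-encode [0.1 0.2 0.3])) ;; => 8
```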


parametrized-feature-map (clj)

(parametrized-feature-map data-point
                          trainable-params
                          num-qubits
                          num-layers
                          options)

Create a parametrized feature map with trainable parameters.

This feature map combines data encoding with trainable rotation gates, allowing the kernel to be optimized for specific datasets. This is critical for achieving quantum advantage over classical kernels.

Architecture:

  1. Data encoding layer (angle encoding of features)
  2. Trainable rotation layers (parametrized Ry and Rz gates)
  3. Entangling layers (to create feature interactions)

Parameters:

  • data-point: Classical feature vector
  • trainable-params: Vector of trainable parameters
  • num-qubits: Number of qubits for encoding
  • num-layers: Number of trainable layers
  • options: Additional options

Returns: Function that applies parametrized encoding to a circuit
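The three-part architecture can be sketched as a gate list in plain Clojure. The gate tuples and parameter layout (two parameters per qubit per layer) are illustrative assumptions, not the library's circuit representation:

```clojure
(defn feature-map-gates [features params num-qubits num-layers]
  (concat
   ;; 1. data encoding layer: Ry(x_q) on each qubit
   (map-indexed (fn [q x] [:ry q x]) features)
   ;; 2. + 3. per layer: trainable Ry/Rz on each qubit, then ring of CNOTs
   (mapcat (fn [layer]
             (concat
              (mapcat (fn [q]
                        (let [base (+ (* layer num-qubits 2) (* q 2))]
                          [[:ry q (nth params base)]
                           [:rz q (nth params (inc base))]]))
                      (range num-qubits))
              (map (fn [q] [:cnot q (mod (inc q) num-qubits)])
                   (range num-qubits))))
           (range num-layers))))

;; 2 qubits, 1 layer: 2 encoding + 4 trainable rotations + 2 CNOTs
(count (feature-map-gates [0.1 0.2] [0.3 0.4 0.5 0.6] 2 1)) ;; => 8
```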


precompute-encodings (clj)

(precompute-encodings data-matrix encoding-type num-qubits encoding-options)

Precompute quantum encodings for all data points to optimize repeated kernel computations.

This optimization strategy precomputes the quantum circuits for encoding each data point, avoiding redundant encoding operations when computing the full kernel matrix.

Parameters:

  • data-matrix: Matrix of classical data vectors
  • encoding-type: Type of encoding to use
  • num-qubits: Number of qubits for encoding
  • encoding-options: Encoding-specific options

Returns: Vector of precomputed encoding functions


quantum-kernel-matrix (clj)

(quantum-kernel-matrix backend data-matrix config)
(quantum-kernel-matrix backend data-matrix config symmetric?)

Compute quantum kernel matrix for a dataset using efficient batched processing.

This function computes the full kernel matrix K where K[i,j] represents the quantum kernel value between data points i and j. Uses transient data structures for efficient matrix construction.

Parameters:

  • backend: Quantum backend for circuit execution
  • data-matrix: Matrix of classical data vectors (rows are data points)
  • config: Kernel configuration
  • symmetric?: If true, compute only upper triangle (default: true)

Returns: Symmetric kernel matrix as vector of vectors
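The upper-triangle-plus-transients strategy can be sketched in plain Clojure (illustrative only, not the library's implementation): compute each overlap once for j >= i, then mirror while assembling rows into a transient vector.

```clojure
(defn symmetric-kernel-matrix
  "Compute K[i,j] only for j >= i, then mirror into a full matrix."
  [n kernel-fn]
  (let [upper (into {} (for [i (range n) j (range i n)]
                         [[i j] (kernel-fn i j)]))]
    (persistent!
     (reduce (fn [rows i]
               (conj! rows (mapv #(upper (if (<= i %) [i %] [% i]))
                                 (range n))))
             (transient [])
             (range n)))))

(symmetric-kernel-matrix 3 (fn [i j] (if (= i j) 1.0 0.5)))
;; => [[1.0 0.5 0.5] [0.5 1.0 0.5] [0.5 0.5 1.0]]
```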


quantum-kernel-overlap (clj)

(quantum-kernel-overlap backend data-point1 data-point2 config)

Compute quantum kernel overlap between two data points using adjoint method.

This function implements the core quantum kernel computation using the fidelity test:

  1. Prepare state |ψ⟩ = U_φ(x)|0⟩ using feature map U_φ(x)
  2. Apply adjoint U†_φ(x') of the feature map for the second data point
  3. Measure probability of returning to |0⟩ state
  4. This probability equals |⟨φ(x)|φ(x')⟩|², the quantum kernel value

Parameters:

  • backend: Quantum backend for circuit execution
  • data-point1: First classical data vector
  • data-point2: Second classical data vector
  • config: Kernel configuration including encoding type and options

Returns: Map with overlap value and measurement details


quantum-kernel-svm-matrix (clj)

(quantum-kernel-svm-matrix backend data-matrix config)
(quantum-kernel-svm-matrix backend data-matrix config regularization)

Compute kernel matrix optimized for SVM training.

This function provides a kernel matrix suitable for SVM training with additional optimizations and regularization options.

Parameters:

  • backend: Quantum backend
  • data-matrix: Training data matrix
  • config: Kernel configuration
  • regularization: Regularization parameter (added to diagonal)

Returns: Regularized kernel matrix for SVM training
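Regularization here simply adds the parameter to each diagonal entry, which improves the matrix's conditioning for SVM solvers. In plain Clojure:

```clojure
;; Add lambda to K[i,i] for every i.
(defn regularize [k lambda]
  (vec (map-indexed (fn [i row] (update row i + lambda)) k)))

(regularize [[1.0 0.5] [0.5 1.0]] 0.1) ;; => [[1.1 0.5] [0.5 1.1]]
```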


train-quantum-kernel (clj)

(train-quantum-kernel backend data-matrix labels config)

Train a quantum kernel using Quantum Kernel Alignment (QKA).

This function optimizes the trainable parameters of a quantum kernel to maximize alignment with an ideal kernel (supervised) or to optimize for a specific task. This is the key to achieving quantum advantage over classical kernels.

Kernel Alignment Objective:

  • Supervised: Maximize alignment with ideal kernel from labels
  • Target Alignment: Maximize alignment with provided target kernel

Parameters:

  • backend: Quantum backend
  • data-matrix: Training data matrix
  • labels: Training labels (for supervised alignment)
  • config: Training configuration

Required config:

  • :num-qubits - Number of qubits
  • :num-trainable-layers - Number of trainable layers
  • :alignment-objective - :supervised or :target-alignment

Optional config:

  • :target-kernel - Target kernel matrix (for target alignment)
  • :optimization-method - Optimizer (:adam, :cmaes, :nelder-mead, :powell, :bobyqa, :gradient-descent)
  • :max-iterations - Maximum training iterations (default: 100)
  • :learning-rate - Learning rate for gradient-based optimizers (default: 0.01)
  • :shots - Shots per circuit (default: 1024)
  • :parameter-strategy - Parameter init strategy (:random, :zero, :custom, :legacy, default: :random)
  • :parameter-range - Range for random init (default: [-π π])
  • :initial-parameters - Custom initial parameters (if :custom strategy)
  • :regularization - Regularization type (:none, :l1, :l2, :elastic-net, default: :none)
  • :reg-lambda - Regularization strength (default: 0.01)
  • :reg-alpha - Elastic net mix ratio (default: 0.5)

Returns: Map with trained parameters and training history
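The supervised objective uses the standard kernel alignment measure A(K1,K2) = ⟨K1,K2⟩_F / (‖K1‖_F ‖K2‖_F), where the ideal kernel for ±1 labels is K*[i,j] = y_i · y_j. A plain-Clojure sketch of the quantity being maximized:

```clojure
;; Frobenius inner product of two matrices.
(defn frobenius-dot [a b]
  (reduce + (map (fn [ra rb] (reduce + (map * ra rb))) a b)))

;; Normalized kernel alignment A(K1, K2); 1.0 means perfect alignment.
(defn kernel-alignment [k1 k2]
  (/ (frobenius-dot k1 k2)
     (Math/sqrt (* (frobenius-dot k1 k1) (frobenius-dot k2 k2)))))

;; Ideal kernel from +/-1 labels: K*[i,j] = y_i * y_j.
(defn ideal-kernel [labels]
  (mapv (fn [yi] (mapv #(* yi %) labels)) labels))

(kernel-alignment (ideal-kernel [1 -1]) (ideal-kernel [1 -1])) ;; => 1.0
```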


trainable-quantum-kernel-overlap (clj)

(trainable-quantum-kernel-overlap backend
                                  data-point1
                                  data-point2
                                  trainable-params
                                  config)

Compute quantum kernel overlap using trainable parametrized feature maps with adjoint method.

This function extends the standard kernel computation with trainable parameters, allowing the kernel to be optimized for specific datasets.

Parameters:

  • backend: Quantum backend for circuit execution
  • data-point1: First classical data vector
  • data-point2: Second classical data vector
  • trainable-params: Vector of trainable parameters
  • config: Kernel configuration with trainable settings

Returns: Map with overlap value and measurement details

