Quantum optimization algorithms and gradient computation for variational quantum algorithms.
This namespace provides a comprehensive suite of optimization methods specifically designed for quantum variational algorithms like VQE, QAOA, and quantum machine learning. It combines quantum-aware gradient computation with classical optimization techniques to achieve efficient parameter optimization on quantum circuits.
Core Features
Quantum Gradient Computation:
- **Parameter Shift Rule**: Exact gradients for parameterized quantum circuits
- **Finite Differences**: General gradient computation for classical functions
- **Adaptive Parameter Shifts**: Optimized shifts for different parameter types
- **Parallel Gradient Evaluation**: Efficient computation using multiple threads
Classical Optimization Methods:
- **Gradient Descent**: Basic optimization with momentum and adaptive learning rates
- **Adam Optimizer**: Adaptive moment estimation with bias correction
- **Quantum Natural Gradient**: Fisher Information Matrix-based natural gradients
- **Fastmath Integration**: Derivative-free and gradient-based external optimizers
Quantum Fisher Information:
- **QFIM Computation**: Quantum Fisher Information Matrix calculation
- **Natural Gradient Updates**: Optimal parameter space metrics
- **Matrix Operations**: Linear algebra utilities for quantum optimization
- **Regularization**: Numerical stability for ill-conditioned systems
Optimization Method Selection Guide
For Quantum Variational Algorithms (VQE, QAOA):
- **:adam** - Fast convergence, adaptive learning rates (recommended default)
- **:gradient-descent** - Simple, reliable, theoretical guarantees
- **:quantum-natural-gradient** - Optimal convergence in quantum parameter space
For Noisy or Difficult Landscapes:
- **:cmaes** - Robust global optimization, handles noise well
- **:nelder-mead** - Derivative-free simplex method
- **:powell** - Coordinate descent without gradients
For High-Precision Requirements:
- **:quantum-natural-gradient** - Uses quantum Fisher information
- **:adam** - With small learning rates and tight tolerances
Parameter Shift Rule
The parameter shift rule is fundamental to quantum optimization:
```
∂⟨H⟩/∂θ = (1/2)[⟨H⟩(θ + π/2) - ⟨H⟩(θ - π/2)]
```
This provides exact gradients for quantum circuits with rotation gates, requiring only 2 circuit evaluations per parameter.
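As a self-contained illustration, here is a minimal sketch of the rule applied to the toy objective f(θ) = cos θ (the ⟨Z⟩ expectation after RY(θ) on |0⟩); `shift-rule-gradient` is a hypothetical helper, not part of this namespace:

```clojure
;; Minimal sketch of the parameter shift rule; shift-rule-gradient is
;; a hypothetical helper, not part of this namespace.
(defn shift-rule-gradient
  "Gradient of f at theta via the parameter shift rule."
  [f theta]
  (* 0.5 (- (f (+ theta (/ Math/PI 2)))
            (f (- theta (/ Math/PI 2))))))

(shift-rule-gradient #(Math/cos %) 0.7)
;; => ≈ -0.6442, matching the analytic derivative -sin(0.7) exactly
;; (up to floating-point error), using just two evaluations of f
```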
Usage Examples
```clojure
;; Basic Adam optimization
(adam-optimization objective-fn initial-params
                   {:learning-rate 0.01
                    :max-iterations 500
                    :tolerance 1e-6
                    :gradient-method :parameter-shift})

;; Quantum Natural Gradient with Fisher Information
(quantum-natural-gradient-optimization objective-fn initial-params
                                       {:ansatz-fn ansatz-constructor
                                        :backend quantum-backend
                                        :learning-rate 0.1
                                        :fisher-regularization 1e-8})

;; Robust derivative-free optimization
(fastmath-derivative-free-optimization :cmaes objective-fn initial-params
                                       {:max-iterations 2000
                                        :cmaes-sigma 0.3
                                        :parameter-bounds [[-π π] [-π π]]})
```
Integration with VQE
This namespace is designed to integrate seamlessly with VQE and other variational quantum algorithms. The optimization functions expect:
- **Objective Function**: Takes parameter vector, returns scalar energy
- **Initial Parameters**: Starting point for optimization
- **Options Map**: Configuration for optimization behavior
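A hypothetical sketch of an objective satisfying this contract; `expectation-energy` is an illustrative stand-in for a backend call, not a function from this library:

```clojure
;; Hypothetical sketch; expectation-energy stands in for a backend call.
(defn make-objective
  "Returns a fn from a parameter vector to a scalar energy."
  [ansatz-fn backend hamiltonian exec-options]
  (fn [params]
    (let [circuit (ansatz-fn params)]
      (expectation-energy backend circuit hamiltonian exec-options))))
```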
Performance Considerations
- **Parameter Shift**: 2N circuit evaluations per gradient (N = parameters)
- **Finite Differences**: 2N circuit evaluations per gradient
- **Quantum Natural Gradient**: N² + 2N circuit evaluations per iteration
- **Derivative-Free**: Varies by method, typically 10-100x more evaluations
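For concreteness, the counts above work out as follows for a 10-parameter ansatz (a simple arithmetic sketch):

```clojure
;; Circuit evaluations for N = 10 parameters, per the list above:
(let [n 10]
  {:parameter-shift          (* 2 n)               ;; 20 per gradient
   :finite-differences       (* 2 n)               ;; 20 per gradient
   :quantum-natural-gradient (+ (* n n) (* 2 n))}) ;; 120 per iteration
```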
Design Principles
- **Quantum-Aware**: Exploits quantum circuit structure for efficiency
- **Flexible**: Supports multiple optimization strategies
- **Robust**: Handles numerical instabilities and edge cases
- **Extensible**: Easy to add new optimization methods
- **Production-Ready**: Suitable for real quantum hardware
See also: `org.soulspace.qclojure.application.algorithm.vqe` for usage in VQE.
(adam-optimization objective-fn initial-parameters options)
VQE optimization using Adam optimizer with parameter shift gradients.
Adam combines momentum with adaptive learning rates per parameter, often providing faster and more stable convergence than plain gradient descent.
Parameters:
- objective-fn: VQE objective function
- initial-parameters: Starting parameter values
- options: Optimization options
Returns: Map with optimization results
(adaptive-parameter-shift-gradient objective-fn parameters param-types options)
Calculate gradient using adaptive parameter shift rule.
Uses different shift values for different types of parameters to improve gradient accuracy. For example, uses π/2 for rotation angles but smaller shifts for amplitude parameters.
Parameters:
- objective-fn: VQE objective function
- parameters: Current parameter vector
- param-types: Vector indicating parameter types (:rotation, :amplitude, :phase)
- options: Options map
Returns: Vector of gradients for all parameters
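A usage sketch with a toy stand-in objective; real use would pass a VQE energy function:

```clojure
;; Usage sketch; the objective is a toy stand-in for a VQE energy.
(adaptive-parameter-shift-gradient
 (fn [ps] (reduce + (map #(Math/sin %) ps)))  ;; toy objective
 [0.1 0.5 0.9]                                ;; current parameters
 [:rotation :amplitude :phase]                ;; one type per parameter
 {})
;; => three-element gradient vector
```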
(calculate-gradient objective-fn parameters options)
Calculate gradient using the appropriate method.
For VQE quantum objectives, uses parameter shift rule. For general functions, uses finite differences.
Parameters:
- objective-fn: Objective function
- parameters: Current parameter vector
- options: Options map, can contain :gradient-method (:parameter-shift or :finite-difference)
Returns: Vector of gradients for all parameters
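A usage sketch; `vqe-objective`, `classical-fn`, and `params` are placeholders bound elsewhere:

```clojure
;; Usage sketch; vqe-objective, classical-fn, and params are placeholders.
(calculate-gradient vqe-objective params {:gradient-method :parameter-shift})
(calculate-gradient classical-fn params {:gradient-method :finite-difference})
```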
(calculate-parameter-shift-gradient objective-fn parameters)
(calculate-parameter-shift-gradient objective-fn parameters options)
Calculate full gradient vector using parameter shift rule.
Uses parallel computation for efficiency when computing multiple gradients.
Parameters:
- objective-fn: VQE objective function
- parameters: Current parameter vector
- options: Options map with :parallel? (default true) and :shift (default π/2)
Returns: Vector of gradients for all parameters
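A usage sketch exercising both documented options; `objective-fn` and `params` are placeholders:

```clojure
;; Usage sketch; objective-fn and params are placeholders.
(calculate-parameter-shift-gradient objective-fn params
                                    {:parallel? false        ;; serial evaluation
                                     :shift (/ Math/PI 2)})  ;; the default shift
```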
(compute-fisher-information-matrix ansatz-fn backend parameters options)
Compute the Quantum Fisher Information Matrix (QFIM).
The QFIM is defined as: F_ij = 4 * Re[⟨∂ψ/∂θᵢ|∂ψ/∂θⱼ⟩ - ⟨∂ψ/∂θᵢ|ψ⟩⟨ψ|∂ψ/∂θⱼ⟩]
This matrix provides the optimal metric for parameter updates in the quantum parameter space, leading to faster convergence than standard gradient descent.
Parameters:
- ansatz-fn: Function that creates quantum circuit from parameters
- backend: Quantum backend for execution
- parameters: Current parameter vector
- options: Execution options
Returns: Fisher Information Matrix as vector of vectors
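As a minimal sketch of the formula, here is a single QFIM entry specialized to real amplitude vectors, where the Re[·] drops out; the library's own state representation and complex arithmetic differ:

```clojure
;; Sketch of one QFIM entry for real amplitude vectors only.
(defn dot [a b] (reduce + (map * a b)))

(defn qfim-entry
  "F_ij for real vectors psi, dpsi-i = ∂ψ/∂θᵢ, dpsi-j = ∂ψ/∂θⱼ."
  [dpsi-i dpsi-j psi]
  (* 4.0 (- (dot dpsi-i dpsi-j)
            (* (dot dpsi-i psi) (dot psi dpsi-j)))))
```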
(compute-state-derivative ansatz-fn backend parameters param-index options)
Compute the derivative of the quantum state with respect to a parameter.
Uses the parameter shift rule to compute |∂ψ(θ)/∂θᵢ⟩ efficiently. For a state |ψ(θ)⟩, the derivative is computed as: |∂ψ/∂θᵢ⟩ ≈ [|ψ(θ + π/2·eᵢ)⟩ - |ψ(θ - π/2·eᵢ)⟩] / 2
Parameters:
- ansatz-fn: Function that creates quantum circuit from parameters
- backend: Quantum backend for execution
- parameters: Current parameter vector
- param-index: Index of parameter to compute derivative for
- options: Execution options
Returns: Map representing the state derivative
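A small sketch of the two shifted parameter vectors the rule requires; `shifted-params` is a hypothetical helper:

```clojure
;; Sketch: shifted parameter vectors for parameter index i.
(defn shifted-params [params i]
  (let [s (/ Math/PI 2)]
    [(update params i + s)     ;; θ + (π/2)·eᵢ
     (update params i - s)]))  ;; θ - (π/2)·eᵢ

(shifted-params [0.1 0.2 0.3] 1)
;; => [[0.1 1.770796... 0.3] [0.1 -1.370796... 0.3]]
```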
(fastmath-derivative-free-optimization method objective-fn initial-parameters options)
VQE optimization using fastmath derivative-free optimizers.
These optimizers don't require gradients and can be used when parameter shift gradients are expensive or unavailable.
Supported methods:
- :nelder-mead - Nelder-Mead simplex (good general purpose)
- :powell - Powell's method (coordinate descent)
- :cmaes - Covariance Matrix Adaptation Evolution Strategy (robust)
- :bobyqa - Bound Optimization BY Quadratic Approximation (handles bounds well)
Parameters:
- method: Fastmath optimization method keyword
- objective-fn: VQE objective function
- initial-parameters: Starting parameter values
- options: Optimization options
Returns: Map with optimization results
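A usage sketch with bounds (where :bobyqa shines, per the list above); `objective-fn` and `initial-params` are placeholders, and the option keys follow the namespace examples:

```clojure
;; Usage sketch; objective-fn and initial-params are placeholders.
(fastmath-derivative-free-optimization :bobyqa objective-fn initial-params
                                       {:max-iterations 1000
                                        :parameter-bounds [[-3.14 3.14]
                                                           [-3.14 3.14]]})
```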
(fastmath-gradient-based-optimization method objective-fn initial-parameters options)
VQE optimization using fastmath gradient-based optimizers with parameter shift gradients.
These optimizers use our exact parameter shift rule gradients for faster convergence than derivative-free methods.
Supported methods:
- :lbfgsb - L-BFGS-B (limited memory BFGS with bounds)
- :gradient - Simple gradient descent (not recommended for VQE)
Parameters:
- method: Fastmath optimization method keyword
- objective-fn: VQE objective function
- initial-parameters: Starting parameter values
- options: Optimization options
Returns: Map with optimization results
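A usage sketch; `objective-fn` and `initial-params` are placeholders, and the option keys follow the patterns in the namespace examples:

```clojure
;; Usage sketch; objective-fn and initial-params are placeholders.
(fastmath-gradient-based-optimization :lbfgsb objective-fn initial-params
                                      {:max-iterations 500
                                       :tolerance 1e-6})
```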
(finite-difference-gradient objective-fn parameters)
(finite-difference-gradient objective-fn parameters h)
Calculate gradient using finite differences (for general functions).
This is more appropriate for classical test functions or when the parameter shift rule doesn't apply.
Parameters:
- objective-fn: Objective function
- parameters: Current parameter vector
- h: Step size (default: 1e-6)
Returns: Vector of gradients for all parameters
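A worked sketch on a classical test function: for f(x) = Σxᵢ² the analytic gradient is 2x, which finite differences reproduce to high accuracy:

```clojure
;; Gradient of f(x) = Σ xᵢ² at [1.0 2.0]; analytic answer is [2.0 4.0].
(finite-difference-gradient
 (fn [xs] (reduce + (map #(* % %) xs)))
 [1.0 2.0])
;; => ≈ [2.0 4.0]
```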
(gradient-descent-optimization objective-fn initial-parameters options)
VQE optimization using gradient descent with parameter shift rules.
This is the preferred optimization method for VQE as it:
1. Uses exact quantum gradients via parameter shift rule
2. Has theoretical convergence guarantees
3. Is efficient (2 circuit evaluations per parameter per iteration)
4. Handles quantum circuit structure naturally
Parameters:
- objective-fn: VQE objective function
- initial-parameters: Starting parameter values
- options: Optimization options
Returns: Map with optimization results
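A usage sketch mirroring the Adam example in the namespace docstring, assuming the same option keys apply:

```clojure
;; Usage sketch; assumes the option keys shown for adam-optimization.
(gradient-descent-optimization objective-fn initial-params
                               {:learning-rate 0.01
                                :max-iterations 500
                                :tolerance 1e-6
                                :gradient-method :parameter-shift})
```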
(parameter-shift-gradient objective-fn parameters param-index)
(parameter-shift-gradient objective-fn parameters param-index shift)
Calculate gradient using the parameter shift rule.
For a parameterized gate with parameter θ, the gradient is: ∂⟨H⟩/∂θ = (1/2)[⟨H⟩(θ + π/2) - ⟨H⟩(θ - π/2)]
This gives exact gradients for quantum circuits with rotation gates.
Parameters:
- objective-fn: VQE objective function
- parameters: Current parameter vector
- param-index: Index of parameter to compute gradient for
- shift: Parameter shift value (default: π/2)
Returns: Gradient value for the specified parameter
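A worked sketch on a one-parameter toy objective f(θ) = cos θ, where the rule is exact:

```clojure
;; Gradient w.r.t. parameter 0 of a toy objective.
(parameter-shift-gradient (fn [ps] (Math/cos (first ps))) [0.3] 0)
;; => ≈ -0.2955, i.e. -sin(0.3)
```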
(quantum-natural-gradient-optimization objective-fn initial-parameters options)
VQE optimization using Quantum Natural Gradient (QNG) with full Fisher Information Matrix.
QNG uses the quantum Fisher information matrix to define a more natural metric for parameter updates. The update rule is: θ_{k+1} = θ_k - α * F⁻¹ * ∇E(θ_k)
where F is the Fisher Information Matrix and ∇E is the energy gradient. This often leads to faster convergence than standard gradient descent.
Parameters:
- objective-fn: VQE objective function
- initial-parameters: Starting parameter values
- options: Optimization options
Returns: Map with optimization results
(regularize-fisher-matrix fisher-matrix regularization)
Regularize the Fisher Information Matrix to ensure numerical stability.
Adds a small diagonal term to prevent singularity and improve conditioning.
Parameters:
- fisher-matrix: Fisher Information Matrix
- regularization: Regularization parameter (default: 1e-8)
Returns: Regularized Fisher Information Matrix
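A minimal sketch of the diagonal lift described above (F + λI), independent of the library's implementation:

```clojure
;; Sketch: add lambda to each diagonal entry of a matrix (F + λI).
(defn add-diagonal [m lambda]
  (vec (map-indexed (fn [i row] (update (vec row) i + lambda)) m)))

(add-diagonal [[1.0 0.2] [0.2 0.0]] 1e-8)
;; => [[1.00000001 0.2] [0.2 1.0E-8]]
;; the zero diagonal entry is lifted off singularity
```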
(state-inner-product state1 state2)
Compute inner product ⟨ψ₁|ψ₂⟩ between two quantum states.
Parameters:
- state1: First quantum state
- state2: Second quantum state
Returns: Complex number representing the inner product