Access logging for the load balancer. Logs connection events to stdout and rotating file. Supports JSON and CLF (Common Log Format) output.
Rotating file writer for access logs. Handles log file rotation based on size with configurable max files.
Access log formatters and async processing. Supports JSON and CLF (Common Log Format) output.
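A CLF line has the fixed shape `host ident authuser [date] "request" status bytes`. As a language-neutral illustration (Python here, not the library's Clojure formatter; the field names in the input map are assumptions):

```python
from datetime import datetime, timezone

def format_clf(entry):
    """Format a connection event as a Common Log Format line.
    The keys of `entry` are illustrative, not the library's actual schema."""
    ts = entry["time"].strftime("%d/%b/%Y:%H:%M:%S %z")
    return '{host} - - [{ts}] "{request}" {status} {size}'.format(
        host=entry["client"], ts=ts,
        request=entry.get("request", "-"),
        status=entry.get("status", "-"),
        size=entry.get("bytes", 0))

line = format_clf({
    "client": "203.0.113.7",
    "time": datetime(2024, 5, 1, 12, 0, 0, tzinfo=timezone.utc),
    "request": "GET / HTTP/1.1", "status": 200, "bytes": 512})
print(line)
# 203.0.113.7 - - [01/May/2024:12:00:00 +0000] "GET / HTTP/1.1" 200 512
```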
Public API for Admin HTTP REST server.
This module provides a RESTful HTTP API for runtime management of the
load balancer, enabling automation, scripting, and integration with
orchestration tools without requiring REPL access.
Configuration:
```clojure
{:settings
{:admin-api {:enabled true
:port 8081 ; HTTP port (default 8081)
:api-key "secret-key" ; Optional API key auth
:allowed-origins nil}}} ; Optional CORS origins
```
Example usage:
```bash
# Get status
curl http://localhost:8081/api/v1/status
# List proxies
curl http://localhost:8081/api/v1/proxies
# Add a proxy
curl -X POST http://localhost:8081/api/v1/proxies \
-H 'Content-Type: application/json' \
-d '{"name":"web","listen":{"interfaces":["eth0"],"port":80}}'
# With API key authentication
curl -H 'X-API-Key: secret-key' http://localhost:8081/api/v1/status
```
Request handlers for Admin REST API endpoints.
Each handler receives a request map with:
- :exchange - The HttpExchange object
- :params - Path parameters extracted from URL
- :body - Parsed JSON request body (or nil)
Each handler returns a result map with either:
- {:data ...} for success
- {:error "message" :code "CODE" :status 400} for errors

HTTP server for Admin REST API.
Provides a simple HTTP server using Java's built-in HttpServer that serves RESTful admin endpoints for runtime management.
Circuit breaker pattern for backend protection.
Prevents cascade failures by automatically stopping traffic to backends that are experiencing high error rates, allowing them time to recover.
State Machine:
- CLOSED (normal): Traffic flows, errors are counted
- OPEN (blocking): No traffic sent, waiting for timeout
- HALF-OPEN (testing): Limited traffic to test recovery
Transitions:
- CLOSED -> OPEN: When error rate exceeds threshold
- OPEN -> HALF-OPEN: After open-duration-ms timeout
- HALF-OPEN -> CLOSED: After N consecutive successes
- HALF-OPEN -> OPEN: On any failure
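The state machine above can be sketched in a few lines (Python for illustration; the thresholds, defaults, and names are assumptions, not the library's configuration keys):

```python
import time

class CircuitBreaker:
    """Minimal sketch of the CLOSED/OPEN/HALF-OPEN transitions described above."""
    def __init__(self, error_threshold=0.5, min_requests=10,
                 open_duration_s=30.0, half_open_successes=3, clock=time.monotonic):
        self.state = "closed"
        self.errors = self.requests = self.successes = 0
        self.opened_at = None
        self.error_threshold = error_threshold
        self.min_requests = min_requests
        self.open_duration_s = open_duration_s
        self.half_open_successes = half_open_successes
        self.clock = clock

    def allow_request(self):
        if self.state == "open":
            # OPEN -> HALF-OPEN once the open-duration timeout elapses
            if self.clock() - self.opened_at >= self.open_duration_s:
                self.state, self.successes = "half-open", 0
                return True
            return False
        return True  # CLOSED and HALF-OPEN both admit traffic

    def record(self, ok):
        if self.state == "half-open":
            if not ok:                        # HALF-OPEN -> OPEN on any failure
                self._open()
            else:
                self.successes += 1
                if self.successes >= self.half_open_successes:
                    self.state, self.errors, self.requests = "closed", 0, 0
            return
        self.requests += 1
        if not ok:
            self.errors += 1
        if (self.requests >= self.min_requests
                and self.errors / self.requests > self.error_threshold):
            self._open()                      # CLOSED -> OPEN on high error rate

    def _open(self):
        self.state, self.opened_at = "open", self.clock()
```

With an injected clock, all four transitions can be exercised deterministically: enough failures open the circuit, the timeout moves it to half-open, and consecutive successes close it again.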
Public API for distributed state sharing.
This namespace provides the public interface for cluster functionality:
- Cluster lifecycle (start!/stop!)
- State broadcasting
- Membership queries
- Event subscription
Configuration example:
{:cluster
{:enabled true
:node-id "auto"
:bind-address "0.0.0.0"
:bind-port 7946
:seeds ["192.168.1.10:7946" "192.168.1.11:7946"]
:gossip-interval-ms 200
:gossip-fanout 2
:push-pull-interval-ms 10000
:sync-health true
:sync-circuit-breaker true
:sync-drain true
  :sync-conntrack true}}

Connection tracking synchronization for seamless failover.
This namespace implements connection state replication across cluster nodes:
- Owner node tracks connections where the first packet arrived
- Shadow entries stored on non-owner nodes for failover
- On node failure, shadow entries are promoted to active BPF entries
Sync Strategy:
- Batch updates for efficiency (every 100 ms or 100 connections)
- Delta sync: only entries changed since the last gossip round
- Full sync on node join via push-pull
- Conservative: shadow entries don't affect routing until failover
Gossip protocol implementation for state synchronization.
Uses UDP for small messages (<1KB) and TCP for larger payloads. Implements push-pull anti-entropy for eventual consistency.
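The core of push-pull anti-entropy is a symmetric merge where the higher-versioned copy of each entry wins, so two nodes that exchange state converge regardless of direction. A minimal sketch (Python for illustration; the per-entry version counter and key names are assumptions about the wire format):

```python
def merge_states(local, remote):
    """Push-pull anti-entropy merge: for each key, keep the entry with the
    higher version. Commutative, so both sides converge after one exchange."""
    merged = dict(local)
    for key, entry in remote.items():
        if key not in merged or entry["version"] > merged[key]["version"]:
            merged[key] = entry
    return merged

a = {"backend-1": {"version": 3, "healthy": True}}
b = {"backend-1": {"version": 5, "healthy": False},
     "backend-2": {"version": 1, "healthy": True}}
# After a push-pull exchange, both sides hold the same state.
print(merge_states(a, b) == merge_states(b, a))  # True
```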
Cluster manager - orchestrates membership, gossip, and state synchronization.
This is the internal orchestration layer. Use lb.cluster namespace for the public API.
SWIM-style cluster membership management.
Implements failure detection using:
- Direct ping probes
- Indirect ping-req through other members
- Suspicion mechanism with a timeout before declaring a member dead
References:
- SWIM: Scalable Weakly-consistent Infection-style Process Group Membership Protocol
- Serf/Memberlist (HashiCorp) for practical implementation patterns
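One SWIM probe round can be sketched as: try a direct ping, fall back to indirect ping-req through k other members, and only then mark the target suspect (Python for illustration; `ping` is a caller-supplied reachability function, an assumption here, not the library's API):

```python
def probe(target, ping, members, k=2):
    """One SWIM probe round (sketch): direct ping, then indirect ping-req
    through up to k other members; returns the resulting member state."""
    if ping("self", target):
        return "alive"
    helpers = [m for m in members if m not in ("self", target)][:k]
    if any(ping(h, target) for h in helpers):
        return "alive"   # an indirect path confirms the node is up
    return "suspect"     # declared dead only after the suspicion timeout

members = ["self", "n1", "n2", "n3"]
# n3 drops packets from self but is reachable via n1: stays alive.
reachable = lambda src, dst: not (src == "self" and dst == "n3")
print(probe("n3", reachable, members))  # alive
```

The indirect hop is what makes SWIM robust to asymmetric network partitions: a node is only suspected when nobody in the probe set can reach it.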
Protocol definitions and data types for distributed state sharing.
This namespace defines the core abstractions:
- NodeInfo: cluster member information
- SyncableState: versioned state that can be synchronized
- GossipMessage: messages exchanged between nodes
- IStateProvider: protocol for modules that provide syncable state
State synchronization providers for cluster mode.
This namespace provides IStateProvider implementations for:
- Health status synchronization
- Circuit breaker state synchronization
- Drain coordination
Each provider implements the protocol from lb.cluster.protocol.
Configuration management for the load balancer. Handles configuration data structures, validation, and persistence.
Connection tracking management for the load balancer. Provides utilities for monitoring, cleaning, and querying connection state.
Core API for the eBPF load balancer. Provides high-level functions for initialization, configuration, and management.
DNS-based backend resolution for dynamic environments.
Provides:
- DNS hostname support for backend targets
- Periodic re-resolution with configurable intervals
- Multiple A record expansion to weighted targets
- Graceful failure handling with last-known-good fallback
Usage:
;; In configuration
:default-target {:host "backend.local" :port 8080 :dns-refresh-seconds 30}
;; Programmatic usage
(dns/start!)
(dns/register-target! "proxy-name" "hostname" config update-fn)
(dns/get-status "proxy-name")
(dns/stop!)

Background daemon for periodic DNS re-resolution.
Follows the same pattern as lb.health.manager:
- ScheduledExecutorService for periodic tasks
- Jitter to avoid thundering herd
- Callbacks to update BPF maps on IP changes
- Last-known-good fallback on failures
DNS resolution logic with multiple A record support.
Provides:
- Resolution of hostnames to all A records
- Conversion of resolved IPs to weighted targets
- Result types for success/failure handling
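Expanding a hostname's A records into equally weighted targets looks roughly like this (Python for illustration; the `{:ip :port :weight}` shape mirrors the docs above, but the exact keys and default weight are assumptions):

```python
import socket

def resolve_targets(host, port, weight=1):
    """Resolve a hostname to all its A records and expand them into
    weighted backend targets, one per distinct address."""
    infos = socket.getaddrinfo(host, port, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    ips = sorted({info[4][0] for info in infos})  # dedupe, stable order
    return [{"ip": ip, "port": port, "weight": weight} for ip in ips]

print(resolve_targets("localhost", 8080))
```

On re-resolution, comparing the returned list against the previous one tells the daemon whether BPF maps need updating; a resolution failure would keep the last-known-good list instead.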
Connection draining for graceful backend removal. Allows backends to be drained by stopping new connections while allowing existing connections to complete.
Public API for health checking system. Provides a simple interface for managing health-aware load balancing.
Health check implementations for TCP and HTTP protocols. Uses virtual threads for efficient concurrent checking.
Health check orchestration using virtual threads. Manages health state for all targets and triggers weight updates.
Weight redistribution logic for health-aware load balancing. Computes effective weights based on target health status.
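The basic redistribution rule is: an unhealthy target gets effective weight 0, a healthy one keeps its configured weight. A sketch (Python for illustration; the all-targets-down fallback of keeping configured weights is a common pattern assumed here, not confirmed by the docs):

```python
def effective_weights(targets, health):
    """Compute effective weights from configured weights and health status.
    Unhealthy targets drop to 0; if *every* target is down, keep the
    configured weights so traffic is not blackholed (assumed fallback)."""
    if not any(health.get(name, False) for name in targets):
        return dict(targets)  # all down: fail open with configured weights
    return {name: (w if health.get(name, False) else 0)
            for name, w in targets.items()}

weights = {"10.0.0.1:8080": 3, "10.0.0.2:8080": 1}
print(effective_weights(weights, {"10.0.0.1:8080": False,
                                  "10.0.0.2:8080": True}))
# {'10.0.0.1:8080': 0, '10.0.0.2:8080': 1}
```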
Backend latency tracking for the load balancer. Tracks connection lifetime (creation to close) as latency metric. Exposes per-backend histograms for Prometheus export.
Load balancing algorithm implementations.
Provides weight computation for different load balancing strategies:
- weighted-random: Original configured weights (default)
- least-connections: Route to backends with fewer active connections
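For least-connections, one way to turn active-connection counts into routing weights is an inverse-proportional mapping, so lightly loaded backends attract more new traffic (Python for illustration; the formula is a sketch, not the library's exact math):

```python
def least_connections_weights(active, base_weight=100):
    """Map per-backend active-connection counts to weights: fewer active
    connections -> higher weight. Inverse-proportional, illustrative only."""
    return {b: base_weight // (conns + 1) for b, conns in active.items()}

print(least_connections_weights({"b1": 0, "b2": 4, "b3": 9}))
# {'b1': 100, 'b2': 20, 'b3': 10}
```

Whatever the exact formula, the manager described below only needs to recompute these weights periodically from the conntrack scan and write them into the BPF weight map.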
Background manager for least-connections load balancing. Periodically scans connection tracking to compute and update weights based on the current connection distribution across backends.
eBPF map management for the load balancer. Provides functions to create, operate on, and close all required maps.
Prometheus metrics export for the load balancer.
Provides an HTTP endpoint for Prometheus scraping with metrics:
- lb_connections_active - Current active connections
- lb_bytes_total - Bytes transferred (forward/reverse)
- lb_packets_total - Packets transferred
- lb_backend_health - Backend health status (0/1)
- lb_health_check_latency_seconds - Health check latency histogram
- lb_dns_resolution_status - DNS resolution status
Usage:
;; In configuration
:settings {:metrics {:enabled true :port 9090 :path "/metrics"}}
;; Programmatic usage
(metrics/start! {:port 9090})
(metrics/register-data-sources! {...})
(metrics/stop!)

Collects and formats metrics in Prometheus text format.
Gathers data from various sources:
- Connection tracking (active connections, bytes, packets)
- Health checking (backend health status, latency)
- DNS resolution (resolution status)
- Stats aggregator (totals)
- Cluster (nodes, gossip messages, sync lag)
Histogram implementation for Prometheus metrics.
Provides cumulative bucket histograms compatible with Prometheus format.
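In the Prometheus format each `le` bucket is cumulative: it counts all observations less than or equal to its upper bound, with a final `+Inf` bucket equal to the total count. A minimal sketch (Python for illustration, not the library's implementation):

```python
import bisect

class Histogram:
    """Cumulative-bucket histogram in the Prometheus style."""
    def __init__(self, bounds):
        self.bounds = sorted(bounds)
        self.counts = [0] * (len(self.bounds) + 1)  # last slot is +Inf
        self.total = 0.0
        self.n = 0

    def observe(self, value):
        # bisect_left puts a value equal to a bound into that bound's
        # bucket, matching Prometheus's <= semantics for `le`.
        self.counts[bisect.bisect_left(self.bounds, value)] += 1
        self.total += value
        self.n += 1

    def cumulative(self):
        out, running = [], 0
        for le, c in zip(self.bounds + [float("inf")], self.counts):
            running += c
            out.append((le, running))  # (upper bound, samples <= bound)
        return out

h = Histogram([0.005, 0.05, 0.5])
for v in (0.001, 0.02, 0.3, 2.0):
    h.observe(v)
print(h.cumulative())  # [(0.005, 1), (0.05, 2), (0.5, 3), (inf, 4)]
```

Rendering is then a matter of emitting one `lb_..._bucket{le="..."}` line per tuple plus `_sum` and `_count`.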
HTTP server for Prometheus metrics endpoint.
Provides a simple HTTP server using Java's built-in HttpServer that serves Prometheus-formatted metrics on a configurable endpoint.
Common eBPF program fragments and DSL utilities shared between XDP and TC programs.
TC egress program for the load balancer. Handles reply packets from backends: performs SNAT to restore original destination.
TC ingress program for PROXY protocol v2 header injection.
This program runs on the TC ingress path (after XDP DNAT) and injects PROXY protocol v2 headers into the first data packet of each connection that has proxy-protocol enabled.
Flow:
1. Parse packet headers (Ethernet, IPv4/IPv6, TCP)
2. Look up the conntrack entry by 5-tuple
3. Check whether the proxy_enabled flag is set
4. Track TCP state (NEW -> SYN_SENT -> SYN_RECV -> ESTABLISHED)
5. On the first DATA packet in ESTABLISHED: inject the PROXY v2 header
6. Set the header_injected flag and seq_offset for subsequent packets
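The header injected in step 5 has a fixed wire format defined by the HAProxy PROXY protocol spec: a 12-byte signature, a version/command byte, a family/protocol byte, a big-endian length, then the address block. A userspace sketch of the bytes the TC program writes (Python for illustration):

```python
import socket, struct

SIG = b"\r\n\r\n\x00\r\nQUIT\n"  # fixed 12-byte PROXY v2 signature

def proxy_v2_header(src_ip, dst_ip, src_port, dst_port):
    """Build a PROXY protocol v2 header for TCP over IPv4 (per the spec)."""
    addrs = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
             + struct.pack("!HH", src_port, dst_port))
    # 0x21 = version 2 | PROXY command; 0x11 = AF_INET | STREAM (TCP4)
    return SIG + struct.pack("!BBH", 0x21, 0x11, len(addrs)) + addrs

hdr = proxy_v2_header("203.0.113.7", "10.0.0.5", 51234, 80)
print(len(hdr))  # 28 bytes: 12 signature + 4 fixed + 12 address block
```

This 28-byte growth is exactly why the program must maintain a seq_offset: every subsequent packet's sequence numbers have to be adjusted to account for the injected bytes.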
XDP ingress program for the load balancer. Handles incoming packets: parses headers, looks up routing, performs DNAT.
Rate limiting management for the load balancer.
Provides per-source IP and per-backend rate limiting using a token bucket algorithm. Rate limits are configured via BPF maps and enforced in the XDP program.
Token bucket parameters:
- rate: tokens added per second (requests/sec)
- burst: maximum tokens (handles traffic spikes)
Rate limiting is disabled by default (rate = 0).
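The token bucket with these two parameters behaves as follows (a userspace sketch in Python, not the XDP implementation; the refill math is the standard algorithm):

```python
class TokenBucket:
    """Token bucket matching the parameters above: `rate` tokens/sec added,
    `burst` maximum tokens; rate 0 disables limiting, as the docs state."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)  # start full
        self.last = 0.0

    def allow(self, now):
        if self.rate == 0:
            return True  # disabled: never limits
        # Refill based on elapsed time, capped at burst.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

tb = TokenBucket(rate=2, burst=2)
print([tb.allow(t) for t in (0.0, 0.0, 0.0, 0.5)])
# [True, True, False, True] - burst of 2 passes, the third packet is
# dropped, and half a second refills one token at 2 tokens/sec
```

In the real datapath the same state (tokens, last-refill timestamp) lives in a BPF map keyed by source IP or backend, and `now` comes from the kernel clock.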
Configuration hot reload for the load balancer.
Provides:
- File watching (inotify-based via Java NIO WatchService)
- SIGHUP signal handling
- Incremental configuration updates
- Validation before apply, with rollback on failure
Usage:
;; Enable hot reload for a config file
(enable-hot-reload! "/etc/lb/config.edn")
;; Manual reload
(reload-config!)
;; Disable hot reload
(disable-hot-reload!)
Statistics streaming and aggregation for the load balancer. Consumes events from the eBPF ring buffer and provides real-time stats.
Utility functions for IP address conversion, CIDR parsing, and binary encoding. Supports both IPv4 and IPv6 addresses with unified 16-byte internal format.
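The unified 16-byte format is the standard trick of storing IPv4 addresses in their IPv4-mapped IPv6 form (`::ffff:a.b.c.d`), so every address occupies the same map key width (Python for illustration; this sketches the convention, not the library's actual encoder):

```python
import ipaddress

def to16(addr):
    """Encode an IPv4 or IPv6 address as 16 bytes: IPv6 packs natively,
    IPv4 uses its IPv4-mapped IPv6 form (::ffff:a.b.c.d)."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 4:
        ip = ipaddress.ip_address("::ffff:" + addr)
    return ip.packed  # always 16 bytes

print(to16("192.0.2.1").hex())
# 00000000000000000000ffffc0000201
```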