Within Clojure, we call `start-node` from within `crux.api`, passing it a set of options for the node. There are a number of different configuration options a Crux node can have, grouped into topologies.
Name | Transaction Log | Topology |
---|---|---|
Standalone | Uses local event log | `:crux.standalone/topology` |
Kafka | Uses Kafka | `:crux.kafka/topology` |
JDBC | Uses JDBC event log | `:crux.jdbc/topology` |
Use a Kafka node when horizontal scalability is required or when you want the guarantees that Kafka offers in terms of resiliency, availability and retention of data.
Multiple Kafka nodes participate in a cluster with Kafka as the primary store and as the central means of coordination.
The JDBC node is useful when you don’t want the overhead of maintaining a Kafka cluster. Read more about the motivations of this setup here.
The Standalone node is a single Crux instance which has everything it needs locally. This is good for experimenting with Crux and for small to medium-sized deployments, where running a single instance is permissible.
Crux nodes implement the `ICruxAPI` interface and are the starting point for making use of Crux. Nodes also implement `java.io.Closeable` and can therefore be lifecycle managed.
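For example, a node can be opened and closed with `with-open` like any other `Closeable` resource. A minimal sketch, assuming the in-memory KV store mentioned later on this page (`crux.kv.memdb/kv`) and the option keys `:crux.node/kv-store` and `:crux.standalone/event-log-kv-store`:

```clojure
(require '[crux.api :as crux])

;; Nodes implement java.io.Closeable, so with-open closes the node
;; (releasing its resources) when the body exits.
(with-open [node (crux/start-node {:crux.node/topology :crux.standalone/topology
                                   :crux.node/kv-store 'crux.kv.memdb/kv
                                   :crux.standalone/event-log-kv-store 'crux.kv.memdb/kv})]
  ;; obtain a database value from the node
  (crux/db node))
```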
The following properties are part of `crux.node`, the topology used as a base for the other topologies:
Property | Default Value |
---|---|
|
|
|
|
The following set of options is used by KV backend implementations, defined within `crux.kv`:
Property | Description | Default Value |
---|---|---|
| Directory to store K/V files | data |
| Sync the KV store to disk after every write? | false |
| Check and store index version upon start? | true |
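These options go in the same map that is passed to `start-node`, alongside the topology. A sketch of the relevant entries, assuming the property keys live in the `crux.kv` namespace (e.g. `:crux.kv/db-dir` and `:crux.kv/sync?`):

```clojure
;; KV options sit alongside the topology in the start-node options map.
;; The exact keywords are assumed here from the crux.kv namespace.
{:crux.node/topology :crux.standalone/topology
 :crux.kv/db-dir "data"   ; directory for the K/V files
 :crux.kv/sync? true}     ; fsync the KV store after every write
```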
Using a Crux standalone node is the best way to get started. Once you’ve started a standalone Crux instance as described below, you can then follow the getting started example.
Property | Description | Default Value |
---|---|---|
| Key/Value store to use for standalone event-log persistence | 'crux.kv.rocksdb/kv |
| Directory used to store the event-log, also used for backup/restore | |
| Sync the event-log backend KV store to disk after every write? | false |
Project Dependency
link:./deps.edn[role=include]
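A deps.edn sketch, assuming the core artifact is published as `juxt/crux-core`; pin `:mvn/version` to the Crux release you are targeting:

```clojure
;; deps.edn (sketch) -- coordinate assumed, version is a placeholder
{:deps {juxt/crux-core {:mvn/version "<crux-version>"}}}
```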
Getting started
The following code creates a node which runs completely within memory (with both the event-log store and db store using `crux.kv.memdb/kv`):
link:./src/docs/examples.clj[role=include]
link:./src/docs/examples.clj[role=include]
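A sketch of such a call, assuming the option keys `:crux.node/kv-store` and `:crux.standalone/event-log-kv-store`:

```clojure
(require '[crux.api :as crux])

;; Everything is held in memory: both the node's KV store and the
;; standalone event log use crux.kv.memdb/kv, so no data is persisted.
(def node
  (crux/start-node {:crux.node/topology :crux.standalone/topology
                    :crux.node/kv-store 'crux.kv.memdb/kv
                    :crux.standalone/event-log-kv-store 'crux.kv.memdb/kv}))
```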
You can later stop the node if you wish:
link:./src/docs/examples.clj[role=include]
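Since the node is `Closeable`, stopping it is simply:

```clojure
;; close the node and release its resources
(.close node)
```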
RocksDB is used, by default, as Crux's primary store (in place of the in-memory KV store in the example above). In order to use RocksDB within Crux, however, you must first add RocksDB as a project dependency:
Project Dependency
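A deps.edn sketch, assuming the RocksDB module is published as `juxt/crux-rocksdb`:

```clojure
;; deps.edn (sketch) -- coordinates assumed, versions are placeholders
{:deps {juxt/crux-core    {:mvn/version "<crux-version>"}
        juxt/crux-rocksdb {:mvn/version "<crux-version>"}}}
```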
Starting a node using RocksDB
link:./src/docs/examples.clj[role=include]
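A sketch of starting such a node; since RocksDB is the default KV store, only the storage directories need to be supplied (the `:crux.kv/db-dir` and `:crux.standalone/event-log-dir` keys are assumed from the tables above):

```clojure
(require '[crux.api :as crux])

;; RocksDB is the default KV store, so we only point Crux at the
;; directories it should use for the indexes and the event log.
(def node
  (crux/start-node {:crux.node/topology :crux.standalone/topology
                    :crux.kv/db-dir "data"
                    :crux.standalone/event-log-dir "event-log-data"}))
```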
An alternative to RocksDB, LMDB provides faster queries in exchange for a slower ingest rate.
Project Dependency
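A deps.edn sketch, assuming the LMDB module is published as `juxt/crux-lmdb`:

```clojure
;; deps.edn (sketch) -- coordinates assumed, versions are placeholders
{:deps {juxt/crux-core {:mvn/version "<crux-version>"}
        juxt/crux-lmdb {:mvn/version "<crux-version>"}}}
```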
Starting a node using LMDB
link:./src/docs/examples.clj[role=include]
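A sketch, assuming the LMDB implementation is exposed as `crux.kv.lmdb/kv` and the same option keys as in the examples above:

```clojure
(require '[crux.api :as crux])

;; Use LMDB for both the node's KV store and the standalone event log.
(def node
  (crux/start-node {:crux.node/topology :crux.standalone/topology
                    :crux.node/kv-store 'crux.kv.lmdb/kv
                    :crux.standalone/event-log-kv-store 'crux.kv.lmdb/kv
                    :crux.kv/db-dir "data"
                    :crux.standalone/event-log-dir "event-log-data"}))
```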
When using Crux at scale it is recommended to use multiple Crux nodes connected via a Kafka cluster.
Kafka nodes have the following properties:
Property | Description | Default value |
---|---|---|
| URL for connecting to Kafka | localhost:9092 |
| Name of Kafka transaction log topic | crux-transaction-log |
| Name of Kafka documents topic | crux-docs |
| Option to automatically create Kafka topics if they do not already exist | true |
| Number of partitions for the document topic | 1 |
| Number of times to replicate data on Kafka | 1 |
| File to supply Kafka connection properties to the underlying Kafka API | |
| Map to supply Kafka connection properties to the underlying Kafka API |
Project Dependencies
link:./deps.edn[role=include]
link:./deps.edn[role=include]
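A deps.edn sketch, assuming the Kafka module is published as `juxt/crux-kafka`; `juxt/crux-rocksdb` is included too, since the default KV store is RocksDB (as noted below):

```clojure
;; deps.edn (sketch) -- coordinates assumed, versions are placeholders
{:deps {juxt/crux-core    {:mvn/version "<crux-version>"}
        juxt/crux-kafka   {:mvn/version "<crux-version>"}
        juxt/crux-rocksdb {:mvn/version "<crux-version>"}}}
```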
Getting started
Use the API to start a Kafka node, configuring it with the `bootstrap-servers` property in order to connect to Kafka:
link:./src/docs/examples.clj[role=include]
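A sketch of such a call, assuming the `:crux.kafka/bootstrap-servers` key from the table above:

```clojure
(require '[crux.api :as crux])

;; Connect this node to the Kafka cluster that acts as the primary store.
(def node
  (crux/start-node {:crux.node/topology :crux.kafka/topology
                    :crux.kafka/bootstrap-servers "localhost:9092"}))
```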
If you don't specify `kv-store` then by default the Kafka node will use RocksDB. You will need to add RocksDB to your list of project dependencies.
You can later stop the node if you wish:
link:./src/docs/examples.clj[role=include]
Crux is ready to work with an embedded Kafka for when you don’t have an independently running Kafka available to connect to (such as during development).
Project Dependencies
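A deps.edn sketch, assuming the embedded Kafka module is published as `juxt/crux-kafka-embedded`:

```clojure
;; deps.edn (sketch) -- coordinate assumed, version is a placeholder
{:deps {juxt/crux-kafka-embedded {:mvn/version "<crux-version>"}}}
```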
Getting started
link:./src/docs/examples.clj[role=include]
link:./src/docs/examples.clj[role=include]
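A sketch of starting an embedded Kafka and then a Crux node against it; the `crux.kafka.embedded/start-embedded-kafka` function and its option keys are assumptions about the crux-kafka-embedded module:

```clojure
(require '[crux.api :as crux]
         '[crux.kafka.embedded :as ek])

;; Start an in-process Kafka (and ZooKeeper) for development use.
(def embedded-kafka
  (ek/start-embedded-kafka
    {:crux.kafka.embedded/zookeeper-data-dir "dev/zookeeper"
     :crux.kafka.embedded/kafka-log-dir "dev/kafka-log"
     :crux.kafka.embedded/kafka-port 9092}))

;; A Kafka node can then connect to it as usual.
(def node
  (crux/start-node {:crux.node/topology :crux.kafka/topology
                    :crux.kafka/bootstrap-servers "localhost:9092"}))
```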
You can later stop the Embedded Kafka if you wish:
link:./src/docs/examples.clj[role=include]
JDBC Nodes use `next.jdbc` internally and pass through the relevant configuration options that you can find here.
Below is the minimal configuration you will need:
Property | Description |
---|---|
| One of: postgresql, oracle, mysql, h2, sqlite |
| Database Name |
Depending on the type of JDBC database used, you may also need some of the following properties:
Property | Description |
---|---|
| For h2 and sqlite |
| Database Host |
| Database Username |
| Database Password |
Project Dependencies
link:./deps.edn[role=include]
link:./deps.edn[role=include]
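A deps.edn sketch, assuming the JDBC module is published as `juxt/crux-jdbc`; you will also need the JDBC driver for your chosen database (PostgreSQL is shown purely as an example):

```clojure
;; deps.edn (sketch) -- coordinates assumed, versions are placeholders
{:deps {juxt/crux-core            {:mvn/version "<crux-version>"}
        juxt/crux-jdbc            {:mvn/version "<crux-version>"}
        org.postgresql/postgresql {:mvn/version "<driver-version>"}}}
```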
Getting started
Use the API to start a JDBC node, configuring it with the required parameters:
link:./src/docs/examples.clj[role=include]
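A sketch of such a call; the `:crux.jdbc/*` keys are assumed to mirror the property names described in the tables above:

```clojure
(require '[crux.api :as crux])

;; Minimal JDBC-backed node (PostgreSQL shown as an example database).
(def node
  (crux/start-node {:crux.node/topology :crux.jdbc/topology
                    :crux.jdbc/dbtype "postgresql"
                    :crux.jdbc/dbname "cruxdb"
                    :crux.jdbc/host "localhost"
                    :crux.jdbc/user "crux"
                    :crux.jdbc/password "crux"}))
```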
Crux can be used programmatically as a library, but it also ships with an embedded HTTP server that allows clients to use the API remotely via REST.
Set the `server-port` configuration property on a Crux node to expose an HTTP port that will accept REST requests:
Component | Property | Description |
---|---|---|
http-server | | Port for the Crux HTTP server |
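A sketch, assuming the port is supplied in the `start-node` options map under a `:crux.http-server/port` key (the exact key name is not shown in the table above, so treat it as an assumption) and that the crux-http-server module is on the classpath:

```clojure
(require '[crux.api :as crux])

;; Standalone node that also exposes the embedded HTTP server on port 3000.
(def node
  (crux/start-node {:crux.node/topology :crux.standalone/topology
                    :crux.kv/db-dir "data"
                    :crux.standalone/event-log-dir "event-log-data"
                    :crux.http-server/port 3000}))
```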
Visit the guide on using the REST API for examples of how to interact with Crux over HTTP.
If you wish to use Crux with Docker (no JVM/JDK/Clojure install required!) we have a few separate images:
Standalone web service example: This example web application has an embedded Crux node & HTTP server, showcasing some of Crux's bitemporal features and its backup/restore functionality, and allowing experimentation with the REST API.
Crux HTTP Node: An image of a Crux node & HTTP server, useful if you wish to run a freestanding Crux node accessible over HTTP using only Docker. It allows you to customize the configuration of the node & logging, and optionally opens an nREPL/pREPL port.
Crux provides utility APIs for local backup and restore when you are using the standalone mode. For an example of usage, see the standalone web service example.
An additional backup and restore example, which only applies to a stopped standalone node, is provided here.
In a clustered deployment, only Kafka’s official backup and restore functionality should be relied on to provide safe durability. The standalone mode’s backup and restore operations can instead be used for creating operational snapshots of a node’s indexes for scaling purposes.