Branch databases, not just code.
Datahike is a durable Datalog database with Datomic-compatible APIs and git-like semantics. Because it is built on persistent data structures with structural sharing, database snapshots are immutable values that can be held, shared, and queried anywhere, without locks or copying.
Key capabilities:
Distributed by design: Datahike is part of the replikativ ecosystem for decentralized data architectures.
Modern applications model increasingly complex relationships—social networks, organizational hierarchies, supply chains, knowledge graphs. Traditional SQL forces you to express graph queries through explicit joins, accumulating complexity as relationships grow. Datalog uses pattern matching over relationships: describe what you're looking for, not how to join tables.
As systems evolve, SQL schemas accumulate join complexity. What starts as simple tables becomes nested subqueries and ad-hoc graph features. Datalog treats relationships as first-class: transitive queries, recursive rules, and multi-database joins are natural to express. The result is maintainable queries that scale with relationship complexity. See Why Datalog? for detailed comparisons.
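As a sketch of what this looks like in practice, here is an illustrative Datalog query with a recursive rule. The `:parent` attribute, the `rules` definition, and the `conn`/`start-id` bindings are hypothetical examples, not part of Datahike's API; they only show how transitive relationships are expressed without nested subqueries:

```clojure
(require '[datahike.api :as d])

;; Illustrative rule set: an ancestor is a parent, or a parent's ancestor.
(def rules
  '[[(ancestor ?a ?b) [?a :parent ?b]]
    [(ancestor ?a ?b) [?a :parent ?c] (ancestor ?c ?b)]])

;; Find the names of all ancestors of a starting entity, transitively.
;; `conn` and `start-id` are assumed to exist as in the quickstart.
(d/q '[:find ?name
       :in $ % ?start
       :where
       (ancestor ?start ?anc)
       [?anc :name ?name]]
     @conn rules start-id)
```

The recursion lives in the rule, not the query: adding another level of hierarchy requires no query changes.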
Time is fundamental to information: most value derives from how facts evolve over time. Datahike's immutable design treats the database as an append-only log of facts, queryable at any point in history, enabling audit trails, time-travel debugging, and GDPR-compliant data excision. Immutability also powers Distributed Index Space: database snapshots are values that can be shared, cached, and queried without locks.
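These capabilities surface through Datomic-style time-travel functions in `datahike.api`. A minimal sketch, assuming a connection `conn` as created in the quickstart below:

```clojure
(require '[datahike.api :as d])

;; A snapshot is just a value; dereferencing never blocks writers.
(def db @conn)

;; The database as it was at a past point in time.
(d/as-of db #inst "2024-01-01")

;; Only facts added since a point in time.
(d/since db #inst "2024-01-01")

;; Every fact ever asserted or retracted, for audit trails.
(d/history db)
```

Each of these returns an ordinary database value that can be passed to `d/q` like the current snapshot.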
You can find API documentation on cljdoc and articles on Datahike on our company's blog page.
We have also presented Datahike at meetups, for example at:
Add to your dependencies:
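For a `deps.edn` project this looks roughly as follows; the version string is a placeholder, so check Clojars for the current release:

```clojure
;; deps.edn -- replace the placeholder with the latest version from Clojars
{:deps {io.replikativ/datahike {:mvn/version "<latest-version>"}}}
```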
We provide a stable API for the JVM; new functionality first lands as experimental/beta features and is merged into the stable API as it matures.
(require '[datahike.api :as d])

;; use the filesystem as storage medium
(def cfg {:store {:backend :file
                  :id #uuid "550e8400-e29b-41d4-a716-446655440000"
                  :path "/tmp/example"}})

;; create a database at this place; per default configuration we enforce a
;; strict schema and keep all historical data
(d/create-database cfg)

(def conn (d/connect cfg))

;; the first transaction will be the schema we are using;
;; you may also add this during database creation by adding :initial-tx
;; to the configuration
(d/transact conn [{:db/ident :name
                   :db/valueType :db.type/string
                   :db/cardinality :db.cardinality/one}
                  {:db/ident :age
                   :db/valueType :db.type/long
                   :db/cardinality :db.cardinality/one}])

;; let's add some data and wait for the transaction
(d/transact conn [{:name "Alice", :age 20}
                  {:name "Bob", :age 30}
                  {:name "Charlie", :age 40}
                  {:age 15}])

;; search the data
(d/q '[:find ?e ?n ?a
       :where
       [?e :name ?n]
       [?e :age ?a]]
     @conn)
;; => #{[3 "Alice" 20] [4 "Bob" 30] [5 "Charlie" 40]}

;; add new entity data using a hash map
(d/transact conn {:tx-data [{:db/id 3 :age 25}]})

;; if you want to work with queries like in
;; https://grishaev.me/en/datomic-query/,
;; you may use a hash map
(d/q {:query '{:find [?e ?n ?a]
               :where [[?e :name ?n]
                       [?e :age ?a]]}
      :args [@conn]})
;; => #{[5 "Charlie" 40] [4 "Bob" 30] [3 "Alice" 25]}

;; query the history of the data
(d/q '[:find ?a
       :where
       [?e :name "Alice"]
       [?e :age ?a]]
     (d/history @conn))
;; => #{[20] [25]}

;; you might need to release the connection for specific stores
(d/release conn)

;; clean up the database if it is not needed any more
(d/delete-database cfg)
The API namespace provides compatibility with a subset of Datomic functionality and should work as a drop-in replacement on the JVM. The rest of Datahike will be ported to core.async to coordinate IO in a platform-neutral manner.
📖 Complete Documentation Index - Organized by topic and skill level
Quick links:
For simple examples have a look at the projects in the examples folder.
Datahike has beta ClojureScript support for both Node.js (file backend) and browsers (IndexedDB with TieredStore for memory hierarchies).
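From ClojureScript the configuration mirrors the JVM quickstart. The sketch below is illustrative: the `:indexeddb` backend keyword and store options are assumptions, so consult the ClojureScript documentation for the exact configuration (the `:schema-flexibility :read` option appears in the JavaScript example below):

```clojure
;; ClojureScript in the browser -- backend keyword and options are
;; assumptions; see the ClojureScript docs for the exact configuration
(def cfg {:store {:backend :indexeddb
                  :id "my-app-db"}
          :schema-flexibility :read})
```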
JavaScript API (beta):
Install from npm:
npm install datahike@next
Example usage:
const d = require('datahike');
const crypto = require('crypto');

const config = {
  store: {
    backend: ':memory',
    id: crypto.randomUUID()
  },
  'schema-flexibility': ':read'
};

await d.createDatabase(config);
const conn = await d.connect(config);
await d.transact(conn, [{ name: 'Alice' }]);
const db = await d.db(conn);
const results = await d.q('[:find ?n :where [?e :name ?n]]', db);
console.log(results);
// => [['Alice']]
See JavaScript API documentation for details.
Browser with real-time sync: Combine IndexedDB storage with Kabel WebSocket middleware for offline-capable applications.
Native CLI tool (dthk) (beta): Compiled with GraalVM native-image for instant startup. Ships with file backend support, scriptable for quick queries and automation. Available in releases. See CLI documentation.
Babashka pod (beta): Native-compiled pod available in the Babashka pod registry for shell scripting. See Babashka pod documentation.
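A minimal usage sketch for Babashka scripts. The pod name, version placeholder, and exposed namespace are assumptions; consult the pod registry entry and the Babashka pod documentation for the exact coordinates:

```clojure
(require '[babashka.pods :as pods])

;; Pod name and namespace are assumptions -- check the pod registry.
;; Replace the placeholder with a concrete version from the registry.
(pods/load-pod 'replikativ/datahike "<version>")

(require '[datahike.pod :as d])
```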
Java API (beta): Comprehensive bindings with fluent builder pattern and automatic collection conversion. See Java API documentation for the full API guide and examples.
libdatahike (beta): C/C++ native bindings enable embedding Datahike in non-JVM applications. See libdatahike documentation.
Python bindings (beta): High-level Pythonic API with automatic EDN conversion. See Python documentation.
The Swedish Public Employment Service (Arbetsförmedlingen) has been using Datahike in production since 2024 to serve the JobTech Taxonomy (Arbetsmarknadstaxonomin) - a labour market terminology database with 40,000+ concepts representing occupations, skills, and education standards, accessed daily by thousands of case workers across Sweden.
Technical Highlights:
Resources:
This represents one of the most comprehensive open-source Datahike deployments, demonstrating production-readiness at government scale.
Stub is a comprehensive accounting and invoicing platform serving 5,000+ small businesses across South Africa. Built by Alexander Oloo with Datahike powering the core data layer.
Features: Invoicing with payment integration, double-entry bookkeeping, bank sync (Capitec, FNB, Absa, Nedbank), VAT tracking, inventory management, and financial reporting.
Heidelberg University uses Datahike in an internal emotion tracking application for psychological research (source not publicly available).
Proximum is a high-performance HNSW vector index designed for Datahike's persistent data model. It brings semantic search and RAG capabilities to Datahike while maintaining immutability and full audit history.
Key features:
See datahike.io/proximum for details. Integration into Datahike as a secondary index is coming soon.
Datahike is compositional by design—built from independent, reusable libraries that work together but can be used separately in your own systems. Each component is open source and maintained as part of the replikativ project.
Core libraries:
Advanced:
This modularity enables custom solutions across languages and runtimes: embed konserve in Python applications, use kabel for non-database real-time systems, or build entirely new databases on the same storage layer. Datahike demonstrates how these components work together, but you're not locked into our choices.
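For instance, konserve can be used on its own as a durable key-value store. The sketch below follows the API shown in konserve's README (function names and the `:sync?` option should be double-checked against the version you use):

```clojure
(require '[konserve.filestore :refer [connect-fs-store]]
         '[konserve.core :as k])

;; a durable key-value store backed by the filesystem
(def store (connect-fs-store "/tmp/kv-store" :opts {:sync? true}))

;; write and read nested values; {:sync? true} selects the blocking API
(k/assoc-in store [:users :alice] {:age 20} {:sync? true})
(k/get-in store [:users :alice] nil {:sync? true})
```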
Instead of providing a static roadmap, we work closely with the community to decide what will be worked on next in a dynamic and interactive way.
How it works:
Go to GitHub Discussions and upvote the ideas you'd like to see in Datahike. When we have capacity for a new feature, we address the most upvoted items.
You can also propose ideas yourself—either by adding them to Discussions or by creating a pull request. Note that due to backward compatibility considerations, some PRs may take time to integrate.
We are happy to provide commercial support. If you are interested in a particular feature, please contact us at contact@datahike.io.
Copyright © 2014–2026 Christian Weilbach et al.
Licensed under Eclipse Public License (see LICENSE).
Can you improve this documentation? These fine people already did:
Nikita Prokopov, Konrad Kühne, Christian Weilbach, Timo Kramer, Judith Massa, Judith, Anders Hovmöller, Rune Juhl Jacobsen, Yee Fay Lim, David Whittington, Tyler Pirtle, Ryan Sundberg, Robert Stuttaford, Francesco Sardo, zachcp, Coby Tamayo, jonasseglare, Nuttanart Pornprasitsakul, Mike Ivanov, Denis Krivosheev, Linus Ericsson, Matthias Nehlsen, Alejandro Gomez, Thomas Schranz, Vlad & JC