Copy a database with history via transaction logs. Based on code from Cognitect.
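A minimal sketch of the core idea against the Datomic client API (`d/tx-range`, `d/transact`). It deliberately omits the essential step of remapping source entity IDs to target entity IDs, which the real code must handle:

```clojure
(ns example.copy
  (:require [datomic.client.api :as d]))

(defn copy-history!
  "Replay the source tx log onto the target. Simplified: a faithful copy
  must remap entity IDs (source IDs are meaningless in the target) and
  skip the bootstrap transactions every fresh database already has."
  [source-conn target-conn]
  (doseq [{:keys [data]} (d/tx-range source-conn {:start 0 :limit -1})]
    (d/transact target-conn
                {:tx-data (for [datom data]
                            [(if (:added datom) :db/add :db/retract)
                             (:e datom) (:a datom) (:v datom)])})))
```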
Continuous restore functionality that polls a TransactionStore and applies new segments to a target database as they become available.

This namespace provides background restore that:

- Pre-fetches segments ahead of the consumer using core.async
- Uses exponential backoff on errors
- Supports graceful shutdown via a running? flag
- Logs structured maps for CloudWatch integration
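A sketch of how those pieces might fit together; `fetch-next-segment!` and `apply-segment!` are hypothetical parameters standing in for the store and restore operations:

```clojure
(ns example.continuous-restore
  (:require [clojure.core.async :as a]))

(defn start-restore!
  "Fetch segments with `fetch-next-segment!` and apply them with
  `apply-segment!` (both hypothetical). Returns the running? atom;
  reset it to false for graceful shutdown."
  [fetch-next-segment! apply-segment!]
  (let [running? (atom true)
        segments (a/chan 4)]               ; pre-fetch up to 4 segments ahead
    ;; Producer: poll the store; exponential backoff on errors, capped at 60s.
    (a/thread
      (loop [backoff-ms 1000]
        (when @running?
          (recur
           (try
             (if-let [seg (fetch-next-segment!)]
               (do (a/>!! segments seg) 1000)  ; success: reset backoff
               (do (Thread/sleep 5000) 1000))  ; nothing new yet: poll again
             (catch Exception _
               (Thread/sleep backoff-ms)
               (min (* 2 backoff-ms) 60000))))))
      (a/close! segments))
    ;; Consumer: apply segments in order as the producer delivers them.
    (a/thread
      (loop []
        (when-let [seg (a/<!! segments)]
          (apply-segment! seg)
          (recur))))
    running?))
```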
LRU cache for long->long mappings with bounded size.

This is a simple LRU cache - it doesn't know anything about Datomic IDs or entity indices. That logic belongs in the restore code that uses this cache.
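The implementation isn't shown here, but one conventional JVM approach is a LinkedHashMap in access order with an eviction hook; a sketch:

```clojure
(ns example.lru
  (:import [java.util Collections LinkedHashMap Map]))

(defn lru-cache
  "Bounded map that evicts the least-recently-accessed entry once
  `capacity` is exceeded."
  ^Map [capacity]
  (Collections/synchronizedMap
   (proxy [LinkedHashMap] [16 (float 0.75) true]  ; true = access order
     (removeEldestEntry [_eldest]
       (> (count this) capacity)))))

;; Usage: the restore code stores source->target ID mappings here.
(comment
  (def ids (lru-cache 2))
  (.put ids 1 100)
  (.put ids 2 200)
  (.get ids 1)      ; touch 1 so it is most recently used
  (.put ids 3 300)  ; evicts 2, the least recently used
  (.get ids 2))     ; => nil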
A TransactionStore implementation that reads directly from a live Datomic database connection.

This store provides read-only access to transaction history from a source database, presenting it through the same TransactionStore interface used by backup stores. Useful for direct database-to-database cloning without intermediate storage.
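The TransactionStore protocol itself isn't reproduced here, so the shape below is an assumption; the point is that `d/tx-range` lets a live connection answer the same segment reads a backup store would:

```clojure
(ns example.conn-store
  (:require [datomic.client.api :as d]))

;; Assumed shape of the TransactionStore read side; the real protocol
;; may differ.
(defprotocol TransactionStore
  (read-segment [store start-t]
    "Return up to one segment of transactions starting at start-t, or nil."))

(defrecord ConnStore [conn segment-size]
  TransactionStore
  (read-segment [_ start-t]
    ;; Reads straight from the source's transaction log, so no
    ;; intermediate backup storage is needed.
    (seq (d/tx-range conn {:start start-t :limit segment-size}))))
```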
JSON logging support for CloudWatch integration.
Log messages should be maps with a :msg key for the human-readable message.
Additional keys become structured data that CloudWatch Logs Insights can query.
Example:
```
(log/info {:msg "Segment restored"
           :segment-start 1000
           :segment-end 2000
           :duration-ms 1234})
```
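A sketch of the emitting side, assuming Cheshire for JSON encoding (the library's actual logger is not shown):

```clojure
(ns example.log
  (:require [cheshire.core :as json]))

(defn- emit!
  "Write one log entry as a single JSON line; CloudWatch Logs Insights
  parses each top-level key into a queryable field."
  [level m]
  (println (json/generate-string
            (assoc m :level level :timestamp (System/currentTimeMillis)))))

(defn info  [m] (emit! :info m))
(defn error [m] (emit! :error m))
```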
CLI entry point for continuous live replication.
Run with:
```
clj -X:replicate \
:source-client-config '{:server-type :cloud :region "us-east-1" :system "prod" :endpoint "https://..."}' \
:source-db-name '"my-database"' \
:target-client-config '{:server-type :cloud :region "us-west-2" :system "dr" :endpoint "https://..."}' \
:target-db-name '"my-database-replica"'
```
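The alias implies an exec function taking exactly those keys; a sketch with hypothetical names (`replicate`, `start-replication!`):

```clojure
(ns example.replicate
  (:require [datomic.client.api :as d]))

(defn- start-replication!
  "Placeholder for the continuous-restore loop described above (hypothetical)."
  [source-conn target-conn]
  (println "replicating" source-conn "->" target-conn))

(defn replicate
  "clj -X entry point. Key names mirror the CLI arguments."
  [{:keys [source-client-config source-db-name
           target-client-config target-db-name]}]
  (let [source-conn (d/connect (d/client source-client-config)
                               {:db-name source-db-name})
        target-conn (d/connect (d/client target-client-config)
                               {:db-name target-db-name})]
    (start-replication! source-conn target-conn)))
```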
AWS implementation of a TransactionStore on S3.

NOTES:

- Since S3 is eventually consistent, it is possible that any reported data from methods like `last-segment-info` are incorrect. This should be OK in most cases because backing up a segment is an idempotent operation.
- S3 bills by request:
  - Choosing a small number of transactions per segment leads to more PUT requests.
  - `last-segment-info` is the cheapest way to figure out where to resume, since it is a single request. Using `saved-segment-info` lists all objects in the store, and can result in a lot of requests and network overhead.
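To make the cost difference concrete, a sketch using Cognitect's aws-api with a purely hypothetical key layout (one object per segment plus a small marker object rewritten after each backup; the store's real layout is not documented here):

```clojure
(ns example.s3-store
  (:require [cognitect.aws.client.api :as aws]))

(def s3 (aws/client {:api :s3}))

(defn last-segment-info
  "One GetObject request against a marker key: the cheap way to resume."
  [bucket prefix]
  (-> (aws/invoke s3 {:op :GetObject
                      :request {:Bucket bucket
                                :Key (str prefix "/last-segment")}})
      :Body slurp read-string))

(defn saved-segment-info
  "ListObjectsV2 over every segment object: one request per 1000 keys,
  so large backups mean many requests. (A full version would follow
  :NextContinuationToken to page through all results.)"
  [bucket prefix]
  (->> (aws/invoke s3 {:op :ListObjectsV2
                       :request {:Bucket bucket
                                 :Prefix (str prefix "/segments/")}})
       :Contents
       (map :Key)))
```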