A component and helper functions to expose access to a specific schema accessible by FDB's record layer. This is agnostic to the schema which will need to be supplied to the component using the rough DDL exposed in `exoscale.vinyl.schema`
(all-of-range txn-context record-type items)
(as-isolation-level level)
(as-query q)
(as-scan-type t)
(between txn-context record-type items start end)
(continuation-range txn-context record-type items continuation)
Given a prefix and an optional continuation, both being a collection of keys, return a TupleRange catching all elements with the given prefix, starting from continuation if present.
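A minimal sketch of both call shapes, assuming a hypothetical `:Object` record type and illustrative key values:

```clojure
;; Without a continuation the range covers everything under the prefix.
(continuation-range db :Object ["prefix"] nil)
;; With a continuation (a collection of keys), the range starts from it.
(continuation-range db :Object ["prefix"] ["prefix" "last-seen"])
```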
(continuation-traversing-transduce db xform f val continuing-fn transduce-fn)
A transducer over large ranges. Results are reduced into an accumulator with the help of the reducing function `f` and the transformation `xform`. The accumulator is initialized to `init`. `clojure.core/reduced` is honored. Obviously, this approach does away with any consistency guarantees usually offered by FDB. `continuing-fn` is called at every step. Since results are accumulated in memory, care must also be taken with the accumulator.
(get-metadata this)
Return this context's record metadata
(new-runner this)
Return a runner to handle retryable logic
(run-async this f)
Run an asynchronous function against an FDBRecordStore. Protocolized so it can be called against the database or the store
(run-in-context this f)
Run a function against an FDBRecordStore. Protocolized so it can be called against the database or the store
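Since the protocol is implemented by both the database component and the record store, here is a hedged sketch of calling it against the database; the `:Object` record type and key are hypothetical:

```clojure
;; f receives the FDBRecordStore; here we load a single record in-context.
(run-in-context db (fn [store] (load-record store :Object ["some-id"])))
```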
(db-from-instance)
(db-from-instance cluster-file executor)
(db-from-instance cluster-file executor blocking-in-async-detection)
Build a valid FDB database from configuration. Use the standard cluster-file location or a specific one if instructed to do so. Set `BlockingInAsyncDetection` when the `blocking-in-async-detection` argument is supplied. It is of the form `:<COMPLETE>-complete-<BLOCKING>-blocking`:

* COMPLETE indicates how to report that `asyncToSync` was called from a future that is completed. It can be `ignore` or `warn`.
* BLOCKING indicates how to report that the CompletableFuture that was to be waited on was not complete. It can be `exception` or `warn`.
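A minimal sketch of the three arities, assuming an illustrative cluster-file path; the detection keyword combines the COMPLETE and BLOCKING values described above:

```clojure
;; Standard cluster-file location.
(db-from-instance)
;; Explicit cluster file and executor (nil assumed to mean the default here).
(db-from-instance "/etc/foundationdb/fdb.cluster" nil)
;; Warn on asyncToSync from a completed future, throw on incomplete futures.
(db-from-instance "/etc/foundationdb/fdb.cluster" nil
                  :warn-complete-exception-blocking)
```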
(delete-all-records txn-context)
(delete-by-key-component txn-context record-type items)
In cases where composite keys are used, this can be used to clear all records for a specific composite key prefix
(delete-by-prefix-scan txn-context record-type items)
Delete all records surfaced by a prefix scan. Beware that prefixes are open-ended and may delete under contiguous keys, since no exact prefix is assumed. This means that for `(delete-by-prefix-scan db :Object ["prefix"])` the following tuples are eligible for deletion:

- `["prefix" 0]`
- `["prefix" 1]`
- `["prefix2" 0]`
- `["prefix2" 1]`

To delete only keys with an exact tuple prefix, use `delete-by-tuple-prefix-scan`.
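A hedged illustration of the difference, using the tuples listed above:

```clojure
;; Open-ended prefix scan: matches ["prefix" ...] and ["prefix2" ...] alike.
(delete-by-prefix-scan db :Object ["prefix"])
;; Exact tuple prefix: matches ["prefix" ...] only, leaving ["prefix2" ...].
(delete-by-tuple-prefix-scan db :Object ["prefix"])
```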
(delete-by-query txn-context query)
Delete all records surfaced by a query
(delete-by-range txn-context range)
(delete-by-tuple-prefix-scan txn-context record-type items)
A variant of `delete-by-prefix-scan` which only considers exact tuple prefixes.
(delete-record txn-context k)
(delete-record txn-context record-type items)
(deserialize {:exoscale.vinyl.store/keys [metadata builder]} key-value)
Deserialize a `com.apple.foundationdb.KeyValue` into a `com.google.protobuf.DynamicMessage`.
(execute-properties {:exoscale.vinyl.store/keys [fail-on-scan-limit-reached?
isolation-level skip limit]
:as props})
(execute-query txn-context query)
(execute-query txn-context query opts)
(execute-query txn-context query opts values)
(exists? txn-context k)
(exists? txn-context record-type items)
(greater-than-range txn-context record-type items)
(initialize schema-name descriptor schema)
(initialize schema-name descriptor schema opts)
(insert-record txn-context record)
(insert-record-batch txn-context batch)
(iterator-query txn-context query)
(iterator-query txn-context query opts)
(key-for db record-type & args)
(key-for* db record-type items)
(list-query txn-context query)
(list-query txn-context query opts)
(list-query txn-context query opts values)
(load-record txn-context k)
(load-record txn-context record-type items)
(long-index-transduce db xform f init index-name scan-type range opts)
A transducer over large indices. Results are reduced into an accumulator with the help of the reducing function `f`. The accumulator is initialized to `init`. `clojure.core/reduced` is honored. Obviously, this approach does away with any consistency guarantees usually offered by FDB. Since results are accumulated in memory, care must also be taken with the accumulator.
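A sketch under stated assumptions: `"by-size"` is a hypothetical index name, and the scan-type keyword, range, and entry handling are illustrative only, not library facts:

```clojure
;; Collect a value out of each index entry over the full index range.
(long-index-transduce db
                      (map record-primary-key) ;; xform (assumed applicable)
                      conj                     ;; reducing function f
                      []                       ;; init
                      "by-size"                ;; hypothetical index name
                      :by-value                ;; scan-type (assumed value)
                      nil                      ;; range (assumed: full scan)
                      {})
```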
(long-query-reduce db f val query)
(long-query-reduce db
f
val
query
{:exoscale.vinyl.store/keys [values] :as opts})
(long-query-reduce db f init query opts values)
A reducer over large queries. Accepts queries as per `execute-query`. Results are reduced into an accumulator with the help of the reducing function `f`. The accumulator is initialized to `init`. `clojure.core/reduced` is honored. Obviously, this approach does away with any consistency guarantees usually offered by FDB. Since results are accumulated in memory, care must also be taken with the accumulator.
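A minimal sketch: `q` stands for any query accepted by `execute-query`, and the early exit shows `clojure.core/reduced` being honored:

```clojure
;; Count matching records, short-circuiting past 1000; q is hypothetical.
(long-query-reduce db
                   (fn [acc _record]
                     (if (>= acc 1000) (reduced acc) (inc acc)))
                   0
                   q)
```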
(long-range-reduce db f val record-type items)
(long-range-reduce db f val record-type items opts)
A reducer over large ranges. Results are reduced into an accumulator with the help of the reducing function `f`. The accumulator is initialized to `init`. `clojure.core/reduced` is honored. Obviously, this approach does away with any consistency guarantees usually offered by FDB. Since results are accumulated in memory, care must also be taken with the accumulator.
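A minimal sketch, assuming a hypothetical `:Object` record type keyed under `["prefix"]`:

```clojure
;; Collect the primary key of every record in the range into a vector.
(long-range-reduce db
                   (fn [acc record] (conj acc (record-primary-key record)))
                   []
                   :Object
                   ["prefix"])
```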
(long-range-transduce db xform f val record-type items)
(long-range-transduce db
xform
f
val
record-type
items
{:exoscale.vinyl.store/keys [raw?] :as opts})
A transducer over large ranges. Except for the addition of `xform`, it behaves like `long-range-reduce`.
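The same sketch as for `long-range-reduce`, moving the key extraction into the `xform`:

```clojure
;; Equivalent to the long-range-reduce example above, as a transduce.
(long-range-transduce db (map record-primary-key) conj [] :Object ["prefix"])
```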
(metadata-index metadata index-name)
(prefix-range txn-context record-type items)
(record-primary-key r)
(record-store-builder)
Yield a new record store builder
(reindex db index-name)
(reindex db
index-name
{:exoscale.vinyl.store/keys [limit records-per-second max-retries
progress-log-interval]
:or {limit 100
records-per-second 10000
max-retries 100
progress-log-interval 10000}})
The purpose of the `reindex` function is to force the recalculation of the `index-name`, regardless of the current state of the index (whether it is disabled, readable, or writeable). Even if the index is in a readable state, invoking this function will change its state to writeable, making it inaccessible for any operations that require read access.

The `reindex` function is particularly useful when a new index is introduced but is not being accessed by any processes at the moment. By calling `reindex`, you ensure that the index is populated with the latest data. When introducing an index on EXISTING records, it is important to follow these steps (a hedged sketch follows the parameter table below):

1. Declare the new index in the metadata definition of the record store. This step ensures that the necessary information about the index is included and any record store operations will start populating it.
2. Use the `reindex` function to populate the index from scratch with existing records. Note that during this process the index remains writeable. As a result, it is crucial to exercise caution with non-idempotent operations that might lead to unintended actions (e.g. when this reaches that state, do this...).
3. Validate the state of the index through ad-hoc queries, ideally from a database dump, to guarantee that what has been computed corresponds to what you expect.
4. Begin reading from it and rely on it to make decisions. At this point, the index is in a readable state, allowing you to access the data it contains.

Following these steps is important to ensure your index has been correctly populated.

| Param key | Description |
| --- | --- |
| `limit` | Default number of records to attempt to process in a single transaction. Defaults to 100 |
| `records-per-second` | Default limit on the number of records to attempt in a single second. Defaults to 10'000 |
| `max-retries` | Default number of times to retry a single range rebuild. Defaults to 100 |
| `progress-log-interval` | Default interval, in milliseconds, at which to log successful progress when building across transactions (-1 disables logging). Defaults to 10'000 |
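A hedged sketch of a rebuild with some of the defaults from the table overridden; the `"by-size"` index name and all values are illustrative only:

```clojure
;; Rebuild a hypothetical index with smaller transactions and slower pacing.
(reindex db "by-size"
         #:exoscale.vinyl.store{:limit 50
                                :records-per-second 5000
                                :progress-log-interval 10000})
```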
(runner-opts runner
{:exoscale.vinyl.store/keys [max-attempts initial-delay max-delay
transaction-timeout]})
(save-record txn-context record)
(save-record-batch txn-context batch)
(scan-index txn-context index-name scan-type range continuation opts)
(scan-prefix txn-context record-type items opts)
(scan-properties {:exoscale.vinyl.store/keys [reverse?] :as props})
(scan-properties props reverse?)
(scan-range txn-context range opts)
(store-query-fn query
{:exoscale.vinyl.store/keys [values intercept-plan-fn log-plan?]
:as opts})
This builds a directory structure of /$environment/$schema
(wrapped-runner db opts)