
exoscale.vinyl.store

A component and helper functions exposing access to a specific schema
through FDB's record layer.

This is agnostic to the schema, which will need to be supplied to the
component using the rough DDL exposed in `exoscale.vinyl.schema`.
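As a minimal sketch, initializing a store component might look like the
following; the schema name, protobuf descriptor, and schema value are
placeholders for whatever your application defines:

```clojure
(require '[exoscale.vinyl.store :as store])

;; `my-descriptor` and `my-schema` are hypothetical: a protobuf file
;; descriptor and a schema built with exoscale.vinyl.schema.
(def db
  (store/initialize "my-schema" my-descriptor my-schema))
```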

all-of-range

(all-of-range txn-context record-type items)

as-isolation-level

(as-isolation-level level)

as-query

(as-query q)

as-scan-type

(as-scan-type t)

between

(between txn-context record-type items start end)

continuation-range

(continuation-range txn-context record-type items continuation)

Given a prefix and an optional continuation, both being a collection of
keys, return a `TupleRange` covering all elements with the given prefix,
starting from the continuation if present.
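For instance, a paginated scan might build its ranges like this (record
type, prefix, and continuation are illustrative):

```clojure
;; First page: no continuation yet.
(continuation-range db :Object ["some" "prefix"] nil)
;; Later pages: resume from the continuation the previous scan returned.
(continuation-range db :Object ["some" "prefix"] last-continuation)
```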

continuation-traversing-transduce

(continuation-traversing-transduce db xform f val continuing-fn transduce-fn)

A transduce over large ranges. Results are reduced into an accumulator
with the help of the reducing function `f` and transformation `xform`.
The accumulator is initialized to `val`. `clojure.core/reduced` is honored.

Obviously, this approach does away with any consistency guarantees usually
offered by FDB. `continuing-fn` is called at every step.

Since results are accumulated in memory, care must also be taken with the
accumulator.

DatabaseContext protocol

get-metadata

(get-metadata this)

Return this context's record metadata.

new-runner

(new-runner this)

Return a runner to handle retryable logic.

run-async

(run-async this f)

Run an asynchronous function against an FDBRecordStore. Protocolized so
it can be called against the database or the store.

run-in-context

(run-in-context this f)

Run a function against an FDBRecordStore. Protocolized so it can be
called against the database or the store.
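A sketch of the transactional helper, assuming the supplied function
receives the opened store (an assumption based on the docstring; key
items are illustrative):

```clojure
;; Load a record inside a retryable transaction.
(run-in-context db (fn [store] (load-record store :Object ["account-id"])))
```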

db-from-instance

(db-from-instance)
(db-from-instance cluster-file executor)
(db-from-instance cluster-file executor blocking-in-async-detection)

Build a valid FDB database from configuration. Use the standard
cluster-file location, or a specific one if instructed to do so.

Set `BlockingInAsyncDetection` when the `blocking-in-async-detection`
argument is provided. It is of the form:
`:<COMPLETE>-complete-<BLOCKING>-blocking`

- COMPLETE indicates how to report that `asyncToSync` was called from a
  future that is completed. It can be `ignore` or `warn`.
- BLOCKING indicates how to report that the `CompletableFuture` that was
  to be waited on was not complete. It can be `exception` or `warn`.
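As an illustration of the detection keyword, combining `warn` for the
COMPLETE part with `exception` for the BLOCKING part gives the value
below; the cluster file path and executor are illustrative:

```clojure
(db-from-instance "/etc/foundationdb/fdb.cluster"
                  my-executor
                  :warn-complete-exception-blocking)
```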

delete-all-records

(delete-all-records txn-context)

delete-by-key-component

(delete-by-key-component txn-context record-type items)

In cases where composite keys are used, this can be used to clear all
records for a specific composite key prefix.

delete-by-prefix-scan

(delete-by-prefix-scan txn-context record-type items)

Delete all records surfaced by a prefix scan. Beware that prefixes are
open-ended and may delete records under contiguous keys, since no exact
prefix is assumed.

This means that for `(delete-by-prefix-scan db :Object ["prefix"])`
the following tuples are eligible for deletion:

- ["prefix" 0]
- ["prefix" 1]
- ["prefix2" 0]
- ["prefix2" 1]

To delete only keys with an exact tuple prefix, use
`delete-by-tuple-prefix-scan`.

delete-by-query

(delete-by-query txn-context query)

Delete all records surfaced by a query.

delete-by-range

(delete-by-range txn-context range)

delete-by-tuple-prefix-scan

(delete-by-tuple-prefix-scan txn-context record-type items)

A variant of `delete-by-prefix-scan` which only considers exact tuple prefixes.
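Continuing the example above, a sketch of the exact-prefix variant:

```clojure
;; Deletes ["prefix" 0] and ["prefix" 1], but leaves ["prefix2" 0]
;; and ["prefix2" 1] untouched.
(delete-by-tuple-prefix-scan db :Object ["prefix"])
```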

delete-record

(delete-record txn-context k)
(delete-record txn-context record-type items)

deserialize

(deserialize {:exoscale.vinyl.store/keys [metadata builder]} key-value)

Deserialize a `com.apple.foundationdb.KeyValue` into a
`com.google.protobuf.DynamicMessage`.

execute-properties

(execute-properties {:exoscale.vinyl.store/keys [fail-on-scan-limit-reached?
                                                 isolation-level skip limit]
                     :as props})

execute-query

(execute-query txn-context query)
(execute-query txn-context query opts)
(execute-query txn-context query opts values)

exists?

(exists? txn-context k)
(exists? txn-context record-type items)

greater-than-range

(greater-than-range txn-context record-type items)

initialize

(initialize schema-name descriptor schema)
(initialize schema-name descriptor schema opts)

insert-record

(insert-record txn-context record)

insert-record-batch

(insert-record-batch txn-context batch)

iterator-query

(iterator-query txn-context query)
(iterator-query txn-context query opts)

key-for

(key-for db record-type & args)

key-for*

(key-for* db record-type items)

list-query

(list-query txn-context query)
(list-query txn-context query opts)
(list-query txn-context query opts values)

load-record

(load-record txn-context k)
(load-record txn-context record-type items)

long-index-transduce

(long-index-transduce db xform f init index-name scan-type range opts)

A transduce over large indices. Results are reduced into an accumulator
with the help of the reducing function `f`. The accumulator is
initialized to `init`. `clojure.core/reduced` is honored.

Obviously, this approach does away with any consistency guarantees
usually offered by FDB.

Since results are accumulated in memory, care must also be taken with
the accumulator.
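A sketch of the calling shape; the index name, scan type keyword, and
range are assumptions for illustration, not values defined by this
namespace:

```clojure
;; Count non-nil entries of a hypothetical index.
(long-index-transduce db
                      (filter some?)           ;; xform
                      (fn [acc _] (inc acc))   ;; f
                      0                        ;; init
                      "Object$by-size"         ;; hypothetical index name
                      (as-scan-type :by-value) ;; assumed scan-type keyword
                      my-index-range           ;; hypothetical range
                      {})
```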

long-query-reduce

(long-query-reduce db f val query)
(long-query-reduce db
                   f
                   val
                   query
                   {:exoscale.vinyl.store/keys [values] :as opts})
(long-query-reduce db f init query opts values)

A reducer over large queries. Accepts queries as per `execute-query`.
Results are reduced into an accumulator with the help of the reducing
function `f`. The accumulator is initialized to `init`.
`clojure.core/reduced` is honored.

Obviously, this approach does away with any consistency guarantees
usually offered by FDB.

Since results are accumulated in memory, care must also be taken with
the accumulator.
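A sketch collecting primary keys, where `some-query` stands in for any
query accepted by `execute-query`:

```clojure
(long-query-reduce db
                   (fn [acc record] (conj acc (record-primary-key record)))
                   []
                   some-query)
```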

long-range-reduce

(long-range-reduce db f val record-type items)
(long-range-reduce db f val record-type items opts)

A reducer over large ranges. Results are reduced into an accumulator
with the help of the reducing function `f`. The accumulator is
initialized to `val`. `clojure.core/reduced` is honored.

Obviously, this approach does away with any consistency guarantees
usually offered by FDB.

Since results are accumulated in memory, care must also be taken with
the accumulator.
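A sketch summing a field over a key range; `record-size` is a
hypothetical accessor, and the record type and key items are
illustrative:

```clojure
(long-range-reduce db
                   (fn [acc record] (+ acc (record-size record)))
                   0
                   :Object
                   ["some" "prefix"])
```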

long-range-transduce

(long-range-transduce db xform f val record-type items)
(long-range-transduce db
                      xform
                      f
                      val
                      record-type
                      items
                      {:exoscale.vinyl.store/keys [raw?] :as opts})

A transduce over large ranges. Except for the addition of `xform`, it
behaves like `long-range-reduce`.
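The same computation as the `long-range-reduce` sketch above, with the
mapping moved into an `xform` (`record-size` remains hypothetical):

```clojure
(long-range-transduce db
                      (map record-size)
                      +
                      0
                      :Object
                      ["some" "prefix"])
```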

metadata-index

(metadata-index metadata index-name)

prefix-range

(prefix-range txn-context record-type items)

record-primary-key

(record-primary-key r)

record-store-builder

(record-store-builder)

Yield a new record store builder.

reindex

(reindex db index-name)
(reindex db
         index-name
         {:exoscale.vinyl.store/keys [limit records-per-second max-retries
                                      progress-log-interval]
          :or {limit 100
               records-per-second 10000
               max-retries 100
               progress-log-interval 10000}})

The purpose of the `reindex` function is to force the recalculation of
`index-name`, regardless of the current state of the index (whether it is
disabled, readable, or writeable). Even if the index is in a readable state,
invoking this function will change its state to writeable, making it
inaccessible for any operations that require read access.

The `reindex` function is particularly useful when a new index is introduced
but is not being accessed by any processes at the moment. By calling
`reindex`, you ensure that the index is populated with the latest data. When
introducing an index on EXISTING records, it is important to follow these
steps:

1. Declare the new index in the metadata definition of the record store.
   This step ensures that the necessary information about the index is
   included and any record store operations will start populating it.
2. Utilize the `reindex` function to populate the index (from scratch) with
   existing records (see the sketch after the parameter table below). It is
   important to note that during this process, the index remains writeable.
   As a result, it is crucial to exercise caution with non-idempotent
   operations that might lead to unintended actions
   (e.g. when this reaches that state, do this...).
3. Validate the state of the index through ad-hoc queries to guarantee that
   what has been computed corresponds to what you expect. Ideally from a
   database dump.
4. Begin reading from it and rely on it to make decisions. At this point,
   the index is in a readable state, allowing you to access the data it
   contains.

Following these steps is really important to ensure your index has been
correctly populated.

| Param key               | Description |
| ---                     | --- |
| `limit`                 | Default number of records to attempt to process in a single transaction. Defaults to 100 |
| `records-per-second`    | Default limit to the number of records to attempt in a single second. Defaults to 10'000 |
| `max-retries`           | Default number of times to retry a single range rebuild. Defaults to 100 |
| `progress-log-interval` | Default interval in milliseconds at which to log successful progress when building across transactions (-1 disables logging). Defaults to 10'000 milliseconds |
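A sketch of a rebuild with throttling lowered below the defaults; the
index name is a placeholder:

```clojure
;; Rebuild a hypothetical index, trading speed for lower cluster load.
(reindex db "Object$by-size"
         #:exoscale.vinyl.store{:limit 50
                                :records-per-second 1000
                                :progress-log-interval 5000})
```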

runner-opts

(runner-opts runner
             {:exoscale.vinyl.store/keys [max-attempts initial-delay max-delay
                                          transaction-timeout]})

save-record

(save-record txn-context record)

save-record-batch

(save-record-batch txn-context batch)

scan-index

(scan-index txn-context index-name scan-type range continuation opts)

scan-prefix

(scan-prefix txn-context record-type items opts)

scan-properties

(scan-properties {:exoscale.vinyl.store/keys [reverse?] :as props})
(scan-properties props reverse?)

scan-range

(scan-range txn-context range opts)

start


stop


store-query-fn

(store-query-fn query
                {:exoscale.vinyl.store/keys [values intercept-plan-fn log-plan?]
                 :as opts})

top-level-keyspace

This builds a directory structure of `/$environment/$schema`.

wrapped-runner

(wrapped-runner db opts)
