
io.mandoline.backend.sqlite

A Mandoline store implementation that uses SQLite databases that are persisted on the local file system.

This store persists each dataset as a separate SQLite database on the local filesystem. Each per-dataset SQLite database has 3 tables:

  • "chunks": This table stores content-addressable binary chunks of the dataset. The columns are "chunk-id" (TEXT), "reference-count" (INTEGER), "data" (BLOB). All versions of the dataset share the same "chunks" table. The SQLiteChunkStore type interacts with this table.
  • "indices": This table stores mappings from coordinates within a versioned dataset to chunks. The columns are "version-id" (TEXT), "coordinates" (TEXT), "chunk-id" (TEXT). The "coordinates" column contains a composite value that includes the variable name and chunk coordinates within the variable. All versions of the dataset share the same "indices" table. The SQLiteIndex type interacts with this table.
  • "versions": This table stores metadata for the version history of the dataset. The columns are "version-id" (TEXT), "timestamp" (TEXT, ISO-8601 encoded), "metadata" (TEXT, JSON encoded). The SQLiteConnection type interacts with this table.
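As a concrete sketch, the three tables might be declared as follows. The column names and types come from the descriptions above; the primary-key constraints and the exact DDL used by the implementation are assumptions:

```clojure
;; Sketch of the per-dataset table layout described above. Column names
;; and types come from this documentation; the PRIMARY KEY constraints
;; are assumptions, not taken from the implementation.
(def ^:private table-ddl
  ["CREATE TABLE IF NOT EXISTS \"chunks\" (
      \"chunk-id\"        TEXT PRIMARY KEY,
      \"reference-count\" INTEGER,
      \"data\"            BLOB)"
   "CREATE TABLE IF NOT EXISTS \"indices\" (
      \"version-id\"  TEXT,
      \"coordinates\" TEXT,
      \"chunk-id\"    TEXT)"
   "CREATE TABLE IF NOT EXISTS \"versions\" (
      \"version-id\" TEXT PRIMARY KEY,
      \"timestamp\"  TEXT,  -- ISO-8601 encoded
      \"metadata\"   TEXT   -- JSON encoded
    )"])
```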

The mk-schema function in this namespace instantiates a SQLiteSchema type with a root-path argument that represents a parent directory. Every dataset that belongs to this schema instance has a sub-directory under this parent directory. For example, the dataset "foo" would be stored in the directory

<root-path>/foo/

Each dataset directory contains the SQLite database and journal files for that dataset. Together, these files store the dataset in a durable and portable format. A dataset directory can be safely moved to a new path and opened with a new schema (using the root of the destination path) and a new dataset name (the basename of the destination path).
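The relocation described above might look like the following sketch. Only `mk-schema` and the root-path/basename behavior come from this page; the `{:root-path ...}` store-spec shape is an assumption:

```clojure
;; Suppose <old-root>/foo/ has been copied to /new-root/bar/ with
;; ordinary filesystem tools. Reopen it under a schema rooted at the
;; destination's parent directory.
;; NOTE: the {:root-path ...} key is a hypothetical store-spec shape.
(def schema (mk-schema {:root-path "/new-root"}))
;; The relocated dataset is now addressed by the basename of its
;; directory, i.e. as the dataset "bar".
```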

The SQLite store uses SQLite database transactions to ensure consistency at the expense of performance. Multiple threads and multiple processes can safely read and write the same dataset; however, performance will suffer from exponential backoff and retry upon database contention.

WARNING: Because this store relies on atomic operations of the local filesystem, consistency is not guaranteed when using a network filesystem.


*default-retry-options* (clj)


interpolate-sql-identifiers (clj macro)

(interpolate-sql-identifiers & strings)

Given template string(s), interpolate a double-quoted SQL identifier at each position where a ~{} delimited tag appears in the template.

Example:

    user=> (interpolate-sql-identifiers "SELECT ~{x},~{y} FROM ~{table}")
    "SELECT \"x\",\"y\" FROM \"table\""


mk-schema (clj)

(mk-schema store-spec)
(mk-schema store-spec options)

Instantiate a SQLiteSchema instance from a store spec.


retry-with-db (clj macro)

(retry-with-db db-spec options & body)

This macro is similar to the clojure.java.jdbc/with-connection macro, except that it sets the journaling mode to write-ahead logging (WAL) before evaluating the body, and it retries on exception.

The behavior of this macro can be customized by binding the dynamic var `*default-retry-options*`.

Performance is significantly faster with WAL than without. See documentation at http://www.sqlite.org/wal.html
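A usage sketch combining this macro with `with-query-results` (which is documented as intended for nesting within `retry-with-db`). Here `db-spec` is assumed to come from `sqlite-connection-pool`, and passing `{}` to fall back on the default retry options is an assumption about how the options argument is merged:

```clojure
;; Query the "versions" table with WAL journaling and automatic retry
;; on contention. db-spec is a connection-pool spec map.
(retry-with-db db-spec {}
  (with-query-results results
    "SELECT \"version-id\" FROM \"versions\"" []
    ;; Eagerly realize the result seq inside the transaction.
    (doall (map :version-id results))))
```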


sqlite-connection-pool (clj)

(sqlite-connection-pool file)

Given a java.io.File argument that represents a SQLite database file, return a clojure.java.jdbc-style spec map for a database connection pool.
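A brief usage sketch. The database file name inside the dataset directory is a hypothetical; this page does not specify it:

```clojure
(require '[clojure.java.io :as io])

;; Build a pooled clojure.java.jdbc spec for the dataset "foo".
;; "dataset.sqlite" is a hypothetical file name.
(def db-spec
  (sqlite-connection-pool
    (io/file "/root-path/foo" "dataset.sqlite")))
```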


with-no-more-than-one-query-result (clj macro)

(with-no-more-than-one-query-result query-result sql param-group & body)

This macro is similar to the with-query-results macro, except that it applies further processing to the results seq:

  • It asserts that the query returns no more than 1 result.
  • If the query returns a non-empty result, then the body is evaluated with the query-result symbol bound to that (first and only) result.
  • If the query returns zero results, then nil is returned.
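A sketch of the intended usage, nested inside `retry-with-db` as with the other query macros. The table and column names come from the namespace docstring; the chunk id is illustrative:

```clojure
;; Fetch at most one row: "chunk-id" is content-addressable and unique,
;; so the at-most-one assertion documents and enforces that invariant.
(retry-with-db db-spec {}
  (with-no-more-than-one-query-result row
    "SELECT \"data\" FROM \"chunks\" WHERE \"chunk-id\" = ?"
    ["deadbeef"]
    (:data row)))
;; Returns the chunk's BLOB, or nil when no row matches.
```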

with-query-results (clj macro)

(with-query-results query-results sql param-group & body)

Execute a parametrized PreparedStatement query, then evaluate an expression on the results that were returned by this query.

This macro takes the following arguments:

`query-results`
    A symbol that is bound to a seq of result maps (as returned by the
    `clojure.java.jdbc/resultset-seq` function). Depending on the
    query, the seq may be empty. The seq of result maps is eagerly
    realized within the transaction; callers ought to be mindful of
    performance when there are many query results.
`sql`
    SQL prepared statement template (string).
`param-group`
    Collection of parameter values to substitute into the statement.
    The number of elements in the collection must match the number of
    "?" placeholders in the `sql` template. If the template does not
    contain any "?" placeholders, then `param-group` must be empty.
`body`
    Variable number of forms to evaluate with the `query-results`
    symbol binding.

Example:

    (with-query-results query-results
      "SELECT z FROM table WHERE x=? AND y=?" ["a" 1]
      (when-not (seq query-results)
        (println "Query selected zero records!"))
      (map :z query-results))

This macro is intended to be nested within the retry-with-db macro.


with-transaction-update-count (clj macro)

(with-transaction-update-count update-count sql param-group & body)

Execute a parametrized PreparedStatement in a transaction, then evaluate an expression on the count of records that were updated in this transaction.

This macro takes the following arguments:

`update-count`
    A symbol that is bound to an integer, which is the number of
    records that were updated by the transaction. Depending on the
    transaction, the update count may be zero.
`sql`
    SQL prepared statement template (string).
`param-group`
    Collection of parameter values to substitute into the statement.
    The number of elements in the collection must match the number of
    "?" placeholders in the `sql` template. If the template does not
    contain any "?" placeholders, then `param-group` must be empty.
`body`
    Variable number of forms to evaluate with the `update-count`
    symbol binding.

Example:

    (with-transaction-update-count c
      "INSERT INTO table(x,y,z) VALUES (?,?,?)" ["a" 1 2]
      (when-not (= 1 c)
        (println "Number of updated records does not equal one!"))
      {:count c})

This macro is intended to be nested within the retry-with-db macro.

