When bound to a `Map<Keyword, LongAdder>`, per-phase timings/counters accumulate here. `nil` by default, so the disabled path costs only a single var deref plus nil check.
(append-dataset! appender dataset)

Append `dataset` to the open `appender`. `dataset` must have the same column dtypes (in the same order) as the schema sample passed to `open-appender`. Returns the number of rows appended. Throws `IllegalStateException` if the appender has been closed, or `IllegalArgumentException` if the dataset schema does not match.
(flush-appender! appender)

Flush the appender's internal DuckDB buffer, committing buffered rows so they become visible to other connections. Constraint violations (PK, UNIQUE, etc.) surface at flush time. A failed flush invalidates all data buffered in the appender — DuckDB cannot recover the partially-written batch — so this fn additionally poisons the appender on failure: native resources are released and subsequent operations throw `IllegalStateException`. Close (a safe no-op after poisoning) and open a fresh appender to recover. Returns `:ok` on success.
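The poison-on-failure contract above suggests guarding the flush. A minimal sketch, assuming `duck` aliases this namespace and that `conn`, `sample-ds`, and `batch` already exist:

```clojure
(with-open [app (duck/open-appender conn sample-ds)]
  (duck/append-dataset! app batch)
  (try
    (duck/flush-appender! app)   ;; :ok on success
    (catch Exception e
      ;; A failed flush poisons `app`: the buffered rows are lost and
      ;; further appends throw IllegalStateException. Recover by letting
      ;; with-open close it (a safe no-op) and opening a fresh appender.
      (println "flush failed:" (ex-message e)))))
```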
(initialize!)
(initialize! {:keys [duckdb-home]})

Initialize the duckdb ffi system. Must be called first.
(insert-dataset! conn dataset)
(insert-dataset! conn dataset options)

Bulk-insert `dataset` into the table named by its `:name` metadata (or `(:table-name options)`). Internally opens a fresh appender, writes one batch, and closes — paying schema setup costs once per call. For streaming workloads with many batches of the same schema, prefer `open-appender` + `append-dataset!` to amortize that setup cost across batches. Returns the number of rows written.
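For one-shot loads the call is a single expression. A sketch, assuming `duck` aliases this namespace and `ds` is a `tech.v3.dataset` whose target table already exists:

```clojure
;; Table name taken from the dataset's :name metadata...
(duck/insert-dataset! conn ds)

;; ...or overridden explicitly; "events" is an illustrative table name.
(duck/insert-dataset! conn ds {:table-name "events"})
```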
(instr-snapshot m)

Snapshot the instrumentation buffer to a sorted map of `{phase sum}`.
(new-instr-buf)

Build a fresh instrumentation buffer with all known phase counters.
(open-appender conn schema-dataset)
(open-appender conn schema-dataset options)

Open a long-lived appender for `schema-dataset`'s table on `conn`.

The appender caches schema-derived state — column dtypes, DuckDB logical types, and the underlying data chunk buffer — across many `append-dataset!` calls, avoiding the per-call setup overhead of `insert-dataset!`. This is the right choice when a producer streams many batches of the same shape into the same table.

`schema-dataset` is a sample `tech.v3.dataset` whose column dtypes define the schema every batch fed through this appender must conform to. The table name is read from `(sql/table-name schema-dataset options)`, with `(:table-name options)` taking precedence.

Returns an `AutoCloseable` `Appender`. Closing flushes any rows still buffered inside DuckDB; use `with-open` for the typical lifetime:

```clojure
(with-open [app (duck/open-appender conn sample-ds)]
  (doseq [batch dataset-stream]
    (duck/append-dataset! app batch)))
```

Multiple appenders may be open simultaneously on the same connection (e.g. one per destination table). An `Appender` is not thread-safe and must be used only by the thread that owns its connection.

(open-db)
(open-db path)
(open-db path config-options)

Open a database. `path` may be nil for in-memory.
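Startup ties these together. A minimal sketch, assuming `duck` aliases this namespace and the DuckDB shared library is discoverable (the file path is illustrative):

```clojure
(duck/initialize!)                          ;; boot the ffi layer first
(def mem-db  (duck/open-db))                ;; no path => in-memory database
(def file-db (duck/open-db "/tmp/data.duckdb"))
```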
(run-query! conn sql)

Execute a SQL statement, ignoring results. Used for DDL.
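Typical use is creating the destination table before bulk loading. A sketch, where `duck` is an assumed alias and the table definition is illustrative:

```clojure
(duck/run-query! conn
  "CREATE TABLE IF NOT EXISTS events (id BIGINT, name VARCHAR)")
```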
(with-instrumentation & body)

Bind a fresh instrumentation buffer for `body`. Returns `[result snapshot]`.
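A sketch of wrapping a call to see where time goes; `duck` is an assumed alias, and the phase keywords in the snapshot are whatever this library records:

```clojure
(let [[rows snapshot] (duck/with-instrumentation
                        (duck/insert-dataset! conn ds))]
  (println rows "rows written; phase totals:" snapshot))
```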