
datahike.pg.dump


Dump a pg-datahike database to portable PostgreSQL SQL.

This is the inverse of our `pg_dump --inserts | psql` import path.
It walks a Datahike connection's schema, reverse-engineers the SQL
DDL we originally translated from, and emits a sequence of SQL
strings that can be replayed against either pg-datahike OR a real
PostgreSQL server.

Output layout (mirrors `pg_dump`'s):

  -- Header
  SET client_encoding = 'UTF8';

  -- DDL: CREATE TABLE for every table + CREATE SEQUENCE for
  -- every named sequence. PRIMARY KEY / UNIQUE are inline; FKs
  -- are deferred to the bottom so data load order doesn't matter.
  CREATE TABLE "customer" (...);
  CREATE TABLE "order" (...);
  CREATE SEQUENCE "customer_id_seq";

  -- Data
  COPY "customer" (...) FROM stdin;
  ...
  \.

  -- FKs (or any other ALTER TABLE ADD CONSTRAINT) — last, so
  -- forward / cyclic references load cleanly.
  ALTER TABLE "order" ADD CONSTRAINT order_customer_id_fkey
    FOREIGN KEY (customer_id) REFERENCES customer (id);

  -- Sequence current values
  SELECT pg_catalog.setval('customer_id_seq', 42, true);

Two output shapes via :format:
  :inserts (default) — INSERT INTO ... VALUES (...); per row.
                       More universally re-loadable; works against
                       tier-1 of pg-datahike's own import path.
  :copy              — COPY ... FROM stdin (text format) per row.
                       More compact; requires the loader to drive
                       the COPY-IN sub-protocol (psql does this
                       automatically via \copy / pg_dump's stdin
                       path).
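
As a minimal sketch of the difference, here is how the same row of a hypothetical `customer` table (columns `id`, `name`) would appear under each format:

```sql
-- :inserts — one self-contained statement per row
INSERT INTO "customer" (id, name) VALUES (1, 'Ada');

-- :copy — tab-separated text-format rows between COPY and \.
COPY "customer" (id, name) FROM stdin;
1	Ada
\.
```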

What we capture today:
  - column type (driven by :db/valueType + :pg/type override)
  - PRIMARY KEY (from :db.unique/identity)
  - UNIQUE (from :db.unique/value)
  - NOT NULL (from :pg/not-null annotation entity)
  - DEFAULT (from :pg/default-kind + :pg/default-value)
  - FOREIGN KEY (from :datahike.pg/references hint)
  - CHECK (from :pg/check-* entities)
  - sequences (from :__seq__/* entities)
  - all data rows (filtered by the :<table>/db-row-exists marker)

What we DON'T capture (yet):
  - VIEW / MATERIALIZED VIEW definitions
  - INDEXes (we don't materialize them; PG would create them
    fresh on the load side)
  - TRIGGERs / functions (we don't have these)
  - COMMENT ON
  - GRANT / REVOKE / OWNER (no role system)

Public API:
  (dump conn)                            ; lazy seq of SQL strings
  (dump conn {:format :copy})
  (dump conn {:sections #{:schema}})     ; or #{:data}, default :all
  (dump conn {:exclude-tables #{"x"}})  ; skip these table namespaces
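
A usage sketch of the public API, assuming `conn` is an already-open pg-datahike connection and `"audit_log"` is a hypothetical table namespace to skip:

```clojure
(require '[clojure.string :as str]
         '[datahike.pg.dump :as dump])

;; Schema-only dump, skipping one table namespace, written to disk.
(spit "schema.sql"
      (str/join "\n"
                (dump/dump conn {:sections       #{:schema}
                                 :exclude-tables #{"audit_log"}})))
```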

dump

(dump conn)
(dump conn
      {:keys [format sections exclude-tables]
       :or {format :inserts sections #{:schema :data}}})

Dump a pg-datahike database to a lazy seq of SQL strings. Each
element is one SQL statement (or one COPY-block, terminating in
`\.`); writers can `(str/join "\n" (dump conn))` to get the
full file contents.
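
Because the result is lazy, a large dump can also be streamed statement by statement rather than realised up front. A sketch, again assuming an open connection `conn`:

```clojure
(require '[clojure.java.io :as io]
         '[datahike.pg.dump :as dump])

(with-open [w (io/writer "dump.sql")]
  (doseq [stmt (dump/dump conn)]  ; lazy: one SQL string at a time
    (.write w stmt)
    (.write w "\n")))
```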

dump-to-string

(dump-to-string conn)
(dump-to-string conn opts)

Convenience: realise the lazy seq into a single SQL string.

