
PG2: A Fast PostgreSQL Driver For Clojure

PG2 is a client library for the PostgreSQL server. It succeeds PG(one) -- my early attempt to make a JDBC-free client. Compared to it, PG2 has the following features:

It's fast. Benchmarks show up to a 3x performance boost compared to Next.JDBC. A simple HTTP application that reads data from the database and responds with JSON handles twice as many RPS. For details, see the "benchmarks" section below.

It's written in Java with a Clojure layer on top of it. Unfortunately, Clojure is not as fast as Java. For performance's sake, I had to implement most of the logic in pure Java. The Clojure layer is extremely thin and serves only to call certain methods.

It's still Clojure-friendly: by default, all the queries return a vector of maps whose keys are keywords. You don't need to remap the result in any way. Moreover, the library provides dozens of ways to group, index, and reduce a result.

It supports JSON out of the box: there is no need to extend any protocols and so on. Read and write json(b) with ease as Clojure data! JSON reading and writing are really fast, too: 2-3 times faster than in Next.JDBC.

It supports COPY operations: you can easily COPY OUT a table into a stream. You can COPY IN a set of rows without CSV-encoding them because it's handled by the library. It also supports the binary COPY format, which is faster.

It supports java.time.* classes. The ordinary JDBC clients still use the Timestamp class for dates, which is horrible. In PG2, all the java.time.* classes are supported for reading and writing.

...And plenty of other features.

Table of Contents


Core functionality: the client and the connection pool, type encoding and decoding, COPY IN/OUT, SSL:

;; lein
[com.github.igrishaev/pg2-core "0.1.11"]

;; deps
com.github.igrishaev/pg2-core {:mvn/version "0.1.11"}

HoneySQL integration: special versions of query and execute that accept not a string of SQL but a map that gets formatted to SQL under the hood. Also includes various helpers (get-by-id, find, insert, update, delete, etc).

;; lein
[com.github.igrishaev/pg2-honey "0.1.11"]

;; deps
com.github.igrishaev/pg2-honey {:mvn/version "0.1.11"}

Component integration: a package that extends the Connection and Pool objects with the Lifecycle protocol from the Component library.

;; lein
[com.github.igrishaev/pg2-component "0.1.11"]

;; deps
com.github.igrishaev/pg2-component {:mvn/version "0.1.11"}

Migrations: a package that provides migration management: migrating forward, rolling back, creating and listing applied migrations, and so on.

;; lein
[com.github.igrishaev/pg2-migration "0.1.11"]

;; deps
com.github.igrishaev/pg2-migration {:mvn/version "0.1.11"}

HugSQL support: a small wrapper on top of the well-known HugSQL library that creates Clojure functions out of SQL files.

;; lein
[com.github.igrishaev/pg2-hugsql "0.1.11"]

;; deps
com.github.igrishaev/pg2-hugsql {:mvn/version "0.1.11"}

Quick start (Demo)

This is a very brief tour of the library:

(ns pg.demo
  (:require
   [pg.core :as pg]))

;; declare a minimal config
(def config
  {:host ""
   :port 10140
   :user "test"
   :password "test"
   :database "test"})

;; connect to the database
(def conn
  (pg/connect config))

;; a trivial query
(pg/query conn "select 1 as one")
;; [{:one 1}]

;; let's create a table
(pg/query conn "
create table demo (
  id serial primary key,
  title text not null,
  created_at timestamp with time zone default now()
)")
;; {:command "CREATE TABLE"}

;; Insert three rows returning all the columns.
;; Pay attention that PG2 uses not question marks (?)
;; but numbered dollar placeholders for parameters:
(pg/execute conn
            "insert into demo (title) values ($1), ($2), ($3)
             returning *"
            {:params ["test1" "test2" "test3"]})

;; The result: pay attention we got the j.t.OffsetDateTime class
;; for the timestamptz column (truncated):

[{:title "test1",
  :id 4,
  :created_at #object[j.t.OffsetDateTime "2024-01-17T21:57:58..."]}
 {:title "test2",
  :id 5,
  :created_at #object[j.t.OffsetDateTime "2024-01-17T21:57:58..."]}
 {:title "test3",
  :id 6,
  :created_at #object[j.t.OffsetDateTime "2024-01-17T21:57:58..."]}]

;; Try two expressions in a single transaction
(pg/with-tx [conn]
  (pg/execute conn
              "delete from demo where id = $1"
              {:params [3]})
  (pg/execute conn
              "insert into demo (title) values ($1)"
              {:params ["test4"]}))
;; {:inserted 1}

;; Now check out the database log:

;; LOG:  statement: BEGIN
;; LOG:  execute s3/p4: delete from demo where id = $1
;; DETAIL:  parameters: $1 = '3'
;; LOG:  execute s5/p6: insert into demo (title) values ($1)
;; DETAIL:  parameters: $1 = 'test4'
;; LOG:  statement: COMMIT


See the following posts in my blog:


The library supports the following authentication types and pipelines:

  • No password (for trusted clients);

  • Clear text password (not used nowadays);

  • MD5 password with hash (default prior to Postgres ver. 15);

  • SASL with the SCRAM-SHA-256 algorithm (default since Postgres ver. 15). The SCRAM-SHA-256-PLUS algorithm is not implemented.

Connecting to the server

To connect to the server, define a config map and pass it into the connect function:

(require '[pg.core :as pg])

(def config
  {:host ""
   :port 5432
   :user "test"
   :password "test"
   :database "test"})

(def conn
  (pg/connect config))

The conn value is an instance of the Connection class.

The :host, :port, and :password config fields have default values and might be skipped (the password is an empty string by default). Only the :user and :database fields are required when connecting. See the list of possible fields and their values in a separate section.

To close a connection, pass it into the close function:

(pg/close conn)

You cannot open or use this connection again afterwards.

To close the connection automatically, use either the with-connection or the with-open macro. The with-connection macro takes a binding symbol and a config map; the connection gets bound to the binding symbol while the body is executed:

(pg/with-connection [conn config]
  (pg/query conn "select 1 as one"))

The standard with-open macro calls the (.close connection) method on exit:

(with-open [conn (pg/connect config)]
  (pg/query conn "select 1 as one"))

Avoid closing a connection manually; use one of the two macros shown above.

Use the :pg-params field to specify connection-specific Postgres parameters, such as "TimeZone", "application_name", "DateStyle", and more. Both keys and values are plain strings:

(def config+
  (assoc config
         :pg-params
         {"application_name" "Clojure"
          "DateStyle" "ISO, MDY"}))

(def conn
  (pg/connect config+))

Connection config parameters

The following table describes all the possible connection options with their possible values and semantics. Only the first two options are required. All the rest have predefined values.

Field                  Type          Default             Comment
:user                  string        required            the name of the DB user
:database              string        required            the name of the database
:host                  string        127.0.0.1           IP or hostname
:port                  integer       5432                port number
:password              string        ""                  DB user password
:pg-params             map           {}                  A map of session params like {string string}
:binary-encode?        bool          false               Whether to use binary data encoding
:binary-decode?        bool          false               Whether to use binary data decoding
:read-only?            bool          false               Whether to initiate this connection in READ ONLY mode (see below)
:in-stream-buf-size    integer       0xFFFF              Size of the input buffered socket stream
:out-stream-buf-size   integer       0xFFFF              Size of the output buffered socket stream
:fn-notification       1-arg fn      logging fn          A function to handle notifications
:fn-protocol-version   1-arg fn      logging fn          A function to handle the protocol version negotiation event
:fn-notice             1-arg fn      logging fn          A function to handle notices
:use-ssl?              bool          false               Whether to use an SSL connection
:ssl-context           SSLContext    nil                 A custom instance of the SSLContext class to wrap a socket
:so-keep-alive?        bool          true                Socket KeepAlive value
:so-tcp-no-delay?      bool          true                Socket TcpNoDelay value
:so-timeout            integer       15000               Socket timeout value, in ms
:so-recv-buf-size      integer       0xFFFF              Socket receive buffer size
:so-send-buf-size      integer       0xFFFF              Socket send buffer size
:log-level             keyword       :info               Connection logging level. See possible values below
:cancel-timeout-ms     integer       5000                Default value for the with-timeout macro, in ms
:protocol-version      integer       196608              Postgres protocol version
:object-mapper         ObjectMapper  JSON.defaultMapper  An instance of ObjectMapper for custom JSON processing (see the "JSON" section)

Parameter notes


Possible :log-level values are:

  • :all to render all the events
  • :trace
  • :debug
  • :info
  • :error
  • :off, false, or nil to disable logging.
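For example, to surface more connection events during development, you might raise the level. This is a minimal sketch reusing the config map from the "Connecting to the server" section:

```clojure
;; assume `config` is the base config map defined earlier
(def config-debug
  (assoc config :log-level :debug))

;; connections made from this config log debug-level events
(def conn-debug
  (pg/connect config-debug))
```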

Read Only Mode

The :read-only? connection parameter does two things under the hood:

  1. It appends the default_transaction_read_only parameter, set to on, to the startup message. Thus, any transaction gets started in READ ONLY mode.

  2. It prevents the :read-only? flag from being overridden in the with-tx macro. Say, even if the macro is called like this:

(pg/with-tx [conn {:read-only? false}] ;; try to mute the global :read-only? flag
  (pg/query conn "delete from students"))

The transaction will be in READ ONLY mode anyway.


Query

The query function sends a query to the server and returns the result. Non-data queries return a map with a tag:

(pg/query conn "create table test1 (id serial primary key, name text)")
{:command "CREATE TABLE"}

(pg/query conn "insert into test1 (name) values ('Ivan'), ('Huan')")
{:inserted 2}

Data queries return a vector of maps. This behaviour may be changed with reducers (see below).

(pg/query conn "select * from test1")
[{:name "Ivan", :id 1} {:name "Huan", :id 2}]

The SQL string might include several expressions concatenated with a semicolon. In this case, the result will be a vector of results:

(pg/query conn "insert into test1 (name) values ('Juan'); select * from test1")

[{:inserted 1}
 [{:name "Ivan", :id 1}
  {:name "Huan", :id 2}
  {:name "Juan", :id 3}]]

Use this feature wisely; don't try to do lots of things at once.

Important: the query function doesn't support parameters. You cannot run queries like these two below or similar:

(pg/query conn "select * from test1 where id = ?" 1)
;; or
(pg/query conn ["select * from test1 where id = ?" 1])

NEVER(!), NEVER(!!), NEVER(!!!) put parameters into a SQL string using str, format, or other functions that operate on strings. You will regret it one day. Use execute with parameters instead.


Execute

The execute function acts like query but has the following peculiarities:

  • The SQL string cannot contain multiple expressions concatenated with a semicolon. There must be a single expression (although a trailing semicolon is allowed).

  • It may have parameters. The values for these parameters are passed separately. Unlike in JDBC, the parameters use a dollar sign with a number, for example $1, $2, etc.

Here is how you can query a row by its primary key:

(pg/execute conn "select * from test1 where id = $1" {:params [2]})
;; [{:name "Huan", :id 2}]

The values are passed into the :params key; they must be a vector, a list, or a lazy sequence. Passing a set is not recommended as it doesn't guarantee the order of the values.

This is how you insert values into a table using parameters:

(pg/execute conn
            "insert into test1 (name) values ($1), ($2), ($3)"
            {:params ["Huey" "Dewey" "Louie"]})
;; {:inserted 3}

Pay attention that the values are always a flat list. Imagine you'd like to insert rows with explicit ids:

(pg/execute conn
            "insert into test1 (id, name) values ($1, $2), ($3, $4), ($5, $6)"
            {:params [1001 "Harry" 1002 "Hermione" 1003 "Ron"]})
;; {:inserted 3}

The :params vector consists of flat values, not pairs like [1001 "Harry"]. For better readability, make a list of pairs and then flatten it:

(def pairs
  [[1001 "Harry"]
   [1002 "Hermione"]
   [1003 "Ron"]])

(flatten pairs)

;; (1001 "Harry" 1002 "Hermione" 1003 "Ron")
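The flattened sequence can then be passed straight into :params. A sketch reusing the insert query from above:

```clojure
;; pairs of [id name] flattened into the flat parameter list
;; that the $1..$6 placeholders expect
(pg/execute conn
            "insert into test1 (id, name) values ($1, $2), ($3, $4), ($5, $6)"
            {:params (flatten pairs)})
```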

Since the parameters have explicit numbers, you can reference a certain value many times. The following query will create three agents Smith with different ids.

(pg/execute conn
            "insert into test1 (name) values ($1), ($1), ($1)"
            {:params ["Agent Smith"]})
;; {:inserted 3}

Both query and execute functions accept various options that affect data processing. Find their description in the next section.

Common Execute parameters

Type hints

Prepared Statements

In Postgres, prepared statements are queries that have passed all the preliminary stages and are now ready to be executed. Running the same prepared statement with different parameters is faster than executing a fresh query each time. To prepare a statement, pass a SQL expression into the prepare function. It will return a special PreparedStatement object:

(def stmt-by-id
  (pg/prepare conn "select * from test1 where id = $1"))

(str stmt-by-id)
<Prepared statement, name: s11, param(s): 1, OIDs: [INT4], SQL: select * from test1 where id = $1>

The statement might have parameters. Now that you have a statement, execute it with the execute-statement function. Below, we execute it three times with various primary keys. We also pass the :first? option set to true to have only one row in the result.

(pg/execute-statement conn
                      stmt-by-id
                      {:params [1] :first? true})
;; {:name "Ivan", :id 1}

(pg/execute-statement conn
                      stmt-by-id
                      {:params [5] :first? true})
;; {:name "Louie", :id 5}

(pg/execute-statement conn
                      stmt-by-id
                      {:params [8] :first? true})
;; {:name "Agent Smith", :id 8}

During its lifetime on the server, a statement consumes some resources. When it's not needed any longer, release it with the close-statement function:

(pg/close-statement conn stmt-by-id)

The following macro helps to auto-close a statement. The first argument is a binding symbol, which points to a fresh prepared statement during the execution of the body. Afterwards, the statement is closed.

Below, we insert three rows into the database using the same prepared statement. Pay attention to the doall clause: it evaluates the lazy sequence produced by for. Without doall, you'll get an error from the server saying there is no such prepared statement.

(pg/with-statement [stmt conn "insert into test1 (name) values ($1) returning *"]
  (doall
   (for [character ["Agent Brown"
                    "Agent Smith"
                    "Agent Jones"]]
     (pg/execute-statement conn stmt {:params [character] :first? true}))))

({:name "Agent Brown", :id 12}
 {:name "Agent Smith", :id 13}
 {:name "Agent Jones", :id 14})

In Postgres, prepared statements are always bound to a certain connection. Don't share a statement opened in connection A with connection B and vice versa. Do not share them across different threads either.


Transactions

To execute one or more queries in a transaction, wrap them with the begin and commit functions as follows:

(pg/begin conn)

(pg/execute conn
            "insert into test1 (name) values ($1)"
            {:params ["Test1"]})

(pg/execute conn
            "insert into test1 (name) values ($1)"
            {:params ["Test2"]})

(pg/commit conn)

Both rows are inserted in a transaction. Should one of them fail, none will succeed. By checking the database log, you'll see the following entries:

statement: BEGIN
execute s23/p24: insert into test1 (name) values ($1)
  parameters: $1 = 'Test1'
execute s25/p26: insert into test1 (name) values ($1)
  parameters: $1 = 'Test2'
statement: COMMIT

The rollback function rolls back the current transaction. The "Test3" entry will be visible during the transaction but won't be stored at the end.

(pg/begin conn)
(pg/execute conn
            "insert into test1 (name) values ($1)"
            {:params ["Test3"]})
(pg/rollback conn)

Of course, there is a macro that handles the BEGIN, COMMIT, and ROLLBACK logic for you. The with-tx macro wraps a block of code. It opens a transaction, executes the body and, if no exception was thrown, commits it. If there was an exception, the macro rolls back the transaction and re-throws it.

The first argument of the macro is a connection object:

(pg/with-tx [conn]
  (pg/execute conn
              "delete from test1 where name like $1"
              {:params ["Test%"]})
  (pg/execute conn
              "insert into test1 (name) values ($1)"
              {:params ["Test3"]}))

The macro expands into something like this:

(pg/begin conn)
(try
  (let [result (do <body>)]
    (pg/commit conn)
    result)
  (catch Throwable e
    (pg/rollback conn)
    (throw e)))

The macro accepts several optional parameters that affect a transaction, namely:

Parameter         Type               Comment
:read-only?       Boolean            When true, only read operations are allowed
:rollback?        Boolean            When true, the transaction gets rolled back even if there was no exception
:isolation-level  Keyword or String  Sets the isolation level for the current transaction

The :read-only? parameter set to true restricts all the queries in this transaction to be read only. Thus, only SELECT queries will work. Running INSERT, UPDATE, or DELETE will cause an exception:

(pg/with-tx [conn {:read-only? true}]
  (pg/execute conn
              "delete from test1 where name like $1"
              {:params ["Test%"]}))

;; Execution error (PGErrorResponse) at (
;; Server error response: {severity=ERROR, code=25006, ... message=cannot execute DELETE in a read-only transaction, verbosity=ERROR}

The :rollback? parameter, when set to true, rolls back a transaction even if it was successful. This is useful for tests:

(pg/with-tx [conn {:rollback? true}]
  (pg/execute conn
              "delete from test1 where name like $1"
              {:params ["Test%"]}))

;; statement: BEGIN
;; execute s11/p12: delete from test1 where name like $1
;;   parameters: $1 = 'Test%'
;; statement: ROLLBACK

Above, inside the with-tx macro, you'll observe all the rows deleted, but once the macro exits, they will be there again.
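As an illustration of the testing use case, here is a minimal clojure.test sketch (the assertions are made up for the example): the test runs its mutations inside a transaction that always rolls back, so the database stays untouched between runs.

```clojure
(require '[clojure.test :refer [deftest is]])

(deftest test-cleanup-logic
  (pg/with-connection [conn config]
    (pg/with-tx [conn {:rollback? true}]
      (pg/execute conn
                  "delete from test1 where name like $1"
                  {:params ["Test%"]})
      ;; inside the transaction, the rows are gone...
      (is (empty? (pg/execute conn
                              "select * from test1 where name like $1"
                              {:params ["Test%"]}))))))
;; ...but the final ROLLBACK restores them
```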

Finally, the :isolation-level parameter sets isolation level for the current transaction. The table below shows its possible values:

:isolation-level parameter    Postgres level
:read-uncommitted             READ UNCOMMITTED
:read-committed               READ COMMITTED
:repeatable-read              REPEATABLE READ
:serializable                 SERIALIZABLE


(pg/with-tx [conn {:isolation-level :serializable}]
  (pg/execute conn
              "delete from test1 where name like $1"
              {:params ["Test%"]})
  (pg/execute conn
              "insert into test1 (name) values ($1)"
              {:params ["Test3"]}))

;; statement: BEGIN
;; execute s33/p34: delete from test1 where name like $1
;;   parameters: $1 = 'Test%'
;; execute s35/p36: insert into test1 (name) values ($1)
;;   parameters: $1 = 'Test3'
;; statement: COMMIT

The default transaction level depends on the settings of your database.

This document doesn't describe the difference between isolation levels. Please refer to the official documentation for more information.

Connection state

There are some functions to track the connection state. In Postgres, the state of a connection might be one of these:

  • idle when it's ready for a query;

  • in transaction when a transaction has already been started but not committed yet;

  • error when a transaction has failed but hasn't been rolled back yet.

The status function returns either the :I, :T, or :E keyword depending on where you are at the moment. For each state, there is a corresponding predicate that returns either true or false.

At the beginning, the connection is idle:

(pg/status conn)
;; :I

(pg/idle? conn)
;; true

Open a transaction and check out the state:

(pg/begin conn)

(pg/status conn)
;; :T

(pg/in-transaction? conn)
;; true

Now send a broken query to the server. You'll get an exception describing the error occurred on the server:

(pg/query conn "selekt dunno")

;; Execution error (PGErrorResponse) at (
;; Server error response: {severity=ERROR, code=42601, ...,  message=syntax error at or near "selekt", verbosity=ERROR}

The connection is in the error state now:

(pg/status conn)
;; :E

(pg/tx-error? conn)
;; true

When the state is error, the connection doesn't accept any new queries. Each of them will be rejected with a message saying that the connection is in the error state. To recover from an error, roll back the current transaction:

(pg/rollback conn)

(pg/idle? conn)
;; true

Now it's ready for new queries again.
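If you reuse long-lived connections, a small helper (hypothetical, not part of PG2) built from the predicates above can restore a connection to the idle state before issuing new queries:

```clojure
(defn ensure-idle!
  "Roll back any pending or failed transaction so the
  connection is ready for new queries. Hypothetical helper."
  [conn]
  (when-not (pg/idle? conn)
    (pg/rollback conn))
  conn)
```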

HoneySQL Integration

The pg-honey package (see Installation) allows you to call the query and execute functions using maps rather than string SQL expressions. Internally, maps are transformed into SQL using the great HoneySQL library. With HoneySQL, you don't need to format strings to build SQL, which is clumsy and dangerous in terms of injections.

The package also provides several shortcuts for such common duties as getting a single row by id, fetching a bunch of rows by their ids, inserting a row from a map of values, updating by a map, and so on.

For a demo, let's import the package, declare a config map and create a table with some rows as follows:

(require '[pg.core :as pg]
         '[pg.honey :as pgh])

(def config
  {:host ""
   :port 10140
   :user "test"
   :password "test"
   :dbname "test"})

(def conn
  (pg/connect config))

(pg/query conn "create table test003 (
  id integer not null,
  name text not null,
  active boolean not null default true
)")
(pg/query conn "insert into test003 (id, name, active) values
  (1, 'Ivan', true),
  (2, 'Huan', false),
  (3, 'Juan', true)")

Get by id(s)

The get-by-id function fetches a single row by a primary key which is :id by default:

(pgh/get-by-id conn :test003 1)
;; {:name "Ivan", :active true, :id 1}

Here and below: we pass a Connection object as the first argument, but it could be a plain config map or a Pool instance as well.

With options, you can specify the name of the primary key and the column names you're interested in:

(pgh/get-by-id conn
               :test003
               1
               {:pk [:raw ""]
                :fields [:id :name]})

;; {:name "Ivan", :id 1}

;; SELECT id, name FROM test003 WHERE = $1 LIMIT $2
;; parameters: $1 = '1', $2 = '1'

The get-by-ids function accepts a collection of primary keys and fetches them using the IN operator. In addition to the options that get-by-id has, you can specify the ordering:

(pgh/get-by-ids conn
                :test003
                [1 3 999]
                {:pk [:raw ""]
                 :fields [:id :name]
                 :order-by [[:id :desc]]})

[{:name "Juan", :id 3}
 {:name "Ivan", :id 1}]

;; SELECT id, name FROM test003 WHERE IN ($1, $2, $3) ORDER BY id DESC
;; parameters: $1 = '1', $2 = '3', $3 = '999'

Passing many IDs at once is not recommended. Either pass them in chunks, or create a temporary table, COPY IN the ids into it, and INNER JOIN with the main table.
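The chunking approach might be sketched as follows (the chunk size of 1000 and the helper name are arbitrary, made up for the example):

```clojure
;; fetch rows for a large id collection in chunks,
;; concatenating the per-chunk results
(defn get-by-ids-chunked
  [conn table ids]
  (mapcat (fn [chunk]
            (pgh/get-by-ids conn table (vec chunk)))
          (partition-all 1000 ids)))
```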


Delete

The delete function removes rows from a table. By default, all the rows are deleted with no filtering, and the deleted rows are returned:

(pgh/delete conn :test003)

[{:name "Ivan", :active true, :id 1}
 {:name "Huan", :active false, :id 2}
 {:name "Juan", :active true, :id 3}]

You can specify the WHERE clause and the column names of the result:

(pgh/delete conn
            :test003
            {:where [:and
                     [:= :id 3]
                     [:= :active true]]
             :returning [:*]})

[{:name "Juan", :active true, :id 3}]

When the :returning option is set to nil, no rows are returned.

Insert (one)

To observe all the features of the insert function, let's create a separate table:

(pg/query conn "create table test004 (
  id serial primary key,
  name text not null,
  active boolean not null default true
)")

The insert function accepts a collection of maps, each of which represents a row:

(pgh/insert conn
            :test004
            [{:name "Foo" :active false}
             {:name "Bar" :active true}]
            {:returning [:id :name]})

[{:name "Foo", :id 1}
 {:name "Bar", :id 2}]

It also accepts options to produce the ON CONFLICT ... DO ... clause known as UPSERT. The following query tries to insert two rows with existing primary keys. Should they exist, the query updates the names of the corresponding rows:

(pgh/insert conn
            :test004
            [{:id 1 :name "Snip"}
             {:id 2 :name "Snap"}]
            {:on-conflict [:id]
             :do-update-set [:name]
             :returning [:id :name]})

The resulting query looks like this:

INSERT INTO test004 (id, name) VALUES ($1, $2), ($3, $4)
  ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name
  RETURNING id, name
parameters: $1 = '1', $2 = 'Snip', $3 = '2', $4 = 'Snap'

The insert-one function acts like insert but accepts and returns a single map. It supports :returning and ON CONFLICT ... clauses as well:

(pgh/insert-one conn
                :test004
                {:id 2 :name "Alter Ego" :active true}
                {:on-conflict [:id]
                 :do-update-set [:name :active]
                 :returning [:*]})

{:name "Alter Ego", :active true, :id 2}

The logs:

INSERT INTO test004 (id, name, active) VALUES ($1, $2, TRUE)
  ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, active = EXCLUDED.active
parameters: $1 = '2', $2 = 'Alter Ego'


Update

The update function alters rows in a table. By default, it doesn't do any filtering and returns all the rows affected. The following query sets the boolean active value for all rows:

(pgh/update conn
            :test003
            {:active true})

[{:name "Ivan", :active true, :id 1}
 {:name "Huan", :active true, :id 2}
 {:name "Juan", :active true, :id 3}]

The :where clause determines conditions for update. You can also specify columns to return:

(pgh/update conn
            :test003
            {:active false}
            {:where [:= :name "Ivan"]
             :returning [:id]})

[{:id 1}]

What is great about update is that you can use complex expressions such as increasing counters, negation, and so on. Below, we alter the primary key by adding 100 to it, negate the active column, and change the name column with plain concatenation:

(pgh/update conn
            :test003
            {:id [:+ :id 100]
             :active [:not :active]
             :name [:raw "name || name"]}
            {:where [:= :name "Ivan"]
             :returning [:id :active]})

[{:active true, :id 101}]

Which produces the following query:

UPDATE test003
  SET
    id = id + $1,
    active = NOT active,
    name = name || name
  WHERE name = $2
  RETURNING id, active
parameters: $1 = '100', $2 = 'Ivan'

Find (first)

The find function makes a lookup in a table by column-value pairs. All the pairs are joined using the AND operator:

(pgh/find conn :test003 {:active true})

[{:name "Ivan", :active true, :id 1}
 {:name "Juan", :active true, :id 3}]

Find by two conditions:

(pgh/find conn :test003 {:active true
                         :name "Juan"})

[{:name "Juan", :active true, :id 3}]

;; SELECT * FROM test003 WHERE (active = TRUE) AND (name = $1)
;; parameters: $1 = 'Juan'

The function accepts additional options for LIMIT, OFFSET, and ORDER BY clauses:

(pgh/find conn
          :test003
          {:active true}
          {:fields [:id :name]
           :limit 10
           :offset 1
           :order-by [[:id :desc]]
           :fn-key identity})

[{"id" 1, "name" "Ivan"}]

;; SELECT id, name FROM test003
;;   WHERE (active = TRUE)
;;   LIMIT $1
;;   OFFSET $2
;; parameters: $1 = '10', $2 = '1'

The find-first function acts the same but returns a single row or nil. Internally, it adds the LIMIT 1 clause to the query:

(pgh/find-first conn :test003
                {:active true}
                {:fields [:id :name]
                 :offset 1
                 :order-by [[:id :desc]]
                 :fn-key identity})

{"id" 1, "name" "Ivan"}


Prepare

The prepare function makes a prepared statement from a HoneySQL map:

(def stmt
  (pgh/prepare conn {:select [:*]
                     :from :test003
                     :where [:= :id 0]}))

;; <Prepared statement, name: s37, param(s): 1, OIDs: [INT8], SQL: SELECT * FROM test003 WHERE id = $1>

Above, the zero value is a placeholder for an integer parameter.

Now that the statement is prepared, execute it with the right id:

(pg/execute-statement conn stmt {:params [3]
                                 :first? true})

{:name "Juan", :active true, :id 3}

Alternatively, use the [:raw ...] syntax to specify a parameter with a dollar sign:

(def stmt
  (pgh/prepare conn {:select [:*]
                     :from :test003
                     :where [:raw "id = $1"]}))

(pg/execute-statement conn stmt {:params [1]
                                 :first? true})

{:name "Ivan", :active true, :id 1}

Query and Execute

There are two general functions called query and execute. Each of them accepts an arbitrary HoneySQL map and performs either Query or Execute request to the server.

Pay attention that, when using query, a HoneySQL map cannot have parameters. This is a limitation of the Query command. The following query will lead to an error response from the server:

(pgh/query conn
           {:select [:id]
            :from :test003
            :where [:= :name "Ivan"]
            :order-by [:id]})

;; Execution error (PGErrorResponse) at (
;; Server error response: {severity=ERROR, ... message=there is no parameter $1, verbosity=ERROR}

Instead, use either [:raw ...] syntax or {:inline true} option:

(pgh/query conn
           {:select [:id]
            :from :test003
            :where [:raw "name = 'Ivan'"] ;; raw (as is)
            :order-by [:id]})

[{:id 1}]

;; OR

(pgh/query conn
           {:select [:id]
            :from :test003
            :where [:= :name "Ivan"]
            :order-by [:id]}
           {:honey {:inline true}}) ;; inline values

[{:id 1}]

;; SELECT id FROM test003 WHERE name = 'Ivan' ORDER BY id ASC

The execute function accepts a HoneySQL map with parameters:

(pgh/execute conn
             {:select [:id :name]
              :from :test003
              :where [:= :name "Ivan"]
              :order-by [:id]})

[{:name "Ivan", :id 1}]

Both query and execute accept not only SELECT but literally everything: inserting, updating, creating a table or an index, and more. You can build combinations like INSERT ... FROM SELECT or UPDATE ... FROM DELETE to perform complex logic in a single atomic query.
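For instance, an INSERT ... FROM SELECT combination might be sketched like this (a sketch assuming the test003 and test004 tables from above and HoneySQL's nested :insert-into syntax):

```clojure
;; copy the names of all active rows from test003 into test004
;; in a single atomic statement
(pgh/execute conn
             {:insert-into [[:test004 [:name]]
                            {:select [:name]
                             :from [:test003]
                             :where [:= :active true]}]})
```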

HoneySQL options

Any HoneySQL-specific parameter might be passed through the :honey submap in the options. Below, we pass the :params map to use the [:param ...] syntax. Also, we produce pretty-formatted SQL for better logs:

(pgh/execute conn
             {:select [:id :name]
              :from :test003
              :where [:= :name [:param :name]]
              :order-by [:id]}
             {:honey {:pretty true
                      :params {:name "Ivan"}}})

;; SELECT id, name
;; FROM test003
;; WHERE name = $1
;; parameters: $1 = 'Ivan'

For more options, please refer to the official HoneySQL documentation.

HugSQL Support

The pg2-hugsql package brings integration with the HugSQL library. It creates functions out of SQL files like HugSQL does, but these functions use the PG2 client instead of JDBC. Under the hood, there is a special database adapter as well as a slight override of protocols to make the inner HugSQL machinery compatible with PG2.

Since the package already depends on the core HugSQL functionality, there is no need to add the latter to your dependencies: having pg2-hugsql by itself is enough (see Installation).

Basic Usage

Let's go through a short demo. Imagine we have a demo.sql file with the following queries:

-- :name create-demo-table :!
create table :i:table (id serial primary key, title text not null);

-- :name insert-into-table :! :n
insert into :i:table (title) values (:title);

-- :name insert-into-table-returning :<!
insert into :i:table (title) values (:title) returning *;

-- :name select-from-table :? :*
select * from :i:table order by id;

-- :name get-by-id :? :1
select * from :i:table where id = :id limit 1;

-- :name get-by-ids :? :*
select * from :i:table where id in (:v*:ids) order by id;

-- :name insert-rows :<!
insert into :i:table (id, title) values :t*:rows returning *;

-- :name update-title-by-id :<!
update :i:table set title = :title where id = :id returning *;

-- :name delete-from-table :! :n
delete from :i:table;

Prepare a namespace with all the imports:

(ns pg.demo
  (:require
   [ :as io]
   [pg.hugsql :as hug]
   [pg.core :as pg]))

To inject functions from the file, pass it into the pg.hugsql/def-db-fns function:

(hug/def-db-fns (io/file "test/demo.sql"))

It accepts either a string path to a file, a resource, or a File object. Should there be no exceptions, and the file is correct, the current namespace will get new functions declared in the file. Let's examine them and their metadata:


(-> create-demo-table var meta)

{:doc ""
 :command :!
 :result :raw
 :file "test/demo.sql"
 :line 2
 :arglists ([db] [db params] [db params opt])
 :name create-demo-table
 :ns #namespace[pg.demo]}

Each generated function has at most three bodies:

  • [db]
  • [db params]
  • [db params opt]


  • db is a source of a connection. It might be either a Connection object, a plain Clojure config map, or a Pool object;
  • params is a map of HugSQL parameters like {:id 42};
  • opt is a map of pg/execute parameters that affect processing of the current query.

Now that we have functions, let's call them. Establish a connection first:

(def config
  {:host ""
   :port 10140
   :user "test"
   :password "test"
   :dbname "test"})

(def conn
  (pg/connect config))

Let's create a table using the create-demo-table function:

(def TABLE "demo123")

(create-demo-table conn {:table TABLE})
{:command "CREATE TABLE"}

Insert something into the table:

(insert-into-table conn {:table TABLE
                         :title "hello"})

The insert-into-table function has the :n flag in the source SQL file. Thus, it returns the number of rows affected by the command. Above, a single record was inserted.
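A brief sketch of what the :n flag gives you, reusing the conn and TABLE defined above:

```clojure
;; :n makes the function return the affected row count
;; rather than the inserted data
(insert-into-table conn {:table TABLE
                         :title "hello"})
;; => 1  (one row inserted)
```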

Let's try an expression that inserts something and returns the data:

(insert-into-table-returning conn
                             {:table TABLE
                              :title "test"})
[{:title "test", :id 2}]

Now that the table is not empty any longer, let's select from it:

(select-from-table conn {:table TABLE})

[{:title "hello", :id 1}
 {:title "test", :id 2}]

The get-by-id shortcut fetches a single row by its primary key. It returns nil for a missing key:

(get-by-id conn {:table TABLE
                 :id 1})
{:title "hello", :id 1}

(get-by-id conn {:table TABLE
                 :id 123})

Its bulk version, get-by-ids, relies on the in (:v*:ids) HugSQL syntax. It expands into the following SQL vector: ["... where id in ($1, $2, ... )" 1 2 ...]

-- :name get-by-ids :? :*
select * from :i:table where id in (:v*:ids) order by id;
(get-by-ids conn {:table TABLE
                  :ids [1 2 3]})

;; 3 is missing
[{:title "hello", :id 1}
 {:title "test", :id 2}]

To insert multiple rows at once, use the :t* syntax which is short for "tuple list". Such a parameter expects a sequence of sequences:

-- :name insert-rows :<!
insert into :i:table (id, title) values :t*:rows returning *;
(insert-rows conn {:table TABLE
                   :rows [[10 "test10"]
                          [11 "test11"]
                          [12 "test12"]]})

[{:title "test10", :id 10}
 {:title "test11", :id 11}
 {:title "test12", :id 12}]

Let's update a single row by its id:

(update-title-by-id conn {:table TABLE
                          :id 1
                          :title "NEW TITLE"})
[{:title "NEW TITLE", :id 1}]

Finally, clean up the table:

(delete-from-table conn {:table TABLE})

Passing the Source of a Connection

Above, we've been passing a Connection object called conn to all functions. But it can be something else as well: a config map or a pool object. Here is an example with a map:

(insert-rows {:host "..." :port ... :user "..."}
             {:table TABLE
              :rows [[10 "test10"]
                     [11 "test11"]
                     [12 "test12"]]})

Pay attention that, when the first argument is a config map, a Connection object is established from it, and it gets closed before exiting the function. This might break a pipeline if you rely on state stored in the connection. A temporary table is a good example: once you close a connection, all the temporary tables created within it get wiped. Thus, if you create a temp table in the first function, and select from it in the second function passing a config map each time, that won't work: the second function won't know anything about that table.
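Here is a sketch of that pitfall; the query names and the temp table are made up for illustration:

```clojure
;; demo.sql (hypothetical):
;; -- :name create-temp-table :!
;; create temporary table tmp_items (id int);
;;
;; -- :name select-temp-table :? :*
;; select * from tmp_items;

;; Each call below opens its own connection from the config map
;; and closes it afterwards, wiping the temp table:
(create-temp-table config)  ;; temp table created... then the connection closes
(select-temp-table config)  ;; error: relation "tmp_items" does not exist
```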

The first argument might be a Pool instance as well:

(pool/with-pool [pool config]
  (let [item1 (get-by-id pool {:table TABLE :id 10})
        item2 (get-by-id pool {:table TABLE :id 11})]
    {:item1 item1
     :item2 item2}))

{:item1 {:title "test10", :id 10},
 :item2 {:title "test11", :id 11}}

When the source is a pool, each function call borrows a connection from it and returns it afterwards. But you cannot be sure that both get-by-id calls share the same connection. A parallel thread may interfere and borrow the connection used by the first get-by-id before the second get-by-id acquires it. As a result, any pipeline that relies on shared state across two subsequent function calls might break.

To ensure the functions share the same connection, use either pg/with-connection or pool/with-connection macros:

(pool/with-pool [pool config]
  (pool/with-connection [conn pool]
    (pg/with-tx [conn]
      (insert-into-table conn {:table TABLE :title "AAA"})
      (insert-into-table conn {:table TABLE :title "BBB"}))))

Above, there is a 100% guarantee that both insert-into-table calls share the same conn object borrowed from the pool. The calls are also wrapped into a transaction, which produces the following session:

insert into demo123 (title) values ($1);
  parameters: $1 = 'AAA'
insert into demo123 (title) values ($1);
  parameters: $1 = 'BBB'

Passing Options

PG2 supports a lot of options when processing a query. To use them, pass a map as the third argument to any function. Below, we override the function that processes column names. Let it be not the default keyword but clojure.string/upper-case:

(get-by-id conn
           {:table TABLE :id 1}
           {:fn-key str/upper-case})

{"TITLE" "AAA", "ID" 1}

If you need such keys everywhere, submitting a map into each call might be inconvenient. The def-db-fns function accepts a map of predefined overrides:

(hug/def-db-fns
  (io/file "test/demo.sql")
  {:fn-key str/upper-case})

Now, all the generated functions return string column names in upper case by default:

(get-by-id config
           {:table TABLE :id 1})

{"TITLE" "AAA", "ID" 1}

For more details, refer to the official HugSQL documentation.

Next.JDBC API layer

PG2 has a namespace that mimics the Next.JDBC API. Of course, it doesn't cover 100% of the Next.JDBC features, yet most of the functions and macros are there. It will help you introduce PG2 into a project without rewriting all the database-related code from scratch.

Obtaining a Connection

In Next.JDBC, all the functions and macros accept something that implements the Connectable protocol. It might be a plain Clojure map, an existing connection, or a connection pool. The PG2 wrapper follows this design. It works with either a map, a connection, or a pool.

Import the namespace and declare a config:

(require '[pg.jdbc :as jdbc])

(def config
  {:host ""
   :port 10140
   :user "test"
   :password "test"
   :dbname "test"})

Having a config map, obtain a connection by passing it into the get-connection function:

(def conn
  (jdbc/get-connection config))

This approach, although it is a part of the Next.JDBC design, is not recommended. Once you've established a connection, you must either close it or, if it was borrowed from a pool, return it to the pool. The special macro on-connection covers this logic:

(jdbc/on-connection [bind source]
  ...)

If the source was a map, a new connection is spawned and gets closed afterwards. If the source is a pool, the connection gets returned to the pool. When the source is a connection, nothing happens when exiting the macro.

(jdbc/on-connection [conn config]
  (println conn))

A brief example with a connection pool and a couple of futures. Each future borrows a connection from a pool, and returns it afterwards.

(pool/with-pool [pool config]
  (let [f1 (future
             (jdbc/on-connection [conn1 pool]
               (print
                (jdbc/execute-one! conn1 ["select 'hoho' as message"]))))
        f2 (future
             (jdbc/on-connection [conn2 pool]
               (print
                (jdbc/execute-one! conn2 ["select 'haha' as message"]))))]
    @f1
    @f2))

;; {{:message hoho}:message haha}
;; two overlapping print statements

Executing Queries

Two functions, execute! and execute-one!, send queries to the database. Each of them takes a source, a SQL vector, and a map of options. The SQL vector is a sequence where the first item is either a string or a prepared statement, and the rest of the values are parameters.

(jdbc/on-connection [conn config]
  (jdbc/execute! conn ["select $1 as num" 42]))
;; [{:num 42}]

Pay attention that parameters use a dollar sign with a number, not a question mark.

The execute-one! function acts like execute! but returns the first row only. Internally, this is done by passing the {:first? true} parameter that enables the First reducer.

(jdbc/on-connection [conn config]
  (jdbc/execute-one! conn ["select $1 as num" 42]))
;; {:num 42}

To prepare a statement, pass a SQL vector into the prepare function. The result will be an instance of the PreparedStatement class. To execute the statement, put it into a SQL vector followed by the parameters:

(jdbc/on-connection [conn config]
  (let [stmt
        (jdbc/prepare conn
                      ["select $1::int4 + 1 as num"])

        res1
        (jdbc/execute-one! conn [stmt 1])

        res2
        (jdbc/execute-one! conn [stmt 2])]

    [res1 res2]))

;; [{:num 2} {:num 3}]

Above, the same stmt statement is executed twice with different parameters.

Here is a more realistic example of inserting data into a table. Let's prepare the table first:

(jdbc/execute! config ["create table test2 (id serial primary key, name text not null)"])

Insert a couple of rows returning the result:

(jdbc/on-connection [conn config]
  (let [stmt
        (jdbc/prepare conn
                      ["insert into test2 (name) values ($1) returning *"])

        res1
        (jdbc/execute-one! conn [stmt "Ivan"])

        res2
        (jdbc/execute-one! conn [stmt "Huan"])]

    [res1 res2]))

;; [{:name "Ivan", :id 1} {:name "Huan", :id 2}]

As mentioned above, in Postgres, a prepared statement is always bound to a certain connection. Thus, use the prepare function only inside the on-connection macro to ensure that all the underlying database interaction happens within the same connection.


Transactions

The with-transaction macro wraps a block of code into a transaction. Before entering the block, the macro emits the BEGIN expression, and COMMIT afterwards, if there was no exception. Should an exception pop up, the transaction gets rolled back with ROLLBACK, and the exception is re-thrown.

The macro takes a binding symbol which the connection is bound to, a source, and a map of options. The standard Next.JDBC transaction options are supported, namely:

  • :isolation
  • :read-only
  • :rollback-only

Here is an example of inserting a couple of rows in a transaction:

(jdbc/on-connection [conn config]

  (let [stmt
        (jdbc/prepare conn
                      ["insert into test2 (name) values ($1) returning *"])]

    (jdbc/with-transaction [TX conn {:isolation :serializable
                                     :read-only false
                                     :rollback-only false}]

      (let [res1
            (jdbc/execute-one! conn [stmt "Snip"])

            res2
            (jdbc/execute-one! conn [stmt "Snap"])]

        [res1 res2]))))

;; [{:name "Snip", :id 3} {:name "Snap", :id 4}]

The Postgres log:

insert into test2 (name) values ($1) returning *
  $1 = 'Snip'
insert into test2 (name) values ($1) returning *
  $1 = 'Snap'

The :isolation parameter might be one of the following:

  • :read-uncommitted
  • :read-committed
  • :repeatable-read
  • :serializable

To know more about transaction isolation, refer to the official Postgres documentation.

When read-only is true, any mutable query will trigger an error response from Postgres:

(jdbc/with-transaction [TX config {:read-only true}]
  (jdbc/execute! TX ["delete from test2"]))

;; Execution error (PGErrorResponse).
;; Server error response: {severity=ERROR, message=cannot execute DELETE in a read-only transaction, verbosity=ERROR}

When :rollback-only is true, the transaction gets rolled back even if there was no exception. This is useful for tests and experiments:

(jdbc/with-transaction [TX config {:rollback-only true}]
  (jdbc/execute! TX ["delete from test2"]))

The logs:

statement: BEGIN
execute s1/p2: delete from test2
statement: ROLLBACK

The table still has its data:

(jdbc/execute! config ["select * from test2"])

;; [{:name "Ivan", :id 1} ...]

The function active-tx? helps to determine if you're in the middle of a transaction:

(jdbc/on-connection [conn config]
  (let [res1 (jdbc/active-tx? conn)]
    (jdbc/with-transaction [TX conn]
      (let [res2 (jdbc/active-tx? TX)]
        [res1 res2]))))

;; [false true]

It returns true for transactions that are in an error state as well.
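A minimal sketch of that case; the failing query is made up, and the exact error handling is illustrative:

```clojure
(jdbc/on-connection [conn config]
  (jdbc/with-transaction [TX conn]
    (try
      ;; trigger an error inside the transaction: division by zero
      (jdbc/execute! TX ["select 1/0"])
      (catch Exception _))
    ;; the transaction is now in the error state,
    ;; yet it is still considered active:
    (jdbc/active-tx? TX)))
;; => true
```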

Keys and Namespaces

The pg.jdbc wrapper tries to mimic Next.JDBC and thus uses kebab-case keys when building maps:

(jdbc/on-connection [conn config]
  (jdbc/execute-one! conn ["select 42 as the_answer"]))

;; {:the-answer 42}

To change that behaviour and get snake_case keys, pass the {:kebab-keys? false} option map:

(jdbc/on-connection [conn config]
  (jdbc/execute-one! conn
                     ["select 42 as the_answer"]
                     {:kebab-keys? false}))

;; {:the_answer 42}

By default, Next.JDBC returns fully-qualified keys whose namespaces are table names, for example :user/profile-id or :order/created-at. At the moment, such namespaces are not supported by the wrapper.


Cloning a Connection

Cancelling a Query

Thread Safety

Result reducers



Type Mapping

JSON support

Postgres is amazing when dealing with JSON. There hardly can be a database that serves it better. Unfortunately, Postgres clients never respect the JSON feature, which is horrible. Take JDBC, for example: when querying a JSON(b) value, you'll get a dull PGObject which should be decoded manually. The same applies to insertion: one cannot just pass a Clojure map or a vector. It should be packed into the PGObject as well.

Of course, this can be automated by extending certain protocols. But it's still slow as it's done on Clojure level (not Java), and it forces you to copy the same code across projects.

Fortunately, PG2 supports JSON out of the box. If you query a JSON value, you'll get its Clojure counterpart: a map, a vector, etc. To insert a JSON value into a table, you pass either a Clojure map or a vector. No additional steps are required.

PG2 relies on the jsonista library to handle JSON. At the time of writing, this is the fastest JSON library for Clojure. Jsonista uses a concept of object mappers: objects holding custom rules to encode and decode values. You can compose your own mapper with custom rules and pass it into the connection config.

Basic usage

Let's prepare a connection and a test table with a jsonb column:

(def config
  {:host ""
   :port 10140
   :user "test"
   :password "test"
   :dbname "test"})

(def conn
  (pg/connect config))

(pg/query conn "create table test_json (
  id serial primary key,
  data jsonb not null
)")
Now insert a row:

(pg/execute conn
            "insert into test_json (data) values ($1)"
            {:params [{:some {:nested {:json 42}}}]})

No need to encode a map manually nor wrap it into a sort of PGObject. Let's fetch the new row by id:

(pg/execute conn
            "select * from test_json where id = $1"
            {:params [1]
             :first? true})

{:id 1 :data {:some {:nested {:json 42}}}}

Again, the JSON data returns as a Clojure map with no wrappers.

When using JSON with HoneySQL though, a little extra care is still needed. Namely, you have to wrap a value with [:lift ...] as follows:

(pgh/insert-one conn
                {:data [:lift {:another {:json {:value [1 2 3]}}}]})

{:id 2, :data {:another {:json {:value [1 2 3]}}}}

Without the [:lift ...] tag, HoneySQL will treat the value as a nested SQL map and try to render it as a string, which will either fail or lead to SQL injection.

Another way is to use HoneySQL's named parameters:

(pgh/insert-one conn
                {:data [:param :data]}
                {:honey {:params {:data {:some [:json {:map [1 2 3]}]}}}})

For details, see the HoneySQL Integration section.

When encoding values into JSON, PG2 treats only Clojure maps as JSON objects. Vectors and other sequential values are treated as Postgres arrays. For details, see the Arrays support section.
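A brief sketch of that distinction, reusing the test_json table from above:

```clojure
;; a map becomes a JSON object:
(pg/execute conn
            "insert into test_json (data) values ($1)"
            {:params [{:kind "json object"}]})

;; a bare vector would be encoded as a Postgres array, not JSON;
;; wrap it to force JSON encoding (see the Json Wrapper section):
(pg/execute conn
            "insert into test_json (data) values ($1)"
            {:params [(pg/json-wrap [1 2 3])]})
```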

Json Wrapper

In rare cases, you might store a string or a number in a JSON field. Say, 123 is a valid JSON value, but by default it's treated as a number. To tell Postgres it is indeed JSON, wrap the value with pg/json-wrap:

(pgh/insert-one conn
                {:data (pg/json-wrap 42)})

{:id 4, :data 42}

The wrapper is especially useful to store a "null" JSON value: not the standard NULL but "null" which, when parsed, becomes nil. For this, pass (pg/json-wrap nil) as follows:

(pgh/insert-one conn
                {:data (pg/json-wrap nil)})

{:id 5, :data nil} ;; "null" in the database

Custom Object Mapper

One great thing about Jsonista is the concept of mapper objects. A mapper is a set of rules for how to encode and decode data. Jsonista provides a way to build a custom mapper. Once built, it can be passed into a connection config so the JSON data is written and read back in a special way.

Let's assume you're going to tag JSON sub-parts to track their types. For example, when encoding a keyword :foo, you'll get a vector of ["!kw", "foo"]. When decoding that vector, by the "!kw" string the mapper understands it is a keyword and coerces "foo" to :foo.

Here is how you create a mapper with Jsonista:

(ns ...
  (:require
   [jsonista.core :as j]
   [jsonista.tagged :as jt])
  (:import
   (clojure.lang Keyword PersistentHashSet)))

(def tagged-mapper
  (j/object-mapper
   {:encode-key-fn true
    :decode-key-fn true
    :modules
    [(jt/module
      {:handlers
       {Keyword {:tag "!kw"
                 :encode jt/encode-keyword
                 :decode keyword}
        PersistentHashSet {:tag "!set"
                           :encode jt/encode-collection
                           :decode set}}})]}))

The object-mapper function accepts even more options but we skip them for now.

Now that you have a mapper, pass it into a config:

(def config
  {:host ""
   :port 10140
   :user "test"
   :password "test"
   :dbname "test"
   :object-mapper tagged-mapper})

(def conn
  (pg/connect config))

All the JSON operations made by this connection will use the passed object mapper. Let's insert a set of keywords:

(pg/execute conn
            "insert into test_json (data) values ($1)"
            {:params [{:object #{:foo :bar :baz}}]})

When read back, the JSON value is not a vector of strings any longer but a set of keywords:

(pg/execute conn "select * from test_json")

[{:id 1, :data {:object #{:baz :bar :foo}}}]

To peek at the raw JSON value, select it as plain text and print it (just to avoid escaping quotes):

(println (pg/execute conn "select data::text json_raw from test_json where id = 10"))

;; [{:json_raw {"object": ["!set", [["!kw", "baz"], ["!kw", "bar"], ["!kw", "foo"]]]}}]

If you read that row using another connection with a default object mapper, the data is returned without expanding tags.
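For example, reading the same row through a second connection configured without the custom mapper (conn-default here is hypothetical) yields the tags as plain data:

```clojure
;; conn-default: a connection built from a config
;; without the :object-mapper option
(pg/execute conn-default "select * from test_json")
;; [{:id 1, :data {:object ["!set" [["!kw" "baz"] ["!kw" "bar"] ["!kw" "foo"]]]}}]
```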

Utility pg.json namespace

PG2 provides a utility namespace for JSON encoding and decoding. You can use it for files, HTTP APIs, etc. If you already have PG2 in the project, there is no need to plug in Cheshire or another JSON library. The namespace is pg.json:

(ns ...
  (:require
   [pg.json :as json]))

Reading JSON

The read-string function reads a value from a JSON string:

(json/read-string "[1, 2, 3]")

[1 2 3]

The first argument might be an object mapper:

(json/read-string tagged-mapper "[\"!kw\", \"hello\"]")

;; :hello
The functions read-stream and read-reader act the same but accept either an InputStream or a Reader object:

(let [in (-> "[1, 2, 3]" .getBytes io/input-stream)]
  (json/read-stream tagged-mapper in))

(let [in (-> "[1, 2, 3]" .getBytes io/reader)]
  (json/read-reader tagged-mapper in))

Writing JSON

The write-string function dumps a value into a JSON string:

(json/write-string {:test [:hello 1 true]})

;; "{\"test\":[\"hello\",1,true]}"

The first argument might be a custom object mapper. Let's reuse our tagged mapper:

(json/write-string tagged-mapper {:test [:hello 1 true]})

;; "{\"test\":[[\"!kw\",\"hello\"],1,true]}"

The functions write-stream and write-writer act the same. The only difference is, they accept either an OutputStream or a Writer object. The first argument might be a mapper as well:

(let [out (new ByteArrayOutputStream)]
  (json/write-stream tagged-mapper {:foo [:a :b :c]} out))

(let [out (new StringWriter)]
  (json/write-writer tagged-mapper {:foo [:a :b :c]} out))

Ring HTTP middleware

PG2 provides an HTTP Ring middleware for JSON. It acts like the wrap-json-request and wrap-json-response middleware from the ring-json library. Compared to them, the PG2 middleware has the following advantages:

  • it's faster because of Jsonista, whereas Ring-json relies on Cheshire;
  • it wraps both request and response at once with a shortcut;
  • it supports custom object mappers.

Imagine you have a Ring handler that reads JSON body and returns a JSON map. Something like this:

(defn api-handler [request]
  (let [user-id (-> request :data :user_id)
        user (get-user-by-id user-id)]
    {:status 200
     :body {:user user}}))

Here is how you wrap it:

(ns ...
  (:require
   [pg.ring.json :refer [wrap-json
                         wrap-json-request
                         wrap-json-response]]))

(def app
  (-> api-handler
      (wrap-json <opt>)))

Above, the wrap-json wrapper is a combination of wrap-json-request and wrap-json-response. You can apply them both explicitly:

(def app
  (-> api-handler
      (wrap-json-request <opt>)
      (wrap-json-response <opt>)))

All three wrap-json... middleware accept a handler to wrap and a map of options. Here are the supported options:

  • :object-mapper (request, response): a custom instance of ObjectMapper;
  • :slot (request): a field to assoc the parsed JSON data (1);
  • :malformed-response (request): a Ring response returned when the payload cannot be parsed (2).


  1. The default slot name is :json. Please avoid using :body or :params to prevent overriding existing request fields. This is especially important for :body! Often, you need the original input stream to calculate an MD5 or SHA-256 hash-sum of the payload. If you overwrite the :body field, you cannot do that.

  2. The default malformed response is something like 400 "Malformed JSON" (plain text).

A full example:

(def json-opt
  {:slot :data
   :object-mapper tagged-mapper ;; see above
   :malformed-response {:status 404
                        :body "<h1>Bad JSON</h1>"
                        :headers {"content-type" "text/html"}}})

(def app
  (-> api-handler
      (wrap-json json-opt)))

Arrays support

In JDBC, arrays have always been a pain. Every time you're about to pass an array to the database and read it back, you've got to wrap your data in various Java classes, extend protocols, and multimethods. In Postgres, the array type is quite powerful yet underestimated due to poor support of drivers. This is one more reason for running this project: to bring easy access to Postgres arrays.

PG2 tries its best to provide seamless connection between Clojure vectors and Postgres arrays. When reading an array, you get a Clojure vector. And vice versa: to pass an array object into a query, just submit a vector.

PG2 supports arrays of any type: not only primitives like numbers and strings, but also uuid, numeric, timestamp(tz), json(b), and more.

Arrays might have more than one dimension. Nothing prevents you from having a 3D array of integers like cube::int[][][], and it becomes a nested vector when fetched by PG2.

A technical note: PG2 supports both encoding and decoding of arrays in both text and binary modes.

Here is a short demo session. Let's prepare a table with an array of strings:

(pg/query conn "create table arr_demo_1 (id serial, text_arr text[])")

Insert a simple item:

(pg/execute conn
            "insert into arr_demo_1 (text_arr) values ($1)"
            {:params [["one" "two" "three"]]})

In arrays, some elements might be NULL:

(pg/execute conn
            "insert into arr_demo_1 (text_arr) values ($1)"
            {:params [["foo" nil "bar"]]})

Now let's check what we've got so far:

(pg/query conn "select * from arr_demo_1")

[{:id 1 :text_arr ["one" "two" "three"]}
 {:id 2 :text_arr ["foo" nil "bar"]}]

Postgres supports plenty of operators for arrays. Say, the && one checks if there is at least one common element on both sides. Here is how we find those records that have either "three", "four", or "five":

(pg/execute conn
            "select * from arr_demo_1 where text_arr && $1"
            {:params [["three" "four" "five"]]})

[{:text_arr ["one" "two" "three"], :id 1}]

Another useful operator is @> that checks if the left array contains all elements from the right array:

(pg/execute conn
            "select * from arr_demo_1 where text_arr @> $1"
            {:params [["foo" "bar"]]})

[{:text_arr ["foo" nil "bar"], :id 2}]

Let's proceed with numeric two-dimensional arrays. They're widely used in math, statistics, graphics, and similar areas:

(pg/query conn "create table arr_demo_2 (id serial, matrix bigint[][])")

Here is how you insert a matrix:

(pg/execute conn
            "insert into arr_demo_2 (matrix) values ($1)"
            {:params [[[[1 2] [3 4] [5 6]]
                       [[6 5] [4 3] [2 1]]]]})

{:inserted 1}

Pay attention: each number can be NULL but you cannot have NULL for an entire sub-array. This will trigger an error response from Postgres.

Reading the matrix back:

(pg/query conn "select * from arr_demo_2")

[{:id 1 :matrix [[[1 2] [3 4] [5 6]]
                 [[6 5] [4 3] [2 1]]]}]

A crazy example: let's have a three-dimensional array of timestamps with a time zone. No idea how it can be used, but still:

(pg/query conn "create table arr_demo_3 (id serial, matrix timestamp[][][])")

(def -matrix
  [[[[(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]]
    [[(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]]
    [[(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]]]
   [[[(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]]
    [[(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]]
    [[(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]]]
   [[[(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]]
    [[(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]]
    [[(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]
     [(OffsetDateTime/now) (OffsetDateTime/now) (OffsetDateTime/now)]]]])

(pg/execute conn
            "insert into arr_demo_3 (matrix) values ($1)"
            {:params [-matrix]})

Now read it back:

(pg/query conn "select * from arr_demo_3")

  [... truncated
   [[[#object[java.time.LocalDateTime 0x5ed6e62b "2024-04-01T18:32:48.272169"]
      #object[java.time.LocalDateTime 0xb9d6851 "2024-04-01T18:32:48.272197"]
      #object[java.time.LocalDateTime 0x6e35ed84 "2024-04-01T18:32:48.272207"]]
     [#object[java.time.LocalDateTime 0x7319d217 "2024-04-01T18:32:48.272236"]
      #object[java.time.LocalDateTime 0x6153154d "2024-04-01T18:32:48.272241"]
      #object[java.time.LocalDateTime 0x2e4ffd44 "2024-04-01T18:32:48.272247"]]]
    [[#object[java.time.LocalDateTime 0x32c6e526 "2024-04-01T18:32:48.272405"]
      #object[java.time.LocalDateTime 0x496a5bc6 "2024-04-01T18:32:48.272418"]
      #object[java.time.LocalDateTime 0x283531ee "2024-04-01T18:32:48.272426"]]
     [#object[java.time.LocalDateTime 0x677b3def "2024-04-01T18:32:48.272459"]
      #object[java.time.LocalDateTime 0x46d5039f "2024-04-01T18:32:48.272467"]
      #object[java.time.LocalDateTime 0x3d0b906 "2024-04-01T18:32:48.272475"]]]]],
  :id 1}]

You can have an array of JSON(b) objects, too:

(pg/query conn "create table arr_demo_4 (id serial, json_arr jsonb[])")

Inserting an array of three maps:

(pg/execute conn
            "insert into arr_demo_4 (json_arr) values ($1)"
            {:params [[{:foo 1} {:bar 2} {:test [1 2 3]}]]})

Elements might be anything that can be JSON-encoded: numbers, strings, booleans, etc. The only tricky case is a vector. To not break the algorithm that traverses the matrix, wrap a vector element with pg/json-wrap:

(pg/execute conn
            "insert into arr_demo_4 (json_arr) values ($1)"
            {:params [[42 nil {:some "object"} (pg/json-wrap [1 2 3])]]})

;; Signals that the [1 2 3] is not a nested array but an element.

Now read it back:

(pg/query conn "select * from arr_demo_4")

[{:id 1, :json_arr [42 nil {:some "object"} [1 2 3]]}]

Notify/Listen (PubSub)



Errors and Exceptions

Connection Pool

Component integration

Ring middleware


Migrations

PG2 provides its own migration engine through the pg2-migration package (see Installation). Like Migratus or Ragtime, it allows you to grow the database schema continuously, track changes, and apply them with care.


Migrations are SQL files that are applied to the database in a certain order. A migration has an id and a direction: next/up or prev/down. Usually, it's split into two files called <id>.up.sql and <id>.down.sql holding SQL commands. Say, the -up file creates a table with an index, and the -down one drops the index first, and then the table.

Migrations might have a slug: a short human-friendly text describing changes. For example, in a file called 002.create-users-table.up.sql, the slug is "Create users table".


In PG2, the migration framework looks for files matching the following pattern:

<id>-<slug>.<direction>.sql

where:

  • id is a Long number, for example 12345 (a counter), or 20240311 (date precision), or 20240311235959 (date & time precision);

  • slug is an optional word or group of words joined with - or _, for example create-users-table-and-index or remove_some_view. When rendered, both - and _ are replaced with spaces, and the phrase is capitalized.

  • direction is either prev/down or next/up. Internally, down and up are transformed to prev and next because these two have the same amount of characters and files look better.
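Putting the three parts together, here are some plausible file names and how they would be parsed (the slugs are made up for illustration):

```text
0001-create-users-table.next.sql  -> id 1,        slug "Create users table", direction next
0001-create-users-table.prev.sql  -> id 1,        slug "Create users table", direction prev
20240311-add_some_view.up.sql     -> id 20240311, slug "Add some view",      direction next
```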



Above, the leading zeroes in ids are used for better alignment only. Internally, they are parsed into the Long numbers 1, 12 and 153. Thus, 001, 01 and 1 become the same id 1 after parsing.

Each id has at most two directions: prev/down and next/up. On bootstrap, the engine checks it to prevent weird behaviour. The table below shows there are two rows which, after parsing, have the same (id, direction) pair. The bootstrap step will end up with an exception saying which files duplicate each other.

Filename                     Parsed
001-create-users.next.sql    (1, next)
001-create-users.up.sql      (1, next)

A migration might have only one direction, e.g. next/up or prev/down file only.

When parsing, letter case is ignored, meaning that both 001-Create-Users.NEXT.sql and 001-create-users.next.sql produce the same map.


The files hold SQL expressions to be evaluated by the engine. Here is the content of a sample -up file:

create table IF NOT EXISTS test_users (
  id serial primary key,
  name text not null
);

BEGIN;

insert into test_users (name) values ('Ivan');
insert into test_users (name) values ('Huan');
insert into test_users (name) values ('Juan');

COMMIT;
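For illustration, a matching -down file (contents hypothetical, not taken from the library) would undo the changes:

```sql
-- dropping the table also removes its rows;
-- "if exists" keeps the rollback idempotent
drop table if exists test_users;
```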


Pay attention to the following points.

  • A single file might have as many SQL expressions as you want. There is no need to separate them with magic comments like --;; as Migratus requires. The whole file is executed in a single query. Use the standard semicolon at the end of each expression.

  • There is no hidden transaction management. Transactions are up to you: they are explicit! Above, we wrap three INSERT queries into a single transaction. You can use savepoints, rollbacks, or whatever you want. Note that not every expression can run inside a transaction (for example, CREATE INDEX CONCURRENTLY cannot); above, the CREATE TABLE statement is kept outside the transaction scope.

For granular transaction control, split your complex changes into two or three files named like this (filenames are illustrative):

# forward parts
005.big-change-step-1.next.sql
006.big-change-step-2.next.sql

# backward counterparts
006.big-change-step-2.prev.sql
005.big-change-step-1.prev.sql

No Code-Driven Migrations

At the moment, neither .edn nor .clj migrations are supported. This is by design because personally I'm highly against mixing SQL and Clojure. Every time I see an EDN transaction, I get angry. Mixing these two for database management is the worst idea one can come up with. If you're thinking about migrating a database with Clojure, please close your laptop and take a walk to the nearest park.

Migration Resources

Migration files are stored in project resources. The default search path is migrations, so their physical location is resources/migrations. The engine scans the migrations resource for child files. Files from nested directories are taken into account as well. The engine also supports Jar resources when running the code from an uberjar.
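With the default settings, a layout like this (filenames illustrative) would be picked up, including the nested directory:

```
resources/
└── migrations/
    ├── 001.create-users.prev.sql
    ├── 001.create-users.next.sql
    └── foobar/
        ├── 002.create-profiles.prev.sql
        └── 002.create-profiles.next.sql
```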

The resource path can be overridden with settings.

Migration Table

All the applied migrations are tracked in a database table called migrations by default. The engine saves the id and the slug of a migration applied, as well as the current timestamp of the event. The timestamp field has a time zone. Here is the structure of the table:

CREATE TABLE IF NOT EXISTS migrations (
  id BIGINT PRIMARY KEY,
  slug TEXT,
  created_at timestamp with time zone not null default current_timestamp
);

Every time you apply a migration, a new record is inserted into the table. On rollback, a corresponding migration is deleted.
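Conceptually, applying and rolling back a migration affect the table like this (a sketch, not the engine's literal queries):

```sql
-- applying migration 2:
insert into migrations (id, slug) values (2, 'create profiles');

-- rolling it back:
delete from migrations where id = 2;
```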

You can override the name of the table in settings (see below).

CLI Interface

The migration engine is controlled with both API and CLI interface. Let's review CLI first.

The pg.migration.cli namespace acts as the main entry point. It accepts general options, a command, and command-specific options:

<lein or deps preamble> <global options> <command> <command options>

General options are:

-c, --config CONFIG      migration.config.edn      Path to the .edn config file
-p, --port PORT          5432                      Port number
-h, --host HOST          localhost                 Host name
-u, --user USER          The current USER env var  User
-w, --password PASSWORD  <empty string>            Password
-d, --database DATABASE  The current USER env var  Database
    --table TABLE        :migrations               Migrations table
    --path PATH          migrations                Migrations path

Most of the options have default values. Both user and database names come from the USER environment variable. The password is an empty string by default. For local trusted connections, the password might not be required.

The list of the commands:

create     Create a pair of blank up & down migration files
help       Print a help message
list       Show all the migrations and their status (applied or not)
migrate    Migrate forward (everything, next only, or up to a certain ID)
rollback   Rollback (the current one, everything, or down to a certain ID)

Each command has its own sub-options which we will describe below.

Here is how you review the migrations:

<lein or deps preamble> \
    -h \
    -p 10150 \
    -u test \
    -w test \
    -d test \
    --table migrations_test \
    --path migrations \
    list

|    ID | Applied? | Slug
| ----- | -------- | --------
|     1 | true     | create users
|     2 | false    | create profiles
|     3 | false    | next only migration
|     4 | false    | prev only migration
|     5 | false    | add some table

Every command has its own arguments and help message. For example, to review the create command, run:

lein with-profile +migrations run -m pg.migration.cli -c config.example.edn create --help

      --id ID             The id of the migration (auto-generated if not set)
      --slug SLUG         Optional slug (e.g. 'create-users-table')
      --help       false  Show help message


Passing -u, -h, and other arguments all the time is inconvenient. The engine can read them at once from a config file. The default config location is migration.config.edn. Override the path to the config using the -c parameter:

<lein/deps> -c config.edn list

The config file has the following structure:

{:host ""
 :port 10150
 :user "test"
 :password #env PG_PASSWORD
 :database "test"
 :migrations-table :migrations_test
 :migrations-path "migrations"}

The :migrations-table field must be a keyword because it takes place in a HoneySQL map.
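To see why a keyword works, here is a minimal HoneySQL sketch (assuming honey.sql 2.x is on the classpath):

```clojure
(require '[honey.sql :as sql])

;; a keyword table name is rendered as a bare identifier
(sql/format {:select [:*] :from [:migrations_test]})
;; => ["SELECT * FROM migrations_test"]
```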

The :migrations-path field is a string referencing a resource with migrations.

Pay attention to the #env tag. The engine uses custom readers when loading a config. The tag reads the actual value from an environment variable. Thus, the database password won't be exposed to everyone. When the variable is not set, an exception is thrown.



Create

The create command makes a pair of blank up & down migration files. If not set, the id is generated automatically from the current date & time using the YYYYmmddHHMMSS pattern.

lein with-profile +migrations run -m pg.migration.cli \
  -c config.example.edn \
  create

ls -l migrations

You can also provide a custom id and a slug as well:

lein with-profile +migrations run -m pg.migration.cli \
  -c config.example.edn \
  create \
  --id 100500 \
  --slug 'some huge changes in tables'

ls -l migrations


List

The list command renders all the migrations and their status: whether they are applied or not.

lein with-profile +migrations run -m pg.migration.cli -c config.example.edn list

|    ID | Applied? | Slug
| ----- | -------- | --------
|     1 | true     | create users
|     2 | true     | create profiles
|     3 | true     | next only migration
|     4 | false    | prev only migration
|     5 | false    | add some table


Migrate

The migrate command applies migrations to the database. By default, all the pending migrations are processed. You can change this behaviour using these flags:

... migrate --help

      --all           Migrate all the pending migrations
      --one           Migrate the next pending migration only
      --to ID         Migrate up to a certain migration
      --help   false  Show help message

With the --one flag set, only the next migration will be applied. If the --to parameter is set, only migrations up to the given ID are processed. Examples:

... migrate           # all migrations
... migrate --all     # all migrations
... migrate --one     # next only
... migrate --to 123  # all that <= 123


Rollback

The rollback command reverses changes in the database and removes corresponding records from the migration table. By default, only the current migration is rolled back. Syntax:

... rollback --help

      --all           Rollback all the applied migrations
      --one           Rollback the current migration only
      --to ID         Rollback down to a certain migration
      --help   false  Show help message

The --one behaviour is the default. When --all is passed, all the applied migrations are rolled back. To roll back to a certain migration, pass --to ID. Examples:

... rollback               # current only
... rollback --one         # current only
... rollback --to 20240515 # down to 20240515
... rollback --all         # down to the very beginning

Lein examples

The Lein preamble usually looks something like this:

> lein run -m pg.migration.cli <ARGS>

The pg2-migration library must be in dependencies. Since migrations are managed separately from the main application, the dependency is usually put into a dedicated profile, for example:

:profiles
{:migrations
 {:main pg.migration.cli
  :resource-paths ["path/to/resources"]
  :dependencies
  [[com.github.igrishaev/pg2-migration ...]]}}

Above, the migrations profile has the dependency and the :main attribute. Now run lein run with migration arguments:

> lein with-profile +migrations run -c migration.config.edn migrate --to 100500

Deps.edn examples

Here is an example of an alias in deps.edn that prints pending migrations:

{:aliases
 {:migrations-list
  {:extra-deps
   {com.github.igrishaev/pg2-migration {:mvn/version "..."}}
   :main-opts
   ["-m" "pg.migration.cli"
    "-h" ""
    "-p" "10150"
    "-u" "test"
    "-w" "test"
    "-d" "test"
    "--table" "migrations_test"
    "--path" "migrations"
    "list"]}}}

Run it as follows:

> clj -M:migrations-list

You can shorten it by using the config file. Move all the parameters into the migration.config.edn file, and keep only a command with its sub-arguments in the :main-opts vector:

{:aliases
 {:migrations-migrate
  {:extra-deps
   {com.github.igrishaev/pg2-migration {:mvn/version "..."}}
   :main-opts ["migrate" "--all"]}}}

To migrate:

> clj -M:migrations-migrate

API Interface

There is a way to manage migrations through code. The pg.migration.core namespace provides basic functions to list, create, migrate, and rollback migrations.

To migrate, call one of the following functions: migrate-to, migrate-all, and migrate-one. All of them accept a config map:

(ns demo
  (:require
   [pg.migration.core :as mig]))

(def CONFIG
  {:host ""
   :port 5432
   :user "test"
   :password "secret"
   :database "test"
   :migrations-table :test_migrations
   :migrations-path "migrations"})

;; migrate all pending migrations
(mig/migrate-all CONFIG)

;; migrate only one next migration
(mig/migrate-one CONFIG)

;; migrate to a certain migration
(mig/migrate-to CONFIG 20240313)

The same applies to rollback:

;; rollback all previously applied migrations
(mig/rollback-all CONFIG)

;; rollback the current migration
(mig/rollback-one CONFIG)

;; rollback to the given migration
(mig/rollback-to CONFIG 20230228)
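In application code, a common pattern (a sketch; connection details are placeholders) is to apply all pending migrations on startup:

```clojure
(ns demo.boot
  (:require
   [pg.migration.core :as mig]))

;; connection details below are placeholders
(def CONFIG
  {:host "127.0.0.1"
   :port 5432
   :user "test"
   :password "secret"
   :database "test"})

(defn -main [& _args]
  ;; apply everything pending before the app starts serving traffic
  (mig/migrate-all CONFIG))
```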

The read-disk-migrations function reads migrations from disk. It returns a sorted map without information about whether migrations have been applied:

(mig/read-disk-migrations "migrations")

{1
 {:id 1
  :slug "create users"
  :url-prev #object[java.net.URL "file:/.../migrations/001.create-users.prev.sql"]
  :url-next #object[java.net.URL "file:/.../migrations/001.create-users.next.sql"]}
 2
 {:id 2
  :slug "create profiles"
  :url-prev #object[java.net.URL "file:/.../migrations/foobar/002.create-profiles.prev.sql"]
  :url-next #object[java.net.URL "file:/.../migrations/foobar/002.create-profiles.next.sql"]}}

The make-scope function accepts a config map and returns a scope map. The scope map knows everything about the state of migrations, namely: which of them have been applied, what is the current migration, the table name, the resource path, and more.

The function create-migration-files creates and returns a pair of empty SQL files. By default, the id is generated from the current date & time, and the slug is missing:

(create-migration-files "migrations")

[#object[java.net.URL "migrations/20240313120122.prev.sql"]
 #object[java.net.URL "migrations/20240313120122.next.sql"]]

Pass id and slug in options if needed:

(create-migration-files "migrations" {:id 12345 :slug "Hello migration"})

[#object[java.net.URL "migrations/12345.hello-migration.prev.sql"]
 #object[java.net.URL "migrations/12345.hello-migration.next.sql"]]


Conflicts

On bootstrap, the engine checks migrations for conflicts. A conflict is a situation when a migration with a greater id has been applied before a migration with a smaller one. Usually it happens when two developers create migrations in parallel and merge them in the wrong order. For example:

  • the latest migration id is 20240312;
  • developer A makes a new branch and creates a migration 20240315;
  • the next day, developer B opens a new branch with a migration 20240316;
  • dev B merges the branch, now we have 20240312, then 20240316;
  • dev A merges the branch, and we have 20240312, 20240316, 20240315.

When you try to apply migration 20240315, the engine checks whether 20240316 has already been applied. If it has, an exception pops up saying which migrations cause the problem (in our case, 20240316 and 20240315). To recover from the conflict, rename 20240315 to 20240317.

In other words, this is a conflict:

id        applied?
20240312  true
20240315  false
20240316  true  ;; applied before 20240315

And this is a solution:

id        applied?
20240312  true
20240316  true
20240317  false ;; 20240315 renamed to 20240317


