
Pathom Developers Guide

Table of Contents

1. Introduction

The pathom library provides a rich set of functionality to build robust parsers to process EQL graph queries.

The library includes:

  • A reader abstraction that allows for easy composition.

  • The concept of entity which works as a base framework for reusable sharable readers.

  • A plugin system with some built-in plugins:

    • Error handler: Handles errors at an attribute level.

    • Request cache: Caches results of repeated parsing work that can happen within a single request.

    • Profiler: A plugin to measure the time spent on each attribute during parsing.

  • Connect: a higher level abstraction that can resolve attribute relationships automatically. For example automatic traversal of database joins or resolving data through network requests. This enables exploratory capabilities and much simpler access when the need arises to do extra work to resolve a single conceptual join.

  • GraphQL integration: Use GraphQL endpoints directly from your query system (in development).

Most people will find the most leverage in the "Connect" features, which allow you to quickly build dynamic query processing systems to easily satisfy client data requests.

1.1. Docs are under upgrade!

The docs are getting more love in a new format! If you want to see it (still under work) you can find it at https://wilkerlucio.github.io/pathom/v2. The new docs have quick links to edit each page, PR’s are very welcome to improve it!

1.2. Aliases Used in Code Examples

Throughout the book our code examples will use aliases instead of explicit namespaces. The aliases used are as if we had the following namespace requires:

(ns my-namespace
  (:require
    [com.wsscode.pathom.core :as p]
    [com.wsscode.pathom.connect :as pc]
    [com.wsscode.pathom.connect.graphql :as pcg]
    [com.wsscode.pathom.graphql :as pg]
    [com.wsscode.pathom.trace :as pt]
    [com.wsscode.common.async-clj(s) :refer [let-chan <!p go-catch <? <?maybe]]))

So, any time you see a usage of a namespace in a keyword or function like p/parser or ::p/reader you should remember that these are the namespaces involved.

1.3. Presentations

If you like to learn by seeing presentations, there are two that mention it:

1.4. Contributing

The source for this book is at https://github.com/wilkerlucio/pathom/blob/master/docs/DevelopersGuide.adoc. Feel free to send a PR with edits, corrections, or other contributions. If you want to make a large edit, please open an issue first.

1.5. Upgrade notes

In Pathom we try our best not to introduce breaking changes; in some cases we prefer to introduce a new function or namespace instead of replacing an old one when their results might differ. This part of the guide provides suggestions for what to do when upgrading to a certain version. In the exceptional cases where we do introduce breaking changes, they will also appear in the upgrade notes here.

1.5.1. 2.2.0 - Upgrade guide

Supporting resolver libraries

This is not a breaking change. Pathom 2.2.0 introduces new dispatchers to call resolvers and mutations. The old dispatchers relied on a multi-method to invoke the calls; the new dispatchers simply look up a lambda in the resolver/mutation definition stored in the index. The main advantage is that we reduce the number of places we need to change when adding resolvers and mutations. Previously you had three places to change: the index, the resolver dispatch, and the mutation dispatch. With the new dispatch there is just the index.

This will facilitate the creation of shared resolver/mutation libraries that you can inject and make part of your parsing system. As an example of a shared library, I have written a demo repo implementing the Youtube API for connect.

To enable this feature you will have to change the dispatch function used in the parser setup, replacing your resolver fns with the new ones provided by connect. Example:

; this is the old setup

; setup indexes atom
(def indexes (atom {}))

; setup resolver dispatch and factory
(defmulti resolver-fn pc/resolver-dispatch)
(def defresolver (pc/resolver-factory resolver-fn indexes))

; setup mutation dispatch and factory
(defmulti mutation-fn pc/mutation-dispatch)
(def defmutation (pc/mutation-factory mutation-fn indexes))

(def parser
  (p/parser {::p/env     {::p/reader             [p/map-reader pc/all-readers]
                          ::pc/resolver-dispatch resolver-fn
                          ::pc/mutate-dispatch   mutation-fn
                          ::pc/indexes           @indexes
                          ::db                   (atom {})}
             ::p/mutate  pc/mutate
             ::p/plugins [p/error-handler-plugin
                          p/request-cache-plugin
                          pp/profile-plugin]}))

The minimal change is to use this:

; minimal changes to support shared resolver libraries

; setup indexes atom
(def indexes (atom {}))

; setup resolver dispatch and factory
(defmulti resolver-fn pc/resolver-dispatch)
(def defresolver (pc/resolver-factory resolver-fn indexes))

; setup mutation dispatch and factory
(defmulti mutation-fn pc/mutation-dispatch)
(def defmutation (pc/mutation-factory mutation-fn indexes))

(def parser
  (p/parser {::p/env     {::p/reader             [p/map-reader pc/reader2 pc/ident-reader] ; use reader2
                          ; replace resolver dispatch
                          ::pc/resolver-dispatch pc/resolver-dispatch-embedded
                          ; replace mutation dispatch
                          ::pc/mutate-dispatch   pc/mutation-dispatch-embedded
                          ::pc/indexes           @indexes
                          ::db                   (atom {})}
             ::p/mutate  pc/mutate
             ::p/plugins [; add connect plugin
                          (pc/connect-plugin)
                          p/error-handler-plugin
                          p/request-cache-plugin
                          pp/profile-plugin]}))

The new versions of resolver-factory and mutation-factory will add the lambdas into the definition map, making those compatible with the new *-dispatch-embedded, so you get your old resolvers plus any extra ones from libs.

From now on when I say resolver or resolvers I mean both resolvers and mutations; I'm adding this note here so you don't have to read all the repetition.

From now on we will be recommending the new way of writing resolvers using the pc/defresolver macro. There are a few advantages I would like to highlight about this approach:

  1. Your resolvers become isolated building blocks of their own. Instead of having their definition spread across the index and a multi-method, the map now contains everything the resolver needs to be used.

  2. You get fine-grained control over which resolvers you want to inject in a given parser. Before, it wasn't easy to write several parsers using subsets of resolvers; with each resolver in a symbol you can compose them as you please.

  3. Simplified boilerplate: there is no more need to define the multi-methods for dispatching.

This is what the setup looks like by using the new map format:

; setup with map format

; this will generate a def for the symbol `some-resolver` and the def will
; contain a map that is the resolver definition, no external side effects
(pc/defresolver some-resolver [env input]
  {::pc/input  #{::id}
   ::pc/output [::name ::email]}
  (get (::db env) (::id input)))

; define another resolver
(pc/defresolver other-resolver ...)

; now it's a good practice to create a sequence containing the resolvers
(def app-registry [some-resolver other-resolver])

(def parser
  (p/parser {::p/env     {::p/reader             [p/map-reader pc/reader2 pc/ident-reader]
                          ::pc/resolver-dispatch pc/resolver-dispatch-embedded
                          ::pc/mutate-dispatch   pc/mutation-dispatch-embedded
                          ::db                   (atom {})}
             ::p/mutate  pc/mutate
             ::p/plugins [; you can use the connect plugin to register your resolvers,
                          ; but any plugin with the ::pc/register key will be also
                          ; included in the index
                          (pc/connect-plugin {::pc/register app-registry})
                          p/error-handler-plugin
                          p/request-cache-plugin
                          pp/profile-plugin]}))

The pain point added is that you now have to specify the resolvers to use, but remember that before this the only option was all or nothing. If you have resolvers spread across many files, I suggest you create one list at the end of each namespace containing all the resolvers from that file; this way you can combine those into an index later. The resolver list is flattened when it's processed, so it's ok to send lists inside lists, which facilitates the combination of lists of resolvers.
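
For example, the per-namespace lists can be combined like this (the namespace and resolver names are hypothetical):

```clojure
;; at the end of my-app.users
(def resolvers [user-by-id user-friends])

;; at the end of my-app.products
(def resolvers [product-by-id product-brand])

;; in the parser namespace: nested lists are flattened out
;; during processing, so lists of lists are fine
(def app-registry
  [my-app.users/resolvers
   my-app.products/resolvers])
```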

The multi-method format is still ok to use; there are no plans to remove it, so keep using it if you prefer.
Parallel parser

Pathom 2.2.0 also introduces the parallel parser. Before this, all Pathom processing was done serially, one attribute at a time; the new parser adds the ability to process attributes in parallel. The mechanism is described in the parallel parser section.

If you are using the async-parser, the change to the parallel parser is just swapping the parser for parallel-parser and updating the connect readers. If you are using the regular sync parser, then you may need to adapt some things to support an async environment. Here are things to watch for:

  1. If you wrote plugins, when wrapping things you must consider that their responses may be async (return core.async channels). One of the easiest ways to handle this is the let-chan macro, which is a let that automatically handles channels and makes the process transparent.

  2. If you make recursive parser calls (that includes calls to functions like join, or entity with arity 2), remember that those may also return channels.
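
As a sketch of the first point, here is a hypothetical plugin that logs each read; let-chan handles both plain values and channel results transparently:

```clojure
(ns my-app.plugins
  (:require [com.wsscode.pathom.core :as p]
            ;; use com.wsscode.common.async-cljs in ClojureScript
            [com.wsscode.common.async-clj :refer [let-chan]]))

;; hypothetical plugin: logs the value of every read, whether the
;; wrapped reader returns a plain value or a core.async channel
(def log-read-plugin
  {::p/wrap-read
   (fn [reader]
     (fn [env]
       (let-chan [value (reader env)]
         (println "resolved" (get-in env [:ast :dispatch-key]) "=>" value)
         value)))})
```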

Tracer

Pathom 2.2.0 includes a new tracer feature. I recommend you replace the old profiler with it: remove pp/profile-plugin and add p/trace-plugin (best placed as the last plugin in your chain).

1.5.2. 2.2.0-beta11 → 2.2.0-RC1 - Breaking changes

In version 2.2.0-beta11 we introduced pc/connect-plugin and pc/register with the intent of providing an easier way to write shared resolvers and also reducing the boilerplate to set up connect.

This strategy failed to keep the setup of a register and further integrations simple, because it relied on multiple parts. A better strategy emerged: embedding the lambda that runs the resolvers and mutations in their own maps, so they are complete and stand-alone.

But to accommodate this, the connect plugin and pc/register had to change. Before, pc/connect-plugin was a var; now it's a fn that you must call. The register used to take the index atom, the multimethod for resolvers, and the multimethod for mutations, and did a stateful mutation of all three. Now it takes the index in map format and returns another index with the things registered; it's now a pure function.

2. How to Use This Library

We expect most of our user base is made up of Fulcro users, but this library is a stand-alone thing that you can use to fulfill any system using EQL queries. The purpose of this library is to make it much easier to build code that can process EQL on both the client and server side. We expect you to have one or more of the following needs:

  • You want to fulfill a client UI query from some server-side data source(s).

  • You want to build a client-side parser for directly filling local UI data queries from a local data source.

  • You want to build a parser (client or server) that uses async APIs to fulfill different parts of a query. Perhaps gluing together data sources from various micro-services.

  • You want to use a GraphQL API from the client.

  • You want to provide third-party users a GraphQL API (Future Work)

When building most parsers you’ll want to use Pathom Connect.

To process EQL queries against GraphQL you’ll use the GraphQL Integration.

3. Pathom Connect

Connect provides a high-level abstraction layer for building query processing code. It handles a number of the gory details in an automatic way that allows you to focus more on your data model and less on the parser itself. It generates an index of your graph's features that can be used for a number of very useful features:

  1. Auto-complete of graph queries in tools for data exploration (see OgE).

  2. Graph edge generation from the index’s connection information.

  3. Multiple ways to reach a given attribute, automatically resolved via known reachable edges and transitive relations.

The Connect index is a rich source of information about how your attributes connect and how they can locate each other.

3.1. The Basics

In order to use connect you need to understand some basics about Pathom’s core features. These are covered in detail in later chapters, but you’ll easily understand the basics we need for connect without going into great detail.

You’re going to be defining a parser that uses an environment and graph query to produce a tree of data from arbitrary data sources.

If you’re unfamiliar with the EQL, you should first read up on that in the EQL documentation.

Some simple examples of what we’re doing are:

;; query
[:person/name]

;; possible result
{:person/name "Samantha"}

;; query
[:account/id
 {:billing/charges [:charge/amount]}]

;; possible result
{:account/id 1
 :billing/charges [{:charge/amount 11}
                   {:charge/amount 22}]}

To make sure we’re on the same page, here’s a quick bit of vocabulary:

Environment

A map of configuration and context that is used to configure the parser and is also passed to the reader.

Resolver

A Pathom Connect component that you write to indicate ways in which data attributes can be resolved against your data sources. Resolvers are composed together into a connect-based Reader.

Reader

A component that attempts to resolve elements of the query (one at a time). When using connect you also use a built-in Map Reader that can pull attributes that are already in the environment without having to do further work against your underlying resolvers and data sources.

Connect Indexes

A set of indexes that are filled with resolver data and allow connect to understand how graph queries can be resolved by the resolvers.

3.2. Baseline Boilerplate

Connect is generally set up with the minimal steps every time. Other sections of this book cover the options in more detail, but for the moment take this small bit of code as the general "starting point" for writing a connect-based query processing system:

(ns com.wsscode.pathom.book.connect.getting-started
  (:require [com.wsscode.pathom.core :as p]
            [com.wsscode.pathom.connect :as pc]))

;; Define one or more resolvers
(pc/defresolver some-symbol [env input] ...)
(pc/defresolver some-other-symbol [env input] ...)
...

;; resolvers are just maps, we can compose many using sequences
(def my-app-registry [some-symbol some-other-symbol])

;; Create a parser that uses the resolvers:
(def parser
  (p/parallel-parser
    {::p/env     {::p/reader               [p/map-reader
                                            pc/parallel-reader
                                            pc/open-ident-reader
                                            p/env-placeholder-reader]
                  ::p/placeholder-prefixes #{">"}}
     ::p/mutate  pc/mutate-async
     ::p/plugins [(pc/connect-plugin {::pc/register my-app-registry}) ; setup connect and use our resolvers
                  p/error-handler-plugin
                  p/request-cache-plugin
                  p/trace-plugin]}))

; note the parallel parser call will return a channel, you must read the value on it
; to get the parser results

3.3. Resolvers

In Connect you implement the graph by creating resolvers, those resolvers are functions that expose some data on the graph.

A resolver has a few basic elements:

  1. Inputs – A set of attributes that must be in the current parsing context for the resolver to be able to work. Inputs are optional; missing inputs mean that the resolver is always capable of working, independent of the current parsing context.

  2. Outputs - A query-like notation that gives the "pattern" of the part of the query the resolver is able to resolve. This is typically a list of attributes/joins, where joins typically include a simple subquery.

  3. A lambda - A (fn [env input-data] tree-of-promised-output) that takes the inputs and turns them into a tree that satisfies the "output query".

So you might define a resolver like this:

(pc/defresolver person-resolver [{:keys [database] :as env} {:keys [person/id]}]
  {::pc/input #{:person/id}
   ::pc/output [:person/first-name :person/age]}
  (let [person (my-database/get-person database id)]
    {:person/age        (:age person)
     :person/first-name (:first-name person)}))

If you use Cursive, you can ask it to resolve pc/defresolver as a defn and you will get proper symbol resolution.

Where the database in the environment would be supplied when running the parser, and the input would have to be found in the current context. Remember that graph queries are contextual: you have to have a starting node to work from, so in the above example we're assuming that during our parse we'll reach a point where the context contains a :person/id. The my-database stuff is just made up for this example, and is intended to show you that your data source need not remotely match the schema of your graph query.

Pathom will scan through the defined resolvers in order to try to satisfy all of the properties in a query. So, technically you can split up your queries as much as makes sense into separate resolvers, and as long as the inputs are in the context Pathom will assemble things back together.

Of course, it doesn’t make sense in this case to do so, because each resolver would end up running a new query:

(pc/defresolver person-age-resolver [{:keys [database] :as env} {:keys [person/id]}]
  {::pc/input #{:person/id}
   ::pc/output [:person/age]}
  (let [person (my-database/get-person database id)]
    {:person/age (:age person)}))

(pc/defresolver person-first-name-resolver [{:keys [database] :as env} {:keys [person/id]}]
  {::pc/input #{:person/id}
   ::pc/output [:person/first-name]}
  (let [person (my-database/get-person database id)]
    {:person/first-name (:first-name person)}))

...

The point is that a single-level query like [:person/id :person/first-name :person/age] can be satisfied and "folded together" by pathom over any number of resolvers.

This fact is the basis of parser (de)composition and extensibility. It can also come in handy for performance refinements when there are computed attributes.

3.3.1. Derived/Computed Attributes

There are times when you’d like to provide an attribute that is computed in some fashion. You can, of course, simply compute it within the resolver along with other properties like so:

(pc/defresolver person-resolver [{:keys [database] :as env} {:keys [person/id]}]
  {::pc/input #{:person/id}
   ::pc/output [:person/first-name :person/last-name :person/full-name :person/age]}
  (let [{:keys [age first-name last-name]} (my-database/get-person database id)]
    {:person/age        age
     :person/first-name first-name
     :person/last-name  last-name
     :person/full-name  (str first-name " " last-name) ; COMPUTED
     ...}))

but this means that you'll take the overhead of the computation whenever any query related to a person comes up. You can instead spread such attributes out into other resolvers as we discussed previously, which will only be invoked if the query actually asks for those properties:

(pc/defresolver person-resolver [{:keys [database] :as env} {:keys [person/id]}]
  {::pc/input #{:person/id}
   ::pc/output [:person/first-name :person/last-name :person/age]}
  (let [{:keys [age first-name last-name]} (my-database/get-person database id)]
    {:person/age        age
     :person/first-name first-name
     :person/last-name  last-name}))

(pc/defresolver person-name-resolver [_ {:person/keys [first-name last-name]}]
  {::pc/input #{:person/first-name :person/last-name}
   ::pc/output [:person/full-name]}
  {:person/full-name (str first-name " " last-name)})

This combination of resolvers can still resolve all of the properties in [:person/full-name :person/age] (if :person/id is in the context), but a query for just [:person/age] won’t invoke any of the logic for the person-name-resolver.

3.3.2. Single Inputs — Establishing Context

So far we have seen how to define a resolver that can work as long as the inputs are already in the environment. You’re almost certainly wondering how to do that.

One way is to define global resolvers and start the query from them, but very often you’d just like to be able to say "I’d like the first name of person with id 42."

EQL uses "idents" to specify exactly that sort of query:

[{[:person/id 42] [:person/first-name]}]

The above is a join on an ident, and the expected result is a map with the ident as a key:

{[:person/id 42] {:person/first-name "Joe"}}

The query itself has everything you need to establish the context for running the person-resolver, and in fact that is how Pathom single-input resolvers work.

If you use an ident in a query then Pathom is smart enough to know that it can use that ident to establish the context for finding resolvers. In other words, in the query above the ident [:person/id 42] is turned into the parsing context {:person/id 42}, which satisfies the input of any resolver that needs :person/id to run.
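
For example, with the person-resolver defined earlier, an ident join is enough to establish the context. This is a sketch; the parser setup and the my-db value are assumed:

```clojure
;; the env map is merged into the parsing environment, supplying :database;
;; the ident [:person/id 42] supplies the resolver's input
(parser {:database my-db}
        [{[:person/id 42] [:person/first-name :person/age]}])
;; possible result:
;; {[:person/id 42] {:person/first-name "Joe", :person/age 33}}
```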

3.3.3. Resolver Without Input — Global Resolver

A resolver that requires no input can output its results at any point in the graph; thus it is really a global resolver. Pay particular attention to the qualification: any point in the graph, not just the root. Thus, a resolver without inputs can "inject" its outputs into any level of the query graph result.

We’re going to start building a parser that can satisfy queries about a music store. So, we’ll start with a global resolver that can resolve the "latest product". The code below shows the entire code needed, boilerplate and all:

link:../docs-src/modules/ROOT/examples/com/wsscode/pathom/book/connect/getting_started.cljs[role=include]
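
Since the included file isn't rendered here, the following is a rough sketch of what such a global resolver looks like (the product data is made up for illustration):

```clojure
(ns com.wsscode.pathom.book.connect.getting-started
  (:require [com.wsscode.pathom.connect :as pc]))

;; no ::pc/input, so this resolver can supply ::latest-product
;; at any point in the query graph
(pc/defresolver latest-product [_ _]
  {::pc/output [{::latest-product [:product/id :product/title :product/price]}]}
  {::latest-product {:product/id    1
                     :product/title "Acoustic Guitar"
                     :product/price 450.0}})
```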

Our first resolver exposes the attribute ::latest-product, and since it doesn't require any input it is a global resolver. Also, note that our output description includes the full output details (including nested attributes); this is mostly useful for auto-complete in UIs and automatic testing. If you return extra data it will still end up in the output context.

Try some of these queries on the demo below:

[::latest-product]
[{::latest-product [:product/title]}]

; ::latest-product can be requested anywhere
[{::latest-product
  [* ::latest-product]}]
[::latest-product]

3.3.4. Resolvers with input

Next, let’s say we want to have a new attribute which is the brand of the product. Of course, we could just throw the data there in our other resolver, but the real power of connect comes out when we start splitting the responsibilities among resolvers, so let’s define a resolver for brand that requires an input of :product/id:

link:../docs-src/modules/ROOT/examples/com/wsscode/pathom/book/connect/getting_started2.cljs[role=include]
[{::latest-product [:product/title :product/brand]}]
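
The included example isn't rendered here; a minimal sketch of such a resolver could look like this (the data is made up):

```clojure
;; hypothetical data: product 1 is a Taylor guitar
(def products {1 {:product/brand "Taylor"}})

;; requires :product/id in the context, exposes :product/brand
(pc/defresolver product-brand [_ {:keys [product/id]}]
  {::pc/input  #{:product/id}
   ::pc/output [:product/brand]}
  (select-keys (get products id) [:product/brand]))
```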

The input is a set containing the keys required on the current entity in the parsing context for the resolver to be able to work. This is where Connect starts to shine, because any time your query asks for a bit of data it will try to figure out how to satisfy that request based on the attributes that the current contextual entity already has.

More importantly: Connect will explore the dependency graph in order to resolve things if it needs to! To illustrate this let’s pretend we have some external ID for the brand, and that we can derive this ID from the brand string, pretty much just another mapping:

;; a silly pretend lookup
(def brand->id {"Taylor" 44151})

(pc/defresolver brand-id-from-name [_ {:keys [product/brand]}]
  {::pc/input #{:product/brand}
   ::pc/output [:product/brand-id]}
  {:product/brand-id (get brand->id brand)})

(comment
  (parser {} [{::latest-product [:product/title :product/brand-id]}])
  ; => #::{:latest-product #:product{:title "Acoustic Guitar", :brand-id 44151}}
)

Note that our query never said anything about the :product/brand. Connect automatically walked the path :product/id → :product/brand → :product/brand-id to obtain the information desired by the query!

When a required attribute is not present in the current entity, Connect will calculate the possible paths from the data you have to the data you request, analyzing resolver inputs and recursively walking backwards towards the known data in the context. It then uses a heuristic to decide which path to take and walks that path to reach the data. If there is no possible path, the connect reader returns ::p/continue to let another reader try to handle that key. You can read more about how this works in the Index page.

Also remember that single-input resolvers can handle ident-based queries. Thus, the following ident-join queries already work without having to define anything else:

(parser {} [{[:product/id 1] [:product/brand]}])
; => {[:product/id 1] #:product{:brand "Taylor"}}

(parser {} [{[:product/brand "Taylor"] [:product/brand-id]}])
; => {[:product/brand "Taylor"] #:product{:brand-id 44151}}

3.3.5. Multiple inputs

The input to a resolver is a set, and as such you can require more than one thing as input to your resolvers. When doing so, of course, your resolver function will receive all of the inputs requested; however, this also means that the parsing context needs to contain them, or there must exist other resolvers that can use what’s in the context to fill them in.

As you have seen before, the only way to provide ad-hoc information to connect is using the ident query, but in the ident itself you can only provide one attribute at a time.

Since version 2.2.0-beta11 the ident readers from connect (ident-reader and open-ident-reader) support adding extra context to the query using parameters. So let's say you want to load some customer data, but you want to provide some base information you already have to reduce the number of resolvers called; you can issue a query like this:

[{([:customer/id 123] {:pathom/context {:customer/first-name "Foo" :customer/last-name "Bar"}})
  [:customer/full-name]}]
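
For that query to work, a resolver taking both names as input could look like this sketch:

```clojure
;; multiple inputs: both attributes must be available in the context
;; (here they are provided directly via :pathom/context in the ident params)
(pc/defresolver customer-full-name [_ {:customer/keys [first-name last-name]}]
  {::pc/input  #{:customer/first-name :customer/last-name}
   ::pc/output [:customer/full-name]}
  {:customer/full-name (str first-name " " last-name)})
```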

3.3.6. Parameters

Parameters enable another dimension of information to be added to the request. Params have different semantics from inputs: inputs are dependencies, while params are more like options. In practice the main difference is that inputs are something Pathom will try to look up and make available, while parameters must always be provided at query time; there is no auto resolution. Common cases for parameters are pagination, sorting, and filtering.

Let's write a resolver that outputs a key with a sequence, and that can take a parameter to sort the resulting list by a user-provided attribute.

(pc/defresolver instruments-list [env _]
  {::pc/output [{::instruments [:instrument/id :instrument/brand
                                :instrument/type :instrument/price]}]}
  (let [{:keys [sort]} (-> env :ast :params)] ; (1)
    {::instruments (cond->> instruments
                     (keyword? sort) (sort-by sort))}))
(1) Pull the parameters from the environment

Then we can run queries like:

[(::instruments {:sort :instrument/brand})]
[(::instruments {:sort :instrument/price})]
[(::instruments {:sort :instrument/type})]

; params with join

[{(::instruments {:sort :instrument/price})
  [:instrument/id
   :instrument/brand]}]

Try it out:

[(::instruments {:sort :instrument/price})]

Note: If you are calling the parser directly, be sure to quote your query when using parameters like so:

(parser {} '[(::instruments {:sort :instrument/brand})])
; => {::instruments
      ({:instrument/id 4,
        :instrument/brand "Cassio",
        :instrument/type :instrument.type/piano,
        :instrument/price 160}
       {:instrument/id 1,
        :instrument/brand "Fender",
        :instrument/type :instrument.type/guitar,
        :instrument/price 300}
        ...

3.3.7. N+1 Queries and Batch resolvers

When you have a to-many relation that is being resolved by a parser you will typically end up with a single query that finds the "IDs", and then N more queries to fill in the details of each item in the sequence. This is known as the N+1 problem, and can be a source of significant performance problems.

To solve this problem, instead of running a resolver once for each item in the list, we can send all the inputs as a sequence, so the resolver can use an optimized implementation to handle multiple items. When this happens we call it a batch resolver. For example, let's take a look at the following demo:

link:../docs-src/modules/ROOT/examples/com/wsscode/pathom/book/connect/batch.cljs[role=include]
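
The included demo isn't rendered here; the serial (non-batch) resolver it demonstrates is roughly this sketch (go and timeout from core.async, list resolver omitted):

```clojure
;; called once per item: N items take ~N seconds
(pc/defresolver slow-resolver [_ {:keys [number]}]
  {::pc/input  #{:number}
   ::pc/output [:number-added]}
  (go
    (async/<! (async/timeout 1000))
    {:number-added (inc number)}))
```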

Try the demo:

[{:items [:number-added]}]

This demo is using Pathom async parsers. The resolvers in async parsers can return channels that (eventually) resolve to the result, which is why you see go blocks in the code. See Async Parsing for more details. We use them in this demo so we can "sleep" in a JavaScript environment to mimic overhead in processing. In the rest of the book we recommend using the parallel parser; the reason to use the async parser in this demo is that it more easily demonstrates the N+1 issue.

You can see in the tracer that it took one second for each entry (a clear cascade), because it had to call the :number-added resolver once for each item.

We can improve that by turning this into a batch resolver, like this:

link:../docs-src/modules/ROOT/examples/com/wsscode/pathom/book/connect/batch2.cljs[role=include]
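
Again, the include isn't rendered; a sketch of the batch version follows. With ::pc/batch? true the input may arrive as a single map or as a sequence of maps:

```clojure
(pc/defresolver slow-resolver [_ input]
  {::pc/input  #{:number}
   ::pc/output [:number-added]
   ::pc/batch? true}
  (go
    (async/<! (async/timeout 1000))
    (if (sequential? input)
      ;; batch case: one call handles every item
      (mapv (fn [v] {:number-added (inc (:number v))}) input)
      ;; single case: keeps compatibility with the regular resolver API
      {:number-added (inc (:number input))})))
```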

Try the demo:

[{:items [:number-added]}]

Note that this time the sleep of one second only happened once. This is because when Pathom is processing a list and the resolver supports batching, the resolver gets all the inputs in a single call, so your batch resolver can handle all the items in a single iteration. The results are cached back for each entry, which makes the other items hit the cache instead of calling the resolver again.

Batch transforms

Starting with version 2.2.0, Pathom adds some helpers to facilitate the creation of batch resolvers using Pathom transform facilities.

In the previous example we manually detected whether the input was a sequence. The API is made this way so the resolver keeps compatibility with the regular resolver API, but often it is easier if you get a consistent input (always a sequence, for example). We can enforce this using a transform:

(pc/defresolver slow-resolver [_ input]
  {::pc/input     #{:number}
   ::pc/output    [:number-added]
   ; use the transform, note we removed ::pc/batch? true, that's because the transform
   ; will add this for us
   ::pc/transform pc/transform-batch-resolver}
  (go
    (async/<! (async/timeout 1000))
    ; no need to detect sequence, it is always a sequence now
    (mapv (fn [v] {:number-added (inc (:number v))}) input)))

Try the demo:

[{:items [:number-added]}]

Another helper that Pathom provides transforms a serial resolver, which would run one item at a time, into a batch resolver that runs at concurrency n.

(pc/defresolver slow-resolver [_ {:keys [number]}]
  {::pc/input     #{:number}
   ::pc/output    [:number-added]
   ; set auto-batch with concurrency of 10
   ::pc/transform (pc/transform-auto-batch 10)}
  (go
    (async/<! (async/timeout 1000))
    ; dealing with the single case, as in the first example we did on batch
    {:number-added (inc number)}))

Try the demo:

[{:items [:number-added]}]

Note that this time we did call the resolver fn multiple times, but in parallel. How this impacts performance will vary case by case; I suggest giving some thought to the best strategy for each case individually.

Aligning results

Often, when you do a batch request to some service/API, the results won’t come back in the same order as the request, and the counts may not even match if some of the requested items were invalid. To facilitate these cases Pathom provides a helper to correctly sort the results back; for more info check the docs about batch-restore-sort on cljdoc.
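As a rough illustration of the problem this helper solves (the sketch below is not Pathom's batch-restore-sort implementation, and its name is made up), we can index the out-of-order results by a key and rebuild them in input order, falling back to the input itself when a result is missing:

```clojure
;; Illustrative sketch of result re-alignment: index the results by
;; key-fn, then rebuild them in the inputs' original order, using the
;; input itself as a fallback when its result is missing.
(defn restore-order [inputs key-fn results]
  (let [indexed (into {} (map (juxt key-fn identity)) results)]
    (mapv (fn [input] (get indexed (key-fn input) input)) inputs)))
```

Given inputs for ids 1, 2 and 3 and results only for 3 and 1 (in that order), this returns the results in id order with the bare input in place of the missing entry.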

3.4. Connect mutations

Using mutations from Connect gives you some extra leverage by adding the mutation information to the index. This enables auto-complete features in explorer interfaces, and also integrates the mutation result with the Connect read engine.

3.4.1. Mutations setup

The mutation setup looks very much like the one for resolvers: you define them using pc/defmutation and include them in the registry like resolvers.

(ns com.wsscode.pathom.book.connect.mutations
  (:require [com.wsscode.pathom.core :as p]
            [com.wsscode.pathom.connect :as pc]))

(pc/defmutation my-mutation [env params] ...)

(def parser
  (p/parallel-parser
    {::p/env     {::p/reader [p/map-reader
                              pc/parallel-reader
                              pc/open-ident-reader]}
     ::p/mutate  pc/mutate-async
     ::p/plugins [(pc/connect-plugin {::pc/register send-message})
                  p/error-handler-plugin
                  p/request-cache-plugin
                  p/trace-plugin]}))

Now let’s write a mutation with our factory.

3.4.2. Creating mutations

defmutation has the same interface that we used with defresolver.

link:../docs-src/modules/ROOT/examples/com/wsscode/pathom/book/connect/mutations.cljs[role=include]
[(send-message {:message/text "Hello Clojurist!"})]

The ::pc/params is currently a no-op, but in the future it can be used to validate the mutation input; its format is the same as output (considering the input can have a complex data shape). The ::pc/output is valid and can be used for auto-complete information in explorer tools.

Mutation joins

After doing some operation, you might want to read information about the operation result. With connect you can leverage the resolver engine to expand the information that comes from the mutation. To do that you do a mutation join, and use that to query the information. Here is an example where we create a new user and retrieve some server information with the output.

link:../docs-src/modules/ROOT/examples/com/wsscode/pathom/book/connect/mutation_join.cljs[role=include]
[{(user/create {:user/name "Rick Sanches" :user/email "rick@morty.com"}) [:user/id :user/name :user/created-at]}]

Note that although we only return the :user/id from the mutation, the resolvers can walk the graph and fetch the other requested attributes.

Mutation join globals

Some attributes need to be in the output, even when they are not asked for. For example, if your parser is driving a Fulcro app, the :tempid part of the mutation will be required for the app to remap the ids correctly. We could require the user to add it on every remote query, but instead we can define some global attributes that will be read every time, as in this example:

link:../docs-src/modules/ROOT/examples/com/wsscode/pathom/book/connect/mutation_join_globals.cljs[role=include]
[{(user/create {:user/id "TMP_ID" :user/name "Rick Sanches" :user/email "rick@morty.com"}) [:user/id :user/name :user/created-at]}]

So in the case of Fulcro apps you can use :fulcro.client.primitives/tempids as the global and have it pass through.

Mutation output context

Mutation context allows the mutation caller to provide extra data to be used as context information for further processing of the mutation response.

During UI development, sometimes you may want to load some data in response to the mutation, but the mutation output doesn’t have enough context even though the UI does (because it has a much bigger view of the client data). For those cases the UI can send some params to the mutation so they are available for traversing in the mutation response.

To demonstrate this check the following example:

link:../docs-src/modules/ROOT/examples/com/wsscode/pathom/book/connect/mutation_context.cljs[role=include]
[{(user/create {:user/id "TMP_ID" :user/name "Rick Sanches" :user/email "rick@morty.com" :pathom/context {:number/value 123}}) [:number/value :number/value++]}]

One real use case for this feature would be in a Fulcro app, when you send some mutation but the result needs to update some component elsewhere (and the required data is known by the client, but not by the original mutation result).

3.4.3. Async mutations

This section is no longer necessary: the main recommendation now is to use the parallel-parser, which is async. No changes are needed to write async mutations; all you have to know is that you can return channels from your mutations and they will be properly coordinated.

Here is an example of doing some mutation operations using async features.

Example:

link:../docs-src/modules/ROOT/examples/com/wsscode/pathom/book/connect/mutation_async.cljs[role=include]
[{(user/create {:user/id "TMP_ID" :user/name "Rick Sanches" :user/email "rick@morty.com"}) [:user/id :user/name :user/created-at]}]

Using the same query/mutation interface, we replaced the underlying implementation from an atom to an IndexedDB database.

You can do the same to target any type of API you can access.

3.5. Shared resolvers

Since version 2.2.0 Pathom supports describing resolvers as pure maps and registering those maps in your system, making it possible to write easy-to-share resolvers in a library format.

3.5.1. Resolver data format

The map format contains all the information needed for a resolver to run: a symbol to name it, the input, the output, and the lambda to run the computation. This is considered an open map; any extra keys will end up in the index and can be read later.

Here is an example of how you can specify a resolver using the map format:

(def some-resolver
  {::pc/sym     `some-resolver ; this is important! we need to name each resolver, prefer qualified symbols
   ::pc/input   #{:customer/id}
   ::pc/output  [:customer/id :customer/name :customer/email]
   ::pc/resolve (fn [env input] ...)})

It’s very similar to using defresolver; you just add the key ::pc/resolve to define the function that runs it. Note that when you use the defresolver macro you don’t have to provide the ::pc/sym key; it’s added automatically for you.

You can also create using the pc/resolver helper function:

(def some-resolver
  (pc/resolver `some-resolver
    {::pc/input #{:customer/id}
     ::pc/output [:customer/id :customer/name :customer/email]}
    (fn [env input] ...)))

This just returns the same map of the previous example.

And using the final macro helper (recommended way):

(pc/defresolver some-resolver [env input]
  {::pc/input  #{:customer/id}
   ::pc/output [:customer/id :customer/name :customer/email]}
  ...)

3.5.2. Mutation data format

Mutations are similar as well:

(def send-message-mutation
  {::pc/sym    `send-message-mutation
   ::pc/params #{:message/body}
   ::pc/output [:message/id :message/body :message/created-at]
   ::pc/mutate (fn [env params] ...)})

As you can see, it’s very similar to the resolver map format; for mutations you add the key ::pc/mutate to define the function that runs it.

Using the helper:

(def send-message-mutation
  (pc/mutation `send-message-mutation
    {::pc/params #{:message/body}
     ::pc/output [:message/id :message/body :message/created-at]}
    (fn [env params] ...)))

And using the final macro helper (recommended way):

(pc/defmutation send-message-mutation [env params]
  {::pc/params #{:message/body}
   ::pc/output [:message/id :message/body :message/created-at]}
  ...)

Like resolvers, mutations must be included in the registry to be available.

3.5.3. Using ::pc/transform

Sometimes it can be interesting to wrap the resolver/mutation function with some generic operation to augment its data or operations. For example, imagine you want some mutations to run in a transaction context:

(pc/defmutation create-user [env user]
  {::pc/sym    'myapp.user/create
   ::pc/params [:user/id :user/name]}
  (db/run-transaction!
    (fn []
      (db.user/create! env user))))

We could use a transform to clean this up:

(defn transform-db-tx [{::pc/keys [mutate] :as mutation}]
  (assoc mutation
    ::pc/mutate
    (fn [env params]
      (db/run-transaction! env #(mutate env params)))))

(pc/defmutation create-user [env user]
  {::pc/sym       'myapp.user/create
   ::pc/params    [:user/id :user/name]
   ::pc/transform transform-db-tx}
  (db.user/create! env user))

Note that ::pc/transform receives the full resolver/mutation map and returns the final version, it can modify anything about the entry.
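For instance, the note above can be illustrated with a small logging transform (illustrative only, not part of Pathom; the keywords are spelled out fully so the sketch runs without the Pathom dependency on the classpath):

```clojure
;; Illustrative sketch: a transform that wraps a resolver's resolve fn
;; to log each call before delegating to the original function.
(defn transform-logged [{:com.wsscode.pathom.connect/keys [resolve] :as resolver}]
  (assoc resolver
    :com.wsscode.pathom.connect/resolve
    (fn [env input]
      (println "calling" (:com.wsscode.pathom.connect/sym resolver))
      (resolve env input))))
```

Applying it returns the same resolver map with ::pc/resolve replaced by the wrapped function, which behaves like the original plus one log line per call.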

For another example check the built-in batch transformations.

3.5.4. Using register

Once you have your maps ready, you can register them into the index using Connect's register function.

(-> {}
    ; register the resolver we created previously
    (pc/register some-resolver)

    ; same method works for mutations
    (pc/register send-message-mutation)

    ; you can also send collections to register many at once
    (pc/register [some-resolver send-message-mutation])

    ; collections will be recursively processed, so this is valid too:
    (pc/register [some-resolver [send-message-mutation]]))

; in the end the index will have the combined information of all the resolvers and mutations
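A sketch of the recursive collection handling mentioned above (illustrative, not the pc/register implementation): any nesting of vectors is flattened down to the individual operation maps before they are indexed.

```clojure
;; Illustrative sketch: flatten arbitrarily nested collections of
;; resolver/mutation maps into a flat vector of operation maps.
(defn flatten-operations [op-or-ops]
  (if (map? op-or-ops)
    [op-or-ops]
    (into [] (mapcat flatten-operations) op-or-ops)))
```

For example, `(flatten-operations [{:a 1} [{:b 2}]])` returns `[{:a 1} {:b 2}]`.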

If you are a library author, consider defining each resolver/mutation as its own var, and then create another var holding a vector that combines your features. This makes it easy for your users to just grab the vector, while still allowing them to cherry-pick which operations to pull if they don’t want them all.

3.5.5. Plugins with resolvers

It’s also possible for plugins to declare resolvers and mutations so they get installed when the plugin is used. To do that, your plugin must provide the ::pc/register key in the plugin map. You also need to use pc/connect-plugin, which performs the installation. Here is an example:

...

(def my-plugin-with-resolvers
 {::pc/register [some-resolver send-message-mutation]})

(def parser
  (p/parser {::p/env     (fn [env]
                           (merge
                             {::p/reader [p/map-reader pc/reader pc/open-ident-reader]}
                             env))
             ::p/mutate  pc/mutate-async
             ::p/plugins [(pc/connect-plugin) ; make sure connect-plugin is here, its order doesn't matter
                          my-plugin-with-resolvers]}))

And that’s it, the resolvers will be registered right after the parser is defined.

3.6. Using a thread pool for parallel resolvers

When you run Pathom in Clojure with the parallel Connect, the resolver functions run inside core.async go blocks. In CLJS most IO is done async, making this a non-issue, but if you are in a Java environment doing blocking IO, this means the code is doing IO inside go blocks, which is a no-no.

If you can switch to a library that does async IO, that’s the best option; but if you can’t or won’t right now, Pathom provides a thread pool helper so you can tell the engine to run the resolvers there and avoid blocking the go blocks.

Here is an example of how to set up a thread pool (clj only!):

(def parser
  (p/parallel-parser
    {::p/env     {::p/reader               [p/map-reader
                                            pc/parallel-reader
                                            pc/open-ident-reader
                                            p/env-placeholder-reader]
                  ; setup the thread pool
                  ::pc/thread-pool         (pc/create-thread-pool (async/chan 200))
                  ::p/placeholder-prefixes #{">"}}
     ::p/mutate  pc/mutate-async
     ::p/plugins [(pc/connect-plugin {::pc/register []})
                  p/error-handler-plugin
                  p/request-cache-plugin
                  p/trace-plugin]}))

3.7. Advanced Connect Details

3.7.1. Connect readers

pc/parallel-reader

The parallel reader from Connect is implemented to work with the parallel-parser. This reader is capable of detecting attribute dependencies, executing multiple resolvers in parallel, and coordinating the returns, including backtracking for secondary paths. Here is how it works:

Getting back to the basic idea of Connect (expanding information from a context), let’s illustrate this case with the following set of resolvers:

(pc/defresolver movie-details [env input]
  {::pc/input  #{:movie/id}
   ::pc/output [:movie/id :movie/title :movie/release-date]}
  ...)

(pc/defresolver movie-rating [env input]
  {::pc/input  #{:movie/id}
   ::pc/output [:movie/rating]}
  ...)

(pc/defresolver movie-title-prefixed [env input]
  {::pc/input  #{:movie/title}
   ::pc/output [:movie/title-prefixed]}
  ...)

Note that we have two resolvers that depend on a :movie/id and one that depends on :movie/title.

Now given the query: [{[:movie/id 42] [:movie/title-prefixed]}]

First we use the ident query to create the context with a :movie/id; then, for the attribute :movie/title-prefixed, the parallel-reader is invoked. The first thing the reader does is compute a plan to reach the attribute considering the data it has now. It does this by recursively iterating over the ::pc/index-oir until it reaches some available dependency or gives up because there is no possible path.

In most cases (especially for small APIs) there will be only a single path, and this is the case for our example; the result of pc/compute-path is this:

#{[[:movie/title `movie-details] [:movie/title-prefixed `movie-title-prefixed]]}

The format returned by pc/compute-path is a set of paths. Each path is a vector of tuples; each tuple contains the attribute reason (why that resolver is being called) and the resolver symbol that will be used to fetch that attribute. This traces the path from the available data to the requested attribute; this is the plan.

For details on the path selection algorithm in cases of multiple options check the paths selection section.

Ok, now let’s see how it behaves when you have multiple attributes to process. This is the new query; this time let’s try using the interactive parser. Run the query and check in the tracing how it goes (I added a 100ms delay to each resolver call so it’s easier to see):

[{[:movie/id 42] [:movie/id :movie/title :movie/release-date :movie/rating :movie/title-prefixed]}]
Try changing the order of the attributes and see what happens. For example, if you put :movie/title-prefixed at the start, you will see this attribute become responsible for fetching both the title and itself.

This is what’s happening for each attribute:

:movie/id: this data is already in the entity context, this means it will be read from memory and will not even invoke the parallel reader

:movie/title: this attribute is not in the entity, so the reader creates the plan to call movie-details. From this plan we can also compute all the attributes that will be incorporated in the call chain (by combining the outputs of all the resolvers in the path); we store this information as a waiting list. The waiting list in this case is: [:movie/id :movie/title :movie/release-date]. The processing of attributes continues in parallel while the resolver is called.

:movie/release-date: this attribute is not in the entity, but it is in the waiting list, so the parser will ignore it for now and skip to the next one.

:movie/rating: this attribute is neither in the entity nor in the waiting list, so we can call the resolver for it immediately, and the plan output ([:movie/rating]) is appended to the waiting list.

:movie/title-prefixed: like the rating, this is not in the entity or the waiting list, so we compute the plan and execute it; the plan is again:

#{[[:movie/title `movie-details] [:movie/title-prefixed `movie-title-prefixed]]}

But movie-details is already running because of :movie/title. When the parallel-reader calls a resolver, it immediately caches it as a promise channel in the request cache, so when we hit the same resolver with the same input, we hit the cache and get a hold of the promise channel. The process continues normally with only one actual call to the resolver, but two listeners on the promise channel (and any later cache hit would reach this same promise channel). This is how the data fetch is coordinated across the attributes. Placeholder nodes are also supported and optimized to avoid repeated calls to resolvers.

Another difference is in the processing of sequences: the parallel parser uses a core.async pipeline to process each sequence with a concurrency of 10.

Path selection

When there are multiple possible paths, Pathom has to decide which path to take. The current implementation chooses the path with the lowest weight, calculated in this way:

  1. Every resolver starts with weight 1 (this is recorded per instance)

  2. Once a resolver is called, its execution time is recorded and updated in the map using the formula: new-value = (old-value + last-time) * 0.5

  3. If a resolver call throws an exception, its weight is doubled

  4. Every time we mention some resolver in a path calculation its weight is reduced by one.
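These rules can be sketched as plain functions over a weights map keyed by resolver symbol (illustrative only, not Pathom's implementation; the function names are made up):

```clojure
;; Illustrative sketch of the weight rules above.
;; `weights` is a map of resolver symbol -> weight; absent means 1 (rule 1).
(defn record-call [weights sym last-time]
  ;; rule 2: new-value = (old-value + last-time) * 0.5
  (update weights sym (fnil #(* (+ % last-time) 0.5) 1)))

(defn record-error [weights sym]
  ;; rule 3: an exception doubles the resolver's weight
  (update weights sym (fnil #(* % 2) 1)))

(defn mention-in-plan [weights sym]
  ;; rule 4: each mention in a path calculation reduces the weight by one
  (update weights sym (fnil dec 1)))
```

For example, `(record-call {} 'my-resolver 10)` yields a weight of 5.5, since the resolver starts at 1 and (1 + 10) * 0.5 = 5.5.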

If you’d like to do your own sorting of the plan, you can set the key ::pc/sort-plan in your environment, and Pathom will call this function to sort the results; it takes the environment and the plan (which is a set, as demonstrated in the previous section).

pc/reader2

This reader leverages some techniques that were developed during the creation of the parallel reader, such as path choosing and backtracking.

pc/async-reader2

Like pc/reader2 but knows how to handle async processing inside.

pc/reader [DEPRECATED]

DEPRECATED: use pc/reader2 instead

The main Connect reader. This will look up the attribute in the index and try to resolve it, recursively if necessary.

pc/async-reader [DEPRECATED]

DEPRECATED: use pc/async-reader2 instead

Like pc/reader but knows how to handle async processing inside.

pc/ident-reader

The ident-reader is used to resolve ident-based queries by establishing an initial context from the ident. When an ident query reaches this reader, it will check the index to see if the ident key is present in the indexed idents.

Since version 2.2.0-beta11 this reader also supports extra context provision using the param :pathom/context; here is how to send extra data to it:

[{([:user/id 123] {:pathom/context {:other/data 123}})
  [:user/id :user/name :other/data]}]
pc/open-ident-reader

Like ident-reader, but not constrained to the indexed idents, this will create a context from any ident.

pc/index-reader

This reader exposes the index itself with the name ::pc/indexes.

3.7.2. Understanding the indexes

Connect maintains a few indexes containing information about the resolvers and the relationships between attributes. Connect looks up the index in the environment, at the key :com.wsscode.pathom.connect/indexes, which is a map containing the indexes.

In order to explain the different indexes we’ll look at the index generated by our example in the getting started section; each piece of the index is listed with its explanation.

index-resolvers
::pc/index-resolvers
{get-started/latest-product
 {::pc/sym     get-started/latest-product
  ::pc/input   #{}
  ::pc/output  [{::get-started/latest-product [:product/id
                                               :product/title
                                               :product/price]}]
  ::pc/resolve (fn ...)}

 get-started/product-brand
 {::pc/sym     get-started/product-brand
  ::pc/input   #{:product/id}
  ::pc/output  [:product/brand]
  ::pc/resolve (fn ...)}

 get-started/brand-id-from-name
 {::pc/sym     get-started/brand-id-from-name
  ::pc/input   #{:product/brand}
  ::pc/output  [:product/brand-id]
  ::pc/resolve (fn ...)}}

This is a raw index of available resolvers; it’s a map of resolver-sym → resolver-data. resolver-data is any relevant information that you want to add about that resolver. Any key that you add during pc/add will end up in this map; Connect will also add the key ::pc/sym automatically, which is the same symbol you registered. If you want to access the data for a resolver, Connect provides a helper function for that:

(pc/resolver-data env-or-indexes `product-brand)
; => {::pc/sym     get-started/product-brand
;     ::pc/input   #{:product/id}
;     ::pc/output  [:product/brand]
;     ::pc/resolve (fn ...)}
index-mutations

This index contains the mutation definitions; it’s similar to index-resolvers.

index-oir
::pc/index-oir
{:get-started/latest-product {#{} #{get-started/latest-product}}
 :product/brand              {#{:product/id} #{get-started/product-brand}}
 :product/brand-id           {#{:product/brand} #{get-started/brand-id-from-name}}}

This index stands for output → input → resolver. It’s the index used by the Connect reader to look up attributes. This index is built by looking at the input/output of the resolver when you add it. It will save that resolver as a path to each output attribute, given that input. It basically inverts the order of things: it keys the output attribute to all of the potential "starting points".

Let’s do an exercise and see how Connect traverses this index in a practical example:

Given we have this index (oir):

link:../docs-src/modules/ROOT/examples/com/wsscode/pathom/book/connect/index_oir_example.cljc[role=include]

Now if you try to run the query:

[:name]

We look up :name in the index and get {#{:id} #{thing-by-id}}; now we try to match the current entity's attribute keys against the input sets to see if we have enough data to call any of the resolvers. If we don’t, it fails because we don’t have enough data.

[{[:id 123] [:name]}]

So, if we start with an ident, our initial context is {:id 123}. This time we have the :id, so it will match with the input set #{:id}, and will call the resolver thing-by-id with that input to figure out the name. Connect uses atom entities: when it gets the return value from the resolver it merges it back into the context entities, making all data returned from the resolver available to access new attributes as needed.
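The matching step described above can be sketched like this (illustrative; the real reader also recurses through the index to satisfy missing inputs):

```clojure
;; Illustrative sketch of the index-oir lookup: pick the resolvers whose
;; input set is fully covered by the attributes already in the entity.
(def index-oir
  '{:name {#{:id} #{thing-by-id}}})

(defn callable-resolvers [index-oir entity attr]
  (let [available (set (keys entity))]
    (into #{}
          (mapcat (fn [[input resolvers]]
                    (when (every? available input)
                      resolvers)))
          (get index-oir attr))))
```

With the entity {:id 123}, `(callable-resolvers index-oir {:id 123} :name)` returns `#{thing-by-id}`, while an empty entity yields an empty set, matching the failure case above.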

index-io
::pc/index-io
{#{}               {:get-started/latest-product #:product{:id {} :title {} :price {}}}
 #{:product/id}    {:product/brand {}}
 #{:product/brand} {:product/brand-id {}}}

The auto-complete index, input → output. This index accumulates the reach of each single attribute in the index. By walking this information we can know ahead of time all attribute possibilities we can fetch from a given attribute.

If I have a :product/id, what can I reach from it? Looking at the index, the :product/id itself can provide the :product/brand. But if I have access to :product/brand it means I also have access to whatever :product/brand can provide. By doing multiple iterations (until there are no new attributes) we end up knowing that :product/id can provide the attributes :product/brand and :product/brand-id. And this is how autocomplete is done via the index-io.
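The "multiple iterations" described above amount to a fixed-point computation. A sketch over a simplified version of the example index (illustrative, not Pathom's implementation; the nested subquery of latest-product is elided):

```clojure
;; Illustrative sketch: starting from a set of known attributes, keep
;; adding every attribute whose input set is already satisfied, until
;; nothing new appears (a fixed point).
(def index-io
  {#{}               {:get-started/latest-product {}}
   #{:product/id}    {:product/brand {}}
   #{:product/brand} {:product/brand-id {}}})

(defn reachable [index-io attrs]
  (let [next-attrs (into attrs
                         (mapcat (fn [[input outputs]]
                                   (when (every? attrs input)
                                     (keys outputs))))
                         index-io)]
    (if (= next-attrs attrs)
      attrs
      (recur index-io next-attrs))))
```

Here `(reachable index-io #{:product/id})` includes both :product/brand and :product/brand-id, matching the reasoning above.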

index-attributes
::pc/index-attributes
{#{}
 {::pc/attribute     #{}

  ::pc/attr-provides {::get-started/latest-product
                      #{get-started/latest-product}

                      [::get-started/latest-product :product/id]
                      #{get-started/latest-product}

                      [::get-started/latest-product :product/title]
                      #{get-started/latest-product}

                      [::get-started/latest-product :product/price]
                      #{get-started/latest-product}}

  ::pc/attr-input-in #{get-started/latest-product}}

 ::get-started/latest-product
 {::pc/attribute      ::get-started/latest-product
  ::pc/attr-reach-via {#{} #{get-started/latest-product}}
  ::pc/attr-output-in #{get-started/latest-product}
  ::pc/attr-branch-in #{get-started/latest-product}}

 :product/id
 {::pc/attribute      :product/id
  ::pc/attr-reach-via {[#{} ::get-started/latest-product] #{get-started/latest-product}}
  ::pc/attr-output-in #{get-started/latest-product}
  ::pc/attr-leaf-in   #{get-started/latest-product}
  ::pc/attr-provides  {:product/brand #{get-started/product-brand}}
  ::pc/attr-input-in  #{get-started/product-brand}}

 :product/title
 {::pc/attribute      :product/title
  ::pc/attr-reach-via {[#{} ::get-started/latest-product] #{get-started/latest-product}}
  ::pc/attr-output-in #{get-started/latest-product}
  ::pc/attr-leaf-in   #{get-started/latest-product}}

 :product/price
 {::pc/attribute      :product/price
  ::pc/attr-reach-via {[#{} ::get-started/latest-product] #{get-started/latest-product}}
  ::pc/attr-output-in #{get-started/latest-product}
  ::pc/attr-leaf-in   #{get-started/latest-product}}

 :product/brand
 {::pc/attribute      :product/brand
  ::pc/attr-reach-via {#{:product/id} #{get-started/product-brand}}
  ::pc/attr-output-in #{get-started/product-brand}
  ::pc/attr-leaf-in   #{get-started/product-brand}
  ::pc/attr-provides  {:product/brand-id #{get-started/brand-id-from-name}}
  ::pc/attr-input-in  #{get-started/brand-id-from-name}}

 :product/brand-id
 {::pc/attribute      :product/brand-id
  ::pc/attr-reach-via {#{:product/brand} #{get-started/brand-id-from-name}}
  ::pc/attr-output-in #{get-started/brand-id-from-name}
  ::pc/attr-leaf-in   #{get-started/brand-id-from-name}}}

Added in Pathom 2.2.13, this index contains detailed information about the system attributes and their connections. This index is intended to be used by tools to provide extra information for the user.

This index has a key for each attribute; multiple inputs are present as sets, and there is also the special #{} that represents globals (things without input).

Each value is a map with the ::pc/attribute key, which is the attribute itself, and may have one or more of these keys:

::pc/attr-input-in - a set containing the symbols of the resolvers where this attribute appears as an input

::pc/attr-output-in - a set containing the symbols of the resolvers where this attribute appears as an output

::pc/attr-provides - a map telling what attributes can be provided given the current attribute as a base; this only considers direct outputs (those that can be reached in a single resolver call). For each map entry, the key can be either a keyword (in case the output is provided at the same entity level as the input) or a vector (telling the path to reach that attribute). The map entry value is a set containing the resolvers available to traverse that path.

::pc/attr-reach-via - a map telling what attributes can be used to reach the current attribute. For each map entry, the key can be either a set (in case the input is provided at the same entity level as the current attribute) or a vector (telling the path to provide the given attribute). The map entry value is a set containing the resolvers available to traverse that path.

::pc/attr-leaf-in - a set containing the resolvers where this attribute appears as a leaf, meaning it has no subquery.

::pc/attr-branch-in - a set containing the resolvers where this attribute appears as a branch, meaning it has a subquery.

An attribute should never be a leaf and a branch at the same time. Being a branch means the attribute value is expected to be a map or a vector of maps; if it later appears as a leaf, the data is wrong or the specification is incomplete, which is a sign that something is mismatched.
idents
::pc/idents
#{:product/brand :product/id}

The idents index contains information about which single attributes can be used to access some information. This index is used by ident-reader and by OgE to provide auto-complete options for idents. Any time you add a resolver that has a single input, that input attribute is added to the idents index.

autocomplete-ignore

This index is for a more advanced usage. Currently it’s only used by the GraphQL integration. In the GraphQL integration we leverage the fact that types have a fixed set of attributes and add that into the index. The problem is that the types themselves are not valid entries for the query, so autocomplete-ignore is a way to have those ignored in the auto-complete. You probably only need this if you are building the index in some custom way.

Merging indexes

Indexes can be merged, use ::pc/merge-indexes to add one index on top of the other.

Each index may have different semantics for merging; Pathom uses the multimethod pc/index-merger, and you can add extra implementations to this method to get custom index merging (in case you are building some unique index for your system).

3.8. Exploration with Pathom Viz

A happy and growing index can get hard to tame, and that’s why the Index Explorer is here to help you.

The index explorer requires Pathom version 2.2.13+

The index explorer is a tool to help you navigate and understand the relationships between the attributes in your system.

Better show than tell, here is a demonstration of the index explorer:

Don’t worry if you got confused by all the information; in the next sections we are going to drill down and explain each part of the explorer.

3.8.1. Explorer Menu

The menu is always visible in the left bar; you can use it to find attributes, resolvers, and mutations. By default it shows a complete index of things; you can click on the grey headers to collapse a group. Try it out in the demo above.

There is a search input at the top of the menu; it does a fuzzy search on everything.

3.8.2. Stats

The first screen you see in the index contains some main stats about the index.

In the counters section, here is an explanation of some non-obvious counters:

  • globals count: the number of accessible attributes that don’t depend on any data

  • idents count: number of attributes that by themselves can provide more data

  • edges count: the number of edges connecting the attributes in the system

The most connected attributes section gives you a top list of attributes with the most connections; a few attributes in a system tend to rise to the top of this list and can point to effective "hubs" at the center of your data.

3.8.3. Attribute View

When you navigate to an attribute you will be at the attribute view. This view can tell you details about a single attribute.

Graph View

Right after the title there is a Graph View button; this gives you a visual representation of the attribute and its connections.

This graph is dense with information, so to explain it let’s start with a simple graph with a single resolver that can read a user name from a user id:

; registry
[{::pc/sym    'user-by-id
  ::pc/input  #{:user/id}
  ::pc/output [:user/name]}]

The following graph represents the attribute :user/id from this system:

Base elements

Let’s start by the graph itself, these are the basic elements:

Circles represent attributes; yellow marks the current attribute, which is :user/id in this case.

Lines represent resolvers (the effective edges), showing how the resolver inputs connect to the resolver outputs; note the arrow points from :user/id to :user/name, meaning :user/id provides :user/name.

Available controls:

  • click and drag on canvas - pan canvas

  • mouse scroll - zoom

  • click and drag circles - rearrange nodes

  • mouse over circles - highlight attribute

  • mouse over lines - highlight resolver

When you highlight some element, you can see a label for it in the top left corner. The edges get a highlight color as well: when highlighting an attribute, a green edge means it goes from the current attribute to the target, and red edges are the reverse.

When you highlight an edge it will turn blue and every other occurrence of that same resolver will highlight as well.

Let’s add more attributes for a bigger view:

; registry
[{::pc/sym    'user-by-id
  ::pc/input  [:user/id]
  ::pc/output [:user/name
               :user/email
               :user/dob
               :twitter/url]}]

In this example notice there is one circle with a different stroke color. The stroke color represents the namespace; this way you can spot related namespaces by color.

The color palette for namespaces contains 10 colors, so if you end up with a graph containing more than 10 namespaces they will start repeating colors.

Time to make it more fun, let’s add a second resolver to fetch user data from email:

; registry

[{::pc/sym    'user-by-id
  ::pc/input  #{:user/id}
  ::pc/output [:user/name
               :user/email
               :user/dob
               :twitter/url]}

 {::pc/sym    'user-by-email
  ::pc/input  #{:user/email}
  ::pc/output [:user/name
               :user/id
               :user/dob
               :twitter/url]}]
Nested connections

So far we have only seen direct connections, meaning the values are in the same "context space". The other option is nested connections; here is an example:

; new resolver
{::pc/sym    'user-groups
 ::pc/input  #{:user/id}
 ::pc/output [{:user/groups
               [:group/id :group/name]}]}

Note the attributes :group/id and :group/name are not visible in this graph; that’s because they are an indirect connection. Use the Nested Outputs control to toggle nested outputs and they should show up. Note we represent nested connections using dashed lines.

When we have a chain of many direct connections, Pathom can walk any number of paths automatically, but due to ambiguity that’s not true for nested connections.

Let’s see this same graph again, but this time the center will be :group/id:

Not much, right? Well, there are no direct connections to this attribute. Try turning on Nested Inputs; this will make the connection between :group/id and :user/id visible.

Now try increasing the Depth, this number indicates how many steps to walk from the center attribute, increasing the reach.

To finish up you can also mark Nested Outputs; this should end up similar to the graph we had before centered on :user/id (considering Nested Outputs on).

Attribute Sizes

You may have noticed that the circles don’t all have the same size; that’s because it’s another point of information. Let’s get a clear example of that:

; registry
{::index
 [{::pc/sym    'user-by-id
   ::pc/input  #{:user/id}
   ::pc/output [:user/name
                :user/email
                :user/dob
                :twitter/url
                :youtube/url
                :linked-in/url
                :user/attr1
                :user/attr2
                :user/attr3
                :user/attr4
                :user/attr5]}

  {::pc/sym    'email-by-twitter
   ::pc/input  #{:twitter/url}
   ::pc/output [:user/email]}

  {::pc/sym    'email-by-youtube
   ::pc/input  #{:youtube/url}
   ::pc/output [:user/email]}

  {::pc/sym    'email-by-linkedin
   ::pc/input  #{:linked-in/url}
   ::pc/output [:user/email]}]}

The size of the attribute inner circle represents the number of attributes it provides, while the stroke size depends on how many attributes can be used to reach it. Notice the center attribute :user/id has the inner circle bigger than any other while :user/email has the biggest stroke size.

The sizes grow on a quadratic scale, so the difference can be hard to notice in small demos like this, but in a real system it grows at a relevant rate.

Attribute Groups

So far every attribute we saw had one-to-one attribute connections, but in Pathom we also have connections that depend on multiple inputs. In the graph we represent multiple-attribute inputs as grey circles, always with black borders. Here is an example:

; registry
[{::pc/sym    'repository-by-name-and-owner
  ::pc/input  #{:github.repository/name :github.repository/owner}
  ::pc/output [:github.repository/id
               :github.repository/url
               :github.repository/name-with-owner]}]

Notice when you mouse over the group, you can see the set described in the label section.

There is also a special group: the globals (also called the empty set, #{}). This attribute is always available and it connects to attributes with no dependency. Example:

; registry
[{::pc/sym    'time
  ::pc/output [:time/now]}
 {::pc/sym    'pi
  ::pc/output [:math/pi]}]
Reach Via

The Reach Via panel lists the direct and nested paths that reach the current attribute in a single step.

You should look at this view as a tree. The first depth of the tree will always contain sets that represent the input you need to reach this attribute. If the set is bold, that input can directly reach the current attribute; otherwise it will have some nested list that provides the necessary path.

You can click on any attribute to navigate to it.

Provides

The Provides panel lists all the direct and nested attributes that you can reach from the current in a single step.

This is a tree; imagine merging the output of every resolver that has the current attribute in its input.

As you mouse over an attribute, the resolver that makes the link will show up below it.

Output In

List of resolvers where this attribute appears as output.

Input In

List of resolvers where this attribute appears as input.

Input Combinations

In case this attribute appears in an input group with other attributes, all these groups will be listed here.

Mutation Param In

List the mutations that mention this attribute as params.

Mutation Output In

List the mutations that mention this attribute as output.

Spec

In case the attribute has a defined spec, you can see the spec form in this panel.

Examples

When the spec is available you can see some generated examples in this panel. You can generate new examples using the button in this panel header.

3.8.4. Resolver View

In the resolver view the left column gives you details about the resolver input and output. Mouse over items to highlight them in the graph.

The right side shows the graph with all attributes that participate in this resolver; the center of the graph is the resolver input.

3.8.5. Mutation View

The mutation view lists the mutation parameters and the mutation output.

3.8.6. Full Graph

If you click on the Full Graph button it will display a complete graph of the attribute connections in the system. Use this view to get a general feeling for the system; you can understand the main clusters and how they are organized.

3.8.7. Setting up the index explorer resolver

To expose the index to the index explorer you need to write a resolver that gets your index out.

(pc/defresolver index-explorer [env _]
  {::pc/input  #{:com.wsscode.pathom.viz.index-explorer/id}
   ::pc/output [:com.wsscode.pathom.viz.index-explorer/index]}
  {:com.wsscode.pathom.viz.index-explorer/index
   (get env ::pc/indexes)})

Using this you can control what gets exposed to the explorer.
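To make such a resolver available, register it with your connect plugin like any other resolver. Below is a minimal sketch of a parser setup around the resolver from above; the namespace name and the exact reader chain are illustrative, not prescribed:

```clojure
(ns my-app.parser
  (:require [com.wsscode.pathom.core :as p]
            [com.wsscode.pathom.connect :as pc]))

(pc/defresolver index-explorer [env _]
  {::pc/input  #{:com.wsscode.pathom.viz.index-explorer/id}
   ::pc/output [:com.wsscode.pathom.viz.index-explorer/index]}
  {:com.wsscode.pathom.viz.index-explorer/index
   (get env ::pc/indexes)})

(def parser
  (p/parser
    {::p/env     {::p/reader [p/map-reader
                              pc/reader2
                              pc/ident-reader]}
     ; register the resolver so the explorer can load the index
     ::p/plugins [(pc/connect-plugin {::pc/register [index-explorer]})
                  p/error-handler-plugin]}))
```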

3.8.8. Visualizing your index

Here you will find some ways to visualize your index.

Fulcro Inspect

The simplest way is to use the explorer through Fulcro Inspect; this is of course limited to Fulcro apps. All you need to do is open the Index Explorer tab and click to load the index. Happy exploring!

Workspaces

The Pathom Viz package includes some helpers to set up a card with an index explorer; you can use the following code as a starting point:

(ns pathom-index-explorer-workspaces-demo
  (:require [com.wsscode.pathom.core :as p]
            [com.wsscode.pathom.viz.workspaces :as p.viz.ws]
            [nubank.workspaces.core :as ws]))

(def parser ...) ; implement your parser, can be sync or async

(ws/defcard index-explorer
  (p.viz.ws/index-explorer-card
    {::p/parser parser}))
Stand alone app

Use the following example as a base to mount the index explorer app in any dom node:

(ns pathom-index-explorer-stand-alone-mount
  (:require [com.wsscode.pathom.viz.index-explorer :as iex]
            [fulcro.client :as fulcro]
            [fulcro.client.data-fetch :as df]
            [fulcro.client.primitives :as fp]))

(fp/defsc Root
  [this {:keys [ui/root]}]
  {:query [{:ui/root (fp/get-query iex/IndexExplorer)}]}
  (iex/index-explorer root))

(def root (fp/factory Root))

(defn init []
  (let [app (fulcro/make-fulcro-client
              {:client-did-mount
               (fn [app]
                 (df/load app [::iex/id "singleton"] iex/IndexExplorer
                   {:target [:ui/root]}))})]
    (fulcro/mount app Root (js/document.getElementById "appContainerNode"))))

3.8.9. Fixing transit encoding issues

One common issue with the index explorer is the fact that resolvers include fns, and may include other things that are not possible to encode with transit by default. We suggest you set up a default write handler on Transit so it doesn’t break when it encounters a value that it doesn’t know how to encode.

If you are running Pathom in Clojure, you also need to know there is a bug in the current Clojure writer: it doesn’t support default handlers (although the docs say it does).

To fix this, here is a code snippet example on how to get around the bug:

(ns your-ns
  (:require [cognitect.transit :as transit])
  (:import [com.cognitect.transit WriteHandler TransitFactory]
           [java.io ByteArrayOutputStream OutputStream]
           [java.util.function Function]))

(deftype DefaultHandler []
  WriteHandler
  (tag [this v] "unknown")
  (rep [this v] (pr-str v)))

(defn writer
  "Creates a writer over the provided destination `out` using
   the specified format, one of: :msgpack, :json or :json-verbose.
   An optional opts map may be passed. Supported options are:
   :handlers - a map of types to WriteHandler instances, they are merged
   with the default-handlers and then with the default handlers
   provided by transit-java.
   :transform - a function of one argument that will transform values before
   they are written."
  ([out type] (writer out type {}))
  ([^OutputStream out type {:keys [handlers transform default-handler]}]
   (if (#{:json :json-verbose :msgpack} type)
     (let [handler-map (merge transit/default-write-handlers handlers)]
       (transit/->Writer
         (TransitFactory/writer (#'transit/transit-format type) out handler-map default-handler
           (when transform
             (reify Function
               (apply [_ x]
                 (transform x)))))))
     (throw (ex-info "Type must be :json, :json-verbose or :msgpack" {:type type})))))

(defn write-transit [x]
  (let [baos (ByteArrayOutputStream.)
        w    (writer baos :json {:handlers transit-write-handlers ; use your handlers here
                                 :default-handler (DefaultHandler.)})
        _    (transit/write w x)
        ret  (.toString baos)]
    (.reset baos)
    ret))

And this is how to do it in ClojureScript:

(deftype DefaultHandler []
  Object
  (tag [this v] "unknown")
  (rep [this v] (pr-str v)))

(def write-handlers
  {"default" (DefaultHandler.)})

(defn write-transit [x]
  (let [writer (transit/writer {:handlers write-handlers})]
    (transit/write writer x)))

4. Getting Started on Pathom core engine

4.1. Query Notation Introduction

A query is a vector that lists the items you want. A keyword requests a scalar (opaque) value, and a map indicates a to-many or to-one join (resolved at runtime using database content).

Queries are always "relative" to some starting context (which is typically supplied via parameters or by a top-level join).

If you want to obtain the name and age of "some" person:

[:person/name :person/age]

If you want to obtain a person’s name and the street of their address you might write this:

[:person/name {:person/address [:address/street]}]

where we imagine that the underlying database has some kind of normalization that needs to be traversed in order to satisfy the address data.

The result of running a query is a map containing the result (in the same recursive shape as the query):

Running [:person/name :person/age] against the person "Sam" might give:

{:person/name "Sam" :person/age 32}

Running [:person/name {:person/address [:address/street]}] against that same person might give:

{:person/name "Sam" :person/address {:address/street "111 Main St."}}

The query establishes the request and expectation. Interpreting and satisfying these queries from some arbitrary data source is the job of a query parser/interpreter. This library gives you tools for quickly building the latter.

4.2. Parsing Context

The elements of a graph query are relative: they have a contextual meaning. If you ask for a person’s name, the implication is that you are querying a "person entity"; however, the other required bit of information is which person. Thus, elements of a query cannot be fulfilled until they are rooted in a context. This applies to joins as well (e.g. what is the current person’s address?), but once you’ve resolved the context of the root of some graph query, the joins simply describe navigation from that context (the person) to another (their address) via a relation that is either already described in the underlying data source itself, or in code you provide that can figure it out.

As the parser moves through a query like [:person/name {:person/address [:address/street]}] it first starts with some context (e.g. "Sam"). When it finds a join it processes the subquery against a new context (e.g. Sam’s address) to give the result:

{:person/name "Sam" :person/address {:address/street "111 Main St."}}

So, there is always a context at any given point when parsing a query. This context is either established at startup by resolving a specific entity, or is the entity (or entities if to-many) that have been reached by processing the joins of the query.

4.3. Parsing Environment and The Reader

The parsing environment is simply a map that carries along data while parsing (and can be augmented as you go). It establishes the meaning of the "current context", can contain anything you wish (via namespaced keywords), and can be seen in any code that you plug in to process the query.

There are some predefined (namespaced) keys that have special meaning to the parser. In particular :com.wsscode.pathom.core/reader can be used to supply reader(s) for the parser to use. The reader can be a map from attributes to functions, a plain function, or even a vector of functions. It is asked to read the value for the elements of the query using the current environment. We’ll expand on that as we go, or you can read more in the Readers section.

4.3.1. Updating the environment in mid-query

During the processing of joins it’s possible to modify the environment map that will be used when processing the join. To do that you must return the key ::p/env with the new full environment, as in this example:

(example source: docs-src/modules/ROOT/examples/com/wsscode/pathom/book/core/join_env_update.cljs)
[:env-data {:change-env [:env-data]}]
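The included example file isn’t rendered here, so below is a rough sketch of the documented contract: the reader for the join hands Pathom an entity carrying ::p/env, and the subquery then runs against the new environment. The keys :app/some-data, :env-data and :change-env are illustrative names, not part of Pathom’s API:

```clojure
(ns pathom-docs.join-env-update
  (:require [com.wsscode.pathom.core :as p]))

(def computed
  {; expose a value straight from the environment
   :env-data
   (fn [env] (get env :app/some-data))

   ; on this join, return ::p/env with the new full environment;
   ; the subquery [:env-data] will then see the changed value
   :change-env
   (fn [env]
     (p/join {::p/env (assoc env :app/some-data "changed")} env))})

(def parser
  (p/parser {::p/plugins [(p/env-plugin {::p/reader [p/map-reader computed]})]}))

(parser {:app/some-data "original"}
        [:env-data {:change-env [:env-data]}])
```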

5. Pathom on client side

Pathom can run in JavaScript environments (browsers, node, electron, etc.). In this guide we will see how to use a Pathom parser as a Fulcro remote, so you don’t need any server.

First we create a parser as usual:

(def parser
  (p/parallel-parser
    {::p/env     {::p/reader               [p/map-reader
                                            pc/parallel-reader
                                            pc/open-ident-reader
                                            p/env-placeholder-reader]
                  ::p/placeholder-prefixes #{">"}}
     ::p/mutate  pc/mutate-async
     ::p/plugins [(pc/connect-plugin {::pc/register []})
                  p/error-handler-plugin
                  p/request-cache-plugin
                  p/trace-plugin
                  p/elide-special-outputs-plugin]}))

Then the next step is to use this parser as a Fulcro remote:

; example creating Fulcro app

(fulcro/make-fulcro-client
  {:networking
   {:remote (pfn/pathom-remote parser)}})

; example mounting to a Fulcro workspaces card

(ws/defcard pathom-demo
  (ct.fulcro/fulcro-card
    {::f.portal/root MyApp
     ::f.portal/app  {:networking
                      {:remote (-> parser
                                   (pfn/pathom-remote)
                                   ; this plugin will add the query to request the trace
                                   ; on every request, so you can debug it in Fulcro Inspect
                                   ; not recommended for production given it can add
                                   ; significant overhead to the response
                                   (pfn/trace-remote))}}}))

The parser is your Pathom parser; this works with both sync and async parsers.

6. Pathom Core Engine

6.1. Parsers

6.1.1. Serial parser

TODO: explain serial parser internals

6.1.2. Async parser

TODO: explain async parser internals

6.1.3. Parallel parser

TODO: explain parallel parser internals

6.2. Readers

A reader is a function that processes a single entry from the query. For example, given the query [:name :age], if you ask an om.next parser to read this, the reader function will be called twice: once for :name and once for :age. Note that in the case of joins, the parser will only be called for the join entry, not (automatically) for its children. For example, given the query [:name :age {:parent [:name :gender]}], the reader function will be called 3 times: once for :name, once for :age and once for :parent. When reading :parent, your reader code is responsible for checking that it has a child query and doing a recursive call (or anything else you want to do to handle this join). Throughout this documentation we are going to see many ways to implement those readers.

Please note the following differences between om.next readers and pathom readers: in om.next a parser read function has the signature (fn [env dispatch-key params]). In pathom we use a smaller version instead: (fn [env]). The env already contains the dispatch-key and params, so there is no loss of information.

(get-in env [:ast :dispatch-key]) ; => dispatch-key
(get-in env [:ast :params]) ; => params

Also, in om.next you need to return the value wrapped in {:value "your-content"}. In pathom this wrapping is done automatically for you: just return the final value.
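To illustrate the two styles side by side (a minimal sketch; the returned string is just a placeholder):

```clojure
; om.next style: three arguments, value wrapped in a map
(defn om-next-read [env dispatch-key params]
  {:value "your-content"})

; pathom style: env only, return the value directly
(defn pathom-read [env]
  "your-content")
```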

Readers can be 1-arity functions, maps, or vectors. See Map dispatcher and Vector dispatcher for information on those respectively.

Here is a formal Clojure Spec definition for a pathom reader:

(s/def ::reader-map (s/map-of keyword? ::reader))
(s/def ::reader-seq (s/coll-of ::reader :kind vector?))
(s/def ::reader-fn (s/fspec :args (s/cat :env ::env)
                            :ret any?))

(s/def ::reader
  (s/or :fn ::reader-fn
        :map ::reader-map
        :list ::reader-seq))

6.2.1. Functions as Readers

These are quite simply functions that receive the env and resolve the read. More than one reader can exist in a chain, and the special return value ::p/continue allows a reader to indicate it cannot resolve the given property (so processing continues down the chain). By returning any other value (including nil) you’ve resolved the property to that value.

(ns pathom-docs.fn-dispatch
  (:require [com.wsscode.pathom.core :as p]))

(defn read-value [{:keys [ast]}]
  (let [key (get ast :dispatch-key)]
    (case key
      :name "Saul"
      :family "Goodman"
      ; good practice: return ::p/continue when your reader is unable
      ; to handle the request
      ::p/continue)))

(def parser (p/parser {::p/plugins [(p/env-plugin {::p/reader read-value})]}))

(parser {} [:name :family])
; => {:name "Saul" :family "Goodman"}

6.2.2. Maps as Readers

Since it is very common to want to resolve queries from a fixed set of possibilities we support defining a map as a reader. This is really just a "dispatch table" to functions that will receive env. We can re-write the previous example as:

(ns pathom-docs.reader-map-dispatch
  (:require [com.wsscode.pathom.core :as p]))

(def user-reader
  {:name   (fn [_] "Saul")
   :family (fn [_] "Goodman")})

(def parser (p/parser {::p/plugins [(p/env-plugin {::p/reader user-reader})]}))

(parser {} [:name :family])
; => {:name "Saul" :family "Goodman"}
The built-in Map Reader will return ::p/continue if the map it is looking in does not contain the key for the attribute being resolved. This allows it to be safely used in a vector of readers.

6.2.3. Vectors of Readers [aka composed readers]

Using a vector for a reader is how you define a chain of readers. This allows you to define readers that serve a particular purpose. For example, some library author might want to supply readers to compose into your parser, or you might have different modules of database-specific readers that you’d like to keep separate.

When pathom is trying to resolve a given attribute (say :person/name) in some context (say against the "Sam" entity) it will start at the beginning of the reader chain. The first reader will be asked to resolve the attribute. If the reader can handle the value then it will be returned and no other readers will be consulted. If it instead returns the special value ::p/continue it is signalling that it could not resolve it (map readers do this if the attribute key is not in their map). When this happens the next reader in the chain will be tried.

(ns pathom-docs.reader-vector-dispatch
  (:require [com.wsscode.pathom.core :as p]))

; a map dispatcher for the :name key
(def name-reader
  {:name   (fn [_] "Saul")})

; a map dispatcher for the :family key
(def family-reader
  {:family (fn [_] "Goodman")})

(def parser (p/parser {::p/plugins [(p/env-plugin {::p/reader [name-reader family-reader]})]}))

(parser {} [:name :family :other])
; => {:name "Saul", :family "Goodman", :other :com.wsscode.pathom.core/not-found}

If no reader in the chain returns a value (all readers return ::p/continue), then ::p/not-found will be returned.

When you write your readers you should always remember to return ::p/continue when you can’t handle a given key. This way your reader will play nice in composition scenarios.

6.2.4. The Map Reader

Not all things need to be computed. Very often the current context will already have attributes that were read during some prior step (for example, a computed attribute might have read an entire entity from the database and made it the current context). The map reader plugin is a plugin that has the following behavior:

  • If the attribute requested exists in the current parsing context (with any value, even nil), it returns that value.

  • If the attribute is missing, it returns ::p/continue, which is an indication to move to the next reader in the chain.

The map reader is also capable of resolving relations (if present in the context). For example, if there is a join in the query and a vector of data at that join key in the context, then it will attempt to fulfill the subquery of the join.

The map reader is almost always inserted into a reader chain because it is so common to read clumps of things from a database into the context and resolve them one by one as the query parsing proceeds.

6.3. Entities

An entity to pathom is the graph node that is tracked as the current context, and from which information (attributes and graph edges to other entities) can be derived. The current entity needs to be "map-like": It should work with all normal map-related functions like get, contains?, etc.

As Pathom parses the query it tracks the current entity in the environment at key ::p/entity. This makes it easier to write more reusable and flexible readers as we’ll see later.

6.3.1. Using p/entity

The p/entity function exists as a convenience for pulling the current entity from the parsing environment:

(ns com.wsscode.pathom-docs.using-entity
  (:require [com.wsscode.pathom.core :as p]))

(defn read-attr [env]
  (let [e (p/entity env)
        k (get-in env [:ast :dispatch-key])]
    (if (contains? e k)
      (get e k)
      ::p/continue)))

(def parser
  (p/parser {::p/plugins [(p/env-plugin {::p/reader [read-attr]})]}))

; we send the entity using ::p/entity key on environment
(parser {::p/entity #:character{:name "Rick" :age 60}} [:character/name :character/age :character/foobar])
; => #:character{:name "Rick", :age 60, :foobar :com.wsscode.pathom.core/not-found}

Note that the code above is a partial implementation of the map-dispatcher.

The map-reader just has the additional ability to understand how to walk a map that has a tree shape that already "fits" our query:

(ns com.wsscode.pathom-docs.using-entity-map-reader
  (:require [com.wsscode.pathom.core :as p]))

(def parser
  (p/parser {::p/plugins [(p/env-plugin {::p/reader p/map-reader})]}))

; we send the entity using ::p/entity key on environment
(parser {::p/entity #:character{:name "Rick" :age 60
                                :family [#:character{:name "Morty" :age 14}
                                         #:character{:name "Summer" :age 17}]
                                :first-episode #:episode{:name "Pilot" :season 1 :number 1}}}
        [:character/name :character/age
         {:character/family [:character/age]}
         {:character/first-episode [:episode/name :episode/number]}])
; =>
; #:character{:name "Rick",
;             :age 60,
;             :family [#:character{:age 14} #:character{:age 17}],
;             :first-episode #:episode{:name "Pilot", :number 1}}

Now that you understand where the entity context is tracked I encourage you to check the p/map-reader implementation. It’s not very long and will give you a better understanding of all of the concepts covered so far.

6.3.2. Understanding Joins

The other significant task when processing a graph query is walking a graph edge to another entity (or entities) when we find a join.

The subquery for a join is in the :query of the environment. Essentially it is a recursive step where we run the parser on the subquery while replacing the "current entity":

(defn join [entity {:keys [parser query] :as env}]
  (parser (assoc env ::p/entity entity) query))

The real pathom implementation handles some additional scenarios: like the empty sub-query case (it returns the full entity), the special * query (so you can combine the whole entity + extra computed attributes), and union queries.

The following example shows how to use p/join to "invent" a relation that can then be queried:

(ns com.wsscode.pathom-docs.using-entity-map-reader
  (:require [com.wsscode.pathom.core :as p]))

(def rick
  #:character{:name          "Rick"
              :age           60
              :family        [#:character{:name "Morty" :age 14}
                              #:character{:name "Summer" :age 17}]
              :first-episode #:episode{:name "Pilot" :season 1 :number 1}})

(def char-name->voice
  "Relational information representing edges from character names to actors"
  {"Rick"   #:actor{:name "Justin Roiland" :nationality "US"}
   "Morty"  #:actor{:name "Justin Roiland" :nationality "US"}
   "Summer" #:actor{:name "Spencer Grammer" :nationality "US"}})

(def computed
  {:character/voice ; support an invented join attribute
   (fn [env]
     (let [{:character/keys [name]} (p/entity env)
           voice (get char-name->voice name)]
       (p/join voice env)))})

(def parser
  ; process with map-reader first, then try with computed
  (p/parser {::p/plugins [(p/env-plugin {::p/reader [p/map-reader computed]})]}))

(parser {::p/entity rick} ; start with rick (as current entity)
        '[:character/name
          {:character/voice [:actor/name]}
          {:character/family [* :character/voice]}])

There are three different scenarios demonstrated in the above query:

  1. Using the invented join property in a normal join. This allows for a subquery that constrains the data returned (from the actor in this case).

  2. Using the * in a query, which returns all "known" attributes of the "current contextual" entity.

  3. Using an additional (non-joined) :character/voice with * "adds in" that additional information. When a property that is processed via p/join is queried without a subquery, the entire entity is returned.

6.3.3. Dependent Attributes

When computing attributes it is possible that you might need some other attribute for the current context that is also computed. You could hard-code a solution, but that would create all sorts of static code problems that could be difficult to manage as your code evolves: changes to the readers, for example, could easily break it and lead to difficult bugs.

Instead, it is important that readers be able to resolve attributes they need from the "current context" in an abstract manner (i.e. the same way that the query itself is being resolved). The p/entity function has an additional arity for handling this exact case: you pass it a list of attributes that should be "made available" on the current entity, and it will use the parser to ensure that they are there (if possible):

(let [e (p/entity env [:x])]
   ; e now has :x on it if possible, even if it is computed elsewhere
   ...)

The following example shows this in context:

(ns pathom-docs.entity-attribute-dependency
  (:require [com.wsscode.pathom.core :as p]))

(def computed
  {:greet
   (fn [env]
     (let [{:character/keys [name]} (p/entity env)]
       (str "Hello " name "!")))

   :invite
   (fn [env]
     ; requires the computed property `:greet`, which might not have been computed into the current context yet.
     (let [{:keys [greet]} (p/entity env [:greet])]
       (str greet " Come to visit us in Neverland!")))})

(def parser
  (p/parser {::p/plugins [(p/env-plugin {::p/reader [p/map-reader
                                                     computed]})]}))

(parser {::p/entity #:character{:name "Mary"}}
        [:invite])
; => {:invite "Hello Mary! Come to visit us in Neverland!"}

There is a variant, p/entity!, that raises an error if your desired attributes cannot be realized (and p/entity-attr!, used below, which does the same for a single attribute and returns its value). It’s recommended to use the enforced version when you need the given attributes, as it will give your user a better error message.

(ns pathom-docs.entity-attribute-enforce
  (:require [com.wsscode.pathom.core :as p]))

(def computed
  {:greet
   (fn [env]
     ; enforce :character/name to be present, otherwise raise an error; try removing
     ; the attribute from the entity and see what happens
     (let [name (p/entity-attr! env :character/name)]
       (str "Hello " name "!")))

   :invite
   (fn [env]
     ; now we are enforcing the attribute to be available, otherwise raise an error
     ; try changing the :greet to :greete and run the file, you will see the error
     (let [greet (p/entity-attr! env :greet)]
       (str greet " Come to visit us in Neverland!")))})

(def parser
  (p/parser {::p/plugins [(p/env-plugin {::p/reader [p/map-reader
                                                     computed]})]}))

(parser {::p/entity #:character{:name "Mary"}}
        [:invite])
; => {:invite "Hello Mary! Come to visit us in Neverland!"}

If the parse fails on an enforced attribute you will get an exception. For example, if the current entity were #:character{:nam "Mary"} we’d see:

CompilerException clojure.lang.ExceptionInfo: Entity attributes #{:character/name} could not be realized #:com.wsscode.pathom.core{:entity #:character{:nam "Mary"}, :path [:invite :greet], :missing-attributes #{:character/name}}
If computed attributes require IO or intense computation you should consider adding caching to improve parsing performance. Remember that a given query might traverse the same node more than once! Imagine a query that asks for your friends and co-workers. When there is this kind of overlap the same computational code may run more than once. See Request Caching for more details.

6.3.4. Atom entities

As you move from node to node, you can choose to wrap the new contextual entity in an atom. This can be used as a narrow kind of caching mechanism: it allows a reader to add information to the current entity as it computes, but that information is valid only while processing the current entity (it is lost as soon as the next join is followed). Therefore, this won’t help with the overhead of re-visiting the same entity more than once when processing different parts of the same query.

The built-in function p/entity always returns a plain Clojure map; if the entity is an atom it will deref it automatically.

Here is an example using an entity atom:

link:../docs-src/modules/ROOT/examples/com/wsscode/pathom/book/entities/atom_entities.cljc[role=include]
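
A minimal sketch of the idea follows. The :greeting attribute and the write-back into the entity atom are illustrative, and it assumes the current entity atom is available at ::p/entity in the env:

(ns pathom-docs.atom-entities-sketch
  (:require [com.wsscode.pathom.core :as p]))

(def computed
  {:greeting
   (fn [env]
     ; p/entity derefs the atom for us; we can also swap! new data
     ; into it, caching the value for later reads of this same entity
     (let [greeting (str "Hello " (:character/name (p/entity env)) "!")]
       (swap! (::p/entity env) assoc :greeting greeting)
       greeting))})

(def parser
  (p/parser {::p/plugins [(p/env-plugin {::p/reader [p/map-reader computed]})]}))

; wrap the contextual entity in an atom
(parser {::p/entity (atom #:character{:name "Mary"})}
        [:greeting])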

6.3.5. Union queries

Union queries allow us to handle edges that lead to heterogeneous nodes. For example, a to-many media relation could contain both books and movies. Following such an edge requires a different subquery depending on what we actually find in the database.

Here is an example where we want to use a query that will search to find a user, a movie or a book:

(ns pathom-docs.entity-union
  (:require [com.wsscode.pathom.core :as p]))

(def search-results
  [{:type :user
    :user/name "Jack Sparrow"}
   {:type :movie
    :movie/title "Ted"
    :movie/year 2012}
   {:type :book
    :book/title "The Joy of Clojure"}])

(def parser
  (p/parser {::p/plugins [(p/env-plugin {::p/reader [p/map-reader]})]}))

(parser {::p/entity {:search search-results}
         ; here we set where pathom should look on the entity to determine the union path
         ::p/union-path :type}
        [{:search {:user [:user/name]
                   :movie [:movie/title]
                   :book [:book/title]}}])

Of course, unions need a way to determine which path to follow based on the entity at hand. In the example above we used :type (a key on the entity) to decide which branch to take. The value of ::p/union-path can be a keyword (read from the entity, possibly as a computed attribute) or a function (that takes the env and returns the key, e.g. :book, to use for the union query).
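
As a sketch, a function union-path could derive the branch from the entity itself (the predicate logic here is illustrative):

; a function union-path receives the env and must return the branch key
(defn media-union-path [env]
  (let [entity (p/entity env)]
    (cond
      (contains? entity :user/name)   :user
      (contains? entity :movie/title) :movie
      (contains? entity :book/title)  :book)))

; used just like the keyword version, with the same search-results
(parser {::p/entity {:search search-results}
         ::p/union-path media-union-path}
        [{:search {:user  [:user/name]
                   :movie [:movie/title]
                   :book  [:book/title]}}])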

If you want ::p/union-path to be more contextual you can of course set it in the env during the join process, as in the next example:

(ns pathom-docs.entity-union-contextual
  (:require [com.wsscode.pathom.core :as p]))

(def search-results
  [{:type :user
    :user/name "Jack Sparrow"}
   {:type :movie
    :movie/title "Ted"
    :movie/year 2012}
   {:type :book
    :book/title "The Joy of Clojure"}])

(def search
  {:search
   (fn [env]
     ; join-seq is the same as join, but for sequences, note we set the ::p/union-path
     ; here. This is more common since the *method* of determining type will vary for
     ; different queries and data.
     (p/join-seq (assoc env ::p/union-path :type) search-results))})

(def parser
  (p/parser {::p/plugins [(p/env-plugin {::p/reader [search
                                                     p/map-reader]})]}))

(parser {}
        [{:search {:user [:user/name]
                   :movie [:movie/title]
                   :book [:book/title]}}])

This is one of the beautiful things about an immutable environment: you can make changes with confidence that they will not affect unrelated parts of the parsing process.

6.4. Error handling

By default, a pathom parser will stop if an exception occurs during the parsing process. This is often undesirable: if one node fails, you may still want to return the others that succeeded. For that, use the error-handler-plugin. This plugin wraps each read call in a try-catch block; when an error occurs, the value ::p/reader-error is placed at that node, while the details go into a separate tree at the same path. An example demonstrates it best:

(ns pathom-docs.error-handling
  (:require [com.wsscode.pathom.core :as p]))

(def computed
  ; create a key that triggers an error when read
  {:trigger-error
   (fn [_]
     (throw (ex-info "Error triggered" {:foo "bar"})))})

; a reader that just flows, until it reaches a leaf
(defn flow-reader [{:keys [query] :as env}]
  (if query
    (p/join env)
    :leaf))

(def parser
  (p/parser {::p/plugins [(p/env-plugin {::p/reader [computed flow-reader]})
                          ; add the error handler plugin
                          p/error-handler-plugin]}))

(parser {} [{:go [:key {:nest [:trigger-error :other]}
                  :trigger-error]}])
; =>
; {:go {:key :leaf
;       :nest {:trigger-error :com.wsscode.pathom.core/reader-error
;              :other :leaf}
;       :trigger-error :com.wsscode.pathom.core/reader-error}
;  :com.wsscode.pathom.core/errors {[:go :nest :trigger-error] "class clojure.lang.ExceptionInfo: Error triggered - {:foo \"bar\"}"
;                                   [:go :trigger-error] "class clojure.lang.ExceptionInfo: Error triggered - {:foo \"bar\"}"}}

As you can see, when an error occurs, the key ::p/errors will be added to the returned map, containing the detailed error message indexed by the error path. You can customize how the error is exported in this map by setting the key ::p/process-error in your environment:

(ns pathom-docs.error-handling-process
  (:require [com.wsscode.pathom.core :as p]))

(def computed
  ; create a key that triggers an error when read
  {:trigger-error
   (fn [_]
     (throw (ex-info "Error triggered" {:foo "bar"})))})

; a reader that just flows, until it reaches a leaf
(defn flow-reader [{:keys [query] :as env}]
  (if query
    (p/join env)
    :leaf))

; our error processing function
(defn process-error [env err]
  ; if you use an error reporting service, this is a good place
  ; to trigger a call to it; here you have the error and the full
  ; environment from when it occurred, so you might want to add
  ; extra information like the query and the current path so you
  ; can replay it for debugging

  ; we are going to simply return the error message from the error
  ; if you want to return the same thing as the default, use the
  ; function (p/error-str err)
  (.getMessage err))

(def parser
  (p/parser {::p/plugins [(p/env-plugin {::p/reader [computed flow-reader]
                                         ; add the error processing to the environment
                                         ::p/process-error process-error})
                          ; add the error handler plugin
                          p/error-handler-plugin]}))

(parser {} [{:go [:key {:nest [:trigger-error :other]}
                  :trigger-error]}])
; =>
; {:go {:key :leaf
;       :nest {:trigger-error :com.wsscode.pathom.core/reader-error
;              :other :leaf}
;       :trigger-error :com.wsscode.pathom.core/reader-error}
;  :com.wsscode.pathom.core/errors {[:go :nest :trigger-error] "Error triggered"
;                                   [:go :trigger-error]       "Error triggered"}}

6.4.1. Debugging exceptions

By default the Pathom error handler returns only a short error message about the exception, but for debugging you will want the stack trace. To view it you can use a custom process-error. Here is an example in Clojure:

(def parser
  (p/parser {::p/plugins [(p/env-plugin {::p/reader [computed flow-reader]
                                         ; add the error processing to the environment
                                         ::p/process-error
                                         (fn [_ err]
                                           ; print stack trace
                                           (.printStackTrace err)

                                           ; return error str
                                           (p/error-str err)})
                          ; add the error handler plugin
                          p/error-handler-plugin]}))

In ClojureScript:

(def parser
  (p/parser {::p/plugins [(p/env-plugin {::p/reader [computed flow-reader]
                                         ; add the error processing to the environment
                                         ::p/process-error
                                         (fn [_ err]
                                           ; print stack trace on console
                                           (js/console.error err)

                                           ; return error str
                                           (p/error-str err)})
                          ; add the error handler plugin
                          p/error-handler-plugin]}))

6.4.2. Fail fast

Having every node error caught is great for the UI, but not so much for testing. During tests you probably prefer the parser to blow up as fast as possible, so you don’t accumulate a pile of errors that becomes impossible to read. Creating a separate parser without the error-handler-plugin just for that would be annoying, so there is an option to handle it: send the key ::p/fail-fast? as true in the environment, and the try/catch will be skipped, making the parser fail as soon as an exception fires. For example, using our previous parser:

(parser {::p/fail-fast? true}
        [{:go [:key {:nest [:trigger-error :other]}
               :trigger-error]}])
; => CompilerException clojure.lang.ExceptionInfo: Error triggered {:foo "bar"}, ...

6.4.3. Raising errors

The default error output format (in a separate tree) is very convenient for direct API calls because it leaves the data part of the output clean. But if you want to expose those errors in the UI, pulling them out of the separate tree can be a bit of a pain. To help with that there is the p/raise-errors helper, which lifts the errors so they sit at the same level as the attribute that errored. Let’s take our last error output example and process it with p/raise-errors:

(p/raise-errors {:go {:key :leaf
                      :nest {:trigger-error :com.wsscode.pathom.core/reader-error
                             :other :leaf}
                      :trigger-error :com.wsscode.pathom.core/reader-error}
                 :com.wsscode.pathom.core/errors {[:go :nest :trigger-error] "Error triggered"
                                                  [:go :trigger-error] "Error triggered"}})

; outputs:

{:go {:key :leaf
      :nest {:trigger-error :com.wsscode.pathom.core/reader-error
             :other :leaf
             :com.wsscode.pathom.core/errors {:trigger-error "Error triggered"}}
      :trigger-error :com.wsscode.pathom.core/reader-error
      :com.wsscode.pathom.core/errors {:trigger-error "Error triggered"}}}

Notice that we no longer have the root ::p/errors; instead each error is placed at the same level as the attribute that errored. So the path [::p/errors [:go :nest :trigger-error]] turns into [:go :nest ::p/errors :trigger-error]. This makes it very easy to pull the error on the client side.

6.5. Dispatch helpers

Using multimethods is a good way to build open readers. Pathom provides helpers for two common dispatch strategies: key-dispatch and entity-dispatch. Here is a pattern that I often use in parsers:

(ns pathom-docs.dispatch-helpers
  (:require [com.wsscode.pathom.core :as p]))

(def cities
  {"Recife"    {:city/name "Recife" :city/country "Brazil"}
   "São Paulo" {:city/name "São Paulo" :city/country "Brazil"}})

(def city->neighbors
  {"Recife" [{:neighbor/name "Boa Viagem"}
             {:neighbor/name "Piedade"}
             {:neighbor/name "Casa Amarela"}]})

; this will dispatch according to the ast dispatch-key
(defmulti computed p/key-dispatch)

; use virtual attributes to handle data not present on the maps, like computed attributes, relationships, and globals
(defmethod computed :city/neighbors [env]
  (let [name (p/entity-attr! env :city/name)]
    (p/join-seq env (city->neighbors name))))

; an example of global, same as before but without any dependency on the entity
(defmethod computed :city/all [env]
  (p/join-seq env (vals cities)))

; remember to return ::p/continue by default so non-handled cases can flow
(defmethod computed :default [_] ::p/continue)

; to make it easy to re-use, our base entity reader consists of a map reader + the virtual attributes
(def entity-reader [p/map-reader computed])

; dispatch for entity keys, eg: [:user/by-id 123]
(defmulti entity-lookup p/entity-dispatch)

(defmethod entity-lookup :city/by-name [env]
  ; the ident-value helper extracts the value part from the ident, as "Recife" in [:city/by-name "Recife"]
  (let [city (get cities (p/ident-value env))]
    (p/join city env)))

(defmethod entity-lookup :default [_] ::p/continue)

(def parser
  (p/parser {::p/plugins [(p/env-plugin {::p/reader [p/map-reader computed entity-lookup]})]}))

(parser {} [{:city/all [:city/name]}
            {[:city/by-name "Recife"] [:city/neighbors]}])
; =>
;{:city/all [#:city{:name "Recife"} #:city{:name "São Paulo"}]
; [:city/by-name "Recife"] #:city{:neighbors [#:neighbor{:name "Boa Viagem"}
;                                             #:neighbor{:name "Piedade"}
;                                             #:neighbor{:name "Casa Amarela"}]}}

6.6. Mutations

To handle mutations, provide the :mutate option to the parser.

(ns com.wsscode.pathom.book.mutation
  (:require [com.wsscode.pathom.core :as p]
            [fulcro.client.primitives :as fp]))

(defmulti my-mutate fp/dispatch)

(defmethod my-mutate `do-operation [{:keys [state]} _ params]
  (swap! state update :history conj {:op :operation :params params}))

(def parser (p/parser {:mutate my-mutate}))

(comment
  (let [state (atom {:history []})]
    (parser {:state state} [`(do-operation {:foo "bar"})
                            `(do-operation {:buz "baz"})])
    @state)
  ; => {:history [{:op :operation, :params {:foo "bar"}}
  ;               {:op :operation, :params {:buz "baz"}}]}
  )

6.7. Request Caching

Before 2.2.0 you had to include the p/request-cache plugin in your plugin list. Since 2.2.0 this is no longer necessary; it is always available.

As your queries grow, there are more and more optimizations you can do to avoid unnecessary IO or heavy computation. Here we are going to talk about the request cache, which is a fancy name for an atom that is initialized on every query and stays in the environment so you can share the cache across nodes. Let’s see how we can use it to speed up query processing:

(ns pathom-docs.request-cache
  (:require [com.wsscode.pathom.core :as p]))

(defn my-expensive-operation [env]
  ; the cache key can be anything; if we had an extra
  ; variable here, like some id, a good cache key would
  ; be something like: [::my-expensive-operation id]
  (p/cached env :my-key
    ; we are going to send an atom with an int so that we can count
    ; how many times this was called
    (let [counter (:counter env)]
      ; as a secondary sign that the cache is working, let's add a delay
      (Thread/sleep 1000)
      ; increment and return
      (swap! counter inc))))

(def computed
  {:cached my-expensive-operation})

; a reader that just flows, until it reaches a leaf
(defn flow-reader [{:keys [query] :as env}]
  (if query
    (p/join env)
    :leaf))

(def parser
  (p/parser {::p/plugins [(p/env-plugin {::p/reader [computed
                                                     flow-reader]})]}))

(time
  (parser {:counter (atom 0)}
          [:x :y :cached
           {:z [:foo {:bar [:cached]} :cached]}]))
; "Elapsed time: 1006.760165 msecs"
; =>
; {:x      :leaf
;  :y      :leaf
;  :cached 1
;  :z      {:foo    :leaf
;           :bar    {:cached 1}
;           :cached 1}}

Remember this cache is per request: after a full query finishes, the atom is discarded. If you want a more durable cache (one that retains information across requests), check the Plugins section for more information on how to do that.

6.8. Plugins

Pathom allows a parser to have a collection of plugins that modify its behavior. ::p/plugins is a top-level option when creating the parser, and its value is a vector of plugins:

(def parser (p/parser {::p/plugins [...]}))

In this section we’ll be using a few plugins to make our lives easier.

Plugins provide code that wraps some of pathom’s operations. A plugin is a map where you bind event-name keys to functions. They work in a wrapping fashion, much like Ring middleware. Here is what a plugin looks like:

(ns pathom-docs.plugin-example
  (:require [com.wsscode.pathom.core :as p]))

(def my-plugin
  ; the ::p/wrap-parser entry point wraps the entire parser,
  ; this means it wraps the operation that runs once on each
  ; query that runs with the parser
  {::p/wrap-parser
   (fn [parser]
     ; here you can initialize stuff that runs only once per
     ; parser, like a durable cache across requests
     (fn [env tx]
       ; here you could initialize per-request items, things
       ; that need to be set up once per query, as we do for the
       ; request cache or the error atom that accumulates errors

       ; in this case, we are doing nothing, just calling the
       ; previous parser; a pass-through wrapper, if you will
       (parser env tx)))

   ; this wraps the read function, meaning it will run once for
   ; each recursive parser call that happens during your query

   ::p/wrap-read
   (fn [reader]
     (fn [env]
       ; here you can wrap the parser read; pathom uses this in the
       ; error handler to do the per-node try/catch, and the profiler
       ; uses this point to measure the time spent on a given node

       ; this is also a good point to inject custom read keys if
       ; you need to, the profile plugin, for example, can capture
       ; the key ::p.profile/profile and export the current profile
       ; information
       (reader env)))

   ;; during the connect processing, while the wrap-read will work around
   ;; the entire attribute, this wraps each individual resolver call (excluding cache hits)
   ::pc/wrap-resolve
   (fn [resolve]
     (fn [env input]
       (resolve env input)))

   ::p/wrap-mutate
   ; mutation wrappers require a slightly different pattern,
   ; as the actual mutation comes under an :action key
   (fn [mutate]
     (fn [env k params]
       ; inject custom mutation keys, etc. here
       (let [out (mutate env k params)]
         ; when there is an action, wrap it so you can run code
         ; around the actual mutation execution
         (cond-> out
           (:action out)
           (update :action
             (fn [action]
               (fn []
                 (action))))))))})

The plugin engine replaces the old process-reader in a much more powerful way. If you want to see a real example, look at the source of the built-in plugins; they are quite small and yet powerful tools (grep for -plugin in the repository to find all of them).
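
The "durable cache across requests" mentioned in the ::p/wrap-parser comments above could be sketched as follows. This is not a built-in plugin; the ::durable-cache key and the cached-durable helper are hypothetical names for illustration:

(def durable-cache-plugin
  {::p/wrap-parser
   (fn [parser]
     ; this atom is created once, when the plugin wraps the parser,
     ; and is therefore shared across every request
     (let [cache* (atom {})]
       (fn [env tx]
         (parser (assoc env ::durable-cache cache*) tx))))})

(defn cached-durable
  "Look up k in the durable cache; compute and store it on a miss."
  [env k compute]
  (if-some [v (get @(::durable-cache env) k)]
    v
    (let [v (compute)]
      (swap! (::durable-cache env) assoc k v)
      v)))

In real usage remember to add some eviction policy; an unbounded atom cache grows forever.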

6.8.1. The Environment Plugin

Typically the parsing environment will need to include things you create and inject every time you parse a query (e.g. a database connection) and some parser-related things (e.g. the reader) that might be the same all the time.

In the earlier example we created the parser and then explicitly supplied a reader to the environment every time we called it. In cases where there are specific things that you’d always like included in the environment we can instead use the plugin system to pre-set them for every parse.

So, in our prior example we had:

(def parser (p/parser {}))

and every call to the parser needed an explicit reader: (parser {::p/reader computed} [:hello])

The p/env-plugin is a parser plugin that automatically merges a map of configuration into the parsing environment every time the parser is called. Thus, our earlier example can be converted to:

(def parser (p/parser {::p/plugins [(p/env-plugin {::p/reader computed})]}))

and now each call to the parser needs nothing extra in the env: (parser {} [:hello]).

Providing an environment is such a common operation that there is a shortcut to set it up:

(def parser (p/parser {::p/env {::p/reader computed}}))

The ::p/env option to the parser tells it to install the env-plugin with the given configuration.

6.8.2. Example: Shard switch

For a more practical example, let’s say we are routing in a micro-service architecture and our parser needs to be shard-aware. Let’s write a plugin that, anytime it sees a :shard param on a query, updates the :shard attribute in the environment and sends it down, making that shard information available to every node downstream.

(ns pathom-docs.plugin-shard
  (:require [com.wsscode.pathom.core :as p]))

; a reader that just flows, until it reaches a leaf
(defn flow-reader [{:keys [query] :as env}]
  (if query
    (p/join env)
    :leaf))

(def shard-reader
  ; a neat Clojure trick: keywords are functions, so this map reader
  ; just fetches :shard from the environment when :current-shard is asked
  {:current-shard :shard})

(def shard-plugin
  {::p/wrap-read
   (fn [reader]
     (fn [env]
       ; try to get a new shard from the query params
       (let [new-shard (get-in env [:ast :params :shard])]
         (reader (cond-> env new-shard (assoc :shard new-shard))))))})

(def parser
  (p/parser {::p/plugins [(p/env-plugin {::p/reader [shard-reader flow-reader]})
                          ; use our shard plugin
                          shard-plugin]}))

(parser {:shard "global"}
        '[:a :b :current-shard
          {(:go-s1 {:shard "s1"})
           ; notice it flows down
           [:x :current-shard {:y [:current-shard]}]}
          :c
          {(:go-s2 {:shard "s2"})
           [:current-shard
            ; we can override at any point
            {(:now-s3 {:shard "s3"})
             [:current-shard]}]}])
; =>
; {:a             :leaf
;  :b             :leaf
;  :current-shard "global"
;  :go-s1         {:x :leaf :current-shard "s1" :y {:current-shard "s1"}}
;  :c             :leaf
;  :go-s2         {:current-shard "s2" :now-s3 {:current-shard "s3"}}}

6.10. Placeholders

Flattening your data makes it more convenient to use because it increases the connectedness of the data, making access easier. But sometimes, when building user interfaces, the UI will require some structure. For example, say you have a user that participates in a group, so you can access :user/id, :user/name, :group/id and :group/name, as in:

{:user/id 1
 :user/name "User"
 :group/id 42
 :group/name "Bar"}

Then we write a component to render the group header.

(fp/defsc GroupHeaderView [_ _]
  {:ident [:group/id :group/id]
   :query [:group/id :group/name]})

Now it’s time to create a component for the user, but we want to use GroupHeaderView to display the user’s group header. In Fulcro this means that from the user we need a join that queries for GroupHeaderView, something like:

(fp/defsc UserImageView [_ _]
  {:ident [:user/id :user/id]
   :query [:user/id :user/name
           {??? (fp/get-query GroupHeaderView)}]})

To fill in the ???, the trick is to make some namespaces special: they create an edge in the graph that keeps the same context as the previous node. In the default setup the namespace > is the special one, so you can use any keyword in it, for example :>/group or :>/anything.

This way we can conveniently reshape the data to give it more structure.

Let’s fill in the example:

(fp/defsc UserImageView [_ _]
  {:ident [:user/id :user/id]
   :query [:user/id :user/name
           {:>/group (fp/get-query GroupHeaderView)}]})

The final query will be:

[:user/id :user/name
 {:>/group [:group/id :group/name]}]

That will result in:

{:user/id 1
 :user/name "User"
 :>/group {:group/id 42
           :group/name "Bar"}}

; compare to the original data:

{:user/id 1
 :user/name "User"
 :group/id 42
 :group/name "Bar"}

Take a moment to think about what this means: this feature offers a dynamic way to restructure arbitrary data into any number of levels. Combined with the flattening idea you get the best of both worlds, where one entity can hold as many attributes as needed (as long as there is no ambiguity) while still being broken into many smaller components that render specific parts of it.

If you look at the parser default configuration, we set the key ::p/placeholder-prefixes #{">"} in the environment. This set is used by p/env-placeholder-reader to make a join using the given key while maintaining the context. Plugin and reader implementors can take advantage of this information (the placeholder namespaces) to handle them accordingly.
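
For reference, wiring the placeholder reader by hand might look like this sketch (it assumes the placeholder reader picks ::p/placeholder-prefixes up from the env, as described above):

(def parser
  (p/parser
    {::p/plugins [(p/env-plugin
                    {::p/reader [p/map-reader p/env-placeholder-reader]
                     ::p/placeholder-prefixes #{">"}})]}))

(parser {::p/entity {:user/id 1 :user/name "User"
                     :group/id 42 :group/name "Bar"}}
        [:user/id :user/name
         {:>/group [:group/id :group/name]}])
; => {:user/id 1 :user/name "User"
;     :>/group {:group/id 42 :group/name "Bar"}}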

6.11. Tracing

Pathom 2.2.0 provides a replacement for the old profiler. The old profiler worked by wrapping the calls to the Pathom reader and measuring the time around them; this is limiting because you only get one measurement per attribute.

The new tracer works as an event stream: you can inject log events at any time, and events may or may not have a duration (even for events with a start and a finish, they are recorded as separate events and combined in a post-processing step).

This enables detailed logs for understanding what happened during the processing of a query. Pathom core already emits some system-level tracing events automatically, and you can add your own.

To enable tracing you must add the plugin pt/trace-plugin to your parser plugins vector.

6.11.1. Logging custom events

To log custom events you use the function com.wsscode.pathom.trace/trace.

Here is an example parser with some interesting tracing details, run the query to have a look:

link:../docs-src/modules/ROOT/examples/com/wsscode/pathom/book/tracing/demo.cljs[role=include]
[:com.wsscode.pathom.book.tracing.demo/root-dep :com.wsscode.pathom.book.tracing.demo/root-dep-err]
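
As a sketch of logging custom events (the event names here are illustrative; check the trace namespace for the exact signatures), pt/trace logs a single point event, while the pt/tracing macro wraps a span:

(def reader
  {:slow-attr
   (fn [env]
     ; log a single point event
     (pt/trace env {::pt/event ::starting-slow-work})
     ; or wrap a span: enter/leave events are recorded and later
     ; combined into a single event with a duration
     (pt/tracing env {::pt/event ::slow-work}
       (Thread/sleep 100)
       :done))})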

6.12. Profiling [DEPRECATED, prefer the tracing]

It’s good to know how your queries are performing, and breaking the timing down by node is an excellent level at which to reason about it. Pathom provides a plugin to make this measurement easy:

(ns pathom-docs.profile
  (:require [com.wsscode.pathom.core :as p]
            [com.wsscode.pathom.profile :as p.profile]))

(def computed
  ; to demo delays, this property will take some time
  {:expensive (fn [{:keys [query] :as env}]
                (Thread/sleep 300)
                (if query
                  (p/join env)
                  :done))})

(defn flow-reader [{:keys [query] :as env}]
  (if query
    (p/join env)
    :leaf))

; starting the parser as usual
(def parser
  (p/parser {::p/plugins [(p/env-plugin {::p/reader [computed flow-reader]})
                          ; include the profile plugin
                          p.profile/profile-plugin]}))

(parser {}
        ; run the things
        [:a :b {:expensive [:c :d {:e [:expensive]}]}
         ; profile plugin provide this key, when you ask for it you get the
         ; information, be sure to request this as the last item on your query
         ::p.profile/profile])
; =>
; {:a                  :leaf
;  :b                  :leaf
;  :expensive          {:c :leaf
;                       :d :leaf
;                       :e {:expensive :done}}
;  ::p.profile/profile {:a         0
;                       :b         0
;                       :expensive {:c               1
;                                   :d               0
;                                   :e               {:expensive 304
;                                                     ::p.profile/self 304}
;                                   ::p.profile/self 611}}}

Looking at the profile results, you see the query shape, with the time in milliseconds taken to process each node at the leaves. When a node has children, a ::p.profile/self entry indicates the total time for the node itself (including its children).

If you’d like to render a flame graph of this output, you can use some d3 libraries on the web; I recommend the d3 flame graph from spiermar (https://github.com/spiermar/d3-flame-graph). Pathom has a function to convert the profile data to the format accepted by that library:

(-> (parser {}
            ; let's add more things this time
            [:a {:b [:g {:expensive [:f]}]}
             {:expensive [:c :d {:e [:expensive]}]}
             ::p.profile/profile])
    ; get the profile
    ::p.profile/profile
    ; generate the name/value/children format
    p.profile/profile->nvc)
; =>
; {:name     "Root"
;  :value    910
;  :children [{:name ":a" :value 0}
;             {:name     ":b"
;              :value    305
;              :children [{:name ":g" :value 0} {:name ":expensive" :value 304 :children [{:name ":f" :value 1}]}]}
;             {:name     ":expensive"
;              :value    605
;              :children [{:name ":c" :value 0}
;                         {:name ":d" :value 1}
;                         {:name ":e" :value 301 :children [{:name ":expensive" :value 300}]}]}]}

And then use that data to generate the flame graph:

6.13. Path tracking

As you go deeper into the parse, pathom keeps track of the current path taken; it’s available at ::p/path at any time. It’s a vector containing the current path from the root, and its main uses today are error reporting and profiling.

(ns pathom-docs.path-tracking
  (:require [com.wsscode.pathom.core :as p]))

(def where-i-am-reader
  {:where-am-i (fn [{::p/keys [path]}] path)})

; a reader that just flows, until it reaches a leaf
(defn flow-reader [{:keys [query] :as env}]
  (if query
    (p/join env)
    :leaf))

(def parser (p/parser {::p/plugins [(p/env-plugin {::p/reader [where-i-am-reader
                                                               flow-reader]})]}))

(parser {} [{:hello [:some {:friend [:place :where-am-i]}]}])
;=>
;{:hello {:some   :leaf
;         :friend {:place      :leaf
;                  :where-am-i [:hello :friend :where-am-i]}}}

6.14. Async parsing

Nowadays the parallel parser is the recommended one because of its query strategy, but all the concepts presented here for the async parser also apply to the parallel parser, which is async as well.

If you want to write parsers that run in JavaScript environments, async operations are the norm. The async parser is a version of the parser where you can return core.async channels from the readers instead of raw values. This allows the creation of parsers that do network requests or any other async operation. The async parser is still semantically a serial parser and has the same flow characteristics as the regular parser (the order of resolution is preserved).

To write an async parser we use the p/async-parser function. Here is an example:

link:../docs-src/modules/ROOT/examples/com/wsscode/pathom/book/async/intro.cljs[role=include]
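
A minimal sketch of an async parser (the attribute names are illustrative):

(ns pathom-docs.async-sketch
  (:require [clojure.core.async :refer [go <! <!! timeout]]
            [com.wsscode.pathom.core :as p]))

(def async-reader
  {:foo        (fn [_] "regular value")
   :async-info (fn [_]
                 ; readers may return a channel instead of a value
                 (go
                   (<! (timeout 50))
                   "value delivered async"))})

(def parser
  (p/async-parser
    {::p/plugins [(p/env-plugin {::p/reader [async-reader]})]}))

; the async parser itself returns a channel with the final result
(<!! (parser {} [:foo :async-info]))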

Try the example:

[:foo :async-info]

The core plugins work normally with the async parser, so error and profiling will work as expected.

6.14.1. Error propagation

When an exception occurs inside a core.async channel, the error is handled by the channel’s exception handler. That doesn’t compose very well, and for the parser’s needs it’s better to have something closer to the async/await pattern used in JS environments. Pathom provides macros to make this simple: instead of go and <!, use the go-catch and <? macros, as in the following example:

link:../docs-src/modules/ROOT/examples/com/wsscode/pathom/book/async/error_propagation.cljs[role=include]
[:foo :async-info :async-error :com.wsscode.pathom.profile/profile]

Use com.wsscode.common.async-clj for Clojure and com.wsscode.common.async-cljs for ClojureScript. If you are writing a cljc file, use the following require:

[#?(:clj  com.wsscode.common.async-clj
    :cljs com.wsscode.common.async-cljs)
 :refer [go-catch <?]]

6.14.2. JS Promises

In the JS world most async responses come as promises. You can use the <!p macro to read from a promise inside a go block as if it were a channel. Example:

link:../docs-src/modules/ROOT/examples/com/wsscode/pathom/book/async/js_promises.cljs[role=include]
[:dog.ceo/random-dog-url]
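
A sketch of the idea (assuming a browser environment with js/fetch; the dog.ceo endpoint matches the example query above):

(ns pathom-docs.promise-sketch
  (:require [com.wsscode.common.async-cljs :refer [go-catch <!p]]))

(defn random-dog-url []
  (go-catch
    (-> (js/fetch "https://dog.ceo/api/breeds/image/random")
        <!p                          ; await the fetch promise
        .json                        ; .json returns another promise
        <!p
        (js->clj :keywordize-keys true)
        :message)))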

7. Other helpers

7.1. p/map-select

Imagine select-keys on steroids. This helper function takes a map and filters its contents according to an EQL selection. Example:

(p/map-select {:foo "bar" :deep {:a 1 :b 2}} [{:deep [:a]}])
=> {:deep {:a 1}}

8. Removing specs on ClojureScript

If you are not using the specs provided by Pathom you can free some build space by eliding them. To do that, set the ClojureScript compiler options with:

{:closure-defines
  {com.wsscode.pathom.misc.INCLUDE_SPECS false
   ; if you also want to remove from EQL
   ; edn-query-language.core.INCLUDE_SPECS false
   }}

9. GraphQL Integration

Pathom provides a collection of utilities to integrate with GraphQL.

9.1. Raw components

One simple usage you can get from Pathom is the ability to translate EQL queries into GraphQL queries. To do so you can use the function query→graphql.

(pg/query->graphql [:foo]) ; => query { foo }

query→graphql can take an options argument to configure some settings:

  • ::pg/js-name: a function that takes a keyword prop and returns the name string to be used on the GraphQL side

  • ::pg/ident-transform: a function that converts an ident into a selector + params

If you want to process the query yourself, using query→graphql is a nice way to express GraphQL queries without having to use strings in Clojure.
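A sketch of using the options argument (the 2-arity form and the exact output whitespace are assumptions; js-name here is a hypothetical kebab-to-camel helper, purely for illustration):

```clojure
(require '[clojure.string :as str])

;; hypothetical helper: kebab-case keyword -> camelCase GraphQL name
(defn js-name [k]
  (let [[head & tail] (str/split (name k) #"-")]
    (apply str head (map str/capitalize tail))))

(pg/query->graphql [:user/full-name] {::pg/js-name js-name})
;; would emit something like: query { fullName }
```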


9.1.1. Idents in GraphQL

In GraphQL, when you want to load some entity, the path usually goes from some root accessor that determines which type of entity is going to be returned, plus a parameter (usually some id) to specify the entry. For example, to load some user you might write a query like:

query {
  user(id: 42) {
    name
  }
}

You can write something similar in EQL as:

[{(:user {:id 42})
  [:name]}]

When you need to load multiple things at once, GraphQL provides an aliasing feature:

query {
  first: user(id: 42) {
    name
  }

  second: user(id: 424) {
    name
  }
}

To generate that kind of query with EQL you can use special parameters:

[{(:user {::pg/alias "first" :id 42})
  [:name]}

 {(:user {::pg/alias "second" :id 424})
   [:name]}]

The parameter will be removed from the list, and the alias will be created. If you don’t like the syntax, since this is just Clojure data, we can write a small helper:

(defn aliased [alias key]
  (eql/update-property-param key assoc ::pg/alias alias))

[{(aliased "first" '(:user {:id 42}))
  [:name]}

 {(aliased "second" '(:user {:id 424}))
   [:name]}]

Not that much different, but you can be more creative and design your favorite API for it.

So far we just used the params to express an entity request, but in EQL we also have another way to express identity, via idents.

If you are not familiar with the ident concept, you can imagine it as a way to point to some specific entity. An ident takes the form of a vector with two elements: the first element determines the "identity type", which is always a keyword, and the second element holds the "identity value", which can be any EDN value.

Some applied examples of EQL idents:
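For instance (attribute names here are illustrative):

```clojure
;; join on an ident: "the :user/name of the user whose :user/id is 42"
[{[:user/id 42] [:user/name]}]

;; idents mix freely with regular joins in the same query
[{:app/current-user [:user/name]}
 {[:product/sku "A-1"] [:product/price]}]
```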

The ident concept doesn’t exist in GraphQL, but we can still try to translate it.

What Pathom does is take the most common case, a single entry point with a single parameter, and extract it from a qualified ident using the following syntax:

qualified ident means the ident keyword is qualified; for example, [:user/id 123] is qualified because :user/id has a namespace.
[:user/id 42]
; (namespace :user/id) -> gives the graphql entry point
; (name :user/id) -> gives the param name
; 42 -> gives the param value

So the previous ident turns into:

query { _user_id_42: user(id: 42) }

Note that it also created an alias automatically; this way you can write queries using multiple similar idents and they will end up with different names:

[{[:user/id 42] [:name]}
 {[:user/id 48] [:name]}]

Turns into:

query {
  _user_id_42: user(id: 42) {
    name
  }
  _user_id_48: user(id: 48) {
    name
  }
}

In some cases one param is not enough; there is a supported "And" notation:

[{[:user/idAndname [42 "foo"]] [:name]}]

But to be honest, that just looks bad. If you need more than one param it’s better not to use an ident for direct translation; still, this can be useful if you need a quick and dirty way to access a multi-param entry in an ident fashion.

You can customize this ident translation behavior by providing ::pg/ident-transform to the pg/query→graphql call. Here is the code of the default implementation so you can understand what a translation means:

(defn ident-transform [[key value]]
  (let [fields (if-let [field-part (name key)]
                 (str/split field-part #"-and-|And") ["id"])
        value  (if (vector? value) value [value])]
    (if-not (= (count fields) (count value))
      (throw (ex-info "The number of fields on value needs to match the entries" {:key key :value value})))
    {::selector (-> (namespace key) (str/split #"\.") last)
     ::params   (zipmap fields value)}))
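Walking through the default implementation above (in the source, ::selector and ::params are namespaced keywords in the pg namespace, so the results are written with the ::pg/ alias here):

```clojure
(ident-transform [:user/id 42])
;; => {::pg/selector "user", ::pg/params {"id" 42}}

;; the "And" notation splits the name portion into multiple fields
(ident-transform [:user/idAndname [42 "foo"]])
;; => {::pg/selector "user", ::pg/params {"id" 42, "name" "foo"}}
```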
The Pathom GraphQL + Connect integration handles idents in a different way than described in the previous section; to understand how the Connect ident integration is done, check this section.

9.2. Fulcro Integration

There are two main ways in which to use Pathom + GraphQL + Fulcro:

  1. Simple: Use utilities to convert queries/mutations to GraphQL, and parse the responses. This gives you a quick and easy interface to existing GraphQL APIs, but is not extensible.

  2. Advanced: Integrate with Connect. This method pulls the GraphQL schema into Connect indexes with various benefits: Tools give better support (e.g. query autocompletion within Fulcro Inspect), and you can add your own client-side resolvers that can derive new shapes/data for the API, making it possible to shape the external API to your local UI whims.

In both cases Pathom includes implementations of Fulcro Remotes, so you can easily drop GraphQL support into a Fulcro application as a remote!

This chapter assumes you’re familiar with Pathom’s async support.

The namespaces concerned are:

[com.wsscode.pathom.graphql :as pg]
[com.wsscode.pathom.connect.graphql2 :as pcg]
[com.wsscode.pathom.fulcro.network :as pfn]
Before Pathom 2.2.12 the default functions to work with GraphQL converted standard Clojure hyphenated names to the GraphQL camel case format, but after some user reports we realized that wasn’t a good idea, because some names could never be accessed when entry points started with capital letters. To avoid those problems, since Pathom 2.2.12 we recommend new implementations that don’t transform the names in any way by default, while still providing custom name munging if the user wants it. None of the previous code was changed, so library clients will not break with this change; we are just using new namespaces that implement the new, simpler way.

9.2.1. Simple GraphQL

There is a Fulcro Remote in pfn/graphql-network that allows you to easily add plain GraphQL support to a Fulcro client like so:

(fulcro/new-fulcro-client
    :networking
    {:remote
     (pfn/graphql-network2
       {::pfn/url (str "https://api.github.com/graphql?access_token=" token)})})

The queries from components have the following rules:

  1. You can use any namespace on the query keywords.

  2. The name portion of a keyword will be used to send to GraphQL

Mutations on a Simple GraphQL remote have the following rules:

  1. Mutations can have any namespace. The GraphQL conversion will elide the namespace.

Simple GraphQL Example

To demonstrate how easy it is to get a simple application going against an external GraphQL API, we’ll build a simple TODO app. We’ve already gone to graph.cool (https://www.graph.cool/, a back-end as a service provider) and created a GraphQL schema. You can play with the API by entering queries and mutations via their interface to our endpoint at https://api.graph.cool/simple/v1/cjjkw3slu0ui40186ml4jocgk.

For example, entering this query into the left pane:

query {
  allTodoItems {id, title, completed}
}

should give you something like this (people play with this, so yours will be different):

{
  "data": {
    "allTodoItems": [
      {
        "id": "cjjkw7yws06el0135q5sf372s",
        "title": "Write docs on workspaces",
        "completed": true
      }]
  }
}

So, you can see we have a root query that we can run to get all todo items, and each one has an id, title, and completion flag. We can write a simple Fulcro tree of components for that query:

(defsc TodoItem
  [this props]
  {:ident         [:todo/id :todo/id]
   :query         [:todo/id :todo/title :todo/completed]}
  ...)

(defsc TodoSimpleDemo [this props]
  {:ident         (fn [] [::root "singleton"])
   :query         [{:allTodoItems (fp/get-query TodoItem)}]}
  ...)

Notice that on TodoItem we namespaced the keys. This is fine, as the integration code will strip these from the query. If TodoSimpleDemo were your root component, its query would already be compatible with our defined API when using our GraphQL network:

(fulcro/new-fulcro-client
  :started-callback
  (fn [app]
    (df/load app :allTodoItems todo/TodoItem {:target [::root "singleton" :allTodoItems]}))

  :networking
  {:remote (pfn/graphql-network2 "https://api.graph.cool/simple/v1/cjjkw3slu0ui40186ml4jocgk")})

Mutations are similarly easy. The network component translates them as discussed earlier, so doing something like adding a new todo item looks like this:

(fm/defmutation createTodoItem [todo]
  (action [env] ...local optimistic stuff...)
  (remote [{:keys [ast]}]
    ;; Don't send the UI-specific params to the server...just the id and title
    (update ast :params select-keys [:todo/id :todo/title])))

The full source is shown below, but hopefully you can see how simple it is to get something going pretty quickly.

link:../workspaces/src/com/wsscode/pathom/workspaces/graphql/simple_todo_demo.cljs[role=include]

9.2.2. GraphQL and Connect

The more powerful way to use GraphQL from Pathom is to use it with Connect. This gives you the basic features you saw in the simple version, but also gives you a lot more power and extensibility.

The integration has a bit of boilerplate, but it’s all relatively simple. Please make sure you already understand Pathom Connect before reading this.

Keywords and GraphQL – Prefixes

In order to properly generate indexes Connect needs to know how you will prefix them for a given GraphQL endpoint. From there, the keyword also gives an indication of the "type" and attribute name.

Say we are interfacing with GitHub: we might choose the prefix github. Then our keywords would need to be things like :github.User/name.

You will have to formally declare the prefix you’ve decided on in order for Connect to work.

GraphQL Entry Points and Connect Ident Maps

In GraphQL the schema designer indicates what entry points are possible. In GitHub’s public API you can, for example, access a User if you know their login. You can access a Repository if you know both the owner and the repository name.

You might wish to take a moment, log into GitHub, and play with these at https://developer.github.com/v4/explorer.

To look at a user, you need something like this:

query {
   user(login:"wilkerlucio") {
    createdAt
  }
}

To look at a repository, you need something like this:

query {
  repository(owner:"wilkerlucio" name:"pathom") {
    createdAt
  }
}

Our EDN queries use idents to stand for these kinds of entry points. So, we’d like to be able to translate an EDN query like this:

[{[:github.User/login "wilkerlucio"] [:github.User/createdAt]}]

into the GraphQL query above. This is the purpose of the "Ident Map". It is a map whose top-level keys are GraphQL entry point names, and whose values are maps of the attributes required at that entry point associated with EDN keywords:

{ENTRY-POINT-NAME {ATTR connect-keyword
                   ...}
 ...}

So, an ident map for the above two GraphQL entry points is:

{"user"       {"login" :github.User/login}
 "repository" {"owner" :github.User/login
               "name"  :github.Repository/name}}

Installing such an ident map (covered shortly) will enable this feature.

If an entry point requires more than one input (as repository does), then there is no standard EDN ident that can directly use it. We’ll cover how to handle that in Multiple Input Entry Points.

Interestingly, this feature of Pathom gives you an ability on GraphQL that GraphQL itself doesn’t have: the ability to nest an entry point anywhere in the query. GraphQL only understands entry points at the root of the query, but our EDN notation allows you to use an ident on a join at any level. Pathom Connect will correctly interpret such a join, process it against the GraphQL system, and properly nest the result.
Setting Up Connect with GraphQL

Now that you understand entry points we can explain the rest of the setup. A lot of it is just the standard Connect stuff, but of course there are additions for GraphQL.

First, you need to declare a place to store the indexes. That’s because the GraphQL schema will be loaded asynchronously later, and we need the index reference to add the GraphQL connection.

(defonce indexes (atom {}))

We need to define the configuration for the GraphQL connection:

(def github-gql
  {::pcg/url       (str "https://api.github.com/graphql?access_token=" (ls/get :github-token))
   ::pcg/prefix    "github"
   ::pcg/ident-map {"user"       {"login" :github.User/login}
                    "repository" {"owner" :github.User/login
                                  "name"  :github.Repository/name}}
   ::p.http/driver p.http.fetch/request-async})
::pcg/url

The GraphQL API endpoint

::pcg/prefix

The prefix you’ll use in your EDN queries and mutations.

::pcg/ident-map

The definition of GraphQL entry points, as discussed previously.

::p.http/driver

A driver that can run HTTP requests. Used to issue requests (e.g. fetch schema).

We’re using ls/get to pull our github access token from browser local storage so we don’t have to check it into code, and so anyone can use the example unedited. In Chrome, you can set this via the developer tools "Application" tab (once at the page for your app). Click on local storage, then add a key value pair. The key should be the keyword (typed out), and the value must be a QUOTED token (e.g. "987398ahbckjhbas"). The quotes are required!

Next, we need to create a parser. It will look basically like this:

(def parser
  (p/parallel-parser
    {::p/env     {::p/reader               [p/map-reader
                                            pc/parallel-reader
                                            pc/open-ident-reader
                                            p/env-placeholder-reader]
                  ::p/placeholder-prefixes #{">"}
                  ::p.http/driver          p.http.fetch/request-async}
     ::p/mutate  pc/mutate-async
     ::p/plugins [(pc/connect-plugin {; we can specify the index for the connect plugin to use
                                      ; instead of creating a new one internally
                                      ::pc/indexes  indexes})
                  p/error-handler-plugin
                  p/request-cache-plugin
                  p/trace-plugin]}))
Loading the GraphQL Schema and Creating a Remote

The final setup step is to make sure that you load the GraphQL schema into the Connect indexes. If you’re using Fulcro it looks like this:

(new-fulcro-client
  :started-callback
  (fn [app]
    (go-catch
      (try
        (let [idx (<? (pcg/load-index github-gql))]
          (swap! indexes pc/merge-indexes idx))
        (catch :default e (js/console.error "Error making index" e)))))

  :networking
  {:remote (-> (create-parser)
               (pfn/pathom-remote)
               ;; OPTIONAL: Automatically adds profile queries to all outgoing queries, so you see profiling from the parser
               (pfn/profile-remote))})
Adding Resolvers

Of course we’ve done all of this setup so we can make use of (and extend the capabilities of) some GraphQL API.

The normal stuff is trivial: Make EDN queries that ask for the proper attributes in the proper context.

In our example, we might want to list some information about some repositories. If you remember, repositories take two pieces of information, and idents can supply only one.

That’s ok, we can define a resolver for a root-level Connect property that can pre-establish some repositories into our context!

(pc/defresolver repositories [_ _]
  {::pc/output [{:demo-repos [:github.User/login :github.Repository/name]}]}
  {:demo-repos
   [{:github.User/login "wilkerlucio" :github.Repository/name "pathom"}
    {:github.User/login "fulcrologic" :github.Repository/name "fulcro"}
    {:github.User/login "fulcrologic" :github.Repository/name "fulcro-inspect"}
    {:github.User/login "fulcrologic" :github.Repository/name "fulcro-css"}
    {:github.User/login "fulcrologic" :github.Repository/name "fulcro-spec"}
    {:github.User/login "thheller" :github.Repository/name "shadow-cljs"}]})

Remember, once Connect has enough info in a context, it can fill in the remaining details. Our Ident Map indicates that if we have "user login" and "repository name", then we can get a repository. Thus, a resolver that outputs values for the keywords associated with those requirements is sufficient!

Remember to add this resolver definition before the parser. Then we have to register this resolver in our Connect system; do that by updating the call to connect-plugin. Here is the updated parser:

(def parser
  (p/parallel-parser
    {::p/env     {::p/reader               [p/map-reader
                                            pc/parallel-reader
                                            pc/open-ident-reader
                                            p/env-placeholder-reader]
                  ::p/placeholder-prefixes #{">"}
                  ::p.http/driver          p.http.fetch/request-async}
     ::p/mutate  pc/mutate-async
     ::p/plugins [(pc/connect-plugin {::pc/register repositories ; registering the resolver
                                      ::pc/indexes  indexes})
                  p/error-handler-plugin
                  p/request-cache-plugin
                  p/trace-plugin]}))

Now we can run a query on :demo-repos like [{:demo-repos [:github.Repository/createdAt]}], and walk the graph from there to anywhere allowed!

Queries

The queries that are supported "out of the box" are those that follow the allowed shape of the documented GraphQL schema for your API. The EDN queries in Fulcro might look like this:

(fp/defsc Repository
  [this {:github.Repository/keys [id nameWithOwner viewerHasStarred]}]
  {:ident [:github.Repository/id :github.Repository/id]
   :query [:github.Repository/id :github.Repository/nameWithOwner :github.Repository/viewerHasStarred]}
  ...)

(fp/defsc GraphqlDemo
  [this {:keys [demo-repos]}]
  {:query [{:demo-repos (fp/get-query Repository)}]}
  (dom/div
    (mapv repository demo-repos)))

All of Connect’s additional features (placeholder nodes, augmenting the graph, reshaping) are now also easily accessible.

Fulcro Mutations and Remote

If you’re using Fulcro, then the normal method of defining mutations will work if you use the remote shown earlier. You simply prefix the mutation name with your GraphQL prefix and it’ll work:

(fm/defmutation github/addStar [_]
  (action [{:keys [state ref]}] ...)
  (remote [_] true))
This is not the defmutation we showed earlier in the setup. This is Fulcro’s defmutation.

You can, of course, modify the parameters, do mutation joins, etc.

Connect-Based Mutations

It is possible that you might want to define a mutation that is not on the GraphQL API, but which does some alternative remote operation.

The notation is the same as for resolvers:

(pc/defmutation custom-mutation [_ params]
  {::pc/sym 'custom-mutation         ;; (optional) if provided will be used as mutation symbol, otherwise it will use the def symbol (including namespace)
   ::pc/params [:id {:boo [:y]}]     ;; future autocomplete...noop now
   ::pc/output [:x]}                 ;; future autocomplete...
  ;; can be async or sync.
  (async/go ...))

Note: The params and output are currently meant as documentation. In an upcoming version they’ll also be leveraged for tool autocomplete.

The body of the mutation can return a value (sync) or a channel (async). This means that the custom mutation could do something like hit an alternate REST API. This allows you to put in mutations that the async parser understands and allows to be integrated into a single expression (and API), even though they are not part of the GraphQL API you’re interacting with.

Of course, if you’re using Fulcro, then you’ll also have to make sure they’re OK with the mutation symbolically (e.g. define a fm/defmutation as well).

Multiple Input Entry Points

Earlier we talked about how the Ident Map might specify GraphQL entry points that require more than one parameter, and the fact that EDN idents only really have a spot for one bit of data beyond the keyword: [keyword value].

Sometimes we have cases like GitHub’s repository entry point where more than one parameter is required.

This can be gracefully handled with EDN query parameters if you modify how Connect processes the query.

Since version 2.2.0 the connect readers ident-reader and open-ident-reader support the provision of extra context information using the query parameter :pathom/context.

Now, remember that this query:

[{[:github.repository/name "n"] [...]}]

cannot work because there is only one of the required two bits of info (we also need owner).

What we’re going to do is allow parameters to make up the difference. If you're unfamiliar with them, you just surround the element of the query in a list and add a map of params, like this:

'[{([:github.repository/name "n"] {:x v}) [...]}]

Here is how you can use it to query for the pathom repository in the GitHub GraphQL API:

[{([:github.repository/name "pathom"] {:pathom/context {:github.repository/owner "wilkerlucio"}}) [...]}]

The problem, of course, is that this is really hard on the eyes. A bit too much nesting soup, and you need the quote ' in order to prevent an attempt to run a function! But this is what we need to allow us to add in more information. We can clean up the notation by defining a helper function:

(defn repository-ident
  "Returns a parameterized ident that can be used as a join key to directly query a repository."
  [owner name]
  (list [:github.repository/name name] {:pathom/context {:github.user/login owner}}))

Now we can write a reasonable query that contains everything we need:

[{(repository-ident "joe" "boo") [:github.repository/created-at]}]

and we’re good to go!

Customizing Result Parsing

Under the hood, Pathom uses a parser reader to do some error handling and bookkeeping on the query result. The simplest way to customize query results is to pass in custom mung and demung functions. These can be added as optional keys to the GraphQL configuration map. For example, if our EQL query keywords are in kebab case, but the GraphQL schema uses camel case, we can make the Connect plugin do the conversion for us with the following configuration:

(def github-gql
  {::pcg/url       (str "https://api.github.com/graphql?access_token=" (ls/get :github-token))
   ::pcg/prefix    "github"
   ::pcg/mung      pg/kebab-case
   ::pcg/demung    pg/camel-case
   ::pcg/ident-map {"user"       {"login" :github.User/login}
                    "repository" {"owner" :github.User/login
                                  "name"  :github.Repository/name}}
   ::p.http/driver p.http.fetch/request-async})

We can completely customize the query results by passing our own custom parser. See pcg/parser-item as an example of what such a parser should look like. This could be used to coerce uuid values from strings to uuids. Here’s an example of adapting pcg/parser-item to also coerce :my.gql.item/id values to uuids:

(defn demunger-map-reader
  "Reader that will demunge keys and coerce :my.gql.item/id values to uuids"
  [{::keys [demung]
    :keys  [ast query]
    :as    env}]
  (let [entity (p/entity env)
        k (:key ast)]
    (if-let [[_ v] (find entity (pcg/demung-key demung k))]
      (do
        (if (sequential? v)
          (if query
            (p/join-seq env v)
            (if (= k :my.gql.item/id)
              (map uuid v)
              v))
          (if (and (map? v) query)
            (p/join v env)
            (if (= k :my.gql.item/id)
              (uuid v)
              v))))
      ::p/continue)))

(def parser-item
  (p/parser {::p/env     {::p/reader [pcg/error-stamper
                                      demunger-map-reader
                                      p/env-placeholder-reader
                                      pcg/gql-ident-reader]}
             ::p/plugins [(p/env-wrap-plugin
                           (fn [env]
                             (-> (merge {::demung identity} env)
                                 (update ::p/placeholder-prefixes
                                         #(or % #{})))))]}))

(def my-gql-config
  {::pcg/url         "https://api.mydomain.com/graphql"
   ::pcg/prefix      "my.gql"
   ::pcg/parser-item parser-item
   ::pcg/ident-map   {"item" {"id" :my.gql.item/id}}
   ::p.http/driver   p.http.fetch/request-async})

This is only lightly edited from the implementation of pcg/parser-item.

9.2.3. Complete GraphQL Connect Example

A complete working example (for workspaces) is shown below:

link:../workspaces/src/com/wsscode/pathom/workspaces/graphql/github_demo.cljs[role=include]

9.3. EDN→GraphQL

Here you can try an interactive converter: type your EDN graph query on the left side and see the GraphQL equivalent being generated on the right.
