jepsen.adya

Moved to jepsen.tests.adya.

No vars found in this namespace.

jepsen.checker.clock

Helps analyze clock skew over time.

jepsen.cli

Command line interface. Provides a default main method for common Jepsen
functions (like the web interface), and utility functions for Jepsen tests to
create their own test runners.
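
For instance, a test runner often wires its test function into
`single-test-cmd` and hands control to `run!`. A minimal sketch; `my-test`
is a hypothetical function returning a Jepsen test map:

    (ns my.runner
      (:require [jepsen.cli :as cli]))

    (defn my-test
      "Takes parsed CLI options and returns a test map. Hypothetical."
      [opts]
      (merge {:name "demo"} opts))

    (defn -main
      "`lein run test ...` runs the test; `lein run serve` starts the
      web interface."
      [& args]
      (cli/run! (merge (cli/single-test-cmd {:test-fn my-test})
                       (cli/serve-cmd))
                args))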

jepsen.client

Applies operations to a database.
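
Clients typically reify the Client protocol. A minimal no-op sketch,
assuming the five-function protocol shape of recent Jepsen versions:

    (require '[jepsen.client :as client])

    (defn noop-client []
      (reify client/Client
        (open!     [this test node] this) ; connect to `node` here
        (setup!    [this test])           ; create tables, etc.
        (invoke!   [this test op]         ; apply op, return a completion
          (assoc op :type :ok))
        (teardown! [this test])
        (close!    [this test])))         ; release the connection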

jepsen.codec

Serializes and deserializes objects to/from bytes.

jepsen.control

Provides SSH control over a remote node. There's a lot of dynamically bound
state in this namespace because we want to make it as simple as possible for
scripts to open connections to various nodes.
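
For example, running a command as root on one node (a sketch using the
`on`, `su`, and `exec` helpers from this namespace):

    (require '[jepsen.control :as c])

    ;; exec escapes its arguments, runs the command over SSH, and returns
    ;; its stdout as a string.
    (c/on "n1"
      (c/su
        (c/exec :uname :-a)))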

jepsen.control.net

Network control functions.

jepsen.core

Entry point for all Jepsen tests. Coordinates the setup of servers, running
tests, creating and resolving failures, and interpreting results.

Jepsen tests a system by running a set of single-threaded *processes*, each
representing a single client in the system, and a special *nemesis* process,
which induces failures across the cluster. Processes choose operations to
perform based on a *generator*. Each process uses a *client* to apply the
operation to the distributed system, and records the invocation and
completion of that operation in the *history* for the test. When the test is
complete, a *checker* analyzes the history to see if it made sense.

Jepsen automates the setup and teardown of the environment and distributed
system by using an *OS* and *client* respectively. See `run!` for details.
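
A minimal sketch of invoking `run!`, starting from the stub test map in
jepsen.tests:

    (require '[jepsen.core :as jepsen]
             '[jepsen.tests :as tests])

    ;; Real tests would also supply :os, :db, :client, :nemesis,
    ;; :generator, and :checker entries.
    (def result
      (jepsen/run! (assoc tests/noop-test
                          :name  "demo"
                          :nodes ["n1" "n2" "n3"])))

    (:valid? (:results result)) ; did the history check out?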

jepsen.db

Allows Jepsen to set up and tear down databases.
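
DBs are usually reified over the DB protocol. A do-nothing sketch; real
implementations install, start, stop, and wipe the database under test:

    (require '[jepsen.db :as db])

    (defn noop-db []
      (reify db/DB
        (setup!    [db test node])   ; install & start on `node`
        (teardown! [db test node]))) ; stop & remove data on `node`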

jepsen.faketime

Libfaketime is useful for making clocks run at differing rates! This
namespace provides utilities for stubbing out programs with faketime.

jepsen.generator

Generates operations for a test. Generators are composable, stateful objects
which emit operations for processes until they are exhausted, at which point
they return nil. Generators may sleep when generating operations, to delay
the rate at which the test proceeds.

Generators do *not* have to emit a :process for their operations; test
workers will take care of that.

Every object may act as a generator, and constantly yields itself.

Big ol box of monads, really.
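
For example, composing map generators with this namespace's combinators.
A sketch: maps constantly yield themselves, `mix` alternates between them
at random, `stagger` throttles the rate, and `time-limit` stops the test:

    (require '[jepsen.generator :as gen])

    (def r+w
      (->> (gen/mix [{:type :invoke, :f :read}
                     {:type :invoke, :f :write, :value 1}])
           (gen/stagger 1/10)    ; roughly ten ops/sec
           (gen/time-limit 60))) ; stop after 60 seconds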

jepsen.generator.pure

A Jepsen history is a list of operations--invocations and completions. A
generator's job is to specify what invocations to perform, and when. In a
sense, a generator *becomes* a history as Jepsen incrementally applies it to
a real system.

Naively, we might define a history as a fixed sequence of invocations to
perform at certain times, but this is impossible: we have only a fixed set of
threads, and they may not be free to perform our operations. A thread must be
*free* in order to perform an operation.

Time, too, is a dependency. When we schedule an operation to occur once per
second, we mean that only once a certain time has passed can the next
operation begin.

There may also be dependencies between threads. Perhaps only after a nemesis
has initiated a network partition does our client perform a particular read.
We want the ability to hold until a certain operation has begun.

Conceptually, then, a generator is a *graph* of events, some of which have
not yet occurred. Some events are invocations: these are the operations the
generator will provide to clients. Some events are completions: these are
provided by clients to the generator. Other events are temporal: a certain
time has passed.

This graph has some invocations which are *ready* to perform. When we have a
ready invocation, we apply the invocation as an input to the graph, obtaining
a new graph, and hand the operation to the relevant client.

## Contexts

A *context* is a map which provides context for generators. For instance, a
generator might need to know the number of threads which will ask it for
operations. It can get that number from the *context*. Users can add their
own values to the context map, which allows two generators to share state.
When one generator calls another, it can pass a modified version of the
context, which allows us to write generators that, say, run two independent
workloads, each with their own concurrency and thread mappings.

The standard context mappings, which are provided by Jepsen when invoking the
top-level generator, and can be expected by every generator, are:

    :time           The current Jepsen linear time, in nanoseconds
    :free-threads   A collection of idle threads which could perform work
    :workers        A map of thread identifiers to process identifiers
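
For instance, a context for a two-worker test plus a nemesis might look
like this (values illustrative; the :nemesis thread naming is an
assumption):

    {:time         0
     :free-threads [0 1 :nemesis]
     :workers      {0 0, 1 1, :nemesis :nemesis}}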

## Fetching an operation

We use `(op gen test context)` to ask the generator for the next invocation
that we can process.

The operation can have three forms:

- The generator may return `nil`, which means the generator is done, and
  there is nothing more to do. Once a generator does this, it must never
  return anything other than `nil`, even if the context changes.
- The generator may return `:pending`, which means there might be more
  ops later, but it can't tell yet.
- The generator may return an operation, in which case:
  - If its time is in the past, we can evaluate it now
  - If its time is in the future, we wait until either:
    - The time arrives
    - Circumstances change (e.g. we update the generator)

But `(op gen test context)` returns more than just an operation; it also
returns the *subsequent state* of the generator, if that operation were to be
emitted. The two are bundled into a tuple.

    (op gen test context) => [op gen']      ; known op
                             [:pending gen] ; unsure
                             nil            ; exhausted

The analogous operations for sequences are `first` and `next`; why do we
couple them here? Why not use the update mechanism strictly to evolve state?
Because the behavior in sequences is relatively simple: next always moves
forward one item, whereas only *some* updates actually cause systems to move
forward. Seqs always do the same thing in response to `next`, whereas
generators may do different things depending on context. Moreover, Jepsen
generators are often branched, rather than linearly wrapped, as sequences
are, resulting in questions about *which branch* needs to be updated.

When I tried to strictly separate implementations of `op` and `update`, it
resulted in every update call attempting to determine whether this particular
generator did or did not emit the given invocation event. This is
*remarkably* tricky to do well, and winds up relying on all kinds of
non-local assumptions about the behavior of the generators you wrap, and
those which wrap you.

## Updating a generator

We still want the ability to respond to invocations and completions, e.g. by
tracking that information in context variables. Therefore, in addition to
`(op)` returning a new generator, we have a separate function, `(update gen
test context event)`, which allows generators to react to changing
circumstances. Updates fire when:

- We invoke an operation (e.g. one that the generator just gave us)
- We complete an operation

Updates use a context with a specific relationship to the event:

- The context :time is equal to the event :time
- The free processes and worker maps reflect those *prior* to the event
  taking place. This ensures that generators can examine the worker map to
  identify which thread performed the given operation.

TODO: this is not true yet. Fix this.

## Default implementations

Nil is a valid generator; it ignores updates and always yields nil for
operations.

IPersistentMaps are generators which ignore updates and return operations
which look like the map itself, but with default values for time, process,
and type provided based on the context. This means you can write a generator
like

    {:f :write, :value 2}

and it will generate ops like

    {:type :invoke, :process 3, :time 1234, :f :write, :value 2}

Sequences are generators which assume the elements of the sequence are
themselves generators. They ignore updates, and return all operations from
the first generator in the sequence, then all operations from the second, and
so on. They do not synchronize.

Functions are generators which ignore updates and can take either test and
context as arguments, or no args. Functions should be *mostly* pure, but some
creative impurity is probably OK. For instance, returning randomized :values
for maps is probably all right. I don't know the laws! What is this, Haskell?

Functions can return three things:

- nil: signifies that the function generator is exhausted.
- a tuple of [op gen]: passed through directly; the gen replaces the fn
- a map: the map is treated as a generator, which lets it fill in a process,
         time, etc.

In the future, we might consider:

- Returning any other generator: the function could be *replaced* by that
  generator, allowing us to compute generators lazily?
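
For example, a function generator which leans on the map rules above (a
sketch; the random value is just for illustration):

    ;; Called with test and context; the returned map is itself treated
    ;; as a generator, so :type, :process, and :time are filled in from
    ;; the context.
    (defn rand-write [test context]
      {:f :write, :value (rand-int 5)})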

jepsen.independent

Some tests are expensive to check--for instance, linearizability--which
requires we verify only short histories. But if histories are short, we may
not be able to sample often or long enough to reveal concurrency errors. This
namespace supports splitting a test into independent components--for example
taking a test of a single register and lifting it to a *map* of keys to
registers.
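
A sketch of lifting a single-register workload across many keys, assuming
this namespace's `concurrent-generator` and `checker` helpers;
`register-gen` and `register-checker` are hypothetical single-register
pieces:

    (require '[jepsen.independent :as independent])

    {:generator (independent/concurrent-generator
                  10      ; threads per key
                  (range) ; an endless supply of keys
                  (fn [k] register-gen))
     :checker   (independent/checker register-checker)}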

jepsen.net

Controls network manipulation.

TODO: break this up into jepsen.net.proto (polymorphism) and jepsen.net
(wrapper fns, default args, etc)

jepsen.net.proto

Protocols for network manipulation. High-level functions live in jepsen.net.

jepsen.os

Controls operating system setup and teardown.

jepsen.os.ubuntu

Common tasks for Ubuntu boxes. Tested against Ubuntu 18.04.

jepsen.reconnect

Stateful wrappers for automatically reconnecting network clients.

A wrapper is a map with a connection atom `conn` and a pair of functions:
`(open)`, which opens a new connection, and `(close conn)`, which closes a
connection. We use these to provide a `with-conn` macro that acquires the
current connection from a wrapper, evaluates body, and automatically
closes/reopens the connection when errors occur.

Connect/close/reconnect lock the wrapper, but multiple threads may acquire
the current connection at once.
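
A sketch of a wrapper in use, assuming a `wrapper` constructor in this
namespace; `open-conn`, `close-conn`, and `query` stand in for whatever
client library you're wrapping:

    (require '[jepsen.reconnect :as rc])

    (def db-conn
      (rc/wrapper {:open  (fn [] (open-conn "n1"))
                   :close (fn [conn] (close-conn conn))
                   :log?  true}))

    ;; Acquires the current connection; on error, closes and reopens.
    (rc/with-conn [c db-conn]
      (query c "SELECT 1"))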

jepsen.repl

Helper functions for mucking around with tests!

jepsen.report

Prints out stuff.

jepsen.tests

Provides utilities for writing tests using jepsen.

jepsen.tests.adya

Generators and checkers for tests of Adya's proscribed behaviors for
weakly-consistent systems. See http://pmg.csail.mit.edu/papers/adya-phd.pdf

jepsen.tests.bank

Helper functions for doing bank tests, where you simulate transfers between
accounts, and verify that reads always show the same balance. The test map
should have these additional options:

    :accounts     A collection of account identifiers.
    :total-amount Total amount to allocate.
    :max-transfer The largest transfer we'll try to execute.
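
For instance, the extra entries might look like (illustrative values):

    {:accounts     (vec (range 8))
     :total-amount 100
     :max-transfer 5}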

jepsen.tests.causal-reverse

Checks for a strict serializability anomaly in which T1 < T2, but T2 is
visible without T1.

We perform concurrent blind inserts across n keys, and meanwhile, perform
reads of n keys in a transaction. To verify, we replay the history,
tracking the writes which were known to have completed before the invocation
of any write w_i. If w_i is visible, and some w_j < w_i is *not* visible,
we've found a violation of strict serializability.

Splits keys up onto different tables to make sure they fall in different
shard ranges.

jepsen.tests.cycle

Tests based on detecting cycles between operations in a history.

First, we define a bunch of ways you can compute a dependency graph over a
history. Our graphs use completions (e.g. ok, info) for the canonical
representation of an operation. For instance:

`realtime-graph` relates a->b if operation b begins after operation a
completes successfully.

`process-graph` relates a->b if a and b are performed by the same process,
and that process executes a before b.

`monotonic-key-graph` assumes op :values are maps of keys to observed values,
like {:x 1 :y 2}. It relates a->b if b observed a higher value of some key
than a did. For instance, {:x 1 :y 2} -> {:x 2 :y 2}, because :x is higher in
the second op.

`wr-graph` assumes op :values are transactions like [[f k v] ...], and
relates operations based on transactional reads and writes: a->b if b
observes a value that a wrote. This order requires that keys only have a
given value written once.

You can also *combine* graphs using `combine`, which takes the union of
multiple graphs. `(combine realtime monotonic-key)`, for instance, verifies
that not only must the values of each key monotonically increase, but that
those increases must be observed in real time: stale reads are not allowed.

Given a function which takes a history and produces a dependency graph, the
checker runs that function, identifies strongly connected components in the
resulting graph, and decides whether the history is valid based on whether
the graph has any strongly connected components--i.e., cycles.
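
A sketch of combining graphs into a checker, assuming the checker in this
namespace takes the graph function directly:

    (require '[jepsen.tests.cycle :as cycle])

    ;; Valid iff the union of the realtime and per-key monotonicity
    ;; graphs has no strongly connected components.
    (def monotonic-checker
      (cycle/checker (cycle/combine cycle/realtime-graph
                                    cycle/monotonic-key-graph)))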

jepsen.tests.cycle.append

Detects cycles in histories where operations are transactions over named
lists, and operations are either appends or reads. Used with
jepsen.tests.cycle.

jepsen.tests.linearizable-register

Common generators and checkers for linearizability over a set of independent
registers. Clients should understand three functions, for writing a value,
reading a value, and compare-and-setting a value from v to v'. Reads receive
`nil`, and replace it with the value actually read.

    {:type :invoke, :f :write, :value [k v]}
    {:type :invoke, :f :read,  :value [k nil]}
    {:type :invoke, :f :cas,   :value [k [v v']]}

jepsen.tests.long-fork

Tests for an anomaly in parallel snapshot isolation (but which is prohibited
in normal snapshot isolation). In long-fork, concurrent write transactions
are observed in conflicting order. For example:

    T1: (write x 1)
    T2: (write y 1)
    T3: (read x nil) (read y 1)
    T4: (read x 1) (read y nil)

T3 implies T2 < T1, but T4 implies T1 < T2. We aim to observe these
conflicts.

To generalize to multiple updates...

Let a write transaction be a transaction of the form [(write k v)], executing
a single write to a single register. Assume no two write transactions have
the same key *and* value.

Let a read transaction be a transaction of the form [(read k1 v1) (read k2
v2) ...], reading multiple distinct registers.

Let R be a set of reads, and W be a set of writes. Let the total set of
transactions T = R U W; i.e., there are only reads and writes. Let the
initial state be empty.

Serializability implies that there should exist an order < over T, such that
every read observes the state of the system as given by the prefix of all
write transactions up to some point t_i.

Since each value is written exactly once, a sequence of states with the same
value for a key k must be *contiguous* in this order. That is, it would be
illegal to read:

    [(r x 0)] [(r x 1)] [(r x 0)]

... because although we can infer (w x 0) happened before t1, and (w x 1)
happened between t1 and t2, there is no *second* (w x 0) that can satisfy t3.
Visually, imagine bars which connect identical values from reads, keeping
them together:

    key   r1    r2    r3    r4

    x     0     1 --- 1 --- 1

    y     0     1 --- 1     0

    z     0 --- 0     2     1

A long fork anomaly manifests when we cannot construct an order where these
bars connect identical values into contiguous blocks.

Note that more than one value may change between reads: we may not observe
every change.

Note that the values for a given key are not monotonic; we must treat them as
unordered, opaque values because the serialization order of writes is (in
general) arbitrary. It would be easier if we knew the write order up front.
The classic example of long fork uses inserts because we know the states go
from `nil` to `some-value`, but in our case, it could be 0 -> 1, or 1 -> 0.

To build a graph like this, we need an incremental process. We know the
initial state was [nil nil nil ...], so we can start there. Let our sequence
be:

        r0

    x   nil
    y   nil
    z   nil

We know that values can only change *away* from nil. This implies that those
reads with the most nils must come adjacent to r0. In general, if there
exists a read r1 adjacent to r0, then there must not exist any read r2 such
that r2 has more in common with r0 than r1. This assumes that all reads are
total.

Additionally, if the graph is to be... smooth, as opposed to forking, there
are only two directions to travel from any given read. This implies there can
be at most two reads r1 and r2 with an equal number of links *and* distinct
links to r0. If a third read with this property exists, there's 'nowhere to
put it' without creating a fork.

We can verify this property in roughly linear time, which is nice. It
doesn't, however, prevent *closed loops* with no forking structure.

To do loops, I think we have to actually do the graph traversal. Let's punt
on that for now.
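
As a concrete sketch, the reads r1-r4 from the table above, encoded as
maps of key to observed value (a hypothetical encoding, purely for
illustration):

    (def reads
      [{:x 0, :y 0, :z 0}    ; r1
       {:x 1, :y 1, :z 0}    ; r2
       {:x 1, :y 1, :z 2}    ; r3
       {:x 1, :y 0, :z 1}])  ; r4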
