jepsen.adya

Generators and checkers for tests of Adya's proscribed behaviors for
weakly-consistent systems. See http://pmg.csail.mit.edu/papers/adya-phd.pdf

jepsen.cli

Command line interface. Provides a default main method for common Jepsen
functions (like the web interface), and utility functions for Jepsen tests to
create their own test runners.
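
A typical runner built on this namespace looks roughly like the sketch below; my-test is a hypothetical function that turns the parsed CLI options into a complete test map.

    (ns my.runner
      (:require [jepsen.cli :as cli]))

    (defn my-test
      "Hypothetical: builds a complete Jepsen test map from parsed CLI options."
      [opts]
      (merge opts {:name "my-test"}))

    (defn -main
      "Runs the test at the command line; also serves results over the web UI."
      [& args]
      (cli/run! (merge (cli/single-test-cmd {:test-fn my-test})
                       (cli/serve-cmd))
                args))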

jepsen.client

Applies operations to a database.
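
As a sketch (the exact protocol methods vary slightly between Jepsen versions), a client is a record or reified object implementing this namespace's Client protocol:

    (ns my.client
      (:require [jepsen.client :as client]))

    ;; A no-op client: connects to nothing and acknowledges every operation.
    ;; A real client would perform the operation against the database in
    ;; invoke! and fill in the op's :type (:ok, :fail, or :info) and :value.
    (defrecord NoopClient []
      client/Client
      (setup!    [this test node] this)
      (invoke!   [this test op]   (assoc op :type :ok))
      (teardown! [this test]))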

jepsen.codec

Serializes and deserializes objects to/from bytes.

jepsen.control

Provides SSH control over a remote node. There's a lot of dynamically bound
state in this namespace because we want to make it as simple as possible for
scripts to open connections to various nodes.
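
A minimal sketch of the usual pattern, assuming the test's SSH credentials are already configured; the commands themselves are placeholders:

    (require '[jepsen.control :as c])

    ;; Runs shell commands on node "n1" over SSH; keywords and strings
    ;; become individual shell tokens.
    (c/on "n1"
      (c/su                              ; as root
        (c/exec :mkdir :-p "/opt/my-db")
        (c/cd "/opt/my-db"
          (c/exec :ls :-lh))))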

jepsen.control.net

Network control functions.

jepsen.core

Entry point for all Jepsen tests. Coordinates the setup of servers, running
tests, creating and resolving failures, and interpreting results.

Jepsen tests a system by running a set of single-threaded *processes*, each
representing a single client in the system, and a special *nemesis* process,
which induces failures across the cluster. Processes choose operations to
perform based on a *generator*. Each process uses a *client* to apply the
operation to the distributed system, and records the invocation and
completion of that operation in the *history* for the test. When the test is
complete, a *checker* analyzes the history to see if it made sense.

Jepsen automates the setup and teardown of the environment and distributed
system by using an *OS* and *client* respectively. See `run!` for details.
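
A sketch of calling run! directly, starting from the stock noop-test in jepsen.tests and overriding a few keys (a real test would also supply :os, :db, :client, :nemesis, :generator, :checker, and so on):

    (require '[jepsen.core :as jepsen]
             '[jepsen.tests :as tests])

    (def my-test
      (merge tests/noop-test
             {:name  "my-test"
              :nodes ["n1" "n2" "n3" "n4" "n5"]}))

    ;; Sets up the OS and DB, runs the generator against the clients and
    ;; nemesis, tears everything down, and checks the resulting history.
    (jepsen/run! my-test)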

jepsen.db

Allows Jepsen to set up and tear down databases.
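
As a sketch, a DB is usually a reified instance of this namespace's DB protocol; the shell commands below are placeholders for whatever your database actually needs:

    (require '[jepsen.db :as db]
             '[jepsen.control :as c])

    (def my-db
      (reify db/DB
        (setup! [_ test node]
          (c/su (c/exec :echo "install and start the database here")))
        (teardown! [_ test node]
          (c/su (c/exec :echo "stop the database and remove its data here")))))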

jepsen.faketime

Libfaketime is useful for making clocks run at differing rates! This
namespace provides utilities for stubbing out programs with faketime.

jepsen.generator

Generates operations for a test. Generators are composable, stateful objects
which emit operations for processes until they are exhausted, at which point
they return nil. Generators may sleep when generating operations, to delay
the rate at which the test proceeds.

Generators do *not* have to emit a :process for their operations; test
workers will take care of that.

Every object may act as a generator, and constantly yields itself.

Big ol box of monads, really.
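
For example, here is a sketch of a common composition pattern (combinator names vary between Jepsen versions): mix reads and writes, throttle them, drive the nemesis on a start/stop cycle, and cut the whole thing off after a time limit.

    (require '[jepsen.generator :as gen])

    (defn r [_ _] {:type :invoke, :f :read,  :value nil})
    (defn w [_ _] {:type :invoke, :f :write, :value (rand-int 5)})

    (def my-generator
      (->> (gen/mix [r w])             ; clients randomly mix reads and writes
           (gen/stagger 1/10)          ; roughly 10 ops/sec, with random jitter
           (gen/nemesis                ; meanwhile the nemesis cycles start/stop
             (gen/seq (cycle [(gen/sleep 5)
                              {:type :info, :f :start}
                              (gen/sleep 5)
                              {:type :info, :f :stop}])))
           (gen/time-limit 60)))       ; stop emitting operations after 60 seconds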

jepsen.independent

Some tests are expensive to check--for instance, linearizability--which
requires that we verify only short histories. But if histories are short, we may
not be able to sample often or long enough to reveal concurrency errors. This
namespace supports splitting a test into independent components--for example
taking a test of a single register and lifting it to a *map* of keys to
registers.
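
A sketch of that lifting, assuming a simple read/write register workload: concurrent-generator runs a per-key generator across several threads per key, and independent/checker applies an ordinary single-key checker to each key's subhistory.

    (require '[jepsen.independent :as independent]
             '[jepsen.generator :as gen])

    (defn per-key-gen
      "Generator for a single key, treated as one register."
      [k]
      (->> (gen/mix [(fn [_ _] {:type :invoke, :f :read,  :value nil})
                     (fn [_ _] {:type :invoke, :f :write, :value (rand-int 5)})])
           (gen/limit 100)))

    ;; Ten worker threads per key, keys drawn lazily from (range); values are
    ;; wrapped in [key value] tuples, which independent/checker unwraps again
    ;; when checking each key's subhistory.
    (def my-generator
      (independent/concurrent-generator 10 (range) per-key-gen))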

jepsen.net

Controls network manipulation.

TODO: break this up into jepsen.net.proto (polymorphism) and jepsen.net
(wrapper fns, default args, etc)

jepsen.net.proto

Protocols for network manipulation. High-level functions live in
jepsen.net.

jepsen.os

Controls operating system setup and teardown.
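
As a sketch, an OS is a reified instance of this namespace's OS protocol, whose setup! and teardown! prepare and clean up each node:

    (require '[jepsen.os :as os])

    ;; An OS that does nothing, for nodes which are already provisioned;
    ;; jepsen.os also provides a ready-made noop implementation like this.
    (def trusting-os
      (reify os/OS
        (setup!    [_ test node])
        (teardown! [_ test node])))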

jepsen.reconnect

Stateful wrappers for automatically reconnecting network clients.

A wrapper is a map with a connection atom `conn` and a pair of functions:
`(open)`, which opens a new connection, and `(close conn)`, which closes a
connection. We use these to provide a with-conn macro that acquires the
current connection from a wrapper, evaluates body, and automatically
closes/reopens the connection when errors occur.

Connect/close/reconnect lock the wrapper, but multiple threads may acquire
the current connection at once.
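
A hedged sketch of that shape, going only by the description above (option keys and arities may differ between versions); connect and disconnect stand in for a real client library:

    (require '[jepsen.reconnect :as rc])

    (declare connect disconnect)   ; hypothetical client-library functions

    (def conn-wrapper
      (rc/wrapper {:open  (fn [] (connect "n1"))
                   :close (fn [conn] (disconnect conn))}))

    ;; Binds the wrapper's current connection to conn for the body, closing
    ;; and reopening the connection if the body throws.
    (rc/with-conn [conn conn-wrapper]
      conn)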

jepsen.repl

Helper functions for mucking around with tests!

jepsen.report

Prints out stuff.

jepsen.tests

Provides utilities for writing tests using Jepsen.

jepsen.tests.bank

Helper functions for doing bank tests, where you simulate transfers between
accounts, and verify that reads always show the same balance. The test map
should have these additional options:

:accounts     A collection of account identifiers.
:total-amount Total amount to allocate.
:max-transfer The largest transfer we'll try to execute.
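
For example, those options might look like this when merged into the test map (the values are purely illustrative):

    {:accounts     (vec (range 8)) ; eight account identifiers, 0-7
     :total-amount 100             ; reads should always sum to exactly 100
     :max-transfer 5}              ; no single transfer moves more than 5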

jepsen.tests.linearizable-register

Common generators and checkers for linearizability over a set of independent
registers. Clients should understand three functions, for writing a value,
reading a value, and compare-and-setting a value from v to v'. Reads receive
`nil`, and replace it with the value actually read.

    {:type :invoke, :f :write, :value [k v]}
    {:type :invoke, :f :read,  :value [k nil]}
    {:type :invoke, :f :cas,   :value [k [v v']]}

jepsen.tests.long-fork

Tests for an anomaly in parallel snapshot isolation (but which is prohibited
in normal snapshot isolation). In long-fork, concurrent write transactions
are observed in conflicting order. For example:

T1: (write x 1)
T2: (write y 1)
T3: (read x nil) (read y 1)
T4: (read x 1) (read y nil)

T3 implies T2 < T1, but T4 implies T1 < T2. We aim to observe these
conflicts.

To generalize to multiple updates...

Let a write transaction be a transaction of the form [(write k v)], executing
a single write to a single register. Assume no two write transactions have
the same key *and* value.

Let a read transaction be a transaction of the form [(read k1 v1) (read k2
v2) ...], reading multiple distinct registers.

Let R be a set of reads, and W be a set of writes. Let the total set of
transactions T = R U W; i.e., there are only reads and writes. Let the initial
state be empty.

Serializability implies that there should exist an order < over T, such that
every read observes the state of the system as given by the prefix of all
write transactions up to some point t_i.

Since each value is written exactly once, a sequence of states with the same
value for a key k must be *contiguous* in this order. That is, it would be
illegal to read:

    [(r x 0)] [(r x 1)] [(r x 0)]

... because although we can infer (w x 0) happened before t1, and (w x 1)
happened between t1 and t2, there is no *second* (w x 0) that can satisfy t3.
Visually, imagine bars which connect identical values from reads, keeping
them together:

    key   r1    r2    r3    r4
    x     0     1 --- 1 --- 1
    y     0     1 --- 1     0
    z     0 --- 0     2     1

A long fork anomaly manifests when we cannot construct an order where these
bars connect identical values into contiguous blocks.

Note that more than one value may change between reads: we may not observe
every change.

Note that the values for a given key are not monotonic; we must treat them as
unordered, opaque values because the serialization order of writes is (in
general) arbitrary. It would be easier if we knew the write order up front.
The classic example of long fork uses inserts because we know the states go
from `nil` to `some-value`, but in our case, it could be 0 -> 1, or 1 -> 0.

To build a graph like this, we need an incremental process. We know the
initial state was [nil nil nil ...], so we can start there. Let our sequence
be:

        r0

    x   nil
    y   nil
    z   nil

We know that values can only change *away* from nil. This implies that those
reads with the most nils must come adjacent to r0. In general, if there
exists a read r1 adjacent to r0, then there must not exist any read r2 such
that r2 has more in common with r0 than r1. This assumes that all reads are
total.

Additionally, if the graph is to be... smooth, as opposed to forking, there
are only two directions to travel from any given read. This implies there can
be at most two reads r1 and r2 with an equal number of links *and* distinct
links to r0. If a third read with this property exists, there's 'nowhere to
put it' without creating a fork.

We can verify this property in roughly linear time, which is nice. It
doesn't, however, prevent *closed loops* with no forking structure.

To do loops, I think we have to actually do the graph traversal. Let's punt
on that for now.
