Tests for an anomaly permitted by parallel snapshot isolation, but prohibited by normal snapshot isolation. In a long fork, concurrent write transactions are observed in conflicting order. For example:
T1: (write x 1)
T2: (write y 1)
T3: (read x nil) (read y 1)
T4: (read x 1) (read y nil)
T3 implies T2 < T1, but T4 implies T1 < T2. We aim to observe these conflicts.
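This conflict can be checked mechanically. A minimal sketch in Python (the library itself is Clojure; the names here are illustrative, not Jepsen's API): model each read transaction as a map of keys to observed values, where None stands for nil.

```python
# Illustrative sketch, not Jepsen's Clojure implementation.
# A read is a dict of key -> observed value; None stands for nil.
def long_fork_pair(a, b):
    """True if reads a and b imply contradictory write orders:
    a saw some write that b missed, and vice versa."""
    a_ahead = any(a[k] is not None and b.get(k) is None for k in a)
    b_ahead = any(b[k] is not None and a.get(k) is None for k in b)
    return a_ahead and b_ahead

t3 = {"x": None, "y": 1}  # implies (write y 1) < (write x 1)
t4 = {"x": 1, "y": None}  # implies (write x 1) < (write y 1)
print(long_fork_pair(t3, t4))  # True: a long fork
```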
To generalize to multiple updates...
Let a write transaction be a transaction of the form [(write k v)], executing a single write to a single register. Assume no two write transactions have the same key and value.
Let a read transaction be a transaction of the form [(read k1 v1) (read k2 v2) ...], reading multiple distinct registers.
Let R be a set of reads, and W be a set of writes. Let the total set of transactions T = R U W; i.e. there are only reads and writes. Let the initial state be empty.
Serializability implies that there should exist an order < over T, such that every read observes the state of the system as given by the prefix of all write transactions up to some point t_i.
Since each value is written exactly once, a sequence of states with the same value for a key k must be contiguous in this order. That is, it would be illegal to read:
[(r x 0)] [(r x 1)] [(r x 0)]
... because although we can infer (w x 0) happened before t1, and (w x 1) happened between t1 and t2, there is no second (w x 0) that can satisfy t3. Visually, imagine bars which connect identical values from reads, keeping them together:
key   r1   r2   r3   r4
x     0    1----1----1
y     0    1----1    0
z     0----0    2    1
A long fork anomaly manifests when we cannot construct an order where these bars connect identical values into contiguous blocks.
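The per-key contiguity requirement can be sketched as a simple scan (Python for illustration; not part of the library): given the sequence of values a key takes across a candidate read order, no value may reappear after an intervening different value.

```python
def contiguous(values):
    """True if equal values form one contiguous run in the sequence."""
    seen = set()
    prev = object()  # sentinel unequal to any value
    for v in values:
        if v != prev:
            if v in seen:
                return False  # value recurs after a change: illegal
            seen.add(v)
            prev = v
    return True

print(contiguous([0, 1, 0]))     # False: the illegal [(r x 0)] [(r x 1)] [(r x 0)]
print(contiguous([0, 1, 1, 1]))  # True: a legal run
```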
Note that more than one value may change between reads: we may not observe every change.
Note that the values for a given key are not monotonic; we must treat them as unordered, opaque values, because the serialization order of writes is (in general) arbitrary. It would be easier if we knew the write order up front. The classic example of long fork uses inserts because we know the states go from nil to some-value, but in our case, it could be 0 -> 1, or 1 -> 0.
To build a graph like this, we need an incremental process. We know the initial state was [nil nil nil ...], so we can start there. Let our sequence be:
key   r0
x     nil
y     nil
z     nil
We know that values can only change away from nil. This implies that those reads with the most nils must come adjacent to r0. In general, if there exists a read r1 adjacent to r0, then there must not exist any read r2 such that r2 has more in common with r0 than r1. This assumes that all reads are total.
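That adjacency rule reduces to a similarity count over total reads. A hypothetical Python sketch (not the library's code):

```python
def similarity(a, b):
    """Number of keys on which two total reads agree."""
    return sum(1 for k in a if a[k] == b[k])

r0 = {"x": None, "y": None, "z": None}
reads = [{"x": 1, "y": 1, "z": None},
         {"x": 1, "y": None, "z": None},
         {"x": 1, "y": 1, "z": 2}]
# The read with the most values in common with r0 (here, the most
# nils) is the only legal neighbor of r0 in a non-forking order.
best = max(reads, key=lambda r: similarity(r0, r))
print(best)  # {'x': 1, 'y': None, 'z': None}
```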
Additionally, if the graph is to be... smooth, as opposed to forking, there are only two directions to travel from any given read. This implies there can be at most two reads r1 and r2 with an equal number of links and distinct links to r0. If a third read with this property exists, there's 'nowhere to put it' without creating a fork.
We can verify this property in roughly linear time, which is nice. It doesn't, however, prevent closed loops with no forking structure.
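That check might look like the following (an illustrative Python sketch, assuming total reads; not the library's code): for a given read, collect the other reads tied for greatest similarity, and flag a fork when more than two distinct ones exist.

```python
def similarity(a, b):
    """Number of keys on which two total reads agree."""
    return sum(1 for k in a if a[k] == b[k])

def smooth_at(read, others):
    """True if at most two distinct reads tie as nearest neighbors
    of `read`; a third distinct, equally-near read has nowhere to go
    without forking."""
    best = max(similarity(read, o) for o in others)
    nearest = {tuple(sorted(o.items())) for o in others
               if similarity(read, o) == best}
    return len(nearest) <= 2

r0 = {"x": None, "y": None, "z": None}
branches = [{"x": 1, "y": None, "z": None},
            {"x": None, "y": 1, "z": None},
            {"x": None, "y": None, "z": 2}]
print(smooth_at(r0, branches))      # False: three branches from r0
print(smooth_at(r0, branches[:2]))  # True: only two directions
```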
To handle loops, I think we have to actually do the graph traversal. Let's punt on that for now.
(checker n)
Takes a group size n, and a history of :txn transactions. Verifies that no key is written multiple times. Searches for read transactions where one read observes x but not y, and another observes y but not x.
(distinct-pairs coll)
Given a collection, returns a sequence of all unique 2-element sets taken from that collection.
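In Python terms (illustrative only; the library is Clojure), this is what `itertools.combinations` provides for a collection of distinct elements:

```python
from itertools import combinations

def distinct_pairs(coll):
    """All unique 2-element subsets of coll, as pairs."""
    return list(combinations(coll, 2))

print(distinct_pairs([1, 2, 3]))  # [(1, 2), (1, 3), (2, 3)]
```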
(early-reads reads)
Given a set of read txns, finds those that are too early to tell us anything; i.e. all nil.
(ensure-no-long-forks n reads)
Returns a checker error if any long forks exist.
(ensure-no-multiple-writes-to-one-key history)
Returns a checker error if we have multiple writes to one key, or nil if things are OK.
(find-forks ops)
Given a set of read ops, compares every one to ensure a total order exists. If mutually incomparable reads exist, returns the pair.
(generator n)
Generates single inserts followed by group reads, mixed with reads of other concurrent groups, just for grins. Takes a group size n.
(group-for n k)
Takes a key and returns the collection of keys for its group. Lower inclusive, upper exclusive.
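Assuming integer keys partitioned into consecutive ranges of size n (a guess consistent with "lower inclusive, upper exclusive"; the Clojure source is authoritative), a sketch:

```python
def group_for(n, k):
    """Keys in k's group: lower bound inclusive, upper bound exclusive.
    Hypothetical rendering; assumes consecutive integer-keyed groups."""
    lower = (k // n) * n
    return list(range(lower, lower + n))

print(group_for(3, 4))  # [3, 4, 5]
```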
(groups n read-ops)
Given a group size n, and a set of read ops, partitions those read operations by group. Throws if any group has the wrong size.
(late-reads reads)
Given a set of read txns, finds those that are too late to tell us anything; i.e. all 1.
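Both the early-read and late-read filters are simple predicates over a read's value map. A hypothetical Python rendering (assuming, per the docstrings, that an uninformative late read observes 1 for every key):

```python
def early_reads(reads):
    """Reads that observe nil everywhere: too early to be useful."""
    return [r for r in reads if all(v is None for v in r.values())]

def late_reads(reads):
    """Reads that observe 1 everywhere: too late to be useful."""
    return [r for r in reads if all(v == 1 for v in r.values())]

reads = [{"x": None, "y": None}, {"x": 1, "y": None}, {"x": 1, "y": 1}]
print(early_reads(reads))  # [{'x': None, 'y': None}]
print(late_reads(reads))   # [{'x': 1, 'y': 1}]
```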
(legal-txn? txn)
Checks to ensure a txn is legal.
(op-read-keys op)
Given a read op, returns the set of keys read.
(read-compare a b)
Given two maps of keys to values, a and b, returns -1 if a dominates, 0 if the two are equal, 1 if b dominates, or nil if a and b are incomparable.
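A sketch of this comparison in Python (sign convention taken from the docstring; hypothetical, not the Clojure source): since each value is written at most once, two total reads can disagree on a key only when one of them still sees nil there.

```python
def read_compare(a, b):
    """-1 if a dominates, 0 if equal, 1 if b dominates, or None if
    incomparable (each read saw a write the other missed)."""
    a_ahead = b_ahead = False
    for k in a:
        if a[k] == b[k]:
            continue
        if b[k] is None:
            a_ahead = True   # a observed a write that b has not
        elif a[k] is None:
            b_ahead = True   # b observed a write that a has not
        else:
            return None      # two different non-nil values: no order fits
    if a_ahead and b_ahead:
        return None
    return -1 if a_ahead else (1 if b_ahead else 0)

print(read_compare({"x": 1, "y": None}, {"x": None, "y": None}))  # -1
print(read_compare({"x": 1, "y": None}, {"x": None, "y": 1}))     # None
```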
(read-op->value-map op)
Takes a read operation, and converts it to a map of keys to values.
(read-txn-for n k)
Takes a group size and a key and generates a transaction reading that key's group in shuffled order.
(read-txn? txn)
Is this transaction a pure read txn?
(workload)
(workload n)
A package of a checker and generator to look for long forks. n is the group size: how many keys to check simultaneously.
(write-txn? txn)
Is this a pure write transaction?