Richardson interpolation is a special case of polynomial interpolation; knowing the ratios of successive `x` coordinates in the point sequence allows a more efficient calculation.
```clojure
(richardson-column xs col t)
(richardson-column xs col t p-seq)
(richardson-column xs col t p q)
```
Function with an identical interface to `richardson-sequence` below, except for an additional second argument `col`. `richardson-column` will return that *column* offset into the interpolation tableau instead of the first row. This will give you a sequence of nth-order Richardson accelerations taken between point `i` and the next `n` points.

As a reminder, this is the shape of the Richardson tableau:

```
p0 p01 p012 p0123 p01234
p1 p12 p123 p1234 .
p2 p23 p234 .     .
p3 p34 .    .     .
p4 .   .    .     .
```

So supplying a `col` of `1` gives a single acceleration by combining points from column 0; `2` kills two terms from the error sequence, etc.

NOTE: Given a better interface for `richardson-sequence`, this function could be merged with that function.
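As a minimal sketch of pulling later columns out of the tableau (the require path below is an assumption and may differ across versions of the library):

```clojure
;; Sketch only: the namespace path is assumed, not taken from this page.
(require '[sicmutils.numerical.interpolate.richardson :as r])

;; f(h) = (e^h - 1) / h tends to 1 as h -> 0, with error terms of order
;; h, h^2, h^3, ... matching the default p-sequence. Each input halves
;; the previous one, so t = 2.
(def xs
  (map (fn [h] (/ (- (Math/exp h) 1.0) h))
       (iterate #(/ % 2) 1.0)))

;; Column 1 pairs each point with its successor to cancel the first-order
;; error term; column 2 combines each point with the next two, etc.
(take 4 (r/richardson-column xs 1 2.0))
(take 4 (r/richardson-column xs 2 2.0))
```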
```clojure
(richardson-sequence xs t)
(richardson-sequence xs t p-sequence)
(richardson-sequence xs t p q)
```
Takes:

- `xs`: a (potentially lazy) sequence of points representing function values generated by inputs continually decreasing by a factor of `t`. For example: `[f(x), f(x/t), f(x/t^2), ...]`
- `t`: the ratio between successive inputs that generated `xs`.

Returns a new (lazy) sequence of 'accelerated' estimates, using [Richardson extrapolation](https://en.wikipedia.org/wiki/Richardson_extrapolation) to cancel out error terms in the Taylor series expansion of `f(x)` around the value to which the series is trying to converge.

Each term in the returned sequence cancels one of the error terms through a linear combination of neighboring terms in the sequence.
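For example, a rough sketch of the two-arity call (the require path is an assumption and may not match your version of the library):

```clojure
;; Sketch only: the namespace path is assumed.
(require '[sicmutils.numerical.interpolate.richardson :as r])

;; f(h) = (1 + h)^(1/h) tends to e as h -> 0, with error terms of order
;; h, h^2, h^3, ... (the default p-sequence). Inputs halve, so t = 2.
(def e-estimates
  (map (fn [h] (Math/pow (+ 1.0 h) (/ 1.0 h)))
       (iterate #(/ % 2) 1.0)))

(take 5 e-estimates)
;; => slowly converging raw estimates: 2.0, 2.25, 2.44, ...

(take 5 (r/richardson-sequence e-estimates 2.0))
;; => accelerated estimates converging on Math/E far more quickly
```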
### Custom P Sequence

The three-arity version takes one more argument:

- `p-sequence`: the orders of the error terms in the Taylor series expansion of the function that `xs` is estimating. For example, if `xs` is generated from some `f(x)` trying to approximate `A`, then `[p_1, p_2, ...]` are the exponents of the correction terms:

$$f(x) = A + B x^{p_1} + C x^{p_2} + ...$$

The two-arity version uses a default `p-sequence` of `[1, 2, 3, ...]`.
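As a hedged sketch of supplying an explicit `p-sequence` (the require path is an assumption):

```clojure
;; Sketch only: the namespace path is assumed.
(require '[sicmutils.numerical.interpolate.richardson :as r])

;; sin(h)/h converges to 1 with only even-order error terms:
;;   sin(h)/h = 1 - h^2/6 + h^4/120 - ...
(def sinc-estimates
  (map (fn [h] (/ (Math/sin h) h))
       (iterate #(/ % 2) 1.0)))

;; Supplying the true error orders [2, 4, 6, ...] lets each acceleration
;; step cancel the correct term instead of assuming orders 1, 2, 3, ...
(take 5 (r/richardson-sequence sinc-estimates 2.0 (iterate #(+ % 2) 2)))
```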
### Arithmetic Progression

The four-arity version takes `xs` and `t` as before, but instead of `p-sequence` assumes that `p-sequence` is an arithmetic progression of the form `p + iq`, customized by:

- `p`: the exponent on the highest-order error term
- `q`: the step size on the error term exponent for each new seq element
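A common case is trapezoid-rule integration, whose error expansion contains only even powers of the step size, so `p = 2` and `q = 2`. A minimal sketch under the same assumed require path (the `trapezoid` helper here is illustrative, not part of the library):

```clojure
;; Sketch only: the namespace path is assumed.
(require '[sicmutils.numerical.interpolate.richardson :as r])

(defn trapezoid
  "Trapezoid-rule estimate of the integral of f over [a, b] with n slices.
  (Hypothetical helper for this example.)"
  [f a b n]
  (let [h (/ (- b a) n)]
    (* h (+ (* 0.5 (+ (f a) (f b)))
            (reduce + (map #(f (+ a (* % h))) (range 1 n)))))))

;; Trapezoid estimates of the integral of e^x over [0, 1] with 1, 2, 4, ...
;; slices: halving the slice width each time means t = 2, and the
;; Euler-Maclaurin expansion gives error terms of order h^2, h^4, h^6, ...
;; so p = 2 and q = 2.
(def trap-estimates
  (map #(trapezoid (fn [x] (Math/exp x)) 0.0 1.0 %)
       (iterate #(* 2 %) 1)))

;; Richardson acceleration of this sequence reproduces Romberg integration;
;; the terms close in on (e - 1) ~ 1.71828 very quickly.
(take 5 (r/richardson-sequence trap-estimates 2.0 2 2))
```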
## Notes

Richardson extrapolation is a special case of polynomial extrapolation, implemented in `polynomial.cljc`.

Instead of a sequence of `xs`, if you generate an explicit series of points of the form `[x (f x)]` with successively smaller `x` values and polynomial-extrapolate it forward to `x == 0` (with, say, `(polynomial/modified-neville xs 0)`), you'll get the exact same result.

Richardson extrapolation is more efficient, since it can make assumptions about the spacing between points and pre-calculate a few quantities. See the namespace for more discussion.
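A sketch of that equivalence, assuming both require paths (which may differ across versions of the library):

```clojure
;; Sketch only: both namespace paths are assumed.
(require '[sicmutils.numerical.interpolate.richardson :as r]
         '[sicmutils.numerical.interpolate.polynomial :as polynomial])

(def hs (iterate #(/ % 2) 1.0))            ;; 1, 1/2, 1/4, ...
(defn f [h] (/ (- (Math/exp h) 1.0) h))    ;; -> 1 as h -> 0

;; Richardson extrapolation of the bare function values:
(take 4 (r/richardson-sequence (map f hs) 2.0))

;; Polynomial extrapolation of explicit [x (f x)] points forward to x = 0
;; should produce the same accelerated sequence:
(take 4 (polynomial/modified-neville (map (fn [h] [h (f h)]) hs) 0))
```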
References:

- Wikipedia: https://en.wikipedia.org/wiki/Richardson_extrapolation
- GJS, 'Abstraction in Numerical Methods': https://dspace.mit.edu/bitstream/handle/1721.1/6060/AIM-997.pdf?sequence=2