tech.compute.tensor

Tensor library used to implement the basic math abstraction in a form that is fairly easily implementable across a wide range of compute devices. This abstraction is meant to provide a language in which to implement some amount of functionality, especially useful for quickly testing out algorithmic updates or moving data to/from external libraries. As such, it has extensive support for reshape/select/transpose type operations, but only nominal base math facilities are provided by default.

There is an implicit assumption throughout this file that implementations will loop through smaller entities instead of throwing an exception if sizes don't match. This is referred to as broadcasting in numpy (https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

It does mean, however, that certain conditions that would actually be error cases are harder to detect because one has to check for remainders being zero (which potentially could cause a divide by zero error) instead of just checking for equality.

For binary operations there are four forms:

y = a*x op b*y
result = a*x op b*y
y[idx] = a*x[idx] op b*y[idx]
result[idx] = a*x[idx] op b*y[idx]

Op may be: [:+ :* :/].

In the non-indexed cases the element counts of y and x may differ, but they need to be commensurate, meaning that the smaller evenly divides the larger. When writing to result it is important that result is as large as the largest operand. This is a relaxation of the numpy broadcasting rules to allow more forms of broadcasting; the check is that the remainder is zero, not that the smaller dimension is 1.
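The relaxed commensurate rule above can be sketched in plain Clojure. This is only an illustration of the check, not the library's actual implementation; `commensurate?` is a hypothetical helper name:

```clojure
;; Hypothetical helper illustrating the relaxed broadcasting check:
;; the smaller element count must evenly divide the larger.
(defn commensurate? [ecount-a ecount-b]
  (let [larger  (max ecount-a ecount-b)
        smaller (min ecount-a ecount-b)]
    (zero? (rem larger smaller))))

(commensurate? 6 2) ;; => true  (numpy would reject this unless a dimension is 1)
(commensurate? 6 4) ;; => false (remainder is 2)
```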

In general we want as much error checking and analysis done in this file as possible, as opposed to at the implementation level (compute stream level), so that different implementations duplicate as few operations as possible and their edge cases agree to the extent possible.


->tensor

(->tensor data
          &
          {:keys [datatype unchecked? shape stream force-copy?] :as options})

Create a tensor from the data by copying the data at least once.


acceptable-tensor-buffer?

(acceptable-tensor-buffer? item)

as-2d-matrix

(as-2d-matrix tensor)

As a 2d matrix of shape [everything-else most-rapidly-changing-dimension]


as-batch-matrix

(as-batch-matrix tensor)

As a 2d matrix of shape [least-rapidly-changing-dimension everything-else]


as-column-vector

(as-column-vector tensor)

as-dense

(as-dense tensor)

As dense implies that a memcpy call would succeed as one expects. This means actually 2 conditions are checked:

  1. dense?
  2. dimensions-monotonic-increasing

as-row-vector

(as-row-vector tensor)

as-tensor

(as-tensor item)

In-place make this a tensor.


as-vector

(as-vector tensor)

assign!

(assign! dest src)

binary-op!

(binary-op! dest alpha x beta y op & [options])

Perform the operation: dest = alpha * x op beta * y. x or y may be a scalar, dest must not be. Datatypes must match.

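The non-indexed broadcasting behavior of binary-op! can be modeled in plain Clojure. This is an illustration of the semantics only, not the stream-level implementation; the shorter operand is repeated by indexing modulo its element count:

```clojure
;; Illustrative model of dest = alpha*x op beta*y with broadcasting.
;; x and y are plain vectors; the shorter one wraps around (index mod count).
(defn binary-op-sketch [alpha x beta y op]
  (let [n (max (count x) (count y))]
    (mapv (fn [idx]
            (op (* alpha (nth x (mod idx (count x))))
                (* beta  (nth y (mod idx (count y))))))
          (range n))))

(binary-op-sketch 1 [1 2 3 4] 1 [10 20] +)
;; => [11 22 13 24] — y's two elements are looped over x's four.
```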

binary-operations


clone

(clone src & {:keys [datatype] :as options})

Clone this tensor, keeping as many details as possible identical. This method does type elision, so in most cases it will return something of the same type as passed in.


clone-to-device

(clone-to-device src & {:keys [datatype] :as options})

Clone this tensor, creating a new one on the currently bound device. Does not do type elision.


columns

(columns tensor)

Returns a vector of matrices with width of 1 but large column strides.


constrain-inside-hypersphere!

(constrain-inside-hypersphere! dest mag-vec radius-length)

Like normalize, but only shorten vectors that are too long. So instead of projecting to the surface of the hypersphere like normalize does, do a <= operation.


construct-tensor

(construct-tensor dimensions buffer)

copy-to-java-type

(copy-to-java-type dest src & [options])

The options map in this case may also contain {:unchecked?}, as the dtype/copy method is used.


dense?

(dense? item)

ecount

(ecount tensor)

enable-cpu-tensors!

(enable-cpu-tensors!)

Enables a version of the tensors that run on the cpu and that use netlib blas for operations.


ensure-tensor

(ensure-tensor item)

flat-distribution

(flat-distribution & {:keys [minimum maximum] :or {minimum 0 maximum 1}})

Create a flat distribution description: a flat (equal) distribution including the minimum but excluding the maximum, i.e. [minimum, maximum).


from-prototype

(from-prototype src & {:keys [datatype shape]})

New tensor just like this one (same device/driver, etc.)


gaussian-distribution

(gaussian-distribution & {:keys [mean variance] :or {mean 0 variance 1}})

Create a Gaussian distribution description


gemm!

(gemm! C trans-a? trans-b? alpha A B beta & [options])

C = alpha * (trans-a? A) * (trans-b? B) + beta * C.

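The gemm! contract can be written out in plain Clojure for reference. This is a sketch of the math only, not the BLAS-backed implementation; matrices are modeled as vectors of row vectors:

```clojure
;; Reference model of C = alpha * A * B + beta * C for nested-vector matrices.
(defn matmul [a b]
  (let [b-cols (apply mapv vector b)]          ; transpose b to walk its columns
    (mapv (fn [row] (mapv #(reduce + (map * row %)) b-cols)) a)))

(defn gemm-sketch [alpha a b beta c]
  (mapv (fn [ab-row c-row]
          (mapv #(+ (* alpha %1) (* beta %2)) ab-row c-row))
        (matmul a b) c))

(gemm-sketch 1 [[1 2] [3 4]] [[5 6] [7 8]] 0 [[0 0] [0 0]])
;; => [[19 22] [43 50]]
```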

get-datatype

(get-datatype tensor)

in-place-reshape

(in-place-reshape tensor shape)

make-dense

(make-dense tensor)

new-tensor

(new-tensor shape
            &
            {:keys [datatype init-value stream] :or {init-value 0} :as options})

normalize!

(normalize! dest mag-vec radius-length epsilon & {:as options})

Ensure each vector of the last dimension of dest has length radius-length. Epsilon is used to avoid divide-by-zero conditions. This operation can also be seen as a projection to the surface of a hypersphere of radius radius-length.


rand!

(rand! dest distribution & {:as options})

Generate a pool of random numbers. Due to CUDA limitations, this function is limited to floating point numbers.


reinterpret-tensor

(reinterpret-tensor old-tensor new-dimensions)

Create a new tensor with new dimensions. This is like an in-place reinterpretation of the data.


rows

(rows tensor)

Returns a vector of rows as dense vectors.


scalar?

(scalar? item)

select

(select tensor & args)

Limited implementation of the core.matrix select function call. The same rules apply, except that if you pass in an array of numbers for a dimension, they must be contiguous and monotonically increasing (a proper inclusive range). This is due to limitations of the current gpu implementation and a strong reluctance to add complexity there. There must be an entry for every dimension of the tensor. See: https://cloojure.github.io/doc/core.matrix/clojure.core.matrix.html#var-select


shape

(shape tensor)

simple-tensor?

(simple-tensor? tensor)

strided?


strides

(strides tensor)

submatrix

(submatrix tensor row-start row-length col-start col-length)

Create a submatrix of the tensor. The tensor will be interpreted with width being n-cols and the rest of the dimensions squashed into n-rows.


subvector

(subvector tensor offset & {:keys [length]})

tensor->2d-shape

(tensor->2d-shape tensor)

tensor->batch-shape

(tensor->batch-shape tensor)

tensor->batch-size

(tensor->batch-size tensor)

tensor->buffer

(tensor->buffer tensor)

tensor->dimensions

(tensor->dimensions item)

tensor->string

(tensor->string tens & {:keys [print-datatype] :or {print-datatype :float64}})

tensor?

(tensor? item)

ternary-op!

(ternary-op! dest alpha x beta y gamma z op & [options])

Perform the elementwise operation dest = op(alpha * x, beta * y, gamma * z). dest must be a tensor and must not alias any other arguments. There is no accumulator version of these operations at this time, in order to keep kernel permutations low (3 backend permutations).

x, y, z can be constants or tensors.

operations: select: dest = (if (>= x 0) y z)

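The select operation above can be modeled elementwise in plain Clojure. This is an illustration of the semantics only; operands are plain vectors, and the alpha/beta/gamma scaling is folded in:

```clojure
;; Elementwise model of the :select ternary op:
;; dest[idx] = (if (>= x[idx] 0) y[idx] z[idx]), after scaling each operand.
(defn ternary-select-sketch [alpha x beta y gamma z]
  (mapv (fn [xv yv zv]
          (if (>= (* alpha xv) 0)
            (* beta yv)
            (* gamma zv)))
        x y z))

(ternary-select-sketch 1 [-1 0 2] 1 [10 20 30] 1 [100 200 300])
;; => [100 20 30] — negative x picks from z, non-negative x picks from y.
```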

ternary-operations


to-array-of-type

(to-array-of-type tensor datatype)

to-core-matrix

(to-core-matrix tensor)

to-core-matrix-vector

(to-core-matrix-vector tensor)

to-double-array

(to-double-array tensor)

to-float-array

(to-float-array tensor)

to-jvm

(to-jvm item
        &
        {:keys [datatype base-storage]
         :or {datatype :float64 base-storage :persistent-vector}})

Conversion to storage that is efficient for the jvm. Base storage is either jvm-array or persistent-vector.


to-vector

(to-vector tensor)

transpose

(transpose tensor reorder-vec)

Transpose the tensor returning a new tensor that shares the backing store but indexes into it in a different order. Dimension 0 is the leftmost (greatest) dimension:

(transpose tens (range (count (shape tens))))

is the identity operation.


typed-assign!


unary-op!

(unary-op! dest alpha x op & [options])

dest[idx] = op(alpha * x)


unary-operations


unary-reduce!

(unary-reduce! output alpha input op & [options])

Vector operations operate across the last dimension and produce one result: output = op(alpha*input). Output must be a [xyz 1] tensor while input is an [xyz n] tensor; the reduction will occur across the n axis with the results placed in output. The leading dimensions of both tensors must match.

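The shape contract above can be sketched in plain Clojure, treating the input as a vector of rows and reducing each row to a single value. This is an illustration only; `op` here is an ordinary reducing function such as `+` or `max`:

```clojure
;; Model of unary-reduce!: each row of an [xyz n] input collapses to one
;; value, giving an [xyz 1] output.  alpha scales the input before reduction.
(defn unary-reduce-sketch [alpha input op]
  (mapv (fn [row] [(reduce op (map #(* alpha %) row))]) input))

(unary-reduce-sketch 1 [[1 2 3] [4 5 6]] +)
;; => [[6] [15]] — a [2 3] input reduces to a [2 1] output.
```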

unary-reduction-operations

