Tensor library implementing a basic math abstraction that is fairly easy to implement across a wide range of compute devices. This abstraction is meant to provide a language in which to implement functionality that is especially useful for quickly testing algorithmic updates or for moving data to/from external libraries. As such, it has extensive support for reshape/select/transpose-type operations, but only nominal base math facilities are provided by default.
There is an implicit assumption throughout this file that implementations will loop through smaller entities instead of throwing an exception if sizes don't match. This is referred to as broadcasting in numpy (https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
It does mean, however, that certain conditions that would actually be error cases are harder to detect, because one has to check that remainders are zero (which itself risks a divide-by-zero error) instead of simply checking for equality.
For binary operations there are four forms:
y = a*x op b*y
result = a*x op b*y
y[idx] = a*x[idx] op b*y[idx]
result[idx] = a*x[idx] op b*y[idx]
Op may be: [:+ :* :/].
In the non-indexed cases the element counts of y and x may differ, but they must be commensurate, meaning that the smaller evenly divides the larger. When writing to result it is important that result is as large as the larger of the two. This is a relaxation of the numpy broadcasting rules to allow more forms of broadcasting: the check is that the remainder is zero, not that the smaller dimension is 1.
In general we want as much error checking and analysis done in this file as possible, rather than at the implementation (compute stream) level, so that different implementations duplicate as few operations as possible and their edge cases agree to the extent possible.
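The relaxed "commensurate" rule can be sketched in NumPy terms. This is an illustration of the semantics only, not this library's API; the helper name and the flat-element-count view are assumptions for the sketch:

```python
import numpy as np

def commensurate_binary_op(x, y, op):
    """Apply op elementwise, looping the smaller operand, under the relaxed
    rule: the smaller element count must evenly divide the larger."""
    nx, ny = x.size, y.size
    small, large = (x, y) if nx <= ny else (y, x)
    if large.size % small.size != 0:  # the remainder-is-zero check
        raise ValueError("element counts are not commensurate")
    tiled = np.tile(small.ravel(), large.size // small.size)
    a, b = (tiled, large.ravel()) if nx <= ny else (large.ravel(), tiled)
    return op(a, b)

# x loops as [1 2 1 2 1 2] against [10 20 30 40 50 60]
print(commensurate_binary_op(np.array([1, 2]), np.arange(10, 70, 10), np.add))
```

Note that numpy itself would reject this pairing (a length-2 against a length-6 vector) unless the smaller dimension were 1; the remainder check is strictly more permissive.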
(->tensor data
&
{:keys [datatype unchecked? shape stream force-copy?] :as options})
Create a tensor from the data by copying the data at least once.
(as-2d-matrix tensor)
As a 2d matrix of shape [everything-else most-rapidly-changing-dimension]
(as-batch-matrix tensor)
As a 2d matrix of shape [least-rapidly-changing-dimension everything-else]
(as-dense tensor)
As-dense implies that a memcpy call would succeed as one expects. This means two conditions are checked:
1. dense?
2. dimensions-monotonic-increasing
(as-tensor item)
In-place make this a tensor.
(binary-op! dest alpha x beta y op & [options])
Perform the operation: dest = alpha * x op beta * y. x or y may be a scalar, dest must not be. Datatypes must match.
(clone src & {:keys [datatype] :as options})
Clone this tensor, keeping as many details as possible identical. This method does type elision, so in most cases it will return something of the same type as passed in.
(clone-to-device src & {:keys [datatype] :as options})
Clone this tensor, creating a new one on the currently bound device. Does not do type elision.
(columns tensor)
Returns a vector of matrices with a width of 1 but large column strides.
(constrain-inside-hypersphere! dest mag-vec radius-length)
Like normalize, but only shorten vectors that are too long. So instead of projecting to the surface of the hypersphere like normalize does, do a <= operation.
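The shorten-only behavior can be sketched in NumPy (a hypothetical helper illustrating the semantics, not this library's API; the row-per-vector layout is an assumption):

```python
import numpy as np

def constrain_inside_hypersphere(dest, radius):
    """Scale down any row of dest whose L2 length exceeds radius;
    rows already inside the hypersphere are left untouched."""
    lengths = np.linalg.norm(dest, axis=-1, keepdims=True)
    # min with 1.0 means vectors shorter than radius keep their length
    scale = np.minimum(1.0, radius / np.maximum(lengths, 1e-12))
    return dest * scale

v = np.array([[3.0, 4.0], [0.3, 0.4]])
print(constrain_inside_hypersphere(v, 1.0))
```

The first row (length 5) is projected onto the unit sphere; the second (length 0.5) passes through unchanged, which is the "<=" behavior described above.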
(copy-to-java-type dest src & [options])
The options map in this case also contains potentially {:unchecked?} as the dtype/copy method is used.
(enable-cpu-tensors!)
Enables a version of the tensors that runs on the CPU and uses netlib BLAS for operations.
(flat-distribution & {:keys [minimum maximum] :or {minimum 0 maximum 1}})
Create a flat distribution description. Flat (equal) distribution including minimum but excluding maximum [minimum maximum)
(from-prototype src & {:keys [datatype shape]})
New tensor just like this one (same device/driver, etc.)
(gaussian-distribution & {:keys [mean variance] :or {mean 0 variance 1}})
Create a Gaussian distribution description.
(gemm! C trans-a? trans-b? alpha A B beta & [options])
C = alpha * (trans-a? A) * (trans-b? B) + beta * C.
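The gemm! contract above can be sketched with NumPy (an illustration of the math, not this library's API; the function name is hypothetical):

```python
import numpy as np

def gemm(C, trans_a, trans_b, alpha, A, B, beta):
    """C = alpha * (A or A^T) @ (B or B^T) + beta * C, matching the
    standard BLAS gemm contract the docstring describes."""
    a = A.T if trans_a else A
    b = B.T if trans_b else B
    return alpha * (a @ b) + beta * C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.eye(2)
C = np.ones((2, 2))
# alpha = 2, beta = 1: result is 2*A + ones
print(gemm(C, False, False, 2.0, A, B, 1.0))
```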
(new-tensor shape
&
{:keys [datatype init-value stream] :or {init-value 0} :as options})
(normalize! dest mag-vec radius-length epsilon & {:as options})
Ensure each vector of the last dimension of dest has length radius-length. Epsilon is used to avoid divide-by-zero conditions. This operation can also be seen as a projection to the surface of a hypersphere of radius radius-length.
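A minimal NumPy sketch of the projection described above (illustrative only, not this library's API; parameter names follow the docstring):

```python
import numpy as np

def normalize(dest, radius_length, epsilon=1e-8):
    """Scale each last-dimension vector of dest to L2 length
    radius_length; epsilon guards against divide-by-zero."""
    lengths = np.linalg.norm(dest, axis=-1, keepdims=True)
    return dest * (radius_length / (lengths + epsilon))

v = np.array([[3.0, 4.0], [0.0, 2.0]])
print(normalize(v, 1.0))  # each row projected to the unit sphere
```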
(rand! dest distribution & {:as options})
Generate a pool of random numbers. Due to CUDA limitations, this function is limited to floating-point numbers.
(reinterpret-tensor old-tensor new-dimensions)
Create a new tensor with new dimensions. This is like an in-place reinterpretation of the data.
(rows tensor)
Returns a vector of rows as dense vectors.
(select tensor & args)
Limited implementation of the core.matrix select function call. The same rules apply, *except* that if you pass in an array of numbers for a dimension then they must be contiguous and monotonically increasing (a proper inclusive range). This is due to limitations of the current gpu implementation and a strong reluctance to add complexity there. There must be an entry for every dimension of the tensor. see: https://cloojure.github.io/doc/core.matrix/clojure.core.matrix.html#var-select
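The contiguous, monotonically increasing restriction on index arrays can be sketched as a small validation step (a hypothetical helper in Python, not part of this library):

```python
def proper_inclusive_range(indices):
    """True when indices form a contiguous, monotonically increasing
    run [a, a+1, ..., b] -- the only index arrays select accepts."""
    return all(b - a == 1 for a, b in zip(indices, indices[1:]))

print(proper_inclusive_range([2, 3, 4]))  # contiguous run: accepted
print(proper_inclusive_range([2, 4, 5]))  # gap at 3: rejected
```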
(submatrix tensor row-start row-length col-start col-length)
Create a sub matrix of tensor. Tensor will be interpreted as width being n-cols and the rest of the dimensions being squashed into n-rows.
(tensor->string tens & {:keys [print-datatype] :or {print-datatype :float64}})
(ternary-op! dest alpha x beta y gamma z op & [options])
Perform the elementwise operation dest = op(alpha * x, beta * y, gamma * z). dest must be a tensor and must not alias any other arguments. There is no accumulator version of these operations at this time, in order to keep kernel permutations low (3 backend permutations).
x, y, z can be constants or tensors.
operations:
select: dest = (if (>= x 0) y z)
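The select operation above can be sketched in NumPy (illustrating the documented semantics; the function name and scalar handling are assumptions, not this library's API):

```python
import numpy as np

def ternary_select(alpha, x, beta, y, gamma, z):
    """dest = elementwise (if (>= alpha*x 0) beta*y gamma*z);
    x, y, z may be scalars or arrays."""
    return np.where(alpha * np.asarray(x) >= 0,
                    beta * np.asarray(y),
                    gamma * np.asarray(z))

x = np.array([-1.0, 0.0, 2.0])
# y and z are scalar constants here, broadcast against x
print(ternary_select(1.0, x, 1.0, 10.0, 1.0, -10.0))
```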
(to-jvm item
&
{:keys [datatype base-storage]
:or {datatype :float64 base-storage :persistent-vector}})
Conversion to storage that is efficient for the JVM. Base storage is either jvm-array or persistent-vector.
(transpose tensor reorder-vec)
Transpose the tensor returning a new tensor that shares the backing store but indexes into it in a different order. Dimension 0 is the leftmost (greatest) dimension:
(transpose tens (range (count (shape tens))))
is the identity operation.
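The reorder-vec semantics correspond to NumPy's axis-order transpose (shown here as an illustration of the idea, not this library's API):

```python
import numpy as np

t = np.arange(24).reshape(2, 3, 4)
# the identity reorder (0, 1, 2) mirrors (range (count (shape tens)))
identity = np.transpose(t, (0, 1, 2))
# swapping the two leftmost dimensions reindexes the same backing data
swapped = np.transpose(t, (1, 0, 2))
print(identity.shape, swapped.shape)
```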
(unary-op! dest alpha x op & [options])
dest[idx] = op(alpha * x[idx])
(unary-reduce! output alpha input op & [options])
Reduction operations operate across the last dimension and produce one result: output = op(alpha * input). Output must be a [xyz 1] tensor while input is an [xyz n] tensor; the reduction occurs across the n axis with the results placed in output. The leading dimensions of both tensors must match.
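The shape contract above can be sketched in NumPy (an illustration of the semantics, not this library's API; the helper name is hypothetical):

```python
import numpy as np

def unary_reduce(alpha, inp, op):
    """Reduce (alpha * inp) across the last axis, keeping a trailing
    dimension of 1 so an [xyz n] input yields an [xyz 1] output."""
    return op(alpha * inp, axis=-1, keepdims=True)

inp = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])  # shape [2 3]
print(unary_reduce(1.0, inp, np.max))  # shape [2 1]
```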