
uncomplicate.diamond.dnn

Contains type-agnostic deep neural networks (DNN) functions.

### Examples

The [Deep Learning for Programmers](https://aiprobook.com/deep-learning-for-programmers) book
contains very detailed examples and explanations. Please check it out.

The most up-to-date examples can be found in the
[comprehensive test suite](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond),
[full examples](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional),
[core tensor examples](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/tensor_test.clj),
[core DNN examples](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
[internal CPU engine tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/internal/dnnl),
and [internal GPU engine tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/internal/cudnn).

### Cheat Sheet

* Basic dense layers: [[activation]], [[inner-product]], [[fully-connected]], [[dense]].

* Convolutional layers: [[convolution]], [[convo]].

* Recurrent layers: [[rnn-op]], [[rnn]], [[abbreviate]].

* Training optimizations: [[pooling]], [[dropout-mask]], [[dropout]], [[batch-norm]].

* Combining and splitting tensors: [[concatenate]], [[conc]], [[branch]], [[split]], [[sum]].

* Training and using the network: [[cost]], [[network]], [[init!]], [[train!]], [[train-shuffle!]], [[infer!]].
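
For orientation, here is a hedged sketch of how these pieces typically fit together: a tiny regression network that is described, instantiated, trained, and used for inference. Shapes, data, and hyperparameter values are purely illustrative, and a default CPU (DNNL) backend is assumed to be available on the classpath.

```clojure
(require '[uncomplicate.commons.core :refer [with-release]]
         '[uncomplicate.neanderthal.core :refer [transfer!]]
         '[uncomplicate.diamond.tensor :refer [tensor desc]]
         '[uncomplicate.diamond.dnn :refer [network dense cost init! train! infer!]])

(with-release [x-tz (tensor [4 2] :float :nc)           ; 4 samples, 2 features
               y-tz (tensor [4 1] :float :nc)           ; 4 targets
               net-bp (network (desc [4 2] :float :nc)  ; blueprint: input descriptor + layers
                               [(dense [8] :relu)
                                (dense [1] :linear)])
               net (init! (net-bp x-tz :sgd))           ; instantiate for training with SGD
               quad-cost (cost net y-tz :quadratic)]
  (transfer! [0.3 0.5 0.7 0.9 1.1 1.3 1.5 1.7] x-tz)
  (transfer! [0.75 1.25 1.75 2.25] y-tz)
  (train! net quad-cost 100 [0.05 0])   ; hyperparameters illustrative (eta, decay); depends on the algorithm
  (infer! net x-tz))
```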

abbreviate

(abbreviate)
(abbreviate dst-type)
(abbreviate fact src-desc dst-type)

Extracts the relevant part of the RNN output sequence.

activation

(activation src-desc activ)
(activation fact src-desc activ)
(activation fact src-desc activ alpha)
(activation fact src-desc activ alpha beta)

Creates an activation blueprint, which is also a function that can create an activation
(usually non-linear) operation that can then be attached to the end of a network layer. It can be
used in many ways, from a relatively low-level structure to a fully automatic piece
in the description of a neural network.

Arguments:

- `fact`: technology-specific engine factory.
- `src-desc`: tensor descriptor (or even just a relevant part of its shape) of the activation
input and output.
- `activ`: keyword that determines the activation function (:relu, :elu, etc.).
See activation functions supported by DNNL ([[uncomplicate.diamond.internal.dnnl.constants/dnnl-eltwise-alg-kind]]),
and cuDNN ([[uncomplicate.diamond.internal.cudnn.constants/cudnn-activation-mode]]).
The keywords are the same across backends, but in general not every keyword is supported by every backend.
- `alpha`: the first scalar constant (if supported by the chosen `activ`).
- `beta`: the second scalar constant (if supported by the chosen `activ`).

Most of these arguments can be automatically inferred when this blueprint is used
in a DNN DSL in the context of a network.

See examples in [dnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
and [functional-tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).
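
As a hedged sketch, the blueprint can be created explicitly at a low level; `fact` is assumed to be a previously obtained technology-specific factory, and the descriptor shape is illustrative.

```clojure
(require '[uncomplicate.diamond.tensor :refer [desc]]
         '[uncomplicate.diamond.dnn :refer [activation]])

;; Standalone activation blueprint for a [128 10] float input/output.
(def relu-bp (activation fact (desc [128 10] :float :nc) :relu))

;; Inside a network description, the same thing usually appears only as the
;; activation keyword of a layer, e.g. (dense [10] :relu).
```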

batch-norm

(batch-norm)
(batch-norm activ)
(batch-norm activ args)
(batch-norm fact src-desc activ args)

Creates a batch normalization neural network layer blueprint, which is also a function that can
create the actual batch normalization layer either when directly called or when used in the neural
network description.

Arguments:

- `fact`: technology-specific engine factory.
- `src-desc`: tensor descriptor (or even just a relevant part of its shape) of the layer input.
- `activ`: keyword that determines the activation function (:relu, :elu, etc.). See [[activation]].
- `args`: a map of additional arguments such as `:alpha`, `:beta`, or some of the technology-specific
options supported by the underlying engine.

Most of these arguments can be automatically inferred when this blueprint is used
in a DNN DSL in the context of a network.

See examples in [dnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
and [functional tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).
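
A hedged sketch of batch normalization inside a network description; the shapes, layers, and activations are illustrative assumptions, not a recommended architecture.

```clojure
(require '[uncomplicate.diamond.tensor :refer [desc]]
         '[uncomplicate.diamond.dnn :refer [network convo batch-norm dense]])

(def bnorm-bp
  (network (desc [128 3 32 32] :float :nchw)   ; batch of 128 RGB 32x32 images
           [(convo [64] [3 3] :linear)         ; convolution without its own activation
            (batch-norm :relu)                 ; normalize, then apply ReLU
            (dense [10] :softmax)]))
```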

branch

(branch dst-descs)
(branch branch-dim dst-descs)
(branch fact src-desc branch-dim dst-descs)

Creates a branch blueprint, which is also a function that can create the actual
branching layer either when directly called or when used in the neural network description.
Branching divides the input tensor into multiple output tensors. Also see [[concatenate]] and [[split]].

Arguments:

- `fact`: technology-specific engine factory.
- `src-desc`: tensor descriptor (or even just a relevant part of its shape) of the layer input.
- `branch-dim`: the dimension where branching is going to divide the input.
- `dst-descs`: tensor descriptors (or even just the relevant parts of their shapes) of the layer outputs.

Most of these arguments can be automatically inferred when this blueprint is used
in a DNN DSL in the context of a network.

As an example, branch divides tensor shape  `[1 2 1 1]` by `branch-dim = 1`
into `[1 1 1 1]` and `[1 1 1 1]`.

See examples in [dnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
and [functional tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).
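
A hedged sketch of the standalone blueprint call matching the shape example above; the factory and source descriptor are inferred when the blueprint is used inside a network description.

```clojure
;; Divide the second dimension (branch-dim 1) of a [1 2 1 1] input into
;; two [1 1 1 1] outputs.
(branch 1 [[1 1 1 1] [1 1 1 1]])
```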

conc

(conc)
(conc conc-dim)

A simpler version of [[concatenate]]. You'll usually use this function in the network description.
Concatenation stitches multiple input tensors into one output tensor. Also see [[branch]] and [[split]].

Arguments:

- `conc-dim`: the dimension where concatenation is going to expand the output.

See examples in [dnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
and [functional tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).

concatenate

(concatenate)
(concatenate conc-dim)
(concatenate conc-dim dst-type)
(concatenate fact conc-dim src-descs)
(concatenate fact conc-dim src-descs dst-type)

Creates a concatenation blueprint, which is also a function that can create the actual
concatenation layer either when directly called or when used in the neural network description.
Concatenation stitches multiple input tensors into one output tensor. Also see [[branch]] and [[split]].

Arguments:

- `fact`: technology-specific engine factory.
- `conc-dim`: the dimension where concatenation is going to expand the output.
- `src-descs`: tensor descriptors (or even just the relevant parts of their shapes) of the layer inputs.
- `dst-type`: output type.

Most of these arguments can be automatically inferred when this blueprint is used
in a DNN DSL in the context of a network.

See examples in [dnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
and [functional tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).

convo

(convo dst-desc kernel-desc activ)
(convo dst-desc kernel-desc activ args)

A simpler version of [[convolution]]. You'll usually use this function in the network description.

Arguments:

- `dst-desc`: tensor descriptor (or even just a relevant part of its shape) of the layer output.
- `kernel-desc`: tensor descriptor (or even just the shape) of the convolution kernel.
- `activ`: keyword that determines the activation function (:relu, :elu, etc.). See [[activation]].
- `args`: a map of additional arguments such as `:alpha`, `:beta`, `:weights-desc`, or some of
the technology-specific options supported by the underlying engine.

See [[fully-connected]].
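
A hedged sketch of a small convolutional network description using [[convo]], [[pooling]], and [[dense]]; the shapes, kernel sizes, and activations are illustrative.

```clojure
(require '[uncomplicate.diamond.tensor :refer [desc]]
         '[uncomplicate.diamond.dnn :refer [network convo pooling dense]])

(def convnet-bp
  (network (desc [128 1 28 28] :float :nchw)   ; batch of 128 single-channel 28x28 images
           [(convo [32] [3 3] :relu)           ; 32 output channels, 3x3 kernel
            (pooling [2 2] :max)               ; 2x2 max pooling
            (convo [64] [3 3] :relu)
            (pooling [2 2] :max)
            (dense [10] :softmax)]))
```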

convolution

(convolution dst-desc kernel-desc activ)
(convolution dst-desc kernel-desc activ args)
(convolution fact src-desc weights-desc dst-desc activ)
(convolution fact src-desc weights-desc dst-desc activ args)

Creates a convolution neural network layer blueprint, which is also a function that
can create the actual layer either when directly called or when used in the neural network
description.

Arguments:

- `fact`: technology-specific engine factory.
- `src-desc`: tensor descriptor (or even just a relevant part of its shape) of the layer input.
- `weights-desc`: tensor descriptor (or even just a relevant part of its shape) of the weights and biases.
- `dst-desc`: tensor descriptor (or even just a relevant part of its shape) of the layer output.
- `activ`: keyword that determines the activation function (:relu, :elu, etc.). See [[activation]].
- `args`: a map of additional arguments such as `:alpha`, `:beta`, `:strides`, `:padding`, `dilation`,
or some of the technology-specific options supported by the underlying engine.

Most of these arguments can be automatically inferred when this blueprint is used
in a DNN DSL in the context of a network.

See examples in [dnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
and [functional tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).

cost

(cost layer)
(cost layer cost-kw)
(cost layer train-tz cost-kw)

Creates a cost function that goes at the output of the network and drives the minimization
of the cost with respect to the `train-tz` tensor. Currently supported `cost-kw`s are:
`:quadratic`, `:mean-absolute`, and `:crossentropy`.

Arguments:

- `layer`: the last layer in the network, which provides the estimated ('predicted') output.
- `train-tz`: the target training tensor.
- `cost-kw`: keyword that selects the cost function (see above).

See examples in [dnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
and [functional tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).
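
A hedged sketch, assuming `net` is an already instantiated training network and `y-tz` is the target tensor; the hyperparameter vector passed to [[train!]] is illustrative.

```clojure
;; Quadratic cost attached to the network's output, minimized towards y-tz.
(def quad-cost (cost net y-tz :quadratic))
(train! net quad-cost 100 [0.05 0])   ; illustrative (eta, decay)
```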

dense

(dense dst-desc activ)
(dense dst-desc activ args)

A simpler version of the [[fully-connected]] layer. You'll usually use this function in the network
description.

Arguments:

- `dst-desc`: tensor descriptor (or even just a relevant part of its shape) of the layer output.
- `activ`: keyword that determines the activation function (:relu, :elu, etc.). See [[activation]].
- `args`: a map of additional arguments such as `:alpha`, `:beta`, `:weights-desc`, or some of
the technology-specific options supported by the underlying engine.

See [[fully-connected]].

dropout

(dropout)
(dropout sd)
(dropout fact src-desc sd)

Creates a dropout neural network layer blueprint, which is also a function that can create
the actual dropout layer either when directly called or when used in the neural network description.

Arguments:

- `fact`: technology-specific engine factory.
- `src-desc`: tensor descriptor (or even just a relevant part of its shape) of the layer input.
- `sd`: standard deviation of swing around the layer weight.

Most of these arguments can be automatically inferred when this blueprint is used
in a DNN DSL in the context of a network.

See examples in [dnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
and [functional tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).
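
A hedged sketch of dropout between dense layers in a network description; the shapes are illustrative, and the zero-argument form relies on defaults and inference from the surrounding network.

```clojure
(require '[uncomplicate.diamond.tensor :refer [desc]]
         '[uncomplicate.diamond.dnn :refer [network dense dropout]])

(network (desc [128 784] :float :nc)
         [(dense [512] :relu)
          (dropout)                ; arguments inferred/defaulted from the network
          (dense [10] :softmax)])
```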

dropout-mask

(dropout-mask src-desc mask-dim)

Keeps last `mask-dim` elements from `src-desc` and pads the elements before with `1`.

fully-connected

(fully-connected dst-desc activ)
(fully-connected dst-desc activ args)
(fully-connected fact src-desc dst-desc activ)
(fully-connected fact src-desc dst-desc activ args)

Creates a dense (also known as fully connected) neural network layer blueprint, which is also a function that
can create the actual layer either when directly called or when used in the neural network description.

Arguments:

- `fact`: technology-specific engine factory.
- `src-desc`: tensor descriptor (or even just a relevant part of its shape) of the layer input.
- `dst-desc`: tensor descriptor (or even just a relevant part of its shape) of the layer output.
- `activ`: keyword that determines the activation function (:relu, :elu, etc.). See [[activation]].
- `args`: a map of additional arguments such as `:alpha`, `:beta`, `:weights-type`, or some of
the technology-specific options supported by the underlying engine.

Most of these arguments can be automatically inferred when this blueprint is used
in a DNN DSL in the context of a network.

See examples in [dnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
and [functional tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).

infer!

(infer! net in)
(infer! net in out)

Estimates the output `out` for the provided input `in`. Works with tensors, connectors, and anything
that can provide [[uncomplicate.diamond.tensor/input]] and [[uncomplicate.diamond.tensor/output]].
If `in` and `out` are bigger than the network shapes, the inference is automatically done in mini-batches.

Please also see [[train!]].

See examples in [dnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
and [functional tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).
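
A hedged sketch, assuming `net` is a trained network and `test-images` is a tensor whose batch dimension may be larger than the network's, in which case inference proceeds in mini-batches.

```clojure
;; Returns the estimated output tensor.
(def predictions (infer! net test-images))

;; An existing output tensor can also be supplied to be filled in place:
;; (infer! net test-images out-tz)
```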

init!

(init! net!)
(init! net! init-fn)

Destructively initializes the parameters (weights, biases, etc.) of the network using Xavier
initialization, which is a good default. You are, of course, free to provide a different `init-fn`,
which is a one-argument function that receives every tensor that needs initialization in each layer.
This is an automatic default for the 99% of cases that need standard stuff. If you need even more
liberal initialization, you are free to implement a function that accesses the parameters using
the internal API, and do whatever you want.

The [Deep Learning for Programmers](https://aiprobook.com/deep-learning-for-programmers) book
contains a detailed discussion about different trade-offs that should be considered when initializing
the network.
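
A hedged sketch, assuming `net-bp` and `x-tz` come from an earlier network and tensor setup as in the examples above.

```clojure
;; Default Xavier-style initialization of a freshly instantiated network.
(def net (init! (net-bp x-tz :adam)))

;; A custom init-fn is a one-argument function that receives each parameter
;; tensor that needs initialization: (init! net my-init-fn)
```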

inner-product

(inner-product src-desc dst-desc)
(inner-product fact src-desc dst-desc)
(inner-product fact src-desc dst-desc weights-type)

Creates an inner-product blueprint, which is also a function that can create an inner product
operation structure that can be the main building block of the linear part of a network layer.
If you find this description a bit cryptic, just think of the matrix multiplication operation,
generalized to more than two dimensions. Even the ND inner product can be efficiently implemented
with 2D matrix multiplication, but DNNL, cuDNN, etc. can provide even more optimization.

Arguments:

- `fact`: technology-specific engine factory.
- `src-desc`: tensor descriptor (or even just a relevant part of its shape) of the product input.
- `dst-desc`: tensor descriptor (or even just a relevant part of its shape) of the product output.
- `weights-type`: type of weights and biases.

Most of these arguments can be automatically inferred when this blueprint is used
in a DNN DSL in the context of a network.

See examples in [dnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
and [functional-tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).

network

(network layers)
(network src-desc layers)
(network fact src-desc layers)

Creates a neural network blueprint from the specific input (`src-desc`), and blueprints provided by `layers`.
This function is very flexible and tries to accommodate diverse `layers` data. Please see the test
folder for detailed examples.

The [Deep Learning for Programmers](https://aiprobook.com/deep-learning-for-programmers) book
contains very detailed examples and explanations. Please check it out.

See examples in [dnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
and [functional tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).

pooling

(pooling kernel algo)
(pooling kernel algo args)
(pooling fact src-desc kernel algo)
(pooling fact src-desc kernel algo args)

Creates a pooling neural network layer blueprint, which is also a function that can create
the actual pooling layer either when directly called or when used in the neural network description.

Arguments:

- `fact`: technology-specific engine factory.
- `src-desc`: tensor descriptor (or even just a relevant part of its shape) of the layer input.
- `kernel`: kernel shape.
- `algo`: keyword that determines the pooling algorithm (`:avg`, `:max`, etc.).
See pooling algorithms supported by DNNL ([[uncomplicate.diamond.internal.dnnl.constants/dnnl-pooling-alg-kind]]), and cuDNN ([[uncomplicate.diamond.internal.cudnn.constants/cudnn-pooling-mode]]). Keywords are the same when possible, but not all keywords are supported by all backends, in general.
- `args`: a map of additional arguments such as `:strides` or `:padding`.

Most of these arguments can be automatically inferred when this blueprint is used
in a DNN DSL in the context of a network.

See examples in [dnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
and [functional tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).

rnn

(rnn)
(rnn param)
(rnn lrs-or-dst-desc activ)
(rnn lrs activ args)
(rnn dst-desc lrs activ args)
(rnn fact src-desc dst-desc activ args)
(rnn fact src-desc dst-desc lrs activ args)

Creates a recurrent neural network (RNN) layer blueprint, which is also a function that
can create the actual RNN layer either when directly called or when used in the neural network
description.

Arguments:

- `fact`: technology-specific engine factory.
- `src-desc`: tensor descriptor (or even just a relevant part of its shape) of the layer input.
- `dst-desc`: tensor descriptor (or even just a relevant part of its shape) of the layer output.
- `lrs`: the number of recurrent layers.
- `activ`: keyword that determines the activation function (:relu, :elu, etc.) for vanilla RNN,
or the specialized RNN algorithm (`:lstm`, `:gru`, etc.) supported by DNNL ([[uncomplicate.diamond.internal.dnnl.constants/dnnl-rnn-alg-kind]]), and cuDNN ([[uncomplicate.diamond.internal.cudnn.constants/cudnn-cell-mode]]). Also see [[activation]].
- `args`: a map of additional arguments such as `:weights-type`, `:src-iter`, `:dst-iter`,
or some of the technology-specific options supported by the underlying engine.

Most of these arguments can be automatically inferred when this blueprint is used
in a DNN DSL in the context of a network.

See examples in [rnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/rnn_test.clj),
and [MasterCard functional test](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional/mastercard).
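
A hedged sketch of a recurrent network description: two stacked LSTM layers, [[abbreviate]] to keep only the relevant part of the output sequence, and a dense layer on top. The `[timesteps batch channels]` shape and the `:tnc` layout are illustrative assumptions.

```clojure
(require '[uncomplicate.diamond.tensor :refer [desc]]
         '[uncomplicate.diamond.dnn :refer [network rnn abbreviate dense]])

(def rnn-bp
  (network (desc [20 64 32] :float :tnc)   ; 20 time steps, batch 64, 32 channels
           [(rnn 2 :lstm)                  ; two stacked LSTM layers
            (abbreviate)                   ; keep the relevant tail of the sequence
            (dense [1] :linear)]))
```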

rnn-op

(rnn-op src-desc dst-desc lrs)
(rnn-op fact src-desc dst-desc lrs)
(rnn-op fact src-desc dst-desc lrs src-iter? dst-iter?)
(rnn-op fact src-desc dst-desc activ dir lrs src-iter? dst-iter?)
(rnn-op fact src-desc dst-desc weights-type activ dir lrs src-iter? dst-iter?)

The RNN operation blueprint. You are probably looking for the [[rnn]] function instead.

split

(split n)
(split fact src-desc n)

Creates a split blueprint, which is also a function that can create the actual
split layer either when directly called or when used in the neural network description.
Splitting clones the input tensor into multiple output tensors. Also see [[concatenate]] and [[branch]].

Arguments:

- `fact`: technology-specific engine factory.
- `src-desc`: tensor descriptor (or even just a relevant part of its shape) of the layer input.
- `n`: number of output clones.

As an example, split clones tensor shape `[1 2 1 1]` `n = 3` times into three tensors
shaped `[1 2 1 1]`, `[1 2 1 1]`, and `[1 2 1 1]`.


See examples in [dnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
and [functional tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).

sum

(sum)
(sum fact src-descs)

Creates a sum blueprint, which is also a function that can create the actual summing layer either
when directly called or when used in the neural network description. The summing layer will
sum all respective entries from input tensors into one output tensor of the same shape.

Arguments:

- `fact`: technology-specific engine factory.
- `src-descs`: tensor descriptors (or even just the relevant parts of their shapes) of the layer inputs.

See examples in [dnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
and [functional tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).

train!

(train! net cost! options)
(train! net cost! epochs hyperparam)
(train! net in out cost! options)
(train! net in out cost! epochs hyperparam)

This is the magic function that trains your network `net` using `cost!`, through
a number of `epochs`, using the hyperparameters `hyperparam`. It is rather flexible
and will automatically figure out how to do the mini-batches needed.

Arguments:

- `net`: the network that needs to be trained.
- `cost!`: the cost function. See [[cost]].
- `epochs`: the number of training cycles that process all training data points.
- `hyperparam`: a vector of hyperparameters relevant in the context of the chosen training algorithm
that was provided at the moment of creation of the network from its blueprint (`:sgd`, `:adam`, etc.).
Typically contains learning rate (`eta`), decay, etc. Please see the [DLFP](https://aiprobook.com/deep-learning-for-programmers)
book or some other resource for the explanation of many possible parameters connected with various learning algorithms.
- `options`: multiple `epochs` and `hyperparam` pairs can be provided in `options` sequence.
- `in`: the input of the network. Typically a tensor or a connector, but really anything that can
accept [[uncomplicate.diamond.tensor/output]]. If it's bigger than the network's input shape,
the training will be done in mini-batches.
- `out`: the output of the network. Typically a tensor or a connector, but really anything that can
accept [[uncomplicate.diamond.tensor/input]]. If it's bigger than the network's output shape,
the training will be done in mini-batches.

If you need stochastic re-shuffling of the mini-batches, please consider [[train-shuffle!]].

Explaining all that this function can do would require a whole essay, so it's best
to study the many examples provided by Deep Diamond's [functional tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).
An even better resource is the [Deep Learning for Programmers](https://aiprobook.com/deep-learning-for-programmers) book. Please check it out.
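
A hedged sketch of training directly from full-dataset tensors: `net`, `xent-cost`, `train-images`, and `train-labels` are assumed to exist, and since the tensors are larger than the network's batch shape, mini-batching happens automatically.

```clojure
;; Two epochs over all mini-batches; the hyperparameter vector is illustrative,
;; and its exact contents depend on the algorithm chosen when the network was created.
(train! net train-images train-labels xent-cost 2 [0.05 0])
```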

train-shuffle!

(train-shuffle! net in out cost! options)
(train-shuffle! net in out cost! epochs hyperparam)

Similar to [[train!]], but does stochastic reshuffling of the mini-batches.

Arguments:

- `net`: the network that needs to be trained.
- `cost!`: the cost function. See [[cost]].
- `epochs`: the number of training cycles that process all training data points.
- `hyperparam`: a vector of hyperparameters relevant in the context of the chosen training algorithm
that was provided at the moment of creation of the network from its blueprint (`:sgd`, `:adam`, etc.).
Typically contains learning rate (`eta`), decay, etc. Please see the [DLFP](https://aiprobook.com/deep-learning-for-programmers)
book or some other resource for the explanation of many possible parameters connected with various learning algorithms.
- `options`: multiple `epochs` and `hyperparam` pairs can be provided in `options` sequence.
- `in`: the input of the network. Typically a tensor or a connector, but really anything that can
accept [[uncomplicate.diamond.tensor/output]]. If it's bigger than the network's input shape,
the training will be done in mini-batches.
- `out`: the output of the network. Typically a tensor or a connector, but really anything that can
accept [[uncomplicate.diamond.tensor/input]]. If it's bigger than the network's output shape,
the training will be done in mini-batches.

See examples in [dnn-test](https://github.com/uncomplicate/deep-diamond/blob/master/test/uncomplicate/diamond/dnn_test.clj),
and [functional tests](https://github.com/uncomplicate/deep-diamond/tree/master/test/uncomplicate/diamond/functional).

xavier!

(xavier! rng)
