(classify-obv dnn obv)
Given a DNN and a single observation, return the model's prediction.
(dbn->dnn dbn classes)
Given a pretrained Deep Belief Network, use the trained weights and biases to build a Deep Neural Network.
(edn->DNN data)
The default map->DNN function generated by the defrecord doesn't produce a performant implementation (i.e. one backed by core.matrix matrices and arrays), so this function adds a small step to ensure that.
(feed-forward batch dnn)
Given an initial input batch and a DNN, feed the batch through the net, retaining the output of each layer.
(layer-error weights next-error output)
Calculate the error for a particular layer in a net, given the weights for the next layer, the error for the next layer, and the output for the current layer.
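For a sigmoid layer this is the standard backpropagation step: the next layer's error is carried back through the next layer's weights and multiplied elementwise by the derivative of the current layer's output. A plain-Python sketch (illustrative only: the library itself operates on core.matrix structures, and a sigmoid activation is assumed here):

```python
def layer_error(weights, next_error, output):
    """Backpropagate error through one sigmoid layer.

    weights: matrix for the *next* layer, one row per unit in this layer
    next_error: error vector for the next layer
    output: this layer's activations

    Sketch only; the exact derivative term depends on the layer's
    activation function (sigmoid assumed).
    """
    # carry the next layer's error back through its weights: W . next_error
    back = [sum(w * e for w, e in zip(row, next_error)) for row in weights]
    # multiply elementwise by the sigmoid derivative: out * (1 - out)
    return [b * o * (1.0 - o) for b, o in zip(back, output)]
```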
(net-output net input)
Propagate an input matrix through the network.
(prop-up input weights bias)
Given an input matrix, weight matrix, and bias vector, propagate the signal through the layer.
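A minimal sketch of this propagation for a single observation, assuming a sigmoid activation (the actual function works on core.matrix batches):

```python
import math

def prop_up(inputs, weights, bias):
    """Propagate one input vector through a layer: sigmoid(input . W + b).

    weights is given as one column per output unit; sketch only, with a
    sigmoid activation assumed.
    """
    pre = [sum(x * w for x, w in zip(inputs, col)) + b
           for col, b in zip(weights, bias)]
    return [1.0 / (1.0 + math.exp(-z)) for z in pre]
```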
(softmax->class x)
Get the predicted class from a softmax output.
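Since a softmax output is a probability distribution over classes, the predicted class is simply the index of the largest entry. A sketch in Python:

```python
def softmax_to_class(x):
    """Return the index of the largest softmax output, i.e. the predicted class."""
    return max(range(len(x)), key=lambda i: x[i])
```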
(test-dnn dnn dataset)
Test a Deep Neural Network on a dataset. Returns an error percentage.
dataset should have the label as the last entry in each observation.
(train-batch batch dnn observations learning-rate lambda)
Given a batch of training data and a DNN, update the weights and biases accordingly.
(train-dnn dnn dataset params)
Given a labeled dataset, train a DNN.
The dataset should have the label as the last element of each input vector.
params is a map that may have the following keys:
  batch-size (default 100)
  epochs (default 100)
  learning-rate (default 0.5)
  lambda (default 0.1)
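The listed defaults suggest that the caller's params map is merged over a defaults map. A Python sketch of that resolution (the names here are illustrative, not the library's):

```python
# Defaults as listed above; user-supplied params override them.
DEFAULTS = {"batch-size": 100, "epochs": 100, "learning-rate": 0.5, "lambda": 0.1}

def resolve_params(params):
    """Merge user params over the defaults."""
    return {**DEFAULTS, **params}
```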
(train-epoch net dataset observations learning-rate lambda batch-size)
Given a training dataset and a net, train it for one epoch (one pass over the dataset).
(train-top-layer dnn dataset observations batch-size epochs learning-rate lambda)
Pre-train the top logistic regression layer before moving to fine-tuning.
(update-layer weights biases input error learning-rate lambda batch-size observations)
Update the weights and biases of a layer, given the previous weights and biases, the input coming into the layer, the error for the layer, the learning rate, the regularization parameter lambda, the batch size, and the total number of observations.
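A common form of this update (assumed here, not confirmed by the source) is a mini-batch gradient step with L2 weight decay: each weight moves by the learning rate times the batch-averaged gradient plus (lambda / observations) times the weight itself. Sketched in Python:

```python
def update_layer(weights, biases, inputs, error, learning_rate, lam,
                 batch_size, observations):
    """One mini-batch gradient step with L2 weight decay (a common form;
    the library's exact update may differ in detail).

    weights[i][j]: weight from input unit i to output unit j
    inputs: list of input vectors in the batch
    error: list of error vectors, one per batch member
    """
    n_in, n_out = len(weights), len(weights[0])
    # accumulate the weight gradient over the batch: input^T . error
    grad = [[sum(x[i] * e[j] for x, e in zip(inputs, error))
             for j in range(n_out)] for i in range(n_in)]
    new_w = [[w - learning_rate * (g / batch_size + (lam / observations) * w)
              for w, g in zip(wr, gr)] for wr, gr in zip(weights, grad)]
    # bias gradient: mean error over the batch
    new_b = [b - learning_rate * sum(e[j] for e in error) / batch_size
             for j, b in enumerate(biases)]
    return new_w, new_b
```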