(add-gradient-to-proposal q-dist gradient-map weights stepsize use-adagrad)
update proposals via a gradient step
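To illustrate the Adagrad-scaled step this function performs, here is a minimal Python sketch (not the library's Clojure implementation): each parameter's step is divided by the square root of its accumulated squared gradients. The function and argument names are hypothetical.

```python
import math

def adagrad_step(params, grad, hist, stepsize=1.0, eps=1e-8):
    # One Adagrad-scaled gradient-ascent step (sketch).
    # params, grad, hist are parallel lists of floats; hist accumulates
    # squared gradients per parameter. Returns (new_params, new_hist).
    new_hist = [h + g * g for h, g in zip(hist, grad)]
    new_params = [p + stepsize * g / (math.sqrt(h) + eps)
                  for p, g, h in zip(params, grad, new_hist)]
    return new_params, new_hist
```

When `use-adagrad` is off, the update would reduce to a plain `p + stepsize * g` step.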
(assoc-gradient state address proposal value)
store in state the gradient value at an address
(get-address smp)
returns a unique identifier for a sample checkpoint, along with the updated state
(get-or-create-q! state address prior)
get the learned proposal at an address; at new addresses, initialize the proposal with the prior
(get-variational state)
return the learned approximating distributions at each address
(ignore? state dist)
determine whether to learn an approximation for a given distribution object
(merge-q! state proposals)
force specific approximating distributions: used for initialization, or to supply learned proposals to an importance sampler
(optimal-scaling f g)
given two vectors f = w·g and g, estimate Cov(f, g)/Var(g); this gives the optimal scaling for the variance-reduction term
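The estimate this docstring describes can be sketched directly from sample moments. A hedged Python illustration (plain lists, hypothetical names; the library's version operates on its own vector types):

```python
def optimal_scaling(f, g):
    # Estimate Cov(f, g) / Var(g) from paired samples (sketch).
    # This is the least-squares slope of f on g, i.e. the scaling
    # that minimizes the variance of f - scale * g.
    n = len(f)
    mf = sum(f) / n
    mg = sum(g) / n
    cov = sum((fi - mf) * (gi - mg) for fi, gi in zip(f, g)) / n
    var = sum((gi - mg) ** 2 for gi in g) / n
    return cov / var
```

If f really is w·g elementwise with roughly constant w, the returned scale recovers that constant.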
(proposal-gradient gradient-samples log-weights)
compute the gradient (and appropriate weightings) at each address
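A simplified sketch, in Python, of what a per-address weighted gradient estimate can look like: importance weights are exponentiated from log weights with max-subtraction for numerical stability, normalized, and used to average each address's gradient samples. Gradients are scalars here for brevity; names and shapes are assumptions, not the library's API.

```python
import math

def proposal_gradient(gradient_samples, log_weights):
    # gradient_samples: one dict per particle, mapping address -> gradient.
    # log_weights: unnormalized log importance weight per particle.
    # Returns a dict mapping address -> weighted average gradient.
    m = max(log_weights)
    w = [math.exp(lw - m) for lw in log_weights]  # stabilized weights
    z = sum(w)
    w = [wi / z for wi in w]                      # normalized weights
    out = {}
    for wi, grads in zip(w, gradient_samples):
        for addr, g in grads.items():
            out[addr] = out.get(addr, 0.0) + wi * g
    return out
```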
(update-proposals! particles stepsize use-adagrad)
outer loop of the gradient-update procedure; updates the proposal-distribution atom
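To show how such an outer loop fits together, here is a self-contained Python sketch under simplifying assumptions (scalar parameters, plain gradient step, no Adagrad, no atom): aggregate normalized-weight gradients across particles per address, then step each proposal parameter. All names are hypothetical.

```python
import math

def update_proposals(proposals, particles, stepsize):
    # proposals: address -> parameter (a single float, for simplicity)
    # particles: list of (grads, log_weight), grads maps address -> float
    # Returns a new address -> parameter map after one gradient step.
    lws = [lw for _, lw in particles]
    m = max(lws)
    ws = [math.exp(lw - m) for lw in lws]
    z = sum(ws)
    agg = {}
    for (grads, _), wi in zip(particles, ws):
        for addr, g in grads.items():
            agg[addr] = agg.get(addr, 0.0) + (wi / z) * g
    return {addr: p + stepsize * agg.get(addr, 0.0)
            for addr, p in proposals.items()}
```

The real function mutates shared state (a Clojure atom holding the proposals) rather than returning a new map.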