
uncomplicate.neanderthal.auxil

Contains type-agnostic auxiliary functions roughly corresponding to the functionality
usually defined in auxiliary LAPACK (sorting etc.), or useful functions that may not commonly be
implemented by BLAS engines, but are helpful vectorized routines. This namespace works similarly
to [[uncomplicate.neanderthal.core]]; see there for more details about the intended use.

### Cheat Sheet

- Sorting:  [[sort!]], [[sort+!]], [[sort-!]].
- Interchanges: [[swap-rows!]], [[swap-rows]], [[swap-cols!]], [[swap-cols]].
- Permutations: [[permute-rows!]], [[permute-rows]], [[permute-cols!]], [[permute-cols]].
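The in-place sorting and interchange functions above can be sketched as follows. This is a minimal, hedged example assuming the MKL-backed native factory is on the classpath and that `swap-rows!` takes the two row indices to interchange:

```clojure
(require '[uncomplicate.neanderthal.core :refer [entry]]
         '[uncomplicate.neanderthal.native :refer [dv dge]]
         '[uncomplicate.neanderthal.auxil :refer [sort+! swap-rows!]])

;; Sort a vector in place, ascending.
(def x (dv [3 1 2]))
(sort+! x)
(entry x 0)            ;; => 1.0

;; Interchange two rows of a matrix in place
;; (dge is column-major, so this is [[1 3] [2 4]]).
(def a (dge 2 2 [1 2 3 4]))
(swap-rows! a 0 1)
```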

uncomplicate.neanderthal.block

Convenient functions for accessing the memory block that holds a vector space's data
and inquiring about its structure. This is less useful for application code, which should
use linear algebra functions instead of snooping inside, but is indispensable in code that
extends Neanderthal's functionality.
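A small sketch of such structural inquiries; `stride` and `contiguous?` are assumed to live in this namespace, and the exact set of accessors may differ by version:

```clojure
(require '[uncomplicate.neanderthal.native :refer [dv]]
         '[uncomplicate.neanderthal.block :as block])

(def x (dv [1 2 3 4]))
(block/stride x)       ;; distance between consecutive entries; 1 for a dense vector
(block/contiguous? x)  ;; whether the entries occupy one uninterrupted memory region
```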

uncomplicate.neanderthal.core

Contains type-agnostic linear algebraic functions roughly corresponding to functionality
defined in BLAS levels 1, 2, and 3, and functions that create and work with various kinds of vectors and matrices.
Typically, you would want to require this namespace regardless of the actual type
(real, complex, CPU, GPU, pure Java etc.) of the vectors and matrices that you use.

In cases when you need to repeatedly call a function from this namespace that accesses
individual entries, and the entries are primitive, it is better to use a primitive version
of the function from [[uncomplicate.neanderthal.real]] or [[uncomplicate.neanderthal.integer]]
namespaces. Constructor functions for different specialized types (native, GPU, pure Java) are
in respective specialized namespaces ([[uncomplicate.neanderthal.native]], [[uncomplicate.neanderthal.cuda]], etc).

Please take care to use only vectors and matrices of the same type in one call of a
linear algebra operation. Compute operations typically (and on purpose!) do not support arguments
of mixed types. For example, you cannot call the [[dot]] function with one double vector (dv) and
one float vector (fv), or with one vector in CPU memory and one in GPU memory.
If you try, an `ex-info` is thrown. You can still use those different types side by side
and transfer the data between them, though.
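For instance, a dot product of two double vectors works, while mixing precisions throws. A minimal sketch, assuming the native backend is available:

```clojure
(require '[uncomplicate.neanderthal.core :refer [dot]]
         '[uncomplicate.neanderthal.native :refer [dv fv]])

(dot (dv 1 2 3) (dv 1 2 3))      ;; => 14.0

;; (dot (dv 1 2 3) (fv 1 2 3))   ;; would throw an ex-info: mixed types
```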

### How to use

    (ns test
      (:require [uncomplicate.neanderthal core native]))

    (ns test
      (:require [uncomplicate.neanderthal core native cuda]))

### Examples

The best and most accurate examples can be found in the
[comprehensive test suite](https://github.com/uncomplicate/neanderthal/tree/master/test/uncomplicate/neanderthal):
see [real-test](https://github.com/uncomplicate/neanderthal/blob/master/test/uncomplicate/neanderthal/real_test.clj),
[block-test](https://github.com/uncomplicate/neanderthal/blob/master/test/uncomplicate/neanderthal/block_test.clj),
[mkl-test](https://github.com/uncomplicate/neanderthal/blob/master/test/uncomplicate/neanderthal/mkl_test.clj),
[cublas-test](https://github.com/uncomplicate/neanderthal/blob/master/test/uncomplicate/neanderthal/cublas_test.clj),
and [device-test](https://github.com/uncomplicate/neanderthal/blob/master/test/uncomplicate/neanderthal/device_test.clj).
Also, there are tutorial test examples [here](https://github.com/uncomplicate/neanderthal/tree/master/test/uncomplicate/neanderthal/examples),
the tutorials at [the Neanderthal web site](http://neanderthal.uncomplicate.org),
and [on my blog dragan.rocks](http://dragan.rocks).

For comprehensive real-world examples, with detailed tutorials and guides, see the
[Interactive Programming for Artificial Intelligence book series](https://aiprobook.com).

### Cheat Sheet

Most Neanderthal function names are short, and cryptic at first sight. But there is a very good
reason for that! Please see [Naming conventions for BLAS routines](https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-2/naming-conventions-for-blas-routines.html). The [Linear Algebra for Programmers](https://aiprobook.com/numerical-linear-algebra-for-programmers/)
is also a good tutorial-oriented resource that can be very helpful for understanding all that madness.

* Create: [[vctr]], [[view]], [[view-vctr]], [[ge]], [[view-ge]], [[tr]], [[view-tr]], [[sy]], [[view-sy]],
[[gb]], [[tb]], [[sb]], [[tp]], [[sp]], [[gd]], [[gt]], [[dt]], [[st]], [[raw]], [[zero]].

* Move data around: [[transfer!]], [[transfer]], [[native]], [[copy!]], [[copy]], [[swp!]].

* Clean up!: `with-release`, `let-release`, and `release` from the `uncomplicate.commons.core` namespace.

* Vector: [[vctr?]], [[dim]], [[subvector]], [[entry]], [[entry!]], [[alter!]].

* Meta: [[vspace?]], [[vctr?]], [[matrix?]], [[symmetric?]], [[triangular?]], [[matrix-type?]], [[compatible?]].

* Matrix: [[matrix?]], [[ge]], [[view-ge]], [[tr]], [[view-tr]], [[sy]], [[view-sy]],
[[gb]], [[tb]], [[sb]], [[tp]], [[sp]], [[gd]], [[gt]], [[dt]], [[st]], [[mrows]], [[ncols]],
[[row]], [[col]], [[dia]], [[dias]], [[cols]], [[rows]], [[submatrix]], [[trans]], [[trans!]], [[entry]],
[[entry!]], [[alter!]], [[dim]], [[subband]].

* Change: [[trans!]], [[entry!]], [[alter!]].

* Help: `info` from the `uncomplicate.commons.core` namespace.

* [Monadic functions](http://fluokitten.uncomplicate.org): `fmap!`, `fmap`, `fold`, `foldmap`,
`pure`, `op`, `id`, from the `uncomplicate.fluokitten.core` namespace.

* [Compute level 1](https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-1/blas-level-1-routines.html):
[[dot]], [[nrm1]], [[nrm2]], [[nrmi]], [[asum]],
[[iamax]], [[iamin]], [[amax]], [[imax]], [[imin]], [[swp!]], [[copy!]], [[copy]], [[scal!]],
[[scal]], [[rot!]], [[rotg!]], [[rotm!]], [[rotmg!]], [[axpy!]], [[axpy]], [[ax]], [[xpy]],
[[axpby!]], [[sum]].

* [Compute level 2](https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-1/blas-level-2-routines.html):
[[mv!]], [[mv]], [[rk!]], [[rk]].

* [Compute level 3](https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-1/blas-level-3-routines.html):
[[mm!]], [[mm]], [[mmt]].
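One pure (non-destructive) function from each compute level, as a hedged sketch using the native double-precision constructors:

```clojure
(require '[uncomplicate.neanderthal.core :refer [axpy mv mm entry]]
         '[uncomplicate.neanderthal.native :refer [dv dge]])

;; Level 1: 2x + y
(axpy 2.0 (dv 1 2 3) (dv 10 10 10))   ;; => vector [12 14 16]

;; Level 2: matrix-vector product
;; (dge is column-major, so this is [[1 3] [2 4]])
(def a (dge 2 2 [1 2 3 4]))
(mv a (dv 1 1))                       ;; => vector [4 6]

;; Level 3: matrix-matrix product
(mm a a)
```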


uncomplicate.neanderthal.integer

Contains type-specific primitive integer functions, equivalents of functions from the
[[uncomplicate.neanderthal.core]] namespace. Typically, you would want to require this namespace
if you need to compute matrices that contain longs and/or ints. Please keep in mind that most of the
higher-level BLAS functions are not supported for integers. For example, matrix multiplication
is not supported for integers, nor does it make much sense in the general case
for big matrices.

### Example

    (ns test
      (:require [uncomplicate.neanderthal
                [core :refer :all :exclude [entry entry! dot nrm2 asum sum]]
                [integer :refer :all]]))
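Continuing the example above, a hedged sketch of primitive integer access; the `lv` (long vector) constructor is assumed to be in the native namespace:

```clojure
(require '[uncomplicate.neanderthal.integer :refer [entry sum]]
         '[uncomplicate.neanderthal.native :refer [lv]])

(def x (lv 1 2 3))
(sum x)       ;; => 6
(entry x 2)   ;; => 3, returned as a primitive long
```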

uncomplicate.neanderthal.linalg

Contains type-agnostic linear algebraic functions roughly corresponding to the functionality
usually defined in LAPACK (factorizations, solvers, etc.). This namespace works similarly
to the [[uncomplicate.neanderthal.core]] namespace; see there for more details about the intended use.

### Cheat Sheet

- Linear equations and LU factorization: [[trf!]], [[trf]], [[ptrf!]], [[tri!]], [[tri]], [[trs!]],
[[trs]], [[sv!]], [[sv]], [[psv!]], [[psv]], [[con]], [[det]].
- Orthogonal factorizations: [[qrf!]], [[qrf]], [[qrfp!]], [[qrfp]], [[qpf!]], [[qpf]],
[[rqf!]], [[rqf]], [[qlf!]], [[qlf]], [[lqf!]], [[lqf]], [[org!]], [[org]].
- Linear least squares: [[ls!]], [[ls]], [[lse!]], [[lse]], [[gls!]], [[gls]].
- Eigen decomposition: [[ev!]], [[es!]].
- Singular value decomposition (SVD): [[svd!]], [[svd]].
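Solving a linear system A x = b with the LU-based driver [[sv]] can be sketched as follows; this is a hedged example with a deliberately simple diagonal matrix, assuming the native backend:

```clojure
(require '[uncomplicate.neanderthal.core :refer [entry]]
         '[uncomplicate.neanderthal.native :refer [dge]]
         '[uncomplicate.neanderthal.linalg :refer [sv]])

(def a (dge 2 2 [2 0 0 4]))   ;; diag(2, 4)
(def b (dge 2 1 [2 8]))       ;; right-hand side as a 2x1 matrix
(def x (sv a b))              ;; solves a * x = b
(entry x 0 0)                 ;; => 1.0
(entry x 1 0)                 ;; => 2.0
```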

### Also see:

- [LAPACK routines](https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-2/lapack-routines.html)
- [Linear Equation Computational Routines](https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-2/lapack-linear-equation-computational-routines.html)
- [Linear Equation Driver Routines](https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-2/lapack-linear-equation-driver-routines.html)
- [Orthogonal Factorizations (Q, R, L)](https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-2/orthogonal-lapack-computational-routines.html)
- [Singular Value Decomposition](https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-2/singular-value-decomposition-lapack-computation.html)
- [Symmetric Eigenvalue Problems](https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-2/symmetric-eigenvalue-problems-lapack-computation.html)
- Other LAPACK documentation, as needed.

uncomplicate.neanderthal.math

Primitive floating point mathematical functions commonly found in Math, FastMath, and the likes.
Vectorized counterparts can be found in the [[vect-math]] namespace.
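A brief sketch of these primitive scalar functions; `sqrt`, `pow`, and `exp` are assumed names, matching the usual Math/FastMath vocabulary:

```clojure
(require '[uncomplicate.neanderthal.math :refer [sqrt pow exp]])

(sqrt 16.0)     ;; => 4.0
(pow 2.0 10.0)  ;; => 1024.0
(exp 0.0)       ;; => 1.0
```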

uncomplicate.neanderthal.native

Specialized constructors that use the native CPU engine by default. A convenience over the engine-agnostic
[[uncomplicate.neanderthal.core]] functions. The default engine is backed by Intel's MKL on Linux and Windows,
and the OS-specific binaries are provided by JavaCPP's MKL, OpenBLAS, or Accelerate presets.
Alternative implementations are allowed, and can either be referred to explicitly
(see how `mkl-float` is used as an example), or by binding [[native-float]] and the likes
to your preferred implementation.
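The most common native constructors in use, as a brief sketch:

```clojure
(require '[uncomplicate.neanderthal.core :refer [mrows ncols]]
         '[uncomplicate.neanderthal.native :refer [dv fv dge]])

(dv 1 2 3)            ;; double-precision vector
(fv 1 2 3)            ;; single-precision vector
(dge 2 3 (range 6))   ;; 2x3 double matrix, filled column by column
```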

uncomplicate.neanderthal.random

Polymorphic functions that populate Neanderthal's data structures with random numbers
drawn from common distributions.

[[rng-state]], [[rand-normal!]], [[rand-uniform!]]
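A hedged sketch of in-place random population; the single-argument arities are assumed (uniform on [0, 1) and standard normal), and arities taking an explicit [[rng-state]] and distribution parameters may also exist:

```clojure
(require '[uncomplicate.neanderthal.core :refer [dim]]
         '[uncomplicate.neanderthal.native :refer [dv]]
         '[uncomplicate.neanderthal.random :refer [rand-uniform! rand-normal!]])

(def x (rand-uniform! (dv 5)))   ;; fills x with uniform draws, in place
(def y (rand-normal! (dv 5)))    ;; fills y with standard normal draws, in place
```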

uncomplicate.neanderthal.real

Contains type-specific primitive floating point functions, equivalents of functions from the
[[uncomplicate.neanderthal.core]] namespace. Typically, you would want to require this namespace
if you need to compute real matrices containing doubles and/or floats.

### Example

    (ns test
      (:require [uncomplicate.neanderthal
                [core :refer :all :exclude [entry entry! dot nrm2 asum sum]]
                [real :refer :all]]))
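Continuing the example above, a brief sketch of the primitive variants, which avoid boxing in tight loops:

```clojure
(require '[uncomplicate.neanderthal.real :refer [entry dot]]
         '[uncomplicate.neanderthal.native :refer [dv]])

(def x (dv 1 2 3))
(entry x 1)    ;; => 2.0, returned as a primitive double
(dot x x)      ;; => 14.0
```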

uncomplicate.neanderthal.sparse

Functions for creating sparse vectors and matrices. Sparse vectors or matrices
are structures in which most elements are zeroes. Therefore, it makes sense to
store only the few non-zero entries. There is a performance penalty to pay for these
entries, in terms of both storage and computation, as they are stored using one of many possible
compression schemes, but if there is only a small fraction of non-zero elements compared to
zero elements, that penalty is offset by the fact that only a fraction of computations
need to be done.

The compressed sparse storage schemes used here rely on dense vectors to store the non-zero
entries and the appropriate indices. Therefore, most operations can be offloaded to these objects
if needed. Neanderthal core functions are supported where it makes sense and where it is
technically possible.

Please see examples in [[uncomplicate.neanderthal.sparse-test]] and [Intel documentation](https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-1/sparse-blas-level-2-and-level-3-routines-001.html).

### Cheat Sheet

- Sparse vectors: [[csv]], [[csv?]].
- Sparse matrices: [[csr]], [[csr?]].
