This document explains how to build MXNet from source code.
Building from source follows this general two-step flow: first build the shared library from the MXNet C++ source code (`libmxnet.so`), then install your preferred language binding. Use the following links to jump to the different sections of this guide.

Detailed instructions are provided per operating system. Each of those guides also covers how to install the specific language bindings you require. You may jump to those, but it is recommended that you continue reading to understand the more general "build from source" options.
```bash
git clone --recursive https://github.com/apache/incubator-mxnet mxnet
cd mxnet
```
The following sections will help you decide which specific prerequisites you need to install.
It is useful to consider your math library selection before your other prerequisites. MXNet relies on the BLAS (Basic Linear Algebra Subprograms) library for numerical computations, which can be extended with LAPACK (Linear Algebra Package), an additional set of mathematical functions.
MXNet supports multiple mathematical backends for computations on the CPU:
If more than one of these libraries is found, the default order of choice runs from the most performant (recommended) backend to the least performant. The following lists show this order by library and cmake switch.
For desktop platforms (x86_64):

1. MKL-DNN | `USE_MKLDNN`
2. MKL | `USE_MKL_IF_AVAILABLE`
3. MKLML | `USE_MKLML`
4. Apple Accelerate | `USE_APPLE_ACCELERATE_IF_AVAILABLE` | Mac only
5. BLAS | `BLAS` | Options: Atlas, Open, MKL, Apple

Note: If `USE_MKL_IF_AVAILABLE` is set to False, then MKLML and MKL-DNN will be disabled as well, for configuration backwards compatibility.
For embedded platforms (all others, and when cross-compiling):

1. BLAS | `BLAS` | Options: Atlas, Open, MKL, Apple

You can set the BLAS library explicitly by setting the `BLAS` variable to one of these options. See the `cmake/ChooseBLAS.cmake` file for the details.
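The backend precedence and the MKL note above can be sketched as a simple fallback chain. This is an illustration only, not the actual logic in `cmake/ChooseBLAS.cmake`; each variable stands in for the corresponding cmake switch:

```shell
# Illustrative only: mimic the desktop backend precedence described above.
USE_MKLDNN=${USE_MKLDNN:-1}
USE_MKL_IF_AVAILABLE=${USE_MKL_IF_AVAILABLE:-1}
USE_MKLML=${USE_MKLML:-1}

if [ "$USE_MKL_IF_AVAILABLE" = 0 ]; then
  # Per the note above: disabling MKL also disables MKLML and MKL-DNN.
  USE_MKLDNN=0
  USE_MKLML=0
fi

if [ "$USE_MKLDNN" = 1 ]; then
  BACKEND=mkldnn
elif [ "$USE_MKL_IF_AVAILABLE" = 1 ]; then
  BACKEND=mkl
elif [ "$USE_MKLML" = 1 ]; then
  BACKEND=mklml
else
  BACKEND=openblas
fi
echo "selected backend: $BACKEND"
```

With the defaults shown, the chain selects MKL-DNN first, matching the recommended order.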
Intel's MKL (Math Kernel Library) is one of the most powerful math libraries: https://software.intel.com/en-us/mkl

It comes in the following flavors:
MKL is a complete math library, containing all the functionality found in ATLAS, OpenBlas and LAPACK. It is free under community support licensing (https://software.intel.com/en-us/articles/free-mkl), but needs to be downloaded and installed manually.
MKLML is a subset of MKL. It contains a smaller number of functions, to reduce the size of the download and the number of dynamic libraries the user needs.
Since the full MKL library is almost always faster than any other BLAS library, it is turned on by default; however, it needs to be downloaded and installed manually before running the cmake configuration. Register and download it on the Intel performance libraries website.
Note: MKL is supported only for desktop builds and the framework itself supports the following hardware:
If you have a different processor you can still try to use MKL, but performance results are unpredictable.
If you want to run MXNet with GPUs, you must install NVIDIA CUDA and cuDNN.
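Before configuring the build, it can help to confirm the CUDA toolkit is visible. A minimal check, assuming the standard `nvcc` compiler driver is on your `PATH` (this does not validate cuDNN, which is a separate download):

```shell
# Report whether the CUDA compiler driver is available.
if command -v nvcc >/dev/null 2>&1; then
  CUDA_STATUS=present
  nvcc --version
else
  CUDA_STATUS=absent
  echo "nvcc not found; install the CUDA toolkit and add it to PATH"
fi
```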
These might be optional, but they are typically desirable as they extend or enhance MXNet's functionality. More information on turning these features on or off is found in the following build configurations section.
There is a configuration file for make, `make/config.mk`, that contains all the compilation options. You can edit it and then run `make`, or use `cmake`. `cmake` is recommended for building MXNet (and is required to build with MKLDNN); however, you may use `make` instead.
Ensure that the NCCL installation directory contains the `lib` and `include` folders. Modify the `config.mk` file with the following, in addition to the CUDA related options:

```bash
echo "USE_NCCL=1" >> make/config.mk
echo "USE_NCCL_PATH=path-to-nccl-installation-folder" >> make/config.mk
cp make/config.mk .
make -j"$(nproc)"
```

To validate the NCCL build, remove the following line from the `test_nccl.py` file at `incubator-mxnet/tests/python/gpu/test_nccl.py`:

```python
@unittest.skip("Test requires NCCL library installed and enabled during build")
```

Then run the test:

```bash
nosetests --verbose tests/python/gpu/test_nccl.py
```
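One way to disable the skip decorator is a quick `sed` edit. In this sketch, the `mkdir`/`printf` lines only create a stand-in file so the snippet runs outside an MXNet checkout; omit them in a real clone:

```shell
# Comment out the @unittest.skip decorator so the NCCL test is collected.
TEST=tests/python/gpu/test_nccl.py
mkdir -p "$(dirname "$TEST")"
[ -f "$TEST" ] || printf '@unittest.skip("Test requires NCCL library installed and enabled during build")\n' > "$TEST"
# sed -i.bak edits in place and keeps a .bak backup (works with GNU and BSD sed).
sed -i.bak 's/^@unittest\.skip/# @unittest.skip/' "$TEST"
grep '@unittest.skip' "$TEST"
```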
To get the best performance out of NCCL, set the environment variable `NCCL_LAUNCH_MODE` to `PARALLEL` when using NCCL version 2.1 or newer.
To build the C++ package, set `USE_CPP_PACKAGE=1` when you run `make` or `cmake`.

The `-j` option runs multiple jobs against multi-core CPUs. For example, you can specify using all cores on Linux as follows:

```bash
cmake -j$(nproc)
```
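`nproc` is a Linux utility and is not available on macOS; a portable sketch for picking the job count (the fallback of 1 is an assumption for exotic platforms):

```shell
# Detect core count: nproc on Linux, sysctl on macOS, else fall back to 1.
CORES=$(nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 1)
echo "building with $CORES parallel jobs"
# then e.g.: make -j"$CORES"
```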
* Build MXNet with `cmake` and install with MKL DNN, GPU, and OpenCV support:

```bash
cmake -j USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1 USE_MKLDNN=1
```

* Build MXNet with `cmake` and install with GPU and OpenCV support, using OpenBLAS:

```bash
cmake -j BLAS=open USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1
```

* Build MXNet with `cmake` and install with MKL DNN and OpenCV support:

```bash
cmake -j USE_CUDA=0 USE_MKLDNN=1
```

* Build MXNet with `cmake` and install with OpenBLAS and OpenCV support:

```bash
cmake -j USE_CUDA=0 BLAS=open
```

* Build MXNet without OpenCV support:

```bash
cmake USE_OPENCV=0
```

* Build MXNet on macOS with `xcode` (OpenMP is disabled because it is not supported by the Apple version of Clang):

```bash
cmake -j BLAS=apple USE_OPENCV=0 USE_OPENMP=0
```

* To use OpenMP on macOS you need to install `llvm` (the one provided by Apple does not support OpenMP):

```bash
brew install llvm
cmake -j BLAS=apple USE_OPENMP=1
```
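When using the Homebrew llvm, you typically also need to point the build at its clang via the standard `CC`/`CXX` environment variables. A hedged sketch; the Homebrew prefix varies by machine, and `/usr/local/opt/llvm` is only an assumed fallback:

```shell
# Resolve the Homebrew llvm prefix, falling back to a common default.
LLVM_PREFIX=$(brew --prefix llvm 2>/dev/null || echo /usr/local/opt/llvm)
export CC="$LLVM_PREFIX/bin/clang"
export CXX="$LLVM_PREFIX/bin/clang++"
echo "CC=$CC"
```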
After building MXNet's shared library, you can install other language bindings.
NOTE: The C++ API binding must be built when you build MXNet from source. See Build MXNet with C++.
The following table provides links to each language binding by operating system:

| | Ubuntu | macOS | Windows |
| --- | --- | --- | --- |
| Python | Ubuntu guide | OSX guide | Windows guide |
| C++ | C++ guide | C++ guide | C++ guide |
| Clojure | Clojure guide | Clojure guide | n/a |
| Julia | Ubuntu guide | OSX guide | Windows guide |
| Perl | Ubuntu guide | OSX guide | n/a |
| R | Ubuntu guide | OSX guide | Windows guide |
| Scala | Scala guide | Scala guide | n/a |
| Java | Java guide | Java guide | n/a |
Contributors: Aaron Markham, Sergey Kolychev, Anirudh, Andrew Ayres, Yao Wang, Sheng Zha, Alexander Zai, Tao Lv, Amol Lele & thinksanky