Build MXNet from Source

This document explains how to build MXNet from source code.

Overview

Building from source follows a general two-step flow: build the shared library, then install your preferred language binding(s). Use the following links to jump to the different sections of this guide.

  1. Build the MXNet shared library, libmxnet.so.
  2. Install the language API binding(s) you would like to use for MXNet. MXNet's newest and most popular API is Gluon, which is built into the Python binding. If Python isn't your preference, MXNet supports several other language APIs, listed in the table at the end of this guide.

Build Instructions by Operating System

Detailed instructions are provided per operating system. Each of these guides also covers how to install the specific language bindings you require. You may jump straight to those, but it is recommended that you continue reading to understand the more general build-from-source options.


Clone the MXNet Project

  1. Clone or fork the MXNet project.
git clone --recursive https://github.com/apache/incubator-mxnet mxnet
cd mxnet
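If you already cloned without --recursive, the submodules (such as MKL-DNN under 3rdparty/) can still be fetched afterwards with standard git:

```shell
# Fetch all submodules for an existing clone
# (equivalent to having cloned with --recursive)
git submodule update --init --recursive
```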

Prerequisites

The following sections will help you decide which specific prerequisites you need to install.

Math Library Selection

It is useful to consider your math library selection prior to your other prerequisites. MXNet relies on a BLAS (Basic Linear Algebra Subprograms) library for numerical computations. BLAS can be extended with LAPACK (Linear Algebra Package), an additional set of mathematical functions.

MXNet supports multiple mathematical backends for computations on the CPU:

When multiple libraries are found, they are chosen in a default order that runs from the most recommended (and typically most performant) backend to the least performant. The following lists show this order by library and cmake switch.

For desktop platforms (x86_64):

  1. MKL-DNN (submodule) | USE_MKLDNN
  2. MKL | USE_MKL_IF_AVAILABLE
  3. MKLML (downloaded) | USE_MKLML
  4. Apple Accelerate | USE_APPLE_ACCELERATE_IF_AVAILABLE | Mac only
  5. OpenBLAS | BLAS | Options: Atlas, Open, MKL, Apple

Note: If USE_MKL_IF_AVAILABLE is set to False, then MKLML and MKL-DNN will be disabled as well, for backwards compatibility of the configuration.

For embedded platforms (all others, and when cross-compiling):

  1. OpenBLAS | BLAS | Options: Atlas, Open, MKL, Apple

You can set the BLAS library explicitly by setting the BLAS variable to:

  • Atlas
  • Open
  • MKL
  • Apple

See the cmake/ChooseBLAS.cmake file for the options.
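For example, to force OpenBLAS regardless of which other backends are installed, you could pass the BLAS variable on the cmake command line (the out-of-source build directory shown here is conventional, not mandated):

```shell
# Configure a build that explicitly selects OpenBLAS as the BLAS backend
mkdir -p build && cd build
cmake -DBLAS=open ..
```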

Intel's MKL (Math Kernel Library) is one of the most powerful math libraries: https://software.intel.com/en-us/mkl

It comes in the following flavors:

  • MKL is a complete math library, containing all the functionality found in ATLAS, OpenBLAS and LAPACK. It is free under community support licensing (https://software.intel.com/en-us/articles/free-mkl), but needs to be downloaded and installed manually.

  • MKLML is a subset of MKL. It contains a smaller number of functions, to reduce the size of the download and the number of dynamic libraries a user needs.

  • MKL-DNN is a separate open-source library; it can be used independently of MKL or MKLML. It is shipped as a submodule with the MXNet source code (see 3rdparty/mkldnn or the MKL-DNN project).

Since the full MKL library is almost always faster than any other BLAS library, it is turned on by default; however, it needs to be downloaded and installed manually before running the cmake configuration. Register for and download it on the Intel performance libraries website.

Note: MKL is supported only for desktop builds and the framework itself supports the following hardware:

  • Intel® Xeon Phi™ processor
  • Intel® Xeon® processor
  • Intel® Core™ processor family
  • Intel Atom® processor

If you have a different processor you can still try to use MKL, but performance results are unpredictable.

Install GPU Software

If you want to run MXNet with GPUs, you must install NVIDIA CUDA and cuDNN.
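Before configuring the build, you can sanity-check that the CUDA toolkit and driver are visible; this is a generic check, not an MXNet-specific step, and the tools will only be on your PATH if CUDA is installed:

```shell
# Check that the CUDA compiler and the driver tools are on PATH
command -v nvcc && nvcc --version      # CUDA toolkit compiler
command -v nvidia-smi && nvidia-smi    # driver version and visible GPUs
```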

Install Optional Software

These might be optional, but they're typically desirable as they extend or enhance MXNet's functionality.

  • OpenCV - Image Loading and Augmentation. Each operating system has different packages and build from source options for OpenCV. Refer to your OS's link in the Build Instructions by Operating System section for further instructions.
  • NCCL - NVIDIA's Collective Communications Library. Instructions for installing NCCL are found in the following Build MXNet with NCCL section.

More information on turning these features on or off is found in the following build configurations section.


Build Configurations

There is a configuration file for make, make/config.mk, that contains all of the compilation options. You can edit it and then run make. cmake is recommended for building MXNet (and is required to build with MKL-DNN); however, you may use make instead.
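As a sketch, the two workflows look like this (the build directory name and the MKL-DNN flag are illustrative choices, not requirements):

```shell
# Option 1: make, driven by make/config.mk (edit that file first)
make -j"$(nproc)"

# Option 2: cmake, with options passed as -D flags in an out-of-source build
mkdir -p build && cd build
cmake -DUSE_MKLDNN=1 ..
make -j"$(nproc)"
```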


Build MXNet

Build MXNet with NCCL

  • Download and install the latest NCCL library from NVIDIA.
  • Note the directory path in which NCCL libraries and header files are installed.
  • Ensure that the installation directory contains lib and include folders.
  • Ensure that the prerequisites for using NCCL, such as the CUDA libraries, are met.
  • Append the following to the config.mk file, in addition to the CUDA-related options:
  • USE_NCCL=1
  • USE_NCCL_PATH=path-to-nccl-installation-folder
echo "USE_NCCL=1" >> make/config.mk
echo "USE_NCCL_PATH=path-to-nccl-installation-folder" >> make/config.mk
cp make/config.mk .
  • Run the make command:
make -j"$(nproc)"

Validating NCCL

  • Follow the steps to install the MXNet Python binding.
  • Comment out the following line in the test_nccl.py file at incubator-mxnet/tests/python/gpu/test_nccl.py:
@unittest.skip("Test requires NCCL library installed and enabled during build")
  • Run the test_nccl.py script as follows. The test should complete without producing any output.
nosetests --verbose tests/python/gpu/test_nccl.py

Recommendation for getting the best performance out of NCCL: set the environment variable NCCL_LAUNCH_MODE to PARALLEL when using NCCL version 2.1 or newer.
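Setting that variable for the current shell looks like this:

```shell
# Recommended when using NCCL 2.1 or newer
export NCCL_LAUNCH_MODE=PARALLEL
```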


Build MXNet with C++

  • To enable the C++ package, add USE_CPP_PACKAGE=1 when you run make or cmake.
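With cmake, for example, that flag is passed as a -D option (the out-of-source build directory is illustrative):

```shell
# Enable the C++ package alongside the shared library
mkdir -p build && cd build
cmake -DUSE_CPP_PACKAGE=1 ..
make -j"$(nproc)"
```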

Usage Examples

  • -j runs multiple build jobs in parallel to make use of multi-core CPUs.

For example, you can use all cores on Linux as follows:

cmake --build . -j $(nproc)

Recommended for Systems with NVIDIA GPUs and Intel CPUs

  • Build MXNet with MKL-DNN, GPU, and OpenCV support:
cmake -DUSE_CUDA=1 -DUSE_CUDA_PATH=/usr/local/cuda -DUSE_CUDNN=1 -DUSE_MKLDNN=1 ..

Recommended for Systems with NVIDIA GPUs

  • Build with OpenBLAS, GPU, and OpenCV support:
cmake -DBLAS=open -DUSE_CUDA=1 -DUSE_CUDA_PATH=/usr/local/cuda -DUSE_CUDNN=1 ..

Recommended for Systems with Intel CPUs

  • Build MXNet with MKL-DNN and OpenCV support:
cmake -DUSE_CUDA=0 -DUSE_MKLDNN=1 ..

Recommended for Systems with non-Intel CPUs

  • Build MXNet with OpenBLAS and OpenCV support:
cmake -DUSE_CUDA=0 -DBLAS=open ..

Other Examples

  • Build without using OpenCV:
cmake -DUSE_OPENCV=0 ..
  • Build on macOS with the default BLAS library (Apple Accelerate) and the Clang installed with Xcode (OpenMP is disabled because it is not supported by the Apple version of Clang):
cmake -DBLAS=apple -DUSE_OPENCV=0 -DUSE_OPENMP=0 ..
  • To use OpenMP on macOS you need to install LLVM's Clang compiler, e.g. via Homebrew (the one provided by Apple does not support OpenMP):
brew install llvm
cmake -DBLAS=apple -DUSE_OPENMP=1 ..
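After installing llvm you typically also have to point the build at Homebrew's Clang, since Apple's clang stays first on the PATH; the prefix below is the default Homebrew location on Intel Macs and is an assumption for your system:

```shell
# Point the build at Homebrew's OpenMP-capable Clang (prefix may differ)
export CC=/usr/local/opt/llvm/bin/clang
export CXX=/usr/local/opt/llvm/bin/clang++
```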

Installing MXNet Language Bindings

After building MXNet's shared library, you can install other language bindings.

NOTE: The C++ API binding must be built when you build MXNet from source. See Build MXNet with C++.

The following table provides links to each language binding by operating system:

| | Ubuntu | macOS | Windows |
| --- | --- | --- | --- |
| Python | Ubuntu guide | OSX guide | Windows guide |
| C++ | C++ guide | C++ guide | C++ guide |
| Clojure | Clojure guide | Clojure guide | n/a |
| Julia | Ubuntu guide | OSX guide | Windows guide |
| Perl | Ubuntu guide | OSX guide | n/a |
| R | Ubuntu guide | OSX guide | Windows guide |
| Scala | Scala guide | Scala guide | n/a |
| Java | Java guide | Java guide | n/a |
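As one example, after libmxnet.so has been built, the Python binding can usually be installed straight from the cloned source tree; the python/ subdirectory layout and the use of pip here are assumptions based on the standard MXNet repository, so consult your OS guide for the authoritative steps:

```shell
# Install the Python binding in editable (development) mode
# from the root of the cloned MXNet source tree
cd python
pip install --user -e .
```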
