MXNet's Continuous Integration system covers a wide variety of environments with the help of Docker. This ensures consistent test behaviour and reproducibility between runs. This guide explains how to use the available tools to recreate test results on your local machine.

1. Requirements

In order to run this toolchain, the following packages have to be installed. Please note that CPU tests can be run on macOS and Ubuntu, while GPU tests can only be executed on Ubuntu. Unfortunately, Windows builds and tests are done without Docker and are thus not covered by this guide.

  • Docker
  • docker-compose
  • Python3
  • Optional: Nvidia-Docker (Ubuntu only, for GPU tests) 
  • Optional: GPU with CUDA Compute Capability ≥ 3.0
  • Disk space: at least 100GB (150GB recommended)
  • Code and Python dependencies, which are defined in ci/requirements.txt 
pip3 install -r ci/requirements.txt --user

1.1. EC2 instances with automated setup

If you plan to use EC2 to reproduce the test results, you can set up your instance with the automated setup documented in MXNet Developer setup on AWS EC2.

Then clone the MXNet repository and either use the provided script for common use cases or continue with the instructions below.

2. Reproducing failures

2.1. Build

A build failure like the one shown below can be reproduced by copying the failed command, which starts with ci/, and running it on your local machine from the root of your MXNet source directory. This step does NOT require a GPU or any CUDA dependencies.

Build failure

In the most recent version of the CI, the build command is not displayed in the tab as in the image above. To find it, click on "show complete log" and scroll to the top.

In this case, you would like to run the following command, which would produce output like the image below:

ci/ --platform ubuntu_build_cuda /work/ build_ubuntu_gpu_cuda8_cudnn5

2.2. Test

Reproducing test failures requires an additional step, since the MXNet binaries are not present in your local workspace.

First, we have to generate these dependencies before a test can be executed. They can be restored with the stash commands, which are indicated by the message "Restore files previously stashed".

Files ending with the suffix _gcov_data are used for test coverage reporting, and are therefore not needed to reproduce test results.
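As a small illustration (the file names below are invented for this example), the coverage artifacts can be filtered out of a stash listing like this:

```python
# Hypothetical stash contents; these names are invented for illustration.
stash_files = ["", "mkldnn_gpu_gcov_data"]

# Keep only the files needed to reproduce tests, dropping coverage artifacts.
needed = [f for f in stash_files if not f.endswith("_gcov_data")]
print(needed)  # → ['']
```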

In this case, the stash is labelled as mkldnn_gpu. The easiest way to map this to a build step is by opening the Jenkinsfile and searching for pack_lib('mkldnn_gpu'. In this case, you will find a block like the following:

def compile_unix_mkldnn_gpu() {
  return ['GPU: MKLDNN': {
    node(NODE_LINUX_CPU) {
      ws('workspace/build-mkldnn-gpu') {
        timeout(time: max_time, unit: 'MINUTES') {
          utils.docker_run('ubuntu_build_cuda', 'build_ubuntu_gpu_mkldnn', false)
          utils.pack_lib('mkldnn_gpu', mx_mkldnn_lib, true)
        }
      }
    }
  }]
}

The important line here is the utils.docker_run call (line 6), which contains the three arguments we need to build our dependencies. Substitute them into the command below and run it to obtain the binaries.

  1. PLATFORM: in this case, ubuntu_build_cuda
  2. FUNCTION_NAME: in this case, build_ubuntu_gpu_mkldnn
  3. USE_NVIDIA: this argument toggles whether --nvidiadocker should be used. Here, it is false, meaning <USE_NVIDIA> in the command below should be removed. Conversely, if it were true, you would replace <USE_NVIDIA> with --nvidiadocker.

ci/ --docker-registry mxnetci <USE_NVIDIA> -p <PLATFORM> /work/ <FUNCTION_NAME>
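The substitution above can be sketched as a small helper (hypothetical, not part of the CI tooling; the script paths mirror the truncated template above):

```python
# Hypothetical helper, not part of the CI tooling: assemble the local
# reproduction command from the three Jenkinsfile arguments.
def build_repro_command(platform, function_name, use_nvidia):
    cmd = ["ci/", "--docker-registry", "mxnetci"]
    if use_nvidia:
        cmd.append("--nvidiadocker")  # only needed for GPU builds/tests
    cmd += ["-p", platform, "/work/", function_name]
    return cmd

# The mkldnn_gpu example from above: USE_NVIDIA is false, so no --nvidiadocker.
print(" ".join(build_repro_command("ubuntu_build_cuda",
                                   "build_ubuntu_gpu_mkldnn", False)))
```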

Test execution

After the binaries have been generated successfully, take the failed command and execute it in the root of your MXNet workspace. To find this command, expand the Shell Script tab and view the complete log. Scrolling to the top will show the command we are interested in:

In this case, you would like to run: 

ci/ --docker-registry mxnetci --nvidiadocker --platform ubuntu_gpu --docker-build-retries 3 --shm-size 500m /work/ unittest_ubuntu_python2_gpu

Please note the parameter --nvidiadocker in this example. It indicates that this test requires a GPU and is thus only executable on an Ubuntu machine with Nvidia-Docker installed and a GPU available. The result of this execution should look like the following:

3. Tips and Tricks

Repeating test execution

To check a test for its robustness against flakiness, you might want to repeat its execution multiple times. This can be achieved with the MXNET_TEST_COUNT environment variable. The execution would look as follows:

MXNET_TEST_COUNT=10000 nosetests --logging-level=DEBUG --verbose -s
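Conceptually, the mechanism looks like the following sketch (illustrative only, not MXNet's actual implementation): a decorator repeats the test body MXNET_TEST_COUNT times, re-seeding on each trial so the test is exercised with many different random inputs.

```python
import os
import random

# Illustrative sketch only, not MXNet's actual implementation: repeat a test
# MXNET_TEST_COUNT times with a fresh seed per trial to surface flakiness.
def repeat_by_env(test_fn):
    def wrapper():
        trials = int(os.environ.get("MXNET_TEST_COUNT", "1"))
        for trial in range(trials):
            random.seed(trial)  # different random data on every trial
            test_fn()
        return trials
    return wrapper

@repeat_by_env
def test_dummy():
    assert 0.0 <= random.random() < 1.0  # stands in for a real unit test

os.environ["MXNET_TEST_COUNT"] = "10"
print(test_dummy())  # → 10
```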

Setting a fixed test seed

To reproduce a test failure caused by random data, you can use the MXNET_TEST_SEED environment variable. 

MXNET_TEST_SEED=2096230603 nosetests --logging-level=DEBUG --verbose -s
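The reason this works can be sketched in plain Python: a fixed seed yields identical "random" inputs on every run, so a data-dependent failure can be replayed deterministically.

```python
import random

# A fixed seed makes the "random" test data identical on every run,
# which is what allows a data-dependent failure to be reproduced.
def make_test_input(seed):
    rng = random.Random(seed)  # local RNG, independent of global state
    return [rng.randint(0, 100) for _ in range(5)]

print(make_test_input(2096230603) == make_test_input(2096230603))  # → True
```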

Using the Flakiness Checker

Another way to accomplish the above is with the flakiness checker tool, which is currently located in the tools directory. It automatically sets the correct environment variables and infers the test file path.

Similar results to the above can be achieved using the following commands:

python tools/ test_module.test_op3
python tools/ test_module.test_op3 -s 2096230603

Usage documentation:

python tools/ [optional_arguments] <test-specifier>

where <test-specifier> is a string specifying which test to run. This can come in two formats:

  1. <file-name>.<test-name>, as is common in the github repository (e.g. test_example.test_flaky)
  2. <directory>/<file>:<test-name>, like the input to nosetests (e.g. tests/python/unittest/ Note: this directory can be either relative or absolute. Additionally, if the full path is not given, the script will search the given directory for the provided file.

Optional Arguments:

-h, --help print built-in help message

-n N, --num-trials N run the test for N trials instead of the default 10,000

-s SEED, --seed SEED use SEED as the test seed, rather than a random seed

Note: additional options will be added once the flaky test detector is deployed

4. Troubleshooting

In case you run into any issues, please try the following steps:

Cleaning the workspace (including subrepos; be careful, this can cause data loss):

ci/docker/ clean_repo

Using signal handler to get stack traces:

Pass -DUSE_SIGNAL_HANDLER=ON (and optionally -DCMAKE_BUILD_TYPE=Debug) as CMake arguments. You can edit ci/docker/ and change it to build with these options if they are not already set.

Stepping into the container

It is possible to step into the container to run commands manually. In the output of the script, the docker command that sets up all the needed docker options is printed. You can replace the final script argument with /bin/bash (or remove it entirely) to get a shell in the container.



  1. Here are the steps I used to reproduce a nightly test; I'm adding them here in case someone finds this useful.

    on an AWS Deeplearning Base AMI Ubuntu:

    1. clone mxnet repo (no need to install mxnet)
    2. install requirements
    pip3 install -r ci/requirements.txt --user

    3. Build, using the command copied from the Jenkins log (click Build -> GPU: CUDA9.1+cuDNN7 -> 'Shell Scripts' -> 'show complete log'):

    ci/ --docker-registry mxnetci --platform ubuntu_build_cuda --docker-build-retries 3 --shm-size 500m /work/ build_ubuntu_gpu_cuda100_cudnn7

    4. Run the specific job that failed (Tutorial Python2 and 3), copying the command from the log:

    ci/ --docker-registry mxnetci --nvidiadocker --platform ubuntu_nightly_gpu --docker-build-retries 3 --shm-size 1500m /work/ nightly_tutorial_test_ubuntu_python2_gpu
  2. In the case of the website pipeline, the build command used is:

    ci/ --docker-registry mxnetci --platform ubuntu_cpu_lite --docker-build-retries 3 --shm-size 500m /work/ build_ubuntu_cpu_docs

    The command can be found under: mxnet-validation/website → Select Build → Select Shell Script → Show complete log