...

New Features - Added Scala Inference APIs

  • Implemented new MXNet Scala Inference APIs, which offer an easy-to-use, Scala-idiomatic, and thread-safe high-level interface for performing predictions with deep learning models trained with MXNet (#9678).
  • Implemented a new ImageClassifier class, which provides APIs for classification tasks on a Java BufferedImage using a pre-trained model you provide (#10054).
  • Implemented a new ObjectDetector class, which provides APIs for object and boundary detection on a Java BufferedImage using a pre-trained model you provide (#10229).

New Features - Added module to import ONNX models into MXNet

  • Implemented a new ONNX module in MXNet, which offers an easy-to-use API to import ONNX models into MXNet's symbolic interface (#9963); a sketch follows below. Check out the example on how to use this API to import ONNX models and perform inference with MXNet. Currently, the ONNX-MXNet import module is still experimental. Please use it with caution.
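
A minimal import-and-inference sketch is below; the file name `model.onnx`, the input name `input_0`, and the input shape are placeholders that depend on the ONNX graph you import.

```python
import mxnet as mx
from mxnet.contrib import onnx as onnx_mxnet

# Import the ONNX graph into MXNet's symbolic interface.
# 'model.onnx', 'input_0' and the shape below are placeholders.
sym, arg_params, aux_params = onnx_mxnet.import_model('model.onnx')

# Bind the imported symbol into a Module for inference.
mod = mx.mod.Module(symbol=sym, data_names=['input_0'], label_names=None)
mod.bind(for_training=False, data_shapes=[('input_0', (1, 3, 224, 224))])
mod.set_params(arg_params=arg_params, aux_params=aux_params)

# Run a forward pass on dummy data.
mod.forward(mx.io.DataBatch([mx.nd.zeros((1, 3, 224, 224))]))
print(mod.get_outputs()[0].shape)
```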


New Features - Added support for Model Quantization with Calibration

  • Implemented model quantization by adopting the TensorFlow approach with calibration, borrowing the idea from Nvidia's TensorRT. The focus of this work is on keeping the inference accuracy loss of quantized models (ConvNets for now) under control when compared to their corresponding FP32 models. Please see the example on how to quantize an FP32 model with or without calibration (#9552); a minimal sketch follows below. Currently, the quantization support is still experimental. Please use it with caution.
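
A minimal calibration sketch, assuming a pre-trained FP32 checkpoint (`model-symbol.json` / `model-0000.params`) saved under the hypothetical prefix `model`:

```python
import mxnet as mx
from mxnet.contrib.quantization import quantize_model

# Load a pre-trained FP32 model (the checkpoint prefix is a placeholder).
sym, arg_params, aux_params = mx.model.load_checkpoint('model', 0)

# A hypothetical calibration iterator over representative input data.
calib_iter = mx.io.NDArrayIter(
    data=mx.nd.random.uniform(shape=(500, 3, 224, 224)), batch_size=50)

# Quantize with entropy calibration.
qsym, qarg_params, qaux_params = quantize_model(
    sym=sym, arg_params=arg_params, aux_params=aux_params,
    ctx=mx.gpu(0), calib_mode='entropy',
    calib_data=calib_iter, num_calib_examples=500)

# Save the calibrated model for deployment.
mx.model.save_checkpoint('model-quantized', 0, qsym, qarg_params, qaux_params)
```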

New Features - MKL-DNN Integration

  • MXNet now integrates with Intel MKL-DNN to accelerate neural network operators: Convolution, Deconvolution, FullyConnected, Pooling, Batch Normalization, Activation, LRN, and Softmax, as well as some common operators: sum and concat (#9677). This integration allows NDArray to contain data with MKL-DNN layouts and reduces data layout conversions to get maximal performance from MKL-DNN (a sketch follows below). Currently, the MKL-DNN integration is still experimental. Please use it with caution.
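
No API changes are needed to benefit: when MXNet is built with MKL-DNN enabled (e.g. `USE_MKLDNN=1`), an ordinary CPU operator call such as the convolution below is dispatched to MKL-DNN kernels automatically.

```python
import mxnet as mx

# A plain CPU convolution; with an MKL-DNN build, MXNet executes it through
# MKL-DNN and may keep the data in an MKL-DNN internal layout between ops.
data = mx.nd.random.uniform(shape=(1, 3, 224, 224), ctx=mx.cpu())
weight = mx.nd.random.uniform(shape=(64, 3, 3, 3), ctx=mx.cpu())
out = mx.nd.Convolution(data=data, weight=weight, no_bias=True,
                        kernel=(3, 3), num_filter=64)
out.wait_to_read()
print(out.shape)  # (1, 64, 222, 222)
```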

New Features - Added Exception Handling Support for Operators

  • Implemented exception handling support for operators in MXNet. Backend C++ exceptions are now transported to the different language front-ends, preventing crashes when an exception is thrown during operator execution (#9681); see the sketch below.
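
A minimal sketch of what this enables from Python: an error detected inside an asynchronously executed operator (here, a deliberately invalid negative scale for `random.normal`) surfaces as a catchable `MXNetError` at the synchronization point instead of crashing the process.

```python
import mxnet as mx

# The invalid scale (-1) is only detected in the backend when the operator
# actually executes; the C++ exception is transported to the front-end.
a = mx.nd.random.normal(loc=0, scale=-1, shape=(2, 2))
try:
    a.asnumpy()  # synchronization point where the exception surfaces
except mx.base.MXNetError as e:
    print('Caught backend exception:', str(e).splitlines()[0])
```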

New Features - Enhanced FP16 support

  • Added support for distributed mixed-precision training with FP16. This includes storing a master copy of the weights in float32 via the multi_precision mode of optimizers (#10183). Improved the speed of float16 operations on x86 CPUs by 8x through the F16C instruction set. Added support for more operators to work with FP16 inputs (#10125, #10078, #10169). Added a tutorial on using mixed precision with FP16 (#10391). A sketch of the multi_precision mode follows below.
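
A minimal Gluon sketch of the multi_precision mode: the network computes in float16 while the optimizer keeps a float32 master copy of each weight for numerically stable updates.

```python
import mxnet as mx
from mxnet import autograd, gluon

net = gluon.nn.Dense(10)
net.initialize()
net.cast('float16')  # parameters and activations in FP16

# multi_precision keeps an FP32 master copy of each FP16 weight.
trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'learning_rate': 0.1, 'multi_precision': True})

x = mx.nd.random.uniform(shape=(4, 8)).astype('float16')
with autograd.record():
    loss = net(x).sum()
loss.backward()
trainer.step(4)
```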

New Features - Added Profiling enhancements

  • Enhanced the built-in profiler to support native Intel® VTune™ Amplifier objects such as Task, Frame, Event, Counter, and Marker from both C++ and Python, which is also visible in the Chrome tracing view (#8972). Added runtime tracking of symbolic and imperative operators as well as memory and API calls. Added tracking and dumping of aggregate profiling data. The profiler also no longer affects runtime performance when not in use. A usage sketch follows below.
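
A usage sketch of the revamped profiler (flag names per the 1.2 Python API; treat the exact arguments as illustrative):

```python
import mxnet as mx

# Configure before running operators; aggregate_stats enables dumps() below.
mx.profiler.set_config(profile_all=True, aggregate_stats=True,
                       filename='profile_output.json')
mx.profiler.set_state('run')

a = mx.nd.ones((1000, 1000))
b = mx.nd.dot(a, a)
b.wait_to_read()

mx.profiler.set_state('stop')
print(mx.profiler.dumps())  # aggregate profiling statistics
# 'profile_output.json' can be inspected in chrome://tracing
```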


Breaking Changes

  • Changed the namespace for the MXNet Scala package from ml.dmlc.mxnet to org.apache.mxnet (#10284).
  • Changed the API for the Pooling operator from `mxnet.symbol.Pooling(data=None, global_pool=_Null, cudnn_off=_Null, kernel=_Null, pool_type=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)` to `mxnet.symbol.Pooling(data=None, kernel=_Null, pool_type=_Null, global_pool=_Null, cudnn_off=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)`. This is a breaking change for positional calls (i.e., when keyword arguments are not used), since the new API expects global_pool in the fourth position instead of the second (#10000). See the sketch after this list.
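
A sketch of the safe pattern: passing all Pooling arguments as keywords is unaffected by the reordering, whereas positional calls that relied on global_pool being the second argument must be updated.

```python
import mxnet as mx

data = mx.sym.Variable('data')

# Safe under both the old and the new API: keyword arguments only.
pool = mx.sym.Pooling(data=data, kernel=(2, 2), pool_type='max',
                      stride=(2, 2), global_pool=False)

# Positional calls written against the old order
# (data, global_pool, cudnn_off, kernel, ...) now silently bind to the
# new order (data, kernel, pool_type, global_pool, ...), so they break.
```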

Bug-fixes

  • Fixed flaky and buggy tests (#9598, #9951, #10259, #10197, #10136, #10422). Please see: https://github.com/apache/incubator-mxnet/projects/9
  • Fixed cudnn_conv and cudnn_deconv deadlock (#10392).
  • Fixed a race condition in `io.LibSVMIter` when batch size is large (#10124).
  • Fixed a race condition in converting data layouts in MKL-DNN (#9862).
  • Fixed MKL-DNN sigmoid/softrelu issue (#10336).
  • Fixed incorrect indices generated by device row sparse pull (#9887).
  • Fixed cast storage support for same stypes (#10400).
  • Fixed uncaught exception for bucketing module when symbol name not specified (#10094).
  • Fixed regression output layers (#9848).
  • Fixed crash with mx.nd.ones (#10014).
  • Fixed sample_multinomial crash when get_prob=True (#10413).
  • Fixed buggy type inference in correlation (#10135).
  • Fixed race condition for CPUSharedStorageManager->Free and launched workers at iter init stage to avoid frequent relaunch (#10096).
  • Fixed DLTensor Conversion for int64 (#10083).
  • Fixed an issue where hex symbols emitted by the profiler were not recognized by the Chrome tracing tool (#9932).
  • Fixed a crash when the profiler was not enabled (#10306).
  • Fixed ndarray assignment issues (#10022, #9981, #10468).
  • Fixed print_summary bug in visualization module (#9492).
  • Fixed shape mismatch in accuracy metrics (#10446).
  • Fixed random samplers from uniform and random distributions in R bindings (#10450).
  • Fixed a bug that was causing training metrics to be printed as NaN sometimes (#10437).
  • Fixed a crash with non-positive reps for tile ops (#10417).

...