Currently, there is no mechanism to verify the consistency and integrity of models trained on older MXNet versions. Any unwarranted change in an underlying model saving/loading API could break backward compatibility across MXNet versions.
Model backward compatibility checking aims to verify that models trained on earlier versions of MXNet load correctly on the latest version or the latest release candidate. It also aims to sanity-check that inference on these trained models remains consistent.
Here's the proposed approach:
- Create simple models on earlier versions of MXNet, initialize them with randomly generated weights, and perform a forward pass on them. Save the model and its parameters and upload them to an S3 bucket.
- Continuing from the previous step, perform a simple inference on randomly generated input, and save that input and the inference output alongside the model files on S3.
- An inference script running on the latest master branch of the MXNet repository pulls the model files and data and tries to load the models back into memory. The tests fail if the models fail to load or if they produce a different inference output; a differing output flags a potential change in an underlying operator.
- Use the same seed values to ensure the training and inference scripts run in identical environments.
- These tests could be part of the nightly tests and would help flag the issues mentioned above.
- Primarily, the model backward-compatibility checker would cover the following APIs to save/load models:
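The steps above can be sketched end to end. The following is a minimal, framework-agnostic illustration: a tiny seeded linear "model" built with NumPy stands in for a real MXNet network, and a local directory stands in for the S3 bucket. The function names `save_artifacts` and `check_backward_compatibility` are hypothetical; the actual checker would use MXNet's model save/load APIs and an S3 client for uploads.

```python
import os
import tempfile
import numpy as np

def save_artifacts(model_dir, seed=42):
    """'Training' side (would run on an older MXNet version): build a
    tiny model with seeded random weights, run a forward pass on seeded
    random input, and persist model parameters + input + output."""
    rng = np.random.RandomState(seed)   # fixed seed -> reproducible environment
    weights = rng.randn(4, 3)           # stand-in for saved model parameters
    data = rng.randn(2, 4)              # randomly generated input
    output = data.dot(weights)          # simple forward pass / inference
    os.makedirs(model_dir, exist_ok=True)
    np.save(os.path.join(model_dir, "params.npy"), weights)
    np.save(os.path.join(model_dir, "input.npy"), data)
    np.save(os.path.join(model_dir, "expected_output.npy"), output)
    # In the real checker, these files would be uploaded to S3 here.

def check_backward_compatibility(model_dir):
    """Inference side (would run on latest master): pull the artifacts,
    load the model back into memory, rerun inference, compare outputs."""
    weights = np.load(os.path.join(model_dir, "params.npy"))
    data = np.load(os.path.join(model_dir, "input.npy"))
    expected = np.load(os.path.join(model_dir, "expected_output.npy"))
    actual = data.dot(weights)          # same forward pass on the new version
    # A mismatch would flag a potential change in an underlying operator.
    return np.allclose(actual, expected)

with tempfile.TemporaryDirectory() as d:
    save_artifacts(d)
    print(check_backward_compatibility(d))  # True when behavior is unchanged
```

In the nightly tests, the two functions would run on different MXNet versions with the S3 bucket as the handoff point, rather than within a single process as shown here.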
4. Current work
A first cut of the model backward-compatibility checker using the above approach has been implemented here: https://github.com/apache/incubator-mxnet/pull/11626. We would like to make it more robust and welcome feedback on ways to improve it further.