Deep Learning Toolbox Converter for ONNX Model Format

Import and export ONNX™ models within MATLAB for interoperability with other deep learning frameworks

57 Downloads

Updated 21 May 2019

Import and export ONNX™ (Open Neural Network Exchange) models within MATLAB for interoperability with other deep learning frameworks. ONNX enables models to be trained in one framework and transferred to another for inference.

Opening the onnxconverter.mlpkginstall file from your operating system or from within MATLAB will initiate the installation process for the release you have.
This mlpkginstall file is functional for R2018a and beyond.
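For example, from within MATLAB you can launch the installer directly (a minimal sketch, assuming the file has been downloaded to the current folder):

% Launch the support-package installer from inside MATLAB
open('onnxconverter.mlpkginstall')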

Usage example:
%% Export to ONNX model format
net = squeezenet; % pretrained model to be exported
filename = 'squeezenet.onnx';
exportONNXNetwork(net, filename);

%% Import the network that was exported
net2 = importONNXNetwork('squeezenet.onnx', 'OutputLayerType', 'classification');

% Compare the predictions of the two networks on a random input image
img = rand(net.Layers(1).InputSize);
y = predict(net, img);
y2 = predict(net2, img);

% Largest absolute difference between the predictions (should be near zero)
max(abs(y - y2))

To import an ONNX network into MATLAB, refer to:
https://www.mathworks.com/help/deeplearning/ref/importonnxnetwork.html

To export an ONNX network from MATLAB, refer to:
https://www.mathworks.com/help/nnet/ref/exportonnxnetwork.html

Comments and Ratings (33)

Dear Ting Su,

excellent! :D

Best wishes

Andreas

Ting Su

Hi Andreas,
The new version will be released soon.

Kevin Chng

Does it work with YOLOv2?

Dear Ting Su,

Any word on a new version that resolves the LSTM issue (see the GitHub ticket)? We would like to deploy some models in an application with ONNX Runtime.

Best wishes

Andreas

cui

Dear Ting Su,
The ONNX model exported by exportONNXNetwork() doesn't produce the same results when run in OpenCV as in MATLAB. I also posted my issue here:
https://ww2.mathworks.cn/matlabcentral/answers/464550-the-onnx-model-exported-by-exportonnxnetwork-is-not-the-same-as-the-result-of-running-in-opencv-an

cui

Hi Ting Su,
I noticed there was a recent update of the converter, but LSTMs still don't seem to work properly. I also posted my issue here:
https://de.mathworks.com/matlabcentral/answers/457176-onnx-export-yields-error-in-windows-ml?s_tid=prof_contriblnk

cui

Dear Ting Su,
Does the current ONNX converter support exporting object detection networks, such as a YOLOv2 network (export to yolov2.onnx)?

Dear Ting Su,

Yes, that's the issue I opened on GitHub.

https://github.com/microsoft/onnxruntime/issues/1016

Best wishes

Andreas

Ting Su

Hi Andreas,
We noticed that some LSTM models exported by the MATLAB ONNX converter don't work well with ONNX Runtime, even though they can be loaded into other frameworks, because ONNX Runtime strictly follows the ONNX spec's shape requirements. A new release of the MATLAB ONNX converter is coming soon and will work better with ONNX Runtime.
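For readers trying to reproduce this, a minimal sketch of such an export might look like the following (the layer sizes and dummy data are arbitrary illustrations, not the reporters' actual model):

% Build and train a tiny sequence-to-one LSTM on dummy data, then export it
X = {rand(3, 20)};                        % one dummy sequence: 3 features x 20 time steps
Y = 1;                                    % one dummy scalar response
layers = [sequenceInputLayer(3)
          lstmLayer(8, 'OutputMode', 'last')
          fullyConnectedLayer(1)
          regressionLayer];
opts = trainingOptions('adam', 'MaxEpochs', 1, 'Verbose', false);
net = trainNetwork(X, Y, layers, opts);
exportONNXNetwork(net, 'lstm_test.onnx'); % then try loading the file in ONNX Runtime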

Ting Su

Hi Andreas,
Thanks for the question. Is this the same issue reported in the following link?
https://github.com/microsoft/onnxruntime/issues/1016
We are looking into this and will get back to you soon.

Dear MATLAB Team,

We are exporting an LSTM model (basically built as described in the sequence-to-sequence regression example, using the turbofan engine example data).

We get an error message when importing it into ONNX Runtime (built from source, 0.4.0 release):

Load model from temp.onx failed:Node:fc_2 Output:fc_2 [ShapeInferenceError] Mismatch between number of source and target dimensions. Source=2 Target=3

We can load the ONNX file in Netron just fine, and see an fc_2 output with a somewhat odd <1x1x1> dimension. Could there be confusion in the expected output dimensions?

Could we send the ONNX file / MATLAB network to you for some help?

Would be much appreciated.

Exporting models from MATLAB to other runtime engines doesn't work apart from trivial examples. I've seen strange shape flipping on output ONNX network layers, which causes failures when importing into Python frameworks or C#.

When I import the model in C++, I don't get the same result as the output layer in MATLAB. Can you supply an example in C++ (OpenCV or TensorFlow) that gets the same layer output as MATLAB, for a conv layer for example?

Hong Wang

Thanks to Jihang Wang; with your help I set up this tool.

Hi Jihang, thanks for sharing this information; unfortunately it didn't resolve the problem in my case.

Jihang Wang

Hi everyone, with the help of the MathWorks technical support team I found the reason why it doesn't work, and I want to share my experience here. Basically, a function on my path was shadowing one of the built-in MATLAB functions. I reset my MATLAB path using the code below:
>> restoredefaultpath
>> rehash toolboxcache
>> savepath % note: this command will overwrite my current path preferences.

After that, I downloaded and reinstalled the converter app from this page and reran the export code. Problem solved :) Hope this helps.
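A quick way to check for this kind of shadowing is MATLAB's which -all, which lists every file on the path that defines a given name (exportONNXNetwork is used here just as an example of a name to check):

% More than one entry, or a user folder listed before the toolbox folder,
% suggests a built-in function is being shadowed
which -all exportONNXNetwork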

Hi Andreas, I just used a custom CNN and checked it with WinMLRunner; I didn't try any pretrained models, though.

Hi Gabriel,
Could you tell me which CNN you used?
As mentioned before, I tried the basic GoogLeNet and I couldn't use it with Microsoft ML.
It would be very helpful if I could use the ONNX file exchange.
Thanks in advance

Hi Ting, thanks a lot for the opset update. However, I now get the same error as Andreas for LSTM networks: "First input does not have rank 2". If I have more than one LSTM layer in the network, the error message changes to: "First input tensor must have rank 3". CNNs seem to work, though.

Ting Su

Hi Andreas and Jihang, can you contact our technical support and send the model to us?

Hi Ting, I ran into the same issue with C#. I can export the network in different versions. If I try to load the model into Windows ML, I get a "ShapeInferenceError": the first input does not have rank 2. With opset v6 it is possible to load the file, but it can't be used. I tested GoogLeNet and compared the ONNX models with a program called Netron. The difference I found was that the first layer "Sub" changed from [3x224x224] to [1x3x224x224], but I'm not sure if this is the problem. A second thing is that with ONNX v6 Visual Studio can generate a model class automatically, but not with v7 or higher; it seems the file is not recognized as an ONNX model. Can you give advice on how to use MATLAB-trained models in C#?

Jihang Wang

Hi Ting, I have the same issue when loading the ONNX model in C#. I tried saving the model with different opset versions, but none of them works. Please advise.

Ting Su

Hi Gabriel,
We recently added support for ONNX opsets 7, 8, and 9. You can specify which opset to use via the optional input argument 'OpsetVersion' during export. You should be able to download the update if you have MATLAB R2018b.
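For instance, re-exporting the usage example above at a specific opset might look like this (a sketch; the output filename is arbitrary):

%% Export with an explicit ONNX opset version
net = squeezenet; % pretrained model to export
exportONNXNetwork(net, 'squeezenet_opset9.onnx', 'OpsetVersion', 9);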

Ting Su

Hi Kennth,
We saw a similar issue, and the fix will be released soon. It would be great if you could send us your MATLAB model so that we can test it.

It would be great if the export could be updated to opset version 7 or 8 to allow use with Windows ML.

exportONNXNetwork does not work properly with CNTK and Python. The conversion produces a ValueError: Gemm: Invalid shape, input A and B are expected to be rank=2 matrices.

Hui Yin Lee

Hi, is there code or a toolbox available to export a Faster R-CNN model? I get an error saying the model is not a DAGNetwork. I hope I can get some feedback or help here.
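One thing that may be worth trying (an unverified sketch, not a confirmed workaround): exportONNXNetwork expects a SeriesNetwork or DAGNetwork, so the detector object itself cannot be passed in. If your release exposes the detector's underlying network, you could try exporting that instead:

% Unverified sketch: 'detector' is assumed to be a trained fasterRCNNObjectDetector
net = detector.Network;                    % property name may differ across releases
exportONNXNetwork(net, 'fasterrcnn.onnx'); % may still fail on unsupported layers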

Do you know when support for the Constant operator will be added?

Error using importONNXNetwork (line 39)
Node 'node_20': Constant operator is not supported yet.
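Until the Constant operator is supported, one possible workaround (a sketch; behavior depends on your converter version, and older versions may fail with the same error) is importONNXLayers, which imports the graph as an editable layer graph you can inspect and patch by hand:

% Import as an editable layer graph instead of a ready-to-use network
lgraph = importONNXLayers('model.onnx', 'OutputLayerType', 'classification');
analyzeNetwork(lgraph) % inspect for placeholder or missing layers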

umit kacar

This code worked for me :) It is very good. Thank you.

Ting Su

Hi Trinh,
We would like to hear more details about the problem with importONNXNetwork(). Have you previously installed an old version of this converter?

Trinh Pham

The function importONNXNetwork() doesn't work when I use the example above!

MATLAB Release Compatibility
Created with R2018a
Compatible with R2018a to R2019a
Platform Compatibility
Windows macOS Linux