GPU Coder vs. ONNXRuntime, is there a difference in inference speed?

2 views (last 30 days)
David, 1 April 2021
Edited: Joss Knight, 7 January 2022
Since I can export from MATLAB to ONNX format, why can't I just import my model into TensorRT etc.? Will I get significant speed increases, or is the benefit of GPU Coder more about being able to compile all my other MATLAB code into optimized CUDA?
Thanks in advance.

Answers (1)

Joss Knight, 2 April 2021
You can compile your network for TensorRT using GPU Coder if that's your intended target, no need to go through ONNX.
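As a rough sketch of that workflow: GPU Coder can generate TensorRT-accelerated code directly from a trained network via an entry-point function. The function name `myPredict`, the file `mynet.mat`, and the input size are hypothetical placeholders; this assumes the GPU Coder Interface for Deep Learning Libraries support package is installed.

```matlab
function out = myPredict(in) %#codegen
% Hypothetical entry-point function for code generation.
% Loads the network once into a persistent variable, then runs inference.
persistent net
if isempty(net)
    net = coder.loadDeepLearningNetwork('mynet.mat'); % placeholder file name
end
out = predict(net, in);
end
```

The network can then be compiled against TensorRT without an ONNX intermediate step, for example:

```matlab
cfg = coder.gpuConfig('mex');                                  % MEX build target
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt'); % use TensorRT, not cuDNN
codegen -config cfg myPredict -args {ones(224,224,3,'single')} % example input size
```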
I don't believe MathWorks has published any benchmarks against ONNX Runtime specifically. GPU Coder on the whole outperforms other frameworks, although it does depend on the network.
Comments: 2
Matti Kaupenjohann, 7 January 2022
Could you show or link the benchmark that compares GPU Coder's performance against other frameworks (and which ones)?
Joss Knight, 7 January 2022
Edited: Joss Knight, 7 January 2022
We don't publish the competitive benchmarks; you'll have to make a request through your sales agent. We can provide some numbers for MATLAB.


Release: R2021a
