GPU Coder vs. ONNXRuntime, is there a difference in inference speed?
    2 views (last 30 days)
Since I can export from MATLAB to ONNX format, why can't I just import my model into TensorRT etc.? Will I get significant speed increases, or is the benefit of GPU Coder more about being able to compile all my other MATLAB code into optimized CUDA?
Thanks in advance.
0 Comments
Answers (1)
  Joss Knight
    
 2 Apr 2021
        You can compile your network for TensorRT using GPU Coder if that's your intended target; there is no need to go through ONNX.
I don't believe MathWorks has any published benchmarks against ONNX Runtime specifically. GPU Coder on the whole outperforms other frameworks, although it does depend on the network.
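For reference, both paths start from the same trained network object. The sketch below (illustrative names only: `mynet.mat`, `myPredict`, and the input size are assumptions, not from this thread) shows the ONNX export route alongside GPU Coder's direct TensorRT route via `coder.DeepLearningConfig`:

```matlab
% Path 1: export the network to ONNX for use with ONNX Runtime, TensorRT, etc.
% (requires the Deep Learning Toolbox Converter for ONNX Model Format)
load mynet.mat net                    % assumed: a trained dlnetwork/SeriesNetwork
exportONNXNetwork(net, "mynet.onnx")

% Path 2: generate TensorRT code directly with GPU Coder, no ONNX step.
% myPredict is an assumed entry-point function that loads the network
% (e.g. via coder.loadDeepLearningNetwork) and calls predict on its input.
cfg = coder.gpuConfig("mex");
cfg.DeepLearningConfig = coder.DeepLearningConfig("tensorrt");
codegen -config cfg myPredict -args {ones(224,224,3,"single")}
```

With the second path, the generated MEX targets the TensorRT libraries directly, which is the "no need to go through ONNX" workflow mentioned above.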
2 Comments
  Matti Kaupenjohann
 7 Jan 2022
				Could you show or link the benchmark that compares GPU Coder's performance against other frameworks (and which frameworks)?
  Joss Knight
    
 7 Jan 2022
      Edited: Joss Knight, 7 Jan 2022
			We don't publish the competitive benchmarks; you'll have to make a request through your sales agent. We can provide some numbers for MATLAB.