dlconv inference with int8

2 views (last 30 days)
David Eriksson on 5 March 2024
Answered: Avadhoot on 13 March 2024
Hi, is there a way to run inference (a forward pass) with dlconv using int8 activations and floating-point weights? Is it possible to build a CUDA model that I can run from MATLAB, maybe as a MEX function? Best, David

Answers (1)

Avadhoot on 13 March 2024
Hi David,
From your question, I infer that you are trying to pass int8 activations to the "dlconv" function along with floating-point weights. This will not work, because "dlconv" is designed to accept only floating-point data types (single or double). The int8 inputs must therefore be cast to floating point before being passed to "dlconv".
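As a minimal sketch of that cast (the array sizes, the random data, and the dequantization scale below are placeholder assumptions, not values from your model):

% Cast int8 activations to single before calling dlconv.
X8 = randi([-128 127], 28, 28, 3, 1, 'int8');   % example int8 activations (H x W x C x N)
scale = 0.05;                                   % assumed dequantization scale, if the int8 data is quantized
X = dlarray(single(X8) * scale, 'SSCB');        % cast to single and attach a data format
W = rand(3, 3, 3, 16, 'single');                % floating-point weights (h x w x C x numFilters)
b = zeros(16, 1, 'single');                     % one bias per filter
Y = dlconv(X, W, b, 'Padding', 'same');         % convolution runs entirely in single precision

If the int8 values carry per-tensor or per-channel quantization scales, apply them during the cast as above so the floating-point convolution sees the dequantized values.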
A more involved workaround is to implement the convolution manually in a custom CUDA kernel and then write a MEX function to interface it with MATLAB. You can then call the MEX function from MATLAB as usual and pass the int8 data to it; the MEX function handles the invocation of the CUDA kernel. With this approach you can keep int8 activations in your convolution. It bypasses "dlconv" entirely, since you are writing your own kernel for the convolution operation. A sketch of the MATLAB side of this workflow follows.
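As a rough illustration of that workflow (the file name int8conv.cu and the MEX function name int8conv are hypothetical placeholders; the .cu file would contain your kernel plus the mexFunction gateway):

% Compile the CUDA MEX file; mexcuda requires Parallel Computing Toolbox
% and a compatible CUDA toolkit on the system.
mexcuda int8conv.cu

X8 = randi([-128 127], 28, 28, 3, 'int8');      % int8 activations stay int8 all the way to the kernel
W = rand(3, 3, 3, 16, 'single');                % floating-point weights
Y = int8conv(X8, W);                            % the MEX gateway launches the kernel and returns the result

The design choice here is that all int8 handling lives inside your own kernel, so MATLAB never needs to cast the activations to floating point.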
I hope this helps.
