(Requested for newer MATLAB releases (e.g. R2026b) and the Parallel Computing Toolbox.)
Lower-precision array types have been gaining popularity over the years for deep learning. The lowest-precision built-in array types currently offered by MATLAB are the 8-bit types, e.g. int8 and uint8. Helpfully, these 8-bit types do have gpuArray support, which means one can write GPU MEX code that takes in 8-bit arrays and reinterprets them bit-wise as other 8-bit formats, e.g. FP8, an especially common type in modern deep learning applications. I have used this myself to develop 8-bit forward-pass operations that are around twice as fast as 16-bit operations, with output arrays that still agree well with the 16-bit outputs (measured by high cosine similarity). So the 8-bit support that MATLAB offers is already quite sufficient.
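As an illustration of this pattern, here is a minimal sketch, assuming Parallel Computing Toolbox with a supported GPU and a user-written CUDA MEX file (fp8GemmMex is a hypothetical name used for illustration, not a shipping function):

% Pretend these bytes are FP8 (E4M3) bit patterns produced by an earlier
% quantization step; uint8 gpuArrays are fully supported today.
rawA = gpuArray(uint8(randi([0 255], 256, 256)));
rawB = gpuArray(uint8(randi([0 255], 256, 256)));
% Inside the MEX file, the device pointer can be reinterpret_cast to an
% FP8 type (e.g. __nv_fp8_e4m3*) and fed to Tensor Core kernels.
% C = fp8GemmMex(rawA, rawB);   % hypothetical MEX call, shown commented out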
Recently, 4-bit precision array types have also been shown to be very useful in deep learning. These types can be processed by the Tensor Cores of more modern GPUs, such as NVIDIA's Blackwell architecture. However, MATLAB does not yet have a built-in 4-bit precision array type.
Just as MATLAB provides int8 and uint8 with gpuArray support, it would be nice for MATLAB to also provide int4 and uint4 with gpuArray support.
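Until such types exist, one workaround is to pack two 4-bit values into each uint8 on the host and unpack them inside a custom CUDA MEX kernel on the device. A minimal packing sketch (the unpacking kernel is left out and assumed to be user-written):

vals   = int8(randi([-8 7], 1, 1024));                    % values restricted to the int4 range
bytes  = typecast(vals, 'uint8');                         % reinterpret bits, keeping two's complement
lo     = bitand(bytes(1:2:end), uint8(15));               % low nibbles
hi     = bitshift(bitand(bytes(2:2:end), uint8(15)), 4);  % high nibbles
packed = gpuArray(bitor(hi, lo));                         % 512 bytes holding 1024 int4 values

A built-in int4/uint4 gpuArray type would remove the need for this manual packing.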
In my experience, MATLAB's Deep Learning Toolbox is quite user-friendly, but it still falls short of libraries like PyTorch in many respects. Most users tend to choose PyTorch because of its flexibility, efficiency, and rich support for many mathematical operators. In recent years, the number of dlarray-compatible mathematical functions added to the toolbox has been very limited, which makes it difficult to experiment with many custom networks. For example, svd is currently not supported for dlarray inputs.
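To make the svd example concrete, here is a short sketch using the standard Deep Learning Toolbox functions (behavior as described above: svd rejects dlarray inputs, and the usual workaround detaches the data from automatic differentiation):

X = dlarray(rand(8, 8), 'CB');     % a labeled deep learning array
% [U, S, V] = svd(X);              % not supported for dlarray inputs
[U, S, V] = svd(extractdata(X));   % works, but gradients cannot flow through it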
The documentation page "List of Functions with dlarray Support" (MATLAB & Simulink) lists all functions that support dlarray as of R2026a: only around 200 functions, including toolbox-specific ones. I would like to see support for many more fundamental mathematical functions so that users have greater freedom when building and researching custom models. For context, the core MATLAB mathematics module alone contains roughly 600 functions, and many application domains build on that foundation.
I hope MathWorks will prioritize and accelerate expanding dlarray support for basic math functions. Doing so would significantly increase the Deep Learning Toolbox's utility and appeal for researchers and practitioners.
Thank you.
I'm working on training neural networks without backpropagation / automatic differentiation, using locally derived analytic update rules. Because a direct formula can be derived for each update, this removes a lot of the overhead that automatic differentiation otherwise requires.
However, MATLAB's neural network functionality is currently built entirely around backpropagation and automatic differentiation, e.g. the dlgradient function and the requirement that everything be a dlarray during training.
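For contrast, the workflow the toolbox currently expects looks roughly like this (a sketch; the sizes are arbitrary, and dlconv, dlarray, dlgradient, and dlfeval are the standard toolbox functions):

X = dlarray(gpuArray(rand(28, 28, 1, 16, 'single')), 'SSCB');  % data must be wrapped as a dlarray
W = gpuArray(rand(5, 5, 1, 8, 'single'));                      % filter height x width x channels x filters
Y = dlconv(X, W, 0);                                           % forward pass only
% Gradients then have to come from dlgradient inside a dlfeval call, even
% when an analytic update rule is already known.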
I have two main requests, specifically for functions that perform a single operation within a single layer of a neural network, such as "dlconv", "fullyconnect", "maxpool", "avgpool", "relu", etc:
  • These functions should also accept plain gpuArray data instead of requiring everything to be a dlarray.
  • These functions are currently designed to perform only the forward pass. I request that they also be able to perform the backward pass when the user requests it. There could be an additional input flag, e.g. "forward" (default) or "backward", and the function would then take all the inputs necessary for that operation. For example, the "avgpool" forward pass only needs the avgpool input data and the pooling parameters, whereas the "avgpool" backward pass needs the derivative w.r.t. the avgpool output data, the pooling parameters, and the original input dimensions (a hand-derived version of this backward pass is sketched after this list). I know there is a maxunpool function that achieves this for maxpool, but it has significant issues when used this way rather than through backpropagation in a dlgradient-type layer; see https://www.mathworks.com/matlabcentral/answers/2179587-making-a-custom-way-to-train-cnns-and-i-am-noticing-that-avgpool-is-significantly-faster-than-maxpo?s_tid=srchtitle.
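As referenced in the second item, here is a hand-derived backward pass for non-overlapping average pooling. This is only a sketch: it assumes the stride equals the pool size, no padding, and spatial sizes that are exact multiples of the pool size; the function name is mine rather than a toolbox one.

function dX = avgpoolBackward(dY, poolSize, inputSize)
% dY        : derivative w.r.t. the avgpool output, size [H/p, W/p, C, N]
% poolSize  : scalar pool size p (stride assumed equal to p, no padding)
% inputSize : size of the original avgpool input, [H, W, C, N]
% Each input element inside a p-by-p window contributes 1/p^2 to the pooled
% value, so its gradient is the corresponding output gradient divided by p^2.
    p  = poolSize;
    dX = repelem(dY, p, p, 1, 1) / p^2;
    assert(size(dX, 1) == inputSize(1) && size(dX, 2) == inputSize(2), ...
        'This sketch assumes spatial sizes are exact multiples of poolSize.');
end

Since repelem supports gpuArray inputs, the same code runs on the GPU; this is exactly the kind of direct backward-pass building block that the requested "backward" flag would save users from hand-rolling.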
I don't know how many people would benefit from this feature, and one can always spend the time building these capabilities oneself with MATLAB scripts, cuDNN MEX files, etc., but regardless it would be nice for MATLAB to support this natively to allow more customizable neural network training.