I imported a network ('net') from PyTorch into the MATLAB workspace and then used it inside a MATLAB Function block in Simulink. During the run I got the following error: Code generation for custom layer 'aten__linear0' for target 'mkldnn' is not supported as it returns a dlarray.

guiyang on 24 May 2024
Answered: Shivani on 3 Jun 2024
classdef aten__linear0 < nnet.layer.Layer & nnet.layer.Formattable & ...
        nnet.layer.AutogeneratedFromPyTorch & nnet.layer.Acceleratable
    %aten__linear0 Auto-generated custom layer
    % Auto-generated by MATLAB on 2024-05-24 16:18:35
    %#codegen

    properties (Learnable)
        % Networks (type dlnetwork)
    end

    properties
        % Non-Trainable Parameters
    end

    properties (Learnable)
        % Trainable Parameters
        Param_weight
        Param_bias
    end

    methods
        function obj = aten__linear0(Name, Type, InputNames, OutputNames)
            obj.Name = Name;
            obj.Type = Type;
            obj.NumInputs = 1;
            obj.NumOutputs = 1;
            obj.InputNames = InputNames;
            obj.OutputNames = OutputNames;
        end

        function [linear_9] = predict(obj, linear_x_1)
            % Validates that the input has the correct format and permutes its
            % dimensions into the reverse of the original PyTorch format.
            model_tt.ops.validateInput(linear_x_1, 2);
            [linear_x_1, linear_x_1_format] = model_tt.ops.permuteInputToReversePyTorch(linear_x_1, 2);
            [linear_x_1] = struct('value', linear_x_1, 'rank', int64(2));
            [linear_9] = tracedPyTorchFunction(obj, linear_x_1, false, "predict");
            % Permute U-labelled output to forward PyTorch dimension ordering
            if (any(dims(linear_9.value) == 'U'))
                linear_9 = permute(linear_9.value, fliplr(1:max(2, linear_9.rank)));
            end
        end

        function [linear_9] = forward(obj, linear_x_1)
            % Validates that the input has the correct format and permutes its
            % dimensions into the reverse of the original PyTorch format.
            model_tt.ops.validateInput(linear_x_1, 2);
            [linear_x_1, linear_x_1_format] = model_tt.ops.permuteInputToReversePyTorch(linear_x_1, 2);
            [linear_x_1] = struct('value', linear_x_1, 'rank', int64(2));
            [linear_9] = tracedPyTorchFunction(obj, linear_x_1, true, "forward");
            % Permute U-labelled output to forward PyTorch dimension ordering
            if (any(dims(linear_9.value) == 'U'))
                linear_9 = permute(linear_9.value, fliplr(1:max(2, linear_9.rank)));
            end
        end

        function [linear_9] = tracedPyTorchFunction(obj, linear_x_1, isForward, predict)
            linear_weight_1 = obj.Param_weight;
            [linear_weight_1] = struct('value', dlarray(linear_weight_1, 'UU'), 'rank', 2);
            linear_bias_1 = obj.Param_bias;
            [linear_bias_1] = struct('value', dlarray(linear_bias_1, 'UU'), 'rank', 1);
            [linear_9] = model_tt.ops.pyLinear(linear_x_1, linear_weight_1, linear_bias_1);
        end
    end
end

Answer (1)

Shivani on 3 Jun 2024
The error message you're seeing indicates that the custom layer 'aten__linear0', which corresponds to a linear (fully connected) layer in PyTorch, is not supported for code generation with the 'mkldnn' target. MKLDNN is a deep learning library optimized for Intel CPUs, and the error occurs because this auto-generated custom layer returns a formatted dlarray from its predict method, which the code generation path for that target does not support.
From my understanding, a custom layer is not supported for Code Generation by default. Please refer to the following documentation for more information on extending Code Generation support to custom layers: https://www.mathworks.com/help/coder/ug/networks-and-layers-supported-for-c-code-generation.html#:~:text=Yes-,Custom%20layers,-Custom%20layers%2C%20with
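For reference, a hand-written replacement for the same linear operation that follows those code generation guidelines could look roughly like the sketch below. This is only an illustrative sketch, not the auto-generated class: the class name myLinearLayer is made up, it drops the nnet.layer.Formattable mixin, and its predict method works on plain arrays instead of returning a struct wrapping a formatted dlarray, which is what the error message objects to.
classdef myLinearLayer < nnet.layer.Layer
    % myLinearLayer  Illustrative (hypothetical) codegen-friendly linear layer.
    % Weights are [outFeatures x inFeatures] and Bias is [outFeatures x 1],
    % matching the PyTorch nn.Linear convention used by Param_weight/Param_bias.
    %#codegen

    properties (Learnable)
        Weights
        Bias
    end

    methods
        function obj = myLinearLayer(weights, bias, name)
            obj.Name = name;
            obj.Weights = weights;
            obj.Bias = bias;
        end

        function Z = predict(obj, X)
            % X is [inFeatures x batchSize]; Z is [outFeatures x batchSize].
            Z = obj.Weights * X + obj.Bias;
        end
    end
end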
Additionally, if possible, replace the unsupported custom layer ('aten__linear0') with an equivalent operation or layer that is supported for code generation in MATLAB. The following documentation link lists out all the networks and layers supported for Code Generation: https://www.mathworks.com/help/coder/ug/networks-and-layers-supported-for-c-code-generation.html
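As a rough illustration of that replacement, the sketch below swaps the auto-generated layer for a built-in fullyConnectedLayer, reusing the Param_weight and Param_bias values shown in the question. The variable names, the weight orientation, and whether replaceLayer accepts a dlnetwork all depend on your release and network, so treat this as a starting point rather than a drop-in fix.
% net is the dlnetwork returned by importNetworkFromPyTorch.
pyLayer = net.Layers(strcmp({net.Layers.Name}, 'aten__linear0'));

W = pyLayer.Param_weight;   % PyTorch linear weight: [outFeatures x inFeatures]
b = pyLayer.Param_bias;
if isa(W, 'dlarray'), W = extractdata(W); end
if isa(b, 'dlarray'), b = extractdata(b); end
b = b(:);                   % bias as a column vector

fcLayer = fullyConnectedLayer(size(W, 1), ...
    Name = 'aten__linear0', ...   % keep the name so existing connections are reused
    Weights = W, ...              % verify the orientation against the PyTorch model
    Bias = b);

% Requires a release where replaceLayer accepts a dlnetwork directly.
net = replaceLayer(net, 'aten__linear0', fcLayer);
net = initialize(net);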
Another possible workaround would be to generate generic C or C++ code that does not depend on third-party libraries, without targeting MKLDNN optimizations. You can refer to the following documentation link for more information on this: https://www.mathworks.com/help/coder/ug/generate-generic-cc-code-for-deep-learning-networks.html
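A minimal sketch of that option for MATLAB Coder is shown below; it assumes a hypothetical entry-point function myPredict that loads the network with coder.loadDeepLearningNetwork and calls predict, and an example input size of [1 10] single that you would replace with your own. For a MATLAB Function block in Simulink, the corresponding setting should be the deep learning target library in the model's Configuration Parameters, set to 'None'.
% Generate generic C/C++ code with no third-party deep learning library.
% "myPredict" is a hypothetical entry-point function; adjust -args to your input.
cfg = coder.config('lib');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig(TargetLibrary = 'none');
codegen -config cfg myPredict -args {ones(1, 10, 'single')} -report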
