Does groupedConvolution2dLayer support input data with a T dimension?

Whenever I apply groupedConvolution2dLayer to data with a T dimension, e.g.
groupedConvolution2dLayer([1 filterSize], 1, "channel-wise", DilationFactor=dilationFactor, Padding="same", Name="conv_1_" + k + "_" + l)
the following error is produced:
I could implement channel-wise convolution by splitting the channels and reassembling them with depthConcatenationLayer, but such networks end up being much slower to train.

Answer (1)

Hi,
I understand that you are facing an error when using 'groupedConvolution2dLayer' with input data that has a "T" dimension.
The output of the layer "channels_1_1", which acts as input to the layer "conv_1_1_1", has dimensions 5(C) x 1(B) x 128(T); that is, it is vector-sequence data with no spatial "S" dimensions.
However, "groupedConvolution2dLayer" expects image data with spatial "S" dimensions and a channel "C" dimension. It does not support sequence input, and therefore does not support a "T" (time) dimension.
Refer to the documentation to learn more about dimension labels in "dlarray".
Use the "finddim" function to find the dimensions of a "dlarray" with a given label:
dim = finddim(layout,"S")
Refer to the documentation for "groupedConvolution2dLayer" for more details.
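For example, a quick check (a small sketch using the dimension sizes from the error message above) confirms that such data has a "T" dimension but no "S" dimension:

```matlab
% Data shaped like the input to "conv_1_1_1": 5(C) x 1(B) x 128(T)
X = dlarray(randn(5,1,128),"CBT");
dims(X)                 % 'CBT'
tdim = finddim(X,"T")   % 3: a time dimension exists
sdim = finddim(X,"S")   % empty: no spatial dimensions
```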

3 Comments

Artem Lensky on 13 Sep 2023 (edited 17 Sep 2023)
Milan,
Thank you for your answer.
A couple of months ago I implemented an approach that uses finddim and related functions to first find the "T" dimension and then relabel it as "S". You can see my solution here: https://au.mathworks.com/matlabcentral/answers/1952999-channel-wise-convolution-deep-learning-layer-for-1d-and-3d-data-i-e-groupedconvolution1dlayer-grou
Although this solution works, it has limitations: it requires all input signals to be of the same length, and it only works for 1D and 2D data.
Another approach is to split the input into separate channels, as proposed in the accepted answer at the link above. That solution works for 1D, 2D, and 3D data, which is great! However, for some reason the depthConcatenationLayer required to reassemble the per-channel convolution results is very slow.
It looks like we don't support sequence inputs to groupedConvolution2dLayer, but dlconv does support grouped convolution on sequence data, so it might be reasonable to write a custom layer for this. Here is a demonstration of dlconv on sequence data; note the use of WeightsFormat:
C = 6; B = 1; T = 128;
X = dlarray(randn(C,B,T),"CBT");   % vector-sequence input
filterSize = 15;
channelsPerGroup = 2;
filtersPerGroup = 4;
numGroups = 3;                     % channelsPerGroup*numGroups must equal C
W = randn(filterSize,channelsPerGroup,filtersPerGroup,numGroups);
b = zeros(filtersPerGroup*numGroups,1);
Y = dlconv(X,W,b, WeightsFormat="TCUU");
% Alternatively, instead of WeightsFormat, make W a formatted dlarray:
W = dlarray(W,"TCUU");
Y = dlconv(X,W,b);
You should be able to use this in the implementation of a custom layer's predict function to get a grouped convolution over sequence data.
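For example, such a custom layer might look like the sketch below (the class name, constructor signature, and weight initialization are illustrative assumptions, not an official implementation):

```matlab
classdef groupedSequenceConvLayer < nnet.layer.Layer & nnet.layer.Formattable
    % Sketch: grouped convolution over the "T" dimension of "CBT"
    % sequence data, implemented with dlconv in predict.
    properties (Learnable)
        Weights   % filterSize x channelsPerGroup x filtersPerGroup x numGroups
        Bias      % (filtersPerGroup*numGroups) x 1
    end
    methods
        function layer = groupedSequenceConvLayer(filterSize, ...
                channelsPerGroup,filtersPerGroup,numGroups,name)
            layer.Name = name;
            layer.Weights = 0.01*randn(filterSize,channelsPerGroup, ...
                filtersPerGroup,numGroups);
            layer.Bias = zeros(filtersPerGroup*numGroups,1);
        end
        function Y = predict(layer,X)
            % X is a formatted dlarray with dimensions "CBT";
            % WeightsFormat="TCUU" makes dlconv convolve over T.
            Y = dlconv(X,layer.Weights,layer.Bias, ...
                WeightsFormat="TCUU",Padding="same");
        end
    end
end
```

For a channel-wise (depthwise) convolution as in the original question, set channelsPerGroup = 1, filtersPerGroup = 1, and numGroups equal to the number of input channels.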
The relabel-T-to-S strategy could be OK if you pad the data accordingly. You would have to do the padding manually. If you are using trainNetwork, it could be quite awkward to arrange the minimal amount of padding necessary per mini-batch (I think you would need to write a custom datastore). Alternatively, you could do the padding in the mini-batch preprocessing function of minibatchqueue and use a custom training loop.
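As a sketch of that relabeling idea (illustrative only; Artem's actual implementation is at the link in his earlier comment), equal-length "CBT" data can be permuted into the "SSCB" layout that groupedConvolution2dLayer expects, treating time as a spatial dimension:

```matlab
X = dlarray(randn(6,1,128),"CBT");   % 6(C) x 1(B) x 128(T)
% Relabel time as the second spatial dimension:
% permute to 1(S) x 128(S) x 6(C) x 1(B)
Xs = dlarray(permute(stripdims(X),[4 3 1 2]),"SSCB");
```

With this layout, a [1 filterSize] kernel in groupedConvolution2dLayer slides along the time axis; the padding caveats above still apply.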
I would suspect the reason that manually splitting the channel dimension is slow is that, instead of doing one big grouped convolution, the software has to do many separate convolutions. If depthConcatenationLayer itself is very slow, then that is something we should look at internally.
Artem Lensky on 18 Sep 2023 (edited 18 Sep 2023)
Thank you, Ben. I will experiment with dlconv and the suggested implementation.
You might be right about the cause of the slow execution: it is not depthConcatenationLayer per se, but rather the fact that the convolution is performed on each channel separately, so no optimization can be performed across all channels together.


Category: Deep Learning Toolbox
Release: R2023a
Asked: 5 Sep 2023
Edited: 18 Sep 2023
