Caused by: Error using nnet.internal.cnnhost.lstmForwardGeneral Out of memory.
I am using R2024a. Why am I getting this error?
Error using trainnet (line 46)
Execution failed during layer(s) 'biLSTM'.
Error in LSTM (line 65)
net = trainnet(XTrain,TTrain,layers,"crossentropy",options);
Caused by:
    Error using nnet.internal.cnnhost.lstmForwardGeneral
    Out of memory.
% Determine data dimensions from the cell array of sequences
numChannel = size(input{1},2);
className = categories(label);
numObservation = numel(input);

XTrain = input;
TTrain = label;

% Sort the training sequences by length
sequenceLength = zeros(1,numObservation);
for i = 1:numObservation
    sequence = XTrain{i};
    sequenceLength(i) = size(sequence,1);
end
[sequenceLength,idx] = sort(sequenceLength);
XTrain = XTrain(idx);
TTrain = TTrain(idx);
figure
bar(sequenceLength)
xlabel("Sequence")
ylabel("Length")
title("Sorted Data")
numHiddenUnits = 120;
numClass = 6;
layers = [
    sequenceInputLayer(numChannel)
    bilstmLayer(numHiddenUnits,OutputMode="last")
    fullyConnectedLayer(numClass)
    softmaxLayer]
options = trainingOptions("adam", ...
    MaxEpochs=200, ...
    InitialLearnRate=0.002,...
    GradientThreshold=1, ...
    Shuffle="never", ...
    Plots="training-progress", ...
    Metrics="accuracy", ...
    Verbose=false);
net = trainnet(XTrain,TTrain,layers,"crossentropy",options);
Accepted Answer

Aravind on 17 June 2025
The error you are encountering typically means the BiLSTM layer requires more memory than your system (CPU or GPU) has available. This commonly occurs with long or variable-length sequences, large mini-batch sizes, or deep network architectures. To address this, you can try one or more of the following approaches:
Truncate Long Sequences:
Some sequences may be excessively long. Truncating them to a manageable length can help:
maxLen = 100; % Choose an appropriate value from the sequence-length plot above
XTrain = cellfun(@(x) x(1:min(end,maxLen),:), XTrain, 'UniformOutput', false); 
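If you prefer to pick maxLen from the data rather than by eye, a minimal sketch (assuming the Statistics and Machine Learning Toolbox is available for prctile, and reusing the sequenceLength vector computed in the question) is to cap at a high percentile of the observed lengths:
maxLen = round(prctile(sequenceLength,90)); % 90th percentile is an illustrative cutoff; adjust to taste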
Reduce the Mini-Batch Size:
Decreasing the batch size reduces memory consumption:
options = trainingOptions("adam", ...
    MiniBatchSize=16, ... % Default is 128; try 16 or 8
    ...);
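Merged with the options already used in the question, this would look like:
options = trainingOptions("adam", ...
    MaxEpochs=200, ...
    InitialLearnRate=0.002, ...
    GradientThreshold=1, ...
    MiniBatchSize=16, ...
    Shuffle="never", ...
    Plots="training-progress", ...
    Metrics="accuracy", ...
    Verbose=false);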
Lower the Number of Hidden Units:
Your model currently uses 120 hidden units, which is quite large and can lead to high memory usage. Reducing this number to 60 or even 40 can significantly alleviate memory pressure.
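For example, keeping the rest of the layer array from the question unchanged:
numHiddenUnits = 60; % reduced from 120; try 40 if memory is still tight
layers = [
    sequenceInputLayer(numChannel)
    bilstmLayer(numHiddenUnits,OutputMode="last")
    fullyConnectedLayer(numClass)
    softmaxLayer];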
Switch to CPU Training:
If you are using a GPU for training and running out of memory, consider switching to CPU training. The model will be loaded into system RAM, which typically has a much larger capacity than GPU memory.
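With trainingOptions, this is controlled by the ExecutionEnvironment option (its default is "auto", which prefers a GPU when one is available):
options = trainingOptions("adam", ...
    ExecutionEnvironment="cpu", ... % train on the CPU using system RAM
    ...);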
Applying these adjustments should help reduce the memory footprint of your network and resolve the error you are experiencing.
Hope this helps!