Does a larger number of hidden units in an LSTM layer mean the network requires more training time?

14 views (last 30 days)
I have the following queries regarding the number of hidden units in an LSTM layer:
Does a larger number of hidden units in the LSTM layer mean the network requires more training time?
That is, how does the number of hidden units in an LSTM layer affect the training time and computational complexity of the network?
Does a larger number of hidden units help the LSTM network remember more of the previous data?

Accepted Answer

Himanshu on 3 March 2023
Hello Debojit,
I understand that you have some queries regarding the hidden units in the LSTM layer.
The training time of the network depends on several factors, such as the number of layers in the architecture, the complexity of the architecture, the size of the dataset, and so on.
Increasing the number of hidden units in an LSTM layer can increase the network's training time and computational complexity, because the number of computations required to update and propagate information through the layer grows with the layer's size.
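As a rough illustration of how the computation scales, the following sketch counts the learnable parameters of a single lstmLayer in Deep Learning Toolbox (input weights, recurrent weights, and bias, i.e. 4*H*(D + H) + 4*H for H hidden units and D input features); the values of D and H below are illustrative assumptions, not recommendations.

D = 10;                            % assumed input feature dimension
for H = [50 100 200 400]           % candidate numbers of hidden units
    % input weights (4H-by-D) + recurrent weights (4H-by-H) + bias (4H-by-1)
    numParams = 4*H*(D + H) + 4*H;
    fprintf("numHiddenUnits = %4d -> %8d learnable parameters\n", H, numParams);
end

Because of the recurrent weight matrix, the parameter count grows roughly quadratically in the number of hidden units once it exceeds the input dimension, which is why doubling numHiddenUnits can more than double the per-step computation.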
Increasing the number of hidden units also increases the network's capacity to store and learn from past data. However, this is not always beneficial: there is a trade-off between network capacity and generalization performance.
A larger network may have more capacity to remember past data, but it is also more prone to overfitting, which can hurt the network's performance on unseen data.
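For context, here is a minimal sketch of a sequence-to-one regression network in which numHiddenUnits is the capacity knob discussed above; the specific values (numFeatures, numHiddenUnits, the dropout rate) are illustrative assumptions, and the dropoutLayer is one common way to counteract the overfitting that added capacity invites.

numFeatures = 10;                  % assumed input feature dimension
numHiddenUnits = 100;              % capacity knob: larger = more memory, more compute
layers = [
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','last')
    dropoutLayer(0.2)              % regularization to offset the added capacity
    fullyConnectedLayer(1)
    regressionLayer];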
You can refer to the MATLAB documentation on long short-term memory (LSTM) networks to learn more.

More Answers (0)

Categories

Find more on Deep Learning Toolbox in Help Center and File Exchange

