Input to output of LSTM classification network
I have a 1 x 400 signal as input to an LSTM with 75 hidden units, as shown in the figure. As I understand it, the LSTM output in my case is 75 x 400. I then send this to a fully connected layer, which takes each 1 x 75 vector as input across the 400 time steps and produces a 1 x 3 output per time step. I send that to the softmax layer, whose input and output are 1 x 3 and 3 x 1, and the same for the classification layer.
But when I use the activations function just to visualize each layer's outputs, I cannot understand how exactly the LSTM is learning, since the output of the LSTM layer and the outputs of all the other layers do not appear correlated (i.e., I know which class the input belongs to, but I cannot tell this by looking at the output of each layer). I get different outputs, yet when I use the classify function on the trained network, it gives the correct predicted class at the end. It would be a great help for my understanding if someone could explain how the LSTM output influences the other layers (because at each time step the LSTM is predicting a different class), and on what basis the classification layer at the end classifies the input signal.
I have searched all the MATLAB resources but unfortunately could not find an answer to this.
Answers (1)
Nadia Shaik
2023년 3월 10일
Hi Manoj,
I understand that you want to know how the LSTM output influences the other layers and how the classification takes place.
An LSTM network is a type of recurrent neural network (RNN) that can learn long-term dependencies between time steps of sequence data. In your case, the LSTM network is processing a sequence of length 400.
Each time step in the input sequence of the LSTM layer produces a 75-dimensional output due to the 75 hidden units. However, the output of the LSTM layer for each time step is not directly correlated with the output of the subsequent fully connected layer. Instead, it serves as a condensed summary of the input sequence up to that point, extracting significant features for classification purposes.
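As a rough illustration, here is a sketch of the architecture described in the question (1-channel input, 75 hidden units, 3 classes). The layer names ('lstm', 'fc', 'softmax') are assumptions added so the layers can be looked up later with the activations function:

```matlab
% Sketch of the network from the question; layer names are assumed.
layers = [
    sequenceInputLayer(1)
    lstmLayer(75, 'OutputMode', 'sequence', 'Name', 'lstm')
    fullyConnectedLayer(3, 'Name', 'fc')
    softmaxLayer('Name', 'softmax')
    classificationLayer];

% After training with trainNetwork, each time step of a 1-by-400 signal
% yields one 75-dimensional hidden-state vector:
%   act = activations(net, X, 'lstm');   % size 75-by-400
```

With 'OutputMode' set to 'sequence' the LSTM emits its hidden state at every time step; with 'last' it emits only the final state.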
The fully connected layer takes each 75-dimensional output from the LSTM layer and maps it to a 3-dimensional vector of class scores (logits), one per class. This mapping is learned during training via backpropagation, which adjusts the weights of the fully connected layer to minimize the classification error.
The softmax layer then converts the 3-dimensional score vector from the fully connected layer into a probability distribution over the 3 classes, and the classification layer computes the cross-entropy loss from this distribution during training. At inference, the class with the highest probability is selected as the predicted class for the input sequence. (Note that if the LSTM layer's 'OutputMode' is 'last', only the final time step's output determines the prediction.)
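To make the softmax step concrete, here is a toy numeric example (the scores are made up, not values from the actual network):

```matlab
% Softmax turns the 3 fully connected scores into class probabilities.
z = [2.0; 0.5; -1.0];          % raw scores (logits) from the FC layer
p = exp(z) ./ sum(exp(z));     % softmax: p ~ [0.79; 0.18; 0.04] (rounded)
[~, predictedClass] = max(p);  % predictedClass = 1
```

The scores are exponentiated and normalized, so the probabilities are positive and sum to 1, and the largest score always wins.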
It's important to note that the LSTM layer and the fully connected layer are trained jointly to minimize the classification error. This means that the features learned by the LSTM layer are optimized for classification, and the weights of the fully connected layer are optimized to make use of those features to make accurate predictions.
Therefore, the output of the LSTM layer alone may not be informative for understanding how the network is making predictions. It's the combination of the learned features from the LSTM layer and the weights of the fully connected layer that enable accurate classification.
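One way to see that the per-time-step outputs are not arbitrary is to plot how the class probabilities evolve across the sequence. This sketch assumes an 'OutputMode' of 'sequence' and a softmax layer named 'softmax':

```matlab
% X is one 1-by-400 signal; net is the trained network.
probSeq = activations(net, X, 'softmax');  % 3-by-400 class probabilities
plot(probSeq');                            % one curve per class
xlabel('Time step'); ylabel('Class probability');
legend('Class 1', 'Class 2', 'Class 3');
```

Early in the sequence the LSTM has seen little evidence, so the probabilities may fluctuate; as more of the signal is processed, the curve for the true class typically rises and stabilizes.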
For more information, refer to the MathWorks documentation on lstmLayer.
I hope this helps!