Using tapped delays when training an artificial neural network for dynamic system modeling.
Hello, I'm trying to use an artificial neural network to create a model for the following system:

From what I have understood so far, I should first record the response of the system to an arbitrary input and use that data to train my network. After discretisation, this second-order difference equation (
) would describe the dynamic behaviour of the system. Now, what is the method for implementing these delays in my neural network? Should I just feed these delayed outputs and inputs of the plant as inputs to my neural network, or is there an easier way to achieve this?
Answers (1)
Harsh
11 January 2025
Hi Alirea,
You can use a time delay neural network (TDNN), which is specifically designed to handle temporal sequences. At each time step a TDNN feeds a fixed-size window of past input samples (a tapped delay line) into the network, which allows it to learn dependencies across time.
You can use the “timedelaynet” function in MATLAB for this. Please refer to the following documentation to understand how to implement the function in MATLAB - https://www.mathworks.com/help/releases/R2022b/deeplearning/ref/timedelaynet.html
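A minimal sketch of the workflow, assuming you have already collected the plant's input sequence `X` and output sequence `T` as cell arrays of samples (the random data below is just a placeholder for illustration):

```matlab
% Placeholder data - replace with your recorded plant input/output sequences,
% stored as 1-by-N cell arrays of samples.
X = num2cell(rand(1, 100));   % plant input u(k)
T = num2cell(rand(1, 100));   % plant output y(k)

% TDNN with input delays 1:2 (matching a second-order difference equation)
% and 10 hidden neurons.
net = timedelaynet(1:2, 10);

% preparets shifts the sequences and fills the initial delay states,
% so you do not have to build the delayed regressors by hand.
[Xs, Xi, Ai, Ts] = preparets(net, X, T);

% Train the network and simulate its response.
net = train(net, Xs, Ts, Xi, Ai);
Y = net(Xs, Xi, Ai);
```

In other words, you do not need to construct the delayed signals manually; the delay values passed to `timedelaynet` and the `preparets` call handle that for you.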