batch size in NARX model

3 views (last 30 days)
tony gian on 3 Aug 2018
Answered: Kothuri on 4 Jun 2025
If, in MATLAB's NARX model, the batch size is always the full size of the data and there is no mini-batch method, why should we use the shuffling command net.divideFcn = 'dividerand'; assuming, of course, that our data are not sequential or in any particular order? How does shuffling help avoid local minima and aid convergence in this case?

Answers (1)

Kothuri on 4 Jun 2025
The NARX training algorithm uses the entire dataset in each training epoch (i.e., "full-batch" training, not mini-batches). The "dividerand" function splits the data at random into three sets: training, validation, and test. During each epoch, the network uses the entire training set to compute weight updates.
  • If your data aren't shuffled first, a contiguous "block" split can inadvertently bias the training set.
  • A biased training set can cause the network to fit only a subset of your operating range and then fail badly on the validation or test sets.
  • By using the "dividerand" function, you ensure that all regions of your input-output space are (approximately) represented in the training portion, which fosters better convergence toward a truly global solution (see the small illustration after this list).
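For instance, here is a minimal illustration of what "dividerand" itself returns; the sample count and ratios are arbitrary placeholders, not values taken from the question:

% Illustration only: divide 20 time steps at random into 70/15/15 subsets.
[trainInd, valInd, testInd] = dividerand(20, 0.70, 0.15, 0.15);
% trainInd, valInd, and testInd are disjoint, randomly chosen index vectors.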
You can refer to the documentation for the "dividerand" function for more information.
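Putting it together, a minimal sketch of an open-loop NARX workflow with a random split might look like the following; it assumes the toolbox example data simplenarx_dataset and arbitrary delay, neuron, and ratio choices, so substitute your own series and settings:

% Example input and target series shipped with Deep Learning Toolbox.
[X, T] = simplenarx_dataset;

% NARX network: input delays 1:2, feedback delays 1:2, 10 hidden neurons.
net = narxnet(1:2, 1:2, 10);

% Random division of the time steps into training/validation/test sets.
net.divideFcn = 'dividerand';
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;

% Prepare the time-shifted sequences and train (full-batch each epoch).
[Xs, Xi, Ai, Ts] = preparets(net, X, {}, T);
net = train(net, Xs, Ts, Xi, Ai);

% Evaluate overall performance on the prepared data.
Y    = net(Xs, Xi, Ai);
perf = perform(net, Ts, Y);

By contrast, setting net.divideFcn = 'divideblock' would assign contiguous blocks of time steps to each subset, which is exactly the biased "block" split described above when the data are ordered.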

