Ensuring reproducibility in training YOLOv2 in the Deep Learning Toolbox

Michael Younger on 30 April 2020
Answered: Ryan Comeau on 10 May 2020
I'm using the YOLOv2 network in the Deep Learning Toolbox. We are seeing significant variations in test results when running the same training code more than once.
Is it possible to ensure reproducibility in training? If so, which options/flags would need to be set to make training reproducible?
One option I see already is to set the "Shuffle" option to "never" (its default is "once").
But are there other flags/random seeds I need to set to ensure repeatability?
Thanks!
2 Comments
Mohammad Sami on 30 April 2020
Edited: Mohammad Sami on 30 April 2020
You can try using rng with a fixed seed as the first step.
I could not find direct documentation on this for training deep learning models, but I am assuming the same approach applies there as well.
https://www.mathworks.com/help/matlab/math/generate-random-numbers-that-are-repeatable.html
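A minimal sketch of that suggestion, assuming the seed is set at the very top of the training script (the seed value 0 and the generator names are arbitrary choices, not recommendations):

```matlab
% Fix the CPU random number generator before any training code runs.
rng(0, 'twister');

% If training runs on a GPU, its generator has a separate seed
% (gpurng requires Parallel Computing Toolbox).
gpurng(0, 'Threefry');

% ... then set up trainingData / lgraph as usual and train, e.g.:
% detector = trainYOLOv2ObjectDetector(trainingData, lgraph, options);
```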


Answers (1)

Ryan Comeau on 10 May 2020
Hello,
What you are experiencing is very normal for deep learning. Network initialization assigns initial weights to each of your layers, and those initial weights can be fixed by fixing the random seed, as mentioned in the comments above. That may not resolve your problem, however. The algorithm that minimizes your loss function, stochastic gradient descent, is by definition not deterministic, so there will always be some variance in your results. This should be seen as a good thing, though: we don't want to get stuck in a local minimum, which would be more likely to happen if our algorithm were deterministic.
If you want to see deep learning behave as deterministically as possible, set the mini-batch size to 1. This takes away the ability to escape local minima, and you will see a drop in performance.
The shuffle option you are describing shuffles the order of the data so that your mini-batches do not always contain the same samples.
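As a rough sketch, the relevant trainingOptions settings might look like the following (the 'sgdm' solver and the numeric values are placeholders, not recommendations):

```matlab
% 'Shuffle','never' keeps the mini-batch composition identical
% across runs ('once' is the default).
options = trainingOptions('sgdm', ...
    'Shuffle', 'never', ...
    'MiniBatchSize', 16, ...    % placeholder value
    'MaxEpochs', 20, ...        % placeholder value
    'InitialLearnRate', 1e-3);  % placeholder value
```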
Lastly, if you do want "consistent" training results, simply redefine what consistent means in this case: run your training 10 times, and the result that occurs most frequently is your replicable result.
Hope this helps,
RC

Category: Sequence and Numeric Feature Data Workflows

Release: R2020a
