Why do I see a drop (or jump) in my final validation accuracy when training a deep learning network?
MathWorks Support Team
19 Feb 2019
Edited: MathWorks Support Team
19 Feb 2019
Accepted Answer
MathWorks Support Team
19 Feb 2019
If the network contains batch normalization layers, the final validation metrics are often different from the validation metrics evaluated during training. This is because after the last training iteration the network undergoes a 'finalization' step that recomputes the batch normalization statistics over the entire training data, whereas during training those statistics are computed from each mini-batch.
If, in addition to batch normalization layers, the network contains dropout layers, the interaction between the two can aggravate this issue, as described here: https://arxiv.org/abs/1801.05134
If the batch normalization (and dropout) layers are removed from the network, the 'final' accuracy should match the last-iteration accuracy.
Increasing the mini-batch size can also alleviate the issue, since statistics computed from a larger mini-batch are better estimates of the statistics of the entire training data.
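To make the last point concrete, here is a minimal NumPy sketch (Python rather than MATLAB, and with synthetic data standing in for real layer activations) of why mini-batch statistics deviate from the full-dataset statistics a batch normalization layer uses after finalization, and why larger mini-batches deviate less:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic activations for a single batch-norm channel (hypothetical data).
data = rng.normal(loc=2.0, scale=3.0, size=8192)

def minibatch_means(x, batch_size):
    """Per-mini-batch means, as seen by batch normalization during training."""
    return x.reshape(-1, batch_size).mean(axis=1)

# During training, normalization uses per-mini-batch statistics...
small_batch_means = minibatch_means(data, 16)
large_batch_means = minibatch_means(data, 512)

# ...while the finalization step recomputes them over the whole training set.
full_mean = data.mean()

# Average deviation of the mini-batch mean from the full-data mean:
# the larger mini-batch tracks the finalized statistic more closely.
print(np.abs(small_batch_means - full_mean).mean())
print(np.abs(large_batch_means - full_mean).mean())
```

The same comparison applies to the per-batch variances; the gap between training-time and finalized statistics is what shows up as a drop (or jump) in the final validation accuracy.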