Does a custom deep learning training loop take more GPU memory than trainNetwork()?
Hi,
I followed the instructions from the link below to create a custom training loop for a U-Net architecture.
With the same network architecture and the same "multi-gpu" setting (I have 2 RTX 2060 GPUs), I found that the custom training loop can handle a mini-batch size of at most 4, whereas the built-in trainNetwork() function can handle a mini-batch size of up to 16.
Is it normal for a custom training loop to use more GPU memory than trainNetwork()?
Thanks!