Training time-delay neural networks with Parallel and GPU Computing?
I'm trying to speed up the training of my 'timedelaynet' by using the GPU support provided by the Parallel Computing Toolbox. Although I use the same network structure in both cases, when I compare CPU vs. GPU performance, the CPU achieves better performance, and training on the GPU takes longer than training on the CPU. Any explanation?
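For reference, a minimal sketch of the kind of comparison being described, assuming the standard `train` function with its `'useGPU'` option and the built-in `simpleseries_dataset` example data (the poster's actual network and data are not shown). Note that the default training function `trainlm` does not support GPU computing, so a gradient-based method such as `trainscg` is used here:

```matlab
% Minimal sketch (assumes Parallel Computing Toolbox and a supported GPU).
[X, T] = simpleseries_dataset;           % example time-series dataset
net = timedelaynet(1:2, 10);             % input delays 1:2, 10 hidden neurons
net.trainFcn = 'trainscg';               % GPU-capable; trainlm is CPU-only
[Xs, Xi, Ai, Ts] = preparets(net, X, T); % shift inputs/targets for the delays

tic; netCPU = train(net, Xs, Ts, Xi, Ai);                  tCPU = toc;
tic; netGPU = train(net, Xs, Ts, Xi, Ai, 'useGPU', 'yes'); tGPU = toc;
fprintf('CPU: %.2f s   GPU: %.2f s\n', tCPU, tGPU);
```

For a small network like this, the GPU timing is often slower than the CPU timing: data transfer and kernel-launch overhead can dominate when the network and batch are small, so GPUs typically pay off only for large networks or large datasets.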