Why are NVIDIA A100 GPUs slower than RTX 3090 GPUs?
Hello, we have an RTX 3090 GPU and an A100 GPU.
Using the MATLAB Deep Learning Toolbox Model for ResNet-50 Network, we found that the A100 was 20% slower than the RTX 3090 when training the ResNet-50 model.
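For reference, below is a minimal sketch of the kind of timing run we mean, assuming the resnet50 pretrained model from the support package is installed; the synthetic data and training options are placeholders, not our actual setup.

    % Minimal timing sketch (assumptions: resnet50 support package, one GPU).
    lgraph = layerGraph(resnet50);
    % Replace the pretrained classification layer so classes come from our labels.
    lgraph = replaceLayer(lgraph,'ClassificationLayer_fc1000', ...
        classificationLayer('Name','output'));

    % Synthetic 224x224x3 images with random labels, just to time training.
    X = rand(224,224,3,256,'single');
    Y = categorical(randi(1000,256,1),1:1000);

    opts = trainingOptions('sgdm', ...
        'ExecutionEnvironment','gpu', ...   % select the GPU first with gpuDevice(n)
        'MiniBatchSize',32, ...
        'MaxEpochs',2, ...
        'Verbose',false);

    tic
    trainNetwork(X,Y,lgraph,opts);
    toc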
Our questions are as follows.
1. I heard that the A100 and the RTX 3090 differ in speed because they have different numbers of CUDA cores and Tensor Cores. Does MATLAB use only CUDA cores? If Tensor Cores can be used, I would appreciate a link to an example showing how.
2. You can specify single, double, or half precision when training on a GPU. I heard that MATLAB automatically uses double precision; please confirm whether that is correct.
Thank you.
Accepted Answer
David Willingham
13 May 2022
See this answer for an explanation:
2 Comments
Joss Knight
16 May 2022
It is possible to train models in double precision, either by using model functions or by using a dlnetwork and converting its weights to double precision before training.
However, I don't believe this is what you want: you won't get a speedup over the RTX 3090 training in single precision; it will still be considerably slower.
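For example, here is a minimal sketch of the dlnetwork route, assuming a release where dlupdate accepts a dlnetwork directly; resnet50 is used only as a stand-in network.

    % Sketch: cast a dlnetwork's learnable parameters to double precision.
    lgraph = layerGraph(resnet50);
    % dlnetwork objects cannot contain output layers, so remove the classifier.
    lgraph = removeLayers(lgraph,'ClassificationLayer_fc1000');
    net = dlnetwork(lgraph);
    net = dlupdate(@double, net);   % apply double() to every learnable parameter
    % Inside the custom training loop, input batches must be cast to match:
    % dlX = dlarray(double(X),'SSCB');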