GPU Recommendation for Parallel Computing
9 views (last 30 days)
Hi. I am trying to speed up a boosted tree learning algorithm via parfor. I have gotten it running on AWS, but that hasn't proven to be an ideal solution for development work: AWS charges a lot for keeping the cluster online, and switching the cluster from offline to online takes a fair amount of time. So I am interested in doing some of the development work on a local GPU cluster instead of AWS. Can you recommend a decent GPU (at ~$1000) for a problem that requires 100-500 iterations, each of which takes around 3 minutes to run in serial on a decent laptop, and relies on around 200 MB of data being passed to and processed by each worker? Or is this not a sensible route to pursue given my problem and budget? I just don't have a good sense of the extent to which such a problem could be parallelized using a single GPU (or whether the memory or the processing capacity of the individual GPU workers would be the binding constraint).
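For context, here is a minimal sketch of the kind of parfor loop described above. The names growForestp, trainData, and nIter are placeholders standing in for the asker's actual function and variables, not code from the thread:

```matlab
% Sketch of the parallel loop described in the question.
% growForestp and trainData are hypothetical stand-ins for the
% asker's actual learner and ~200 MB data set.
nIter = 100;                    % the question mentions 100-500 iterations
results = cell(nIter, 1);

if isempty(gcp('nocreate'))
    parpool('local');           % start a local pool of workers
end

parfor k = 1:nIter
    % trainData is a broadcast variable: parfor copies it to each
    % worker once per pool, not once per iteration.
    results{k} = growForestp(trainData, k);
end
```

Because the 200 MB data set is broadcast once per worker rather than per iteration, the communication cost is paid up front; the per-iteration cost is dominated by growForestp itself.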
4 comments
Matt J
14 Feb 2019
It's a good start, but we need to see the slow part of the code, presumably growForestp, if we're to recommend ways to optimize it.
Accepted Answer
Matt J
14 Feb 2019
Edited: Matt J on 14 Feb 2019
Well, the one general thing I can say is that if you convert the variables data1...data5 to gpuArray objects, the manipulations done by growForestp would likely be considerably faster, assuming they consist of a lot of matrix arithmetic. In other words, you can use the GPU to gain speed in ways other than just deploying parallel instances of growForestp.
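The conversion Matt describes can be sketched as follows. The variable names (data1, data2) and the sample computation are illustrative assumptions, since the thread never shows growForestp's internals:

```matlab
% Sketch: moving working arrays onto the GPU so that subsequent
% matrix arithmetic runs there automatically. data1 and data2 are
% hypothetical numeric matrices standing in for the asker's data.
data1 = gpuArray(rand(2000));
data2 = gpuArray(rand(2000));

% Any expression whose inputs are gpuArrays executes on the GPU:
C = data1 * data2' + 0.1 * data1;

% Transfer the result back to host memory when CPU-side code needs it:
C = gather(C);
```

Keeping intermediate results on the GPU (i.e., calling gather only at the end) avoids repeated host-device transfers, which are often the dominant cost for moderate-sized matrices.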
I don't know what kind of GPU resources AWS offers. Maybe each cluster node has its own GPU? If you want to implement this on your own local cluster sharing a single GPU, I would probably go with the GeForce GTX Titan X (which has 12 GB RAM) or the GeForce GTX 1080 Ti (which has 11 GB RAM). That should easily accommodate jobs from at least 20 parallel workers. Of course, I am not sure what the communication overhead would be from 20 workers trying to share/access a single GPU card...
2 comments
Walter Roberson
14 Feb 2019
MathWorks recommends against sharing a GPU between parallel workers. The communication overhead of synchronization is one of the most expensive GPU operations.
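On a machine with multiple GPUs, Walter's concern can be avoided by binding each worker to its own device. A sketch using the Parallel Computing Toolbox functions gpuDeviceCount and gpuDevice (labindex is each worker's index inside an spmd block; the one-worker-per-GPU sizing is an assumption, not from the thread):

```matlab
% Sketch: one worker per GPU, so no two workers share a device.
nGPUs = gpuDeviceCount;
if isempty(gcp('nocreate'))
    parpool('local', nGPUs);
end

spmd
    % Worker i selects GPU i; all subsequent gpuArray work on this
    % worker runs on its own dedicated device.
    gpuDevice(labindex);
end
```

With this setup, subsequent parfor or spmd code that creates gpuArrays runs on each worker's assigned GPU, sidestepping the cross-worker synchronization overhead described above.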
More Answers (0)