
GPU computing for machine learning (bagging / ensemble)

5 views (last 30 days)
Joe on 20 Aug 2015
Commented: Ilya on 25 Aug 2015
Hi, is bagging/ensemble supported by GPU computing in MATLAB?
I need to create a random forest that requires a lot of processing time, and I was wondering if I can accelerate it using GPU computing.
Thanks

Answers (4)

Chetan Rawal on 21 Aug 2015
Yes, TreeBagger is supported and has built-in GPU support. Give it a try: http://www.mathworks.com/products/parallel-computing/builtin-parallel-support.html
You should be able to grow the trees on the GPU. You might also gain further performance by aggregating your ensemble across multiple CPU cores using Parallel Computing Toolbox. I'd suggest profiling your code first to see whether this second step will help.
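For the parallel route, a minimal sketch with TreeBagger (X and Y are placeholders for your predictor matrix and response; assumes Parallel Computing Toolbox is installed):

% Start a pool of local workers (default local profile).
pool = parpool('local');

% Grow a 100-tree bagged ensemble, distributing tree growth across workers.
opts = statset('UseParallel', true);
b = TreeBagger(100, X, Y, 'Method', 'classification', 'Options', opts);

% Shut the pool down when done.
delete(pool);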
Chetan
1 Comment
Sean de Wolski on 21 Aug 2015
It has built-in parallel support, not GPU support.



Ilya on 21 Aug 2015
There is no GPU support for decision trees or their ensembles. If you work in a sufficiently recent release, decision trees are multithreaded. In addition, TreeBagger, as noted, has parallel support through Parallel Computing Toolbox.
Can you tell us about your data size (number of observations and predictors) and time requirements? Have you already tried fitting a random forest and concluded it is too slow for your case?

Joe on 22 Aug 2015
Hi,
Thanks for the comments. I am already using parallel computing with my 6 cores.
I have around 10 million observations, around 100 predictors, and very weak signals. Fitting a single tree took 20 minutes. However, this is one piece of a big optimization, and I need to fit a few thousand of them.
Any help appreciated!
1 Comment
Ilya on 23 Aug 2015
Edited: Ilya on 23 Aug 2015
Try boosting. You don't say much about your data, so I can't recommend a specific boosting algorithm. Use at least a few dozen trees and tune the minimal leaf size ('minleaf') to obtain the best accuracy on an independent test set.
If you insist on using TreeBagger, likewise tune the minimal leaf size. By default, trees for a random forest are split to a very fine level (minleaf = 1 for classification and 5 for regression). You likely don't need such deep trees for 10M observations. Increasing the leaf size would speed up training and greatly reduce the memory footprint of the ensemble.
Boosting typically outperforms random forest in accuracy on large datasets. Boosting trees is also faster because you can keep the trees fairly shallow. The disadvantage is that you need to spend more time searching for optimal values of the boosting parameters, such as the minimal leaf size and, for some algorithms, the learning rate; see the sketch after this comment.
If you are using R2015a or later, trees are multithreaded. parfor across local cores is not going to help you much and can even cause a slowdown, because you would be consuming quite a bit more memory.
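For illustration, a sketch of the boosting route using fitensemble (X, Y, Xtest, and Ytest are hypothetical placeholders for your training and test data; the leaf size, tree count, and learning rate are values to tune, not recommendations):

% Shallow trees via a large minimal leaf size ('MinLeaf' is the
% templateTree parameter name in 2015-era releases).
t = templateTree('MinLeaf', 1000);

% Boost 200 shallow trees; LogitBoost is one choice for binary classification.
ens = fitensemble(X, Y, 'LogitBoost', 200, t, 'LearnRate', 0.1);

% Estimate classification error on an independent test set.
testErr = loss(ens, Xtest, Ytest);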



Joe on 24 Aug 2015
Thanks a lot, Ilya!
What do you need to know about the data in order to recommend a specific boosting algorithm?
1 Comment
Ilya on 25 Aug 2015
Would you consider reading the doc, in particular this section, and then asking a specific question?

