How effective is a multi-core machine running MATLAB/Simulink these days (2012)?
5 views (last 30 days)
Hello, a few years ago I was told that even though MATLAB/Simulink supports multi-core machines, it was better to get a faster machine with fewer cores than a slower one with more cores. Does this rule still hold true?
How would the following Mac Pros be ranked?
1 x 3.33GHz 6-Core Intel Xeon
2 x 2.4 GHz 6-Core Intel Xeon (12 cores)
2 x 3.06GHz 6-Core Intel Xeon (12 cores)
Also, is MATLAB/Simulink plus toolboxes still better implemented under Windows than under Mac OS X? Will I get better performance using the above machines vs. the corresponding ones running Windows 7 Professional?
2 Comments
K E
24 Jul 2012
You may find this answer helpful, and this link if you are actually shopping for a high-end computer, though I have not bought from them myself.
Walter Roberson
24 Jul 2012
I've looked wistfully at that vendor for years; their systems are remarkable, and they specifically benchmark MATLAB. (They also have a good spirit of community involvement.)
Answers (3)
Jan
24 Jul 2012
Edited: Jan
25 Jul 2012
The question cannot be answered in general, as with all partially multi-threaded software. Besides the program itself, the data size matters too (a small timing sketch to check this on your own machine follows the two examples below):
-> filter is multi-threaded when the signal has at least 16 columns:
[b,a] = butter(8, 0.5);
X = rand(1e6 * 16, 1);
Y = filter(b, a, X); % No speedup!
X = rand(1e6, 16);
Y = filter(b, a, X); % Almost 12 times faster!
-> sum is multi-threaded for large arrays:
X = rand(1, 10000);
for i = 100:-1:1
Y(i) = sum(X); % No speedup!
end
X = rand(100, 10000);
Y = sum(X); % Almost 12 times faster!
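A minimal tic/toc sketch to verify this on your own machine, assuming the Signal Processing Toolbox for butter (timings will vary with core count, MATLAB version, and data size):
% Compare filter on one long column vs. the same data as 16 columns
[b, a] = butter(8, 0.5);              % requires the Signal Processing Toolbox
X1 = rand(1e6 * 16, 1);               % one long column: single-threaded
X2 = rand(1e6, 16);                   % 16 columns: multi-threaded
tic; Y1 = filter(b, a, X1); t1 = toc;
tic; Y2 = filter(b, a, X2); t2 = toc;
fprintf('1 column: %.2f s,  16 columns: %.2f s\n', t1, t2);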
Using parfor takes care of using all cores efficiently. But the automatic multi-threading of built-in commands might or might not distribute the work to different threads. And this is a standard problem of multi-threading: there is no method that can decide statically how many threads are optimal to process a certain piece of work. Such a decision must consider the available memory, the memory speed, the sizes of the different caches, the size of the input and output data, the number of cores, the number of threads belonging to other applications, etc. Therefore TMW's decision to multi-thread filter for >= 16 columns is blunt, but clear and reproducible.
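For comparison, a minimal parfor sketch (requires the Parallel Computing Toolbox; the loop body is just an arbitrary CPU-bound workload, and in releases of that era the worker pool was opened with matlabpool, while newer releases use parpool):
matlabpool open                      % newer releases: parpool
n = 200;
Y = zeros(1, n);
parfor i = 1:n
    Y(i) = sum(svd(rand(300)));      % independent, CPU-bound iterations
end
matlabpool close                     % newer releases: delete(gcp)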
[EDITED]
A machine that is twice as fast costs 10 times more money. If you pay $500 to a programmer to tweak your program until the bottleneck is processed by multiple cores, your program can become #cores or #cores/2 times faster. Although these are only very rough estimates, it demonstrates that you are better off investing in software than in hardware to reduce processing time.
0 Comments
Ryan G
24 Jul 2012
Multi-core operations in MATLAB/Simulink are typically done more explicitly, utilizing tools such as the Parallel Computing Toolbox or concurrent execution in Simulink.
I believe that some functions utilize multiple cores automatically, but based on my reading most do not, and most of Simulink does not.
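One rough way to see the implicit (built-in) multithreading mentioned above is to time a multi-threaded built-in with the default thread count and again with a single computational thread; a sketch, noting that maxNumCompThreads has long been flagged by MathWorks as subject to removal:
nThreads = maxNumCompThreads;        % current number of computational threads
X = rand(2000);                      % arbitrary test matrix
tic; Y = X * X; tAll = toc;          % matrix multiply is multi-threaded
maxNumCompThreads(1);                % restrict MATLAB to one computational thread
tic; Y = X * X; tOne = toc;
maxNumCompThreads(nThreads);         % restore the previous setting
fprintf('all threads: %.3f s,  one thread: %.3f s\n', tAll, tOne);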
0 Comments
Jason Ross
24 Jul 2012
Edited: Jason Ross
24 Jul 2012
Others have already discussed multi-core and parallel performance. Another difference for parallel performance is if you want to do CUDA and GPU work -- the Tesla-class CUDA cards are not available for Mac, and the only option is the Quadro 4000 for Mac. Certain problems map well to being solved by GPUs, and others don't, so this may or may not be something you care about at all.
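For reference, GPU work from MATLAB goes through the Parallel Computing Toolbox; a minimal sketch, assuming a supported CUDA-capable NVIDIA card (the matrix size is arbitrary):
A = rand(4000, 'single');            % data on the host
G = gpuArray(A);                     % transfer to the GPU
F = fft2(G);                         % fft2 on a gpuArray executes on the GPU
result = gather(F);                  % copy the result back to host memory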
There have also been improvements in making MATLAB more "Mac" over a few releases. Menus, Dock improvements, etc.
Another thing to keep in mind is that the Mac Pro line has not been updated to the Sandy Bridge architecture, although all the other Mac lines have been updated already. Apple is not forthcoming about when this will happen, but the Sandy Bridge architectural changes are pretty significant when it comes to computational performance -- see any of the hardware sites like AnandTech, Tom's Hardware, etc. for detailed discussions of the changes and benchmarks showing the difference versus the previous generation at largely the same clock speeds.
If you are set on buying a Mac Pro and can wait until Apple goes to Sandy Bridge (hopefully "soon"), that might be a better way to spend your money. I'm also a big proponent of stuffing all the RAM you can afford into the machine, as well as using a SSD for storage if possible. A machine is only going to be as fast as the slowest component, and if you are waiting on disk I/O because you have to utilize swap, the processors are effectively wasted.
2 Comments
Walter Roberson
24 Jul 2012
Now I'm confused... I find distinct references to Tesla C2050 and C2070 for Mac Pro. I am pretty sure that a couple of months ago I was finding them in the Apple Store (the C2050 officially from Apple itself, and the C2070 a third party with Apple's blessing) but now I cannot find either on Apple's site. Both were only for installation in the Mac Pro, I recall.
Jason Ross
24 Jul 2012
I shall be more clear. The only turnkey solution to installing a CUDA capable card in a Mac Pro is to use the Quadro 4000 for Mac.
Looking at the MacVidCards site, getting a Fermi card into a Mac Pro requires reflashing the card, fiddling with system extensions, and editing plists. Definitely not the "install CUDA driver, install the card" operation you get with the Quadro 4000.
I've also seen various PCI extender solutions offered, but they can get expensive quickly and likely need similar driver-extension fiddling to work properly.
From nVidia's site, the C2075 is for Windows and Linux only.