Parpool thread-based pool size

Views: 15 (last 30 days)
Alessandro on 8 Sep 2025
Commented: Georgi on 16 Oct 2025
I have a computer with 14 physical cores (12th Gen Intel(R) Core(TM) i7-12800H, 2400 MHz, 14 Core(s), 20 Logical Processor(s)).
I can set
p=parpool('processes',8)
and I get what I expect
p =
 ProcessPool with properties:
             Connected: true
            NumWorkers: 8
                  Busy: false
               Cluster: processes (Local Cluster)
         AttachedFiles: {}
     AutoAddClientPath: true
             FileStore: [1x1 parallel.FileStore]
            ValueStore: [1x1 parallel.ValueStore]
           IdleTimeout: 30 minutes (30 minutes remaining)
           SpmdEnabled: true
However, if I type
delete(p)
p=parpool('threads',8)
I get this error message:
Error using parpool (line 108)
A minimum pool size of 8 was requested. The maximum thread-based pool size is currently 6.
Is there a way to increase the maximum thread-based pool size above 6 (given that my Intel CPU has 14 physical cores)?
Thanks!

Accepted Answer

Walter Roberson on 8 Sep 2025
With that particular number of physical cores (14), chances are high that you have a mix of "performance" cores and "efficiency" cores; the i7-12800H has 6 performance cores and 8 efficiency cores, which matches the maximum pool size of 6 in the error message. There is currently a restriction that thread-based pools can only use performance cores.
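One experiment worth trying, on the assumption that the thread-pool ceiling tracks maxNumCompThreads (which on hybrid Intel CPUs appears to default to the number of performance cores): raise maxNumCompThreads before requesting the pool. This is a sketch, not an officially documented fix, and oversubscribing efficiency cores can slow code down, so benchmark before relying on it.
oldN = maxNumCompThreads(14);  % try allowing all 14 physical cores
p = parpool('Threads', 8);     % re-request the 8 workers that failed above
% ... run the parallel workload here ...
delete(p);
maxNumCompThreads(oldN);       % restore the previous thread count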
4 Comments
Walter Roberson on 15 Oct 2025
There does not appear to be any way to select a process-based pool instead of a thread-based one.
Georgi on 16 Oct 2025
Thank you. I found that when I execute
maxNumCompThreads(16); % 16 logical cores on a Gen9 Core i9 CPU
parpool('Processes', maxNumCompThreads);
before starting model training in Regression Learner, the tool uses the parallel pool already created with the 'Processes' profile.
But it turned out that NumWorkers was not the bottleneck in my calculations; even those 8 workers were far from fully used (CPU load was barely 10-20%). Strangely, in R2024a Update 7 the Regression Learner tool drives one of the DMA engines (shown as 'Copy 1' in Windows Task Manager) of my NVIDIA Quadro RTX 3000 at ~12-20%, while the 3D load on that GPU stays at 0% (well, most of the time). In R2025a Update 1 on the same laptop (128 GB of RAM), that GPU is not used at all, and the same model training (with the same number of iterations) takes noticeably longer. On a server with 36 logical cores per CPU and no GPU accelerator, running R2022a, the same model training takes 5.3 times longer (!!!).
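For readers looking for the recipe, here is a minimal sketch of the workaround described above, assuming (as Georgi reports) that Regression Learner reuses an already-open process-based pool instead of creating a thread-based one; adjust the pool size to your machine.
% Open a process-based pool before launching the app, so the app reuses it.
if isempty(gcp('nocreate'))        % create a pool only if none is open yet
    parpool('Processes', maxNumCompThreads);
end
regressionLearner                  % launch the Regression Learner app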


More Answers (0)


Release: R2024b

