CUDA kernels are functions that execute on the GPU device and are run by many GPU threads in parallel. By efficiently mapping the compute-intensive portions of your algorithm to kernels, you can take advantage of the performance improvements that GPU computing provides. You can direct GPU Coder™ to create CUDA kernels for specific algorithm structures and patterns in your MATLAB® code.
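As a minimal sketch of such a pattern (the function name and data shapes are illustrative), an element-wise loop inside a function marked with the `coder.gpu.kernelfun` pragma is a structure GPU Coder can map to a CUDA kernel:

```matlab
function out = myAdd(a, b) %#codegen
% Illustrative entry-point function; names are not from the source page.
coder.gpu.kernelfun;             % map computation in this function to GPU kernels
out = zeros(size(a), 'like', a);
for i = 1:numel(a)               % element-wise pattern eligible for kernel creation
    out(i) = a(i) + b(i);
end
end
```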
| Function | Description |
| --- | --- |
| `coder.checkGpuInstall` | Verify GPU code generation environment |
| `coder.gpuConfig` | Configuration parameters for CUDA code generation from MATLAB code with GPU Coder |
| `codegen` | Generate C/C++ code from MATLAB code |
| `gpucoder` | Open GPU Coder app |
| `coder.gpu.kernel` | Pragma that maps for-loops to GPU kernels |
| `coder.gpu.kernelfun` | Pragma that maps a function to GPU kernels |
| `coder.gpu.nokernel` | Pragma to disable kernel creation for loops |
| `coder.gpu.constantMemory` | Pragma that maps a variable to the constant memory on the GPU |
| `gpucoder.stencilKernel` | Create CUDA code for stencil functions |
| `gpucoder.matrixMatrixKernel` | Optimized GPU implementation of functions containing matrix-matrix operations |
| `gpucoder.sort` | Optimized GPU implementation of the MATLAB sort function |
| `coder.gpu.iterations` | Pragma that helps the code generator make parallelization decisions for loops with variable bounds |
| `gpucoder.transpose` | Optimized GPU implementation of the MATLAB transpose function |
| `gpucoder.reduce` | Optimized GPU implementation for reduction operations |
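As a sketch of how the loop-level kernel pragma described above is applied (the function name, loop bounds, and `codegen` arguments are illustrative, not from the source page):

```matlab
function out = scaleVector(in, k) %#codegen
% Illustrative function; coder.nullcopy skips initializing the output buffer.
out = coder.nullcopy(zeros(size(in), 'like', in));
coder.gpu.kernel;          % map the for-loop that follows to a CUDA kernel
for i = 1:numel(in)
    out(i) = k * in(i);
end
end
```

Code generation might then be invoked with a GPU configuration object, for example `cfg = coder.gpuConfig('mex'); codegen -config cfg scaleVector -args {zeros(1,4096,'single'), single(2)}`.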
- Create kernels from MATLAB functions that contain scalarized, element-wise math operations.
- Create kernels from MATLAB functions that contain reduction operations.
- Target GPU-optimized math libraries such as cuBLAS, cuSOLVER, cuFFT, and Thrust.
- Generate CUDA code that uses GPU arrays.
- Integrate custom GPU code with MATLAB code intended for code generation.
- Create kernels for MATLAB functions that contain computational design patterns.
- Memory allocation options and optimizations for GPU Coder.
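To illustrate the reduction-operation topic above, a minimal sketch (function name and data are illustrative): a scalar accumulated over a loop is a reduction pattern GPU Coder can recognize and map to an optimized kernel.

```matlab
function s = sumOfSquares(x) %#codegen
% Illustrative example; not from the source page.
coder.gpu.kernelfun;        % request kernel creation for this function
s = 0;
for i = 1:numel(x)          % reduction pattern: scalar accumulated across iterations
    s = s + x(i)^2;
end
end
```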