Deep Learning Processor IP Core

The generated deep learning (DL) processor IP core is a standard AXI interface IP core that contains:

  • AXI slave interface to program the DL processor IP core.

  • AXI master interfaces to access the external memory of the target board.

To learn more about the deep learning processor IP core architecture, see Deep Learning Processor IP Core Architecture.

The DL processor IP core is generated using the HDL Coder™ IP core generation workflow. The generated IP core contains a standard set of registers and produces an IP core report. For more information, see Deep Learning Processor IP Core Report.
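As a minimal sketch of this generation step, the snippet below builds a DL processor IP core from a processor configuration object. The target frequency value is an illustrative assumption; your board and tool setup determine the values you actually use.

```matlab
% Sketch: generating the DL processor IP core with the HDL Coder
% IP core generation workflow. The frequency shown is an assumed
% example value, not a requirement.
hPC = dlhdl.ProcessorConfig;    % default deep learning processor configuration
hPC.TargetFrequency = 220;      % assumed target clock frequency in MHz
dlhdl.buildProcessor(hPC);      % invokes HDL Coder to generate the IP core
```

Running `dlhdl.buildProcessor` launches the HDL Coder workflow, which emits the IP core along with the generated IP core report described above.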

The DL processor IP core reads inputs from the external memory and sends outputs to the external memory. The external memory buffer allocation is calculated by the compiler based on the network size and your hardware design. For more information, see Use the Compiler Output for System Integration.
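A hedged sketch of retrieving the compiler output is shown below. The network, bitstream name, and interface are illustrative assumptions; the structure returned by `compile` describes the instructions and the external memory buffer allocation that the compiler calculated for the network.

```matlab
% Sketch: compiling a network to inspect the external memory buffer
% allocation. The network and bitstream names are assumed examples.
net = resnet18;                                            % example pretrained network
hTarget = dlhdl.Target('Xilinx','Interface','Ethernet');   % assumed board connection
hW = dlhdl.Workflow('Network',net, ...
                    'Bitstream','zcu102_single', ...
                    'Target',hTarget);
dn = hW.compile;   % struct describing instructions and external memory buffers
```

The fields of `dn` can then be used for system integration, as described in Use the Compiler Output for System Integration.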

The input and output data are stored in the external memory in a predefined format. For more information, see External Memory Data Format.

The deep learning processor is implemented as a standalone processor on the programmable logic (PL) portion of the FPGA and does not require the processing system (PS) portion of the FPGA to operate. When you compile and deploy a deep learning network, most of the network layers are implemented on the PL portion of the FPGA, except for the input and output layers. The layers listed in the Supported Layers table with the output format marked as HW are implemented in the PL. The layers marked as SW can be implemented on the PS of an SoC, or on a soft-core processor on an FPGA, when you integrate the deep learning processor into a larger system. In that case, the PS and PL components communicate through DDR memory, and Deep Learning HDL Toolbox™ does not automate the PS or soft-core processor implementation.

When you use the dlhdl.Workflow object to deploy the network, Deep Learning HDL Toolbox implements the layers with SW output format in MATLAB®. The communication between MATLAB and the PL component occurs through an Ethernet or JTAG interface, with the layer activation data being written to and read from the DDR memory.
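The deployment path described above can be sketched as follows. The network, bitstream, interface, and input image are illustrative assumptions; during `predict`, the SW-format layers execute in MATLAB and exchange activation data with the FPGA through DDR memory over the chosen interface.

```matlab
% Sketch: deploying and running a network with dlhdl.Workflow.
% Network, bitstream, and image names are assumed examples.
hTarget = dlhdl.Target('Xilinx','Interface','Ethernet');   % or 'JTAG'
hW = dlhdl.Workflow('Network',net, ...
                    'Bitstream','zcu102_single', ...
                    'Target',hTarget);
hW.deploy;    % program the bitstream and load the network weights

% SW-format input/output layers run in MATLAB; HW-format layers run
% on the PL, with activations passed through DDR memory.
img = single(imresize(imread('peppers.png'),[224 224]));
prediction = hW.predict(img);
```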
