Target platform 'DLXCKU5PE' is not supported for quantization.

Hi,
DLXCKU5PE is my self-generated deep learning bitstream with an int8 datatype. When I tried to validate the quantized deep learning network on my FPGA platform, this error occurred.
How can I solve this problem? Or do only official evaluation boards support this function?

Accepted Answer

Anjaneyulu Bairi on 16 October 2025


Hi,
This error usually arises in Deep Learning HDL Toolbox when:
  • The target FPGA platform you selected (DLXCKU5PE) is not officially supported by the quantization workflow in MATLAB/HDL Coder/Deep Learning HDL Toolbox.
  • The platform is either custom or not included in the list of supported boards for quantized deployment.
Try the following steps, which might help:
1. Check Supported Platforms
  • Verify that DLXCKU5PE appears among the boards supported for quantized (int8) deployment in the Deep Learning HDL Toolbox documentation.
2. Custom Board Registration
  • For custom boards, you may need to create a custom platform registration using the dlhdl.Target and dlhdl.Board classes, but quantization support may still be limited.
3. Try Float Deployment
  • If quantized (int8) deployment is not supported, you may be able to deploy your network using single (floating point) precision instead.
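As a sketch of step 3, single-precision deployment sidesteps the quantization check entirely. A minimal example, assuming a pretrained network `net`, an Ethernet connection, and a hypothetical single-precision bitstream name `dlxcku5pe_single` for your custom board (substitute the actual names from your own registration):

```matlab
% Connect to the custom Xilinx board (the interface here is an assumption).
hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet');

% Build a workflow against a single-precision (float) bitstream, so no
% quantization support is required on the target platform.
hW = dlhdl.Workflow('Network', net, ...
                    'Bitstream', 'dlxcku5pe_single', ...  % hypothetical name
                    'Target', hTarget);

hW.compile;   % generate the weights and instructions for the bitstream
hW.deploy;    % program the FPGA and load the network
prediction = hW.predict(inputImage);  % run inference in floating point
```

This trades the resource and speed benefits of int8 for guaranteed numerical behavior, which can also serve as a baseline when you later debug quantized accuracy.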
I hope this helps!

1 Comment

Thanks,
I am glad to receive your reply.
I have solved this problem by following the guide at
MATLAB now works with my platform.
However, the accuracy drops by roughly 4%. I'm currently looking into ways to recover this loss.
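A common first step for recovering int8 accuracy is to re-calibrate the quantizer on a larger, more representative dataset before validating. A minimal sketch using `dlquantizer`, assuming a trained network `net` and image datastores `calData` and `valData` (both placeholder names) that reflect the deployment data distribution:

```matlab
% Create a quantizer that targets FPGA execution.
quantObj = dlquantizer(net, 'ExecutionEnvironment', 'FPGA');

% Calibrate: collect the dynamic ranges of weights and activations over a
% representative dataset. Broader, more representative calibration data
% usually narrows the gap between float and int8 accuracy.
calResults = calibrate(quantObj, calData);

% Validate the quantized network on held-out data before redeploying,
% to check whether the accuracy drop has shrunk.
valResults = validate(quantObj, valData);
```

If calibration alone does not recover the loss, comparing per-layer float vs. int8 activations can help locate the layers most sensitive to quantization.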


More Answers (0)


Asked by KH on 26 September 2025
Last comment by KH on 6 November 2025
