Unable to map lookup tables to RAM in HDL Coder
I am trying to reduce the utilization of lookup tables in my HDL code while using the Native Floating Point library, so I enabled the 'Map lookup tables to RAM' option in the optimization settings. But even after selecting a synthesis tool, the RAM utilization is still zero. Is there something I am missing that is not mentioned in the documentation?
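For reference, this is a sketch of how the same settings can be applied programmatically. The model name 'my_nfp_model' is a placeholder; 'LUTMapToRAM' is the HDL Coder property behind the 'Map lookup tables to RAM' checkbox, and 'FloatingPointLibrary' the property behind the native floating-point setting (property names may vary by release):

```matlab
% Sketch: enable Native Floating Point and the LUT-to-RAM optimization
% programmatically. 'my_nfp_model' is a placeholder model name.
load_system('my_nfp_model');

% Use the Native Floating Point library for floating-point operators.
hdlset_param('my_nfp_model', 'FloatingPointLibrary', 'NativeFloatingPoint');

% Enable the "Map lookup tables to RAM" optimization.
hdlset_param('my_nfp_model', 'LUTMapToRAM', 'on');

% Generate HDL and inspect the resource utilization report for RAM usage.
makehdl('my_nfp_model');
```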
Answers (1)
Kiran Kintali
18 Jul 2023
0 votes
Can you share a sample model with your configuration settings and desired synthesis results?
All floating-point operator-level customization options are available in the floating-point panel or in the right-click HDL block options.
Currently 'Map lookup tables to RAM' is not plugged into the floating point operator specific customizations. I will communicate this request to the team.
Please review some sample synthesis metrics here: Synthesis Benchmark of Common Native Floating Point Operators. You can easily regenerate the latest synthesis results using this example for your choice of synthesis tool, data types, frequency, FPGA device, and power settings.
6 Comments
Ishita Ray
18 Jul 2023
Kiran Kintali
18 Jul 2023
The 'Map lookup tables to RAM' option currently applies to Lookup Table blocks in Simulink and does not apply to implicit LUTs inferred within Native Floating Point.
In general, use of floating point incurs additional resources. For high-dynamic-range applications, specifically designs where fixed-point error accumulates rapidly within feedback loops, HDL Coder's native floating-point features are recommended.
Feel free to reach out to tech support for additional assistance. In addition to mapping to RAMs, there are other streaming, sharing, and pipelining optimizations that you can employ to further reduce the size of the hardware and improve the performance of the generated code.
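As a sketch of those optimizations applied at the subsystem level (the path 'my_model/DUT' is a placeholder; 'SharingFactor', 'StreamingFactor', 'OutputPipeline', and 'DistributedPipelining' are the standard HDL Coder properties for resource sharing, streaming, and pipelining):

```matlab
% Sketch: apply area/performance optimizations to a DUT subsystem.
% 'my_model/DUT' is a placeholder subsystem path.
load_system('my_model');

% Resource sharing: time-multiplex 4 equivalent operators onto one.
hdlset_param('my_model/DUT', 'SharingFactor', 4);

% Streaming: serialize vector datapaths to trade throughput for area.
hdlset_param('my_model/DUT', 'StreamingFactor', 4);

% Pipelining: add output pipeline stages and let HDL Coder
% distribute registers to improve the clock frequency.
hdlset_param('my_model/DUT', 'OutputPipeline', 2);
hdlset_param('my_model/DUT', 'DistributedPipelining', 'on');
```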
Ishita Ray
18 Jul 2023
Edited: Ishita Ray
19 Jul 2023
Viren Monpara
20 Feb 2026
Regarding your earlier comment: “Currently 'Map lookup tables to RAM' is not plugged into the floating point operator specific customizations.”
Could you please provide an update on this?
Has this integration been added in more recent HDL Coder releases (e.g., R2025b or R2026a), or is it still not part of the floating-point operator-specific customization options?
This information would be very helpful for us since we are using native floating point and are aiming to reduce LUT utilization.
Kiran Kintali
20 Feb 2026
Edited: Kiran Kintali
24 Feb 2026
Just to avoid confusion between two different concepts:
- FPGA LUTs: logic resources (lookup tables) used by the FPGA fabric to implement combinational logic. These are reported in synthesis utilization reports.
- Simulink Lookup Table blocks: blocks such as n-D Lookup Table, Direct Lookup Table, etc. that can be mapped to block RAM using the "Map lookup tables to RAM" optimization.
In the Native Floating Point (NFP) flow, only a few complex trigonometric operators (sin, cos, atan2, etc.) use internal lookup tables as part of their algorithm. These are algorithmic LUTs inside the NFP operator implementation, not Simulink Lookup Table blocks.
The "Map lookup tables to RAM" feature (LUTMapToRAM) targets Simulink Lookup Table blocks (LookupTableND, DirectLookupTable, SineWave).
I am assuming you are referring to reducing the FPGA LUTs generated by NFP operators by mapping them to RAM. Let me double-check R2025b and the R2026a pre-release for such an option, or whether it is done automatically. It would be helpful to know which operators you are currently using with NFP. I believe trigonometric and logarithmic operators create large LUTs that would be of interest to map to RAM.
Kiran Kintali
24 Feb 2026
I have double-checked: mapping the large NFP LUTs to RAM has not made it into R2026a, but it is actively being worked on. I will post an update on the timeline shortly.
