Unable to map lookup tables to RAM in HDL Coder

I am trying to reduce the lookup table utilization of my HDL code while using the native floating-point library, so I enabled the 'Map lookup tables to RAM' option in the optimization settings. But even after selecting a synthesis tool, the RAM utilization is still zero. Is there something I am missing that is not mentioned in the documentation?
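For context, here is a minimal sketch of the configuration workflow being described, done programmatically via `hdlset_param` (the model name and DUT path are placeholders; the parameter names `LUTMapToRAM`, `SynthesisTool`, and `FloatingPointTargetConfiguration` are HDL Coder model-level settings):

```matlab
% Sketch: enable native floating point and RAM mapping for lookup tables.
% 'my_model' and 'my_model/DUT' are placeholder names.
model = 'my_model';
load_system(model);

% Use the Native Floating Point library for floating-point operators
hdlset_param(model, 'FloatingPointTargetConfiguration', ...
    hdlcoder.createFloatingPointTargetConfig('NativeFloatingPoint'));

% Map lookup tables to RAM; a synthesis tool must also be selected
hdlset_param(model, 'LUTMapToRAM', 'on');
hdlset_param(model, 'SynthesisTool', 'Xilinx Vivado');

% Generate HDL for the DUT subsystem
makehdl([model '/DUT']);
```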

Answer (1)

Kiran Kintali on 18 July 2023

0 votes

Can you share a sample model with your configuration settings and desired synthesis results?
All floating-point operator-level customization options are available in the floating-point panel or through the right-click HDL block options.
Currently, 'Map lookup tables to RAM' is not plugged into the floating-point operator-specific customizations. I will communicate this request to the team.
Please review the sample synthesis metrics in the example Synthesis Benchmark of Common Native Floating Point Operators. You can easily regenerate the latest synthesis results using this example for your choice of synthesis tool, data types, target frequency, FPGA device, and power settings.

6 Comments

Thanks for your response, Kiran. I unfortunately cannot share the model due to its large size and our IP policy but here are the current synthesis results:
The resource utilization was not an issue when I was using fixed point, but switching to floating point made the code LUT-heavy. I have also attached the configuration settings. Please let me know if you find any error there, and I'll review the document you shared.
Also, could you clarify what you mean by "Currently 'Map lookup tables to RAM' is not plugged into the floating point operator specific customizations"? Is the option of mapping lookup tables to RAM not available when using native floating point, or are you saying that the option is not specific to floating point (which is fine)?
The 'Map lookup tables to RAM' option currently applies to Lookup Table blocks in Simulink and does not apply to implicit LUTs inferred within Native Floating Point.
In general, use of floating point incurs additional resources. For high-dynamic-range applications, specifically designs where fixed-point error accumulates rapidly within feedback loops, the HDL Coder native floating-point features are recommended.
Feel free to reach out to tech support for additional assistance. In addition to mapping to RAMs, there are other streaming, sharing, and pipelining optimizations that you can employ to further reduce the size of the hardware and improve the performance of the generated code.
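As a hedged sketch of applying those optimizations programmatically (the subsystem path and factor values below are placeholders chosen for illustration; `StreamingFactor`, `SharingFactor`, and `DistributedPipelining` are HDL Coder block-level properties):

```matlab
% Sketch: block-level streaming, sharing, and pipelining optimizations.
% 'my_model/DUT' is a placeholder subsystem path; factors are illustrative.
dut = 'my_model/DUT';

% Time-multiplex vector datapaths to trade throughput for area
hdlset_param(dut, 'StreamingFactor', 4);

% Share functionally equivalent operators to reduce resource usage
hdlset_param(dut, 'SharingFactor', 2);

% Redistribute pipeline registers to help meet timing afterward
hdlset_param(dut, 'DistributedPipelining', 'on');
```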
Ishita Ray on 18 July 2023
Edited: Ishita Ray on 19 July 2023
Okay, I understand now. My model does have Lookup Table blocks in it, so I was surprised that none of those were mapped to RAM either. I see no RAM utilization in the synthesis report, nor is there any generated file highlighting the lookup tables converted to RAM. I'm trying to figure out if something is missing.
Regarding your earlier comment: “Currently 'Map lookup tables to RAM' is not plugged into the floating point operator specific customizations.”
Could you please provide an update on this?
Has this integration been added in more recent HDL Coder releases (e.g., R2025b or R2026a), or is it still not part of the floating‑point operator‑specific customization options?
This information would be very helpful for us since we are using native floating point and are aiming to reduce LUT utilization.
Kiran Kintali on 20 February 2026
Edited: Kiran Kintali on 24 February 2026
Just to avoid confusion between two different concepts:
  • FPGA LUTs - Logic resources (lookup tables) used by the FPGA fabric to implement combinational logic. These are reported in synthesis utilization reports.
  • Simulink Lookup Table blocks - Blocks like n-D Lookup Table, Direct Lookup Table, etc. that can be mapped to block RAM using the "Map lookup tables to RAM" optimization.
In the Native Floating Point (NFP) flow, only a few complex trigonometric operators (sin, cos, atan2, etc.) use internal LUT tables as part of their algorithm. These are algorithmic LUTs inside the NFP operator implementation, not Simulink Lookup Table blocks.
The "Map lookup tables to RAM" feature (LUTMapToRAM) targets Simulink Lookup Table blocks (LookupTableND, DirectLookupTable, SineWave).
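For instance, a quick sketch of checking that the model-level setting is actually enabled (model name is a placeholder; `hdlget_param` is the query counterpart of `hdlset_param`):

```matlab
% Sketch: verify the model-level setting that targets Simulink
% Lookup Table blocks (not the algorithmic LUTs inside NFP operators).
model = 'my_model';   % placeholder model name
load_system(model);

% Returns 'on' if 'Map lookup tables to RAM' is enabled for the model
hdlget_param(model, 'LUTMapToRAM')
```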
I am assuming you are referring to reducing the FPGA LUTs generated by NFP operators by mapping them to RAM. Let me double-check R2025b and the R2026a pre-release to see if there is such an option or if it is done automatically. It would be helpful to know which operators you are currently using with NFP. I believe trigonometric and logarithmic ops create large LUTs that you would be interested in mapping to RAM.
I have double-checked, and mapping the large LUTs in NFP to RAM has not made it into R2026a, but it is actively being worked on. I will post an update shortly on the updated timeline.


Products

Release

R2022b

Asked:

17 July 2023

Last comment:

24 February 2026
