HDLCoder RAM mapping not working

2 views (last 30 days)
Amogh on 1 April 2025
Answered: Ryan Baird on 1 April 2025
I am trying to convert MATLAB code into Verilog using HDL Coder.
The code I'm using is as follows:
function x = pred3(input)
    % Ensure input is single precision
    input = single(input);
    % Load LUT for sigmoid approximation
    persistent x_values sigmoid_values;
    if isempty(x_values)
        data = coder.load('sigmoid_LUT.mat');
        x_values = single(data.x_values);
        sigmoid_values = single(data.sigmoid_values);
    end
    coder.hdl.ramconfig({x_values, sigmoid_values}, RAMType="Dual Port")
    persistent scaler_mean scaler_std_rec layer1_weight layer1_bias layer2_weight layer2_bias layer3_weight layer3_bias;
    if isempty(scaler_mean)
        % Load model parameters once
        model = coder.load("weights_biases2.mat");
        scaler_mean = single(model.scaler_mean);
        scaler_std_rec = single(model.std_scaler_rec);
        layer1_weight = single(model.layer1_weight);
        layer1_bias = single(model.layer1_bias);
        layer2_weight = single(model.layer2_weight);
        layer2_bias = single(model.layer2_bias);
        layer3_weight = single(model.layer3_weight);
        layer3_bias = single(model.layer3_bias);
    end
    coder.hdl.ramconfig({scaler_std_rec, scaler_mean, layer1_weight, layer1_bias, layer2_weight, layer2_bias, layer3_weight, layer3_bias}, RAMType="Dual Port")
    % Normalize input
    input = (input - scaler_mean) .* scaler_std_rec;
    % Layer 1: Fully Connected + ReLU
    layer1_op = input * layer1_weight' + layer1_bias;
    % Replace max(layer1_op, 0) with explicit conditional update
    for i = 1:numel(layer1_op)
        if layer1_op(i) < 0
            layer1_op(i) = 0;
        end
    end
    % Layer 2: Fully Connected + ReLU
    layer2_op = layer1_op * layer2_weight' + layer2_bias;
    for i = 1:numel(layer2_op)
        if layer2_op(i) < 0
            layer2_op(i) = 0;
        end
    end
    % Layer 3: Fully Connected (No Activation)
    layer3_op = layer2_op * layer3_weight' + layer3_bias;
    % Ensure layer3_op is within LUT range without using max/min
    x_clipped = layer3_op;
    x_max = x_values(1);
    x_min = x_values(1);
    % Find min and max of x_values manually
    for i = 2:length(x_values)
        if x_values(i) > x_max
            x_max = x_values(i);
        end
        if x_values(i) < x_min
            x_min = x_values(i);
        end
    end
    % Clip layer3_op without using max/min
    if x_clipped > x_max
        x_clipped = x_max;
    elseif x_clipped < x_min
        x_clipped = x_min;
    end
    % Find closest index in LUT
    min_diff = abs(x_values(1) - x_clipped);
    idx = 1;
    for i = 2:length(x_values)
        diff = abs(x_values(i) - x_clipped);
        if diff < min_diff
            min_diff = diff;
            idx = i;
        end
    end
    % Lookup sigmoid output
    output = sigmoid_values(idx);
    % Final decision based on threshold
    if output < 0.5
        x = single(0);
    else
        x = single(1);
    end
end
However, after I run HDL Coder with the "Map persistent array variables to RAMs" option enabled, the RAM usage in the resource report is 0.

Answers (1)

Ryan Baird on 1 April 2025
When RAM mapping fails, there should be a warning in the generated report that gives more information about what happened.
In this case, one thing I can see in the provided code is that there are non-scalar accesses to the variables intended to be mapped to RAM. I'd expect to see the warning 'RAM mapping failed for variable, "scaler_std_rec", because it has a non-scalar sub-matrix access.' and similar warnings for the other variables.
This comes from a current limitation of RAM mapping: "Each read or write access is for a single element only." The limitations are listed on the documentation page Map Persistent Arrays and dsp.Delay Objects to RAM. The operation "(input - scaler_mean) .* scaler_std_rec" counts as a read of every element of scaler_std_rec with respect to this limitation.
A workaround is to write this operation, and other similar operations, as loops and use coder.unroll to unroll the loops.
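For example, the normalization step and the layer-1 matrix-vector product could be rewritten along these lines (a minimal sketch, assuming input, scaler_mean, and scaler_std_rec are row vectors of the same length, and that row j of layer1_weight holds the weights for output j, matching the layer1_weight' in the original code):

    % Normalization with only single-element reads of the RAM-mapped arrays
    for i = 1:numel(input)
        coder.unroll();
        input(i) = (input(i) - scaler_mean(i)) * scaler_std_rec(i);
    end
    % Layer 1 rewritten so every access to layer1_weight and layer1_bias is scalar
    layer1_op = zeros(1, size(layer1_weight, 1), 'single');
    for j = 1:size(layer1_weight, 1)
        coder.unroll();
        acc = layer1_bias(j);   % single-element read of the bias vector
        for k = 1:size(layer1_weight, 2)
            coder.unroll();
            acc = acc + input(k) * layer1_weight(j, k);
        end
        layer1_op(j) = acc;
    end

The layer-2 and layer-3 products can follow the same pattern; the key point is that each loop iteration touches exactly one element of each array intended for RAM.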

Release: R2024b
