Optimization (minimization) of the average relative error between experimental and calculated data
Hello,
I want to share a difficulty I have run into. Can someone help?
Problem statement:
There is an 'x' column containing the values of the independent variable, and a 'y' column containing the experimental values of the dependent variable.
The following approximation model is considered:
y_calculate = A*x^B + C,
and from this model an objective function is formed, equal to the mean of the relative deviations between y and y_calculate:
error_function = mean(abs(y - y_calculate) ./ y) = mean(abs(y - (A*x^B + C)) ./ y);
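In mathematical notation, with n data points (x_i, y_i):

$$\mathrm{error\_function}(A,B,C) = \frac{1}{n}\sum_{i=1}^{n} \frac{\left|\,y_i - \left(A\,x_i^{B} + C\right)\right|}{y_i}.$$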
Our goal is to select the parameters A, B, C so that 'error_function' attains its global minimum.
I computed optimal values of A, B, C and got
A = 85.5880, B = -0.0460, C = 4.8824,
at which the error function value is 0.0285.
But I know in advance specific values of A, B, C:
A = 1005.6335852931, B = -1.59745963925582, C = 73.54149744754400,
at which the error function value is 0.002680472178434,
which is much better than the optimized result.
Below is the code with visualization, which confirms the above.
clear
close all
% Data
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375, 74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990];
% Initial guesses for parameters A, B, C
initial_guess = [1, 1, 1];
% Error function
error_function = @(params) mean(abs(y - (params(1) * x.^params(2) + params(3))) ./ y);
% Optimization of parameters
optimized_params = fminsearch(error_function, initial_guess);
% Results of optimization
A_optimized = optimized_params(1);
B_optimized = optimized_params(2);
C_optimized = optimized_params(3);
% Calculation of the fitted function for optimized parameters
y_calculate_optimized = A_optimized * x.^B_optimized + C_optimized;
% Calculate and display the error function value for optimized parameters
value_error_optimized = error_function(optimized_params);
fprintf('Optimized parameters:\nA = %.4f\nB = %.4f\nC = %.4f\n', A_optimized, B_optimized, C_optimized);
fprintf(' error function value for optimized parameters: %.4f\n', value_error_optimized);
% Other specific parameters A, B, C
A_specific = 1005.63358529310;
B_specific = -1.59745963925582;
C_specific = 73.541497447544;
% Calculation of the fitted function for specific parameters
y_calculate_specific = A_specific * x.^B_specific + C_specific;
% Calculate and display the error function value for specific parameters
value_error_specific = error_function([A_specific, B_specific, C_specific]);
fprintf('Specific parameters:\nA = %.10f\nB = %.14f\nC = %.14f\n', A_specific, B_specific, C_specific);
fprintf(' error function value for specific parameters: %.4f\n', value_error_specific);
% Visualization
figure;
plot(x, y, 'bo-', 'DisplayName', 'Experimental data');
hold on;
plot(x, y_calculate_optimized, 'r--', 'DisplayName', 'Fitted model (Optimized)');
plot(x, y_calculate_specific, 'g-.', 'DisplayName', 'Fitted model (Specific)');
xlabel('x');
ylabel('y');
legend('Location', 'best');
title('Approximation of experimental data');
grid on;
Obviously, my optimization code does not reach the global minimum of the objective function, since a better approximation exists at the specific values of A, B, C. Maybe this is caused by the arbitrary choice of the initial values A = 1, B = 1, C = 1, so that my code gets stuck in a local minimum?
Can someone write code that selects the parameters A, B, C so as to reach the global minimum of the objective function 'error_function' for any initial values of A, B, C? A test criterion: the value of 'error_function' should be no worse (that is, no larger) than 0.002680472178434, which is obtained with the specific values A = 1005.6335852931, B = -1.59745963925582, C = 73.54149744754400.
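One standard way to reduce the dependence on the initial guess is multi-start: rerun fminsearch from many random starting points and keep the best result. Below is a minimal sketch of that idea, reusing the 'error_function' defined above; the number of restarts and the sampling ranges are arbitrary assumptions, and multi-start still does not guarantee a global minimum.
% Multi-start around fminsearch (sketch; restart count and ranges are ad-hoc assumptions)
rng(0);                                   % make the restarts reproducible
best_fval = inf;
best_params = [];
for k = 1:200
    % Random initial guess: A in [0.1, 1000], B in [-3, 0], C in [0, 100]
    p0 = [10^(4*rand - 1), -3*rand, 100*rand];
    [p, fv] = fminsearch(error_function, p0);
    if fv < best_fval
        best_fval = fv;
        best_params = p;
    end
end
fprintf('Best restart: A = %.6g, B = %.6g, C = %.6g, fval = %.12g\n', ...
    best_params(1), best_params(2), best_params(3), best_fval);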
0 Comments
Accepted Answer
Matt J on 22 August 2024
Edited: Matt J on 5 September 2024
If you download the File Exchange submission minL1lin, then you can do:
% Data
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375, 74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990];
% Initial guess for parameter B
initial_guess = 1;
% Error function
error_function = @(B) objFun(B, x,y);
% Optimization of parameters
B = fminsearch(error_function, initial_guess);
[fval,ABC]=error_function(B);
[A,B,C]=deal(ABC{:})
fval
function [fval,p]=objFun(theta, x,y)
arguments
theta; x (:,1); y(:,1);
end
B=theta;
d=ones(size(y));
Q=[x.^B, d]./y;
[AC,fval]=minL1lin(Q,d,[],[],[],[],[],[],[],optimoptions('linprog','Display','off'));
fval=fval/numel(y);
A=AC(1); C=AC(2);
p={A, B, C};
end
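The key idea in this answer is variable projection: for a fixed exponent B, the model A*x.^B + C is linear in A and C, and minimizing the mean relative deviation over A and C is a weighted L1 problem that minL1lin solves exactly as a linear program (note that Q*[A;C] - d equals (A*x.^B + C - y)./y, since d is a vector of ones). The outer fminsearch therefore only has to search over the single variable B. As a quick consistency check after running the script (a sketch; it just recomputes the objective directly from the returned parameters):
% Recompute the mean relative deviation directly from the returned A, B, C
fval_check = mean(abs(y - (A*x.^B + C)) ./ y);
fprintf('fval = %.12g, direct check = %.12g\n', fval, fval_check)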
45 Comments
Torsten about 12 hours ago
Edited: Torsten about 12 hours ago
I don't get errors in any of the three cases. You should update to the most recent MATLAB release.
clear
% Data
% the first series of x, y values, at which the script works successfully and gives the result fval=0.0015:
% x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
% y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375, 74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990];
% the second series of x, y values, at which the script also works successfully but gives the message:
% x = [14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.6740, 66, ...
% 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
% y = [8.80982634, 9.359883, 8.83235329, 9.05116997, 8.640265855, 8.87955602, 8.90198004, 8.351331895, ...
% 8.435, 8.23, 8.22833633, 8.15836833, 8.0604534, 8.15348853, 7.671930455, 8.346546765, 8.15];
% the third series of x, y values at which the script fails and returns an error
x = [14.6784; 22.0176; 29.3436; 36.6828; 44.0088; 51.3348; 58.6740; 66; ...
73.3392; 80.6652; 88.0044; 95.3304; 102.6696; 109.9956; 117.3348; 124.6608; 132];
y = [6.53; 6.714628295; 5.75798471; 5.75746671; 5.75884823; 6.17125225; 5.75928009; 6.4; ...
6.0475162; 6.28375061; 5.75971201; 5.981722515; 6.17125225; 6.34975669; 6.325585625; ...
6.304601135; 6.285626125];
% Initial guess for parameter theta
initial_guess = [1,1];
% Error function
error_function = @(theta) objFun(theta, x,y);
% Optimization of parameters
%Bounds on lambda and m
lambda0 = 0;
lambda1 = 3;
m0 = 0;
m1 = inf;
%Call the optimizer
theta = fminsearchbnd(error_function, initial_guess,[lambda0,m0],[lambda1,m1]);
%theta = fminsearch(error_function, initial_guess);
[fval,AthetaC]=error_function(theta);
[A, theta,C]=deal(AthetaC{:})
fval
function [fval,p]=objFun(theta, x,y)
x=x(:);y=y(:); theta=theta(:)';
lambda=theta(1); m=theta(2);
d=ones(size(y));
Q=[ 1./(1 + (lambda * x).^m), d]./y;
%Bounds on A and C
A0 = 0;
A1 = inf;
C0 = 0.9*min(y);
C1 = 1.1*min(y);
lb=[A0,C0];
ub=[A1,C1];
%Call the optimizer
[AC,fval]=minL1lin(Q,d,[],[],[],[],lb,ub,[],optimoptions('linprog','Display','off'));
fval=fval/numel(y);
A=AC(1); C=AC(2);
p={A, theta, C};
end
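% Note on this variant (a sketch of the reasoning, added for clarity): the
% model here is y_calculate = A./(1 + (lambda*x).^m) + C. For fixed
% (lambda, m) it is again linear in A and C, so the inner minL1lin call
% solves the weighted L1 subproblem exactly, with box bounds lb, ub on A
% and C, while the outer fminsearchbnd searches over (lambda, m) within
% their own bounds.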
function [x,varargout]=minL1lin(C,d,varargin)
% The submission minL1lin finds the minimum L1-norm solution of the linear equations C*x=d,
% optionally under linear constraints. It is similar to the Optimization Toolbox's lsqlin except that it minimizes with
% respect to the L1 norm by reformulating the problem as a linear program. The input/output syntax,
%
% [x,resnorm,residual,exitflag,output,lambda] =
% minL1lin(C,d,A,b,Aeq,beq,lb,ub,x0,options)
%
% is the same as for lsqlin() and all the usual defaults for A,b,..,x0 from
% lsqlin apply here. However, the "options" are those of linprog() which is used
% internally. And, of course, the minimization is instead over norm(C*x-d,1)
% rather than norm(C*x-d,2).
%
%
%
%EXAMPLES:
%
% We first construct some data for the examples
%
% C=rand(3);
%
% Xtrue=(1:3)';
% noise=rand(3,1)*.1;
%
% d=C*Xtrue+noise;
% dTrue=C*Xtrue;
%
% %EXAMPLE 1:
% %
% % This is an example of an unconstrained problem, in which we check the optimality
% % of the solution by simulation. First, we find the minimum L1 solution
% % of C*X-d
%
% [X,resnormX] = minL1lin(C,d,[],[],[],[],[],[],[],optimset('Display','none'));
%
% % Now, we randomly generate many neighboring X and test the value of the objective
% % function in this neighborhood,
%
% Xneighbors=bsxfun(@plus,X,randn(3,1e6)*1e-6); %explore neighborhood of X
%
% resnormNeighbors=sum(abs( bsxfun(@minus, C*Xneighbors, d) )); %L1 error of neighbors
%
% % A value of 1 indicates all neighbors have a sub-optimal objective
%
% isOptimal=~any(resnormNeighbors<resnormX);
%
% % Typically it returns 1, indicating that all the neighbors are less
% % optimal.
%
% isOptimal=~any(resnormNeighbors<resnormX)
%
%
% %%EXAMPLE 2:
% %
% % In this continuation of EXAMPLE 1, we add linear constraints, and test the
% % improvement that this gives in the L1 error relative to the
% % noise-free true data, dTrue. For variety's sake, we now use the dual simplex
% % algorithm for this example,
%
% Aeq=[1,1,1];
% beq=6;
% lb=[1;1;1];
% ub=[3;3;3];
%
% options=optimoptions(@linprog,'Algorithm','dual-simplex','Display','none');
%
%
% Xcon = minL1lin(C,d,[],[],Aeq,beq,lb,ub,[],options);
%
% error_Xunc = norm(C*X - dTrue,1),
% error_Xcon = norm(C*Xcon - dTrue,1),
%
%
% %Typically, one will observe error_Xcon < error_Xunc
%
%
if length(varargin)<8
varargin(end+1:8)={[]};
end
[A,b,Aeq,beq,lb,ub,x0,options]=deal(varargin{:});
[m,n]=size(C);
f=[zeros(1,n), ones(1,m)];
Ax=[C,-speye(m);-C,-speye(m)];
bx=[d;-d];
[~,nx]=size(Ax);
LB(1:nx)=-inf;
LB(n+1:end)=0;
UB(1:nx)=inf;
LB(1:length(lb))=lb;
UB(1:length(ub))=ub;
if ~isempty(A),
A(end,nx)=0;
end
if ~isempty(Aeq),
Aeq(end,nx)=0;
end
A=[A;Ax]; b=[b;bx];
try
[xz,~, varargout{3:nargout-1}]=linprog(f,A,b,Aeq,beq,LB,UB,x0,options);
catch
args={}; if ~isempty(options), args{1}=options; end
[xz,~, varargout{3:nargout-1}]=linprog(f,A,b,Aeq,beq,LB,UB,args{:});
end
if ~isempty(xz)
x=xz(1:n);
else
x=xz;
varargout(1:2)={[],[]};
return
end
if nargout>=2 %truncate resnorm if requested
residual=C*x-d;
resnorm=norm(residual,1);
varargout(1:2)={resnorm,residual};
end
end
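% How the reformulation above works: minimizing norm(C*x - d, 1) is recast
% as the linear program
%     minimize    sum(t)
%     subject to  C*x - t <=  d
%                -C*x - t <= -d,   t >= 0,
% with one auxiliary variable t(i) per residual. The variables f, Ax, bx,
% and LB constructed above encode exactly this problem for linprog.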
function [x,fval,exitflag,output] = fminsearchbnd(fun,x0,LB,UB,options,varargin)
% FMINSEARCHBND: FMINSEARCH, but with bound constraints by transformation
% usage: x=FMINSEARCHBND(fun,x0)
% usage: x=FMINSEARCHBND(fun,x0,LB)
% usage: x=FMINSEARCHBND(fun,x0,LB,UB)
% usage: x=FMINSEARCHBND(fun,x0,LB,UB,options)
% usage: x=FMINSEARCHBND(fun,x0,LB,UB,options,p1,p2,...)
% usage: [x,fval,exitflag,output]=FMINSEARCHBND(fun,x0,...)
%
% arguments:
% fun, x0, options - see the help for FMINSEARCH
%
% LB - lower bound vector or array, must be the same size as x0
%
% If no lower bounds exist for one of the variables, then
% supply -inf for that variable.
%
% If no lower bounds at all, then LB may be left empty.
%
% Variables may be fixed in value by setting the corresponding
% lower and upper bounds to exactly the same value.
%
% UB - upper bound vector or array, must be the same size as x0
%
% If no upper bounds exist for one of the variables, then
% supply +inf for that variable.
%
% If no upper bounds at all, then UB may be left empty.
%
% Variables may be fixed in value by setting the corresponding
% lower and upper bounds to exactly the same value.
%
% Notes:
%
% If options is supplied, then TolX will apply to the transformed
% variables. All other FMINSEARCH parameters should be unaffected.
%
% Variables which are constrained by both a lower and an upper
% bound will use a sin transformation. Those constrained by
% only a lower or an upper bound will use a quadratic
% transformation, and unconstrained variables will be left alone.
%
% Variables may be fixed by setting their respective bounds equal.
% In this case, the problem will be reduced in size for FMINSEARCH.
%
% The bounds are inclusive inequalities, which admit the
% boundary values themselves, but will not permit ANY function
% evaluations outside the bounds. These constraints are strictly
% followed.
%
% If your problem has an EXCLUSIVE (strict) constraint which will
% not admit evaluation at the bound itself, then you must provide
% a slightly offset bound. An example of this is a function which
% contains the log of one of its parameters. If you constrain the
% variable to have a lower bound of zero, then FMINSEARCHBND may
% try to evaluate the function exactly at zero.
%
%
% Example usage:
% rosen = @(x) (1-x(1)).^2 + 105*(x(2)-x(1).^2).^2;
%
% fminsearch(rosen,[3 3]) % unconstrained
% ans =
% 1.0000 1.0000
%
% fminsearchbnd(rosen,[3 3],[2 2],[]) % constrained
% ans =
% 2.0000 4.0000
%
% See test_main.m for other examples of use.
%
%
% See also: fminsearch, fminspleas
%
%
% Author: John D'Errico
% E-mail: woodchips@rochester.rr.com
% Release: 4
% Release date: 7/23/06
% size checks
xsize = size(x0);
x0 = x0(:);
n=length(x0);
if (nargin<3) || isempty(LB)
LB = repmat(-inf,n,1);
else
LB = LB(:);
end
if (nargin<4) || isempty(UB)
UB = repmat(inf,n,1);
else
UB = UB(:);
end
if (n~=length(LB)) || (n~=length(UB))
error 'x0 is incompatible in size with either LB or UB.'
end
% set default options if necessary
if (nargin<5) || isempty(options)
options = optimset('fminsearch');
end
% stuff into a struct to pass around
params.args = varargin;
params.LB = LB;
params.UB = UB;
params.fun = fun;
params.n = n;
% note that the number of parameters may actually vary if
% a user has chosen to fix one or more parameters
params.xsize = xsize;
params.OutputFcn = [];
% 0 --> unconstrained variable
% 1 --> lower bound only
% 2 --> upper bound only
% 3 --> dual finite bounds
% 4 --> fixed variable
params.BoundClass = zeros(n,1);
for i=1:n
k = isfinite(LB(i)) + 2*isfinite(UB(i));
params.BoundClass(i) = k;
if (k==3) && (LB(i)==UB(i))
params.BoundClass(i) = 4;
end
end
% transform starting values into their unconstrained
% surrogates. Check for infeasible starting guesses.
x0u = x0;
k=1;
for i = 1:n
switch params.BoundClass(i)
case 1
% lower bound only
if x0(i)<=LB(i)
% infeasible starting value. Use bound.
x0u(k) = 0;
else
x0u(k) = sqrt(x0(i) - LB(i));
end
% increment k
k=k+1;
case 2
% upper bound only
if x0(i)>=UB(i)
% infeasible starting value. use bound.
x0u(k) = 0;
else
x0u(k) = sqrt(UB(i) - x0(i));
end
% increment k
k=k+1;
case 3
% lower and upper bounds
if x0(i)<=LB(i)
% infeasible starting value
x0u(k) = -pi/2;
elseif x0(i)>=UB(i)
% infeasible starting value
x0u(k) = pi/2;
else
x0u(k) = 2*(x0(i) - LB(i))/(UB(i)-LB(i)) - 1;
% shift by 2*pi to avoid problems at zero in fminsearch
% otherwise, the initial simplex is vanishingly small
x0u(k) = 2*pi+asin(max(-1,min(1,x0u(k))));
end
% increment k
k=k+1;
case 0
% unconstrained variable. x0u(i) is set.
x0u(k) = x0(i);
% increment k
k=k+1;
case 4
% fixed variable. drop it before fminsearch sees it.
% k is not incremented for this variable.
end
end
% if any of the unknowns were fixed, then we need to shorten
% x0u now.
if k<=n
x0u(k:n) = [];
end
% were all the variables fixed?
if isempty(x0u)
% All variables were fixed. quit immediately, setting the
% appropriate parameters, then return.
% undo the variable transformations into the original space
x = xtransform(x0u,params);
% final reshape
x = reshape(x,xsize);
% stuff fval with the final value
fval = feval(params.fun,x,params.args{:});
% fminsearchbnd was not called
exitflag = 0;
output.iterations = 0;
output.funcCount = 1;
output.algorithm = 'fminsearch';
output.message = 'All variables were held fixed by the applied bounds';
% return with no call at all to fminsearch
return
end
% Check for an outputfcn. If there is any, then substitute my
% own wrapper function.
if ~isempty(options.OutputFcn)
params.OutputFcn = options.OutputFcn;
options.OutputFcn = @outfun_wrapper;
end
% now we can call fminsearch, but with our own
% intra-objective function.
[xu,fval,exitflag,output] = fminsearch(@intrafun,x0u,options,params);
% undo the variable transformations into the original space
x = xtransform(xu,params);
% final reshape to make sure the result has the proper shape
x = reshape(x,xsize);
% Use a nested function as the OutputFcn wrapper
function stop = outfun_wrapper(x,varargin);
% we need to transform x first
xtrans = xtransform(x,params);
% then call the user supplied OutputFcn
stop = params.OutputFcn(xtrans,varargin{1:(end-1)});
end
end % mainline end
% ======================================
% ========= begin subfunctions =========
% ======================================
function fval = intrafun(x,params)
% transform variables, then call original function
% transform
xtrans = xtransform(x,params);
% and call fun
fval = feval(params.fun,reshape(xtrans,params.xsize),params.args{:});
end % sub function intrafun end
% ======================================
function xtrans = xtransform(x,params)
% converts unconstrained variables into their original domains
xtrans = zeros(params.xsize);
% k allows some variables to be fixed, thus dropped from the
% optimization.
k=1;
for i = 1:params.n
switch params.BoundClass(i)
case 1
% lower bound only
xtrans(i) = params.LB(i) + x(k).^2;
k=k+1;
case 2
% upper bound only
xtrans(i) = params.UB(i) - x(k).^2;
k=k+1;
case 3
% lower and upper bounds
xtrans(i) = (sin(x(k))+1)/2;
xtrans(i) = xtrans(i)*(params.UB(i) - params.LB(i)) + params.LB(i);
% just in case of any floating point problems
xtrans(i) = max(params.LB(i),min(params.UB(i),xtrans(i)));
k=k+1;
case 4
% fixed variable, bounds are equal, set it at either bound
xtrans(i) = params.LB(i);
case 0
% unconstrained variable.
xtrans(i) = x(k);
k=k+1;
end
end
end % sub function xtransform end
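For reference, the bound handling in fminsearchbnd (readable from xtransform above): a variable with finite lower and upper bounds is represented by an unconstrained surrogate u via

$$x = L + \frac{\sin(u)+1}{2}\,(U-L),$$

while a variable with only a lower bound uses $x = L + u^2$ and one with only an upper bound uses $x = U - u^2$, so fminsearch itself always works on unconstrained variables.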
More Answers (0)