MATLAB Answers


Problems with fmincon for design optimization

Hi everyone,
I am using fmincon for the optimization of my problem; the details and the source code are provided below. As can be seen, there are 12 design variables to be optimized, along with several random variables and constraints. Two of the constraints, Pfo and Pfa, compute failure probabilities using the Monte Carlo method. My code is not written properly, and for example these two constraints are not controlled during the optimization.
Any solution?
[Attached images: fmincon-1.jpg, fmincon-2.jpg]
The MATLAB code is written in four separate files:
1- Main code:
clc;clear all;close all;
tic
objective = @objecfun;
% Optimization Variables: d=[a,b,c,B,hc,hd,t,r,Bt,al,as,at]
% Initial Guess
x0 = [2; 1; 6; 4.5; 4; 2.5; 1.75; 3.5; 4.5; 0.5; 0.5; 0.5];
% Statistical Model Parameters
% Hs = normrnd(2.5,0.11);
% T = normrnd(5,1.5);
% gamma = normrnd(26,0.11);
% Nod = normrnd(1.25,0.375);
% Kd = normrnd(1.075,0.0375);
% S = (2*pi().*Hs)/9.81./(T.^2);
% a = normrnd(2.5,0.25); a is x(1)
% b = normrnd(1.25,0.125); b is x(2)
% c = normrnd(6,0.5); c is x(3)
% B = normrnd(5.5,0.25); B is x(4)
% al = normrnd(0.59,0.0963); al is x(10)
% as = normrnd(0.59,0.0963); as is x(11)
% Variable Bounds
lb = [1; 0.5; 5; 3; 3; 1; 0.5; 1; 3; 0.4; 0.4; 0.4];
ub = [3; 1.5; 7; 6; 5; 4; 3; 6; 6; 0.59; 0.59; 0.59];
% Show Initial Objective
disp(['Initial Objective: ' num2str(objective(x0))])
% Linear Constraints
A = [0,0,-1,0,0,0,-1,0,0,0,0,0];
b = -5.99;
Aeq = [1,1,1,0,-1,0,0,0,0,0,0,0;
0,0,0,0,0,0,-2,1,0,0,0,0];
beq = [5.99;0];
% Nonlinear Constraints
nonlincon = @nonlconstr;
% Optimize with fmincon
%[X,FVAL,EXITFLAG,OUTPUT,LAMBDA,GRAD,HESSIAN]
% = fmincon(FUN,X0,A,B,Aeq,Beq,LB,UB,NONLCON,OPTIONS)
options = optimoptions('fmincon','Algorithm','active-set','PlotFcn',{@optimplotfval},'display','iter');
[x,fval,exitflag,output] = fmincon(objective,x0,A,b,Aeq,beq,lb,ub,nonlincon,options);
% Show Final Objective
disp(['Final Objective: ' num2str(objective(x))])
% Print Solution
disp('Solution')
disp(['a = ' num2str(x(1))])
disp(['b = ' num2str(x(2))])
disp(['c = ' num2str(x(3))])
disp(['B = ' num2str(x(4))])
disp(['hc = ' num2str(x(5))])
disp(['hd = ' num2str(x(6))])
disp(['t = ' num2str(x(7))])
disp(['r = ' num2str(x(8))])
disp(['Bt = ' num2str(x(9))])
disp(['al = ' num2str(x(10))])
disp(['as = ' num2str(x(11))])
disp(['at = ' num2str(x(12))])
output
toc
2- Objective function:
function f = objecfun(x)
Val = x(1).*(x(1)./tan(x(10)) + x(3)./tan(x(10))) + (x(1)./2).*(2.*x(4) - x(1)./tan(x(10)) - ...
x(1)./tan(x(11))) + x(1).*x(3)./sin(x(11));
ht = x(1)+x(2)+x(3)-x(5)-x(6);
Vul = x(2).*(x(2)./tan(x(10)) + x(3)./tan(x(10))) + (x(2)./2).*(2.*x(4) - 2*x(1)./tan(x(10)) - ...
2*x(1)./tan(x(11)) - x(2)./tan(x(10)) - x(2)./tan(x(11))) ...
+ x(2).*x(3)./sin(x(11)) + ht.*(x(9) + ht./tan(x(12)));
Vcl = x(7).*(x(8) + (x(1)+x(2))./sin(x(11))) + (x(3)+x(7)).*(x(4) - x(1)./tan(x(10)) - x(1)./tan(x(11)) - ...
x(2)./tan(x(10)) - x(2)./tan(x(11))...
+0.5*(x(3)+x(7))./tan(x(11))+0.5*(x(3)+x(7))./tan(x(10)));
f = 822*Val + 166*Vul + 131*Vcl;
3- Nonlinear constraints:
function [c,ceq] = nonlconstr(x)
Hs = normrnd(2.5,0.11);
T = normrnd(5,1.5);
gamma = normrnd(26,0.11);
S = (2*pi().*Hs)/9.81./(T.^2);
c = [9.81.*Hs.*T.*0.013.*exp(-22.*(x(5)./Hs).*sqrt(S./2./pi())./gamma) - 0.4;
cot(x(10))-3;
1.5-cot(x(10));
cot(x(11))-3;
1.5-cot(x(11));
cot(x(12))-3;
1.5-cot(x(12));
1-x(5);
x(5)-6;
Pfo(x(5))-0.001;
Pfa(x(1),x(10),x(3),x(4),x(11))-0.01];
ceq = [];
4- Functions for calculating the failure probabilities using the Monte Carlo method:
function p = Pfo(x)
n = 10^6;
Hs = normrnd(2.5,0.11,[1 n]);
T = normrnd(5,1.5,[1 n]);
gamma = normrnd(26,0.11,[1 n]);
Ss = (2*pi().*Hs)/9.81./(T.^2);
G = 0.4 - 9.81.*Hs.*T.*0.013.*exp(-22.*(x./Hs).*sqrt(Ss./2./pi())./gamma);
isuccess=zeros(1,n);
isuccess(G<=0)=1;
p = sum(isuccess)/n;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function p = Pfa(x1,x10,x3,x4,x11)
n = 10^6;
Hs = normrnd(2.5,0.11,[1 n]);
KD = normrnd(2.75,0.375,[1 n]);
Val = x1.*(x1./tan(x10) + x3./tan(x10)) + (x1./2).*(2.*x4 - x1./tan(x10) - ...
x1./tan(x11)) + x1.*x3./sin(x11);
Dn = (Val).^(1/3);
delta = 1.512;
G = delta*Dn.*(KD.*cot(x10)).^(1/3) - Hs;
isuccess=zeros(1,n);
isuccess(G<=0)=1;
p = sum(isuccess)/n;
end


2 Answers

Answer by Jeff Miller
on 19 Oct 2019 at 6:53
 Accepted Answer

Hi Soheil,
You have posted a lot of information and I certainly have not digested even a large fraction of it (it would take me days to figure out what this undocumented code is really doing), but I do have a few suggestions:
1. In your objective function, you could call the Monte Carlo routines and include penalties based on them, something like this:
% inside objecfun:
failurep1 = Pfo(as);
if failurep1 > 0.001
    Penalty1 = 1000 * (failurep1 - 0.001); % It works well to have the penalty increase with failurep1
else
    Penalty1 = 0;
end
failurep2 = Pfa(a,c,B,as,al); % I am not sure what x is here, but you must know this.
if failurep2 > 0.01
    Penalty2 = 1000 * (failurep2 - 0.01);
else
    Penalty2 = 0;
end
f = 822*Val + 166*Vul + 131*Vcl + Penalty1 + Penalty2;
% Note that now the parameter search routine will avoid parameter
% combinations that produce high failure probabilities, due to the
% penalties. If the 1000 multiplier is not enough to avoid
% exceeding your thresholds of .001 and .01, just increase it.
2. Now there is the issue of repeatability, as John explained.
As far as I can see, all of your calls to normrnd involve fixed parameters, so the randomization does not have to vary each time the optimization parameters are changed. For example, you could just generate a single Hs vector to use throughout the entire optimization run. Include this statement in the main program and make Hs global:
Hs = normrnd(2.5,0.11,[1 n]);
Correspondingly, remove these random calls from Pfo and Pfa. Then, each time Pfo and Pfa are called, they will use exactly the same Hs values. Do the same thing for all of the other random variables (T, gamma), and all of the functions will be repeatable. (If you think it is a bad idea for Pfo and Pfa to use the same Hs values, generate a separate vector like HsPfo and HsPfa for each routine to use.)
Keep in mind that the final parameter estimates that you get will be specific to the particular sets of random numbers that were drawn in the main program before the optimization run started. But you can see how much difference that makes by running the whole program again and again (each time basing the whole optimization run on new sets of random numbers).
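A minimal sketch of that idea, assuming the pre-drawn samples are passed in explicitly through an anonymous function rather than made global (the extra arguments to `nonlconstr` and `Pfo` are illustrative, not part of the posted code):

```matlab
% In the main script, draw every random sample ONCE, before calling fmincon:
n     = 1e6;
Hs    = normrnd(2.5, 0.11, [1 n]);
T     = normrnd(5,   1.5,  [1 n]);
gamma = normrnd(26,  0.11, [1 n]);

% Bind the fixed samples into the constraint via an anonymous function:
nonlincon = @(x) nonlconstr(x, Hs, T, gamma);

% Pfo would then reuse the same samples on every call, making it repeatable:
% function p = Pfo(hc, Hs, T, gamma)
%     Ss = (2*pi.*Hs)/9.81./(T.^2);
%     G  = 0.4 - 9.81.*Hs.*T.*0.013.*exp(-22.*(hc./Hs).*sqrt(Ss./2./pi)./gamma);
%     p  = sum(G <= 0)/numel(G);
% end
```

With this change, two calls to `nonlincon(x)` with the same `x` return identical values, which is what fmincon's finite-difference gradients require.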

1 Comment

Thanks for your helpful comments. They helped me fix some of the code errors and move forward.
Now I have found a problem with the definition of one of the Monte Carlo functions. I will ask about it in a separate comment.



Answer by John D'Errico
on 18 Oct 2019 at 20:27

Jeff Miller told you this, the first time you asked this question. I'll repeat it, since you don't seem to believe him.
You CANNOT use FMINCON with randomly generated values inside the objective function or the constraints. This will cause it to fail. It simply will not perform an optimization properly. This is true of almost any optimization tool. Why?
Because any optimization tool assumes several things about the functions you will pass in.
  1. The function must be repeatable. So, called with the same set of parameters, you must get out exactly the same result. This applies to both the objective function and the constraints.
  2. The function must be continuous and differentiable. Again, random functions are not that at all. They are not continuous, and certainly not differentiable.
Point #2 can be relaxed somewhat, for SOME optimizers. For example, tools like GA can often tolerate non-differentiable objective functions and constraints, to some extent, but there is no assurance of good convergence properties then.
But make things non-repeatable, and you are asking for random garbage as a result, from any optimizer.
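A quick way to see the non-repeatability in the code as posted (a hypothetical check, run at the command line against the question's `nonlconstr`):

```matlab
% Two evaluations at the SAME point give DIFFERENT constraint values,
% because nonlconstr draws fresh normrnd samples on every call.
x0 = [2; 1; 6; 4.5; 4; 2.5; 1.75; 3.5; 4.5; 0.5; 0.5; 0.5];
c1 = nonlconstr(x0);
c2 = nonlconstr(x0);
max(abs(c1 - c2))   % nonzero: the constraint is not a function of x alone
```

Since fmincon estimates gradients by finite differences of exactly such repeated evaluations, this random noise swamps the true derivative information.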

2 Comments

This problem is based on a published paper (attached here). As you can see, the authors clearly mention solving this problem with the active-set method of fmincon. If you are right, then this paper probably reported fake results. Yes?!
I am simply trying to solve the same problem using the same method.
Matt J
on 18 Oct 2019 at 21:27
If you are right, then this paper probably reported fake results. Yes?!
No. As Jeff told you, the random generation of the model parameters is not supposed to be repeated each time the cost/constraint function is evaluated. The cost function and constraints in the paper are based on values for the parameters that have already been observed and therefore don't fluctuate randomly anymore.



