How can I write a self-scaling objective function for fmincon?

Hey,
I use fmincon and I want to maximize this function:
fun = @(x) -(x(1)*x(2)*x(3))
Now I do not want to change this function every time I increase or decrease the size of my optimization problem.
For example, if I am looking for 6 solutions, my function should look like this:
fun = @(x) -(x(1)*x(2)*x(3)*x(4)*x(5)*x(6))
Is there a way to do this automatically?
Thank you so much!

Accepted Answer

Matt J on 28 Nov 2018
Edited: Matt J on 28 Nov 2018
fun = @(x) -sum(log(x))
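If the number of factors varies, the same objective can be built for any n; a minimal sketch, where n (the number of variables entering the product) is an assumption:

```matlab
n = 6;                          % however many factors the problem currently has
fun = @(x) -sum(log(x(1:n)));   % equivalent to maximizing prod(x(1:n)) for positive x
```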

11 Comments

This would work without difficulty for positive x, but not if any components were negative.
Matt J on 28 Nov 2018
Edited: Matt J on 28 Nov 2018
Yes, that is true, but if the x(i) are all constrained to be positive, it can be important to implement it this way to avoid overflow. It also makes the objective function convex, which can be a good thing.
Since there are infinitely many duplicate solutions (any one component can be multiplied by an arbitrary nonzero real factor if another is divided by the same factor), I figure the original poster must be distinguishing the valid ones by some constraints, possibly including nonlinear and/or equality constraints. However, I am not confident at the moment that all of the x(i) are positive.
... Actually, I am even less confident that the product is the real objective function; I think it is just an example.
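The overflow concern is easy to demonstrate; a small sketch with values chosen only for illustration:

```matlab
x = 1e200*ones(1,3);   % three large positive factors
prod(x)                % 1e600 overflows double precision: returns Inf
sum(log(x))            % finite (about 1381.6), so the log-domain objective remains usable
```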
I am adding some code for better understanding. Here you can see the function I want to optimize; in this case it is a minimization. Thank you for all your comments.
Bd=[3 3 6 2 4 2];
Anz_Var = 18;
PV = 6;
% lower and upper bounds
lb = zeros(Anz_Var,1);
ub = 10*ones(Anz_Var,1);
% equality constraints Aeq*x = beq
E_start = 4*3;
beq = zeros(8,1);
Aeq = zeros(8,Anz_Var);
%%%
Aeq(1,13:15) = 1; beq(1) = E_start;
Aeq(2,7) = 1; Aeq(2,13) = -1; Aeq(2,1) = -1; beq(2) = -Bd(1,1);
Aeq(3,8) = 1; Aeq(3,14) = -1; Aeq(3,2) = -1; beq(3) = -Bd(1,2);
Aeq(4,9) = 1; Aeq(4,15) = -1; Aeq(4,3) = -1; beq(4) = -Bd(1,3);
%%%
Aeq(5,16:18) = 1; Aeq(5,7:9) = -1; beq(5) = 0;
Aeq(6,10) = 1; Aeq(6,16) = -1; Aeq(6,4) = -1; beq(6) = -Bd(1,4);
Aeq(7,11) = 1; Aeq(7,17) = -1; Aeq(7,5) = -1; beq(7) = -Bd(1,5);
Aeq(8,12) = 1; Aeq(8,18) = -1; Aeq(8,6) = -1; beq(8) = -Bd(1,6);
%test=@(X) -prod(6)
fun = @(x) x(1)*x(2)*x(3)*x(4)*x(5)*x(6);
% x0
x0 = zeros(1,Anz_Var);
[x, fval] = fmincon(fun,x0,[],[],Aeq,beq,lb,ub)
Matt J on 29 Nov 2018
Edited: Matt J on 29 Nov 2018
Well then you have several options. Minimize the product,
[x1 fval1] = fmincon(@(x) prod(x(1:6)),x0,[],[],Aeq,beq,lb,ub);
or equivalently minimize its log,
[x2 fval2] = fmincon(@(x) sum(log(x(1:6))),x0,[],[],Aeq,beq,lb,ub);
fval2=exp(fval2);
I like version #2 a bit better, because I find it gives better convergence:
fval1 = 1.9901e-04
fval2 = 1.8983e-18
With lb = 0 and x0 = 0, the log version would involve a sum of negative infinities, which might present difficulties for the algorithm.
Matt J on 29 Nov 2018
Edited: Matt J on 29 Nov 2018
That's a legitimate concern, but I think it works out because the interior-point algorithm is used: the singularities on the boundary can only be approached asymptotically. The bigger problem is that the optimization is ill-posed. Both formulations give me lots of different solutions when I randomize the initial guess,
x0 = 0.5+rand(1,Anz_Var);
In any case, the non-log version is problematic for some reason. It always takes many more iterations to converge, often terminating because MaxIter is reached, and it always gets stuck in a local minimum. Here is an expanded version of my test code, in which I supply analytical gradients:
x0 = ones(1,Anz_Var);
opts=optimoptions(@fmincon,'SpecifyObjectiveGradient',true,'MaxIter',1e4,'MaxFunEvals',3e4);
[x1,~,ef1,out1] = fmincon(@fun1,x0,[],[],Aeq,beq,lb,ub,[],opts); fval1 = fun1(x1);
[x2,~,ef2,out2] = fmincon(@fun2,x0,[],[],Aeq,beq,lb,ub,[],opts); fval2 = fun1(x2); % evaluate both solutions with the product objective for comparison
fval1,
fval2
function [f,g] = fun1(x)
% product objective and its gradient (valid where x(1:6) are nonzero)
f = prod(x(1:6));
g = zeros(1,18);
g(1:6) = f./x(1:6);
end
function [f,g] = fun2(x)
% log-domain objective and its gradient (valid where x(1:6) > 0)
f = sum(log(x(1:6)));
g = zeros(1,18);
g(1:6) = 1./x(1:6);
end
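As a sanity check on the analytical gradients above, one could compare them against finite differences at a random positive test point (the step size h and the test point are assumptions, chosen to stay away from the x = 0 singularities):

```matlab
x = 1 + rand(1,18);            % random point away from x = 0
[f1,g1] = fun1(x);
h = 1e-6; gfd = zeros(1,18);
for k = 1:6                    % only the first 6 components enter the objective
    e = zeros(1,18); e(k) = h;
    gfd(k) = (fun1(x+e) - f1)/h;
end
max(abs(g1 - gfd))             % should be small, on the order of h
```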
Tim on 29 Nov 2018
Thank you for the detailed answer! I will try the fun2 solution, because it seems to provide the best minimum. I still have to work through some of the discussion around my problem.
Matt J on 29 Nov 2018
Edited: Matt J on 29 Nov 2018
Just as a small follow-up, I am finding that the first version, with prod(x), performs much better when the 'HessianFcn' option is used, but I still generally see lower objective values reached by the logged version.
Tim on 30 Nov 2018
Ah, okay. I will try this as well. One additional question came to mind: is my code a good way to minimize each of the objective values individually, or would you suggest something else?
Matt J on 30 Nov 2018
What is "each of the objective values"?


More Answers (1)

Walter Roberson on 28 Nov 2018

1 vote

@(X) -prod(X)

2 Comments

Matt J on 28 Nov 2018
Care is needed here to avoid overflow/underflow.
Tim on 29 Nov 2018
Thank you for your answer! I appreciate that you are so eager to help solve my problem.


Question asked by Tim on 28 Nov 2018; last comment on 30 Nov 2018.
