fmincon optimization: is the first-order optimality very sensitive to changes in the step tolerance?

Views: 10 (last 30 days)
I use the fmincon interior-point algorithm to fit parameters to a PDE.
Here are my basic settings:
opts = optimoptions('fmincon', ...
'StepTolerance', 1e-12, ...
'FunctionTolerance', 1e-12, ...
'OptimalityTolerance', 1e-12, ...
'MaxIterations', 250,...
'SpecifyObjectiveGradient', true, ...
'CheckGradients', false);
lb = zeros(9,1);
ub = 4 + zeros(9,1);
Aineq = ... ; % entries have magnitude ~1e2
bineq = zeros(9,1);
problem = createOptimProblem( ...
params.solverName, ...
'objective', myFun, ...
'x0', startVec, ...
'lb', lb, ...
'ub', ub, ...
'Aineq', Aineq, ...
'bineq', bineq, ...
'options', opts);
% create MultiStart object
ms = MultiStart('Display', 'iter', ...
'UseParallel', true, ...
'StartPointsToRun', 'all', ...
'FunctionTolerance', 0);
% run
[x, fval, exitflag, output, solutions] = run(ms, problem, myStartPoints); % keep per-run solutions for inspection
There are nine parameters, and I have lower and upper bounds as well as linear inequality constraints.
I scaled the matrix Aineq by 1e2 manually so that fmincon pays more attention to feasibility. I am aware that this comes with poor convergence and other drawbacks, but it has worked quite well so far. The reason for choosing such tight tolerances (1e-12) is to work around flat regions of the objective function, if any.
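For illustration, the scaling amounts to something like this (a sketch; scale = 1e2 is the factor mentioned above, and since bineq is zero here, scaling it changes nothing but keeps both sides consistent):
scale = 1e2;                  % factor mentioned above
AineqScaled = scale * Aineq;  % inflate the constraint rows
bineqScaled = scale * bineq;  % keep the right-hand side consistent
Scaling both sides of a row leaves the feasible set unchanged, but it inflates the constraint residuals that enter the merit function and the optimality measure.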
Using these options, I get the following output from multistart:
The solution of all 10 runs is
x = [0 0.00838947 0.0167789 0.0251684 0.0335579 0.0419473 0.0503368 0.0587263 0.0673571]
All solutions have exitflag = 2 (change in x below StepTolerance, probably because of the brutal scaling) and the same value of the objective function. Also, the first-order optimality is small.
However, run index = 4, for instance, converged to the same solution, but its first-order optimality is rather large compared to the other runs.
This becomes even more visible if I relax all my tolerances (step, function, and first-order optimality tolerance) to the default value of 1e-6:
The solution is nearly the same as before
x = [0 0.00838945 0.0167789 0.0251684 0.0335578 0.0419473 0.0503367 0.0587262 0.067357]
however, the first-order optimality is higher by several orders of magnitude, while the solution is nearly unchanged, which is also indicated by the sum of squares.
Such large optimality measures make the solution less trustworthy.
How is it possible that the optimality measures are so different if both the solution and the sum of squares are practically identical?
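For reference, here is one way to recompute the stationarity part of fmincon's first-order optimality measure at a returned point (a sketch; xOpt stands for a returned solution, and lambda is assumed to come from a direct fmincon call at that point, since I need the multiplier structure):
[~, g] = myFun(xOpt);                       % objective gradient
gradLag = g + Aineq' * lambda.ineqlin ...   % linear inequality part
            - lambda.lower + lambda.upper;  % lower/upper bound parts
fprintf('||grad Lagrangian||_inf = %g\n', norm(gradLag, Inf));
Because this measure depends on the multiplier estimates and not just on x, two runs stopping at practically the same point can still report different optimality values.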
7 Comments
SA-W on 27 Nov 2023
The gradient check works. There is probably not much more that I can do.
Torsten on 27 Nov 2023
I suggest you compute the objective function near the point that MATLAB reports as optimal, changing each parameter separately while holding the others constant, and see what kind of curves you get.
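Something along these lines (a sketch; xOpt stands for your returned solution, myFun for your objective, and the +/-1e-3 window and 201 samples are arbitrary choices):
% one-at-a-time sweep of the objective around the returned solution
% stay inside the bounds and linear constraints, or the PDE may not solve
delta = linspace(-1e-3, 1e-3, 201);
for k = 1:numel(xOpt)
    fvals = zeros(size(delta));
    for j = 1:numel(delta)
        xTrial = xOpt;
        xTrial(k) = xTrial(k) + delta(j);   % vary one parameter only
        fvals(j) = myFun(xTrial);
    end
    figure; plot(xOpt(k) + delta, fvals, '.-');
    xlabel(sprintf('x(%d)', k)); ylabel('objective');
end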


Accepted Answer

Matt J on 27 Nov 2023
Edited: Matt J on 27 Nov 2023
So given the results I show, can we qualitatively say that the objective is likely to be very flat at the solution? Or something else?
Well, it basically means that a small change in x (near the stopping point) produces a large change in the gradient. The function would seem to have very high curvature there, or possibly a discontinuous first derivative.
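A quick way to probe that is to compare gradients under small perturbations (a sketch; xOpt stands for your returned solution, h = 1e-6 is an arbitrary probe step, and myFun returns [f, g] since SpecifyObjectiveGradient is true):
h = 1e-6;
[~, g0] = myFun(xOpt);                       % gradient at the stopping point
for k = 1:numel(xOpt)
    e = zeros(size(xOpt));
    e(k) = h;                                % perturb one coordinate
    [~, gPlus] = myFun(xOpt + e);
    fprintf('direction %d: ||grad change|| / h = %g\n', k, norm(gPlus - g0) / h);
end
The printed ratios roughly estimate column norms of the Hessian; very large or erratic values would support the high-curvature or discontinuous-derivative picture.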
25 Comments
SA-W on 28 Nov 2023
It must be the Hessian of the Lagrangian, not the objective function, although I guess if you only have linear constraints, they will be the same thing.
Yes, I think so too.
Do you think it makes sense to calculate correlations, etc., with a Hessian that has a condition number of ~1e7?
Matt J on 28 Nov 2023
Edited: Matt J on 28 Nov 2023
No idea. You can try using pinv(Hessian) instead of inv().
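For instance (a sketch; H stands for the Hessian you have, e.g. the hessian output of fmincon, which for interior-point is itself only an approximation):
C = pinv(H);            % pseudoinverse instead of inv(H)
s = sqrt(diag(C));      % approximate standard deviations
R = C ./ (s * s');      % correlation matrix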


More Answers (0)
