What do DiffMinChange and DiffMaxChange actually do?

Danny on 8 September 2013
DiffMinChange and DiffMaxChange are options that can be specified, for example, when running fmincon. According to the MathWorks documentation, they stipulate the "Minimum [Maximum] change in variables for finite-difference gradients (a positive scalar)." However, I am finding that these options do not work in a predictable way.
- Are they measuring the magnitude of the gradient taken, or the change?
- Which norm is being used?
- How should these stipulations show up when I am running a simulation?
Currently, when I set, for example, DiffMaxChange = 0.1, I still see fmincon changing the variables of the objective function by more than this value, under any norm I can think of. Why is this happening?
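For concreteness, here is a minimal sketch of the kind of setup I mean (the objective and starting point are just placeholders; DiffMaxChange is the documented fmincon option):

f = @(x) (x(1) - 3)^2 + (x(2) + 1)^2;            % placeholder objective
x0 = [0; 0];
opts = optimset('DiffMaxChange', 0.1, 'Display', 'iter');
x = fmincon(f, x0, [],[],[],[],[],[],[], opts);  % unconstrained, for simplicity
% The iterative display still shows steps in x larger than 0.1.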
Any and all information will be appreciated!
Thank you, Danny

Accepted Answer

Matt J on 8 September 2013
Edited: Matt J on 8 September 2013
When gradients of your objective function and/or constraints are approximated using finite differences, calculations along the following lines are performed:
y = x;                                % copy the current point
y(i) = x(i) + delta;                  % perturb the i-th variable by delta
gradient(i) = (f(y) - f(x))/delta;    % forward finite-difference approximation
DiffMinChange and DiffMaxChange dictate lower and upper bounds, respectively, on the value used for delta, and nothing more.
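Roughly speaking (this is a sketch, not the actual Optimization Toolbox source, and the default-step formula below is just a common heuristic), the two options act like a clamp on the perturbation:

deltaDefault = sqrt(eps)*max(abs(x(i)), 1);                    % a typical default step (assumption)
delta = min(max(deltaDefault, DiffMinChange), DiffMaxChange);  % clamp to the option bounds

You would set them the usual way, e.g. opts = optimset('DiffMinChange', 1e-8, 'DiffMaxChange', 0.1). Nothing in this clamps the step the solver subsequently takes in x.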
  2 Comments
Danny on 8 September 2013
But then why doesn't setting a value for DiffMaxChange restrict the change in any given variable to less than that value, for example? The relationship between the value and the step size taken is not obvious at all.
Matt J on 9 September 2013
Edited: Matt J on 9 September 2013
"The relationship between the value and the step size taken is not obvious at all."
There isn't meant to be a relationship between them, or at least not a relationship you can exploit. The purpose of DiffMinChange and DiffMaxChange is purely to let the user tune and try to improve the accuracy of the gradient/Hessian approximations.
It goes without saying, I guess, that if you make a really lousy approximation to the derivatives, your step size (and step direction) can be very different from what you would get if you turned GradObj and GradConstr on and supplied an exact gradient/Hessian calculation. But that difference isn't a useful one -- there's never any good reason to deliberately make the finite difference approximation less accurate than it can be.
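For example (a sketch only; the objective here is hypothetical, but GradObj is the documented option name in these releases), supplying the gradient yourself bypasses finite differencing for the objective entirely:

% myObjective.m -- returns the value and the exact gradient
function [fval, grad] = myObjective(x)
fval = (x(1) - 3)^2 + (x(2) + 1)^2;    % hypothetical objective
grad = [2*(x(1) - 3); 2*(x(2) + 1)];   % its exact gradient
end

% At the command line:
opts = optimset('GradObj', 'on');      % tell fmincon a gradient is supplied
x = fmincon(@myObjective, [0; 0], [],[],[],[],[],[],[], opts);

With GradObj on, DiffMinChange and DiffMaxChange have no effect on the objective's derivatives.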
