efficient dynamic constraints with fmincon

Views: 4 (last 30 days)
Christopher on 2 September 2015
Commented: Christopher on 2 September 2015
I would like to use fmincon to find a vector of values W in [0,1]^K that can serve as a weighting function matching the K observations to another primary observation. K may represent a high dimension, and not all K dimensions may be of use. I would like to restrict which of the K dimensions to use based on some criterion, i.e. assign w_k = 0 for some k in [1,K]. There are two easy ways to do this: (1) transform the problem to "get rid" of the dimensions that I won't use before inputting them into fmincon, or (2) set both the ub and lb constraints to 0 for all k that fail to meet the criterion. Which is more efficient? Does fmincon "feel" along a direction if lb=ub for that dimension?
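Option (1) might look like the following minimal sketch. The criterion (`score > 0.3`) and the objective `sum(w.^2)` are placeholders, not part of the original question; the point is only the mask-then-scatter pattern:

```matlab
% Sketch of option (1): drop dimensions that fail the criterion before
% calling fmincon, then scatter the result back to full dimension.
K = 10;
score = rand(K, 1);                 % example per-dimension criterion values
keep  = score > 0.3;                % logical mask of dimensions to use
Kred  = nnz(keep);                  % reduced problem size

% Optimize only the kept weights in [0,1]^Kred (placeholder objective):
wRed = fmincon(@(w) sum(w.^2), 0.5*ones(Kred,1), [], [], [], [], ...
               zeros(Kred,1), ones(Kred,1));

w = zeros(K, 1);                    % w_k = 0 for the discarded dimensions
w(keep) = wRed;                     % reassemble the full weight vector
```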

Answer (1)

Matt J on 2 September 2015
Edited: Matt J on 2 September 2015
Which is more efficient? Does fmincon "feel" along a direction if lb=ub for that dimension?
I don't think efficiency will be impacted so much by what fmincon is doing as by what the objective function and constraints that you supply are doing. When you have parameters that are a priori known, you can usually use that knowledge to simplify the objective function calculations, a benefit you don't get if you use lb=ub to convey the known value artificially. As an example, consider the simple objective function,
f(x) = sum(x.^2)
Suppose that x is a vector of length N and x(2:N) are, in truth, known a priori. Then one option is to use that to reduce the above analytically to the minimization of
f(x)=x(1)^2
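As a minimal sketch of this reduction (the bounds and the known values are made-up example data, not from the question):

```matlab
% Full problem: minimize sum(x.^2) over x in R^N,
% but x(2:N) are known a priori to equal xKnown.
N = 5;
xKnown = zeros(N-1, 1);          % example known values of x(2:N)

% Reduced objective: only x(1) is free; the known part is a constant.
constPart = sum(xKnown.^2);      % pre-computed once, outside fmincon
fReduced  = @(x1) x1.^2 + constPart;

x1opt = fmincon(fReduced, 1, [], [], [], [], -10, 10);
xFull = [x1opt; xKnown];         % reassemble the full N-vector
```

Each iteration now evaluates a scalar square instead of an N-term sum.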
which is an N-fold cheaper computation. The same goes for the constraints. An MxN matrix inequality system
A*x<=b
reduces to an Mx1 system
A(:,1)*x(1) <= b - A(:,2:N)*x(2:N)
Notice that everything on the RHS is known, fixed, and pre-computable.
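The same reduction in code, with hypothetical example data for A, b, and the known entries:

```matlab
% Reduce an MxN inequality system A*x <= b when x(2:N) = xKnown is fixed.
M = 3;  N = 5;
A = rand(M, N);  b = ones(M, 1);
xKnown = 0.1*ones(N-1, 1);

Ared = A(:, 1);                  % Mx1 reduced constraint matrix
bred = b - A(:, 2:N)*xKnown;     % RHS: known, fixed, computed once

% fmincon now solves a 1-D problem instead of an N-D one:
f = @(x1) x1.^2;
x1opt = fmincon(f, 0.5, Ared, bred);
```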
Conversely, if you use lb,ub to constrain the values of x(2:N), then even if fmincon doesn't vary x(2:N) in the course of its iterations, it will still repeatedly call/evaluate the full function f(x)=sum(x.^2) and perform the complete matrix multiplication A*x, with all of their O(N) computational demands.
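For comparison, option (2) from the question looks like this sketch (same made-up example data as above; note the objective stays N-dimensional):

```matlab
% Option (2): pin x(2:N) by setting lb=ub for those entries. fmincon
% still evaluates the full N-dimensional objective at every iteration.
N = 5;
xKnown = zeros(N-1, 1);
lb = [-10; xKnown];                % lb = ub for the fixed entries
ub = [ 10; xKnown];
f  = @(x) sum(x.^2);               % full O(N) evaluation on every call
xopt = fmincon(f, [1; xKnown], [], [], [], [], lb, ub);
```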
1 Comment
Christopher on 2 September 2015
That's good to know -- to sum up, it's worth translating the problem of dimension N into some smaller dimension K if possible. Will do.
