
There are two Optimization Toolbox™ multiobjective solvers: `fgoalattain` and `fminimax`.

`fgoalattain` addresses the problem of reducing a set of nonlinear functions *F*_{i}(*x*) below a set of goals *F*_{i}^{*}. Since there are several functions *F*_{i}(*x*), it is not always clear what it means to solve this problem, especially when you cannot achieve all the goals simultaneously. Therefore, the problem is reformulated to one that is always well-defined.

The *unscaled goal attainment problem* is to minimize the maximum of *F*_{i}(*x*) – *F*_{i}^{*}.

There is a useful generalization of the unscaled problem. Given a set of positive weights *w*_{i}, the *goal attainment problem* tries to find *x* to minimize the maximum of

$$\frac{{F}_{i}(x)-{F}_{i}^{*}}{{w}_{i}}.$$ | (1) |

This minimization is supposed to be accomplished while satisfying all types of constraints: *c*(*x*) ≤ 0, *ceq*(*x*) = 0, *A·x* ≤ *b*, *Aeq·x* = *beq*, and *l* ≤ *x* ≤ *u*.

If you set all weights equal to 1 (or any other positive constant), the goal attainment problem is the same as the unscaled goal attainment problem. If the *F*_{i}^{*} are positive, and you set all weights as *w*_{i} = *F*_{i}^{*}, the goal attainment problem becomes minimizing the relative difference between the functions *F*_{i}(*x*) and the goals *F*_{i}^{*}.

In other words, the goal attainment problem is to minimize a slack variable *γ*, defined as the maximum over *i* of the expressions in Equation 1. This implies the expression that is the formal statement of the goal attainment problem:

$$\underset{x,\gamma}{\mathrm{min}}\gamma $$

such that *F*(*x*) – *w*·*γ* ≤ *F*^{*}, *c*(*x*) ≤ 0, *ceq*(*x*) = 0, *A·x* ≤ *b*, *Aeq·x* = *beq*, and *l* ≤ *x* ≤ *u*.

`fminimax` addresses the problem of minimizing the maximum of a set of nonlinear functions, subject to all types of constraints:

$$\underset{x}{\mathrm{min}}\underset{i}{\mathrm{max}}{F}_{i}(x)$$

such that *c*(*x*) ≤ 0, *ceq*(*x*) = 0, *A·x* ≤ *b*, *Aeq·x* = *beq*, and *l* ≤ *x* ≤ *u*.

Clearly, this problem is a special case of the unscaled goal attainment problem, with *F*_{i}^{*} = 0 and *w*_{i} = 1.
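The slack-variable formulation above can be handed to any general nonlinear programming solver, not only the Toolbox ones. The sketch below (Python/SciPy, not the `fgoalattain` implementation) solves a small goal attainment problem with two hypothetical objectives; the objectives, goals, and weights are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objectives for illustration (not from the text):
# F_1(x) = x1^2 + x2^2,  F_2(x) = (x1 - 2)^2
def F(x):
    return np.array([x[0]**2 + x[1]**2, (x[0] - 2.0)**2])

goals = np.array([1.0, 1.0])    # F*
weights = np.array([1.0, 1.0])  # w (all 1, so this is the unscaled problem)

# Augment the variables: z = [x1, x2, gamma] and minimize gamma
# subject to F_i(x) - w_i*gamma <= F_i*, written for SLSQP as
# F* + w*gamma - F(x) >= 0.
res = minimize(lambda z: z[-1],
               x0=[0.0, 0.0, 0.0],
               constraints={"type": "ineq",
                            "fun": lambda z: goals + weights * z[-1] - F(z[:-1])},
               method="SLSQP")
x_opt, gamma_opt = res.x[:-1], res.x[-1]
print(x_opt, gamma_opt)  # both goals are attained exactly: gamma near 0 at x near (1, 0)
```

For this example the two constraint surfaces balance at *x* = (1, 0), where both objectives equal their goals and the attainment factor *γ* is zero; a negative *γ* would mean the goals are overachieved.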

This section describes the goal attainment method of Gembicki [3]. This method uses a set of design goals, $${F}^{*}=\left\{{F}_{1}^{*},{F}_{2}^{*},\mathrm{...},{F}_{m}^{*}\right\}$$, associated with a set of objectives, *F*(*x*) = {*F*_{1}(*x*), *F*_{2}(*x*),...,*F*_{m}(*x*)}. The goal attainment problem is then expressed as

$$\underset{\gamma \in \Re ,\text{}x\in \Omega}{\text{minimize}}\gamma $$ | (2) |

such that $${F}_{i}(x)-{w}_{i}\gamma \le {F}_{i}^{*},\text{}i=1,\mathrm{...},m.$$

The term *w*_{i}*γ* introduces an element of slackness into the problem, which otherwise imposes that the goals be rigidly met.

The goal attainment method is represented geometrically in the figure below in two dimensions.

**Figure 8-1, Geometrical Representation of the Goal Attainment
Method**

Specification of the goals, $$\left\{{F}_{1}^{*},{F}_{2}^{*}\right\}$$,
defines the goal point, *P*. The weighting vector
defines the direction of search from *P* to the feasible
function space, Λ(*γ*).
During the optimization *γ* is varied, which
changes the size of the feasible region. The constraint boundaries
converge to the unique solution point *F*_{1s}, *F*_{2s}.

The goal attainment method has the advantage that it can be posed as a nonlinear programming problem. Characteristics of the problem can also be exploited in a nonlinear programming algorithm. In sequential quadratic programming (SQP), the choice of merit function for the line search is not easy because, in many cases, it is difficult to “define” the relative importance between improving the objective function and reducing constraint violations. This has resulted in a number of different schemes for constructing the merit function (see, for example, Schittkowski [36]). In goal attainment programming there might be a more appropriate merit function, which you can achieve by posing Equation 2 as the minimax problem

$$\underset{x\in {\Re}^{n}}{\text{minimize}}\text{}\underset{i}{\mathrm{max}}\left\{{\Lambda}_{i}\right\},$$ | (3) |

where

$${\Lambda}_{i}=\frac{{F}_{i}(x)-{F}_{i}^{*}}{{w}_{i}},\text{}i=1,\mathrm{...},m.$$

Following the argument of Brayton et al. [1] for minimax optimization using SQP, using the merit function of Equation 30 for the goal attainment problem of Equation 2 gives

$$\psi (x,\gamma )=\gamma +{\displaystyle \sum _{i=1}^{m}{r}_{i}\cdot \mathrm{max}\left\{0,{F}_{i}(x)-{w}_{i}\gamma -{F}_{i}^{*}\right\}}.$$ | (4) |
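As a concrete reading of Equation 4, the sketch below evaluates *ψ*(*x*, *γ*) for given function values. The penalty parameters *r*_{i} and all numerical values are hypothetical, chosen only to show how a violated goal-attainment constraint contributes to the merit value.

```python
import numpy as np

def merit_eq4(F_vals, gamma, goals, weights, r):
    # psi(x, gamma) = gamma + sum_i r_i * max{0, F_i(x) - w_i*gamma - F_i*}
    violation = np.maximum(0.0, F_vals - weights * gamma - goals)
    return gamma + np.sum(r * violation)

# Hypothetical values: the first goal is violated, the second is met.
F_vals  = np.array([1.5, 0.8])   # F_i(x)
goals   = np.array([1.0, 1.0])   # F_i*
weights = np.array([1.0, 1.0])   # w_i
r       = np.array([10.0, 10.0]) # penalty parameters (hypothetical)

psi = merit_eq4(F_vals, gamma=0.2, goals=goals, weights=weights, r=r)
print(psi)  # gamma plus the penalty on the first constraint: 0.2 + 10*0.3
```

Only the first term of the sum is active here, because the second constraint, 0.8 – 0.2 – 1.0, is already satisfied.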

When the merit function of Equation 4 is used as the basis of a line search procedure, then, although *ψ*(*x*,*γ*) might decrease for a step in a given search direction, the function max Λ_{i} might paradoxically increase. This amounts to accepting a degradation in the worst-case objective. Since the worst-case objective is responsible for the value of the objective function *γ*, this is accepting a step that ultimately increases the objective function to be minimized. Conversely, *ψ*(*x*,*γ*) might increase when max Λ_{i} decreases, implying a rejection of a step that improves the worst-case objective.

Following the lines of Brayton et al. [1], a solution is therefore to set *ψ*(*x*) equal to the worst case objective, i.e.,

$$\psi (x)=\underset{i}{\mathrm{max}}{\Lambda}_{i}.$$ | (5) |

A problem in the goal attainment method is that it is common to use a weighting coefficient equal to 0 to incorporate hard constraints. The merit function of Equation 5 then becomes infinite for arbitrary violations of the constraints.

To overcome this problem while still retaining the features of Equation 5, the merit function is combined with that of Equation 31, giving the following:

$$\psi (x)={\displaystyle \sum _{i=1}^{m}\left\{\begin{array}{ll}{r}_{i}\cdot \mathrm{max}\left\{0,{F}_{i}(x)-{w}_{i}\gamma -{F}_{i}^{*}\right\}\hfill & \text{if }{w}_{i}=0\hfill \\ \underset{i}{\mathrm{max}}{\Lambda}_{i},\text{}i=1,\mathrm{...},m\hfill & \text{otherwise}\text{.}\hfill \end{array}\right.}$$ | (6) |
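One way to read Equation 6 in code: the hard constraints (*w*_{i} = 0) contribute exact penalty terms, while the remaining objectives contribute through their worst-case Λ_{i}. The sketch below follows that reading; the numerical values are hypothetical.

```python
import numpy as np

def merit_eq6(F_vals, gamma, goals, weights, r):
    # Hard constraints (w_i == 0): exact penalties r_i * max{0, F_i - w_i*gamma - F_i*}
    # (the w_i*gamma term vanishes since w_i = 0).
    hard = (weights == 0)
    psi = np.sum(r[hard] * np.maximum(0.0, F_vals[hard] - weights[hard] * gamma - goals[hard]))
    # Soft objectives: worst-case Lambda_i = (F_i - F_i*) / w_i.
    if np.any(~hard):
        lam = (F_vals[~hard] - goals[~hard]) / weights[~hard]
        psi += np.max(lam)
    return psi

# Hypothetical values: one hard constraint (w = 0), two soft objectives.
F_vals  = np.array([0.5, 2.0, 1.0])
goals   = np.array([0.0, 1.0, 1.0])
weights = np.array([0.0, 1.0, 1.0])
r       = np.array([100.0, 1.0, 1.0])

psi = merit_eq6(F_vals, gamma=0.0, goals=goals, weights=weights, r=r)
print(psi)  # 100 * 0.5 (hard-constraint penalty) + 1.0 (worst-case Lambda)
```

Note that the hard-constraint penalty stays finite even though dividing by *w*_{i} = 0, as in Equation 5, would not.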

Another feature that can be exploited in SQP is the objective
function *γ*. From the KKT equations it can
be shown that the approximation to the Hessian of the Lagrangian, *H*,
should have zeros in the rows and columns associated with the variable *γ*.
However, this property does not appear if *H* is
initialized as the identity matrix. *H* is therefore
initialized and maintained to have zeros in the rows and columns associated
with *γ*.

These changes make the Hessian, *H*, indefinite. Therefore *H* is set to have zeros in the rows and columns associated with *γ*, except for the diagonal element, which is set to a small positive number (e.g., `1e-10`). This allows use of the fast converging positive definite QP method described in Quadratic Programming Solution.
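The initialization described above can be sketched with NumPy. The dimension is hypothetical; only the zeroed *γ* row/column and the small positive diagonal entry follow the text.

```python
import numpy as np

n = 4  # number of design variables x (hypothetical)

# Hessian approximation over the augmented variables [x; gamma]:
# identity on the x-block, zeros in the gamma row and column except
# a tiny positive diagonal entry, keeping H positive definite for
# the QP subproblem.
H = np.eye(n + 1)
H[-1, :] = 0.0
H[:, -1] = 0.0
H[-1, -1] = 1e-10

print(np.linalg.eigvalsh(H))  # all eigenvalues are positive
```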

The preceding modifications have been implemented in `fgoalattain` and have been found to make the method more robust. However, because of the rapid convergence of the SQP method, the requirement that the merit function strictly decrease sometimes requires more function evaluations than an implementation of SQP using the merit function of Equation 30.

`fminimax` uses a goal attainment method. It takes goals of 0, and weights of 1. With this formulation, the goal attainment problem becomes

$$\underset{x}{\mathrm{min}}\underset{i}{\mathrm{max}}\left(\frac{{f}_{i}(x)-goa{l}_{i}}{weigh{t}_{i}}\right)=\underset{x}{\mathrm{min}}\underset{i}{\mathrm{max}}{f}_{i}(x),$$

which is the minimax problem.
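A quick numerical check of this equivalence (a SciPy sketch with made-up objectives, not the `fminimax` implementation): solving the goal attainment formulation with goals of 0 and weights of 1 recovers the minimax solution.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objectives: f_1(x) = (x - 1)^2, f_2(x) = (x + 1)^2
def f(x):
    return np.array([(x[0] - 1.0)**2, (x[0] + 1.0)**2])

# Goal attainment over z = [x, gamma] with goals = 0 and weights = 1:
# minimize gamma subject to f_i(x) - gamma <= 0, i.e. gamma - f(x) >= 0.
res = minimize(lambda z: z[-1],
               x0=[0.5, 2.0],
               constraints={"type": "ineq",
                            "fun": lambda z: z[-1] - f(z[:-1])},
               method="SLSQP")
x_min, gamma = res.x[0], res.x[1]
print(x_min, gamma)  # minimax point near x = 0, where max_i f_i(x) = 1
```

By symmetry, the maximum of the two parabolas is minimized at *x* = 0, where both equal 1, and the solver's *γ* matches that minimax value.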

Parenthetically, you might expect `fminimax` to turn the multiobjective function into a single objective. The function

*f*(*x*) = max(*F*_{1}(*x*),...,*F*_{j}(*x*))

is a single objective function to minimize. However, it is not differentiable, and Optimization Toolbox objectives are required to be smooth. Therefore, the minimax problem is formulated as a smooth goal attainment problem.

[1] Brayton, R. K., S. W. Director, G. D. Hachtel, and
L. Vidigal, “A New Algorithm for Statistical Circuit Design Based on
Quasi-Newton Methods and Function Splitting,” *IEEE Transactions on
Circuits and Systems*, Vol. CAS-26, pp. 784-794, Sept. 1979.

[2] Fleming, P.J. and A.P. Pashkevich, *Computer
Aided Control System Design Using a Multi-Objective Optimisation
Approach*, Control 1985 Conference, Cambridge, UK, pp. 174-179.

[3] Gembicki, F.W., “Vector Optimization for Control with Performance and Parameter Sensitivity Indices,” Ph.D. Dissertation, Case Western Reserve Univ., Cleveland, OH, 1974.

[4] Grace, A.C.W., “Computer-Aided Control System Design Using Optimization Techniques,” Ph.D. Thesis, University of Wales, Bangor, Gwynedd, UK, 1989.

[5] Han, S.P., “A Globally Convergent Method For
Nonlinear Programming,” *Journal of Optimization Theory and
Applications*, Vol. 22, p. 297, 1977.

[6] Madsen, K. and H.
Schjaer-Jacobsen, “Algorithms for Worst Case Tolerance Optimization,”
*IEEE Trans. of Circuits and Systems*, Vol. CAS-26, Sept.
1979.

[7] Powell, M.J.D., “A Fast Algorithm for Nonlinear
Constrained Optimization Calculations,” *Numerical
Analysis*, ed. G.A. Watson, *Lecture Notes in
Mathematics*, Vol. 630, Springer Verlag, 1978.