Nonlinear Optimization Examples

Termination Criteria

The input argument tc specifies a vector of bounds corresponding to a set of termination criteria that are tested in each iteration. If you do not specify an IML module with the "ptit" argument, these bounds determine when the optimization process stops.

If you specify the "ptit" argument, the "tc" argument is ignored. The module specified by the "ptit" argument replaces the subroutine that is used by default to test the termination criteria. The module is called in each iteration with the current location, x, and the value, f, of the objective function at x. The module must give a return code, rc, that decides whether the optimization process is to be continued or terminated. As long as the module returns rc = 0, the optimization process continues. When the module returns rc \neq 0, the optimization process stops.
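The return-code protocol above can be sketched as follows. This is an illustrative Python sketch, not IML; the stopping rule (stall counting) and its thresholds are hypothetical choices, not documented defaults. The only part taken from the text is the contract: the module receives the current location x and objective value f each iteration, and a return code of 0 continues the optimization while any nonzero return code stops it.

```python
def ptit(x, f, _state={"best_f": float("inf"), "stalls": 0}):
    """Hypothetical termination test in the style of a "ptit" module:
    stop after 5 consecutive iterations without an improvement of at
    least 1e-8 in the objective value f.

    The mutable default argument is a simple stand-in for state that an
    IML module would keep in global symbols."""
    if _state["best_f"] - f >= 1e-8:
        _state["best_f"] = f       # record the improved objective value
        _state["stalls"] = 0
    else:
        _state["stalls"] += 1      # no meaningful progress this iteration
    return 0 if _state["stalls"] < 5 else 1   # 0 = continue, nonzero = stop
```
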

If you use the tc vector, the optimization techniques stop the iteration process when at least one of the corresponding set of termination criteria are satisfied. Table 11.3 and Table 11.4 indicate the criterion associated with each element of the tc vector. There is a default for each criterion, and if you specify a missing value for the corresponding element of the tc vector, the default value is used. You can avoid termination with respect to the ABSFTOL, ABSGTOL, ABSXTOL, FTOL, FTOL2, GTOL, GTOL2, and XTOL criteria by specifying a value of zero for the corresponding element of the tc vector.
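The defaulting and zero-disabling behavior described above can be sketched as follows (a Python sketch, not IML; the default values and the exact form of the FTOL test shown here are illustrative assumptions, with `None` standing in for IML's missing value):

```python
DEFAULTS = {"FTOL": 1e-5, "GTOL": 1e-8, "XTOL": 1e-8}  # illustrative values only

def resolve_tc(name, user_value):
    """Missing value -> documented default; any explicit value,
    including 0, is taken as given."""
    if user_value is None:
        return DEFAULTS[name]
    return user_value

def ftol_satisfied(f_old, f_new, ftol):
    """Sketch of a relative function criterion: a tolerance of zero
    disables the test entirely, so it can never cause termination."""
    if ftol == 0:
        return False
    return abs(f_old - f_new) <= ftol * max(abs(f_old), 1.0)
```
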

Table 11.3: Termination Criteria for the NLPNMS Subroutine

Index   Description
1       maximum number of iterations (MAXIT)
2       maximum number of function calls (MAXFU)
3       absolute function criterion (ABSTOL)
4       relative function criterion (FTOL)
5       relative function criterion (FTOL2)
6       absolute function criterion (ABSFTOL)
7       FSIZE value used in FTOL criterion
8       relative parameter criterion (XTOL)
9       absolute parameter criterion (ABSXTOL)
9       size of final trust-region radius \rho (COBYLA algorithm)
10      XSIZE value used in XTOL criterion

Table 11.4: Termination Criteria for Other Subroutines

Index   Description
1       maximum number of iterations (MAXIT)
2       maximum number of function calls (MAXFU)
3       absolute function criterion (ABSTOL)
4       relative gradient criterion (GTOL)
5       relative gradient criterion (GTOL2)
6       absolute gradient criterion (ABSGTOL)
7       relative function criterion (FTOL)
8       predicted function reduction criterion (FTOL2)
9       absolute function criterion (ABSFTOL)
10      FSIZE value used in GTOL and FTOL criterion
11      relative parameter criterion (XTOL)
12      absolute parameter criterion (ABSXTOL)
13      XSIZE value used in XTOL criterion

Criteria Used by All Techniques

The following termination criteria are used with all the optimization techniques; they correspond to the first three elements of the tc vector in both Table 11.3 and Table 11.4: the maximum number of iterations (MAXIT, tc[1]), the maximum number of function calls (MAXFU, tc[2]), and the absolute function criterion (ABSTOL, tc[3]).

These criteria are useful when you want to divide a time-consuming optimization problem into a series of smaller problems.

Termination Criteria for NLPNMS

Since the Nelder-Mead simplex algorithm does not use derivatives, no termination criteria are available that are based on the gradient of the objective function.

When the NLPNMS subroutine implements Powell's COBYLA algorithm, it uses only one criterion other than the three used by all the optimization techniques. The COBYLA algorithm is a trust-region method that sequentially reduces the radius, \rho, of a spherical trust region from the start radius, \rho_{beg}, which is controlled with the par[2] argument, to the final radius, \rho_{end}, which is controlled with the tc[9] argument. The default value for tc[9] is \rho_{end} = 1E-4. Convergence to small values of \rho_{end} can require many calls of the function and constraint modules and can cause numerical problems.
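The radius schedule described above can be sketched as follows (a simplified Python illustration, not IML; the reduction factor of 0.5 is an assumption for illustration, not a documented COBYLA constant):

```python
def radius_schedule(rho_beg, rho_end, factor=0.5):
    """Sketch of a COBYLA-style trust-region radius schedule: shrink the
    radius geometrically from rho_beg toward rho_end, clipping the last
    step so the sequence ends exactly at rho_end, where iteration stops."""
    radii = [rho_beg]
    rho = rho_beg
    while rho > rho_end:
        rho = max(rho * factor, rho_end)  # never shrink below rho_end
        radii.append(rho)
    return radii
```

The length of this sequence hints at why very small values of \rho_{end} are expensive: each radius level can require many function and constraint evaluations before the radius is reduced again.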

In addition to the criteria used by all techniques, the original Nelder-Mead simplex algorithm uses several other termination criteria, which are described in the following list:

Termination Criteria for Unconstrained and Linearly Constrained Techniques

Termination Criteria for Nonlinearly Constrained Techniques

The only algorithm available for nonlinearly constrained optimization other than the NLPNMS subroutine is the NLPQN subroutine, when you specify the "nlc" module argument. Unlike the other optimization methods, this method does not monotonically reduce the value of the objective function or of some merit function that combines objective and constraint functions. Instead, the algorithm uses the watchdog technique with backtracking of Chamberlain and others (1982). Therefore, no termination criteria are implemented that are based on the values of x or f in consecutive iterations. In addition to the criteria used by all optimization techniques, there are three other termination criteria available; these are based on the Lagrange function

L(x,\lambda) = f(x) - \sum_{i=1}^m \lambda_i c_i(x)
and its gradient
\nabla_x L(x,\lambda) = g(x) - \sum_{i=1}^m \lambda_i \nabla_x c_i(x)
where m denotes the total number of constraints, g=g(x) is the gradient of the objective function, and \lambda is the vector of Lagrange multipliers. The Kuhn-Tucker conditions require that the gradient of the Lagrange function is zero at the optimal point (x^*,\lambda^*), as follows:
\nabla_x L(x^*,\lambda^*) = 0
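A termination test based on this condition can be sketched as follows (a Python illustration, not IML; the tolerance name `abs_gtol` and the use of the infinity norm are assumptions for illustration):

```python
import numpy as np

def lagrange_grad(g, lam, con_jac):
    """Gradient of the Lagrange function, nabla_x L = g - sum_i lam_i * nabla_x c_i.

    g:       gradient of the objective at x, shape (n,)
    lam:     Lagrange multipliers, shape (m,)
    con_jac: constraint Jacobian with rows nabla_x c_i(x), shape (m, n)"""
    return g - con_jac.T @ lam

def kkt_satisfied(g, lam, con_jac, abs_gtol=1e-8):
    """Sketch of a Kuhn-Tucker-style stopping test: terminate when the
    gradient of the Lagrange function is (numerically) zero."""
    return bool(np.linalg.norm(lagrange_grad(g, lam, con_jac), np.inf) <= abs_gtol)
```

For example, with objective gradient g = (2, 0), a single constraint gradient (1, 0), and multiplier \lambda_1 = 2, the Lagrange gradient vanishes and the test is satisfied.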


Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.