The NLP Procedure

Termination Criteria

All optimization techniques stop iterating at x^{(k)} if at least one of a set of termination criteria is satisfied. PROC NLP also terminates if the point x^{(k)} is fully constrained by n linearly independent active linear or boundary constraints, and all Lagrange multiplier estimates of active inequality constraints are greater than a small negative tolerance.
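As an illustration, the following sketch (the problem itself is made up for this example and is not taken from the PROC NLP documentation) sets up a two-dimensional problem whose solution is fully constrained by n = 2 active constraints, one boundary constraint and one linear constraint, so the procedure can stop there as soon as the Lagrange multiplier check succeeds:

   proc nlp tech=quanew;
      min f;
      decvar x1 = .5, x2 = .5;
      bounds x1 >= 0;           /* boundary constraint */
      lincon x1 + x2 <= 1;      /* linear constraint   */
      /* linear objective: the optimum (0,1) is a vertex at which
         both constraints are active, so x* is fully constrained  */
      f = -x1 - 2*x2;
   run;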

Since the Nelder-Mead simplex algorithm does not use derivatives, no termination criterion based on the gradient of the objective function is available for it. Powell's COBYLA algorithm uses only one additional termination criterion: COBYLA is a trust-region algorithm that sequentially reduces the radius \rho of a spherical trust region from the start radius \rho_{beg} = INSTEP to the final radius \rho_{end} = ABSXTOL. The default value is \rho_{end} = 1e-4. Convergence to a small value of \rho_{end} (high precision) may require many calls of the function and constraint modules and may result in numerical problems.
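A sketch of a COBYLA run with explicit start and final trust-region radii (the objective and constraint functions here are illustrative, not from the original text):

   proc nlp tech=cobyla instep=0.5 absxtol=1e-6;
      min f;
      decvar x1 = .5, x2 = .5;
      nlincon c1 >= 0;                  /* nonlinear constraint     */
      f  = (x1 - 2)**2 + (x2 - 1)**2;   /* objective function       */
      c1 = 1 - x1*x1 - x2*x2;           /* feasible set: unit disk  */
   run;

Specifying ABSXTOL=1e-6 instead of the default 1e-4 asks for two more digits of precision in x, at the cost of additional function and constraint evaluations.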

In some applications, the small default value of the ABSGCONV= criterion is too difficult for some of the optimization techniques to satisfy. This occurs most often when finite-difference approximations of derivatives are used.
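In such cases, one remedy is to relax the criterion explicitly. A sketch (the tolerance value and the objective function are illustrative):

   /* forward-difference derivatives combined with a relaxed
      absolute gradient tolerance                             */
   proc nlp tech=quanew fd=forward absgconv=1e-4;
      min f;
      decvar x1 = -1, x2 = 2;
      f = (x1 - 1)**4 + (x1 - x2)**2;
   run;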

The default setting for the GCONV= option sometimes leads to early termination far from the location of the optimum. This is especially true for the special form of this criterion used by the CONGRA optimization technique.
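If CONGRA stops too early, one option is to tighten or switch off the relative gradient criterion and rely on the absolute one instead. A sketch, under the assumption (labeled as such) that GCONV=0 disables the criterion; the objective is the Rosenbrock-type least squares function:

   /* assumption: GCONV=0 turns the relative gradient criterion off,
      leaving termination to ABSGCONV= and the iteration limits      */
   proc nlp tech=congra gconv=0 absgconv=1e-8;
      min f;
      decvar x1 = -1.2, x2 = 1;
      f1 = 10 * (x2 - x1 * x1);
      f2 = 1 - x1;
      f = .5 * (f1 * f1 + f2 * f2);
   run;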

The QUANEW algorithm for nonlinearly constrained optimization does not monotonically reduce either the value of the objective function or some kind of merit function that combines objective and constraint functions. Furthermore, the algorithm uses the watchdog technique with backtracking (Chamberlain et al. 1982). Therefore, no termination criteria are implemented that are based on the values (x or f) of successive iterations. In addition to the criteria used by all optimization techniques, three more termination criteria are currently available. They are based on satisfying the Karush-Kuhn-Tucker conditions (refer to the section "Criteria for Optimality"), which require that the gradient of the Lagrange function be zero at the optimal point (x^*,\lambda^*):

\nabla_x L(x^*,\lambda^*) = 0
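Written out for a Lagrange function of the common form L(x,\lambda) = f(x) + \sum_{i=1}^{m} \lambda_i c_i(x) (a sketch using a standard textbook sign convention; the procedure's own definition of L appears in the "Criteria for Optimality" section), with f the objective function and c_1,\ldots,c_m the active constraints, this condition reads

\nabla_x L(x^*,\lambda^*) = \nabla f(x^*) + \sum_{i=1}^{m} \lambda_i^* \nabla c_i(x^*) = 0

so QUANEW can measure closeness to optimality by how nearly the gradient of the objective function is a linear combination of the active constraint gradients.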

