The NLP Procedure

Computational Problems

First Iteration Overflows

If you use bad initial values for the parameters, the computation of the value of the objective function (and its derivatives) can lead to arithmetic overflows in the first iteration. The line-search algorithms that work with cubic extrapolation are especially sensitive to arithmetic overflows. If an overflow occurs with an optimization technique that uses line search, you can use the INSTEP= option to reduce the length of the first trial step during the line search of the first five iterations, or use the DAMPSTEP= or MAXSTEP= option to restrict the step length \alpha in subsequent iterations. If an arithmetic overflow occurs in the first iteration of the trust-region, double dogleg, or Levenberg-Marquardt algorithm, you can use the INSTEP= option to reduce the default trust region radius of the first iteration (see the sketch following this list). You can also change the minimization technique or the line-search method. If none of these methods helps, consider the following actions:

- scale the parameters
- provide better initial values
- use boundary constraints to avoid the region where overflows may happen
- change the algorithm (specified in programming statements) that computes the objective function
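For illustration, here is a minimal sketch of the INSTEP= remedy for a trust-region run; the Rosenbrock-type objective, the deliberately poor starting values, and the radius 0.01 are assumptions for this example, not recommendations:

   proc nlp tech=trureg instep=0.01;
      min f;                                   /* minimize the objective f */
      parms x1 = 100, x2 = -100;               /* deliberately poor starting values */
      f = 100*(x2 - x1**2)**2 + (1 - x1)**2;   /* Rosenbrock-type function */
   run;

With a line-search technique such as TECH=QUANEW, the same INSTEP= value would instead shorten the first trial step of the line search.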

Problems in Evaluating the Objective Function

The starting point x^{(0)} must be a point at which all the functions involved in your problem can be evaluated. However, during optimization the optimizer may iterate to a point x^{(k)} where the objective function or nonlinear constraint functions and their derivatives cannot be evaluated. If you can identify the problematic region, you can prevent the algorithm from reaching it by adding another constraint to the problem. Another possibility is to modify the objective function so that it returns a large, undesired function value in that region. As a result, the optimization algorithm reduces the step length and stays closer to the point that was evaluated successfully in the previous iteration. For more information, refer to the section "Missing Values in Program Statements".
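As a sketch of the constraint-based remedy, a boundary constraint can keep the iterates away from the region where evaluation fails; the objective (which is undefined for nonpositive x1) and the bound value 1E-6 are assumptions for this example:

   proc nlp tech=newrap;
      min f;
      parms x1 = 1, x2 = 1;
      bounds x1 >= 1e-6;            /* keep the LOG argument strictly positive */
      f = -log(x1) + (x1 - 2)**2 + x2**2;
   run;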

Problems with Quasi-Newton Methods for Nonlinear Constraints

The sequential quadratic programming algorithm in QUANEW, which is used for solving nonlinearly constrained problems, can have problems updating the Lagrange multiplier vector \mu. This usually results in very high values of the Lagrange function and in watchdog restarts indicated in the iteration history. If this happens, there are three actions you can try.
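For reference, here is a minimal sketch of the class of problem in question, a nonlinearly constrained minimization solved with TECH=QUANEW; the objective, the constraint, and the starting values are assumptions for this example:

   proc nlp tech=quanew;
      min f;
      parms x1 = 2, x2 = 2;
      nlincon c1 >= 0;              /* nonlinear constraint c1 = x1*x2 - 1 >= 0 */
      f  = (x1 - 1)**2 + (x2 - 2.5)**2;
      c1 = x1*x2 - 1;
   run;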

Other Convergence Difficulties

There are a number of things to try if the optimizer fails to converge.

Convergence to Stationary Point

The (projected) gradient at a stationary point is zero, and that translates into a zero step size, so the stopping criteria are satisfied even though the point may be a saddle point rather than an optimum.

There are two ways to avoid this situation:

- Use the DECVAR statement to specify a grid of feasible starting points.
- Use the OPTCHECK= option to avoid terminating at the stationary point.
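Here is a minimal sketch of the second remedy; the objective, the saddle-point starting values, and the OPTCHECK= radius are assumptions for this example. Started exactly at the saddle point (0,0), the gradient is zero and the optimizer would otherwise stop immediately, but OPTCHECK= examines points in a ball around the terminating point so that a better point can be detected:

   proc nlp tech=newrap optcheck=0.1;
      min f;
      decvar x1 = 0, x2 = 0;        /* (0,0) is a stationary saddle point of f */
      f = x1**2 - x2**2 + x2**4;
   run;

For the first remedy, a grid of starting points can be written directly in the DECVAR statement, for example decvar x1 = -2 to 2 by 1, x2 = -2 to 2 by 1;.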

The signs of the eigenvalues of the (reduced) Hessian matrix contain information regarding a stationary point: if all eigenvalues are positive, the Hessian is positive definite and the point is a local minimum; if all eigenvalues are negative, the Hessian is negative definite and the point is a local maximum; if eigenvalues of both signs occur, the point is a saddle point; and if some eigenvalues are zero, the Hessian is only semidefinite and the classification is inconclusive.

Precision of Solution

In some applications, PROC NLP can produce parameter estimates that are not precise enough. Usually this means that the procedure terminated too early at a point too far from the optimal point. The termination criteria define the size of the termination region around the optimal point; any point inside this region can be accepted for terminating the optimization process. The default values of the termination criteria are set to provide a reasonable compromise between the computational effort (computer time) and the precision of the computed estimates for the most common applications. However, in some circumstances the default values of the termination criteria specify a region that is either too large or too small.

If the termination region is too large, it can contain points with low precision. In such cases, you should inspect the log or listing output to find the message stating which termination criterion ended the optimization process. In many applications, you can then obtain a solution with higher precision simply by using the old parameter estimates as starting values in a subsequent run in which you specify a smaller value for the termination criterion that was satisfied in the former run.
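Here is a minimal sketch of such a restart, assuming the first run stopped on the GCONV= criterion; the data set name EST1, the objective, and the criterion value are assumptions for this example:

   /* First run: save the parameter estimates */
   proc nlp outest=est1;
      min f;
      parms x1 = 0, x2 = 0;
      f = (x1 - 3)**2 + (x2 + 1)**4;
   run;

   /* Second run: restart from those estimates with a tighter criterion */
   proc nlp inest=est1 gconv=1e-10;
      min f;
      parms x1 x2;                  /* initial values are read from EST1 */
      f = (x1 - 3)**2 + (x2 + 1)**4;
   run;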

If the termination region is too small, the optimization process may take longer to find a point inside such a region, or it may not find such a point at all because of rounding errors in function values and derivatives. This can easily happen in applications where finite-difference approximations of derivatives are used and the GCONV= and ABSGCONV= termination criteria are too small to respect the rounding errors in the gradient values.
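For example, a run that uses finite-difference derivatives might relax the gradient-based criteria accordingly; the objective and the criterion values below are assumptions for this example, not recommendations:

   proc nlp tech=quanew fd gconv=1e-6 absgconv=1e-5;
      min f;                        /* FD requests finite-difference derivatives */
      parms x1 = 1, x2 = 1;
      f = exp(x1)*(4*x1**2 + 2*x2**2 + 4*x1*x2 + 2*x2 + 1);
   run;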

