Language Reference

NLPTR Call

nonlinear optimization by trust region method

CALL NLPTR( rc, xr, "fun", x0 <,opt, blc, tc, par, "ptit", "grd", "hes">);

See "Nonlinear Optimization and Related Subroutines" for a listing of all NLP subroutines. See Chapter 11, "Nonlinear Optimization Examples," for a description of the inputs to and outputs of all NLP subroutines.

The NLPTR subroutine is a trust-region method that uses the gradient g^{(k)} = \nabla f(x^{(k)}) and Hessian matrix G^{(k)} = \nabla^2 f(x^{(k)}). It requires that the objective function f=f(x) has continuous first- and second-order derivatives inside the feasible region.

The n × n Hessian matrix G contains the second derivatives of the objective function f with respect to the parameters x_1, \ldots, x_n, as follows:

G(x) = \nabla^2 f(x) = \left( \frac{\partial^2 f}{\partial x_j \partial x_k} \right)
The trust-region method works by optimizing a quadratic approximation to the nonlinear objective function within a hyperelliptic trust region. This trust region has a radius, \Delta, that constrains the step length and is adjusted according to the quality of the quadratic approximation. The implementation is based on Dennis, Gay, and Welsch (1981), Gay (1983), and Moré and Sorensen (1983).
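In the trust-region scheme described in these references, each iteration computes a step s by approximately minimizing the quadratic model of f over the trust region. The following display is a generic sketch of that subproblem; the exact step scaling used by NLPTR is not reproduced here:

\min_{s} \; m^{(k)}(s) = f(x^{(k)}) + (g^{(k)})^T s + \frac{1}{2} s^T G^{(k)} s
\quad \mbox{subject to} \quad \| s \| \leq \Delta

The candidate point x^{(k)} + s is accepted, and \Delta is enlarged or reduced, according to how well the reduction predicted by m^{(k)} agrees with the actual reduction in f.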

Note that finite difference approximations for second-order derivatives using only function calls are computationally very expensive. If you specify first-order derivatives analytically with the "grd" module argument, you can drastically reduce the computation time for numerical second-order derivatives. Computing the finite difference approximation for the Hessian matrix G generally uses only n calls of the module that computes the gradient analytically.
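For example, the following sketch extends the earlier one by adding an analytic gradient module for the Rosenbrock function and passing it through the "grd" argument; only the Hessian is then approximated by finite differences of the gradient. The trailing grd= form follows the calling convention shown in Chapter 11.

   start G_ROSEN(x);                /* analytic gradient: returns a 1 x n row vector */
      g = j(1, 2, 0);
      g[1] = -200 * x[1] * (x[2] - x[1] * x[1]) - (1 - x[1]);
      g[2] =  100 * (x[2] - x[1] * x[1]);
      return(g);
   finish G_ROSEN;

   call nlptr(rc, xres, "F_ROSEN", x0, opt) grd="G_ROSEN";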

The NLPTR method performs well for small- to medium-sized problems, and it does not need many function, gradient, and Hessian calls. However, if the gradient is not specified analytically by using the "grd" argument, or if evaluating the Hessian module specified by the "hes" argument is computationally expensive, one of the (dual) quasi-Newton or conjugate gradient algorithms may be more efficient.
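For instance, assuming the F_ROSEN and G_ROSEN modules from the sketches above, switching to the quasi-Newton subroutine NLPQN keeps the same calling pattern:

   call nlpqn(rc, xres, "F_ROSEN", x0, opt) grd="G_ROSEN";   /* quasi-Newton: no Hessian module needed */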

In addition to the standard iteration history, the NLPTR subroutine prints the following information:

For an example of the use of the NLPTR subroutine, see "Unconstrained Rosenbrock Function."


Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.