Nonlinear Optimization Examples

Finite Difference Approximations of Derivatives

If the optimization technique needs first- or second-order derivatives and you do not specify the corresponding IML modules "grd", "hes", "jac", or "jacnlc", the derivatives are approximated by finite difference formulas using only calls of the module "fun". If the optimization technique needs second-order derivatives and you specify the "grd" module but not the "hes" module, the subroutine approximates the second-order derivatives by finite differences using n or 2n calls of the "grd" module.
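The n-gradient-call Hessian approximation mentioned above can be sketched outside of IML. The following Python function is an illustration, not the IML implementation: it builds a forward-difference Hessian from a user-supplied gradient with one extra gradient call per coordinate, then symmetrizes the result.

```python
import math

def hessian_from_grad(grd, x, eta=None):
    """Forward-difference Hessian from a gradient function.

    Mirrors the n-gradient-call scheme described in the text:
    one extra gradient evaluation per coordinate, followed by
    symmetrization of the raw difference quotients.
    """
    n = len(x)
    if eta is None:
        eta = 2.0 ** -52                      # double-precision machine epsilon
    # relative step per coordinate (a common heuristic)
    h = [math.sqrt(eta) * (1.0 + abs(xj)) for xj in x]
    g0 = grd(x)
    H = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x)
        xp[j] += h[j]
        gj = grd(xp)
        for i in range(n):
            H[i][j] = (gj[i] - g0[i]) / h[j]
    # the raw quotients need not be symmetric, so symmetrize
    return [[0.5 * (H[i][j] + H[j][i]) for j in range(n)] for i in range(n)]
```

For example, with f(x) = x1^2 + 3 x1 x2 the gradient is (2 x1 + 3 x2, 3 x1) and the exact Hessian is [[2, 3], [3, 0]], which the sketch recovers to rounding accuracy.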

The eighth element of the opt argument specifies the type of finite difference approximation used to compute first- or second-order derivatives and whether the finite difference intervals, h, should be computed by an algorithm of Gill, Murray, Saunders, and Wright (1983). The value of opt[8] is a two-digit integer, ij.

Forward Difference Approximations
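For first-order derivatives, the standard forward-difference approximation (requiring n additional calls of the "fun" module, one per coordinate) is

  g_i = \frac{\partial f}{\partial x_i} \approx \frac{f(x + h_i e_i) - f(x)}{h_i} ,
    i = 1, ... ,n

where e_i is the ith unit vector. Its truncation error is of order O(h_i).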

Central Difference Approximations
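The corresponding central-difference approximation (requiring 2n additional calls of the "fun" module) is

  g_i = \frac{\partial f}{\partial x_i} \approx \frac{f(x + h_i e_i) - f(x - h_i e_i)}{2 h_i} ,
    i = 1, ... ,n

with a smaller truncation error of order O(h_i^2).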

The step sizes h_j, j = 1, ... ,n, are defined as follows:

For the forward-difference approximation of first-order derivatives that use only function calls, and for second-order derivatives that use only gradient calls, h_j = \sqrt{\eta_j} (1 + |x_j|).

For the forward-difference approximation of second-order derivatives that use only function calls, and for all central-difference formulas, h_j = \sqrt[3]{\eta_j} (1 + |x_j|).

If the algorithm of Gill, Murray, Saunders, and Wright (1983) is not used to compute \eta_j, a constant value \eta = \eta_j is used; its value depends on par[8].
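The usual step-size recipe h_j = \eta^p (1 + |x_j|), with p = 1/2 for forward differences and p = 1/3 for central differences, can be sketched in Python (an illustration only; the constant \eta here plays the role of the value IML derives from par[8]):

```python
import math

def fd_step_sizes(x, order="first", eta=2.0 ** -52):
    """Relative finite-difference step sizes h_j = eta**p * (1 + |x_j|).

    p = 1/2 for first-order forward differences,
    p = 1/3 for central differences (the usual textbook choices).
    """
    p = 0.5 if order == "first" else 1.0 / 3.0
    return [eta ** p * (1.0 + abs(xj)) for xj in x]
```

Scaling by 1 + |x_j| keeps the step relative to the size of each coordinate, so that components of very different magnitudes are perturbed proportionally.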

If central difference formulas are not specified, the optimization algorithm automatically switches from the forward-difference formula to the corresponding central-difference formula during the iteration process when either of two internal criteria is satisfied.

The algorithm of Gill, Murray, Saunders, and Wright (1983) that computes the finite difference intervals h_j can be very expensive in the number of function calls it uses. If this algorithm is required, it is performed twice: once before the optimization process starts and once after it terminates.
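The accuracy gain from switching to central differences can be seen in a small Python comparison (illustrative only, not IML code): the central formula's O(h^2) truncation error is markedly smaller than the forward formula's O(h) error.

```python
def forward_diff(f, x, h):
    # one-sided difference: truncation error O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # two-sided difference: truncation error O(h**2)
    return (f(x + h) - f(x - h)) / (2.0 * h)

f = lambda x: x ** 4          # f'(x) = 4 * x**3
x0, h = 1.0, 1e-4
exact = 4.0
err_fwd = abs(forward_diff(f, x0, h) - exact)
err_cen = abs(central_diff(f, x0, h) - exact)
```

Here the forward error is roughly 6e-4 while the central error is roughly 4e-8, at the cost of one extra function evaluation per component.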

Many applications need considerably more time to compute second-order derivatives than first-order derivatives. In such cases, you should use a quasi-Newton or conjugate gradient technique, both of which require only first-order derivatives.

If you specify a vector c of nc nonlinear constraints with the "nlc" module but do not specify the "jacnlc" module, the first-order formulas can be used to compute finite difference approximations of the nc × n Jacobian matrix of the nonlinear constraints:

\left( \nabla c_i \right) = \left( \frac{\partial c_i}{\partial x_j} \right) , \quad i = 1, \ldots, nc, \; j = 1, \ldots, n
You can specify the number of accurate digits in the constraint evaluations with par[9]. This specification also defines the step sizes h_j, j = 1, ... ,n.
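A forward-difference Jacobian of the constraint vector can be sketched as follows (a Python illustration of the idea, not the IML implementation; the relative step h_j follows the same 1 + |x_j| scaling used for the gradient):

```python
def fd_jacobian(c, x, h=1e-7):
    """Forward-difference approximation of the nc x n Jacobian
    of a constraint function c: R^n -> R^nc."""
    c0 = c(x)
    nc, n = len(c0), len(x)
    J = [[0.0] * n for _ in range(nc)]
    for j in range(n):
        xp = list(x)
        hj = h * (1.0 + abs(x[j]))   # relative step for coordinate j
        xp[j] += hj
        cj = c(xp)                   # one constraint evaluation per column
        for i in range(nc):
            J[i][j] = (cj[i] - c0[i]) / hj
    return J
```

For c(x) = (x1^2 + x2, x1 x2) at x = (1, 2), the exact Jacobian is [[2, 1], [2, 1]], which the sketch reproduces to within the O(h) truncation error.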

Note: If you cannot specify analytic derivatives and the finite-difference approximations provided by the subroutines are not good enough to solve your optimization problem, you may be able to implement better finite-difference approximations yourself and supply them with the "grd", "hes", "jac", and "jacnlc" module arguments.


Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.