Language Reference

NLPFDD Call

approximates derivatives by the finite-difference method

CALL NLPFDD(f, g, h, "fun", x0 <, par, "grd">);

See "Nonlinear Optimization and Related Subroutines" for a listing of all NLP subroutines. See Chapter 11, "Nonlinear Optimization Examples," for a description of the inputs to and outputs of all NLP subroutines.

The NLPFDD subroutine computes finite-difference approximations of first- and second-order derivatives of the function defined by the "fun" module, evaluated at the point x0. If any of the results cannot be computed, the subroutine returns a missing value for that result.

If the "fun" module returns a scalar, the subroutine returns the function value f, the gradient vector g, and the Hessian matrix h, all evaluated at x0. If the "fun" module returns a column vector of function values, the problem is treated as a least-squares problem, and the subroutine returns the vector of function values f, the Jacobian matrix g, and the crossproduct of the Jacobian h, evaluated at x0. The par argument is a vector of length 3; in the least-squares case, its first element specifies the number of functions returned by the "fun" module (see the second example below). If you specify a missing value in the par argument, the default value is used.

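For illustration, the following fragment is a minimal sketch of the par argument; the module name "MYFUN" and the variable names are placeholders rather than part of this example. Only the first element of par, the number of functions, is set, and the missing elements fall back to their defaults.

   parms = {2 . .};       /* two functions; missing elements use the defaults */
   call nlpfdd(fvec, jac, crpj, "MYFUN", x0, parms);
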
The NLPFDD subroutine is particularly useful for checking your analytical derivative specifications of the "grd", "hes", and "jac" modules. You can compare the results of the modules with the finite difference approximations of the derivatives of f at the point x0 to verify your specifications.

In the unconstrained Rosenbrock problem (see "Unconstrained Rosenbrock Function"), the objective function is
\[
f(x) = 50(x_2 - x_1^2)^2 + \frac{1}{2}(1 - x_1)^2
\]
Then the gradient and the Hessian, evaluated at the point x=(2,7), are
\[
g^{\prime} = \begin{bmatrix}
  \dfrac{\partial f}{\partial x_1} & \dfrac{\partial f}{\partial x_2}
\end{bmatrix}
= \begin{bmatrix}
  -200x_1(x_2 - x_1^2) - (1 - x_1) & 100(x_2 - x_1^2)
\end{bmatrix}
= \begin{bmatrix} -1199 & 300 \end{bmatrix}
\]
\[
H = \begin{bmatrix}
  \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} \\[6pt]
  \dfrac{\partial^2 f}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2}
\end{bmatrix}
= \begin{bmatrix}
  600x_1^2 - 200x_2 + 1 & -200x_1 \\
  -200x_1 & 100
\end{bmatrix}
= \begin{bmatrix} 1001 & -400 \\ -400 & 100 \end{bmatrix}
\]
The following statements define the Rosenbrock function and use the NLPFDD call to compute the gradient and the Hessian. The output is shown in Figure 17.2.
   proc iml;
      start F_ROSEN(x);
         y1 = 10. * (x[2] - x[1] * x[1]);
         y2 = 1. - x[1];
         f  = .5 * (y1 * y1 + y2 * y2);
         return(f);
      finish F_ROSEN;
      x = {2 7};
      CALL NLPFDD(crit,grad,hess,"F_ROSEN",x);
      print grad;
      print hess;


          GRAD

      -1199 300.00001

          HESS

      1000.9998 -400.0018
      -400.0018 99.999993


Figure 17.2: Finite Difference Approximations for Gradient and Hessian
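
The results above make the check described at the beginning of this section concrete: you can code a derivative analytically and compare it with the finite-difference approximation. The following statements are a minimal sketch of such a check for the gradient; the G_ROSEN module and the MAXDIFF comparison are illustrative additions rather than part of this example.

   proc iml;
      start F_ROSEN(x);
         y1 = 10. * (x[2] - x[1] * x[1]);
         y2 = 1. - x[1];
         return(.5 * (y1 * y1 + y2 * y2));
      finish F_ROSEN;

      /* illustrative analytic gradient of F_ROSEN */
      start G_ROSEN(x);
         g = j(1, 2, 0.);
         g[1] = -200. * x[1] * (x[2] - x[1] * x[1]) - (1. - x[1]);
         g[2] =  100. * (x[2] - x[1] * x[1]);
         return(g);
      finish G_ROSEN;

      x = {2 7};
      call nlpfdd(crit, fdgrad, fdhess, "F_ROSEN", x);
      angrad  = G_ROSEN(x);
      /* reshape both to 1 x 2 before comparing, in case the orientations differ */
      maxdiff = max(abs(shape(angrad,1,2) - shape(fdgrad,1,2)));
      print angrad, fdgrad, maxdiff;

If the analytic gradient is specified correctly, MAXDIFF is small, on the order of the finite-difference error already visible in Figure 17.2 (for example, 300.00001 instead of the exact value 300).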

If the Rosenbrock problem is considered from a least-squares perspective, the two functions are
\[
f_1(x) = 10(x_2 - x_1^2), \qquad f_2(x) = 1 - x_1
\]
Then the Jacobian and the crossproduct of the Jacobian, evaluated at the point x=(2,7), are
\[
J = \begin{bmatrix}
  \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} \\[6pt]
  \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2}
\end{bmatrix}
= \begin{bmatrix} -20x_1 & 10 \\ -1 & 0 \end{bmatrix}
= \begin{bmatrix} -40 & 10 \\ -1 & 0 \end{bmatrix}
\]
\[
J^{\prime}J = \begin{bmatrix}
  400x_1^2 + 1 & -200x_1 \\
  -200x_1 & 100
\end{bmatrix}
= \begin{bmatrix} 1601 & -400 \\ -400 & 100 \end{bmatrix}
\]
The following statements define the Rosenbrock problem in a least-squares framework and use the NLPFDD call to compute the Jacobian and the crossproduct matrix. Since the value of the PARMS variable, which is used for the par argument, is 2, the NLPFDD subroutine allocates memory for a least-squares problem with two functions, f_1(x) and f_2(x). The output is shown in Figure 17.3.
   proc iml;
      start F_ROSEN(x);
         y = j(2,1,0.);
         y[1] = 10. * (x[2] - x[1] * x[1]);
         y[2] = 1. - x[1];
         return(y);
      finish F_ROSEN;
      x     = {2 7};
      parms = 2;
      CALL NLPFDD(fun,jac,crpj,"F_ROSEN",x,parms);
      print jac;
      print crpj;

          JAC

        -40        10
         -1         0

          CRPJ

       1601      -400
       -400       100

Figure 17.3: Finite Difference Approximations for Jacobian
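
As a consistency check between the two figures, note that the scalar objective can be written as f(x) = \frac{1}{2}\left(f_1(x)^2 + f_2(x)^2\right), so by the standard least-squares identity its exact Hessian is
\[
H(x) = J(x)^{\prime}J(x) + \sum_{i=1}^{2} f_i(x)\,\nabla^2 f_i(x) .
\]
At x=(2,7), f_1 = 30, f_2 = -1, \nabla^2 f_1 = \begin{bmatrix} -20 & 0 \\ 0 & 0 \end{bmatrix}, and \nabla^2 f_2 = 0, so the crossproduct matrix in Figure 17.3 differs from the Hessian in Figure 17.2 only in the (1,1) entry: 1601 - 30 \cdot 20 = 1001.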

