Language Reference

RZLIND Call

Computes rank-deficient linear least-squares solutions, complete orthogonal factorizations, and Moore-Penrose inverses

CALL RZLIND( lindep, rup, bup, r<, sing><, b>);

The RZLIND subroutine returns the following values:
lindep
is a scalar giving the number of linear dependencies that are recognized in R (number of zeroed rows in rup[n,n]).
rup
is the updated n × n upper triangular matrix R, containing zero rows in place of the rows whose diagonal elements in the original R were recognized as zero.
bup
is the n × p matrix B of right-hand sides that is updated simultaneously with R. If b is not specified, bup is not accessible.

The inputs to the RZLIND subroutine are as follows:

r
specifies the n × n upper triangular matrix R. Only the upper triangle of r is used; the lower triangle may contain any information.
sing
is an optional scalar specifying a relative singularity criterion for the diagonal elements of R. The diagonal element r_{ii} is considered zero if r_{ii} \leq {sing} \Vert r_i \Vert, where \Vert r_i \Vert is the Euclidean norm of column r_i of R. If the value provided for sing is not positive, the default value sing = 1000\epsilon is used, where \epsilon is the relative machine precision.
b
specifies the optional n × p matrix B of right-hand sides that are to be updated or downdated simultaneously with R.

The singularity test used in the RZLIND subroutine is a relative test using the Euclidean norms of the columns ri of R. The diagonal element rii is considered as nearly zero (and the i th row is zeroed out) if the following test is true:
r_{ii} \leq {sing} \Vert r_i \Vert, { where } \Vert r_i \Vert = \sqrt{r_i^' r_i}
Providing an argument sing \leq 0 is the same as omitting the argument sing in the RZLIND call. In this case, the default is sing = 1000\epsilon, where \epsilon is the relative machine precision. If R is computed by the QR decomposition A = QR, then the Euclidean norm of column i of R is the same (except for rounding errors) as the Euclidean norm of column i of A.
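As a sketch of how this test behaves (the matrix r and the names tol, norms, and small below are illustrative and not part of the RZLIND interface), you can carry out the relative singularity test by hand with elementwise operations:
   proc iml;
      /* Sketch: apply the relative singularity test by hand.       */
      /* The matrix r and the names tol, norms are illustrative.    */
      r     = {4 2     2,
               0 1e-14 1,
               0 0     3};
      eps   = constant('maceps');     /* relative machine precision */
      tol   = 1000 # eps;             /* default value of sing      */
      norms = sqrt(r[##,]);           /* Euclidean column norms     */
      small = (vecdiag(r)` <= tol # norms);
      print small;          /* 1 marks rows that RZLIND would zero  */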

Consider the following possible application of the RZLIND subroutine. Assume that you want to compute the upper triangular Cholesky factor R of the n × n positive semidefinite matrix A' A,
A^' A = R^' R, { where } A \in {\cal R}^{m \times n}, {\rm rank}(A) = r, r \leq n \leq m
The Cholesky factor R of a positive definite matrix A' A is unique (with the exception of the sign of its rows). However, the Cholesky factor of a positive semidefinite (singular) matrix A' A can have many different forms.

In the following example, A is a 12 × 8 matrix with linearly dependent columns a1 = a2 + a3 + a4 and a1 = a5 + a6 + a7, so that r = 6, n = 8, and m = 12.
   proc iml;
      a = {1 1 0 0 1 0 0,
           1 1 0 0 1 0 0,
           1 1 0 0 0 1 0,
           1 1 0 0 0 0 1,
           1 0 1 0 1 0 0,
           1 0 1 0 0 1 0,
           1 0 1 0 0 1 0,
           1 0 1 0 0 0 1,
           1 0 0 1 1 0 0,
           1 0 0 1 0 1 0,
           1 0 0 1 0 0 1,
           1 0 0 1 0 0 1};
      a = a || uniform(j(12,1,1));
      aa = a` * a;
      m = nrow(a); n = ncol(a);
Applying the ROOT function to the coefficient matrix A' A of the normal equations,
   r1 = root(aa);
   ss1 = ssq(aa - r1` * r1);
   print ss1 r1 [format=best6.];
generates an upper triangular matrix R1 where linearly dependent rows are zeroed out, and you can verify that A' A = R1' R1.

Applying the QR subroutine with column pivoting on the original matrix A yields a different result, but you can also verify A' A = R2' R2 after pivoting the rows and columns of A' A:
   ord = j(n,1,0);
   call qr(q,r2,pivqr,lindqr,a,ord);
   ss2 = ssq(aa[pivqr,pivqr] - r2` * r2);
   print ss2 r2 [format=best6.];


Using the RUPDT subroutine for stepwise updating of R by the m rows of A results in an upper triangular matrix R3 with n-r nearly zero diagonal elements. However, other elements in rows with nearly zero diagonal elements can still have significant values. The following statements verify that A' A = R3' R3:
   r3 = shape(0,n,n);
   call rupdt(rup,bup,sup,r3,a);
   r3 = rup;
   ss3 = ssq(aa - r3` * r3);
   print ss3 r3 [format=best6.];


The result R3 of the RUPDT subroutine can be transformed into the result R1 of the ROOT function by applying Givens rotations from the left to zero out the remaining significant elements of rows with small diagonal elements. Applying the RZLIND subroutine to the upper triangular result R3 of the RUPDT subroutine generates a Cholesky factor R4 with zero rows corresponding to small diagonal elements, giving the same result as the ROOT function (except for the sign of rows) if its singularity criterion recognizes the same linear dependencies.
   call rzlind(lind,r4,bup,r3);
   ss4 = ssq(aa - r4` * r4);
   print ss4 r4 [format=best6.];


Consider the rank-deficient linear least-squares problem:
\min_{x} \Vert A{x} - b \Vert^2, { where } A \in {\cal R}^{m \times n}, {\rm rank}(A) = r, r \leq n \leq m
For r=n, the optimal solution, \hat{x}, is unique; however, for r<n, the rank-deficient linear least-squares problem has many optimal solutions, each of which has the same least-squares residual sum of squares:
{ss} = (A\hat{x} - b)^'(A\hat{x} - b)
The solution of the full-rank problem, r = n, is illustrated in the QR call. The following list shows several solutions to the singular problem. This example uses the 12 × 8 matrix from the preceding section and generates a new column vector b. The vector b and the matrix A are shown in the output.
   b = uniform(j(12,1,1));
   ab = a` * b;
   print b a [format=best6.];


Each entry in the following list solves the rank-deficient linear least-squares problem. Note that while each method minimizes the residual sum of squares, not all of the given solutions are of minimum Euclidean length.
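For instance (a sketch only, reusing a, b, aa, ab, n, and r1 from the preceding statements; the names id, x1, and r5 are illustrative), the zero rows that RZLIND produces mark the dependent columns, and solving the normal equations restricted to the remaining independent subset gives one basic least-squares solution with zero coefficients in the dependent positions:
   /* Sketch: one basic solution to the rank-deficient problem.    */
   /* Dependent columns (zero rows of r5) get zero coefficients.   */
   call rzlind(lind,r5,bup,r1);
   id = loc(vecdiag(r5) ^= 0);      /* indices of independent columns */
   x1 = shape(0,n,1);
   x1[id] = solve(aa[id,id], ab[id]); /* normal equations on subset   */
   ss = ssq(a * x1 - b);
   print ss x1 [format=best6.];
This solution minimizes the residual sum of squares but is in general not the solution of minimum Euclidean length.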

You can use various orthogonal methods to compute the Moore-Penrose inverse A^- of a rectangular matrix A. The entries in the following list find the Moore-Penrose inverse of the matrix A shown on this page.
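As a minimal sketch (the GINV function computes the Moore-Penrose inverse directly; the residual checks of two of the four Penrose conditions are illustrative), the following statements obtain A^- and the minimum Euclidean length least-squares solution:
   /* Sketch: Moore-Penrose inverse via the GINV function and      */
   /* checks of two of the four Penrose conditions.                */
   ga = ginv(a);
   x  = ga * b;                /* minimum Euclidean length solution */
   c1 = ssq(a * ga * a - a);   /* A A- A  = A   */
   c2 = ssq(ga * a * ga - ga); /* A- A A- = A-  */
   print c1 c2 x [format=best6.];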


Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.