The TSCSREG Procedure

Da Silva Method (Variance-Component Moving Average Model)

Suppose you have a sample of observations at T time points on each of N cross-sectional units. The Da Silva method assumes that the observed value of the dependent variable at the tth time point on the ith cross-sectional unit can be expressed as

y_{it} = x_{it}'\beta + a_i + b_t + e_{it}, \quad i=1,\ldots,N; \; t=1,\ldots,T

where

x_{it}' = (x_{it1}, \ldots, x_{itp}) is a vector of explanatory variables for the tth time point and ith cross-sectional unit

\beta = (\beta_1, \ldots, \beta_p)' is the vector of parameters

a_i is a time-invariant, cross-sectional unit effect

b_t is a cross-sectionally invariant time effect

e_{it} is a residual effect unaccounted for by the explanatory variables and the specific time and cross-sectional unit effects

Since the observations are arranged first by cross sections, then by time periods within cross sections, these equations can be written in matrix notation as

y = X\beta + u

where

u = (a \otimes 1_T) + (1_N \otimes b) + e
y = (y_{11}, \ldots, y_{1T}, y_{21}, \ldots, y_{NT})'
X = (x_{11}, \ldots, x_{1T}, x_{21}, \ldots, x_{NT})'
a = (a_1, \ldots, a_N)'
b = (b_1, \ldots, b_T)'
e = (e_{11}, \ldots, e_{1T}, e_{21}, \ldots, e_{NT})'

Here 1_N and 1_T are N×1 and T×1 vectors with all elements equal to 1, and \otimes denotes the Kronecker product.
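To make the stacked Kronecker layout of u concrete, here is a minimal SAS/IML sketch that assembles the composite error for small hypothetical dimensions (N, T, and all standard deviations below are illustrative, not taken from the text). In IML the operator @ computes the Kronecker product.

proc iml;
   /* Hypothetical dimensions and standard deviations, for illustration only */
   N = 3;  T = 4;
   call randseed(12345);
   a = j(N, 1, .);    call randgen(a, "Normal", 0, 1);    /* cross-sectional unit effects */
   b = j(T, 1, .);    call randgen(b, "Normal", 0, 0.7);  /* time effects                 */
   e = j(N*T, 1, .);  call randgen(e, "Normal", 0, 0.5);  /* residual effects             */
   one_T = j(T, 1, 1);                  /* T x 1 vector of ones */
   one_N = j(N, 1, 1);                  /* N x 1 vector of ones */
   u = (a @ one_T) + (one_N @ b) + e;   /* NT x 1: (a kron 1_T) + (1_N kron b) + e */
   print u;
quit;

Because a @ one_T repeats each a_i for T consecutive rows while one_N @ b cycles b_1, ..., b_T within each cross section, the result matches the cross-section-major ordering of y above.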

It is assumed that

  1. x_{it} is a sequence of nonstochastic, known p×1 vectors in \Re^p whose elements are uniformly bounded in \Re^p. The matrix X has full column rank p.
  2. \beta is a p×1 constant vector of unknown parameters.
  3. a is a vector of uncorrelated random variables such that E(a_i) = 0 and var(a_i) = \sigma^2_a, \sigma^2_a > 0, i=1,\ldots,N.
  4. b is a vector of uncorrelated random variables such that E(b_t) = 0 and var(b_t) = \sigma^2_b, \sigma^2_b > 0, t=1,\ldots,T.
  5. e_i = (e_{i1}, \ldots, e_{iT})' is a sample of a realization of a finite moving average time series of order m < T-1 for each i; hence,

     e_{it} = \alpha_0 \epsilon_t + \alpha_1 \epsilon_{t-1} + \ldots + \alpha_m \epsilon_{t-m}, \quad t=1,\ldots,T; \; i=1,\ldots,N

     where \alpha_0, \alpha_1, \ldots, \alpha_m are unknown constants such that \alpha_0 \ne 0 and \alpha_m \ne 0, and \{\epsilon_j\}_{j=-\infty}^{j=\infty} is a white noise process, that is, a sequence of uncorrelated random variables with E(\epsilon_t) = 0, E(\epsilon^2_t) = \sigma^2_\epsilon, and \sigma^2_\epsilon > 0.
  6. The sets of random variables \{a_i\}_{i=1}^N, \{b_t\}_{t=1}^T, and \{e_{it}\}_{t=1}^T for i=1,\ldots,N are mutually uncorrelated.
  7. The random terms have normal distributions: a_i \sim N(0, \sigma^2_a), b_t \sim N(0, \sigma^2_b), and \epsilon_{t-k} \sim N(0, \sigma^2_\epsilon), for i=1,\ldots,N; t=1,\ldots,T; k=1,\ldots,m.



If assumptions 1-6 are satisfied, then

E(y) = X\beta

and

var(y) = \sigma^2_a (I_N \otimes J_T) + \sigma^2_b (J_N \otimes I_T) + (I_N \otimes \Gamma_T)

where \Gamma_T is a T×T matrix with elements \gamma_{ts} as follows:

cov(e_{it}, e_{is}) =
  \begin{cases}
    \gamma(|t-s|) & \text{if } |t-s| \le m \\
    0 & \text{if } |t-s| > m
  \end{cases}

where \gamma(k) = \sigma^2_\epsilon \sum_{j=0}^{m-k} \alpha_j \alpha_{j+k} for k = |t-s|. For the definition of I_N, I_T, J_N, and J_T, see the "Fuller-Battese Method" section earlier in this chapter.
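As a quick check of this formula, the following SAS/IML sketch computes \gamma(0), \ldots, \gamma(m) for hypothetical moving average coefficients (the \alpha values and \sigma^2_\epsilon below are invented for the example, with m = 2):

proc iml;
   alpha = {1 0.6 0.3};     /* hypothetical alpha_0, alpha_1, alpha_2 */
   s2e   = 1;               /* hypothetical sigma^2_epsilon           */
   m     = ncol(alpha) - 1;
   gamma = j(1, m+1, 0);
   do k = 0 to m;           /* gamma(k) = s2e * sum_{j=0}^{m-k} alpha_j * alpha_{j+k} */
      do j = 0 to m - k;
         gamma[k+1] = gamma[k+1] + s2e * alpha[j+1] * alpha[j+k+1];
      end;
   end;
   print gamma;             /* gamma(0)=1.45, gamma(1)=0.78, gamma(2)=0.3 */
quit;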

The covariance matrix, denoted by V, can be written in the form

V = \sigma^2_a (I_N \otimes J_T) + \sigma^2_b (J_N \otimes I_T) + \sum_{k=0}^m \gamma(k) (I_N \otimes \Gamma^{(k)}_T)

where \Gamma^{(0)}_T = I_T and, for k=1,\ldots,m, \Gamma^{(k)}_T is a band matrix whose kth off-diagonal elements are 1's and all other elements are 0's.
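Continuing the hypothetical values from the previous sketches, the band matrices and the resulting V can be assembled in SAS/IML as follows (again only a sketch; the variances and autocovariances are made-up numbers):

proc iml;
   N = 3;  T = 4;  m = 2;            /* m < T-1 as required                    */
   s2a = 1.0;  s2b = 0.49;           /* hypothetical sigma^2_a, sigma^2_b      */
   gamma = {1.45 0.78 0.3};          /* gamma(0..2) from the previous sketch   */
   /* Gamma_T^(k): 1's on the kth off-diagonals, 0's elsewhere; Gamma_T^(0) = I_T */
   start band(T, k);
      if k = 0 then return(I(T));
      G = j(T, T, 0);
      do t = 1 to T - k;
         G[t, t+k] = 1;
         G[t+k, t] = 1;
      end;
      return(G);
   finish;
   V = s2a * (I(N) @ j(T, T, 1)) + s2b * (j(N, N, 1) @ I(T));
   do k = 0 to m;
      V = V + gamma[k+1] * (I(N) @ band(T, k));
   end;
   print V;                          /* NT x NT covariance matrix */
quit;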

Thus, the covariance matrix of the vector of observations y has the form

var(y) = \sum_{k=1}^{m+3} \nu_k V_k

where

\nu_1 = \sigma^2_a, \quad V_1 = I_N \otimes J_T
\nu_2 = \sigma^2_b, \quad V_2 = J_N \otimes I_T
\nu_k = \gamma(k-3), \quad V_k = I_N \otimes \Gamma^{(k-3)}_T, \quad k=3,\ldots,m+3

The estimator of \beta is a two-step GLS-type estimator, that is, GLS with the unknown covariance matrix replaced by a suitable estimator of V. It is obtained by substituting Seely estimates for the scalar multiples \nu_k, k = 1, 2, \ldots, m+3.
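A minimal SAS/IML fragment for the second step, with toy data and an identity stand-in for the estimated covariance matrix (real use would substitute the Seely-based estimate of V), looks like this:

proc iml;
   N = 3;  T = 4;  p = 2;
   call randseed(12345);
   X = j(N*T, p, .);  call randgen(X, "Normal", 0, 1);   /* toy regressors */
   y = j(N*T, 1, .);  call randgen(y, "Normal", 0, 1);   /* toy response   */
   Vhat = I(N*T);          /* stand-in for the estimated covariance matrix */
   Vinv = inv(Vhat);
   bhat = inv(t(X) * Vinv * X) * t(X) * Vinv * y;   /* GLS with V replaced by Vhat */
   print bhat;
quit;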

Seely (1969) presents a general theory of unbiased estimation when the choice of estimators is restricted to finite dimensional vector spaces, with a special emphasis on quadratic estimation of functions of the form \sum_{i=1}^n \delta_i \nu_i.

The parameters \nu_i (i=1,\ldots,n) are associated with a linear model E(y) = X\beta with covariance matrix \sum_{i=1}^n \nu_i V_i, where V_i (i=1,\ldots,n) are real symmetric matrices. The method is also discussed by Seely (1970a, 1970b) and Seely and Zyskind (1971). Seely and Soong (1971) consider the MINQUE principle, using an approach along the lines of Seely (1969).
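In practice, the Da Silva method is requested through the DASILVA option in the MODEL statement of PROC TSCSREG, with the M= option giving the order m of the moving average process. A sketch follows; the data set and variable names are hypothetical, and the data must be sorted by cross section and by time within cross section:

proc tscsreg data=panel;
   id csid tsid;                    /* cross-section and time identifiers */
   model y = x1 x2 / dasilva m=2;   /* Da Silva method with MA order m=2  */
run;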

