The AUTOREG Procedure

Testing

Heteroscedasticity and Normality Tests

Portmanteau Q-Test

For nonlinear time series models, the portmanteau test statistic based on squared residuals is used to test for independence of the series (McLeod and Li 1983):

Q(q) = N(N+2)\sum_{i=1}^q \frac{r^2(i; \hat{\nu}^2_t)}{N-i}

where

r(i; \hat{\nu}^2_t) =
\frac{\sum_{t=i+1}^N (\hat{\nu}^2_t - \hat{\sigma}^2)(\hat{\nu}^2_{t-i} - \hat{\sigma}^2)}
{\sum_{t=1}^N (\hat{\nu}^2_t - \hat{\sigma}^2)^2},
\qquad
\hat{\sigma}^2 = \frac{1}{N}\sum_{t=1}^N \hat{\nu}^2_t

This Q statistic is used to test for nonlinear effects (for example, GARCH effects) in the residuals. The GARCH(p,q) process can be considered an ARMA(max(p,q),p) process; see the section "Predicting the Conditional Variance" later in this chapter. Therefore, the Q statistic calculated from the squared residuals can be used to identify the order of the GARCH process.
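As a concrete sketch, the following NumPy function (illustrative only, not PROC AUTOREG's internal code) computes Q(q) by squaring the lag-i autocorrelations of the squared residuals, following McLeod and Li (1983):

```python
import numpy as np

def mcleod_li_q(nu, q):
    """Portmanteau Q statistic on squared residuals (McLeod-Li 1983).

    nu: residual series; q: number of lags tested.
    Approximately chi-square(q) under the white-noise null.
    """
    nu2 = np.asarray(nu, dtype=float) ** 2
    N = nu2.size
    dev = nu2 - nu2.mean()                       # nu_t^2 - sigma^2_hat
    denom = np.sum(dev ** 2)
    Q = 0.0
    for i in range(1, q + 1):
        # lag-i sample autocorrelation of the squared residuals
        r_i = np.sum(dev[i:] * dev[:-i]) / denom
        Q += r_i ** 2 / (N - i)
    return N * (N + 2) * Q
```

Comparing Q(q) to chi-square(q) critical values for increasing q is the usual way to identify the GARCH order.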

Lagrange Multiplier Test for ARCH Disturbances

Engle (1982) proposed a Lagrange multiplier test for ARCH disturbances. The test statistic is asymptotically equivalent to the test used by Breusch and Pagan (1979). Engle's Lagrange multiplier test for the qth order ARCH process is written

\mathrm{LM}(q) = W'Z(Z'Z)^{-1}Z'W

where

W = \left( \frac{\hat{\nu}^2_1}{\hat{\sigma}^2}, \ldots, \frac{\hat{\nu}^2_N}{\hat{\sigma}^2} \right)'
and

Z = \left[ \begin{matrix}
1 & \hat{\nu}^2_0 & \ldots & \hat{\nu}^2_{-q+1} \\
\vdots & \vdots & & \vdots \\
1 & \hat{\nu}^2_{N-1} & \ldots & \hat{\nu}^2_{N-q}
\end{matrix} \right]

The presample values (\nu^2_0, \ldots, \nu^2_{-q+1}) have been set to 0. Note that the LM(q) tests may have different finite-sample properties depending on the presample values, though they are asymptotically equivalent regardless of the presample values. The LM and Q statistics are computed from the OLS residuals, assuming that the disturbances are white noise. The Q and LM statistics have an approximate \chi^2_{(q)} distribution under the white-noise null hypothesis.
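Engle's score statistic is asymptotically equivalent to N R^2 from an auxiliary regression of the squared residuals on a constant and q of their own lags. A minimal sketch can compute the statistic in that common textbook form (the function name is illustrative, and this need not match PROC AUTOREG's exact computation):

```python
import numpy as np

def engle_lm_nr2(nu, q):
    """ARCH LM test in the N*R^2 form: regress nu_t^2 on a constant and
    q of its lags; N times the R^2 of that auxiliary regression is
    approximately chi-square(q) under the no-ARCH null."""
    nu2 = np.asarray(nu, dtype=float) ** 2
    y = nu2[q:]                                  # nu_t^2 for t = q+1..N
    N = y.size
    # design matrix: constant plus lags 1..q of the squared residuals
    X = np.column_stack(
        [np.ones(N)] + [nu2[q - j : q - j + N] for j in range(1, q + 1)]
    )
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)
    return N * r2
```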

Normality Test

Based on skewness and kurtosis, Bera and Jarque (1982) calculated the test statistic

T_N = N \left[ \frac{b_1^2}{6} + \frac{(b_2-3)^2}{24} \right]

where

b_1 = \frac{\sqrt{N}\sum_{t=1}^N \hat{u}^3_t}{\left(\sum_{t=1}^N \hat{u}^2_t\right)^{3/2}},
\qquad
b_2 = \frac{N\sum_{t=1}^N \hat{u}^4_t}{\left(\sum_{t=1}^N \hat{u}^2_t\right)^2}

The \chi^2_{(2)} distribution gives an approximation to the distribution of the normality test statistic T_N.

When the GARCH model is estimated, the normality test is obtained using the standardized residuals \hat{u}_t = \hat{\epsilon}_t / \sqrt{h_t}. The normality test can be used to detect misspecification of the family of ARCH models.
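A direct NumPy sketch of T_N from the definitions of b_1 and b_2 above (illustrative function name):

```python
import numpy as np

def jarque_bera(u):
    """Bera-Jarque normality statistic T_N = N*(b1^2/6 + (b2-3)^2/24),
    with b1 and b2 the sample skewness and kurtosis as defined in the
    text. Approximately chi-square(2) under normality."""
    u = np.asarray(u, dtype=float)
    N = u.size
    s2 = np.sum(u ** 2)
    b1 = np.sqrt(N) * np.sum(u ** 3) / s2 ** 1.5   # skewness measure
    b2 = N * np.sum(u ** 4) / s2 ** 2              # kurtosis measure
    return N * (b1 ** 2 / 6.0 + (b2 - 3.0) ** 2 / 24.0)
```

For a normal sample, T_N stays in the bulk of the chi-square(2) distribution; for a heavily skewed sample (for example, demeaned exponential draws) it is very large.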

Computation of the Chow Test

Consider the linear regression model

y = X{\beta} + u

where the parameter vector \beta contains k elements.

Split the observations for this model into two subsets at the break point specified by the CHOW= option, so that

y = (y'_1, y'_2)', \qquad X = (X'_1, X'_2)', \qquad u = (u'_1, u'_2)'

Now consider the two linear regressions for the two subsets of the data modeled separately,

y_{1} = X_{1}{\beta}_{1} + u_{1}
y_{2} = X_{2}{\beta}_{2} + u_{2}

where the number of observations from the first set is n1 and the number of observations from the second set is n2.

The Chow test statistic is used to test the null hypothesis H_0: \beta_1 = \beta_2, conditional on the two subsets having the same error variance, V(u_1) = V(u_2). The Chow test is computed using three sums of squared errors:

\mathrm{F}_{chow} = \frac{(\hat{u}'\hat{u} - \hat{u}'_1\hat{u}_1 - \hat{u}'_2\hat{u}_2)/k}
{(\hat{u}'_1\hat{u}_1 + \hat{u}'_2\hat{u}_2)/(n_1+n_2-2k)}

where \hat{u} is the regression residual vector from the full set model, \hat{u}_1 is the regression residual vector from the first set model, and \hat{u}_2 is the regression residual vector from the second set model. Under the null hypothesis, the Chow test statistic has an F-distribution with k and (n_1+n_2-2k) degrees of freedom, where k is the number of elements in \beta.
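The three sums of squared errors translate directly into code. A minimal NumPy sketch (illustrative names; PROC AUTOREG's CHOW= option performs the equivalent computation internally):

```python
import numpy as np

def chow_test(y, X, n1):
    """Chow test of H0: beta_1 = beta_2 at break point n1.

    y: (n,) response; X: (n,k) regressors including the intercept column;
    n1: number of observations in the first subset.
    Returns F, distributed F(k, n1+n2-2k) under H0."""
    def ssr(yy, XX):
        b, *_ = np.linalg.lstsq(XX, yy, rcond=None)
        r = yy - XX @ b
        return r @ r
    n, k = X.shape
    n2 = n - n1
    ssr_full = ssr(y, X)                 # u'u, pooled regression
    ssr1 = ssr(y[:n1], X[:n1])           # u1'u1, first subset
    ssr2 = ssr(y[n1:], X[n1:])           # u2'u2, second subset
    num = (ssr_full - ssr1 - ssr2) / k
    den = (ssr1 + ssr2) / (n1 + n2 - 2 * k)
    return num / den
```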

Chow (1960) suggested another test statistic that tests the hypothesis that the mean of prediction errors is 0. The predictive Chow test can also be used when n2 < k.

The PCHOW= option computes the predictive Chow test statistic

\mathrm{F}_{pchow} = \frac{(\hat{u}'\hat{u} - \hat{u}'_1\hat{u}_1)/n_2}
{\hat{u}'_1\hat{u}_1/(n_1-k)}

The predictive Chow test has an F-distribution with n_2 and (n_1-k) degrees of freedom.
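A companion sketch for the predictive form, using the degrees of freedom stated above (illustrative function name, not PROC AUTOREG's internal code):

```python
import numpy as np

def predictive_chow(y, X, n1):
    """Predictive Chow test (cf. the PCHOW= option), usable even when
    n2 < k. Returns F, distributed F(n2, n1-k) under H0."""
    def ssr(yy, XX):
        b, *_ = np.linalg.lstsq(XX, yy, rcond=None)
        r = yy - XX @ b
        return r @ r
    n, k = X.shape
    n2 = n - n1
    ssr_full = ssr(y, X)                 # u'u from the full sample
    ssr1 = ssr(y[:n1], X[:n1])           # u1'u1 from the first subset
    return ((ssr_full - ssr1) / n2) / (ssr1 / (n1 - k))
```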

Unit Root and Cointegration Testing

Consider the random walk process

y_t = y_{t-1} + u_t

where the disturbances might be serially correlated with possible heteroscedasticity. Phillips and Perron (1988) proposed a unit root test based on the OLS regression model

y_{t} = {\alpha}y_{t-1} + u_{t}

Let s^2 = \frac{1}{T-k}\sum_{t=1}^T \hat{u}^2_t and let \hat{\sigma}^2 be the variance estimate of the OLS estimator \hat{\alpha}, where \hat{u}_t is the OLS residual. You can estimate the asymptotic variance of \frac{1}{T}\sum_{t=1}^T \hat{u}^2_t using the truncation lag l.

\hat{\lambda} = \sum_{j=0}^l \kappa_j \left[ 1 - \frac{j}{l+1} \right] \hat{\gamma}_j

where \kappa_0 = 1, \kappa_j = 2 for j > 0, and \hat{\gamma}_j = \frac{1}{T}\sum_{t=j+1}^T \hat{u}_t \hat{u}_{t-j}.

Then the Phillips-Perron Z(\hat{\alpha}) test (zero mean case) is written

\mathrm{Z}(\hat{\alpha}) = T(\hat{\alpha}-1) -
\frac{1}{2}\, T^2 \hat{\sigma}^2 (\hat{\lambda}-\hat{\gamma}_0) / s^2

and has the following limiting distribution:

\frac{\frac{1}{2}\{ B(1)^2 - 1 \}}{\int_0^1 B(x)^2 \, dx}

where B(\cdot) is a standard Brownian motion. Note that the realization B(x) of the stochastic process B(\cdot) is distributed as N(0,x), and thus B(1)^2 \sim \chi^2_1.

Therefore, you can observe that \mathrm{P}(\hat{\alpha} < 1) \approx 0.68 as T \rightarrow \infty, which shows that the limiting distribution is skewed to the left.

Let t_{\hat{\alpha}} be the t test statistic for \hat{\alpha}. The Phillips-Perron \mathrm{Z}(t_{\hat{\alpha}}) test is written

\mathrm{Z}(t_{\hat{\alpha}}) = \left( \hat{\gamma}_0 / \hat{\lambda} \right)^{1/2} t_{\hat{\alpha}} -
\frac{1}{2}\, T\hat{\sigma}(\hat{\lambda}-\hat{\gamma}_0) / (s \hat{\lambda}^{1/2})

and its limiting distribution is derived as

\frac{\frac{1}{2}\{ B(1)^2 - 1 \}}{\left\{ \int_0^1 B(x)^2 \, dx \right\}^{1/2}}
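The zero mean case statistics above can be sketched in NumPy as follows (illustrative names; a sketch under the definitions in this section, not PROC AUTOREG's internal code):

```python
import numpy as np

def pp_tests(y, lags):
    """Phillips-Perron Z(alpha-hat) and Z(t_alpha-hat), zero mean case:
    OLS of y_t on y_{t-1} (no intercept, so k = 1), with the
    Bartlett-weighted long-run variance estimate lambda-hat at
    truncation lag l = lags."""
    y = np.asarray(y, dtype=float)
    ylag, ycur = y[:-1], y[1:]
    T = ycur.size
    alpha = (ylag @ ycur) / (ylag @ ylag)
    u = ycur - alpha * ylag                          # OLS residuals
    gamma = lambda j: (u[j:] @ u[: T - j]) / T       # gamma_hat_j
    g0 = gamma(0)
    lam = g0 + 2.0 * sum(
        (1.0 - j / (lags + 1)) * gamma(j) for j in range(1, lags + 1)
    )
    s2 = (u @ u) / (T - 1)                           # s^2 with T - k, k = 1
    var_alpha = s2 / (ylag @ ylag)                   # Var(alpha_hat)
    z_alpha = T * (alpha - 1.0) - 0.5 * T**2 * var_alpha * (lam - g0) / s2
    t_alpha = (alpha - 1.0) / np.sqrt(var_alpha)
    z_t = np.sqrt(g0 / lam) * t_alpha \
        - 0.5 * T * np.sqrt(var_alpha) * (lam - g0) / (np.sqrt(s2) * np.sqrt(lam))
    return z_alpha, z_t
```

For a true random walk both statistics stay near the body of their nonstandard limiting distributions, while for a stationary AR(1) series they become strongly negative.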

When you test the regression model y_t = \mu + \alpha y_{t-1} + u_t for the true random walk process (single mean case), the limiting distribution of the statistic Z(\hat{\alpha}) is written

\frac{\frac{1}{2}\{ B(1)^2 - 1 \} - B(1)\int_0^1 B(x)\,dx}
{\int_0^1 B(x)^2\,dx - \left[ \int_0^1 B(x)\,dx \right]^2}

while the limiting distribution of the statistic \mathrm{Z}(t_{\hat{\alpha}}) is given by

\frac{\frac{1}{2}\{ B(1)^2 - 1 \} - B(1)\int_0^1 B(x)\,dx}
{\left\{ \int_0^1 B(x)^2\,dx - \left[ \int_0^1 B(x)\,dx \right]^2 \right\}^{1/2}}

Finally, the limiting distribution of the Phillips-Perron test for the random walk with drift process y_t = \mu + y_{t-1} + u_t (trend case) can be derived as

where c = 1 for Z(\hat{\alpha}) and c = \frac{1}{\sqrt{Q}} for \mathrm{Z}(t_{\hat{\alpha}}),

V = \left[ \begin{matrix}
1 & \int_0^1 B(x)\,dx & \frac{1}{2} \\
\int_0^1 B(x)\,dx & \int_0^1 B(x)^2\,dx & \int_0^1 x B(x)\,dx \\
\frac{1}{2} & \int_0^1 x B(x)\,dx & \frac{1}{3}
\end{matrix} \right]

When several variables z_t = (z_{1t}, \ldots, z_{kt})' are cointegrated, there exists a (k \times 1) cointegrating vector c such that c'z_t is stationary and c is a nonzero vector. The residual-based cointegration test is based on the following regression model:

y_{t} = {\beta}_{1} + {x_{t}'}{\beta} + u_{t}

where y_t = z_{1t}, x_t = (z_{2t}, \ldots, z_{kt})', and \beta = (\beta_2, \ldots, \beta_k)'. You can estimate the consistent cointegrating vector using OLS if all variables are difference stationary, that is, I(1). The Phillips-Ouliaris test is computed using the OLS residuals from the preceding regression model, and it performs the test for the null hypothesis of no cointegration. The estimated cointegrating vector is \hat{c} = (1, -\hat{\beta}_2, \ldots, -\hat{\beta}_k)'.

Since the AUTOREG procedure does not produce the p-value of the cointegration test, you need to refer to the tables by Phillips and Ouliaris (1990). Before you apply the cointegration test, you might perform the unit root test for each variable.
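The residual-based setup can be sketched as follows: fit the cointegrating regression by OLS and recover the residuals and the estimated cointegrating vector, then subject the residuals to a unit root test, comparing the result against the Phillips and Ouliaris (1990) tables rather than standard Dickey-Fuller critical values (illustrative function name, not PROC AUTOREG's internal code):

```python
import numpy as np

def phillips_ouliaris_resid(y, X):
    """Residual-based cointegration setup: OLS of y_t on an intercept
    and the regressors x_t. Returns the OLS residuals u_hat (to be fed
    into a unit root test) and the estimated cointegrating vector
    (1, -beta_2, ..., -beta_k)'."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    Z = np.column_stack([np.ones(y.size), X])    # intercept plus x_t
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ b
    coint_vec = np.concatenate([[1.0], -b[1:]])  # (1, -beta_2, ..., -beta_k)'
    return resid, coint_vec
```

Because OLS is superconsistent under cointegration, the estimated cointegrating vector converges quickly to the true one even though the regressors are I(1).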


Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.