Time Series Analysis and Control Examples

Minimum AIC Procedure

The AIC statistic is widely used to select the best model among alternative parametric models. The minimum AIC model selection procedure can be interpreted as a maximization of the expected entropy (Akaike 1981). The entropy of a true probability density function (PDF) \varphi with respect to the fitted PDF f is written as
B(\varphi,f) = -I(\varphi,f)
where I(\varphi,f) is a Kullback-Leibler information measure, which is defined as
I(\varphi,f) = \int \log \left[ \frac{\varphi(z)}{f(z)} \right] \varphi(z)\, dz
where the random variable Z is assumed to be continuous. Therefore,
B(\varphi,f) = {\rm E}_Z \log f(Z) - {\rm E}_Z \log \varphi(Z)
where B(\varphi,f)\leq 0 and {\rm E}_Z denotes the expectation with respect to the random variable Z. B(\varphi,f)=0 if and only if \varphi=f (a.s.). The larger the quantity {\rm E}_Z \log f(Z), the closer the function f is to the true PDF \varphi. Given the data y = (y_1, ... , y_T)^' that has the same distribution as the random variable Z, let the likelihood function of the parameter vector \theta be \prod_{t=1}^T f(y_t|\theta). Then the average of the log likelihood function, \frac{1}T\sum_{t=1}^T \log f(y_t|\theta), is an estimate of the expected value of \log f(Z). Akaike (1981) derived an alternative estimate of {\rm E}_Z \log f(Z) by using the Bayesian predictive likelihood. The AIC is the bias-corrected estimate of -2T{\rm E}_Z \log f(Z|\hat{\theta}), where \hat{\theta} is the maximum likelihood estimate.
AIC = -2(maximum log likelihood) + 2(number of free parameters)
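
To illustrate these quantities, the following Python sketch (not part of the original text; the normal densities, integration grid, and variable names are arbitrary choices for illustration) evaluates I(\varphi,f) numerically for two candidate densities f against a standard normal \varphi, showing that B(\varphi,f) = -I(\varphi,f) is near zero only when f is close to \varphi.

import numpy as np

def normal_pdf(z, mean, sd):
    """Normal density, used here for both the true PDF (phi) and the fitted PDF (f)."""
    return np.exp(-0.5 * ((z - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Grid for numerical integration of I(phi, f) = int log[phi(z)/f(z)] phi(z) dz
z = np.linspace(-10.0, 10.0, 20001)
dz = z[1] - z[0]
phi = normal_pdf(z, mean=0.0, sd=1.0)          # true PDF
f_good = normal_pdf(z, mean=0.1, sd=1.0)       # fitted PDF close to phi
f_poor = normal_pdf(z, mean=2.0, sd=2.0)       # fitted PDF far from phi

def kullback_leibler(phi, f, dz):
    """Kullback-Leibler information I(phi, f) approximated by a Riemann sum."""
    return np.sum(np.log(phi / f) * phi) * dz

for label, f in [("f close to phi", f_good), ("f far from phi", f_poor)]:
    I = kullback_leibler(phi, f, dz)
    print(f"{label}: I(phi,f) = {I:.4f}, B(phi,f) = {-I:.4f}")
# I(phi,f) >= 0, so B(phi,f) <= 0, with equality only when f = phi.
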
Let \theta = (\theta_1, ... ,\theta_K)^' be a K \times 1 parameter vector that is contained in the parameter space \Theta_K. Given the data y, the log likelihood function is
\ell(\theta) = \sum_{t=1}^T \log f(y_t|\theta)
Suppose the probability density function f(y|\theta) has the true PDF \varphi(y) = f(y|\theta^0), where the true parameter vector \theta^0 is contained in \Theta_K. Let \hat{\theta}_K be a maximum likelihood estimate. The maximum of the log likelihood function is denoted as \ell(\hat{\theta}_K) = \max_{\theta\in\Theta_K}\ell(\theta). The expected log likelihood function is defined by
\ell^*(\theta) = T{\rm E}_Z \log f(Z|\theta)
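
As a concrete illustration (a sketch only; the normal model, sample size, and seed are assumptions made for this example), the following Python code evaluates \ell(\theta) for simulated Gaussian data at the maximum likelihood estimate and compares the average log likelihood with the expected value {\rm E}_Z \log f(Z|\theta^0).

import numpy as np

rng = np.random.default_rng(0)

# Simulated data y_1, ..., y_T from the "true" PDF: normal with theta0 = (mu, sigma^2) = (0, 1)
T = 500
y = rng.normal(loc=0.0, scale=1.0, size=T)

def log_lik(theta, y):
    """Log likelihood l(theta) = sum_t log f(y_t | theta) for a normal model, theta = (mu, sigma2)."""
    mu, sigma2 = theta
    return -0.5 * np.sum(np.log(2.0 * np.pi * sigma2) + (y - mu) ** 2 / sigma2)

# Maximum likelihood estimates for the normal model (closed form); K = 2 free parameters
theta_hat = (y.mean(), y.var())

# Average log likelihood as an estimate of E_Z log f(Z | theta)
print("l(theta_hat)        =", log_lik(theta_hat, y))
print("average log lik     =", log_lik(theta_hat, y) / T)
# For theta0 = (0, 1), E_Z log f(Z|theta0) = -0.5*log(2*pi) - 0.5, approximately -1.4189
print("E_Z log f(Z|theta0) =", -0.5 * np.log(2.0 * np.pi) - 0.5)
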
The Taylor series expansion of the expected log likelihood function around the true parameter \theta^0 gives the following asymptotic relationship:
\ell^*(\theta) \stackrel{A}{=} \ell^*(\theta^0) + T(\theta - \theta^0)^' {\rm E}_Z \frac{\partial \log f(Z|\theta^0)}{\partial \theta} - \frac{T}2(\theta - \theta^0)^' I(\theta^0)(\theta - \theta^0)
where I(\theta^0) is the information matrix and \stackrel{A}{=} stands for asymptotic equality. Note that {\rm E}_Z \frac{\partial \log f(Z|\theta^0)}{\partial \theta}=0 since {\rm E}_Z \log f(Z|\theta) is maximized at \theta^0. By substituting \hat{\theta}_K, the expected log likelihood function can be written as
\ell^*(\hat{\theta}_K) \stackrel{A}{=} \ell^*(\theta^0) - \frac{T}2(\hat{\theta}_K - \theta^0)^' I(\theta^0)(\hat{\theta}_K - \theta^0)
The maximum likelihood estimator is asymptotically normally distributed under the regularity conditions
\sqrt{T}\, I(\theta^0)^{1/2}(\hat{\theta}_K - \theta^0) \stackrel{d}{\rightarrow} N(0, I_K)
Therefore,
T(\hat{\theta}_K - \theta^0)^' I(\theta^0)(\hat{\theta}_K - \theta^0) \stackrel{a}{\sim} \chi_K^2
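
This quadratic-form result can be checked by simulation. The following Python sketch (illustrative only; a normal-mean model with known variance is assumed, so that I(\theta^0) = 1/\sigma^2 and K = 1) generates repeated samples and verifies that T(\hat{\theta}_K - \theta^0)^' I(\theta^0)(\hat{\theta}_K - \theta^0) behaves like a \chi^2_1 variable, which has mean 1 and variance 2.

import numpy as np

rng = np.random.default_rng(1)

# Normal model with unknown mean and known variance sigma2:
# the information for the mean is I(theta0) = 1 / sigma2 and K = 1.
T, sigma2, mu0 = 200, 1.0, 0.0
n_rep = 5000

quad_forms = np.empty(n_rep)
for r in range(n_rep):
    y = rng.normal(loc=mu0, scale=np.sqrt(sigma2), size=T)
    mu_hat = y.mean()                          # maximum likelihood estimate of the mean
    quad_forms[r] = T * (mu_hat - mu0) ** 2 / sigma2

# A chi-square variable with K = 1 degree of freedom has mean 1 and variance 2.
print("mean of quadratic form    :", quad_forms.mean())     # close to 1
print("variance of quadratic form:", quad_forms.var())      # close to 2
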
The mean expected log likelihood function, \ell^*(K) = {\rm E}_Y \ell^*(\hat{\theta}_K), becomes
\ell^*(K) \stackrel{A}{=} \ell^*(\theta^0) - \frac{K}2
When the Taylor series expansion of the log likelihood function around \hat{\theta}_K is used, the log likelihood function \ell(\theta) is written as
\ell(\theta) \stackrel{A}{=} \ell(\hat{\theta}_K) + (\theta - \hat{\theta}_K)^' \left. \frac{\partial \ell(\theta)}{\partial \theta} \right|_{\hat{\theta}_K} + \frac{1}2(\theta - \hat{\theta}_K)^' \left. \frac{\partial^2 \ell(\theta)}{\partial \theta \partial \theta^'} \right|_{\hat{\theta}_K}(\theta - \hat{\theta}_K)
Since \ell(\hat{\theta}_K) is the maximum log likelihood function, \left. \frac{\partial \ell(\theta)}{\partial \theta} \right|_{\hat{\theta}_K}=0. Note that {\rm plim}\left[ -\frac{1}T \left. \frac{\partial^2 \ell(\theta)}{\partial \theta \partial \theta^'} \right|_{\hat{\theta}_K} \right] = I(\theta^0) if the maximum likelihood estimator \hat{\theta}_K is a consistent estimator of \theta^0. Replacing \theta with the true parameter \theta^0 and taking expectations with respect to the random variable Y,
{\rm E}_Y \ell(\theta^0) \stackrel{A}{=} {\rm E}_Y \ell(\hat{\theta}_K) - \frac{K}2
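
A small Monte Carlo study along the following lines (a sketch under the assumption of a normal model with K = 2 free parameters, the mean and the variance) illustrates this bias term: on average, \ell(\hat{\theta}_K) exceeds \ell(\theta^0) by approximately K/2.

import numpy as np

rng = np.random.default_rng(2)

T, K = 200, 2                 # normal model with free parameters (mu, sigma2)
mu0, sigma2_0 = 0.0, 1.0
n_rep = 5000

def log_lik(mu, sigma2, y):
    """Log likelihood of a normal sample at parameters (mu, sigma2)."""
    return -0.5 * np.sum(np.log(2.0 * np.pi * sigma2) + (y - mu) ** 2 / sigma2)

gaps = np.empty(n_rep)
for r in range(n_rep):
    y = rng.normal(mu0, np.sqrt(sigma2_0), size=T)
    # l(theta_hat_K) - l(theta0): how much the maximized log likelihood overshoots
    gaps[r] = log_lik(y.mean(), y.var(), y) - log_lik(mu0, sigma2_0, y)

print("average of l(theta_hat) - l(theta0):", gaps.mean())   # close to K/2 = 1
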
Consider the following relationship:
\ell^*(\theta^0) = T{\rm E}_Z \log f(Z|\theta^0) = {\rm E}_Y \sum_{t=1}^T \log f(Y_t|\theta^0) = {\rm E}_Y \ell(\theta^0)
From the previous derivation,
\ell^*(K) \stackrel{A}{=} \ell^*(\theta^0) - \frac{K}2
Therefore,
\ell^*(K) \stackrel{A}{=} {\rm E}_Y \ell(\hat{\theta}_K) - K
The natural estimator for {\rm E}_Y \ell(\hat{\theta}_K) is \ell(\hat{\theta}_K). Using this estimator, you can write the mean expected log likelihood function as
\ell^*(K) \stackrel{A}{=} \ell(\hat{\theta}_K) - K
Consequently, the AIC is defined as an asymptotically unbiased estimator of -2(mean expected log likelihood):
{\rm AIC}(K) = -2\ell(\hat{\theta}_K) + 2K
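
As a usage sketch (the data, the polynomial regression models, and the helper function gaussian_aic are hypothetical and not part of the original text), the following Python code computes {\rm AIC}(K) = -2\ell(\hat{\theta}_K) + 2K for Gaussian polynomial models of increasing degree and selects the minimum AIC model; as noted below, only differences of the AIC values are meaningful.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: a quadratic trend observed with noise
T = 100
x = np.linspace(-1.0, 1.0, T)
y = 1.0 + 2.0 * x - 1.5 * x ** 2 + rng.normal(scale=0.3, size=T)

def gaussian_aic(y, fitted, n_coef):
    """AIC(K) = -2 * maximum log likelihood + 2 * K for a Gaussian regression model.
    K counts the regression coefficients plus the error variance."""
    T = len(y)
    sigma2_hat = np.mean((y - fitted) ** 2)       # MLE of the error variance
    max_loglik = -0.5 * T * (np.log(2.0 * np.pi * sigma2_hat) + 1.0)
    K = n_coef + 1
    return -2.0 * max_loglik + 2.0 * K

aics = {}
for degree in range(0, 6):
    coef = np.polyfit(x, y, degree)               # least squares = Gaussian MLE of the coefficients
    fitted = np.polyval(coef, x)
    aics[degree] = gaussian_aic(y, fitted, n_coef=degree + 1)

best = min(aics, key=aics.get)
for degree, aic in aics.items():
    print(f"degree {degree}: AIC = {aic:8.2f}  (AIC - min AIC = {aic - aics[best]:6.2f})")
print("minimum AIC model: polynomial of degree", best)

For Gaussian models the maximized log likelihood has the closed form used above, -\frac{T}2[\log(2\pi\hat{\sigma}^2)+1], so the AIC can be computed directly from the residual variance.
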
In practice, the previous asymptotic result is expected to be valid in finite samples if the number of free parameters does not exceed 2\sqrt{T}; the upper bound of the number of free parameters is [T/2]. It is worth noting that the value of the AIC is not meaningful in itself, since it is not the Kullback-Leibler information measure. The difference of AIC values is used to select the model, and the difference between two AIC values is considered insignificant if it is far less than 1. When the minimum AIC model contains many free parameters, it may still be possible to find a better model.


Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.