Non-Gaussian series.
The fitting methods we have studied are based on the likelihood for normally distributed errors. However, the estimates work reasonably well even if the errors are not normal.
Example: AR(1) fit. We fit
$$X_t = \rho X_{t-1} + \epsilon_t$$
using
$$\hat\rho = \frac{\sum_{t=2}^T X_t X_{t-1}}{\sum_{t=2}^T X_{t-1}^2},$$
which is consistent for non-Gaussian errors. (In fact $\sqrt{T}(\hat\rho - \rho) \Rightarrow N(0, 1-\rho^2)$, where $\Rightarrow$ denotes convergence in distribution; we show this below.)
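To make the claim concrete, here is a small simulation sketch (my own construction, not from the notes; the centered-exponential error law and all constants are arbitrary choices): generate an AR(1) series with markedly non-normal errors and compute $\hat\rho$.

```python
# Sketch: AR(1) with non-normal (centered exponential) errors; the
# least-squares estimate of rho is still close to the truth.
import numpy as np

rng = np.random.default_rng(42)
rho, T = 0.6, 5000
eps = rng.exponential(1.0, T) - 1.0   # iid, mean 0, variance 1, skewed
x = np.empty(T)
x[0] = eps[0] / np.sqrt(1 - rho**2)   # start with the stationary variance
for t in range(1, T):
    x[t] = rho * x[t-1] + eps[t]

rho_hat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1]**2)
print(rho_hat)   # close to 0.6 despite the non-normal errors
```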
Here is an outline of the logic of what follows. We will assume that the errors are iid with mean 0, variance $\sigma^2$, and finite fourth moment $\mu_4 = E(\epsilon_t^4) < \infty$. We will not assume that the errors have a normal distribution. The estimates solve the score equations $U(\hat\theta) = 0$; the score has mean 0, $E\{U(\theta)\} = 0$, and is asymptotically normal; and a Taylor expansion gives
$$0 = U(\hat\theta) = U(\theta) - V(\theta)(\hat\theta - \theta) + \text{negligible remainder},$$
where $V$ is the matrix of negative second derivatives of the log-likelihood, from which the limit law of $\hat\theta - \theta$ follows.
Here are details.
Consistency: One of our many nearly equivalent estimates of $\rho$ is
$$\hat\rho = \frac{\sum_{t=2}^T X_t X_{t-1}}{\sum_{t=2}^T X_{t-1}^2}.$$
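To see the consistency explicitly (a one-line sketch; the limits come from the law of large numbers applied to the stationary series):
$$\hat\rho = \rho + \frac{\sum_{t=2}^T \epsilon_t X_{t-1}/T}{\sum_{t=2}^T X_{t-1}^2/T} \longrightarrow \rho + \frac{0}{\sigma^2/(1-\rho^2)} = \rho \quad \text{in probability}.$$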
Score function: asymptotic normality.
The score function (from the conditional likelihood, with $\theta = (\rho, \sigma^2)$) has components
$$U_1(\theta) = \frac{\sum_{t=2}^T (X_t - \rho X_{t-1}) X_{t-1}}{\sigma^2} = \frac{\sum_{t=2}^T \epsilon_t X_{t-1}}{\sigma^2}, \qquad U_2(\theta) = -\frac{T-1}{2\sigma^2} + \frac{\sum_{t=2}^T \epsilon_t^2}{2\sigma^4},$$
and the assertion is that $U(\theta)/\sqrt{T}$ has a normal limit. To prove this assertion we define for each $T$ a martingale
$$M_k = \sum_{t=2}^k \epsilon_t X_{t-1} \quad \text{for } k = 2, \dots, T,$$
where the martingale property
$$E(M_k \mid X_1, \dots, X_{k-1}) = M_{k-1}$$
holds because $E(\epsilon_k \mid X_1, \dots, X_{k-1}) = 0$; a martingale central limit theorem then yields the normal limit.

Second derivative matrix and Fisher information: the matrix of negative second derivatives is
$$V(\theta) = \begin{pmatrix} \sum_t X_{t-1}^2/\sigma^2 & \sum_t \epsilon_t X_{t-1}/\sigma^4 \\ \sum_t \epsilon_t X_{t-1}/\sigma^4 & \sum_t \epsilon_t^2/\sigma^6 - (T-1)/(2\sigma^4) \end{pmatrix},$$
whose expectation, the Fisher information, is
$$\mathcal{I}(\theta) = (T-1)\begin{pmatrix} 1/(1-\rho^2) & 0 \\ 0 & 1/(2\sigma^4) \end{pmatrix}.$$
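A Monte Carlo sketch of the martingale CLT claim (my own illustration; the error law, $T$, and replication count are arbitrary choices): the sample variance of $U_1(\theta)/\sqrt{T}$ should be near $1/(1-\rho^2)$ even for non-normal errors.

```python
# Check: U1(theta)/sqrt(T) has variance near 1/(1 - rho^2) even for
# skewed (centered exponential) errors.
import numpy as np

rng = np.random.default_rng(0)
rho, sigma2, T, reps = 0.6, 1.0, 500, 2000

scores = np.empty(reps)
for r in range(reps):
    eps = rng.exponential(1.0, T) - 1.0    # iid, mean 0, variance 1, non-normal
    x = np.empty(T)
    x[0] = eps[0] / np.sqrt(1 - rho**2)    # start near stationarity
    for t in range(1, T):
        x[t] = rho * x[t-1] + eps[t]
    scores[r] = np.sum(eps[1:] * x[:-1]) / sigma2 / np.sqrt(T)

print("sample variance of U1/sqrt(T):", scores.var())
print("theoretical limit 1/(1-rho^2):", 1 / (1 - rho**2))
```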
Taylor expansion: In the next step we are supposed to prove that a random vector has a MVN limit. The usual tactic to prove this uses the so-called Cramér-Wold device: you prove that each linear combination of the entries in the vector has a univariate normal limit. Then the vector itself has a multivariate normal limit, and Taylor's theorem is that
$$0 = U(\hat\theta) = U(\theta) - V(\theta^*)(\hat\theta - \theta)$$
for some $\theta^*$ between $\theta$ and $\hat\theta$, so that
$$\sqrt{T}(\hat\theta - \theta) = \left(V(\theta^*)/T\right)^{-1} \frac{U(\theta)}{\sqrt{T}}.$$
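A quick numerical sanity check of the Taylor step (again a sketch of my own, under an arbitrary error law and sample size): the one-step Newton correction $V(\theta)^{-1}U(\theta)$, computed at the true parameter, should be close to $\hat\theta - \theta$.

```python
# Sketch: verify theta_hat - theta is approximated by V(theta)^{-1} U(theta)
# for one simulated AR(1) series with non-normal errors.
import numpy as np

rng = np.random.default_rng(1)
rho, s2, T = 0.6, 1.0, 2000
eps = rng.exponential(1.0, T) - 1.0           # mean 0, variance 1, skewed
x = np.empty(T); x[0] = eps[0] / np.sqrt(1 - rho**2)
for t in range(1, T):
    x[t] = rho * x[t-1] + eps[t]

xl, xc = x[:-1], x[1:]                        # lagged and current values
e = xc - rho * xl                             # residuals at the true parameter
U = np.array([np.sum(e * xl) / s2,
              -(T - 1) / (2 * s2) + np.sum(e**2) / (2 * s2**2)])
V = np.array([[np.sum(xl**2) / s2,     np.sum(e * xl) / s2**2],
              [np.sum(e * xl) / s2**2, np.sum(e**2) / s2**3 - (T - 1) / (2 * s2**2)]])

rho_hat = np.sum(xc * xl) / np.sum(xl**2)     # least-squares estimate
s2_hat = np.sum((xc - rho_hat * xl)**2) / (T - 1)
print(np.array([rho_hat - rho, s2_hat - s2])) # direct estimation error
print(np.linalg.solve(V, U))                  # one-step Newton correction
```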
Asymptotic normality: This is a consequence of Slutsky's theorem applied to the Taylor expansion and the results above for $U(\theta)/\sqrt{T}$ and $V(\theta)/T$. According to Slutsky's theorem the asymptotic distribution of $\sqrt{T}(\hat\theta - \theta)$ is the same as that of
$$\left(\mathcal{I}(\theta)/T\right)^{-1} \frac{U(\theta)}{\sqrt{T}},$$
which is multivariate normal with mean 0.
Behaviour of $\hat\rho$: pick off the first component and find
$$\sqrt{T}(\hat\rho - \rho) \Rightarrow N(0, 1 - \rho^2);$$
the fourth moment $\mu_4$ does not appear, so the limit is exactly as in the Gaussian case.
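A Monte Carlo sketch of this limit (my own illustration; the error law, $T$, and replication count are arbitrary choices):

```python
# Check: sqrt(T)(rho_hat - rho) has variance near 1 - rho^2 even with
# skewed (centered exponential) errors.
import numpy as np

rng = np.random.default_rng(2)
rho, T, reps = 0.6, 500, 4000
z = np.empty(reps)
for r in range(reps):
    eps = rng.exponential(1.0, T) - 1.0
    x = np.empty(T); x[0] = eps[0] / np.sqrt(1 - rho**2)
    for t in range(1, T):
        x[t] = rho * x[t-1] + eps[t]
    rho_hat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1]**2)
    z[r] = np.sqrt(T) * (rho_hat - rho)

print("Monte Carlo variance:", z.var())   # near 1 - rho^2 = 0.64
```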
Behaviour of $\hat\sigma^2$: on the other hand
$$\sqrt{T}(\hat\sigma^2 - \sigma^2) \Rightarrow N(0, \mu_4 - \sigma^4),$$
which agrees with the Gaussian-theory limit $N(0, 2\sigma^4)$ only when $\mu_4 = 3\sigma^4$, as it is for normal errors.
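The same kind of sketch for $\hat\sigma^2$ (again my own illustration): for centered Exp(1) errors, $\sigma^2 = 1$ and $\mu_4 = 9$, so the Monte Carlo variance should be near $\mu_4 - \sigma^4 = 8$, far from the Gaussian-theory value $2\sigma^4 = 2$.

```python
# Check: sqrt(T)(sigma2_hat - sigma2) has variance near mu4 - sigma^4 = 8
# for centered Exp(1) errors, not the Gaussian value 2*sigma^4 = 2.
import numpy as np

rng = np.random.default_rng(3)
rho, T, reps = 0.6, 500, 4000
z = np.empty(reps)
for r in range(reps):
    eps = rng.exponential(1.0, T) - 1.0
    x = np.empty(T); x[0] = eps[0] / np.sqrt(1 - rho**2)
    for t in range(1, T):
        x[t] = rho * x[t-1] + eps[t]
    rho_hat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1]**2)
    s2_hat = np.mean((x[1:] - rho_hat * x[:-1])**2)
    z[r] = np.sqrt(T) * (s2_hat - 1.0)

print("Monte Carlo variance:", z.var())   # near 8, not 2
```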
More general models: For an ARMA$(p,q)$ model the parameter vector is
$$\theta = (a_1, \dots, a_p, b_1, \dots, b_q, \sigma^2)^T$$
and the same program (score, martingale central limit theorem, Taylor expansion, Slutsky) gives the joint asymptotic normality of the estimates.
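In practice such models are fitted numerically; here is a hedged sketch using the statsmodels package (one possible tool, my choice rather than anything the notes prescribe):

```python
# Sketch: fit an ARMA(1,1) to a simulated non-Gaussian series with statsmodels.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
eps = rng.exponential(1.0, 1000) - 1.0                  # non-normal white noise
x = np.empty(1000); x[0] = eps[0]
for t in range(1, 1000):
    x[t] = 0.6 * x[t-1] + eps[t] + 0.3 * eps[t-1]       # ARMA(1,1)

res = ARIMA(x, order=(1, 0, 1), trend="n").fit()
print(res.params)        # AR, MA and innovation-variance estimates
print(res.resid[:5])     # fitted residuals, used for model assessment below
```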
Model assessment.
Having fitted an ARIMA model you get (essentially automatically) fitted residuals $\hat\epsilon_t$. Most of the fitting methods lead to fewer residuals than there were observations in the original series. Since the parameter estimates are consistent (if the model fitted is correct, of course) the fitted residuals should be essentially the true $\epsilon_t$, which is white noise. We will assess this by plotting the estimated ACF of $\hat\epsilon$ and then seeing if the estimates are all close enough to 0 to pass for white noise.
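As a preview, here is a sketch of that check (my own construction; the $\pm 1.96/\sqrt{T}$ bands anticipate the asymptotic theory developed next):

```python
# Sketch: sample ACF of fitted residuals with approximate 95% white-noise
# bands +/- 1.96/sqrt(T).
import numpy as np

def acf(e, nlags=20):
    """Sample autocorrelations r_1, ..., r_nlags of the series e."""
    e = e - e.mean()
    c0 = np.sum(e * e) / len(e)
    return np.array([np.sum(e[k:] * e[:-k]) / len(e) / c0
                     for k in range(1, nlags + 1)])

# e = res.resid  # residuals from the ARMA fit above, for example
e = np.random.default_rng(5).standard_normal(500)   # placeholder white noise
r = acf(e)
band = 1.96 / np.sqrt(len(e))
print("lags outside +/- 1.96/sqrt(T):", np.flatnonzero(np.abs(r) > band) + 1)
```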
To judge "close enough" we need asymptotic distribution theory for autocovariance estimates.