
STAT 804: Lecture 19 Notes

Processes with Periodic Components

Some of the series we have looked at have had clear annual cycles, returning to high levels in the same month every year. In our analysis of such processes we have tried to model the mean $\mu_t$ as a periodic function satisfying $\mu_{t+12} = \mu_t$. Sometimes we have fitted specific periodic functions to $\mu_t$, writing $\mu_t = \alpha \cos(2\pi t/12) + \beta \sin(2\pi t/12)$.
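
Such a periodic mean is fitted by ordinary least squares with $\cos(2\pi t/12)$ and $\sin(2\pi t/12)$ as regressors. A minimal sketch in Python (the simulated data, coefficient values, and noise level are all invented for the example):

\begin{verbatim}
import numpy as np

# Simulated monthly series with an annual cycle plus noise
# (alpha = 2, beta = 1, and the noise scale are invented for illustration).
rng = np.random.default_rng(0)
t = np.arange(120)                      # ten years of monthly data
x = 2.0 * np.cos(2 * np.pi * t / 12) + 1.0 * np.sin(2 * np.pi * t / 12) \
    + rng.normal(scale=0.5, size=t.size)

# Regress x on cos and sin at the annual frequency to estimate alpha, beta.
design = np.column_stack([np.cos(2 * np.pi * t / 12),
                          np.sin(2 * np.pi * t / 12)])
(alpha_hat, beta_hat), *_ = np.linalg.lstsq(design, x, rcond=None)
print(alpha_hat, beta_hat)              # close to 2.0 and 1.0
\end{verbatim}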

Another process we studied, that of sunspot numbers, also seems to show a clearly periodic component, though now the frequency or period of the oscillation is not so obvious. In this section of the course we investigate the notion of decomposing a general stationary time series into simple periodic components. We will take these components to be cosines and sines. We will be focussing on problems in which the period is not prespecified, that is problems more like the sunspot data than the annual cycle examples.

For a statistician, the simplest description of what we will do is to say that we will examine the correlation between our series and sines and cosines of various periods. We will use these correlations in several ways in what follows.

Periodic Functions

A periodic function $f$ on the real line has the property that $f(t+d)=f(t)$ for some $d$ and all $t$. The smallest possible choice of $d$ is the period of $f$. The frequency of $f$, in cycles per time unit, is $1/d$. The most famous periodic functions are trigonometric functions such as $\sin(\omega t+\phi)$. This function has period $2\pi/\omega$ and frequency $\omega/(2\pi)$ cycles per time unit. Often, for trigonometric functions, it is convenient to refer to $\omega$ itself as the frequency; the units are then radians per time unit.

The achievement of Fourier was to recognize that essentially any function $f$ with period 1 can be represented as a sum of functions $\sin(2\pi k t)$ or $\cos(2\pi k t)$. The tactic is to suppose that

\begin{displaymath}
f(t) = a_0 + \sum_{k=1}^\infty a_k \cos(2\pi kt) +
\sum_{k=1}^\infty b_k \sin(2\pi k t) \qquad (1)
\end{displaymath}

To discover the values of the coefficients we make use of the orthogonality properties:

\begin{displaymath}
\int_0^1 \cos(2\pi k t) \cos(2 \pi j t)\, dt = 1(j=k)/2
\end{displaymath}


\begin{displaymath}
\int_0^1 \sin(2\pi k t) \sin(2 \pi j t)\, dt = 1(j=k)/2
\end{displaymath}

and

\begin{displaymath}
\int_0^1 \cos(2\pi k t) \sin(2 \pi j t)\, dt = 0 \, .
\end{displaymath}

Now multiply $f(t)$ by, say, $\cos(2\pi k t)$ and integrate from 0 to 1. Expanding the integral using the supposed expression of $f$ as a sum, and applying the orthogonality properties, gives us

\begin{displaymath}
\int_0^1 f(t) \cos(2\pi k t) \, dt = a_k \int_0^1 \cos^2(2 \pi k t) \, dt =
a_k/2 \, ,
\end{displaymath}

so that $a_k = 2\int_0^1 f(t) \cos(2\pi k t) \, dt$. Similarly $b_k= 2\int_0^1 f(t) \sin(2\pi k t) \, dt$.
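
These coefficient formulas are easy to check numerically by approximating the integrals with Riemann sums. A minimal sketch in Python (the test function and the number of grid points are arbitrary choices):

\begin{verbatim}
import numpy as np

def fourier_coefficients(f, K, n=10_000):
    """Approximate a_k = 2 int f(t)cos(2 pi k t)dt and
    b_k = 2 int f(t)sin(2 pi k t)dt over [0,1] by Riemann sums."""
    t = np.arange(n) / n
    a = [2 * np.mean(f(t) * np.cos(2 * np.pi * k * t)) for k in range(1, K + 1)]
    b = [2 * np.mean(f(t) * np.sin(2 * np.pi * k * t)) for k in range(1, K + 1)]
    a0 = np.mean(f(t))
    return a0, np.array(a), np.array(b)

# Test on a function whose coefficients we know: a_2 = 3, b_5 = -1.5.
f = lambda t: 3 * np.cos(2 * np.pi * 2 * t) - 1.5 * np.sin(2 * np.pi * 5 * t)
a0, a, b = fourier_coefficients(f, K=6)
print(np.round(a, 6))   # entry for k = 2 close to 3, others close to 0
print(np.round(b, 6))   # entry for k = 5 close to -1.5, others close to 0
\end{verbatim}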

Mathematically the fact that we can derive a formula for the coefficients is far from proving that the resulting sum actually represents $f$; the key missing piece of the proof is that any function whose Fourier coefficients are all 0 is essentially the 0 function.

Correlation between functions

The integrals in the previous section can be thought of as analogous to covariances and variances. For instance, a Riemann sum for

\begin{displaymath}
\int_0^1 \cos(2\pi k t) \cos(2 \pi j t)\, dt
\end{displaymath}

is

\begin{displaymath}
\sum_{\ell=0}^{n-1} \cos(2\pi k \ell/n)\cos(2 \pi j \ell /n) / n
\end{displaymath}

which is an average product. In fact it is possible to show that

\begin{displaymath}
\sum_{\ell=0}^{n-1} \cos(2\pi k \ell/n)/n = 0
\end{displaymath}

for any integer $k$ which is not a multiple of $n$, so the average product is just a ``sample'' covariance. It is also possible to evaluate the average product in closed form to see that

\begin{displaymath}
\sum_{\ell=0}^{n-1} cos(2\pi k \ell/n)\cos(2 \pi j \ell /n) / n
= 1(j=k)/2
\end{displaymath}

exactly, for integers $0 < j,k < n/2$. When $j=k$ the average product becomes a variance, equal to 1/2, so the correlation is just the covariance multiplied by 2; in particular, the correlation is 0 whenever $j \ne k$.

Interpreting all the integrals above, then, as covariances we see that all the sines are uncorrelated with each other and with all the cosines and all the cosines are uncorrelated with each other.
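
These discrete orthogonality relations can be verified directly. A minimal sketch in Python (the choices $n=48$ and frequencies $1,\ldots,5$ are arbitrary, subject to $0 < j,k < n/2$):

\begin{verbatim}
import numpy as np

n = 48
ell = np.arange(n)
for j in range(1, 6):
    for k in range(1, 6):
        avg = np.mean(np.cos(2 * np.pi * k * ell / n)
                      * np.cos(2 * np.pi * j * ell / n))
        # Equals 1/2 when j == k and 0 otherwise (for 0 < j, k < n/2).
        assert abs(avg - (0.5 if j == k else 0.0)) < 1e-12
\end{verbatim}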

Notice particularly that the sine with frequency $j$ and the cosine with frequency $j$ are uncorrelated. This has an important implication for looking for components at frequency $j$ cycles per time unit in a time series: if we want a certain frequency we have to consider both the cosine and the sine at that frequency. An alternative summary of what we need is to consider the trigonometric identity

\begin{displaymath}
\sin(\omega t + \phi) = \cos(\phi) \sin(\omega t) + \sin(\phi) \cos(\omega t) \, .
\end{displaymath}

When we look for a component with frequency $\omega$ we will allow ourselves to adjust the number $\phi$, called the phase, in order to maximize the correlation with our data. This is equivalent to adjusting the coefficients $\cos(\phi)$ and $\sin(\phi)$ on the right hand side of the trigonometric identity to maximize a correlation.
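
A sketch of this phase adjustment as a regression, in Python (the frequency, true phase, and noise level are invented for the example): regress on $\sin(\omega t)$ and $\cos(\omega t)$, then convert the fitted coefficients back to a phase.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200)
omega = 2 * np.pi * 0.05                # frequency (invented for the example)
phi_true = 0.7                          # phase to be recovered
x = np.sin(omega * t + phi_true) + rng.normal(scale=0.3, size=t.size)

# Regress on sin(omega t) and cos(omega t); by the identity above the
# coefficients estimate cos(phi) and sin(phi) respectively.
design = np.column_stack([np.sin(omega * t), np.cos(omega * t)])
(c, s), *_ = np.linalg.lstsq(design, x, rcond=None)
phi_hat = np.arctan2(s, c)              # recovered phase
print(phi_hat)                          # close to 0.7
\end{verbatim}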

Complex Exponentials

Many of the identities in this subject are more easily derived using complex variables. In particular, the identity

\begin{displaymath}
e^{i\theta} = \cos(\theta) + i \sin(\theta)
\end{displaymath}

where $i^2=-1$ permits any series in sines and cosines to be rewritten in terms of exponentials. We can then often use tricks involving geometric sums to simplify our algebra.
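
The workhorse is the finite geometric sum: for $e^{i\theta} \ne 1$,

\begin{displaymath}
\sum_{t=0}^{T-1} e^{i\theta t} = \frac{e^{i\theta T}-1}{e^{i\theta}-1} \, ,
\end{displaymath}

whose real and imaginary parts evaluate sums of cosines and sines over $t=0,\ldots,T-1$ in a single step.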

For instance we can write

\begin{displaymath}
\cos(2 \pi k t) = \frac{\exp(2 \pi k ti) + \exp(-2 \pi k ti)}{2}
\end{displaymath}

and

\begin{displaymath}
\sin(2 \pi k t) = \frac{\exp(2 \pi k ti) - \exp(-2 \pi k ti)}{2i}
\end{displaymath}

These permit us to rewrite the expansion (1) in the form

\begin{displaymath}
f(t) = \sum_{-\infty}^\infty c_k \exp(2 \pi k t i)
\end{displaymath}

where $c_k= (a_k-ib_k)/2$ for $k>0$, $c_0=a_0$ and $c_k=(a_{-k}+ib_{-k})/2$ for $k<0$. In fact

\begin{displaymath}
c_k = \int_0^1 f(t) \exp(-2\pi k ti)\,dt \, .
\end{displaymath}
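
A brief numerical check of this formula in Python (the test function is arbitrary); the values of $c_k$ computed from the complex integral should match $(a_k-ib_k)/2$:

\begin{verbatim}
import numpy as np

# Test function with known coefficients: a_2 = 3, b_5 = -1.5.
n = 10_000
t = np.arange(n) / n
f = 3 * np.cos(2 * np.pi * 2 * t) - 1.5 * np.sin(2 * np.pi * 5 * t)

def c(k):
    # Riemann sum for c_k = int_0^1 f(t) exp(-2 pi i k t) dt
    return np.mean(f * np.exp(-2j * np.pi * k * t))

print(np.round(c(2), 6))    # (a_2 - i b_2)/2 = 1.5
print(np.round(c(5), 6))    # (a_5 - i b_5)/2 = 0.75i
print(np.round(c(-5), 6))   # (a_5 + i b_5)/2 = -0.75i
\end{verbatim}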

Fourier transforms

For functions which are not periodic we can proceed by a further approximation. Suppose $f$ is defined on the real line and fix a large value of $T$. Define

\begin{displaymath}
g(t) = f(-T/2+tT)\,.
\end{displaymath}

Then $g$ is defined on $[0,1]$ and

\begin{displaymath}
g(t) = \sum \exp(2\pi k ti) \int_0^1 g(s)\exp(-2\pi k si) \, ds
\end{displaymath}

according to the complex exponential form of (1) above. Re-express the conclusion in terms of $f$ to get

\begin{displaymath}
f(u) = \frac{1}{T}\sum \exp(2\pi ki(u+T/2)/T) \int_{-T/2}^{T/2} f(v)\exp(-2\pi k i(v+T/2)/T) \, dv
\end{displaymath}

which simplifies to

\begin{displaymath}
f(u) = \frac{1}{T}\sum \exp(2\pi\frac{k}{T} ui) \int_{-T/2}^{T/2} f(v)\exp(-2\pi\frac{k}{T}vi) \, dv
\end{displaymath}

You should recognize this sum as a Riemann sum for the integral

\begin{displaymath}
\int_{-\infty}^\infty \exp(2\pi xui) \int_{-T/2}^{T/2} f(v)\exp(-2\pi xvi)\, dv \, dx
\end{displaymath}

which then converges as $T\to\infty$ to the expression

\begin{displaymath}
f(u) =\int_{-\infty}^\infty \exp(2\pi xui)
\int_{-\infty}^\infty f(v)\exp(-2\pi xvi)\, dv \,dx
\end{displaymath}

The function

\begin{displaymath}
{\hat f}(x)= \int_{-\infty}^\infty f(v)\exp(-2\pi xvi)\, dv
\end{displaymath}

is called the Fourier transform of $f$ and we have derived a Fourier inversion formula. [WARNING: no proofs here! The integral exists, for example, for $f$ which are integrable over the whole real line.] This inversion formula expresses the function $f$ as a linear combination of sines and cosines, though there are infinitely many frequencies involved.
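
As a check on the convention, here is a sketch in Python (the grid limits and spacing are arbitrary) computing ${\hat f}$ for a function whose transform is known exactly: with this definition, $f(v)=e^{-\pi v^2}$ is its own Fourier transform.

\begin{verbatim}
import numpy as np

# With this convention, f(v) = exp(-pi v^2) is its own Fourier transform.
v = np.linspace(-10, 10, 20001)
dv = v[1] - v[0]
f = np.exp(-np.pi * v ** 2)

def fhat(x):
    # Riemann sum for int f(v) exp(-2 pi i x v) dv
    return np.sum(f * np.exp(-2j * np.pi * x * v)) * dv

for x in (0.0, 0.5, 1.0):
    print(fhat(x).real, np.exp(-np.pi * x ** 2))   # the two columns agree
\end{verbatim}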

Transforms of Stochastic Processes

We now seek to apply these ideas with the function $f$ being our stochastic process $X$. We have several difficulties.

The discrete nature of $X$ leads us to the study of a discrete approximation to the integral:

\begin{displaymath}
\sum_{t=0}^{T-1} X_t \exp(i2\pi\omega t)
\end{displaymath}

This object has real part

\begin{displaymath}
\sum_{t=0}^{T-1} X_t \cos(2\pi\omega t)
\end{displaymath}

and imaginary part

\begin{displaymath}
\sum_{t=0}^{T-1} X_t \sin(2\pi\omega t)
\end{displaymath}

so that, apart from the means not being subtracted, we are studying the sample covariances of the series with sines and cosines at frequency $\omega$. We now study the statistical properties of these objects and then try to interpret them.

Suppose that $X$ is a mean 0 stationary time series with autocovariance function $C$. We define the discrete Fourier transform of $X$ as

\begin{displaymath}
{\hat X}(\omega) = \frac{1}{\sqrt{T}}\sum_{t=0}^{T-1} X_t \exp(i2\pi\omega
t) \,.
\end{displaymath}

Our choice to divide by the square root of $T$ is motivated by the recognition that a sum of $T$ terms typically has a standard deviation on the order of $\sqrt{T}$, leading us to expect that $\hat X$ will have a standard deviation with a reasonable limit as $T\to\infty$.
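
A minimal sketch of this definition in Python (the simulated white-noise series and the frequency 0.1 are invented for illustration):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
T = 512
X = rng.normal(size=T)                  # mean-0 series (white noise here)

def Xhat(omega):
    """Discrete Fourier transform at frequency omega (cycles per time unit),
    normalized by sqrt(T) as in the text."""
    t = np.arange(T)
    return np.sum(X * np.exp(2j * np.pi * omega * t)) / np.sqrt(T)

z = Xhat(0.1)
print(z.real, z.imag)   # scaled sample covariances with cos and sin
print(abs(z) ** 2)      # |Xhat|^2; for white noise E|Xhat|^2 = C(0) = 1
\end{verbatim}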

We begin by computing moments of $\hat X$. Since $\hat X$ is complex valued we have to think about what these moments should be. One way to think about this is to view $\hat X$ as a vector with two components, the real and imaginary parts. This would give $\hat X$ a mean and a 2 by 2 variance-covariance matrix. Also of interest, however, will be the expected modulus squared of $\hat X$, namely

\begin{displaymath}
{\rm E}[\vert{\hat X}(\omega)\vert^2] = {\rm E}[{\hat X}(\omega)\overline{{\hat
X}(\omega)}]
\end{displaymath}

where $\bar z$ is the complex conjugate of $z$. (If $z=x+iy$ with $x$ and $y$ real then ${\bar z} = x-iy$.)

Since the $X$s have mean 0 we see that

\begin{displaymath}
{\rm E}{\hat X}(\omega) = 0
\end{displaymath}

(you should note that the expected value of a complex valued random variable is computed by finding the expected value of the real and imaginary parts). Then

\begin{displaymath}
{\rm E}[\vert{\hat X}(\omega)\vert^2] = \frac{1}{T} \sum_{s=0}^{T-1}
\sum_{t=0}^{T-1}\exp(i2\pi\omega ( s-t))\, {\rm E}(X_sX_t) \, .
\end{displaymath}

The expected values are just $C(s-t)$. We can gather together all the terms involving $C(0)$, all those involving $C(1)$ and so on to find

\begin{displaymath}
{\rm E}[\vert{\hat X}(\omega)\vert^2] = \frac{1}{T}\left(TC(0) +
(T-1)C(1)(e^{i2\pi\omega} + e^{-i2\pi\omega}) + \cdots \right)
\end{displaymath}

which simplifies to

\begin{displaymath}
C(0) +(1-1/T)C(1) (e^{i2\pi\omega} + e^{-i2\pi\omega})
+ (1-2/T)C(2) (e^{i4\pi\omega} + e^{-i4\pi\omega}) + \cdots \, .
\end{displaymath}

As $T\to\infty$ the coefficient of each $C(k)$ converges to 1 and we see (using $C(k)=C(-k)$) that, in the limit,

\begin{displaymath}
{\rm E}[\vert{\hat X}(\omega)\vert^2] = \sum_{-\infty}^\infty C(k) \exp(i2\pi\omega
k)\, .
\end{displaymath}

The right hand side of this expression is defined to be the spectral density, or power spectrum, of $X$:

\begin{displaymath}
f_X(\omega) = \sum_{-\infty}^\infty C(k) \exp(i2\pi\omega k) \, .
\end{displaymath}
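
As an illustration of this limiting relation, the following Python sketch (AR(1) parameters, sample size, and replication count all invented for the example) compares the Monte Carlo average of $\vert{\hat X}(\omega)\vert^2$ with the spectral density of an AR(1) process; for coefficient $\rho$ and innovation variance $\sigma^2$, summing $C(k)=\sigma^2\rho^{\vert k\vert}/(1-\rho^2)$ gives $f_X(\omega)=\sigma^2/(1-2\rho\cos(2\pi\omega)+\rho^2)$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
rho, sigma, T, reps, burn = 0.6, 1.0, 256, 2000, 200
t = np.arange(T)
omega = 0.1

# Monte Carlo average of |Xhat(omega)|^2 over many simulated AR(1) series.
total = 0.0
for _ in range(reps):
    eps = rng.normal(scale=sigma, size=T + burn)
    X = np.zeros(T + burn)
    for s in range(1, T + burn):
        X[s] = rho * X[s - 1] + eps[s]
    X = X[burn:]                         # discard burn-in
    Xhat = np.sum(X * np.exp(2j * np.pi * omega * t)) / np.sqrt(T)
    total += abs(Xhat) ** 2
print(total / reps)                      # approximately the value below

# Spectral density of AR(1) from C(k) = sigma^2 rho^|k| / (1 - rho^2):
print(sigma**2 / (1 - 2 * rho * np.cos(2 * np.pi * omega) + rho**2))
\end{verbatim}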

There are a number of ways to look at spectral densities and the discrete Fourier transform.


Richard Lockhart
2001-09-30