
STAT 870 Lecture 15

Continuous Time Markov Chains

Consider a population of single celled organisms in a stable environment.

Fix short time interval, length h.

Each organism has some probability of dividing to produce two organisms and some other probability of dying.

We might suppose that, for a given organism, in the interval $(t, t+h]$:
\begin{align*}
P(\text{the organism divides}) &= \lambda h + o(h)\\
P(\text{the organism dies}) &= \mu h + o(h).
\end{align*}

Notice tacit assumption: constants of proportionality do not depend on time (that is our interpretation of "stable environment").

Notice too that we have taken the constants not to depend on which organism we are talking about. We are really assuming that the organisms are all similar and live in similar environments.

Y(t): total population at time t.

$\mathcal{H}_t$: history of the process up to time $t$.

We generally take
\[
\mathcal{H}_t = \sigma\{Y(s) : 0 \le s \le t\}.
\]

General definition of a history (alternative jargon: filtration): any family of $\sigma$-fields $\{\mathcal{H}_t\}$ indexed by $t$ satisfying:

1. $\mathcal{H}_s \subseteq \mathcal{H}_t$ whenever $s \le t$ (information accumulates over time);
2. $\mathcal{H}_t = \bigcap_{s > t} \mathcal{H}_s$ (right continuity).

The last assumption is a technical detail we will ignore from now on.

Condition on the event $Y(t) = n$.

Then the probability of two or more divisions (either more than one division by a single organism or two or more organisms dividing) is o(h) by our assumptions.

Similarly the probability of both a division and a death or of two or more deaths is o(h).

So probability of exactly 1 division by any one of the $n$ organisms is $n\lambda h + o(h)$.

Similarly probability of 1 death is $n\mu h + o(h)$.

We deduce:

\begin{align*}
P(Y(t+h) = n+1 \mid Y(t) = n, \mathcal{H}_t) &= n\lambda h + o(h)\\
P(Y(t+h) = n-1 \mid Y(t) = n, \mathcal{H}_t) &= n\mu h + o(h)\\
P(Y(t+h) = n \mid Y(t) = n, \mathcal{H}_t) &= 1 - n(\lambda + \mu) h + o(h).
\end{align*}
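These small-$h$ probabilities can be checked by direct Monte Carlo: give each of $n$ organisms independent exponential division and death clocks and count what happens in $(0, h]$. This is an illustrative sketch, not part of the notes; the values of `lam`, `mu`, `n`, and `h` are made up.

```python
import random

# Monte Carlo check of the small-h transition probabilities for the
# linear birth-death process. Each organism's time to division is
# Exp(lam) and time to death is Exp(mu), independently.
random.seed(1)

lam, mu = 1.0, 0.5    # per-organism division and death rates (made up)
n, h = 10, 0.001      # current population, short interval length (made up)
reps = 200_000

up = down = 0
for _ in range(reps):
    births = sum(random.expovariate(lam) <= h for _ in range(n))
    deaths = sum(random.expovariate(mu) <= h for _ in range(n))
    if births == 1 and deaths == 0:
        up += 1       # population went n -> n+1
    elif deaths == 1 and births == 0:
        down += 1     # population went n -> n-1

print(f"P(n -> n+1) ~ {up/reps:.5f}  vs n*lam*h = {n*lam*h:.5f}")
print(f"P(n -> n-1) ~ {down/reps:.5f} vs n*mu*h  = {n*mu*h:.5f}")
```

The simulated frequencies differ from $n\lambda h$ and $n\mu h$ only by terms of order $h^2$ (the $o(h)$ events) plus Monte Carlo noise.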

These equations lead to:

\[
P(Y(t+h) = j \mid \mathcal{H}_t) = P(Y(t+h) = j \mid Y(t)).
\]

This is the Markov Property.

Definition: A process $\{Y(t);\, t \ge 0\}$ taking values in $S$, a finite or countable state space, is a Markov Chain if for all states $j$ and all $s, t \ge 0$

\[
P(Y(t+s) = j \mid \mathcal{H}_t) = P(Y(t+s) = j \mid Y(t)).
\]

Definition: A Markov chain $Y$ has stationary transitions if

\[
P(Y(t+s) = j \mid Y(s) = i) = P(Y(t) = j \mid Y(0) = i) \equiv P_{ij}(t).
\]

From now on: our chains have stationary transitions.

Summary of Markov Process Results

Detailed development

Suppose $X$ is a Markov Chain with stationary transitions. Then

\begin{align*}
P_{ij}(t+s) &= P(X(t+s) = j \mid X(0) = i)\\
&= \sum_k P(X(t+s) = j, X(t) = k \mid X(0) = i)\\
&= \sum_k P(X(t+s) = j \mid X(t) = k, X(0) = i)\, P(X(t) = k \mid X(0) = i)\\
&= \sum_k P_{kj}(s) P_{ik}(t).
\end{align*}

This shows

\[
P_{ij}(t+s) = \sum_k P_{ik}(t) P_{kj}(s),
\]

which is the Chapman-Kolmogorov equation.
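The Chapman-Kolmogorov equation can be verified numerically. A convenient test case, not from the notes, is the two-state chain with rate $\alpha$ for $0 \to 1$ and $\beta$ for $1 \to 0$, whose transition function has the standard closed form used below; the values of `alpha`, `beta`, `t`, `s` are made up.

```python
import math

# Check P(t+s) = P(t) P(s) for a two-state chain with rates
# alpha (0 -> 1) and beta (1 -> 0), using the closed-form P(t).
alpha, beta = 1.3, 0.7
theta = alpha + beta

def P(t):
    """Transition matrix P(t) for the two-state chain."""
    e = math.exp(-theta * t)
    return [[(beta + alpha * e) / theta, (alpha - alpha * e) / theta],
            [(beta - beta * e) / theta, (alpha + beta * e) / theta]]

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t, s = 0.4, 1.1
lhs, rhs = P(t + s), matmul(P(t), P(s))
print("P(t+s)[0][1] =", lhs[0][1], " (P(t)P(s))[0][1] =", rhs[0][1])
```

Every entry of $P(t+s)$ agrees with $P(t)P(s)$ to machine precision, and each row of $P(t)$ sums to 1.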

Now consider the chain starting from $i$ and let $T_1$ be the first $t$ for which $X(t) \ne i$. Then $T_1$ is a stopping time.

[Technically:

\[
\{T_1 \le t\} \in \mathcal{H}_t
\]

for each t.] Then

\begin{align*}
P(T_1 > t + s \mid T_1 > s, X(0) = i) &= P(T_1 > t + s \mid X(u) = i,\, 0 \le u \le s)\\
&= P(X(u) = i,\, s < u \le t + s \mid X(u) = i,\, 0 \le u \le s)\\
&= P(T_1 > t \mid X(0) = i)
\end{align*}

by the Markov property. Note: we are actually asserting a generalization of the Markov property: if $f$ is some function on the set of possible paths of $X$ then

\[
E[f(X(t+u);\, u \ge 0) \mid \mathcal{H}_t] = E[f(X(t+u);\, u \ge 0) \mid X(t)].
\]

The formula requires some sophistication to appreciate. In it, $f$ is a function which associates a real number with each sample path of $X$. For instance,

\[
f(x) = 1(x(u) = i \text{ for all } 0 \le u \le t)
\]

is such a functional. Jargon: a functional is a function whose argument is itself a function and whose value is a scalar.

FACT: Strong Markov Property - for a stopping time $T$

\[
E[f(X(T+u);\, u \ge 0) \mid \mathcal{H}_T] = E[f(X(T+u);\, u \ge 0) \mid X(T)]
\]

with suitable fix on the event $\{T = \infty\}$.

Conclusion: given $X(0) = i$, $T_1$ has the memoryless property, so $T_1$ has an exponential distribution. Let $\lambda_i$ be the rate parameter.
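The memoryless property that forces $T_1$ to be exponential is easy to see by simulation: for $T \sim \text{Exp}(\text{rate})$, the conditional survival probability $P(T > t + s \mid T > s)$ matches the unconditional $P(T > t)$. The values of `rate`, `t`, `s` below are made up for the sketch.

```python
import random

# Memorylessness of the exponential distribution:
# P(T > t+s | T > s) = P(T > t).
random.seed(2)
rate, t, s = 2.0, 0.3, 0.5   # made-up values
draws = [random.expovariate(rate) for _ in range(500_000)]

survivors = [x for x in draws if x > s]                 # condition on T > s
cond = sum(x > t + s for x in survivors) / len(survivors)
uncond = sum(x > t for x in draws) / len(draws)
print(f"P(T > t+s | T > s) ~ {cond:.4f}   P(T > t) ~ {uncond:.4f}")
```

Both estimates are close to $e^{-\text{rate}\cdot t} = e^{-0.6} \approx 0.549$, up to Monte Carlo error.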

Embedded Chain: Skeleton

Let $T_1 < T_2 < \cdots$ be the stopping times at which transitions occur, and put $T_0 = 0$. Then $Y_n = X(T_n)$ defines the embedded chain. The sequence $\{Y_n\}$ is a Markov chain by the strong Markov property. That $P(Y_{n+1} = i \mid Y_n = i) = 0$ reflects the fact that $X(T_{n+1}) \ne X(T_n)$ by design.
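The embedded-chain description also gives a recipe for simulating the process: hold in state $i$ for an $\text{Exp}(\lambda_i)$ time, then jump according to the embedded chain's transition row, which has a zero diagonal. The 3-state example below, with its `rates` and `jump_probs`, is entirely made up for illustration.

```python
import random

# Simulate a continuous-time chain via its embedded chain plus
# exponential holding times (made-up 3-state example).
random.seed(3)

rates = {0: 1.0, 1: 2.0, 2: 0.5}        # lambda_i: rate of leaving state i
jump_probs = {0: {1: 0.7, 2: 0.3},      # embedded-chain rows; diagonal
              1: {0: 0.5, 2: 0.5},      # entries are 0 by design
              2: {0: 1.0}}

def simulate(x0, t_end):
    """Return the path as a list of (jump_time, new_state) pairs."""
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        t += random.expovariate(rates[x])        # Exp(lambda_x) holding time
        if t > t_end:
            return path
        states = list(jump_probs[x])
        x = random.choices(states, [jump_probs[x][s] for s in states])[0]
        path.append((t, x))

path = simulate(0, 10.0)
print(f"{len(path) - 1} transitions; final state {path[-1][1]}")
```

By construction, jump times strictly increase and consecutive states always differ, matching $X(T_{n+1}) \ne X(T_n)$.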

As before we say $i \to j$ if $P_{ij}(t) > 0$ for some $t$. It is fairly clear that $i \to j$ for the process $X(t)$ if and only if $i \to j$ for the embedded chain $\{Y_n\}$.

We say $i \leftrightarrow j$ if $i \to j$ and $j \to i$.

Now consider, for $j \ne i$,

\[
P(X(t+h) = j \mid X(t) = i).
\]

Suppose the chain has made $n$ transitions so far, so that $T_n \le t < T_{n+1}$ and $Y_n = i$. Then the event $X(t+h) = j$ is, except for possibilities of probability $o(h)$, the event that

\[
T_{n+1} \in (t, t+h] \quad\text{and}\quad Y_{n+1} = j.
\]

The probability of this is

\[
\lambda_i h\, P_{ij} + o(h),
\]

where $P_{ij} = P(Y_{n+1} = j \mid Y_n = i)$ denotes the transition matrix of the embedded chain.
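This small-$h$ approximation can be checked against the two-state chain (rates $\alpha$ for $0 \to 1$, $\beta$ for $1 \to 0$), where the transition function is known in closed form: there $\lambda_0 = \alpha$ and the embedded chain has $P_{01} = 1$. This example and the values of `alpha`, `beta` are not from the notes.

```python
import math

# Check P(X(t+h)=j | X(t)=i) = lambda_i * P_ij * h + o(h) for the
# two-state chain, where P_01(h) = alpha/theta * (1 - exp(-theta*h)),
# lambda_0 = alpha, and the embedded chain has P_01 = 1.
alpha, beta = 1.3, 0.7   # made-up rates
theta = alpha + beta

def P01(t):
    """Exact transition probability 0 -> 1 in time t."""
    return alpha / theta * (1.0 - math.exp(-theta * t))

for h in (0.1, 0.01, 0.001):
    approx = alpha * 1.0 * h                  # lambda_0 * P_01 * h
    err_over_h = (approx - P01(h)) / h        # should vanish as h -> 0
    print(f"h={h}: exact {P01(h):.6f}, approx {approx:.6f}, "
          f"error/h = {err_over_h:.5f}")
```

The error divided by $h$ shrinks with $h$, which is exactly the meaning of the $o(h)$ term.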



Richard Lockhart
Tuesday October 31 11:01:13 PST 2000