STAT 380 Week 5

Infinite State Spaces

Example: consider a sequence of independent coin tosses with probability $ p$ of Heads on a single toss. Let $ X_n$ be the number of Heads minus the number of Tails after $ n$ tosses. Put $ X_0=0$.

Then $ X_n$ is a Markov chain. The state space is $ \mathbb{Z}$, the integers, and

$\displaystyle {\bf P}_{ij} = \begin{cases}
p & j=i+1
\\
1-p & j=i-1
\\
0 & \text{otherwise}
\end{cases}$

This chain has one communicating class (for $ p\neq 0,1$) and all states have period 2. According to the strong law of large numbers $ X_n/n$ converges to $ 2p-1$. If $ p\neq 1/2$ this guarantees that $ X_n \neq 0$ for all large enough $ n$; that is, the number of returns to 0 is finite. The state 0 is then transient, and since the chain is irreducible all states must be transient.
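
The law-of-large-numbers behaviour can be checked with a short simulation. This is a sketch, not part of the original notes; the values p = 0.7 and n = 100000 are illustrative choices.

```python
import random

# Simulate X_n = (#Heads - #Tails) for n independent tosses with P(Heads) = p.
# The parameter values below are illustrative, not from the notes.
random.seed(1)
p, n = 0.7, 100_000
x = 0
for _ in range(n):
    x += 1 if random.random() < p else -1

# By the strong law of large numbers X_n / n should be near 2p - 1 = 0.4,
# so for p != 1/2 the walk drifts away from 0 and returns stop.
print(x / n)
```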

For $ p = 1/2$ the situation is different. It is a fact that

$\displaystyle {\bf P}^n_{00} = P(\#\text{H} = \#\text{T at time } n)
$

For $ n$ even this is the probability of exactly $ n/2$ heads in $ n$ tosses.

The Local Central Limit Theorem (a normal approximation to $ P(-1/2 < X_n < 1/2)$), or Stirling's approximation, shows that

$\displaystyle \sqrt{2m}\, P(\text{Binomial}(2m,1/2) = m) \to (2/\pi)^{1/2}
$

so $ {\bf P}^{2m}_{00} \sim (\pi m)^{-1/2}$ and, since $ \sum_m m^{-1/2}$ diverges,

$\displaystyle \sum_n {\bf P}^n_{00} = \infty
$

That is: 0 is a recurrent state.
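
Both the local CLT limit and the divergence of the partial sums can be checked numerically. A minimal sketch, using the exact recursion $ u_m = u_{m-1}(2m-1)/(2m)$ for $ u_m = \binom{2m}{m}/4^m = {\bf P}^{2m}_{00}$ (the cutoff M = 10000 is an illustrative choice):

```python
from math import pi, sqrt

# u = C(2m, m) / 4^m = P^{2m}_{00} for the symmetric walk,
# computed by the recursion u_m = u_{m-1} * (2m - 1) / (2m).
u = 1.0
partial = 0.0
M = 10_000
for m in range(1, M + 1):
    u *= (2 * m - 1) / (2 * m)
    partial += u

print(sqrt(2 * M) * u)   # approaches sqrt(2/pi) ~ 0.7979
print(partial)           # partial sums grow like sqrt(M): the series diverges
```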

Hitting Times

Start an irreducible recurrent chain $ X_n$ in state $ i$. Let $ T_j$ be the first $ n>0$ such that $ X_n=j$. Define

$\displaystyle m_{ij} = {\rm E}(T_j\vert X_0=i)
$

First step analysis:

$\displaystyle m_{ij}$ $\displaystyle = 1\cdot P(X_1=j\vert X_0=i)$    
  $\displaystyle \qquad + \sum_{k\neq j} (1+{\rm E}(T_j\vert X_0=k))P_{ik}$    
  $\displaystyle = \sum_k P_{ik} + \sum_{k\neq j} P_{ik} m_{kj}$    
  $\displaystyle = 1 + \sum_{k\neq j} P_{ik} m_{kj}$    

Example

$\displaystyle {\bf P}=
\left[\begin{array}{cc} \frac{3}{5} & \frac{2}{5}
\\
\\
\frac{1}{5} & \frac{4}{5}
\end{array} \right]
$

The equations are

$\displaystyle m_{11}$ $\displaystyle = 1 +\frac{2}{5} m_{21}$    
$\displaystyle m_{12}$ $\displaystyle = 1 +\frac{3}{5} m_{12}$    
$\displaystyle m_{21}$ $\displaystyle = 1 +\frac{4}{5} m_{21}$    
$\displaystyle m_{22}$ $\displaystyle = 1 +\frac{1}{5} m_{12}$    

The second and third equations give immediately

$\displaystyle m_{12}$ $\displaystyle = \frac{5}{2}$    
$\displaystyle m_{21}$ $\displaystyle = 5$    

Then plug in to the others to get

$\displaystyle m_{11}$ $\displaystyle = 3$    
$\displaystyle m_{22}$ $\displaystyle = \frac{3}{2}$    

Notice that the stationary initial distribution is

$\displaystyle \left(\frac{1}{m_{11}},\frac{1}{m_{22}}\right)
= \left(\frac{1}{3},\frac{2}{3}\right) \, .
$
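
The four first-step equations above can be solved in a few lines; exact rational arithmetic (Python's fractions module) avoids any rounding. A check, not part of the original notes:

```python
from fractions import Fraction as F

# Solve the first-step equations for the 2-state example exactly.
m12 = 1 / (1 - F(3, 5))     # from m_12 = 1 + (3/5) m_12
m21 = 1 / (1 - F(4, 5))     # from m_21 = 1 + (4/5) m_21
m11 = 1 + F(2, 5) * m21     # plug in
m22 = 1 + F(1, 5) * m12

print(m11, m12, m21, m22)   # 3 5/2 5 3/2

# The stationary distribution (1/m11, 1/m22) = (1/3, 2/3) satisfies pi P = pi.
pi1, pi2 = 1 / m11, 1 / m22
assert pi1 * F(3, 5) + pi2 * F(1, 5) == pi1
assert pi1 * F(2, 5) + pi2 * F(4, 5) == pi2
```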

Consider the fraction of time spent in state $ j$:

$\displaystyle \frac{1(X_0=j)+\cdots+1(X_n=j)}{n+1}
$

Imagine the chain starts in state $ i$; take expected values to get

$\displaystyle \frac{\sum_{r=1}^n {\bf P}^r_{ij} +1(i=j)}{n+1}
=
\frac{\sum_{r=0}^n {\bf P}^r_{ij}}{n+1}
$

If rows of $ {\bf P}^n$ converge to $ \pi$ then fraction converges to $ \pi_j$; i.e. limiting fraction of time in state $ j$ is $ \pi_j$.
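
The convergence of the rows of $ {\bf P}^n$ can be seen numerically for the two-state example above; both rows approach $ (1/3, 2/3)$. A sketch (the power 50 is an arbitrary illustrative choice):

```python
# Raise the example transition matrix to a high power.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[3 / 5, 2 / 5], [1 / 5, 4 / 5]]
Pn = P
for _ in range(49):   # Pn = P^50
    Pn = matmul(Pn, P)

# The second eigenvalue is 2/5, so convergence is geometric and both rows
# of P^50 are numerically indistinguishable from (1/3, 2/3).
print(Pn)
```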

Heuristic: start the chain in $ i$. We expect a return to $ i$ every $ m_{ii}$ time units, so the chain is in state $ i$ about once every $ m_{ii}$ time units; i.e. the limiting fraction of time in state $ i$ is $ 1/m_{ii}$.

Conclusion: for an irreducible recurrent finite state space Markov chain

$\displaystyle \pi_i = \frac{1}{m_{ii}} \, .
$

Infinite State Spaces

The previous conclusion is still correct on an infinite state space provided a stationary initial distribution exists.

Example: $ X_n =$   Heads$ -$Tails after $ n$ tosses of a fair coin. The equations are

$\displaystyle m_{0,0}$ $\displaystyle = 1 + \frac{1}{2}m_{1,0}+\frac{1}{2} m_{-1,0}$    
$\displaystyle m_{1,0}$ $\displaystyle = 1 + \frac{1}{2} m_{2,0}$    

and many more.

Some observations:

You have to go through 1 to get from 2 to 0, so

$\displaystyle m_{2,0}=m_{2,1}+m_{1,0}
$

Symmetry (switching H and T):

$\displaystyle m_{1,0}=m_{-1,0}
$

The transition probabilities are homogeneous:

$\displaystyle m_{2,1}=m_{1,0}
$

Conclusion:

$\displaystyle m_{0,0}$ $\displaystyle = 1 + m_{1,0}$    
  $\displaystyle = 1+ 1 + \frac{1}{2} m_{2,0}$    
  $\displaystyle = 2 + \frac{1}{2}\left(m_{2,1}+m_{1,0}\right)$    
  $\displaystyle = 2 + m_{1,0}$    

Notice that there are no finite solutions!

Summary of the situation:

Every state is recurrent.

All the expected hitting times $ m_{ij}$ are infinite.

All entries $ {\bf P}^n_{ij}$ converge to 0.

Jargon: The states in this chain are null recurrent.
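
Null recurrence can be seen numerically via a classical fact for the simple symmetric walk: $ P(T_0 > 2m) = {\bf P}^{2m}_{00} = \binom{2m}{m}/4^m$, so the truncated mean $ {\rm E}(\min(T_0, N)) = \sum_{n<N} P(T_0>n)$ is computable exactly. A sketch (the cutoffs below are illustrative):

```python
# P(T_0 > 2m) = C(2m, m)/4^m for the simple symmetric walk (classical identity),
# and returns can only happen at even times, so P(T_0 > 2m+1) = P(T_0 > 2m).
u = 1.0                   # C(2m, m)/4^m via u_m = u_{m-1}(2m-1)/(2m)
truncated_mean = 2.0      # terms n = 0 and n = 1 (no return in under 2 steps)
checkpoints = []
for m in range(1, 50_001):
    u *= (2 * m - 1) / (2 * m)
    truncated_mean += 2 * u           # terms n = 2m and n = 2m + 1
    if m % 10_000 == 0:
        checkpoints.append(truncated_mean)

# The truncated means grow like sqrt(N) without bound: E(T_0) is infinite,
# even though return is certain.
print(checkpoints)
```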

One Example

Page 229, question 21: a runner leaves by the front or back door, with probability 1/2 each, and returns by the front or back door, with probability 1/2 each. He has $ k$ pairs of shoes, wears a pair if there are any at the departure door, and leaves them at the return door. No shoes at the departure door? He runs barefoot. What is the long-run fraction of time he runs barefoot?

Solution: Let $ X_n$ be number of shoes at front door on day $ n$. Then $ X_n$ is a Markov Chain.

Transition probabilities:

If there are $ k$ pairs at the front door on day $ n$ (all the shoes), then $ X_{n+1}=k$ if he goes out the back door (prob 1/2) or out the front door and back in the front door (prob 1/4). Otherwise $ X_{n+1}=k-1$.

If there are $ j$ pairs at the front door on day $ n$, with $ 0 < j < k$: $ X_{n+1}=j+1$ if he goes out the back door and in the front door (prob 1/4); $ X_{n+1}=j-1$ if he goes out the front door and in the back door (prob 1/4). Otherwise $ X_{n+1}=j$.

If there are 0 pairs at the front door on day $ n$: $ X_{n+1}=0$ if he goes out the front door (prob 1/2, barefoot) or out the back door and in the back door (prob 1/4). Otherwise $ X_{n+1}=1$.

Transition matrix $ {\bf P}$:

$\displaystyle \left[\begin{array}{cccccc}
\frac{3}{4} & \frac{1}{4} & 0 & 0 & \cdots & 0
\\
\frac{1}{4} & \frac{1}{2} & \frac{1}{4} & 0 & \cdots & 0
\\
0 & \frac{1}{4} & \frac{1}{2} & \frac{1}{4} & \cdots & 0
\\
\vdots & & \ddots & \ddots & \ddots & \vdots
\\
0 & 0 & 0 & \cdots & \frac{1}{4} & \frac{3}{4}
\end{array}\right]
$

Doubly stochastic: row sums and column sums are 1.

So $ \pi_i=1/(k+1)$ for all $ i$ is stationary initial distribution.

Solution to the problem: 1 day in $ k+1$ there are no shoes at the front door; he goes barefoot on half of those days. Also 1 day in $ k+1$ all the shoes are at the front door (none at the back); he goes barefoot on half of those days too. Overall he goes barefoot $ \frac{1}{2}\cdot\frac{1}{k+1}+\frac{1}{2}\cdot\frac{1}{k+1} = 1/(k+1)$ of the time.
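
A quick exact check of the double stochasticity and the barefoot fraction, for the illustrative choice k = 3 (any k would do):

```python
from fractions import Fraction as F

# Build the (k+1) x (k+1) shoe-chain transition matrix for k = 3.
k = 3
P = [[F(0)] * (k + 1) for _ in range(k + 1)]
P[0][0], P[0][1] = F(3, 4), F(1, 4)
P[k][k], P[k][k - 1] = F(3, 4), F(1, 4)
for j in range(1, k):
    P[j][j - 1], P[j][j], P[j][j + 1] = F(1, 4), F(1, 2), F(1, 4)

# Doubly stochastic: all row sums and column sums equal 1, so the uniform
# distribution pi_i = 1/(k+1) is stationary.
row_sums = [sum(row) for row in P]
col_sums = [sum(P[i][j] for i in range(k + 1)) for j in range(k + 1)]
print(row_sums, col_sums)

# Barefoot: in state 0 departing by the front door, or in state k departing
# by the back door, each with probability 1/2.
barefoot = F(1, k + 1) * F(1, 2) + F(1, k + 1) * F(1, 2)
print(barefoot)   # 1/(k+1)
```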

Gambler's Ruin

An insurance company's reserves fluctuate: sometimes up, sometimes down. Ruin is the event that they hit 0 (the company goes bankrupt). The general problem: for a given model of the fluctuations, compute the probability of ruin, either eventually or within the next $ k$ time units.

Simplest model: gambling on Red at a casino. Bet $1 at a time; win $1 with probability $ p$, lose $1 with probability $ 1-p$. Start with $ k$ dollars and quit playing when down to $0 or up to $ N$. Compute

$\displaystyle P_k = P(\text{reach } N \text{ before } 0 \mid X_0=k)
$

$ X_n$ = fortune after $ n$ plays. $ X_0=k$.

Transition matrix:

$\displaystyle {\bf P}=
\left[\begin{array}{cccccc}
1 & 0 & 0 & 0 & \cdots & 0
\\
1-p & 0 & p & 0 & \cdots & 0
\\
0 & 1-p & 0 & p & \cdots & 0
\\
\vdots & & \ddots & \ddots & \ddots & \vdots
\\
0 & 0 & 0 & \cdots & 0 & 1
\end{array}\right]
$

First step analysis:

$\displaystyle P_0$ $\displaystyle =0$    
$\displaystyle P_i$ $\displaystyle = (1-p) P_{i-1}+pP_{i+1}$    
$\displaystyle P_N$ $\displaystyle = 1$    

Middle equation is

$\displaystyle pP_i +(1-p)P_i = (1-p) P_{i-1}+pP_{i+1}
$

or

$\displaystyle P_{i+1}-P_{i}$ $\displaystyle = \frac{1-p}{p} (P_{i}-P_{i-1})$    
  $\displaystyle = \left( \frac{1-p}{p}\right)^2 (P_{i-1}-P_{i-2})$    
  $\displaystyle \vdots$    
  $\displaystyle = \left( \frac{1-p}{p}\right)^{i} (P_1-P_{0})$    
  $\displaystyle = \left( \frac{1-p}{p}\right)^{i}P_1$    

Sum from $ i=0$ to $ i=k-1$ to get

$\displaystyle P_k = \sum_{i=0}^{k-1} \left( \frac{1-p}{p}\right)^i P_1
$

or

$\displaystyle P_k = \frac{1-\{(1-p)/p\}^k}{1-\{(1-p)/p\}} P_1
$

For $ k=N$ we get

$\displaystyle 1=\frac{1-\{(1-p)/p\}^N}{1-\{(1-p)/p\}} P_1
$

so that

$\displaystyle P_k = \frac{1-\{(1-p)/p\}^k}{1-\{(1-p)/p\}^N}
$

Notice that if $ p = 1/2$ the geometric series formula does not apply (the ratio is 1). But for $ p = 1/2$ the telescoping sum gives

$\displaystyle P_k=kP_1
$

so

$\displaystyle P_k=\frac{k}{N} \, .
$
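
The closed form can be checked against the first-step recursion it came from. A sketch with the illustrative values p = 0.4 and N = 10 (any $ p\neq 1/2$ works):

```python
# Ruin probabilities: closed form vs. the first-step recursion.
p, N = 0.4, 10
r = (1 - p) / p   # the ratio (1-p)/p in the geometric sum

def P_win(k):
    # probability of reaching N before 0, starting from k
    return (1 - r ** k) / (1 - r ** N)

# Boundary conditions P_0 = 0 and P_N = 1 hold exactly.
print(P_win(0), P_win(N))

# The closed form satisfies P_i = (1-p) P_{i-1} + p P_{i+1} for 0 < i < N.
for i in range(1, N):
    assert abs(P_win(i) - ((1 - p) * P_win(i - 1) + p * P_win(i + 1))) < 1e-9

print(P_win(5))
# For p = 1/2 the formula degenerates (division by zero); use P_k = k/N there.
```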

Mean time in transient states

$\displaystyle {\bf P}= \left[\begin{array}{cccc}
\frac{1}{2} & \frac{1}{2} & 0 & 0
\\
\frac{1}{2} & \frac{1}{2} & 0 & 0
\\
\frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4}
\\
\frac{1}{4} & \frac{1}{4} & \frac{3}{8} & \frac{1}{8}
\end{array}\right]
$

States 3 and 4 are transient. Let $ m_{i,j}$ be the expected total number of visits to state $ j$ for chain started in $ i$.

For $ i=1$ or $ i=2$ and $ j=3$ or 4:

$\displaystyle m_{ij} = 0
$

For any $ i$ and $ j=1$ or $ j=2$ (the recurrent states are visited infinitely often)

$\displaystyle m_{ij} = \infty
$

For $ i,j \in \{3,4\}$ first step analysis:

$\displaystyle m_{3,3}$ $\displaystyle = 1+ \frac{1}{4} m_{3,3} + \frac{1}{4} m_{4,3}$    
$\displaystyle m_{3,4}$ $\displaystyle = 0 + \frac{1}{4} m_{3,4} + \frac{1}{4} m_{4,4}$    
$\displaystyle m_{4,3}$ $\displaystyle = 0 + \frac{3}{8} m_{3,3} + \frac{1}{8} m_{4,3}$    
$\displaystyle m_{4,4}$ $\displaystyle = 1 + \frac{3}{8} m_{3,4} + \frac{1}{8} m_{4,4}$    

In matrix form

$\displaystyle \left[\begin{array}{cc} m_{3,3} &m_{3,4} \\ \\ m_{4,3} &m_{4,4}\end{array}\right]
= \left[\begin{array}{cc} 1 & 0 \\ \\ 0 & 1 \end{array}\right]
+ \left[\begin{array}{cc} \frac{1}{4} & \frac{1}{4} \\ \\ \frac{3}{8} & \frac{1}{8} \end{array}\right]
\left[\begin{array}{cc} m_{3,3} &m_{3,4} \\ \\ m_{4,3} &m_{4,4}\end{array}\right]
$

Translate to matrix notation:

$\displaystyle {\bf M} = {\bf I} + {\bf P}_T {\bf M}
$

where $ \bf I$ is the identity, $ \bf M$ is the matrix of means and $ {\bf P}_T$ the part of the transition matrix corresponding to transient states.

Solution is

$\displaystyle {\bf M} = ({\bf I} - {\bf P}_T)^{-1}
$

In our case

$\displaystyle {\bf I} - {\bf P}_T =
\left[\begin{array}{rr} \frac{3}{4} & -\frac{1}{4} \\  \\
-\frac{3}{8} & \frac{7}{8}
\end{array}\right]
$

so that

$\displaystyle {\bf M} = \left[\begin{array}{rr} \frac{14}{9} & \frac{4}{9} \\  \\
\frac{2}{3} & \frac{4}{3}
\end{array}\right]
$
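
The inverse can be verified with exact rational arithmetic, using the 2x2 adjugate formula; a check, not part of the original notes:

```python
from fractions import Fraction as F

# Invert I - P_T for the transient block via the 2x2 adjugate formula.
PT = [[F(1, 4), F(1, 4)], [F(3, 8), F(1, 8)]]
A = [[1 - PT[0][0], -PT[0][1]],
     [-PT[1][0], 1 - PT[1][1]]]          # A = I - P_T
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
M = [[A[1][1] / det, -A[0][1] / det],
     [-A[1][0] / det, A[0][0] / det]]    # M = A^{-1}

# M = [[14/9, 4/9], [2/3, 4/3]], matching the notes.
print(M)

# Check the first-step identity M = I + P_T M entrywise.
for i in range(2):
    for j in range(2):
        assert M[i][j] == (1 if i == j else 0) + sum(
            PT[i][k] * M[k][j] for k in range(2))
```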


Richard Lockhart
2002-02-07