
STAT 450

Lecture 25


Reading for Today's Lecture:

Goals of Today's Lecture:

Completeness and Sufficiency

Example: $Y_1,\ldots,Y_n$ iid Bernoulli(p), $\sum Y_i = X \sim$ Binomial(n,p):

Sufficiency

Notice: the conditional distribution of $Y_1,\ldots,Y_n$ given X is the same for all values of the parameter; we say the conditional distribution is free of $\theta$. (The computation is in Example 1 below.)

Definition: A statistic T(X) is sufficient for the model $\{ P_\theta;\theta \in \Theta\}$ if the conditional distribution of the data X given T=t is free of $\theta$.

Intuition: Why do the data tell us about $\theta$? Because different values of $\theta$ give different distributions to X. If two different values of $\theta$ correspond to the same joint density or cdf for X then we cannot, even in principle, distinguish these two values of $\theta$ by examining X. We extend this notion as follows: if two values of $\theta$ give the same conditional distribution of X given T, then observing T in addition to X does not improve our ability to distinguish the two values.

Mathematically precise version of this intuition: If T(X) is a sufficient statistic then we can do the following. Suppose S(X) is any estimate or confidence interval or whatever for a given problem, but we only know the value of T. Then:

1.
Generate new data X* from the conditional distribution of the data given T. (This is possible because that conditional distribution is known; it is free of $\theta$.)

2.
Use S(X*) in place of S(X); since X* has the same distribution as X, the procedure S(X*) performs exactly as well as S(X).

You can carry out the first step only if the statistic T is sufficient; otherwise you need to know the true value of $\theta$ to generate X*.
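Here is a minimal simulation sketch of the two-step idea in the Bernoulli example, assuming $T = \sum Y_i$; the function names (regenerate_data, S) and the particular numbers are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)

def regenerate_data(t, n):
    """Step 1: draw X* from the conditional law of the data given T = t.

    For iid Bernoulli(p) data with T = sum of the Y's, this conditional law
    places the t successes on a uniformly chosen subset of the n positions;
    it does not involve p at all, which is exactly what sufficiency gives us.
    """
    x_star = np.zeros(n, dtype=int)
    x_star[rng.choice(n, size=t, replace=False)] = 1
    return x_star

def S(data):
    """Step 2: any procedure applied to a data set, e.g. the first observation."""
    return data[0]

# Original data from some unknown p, reduced to the sufficient statistic T
p, n = 0.3, 10
y = rng.binomial(1, p, size=n)
t = int(y.sum())

# S(X*) has the same distribution as S(Y); no knowledge of p was needed
print(S(regenerate_data(t, n)))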

Example 1: If $Y_1,\ldots,Y_n$ are iid Bernoulli(p) then, given $\sum Y_i = y$, the set of indices of the y successes is equally likely to be any one of the $\dbinom{n}{y}$ possible subsets of $\{1,\ldots,n\}$ of size y. This chance does not depend on p, so $T(Y_1,\ldots,Y_n) = \sum Y_i$ is a sufficient statistic.
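To see the claim directly, note that for any particular sequence $(y_1,\ldots,y_n)$ of zeros and ones with $\sum y_i = y$ we have

\begin{displaymath}P(Y_1=y_1,\ldots,Y_n=y_n\vert X = y)
= \frac{p^{y}(1-p)^{n-y}}{\dbinom{n}{y}p^{y}(1-p)^{n-y}}
= \frac{1}{\dbinom{n}{y}}
\end{displaymath}

which is free of p.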

Example 2: If $X_1,\ldots,X_n$ are iid $N(\mu,1)$ then the joint distribution of $X_1,\ldots,X_n,\overline{X}$ is multivariate normal with mean vector whose entries are all $\mu$ and variance-covariance matrix which can be partitioned as

\begin{displaymath}\left[\begin{array}{cc} I_{n \times n} & {\bf 1}_n /n
\\
{\bf 1}_n^t /n & 1/n \end{array}\right]
\end{displaymath}

where ${\bf 1}_n$ is a column vector of n 1s and $I_{n \times n}$ is an $n \times n$ identity matrix.

You can now compute the conditional means and variances of $X_i$ given $\overline{X}$ and use the fact that the conditional law is multivariate normal to prove that the conditional distribution of the data given $\overline{X} = x$ is multivariate normal with mean vector all of whose entries are x and variance-covariance matrix given by $I_{n\times n} - {\bf 1}_n{\bf 1}_n^t /n$. Since this does not depend on $\mu$ we find that $\overline{X}$ is sufficient.
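In detail, write $X = (X_1,\ldots,X_n)^t$. The usual formulas for conditioning in a partitioned multivariate normal vector (conditional mean $\mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x-\mu_2)$, conditional variance $\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$) give

\begin{align*}E(X\vert\overline{X}=x) & = \mu{\bf 1}_n + \frac{{\bf 1}_n}{n}\left(\frac{1}{n}\right)^{-1}(x-\mu) = x{\bf 1}_n
\\
{\rm Var}(X\vert\overline{X}=x) & = I_{n\times n} - \frac{{\bf 1}_n}{n}\left(\frac{1}{n}\right)^{-1}\frac{{\bf 1}_n^t}{n} = I_{n\times n} - \frac{{\bf 1}_n{\bf 1}_n^t}{n}
\end{align*}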

WARNING: Whether or not a statistic is sufficient depends on the density function and on $\Theta$.

Rao Blackwell Theorem

Theorem: Suppose that S(X) is a sufficient statistic for some model $\{P_\theta,\theta\in\Theta\}$. If T is an estimate of some parameter $\phi(\theta)$ then:

1.
E(T|S) is a statistic.

2.
E(T|S) has the same bias as T; if T is unbiased so is E(T|S).

3.
${\rm Var}_\theta(E(T\vert S)) \le {\rm Var}_\theta(T)$ and the inequality is strict unless T is a function of S.

4.
The MSE of E(T|S) is no more than that of T.

Proof: It will be useful to review conditional distributions a bit more carefully at this point. The abstract definition of conditional expectation is this:

Definition: E(Y|X) is any function of X such that

\begin{displaymath}E\left[R(X)E(Y\vert X)\right] = E\left[R(X) Y\right]
\end{displaymath}

for any function R(X).

Definition: E(Y|X=x) is a function g(x) such that

g(X) = E(Y|X)

Fact: If X and Y have joint density $f_{X,Y}(x,y)$ and conditional density $f(y\vert x)$ then

\begin{displaymath}g(x) = \int y f(y\vert x) dy
\end{displaymath}

satisfies these definitions.
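For example, if $f_{X,Y}(x,y) = x+y$ on the unit square then $f_X(x) = x + 1/2$ and

\begin{displaymath}E(Y\vert X=x) = g(x) = \int_0^1 y \frac{x+y}{x+1/2} dy
= \frac{x/2+1/3}{x+1/2} = \frac{3x+2}{3(2x+1)}
\end{displaymath}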

Proof:
\begin{align*}E(R(X)g(X)) & = \int R(x) g(x)f_X(x) dx
\\
& = \int\int R(x) y f(y\vert x) f_X(x) dy dx
\\
&= \int\int R(x)y f_{X,Y}(x,y) dy dx
\\
&= E(R(X)Y)
\end{align*}

You should simply think of E(Y|X) as what you get when you average Y holding X fixed. It behaves like an ordinary expected value, except that functions of X alone are treated as constants.

Proof of the Rao Blackwell Theorem

Step 1: The definition of sufficiency is that the conditional distribution of X given S does not depend on $\theta$. This means that E(T(X)|S) does not depend on $\theta$; it can be computed from the data alone, so it is a statistic.

Step 2: This step hinges on the following identity (called Adam's law by Jerzy Neyman, who used to say it comes before all the others):

E[E(Y|X)] = E(Y)

which is just the definition of E(Y|X) with $R(X) \equiv 1$.

From this we deduce that

\begin{displaymath}E_\theta[E(T\vert S)] = E_\theta(T)
\end{displaymath}

so that E(T|S) and T have the same bias. If T is unbiased then

\begin{displaymath}E_\theta[E(T\vert S)] = E_\theta(T) = \phi(\theta)
\end{displaymath}

so that E(T|S) is unbiased for $\phi$.
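For instance, in the Bernoulli example above, symmetry gives $E(Y_1\vert X) = X/n$, and indeed $E[E(Y_1\vert X)] = E(X)/n = p = E(Y_1)$.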

Step 3: This relies on the following very useful decomposition. (In regression courses we say that the total sum of squares is the sum of the regression sum of squares plus the residual sum of squares.)

\begin{displaymath}{\rm Var(Y)} = {\rm Var}(E(Y\vert X)) + E[{\rm Var}(Y\vert X)]
\end{displaymath}

Here the conditional variance is defined by

\begin{displaymath}{\rm Var}(Y\vert X) = E[ (Y-E(Y\vert X))^2\vert X]
\end{displaymath}

This identity is just a matter of expanding the squares on the right hand side:

\begin{displaymath}{\rm Var}(E(Y\vert X)) =E[(E(Y\vert X)-E[E(Y\vert X)])^2] = E[(E(Y\vert X)-E(Y))^2]
\end{displaymath}

and

\begin{displaymath}E[{\rm Var}(Y\vert X)] = E[(Y-E(Y\vert X))^2]
\end{displaymath}

Adding these together gives

E[Y^2 - 2YE[Y|X] + 2(E[Y|X])^2 - 2E(Y)E[Y|X] + E^2(Y)]

The middle term actually simplifies. First, remember that E(Y|X) is a function of X so can be treated as a constant when holding X fixed. This means

E[Y|X]E[Y|X] = E[YE(Y|X)|X]

and taking expectations gives

E[(E[Y|X])^2] = E[E[YE(Y|X)|X]] = E[YE(Y|X)]

This makes the middle term above cancel with the second term. Moreover the fourth term simplifies

E[E(Y)E[Y|X]] = E(Y) E[E[Y|X]] = E^2(Y)

so that

\begin{displaymath}{\rm Var}(E(Y\vert X)) + E[{\rm Var}(Y\vert X)] = E[Y^2] - E^2(Y) = {\rm Var}(Y)
\end{displaymath}
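Here is a quick Monte Carlo sanity check of the decomposition in a toy model; the model $Y = X + \varepsilon$ and all numbers below are illustrative choices only.

import numpy as np

rng = np.random.default_rng(1)
m = 1_000_000

# Toy model: X ~ N(0,1) and Y = X + eps, with eps independent N(0, 0.5^2),
# so E(Y|X) = X and Var(Y|X) = 0.25 for every value of X.
x = rng.normal(size=m)
y = x + rng.normal(scale=0.5, size=m)

print(y.var())         # Var(Y), close to 1.25
print(x.var() + 0.25)  # Var(E(Y|X)) + E[Var(Y|X)], also close to 1.25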

We apply this to the Rao Blackwell theorem to get

\begin{displaymath}{\rm Var}_\theta(T) = {\rm Var}_\theta(E(T\vert S)) +
E[(T-E(T\vert S))^2]
\end{displaymath}

The second term is non-negative, so the variance of E(T|S) must be no more than that of T, and will be strictly less unless T = E(T|S), which would mean that T is already a function of S. Adding the squares of the biases of T (or of E(T|S), which is the same) gives the inequality for mean squared error.

Examples:

In the binomial problem $Y_1(1-Y_2)$ is an unbiased estimate of p(1-p) (by independence its expected value is $E(Y_1)E(1-Y_2) = p(1-p)$). We improve this by computing

E(Y_1(1-Y_2)|X)

We do this in two steps. First compute

E(Y_1(1-Y_2)|X=x)

Notice that the random variable $Y_1(1-Y_2)$ is either 1 or 0 so its expected value is just the probability it is equal to 1:
\begin{align*}E(Y_1(1-Y_2)\vert X=x) &= P(Y_1(1-Y_2) =1 \vert X=x)
\\
& = P(Y_1=1,Y_2=0 \vert X=x)
\\
& = \frac{P(Y_1=1,Y_2=0,Y_3+\cdots+Y_n=x-1)}{P(X=x)}
\\
& = \frac{p(1-p)\dbinom{n-2}{x-1}p^{x-1}(1-p)^{n-x-1}}{\dbinom{n}{x}p^x(1-p)^{n-x}}
\\
& = \frac{\dbinom{n-2}{x-1}}{\dbinom{n}{x}}
\\
& = \frac{x(n-x)}{n(n-1)}
\end{align*}
This is simply $n\hat p(1-\hat p)/(n-1)$ where $\hat p = x/n$; note that this can exceed 1/4, the maximum value of p(1-p).
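A small numerical check of this example; the sample sizes, the value of p and the helper name rao_blackwellized below are my own illustrative choices.

import numpy as np
from math import comb

rng = np.random.default_rng(2)
n, p, reps = 10, 0.3, 200_000

def rao_blackwellized(x, n):
    """The closed form x(n-x)/(n(n-1)) for E(Y_1(1-Y_2) | X = x) derived above."""
    return x * (n - x) / (n * (n - 1))

# The binomial-coefficient ratio in the derivation agrees with the closed form
x = 4
print(comb(n - 2, x - 1) / comb(n, x), rao_blackwellized(x, n))

# Monte Carlo comparison of the two unbiased estimates of p(1-p)
y = rng.binomial(1, p, size=(reps, n))
crude = y[:, 0] * (1 - y[:, 1])                 # Y_1(1 - Y_2)
improved = rao_blackwellized(y.sum(axis=1), n)  # E(Y_1(1-Y_2) | X)

print(crude.mean(), improved.mean())  # both near p(1-p) = 0.21
print(crude.var(), improved.var())    # the Rao-Blackwellized estimate has smaller variance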

Example: If $X_1,\ldots,X_n$ are iid $N(\mu,1)$ then $\bar{X}$ is sufficient and $X_1$ is an unbiased estimate of $\mu$. Now
\begin{align*}E(X_1\vert\bar{X})& = E[X_1-\bar{X}+\bar{X}\vert\bar{X}]
\\
& = E[X_1-\bar{X}\vert\bar{X}] + \bar{X}
\\
& = \bar{X}
\end{align*}
which is the UMVUE of $\mu$.
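The middle step uses the fact that $X_1-\bar{X}$ and $\bar{X}$ are jointly normal and uncorrelated, hence independent, so that $E[X_1-\bar{X}\vert\bar{X}] = E[X_1-\bar{X}] = 0$:

\begin{displaymath}{\rm Cov}(X_1-\bar{X},\bar{X}) = {\rm Cov}(X_1,\bar{X}) - {\rm Var}(\bar{X}) = \frac{1}{n} - \frac{1}{n} = 0
\end{displaymath}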



Richard Lockhart
1999-11-12