
STAT 801: Mathematical Statistics

Unbiased Tests

Definition: A test $ \phi$ of $ \Theta_0$ against $ \Theta_1$ is unbiased level $ \alpha$ if it has level $ \alpha$ and, for every $ \theta\in\Theta_1$ we have

$\displaystyle \pi(\theta) \ge \alpha \, .
$
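
For example, a one-sided $ z$ test used against a two-sided alternative is not unbiased: its power drops below $ \alpha$ on the other side of $ \mu_0$. A minimal numerical sketch in Python (numpy/scipy assumed; the values of $ n$, $ \alpha$ and the alternatives are arbitrary illustrations):

    # Power of the one-sided test "reject if sqrt(n)*(xbar - mu0) > z_alpha"
    # for N(mu,1) data: pi(mu) = P(Z > z_alpha - sqrt(n)*(mu - mu0)).
    import numpy as np
    from scipy.stats import norm

    n, alpha, mu0 = 25, 0.05, 0.0
    z_a = norm.ppf(1 - alpha)

    def power(mu):
        return norm.sf(z_a - np.sqrt(n) * (mu - mu0))

    print(power(mu0))        # 0.05: level alpha at the null
    print(power(mu0 + 0.5))  # well above alpha for mu > mu0
    print(power(mu0 - 0.5))  # far below alpha for mu < mu0: biased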

When testing a point null hypothesis like $ \mu=\mu_0$ this requires that the power function have its minimum at $ \mu_0$, which means that if $ \pi$ is differentiable then

$\displaystyle \pi^\prime(\mu_0) =0
$

Example: $ N(\mu,1)$ with data $ X=(X_1,\ldots,X_n)$. If $ \phi$ is any test function, then

$\displaystyle \pi^\prime(\mu) = \frac{\partial}{\partial\mu} \int \phi(x) f(x,\mu) dx
$

Differentiate under the integral and use

$\displaystyle \frac{\partial f(x,\mu)}{\partial\mu} = \sum(x_i-\mu) f(x,\mu)
$

to get

$\displaystyle \pi^\prime(\mu) = n \int \phi(x) (\bar{x}-\mu) f(x,\mu) dx \, .
$

Setting $ \pi^\prime(\mu_0)=0$ and using the level constraint $ E_{\mu_0}(\phi(X)) = \alpha_0$ gives the condition

$\displaystyle \int \phi(x) \bar{x} f(x,\mu_0) dx = \mu_0 \alpha_0
$
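
A Monte Carlo sketch of the identity $ \pi^\prime(\mu) = n E_\mu[(\bar{X}-\mu)\phi(X)]$ behind this condition, in Python (numpy/scipy assumed; the test cutoff $ c$ and the parameter values are arbitrary):

    # Compare a numerical derivative of the power of phi = 1(Xbar > c)
    # with the Monte Carlo average n * E_mu[(Xbar - mu) * phi(X)].
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(801)
    n, c, mu = 25, 0.3, 0.1

    def power(m):                      # Xbar ~ N(m, 1/n), so pi(m) = P(Xbar > c)
        return norm.sf(np.sqrt(n) * (c - m))

    eps = 1e-5
    deriv = (power(mu + eps) - power(mu - eps)) / (2 * eps)

    xbar = rng.normal(mu, 1 / np.sqrt(n), size=1_000_000)
    mc = n * np.mean((xbar - mu) * (xbar > c))

    print(deriv, mc)                   # agree up to Monte Carlo error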

Now fix an alternative $ \mu_1\neq\mu_0$ and minimize $ \beta$, the type II error probability at $ \mu_1$, subject to the two constraints

$\displaystyle E_{\mu_0}(\phi(X)) = \alpha_0
$

and

$\displaystyle E_{\mu_0}(\bar{X} \phi(X)) = \mu_0 \alpha_0.
$

Fix two values $ \lambda_1>0$ and $ \lambda_2$ and minimize

$\displaystyle \lambda_1\alpha + \lambda_2 E_{\mu_0}[(\bar{X} - \mu_0)\phi(X)] + \beta
$

The quantity in question, writing $ f_i(x)=f(x,\mu_i)$, is just

$\displaystyle \int [\phi(x) f_0(x)(\lambda_1+\lambda_2(\bar{x} - \mu_0)) +
(1-\phi(x))f_1(x)] dx \, .
$

As before this is minimized by

$\displaystyle \phi(x) =\left\{\begin{array}{ll}
1 & \frac{f_1(x)}{f_0(x)} > \lambda_1+\lambda_2(\bar{x} - \mu_0) \\
0 & \frac{f_1(x)}{f_0(x)} < \lambda_1+\lambda_2(\bar{x} - \mu_0)
\end{array}\right.
$

The likelihood ratio $ f_1/f_0$ is simply

$\displaystyle \exp\{ n(\mu_1-\mu_0)\bar{X} + n(\mu_0^2-\mu_1^2)/2\}
$

and this exceeds the linear function

$\displaystyle \lambda_1+\lambda_2(\bar{X} - \mu_0)
$

for all $ \bar{X}$ sufficiently large or small. That is,

$\displaystyle \lambda_1\alpha + \lambda_2 E_{\mu_0}[(\bar{X} - \mu_0)\phi(X)] + \beta
$

is minimized by a rejection region of the form

$\displaystyle \{\bar{X} > K_U\} \cup \{ \bar{X} < K_L\}
$
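
A small numerical sketch of the crossing argument (Python; the $ \lambda$'s and parameter values are arbitrary): the likelihood ratio is convex and increasing in $ \bar{X}$, so it crosses any line at most twice, and the set where it exceeds the line is a union of two tails.

    # Find where the likelihood ratio exceeds the line lambda1 + lambda2*(xbar - mu0).
    import numpy as np

    n, mu0, mu1 = 10, 0.0, 0.5
    lam1, lam2 = 2.0, 1.0

    xbar = np.linspace(-2, 2, 2001)
    lr = np.exp(n * (mu1 - mu0) * xbar + n * (mu0**2 - mu1**2) / 2)
    line = lam1 + lam2 * (xbar - mu0)

    reject = lr > line                 # True in the two tails
    switches = xbar[1:][np.diff(reject.astype(int)) != 0]
    print(switches)                    # the two crossing points K_L and K_U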

Now satisfy the constraints: adjust $ K_U$ and $ K_L$ to get level $ \alpha$ and $ \pi^\prime(\mu_0) = 0$. The second condition forces the rejection region to be symmetric about $ \mu_0$, so the test rejects when

$\displaystyle \sqrt{n}\vert\bar{X} - \mu_0\vert > z_{\alpha/2}
$

Mimic the proof of the Neyman-Pearson lemma to check that if $ \lambda_1$ and $ \lambda_2$ are adjusted so that the unconstrained problem has the rejection region given above, then the resulting test minimizes $ \beta$ subject to the two constraints.

A test $ \phi^*$ is a Uniformly Most Powerful Unbiased level $ \alpha_0$ test if

  1. $ \phi^*$ has level $ \alpha \le \alpha_0$.

  2. $ \phi^*$ is unbiased.

  3. If $ \phi$ has level $ \alpha \le \alpha_0$ and $ \phi$ is unbiased then for every $ \theta\in\Theta_1$ we have

    $\displaystyle E_\theta(\phi(X)) \le E_\theta(\phi^*(X))
$

Conclusion: the two-sided $ z$ test, which rejects if

$\displaystyle \vert Z\vert > z_{\alpha/2}
$

where

$\displaystyle Z=n^{1/2}(\bar{X} -\mu_0)
$

is the uniformly most powerful unbiased test of $ \mu=\mu_0$ against the two-sided alternative $ \mu\neq\mu_0$.
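
The power function of this test has a closed form, and its unbiasedness can be checked directly; a sketch in Python (numpy/scipy assumed; arbitrary $ n$ and $ \alpha$):

    # pi(mu) = P(|Z| > z_{alpha/2}) with Z ~ N(sqrt(n)(mu - mu0), 1):
    # check that the minimum is alpha, attained at mu0.
    import numpy as np
    from scipy.stats import norm

    n, alpha, mu0 = 25, 0.05, 0.0
    z = norm.ppf(1 - alpha / 2)

    def power(mu):
        d = np.sqrt(n) * (mu - mu0)
        return norm.sf(z - d) + norm.cdf(-z - d)

    grid = np.linspace(mu0 - 1, mu0 + 1, 401)
    p = power(grid)
    print(power(mu0))                  # exactly alpha
    print(p.min(), grid[p.argmin()])   # minimum is alpha, at mu = mu0
    print((p >= alpha - 1e-12).all())  # True: unbiased on the grid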

Nuisance Parameters

The $ t$-test is UMPU.

Suppose $ X_1,\ldots,X_n$ are iid $ N(\mu,\sigma^2)$. Test $ \mu=\mu_0$ or $ \mu \le \mu_0$ against $ \mu>\mu_0$. The parameter space is two-dimensional; the boundary between the null and alternative is

$\displaystyle \{(\mu,\sigma); \mu=\mu_0,\sigma>0\}
$

If a test has $ \pi(\mu,\sigma) \le \alpha$ for all $ \mu \le \mu_0$ and $ \pi(\mu,\sigma) \ge \alpha$ for all $ \mu>\mu_0$, then $ \pi(\mu_0,\sigma) =\alpha$ for all $ \sigma$, because the power function of any test is continuous. (This uses the dominated convergence theorem; the power function is an integral.)
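
A simulation sketch of this boundary fact for the one-sided $ t$ test discussed below (Python; $ n$, $ \alpha$ and the $ \sigma$ values are arbitrary): the rejection probability at $ \mu=\mu_0$ is $ \alpha$ for every $ \sigma$.

    # Null rejection rate of the t test does not depend on sigma.
    import numpy as np
    from scipy.stats import t as tdist

    rng = np.random.default_rng(801)
    n, alpha, mu0 = 10, 0.05, 0.0
    tcrit = tdist.ppf(1 - alpha, n - 1)

    for sigma in (0.1, 1.0, 10.0):
        x = rng.normal(mu0, sigma, size=(200_000, n))
        T = np.sqrt(n) * (x.mean(axis=1) - mu0) / x.std(axis=1, ddof=1)
        print(sigma, np.mean(T > tcrit))   # all close to alpha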

Think of $ \{(\mu,\sigma);\mu=\mu_0\}$ as parameter space for a model. For this parameter space

$\displaystyle S=\sum(X_i-\mu_0)^2
$

is complete and sufficient. Remember that the definitions of both completeness and sufficiency depend on the parameter space.

Suppose $ \phi(\sum X_i, S)$ is an unbiased level $ \alpha$ test. Then we have

$\displaystyle E_{\mu_0,\sigma}(\phi(\sum X_i, S)) = \alpha
$

for all $ \sigma$. Condition on $ S$ and get

$\displaystyle E_{\mu_0,\sigma}[E(\phi(\sum X_i, S)\vert S)] = \alpha
$

for all $ \sigma$. Sufficiency guarantees that

$\displaystyle g(S) = E(\phi(\sum X_i, S)\vert S)
$

is a statistic (it does not depend on $ \sigma$) and completeness guarantees that

$\displaystyle g(S) \equiv \alpha
$
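
For the $ t$ test derived below this can be seen directly: its rejection rate given $ S$ is $ \alpha$ whatever the value of $ S$. A simulation sketch (Python; sample sizes and seed are arbitrary) that bins on $ S$:

    # Conditional rejection rate of the t test given S = sum (X_i - mu0)^2,
    # estimated within quartile bins of S: each is close to alpha.
    import numpy as np
    from scipy.stats import t as tdist

    rng = np.random.default_rng(801)
    n, alpha, mu0 = 10, 0.05, 0.0
    tcrit = tdist.ppf(1 - alpha, n - 1)

    x = rng.normal(mu0, 1.0, size=(400_000, n))
    S = ((x - mu0) ** 2).sum(axis=1)
    T = np.sqrt(n) * (x.mean(axis=1) - mu0) / x.std(axis=1, ddof=1)
    reject = T > tcrit

    qs = np.quantile(S, [0, 0.25, 0.5, 0.75, 1])
    for lo, hi in zip(qs[:-1], qs[1:]):
        sel = (S >= lo) & (S <= hi)
        print(round(lo, 1), round(hi, 1), reject[sel].mean())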

Now let us fix a single value of $ \sigma$ and a $ \mu_1>\mu_0$. To make our notation simpler I take $ \mu_0=0$. Our observations above permit us to condition on $ S=s$. Given $ S=s$ we have a level $ \alpha$ test which is a function of $ \bar{X}$.

If we maximize the conditional power of this test for each $ s$ then we will maximize its power. What is the conditional model given $ S=s$? That is, what is the conditional distribution of $ \bar{X}$ given $ S=s$? The answer is that the joint density of $ \bar{X},S$ is of the form

$\displaystyle f_{\bar{X},S}(t,s) = h(s,t) \exp\{\theta_1 t + \theta_2 s +c(\theta_1,\theta_2)\}
$

where $ \theta_1=n\mu/\sigma^2$ and $ \theta_2 = -1/(2\sigma^2)$.
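
To check the exponent, write (with $ \mu_0=0$, $ t=\bar{x}$ and $ s=\sum x_i^2$)

$\displaystyle -\frac{1}{2\sigma^2}\sum(x_i-\mu)^2
= \frac{n\mu}{\sigma^2}\bar{x} - \frac{1}{2\sigma^2}\sum x_i^2 - \frac{n\mu^2}{2\sigma^2}
= \theta_1 t + \theta_2 s - \frac{n\mu^2}{2\sigma^2} \, .
$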

This makes the conditional density of $ \bar{X}$ given $ S=s$ of the form

$\displaystyle f_{\bar{X}\vert s}(t\vert s) = h(s,t)\exp\{\theta_1 t+c^*(\theta_1,s)\}
$

Note the disappearance of $ \theta_2$; the null hypothesis is now $ \theta_1=0$. This permits application of the Neyman-Pearson lemma to the conditional family to prove that the UMP unbiased test has the form

$\displaystyle \phi(\bar{X},S) = 1(\bar{X} > K(S))
$

where $ K(S)$ is chosen to make the conditional level $ \alpha$. The function $ x\mapsto x/\sqrt{a-x^2}$ is increasing in $ x$ for each $ a>0$, so we can rewrite $ \phi$ in the form

$\displaystyle \phi(\bar{X},S) =
1(n^{1/2}\bar{X}/\sqrt{n[S/n-\bar{X}^2]/(n-1)} > K^*(S))
$

for some $ K^*$. The quantity

$\displaystyle T=\frac{n^{1/2}\bar{X}}{\sqrt{n[S/n-\bar{X}^2]/(n-1)}}
$

is the usual $ t$ statistic and, under the null, is exactly independent of $ S$ (see Theorem 6.1.5 on page 262 in Casella and Berger). This guarantees that

$\displaystyle K^*(S) = t_{n-1,\alpha}
$

and makes our UMPU test the usual $ t$ test.
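
As a closing check, the power of this test can be computed from the noncentral $ t$ distribution; a sketch in Python (numpy/scipy assumed, arbitrary parameter values):

    # pi(mu, sigma) = P(T > t_{n-1,alpha}) with T noncentral t,
    # noncentrality delta = sqrt(n)(mu - mu0)/sigma.
    import numpy as np
    from scipy.stats import t as tdist, nct

    n, alpha, mu0 = 10, 0.05, 0.0
    tcrit = tdist.ppf(1 - alpha, n - 1)

    for mu, sigma in [(-0.5, 1), (0.0, 1), (0.0, 3), (0.5, 1), (0.5, 3)]:
        delta = np.sqrt(n) * (mu - mu0) / sigma
        print(mu, sigma, nct.sf(tcrit, n - 1, delta))
    # exactly alpha on the boundary mu = mu0 for every sigma;
    # above alpha for mu > mu0 and below for mu < mu0: unbiased.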
