Hypothesis testing is a statistical problem where you must choose,
on the basis of data $X$, between two alternatives. We formalize this
as the problem of choosing between two hypotheses: $H_0: \theta \in \Theta_0$
or $H_1: \theta \in \Theta_1$, where $\Theta_0$ and $\Theta_1$ are a partition
of the model $\Theta$. That is, $\Theta_0 \cup \Theta_1 = \Theta$ and
$\Theta_0 \cap \Theta_1 = \emptyset$.
A rule for making the required choice can be described in two ways:

1. as a subset $R$ of the sample space, called the rejection (or critical) region: we choose $H_1$ exactly when $X \in R$; or

2. as a function $\phi(x)$ taking only the values 0 and 1: we choose $H_1$ exactly when $\phi(X) = 1$.

For technical reasons which will come up soon I prefer to use the
second description. However, each $\phi$ corresponds to a
unique rejection region $R_\phi = \{x : \phi(x) = 1\}$.
The Neyman–Pearson approach to hypothesis testing, which we consider first, treats the two hypotheses asymmetrically. The hypothesis $H_0$ is referred to as the null hypothesis (because traditionally it has been the hypothesis that some treatment has no effect).
Definition: The power function of a test $\phi$ (or of the corresponding
critical region $R_\phi$) is
$$\pi(\theta) = P_\theta(X \in R_\phi) = E_\theta(\phi(X)).$$
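Since the power function is just a probability computed under each value of the parameter, it can be evaluated directly in a discrete model. A minimal sketch, using a hypothetical Binomial$(10, p)$ model and the rejection region $R = \{x : x \ge 8\}$ (both choices are illustrative, not from the text):

```python
from math import comb

def power(p, n, R):
    """Power function pi(p) = P_p(X in R) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in R)

n = 10
R = set(range(8, n + 1))   # reject H0 when X >= 8 (illustrative choice)
print(power(0.5, n, R))    # level at the null p = 1/2: 56/1024 = 0.0546875
print(power(0.75, n, R))   # power at the alternative p = 3/4 is much larger
```

The value is small at the null $p = 1/2$ and much larger at the alternative $p = 3/4$, which is exactly the behaviour a good test should have.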
We are interested here in optimality theory, that is,
the problem of finding the best $\phi$. A good $\phi$ will
evidently have $\pi(\theta)$ small for $\theta \in \Theta_0$
and large for $\theta \in \Theta_1$. There is generally a
trade-off, however, which can be made in many ways.
Finding a best test is easiest when the hypotheses are very precise.
Definition: A hypothesis $H_i$ is simple if $\Theta_i$ contains only a
single value $\theta_i$.
The simple versus simple testing problem arises when we test
$H_0: \theta = \theta_0$ against $H_1: \theta = \theta_1$, so that
$\Theta = \{\theta_0, \theta_1\}$ has only two points in it. This problem
is of importance as a technical tool, not because it is a realistic situation.
Suppose that the model specifies that if $\theta = \theta_0$ then
the density of $X$ is $f_0(x)$, and if $\theta = \theta_1$ then
the density of $X$ is $f_1(x)$. How should we choose $\phi$?
To answer the question we begin by studying the problem of
minimizing the total error probability.
We define a Type I error as the error made when $\theta = \theta_0$
but we choose $H_1$, that is, $X \in R_\phi$.
The other kind of error, made when $\theta = \theta_1$ but we choose
$H_0$, is called a Type II error. We define the level $\alpha$
of a simple versus simple test to be the probability of a Type I error,
$$\alpha = P_{\theta_0}(X \in R_\phi),$$
and we write $\beta$ for the probability of a Type II error,
$$\beta = P_{\theta_1}(X \notin R_\phi).$$
Suppose we want to minimize $\alpha + \beta$, the total error
probability. We want to minimize
$$P_{\theta_0}(X \in R_\phi) + P_{\theta_1}(X \notin R_\phi)
= \int_{R_\phi} f_0(x)\,dx + \int_{R_\phi^c} f_1(x)\,dx.$$
Theorem: For each fixed $\lambda \ge 0$ the
quantity $\lambda\alpha + \beta$ is minimized by any $\phi$ which has
$$\phi(x) = \begin{cases} 1 & f_1(x) > \lambda f_0(x) \\ 0 & f_1(x) < \lambda f_0(x). \end{cases}$$
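In a small discrete model the theorem can be verified by exhaustive search. A sketch under assumed values (Binomial$(4, p)$ with $p_0 = 1/2$, $p_1 = 3/4$, and $\lambda = 1$; all three are illustrative choices, not from the text): build the region $\{x : f_1(x) > \lambda f_0(x)\}$ and compare its $\lambda\alpha + \beta$ against every possible critical region.

```python
from itertools import chain, combinations
from math import comb

def pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p0, p1, lam = 4, 0.5, 0.75, 1.0   # illustrative values
xs = range(n + 1)
f0 = {x: pmf(x, n, p0) for x in xs}
f1 = {x: pmf(x, n, p1) for x in xs}

def cost(R):
    """lambda * alpha + beta for the critical region R."""
    alpha = sum(f0[x] for x in R)
    beta = sum(f1[x] for x in xs if x not in R)
    return lam * alpha + beta

# The region the theorem prescribes: include x whenever f1(x) > lam * f0(x).
R_np = {x for x in xs if f1[x] > lam * f0[x]}

# Exhaustive search over all 2^(n+1) subsets of the sample space.
subsets = chain.from_iterable(combinations(xs, r) for r in range(n + 2))
best = min(cost(set(R)) for R in subsets)
print(R_np, cost(R_np), best)  # the theorem's region attains the minimum
```

Here the likelihood-ratio region turns out to be $\{3, 4\}$, and no other subset of the sample space does better.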
Neyman and Pearson suggested that in practice the
two kinds of errors might well have unequal consequences.
They suggested that rather than minimize any quantity
of the form above, you pick the more serious kind of error, label it Type I, and require your rule to hold the
probability $\alpha$ of a Type I error to be no more than
some prespecified level $\alpha_0$.
(This value $\alpha_0$ is typically 0.05 these days, chiefly for historical reasons.)
The Neyman–Pearson approach is then to minimize $\beta$ subject to the constraint
$\alpha \le \alpha_0$.
Usually this is really equivalent to the constraint $\alpha = \alpha_0$,
because if you used $\alpha < \alpha_0$
you could make $R$ larger, keep $\alpha \le \alpha_0$, and
make $\beta$ smaller. For discrete models, however, this
may not be possible.
Example: Suppose $X$ is Binomial$(n, p)$ and
either $p = p_0 = 1/2$ or $p = p_1 = 3/4$. If $R$ is any critical region
(so $R$ is a subset of $\{0, 1, \ldots, n\}$)
then
$$P_{1/2}(X \in R) = \sum_{k \in R} \binom{n}{k} 2^{-n}$$
is an integer multiple of $2^{-n}$, so no critical region has level
exactly $\alpha_0 = 0.05$.
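Exact rational arithmetic makes the discreteness visible. A sketch taking $n = 5$ for illustration (the value is mine, not from the text): enumerate the level of every possible critical region and check that each is a multiple of $2^{-5} = 1/32$, so the level 0.05 is unattainable.

```python
from fractions import Fraction
from itertools import chain, combinations
from math import comb

n = 5
xs = range(n + 1)
# P_{1/2}(X = k) = C(n, k) / 2^n, kept exact with Fraction.
pmf = {k: Fraction(comb(n, k), 2**n) for k in xs}

subsets = chain.from_iterable(combinations(xs, r) for r in range(n + 2))
levels = sorted({sum((pmf[k] for k in R), Fraction(0)) for R in subsets})

print(levels[:4])                 # every attainable level is a multiple of 1/32
print(Fraction(1, 20) in levels)  # 0.05 = 1/20 is not attainable: False
```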
The problem in the example is one of discreteness. Here's how we get
around the problem. First we expand the set of possible values of $\phi$
to include numbers between 0 and 1. Values of $\phi(x)$ between
0 and 1 represent the chance that we choose $H_1$ given that we observe
$x$; the idea is that we actually toss a (biased) coin to decide!
This tactic will show us the kinds of rejection regions which are
sensible. In practice we then restrict our attention to levels $\alpha_0$
for which the best $\phi$ is always either 0 or 1. In the
binomial example we will insist that the value of $\alpha_0$ be either 0 or
$P_{1/2}(X = n)$ or $P_{1/2}(X \ge n - 1)$ or ...
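A short sketch of the coin-tossing construction, with illustrative choices $n = 5$, $p_0 = 1/2$, and $\alpha_0 = 0.05$ (the specific values are mine): reject outright for the largest values of $x$ whose combined null probability fits under $\alpha_0$, then randomize on the next value down so that the level is exactly $\alpha_0$.

```python
from math import comb

def pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p0, alpha0 = 5, 0.5, 0.05      # illustrative values

phi = {x: 0.0 for x in range(n + 1)}
spent, x = 0.0, n
# Reject outright at the largest x's while their null probability fits...
while spent + pmf(x, n, p0) <= alpha0:
    phi[x] = 1.0
    spent += pmf(x, n, p0)
    x -= 1
# ...then randomize at the next point down to spend the rest of alpha0.
phi[x] = (alpha0 - spent) / pmf(x, n, p0)

level = sum(phi[k] * pmf(k, n, p0) for k in range(n + 1))
print(phi)    # phi(5) = 1, phi(4) = 0.12 (up to rounding), else 0
print(level)  # approximately 0.05, the target level
```

Here the test rejects outright at $x = 5$ and tosses a coin with success probability 0.12 at $x = 4$; no non-randomized test attains level 0.05 exactly in this model.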
Definition: A hypothesis test is a function $\phi(x)$
whose values are always in $[0, 1]$. If we observe $X = x$ then we
choose $H_1$ with conditional probability $\phi(x)$.
In this case we have
$$\pi(\theta) = E_\theta(\phi(X)),$$
with level $\alpha = E_{\theta_0}(\phi(X))$ and Type II error probability
$\beta = E_{\theta_1}(1 - \phi(X)).$