STAT 801: Mathematical Statistics

Statistical Inference

Definition: A model is a family $ \{ P_\theta; \theta \in \Theta\}$ of possible distributions for some random variable $ X$. (Our data set is $ X$, so $ X$ will generally be a big vector or matrix or even more complicated object.)

We will assume throughout this course that the true distribution $ P$ of $ X$ is in fact $ P_{\theta_0}$ for some $ \theta_0 \in \Theta$. We call $ \theta_0$ the true value of the parameter. Notice that in practice this assumption will be wrong; we hope it is not wrong in an important way. If we are very worried that it is wrong, we enlarge our model, putting in more distributions and making $ \Theta$ bigger.
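
As a standard illustration (a concrete choice of model, not a definition from these notes): if the data are $ X=(X_1,\ldots,X_n)$ with the $ X_i$ independent $ N(\mu,\sigma^2)$ variables, we may take $ \theta=(\mu,\sigma^2)$, $ \Theta = {\mathbb R}\times(0,\infty)$, and $ P_\theta$ the corresponding joint distribution; the true value $ \theta_0=(\mu_0,\sigma_0^2)$ is the particular pair that actually generated the data.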

Our goal is to observe the value of $ X$ and then guess $ \theta_0$ or some property of $ \theta_0$. We will consider the following classic mathematical formulations of this problem (a short illustrative sketch in code follows the list):

  1. Point estimation: we must compute an estimate $ \hat\theta = \hat\theta(X)$ which lies in $ \Theta$ (or something close to $ \Theta$).

  2. Point estimation of a function of $ \theta$: we must compute an estimate $ \hat\phi = \hat\phi(X)$ of $ \phi=g(\theta)$.

  3. Interval (or set) estimation: we must compute a set $ C=C(X)$ in $ \Theta$ which we think will contain $ \theta_0$.

  4. Hypothesis testing: we must choose between $ \theta_0\in\Theta_0$ and $ \theta_0\not\in\Theta_0$, where $ \Theta_0 \subset \Theta$.

  5. Prediction: we must guess the value of an observable random variable $ Y$ whose distribution depends on $ \theta_0$. Typically $ Y$ is the value of the variable $ X$ in a repetition of the experiment.
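
To make these five problems concrete, here is a minimal sketch in Python (not part of the notes), using numpy and scipy with a normal model chosen purely for illustration; the sample size, the cutoff 3.0 in item 2, and the null value $ \mu=0$ in item 4 are arbitrary choices.

import numpy as np
from scipy import stats

# Simulated data set X; in the notation above, theta = (mu, sigma^2)
# and the "true value" theta_0 = (1.0, 4.0) is what we pretend not to know.
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=50)
n = len(x)

# 1. Point estimation: maximum likelihood estimate of theta = (mu, sigma^2).
mu_hat = x.mean()
sigma2_hat = x.var()            # MLE of sigma^2 uses divisor n, not n - 1

# 2. Point estimation of a function of theta, say g(theta) = P_theta(X_i > 3).
phi_hat = 1.0 - stats.norm.cdf(3.0, loc=mu_hat, scale=np.sqrt(sigma2_hat))

# 3. Interval estimation: a 95% t confidence interval for mu.
se = x.std(ddof=1) / np.sqrt(n)
ci = stats.t.interval(0.95, df=n - 1, loc=mu_hat, scale=se)

# 4. Hypothesis testing: H_0: mu = 0 against H_1: mu != 0.
t_stat, p_value = stats.ttest_1samp(x, popmean=0.0)

# 5. Prediction: a point prediction for a future observation Y from the same model.
y_pred = mu_hat

print(mu_hat, sigma2_hat, phi_hat, ci, t_stat, p_value, y_pred)

Each numbered comment corresponds to the matching item in the list above; the interest in this course is in how to construct and evaluate such procedures, not in the computing.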

There are several schools of statistical thinking. In this course we use the Neyman-Pearson approach to evaluate the quality of likelihood and other methods.



Richard Lockhart
2001-01-03