For many biological questions, the concept of an underlying 'true' point
value for a parameter as a premise for statistical inference is somewhat
artificial.
Bayesian methods relax that premise and have some advantages
over maximum-likelihood (frequentist) approaches.
For example, Bayesian methods permit an analyst to conceptualize
that the biological parameter of interest, say a survival rate,
is variable over time or space within the population from which
data were collected to estimate it.
That is, the parameter is conceptualized as a random variable rather
than as a fixed value.
Additionally, Bayesian methods allow an analyst to bring subjective
beliefs or objective uncertainties about parameter values, expressed as
so-called prior distributions, into an analysis.
The likelihood function is still an element of the Bayesian approach,
but it is used to combine the information on parameter values contained
in the data with the prior information on the parameters that is
independent of the data, producing parameter estimates associated
with the so-called posterior distributions of the parameter values.
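As a concrete illustration of that combination (a sketch, not an example drawn
from the text), suppose a survival rate is given a conjugate Beta prior and the
data follow a binomial likelihood; the uniform Beta(1, 1) prior, the sample size
of 40 marked animals, and the 30 survivors below are all hypothetical values
chosen only for demonstration.

```python
# Minimal sketch: conjugate Bayesian update for a survival rate s.
# All numbers are hypothetical; Beta(1, 1) is a uniform prior on [0, 1].
from scipy import stats

prior_a, prior_b = 1.0, 1.0          # prior distribution for s
n_marked, n_survived = 40, 30        # hypothetical mark-recapture data

# With a binomial likelihood, the Beta prior is conjugate: the posterior
# is also Beta, with its parameters updated by the observed counts.
posterior = stats.beta(prior_a + n_survived,
                       prior_b + (n_marked - n_survived))

print(f"Posterior mean survival: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```

Under these assumptions the posterior is Beta(31, 11), and inference proceeds
directly from that distribution rather than from a single maximized value.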
Statistical inference now concerns the posterior probability distributions
of the parameter values of interest.
Statistical inference tends to be discussed more in terms of probability
than likelihood, since the intent of imposing a prior
distribution is to define a probability distribution over a parameter's
possible values.
This contrasts with the maximum-likelihood approach of judging
statistical confidence in estimates of the underlying, and perhaps
wrongly conceived, 'true' values of the parameters.
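To make that contrast concrete, the sketch below (again using the hypothetical
data from the earlier sketch, not values from the text) compares the
maximum-likelihood point estimate with probability statements taken directly
from the posterior distribution.

```python
# Minimal sketch contrasting the two kinds of statement for the same
# hypothetical data: 30 survivors of 40 marked animals.
from scipy import stats

n_marked, n_survived = 40, 30

# Maximum-likelihood: a point estimate of the presumed fixed 'true' rate.
mle = n_survived / n_marked

# Bayesian: probability statements about s itself, drawn from the posterior
# Beta(31, 11) obtained under the uniform prior in the earlier sketch.
posterior = stats.beta(1 + n_survived, 1 + n_marked - n_survived)
prob_above_0_7 = 1.0 - posterior.cdf(0.7)

print(f"MLE point estimate:          {mle:.3f}")
print(f"Posterior P(s > 0.7 | data): {prob_above_0_7:.3f}")
```

The first number is an estimate of a presumed fixed quantity; the second is a
direct probability statement about the parameter, of a kind the
maximum-likelihood framework does not provide.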
The appropriateness of maximum-likelihood versus Bayesian approaches to
statistical inference is an ongoing debate among practitioners.
Either or both approaches may be prescribed for a particular statistical
problem depending upon the analytical questions posed, the data
available, and the eventual use of the analytical results.