Maximum-likelihood estimates are typically asymptotically unbiased;
that is, their expected values converge to the unknown 'true' value
of a parameter (if that concept applies) as the quantity of observed
data (the sample size) increases.
However, we rarely have enough data to assume that this bias is
negligible.
A conceptual challenge in statistical inference is judging how much
bias we are willing to accept in exchange for reduced uncertainty.
Bias and parsimony are interconnected: as with parsimony, tools
such as AIC (or its variants AICc, QAIC, and QAICc) and BIC assist
investigators in formally selecting the 'best' model.
However, the 'best' model may still not be an adequate model if it
fails goodness-of-fit diagnostics or retrospective analysis.
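As a minimal sketch of how such criteria are computed (the
log-likelihood values and model names below are hypothetical, chosen
only to illustrate the parsimony penalty), each criterion combines a
model's maximized log-likelihood L with a penalty on the number of
estimated parameters k; the model with the smallest value is preferred:

    import numpy as np

    def aic(log_lik, k):
        """Akaike information criterion: AIC = -2 log L + 2 k."""
        return -2.0 * log_lik + 2.0 * k

    def aicc(log_lik, k, n):
        """Small-sample correction: AICc = AIC + 2 k (k + 1) / (n - k - 1)."""
        return aic(log_lik, k) + 2.0 * k * (k + 1.0) / (n - k - 1.0)

    def bic(log_lik, k, n):
        """Bayesian (Schwarz) information criterion: BIC = -2 log L + k log n."""
        return -2.0 * log_lik + k * np.log(n)

    # Hypothetical maximized log-likelihoods for two candidate models
    # fitted to the same n = 30 observations; smaller values are better.
    n = 30
    for name, (log_lik, k) in {"3-parameter": (-52.1, 3),
                               "5-parameter": (-50.4, 5)}.items():
        print(f"{name}: AIC={aic(log_lik, k):.1f} "
              f"AICc={aicc(log_lik, k, n):.1f} BIC={bic(log_lik, k, n):.1f}")

AICc adds a small-sample correction that matters when n/k is small;
the quasi-likelihood variants (QAIC, QAICc) additionally scale the
log-likelihood by an overdispersion estimate, which is omitted here.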
Bias in a parameter estimate is defined as the difference between
the expected value of the estimator and the 'true' value of the
parameter, where the expectation is taken over data generated by the
deterministic model and its error specification.
Bias can sometimes be determined analytically, but for all but the
simplest models it is almost always determined by simulation, where
the simulations use 'known' parameter values chosen by the analyst,
'known' serving as a proxy for the unknown 'true' values.
Repeated analyses of these simulated data sets provide a distribution
of parameter estimates, which is judged for how accurately (with
little bias) and how precisely (with little uncertainty) it recovers
the 'known' values.
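As a minimal sketch of this workflow (the normal model, parameter
values, and sample sizes below are illustrative assumptions, not
taken from the source), the following simulation recovers the
well-known downward bias of the maximum-likelihood variance estimator
in small samples:

    import numpy as np

    rng = np.random.default_rng(1)

    # 'Known' values chosen by the analyst (proxies for the unknown 'true' values).
    true_mu, true_sigma2 = 5.0, 4.0
    n, n_sims = 20, 10_000        # small sample size, many simulated data sets

    # Generate n_sims data sets of size n and compute the ML estimates for each.
    y = rng.normal(true_mu, np.sqrt(true_sigma2), size=(n_sims, n))
    mu_hat = y.mean(axis=1)       # ML estimate of the mean
    sigma2_hat = y.var(axis=1)    # ML estimate of the variance (ddof=0 divides by n)

    # Bias = mean of the estimate distribution minus the 'known' value.
    print(f"bias in mu_hat:     {mu_hat.mean() - true_mu:+.4f}")          # approx 0
    print(f"bias in sigma2_hat: {sigma2_hat.mean() - true_sigma2:+.4f}")  # approx -sigma2/n = -0.2

The spread of each estimate distribution measures precision, while
the offset of its center from the 'known' value measures bias; here
the mean is recovered without bias, but the variance is systematically
underestimated by about sigma^2/n.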