Linear models can be solved for explicit analytical expressions
of the maximum-likelihood parameter estimates, given a deterministic
model structure, an error specification,
and observed data.
Simple linear regression is a good example of a linear model.
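For instance, under independent Gaussian errors the maximum-likelihood estimates for simple linear regression coincide with the least-squares estimates and follow in closed form from the normal equations; a minimal sketch (function name and data are illustrative, not from the text):

```python
# Closed-form maximum-likelihood (least-squares) estimates for simple
# linear regression y = a + b*x + error, assuming independent Gaussian
# errors. No iterative search is needed.

def fit_simple_linear(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)                      # sum of squares of x
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))  # cross-products
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Exact recovery on noise-free data generated from y = 1 + 2x:
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
print(fit_simple_linear(xs, ys))  # (1.0, 2.0)
```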
Non-linear models require mathematical algorithms to efficiently
search for maximum-likelihood estimates.
Familiar examples are the 'simplex
algorithm' (a geometric, derivative-free search) and 'Marquardt's
algorithm' (a calculus-based, gradient search).
Other familiar calculus-based schemes include the Gauss-Newton,
quasi-Newton and conjugate-gradient methods.
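As a concrete sketch of such a calculus-based search, the following implements a minimal Levenberg-Marquardt-style iteration for the illustrative two-parameter model y = A * exp(-k * x) (the model, data, and starting values are assumptions for demonstration only). Each step solves the damped normal equations (J^T J + lam * diag(J^T J)) delta = J^T r; the damping factor lam is raised when a step fails to reduce the sum of squared residuals and lowered when it succeeds.

```python
import math

def model(x, A, k):
    return A * math.exp(-k * x)

def sse(xs, ys, A, k):
    # sum of squared residuals
    return sum((y - model(x, A, k)) ** 2 for x, y in zip(xs, ys))

def marquardt(xs, ys, A, k, iters=50, lam=1e-3):
    for _ in range(iters):
        r = [y - model(x, A, k) for x, y in zip(xs, ys)]
        jA = [math.exp(-k * x) for x in xs]           # d model / d A
        jk = [-A * x * math.exp(-k * x) for x in xs]  # d model / d k
        # Build J^T J and J^T r for the two parameters.
        a11 = sum(j * j for j in jA)
        a12 = sum(p * q for p, q in zip(jA, jk))
        a22 = sum(j * j for j in jk)
        b1 = sum(p * q for p, q in zip(jA, r))
        b2 = sum(p * q for p, q in zip(jk, r))
        while True:
            # Damped 2x2 system, solved by Cramer's rule.
            d11, d22 = a11 * (1 + lam), a22 * (1 + lam)
            det = d11 * d22 - a12 * a12
            dA = (d22 * b1 - a12 * b2) / det
            dk = (d11 * b2 - a12 * b1) / det
            if sse(xs, ys, A + dA, k + dk) < sse(xs, ys, A, k):
                A, k = A + dA, k + dk
                lam /= 10    # good step: behave more like Gauss-Newton
                break
            lam *= 10        # bad step: behave more like gradient descent
            if lam > 1e9:    # no improving step found: treat as converged
                return A, k
    return A, k

# Noise-free data generated from A = 2, k = 0.5:
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [model(x, 2.0, 0.5) for x in xs]
print(marquardt(xs, ys, A=1.0, k=1.0))
```

The adaptive damping is what distinguishes Marquardt's scheme from plain Gauss-Newton, which on this same problem can overshoot badly from a poor starting point.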
The more information the observed data provide about the
value of a model parameter of interest, the more quickly,
precisely, and accurately any of the above schemes will
estimate it.
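A small numerical illustration of this point (a hypothetical example, not from the text): when estimating the mean of Gaussian data with noise standard deviation sigma, the standard error of the maximum-likelihood estimate is sigma / sqrt(n), so larger, more informative samples pin the parameter down more precisely.

```python
import math

def standard_error_of_mean(sigma, n):
    # Standard error of the ML estimate of a Gaussian mean
    # from n independent observations with noise s.d. sigma.
    return sigma / math.sqrt(n)

# Precision improves as the square root of the sample size:
for n in (10, 100, 1000):
    print(n, standard_error_of_mean(1.0, n))
```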
However, even efficient search algorithms can require large amounts
of time to converge upon maximum-likelihood parameter estimates.
Typically, (1) the more parameters that require co-estimation,
(2) the greater the uncertainty in the values of those parameters,
(3) the poorer the choices of initial parameter values, and (4)
the more parameter pairs that tend to be highly correlated, the
greater the time and difficulty required to reach convergence.
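Point (4) can be made concrete with a hypothetical worst case: in a model such as y = a * b * x the two parameters enter only as a product, so they are perfectly correlated, and the Gauss-Newton normal-equation matrix J^T J is exactly singular, leaving no unique update direction for any gradient scheme.

```python
# Hypothetical model y = a * b * x: a and b are perfectly correlated
# (only their product is identifiable), so J^T J is singular.
xs = [1.0, 2.0, 3.0]
a, b = 2.0, 3.0
ja = [b * x for x in xs]  # d model / d a
jb = [a * x for x in xs]  # d model / d b
m11 = sum(j * j for j in ja)
m12 = sum(p * q for p, q in zip(ja, jb))
m22 = sum(j * j for j in jb)
det = m11 * m22 - m12 * m12
print(det)  # 0.0: the normal equations have no unique solution
```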
In some extreme cases convergence may seem not to be achievable.
This may require that a model's deterministic
structure and/or parameterization be reconsidered, or, in the
case of compiled models, information external to the model might
be used to guide parameter values (e.g., Bayesian prior
information).
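One minimal sketch of how such external information enters (an illustrative example, not the text's method): a Gaussian prior on a parameter adds a quadratic penalty to the sum of squares, and for a constant-mean model the resulting maximum-a-posteriori estimate is simply a precision-weighted average of the data mean and the prior mean.

```python
# MAP estimate for the model y_i = theta + Gaussian error, with a
# Gaussian prior on theta. All names and numbers are hypothetical.

def map_estimate_mean(ys, prior_mean, prior_sd, noise_sd):
    n = len(ys)
    data_prec = n / noise_sd ** 2     # precision contributed by the data
    prior_prec = 1 / prior_sd ** 2    # precision contributed by the prior
    data_mean = sum(ys) / n
    # Precision-weighted average: the prior pulls the estimate toward
    # prior_mean, most strongly when the data are few or noisy.
    return (data_prec * data_mean + prior_prec * prior_mean) / (data_prec + prior_prec)

# Three observations with mean 4.0, prior N(0, 1), noise s.d. 1:
print(map_estimate_mean([3.0, 4.0, 5.0], 0.0, 1.0, 1.0))  # 3.0
```

With many observations the data precision dominates and the MAP estimate approaches the ordinary maximum-likelihood estimate; with few observations the prior stabilizes parameters that the data alone constrain poorly.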