Forecasting Process Details

Statistics of Fit

This section explains the goodness-of-fit statistics reported to measure how well different models fit the data. The statistics of fit for the various forecasting models can be viewed or stored in a data set using the Model Viewer window.

The various statistics of fit reported are as follows. In these formulas, n is the number of nonmissing observations and k is the number of fitted parameters in the model.

Number of Nonmissing Observations.
The number of nonmissing observations used to fit the model.

Number of Observations.
The total number of observations used to fit the model, including both missing and nonmissing observations.

Number of Missing Actuals.
The number of missing actual values.

Number of Missing Predicted Values.
The number of missing predicted values.

Number of Model Parameters.
The number of parameters fit to the data. For combined forecasts, this is the number of forecast components.

Total Sum of Squares (Uncorrected).
The total sum of squares for the series, SST, uncorrected for the mean: \sum_{t=1}^{n} y_t^2.

Total Sum of Squares (Corrected).
The total sum of squares for the series, SST, corrected for the mean: \sum_{t=1}^{n} (y_t - \bar{y})^2, where \bar{y} is the series mean.
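The two total sums of squares can be sketched in Python as follows (a minimal illustration; the function name is ours, not part of the SAS System):

```python
def total_sums_of_squares(series):
    """Uncorrected and corrected total sums of squares (SST) for a series,
    following the two definitions above."""
    n = len(series)
    uncorrected = sum(y ** 2 for y in series)          # sum of y_t^2
    ybar = sum(series) / n                             # series mean
    corrected = sum((y - ybar) ** 2 for y in series)   # sum of (y_t - ybar)^2
    return uncorrected, corrected
```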

Sum of Square Errors.
The sum of the squared prediction errors, SSE = \sum_{t=1}^{n} (y_t - \hat{y}_t)^2, where \hat{y}_t is the one-step predicted value.

Mean Square Error.
The mean squared prediction error, MSE, calculated from the one-step-ahead forecasts: MSE = (1/n) SSE. Because this formula divides by n rather than by n - k, it can be used to evaluate small holdout samples.

Root Mean Square Error.
The root mean square error, RMSE = \sqrt{MSE}.
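As a sketch in Python (an illustration under the definitions above, skipping missing values represented as None; the function name is ours):

```python
import math

def sse_mse_rmse(actual, predicted):
    """SSE, MSE, and RMSE from paired actual and one-step predicted values.
    Pairs where either value is missing (None) are excluded."""
    squared_errors = [(y - yhat) ** 2
                      for y, yhat in zip(actual, predicted)
                      if y is not None and yhat is not None]
    n = len(squared_errors)      # number of nonmissing observations
    sse = sum(squared_errors)
    mse = sse / n                # divides by n, not n - k
    return sse, mse, math.sqrt(mse)
```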

Mean Absolute Percent Error.
The mean absolute percent prediction error (MAPE), \frac{100}{n} \sum_{t=1}^{n} |(y_t - \hat{y}_t) / y_t|.
The summation ignores observations where y_t = 0.

Mean Absolute Error.
The mean absolute prediction error, \frac{1}{n} \sum_{t=1}^{n} |y_t - \hat{y}_t|.
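MAPE and MAE can be sketched as below (an illustration, not SAS code; we assume the MAPE denominator counts only the observations actually included in the summation, since the zero-actual terms are skipped):

```python
def mape_mae(actual, predicted):
    """Mean absolute percent error and mean absolute error.
    The percent-error terms skip observations where the actual value is 0."""
    abs_errors = [abs(y - yhat) for y, yhat in zip(actual, predicted)]
    mae = sum(abs_errors) / len(abs_errors)
    # |(y_t - yhat_t) / y_t| * 100, ignoring y_t = 0
    pct = [100 * abs((y - yhat) / y)
           for y, yhat in zip(actual, predicted) if y != 0]
    mape = sum(pct) / len(pct)
    return mape, mae
```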

R-Square.
The R^2 statistic, R^2 = 1 - SSE/SST. If the model fits the series badly, the model error sum of squares, SSE, can be larger than SST, and the R^2 statistic will be negative.

Adjusted R-Square.
The adjusted R^2 statistic, 1 - \frac{n-1}{n-k} (1 - R^2).

Amemiya's Adjusted R-Square.
Amemiya's adjusted R^2, 1 - \frac{n+k}{n-k} (1 - R^2).

Random Walk R-Square.
The random walk R^2 statistic (Harvey's R^2 statistic, using the random walk model for comparison), 1 - \frac{n-1}{n} SSE / RWSSE, where RWSSE = \sum_{t=2}^{n} (y_t - y_{t-1} - \mu)^2 and \mu = \frac{1}{n-1} \sum_{t=2}^{n} (y_t - y_{t-1}).
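The four R-square variants above can be computed together, as in this sketch (illustrative only; SST here is the corrected total sum of squares, and k is the number of fitted parameters):

```python
def r_square_family(actual, predicted, k):
    """R-square, adjusted R-square, Amemiya's adjusted R-square, and the
    random walk R-square, following the preceding definitions."""
    n = len(actual)
    ybar = sum(actual) / n
    sst = sum((y - ybar) ** 2 for y in actual)   # corrected SST
    sse = sum((y - yhat) ** 2 for y, yhat in zip(actual, predicted))
    r2 = 1 - sse / sst
    adj_r2 = 1 - (n - 1) / (n - k) * (1 - r2)
    amemiya_r2 = 1 - (n + k) / (n - k) * (1 - r2)
    # Random walk comparison model: drift mu estimated from first differences.
    diffs = [actual[t] - actual[t - 1] for t in range(1, n)]
    mu = sum(diffs) / (n - 1)
    rwsse = sum((d - mu) ** 2 for d in diffs)
    rw_r2 = 1 - (n - 1) / n * sse / rwsse
    return r2, adj_r2, amemiya_r2, rw_r2
```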

Akaike's Information Criterion.
Akaike's information criterion (AIC), n \ln(MSE) + 2k.

Schwarz Bayesian Information Criterion.
Schwarz Bayesian information criterion (SBC or BIC), n \ln(MSE) + k \ln(n).

Amemiya's Prediction Criterion.
Amemiya's prediction criterion, \frac{1}{n} SST \frac{n+k}{n-k} (1 - R^2) = \frac{n+k}{n-k} \frac{1}{n} SSE.
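The three model-selection criteria can be sketched directly from the one-step prediction errors (illustrative function, not part of the SAS System):

```python
import math

def information_criteria(actual, predicted, k):
    """AIC, SBC, and Amemiya's prediction criterion;
    k is the number of fitted parameters."""
    n = len(actual)
    sse = sum((y - yhat) ** 2 for y, yhat in zip(actual, predicted))
    mse = sse / n
    aic = n * math.log(mse) + 2 * k
    sbc = n * math.log(mse) + k * math.log(n)
    apc = (n + k) / (n - k) * sse / n   # (n+k)/(n-k) * (1/n) * SSE
    return aic, sbc, apc
```

Lower values of each criterion indicate a better trade-off between fit and parameter count; SBC penalizes extra parameters more heavily than AIC once n exceeds about 7 (ln n > 2).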

Maximum Error.
The largest prediction error.

Minimum Error.
The smallest prediction error.

Maximum Percent Error.
The largest percent prediction error, 100 \max_t ((y_t - \hat{y}_t) / y_t). Observations where y_t = 0 are ignored.

Minimum Percent Error.
The smallest percent prediction error, 100 \min_t ((y_t - \hat{y}_t) / y_t). Observations where y_t = 0 are ignored.

Mean Error.
The mean prediction error, \frac{1}{n} \sum_{t=1}^{n} (y_t - \hat{y}_t).

Mean Percent Error.
The mean percent prediction error, \frac{100}{n} \sum_{t=1}^{n} \frac{y_t - \hat{y}_t}{y_t}. The summation ignores observations where y_t = 0.
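The remaining error summaries can be gathered in one pass, as in this sketch (illustrative; percent errors skip zero actuals, and we assume the mean percent error averages over the included terms only):

```python
def error_summaries(actual, predicted):
    """Maximum/minimum error, maximum/minimum percent error,
    mean error, and mean percent error."""
    errors = [y - yhat for y, yhat in zip(actual, predicted)]
    pct = [100 * (y - yhat) / y
           for y, yhat in zip(actual, predicted) if y != 0]
    return {
        "max_error": max(errors),
        "min_error": min(errors),
        "max_pct_error": max(pct),
        "min_pct_error": min(pct),
        "mean_error": sum(errors) / len(errors),
        "mean_pct_error": sum(pct) / len(pct),
    }
```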


Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.