The LOGISTIC Procedure

Classification Table

For binary response data, the response is either an event or a nonevent. In PROC LOGISTIC, the response with Ordered Value 1 is regarded as the event, and the response with Ordered Value 2 is the nonevent. PROC LOGISTIC models the probability of the event. From the fitted model, a predicted event probability can be computed for each observation. The method of computing a reduced-bias estimate of the predicted probability is given in the "Predicted Probability of an Event for Classification" section, which follows. If the predicted event probability exceeds some cutpoint value z \in [0,1], the observation is predicted to be an event observation; otherwise, it is predicted to be a nonevent. A 2×2 frequency table can be obtained by cross-classifying the observed and predicted responses. The CTABLE option produces this table, and the PPROB= option selects one or more cutpoints. Each cutpoint generates a classification table. If the PEVENT= option is also specified, a classification table is produced for each combination of PEVENT= and PPROB= values.
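For example, the following statements request classification tables for a hypothetical data set and model (the data set Remiss and the variable names are assumed for illustration only). Each PPROB= cutpoint is crossed with each PEVENT= prior probability:

   proc logistic data=remiss descending;
      model remiss = cell li / ctable
                               pprob=(0.2 to 0.8 by 0.1)
                               pevent=(0.1 0.5);
   run;

The DESCENDING option makes remiss=1 the response with Ordered Value 1, so that remiss=1 is treated as the event.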

The accuracy of the classification is measured by its sensitivity (the ability to predict an event correctly) and specificity (the ability to predict a nonevent correctly). Sensitivity is the proportion of event responses that were predicted to be events. Specificity is the proportion of nonevent responses that were predicted to be nonevents. PROC LOGISTIC also computes three other conditional probabilities: false positive rate, false negative rate, and rate of correct classification. The false positive rate is the proportion of predicted event responses that were observed as nonevents. The false negative rate is the proportion of predicted nonevent responses that were observed as events. Given prior probabilities specified with the PEVENT= option, these conditional probabilities can be computed as posterior probabilities using Bayes' theorem.
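As a simple numeric sketch (the 2×2 counts below are assumed for illustration, not produced by any example data), the following DATA step computes all five measures from a table of observed versus predicted responses:

   data measures;
      tp = 30;   /* observed events predicted as events       */
      fn = 10;   /* observed events predicted as nonevents    */
      fp = 20;   /* observed nonevents predicted as events    */
      tn = 40;   /* observed nonevents predicted as nonevents */
      sensitivity = tp / (tp + fn);   /* events predicted correctly           */
      specificity = tn / (tn + fp);   /* nonevents predicted correctly        */
      falsepos = fp / (tp + fp);      /* predicted events that are nonevents  */
      falseneg = fn / (fn + tn);      /* predicted nonevents that are events  */
      correct  = (tp + tn) / (tp + fn + fp + tn);
      put sensitivity= specificity= falsepos= falseneg= correct=;
   run;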

Predicted Probability of an Event for Classification

When you classify a set of binary data, if the same observations used to fit the model are also used to estimate the classification error, the resulting error-count estimate is biased. One way of reducing the bias is to remove the binary observation to be classified from the data, reestimate the parameters of the model, and then classify the observation on the basis of the new parameter estimates. However, it would be costly to fit the model leaving out each observation one at a time. The LOGISTIC procedure provides a less expensive one-step approximation to the preceding parameter estimates. Let b be the MLE of the parameter vector (\alpha, {\beta}')' based on all observations. Let b_j denote the MLE computed without the jth observation. The one-step estimate of b_j is given by

b_j^1 = b - \frac{w_j(y_j-\hat{p}_j)}{1-h_{jj}} \hat{V}_b \begin{pmatrix} 1 \\ x_j \end{pmatrix}

where

y_j is 1 for an event response and 0 otherwise
w_j is the WEIGHT value
\hat{p}_j is the predicted event probability based on b
h_{jj} is the hat diagonal element with n_j = 1 and r_j = y_j
\hat{V}_b is the estimated covariance matrix of b
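The following PROC IML sketch evaluates the one-step formula directly. It is illustrative only: it assumes all WEIGHT values w_j = 1 and takes the full-data MLE b (and hence \hat{V}_b) as given rather than fitting the model:

   proc iml;
   start onestep(X, y, b);
      /* X: n x (k+1) design matrix whose first column is all 1s */
      /* y: n x 1 response, 1 for an event and 0 for a nonevent  */
      /* b: (k+1) x 1 full-data MLE of (alpha, beta')'           */
      p = 1 / (1 + exp(-(X * b)));              /* predicted event probabilities */
      V = inv(X` * diag(p # (1 - p)) * X);      /* estimated covariance of b     */
      B1 = j(nrow(X), nrow(b), .);
      do jj = 1 to nrow(X);
         xj  = X[jj, ];                                  /* the row (1, x_j)  */
         hjj = p[jj] * (1 - p[jj]) * xj * V * xj`;       /* hat diagonal h_jj */
         B1[jj, ] = (b - (y[jj] - p[jj]) / (1 - hjj) * V * xj`)`;
      end;
      return(B1);     /* row j holds the one-step estimate b_j^1 */
   finish;

Classifying observation j with the probability computed from b_j^1 rather than from b yields the reduced-bias estimate \hat{p}^*_j used in the next section.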

False Positive and Negative Rates Using Bayes' Theorem

Suppose n_1 of n individuals experience an event, for example, a disease. Let this group be denoted by {\cal C}_1, and let the group of the remaining n_2 = n - n_1 individuals who do not have the disease be denoted by {\cal C}_2. The jth individual is classified as giving a positive response if the predicted probability of disease, \hat{p}^*_j, is large. The probability \hat{p}^*_j is the reduced-bias estimate based on the one-step approximation given in the previous section. For a given cutpoint z, the jth individual is predicted to give a positive response if \hat{p}^*_j \geq z.

Let B denote the event that a subject has the disease, and let {\bar{B}} denote the event of not having the disease. Let A denote the event that the subject responds positively, and let {\bar{A}} denote the event of responding negatively. Results of the classification are represented by two conditional probabilities, {\rm Pr}(A|B) and {\rm Pr}(A|{\bar{B}}), where {\rm Pr}(A|B) is the sensitivity and {\rm Pr}(A|{\bar{B}}) is one minus the specificity.

These probabilities are given by

{\rm Pr}(A|B) = \frac{\sum_{j \in {\cal C}_1} I(\hat{p}^*_j \geq z)}{n_1} \qquad {\rm Pr}(A|{\bar{B}}) = \frac{\sum_{j \in {\cal C}_2} I(\hat{p}^*_j \geq z)}{n_2}
where I(·) is the indicator function.
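In SAS terms, these two proportions are simply within-group means of the classification indicator. A minimal sketch, assuming a data set Scored that contains the reduced-bias estimate Phat and the group indicator Disease (1 for {\cal C}_1, 0 for {\cal C}_2):

   %let z = 0.5;                        /* cutpoint */
   data classed;
      set scored;
      positive = (phat >= &z);          /* I(phat >= z) */
   run;
   proc means data=classed mean;
      class disease;      /* mean for disease=1 is Pr(A|B);    */
      var positive;       /* mean for disease=0 is Pr(A|Bbar)  */
   run;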

Bayes' theorem is used to compute the error rates of the classification. For a given prior probability {\rm Pr}(B) of the disease, the false positive rate P_{F+} and the false negative rate P_{F-} are given by Fleiss (1981, pp. 4-5) as follows:

P_{F+} = {\rm Pr}({\bar{B}}|A) = \frac{{\rm Pr}(A|{\bar{B}})[1-{\rm Pr}(B)]}{{\rm Pr}(A|{\bar{B}}) + {\rm Pr}(B)[{\rm Pr}(A|B) - {\rm Pr}(A|{\bar{B}})]}

P_{F-} = {\rm Pr}(B|{\bar{A}}) = \frac{[1-{\rm Pr}(A|B)]\,{\rm Pr}(B)}{1 - {\rm Pr}(A|{\bar{B}}) - {\rm Pr}(B)[{\rm Pr}(A|B) - {\rm Pr}(A|{\bar{B}})]}
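A minimal DATA step sketch of these two formulas, with assumed values for the sensitivity, for one minus the specificity, and for the prior probability (the values themselves are arbitrary):

   data bayesrates;
      sens   = 0.75;   /* Pr(A|B), the sensitivity               */
      omspec = 0.20;   /* Pr(A|Bbar), one minus the specificity  */
      prb    = 0.10;   /* prior Pr(B), as supplied with PEVENT=  */
      pfpos = omspec*(1 - prb) / (omspec + prb*(sens - omspec));
      pfneg = (1 - sens)*prb / (1 - omspec - prb*(sens - omspec));
      put pfpos= pfneg=;
   run;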
The prior probability {\rm Pr}(B) can be specified with the PEVENT= option. If the PEVENT= option is not specified, the sample proportion of diseased individuals is used; that is, {\rm Pr}(B) = n_1/n. In such a case, the false positive rate and the false negative rate reduce to
P_{F+} = \frac{\sum_{j \in {\cal C}_2} I(\hat{p}^*_j \geq z)}{\sum_{j \in {\cal C}_1} I(\hat{p}^*_j \geq z) + \sum_{j \in {\cal C}_2} I(\hat{p}^*_j \geq z)}

P_{F-} = \frac{\sum_{j \in {\cal C}_1} I(\hat{p}^*_j < z)}{\sum_{j \in {\cal C}_1} I(\hat{p}^*_j < z) + \sum_{j \in {\cal C}_2} I(\hat{p}^*_j < z)}
Note that for a stratified sampling situation, in which n_1 and n_2 are chosen a priori, n_1/n is not a desirable estimate of {\rm Pr}(B). For such situations, the PEVENT= option should be specified.

