The "recall" metric measures the fraction of actual positive outcomes that the model correctly identified. The higher the number, the better the model is at identifying who is likely to generate a success.

Recall is therefore defined and computed as:

recall = tp / (tp + fn)

The terms *true positives (tp)*, *true negatives (tn)*, *false positives (fp)*, and *false negatives (fn)* (see wikipedia: Type I and type II errors for definitions) compare the results of the classifier under test with trusted external judgments. The terms *positive* and *negative* refer to the classifier's prediction (sometimes known as the *expectation*), and the terms *true* and *false* refer to whether that prediction corresponds to the external judgment (sometimes known as the *observation*).

Consider a small example in terms of scoring leads who will convert and who won't. Let's say we are looking at 10 leads, 5 of which will convert (the positive outcomes) and 5 of which won't. The model predicts that 3 of the leads will convert, and all 3 of those predictions are correct. This means the model captured 3 of the 5 actual converters (tp = 3) and missed the other 2 (fn = 2), giving a recall of 3/5 = 60%.
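The example above can be sketched in a few lines of Python. The labels and predictions here are hypothetical illustrations matching the 10-lead scenario, not output from any real model:

```python
# 1 = lead converts (positive outcome), 0 = lead does not convert.
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # 5 converters, 5 non-converters
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]  # model flags only 3 of the converters

# Count true positives (predicted positive, actually positive)
# and false negatives (predicted negative, actually positive).
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

recall = tp / (tp + fn)
print(recall)  # 0.6
```

Note that the 5 non-converting leads never enter the calculation: recall only looks at how many of the true positives the model managed to find.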
