Recall
In the field of machine learning and data analysis, recall is a metric used to evaluate the performance of a model or algorithm. It is defined as the proportion of actual positive cases in the data that the model correctly identifies as positive.
Recall is often used in combination with another metric called precision, which measures the proportion of the model's positive predictions that are actually correct. Together, precision and recall can be used to assess the overall effectiveness of a model in different situations, especially when the classes are imbalanced and plain accuracy would be misleading.
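A minimal sketch of the two metrics computed from confusion-matrix counts; the function names and the counts (tp, fp, fn) are illustrative, not taken from any particular library:

```python
def precision(tp: int, fp: int) -> float:
    """Proportion of the model's positive predictions that are correct."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Proportion of actual positive cases that the model found."""
    return tp / (tp + fn)

# Hypothetical counts: 80 true positives, 20 false positives, 40 false negatives.
print(precision(80, 20))  # 0.8
print(recall(80, 40))     # 0.666...
```

Note how the two metrics share the same numerator but divide by different totals: precision is penalized by false positives, recall by false negatives.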
Recall is particularly useful when the cost of a false negative is high, or when the goal of the model is to identify as many positive cases as possible among a large number of negative cases. For example, in a spam filter where spam is the positive class, it might be more important to have a high recall rate, even at the cost of lower precision, since each false negative is a spam email that slips through to the user's inbox.
To calculate recall, you can use the following formula:
Recall = True Positives / (True Positives + False Negatives)
Where True Positives are the number of actual positive cases that the model correctly predicted as positive, and False Negatives are the number of actual positive cases that the model failed to predict as positive.
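The formula can be applied directly to a pair of label lists by counting true positives and false negatives; the variable names here (y_true, y_pred) are just a common convention, not from the text:

```python
def recall_from_labels(y_true, y_pred):
    """Compute recall = TP / (TP + FN) from binary label lists (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    # Guard against a dataset with no positive cases at all.
    return tp / (tp + fn) if (tp + fn) else 0.0

# Hypothetical labels: 4 actual positives, of which the model finds 3.
y_true = [1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(recall_from_labels(y_true, y_pred))  # 0.75
```

The false positive at the fifth position does not affect recall at all; only missed positives (the second position) lower it.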
Recall can be used to evaluate the performance of a model on a single dataset, or it can be averaged over multiple datasets to get a more comprehensive picture of the model's performance.
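One simple way to combine per-dataset scores is an unweighted (macro) average; this is a sketch under the assumption that each dataset should count equally, regardless of its size:

```python
def macro_average_recall(recalls):
    """Unweighted mean of per-dataset recall scores."""
    return sum(recalls) / len(recalls)

# Hypothetical recall scores measured on three separate datasets.
per_dataset = [0.75, 0.60, 0.90]
print(macro_average_recall(per_dataset))  # 0.75
```

A size-weighted (micro) average, which pools the true positive and false negative counts across datasets before dividing, is the other common choice; the two can differ noticeably when the datasets vary in size.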