Mean absolute error (MAE)


The mean absolute error (MAE) is a metric used to evaluate the performance of a regression model. It is defined as the average absolute difference between the predicted values of the model and the true values of the data.

The MAE is calculated using the following formula:

MAE = (1/n) * Σ|y_i - ŷ_i|

Where n is the number of observations in the data, y_i is the true value of the i-th observation, and ŷ_i is the predicted value of the i-th observation. The vertical bars indicate the absolute value, and the capital Greek letter sigma (Σ) indicates the sum of the absolute differences over all observations.
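The formula translates directly into code. The sketch below computes the MAE with NumPy; the function name and the numerical values are illustrative, not taken from a particular dataset:

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    """MAE = (1/n) * sum(|y_i - ŷ_i|)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs(y_true - y_pred))

# Illustrative true values and model predictions
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]

# Absolute errors are 0.5, 0.5, 0.0, 1.0, so the MAE is 2.0 / 4 = 0.5
print(mean_absolute_error(y_true, y_pred))  # 0.5
```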

The MAE measures the average magnitude of the errors made by the model in its predictions, treating every error equally regardless of its direction. Because each error contributes in proportion to its absolute size rather than its square, the MAE is less sensitive to extreme outliers than squared-error metrics such as the MSE, which makes it a useful summary of the typical prediction error when a few very large mistakes should not dominate the evaluation.
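The hypothetical comparison below illustrates this point: the same set of predictions is evaluated once with small errors and once with a single badly wrong prediction. The MAE grows linearly with that one error, while the MSE grows with its square:

```python
import numpy as np

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))

def mse(y, yhat):
    return np.mean((y - yhat) ** 2)

y_true         = np.array([3.0, -0.5, 2.0, 7.0, 5.0])
y_pred_clean   = np.array([2.5,  0.0, 2.0, 8.0, 5.5])   # typical small errors
y_pred_outlier = np.array([2.5,  0.0, 2.0, 8.0, 25.0])  # one prediction far off

print(mae(y_true, y_pred_clean),   mse(y_true, y_pred_clean))    # 0.5  0.35
print(mae(y_true, y_pred_outlier), mse(y_true, y_pred_outlier))  # 4.4  80.3
```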

The MAE is easy to interpret because it is expressed in the same units as the target variable, and it is well suited to tasks where the goal is to minimize the average error made by the model. It is often used alongside other evaluation metrics, such as the mean squared error (MSE) or the root mean squared error (RMSE), to obtain a more comprehensive picture of the model's performance.
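In practice, libraries such as scikit-learn provide these metrics directly. A minimal sketch reporting the MAE alongside the MSE and RMSE for the same illustrative values used above:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Illustrative values, not from a real model
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]

mae  = mean_absolute_error(y_true, y_pred)
mse  = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)

print(f"MAE:  {mae:.3f}")   # 0.500
print(f"MSE:  {mse:.3f}")   # 0.375
print(f"RMSE: {rmse:.3f}")  # 0.612
```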