Case Study in Evaluating Time Series Prediction Models Using the Relative Mean Absolute Error

Citation
Nicholas G. Reich et al., Case Study in Evaluating Time Series Prediction Models Using the Relative Mean Absolute Error, The American Statistician, 70(3), 2016, pp. 285-292
Journal title: The American Statistician
Journal ISSN: 0003-1305
Volume: 70
Issue: 3
Year of publication: 2016
Pages: 285-292
Database: ACNP
Abstract
Statistical prediction models inform decision-making processes in many real-world settings. Before predictions are used in practice, candidate models must be rigorously tested and validated to ensure that the proposed predictions are sufficiently accurate. In this article, we present a framework for evaluating time series predictions that emphasizes computational simplicity and an intuitive interpretation, using the relative mean absolute error metric. For a single time series, this metric enables comparisons of candidate model predictions against naïve reference models, a method that can provide useful and standardized performance benchmarks. Additionally, in applications with multiple time series, this framework facilitates comparisons of one or more models' predictive performance across different sets of data. We illustrate the use of this metric with a case study comparing predictions of dengue hemorrhagic fever incidence in two provinces of Thailand. This example demonstrates the utility and interpretability of the relative mean absolute error metric in practice and underscores the practical advantages of using relative performance metrics when evaluating predictions.
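
For readers unfamiliar with the metric, the relative mean absolute error is conventionally computed as the ratio of a candidate model's mean absolute error to that of a naïve reference model on the same time series, with values below 1 favoring the candidate. The following minimal Python sketch illustrates that general definition; the data values and the last-observation-carried-forward reference are illustrative assumptions, not the models or data analyzed in the article.

import numpy as np

def relative_mae(observed, candidate_pred, reference_pred):
    # Relative MAE: the candidate model's mean absolute error divided by the
    # reference (naive) model's mean absolute error on the same observations.
    # Values below 1 indicate the candidate outperforms the naive reference.
    observed = np.asarray(observed, dtype=float)
    mae_candidate = np.mean(np.abs(observed - np.asarray(candidate_pred, dtype=float)))
    mae_reference = np.mean(np.abs(observed - np.asarray(reference_pred, dtype=float)))
    return mae_candidate / mae_reference

# Hypothetical case counts and one-step-ahead predictions (illustrative values only).
observed = [12, 15, 20, 18, 25]
candidate = [13, 14, 21, 17, 24]   # predictions from a candidate model
naive_ref = [10, 12, 15, 20, 18]   # naive reference: previous observation carried forward
print(relative_mae(observed, candidate, naive_ref))  # ~0.26, i.e., candidate beats the naive model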