Validating Explainer Methods: A Functionally Grounded Approach for Numerical Forecasting
Haag, Felix; Hopf, Konstantin; Staake, Thorsten (2026): Validating Explainer Methods: A Functionally Grounded Approach for Numerical Forecasting, Bamberg: Otto-Friedrich-Universität, pp. 819–836.
Faculty/Chair:
Author:
Haag, Felix; Hopf, Konstantin; Staake, Thorsten
Publisher Information:
Year of publication:
2026
Pages:
Source/Other editions:
Journal of Forecasting, New York, NY: Wiley Interscience, 2026, Vol. 45, No. 2, pp. 819–836, ISSN: 0277-6693
Year of first publication:
2026
Language:
English
Abstract:
Forecasting systems have a long tradition in providing outputs accompanied by explanations. While the vast majority of such explanations relies on inherently interpretable linear statistical models, research has put forth eXplainable Artificial Intelligence (XAI) methods to improve the comprehensibility of nonlinear machine learning models. As explanations related to forecasts constitute important building blocks in forecasting systems, the validation of explainer methods is an essential part of system selection, parameterization, and adoption. Current research on explainer method assessment focuses on metrics for classification rather than numerical forecasting and predominantly assesses explanation quality within time-consuming, costly, and subjective studies involving humans. Given that the functional validation of explanations is of core interest to research on forecasting, our paper makes three contributions: First, we establish an approach for functionally grounded validations of explainer methods for numerical forecasting. Second, we propose computational rules for the metrics consistency, stability, and faithfulness. Third, we demonstrate our approach for the forecasting case of electricity demand estimation for energy benchmarks and compare a linear statistical approach with the state-of-the-art XAI methods SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Explainable Boosting Machine (EBM). Our work allows research and practice to validate and compare the quality of explainer methods on a functionally grounded level.
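As a rough illustration of what a functionally grounded metric can look like, the sketch below scores a simple notion of faithfulness: the correlation between a feature's attributed importance and the change in the numerical forecast when that feature is replaced by a baseline value. This is a generic perturbation-based faithfulness check, not the paper's actual computational rules; the function names, the toy linear forecaster, and the zero baseline are all assumptions made for the example.

```python
import numpy as np

def faithfulness(model, x, attributions, baseline=0.0):
    """Correlate attribution scores with prediction deltas under
    single-feature perturbation (a common faithfulness proxy)."""
    base_pred = model(x)
    deltas = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = baseline          # replace feature i with a baseline value
        deltas.append(base_pred - model(x_pert))
    # Pearson correlation between attributions and prediction changes
    return float(np.corrcoef(attributions, deltas)[0, 1])

# Toy linear forecaster: y = 3*x0 + 1*x1 - 2*x2
weights = np.array([3.0, 1.0, -2.0])
model = lambda x: float(weights @ x)

x = np.array([1.0, 1.0, 1.0])
attributions = weights * x            # exact attributions for a linear model

print(round(faithfulness(model, x, attributions), 4))
```

For a linear model with exact attributions, the perturbation deltas coincide with the attributions, so the score is 1.0; an explainer method whose attributions track the model less well would score lower on the same check.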
Keywords:
explainable artificial intelligence; explainer method validation; explanation quality; interpretable machine learning; numerical forecasting
Peer Reviewed:
Yes
International Distribution:
Yes
Type:
Article
Activation date:
February 12, 2026
Project(s):
Permalink
https://fis.uni-bamberg.de/handle/uniba/113131