Title: Validating Explainer Methods: A Functionally Grounded Approach for Numerical Forecasting
Authors: Haag, Felix; Hopf, Konstantin (ORCID: 0000-0002-5452-0672); Staake, Thorsten
Date issued: 2026 (available 2026-02-12)
Handle: https://fis.uni-bamberg.de/handle/uniba/113131
Type: article
Language: English
Keywords: explainable artificial intelligence; explainer method validation; explanation quality; interpretable machine learning; numerical forecasting
URN: urn:nbn:de:bvb:473-irb-113131x

Abstract:
Forecasting systems have a long tradition of providing outputs accompanied by explanations. While the vast majority of such explanations rely on inherently interpretable linear statistical models, research has put forth eXplainable Artificial Intelligence (XAI) methods to improve the comprehensibility of nonlinear machine learning models. Because explanations of forecasts constitute important building blocks in forecasting systems, the validation of explainer methods is an essential part of system selection, parameterization, and adoption. Current research on explainer method assessment focuses on metrics for classification rather than numerical forecasting and predominantly assesses explanation quality in time-consuming, costly, and subjective human-subject studies. Given that the functional validation of explanations is of core interest to forecasting research, our paper makes three contributions: First, we establish an approach for functionally grounded validation of explainer methods for numerical forecasting. Second, we propose computational rules for the metrics of consistency, stability, and faithfulness. Third, we demonstrate our approach on the forecasting case of electricity demand estimation for energy benchmarks and compare a linear statistical approach with the state-of-the-art XAI methods SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Explainable Boosting Machine (EBM). Our work allows research and practice to validate and compare the quality of explainer methods on a functionally grounded level.
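
The abstract names the metrics consistency, stability, and faithfulness and the explainer methods SHAP, LIME, and EBM, but does not spell out their computation. As a purely illustrative, hypothetical sketch (not the paper's proposed rules), the following Python snippet shows one generic way a perturbation-based stability check for SHAP attributions on a numerical forecasting model could look. It assumes scikit-learn and the shap package; the synthetic regression data is a stand-in for an electricity-demand dataset.

# Hypothetical sketch of a stability check for SHAP attributions on a
# numerical forecast; NOT the paper's proposed computational rules.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Toy regression task standing in for electricity demand estimation.
X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
x = X[:1]                               # single instance to explain
phi = explainer.shap_values(x)[0]       # baseline attribution vector

# Stability proxy: attributions for slightly perturbed inputs should stay
# close to the original attributions (cosine similarity near 1).
rng = np.random.default_rng(0)
sims = []
for _ in range(20):
    x_pert = x + rng.normal(scale=0.01 * X.std(axis=0), size=x.shape)
    phi_pert = explainer.shap_values(x_pert)[0]
    sims.append(
        np.dot(phi, phi_pert)
        / (np.linalg.norm(phi) * np.linalg.norm(phi_pert))
    )
print(f"Mean attribution similarity under perturbation: {np.mean(sims):.3f}")

An analogous perturbation loop could be wrapped around LIME or EBM attributions so that all explainer methods are compared under the same protocol.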