Abstract:
Machine learning algorithms can capture complex, nonlinear, interacting relationships and are increasingly used to predict yield variability at regional and national scales. Applying explainable artificial intelligence (XAI) methods to such algorithms may enable a better scientific understanding of the drivers of yield variability. However, XAI methods may provide misleading results when applied to spatiotemporally correlated datasets. In this study, machine learning models are trained to predict simulated crop yield from climate indices, and the impact of the model evaluation strategy on the interpretation and performance of the resulting models is assessed. Using data from a process-based crop model allows us to comment on the plausibility of the ‘explanations’ provided by XAI methods. Our results show that the choice of evaluation strategy affects (i) interpretations of the model and (ii) model skill on held-out years and regions, once the evaluation strategy is used for hyperparameter tuning and feature selection. We find that a cross-validation strategy based on clustering in feature space yields the most plausible interpretations as well as the best model performance on held-out years and regions. Our results provide first steps towards identifying domain-specific ‘best practices’ for the use of XAI tools on spatiotemporal agricultural or climatic data.
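The cross-validation strategy highlighted in the abstract — splitting data by clusters in feature space rather than at random — can be sketched as follows. This is a minimal illustrative reading, not the paper's actual pipeline: the synthetic data, the choice of KMeans for clustering, and the random-forest model are all assumptions introduced here for demonstration.

```python
# Sketch: cross-validation grouped by clusters in feature space.
# All data and modelling choices below are illustrative assumptions,
# not the specific setup used in the study.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))  # stand-in for climate indices (synthetic)
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=300)  # yield proxy

# Cluster samples in feature space; each cluster becomes a CV group,
# so training and test folds are separated in feature space instead of
# sharing spatiotemporally correlated neighbours.
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

scores = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=clusters):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    scores.append(r2_score(y[test_idx], model.predict(X[test_idx])))

print(f"mean R^2 across feature-space-clustered folds: {np.mean(scores):.2f}")
```

Because entire feature-space clusters are held out together, the resulting skill estimate is less inflated by correlation between train and test samples, which in turn affects hyperparameter tuning, feature selection, and the XAI interpretations built on the fitted model.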