
Released

Journal Article

Machine learning for predicting chaotic systems

Authors
Schötz, Christof (Potsdam Institute for Climate Impact Research)
White, Alistair (Potsdam Institute for Climate Impact Research)
Gelbrecht, Maximilian (Potsdam Institute for Climate Impact Research)
Boers, Niklas (Potsdam Institute for Climate Impact Research)

Citation

Schötz, C., White, A., Gelbrecht, M., Boers, N. (2026): Machine learning for predicting chaotic systems. - Chaos, 36, 5, 053105.
https://doi.org/10.1063/5.0313297


Cite as: https://publications.pik-potsdam.de/pubman/item/item_34387
Abstract
Predicting chaotic dynamical systems is critical in many scientific fields, such as weather forecasting, but challenging due to the characteristic sensitive dependence on initial conditions. Traditional modeling approaches require extensive domain knowledge, often leading to a shift toward data-driven methods using machine learning. However, existing research provides inconclusive results on which machine learning methods are best suited for predicting chaotic systems. In this paper, we compare different lightweight and heavyweight machine learning architectures using extensive existing benchmark databases of low-dimensional systems, as well as a newly introduced database that allows for uncertainty quantification in the benchmark results. In addition to the state-of-the-art methods from the literature, we also present new advantageous variants of established methods. Hyperparameter tuning is adjusted based on computational cost, with more tuning allocated to less costly methods. Furthermore, we introduce the cumulative maximum error, a novel metric that combines desirable properties of traditional metrics and is tailored for chaotic systems. Our results show that well-tuned simple methods, as well as untuned baseline methods, often outperform the state-of-the-art deep learning models, but their performance can vary significantly with different experimental setups. These findings highlight the importance of aligning prediction methods with data characteristics and caution against the indiscriminate use of overly complex models.
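The abstract introduces the cumulative maximum error but does not define it here. A minimal sketch of what such a metric plausibly computes, assuming it is the running maximum of the pointwise Euclidean prediction error along a trajectory (the function name, norm, and lack of normalization are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def cumulative_maximum_error(y_true, y_pred):
    """Running maximum of the pointwise prediction error over time.

    Illustrative sketch only: the paper's exact definition may differ,
    e.g., in normalization or choice of norm.
    Arrays are shaped (time steps, state dimensions).
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # Euclidean error of the predicted state at each time step.
    err = np.linalg.norm(y_true - y_pred, axis=-1)
    # Running maximum: the metric is monotone in time, reflecting that
    # in a chaotic system an error, once made, does not "un-happen".
    return np.maximum.accumulate(err)
```

A monotone metric of this kind avoids rewarding a forecast that diverges early and later drifts back near the true trajectory by coincidence, which is one way a metric can be "tailored for chaotic systems" as the abstract describes.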