Using Neural Networks to Model Complex Mathematical Functions
Abstract
Accurately modeling highly complex and irregular mathematical functions, such as fractals, chaotic systems, and turbulence, poses longstanding challenges. Traditional physics-based approaches often fail due to analytic intractability and extreme sensitivity to initial conditions. In this work, we pioneer the use of long short-term memory (LSTM) recurrent neural networks for learning representations of such complex mathematical functions. We train custom-designed deep LSTM architectures on functions including the Lorenz attractor, the Mandelbrot set, and the Mackey-Glass delay differential equation. The networks achieve strong quantitative performance on standard evaluation metrics, including mean squared error (MSE) and the coefficient of determination (R²). Qualitative visualizations likewise demonstrate highly precise function replication and generalization. Comparisons against polynomial regression and multilayer perceptron baselines confirm the advantage of the LSTM modeling approach. Our results provide broader evidence for the potential of deep neural networks to emulate intricate mathematical functions, with transformative implications for overcoming modeling limitations across science and engineering. We conclude by reflecting on current methodological limitations and identifying key open questions to guide future work at the intersection of mathematical modeling and machine learning.
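The abstract does not include implementation details, but the described setup lends itself to a short illustration. The sketch below shows one plausible way to train an LSTM for one-step-ahead prediction of the Lorenz attractor and to report MSE and R², the metrics named above. All hyperparameters here (a two-layer LSTM, a 50-step input window, Adam optimization, simple Euler integration of the Lorenz system) are assumptions for illustration, not taken from the paper.

# Hypothetical sketch only: the paper's exact architecture and training
# procedure are not given here; this illustrates the general approach.
import numpy as np
import torch
import torch.nn as nn

# Generate a Lorenz attractor trajectory with simple Euler integration.
def lorenz_trajectory(n_steps=10000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    xyz = np.empty((n_steps, 3), dtype=np.float32)
    x, y, z = 1.0, 1.0, 1.0
    for i in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xyz[i] = (x, y, z)
    return xyz

data = lorenz_trajectory()
data = (data - data.mean(0)) / data.std(0)  # normalize each coordinate

# Build (input window, next state) pairs for one-step-ahead prediction.
window = 50
X = np.stack([data[i : i + window] for i in range(len(data) - window)])
y = data[window:]
X_t, y_t = torch.from_numpy(X), torch.from_numpy(y)

# A small stacked LSTM that maps a window of states to the next state.
class LSTMRegressor(nn.Module):
    def __init__(self, n_features=3, hidden=64, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict from the last time step's hidden state

model = LSTMRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    perm = torch.randperm(len(X_t))
    for i in range(0, len(X_t), 256):
        idx = perm[i : i + 256]
        opt.zero_grad()
        loss = loss_fn(model(X_t[idx]), y_t[idx])
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last-batch MSE={loss.item():.5f}")

# Evaluate with MSE and R-squared, the metrics reported in the paper.
with torch.no_grad():
    pred = model(X_t)
mse = loss_fn(pred, y_t).item()
ss_res = ((y_t - pred) ** 2).sum()
ss_tot = ((y_t - y_t.mean(0)) ** 2).sum()
print(f"MSE={mse:.5f}, R^2={(1 - ss_res / ss_tot).item():.4f}")

Note that this evaluates on the training trajectory for brevity; assessing generalization as the abstract describes would require holding out a separate segment of the trajectory or a trajectory from different initial conditions.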
This work is licensed under a Creative Commons Attribution 4.0 International License.