Physics-Informed Neural Networks for Solving PDEs Using Gradient-Regularized Loss Functions: Application to the Nonlinear Burgers’ Equation
Abstract
Physics-Informed Neural Networks (PINNs) have emerged as a powerful method for solving partial differential equations (PDEs) by embedding physical laws directly into the loss function that guides network training. However, PINN training often suffers from slow convergence and poor stability when the PDE residual and data terms are poorly balanced, or when noise at the collocation points is overfitted. To improve the accuracy and stability of PINN training, this study presents a systematic examination of enhanced loss formulations based on gradient regularization. The proposed gradient-regularized loss imposes a smoothness penalty on the network parameters, comparable to classical Tikhonov regularization, in order to promote physically consistent solutions and curb overfitting. Case studies on the nonlinear Burgers' equation show improved convergence robustness and a lower mean squared residual error relative to the standard PINN baseline.
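To make the loss formulation concrete, the following is a minimal sketch of a PINN loss for the 1D viscous Burgers' equation, u_t + u u_x = ν u_xx, with the smoothness penalty interpreted as a Tikhonov-style L2 term on the network parameters, as the abstract describes. The network architecture, the viscosity value, the weighting factor lam, and all function and variable names are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

# Small fully connected surrogate u_theta(t, x); architecture is illustrative.
model = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

nu = 0.01 / torch.pi  # viscosity; a common Burgers' benchmark value (assumed)


def burgers_residual(model, t, x):
    """PDE residual r = u_t + u * u_x - nu * u_xx at collocation points."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    u = model(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx


def regularized_loss(model, t_col, x_col, t_dat, x_dat, u_dat, lam=1e-4):
    """Residual MSE + data MSE + Tikhonov-style penalty on the parameters."""
    res = burgers_residual(model, t_col, x_col)
    loss_pde = torch.mean(res ** 2)

    u_pred = model(torch.cat([t_dat, x_dat], dim=1))
    loss_data = torch.mean((u_pred - u_dat) ** 2)

    # Smoothness / Tikhonov term: sum of squared network weights.
    penalty = sum((p ** 2).sum() for p in model.parameters())
    return loss_pde + loss_data + lam * penalty
```

In this reading, lam plays the role of the Tikhonov regularization weight: larger values bias training toward smoother, smaller-norm parameter vectors, trading some data fit for stability of the recovered solution.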