Theodosios Theodosiou, Christoforos Rekatsinas
Abstract
Physics-Informed Neural Networks (PINNs) have gained significant attention for solving differential equations, yet their efficiency is often hindered by the need for intricate and computationally costly loss-balancing techniques to address residual term imbalance. This paper introduces a direct differential equation term scaling framework that removes the loss-balancing bottleneck entirely. By scaling each term in the governing equations using characteristic physical dimensions, the proposed method ensures numerical consistency across all contributions, eliminating the need for adaptive weighting during training. This not only simplifies the PINN formulation but also improves stability and convergence. The approach is validated on challenging nonlinear one-dimensional elasticity problems, demonstrating that high-accuracy solutions can be obtained with compact neural network architectures while reducing floating-point operations by at least two orders of magnitude. A reverse scaling step restores the solution to the original physical domain, preserving physical interpretability. The results demonstrate that direct term scaling transforms PINN training into an efficient and easily deployable process, paving the way for broader adoption in computational mechanics and other physics-driven domains.
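To make the scaling idea concrete, the following is a minimal sketch of characteristic-dimension term scaling for a linear 1D bar equation. All scale names (L, E0, f0, U) and the model problem are illustrative assumptions, not the paper's specific formulation; the point is only that dividing the residual by a characteristic load makes every term O(1), and that a reverse scaling step maps the dimensionless solution back to physical units.

```python
# Hypothetical characteristic scales for a 1D bar governed by
# E0 * u''(x) + f0 = 0 (illustrative example, not the paper's test case).
L = 2.0      # characteristic length [m]
E0 = 200e9   # Young's modulus [Pa]
f0 = 1e6     # distributed load [N/m^3]

# Characteristic displacement chosen from the balance E0 * U / L**2 ~ f0,
# so that both terms of the scaled residual are O(1).
U = f0 * L**2 / E0

def residual_physical(u_xx):
    """Residual in physical units: terms are O(1e6), badly scaled."""
    return E0 * u_xx + f0

def residual_scaled(u_bar_xx):
    """Residual after substituting u = U*u_bar, x = L*x_bar and dividing
    by f0: u_bar'' + 1 = 0, with every term O(1)."""
    return u_bar_xx + 1.0

def u_bar_exact(x_bar):
    """Exact dimensionless solution of u_bar'' + 1 = 0, u_bar(0)=u_bar(1)=0."""
    return 0.5 * x_bar * (1.0 - x_bar)

def u_physical(x):
    """Reverse scaling: u(x) = U * u_bar(x / L) restores physical units."""
    return U * u_bar_exact(x / L)
```

In a PINN, `residual_scaled` would be evaluated on the network's output so the loss terms are numerically commensurate without adaptive weights; `u_physical` illustrates how the trained dimensionless solution is mapped back to the physical domain.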

