Physics-Informed Neural Networks for Time-Domain Simulations: Accuracy, Computational Cost, and Flexibility
Abstract: The simulation of power system dynamics poses a computationally expensive task. Considering the growing uncertainty of generation and demand patterns, thousands of scenarios need to be continuously assessed to ensure the safety of power systems. Physics-Informed Neural Networks (PINNs) have recently emerged as a promising solution for drastically accelerating computations of non-linear dynamical systems. This work investigates the applicability of these methods to power system dynamics, focusing on the dynamic response to load disturbances. Comparing the predictions of PINNs to the solutions of conventional solvers, we find that PINNs can be 10 to 1000 times faster than conventional solvers. At the same time, we find them to be sufficiently accurate and numerically stable even for large time steps. To facilitate a deeper understanding, this paper also presents a new regularisation of Neural Network (NN) training by introducing a gradient-based term in the loss function. The resulting NNs, which we call dtNNs, help us deliver a comprehensive analysis of the strengths and weaknesses of NN-based approaches, how incorporating knowledge of the underlying physics affects NN performance, and how this compares with conventional solvers for power system dynamics.
Explain it Like I'm 14
A simple explanation of the paper
What this paper is about (overview)
This paper looks at a new way to do fast, accurate computer simulations of how electric power grids behave over time, especially right after something changes suddenly (like a sudden change in demand). Instead of using traditional step-by-step math solvers, the authors use special machine learning models called Physics-Informed Neural Networks (PINNs). These models learn to predict what will happen quickly while also respecting the laws of physics. The paper tests how accurate, fast, and flexible this method is and introduces a helpful variation called “dtNNs.”
The main questions the paper asks
The study focuses on simple, practical questions:
- Can PINNs predict power grid behavior much faster than traditional solvers while still being accurate?
- Are PINNs stable (do they avoid weird numerical blow-ups) even with big time steps?
- How does performance change with the size or complexity of the power system?
- Does adding physics knowledge to the neural network during training (as PINNs do) actually help?
- What are the trade-offs: how much up-front training time is needed, and when is it worth it?
How the researchers approached it (methods, in plain terms)
To understand the setup, imagine:
- The power grid is a huge, connected system (like a web of springs and weights) that moves and settles after a push.
- Traditional solvers simulate this by walking forward in tiny time steps (like moving along a path stone by stone).
- A neural network tries to learn a shortcut: given the starting point and the push, it jumps directly to the answer at any time you ask (like using a map to skip the stepping-stone walk).
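The contrast between stone-by-stone stepping and a one-shot "map" can be made concrete with a toy sketch. The damped oscillator below is our own stand-in for one machine's swing dynamics, and the closed-form `shortcut` plays the role of a trained network; none of the constants or function names come from the paper.

```python
import math

# Toy stand-in for one machine's dynamics: a damped oscillator
#   x'' + C*x' + K*x = 0   (K and C are made-up constants).
K, C = 4.0, 0.5

def euler_solve(x0, v0, t_end, dt=0.001):
    """Traditional solver: walk forward stone by stone in small time steps."""
    x, v = x0, v0
    for _ in range(int(t_end / dt)):
        x, v = x + dt * v, v + dt * (-K * x - C * v)
    return x

def shortcut(x0, v0, t):
    """Stand-in for a trained NN: one direct evaluation at any time t.
    (This linear toy has a known closed-form answer; a real network
    would have *learned* such a map from initial state and time.)"""
    wd = math.sqrt(K - (C / 2) ** 2)            # damped frequency
    a, b = x0, (v0 + (C / 2) * x0) / wd
    return math.exp(-C * t / 2) * (a * math.cos(wd * t) + b * math.sin(wd * t))

# 5000 solver iterations vs. a single function call, same answer:
stepped = euler_solve(1.0, 0.0, 5.0)
direct = shortcut(1.0, 0.0, 5.0)
```

The stepping solver must redo all its work for every new question; the shortcut answers at any requested time in one evaluation, which is where the speed-up comes from.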
Key pieces explained simply:
- Differential-Algebraic Equations (DAEs): These are the math rules describing the grid. “Differential” parts tell how things change over time (like speed and acceleration). “Algebraic” parts are instant balance rules (like “power in = power out”).
- Neural Network (NN): A flexible function that learns patterns from data (here, it learns the grid’s response).
- Physics-Informed Neural Network (PINN): A neural network trained with an extra rule: during learning, it is penalized whenever its answers break the physics equations. Think of the physics as a teacher checking the homework as it’s written.
- dtNN: A new twist the authors propose. Besides matching the usual target outputs, the network is also encouraged to match the rate of change (the slope) of the outputs over time. It’s like teaching not just where the ball is, but also how fast it’s moving.
- Collocation points: Random “checkpoints” in time where the PINN is tested against the physics laws, even if there’s no ground-truth data there. This helps the network behave well between known data points.
- Training vs run-time:
  - Training: The network learns from examples. This can be slow and expensive.
  - Run-time: After training, getting a prediction is very fast (one quick calculation).
- Test systems: They used two classic grid models (an 11-bus system and a 39-bus system) and simulated a sudden change in load (like turning lots of devices on or off at once). They checked predictions over 20 seconds after the change.
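The key pieces above, data loss plus a physics check at collocation points, can be sketched in a few lines. This is our own minimal illustration: a cubic polynomial stands in for the network, and derivatives use finite differences, whereas the paper's PINNs are real neural networks trained with automatic differentiation.

```python
import numpy as np

# A tiny stand-in "network": a cubic in t whose coefficients play the
# role of trainable weights (a real PINN would be a neural network).
def model(w, t):
    return w[0] + w[1] * t + w[2] * t**2 + w[3] * t**3

def pinn_loss(w, t_data, x_data, t_colloc, k=4.0, c=0.5, h=1e-4):
    # 1) Data loss: match the known ground-truth solution points.
    data_loss = np.mean((model(w, t_data) - x_data) ** 2)
    # 2) Physics loss: at collocation points (no ground truth needed),
    #    penalise the residual of the toy equation x'' + c*x' + k*x = 0.
    #    Derivatives here are central finite differences for simplicity.
    x = model(w, t_colloc)
    dx = (model(w, t_colloc + h) - model(w, t_colloc - h)) / (2 * h)
    ddx = (model(w, t_colloc + h) - 2 * x + model(w, t_colloc - h)) / h**2
    physics_loss = np.mean((ddx + c * dx + k * x) ** 2)
    return data_loss + physics_loss

# Random "checkpoints" in time where only the physics is checked:
rng = np.random.default_rng(0)
t_colloc = rng.uniform(0.0, 2.0, size=32)
loss = pinn_loss(np.array([1.0, 0.0, -2.0, 0.3]),
                 np.array([0.0, 1.0]), np.array([1.0, 0.4]), t_colloc)
```

Minimising a loss of this shape is what makes the physics act like "a teacher checking the homework": the second term punishes answers that break the equations even where no data exists.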
What they found and why it matters
Main findings:
- Very fast predictions: After training, the neural networks (including PINNs) predict grid behavior 10 to 1,000 times faster than traditional solvers. This is a big deal if you need to run thousands of scenarios quickly.
- Stable behavior: Unlike some traditional methods that can become unstable with large time steps, the neural networks don’t suffer from numerical instability during prediction because they don’t use step-by-step iteration—they compute the answer in one shot.
- Accuracy is good and can be improved:
  - PINNs are generally more accurate than plain neural networks when training data is limited, because the physics rules guide them.
  - dtNNs (the new idea) help too—they regularize training by teaching the network the “slope” of the solution, which improves generalization.
- Speed does not scale with system size in the usual way:
  - Traditional solvers get slower as the grid gets bigger.
  - For neural networks, runtime depends more on how complex the behavior is, not just how many buses are in the system. A bigger grid with simpler dynamics can be as fast as a smaller one.
- Training time vs accuracy trade-off:
  - More training data and physics regularization (PINN style) generally lead to better accuracy.
  - PINNs cost more per training step because they check the physics at many collocation points, but they often reach better accuracy when data is limited.
  - The optimizer settings (how the network is tuned during training) matter a lot for both training time and final accuracy.
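The dtNN "slope matching" mentioned in the findings can be sketched as an extra loss term. The function names and the finite-difference shortcut below are our own illustration; the paper defines the gradient term through the network's actual derivatives.

```python
import numpy as np

def dtnn_loss(model, w, t_data, x_data, dxdt_data, lam=1.0, h=1e-4):
    """Sketch of the dtNN regularisation: besides matching the target
    outputs, also match their rate of change over time."""
    value_loss = np.mean((model(w, t_data) - x_data) ** 2)
    # Slope of the model output w.r.t. time (central differences here;
    # in practice this would come from automatic differentiation).
    slope = (model(w, t_data + h) - model(w, t_data - h)) / (2 * h)
    slope_loss = np.mean((slope - dxdt_data) ** 2)
    return value_loss + lam * slope_loss

# A model that is right in both position and slope scores (near) zero:
line = lambda w, t: w[0] * t
t = np.array([1.0, 2.0, 3.0])
loss = dtnn_loss(line, np.array([2.0]), t, 2.0 * t, np.full(3, 2.0))
```

Teaching "where the ball is and how fast it is moving" constrains the network between data points, which is why it helps generalization when data is scarce.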
Why this matters:
- Grid operators need to quickly check many “what-if” scenarios to keep the system safe and reliable.
- Massive speed-ups at run-time can save time and allow more frequent assessments.
- Stability and accuracy are critical for trustworthy decisions.
- PINNs make it possible to have both speed and physics-based reliability.
What it means going forward (impact and implications)
- Best use cases: PINNs and dtNNs are especially useful when you must run many repeated simulations of the same kind (for example, checking many different disturbance sizes or times). The more often you reuse the trained network, the more you benefit from the speed.
- Trade-offs: You pay a training cost up front. If you only need a few simulations, a traditional solver might be simpler. But if you need many, the fast run-time of a trained PINN makes it worth it.
- Flexibility challenge: If the grid model changes a lot (new equipment, very different settings), you may need to retrain. Traditional solvers are more flexible here. The authors suggest:
  - Reducing training cost with better learning strategies and architectures,
  - Targeting high-repetition tasks where speed matters most,
  - Using hybrids: let neural networks handle the repetitive parts and traditional solvers handle the parts that change often.
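The up-front-cost trade-off amounts to simple break-even arithmetic. All timings below are hypothetical placeholders chosen for illustration, not measurements from the paper:

```python
# Hypothetical timings, purely for illustration (not from the paper):
train_cost_s = 3600.0   # one-off NN training time (seconds)
solver_run_s = 2.0      # conventional solver, per scenario
nn_run_s = 0.01         # trained NN prediction, per scenario

def total_seconds(n_scenarios, use_nn):
    """Total wall-clock time for n repeated simulations."""
    if use_nn:
        return train_cost_s + n_scenarios * nn_run_s
    return n_scenarios * solver_run_s

# Training pays off once n * solver_run_s exceeds train_cost_s + n * nn_run_s:
break_even = train_cost_s / (solver_run_s - nn_run_s)   # roughly 1800 scenarios
```

Below the break-even count the traditional solver is the simpler choice; far above it, the trained network's near-zero per-run cost dominates, which is exactly the high-repetition regime the authors target.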
In short: This paper shows that Physics-Informed Neural Networks can make power grid time-domain simulations dramatically faster while keeping accuracy and stability, especially when data is limited and many runs are needed. The new dtNN approach also helps training. With careful design, these methods could become powerful tools for real-time or large-scale grid studies.