Article

Quantum Neural Networks for Solving Power System Transient Simulation Problem

by
Mohammadreza Soltaninia
and
Junpeng Zhan
*,†
Department of Electrical Engineering, Alfred University, Alfred, NY 14802, USA
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Energies 2025, 18(10), 2525; https://doi.org/10.3390/en18102525
Submission received: 23 January 2025 / Revised: 29 April 2025 / Accepted: 30 April 2025 / Published: 13 May 2025

Abstract:
Quantum computing, leveraging principles of quantum mechanics, represents a transformative approach in computational methodologies, offering significant enhancements over traditional classical systems. This study tackles the complex and computationally demanding task of simulating power system transients through solving differential-algebraic equations (DAEs). We introduce two novel Quantum Neural Networks (QNNs): the Sinusoidal-Friendly QNN and the Polynomial-Friendly QNN, proposing them as effective alternatives to conventional simulation techniques. Our application of these QNNs successfully simulates two small power systems, demonstrating their potential to achieve good accuracy. We further explore various configurations, including time intervals, training points, and the selection of classical optimizers, to optimize the solving of DAEs using QNNs. This research not only marks a pioneering effort in applying quantum computing to power system simulations but also expands the potential of quantum technologies in addressing intricate engineering challenges.

1. Introduction

Quantum computing represents a paradigm shift in computational capabilities, leveraging principles of quantum mechanics to process information in fundamentally different ways from classical computing. Traditional computers use binary bits as the basic unit of data, which can be either a zero or a one. In contrast, quantum computers use quantum bits, or qubits, which can exist in multiple states simultaneously, thanks to the phenomena of superposition and entanglement [1]. This allows quantum computers to process a vast number of possibilities concurrently, offering potential exponential speed-ups for certain computational tasks, for example, in factoring large integers [2] or in searching unstructured databases [3,4].
Recent advancements in quantum hardware [5,6,7] and the development of sophisticated algorithms [8,9,10,11] have made quantum computing a highly promising area of research. These advancements make quantum computing a promising field for a variety of applications, particularly in areas that require handling complex, high-dimensional datasets or simulations where classical computers struggle.
One of the pivotal developments in this field is the Quantum Neural Network (QNN) [12,13]. QNNs integrate the principles of quantum computing with the architectural concepts of classical neural networks. This hybrid approach has been explored for various applications, including drug discovery, financial modeling, and, notably, complex system simulations such as those needed in power systems.
Quantum computing has been explored for various power system applications, including power flow analysis [14], unit commitment [15], system reliability [16], and stability assessment [17]. There is a burgeoning body of literature, as indicated by several review papers [18,19,20], that discusses the integration of quantum computing into power system operations, underscoring the participation of entities such as the Department of Energy (DOE) and various utility companies in these research efforts.
Simulation of power systems is of paramount importance for ensuring their stability, efficiency, and reliability [21,22,23,24], particularly as the integration of renewable energy sources like wind and solar continues to expand. These renewables are incorporated into power systems through power electronic devices, such as inverters, which necessitate the simultaneous handling of both small and large time steps in simulations. This is due to the rapid responses of the power electronic devices contrasted with the slower-moving dynamics of synchronous generators. Accurate modeling of these diverse dynamics is essential for effective grid management and planning, highlighting the critical need for robust simulation tools capable of managing complex interactions across varying time scales.
The simulation of power systems typically involves solving differential-algebraic equations (DAEs), which describe both the dynamic behavior and the algebraic constraints within the electrical network. Traditional approaches for these simulations include numerical methods such as Euler’s and Runge–Kutta methods [25], which discretize the equations to approximate solutions, and semi-analytical methods [23,26], which combine analytical and numerical techniques for improved accuracy. More recently, Physics-Informed Neural Networks (PINNs) [27] have been applied to solve differential equations (DEs) by training neural networks to adhere to the underlying physics of the systems, thus providing a data-driven approach to simulation.
However, each of these methods has its limitations. Numerical and semi-analytical methods can become computationally intensive and may not scale efficiently with the increasing complexity of modern power systems. PINNs, while innovative, often require extensive data and computational resources, as they rely on classical neural networks that utilize a large number of parameters to capture complex functions.
In this context, QNNs present a promising alternative. QNNs leverage the principles of quantum mechanics to perform computations, utilizing a smaller number of parameters than classical neural networks to represent nonlinear functions [12]. This reduction in parameters can potentially lead to more efficient simulations, as QNNs could process complex computations faster and with higher precision due to quantum superposition and entanglement.
Beyond the reduced parameterization and expressive power of QNNs, an emerging body of research suggests that QNNs possess strong generalization abilities and resilience against overfitting, especially in nonlinear settings [28]. This is highly relevant in power system simulations, where dynamic behaviors vary across operating scenarios and overfitting to specific conditions can lead to unreliable models. These properties, combined with the potential for quantum-enhanced speedups, make QNNs a promising alternative to conventional solvers for DAEs in power systems.
Quantum computing has been applied to solve DEs and Partial Differential Equations (PDEs) [29]. However, its potential to solve DAEs remains largely unexplored. Therefore, this paper takes a pioneering step by investigating the use of QNNs to address the challenges in power system simulation. This exploration is the first of its kind and aims to demonstrate how quantum-enhanced computational models can carry out the simulation of complex power systems.
The rest of the paper is organized as follows. Section 2 provides a description of the DAEs for the two small power systems simulated. Section 3 offers a general overview of the QNN. Section 4 details the two types of QNNs used to solve the power system simulation problems. Section 5 presents the simulation results. Finally, conclusions are given in Section 6.

2. Problem Description: DAE

DAEs are integral to modeling dynamic systems where constraints intertwine the derivatives of some variables with the algebraic relationships among others. DAEs are prevalent across a diverse array of fields, including electrical circuit design, mechanical system simulation, chemical process modeling, and economic dynamics. This wide applicability underlines the importance of understanding their structure, challenges, and solution methods.
DAEs are characterized by their differentiation index, a critical measure that indicates the complexity of numerical solutions. The differentiation index describes the number of times an equation must be differentiated to convert it into an ordinary differential equation (ODE), affecting the choice of numerical methods for solving the DAE. Common forms of DAEs include semi-explicit, where some equations involve derivatives explicitly, and fully implicit, where derivatives may not be explicitly present.
Boundary and initial conditions play pivotal roles in solving DAEs, providing the necessary constraints to ensure unique and stable solutions. Proper formulation of these conditions is crucial, as inconsistent or inadequate initial conditions can lead to non-converging solutions.
Here, we provide a generalized expression of DAEs.
We define the set $S_t$ as follows:
$$S_t = \left\{ t,\; y_1(t), y_1'(t), \ldots, y_1^{(m_1)}(t),\; \ldots,\; y_n(t), y_n'(t), \ldots, y_n^{(m_n)}(t) \right\}$$
where
  • $t$ represents the independent variable, typically time.
  • $y_i(t)$ denotes the $i$th dependent variable as a function of $t$.
  • $y_i'(t), y_i''(t), \ldots, y_i^{(m_i)}(t)$ represent the first, second, and up to the $m_i$th order derivatives of $y_i(t)$, where $m_i$ is the highest order of derivative for the $i$th variable.
Subsequently, the DAE system can be succinctly expressed as:
$$F_1(S_t) = 0, \quad F_2(S_t) = 0, \quad \ldots, \quad F_{n_e}(S_t) = 0,$$
where
  • $F_j(S_t)$ symbolizes the functions delineating the interrelations within the DAE system. Each $F_j$ corresponds to the $j$th equation in the system.
  • $n_e$ denotes the number of equations in the DAE system, indicating the size of the system.
Boundary conditions are essential for constraining the solution space of DAE systems. They are articulated as a series of equations that the solution must satisfy at predetermined points within the domain, not limited to its boundaries. Formally, these conditions are given by:
$$B_1(S_t) = 0, \quad B_2(S_t) = 0, \quad \ldots, \quad B_{n_b}(S_t) = 0,$$
where
  • $B_k(S_t)$ denotes the $k$th boundary condition function, which applies to the system state at a critical point $t$ within the domain.
  • $n_b$ represents the total number of boundary conditions enforced on the system.
The DAEs for two power systems are given in Appendix A and Appendix B.
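To make the residual form $F_j(S_t) = 0$ concrete, the sketch below writes the classical single-machine swing equation in exactly this shape. The parameter values and the equation itself are illustrative only; they are not the test-system models given in the appendices.

```python
import numpy as np

# Illustrative swing-equation residuals in the form F_j(S_t) = 0:
#   F_1 = delta' - omega
#   F_2 = M*omega' + D*omega - (P_m - P_max*sin(delta))
# All parameter values below are assumptions for this sketch.
M, D = 0.1, 0.05          # inertia and damping constants
P_m, P_max = 0.8, 1.0     # mechanical input and maximum electrical power

def residuals(t, delta, omega, ddelta, domega):
    """Return the residual vector [F_1, F_2] at one point of S_t."""
    F1 = ddelta - omega
    F2 = M * domega + D * omega - (P_m - P_max * np.sin(delta))
    return np.array([F1, F2])

# At the equilibrium delta* = arcsin(P_m / P_max) with omega = 0 and all
# derivatives zero, both residuals vanish, as expected for a solution point.
delta_eq = np.arcsin(P_m / P_max)
F = residuals(0.0, delta_eq, 0.0, 0.0, 0.0)
```

Any candidate trajectory can be scored the same way: evaluate the residual vector at sample times and penalize its deviation from zero, which is precisely what the loss functions in Section 3 do.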

3. A Unified Quantum Neural Network Framework for DAEs

In solving DAEs, our approach employs QNNs, aiming to approximate the solutions $y_i(t)$. These QNNs encompass several components: data encoding, a parameterized ansatz, measurement, and a classical optimizer. Each of these components is elaborated on in later sections of this paper. The methodology primarily involves iteratively updating the circuit parameters. This process continues until the error in the computed solution, based on the circuit’s output, is reduced to a satisfactory level within the context of the DAE equations. An in-depth illustration of the QNN’s architecture, particularly its iterative parameter update mechanism, is presented in Figure 1.
From a high-level perspective, this approach comprises two distinct segments: the quantum segment and the classical segment.

3.1. Quantum Segment

To initiate the process, we start with a quantum state represented as:
$$|\psi_0\rangle = |0\rangle^{\otimes n}$$
where $n$ is the number of qubits in the circuit. We then embed our classical input $x$ into the circuit through a quantum operator $E(q(x))$, as expressed in Equation (4).
$$|\psi_1(x, q)\rangle = E(q(x)) \, |\psi_0\rangle$$
The encoding function $q(x)$ serves as a preprocessing step for the classical input before it is embedded into the quantum circuit. Next, a parametric multi-layer quantum ansatz operator $U(\theta)$ is applied to make the circuit trainable.
$$|\psi_2(x, q, \theta)\rangle = U(\theta) \, |\psi_1(x, q)\rangle$$
The $\theta$ is a matrix of parameters and can be written as
$$\theta = \begin{bmatrix} \theta_{00} & \theta_{01} & \cdots & \theta_{0L} \\ \theta_{10} & \theta_{11} & \cdots & \theta_{1L} \\ \vdots & \vdots & \ddots & \vdots \\ \theta_{n0} & \theta_{n1} & \cdots & \theta_{nL} \end{bmatrix},$$
where $\theta_{ij}$ represents the parameter in the $i$th qubit and in the $j$th layer.
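The quantum segment can be mimicked with plain linear algebra for small circuits. The NumPy sketch below builds $|\psi_2\rangle = U(\theta)E(q(x))|\psi_0\rangle$ for two qubits, using an $R_y$ embedding and a single ansatz layer of $R_y$ rotations; the gate placement and layer count are illustrative, not the paper's exact circuit.

```python
import numpy as np

def ry(a):
    """Single-qubit R_y rotation matrix."""
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def kron_all(*ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

n = 2
psi0 = np.zeros(2 ** n); psi0[0] = 1.0   # |psi_0> = |0>^{tensor n}

x = 0.3                                  # classical input
E = kron_all(np.eye(2), ry(x))           # embedding: identity on qubit 0, R_y(x) on qubit 1

theta = np.array([0.7, -0.2])            # one ansatz layer, one angle per qubit
U = kron_all(ry(theta[0]), ry(theta[1]))

psi2 = U @ (E @ psi0)                    # |psi_2(x, q, theta)>
```

Because all gates are unitary, the state stays normalized; expectation values of observables on `psi2` then feed the classical segment below.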

3.2. Classical Segment

The solution $f(x, q, \theta)$ is considered a function of the expectation value of an observable $\hat{O}$ from the final state of the circuit ($|\psi_2\rangle$). Mathematically, this is given by:
$$M(x, q, \theta) = \langle \psi_2(x, q, \theta) | \, \hat{O} \, | \psi_2(x, q, \theta) \rangle$$
$$f(x, q, \theta) = g(M(x, q, \theta))$$
where g represents the post-measurement function applied to M, approximating the value of the function. Subsequently, the quantum models are trained by updating and optimizing the parameter θ with a classical optimizer guided by a loss function. This loss function comprises two parts, providing a measure of the solution’s accuracy.
The first part assesses the discrepancy between the estimated and true boundary values. This is quantified using (9), where $n_b$ denotes the total number of boundary points.
$$\text{loss}_b = \sum_{i=1}^{n_b} \left( B_i(S_t) - f_{\text{model}}(S_t) \right)^2$$
Here, $B_i(S_t)$ is derived from the boundary conditions specified by problem (1), while $f_{\text{model}}$ is obtained from the quantum circuit, as described in (8).
The second part assesses the degree to which the approximated solutions satisfy the system of DAEs (1) for $n_p$ training points. It is expressed as:
$$\text{loss}_p = \sum_{i=1}^{n_p} \sum_{j=1}^{n_e} F_j(S_{t_i})^2$$
Here, $n_e$ denotes the number of DAEs in the system.
Finally, the total loss is computed as a weighted sum of the two components, reflecting the relative importance of boundary accuracy and training point adherence. This is expressed as:
$$\text{loss}_{\text{total}} = \lambda_1 \cdot \text{loss}_b + \lambda_2 \cdot \text{loss}_p$$
In this equation, $\lambda_1$ and $\lambda_2$ are tuning coefficients. These coefficients are adjustable parameters that allow for the fine-tuning of the model, enabling the prioritization of either boundary accuracy ($\lambda_1$) or adherence at the training points ($\lambda_2$). The choice of these coefficients depends on the specific requirements and goals of the model, as well as the characteristics of the problem being addressed.
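To make the two-term loss concrete, the toy below applies it to the scalar ODE $y' = y$ with $y(0) = 1$, using a linear trial function in place of a QNN output; the interval, point count, and weights are arbitrary choices for this sketch.

```python
import numpy as np

def total_loss(p, lam1=1.0, lam2=1.0, n_p=20):
    """Weighted loss = lam1 * boundary term + lam2 * residual term,
    for the toy ODE y' = y, y(0) = 1, with trial model f(t) = p0 + p1*t."""
    p0, p1 = p
    t = np.linspace(0.0, 0.1, n_p)        # training points
    f = p0 + p1 * t                       # model value at training points
    df = p1 * np.ones_like(t)             # model derivative (exact for a line)
    loss_b = (f[0] - 1.0) ** 2            # boundary mismatch at t = 0
    loss_p = np.sum((df - f) ** 2)        # DAE residual y' - y at training points
    return lam1 * loss_b + lam2 * loss_p

good = total_loss((1.0, 1.0))  # first-order Taylor expansion of e^t
bad = total_loss((0.0, 0.0))   # model that ignores the boundary condition
```

In the actual method, `f` and `df` would come from the quantum circuit output and its derivative, but the structure of the loss is identical.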

3.3. Classical Optimizer

We explored several classical optimizers, including Stochastic Gradient Descent (SGD), Adam optimizer, Broyden–Fletcher–Goldfarb–Shanno (BFGS), Limited-Memory BFGS (L-BFGS), and Simultaneous Perturbation Stochastic Approximation (SPSA).
Among these optimizers, BFGS stands out for its fast convergence and robustness in handling non-convex landscapes, making it advantageous in scenarios with saddle points and flat regions [30]. SGD is computationally efficient but may converge slowly [31]. The Adam optimizer offers adaptive learning rates but can exhibit erratic behavior in high-dimensional landscapes [32]. SPSA, on the other hand, is a gradient-free optimizer suitable for optimizing circuits subject to noise [33].
Given the nature of our problem, we opted for the BFGS optimizer to enhance the performance of our QNN circuit. Its capability to navigate non-convex landscapes aligns seamlessly with the intricacies and challenges inherent in our problem domain. Nevertheless, it is imperative to underscore that the choice of an optimizer should be contingent upon the specific demands and constraints of the given problem.
For instance, if the hardware introduces noise, the SPSA optimizer might prove more effective. It is crucial to evaluate the unique attributes of the problem and select an optimizer accordingly to ensure optimal performance. In this study, we utilized ideal simulators devoid of noise, and we opted for BFGS based on the elucidated reasons.
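A minimal sketch of this classical optimization loop, assuming SciPy's BFGS implementation (`scipy.optimize.minimize`): the single-qubit loss and the target value below are illustrative stand-ins for the full QNN loss.

```python
import numpy as np
from scipy.optimize import minimize

# Toy loss over one circuit parameter: the probability of measuring |1>
# after R_y(theta)|0> is sin^2(theta/2); fit it to an arbitrary target.
target = 0.3

def loss(theta):
    p1 = np.sin(theta[0] / 2.0) ** 2
    return (p1 - target) ** 2

# BFGS drives the parameter toward a minimizer of the scalar loss,
# exactly as in the QNN training loop (with the real DAE loss instead).
res = minimize(loss, x0=np.array([0.5]), method="BFGS")
theta_opt = res.x[0]
```

Swapping `method="BFGS"` for another optimizer (or SPSA from a separate package on noisy hardware) changes only this call, not the loss construction.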

4. QNN Architectures for Power System Simulation

The implementation of the proposed approach necessitated a pivotal consideration: the selection of a quantum model. This model should not only exhibit expressive capabilities but also closely align with the function it is intended to approximate. In the realm of quantum computing, certain global approximator models, such as Haar Unitary, have been proposed. However, their extensive complexity and the vast parameter space required for training them make them less effective. These models often exhibit weak performance, are highly susceptible to encountering saddle-point issues, and may become trapped in local minima during the optimization and training process [34,35].
In this paper, we leveraged the physics information inherent in power system equations to design quantum models. These models feature generative functions that closely mimic the shapes of power system simulation functions. Consequently, the optimization process becomes significantly more efficient. Power systems involve two distinct types of functions: those characterized by fluctuations and sinusoidal patterns, requiring a model adapted to these features, and those amenable to polynomial fitting, such as power-series and Chebyshev functions.
For sinusoidal-friendly models, we employed our previously proposed Sinusoidal-Friendly QNN (SFQ). Additionally, for polynomial-friendly functions, we introduced a novel and promising quantum model named Polynomial-Friendly QNN (PFQ). PFQ harnesses the power of quantum superposition, offering the potential for quantum advantage.
To clearly link the proposed QNNs with the power system transient simulation problem, we emphasize how each QNN approximates solutions to specific DAE components. The DAEs governing power systems, as shown in Equations (A1) to (A3) for the SMIB system and Equations (A4) to (A9) for the 3-machine system, model the time evolution of rotor angle $\delta(t)$ and speed deviation $\omega(t)$.
The QNNs were trained to learn these time-dependent state variables by approximating the functions $\delta(t)$ and $\omega(t)$ using quantum circuits that minimized DAE residuals and boundary mismatches, as defined in Equations (9)–(11). The Sinusoidal-Friendly QNN (SFQ) was particularly suited to learn the sinusoidal behaviors, which reflect oscillatory generator dynamics. In contrast, the Polynomial-Friendly QNN (PFQ) better matched the slowly varying or polynomial-like behaviors.
Thus, each QNN architecture was designed with physical interpretability in mind, aligning its structure with the mathematical form of power-system DAEs, ensuring the learned quantum model respected the physics of the underlying system.
In the upcoming section, we delve into the properties of quantum segments for these specific QNNs utilized in our study.

4.1. Structure 1—Sinusoidal Friendly QNN (SFQ)

  • Quantum Segment
The first QNN we utilized was the one we previously explored in [12] for sinusoidal-friendly functions. We proved that this QNN could efficiently approximate sinusoidal and fluctuating functions. Here, we explain the structure of the circuit we used and leave the details of the expressivity of the model to the paper we mentioned [12].
For SFQ, we employed two different kinds of embedding: $R_y$ embedding (Figure 2) and $\sin^{-1}$ embedding (Figure 3).

4.1.1. $\sin^{-1}$ Embedding

For this type of embedding, we considered two qubits with
$$q(x) = \begin{bmatrix} \sin^{-1}(x) \\ 2\pi x \end{bmatrix}$$
and
$$E(q(x)) = R_y(\sin^{-1}(x)) \otimes R_y(2\pi x)$$

4.1.2. $R_y$ Embedding

For the $R_y$ embedding in the SFQ structure, we also considered two qubits, but this time
$$q(x) = \begin{bmatrix} I \\ x \end{bmatrix}$$
and
$$E(q(x)) = I \otimes R_y(x)$$
To implement the quantum ansatz U ( θ ) , we used a series of layers to train the circuit, fitting it to approximate the solution of our DAE. The ansatz consisted of L layers, and the flexibility and variance of our circuit depended on L [12]. The more layers, the more time and data points we needed to train it, so the demand on the number of layers depended on the complexity of our function to approximate [12].
The observable $\hat{O}$ for this structure is $H Z \otimes I$, where $H$ is a Hadamard gate, $Z$ is the Pauli-Z matrix, and $I$ is the identity matrix. The post-processing function $g$ in this structure is [12]:
$$g(u) = \tau_0 + \tau_1 \cdot u + \tau_2 \cdot u^2.$$
Assuming that the expectation value of our observable is $\langle Z_0 \rangle$, then we have
$$f(x, q, \theta) = \tau_0 + \tau_1 \cdot \langle Z_0 \rangle + \tau_2 \cdot \langle Z_0 \rangle^2.$$
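The sinusoidal basis that the SFQ builds on can be checked directly: applying $R_y(x)$ to $|0\rangle$ and measuring Pauli-Z gives $\langle Z \rangle = \cos(x)$, which the post-processing $g$ then reshapes. A NumPy verification (single qubit, no ansatz, for illustration only):

```python
import numpy as np

def ry(a):
    """Single-qubit R_y rotation matrix."""
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

x = 0.8
psi = ry(x) @ np.array([1.0, 0.0])   # R_y(x)|0>
Z = np.diag([1.0, -1.0])             # Pauli-Z observable
expval = psi @ Z @ psi               # <psi|Z|psi> = cos^2(x/2) - sin^2(x/2) = cos(x)
```

With trainable ansatz layers stacked on top, the circuit composes such cosine features into the fluctuating waveforms typical of rotor dynamics.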

4.2. Structure 2—Polynomial Friendly QNN (PFQ)

We propose an innovative model harnessing quantum principles in QNNs, which exhibits exceptional suitability for approximating functions represented by power series and polynomials. Assume the ultimate solution of a DAE is expressed as:
$$h(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_m x^m.$$
Subsequently, we define $|\psi_0\rangle$ as
$$|\psi_0\rangle = |0\rangle^{\otimes n},$$
where $n$ is the number of qubits. Afterward, we define the quantum embedding operator $\hat{E}$ as:
$$|\psi_1(x)\rangle = \hat{E} \, |0\rangle = \frac{\begin{bmatrix} 1 & x & \cdots & x^m \end{bmatrix}^T}{\left\lVert \begin{bmatrix} 1 & x & \cdots & x^m \end{bmatrix}^T \right\rVert},$$
where $m = 2^n$.
The construction of the operator E ^ is described in [36], where researchers developed an effective method for quantum state preparation using measurement-induced steering, enabling the initialization of qubits in arbitrary states on quantum computers. Ultimately, a quantum ansatz operator U ( θ ) is utilized to produce a subset of the power series, as expressed in Equation (20).
$$|\psi_2(x, \theta)\rangle = U(\theta) \, |\psi_1(x)\rangle$$
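A sketch of the PFQ embedding for $n = 2$ qubits, preparing a state whose amplitudes are proportional to successive powers of $x$; this toy fills the $2^n$ available amplitudes (powers 0 through 3), and is a classical stand-in for the measurement-induced state-preparation scheme of [36].

```python
import numpy as np

def pfq_embed(x, n_qubits):
    """Return the normalized amplitude vector proportional to
    [1, x, x^2, ...] over all 2^n_qubits amplitudes."""
    v = x ** np.arange(2 ** n_qubits)   # [1, x, x^2, x^3, ...]
    return v / np.linalg.norm(v)

psi1 = pfq_embed(0.5, 2)                # amplitudes proportional to 1, 0.5, 0.25, 0.125
```

Applying a trainable ansatz on top of this state mixes the power-series terms, which is what lets the circuit realize polynomial approximants.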

4.3. Simplified PFQ Form with m = 1

To illustrate this process for a single qubit, let us consider an $R_y$ gate as in Equation (21).
$$|\psi_2(x, \theta)\rangle = R_y(\theta) \, |\psi_1(x)\rangle$$
The corresponding quantum circuit with one rotation gate is depicted in Figure 4.
Assuming we set $m$ to 1 in Equation (19), we obtain:
$$|\psi_1(x)\rangle = \frac{1}{\sqrt{1+x^2}} \begin{bmatrix} 1 \\ x \end{bmatrix}.$$
The $R_y(\theta)$ is defined as:
$$R_y(\theta) = \begin{bmatrix} \cos\frac{\theta}{2} & -\sin\frac{\theta}{2} \\ \sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{bmatrix}.$$
By substituting Equations (22) and (23) into Equation (21), we obtain:
$$|\psi_2(x, \theta)\rangle = \frac{1}{\sqrt{1+x^2}} \begin{bmatrix} \cos\frac{\theta}{2} - x\sin\frac{\theta}{2} \\ \sin\frac{\theta}{2} + x\cos\frac{\theta}{2} \end{bmatrix}.$$
Now, let us calculate the probability $P(|1\rangle)$:
$$P(|1\rangle) = \frac{1}{1+x^2} \left( \sin^2\frac{\theta}{2} + x^2\cos^2\frac{\theta}{2} + 2x\sin\frac{\theta}{2}\cos\frac{\theta}{2} \right).$$
After some trigonometric simplifications on $P(|1\rangle)$, we obtain:
$$P(|1\rangle) = \frac{1}{1+x^2} \left( \sin^2\frac{\theta}{2} + \sin\theta \, x + \cos^2\frac{\theta}{2} \, x^2 \right).$$
Multiplying $P(|1\rangle)$ by $(1+x^2)$ yields:
$$(1+x^2) \, P(|1\rangle) = \sin^2\frac{\theta}{2} + \sin\theta \, x + \cos^2\frac{\theta}{2} \, x^2.$$
Equation (26) can be abstracted as:
$$h_1(x) = b_0 + b_1 x + b_2 x^2$$
where
$$b_0 = \sin^2\frac{\theta}{2}, \quad b_1 = \sin\theta, \quad b_2 = \cos^2\frac{\theta}{2}.$$
This expression can be viewed as a segment of a quadratic power series. However, it encounters some limitations, which we address subsequently.
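These closed-form coefficients can be verified numerically by simulating the single-qubit state vector directly; the values of $\theta$ and $x$ below are arbitrary test points.

```python
import numpy as np

def ry(a):
    """Single-qubit R_y rotation matrix."""
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

theta, x = 0.9, 0.4

# Embedded state (1, x)/sqrt(1+x^2), then the R_y(theta) ansatz.
psi1 = np.array([1.0, x]) / np.sqrt(1 + x ** 2)
psi2 = ry(theta) @ psi1

# Left side: (1 + x^2) * P(|1>) from the simulated circuit.
lhs = (1 + x ** 2) * psi2[1] ** 2

# Right side: the closed-form quadratic b0 + b1*x + b2*x^2.
b0 = np.sin(theta / 2) ** 2
b1 = np.sin(theta)
b2 = np.cos(theta / 2) ** 2
rhs = b0 + b1 * x + b2 * x ** 2
```

The two quantities agree to machine precision, confirming the derivation above for any choice of the test point.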

4.4. Enhanced PFQ Version with m = 1

Initially, the limitation of a simple PFQ concerns the restricted range of $b_0$, specifically, $0 \le b_0 \le 1$. Secondly, the range of $b_1$ is also constrained within $-1 \le b_1 \le 1$.
To mitigate these constraints, we propose the incorporation of two classical adjustable parameters, enhancing the adaptability of the approximating function. Consequently, we define the output as:
$$h_2(x) = \tau_1 \cdot h_1(x) + \tau_0$$
Here, $\tau_0$ stands as a classical trainable parameter, whereas $\tau_1$ may either be a physics-informed constant, tailored to the problem’s boundary conditions, or a trainable parameter. In cases where the boundary conditions are undefined, opting for a trainable $\tau_1$ is advisable.
With these adjustments, we obtain:
$$b_0' = \tau_0 + \tau_1 \cdot b_0,$$
and
$$b_1' = \tau_1 \cdot b_1.$$
As a result, the range of $b_0'$ extends to the entire set of real numbers $\mathbb{R}$, and $b_1'$ falls within the interval $-\tau_1 \le b_1' \le \tau_1$.
The third limitation is that the range of $b_2$ is limited, i.e., $0 \le b_2 \le 1$. With the solution of the second limitation, we mitigate the problem of not having negative coefficients for $x^2$, since the added $\tau_1 b_2$ can produce a negative coefficient for $x^2$. However, we still need to make the model more expressive, so that the coefficient of $x^2$ can take both negative and positive values independently. To do this, we can add rotation gates that increase the degrees of freedom of the coefficients. As an example, consider adding an $R_z$ gate after the $R_y$ gate, as shown in Figure 5. Then, we have:
$$|\psi_3(x, \theta)\rangle = R_z(\theta) R_y(\theta) |\psi_1(x)\rangle = R_z(\theta) |\psi_2(x, \theta)\rangle$$
We can write $R_z$ as:
$$R_z(\theta) = \begin{bmatrix} e^{-i\theta/2} & 0 \\ 0 & e^{i\theta/2} \end{bmatrix}$$
or
$$R_z(\theta) = \begin{bmatrix} \cos\frac{\theta}{2} - i\sin\frac{\theta}{2} & 0 \\ 0 & \cos\frac{\theta}{2} + i\sin\frac{\theta}{2} \end{bmatrix}.$$
By substituting $|\psi_2(x, \theta)\rangle$ from Equation (24) into (28), we obtain:
$$|\psi_3(x, \theta)\rangle = \frac{1}{\sqrt{1+x^2}} \, R_z(\theta) \begin{bmatrix} \cos\frac{\theta}{2} - x\sin\frac{\theta}{2} \\ \sin\frac{\theta}{2} + x\cos\frac{\theta}{2} \end{bmatrix}$$
Consequently, substituting $R_z(\theta)$ from Equation (30) into (31) and multiplying both sides by $\sqrt{1+x^2}$ yield:
$$\sqrt{1+x^2} \, |\psi_3(x, \theta)\rangle = \begin{bmatrix} \cos\frac{\theta}{2} - i\sin\frac{\theta}{2} & 0 \\ 0 & \cos\frac{\theta}{2} + i\sin\frac{\theta}{2} \end{bmatrix} \begin{bmatrix} \cos\frac{\theta}{2} - x\sin\frac{\theta}{2} \\ \sin\frac{\theta}{2} + x\cos\frac{\theta}{2} \end{bmatrix} = \begin{bmatrix} \cos^2\frac{\theta}{2} - x\cos\frac{\theta}{2}\sin\frac{\theta}{2} - i\sin\frac{\theta}{2}\cos\frac{\theta}{2} + ix\sin^2\frac{\theta}{2} \\ \sin\frac{\theta}{2}\cos\frac{\theta}{2} + x\cos^2\frac{\theta}{2} + i\sin^2\frac{\theta}{2} + ix\sin\frac{\theta}{2}\cos\frac{\theta}{2} \end{bmatrix} = \begin{bmatrix} \cos^2\frac{\theta}{2} - \frac{x}{2}\sin\theta - \frac{i}{2}\sin\theta + ix\sin^2\frac{\theta}{2} \\ \frac{1}{2}\sin\theta + x\cos^2\frac{\theta}{2} + i\sin^2\frac{\theta}{2} + \frac{ix}{2}\sin\theta \end{bmatrix},$$
which shows that we have $x\cos^2\frac{\theta}{2}$ and $ix\sin\theta$ terms in $\sqrt{1+x^2} \, |\psi_3(x, \theta)\rangle$. This implies that when calculating the probability of $|1\rangle$, terms like $\cos^2\frac{\theta}{2}\sin\theta \, x^2$ can appear, resulting in both negative and positive coefficients of $x^2$.
For simplicity, we assumed the same θ for both R y and R z . However, it is also possible to consider different θ values (e.g., see Figure 6) to enhance the model’s trainability. Additionally, the introduction of more gates generates additional terms, enabling the model to express a broader range of power series with the same degree.
To evaluate the function to be approximated through this method, we use the same approach as in Equation (8), where $\hat{O}$ corresponds to the expectation value of $Z$ for the first qubit. The expectation value can be calculated as follows:
$$\langle Z \rangle = P(|0\rangle) - P(|1\rangle)$$
Here, we have both $P(|0\rangle)$ and $P(|1\rangle)$, incorporating different coefficients for power series of degree 2.

4.5. Scalability and Advantage of PFQ

PFQ’s scalability is facilitated by increasing the number of qubits, as outlined in Equation (19). Specifically, augmenting the qubit count, $n$, exponentially increases the polynomial’s degree, $m$, where $m = 2^n$. Furthermore, the introduction of entanglement between qubits, alongside additional rotation gates, enhances the degrees of freedom for the coefficients of the generated independent terms in our polynomials. In fact, this approach enables the model to effectively tackle more complex systems by employing extra qubits, thereby extending the degrees of the power series.
Another critical aspect to note is that the output from our PFQ circuit undergoes post-processing, expressed as
$$\text{output}(x, \tau_3, \tau_4, \theta) = \tau_3 \cdot \langle Z \rangle + \tau_4,$$
where $\tau_3$ and $\tau_4$ are classical parameters, and $\theta$ represents quantum parameters, with $\langle Z \rangle$ denoting the expectation value of the observable $Z$ for the first qubit. Consequently, the algorithm involves two classical parameters and a flexible number of quantum parameters, enhancing its quantum scalability.
In real-world power systems, DAEs frequently involve strong nonlinearities (e.g., trigonometric relationships in rotor dynamics) and discontinuities (e.g., abrupt changes during faults or switching events). Our proposed QNN-based framework addresses these challenges through two mechanisms:
Nonlinearity adaptability: Both the SFQ and PFQ architectures are designed to approximate nonlinear behaviors. SFQ targets sinusoidal oscillations, while PFQ, with its polynomial basis, captures smooth nonlinear trends. The quantum encoding and trainable parameterized circuits enable the representation of complex function spaces with fewer parameters than classical NNs, enhancing approximation in nonlinear regions.
Discontinuity management via domain decomposition: For discontinuous phenomena (e.g., faults), we divide the simulation into sequential time segments—pre-fault, fault-on, and post-fault—with QNNs trained independently in each. This approach allows the model to reset its approximation locally, mitigating the impact of discontinuities.
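The segment-chaining strategy can be sketched as follows; a forward-Euler step stands in for "train a QNN on this segment," and the ODE $y' = -y$ with $y(0) = 1$ is purely illustrative.

```python
import numpy as np

def solve_segment(y0, t0, t1, steps=1000):
    """Stand-in for training a model on one time segment: integrate
    y' = -y from t0 to t1 starting at y0 (forward Euler for simplicity)."""
    y, dt = y0, (t1 - t0) / steps
    for _ in range(steps):
        y += dt * (-y)
    return y

# Divide [0, 2] s into four sequential segments; the final state of each
# segment becomes the initial condition of the next, so a discontinuity
# (e.g., a fault at a segment boundary) only affects the segments after it.
segments = np.linspace(0.0, 2.0, 5)
y = 1.0
for t0, t1 in zip(segments[:-1], segments[1:]):
    y = solve_segment(y, t0, t1)
```

In the actual method, each call would train a fresh QNN on its segment (with possibly different DAE coefficients pre-, during, and post-fault), while the chaining logic stays the same.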
Importantly, recent work (e.g., [28]) has shown that QNNs exhibit strong generalization ability and robustness to overfitting, enabling them to maintain performance across training and testing phases even in complex nonlinear settings.

5. Results

This section presents the results of using various QNNs to solve DAEs. The QNNs were implemented in PennyLane [37,38] on the NCSA Delta high-performance computer (HPC). The BFGS optimizer was employed to optimize the parameters in the QNNs. Our simulations primarily addressed DAEs in power systems, focusing on one-machine and three-machine test systems. The details of their respective DAEs are given in Appendix A and Appendix B. In this study, no additional control mechanisms (e.g., excitation systems or PSS) were included. This design choice allowed us to focus on the fundamental response of the power system models and the QNNs’ capacity to approximate their dynamics. Results detail the performance of different QNN structures with different embedding methods (Section 5.1), the setting of the time interval and the number of training points (Section 5.2 and Section 5.3), and the solution of the entire period of the DAE (Section 5.4).

5.1. QNN Structure and Embedding

To analyze the efficacy of various QNN architectures, we initially focused on a single-machine DE over a 0.5-second interval (the setting of the time interval is discussed in the next subsection), utilizing 20 training points. The experimental conditions were kept constant, varying only in the QNN structure applied. As depicted in Figure 7, the SFQ structure excelled in capturing the sinusoidal behavior inherent in the one-machine solution, achieving notably lower error compared to the PFQ structure, which exhibited less precision in replicating the solution dynamics.
Transitioning to more complex systems, we investigated the performance of these QNN structures on the DAEs governing the three-machine system. This evaluation was conducted over a shorter interval of 0.2 s with 10 training points. Our results, as summarized in Table 1, demonstrate that all models effectively solved the DAEs with minimal error. Note that $\omega$ and $\delta$ in Table 1 and Table 2 represent the rotor speed and rotor angle, respectively. Notably, the PFQ model, illustrated in Figure 6, was particularly adept at managing the complexities of the three-machine system, displaying significant convergence improvements and an eightfold increase in training speed compared to the SFQ model.
To further investigate the robustness of these models, we extended the experimental time interval to 2 s and increased the number of training points to 50. The expanded results, presented in Table 2, clearly indicate that the PFQ structure outperformed the others. It should be noted that due to the restricted domain of the arcsin function, which requires inputs within the range $(-1, 1)$, implementing arcsin embedding within the specified architecture did not successfully solve the equations, resulting in an error, as shown in the third column of Table 2. While rescaling the input through normalization could address this issue, our objective was to maintain the same setup for comparison purposes.
Based on these findings, we suggest selecting the PFQ structure for solving the DE and DAE in power systems, and we used it in the rest of the paper, due to its accuracy and shorter training time.

5.2. Time Interval for DAE Solving

In this paper, the DAE was solved by dividing the time domain into specific intervals, using the final state of one interval as the initial state for the next. This approach simplified the challenge of function fitting within each interval.
The selection of an appropriate time interval is critical. A shorter time interval can enhance solution accuracy by limiting the exploration space. However, while a smaller span increases the frequency of DAE resolutions required, it does not necessarily lead to longer overall runtime, as training over shorter spans can be faster, especially for complex models. Determining the best time span involves a trade-off between accuracy and computational efficiency, typically resolved through trial and error.
Figure 8 demonstrates the effect of different time spans on the solution accuracy for $\omega(t)$ in the one-machine system using 20 training points. The results indicate that a time span of $t_s = 0.2$ s delivered the most precise outcomes. However, a span of $t_s = 0.5$ s also provided satisfactory results, and to minimize the number of computational runs, we selected $t_s = 0.5$ s. Following similar experimental guidance, a span of $t_s = 2$ s was chosen for the three-machine DAEs, balancing precision and computational demand.

5.3. Number of Training Points

The number of training points for a quantum circuit is determined by balancing accuracy against computational cost: increasing the number of points typically improves precision but requires more computational time, so this choice again involves a trade-off between efficiency and desired accuracy. Figure 9 illustrates this relationship for the SMIB system, showing how an increased number of training points reduces the error between the quantum approximation and the actual solution for ω(t) over the interval t ∈ [0, 0.5].
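The same trade-off can be illustrated with a classical stand-in. The sketch below fits a hypothetical smooth trajectory (not the paper's ω(t)) with a fixed-degree least-squares polynomial in place of the QNN, and measures the fit error on a dense grid as the number of training points grows from 3 to 20, mirroring the panels of Figure 9.

```python
import numpy as np

# Illustrative stand-in: a fixed-degree polynomial fit instead of a QNN,
# applied to a hypothetical damped-oscillation curve on [0, 0.5].
t_dense = np.linspace(0.0, 0.5, 200)
reference = np.sin(6 * t_dense) * np.exp(-t_dense)

mses = {}
for n in (3, 5, 10, 20):
    t_train = np.linspace(0.0, 0.5, n)
    y_train = np.sin(6 * t_train) * np.exp(-t_train)
    coeffs = np.polyfit(t_train, y_train, deg=min(n - 1, 4))
    # Error evaluated on the dense grid, not just the training points:
    mses[n] = np.mean((np.polyval(coeffs, t_dense) - reference) ** 2)
    print(n, mses[n])
```

With few points the fit is only constrained where it was trained, so the dense-grid error stays large; adding points tightens the approximation over the whole interval, at the cost of more evaluations per training step.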

5.4. Complete Simulation

By aggregating results over successive time intervals, we successfully solved the DAEs for both the one-machine and three-machine systems. In the one-machine scenario, we set the total time span to 8 s with a time interval of t_s = 0.5 s, using 20 training points per interval. We utilized the SFQ structure with R_y embedding, and the complete solution is presented in Figure 10.
For the three-machine system, we set the total time span to 20 s, with a time interval of t_s = 2 s and, similarly, 20 training points per interval. The complete solution achieved with the PFQ structure is illustrated in Figure 11.
The results affirmed that the quantum solutions closely aligned with the actual solutions for both δ and ω variables, demonstrating the ability of the SFQ and PFQ to accurately simulate the dynamics of power systems in the one-machine and three-machine cases, respectively.

6. Conclusions

This study presented a novel application of Quantum Neural Networks (QNNs) to the simulation of power system transients through the solution of differential-algebraic equations (DAEs). We developed two specialized QNN architectures, the Sinusoidal-Friendly QNN (SFQ) and the Polynomial-Friendly QNN (PFQ), which incorporate domain knowledge about the underlying physics of power systems to enhance training efficiency and accuracy. Our results on benchmark systems (SMIB and WSCC 3-machine) demonstrated that QNNs can accurately approximate DAE solutions, with mean square errors as low as 10^−5 in multi-phase transient scenarios. The proposed approach offers two main benefits: (1) efficient learning with fewer parameters than classical neural networks; and (2) tailored architectures (SFQ and PFQ) that match the functional characteristics of power system responses.
While the current implementation was simulated on noiseless quantum backends, the proposed models are hardware-compatible and set the stage for real-time quantum-assisted simulation tools in power grid analysis. Future research will aim to incorporate advanced power system controllers, scale the proposed methodology to larger and more complex power networks, and validate the approach on near-term quantum devices with noise-resilient training strategies to rigorously assess scalability and computational benefits. This work advances the frontier of applying quantum machine learning to critical engineering problems and contributes a new paradigm for the data-efficient, structure-aware simulation of power systems using quantum resources.

Author Contributions

Conceptualization, M.S. and J.Z.; methodology, M.S. and J.Z.; software, M.S.; validation, M.S. and J.Z.; formal analysis, M.S. and J.Z.; investigation, M.S. and J.Z.; resources, M.S. and J.Z.; data curation, M.S. and J.Z.; writing—original draft preparation, M.S.; writing—review and editing, J.Z.; visualization, M.S. and J.Z.; supervision, J.Z.; project administration, J.Z.; funding acquisition, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the NSF ERI program, under award number 2138702. This work used the Delta system at the National Center for Supercomputing Applications through allocations CIS220136 and CIS240211 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296.

Data Availability Statement

The data will be made available upon request.

Acknowledgments

We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors and do not reflect the official policy or position of IBM or the IBM Quantum team. We thank I. Tutul for providing the Julia code used to solve the simulation problem of the two systems using classical methods.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Single-Machine Infinite Bus (SMIB) System

This part details the ODE for the SMIB system, as described below [39]:
\[
\frac{d\delta}{dt} = \omega - \omega_s \tag{A1}
\]
\[
\frac{d(\omega - \omega_s)}{dt} = K_1 - K_2 \sin(\delta) - K_3 (\omega - \omega_s) \tag{A2}
\]
where $K_1 = \frac{\omega_s}{2H} T_{m0}$, $K_2 = \frac{\omega_s}{2H}\cdot\frac{E_c V}{X}$, and $K_3 = \frac{\omega_s}{2H} D$; $E_c$ is the magnitude of the internal voltage of the machine, $X$ is the sum of the reactances, and $T_{m0}$ is the constant mechanical torque. The initial angle $\delta_0$ was 1, the initial speed difference $\omega_0 - \omega_s$ was 7, and $K_1 = 5$, $K_2 = 10$, $K_3 = 1.7$. These data were taken from [39].
We combined Equations (A1) and (A2) into a single DE by eliminating $\omega$:
\[
\frac{d^2\delta}{dt^2} = K_1 - K_2 \sin(\delta) - K_3 \frac{d\delta}{dt} \tag{A3}
\]
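As a sanity check, the SMIB swing equations above can be integrated classically. The sketch below uses SciPy's `solve_ivp` with the constants stated in this appendix (K1 = 5, K2 = 10, K3 = 1.7, δ0 = 1, initial speed difference 7); the solver and tolerances are our own choices, not part of the original setup.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical reference solution of the SMIB swing equation,
# state y = [delta, omega - omega_s], constants from Appendix A.
K1, K2, K3 = 5.0, 10.0, 1.7

def smib(t, y):
    delta, dw = y
    # d(delta)/dt = omega - omega_s;  d(omega - omega_s)/dt per (A2)
    return [dw, K1 - K2 * np.sin(delta) - K3 * dw]

sol = solve_ivp(smib, (0.0, 8.0), [1.0, 7.0],
                rtol=1e-9, atol=1e-12, dense_output=True)
print(sol.y[:, -1])   # [delta(8), (omega - omega_s)(8)]
```

With the damping term K3 > 0, the speed deviation decays toward zero over the 8 s window, which is the behavior the quantum solution in Figure 10 is compared against.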

Appendix B. WSCC Three-Machine System

Here, we provide the DAEs of the WSCC three-machine, nine-bus power system. The DAEs of the $i$th generator are given below [40]:
\[
\frac{d\delta_i}{dt} = \omega_s \,\Delta\omega_i \tag{A4}
\]
\[
\frac{d\Delta\omega_i}{dt} = \frac{1}{2H_i}\left(P_{mi} - P_{ei} - D_i \Delta\omega_i\right) \tag{A5}
\]
\[
P_{ei} = e_{xi} i_{xi} + e_{yi} i_{yi} \tag{A6}
\]
\[
\begin{bmatrix} e_{xi} \\ e_{yi} \end{bmatrix}
= \begin{bmatrix} \sin\delta_i & \cos\delta_i \\ -\cos\delta_i & \sin\delta_i \end{bmatrix}
\begin{bmatrix} 0 \\ e_{qi} \end{bmatrix} \tag{A7}
\]
\[
I_t = \begin{bmatrix} i_{x1} & i_{y1} & \cdots & i_{xn} & i_{yn} \end{bmatrix}^{\top}
= Y \begin{bmatrix} e_{x1} & e_{y1} & \cdots & e_{xn} & e_{yn} \end{bmatrix}^{\top} \tag{A8}
\]
\[
\begin{bmatrix} e_{xi} \\ e_{yi} \end{bmatrix}
= \begin{bmatrix} e_{qi}\cos\delta_i \\ e_{qi}\sin\delta_i \end{bmatrix}
- \begin{bmatrix} R_{ai} & -X_{di} \\ X_{di} & R_{ai} \end{bmatrix}
\begin{bmatrix} i_{xi} \\ i_{yi} \end{bmatrix} \tag{A9}
\]
where $n$ is the total number of generators; $\delta_i$ and $\Delta\omega_i$ are the rotor angle and the rotor speed deviation from the nominal value of generator $i$, respectively; $H_i$ and $D_i$ are the inertia and damping constants of generator $i$, respectively; $P_{mi}$ and $P_{ei}$ are the mechanical and electric powers of generator $i$, respectively; $e_{xi}$ and $e_{yi}$ in Equation (A7) are the internal bus voltages of generator $i$ in the non-rotating coordinate frame; $e_{qi}$ is the field voltage of generator $i$; $i_{xi}$ and $i_{yi}$ are the terminal currents of generator $i$; $Y$ is the admittance matrix; $R_{ai}$ and $X_{di}$ form the source impedance of generator $i$; and $e_{xi}$ and $e_{yi}$ in Equation (A9) are the terminal voltages along the $x$ and $y$ axes of generator $i$, respectively.
Initial values for $\delta_i$, $\omega_i$, $\omega_s$, $H_i$, $P_{mi}$, $D_i$, $e_{qi}$, $R_{ai}$, and $X_{di}$ were obtained from the machine dataset given in [40]. Using these data, the admittance matrix $Y$ was calculated, and $P_{ei}$ was derived from the network equations based on these initial values.
For the disturbance simulation, we applied a three-phase fault near bus 7, at the end of line 5–7, which was cleared within five cycles (0.083 s) by opening line 5–7. The simulation comprised three distinct operational phases: pre-fault (0 to 10 s), fault-on (10.0 to 10.083 s), and post-fault (10.083 to 20 s). Each phase was initialized differently: the pre-fault initial values were sourced from the machine data to calculate $P_{ei}$; the fault-on initial values were derived from the terminal values of the pre-fault simulation at 10 s, adjusted for the new fault-on $Y$ matrix; and the post-fault initial values were taken from the end of the fault-on phase at 10.083 s, incorporating a subsequent $Y$ matrix change. The respective $Y$ matrices for each condition are documented in [40].
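The algebraic half of the DAE above can be sketched compactly in complex-phasor form: with internal voltages $E_i = e_{qi}e^{j\delta_i}$ and a reduced admittance matrix, the currents follow from $I = YE$ and the electric powers from $P_{ei} = \mathrm{Re}(E_i \bar I_i)$. In the sketch below, the $e_{qi}$ and $\delta_i(0)$ values are taken from Table A1, but the 3×3 $Y$ matrix is a toy placeholder, not the WSCC matrix of [40].

```python
import numpy as np

# e_q and delta(0) from Table A1 (WSCC three-machine system).
e_q = np.array([1.0566, 1.0502, 1.0170])
delta = np.array([0.0626, 1.0567, 0.9449])

# Hypothetical reduced admittance matrix -- placeholder values only,
# NOT the actual WSCC Y matrix documented in [40].
Y = np.array([[ 1.0 - 3.0j, -0.5 + 1.0j, -0.5 + 1.0j],
              [-0.5 + 1.0j,  1.0 - 3.0j, -0.5 + 1.0j],
              [-0.5 + 1.0j, -0.5 + 1.0j,  1.0 - 3.0j]])

E = e_q * np.exp(1j * delta)       # internal voltages e_x + j*e_y
I = Y @ E                          # generator currents i_x + j*i_y
P_e = np.real(E * np.conj(I))      # P_ei = e_xi*i_xi + e_yi*i_yi
print(P_e)                         # electric power of each machine under the toy Y
```

The complex form is equivalent to the x-y component equations above, since $\mathrm{Re}(E\bar I) = e_x i_x + e_y i_y$; a real implementation would substitute the pre-fault, fault-on, or post-fault $Y$ matrix for the current phase.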
Table A1. Parameter data of the three machines in the WSCC system.

Parameter     Generator 1   Generator 2   Generator 3
H             23.64         6.40          3.01
D             23.64         6.40          3.01
R_a           0             0             0
X_d           0.0608        0.1198        0.1813
P_m           0.7164        1.6300        0.8500
e_q           1.0566        1.0502        1.0170
δ(0)          0.0626        1.0567        0.9449
Δe(0)         0             0             0
Real(I_t)     0.6889        1.5799        0.8179
Imag(I_t)     −0.2601       0.1924        0.1730
I_d           0.2872        0.3523        0.0178
I_q           0.6780        1.5521        0.8358

References

  1. Nielsen, M.A.; Chuang, I.L. Quantum Computation and Quantum Information: 10th Anniversary Edition; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar] [CrossRef]
  2. Shor, P.W. Algorithms for quantum computation: Discrete logarithms and factoring. In Proceedings of the 35th Annual Symposium on Foundations of Computer Science, Santa Fe, NM, USA, 20–22 November 1994; pp. 124–134. [Google Scholar] [CrossRef]
  3. Grover, L.K. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing (STOC ’96), Philadelphia, PA, USA, 22–24 May 1996; ACM: New York, NY, USA, 1996; pp. 212–219. [Google Scholar] [CrossRef]
  4. Grover, L.K. Quantum mechanics helps in searching for a needle in a haystack. Phys. Rev. Lett. 1997, 79, 325. [Google Scholar] [CrossRef]
  5. Madsen, L.S.; Laudenbach, F.; Askarani, M.F.; Rortais, F.; Vincent, T.; Bulmer, J.F.F.; Miatto, F.M.; Neuhaus, L.; Helt, L.G.; Collins, M.J.; et al. Quantum computational advantage with a programmable photonic processor. Nature 2022, 606, 75–81. [Google Scholar] [CrossRef]
  6. Hyyppä, E.; Kundu, S.; Chan, C.F.; Gunyhó, A.; Hotari, J.; Janzso, D.; Juliusson, K.; Kiuru, O.; Kotilahti, J.; Landra, A.; et al. Unimon qubit. Nat. Commun. 2022, 13, 1–14. [Google Scholar] [CrossRef] [PubMed]
  7. Arrazola, J.M.; Bergholm, V.; Brádler, K.; Bromley, T.R.; Collins, M.J.; Dhand, I.; Fumagalli, A.; Gerrits, T.; Goussev, A.; Helt, L.G.; et al. Quantum circuits with many photons on a programmable nanophotonic chip. Nature 2021, 591, 54–61. [Google Scholar] [CrossRef] [PubMed]
  8. Lloyd, S.; De Palma, G.; Gokler, C.; Kiani, B.; Liu, Z.-W.; Marvian, M.; Tennie, F.; Palmer, T. Quantum algorithm for nonlinear differential equations. arXiv 2020, arXiv:2011.06571. [Google Scholar] [CrossRef]
  9. Jordan, S.P. Quantum Algorithm Zoo. Available online: https://quantumalgorithmzoo.org (accessed on 22 April 2011).
  10. Bharti, K.; Cervera-Lierta, A.; Kyaw, T.H.; Haug, T.; Alperin-Lea, S.; Anand, A.; Degroote, M.; Heimonen, H.; Kottmann, J.S.; Menke, T.; et al. Noisy intermediate-scale quantum algorithms. Rev. Mod. Phys. 2022, 94, 015004. [Google Scholar] [CrossRef]
  11. Biamonte, J.; Wittek, P.; Pancotti, N.; Rebentrost, P.; Wiebe, N.; Lloyd, S. Quantum machine learning. Nature 2017, 549, 195–202. [Google Scholar] [CrossRef] [PubMed]
  12. Liao, Y.; Zhan, J. Expressibility-Enhancing Strategies for Quantum Neural Networks. arXiv 2022, arXiv:2211.12670v1. [Google Scholar] [CrossRef]
  13. Skolik, A.; McClean, J.R.; Mohseni, M.; van der Smagt, P.; Leib, M. Layerwise learning for quantum neural networks. Quantum Mach. Intell. 2021, 3, 5. [Google Scholar] [CrossRef]
  14. Feng, F.; Zhou, Y.; Zhang, P. Quantum Power Flow. IEEE Trans. Power Syst. 2021, 36, 3810–3812. [Google Scholar] [CrossRef]
  15. Feng, F.; Zhang, P.; Bragin, M.A.; Zhou, Y. Novel Resolution of Unit Commitment Problems Through Quantum Surrogate Lagrangian Relaxation. IEEE Trans. Power Syst. 2023, 38, 2460–2471. [Google Scholar] [CrossRef]
  16. Nikmehr, N.; Zhang, P. Quantum-Inspired Power System Reliability Assessment. IEEE Trans. Power Syst. 2023, 38, 3476–3490. [Google Scholar] [CrossRef]
  17. Zhou, Y.; Zhang, P. Noise-Resilient Quantum Machine Learning for Stability Assessment of Power Systems. IEEE Trans. Power Syst. 2023, 38, 475–487. [Google Scholar] [CrossRef]
  18. Gao, F.; Wu, G. Application of Quantum Computing in Power Systems. Energies 2023, 16, 2240. [Google Scholar] [CrossRef]
  19. Golestan, S.; Habibi, M.R.; Mousavi, S.Y.M.; Guerrero, J.M.; Vasquez, J.C. Quantum computation in power systems: An overview of recent advances. Energy Rep. 2023, 9, 584–596. [Google Scholar] [CrossRef]
  20. Zhou, Y.; Tang, Z.; Nikmehr, N.; Babahajiani, P.; Feng, F.; Wei, T.-C.; Zheng, H.; Zhang, P. Quantum computing in power systems. iEnergy 2022, 1, 170–187. [Google Scholar] [CrossRef]
  21. Chow, J.H. Power System Coherency and Model Reduction; Springer: New York, NY, USA, 2013; Volume 94. [Google Scholar]
  22. Barret, J.-P.; Bornard, P.; Meyer, B. Modelling and Simulation Techniques for Power System Engineering; Electricite de France: Paris, France, 2024. [Google Scholar]
  23. Sun, K. Power System Simulation Using Semi-Analytical Methods; Wiley: Hoboken, NJ, USA, 2023. [Google Scholar] [CrossRef]
  24. Fan, L.; Miao, Z. Modeling and Stability Analysis of Inverter-Based Resources; CRC Press: Boca Raton, FL, USA, 2024. [Google Scholar] [CrossRef]
  25. Wanner, G.; Hairer, E. Solving Ordinary Differential Equations II; Springer: Berlin, Germany, 1996; Volume 375. [Google Scholar]
  26. Liu, Y.; Sun, K. Solving Power System Differential Algebraic Equations Using Differential Transformation. IEEE Trans. Power Syst. 2020, 35, 2289–2299. [Google Scholar] [CrossRef]
  27. Moya, C.; Lin, G. DAE-PINN: A physics-informed neural network model for simulating differential algebraic equations with application to power networks. Neural Comput. Appl. 2023, 35, 3789–3804. [Google Scholar] [CrossRef]
  28. Jiang, J.; Zhao, Y.; Li, R.; Li, C.; Guo, Z.; Fan, B.; Li, X.; Li, R.; Zhang, X. Strong generalization in quantum neural networks. Quantum Inf. Process. 2023, 22, 428. [Google Scholar] [CrossRef]
  29. Oz, F.; San, O.; Kara, K. An efficient quantum partial differential equation solver with chebyshev points. Sci. Rep. 2023, 13, 7767. [Google Scholar] [CrossRef]
  30. Zhou, W.J.; Li, D.H. A globally convergent BFGS method for nonlinear monotone equations without any merit functions. Math. Comput. 2008, 77, 2401–2419. [Google Scholar] [CrossRef]
  31. Bottou, L. Large-scale machine learning with stochastic gradient descent. In Proceedings of the COMPSTAT’2010: 19th International Conference on Computational Statistics, Paris, France, 22–27 August 2010; pp. 177–186. [Google Scholar] [CrossRef]
  32. Reddi, S.J.; Kale, S.; Kumar, S. On the convergence of Adam and beyond. In Proceedings of the 6th International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar] [CrossRef]
  33. Wiedmann, M.; Hölle, M.; Periyasamy, M.; Meyer, N.; Ufrecht, C.; Scherer, D.D.; Plinge, A.; Mutschler, C. An Empirical Comparison of Optimizers for Quantum Machine Learning with SPSA-based Gradients. arXiv 2023, arXiv:2305.00224v1. [Google Scholar] [CrossRef]
  34. Holmes, Z.; Sharma, K.; Cerezo, M.; Coles, P.J. Connecting Ansatz Expressibility to Gradient Magnitudes and Barren Plateaus. PRX Quantum 2022, 3, 010313. [Google Scholar] [CrossRef]
  35. McClean, J.R.; Boixo, S.; Smelyanskiy, V.N.; Babbush, R.; Neven, H. Barren plateaus in quantum neural network training landscapes. Nat. Commun. 2018, 9, 4812. [Google Scholar] [CrossRef] [PubMed]
  36. Volya, D.; Mishra, P. State Preparation on Quantum Computers via Quantum Steering. IEEE Trans. Quantum Eng. 2024, 5, 1–14. [Google Scholar] [CrossRef]
  37. Bergholm, V.; Izaac, J.; Schuld, M.; Gogolin, C.; Ahmed, S.; Ajith, V.; Alam, M.S.; Alonso-Linaje, G.; AkashNarayanan, B.; Asadi, A.; et al. PennyLane: Automatic differentiation of hybrid quantum-classical computations. arXiv 2022, arXiv:1811.04968. [Google Scholar] [CrossRef]
  38. Soltaninia, M.; Zhan, J. Comparison of Quantum Simulators for Variational Quantum Search: A Benchmark Study. arXiv 2023, arXiv:2309.05924. [Google Scholar] [CrossRef]
  39. Sauer, P.W.; Pai, M.A.; Chow, J.H. Power System Dynamics and Stability: With Synchrophasor Measurement and Power System Toolbox, 2nd ed.; Wiley: Hoboken, NJ, USA, 2017. [Google Scholar]
  40. Wang, B.; Liu, Y.; Sun, K. Power System Differential-Algebraic Equations. arXiv 2021, arXiv:1512.05185v3. [Google Scholar] [CrossRef]
Figure 1. Structure of the QNN used to solve the DAE, including a classical optimizer that updates θ.
Figure 2. SFQ circuit with R_y embedding and two layers.
Figure 3. SFQ circuit with arcsin embedding and two layers.
Figure 4. PFQ circuit with one rotation gate.
Figure 5. PFQ circuit with two rotation gates.
Figure 6. Single-qubit PFQ circuit with three rotation gates.
Figure 7. Comparison of actual ω and the ω generated by PFQ and SFQ on a 1-machine test system.
Figure 8. Comparison of quantum and actual solutions for an SMIB system using different time spans: left panel t_s = 1, middle panel t_s = 0.5, right panel t_s = 0.2.
Figure 9. Comparison of quantum and actual solutions for an SMIB system using various training points: top left: 3 points, top right: 5 points, bottom left: 10 points, bottom right: 20 points.
Figure 10. Quantum solution vs. actual solution for the SMIB system: left panel: rotor angle δ(t), right panel: rotor speed ω(t).
Figure 11. Quantum solution vs. actual solution for the WSCC 3-machine system: the three panels on the left are the rotor angles δ of the three machines, respectively; the three panels on the right are the rotor speeds ω of the three machines, respectively.
Table 1. Mean square error for the DAE of the 3-machine WSCC system between the actual solution and solutions obtained by various QNN architectures for 0 ≤ t ≤ 0.2 (10 points).

            SFQ-Ry (Figure 2)   SFQ-Arcsin (Figure 3)   PFQ (Figure 6)
ω1(t)       1.48 × 10^−12       1.02 × 10^−12           9.49 × 10^−13
ω2(t)       1.57 × 10^−11       5.79 × 10^−12           9.40 × 10^−11
ω3(t)       9.30 × 10^−11       1.45 × 10^−11           5.47 × 10^−11
δ1(t)       1.36 × 10^−7        1.44 × 10^−7            1.42 × 10^−7
δ2(t)       4.02 × 10^−7        3.72 × 10^−7            3.12 × 10^−7
δ3(t)       9.96 × 10^−7        1.16 × 10^−6            1.26 × 10^−6
Average     2.56 × 10^−7        2.79 × 10^−7            2.86 × 10^−7
Table 2. Mean square error for the DAE of the 3-machine WSCC system between the actual solution and solutions obtained by various QNN architectures for 0 ≤ t ≤ 2 (50 points).

            SFQ-Ry (Figure 2)   SFQ-Arcsin (Figure 3)   PFQ (Figure 6)
ω1(t)       1.40 × 10^−9        failed                  8.62 × 10^−10
ω2(t)       8.15 × 10^−9        failed                  2.11 × 10^−9
ω3(t)       2.43 × 10^−7        failed                  7.31 × 10^−9
δ1(t)       7.76 × 10^−6        failed                  5.20 × 10^−7
δ2(t)       3.92 × 10^−4        failed                  2.37 × 10^−6
δ3(t)       1.27 × 10^−2        failed                  6.67 × 10^−5
Average     2.18 × 10^−3        -                       1.16 × 10^−5
