1. Introduction
With the increasing installed capacity of renewable energy sources such as wind power and photovoltaics, the large-scale commissioning of high-voltage direct current (HVDC) transmission projects, and the continuous expansion and growing complexity of power grid scale and structure [1,2], the secure and stable operation of power systems faces unprecedented challenges [3,4]. Owing to the high penetration of renewable energy and multi-infeed HVDC configurations, some synchronous generators have been displaced, significantly reducing the grid’s reactive power support capability and disturbance tolerance. When severe faults occur in hybrid AC/DC systems, transient voltage instability or even voltage collapse may be triggered [5,6], posing serious threats to the secure and stable operation of power systems.
At present, transient stability assessment (TSA) methods for power systems can be broadly classified into time-domain simulation methods, direct methods, and data-driven approaches, each with its own technical characteristics [7]. As the mainstream approach for TSA, time-domain simulation is typically used for offline analysis. Its modeling accuracy is positively correlated with computational precision and time consumption, making it difficult to balance accuracy and speed in online applications. To address this, many researchers have adopted parallel and distributed simulation techniques to accelerate system simulation and solution, thereby enhancing real-time performance [8,9]. The direct method determines system stability by constructing an energy function and comparing the system energy at fault clearing with a critical energy value. However, appropriate energy functions are difficult to establish for specific power grids, and the simplified physical models used in practice often yield conservative results [10,11]. Both time-domain simulation and direct methods rely on physics-based analytical modeling for transient stability judgment, making them dependent on modeling accuracy and computational resources. Consequently, they cannot simultaneously meet the stringent speed and accuracy requirements of online TSA.
To overcome the limitations of traditional methods, such as low computational efficiency and poor adaptability, data-driven artificial intelligence (AI) technologies have provided new ideas and methodologies for transient stability analysis and decision-making in power systems. AI has evolved from single techniques to integrated approaches and from shallow learning to deep learning [12], achieving significant breakthroughs in fields such as computer vision, natural language processing, healthcare, and transportation, and substantially improving accuracy and intelligence across various applications. As early as the late 1980s, researchers began applying AI techniques to power system TSA, but large-scale engineering applications were hindered by the hardware performance and algorithm efficiency of the time [13]. With the increasing maturity of wide-area measurement systems (WAMS) based on phasor measurement units (PMUs), together with the rapid advancement of AI technologies, a solid data foundation and algorithmic support have been established for the application of data-driven techniques in power systems [14,15].
Data-driven approaches can construct models that deeply mine intrinsic mapping relationships from massive, heterogeneous, and low-value-density power system data, thereby characterizing the stability boundaries of power systems. Compared with traditional methods, data-driven models exhibit high efficiency and fast response in online applications, enabling predictions within milliseconds and effectively satisfying the rapidity and accuracy requirements of online security assessment. Consequently, power system researchers have continuously explored the application of advanced models and algorithms to transient stability studies in order to improve assessment performance. Existing studies and reviews have mainly focused on the suitability of different advanced models and algorithms for specific scenarios [16,17], while relatively few have systematically analyzed each stage of the data-model-data closed-loop process from a holistic, application-oriented perspective.
The contributions of this paper are as follows.
- (1) Introduction of a Three-Stage TSA Framework: Unlike prior surveys, we propose a three-stage closed-loop framework for TSA that integrates data-driven techniques, intelligent algorithms, and interpretability. This systematic approach covers sample construction and enhancement, intelligent algorithms and learning mechanisms, and model training and interpretability.
- (2) Enhanced Interpretability Integration: This review treats interpretability not as a supplementary feature but as a core element of the TSA process. By covering both inherently interpretable models and post hoc interpretability methods, we emphasize their application to real-time decision-making in power systems. This differs from earlier reviews, which often overlooked the importance of understanding and explaining AI predictions in TSA.
- (3) Comprehensive Knowledge–Data Fusion Taxonomy: We provide a taxonomy of knowledge–data fusion paradigms (parallel, serial, guided, and feedback modes), which bridges the gap between mechanism-based (physics-based) models and data-driven AI models. This integration is key to addressing the challenges posed by dynamic and complex power systems.
- (4) Guidance for Practical AI Model Selection: A distinctive feature of this review is its emphasis on practical deployment. We provide selection criteria for utilities and operators choosing AI models based on grid size, real-time computational needs, interpretability requirements, and data availability, offering actionable insights that are often missing from other surveys.
2. Overall Framework
The data-driven TSA process for power systems mainly includes sample construction and enhancement, intelligent algorithms and learning mechanisms, and model training and interpretability. The overall framework is illustrated in Figure 1.
Sample generation is a fundamental step in developing data-driven TSA models, as feature selection and sample balance directly affect model performance. Input features are commonly derived from pre-disturbance steady-state information, post-disturbance dynamic responses, or their combination. In practice, power system data are mainly obtained from WAMS based on PMUs and from SCADA systems. However, transient instability events are rare, and power systems usually operate near typical conditions, leading to severe class imbalance and insufficient unstable samples [18]. Consequently, historical data alone cannot meet the balance and diversity requirements of data-driven models. To address this issue, most existing studies rely on simulation-based data generation by varying operating conditions, fault types, and network configurations. By adjusting generator outputs and load levels and introducing different contingencies, balanced datasets containing both stable and unstable samples can be obtained. In addition, imbalanced learning techniques, such as loss reweighting and adaptive synthetic sampling, have been employed to enhance model sensitivity to rare unstable cases.
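Loss reweighting, one of the imbalanced-learning techniques mentioned above, can be sketched with a class-weighted cross-entropy. This is a minimal illustration; the 10:1 weight ratio is an assumed value for demonstration, not a recommended setting:

```python
import math

def weighted_bce(y_true, p_pred, w_unstable=10.0, w_stable=1.0):
    """Class-weighted binary cross-entropy: losses on the rare
    unstable class (label 1) are scaled up so training is not
    dominated by the abundant stable samples."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        w = w_unstable if y == 1 else w_stable
        total += -w * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# A confident miss on an unstable sample is penalised roughly
# w_unstable/w_stable times more than the mirror-image miss on
# a stable sample.
loss_unstable_miss = weighted_bce([1], [0.1])
loss_stable_miss = weighted_bce([0], [0.9])
```

In practice such weights would be derived from the class frequencies of the training set rather than fixed by hand.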
Model design constitutes the core of data-driven TSA. Model selection, network architecture, loss functions, and hyperparameter settings jointly determine learning performance [19,20]. Existing approaches can be broadly classified into traditional machine learning and deep learning. Traditional methods, including decision tree (DT), support vector machine (SVM), and artificial neural network (ANN), feature low computational complexity and are suitable for small-scale datasets, but their performance degrades in high-dimensional and large-scale systems. Ensemble learning has been introduced to improve their accuracy and robustness. Deep learning models are well suited to large-scale and high-dimensional data, as they can automatically extract representative features and capture complex nonlinear relationships. Typical architectures include the convolutional neural network (CNN), long short-term memory network (LSTM), generative adversarial network (GAN), and graph neural network (GNN). Nevertheless, deep learning models require high-quality data and substantial computational resources and generally exhibit limited interpretability. Hybrid and ensemble frameworks have therefore been developed to exploit complementary advantages [21].
Model training bridges offline learning and online application. Its objective is to learn accurate mappings between system states and stability outcomes through optimization and validation. Common optimization methods include stochastic gradient descent, Adam, and second-order algorithms. Model performance is evaluated using validation datasets and confusion-matrix-based metrics, such as accuracy, precision, recall, and F1-score. Models are deployed only after satisfying predefined performance criteria. To enhance transparency, interpretability techniques are increasingly incorporated. Self-explainable models include DT, random forests, attention mechanisms, and Kolmogorov–Arnold Networks, while post hoc methods include Shapley additive explanations (SHAP), accumulated local effects (ALE), local interpretable model-agnostic explanations (LIME), and gradient-weighted class activation mapping (Grad-CAM). Furthermore, transfer learning and active learning have been introduced to accelerate model updating and reduce labeling costs. Transfer learning enables rapid adaptation through fine-tuning or joint training with new data, whereas active learning iteratively selects informative samples for labeling based on uncertainty or diversity criteria. These mechanisms facilitate continuous performance improvement and enhance model adaptability under evolving operating conditions.
Figure 1 presents a deployment-oriented TSA framework that goes beyond conventional data-driven pipelines by organizing data acquisition, feature representation, model development, interpretability analysis, and operational feedback into a unified and structured workflow. Unlike existing frameworks that primarily emphasize prediction accuracy, the proposed architecture explicitly integrates interpretability, physics consistency, reliability evaluation, and closed-loop interaction with real-time operational decision-making. By embedding transparency and operational feedback as intrinsic components rather than optional add-ons, the framework provides a comprehensive and practically grounded blueprint for implementing AI-based TSA in real-world power system environments.
Detailed discussions of benchmark datasets, evaluation metrics, and comparative performance reported in the literature are provided as follows. Commonly used benchmark systems, including the IEEE 14-bus, 30-bus, 118-bus, and 300-bus test systems, real-world WAMS/PMU-based datasets, and simulated data covering various fault types and grid configurations, are systematically summarized. Key evaluation metrics such as accuracy, precision, recall, and F1-score, along with confusion matrices and ROC curves, are also introduced to provide a clearer assessment framework.
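The confusion-matrix metrics listed above follow directly from the four cell counts. A minimal sketch, with hypothetical test-set counts and treating "unstable" as the positive class:

```python
def tsa_metrics(tp, fp, fn, tn):
    """Confusion-matrix metrics for a binary TSA classifier,
    with 'unstable' as the positive class."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # fraction of alarms that are real
    recall = tp / (tp + fn)             # fraction of unstable cases caught
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts: 90 unstable cases caught, 10 missed,
# 20 false alarms, 880 stable cases correctly assessed.
acc, prec, rec, f1 = tsa_metrics(tp=90, fp=20, fn=10, tn=880)
```

For TSA, recall on the unstable class is usually the critical figure, since a missed instability is far costlier than a false alarm.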
3. Power System Sample Data Construction and Augmentation
3.1. Dataset Construction with Consideration of Sample Balance
In data-driven TSA, sample dataset construction is a key prerequisite. A large-scale database that sufficiently covers the system state space enables the learning model to capture grid dynamics and accurately characterize the transient stability boundary.
Most existing studies establish TSA databases via time-domain simulations by considering load variations, generator outputs, and network topology changes [22,23]. To better align with practical operating conditions, ref. [24] developed probabilistic generation and load models based on monitoring data and real operating states, and ref. [25] further incorporated fault probability distributions to improve dataset realism. With the deployment of WAMS, training TSA models directly on large-scale historical electrical data becomes feasible [26], which can reduce the computational burden of extensive simulations. Nevertheless, historical records are typically dominated by normal operating conditions, and unstable events are scarce, making them inadequate for direct training. Hence, task-oriented data generation remains necessary to mitigate sample imbalance. For unstable sample augmentation, conventional techniques mainly rely on minority-class resampling, such as SMOTE [27] and ADASYN [28]. Ref. [27] combined k-means clustering with SMOTE to increase the proportion of unstable samples. However, interpolation-based approaches may fail to capture nonlinear transient dynamics and can cause model overfitting.
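The interpolation idea behind SMOTE-style oversampling can be sketched in a few lines. This is a simplified illustration (brute-force neighbour search, fixed seed, toy two-dimensional samples), not the algorithm of refs. [27,28]:

```python
import random

def smote_like(minority, n_new, k=3, seed=0):
    """Minimal SMOTE-style oversampling sketch: each synthetic
    sample is a random interpolation between a minority sample
    and one of its k nearest minority-class neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours by squared Euclidean distance,
        # excluding the chosen sample itself
        neighbours = sorted(
            (m for m in minority if m is not x),
            key=lambda m: sum((a - b) ** 2 for a, b in zip(x, m)),
        )[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()
        synthetic.append(tuple(a + lam * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Hypothetical unstable-class feature vectors
unstable = [(0.0, 1.0), (0.2, 0.9), (0.1, 1.2), (0.3, 1.1)]
new_samples = smote_like(unstable, n_new=6)
```

Because every synthetic point lies on a segment between two real minority samples, this scheme cannot create dynamics outside the convex hull of the observed unstable cases, which is exactly the limitation noted above.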
To overcome these drawbacks, deep generative methods, particularly GAN, have been widely adopted. Ref. [29] enhanced GAN training stability by integrating attention mechanisms and spectral normalization to learn complex spatiotemporal distributions. Ref. [30] employed an auxiliary classifier GAN to improve sample diversity via supervised attribute learning, while refs. [31,32] generated additional unstable samples to improve TSA recognition performance and capture the nonlinear mappings between disturbance features and stability categories. Ref. [33] leveraged CycleGAN to alleviate small-sample and class-imbalance issues, and ref. [34] proposed a dual-generator GAN for batch sample generation. Moreover, an LSTM-fused autoencoder was introduced to suppress noise and recover missing data, thereby improving transient data integrity.
3.2. Key Feature Selection Reflecting System Stability Characteristics
WAMS provide the data foundation for data-driven TSA. However, measurement data typically involve multiple buses and heterogeneous features. Directly using all features for model training incurs high computational cost, and redundant information may degrade performance. Therefore, feature engineering is required to select or extract informative features for TSA. Existing approaches mainly include expert-knowledge-based methods, fixed-rule dimensionality reduction, and machine-learning-based feature processing. Expert-knowledge-based feature selection has been widely adopted. Ref. [35] defined 27 trajectory-clustering features using post-fault rotor angles and used them as inputs to a deep belief network (DBN), achieving system-scale-independent feature dimensions. Ref. [36] extracted the maximum frequency deviation of each generator and used the Euclidean norm of deviations at different instants as inputs to a multilayer perceptron (MLP). Ref. [37] employed system-level features rather than unit-level variables to keep the feature dimension invariant with system scale. Despite their effectiveness, these methods remain subjective and often exhibit limited adaptability and generalization. Fixed-rule dimensionality reduction is also commonly used. Ref. [38] applied principal component analysis (PCA) to extract key features and reduce complexity. Ref. [39] combined mutual information for preliminary screening with linear discriminant analysis (LDA) to obtain optimal projections and decision features. Ref. [40] utilized Tabu search to select effective features from high-dimensional candidates, while ref. [41] mapped response-data-based features into a high-dimensional space to reduce the time and space complexity of TSA. Ref. [42] proposed a Relief-based method to evaluate time-series feature importance and construct temporal feature subsets to mitigate TSA degradation under data loss. However, these approaches rely on assumptions (e.g., PCA’s linearity and Relief’s neighborhood hypothesis) that may not hold under nonlinear power system dynamics, and they may ignore task labels, leading to suboptimal feature representations.
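As an illustration of fixed-rule dimensionality reduction, the PCA step used in ref. [38] can be sketched via eigendecomposition of the sample covariance matrix. The measurement matrix here is a random placeholder standing in for bus-level features:

```python
import numpy as np

def pca_reduce(X, n_components):
    """PCA sketch: centre the data, eigendecompose the covariance
    matrix, and project onto the directions of largest variance
    to cut the feature dimension."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return Xc @ top

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))      # placeholder: 200 snapshots, 12 features
X_low = pca_reduce(X, n_components=3)
```

As noted above, the projection is purely variance-driven and label-agnostic, so directions that best separate stable from unstable cases may be discarded.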
4. TSA Techniques Based on AI Algorithms
Machine learning algorithms, as core technologies of AI, enable machines to discover correlations in data and thereby emulate human perception, learning, and cognition by encoding “experience” or “knowledge” in the form of data. In practical applications, machine learning models can continuously accumulate experience or acquire new knowledge through data feedback, thereby coping with the uncertainties and time-varying characteristics of real-world scenarios. With the continuous development of machine learning, the emergence of new model architectures and algorithms is expected to significantly advance TSA research. At present, many power system researchers are actively applying novel machine learning models to TSA in pursuit of improved performance. This section reviews the application of machine learning algorithms in TSA from two perspectives: traditional machine learning and deep learning.
4.1. Traditional Machine Learning Algorithms
Traditional machine learning algorithms feature a limited number of computational parameters and low storage requirements, enabling fast prediction and making them suitable for TSA with small-scale datasets. However, their ability to capture complex nonlinear relationships in high-dimensional feature spaces is inherently limited. The general TSA workflow based on traditional machine learning typically consists of four stages: feature extraction and dimensionality reduction, model selection, optimization and performance evaluation, and feedback-based updating. Feature extraction and dimensionality reduction are commonly implemented using techniques such as PCA, LDA, and singular value decomposition (SVD). Once the trained model achieves satisfactory performance, it can be deployed for online TSA applications. Commonly used traditional machine learning classifiers for TSA include DT, ANN, and SVM, whose operating principles, advantages, applicable scenarios, and limitations are discussed in Table 1.
Traditional machine learning methods exhibit limited capabilities in data analysis and nonlinear fitting when dealing with complex datasets characterized by intertwined multidimensional variables. As a result, they are unable to comprehensively describe the coupling relationships among data variables or accurately quantify the influence of individual variables on transient stability status. Nevertheless, traditional machine learning algorithms can still achieve relatively satisfactory performance under conditions of limited data availability, and they continue to play an important role in classification and regression problems in various application domains. Therefore, selecting an optimal combination of input features to characterize the input–output mapping relationship of power system transient stability is a critical approach for improving the performance of traditional machine learning-based TSA models.
4.2. Deep Learning Algorithms
With the rapid progress of deep learning in feature extraction and representation learning, deep learning has been widely adopted in power system TSA. Compared with traditional machine learning approaches, deep models can handle high-dimensional and strongly correlated inputs and more effectively approximate complex nonlinear input–output mappings, thereby improving TSA accuracy and robustness. The general workflow of deep-learning-based TSA typically includes feature selection, model training, and model updating, forming a closed-loop learning framework. As data availability continues to increase and network architectures become deeper and more complex, the coordination among feature selection strategies, dataset scale, and model depth has become a key issue in deep-learning-based TSA, directly affecting model generalization and computational efficiency. Benefiting from their strong feature learning capability, deep models can automatically integrate high-dimensional variables over the entire transient process, including pre-fault steady-state information and post-fault dynamic responses, to achieve comprehensive transient stability evaluation. Representative neural network models applied in TSA include CNN, LSTM, DBN, stacked autoencoders (SAE), Transformers, and GNN. The principles, advantages, application scenarios, and limitations of these methods are summarized in Table 2.
In summary, compared with shallow machine learning, deep learning demonstrates a powerful ability to process high-dimensional, heterogeneous, multi-source data, enabling rapid and accurate assessment of the transient stability state of the system. It offers relatively strong generalization capability and high accuracy, and the key to its design lies in matching data features, dataset scale, and network depth. However, deep learning also has limitations in transient stability problems. On the one hand, large-scale high-dimensional data and deep network models demand significant computational power and time. On the other hand, although deep learning algorithms, which are based on data fitting, can continually enhance generalization across scenarios through new models and algorithms, the operating patterns of real-world, open-ended power systems cannot be fully enumerated, and not all possible grid operation scenarios can be covered. As a result, such models still face generalization issues under atypical operating conditions or system topology changes.
Moreover, the concept of ensemble learning has been widely applied in transient stability analysis, aiming to improve the overall performance of assessment models by constructing multiple classifiers. In ref. [52], an ensemble learning method was used to perform dynamic security assessment before faults. In ref. [53], a grid search method was first used to optimize the parameters of the SVM, and an ensemble model was then constructed by selecting the best-performing support vector groups to achieve TSA of the power system. Ref. [54] constructed multiple CNN models with identical structures but different parameters, combining the results of the sub-models to reach a TSA conclusion. Ref. [55] proposed a transient stability evaluation method based on an ensemble of multiple radial basis function extreme learning machines (ELM), which used several sub-models to obtain more reliable and accurate stability evaluation results. Ref. [56] introduced a time-adaptive transient stability evaluation method based on ensemble learning, using multiple LSTM classifiers to output stability predictions at different evaluation time points, providing a more comprehensive assessment of the power system’s transient stability. However, ensemble learning requires substantial computational resources and time, which may increase model complexity.
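The sub-model combination step common to these ensemble schemes reduces, in its simplest form, to majority voting over the classifiers' labels. A minimal sketch with hypothetical sub-model outputs and an assumed conservative tie-break:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier labels (1 = stable, 0 = unstable)
    by majority voting, the basic ensemble combination rule."""
    counts = Counter(predictions)
    # Assumed design choice: break ties conservatively toward
    # the 'unstable' verdict (0), since a missed instability is
    # the costlier error in TSA.
    if counts[0] == counts[1]:
        return 0
    return counts.most_common(1)[0][0]

votes = [1, 0, 1, 1, 0]    # five hypothetical sub-model outputs
decision = majority_vote(votes)
```

Weighted voting or stacking replaces the equal vote with per-model weights learned from validation performance, at additional computational cost.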
5. Learning Mechanisms of Intelligent Analysis Models for Power Systems
Machine learning training and its applications are generally based on the assumption that data are independent and identically distributed. However, the open operating environment of power systems and the massive amount of homogeneous data cause machine learning algorithms to suffer from insufficient generalization capability and low training efficiency in practical applications, making it difficult to meet the requirements of online TSA models. To address these issues, it is necessary to modify the learning mechanisms of models in order to improve both assessment accuracy and convergence speed during model updates. Accordingly, this section discusses the application of learning mechanisms in TSA from two perspectives: transfer learning and active learning.
5.1. Transfer Learning Mechanism
The large-scale integration of renewable energy sources and the widespread adoption of electric vehicles have increased the uncertainty on both the generation and load sides of power systems. Meanwhile, the continuous expansion of power grid scale leads to frequent changes in transmission and distribution network topologies. As a result, power systems exhibit strong dynamics and time-varying characteristics at both short-term and long-term time scales. These characteristics cause machine learning models trained on historical scenarios to suffer from insufficient generalization when applied to new operating conditions.
Transfer learning focuses on leveraging data or models from a source domain to rapidly train models suitable for a target domain. Its core idea is to reduce training cost by exploiting the similarity between the source and target domains when data distributions change. By transferring knowledge from previously learned tasks, transfer learning enables models to adapt efficiently to new scenarios with limited data. The schematic diagram of transfer-learning-based TSA is shown in Figure 2, where x denotes the input feature and n denotes the feature dimension.
When changes occur in grid topology or operating conditions, the transient stability model can be updated using a small amount of data from the target domain through sample transfer, feature transfer, or model transfer. Specifically, sample transfer increases the effective size of the target-domain dataset by reweighting source-domain samples; feature transfer reduces data demand by constructing feature representations based on source-domain knowledge; and model transfer fine-tunes a pre-trained source-domain model using target-domain data to adapt it to new operating conditions.
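Model transfer, the third mode above, can be sketched as warm-starting from source-domain weights and running a few gradient steps on scarce target-domain data instead of retraining from scratch. The linear least-squares model and synthetic data below are illustrative assumptions, not a TSA architecture:

```python
import numpy as np

def fine_tune(w_source, X_t, y_t, lr=0.1, steps=300):
    """Model-transfer sketch: initialise with source-domain
    weights and run gradient descent on a small target-domain
    least-squares problem."""
    w = w_source.copy()
    for _ in range(steps):
        grad = 2 * X_t.T @ (X_t @ w - y_t) / len(y_t)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
X_t = rng.normal(size=(20, 4))               # few target-domain samples
w_true = np.array([1.0, -2.0, 0.5, 0.0])     # hypothetical target mapping
y_t = X_t @ w_true
w_src = w_true + 0.3 * rng.normal(size=4)    # source model, slightly off
w_ft = fine_tune(w_src, X_t, y_t)            # adapted target model
```

Because the source weights already lie close to the target optimum, few samples and few steps suffice; this proximity assumption is exactly what fails under negative transfer.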
Transfer learning updates models by exploiting the similarity in input–output mappings under different scenarios, such as changes in power system operating conditions or network topology, through sample transfer, feature transfer, and model transfer. This learning process can accelerate model training while simultaneously enhancing the generalization capability of AI models, thereby improving their adaptability across multiple scenarios. However, negative transfer and transfer boundaries remain major challenges in transfer learning. The relevance between source-domain and target-domain tasks, as well as the similarity of sample features, are the primary factors affecting transfer learning performance.
5.2. Active Learning Mechanism
Active learning is a model enhancement strategy that evaluates the information value of samples using heuristic query strategies and prioritizes the most informative ones for expert labeling and model training. Manual annotation generally introduces low label noise, while well-designed query strategies can reduce sample redundancy and improve data quality. As a result, active learning is particularly suitable for scenarios with high labeling cost and difficulty.
In TSA, active learning typically starts with generating a large pool of unlabeled samples through simulation under diverse operating conditions, network topologies, and fault scenarios. Samples are ranked according to information measures (e.g., entropy), and high-priority samples are labeled via time-domain simulations to form a training set. The model is then trained and iteratively updated until performance convergence, after which it is deployed online. By selectively labeling informative samples, active-learning-based TSA can approach saturated performance with significantly fewer labeled data, effectively balancing annotation cost and model accuracy. However, its performance may vary across datasets; for instance, uncertainty-based strategies may select outliers and degrade fitting quality. Moreover, active learning is inherently sequential, as model updates can only proceed after the selected samples are labeled.
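The entropy-based ranking step described above can be sketched as follows; the pool probabilities and labelling budget are hypothetical:

```python
import math

def entropy(p):
    """Predictive entropy of a binary stability probability."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def rank_for_labelling(pool_probs, budget):
    """Uncertainty sampling: pick the `budget` pool samples whose
    predicted stability probability is most uncertain (highest
    entropy) for time-domain-simulation labelling."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: entropy(pool_probs[i]),
                    reverse=True)
    return ranked[:budget]

probs = [0.99, 0.52, 0.10, 0.48, 0.95]   # hypothetical model outputs
to_label = rank_for_labelling(probs, budget=2)
```

The samples near 0.5 are labelled first; the confident ones are skipped. This is also where the outlier risk noted above arises, since noisy points tend to sit near the decision boundary.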
6. Interpretability Analysis of Intelligent Model Prediction Results
In stability analysis scenarios, the task of AI models is to determine whether a power system can continue to operate normally after being subjected to a certain level of disturbance. However, when black-box AI models perform stability assessments, they often fail to provide the underlying reasons for their decisions, which hinders further analysis of system stability. Therefore, to enhance the transparency of prediction results, explainable AI (XAI) methods have attracted increasing attention and research interest. XAI is a concept that has existed since the early stages of AI development. In the 1980s and 1990s, the concept of “explanation” had already been introduced into expert systems for power systems, where the primary function of the explanation module was to clarify the system’s reasoning process and results to facilitate user acceptance. In 2004, ref. [57] first used the term explainable AI to describe AI systems with interpretability capabilities. In the field of power systems, ref. [58] reviewed various fault diagnosis methods and pointed out that ANN-based approaches lack the ability to explain their behavior and output results, which limits their application in large-scale power systems.
In 2006, the concept of “deep learning” was proposed [59], marking a new stage in AI research. Although deep learning significantly improved model performance by increasing network depth, it also greatly reduced the transparency and interpretability of models’ internal mechanisms. To address this issue, the Defense Advanced Research Projects Agency (DARPA) of the United States launched the XAI program in May 2017 [60]. The program systematically defined explainable AI from three aspects (explainable model learning, explanation interface design, and explanation psychology), aiming to develop a suite of machine learning techniques that produce more interpretable models while maintaining high learning performance, enabling users to understand, trust, and effectively manage AI systems. This initiative formally ushered in the rapid development of XAI. Classical XAI methods can be broadly categorized into two types based on their explanation mechanisms: inherently interpretable AI models and post hoc explanation methods. The former refers to models with intrinsic interpretability, including linear models, DT, random forests, and attention mechanisms. The latter refers to methods designed to explain AI models and is generally model-agnostic. Representative approaches include Shapley additive explanations (SHAP) [61], local interpretable model-agnostic explanations (LIME) [62], and gradient-weighted class activation mapping (Grad-CAM) [63].
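The idea behind SHAP can be illustrated by computing exact Shapley values for a tiny model via coalition enumeration (practical SHAP implementations approximate this enumeration). The three-feature stability-margin model and zero baseline below are illustrative assumptions:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for a small feature count: features
    outside a coalition are replaced by their baseline value, and
    each feature's marginal contribution is averaged over all
    coalition orderings."""
    n = len(x)
    phi = [0.0] * n

    def value(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Hypothetical linear stability-margin model over three features
f = lambda z: 2.0 * z[0] - 1.0 * z[1] + 0.5 * z[2]
phi = shapley_values(f, x=[1.0, 2.0, 4.0], baseline=[0.0, 0.0, 0.0])
```

The attributions satisfy the efficiency property: they sum to the difference between the explained prediction and the baseline prediction, which is what makes them usable as per-feature contributions in a TSA verdict.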
6.1. Inherently Interpretable AI Models
6.1.1. DT and Random Forests
DT is an inherently interpretable learning method that identifies optimal splitting features by calculating the entropy at each node, thereby performing recursive splits. The interpretability of DT lies in their ability to transform data into an explainable structure (as illustrated in Figure 3), where each node represents a clearly defined and testable decision rule. By traversing the branches of the tree and following a sequence of decisions, a final conclusion can be obtained. Random forests, which are constructed from DT, also exhibit a certain degree of interpretability. As an ensemble learning algorithm, a random forest determines a unified output by aggregating the prediction results of multiple DT.
However, DT and random forests share inherent limitations. When a DT becomes excessively deep or a random forest comprises a large number of trees with complex reasoning paths, it becomes difficult to interpret the inference process based on prior knowledge, and the interpretability is therefore constrained. Nevertheless, explainability analysis techniques are still available for deep DT and random forests. For example, feature importance metrics such as weight, gain, and cover can be employed. Specifically, weight denotes the number of times a feature is used to split nodes across all trees; gain represents the average improvement (information gain) contributed by a feature when it is used for splitting; and cover indicates the average sample coverage associated with splits on that feature.
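The “weight” metric described above simply counts how often each feature is used to split a node across the ensemble. A minimal sketch over a toy two-tree forest (the nested-dict tree encoding and feature names are assumptions for illustration):

```python
from collections import Counter

def split_weight(forest):
    """'Weight' importance: number of times each feature splits a
    node, counted over all trees; leaves are encoded as None."""
    counts = Counter()

    def walk(node):
        if node is None:
            return                      # leaf
        counts[node["feature"]] += 1
        walk(node.get("left"))
        walk(node.get("right"))

    for tree in forest:
        walk(tree)
    return counts

# Two hypothetical shallow trees over features 'load' and 'wind'
forest = [
    {"feature": "load",
     "left": {"feature": "wind", "left": None, "right": None},
     "right": None},
    {"feature": "load", "left": None, "right": None},
]
importance = split_weight(forest)
```

Gain and cover follow the same traversal but accumulate, respectively, the information gain and the sample count at each split instead of a raw count.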
In power systems, the interpretability of DT and random forests lies in their ability to reveal how different features influence model outputs during the decision-making process. For instance, based on historical operational data, system states and control variables can be decomposed into multiple features (e.g., system load, wind speed, and meteorological conditions). By learning control strategies corresponding to various feature combinations, decision-tree-based approaches can be applied to scheduling optimization. In this context, DT can characterize the mapping between system states and control actions and intuitively reflect the control strategies associated with different feature combinations, thereby providing a certain degree of interpretability. Ref. [
64] addressed the challenges of expert-system expansion and excessive reliance on human experts in substation operation and maintenance by proposing a decision-tree-based expert rule extraction method. The extracted rules satisfy fundamental requirements such as high interpretability, high confidence, and clear reasoning paths.
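As a concrete illustration of how a DT selects an interpretable split, the following pure-Python sketch computes the entropy reduction (information gain) of a candidate split on a toy stability-style dataset. The function names and data are hypothetical, chosen only to make the calculation checkable by hand.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(samples, labels, feature, threshold):
    """Entropy reduction achieved by splitting on `feature <= threshold`."""
    left = [y for x, y in zip(samples, labels) if x[feature] <= threshold]
    right = [y for x, y in zip(samples, labels) if x[feature] > threshold]
    n = len(labels)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - weighted

# Toy data: [voltage dip, rotor-angle spread] -> stable (0) / unstable (1).
X = [[0.1, 10], [0.2, 15], [0.7, 80], [0.8, 95]]
y = [0, 0, 1, 1]

# Splitting on the voltage-dip feature at 0.5 separates the classes perfectly,
# so the gain equals the full entropy of y (1 bit here).
print(information_gain(X, y, feature=0, threshold=0.5))  # -> 1.0
```

Each internal node of a trained DT stores exactly such a `feature <= threshold` test, which is why the resulting model can be read as a set of explicit rules.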
6.1.2. Attention Mechanisms
The attention mechanism in deep learning is a computational model inspired by human attention and was originally proposed to address the difficulty of modeling long-range dependencies in sequence-to-sequence architectures. By dynamically focusing on the most informative features during data processing, attention mechanisms can reduce unnecessary computations while maintaining high prediction accuracy [
65].
The interpretability of attention-based models is mainly reflected in their ability to explicitly highlight the parts of the input on which the model focuses, as illustrated in
Figure 4 [
66], where the unweighted feature map is $x \in \mathbb{R}^{H \times W \times C}$, with $\mathbb{R}$ denoting the set of real numbers and $H$, $W$, and $C$ the height, width, and number of channels of the feature map, respectively. Darker colors correspond to higher weights. This enables more flexible processing of input data and allows attention weights to be adaptively adjusted according to different tasks and data characteristics, thereby improving both model performance and interpretability.
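The weighting idea can be illustrated with a minimal scaled dot-product attention sketch in pure Python; the vectors and function names below are illustrative, not taken from the cited work.

```python
import math

def softmax(scores):
    """Numerically stable softmax turning raw scores into attention weights."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Scaled dot-product attention for one query over key/value vectors.
    Returns the weighted value vector and the weights themselves, which are
    the quantities typically visualized as an interpretability heatmap."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return context, weights

# Three time steps; the query aligns most with the second key, so the second
# attention weight dominates and the model "focuses" on that step.
context, weights = attend(query=[1.0, 0.0],
                          keys=[[0.1, 0.9], [4.0, 0.0], [0.2, 0.8]],
                          values=[[1.0], [2.0], [3.0]])
print(weights)
```

Inspecting `weights` after a forward pass is exactly the kind of post-hoc readout that attention-based TSA models expose to operators.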
In the power systems domain, attention mechanisms and attention-based models have been widely applied to interpret deep learning models for TSA [
67], equipment fault diagnosis [
68], and renewable power generation forecasting [
69]. These approaches are used to identify the features within time-series data that contribute most significantly to model decisions. In addition, attention mechanisms can be integrated with reinforcement learning (RL)-based scheduling methods. For example, attention-based embedded vector representations of intelligent agents can be employed to reveal the internal working principles of scheduling models. In this context, attention mechanisms help system operators understand the key factors considered by the model when making scheduling decisions, thereby enhancing the interpretability and reliability of the model.
6.1.3. Kolmogorov–Arnold Networks
In recent years, Kolmogorov–Arnold networks (KANs) have emerged as a novel class of deep learning models with enhanced transparency and interpretability, enabling white-box modeling of complex systems. Moreover, the spline-based function approximation paradigm adopted by KANs aligns well with intrinsic characteristics of time-series data, such as periodicity and trend components. To date, KANs have been extensively investigated in a wide range of applications, including time-series forecasting [
70], online reinforcement learning [
71], transfer learning [
72], model fusion [
73], and white-box modeling of power systems [
74].
The theoretical foundation of KANs lies in the Kolmogorov–Arnold representation theorem, which states that any smooth multivariate continuous function defined on a bounded domain can be exactly represented as a finite superposition of univariate continuous functions. As a result, KANs reformulate the learning of high-dimensional functions into the learning of polynomial combinations of univariate functions. The overall architecture of a KAN model is illustrated in
Figure 5.
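To make the superposition structure concrete, the sketch below evaluates one KAN-style "neuron" as a sum of univariate edge functions, using simple piecewise-linear interpolation as a stand-in for the learnable B-splines used in actual KAN implementations. All names, grids, and coefficient values are illustrative.

```python
def piecewise_linear(grid, coeffs, x):
    """Evaluate a univariate function given by values `coeffs` at the knots in
    `grid`, with linear interpolation in between (a crude stand-in for the
    B-spline edge functions of a KAN)."""
    if x <= grid[0]:
        return coeffs[0]
    if x >= grid[-1]:
        return coeffs[-1]
    for i in range(len(grid) - 1):
        if grid[i] <= x <= grid[i + 1]:
            t = (x - grid[i]) / (grid[i + 1] - grid[i])
            return (1 - t) * coeffs[i] + t * coeffs[i + 1]

def kan_neuron(xs, edge_grids, edge_coeffs):
    """One KAN 'neuron': a sum of learnable univariate functions, one per
    input, following the Kolmogorov-Arnold superposition structure."""
    return sum(piecewise_linear(g, c, x)
               for x, g, c in zip(xs, edge_grids, edge_coeffs))

# Two inputs, each passed through its own piecewise-linear edge function.
grid = [0.0, 0.5, 1.0]
out = kan_neuron([0.25, 0.75],
                 edge_grids=[grid, grid],
                 edge_coeffs=[[0.0, 1.0, 0.0],   # tent-shaped function
                              [0.0, 0.0, 2.0]])  # ramp on the upper half
print(out)  # -> 1.5  (tent at 0.25 gives 0.5; ramp at 0.75 gives 1.0)
```

Because each edge carries an explicit univariate curve, the learned functions can be plotted and inspected directly, which is the source of the claimed white-box character.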
6.2. Post Hoc Explanation Methods
Interpretability methods for AI models mainly refer to techniques designed to explain existing black-box AI models. In general, commonly used black-box models in power systems include deep neural networks (DNNs) and deep reinforcement learning (DRL) models. The lack of interpretability in DNNs primarily arises from their complex network architectures and the intricate interactions among multi-layer parameters, which make the final prediction results difficult to explain intuitively. Moreover, for the same input, multiple combinations of neuron activation patterns may exist, further increasing the difficulty of understanding the internal mechanisms of the model.
The interpretability challenges of DRL models are mainly attributed to their learning process, in which policies are optimized through continuous interactions with the environment. The resulting state transitions and reward variations during this process are often difficult to interpret intuitively. In addition, the parameter update processes of the policy network and value network in DRL are complex and highly interdependent, rendering the decision-making process of the model difficult to explain.
To address the black-box nature of AI models, several classical interpretability methods have been proposed, including SHAP, LIME, and Grad-CAM. These methods employ different techniques and principles to uncover the underlying working mechanisms of AI models, thereby enhancing the interpretability of black-box models. The following subsections provide a detailed analysis of these methods.
6.2.1. SHAP
SHAP attribution theory is an interpretable AI technique that is applicable to a wide range of machine learning models, including neural networks [
75]. The fundamental principle of SHAP is illustrated in
Figure 6. This method is grounded in the computation of feature attribution values, known as Shapley values, which quantitatively evaluate the contribution of individual input features to the model output. By constructing an additive surrogate model $G(x)$ to approximate the trained classifier $F(x)$, the prediction of the original model can be expressed as the summation of the Shapley values associated with all input features, as formulated below:

$$G(x) = \phi_0 + \sum_{i=1}^{m} \phi_i \quad (1)$$

where $\phi_0$ denotes the expected value of the model prediction, and $\phi_i$ represents the attribution value of the $i$-th feature in the $m$-dimensional input sample $x$.
By analogy with the payoff allocation principle in cooperative game theory, the Shapley value $\phi_i$ of a given input feature can be defined as the average marginal contribution of feature $i$ across all possible subsets of features. The detailed computation process can be expressed as follows:

$$\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(m - |S| - 1)!}{m!} \left[ F(S \cup \{i\}) - F(S) \right] \quad (2)$$

where $N$ denotes the set containing all input features, and $S$ represents a feature subset obtained by excluding feature $i$.
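Equation (2) can be evaluated exactly for small feature counts by enumerating all subsets. The sketch below does this for a toy linear model, replacing features outside the coalition with baseline values to define $F(S)$ for a concrete model — one common convention, assumed here only for illustration.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, baseline, x):
    """Exact Shapley values phi_i for `predict` over the m features of x.
    F(S) is defined by replacing features outside the coalition S with
    their baseline values (an illustrative convention)."""
    m = len(x)
    phis = []
    for i in range(m):
        others = [j for j in range(m) if j != i]
        phi = 0.0
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                # Shapley kernel weight |S|! (m - |S| - 1)! / m!
                w = factorial(size) * factorial(m - size - 1) / factorial(m)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(m)]
                without_i = [x[j] if j in S else baseline[j] for j in range(m)]
                phi += w * (predict(with_i) - predict(without_i))
        phis.append(phi)
    return phis

# For a linear model, phi_i reduces to w_i * (x_i - baseline_i),
# which makes the result easy to verify by hand.
predict = lambda v: 3.0 * v[0] + 2.0 * v[1]
phis = shapley_values(predict, baseline=[0.0, 0.0], x=[1.0, 2.0])
print(phis)  # -> [3.0, 4.0]
```

The attributions also satisfy the efficiency property of (1): they sum to the difference between the prediction at `x` and the baseline prediction.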
6.2.2. ALE
To facilitate understanding of the specific influence of important features on the prediction target, existing studies have introduced ALE plots. By leveraging local effects, ALE eliminates the interference caused by correlations among features and enables the analysis of the joint effects of strongly correlated variables on the target. Compared with partial dependence plots, ALE exhibits the advantages of higher computational efficiency and unbiased estimation. ALE averages the local changes in predictions and accumulates them over a predefined grid, as expressed in (3):

$$\hat{f}_j(x) = \int_{z_{0,j}}^{x} E\!\left[ \frac{\partial f(X_j, X_{\setminus j})}{\partial X_j} \,\Big|\, X_j = z \right] \mathrm{d}z - c \quad (3)$$

where $X_j$ is the feature of interest, $X_{\setminus j}$ is the set of remaining features, $\hat{f}_j$ is the ALE function of feature $j$, $f$ is the model prediction with $X_j$ varying and $X_{\setminus j}$ fixed, and $c$ is a centering constant. ALE plots can more accurately capture the mapping relationship between feature values and state labels, thereby providing an interpretable explanation of how changes in feature values influence the prediction results.
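The grid-based accumulation described above can be sketched in a few lines. This simplified version omits the centering constant of full first-order ALE, and all names and data are hypothetical.

```python
def ale_curve(predict, data, feature, grid):
    """Uncentered first-order ALE estimate for one feature: within each grid
    interval, average the prediction change obtained by moving only that
    feature across the interval for the samples falling inside it, then
    accumulate the local effects."""
    ale = [0.0]
    for lo, hi in zip(grid[:-1], grid[1:]):
        in_bin = [row for row in data if lo <= row[feature] <= hi]
        if in_bin:
            effects = []
            for row in in_bin:
                upper = list(row); upper[feature] = hi
                lower = list(row); lower[feature] = lo
                effects.append(predict(upper) - predict(lower))
            local = sum(effects) / len(effects)
        else:
            local = 0.0
        ale.append(ale[-1] + local)
    return ale  # accumulated local effects at each grid point

# For f(x0, x1) = 2*x0 + x1, the ALE of feature 0 accumulates a slope of 2
# even though x0 and x1 are strongly correlated in the data.
predict = lambda v: 2.0 * v[0] + v[1]
data = [[0.125, 0.25], [0.375, 0.75], [0.625, 1.25], [0.875, 1.75]]
curve = ale_curve(predict, data, feature=0, grid=[0.0, 0.5, 1.0])
print(curve)  # -> [0.0, 1.0, 2.0]
```

Because only samples actually observed in each interval are perturbed, the estimate avoids the unrealistic feature combinations that bias partial dependence plots under correlation.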
6.2.3. LIME
LIME [
76] is a local model interpretability algorithm that provides explanations for the prediction results of a specific sample. Let the sample to be explained be denoted by $x$, the complex model (e.g., a single-channel CNN or multi-channel CNN) by $f$, and the interpretable surrogate model by $g$. The algorithm shifts the focus from global to local interpretation by approximating the classification boundary of the complex model in the neighborhood of sample $x$ using the interpretable model $g$, thereby achieving a local linearization of the complex decision boundary. As illustrated in Figure 7, the red cross represents the original sample $x$. A set of perturbed samples (blue crosses) is generated by applying feature perturbations to the original sample $x$. Subsequently, in the vicinity of the original sample $x$, a dashed curve is fitted to approximate the decision boundary of the evaluated model, yielding a locally linear explanation of the model's behavior.
The basic workflow of the algorithm is shown in
Figure 8, where
x denotes the input feature,
n denotes the feature dimension,
f() denotes the machine learning model,
and g() denotes the interpretable model. The objective of LIME is to explain the prediction behavior of the original model for a target sample rather than to refit the entire training dataset. By fitting a simple surrogate model within a local neighborhood, LIME reveals how the prediction outcome is influenced by individual feature values and model parameters. In this way, LIME provides sample-specific explanations, which facilitate understanding the differences in the original model's predictions across different samples.
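A minimal one-dimensional LIME-style sketch of this workflow — perturb around the sample, weight perturbations by proximity, and fit a weighted linear surrogate — is shown below. The kernel choice, sample counts, and function names are illustrative assumptions, not the reference implementation.

```python
import math
import random

def lime_1d(f, x0, n_samples=500, width=0.5, kernel=0.3, seed=0):
    """Fit a locally weighted linear surrogate g(z) = a*z + b around x0 to
    explain a scalar black-box model f. The slope `a` is the local explanation."""
    rng = random.Random(seed)
    zs = [x0 + rng.uniform(-width, width) for _ in range(n_samples)]
    ws = [math.exp(-((z - x0) ** 2) / kernel ** 2) for z in zs]  # proximity kernel
    ys = [f(z) for z in zs]
    # Closed-form weighted least squares for a line.
    sw = sum(ws)
    mz = sum(w * z for w, z in zip(ws, zs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (z - mz) * (y - my) for w, z, y in zip(ws, zs, ys))
    var = sum(w * (z - mz) ** 2 for w, z in zip(ws, zs))
    a = cov / var
    b = my - a * mz
    return a, b

# A nonlinear black box: near x0 = 1.0 its local slope is d/dx (x^2) = 2.
slope, intercept = lime_1d(lambda z: z * z, x0=1.0)
print(round(slope, 2))  # close to 2.0
```

The surrogate is valid only in the kernel-weighted neighborhood of `x0`; different samples yield different local slopes, which is precisely the sample-specific nature of LIME explanations.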
6.2.4. Grad-CAM
Grad-CAM is a visualization-based method for interpreting the prediction process of CNNs. It is applicable to any CNN architecture without requiring modifications to the network structure. By generating heatmaps, Grad-CAM highlights the regions of an image that the network focuses on during recognition, thereby helping humans better understand the basis of the model's decisions. The core idea of this method lies in computing the gradients of the predicted class with respect to a selected intermediate layer, which are used to measure the importance of the corresponding feature maps. These gradients are then combined with the feature maps to obtain the class activation mapping (CAM). Finally, the CAM is upsampled to the size of the original image to produce an intuitive heatmap. From an input–output perspective, Grad-CAM visualizes the contribution of input features to the output class probability.
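The core weighting step can be sketched directly: global-average-pool the gradients to obtain per-channel weights, form the weighted sum of feature maps, and apply ReLU. The nested-list representation below is purely illustrative; a real implementation would operate on framework tensors captured by hooks.

```python
def grad_cam(feature_maps, gradients):
    """Grad-CAM core step: weight each feature map by its global-average-pooled
    gradient, sum the weighted maps, and apply ReLU. Both arguments are lists
    of HxW channels given as nested lists."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    # Channel importance = mean gradient over all spatial positions.
    alphas = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    cam = [[0.0] * w for _ in range(h)]
    for alpha, fmap in zip(alphas, feature_maps):
        for i in range(h):
            for j in range(w):
                cam[i][j] += alpha * fmap[i][j]
    # ReLU keeps only locations that positively support the predicted class.
    return [[max(0.0, v) for v in row] for row in cam]

# Two 2x2 channels: the first supports the class (positive gradients), the
# second opposes it, so the CAM highlights where channel 0 is active.
maps = [[[1.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 1.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
print(grad_cam(maps, grads))  # -> [[1.0, 0.0], [0.0, 0.0]]
```

The resulting low-resolution map is then upsampled to the input size and overlaid as a heatmap, as described above.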
6.3. Application of Explainable AI in Power System Stability Analysis
In addressing transient stability issues, researchers have primarily explored two aspects: AI model interpretation methods and self-explainable AI models.
In the domain of AI model interpretation methods, ref. [
77] proposed a SHAP-based attribution analysis framework for transient voltage stability assessment. This framework calculates the average absolute Shapley value to rank the importance of transient voltage stability features and further quantifies the influence of different input features on the model’s output based on their marginal contributions. To achieve fast and accurate online transient stability evaluation of power systems, ref. [
78] introduced a TSA method based on an improved 1D CNN. This method utilizes Grad-CAM to generate class activation maps of the transient stability model and combines the system topology for visual analysis. Ref. [
79] constructed a transient stability prediction model based on XGBoost using the operational characteristics of generators during power system faults, and applied LIME to explain the results for specific faults. This includes the contribution rate of features to individual sample classifications and their impact on prediction outcomes. For example, for a specific branch prediction result (three-phase short circuit fault), the contribution of specific features to the situation was presented. Ref. [
80] used SHAP to interpret the XGBoost model and samples from both global and local perspectives, identifying key stability features to make the model more transparent.
Some scholars have adopted self-explainable AI models to improve the interpretability of TSA. To address the problem of conventional DNNs lacking both reliability and time-dependent feature extraction in transient stability issues, ref. [
81] incorporated self-attention mechanisms within the DNN. This guided the model to adaptively focus on important features during training iterations, enabling it to quickly capture state dependencies between preceding and succeeding moments in the power system, thus demonstrating excellent interpretability. For the issue of explaining prediction results in post-fault TSA based on PMUs, ref. [
82] proposed a deep learning model with embedded self-attention layers and a transfer learning strategy. The embedded attention layer identifies the most disturbed generators, thus increasing the model’s interpretability. To address the problem of selecting key features for static voltage stability margin estimation, ref. [
83] conducted feature selection based on cumulative contribution rates and SHAP key feature values. The SHAP model was used to rank features in descending order of contribution, and a cyclic optimization process with cumulative contribution rate increments was employed to remove redundant features, demonstrating the potential of the SHAP model for pre-feature selection and optimization. Ref. [
84] utilized natural gradient boosting trees and SHAP interpretation theory in voltage stability margin prediction to analyze the influencing factors.
In practical power system operation, interpretability enables operators to understand the underlying basis of stability predictions rather than relying solely on opaque outputs. When assessment results are accompanied by explanations indicating which system variables, regions, or dynamic behaviors most strongly influence the prediction, operators can better evaluate the credibility and urgency of the warning. This enhanced transparency improves situational awareness, supports more confident real-time control actions, and facilitates post-event analysis and strategy refinement. Explicitly linking interpretability to operational reliability, decision confidence, and risk management underscores its essential role in enabling safe and effective deployment of AI-based TSA tools in real-world grid environments.
7. Knowledge–Data-Driven Stability Assessment
7.1. Complementary Advantages of Knowledge-Driven and Data-Driven Models
Data-driven methods abandon strict analysis of the internal mechanisms of the research object and, instead, rely on large amounts of experimental and measured data. By applying various data processing algorithms (or standardized processing procedures), they analyze correlations among data to construct empirical models. Their main feature lies in extracting relationships among variables from data samples. However, such correlations are often ambiguous, and their generalizability is usually weaker than that of knowledge-driven methods. Moreover, data-driven approaches are often constrained by the quality and quantity of samples, making it difficult for them to adapt to scenarios with significant changes.
For knowledge-driven models, with the large-scale integration of power electronic equipment and the increasing degree of coupling among devices, the dimensionality and complexity of power systems continue to grow. Mechanism-based knowledge-driven approaches thus suffer from insufficient model accuracy, leading to unreliable assessment results. The inaccuracy of physical mechanism models mainly stems from mismatches between the models and real-world scenarios, as well as the influence of uncertainties such as noise disturbances.
Therefore, in practical research, combining data-driven and knowledge-driven methods can help improve overall performance and enhance applicability. On the one hand, when constructing data-driven models, guidance from knowledge-driven methods—such as introducing physics-informed prior constraints—can prevent abnormal phenomena and enhance the adaptability of empirical models. On the other hand, to address the difficulties in acquiring accurate mechanism models and their high complexity, data-driven methods can assist knowledge-driven approaches in discovering new variables and explaining new phenomena.
7.1.1. Application of Data-Driven Models in Knowledge-Driven Frameworks
To address the mismatch between mechanism models and real-world scenarios, data-driven methods can play a role from three perspectives.
- (1)
Data-driven methods can improve the parameters of mechanism models. By correcting key coefficients of physical models using data-driven approaches, the adaptability of model parameters to real scenarios can be enhanced [
85,
86].
- (2)
Data-driven methods can assist in selecting appropriate mechanism models. For example, in power system fault screening, statistical analysis can be used to evaluate the performance differences of knowledge-driven methods under various scenarios, thereby coordinating different mechanism models in the fault screening process and improving both accuracy and efficiency. Some studies have also selected suitable knowledge-driven error correction models using data-driven methods to improve the accuracy of mechanism-based results [
87].
- (3)
Data-driven methods can quantify differences among mechanism patterns and improve the composition of mechanism models [
88,
89,
90,
91,
92]. Owing to their capability of discovering new patterns, data-driven approaches can support the updating of behavior analysis models generated by knowledge-driven methods [
93].
To mitigate the impact of uncertainties such as noise on mechanism-based analysis results, data-driven methods can be used to characterize uncertain factors and incorporate them into physical analysis models. By iteratively combining physical modeling and data-driven approaches, missing information can be supplemented using data-driven methods, thereby improving predictive performance [
94]. In traffic flow prediction, for example, the interactive integration of knowledge-driven and data-driven methods has been used to estimate instantaneous traffic flow via data-driven models, enhancing the adaptability of mechanism models and improving prediction accuracy [
95].
Physics-informed learning differs fundamentally from conventional hybrid models in how physical knowledge is integrated into the modeling framework. In physics-informed learning, known physical laws—such as power flow equations, differential equations, or stability constraints—are directly embedded into the training process, guiding model optimization and ensuring that predictions remain consistent with underlying physical principles. In contrast, conventional hybrid models typically combine data-driven components with separate physical models at the structural or output level, without explicitly incorporating physical constraints into the learning algorithm itself. Therefore, while hybrid models emphasize functional integration of physical and data-driven modules, physics-informed learning enforces physics consistency during model training, resulting in a tighter coupling between data patterns and system dynamics.
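The distinction can be illustrated with a minimal physics-informed fit: the training loss combines a data misfit with the residual of a governing equation evaluated at collocation points, so the physics shapes the optimization itself rather than being bolted on afterwards. The one-parameter model, ODE, noisy data, and grid search below are illustrative assumptions, not a production training loop.

```python
import math

def physics_informed_loss(a, t_data, u_data, t_colloc, k=1.0, lam=1.0):
    """Combined loss for a one-parameter model u(t) = exp(-a*t): data misfit
    on measured points plus the residual of the governing equation
    du/dt + k*u = 0 evaluated at collocation points."""
    u = lambda t: math.exp(-a * t)
    du = lambda t: -a * math.exp(-a * t)      # analytic derivative of the model
    data = sum((u(t) - y) ** 2 for t, y in zip(t_data, u_data)) / len(t_data)
    phys = sum((du(t) + k * u(t)) ** 2 for t in t_colloc) / len(t_colloc)
    return data + lam * phys

# Sparse, noisy measurements of u(t) = exp(-t); the physics residual pulls the
# estimate of `a` toward the value consistent with du/dt = -u.
t_data, u_data = [0.0, 1.0], [1.05, 0.33]
t_colloc = [0.0, 0.5, 1.0, 1.5, 2.0]          # where the ODE residual is enforced
candidates = [0.5 + 0.01 * i for i in range(101)]  # grid search over a in [0.5, 1.5]
best = min(candidates, key=lambda a: physics_informed_loss(a, t_data, u_data, t_colloc))
print(best)  # close to the physical value a = 1.0
```

In a conventional hybrid scheme, by contrast, the data-driven part would be trained on the data term alone and only combined with the physical model at the output stage.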
7.1.2. Application of Prior Knowledge in Data-Driven Models
Knowledge-driven methods can provide samples or feature bases for data-driven approaches. For instance, in photovoltaic power forecasting, physical mechanism models of photovoltaic modules have been used to derive the relationships between PV output and meteorological features. These features are then combined through mechanism-based models and used as inputs to AI models, ultimately improving the performance of data-driven methods [
96]. Models or rules generated by knowledge-driven approaches can not only guide the design and construction of empirical models but also be used to verify the rationality of their results [
97].
For example, data-driven empirical model structures and constraints can be designed according to existing physical knowledge and experience. However, such approaches do not explicitly express knowledge, and the effectiveness of introducing prior rules cannot be confirmed before actual testing [
98]. Further studies have first expressed knowledge in the form of deterministic equations and then integrated them with data-driven models [
99,
100]. In guided-learning hybrid frameworks, prior rules are mathematically formulated and embedded into the construction process of data-driven models, allowing the general characteristics of empirical models to be understood before actual testing [
101].
7.2. Typical Fusion Mechanisms of Knowledge-Driven and Data-Driven Models
Based on the above review, data–knowledge hybrid driving methods exhibit strong potential in addressing the limitations of single-model approaches. By systematically synthesizing existing studies, four representative hybrid paradigms are identified to meet diverse power system application requirements: a parallel mode for joint data–knowledge modeling, a serial mode for handling high model complexity or limited accuracy, a guided mode to compensate for insufficient domain knowledge in data-driven model construction, and a feedback mode to mitigate parameter mismatches in power system models.
7.2.1. Parallel Mode
As shown in
Figure 9, the final output in the parallel mode is obtained by integrating the results of data-driven and knowledge-driven methods, using strategies such as direct summation, factor multiplication, weighted summation, or switch-function control. Direct summation and factor multiplication are mainly suited to cases where the knowledge-driven mechanism model exhibits limited accuracy, with data-driven models compensating for the deviation between model outputs and actual behavior. In contrast, weighted summation and switch-function control are more appropriate when both models perform well, enabling scenario-dependent weighting or adaptive selection to achieve effective result fusion.
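The four fusion strategies of the parallel mode can be sketched as follows; the `fuse_parallel` helper and the numerical values are illustrative, not drawn from a specific study.

```python
def fuse_parallel(knowledge_out, data_out, mode,
                  weight=0.5, confidence=None, threshold=0.9):
    """Combine knowledge-driven and data-driven outputs using the fusion
    strategies of the parallel mode."""
    if mode == "sum":        # data model compensates the mechanism model's bias
        return knowledge_out + data_out
    if mode == "product":    # data model supplies a multiplicative correction factor
        return knowledge_out * data_out
    if mode == "weighted":   # both models are credible; blend their outputs
        return weight * knowledge_out + (1 - weight) * data_out
    if mode == "switch":     # adopt the data model only when it is confident enough
        return data_out if confidence is not None and confidence >= threshold else knowledge_out
    raise ValueError(f"unknown fusion mode: {mode}")

# Illustrative numbers: a mechanism-model estimate of 1.0 combined with a
# data-driven output under each strategy.
print(fuse_parallel(1.0, -0.25, "sum"))                    # -> 0.75
print(fuse_parallel(1.0, 0.5, "weighted", weight=0.75))    # -> 0.875
print(fuse_parallel(1.0, 0.5, "switch", confidence=0.95))  # -> 0.5
print(fuse_parallel(1.0, 0.5, "switch", confidence=0.40))  # -> 1.0
```

The switch-function case shows the scenario-dependent selection described above: the data-driven output is adopted only when its confidence clears the threshold.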
7.2.2. Serial Mode
As shown in
Figure 10, the serial mode is characterized by the use of data-driven empirical models to correct the outputs of knowledge-driven mechanism models, thereby improving result accuracy. This mode is particularly suitable for mechanism models with a high degree of simplification. Data-driven methods are employed to identify the correlation patterns between the outputs of simplified mechanism models and actual results under different scenarios, enabling systematic correction of mechanism model outputs.
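A minimal serial-mode sketch: a data-driven corrector is fitted to the residual between a simplified mechanism model and measurements, then applied in series after the mechanism model. The average-residual "learner" below is a deliberately simple stand-in for a trained regressor, and all names and values are illustrative.

```python
def train_serial_corrector(mechanism, samples, truths):
    """Serial mode: learn a correction for a simplified mechanism model.
    Here the 'data-driven' corrector is just the average residual between
    mechanism output and measured truth (a stand-in for a trained model)."""
    residuals = [truth - mechanism(s) for s, truth in zip(samples, truths)]
    bias = sum(residuals) / len(residuals)
    return lambda s: mechanism(s) + bias   # corrected serial pipeline

# A simplified mechanism model that underestimates by a constant 0.125.
mechanism = lambda s: 0.5 * s
samples, truths = [1.0, 2.0, 3.0], [0.625, 1.125, 1.625]
corrected = train_serial_corrector(mechanism, samples, truths)
print(corrected(2.0))  # -> 1.125
```

In a realistic setting the corrector would be a regression model over scenario features, so the correction could vary with the operating condition rather than being a single bias term.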
7.2.3. Guided Mode
In the guided mode, the primary feature is the use of known knowledge-driven mechanism models to guide the construction of data-driven empirical models. As shown in
Figure 11, this mode influences the empirical model by modifying the configuration of data-driven methods. For example, during the training of AI models, rules derived from mechanism models can be incorporated into the training objectives of data-driven methods, ensuring that the empirical models exhibit desired performance characteristics.
7.2.4. Feedback Mode
The feedback mode is characterized by the use of data-driven methods to correct or replace specific modules or parameters of knowledge-driven mechanism models. As shown in
Figure 12, this mode is applicable in scenarios where parts of the mechanisms are unknown or model parameters are uncertain. In the feedback mode, the mechanism model serves as the base model for computing final outputs, while the data-driven empirical model updates the predicted values based on discrepancies between model outputs and actual results, and feeds the corrected values back into the mechanism model.
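A minimal feedback-mode sketch: the discrepancy between measured and predicted outputs is fed back to correct an uncertain parameter of the mechanism model, which remains the base model producing the final output. The proportional update rule and the numbers are illustrative assumptions.

```python
def feedback_calibration(mechanism, param, measure, rate=0.5, steps=30):
    """Feedback mode: the data-driven side estimates the discrepancy between
    mechanism-model output and measurements, and feeds a parameter correction
    back into the mechanism model until the two agree."""
    for _ in range(steps):
        predicted = mechanism(param)
        error = measure - predicted      # discrepancy observed from data
        param += rate * error            # corrected value fed back into the model
    return param

# Mechanism model whose output is proportional to an uncertain parameter.
mechanism = lambda k: 2.0 * k
calibrated = feedback_calibration(mechanism, param=0.1, measure=3.0)
print(round(calibrated, 3))  # converges to 1.5, since 2.0 * 1.5 = 3.0
```

After calibration the mechanism model itself, now with a corrected parameter, is used for all subsequent computations, which is what distinguishes this mode from output-level correction.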
7.3. Application of Knowledge–Data-Driven Models in Power System Stability Analysis
7.3.1. Parallel-Mode-Based TSA in Power Systems
For power-system TSA, existing studies have focused on online analysis of anticipated contingency sets. A representative approach integrates the extended equal-area criterion (EEAC) method based on P-δ trajectory fitting with an extreme learning machine (ELM) to form a parallel knowledge–data fusion TSA framework. In practical operation, the large number of contingencies to be evaluated imposes stringent computational efficiency requirements. While knowledge-driven mechanism models provide high accuracy, they are computationally intensive, and most contingencies do not result in instability. By coordinating data-driven and knowledge-driven methods, low-risk contingencies can be rapidly screened and high-risk cases selectively analyzed, thereby significantly improving the efficiency of contingency-set assessment, as illustrated in
Figure 13.
As shown in
Figure 13, the knowledge-driven and data-driven methods operate in parallel to assess post-fault transient stability. The data-driven method only requires features at the fault instant, including variations in bus voltage magnitudes and phase angles before and after the fault, as well as changes in generator active and reactive power, and thus achieves high computational speed. In contrast, the knowledge-driven method requires a relatively complete fault simulation process and therefore consumes more computation time. The reliability of the data-driven results can be obtained through testing, yielding a relationship curve between its outputs and the prediction error rate. Ideally, when the data-driven output equals 1, the system is classified as unstable, and when the output equals 0, it is classified as stable. By setting an allowable error probability for the data-driven output, a corresponding confidence threshold can be determined. Based on this threshold, each contingency is either resolved directly by the ELM prediction or forwarded for detailed EEAC analysis, thereby achieving a balance between computation time and assessment accuracy.
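The screening logic described above can be sketched as a simple confidence-threshold router; the threshold value, the scores, and the helper names are hypothetical stand-ins for the trained ELM and the EEAC engine.

```python
def screen_contingencies(cases, elm_predict, eeac_assess, threshold=0.9):
    """Parallel-mode TSA screening: accept the fast data-driven verdict when
    its output is confidently near 0 (stable) or 1 (unstable); otherwise fall
    back to the slower mechanism-based EEAC assessment."""
    results = {}
    for case in cases:
        p = elm_predict(case)                    # fast screening score in [0, 1]
        if p >= threshold:
            results[case] = ("unstable", "ELM")
        elif p <= 1 - threshold:
            results[case] = ("stable", "ELM")
        else:
            results[case] = (eeac_assess(case), "EEAC")  # detailed analysis
    return results

# Hypothetical scores: two contingencies are confidently classified by the
# fast model; the ambiguous one is routed to the detailed assessment.
scores = {"fault_A": 0.02, "fault_B": 0.97, "fault_C": 0.55}
out = screen_contingencies(scores, elm_predict=scores.get,
                           eeac_assess=lambda c: "stable")
print(out)
```

Tightening `threshold` sends more cases to EEAC (higher accuracy, more computation); loosening it accepts more ELM verdicts (faster, riskier), which is exactly the time–accuracy trade-off discussed above.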
7.3.2. Serial-Mode-Based TSA in Power Systems
For transient overvoltage assessment in power systems, existing studies have proposed a knowledge–data serial fusion-driven assessment framework based on the power-voltage relationship and DT. The detailed integration mode between model-driven and data-driven overvoltage analysis methods is illustrated in
Figure 14.
The proposed theoretical analysis method for calculating the overvoltage peak at converter buses achieves a good balance between computational speed and accuracy, making it suitable for online applications and providing effective support for integration with data-driven methods. Specifically, theoretical overvoltage values can be efficiently obtained by using the equivalent parameters of the AC system and the operating parameters of the DC system. To avoid overfitting caused by improper feature selection and to enhance the interpretability of regression results, the theoretical analysis outcomes are treated as additional input features and incorporated into the original training samples of the data-driven method. Consequently, the objective of the improved DT model is shifted from large-scale data relationship mining to revealing the association patterns between theoretical evaluation values and actual measured values. Therefore, for typical fault scenarios, key electrical quantities together with their corresponding theoretical overvoltage values are selected as input features, while the overvoltage peak values obtained from time-domain simulations are used as outputs. The DT model is trained with these enhanced samples to achieve fast and accurate error correction.
7.3.3. Serial and Guided Mode-Based Critical Clearing Time Prediction in Power Systems
For the prediction of the critical clearing time (CCT) in transient stability margin assessment, existing studies have combined the extended equal-area criterion (EEAC) with the ELM and proposed a knowledge–data fusion-driven framework in serial and guided modes.
Conventional knowledge-driven methods struggle to achieve both high speed and high accuracy simultaneously. Therefore, a serial integration of the classical model-based EEAC method and the data-driven ELM method is first adopted. In practical power system operation, a smaller CCT indicates a higher risk of transient instability and a lower tolerance for prediction error. Hence, higher accuracy is required for scenarios with elevated instability risk. However, standard ELM training treats errors from different samples equally and cannot explicitly emphasize accuracy for high-risk cases. As a result, the ELM method needs to be improved so that the ELM-based correction model can be constructed in a way that aligns with engineering requirements.
Figure 15 illustrates the implementation of the proposed knowledge–data jointly driven CCT prediction method. As shown, based on analyses of power system operation and stability characteristics, a knowledge-driven EEAC method can be established through mechanistic modeling. Meanwhile, by summarizing practical operational requirements and system characteristics, the data-driven ELM model and its input features can be guided and improved accordingly.
Key practical challenges associated with AI-based TSA deployment include real-time implementation, regulatory acceptance, and operator trust. Real-time deployment requires fast model inference, seamless integration with existing control systems, and scalability under dynamic grid conditions, highlighting the importance of efficient optimization and low-latency prediction. From a regulatory perspective, AI-driven TSA tools must satisfy compliance requirements, ensuring auditability, interpretability, and alignment with established grid codes and safety standards. In addition, fostering operator trust is essential for practical adoption, which can be achieved through transparent model design, interpretable outputs, and thorough validation against real-world operational data. Explicitly addressing these aspects provides a clearer understanding of the barriers to practical implementation and the corresponding strategies for reliable and trustworthy deployment.
8. Conclusions and Future Work
As uncertainty, openness, and vulnerability continue to increase in modern power systems, grid operating conditions have become increasingly complex and dynamic. Conventional physics-based analysis methods are often unable to meet the stringent requirements of large-scale power systems for rapid online security assessment and decision-making. In this context, big data analytics and AI have emerged as promising technologies and key trends for online stability assessment in modern power systems, enabling fast and accurate identification of transient stability conditions.
This paper presents a data-driven TSA framework that integrates offline training, online application, and model interpretability. A comprehensive review of TSA methods is conducted from the perspectives of data augmentation, machine learning algorithms, and model learning mechanisms. Current studies indicate that advanced machine learning algorithms largely determine the lower bound of TSA model performance, data augmentation techniques help push model performance toward its upper bound, and learning mechanisms play a critical role in ensuring model robustness and transferability. Effectively integrating these three aspects, while coordinating them with physics-driven models, is essential for the practical deployment of AI-based TSA in real-world power systems.
Owing to stringent security requirements and inherent characteristics of power systems, such as uncertainty, openness, and vulnerability, the engineering application of data-driven online security and stability assessment remains in a continuous development stage. With the ongoing advancement of AI technologies, human–machine hybrid augmented intelligence has emerged as a feasible and important paradigm for applying AI to power-system TSA. By leveraging collaborative human–machine enhancement, models can achieve adaptive and progressive performance improvement, effectively mitigating the risk of system instability caused by erroneous autonomous decisions. This paradigm is expected to further promote the digitalization and intelligent evolution of transient stability assessment in modern power systems.
Author Contributions
Conceptualization, F.L. and Z.Z.; methodology, F.L.; software, F.L.; validation, F.L. and J.Q.; formal analysis, J.Q. and D.W.; investigation, T.T. and Z.W.; resources, D.W.; writing—original draft preparation, F.L.; writing—review and editing, Z.Z.; visualization, Z.Z. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Science and Technology Project of the Headquarters of State Grid Corporation of China (Research on evaluation and improvement technology of power system security-supply-consumption carrying boundary in transition period), grant number 1400-202456361A-3-1-DG.
Data Availability Statement
No new data were created or analyzed in this study.
Conflicts of Interest
Authors Fan Li, Jishuo Qin, Taikun Tao, Dan Wang and Zhidong Wang were employed by the company State Grid Economic Technology Research Institute Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
References
- Shi, Z.; Xu, Y.; Wang, Y.; He, J.; Li, G.; Liu, Z. Coordinating multiple resources for emergency frequency control in the energy receiving-end power system with HVDCs. IEEE Trans. Power Syst. 2023, 38, 4708–4723. [Google Scholar]
- Zhang, Z.; Qin, B.; Gao, X.; Zhang, Y.; Ding, T. An improved decision tree-based method for evaluating transient overvoltage caused by commutation failure in LCC-HVDC systems. IET Gener. Transm. Distrib. 2023, 17, 5124–5134. [Google Scholar]
- Wang, H.; Qin, B.; Su, Y.; Li, F.; Hong, S.; Ding, T. Coordinated planning of mobile electric-hydrogen energy storage for remote power system resilience enhancement. J. Energy Storage 2026, 147, 120160. [Google Scholar] [CrossRef]
- Zhang, J.; Li, K.; Liu, W.; Sun, K.; Liu, Z.; Wang, Z. Grid side reactive power support strategy for MMC-HVDC connected to the wind farms based on unloading resistor. Electr. Power Syst. Res. 2021, 193, 107010. [Google Scholar] [CrossRef]
- Zheng, Z.; Ren, J.; Xiao, X.; Huang, C.; Wang, Y.; Xie, Q. Response mechanism of DFIG to transient voltage disturbance under commutation failure of LCC-HVDC system. IEEE Trans. Power Deliv. 2020, 35, 2972–2979. [Google Scholar] [CrossRef]
- Zhang, T.; Yao, J.; Sun, P.; Zhang, H.; Liu, K.; Zhao, Y. Improved continuous fault ride through control strategy of DFIG-based wind turbine during commutation failure in the LCC-HVDC transmission system. IEEE Trans. Power Electron. 2021, 36, 459–473. [Google Scholar] [CrossRef]
- Qin, B.; Wang, H.; Li, W.; Li, F.; Wang, W.; Ding, T. Aperiodic coordination scheduling of multiple PPLs in shipboard integrated power systems. IEEE Trans. Intell. Transp. Syst. 2024, 25, 14844–14854. [Google Scholar] [CrossRef]
- Li, C.; He, P.; Li, Y. LCC-HVDC auxiliary emergency power coordinated control strategy considering the effect of electrical connection of the sending-end power grid. Electr. Eng. 2019, 101, 1133–1143. [Google Scholar] [CrossRef]
- Qin, B.; Wang, H.; Li, F.; Liu, D.; Liao, Y.; Li, H. Towards zero carbon hydrogen: Co-production of photovoltaic electrolysis and natural gas reforming with CCS. Int. J. Hydrogen Energy 2024, 78, 604–609. [Google Scholar] [CrossRef]
- Wang, H.; Qin, B.; Hong, S.; Cai, Q.; Li, F.; Ding, T.; Li, H. Optimal planning of hybrid hydrogen and battery energy storage for resilience enhancement using bi-layer decomposition algorithm. J. Energy Storage 2025, 110, 115367. [Google Scholar] [CrossRef]
- Wang, H.; He, P. Transient stability assessment and control system for power system. IEEJ Trans. Electr. Electron. Eng. 2019, 14, 1189–1196. [Google Scholar] [CrossRef]
- Sarajčević, P.; Kunać, A.; Petrović, G.; Despalatović, M. Artificial intelligence techniques for power system transient stability assessment. Energies 2022, 15, 507. [Google Scholar] [CrossRef]
- Qin, B.; Hong, S.; Wang, H.; Zhao, J.; Li, H.; Chen, P.; Ding, T. Non-isothermal dynamic model and collaborative optimization for multi-energy system considering pipeline energy storage. J. Energy Storage 2026, 141, 119083. [Google Scholar] [CrossRef]
- Mehinović, A.; Grebović, S.; Fejzić, A.; Oprašić, N.; Konjicija, S.; Akšamović, A. Application of artificial intelligence methods for determination of transients in the power system. Electr. Power Syst. Res. 2023, 223, 109634. [Google Scholar] [CrossRef]
- Zhang, S.; Zhu, Z.; Li, Y. A critical review of data-driven transient stability assessment of power systems: Principles, prospects and challenges. Energies 2021, 14, 7238. [Google Scholar] [CrossRef]
- Ruan, J.; Liang, G.; Zhao, J.; Zhao, H.; Qiu, J.; Wen, F.; Dong, Z. Deep learning for cybersecurity in smart grids: Review and perspectives. Energy Convers. Econ. 2023, 4, 233–251. [Google Scholar] [CrossRef]
- De Caro, F.; Collin, A.J.; Giannuzzi, G.M.; Pisani, C.; Vaccaro, A. Review of data-driven techniques for on-line static and dynamic security assessment of modern power systems. IEEE Access 2023, 11, 130644–130673. [Google Scholar] [CrossRef]
- Liu, J.; Liu, J.; Ding, T.; Ren, C.; Yan, R. A generic scene-dependent credibility evaluation framework for machine learning-based transient stability assessment of power systems. IEEE Trans. Power Syst. 2026, 41, 773–776. [Google Scholar] [CrossRef]
- Zhu, L.; Hill, D.J.; Lu, C. Semi-supervised ensemble learning framework for accelerating power system transient stability knowledge base generation. IEEE Trans. Power Syst. 2022, 37, 2441–2454. [Google Scholar] [CrossRef]
- Wei, J.; Zhou, B.; Althubiti, S.A.; Alsenani, T.R.; Ghoneim, M.E. Transient stability assessment of power systems using support vector regressor and convolution neural network. Sustain. Comput. Inform. Syst. 2023, 37, 100826. [Google Scholar]
- Fang, J.; Liu, C. Artificial intelligence techniques for stability analysis in modern power systems. iEnergy 2024, 3, 194–215. [Google Scholar] [CrossRef]
- Khalil, M.; McGough, A.S.; Pourmirza, Z.; Pazhoohesh, M.; Walker, S. Machine Learning, deep learning and statistical analysis for forecasting building energy consumption-a systematic review. Eng. Appl. Artif. Intell. 2022, 115, 105287. [Google Scholar] [CrossRef]
- Liu, T.; Yan, J.; Liu, Y.; Chung, C.Y. Cascading failure screening based on gradient boosting decision tree for HVDC sending-end systems with high wind power penetration. IEEE Trans. Power Syst. 2025, 40, 3682–3694. [Google Scholar] [CrossRef]
- Qin, B.; Chen, P.; Zhang, Z.; Wang, H.; Ding, T. Coordinated preventive control strategy for transient overvoltage suppression in hybrid AC/DC sending-side systems. Int. J. Electr. Power Energy Syst. 2025, 171, 111017. [Google Scholar] [CrossRef]
- Lu, G.; Bu, S. Advanced probabilistic transient stability assessment for operational planning: A physics-informed graphical learning approach. IEEE Trans. Power Syst. 2025, 40, 740–752. [Google Scholar] [CrossRef]
- Qin, B.; Liu, J.; Wang, H.; Wang, Z.; Xiong, Z.; Wang, M.; Qian, Q. Energy-efficient and reliable urban rail transit: A new framework incorporating underground energy storage systems. iEnergy 2025, 4, 86–97. [Google Scholar] [CrossRef]
- Douzas, G.; Bacao, F.; Last, F. Improving imbalanced learning through a heuristic oversampling method based on k-means and SMOTE. Inf. Sci. 2018, 465, 11–20. [Google Scholar] [CrossRef]
- Chen, Q.; Lin, N.; Bu, S.; Wang, H.; Zhang, B. Interpretable time-adaptive transient stability assessment based on dual-stage attention mechanism. IEEE Trans. Power Syst. 2023, 38, 2776–2790. [Google Scholar] [CrossRef]
- Wang, H.; Qin, B.; Hong, S.; Xu, X.; Su, Y.; Lu, T.; Ding, T. Enhanced GAN based joint wind-solar-load scenario generation with extreme weather labelling. IEEE Trans. Smart Grid 2025, 16, 4213–4224. [Google Scholar] [CrossRef]
- Fang, J.; Zheng, L.; Liu, C.; Su, C. A data-driven case generation model for transient stability assessment using generative adversarial networks. IEEE Trans. Ind. Inf. 2024, 20, 14391–14400. [Google Scholar] [CrossRef]
- Ren, C.; Xu, Y. A fully data-driven method based on generative adversarial networks for power system dynamic security assessment with missing data. IEEE Trans. Power Syst. 2019, 34, 5044–5052. [Google Scholar] [CrossRef]
- Qin, B.; Wang, H.; Liao, Y.; Li, H.; Ding, T.; Wang, Z.; Li, F.; Liu, D. Challenges and opportunities for long-distance renewable energy transmission in China. Sustain. Energy Technol. Assess. 2024, 69, 103925. [Google Scholar] [CrossRef]
- Zhu, L.; Hill, D.J. Data/model jointly driven high-quality case generation for power system dynamic stability assessment. IEEE Trans. Ind. Inform. 2022, 18, 5055–5066. [Google Scholar] [CrossRef]
- Li, N.; Wu, J.; Shan, L.; Yi, L. Transient stability assessment of power systems based on CLV-GAN and I-ECOC. Energies 2024, 17, 2278. [Google Scholar] [CrossRef]
- Muhyaddin, R.; Salem, A.; Lucian, M.; Raef, A.; Tahir, K.; Abdurrahman, S.H.; Muhammad, F.T.; Ziad, M.A. An efficient scheme for determining the power loss in wind-PV based on deep learning. IEEE Access 2021, 9, 9481–9492. [Google Scholar]
- Tealane, M.; Kilter, J.; Bagleybter, O.; Heimisson, B.; Popov, M. Out-of-step protection based on discrete angle derivatives. IEEE Access 2022, 10, 78290–78305. [Google Scholar] [CrossRef]
- Li, B.; Xu, S.; Li, Z.; Sun, H. A short-term voltage stability evaluation method combining data-driven and mechanism criterion. IEEE Trans. Power Syst. 2025, 40, 1891–1902. [Google Scholar] [CrossRef]
- Ren, C.; Xu, Y.; Zhang, R. An interpretable deep learning method for power system transient stability assessment via tree regularization. IEEE Trans. Power Syst. 2022, 37, 3359–3369. [Google Scholar] [CrossRef]
- Shen, C.; Zuo, K.; Sun, M. Physics-augmented auxiliary learning for power system transient stability assessment. IEEE Trans. Ind. Inform. 2025, 21, 6811–6822. [Google Scholar] [CrossRef]
- Li, Y.; Zhang, M.; Chen, C. A Deep-Learning intelligent system incorporating data augmentation for Short-Term voltage stability assessment of power systems. Appl. Energy 2022, 308, 118347. [Google Scholar] [CrossRef]
- Lee, G.; Park, C.; Kim, D. Event detection-free framework for transient stability prediction via parallel CNN-LSTMs. IEEE Trans. Instrum. Meas. 2024, 73, 9004410. [Google Scholar] [CrossRef]
- Tan, B.; Yang, J.; Zhou, T.; Zhan, X.; Liu, Y.; Jiang, S.; Luo, C. Spatial-temporal adaptive transient stability assessment for power system under missing data. Int. J. Electr. Power Energy Syst. 2020, 123, 106237. [Google Scholar] [CrossRef]
- Liu, Y.; Li, Y.; He, S.; Li, Y.; Zhao, Y.; Zeng, Z. A physics-informed graph convolution network for ac optimal power flow via refining DC solution. IEEE Trans. Power Syst. 2026, 41, 438–453. [Google Scholar] [CrossRef]
- Wang, W.; Zhang, X.; Fan, Z.; Liu, H.; Wei, B.; Zhou, S.; Liao, X.; Ma, G. A dual-function measurement system for detection of very fast transient and partial discharge. IEEE Sens. J. 2025, 25, 3288–3294. [Google Scholar] [CrossRef]
- Tang, W.; Gu, Y.; Xin, Y.; Liang, Q.; Qian, T. Classification for transient overvoltages in offshore wind farms based on sparse decomposition. IEEE Trans. Power Deliv. 2022, 37, 1974–1985. [Google Scholar] [CrossRef]
- Gupta, A.; Gurrala, G.; Sastry, P. An online power system stability monitoring system using convolutional neural networks. IEEE Trans. Power Syst. 2019, 34, 864–872. [Google Scholar] [CrossRef]
- Zhang, Z.; Qin, B.; Gao, X.; Ding, T. CNN-LSTM based power grid voltage stability emergency control coordination strategy. IET Gener. Transm. Distrib. 2023, 17, 3559–3570. [Google Scholar] [CrossRef]
- Su, T.; Liu, Y.; Zhao, J.; Liu, J. Deep belief network enabled surrogate modeling for fast preventive control of power system transient stability. IEEE Trans. Ind. Inform. 2022, 18, 315–326. [Google Scholar] [CrossRef]
- Yuan, X.; Huang, B.; Wang, Y.; Yang, C.; Gui, W. Deep learning-based feature representation for soft sensor modeling with variable-wise weighted SAE. IEEE Trans. Ind. Inform. 2018, 14, 3235–3243. [Google Scholar] [CrossRef]
- Lu, Y.; Jiao, D.; Zhao, H.; Hou, Y.; Zheng, Y.; Yang, Q. Bayesian-LSTM-Transformer model for vehicle fuel cell life prediction. Int. J. Hydrogen Energy 2026, 202, 153045. [Google Scholar] [CrossRef]
- Huang, Y.; Yang, D.; Feng, B.; Tian, A.; Dong, P.; Yu, S.; Zhang, H. A GNN-enabled multipath routing algorithm for spatial-temporal varying LEO satellite networks. IEEE Trans. Veh. Technol. 2024, 73, 5454–5468. [Google Scholar] [CrossRef]
- Wang, B.; Wang, T.; Tang, Y.; Huang, Y. Knowledge-GPT guided generalizable reinforcement learning for intelligent emergency generator tripping in power system. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 20416–20428. [Google Scholar] [CrossRef] [PubMed]
- Zhou, Y.; Liu, L.; Xin, H.; Cao, Y. Hierarchical method for transient stability prediction of power systems based on SVM-ensemble classification. Energies 2016, 9, 778. [Google Scholar] [CrossRef]
- Li, B.; Wu, J. Adaptive assessment of power system transient stability based on active transfer learning with deep belief network. IEEE Trans. Autom. Sci. Eng. 2023, 20, 1047–1058. [Google Scholar] [CrossRef]
- Wu, S.; Zheng, L.; Hu, W.; Yu, R.; Liu, B. Improved deep belief network and model interpretation method for power system transient stability assessment. J. Mod. Power Syst. Clean Energy 2020, 8, 27–37. [Google Scholar] [CrossRef]
- Zhang, Z.; Qin, B.; Ding, T.; Gao, X.; Zhang, Y. CBAM-CNN based transient overvoltage preventive control considering piecewise linear control sensitivity. IEEE Trans. Power Syst. 2025, 40, 3645–3656. [Google Scholar] [CrossRef]
- Van Lent, M.; Fisher, W.; Mancuso, M. An explainable artificial intelligence system for small-unit tactical behavior. In Proceedings of the 16th Conference on Innovative Applications of Artificial Intelligence (IAAI); MIT Press: Cambridge, MA, USA, 2004; pp. 900–907. [Google Scholar]
- Raza, A.; Benrabah, A.; Alquthami, T.; Akmal, M. A review of fault diagnosing methods in power transmission systems. Appl. Sci. 2020, 10, 1312. [Google Scholar] [CrossRef]
- Dai, J.; Li, J.; Zhao, Z.; Yang, Z.; Ye, J.; Yang, Q.; Huang, C.; Zhang, Z. Online energy efficient multimodal probabilistic semantic communication. IEEE Internet Things J. 2026, 13, 69–86. [Google Scholar] [CrossRef]
- Gunning, D.; Aha, D.W. DARPA’s explainable artificial intelligence program. AI Mag. 2019, 40, 44–58. [Google Scholar] [CrossRef]
- Lundberg, S.M.; Lee, S.-I. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 4768–4777. [Google Scholar]
- Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2016), San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
- Bukhsh, Z.A.; Saeed, A.; Stipanovic, I.; Doree, A. Predictive maintenance using tree-based classification techniques: A case study for railway switches. Transp. Res. Part C Emerg. Technol. 2019, 101, 35–52. [Google Scholar] [CrossRef]
- Luo, G.; Liu, C.; Shang, B.; Wang, X.; He, J. Faulty feeder identification method for active distribution network based on depth feature extraction and semi-supervision domain adaptation. Int. J. Electr. Power Energy Syst. 2025, 172, 111161. [Google Scholar] [CrossRef]
- Liang, L.; Zhang, H.; Cao, S.; Zhao, X.; Li, H.; Chen, Z. Fault location method for distribution networks based on multi-head graph attention networks. Front. Energy Res. 2024, 12, 1395737. [Google Scholar] [CrossRef]
- Li, N.; Dong, J.; Tao, L.; Huang, L. Transient stability assessment model and its updating based on dual-tower transformer. Energy Eng. 2025, 122, 2957–2975. [Google Scholar] [CrossRef]
- Peng, C.; Chen, Y. Fixed-wing unmanned aerial vehicle rotary engine anomaly detection via online digital twin methods. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 741–758. [Google Scholar] [CrossRef]
- Liang, J.; He, X.; Xiao, H.; Wu, C. Offshore wind power prediction based on two-stage hybrid modeling. Energy Strategy Rev. 2024, 54, 101468. [Google Scholar] [CrossRef]
- Han, X.; Zhang, X.; Wu, Y.; Zhang, Z.; Wu, Z. KAN4TSF: Are KAN and KAN-based models effective for time series forecasting? arXiv 2024, arXiv:2408.11306. [Google Scholar]
- Kich, V.A.; Bottega, J.A.; Steinmetz, R.; Grando, R.B.; Yorozu, A.; Ohya, A. Kolmogorov–Arnold network for online reinforcement learning. arXiv 2024, arXiv:2408.04841. [Google Scholar]
- Shen, S.; Younes, R. Reimagining linear probing: Kolmogorov–Arnold networks in transfer learning. arXiv 2024, arXiv:2409.07763. [Google Scholar]
- Yang, X.Y.; Wang, X.C. Kolmogorov–Arnold transformer. arXiv 2024, arXiv:2409.10594. [Google Scholar]
- Zhou, Z.; Li, Y.; Guo, Z.; Yan, Z.; Chow, M.Y. A white-box deep-learning method for electrical energy system modeling based on Kolmogorov–Arnold network. arXiv 2024, arXiv:2409.08044. [Google Scholar]
- Gao, Y.; Ruan, Y. Interpretable deep learning model for building energy consumption prediction based on attention mechanism. Energy Build. 2021, 252, 111379. [Google Scholar] [CrossRef]
- Garreau, D.; von Luxburg, U. Explaining the explainer: A first theoretical analysis of LIME. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020), Online, 26–28 August 2020; PMLR 108. pp. 1287–1296. [Google Scholar]
- Chen, Y.; Huang, Z.; Du, Z.; Zhong, G.; Gao, J.; Zhen, H. Transient voltage stability assessment and margin prediction based on feature learning for disturbance signal energy from bus voltages. Front. Energy Res. 2024, 12, 1479478. [Google Scholar] [CrossRef]
- Pournabi, M.; Mohammadi, M.; Afrasiabi, S.; Setoodeh, P. Power system transient security assessment based on deep learning considering partial observability. Electr. Power Syst. Res. 2022, 205, 107736. [Google Scholar] [CrossRef]
- Qu, Y.; Wang, J.; Cheng, X.; Hao, J.; Wang, W.; Niu, Z.; Wu, Y. Migratable power system transient stability assessment method based on improved XGBoost. Energy Eng. 2024, 121, 1847–1863. [Google Scholar] [CrossRef]
- Kilembe, A.B.; Hamilton, R.I.; Papadopoulos, P.N. Explainable machine learning: A SHAP value-based approach to locational frequency stability. Int. J. Electr. Power Energy Syst. 2025, 170, 110885. [Google Scholar] [CrossRef]
- Khorasgani, H.; Hasanzadeh, A.; Farahat, A.; Gupta, C. Fault detection and isolation in industrial networks using graph convolutional neural networks. In Proceedings of the IEEE International Conference on Prognostics and Health Management (ICPHM), San Francisco, CA, USA, 17–20 June 2019; pp. 1–7. [Google Scholar]
- Han, X.; Jin, Y.; Wu, G.; Guo, S.; Liu, T. A self-attention-embedded deep learning model for phasor measurement unit-based post-fault transient stability prediction. In Proceedings of the 2022 Asian Conference on Frontiers of Power and Energy (ACFPE), Chengdu, China, 21–23 October 2022; pp. 165–171. [Google Scholar]
- Qin, H.; Li, S.; Zhang, J.; Rao, Z.; He, C.; Chen, Z.; Li, B. Online prediction and correction of static voltage stability index based on extreme gradient boosting algorithm. Energies 2024, 17, 5710. [Google Scholar] [CrossRef]
- Li, Y.; Cao, J.; Xu, Y.; Zhu, L.; Dong, Z.Y. Deep learning based on transformer architecture for short-term voltage stability assessment under class imbalance. Renew. Sustain. Energy Rev. 2024, 189, 113913. [Google Scholar] [CrossRef]
- Lakshminarayana, S.; Sthapit, S.; Maple, C. Application of physics-informed machine learning techniques for power grid parameter estimation. Sustainability 2022, 14, 2051. [Google Scholar] [CrossRef]
- Hou, J.; Xu, J.; Lin, C.; Jiang, D. State of charge estimation for lithium-ion batteries based on battery model and data-driven fusion method. Energy 2024, 290, 130056. [Google Scholar] [CrossRef]
- Li, H.; Qin, B.; Wang, S.; Ding, T.; Liu, J.; Wang, H. Aggregate power flexibility of multi-energy systems supported by dynamic networks. Appl. Energy 2025, 377, 124565. [Google Scholar] [CrossRef]
- Ling, J.; Kurzawski, A. Data-driven adaptive physics modeling for turbulence simulations. In Proceedings of the 23rd AIAA Computational Fluid Dynamics Conference, Denver, CO, USA, 5–9 June 2017. [Google Scholar]
- Yang, S.; Zhang, Y.; Hao, Z.; Lin, Z.; Zhang, B. CT saturation detection and compensation: A hybrid physical model- and data-driven method. IEEE Trans. Power Deliv. 2022, 37, 3928–3938. [Google Scholar] [CrossRef]
- Melas, I.N.; Mitsos, A.; E Messinis, D.; Weiss, T.S.; Alexopoulos, L.G. Combined logical and data-driven models for linking signalling pathways to cellular response. BMC Syst. Biol. 2011, 5, 107. [Google Scholar] [CrossRef]
- Khorasgani, H.; Biswas, G. A combined model-based and data-driven approach for monitoring smart buildings. Kalpa Publ. Comput. 2018, 4, 21–36. [Google Scholar]
- Pan, Y.A.; Guo, J.; Chen, Y.; Cheng, Q.; Li, W.; Liu, Y. A fundamental diagram based hybrid framework for traffic flow estimation and prediction by combining a Markovian model with deep learning. Expert Syst. Appl. 2024, 238, 122219. [Google Scholar] [CrossRef]
- Mei, B.; Sun, L.; Shi, G. White-black-box hybrid model identification based on RM-RF for ship maneuvering. IEEE Access 2019, 7, 57691–57705. [Google Scholar] [CrossRef]
- Hu, J.; Zhao, H.; Peng, Y. CKFNet: Neural network aided cubature Kalman filtering. IEEE Signal Process. Lett. 2025, 32, 3455–3459. [Google Scholar] [CrossRef]
- Li, Q.; Xu, P.; He, D.; Wu, Y. Multi-source information fusion graph convolution network for traffic flow prediction. Expert Syst. Appl. 2024, 252, 124288. [Google Scholar] [CrossRef]
- Zhang, H.; Yang, J.; Fan, S.; Geng, H.; Shao, C. An ultra-short-term distributed photovoltaic power forecasting method based on GPT. IEEE Trans. Sustain. Energy 2025, 16, 2746–2754. [Google Scholar] [CrossRef]
- Bai, J.; Yu, W.; Xiao, Z.; Havyarimana, V.; Regan, A.C.; Jiang, H.; Jiao, L. Two-stream spatial–temporal graph convolutional networks for driver drowsiness detection. IEEE Trans. Cybern. 2022, 52, 13821–13833. [Google Scholar] [CrossRef] [PubMed]
- Li, X.; Liu, C.; Guo, P.; Liu, S.; Ning, J. Deep learning-based transient stability assessment framework for large-scale modern power system. Int. J. Electr. Power Energy Syst. 2022, 139, 108010. [Google Scholar] [CrossRef]
- Wong, J.C.; Gupta, A.; Ooi, C.C.; Chiu, P.; Liu, J.; Ong, Y. Evolutionary optimization of physics-informed neural networks: Evo-PINN frontiers and opportunities. IEEE Comput. Intell. Mag. 2026, 21, 16–36. [Google Scholar] [CrossRef]
- Gil, Y.; Greaves, M.; Hendler, J.; Hirsh, H. Integrating scientific knowledge with machine learning for engineering and environmental systems. ACM Comput. Surv. 2022, 55, 1–37. [Google Scholar] [CrossRef]
- Duraisamy, K.; Iaccarino, G.; Xiao, H. Turbulence modeling in the age of data. Annu. Rev. Fluid Mech. 2019, 51, 357–377. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.