Article

Advancing UAV Sensor Fault Diagnosis Based on Prior Knowledge and Graph Convolutional Network

1 Chengdu Aircraft Design & Research Institute, Chengdu 610091, China
2 School of Aeronautics, Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Machines 2024, 12(10), 716; https://doi.org/10.3390/machines12100716
Submission received: 10 August 2024 / Revised: 1 October 2024 / Accepted: 5 October 2024 / Published: 10 October 2024

Abstract

Unmanned aerial vehicles (UAVs) are equipped with various sensors to facilitate control and navigation. However, UAV sensors are highly susceptible to damage under complex flight environments, leading to severe accidents and economic losses. Although fault diagnosis methods based on deep neural networks have been widely applied in the mechanical field, these methods often fail to integrate multi-source information and overlook the importance of system prior knowledge. As a result, this study employs a spatial-temporal difference graph convolutional network (STDGCN) for the fault diagnosis of UAV sensors, where the graph structure naturally organizes the diverse sensors. Specifically, a difference layer enhances the feature extraction capability of the graph nodes, and the spatial-temporal graph convolutional modules are designed to extract spatial-temporal dependencies from sensor data. Moreover, to ensure the accuracy of the association graph, this research introduces the UAV’s dynamic model as prior knowledge for constructing the association graph. Finally, diagnostic accuracies of 94.93%, 98.71%, and 92.97% were achieved on three self-constructed datasets. In addition, compared to commonly used data-driven approaches, the proposed method demonstrates superior feature extraction capabilities and achieves the highest diagnostic accuracy.

1. Introduction

As an emerging class of aircraft, unmanned aerial vehicles (UAVs) have been widely used in military, civilian, and commercial fields due to their flexibility, efficiency, and safety [1]. As key components of UAV systems, sensors measure real-time flight state parameters and feed them back to the flight control system. The accuracy of sensor measurements is an essential prerequisite for ensuring the safety of UAVs [2]. However, UAVs are susceptible to external interferences such as wind, magnetic fields, and vibrations, and several UAV crashes have been traced to sensor failures [3]. Complex flight environments therefore pose severe challenges for UAV sensors. Natural disasters such as hurricanes or lightning can physically damage the UAV and disable particular sensors, suddenly appearing obstacles can impair a sensor’s ability to record data, and rapid changes in altitude can also degrade sensor measurements [4]. Effective detection of sensor failures ensures the safety and reliability of UAV flights and avoids potential losses. Therefore, a method for accurately diagnosing UAV sensor faults holds significant practical value. Conventional statistical threshold-based UAV sensor fault diagnosis suffers from two main limitations:
(1) Under the conventional statistical threshold-based approach, the threshold corresponding to a specific fault differs from sensor to sensor. Since UAVs are highly coupled electromechanical systems typically equipped with multiple heterogeneous sensors, designing efficient and consistent thresholds for every sensor and every fault type does not scale to large-scale applications.
(2) In real-time applications, different UAVs operate under different conditions, so the sensor data they collect vary in properties such as magnitude and strength. Threshold design therefore depends heavily on historical data properties, and sensor data collected from different UAVs under each operating condition must be properly logged and stored, requiring extra effort in data management and computer storage.
As a result, intelligent diagnosis of UAV sensors has received much attention in recent years. Existing methods for intelligent UAV sensor fault diagnosis fall into three main categories: model-based, data-driven, and knowledge-based methods. Model-based methods are commonly used in traditional fault diagnosis and include parameter estimation, state estimation, and the equivalent space method. The main idea is to establish precise mathematical models and employ control theory to obtain residual signals [5]. In this context, the Kalman filter (KF) and its improvements, such as the extended Kalman filter (EKF), play significant roles by fusing measured data with predicted data through a feedback system [6,7]. However, in real-life applications, UAVs operate in nonlinear regimes, which significantly degrades the Kalman filter’s ability to correct the noisy measurements from the multi-sensor navigation unit that provides the required flight state feedback. Some researchers apply signal processing techniques to extract signal features and identify system faults. Chen et al. [8] conducted fault diagnosis using acoustic signals from UAV motors: features were extracted from the sound signals using statistical equations and then classified with decision tree (DT), support vector machine (SVM), and K-nearest neighbor (KNN) algorithms. Since this study was based on acoustic signals, the experiments had to be conducted indoors, leaving the results susceptible to environmental interference. Others have monitored UAV health by observing the vibration spectrum of the aircraft body, processing the vibration data with a fast Fourier transform (FFT) [8]. However, model-based methods rely on extensive domain knowledge and experience, making them less adaptable to different fault scenarios. Moreover, the variety and complexity of sensor types in UAVs make precise system models difficult to obtain.
Data-driven methods avoid cumbersome high-fidelity system identification and validation and have strong nonlinear mapping capability. Several scholars have conducted fault diagnosis research on mechanical structures such as bearings and gears, leveraging the advantages of neural networks in feature extraction. Liu et al. [9] proposed a novel bearing fault diagnosis method based on convolutional neural networks (CNNs) that directly processes time-domain raw signals, eliminating time-consuming feature extraction and reducing dependence on expert experience. Mao et al. [10] performed fault diagnosis of rolling bearing vibration signals based on a short-time Fourier transform and a convolutional neural network, achieving end-to-end fault pattern recognition. These fault diagnosis cases for mechanical structures typically use only a single type of sensor signal, such as vibration or current signals. However, UAVs are highly coupled electromechanical systems typically equipped with multiple heterogeneous sensors, exhibiting nonlinearity and time-varying characteristics. Model-based and conventional neural network methods struggle to extract and fuse features from such multi-sensor data. Given the aforementioned limitations of model-based and data-driven methods, the two main objectives of this research are:
(1)
Incorporating prior knowledge about the UAV sensors in operation into fault diagnosis by utilizing a knowledge graph-based adaptive network.
(2)
Integrating the information collected by the heterogeneous sensing system of a UAV to achieve robust diagnosis performance.
As an emerging knowledge-based neural network model, a graph neural network (GNN) can effectively address this challenge. The graph convolutional network (GCN), in particular, can easily integrate information from multiple sensors by introducing an association graph in non-Euclidean space, offering strong adaptability, robustness, and high accuracy [11]. Several scholars have studied GCNs and applied them to fault diagnosis and remaining useful life prediction. Tama et al. [12] converted the time-frequency features of original vibration signals into graphs, which were fed into a GCN to diagnose faults in wind turbine gearboxes. Sadid et al. [13] developed a spatiotemporal graph convolutional network for fault diagnosis of unmanned vehicles, incorporating the mathematical model of the vehicles to construct the adjacency matrix. He et al. [14] proposed a graph attention network model suitable for fault diagnosis of UAVs, in which a masked spatial graph attention (Masked-SGAT) module aggregates spatial information and a gated recurrent unit (GRU) module extracts temporal features.
The key issue in GCN-based fault diagnosis is establishing the relationships between the sensors of a UAV. Conventional GCN methods typically rely on the similarity of variable features or on model learning to determine the graph, which cannot guarantee its accuracy. The concept of spatiotemporal fault detection and diagnosis can play a key role in improving the performance of the original GCN algorithm [15,16,17]. To address this issue, prior knowledge about the engineering equipment under maintenance can ensure the accuracy of the associations and the robustness of the model [18]. In this work, the mathematical model of a quadrotor is therefore introduced, and the association graph of the sensor variables is established from it. A spatiotemporal difference graph convolutional network (STDGCN) is then constructed, which includes a difference layer that enhances input features using local differences and a spatiotemporal graph convolution module that extracts temporal and spatial correlations between sensors. Unlike conventional model-driven and data-driven techniques, the STDGCN can exploit the relationships among data collected from different types of sensors to diagnose UAV sensor faults. Comparative experiments against several existing fault diagnosis methods demonstrate the superiority of the STDGCN model. The main contributions of this paper are summarized as follows:
(1)
A sensor fault diagnosis method for UAVs based on graph neural networks is proposed, addressing the challenge of diverse sensor types and the difficulty in obtaining precise system models.
(2)
UAV dynamics models are introduced as prior knowledge to construct the association graph, ensuring the accuracy of associations and the robustness of the network model.
(3)
The STDGCN model is constructed, which extracts both temporal and spatial features from sensor data, overcoming the difficulties involved in traditional methods in extracting and integrating features from multiple sensors.
The remainder of this paper is arranged as follows. Section 2 introduces the theory of the GCN, providing the formulas and derivations for graph convolutional operations. Section 3 describes the proposed association graph construction method and the constructed STDGCN in detail. A quadrotor UAV is employed to generate actual flight data in Section 4, and six categories of sensor fault data are generated from the flight data. The effectiveness and superiority of our proposal are then validated by ablation studies and comparative experiments in Section 5. Finally, Section 6 summarizes this paper and gives directions for future work.

2. Preliminary Theory of a Graph Convolutional Network (GCN)

Construction of the association graph: A graph $G = (V, E)$ can be defined by a set of nodes $V$ and edges $E$ [18,19]. The relationship between nodes $v_i$ and $v_j$ is represented by an edge $e_{i,j} \in E$. An adjacency matrix $A$ is constructed to facilitate information aggregation in the graph structure, where $A[i,j] = 1$ if the edge $e_{i,j}$ exists and $A[i,j] = 0$ otherwise.
Forward propagation of the GCN: According to the convolution theorem, the Fourier transform of the convolution of two signals is equivalent to the pointwise multiplication of their respective Fourier transforms. Let $f * x$ denote the convolution operation in the spatial domain, where $x = \{x_1, x_2, \ldots, x_n\} \in \mathbb{R}^n$ represents a dataset containing $n$ data points and $f = \{f_1, f_2, \ldots, f_n\}$ are the trainable parameters in the neural network. This operation can be transformed into the frequency domain using the Fourier transform [14,20]:
$$\mathcal{F}(f * x) = \mathcal{F}(f) \cdot \mathcal{F}(x) \tag{1}$$
where $\mathcal{F}$ represents the Fourier transform. By applying the inverse Fourier transform $\mathcal{F}^{-1}$ to both sides of Equation (1), the convolution operation $f * x$ in the spatial domain can be expressed as:
$$f * x = \mathcal{F}^{-1}(\mathcal{F}(f) \odot \mathcal{F}(x)) = U((U^T f) \odot (U^T x)) \tag{2}$$
where $U$ represents the Fourier basis and $\odot$ denotes element-wise multiplication. The GCN is designed to incorporate the association graph into neural networks. To achieve this, the GCN utilizes the Laplacian matrix of the graph to obtain the Fourier basis. Suppose $L_m = D - A$ is the Laplacian matrix of a graph. It can be normalized as $L_m = I_N - D^{-1/2} A D^{-1/2} \in \mathbb{R}^{N \times N}$, where $I_N$ represents the identity matrix and $A$ is the adjacency matrix [21]. $D$ denotes the degree matrix, $D_{ii} = \sum_j A_{ij}$. Then, the Fourier basis $U$, along with the eigenvalue matrix $\Lambda$, can be obtained by eigenvalue decomposition:
$$U \Lambda U^T = L_m, \quad \Lambda = \mathrm{diag}([\lambda_0, \ldots, \lambda_{N-1}]) \tag{3}$$
Based on the properties of the Laplacian matrix, $U$ is an orthogonal matrix that conforms to the mathematical requirements of the Fourier transform. Let $g_\theta$ be the diagonal matrix $g_\theta = \mathrm{diag}(U^T f)$. Then, Equation (2) can be simplified as follows:
$$f * x = U((U^T f) \odot (U^T x)) = U g_\theta U^T x \tag{4}$$
The eigenvalue decomposition of the Laplacian matrix is a crucial step in the graph convolution process. However, when the graph scale is large, the computational complexity grows quadratically with the number of nodes. The high cost of eigenvalue decomposition restricts the applicability of graph convolution algorithms primarily to small-scale graphs. To address this issue, Krizhevsky et al. [19] proposed an approximation of $g_\theta$ using Chebyshev polynomials $T_k$, which can be expressed as follows:
$$g_\theta(\Lambda) = \sum_{k=0}^{K-1} \theta_k T_k(\tilde{\Lambda}) \tag{5}$$
where $\theta_k$ is the Chebyshev coefficient and $T_k$ denotes the $k$-th term of the Chebyshev polynomial, defined recursively as $T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x)$ with $T_0(x) = 1$ and $T_1(x) = x$. $\tilde{\Lambda} = 2\Lambda/\lambda_{\max} - I_N$ is the diagonal matrix of scaled eigenvalues. Thus, Equation (4) can be expressed as:
$$f * x = U g_\theta U^T x \approx \sum_{k=0}^{K-1} \theta_k T_k(U \tilde{\Lambda} U^T) x = \sum_{k=0}^{K-1} \theta_k T_k(\tilde{L}_m) x \tag{6}$$
where $\tilde{L}_m = 2L_m/\lambda_{\max} - I_N$ and $\lambda_{\max}$ denotes the maximum eigenvalue of the Laplacian matrix. The Chebyshev polynomial approximation aggregates information from the 0-th up to the $(K-1)$-th order neighborhood nodes through the convolutional kernel, effectively capturing the local information around the central node.
Xiao et al. [22] further simplified the Chebyshev polynomials by setting $\lambda_{\max} = 2$ and $K = 2$, which means that only information from the first-order neighborhood of the central node is aggregated. Consequently, Equation (6) can be simplified as follows:
$$f * x \approx \theta_0 x + \theta_1 \left( 2L_m/\lambda_{\max} - I_N \right) x \approx \theta_0 x - \theta_1 \left( D^{-1/2} A D^{-1/2} \right) x \tag{7}$$
By setting the parameter $\theta = \theta_0 = -\theta_1$, Equation (8) can be obtained:
$$f * x \approx \theta \left( I_N + D^{-1/2} A D^{-1/2} \right) x \tag{8}$$
Furthermore, to facilitate training of the network through backpropagation, the renormalization trick $\tilde{W} = A + I_N$ and $\tilde{D}_{ii} = \sum_j \tilde{W}_{ij}$ is usually applied. Finally, the convolution operation in the spectral domain can be defined as:
$$f * x \approx \theta \left( I_N + D^{-1/2} A D^{-1/2} \right) x = \theta \left( \tilde{D}^{-1/2} \tilde{W} \tilde{D}^{-1/2} \right) x \tag{9}$$
A schematic diagram of the original GCN algorithm is shown in Figure 1.
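To make the propagation rule concrete, the following is a minimal NumPy sketch of Equation (9); the function name, the feature-matrix interface, and the ReLU activation are our own illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def gcn_layer(x, adj, weights):
    """One GCN propagation step following Equation (9).

    x:       node feature matrix, shape (N, F_in)
    adj:     adjacency matrix A, shape (N, N)
    weights: trainable parameters, shape (F_in, F_out)
    """
    n = adj.shape[0]
    w_tilde = adj + np.eye(n)                         # renormalization: add self-loops
    d_inv_sqrt = np.diag(w_tilde.sum(axis=1) ** -0.5) # D-tilde^{-1/2}
    a_hat = d_inv_sqrt @ w_tilde @ d_inv_sqrt         # D^{-1/2} W D^{-1/2}
    return np.maximum(a_hat @ x @ weights, 0.0)       # ReLU (illustrative choice)
```

Stacking such layers aggregates information from progressively larger neighborhoods, one hop per layer.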

3. Proposed Method

The core idea of fault diagnosis using the STDGCN is as follows: (1) difference layers enhance the features of the graph nodes, (2) STGCMs capture both temporal and spatial features, and (3) the fault diagnosis results are obtained by compressing and aggregating the node features through global average pooling and fully connected layers. In this paper, the input of the STDGCN is the association graph constructed based on the UAV dynamic model, and the output is the diagnostic results for the six types of faults. The flowchart of the proposed method is shown in Figure 2.

3.1. Association Graph Construction

The quality of the relationships between nodes in the association graph $G$ directly impacts the performance of the GCN. In fault diagnosis, these relationships are typically determined by similarity measures such as cosine similarity or Euclidean distance, which usually cannot guarantee the accuracy of the graph. In this article, the UAV sensor association graph is instead constructed from the quadrotor's dynamic model. This approach integrates the nonlinear physical characteristics of the UAV into the neural network, enabling better utilization of the spatial features of each variable and of the information in the sensor data [22].
The quadrotor is a complex kinematic and dynamic system, making it difficult to establish an exact mathematical model. Since the association graph only requires the connections between variables, this paper introduces the mathematical model of the quadrotor based on the following assumptions:
(1)
The quadrotor UAV is rigid with complete uniform symmetry;
(2)
The origin of the body coordinate system coincides entirely with the center of mass of the UAV;
(3)
The rotor of the UAV is rigid with zero moment of inertia about its axis of rotation;
(4)
The UAV performs small-angle flight, ignoring air resistance and gyroscopic torque.
Based on these assumptions, the quadrotor UAV used in the modeling is shown in Figure 3, and the relevant symbol definitions are given in Table 1.
As shown in Figure 3, the position and attitude of a UAV in space are described using two reference frames. The absolute position of the quadrotor is represented by the vector $(x, y, z)$ in the Earth coordinate system, while its attitude is given by the Euler angle vector $(\phi, \theta, \psi)$ [23]. The vector transformation from the body reference frame $B$ to the ground reference frame $E$ is achieved by the rotation matrix given in Equation (10), where $C$ and $S$ denote the cosine and sine functions, respectively.
$$R_E^B = \begin{bmatrix} C_\psi C_\theta & C_\psi S_\theta S_\phi - S_\psi C_\phi & C_\psi S_\theta C_\phi + S_\psi S_\phi \\ S_\psi C_\theta & S_\psi S_\theta S_\phi + C_\psi C_\phi & S_\psi S_\theta C_\phi - C_\psi S_\phi \\ -S_\theta & C_\theta S_\phi & C_\theta C_\phi \end{bmatrix} \tag{10}$$
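For readers reproducing the model, Equation (10) transcribes directly into code; the sketch below is our own illustrative helper (NumPy), not part of the original paper.

```python
import numpy as np

def rotation_matrix(phi, theta, psi):
    """Body-to-Earth rotation matrix of Equation (10); angles in radians."""
    c, s = np.cos, np.sin
    return np.array([
        [c(psi) * c(theta),
         c(psi) * s(theta) * s(phi) - s(psi) * c(phi),
         c(psi) * s(theta) * c(phi) + s(psi) * s(phi)],
        [s(psi) * c(theta),
         s(psi) * s(theta) * s(phi) + c(psi) * c(phi),
         s(psi) * s(theta) * c(phi) - c(psi) * s(phi)],
        [-s(theta), c(theta) * s(phi), c(theta) * c(phi)],
    ])
```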
The quadcopter UAV studied in this paper has an X-shaped structure, with four motors arranged at the ends of the body driving the rotors to generate the upward lifts $F_1, F_2, F_3, F_4$. The lift and torque of rotor $i$ can be expressed as:
$$F_i = K_T \Omega_i^2 \tag{11}$$
$$M_i = K_M \Omega_i^2 \tag{12}$$
Therefore, the force acting on the UAV in the body coordinate system can be expressed as:
$$F^B = \begin{bmatrix} F_x^B \\ F_y^B \\ F_z^B \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \sum_{i=1}^{4} F_i \end{bmatrix} \tag{13}$$
The force on the UAV in the ground coordinate system is obtained by applying the rotation matrix $R_E^B$ to Equation (13):
$$\begin{bmatrix} F_x \\ F_y \\ F_z \end{bmatrix} = R_E^B F^B = \sum_{i=1}^{4} F_i \begin{bmatrix} C_\psi S_\theta C_\phi + S_\psi S_\phi \\ S_\psi S_\theta C_\phi - C_\psi S_\phi \\ C_\phi C_\theta \end{bmatrix} \tag{14}$$
By defining the system input variables $U_1$, $U_2$, $U_3$, and $U_4$ and ignoring the air resistance experienced by the UAV, the control inputs of the UAV with respect to the fixed ground coordinate system are obtained as follows:
$$U_1 = K_T (\Omega_4^2 - \Omega_3^2) = F_4 - F_3 \tag{15}$$
$$U_2 = K_T (\Omega_2^2 - \Omega_1^2) = F_2 - F_1 \tag{16}$$
$$U_3 = K_M (\Omega_1^2 + \Omega_2^2 - \Omega_3^2 - \Omega_4^2) \tag{17}$$
$$U_4 = K_T (\Omega_1^2 + \Omega_2^2 + \Omega_3^2 + \Omega_4^2) = \sum_{i=1}^{4} F_i \tag{18}$$
After analyzing the forces acting on the UAV, its dynamic model is established. Because the mechanical structure of a small quadcopter UAV is completely uniform and symmetrical, the products of inertia $I_{xy}$, $I_{yz}$, and $I_{zx}$ are all zero. Assuming additionally that the moment of inertia about the rotor axis is zero, the gyroscopic moment and aerodynamic moment are ignored. Using the Lagrange equations, the moment balance equations around the three body axes are obtained as follows:
$$I_x \dot{p} = U_1 L + (I_y - I_z) q r \tag{19}$$
$$I_y \dot{q} = U_2 L + (I_z - I_x) p r \tag{20}$$
$$I_z \dot{r} = U_3 L + (I_x - I_y) p q \tag{21}$$
Similarly, the angular velocity in the body reference system can be converted into the angular velocity in the ground reference system through coordinate transformation:
$$\begin{bmatrix} p \\ q \\ r \end{bmatrix} = \begin{bmatrix} 1 & 0 & -S_\theta \\ 0 & C_\phi & S_\phi C_\theta \\ 0 & -S_\phi & C_\phi C_\theta \end{bmatrix} \begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix} \tag{22}$$
In the case of a UAV flying at small angles, the angular velocity in the body reference frame is approximately equal to that in the ground reference frame:
$$\begin{bmatrix} p & q & r \end{bmatrix}^T = \begin{bmatrix} \dot{\phi} & \dot{\theta} & \dot{\psi} \end{bmatrix}^T \tag{23}$$
By combining Equations (19)–(23), the final dynamic model of the quadcopter UAV is obtained as follows:
$$\ddot{\phi} = \left[ U_1 L + (I_y - I_z) \dot{\theta} \dot{\psi} \right] / I_x$$
$$\ddot{\theta} = \left[ U_2 L + (I_z - I_x) \dot{\phi} \dot{\psi} \right] / I_y$$
$$\ddot{\psi} = \left[ U_3 L + (I_x - I_y) \dot{\phi} \dot{\theta} \right] / I_z$$
This paper applies the kinematic equations and dynamic model of the UAV to construct the graph structure of the sensors, which is equivalent to introducing prior knowledge into the neural network model and thus improves its rationality and accuracy. The relationships between the variables can be derived from the aforementioned equations, and the adjacency matrix and degree matrix required for constructing the graph convolutional network can then be used for training and prediction.
The relationships between the sensor variables are derived from these equations. The known kinematic and dynamic equations of the UAV are as follows:
$$m \begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \ddot{z} \end{bmatrix} = \begin{bmatrix} F_x \\ F_y \\ F_z - mg \end{bmatrix} = \begin{bmatrix} U_4 (C_\psi S_\theta C_\phi + S_\psi S_\phi) \\ U_4 (S_\psi S_\theta C_\phi - C_\psi S_\phi) \\ U_4 C_\phi C_\theta - mg \end{bmatrix}$$
$$\ddot{\phi} = \left[ U_1 L + (I_y - I_z) \dot{\theta} \dot{\psi} \right] / I_x$$
$$\ddot{\theta} = \left[ U_2 L + (I_z - I_x) \dot{\phi} \dot{\psi} \right] / I_y$$
$$\ddot{\psi} = \left[ U_3 L + (I_x - I_y) \dot{\phi} \dot{\theta} \right] / I_z$$
By conducting simple derivations of the aforementioned equations, the connections between the variables can be determined [12]. For example, from the translational kinematic equation above, it can be inferred that the tri-axial acceleration of the UAV is affected by the input variable $U_4$, which is the sum of the lifts generated by the four rotors [24]. Thus, a relationship exists between $a_x, a_y, a_z$ and the rotational speed of each rotor, implying that each of $a_x, a_y, a_z$ has at least the four adjacent nodes $\{\Omega_1, \Omega_2, \Omega_3, \Omega_4\}$. In the constructed graph, a self-connected edge is also added for each sensor variable. The neighboring nodes of every variable can be derived in the same way from the equations above, yielding the complete graph [25,26]. As shown in Figure 4, each node in the association graph represents the data of an individual variable.
Subsequently, the adjacency matrix $A$ is generated from the association graph to represent its structural information, and the degree matrix $D$ of the graph is derived from the adjacency matrix. The degree matrix has values only on its diagonal, with each diagonal entry equal to the corresponding row sum of the adjacency matrix. The specific forms of the adjacency matrix and degree matrix used in this paper are shown in Figure 5.
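The construction of $A$ and $D$ from the derived neighbor relations is mechanical; the sketch below illustrates it, assuming a variable ordering and an edge subset of our own choosing (only the acceleration-rotor relations discussed above, with the PWM commands $c_1$–$c_4$ standing in for the rotor speeds; the full edge list follows from the remaining equations).

```python
import numpy as np

# The 16 sensor variables of Table 2; this ordering is our own choice.
VARIABLES = ["c1", "c2", "c3", "c4", "phi", "theta", "psi",
             "roll_rate", "pitch_rate", "yaw_rate",
             "ax", "ay", "az", "phi_dd", "theta_dd", "psi_dd"]

# Illustrative subset: each acceleration component neighbors the four
# rotor commands, as inferred from the translational dynamics equation.
EDGES = [(a, c) for a in ("ax", "ay", "az")
         for c in ("c1", "c2", "c3", "c4")]

def build_graph(variables, edges):
    idx = {v: i for i, v in enumerate(variables)}
    n = len(variables)
    A = np.eye(n)                          # self-connected edge on every node
    for u, v in edges:
        A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0   # undirected edge
    D = np.diag(A.sum(axis=1))             # degree matrix from row sums
    return A, D

A, D = build_graph(VARIABLES, EDGES)       # cf. the matrices in Figure 5
```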

3.2. Spatiotemporal Difference Graph Convolutional Network (STDGCN)

The constructed framework of the STDGCN is depicted in Figure 6 and consists of three primary components. The first is the difference layer, which enhances graph node features by calculating backward differences. The second is the spatiotemporal graph convolutional module (STGCM), designed to capture the spatiotemporal dependencies within the sensor data. Following the STGCM, a series of layers, including a time-gated convolution, a global average pooling layer, and fully connected layers, compresses and aggregates the node features, enabling effective fault classification.

3.3. The Difference Layer

A difference layer is introduced into the network model to incorporate additional information that expands the node features. Differencing is a standard technique for enhancing features in time series data by computing their derivative information. The difference layer constructed in this paper exploits local differential information by calculating backward differences from order 0 to order $D$; the resulting multi-order backward difference features serve as the new features of the graph nodes.
The specific calculation procedure of the difference layer is given below, where $x(t)$ denotes the data of all sensors in the $t$-th time slice and contains 16 variable nodes. The features of node $q$ in the $t$-th time slice are denoted $x_q(t)$. The difference operations and the new node features after the difference layer, $x'_q(t)$, are defined as:
$$\Delta^0 x_q(t) = x_q(t)$$
$$\Delta^1 x_q(t) = x_q(t) - x_q(t-1)$$
$$\Delta^2 x_q(t) = \Delta^1 x_q(t) - \Delta^1 x_q(t-1)$$
$$\cdots$$
$$\Delta^D x_q(t) = \Delta^{D-1} x_q(t) - \Delta^{D-1} x_q(t-1)$$
$$x'_q(t) = \big\Vert_{i=0}^{D} \Delta^i x_q(t)$$
where $\Delta^D x_q(t)$ denotes the $D$-th order backward difference feature and $\Vert$ denotes the feature aggregation operation. The difference operation enables the neural network to learn and extract trend features from the time series data, thereby improving the diagnostic effectiveness of the model.
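A compact sketch of this difference layer is given below (NumPy); the zero-padding at the first time step is our own boundary-handling assumption.

```python
import numpy as np

def difference_layer(x, D=4):
    """Stack backward differences of orders 0..D as new node features.

    x: array of shape (T, N) -- T time steps, N sensor nodes.
    Returns shape (D + 1, T, N); with D = 4 and a 100 x 16 input this
    matches the [5, 100, 16] feature map in Table 3 (up to the batch dim).
    """
    feats = [x]
    cur = x
    for _ in range(D):
        prev = np.roll(cur, 1, axis=0)
        prev[0] = cur[0]            # boundary: difference at t = 0 is zero
        cur = cur - prev            # backward difference of the next order
        feats.append(cur)
    return np.stack(feats, axis=0)  # feature aggregation (channel stacking)
```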

3.4. Spatiotemporal Graph Convolutional Module (STGCM)

The various sensors of a UAV are interconnected, naturally forming a spatiotemporal graph, with fault information embedded in the historical features of nodes and their neighbors. Modeling only the temporal or only the spatial dependencies of the sensor data can lead to unstable model performance. To extract fault features from the spatial and temporal domains simultaneously, a spatiotemporal graph convolutional module (STGCM) is proposed, as illustrated in Figure 7. The STGCM consists of a graph convolutional layer, two gated convolutional layers, a residual connection, and a batch normalization (BN) layer.
Figure 7 also shows the composition of the gated convolutional layer, which comprises gated linear units [14] implemented with two conventional one-dimensional convolutional layers. The mathematical model of the gated convolutional layer can be represented as:
$$y_{\mathrm{gated}} = (K_1 * x_{\mathrm{gated}} + b_1) \otimes \sigma (K_2 * x_{\mathrm{gated}} + b_2)$$
where $K_1$ and $K_2$ represent convolution kernels, $b_1$ and $b_2$ denote bias terms, $\sigma$ is the sigmoid function, $*$ denotes convolution, and $\otimes$ denotes element-wise multiplication. $x_{\mathrm{gated}}$ and $y_{\mathrm{gated}}$ are the input and output of the gated convolutional layer. Note that the shapes of the convolution kernels $K_1$ and $K_2$ must be consistent. The sigmoid branch maps the convolution response to a value between 0 and 1, acting as a gate that scales the linear branch and controls how much information flows through each timestep. With suitable initialization of the convolutional parameters and the learning rate, these gates allow information to flow unimpeded through potentially many timesteps; without them, information could easily vanish through the transformations of successive timesteps.
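A one-dimensional sketch of this gating mechanism is shown below (NumPy); the single-channel interface and "same" padding are our own illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_conv(x, k1, b1, k2, b2):
    """Gated linear unit built from two 1-D convolutions.

    x: single-channel series of shape (T,); k1 and k2 share the same shape.
    """
    linear = np.convolve(x, k1, mode="same") + b1          # K1 * x + b1
    gate = sigmoid(np.convolve(x, k2, mode="same") + b2)   # sigma(K2 * x + b2)
    return linear * gate                                   # element-wise gating
```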
The implementation process of the STGCM is shown in Figure 8. First, the sensor data are divided into several time slices. On a single time slice, the graph convolution layer performs convolution operations on the node and its neighboring nodes. In this way, the feature information of each node is extracted, and then the spatial dependence of each node is mined. By stacking gated convolutional layers, gate convolution operations are performed at consecutive time slices, merging the corresponding node’s temporal features and further exploring their time dependencies.
Moreover, residual architecture and batch normalization layers are implemented to make the model learn features more efficiently, which can accelerate the convergence of the model and improve its generalizability [14,27,28].

4. Experiment and Results

4.1. UAV Flight Experiment

Firstly, UAV flight experiments are conducted to obtain sensor data. The experiments are based on a self-assembled open-source quadcopter UAV, as shown in Figure 9, whose central controller is the open-source flight controller Pixhawk 2.4.8. The Pixhawk 2.4.8 incorporates the 32-bit STM32F427 microprocessor and the MS5611 barometer, and it integrates an MPU-6000 inertial measurement unit (IMU) with a three-axis accelerometer and a three-axis gyroscope, as well as a magnetometer [29]. The UAV is manually operated to complete basic movements such as translation, rotation, circling, ascent, and descent, with a flight time of 14 min. The flight data generated by the Pixhawk are recorded in the flight log, which can be downloaded and analyzed through the Mission Planner ground station.
Subsequently, the sensor variables mentioned above and the variables from the UAV dynamic model are considered for model training, and 16 key variables are extracted, as shown in Table 2. Signals generated by Hall sensors within the UAV motors are input to the controller, which generates the pulse width modulation (PWM) signals that drive the motors via electronic speed controllers [30]; $c_1$, $c_2$, $c_3$, and $c_4$ in Table 2 are these PWM outputs. Data for the angular accelerations $\ddot{\phi}$, $\ddot{\theta}$, $\ddot{\psi}$ are inferred from the available variable data. Note that the data length of the three-axis accelerometer variables differs from the others because of different sampling frequencies, so the acceleration variables $a_x, a_y, a_z$ are interpolated before extraction. Finally, the data of all 16 sensor variables are obtained, each with a length of 6460.

4.2. Sensor Fault Analysis

Various sensor faults can occur during the flight of a UAV, and conducting fault diagnosis requires classifying them. Based on their characteristics, the common fault sources include the following categories.
  • Damage fault: The sensor suddenly becomes inactive at a specific moment, resulting in zero output. This is the most common type of failure, often caused by damage in the power supply circuit of the sensor.
  • Bias fault: The output signal deviates from the original signal by a constant value. When the system is powered on, changes in temperature and stress can introduce bias in the sensor data. Insufficient sensor compensation can also lead to bias faults.
  • Drift fault: The output signal gradually deviates from the original signal by a linearly increasing offset over time. In practical flight scenarios, gyroscope data may experience drift due to temperature fluctuations and accumulated errors.
  • Lock fault: The output signal remains fixed at a constant value. Such faults usually occur when a UAV is subjected to magnetic interference or actuator failure.
  • Scale fault: The output signal is scaled at a specific ratio, causing the sensor’s output values to exceed a reasonable range. Reduced actuator output efficiency can lead to scaling faults.
In real flight missions, faults are usually sudden and unpredictable, making it difficult to collect sufficient fault data. Fault injection is a standard method for simulating sensor faults: faulty data are generated from healthy data according to specified fault characteristics. Therefore, faulty data can be generated through fault injection. Taking the roll rate as an example, the faults are modeled as follows:
Damage simulation: The damage fault is simulated by setting the healthy data at each time point to zero. The comparison between the injected data and the ground truth is shown in Figure 10a.
Bias simulation: The bias fault is simulated by adding the standard deviation of the variable data to the ground truth via $y(t) = x(t) + \mathrm{std}(x)$, where $x(t)$ represents the healthy data, $y(t)$ the fault data, and $\mathrm{std}(x)$ the standard deviation of the variable data. In this study, the standard deviation of the roll rate data is 5.13°/s. As shown in Figure 10b, the bias fault data maintain a constant deviation from the ground truth.
Drift simulation: The drift fault is simulated by introducing a time-varying bias to the ground truth, $y(t) = x(t) + 0.01t$, with the deviation rate set to 0.01. As illustrated in Figure 10c, the drift fault data deviate from the ground truth over time, and the magnitude of the deviation gradually increases.
Lock simulation: The sensor data remain fixed when a lock fault occurs. In this study, the locked value is set to the standard deviation of the variable data, $y(t) = \mathrm{std}(x)$; for the roll rate data, the value remains constant at 5.13°/s. The comparison between the injected and healthy data is shown in Figure 10d.
Scale simulation: The scale fault data are set to 1.2 times the ground truth, $y(t) = 1.2\,x(t)$. The comparison between the injected and healthy data is shown in Figure 10e.
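The five injection rules above condense into a short helper; the sketch below (NumPy) follows the formulas given in the text, with the function name and interface being our own.

```python
import numpy as np

def inject_fault(x, kind, drift_rate=0.01, scale=1.2):
    """Generate faulty sensor data from healthy data x (1-D array)."""
    if kind == "damage":
        return np.zeros_like(x)                     # output drops to zero
    if kind == "bias":
        return x + x.std()                          # y(t) = x(t) + std(x)
    if kind == "drift":
        return x + drift_rate * np.arange(len(x))   # y(t) = x(t) + 0.01t
    if kind == "lock":
        return np.full_like(x, x.std())             # y(t) = std(x), held constant
    if kind == "scale":
        return scale * x                            # y(t) = 1.2 x(t)
    raise ValueError(f"unknown fault type: {kind!r}")
```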
The faults are injected into the ground truth according to the characteristics of each fault type [30]. The same type of fault is injected into all 16 variables over the complete flight cycle, so the data finally fall into six classes: ground truth, bias fault, drift fault, damage fault, lock fault, and scale fault. No further preprocessing is applied before the STDGCN algorithm is used. Subsequently, a sliding window with a step size of 1 and a window length of 100 segments the sensor data into samples; 80% of the samples are randomly selected as the training set, while the remaining 20% are allocated to the testing set. In total, 384 samples are generated, each of dimension $16 \times 100$.
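The segmentation and split can be sketched as follows, assuming sensor_data holds the 16 × 6460 variable array; the random seed is our own assumption.

```python
import numpy as np

def sliding_window(data, window=100, step=1):
    """Segment a (16, T) series into samples of shape (num_samples, 16, window)."""
    starts = range(0, data.shape[1] - window + 1, step)
    return np.stack([data[:, s:s + window] for s in starts])

samples = sliding_window(sensor_data)   # sensor_data: (16, 6460) array
rng = np.random.default_rng(0)          # seed chosen arbitrarily
perm = rng.permutation(len(samples))
split = int(0.8 * len(samples))         # 80/20 train-test split
train, test = samples[perm[:split]], samples[perm[split:]]
```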

4.3. Model Training and Prediction

Sensor variable data collected in this research are processed into spatiotemporal datasets using the methods described above. The dataset is then randomly divided into a training set and a testing set with a ratio of 0.8 to 0.2. The STDGCN model is trained on the training set, using backpropagation together with gradient descent to optimize the model parameters, and the well-trained model is then applied to fault diagnosis on the testing set. To balance computational efficiency and diagnostic performance, two STGCM modules are stacked: in the temporal dimension, the first STGCM comprises 32 convolutional kernels and the second 64. The parameter $D$ of the difference layer is set to 4, indicating fourth-order differencing. The detailed structure of the proposed STDGCN model and the selected hyperparameter values are presented in Table 3 and Table 4, respectively, and the architecture is illustrated in Figure 11.
The network model is constructed on the PaddlePaddle platform, and the experiments are executed on a workstation with an i5-9300H CPU and an NVIDIA GeForce GTX 1650 GPU. During the training phase, the cross-entropy loss function quantifies the model's predictions and the Adam optimizer adjusts its parameters, with the learning rate fixed at 0.001. Multiple training and prediction cycles of the STDGCN are performed with this configuration: training runs for 50 iterations on 308 training samples with a batch size of 32, and the testing set comprises 76 samples with a batch size of 1. The computational cost is 0.37 s per epoch. The final fault diagnosis results are shown in Figure 12 and Figure 13.
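The training configuration described above condenses to a few lines on the PaddlePaddle platform; the sketch below assumes a paddle.nn.Layer implementation named STDGCN and a DataLoader named train_loader, neither of which is spelled out in the paper.

```python
import paddle

model = STDGCN()                          # assumed paddle.nn.Layer subclass
criterion = paddle.nn.CrossEntropyLoss()  # cross-entropy loss on 6 classes
optimizer = paddle.optimizer.Adam(learning_rate=0.001,
                                  parameters=model.parameters())

for epoch in range(50):                   # 50 training iterations
    for x_batch, y_batch in train_loader: # batches of 32 samples
        logits = model(x_batch)
        loss = criterion(logits, y_batch)
        loss.backward()
        optimizer.step()
        optimizer.clear_grad()
```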
The training results show that by the 20th epoch the STDGCN already achieves low loss and high accuracy, and this performance is sustained through the remaining training, indicating that the model converges quickly and effectively. The test results show that the STDGCN performs excellently on the sensor fault diagnosis task, with 94.93% accuracy and low loss on the test set. The absence of a shaded area in the test accuracy curve demonstrates the stability of the STDGCN model. Much of this advantage stems from the gating units: a stack of purely linear convolutional layers is simply a factorization of a single linear map, which remains linear up to the output, at which point it becomes log-linear, so gated convolutional layers outperform bilinear and purely linear alternatives. By stacking gated convolutional layers, the STDGCN performs gated convolution operations over consecutive time slices, merging each node's temporal features and further exploiting their time dependencies.

5. Discussion

5.1. Comparative Analysis of Models

The performance of the STDGCN method was assessed through cross-validation tests on three flight experiments, yielding the spatiotemporal Dataset_1, Dataset_2, and Dataset_3 with sample sizes of 384, 456, and 276, respectively. The STDGCN was tested multiple times with the same parameter configuration on these different datasets to confirm its effectiveness in various scenarios. Subsequently, six existing fault diagnosis methods are introduced for comparison: the deep neural network (DNN) [31], recurrent neural network (RNN) [24], gated recurrent unit (GRU) [32], long short-term memory network (LSTM) [33], convolutional neural network (CNN) [34], and LeNet architecture [35]. The LSTM and GRU models constructed in this paper consist of two hidden layers with 320 units each, with time steps and input dimensions of 100 and 20, respectively, followed by fully connected layers for fault classification. The CNN model comprises a 2D convolutional layer, a max-pooling layer, and three fully connected layers. The LeNet architecture is also included to further validate the effectiveness of convolutional neural networks in sensor fault diagnosis; it comprises two 2D convolutional layers, two activation layers, two max-pooling layers, and three fully connected layers. The final test accuracies are shown in Figure 14.
Overall, the STDGCN performs better than the other models on all three datasets. It achieves outstanding performance even on Dataset_3, which has the fewest samples, reaching an accuracy of 92.97%. Furthermore, owing to its larger sample size, every model performs better on Dataset_2 than on the other datasets. The performance gap between LSTM and GRU varies significantly across datasets, as smaller datasets may not provide sufficient information to train complex recurrent models.
Specifically, the accuracy of the RNN is consistently lower than that of the other models across all three datasets, indicating poor fault diagnosis performance. This is attributed to the simple structure of the RNN, which struggles to handle long time series and suffers from the vanishing gradient problem. In addition, the highest accuracies achieved by LSTM and GRU are 80.22% and 83.52%, respectively. Unlike the STDGCN, LSTM and GRU can learn short-term dependencies in time series data, but their capacity to learn long-term dependencies is limited, making it challenging to capture complex features across multiple time points and impossible to obtain spatial information from the various sensors.
Finally, the highest accuracies achieved by the CNN and LeNet models are 70.77% and 72.75%, respectively. This undesirable diagnostic performance can be attributed to the inability of CNN-based models to integrate sensor data effectively. Traditional convolutional kernels generate new feature maps by aggregating features from all channels regardless of their relevance, yet the 16 sensor-related input variables are not fully correlated. Because CNNs depend on spatial locality, fault information in specific channels can be diluted by irrelevant, redundant information in other channels, degrading the model's performance.
These experimental results are further validated by the recall and F1 score metrics, defined in Equations (33) and (34):
$$\mathrm{Recall} = \frac{\text{True positives}}{\text{True positives} + \text{False negatives}} \tag{33}$$
$$F_1\text{-score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \tag{34}$$
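For a single fault class, these metrics follow directly from the prediction counts, as in the sketch below (NumPy); the per-class interface is our own choice.

```python
import numpy as np

def recall_and_f1(y_true, y_pred, positive):
    """Recall and F1 for one class, per Equations (33) and (34)."""
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, f1
```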
It can be seen from Table 5, Table 6, and Table 7 that the F1 score and recall of the proposed STDGCN are the highest on dataset 1, dataset 2, and dataset 3 compared to the other methods. The improvement across both metrics suggests that the STDGCN effectively leverages the diversity and volume of the multi-sensor data. It can therefore be concluded that the proposed STDGCN performs best in detecting the different types of UAV sensor faults among the compared data-driven methods.

5.2. Benefits of a Designed Sensor Data Association Graph

In this section, an experimental study is conducted to validate the benefits of the sensor data association graph. For comparison, only self-connections are added to the nodes of the association graph, without considering the relationships between nodes, while the rest of the model structure is unchanged (denoted STDGCN-non-A). The experimental results are shown in Table 8. The diagnostic accuracy of the STDGCN outperforms that of the STDGCN-non-A and exhibits better stability, with an average accuracy approximately 4.48% higher. The designed sensor data association graph utilizes prior knowledge, ensuring that the model explicitly captures spatial correlations. These observations demonstrate the beneficial role of the designed association graph in fault diagnosis.

5.3. Benefit of the Difference Layer

The benefit of the difference layer in the STDGCN is studied in this section, with results shown in Table 9. The diagnostic accuracy of the STDGCN on the three datasets is compared with and without the difference layer, with the rest of the network structure unchanged; in the latter case, the data are fed directly into the STGCM layers without passing through the difference layer.
It can be seen from Table 9 that the STDGCN with a difference layer consistently achieves higher accuracy than the STDGCN without one. Averaged over the three datasets, the difference layer improves accuracy by 4.86%.
Through the aforementioned comparative analysis, several reasons can be identified for the high accuracy of the STDGCN. Firstly, the introduction of the mathematical model has enabled the acquisition of accurate spatial dependencies among sensor measurements. Secondly, implementing the difference layer has facilitated the extraction of differential features between adjacent time steps in the time series. Finally, the STGCM allows for the simultaneous extraction of temporal and spatial information from sensor data.

6. Conclusions

This paper applies an emerging GCN-based method to the fault diagnosis of UAV sensors. The data required for the model are obtained through real UAV flight experiments. To better integrate multi-source sensor data, a mathematical model of a quadcopter UAV is introduced to construct the sensor data graph. The proposed STDGCN model explores the spatial dependencies of individual nodes through graph convolutional layers and captures the temporal dependencies of each node through stacked gated convolutional layers. Additionally, a difference layer is incorporated into the STDGCN to calculate higher-order backward difference features, enhancing the node features in the graph. Compared with other state-of-the-art neural network models, the main advantages of the STDGCN are as follows: (1) The association graph is accurately constructed by utilizing prior knowledge from the UAV mathematical model, allowing the STDGCN to capture spatial dependencies among sensor variables effectively. (2) The difference layer enhances the graph node features, improving the model's accuracy. (3) The STGCM enables the extraction of both spatial and temporal dependencies of the sensor data, ensuring the stability and robustness of the model. Despite these advantages, not all sensor data on the UAV are integrated into this study: only the 16 sensor variables involved in the UAV's mathematical model are processed, a number that can be increased in the future. Furthermore, the performance of the proposed method depends on the availability and robustness of the model during real flight; future research can therefore address the real-time implementation and reliability assurance of the proposed model during UAV flight, as well as reducing the computational cost incurred by associating data from different sensors for efficient real-time application. Additionally, the quadcopter UAV has limited battery capacity, resulting in limited flight time and insufficient flight data; more data can be obtained in the future by improving the experimental design. Considering the information shared among different sensors in the proposed STDGCN, future research can also address data privacy, for example through a federated learning framework. Finally, the applications of the proposed STDGCN can be expanded to other domains with multi-source sensor data, such as commercial aircraft, satellites, and submarines.

Author Contributions

H.L.: Conceptualization, Methodology, Writing—original draft, Software. C.C.: Formal analysis, Visualization, Investigation. T.W.: Validation, Formal analysis. S.S.: Conceptualization, Methodology, Supervision, Funding acquisition, Writing—reviewing and editing. Y.L.: Formal analysis, Validation, Funding acquisition. Z.D.: Validation, Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant numbers: 52250410345, 12172290), and the APC was funded by Chengdu Aircraft Design & Research Institute.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mohsan, S.A.H.; Othman, N.Q.H.; Li, Y.; Alsharif, M.H.; Khan, M.A. Unmanned aerial vehicles (UAVs): Practical aspects, applications, open challenges, security issues, and future trends. Intell. Serv. Robot. 2023, 16, 109–137. [Google Scholar]
  2. Singhal, G.; Bansod, B.; Mathew, L. Unmanned Aerial Vehicle Classification, Applications, and Challenges. Preprints 2018, 2018110601. [Google Scholar] [CrossRef]
  3. Cruz, B.S.; de Oliveira Dias, M. Crashed Boeing 737-Max: Fatalities or malpractice. GSJ 2020, 8, 2615–2624. [Google Scholar]
  4. Daponte, P.; De Vito, L.; Mazzilli, G.; Picariello, F.; Rapuano, S.; Riccio, M. Metrology for drone and drone for metrology: Measurement systems on small civilian drones. In Proceedings of the 2015 IEEE Metrology for Aerospace (MetroAeroSpace), Benevento, Italy, 4–5 June 2015; pp. 306–311. [Google Scholar]
  5. Li, W.; Li, H.; Gu, S.; Chen, T. Process fault diagnosis with model- and knowledge-based approaches: Advances and opportunities. Control Eng. Pract. 2020, 105, 104637. [Google Scholar] [CrossRef]
  6. Miao, Z.; Xia, Y.; Zhou, F.; Yuan, X. Fault diagnosis of wheeled robot based on prior knowledge and spatial-temporal difference graph convolutional network. IEEE Trans. Ind. Inform. 2022, 19, 7055–7065. [Google Scholar] [CrossRef]
  7. Khamseh, H.B.; Ghorbani, S.; Janabi-Sharifi, F. Unscented Kalman filter state estimation for manipulating unmanned aerial vehicles. Aerosp. Sci. Technol. 2019, 92, 446–463. [Google Scholar] [CrossRef]
  8. Chen, G.; Li, S.; He, Q.; Zhou, P.; Zhang, Q.; Yang, G.; Lv, D. Fault diagnosis of drone motors driven by current signal data with few samples. Meas. Sci. Technol. 2022, 35, 086202. [Google Scholar] [CrossRef]
  9. Liu, X.; Sun, W.; Li, H.; Hussain, Z.; Liu, A. The method of rolling bearing fault diagnosis based on multi-domain supervised learning of convolution neural network. Energies 2022, 15, 4614. [Google Scholar] [CrossRef]
  10. Mao, G.; Zhang, Z.; Qiao, B.; Li, Y. Fusion domain-adaptation CNN driven by images and vibration signals for fault diagnosis of gearbox cross-working conditions. Entropy 2022, 24, 119. [Google Scholar] [CrossRef]
  11. Lou, G.; Liu, Y.; Zhang, T.; Zheng, X. Stfl: A temporal-spatial federated learning framework for graph neural networks. arXiv 2021, arXiv:2111.06750. [Google Scholar]
  12. Tama, B.A.; Vania, M.; Lee, S.; Lim, S. Recent advances in the application of deep learning for fault diagnosis of rotating machinery using vibration signals. Artif. Intell. Rev. 2023, 56, 4667–4709. [Google Scholar] [CrossRef]
  13. Sadid, H.; Antoniou, C. Dynamic Spatio-temporal Graph Neural Network for Surrounding-aware Trajectory Prediction of Autonomous Vehicles. IEEE Trans. Intell. Veh. 2024. [Google Scholar] [CrossRef]
  14. He, K.; Yu, D.; Wang, D.; Chai, M.; Lei, S.; Zhou, C. Graph Attention Network-Based Fault Detection for UAVs With Multivariant Time Series Flight Data. IEEE Trans. Instrum. Meas. 2022, 71, 1–13. [Google Scholar] [CrossRef]
  15. Yu, W.; Wu, M.; Lu, C. Meticulous process monitoring with multiscale convolutional feature extraction. J. Process Control. 2021, 106, 20–28. [Google Scholar] [CrossRef]
  16. Yu, W.; Zhao, C.; Huang, B. MoniNet with concurrent analytics of temporal and spatial information for fault detection in industrial processes. IEEE Trans. Cybern. 2021, 52, 8340–8351. [Google Scholar] [CrossRef]
  17. Yu, W.; Zhao, C. Broad convolutional neural network based industrial process fault diagnosis with incremental learning capability. IEEE Trans. Ind. Electron. 2019, 67, 5081–5091. [Google Scholar] [CrossRef]
  18. Bhatti, U.A.; Tang, H.; Wu, G.; Marjan, S.; Hussain, A. Deep learning with graph convolutional networks: An overview and latest applications in computational intelligence. Int. J. Intell. Syst. 2023, 1, 8342104. [Google Scholar] [CrossRef]
  19. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  20. Xie, J.; Miao, Q.; Liu, R.; Xin, W.; Tang, L.; Zhong, S.; Gao, X. Attention adjacency matrix-based graph convolutional networks for skeleton-based action recognition. Neurocomputing 2021, 440, 230–239. [Google Scholar] [CrossRef]
  21. Zhang, Y.D.; Satapathy, S.C.; Guttery, D.S.; Górriz, J.M.; Wang, S.H. Improved breast cancer classification through combining graph convolutional network and convolutional neural network. Inf. Process. Manag. 2021, 58, 102439. [Google Scholar] [CrossRef]
  22. Xiao, A.; Yan, W.; Zhang, X.; Liu, Y.; Zhang, H.; Liu, Q. Multi-domain fusion for cargo UAV fault diagnosis knowledge graph construction. Auton. Intell. Syst. 2024, 4, 10. [Google Scholar] [CrossRef]
  23. Huang, Z.; Rodríguez-Piñeiro, J.; Domínguez-Bolaño, C.; Yin, X. Empirical dynamic modeling for low-altitude UAV propagation channels. IEEE Trans. Wirel. Commun. 2021, 20, 5171–5185. [Google Scholar] [CrossRef]
  24. Arul, U.; Arun, V.; Rao, T.P.; Baskaran, R.; Kirubakaran, S.; Hussan, M.T. Effective Anomaly Identification in Surveillance Videos Based on Adaptive Recurrent Neural Network. J. Electr. Eng. Technol. 2024, 19, 1793–1805. [Google Scholar] [CrossRef]
  25. Haruna, A.; Yang, M.; Jiang, P.; Ren, H. Collaborative task of entity and relation recognition for developing a knowledge graph to support knowledge reasoning for design for additive manufacturing. Adv. Eng. Inform. 2024, 60, 102364. [Google Scholar] [CrossRef]
  26. Haruna, A.; Yang, M.; Jiang, P. Design for additive manufacturing: A three-layered conceptual framework for knowledge-based design. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 2022, 237, 1405–1421. [Google Scholar] [CrossRef]
  27. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  28. Banerjee, P.; Okolo, W.A.; Moore, A.J. In-flight detection of vibration anomalies in unmanned aerial vehicles. J. Nondestruct. Eval. Diagn. Progn. Eng. Syst. 2020, 3, 041105. [Google Scholar] [CrossRef]
  29. Guan, H.; Wong, K.C. Spring-Damped Underactuated Swashplateless Rotor on a Bicopter Unmanned Aerial Vehicle. Machines 2024, 12, 296. [Google Scholar] [CrossRef]
  30. Wolfram, D.; Vogel, F.; Stauder, D. Condition monitoring for flight performance estimation of small multirotor unmanned aerial vehicles. In Proceedings of the 2018 IEEE Aerospace Conference, Big Sky, MT, USA, 3–10 March 2018; pp. 1–17. [Google Scholar]
  31. Reddy, S.R.; Varma, G.S.; Davuluri, R.L. Deep neural network (DNN) mechanism for identification of diseased and healthy plant leaf images using computer vision. Ann. Data Sci. 2024, 11, 243–272. [Google Scholar] [CrossRef]
  32. Niu, Z.; Zhong, G.; Yue, G.; Wang, L.N.; Yu, H.; Ling, X.; Dong, J. Recurrent attention unit: A new gated recurrent unit for long-term memory of important parts in sequential data. Neurocomputing 2023, 517, 1–9. [Google Scholar] [CrossRef]
  33. Bai, R.; Noman, K.; Yang, Y.; Li, Y.; Guo, W. Towards trustworthy remaining useful life prediction through multi-source information fusion and a novel LSTM-DAU model. Reliab. Eng. Syst. Saf. 2024, 245, 110047. [Google Scholar] [CrossRef]
  34. Jia, S.; Li, Y.; Mao, G.; Noman, K. Multi-representation symbolic convolutional neural network: A novel multisource cross-domain fault diagnosis method for rotating system. Struct. Health Monit. 2023, 22, 3940–3955. [Google Scholar] [CrossRef]
  35. Zhang, X.; Wan, S.; He, Y.; Wang, X.; Dou, L. Teager energy spectral kurtosis of wavelet packet transform and its application in locating the sound source of fault bearing of belt conveyor. Measurement 2021, 173, 108367. [Google Scholar] [CrossRef]
Figure 1. Schematic of the original GCN algorithm.
Figure 2. The flowchart for fault diagnosis of UAVs.
Figure 3. Schematic diagram of forces acting on a quadrotor UAV.
Figure 4. The UAV data association graph.
Figure 5. Adjacency matrix and degree matrix.
Figure 6. The framework of the STDGCN.
Figure 7. Spatial-temporal graph convolutional module.
Figure 8. Implementation process of the STGCM.
Figure 9. Quadrotor UAV.
Figure 10. (a) Damage fault; (b) Bias fault; (c) Drift fault; (d) Lock fault; (e) Scale fault.
Figure 11. Illustration of the STDGCN architecture.
Figure 12. Training results of the STDGCN. (a) Training set loss. (b) Training set accuracy.
Figure 13. Test results of the STDGCN. (a) Test set loss. (b) Test set accuracy.
Figure 14. Performance comparison of the STDGCN and the other models.
Table 1. Symbol definitions.

Variable | Meaning
$F_1, F_2, F_3, F_4$ | Lift from each rotor
$L$ | Total lift of the UAV
$S, C$ | Sine function and cosine function
$U_1, U_2, U_3, U_4$ | Input variables of the UAV system
$I_x, I_y, I_z$ | Moments of inertia for body rotation around the three axes
$p, q, r$ | Angular velocity of the UAV relative to the body coordinate system
Table 2. Key variables for the experiments.

Variable | Description | Variable | Description
$c_1$ | No. 1 output PWM value | $\ddot{\theta}$ | Pitch angular acceleration
$c_2$ | No. 2 output PWM value | $\ddot{\psi}$ | Yaw angular acceleration
$c_3$ | No. 3 output PWM value | $\dot{\theta}$ | Pitch rate
$c_4$ | No. 4 output PWM value | $\dot{\psi}$ | Yaw rate
$\phi$ | Roll angle | $a_x$ | Acceleration X-component
$\theta$ | Pitch angle | $a_y$ | Acceleration Y-component
$\psi$ | Yaw angle | $a_z$ | Acceleration Z-component
$\dot{\phi}$ | Roll rate | $\ddot{\phi}$ | Roll angular acceleration
Table 3. Structure of the STDGCN model.

Stage | Layer | Output Shape
Difference Layer | Input | [1, 100, 16]
Difference Layer | Difference layer | [1, 5, 100, 16]
STGCM1 | Gated convolutional layer_1 | [1, 32, 100, 16]
STGCM1 | Graph convolutional layer_1 | [1, 32, 100, 16]
STGCM1 | Gated convolutional layer_2 | [1, 32, 100, 16]
STGCM1 | Batch normalization layer_1 | [1, 32, 100, 16]
STGCM2 | Gated convolutional layer_3 | [1, 64, 100, 16]
STGCM2 | Graph convolutional layer_2 | [1, 64, 100, 16]
STGCM2 | Gated convolutional layer_4 | [1, 64, 100, 16]
STGCM2 | Batch normalization layer_2 | [1, 64, 100, 16]
Classification Layer | Gated convolutional layer_5 | [1, 16, 100, 16]
Classification Layer | Global average pooling layer | [1, 16, 1, 16]
Classification Layer | Fully connected layer_1 | [1, 256]
Classification Layer | Fully connected layer_2 | [1, 6]
Table 4. Values of hyperparameters for STDGCN implementation.

Hyperparameter | Value
Order of backward difference | 1
Axis for backward difference | 2
Maximum hop number | 1
Number of input channels | 1
Number of channels for backward differences | 5
Number of classes | 6
Size of temporal convolution kernel | 9
Dropout probability | 0
Stride | 1
Number of training epochs | 50
Learning rate | 0.001
Batch size for training | 32
Batch size for validation | 1
Table 5. F1 score and recall of different algorithms for dataset 1.

Method | F1 Score | Recall
STDGCN | 0.9921 | 0.9946
DNN | 0.9534 | 0.9513
RNN | 0.9249 | 0.9179
GRU | 0.9889 | 0.9818
LSTM | 0.9867 | 0.9839
CNN | 0.9871 | 0.9825
LeNet | 0.9788 | 0.9831
Table 6. F1 score and recall of different algorithms for dataset 2.

Method | F1 Score | Recall
STDGCN | 0.9933 | 0.9929
DNN | 0.9536 | 0.9509
RNN | 0.9246 | 0.9170
GRU | 0.9913 | 0.9901
LSTM | 0.9871 | 0.9842
CNN | 0.9656 | 0.9705
LeNet | 0.9781 | 0.9652
Table 7. F1 score and recall of different algorithms for dataset 3.

Method | F1 Score | Recall
STDGCN | 0.9961 | 0.9952
DNN | 0.9940 | 0.9531
RNN | 0.9251 | 0.9187
GRU | 0.9891 | 0.9918
LSTM | 0.9869 | 0.9831
CNN | 0.9789 | 0.9766
LeNet | 0.9815 | 0.9855
Table 8. Comparative results of sensor data association and non-association by the STGCM.

Dataset | STDGCN-non-A (%) | STDGCN (%)
1 | 88.05 | 94.93
2 | 93.85 | 98.71
3 | 91.27 | 92.97
Table 9. Comparative results of the STDGCN with and without a difference layer.

Dataset | STDGCN without Difference Layer (%) | STDGCN with Difference Layer (%)
1 | 90.17 | 94.93
2 | 92.31 | 98.71
3 | 89.54 | 92.97
