Article

Application of Graph Convolutional Neural Networks Combined with Single-Model Decision-Making Fusion Neural Networks in Structural Damage Recognition

Xiaofei Li, Langxing Xu, Hainan Guo and Lu Yang
1 College of Transportation Engineering, Dalian Maritime University, Dalian 116026, China
2 Beijing Key Laboratory of Earthquake Engineering and Structure Retrofit, Beijing University of Technology, Beijing 100124, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(23), 9327; https://doi.org/10.3390/s23239327
Submission received: 29 September 2023 / Revised: 16 November 2023 / Accepted: 16 November 2023 / Published: 22 November 2023
(This article belongs to the Section Fault Diagnosis & Sensors)

Abstract

In cases with a large number of sensors and a complex spatial distribution, correctly learning the spatial characteristics of the sensors is vital for structural damage identification. Unlike other methods, graph convolutional neural networks (GCNs) can learn these spatial characteristics, which makes them well suited to such problems in structural damage identification. However, under the influence of environmental interference, sensor instability, and other factors, part of the vibration signal can easily lose its fundamental characteristics, creating the possibility of misjudging structural damage. Therefore, on the basis of building a high-performance graph convolutional deep learning model, this paper integrates data fusion technology at the model decision-making layer and proposes a single-model decision-making fusion neural network (S_DFNN) model. Experiments on a frame model and a self-designed cable-stayed bridge model show that the method achieves better damage recognition performance for different structures and improves the accuracy over the corresponding single model. This demonstrates that the proposed structural damage diagnosis method, which combines data fusion technology with deep learning, has a strong generalization ability and great potential in structural damage diagnosis.

1. Introduction

In the field of structural damage diagnosis, the spatial structure of the sensor network is crucial for understanding structural behavior and identifying damage. In large civil infrastructure, the large number of sensors and their complex spatial distribution make damage recognition difficult. Deep learning techniques have excellent capabilities in structural damage recognition: using deep network structures and large-scale training datasets, they can automatically learn and extract multilevel, high-order features from the data, thus enabling accurate recognition and localization of structural damage [1,2,3,4,5,6]. However, methods such as 1D-CNNs and RNNs have limitations in capturing the spatial structural relationships between sensors. Graph convolutional neural networks are deep learning models for processing graph-structured data; they aggregate the spatial information of the sensors and their interconnectivity to learn more informative features from the data. By utilizing the spatial information encoded in the graph structure of the sensor network, GCNs can effectively enhance the vibration signals, facilitate the mapping between the vibration signals and the type of structural damage, and learn the spatial features of the sensors, offering automatic feature extraction and high accuracy for such problems.
Although GCN models have been widely applied and studied in many other fields, such as social networks, computer vision, and natural language processing, there are still relatively few studies that extract features of vibration signals with GCN models and use them for structural damage recognition. The GCN model can help resolve the unclear relationships within sensor networks in Structural Health Monitoring (SHM); information transfer in the sensor network can be modeled more faithfully, thus improving the accuracy and efficiency of structural damage identification. Zhicheng Liu [7] proposed a health monitoring model for railway structures based on graph neural networks, which extracts the temporal information of individual sensors through one-dimensional convolution, processes the spatial structural information between the sensor network and the damage nodes, predicts the damage location, and uses recurrent neural networks for multi-damage node identification. The superiority of the model was verified on a simulation dataset based on a plate structure, but the dataset is small, and a larger dataset is needed in future verification analyses to further develop the capability of the graph neural network. Dang et al. [8] proposed a g-SDDL model that utilizes graph neural networks to capture the spatial correlation of sensor positions and extract vibration signal patterns through convolutional operations; the feasibility of this method was verified through three case studies, where it consistently achieved a damage detection accuracy of more than 90% in complex environments. This work was the first to apply a GCN to feature extraction from structural vibration signals and structural damage type identification, but the application of GCNs in the field of structural damage identification is still in its infancy and requires further research and exploration.
In the identification stage, new response data of the structure are collected, and the trained model is used to predict the type of damage. However, even very good network models cannot completely and accurately identify the damage occurring in a structure; all of them have certain defects, and some vibration signals are prone to losing their basic characteristics due to environmental disturbances, sensor instability, and other factors, which creates the possibility of misjudging the damage to the structure. Data fusion techniques can combine data from multiple sources and relevant information from related databases to achieve higher accuracy and more specific inferences than can be achieved by using a single source of information alone. Generally, the data fusion process is categorized into three levels, low, medium, and high, depending on the processing stage at which the fusion occurs. High-level fusion, also known as decision-level fusion, combines the results of multiple decisions to form a final decision. In the field of SHM, many scholars have applied data fusion techniques to structural damage identification methods and achieved notable results. Grande et al. [9] proposed a damage identification method for linear systems based on data fusion of modal strain energy from different information sources. Zhou et al. [10] used DS evidence theory to fuse the outputs of a posteriori probabilistic support vector machine for decision-level information fusion in building structural damage detection. Guo et al. [11] utilized multi-scale space theory and data fusion methods for multiple damage detection in beam and slab structures. Neural networks have strong self-learning, self-organization, and self-adaptation capabilities and can simulate complex nonlinear mapping relationships, which precisely meets the processing requirements of the fusion system and enables decision-making fusion for structural damage identification.
In this paper, the decision-level data fusion technique is combined with the advantages of the GCN model in extracting vibration signal features by establishing different sensor network relationships, and a single-model decision-making fusion neural network (S_DFNN) model is established to enhance the robustness of structural damage diagnosis methods. The main contributions of this paper are as follows:
  • A graph neural network is introduced to show how the positional relationships of the sensors are fed into the network structure, and it is verified that the GCN model has an advantage over the 1D-CNN model in terms of recognition accuracy due to its ability to mine the spatial structure of the sensors;
  • Based on building a high-performance deep learning model, this paper considers incorporating data fusion techniques at the model decision layer to further improve the accuracy and robustness of the model;
  • Based on two structures, a frame structure model and a self-designed cable-stayed bridge test model, experimental validation is carried out, which demonstrates that the method has a certain generalization ability, can mine the spatial characteristics of the structure and correctly identify structural damage, and achieves a clear performance improvement relative to a single GCN.
The rest of the paper is organized as follows: Section 2 describes the theoretical foundations, including graph neural networks and data fusion techniques, while detailing the specific architecture of the neural network model proposed in this paper; Section 3 describes the two architectures used for the experiments along with specific discussion of the results; and Section 4 gives the conclusions.
The organizational framework of the proposed approach is shown in Figure 1, which outlines the research methodology of this paper. We obtain damage signals from the frame model and the cable-stayed bridge model, preprocess and enhance the signals, and then input them into the neural network for training. Depending on the damage problem, we choose the mean method or the voting method for data fusion, as described in detail in the following subsections.

2. Materials and Methods

This article feeds the spatial characteristics between sensors into a GCN. Since no physical edges connect adjacent sensors, a sensor mapping network is first established based on the spatial characteristics of the sensors in the model. The adjacency matrix and degree matrix of the sensor network are then established to characterize the spatial structural relationship between sensor nodes, and the adjacency matrix and the sensor array signal are input into the network together. Finally, the extracted vibration signal features are input into a multi-layer perceptron (MLP) decision maker for damage diagnosis decision-making. The model architecture consists of a graph convolutional neural network module and a multi-layer perceptron module. The former is mainly used to extract feature data that better represent damage from structural acceleration data, while the latter establishes a mapping relationship between feature data and damage types, achieving the structural damage diagnosis function. Figure 2 shows the basic architecture of the proposed deep learning model. The theory and function of each layer are described in detail in the following sections.

2.1. GCN Module

The graph convolution feature extraction module is constructed by stacking multiple graph convolution layers, as shown in Figure 2.
The GCN extracts the feature representation of a node by learning the relationships between the nodes to represent and classify the entire graph. The core idea of GCNs is to aggregate the features of the node’s neighboring nodes, and then linearly transform and nonlinearly activate the aggregated features to obtain a new node representation.
Given an undirected graph G = (V, E), where V contains all N nodes (vi ∈ V) and E denotes the edges between nodes ((vi, vj) ∈ E), the elements of the adjacency matrix A ∈ R^{N×N} can be either binary or weighted real numbers (cf. Equation (1)). From the adjacency matrix, the degree matrix D can be obtained; it is a diagonal matrix whose diagonal elements are computed by Equation (2). As shown in Figure 3, the example network has 6 nodes in total. The adjacency matrix A constructed from this network structure and the degree matrix D calculated from Equation (2) are:
$$ A = \begin{pmatrix} a_{11} & \cdots & a_{1N} \\ \vdots & \ddots & \vdots \\ a_{N1} & \cdots & a_{NN} \end{pmatrix} \quad (1) $$
$$ D_{ii} = \sum_{j=1}^{N} A_{ij} \quad (2) $$
$$ A = \begin{bmatrix} 0 & 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{bmatrix} $$
$$ D = \begin{bmatrix} 2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} $$
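The following minimal NumPy sketch (not from the paper's code) reproduces this 6-node example: it builds the adjacency matrix A and derives the degree matrix D from Equation (2).

```python
import numpy as np

# Adjacency matrix of the 6-node example graph (binary, undirected).
A = np.array([
    [0, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 0, 1, 0, 0],
])

# Degree matrix: D_ii = sum_j A_ij, Equation (2).
D = np.diag(A.sum(axis=1))

print(np.diag(D))  # -> [2 3 2 3 3 1], matching the degree matrix above
```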
As described in Ref. [12], in the linear formulation of the GCN, the spectral convolution kernel is replaced by Chebyshev polynomials, only first-order Chebyshev terms are retained, and λmax ≈ 2 is assumed. In this setting, the Laplacian matrix of the graph structure can be expressed as:
$$ L = D - A \quad (3) $$
Then, the symmetric normalized Laplace operator is expressed as:
$$ L_{s} = I_N - D^{-\frac{1}{2}} A D^{-\frac{1}{2}} \quad (4) $$
If this operator is used in a deep neural network model, repeated application of the operator leads to numerical instability (divergence) and gradient explosion or vanishing. To solve this problem, a re-normalization trick is introduced:
$$ I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} \rightarrow \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}, \qquad \tilde{A} = A + I_N, \quad \tilde{D}_{ii} = \sum_{j} \tilde{A}_{ij} \quad (5) $$
Combining the above treatment of graph signals, the Laplacian matrix, and the Fourier transform, the layer-wise propagation rule of the GCN is obtained:
$$ H^{(l+1)} = \sigma\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)} \right) \quad (6) $$
where H^(l) denotes the node feature matrix at the l-th layer, σ is the nonlinear activation function, W^(l) denotes the weight matrix of the current hidden layer, Ã = A + I_N is the adjacency matrix with added self-connections, and D̃ is the degree matrix of Ã.
By reviewing the relevant literature and conducting corresponding experimental tests, it was found through comparison that a three-layer GCN feature extraction module is more suitable for the structural damage diagnosis research in this article. Therefore, this article adopts a three-layer GCN model and adds BN and Dropout layers to improve the model's generalization ability and avoid overfitting. The specific selection of GCN model parameters is described in the corresponding subsection for each model.
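As an illustration of how such a module could be assembled, the following PyTorch sketch implements the renormalized propagation of Equations (5) and (6) and stacks three graph convolution layers with BN and Dropout. It is a simplified reconstruction rather than the authors' released code; the class names, the window length used as the node feature dimension, and the hidden size of 512 (taken from Table 1) are illustrative assumptions.

```python
import torch
import torch.nn as nn


def normalized_adjacency(A: torch.Tensor) -> torch.Tensor:
    """Renormalization trick of Equation (5): D~^{-1/2} A~ D~^{-1/2} with A~ = A + I."""
    A_tilde = A + torch.eye(A.size(0))
    d_inv_sqrt = A_tilde.sum(dim=1).pow(-0.5)
    return torch.diag(d_inv_sqrt) @ A_tilde @ torch.diag(d_inv_sqrt)


class GCNLayer(nn.Module):
    """One propagation step H' = ReLU(A_hat H W), Equation (6)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, A_hat):
        # x: (batch, n_nodes, in_dim); A_hat: (n_nodes, n_nodes)
        return torch.relu(A_hat @ self.linear(x))


class GCNFeatureExtractor(nn.Module):
    """Three stacked GCN layers with BatchNorm and Dropout, then flattening."""
    def __init__(self, n_nodes=30, in_dim=1024, hidden=512, dropout=0.3):
        super().__init__()
        self.layers = nn.ModuleList([
            GCNLayer(in_dim, hidden), GCNLayer(hidden, hidden), GCNLayer(hidden, hidden)])
        self.norms = nn.ModuleList([nn.BatchNorm1d(n_nodes) for _ in range(3)])
        self.drop = nn.Dropout(dropout)

    def forward(self, x, A_hat):
        for gc, bn in zip(self.layers, self.norms):
            x = self.drop(bn(gc(x, A_hat)))
        return x.flatten(start_dim=1)  # (batch, n_nodes * hidden) for the MLP head


# Example: 30 sensors, each contributing a 1024-sample window as node features.
A = torch.zeros(30, 30)          # placeholder; fill with the sensor mapping network
A_hat = normalized_adjacency(A)
features = GCNFeatureExtractor()(torch.randn(8, 30, 1024), A_hat)  # -> (8, 15360)
```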

2.2. MLP Module

The MLP decision module is mainly used to establish the mapping relationship between feature data and damage type, realizing the damage decision function. In this paper, two fully connected layers form this module. The number of neurons in the first layer (FC layer 1) is set to 256, and this layer adopts the ReLU activation function. The second layer (FC layer 2) is the output layer, and its number of neurons equals the dimension of the damage labels. When classifying the type of damage, the output layer adopts the Softmax function, and the corresponding loss function is the categorical cross-entropy function, whose mathematical expression is shown in Equation (7). The sample labels are encoded using one-hot coding; for example, if the dataset contains a total of n types and a sample belongs to type 1, the label encoding contains n elements, with the first element valued at 1 and the rest valued at 0, i.e., the label encoding is (1, 0, 0, 0, …, 0). In the damage degree regression analysis, a linear function is used in the output layer, and the corresponding loss function is the Mean Square Error (MSE) function, whose formula is shown in Equation (8). In this case, the label encoding needs to consider both the structural unit location and the degree of change of the unit. For example, when a structure has n positions to be detected and the degree of change of the 1st position is 0.5 while the remaining positions do not change, the sample label can be encoded as (0.5, 0, 0, 0, 0, …, 0); this label form contains both the structural damage location and the degree of damage. The module learns the feature representations of the different types of damage, which are fused with the features extracted by the deep learning feature extraction module, and finally the damage location is output.
$$ loss = -\sum_{i=1}^{n} y_i \log \bar{y}_i \quad (7) $$
$$ MSE = \frac{1}{n} \sum_{i=1}^{n} \left( \bar{y}_i - y_i \right)^2 \quad (8) $$
where y denotes the true label vector, y ¯ denotes the model predicted label vector, and n is the dimension of the label vector.
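The following sketch shows one way such a decision head could be wired up in PyTorch, matching the layer sizes and loss choices described above. The function name and the feature dimension are illustrative assumptions, and the softmax of Equation (7) is folded into PyTorch's cross-entropy loss rather than applied explicitly.

```python
import torch
import torch.nn as nn


def build_decision_module(feature_dim: int, n_labels: int, task: str = "classification"):
    """FC layer 1: 256 units + ReLU; FC layer 2: n_labels units (output layer)."""
    head = nn.Sequential(nn.Linear(feature_dim, 256), nn.ReLU(), nn.Linear(256, n_labels))
    if task == "classification":
        # Softmax + categorical cross-entropy (Equation (7)); in PyTorch the softmax
        # is folded into nn.CrossEntropyLoss, so the head itself stays linear.
        loss_fn = nn.CrossEntropyLoss()
    else:
        # Linear output with mean-square-error loss (Equation (8)) for damage-degree
        # regression labels such as (0.5, 0, ..., 0).
        loss_fn = nn.MSELoss()
    return head, loss_fn


# Example: classify 31 frame-structure conditions from flattened GCN features.
head, loss_fn = build_decision_module(feature_dim=30 * 512, n_labels=31)
logits = head(torch.randn(8, 30 * 512))
loss = loss_fn(logits, torch.randint(0, 31, (8,)))
```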

2.3. Decision Level Data Fusion

In the identification stage, new response data of the structure are collected, and the trained model is used to predict the type of damage. However, even excellent network models cannot completely and accurately identify the damage that occurs in a structure. All of them have certain defects, and some vibration signals are prone to alterations of their basic characteristics under the influence of environmental disturbances, sensor instability, and other factors, which results in the possibility of misjudging the damage to the structure [13,14,15,16,17,18]. Therefore, on the basis of building a high-performance deep learning model, this paper integrates data fusion technology at the model decision-making layer and proposes a single-model decision-making fusion neural network model.
The proposed single-model decision-level data fusion uses a given damage identification method to process multiple sets of sensor data from the same short time period. The resulting damage identification results are fused to distill comprehensive damage identification information and ultimately make the highest-level decision, which improves the prediction accuracy and robustness of a single model. To avoid problems such as local anomalies, missing data, and noise in the monitoring data, we collect acceleration signals over relatively long time periods during the detection phase and use data slicing to generate multiple simultaneous detection samples. These samples are used as inputs to the deep learning model to obtain multiple decision results, which are combined to derive the structural state for that time period.
In the process of decision-level fusion of deep learning models, a key aspect is the choice of the combination strategy. For the multi-label classification problem, the voting method is the most commonly used combination method; it mainly includes the absolute majority voting method, the relative majority voting method, the weighted voting method, and the soft voting method. Suppose there are T classifier models (h1, h2, …, hT) and l category labels (c1, c2, …, cl); the voting rule is expressed as:
$$ H(x) = C_{\mathrm{argmax}}\left( \sum_{i=1}^{T} h_i^{1}(x), \ldots, \sum_{i=1}^{T} h_i^{l}(x) \right) \quad (9) $$
where H(x) is the final fused decision result and Cargmax denotes the function that returns the category with the maximum total vote.
When the degree of structural damage is considered, the damage detection problem becomes a regression problem. In this case, the decision-layer fusion uses the mean combination method: assuming there are T regression prediction models, each model outputs a decision (h1, h2, …, hT) over the labels of the degree of change of the n positions (d1, d2, …, dn). The mathematical expression is:
$$ H(x) = \frac{1}{T}\left( \sum_{i=1}^{T} h_i^{1}(x), \ldots, \sum_{i=1}^{T} h_i^{n}(x) \right) \quad (10) $$
In this paper, the data fusion technique is applied as shown in Figure 4. Data within a fixed time period of T s are continuously collected by a dynamic signal acquisition system for identifying the type of structural damage, with the sampling frequency set to f Hz. A single T s data record is augmented into n data samples by a sliding-window slicing method. These samples are used as inputs for n runs of the model recognizer, which outputs n damage label codes; these recognition results are treated by default as assessments of the structural state within the same time period, assuming that the structural state does not change within this short period. The next step is to use the data fusion calculation method to synthesize the information of the n damage label codes and give the final structural damage diagnosis result. We consider two data fusion calculation methods: the mean value method and the voting method. When only the location of structural damage is identified, the task is a multi-class classification problem, and the voting method can compute the combined result efficiently and stably. When both the location and the degree of damage must be identified, the task becomes a regression problem, and the mean value method is more suitable for fusing the output-layer results.
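A minimal sketch of this fusion step is given below, under the assumption that the n slice-level outputs of the single model are already available as arrays; the function names are ours, and relative-majority voting stands in for whichever voting variant is selected.

```python
import numpy as np


def fuse_classification(probabilities: np.ndarray) -> int:
    """Relative-majority voting over n slice-level predictions (cf. Equation (9)).
    probabilities: (n_slices, n_classes) softmax outputs of the single model."""
    votes = probabilities.argmax(axis=1)       # per-slice class decision
    return int(np.bincount(votes).argmax())    # class with the largest vote count


def fuse_regression(predictions: np.ndarray) -> np.ndarray:
    """Mean combination over n slice-level regression outputs (Equation (10)).
    predictions: (n_slices, n_positions) predicted degrees of change."""
    return predictions.mean(axis=0)


# Example: 12 slices of one T-second record, 31 possible damage classes,
# or 82 girder-unit change degrees in the regression case.
fused_class = fuse_classification(np.random.rand(12, 31))
fused_degree = fuse_regression(np.random.rand(12, 82))
```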

3. Introduction to the Test

To investigate the effectiveness of the proposed deep learning-based structural damage detection method in recognizing different structural systems, a frame structure and a cable-stayed bridge structure are used; different structural damage conditions are set up for the cable-stayed bridge structure, and vibration signals are collected to build the datasets. The following subsections describe the two test models and test schemes in detail.

3.1. Verification of Nodal Damage Identification in Steel Frame Structures

3.1.1. Qatar University Grandstand Simulator Basics

In order to verify the effectiveness and generality of the proposed method for damage detection in different structures, a large frame test structure designed and constructed at the Structures Laboratory of Qatar University, consisting mainly of 8 main beams and 25 bracing units connected by 42 bolted nodes, was chosen as the object of study. Frame structure damage is simulated by releasing the node bolts. The experiment contains 31 test conditions: one healthy condition and 30 damage conditions, corresponding to 30 instances of single-node damage. The specific damage locations are shown in Figure 5. Two tests were conducted for each test condition, with random vibration from the shaker as the excitation. To collect the structural acceleration data, 30 accelerometers were installed at the 30 nodes of the frame structure where the damage locations were set (numbered positions in Figure 5). The two tests were each recorded for 256 s at a sampling frequency of 1024 Hz. Each sensor therefore recorded 262,144 samples per test, and two datasets, A and B, were established, each containing 8,126,464 samples.
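For reference, the sample counts quoted above follow directly from the recording parameters, with the per-dataset total consistent with counting 262,144 time samples for each of the 31 test conditions:

256 s × 1024 Hz = 262,144 samples per sensor per test
262,144 samples × 31 test conditions = 8,126,464 samples per dataset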

3.1.2. Model Parameters and Recognition Effectiveness

For the GCN model, the spatial features between different sensors can be mined and used to process the acceleration signal matrix samples collected by the 30-sensor array, so the data from all measurement points No. 1 to No. 30 are used as the detection samples.
Using the GCN model for frame structure node damage detection, the adjacency matrix of the sensor network is established first. By default, each sensor is considered associated with the sensor closest to it; since a total of 30 acceleration sensors are arranged on the structure, a 30 × 30 adjacency matrix is established and the degree matrix is calculated through Equation (2). Finally, the renormalized graph operator for the sensor network is computed through Equation (5) and used in GCN model training. A three-layer stacked GCN structure is then adopted.
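The construction of this sensor mapping network could look like the following sketch, where the sensor coordinates are placeholders for the layout in Figure 5 and a nearest-neighbor rule stands in for the paper's "closest distance" association.

```python
import numpy as np

rng = np.random.default_rng(0)
coords = rng.random((30, 2))                    # placeholder (x, y) sensor positions

# Pairwise Euclidean distances between sensors; exclude self-distances.
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
np.fill_diagonal(dist, np.inf)
nearest = dist.argmin(axis=1)                   # index of each sensor's closest sensor

A = np.zeros((30, 30))
A[np.arange(30), nearest] = 1
A = np.maximum(A, A.T)                          # undirected graph: symmetrize the links

D = np.diag(A.sum(axis=1))                      # degree matrix, Equation (2)
A_tilde = A + np.eye(30)
D_tilde_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
A_hat = D_tilde_inv_sqrt @ A_tilde @ D_tilde_inv_sqrt   # operator fed to the GCN, Equation (5)
```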
The GCN model requires fewer hyperparameters, and the most important parameters are the output feature size parameter and the batch size parameter in each layer of the structure. In addition, each layer of the structure adopts the ReLU activation function. During the training process, the optimizer uses the Adam function.
To analyze the impact of the output feature size and the batch size on the recognition accuracy of the GCN model, output feature sizes of 512, 1024, and 2048 and batch sizes of 16, 32, 64, and 128 are considered, and 500 epochs of training are carried out using dataset A. When the output feature size is 2048, training fails to converge, so this setting is discarded. A total of eight GCN models were therefore trained and validated with the data in dataset B. The recognition accuracies of the eight models are shown in Figure 6. The GCN model achieves its highest recognition accuracy, 93.03%, with an output feature size of 512 and a batch size of 128. The finalized hyperparameters of the GCN model for the frame structure are listed in Table 1.
To demonstrate the superiority of this method, its recognition accuracy is compared with other common neural networks. The 1D-CNN is a CNN model that can effectively extract features from sequence data; unlike traditional RNNs, it processes the entire sequence in parallel, resulting in faster computation and fewer parameters, making it a common and efficient neural network model. LSTM-based training and identification has also achieved certain results in structural damage identification research in fields such as buildings, bridges, and offshore jackets [19]. Comparing the experimental results of this method with the two models mentioned above on the common dataset, it can be found that the GCN model is more suitable for analyzing complex sensor structures, and its recognition performance is better than that of the 1D-CNN and LSTM models. The specific identification accuracy for node damage is shown in Table 2.

3.1.3. Noise Resistance

In practical applications, environmental noise inevitably affects the quality of monitored vibration signals, and the form of the noise varies with the environment. To test the noise resistance of the proposed method, the signal-to-noise ratio (SNR) of the signal is reduced to simulate the impact of noise. The SNR is calculated by Equation (11), and different noise levels are added to the acceleration responses of the test dataset (see Equation (12)). We reduce the SNR of the validation set signals, verify the recognition accuracy of the proposed method on the noise-contaminated validation set, and then analyze its noise resistance.
$$ SNR = 10 \log_{10}\left( \frac{P_{signal}}{P_{noise}} \right) \quad (11) $$
$$ \hat{S}_n = S_n + \xi \quad (12) $$
$$ \xi \sim N(0, \sigma^2) \quad (13) $$
Among these, Sn is the signal test set without noise addition and S n ^ is the signal test set after adding noise, while ξ is a random variable that follows a Gaussian distribution, with zero mean and σ 2 variance. In this work, the values of noise σ to be added for signals with different signal-to-noise ratios were calculated using Equations (11)–(13). Seven signal-to-noise ratios of 80 dB, 60 dB, 40 dB, 20 dB, 15 dB, 10 dB, and 5 dB were selected to process and identify the validation sets of frame structure cases.
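A compact sketch of this noise-injection procedure is shown below; the function name is ours, and using the average signal power as P_signal is one common convention.

```python
import numpy as np


def add_noise_at_snr(signal: np.ndarray, snr_db: float, seed: int = 0) -> np.ndarray:
    """Add zero-mean Gaussian noise so that the result has the target SNR (dB)."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(signal ** 2)                  # average signal power
    p_noise = p_signal / (10 ** (snr_db / 10))       # from SNR = 10 log10(Ps / Pn), Eq. (11)
    sigma = np.sqrt(p_noise)                         # std of xi ~ N(0, sigma^2), Eq. (13)
    return signal + rng.normal(0.0, sigma, size=signal.shape)   # Eq. (12)


# Example: degrade a clean validation signal to the seven SNR levels used here.
clean = np.sin(np.linspace(0, 100, 10_000))
noisy_sets = {snr: add_noise_at_snr(clean, snr) for snr in (80, 60, 40, 20, 15, 10, 5)}
```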
In the case study of the frame structure under different SNR conditions, the recognition results of each model on the frame structure test dataset are shown in Table 3, and Figure 7 plots the recognition accuracy of the models at different SNRs. When the SNR is greater than 40 dB, the models maintain good recognition accuracy and stable detection ability. In the SNR range from 40 dB to 20 dB, the recognition accuracy of each model begins to decline to varying degrees; the S_DFNN (GCN) model declines the least, and the LSTM model declines the most, by 25.9%. When the SNR of the validation set is lower than 20 dB, the recognition accuracy of the single models falls below 70%, which does not meet practical requirements. At an SNR of 15 dB, only the S_DFNN (GCN) model still maintains good recognition performance. The established data fusion neural network model therefore not only improves the recognition accuracy of the model but also improves its stability and noise resistance in damage detection.

3.2. Introduction to the Laboratory Cable-Stayed Bridge Test Model

In the preparation stage of the test, the basic parameters of the test need to be determined. These mainly include the overall arrangement parameters and specific structural parameters of the cable-stayed bridge model, the arrangement of measurement points on the model structure, the simulation method of the excitation load, and the criteria for the baseline (i.e., healthy) state of the cable-stayed bridge model. The following subsections introduce the cable-stayed bridge test model and test program in detail.
Under laboratory conditions limiting the maximum total length to 6 m, we established a model of a traditional double-tower, three-span cable-stayed bridge. The main span of the physical test model is 3600 mm with side spans of 1200 mm, resulting in a total bridge length of 6000 mm. The overall arrangement parameters of the cable-stayed bridge model are shown in Figure 8.
A shaker is used to apply the excitation load; controlled by a signal generator, it simulates random waves, and a dynamic signal acquisition system collects the acceleration signals of the main beam of the cable-stayed bridge at 14 measurement points. The sampling frequency is set to 1000 Hz, and each condition is repeated in six tests. Each test lasts more than 100 s, from which 100 s of signal data are extracted, so that 600 s of acceleration signal data are ultimately obtained for each condition. The test scene of the cable-stayed bridge is shown in Figure 9.
(1)
Tension sensor measurement point arrangement: Because the number of tension sensors is limited, it is not possible to instrument every stay cable. Therefore, after analysis, in order to regulate the cable forces and control the overall state of the cable-stayed bridge, tension sensors are embedded in all the stay cables of the two cable fans on the west side. In addition, tension sensors are arranged at intervals on the east-side stay cables so that the west-side readings can be checked against the east-side ones; this numerical calibration is largely used to monitor the overall cable-force state of the bridge. Figure 10 shows the arrangement of tension sensors on the fan cables on both sides of the center axis of the main beam, where a total of 38 tension sensors are arranged.
(2)
Arrangement of micrometer measurement points: The micrometers are mainly used to monitor the deflection displacement of the key cross-sections of the main girder. Based on the overall arrangement parameters of the bridge model and finite element calculation and analysis, the selected measurement points are shown in Figure 11. At the same time, the micrometer-based displacement control of the key cross-sections is coordinated with a horizontal-wire positioning method, which is used for zero-displacement control of the main girder when the cable-stayed bridge is in its reasonable finished state.
(3)
Acceleration sensor measurement point arrangement: When arranging the acceleration sensors, it is necessary to consider the sensitivity of the measurement point locations to structural modal changes. Drawing on experience from other structural analyses, the sensors are placed as close as possible to the peaks of the vibration modes. A finite element model of the cable-stayed bridge is established, modal analysis is performed, and the first ten vertical mode shapes are extracted; the sensors are then placed at the peak positions of these modes. A total of 14 acceleration sensors are arranged at the selected measurement points on the main girder units, as shown in Figure 12.

3.3. Design of Main Beam Damage Conditions

The main girder of the cable-stayed bridge test model designed in the laboratory consists of 82 girder units connected by bolts, which allows damaged girder units to be embedded into the main girder structure and thus different damage conditions of the main girder to be simulated. Damaged units are created by cutting the cross-section of an intact beam unit; this paper mainly designs two kinds of damage units, one with 6 cm of cutting damage and another with 10 cm of cutting damage, distributed on the two sides of the main beam unit, as shown in Figure 13b (type A and type B damage units). These two kinds of damage units can be installed at different positions of the main beam to simulate a variety of main beam damage conditions. In this test, we mainly study structural damage in the main span of the cable-stayed bridge model. We selected two positions, unit D0 and unit D10, for replacement with damage units, as shown in Figure 14, and designed three damage conditions, which are described in Table 4.

Analysis and Discussion of Main Beam Damage Identification Results

Dataset I, established from the cable-stayed bridge model test, is used to verify the accuracy of the model proposed in the previous section and its parameter configuration, and the data in Dataset I are divided by data preprocessing methods. Normally, the structural response data under n states are monitored or collected by a signal acquisition system as multidimensional time series of length L s with a sampling frequency of f Hz, whereas deep learning network training requires a batch of independent data samples of the same size rather than a complete long-period time series. Therefore, the sliding window method is used for data augmentation. Sliding window decomposition is a common method of signal data decomposition, and its division of the dataset is shown in Figure 15: the complete record is divided into N identical sub-samples by a window of size h × w (window length, i.e., time dimension, h; window width, i.e., sensor dimension, w) sliding with a time step of s. When the amount of data is limited, data augmentation can be achieved by overlapping neighboring windows, and the total sample size of the dataset, N, can be computed by Equation (14):
$$ N = \frac{1}{s}\left( L f - h \right) + 1 \quad (14) $$
To accelerate network training convergence and improve generalization ability, the data need to be normalized before decomposition, as shown in Equation (15):
$$ \hat{L} = \frac{L_0 - \bar{L}_0}{\sigma(L_0)} \quad (15) $$
where L 0 is the original dataset, L 0 ¯ and σ   ( L 0 ) are the mean and standard deviation of the original data respectively, and L ^ is the normalized dataset.
Before training the deep learning model, the complete dataset collected from the experiment needs to be preprocessed. First, the data are normalized to the same scale by Equation (15), and the normalized data are then augmented by the sliding window decomposition technique. A total of 600 s of data are collected for each scenario, of which two-thirds (400 s) are used for model optimization and training, and the remaining one-third (200 s) is used for cross-validation of the model performance. During the iterative training of the model, the two-thirds of the dataset used for training is divided into a training set and a test set in the ratio of 4:1. The dataset is processed in batches, and the sample order is shuffled uniformly at random in each training session to ensure that all signals have an equal chance of being selected for training and testing.
For the GCN feature extraction module, the sliding window size is 500 × 14, and the sliding window step is 500. Each kind of working condition data in the dataset is divided into 1200 samples, of which 800 samples are used for model training and 400 samples are used for model validation. The ratio of the training set to the test set is 0.8:0.2. The eight models proposed in this paper are trained by the divided dataset and validated by the validation set data. The parameter configuration of the GCN model used to identify structural changes in cable-stayed bridges is shown in Table 5.
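To make the preprocessing concrete, the following sketch applies the normalization of Equation (15) and the sliding-window decomposition of Equation (14) with the window size and step used here (500 × 14, step 500); the function names are ours, and a global z-score is one possible reading of Equation (15).

```python
import numpy as np


def normalize(data: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance scaling of the raw record (Equation (15))."""
    return (data - data.mean()) / data.std()


def sliding_window(data: np.ndarray, h: int, s: int) -> np.ndarray:
    """Split a (total_samples, n_sensors) record into N windows of shape (h, n_sensors)."""
    n_windows = (data.shape[0] - h) // s + 1          # Equation (14) with L*f = total_samples
    return np.stack([data[i * s: i * s + h] for i in range(n_windows)])


record = np.random.randn(600 * 1000, 14)              # 600 s at 1000 Hz, 14 sensors
samples = sliding_window(normalize(record), h=500, s=500)
print(samples.shape)                                  # -> (1200, 500, 14), i.e., 1200 samples per condition
```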
The recognition accuracy of all models for the different main beam unit damage conditions is shown in Table 6. As shown in Figure 16 and Figure 17, in terms of overall recognition accuracy, the single GCN model has lower recognition accuracy, while the S_DFNN (GCN) model achieves better overall recognition accuracy than the three single models and can completely and accurately recognize the damage of each working condition.

3.4. Design of Damage Conditions of the Cable

Stay cables are among the main load-bearing components of a cable-stayed bridge, and under long-term loading and environmental effects in service they are prone to weakening, relaxation, and other damage. In this section, the state of a stay cable is adjusted through its hollow-bolt anchorage; the slack positions are shown in Figure 18. Tension cables LB1_W and LB9_W were selected as the damage objects, and three typical cable damage conditions were designed, as listed in Table 7. Using the test method described in Section 3.2 above, a total of 600 s of acceleration signal data were obtained for each working condition, and a dataset, named Dataset III, was established for identifying relaxation damage of the cable structure.

Analysis and Discussion of Cable Damage Identification Results

After model parameter optimization and several tests using the above hyperparameters, Dataset III was divided into a training set, test set, and validation set by the data preprocessing method adopted in the previous section. The GCN and S_DFNN (GCN) models were trained on the training and test sets, and the trained and optimized models were used to identify the three stay cable damage conditions. The identification results are shown in Table 8. As shown in Table 8, the recognition accuracy of the single models does not exceed 90%, but after incorporating the data fusion technique, the models show excellent recognition ability, and the single-model-based S_DFNN achieves a higher accuracy. Overall, data fusion technology is effective in improving the recognition accuracy of the deep learning model for stay cable damage.

4. Conclusions and Outlook

The GCN model takes into account the correlation between different dimensions of multi-dimensional time series data. It is suitable for mining features of vibration signals collected by a sensor array network from the dimension of spatial structure and is able to handle the complex spatial characteristics of the sensors. This paper proposes combining the GCN model with the S_DFNN technique to improve its stability and recognition accuracy and to further reduce damage misjudgment caused by environmental factors. The specific conclusions are as follows:
  • We demonstrated through the frame model case that the GCN model has clear advantages and that combining it with decision-level data fusion technology further improves recognition accuracy. The GCN reaches 93.03% accuracy, exceeding the 1D-CNN model with data compression, and after combination into the S_DFNN model, the recognition accuracy reaches 99.57%, a substantial improvement;
  • The main girder damage identification and stay cable relaxation damage identification for the cable-stayed bridge model revealed that the GCN model is sensitive to the location of structural change and can mine the spatial characteristics between the sensor arrays, but there is room for improvement in identification performance and accuracy. After integrating the S_DFNN model, the accuracy of main girder damage recognition for the cable-stayed bridge was improved by 11.44%;
  • For stay cable damage identification, combining decision-level data fusion technology with the deep learning model optimizes the identification accuracy of the single model to a certain extent, improving the identification accuracy by 21.81% to 99.69%, which achieves the goal of accurately determining the damage to the stay cables.
In summary, combining the GCN model with the S_DFNN model improves the recognition accuracy of the GCN model and yields good damage recognition performance across different structural models, with good generalization ability and robustness, making it an advanced and advantageous damage recognition method.
Applications of the GCN model in structural damage recognition research are still relatively few. During the research for this paper, it was found that its performance in scenarios with simple sensor network structures is not ideal, but it shows excellent ability in identifying node damage in the frame structure, which has a complex sensor network. Therefore, studying reasonable sensor network arrangements will improve the recognition accuracy of the GCN model, and studying more accurate methods of establishing the adjacency matrix is also very important. For long-span bridge structures such as cable-stayed bridges and suspension bridges, there are close interactions between different stay cables, and exploring the application of the GCN model to cable damage identification using the cable force as an evaluation index is a worthwhile direction.

Author Contributions

Conceptualization, L.X. and H.G.; methodology, X.L. and L.X.; software, L.X. and H.G.; validation, L.X., H.G. and L.Y.; formal analysis, L.X.; investigation, L.Y.; resources, X.L.; data curation, L.X.; writing—original draft preparation, H.G. and L.X.; writing—review and editing, L.X. and L.Y.; visualization, L.X.; supervision, X.L.; project administration, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Teng, S.; Chen, G.; Gong, P.; Liu, G.; Cui, F. Structural damage detection using convolutional neural networks combining strain energy and dynamic response. Meccanica 2020, 55, 945–959. [Google Scholar] [CrossRef]
  2. Azimi, M.; Eslamlou, A.D.; Pekcan, G. Data-Driven Structural Health Monitoring and Damage Detection through Deep Learning: A State-of-the-Art Review. Sensors 2020, 20, 2778. [Google Scholar] [CrossRef] [PubMed]
  3. Rizzo, P.; Enshaeian, A. Challenges in Bridge Health Monitoring: A Review. Sensors 2021, 21, 4336. [Google Scholar] [CrossRef] [PubMed]
  4. Chen, Z.Y.; Wang, Y.Z.; Wu, J.; Deng, C. Sensor data-driven structural damage detection based on deep convolutional neural networks and continuous wavelet transform. Appl. Intell. 2021, 51, 5598–5609. [Google Scholar] [CrossRef]
  5. Wang, X.W.; Zhang, X.A.; Shahzad, M.M. A novel structural damage identification scheme based on deep learning framework. Structures 2021, 29, 1537–1549. [Google Scholar] [CrossRef]
  6. Dong, S.; Wang, P.; Abbas, K. A survey on deep learning and its applications. Comput. Sci. Rev. 2021, 40, 100379. [Google Scholar] [CrossRef]
  7. Liu, Z. Health Monitoring of Railroad Structure Based on Graph Neural Network; Nanchang University: Nanchang, China, 2022. [Google Scholar]
  8. Dang, V.H.; Vu, T.C.; Nguyen, B.D.; Nguyen, Q.H.; Nguyen, T.D. Structural damage detection framework based on graph convolutional network directly using vibration data. Structures 2022, 38, 40–51. [Google Scholar] [CrossRef]
  9. Grande, E.; Imbimbo, M. A multi-stage data-fusion procedure for damage detection of linear systems based on modal strain energy. J. Civ. Struct. Health Monit. 2014, 4, 107–118. [Google Scholar] [CrossRef]
  10. Zhou, Q.; Zhou, H.; Zhou, Q.; Yang, F.; Luo, L.; Li, T. Structural damage detection based on posteriori probability support vector machine and Dempster-Shafer evidence theory. Appl. Soft Comput. 2015, 36, 368–374. [Google Scholar] [CrossRef]
  11. Guo, T.; Wu, L.; Wang, C.; Xu, Z. Damage detection in a novel deep-learning framework: A robust method for feature extraction. Struct. Health Monit. 2020, 19, 424–442. [Google Scholar] [CrossRef]
  12. Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2017), Toulon, France, 24–26 April 2017. [Google Scholar]
  13. Wei, X.R.; Zhou, H.Z. Output-only modal parameter identification of civil engineering structures. Struct. Eng. Mech. 2004, 17, 115949. [Google Scholar]
  14. Zhang, Y.; Hou, Y.; OuYang, K.; Zhou, S. Multi-scale signed recurrence plot based time series classification using inception architectural networks. Pattern Recognit. 2022, 123, 108385. [Google Scholar] [CrossRef]
  15. Jeong, S.; Kim, E.J.; Park, J.W.; Sim, S.H. Data fusion-based damage identification for a monopile offshore wind turbine structure using wireless smart sensors. Ocean. Eng. 2020, 195, 106728. [Google Scholar] [CrossRef]
  16. Zhang, J.; Zhang, J.; Teng, S.; Chen, G.; Teng, Z. Structural Damage Detection Based on Vibration Signal Fusion and Deep Learning. J. Vib. Eng. Technol. 2022, 10, 1205–1220. [Google Scholar] [CrossRef]
  17. He, F.; Zhou, J.; Feng, Z.; Liu, G.; Yang, Y. A hybrid short-term load forecasting model based on variational mode decomposition and long short-term memory networks considering relevant factors with Bayesian optimization algorithm. Appl. Energy 2019, 237, 103–116. [Google Scholar] [CrossRef]
  18. Liu, T.; Xu, H.; Ragulskis, M.; Cao, M.; Ostachowicz, W. A Data-Driven Damage Identification Framework Based on Transmissibility Function Datasets and One-Dimensional Convolutional Neural Networks: Verification on a Structural Health Monitoring Benchmark Structure. Sensors 2020, 20, 1059. [Google Scholar] [CrossRef] [PubMed]
  19. Gordan, M.; Sabbagh-Yazdi, S.R.; Ismail, Z.; Ghaedi, K.; Carroll, P. State-of-the-art review on advancements of data mining in structural health monitoring. Measurement 2022, 193, 110939. [Google Scholar]
  20. Flah, M.; Ragab, M.; Lazhari, M.; Nehdi, M.L. Localization and classification of structural damage using deep learning single-channel signal-based measurement. Autom. Constr. 2022, 139, 104271. [Google Scholar] [CrossRef]
  21. Azimi, M.; Pekcan, G. Structural health monitoring using extremely compressed data through deep learning. Comput.-Aided Civ. Infrastruct. Eng. 2020, 35, 597–614. [Google Scholar] [CrossRef]
  22. Truong, T.T.; Lee, J.; Nguyen-Thoi, T. An effective framework for real-time structural damage detection using one-dimensional convolutional gated recurrent unit neural network and high performance computing. Ocean. Eng. 2022, 253, 111202. [Google Scholar] [CrossRef]
Figure 1. The overview of the methodology.
Figure 2. GCN model architecture for structural damage diagnosis.
Figure 3. Structure of the mapping network.
Figure 4. Single model combined with data fusion technique identification method.
Figure 5. The Qatar University Grandstand Simulator (the numbers indicate the 30 sensor locations).
Figure 6. Recognition results of joint damage by GCN model with different parameters.
Figure 7. The recognition accuracy of some models for frame structure node damage at different SNRs.
Figure 8. Parameters of the overall arrangement of the cable-stayed bridge model (unit: mm).
Figure 9. Test scene of cable-stayed bridge.
Figure 10. Layout of measuring points for tension sensors.
Figure 11. Micrometer point layout.
Figure 12. Acceleration sensor measurement point layout.
Figure 13. Main beam replacement damage unit.
Figure 14. Location of damaged units of main beams.
Figure 15. Data set preprocessing methods.
Figure 16. Regression prediction results of GCN model for main beam variation.
Figure 17. Regression prediction results of GCN-based S_DFNN model for main beam variation.
Figure 18. Stay cable slack position.
Table 1. Parameter configuration of GCN model for damage identification of frame structure.
Parameter | Specific Value
First GCN layer | 512
Second GCN layer | 512
Third GCN layer | 512
Dropout value | 0.3
Training optimizer | Adam
Table 2. Comparison of QUGS structural damage recognition methods based on deep learning.
Study | Method | Number of Sensors | Detection Scope | Accuracy
Flah et al. [20] | 1D-CNN | 30 | All nodes | 86%
Azimi et al. [21] | Compressed data + 1D-CNN | 30 | All nodes | 91.9%
Truong et al. [22] | 1D-CNN + GRU | 1 | One joint | 91.31%
Contrast model | GCN | 30 | All nodes | 93.03%
Contrast model | LSTM | 12 | All nodes | 93.37%
Proposed model | S_DFNN (GCN) | 30 | All nodes | 99.57%
Table 3. The recognition accuracy of some models for frame structure node damage at different SNRs.
SNR (dB) | LSTM | GCN | S_DFNN (GCN)
80 | 0.93193 | 0.93095 | 0.99899
60 | 0.93151 | 0.93128 | 0.99899
40 | 0.93036 | 0.92960 | 0.99899
20 | 0.67137 | 0.82191 | 0.97757
15 | 0.31664 | 0.56939 | 0.80267
10 | 0.10607 | 0.24530 | 0.38684
5 | 0.04881 | 0.09224 | 0.14037
Table 4. Damage conditions of main girder units.
Damage Condition | Description of Damage Status
Condition 0 (non-destructive) | Structural baseline condition
Condition 1 | Replacement of unit D0 of the main beam with a type A damage unit
Condition 2 | Replacement of unit D10 of the main beam with a type B damage unit
Condition 3 | Replacement of unit D0 with a type A damage unit and of unit D10 with a type B damage unit
Table 5. Parameter configuration of GCN model for damage identification of cable-stayed bridge model.
Parameter | Specific Value
First GCN layer | 500
Second GCN layer | 500
Third GCN layer | 500
Dropout value | 0.3
Training optimizer | Adam
Table 6. Identification accuracy of structural damage of main beams by different models.
Model | Non-Destructive | D0 | D10 | D0 + D10 | Total
GCN | 96.75% | 88.75% | 89.50% | 79.25% | 88.56%
S_DFNN (GCN) | 99.99% | 99.91% | 99.97% | 99.99% | 99.97%
Table 7. Damage conditions of the stay cables.
Damaged Cable Number | Description of Damage Status
Non-destructive | Structural baseline condition
LB1_W | Stay cable LB1_W slackened by 3 mm
LB9_W | Stay cable LB9_W slackened by 3 mm
LB1_W + LB9_W | Stay cables LB1_W and LB9_W each slackened by 3 mm
Table 8. Accuracy of different models in recognizing stay cable damage.
Model | Non-Destructive | LB1_W | LB9_W | LB1_W + LB9_W | Total
GCN | 85.25% | 77.75% | 83.75% | 64.75% | 77.88%
S_DFNN (GCN) | 99.98% | 99.99% | 99.77% | 98.75% | 99.62%
