Article

Damage Localization and Severity Assessment of a Cable-Stayed Bridge Using a Message Passing Neural Network

1 Computer Engineering and Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Korea
2 Civil and Environmental Engineering, Sejong University, Seoul 05006, Korea
* Author to whom correspondence should be addressed.
Sensors 2021, 21(9), 3118; https://doi.org/10.3390/s21093118
Submission received: 21 March 2021 / Revised: 18 April 2021 / Accepted: 26 April 2021 / Published: 30 April 2021
(This article belongs to the Section Internet of Things)

Abstract

Cable-stayed bridges are damaged by multiple factors such as natural disasters, weather, and vehicle load. In particular, if a stayed cable, an essential yet vulnerable component of the cable-stayed bridge, is damaged, it may adversely affect the adjacent cables and worsen the condition of the bridge structure. Therefore, the condition of the cables must be determined accurately with a technology-based evaluation strategy. In this paper, we propose a deep learning model that locates the damaged cable and estimates its cross-sectional area. To obtain the data required for training, we use the tension data of cables with reduced cross-sectional areas, simulated in the Practical Advanced Analysis Program (PAAP), a robust structural analysis program. We represent the sensor data of the damaged cable-stayed bridge as a graph composed of vertices and edges using the tension and spatial information of the sensors. We apply the sensor geometry by mapping the tension data to the graph vertices and the connection relationships between sensors to the graph edges. We employ a Graph Neural Network (GNN) to use the graph representation of the sensor data directly. GNNs, which have been actively studied recently, handle graph-structured data with state-of-the-art performance. We train a GNN framework, the Message Passing Neural Network (MPNN), to perform two tasks: identifying the damaged cable and estimating its cross-sectional area. We adopt a multi-task learning method for more efficient optimization. We show that the proposed technique achieves high performance on the cable-stayed bridge data generated with PAAP.

1. Introduction

Cable-stayed bridges, one of the essential transportation infrastructures in modern society, are damaged and corroded by external factors such as natural disasters, climate, ambient vibrations, and vehicle loads. As damage accumulates, the condition of the structure deteriorates and the bridge loses its function. Damaged bridges may even collapse, causing severe problems such as human injury and economic loss. In particular, the stayed cable is a necessary but vulnerable primary component of cable-stayed bridges [1]. When a cable starts to be damaged, its stiffness and cross-sectional area decrease [2]. Since the cable has a small cross-sectional area, it may be lost due to low resistance against accidental lateral loads. Cable loss may cause overloading in the bridge and adversely affect adjacent cables [3]. Therefore, the cable conditions must be thoroughly inspected. However, we cannot directly identify the damaged cable and its cross-sectional area only from raw data collected from the sensors on the bridge, such as the cable tension. Furthermore, if the damage is not significant, it may be challenging to determine visually whether damage has occurred, unlike crack detection. Manually checking all cables one by one is very inefficient and increases maintenance costs. Therefore, to ensure the safety and durability of the bridge, we need a technology-based evaluation strategy. Moreover, the technology must be able to capture small changes in the cable area accurately.
The importance of Structural Health Monitoring (SHM) has been emphasized for assessing damage such as corrosion, defects, cracks, and material changes in structures. Researchers have introduced deep learning models, as well as statistical analysis and machine learning, as SHM techniques to determine damaged cable locations [2] or detect stiffness reduction [4]. With the advancement of device fabrication processes, artificial intelligence meets the need for fast and accurate problem solving using vast amounts of data collected from sensor devices [5]. Deep learning models learn high-level representations of data and complex nonlinear correlations, and they are frequently preferred as automatic damage pattern prediction tools. In many civil engineering studies, deep learning models have achieved high performance with data-driven SHM techniques. Deep learning contributes to the advancement of SHM analysis because it effectively processes both unstructured data such as images and structured data such as time series. As SHM technologies, many researchers have proposed architectures such as the Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Deep Autoencoder (DAE), and Generative Adversarial Network (GAN) [6]. Pathirage et al. [7] trained an autoencoder neural network to perform dimensionality reduction and estimated the stiffness elements of a steel frame structure from modal information. Gu et al. [8] calculated the Euclidean distance between the target data and the output of a multilayer Artificial Neural Network (ANN) trained with undamaged structure data. They proposed an unsupervised learning approach to locate damaged structures from the increased Euclidean distance. Truong et al. [9] introduced deep feedforward neural networks (DFNN) to detect damage to truss structures. They simulated damaged structures by reducing the elastic modulus of individual elements and verified the performance of the proposed DFNN. Chang et al. [10] estimated damage locations and severity by training neural networks with modal properties after reducing stiffness to create damage patterns. Abdeljaber et al. [11] proposed a one-dimensional CNN that extracts features from raw accelerometer signals and classifies damage. With the development of computer vision, 2D CNNs have been successfully used as vision-based SHM techniques [12]. CNNs trained with structure images successfully classify surface damage such as concrete cracks and spalling conditions [13,14,15,16].
In this paper, we propose a Graph Neural Network (GNN) to evaluate the cable cross-sectional area reduction caused by corrosion or fracture of structures. The proposed method consolidates the overall structure and geometric features of the cable-stayed bridge. Deep learning-based damage detection requires sufficient data covering various damaged states for neural network training. However, it is almost impossible to obtain data on damaged bridges in operation for safety reasons. Besides, to train a classifier that detects the damage location, data on each damaged location are required. Moreover, it is impossible to obtain balanced data for all damage cases because damage scenarios are very rare in the real world, since a bridge must remain in a safe condition over its long service life [6]. Therefore, it is difficult to train a damage detection model due to the difficulty of collecting data and the class imbalance problem. To resolve these limitations, there is a growing need for research on applying SHM technology to digital twin models. Therefore, we introduce the Practical Advanced Analysis Program (PAAP) [17,18,19,20] to generate the GNN training data. PAAP is very efficient as it can capture the material non-linearities of space structures. In addition, the reliability of PAAP has been evaluated for cable-stayed bridges [20,21] and suspension bridges [22,23]. Therefore, it is possible to simulate cable-stayed bridges with various conditions, such as material properties and loads, similar to real-world bridge conditions. Furthermore, we can extract data on various damage states of cable-stayed bridges that cannot be obtained from real-world bridges and use them for deep learning model training. Besides, we can predict the real-world bridge state by feeding real SHM data into the trained deep learning model. In this work, we employ PAAP to analyze the cable tensions of cable-stayed bridge models with reduced cable areas. Moreover, we represent the sensor data as a graph composed of vertices and edges using the generated tension data and spatial information. Studies using point clouds produced by laser scanning have been proposed to evaluate the structural state using the entire bridge structure and 3D spatial information between sensors [24,25,26]. From these previous studies, we note that the 3D spatial information of bridges can provide helpful information about the structural state. However, using point cloud data for structural health evaluation is possible only for visible damage such as cracks and spalling [27,28], and point cloud data are not directly related to the loss of stiffness or strength since they do not adequately capture depth due to occlusion [29]. Since a GNN can learn from graph-structured data, it resolves the limitation of the CNN, which accepts only grid-structured data. Thanks to the rapid development of GNNs capable of graph-level prediction, the utilization of deep learning is increasing in various domains such as traffic forecasting [30,31], recommendation systems [32,33], molecular property prediction [34], and natural language processing [35,36]. Recently, a study using a GNN for cable-stayed bridge monitoring has been proposed. Li et al. [37] explored the spatiotemporal correlation of the sensor network in a cable-stayed bridge using a graph convolutional network and a one-dimensional convolutional neural network. They showed that the proposed method effectively detects sensor faults and structural variation.
We expect GNNs to be actively examined as SHM technologies in the future. In this study, we use the Message Passing Neural Network (MPNN), a representative architecture designed to process graph data. Gilmer et al. [34] proposed the MPNN, a GNN framework that represents the message transfer between the vertices of a graph as a learnable function. MPNN learns the representation of the graph while repeatedly updating each vertex with messages received from its neighboring vertices. We create a graph from the connection relationships between the cable-stayed bridge nodes and apply the node and element data as the vertex features and edge features of the graph, respectively. We train the MPNN to identify damaged cables using the graph-structured sensor data. We also estimate the cross-sectional area of the damaged cable in addition to identifying its location, to reveal a detailed bridge condition. We adopt a multi-task learning method to ensure that our deep learning model predicts both tasks effectively. Multi-task learning is beneficial when related tasks are learned together [38]. Since estimating the location and the cross-sectional area of damaged cables are not independent tasks, the deep learning model can be optimized efficiently while learning both tasks simultaneously.

2. Background

Structural health conditions of cable-stayed bridges are generally monitored based on cable tension changes related to cable area parameters. The tensile forces on cables inevitably change when one or more cables are damaged. A machine learning model is one of the damage detection techniques that identify damage location and degree. This section presents a fundamental understanding of the cable-stayed bridge model and our proposed approach for damage detection. A robust structural analysis program, Practical Advanced Analysis Program (PAAP), is introduced, followed by our cable-stayed bridge model. We then introduce a deep learning theory to understand Message Passing Neural Network (MPNN) adopted as a damage detection technique in this work.

2.1. Practical Advanced Analysis Program (PAAP)

PAAP is efficient in capturing the geometric and material non-linearities of space structures using both the stability function and the refined plastic hinge concept. The Generalized Displacement Control (GDC) technique is adopted for solving the nonlinear equilibrium equations with an incremental-iterative scheme. This algorithm accurately traces the equilibrium path of nonlinear problems with multiple limit points and snap-back points. The details of the GDC are presented in [17,39]. In many studies of cable-stayed bridges [21,40], cables have been modeled as truss elements, while pylons, girders, and cross-beams were modeled as plastic-hinge beam-column elements. The plastic-hinge beam-column elements utilize stability functions [41] to predict second-order effects. The inelastic behavior of the elements is also captured with the refined plastic hinge model [42,43]. To correctly model the realistic behavior of cable structures, the catenary cable element is employed in PAAP due to its precise numerical expressions [40]. The advantage of PAAP is that nonlinear structural responses are accurately obtained with only one or two elements per structural member, leading to low computational cost [17,21]. Thus, PAAP is employed to analyze and determine the cable tensions in our cable-stayed bridge model.

2.2. Cable-Stayed Bridge Model

A cable-stayed bridge model of the semi-harp type is proposed as shown in Figure 1. The bridge has a center span of 122 m and two side spans of 54.9 m. Two 30 m-high towers support two traffic lanes with an overall width of 7.5 m. Pylons, girders, and cross beams are made of steel with a specific weight of 76.82 kN/m³. The specific weight of the stayed cable is 60.5 kN/m³. In the PAAP, the girders, pylons, and cross beams are modeled as plastic-hinge beam-column elements. The stayed cables are modeled as catenary elements. For simplicity in determining the damage of the cable, only the dead load induced by the self-weight of the bridge is considered.

2.3. Multilayer Perceptron

The most straightforward neural network, the Multilayer Perceptron (MLP), has a structure that includes multiple hidden layers between the input layer and the output layer. In each fully connected hidden layer, an activation function is applied to an affine transformation of the previous layer's hidden units, producing the hidden units $h^{(i)}$ as follows:
$$h^{(i)} = \sigma_i \left( \sum_k w_k^{(i)} h_k^{(i-1)} + b^{(i)} \right),$$
where $w^{(i)}$ and $b^{(i)}$ represent the weight and bias of the $i$-th hidden layer, respectively, and $\sigma_i$ is an activation function for nonlinear learning. Among the various activation functions, the Rectified Linear Unit (ReLU), hyperbolic tangent (tanh), and sigmoid functions defined below are applied most frequently.
$$\mathrm{ReLU}(x) = \max(x, 0)$$
$$\tanh(x) = \frac{1 - \exp(-2x)}{1 + \exp(-2x)}$$
$$\mathrm{sigmoid}(x) = \frac{1}{1 + \exp(-x)}.$$
Dropout is applied to the hidden layers to prevent overfitting of the neural network. During the training process, dropout disconnects randomly selected hidden units with a certain probability (the dropout rate). The network then becomes more robust because its output does not depend only on specific units.
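To make the description above concrete, the following is a minimal sketch of such an MLP in PyTorch (the framework used in Section 5). The input and output sizes follow the setting described later (ten cable tensions in, 80 predictions out), while the hidden size and dropout rate are placeholder values, not the tuned ones.

import torch
import torch.nn as nn

class MLP(nn.Module):
    """Minimal multilayer perceptron: affine layers + ReLU activations + dropout."""
    def __init__(self, in_dim=10, hidden_dim=64, out_dim=80, dropout=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),   # h^(1) = ReLU(W x + b)
            nn.ReLU(),
            nn.Dropout(dropout),             # randomly zero hidden units during training
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, out_dim),  # linear output layer
        )

    def forward(self, x):
        return self.net(x)

mlp = MLP()
y = mlp(torch.randn(32, 10))  # a batch of 32 samples, each with 10 cable tensions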

2.4. Recurrent Neural Network

A Recurrent Neural Network (RNN) generates its output from the current input and a hidden state representing the past information of sequence data. Typical RNN structures, Gated Recurrent Units (GRUs) [44] and Long Short-Term Memory (LSTM), support gating of the hidden state to control the information flow. Figure 2a shows how the hidden state is calculated in the GRU. The GRU computes the reset gate $r_t \in \mathbb{R}^k$, which controls the memory from the $t$-th input $m_t \in \mathbb{R}^d$ of the input sequence (where $d$ is the dimension of $m_t$), and the update gate $z_t \in \mathbb{R}^k$ (where $k$ is the dimension of the hidden state), which controls the similarity between the new state and the old state. The GRU integrates the computed gates to determine the candidate hidden state $\tilde{h}_t \in \mathbb{R}^k$ and the hidden state $h_t \in \mathbb{R}^k$. The equations of the GRU are as follows:
$$r_t = \sigma\left( W_{mr} m_t + W_{hr} h_{t-1} + b_r \right)$$
$$z_t = \sigma\left( W_{mz} m_t + W_{hz} h_{t-1} + b_z \right)$$
$$\tilde{h}_t = \tanh\left( W_{mh} m_t + r_t \odot \left( W_{hh} h_{t-1} \right) + b_h \right)$$
$$h_t = \left( 1 - z_t \right) \odot \tilde{h}_t + z_t \odot h_{t-1},$$
where $\odot$ and $\sigma$ denote the Hadamard product and the sigmoid function, respectively. $W_{mr}, W_{mz}, W_{mh} \in \mathbb{R}^{d \times k}$ and $W_{hr}, W_{hz}, W_{hh} \in \mathbb{R}^{k \times k}$ are weight parameters, and $b_r, b_z, b_h \in \mathbb{R}^k$ are biases.
Figure 2b shows the computation process of the hidden state in the LSTM. The cell state $c_t \in \mathbb{R}^k$ and hidden state $h_t \in \mathbb{R}^k$ for input data $x_t \in \mathbb{R}^d$, with input gate $i_t$, forget gate $f_t$, and output gate $o_t \in \mathbb{R}^k$, are computed as follows:
$$i_t = \sigma\left( W_{xi} x_t + W_{hi} h_{t-1} + b_i \right)$$
$$f_t = \sigma\left( W_{xf} x_t + W_{hf} h_{t-1} + b_f \right)$$
$$o_t = \sigma\left( W_{xo} x_t + W_{ho} h_{t-1} + b_o \right)$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh\left( W_{xc} x_t + W_{hc} h_{t-1} + b_c \right)$$
$$h_t = o_t \odot \tanh\left( c_t \right),$$
where $W_{xi}, W_{xf}, W_{xo}, W_{xc} \in \mathbb{R}^{d \times k}$ and $W_{hi}, W_{hf}, W_{ho}, W_{hc} \in \mathbb{R}^{k \times k}$ are weight parameters, and $b_i, b_f, b_o, b_c \in \mathbb{R}^k$ are biases.
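To connect the gating equations above to code, here is a minimal sketch of a single GRU update step written directly from those equations (in practice, torch.nn.GRUCell or torch.nn.GRU would be used instead); the dimensions d and k are arbitrary illustrative values.

import torch

def gru_step(m_t, h_prev, p):
    """One GRU update following the reset-gate / update-gate equations above.

    m_t: (d,) input vector; h_prev: (k,) previous hidden state;
    p: dict holding the weight matrices (d x k or k x k) and bias vectors (k,)."""
    sigma = torch.sigmoid
    r_t = sigma(m_t @ p["W_mr"] + h_prev @ p["W_hr"] + p["b_r"])          # reset gate
    z_t = sigma(m_t @ p["W_mz"] + h_prev @ p["W_hz"] + p["b_z"])          # update gate
    h_tilde = torch.tanh(m_t @ p["W_mh"] + r_t * (h_prev @ p["W_hh"]) + p["b_h"])
    return (1 - z_t) * h_tilde + z_t * h_prev                             # new hidden state

d, k = 8, 16
p = {name: torch.randn(d, k) for name in ("W_mr", "W_mz", "W_mh")}
p.update({name: torch.randn(k, k) for name in ("W_hr", "W_hz", "W_hh")})
p.update({name: torch.zeros(k) for name in ("b_r", "b_z", "b_h")})
h = gru_step(torch.randn(d), torch.zeros(k), p)   # h has shape (k,)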
The set2set model [45] is permutation-invariant with respect to its input data and uses an attention mechanism:
$$q_t = \mathrm{LSTM}\left( q^{*}_{t-1} \right)$$
$$e_{i,t} = f\left( m_i, q_t \right)$$
$$a_{i,t} = \frac{\exp\left( e_{i,t} \right)}{\sum_j \exp\left( e_{j,t} \right)}$$
$$r_t = \sum_i a_{i,t}\, m_i$$
$$q^{*}_t = \left[ q_t, r_t \right],$$
where $m_i$ is a memory vector, $q_t$ is the query vector, $f$ is the dot product, and $[\cdot\,,\cdot]$ denotes concatenation.

2.5. Message Passing Neural Network

We assess the damage of the bridge structure using a Graph Neural Network (GNN) so that we can exploit the sensor network topology. The GNN is a powerful deep learning model that manipulates graph-structured data and has recently been adopted in various domains. A GNN updates the hidden state of each vertex with information from its neighbors and thereby captures hidden patterns of the graph. Moreover, it effectively analyzes and infers over the graph. MPNN [34] is a general framework of GNNs. It has been employed to evaluate chemical properties by representing 3D molecular geometry as a graph.
A graph $G$ consists of a vertex set $V$ and an edge set $E$. We denote the feature of a vertex $v \in V$ as $x_v$ and the feature of an edge $(u, v) \in E$ as $e_{uv}$. As shown in Figure 3, MPNN processes the embedded vertices through a message-passing step and a readout step.
In the message-passing step, each vertex receives the aggregated message $m_v^{t+1}$ from its adjacent vertices along the edges through the message function $M_t$. The hidden state of each vertex is then updated from the received message and the previous state of the vertex with the update function $U_t$. The message-passing step is repeated $T$ times so that messages are delivered to a wider range of the graph. In this study, the $t$-th message function $M_t$ and update function $U_t$ are defined as follows:
$$M_t\left( h_v^t, h_u^t, e_{uv} \right) = \sigma\left( A\left( e_{uv} \right) h_u^t \right)$$
$$m_v^{t+1} = \sum_{u \in N(v)} M_t\left( h_v^t, h_u^t, e_{uv} \right)$$
$$U_t = \mathrm{GRU}\left( h_v^t, m_v^{t+1} \right),$$
where $\sigma$ is the ReLU activation function and $A(\cdot)$ is a two-layer neural network that generates a matrix, consisting of a layer with $2k$ neurons and ReLU activation followed by a layer with $k \times k$ neurons. The neighbors of a vertex, $N(v) = \{ u \in V \mid (u, v) \in E \}$, are the vertices connected to $v$ through edges. $h_v^t$ and $h_u^t$ are the $t$-th hidden states of vertices $v$ and $u$, respectively. The initial hidden state $h_v^0$ is the embedding of vertex $v$ obtained by passing $x_v$ through a differentiable function. In Equation (21), we define the update function $U_t$ as the GRU described in Section 2.4. The GRU integrates the state of the vertex itself and the message received through $M_t$ from the adjacent vertices. Finally, the hidden state of vertex $v$ after the $t$-th update is defined as follows:
$$h_v^{t+1} = U_t\left( h_v^t, m_v^{t+1} \right).$$
The readout step aggregates the final hidden states $h^T$ after iterating the message passing $T$ times. The prediction $\hat{y}$ for the target data is calculated with the readout function $R$ as follows:
$$\hat{y} = R\left( \left\{ h_v^T \mid v \in V \right\} \right).$$
We define the readout function $R$ as the set2set model presented in Section 2.4. Since set2set is invariant to permutations of the vertex states, it effectively integrates the vertices of the graph and produces a graph-level embedding.
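For readers who want to relate these equations to an implementation, the following is a minimal sketch using the Deep Graph Library (DGL) mentioned in Section 5.1. Here NNConv plays the role of the edge-network message function $A(e_{uv})$ with sum aggregation, an explicit GRU implements the update function $U_t$, and Set2Set implements the readout $R$. Layer sizes and the number of message-passing iterations are illustrative placeholders rather than the tuned values in Table 2, and the ReLU is applied after aggregation, a small simplification relative to the formula above.

import torch
import torch.nn as nn
from dgl.nn import NNConv, Set2Set

class MPNNSketch(nn.Module):
    def __init__(self, node_in=1, edge_in=1, hidden=32, steps=3):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(node_in, hidden), nn.ReLU())   # vertex embedding h_v^0
        edge_net = nn.Sequential(nn.Linear(edge_in, 2 * hidden), nn.ReLU(),
                                 nn.Linear(2 * hidden, hidden * hidden))    # A(e_uv)
        self.conv = NNConv(hidden, hidden, edge_net, aggregator_type="sum") # message function + sum
        self.gru = nn.GRU(hidden, hidden)                                   # update function U_t
        self.readout = Set2Set(hidden, n_iters=3, n_layers=1)               # readout function R
        self.steps = steps

    def forward(self, g, node_feats, edge_feats):
        h = self.embed(node_feats)
        h_gru = h.unsqueeze(0)
        for _ in range(self.steps):                          # T message-passing iterations
            m = torch.relu(self.conv(g, h, edge_feats))      # aggregated message m_v^{t+1}
            h, h_gru = self.gru(m.unsqueeze(0), h_gru)       # h_v^{t+1} = GRU(h_v^t, m_v^{t+1})
            h = h.squeeze(0)
        return self.readout(g, h)                            # graph-level embedding (2 * hidden)

# Example usage on a toy DGL graph with 10 vertices (one per observed cable):
#   g = dgl.graph((src, dst), num_nodes=10); out = MPNNSketch()(g, node_feats, edge_feats)

The two task-specific output layers described in Section 4.1 would then sit on top of this graph-level embedding.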

3. Data Generating Procedure

In this section, we describe how the cable-stayed bridge data used for the MPNN training are generated. The cable damage model based on the elemental area reduction parameter is presented first, and then the measured cables are specified. Structural analyses are then performed on the proposed model to produce the reliable datasets that are essential for constructing the machine learning model later.

3.1. Cable Damage Model

During the service life of cable-stayed bridges, cables are the most critical load-bearing components [46,47]. Thus, potential damage to cables should be identified early to prevent serious disasters [48,49]. In this study, damage to the cable-stayed bridge is assumed to be caused solely by cable damage. In the cable-stayed bridge model, there are a total of 40 cables corresponding to the 40 catenary elements, numbered as shown in Figure 1. The cable element is assumed to be perfectly flexible [40], with its self-weight distributed along its length. It has a uniform cross-sectional area of 3846.5 mm² in the intact state of the bridge. The cable damage is expressed through a scalar area reduction variable $\alpha$ with a value between 0 and 1 as follows:
$$A_d = (1 - \alpha) \cdot A_i,$$
where $A_i$ represents the cross-sectional area of the cable in the intact state and $A_d$ denotes the cross-sectional area of the cable in the damaged state. $\alpha$ is the elemental area reduction parameter to be identified. Note that $\alpha = 1$ indicates a destroyed cable, and $\alpha = 0$ indicates an intact cable.

3.2. Observed Cables

In most structural health monitoring systems of cable-stayed bridges, sensors are installed to collect data from specific cables for cost-effectiveness. The number of surveyed cables depends on the scale and complexity of the bridge and the monitoring objectives [47,50]. At the surveyed locations, cable sensitivity and safety degrees are evaluated. The measured data are automatically observed and stored during the monitoring period as essential sources for later use. In this study, 10 out of 40 cables are surveyed, including 5 cables on the front side and 5 cables on the back side. We examine five sensor layout cases as shown in Figure 4. However, optimization of sensor placement (OSP) is not within the scope of this study. Since we do not apply OSP technology, the sensors are arranged evenly. We analyze multiple cases to avoid skewing the experimental results toward specific layouts. The tensile forces within these cables are determined by simulating the proposed model in PAAP. Using the GDC method [20,51] to solve the nonlinear problem, PAAP divides the dead load into many incremental steps. The results obtained from the structural system at each incremental step, including internal forces, deformations, displacements, etc., are exported and stored in data files. However, only the cable tensions at the step corresponding to the bridge self-weight are considered as measured data.

3.3. Generating Data

Different cable-stayed bridge models are constructed and analyzed using PAAP. The geometric configurations of the bridge girders, pylons, and cross beams are kept constant, while only the cable cross-sectional areas vary. The output is the tensile force of the observed cables determined in Section 3.2. The complete procedure for generating data is presented in Table 1. For single-cable damage, 4000 data samples are generated as the elemental area reduction parameter varies from 0 to 1 with a step of 0.01. To evaluate the cable system failure based on the simulation results, the prediction model solves the inverse problem. The tensile forces of the 10 observed cables serve as the input, while the predefined elemental area reduction parameters are employed as the target data. These input and target data are utilized for the training and validation of the proposed damage detection model for the cable-stayed bridge. Upon completion, the model predicts the damaged cable and its cross-sectional area $A_d$ from the 10 cable tensile forces $S = \{T_1, T_2, T_3, \ldots, T_{10}\}$.
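Purely as an illustration of the bookkeeping summarized in Table 1 (the structural analysis itself is performed by PAAP and is not reproduced here), the single-cable damage cases could be enumerated as in the sketch below; run_paap_analysis is a hypothetical placeholder for the external solver.

import numpy as np

N_CABLES = 40
A_INTACT = 3846.5                       # intact cross-sectional area in mm^2
ALPHAS = np.arange(0.0, 1.0, 0.01)      # elemental area reduction parameter, 100 values

def run_paap_analysis(damaged_cable, damaged_area):
    """Hypothetical stand-in for the PAAP simulation: returns the tensile forces
    T_1..T_10 of the ten observed cables under the bridge self-weight."""
    raise NotImplementedError

samples = []
for cable in range(1, N_CABLES + 1):            # one damaged cable per case
    for alpha in ALPHAS:
        a_damaged = (1.0 - alpha) * A_INTACT    # A_d = (1 - alpha) * A_i
        tensions = run_paap_analysis(cable, a_damaged)
        samples.append({"tensions": tensions,          # model input (10 values)
                        "damaged_cable": cable,        # target for the classification task
                        "area_ratio": 1.0 - alpha})    # target for the regression task
# 40 cables x 100 values of alpha = 4000 samples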

4. Proposed Method for Damage Assessment

In this section, we describe MPNN for damage assessment of the cable-stayed bridge. We present the specific MPNN configuration and show how to apply the proposed multi-task learning to identify the location of the damaged cable created in Section 3 and the cross-sectional area of the corresponding cable.

4.1. Configuration of the Proposed Network

We define the graph vertex features $x_v$ with the tensile forces of the 10 cables. The edge feature $e_{uv}$ is defined with the thresholded Gaussian kernel [52], as presented in the equation below, using the XYZ coordinates of the nodes on the girder connected to the 10 cables:
$$e_{uv} = \begin{cases} \exp\left( -\dfrac{\mathrm{dist}(u, v)^2}{\sigma^2} \right), & \text{if } \mathrm{dist}(u, v) < \gamma \\ 0, & \text{otherwise}, \end{cases}$$
where we set the threshold $\gamma$ to 0.1 and $\sigma$ is the standard deviation of the distances. Since we define vertices and edges only by tension and distance, respectively, the dimensions of the vertex and edge features are both 1. We embed the vertex features $x_v$, representing the tensions, with a single fully connected hidden layer with the ReLU activation function. The embedded vertex state is updated with the message function $M_t$ in Equation (19) and the update function $U_t$ in Equation (21). The hyperparameters of the network we tune in the message-passing step include the vertex embedding dimension, the number of iterations of the message-passing step, and the hidden state dimension. We also tune the number of LSTM layers of the set2set model used for global pooling in the readout function $R$, as well as the number of processing iterations, which is another hyperparameter of the set2set model. We add a fully connected hidden layer with the ReLU activation function and the same number of neurons as the vertex embedding. The predictions for the target data are generated in two output layers, each of 20 linear units. We describe the two outputs in the next section.
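As an illustration of this edge construction, the following sketch computes thresholded Gaussian kernel weights from node coordinates; the coordinates are random placeholders (assumed already normalized so that the 0.1 threshold is meaningful), not the actual girder-node positions.

import numpy as np

def gaussian_kernel_edges(coords, gamma=0.1):
    """Thresholded Gaussian kernel edge weights from node XYZ coordinates.

    coords: (n, 3) array of node positions. Returns an (n, n) weight matrix;
    zero entries mean "no edge"."""
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sigma = dist[np.triu_indices(len(coords), k=1)].std()   # std of the pairwise distances
    weights = np.exp(-dist ** 2 / sigma ** 2)
    weights[dist >= gamma] = 0.0                            # apply the threshold gamma
    np.fill_diagonal(weights, 0.0)                          # no self-loops
    return weights

coords = np.random.rand(10, 3)            # 10 sensor nodes with placeholder coordinates
W = gaussian_kernel_edges(coords)
edges = np.argwhere(W > 0)                # (u, v) index pairs used as graph edges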

4.2. Multi-Task Learning on MPNN

The target data used to determine the cable health of the cable-stayed bridge are the damaged cable location and the damage degree (i.e., the cross-sectional area $A_d$). We therefore adopt multi-task learning so that MPNN learns both tasks effectively. The advantage of multi-task learning is that, by predicting multiple tasks simultaneously, related tasks can be learned more efficiently. Learning to predict the cross-sectional area of the damaged cable and learning to classify the damaged cable at the same time thus improves learning efficiency. As shown in Figure 3, the proposed MPNN has outputs for task1 and task2, which are the classification of the damaged cable and the prediction of its cross-sectional area, respectively. The first task is classification, and the second task is a prediction on continuous data (i.e., regression). Therefore, we utilize the cross-entropy loss function for task1 and the mean absolute error loss function for task2, defined as follows:
$$L_{\mathrm{task1}} = -\sum_i D_i \log \frac{\exp\left( \hat{D}_i \right)}{\sum_j \exp\left( \hat{D}_j \right)}$$
$$L_{\mathrm{task2}} = \left| A_d - \hat{A} \cdot M \right|,$$
where $D_i$ is the target label for the case in which the $i$-th cable is damaged, $A_d$ is the target cross-sectional area of the single damaged cable in the range 0.0 to 0.99, and $\hat{A} \in \mathbb{R}^{40}$ is the vector output by the network for the second task. We define the mask $M \in \mathbb{R}^{40}$ as a vector in which the element corresponding to the index of the damaged cable is 1 and all other elements are 0. In the training phase, the position of the 1 in the mask $M$ is the index of the actual damaged cable. $L_{\mathrm{task2}}$ is the error between the cross-sectional area of the damaged cable, $A_d$, and the dot product of $\hat{A}$ and $M$; the loss for task2 is therefore computed only on the damaged cable. In the test phase, the mask $M$ is created from the classification output of the network, and $\hat{A} \cdot M$ is then the estimated cross-sectional area of the cable that the network classified as damaged. We define the total loss $L_{\mathrm{total}}$ by combining $L_{\mathrm{task1}}$ and $L_{\mathrm{task2}}$ as follows:
$$L_{\mathrm{total}} = L_{\mathrm{task1}} + \frac{1}{\left\| M \right\|_1} L_{\mathrm{task2}}.$$
The total loss $L_{\mathrm{total}}$ is the sum of the task1 (classification) loss and the task2 (regression) loss scaled by the L1-norm of the mask $M$.
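A minimal sketch of this masked multi-task loss in PyTorch is shown below; the mask construction from the ground-truth label corresponds to the training phase described above, and the tensor shapes follow the 40-cable setting.

import torch
import torch.nn.functional as F

def multitask_loss(class_logits, area_pred, damaged_idx, area_target):
    """Cross-entropy for the damaged-cable class (task1) plus a masked
    mean absolute error for the cross-sectional area (task2).

    class_logits: (batch, 40) raw scores; area_pred: (batch, 40) area outputs;
    damaged_idx: (batch,) index of the damaged cable; area_target: (batch,) A_d."""
    loss_task1 = F.cross_entropy(class_logits, damaged_idx)

    mask = F.one_hot(damaged_idx, num_classes=40).float()    # M: 1 at the damaged cable
    masked_area = (area_pred * mask).sum(dim=1)              # A_hat . M
    loss_task2 = (area_target - masked_area).abs().mean()

    scale = mask.sum(dim=1).mean()        # ||M||_1, which is 1 in the single-damage setting
    return loss_task1 + loss_task2 / scale

logits, areas = torch.randn(8, 40), torch.rand(8, 40)
loss = multitask_loss(logits, areas, torch.randint(0, 40, (8,)), torch.rand(8))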

5. Performance Evaluation

To evaluate the proposed model introduced in Section 4, we generate data for the cable-stayed bridge with damaged cables through PAAP, as described in Section 3. We preprocess the generated data for model training and optimize the MPNN model. Then, we train the proposed MPNN and validate the prediction outcomes. We also train an MLP and compare it with the MPNN results. The input to the MLP consists only of the ten cable tension values. The MLP has four hidden layers with ReLU activation, and a dropout layer is added to each hidden layer. Similar to MPNN, the MLP output layer generates 80 predictions for the two tasks. Additionally, we compare the results with those of a machine learning technique, XGBoost. We also compare multi-task learning with networks performing only one task. The number of network outputs for the damaged cable classification (task1) is 40, and the loss function is the cross-entropy shown in Equation (26). The number of network outputs for the area estimation of damaged cables (task2) is 1, and the loss function is the mean absolute error presented in Equation (27).

5.1. Data Preprocessing and Optimization

As mentioned in Section 3, we generate data for 4000 cases. The input data are the cable-stayed bridge data represented as a graph, as described in Section 4, and the target data include the index of the damaged cable, labeled between 1 and 40, and its cross-sectional area. The cross-sectional area of the damaged cable is $(1 - \alpha)$, which lies between 0.0 (broken state) and 0.99, where $\alpha$ is the elemental area reduction parameter defined in Section 3. We scale the vertex feature values (the tensile forces) between 0 and 1 as follows:
$$T' = \frac{T - T_{\min}}{T_{\max} - T_{\min}}.$$
We divide the data in a 6:1:3 ratio into a training set of 2400 samples, a validation set of 400 samples, and a test set of 1200 samples.
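A short sketch of this preprocessing step (min-max scaling of the tensions and the 6:1:3 split) is given below; the random seed and the random tension table are illustrative placeholders.

import numpy as np

rng = np.random.default_rng(0)                    # illustrative fixed seed
tensions = rng.random((4000, 10))                 # placeholder for the 4000 x 10 tension table

t_min, t_max = tensions.min(axis=0), tensions.max(axis=0)
scaled = (tensions - t_min) / (t_max - t_min)     # T' = (T - T_min) / (T_max - T_min)

idx = rng.permutation(4000)
train_idx, val_idx, test_idx = np.split(idx, [2400, 2800])   # 2400 / 400 / 1200 samples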
Table 2 presents the ranges of hyperparameters and selected optimal values for each model. We select the best hyperparameters in the validation set using Tree-structured Parzen Estimators (TPE) [53] with 20 trials. Moreover, we terminate trials with poor performance using Asynchronous Successive Halving Algorithm (ASHA) [54]. We specify the hyperparameters of MPNN in Section 4.1. The hyperparameters of MLP are the number of hidden neurons in each layer and the dropout rate. We optimize the hyperparameters that determine the network structure, batch size, and learning rate. We perform the hyperparameter optimization individually for each of the 5 cases and models.
We utilize the Adam optimizer [55] and train the MPNN model to minimize the loss function defined in Equation (28). We set the number of epochs to 1000 and decay the learning rate chosen by the hyperparameter optimization by a factor of 0.995 per epoch. We use PyTorch and the Deep Graph Library (DGL) on a single NVIDIA GeForce RTX 2080 Ti GPU for network implementation and optimization. We train the MLP model with the same settings as the MPNN model.
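The text does not name the software used for the TPE sampler and ASHA pruner; as one possible realization (an assumption, not the authors' exact setup), the search and the Adam training loop with the 0.995 learning-rate decay could be organized as follows, shown here with Optuna and a toy stand-in model on random data.

import torch
import torch.nn as nn
import optuna   # assumption: Optuna provides both a TPE sampler and an ASHA-style pruner

x = torch.rand(2400, 10)                    # placeholder inputs (scaled tensions)
y = torch.randint(0, 40, (2400,))           # placeholder damaged-cable labels

def objective(trial):
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)            # searched hyperparameters
    hidden = trial.suggest_categorical("hidden", [32, 64, 128])
    model = nn.Sequential(nn.Linear(10, hidden), nn.ReLU(), nn.Linear(hidden, 40))
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.995)  # x0.995 per epoch
    for epoch in range(50):                                          # shortened from 1000 epochs
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()
        trial.report(loss.item(), epoch)
        if trial.should_prune():                                     # ASHA stops poor trials early
            raise optuna.TrialPruned()
    return loss.item()

study = optuna.create_study(direction="minimize",
                            sampler=optuna.samplers.TPESampler(),
                            pruner=optuna.pruners.SuccessiveHalvingPruner())
study.optimize(objective, n_trials=20)       # 20 trials, as in the experiment
print(study.best_params)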

5.2. Results

In this section, we report the results of the deep learning network for the test set. We examine the accuracy to evaluate the damaged cable classification performance. We employ the mean absolute error (MAE), the root mean squared error (RMSE), and the correlation coefficient between target data and output data as measures to compare the cross-sectional area prediction. MAE, RMSE, and correlation coefficient are defined as follows.
$$\mathrm{MAE} = \frac{\sum_{i}^{n} \left| y_i - \hat{y}_i \right|}{n}$$
$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i}^{n} \left( y_i - \hat{y}_i \right)^2}{n}}$$
$$\text{Correlation coefficient} = \frac{\sum_{i}^{n} \left( y_i - \bar{y} \right)\left( \hat{y}_i - \bar{\hat{y}} \right)}{\sqrt{\sum_{i}^{n} \left( y_i - \bar{y} \right)^2} \sqrt{\sum_{i}^{n} \left( \hat{y}_i - \bar{\hat{y}} \right)^2}},$$
where $n$ is the number of samples, and $y_i$, $\hat{y}_i$, $\bar{y}$, and $\bar{\hat{y}}$ are the target, the output, the average of the targets, and the average of the outputs, respectively. The lower the MAE and RMSE and the higher the correlation coefficient, the better the performance.
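The three measures above are straightforward to compute; a short NumPy sketch is given for reference.

import numpy as np

def regression_metrics(y, y_hat):
    """MAE, RMSE, and Pearson correlation coefficient between targets and outputs."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    mae = np.mean(np.abs(y - y_hat))
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))
    corr = np.corrcoef(y, y_hat)[0, 1]      # equivalent to the normalized-covariance formula above
    return mae, rmse, corr

print(regression_metrics([0.2, 0.5, 0.9], [0.25, 0.45, 0.88]))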
Table 3 summarizes the results of MPNN, MLP, and XGBoost. Compared with MLP and XGBoost, MPNN always achieves higher classification accuracy and correlation and a lower error for the cross-sectional area estimation. Besides, the classification accuracy of MLP drops to 93.58% depending on the sensor layout, whereas MPNN is more stable, with an accuracy of over 98.33% in all 5 cases. For the cross-sectional area prediction, MPNN is also better and more stable than MLP and XGBoost. Meanwhile, the multi-task learning performance is similar to that of single-task learning, in which each task is trained individually. However, with multi-task learning we need to train the network only once, whereas single-task training multiplies the training time by the number of tasks. Therefore, multi-task learning is efficient because it learns multiple tasks simultaneously while achieving performance similar to learning single tasks.
Figure 5 shows scatter plots showing the relationships between the predicted values and actual values for the cross-sectional area estimation of the damaged cables. As shown in Figure 5a,b, which are the results of MLP, since many points deviate enormously from the straight line, especially in cases 2, 4, and 5, we confirm that the errors in the prediction of the cross-sectional area are considerable. However, in the scatter plots of MPNN shown in Figure 5c,d, the data points are closer to the straight line than MLP for all cases. For the classification analysis, in the multi-task learning results Figure 5a,c, we confirm that the red points, which are misclassified data, are mainly concentrated when the cross-sectional area is close to 1. It appears that the smaller the damage, the more likely the damaged cable will be misclassified.
Figure 6 shows the histogram of correctly classified data and incorrectly classified data for varying cross-sectional areas. We observe that in all four network results, the correctly classified data (blue) are, in general, evenly distributed, while the misclassified data (red) are skewed toward cross-sectional areas close to 1. For more precise verification, we divide the cross-sectional area range into intervals of 0.1 and calculate the classification accuracy of the data in each interval.
Table 4 presents the accuracies according to the cross-sectional areas. When the cross-sectional area is less than 0.9, the accuracy of MLP and XGBoost is between 81% and 100%. When the cross-sectional area is more than 0.9, the classification performance of MLP drops to 50%, and its best accuracy is 79.59% in case 5. In contrast, when the cross-sectional area is less than 0.9, the accuracy of MPNN is over 99.2%, and in both multi-task learning and single-task learning, the accuracy in all cases is almost 100%. When the cross-sectional area is more than 0.9, the accuracy of MPNN is at least 73.47% and at most 91.84%. When the loss of the cable cross-sectional area is small, the accuracy of MPNN decreases slightly, but MPNN still classifies damaged cables considerably more reliably than MLP and XGBoost. Besides, for each cross-sectional area range, neither the multi-task learning method nor the single-task learning method consistently outperforms the other in all cases.
Figure 7 shows the confusion matrix of MPNN combining all 5 cases. Since there are only a few misclassified samples, we highlight them with an orange shade. We observe that the misclassified cable tends to be located close to the actually damaged cable. For example, when the actual labels are 4, 7, 13, and 14, the predicted labels are 3, 6, 14, 16, and 15, respectively. These cables are located next to each other.
Figure 8 shows a histogram of the distances between the sensors of the actually damaged cables and the cables incorrectly classified by the network for all 5 cases, to illustrate the spatial relationship between the actual labels and the predicted labels in more detail. Of the 75 incorrectly classified samples, 17 have a distance between the actual damaged cable and the predicted damaged cable of only 12,200, which is the distance between adjacent cables. Therefore, if the proposed method is applied to an actual bridge, we recommend that the cables on both sides of the classified cable also be checked to avoid more significant damage.
The tensions used as input data are measured on only ten cables, whereas the proposed technique assesses damage to all 40 cables. Therefore, we need to compare the predictions for the ten cables with tension data and the 30 cables without tension data. Table 5 shows the results for damaged cables with sensors and damaged cables without sensors in all cases.
We notice that the performance gap between MPNN and MLP/XGBoost is most pronounced when a cable without a sensor is damaged. When a cable with a sensor is damaged, the classification accuracy is higher than when a cable without a sensor is damaged. The regression performance shows a similar pattern, except in cases 1, 2, and 3. In cases 1, 2, and 3, on the contrary, the MLP and MPNN results are better when a cable without a sensor is damaged. This seems to be related to the sensor positions. As seen in Figure 4, unlike cases 4 and 5, the spacing of the sensors in cases 1, 2, and 3 is always less than five cables. In cases 1, 2, and 3, since the sensors are distributed evenly, we observe that even if a cable without a sensor is damaged, performance degradation does not appear, especially in the regression task. Table 6 shows more detailed results for case 3: the classification results (accuracy, precision, recall, and F1 score) and the regression results (MAE, RMSE, and correlation coefficient) of the cross-sectional area for each damaged cable.
The accuracy, precision, recall, and F1 score are calculated as follows.
accuracy = ( T P + T N ) / ( T P + T N + F P + F N )
precision = ( T P ) / ( T P + F P )
recall = ( T P ) / ( T P + F N )
F 1 = ( 2 T P ) / ( 2 T P + F P + F N ) ,
where $TN$ is true negative, $TP$ true positive, $FN$ false negative, and $FP$ false positive. The rows shaded in green indicate the ten cables with tension data, for which all four classification measures, including accuracy, precision, recall, and F1 score, are 1.00. For each measure, values in the lower 5% of performance appear in red, and values in the upper 5% of performance appear in blue. We observe that cable 25 and cable 33 have the lowest precisions, which can be interpreted as a relatively high probability of misclassification among the samples for which MPNN predicts that cable 15 and cable 24 are damaged in multi-task learning. Cable 14 has the lowest recall in both multi-task learning and single-task learning. Therefore, we can interpret that when cable 14 is damaged, MPNN is relatively likely to predict that another cable is damaged. Also, the accuracy and F1 score of cable 14 and cable 15 are the lowest in multi-task learning. In addition, cable 14 has lower classification performance metric values in both multi-task learning and single-task learning. The cables mentioned so far are all sensorless cables in case 3. Unlike the classification, the cross-sectional area prediction seems to be mostly unrelated to the use of tension data. For example, the regression performance is excellent even for damaged cables 17 and 9, which do not have any tension data. Therefore, estimating the cross-sectional area of a single damaged cable is less related to the tension sensor positions than the classification.

5.3. Discussion

We have shown that MPNN can successfully assess cable damage and outperform MLP. When the remaining cross-sectional area of the damaged cable is less than 0.9, MPNN always classifies the damaged cable more accurately than MLP. However, when the damage is negligible, i.e., the remaining cross-sectional area is 0.9 or more, the classification accuracy slightly deteriorates. Once we improve the deep learning network to work more accurately for bridge structure data with minor damage, we expect the overall accuracy to approach 100%. Cables misclassified by MPNN are often located right next to the actually damaged cables. We can utilize these MPNN misclassification trends to update the algorithm and training process. However, since MPNN has reached 98% or even higher accuracy, achieving sufficiently satisfactory results, we believe that MPNN has potential as an SHM technology. Also, we have shown that the multi-task learning technique achieves performance similar to that of individually trained single-task networks while training only one network. Therefore, it is possible to evaluate the bridge condition in several ways using only one network, and it is worth adding further tasks beyond predicting the cross-sectional area of the cable in the future.

5.3.1. Contribution

We confirmed that MPNN has a higher overall performance than MLP and XGBoost. In particular, MPNN differs significantly in performance from MLP and XGBoost when a cable without a sensor is damaged. Since MPNN can process spatial information between sensors, it appears that damage to cables without sensors can be estimated more successfully. MPNN has the advantage of being able to transmit information along the connectivity relationships between sensors through its message-passing structure. Also, by adding a readout function, MPNN produces a graph-level output from the node information, making it possible to predict the state of all 40 cables effectively. In this paper, we captured the spatial correlation by considering the sensor geometry with MPNN. Moreover, we showed that two tasks (classifying damaged cables and estimating the cross-sectional area) could be trained efficiently using multi-task learning. Besides, we proposed a loss function using a mask so that the area of the damaged cable could be estimated more successfully.

5.3.2. Limitation

In this study, we could represent the sensor geometry as a graph, but we did not consider mechanical properties such as the material type or Young's modulus of the structure, since it is difficult to define the relationship between two sensors when several types of materials lie between them. Besides, it may be challenging to learn the behavior of the entire structure with the proposed method, because the full topology of the bridge cannot be deduced from only a few sensors, which may necessitate the installation of a sufficiently large number of sensors. However, this cannot always be satisfied due to cost constraints. Therefore, to understand the condition of the entire bridge with only a small number of sensors, we fundamentally need to examine the influence of the sensor locations and apply it to enhance the model.

5.3.3. Extension To Multiple Damaged Cables

In this study, we assess the GNN-based SHM technology under the assumption that only one cable is damaged. To generate more realistic data similar to a real-world bridge, we need to simulate several damaged cables. We can apply the proposed technique by transforming the single-label classification into a multi-label classification problem, even when the number of damaged cables is unknown. A straightforward approach is to replace the cross-entropy loss function with a loss function used for multi-label classification, such as binary cross-entropy. However, the threshold for deciding how many cables to classify as damaged must then be set appropriately, which is an essential component. If the target data, which indicate the cross-sectional areas of the actual damaged cables, are represented as a vector of dimension $w$, where $w$ is the number of damaged cables, we can use the mask as in Equation (27) and compute the loss for the regression task. If we multiply the 40 outputs of the network for task2 by a $40 \times w$ mask matrix, in which each column is a one-hot vector representing the location of one damaged cable, we obtain the cross-sectional areas of the damaged cables. As before, the mask is created from the actual labels in the training step and from the predicted labels in the test step. The total loss is obtained by combining the loss for the classification task and the loss for the regression task scaled by the L1-norm of the mask. We do not want the regression task to be weighted more heavily than the classification task as the number of damaged cables increases, and scaling the regression loss with the mask prevents this. Therefore, even with multiple damaged cables, we can still apply the proposed method as a multi-task learning approach, as sketched below. As discussed above, we will review the conditions for making the cable-stayed bridge model similar to a real-world bridge and improve our technique so that it can be applied to real-world bridges.
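The following is a minimal sketch of how this multi-label extension could look (an assumption about one possible implementation, not released code). For compactness, the mask is kept as a 40-dimensional binary vector per sample, which is equivalent to the 40 x w one-hot matrix described above.

import torch
import torch.nn.functional as F

def multilabel_multitask_loss(class_logits, area_pred, damage_mask, area_targets):
    """Multi-label variant: several cables may be damaged simultaneously.

    class_logits, area_pred: (batch, 40) network outputs for task1 and task2.
    damage_mask: (batch, 40) binary matrix with 1 where a cable is damaged.
    area_targets: (batch, 40) remaining area ratios (only damaged entries are used)."""
    # Task1: multi-label classification with binary cross-entropy per cable.
    loss_task1 = F.binary_cross_entropy_with_logits(class_logits, damage_mask)

    # Task2: mean absolute error computed only on the damaged cables,
    # scaled by ||M||_1 (the total number of damaged entries).
    abs_err = (area_pred - area_targets).abs() * damage_mask
    loss_task2 = abs_err.sum() / damage_mask.sum().clamp(min=1.0)

    return loss_task1 + loss_task2

mask = torch.zeros(4, 40)
mask[torch.arange(4), torch.randint(0, 40, (4,))] = 1.0    # toy example: one damaged cable each
loss = multilabel_multitask_loss(torch.randn(4, 40), torch.rand(4, 40), mask, torch.rand(4, 40))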

6. Conclusions

In this paper, we defined the sensor data as a graph composed of vertex and edge features. We proposed a damage assessment method for a cable-stayed bridge that applies this graph representation to MPNN. We used the tension data of only 10 cables to increase the practicality of the experiment. It is challenging to assess the conditions of all cables with only a limited number of sensors. Nevertheless, MPNN successfully estimated the damage of the cable-stayed bridge. We adopted multi-task learning to enable MPNN to learn two tasks efficiently: locating damaged cables and predicting their cross-sectional areas. The performance of MPNN is better than that of the MLP trained for comparison. MPNN classified damaged cables more reliably than MLP, not only when the cable is completely broken and has zero area, but also when the damage is relatively small. Therefore, we presume that MPNN can detect damage at an early stage for structural maintenance. Furthermore, we can apply MPNN to actual bridge data when we have material information about the structural components. For example, we can train MPNN with cable-stayed bridge data simulated in PAAP under the same conditions as a real bridge and then use the pre-trained MPNN directly for prediction with real bridge data. Additionally, although we simulated only one damaged cable in this study, we will generate data with multiple damaged cables to train the network to consider more general real-world bridge cases. We also introduced an approach for conducting damage localization and severity assessment with the proposed method when several cables are damaged, as a future study. Our model is likely to be extended by applying additional data, such as node displacements and XYZ coordinates, as vertex features. Moreover, we can further expand the study by training MPNNs to predict structural damage, such as decreased stiffness, in addition to cable conditions.

Author Contributions

Conceptualization, Y.J.; data curation, V.-T.P. and S.-E.K.; formal analysis, H.S.; methodology, H.S.; project administration, Y.J.; software, H.S.; supervision, Y.J. and S.-E.K.; validation, H.S. and V.-T.P.; writing—original draft, H.S.; writing—review and editing, Y.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Basic Research Program through the National Research Foundation of Korea (NRF) funded by the MSIT under Grant 2019R1A4A1021702, and in part by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00242, Development of a Big Data Augmented Analysis Profiling Platform for Maximizing Reliability and Utilization of Big Data).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, S.; Li, H.; Liu, Y.; Lan, C.; Zhou, W.; Ou, J. SMC structural health monitoring benchmark problem using monitored data from an actual cable-stayed bridge. Struct. Control Health Monit. 2014, 21, 156–172. [Google Scholar] [CrossRef]
  2. Kordestani, H.; Xiang, Y.Q.; Ye, X.W.; Yun, C.B.; Shadabfar, M. Localization of damaged cable in a tied-arch bridge using Arias intensity of seismic acceleration response. Struct. Control Health Monit. 2020, 27, e2491. [Google Scholar] [CrossRef]
  3. Das, R.; Pandey, S.A.; Mahesh, M.; Saini, P.; Anvesh, S. Effect of dynamic unloading of cables in collapse progression through a cable stayed bridge. Asian J. Civ. Eng. 2016, 17, 397–416. [Google Scholar]
  4. Santos, J.P.; Crémona, C.; Orcesi, A.D.; Silveira, P. Multivariate statistical analysis for early damage detection. Eng. Struct. 2013, 56, 273–285. [Google Scholar] [CrossRef]
  5. Khan, S.; Yairi, T. A review on the application of deep learning in system health management. Mech. Syst. Signal Process. 2018, 107, 241–265. [Google Scholar] [CrossRef]
  6. Sun, L.; Shang, Z.; Xia, Y.; Bhowmick, S.; Nagarajaiah, S. Review of bridge structural health monitoring aided by big data and artificial intelligence: From condition assessment to damage detection. J. Struct. Eng. 2020, 146, 04020073. [Google Scholar] [CrossRef]
  7. Pathirage, C.S.N.; Li, J.; Li, L.; Hao, H.; Liu, W.; Ni, P. Structural damage identification based on autoencoder neural networks and deep learning. Eng. Struct. 2018, 172, 13–28. [Google Scholar] [CrossRef]
  8. Gu, J.; Gul, M.; Wu, X. Damage detection under varying temperature using artificial neural networks. Struct. Control Health Monit. 2017, 24, e1998. [Google Scholar] [CrossRef]
  9. Truong, T.T.; Dinh-Cong, D.; Lee, J.; Nguyen-Thoi, T. An effective deep feedforward neural networks (DFNN) method for damage identification of truss structures using noisy incomplete modal data. J. Build. Eng. 2020, 30, 101244. [Google Scholar] [CrossRef]
  10. Chang, C.M.; Lin, T.K.; Chang, C.W. Applications of neural network models for structural health monitoring based on derived modal properties. Measurement 2018, 129, 457–470. [Google Scholar] [CrossRef]
  11. Abdeljaber, O.; Avci, O.; Kiranyaz, S.; Gabbouj, M.; Inman, D.J. Real-time vibration-based structural damage detection using one-dimensional convolutional neural networks. J. Sound Vib. 2017, 388, 154–170. [Google Scholar] [CrossRef]
  12. Azimi, M.; Eslamlou, A.D.; Pekcan, G. Data-Driven Structural Health Monitoring and Damage Detection through Deep Learning: State-of-the-Art Review. Sensors 2020, 20, 2778. [Google Scholar] [CrossRef] [PubMed]
  13. Cha, Y.J.; Choi, W.; Büyüköztürk, O. Deep learning-based crack damage detection using convolutional neural networks. Comput.-Aided Civ. Infrastruct. Eng. 2017, 32, 361–378. [Google Scholar] [CrossRef]
  14. Modarres, C.; Astorga, N.; Droguett, E.L.; Meruane, V. Convolutional neural networks for automated damage recognition and damage type identification. Struct. Control Health Monit. 2018, 25, e2230. [Google Scholar] [CrossRef]
  15. Gao, Y.; Mosalam, K.M. Deep transfer learning for image-based structural damage recognition. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 748–768. [Google Scholar] [CrossRef]
  16. Kim, B.; Cho, S. Image-based concrete crack assessment using mask and region-based convolutional neural network. Struct. Control Health Monit. 2019, 26, e2381. [Google Scholar] [CrossRef]
  17. Thai, H.T.; Kim, S.E. Practical advanced analysis software for nonlinear inelastic analysis of space steel structures. Adv. Eng. Softw. 2009, 40, 786–797. [Google Scholar] [CrossRef]
  18. Ngo-Huu, C.; Nguyen, P.C.; Kim, S.E. Second-order plastic-hinge analysis of space semi-rigid steel frames. Thin-Walled Struct. 2012, 60, 98–104. [Google Scholar] [CrossRef]
  19. Nguyen, P.C.; Kim, S.E. Nonlinear inelastic time-history analysis of three-dimensional semi-rigid steel frames. J. Constr. Steel Res. 2014, 101, 192–206. [Google Scholar] [CrossRef]
  20. Truong, V.; Kim, S.E. An efficient method for reliability-based design optimization of nonlinear inelastic steel space frames. Struct. Multidiscip. Optim. 2017, 56, 331–351. [Google Scholar] [CrossRef]
  21. Thai, H.T.; Kim, S.E. Second-order inelastic analysis of cable-stayed bridges. Finite Elem. Anal. Des. 2012, 53, 48–55. [Google Scholar] [CrossRef]
  22. Kim, S.E.; Thai, H.T. Nonlinear inelastic dynamic analysis of suspension bridges. Eng. Struct. 2010, 32, 3845–3856. [Google Scholar] [CrossRef]
  23. Kim, S.E.; Thai, H.T. Second-order inelastic analysis of steel suspension bridges. Finite Elem. Anal. Des. 2011, 47, 351–359. [Google Scholar] [CrossRef]
  24. Dai, K.; Li, A.; Zhang, H.; Chen, S.E.; Pan, Y. Surface damage quantification of postearthquake building based on terrestrial laser scan data. Struct. Control Health Monit. 2018, 25, e2210. [Google Scholar] [CrossRef]
  25. Farahani, B.V.; Barros, F.; Sousa, P.J.; Cacciari, P.P.; Tavares, P.J.; Futai, M.M.; Moreira, P. A coupled 3D laser scanning and digital image correlation system for geometry acquisition and deformation monitoring of a railway tunnel. Tunn. Undergr. Space Technol. 2019, 91, 102995. [Google Scholar] [CrossRef]
  26. Sajedi, S.O.; Liang, X. Vibration-based semantic damage segmentation for large-scale structural health monitoring. Comput.-Aided Civ. Infrastruct. Eng. 2020, 35, 579–596. [Google Scholar] [CrossRef]
  27. Olsen, M.J.; Kuester, F.; Chang, B.J.; Hutchinson, T.C. Terrestrial laser scanning-based structural damage assessment. J. Comput. Civ. Eng. 2010, 24, 264–272. [Google Scholar] [CrossRef]
  28. Suchocki, C.; Jagoda, M.; Obuchovski, R.; Šlikas, D.; Sužiedelytė-Visockienė, J. The properties of terrestrial laser system intensity in measurements of technical conditions of architectural structures. Metrol. Meas. Syst. 2018, 25, 779–792. [Google Scholar]
  29. Song, M.; Yousefianmoghadam, S.; Mohammadi, M.E.; Moaveni, B.; Stavridis, A.; Wood, R.L. An application of finite element model updating for damage assessment of a two-story reinforced concrete building and comparison with lidar. Struct. Health Monit. 2018, 17, 1129–1150. [Google Scholar] [CrossRef]
  30. Li, Y.; Yu, R.; Shahabi, C.; Liu, Y. Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting. In Proceedings of the 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  31. Yu, B.; Yin, H.; Zhu, Z. Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, Stockholm, Sweden, 13–19 July 2018; Lang, J., Ed.; pp. 3634–3640. [Google Scholar] [CrossRef] [Green Version]
  32. Monti, F.; Bronstein, M.; Bresson, X. Geometric matrix completion with recurrent multi-graph neural networks. Adv. Neural Inf. Process. Syst. 2017. Available online: https://arxiv.org/abs/1704.06803 (accessed on 1 April 2021).
  33. Ying, R.; He, R.; Chen, K.; Eksombatchai, P.; Hamilton, W.L.; Leskovec, J. Graph Convolutional Neural Networks for Web-Scale Recommender Systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018, London, UK, 19–23 August 2018; Guo, Y., Farooq, F., Eds.; ACM: New York, NY, USA, 2018; pp. 974–983. [Google Scholar] [CrossRef] [Green Version]
  34. Gilmer, J.; Schoenholz, S.S.; Riley, P.F.; Vinyals, O.; Dahl, G.E. Neural Message Passing for Quantum Chemistry. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, Australia, 6–11 August 2017; Precup, D., Teh, Y.W., Eds.; Volume 70, pp. 1263–1272. [Google Scholar]
  35. Marcheggiani, D.; Titov, I. Encoding Sentences with Graph Convolutional Networks for Semantic Role Labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, 9–11 September 2017; Palmer, M., Hwa, R., Riedel, S., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2017; pp. 1506–1515. [Google Scholar] [CrossRef]
  36. Peng, H.; Li, J.; He, Y.; Liu, Y.; Bao, M.; Wang, L.; Song, Y.; Yang, Q. Large-scale hierarchical text classification with recursively regularized deep graph-cnn. In Proceedings of the 2018 World Wide Web Conference, Lyon, France, 23–27 April 2018; pp. 1063–1072. [Google Scholar]
  37. Li, S.; Niu, J.; Li, Z. Novelty detection of cable-stayed bridges based on cable force correlation exploration using spatiotemporal graph convolutional networks. Struct. Health Monit. 2021. [Google Scholar] [CrossRef]
  38. Caruana, R. Multitask learning. Mach. Learn. 1997, 28, 41–75. [Google Scholar] [CrossRef]
  39. Yang, Y.B.; Shieh, M.S. Solution method for nonlinear problems with multiple critical points. AIAA J. 1990, 28, 2110–2116. [Google Scholar] [CrossRef]
  40. Thai, H.T.; Kim, S.E. Nonlinear static and dynamic analysis of cable structures. Finite Elem. Anal. Des. 2011, 47, 237–246. [Google Scholar] [CrossRef]
  41. Rumpf, H. The characteristics of systems and their changes of state disperse. In Particle Technology, Chapman and Hall; Springer: Berlin/Heidelberg, Germany, 1990; pp. 8–54. [Google Scholar]
  42. Chen, W.F.; Kim, S.E. LRFD Steel Design Using Advanced Analysis; CRC Press: Boca Raton, FL, USA, 1997; Volume 13. [Google Scholar]
  43. Kim, S.E.; Kim, M.K.; Chen, W.F. Improved refined plastic hinge analysis accounting for strain reversal. Eng. Struct. 2000, 22, 15–25. [Google Scholar] [CrossRef]
  44. Cho, K.; van Merrienboer, B.; Gülçehre, Ç.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, Doha, Qatar, 25–29 October 2014; pp. 1724–1734. [Google Scholar] [CrossRef]
  45. Vinyals, O.; Bengio, S.; Kudlur, M. Order Matters: Sequence to sequence for sets. In Proceedings of the 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, 2–4 May 2016; Conference Track Proceedings. Bengio, Y., LeCun, Y., Eds.; 2016. [Google Scholar]
  46. Nazarian, E.; Ansari, F.; Zhang, X.; Taylor, T. Detection of tension loss in cables of cable-stayed bridges by distributed monitoring of bridge deck strains. J. Struct. Eng. 2016, 142, 04016018. [Google Scholar] [CrossRef]
  47. Zhang, L.; Qiu, G.; Chen, Z. Structural health monitoring methods of cables in cable-stayed bridge: A review. Measurement 2021, 168, 108343. [Google Scholar] [CrossRef]
  48. Zhang, J.; Au, F. Effect of baseline calibration on assessment of long-term performance of cable-stayed bridges. Eng. Fail. Anal. 2013, 35, 234–246. [Google Scholar] [CrossRef]
  49. Ho, H.N.; Kim, K.D.; Park, Y.S.; Lee, J.J. An efficient image-based damage detection for cable surface in cable-stayed bridges. NDT E Int. 2013, 58, 18–23. [Google Scholar] [CrossRef]
  50. Hassona, F.; Hashem, M.D.; Abdelmalak, R.I.; Hakeem, B.M. Bumps at bridge approaches: Two case studies for bridges at El-Minia Governorate, Egypt. In International Congress and Exhibition “Sustainable Civil Infrastructures: Innovative Infrastructure Geotechnology”; Springer: Berlin/Heidelberg, Germany, 2017; pp. 265–280. [Google Scholar]
  51. Thai, H.T.; Kim, S.E. Large deflection inelastic analysis of space trusses using generalized displacement control method. J. Constr. Steel Res. 2009, 65, 1987–1994. [Google Scholar] [CrossRef]
  52. Shuman, D.I.; Narang, S.K.; Frossard, P.; Ortega, A.; Vandergheynst, P. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process. Mag. 2013, 30, 83–98. [Google Scholar] [CrossRef] [Green Version]
  53. Bergstra, J.; Bardenet, R.; Bengio, Y.; Kégl, B. Algorithms for hyper-parameter optimization. In Proceedings of the 25th Annual Conference on Neural Information Processing Systems (NIPS 2011), Neural Information Processing Systems Foundation, Granada, Spain, 12–14 December 2011; Volume 24. [Google Scholar]
  54. Li, L.; Jamieson, K.G.; Rostamizadeh, A.; Gonina, E.; Ben-tzur, J.; Hardt, M.; Recht, B.; Talwalkar, A. A System for Massively Parallel Hyperparameter Tuning. In Proceedings of the Machine Learning and Systems 2020, MLSys 2020, Austin, TX, USA, 2–4 March 2020. [Google Scholar]
  55. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
Figure 1. Cable-stayed bridge model in this study (unit: m).
Figure 2. (a) GRU structure and (b) LSTM structure.
Figure 3. MPNN architecture.
Figure 4. The five sensor layout cases.
Figure 5. Scatter plots of MLP and MPNN on the test set for estimating the cross-sectional areas of damaged cables in 5 cases. The x-axis is the actual cross-sectional area (target data), and the y-axis is the predicted cross-sectional area. Correctly classified data are indicated as blue points, and incorrectly classified data are shown as red points in multi-task learning. (a) MLP with multi-task learning, (b) MLP with single-task learning (regression), (c) MPNN with multi-task learning, (d) MPNN with single-task learning (regression).
Figure 6. Histograms of correctly classified data (blue) and misclassified data (red) according to the damaged cable cross-sectional areas: (a) MLP with single-task learning (classification), (b) MLP with multi-task learning, (c) MPNN with single-task learning (classification), and (d) MPNN with multi-task learning.
Figure 7. Confusion matrix.
Figure 8. Histogram of the distance between the original damaged cable and the misclassified cable over the misclassified data.
Table 1. Data generating procedure.
Step 1. Input the structural geometry and material configurations, and set the applied loads.
Step 2. Generate $M$ samples $C_1, C_2, \ldots, C_M$ of the 40-cable system, $C_i = \{A_1, A_2, \ldots, A_j, \ldots, A_{40}\}$, where $A_j$ is the cross-sectional area of the damaged cable $j$ in sample $i$, determined as shown in Equation (24).
Step 3. Calculate the tensions of the 10 observed cables $S_i = \{T_1, T_2, \ldots, T_k, \ldots, T_{10}\}$ mentioned in Section 3.2 corresponding to the sample $C_i$ using PAAP, where $T_k$ is the measured tension of cable $k$ in sample $i$.
Step 4. Save the input and output data to result files.
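To make the procedure in Table 1 concrete, a minimal Python sketch of the sampling loop is given below. The helper run_paap_analysis, the single-damaged-cable-per-sample assumption, the observed-cable indices, and the uniform sampling of the reduced area are illustrative assumptions standing in for Equation (24) and the actual PAAP run; they are not the exact implementation used in this study.

```python
import numpy as np

NUM_CABLES = 40                                   # stay cables in the bridge model
OBSERVED = [0, 4, 8, 12, 16, 20, 24, 28, 32, 36]  # hypothetical indices of the 10 sensored cables
A_INTACT = 1.0                                    # normalized intact cross-sectional area


def run_paap_analysis(areas):
    """Stand-in for the PAAP nonlinear analysis (Step 3).
    Returns a dummy tension vector so the sketch runs end to end."""
    return 100.0 * areas / areas.sum()


def generate_dataset(num_samples, seed=0):
    rng = np.random.default_rng(seed)
    tensions, labels = [], []
    for _ in range(num_samples):
        areas = np.full(NUM_CABLES, A_INTACT)        # Step 2: start from intact cables
        damaged = int(rng.integers(NUM_CABLES))      # choose the damaged cable
        areas[damaged] = rng.uniform(0.0, A_INTACT)  # reduced area (placeholder for Eq. (24))
        t = run_paap_analysis(areas)                 # Step 3: tensions of all cables
        tensions.append(t[OBSERVED])                 # keep only the 10 observed tensions
        labels.append((damaged, areas[damaged]))     # Step 4: damaged cable id and its area
    return np.array(tensions), labels
```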
Table 2. Hyperparameter optimization. The optimal values for multi-task learning, regression learning, and classification learning are separated by commas and appear in that order. For XGBoost, which was evaluated only with single-task learning, the two values listed per case are for regression and classification.
Optimal values per case:
Model | Hyperparameter | Range | Case 1 | Case 2 | Case 3 | Case 4 | Case 5
MPNN | Batch size | [16, 32, 64, 128] | 16, 16, 16 | 32, 32, 16 | 32, 16, 32 | 16, 16, 16 | 16, 16, 32
MPNN | Learning rate | [0.00001∼0.001] | 0.00095, 0.00018, 0.00078 | 0.00029, 0.00081, 0.00047 | 0.00070, 0.00043, 0.00076 | 0.00036, 0.00016, 0.00040 | 0.00030, 0.00092, 0.00098
MPNN | Vertex embedding dim | [8, 16, 32, 64, 128] | 32, 128, 128 | 128, 32, 128 | 128, 128, 128 | 64, 128, 64 | 64, 64, 128
MPNN | Hidden state dims | [8, 16, 32, 64, 128] | 16, 16, 32 | 32, 64, 16 | 8, 8, 16 | 8, 64, 16 | 16, 8, 8
MPNN | # of message passing steps | [3, 4, 5, 6] | 3, 5, 5 | 6, 4, 3 | 6, 6, 5 | 5, 6, 5 | 4, 4, 3
MPNN | # of set2set computations | [1, 2, 3, 4, 5] | 5, 2, 1 | 1, 5, 1 | 4, 5, 2 | 1, 4, 1 | 4, 5, 5
MPNN | # of LSTM layers | [1, 2, 3] | 2, 3, 3 | 2, 2, 2 | 2, 1, 2 | 3, 1, 2 | 3, 1, 2
MLP | Batch size | [16, 32, 64, 128] | 32, 64, 128 | 32, 16, 8 | 16, 128, 16 | 32, 32, 32 | 128, 32, 128
MLP | Learning rate | [0.00001∼0.001] | 0.00038, 0.00015, 0.00054 | 0.00003, 0.00044, 0.00029 | 0.00017, 0.00013, 0.00014 | 0.00049, 0.00015, 0.00031 | 0.00025, 0.00044, 0.00072
MLP | # of hidden neurons in hidden layer 1 | [32, 64, 128, 256, 512, 1024, 2048] | 1024, 256, 1024 | 1024, 32, 512 | 1024, 512, 512 | 64, 128, 128 | 2048, 256, 512
MLP | # of hidden neurons in hidden layer 2 | [32, 64, 128, 256, 512, 1024, 2048] | 512, 64, 512 | 2048, 64, 512 | 256, 2048, 2048 | 32, 1024, 1024 | 2048, 128, 512
MLP | # of hidden neurons in hidden layer 3 | [32, 64, 128, 256, 512, 1024, 2048] | 2048, 512, 64 | 2048, 512, 2048 | 256, 256, 1024 | 128, 256, 2048 | 1024, 512, 32
MLP | # of hidden neurons in hidden layer 4 | [32, 64, 128, 256, 512, 1024, 2048] | 128, 2048, 512 | 1024, 32, 128 | 256, 64, 1024 | 1024, 128, 64 | 256, 128, 2048
MLP | Dropout rate | [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7] | 0.1, 0.3, 0.3 | 0.2, 0.1, 0.1 | 0.5, 0.2, 0.2 | 0.1, 0.2, 0.2 | 0.2, 0.1, 0.1
XGBoost | Minimum sum of instance weight | [1, 5, 10] | 1, 5 | 1, 5 | 1, 10 | 1, 1 | 1, 1
XGBoost | gamma | [0.5, 1, 1.5, 2, 5] | 0.5, 1 | 2, 1 | 1, 0.5 | 2, 0.5 | 0.5, 0.5
XGBoost | Subsample ratio of the training instance | [0.6, 0.8, 1.0] | 0.6, 1.0 | 0.8, 1.0 | 1.0, 1.0 | 0.6, 1.0 | 0.8, 0.6
XGBoost | Subsample ratio of columns when constructing each tree | [0.6, 0.8, 1.0] | 0.6, 1.0 | 0.8, 0.6 | 0.8, 1.0 | 1.0, 0.8 | 0.6, 1.0
XGBoost | Maximum tree depth | [3, 4, 5] | 3, 5 | 3, 5 | 3, 5 | 4, 5 | 5, 5
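The ranges in Table 2 map directly onto a search-space definition for the hyperparameter optimizers cited in Refs. [53,54]. A minimal sketch using the hyperopt package is given below; the choice of hyperopt and the dummy objective (a placeholder for training and validating the MPNN) are assumptions for illustration only.

```python
from hyperopt import fmin, hp, tpe

# MPNN search space mirroring the ranges listed in Table 2.
space = {
    "batch_size": hp.choice("batch_size", [16, 32, 64, 128]),
    "learning_rate": hp.uniform("learning_rate", 1e-5, 1e-3),
    "vertex_embedding_dim": hp.choice("vertex_embedding_dim", [8, 16, 32, 64, 128]),
    "hidden_state_dim": hp.choice("hidden_state_dim", [8, 16, 32, 64, 128]),
    "message_passing_steps": hp.choice("message_passing_steps", [3, 4, 5, 6]),
    "set2set_computations": hp.choice("set2set_computations", [1, 2, 3, 4, 5]),
    "lstm_layers": hp.choice("lstm_layers", [1, 2, 3]),
}


def objective(params):
    # Placeholder: train the MPNN with `params` and return the validation loss.
    # A dummy value is returned here only to keep the sketch runnable.
    return params["learning_rate"]


best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
print(best)
```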
Table 3. Results of XGBoost, MLP, and MPNN on the test set for the 5 cases. The best value for each case is noted in bold. (MTL: multi-task learning, Single: single-task learning, Class: classification, Reg: regression).
Model | Task | Metric | Case 1 (MTL / Single) | Case 2 (MTL / Single) | Case 3 (MTL / Single) | Case 4 (MTL / Single) | Case 5 (MTL / Single)
XGBoost | Class | Acc (%) | - / 97.25 | - / 97.08 | - / 97.5 | - / 97.83 | - / 98.5
XGBoost | Reg | MAE | - / 0.1518 | - / 0.1451 | - / 0.1422 | - / 0.1506 | - / 0.1487
XGBoost | Reg | RMSE | - / 0.1837 | - / 0.1748 | - / 0.1713 | - / 0.1829 | - / 0.1806
XGBoost | Reg | Corr | - / 0.9093 | - / 0.9253 | - / 0.9062 | - / 0.9084 | - / 0.8815
MLP | Class | Acc (%) | 98.33 / 98.08 | 93.58 / 95 | 97.17 / 94.17 | 94 / 95 | 97.08 / 98.08
MLP | Reg | MAE | 0.0249 / 0.0832 | 0.059 / 0.0369 | 0.0679 / 0.0166 | 0.0492 / 0.112 | 0.0408 / 0.0877
MLP | Reg | RMSE | 0.0327 / 0.1603 | 0.0827 / 0.0897 | 0.0788 / 0.0314 | 0.0613 / 0.1669 | 0.0611 / 0.1608
MLP | Reg | Corr | 0.9953 / 0.8321 | 0.9699 / 0.9541 | 0.9734 / 0.9944 | 0.9885 / 0.831 | 0.9807 / 0.8261
MPNN | Class | Acc (%) | 99.08 / 98.83 | 98.67 / 99.17 | 99.33 / 99.25 | 97.75 / 98.33 | 98.92 / 99.17
MPNN | Reg | MAE | 0.0093 / 0.009 | 0.005 / 0.008 | 0.0035 / 0.0028 | 0.0104 / 0.0265 | 0.007 / 0.0046
MPNN | Reg | RMSE | 0.0331 / 0.0433 | 0.0121 / 0.0282 | 0.0069 / 0.0066 | 0.0175 / 0.0843 | 0.0138 / 0.0199
MPNN | Reg | Corr | 0.9934 / 0.9884 | 0.9991 / 0.9951 | 0.9997 / 0.9997 | 0.9981 / 0.9552 | 0.9988 / 0.9976
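For reference, the four scores reported in Table 3 (damage-localization accuracy, and MAE, RMSE, and Pearson correlation of the predicted cross-sectional area) can be computed as in the following sketch; the input arrays are placeholders for the model outputs on the test set.

```python
import numpy as np


def report_scores(true_cable, pred_cable, true_area, pred_area):
    """Localization accuracy plus regression errors, as reported in Table 3."""
    true_area, pred_area = np.asarray(true_area), np.asarray(pred_area)
    acc = 100.0 * np.mean(np.asarray(true_cable) == np.asarray(pred_cable))
    err = pred_area - true_area
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    corr = float(np.corrcoef(true_area, pred_area)[0, 1])
    return {"Acc (%)": acc, "MAE": mae, "RMSE": rmse, "Corr": corr}
```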
Table 4. Classification accuracies by cross-sectional area of the damaged cable for the 5 cases. For each cross-sectional area, blue indicates the most accurate result and red the least accurate.
Model | Area | Case 1 (MTL / Single) | Case 2 (MTL / Single) | Case 3 (MTL / Single) | Case 4 (MTL / Single) | Case 5 (MTL / Single)
XGBoost | 0.00∼0.69 | - / 99.88 | - / 99.88 | - / 100 | - / 100 | - / 100
XGBoost | 0.70∼0.79 | - / 100 | - / 100 | - / 100 | - / 100 | - / 100
XGBoost | 0.80∼0.89 | - / 100 | - / 100 | - / 100 | - / 100 | - / 100
XGBoost | 0.90∼0.99 | - / 67.35 | - / 65.31 | - / 69.39 | - / 73.47 | - / 81.63
MLP | 0.00∼0.69 | 100 / 100 | 97.9 / 99.42 | 100 / 99.42 | 98.36 / 100 | 99.42 / 100
MLP | 0.70∼0.79 | 100 / 100 | 95.87 / 99.17 | 99.17 / 99.17 | 95.04 / 100 | 95.87 / 100
MLP | 0.80∼0.89 | 100 / 100 | 90.4 / 92.8 | 96 / 88 | 88 / 84.8 | 96 / 99.2
MLP | 0.90∼0.99 | 79.59 / 76.53 | 57.14 / 54.08 | 71.43 / 50 | 62.24 / 58.16 | 79.59 / 77.55
MPNN | 0.00∼0.69 | 100 / 100 | 100 / 100 | 100 / 100 | 100 / 100 | 100 / 100
MPNN | 0.70∼0.79 | 100 / 100 | 100 / 100 | 100 / 100 | 100 / 100 | 100 / 100
MPNN | 0.80∼0.89 | 100 / 100 | 100 / 100 | 100 / 100 | 99.2 / 100 | 100 / 100
MPNN | 0.90∼0.99 | 88.78 / 85.71 | 83.67 / 89.8 | 91.84 / 90.82 | 73.47 / 79.59 | 86.73 / 89.8
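Table 4 recomputes the localization accuracy after grouping the test samples by the damaged cable's cross-sectional area. A sketch with the same bin edges is given below, assuming the area is expressed as a ratio of the intact area; the input arrays are placeholders.

```python
import numpy as np

BINS = [(0.00, 0.69), (0.70, 0.79), (0.80, 0.89), (0.90, 0.99)]  # area bins of Table 4


def accuracy_by_area(area_ratio, true_cable, pred_cable):
    """Localization accuracy (%) within each cross-sectional-area bin."""
    area_ratio = np.asarray(area_ratio)
    correct = np.asarray(true_cable) == np.asarray(pred_cable)
    out = {}
    for lo, hi in BINS:
        mask = (area_ratio >= lo) & (area_ratio <= hi)
        if mask.any():
            out[f"{lo:.2f}~{hi:.2f}"] = 100.0 * correct[mask].mean()
    return out
```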
Table 5. XGBoost, MLP, and MPNN results for the damaged cable with a sensor (Y) and the damaged cable without a sensor (N). Bold indicates the better result of the two.
Model | Task | Metric | Case 1 (Y / N) | Case 2 (Y / N) | Case 3 (Y / N) | Case 4 (Y / N) | Case 5 (Y / N)
XGBoost | Single-C | Acc (%) | 97.32 / 97.23 | 97.22 / 97.04 | 98.29 / 97.24 | 96.98 / 98.12 | 98.66 / 98.45
XGBoost | Single-R | MAE | 0.1462 / 0.1536 | 0.1508 / 0.1434 | 0.1359 / 0.1443 | 0.1489 / 0.1512 | 0.1437 / 0.1503
XGBoost | Single-R | RMSE | 0.1755 / 0.1863 | 0.1749 / 0.1748 | 0.1603 / 0.1747 | 0.1797 / 0.1839 | 0.1714 / 0.1836
XGBoost | Single-R | Corr | 0.9627 / 0.8929 | 0.9318 / 0.9288 | 0.9472 / 0.8964 | 0.9439 / 0.8954 | 0.9103 / 0.8737
MLP | MTL-C | Acc (%) | 100 / 97.78 | 99.65 / 91.67 | 96.25 / 97.46 | 100 / 92.02 | 100 / 96.12
MLP | MTL-R | MAE | 0.0207 / 0.0262 | 0.059 / 0.059 | 0.0756 / 0.0654 | 0.0496 / 0.0491 | 0.0386 / 0.0415
MLP | MTL-R | RMSE | 0.0256 / 0.0348 | 0.0717 / 0.0859 | 0.0874 / 0.0758 | 0.059 / 0.062 | 0.0445 / 0.0657
MLP | MTL-R | Corr | 0.9974 / 0.9947 | 0.9858 / 0.9651 | 0.9634 / 0.9766 | 0.9978 / 0.9856 | 0.9924 / 0.977
MLP | Single-C | Acc (%) | 100 / 97.45 | 98.96 / 93.75 | 96.93 / 93.27 | 100 / 93.35 | 100 / 97.45
MLP | Single-R | MAE | 0.0345 / 0.0993 | 0.0266 / 0.0402 | 0.0297 / 0.0124 | 0.0737 / 0.1246 | 0.0292 / 0.1071
MLP | Single-R | RMSE | 0.0517 / 0.1825 | 0.0316 / 0.1013 | 0.0554 / 0.0177 | 0.0814 / 0.1867 | 0.0392 / 0.1842
MLP | Single-R | Corr | 0.9935 / 0.7773 | 0.9992 / 0.9398 | 0.9836 / 0.9983 | 0.9937 / 0.764 | 0.9979 / 0.7689
MPNN | MTL-C | Acc (%) | 99.66 / 98.89 | 99.65 / 98.36 | 100 / 99.12 | 99.66 / 97.12 | 100 / 98.56
MPNN | MTL-R | MAE | 0.0098 / 0.0092 | 0.0059 / 0.0047 | 0.0024 / 0.0038 | 0.0063 / 0.0117 | 0.0058 / 0.0074
MPNN | MTL-R | RMSE | 0.057 / 0.0196 | 0.0201 / 0.0081 | 0.0048 / 0.0075 | 0.0118 / 0.019 | 0.012 / 0.0143
MPNN | MTL-R | Corr | 0.9823 / 0.9977 | 0.9976 / 0.9996 | 0.9999 / 0.9997 | 0.9992 / 0.9978 | 0.9991 / 0.9987
MPNN | Single-C | Acc (%) | 99.33 / 98.67 | 100 / 98.9 | 100 / 99.01 | 99.33 / 98 | 99.33 / 99.11
MPNN | Single-R | MAE | 0.0041 / 0.0107 | 0.0038 / 0.0093 | 0.0019 / 0.0031 | 0.0089 / 0.0324 | 0.0017 / 0.0055
MPNN | Single-R | RMSE | 0.0216 / 0.0484 | 0.0175 / 0.0308 | 0.0043 / 0.0071 | 0.0545 / 0.092 | 0.0087 / 0.0225
MPNN | Single-R | Corr | 0.9971 / 0.9856 | 0.9983 / 0.9942 | 0.9999 / 0.9997 | 0.9828 / 0.945 | 0.9995 / 0.997
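The Y/N split of Table 5 only requires grouping the test samples by whether the damaged cable carries a tension sensor. A minimal sketch follows; the sensored-cable set is a placeholder for the layout of each case.

```python
import numpy as np

SENSORED = {0, 4, 8, 12, 16, 20, 24, 28, 32, 36}  # hypothetical 10-sensor layout


def split_by_sensor(damaged_cable, *result_arrays):
    """Split result arrays into (with sensor, without sensor) groups,
    according to whether the true damaged cable is instrumented."""
    has_sensor = np.isin(np.asarray(damaged_cable), list(SENSORED))
    with_sensor = tuple(np.asarray(a)[has_sensor] for a in result_arrays)
    without_sensor = tuple(np.asarray(a)[~has_sensor] for a in result_arrays)
    return with_sensor, without_sensor
```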
Table 6. Results of MPNN for each cable in Case 3. Rows shaded in green indicate the ten cables with tension data; values in the lower 5% of performance appear in red, and values in the upper 5% appear in blue.
Cable | MTL-Classification (Acc, Prec, Recall, F1) | MTL-Regression (MAE, RMSE, Corr) | Single-Classification (Acc, Prec, Recall, F1) | Single-Regression (MAE, RMSE, Corr)
1 | 1, 1, 1, 1 | 0.0029, 0.0095, 0.99953 | 1, 1, 1, 1 | 0.0007, 0.0009, 0.99999
2 | 1, 1, 1, 1 | 0.0035, 0.0037, 0.99998 | 1, 1, 1, 1 | 0.0011, 0.0022, 0.99997
3 | 1, 1, 1, 1 | 0.005, 0.0054, 0.99997 | 1, 1, 1, 1 | 0.0009, 0.0014, 0.99998
4 | 0.97, 1, 0.97, 0.99 | 0.0033, 0.004, 0.99996 | 0.97, 1, 0.97, 0.99 | 0.005, 0.0091, 0.99944
5 | 1, 0.97, 1, 0.98 | 0.0036, 0.0038, 0.99999 | 1, 0.94, 1, 0.97 | 0.0056, 0.0124, 0.99955
6 | 1, 1, 1, 1 | 0.0018, 0.0023, 0.99998 | 1, 1, 1, 1 | 0.0011, 0.0017, 0.99998
7 | 1, 1, 1, 1 | 0.0035, 0.0117, 0.9993 | 1, 1, 1, 1 | 0.0033, 0.008, 0.99971
8 | 1, 1, 1, 1 | 0.0014, 0.0021, 0.99997 | 1, 1, 1, 1 | 0.0023, 0.0045, 0.9999
9 | 1, 1, 1, 1 | 0.0022, 0.0062, 0.99977 | 1, 1, 1, 1 | 0.0008, 0.0014, 0.99999
10 | 1, 1, 1, 1 | 0.004, 0.0103, 0.99951 | 1, 1, 1, 1 | 0.001, 0.0017, 0.99999
11 | 1, 1, 1, 1 | 0.0017, 0.002, 0.99999 | 1, 1, 1, 1 | 0.0021, 0.0065, 0.99979
12 | 1, 1, 1, 1 | 0.0017, 0.0025, 0.99997 | 1, 1, 1, 1 | 0.001, 0.002, 0.99998
13 | 0.96, 1, 0.96, 0.98 | 0.0023, 0.0035, 0.99993 | 0.96, 1, 0.96, 0.98 | 0.0022, 0.0052, 0.99989
14 | 0.91, 1, 0.91, 0.95 | 0.0024, 0.0031, 0.99998 | 0.91, 1, 0.91, 0.95 | 0.0115, 0.0194, 0.9989
15 | 1, 0.89, 1, 0.94 | 0.0028, 0.0035, 0.99998 | 1, 0.89, 1, 0.94 | 0.0065, 0.0118, 0.99917
16 | 1, 1, 1, 1 | 0.002, 0.0024, 0.99999 | 1, 1, 1, 1 | 0.0012, 0.0022, 0.99997
17 | 1, 1, 1, 1 | 0.0012, 0.0014, 1 | 1, 1, 1, 1 | 0.0017, 0.003, 0.99995
18 | 1, 1, 1, 1 | 0.0027, 0.0074, 0.99972 | 1, 1, 1, 1 | 0.0031, 0.0067, 0.9998
19 | 1, 1, 1, 1 | 0.0028, 0.0112, 0.99943 | 1, 1, 1, 1 | 0.0024, 0.0063, 0.99984
20 | 1, 1, 1, 1 | 0.0016, 0.0038, 0.99992 | 1, 1, 1, 1 | 0.0026, 0.0037, 0.99993
21 | 1, 1, 1, 1 | 0.0008, 0.0014, 0.99999 | 1, 1, 1, 1 | 0.0019, 0.0051, 0.99989
22 | 1, 1, 1, 1 | 0.0045, 0.0049, 0.99993 | 1, 1, 1, 1 | 0.0019, 0.0048, 0.99989
23 | 0.97, 1, 0.97, 0.99 | 0.0054, 0.0061, 0.99994 | 0.97, 1, 0.97, 0.99 | 0.0026, 0.0034, 0.99993
24 | 1, 0.95, 1, 0.98 | 0.0014, 0.0018, 0.99998 | 1, 1, 1, 1 | 0.0015, 0.0025, 0.99997
25 | 1, 0.97, 1, 0.98 | 0.0028, 0.0043, 0.99991 | 0.97, 0.97, 0.97, 0.97 | 0.0037, 0.0069, 0.99977
26 | 1, 1, 1, 1 | 0.003, 0.0046, 0.99992 | 1, 1, 1, 1 | 0.003, 0.006, 0.99979
27 | 1, 1, 1, 1 | 0.0017, 0.0024, 0.99996 | 1, 1, 1, 1 | 0.0015, 0.0036, 0.99992
28 | 1, 1, 1, 1 | 0.0039, 0.0043, 0.99999 | 1, 1, 1, 1 | 0.0031, 0.0062, 0.99981
29 | 1, 1, 1, 1 | 0.0024, 0.003, 0.99997 | 1, 1, 1, 1 | 0.0021, 0.0044, 0.99992
30 | 1, 1, 1, 1 | 0.0018, 0.0027, 0.99996 | 1, 1, 1, 1 | 0.0018, 0.0038, 0.99993
31 | 1, 0.97, 1, 0.99 | 0.0049, 0.0081, 0.99981 | 1, 1, 1, 1 | 0.0022, 0.0044, 0.99993
32 | 0.97, 1, 0.97, 0.98 | 0.0025, 0.0055, 0.99988 | 0.97, 1, 0.97, 0.98 | 0.0034, 0.0068, 0.99985
33 | 1, 1, 1, 1 | 0.0065, 0.0072, 0.99995 | 1, 0.97, 1, 0.98 | 0.0043, 0.0077, 0.99974
34 | 1, 1, 1, 1 | 0.0031, 0.004, 0.99992 | 1, 1, 1, 1 | 0.0047, 0.006, 0.99995
35 | 1, 1, 1, 1 | 0.0223, 0.027, 0.9958 | 1, 0.97, 1, 0.99 | 0.0081, 0.0127, 0.9991
36 | 1, 1, 1, 1 | 0.0027, 0.003, 0.99998 | 1, 1, 1, 1 | 0.0019, 0.004, 0.99992
37 | 1, 1, 1, 1 | 0.0019, 0.0024, 0.99997 | 1, 1, 1, 1 | 0.0013, 0.0017, 0.99999
38 | 0.97, 1, 0.97, 0.98 | 0.0044, 0.0053, 0.99994 | 0.97, 1, 0.97, 0.98 | 0.001, 0.0016, 0.99998
39 | 1, 1, 1, 1 | 0.0023, 0.0026, 0.99997 | 1, 1, 1, 1 | 0.0015, 0.0024, 0.99997
40 | 1, 1, 1, 1 | 0.0029, 0.0032, 0.99994 | 1, 1, 1, 1 | 0.0026, 0.0035, 0.99989
Q5 | 0.97, 0.97, 0.97, 0.98 | 0.00139, 0.00178, 0.99942 | 0.96, 0.95, 0.96, 0.97 | 0.0009, 0.0014, 0.99917
Q95 | 1, 1, 1, 1 | 0.00545, 0.01122, 0.99999 | 1, 1, 1, 1 | 0.00658, 0.01241, 0.99999
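Each row of Table 6 reports classification scores for one cable treated as the positive class, together with regression errors presumably restricted to the samples in which that cable is damaged; Q5 and Q95 are the 5th and 95th percentiles of each column over the 40 cables. The sketch below shows the classification part with scikit-learn (an assumed tool, not necessarily the one used by the authors); the regression part follows the same per-cable masking.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support


def per_cable_table(true_cable, pred_cable, num_cables=40):
    """One-vs-rest precision, recall, and F1 per cable, plus the Q5/Q95
    summary rows (5th and 95th percentiles over all cables) of Table 6."""
    prec, rec, f1, _ = precision_recall_fscore_support(
        true_cable, pred_cable, labels=list(range(num_cables)), zero_division=0)
    table = np.column_stack([prec, rec, f1])
    q5, q95 = np.percentile(table, [5, 95], axis=0)
    return table, q5, q95
```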
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
