Article

Longitudinal Displacement Reconstruction Method of Suspension Bridge End Considering Multi-Type Data Under Deep Learning Framework

1
Department of Civil Engineering, Chongqing Jiaotong University, Chongqing 400074, China
2
Guangxi Zhuang Autonomous Region Transportation Comprehensive Administrative Law Enforcement Bureau, Nanning 530000, China
3
School of Civil Engineering and Architecture, Northeast Electric Power University, Jilin 132012, China
4
CCCC CONSTRUCTION GROUP Co., Ltd., Chengdu 610036, China
5
Guangxi Road Construction Engineering Group Co., Ltd., Nanning 530000, China
6
Guangxi Communications Design Group Co., Ltd., Nanning 530029, China
*
Author to whom correspondence should be addressed.
Buildings 2025, 15(15), 2706; https://doi.org/10.3390/buildings15152706
Submission received: 14 April 2025 / Revised: 11 July 2025 / Accepted: 30 July 2025 / Published: 31 July 2025
(This article belongs to the Section Building Structures)

Abstract

Suspension bridges, as long-span bridges, usually exhibit large longitudinal displacement at the beam end (LDBD). LDBD can be used to evaluate the safety of components at the beam end. However, owing to factors such as sensor failure and system maintenance, LDBD records in a bridge health monitoring system are often missing. This study therefore reconstructs the missing part of LDBD based on the long short-term memory (LSTM) network and multiple types of data. Specifically, the monitoring data potentially related to LDBD in a suspension bridge are first analyzed, and temperature and beam end rotation angle data (RDBD) at representative locations are selected. Then, temperature data from different parts of the bridge are used as the input of the LSTM model to compare the prediction of LDBD. Next, RDBD is used as the model input to observe the prediction of LDBD. Finally, temperature and RDBD are used together as the model input to examine whether the prediction improves. The results show that, among the bridge locations considered, the temperature inside the box girder in the main span gives the best prediction when used as the model input; RDBD as the model input outperforms temperature as the model input; and using temperature and RDBD together as the model input yields higher prediction accuracy than using either separately.

1. Introduction

With the continuous increase in traffic load, the safety of bridges, as important and highly complex components of the transportation network, has received growing attention. The safety of beam end components (including dampers, expansion joints, bearings, etc.) is one concern. Under long-term external influences (mainly ambient temperature, followed by wind and traffic loads), the beam end of a bridge undergoes frequent reciprocating longitudinal displacements, which damage the beam end components [1,2,3]. Therefore, health monitoring of the beam end longitudinal displacement (LDBD) is particularly important for bridge safety assessment. However, when sensor failure, storage failure, transmission failure, or equipment inspection and replacement occurs, data loss follows [4,5]. Data loss is an inevitable problem of bridge health monitoring systems [6,7], and it is especially frequent in systems using wireless sensors [8,9,10]. When the data loss rate is too high, the health assessment of the structure cannot proceed normally, which may lead to serious consequences [11]. Therefore, this paper reconstructs the missing LDBD data.
This paper selects suspension bridges as the research object because suspension bridges are long-span bridges with greater flexibility, and their LDBD changes more dramatically. Guo et al. [12] compared the beam end displacements of a suspension bridge (Jiangyin Yangtze River Bridge, main span 1385 m) and a cable-stayed bridge (Sutong Bridge, main span 1088 m) in China and found that the fluctuation amplitude and daily accumulation of the former were one order of magnitude larger than those of the latter. Okuda et al. [13] reported that fatigue cracks appeared in the expansion joint connecting pins of the Akashi Kaikyo Bridge (suspension bridge, main span 1991 m) in Japan after 3 years of use. The expansion joints of the Jiangyin Bridge and Runyang Bridge in China also showed damage after 3–4 years of use. It can be seen that the damage caused by LDBD in suspension bridges is greater.
At present, many scholars have reconstructed the missing data of structural health monitoring systems. Niu Yanwei et al. [14] established a finite element model based on bridge health monitoring data and analyzed the mechanism of bridge deformation induced under different temperature fields. Li et al. [15] proposed two multi-scale finite element models to reconstruct structural responses. Zhong Guoqiang et al. [16] reconstructed the measured temperature field of an entire bridge based on a finite element model and limited measured temperature data. Wang Xu, Xie Guilin, and others [17] predicted the long-term vertical displacement of concrete bridges based on meteorological data and an optimized GRU model. However, repairing data with finite element models places extremely high requirements on model accuracy. When finite element modeling is performed on complex structures, it is difficult to accurately simulate the various uncertainties encountered in practice, such as changes in material properties, structural defects, and changes in boundary constraints during long-term service [18]. Therefore, data reconstruction with finite element models is applicable only to simple model structures, not to actual large-scale structures. By using intelligent algorithms such as neural networks, connections between different sensors can be established directly from historical monitoring data, eliminating the complex modeling process. Although the GRU model performs well in computing efficiency and ease of implementation, it is not as good as the LSTM model in long-range dependency capture, flexibility, empirical support, and adaptability. LSTM alleviates the vanishing and exploding gradient problems of RNNs by adding a gating (forgetting) mechanism; it has long-term memory and is well suited to time series analysis [19,20]. Compared with traditional time series methods (such as ARIMA [21]), LSTM is more stable and more effective at handling nonlinear problems [22].
In summary, this paper takes the suspension bridge as the research object. First, the monitoring data related to LDBD, such as temperature and beam end rotation angle data (RDBD), are analyzed and screened. Then, temperature data at different positions of the bridge are used as the input of the LSTM model to compare the prediction of LDBD. Next, RDBD is used as the model input to observe the prediction of LDBD; then, temperature and RDBD are used together as the model input to examine whether the prediction improves; finally, the LDBD prediction method is summarized.

2. Methodology

2.1. LSTM Module

LSTM is a type of recurrent neural network [23] widely used for processing sequential data. An LSTM cell is composed of four network layers. Unlike plain RNN cells, LSTM cells have a forget gate, an input gate, an internal state, and an output gate that control the cell state, giving LSTM both long-term and short-term memory. At time step $t$, the input of the LSTM cell is $x_t$, the forget gate is $f_t$, the input gate is $i_t$, the output gate is $o_t$, the cell state is $C_t$, and the hidden state output is $h_t$, which can be expressed as

$$f_t = \mathrm{sigmoid}(W_{fx} x_t + W_{fh} h_{t-1} + b_f)$$

$$i_t = \mathrm{sigmoid}(W_{ix} x_t + W_{ih} h_{t-1} + b_i)$$

$$\tilde{C}_t = \tanh(W_{Cx} x_t + W_{Ch} h_{t-1} + b_C)$$

$$C_t = f_t * C_{t-1} + i_t * \tilde{C}_t$$

$$o_t = \mathrm{sigmoid}(W_{ox} x_t + W_{oh} h_{t-1} + b_o)$$

$$h_t = o_t * \tanh(C_t)$$

where $W_{ix}$, $W_{fx}$, $W_{ox}$, and $W_{Cx}$ are the weight matrices multiplying the input $x_t$ in the input gate, forget gate, output gate, and candidate state, respectively; $b_i$, $b_f$, $b_o$, and $b_C$ are the corresponding bias vectors; $W_{ih}$, $W_{fh}$, $W_{oh}$, and $W_{Ch}$ are the weight matrices multiplying $h_{t-1}$, respectively; $\tilde{C}_t$ is the candidate cell state vector; and $*$ denotes the Hadamard (element-wise) product [24].
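As an illustration, the gate equations above can be sketched as a single forward step in NumPy (the dictionary-based weight layout and key names here are illustrative, not the paper's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, C_prev, W, b):
    """One LSTM cell step following the equations above.

    W holds keys 'fx','fh','ix','ih','Cx','Ch','ox','oh';
    b holds keys 'f','i','C','o'. Shapes: W[*x] is (n_h, n_x),
    W[*h] is (n_h, n_h), and each bias is (n_h,).
    """
    f_t = sigmoid(W['fx'] @ x_t + W['fh'] @ h_prev + b['f'])      # forget gate
    i_t = sigmoid(W['ix'] @ x_t + W['ih'] @ h_prev + b['i'])      # input gate
    C_tilde = np.tanh(W['Cx'] @ x_t + W['Ch'] @ h_prev + b['C'])  # candidate state
    C_t = f_t * C_prev + i_t * C_tilde                            # cell state update (Hadamard products)
    o_t = sigmoid(W['ox'] @ x_t + W['oh'] @ h_prev + b['o'])      # output gate
    h_t = o_t * np.tanh(C_t)                                      # hidden state output
    return h_t, C_t
```

In practice, such a step is iterated over the whole input sequence, carrying $h_t$ and $C_t$ forward between time steps.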

2.2. Long Short-Term Memory Network

The LSTM network is mainly composed of fully connected layers and LSTM layers, and a dropout layer is added after each LSTM layer to prevent overfitting during training and improve generalization. The LSTM network structure is shown in Figure 1. In the figure, the input of the LSTM cells is $x_t$ and the output is $y_t$; $C_t$ and $h_t$ are the cell state and hidden state outputs, respectively.

2.3. Neural Network Model Training Optimization Method

Network model training is actually the process of continuously optimizing the fitting function so that the model has generalization ability. Optimizing the fitting function involves building a suitable network framework and adjusting the framework parameters. Overfitting generally occurs during network model training. This study adopts the following methods to avoid overfitting in model training:
(1) Dataset Normalization
Because the collected data differ in amplitude and even in order of magnitude, feature extraction by the network model is hindered. Normalizing all data before inputting them into the model therefore reduces the dependence on initialization parameters [25]. The normalization formula is as follows:
$$x' = \frac{x - \min(x)}{\max(x) - \min(x)}$$
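For instance, the formula can be applied directly in NumPy (a minimal sketch; the normalized array is written as the return value):

```python
import numpy as np

def min_max_normalize(x):
    """Scale a 1-D array to [0, 1] using min-max normalization."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())
```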
(2) Add a dropout layer
The dropout layer randomly prevents some neural units from participating in learning during each training iteration, thus mitigating the model's overfitting problem [26]. Because each iteration trains a different thinned sub-network, dropout acts as a form of model averaging: no single feature is constantly enlarged or reduced, and all feature detectors jointly participate in feature extraction.
(3) Use optimizer
This study adopts the common Adam optimizer [27], which combines the advantages of the AdaGrad and RMSProp algorithms with high computational efficiency and low memory requirements. Adam's parameter updates are invariant to gradient rescaling, its hyperparameters are readily interpretable, and its step size is adjusted automatically, making it suitable for large-scale data and many-parameter scenarios.
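The Adam update rule the optimizer uses can be sketched as follows (a standard textbook form with Adam's usual default hyperparameters, not the authors' code):

```python
import numpy as np

def adam_step(theta, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update. state = (m, v, t) holds the moment estimates and step count."""
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grad       # biased first-moment estimate
    v = beta2 * v + (1 - beta2) * grad**2    # biased second-moment estimate
    m_hat = m / (1 - beta1**t)               # bias correction
    v_hat = v / (1 - beta2**t)
    # per-coordinate step size is scaled automatically by sqrt(v_hat)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)
```

Dividing by $\sqrt{\hat{v}_t}$ is what makes the update invariant to gradient rescaling, as noted above.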

2.4. Optimization Method of Model Parameters

Many parameters in deep learning network models determine whether the model is accurate. Most of these parameters can be automatically optimized during model training, but there are a few parameters, called “hyperparameters”, that need to be adjusted before training. Manually adjusting hyperparameters will greatly increase the time cost and rely too much on experience, resulting in limited accuracy of the adjustment results. Therefore, this study uses the Bayesian model [28,29,30] to automatically adjust the model parameters.

2.5. Validation

To evaluate the model's predictive ability, evaluation indicators are used [31]. These indicators objectively quantify the gap between the values predicted by the constructed model and the true values, and can also be used to compare the prediction accuracy of different models. This study uses four evaluation indicators: root-mean-square error (RMSE), squared correlation coefficient (SCC), mean absolute error (MAE), and the coefficient of determination (R2).
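For reference, the four indicators can be computed as below (a NumPy sketch; SCC is taken here as the square of the Pearson correlation coefficient, as its name suggests):

```python
import numpy as np

def rmse(y, yhat):
    """Root-mean-square error."""
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mae(y, yhat):
    """Mean absolute error."""
    return float(np.mean(np.abs(y - yhat)))

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def scc(y, yhat):
    """Squared Pearson correlation coefficient."""
    r = np.corrcoef(y, yhat)[0, 1]
    return float(r ** 2)
```

RMSE and MAE approach zero for a good model, while SCC and R2 approach one.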

3. Case Study

3.1. Analysis of Influencing Factors of Beam End Longitudinal Displacement (LDBD)

First, the beam end displacement of a suspension bridge is very sensitive to environmental conditions, especially changes in ambient temperature. Studies have shown that the temperature-induced structural response (TSR) sometimes exceeds the response caused by traffic or other loads [32]. The effect of temperature on the beam end displacement of a long-span bridge cannot be ignored [33]. Consequently, temperature sensors in the bridge health monitoring system can be used to predict LDBD.
Secondly, LDBD may be related to beam end rotation angle data (RDBD). The main beam of a suspension bridge can be regarded as a flexible cuboid, as shown in Figure 2. When the rotation angles at both ends change, the longitudinal displacement along the length direction also changes. Therefore, RDBD can plausibly be used to predict LDBD.

3.2. Source of Data

This study uses data from the structural health monitoring system of the Chongqing Wanzhou Fuma Bridge, located in the main urban area of Wanzhou. The main bridge is a single-span 1050 m steel box girder suspension bridge; the total length of the bridge is 2030 m, the standard deck width is 26.5 m, and the width of the main girder including the wind fairings is 32 m. Chongqing has a subtropical humid monsoon climate, with an average annual temperature of 16–18 °C; its climate is characterized by hot summers and cool autumns, warm winters and early springs, and four distinct seasons. The annual maximum temperature can exceed 40 °C, and the minimum can drop below zero. Owing to this large temperature range, the box girder deforms considerably, producing large changes in the LDBD of the bridge. Close monitoring of the beam end displacement is therefore required to check whether it stays within a reasonable range, so the stability and accuracy of the beam end displacement monitoring data are essential.
Table 1 shows the types, quantities, and corresponding representative symbols of the sensors selected in the structural health monitoring system. Figure 3 shows the arrangement position of using Table 1 symbols instead of sensors.

3.3. Initial Data

Due to the symmetry of the bridge, the LDBD at one position is selected for study to reduce the workload; this study uses the LDBD at the upstream south bank cable tower. Similarly, for the temperature sensors at the different monitoring locations, the one closest to the selected LDBD sensor is chosen as representative. Because the quality of the anchor-room temperature data is too poor, it is not studied. The selected temperature sensor numbers and positions are shown in Table 2. The RDBD sensor used is the one closest to the selected LDBD sensor. The raw data collected by each sensor are shown in Figure 4. It can be seen that the raw data exhibit a series of problems.
(1) Missing values: Because the sensors of the bridge health monitoring system operate outdoors for long periods, they are very susceptible to equipment failures caused by external or internal factors, interrupting data collection. As shown in Figure 4a,e,g, a flat segment suddenly appears in the curve, and the data in that period are missing;
(2) Abnormal values: Because the monitoring is long-term, the data follow a certain trend of change, but some points clearly depart from the overall trend and need to be eliminated. As shown in Figure 4a,e,g,i, a few data points in the curve suddenly drop, deviating from the overall trend.

3.4. Sample Data Preprocessing

(1) Data missing processing
For long runs of continuously missing data, filling them all is unrealistic and such filling would not be accurate, so these segments are discarded directly; short gaps are filled with a moving average with a window length of 30.
(2) Outlier processing
The 3σ criterion applies to large sample data and provides a clear criterion for judging outliers. When the data error exceeds the range of plus or minus 3 times the standard deviation, these data are considered outliers. Therefore, this study adopts this criterion to identify outliers. After deleting the outliers, the deleted positions are filled with the moving average method with a window length of 30.
(3) Data Frequency Unification
Since different types of sensors collect data at various frequencies, and the input and output of LSTM should correspond one to one, it is necessary to unify the frequency of data collection by different sensors to ensure that the time when the data is generated can correspond.
First, establish a timetable with a time interval of 10 min, and then interpolate the data collected by other sensors based on the time corresponding to this timetable. This study adopts the spline interpolation method, which can produce a smooth interpolation curve and better retain the overall characteristics of the data.
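The preprocessing steps above can be sketched with pandas (a simplified illustration: the paper uses spline interpolation, while linear interpolation is used here to keep the sketch dependency-free; the function name and defaults are hypothetical):

```python
import numpy as np
import pandas as pd

def preprocess(series, window=30, freq='10min'):
    """Sketch of the preprocessing: 3-sigma outlier removal,
    moving-average gap filling (window length 30), and resampling
    onto a common 10-minute timetable."""
    s = series.copy()
    # (2) 3-sigma rule: flag points farther than 3 standard deviations from the mean
    mu, sigma = s.mean(), s.std()
    s[(s - mu).abs() > 3 * sigma] = np.nan
    # (1) fill short gaps with a centered moving average of the valid neighbors
    s = s.fillna(s.rolling(window, min_periods=1, center=True).mean())
    # (3) unify the sampling frequency on a 10-minute timetable and interpolate
    s = s.resample(freq).mean().interpolate(method='linear')
    return s
```

With scipy installed, `interpolate(method='spline', order=3)` would match the paper's spline choice more closely.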
Figure 4b,d,f,h,j,l show the data obtained after preprocessing each sensor's data. It can be seen that the outliers and missing values are largely eliminated after processing.

3.5. Sample Data Denoising

3.5.1. Correlation Analysis and Evaluation Principle

Although the deep learning model is powerful, it cannot find the pattern between two unrelated data. Therefore, before putting the data into the deep learning model training, it is necessary to analyze the correlation coefficient between the beam end displacement and other sensor data to determine whether there is any correlation between the data.
The correlation evaluation model analyzes the correlation between the beam end displacement and temperature by linear regression. The fitted linear regression equation is

$$y = p_1 x + p_2$$

where $p_1$ and $p_2$ are the regression coefficients, $x$ is the independent variable, and $y$ is the dependent variable. The regression coefficients are obtained by the least squares method:

$$p_1 = \frac{\sum x_i y_i - \bar{x} \sum y_i}{\sum x_i^2 - \bar{x} \sum x_i}$$

$$p_2 = \bar{y} - p_1 \bar{x}$$

where $x_i$ and $y_i$ are the observed values of the independent variable $x$ and the dependent variable $y$, and $\bar{x}$ and $\bar{y}$ are their sample means.
The correlation coefficient R is used to verify the rationality of the calculated linear relationship between the independent variable x and the dependent variable y. The value of R can reflect the linear correlation between the independent and dependent variables, as shown in Table 3. It can be seen that when the absolute value of R is closer to 1, the correlation between the independent variable and the dependent variable is stronger. When the absolute value of R is closer to 0, the correlation between the independent and dependent variables is weaker.
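The regression coefficients and the correlation coefficient R can be computed directly from the formulas above (a NumPy sketch):

```python
import numpy as np

def fit_line(x, y):
    """Least-squares regression coefficients p1 (slope) and p2 (intercept)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    p1 = (np.sum(x * y) - x.mean() * np.sum(y)) / (np.sum(x**2) - x.mean() * np.sum(x))
    p2 = y.mean() - p1 * x.mean()
    return p1, p2

def correlation(x, y):
    """Pearson correlation coefficient R."""
    return float(np.corrcoef(x, y)[0, 1])
```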

3.5.2. Correlation When the Signal Is Not Denoised

The correlation between the temperature at different positions and the beam end displacement is evaluated using the above principle, as shown in Figure 5. It can be seen that the temperatures of A-T and B-T have the best correlation with L-D, with correlation coefficients above 0.960. Next, the temperatures of C-T and D-T correlate well with L-D, with correlation coefficients of about 0.820. Finally, R-D also correlates well with L-D: L-D decreases as the absolute value of R-D increases, which confirms the inference in Section 3.1 that the beam end rotation angle can be used to predict the beam end displacement.

3.5.3. Correlation When the Signal Is Denoised

The health monitoring system is affected by the uncertainty of the monitoring environment and is inevitably affected by various factors, which reduces the monitoring accuracy of the sensor [11]. As shown in Figure 6, although the LDBD data is relatively good overall, after magnification, it can be found that there are a lot of “burrs” in the data curve. These “burrs” are the noise in the sensor data collection process. This study uses wavelet denoising [34] to reduce the noise of the collected signal. Wavelet denoising is an upgrade of the traditional Fourier transform. Wavelet denoising has high efficiency, adaptability, robustness, sparsity, and multi-resolution analysis capabilities and can better identify signal characteristics of different scales and time intervals [35]. This study selects the wavelet of the Symlets series. The performance of the SymN wavelet is relatively comprehensive, with good orthogonality, biorthogonality, compact support, symmetry, etc. N represents the vanishing moment, and N is taken as 6 in this study. As shown in Figure 6, the displacement curve of the beam end after denoising by wavelet transform retains the change trend of the original data and becomes smooth.
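A wavelet-denoising step along these lines can be sketched with PyWavelets (a generic soft-threshold scheme with the sym6 wavelet; the universal threshold and decomposition level used here are common defaults, not necessarily the authors' settings):

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet='sym6', level=4):
    """Soft-threshold wavelet denoising with the sym6 wavelet (N = 6)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # estimate the noise scale from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))  # universal threshold
    # shrink detail coefficients; keep the approximation coefficients intact
    coeffs[1:] = [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(signal)]
```

Suppressing small detail coefficients removes the high-frequency "burrs" while preserving the overall trend of the displacement curve.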
As shown in Figure 7, the correlation between the temperature in each area and the beam end displacement is improved after noise reduction, and the distribution of the denoised data is more regular: the points lie on both sides of the linear fitting line and overlap to form a spiral curve.

4. Using Temperature as Input to the LSTM Model

4.1. Model Parameter Analysis

The preceding section analyzed the correlation between the temperature in different regions and the beam end displacement. The following uses the temperature of the above four areas to predict the beam end displacement. To make the model convincing, this study divides the temperature and beam end data into a training set, a validation set, and a test set in the ratio 7:1:2. The training set trains the LSTM model, the model parameters are optimized according to predictions on the validation set, and finally the test set is used to evaluate the model after parameter optimization.
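The 7:1:2 split can be sketched as follows (a minimal illustration; the split is chronological, with no shuffling, so the time order of the monitoring data is preserved):

```python
import numpy as np

def split_712(X, y):
    """Chronological 7:1:2 train/validation/test split."""
    n = len(X)
    n_train, n_val = int(0.7 * n), int(0.1 * n)
    return ((X[:n_train], y[:n_train]),                                # training set
            (X[n_train:n_train + n_val], y[n_train:n_train + n_val]),  # validation set
            (X[n_train + n_val:], y[n_train + n_val:]))                # test set
```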
The LSTM network structure, model training optimization method, and model parameter optimization method described in Section 2 are adopted. The specific LSTM parameters and operating environment settings are shown in Table 4.
Other parameters, such as the number of hidden layers, the number of neurons in the hidden layers, the batch size, the learning rate, and the threshold size of the drop layer, are optimized using Bayesian optimization. The specific implementation steps are as follows:
(1) Determine the parameters to be optimized and set the range of each parameter, which together form the search space. Bayesian optimization randomly generates a set of parameters in this space to train the LSTM model. After training, the error between the predicted and true values on the validation set is obtained (this error serves as the objective function of the Bayesian optimization). The error is fed back to the Gaussian process model, which is updated accordingly;
(2) Based on the updated Gaussian process model, the acquisition function proposes the next most promising parameter combination, and the LSTM model is trained with it to obtain a new error. The latest error again updates the Gaussian process model;
(3) Repeat the second step. After the set number of optimization rounds is completed, the parameter combination corresponding to the minimum validation error is obtained; this is taken as the near-optimal combination. Finally, the LSTM model is trained, validated, and tested with the optimized parameter combination.
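The loop described in steps (1)–(3) can be sketched with a Gaussian process surrogate and an expected improvement acquisition function (a toy one-dimensional version for illustration; a real setup would search several hyperparameters jointly and train the LSTM inside the objective):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def bayes_opt(objective, bounds, n_init=5, n_iter=20, seed=0):
    """Minimal Bayesian minimization of a scalar parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, n_init).reshape(-1, 1)   # step (1): random initial designs
    y = np.array([objective(x[0]) for x in X])       # validation errors at those designs
    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(n_iter):                          # step (3): repeat step (2)
        gp.fit(X, y)                                 # update the surrogate model
        cand = rng.uniform(lo, hi, 256).reshape(-1, 1)
        mu, std = gp.predict(cand, return_std=True)
        best = y.min()
        z = (best - mu) / (std + 1e-9)
        ei = (best - mu) * norm.cdf(z) + std * norm.pdf(z)  # expected improvement
        x_next = cand[np.argmax(ei)]                 # step (2): most promising candidate
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next[0]))
    return X[np.argmin(y), 0], y.min()
```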
According to the model input dimension and output dimension, the search space of the approximate range of the optimal parameters is set, as shown in Table 5.

4.2. Prediction Effect

4.2.1. Parameter Regulation

The model is established as described above. The denoised temperature data of the four regions of the suspension bridge from Section 3 and the LDBD at the upstream south bank beam end are used as the model input and output for training. Each round of Bayesian optimization trains the model for 50 iterations, and 200 rounds of Bayesian optimization are carried out to obtain the optimal parameter combination for each region's prediction, as shown in Table 6.

4.2.2. Model Training

The optimized parameters were used to set and train the model. After 100 iterations of training, the training error curve was obtained, as shown in Figure 8.
In Figure 8, both the training error curve and the validation error curve decline slowly and eventually level off, indicating that the training of each model has converged. However, the gap between the bottoms of the training and validation curves in Figure 8c,d is significantly larger than in Figure 8a,b, indicating that the regularity between the corresponding temperature and displacement is worse for the latter two than for the former two. Ranked from smallest to largest minimum validation error, the order is (b), (a), (d), (c); Figure 8b therefore shows the best training result.

4.2.3. Predicted Results

After training, the test sets of temperature in the different regions were used to predict LDBD, and the results are shown in Figure 9. As can be seen from the figure, LSTM can use the temperature of each part of the bridge to predict LDBD; however, the prediction accuracy differs depending on which part of the bridge the temperature is taken from. To facilitate comparison, the evaluation indicators of the different prediction results are summarized in Table 7.
Combining the meaning of the evaluation indicators mentioned in Section 2 and Table 7 shows that the best prediction effect is achieved by using the data of B-T to predict LDBD. The values of RMSE and MAE are as low as 0.033 and 0.024, respectively, which are closest to zero; the values of SCC and R2 are 0.963 and 0.958, respectively, which are very close to 1. It can be seen that the prediction effect of using the data of B-T to predict LDBD is very good; secondly, the prediction effect is good when the data of A-T is used to predict LDBD; finally, the prediction effect is relatively poor when the data of C-T and D-T are used to predict LDBD.

5. Using Beam End Rotation Data as Input to the LSTM Model

As shown in Section 3.1 of this article, if the main beam of a suspension bridge is regarded as a long flexible cuboid, the longitudinal length changes when the rotation angles at both ends change. However, the rotation at the end of the main beam is not limited to one direction: the bridge health monitoring system monitors both the horizontal and vertical rotation angles of the beam end. When the beam end rotates in the two directions, as shown in Figure 10, the longitudinal length of the beam is affected by the superposition of the two rotation angles.
Therefore, this section predicts LDBD from the rotation angles in these two directions. The prediction model was established using the method in Section 4, and the hyperparameters were adjusted using the same method, as shown in Table 8.
The training loss curves and prediction results obtained after 100 iterations, based on the parameter adjustment results in Table 8, are shown in Figure 11, and the evaluation indicators of this prediction are shown in Table 9. These predictions are better than the previous predictions using temperature as the model input. Although the correlations between the two RDBDs and LDBD were only −0.772 and 0.465, lower than the correlation between temperature and LDBD, the R2 and SCC of the predictions were as high as 0.970 and 0.973, respectively. This shows that the two RDBDs as model inputs can predict LDBD better.

6. Temperature and RDBD as Model Input

It can be seen from the above that both temperature and RDBD can be used to predict LDBD. Temperature causes expansion and contraction of the main beam, changing its length and thus LDBD; with the main beam dimensions unchanged, the beam end rotation angle changes the longitudinal length at the beam end. The influences of these two effects on LDBD may therefore be superimposed: temperature and the beam end rotation angle change LDBD by changing the length and the shape of the main beam, respectively. Consequently, in this study, the mid-span box girder temperature and RDBD are used together as the model inputs to predict LDBD.
The prediction model was established using the method in Section 4, and the hyperparameters were adjusted using the same method, as shown in Table 10:
The model is set using the above hyperparameters. After 100 training iterations, the training loss curve, prediction graph, and prediction result correlation graph are obtained, as shown in Figure 12. The model’s evaluation indicators are shown in Table 11.
As can be seen from Figure 12, the correlation coefficient of the prediction results reaches 0.996, which is very close to 1, indicating a very good prediction. Table 12 summarizes and compares the model evaluation indexes obtained with the temperature input, the beam end angle input, and both together. Using temperature and the beam end angle together as the model input improved the prediction accuracy by 3.210% and 1.923% relative to using temperature alone and the angle alone, respectively. This shows that combining the two as model inputs improves the prediction accuracy of LDBD compared with using either separately.

7. Comparison Validation

To verify the correctness of the above LDBD prediction method, the mid-span box girder temperature and RDBD data from the health monitoring system of the Xintian Yangtze River Bridge, also a suspension bridge, were selected. Their correlation with LDBD is analyzed first, as shown in Figure 13. The figure shows that the correlation characteristics of the mid-span box girder temperature and RDBD with LDBD are consistent with those observed for the preceding bridge.
The same method is used to predict LDBD, and the evaluation indexes corresponding to the prediction results under the different input conditions are summarized in Table 13. The table shows that B-T and RDBD in the bridge health monitoring system can also predict LDBD well, and that combining temperature with RDBD as the LSTM model input improves the prediction accuracy of LDBD. This verifies the reliability of the aforementioned LDBD prediction method.
The cases of the two suspension bridges show that the LDBD prediction error of the combined temperature-and-RDBD LSTM method in this paper is within 2%. For comparison, Hui Wang et al. [36] reported an LDBD prediction error of 8.1% using a full-scale test method; the error of the method in this paper is thus reduced by nearly 75%.

8. Conclusions

By analyzing the influencing factors of LDBD, RDBD and temperature data from different locations in the health monitoring system were selected. The original data were processed to improve data quality. Using the LSTM model, LDBD was predicted with different inputs, and a comparative analysis identified the optimal input for LDBD prediction. The main conclusions are as follows:
(1) Data denoising improves the correlation between temperature, RDBD, and LDBD, which facilitates model learning and training.
(2) When LDBD is predicted from temperature at different locations, the prediction performance ranks, from best to worst: the temperature inside the mid-span box girder (B-T), the ambient temperature at the mid-span guardrail (A-T), the temperature inside the saddle cover at the top of the upstream south bank tower (C-T), and the temperature inside the upstream south bank tower (D-T).
(3) Although the correlation between the RDBD in each direction and LDBD is much weaker than that between temperature and LDBD, using the RDBD in the two directions together as the model input predicts LDBD better than using temperature. This indicates that the RDBD data in the two different directions have a good superposition effect on the prediction of LDBD.
(4) The prediction accuracy with temperature and RDBD together as the model input is 3.210% and 1.923% higher than with temperature alone and RDBD alone, respectively.
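Conclusion (1) depends on denoising the monitoring signals before training; the study's wavelet-based processing follows [34,35]. As a minimal stdlib-only sketch, not the study's actual implementation, a single-level Haar wavelet decomposition with soft thresholding of the detail (noise) coefficients can be written as:

```python
def haar_denoise(signal, threshold):
    """Single-level Haar wavelet denoising with soft thresholding.

    Averages of sample pairs form the approximation (trend) coefficients;
    half-differences form the detail coefficients, which carry most of the
    high-frequency noise and are shrunk toward zero before reconstruction.
    """
    n = len(signal) - len(signal) % 2  # drop a trailing odd sample
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, n, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, n, 2)]

    def soft(d):  # soft thresholding shrinks small coefficients to zero
        if d > threshold:
            return d - threshold
        if d < -threshold:
            return d + threshold
        return 0.0

    out = []
    for a, d in zip(approx, (soft(d) for d in detail)):
        out.extend([a + d, a - d])  # inverse Haar transform
    return out

# High-frequency jitter around 0.1 is flattened out:
print(haar_denoise([0.0, 0.2, 0.0, 0.2], threshold=0.5))  # → [0.1, 0.1, 0.1, 0.1]
```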

Author Contributions

X.Y.: Methodology, Conceptualization, Validation, Funding acquisition. C.W.: Methodology, Software, Conceptualization, Writing—original draft. Y.Z.: Methodology, Writing—original draft. W.S.: Supervision, Funding acquisition, Writing—Review and Editing. L.C.: Methodology, Visualization, Writing—Review and Editing. K.K.: Visualization, Resources, Validation. Q.C.: Funding acquisition, Data curation, Validation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Chongqing Natural Science Foundation of China (Grant Nos. cstc2021jcyj-msxmX1168 and cstb2022nscq-msx1655), the Open Fund of the State Key Laboratory of Structural Dynamics of Bridge Engineering and the Key Laboratory of Bridge Structure Seismic Technology for Transportation Industry (Grant Nos. 202205 and 202105), and the Open Fund of the State Key Laboratory of Mountain Bridge and Tunnel Engineering (Grant Nos. SKLBT-ZD2102 and SKLBT-19-007).

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Wencai Shao was employed by the company CCCC CONSTRUCTION GROUP Co., Ltd. Author Linyuan Chang was employed by the company Guangxi Road Construction Engineering Group Co., Ltd. Author Kaige Kong was employed by the company Guangxi Communications Design Group Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

1. Huang, H.B.; Yi, T.H.; Li, H.N.; Liu, H. New representative temperature for performance alarming of bridge expansion joints through temperature-displacement relationship. J. Bridge Eng. 2018, 23, 04018043.
2. Xia, Q.; Xia, Y.; Wan, H.P.; Zhang, J.; Ren, W.X. Condition analysis of expansion joints of a long-span suspension bridge through metamodel-based model updating considering thermal effect. Struct. Control Health Monit. 2020, 27, e2521.
3. Ni, Y.Q.; Wang, Y.W.; Zhang, C. A Bayesian approach for condition assessment and damage alarm of bridge expansion joints using long-term structural health monitoring data. Eng. Struct. 2020, 212, 110520.
4. Lei, X.; Sun, L.; Xia, Y. Lost data reconstruction for structural health monitoring using deep convolutional generative adversarial networks. Struct. Health Monit. 2021, 20, 2069–2087.
5. Fan, G.; Li, J.; Hao, H. Dynamic response reconstruction for structural health monitoring using densely connected convolutional networks. Struct. Health Monit. 2021, 20, 1373–1391.
6. Kullaa, J. Detection, identification, and quantification of sensor fault in a sensor network. Mech. Syst. Signal Process. 2013, 40, 208–221.
7. Tang, Z.; Chen, Z.; Bao, Y.; Li, H. Convolutional neural network-based data anomaly detection method using multiple information for structural health monitoring. Struct. Control Health Monit. 2019, 26, e2296.
8. Bao, Y.; Li, H.; Sun, X.; Yu, Y.; Ou, J. Compressive sampling–based data loss recovery for wireless sensor networks used in civil structural health monitoring. Struct. Health Monit. 2013, 12, 78–95.
9. Amini, F.; Hedayati, Y.; Zanddizari, H. Exploiting the inter-correlation of structural vibration signals for data loss recovery: A distributed compressive sensing based approach. Mech. Syst. Signal Process. 2021, 152, 107473.
10. Lin, Q.; Bao, X.; Li, C. Deep learning based missing data recovery of non-stationary wind velocity. J. Wind Eng. Ind. Aerodyn. 2022, 224, 104962.
11. Deng, Z.; Huang, M.; Wan, N.; Zhang, J. The current development of structural health monitoring for bridges: A review. Buildings 2023, 13, 1360.
12. Guo, T.; Liu, J.; Zhang, Y.; Pan, S. Displacement monitoring and analysis of expansion joints of long-span steel bridges with viscous dampers. J. Bridge Eng. 2015, 20, 04014099.
13. Okuda, M.; Yumiyama, S.; Ikeda, H. Reinforcement of expansion joint on Akashi Kaikyo Bridge. In Proceedings of the Japan Society of Civil Engineers 59th Annual Meeting, Tokyo, Japan, 1 June–29 July 2004; Japan Society of Civil Engineers: Tokyo, Japan, 2004. (In Japanese)
14. Niu, Y.; Wang, Y.; Tang, Y. Analysis of temperature-induced deformation and stress distribution of long-span concrete truss combination arch bridge based on bridge health monitoring data and finite element simulation. Int. J. Distrib. Sens. Netw. 2020, 16, 1550147720945205.
15. Li, Y.; Huang, H.; Zhang, W.; Sun, L. Structural full-field responses reconstruction by the SVD and pseudo-inverse operator-estimated force with two-degree multi-scale models. Eng. Struct. 2021, 249, 112986.
16. Zhong, G.; Bi, Y.; Song, J.; Wang, K.; Gao, S.; Zhang, X.; Wang, C.; Liu, S.; Yue, Z.; Wan, C. Digital Integration of Temperature Field of Cable-Stayed Bridge Based on Finite Element Model Updating and Health Monitoring. Sustainability 2023, 15, 9028.
17. Wang, X.; Xie, G.; Liu, W.; Kong, H.; Gao, Y. A long-term vertical displacement prediction method of concrete bridges based on meteorological shared data and optimized GRU model. Measurement 2025, 253, 117811.
18. Fan, G.; Li, J.; Hao, H.; Xin, Y. Data driven structural dynamic response reconstruction using segment based generative adversarial networks. Eng. Struct. 2021, 234, 111970.
19. Gers, F.A.; Schmidhuber, J.; Cummins, F. Learning to forget: Continual prediction with LSTM. Neural Comput. 2000, 12, 2451–2471.
20. Kawakami, K. Supervised Sequence Labelling with Recurrent Neural Networks. Ph.D. Thesis, Carnegie Mellon University, Pittsburgh, PA, USA, 2008.
21. Noureen, S.; Atique, S.; Roy, V.; Bayne, S. Analysis and application of seasonal ARIMA model in energy demand forecasting: A case study of small scale agricultural load. In Proceedings of the 2019 IEEE 62nd International Midwest Symposium on Circuits and Systems (MWSCAS), Dallas, TX, USA, 4–7 August 2019; IEEE: New York, NY, USA, 2019; pp. 521–524.
22. Qian, F.; Chen, X. Stock prediction based on LSTM under different stability. In Proceedings of the 2019 IEEE 4th International Conference on Cloud Computing and Big Data Analysis (ICCCBDA), Chengdu, China, 12–15 April 2019; IEEE: New York, NY, USA, 2019; pp. 483–486.
23. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
24. Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306.
25. Cao, X.H.; Stojkovic, I.; Obradovic, Z. A robust data scaling algorithm to improve classification accuracies in biomedical data. BMC Bioinform. 2016, 17, 359.
26. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
27. Bock, S.; Goppold, J.; Weiß, M. An improvement of the convergence proof of the ADAM-Optimizer. arXiv 2018, arXiv:1804.10587.
28. Shahriari, B.; Swersky, K.; Wang, Z.; Adams, R.P.; De Freitas, N. Taking the human out of the loop: A review of Bayesian optimization. Proc. IEEE 2015, 104, 148–175.
29. Joy, T.T.; Rana, S.; Gupta, S.; Venkatesh, S. A flexible transfer learning framework for Bayesian optimization with convergence guarantee. Expert Syst. Appl. 2019, 115, 656–672.
30. Joy, T.T.; Rana, S.; Gupta, S.; Venkatesh, S. Batch Bayesian optimization using multi-scale search. Knowl.-Based Syst. 2020, 187, 104818.
31. Chadalawada, J.; Babovic, V. Review and comparison of performance indices for automatic model induction. J. Hydroinform. 2019, 21, 13–31.
32. Duan, Y.F.; Li, Y.; Xiang, Y.Q. Strain-temperature correlation analysis of a tied arch bridge using monitoring data. In Proceedings of the 2011 International Conference on Multimedia Technology, Hangzhou, China, 26–28 July 2011; IEEE: New York, NY, USA, 2011; pp. 6025–6028.
33. Han, Q.; Ma, Q.; Xu, J.; Liu, M. Structural health monitoring research under varying temperature condition: A review. J. Civ. Struct. Health Monit. 2021, 11, 149–173.
34. Guido, R.C. Effectively interpreting discrete wavelet transformed signals [lecture notes]. IEEE Signal Process. Mag. 2017, 34, 89–100.
35. Mallat, S. A Wavelet Tour of Signal Processing; Elsevier: Amsterdam, The Netherlands, 1999.
36. Wang, H.; Shen, R.; Bai, L.; Wang, L.; Chen, W.; Zuo, H. Analytical method for predicting train-induced longitudinal movement at girder ends of suspension bridges with full-scale test validation. Eng. Struct. 2025, 327, 119673.
Figure 1. LSTM network structure.
Figure 2. Effect of RDBD on LDBD.
Figure 3. Sensor layout diagram.
Figure 4. Sensor raw data and preprocessed data.
Figure 5. Correlation analysis between each sensor and L-D.
Figure 6. Locally enlarged view of L-D data after noise reduction.
Figure 7. Correlation analysis between each sensor and L-D after noise reduction.
Figure 8. Loss curves for each model training.
Figure 9. Predictive renderings of each model training.
Figure 10. Diagram of influence of RDBD on LDBD.
Figure 11. Prediction results of RDBD on LDBD.
Figure 12. Prediction of LDBD using temperature and RDBD.
Figure 13. Correlation analysis.
Table 1. Sensor list.

Monitoring Content | Quantity | Representative Symbol
Ambient temperature of the bridge site | 1 | A-T
Temperature inside the main beam | 2 | B-T
Temperature inside saddle cover | 4 | C-T
Temperature in the tower | 2 | D-T
Anchor room temperature | 4 | E-T
LDBD | 4 | L-D
RDBD | 4 | R-D
Table 2. List of temperature sensors.

Monitoring Content | Quantity | Position
Ambient temperature of the bridge site | 1 | Outside the guardrail at the main beam mid-span
Temperature inside the main beam | 1 | In the box girder at the main beam mid-span
Temperature inside saddle cover | 1 | Inside the saddle at the top of the upstream south bank tower
Temperature in the tower | 1 | Inside the upstream south bank tower
Table 3. Correlation strength statement table.

R = −1 | −1 < R < 0 | R = 0 | 0 < R < 1 | R = 1
Perfect negative correlation | Negative correlation | No correlation | Positive correlation | Perfect positive correlation
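The strength bands in Table 3 refer to the correlation coefficient R; assuming the usual Pearson definition (the paper does not restate the formula), a stdlib-only sketch is:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient R, interpreted per Table 3."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linear temperature-displacement relation gives R = 1
# (perfect positive correlation); reversing it gives R = -1
# (both up to floating-point rounding).
temps = [10.0, 15.0, 20.0, 25.0]
disps = [1.0, 2.0, 3.0, 4.0]
print(pearson_r(temps, disps))
print(pearson_r(temps, disps[::-1]))
```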
Table 4. LSTM parameters and operating environment.

Parameter Name | Parameter Selection
Activation function | ReLU
Optimizer | Adam
Loss function | Mean squared error
Framework | PyTorch 2.1.0 + CPU
Python | Python 3.11
CPU | i9-12900H (2.50 GHz)
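The LSTM layer configured in Table 4 follows the standard gated recurrence of [19,23] (Figure 1); the study itself uses PyTorch's built-in implementation. A scalar, stdlib-only sketch of one cell step, for illustration only and with arbitrary placeholder weights:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One scalar LSTM cell step: the forget, input, and output gates
    decide what the cell state c keeps, adds, and exposes."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate state
    c = f * c_prev + i * g   # long-term memory update
    h = o * math.tanh(c)     # hidden state (the layer's output)
    return h, c

# Run a short input sequence through the cell (weights are placeholders,
# not trained values).
w = {k: 0.5 for k in ("wf", "uf", "bf", "wi", "ui", "bi",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in (0.1, 0.2, 0.3):
    h, c = lstm_step(x, h, c, w)
print(h, c)
```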
Table 5. Bayesian optimization parameter search range.

Parameter Name | Range of Parameter
Batch size | 2^n (n = 3–6)
Hidden layers | 1–2
Hidden size | 20–500
Dropout ratio | 0.1–0.3
Learning rate | 0.00001–0.01
Sequence length | 2–50
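The ranges above define the hyperparameter search space. As a hedged sketch, the following uses uniform random sampling over these ranges as a simplified stand-in for the Bayesian optimization actually used [28,29,30]; the dictionary keys are illustrative names, not the study's code:

```python
import random

# Illustrative stand-in for the hyperparameter search: uniform random
# sampling over the Table 5 ranges. The study actually uses Bayesian
# optimization, which scores each candidate by the trained model's
# validation loss and uses those scores to propose the next candidate;
# plain random sampling only demonstrates the search space.
def sample_config(rng):
    return {
        "batch_size": 2 ** rng.randint(3, 6),        # 8, 16, 32, or 64
        "hidden_layers": rng.randint(1, 2),
        "hidden_size": rng.randint(20, 500),
        "dropout": rng.uniform(0.1, 0.3),
        "learning_rate": 10 ** rng.uniform(-5, -2),  # 1e-5 to 1e-2
        "sequence_length": rng.randint(2, 50),
    }

rng = random.Random(0)
print(sample_config(rng))
```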
Table 6. Parametric results of Bayesian optimization.

Parameter Name | A-T | B-T | C-T | D-T
Batch size | 64 | 32 | 32 | 8
Hidden layers | 1 | 1 | 1 | 1
Hidden size | 300 | 260 | 280 | 240
Dropout ratio | 0.300 | 0.200 | 0.100 | 0.100
Learning rate | 1.577 × 10⁻⁴ | 3.061 × 10⁻⁴ | 5.950 × 10⁻⁴ | 1.214 × 10⁻⁴
Sequence length | 23 | 29 | 49 | 46
Table 7. Evaluation indicators of the model.

Position | RMSE | MAE | SCC | R²
A-T | 0.065 | 0.046 | 0.850 | 0.841
B-T | 0.033 | 0.024 | 0.963 | 0.958
C-T | 0.091 | 0.068 | 0.658 | 0.655
D-T | 0.095 | 0.072 | 0.708 | 0.635
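The RMSE, MAE, and R² columns of Tables 7, 9, 11, 12, and 13 follow their standard definitions (SCC, presumably a correlation coefficient, is omitted here). A stdlib-only sketch of the three indexes:

```python
import math

def rmse(y, yhat):
    """Root mean square error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def r_squared(y, yhat):
    """Coefficient of determination R^2."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

# Toy displacement series: small errors give low RMSE/MAE and R^2 near 1.
y = [1.0, 2.0, 3.0, 4.0]
yhat = [1.1, 1.9, 3.2, 3.8]
print(rmse(y, yhat), mae(y, yhat), r_squared(y, yhat))
```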
Table 8. Parametric results of Bayesian optimization (RDBD as input).

Parameter Name | Parameter Value
Batch size | 8
Hidden layers | 1
Hidden size | 420
Dropout ratio | 0.100
Learning rate | 5.688 × 10⁻⁴
Sequence length | 46
Table 9. Evaluation indicators of the model (RDBD as input).

RMSE | MAE | SCC | R²
0.028 | 0.020 | 0.973 | 0.970
Table 10. Parametric results of Bayesian optimization (temperature and RDBD as input).

Parameter Name | Parameter Value
Batch size | 16
Hidden layers | 1
Hidden size | 380
Dropout ratio | 0.200
Learning rate | 7.810 × 10⁻⁴
Sequence length | 49
Table 11. Evaluation indicators of the model (temperature and RDBD as input).

RMSE | MAE | SCC | R²
0.017 | 0.013 | 0.991 | 0.988
Table 12. Evaluation indicators of the model.

Model Input | RMSE | MAE | SCC | R²
Temperature | 0.033 | 0.024 | 0.963 | 0.958
RDBD | 0.028 | 0.020 | 0.973 | 0.970
Temperature and RDBD | 0.017 | 0.013 | 0.991 | 0.988
Table 13. Evaluation indicators of the model (Xintian Yangtze River Bridge).

Model Input | RMSE | MAE | SCC | R²
Temperature | 0.031 | 0.022 | 0.968 | 0.965
RDBD | 0.029 | 0.021 | 0.969 | 0.967
Temperature and RDBD | 0.016 | 0.012 | 0.989 | 0.985

Share and Cite

MDPI and ACS Style

Yang, X.; Wu, C.; Zhang, Y.; Shao, W.; Chang, L.; Kong, K.; Cheng, Q. Longitudinal Displacement Reconstruction Method of Suspension Bridge End Considering Multi-Type Data Under Deep Learning Framework. Buildings 2025, 15, 2706. https://doi.org/10.3390/buildings15152706
