Article

Deflection Prediction of Highway Bridges Using Wireless Sensor Networks and Enhanced iTransformer Model

by Cong Mu 1, Chen Chang 1, Jiuyuan Huo 1,2,3,* and Jiguang Yang 1
1 School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China
2 National Cryosphere Desert Data Center, Northwest Institute of Eco-Environment and Resources, CAS, Lanzhou 730000, China
3 Lanzhou Huahao Science and Technology Co., Ltd., Lanzhou 730030, China
* Author to whom correspondence should be addressed.
Buildings 2025, 15(13), 2176; https://doi.org/10.3390/buildings15132176
Submission received: 14 May 2025 / Revised: 16 June 2025 / Accepted: 17 June 2025 / Published: 22 June 2025
(This article belongs to the Section Construction Management, and Computers & Digitization)

Abstract

As an important part of national transportation infrastructure, the operation status of bridges is directly related to transportation safety and social stability. Structural deflection, which reflects the deformation behavior of bridge systems, serves as a key indicator for identifying stiffness degradation and the progression of localized damage. The accurate modeling and forecasting of deflection are thus essential for effective bridge health monitoring and intelligent maintenance. To address the limitations of traditional methods in handling multi-source data fusion and nonlinear temporal dependencies, this study proposes an enhanced iTransformer-based prediction model, termed LDAiT (LSTM Differential Attention iTransformer), which integrates Long Short-Term Memory (LSTM) networks and a differential attention mechanism for high-fidelity deflection prediction under complex working conditions. Firstly, a multi-source heterogeneous time series dataset is constructed based on wireless sensor network (WSN) technology, enabling the real-time acquisition and fusion of key structural response parameters such as deflection, strain, and temperature across critical bridge sections. Secondly, LDAiT enhances the modeling of long-term dependencies through the introduction of LSTM and combines it with a differential attention mechanism to improve responsiveness to local dynamic changes in deflection. Finally, experimental validation is carried out on the measured data of the Xintian Yellow River Bridge, and the results show that LDAiT outperforms existing mainstream models in terms of R2, RMSE, MAE, and MAPE, demonstrating good accuracy, stability, and generalization ability. The proposed approach offers a novel and effective framework for deflection forecasting in complex bridge systems and holds significant potential for practical deployment in structural health monitoring and intelligent decision-making applications.

1. Introduction

As critical hubs within national transportation infrastructure systems, bridges not only bear the important function of the efficient circulation of people and materials but also relate to the stability of national economy and social operation [1,2]. With the accelerating pace of urbanization and increasing transportation demands, the number and scale of highway bridges in China have expanded rapidly [3]. Despite continuous advancements in bridge design, construction techniques, and material technologies, in-service structures remain susceptible to the compounded effects of repeated traffic loading, environmental degradation, material aging, and fatigue crack propagation. These combined factors can lead to progressive deterioration and damage accumulation, and in severe cases, they may result in sudden structural failure with catastrophic consequences for both life and property. Frequent bridge accidents have fully exposed the limitations of the existing management tools in structural risk identification and condition perception, which has promoted extensive attention being paid to bridge health monitoring and structural safety assessment techniques.
Bridge health monitoring (BHM) has become an important research direction in the field of civil engineering as a key technology to ensure the safe operation of bridges, extend the service life of structures, and optimize operation and maintenance management [4,5]. Compared to conventional approaches based on manual inspection and periodic evaluation, BHM systems offer significant advantages in terms of automation, real-time capability, and spatial coverage; they are especially suitable for large-span or structurally complex bridge projects, which can significantly improve the timeliness and accuracy of microdamage identification. The construction of high-precision and intelligent structural monitoring systems is becoming an inevitable development trend of bridge safety assessment methods. Among the many monitoring indicators, deflection, as a core parameter of structural deformation response, can effectively reflect the mechanical behavior of bridges under traffic loads and environmental disturbances [6,7]. Its changes are often closely related to potential damage such as structural stiffness degradation, member cracks, and connection relaxation. The accurate modeling and prediction of deflection not only help to identify abnormal working conditions but also provide key data support for maintenance decisions and repair and strengthening.
In recent years, with the rapid development of wireless sensor network (WSN) technology, the acquisition of bridge deflection and other response parameters has been evolving toward intelligent, systematic solutions [8,9]. WSNs offer flexible deployment, low energy consumption, and efficient transmission, and can achieve the synchronous acquisition of the structural state and cooperative sensing at multiple points, which significantly improves the real-time performance and scalability of health monitoring systems and makes them especially suitable for bridge monitoring tasks under complex working conditions and dynamic environments. Meanwhile, the development of artificial intelligence and the application of deep learning in structural time series modeling also provide new technical support for health prediction [10,11]. Among these advances, Transformer-based architectures (originally developed for natural language processing) have been introduced into structural health monitoring tasks for time series forecasting [12,13]. Compared to traditional recurrent neural networks (e.g., Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU)), the Transformer is able to capture dependencies over longer time spans based on the self-attention mechanism and demonstrates greater expressive power in modeling complex response features such as deflection. However, the standard Transformer architecture still faces challenges when applied to multi-source heterogeneous sensor data, particularly in its limited ability to fuse cross-channel information and model localized dynamic variations, which constrains its performance in practical bridge health monitoring scenarios.
Current bridge deflection prediction faces three key engineering challenges: the structural response exhibits significant nonlinearity and multiscale temporal evolution due to environmental changes, load coupling effects, and structural aging, which makes it difficult for traditional linear models to accurately characterize its dynamic behavior; existing models often lack sensitivity to local disturbances and struggle to capture short-term fluctuations in deflection signals, limiting their ability to detect early-stage damage and weakening the effectiveness of structural health warnings; and furthermore, bridge deflection is influenced by multiple heterogeneous monitoring sources such as temperature, strain, and vibration, resulting in high-dimensional and diverse data, and the effective fusion and feature extraction from such multi-source data remain major obstacles in practical model deployment. To address the aforementioned challenges, based on the iTransformer [14] framework, this paper proposes a bridge deflection prediction model that integrates LSTM and the differential attention mechanism, termed LDAiT (LSTM Differential Attention iTransformer). The main contributions of this study include the following:
(1) Fusing WSN technology to achieve the real-time collection of bridge structural response data, constructing a high-quality deflection-related time series feature dataset, and providing accurate and dynamic input support for model training.
(2) Proposing the LDAiT bridge deflection prediction model, which first introduces LSTM for the time series feature extraction of the initial input data to enhance the modeling of the nonlinear dynamic evolution of the bridge structure, and then embeds the differential attention mechanism into the iTransformer architecture to sharpen the model’s sensitivity to local response changes, thereby effectively improving prediction accuracy and stability.
(3) Carrying out comparative experiments on the measured dataset of the Xintian Yellow River Bridge to verify the effectiveness and performance advantages of the LDAiT model, providing powerful support for bridge health monitoring and intelligent operation and maintenance.
The rest of this paper is organized as follows: Section 2 introduces the related work; Section 3 describes in detail the structure of the proposed LDAiT model and its bridge deflection prediction framework; Section 4 carries out the experimental validation of the model and the analysis of the results based on the measured dataset of the Xintian Yellow River Bridge; and Section 5 concludes the research work of this paper and provides outlooks.

2. Related Work

Currently, deflection prediction methods for highway bridges fall into two main categories: statistical modeling methods and data-driven intelligent prediction methods [15]. Statistical modeling methods usually rely on structural mechanics theory and a probabilistic analysis framework and use classical time series models such as the autoregressive integrated moving average (ARIMA) model and the Kalman filter to predict structural response. For example, Kaloop et al. [16] used the coefficients and model errors of the ARIMA model to evaluate the dynamic response of bridge structures so as to achieve the identification and trend prediction of structural states. Although such methods have strong model interpretability and are suitable for scenarios with small data samples or clear structural characteristics, their modeling ability and generalization performance are limited when facing actual bridge structures with high-dimensional multivariate data, pronounced non-stationarity, and significant nonlinear responses.
With the continuous breakthroughs in the field of artificial intelligence, data-driven deep learning models have seen increasing adoption in BHM, particularly for the prediction of key structural response parameters such as deflection [17]. These approaches are capable of automatically extracting latent features from multi-source structural monitoring data, thereby eliminating the need for the explicit modeling of complex physical mechanisms. Notably, they exhibit inherent advantages in handling nonlinear coupling effects, time-varying disturbances, and data redundancy, which are often challenging for traditional analytical methods [18].
Existing studies have proposed various deep learning-based modeling schemes for the bridge deflection prediction problem. For instance, Guo et al. [19] proposed a temperature-induced deflection prediction model that integrates modal decomposition, particle swarm optimization (PSO), and the LSTM network. Incorporating multivariate inputs and time lag characteristics solves the problem of low accuracy due to the limitation of single-point inputs and time lag effects and effectively improves the model’s ability to portray the dynamic evolution process. Nie et al. [20] constructed a prediction framework based on bidirectional LSTM (BiLSTM), leveraging the coupling relationship among deflection, stress, and temperature. This bidirectional architecture enhances the model’s ability to capture temporal dependencies, enabling more accurate deflection trend forecasting. Wang et al. [21] further introduced the idea of the probabilistic modeling of Bayesian neural network and proposed a deflection prediction method based on the Bayesian dynamic linear model, which achieved the prediction of the temperature-induced deflection of bridges and its confidence interval and improved the model’s ability to express and quantify uncertainty information. Ju et al. [22] constructed a bridge deflection prediction model based on temperature and vehicle influence coefficients using a GRU network and proposed a strategy to decompose the physical components of the deflection signal, which provides a new way of thinking for the modeling of the deflection response under complex working conditions. In addition to the classical time series modeling approach, some scholars have also explored new paradigms for deflection prediction. Entezami et al. [23] proposed a kernelized deep regression method based on synthetic aperture radar (SAR) imagery, which combines the nonlinear modeling capability of kernel regression with the high-dimensional feature representation of deep neural networks. This approach enables the effective prediction and normalization of displacement responses in long-span bridges under conditions where high-frequency monitoring data are lacking. Qi et al. [24] proposed a modal deflection prediction method based on the construction of additional mass blocks, which significantly improves the engineering adaptability of the model and the deflection prediction accuracy by systematically analyzing the influence of the additional mass ratio and sensor configuration on the prediction accuracy. Xiao et al. [25] integrated the convolutional neural network (CNN), LSTM, and probability density estimation module to construct a bridge deflection interval prediction model. The method utilizes the joint CNN-LSTM structure to extract local features and long-term dependence, deeply explores the implicit relationship between ambient temperature, vehicle load, and deflection, and achieves probabilistic interval prediction based on Gaussian density estimation. Finally, its effectiveness and adaptability are verified on the measured data of Nanxi Yangtze River Bridge.
Although the aforementioned bridge deflection prediction methods have made progress in time series modeling, multi-variable input, and uncertainty quantification, they generally suffer from insufficient responsiveness to nonlinear dynamic features, the shallow fusion of multi-source information, and limited model generalization capability. Therefore, there is an urgent need to develop a unified prediction framework with stronger nonlinear modeling ability, deeper information fusion mechanisms, and better cross-structure adaptability to effectively address bridge deflection prediction tasks under complex working conditions.
In recent years, Transformer models have garnered significant attention in time series forecasting due to their global attention mechanism and powerful capability to model long-range dependencies. This architecture overcomes the limitations of traditional RNNs in fixed-step modeling and gradient propagation and is well suited to multi-scale and non-stationary structural response data. However, the standard Transformer still faces practical challenges in engineering applications, particularly when dealing with high-dimensional and multivariate inputs. These include high computational complexity and limited ability to capture localized patterns, which constrain its effectiveness in structural health monitoring scenarios. In order to further optimize its structural performance, Nie et al. [26] proposed the Patch time series transformer (PatchTST) model, which segments the input sequence into multiple patches for modeling. This strategy retains the Transformer’s capacity for long-range dependency extraction while significantly improving computational efficiency and overall model performance, particularly in multivariate forecasting tasks. On this basis, Xin et al. [13] further integrated the swarm decomposition (SWD) mechanism with the PatchTST model to construct a bridge structural response prediction method for complex working conditions. The effective combination of SWD and PatchTST makes up for the shortcomings of traditional time–frequency analysis in dynamic evolution modeling. Meanwhile, the patch nesting strategy (a mechanism for the structured processing of multi-scale input sequences or image feature blocks) is introduced. By constructing a hierarchical nested feature patch structure in the spatial or temporal dimension, the model’s ability to perceive local perturbations and detail changes is enhanced, improving the model’s prediction accuracy and generalization performance in complex scenarios such as high noise and data perturbations.
In summary, although the existing deep learning-based bridge deflection prediction methods have made significant progress in terms of their nonlinear modeling capability and multivariate processing, there are still obvious deficiencies in some aspects: (1) most models rely on a single way of modeling temporal input features, which makes it difficult to fully exploit the multiscale evolution patterns in the structural response; (2) the ability to perceive local perturbations and short-term dynamic features is limited, which leads to large fluctuations in prediction accuracy under complex working conditions; and (3) some of the methods incur large computational overhead when facing high-dimensional and multi-source heterogeneous data and suffer from insufficient information fusion, which affects their usability and stability in actual monitoring scenarios.
Aiming to solve the above problems, this paper proposes an LDAiT bridge deflection prediction model that integrates the LSTM network and differential attention mechanism on the basis of the iTransformer framework. The proposed model effectively integrates the long-term dependency modeling capability of the LSTM network for temporal feature extraction with the strength of differential attention mechanisms in capturing localized dynamic responses. This hybrid architecture is designed to enhance the model’s ability to represent the nonlinear evolutionary behavior of bridge structures, improve the recognition accuracy of critical deflection patterns, and reinforce the model’s stability and generalization under complex service conditions. In parallel, considering the demands of deflection prediction for real-time performance and multi-point response awareness, this paper constructs a data acquisition system for WSN, which further improves the timeliness and relevance of the model input data and provides a solid data foundation for the efficient training and deployment of the LDAiT model.

3. The Proposed Method

3.1. LSTM Module

The Long Short-Term Memory (LSTM) network is capable of capturing remote dependencies across time steps and suppressing gradient decay in a recurrent architecture, the structure of which is schematically illustrated in Figure 1. The network uses the cell state as a long-term memory channel and selectively writes, updates, and reads the information through forgetting, input, and output gates, making it suitable for the task of modeling complex dynamic sequences [27].
In bridge deflection prediction, structural responses are often subject to lag effects from factors such as temperature, load, and material aging, exhibiting distinct long-term time-dependent characteristics. The primary purpose of introducing LSTM in this paper is to leverage its memory gate mechanism to effectively model these slow-changing trends and lag evolution processes, thereby enhancing the predictive model’s ability to perceive changes in global structural responses. By first applying LSTM to extract temporal features from the initial input of bridge structural monitoring data, high-dimensional representations of how each variable evolves over time are generated, providing more discriminative input encoding for the subsequent iTransformer module embedded with the differential attention mechanism. Its internal state update mechanism and control gate design are shown in Equations (1)–(5).
f_t = \sigma(W_f [h_{t-1}, x_t] + b_f)    (1)
i_t = \sigma(W_i [h_{t-1}, x_t] + b_i)    (2)
o_t = \sigma(W_o [h_{t-1}, x_t] + b_o)    (3)
C_t = f_t \odot C_{t-1} + i_t \odot \tanh(W_c [h_{t-1}, x_t] + b_c)    (4)
h_t = o_t \odot \tanh(C_t)    (5)
where $W$ is the weight matrix, $b$ is the bias vector, and $\sigma$ is the nonlinear sigmoid activation function. $f_t$, $i_t$, and $o_t$ are the activations of the forget gate, the input gate, and the output gate at time $t$, respectively; $C_t$ is the cell state at time $t$, and $h_t$ is the final output of the LSTM cell at time $t$.
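To make the gate computations above concrete, the following is a minimal sketch of a single LSTM cell step implementing Equations (1)–(5) in PyTorch; the module name and the dimensions input_dim and hidden_dim are illustrative assumptions rather than the authors’ implementation.

```python
import torch
import torch.nn as nn

class LSTMCellSketch(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        # Each gate has its own weight matrix W and bias b acting on [h_{t-1}, x_t]
        self.W_f = nn.Linear(input_dim + hidden_dim, hidden_dim)  # forget gate
        self.W_i = nn.Linear(input_dim + hidden_dim, hidden_dim)  # input gate
        self.W_o = nn.Linear(input_dim + hidden_dim, hidden_dim)  # output gate
        self.W_c = nn.Linear(input_dim + hidden_dim, hidden_dim)  # candidate cell state

    def forward(self, x_t, h_prev, c_prev):
        z = torch.cat([h_prev, x_t], dim=-1)                 # concatenated [h_{t-1}, x_t]
        f_t = torch.sigmoid(self.W_f(z))                     # Eq. (1)
        i_t = torch.sigmoid(self.W_i(z))                     # Eq. (2)
        o_t = torch.sigmoid(self.W_o(z))                     # Eq. (3)
        c_t = f_t * c_prev + i_t * torch.tanh(self.W_c(z))   # Eq. (4)
        h_t = o_t * torch.tanh(c_t)                          # Eq. (5)
        return h_t, c_t
```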

3.2. iTransformer Module

Although the Transformer architecture is widely used in time series modeling for its excellent global dependency modeling capability, it still faces many challenges in long sequence forecasting tasks. On the one hand, the computational complexity of self-attention grows quadratically as the lookback window increases, significantly limiting its applicability in practical engineering scenarios. On the other hand, the traditional Transformer encapsulates the multivariate information into a uniform temporal token embedding, which may mask the intrinsic structure among the variables and lead to an unfocused distribution of attention, thus affecting the ability to model the key physical features. To address the above problems, this paper introduces the iTransformer model [14], whose core idea lies in reversing the traditional modeling perspective by applying the attention mechanism to the variable dimensions to capture multivariate inter-dependencies, while introducing feed-forward networks on each variable token to extract nonlinear feature representations. This structure effectively mitigates the performance degradation and attentional redundancy in long sequence modeling and significantly improves the model’s ability to capture variable correlations and the global temporal structure.
In the task of highway bridge deflection prediction, the structural response of a bridge is usually driven by a variety of physical quantities (e.g., temperature, load, and displacement), and its dynamic evolution process is highly nonlinear and multi-scale. iTransformer is able to explore the potential dependency relationships between variables and enhance the ability to portray the evolution trends of complex disturbances. This enables more accurate and informative predictions, which are critical for the reliable assessment of bridge health conditions.
The network structure of the iTransformer model is shown in Figure 2, including embedding, projection, and TrmBlock, and its specific calculation process is as follows:
Suppose that, in a time series prediction task with $N$ feature variables, an input sequence $X = \{x_1, \ldots, x_T\} \in \mathbb{R}^{T \times N}$ with $T$ time steps is used to predict the future $S$ time steps $Y = \{x_{T+1}, \ldots, x_{T+S}\} \in \mathbb{R}^{S \times N}$, where $X_{t,:}$ denotes all the variables at time $t$ and $X_{:,n}$ denotes the entire time series of the $n$-th feature variable. Unlike previous models that treat $X_{t,:}$ as an independent word, we use an embedding layer to learn the sequence feature representation of the variables $X_{:,n}$, which independently aggregates the global features of each variable, as shown in Equation (6).
H = \{h_1, \ldots, h_N\} \in \mathbb{R}^{N \times D}    (6)
where $h_i \in \mathbb{R}^{D}$ implies the complete temporal dynamics of the corresponding variable within a historical time window and is defined as a Variate Token, and $H$ denotes a representation matrix consisting of $N$ Variate Tokens with dimension $D$. In the subsequent layers, cross-variate information interaction is achieved between each Variate Token through a self-attention mechanism; meanwhile, within each Variate Token, layer normalization operations are introduced to standardize the differences in magnitude and distribution between different variables, and nonlinear feature encoding and enhancement is performed through a feed-forward network. Finally, each Variate Token is projected as the predicted value of the corresponding variable through the mapping layer. The above process can be formalized as follows:
h_n^0 = \mathrm{Embedding}(X_{:,n})    (7)
H^{l+1} = \mathrm{TrmBlock}(H^{l}), \quad l = 0, \ldots, L-1    (8)
\hat{Y}_{:,n} = \mathrm{Projection}(h_n^{L})    (9)
Here, $\mathrm{Embedding}: \mathbb{R}^{T} \to \mathbb{R}^{D}$ and $\mathrm{Projection}: \mathbb{R}^{D} \to \mathbb{R}^{S}$ are both implemented as multilayer perceptrons (MLPs). The generated Variate Tokens interact through the self-attention mechanism and are transformed with independent features by the shared feed-forward network in each TrmBlock. It is worth noting that the temporal order is implicitly encoded by the neuron arrangement, so the model does not need to explicitly introduce position embeddings when capturing temporal dependencies.
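As a concrete illustration of the inverted embedding and projection described above, the sketch below builds one variate token per variable with a linear embedding over the lookback window, applies self-attention across variables, and projects each token to the forecast horizon. It uses PyTorch’s generic TransformerEncoder as a stand-in for TrmBlock, and all sizes (T, S, D, head and layer counts) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class InvertedForecasterSketch(nn.Module):
    def __init__(self, T: int, S: int, D: int, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(T, D)      # Embedding: R^T -> R^D per variable (Eq. (7))
        layer = nn.TransformerEncoderLayer(d_model=D, nhead=n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)  # TrmBlocks (Eq. (8))
        self.project = nn.Linear(D, S)    # Projection: R^D -> R^S per variable (Eq. (9))

    def forward(self, x):                            # x: (batch, T, N)
        tokens = self.embed(x.transpose(1, 2))       # (batch, N, D): one token per variable
        tokens = self.blocks(tokens)                 # self-attention across variables
        return self.project(tokens).transpose(1, 2)  # (batch, S, N): forecast per variable
```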

3.3. Multi-Head Differential Attention

Although the iTransformer architecture shows superior performance in multivariate time series modeling, its core self-attention mechanism still suffers from redundancy in attention distribution and difficulty in highlighting key information when dealing with long sequences. Specifically, the attention weights tend to show a tendency to be uniformly distributed or incorrectly focused on non-critical contexts, which limits the efficiency of the model in extracting key features and may introduce semantic noise, affecting the prediction accuracy and stability.
To alleviate the above problems, this paper introduces the multi-head differential attention (MHDA) mechanism (shown in Figure 3) to achieve the active suppression of irrelevant attention and effective filtering of information noise [28]. Differential attention is an attention mechanism based on feature time difference modeling. Its core idea is to enhance the model’s ability to perceive local disturbances, small fluctuations, or sudden changes in features by explicitly modeling the rate of change (difference) between adjacent time steps. The mechanism calculates two independent softmax attention maps by dividing the Query and Key vectors into two subsets and then generates the final attention weight distribution through the difference operation. This strategy is inspired by the principle of differential amplification in signal processing, which can effectively remove the common mode interference components, so that the model focuses on the key feature regions with high expression and significant response. This mechanism is particularly suitable for detecting local jumps, minor anomalies, and early damage responses in bridge deflection data. In addition, the differential operation not only improves the sparsity and selectivity of the attention distribution but also significantly improves the feature discrimination ability of the model under complex working conditions, thus improving the accuracy and stability of the deflection prediction of the bridge. The specific calculation steps of MHDA are as follows:
First, assume that the input sequence is $X \in \mathbb{R}^{N \times d_{model}}$ and map it to Query, Key, and Value using Equation (10), where $W^{Q}, W^{K}, W^{V} \in \mathbb{R}^{d_{model} \times 2d}$ denote the learnable parameters and $Q_1, Q_2, K_1, K_2 \in \mathbb{R}^{N \times d}$, $V \in \mathbb{R}^{N \times 2d}$.
[Q_1 : Q_2] = X W^{Q}, \quad [K_1 : K_2] = X W^{K}, \quad V = X W^{V}    (10)
Subsequently, differential attention computation is performed via the DiffAttn ( ) operation (as shown in Equation (11)), where the final attention scores are obtained by calculating the difference between two independently computed softmax attention maps. This operation effectively suppresses attention noise and enhances focus on relevant features. Meanwhile, the modulation factor λ is parameterized through Equation (12) to enable dynamic control over the attention distribution, further optimizing the adaptability of the attention mechanism.
\mathrm{DiffAttn}(X) = \left( \mathrm{softmax}\left( \frac{Q_1 K_1^{T}}{\sqrt{d}} \right) - \lambda \, \mathrm{softmax}\left( \frac{Q_2 K_2^{T}}{\sqrt{d}} \right) \right) V    (11)
\lambda = \exp(\lambda_{q_1} \cdot \lambda_{k_1}) - \exp(\lambda_{q_2} \cdot \lambda_{k_2}) + \lambda_{init}    (12)
where $\lambda_{q_1}, \lambda_{k_1}, \lambda_{q_2}, \lambda_{k_2} \in \mathbb{R}^{d}$ denote learnable parameter vectors and $\lambda_{init} \in (0, 1)$ is a constant used for the initialization of $\lambda$. Following the setting of reference [28], $\lambda_{init}$ is set to $0.8 - 0.6 \times \exp(-0.3(l-1))$, where $l \in [1, L]$ denotes the layer index of the network.
Finally, for the number of attention heads h , the attention results of each head are mapped according to Equation (13), where the scalar λ within the same layer is shared among all heads. Subsequently, all head outputs are normalized by Equation (14) and mapped to the final output space according to Equation (15).
\mathrm{head}_i = \mathrm{DiffAttn}(X;\; W_i^{Q}, W_i^{K}, W_i^{V}, \lambda), \quad i \in [1, h]    (13)
\overline{\mathrm{head}_i} = (1 - \lambda_{init}) \cdot \mathrm{LN}(\mathrm{head}_i)    (14)
\mathrm{MultiHead}(X) = \mathrm{Concat}(\overline{\mathrm{head}_1}, \ldots, \overline{\mathrm{head}_h}) \, W^{O}    (15)
where $W^{O} \in \mathbb{R}^{d_{model} \times d_{model}}$ is the learnable projection matrix, $\mathrm{LN}(\cdot)$ denotes the operation of normalizing each attention head using GroupNorm before it is concatenated, and $\mathrm{Concat}(\cdot)$ denotes splicing the outputs of all attention heads along the channel dimension. Since differential attention mechanisms typically present sparser and more diversely distributed attention patterns, the statistical features among the heads differ significantly. Performing the LN normalization on each head before concatenation helps to unify the activation distributions and alleviate unstable gradient statistics, thus improving the stability and convergence of model training.
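The following is a minimal single-head sketch of the differential attention computation in Equations (10)–(12), written in PyTorch. The parameter initialization and dimension names are illustrative assumptions, and a full MHDA module would additionally apply the per-head GroupNorm and output projection of Equations (13)–(15).

```python
import math
import torch
import torch.nn as nn

class DiffAttnSketch(nn.Module):
    def __init__(self, d_model: int, d: int, lambda_init: float = 0.8):
        super().__init__()
        self.d = d
        self.W_q = nn.Linear(d_model, 2 * d, bias=False)  # produces [Q1 : Q2]
        self.W_k = nn.Linear(d_model, 2 * d, bias=False)  # produces [K1 : K2]
        self.W_v = nn.Linear(d_model, 2 * d, bias=False)  # produces V
        # Learnable vectors parameterizing lambda (Eq. (12)); init scale is an assumption
        self.lambda_q1 = nn.Parameter(torch.randn(d) * 0.1)
        self.lambda_k1 = nn.Parameter(torch.randn(d) * 0.1)
        self.lambda_q2 = nn.Parameter(torch.randn(d) * 0.1)
        self.lambda_k2 = nn.Parameter(torch.randn(d) * 0.1)
        self.lambda_init = lambda_init

    def forward(self, x):                                  # x: (batch, N, d_model)
        q1, q2 = self.W_q(x).chunk(2, dim=-1)
        k1, k2 = self.W_k(x).chunk(2, dim=-1)
        v = self.W_v(x)
        a1 = torch.softmax(q1 @ k1.transpose(-2, -1) / math.sqrt(self.d), dim=-1)
        a2 = torch.softmax(q2 @ k2.transpose(-2, -1) / math.sqrt(self.d), dim=-1)
        lam = (torch.exp(self.lambda_q1 @ self.lambda_k1)
               - torch.exp(self.lambda_q2 @ self.lambda_k2)
               + self.lambda_init)                         # Eq. (12)
        return (a1 - lam * a2) @ v                         # Eq. (11): differenced attention maps
```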

3.4. LDAiT Prediction Model

Based on the iTransformer architecture, this study develops a bridge deflection prediction model named LDAiT, which integrates the LSTM network with an MHDA mechanism. The overall architecture of the model is illustrated in Figure 4. The model combines the advantages of LSTM in capturing temporal dependence with the ability of the differential attention mechanism to highlight the local dynamic response, aiming to enhance the ability to characterize the nonlinear evolution process of the bridge structure, effectively identify the key deflection changes, and achieve higher prediction accuracy, stability, and generalization performance under complex service conditions.
The LDAiT model as a whole consists of several functional modules. First, the original input sequences are processed by the LSTM module to effectively capture the temporal evolution of structural responses. Subsequently, the embedding module completes the feature mapping of the variable dimensions and introduces the multi-head differential attention mechanism to construct sparse and discriminative local dynamic dependencies, which enhances the model’s ability to perceive the interaction of key variables. Then, LayerNorm and feed-forward network modules are introduced to further optimize the feature distribution and nonlinear representation, enhancing expressive ability and training stability. Finally, the output is normalized again and mapped to the final prediction result through the projection layer.
Through the above structural design, the LDAiT model achieves the deep integration of time-dependent modeling and correlation mining among variables, which effectively improves the prediction accuracy, robustness, and engineering applicability in the task of bridge deflection prediction.
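To summarize how the modules fit together, the sketch below wires an LSTM front-end, per-channel variate embedding, attention blocks, and a projection head in the order described above. PyTorch’s standard TransformerEncoder stands in for the MHDA-based TrmBlocks, the final single-output head assumes deflection is the sole prediction target, and all layer sizes are illustrative assumptions rather than the authors’ configuration.

```python
import torch
import torch.nn as nn

class LDAiTSketch(nn.Module):
    def __init__(self, n_vars: int, T: int, S: int, d_lstm: int = 64, D: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(n_vars, d_lstm, batch_first=True)    # temporal feature extraction
        self.embed = nn.Linear(T, D)                              # one variate token per LSTM channel
        layer = nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)  # stand-in for MHDA-based TrmBlocks
        self.project = nn.Linear(D, S)                            # map each token to the forecast horizon
        self.head = nn.Linear(d_lstm, 1)                          # fuse channels into the deflection output

    def forward(self, x):                            # x: (batch, T, n_vars)
        h, _ = self.lstm(x)                          # (batch, T, d_lstm)
        tokens = self.embed(h.transpose(1, 2))       # (batch, d_lstm, D)
        tokens = self.blocks(tokens)                 # cross-channel attention
        y = self.project(tokens).transpose(1, 2)     # (batch, S, d_lstm)
        return self.head(y).squeeze(-1)              # (batch, S): predicted deflection sequence
```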

3.5. Overall Framework for Bridge Deflection Prediction

The bridge deflection prediction framework proposed in this paper is illustrated in Figure 5, which outlines a full-cycle modeling process from structural response data acquisition to deflection output prediction. The framework includes a total of four core steps:
Step 1: WSN Deployment and Data Acquisition: Based on the mechanical characteristics of critical bridge components, finite element analysis (FEA) is employed to simulate the stress distribution and determine the optimal sensor placement regions and spatial density. Subsequently, relying on WSN technology, the integrated deployment of multiple types of sensors is implemented in the key structural units of the Xintian Yellow River Highway Bridge to achieve the long-term real-time monitoring of deflection, strain, temperature, and other multi-source response variables and to construct a highly time-sensitive and strongly correlated raw dataset.
Step 2: Data Preprocessing: Preprocess the collected raw monitoring data by performing time alignment, outlier removal, missing data imputation, and normalization in order to ensure the consistency of each feature dimension in numerical scale and temporal structure. After preprocessing, the data are divided into training, validation, and test sets to provide data support for model training and performance evaluation.
Step 3: LDAiT Model Prediction: Construct the LDAiT model based on the improved iTransformer architecture and introduce the LSTM network to enhance the modeling ability of long-term dependencies so as to achieve high-precision prediction.
Step 4: Bridge Deflection Prediction Results and Evaluation: Use a number of regression indicators to assess the prediction results, combined with locally zoomed views and comparison curves, to analyze the applicability and stability of the model across different prediction horizons and fluctuation scenarios, thereby verifying the engineering practicality and value for wider adoption of the proposed method.

4. Experiment Validation

4.1. Data Acquisition and Processing

To construct a high-quality feature dataset for deflection prediction, this paper first uses FEA to numerically simulate the stress distribution of the bridge structure, identifying key stress-concentrated areas such as the mid-span of the main girder, side spans, and bearings, and then determine the optimal sensor placement and spatial density. Based on this, this paper leverages WSN technology to conduct a two-year long-term monitoring of the key structural units of the Xintian Yellow River Highway Bridge. The WSN system adopted in this study is based on a distributed node deployment architecture, where all sensor nodes are interconnected with the central gateway through edge acquisition units, enabling real-time data collection, caching, and remote transmission. The nodes achieve unified time calibration through integrated GPS timing modules, ensuring the temporal synchronization and sequence consistency of multi-sensor data. Additionally, to mitigate the data asynchrony issues caused by network latency, this system incorporates a caching retransmission mechanism and packet loss compensation strategy, effectively enhancing the stability and integrity of model input data. This sensor network system has been operating continuously and stably at the Xintian Yellow River Highway Bridge site for over six years, demonstrating excellent robustness and environmental adaptability under complex conditions such as high temperatures and adverse weather, thereby providing reliable engineering data support for the training and validation of subsequent deep learning models.
The data acquisition focused on the mid-span region of the bridge, targeting nine feature variables that are closely associated with deflection behavior. These include mid-span deflection, upper right surface temperature, lower left surface temperature of the box girder, four strain responses at the upper and lower surfaces (left and right), mid-span vibration frequency, and crack width. These features cover temperature–stress–vibration–damage multi-physical information dimensions, with high correlation and dynamic coupling characteristics that can comprehensively reflect the operation status of the structure under different working conditions. Data were recorded at an hourly frequency. This study selected 17,497 monitoring samples from the fourth span of the main bridge collected between 2 August 2020 and 1 August 2022 to construct the experimental dataset. The data were partitioned into training, validation, and testing sets in an 8:1:1 ratio for model training and performance evaluation. A sliding time window of length 24 was used to input historical data, with the objective of predicting deflection values for the subsequent 2 h period.
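As an illustration of the sample construction just described, the sketch below builds sliding windows of 24 hourly records to predict the next 2 h of deflection and splits them chronologically in an 8:1:1 ratio; the array layout and the deflection column index are hypothetical assumptions.

```python
import numpy as np

DEFLECTION_COL = 0   # illustrative assumption: deflection is the first feature column
WINDOW, HORIZON = 24, 2

def make_windows(data: np.ndarray, window: int = WINDOW, horizon: int = HORIZON):
    """Turn an (n_samples, n_features) array into (X, y) sliding-window pairs."""
    X, y = [], []
    for t in range(len(data) - window - horizon + 1):
        X.append(data[t : t + window])                                      # past 24 h of all features
        y.append(data[t + window : t + window + horizon, DEFLECTION_COL])   # next 2 h of deflection
    return np.asarray(X), np.asarray(y)

def split_811(X: np.ndarray, y: np.ndarray):
    """Chronological 8:1:1 split into training, validation, and test sets."""
    n = len(X)
    i1, i2 = int(0.8 * n), int(0.9 * n)
    return (X[:i1], y[:i1]), (X[i1:i2], y[i1:i2]), (X[i2:], y[i2:])
```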
In order to enhance the quality of model construction, the raw monitoring data underwent systematic preprocessing following acquisition. This process included time synchronization across multi-source inputs, outlier removal, missing value imputation, and feature normalization (as defined in Equation (16)). These steps ensured temporal alignment and numerical comparability among input variables, thereby providing a stable and reliable data foundation for the subsequent training and optimization of the LDAiT model. The value ranges of the preprocessed deflection-related features are summarized in Table 1, and the fluctuations of some feature data are shown in Figure 6. To further validate the nonlinear temporal dependence of the deflection data, this study conducted an analysis using the Brock–Dechert–Scheinkman (BDS) [29] test on a subset of measured deflection data from the Xintian Yellow River Bridge. The results indicate that the BDS statistics significantly deviate from zero across multiple embedding dimensions, with corresponding p-values all below 0.01. This leads to a clear rejection of the null hypothesis that the deflection sequence is independently and identically distributed, thereby confirming the presence of complex nonlinear temporal dependencies within the deflection data.
X' = \frac{X - X_{min}}{X_{max} - X_{min}}    (16)
where $X$ and $X'$ are the original input data and the normalized model inputs, respectively.
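A minimal sketch of the min-max normalization in Equation (16) is given below; computing the statistics on the training split only (to avoid information leakage) is an added assumption, not a detail stated above.

```python
import numpy as np

def minmax_fit(train: np.ndarray):
    """Compute per-feature minimum and maximum from the training split."""
    return train.min(axis=0), train.max(axis=0)

def minmax_apply(x: np.ndarray, x_min: np.ndarray, x_max: np.ndarray) -> np.ndarray:
    """Apply Eq. (16); the small epsilon guards against constant channels."""
    return (x - x_min) / (x_max - x_min + 1e-12)
```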

4.2. Performance Evaluation Indicators

In order to comprehensively evaluate the prediction performance of the model, the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) are selected as the performance evaluation indexes. The mathematical expressions of each index are shown below [30,31]:
R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}
RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}
MAE = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|
MAPE = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right| \times 100\%
where $y_i$ and $\hat{y}_i$ denote the true and predicted values of the $i$-th sample, respectively, $\bar{y}$ is the mean of all true values, and $n$ is the total number of samples.
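For reference, the sketch below computes the four evaluation metrics defined above from arrays of true and predicted deflection values; it is a straightforward transcription of the formulas rather than the authors’ evaluation code.

```python
import numpy as np

def regression_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)                       # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    return {
        "R2":   1.0 - ss_res / ss_tot,
        "RMSE": float(np.sqrt(np.mean(resid ** 2))),
        "MAE":  float(np.mean(np.abs(resid))),
        "MAPE": float(np.mean(np.abs(resid / y_true)) * 100.0),
    }
```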

4.3. Comparative Experimental Results and Analysis

To verify the applicability and superiority of the proposed model, this study conducted comparative experiments on the aforementioned dataset with four representative time series forecasting models: the attention-based PatchTST [26], FEDformer [32], and Transformer [33] models, and the gating-based LSTM [34] model. Figure 7 and Figure 8 illustrate the 1 h and 2 h bridge deflection prediction results obtained by these models, respectively. Each figure includes the predicted curves, locally magnified plots, and regression metrics such as R2, RMSE, and MAE. In addition, the MAPE values of each model are given in Table 2 to evaluate the deviation of the predicted values from the actual measurements.
Through the comprehensive analysis of the results in Figure 7 and Figure 8 and Table 2, it can be observed that the LDAiT model exhibits the minimum values in RMSE, MAE, and MAPE metrics, indicating that the LDAiT model outperforms the other models in terms of prediction accuracy, stability, and generalization ability. Moreover, the R2 value of LDAiT is also closest to 1, which further illustrates the model’s good fitting ability to data fluctuations.
Specifically, in the 1 h bridge deflection prediction task, the LDAiT model achieved RMSE, MAE, and MAPE values of 0.16, 0.10, and 1.29%, respectively. Compared with the second-best model, PatchTST, these values were reduced by 0.12, 0.09, and 0.83%, demonstrating superior accuracy and error control. In terms of fitting performance, the R2 values of LDAiT and PatchTST were 0.98 and 0.95, respectively, indicating strong global fitting capabilities for both models, with LDAiT being closer to the theoretical optimum. In contrast, the prediction error of the LSTM model is significantly higher than that of other models. This can be attributed to the limitations of its gated architecture, which, while effective for short-term sequence modeling, tends to suffer from gradient vanishing and insufficient memory of long-term dependencies. As a result, the model struggles to capture the critical evolutionary patterns in deflection changes, leading to reduced prediction accuracy and poorer curve fitting. The RF model performed the worst in this task, with an R2 value of only 0.79 for its 1 h prediction results, and with error metrics (RMSE = 0.56, MAE = 0.41, MAPE = 4.79%) all higher than those of other models. This indicates that RF has significant limitations in modeling nonlinear temporal dependencies and multivariate coupling characteristics in disturbance data, further highlighting the performance advantages and application potential of deep learning methods in complex civil structural response modeling tasks.
In the 2 h bridge deflection prediction task, the LDAiT model continued to outperform the other benchmark models in both predictive accuracy and curve fitting, maintaining its advantage observed in the 1 h prediction. Compared with its 1 h performance, the RMSE, MAE, and MAPE of LDAiT increased by only 0.02, 0.01, and 0.39%, respectively, indicating that it has good prediction stability and generalization ability across time. In contrast, the second-best model, PatchTST, exhibited larger increases of 0.08, 0.07, and 0.50% in the same metrics, further highlighting LDAiT’s robustness in modeling longer temporal dependencies. In terms of fitting performance, the R2 value of LDAiT only decreases by 0.01, while that of PatchTST decreases by 0.03, indicating that LDAiT is better able to maintain the overall fitting ability of the model to the trend of perturbation changes and possesses stronger time scale migration and feature retention ability. The performance of the LSTM model in the 2 h prediction task is still poor, with RMSE, MAE, and MAPE increasing by 0.01, 0.01, and 0.14%, respectively, compared to its 1 h prediction. Although these increments are relatively small, the overall error level remains significantly higher than the other models. This further indicates that the LSTM is limited by its ability to express the gating mechanism when dealing with long-term dependence modeling and has obvious deficiencies in capturing the long-term dynamic features of the bridge deflection, which makes it difficult to perform the task of the high-precision prediction of the response of complex structures. The RF model continued to perform the worst in this task, with an R2 of only 0.78, an RMSE of 0.56, and a MAPE of 4.88%. Its limited ability to model long-term time series dependencies and multi-source variable coupling characteristics resulted in low overall accuracy. In contrast, deep learning models demonstrated stronger dynamic modeling and feature extraction capabilities, giving them an advantage in predicting complex structural disturbances.
An analysis of the predicted curves reveals that the LDAiT model effectively captures the critical features of bridge deflection and accurately predicts its fluctuation trends. This is particularly evident in the interval between prediction points 700 and 800, where the model successfully tracks peak deflection variations. In contrast, the other models exhibit varying degrees of prediction deviation in this region, failing to identify the rapid change patterns of bridge disturbance in short time scales, reflecting their limited ability to model high-frequency local features. LDAiT, however, maintains high prediction accuracy even under this extreme disturbance scenario, demonstrating its strong capability in responding to complex nonlinear fluctuations and its broad adaptability to varying structural conditions.
In summary, the LDAiT model proposed in this paper shows excellent time series feature extraction and dynamic response identification capabilities in the complex nonlinear modeling task of bridge structural deflection. On the other hand, the incorporation of the differential attention mechanism significantly improves the model’s sensitivity to local disturbances, thereby enhancing its ability to detect fine-grained deflection fluctuations. Moreover, LDAiT maintains high prediction accuracy and stability across different forecasting horizons, indicating strong robustness and generalization performance. Its ability to adapt to the dynamic evolution of deflection under multi-condition and multi-scale environments makes it particularly suitable for health monitoring and intelligent early warning in complex bridge structures.

4.4. Results and Analysis of Ablation Experiments

The LDAiT model consists of several functional modules, and in order to further verify the role of each module in the overall architecture and its contribution to the model performance, ablation experiments are designed and conducted in this paper. Based on the structural characteristics of the model, it is divided into three core components: the basic framework iTransformer, the LSTM timing module, and the MHDA mechanism. On this basis, four groups of comparison methods are constructed, as shown in Table 3, for gradually adding modules to evaluate their performance impact. The corresponding prediction curves for each method are shown in Figure 9, and the results of the related evaluation metrics are listed in Table 4.
As shown by the ablation results in Table 4 and Figure 9, the complete LDAiT model (M4) achieves the best prediction performance in both 1 h and 2 h forecasting tasks, indicating that each functional module plays a critical role in enhancing overall model effectiveness. Compared with M1, which contains only the baseline iTransformer architecture, the performance consistently improves with the successive inclusion of the LSTM module (M2) and the MHDA mechanism (M3), thereby confirming the effectiveness and necessity of each component in the context of bridge deflection modeling.
Specifically, M4 achieves the highest goodness-of-fit (R2 = 0.98) and the lowest error metrics (RMSE = 0.17, MAE = 0.10, MAPE = 1.29%) in the 1 h prediction task, which is significantly better than M1 (R2 = 0.87, RMSE = 0.43, MAE = 0.32, MAPE = 3.87%). The same trend is also reflected in the 2 h prediction task, where M4 is ahead of the comparison model in several evaluation metrics, indicating that it still has good prediction performance and generalization ability in long time series prediction scenarios.
It is worth noting that incorporating the LSTM module into the baseline iTransformer architecture (M2) significantly improves the model’s fitting accuracy, demonstrating the effectiveness of LSTM in capturing temporal dependencies and enhancing sequential modeling capacity for deflection data. Similarly, introducing the MHDA module (M3) also yields notable performance gains, indicating that the differential attention mechanism plays a crucial role in identifying key regions of deflection fluctuations and reinforcing the modeling of localized dynamic responses. Furthermore, the integration of both LSTM and MHDA modules into the iTransformer backbone (M4) leads to the best performance across all of the evaluation metrics. M4 significantly outperforms M2 and M3 in terms of fitting the prediction curves, which indicates that there is a significant synergistic gain between the two modules in terms of feature capture and modeling capabilities.
In summary, the ablation study confirms the synergistic effectiveness of each module within the LDAiT architecture. The LSTM module enhances the model’s capacity to represent the temporal evolution of deflection sequences, while the MHDA module improves the sensitivity to local deflection fluctuations. Together, they form a high-accuracy and broadly applicable framework for bridge deflection prediction.

5. Conclusions

This paper aims to overcome the challenge of high-precision deflection prediction for highway bridges under complex working conditions and proposes an LDAiT bridge deflection prediction model that integrates multi-source sensing and deep modeling capabilities. First, a high-quality time series dataset tailored for deflection modeling was constructed based on WSN technology, enabling the continuous monitoring of structural response states. Second, on the basis of iTransformer architecture, the LSTM module is introduced to enhance the long-term time series dependent modeling capability, and the multi-head differential attention mechanism is introduced to strengthen the response characterization capability of the local abrupt change in deflection so as to achieve the accurate prediction of the complex dynamic evolution process of bridge deflection. Finally, an experimental setup was established based on real monitoring data from the Xintian Yellow River Bridge, where comparative experiments and ablation studies were conducted to systematically validate the advantages of the proposed LDAiT model and the performance contributions of its individual modules. The experimental results demonstrate that LDAiT consistently outperforms existing mainstream models in terms of R2, RMSE, MAE, and MAPE, showcasing its superior prediction accuracy, robustness, and engineering applicability. This model provides a novel and effective approach for high-precision bridge deflection forecasting and refined structural modeling, with promising potential for deployment in complex structural health monitoring and intelligent early-warning systems.
Although this study validated the effectiveness of the LDAiT model in bridge deflection prediction, its generalization ability across bridge types, climates, and multiple operating conditions still needs to be further verified. In the future, transfer learning and multimodal modeling will be introduced to enhance the model’s adaptability and generalizability to complex engineering scenarios.

Author Contributions

Conceptualization, methodology, C.M.; validation, C.M. and C.C.; formal analysis, C.M. and C.C.; investigation, C.M. and C.C.; resources, J.H.; data curation, C.M. and J.Y.; writing—original draft preparation, C.M.; writing—review and editing, J.Y.; visualization, J.Y.; supervision, J.H.; project administration, J.H.; funding acquisition, J.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62262038; the Gansu Province Science and Technology Program-Innovation Fund for Small and Medium-Sized Enterprises, grant number 25CXGA014; the Technology Innovation Guidance Program of Gansu Province-Science and Technology Specialist, grant number 25CXGA030; and the Education Technology Innovation Project of Gansu Province, grant number 2025CXZX-634.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to thank all the staff of 1404 Laboratory, School of Electronic and Information Engineering, Lanzhou Jiaotong University, for their support of this work.

Conflicts of Interest

Author Jiuyuan Huo was employed by the company Lanzhou Huahao Science and Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Giordano, P.F.; Quqa, S.; Limongelli, M.P. The Value of Monitoring a Structural Health Monitoring System. Struct. Saf. 2023, 100, 102280. [Google Scholar] [CrossRef]
  2. Xin, J.; Liu, Q.; Tang, Q.; Li, J.; Zhang, H.; Zhou, J. Damage identification of arch bridges based on dense convolutional networks and attention mechanisms. J. Vib. Shock 2024, 43, 18–28. [Google Scholar] [CrossRef]
  3. Yang, J.; Huo, J.; Mu, C. A Novel Clustering Routing Algorithm for Bridge Wireless Sensor Networks Based on Spatial Model and Multicriteria Decision Making. IEEE Internet Things J. 2024, 11, 27775–27789. [Google Scholar] [CrossRef]
  4. Ni, Y.Q.; Wang, Y.W.; Zhang, C. A Bayesian Approach for Condition Assessment and Damage Alarm of Bridge Expansion Joints Using Long-Term Structural Health Monitoring Data. Eng. Struct. 2020, 212, 110520. [Google Scholar] [CrossRef]
  5. Sonbul, O.S.; Rashid, M. Algorithms and Techniques for the Structural Health Monitoring of Bridges: Systematic Literature Review. Sensors 2023, 23, 4230. [Google Scholar] [CrossRef] [PubMed]
  6. Zeng, G.; Deng, Y.; Ma, B.; Liu, T. Monitoring-based reliability assessment of vertical deflection for in-service long-span bridges. J. Cent. South Univ. 2021, 52, 3636–3646. [Google Scholar]
  7. Han, Q.; Ma, Q.; Liu, M. Structural Health Monitoring Research under Varying Temperature Condition: A Review. J. Civ. Struct. Health Monit. 2020, 11, 1–25. [Google Scholar] [CrossRef]
  8. Li, G.; Zhang, C.; Hu, S.; Wang, X.; Guo, J. WSN Clustering Routing Protocol for Bridge Structure Health Monitoring. J. Syst. Simul. 2022, 34, 62–69. [Google Scholar] [CrossRef]
  9. Kustiana, W.A.A.; Trilaksono, B.R.; Riyansyah, M.; Putra, S.A.; Caesarendra, W.; Królczyk, G.; Sulowicz, M. Bridge Damage Detection with Support Vector Machine in Accelerometer-Based Wireless Sensor Network. J. Vib. Eng. Technol. 2024, 12, 21–40. [Google Scholar] [CrossRef]
  10. Feng, R.; Xie, G.; Zhang, Y.; Kong, H.; Wu, C.; Liu, H. Research on Vehicle Fatigue Load Spectrum of Highway Bridges Based on Weigh-in-Motion Data. Buildings 2025, 15, 675. [Google Scholar] [CrossRef]
  11. Entezami, A.; Behkamal, B.; Michele, C.D.; Stefano, M. Displacement prediction for long-span bridges via limited remote sensing images: An adaptive ensemble regression method. Measurement 2025, 245, 116567. [Google Scholar] [CrossRef]
12. Vaswani, A.; Shazeer, N.M.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is All you Need. arXiv 2017, arXiv:1706.03762.
13. Xin, J.; Wang, Y.; Jiang, Y.; Huang, L.; Zhang, H.; Zhou, J. Bridge Structural Response Prediction Method Integrating SWD and PatchTST. J. Vib. Eng. 2025, 1–13. Available online: https://link.cnki.net/urlid/32.1349.TB.20250418.1932.002 (accessed on 13 May 2025).
14. Liu, Y.; Hu, T.; Zhang, H.; Wu, H.; Wang, S.; Ma, L.; Long, M. iTransformer: Inverted Transformers Are Effective for Time Series Forecasting. arXiv 2023, arXiv:2310.06625.
15. Qu, G.; Song, M.M.; Sun, L.M. Bayesian Dynamic Noise Model for Online Bridge Deflection Prediction Considering Stochastic Modeling Error. J. Civ. Struct. Health Monit. 2025, 15, 245–262.
16. Kaloop, M.R.; Hussan, M.; Kim, D. Time-Series Analysis of GPS Measurements for Long-Span Bridge Movements Using Wavelet and Model Prediction Techniques. Adv. Space Res. 2019, 63, 3505–3521.
17. Entezami, A.; Sarmadi, H. Machine learning-aided prediction of windstorm-induced vibration responses of long-span suspension bridges. Comput.-Aided Civ. Infrastruct. Eng. 2025, 40, 1043–1060.
18. Li, S.; Xin, J.; Jiang, Y.; Wang, C.; Zhou, J.; Yang, X. Temperature-Induced Deflection Separation Based on Bridge Deflection Data Using the TVFEMD-PE-KLD Method. J. Civ. Struct. Health Monit. 2023, 13, 781–797.
19. Guo, Y.; Zhang, M.; Wang, K.; Liu, L.; Chen, W. CEEMDAN-VMD-PSO-LSTM Model for Bridge Deflection Prediction. Saf. Environ. Eng. 2024, 31, 150–159.
20. Nie, H.; Ying, J.; Deng, J. Bridge Deflection Prediction Method Based on Bi-LSTM Model for Health Monitoring. Highway 2024, 69, 213–219.
21. Wang, M.Y.; Ding, Y.L.; Zhao, H.W. Digital Prediction Model of Temperature-Induced Deflection for Cable-Stayed Bridges Based on Learning of Response-Only Data. J. Civ. Struct. Health Monit. 2022, 12, 629–645.
22. Ju, H.; Deng, Y.; Li, A. Correlation Model of Deflection-Temperature-Vehicle Load Monitoring Data for Bridge Structures. J. Vib. Shock 2023, 42, 79–89.
23. Entezami, A.; Behkamal, B.; Michele, C.D.; Mariani, S. A kernelized deep regression method to simultaneously predict and normalize displacement responses of long-span bridges via limited synthetic aperture radar images. Struct. Health Monit. 2025.
24. Qi, X.; Sun, X.; Wang, S.; Cao, S. Study on Modal Deflection Prediction Method of Bridges Based on Additional Mass Blocks. J. Vib. Shock 2022, 41, 104–113.
25. Xiao, X.; Liu, X.; Zhang, H.; Wan, Z.; Chen, F.; Luo, Y.; Liu, Y. Deflection Prediction and Early Warning Method of Suspension Bridge Main Girder Based on CNN-LSTM-GD. J. Vib. Shock 2025, 1–11.
26. Nie, Y.; Nguyen, N.H.; Sinthong, P.; Kalagnanam, J. A Time Series Is Worth 64 Words: Long-Term Forecasting with Transformers. In Proceedings of the ICLR, Kigali, Rwanda, 1–5 May 2023.
27. Pei, X.-Y.; Hou, Y.; Huang, H.-B.; Zheng, J.-X. A Deep Learning-Based Structural Damage Identification Method Integrating CNN-BiLSTM-Attention for Multi-Order Frequency Data Analysis. Buildings 2025, 15, 763.
28. Ye, T.; Dong, L.; Xia, Y.; Sun, Y.; Zhu, Y.; Huang, G.; Wei, F. Differential Transformer. arXiv 2024, arXiv:2410.05258.
29. Aydin, C.; Cahit, E. Distinguishing between stochastic and deterministic behavior in foreign exchange rate returns: Further evidence. Econ. Lett. 1996, 51, 323–329.
30. Briggs, S.; Behazin, M.; King, F. Validation of Water Radiolysis Models Against Experimental Data in Support of the Prediction of the Radiation-Induced Corrosion of Copper-Coated Used Fuel Containers. Corros. Mater. Degrad. 2025, 6, 14.
31. Lunardi, L.R.; Cornélio, P.G.; Prado, L.P.; Nogueira, C.G.; Felix, E.F. Hybrid Machine Learning Model for Predicting the Fatigue Life of Plain Concrete Under Cyclic Compression. Buildings 2025, 15, 1618.
32. Hong, J.; Bai, Y.; Huang, Y.; Chen, Z. Hybrid Carbon Price Forecasting Using a Deep Augmented FEDformer Model and Multimodel Optimization Piecewise Error Correction. Expert Syst. Appl. 2024, 247, 123325.
33. Gong, Y.; Wang, Y.; Xie, Y.; Peng, X.; Peng, Y.; Zhang, W. Dynamic Fusion LSTM-Transformer for Prediction in Energy Harvesting from Human Motions. Energy 2025, 327, 136192.
34. Bhanbhro, J.; Memon, A.A.; Lal, B.; Talpur, S.; Memon, M. Speech Emotion Recognition: Comparative Analysis of CNN-LSTM and Attention-Enhanced CNN-LSTM Models. Signals 2025, 6, 22.
Figure 1. The architectural schematic of LSTM neural network cells.
Figure 2. Schematic of the network structure of the iTransformer model.
Figure 3. Schematic structure of the multi-differential attention mechanism.
Figure 4. Schematic diagram of the network structure of the LDAiT model.
Figure 5. Overall framework for bridge deflection prediction based on the LDAiT model.
Figure 6. Schematic illustration of fluctuations in data for selected features.
Figure 7. Bridge deflection prediction results of different models (1 h).
Figure 8. Bridge deflection prediction results of different models (2 h).
Figure 9. Prediction results of each method in the ablation experiment.
Table 1. Statistical summary of bridge deflection-related features.

No. | Features | Count | Mean | Std | Min | Median | Max
F1 | Mid-span box girder top surface right temperature/°C | 17,497 | 15.82 | 13.04 | −23.79 | 18.53 | 40.04
F2 | Mid-span box girder bottom surface left temperature/°C | 17,497 | 10.83 | 13.34 | −21.69 | 15 | 30.43
F3 | Mid-span box girder bottom surface left strain/mm | 17,497 | 27.07 | 80.75 | −165.69 | 46.63 | 140.81
F4 | Mid-span box girder bottom surface right strain/mm | 17,497 | 10.83 | 165.96 | −314.64 | 33.5 | 347.75
F5 | Mid-span box girder top surface left strain/mm | 17,497 | 73.57 | 264.77 | −568.80 | 138.67 | 463.95
F6 | Mid-span box girder top surface right strain/mm | 17,497 | 70.24 | 246.93 | −521.77 | 125.56 | 460.63
F7 | Mid-span vibration/Hz | 17,497 | 0.61 | 2.04 | −12.57 | 1.05 | 23.18
F8 | Mid-span crack/mm | 17,497 | 0.10 | 0.83 | −1.86 | 0.30 | 81.23
F9 | Bridge deflection/mm | 17,497 | 1.44 | 7.92 | −11.64 | −6.3 | 50.31
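The summary statistics in Table 1 can be reproduced directly from the fused monitoring records. The following is a minimal sketch, assuming the WSN measurements have been exported to a single time-aligned CSV file; the file name and column names are illustrative placeholders rather than identifiers from the authors' pipeline.

```python
import pandas as pd

# Hypothetical export of the fused WSN monitoring data (file and column names are assumptions).
df = pd.read_csv("xintian_bridge_monitoring.csv", parse_dates=["timestamp"], index_col="timestamp")

features = [
    "top_right_temperature", "bottom_left_temperature",   # F1, F2
    "bottom_left_strain", "bottom_right_strain",          # F3, F4
    "top_left_strain", "top_right_strain",                # F5, F6
    "midspan_vibration", "midspan_crack", "deflection",   # F7, F8, F9
]

# Count, mean, std, min, median (50%), and max for each feature, matching the columns of Table 1.
summary = df[features].describe(percentiles=[0.5]).T[["count", "mean", "std", "min", "50%", "max"]]
print(summary.round(2))
```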
Table 2. MAPE of deflection prediction results for different models.

MAPE/% | LDAiT | PatchTST | FEDformer | Transformer | LSTM | RF
1 h | 1.29 | 2.12 | 3.04 | 4.14 | 4.57 | 4.79
2 h | 1.68 | 2.62 | 3.43 | 4.56 | 4.71 | 4.88
Table 3. Configuration scheme for each module in the LDAiT model ablation experiment.

Method | iTransformer | LSTM | MHDA
M1
M2
M3
M4
Table 4. Evaluation results of each method for the ablation experiment.

Horizon | Evaluation Indicator | M1 | M2 | M3 | M4
1 h | R2 | 0.87 | 0.91 | 0.96 | 0.98
1 h | RMSE | 0.43 | 0.37 | 0.24 | 0.17
1 h | MAE | 0.32 | 0.29 | 0.17 | 0.10
1 h | MAPE/% | 3.87 | 3.56 | 2.02 | 1.29
2 h | R2 | 0.85 | 0.88 | 0.94 | 0.97
2 h | RMSE | 0.46 | 0.41 | 0.29 | 0.18
2 h | MAE | 0.35 | 0.32 | 0.20 | 0.11
2 h | MAPE/% | 4.17 | 3.89 | 2.24 | 1.68
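For clarity, the four evaluation indicators reported in Tables 2 and 4 (R2, RMSE, MAE, and MAPE) can be computed as in the sketch below. This is a minimal illustration rather than the authors' evaluation code; `y_true` and `y_pred` are assumed to be NumPy arrays of measured and predicted deflections over the test horizon, and the MAPE definition assumes no zero-valued measurements.

```python
import numpy as np

def evaluation_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """R2, RMSE, MAE, and MAPE for a deflection prediction series."""
    residual = y_true - y_pred
    ss_res = np.sum(residual ** 2)                   # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return {
        "R2": float(1.0 - ss_res / ss_tot),
        "RMSE": float(np.sqrt(np.mean(residual ** 2))),
        "MAE": float(np.mean(np.abs(residual))),
        "MAPE/%": float(np.mean(np.abs(residual / y_true)) * 100.0),
    }

# Example call with illustrative dummy values (not measured data):
# metrics = evaluation_metrics(np.array([-6.3, -5.8, -7.1]), np.array([-6.1, -5.9, -7.4]))
```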
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
