Article

Dynamic Structural Early Warning for Bridge Based on Deep Learning: Methodology and Engineering Application

1 School of Civil Engineering and Transportation, South China University of Technology, Guangzhou 510630, China
2 Guangdong Huitao Engineering Technology Co., Ltd., Shunde 528300, China
* Author to whom correspondence should be addressed.
Buildings 2026, 16(4), 823; https://doi.org/10.3390/buildings16040823
Submission received: 10 January 2026 / Revised: 5 February 2026 / Accepted: 13 February 2026 / Published: 18 February 2026
(This article belongs to the Special Issue Building Structure Health Monitoring and Damage Detection)

Abstract

In bridge health monitoring, structural responses are strongly coupled with temperature effects and vehicle load effects, making it difficult for conventional fixed thresholds and single data-driven approaches to simultaneously achieve environmental adaptability and quantitative reliability assessment. To address this issue, this study proposes a deep-learning-based dynamic early-warning method for bridge structures, using health-monitoring data from an in-service long-span cable-stayed bridge as the research background. First, a two-month mid-span deflection time series is processed using variational mode decomposition optimized by the Crested Porcupine Optimizer (CPO) to separate temperature-induced effects. Subsequently, a hybrid prediction model integrating Informer and SEnet is constructed. Temperature and temperature-induced deflection components are used as input features, and a sliding-window strategy is adopted to achieve high-accuracy prediction of the temperature-induced deflection trend, which serves as the time-varying baseline of the dynamic threshold. On this basis, vehicle load effects are modeled by combining Pareto extreme value theory with finite element analysis and superimposed to establish a two-level dynamic early-warning threshold system that satisfies code requirements. Furthermore, a stochastic finite element Monte Carlo method is introduced to probabilistically model uncertainties associated with material parameters, load effects, and model prediction errors. The threshold failure probability at each time instant is taken as the evaluation metric, enabling quantitative characterization of threshold reliability. The results indicate that under combined multiple working conditions, the proposed method reduces the maximum failure probability of the first-level warning by 32.68% and that of the second-level warning by 93.48%, with more stable and consistent probabilistic responses.
In engineering applications, simulation experiments based on stochastic traffic loading show that the warning accuracy is improved by up to 19.27%, while the error rate is reduced by up to 16.16%. The study demonstrates that the proposed method possesses a clear physical and statistical foundation as well as good engineering feasibility and provides a viable pathway for transforming bridge early-warning systems from experience-based schemes toward data-driven and risk-oriented frameworks.

1. Introduction

As time progresses, a large number of bridge infrastructures are gradually entering a critical stage of long-term operation and maintenance. Owing to the continuous degradation of structural performance and the increasingly complex combined effects of multiple loads, bridge health monitoring systems play an important role in ensuring structural safety and enabling the early identification of structural damage. However, determining how to accurately and promptly extract abnormal information indicative of structural damage or performance deterioration from the massive amounts of monitoring data, and further translate such information into effective operation and maintenance decisions, remains a major challenge in current engineering practice. Structural monitoring data are essentially the mixed outcome of multiple coupled factors, among which the quasi-static effects induced by temperature variations and the dynamic effects caused by vehicle loads are the dominant sources [1]. Their coupling forms the primary background of the monitoring signals, while the weak abnormal features that truly reflect structural damage are often obscured within it. Therefore, the development of novel intelligent early warning methods that can effectively separate these different effects while remaining sensitive to structural damage is of great significance for enhancing the practical applicability and reliability of bridge health monitoring systems [2,3].
As an important component of bridge health monitoring, early warning methods have been extensively studied by researchers worldwide with respect to monitoring data analysis and warning strategy development. Methods based purely on physical models represent the earliest form of warning threshold determination. For example, Fan et al. [4] established dynamic deflection warning thresholds for bridges by combining the generalized Pareto model with finite element analysis. Zheng et al. [5] established a three-level early-warning threshold system for bridges based on data variations under the serviceability limit state. However, the safety thresholds defined using finite element-based physical models allow relatively wide ranges of structural variation, making it difficult to capture abnormal structural state changes in real time, and they are prone to missed warnings. With the continuous accumulation of monitoring data, data-driven approaches have demonstrated significant potential. Traditional statistical methods—such as regression analysis [6,7,8], principal component analysis [9], and kernel density estimation [10]—have been widely applied to environmental effect separation and data normalization. However, these methods are generally based on linear or weakly nonlinear assumptions and thus have limited capability in capturing the complex high-order nonlinear and hysteretic relationships between temperature variations and structural responses. In recent years, the rapid development of deep learning has provided transformative tools for addressing complex time-series analysis problems in bridge health monitoring. Deep learning models, represented by recurrent neural networks and their variants [11,12], as well as Transformer-based large-scale models and their variants [13,14], have shown strong performance in structural response prediction, damage diagnosis, and related applications. In the field of structural engineering, Lazaridis et al. 
[15,16] investigated the performance of various machine learning algorithms in predicting earthquake-induced damage in different reinforced concrete frame structures and deployed these algorithms in cloud-based practical applications. Similarly, in the field of bridge structural health monitoring, deep learning algorithms have demonstrated strong predictive capabilities. Jiang et al. [17] proposed a data-driven dynamic strain early-warning method by integrating a Generative Adversarial Network (GAN) with a Long Short-Term Memory (LSTM) network in which the GAN is employed to capture the data distribution characteristics and enhance the robustness of strain prediction. Ju et al. [18] developed a structural deflection early-warning method based on a bidirectional gated recurrent unit network with an attention mechanism, where environmental temperature data and traffic load coefficients are incorporated to update the warning thresholds in real time. Men et al. [19] utilized an LSTM-based deep learning architecture to achieve accurate prediction of structural monitoring data and further realized dynamic early warning for bridge structural safety. Relying solely on deep learning models for early-warning threshold setting imposes stringent requirements on prediction accuracy, while the resulting thresholds often lack clear engineering meaning, interpretability, and controllability; moreover, the relatively narrow threshold intervals tend to cause frequent and potentially spurious warnings.
In addition to threshold construction, reliability evaluation of warning results is also an indispensable component of early warning systems. As a quantitative assessment approach [20,21], reliability analysis plays a crucial role in identifying and enhancing the stability and effectiveness of warning system performance. At present, reliability assessment has been applied in various scenarios, including the evaluation of nonlinear behavior of bridge structures [22], reliability analysis of structures with low failure probabilities [23], and reliability assessment of natural gas pipeline supply systems [24]. Owing to its strong model generalization capability, sound computational framework, and high credibility of results, reliability analysis has gradually been extended beyond the aforementioned application fields to the stability and safety evaluation of early warning systems. For example, Duque et al. [25] investigated the reliability of flood early warning systems using Monte Carlo simulation and further improved the warning system based on the evaluation results. Wang et al. [26] proposed a multi-objective expected value optimization method based on expert judgment and Monte Carlo simulation to assess and optimize safety systems for underground engineering. Sattele et al. [27] proposed a threshold-based approach for natural hazard early warning systems in which reliability analysis was employed to evaluate system performance and decision effectiveness. To further characterize the differences among performance indicators of early-warning systems, Soundararajan et al. [28] conducted a refined investigation into slope performance using Monte Carlo simulation based on reliability indices and failure probabilities. Finazzi et al. 
[29] evaluated the robustness of smartphone-based earthquake early-warning systems by employing parametric statistical models, hypothesis testing, and Monte Carlo simulation, with particular emphasis on false alarm and missed-detection rates, and demonstrated the effectiveness of the proposed evaluation framework. Chen et al. [30] analyzed the influence of various control variables on the time-variant reliability of subsea structures by adopting equivalent normalization techniques in combination with Monte Carlo simulation.
Existing studies indicate that bridge health monitoring and early-warning systems still exhibit notable deficiencies in the critical stage of warning methodology and practical application. First, data-driven models and physical–mechanical models have long remained decoupled at the threshold-setting level. Purely data-driven approaches typically rely on prediction residuals or anomaly scores as warning criteria. Although adaptive, such thresholds lack explicit engineering meaning and auditable justification, making false alarm and missed alarm rates prone to temporal fluctuation and difficult to control. In contrast, purely physics-based approaches can provide mechanical interpretability but often depend on idealized boundary conditions and parameter assumptions, which are insufficient to capture multi-source randomness and model uncertainty in long-term monitoring, resulting in limited applicability and stability of thresholds under real operating conditions. Consequently, while existing studies have achieved progress separately in prediction accuracy or mechanical analysis, a systematic solution for jointly translating data patterns and mechanical constraints into implementable and controllable risk-oriented warning thresholds is still lacking. Second, current early-warning research is predominantly driven by improving prediction accuracy or anomaly detection sensitivity, with performance gains often used as the primary evidence of effectiveness. However, probabilistic reliability assessment and quantitative application-oriented analysis of warning outcomes are generally absent. In particular, key engineering questions remain unanswered, such as the probabilities of false alarms and missed detections under a given threshold, and whether the threshold maintains a stable risk boundary under uncertainty propagation—issues that are essential for practical deployment. 
In other words, how to systematically integrate fusion and quantification concepts throughout the entire warning chain—including environmental effect separation, load effect modeling, hierarchical threshold construction, and threshold reliability evaluation—has yet to be adequately explored.
To address the above issues, this study proposes an intelligent bridge health monitoring early-warning framework that integrates data-driven and physics-based modeling, hierarchical threshold setting, and probabilistic reliability assessment into a unified scheme for the first time. First, the temperature effects in mid-span deflection monitoring data are separated using the CPO-VMD method. An Informer–SEnet-based deep learning model is then developed in which temperature and temperature-induced deflection components are used as input features, while the temperature-induced deflection trend is taken as the target variable. A sliding-window strategy is adopted, whereby the subsequent 144 data points are predicted from the preceding 144 historical points, enabling high-accuracy forecasting of the temperature-induced trend. Second, vehicle load effects are characterized through both physical and statistical modeling approaches. On the one hand, a finite element model is used to compute vehicle-load-induced deflection under the serviceability limit state. On the other hand, to capture the heavy-tail characteristics of vehicle-induced deflection, the peaks-over-threshold method based on Pareto extreme value theory is adopted to model tail behavior and estimate vehicle-load-induced deflection at specified exceedance probabilities. These two results are then respectively combined with the temperature-induced trend to construct two-level dynamic early-warning thresholds, enabling synergy between engineering constraints and data adaptivity. Furthermore, a Monte Carlo simulation framework based on stochastic finite element modeling is introduced to jointly account for multiple sources of uncertainty, including temperature variability, stochastic traffic loading, model errors, and prediction uncertainty, and to quantify their propagation effects along the warning chain. 
Monte Carlo samples are compared with the predefined dynamic thresholds, and the comparison results are used as the reliability control objective: samples exceeding the threshold are classified as failures, while others are regarded as reliable. The threshold performance is thus probabilistically characterized in terms of failure probability. Finally, the effectiveness and engineering applicability of the proposed framework are validated through a real-world bridge case study. The results show that, under multi-condition simulated datasets, the proposed method improves reliability by up to 58.01% compared with traditional fixed-threshold approaches and by up to 6.19% compared with existing dynamic warning methods. In validation using four consecutive days of real-time monitoring data and simulation experiments, confusion-matrix-based evaluation indicates that, relative to fixed-threshold methods, the warning accuracy is increased by up to 0.17% and the false alarm rate is reduced by up to 0.17%; compared with existing dynamic warning methods, the accuracy is improved by up to 19.27% and the false alarm rate is reduced by up to 16.16%. These results confirm the accuracy, stability, and practical engineering value of the proposed method for structural anomaly early warning.
This study proposes a deep learning-based dynamic early warning threshold method for bridge structures. Taking an in-service long-span cable-stayed bridge as the engineering background, the reliability and feasibility of the proposed dynamic warning threshold method are verified from both theoretical and practical engineering perspectives. The remainder of this paper is organized as follows. Section 2 introduces the principles of the related methodologies, including the Informer–SEnet architecture, extreme value theory, and the stochastic finite element Monte Carlo simulation method. Section 3 presents the development of the dynamic early warning method based on actual engineering monitoring data using the above approaches. Section 4 demonstrates the practical performance of the proposed dynamic warning threshold method through an engineering case study. Finally, Section 5 summarizes the main conclusions of this study.

2. Methodology

2.1. Principle of Informer-SEnet Model

2.1.1. Informer Model

Informer [31] is an efficient extension of the Transformer [32] for long-term time-series forecasting. It mainly consists of three core modules: sparse attention, hierarchical feature extraction, and generative decoding. The detailed principles are described as follows.
(1)
Sparse Self-Attention Mechanism
The core improvement of Informer is to replace the conventional self-attention mechanism, which computes correlations between every time step and all others, with a sparse variant in which only a small number of the most representative time steps participate in global interaction, significantly reducing computational complexity with little loss of prediction accuracy. To this end, Informer defines an importance score M(q_i, K) that measures the divergence between the similarity distribution of a query q_i with respect to the key vectors and a uniform distribution. The formulation is given as follows (1):
M(q_i, K) = \ln\left(\sum_{j=1}^{L_K} e^{q_i k_j^{T}/\sqrt{d_k}}\right) - \frac{1}{L_K}\sum_{j=1}^{L_K} \frac{q_i k_j^{T}}{\sqrt{d_k}}   (1)
Simplifying the above equation further yields
\bar{M}(q_i, K) = \max_{j}\left\{\frac{q_i k_j^{T}}{\sqrt{d_k}}\right\} - \frac{1}{L_K}\sum_{j=1}^{L_K} \frac{q_i k_j^{T}}{\sqrt{d_k}}   (2)
In expression (2), the first term is the log-sum-exp operation, which measures the overall attention intensity generated by the interactions between the query q_i and all keys. The second term is the mean of the scaled similarities between the query and the keys. Their difference characterizes the sharpness of the attention distribution: a larger value indicates a more important query, whereas a smaller value implies a distribution closer to uniform, so the corresponding query can be neglected.
Based on this score, the set of query vectors Q̄ with the highest scores is selected to participate in the attention calculation. The sparse attention mechanism can be formulated as follows:
\mathrm{Attention}(Q, K, V) = \mathrm{Softmax}\left(\frac{\bar{Q} K^{T}}{\sqrt{d_k}}\right) V   (3)
In Formula (3), Q is the query matrix; K is the key matrix; V is the value matrix; Q̄ is the sparse query matrix containing only the selected queries; and d_k is the dimensionality of the key vectors.
Sparse attention essentially selects the most important time points from long time series and concentrates computational resources on a small number of segments that are most relevant to prediction, making it particularly well suited for forecasting long-term monitoring data.
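As an illustration of this selection step, the following minimal NumPy sketch scores toy random queries with the max-minus-mean criterion of Equation (2) and applies full attention only to the top-scoring ones. All matrices are random stand-ins, not the paper's model; following common Informer implementations, non-selected outputs fall back to the mean of V:

```python
import numpy as np

rng = np.random.default_rng(0)
L, d = 16, 8                       # sequence length, key dimension (toy sizes)
Q = rng.normal(size=(L, d))        # query matrix
K = rng.normal(size=(L, d))        # key matrix
V = rng.normal(size=(L, d))        # value matrix

scores = Q @ K.T / np.sqrt(d)      # scaled dot-product similarities q_i k_j^T / sqrt(d_k)

# Importance score of Eq. (2): max similarity minus mean similarity per query
M_bar = scores.max(axis=1) - scores.mean(axis=1)

u = 4                              # number of "active" queries to keep
top = np.argsort(M_bar)[-u:]       # indices of the sharpest attention distributions

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Sparse attention (Eq. (3)): exact attention only for the selected queries;
# the remaining rows fall back to the mean of V
out = np.tile(V.mean(axis=0), (L, 1))
out[top] = softmax(scores[top]) @ V
```

Only u of the L rows require a full softmax, which is the source of the complexity reduction.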
(2)
Hierarchical Feature Extraction
Relying solely on sparse attention may still retain a considerable amount of redundant information. To further improve efficiency and enhance multi-scale feature representation, the Informer model introduces an information distillation layer after the attention mechanism. This design enables the model to capture local fluctuations without being slowed down by excessively long sequences, thereby effectively preserving the key trend components of the series. The information distillation mechanism mainly consists of the following two parts:
(1)
One-dimensional convolution extracts local multi-scale features
The convolution kernel slides along the temporal dimension, effectively extracting local patterns within different time windows.
Y^{(l)} = \mathrm{ELU}\left(\mathrm{Conv1D}\left(\mathrm{LayerNorm}\left(X^{(l)}\right)\right)\right)   (4)
In Formula (4), X^{(l)} represents the input feature of the l-th layer; LayerNorm denotes layer normalization; Conv1D stands for one-dimensional convolution; and ELU refers to the Exponential Linear Unit activation function.
(2)
Max pooling downsampling
Following the convolution operation mentioned above, pooling is applied. Common pooling methods include average pooling and max pooling. To reduce computational complexity while preserving important feature scales, this study employs max pooling for downsampling.
X^{(l+1)} = \mathrm{MaxPool1D}\left(Y^{(l)}, \ \text{stride} = n\right)   (5)
In Formula (5), the “stride” represents the step size of the kernel movement along the temporal dimension, which directly determines the compression ratio of the sequence length. Typically, a stride of 2 is chosen in the model, which directly halves the sequence length, thereby further reducing computational complexity.
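The distillation step of Equations (4) and (5) can be sketched with plain NumPy as follows; the smoothing kernel and toy sine input are illustrative stand-ins for learned convolution weights and real monitoring data:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize the sequence to zero mean and unit variance
    return (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + eps)

def elu(x):
    # Exponential Linear Unit activation
    return np.where(x > 0, x, np.exp(x) - 1.0)

def conv1d(x, kernel):
    # 'Same'-padded 1-D convolution sliding along the temporal dimension
    pad = len(kernel) // 2
    xp = np.pad(x, pad)
    return np.array([xp[i:i + len(kernel)] @ kernel for i in range(len(x))])

def max_pool1d(x, stride=2):
    # Eq. (5): non-overlapping max pooling; stride = 2 halves the sequence length
    trimmed = x[: len(x) // stride * stride]
    return trimmed.reshape(-1, stride).max(axis=1)

x = np.sin(np.linspace(0, 6 * np.pi, 64))    # toy input sequence
kernel = np.array([0.25, 0.5, 0.25])         # illustrative (not learned) kernel
y = elu(conv1d(layer_norm(x), kernel))       # Eq. (4)
x_next = max_pool1d(y, stride=2)             # Eq. (5): length 64 -> 32
```

Stacking such layers compresses the sequence geometrically while retaining dominant local patterns.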
(3)
Generative Decoder
The decoder in Informer primarily consists of two parts: the ground-truth segment X_token truncated from the end of the input sequence, and the placeholder X_0 reserved for the values to be predicted. The decoder input X_feed_de is computed as follows (6):
X_{\mathrm{feed\_de}} = \mathrm{Concat}\left(X_{\mathrm{token}}, X_0\right)   (6)
Based on the above principles, the overall workflow of the Informer model for time-series forecasting can be summarized as shown in Figure 1.

2.1.2. Squeeze and Excitation (SEnet) Module

SEnet enhances feature representation by modeling channel dependencies and dynamically reweighting feature channels [33]. Specifically, the SEnet module assigns an importance coefficient to each feature channel, enabling the model to automatically emphasize informative features while suppressing irrelevant or noisy ones. When integrated with Informer, the combined model leverages both long-sequence modeling capability and feature recalibration, thereby improving the prediction accuracy of the temperature-induced deflection. The corresponding architecture is illustrated in Figure 2.
This module mainly consists of two components: Squeeze (global information compression) and Excitation (modeling inter-channel dependencies). The workflow of this module is illustrated in Figure 3, and its underlying principles are described as follows:
Figure 1. Informer Model Architecture Diagram.
Figure 2. Channel Attention Mechanism.
(1)
Squeeze Module
The module compresses spatiotemporal dimensional information through global average pooling, aggregating the mean of each channel’s feature map into a global descriptor vector. This enables the model to focus on overall contextual information rather than local details. The corresponding formulation is given as follows (7):
z_c = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} X_c(i, j)   (7)
where H and W denote the height and width of the feature map, respectively; i and j are the indices along the height and width directions; and X_c(i, j) represents the element at position (i, j) of the c-th channel.
(2)
Excitation Module
The purpose of this module is to enable the model to learn the relationships among different channels and to determine which channels are more important. It captures inter-channel dependencies through a two-layer fully connected network, thereby addressing the channel selection problem. The corresponding formulation is given as follows (8):
s = \sigma\left(W_2 \, \delta\left(W_1 z\right)\right)   (8)
where W_1 ∈ R^{(C/r)×C} denotes the compression weight matrix, W_2 ∈ R^{C×(C/r)} denotes the expansion weight matrix, and r represents the channel reduction ratio. δ(·) denotes the ReLU activation function, σ(·) denotes the Sigmoid activation function, and s ∈ R^C represents the channel-wise weight vector.
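A minimal NumPy sketch of the squeeze-excitation pipeline of Equations (7) and (8), with random matrices standing in for the learned weights W_1 and W_2:

```python
import numpy as np

rng = np.random.default_rng(1)
C, H, W, r = 8, 4, 4, 2            # channels, height, width, reduction ratio (toy sizes)
X = rng.normal(size=(C, H, W))     # toy feature maps

# Squeeze (Eq. (7)): global average pooling collapses each channel to one scalar
z = X.mean(axis=(1, 2))

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
relu = lambda t: np.maximum(t, 0.0)

W1 = rng.normal(size=(C // r, C))  # compression weights (random stand-ins)
W2 = rng.normal(size=(C, C // r))  # expansion weights (random stand-ins)

# Excitation (Eq. (8)): two-layer bottleneck yields channel weights in (0, 1)
s = sigmoid(W2 @ relu(W1 @ z))

# Recalibration: rescale each channel by its learned importance coefficient
X_recal = X * s[:, None, None]
```

In the trained model, W_1 and W_2 are learned jointly with the Informer backbone, so informative channels receive weights near 1 and noisy ones are suppressed toward 0.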
Figure 3. SEnet Module Flowchart.

2.2. Extreme Value Theory

Extreme value theory mainly investigates the statistical regularities of extreme outcomes of random events. In practical applications, there are two primary approaches for extreme value analysis: the block maxima method and the peaks-over-threshold method.

2.2.1. Block Maxima Method

The Block Maxima Method divides the original observation sequence into multiple non-overlapping time blocks and extracts the maximum or minimum value within each block. Let X_1, X_2, …, X_n be a sequence of independent and identically distributed random variables, and define M_n = max(X_1, X_2, …, X_n). If there exist constant sequences a_n > 0 and b_n such that the normalized extreme values converge to some limiting distribution function G(z),
\lim_{n \to \infty} P\left(\frac{M_n - b_n}{a_n} \le z\right) = G(z)   (9)
then G(z) must belong to one of the following three types of extreme value distributions:
(1)
Gumbel distribution (Extreme Value Type I)
G(z) = \exp\left[-\exp\left(-\frac{z - \mu}{\sigma}\right)\right], \quad z \in \mathbb{R}   (10)
(2)
Fréchet distribution (Extreme Value Type II)
G(z) = \begin{cases} 0, & z \le \mu \\ \exp\left[-\left(\dfrac{z - \mu}{\sigma}\right)^{-\alpha}\right], & z > \mu \end{cases}, \quad \alpha > 0   (11)
(3)
Weibull distribution (Extreme Value Type III)
G(z) = \begin{cases} \exp\left[-\left(-\dfrac{z - \mu}{\sigma}\right)^{\alpha}\right], & z < \mu \\ 1, & z \ge \mu \end{cases}, \quad \alpha > 0   (12)
These three distributions can be unified into the generalized extreme value distribution, with its cumulative distribution function expressed as (13):
G(z; \mu, \sigma, \xi) = \exp\left\{-\left[1 + \xi\left(\frac{z - \mu}{\sigma}\right)\right]^{-1/\xi}\right\}   (13)
where μ is the location parameter, characterizing the central position of the distribution; σ is the scale parameter, measuring the dispersion of the extreme values; and ξ is the shape parameter, determining the tail type of the distribution.
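For reference, the generalized extreme value CDF of Equation (13) can be implemented directly; the sketch below also covers the Gumbel limit ξ = 0 and the bounded support implied by the bracket term (a utility sketch, not part of the paper's pipeline):

```python
import numpy as np

def gev_cdf(z, mu=0.0, sigma=1.0, xi=0.1):
    """Generalized extreme value CDF G(z; mu, sigma, xi) of Eq. (13)."""
    z = np.asarray(z, dtype=float)
    if abs(xi) < 1e-12:
        # Gumbel limit (Extreme Value Type I, Eq. (10))
        return np.exp(-np.exp(-(z - mu) / sigma))
    # Clip the bracket at zero: below the lower endpoint (xi > 0) the CDF is 0,
    # above the upper endpoint (xi < 0) it is 1; both limits emerge from the power.
    t = np.maximum(1.0 + xi * (z - mu) / sigma, 0.0)
    with np.errstate(divide="ignore"):
        return np.exp(-t ** (-1.0 / xi))
```

Positive ξ recovers the heavy-tailed (Fréchet-type) case, negative ξ the bounded (Weibull-type) case.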

2.2.2. Peak over Threshold (POT) Method

(1)
Peak Over Threshold Extreme Value Theory
The Peak Over Threshold (POT) method uses samples exceeding a given threshold u to estimate extremes and is currently the most widely applied extreme value analysis approach. Assume X_1, X_2, …, X_n are n independent samples drawn from a population distribution function F(x). Given a sufficiently high threshold u, the conditional distribution of the excess Y = X − u, given X > u, is
F_u(y) = P(X - u \le y \mid X > u) = \frac{F(y + u) - F(u)}{1 - F(u)}, \quad y \ge 0   (14)
According to the Pickands–Balkema–de Haan theorem, when the threshold is sufficiently large (i.e., tending toward the right endpoint of the extreme value distribution), the above conditional distribution converges to the generalized Pareto distribution:
G(y) = \begin{cases} 1 - \left(1 + \dfrac{\xi y}{\sigma}\right)^{-1/\xi}, & \xi \ne 0 \\ 1 - \exp\left(-\dfrac{y}{\sigma}\right), & \xi = 0 \end{cases}, \quad y \ge 0   (15)
where ξ is the shape parameter, which determines the type of generalized Pareto distribution: when ξ = 0, the conditional distribution is exponential; when ξ > 0, it is a Pareto Type II distribution; and when ξ < 0, it is an ordinary Pareto distribution. The variable y ∈ D(σ, ξ), where
D(\sigma, \xi) = \begin{cases} [0, \infty), & \xi \ge 0 \\ \left[0, -\dfrac{\sigma}{\xi}\right], & \xi < 0 \end{cases}   (16)
Based on the fundamental principles of the Peak Over Threshold method, this paper proposes a prediction approach for extreme live load effects under the condition that live load effect values have already been obtained, employing the generalized Pareto distribution for extreme value analysis.
(2)
Threshold Selection
In extreme value analysis based on the generalized Pareto distribution, threshold selection is key to balancing estimation bias and variance. A threshold set too low introduces convergence bias, while one set too high increases variance because of scarce exceedance samples. This paper employs the Mean Residual Life (MRL) plot method to determine the optimal threshold by analyzing sample exceedances. The mean excess function e(u) of the generalized Pareto distribution can be expressed as (17):
e(u) = E(X - u \mid X > u) = \frac{\sigma_u + \xi u}{1 - \xi}, \quad \xi < 1   (17)
where u is the candidate threshold and σ_u is the scale parameter corresponding to threshold u. From Equation (17), the mean excess function is a linear function of u, with slope ξ/(1 − ξ) and intercept σ_u/(1 − ξ). For the samples X_1, X_2, …, X_n, the empirical estimate of the mean excess is
e_n(u) = \frac{1}{N_u} \sum_{i=1}^{N_u} (X_i - u), \quad X_i > u   (18)
Based on the number of exceedances N_u, the Mean Residual Life (MRL) plot is drawn. If there exists a threshold point u_0 beyond which the data exhibit an approximately linear trend, the exceedances follow a generalized Pareto distribution, and u_0 can be selected as the optimal threshold.
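A small NumPy sketch of the empirical mean excess scan behind an MRL plot, using a synthetic heavy-tailed sample as a stand-in for vehicle-induced deflection peaks:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic heavy-tailed sample (Pareto with x_m = 1), illustrative only
x = rng.pareto(3.0, size=5000) + 1.0

def mean_excess(x, u):
    """Empirical mean excess e_n(u) of Eq. (18) and the exceedance count N_u."""
    exceed = x[x > u]
    return exceed.mean() - u, len(exceed)

# Scan candidate thresholds; the MRL plot is e_n(u) versus u, and the
# threshold u0 is chosen where the curve becomes approximately linear
thresholds = np.quantile(x, np.linspace(0.5, 0.95, 10))
mrl = [(u, *mean_excess(x, u)) for u in thresholds]
```

In practice the (u, e_n(u)) pairs are plotted and inspected for the onset of linearity; here they are simply tabulated.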
(3)
Parameter Estimation for Generalized Pareto Distribution
Parameter estimation is a core issue in extreme value statistical analysis based on the generalized Pareto distribution. After reviewing the relevant literature and methodologies, this paper adopts maximum likelihood estimation for the parameters of the generalized Pareto model in Equation (15). When ξ ≠ 0, the log-likelihood function of the generalized Pareto distribution is
l(\sigma, \xi \mid y) = -n \ln \sigma - \left(1 + \frac{1}{\xi}\right) \sum_{i=1}^{n} \ln\left(1 + \frac{\xi y_i}{\sigma}\right)   (19)
When ξ = 0, the log-likelihood function can be expressed as
l(\sigma \mid y) = -n \ln \sigma - \frac{1}{\sigma} \sum_{i=1}^{n} y_i   (20)
Taking partial derivatives of the log-likelihood function with respect to ξ and the scale parameter σ , respectively, and setting them to zero yields a system of nonlinear equations for the parameters. Since this system has no analytical solution, numerical optimization methods must be employed to solve it, thereby obtaining the maximum likelihood estimates of the relevant parameters.
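The maximum likelihood step can be sketched numerically. The toy example below draws synthetic GPD excesses by inverse-transform sampling and minimizes the negative log-likelihood of Equation (19) by a crude grid search, a stand-in for the numerical optimizer used in practice:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_true, xi_true = 2.0, 0.2
# Inverse-transform sampling of GPD excesses: y = sigma/xi * ((1-U)^(-xi) - 1)
U = rng.uniform(size=4000)
y = sigma_true / xi_true * ((1.0 - U) ** (-xi_true) - 1.0)

def gpd_nll(sigma, xi, y):
    """Negative log-likelihood (xi != 0 branch of the GPD log-likelihood)."""
    t = 1.0 + xi * y / sigma
    if sigma <= 0 or np.any(t <= 0):
        return np.inf           # outside the admissible parameter region
    return len(y) * np.log(sigma) + (1.0 + 1.0 / xi) * np.sum(np.log(t))

# Crude grid search over (sigma, xi) in place of a gradient-based solver
sigmas = np.linspace(1.0, 3.0, 81)
xis = np.linspace(0.01, 0.5, 50)
nll = np.array([[gpd_nll(s, k, y) for k in xis] for s in sigmas])
i, j = np.unravel_index(nll.argmin(), nll.shape)
sigma_hat, xi_hat = sigmas[i], xis[j]
```

With a few thousand excesses, the grid minimum lands close to the generating parameters, mirroring what a proper nonlinear solver would return.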
(4)
Quantile Estimation
The result of extreme value analysis can be expressed through the p-quantile x_p, which predicts the extreme value of the relevant variable at an assurance level p (the probability of not exceeding this value is p):
x_p = \mu_0 + \frac{\sigma}{\xi}\left[\left(\frac{n}{N_u}(1 - p)\right)^{-\xi} - 1\right]   (21)
where ξ and σ represent the shape and scale parameters, respectively; n denotes the sample size; and N_u is the number of samples exceeding the threshold. Quantile estimation involves the relationship among the return period T_R, the design reference period T, and the assurance level p. According to the requirements of the General Code for Design of Highway Bridges and Culverts [34], this relationship can be expressed as
p = \left(1 - \frac{1}{T_R}\right)^{T}   (22)
For example, for an assurance level of p = 0.95 and a design reference period of T = 100 years, the corresponding extreme value return period is approximately T_R ≈ 1950 years.
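The quantile and return-period relations above are straightforward to evaluate; the parameter values in this sketch are hypothetical, chosen only to illustrate the formulas:

```python
import numpy as np

def return_period(p, T):
    """Return period T_R solving (1 - 1/T_R)**T = p (the code relationship above)."""
    return 1.0 / (1.0 - p ** (1.0 / T))

def gpd_quantile(p, mu0, sigma, xi, n, Nu):
    """p-quantile x_p of the POT/GPD model (xi != 0 branch)."""
    return mu0 + sigma / xi * ((n / Nu * (1.0 - p)) ** (-xi) - 1.0)

# Assurance level 0.95 over a 100-year design reference period
T_R = return_period(0.95, 100)

# Hypothetical POT fit: threshold mu0 = 0, sigma = 2, xi = 0.2,
# 500 exceedances out of 5000 samples
x95 = gpd_quantile(0.95, 0.0, 2.0, 0.2, 5000, 500)
x99 = gpd_quantile(0.99, 0.0, 2.0, 0.2, 5000, 500)
```

For p = 0.95 and T = 100, the function reproduces the roughly 1950-year return period quoted above, and the estimated quantile grows monotonically with the assurance level.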

2.3. Stochastic Finite Element Monte Carlo Simulation Method

2.3.1. Reliability Theory

Reliability theory describes the performance degradation and failure behavior of a system under uncertain conditions using random variables. The core of this research problem is the probability that a system does not fail under a given time or working condition. The relevant fundamental calculation formula is as follows (23)–(26):
Z = R - S   (23)
P_f = P(Z < 0) = P(R < S) = \iint_{r < s} f_{RS}(r, s) \, \mathrm{d}r \, \mathrm{d}s   (24)
\Phi(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}} e^{-t^2/2} \, \mathrm{d}t = P(X \le x) = 1 - a   (25)
P_f = \int_{-\infty}^{0} f_Z(z) \, \mathrm{d}z = \Phi(-\beta) = 1 - P_s   (26)
where Z is the limit state function, R represents resistance, S represents the action effect, a is the significance level, Φ(x) is the standard normal cumulative distribution function, P_f is the failure probability, P_s is the survival (reliability) probability, and β is the target reliability index.
Drawing on this theoretical basis, this paper calculates the failure probability between the early warning threshold and the structural response caused by uncertain variables, thereby further quantifying the reliability of the proposed method. By treating the i-th level warning as a quasi-failure event, the limit state function can be defined as
g_i(x) = T_i - y(x)   (27)
where T_i is the warning threshold for the i-th level and y(x) is the sampled or predicted response value. The corresponding failure probability is
P{f_i} = P{y(x) > T_i} = P{g_i(x) < 0}  (28)
Monte Carlo simulation is employed to generate a large number of samples and estimate the failure probabilities P{f_1}, P{f_2}, …, P{f_n}.
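The sampling estimate of these probabilities reduces to counting threshold exceedances. The sketch below uses synthetic normal response samples and assumed threshold values purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical response samples y(x) standing in for stochastic
# finite element outputs (units and parameters are assumptions).
y = rng.normal(loc=12.0, scale=3.0, size=200_000)

# Illustrative two-level thresholds T_1 < T_2 (values are assumptions).
T1, T2 = 18.59, 30.0

# P{f_i} = P{y(x) > T_i}, estimated as the fraction of exceeding samples.
Pf1 = np.mean(y > T1)
Pf2 = np.mean(y > T2)
print(f"Level I failure probability:  {Pf1:.4f}")
print(f"Level II failure probability: {Pf2:.4f}")
```

With 200,000 samples, as used later in the paper, the relative error of such an estimate is small whenever the failure probability is not extremely rare.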

2.3.2. Monte Carlo Finite Element Method

The core concept of the Monte Carlo Finite Element Method is to model uncertainties—such as material parameters, load variables, and prediction model errors—as random variables. By performing extensive random sampling of finite element simulations, structural responses and failure probabilities are statistically analyzed:
Y(x) = A·X  (29)
where A denotes the influence matrix, X represents the samples drawn from the probabilistic distribution models of the random variables, and Y(x) denotes the structural response.
The random vector [X_1, X_2, …, X_n]^T is subjected to N random samplings, yielding N sample vectors [x_1j, x_2j, …, x_nj]^T (j = 1, 2, …, N). Each sample vector is taken as the input, and a deterministic structural analysis is performed using the finite element method, giving N sample response values. Based on these N response samples, statistical hypothesis tests on the population distribution can be applied to identify the distribution type of the structural response R. When N is sufficiently large, the mean and standard deviation of R can be obtained using statistical theory, as shown in Equations (30) and (31):
μ_R = (1/N) Σ_{j=1}^{N} R_j  (30)
D_R = (1/(N − 1)) Σ_{j=1}^{N} (R_j − μ_R)²,  σ_R = √D_R  (31)
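Under the linear influence-matrix model Y(x) = A·X, one Monte Carlo pass amounts to sampling the input vector and propagating it through A. The following sketch uses an assumed three-variable influence vector and placeholder distributions:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Influence matrix A mapping three random inputs to one response
# (the coefficients are assumed, for illustration only).
A = np.array([0.8, -0.5, 0.3])

# Sample the random vector X: e.g. a normalized material parameter,
# a temperature action, a live-load effect (placeholder distributions).
X = np.column_stack([
    rng.normal(1.00, 0.05, N),
    rng.normal(0.00, 1.00, N),
    rng.lognormal(0.0, 0.25, N),
])

R = X @ A                    # N deterministic analyses, Y(x) = A·X

mu_R = R.mean()              # sample mean, Eq. (30)
sigma_R = R.std(ddof=1)      # sample standard deviation, Eq. (31)
print(f"mean = {mu_R:.3f}, std = {sigma_R:.3f}")
```

In the full method, each row of X would instead drive a finite element run; the statistics are computed from the resulting response samples in exactly the same way.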

3. Dynamic Early Warning Method

3.1. Data Preparation

This study takes a long-span cable-stayed bridge as the research background. The bridge is a prestressed concrete double-pylon, double-plane cable-stayed bridge with a total main-span length of 600 m. The bridge consists of six piers, numbered from Pier 16 to 21. The main girder adopts a prestressed concrete double-edge box-girder cross-section, with a deck width of 32.5 m. The main pylons are 98 m in height. The stay cables are arranged in a spatial double-plane configuration and consist of parallel wire stay cables. A total of 100 pairs of stay cables are installed along the entire bridge. Within the bridge health monitoring system, five vertical displacement monitoring points are arranged on the main girder. Machine vision-based measurement devices are employed to achieve accurate monitoring of vertical displacement. The specific layout of the monitoring points and the monitoring equipment is illustrated in Figure 4 and Figure 5.
The main girder deflection is recorded at a sampling interval of 1 s, whereas temperature data are collected every 10 min. To ensure consistency between the deflection and temperature data and to reduce the computational burden, the deflection data are downsampled by taking each 10 min average as one observation. Prior to downsampling, an ARIMA-based optimization algorithm [35] is applied to preprocess both deflection and temperature data. If the data remain beyond a predefined warning threshold for an extended period, maintenance personnel are notified for further inspection; otherwise, the preprocessing procedure corrects sudden abnormal values. This strategy ensures data stability and continuity while preventing transient anomalies from affecting the experimental results, thereby satisfying the deep learning model's requirement for long-term sequential data. In this study, a total of 8900 mid-span deflection data points collected over two consecutive months, from 22 October to 22 December 2025, are used for analysis. Correlation analysis between ambient temperature and mid-span deflection indicates a strong negative correlation, with an absolute correlation coefficient of 0.85, further confirming the significant influence of temperature on the deflection response. The correlation results are presented in Figure 6.
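The 10 min averaging step can be sketched as a simple block-mean downsampling followed by a Pearson correlation check (synthetic one-day series; the negative coupling coefficient is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for one day of monitoring: temperature every
# 10 min (144 points) and deflection every 1 s (86,400 points).
temp_10min = 15 + 5 * np.sin(np.linspace(0, 2 * np.pi, 144))
temp_1s = np.repeat(temp_10min, 600)
defl_1s = -0.8 * temp_1s + rng.normal(0.0, 0.5, temp_1s.size)

# Downsample deflection: the mean of each 600-sample (10 min) block
# becomes one observation, aligning it with the temperature series.
defl_10min = defl_1s.reshape(144, 600).mean(axis=1)

# Pearson correlation between temperature and downsampled deflection.
r = np.corrcoef(temp_10min, defl_10min)[0, 1]
print(f"correlation coefficient: {r:.2f}")  # strongly negative
```

Block averaging suppresses the second-scale traffic noise, which is why the correlation with the slowly varying temperature emerges so clearly after downsampling.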
Figure 5. (a) Displacement measurement using a machine vision camera. (b) Machine vision target.
Figure 6. (a) Temperature and mid-span deflection curves. (b) Scatter plot of temperature versus mid-span deflection.
In bridge structural health monitoring, strain or deflection response signals usually contain both live-load effects and temperature effects simultaneously. The live-load effect is mainly reflected in high-frequency components, whereas the temperature effect is mixed into the monitoring data in the form of low-frequency signals. The coupling of these two effects can easily interfere with structural state assessment, thereby affecting the scientific validity and accuracy of health monitoring results. To improve the prediction accuracy of deep learning models for structural responses, this study introduces the Variational Mode Decomposition (VMD) method [10] prior to data input to separate the temperature effects from the monitoring data.
Variational mode decomposition is an adaptive, non-recursive signal decomposition method whose core idea is to decompose the original signal into several intrinsic mode components of finite bandwidth by formulating and solving a variational problem. However, the penalty factor α and the number of modes K are difficult to determine manually. Therefore, the Crested Porcupine Optimizer (CPO) [36] is introduced to optimize the parameter combination (α, K) within the VMD framework. The core concept of this approach is to use the minimum envelope entropy as the fitness function, define appropriate parameter search ranges, and automatically identify the optimal parameter combination that yields the sparsest and most separable signal decomposition under a given population size and number of iterations. This strategy avoids the subjectivity of manual parameter selection and allows the number of modes to be adaptively determined according to the spectral characteristics of the monitoring signal. The temperature-effect separation results obtained with CPO-VMD provide cleaner and more reliable input data for subsequent deep learning model training and prediction, as well as for dynamic early-warning threshold construction and reliability analysis. The decomposition results are shown in Figure 7 and Figure 8.
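The envelope-entropy fitness that CPO minimizes can be sketched as follows; the helper names are illustrative, and a real implementation would evaluate it on the modes returned by VMD for each candidate (α, K):

```python
import numpy as np
from scipy.signal import hilbert

def envelope_entropy(mode: np.ndarray) -> float:
    """Envelope entropy of one mode: the Hilbert envelope is normalized
    into a probability-like sequence whose Shannon entropy is returned.
    A concentrated (sparse) envelope gives a low value."""
    env = np.abs(hilbert(mode))
    p = env / env.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

def fitness(modes) -> float:
    """Minimum envelope entropy over the K modes, used as the CPO
    fitness for a candidate VMD parameter pair (alpha, K)."""
    return min(envelope_entropy(m) for m in modes)

t = np.linspace(0.0, 1.0, 2048)
steady = np.sin(2 * np.pi * 5 * t)                            # flat envelope
burst = np.exp(-((t - 0.5) / 0.05) ** 2) * np.sin(2 * np.pi * 50 * t)
print(f"steady tone:     {envelope_entropy(steady):.3f}")
print(f"impulsive burst: {envelope_entropy(burst):.3f}")      # lower entropy
```

The impulsive burst scores lower because its envelope energy is concentrated in time, which is exactly the sparsity property the optimizer rewards when searching over (α, K).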
Based on the CPO-VMD optimization results, the optimal parameter combination is determined as α = 522 and K = 6. To more accurately distinguish temperature-induced deflection responses from live-load-induced deflection effects, correlation coefficients between temperature and each decomposed mode are evaluated. In Table 1, as K increases from 1 to 4, the correlation coefficient increases from −0.6870 to −0.8422; after further incorporating IMF5 and IMF6, the correlation coefficient remains at a similar level without further increase. Furthermore, as shown in the time-domain and frequency-domain results in Figure 8, the energy of IMF1–IMF4 is mainly concentrated in the low-frequency range, exhibiting clear low-frequency-dominated spectral characteristics. In contrast, IMF5–IMF6 concentrate their spectral energy in higher frequency bands with broader mid- to high-frequency components, which are typically associated with instantaneous deflection fluctuations induced by vehicle passages. Based on these observations, IMF1–IMF4 are identified as temperature-induced deflection components shown in Figure 9a, while IMF5–IMF6 are attributed to vehicle load effects shown in Figure 9b.

3.2. Baseline Threshold

According to the calculation procedure of Pareto extreme value theory, the baseline value of the warning threshold is determined. First, a threshold for the exceedance sample must be selected. The optimal threshold u_0 is determined using the mean residual life plot method. The relationship between the mean excess function and the candidate threshold is shown in Figure 10. As illustrated in the figure, when the threshold exceeds 0.33 mm, the mean excess function varies approximately linearly with the threshold. Therefore, u_0 is taken as 0.33 mm.
By plotting the cumulative distribution function (CDF) and the Q–Q (quantile–quantile) plot, the goodness of fit between the exceedance data and the GPD can be evaluated intuitively. Figure 11 shows that the fitted GPD curve closely overlaps the empirical cumulative distribution function of the exceedance data, and the points in the Q–Q plot lie almost entirely along the fitted line. This indicates that the Generalized Pareto Distribution fits the data exceeding 0.33 mm very well.
The above steps determine the peak-over-threshold value u_0 and demonstrate that the exceedance data follow the Generalized Pareto Distribution (GPD). In this study, maximum likelihood estimation is adopted, and under a 95% confidence level, the shape parameter is obtained as ξ = 0.325 and the scale parameter as σ = 0.105. With these parameters, the extreme value corresponding to a 95% guarantee rate within a 100-year return period is calculated using the quantile estimation Formula (21), yielding x_p = 18.59 mm.
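Since the paper's quantile formula (21) is not reproduced in this section, the sketch below uses the standard peaks-over-threshold quantile x_p = u + (σ/ξ)[((n/N_u)(1 − p))^(−ξ) − 1] with SciPy's GPD fit; the synthetic exceedances and the chosen p are stand-ins:

```python
import numpy as np
from scipy.stats import genpareto

def pot_quantile(exceed: np.ndarray, u: float, n: int, p: float):
    """Fit a GPD to the exceedances over threshold u and return the
    standard peaks-over-threshold quantile exceeded with probability
    1 - p in a sample of size n, plus the fitted (xi, sigma)."""
    xi, _, sigma = genpareto.fit(exceed - u, floc=0)
    Nu = exceed.size
    xq = u + (sigma / xi) * (((n / Nu) * (1.0 - p)) ** (-xi) - 1.0)
    return xq, xi, sigma

# Synthetic exceedances drawn from a GPD with parameters close to those
# reported in the text (xi = 0.325, sigma = 0.105); values are stand-ins.
rng = np.random.default_rng(4)
exceed = 0.33 + genpareto.rvs(0.325, scale=0.105, size=500, random_state=rng)

xq, xi, sigma = pot_quantile(exceed, u=0.33, n=8900, p=0.999)
print(f"xi = {xi:.3f}, sigma = {sigma:.3f}, x_0.999 = {xq:.2f} mm")
```

With a positive shape parameter the fitted tail is heavy, so the estimated quantile grows quickly as p approaches 1, which is what drives the large 100-year baseline value.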

3.3. Prediction of Temperature-Induced Deflection Effects

The separation and analysis of deflection monitoring data indicate that temperature effects are the dominant factor governing structural deflection variations. Neglecting temperature effects can easily lead to missed or false warnings. To achieve more accurate and stable warning performance, this study introduces a deep learning approach to model and predict temperature-induced deflection, which exhibits periodic and nonlinear characteristics, and constructs dynamic warning thresholds on this basis. An Informer–SEnet deep learning prediction model is proposed. All experiments are implemented in a Python 3.9 environment using the PyTorch 2.0.0+cu118 framework and run on a Windows 11 operating system. Referring to parameter ranges commonly used in similar deep learning models, multiple trials are conducted to determine the optimal parameter configuration. The Informer adopts an encoder–decoder architecture consisting of three encoder layers and two decoder layers to balance representational capacity and generalization performance. Both the encoder and decoder include multi-head attention mechanisms, feedforward neural networks, and hidden layers. The hidden dimension is set to 256, with eight attention heads, and the feedforward network dimension is eight times the hidden dimension. Dropout regularization with a rate of 0.1 is applied after each fully connected layer, and the ReLU activation function is used. The batch size is set to 64, and the model is trained for 120 epochs. The SE module consists of feature squeezing and channel recalibration. Channel weights are adaptively adjusted via global average pooling and two fully connected layers. The number of channels is set to 64 with a compression ratio of 16%, which reduces computational complexity while maintaining model performance. Considering the robustness and fast convergence of the Adam optimizer in training attention-based deep time-series models, Adam is selected for model optimization.
The learning rate is set to 1 × 10⁻⁴, with default exponential decay parameters β₁ = 0.9 and β₂ = 0.999 and a numerical stability term ε = 1 × 10⁻⁸. A weight decay coefficient of 1 × 10⁻⁵ is introduced for regularization. In addition, a learning-rate scheduling strategy is adopted: when the validation loss does not improve for 10 consecutive epochs, the learning rate is reduced by a factor of 0.5. To ensure comparability and reproducibility, all experimental cases use the same optimizer configuration and training parameters. The corresponding parameter settings are summarized in Table 2 [37].
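The squeeze-and-excitation recalibration described above can be sketched in a few lines of NumPy (random illustrative weights; a reduction ratio of 16 is assumed here, as is conventional for SE blocks):

```python
import numpy as np

rng = np.random.default_rng(5)

def se_block(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-excitation over features shaped (channels, time).
    Squeeze: global average pooling per channel; excitation: two dense
    layers (ReLU then sigmoid); recalibrate: scale each channel."""
    z = x.mean(axis=1)                      # squeeze: (C,)
    s = np.maximum(z @ w1, 0.0)             # FC1 + ReLU: (C/r,)
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))     # FC2 + sigmoid: (C,)
    return x * s[:, None]                   # channel-wise reweighting

C, r, T = 64, 16, 143                       # channels, ratio, window length
w1 = rng.normal(0, 0.1, (C, C // r))        # illustrative random weights
w2 = rng.normal(0, 0.1, (C // r, C))
x = rng.normal(size=(C, T))
y = se_block(x, w1, w2)
print(y.shape)                              # same shape, rescaled channels
```

Because the excitation gate is a sigmoid, every channel is scaled by a factor in (0, 1): informative channels are preserved while weak ones are attenuated, which is the recalibration effect exploited in the Informer–SEnet model.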
In this experiment, the temporal resolution of the data and the physical characteristics of bridge response variations are considered, and after multiple trials, the window length is finally set to S = 143 (one day) and the prediction horizon to d = 143 (the next day), so as to balance long-term trend representation and short-term fluctuation capture. With this setting, a total of N − S − d + 1 valid input–target samples (where N is the total number of monitoring points) are obtained. All samples are then split into training, validation, and test sets at a ratio of 7:2:1, while ensuring that the three sets do not overlap in time, so as to objectively evaluate the model's generalization performance.
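The sliding-window sample construction and chronological 7:2:1 split can be sketched as follows (a synthetic stand-in series of 8900 points; each sample consumes S input points plus d target points):

```python
import numpy as np

def make_windows(series: np.ndarray, S: int, d: int):
    """Build (input window, prediction target) pairs with a sliding
    step of one sample: inputs of length S, targets of length d."""
    n_samples = len(series) - S - d + 1
    X = np.stack([series[i : i + S] for i in range(n_samples)])
    Y = np.stack([series[i + S : i + S + d] for i in range(n_samples)])
    return X, Y

series = np.sin(np.linspace(0, 60, 8900))   # stand-in for deflection data
X, Y = make_windows(series, S=143, d=143)

# Chronological 7:2:1 split so the three sets do not overlap in time.
n = len(X)
n_tr, n_va = int(0.7 * n), int(0.2 * n)
X_train, X_val, X_test = X[:n_tr], X[n_tr:n_tr + n_va], X[n_tr + n_va:]
print(X.shape, X_train.shape[0], X_val.shape[0], X_test.shape[0])
```

Splitting in chronological order, rather than shuffling, prevents future observations from leaking into the training set, which matters for the leakage checks discussed below.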
Deep learning models generally require high-quality and large-volume data. Due to the limited historical data available for the case-study bridge, this study primarily adopts a public dataset to ensure the reliability and persuasiveness of the prediction results, and to validate the proposed method across data from different time periods. The public dataset is drawn from a standard open-source resource widely used in structural health monitoring research, Bridge-health-monitoring-master. It contains time-series responses of strain, displacement, and temperature under varying thermal conditions. The deflection sensor is sampled at 10 min intervals, and 4433 records from May to June 2020 are selected as the research basis. To evaluate the predictive performance of the model across different datasets and time spans, comparative experiments and longitudinal ablation studies are conducted on the temperature-effect-separated data. LSTM, BiLSTM, and Transformer are selected as baseline models for comparison, while variants such as Informer-Encoder and Informer-Encoder-SEnet are used for ablation. The results demonstrate that Informer–SEnet achieves higher accuracy and better stability in temperature-induced deflection prediction, as shown in Figure 12 and Figure 13.
The above results show that the Informer–SEnet model consistently outperforms the baseline models on both the measured bridge dataset and the public dataset. In the comparative experiments, it achieves a maximum reduction of 51.57% in MAE and 47.03% in RMSE, and a maximum increase of 10.17% in R 2 . In the ablation studies, the maximum reductions in MAE and RMSE reach 41.43% and 40.70%, respectively, while R 2 improves by up to 7.66%. These improvements demonstrate the superiority of Informer–SEnet for multivariate long-term time-series forecasting across different seasons. Moreover, the prediction errors are small, and the model attains an R 2 of at least 0.97 in both the measured-data and public dataset experiments, indicating that the predicted values can closely reproduce the temporal trends of the ground truth. Since these trends primarily reflect the thermal expansion and contraction of materials driven by temperature variations, the results further confirm that the model can capture temperature-induced structural response changes more accurately. This provides effective support for updating the time-varying baseline of dynamic warning thresholds and for subsequent threshold calibration.
To further verify the reliability and purity of the deep learning prediction model, two additional experimental settings are designed. The first group conducts five-fold cross-validation [38]. The second group tests the model using measured data that contain mixed vehicle load effects. This design aims to ensure the authenticity and robustness of the experimental procedure as well as the integrity of the data sources. The corresponding results are shown in Figure 14 and Table 3.
The results of the five-fold cross-validation show that all evaluation metrics exhibit superior and stable performance. The mean values of RMSE and MAE remain consistently low across repeated runs, and their corresponding standard deviations are generally small, indicating limited prediction variability and good robustness to different data splits. In contrast, experiments with randomly mixed vehicle-load data yield inferior performance across all metrics, with noticeably larger discrepancies between predicted and observed values. Together, these two experiments demonstrate that the proposed prediction model does not suffer from data leakage or unintended data mixing during training and evaluation.

3.4. Warning Threshold Setting Method

(1)
Yellow warning threshold (Level I warning)
The yellow warning threshold is determined based on historical monitoring data. Specifically, the p -quantile x p obtained from the Pareto extreme value theory-based model is first adopted. To ensure consistency with the probabilistic quantile values specified in Chinese bridge design codes [34], the guarantee probability p is set to 95%. As derived in Section 3.2, the baseline threshold is obtained as −18.59 mm. This value is then superimposed with the trend term predicted by the Informer–SENet deep learning model for the temperature-induced effects on the bridge structure. This approach characterizes extreme structural responses under temperature actions while maintaining statistical stability, thereby enabling early identification of temperature anomalies or long-term cumulative effects. Taking the prediction of the temperature-induced deflection trend over the next four days as an example, the comparison between the predicted trend and the measured data demonstrates that the model is capable of capturing the evolution of structural behavior governed by thermal expansion. This indicates that the model has practical application value. Based on the predicted trend, a dynamic first-level warning threshold is constructed by superimposing the baseline threshold, as illustrated in Figure 15.
(2)
Red warning threshold (Level II warning)
The red warning threshold is used to represent the safety control limit of the structure under the superposition of multiple unfavorable effects, with a focus on potential safety risk states. In this study, a full-bridge finite element model is established in Midas Civil 2019, as shown in Figure 16. The main girder, pylons, and substructure are modeled using beam elements, while the stay cables are modeled using tension-only truss elements. The whole-bridge model consists of 1447 nodes and 1198 elements. Based on code-specified limit conditions, the most unfavorable vehicle-load-induced deflection under the serviceability limit state is obtained. The finite element analysis yields a value of −85.9 mm. This value is superimposed on the predicted temperature-induced trend component to form the red warning threshold, which can be seen in Figure 17.

4. Engineering Application Analysis

4.1. Reliability Analysis

(1)
Selection of uncertainty factors
To quantitatively characterize the uncertainties present in the process of evaluating structural responses and early warning thresholds, it is essential to reasonably select and probabilistically model the key influencing factors in stochastic finite element analysis. Considering the mechanical characteristics of bridge structures and the sources of monitoring data, this paper treats material mechanical parameters, load effects, measurement errors, and model prediction errors as the main random variables. Among them, parameters such as material elastic modulus reflect the inherent uncertainties of the structure, while live load effects and temperature actions represent external random excitations. The probability distribution types and statistical parameters for each random variable are determined based on statistical analysis of measured data, recommended values from design codes, or relevant literature, and it is assumed that the random variables are independent of each other. On this basis, probabilistic simulation of structural responses is conducted through a stochastic finite element model, laying the foundation for subsequent Monte Carlo sampling and the calculation of early warning reliability indices.
Based on relevant design codes and reference literature [34,39], the distribution types of uncertainty factors are as shown in Table 4.
(2)
Parameter Sensitivity Analysis
Sensitivity analysis is a qualitative and quantitative method for evaluating the impact of changes in system parameters on system performance. By determining how variations in each parameter affect the performance of the early warning system under different conditions, the reliability of the warning method can be assessed. First, after identifying the random parameters and their distributions, the finite element model is used to examine how the structural response varies under perturbations of each parameter. Based on design specifications and realistic variation ranges, a variation range is set for each selected influencing factor, for instance increasing or decreasing values by a specific percentage. The sensitivity of each parameter, as obtained from the finite element model, is illustrated in Figure 18 and Figure 19 and Table 5. Among the factors, the vehicle braking force refers to the force exerted on the bridge deck during braking (sudden acceleration or deceleration); the resulting deflection variation is incorporated into the live-load effect value.
(3)
Result Analysis
After determining the random parameters and conducting parameter sensitivity analysis, this section constructs a test set consisting of 12 representative working conditions based on the principle of full factorial experimental design. As shown in Table 6, these conditions range from an ideal baseline scenario involving only temperature variation to complex loading scenarios that combine temperature, wind speed, vehicle loads, and their extreme combinations. On this basis, Monte Carlo simulations and reliability index comparisons are employed to quantitatively reveal the evolution of the probabilistic characteristics of the warning thresholds under the coupled effects of multiple environmental factors.
Based on the above combined working conditions, Monte Carlo simulation is employed to perform random sampling and to calculate the failure probability of the warning thresholds under different scenarios. In this study, the number of samples is set to 200,000. The sampling results are presented in Figure 20 and Figure 21 and Table 7 and Table 8.
(4)
Contrast Experiments
To further verify the generalization capability and reliability of the proposed method, an existing dynamic warning method reported in the literature is selected as Method 1 [40], and a fixed-threshold method is adopted as Method 2 [41] for comparative experiments. The core idea of the warning strategy in Method 1 is to use the residuals between the values predicted by a deep learning model and the corresponding measured values, and then to infer the warning threshold line based on the concept of confidence intervals, namely:
Y = X − P
where X denotes the measured value, P represents the predicted value, and Y is the residual. Assuming that Y is approximately normally distributed, the three-sigma (Pauta) criterion is applied: the mean and standard deviation of Y are calculated, and the warning threshold interval is then inferred from the corresponding confidence interval.
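Method 1's residual-based band can be reproduced schematically as follows (synthetic measured and predicted series; the noise levels are assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical measured values X and model predictions P (Method 1 sketch).
measured = rng.normal(0.0, 1.0, 5000)
predicted = measured + rng.normal(0.0, 0.2, 5000)   # small model error

residual = measured - predicted                      # Y = X - P
mu, sigma = residual.mean(), residual.std(ddof=1)

# Three-sigma (confidence-interval) warning band around the prediction.
upper = predicted + (mu + 3 * sigma)
lower = predicted + (mu - 3 * sigma)

outside = np.mean((measured > upper) | (measured < lower))
print(f"fraction outside the band: {outside:.4f}")
```

For near-normal residuals only about 0.3% of points fall outside such a band, which illustrates why this strategy is sensitive to any incidental fluctuation that inflates the residual tail.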
The warning strategy of Method 2 is as follows: the Level I warning threshold is determined from the historical monitoring database by statistically calculating the mean and standard deviation of the monitoring data, and the Level I warning line is defined as the mean plus three times the standard deviation. The Level II warning threshold is directly taken as the code-specified limit value. Based on the above methods, the corresponding experimental results are illustrated in Figure 22, Figure 23, Figure 24, Figure 25 and Figure 26.
Based on the stochastic finite element Monte Carlo simulation method, this study systematically evaluates the reliability characteristics of the constructed two-level dynamic warning thresholds under the influence of multi-source uncertainties. The results show that, within the Level I warning system, the maximum failure probability of the proposed method is 35.02%, whereas the maximum failure probabilities of Method 1 and Method 2 are 52.02% and 38.81%, respectively. Within the Level II warning system, the maximum failure probability of the proposed method is only 0.56%, compared with 5.02% for Method 1 and 8.59% for Method 2.
The failure probability of the Level I warning threshold increases with the intensification of coupled environmental and loading effects, while the Level II warning threshold remains at a consistently low level, demonstrating favorable safety redundancy and stability. Comparative results further indicate that, relative to the dynamic warning method reported in the literature and the fixed-threshold method, the proposed approach exhibits lower failure probabilities and more consistent probabilistic response characteristics under most working conditions. It effectively suppresses threshold drift caused by the superposition of environmental effects, live-load randomness, and measurement errors. Overall, the above analysis verifies the probabilistic reliability and engineering applicability of the proposed dynamic warning method, providing quantitative support for its practical implementation in bridge health monitoring and early warning systems.

4.2. Engineering Application Performance

4.2.1. Dynamic Early Warning Implementation Procedure

In practical engineering applications, the proposed dynamic early-warning method is deployed based on the existing bridge health monitoring system. The monitoring system continuously collects structural response and environmental information through machine-vision targets installed at key sections of the main girder and temperature sensors and uploads the data to the cloud for storage. The early-warning system operates on the server side of the monitoring platform. It first preprocesses the real-time acquired data and then establishes dynamic warning thresholds for structural responses according to the proposed method. When the measured deflection response exceeds the corresponding level of the threshold, the system automatically triggers the corresponding warning alert and pushes the warning information to the operation and maintenance management module, providing a basis for subsequent manual inspection and maintenance decision-making. The specific workflow is illustrated in the following Figure 27.
Figure 28 and Figure 29 present the first- and second-level dynamic deflection warning thresholds established based on the future temperature-induced trend predicted by the Informer–SEnet model, extreme value theory, and finite element analysis, together with the warning thresholds defined using reference methods. During the continuous monitoring period, the data variations remain relatively stable, making it difficult to directly compare the warning performance of thresholds obtained by different methods. To demonstrate the superiority of the dynamic early-warning approach proposed in this study, two loading scenarios are assumed: (1) Scenario 1: one 50-ton vehicle is introduced into normal traffic flow; (2) Scenario 2: two 100-ton overloaded vehicles are introduced into normal traffic flow. Finite element analysis is performed for both scenarios, and the resulting mid-span downward deflections of the main girder of the case-study bridge are listed in Table 9.

4.2.2. Effectiveness of Dynamic Early Warning Implementation

To quantitatively evaluate the identification capability and operational stability of the health monitoring and early warning system in practical engineering applications, this study introduces a confusion matrix to statistically analyze the dynamic warning results. A confusion matrix is a statistical tool used to describe the correspondence between the predicted results of a classification model and the true structural states, and it can comprehensively reflect the performance of the warning system under different decision scenarios. The confusion matrix can be expressed as
C = [ TP  FN
      FP  TN ]
where TP (True Positive) denotes the number of samples in which an abnormal condition actually occurs and the system correctly triggers a warning; FN (False Negative) denotes the number of samples in which an abnormal condition actually occurs but the system fails to trigger a warning; FP (False Positive) denotes the number of samples in which the structure is in a normal state but the system triggers a warning; and TN (True Negative) denotes the number of samples in which the structure is actually normal and the system does not trigger a warning.
In the study, accuracy and error rate are mainly adopted as the comprehensive evaluation indicators, which are defined as follows:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Error Rate = (FP + FN) / (TP + TN + FP + FN)
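These two indicators can be computed directly from the four confusion-matrix counts (the counts below are illustrative, not results from the paper's experiments):

```python
def warning_metrics(tp: int, fn: int, fp: int, tn: int):
    """Accuracy and error rate of the warning system computed from
    the confusion-matrix counts TP, FN, FP, TN."""
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total
    error_rate = (fp + fn) / total
    return accuracy, error_rate

# Illustrative counts for a monitoring period with 1000 evaluated samples.
acc, err = warning_metrics(tp=48, fn=2, fp=5, tn=945)
print(f"accuracy = {acc:.3f}, error rate = {err:.3f}")  # 0.993 and 0.007
```

By construction the two indicators sum to one, so either can serve as the headline metric; the paper reports both for clarity.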
The study evaluates the warning performance of different methods and the proposed approach through comparative analysis based on actual engineering data under identical monitoring conditions. From the perspective of the confusion matrix, the effectiveness of warning implementation over a continuous period is assessed by modeling the dynamic warning process in practical engineering as a classification problem. In this way, the identification capability and stability of the warning system under real monitoring data conditions are quantitatively evaluated. The relevant statistical results are shown in Figure 30, Figure 31 and Figure 32 and Table 10 and Table 11.
Results from real-time monitoring and simulation experiments conducted on an in-service long-span cable-stayed bridge indicate that the proposed dynamic early-warning method can stably achieve online updating and hierarchical triggering of warning thresholds under actual operating conditions. Compared with the fixed-threshold and dynamic warning methods reported in the literature, the introduction of a temperature-induced trend prediction enables the warning baseline to adaptively adjust to environmental variations, effectively mitigating the interference of temperature fluctuations on warning criteria. Based on confusion-matrix evaluation, the proposed method can accurately identify structural abnormal states in the warning system. Under Scenario 1, the proposed method improves accuracy by 19.27% and reduces the error rate by 16.16% compared with Method 1, while achieving comparable performance to Method 2. Method 1, however, exhibits excessive sensitivity to incidental fluctuations under normal traffic conditions, leading to false alarms. Under Scenario 2, the proposed method improves accuracy by 19.03% and reduces the error rate by 15.99% relative to Method 1; compared with Method 2, it achieves a 0.17% increase in accuracy and a 0.17% reduction in error rate. Notably, Method 2 fails to timely capture abrupt structural anomalies induced by overloaded vehicles, resulting in missed warnings. Overall, these results demonstrate that the proposed approach delivers higher discrimination accuracy and lower false alarm and missed alarm rates in bridge health monitoring early-warning systems. By balancing warning sensitivity and system stability, the method shows strong engineering feasibility and promising potential for practical deployment and wider application.

5. Conclusions

The study addresses the challenge in bridge health monitoring of establishing dynamic warning thresholds that simultaneously account for environmental adaptability, engineering rationality, and quantitative reliability. A deep learning-based dynamic warning threshold method for bridge structures is proposed, and its effectiveness and engineering applicability are systematically investigated using monitoring data from an in-service long-span cable-stayed bridge. The main conclusions are as follows:
(1) To address the strong coupling between temperature effects and vehicle load effects in bridge structural responses, this study develops an Informer–SENet-based deep learning model to accurately predict temperature-induced structural response trends and uses them as a dynamic baseline. On this basis, vehicle load effects are modeled by integrating extreme value theory and finite element analysis, and a two-level dynamic warning threshold strategy suitable for the operational stage is proposed, enabling adaptive adjustment of warning thresholds in response to environmental variations.
(2) By introducing a stochastic finite element Monte Carlo simulation approach, probabilistic modeling is performed for multi-source uncertainties, including material properties, load effects, and environmental factors. The failure probabilities and reliability indices of different warning levels under multiple combined working conditions are systematically evaluated. Comparative results show that the failure probability of the proposed method is reduced to as low as 0.56% under multi-condition scenarios, demonstrating that the constructed hierarchical dynamic warning thresholds exhibit good stability and clear probabilistic significance under uncertainty, and provide quantitative reliability support for warning decisions.
(3) The proposed dynamic warning method is applied to long-term monitoring data from an in-service long-span cable-stayed bridge to analyze the response characteristics of dynamic warning thresholds under real operating conditions. The results indicate that, compared with other threshold-setting methods, the proposed approach effectively suppresses false alarms and missed alarms caused by environmental temperature variations, while maintaining strong detection capability for abnormal responses, thereby improving the overall stability and engineering applicability of the warning system.
(4) A confusion-matrix-based evaluation is introduced to assess the warning performance of the proposed method in real monitoring and simulation experiments. The results show that the proposed method significantly reduces the false alarm rate while maintaining a high abnormality detection capability. Compared with baseline approaches, the warning accuracy is improved by up to 19.27% and the error rate is reduced by as much as 16.16%, further demonstrating the discriminative capability and robustness of the proposed dynamic early-warning method in engineering applications.
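Conclusion (1) reduces to a simple composition rule: the predicted temperature-induced deflection trend serves as a time-varying baseline, and each warning level widens that baseline by a live-load margin. The sketch below is illustrative only; the function name, margin values, and sign convention (downward deflection negative, in mm) are assumptions, with the Level I margin standing in for a high quantile of the fitted generalized Pareto live-load distribution and the Level II margin for the code-permitted live-load deflection from finite element analysis.

```python
import numpy as np

def dynamic_thresholds(baseline_pred, live_load_quantile, code_limit):
    """Two-level dynamic warning thresholds around a predicted
    temperature-induced deflection baseline (mm, downward negative).

    Level I (yellow): baseline widened by a high quantile of the
        live-load deflection distribution (e.g., a GPD tail quantile).
    Level II (red):   baseline widened by the code-permitted live-load
        deflection obtained from finite element analysis.
    """
    baseline = np.asarray(baseline_pred, dtype=float)
    yellow = baseline - live_load_quantile  # Level I threshold line
    red = baseline - code_limit             # Level II threshold line
    return yellow, red

# Hypothetical values: four prediction steps, 12 mm GPD quantile, 40 mm code limit
yellow, red = dynamic_thresholds([-20.0, -22.5, -21.0, -19.5], 12.0, 40.0)
```

Because the baseline is re-predicted with each sliding window, both threshold lines move with the diurnal temperature cycle instead of remaining fixed.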
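The stochastic finite element Monte Carlo evaluation in conclusion (2) amounts, at each time instant, to sampling the uncertain parameters, propagating them to a response, and counting how often the combined response crosses the threshold. The following sketch is a deliberately simplified stand-in: the distributions loosely mirror those in Table 4, the unit-response factors are hypothetical, and a full implementation would propagate samples through the calibrated finite element model rather than through linear factors.

```python
import numpy as np

def mc_failure_probability(threshold_mm, n=100_000, seed=42):
    """Crude Monte Carlo estimate of the probability that the combined
    mid-span response exceeds (is more negative than) a warning threshold.
    Distributions and response factors are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    temp_diff = rng.normal(0.0, 10.0, n)                      # overall temperature difference
    live_load = np.clip(rng.normal(3.0, 10.0, n), 0.0, 55.0)  # truncated-normal vehicle load
    meas_err = rng.normal(0.0, 0.5, n)                        # measurement error
    pred_err = rng.normal(0.0, 1.0, n)                        # baseline prediction error
    # Hypothetical mid-span unit-response factors (mm per unit parameter change)
    response = -(1.12 * temp_diff + 1.55 * live_load) + meas_err + pred_err
    return float(np.mean(response < threshold_mm))

p_fail = mc_failure_probability(-60.0)  # failure probability for a -60 mm threshold
```

Repeating this estimate at every prediction step, with the dynamic threshold substituted for the fixed value above, yields the time series of failure probabilities used to compare the warning strategies.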
Overall, the proposed deep-learning-based dynamic early-warning threshold method for bridge structures is supported by clear physical and statistical foundations. It also demonstrates good adaptability and stability in engineering applications, providing a feasible technical pathway for transforming bridge health monitoring systems from experience-based warning schemes toward data-driven and risk-oriented frameworks. Nevertheless, several limitations remain: (1) Data dependence. The deep learning prediction model is sensitive to the continuity and stability of monitoring data; sensor faults or abnormal data can noticeably affect the performance of dynamic warning. (2) Model transferability and generalizability. The current conclusions are mainly validated on a single bridge. Although the generalization ability of the deep learning model has been examined, systematic cross-bridge validation is still needed. In addition, finite element model parameters require recalibration when structural systems and boundary conditions change. (3) Limited coverage of extreme conditions. For low-frequency but high-impact events—such as overloaded vehicles, extreme temperature gradients, and strong winds—threshold behavior in the tail-risk region still requires long-term verification and calibration using multi-source data, due to the scarcity of such samples and limited monitoring duration.
To address these issues, future research can be pursued in the following directions: (1) Multi-bridge comparison and cross-domain validation. Benchmark datasets covering multiple bridge types and multiple monitoring indicators should be established to systematically evaluate the applicability of the proposed method across different bridges and warning metrics. (2) Enhanced tail-risk and uncertainty modeling. Prediction uncertainty and threshold confidence levels should be further characterized, enabling threshold inversion and risk-constrained design targeting a specified failure probability. (3) Multi-source data fusion and abnormal-scenario simulation. Numerical simulation should be introduced to generate synthetic data under extreme scenarios, thereby expanding training samples and improving the system’s capability to identify and warn against complex abnormal states. With these extensions, the reliability, transferability, and practical engineering value of the proposed dynamic warning method under complex service environments are expected to be further improved.

Author Contributions

Conceptualization, Y.X. and Z.Z.; methodology, Y.X., Z.Z. and Q.Q.; software, Q.Q.; formal analysis, Y.X. and Z.Z.; investigation, F.G.; resources, F.G.; data curation, Z.Z., Y.X., F.G. and Q.Q.; writing—original draft preparation, Q.Q.; writing—review and editing, Y.X. and Q.Q.; supervision, Y.X. and Z.Z.; visualization, Y.X. and Q.Q.; project administration, F.G.; validation, F.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Fentao Guo was employed by the company Guangdong Huitao Engineering Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figure 4. Layout of vertical displacement monitoring points on the main girder.
Figure 7. (a) Original mid-span deflection data. (b) Separation results based on the VMD method.
Figure 8. (a) Time-domain decomposition results of CPO-VMD. (b) Frequency-domain decomposition results of CPO-VMD.
Figure 9. (a) Low-frequency deflection effects. (b) High-frequency deflection effects.
Figure 10. (a) Mean excess distribution plot. (b) GPD fitted distribution plot.
Figure 11. (a) Fitted distribution of the excess values. (b) Quantile goodness-of-fit plot.
Figure 12. (a) Radar chart of predictive performance for comparative models (measured dataset). (b) Radar chart of predictive performance for ablation experiments (measured dataset).
Figure 13. (a) Radar chart of predictive performance for comparative models (open dataset). (b) Radar chart of predictive performance for ablation experiments (open dataset).
Figure 14. (a) Comparison of prediction curves for mixed data. (b) Bar chart of prediction performance metrics.
Figure 15. (a) Prediction of the temperature-induced deflection trend term. (b) Yellow warning threshold line.
Figure 16. Finite element model of the bridge.
Figure 17. (a) Live-load-induced deflection. (b) Red warning threshold line.
Figure 18. (a) Impact of wind speed on structural response. (b) Impact of live load on structural response.
Figure 19. (a) Impact of temperature on structural response. (b) Impact of car braking force on structural response.
Figure 20. (a) Sampling results of Level I warning failure probability (Conditions 1–6). (b) Sampling results of Level I warning failure probability (Conditions 7–12).
Figure 21. (a) Sampling results of Level II warning failure probability (Conditions 1–6). (b) Sampling results of Level II warning failure probability (Conditions 7–12).
Figure 22. (a) Sampling results of the Level I warning threshold for Method 1 (Conditions 1–6). (b) Sampling results of the Level I warning threshold for Method 1 (Conditions 7–12).
Figure 23. (a) Sampling results of the Level II warning threshold for Method 1 (Conditions 1–6). (b) Sampling results of the Level II warning threshold for Method 1 (Conditions 7–12).
Figure 24. (a) Sampling results of the Level I warning threshold for Method 2 (Conditions 1–6). (b) Sampling results of the Level I warning threshold for Method 2 (Conditions 7–12).
Figure 25. (a) Sampling results of the Level II warning threshold for Method 2 (Conditions 1–6). (b) Sampling results of the Level II warning threshold for Method 2 (Conditions 7–12).
Figure 26. (a) Comparison plot of Level I warning failure probability. (b) Comparison plot of Level II warning failure probability.
Figure 27. Deployment workflow of the health monitoring system.
Figure 28. Early warning performance under Scenario 1.
Figure 29. Early warning performance under Scenario 2.
Figure 30. (a) Scenario 1 warning confusion matrix (proposed method). (b) Scenario 2 warning confusion matrix (proposed method).
Figure 31. (a) Scenario 1 warning confusion matrix (Comparative Method 1). (b) Scenario 2 warning confusion matrix (Comparative Method 1).
Figure 32. (a) Scenario 1 warning confusion matrix (Comparative Method 2). (b) Scenario 2 warning confusion matrix (Comparative Method 2).
Table 1. Correlation coefficients between each modal component and temperature.

| Correlation Coefficient | IMF1 | IMF1-2 | IMF1-3 | IMF1-4 | IMF1-5 | IMF1-6 |
|---|---|---|---|---|---|---|
| R | −0.6870 | −0.8252 | −0.8421 | −0.8422 | −0.8422 | −0.8422 |
Table 2. Hyperparameter settings for the Informer-SEnet model.

| Module | Parameter Name | Parameter Value | Parameter Name | Parameter Value |
|---|---|---|---|---|
| Informer | Hidden Features | 256 | Number of Decoder Layers | 2 |
| | Number of Attention Heads | 8 | Feed-Forward Layer Dimension | 2048 |
| | Number of Encoder Layers | 3 | Activation Function | ReLU |
| | Learning Rate | 0.0001 | Loss Function | MSE |
| | Batch Size | 64 | Number of Training Iterations | 120 |
| | Dropout | 0.1 | Time Step Size | 24 |
| SEnet | Number of Channels | 64 | Activation Function | ReLU, Sigmoid |
| | Channel Reduction Ratio | 16% | Pooling Type | Average Pooling |
Table 3. Results of five-fold cross-validation.

| Dataset | Index | Iteration 1 | Iteration 2 | Iteration 3 | Iteration 4 | Iteration 5 | Average | Standard Deviation |
|---|---|---|---|---|---|---|---|---|
| Measured Dataset | MAE | 1.515 | 1.442 | 1.692 | 1.346 | 1.501 | 1.499 | 0.113 |
| | RMSE | 1.202 | 1.14 | 1.355 | 1.05 | 1.221 | 1.194 | 0.100 |
| | R² | 0.985 | 0.985 | 0.98 | 0.988 | 0.983 | 0.984 | 0.003 |
| Open Dataset | MAE | 2.091 | 1.96 | 2.146 | 1.994 | 1.943 | 2.027 | 0.078 |
| | RMSE | 1.648 | 1.54 | 1.681 | 1.58 | 1.523 | 1.594 | 0.061 |
| | R² | 0.964 | 0.976 | 0.966 | 0.968 | 0.972 | 0.969 | 0.005 |
Table 4. Random parameters and distribution types.

| Number | Parameter Type | Mean Value | Standard Deviation | Distribution Type | Variation Range |
|---|---|---|---|---|---|
| 1 | Elastic modulus | E | 0.06E | Normal distribution | / |
| 2 | Wind speed | / | / | Extreme value Type I distribution | / |
| 3 | Overall temperature difference | 0 | 10 | Normal distribution | / |
| 4 | Gradient temperature difference | 0 | 5 | Truncated normal distribution | [−2 °C, 14 °C] |
| 5 | Cable-girder temperature difference | 0 | 5 | Normal distribution | / |
| 6 | Vehicle load | 3 | 10 | Truncated normal distribution | [0, 55 t] |
| 7 | Measurement error | 0 | 0.5 | Normal distribution | / |
| 8 | Prediction error | 0 | 1 | Normal distribution | / |
Table 5. Structural response per unit change (mm).

| Location | Overall Temperature Difference | Gradient Temperature Difference | Cable-Girder Temperature Difference | Live Load | Wind Speed |
|---|---|---|---|---|---|
| Left side span midspan | 0.56 | 0.08 | 0.08 | 0.34 | 0.97 |
| Main span midspan | 1.12 | 4.79 | 0.66 | 1.55 | 1.14 |
| Right side span midspan | 0.56 | 0.08 | 0.08 | 0.34 | 0.96 |
Table 6. Parameter combination scenarios.

| Working Condition | Parameter Combination |
|---|---|
| 1 | Temperature |
| 2 | Live load |
| 3 | Wind speed |
| 4 | Measurement error |
| 5 | Temperature + Live load |
| 6 | Temperature + Wind speed |
| 7 | Temperature + Measurement error |
| 8 | Temperature + Live load + Wind speed |
| 9 | Temperature + Live load + Measurement error |
| 10 | Temperature + Wind speed + Measurement error |
| 11 | Live load + Wind speed + Measurement error |
| 12 | Temperature + Live load + Wind speed + Measurement error |
Table 7. Statistical table of Level I warning threshold failure probability (%, partial). Columns 1–12 correspond to the working conditions in Table 6.

| Threshold Point | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 6.25 | 1.67 | 0.58 | 0.00 | 18.68 | 13.48 | 8.37 | 28.51 | 18.74 | 13.57 | 10.59 | 28.45 |
| 2 | 5.95 | 1.49 | 0.53 | 0.00 | 18.10 | 13.00 | 8.02 | 27.81 | 18.15 | 13.11 | 9.92 | 27.74 |
| 3 | 5.68 | 1.35 | 0.48 | 0.00 | 17.55 | 12.54 | 7.71 | 27.14 | 17.57 | 12.65 | 9.32 | 27.06 |
| … | … | … | … | … | … | … | … | … | … | … | … | … |
| 141 | 10.24 | 4.99 | 1.57 | 0.00 | 25.92 | 19.61 | 12.77 | 36.78 | 25.93 | 19.62 | 20.29 | 36.77 |
| 142 | 9.67 | 4.44 | 1.39 | 0.00 | 25.01 | 18.82 | 12.17 | 35.75 | 25.02 | 18.81 | 18.89 | 35.73 |
| 143 | 9.15 | 3.93 | 1.23 | 0.00 | 24.09 | 18.05 | 11.59 | 34.76 | 24.14 | 18.07 | 17.52 | 34.70 |
| Mean | 9.99 | 6.19 | 2.03 | 0.00 | 24.67 | 18.78 | 12.32 | 35.02 | 24.72 | 18.80 | 20.44 | 34.95 |
Table 8. Statistical table of Level II warning threshold failure probability (%, partial). Columns 1–12 correspond to the working conditions in Table 6.

| Threshold Point | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.08 | 0.03 | 0.01 | 0.34 | 0.08 | 0.03 | 0.00 | 0.36 |
| 2 | 0.00 | 0.00 | 0.00 | 0.00 | 0.07 | 0.03 | 0.01 | 0.31 | 0.07 | 0.02 | 0.00 | 0.33 |
| 3 | 0.00 | 0.00 | 0.00 | 0.00 | 0.06 | 0.03 | 0.01 | 0.28 | 0.06 | 0.02 | 0.00 | 0.30 |
| … | … | … | … | … | … | … | … | … | … | … | … | … |
| 141 | 0.01 | 0.00 | 0.00 | 0.00 | 0.16 | 0.07 | 0.03 | 0.59 | 0.17 | 0.07 | 0.00 | 0.63 |
| 142 | 0.01 | 0.00 | 0.00 | 0.00 | 0.16 | 0.07 | 0.02 | 0.58 | 0.16 | 0.07 | 0.00 | 0.61 |
| 143 | 0.01 | 0.00 | 0.00 | 0.00 | 0.15 | 0.07 | 0.02 | 0.55 | 0.15 | 0.06 | 0.00 | 0.59 |
| Mean | 0.01 | 0.00 | 0.00 | 0.00 | 0.14 | 0.06 | 0.02 | 0.52 | 0.15 | 0.07 | 0.00 | 0.56 |
Table 9. Simulation experiment results.

| Simulation Experiment | Deflection Calculation Result (mm) |
|---|---|
| Scenario 1 | −35.65 |
| Scenario 2 | −75.38 |
Table 10. Values of evaluation metrics for warning performance under Scenario 1.

| Metric | Proposed Method | Comparative Method 1 | Comparative Method 2 |
|---|---|---|---|
| Accuracy | 100.00% | 83.84% | 100.00% |
| Error Rate | 0.00% | 16.16% | 0.00% |
Table 11. Values of evaluation metrics for warning performance under Scenario 2.

| Metric | Proposed Method | Comparative Method 1 | Comparative Method 2 |
|---|---|---|---|
| Accuracy | 100.00% | 84.01% | 99.83% |
| Error Rate | 0.00% | 15.99% | 0.17% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Guo, F.; Xu, Y.; Quan, Q.; Zhang, Z. Dynamic Structural Early Warning for Bridge Based on Deep Learning: Methodology and Engineering Application. Buildings 2026, 16, 823. https://doi.org/10.3390/buildings16040823


