Article

Element Modal-Based Structural Damage Detection by Two-Dimensional Convolutional Neural Networks

1 School of Intelligent Construction and Civil Engineering, Zhongyuan University of Technology, Zhengzhou 450007, China
2 Earthquake Engineering Research & Test Center, Guangzhou University, Guangzhou 510006, China
3 Research Centre for Wind Engineering and Engineering Vibration, Guangzhou University, Guangzhou 510006, China
4 School of Finance and Business, Guangzhou Railway Polytechnic, Guangzhou 511300, China
* Author to whom correspondence should be addressed.
Buildings 2025, 15(21), 3905; https://doi.org/10.3390/buildings15213905
Submission received: 5 September 2025 / Revised: 28 September 2025 / Accepted: 11 October 2025 / Published: 28 October 2025
(This article belongs to the Special Issue Advances in Building Structure Analysis and Health Monitoring)

Abstract

Convolutional neural networks (CNNs) have strong noise resistance, and this study exploits this property to reduce the impact of noise on structural damage identification data. After structural damage occurs, element-level modal parameters are particularly sensitive to changes in damage and can therefore be used as important characteristic indicators for identifying damage. This article establishes a finite element model of a steel truss and introduces damage at different locations and degrees. The free vibration of the structure is simulated by the finite element method (FEM), and the first-order modal characteristic parameters, including modal strain energy and modal strain, are extracted for each damage scenario. These modal parameters and the corresponding damage information are then input as training samples into the CNN model for automatic identification of structural damage. The results show that the constructed CNN model can accurately identify the location and degree of structural damage, with a damage localization accuracy of 100% and a relative error of only 6.6% for damage degree identification. Among the various characteristic indicators, the modal strain energy difference exhibits better sensitivity and stability. Compared with traditional backpropagation (BP) neural networks, the CNN improves detection accuracy by about 35%, and its computation time is only 2.4% of that of the BP network. In addition, the CNN maintains good recognition performance with low-order modes, which is of great significance because such measurement data are easily obtainable in practical engineering. In summary, the CNN method shows superior performance in damage localization, damage degree recognition, and noise resistance, and it has high engineering application value.

1. Introduction

Structural damage detection is an important research direction in structural health monitoring (SHM), with the core goal of preventing sudden structural failure and thereby avoiding casualties and significant economic losses. Structural safety assessment in current engineering practice relies heavily on expert judgment and on-site visual inspection. However, manual inspection is not only time-consuming and labor-intensive but also greatly influenced by subjective factors, and the safety assessment results given by different inspectors may vary [1,2]. In recent years, structural damage identification methods based on vibration characteristics have received widespread attention due to their flexible measurement, economic efficiency, and non-destructive nature [3,4,5]. This type of method collects the vibration response signals of the structure under external excitation, extracts its modal parameters, and analyzes the changes in modal parameters (such as natural frequency, mode shape, or damping) to determine potential damage to the structure [6,7]. Previous studies have shown that frequency-based damage identification methods have been successfully applied for damage detection in composite material structures [8,9]. Research based on modal features has found that local damage can lead to irregularities in the vibration mode [10], and this irregularity is more significant when the degree of damage is greater [11,12]. However, in actual modal testing, it is often difficult to identify minor damage from changes in natural frequency and mode shape alone [13]. Because it depends on the second derivative (curvature) of the mode shape, the modal strain energy (MSE) is more sensitive to changes in structural response and is therefore often used as a damage indicator to accurately locate and estimate the extent of damage [6,14,15,16,17]. It has been suggested that the element modal strain-based method can be more advantageous than the traditional modal curvature-based method, particularly in scenarios with sparse measurement data [18]. However, it is difficult to obtain high-order modal data in engineering practice, and the measured signals are often accompanied by noise interference [19]. To mitigate the impact of noise on damage identification results, it is necessary to introduce effective information processing tools. Artificial neural networks (ANNs), as intelligent algorithms with feature extraction capability, can extract key features of structural damage from complex data and filter out irrelevant or interfering information, thereby improving the accuracy and noise resistance of damage detection [20].
The application of machine learning (ML) in structural damage detection (SDD) has evolved significantly. Beyond the early and widely used backpropagation (BP) neural networks [21,22], other sophisticated ML algorithms have been successfully employed. For instance, Support Vector Machines (SVMs) have been utilized for their effectiveness in high-dimensional classification problems [23], while ensemble methods like Random Forests have shown robustness in feature importance analysis and damage prediction [24]. However, these traditional ML methods often require careful manual feature engineering, which can be labor-intensive and subjective [25,26]. The recent advent of deep learning, a subset of ML, has revolutionized feature extraction by enabling the automatic learning of hierarchical features directly from raw or minimally pre-processed data [27]. Among deep learning architectures, CNNs have demonstrated exceptional performance in processing data with grid-like topology, such as images and time-series signals converted into 2D matrices [28,29,30,31,32]. In the context of SHM, CNNs have been effectively applied to a range of problems, from image-based crack and defect detection in bridges [33,34] to automated feature extraction from vibration signals [35,36,37,38,39]. These studies underscore the potential of CNNs to handle complex, noise-contaminated data and outperform traditional methods. Nevertheless, many existing studies applying CNNs to vibration-based SDD have focused on relatively simple structural components like beams [40] or have primarily utilized 1D CNNs for signal processing. The application of 2D CNNs to leverage the spatial distribution of element-based modal parameters for damage identification in more complex structures, such as trusses, remains less explored.
The introduction of ANNs provides a new approach for damage detection in SHM. Early research often used BP neural network models and achieved certain results [21,22]. However, due to the limitations of the BP network itself, it still suffers from slow convergence, long training time, and a tendency to overfit in practical applications [25,26]. When the structural scale is large, the generated sample data often have extremely high dimensions, further increasing the computational burden of model training [40]. To overcome the shortcomings of traditional ANNs, researchers have proposed CNNs [27]. This network achieves automatic feature extraction through multi-layer convolution and pooling structures [28], demonstrating stronger feature capture and expression capabilities in image feature learning [29]. Compared with BP neural networks, CNNs have higher training efficiency and better generalization performance. At present, CNNs have been widely used in font recognition, license plate detection, face recognition, and other fields [30,31,32]. CNNs have also been applied to SHM, such as in crack detection [33,34], damage feature extraction from low-order vibration signals [35,36,37,38], and vibration-based damage detection by 1D CNNs [39]. CNNs are expected to effectively overcome the influence of noise. However, some challenges remain, such as the simplicity of the structural models considered [39,40] (e.g., simply supported beams with rectangular or T-shaped sections) and of the training samples. Furthermore, for complex structures like long-span steel truss bridges, specific methodologies have been developed. For instance, model-based approaches utilizing stiffness separation for partial-model updating, as well as systematic sensitivity analyses for optimal sensor placement to enhance damage identification efficacy, have been proposed. These studies highlight the importance of tailored strategies for large-scale structures. The application of deep learning in SHM continues to evolve with diverse architectures. For instance, beyond standard CNNs, hybrid wavelet scattering networks have demonstrated effective performance in failure identification of reinforced concrete members [41], while data-driven classifiers based on CNNs have been successfully applied for seismic failure mode detection in steel structures [42]. These studies highlight the adaptability of deep learning to various structural types and failure modes. Building on this foundation, the present study contributes to this field by exploring a data-driven CNN approach applied to a representative steel truss model, with the aim of developing a robust method that could be scaled for application to more complex bridge structures.
In this paper, a CNN is proposed to detect damage using modal strain energy (MSE) and modal strain (MS). The differences in these modal parameters before and after damage are used as additional indices (MSED and MSD) for comparing detection results. The influence of noise of different intensities on the damage detection results is then compared. Finally, the anti-noise capability of the CNN is compared with that of a BP neural network.
Based on the above background, this study aims to address the following research questions:
(1) Can a 2D CNN effectively utilize element-based modal parameters (MSE and MS) for accurate damage localization and quantification in a complex steel truss structure?
(2) How do damage indices based on the difference of modal parameters (MSED and MSD) compare with those based on raw modal parameters in terms of detection accuracy and noise robustness?
(3) Does the proposed CNN-based approach outperform traditional BP neural networks in terms of anti-noise capability and computational efficiency?
The hypotheses underlying this work are as follows: The use of modal parameter differences will enhance damage sensitivity and noise resistance; CNNs will demonstrate superior performance over BP networks in noisy environments and with limited training data.

2. Methods

2.1. Theoretical Background of Modal Strain Energy and Modal Strain as Damage Indices

The selection of Modal Strain Energy (MSE) and Modal Strain (MS) as damage indices is grounded in their direct physical relationship to structural stiffness and local deformation, which are altered by damage.
MSE: The MSE of an element is a measure of the energy stored in the element due to deformation under a specific vibration mode. As defined in Equation (1), it is a quadratic function of the modal displacements. Since damage in a structural element typically causes a local reduction in stiffness (e.g., simulated here by a reduced Young’s modulus), the distribution of strain energy within the structure changes. The damaged element will exhibit a decreased ability to carry strain energy, while adjacent elements may experience an increase to maintain equilibrium. This redistribution of MSE provides a sensitive indicator for locating damage. The MSE-based index is particularly powerful because it is a global quantity derived from local stiffness and mode shape information, making it more sensitive to localized damage than global parameters like natural frequencies.
MS: Modal Strain directly represents the deformation field corresponding to a specific mode shape. Damage-induced stiffness loss alters the local curvature of the mode shape, which is directly proportional to the strain. Therefore, changes in MS at the location of damage are often more pronounced than changes in the modal displacements themselves. While MSE integrates the effect of stiffness and deformation, MS offers a more direct and potentially sharper view of the local deformation anomaly caused by damage.
In this study, we utilize both indices to leverage their complementary strengths. The difference of these parameters before and after damage (MSED and MSD) is calculated to amplify the changes caused by damage and minimize the influence of the baseline structural properties, thereby enhancing the sensitivity of the input data presented to the neural network.

2.2. Overview of Methodology

This article uses modal parameters obtained under different damage conditions (including MSE, MS, MSED, and MSD) to train a CNN, which is then used to predict unknown damage scenarios. For the convenience of model processing, the collected modal data are organized into a two-dimensional matrix as input for the CNN. Through the feature extraction capability of the network, the CNN can identify abnormal changes in the modal parameters, thereby predicting the location and level of structural damage. The overall process of damage detection based on the CNN is shown in Figure 1.
To analyze the impact of noise on the results of structural damage detection, this paper added Gaussian white noise of different intensities to the calculated modal parameters (MSE, MS, MSED, and MSD) under the various damage conditions. The noisy data were generated and added in the MATLAB R2019a (MathWorks, Natick, MA, USA) environment [43] using the built-in function awgn (Add White Gaussian Noise) to ensure a standardized approach. The noise level was defined by the signal-to-noise ratio (SNR), specified in decibels (dB). The SNR values used were 40 dB, 30 dB, 26 dB, 20 dB, and 14 dB, which correspond to noise with standard deviations approximately equal to 1%, 3%, 5%, 10%, and 20% of the maximum amplitude of the respective modal parameter signal, making the noise intensity dimensionless. These specific levels were chosen to simulate a range from mild to severe measurement noise.
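For readers without MATLAB, the same SNR-based noise injection can be sketched in a few lines of NumPy; the function below is an illustrative stand-in for awgn, not the script used in this study, and the modal-parameter vector is a random placeholder.

```python
import numpy as np

# Illustrative NumPy stand-in for MATLAB's awgn: add zero-mean Gaussian white
# noise to a modal-parameter vector at a prescribed SNR in dB.
def add_awgn(signal, snr_db, rng=None):
    rng = rng or np.random.default_rng(0)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)

# Hypothetical element-level modal parameters for one damage scenario.
mse_vector = np.random.rand(101)
noisy_26db = add_awgn(mse_vector, snr_db=26)   # roughly the "5%" noise level described above
```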

2.3. Numerical Calculation and Sample

This paper uses a steel truss model (as shown in Figure 2) with overall dimensions as follows: length 3.54 m, width 0.354 m, and height 0.354 m. Each member has a solid circular cross-section with a radius of 0.005 m. The material parameters of the model are set as follows: Young's modulus 211 GPa, Poisson's ratio 0.288, and density 7800 kg/m³.
Damage to a rod was simulated by reducing its elastic modulus, and the degree of damage was assumed to be proportional to the reduction in elastic modulus. For each element, 18 damage levels were set, with the damage amplitude increasing from 5% to 90% in steps of 5%.
Adequate training samples are a prerequisite for CNN training. This paper uses ABAQUS (SIMULIA Inc., Providence, RI, USA) for parametric analysis to obtain the modal parameters of the structure under different damage conditions, which were used as input data for the CNN; the output of the CNN was the location and degree of damage. Firstly, a finite element model of the intact structure (see Figure 2) was established, treating each member as a set of elements and assigning the corresponding material properties. After generating the input file through ABAQUS, damage was introduced into a member by reducing its Young's modulus. Subsequently, the input file of the intact structure was automatically modified using a self-written Python (Version 3.12) script and submitted to ABAQUS for analysis of the different damage scenarios. After the analysis was completed, the required modal parameters were extracted and saved as CNN training sample data files.
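The following short Python sketch illustrates this input-file modification step under assumed file names and a conventional *Elastic material block; it is not the authors' actual script, and a production version would restrict the change to the material assigned to the damaged member only.

```python
# Illustrative sketch only: scale the Young's modulus in an ABAQUS .inp file.
# File names and the assumption that E appears on the line after "*Elastic"
# are hypothetical; the authors' actual script is not reproduced here.
def write_damaged_inp(intact_inp, damaged_inp, damage_ratio):
    with open(intact_inp) as f:
        lines = f.readlines()
    out, scale_next = [], False
    for line in lines:
        if scale_next:
            e_str, nu_str = [s.strip() for s in line.split(",")[:2]]
            line = f"{float(e_str) * (1.0 - damage_ratio):.6e}, {nu_str}\n"
            scale_next = False
        elif line.strip().lower().startswith("*elastic"):
            scale_next = True
        out.append(line)
    with open(damaged_inp, "w") as f:
        f.writelines(out)

# Example usage (commented out so the sketch runs without the input file):
# write_damaged_inp("truss_intact.inp", "truss_damage30.inp", 0.30)
# The new file would then be submitted to ABAQUS, e.g., "abaqus job=truss_damage30".
```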
The first-order MSE and MS of the structure were used. MSE was obtained by extracting the ABAQUS field variable ELSE, and MS was obtained by extracting the ABAQUS field variable E11. The MSE of the j-th element in the i-th mode, $MSE_{ij}$, was determined as follows [14]:
$$MSE_{ij} = \boldsymbol{\phi}_i^{T} \mathbf{K}_j \boldsymbol{\phi}_i \quad (1)$$
where $\boldsymbol{\phi}_i$ is the i-th-order mode shape and $\mathbf{K}_j$ is the stiffness matrix of the j-th element.
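As a concrete illustration of Equation (1) and of the difference index defined in Section 2.1, the NumPy sketch below evaluates the element MSE for an intact and a damaged mode shape using random placeholder matrices; it is not the truss model of this study.

```python
import numpy as np

# Element modal strain energy (Eq. (1)) and the MSED difference index,
# using random placeholder stiffness matrices and mode shapes.
def element_mse(phi, K_elements):
    """phi: global mode shape (n_dof,); K_elements: element stiffness matrices
    expanded to the global DOFs."""
    return np.array([phi @ K_j @ phi for K_j in K_elements])

rng = np.random.default_rng(0)
n_dof, n_el = 12, 4
K_elements = []
for _ in range(n_el):
    A = rng.standard_normal((n_dof, n_dof))
    K_elements.append(A @ A.T)                      # symmetric positive semi-definite stand-in

phi_intact = rng.standard_normal(n_dof)
phi_damaged = phi_intact + 0.01 * rng.standard_normal(n_dof)

msed = element_mse(phi_damaged, K_elements) - element_mse(phi_intact, K_elements)
print(msed)                                          # difference index fed to the CNN
```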
The first-order modal parameters were selected for this study for two main reasons. Firstly, from a practical measurement perspective, lower-order modes are significantly easier to excite and accurately measure in real-world structures compared to higher-order modes, which often require more complex excitation and are more susceptible to contamination by noise. Secondly, while higher-order modes may offer greater spatial resolution for damage localization, the first-order mode typically contains the majority of the structural strain energy and has been demonstrated to be sufficiently sensitive to damage for the purposes of this study, especially when using sensitive damage indices like modal strain energy. The use of a single, easily obtainable mode aligns with the goal of developing a practical and efficient damage detection method. Future work will explore the potential benefits of incorporating higher-order modes or combinations thereof.
Firstly, the detection of the damage location was studied. To compare the effects of training data on damage detection, a variety of training sets were set up for network training. The structure contains 101 rods, and the modal parameters associated with damage to one rod constitute one sample, so 101 samples can be obtained for each damage level. The datasets used for the CNN are shown in Table 1, and the dataset sizes are shown in Table 2. The dataset compositions are identical for the four indices described in Section 2.1.
Subsequently, the identification of the degree of structural damage was studied. Each element was assigned 18 damage levels, with the damage amplitude increasing from 5% to 90% in steps of 5%. The model therefore includes 1818 (101 × 18) damage scenarios plus one intact condition, for a total of 1819 samples. The validation set consists of the data with an element damage degree of 45% plus the intact state, totaling 102 samples; the 101 samples with an element damage of 60% were used to test the fitting performance of the CNN. The remaining 1617 samples (the other damage scenarios and the intact data) were used as the training set.

2.4. Input and Output of Network

For each damage scenario, the modal strain energy of each member (101 members in total) was collected as input data for the CNN, and 20 zero values were appended, so that the input vector for each damage scenario contained 121 (101 + 20) values. The vector was then reshaped into an 11 × 11 matrix and input into the CNN. This study uses a classification approach to identify the location of damage, where different categories correspond to different states: the intact state is labeled 0, damage to the first element is labeled 1, damage to the second element is labeled 2, and so on.
The 1D vector of 101 modal values was converted into an 11 × 11 matrix by appending 20 zeros. This transformation was performed to meet the input requirements of standard 2D CNN architectures, which are optimized for grid-like data. The dimension 11 × 11 was chosen as the smallest perfect square (121) capable of holding the 101 data points. While zero-padding does not add informational content and could be seen as introducing artificial boundaries, CNNs are robust to such artifacts because convolution kernels perform local feature detection, and the network learns to focus on the patterns within the meaningful data region.
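A minimal NumPy sketch of this input preparation step is shown below; the modal values are random placeholders.

```python
import numpy as np

# Zero-pad the 101 element-level modal values to 121 entries and reshape
# them into the 11 x 11 matrix used as the CNN input.
modal_values = np.random.rand(101)                       # placeholder for MSE/MS/MSED/MSD
padded = np.concatenate([modal_values, np.zeros(20)])    # 101 + 20 = 121
cnn_input = padded.reshape(11, 11)
print(cnn_input.shape)                                   # (11, 11)
```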
In order to achieve classification and recognition of damage locations, a Softmax layer was added after the fully connected layer. The Softmax layer converts its input vector into a probability distribution using the Softmax function. Specifically, the function takes a vector $Z$ of $K$ real numbers as input and normalizes it into a vector of $K$ probability values, representing the predicted probability of each category. The function is defined as shown in Equation (2):
$$\sigma(Z)_j = \frac{\exp(Z_j)}{\sum_{k=1}^{K}\exp(Z_k)} \quad (2)$$
for $j = 1, \dots, K$ and $Z = (Z_1, \dots, Z_K) \in \mathbb{R}^{K}$; obviously $0 \le \sigma(Z)_j \le 1$ and $\sum_{j=1}^{K}\sigma(Z)_j = 1$. Applying this function to the CNN makes it possible to solve multi-classification problems. For a given sample vector $x$ and class weight vectors $w_j$, the Softmax function gives the predicted probability of class $j$:
$$P(y = j \mid x) = \frac{\exp(x^{T} w_j)}{\sum_{k=1}^{K}\exp(x^{T} w_k)} \quad (3)$$
for $j = 1, \dots, K$, where $0 \le P(y = j \mid x) \le 1$ and $\sum_{j=1}^{K} P(y = j \mid x) = 1$; moreover, $x$ is the output vector of the fully connected layer, $w_j$ is the connection weight between the network output and the target output for class $j$, and $P(y = j \mid x)$ is the conditional probability that the sample belongs to class $j$ given input $x$.
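The Softmax normalization in Equations (2) and (3) can be verified with a few lines of NumPy; the class scores below are arbitrary.

```python
import numpy as np

# Softmax normalization (Eqs. (2)-(3)): class scores -> probability distribution.
def softmax(z):
    z = z - np.max(z)          # shift for numerical stability; result is unchanged
    e = np.exp(z)
    return e / e.sum()

scores = np.array([1.2, -0.4, 3.0, 0.1])   # hypothetical outputs for 4 classes
probs = softmax(scores)
print(probs, probs.sum())                  # non-negative values summing to 1
```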
For the classification task (damage location), a Softmax layer was applied after the fully connected layer to output a probability distribution across the 102 possible classes (101 damage locations + intact). The cross-entropy loss function (Equation (4)) was used to train the network [44].
$$Loss = -\sum_{i=1}^{N}\sum_{j=1}^{K} t_{ij} \ln y_{ij} \quad (4)$$
where $N$ represents the total number of damaged and undamaged samples, $K$ represents the number of damage-location categories, $t_{ij}$ indicates whether the $i$-th sample belongs to the $j$-th category (damage location), and $y_{ij}$ is the predicted output of category $j$ for sample $i$. The loss represents the difference between the predicted and actual damage; it was used to judge the training state (convergence or divergence) of the network.
For the regression task (damage level), the Softmax layer was replaced by a regression layer, and the mean-squared error (Equation (5)) was used as the loss function.
$$Loss = \frac{1}{2}\sum_{i=1}^{R} \left(t_i - y_i\right)^2 \quad (5)$$
where $R$ is the number of samples, $t_i$ is the target output, and $y_i$ is the CNN prediction for sample $i$.
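For completeness, the two loss functions in Equations (4) and (5) can be written compactly as follows; the targets and predictions are toy values, assuming one-hot encoded classification targets.

```python
import numpy as np

# Cross-entropy loss for damage-location classification (Eq. (4)) and the
# half sum-of-squares loss for damage-level regression (Eq. (5)).
def cross_entropy(t_onehot, y_prob, eps=1e-12):
    return -np.sum(t_onehot * np.log(y_prob + eps))

def half_sse(t, y):
    return 0.5 * np.sum((np.asarray(t) - np.asarray(y)) ** 2)

# Toy classification example: 3 samples, 4 damage-location classes.
t_cls = np.eye(4)[[0, 2, 3]]
y_cls = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.2, 0.1, 0.6, 0.1],
                  [0.1, 0.1, 0.1, 0.7]])
print(cross_entropy(t_cls, y_cls))

# Toy regression example: predicted vs. target damage levels.
print(half_sse([0.60, 0.60], [0.57, 0.62]))
```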

2.5. CNN

This study designed and trained a CNN using the Deep Learning Toolbox in MATLAB (MathWorks, Natick, MA, USA). The network structure includes an input layer, convolutional layers, pooling layers, activation layers, a fully connected layer, and an output layer (a classification layer or a regression layer). For classification tasks, a Softmax layer was added after the fully connected layer. The role of the activation function and the specific operations of convolution and pooling are illustrated in Figure 3.
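The network itself was built with MATLAB's Deep Learning Toolbox, and the exact layer sizes are not reported here; the PyTorch sketch below shows one possible configuration consistent with the description (two convolutional layers, max pooling, Leaky ReLU, a fully connected layer, and a Softmax output over 102 classes). All filter counts and kernel sizes are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

# Illustrative 2D CNN for the 11 x 11 modal-parameter input; layer sizes are
# assumptions, not the authors' exact MATLAB configuration.
class DamageCNN(nn.Module):
    def __init__(self, n_classes=102):                    # 101 damage locations + intact
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # Conv1
            nn.LeakyReLU(0.01),
            nn.MaxPool2d(2),                              # 11x11 -> 5x5
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # Conv2
            nn.LeakyReLU(0.01),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 5 * 5, n_classes))

    def forward(self, x):
        # Softmax is applied implicitly by nn.CrossEntropyLoss during training;
        # for the damage-level regression variant, the head would end in nn.Linear(..., 1).
        return self.head(self.features(x))

model = DamageCNN()
batch = torch.randn(8, 1, 11, 11)     # batch of zero-padded modal-parameter "images"
print(model(batch).shape)             # torch.Size([8, 102])
```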

3. Results

3.1. Modal Strain Energy and Model Strain-Based Damage Index

The training samples obtained in Section 2.3 were input into the CNN for training; the testing set data were then input into the trained network for evaluation. The results of damage location identification based on MSE and MS are listed in Table 3 and Table 4, respectively. According to Table 3 and Table 4, the damage index based on MS has better recognition accuracy than the index based on MSE, with the highest recognition accuracy reaching 99.01%.
Next, the damage level errors of the two indices (MSE and MS) were 45.9% and 15.5%, respectively. Partial predicted results are shown in Figure 4 and Figure 5, and the detailed test results are listed in Table 5 and Table 6.

3.2. Modal Strain Energy and Model Strain Difference-Based Damage Index

The training samples obtained in Section 2.3 were input into the CNN for training, and the testing set data were then input into the network for evaluation. The damage identification results based on the modal strain energy difference (MSE difference) and modal strain difference (MS difference) are listed in Table 7 and Table 8, respectively. The recognition accuracy on the test data reached 100%.
The damage level errors of the two indices (MSED and MSD) were 6.6% and 15.6%, respectively. Partial predicted results are shown in Figure 6 and Figure 7, and the detailed test results are shown in Table 9 and Table 10.

3.3. Noise Effects on Detection of Damage

In this section, the influence of noise on structural damage detection is studied. Dataset (h) with the modal strain energy difference (MSED) index is used; the training time is 3 min 46 s, and the training progress is shown in Figure 8. Gaussian white noise with noise levels of 0.01, 0.03, 0.05, 0.1, and 0.2 (corresponding to the SNR levels defined in Section 2.2) is added to the testing set. The detection results are shown in Table 11. The results show that the accuracy of damage location detection by the CNN decreases as the noise level increases.

3.4. Noise Effects on Detection of Damage by BP Neural Network

This section is also based on dataset (h). The testing set (including noisy and noise-free data) was input into the BP neural network, whose basic principles and network topology are specified in Ref. [45]. The damage detection results in the absence of noise are listed in Table 12, and the results in the presence of noise are listed in Table 13. The results indicate that the accuracy of damage location recognition can reach 100% in the noise-free case and that the number of hidden-layer nodes has a significant impact on the damage detection results.

3.5. Ablation Study

To validate the design choices of our CNN architecture, an ablation study was conducted using the MSED index and dataset (h). The performance of the proposed network (baseline) was compared against three variants:
  • Variant A (No Pooling): The max-pooling layer was removed.
  • Variant B (Mean Pooling): The max-pooling layer was replaced with a mean-pooling layer.
  • Variant C (Single Conv Layer): The second convolutional layer (Conv2) was removed.
The damage location detection accuracy on the non-noisy testing set for each variant is summarized in Table 14. The baseline model achieved 100% accuracy. Variant A (No Pooling) suffered from overfitting and showed reduced performance (92.1%), indicating that pooling is essential for generalization. Variant B (Mean Pooling) also showed a slight performance drop (98.0%), confirming the advantage of max-pooling for preserving salient features. Variant C (Single Conv Layer) performed poorly (85.6%), demonstrating that the hierarchical feature extraction enabled by two convolutional layers is critical for learning complex patterns from the modal data. These results collectively justify the architectural choices made in our proposed CNN.

4. Discussion

The achievement of optimal performance with datasets (f), (g), and (h) underscores the importance of training data diversity. These datasets, which include examples of both moderate and severe damage levels, provide the CNN with a broader representation of the damage feature space. This enables the model to learn a more generalized mapping function, leading to superior interpolation and extrapolation performance on unseen damage severities (10% and 60%) compared to models trained on narrower ranges of damage levels. The pronounced improvement of MSED over MSE, contrasted with the more modest gain of MSD over MS, can be attributed to the fundamental mathematical properties of the indices. MSE, being a quadratic function of the mode shape, contains strong baseline components related to element stiffness. The difference operation (MSED) effectively filters out this baseline, yielding a signal highly concentrated on damage-induced changes. In contrast, MS is a linear function already directly representing local deformation, making it inherently sensitive. Therefore, while MSD still improves robustness, the relative gain is less dramatic than the transformation achieved by converting MSE to MSED.
The achievement of 100% classification accuracy for damage location under specific conditions (e.g., using MSED/MSD with certain training sets) warrants discussion. While perfect accuracy is rare in practice, it can occur in controlled numerical studies where the primary source of uncertainty (measurement noise) is absent or minimal, and the damage index is highly sensitive. The fact that accuracy decreased significantly under higher noise levels (Section 3.3) and when using fewer sensitive indices (Section 3.1) confirms that the CNN was learning non-trivial feature-damage relationships rather than benefiting from a trivial encoding of the solution in the input data structure. The 2D input format was chosen to leverage the spatial-feature learning capability of the CNN, not to pre-code location information.
While this study demonstrates the effectiveness of a CNN-based approach on a laboratory-scale truss model, its translation to real-world long-span truss bridges would need to address challenges such as environmental variability and limited sensor coverage. In such scenarios, the integration of the proposed method with model-based techniques like stiffness separation for model updating or strategies for optimal sensor placement derived from sensitivity analysis could be a promising future direction to combine the strengths of both physics-based and data-driven paradigms.
It is noteworthy that the BP network and the CNN utilized different activation functions (tansig and Leaky ReLU, respectively). This choice was intentional, reflecting the standard and most effective practices for each architecture. The tansig function is a conventional choice for shallow BP networks, ensuring our results are consistent with the historical literature. In contrast, Leaky ReLU is essential for training deeper CNNs effectively by preventing the vanishing gradient problem. The comparison presented here therefore evaluates each network type in its recommended configuration, aiming to compare their inherent strengths rather than enforcing an identical but potentially suboptimal setup. This approach provides a more practical assessment of which architectural paradigm is better suited for this specific task.
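For reference, the two activation functions compared here are defined as follows (a short NumPy sketch; tansig is MATLAB's name for a scaled sigmoid that is mathematically identical to tanh):

```python
import numpy as np

# tansig (as used in the BP network) versus Leaky ReLU (as used in the CNN).
def tansig(x):
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0   # equivalent to np.tanh(x)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)          # small negative slope keeps gradients alive

x = np.linspace(-3.0, 3.0, 7)
print(tansig(x))
print(leaky_relu(x))
```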
A potential concern is overfitting, given the complexity of the CNN architecture relative to the size of some training sets (e.g., ~102 samples). Our analysis of the training and validation loss curves for these scenarios shows that while the training loss continued to decrease, the validation loss plateaued early in the training process. This confirms that the model capacity was indeed greater than necessary for the small datasets. However, the use of early stopping based on the validation set effectively prevented the model from overfitting to the training data, as training was halted at the point of best validation performance. Furthermore, the addition of noise during training acted as a regularizer. Therefore, while the models trained on very small datasets may not have reached their full potential, the reported performance on the independent test set is a legitimate measure of their generalization capability under these constrained conditions. For practical applications, larger datasets would undoubtedly be beneficial.

5. Limitations and Future Work

Despite the promising results, this study has several limitations that should be acknowledged. Firstly, the proposed method was validated using a numerical model under simulated free vibration conditions. While this allows for a controlled investigation, it does not account for practical challenges such as ambient vibrations, temperature effects, measurement errors beyond simulated noise, and the quality of real-world modal identification. Future work will involve experimental validation on a physical laboratory model and the analysis of field data from actual structures to assess the method’s performance under realistic conditions.
Secondly, the current study primarily focuses on single damage scenarios. The capability of the CNN to identify and quantify multiple concurrent damages requires further investigation. This is a more complex but practically important problem.
Thirdly, the truss model, though more complex than the simple beams often used in related studies, is still a simplified representation of real-world bridge structures. The performance of the method when applied to large-scale, complex structures with different types of members (e.g., beams, slabs) and boundary conditions remains an open question.
Lastly, the method relies on the availability of modal data from the undamaged (baseline) state of the structure to calculate the difference indices (MSED, MSD). In cases where such baseline data are unavailable, alternative strategies would need to be developed.
Future research will address these limitations by (1) conducting experimental verification; (2) extending the framework to multi-damage detection; (3) applying the method to more complex and large-scale finite element models and real bridge monitoring data; and (4) exploring baseline-free damage detection techniques.

6. Conclusions

Based on the above discussion, the following conclusions can be drawn:
1. The CNN achieved excellent detection results for structural damage.
2. MS is more effective than MSE in detecting damage location.
3. MSD and MSED have the same detection effect on damage location and are more effective than MS and MSE.
4. MSED has a better detection effect than MSD on damage level detection.
5. Based on the conclusions of (3) and (4), MSED presents advantages over MSD, MS, and MSE in damage detection.
6. The CNN has a stronger anti-noise capability than the BP neural network.
7. The CNN is more economical in computational costs than the BP neural network.
Instrumenting a large-scale structure (e.g., a long-span bridge) with a dense array of sensors to achieve a spatial resolution equivalent to the 101 elements in our numerical model is often economically and logistically infeasible. Advanced optimal sensor placement (OSP) algorithms can be employed to determine the minimal number and optimal locations of sensors required to capture the most critical information for accurate modal identification and damage detection, thereby reducing the required sensor density. Technologies such as laser Doppler vibrometry, digital image correlation (DIC), and radar interferometry offer potential for rapid, high-resolution measurement of operational mode shapes without the need for dense physical sensor installations.

Author Contributions

Conceptualization, F.Q., Y.H. and S.T.; methodology, F.Q., Y.H. and Z.L.; software, S.W. and S.T.; validation, S.W. and Z.L.; formal analysis, F.Q. and S.W.; data curation, S.T.; writing—original draft preparation, F.Q., Y.H. and Z.L.; writing—review and editing, S.W. and S.T. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was funded by the Key Scientific Research Projects of Higher Education Institutions of Henan Province (23A440009, 24A170033), the National Natural Science Foundation of China (No. 52508178 and No. 52208471), the Guangdong Provincial Science and Technology Plan Project (No. 2023A0505050094), and the Guangdong Basic and Applied Basic Research Foundation (No. 2025A1515010155).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fang, S.; Li, L.; Luo, Z.; Fang, Z.; Huang, D.; Liu, F.; Wang, H.; Xiong, Z. Novel FRP interlocking multi-spiral reinforced-seawater sea-sand concrete square columns with longitudinal hybrid FRP–steel bars: Monotonic and cyclic axial compressive behaviours. Compos. Struct. 2023, 305, 116487. [Google Scholar] [CrossRef]
  2. Mammeri, S.; Barros, B.; Conde-Carnero, B.; Riveiro, B. From traditional damage detection methods to Physics-Informed Machine Learning in bridges: A review. Eng. Struct. 2025, 330, 119862. [Google Scholar] [CrossRef]
  3. Teng, S.; Chen, G.; Liu, Z.; Cheng, L.; Sun, X. Multi-Sensor and Decision-Level Fusion-Based Structural Damage Detection Using a One-Dimensional Convolutional Neural Network. Sensors 2021, 21, 3950. [Google Scholar] [CrossRef]
  4. Kordestani, H.; Zhang, C.; Masri, S.F.; Shadabfar, M. An empirical time-domain trend line-based bridge signal decomposing algorithm using Savitzky–Golay filter. Struct. Control Health Monit. 2021, 28, e2750. [Google Scholar] [CrossRef]
  5. Ni, F.; Zhang, J.; Noori, M.N. Deep learning for data anomaly detection and data compression of a long-span suspension bridge. Comput.-Aided Civ. Infrastruct. Eng. 2020, 35, 685–700. [Google Scholar] [CrossRef]
  6. Hu, H.; Wu, C. Development of scanning damage index for the damage detection of plate structures using modal strain energy method. Mech. Syst. Signal Process. 2009, 23, 274–287. [Google Scholar] [CrossRef]
  7. Doebling, S.W.; Farrar, C.R.; Prime, M.B. A Summary Review of Vibration-Based Damage Identification Methods. Shock. Vib. Dig. 1998, 30, 91–105. [Google Scholar] [CrossRef]
  8. Cawley, P.; Adams, R.D. A Vibration Technique for Non-Destructive Testing of Fibre Composite Structures. J. Compos. Mater. 1979, 13, 161–175. [Google Scholar] [CrossRef]
  9. Cawley, P.; Adams, R.D. The location of defects in structures from measurements of natural frequencies. J. Strain Anal. Eng. Des. 1979, 14, 49–57. [Google Scholar] [CrossRef]
  10. Shen, M.H.H.; Grady, J.E. Free vibrations of delaminated beams. AIAA J. 1992, 30, 1361–1370. [Google Scholar] [CrossRef]
  11. Pandey, A.K.; Biswas, M.; Samman, M.M. Damage detection from changes in curvature mode shapes. J. Sound. Vib. 1991, 145, 321–332. [Google Scholar] [CrossRef]
  12. Jing, L.; Wu, B.; Zeng, Q.C.; Lim, C.W. A generalized flexibility matrix based approach for structural damage detection. J. Sound Vib. 2010, 329, 4583–4587. [Google Scholar] [CrossRef]
  13. Zou, Y.; Tong, L.; Steven, G.P. Vibration-Based Model-Dependent Damage (Delamination) Identification and Health Monitoring for Composite Structures—A Review. J. Sound Vib. 2017, 10, 165–193. [Google Scholar] [CrossRef]
  14. Shi, Z.Y.; Law, S.S.; Zhang, L.M. Structural Damage Localization from Modal Strain Energy Change. J. Eng. Mech. 2000, 218, 1216–1223. [Google Scholar] [CrossRef]
  15. Seyedpoor, S.M. A two stage method for structural damage detection using a modal strain energy based index and particle swarm optimization. Int. J. Non-Linear Mech. 2012, 47, 1–8. [Google Scholar] [CrossRef]
  16. Cha, Y.-J.; Buyukozturk, O. Structural Damage Detection Using Modal Strain Energy and Hybrid Multiobjective Optimization. Comput.-Aided Civ. Infrastruct. Eng. 2015, 30, 347–358. [Google Scholar] [CrossRef]
  17. Pal, J.; Banerjee, S. A combined modal strain energy and particle swarm optimization for health monitoring of structures. J. Civ. Struct. Health Monit. 2015, 5, 353–363. [Google Scholar] [CrossRef]
  18. Hong, G.; Karbhari, V.M. Improved damage detection method based on Element Modal Strain Damage Index using sparse measurement. J. Sound Vib. 2008, 309, 465–494. [Google Scholar] [CrossRef]
  19. Kaveh, A.; Zolghadr, A. Cyclical Parthenogenesis Algorithm for guided modal strain energy based structural damage detection. Appl. Soft Comput. 2017, 57, 250–264. [Google Scholar] [CrossRef]
  20. Guresen, E.; Kayakutlu, G. Definition of artificial neural networks with comparison to other networks. Procedia Comput. Sci. 2011, 3, 426–433. [Google Scholar] [CrossRef]
  21. Salehi, H.; Das, S.; Chakrabartty, S.; Biswas, S.; Burgueño, R. Structural damage identification using image-based pattern recognition on event-based binary data generated from self-powered sensor networks. Struct. Control Health Monit. 2018, 25, e2135. [Google Scholar] [CrossRef]
  22. Xu, H.; Humar, J.M. Damage Detection in a Girder Bridge by Artificial Neural Network Technique. Comput.-Aided Civ. Infrastruct. Eng. 2010, 21, 450–464. [Google Scholar] [CrossRef]
  23. Zamani HosseinAbadi, H.; Amirfattahi, R.; Nazari, B.; Mirdamadi, H.R.; Atashipour, S.A. GUW-based structural damage detection using WPT statistical features and multiclass SVM. Appl. Acoust. 2014, 86, 59–70. [Google Scholar] [CrossRef]
  24. Bergmayr, T.; Höll, S.; Kralovec, C.; Schagerl, M. Local residual random forest classifier for strain-based damage detection and localization in aerospace sandwich structures. Compos. Struct. 2023, 304, 116331. [Google Scholar] [CrossRef]
  25. Yao, X. Evolutionary Artificial Neural Networks. Int. J. Neural Syst. 1993, 4, 203–222. [Google Scholar] [CrossRef]
  26. Yao, W.S. The Researching Overview of Evolutionary Neural Networks. Comput. Sci. 2004, 31, 25–129. [Google Scholar]
  27. Teng, S.; Chen, G.; Yan, Z.; Cheng, L.; Bassir, D. Vibration-based structural damage detection using 1-D convolutional neural network and transfer learning. Struct. Health Monit. 2023, 22, 14759217221137931. [Google Scholar] [CrossRef]
  28. Yuan, Z.W.; Zhang, J. Feature extraction and image retrieval based on AlexNet. In Proceedings of the Eighth International Conference on Digital Image Processing, Chengdu, China, 29 August 2016. [Google Scholar]
  29. Krizhevsky, A.; Hinton, G. Learning multiple layers of features from tiny images. In Technical Report; University of Toronto: Toronto, ON, Canada, 2009. [Google Scholar]
  30. Lawrence, S.; Giles, C.L.; Tsoi, A.C.; Back, A.D. Face recognition: A convolutional neural-network approach. IEEE Trans. Neural Netw. 1997, 8, 98–113. [Google Scholar] [CrossRef]
  31. Tensmeyer, C.; Saunders, D.; Martinez, T. Convolutional Neural Networks for Font Classification. In Proceedings of the International Conference on Document Analysis and Recognition (ICDAR), Kyoto, Japan, 9–15 November 2017; pp. 985–990. [Google Scholar]
  32. Yao, D.; Zhu, W.; Chen, Y.; Zhang, L. Chinese license plate character recognition based on convolution neural network. In Proceedings of the Chinese Automation Congress, Xi’an, China, 30 November–2 December 2018. [Google Scholar]
  33. Hou, S.T.; Shen, H.; Wu, T.; Sun, W.H.; Wu, G.; Wu, Z.S. Underwater Surface Defect Recognition of Bridges Based on Fusion of Semantic Segmentation and Three-Dimensional Point Cloud. J. Bridge Eng. 2025, 30, 04024101. [Google Scholar] [CrossRef]
  34. Sun, W.H.; Hou, S.T.; Wu, G.; Zhang, Y.J.; Zhao, L.C. Two-step rapid inspection of underwater concrete bridge structures combining sonar, camera, and deep learning. Comput.-Aided Civ. Infrastruct. Eng. 2024, 21, 2650–2670. [Google Scholar] [CrossRef]
  35. Tsialiamanis, G.; Mylonas, C.; Chatzi, E.; Dervilis, N.; Wagg, D.J.; Worden, K. Foundations of population-based SHM, Part IV: The geometry of spaces of structures and their feature spaces. Mech. Syst. Signal Process. 2021, 157, 107692. [Google Scholar] [CrossRef]
  36. Gosliga, J.; Gardner, P.A.; Bull, L.A.; Dervilis, N.; Worden, K. Foundations of Population-based SHM, Part II: Heterogeneous populations—Graphs, networks, and communities. Mech. Syst. Signal Process. 2021, 148, 107144. [Google Scholar] [CrossRef]
  37. Gardner, P.; Bull, L.A.; Gosliga, J.; Dervilis, N.; Worden, K. Foundations of population-based SHM, Part III: Heterogeneous populations—Mapping and transfer. Mech. Syst. Signal Process. 2021, 149, 107142. [Google Scholar] [CrossRef]
  38. Bull, L.A.; Gardner, P.A.; Gosliga, J.; Rogers, T.J.; Dervilis, N.; Cross, E.J.; Papatheou, E.; Maguire, A.E.; Campos, C.; Worden, K. Foundations of population-based SHM, Part I: Homogeneous populations and forms. Mech. Syst. Signal Process. 2021, 148, 107141. [Google Scholar] [CrossRef]
  39. Zhang, Y.; Miyamori, Y.; Mikami, S.; Saito, T. Vibration-based structural state identification by a 1-dimensional convolutional neural network. Comput.-Aided Civ. Infrastruct. Eng. 2019, 34, 822–839. [Google Scholar] [CrossRef]
  40. Lin, Y.Z.; Nie, Z.H.; Ma, H.W. Structural Damage Detection with Automatic Feature extraction through Deep Learning. Comput.-Aided Civ. Infrastruct. Eng. 2017, 32, 1025–1046. [Google Scholar] [CrossRef]
  41. Barkhordari, M.S.; Barkhordari, M.M.; Armaghani, D.J.; Rashid, A.S.A.; Ulrikh, D.V. Hybrid Wavelet Scattering Network-Based Model for Failure Identification of Reinforced Concrete Members. Sustainability 2022, 14, 12041. [Google Scholar] [CrossRef]
  42. Barkhordari, M.S.; Tehranizadeh, M. Data-driven Dynamic-classifiers-based Seismic Failure Mode Detection of Deep Steel W-shape Columns. Period. Polytech. Civ. Eng. 2023, 67, 936–944. [Google Scholar] [CrossRef]
  43. MATLAB, Version R2019a; MathWorks: Natick, MA, USA, 2025.
  44. Bishop, C.M. Pattern Recognition and Machine Learning (Information Science and Statistics); Springer Inc.: New York, NY, USA, 2007. [Google Scholar]
  45. Geng, X. Research on FBG-Based CFRP Structural Damage Identification Using BP Neural Network. Photonic Sens. 2018, 8, 168–175. [Google Scholar] [CrossRef]
Figure 1. Overall procedure for damage detection using neural network (D-U: difference of damage scenario and intact scenario).
Figure 2. Steel truss model with 101 rods (the triangle indicates an encastre support; the circle indicates a roller support).
Figure 3. CNN framework. Conv #: convolution layers; Pooling: pooling layer; FC: fully connected layer.
Figure 4. The predicted result of damage level by MSE. (a) Damage elements in 1; (b) damage elements in 2.
Figure 5. The predicted result of damage level by MS. (a) Damage elements in 1; (b) damage elements in 2.
Figure 6. The predicted results of damage level by MSE (difference). (a) Damage elements in 1; (b) damage elements in 2.
Figure 7. The predicted results of damage level by MS (difference). (a) Damage elements in 1; (b) damage elements in 2.
Figure 8. Training progress.
Table 1. The dataset composition of training sets, validation sets, and testing sets.
Dataset | Training Data
(a) | I + D-15%
(b) | I + D-30%
(c) | I + D-75%
(d) | I + D-90%
(e) | I + D-15% and 30%
(f) | I + D-30% and 75%
(g) | I + D-15%, 30%, 75%, and 90%
(h) | I + D-5%, 15%, 30%, 75%, and 90%
Note: For all datasets, the validation set is fixed as I + D-45%, and the testing set is fixed as D-10% and D-60%. I: intact structure; D-#: data for damage level #.
Table 2. The number of training sets, validation sets, and testing sets.
Dataset | Training Data | Validation Data | Testing Data
(a) | 102 | 102 | 202
(b) | 102 | 102 | 202
(c) | 102 | 102 | 202
(d) | 102 | 102 | 202
(e) | 203 | 102 | 202
(f) | 203 | 102 | 202
(g) | 405 | 102 | 202
(h) | 506 | 102 | 202
Note: The validation and testing set sizes are constant across all datasets.
Table 3. MSE-based damage results.
Training Sets | Testing Set: Damage 10% | Testing Set: Damage 60% | Total
Dataset (a) | 64.36% | 38.61% | 51.49%
Dataset (b) | 0.00% | 86.14% | 43.07%
Dataset (c) | 0.00% | 89.11% | 44.56%
Dataset (d) | 0.00% | 37.62% | 18.81%
Dataset (e) | 68.32% | 80.20% | 74.26%
Dataset (f) | 2.00% | 98.02% | 50.01%
Dataset (g) | 59.41% | 99.01% | 79.21%
Dataset (h) | 83.17% | 99.01% | 91.09%
Table 4. MS-based damage results.
Training Sets | Testing Set: Damage 10% | Testing Set: Damage 60% | Total
Dataset (a) | 87.13% | 66.34% | 76.74%
Dataset (b) | 0.00% | 83.17% | 41.59%
Dataset (c) | 1.00% | 78.22% | 39.61%
Dataset (d) | 0.00% | 23.76% | 11.88%
Dataset (e) | 82.18% | 79.21% | 80.7%
Dataset (f) | 1.00% | 97.03% | 49.02%
Dataset (g) | 91.09% | 100.00% | 95.55%
Dataset (h) | 100.00% | 98.02% | 99.01%
Table 5. The predicted result of damage level by MSE.
Rod | Target | Predicted | Rod | Target | Predicted | Rod | Target | Predicted | Rod | Target | Predicted
1 0.60 0.53 27 0.60 0.44 53 0.60 0.01 79 0.60 0.01
2 0.60 0.51 28 0.60 0.48 54 0.60 0.40 80 0.60 0.13
3 0.60 0.48 29 0.60 0.50 55 0.60 0.54 81 0.60 0.44
4 0.60 0.47 30 0.60 0.49 56 0.60 0.53 82 0.60 0.58
5 0.60 0.36 31 0.60 0.45 57 0.60 0.51 83 0.60 0.55
6 0.60 0.20 32 0.60 0.39 58 0.60 0.48 84 0.60 0.52
7 0.60 0.06 33 0.60 0.07 59 0.60 0.43 85 0.60 0.47
8 0.60 0.36 34 0.60 0.04 60 0.60 0.03 86 0.60 0.40
9 0.60 0.49 35 0.60 0.41 61 0.60 0.44 87 0.60 0.25
10 0.60 0.58 36 0.60 0.50 62 0.60 0.51 88 0.60 0.02
11 0.60 0.53 37 0.60 0.04 63 0.60 0.56 89 0.60 0.02
12 0.60 0.50 38 0.60 0.15 64 0.60 0.56 90 0.60 0.23
13 0.60 0.48 39 0.60 0.09 65 0.60 0.53 91 0.60 0.46
14 0.60 0.46 40 0.60 0.02 66 0.60 0.53 92 0.60 0.57
15 0.60 0.37 41 0.60 0.01 67 0.60 0.50 93 0.60 0.54
16 0.60 0.05 42 0.60 0.01 68 0.60 0.42 94 0.60 0.54
17 0.60 0.02 43 0.60 0.01 69 0.60 0.03 95 0.60 0.53
18 0.60 0.36 44 0.60 0.01 70 0.60 0.40 96 0.60 0.48
19 0.60 0.47 45 0.60 0.13 71 0.60 0.50 97 0.60 0.28
20 0.60 0.57 46 0.60 0.22 72 0.60 0.58 98 0.60 0.01
21 0.60 0.51 47 0.60 0.02 73 0.60 0.55 99 0.60 0.17
22 0.60 0.49 48 0.60 0.02 74 0.60 0.46 100 0.60 0.48
23 0.60 0.44 49 0.60 0.01 75 0.60 0.37 101 0.60 0.54
24 0.60 0.43 50 0.60 0.01 76 0.60 0.19
25 0.60 0.05 51 0.60 0.01 77 0.60 0.12
26 0.60 0.05 52 0.60 0.01 78 0.60 0.01
Table 6. The predicted result of damage level by MS.
Rod | Target | Predicted | Rod | Target | Predicted | Rod | Target | Predicted | Rod | Target | Predicted
1 0.60 0.60 27 0.60 0.53 53 0.60 0.49 79 0.60 0.37
2 0.60 0.55 28 0.60 0.49 54 0.60 0.53 80 0.60 0.48
3 0.60 0.56 29 0.60 0.54 55 0.60 0.56 81 0.60 0.52
4 0.60 0.52 30 0.60 0.55 56 0.60 0.55 82 0.60 0.58
5 0.60 0.51 31 0.60 0.57 57 0.60 0.57 83 0.60 0.59
6 0.60 0.56 32 0.60 0.55 58 0.60 0.53 84 0.60 0.55
7 0.60 0.62 33 0.60 0.58 59 0.60 0.50 85 0.60 0.54
8 0.60 0.52 34 0.60 0.58 60 0.60 0.23 86 0.60 0.53
9 0.60 0.53 35 0.60 0.51 61 0.60 0.48 87 0.60 0.51
10 0.60 0.56 36 0.60 0.53 62 0.60 0.53 88 0.60 0.39
11 0.60 0.58 37 0.60 0.51 63 0.60 0.54 89 0.60 0.37
12 0.60 0.57 38 0.60 0.49 64 0.60 0.58 90 0.60 0.51
13 0.60 0.54 39 0.60 0.51 65 0.60 0.59 91 0.60 0.54
14 0.60 0.51 40 0.60 0.49 66 0.60 0.54 92 0.60 0.58
15 0.60 0.50 41 0.60 0.40 67 0.60 0.54 93 0.60 0.61
16 0.60 0.50 42 0.60 0.15 68 0.60 0.49 94 0.60 0.59
17 0.60 0.53 43 0.60 0.39 69 0.60 0.15 95 0.60 0.57
18 0.60 0.52 44 0.60 0.47 70 0.60 0.47 96 0.60 0.56
19 0.60 0.52 45 0.60 0.56 71 0.60 0.52 97 0.60 0.54
20 0.60 0.53 46 0.60 0.49 72 0.60 0.57 98 0.60 0.30
21 0.60 0.57 47 0.60 0.49 73 0.60 0.56 99 0.60 0.50
22 0.60 0.52 48 0.60 0.50 74 0.60 0.54 100 0.60 0.57
23 0.60 0.57 49 0.60 0.44 75 0.60 0.51 101 0.60 0.58
24 0.60 0.59 50 0.60 0.36 76 0.60 0.48
25 0.60 0.60 51 0.60 0.24 77 0.60 0.45
26 0.60 0.60 52 0.60 0.36 78 0.60 0.35
Table 7. The results of the MSE difference-based index.
Training Sets | Testing Set: Damage 10% | Testing Set: Damage 60% | Total
Dataset (a) | 98.02% | 93.07% | 95.55%
Dataset (b) | 98.02% | 96.04% | 97.03%
Dataset (c) | 84.16% | 92.08% | 88.12%
Dataset (d) | 73.27% | 85.15% | 79.21%
Dataset (e) | 100.00% | 97.03% | 98.52%
Dataset (f) | 100.00% | 100.00% | 100%
Dataset (g) | 100.00% | 100.00% | 100%
Dataset (h) | 100.00% | 100.00% | 100%
Table 8. The results of the MS difference-based index.
Training Sets | Testing Set: Damage 10% | Testing Set: Damage 60% | Total
Dataset (a) | 98.02% | 96.04% | 97.03%
Dataset (b) | 97.03% | 96.04% | 96.54%
Dataset (c) | 78.22% | 98.02% | 88.12%
Dataset (d) | 58.42% | 66.34% | 62.38%
Dataset (e) | 100.00% | 98.02% | 99.01%
Dataset (f) | 100.00% | 100.00% | 100%
Dataset (g) | 100.00% | 100.00% | 100%
Dataset (h) | 100.00% | 100.00% | 100%
Table 9. The predicted results of damage level by MSED.
Rod | Target | Predicted | Rod | Target | Predicted | Rod | Target | Predicted | Rod | Target | Predicted
1 0.60 0.60 27 0.60 0.57 53 0.60 0.56 79 0.60 0.52
2 0.60 0.55 28 0.60 0.57 54 0.60 0.51 80 0.60 0.55
3 0.60 0.57 29 0.60 0.56 55 0.60 0.59 81 0.60 0.58
4 0.60 0.56 30 0.60 0.54 56 0.60 0.57 82 0.60 0.58
5 0.60 0.56 31 0.60 0.56 57 0.60 0.62 83 0.60 0.59
6 0.60 0.55 32 0.60 0.58 58 0.60 0.58 84 0.60 0.60
7 0.60 0.53 33 0.60 0.54 59 0.60 0.58 85 0.60 0.61
8 0.60 0.56 34 0.60 0.54 60 0.60 0.54 86 0.60 0.61
9 0.60 0.59 35 0.60 0.59 61 0.60 0.55 87 0.60 0.61
10 0.60 0.59 36 0.60 0.55 62 0.60 0.57 88 0.60 0.55
11 0.60 0.57 37 0.60 0.51 63 0.60 0.59 89 0.60 0.48
12 0.60 0.54 38 0.60 0.55 64 0.60 0.60 90 0.60 0.54
13 0.60 0.55 39 0.60 0.54 65 0.60 0.62 91 0.60 0.60
14 0.60 0.55 40 0.60 0.55 66 0.60 0.57 92 0.60 0.56
15 0.60 0.56 41 0.60 0.53 67 0.60 0.57 93 0.60 0.60
16 0.60 0.57 42 0.60 0.55 68 0.60 0.56 94 0.60 0.59
17 0.60 0.56 43 0.60 0.54 69 0.60 0.50 95 0.60 0.58
18 0.60 0.57 44 0.60 0.54 70 0.60 0.53 96 0.60 0.58
19 0.60 0.61 45 0.60 0.52 71 0.60 0.57 97 0.60 0.59
20 0.60 0.59 46 0.60 0.54 72 0.60 0.60 98 0.60 0.44
21 0.60 0.57 47 0.60 0.54 73 0.60 0.57 99 0.60 0.53
22 0.60 0.56 48 0.60 0.55 74 0.60 0.56 100 0.60 0.53
23 0.60 0.60 49 0.60 0.55 75 0.60 0.56 101 0.60 0.55
24 0.60 0.58 50 0.60 0.55 76 0.60 0.57
25 0.60 0.57 51 0.60 0.55 77 0.60 0.54
26 0.60 0.58 52 0.60 0.55 78 0.60 0.53
Table 10. The predicted results of damage level by MSD.
Rod | Target | Predicted | Rod | Target | Predicted | Rod | Target | Predicted | Rod | Target | Predicted
1 0.60 0.50 27 0.60 0.53 53 0.60 0.48 79 0.60 0.56
2 0.60 0.55 28 0.60 0.52 54 0.60 0.53 80 0.60 0.54
3 0.60 0.51 29 0.60 0.54 55 0.60 0.44 81 0.60 0.55
4 0.60 0.57 30 0.60 0.55 56 0.60 0.45 82 0.60 0.50
5 0.60 0.55 31 0.60 0.55 57 0.60 0.45 83 0.60 0.56
6 0.60 0.54 32 0.60 0.54 58 0.60 0.45 84 0.60 0.45
7 0.60 0.49 33 0.60 0.52 59 0.60 0.46 85 0.60 0.47
8 0.60 0.54 34 0.60 0.52 60 0.60 0.51 86 0.60 0.48
9 0.60 0.51 35 0.60 0.54 61 0.60 0.45 87 0.60 0.50
10 0.60 0.52 36 0.60 0.49 62 0.60 0.46 88 0.60 0.57
11 0.60 0.53 37 0.60 0.57 63 0.60 0.44 89 0.60 0.57
12 0.60 0.54 38 0.60 0.44 64 0.60 0.47 90 0.60 0.53
13 0.60 0.53 39 0.60 0.46 65 0.60 0.46 91 0.60 0.50
14 0.60 0.51 40 0.60 0.47 66 0.60 0.47 92 0.60 0.52
15 0.60 0.55 41 0.60 0.46 67 0.60 0.45 93 0.60 0.57
16 0.60 0.57 42 0.60 0.54 68 0.60 0.44 94 0.60 0.47
17 0.60 0.52 43 0.60 0.45 69 0.60 0.51 95 0.60 0.47
18 0.60 0.53 44 0.60 0.47 70 0.60 0.44 96 0.60 0.46
19 0.60 0.55 45 0.60 0.52 71 0.60 0.45 97 0.60 0.46
20 0.60 0.54 46 0.60 0.51 72 0.60 0.49 98 0.60 0.50
21 0.60 0.47 47 0.60 0.50 73 0.60 0.59 99 0.60 0.46
22 0.60 0.56 48 0.60 0.49 74 0.60 0.42 100 0.60 0.45
23 0.60 0.56 49 0.60 0.49 75 0.60 0.50 101 0.60 0.55
24 0.60 0.57 50 0.60 0.47 76 0.60 0.50
25 0.60 0.52 51 0.60 0.55 77 0.60 0.53
26 0.60 0.54 52 0.60 0.52 78 0.60 0.56
Table 11. Detection results by CNN (noise).
Noise Level | Testing Data: Damage 10% | Testing Data: Damage 60% | Total
None | 100% | 100% | 100%
0.01 | 94.06% | 95.05% | 94.56%
0.03 | 91.09% | 93.07% | 92.08%
0.05 | 88.12% | 87.13% | 87.63%
0.1 | 50.50% | 52.48% | 51.49%
0.2 | 11.88% | 9.90% | 10.89%
Table 12. Detection effect of different number of nodes in hidden layers (non-noise).
Number of Nodes | Testing Data: Damage 10% | Testing Data: Damage 60% | Uptime
90 | 97.03% | 97.03% | 88 min 26 s
100 | 100% | 100% | 432 min 54 s
110 | 100% | 100% | 451 min 18 s
120 | 100% | 100% | 240 min 11 s
130 | 100% | 100% | 155 min 17 s
140 | 100% | 100% | 1300 min 46 s
Table 13. Detection effect of different number of nodes in hidden layers (noise).
Number of Nodes | N-0.01: Damage 10% | N-0.01: Damage 60% | N-0.03: Damage 10% | N-0.03: Damage 60%
90 | 44.55% | 44.55% | 41.58% | 47.52%
100 | 19.80% | 16.83% | 17.82% | 13.86%
110 | 26.73% | 27.72% | 22.77% | 28.71%
120 | 29.70% | 34.65% | 38.61% | 40.59%
130 | 54.46% | 63.37% | 59.41% | 62.38%
140 | 22.77% | 24.75% | 22.77% | 24.75%
Note: N-#: Gaussian white noise with intensity level #.
Table 14. Results of the ablation study.
Model Variant | Description | Accuracy
Baseline | Proposed architecture | 100.0%
Variant A | No pooling layer | 92.1%
Variant B | Mean pooling instead of max-pooling | 98.0%
Variant C | Only one convolutional layer | 85.6%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
