Prediction of Member Forces of Steel Tubes on the Basis of a Sensor System with the Use of AI
Abstract
1. Introduction
2. ABAQUS Finite Element Analysis
2.1. Establishment of Finite Element Model
2.2. Analysis of Numerical Simulation
3. Introducing the Principles of Artificial Intelligence Machine Learning Models
3.1. Data Analysis
3.1.1. Correlation Matrix Analysis
3.1.2. Statistical Analysis of Variables
3.2. FCNN Model Construction and Analysis Study
- Step 1: Data preprocessing.
- 1. Z(0) represents the input data, which is expanded into the matrix form represented below [53].
- 2. W(1) represents the weight matrix used in the linear transformation of the first layer of the network. This matrix is expanded into the form represented below [54].
- 3. B(1) represents the bias matrix in the linear transformation of the first layer of the network. This matrix is expanded into the form represented below [55]. A minimal numerical sketch of this first-layer transformation follows these items.
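The expanded matrix forms referenced in items 1–3 are not reproduced in this excerpt. The following minimal NumPy sketch illustrates the first-layer transformation they define, Z(1) = W(1)·Z(0) + B(1) followed by a ReLU activation; the batch size, the hidden width of 64, and the random initialization are illustrative assumptions rather than the configuration used in the paper (the 14 strain channels correspond to the Input 14 case).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 14 strain channels (S1G1 ... S1G12), a batch of 32 samples,
# and an assumed hidden width of 64 units.
n_features, n_hidden, batch = 14, 64, 32

Z0 = rng.standard_normal((batch, n_features))            # Z(0): preprocessed input matrix
W1 = 0.1 * rng.standard_normal((n_features, n_hidden))   # W(1): first-layer weight matrix
B1 = np.zeros(n_hidden)                                   # B(1): first-layer bias, broadcast over the batch

Z1 = Z0 @ W1 + B1             # linear transformation of the first layer
A1 = np.maximum(Z1, 0.0)      # ReLU activation

print(Z1.shape, A1.shape)     # (32, 64) (32, 64)
```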
- Step 4: Backpropagation is performed using gradient descent with the adaptive moment estimation (Adam) optimizer. Adam adaptively adjusts the learning rate of each parameter and updates all weights, including those associated with the five sub-loss functions, so that the parameters are updated largely independently of one another. This dynamic adjustment improves the performance of the model and brings the final predicted support reaction forces closer to their actual values. The weight-update formula is expressed as follows [64,65,66,67].
- In the formulas above, the gradient is computed by the chain rule, expressed as follows [68].
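The weight-update and chain-rule formulas cited above are likewise not reproduced here. The sketch below implements the standard Adam update (first- and second-moment estimates with bias correction) for a single parameter tensor; the hyperparameter defaults are the common textbook values and are not necessarily those chosen by the authors, and the gradient is assumed to come from backpropagating the total loss (the sum of the five sub-losses) via the chain rule.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One standard Adam update for a single parameter tensor.

    `grad` is dL/dw, the gradient of the total loss with respect to `w`, obtained
    via the chain rule during backpropagation; `t` is the step index (1, 2, ...).
    """
    m = beta1 * m + (1.0 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1.0 - beta2) * grad ** 2     # second-moment (uncentered variance) estimate
    m_hat = m / (1.0 - beta1 ** t)                # bias-corrected moment estimates
    v_hat = v / (1.0 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)   # adaptive, per-parameter update
    return w, m, v

# Usage: initialize m and v as zero arrays with w's shape and call once per training step.
```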
3.3. CNN Model Construction and Analysis Study
- Step 1: Data preprocessing. The input data were preprocessed according to Formula (1).
- Step 2: Define the primary structural framework of the CNN. The CNN model presented in this paper comprises three convolutional layers and the corresponding three maximum pooling layers [73].
- 1. Calculate the net input Zp of the Conv1D layer. The convolution kernels (Wp,1, Wp,2, ⋯, Wp,D) are convolved with the input feature maps (X1, X2, ⋯, XD); the convolution results are summed and a scalar bias bp is added to obtain the net input Zp of the Conv1D layer [74]. The formulas are represented as follows.
- 2. Compute the output feature map Yp of the Conv1D layer. The net input Zp is passed through the ReLU nonlinear activation function to obtain the output feature map Yp; the convolution-kernel weights and the bias are updated iteratively through the computation of the fully connected layer [75]. The ReLU activation function is applied to each Conv1D layer, with padding so that input and output sizes remain compatible, and is calculated according to Formula (6).
- 3. Calculate MaxPooling1D. The MaxPooling1D operation extracts the maximum value within each window, thereby reducing the spatial dimension of the input data. This also reduces the amount of computation and the number of parameters while preserving the data's salient features [76,77].
- Step 3: The Flatten layer. The output of the MaxPooling1D operation must be flattened to connect it with the subsequent fully connected layer [78].
- Step 4: The fully connected layer. The fully connected part of the network comprises two hidden layers with 64 and 32 nodes, respectively, both activated by the ReLU function. The output layer has five nodes, which predict the five target values. The fully connected layer and the output layer are calculated according to the FCNN principle [79].
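Steps 2–4 can be summarized in a compact Keras sketch: three Conv1D layers, each followed by MaxPooling1D, a Flatten layer, fully connected layers with 64 and 32 ReLU nodes, and a five-node output layer for P, Vx, Vy, Mx, and My. The filter counts, kernel sizes, 'same' padding, input length, and Adam/MSE training configuration below are assumptions for illustration; only the layer types and counts, the 64/32/5 node numbers, and the ReLU activations are taken from the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(n_strain_channels: int = 14, n_outputs: int = 5) -> tf.keras.Model:
    """Sketch: 3 x (Conv1D + MaxPooling1D) -> Flatten -> Dense(64) -> Dense(32) -> Dense(5)."""
    model = models.Sequential([
        layers.Input(shape=(n_strain_channels, 1)),   # strain readings treated as a 1D signal
        layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(128, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(32, activation="relu"),
        layers.Dense(n_outputs),                       # P, Vx, Vy, Mx, My
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

model = build_cnn()
model.summary()
```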
3.4. Comparative Analysis of the FCNN Model and the CNN Model
4. Construction of Analytical Indicators for Model Evaluation
4.1. Error Indicators
4.2. Linear Correlation Strength Indicator
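The definitions of the error and correlation indicators are not reproduced in this excerpt. The sketch below computes them from their standard formulas (MSE, MAE, SMAPE, the coefficient of determination R2, and the Pearson correlation R); the exact SMAPE scaling used by the authors may differ, so treat this as an assumed variant.

```python
import numpy as np

def evaluation_indicators(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Standard metric definitions; the paper's exact SMAPE normalization may differ."""
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    mae = float(np.mean(np.abs(err)))
    smape = float(np.mean(2.0 * np.abs(err) / (np.abs(y_true) + np.abs(y_pred) + 1e-12)))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - np.mean(y_true)) ** 2))
    r2 = 1.0 - ss_res / ss_tot                                     # coefficient of determination
    r = float(np.corrcoef(y_true.ravel(), y_pred.ravel())[0, 1])   # Pearson linear correlation
    return {"MSE": mse, "MAE": mae, "SMAPE": smape, "R2": r2, "R": r}
```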
5. Analysis of Model Predictions
5.1. FCNN Model for Cross-Comparison Prediction Analysis of Input Variables
5.1.1. Comparative Analysis of Total Loss Function Curves
5.1.2. Analysis of Evaluation Results
- Comparative Analysis of Error Indicators.
- Comparative Analysis of the Strength of Linear Correlation.
- Comparative Analysis of Comprehensive Evaluation Indicators.
5.2. CNN Modeling for Cross-Comparison Prediction Analysis of Input Variables
5.3. Interactive Coupled Mechanics Prediction System GUI for CNN Models
5.4. Numerical Experiments for CNN Model
6. Conclusions
- This study proposes an intelligent multivariate regression model based on FCNN and CNN for predicting the internal forces of FRP double-helix sensor-coupled steel tube members. The findings demonstrate that AI-driven methods have significant advantages in the field of Structural Health Monitoring (SHM). By integrating deep learning technologies with the FRP double-helix sensor system, this approach not only enhances the accuracy of internal force prediction but also makes SHM for offshore wind turbine support systems more intelligent and efficient. The results highlight the extensive application potential of AI technology in civil engineering.
- Regarding the integration of sensors and neural network models, the FRP double-helix sensor enables reliable multi-point strain data collection through optimized layout, while the CNN model optimizes variable combinations by transforming the data into image-like formats. This process ultimately determines the optimal number and placement of sensors (Input 6). This design significantly reduces the number of sensors required while maintaining prediction accuracy. The optimized approach effectively reduces project costs and resource consumption in engineering practice, validating the capability of deep learning models to address complex engineering problems.
- The practical application of this research is reflected in the development of an interactive graphical user interface (GUI) tool that enables engineers to quickly predict the internal forces of support members. This tool is suitable for real-time monitoring and rapid evaluation of offshore wind turbine support systems, providing effective support for improving the efficiency of engineering design and maintenance. The reliability of the model is verified using performance metrics such as R2, MSE, MAE, and SMAPE, demonstrating its advantages in addressing multi-output problems.
- Despite the significant progress achieved, certain limitations remain. The analysis in this study mainly focuses on conventional conditions within the elastic range. Future work could extend the approach to complex stress fields under nonlinear conditions. Additionally, integrating more experimental data and real-world operational tests could further validate the model’s robustness and generalizability, offering insights for designing and optimizing other support systems.
- This study presents an innovative SHM method and provides new solutions for intelligent civil engineering design. Future research will continue to explore the broader applicability and engineering scalability of this method, laying the foundation for long-term SHM and performance optimization of complex structures.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Ren, W.; Zhou, X.H.; Gao, Y.; Deng, R.; Wang, Y.H.; Cao, Y.Q. Compressive behavior of stiffened steel tubes for wind turbine towers. Thin-Walled Struct. 2023, 183, 110372. [Google Scholar] [CrossRef]
- Haselibozchaloee, D.; Correia, J.; Mendes, P.; de Jesus, A.; Berto, F. A review of fatigue damage assessment in offshore wind turbine support structure. Int. J. Fatigue 2022, 164, 107145. [Google Scholar] [CrossRef]
- Varelis, G.E.; Papatheocharis, T.; Karamanos, S.A.; Perdikaris, P.C. Structural behavior and design of high-strength steel welded tubular connections under extreme loading. Mar. Struct. 2020, 71, 102701. [Google Scholar] [CrossRef]
- Wang, L.; Kolios, A.; Liu, X.; Venetsanos, D.; Cai, R. Reliability of offshore wind turbine support structures: A state-of-the-art review. Renew. Sustain. Energy Rev. 2022, 161, 112250. [Google Scholar] [CrossRef]
- Yang, Y.; Liang, F.; Zhu, Q.; Zhang, H. An Overview on Structural Health Monitoring and Fault Diagnosis of Offshore Wind Turbine Support Structures. J. Mar. Sci. Eng. 2024, 12, 377. [Google Scholar] [CrossRef]
- Rohit, T.; Kannan, U.K.; Pua, J.Y.; Alexander, C.H.C.; Sivakumar, S.; Narendran, R.; Fatin, A. To design simulate an off-shore wind turbine system and monitor its performance in real time. AIP Conf. Proc. 2024, 3161, 020137. [Google Scholar]
- Wang, M.; Wang, C.; Hnydiuk-Stefan, A.; Feng, S.; Atilla, I.; Li, Z. Recent progress on reliability analysis of offshore wind turbine support structures considering digital twin solutions. Ocean Eng. 2021, 232, 109168. [Google Scholar] [CrossRef]
- Glisic, B. Concise Historic Overview of Strain Sensors Used in the Monitoring of Civil Structures: The First One Hundred Years. Sensors 2022, 22, 2397. [Google Scholar] [CrossRef]
- Gao, K.; Zhang, Z.; Weng, S.; Zhu, H.; Yu, H.; Peng, T. Review of flexible piezoresistive strain sensors in civil structural health monitoring. Appl. Sci. 2022, 12, 9750. [Google Scholar] [CrossRef]
- Bado, M.F.; Casas, J.R. A review of recent distributed optical fiber sensors applications for civil engineering structural health monitoring. Sensors 2021, 21, 1818. [Google Scholar] [CrossRef]
- Shibu, M.; Kumar, K.P.; Pillai, V.J.; Murthy, H.; Chandra, S. Structural health monitoring using AI and ML based multimodal sensors data. Meas. Sens. 2023, 27, 100762. [Google Scholar] [CrossRef]
- Tapeh, A.T.G.; Naser, M.Z. Artificial intelligence, machine learning, and deep learning in structural engineering: A scientometrics review of trends and best practices. Arch. Comput. Methods Eng. 2023, 30, 115–159. [Google Scholar] [CrossRef]
- Thai, H.T. Machine learning for structural engineering: A state-of-the-art review. Structures 2022, 38, 448–491. [Google Scholar] [CrossRef]
- Pan, M.; Yang, Y.; Zheng, Z.; Pan, W. Artificial intelligence and robotics for prefabricated and modular construction: A systematic literature review. J. Constr. Eng. Manag. 2022, 148, 03122004. [Google Scholar] [CrossRef]
- Moein, M.M.; Saradar, A.; Rahmati, K.; Mousavinejad, S.H.G.; Bristow, J.; Aramali, V.; Karakouzian, M. Predictive models for concrete properties using machine learning and deep learning approaches: A review. J. Build. Eng. 2023, 63, 105444. [Google Scholar] [CrossRef]
- Baghbani, A.; Choudhury, T.; Costa, S.; Reiner, J. Application of artificial intelligence in geotechnical engineering: A state-of-the-art review. Earth-Sci. Rev. 2022, 228, 103991. [Google Scholar] [CrossRef]
- Sharma, S.; Ahmed, S.; Naseem, M.; Alnumay, W.S.; Singh, S.; Cho, G.H. A survey on applications of artificial intelligence for pre-parametric project cost and soil shear-strength estimation in construction and geotechnical engineering. Sensors 2021, 21, 463. [Google Scholar] [CrossRef]
- Lawal, A.I.; Kwon, S. Application of artificial intelligence to rock mechanics: An overview. J. Rock Mech. Geotech. Eng. 2021, 13, 248–266. [Google Scholar] [CrossRef]
- Zhang, W.; Gu, X.; Hong, L.; Han, L.; Wang, L. Comprehensive review of machine learning in geotechnical reliability analysis: Algorithms, applications and further challenges. Appl. Soft Comput. 2023, 136, 110066. [Google Scholar] [CrossRef]
- Kumar, M.; Kumar, V.; Rajagopal, B.G.; Samui, P.; Burman, A. State of art soft computing based simulation models for bearing capacity of pile foundation: A comparative study of hybrid ANNs and conventional models. Model. Earth Syst. Environ. 2023, 9, 2533–2551. [Google Scholar] [CrossRef]
- Luleci, F.; Catbas, F.N.; Avci, O. Generative adversarial networks for data generation in structural health monitoring. Front. Built Environ. 2022, 8, 816644. [Google Scholar] [CrossRef]
- Sabato, A.; Dabetwar, S.; Kulkarni, N.N.; Fortino, G. Noncontact sensing techniques for AI-aided structural health monitoring: A systematic review. IEEE Sens. J. 2023, 23, 4672–4684. [Google Scholar] [CrossRef]
- Azimi, M.; Eslamlou, A.D.; Pekcan, G. Data-driven structural health monitoring and damage detection through deep learning: State-of-the-art review. Sensors 2020, 20, 2778. [Google Scholar] [CrossRef]
- Lagaros, N.D. Artificial neural networks applied in civil engineering. Appl. Sci. 2023, 13, 1131. [Google Scholar] [CrossRef]
- Chang, J.; Huang, H.; Thewes, M.; Zhang, D.; Wu, H. Data-Based postural prediction of shield tunneling via machine learning with physical information. Comput. Geotech. 2024, 174, 106584. [Google Scholar] [CrossRef]
- Zou, J.; Lourens, E.M.; Cicirello, A. Virtual sensing of subsoil strain response in monopile-based offshore wind turbines via Gaussian process latent force models. Mech. Syst. Signal Process. 2023, 200, 110488. [Google Scholar] [CrossRef]
- Gualtero, I.A.; Alvi, A.H.; Womble, S.D.; Jacobsen, J.J.; Collie, V.S. Data-driven preventive maintenance and service life extension of the Sunshine Skyway Bridge–Tampa FL, USA. In Bridge Maintenance, Safety, Management, Digitalization and Sustainability; CRC Press: Boca Raton, FL, USA, 2024; pp. 3196–3204. [Google Scholar]
- Sharma, A.; Adhikary, S.; Singh, R. Three Dimensional Numerical Modelling of Piles Using ABAQUS Software. In Structural Engineering Convention; Springer Nature: Singapore, 2023; pp. 633–642. [Google Scholar]
- Wu, R.; Jiang, Y.; Zhao, S.; Chen, M.; Shang, S.; Lang, X. Application and comparative analysis of Intelligent Monitoring Technology for Grouted Pile Construction based on abaqus. Sci. Rep. 2024, 14, 9253. [Google Scholar] [CrossRef]
- Khan, Z.; Sharma, A. Numerical Simulation of Piled-Raft Foundation in Cohesionless Soil using ABAQUS. J. Min. Environ. 2023, 14, 1183–1203. [Google Scholar]
- Ewing, J.A. The Strength of Materials; University Press: Cambridge, UK, 1921; Available online: https://cir.nii.ac.jp/crid/1130282272283309440 (accessed on 1 February 2025).
- Huda, Z.; Huda, Z. Mechanical Behavior of Materials: Fundamentals, Analysis, and Calculations. Elast. Viscoelasticity 2022, 119–142. Available online: https://books.google.com/books?id=yYNSEAAAQBAJ (accessed on 1 February 2025).
- Lynch, S. Python for Scientific Computing and Artificial Intelligence; Chapman and Hall/CRC: Boca Raton, FL, USA, 2023. [Google Scholar]
- Kumar, A.; Saharia, M. Python Environment and Basics. In Python for Water and Environment; Springer Nature: Singapore, 2024; pp. 7–12. [Google Scholar]
- Yang, J.; Zhang, L.; Chen, C.; Li, Y.; Li, R.; Wang, G.; Zeng, Z. A hierarchical deep convolutional neural network and gated recurrent unit framework for structural damage detection. Inf. Sci. 2020, 540, 117–130. [Google Scholar] [CrossRef]
- Auffarth, B. Machine Learning for Time-Series with Python: Forecast, Predict, and Detect Anomalies with State-of-the-art Machine Learning Methods; Packt Publishing Ltd.: Birmingham, UK, 2021. [Google Scholar]
- Wang, Z.N.; Yao, J.; Liu, H.; Liu, Y.; Jin, H.; Zhang, Y. oppHeatmap: Rendering Various Types of Heatmaps for Omics Data. Appl. Biochem. Biotechnol. 2024, 196, 2356–2366. [Google Scholar] [CrossRef] [PubMed]
- Macquisten, A.M. Hierarchical Visualization of High Dimensional Data: Interactive Exploration Of’omics Type Data. Doctoral Dissertation, Newcastle University, Newcastle upon Tyne, UK, 2022. [Google Scholar]
- Gu, Z. Complex heatmap visualization. Imeta 2022, 1, e43. [Google Scholar] [CrossRef] [PubMed]
- Meeker, W.Q.; Escobar, L.A.; Pascual, F.G. Statistical Methods for Reliability Data; John Wiley Son: Hoboken, NJ, USA, 2022. [Google Scholar]
- Washington, S.; Karlaftis, M.G.; Mannering, F.; Anastasopoulos, P. Statistical and Econometric Methods for Transportation Data Analysis; Chapman and Hall/CRC: Boca Raton, FL, USA, 2020. [Google Scholar]
- Hehman, E.; Xie, S.Y. Doing better data visualization. Adv. Methods Pract. Psychol. Sci. 2021, 4, 25152459211045334. [Google Scholar] [CrossRef]
- Dentith, M.; Enkin, R.J.; Morris, W.; Adams, C.; Bourne, B. Petrophysics and mineral exploration: A workflow for data analysis and a new interpretation framework. Geophys. Prospect. 2020, 68, 178–199. [Google Scholar] [CrossRef]
- Petrelli, M. Introduction to Python in Earth Science Data Analysis: From Descriptive Statistics to Machine Learning; Springer Nature: Berlin, Germany, 2021. [Google Scholar]
- Park, K.B.; Lee, J.Y. SwinE-Net: Hybrid deep learning approach to novel polyp segmentation using convolutional neural network and Swin Transformer. J. Comput. Des. Eng. 2022, 9, 616–632. [Google Scholar] [CrossRef]
- Khan, A.; Sohail, A.; Zahoora, U.; Qureshi, A.S. A survey of the recent architectures of deep convolutional neural networks. Artif. Intell. Rev. 2020, 53, 5455–5516. [Google Scholar] [CrossRef]
- Moayedi, H.; Mosallanezhad, M.; Rashid, A.S.A.; Jusoh, W.A.W.; Muazu, M.A. A systematic review and meta-analysis of artificial neural network application in geotechnical engineering: Theory and applications. Neural Comput. Appl. 2020, 32, 495–518. [Google Scholar] [CrossRef]
- Kaveh, A. Applications of artificial neural networks and machine learning in civil engineering. Stud. Comput. Intell. 2024, 1168, 472. [Google Scholar]
- Li, B.; Wu, F.; Lim, S.N.; Belongie, S.; Weinberger, K.Q. On feature normalization and data augmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 12383–12392. [Google Scholar]
- Huang, L.; Qin, J.; Zhou, Y.; Zhu, F.; Liu, L.; Shao, L. Normalization techniques in training dnns: Methodology, analysis and application. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 10173–10196. [Google Scholar] [CrossRef]
- Vadyala, S.R.; Betgeri, S.N.; Matthews, J.C.; Matthews, E. A review of physics-based machine learning in civil engineering. Results Eng. 2022, 13, 100316. [Google Scholar] [CrossRef]
- Asghari, V.; Leung, Y.F.; Hsu, S.C. Deep neural network based framework for complex correlations in engineering metrics. Adv. Eng. Inform. 2020, 44, 101058. [Google Scholar] [CrossRef]
- Yang, X.; Guan, J.; Ding, L.; You, Z.; Lee, V.C.; Hasan, M.R.M.; Cheng, X. Research and applications of artificial neural network in pavement engineering: A state-of-the-art review. J. Traffic Transp. Eng. 2021, 8, 1000–1021. [Google Scholar] [CrossRef]
- Chandrasekhar, A.; Suresh, K. TOuNN: Topology optimization using neural networks. Struct. Multidiscip. Optim. 2021, 63, 1135–1149. [Google Scholar] [CrossRef]
- Suganthan, P.N.; Katuwal, R. On the origins of randomization-based feedforward neural networks. Appl. Soft Comput. 2021, 105, 107239. [Google Scholar] [CrossRef]
- Apicella, A.; Donnarumma, F.; Isgrò, F.; Prevete, R. A survey on modern trainable activation functions. Neural Netw. 2021, 138, 14–32. [Google Scholar] [CrossRef] [PubMed]
- Rasamoelina, A.D.; Adjailia, F.; Sinčák, P. A review of activation function for artificial neural network. In Proceedings of the 2020 IEEE 18th World Symposium on Applied Machine Intelligence and Informatics (SAMI), Herl’any, Slovakia, 23–25 January 2020; pp. 281–286. [Google Scholar]
- Boob, D.; Dey, S.S.; Lan, G. Complexity of training ReLU neural network. Discret. Optim. 2022, 44, 100620. [Google Scholar] [CrossRef]
- Parhi, R.; Nowak, R.D. The role of neural network activation functions. IEEE Signal Process. Lett. 2020, 27, 1779–1783. [Google Scholar] [CrossRef]
- Jentzen, A.; Riekert, A. A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions. J. Mach. Learn. Res. 2022, 23, 1–50. [Google Scholar]
- Wang, Q.; Ma, Y.; Zhao, K.; Tian, Y. A comprehensive survey of loss functions in machine learning. Ann. Data Sci. 2020, 9, 187–212. [Google Scholar] [CrossRef]
- Skorski, M.; Temperoni, A.; Theobald, M. Revisiting weight initialization of deep neural networks. In Proceedings of the Asian Conference on Machine Learning, PMLR, Virtually, 17–19 November 2021; pp. 1192–1207. [Google Scholar]
- Narkhede, M.V.; Bartakke, P.P.; Sutaone, M.S. A review on weight initialization strategies for neural networks. Artif. Intell. Rev. 2022, 55, 291–322. [Google Scholar] [CrossRef]
- Chen, C.T.; Gu, G.X. Generative deep neural networks for inverse materials design using backpropagation and active learning. Adv. Sci. 2020, 7, 1902607. [Google Scholar] [CrossRef]
- Wright, L.G.; Onodera, T.; Stein, M.M.; Wang, T.; Schachter, D.T.; Hu, Z.; McMahon, P.L. Deep physical neural networks trained with backpropagation. Nature 2022, 601, 549–555. [Google Scholar] [CrossRef]
- Reyad, M.; Sarhan, A.M.; Arafa, M. A modified Adam algorithm for deep neural network optimization. Neural Comput. Appl. 2023, 35, 17095–17112. [Google Scholar] [CrossRef]
- Ogundokun, R.O.; Maskeliunas, R.; Misra, S.; Damaševičius, R. Improved CNN based on batch normalization and adam optimizer. In Proceedings of the International Conference on Computational Science and Its Applications, Malaga, Spain, 4–7 July 2022; Springer: Cham, Switzerland, 2022; pp. 593–604. [Google Scholar]
- Smith, S.L.; Dherin, B.; Barrett, D.G.; De, S. On the origin of implicit regularization in stochastic gradient descent. arXiv 2021, arXiv:2101.12176. [Google Scholar]
- Tripathy, S.; Singh, R. Convolutional neural network: An overview and application in image classification. In Proceedings of the Third International Conference on Sustainable Computing: SUSCOM 2021, Jaipur, India, 19–20 March 2021; Springer Nature: Singapore, 2022; pp. 145–153. [Google Scholar]
- Yang, Q.; Shi, W.; Chen, J.; Lin, W. Deep convolution neural network-based transfer learning method for civil infrastructure crack detection. Autom. Constr. 2020, 116, 103199. [Google Scholar] [CrossRef]
- He, M.; Zhang, Z.; Li, N. Deep convolutional neural network-based method for strength parameter prediction of jointed rock mass using drilling logging data. Int. J. Geomech. 2021, 21, 04021111. [Google Scholar] [CrossRef]
- Ehtisham, R.; Qayyum, W.; Camp, C.V.; Mir, J.; Ahmad, A. Predicting the defects in wooden structures by using pre-trained models of Convolutional Neural Network and Image Processing. In Proceedings of the 2nd International Conference on Recent Advances in Civil Engineering and Disaster Management, Peshawar Pakistan, 15–16 December 2022; pp. 208–212. [Google Scholar]
- Ingrosso, A.; Goldt, S. Data-driven emergence of convolutional structure in neural networks. Proc. Natl. Acad. Sci. USA 2022, 119, e2201854119. [Google Scholar] [CrossRef] [PubMed]
- Taye, M.M. Theoretical understanding of convolutional neural network: Concepts, architectures, applications, future directions. Computation 2023, 11, 52. [Google Scholar] [CrossRef]
- Wen, S.; Chen, J.; Wu, Y.; Yan, Z.; Cao, Y.; Yang, Y.; Huang, T. CKFO: Convolution kernel first operated algorithm with applications in memristor-based convolutional neural network. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2020, 40, 1640–1647. [Google Scholar] [CrossRef]
- Zafar, A.; Aamir, M.; Mohd Nawi, N.; Arshad, A.; Riaz, S.; Alruban, A.; Almotairi, S. A comparison of pooling methods for convolutional neural networks. Appl. Sci. 2022, 12, 8643. [Google Scholar] [CrossRef]
- Jie, H.J.; Wanda, P. RunPool: A dynamic pooling layer for convolution neural network. Int. J. Comput. Intell. Syst. 2020, 13, 66–76. [Google Scholar] [CrossRef]
- Ahmadzadeh, M.; Zahrai, S.M.; Bitaraf, M. An integrated deep neural network model combining 1D CNN and LSTM for structural health monitoring utilizing multisensor time-series data. Struct. Health Monit. 2024, 24, 447–465. [Google Scholar] [CrossRef]
- Basha, S.S.; Dubey, S.R.; Pulabaigari, V.; Mukherjee, S. Impact of fully connected layers on performance of convolutional neural networks for image classification. Neurocomputing 2020, 378, 112–119. [Google Scholar] [CrossRef]
- Hirata, D.; Takahashi, N. Ensemble learning in CNN augmented with fully connected subnetworks. IEICE Trans. Inf. Syst. 2023, 106, 1258–1261. [Google Scholar] [CrossRef]
- Jatmika, S.; Patmanthara, S.; Wibawa Aji, P.; Kurniawan, F. The model of local wisdom for smart wellness tourism with optimization multilayer perceptron. J. Theor. Appl. Inf. Technol. 2024, 102, 640–652. [Google Scholar]
- Lyu, Z.; Yu, Y.; Samali, B.; Rashidi, M.; Mohammadi, M.; Nguyen, T.N.; Nguyen, A. Back-propagation neural network optimized by K-fold cross-validation for prediction of torsional strength of reinforced Concrete beam. Materials 2022, 15, 1477. [Google Scholar] [CrossRef]
- Lin, S.; Zheng, H.; Han, C.; Han, B.; Li, W. Evaluation and prediction of slope stability using machine learning approaches. Front. Struct. Civ. Eng. 2021, 15, 821–833. [Google Scholar] [CrossRef]
- Ranjbar, A.; Barahmand, N.; Ghanbari, A. Hybrid artificial intelligence model development for roller-compacted concrete compressive strength estimation. Int. J. Eng. 2020, 33, 1852–1863. [Google Scholar]
- Dufera, A.G.; Liu, T.; Xu, J. Regression models of Pearson correlation coefficient. Stat. Theory Relat. Fields 2023, 7, 97–106. [Google Scholar] [CrossRef]
- Li, G.; Zhang, A.; Zhang, Q.; Wu, D.; Zhan, C. Pearson correlation coefficient-based performance enhancement of broad learning system for stock price prediction. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 2413–2417. [Google Scholar] [CrossRef]
- Zhao, Y.; Liu, Y.; Li, Y.; Hao, Q. Development and application of resistance strain force sensors. Sensors 2020, 20, 5826. [Google Scholar] [CrossRef]
- Zheng, Y.; Zhu, Z.W.; Xiao, W.; Deng, Q.X. Review of fiber optic sensors in geotechnical health monitoring. Opt. Fiber Technol. 2020, 54, 102127. [Google Scholar] [CrossRef]
- Wang, Z.J.; Turko, R.; Shaikh, O.; Park, H.; Das, N.; Hohman, F.; Chau, D.H.P. CNN explainer: Learning convolutional neural networks with interactive visualization. IEEE Trans. Vis. Comput. Graph. 2020, 27, 1396–1406. [Google Scholar] [CrossRef]
- Jin, H.; Wagner, M.W.; Ertl-Wagner, B.; Khalvati, F. An educational graphical user interface to construct convolutional neural networks for teaching artificial intelligence in radiology. Can. Assoc. Radiol. J. 2023, 74, 526–533. [Google Scholar] [CrossRef]
Material | Diameter (mm) | Thickness (mm) | Length (mm) | Modulus of Elasticity (MPa) | Poisson’s Ratio |
---|---|---|---|---|---|
Steel Tube | 97 | 3 | 1000 | 205,000 | 0.3 |
FRP Tube | 100 | 5 | 300 | 15,000 | 0.2 |
Variable \ Number | 1 | – | 50 | – | 500 | – | 800 | – | 1000
---|---|---|---|---|---|---|---|---|---
Time | 0.001 | – | 0.05 | – | 0.5 | – | 0.8 | – | 1 |
S1G1 | 2.60643 × 10−6 | – | 1.11428 × 10−4 | – | −7.14399 × 10−8 | – | −7.93848 × 10−4 | – | −1.18914 × 10−7 |
S1G2 | 5.12635 × 10−7 | – | −4.98577 × 10−6 | – | 2.34286 × 10−7 | – | 1.57363 × 10−4 | – | 2.36832 × 10−7 |
S1G3 | 2.19770 × 10−6 | – | 9.71944 × 10−5 | – | −2.02134 × 10−7 | – | 1.22821 × 10−4 | – | 1.67849 × 10−7 |
S1G3.5 | −2.84843 × 10−6 | – | −1.24557 × 10−4 | – | 2.89937 × 10−7 | – | 2.57832 × 10−4 | – | −1.07993 × 10−6 |
S1G4 | −1.52650 × 10−6 | – | −7.72136 × 10−5 | – | 7.22822 × 10−8 | – | −2.90367 × 10−4 | – | 3.43204 × 10−9 |
S1G5 | −5.49184 × 10−6 | – | −2.07579 × 10−4 | – | 1.80148 × 10−7 | – | 3.81431 × 10−5 | – | 6.26635 × 10−7 |
S1G6 | 6.55528 × 10−7 | – | −1.72538 × 10−5 | – | 4.35172 × 10−7 | – | 6.73431 × 10−4 | – | −9.15750 × 10−7 |
S1G7 | −2.30069 × 10−6 | – | −1.29847 × 10−4 | – | −2.90252 × 10−7 | – | 5.59804 × 10−4 | – | 4.80166 × 10−7 |
S1G8 | 4.42004 × 10−7 | – | −1.31939 × 10−5 | – | 6.18470 × 10−7 | – | 1.22710 × 10−4 | – | 2.02801 × 10−7 |
S1G9 | 4.46011 × 10−7 | – | −1.72241 × 10−5 | – | −3.58697 × 10−7 | – | 2.36182 × 10−4 | – | −1.99505 × 10−7 |
S1G9.5 | 3.36411 × 10−6 | – | 1.91283 × 10−4 | – | −7.28941 × 10−7 | – | −3.13872 × 10−5 | – | 6.92708 × 10−8 |
S1G10 | 8.65081 × 10−9 | – | 2.85809 × 10−6 | – | 7.77037 × 10−7 | – | 3.36948 × 10−4 | – | 5.34538 × 10−7 |
S1G11 | 3.11608 × 10−6 | – | 1.54039 × 10−4 | – | 4.58082 × 10−7 | – | −1.24424 × 10−4 | – | 7.79329 × 10−7 |
S1G12 | 3.02017 × 10−6 | – | 1.29256 × 10−4 | – | −1.39974 × 10−7 | – | −4.22722 × 10−4 | – | −4.21732 × 10−7 |
P | 7.00000 | – | 3.50000 × 102 | – | 8.33333 × 102 | – | 9.33333 × 102 | – | 1.00000 × 103 |
Vx | −3.13953 × 102 | – | −1.54508 × 104 | – | 1.00398 × 10−5 | – | 4.75528 × 104 | – | 2.15214 × 10−5 |
Vy | −6.26666 × 102 | – | −2.93893 × 104 | – | 2.95524 × 10−5 | – | 2.93893 × 104 | – | 1.17724 × 10−5 |
Mx | 6.07830 × 105 | – | 2.84635 × 107 | – | −6.11875 × 10−2 | – | −2.65352 × 107 | – | −7.43699 × 10−2 |
My | −3.51553 × 105 | – | −1.72151 × 107 | – | −2.68673 × 10−1 | – | 4.93165 × 107 | – | 2.69873 × 10−2 |
Variable | Max | Min | Average | SD | Median | Skewness | Kurtosis | Quantity |
---|---|---|---|---|---|---|---|---|
S1G1 | 5.18562 × 10−4 | −1.08657 × 10−3 | −1.34969 × 10−4 | 3.58551 × 10−4 | −4.70620 × 10−5 | −9.18675 × 10−1 | 7.59991 × 10−1 | 1000 |
S1G2 | 8.13798 × 10−4 | −2.53403 × 10−4 | 9.69276 × 10−5 | 2.31278 × 10−4 | 3.83514 × 10−5 | 1.41300 | 2.15216 | 1000 |
S1G3 | 4.34290 × 10−4 | −7.02724 × 10−4 | 1.29007 × 10−5 | 2.86414 × 10−4 | 5.22401 × 10−5 | −8.24923 × 10−1 | 1.45556 × 10−1 | 1000 |
S1G3.5 | 4.74011 × 10−4 | −9.89815 × 10−4 | −3.98808 × 10−5 | 3.29115 × 10−4 | −2.64781 × 10−5 | −8.52608 × 10−1 | 7.05133 × 10−1 | 1000 |
S1G4 | 6.15423 × 10−4 | −4.35879 × 10−4 | 3.49296 × 10−5 | 2.67868 × 10−4 | 3.02154 × 10−6 | 4.22521 × 10−1 | −3.96468 × 10−1 | 1000 |
S1G5 | 6.39601 × 10−4 | −7.68633 × 10−4 | 6.71803 × 10−5 | 3.57381 × 10−4 | 5.35384 × 10−5 | −5.42648 × 10−1 | −1.73634 × 10−1 | 1000 |
S1G6 | 6.76991 × 10−4 | −6.05781 × 10−4 | −4.74632 × 10−5 | 2.68679 × 10−4 | −1.00409 × 10−5 | −6.03959 × 10−2 | 7.53766 × 10−1 | 1000 |
S1G7 | 5.63200 × 10−4 | −9.36714 × 10−4 | −1.07080 × 10−4 | 3.31270 × 10−4 | −6.59848 × 10−5 | −6.01145 × 10−1 | 4.84777 × 10−1 | 1000 |
S1G8 | 8.09240 × 10−4 | −3.33270 × 10−4 | 9.73150 × 10−5 | 2.38915 × 10−4 | 4.02714 × 10−5 | 1.00508 | 1.40003 | 1000 |
S1G9 | 4.04137 × 10−4 | −8.16492 × 10−4 | 8.43682 × 10−6 | 3.02262 × 10−4 | 5.81366 × 10−5 | −1.19401 | 7.13884 × 10−1 | 1000 |
S1G9.5 | 5.59487 × 10−4 | −1.02465 × 10−3 | −2.58145 × 10−5 | 3.40170 × 10−4 | −3.33002 × 10−5 | −5.91806 × 10−1 | 6.22294 × 10−1 | 1000 |
S1G10 | 5.15147 × 10−4 | −4.84967 × 10−4 | 6.29947 × 10−5 | 2.39223 × 10−4 | 5.98854 × 10−5 | −3.82613 × 10−1 | −1.37343 × 10−1 | 1000 |
S1G11 | 6.92355 × 10−4 | −5.15911 × 10−4 | 4.62883 × 10−5 | 2.74684 × 10−4 | 1.60620 × 10−6 | 2.81283 × 10−1 | −1.18549 × 10−1 | 1000 |
S1G12 | 8.29720 × 10−4 | −5.32430 × 10−4 | −1.06863 × 10−5 | 2.81636 × 10−4 | 6.73036 × 10−6 | 7.96226 × 10−1 | 8.34247 × 10−1 | 1000 |
P | 1000 | 0 | 7.99699 × 102 | 1.83504 × 102 | 8.33333 × 102 | −2.22521 | 5.64135 | 1000 |
Vx | 50,000 | −50,000 | 3.15312 × 10−8 | 3.53438 × 104 | 1.00398 × 10−5 | −2.68136 × 10−12 | −1.49999 | 1000 |
Vy | 4.99013 × 104 | −4.99013 × 104 | 7.99676 × 10−5 | 3.53093 × 104 | 3.39514 × 10−6 | −2.26788 × 10−9 | −1.49999 | 1000 |
Mx | 52,078,600 | −52,077,100 | −2.86954 × 102 | 3.53699 × 107 | −6.11875 × 10−2 | 4.06369 × 10−6 | −1.48935 | 1000 |
My | 50,355,900 | −50,354,200 | 6.97984 × 102 | 3.54052 × 107 | 0 | 1.48570 × 10−5 | −1.48917 | 1000 |
Model | Advantages | Disadvantages
---|---|---
FCNN | The model is capable of processing a multitude of input data formats, both linear and nonlinear, and is well suited for a diverse array of tasks, including regression and classification. The generality of this approach renders the structure relatively simple and facilitates comprehension and implementation. | Each neuron is connected to all neurons in the preceding layer. The number of parameters increases exponentially with the number of layers. The computational cost is high, and overfitting is a significant concern. The spatial information present in the input data, such as images, cannot be effectively utilized, resulting in the loss of spatial information. Each pixel point is treated as an independent feature, and there is no mechanism in place to capture local features. |
CNN | The convolutional layer and pooling layer facilitate the effective extraction of local and global spatial features in the input data, thereby enhancing the model’s generalization ability and preventing overfitting. The convolution operation and the movement of the convolution kernel effectively reduce the number of parameters through the parameter sharing mechanism, thereby reducing the computational burden and conserving memory and processing time. | The structure is more intricate and necessitates a more comprehensive hyper-parameter tuning and design process. The model is not sufficiently flexible to accommodate non-image data. However, it is possible to transform non-image data into suitable inputs through the process of feature engineering. |
Input | FCNN R2 | FCNN MSE | FCNN MAE | FCNN SMAPE | FCNN R | CNN R2 | CNN MSE | CNN MAE | CNN SMAPE | CNN R
---|---|---|---|---|---|---|---|---|---|---
Input 2 | 0.746 | 0.082 | 0.175 | 1.124 | 0.863 | 0.768 | 0.069 | 0.181 | 1.113 | 0.877 |
Input 4 | 0.954 | 0.007 | 0.054 | 1.100 | 0.978 | 0.967 | 0.007 | 0.049 | 1.104 | 0.984 |
Input 6 | 0.980 | 0.0012 | 0.023 | 1.102 | 0.991 | 0.993 | 0.00056 | 0.018 | 1.103 | 0.997 |
Input 10a | 0.992 | 0.0005 | 0.016 | 1.101 | 0.996 | 0.993 | 0.00034 | 0.013 | 1.105 | 0.997 |
Input 10b | 0.993 | 0.00032 | 0.011 | 1.103 | 0.998 | 0.992 | 0.00036 | 0.011 | 1.101 | 0.996 |
Input 14 | 0.996 | 0.00025 | 0.011 | 1.101 | 0.998 | 0.995 | 0.00025 | 0.011 | 1.102 | 0.998 |
Input microstrain values: S1G1 = −15.8499, S1G3.5 = −2.75776, S1G6 = −6.70106, S1G7 = 19.8599, S1G9.5 = 7.70355, S1G12 = 0.186638.

Force | Forces [A] | Predicted Forces [B] | Error [A−B]
---|---|---|---
P [kN] | 0.966667 | 0.982713 | −0.016046
Vx [kN] | 0.117557 | 0.122016 | −0.004459
Vy [kN] | −1.771 × 10−15 | −3.2504 × 10−15 | 1.4794 × 10−15
Mx [kN·m] | −6.47132 × 10−8 | −6.30282 × 10−8 | −0.168498 × 10−8
My [kN·m] | 2.02018 | 1.96190 | 0.05828
Input microstrain values: S1G1 = −7.92872, S1G3.5 = 1.83893, S1G6 = −3.14737, S1G7 = 8.09374, S1G9.5 = 2.26935, S1G12 = −1.50729.

Force | Forces [A] | Predicted Forces [B] | Error [A−B]
---|---|---|---
P [kN] | 1.93333 | 1.83586 | 0.0974706
Vx [kN] | 0.235114 | 0.229871 | 0.0052433
Vy [kN] | 0.190211 | 0.189499 | 0.00071216
Mx [kN·m] | −0.425514 | −0.419330 | −0.0061838
My [kN·m] | 0.996432 | 0.988110 | 0.0083224
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Li, H.; Chung, H. Prediction of Member Forces of Steel Tubes on the Basis of a Sensor System with the Use of AI. Sensors 2025, 25, 919. https://doi.org/10.3390/s25030919