Search Results (122)

Search Parameters:
Keywords = networked sensor fusion and decisions

17 pages, 1133 KB  
Article
Spatio-Temporal Recursive Method for Traffic Flow Interpolation
by Gang Wang, Yuhao Mao, Xu Liu, Haohan Liang and Keqiang Li
Symmetry 2025, 17(9), 1577; https://doi.org/10.3390/sym17091577 - 21 Sep 2025
Viewed by 248
Abstract
Traffic data sequence imputation plays a crucial role in maintaining the integrity and reliability of transportation analytics and decision-making systems. With the proliferation of sensor technologies and IoT devices, traffic data often contain missing values due to sensor failures, communication issues, or data processing errors. These missing values must be interpolated effectively to ensure the correctness of downstream analyses. Compared with other data, traffic flow monitoring data show significant temporal and spatial correlations, yet most existing methods do not fully exploit both types of correlation. In this work, we introduce the Temporal–Spatial Fusion Neural Network (TSFNN), a framework designed to address missing data recovery in transportation monitoring by jointly modeling spatial and temporal patterns. The architecture incorporates a temporal component, implemented with a Recurrent Neural Network (RNN), to learn sequential dependencies, alongside a spatial component, implemented with a Multilayer Perceptron (MLP), to learn spatial correlations. For performance validation, the model was benchmarked against several established methods. Using real-world datasets with varying missing-data ratios, TSFNN consistently delivered more accurate interpolations than all baseline approaches, highlighting the advantage of combining temporal and spatial learning within a single framework. Full article
(This article belongs to the Section Computer)
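As a toy illustration of fusing a temporal estimate with a spatial one (a plain-Python stand-in for TSFNN's RNN and MLP branches; the blend weight `alpha` and the neighbor definitions are assumptions, not the paper's model):

```python
def impute_missing(grid, alpha=0.5):
    # grid: list of time steps, each a list of sensor readings (None = missing).
    # Temporal estimate: mean of the previous/next readings of the same sensor;
    # spatial estimate: mean of the other sensors at the same time step.
    # alpha blends the two branches; weights renormalize if one branch is empty.
    T, S = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for t in range(T):
        for s in range(S):
            if grid[t][s] is not None:
                continue
            temporal = [grid[u][s] for u in (t - 1, t + 1)
                        if 0 <= u < T and grid[u][s] is not None]
            spatial = [grid[t][k] for k in range(S)
                       if k != s and grid[t][k] is not None]
            parts = []
            if temporal:
                parts.append((alpha, sum(temporal) / len(temporal)))
            if spatial:
                parts.append((1 - alpha, sum(spatial) / len(spatial)))
            if parts:
                w = sum(p for p, _ in parts)
                out[t][s] = sum(p * v for p, v in parts) / w
    return out
```

A reading missing at one sensor and time step is thus recovered from its own history and from co-located sensors in one pass; the paper's learned model replaces these fixed means with trained RNN/MLP branches.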

31 pages, 3129 KB  
Review
A Review on Gas Pipeline Leak Detection: Acoustic-Based, OGI-Based, and Multimodal Fusion Methods
by Yankun Gong, Chao Bao, Zhengxi He, Yifan Jian, Xiaoye Wang, Haineng Huang and Xintai Song
Information 2025, 16(9), 731; https://doi.org/10.3390/info16090731 - 25 Aug 2025
Cited by 1 | Viewed by 1040
Abstract
Pipelines play a vital role in material transportation within industrial settings. This review synthesizes detection technologies for early-stage small gas leaks from pipelines in the industrial sector, with a focus on acoustic-based methods, optical gas imaging (OGI), and multimodal fusion approaches. It encompasses detection principles, inherent challenges, mitigation strategies, and the state of the art (SOTA). Small leaks refer to low flow leakage originating from defects with apertures at millimeter or submillimeter scales, posing significant detection difficulties. Acoustic detection leverages the acoustic wave signals generated by gas leaks for non-contact monitoring, offering advantages such as rapid response and broad coverage. However, its susceptibility to environmental noise interference often triggers false alarms. This limitation can be mitigated through time-frequency analysis, multi-sensor fusion, and deep-learning algorithms—effectively enhancing leak signals, suppressing background noise, and thereby improving the system’s detection robustness and accuracy. OGI utilizes infrared imaging technology to visualize leakage gas and is applicable to the detection of various polar gases. Its primary limitations include low image resolution, low contrast, and interference from complex backgrounds. Mitigation techniques involve background subtraction, optical flow estimation, fully convolutional neural networks (FCNNs), and vision transformers (ViTs), which enhance image contrast and extract multi-scale features to boost detection precision. Multimodal fusion technology integrates data from diverse sensors, such as acoustic and optical devices. Key challenges lie in achieving spatiotemporal synchronization across multiple sensors and effectively fusing heterogeneous data streams. Current methodologies primarily utilize decision-level fusion and feature-level fusion techniques. 
Decision-level fusion offers high flexibility and ease of implementation but lacks inter-feature interaction; it is less effective than feature-level fusion when correlations exist between heterogeneous features. Feature-level fusion amalgamates data from different modalities during the feature extraction phase, generating a unified cross-modal representation that effectively resolves inter-modal heterogeneity. In conclusion, we posit that multimodal fusion holds significant potential for further enhancing detection accuracy beyond the capabilities of existing single-modality technologies and is poised to become a major focus of future research in this domain. Full article
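The decision-level fusion described above can be sketched as weighted averaging of per-sensor class probabilities (a minimal stand-in; the sensor names, labels, and weights are illustrative, and the review also covers feature-level alternatives):

```python
def decision_level_fusion(detector_outputs, weights=None):
    # detector_outputs: one dict per sensor mapping label -> probability,
    # e.g. an acoustic classifier and an OGI classifier.
    # Weighted averaging is one simple decision-level rule; voting is another.
    if weights is None:
        weights = [1.0] * len(detector_outputs)
    total = sum(weights)
    labels = detector_outputs[0].keys()
    fused = {lab: sum(w * d[lab] for w, d in zip(weights, detector_outputs)) / total
             for lab in labels}
    return max(fused, key=fused.get), fused
```

Because each detector only contributes its final probabilities, the rule is easy to deploy but, as noted above, cannot exploit correlations between the sensors' internal features.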

41 pages, 4171 KB  
Article
Development of a System for Recognising and Classifying Motor Activity to Control an Upper-Limb Exoskeleton
by Artem Obukhov, Mikhail Krasnyansky, Yaroslav Merkuryev and Maxim Rybachok
Appl. Syst. Innov. 2025, 8(4), 114; https://doi.org/10.3390/asi8040114 - 19 Aug 2025
Viewed by 691
Abstract
This paper addresses the problem of recognising and classifying hand movements to control an upper-limb exoskeleton. To solve this problem, a multisensory system based on the fusion of data from electromyography (EMG) sensors, inertial measurement units (IMUs), and virtual reality (VR) trackers is proposed, which provides highly accurate detection of users’ movements. Signal preprocessing (noise filtering, segmentation, normalisation) and feature extraction were performed to generate input data for regression and classification models. Various machine learning algorithms are used to recognise motor activity, ranging from classical algorithms (logistic regression, k-nearest neighbors, decision trees) and ensemble methods (random forest, AdaBoost, eXtreme Gradient Boosting, stacking, voting) to deep neural networks, including convolutional neural networks (CNNs), gated recurrent units (GRUs), and transformers. An algorithm for integrating the machine learning models into the exoskeleton control system is also presented. In experiments aimed at moving away from proprietary tracking systems (VR trackers), absolute position regression was performed using data from IMU sensors with 14 regression algorithms; the random forest ensemble provided the best accuracy (mean absolute error = 0.0022 metres). The task of classifying nine categories of activity is then considered. Ablation analysis showed that the IMU and VR trackers together provide a sufficient informative minimum, while adding EMG introduces noise that degrades the performance of simpler models but is successfully compensated for by deep networks. In the classification task using all signals, the maximum result (99.2%) was obtained with the Transformer; the fully connected neural network produced slightly worse results (98.4%). When using only IMU data, the fully connected, Transformer, and CNN–GRU networks all provide 100% accuracy. 
Experimental results confirm the effectiveness of the proposed architectures for motor activity classification, as well as the use of a multi-sensor approach that allows one to compensate for the limitations of individual types of sensors. The obtained results make it possible to continue research in this direction towards the creation of control systems for upper exoskeletons, including those used in rehabilitation and virtual simulation systems. Full article
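The voting ensemble listed among the classical methods reduces, in the hard-voting case, to a majority count over the individual classifiers' outputs (the activity labels below are illustrative, not the paper's nine categories):

```python
from collections import Counter

def hard_vote(predictions):
    # predictions: class labels emitted by the individual classifiers
    # (e.g. logistic regression, k-NN, decision tree) for one window of
    # sensor data. Ties fall to the label seen first, an arbitrary choice here.
    return Counter(predictions).most_common(1)[0][0]
```

Soft voting would instead average class probabilities before taking the argmax, which is usually preferred when the base classifiers are well calibrated.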

18 pages, 4857 KB  
Article
Fast Detection of FDI Attacks and State Estimation in Unmanned Surface Vessels Based on Dynamic Encryption
by Zheng Liu, Li Liu, Hongyong Yang, Zengfeng Wang, Guanlong Deng and Chunjie Zhou
J. Mar. Sci. Eng. 2025, 13(8), 1457; https://doi.org/10.3390/jmse13081457 - 30 Jul 2025
Viewed by 277
Abstract
Wireless sensor networks (WSNs) are used for data acquisition and transmission in unmanned surface vessels (USVs). However, the openness of wireless networks makes USVs highly susceptible to false data injection (FDI) attacks during data transmission, which prevents the sensors from receiving real data and leads to decision-making errors in the control center. In this paper, a novel dynamic data encryption method is proposed whereby data are encrypted prior to transmission and the key is dynamically updated using historical system data, increasing the difficulty for attackers to crack the ciphertext. At the same time, a dynamic relationship is established among the ciphertext, the key, and an auxiliary encrypted ciphertext, and an attack detection scheme based on dynamic encryption is designed to realize instant detection and localization of FDI attacks. Further, an H∞ fusion filter is designed to filter external interference noise, and the real information is estimated or restored by a weighted fusion algorithm. Ultimately, the validity of the proposed scheme is confirmed through simulation experiments. Full article
(This article belongs to the Special Issue Control and Optimization of Ship Propulsion System)
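A minimal sketch of the scheme's three ingredients, assuming a SHA-256 chain for the (unspecified) key-update law and XOR over integer-scaled readings as a toy cipher; the paper's actual constructions are not given in the abstract:

```python
import hashlib

def next_key(prev_key, historical_reading):
    # Key evolves from the previous key and a historical system datum, so an
    # attacker without the history cannot predict the current key.
    digest = hashlib.sha256(f"{prev_key}:{historical_reading}".encode()).hexdigest()
    return int(digest[:8], 16)

def encrypt(reading, key):
    # XOR stream-cipher toy; XOR is its own inverse, so the same call decrypts.
    return reading ^ key

def auxiliary_tag(cipher, key):
    # Auxiliary ciphertext binding cipher and key together; any injected
    # ciphertext breaks the relation and is caught on receipt.
    return hashlib.sha256(f"{cipher}:{key}".encode()).hexdigest()

def detect_fdi(cipher, tag, key):
    return auxiliary_tag(cipher, key) != tag  # True = injection detected
```

The receiver, which can reproduce the key chain from shared history, recomputes the tag for each incoming ciphertext and flags any mismatch immediately, matching the "instant detection" goal above.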

28 pages, 2918 KB  
Article
Machine Learning-Powered KPI Framework for Real-Time, Sustainable Ship Performance Management
by Christos Spandonidis, Vasileios Iliopoulos and Iason Athanasopoulos
J. Mar. Sci. Eng. 2025, 13(8), 1440; https://doi.org/10.3390/jmse13081440 - 28 Jul 2025
Viewed by 794
Abstract
The maritime sector faces escalating demands to minimize emissions and optimize operational efficiency under tightening environmental regulations. Although technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), and Digital Twins (DT) offer substantial potential, their deployment in real-time ship performance analytics is still at an early stage. This paper proposes a machine learning-driven framework for real-time ship performance management. The framework starts with data collected from onboard sensors and culminates in a decision support system that is easily interpretable, even by non-experts. It also provides a method to forecast vessel performance by extrapolating Key Performance Indicator (KPI) values, together with a flexible methodology for defining KPIs for every crucial component or aspect of vessel performance, illustrated through a use case focusing on fuel oil consumption. Leveraging Artificial Neural Networks (ANNs), hybrid multivariate data fusion, and high-frequency sensor streams, the system facilitates continuous diagnostics, early fault detection, and data-driven decision-making. Unlike conventional static performance models, the framework employs dynamic KPIs that evolve with the vessel’s operational state, enabling advanced trend analysis, predictive maintenance scheduling, and compliance assurance. Experimental comparison against classical KPI models highlights superior predictive fidelity, robustness, and temporal consistency. Furthermore, the paper delineates AI and ML applications across core maritime operations and introduces a scalable, modular system architecture applicable to both commercial and naval platforms. This approach bridges advanced simulation ecosystems with in situ operational data, laying a robust foundation for digital transformation and sustainability in maritime domains. Full article
(This article belongs to the Section Ocean Engineering)
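A fuel-oil-consumption KPI of the kind described can be illustrated as a relative deviation from a baseline model, with a moving average as the trend input for extrapolation (both definitions are illustrative, not the paper's):

```python
def fuel_kpi(measured_lph, baseline_lph):
    # Relative deviation of measured fuel consumption (litres/hour) from the
    # baseline the performance model predicts for the current operating state.
    # Positive values flag degradation (e.g. hull fouling, engine issues).
    return (measured_lph - baseline_lph) / baseline_lph

def kpi_trend(kpis, window=3):
    # Simple moving average of successive KPI values; a trend like this is the
    # kind of series one would extrapolate to forecast vessel performance.
    return [sum(kpis[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(kpis))]
```

In the paper's framework the baseline comes from an ANN conditioned on the vessel's operational state, which is what makes the KPI "dynamic" rather than a fixed reference curve.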

28 pages, 8982 KB  
Article
Decision-Level Multi-Sensor Fusion to Improve Limitations of Single-Camera-Based CNN Classification in Precision Farming: Application in Weed Detection
by Md. Nazmuzzaman Khan, Adibuzzaman Rahi, Mohammad Al Hasan and Sohel Anwar
Computation 2025, 13(7), 174; https://doi.org/10.3390/computation13070174 - 18 Jul 2025
Viewed by 658
Abstract
The United States leads the world in corn production and consumption, with an estimated USD 50 billion per year. There is a pressing need for novel and efficient techniques that enhance the identification and eradication of weeds in a manner that is both environmentally sustainable and economically advantageous. Weed classification for autonomous agricultural robots is a challenging task for a single-camera-based system due to noise, vibration, and occlusion. To address this issue, we present a multi-camera-based system with decision-level sensor fusion that overcomes the limitations of a single-camera-based system. This study utilizes a convolutional neural network (CNN) pre-trained on the ImageNet dataset and re-trained on a limited weed dataset to classify three weed species frequently encountered in corn fields: Xanthium strumarium (Common Cocklebur), Amaranthus retroflexus (Redroot Pigweed), and Ambrosia trifida (Giant Ragweed). The test results showed that the re-trained VGG16 with a transfer-learning-based classifier exhibited acceptable accuracy (99% training, 97% validation, 94% testing), and its inference time for weed classification from a video feed was suitable for real-time implementation. However, the accuracy of CNN-based classification from a single camera’s video feed was found to deteriorate due to noise, vibration, and partial occlusion of weeds, and is not always sufficient for the spray system of an agricultural robot (AgBot). To improve the accuracy of the weed classification system and to overcome the shortcomings of single-sensor-based CNN classification, an improved Dempster–Shafer (DS)-based decision-level multi-sensor fusion algorithm was developed and implemented. 
The proposed algorithm offers improvement on the CNN-based weed classification when the weed is partially occluded. This algorithm can also detect if a sensor is faulty within an array of sensors and improves the overall classification accuracy by penalizing the evidence from a faulty sensor. Overall, the proposed fusion algorithm showed robust results in challenging scenarios, overcoming the limitations of a single-sensor-based system. Full article
(This article belongs to the Special Issue Moving Object Detection Using Computational Methods and Modeling)
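Dempster's rule of combination, at the core of the DS fusion described above, can be sketched for two cameras' mass functions (focal elements are written as label tuples; the paper's faulty-sensor penalty is omitted here):

```python
def ds_combine(m1, m2):
    # m1, m2: mass functions mapping focal elements (tuples of labels) to mass.
    # Dempster's rule: multiply masses of intersecting focal elements, discard
    # conflicting pairs, then renormalize by 1 - K (K = total conflict).
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = tuple(sorted(set(a) & set(b)))
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb
    k = 1.0 - conflict
    return {elem: mass / k for elem, mass in combined.items()}
```

When both cameras lean toward the same weed species, the combined mass concentrates on it even though each individual view (e.g. under partial occlusion) was uncertain; a sensor that persistently conflicts with the others is what the paper's improved algorithm penalizes.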

21 pages, 15208 KB  
Article
Unlabeled-Data-Enhanced Tool Remaining Useful Life Prediction Based on Graph Neural Network
by Dingli Guo, Honggen Zhou, Li Sun and Guochao Li
Sensors 2025, 25(13), 4068; https://doi.org/10.3390/s25134068 - 30 Jun 2025
Viewed by 602
Abstract
Remaining useful life (RUL) prediction of cutting tools plays an important role in modern manufacturing because it provides the criterion for replacing worn cutting tools just in time, restraining machining defects and unnecessary costs. However, the performance of existing deep learning algorithms is limited by the small quantity and low quality of labeled training datasets, because building such datasets is costly and time-consuming, while the large amount of unlabeled data generated in practical machining processes is underutilized. To solve this issue, an unlabeled-data-enhanced tool RUL prediction method is proposed to make full use of the abundant, accessible unlabeled data. A custom criterion and loss function are defined to train on unlabeled data, exploiting the valuable information it contains for tool RUL prediction. Specifically, the physical rule that tool wear increases with the number of cuts is employed to learn knowledge crucial for tool RUL prediction from unlabeled data; the model parameters trained on unlabeled data encode this knowledge and are transferred, via transfer learning, to another model trained on labeled data for tool RUL prediction, completing the unlabeled-data enhancement. Since multiple sensors are frequently used to collect cutting data simultaneously, a graph neural network (GNN) is used for multi-sensor data fusion, extracting more useful information from the data to improve the enhancement. Multiple sets of comparative experiments validate that the proposed method effectively enhances the accuracy and generalization capability of the tool RUL prediction model by utilizing unlabeled data. Full article
(This article belongs to the Section Industrial Sensors)
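The physical rule that wear grows with the number of cuts suggests an ordering-based loss on unlabeled sequences; a hinge on ordering violations is one plausible reading of such a custom criterion (an assumption for illustration, not the paper's actual loss):

```python
def monotonicity_loss(predicted_wear):
    # predicted_wear: model outputs for consecutive cuts of one tool, in cut order.
    # No labels are needed: the loss penalizes only pairs where a LATER cut is
    # predicted less worn than the one before it, by the size of the violation.
    loss = 0.0
    for i in range(len(predicted_wear) - 1):
        loss += max(0.0, predicted_wear[i] - predicted_wear[i + 1])
    return loss
```

Minimizing this on raw machining logs pushes the network toward wear-consistent representations; those weights are then the natural starting point for the labeled fine-tuning stage the abstract describes.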

19 pages, 4801 KB  
Article
Attention-Enhanced CNN-LSTM Model for Exercise Oxygen Consumption Prediction with Multi-Source Temporal Features
by Zhen Wang, Yingzhe Song, Lei Pang, Shanjun Li and Gang Sun
Sensors 2025, 25(13), 4062; https://doi.org/10.3390/s25134062 - 29 Jun 2025
Cited by 3 | Viewed by 650
Abstract
Dynamic oxygen uptake (VO2) reflects moment-to-moment changes in oxygen consumption during exercise and underpins training design, performance enhancement, and clinical decision-making. We tackled two key obstacles—the limited fusion of heterogeneous sensor data and inadequate modeling of long-range temporal patterns—by integrating wearable accelerometer and heart-rate streams with a convolutional neural network–LSTM (CNN-LSTM) architecture and optional attention modules. Physiological signals and VO2 were recorded from 21 adults through resting assessment and cardiopulmonary exercise testing. The results showed that pairing accelerometer with heart-rate inputs improves prediction compared with considering the heart rate alone. The baseline CNN-LSTM reached R2 = 0.946, outperforming a plain LSTM (R2 = 0.926) thanks to stronger local spatio-temporal feature extraction. Introducing a spatial attention mechanism raised accuracy further (R2 = 0.962), whereas temporal attention reduced it (R2 = 0.930), indicating that attention success depends on how well the attended features align with exercise dynamics. Stacking both attentions (spatio-temporal) yielded R2 = 0.960, slightly below the value for spatial attention alone, implying that added complexity does not guarantee better performance. Across all models, prediction errors grew during high-intensity bouts, highlighting a bottleneck in capturing non-linear physiological responses under heavy load. These findings inform architecture selection for wearable metabolic monitoring and clarify when attention mechanisms add value. Full article
(This article belongs to the Special Issue Sensors for Physiological Monitoring and Digital Health)
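Spatial attention over input channels amounts to softmax-weighted pooling; a minimal sketch with hand-supplied relevance scores standing in for the learned scoring layer (channel names and scores are illustrative):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def spatial_attention(channel_features, scores):
    # channel_features: one summary value per input channel (e.g. accelerometer
    # axes and heart rate); scores: relevance scores, learned in the real model.
    # Returns the attention-pooled feature and the weights themselves.
    weights = softmax(scores)
    pooled = sum(w * f for w, f in zip(weights, channel_features))
    return pooled, weights
```

The abstract's finding that spatial attention helps while temporal attention hurts corresponds to which axis these weights are computed over: across channels (as here) versus across time steps.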

27 pages, 569 KB  
Article
Construction Worker Activity Recognition Using Deep Residual Convolutional Network Based on Fused IMU Sensor Data in Internet-of-Things Environment
by Sakorn Mekruksavanich and Anuchit Jitpattanakul
IoT 2025, 6(3), 36; https://doi.org/10.3390/iot6030036 - 28 Jun 2025
Viewed by 710
Abstract
With the advent of Industry 4.0, sensor-based human activity recognition has become increasingly vital for improving worker safety, enhancing operational efficiency, and optimizing workflows in Internet-of-Things (IoT) environments. This study introduces a novel deep learning-based framework for construction worker activity recognition, employing a deep residual convolutional neural network (ResNet) architecture integrated with multi-sensor fusion techniques. The proposed system processes data from multiple inertial measurement unit sensors strategically positioned on workers’ bodies to identify and classify construction-related activities accurately. A comprehensive pre-processing pipeline is implemented, incorporating Butterworth filtering for noise suppression, data normalization, and an adaptive sliding window mechanism for temporal segmentation. Experimental validation is conducted using the publicly available VTT-ConIoT dataset, which includes recordings of 16 construction activities performed by 13 participants in a controlled laboratory setting. The results demonstrate that the ResNet-based sensor fusion approach outperforms traditional single-sensor models and other deep learning methods. The system achieves classification accuracies of 97.32% for binary discrimination between recommended and non-recommended activities, 97.14% for categorizing six core task types, and 98.68% for detailed classification across sixteen individual activities. Optimal performance is consistently obtained with a 4-second window size, balancing recognition accuracy with computational efficiency. Although the hand-mounted sensor proved to be the most effective as a standalone unit, multi-sensor configurations delivered significantly higher accuracy, particularly in complex classification tasks. 
The proposed approach demonstrates strong potential for real-world applications, offering robust performance across diverse working conditions while maintaining computational feasibility for IoT deployment. This work advances the field of innovative construction by presenting a practical solution for real-time worker activity monitoring, which can be seamlessly integrated into existing IoT infrastructures to promote workplace safety, streamline construction processes, and support data-driven management decisions. Full article
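The sliding-window segmentation behind the reported 4-second optimum can be sketched as follows (the 50% overlap is an assumption; the paper's adaptive mechanism is not specified in the abstract):

```python
def sliding_windows(samples, rate_hz, window_s=4.0, overlap=0.5):
    # samples: one IMU channel as a flat list; rate_hz: sampling rate.
    # Each window holds rate_hz * window_s samples; consecutive windows
    # advance by (1 - overlap) of a window, so overlap=0.5 halves the stride.
    size = int(rate_hz * window_s)
    step = max(1, int(size * (1 - overlap)))
    return [samples[i:i + size]
            for i in range(0, len(samples) - size + 1, step)]
```

Each returned window becomes one classification instance for the ResNet, which is why window length trades recognition accuracy against latency and compute as described above.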

22 pages, 5516 KB  
Article
Technology and Method Optimization for Foot–Ground Contact Force Detection in Wheel-Legged Robots
by Chao Huang, Meng Hong, Yaodong Wang, Hui Chai, Zhuo Hu, Zheng Xiao, Sijia Guan and Min Guo
Sensors 2025, 25(13), 4026; https://doi.org/10.3390/s25134026 - 27 Jun 2025
Viewed by 607
Abstract
Wheel-legged robots combine the advantages of both wheeled robots and traditional quadruped robots, enhancing terrain adaptability but posing higher demands on the perception of foot–ground contact forces. However, existing approaches still suffer from limited accuracy in estimating contact positions and three-dimensional contact forces when dealing with flexible tire–ground interactions. To address this challenge, this study proposes a foot–ground contact state detection technique and optimization method based on multi-sensor fusion and intelligent modeling for wheel-legged robots. First, finite element analysis (FEA) is used to simulate strain distribution under various contact conditions. Combined with global sensitivity analysis (GSA), the optimal placement of PVDF sensors is determined and experimentally validated. Subsequently, under dynamic gait conditions, data collected from the PVDF sensor array are used to predict three-dimensional contact forces through Gaussian process regression (GPR) and artificial neural network (ANN) models. A custom experimental platform is developed to replicate variable gait frequencies and collect dynamic contact data for validation. The results demonstrate that both GPR and ANN models achieve high accuracy in predicting dynamic 3D contact forces, with normalized root mean square error (NRMSE) as low as 8.04%. The models exhibit reliable repeatability and generalization to novel inputs, providing robust technical support for stable contact perception and motion decision-making in complex environments. Full article
(This article belongs to the Section Sensors and Robotics)
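The NRMSE figure reported above is RMSE divided by a normalizer; normalizing by the observed range is one common choice (the paper's exact normalizer is not stated in the abstract):

```python
import math

def nrmse(actual, predicted):
    # Root mean square error between measured and predicted contact forces,
    # normalized by the range of the measured values so the result is unitless.
    n = len(actual)
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    return rmse / (max(actual) - min(actual))
```

Under this definition the reported 8.04% means the typical force-prediction error is about 8% of the span of forces seen in the test data.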

26 pages, 10233 KB  
Article
Time-Series Forecasting Method Based on Hierarchical Spatio-Temporal Attention Mechanism
by Zhiguo Xiao, Junli Liu, Xinyao Cao, Ke Wang, Dongni Li and Qian Liu
Sensors 2025, 25(13), 4001; https://doi.org/10.3390/s25134001 - 26 Jun 2025
Viewed by 846
Abstract
In the field of intelligent decision-making, time-series data collected by sensors serve as the core carrier for interaction between the physical and digital worlds, and their accurate analysis is the cornerstone of decision-making in critical scenarios such as industrial monitoring and intelligent transportation. However, the inherent spatio-temporal coupling and cross-period long-range dependencies of sensor data cause traditional time-series prediction methods to face performance bottlenecks in feature decoupling and multi-scale modeling. This study proposes a Spatio-Temporal Attention-Enhanced Network (TSEBG). Departing from traditional structural designs, the model employs a Squeeze-and-Excitation Network (SENet) to reconstruct the convolutional layers of the Temporal Convolutional Network (TCN), strengthening the feature expression of key time steps through dynamic channel weight allocation to address the redundancy of traditional causal convolutions in local pattern capture. A Bidirectional Gated Recurrent Unit (BiGRU) variant based on a global attention mechanism is designed, leveraging the collaboration between gating units and attention weights to mine cross-period long-distance dependencies and effectively alleviate the vanishing-gradient problem of recurrent neural network (RNN)-type models in multi-scale time-series analysis. A hierarchical feature fusion architecture is constructed to achieve multi-dimensional alignment of local spatial and global temporal features; through residual connections and the dynamic adjustment of attention weights, hierarchical semantic representations are output. Experiments show that TSEBG outperforms current dominant models in single-step time-series prediction in terms of accuracy and performance, with a cross-dataset R2 standard deviation of only 3.7%, demonstrating excellent generalization stability. 
It provides a novel theoretical framework for feature decoupling and multi-scale modeling of complex time-series data. Full article
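The squeeze-and-excitation recalibration that TSEBG applies to the TCN's channels can be sketched with scalar stand-ins for the two learned excitation layers (real SENet uses a bottleneck MLP; the weights here are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def squeeze_excite(feature_map, w1, w2):
    # feature_map: channels x timesteps. Squeeze: global average pool per
    # channel; excitation: two stand-in layers (ReLU then sigmoid) produce a
    # gate in (0, 1) per channel; scale: rescale each channel by its gate.
    squeezed = [sum(ch) / len(ch) for ch in feature_map]
    hidden = [max(0.0, s * a) for s, a in zip(squeezed, w1)]
    gates = [sigmoid(h * b) for h, b in zip(hidden, w2)]
    return [[g * x for x in ch] for g, ch in zip(gates, feature_map)]
```

The gates are the "dynamic channel weight allocation" mentioned above: channels carrying key time-step patterns get gates near 1, redundant channels are attenuated toward 0.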

20 pages, 3416 KB  
Article
Deflection Prediction of Highway Bridges Using Wireless Sensor Networks and Enhanced iTransformer Model
by Cong Mu, Chen Chang, Jiuyuan Huo and Jiguang Yang
Buildings 2025, 15(13), 2176; https://doi.org/10.3390/buildings15132176 - 22 Jun 2025
Viewed by 513
Abstract
As an important part of national transportation infrastructure, the operational status of bridges is directly related to transportation safety and social stability. Structural deflection, which reflects the deformation behavior of bridge systems, serves as a key indicator for identifying stiffness degradation and the progression of localized damage. Accurate modeling and forecasting of deflection are thus essential for effective bridge health monitoring and intelligent maintenance. To address the limitations of traditional methods in handling multi-source data fusion and nonlinear temporal dependencies, this study proposes an enhanced iTransformer-based prediction model, termed LDAiT (LSTM Differential Attention iTransformer), which integrates Long Short-Term Memory (LSTM) networks and a differential attention mechanism for high-fidelity deflection prediction under complex working conditions. First, a multi-source heterogeneous time-series dataset is constructed based on wireless sensor network (WSN) technology, enabling real-time acquisition and fusion of key structural response parameters such as deflection, strain, and temperature across critical bridge sections. Second, LDAiT strengthens long-term dependency modeling through the introduction of LSTM and combines it with the differential attention mechanism to respond more precisely to local dynamic disturbances. Finally, experimental validation is carried out on measured data from the Xintian Yellow River Bridge, and the results show that LDAiT outperforms existing mainstream models on R2, RMSE, MAE, and MAPE, with good accuracy, stability, and generalization ability. The proposed approach offers a novel and effective framework for deflection forecasting in complex bridge systems and holds significant potential for practical deployment in structural health monitoring and intelligent decision-making applications. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)

20 pages, 3449 KB  
Review
Bayesian Network in Structural Health Monitoring: Theoretical Background and Applications Review
by Qi-Ang Wang, Ao-Wen Lu, Yi-Qing Ni, Jun-Fang Wang and Zhan-Guo Ma
Sensors 2025, 25(12), 3577; https://doi.org/10.3390/s25123577 - 6 Jun 2025
Viewed by 1593
Abstract
With accelerated urbanization and aging infrastructure, the safety and durability of civil engineering structures face significant challenges, making structural health monitoring (SHM) a critical approach to ensuring engineering safety. The Bayesian network (BN), as a probabilistic reasoning tool, offers a novel technological pathway for SHM due to its strengths in handling uncertainty and multi-source data fusion. This study systematically reviews the core applications of the Bayesian network in SHM, including damage prediction, data fusion, uncertainty modeling, and decision support. By integrating multi-source sensor data with probabilistic inference, the Bayesian network enhances the accuracy and reliability of monitoring systems, providing a theoretical foundation for damage identification, risk early warning, and the optimization of maintenance strategies. The review systematically unifies the theoretical framework of BNs with SHM applications, addressing the gap between probabilistic reasoning and real-world infrastructure management. The research outcomes hold significant theoretical and engineering implications for advancing SHM technology, reducing operation and maintenance costs, and ensuring the safety of public infrastructure. Full article
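As a minimal illustration of the probabilistic reasoning the review covers, the following sketch performs one discrete Bayesian update of a damage hypothesis given a single sensor observation. The prior and likelihood numbers are hypothetical, not taken from any reviewed study:

```python
def bayes_update(prior, likelihood):
    """Posterior over states: P(s | e) is proportional to P(e | s) * P(s)."""
    unnorm = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Hypothetical prior belief about a structural member, and the likelihood
# of observing an anomalous strain reading under each state.
prior = {"healthy": 0.9, "damaged": 0.1}
anomalous_strain = {"healthy": 0.05, "damaged": 0.6}

posterior = bayes_update(prior, anomalous_strain)
```

Chaining `bayes_update` over readings from several sensors, treated as conditionally independent given the structural state, is one simple form of the multi-source data fusion the review discusses; full BN tooling generalizes this to arbitrary graph structures.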

33 pages, 10200 KB  
Review
Unmanned Surface Vessels in Marine Surveillance and Management: Advances in Communication, Navigation, Control, and Data-Driven Research
by Zhichao Lv, Xiangyu Wang, Gang Wang, Xuefei Xing, Chenlong Lv and Fei Yu
J. Mar. Sci. Eng. 2025, 13(5), 969; https://doi.org/10.3390/jmse13050969 - 16 May 2025
Cited by 1 | Viewed by 3486
Abstract
Unmanned Surface Vehicles (USVs) have emerged as vital tools in marine monitoring and management due to their high efficiency, low cost, and flexible deployment capabilities. This paper presents a systematic review focusing on four core areas of USV applications: communication networking, navigation, control, and data-driven operations. First, the characteristics and challenges of acoustic, electromagnetic, and optical communication methods for USV networking are analyzed, with an emphasis on the future trend toward multimodal communication integration. Second, a comprehensive review of global navigation, local navigation, cooperative navigation, and autonomous navigation technologies is provided, highlighting their applications and limitations in complex environments. Third, the evolution of USV control systems is examined, covering group control, distributed control, and adaptive control, with particular attention given to fault tolerance, delay compensation, and energy optimization. Finally, the application of USVs in data-driven marine tasks is summarized, including multi-sensor fusion, real-time perception, and autonomous decision-making mechanisms. This study aims to reveal the interaction and coordination mechanisms among communication, navigation, control, and data-driven operations from a system integration perspective, providing insights and guidance for the intelligent operations and comprehensive applications of USVs in marine environments. Full article
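One textbook form of the multi-sensor fusion mentioned above is inverse-variance weighting of redundant measurements of the same quantity; a sketch (standard estimator, not a method from any surveyed paper):

```python
def fuse(measurements):
    """Fuse (value, variance) pairs by inverse-variance weighting.

    Returns the fused estimate and its variance; the fused variance is
    always smaller than any individual input variance.
    """
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * v for (v, _), w in zip(measurements, weights)) / total
    return value, 1.0 / total

# Hypothetical example: GNSS and radar range estimates of the same distance.
# fuse([(10.2, 4.0), (9.8, 1.0)]) weights the lower-variance radar reading 4x more.
```

This is the static special case of a Kalman update; the USV literature typically extends it with motion models for real-time perception.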

16 pages, 4231 KB  
Article
Intelligent Testing Method for Multi-Point Vibration Acquisition of Pile Foundation Based on Machine Learning
by Ke Wang, Weikai Zhao, Juntao Wu and Shuang Ma
Sensors 2025, 25(9), 2893; https://doi.org/10.3390/s25092893 - 3 May 2025
Cited by 1 | Viewed by 714
Abstract
To address the limitations of the conventional low-strain reflected wave method for pile foundation testing, this study proposes an intelligent multi-point vibration acquisition testing model based on machine learning to evaluate the integrity of in-service, high-cap pile foundations. The model’s performance was assessed using statistical metrics, including the coefficient of determination (R2), mean absolute error (MAE), and variance accounted for (VAF), with comparative evaluations conducted across different model frameworks. Results show that both the convolutional neural network (CNN) and the long short-term memory (LSTM) network consistently achieved high accuracy in identifying the location of the first reflection point in the pile shaft, with R2 values greater than 0.98, MAE below 0.41 m, and VAF greater than 98%. These findings demonstrate the model’s strong predictive capability, test stability, and practical utility in supporting operator decision-making. Among the evaluated models, the CNN is recommended for analyzing the integrity of in-service pile foundations based on the multi-point vibration pickup signals and the multi-sensor fusion signal preprocessed by the time series stacking method. Full article
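Of the reported metrics, VAF (variance accounted for) is the least standard; a sketch of its usual definition, the variance of the residual relative to the variance of the target, as a percentage (assumed convention, not quoted from the paper):

```python
def vaf(y_true, y_pred):
    # VAF = (1 - var(y_true - y_pred) / var(y_true)) * 100.
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    resid = [t - p for t, p in zip(y_true, y_pred)]
    return 100.0 * (1.0 - var(resid) / var(y_true))
```

Unlike MAE, VAF is insensitive to a constant bias in the prediction (a constant residual has zero variance), which is why the two are usually reported together.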
