Search Results (118)

Search Parameters:
Keywords = IoT sensor fusion

47 pages, 2691 KB  
Systematic Review
Buzzing with Intelligence: A Systematic Review of Smart Beehive Technologies
by Josip Šabić, Toni Perković, Petar Šolić and Ljiljana Šerić
Sensors 2025, 25(17), 5359; https://doi.org/10.3390/s25175359 - 29 Aug 2025
Abstract
Smart-beehive technologies represent a paradigm shift in beekeeping, transitioning from traditional, reactive methods toward proactive, data-driven management. This systematic literature review investigates the current landscape of intelligent systems applied to beehives, focusing on the integration of IoT-based monitoring, sensor modalities, machine learning techniques, and their applications in precision apiculture. The review adheres to PRISMA guidelines and analyzes 135 peer-reviewed publications identified through searches of Web of Science, IEEE Xplore, and Scopus between 1990 and 2025. It addresses key research questions related to the role of intelligent systems in early problem detection, hive condition monitoring, and predictive intervention. Common sensor types include environmental, acoustic, visual, and structural modalities, each supporting diverse functional goals such as health assessment, behavior analysis, and forecasting. A notable trend toward deep learning, computer vision, and multimodal sensor fusion is evident, particularly in applications involving disease detection and colony behavior modeling. Furthermore, the review highlights a growing corpus of publicly available datasets critical for the training and evaluation of machine learning models. Despite the promising developments, challenges remain in system integration, dataset standardization, and large-scale deployment. This review offers a comprehensive foundation for the advancement of smart apiculture technologies, aiming to improve colony health, productivity, and resilience in increasingly complex environmental conditions. Full article

17 pages, 3307 KB  
Article
Electrode-Free ECG Monitoring with Multimodal Wireless Mechano-Acoustic Sensors
by Zhi Li, Fei Fei and Guanglie Zhang
Biosensors 2025, 15(8), 550; https://doi.org/10.3390/bios15080550 - 20 Aug 2025
Viewed by 265
Abstract
Continuous cardiovascular monitoring is essential for the early detection of cardiac events, but conventional electrode-based ECG systems cause skin irritation and are unsuitable for long-term wear. We propose an electrode-free ECG monitoring approach that leverages synchronized phonocardiogram (PCG) and seismocardiogram (SCG) signals captured by wireless mechano-acoustic sensors. PCG provides precise valvular event timings, while SCG provides mechanical context, enabling the robust identification of systolic/diastolic intervals and pathological patterns. A deep learning model reconstructs ECG waveforms by intelligently combining mechano-acoustic sensor data. Its architecture leverages specialized neural network components to identify and correlate key cardiac signatures from multimodal inputs. Experimental validation on an IoT sensor dataset yields a mean Pearson correlation of 0.96 and an RMSE of 0.49 mV compared to clinical ECGs. By eliminating skin-contact electrodes through PCG–SCG fusion, this system enables robust IoT-compatible daily-life cardiac monitoring. Full article
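The paper evaluates its reconstructed ECG against clinical recordings using a mean Pearson correlation (0.96) and RMSE (0.49 mV). As a rough illustration of those two agreement metrics only (not the authors' model or data), the following sketch computes them on a synthetic reference/reconstruction pair:

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two 1-D signals."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rmse(a, b):
    """Root-mean-square error between two 1-D signals (same units, e.g. mV)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Toy example: a "reconstructed" waveform as a noisy copy of the reference.
t = np.linspace(0, 1, 500)
reference = np.sin(2 * np.pi * 5 * t)
rng = np.random.default_rng(0)
reconstructed = reference + 0.05 * rng.standard_normal(t.size)

r = pearson_r(reference, reconstructed)
err = rmse(reference, reconstructed)
```

With small additive noise the correlation stays near 1 and the RMSE near the noise level, which is the regime the reported 0.96 / 0.49 mV figures describe.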

26 pages, 663 KB  
Article
Multi-Scale Temporal Fusion Network for Real-Time Multimodal Emotion Recognition in IoT Environments
by Sungwook Yoon and Byungmun Kim
Sensors 2025, 25(16), 5066; https://doi.org/10.3390/s25165066 - 14 Aug 2025
Viewed by 501
Abstract
This paper introduces EmotionTFN (Emotion-Multi-Scale Temporal Fusion Network), a novel hierarchical temporal fusion architecture that addresses key challenges in IoT emotion recognition by processing diverse sensor data while maintaining accuracy across multiple temporal scales. The architecture integrates physiological signals (EEG, PPG, and GSR), visual, and audio data using hierarchical temporal attention across short-term (0.5–2 s), medium-term (2–10 s), and long-term (10–60 s) windows. Edge computing optimizations, including model compression, quantization, and adaptive sampling, enable deployment on resource-constrained devices. Extensive experiments on MELD, DEAP, and G-REx datasets demonstrate 94.2% accuracy on discrete emotion classification and 0.087 mean absolute error on dimensional prediction, outperforming the best baseline (87.4%). The system maintains sub-200 ms latency on IoT hardware while achieving a 40% improvement in energy efficiency. Real-world deployment validation over four weeks achieved 97.2% uptime and user satisfaction scores of 4.1/5.0 while ensuring privacy through local processing. Full article
(This article belongs to the Section Internet of Things)
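The short-term (0.5–2 s), medium-term (2–10 s), and long-term (10–60 s) windows described in the abstract amount to sliding-window segmentation of the same signal at three temporal scales. A minimal sketch, assuming a hypothetical 100 Hz sampling rate and illustrative hop sizes (the paper's attention mechanism over these scales is not reproduced here):

```python
import numpy as np

def sliding_windows(signal, fs, win_s, hop_s):
    """Segment a 1-D signal into overlapping windows of win_s seconds,
    advancing by hop_s seconds between windows."""
    win = int(win_s * fs)
    hop = int(hop_s * fs)
    n = (len(signal) - win) // hop + 1
    return np.stack([signal[i * hop : i * hop + win] for i in range(n)])

fs = 100                              # Hz, hypothetical sampling rate
x = np.arange(60 * fs, dtype=float)   # 60 s of samples

# Three temporal scales, mirroring the ranges given in the abstract.
w_short = sliding_windows(x, fs, win_s=2, hop_s=1)     # short-term (2 s)
w_medium = sliding_windows(x, fs, win_s=10, hop_s=5)   # medium-term (10 s)
w_long = sliding_windows(x, fs, win_s=60, hop_s=30)    # long-term (60 s)
```

Each scale yields a stack of windows that a multi-scale model can attend over jointly.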

21 pages, 510 KB  
Review
IoT and Machine Learning for Smart Bird Monitoring and Repellence: Techniques, Challenges, and Opportunities
by Samson O. Ooko, Emmanuel Ndashimye, Evariste Twahirwa and Moise Busogi
IoT 2025, 6(3), 46; https://doi.org/10.3390/iot6030046 - 7 Aug 2025
Viewed by 666
Abstract
The activities of birds present increasing challenges in agriculture, aviation, and environmental conservation. This has led to economic losses, safety risks, and ecological imbalances. Attempts have been made to address the problem, with traditional deterrent methods proving to be labour-intensive, environmentally unfriendly, and ineffective over time. Advances in artificial intelligence (AI) and the Internet of Things (IoT) present opportunities for enabling automated real-time bird detection and repellence. This study reviews recent developments (2020–2025) in AI-driven bird detection and repellence systems, emphasising the integration of image, audio, and multi-sensor data in IoT and edge-based environments. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses framework was used, with 267 studies initially identified and screened from key scientific databases. A total of 154 studies met the inclusion criteria and were analysed. The findings show the increasing use of convolutional neural networks (CNNs), YOLO variants, and MobileNet in visual detection, and the growing use of lightweight audio-based models such as BirdNET, MFCC-based CNNs, and TinyML frameworks for microcontroller deployment. Multi-sensor fusion is proposed to improve detection accuracy in diverse environments. Repellence strategies include sound-based deterrents, visual deterrents, predator-mimicking visuals, and adaptive AI-integrated systems. Deployment success depends on edge compatibility, power efficiency, and dataset quality. The limitations of current studies include species-specific detection challenges, data scarcity, environmental changes, and energy constraints. Future research should focus on tiny and lightweight AI models, standardised multi-modal datasets, and intelligent, behaviour-aware deterrence mechanisms suitable for precision agriculture and ecological monitoring. Full article

36 pages, 2671 KB  
Article
DIKWP-Driven Artificial Consciousness for IoT-Enabled Smart Healthcare Systems
by Yucong Duan and Zhendong Guo
Appl. Sci. 2025, 15(15), 8508; https://doi.org/10.3390/app15158508 - 31 Jul 2025
Viewed by 420
Abstract
This study presents a DIKWP-driven artificial consciousness framework for IoT-enabled smart healthcare, integrating a Data–Information–Knowledge–Wisdom–Purpose (DIKWP) cognitive architecture with a software-defined IoT infrastructure. The proposed system deploys DIKWP agents at edge and cloud nodes to transform raw sensor data into high-level knowledge and purpose-driven actions. This is achieved through a structured DIKWP pipeline—from data acquisition and information processing to knowledge extraction, wisdom inference, and purpose-driven decision-making—that enables semantic reasoning, adaptive goal-driven responses, and privacy-preserving decision-making in healthcare environments. The architecture integrates wearable sensors, edge computing nodes, and cloud services to enable dynamic task orchestration and secure data fusion. For evaluation, a smart healthcare scenario for early anomaly detection (e.g., arrhythmia and fever) was implemented using wearable devices with coordinated edge–cloud analytics. Simulated experiments on synthetic vital sign datasets achieved approximately 98% anomaly detection accuracy and up to 90% reduction in communication overhead compared to cloud-centric solutions. Results also demonstrate enhanced explainability via traceable decisions across DIKWP layers and robust performance under intermittent connectivity. These findings indicate that the DIKWP-driven approach can significantly advance IoT-based healthcare by providing secure, explainable, and adaptive services aligned with clinical objectives and patient-centric care. Full article
(This article belongs to the Special Issue IoT in Smart Cities and Homes, 2nd Edition)
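The early anomaly detection scenario (e.g., arrhythmia and fever) can be illustrated with a simple rule-based check on wearable vital signs. The thresholds below are textbook-style illustrative values, not the DIKWP pipeline or the thresholds used in the paper:

```python
def classify_vitals(heart_rate_bpm, temp_c):
    """Flag simple vital-sign anomalies with illustrative thresholds.

    The bradycardia/tachycardia/fever cut-offs are common reference
    values chosen for demonstration only.
    """
    flags = []
    if heart_rate_bpm < 50:
        flags.append("bradycardia")
    elif heart_rate_bpm > 100:
        flags.append("tachycardia")
    if temp_c >= 38.0:
        flags.append("fever")
    return flags

# Example: an elevated heart rate combined with fever.
result = classify_vitals(120, 38.5)  # ["tachycardia", "fever"]
```

In an edge–cloud split of the kind the paper describes, a cheap rule like this would run at the edge node, with only flagged events escalated for heavier analysis.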

28 pages, 2918 KB  
Article
Machine Learning-Powered KPI Framework for Real-Time, Sustainable Ship Performance Management
by Christos Spandonidis, Vasileios Iliopoulos and Iason Athanasopoulos
J. Mar. Sci. Eng. 2025, 13(8), 1440; https://doi.org/10.3390/jmse13081440 - 28 Jul 2025
Viewed by 562
Abstract
The maritime sector faces escalating demands to minimize emissions and optimize operational efficiency under tightening environmental regulations. Although technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), and Digital Twins (DT) offer substantial potential, their deployment in real-time ship performance analytics is at an emerging state. This paper proposes a machine learning-driven framework for real-time ship performance management. The framework starts with data collected from onboard sensors and culminates in a decision support system that is easily interpretable, even by non-experts. It also provides a method to forecast vessel performance by extrapolating Key Performance Indicator (KPI) values. Furthermore, it offers a flexible methodology for defining KPIs for every crucial component or aspect of vessel performance, illustrated through a use case focusing on fuel oil consumption. Leveraging Artificial Neural Networks (ANNs), hybrid multivariate data fusion, and high-frequency sensor streams, the system facilitates continuous diagnostics, early fault detection, and data-driven decision-making. Unlike conventional static performance models, the framework employs dynamic KPIs that evolve with the vessel’s operational state, enabling advanced trend analysis, predictive maintenance scheduling, and compliance assurance. Experimental comparison against classical KPI models highlights superior predictive fidelity, robustness, and temporal consistency. Furthermore, the paper delineates AI and ML applications across core maritime operations and introduces a scalable, modular system architecture applicable to both commercial and naval platforms. This approach bridges advanced simulation ecosystems with in situ operational data, laying a robust foundation for digital transformation and sustainability in maritime domains. Full article
(This article belongs to the Section Ocean Engineering)
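A dynamic KPI of the kind described, one that evolves with the vessel's recent operational state, can be sketched as the latest reading relative to a rolling baseline. The `fuel_kpi` helper, its window size, and the units are assumptions for illustration, not the authors' formulation:

```python
import numpy as np

def fuel_kpi(consumption, window=5):
    """Dynamic KPI: latest fuel-oil consumption relative to the mean of
    the previous `window` readings. Values above 1.0 indicate consumption
    above the recent-history baseline (possible performance degradation)."""
    consumption = np.asarray(consumption, dtype=float)
    baseline = consumption[-window - 1 : -1].mean()
    return float(consumption[-1] / baseline)

readings = [10.0, 10.2, 9.8, 10.1, 9.9, 11.0]  # t/day, hypothetical
kpi = fuel_kpi(readings)  # 11.0 / mean of previous five (10.0) = 1.1
```

Because the baseline is recomputed from recent history, the KPI adapts as the operational state drifts, unlike a static reference model.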

19 pages, 1563 KB  
Review
Autonomous Earthwork Machinery for Urban Construction: A Review of Integrated Control, Fleet Coordination, and Safety Assurance
by Zeru Liu and Jung In Kim
Buildings 2025, 15(14), 2570; https://doi.org/10.3390/buildings15142570 - 21 Jul 2025
Viewed by 575
Abstract
Autonomous earthwork machinery is gaining traction as a means to boost productivity and safety on space-constrained urban sites, yet the fast-growing literature has not been fully integrated. To clarify current knowledge, we systematically searched Scopus and screened 597 records, retaining 157 peer-reviewed papers (2015–March 2025) that address autonomy, integrated control, or risk mitigation for excavators, bulldozers, and loaders. Descriptive statistics, VOSviewer mapping, and qualitative synthesis show the output rising rapidly and peaking at 30 papers in 2024, led by China, Korea, and the USA. Four tightly linked themes dominate: perception-driven machine autonomy, IoT-enabled integrated control systems, multi-sensor safety strategies, and the first demonstrations of fleet-level collaboration (e.g., coordinated excavator clusters and unmanned aerial vehicle and unmanned ground vehicle (UAV–UGV) site preparation). Advances include centimeter-scale path tracking, real-time vision-light detection and ranging (LiDAR) fusion and geofenced safety envelopes, but formal validation protocols and robust inter-machine communication remain open challenges. The review distils five research priorities, including adaptive perception and artificial intelligence (AI), digital-twin integration with building information modeling (BIM), cooperative multi-robot planning, rigorous safety assurance, and human–automation partnership that must be addressed to transform isolated prototypes into connected, self-optimizing fleets capable of delivering safer, faster, and more sustainable urban construction. Full article
(This article belongs to the Special Issue Automation and Robotics in Building Design and Construction)

43 pages, 1035 KB  
Review
A Review of Internet of Things Approaches for Vehicle Accident Detection and Emergency Notification
by Mohammad Ali Sahraei and Said Ramadhan Mubarak Al Mamari
Sustainability 2025, 17(14), 6510; https://doi.org/10.3390/su17146510 - 16 Jul 2025
Viewed by 1775
Abstract
The inspiration behind this specific research is based on addressing the growing need to improve road safety via the application of the Internet of Things (IoT) system. Although several investigations have discovered the possibility of IoT-based accident recognition, recent research remains fragmented, usually concentrating on outdated science or specific use cases. This study aims to fill that gap by carefully examining and conducting a comparative analysis of 101 peer-reviewed articles published between 2008 and 2025, with a focus on IoT systems for accident recognition techniques. The review categorizes approaches depending on the sensor used, incorporation frameworks, and recognition techniques. The study examines numerous sensors, such as Global System for Mobile Communications/Global Positioning System (GSM/GPS), accelerometers, vibration, and many other superior sensors. The research shows the constraints and advantages of existing techniques, concentrating on the significance of multi-sensor utilization in enhancing recognition precision and dependability. Findings indicate that, although substantial improvements have been made in the use of IoT-based systems for accident recognition, problems such as substantial implementation costs, weather conditions, and data precision issues persist. Moreover, the research acknowledges deficiencies in standardization, as well as the requirement for strong communication systems to enhance the responsiveness of emergency services. As a result, the study suggests a plan for upcoming developments, concentrating on the incorporation of IoT-enabled infrastructure, sensor fusion approaches, and artificial intelligence. This study improves knowledge by offering an extensive viewpoint on IoT-based accident recognition, providing insights for upcoming research, and suggesting policies to facilitate implementation, eventually enhancing worldwide road safety. Full article

21 pages, 4147 KB  
Article
AgriFusionNet: A Lightweight Deep Learning Model for Multisource Plant Disease Diagnosis
by Saleh Albahli
Agriculture 2025, 15(14), 1523; https://doi.org/10.3390/agriculture15141523 - 15 Jul 2025
Cited by 1 | Viewed by 774
Abstract
Timely and accurate identification of plant diseases is critical to mitigating crop losses and enhancing yield in precision agriculture. This paper proposes AgriFusionNet, a lightweight and efficient deep learning model designed to diagnose plant diseases using multimodal data sources. The framework integrates RGB and multispectral drone imagery with IoT-based environmental sensor data (e.g., temperature, humidity, soil moisture), recorded over six months across multiple agricultural zones. Built on the EfficientNetV2-B4 backbone, AgriFusionNet incorporates Fused-MBConv blocks and Swish activation to improve gradient flow, capture fine-grained disease patterns, and reduce inference latency. The model was evaluated using a comprehensive dataset composed of real-world and benchmarked samples, showing superior performance with 94.3% classification accuracy, 28.5 ms inference time, and a 30% reduction in model parameters compared to state-of-the-art models such as Vision Transformers and InceptionV4. Extensive comparisons with both traditional machine learning and advanced deep learning methods underscore its robustness, generalization, and suitability for deployment on edge devices. Ablation studies and confusion matrix analyses further confirm its diagnostic precision, even in visually ambiguous cases. The proposed framework offers a scalable, practical solution for real-time crop health monitoring, contributing toward smart and sustainable agricultural ecosystems. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)

27 pages, 7808 KB  
Article
Phenology-Aware Transformer for Semantic Segmentation of Non-Food Crops from Multi-Source Remote Sensing Time Series
by Xiongwei Guan, Meiling Liu, Shi Cao and Jiale Jiang
Remote Sens. 2025, 17(14), 2346; https://doi.org/10.3390/rs17142346 - 9 Jul 2025
Viewed by 521
Abstract
Accurate identification of non-food crops underpins food security by clarifying land-use dynamics, promoting sustainable farming, and guiding efficient resource allocation. Proper identification and management maintain the balance between food and non-food cropping, a prerequisite for ecological sustainability and a healthy agricultural economy. Distinguishing large-scale non-food crops—such as oilseed rape, tea, and cotton—remains challenging because their canopy reflectance spectra are similar. This study proposes a novel phenology-aware Vision Transformer Model (PVM) for accurate, large-scale non-food crop classification. PVM incorporates a Phenology-Aware Module (PAM) that fuses multi-source remote-sensing time series with crop-growth calendars. The study area is Hunan Province, China. We collected Sentinel-1 SAR and Sentinel-2 optical imagery (2021–2022) and corresponding ground-truth samples of non-food crops. The model uses a Vision Transformer (ViT) backbone integrated with PAM. PAM dynamically adjusts temporal attention using encoded phenological cues, enabling the network to focus on key growth stages. A parallel Multi-Task Attention Fusion (MTAF) mechanism adaptively combines Sentinel-1 and Sentinel-2 time-series data. The fusion exploits sensor complementarity and mitigates cloud-induced data gaps. The fused spatiotemporal features feed a Transformer-based decoder that performs multi-class semantic segmentation. On the Hunan dataset, PVM achieved an F1-score of 74.84% and an IoU of 61.38%, outperforming MTAF-TST and 2D-U-Net + CLSTM baselines. Cross-regional validation on the Canadian Cropland Dataset confirmed the model’s generalizability, with an F1-score of 71.93% and an IoU of 55.94%. Ablation experiments verified the contribution of each module. Adding PAM raised IoU by 8.3%, whereas including MTAF improved recall by 8.91%. 
Overall, PVM effectively integrates phenological knowledge with multi-source imagery, delivering accurate and scalable non-food crop classification. Full article

24 pages, 15879 KB  
Article
Real-Time Hand Gesture Recognition in Clinical Settings: A Low-Power FMCW Radar Integrated Sensor System with Multiple Feature Fusion
by Haili Wang, Muye Zhang, Linghao Zhang, Xiaoxiao Zhu and Qixin Cao
Sensors 2025, 25(13), 4169; https://doi.org/10.3390/s25134169 - 4 Jul 2025
Viewed by 560
Abstract
Robust and efficient contactless human–machine interaction is critical for integrated sensor systems in clinical settings, demanding low-power solutions adaptable to edge computing platforms. This paper presents a real-time hand gesture recognition system using a low-power Frequency-Modulated Continuous Wave (FMCW) radar sensor, featuring a novel Multiple Feature Fusion (MFF) framework optimized for deployment on edge devices. The proposed system integrates velocity profiles, angular variations, and spatial-temporal features through a dual-stage processing architecture: an adaptive energy thresholding detector segments gestures, followed by an attention-enhanced neural classifier. Innovations include dynamic clutter suppression and multi-path cancellation optimized for complex clinical environments. Experimental validation demonstrates high performance, achieving 98% detection recall and 93.87% classification accuracy under LOSO cross-validation. On embedded hardware, the system processes at 28 FPS, showing higher robustness against environmental noise and lower computational overhead compared with existing methods. This low-power, edge-based solution is highly suitable for applications like sterile medical control and patient monitoring, advancing contactless interaction in healthcare by addressing efficiency and robustness challenges in radar sensing for edge computing. Full article
(This article belongs to the Special Issue Integrated Sensor Systems for Medical Applications)
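The adaptive energy thresholding stage described above can be sketched as a noise-calibrated threshold on per-frame energy: estimate background statistics from a clutter-only segment, then flag frames exceeding mean + k·std. The `k` multiplier and noise-segment length are illustrative assumptions, not the paper's detector:

```python
import numpy as np

def detect_gesture_frames(frame_energy, k=3.0, noise_len=20):
    """Adaptive energy thresholding: flag frames whose energy exceeds
    mean + k * std of an initial noise-only calibration segment."""
    e = np.asarray(frame_energy, dtype=float)
    noise = e[:noise_len]
    thresh = noise.mean() + k * noise.std()
    return e > thresh

# Deterministic toy data: alternating clutter energy with a gesture burst.
energy = np.tile([0.9, 1.1], 50).astype(float)  # 100 frames of clutter
energy[40:60] += 5.0                            # injected "gesture" burst
mask = detect_gesture_frames(energy)            # True exactly on frames 40..59
```

Calibrating the threshold from observed clutter (rather than fixing it) is what makes the detector adaptive to changing environments.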

19 pages, 1103 KB  
Article
Early-Stage Sensor Data Fusion Pipeline Exploration Framework for Agriculture and Animal Welfare
by Devon Martin, David L. Roberts and Alper Bozkurt
AgriEngineering 2025, 7(7), 215; https://doi.org/10.3390/agriengineering7070215 - 3 Jul 2025
Viewed by 635
Abstract
Internet-of-Things (IoT) approaches are continually introducing new sensors into the fields of agriculture and animal welfare. The application of multi-sensor data fusion to these domains remains a complex and open-ended challenge that defies straightforward optimization, often requiring iterative testing and refinement. To respond to this need, we have created a new open-source framework as well as a corresponding Python tool which we call the “Data Fusion Explorer (DFE)”. We demonstrated and evaluated the effectiveness of our proposed framework using four early-stage datasets from diverse disciplines, including animal/environmental tracking, agrarian monitoring, and food quality assessment. This included data across multiple common formats including single, array, and image data, as well as classification or regression and temporal or spatial distributions. We compared various pipeline schemes, such as low-level against mid-level fusion, or the placement of dimensional reduction. Based on their space and time complexities, we then highlighted how these pipelines may be used for different purposes depending on the given problem. As an example, we observed that early feature extraction reduced time and space complexity in agrarian data. Additionally, independent component analysis outperformed principal component analysis slightly in a sweet potato imaging dataset. Lastly, we benchmarked the DFE tool with respect to the Vanilla Python3 packages using our four datasets’ pipelines and observed a significant reduction, usually more than 50%, in coding requirements for users in almost every dataset, suggesting the usefulness of this package for interdisciplinary researchers in the field. Full article
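The low-level vs. mid-level fusion comparison discussed in the abstract can be sketched in a few lines: low-level (early) fusion concatenates raw sensor data before modelling, while mid-level fusion extracts per-sensor features first. The `summary_features` extractor is a hypothetical stand-in for whatever features a real pipeline would compute; this is not the DFE tool's API:

```python
import numpy as np

def low_level_fusion(a, b):
    """Low-level (early) fusion: concatenate raw readings from two
    sensors sample-by-sample, before any feature extraction."""
    return np.concatenate([a, b], axis=1)

def mid_level_fusion(a, b, extract):
    """Mid-level fusion: extract features per sensor first, then
    concatenate the (much smaller) feature vectors."""
    return np.concatenate([extract(a), extract(b)], axis=1)

def summary_features(x):
    # Hypothetical per-sample extractor: mean and std of each row.
    return np.column_stack([x.mean(axis=1), x.std(axis=1)])

rng = np.random.default_rng(0)
a = rng.random((8, 100))  # sensor 1: 8 samples x 100 raw readings
b = rng.random((8, 50))   # sensor 2: 8 samples x 50 raw readings

low = low_level_fusion(a, b)                    # shape (8, 150)
mid = mid_level_fusion(a, b, summary_features)  # shape (8, 4)
```

The shape difference makes the space-complexity trade-off the paper analyzes concrete: early feature extraction shrinks the fused representation by orders of magnitude.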

27 pages, 569 KB  
Article
Construction Worker Activity Recognition Using Deep Residual Convolutional Network Based on Fused IMU Sensor Data in Internet-of-Things Environment
by Sakorn Mekruksavanich and Anuchit Jitpattanakul
IoT 2025, 6(3), 36; https://doi.org/10.3390/iot6030036 - 28 Jun 2025
Viewed by 503
Abstract
With the advent of Industry 4.0, sensor-based human activity recognition has become increasingly vital for improving worker safety, enhancing operational efficiency, and optimizing workflows in Internet-of-Things (IoT) environments. This study introduces a novel deep learning-based framework for construction worker activity recognition, employing a deep residual convolutional neural network (ResNet) architecture integrated with multi-sensor fusion techniques. The proposed system processes data from multiple inertial measurement unit sensors strategically positioned on workers’ bodies to identify and classify construction-related activities accurately. A comprehensive pre-processing pipeline is implemented, incorporating Butterworth filtering for noise suppression, data normalization, and an adaptive sliding window mechanism for temporal segmentation. Experimental validation is conducted using the publicly available VTT-ConIoT dataset, which includes recordings of 16 construction activities performed by 13 participants in a controlled laboratory setting. The results demonstrate that the ResNet-based sensor fusion approach outperforms traditional single-sensor models and other deep learning methods. The system achieves classification accuracies of 97.32% for binary discrimination between recommended and non-recommended activities, 97.14% for categorizing six core task types, and 98.68% for detailed classification across sixteen individual activities. Optimal performance is consistently obtained with a 4-second window size, balancing recognition accuracy with computational efficiency. Although the hand-mounted sensor proved to be the most effective as a standalone unit, multi-sensor configurations delivered significantly higher accuracy, particularly in complex classification tasks. 
The proposed approach demonstrates strong potential for real-world applications, offering robust performance across diverse working conditions while maintaining computational feasibility for IoT deployment. This work advances the field of innovative construction by presenting a practical solution for real-time worker activity monitoring, which can be seamlessly integrated into existing IoT infrastructures to promote workplace safety, streamline construction processes, and support data-driven management decisions. Full article
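The pre-processing pipeline described above (Butterworth filtering for noise suppression, then overlapping sliding-window segmentation, with a 4 s window performing best) can be sketched with SciPy. The sampling rate, cutoff frequency, filter order, and overlap below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_imu(signal, fs=50, cutoff=10, order=4, win_s=4.0, overlap=0.5):
    """Low-pass Butterworth filtering followed by overlapping
    sliding-window segmentation of a 1-D IMU channel."""
    b, a = butter(order, cutoff, btype="low", fs=fs)
    filtered = filtfilt(b, a, signal)      # zero-phase filtering
    win = int(win_s * fs)
    hop = int(win * (1 - overlap))
    n = (len(filtered) - win) // hop + 1
    return np.stack([filtered[i * hop : i * hop + win] for i in range(n)])

fs = 50
t = np.arange(0, 20, 1 / fs)  # 20 s of hypothetical accelerometer data
raw = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 20.0 * t)
windows = preprocess_imu(raw, fs=fs)  # 4 s windows with 50% overlap
```

Each window would then be fed to the classifier independently; the 4 s window size balances recognition accuracy against per-window compute, as the abstract notes.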

16 pages, 6543 KB  
Article
IoT-Edge Hybrid Architecture with Cross-Modal Transformer and Federated Manifold Learning for Safety-Critical Gesture Control in Adaptive Mobility Platforms
by Xinmin Jin, Jian Teng and Jiaji Chen
Future Internet 2025, 17(7), 271; https://doi.org/10.3390/fi17070271 - 20 Jun 2025
Viewed by 796
Abstract
This research presents an IoT-empowered adaptive mobility framework that integrates high-dimensional gesture recognition with edge-cloud orchestration for safety-critical human–machine interaction. The system architecture establishes a three-tier IoT network: a perception layer with 60 GHz FMCW radar and TOF infrared arrays (12-node mesh topology, 15 cm baseline spacing) for real-time motion tracking; an edge intelligence layer deploying a time-aware neural network via NVIDIA Jetson Nano to achieve up to 99.1% recognition accuracy with latency as low as 48 ms under optimal conditions (typical performance: 97.8% ± 1.4% accuracy, 68.7 ms ± 15.3 ms latency); and a federated cloud layer enabling distributed model synchronization across 32 edge nodes via LoRaWAN-optimized protocols (κ = 0.912 consensus). A reconfigurable chassis with three operational modes (standing, seated, balance) employs IoT-driven kinematic optimization for enhanced adaptability and user safety. Using both radar and infrared sensors together reduces false detections to 0.08% even under high-vibration conditions (80 km/h), while distributed learning across multiple devices maintains consistent accuracy (variance < 5%) in different environments. Experimental results demonstrate 93% reliability improvement over HMM baselines and 3.8% accuracy gain over state-of-the-art LSTM models, while achieving 33% faster inference (48.3 ms vs. 72.1 ms). The system maintains industrial-grade safety certification with energy-efficient computation. Bridging adaptive mechanics with edge intelligence, this research pioneers a sustainable IoT-edge paradigm for smart mobility, harmonizing real-time responsiveness, ecological sustainability, and scalable deployment in complex urban ecosystems. Full article
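The abstract mentions distributed model synchronization across 32 edge nodes but does not publish the aggregation rule; a minimal FedAvg-style sketch (sample-count-weighted averaging of per-node parameters — the weighting scheme and node count here are illustrative assumptions, not the paper's method):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: each edge node's model parameters are
    weighted by its local sample count before aggregation into a
    single global model."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# three hypothetical edge nodes sharing a 4-parameter model
clients = [np.array([1.0, 2.0, 3.0, 4.0]),
           np.array([3.0, 2.0, 1.0, 0.0]),
           np.array([2.0, 2.0, 2.0, 2.0])]
sizes = [100, 100, 200]  # local dataset sizes
global_w = fed_avg(clients, sizes)
print(global_w)  # [2. 2. 2. 2.]
```

In a bandwidth-constrained LoRaWAN deployment like the one described, nodes would typically transmit quantized parameter deltas rather than full weight vectors, with the cloud layer broadcasting the aggregated model back to all edges.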
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)
18 pages, 854 KB  
Review
Water Quality Management in the Age of AI: Applications, Challenges, and Prospects
by Shubin Zou, Hanyu Ju and Jingjie Zhang
Water 2025, 17(11), 1641; https://doi.org/10.3390/w17111641 - 28 May 2025
Abstract
Artificial intelligence (AI) is transforming water environment management, creating new opportunities for improved monitoring, prediction, and intelligent regulation of water quality. This review highlights the transformative impact of AI, particularly through hybrid modeling frameworks that integrate AI with technologies like the Internet of Things (IoT), Remote Sensing (RS), and Unmanned Monitoring Platforms (UMP). These advances have significantly enhanced real-time monitoring accuracy, expanded the scope of data acquisition, and enabled comprehensive analysis through multisource data fusion. Coupling AI models with process-based models (PBM) has notably enhanced predictive capabilities for simulating water quality dynamics. Additionally, AI facilitates dynamic early-warning systems, precise pollutant source tracking, and data-driven decision-making. However, significant challenges remain, including data quality and accessibility, model interpretability, monitoring of hard-to-measure pollutants, and the lack of system integration and standardization. To address these bottlenecks, future research should focus on: (1) constructing high-quality, standardized open-access datasets; (2) developing explainable AI (XAI) models; (3) strengthening integration with digital twins and next-generation sensors; (4) improving the monitoring of trace and emerging pollutants; and (5) coupling AI with PBM by optimizing input data, internal mechanisms, and correcting model outputs through validation against observations. Overcoming these challenges will position AI as a central pillar in advancing smart water quality governance, safeguarding water security, and achieving sustainable development goals. Full article