Search Results (1,120)

Search Parameters:
Keywords = IoT cloud computing

28 pages, 1319 KB  
Systematic Review
The Use of Industry 4.0 and 5.0 Technologies in the Transformation of Food Services: An Integrative Review
by Regiana Cantarelli da Silva, Lívia Bacharini Lima, Emanuele Batistela dos Santos and Rita de Cássia Akutsu
Foods 2025, 14(24), 4320; https://doi.org/10.3390/foods14244320 - 15 Dec 2025
Abstract
Industry 5.0 involves the integration of advanced technologies, collaboration between humans and intelligent machines, resilience and sustainability, all of which are essential for the advancement of the food services industry. This analysis reviews the scientific literature on Industries 4.0 and 5.0 technologies, whether experimental or implemented, focused on producing large meals in food service. The review has been conducted through a systematic search, covering aspects from consumer ordering and the cooking process to distribution while considering management, quality control, and sustainability. A total of thirty-one articles, published between 2006 and 2025, were selected, with the majority focusing on Industry 5.0 (71%) and a significant proportion on testing phases (77.4%). In the context of Food Service Perspectives, the emphasis has been placed on customer service (32.3%), highlighting the use of Artificial Intelligence (AI)-powered robots for serving customers and AI for service personalization. Sustainability has also received attention (29%), focusing on AI and machine learning (ML) applications aimed at waste reduction. In management (22.6%), AI has been applied to optimize production schedules, enhance menu engineering, and improve overall management. Big Data (BD) and ML were utilized for sales analysis, while Blockchain technology was employed for traceability. Cooking innovations (9.7%) centered on automation, particularly the use of collaborative robots (cobots). For Quality Control (6.4%), AI, along with the Internet of Things (IoT) and Cloud Computing, has been used to monitor the physical aspects of food. The study underscores the importance of strategic investments in technology to optimize processes and resources, personalize services, and ensure food quality, thereby promoting balance and sustainability.
(This article belongs to the Section Food Systems)

21 pages, 1151 KB  
Article
Edge-Enabled Hybrid Encryption Framework for Secure Health Information Exchange in IoT-Based Smart Healthcare Systems
by Norjihan Abdul Ghani, Bintang Annisa Bagustari, Muneer Ahmad, Herman Tolle and Diva Kurnianingtyas
Sensors 2025, 25(24), 7583; https://doi.org/10.3390/s25247583 - 14 Dec 2025
Abstract
The integration of the Internet of Things (IoT) and edge computing is transforming healthcare by enabling real-time acquisition, processing, and exchange of sensitive patient data close to the data source. However, the distributed nature of IoT-enabled smart healthcare systems exposes them to severe security and privacy risks during health information exchange (HIE). This study proposes an edge-enabled hybrid encryption framework that combines elliptic curve cryptography (ECC), HMAC-SHA256, and the Advanced Encryption Standard (AES) to ensure data confidentiality, integrity, and efficient computation in healthcare communication networks. The proposed model minimizes latency and reduces cloud dependency by executing encryption and verification at the network edge. It provides the first systematic comparison of hybrid encryption configurations for edge-based HIE, evaluating CPU usage, memory consumption, and scalability across varying data volumes. Experimental results demonstrate that the ECC + HMAC-SHA256 + AES configuration achieves high encryption efficiency and strong resistance to attacks while maintaining lightweight processing suitable for edge devices. This approach provides a scalable and secure solution for protecting sensitive health data in next-generation IoT-enabled smart healthcare systems.
(This article belongs to the Special Issue Edge Artificial Intelligence and Data Science for IoT-Enabled Systems)
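To make the encrypt-then-MAC composition described in this abstract concrete, here is a minimal, dependency-free Python sketch. Note the assumptions: the paper names ECC key agreement and AES, but this illustration substitutes a pre-shared 32-byte key for the ECC-derived session key and a SHA-256 counter-mode keystream for AES; only the HMAC-SHA256 integrity layer is faithful, and all function names are hypothetical.

```python
import hashlib
import hmac
import os


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a stand-in for AES-CTR (illustration only).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    # Encrypt first, then authenticate nonce + ciphertext with HMAC-SHA256.
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag


def decrypt_and_verify(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    # Verify the tag in constant time before decrypting.
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))
```

A production implementation would replace `keystream` with AES-GCM or AES-CTR from a vetted library and derive `enc_key`/`mac_key` via ECDH plus a key-derivation function.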
43 pages, 2472 KB  
Article
Privacy-Preserving Federated Learning for Distributed Financial IoT: A Blockchain-Based Framework for Secure Cryptocurrency Market Analytics
by Oleksandr Kuznetsov, Saltanat Adilzhanova, Serhiy Florov, Valerii Bushkov and Danylo Peremetchyk
IoT 2025, 6(4), 78; https://doi.org/10.3390/iot6040078 - 11 Dec 2025
Abstract
The proliferation of Internet of Things (IoT) devices in financial markets has created distributed ecosystems where cryptocurrency exchanges, trading platforms, and market data providers operate as autonomous edge nodes generating massive volumes of sensitive financial data. Collaborative machine learning across these distributed financial IoT nodes faces fundamental challenges: institutions possess valuable proprietary data but cannot share it directly due to competitive concerns, regulatory constraints, and trust management requirements in decentralized networks. This study presents a privacy-preserving federated learning framework tailored for distributed financial IoT systems, combining differential privacy with Shamir secret sharing to enable secure collaborative intelligence across blockchain-based cryptocurrency trading networks. We implement per-layer gradient clipping and Rényi differential privacy composition to minimize utility loss while maintaining formal privacy guarantees in edge computing scenarios. Using 5.6 million orderbook observations from 11 cryptocurrency pairs collected across distributed exchange nodes, we evaluate three data partitioning strategies simulating realistic heterogeneity patterns in financial IoT deployments. Our experiments reveal that federated edge learning imposes 9–15 percentage point accuracy degradation compared to centralized cloud processing, driven primarily by data distribution heterogeneity across autonomous nodes. Critically, adding differential privacy (ε = 3.0) and cryptographic secret sharing increases this degradation by less than 0.3 percentage points when mechanisms are calibrated appropriately for edge devices. The framework achieves 62–66.5% direction accuracy on cryptocurrency price movements, with confidence-based execution generating 71–137 basis points average profit per trade. These results demonstrate the practical viability of privacy-preserving collaborative intelligence for distributed financial IoT while identifying that the federated optimization gap dominates privacy mechanism costs. Our findings offer architectural insights for designing trustworthy distributed systems in blockchain-enabled financial IoT ecosystems.
(This article belongs to the Special Issue Blockchain-Based Trusted IoT)
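The per-layer gradient clipping with Gaussian noise that this abstract describes can be sketched as follows. This is an illustration only: the clip norm and noise multiplier are placeholder values, the Rényi-DP accounting that calibrates ε is omitted, and the function names are hypothetical.

```python
import math
import random


def clip_layer(grad, clip_norm):
    # Scale one layer's gradient so its L2 norm is at most clip_norm.
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]


def privatize(per_layer_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    # Per-layer clipping followed by Gaussian noise, in the spirit of DP-SGD.
    # The Rényi-DP composition that converts (clip_norm, noise_multiplier)
    # into an epsilon guarantee is not shown here.
    rng = rng or random.Random(0)
    sigma = noise_multiplier * clip_norm
    return [
        [g + rng.gauss(0.0, sigma) for g in clip_layer(layer, clip_norm)]
        for layer in per_layer_grads
    ]
```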

26 pages, 2212 KB  
Article
Adaptive Reinforcement Learning-Based Framework for Energy-Efficient Task Offloading in a Fog–Cloud Environment
by Branka Mikavica and Aleksandra Kostic-Ljubisavljevic
Sensors 2025, 25(24), 7516; https://doi.org/10.3390/s25247516 - 10 Dec 2025
Abstract
Ever-increasing computational demand introduced by the expanding scale of Internet of Things (IoT) devices poses significant concerns in terms of energy consumption in a fog–cloud environment. Due to the limited resources of IoT devices, energy-efficient task offloading becomes even more challenging for time-sensitive tasks. In this paper, we propose a reinforcement learning-based framework, namely Adaptive Q-learning-based Energy-aware Task Offloading (AQETO), that dynamically manages the energy consumption of fog nodes in a fog–cloud network. Concurrently, it considers IoT task delay tolerance and allocates computational resources while satisfying deadline requirements. The proposed approach dynamically determines energy states of each fog node using Q-learning depending on workload fluctuations. Moreover, AQETO prioritizes allocation of the most urgent tasks to minimize delays. Extensive experiments demonstrate the effectiveness of AQETO in terms of the minimization of fog node energy consumption and delay and the maximization of system efficiency.
(This article belongs to the Section Intelligent Sensors)
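The Q-learning core that AQETO builds on is the standard tabular update. The abstract does not specify the state, action, or reward design beyond fog-node energy states and deadlines, so the state/action names below are hypothetical, used purely to show the update rule.

```python
def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    # Standard tabular Q-learning update:
    #   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    # Q is a dict keyed by (state, action); unknown entries default to 0.
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q
```

In an offloading setting the reward would typically penalize fog-node energy use and missed deadlines; those specifics are not given in the abstract.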

33 pages, 4059 KB  
Article
AI-Enabled Dynamic Edge-Cloud Resource Allocation for Smart Cities and Smart Buildings
by Marian-Cosmin Dumitru, Simona-Iuliana Caramihai, Alexandru Dumitrascu, Radu-Nicolae Pietraru and Mihnea-Alexandru Moisescu
Sensors 2025, 25(24), 7438; https://doi.org/10.3390/s25247438 - 6 Dec 2025
Abstract
The rapid expansion of IoT devices has brought significant progress to areas such as smart buildings and smart cities, but the volume of data generated poses a challenge that can create real bottlenecks in the data analysis process, resulting in increased waiting times for end users. The use of cloud-based solutions may prove inefficient in some cases, as the bandwidth required for transmitting data generated by IoT devices is limited. Integration with Edge computing mitigates this issue by bringing data processing closer to the resource that generates it. Edge computing plays a key role in improving cloud performance by offloading tasks closer to the data source and optimizing resource allocation. Achieving the desired performance requires a dynamic approach to resource management, where task execution can be prioritized based on current load conditions at either the Edge node or the Cloud node. This paper proposes an approach based on the Seasonal Autoregressive Integrated Moving Average (SARIMA) model for seamlessly switching between the Cloud and Edge nodes in the event of a loss of connection between them, thereby ensuring the command loop remains closed by transferring the task to the Edge node until the Cloud node becomes available. In this way, a prediction that could underlie a command is not jeopardized by the loss of connection to the Cloud node. The method was evaluated using real-world resource utilization data and compared against a Simple Moving Average (SMA) baseline using standard metrics: RMSE, MAE, MAPE, and MSE. Experimental results demonstrate that SARIMA significantly improves prediction accuracy, achieving up to 64% improvement for CPU usage and 35% for RAM usage compared to SMA. These findings highlight the effectiveness of incorporating seasonality and autoregressive components in predictive models for edge computing, contributing to more efficient resource allocation and enhanced performance in smart city environments.
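The SMA baseline and two of the reported error metrics (RMSE, MAE) are easy to reproduce; a minimal sketch with hypothetical data follows. The SARIMA model itself would typically come from a statistics library such as statsmodels (`SARIMAX`) and is not shown here.

```python
def sma_forecast(series, window=3):
    # One-step-ahead Simple Moving Average forecasts for t >= window:
    # forecast[t] = mean(series[t - window : t]).
    return [sum(series[t - window:t]) / window for t in range(window, len(series))]


def rmse(actual, predicted):
    # Root Mean Squared Error.
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)) ** 0.5


def mae(actual, predicted):
    # Mean Absolute Error.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
```

With a toy CPU-usage series `[1, 2, 3, 4, 5, 6]` and `window=3`, the SMA forecasts are `[2.0, 3.0, 4.0]` against actuals `[4, 5, 6]`, giving RMSE and MAE of 2.0 each.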

26 pages, 1957 KB  
Systematic Review
Industrial Digitalization: Systematic Literature Review and Bibliometric Analysis
by Galina Ilieva, Tania Yankova, Peyo Staribratov, Galina Ruseva and Yuliy Iliev
Information 2025, 16(12), 1080; https://doi.org/10.3390/info16121080 - 5 Dec 2025
Abstract
This article reviews the state of the art, implementation barriers, and emerging trends in industrial digitalization, drawing on studies published between 2020 and July 2025. It analyzes how classical Industry 4.0 technologies, simulation and modeling, and Industry 5.0 priorities are transforming production processes in smart factories, yielding higher productivity, reduced downtime, and improved quality. At the same time, the literature documents persistent obstacles, including system integration and interoperability, security and data-privacy risk, and financial constraints, especially for SMEs. Looking ahead, future directions point to a gradual shift towards sustainable intelligent manufacturing with human–robot collaboration and data-centric operations. In addition, the article proposes and validates a conceptual framework for the digitalization of manufacturing companies and provides practical recommendations for stakeholders seeking to leverage digital technologies for operational excellence and sustainable value creation.
(This article belongs to the Special Issue Modeling in the Era of Generative AI)

27 pages, 3213 KB  
Article
Urban Sound Classification for IoT Devices in Smart City Infrastructures
by Simona Domazetovska Markovska, Viktor Gavriloski, Damjan Pecioski, Maja Anachkova, Dejan Shishkovski and Anastasija Angjusheva Ignjatovska
Urban Sci. 2025, 9(12), 517; https://doi.org/10.3390/urbansci9120517 - 5 Dec 2025
Abstract
Urban noise is a major environmental concern that affects public health and quality of life, demanding new approaches beyond conventional noise level monitoring. This study investigates the development of an AI-driven Acoustic Event Detection and Classification (AED/C) system designed for urban sound recognition and its integration into smart city applications. Using the UrbanSound8K dataset, five acoustic parameters, Mel Frequency Cepstral Coefficients (MFCC), Mel Spectrogram (MS), Spectral Contrast (SC), Tonal Centroid (TC), and Chromagram (Ch), were mathematically modeled and applied to feature extraction. Their combinations were tested with three classical machine learning algorithms, Support Vector Machines (SVM), Random Forest (RF), and Naive Bayes (NB), as well as a deep learning approach, Convolutional Neural Networks (CNN). A total of 52 models built with the three ML algorithms were analyzed, along with 4 CNN models. The MFCC-based CNN models showed the highest accuracy, achieving up to 92.68% on test data, approximately a 2% improvement over prior CNN-based approaches reported in similar studies. Additionally, the number of trained models, 56 in total, exceeds those presented in comparable research, ensuring more robust performance validation and statistical reliability. Real-time validation confirmed the applicability for IoT devices, and a low-cost wireless sensor unit (WSU) was developed with fog and cloud computing for scalable data processing. The constructed WSU costs at least four times less than previously developed units while maintaining good performance, enabling broader deployment potential in smart city applications. The findings demonstrate the potential of AI-based AED/C systems for continuous, source-specific noise classification, supporting sustainable urban planning and improved environmental management in smart cities.

22 pages, 396 KB  
Review
Towards a Unified Digital Ecosystem: The Role of Platform Technology Convergence
by Asif Mehmood, Mohammad Arif and Faisal Mehmood
Electronics 2025, 14(24), 4787; https://doi.org/10.3390/electronics14244787 - 5 Dec 2025
Abstract
The rapid evolution of platform technologies is transforming industries, interoperability, and innovation. Despite numerous studies on individual technologies, no prior review unifies AI, IoT, blockchain, and 5G with cross-sector standards, governance, and technical enablers to provide a comprehensive view of platform convergence. This narrative review synthesizes conceptual and technical literature from 2015–2025, focusing on how converging platform technologies interact across sectors. The review organizes findings by technological enablers, cross-domain integration mechanisms, sector-specific applications, and emergent trends, highlighting systemic synergies and challenges. The study demonstrates that AI, IoT, blockchain, cloud-edge architectures, and advanced communication networks collectively enable interoperable, secure, and adaptive ecosystems. Key enablers include standardized protocols, edge–cloud orchestration, and cross-platform data sharing, while challenges involve cybersecurity, regulatory compliance, and scalability. Sectoral examples span healthcare, finance, manufacturing, smart cities, and autonomous systems. Platform convergence offers transformative potential for sustainable and intelligent systems. Critical research gaps remain in unified architectures, privacy-preserving AI and blockchain mechanisms, and dynamic orchestration of heterogeneous systems. Emerging technologies such as quantum computing and federated learning are poised to further strengthen collaborative ecosystems. This review provides actionable insights for researchers, policymakers, and industry leaders aiming to harness platform convergence for innovation and sustainable development.

16 pages, 1229 KB  
Systematic Review
Resilience of Post-Quantum Cryptography in Lightweight IoT Protocols: A Systematic Review
by Mohammed Almutairi and Frederick T. Sheldon
Eng 2025, 6(12), 346; https://doi.org/10.3390/eng6120346 - 2 Dec 2025
Abstract
The rapid advancement of quantum computing poses significant threats to classical cryptographic methods, such as Rivest–Shamir–Adleman (RSA) and Elliptic Curve Cryptography (ECC), which currently secure Internet of Things (IoT) and cloud communications. Post-Quantum Cryptography (PQC), particularly lattice-based schemes, has emerged as a promising alternative. CRYSTALS-Kyber, standardized by the National Institute of Standards and Technology (NIST) as ML-KEM, has shown efficiency and practicality for constrained IoT devices. Most existing research has focused on PQC within the Transport Layer Security (TLS) protocol; consequently, a critical gap exists in understanding PQC's performance in lightweight IoT protocols such as the Message Queuing Telemetry Transport (MQTT) and the Constrained Application Protocol (CoAP), particularly under adverse network conditions. To address this gap, this paper provides a systematic review of the literature on the network resilience and performance of CRYSTALS-Kyber when integrated into these protocols operating over lossy and high-latency networks. Additional challenges include non-standardized integration, resource limitations, and side-channel vulnerabilities. This review provides a structured synthesis of current knowledge, highlights unresolved trade-offs between security and efficiency, and outlines future research directions, including protocol-level optimization, lightweight signature schemes, and resilience testing of PQC-secured IoT protocols under realistic conditions.

22 pages, 3760 KB  
Article
Embedded Implementation of Real-Time Voice Command Recognition on PIC Microcontroller
by Mohamed Shili, Salah Hammedi, Amjad Gawanmeh and Khaled Nouri
Automation 2025, 6(4), 79; https://doi.org/10.3390/automation6040079 - 28 Nov 2025
Abstract
This paper describes a real-time system for recognizing voice commands on resource-constrained embedded devices, specifically a PIC microcontroller. While most existing voice-command solutions rely on high-performance processing platforms or cloud computation, the system described here performs fully embedded, low-power processing locally on the device. Sound is captured through a low-cost MEMS microphone, segmented into short audio frames, and time-domain features are extracted, i.e., Zero-Crossing Rate (ZCR) and Short-Time Energy (STE). These features were chosen for their low power and computational cost and their ability to be processed in real time on a microcontroller. For the purposes of this experimental system, a small vocabulary of four command words (i.e., “ON”, “OFF”, “LEFT”, and “RIGHT”) was used to emulate real voice-command interfaces. The main contribution lies in the combination of low-complexity, lightweight signal-processing techniques with embedded neural network inference, completing a classification cycle in real time (under 50 ms). Classification accuracy above 90% was demonstrated using confusion matrices and timing analysis of the classifier's performance across vocabularies of varying complexity. This method is well suited to IoT and portable embedded applications, offering a low-latency alternative to more complex and resource-intensive classification architectures.
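The two time-domain features named in this abstract, ZCR and STE, have standard definitions that fit in a few lines. The sketch below assumes frames of at least two samples and one common ZCR convention (fraction of adjacent sample pairs whose signs differ); the paper's exact framing and normalization are not specified.

```python
def zero_crossing_rate(frame):
    # Fraction of adjacent sample pairs whose signs differ
    # (samples >= 0 are treated as positive).
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)


def short_time_energy(frame):
    # Mean squared amplitude of the frame.
    return sum(x * x for x in frame) / len(frame)
```

For example, a frame that alternates between +1 and -1 every sample has a ZCR of 1.0, while a slowly varying voiced frame would score near 0; STE separates speech from silence by amplitude.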

20 pages, 6450 KB  
Article
An Edge AI Approach for Low-Power, Real-Time Atrial Fibrillation Detection on Wearable Devices Based on Heartbeat Intervals
by Eliana Cinotti, Maria Gragnaniello, Salvatore Parlato, Jessica Centracchio, Emilio Andreozzi, Paolo Bifulco, Michele Riccio and Daniele Esposito
Sensors 2025, 25(23), 7244; https://doi.org/10.3390/s25237244 - 27 Nov 2025
Abstract
Atrial fibrillation (AF) is the most common type of heart rhythm disorder worldwide. Early recognition of brief episodes of atrial fibrillation can provide important diagnostic information and lead to prompt treatment. AF is mainly characterized by an irregular heartbeat. Today, many personal devices such as smartphones, smartwatches, smart rings, or small wearable medical devices can detect heart rhythm. Sensors can acquire different types of heart-related signals and extract the sequence of inter-beat intervals, i.e., the instantaneous heart rate. Various algorithms, some of which are very complex and require significant computational resources, are used to recognize AF based on inter-beat (RR) intervals. This study aims to verify the possibility of using neural network algorithms directly on a microcontroller connected to sensors for AF detection. Sequences of 25, 50, and 100 RR intervals were extracted from a public database of electrocardiographic signals with annotated episodes of atrial fibrillation. A custom 1D convolutional neural network (1D-CNN) was designed and then validated via a 5-fold subject-wise split cross-validation scheme. In each fold, the model was tested on a set of 3 randomly selected subjects, which had not previously been used for training, to ensure a subject-independent evaluation of model performance. Across all folds, all models achieved high and stable performance, with test accuracies of 0.963 ± 0.031, 0.976 ± 0.022, and 0.980 ± 0.023, respectively, for models using 25 RR, 50 RR, and 100 RR sequences. Precision, recall, F1-score, and AUC-ROC exhibited similarly high performance, confirming robust generalization across unseen subjects. Performance systematically improved with longer RR windows, indicating that richer temporal context enhances discrimination of AF rhythm irregularities. A complete Edge AI prototype integrating a low-power ECG analog front-end, an ARM Cortex-M7 microcontroller and an IoT transmitting module was used for realistic tests. Inference time, peak RAM usage, flash usage and current absorption were measured. The results obtained show the possibility of using neural network algorithms directly on microcontrollers for real-time AF recognition with very low power consumption. The prototype is also capable of sending the suspicious ECG trace to the cloud for final validation by a physician. The proposed methodology can be used for personal screening not only with ECG signals but with any other signal that reproduces the sequence of heartbeats (e.g., photoplethysmographic, pulse oximetric, pressure, accelerometric, etc.).
(This article belongs to the Special Issue Sensors for Heart Rate Monitoring and Cardiovascular Disease)
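The paper itself classifies RR sequences with a 1D-CNN, which is not reproduced here. As a simple illustration of why RR intervals expose AF, a classical hand-crafted irregularity measure, RMSSD, can be computed from the same input; this is a stand-in for intuition, not the authors' method.

```python
import math


def rmssd(rr_ms):
    # Root Mean Square of Successive Differences of RR intervals (in ms).
    # Elevated values indicate beat-to-beat irregularity, as seen in AF;
    # a perfectly regular rhythm gives 0.
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```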

21 pages, 1824 KB  
Article
A Framework for Integration of Machine Vision with IoT Sensing
by Gift Nwatuzie and Hassan Peyravi
Sensors 2025, 25(23), 7237; https://doi.org/10.3390/s25237237 - 27 Nov 2025
Abstract
Automated monitoring systems increasingly leverage diverse sensing sources, yet a disconnect often persists between machine vision and IoT sensor pipelines. While IoT sensors provide reliable point measurements and cameras offer rich spatial context, their independent operation limits coherent environmental interpretation. Existing multimodal fusion frameworks frequently lack tight synchronization and efficient cross-modal learning. This paper introduces a unified edge–cloud framework that deeply integrates cameras as active sensing nodes within an IoT network. Our approach features tight time synchronization between visual and IoT data streams and employs cross-modal knowledge distillation to enable efficient model training on resource-constrained edge devices. The system leverages a multi-task learning setup with dynamically adjusted loss weighting, combining architectures like EfficientNet, Vision Transformers, and U-Net derivatives. Validation on environmental monitoring tasks, including classification, segmentation, and anomaly detection, demonstrates the framework's robustness. Experiments deployed on compact edge hardware (Jetson Nano, Coral TPU) achieved 94.8% classification accuracy and 87.6% segmentation quality (mIoU) while sustaining sub-second inference latency. The results confirm that the proposed synchronized, knowledge-driven fusion yields a more adaptive, context-aware, and deployment-ready sensing solution, significantly advancing the practical integration of machine vision within IoT ecosystems.
(This article belongs to the Section Sensor Networks)
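The abstract does not specify its dynamic loss-weighting scheme. One simple possibility, shown purely as an illustration with hypothetical function names, is to weight each task inversely to its current loss magnitude so that no single task dominates the combined multi-task objective.

```python
def dynamic_loss_weights(task_losses, eps=1e-8):
    # Weight each task inversely to its current loss magnitude;
    # weights are normalized to sum to 1.
    inv = [1.0 / (loss + eps) for loss in task_losses]
    total = sum(inv)
    return [w / total for w in inv]


def combined_loss(task_losses):
    # Weighted sum of per-task losses using the dynamic weights.
    weights = dynamic_loss_weights(task_losses)
    return sum(w * loss for w, loss in zip(weights, task_losses))
```

With equal losses the weights are uniform; a task whose loss spikes (e.g., segmentation early in training) receives a proportionally smaller weight, pulling the combined objective toward balance.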

27 pages, 2355 KB  
Article
An IoT-Enabled Digital Twin Architecture with Feature-Optimized Transformer-Based Triage Classifier on a Cloud Platform
by Haider Q. Mutashar, Hiba A. Abu-Alsaad and Sawsan M. Mahmoud
IoT 2025, 6(4), 73; https://doi.org/10.3390/iot6040073 - 26 Nov 2025
Abstract
It is essential to assign the correct triage level to patients as soon as they arrive in the emergency department in order to save lives, especially during peak demand. However, many healthcare systems estimate triage levels by manual visual evaluation, which can be inconsistent and time consuming. This study presents a complete Digital Twin-based architecture for patient monitoring and automated triage level recommendation using IoT sensors, AI, and cloud-based services. The system monitors all patients' vital signs through embedded sensors. The readings are used to update the Digital Twin instances that represent the present condition of the patients. These data are then used for triage prediction using a pretrained model that can predict the patients' triage levels. The training of the model utilized the synthetic minority over-sampling technique (SMOTE) combined with Tomek links to lessen the degree of data imbalance. Additionally, Lagrange element optimization was applied to select the most informative features. The final triage level is predicted using the Tabular Prior-Data Fitted Network, a transformer-based model tailored for tabular data classification. This combination achieved an overall accuracy of 87.27%. The proposed system demonstrates the potential of integrating digital twins and AI to improve decision support in emergency healthcare environments.
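The core SMOTE step, interpolating between a minority-class sample and one of its nearest minority-class neighbours, can be sketched as below. The Tomek-link cleaning stage and the usual library implementation (e.g., imbalanced-learn's SMOTETomek) are omitted, and all names here are hypothetical.

```python
import random


def smote_samples(minority, n_new, k=2, rng=None):
    # SMOTE core idea: synthesize a new minority sample at a random point
    # on the segment between a sample x and one of its k nearest
    # minority-class neighbours.
    rng = rng or random.Random(0)
    out = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(p, x)),
        )[:k]
        nb = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        out.append([a + t * (b - a) for a, b in zip(x, nb)])
    return out
```

Because every synthetic point lies on a segment between two real minority samples, the new points stay inside the minority class's convex hull.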
15 pages, 1414 KB  
Article
Gait Cycle Duration Analysis in Lower Limb Amputees Using an IoT-Based Photonic Wearable Sensor: A Preliminary Proof-of-Concept Study
by Bruna Alves, Alessandro Fantoni, José Pedro Matos, João Costa and Manuela Vieira
Sensors 2025, 25(23), 7148; https://doi.org/10.3390/s25237148 - 23 Nov 2025
Viewed by 564
Abstract
This study is a preliminary proof of concept demonstrating the feasibility of using a single-point LiDAR sensor for wearable gait analysis. It presents a low-cost wearable sensor system that integrates a single-point LiDAR module and IoT connectivity to assess Gait Cycle Duration (GCD) and gait symmetry in real time. The device is positioned on the medial side of the calf to detect the contralateral limb crossing, used as a proxy for mid-stance, enabling computation of the GCD for both limbs and derivation of the Symmetry Ratio and Symmetry Index. Testing was conducted during simulated walking at three cadences (slow, normal and fast). GCD estimated by the sensor was compared against visual annotation in Kinovea®, showing reasonable agreement: most cycle-wise relative differences were below approximately 13%, and both methods captured similar symmetry trends. The wearable system operated reliably across speeds, with an estimated materials cost under €100 and wireless data streaming to a cloud dashboard for real-time visualization. Although the validation is preliminary, limited to a single healthy participant and a video-based reference, the results support the feasibility of a photonic, IoT-based approach to portable and objective gait assessment, motivating future studies with larger clinical cohorts and gold-standard references to quantify accuracy, repeatability and clinical utility. Full article
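Given per-limb GCDs, the two symmetry measures the abstract names can be computed directly. The formulas below follow the commonly used definitions (an assumption here, since the paper's exact normalization is not shown in this listing):

```python
def symmetry_metrics(gcd_left: float, gcd_right: float) -> tuple[float, float]:
    """Compute gait symmetry measures from per-limb Gait Cycle Durations (s).

    Commonly used definitions (assumed, not taken from the paper):
      Symmetry Ratio SR = GCD_left / GCD_right            (1.0 = perfect symmetry)
      Symmetry Index SI = 100 * (L - R) / (0.5 * (L + R)) (0 % = perfect symmetry)
    """
    sr = gcd_left / gcd_right
    si = 100.0 * (gcd_left - gcd_right) / (0.5 * (gcd_left + gcd_right))
    return sr, si

# Example: a 1.10 s left cycle vs. a 1.00 s right cycle
sr, si = symmetry_metrics(1.10, 1.00)  # SR = 1.10, SI ≈ 9.52 %
```

A roughly 13% cycle-wise difference between sensor and video annotation, as reported above, would therefore shift the SI by a comparable number of percentage points, which is why both methods still capture the same symmetry trends.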

34 pages, 2182 KB  
Article
The B-Health Box: A Standards-Based Fog IoT Gateway for Interoperable Health and Wellbeing Data Collection
by Maria Marques, Vasco Delgado-Gomes, Fábio Januário, Carlos Lopes, Ricardo Jardim-Goncalves and Carlos Agostinho
Sensors 2025, 25(23), 7116; https://doi.org/10.3390/s25237116 - 21 Nov 2025
Viewed by 404
Abstract
In recent years, healthcare has been evolving to meet the needs of a growing and ageing population. To support better and more reliable care, a comprehensive and up-to-date Personal Health Record (PHR) is essential. Ideally, the PHR should contain all health-related information about an individual and be available for sharing with healthcare institutions. However, due to interoperability issues with medical and fitness devices, the PHR often contains only the same information as the patient's Electronic Health Record (EHR). This results in a lack of health-related information (e.g., physical activity, working patterns) essential for addressing medical conditions, supporting prescriptions, and following up on treatment. This paper introduces the B-Health IoT Box, a fog IoT computing framework for eHealth interoperability and data collection that enables seamless, secure integration of health and contextual data into interoperable health records. The system was deployed in real-world settings involving over 4500 users, successfully collecting and transmitting more than 1.5 million datasets. The validation showed that data were collected, harmonized, and properly stored in different eHealth platforms, enriching personal EHR data with mobile and wearable sensor data. The solution supports real-time and near-real-time data collection, fast prototyping, and secure cloud integration, offering a modular, standards-compliant gateway for digital health ecosystems. The health and health-related data are available in FHIR format, enabling interoperable eHealth ecosystems and more equal access to health and care services. Full article
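The harmonization step the abstract describes, mapping a raw wearable reading into the FHIR format for storage in an eHealth platform, can be sketched as a minimal FHIR R4 Observation. The field choices (vital-signs category, LOINC code 8867-4 for heart rate) are standard FHIR conventions, but this mapping is illustrative and not taken from the B-Health Box implementation:

```python
import json

def vital_to_fhir_observation(patient_id: str, bpm: float, timestamp: str) -> dict:
    """Map a wearable heart-rate reading to a minimal FHIR R4 Observation.

    Illustrative sketch only; the actual gateway's resource profiles and
    field population are not detailed in this listing.
    """
    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "vital-signs"}]}],
        "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4",
                             "display": "Heart rate"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": timestamp,
        "valueQuantity": {"value": bpm, "unit": "beats/minute",
                          "system": "http://unitsofmeasure.org", "code": "/min"},
    }

# Serialize for transmission from the fog gateway to an eHealth platform
obs = vital_to_fhir_observation("example-patient", 72, "2025-01-01T10:00:00Z")
payload = json.dumps(obs)
```

Emitting standard Observation resources like this is what lets heterogeneous device data land in different eHealth platforms without per-platform translation, which is the interoperability claim the paper makes.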
