Search Results (209)

Search Parameters:
Keywords = IoT device recognition

28 pages, 9414 KB  
Article
FCDNet: An Efficient and Cost-Effective Strawberry Disease Detection Model for Smart Farming Management
by Ruoyu Ouyang, Junying Jiang, Yujia Shao, Jialei Zhan and Xiaoyu Zhang
Plants 2026, 15(9), 1341; https://doi.org/10.3390/plants15091341 - 28 Apr 2026
Abstract
With the rapid development of precision agriculture and smart farming management, accurate crop disease detection has become a critical tool for optimizing agricultural resource allocation, controlling operational costs, and supporting scientific plant protection strategies. However, real-world field environments are often characterized by strong background interference, multiple concurrent diseases, and fine-grained lesion differences, posing significant challenges to existing detection methods in practical agricultural Internet of Things (IoT) applications. In this paper, we propose the Freq-Spatial Context Dynamic Network (FCDNet), an efficient and cost-effective detection model tailored for multi-category strawberry disease recognition in complex field management scenarios. The proposed model integrates a Freq-Spatial Feature Module (FSFM), a Context Guide Fusion Module (CGFM), and a Task Align Dynamic Detection Head (TADDH), enabling enhanced expression of high-frequency micro-lesions, adaptive filtering of field background noise, and spatial alignment of classification and regression tasks, while maintaining a lightweight architecture suitable for low-cost agricultural edge devices. Extensive experiments on the newly constructed Strawberry Disease Dataset-7 (S7DD) demonstrate that FCDNet consistently outperforms existing mainstream methods, achieving an F1-score of 91.0% and an mAP@0.5 of 94.6%. The model's architectural robustness and generalization capacity are further substantiated by evaluations on the diverse agricultural datasets PlantDoc and ALDOD. FCDNet thus offers a practical and cost-effective tool for real-time strawberry disease detection, directly supporting more accurate yield forecasting and risk management in smart agriculture systems.
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research—2nd Edition)

28 pages, 2054 KB  
Article
A Hybrid CNN–LSTM–Attention Framework for Intrusion Detection in Smart Mobility Networks
by Otuekong Ekpo, Valentina Casola, Alessandra De Benedictis, Philip Asuquo and Bright Agbor
Future Internet 2026, 18(4), 210; https://doi.org/10.3390/fi18040210 - 15 Apr 2026
Viewed by 547
Abstract
Smart cities are increasingly dependent on interconnected transportation systems; however, this connectivity exposes smart mobility networks to significant cybersecurity risks. Traditional Intrusion Detection Systems are ill-equipped for this environment, as they are designed for isolated systems or fixed network boundaries. Thus, they struggle to secure complex and heterogeneous smart mobility networks, where varied protocols and resource-constrained edge devices require more adaptive solutions. To address this limitation, we propose a novel hybrid deep learning framework that combines convolutional neural networks for spatial feature extraction, long short-term memory networks for temporal pattern recognition, and an attention mechanism for adaptive feature weighting, together forming a context-aware Intrusion Detection System. Our approach is evaluated across six benchmark datasets spanning vehicular networks, IoT ecosystems, cloud computing, and 5G environments (VeReMi Extension, CICIoV2024, Edge-IIoTset, UNSW-NB15, Car Hacking, and 5G-NIDD), a deliberately diverse selection that represents the heterogeneous nature of real-world smart mobility networks. Empirical evaluation using three different random seeds reveals that the proposed framework achieves detection accuracy exceeding 98% on each dataset, a mean F1 score of 98.94%, and an inference latency of just 4.96 ms per sample. Our results show that the proposed model achieves consistently high detection performance across six heterogeneous benchmark datasets, making it a potentially robust candidate for real-time intrusion detection in smart mobility systems.
(This article belongs to the Special Issue Cybersecurity in the Era of Smart Cities)
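The attention stage described in the abstract, which adaptively weights the features produced by the CNN and LSTM stages, can be illustrated as softmax pooling over per-timestep hidden states. This is a minimal sketch, not the authors' implementation: the scoring vector `w`, the dimensions, and the random data are invented for the example.

```python
import numpy as np

def attention_pool(hidden_states: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Collapse per-timestep features (T, d) into one context vector (d,)
    by softmax-weighting timesteps with a learned scoring vector w (d,)."""
    scores = hidden_states @ w                      # raw relevance per timestep
    scores -= scores.max()                          # numerical stability
    alphas = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights
    return alphas @ hidden_states                   # weighted sum over timesteps

rng = np.random.default_rng(0)
H = rng.normal(size=(20, 8))    # e.g. 20 packet-window timesteps, 8 hidden units
w = rng.normal(size=8)
context = attention_pool(H, w)
print(context.shape)            # (8,)
```

With an all-zero scoring vector the weights are uniform and the pooled vector reduces to the timestep mean, which is a handy sanity check when wiring up such a layer.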

43 pages, 1324 KB  
Article
Explainable Kolmogorov–Arnold Networks for Zero-Shot Human Activity Recognition on TinyML Edge Devices
by Ismail Lamaakal, Chaymae Yahyati, Yassine Maleh, Khalid El Makkaoui and Ibrahim Ouahbi
Mach. Learn. Knowl. Extr. 2026, 8(3), 55; https://doi.org/10.3390/make8030055 - 26 Feb 2026
Cited by 1 | Viewed by 859
Abstract
Human Activity Recognition (HAR) on wearable and IoT devices must jointly satisfy four requirements: high accuracy, the ability to recognize previously unseen activities, strict memory and latency constraints, and interpretable decisions. In this work, we address all four by introducing an explainable Kolmogorov–Arnold Network for Human Activity Recognition (TinyKAN-HAR) with a zero-shot learning (ZSL) module, designed specifically for TinyML edge devices. The proposed KAN replaces fixed activation functions with learnable one-dimensional spline operators applied after linear mixing, yielding compact yet expressive feature extractors whose internal nonlinearities can be directly visualized. On top of the KAN latent space, we learn a semantic projection and a cosine-based compatibility function that align sensor features with class-level semantic embeddings, enabling both pure and generalized zero-shot recognition of unseen activities. We evaluate our method on three benchmark datasets (UCI HAR, WISDM, PAMAP2) under subject-disjoint and zero-shot splits. TinyKAN-HAR consistently achieves over 97% macro-F1 on seen classes and over 96% accuracy on unseen activities, with a harmonic mean above 96% in the generalized ZSL setting, outperforming CNN-, LSTM-, and Transformer-based ZSL baselines. For explainability, we combine gradient-based attributions, SHAP-style global relevance scores, and inspection of the learned spline functions to provide sensor-level, temporal, and neuron-level insights into each prediction. After 8-bit quantization and TinyML-oriented optimizations, the deployed model occupies only 145 kB of flash and 26 kB of RAM, and achieves an average inference latency of 4.1 ms (about 0.32 mJ per window) on a Cortex-M4F-class microcontroller, while preserving accuracy within 0.2% of the full-precision model. These results demonstrate that explainable, zero-shot HAR with near state-of-the-art accuracy is feasible on severely resource-constrained TinyML edge devices.
(This article belongs to the Section Learning)
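The "learnable one-dimensional spline operators" at the heart of KAN-style layers can be sketched very simply: a piecewise-linear function whose values at fixed knot positions are the trainable parameters. This is a crude stand-in for the B-spline operators the paper describes, with the knot grid and initialization chosen arbitrarily for illustration.

```python
import numpy as np

class SplineActivation:
    """One learnable 1-D activation: a piecewise-linear spline whose values
    at fixed knot positions are the trainable parameters (a simplified
    stand-in for the B-spline operators used in KAN-style layers)."""
    def __init__(self, n_knots: int = 7, lo: float = -3.0, hi: float = 3.0):
        self.knots = np.linspace(lo, hi, n_knots)  # fixed grid
        self.values = self.knots.copy()            # initialize as identity
    def __call__(self, x: np.ndarray) -> np.ndarray:
        return np.interp(x, self.knots, self.values)

act = SplineActivation()
x = np.array([-2.0, 0.5, 2.0])
print(act(x))                           # identity at init: -2.0, 0.5, 2.0
act.values = np.maximum(act.knots, 0)   # "training" can reshape it, e.g. ReLU-like
print(act(x))                           # 0.0, 0.5, 2.0
```

Because the nonlinearity is just a curve over a fixed grid, it can be plotted directly, which is exactly the interpretability property the abstract highlights.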

39 pages, 10175 KB  
Article
EdgeML-Driven Real-Time Vehicle Tracking and Traffic Control for Traffic Management in Smart Cities
by Hyago V. L. B. Silva, Davi Rosim, Felipe A. P. de Figueiredo, Samuel B. Mafra, Ahmed S. Khwaja and Alagan Anpalagan
Appl. Sci. 2026, 16(5), 2216; https://doi.org/10.3390/app16052216 - 25 Feb 2026
Viewed by 610
Abstract
The escalating global rates of traffic accidents in urban areas and the growing demands of smart cities underscore the urgent need for advanced real-time monitoring solutions. This paper presents an EdgeML-based system for vehicle tracking that performs real-time speed and distance analysis and traffic violation detection. This is achieved by deploying a YOLOv8 object detection model on a Raspberry Pi 5 with a Coral USB Edge TPU accelerator. The system integrates computer vision and IoT technologies to enable real-time processing. It utilizes the Message Queuing Telemetry Transport (MQTT) protocol to allow scalable communication between distributed edge devices and a central MongoDB database, facilitating real-time storage and analysis of traffic data. A synthetic dataset generated with the Blender 3D modeling tool validates the system's accuracy, demonstrating average speed and distance measurement errors of ±2.11 km/h and ±0.58 m, respectively. These findings are further supported by preliminary practical experiments in a real-world environment, where speed estimation errors remained within 0–2 km/h and distance errors stayed below 0.11 m. Key innovations of this work include license plate recognition, speeding and collision detection, and context analysis using Google's Gemini-2.5-Flash API. A Streamlit dashboard provides real-time visualization of traffic metrics, violations, and aggregated data. A comparative evaluation of YOLOv5n, YOLOv8n, YOLOv11n, and YOLOv12n identifies YOLOv8n as the most suitable model for embedded deployment, achieving 91.07 ± 0.61% mAP@0.5 without quantization and 88.77 ± 3.31% mAP@0.5 with quantization, while maintaining real-time performance of 30–43 frames per second (FPS) on the Edge TPU. The system's modular architecture, low latency, and robust performance highlight its suitability for smart city applications, enhancing traffic safety and enabling data-driven urban mobility management.
(This article belongs to the Special Issue Smart Cities: AI-Enhanced Urban Living)
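The core speed-estimation arithmetic behind a tracker like this reduces to pixel displacement × ground-plane calibration ÷ elapsed time. The sketch below assumes a known meters-per-pixel factor; the paper's actual calibration pipeline is not described here, and the track data is synthetic.

```python
import numpy as np

def estimate_speed_kmh(track_px: np.ndarray, m_per_px: float, fps: float) -> float:
    """Estimate vehicle speed from a tracked centroid path.
    track_px: (N, 2) pixel coordinates of one detection across N frames.
    m_per_px: ground-plane calibration factor (assumed known from camera setup).
    """
    steps = np.diff(track_px, axis=0)                # per-frame pixel displacement
    dist_m = np.linalg.norm(steps, axis=1).sum() * m_per_px
    elapsed_s = (len(track_px) - 1) / fps
    return (dist_m / elapsed_s) * 3.6                # m/s -> km/h

# A car moving 10 px/frame at 0.05 m/px and 30 fps -> 15 m/s -> 54 km/h
track = np.stack([np.arange(0, 100, 10), np.zeros(10)], axis=1)
print(round(estimate_speed_kmh(track, 0.05, 30.0), 1))  # 54.0
```

Real deployments would smooth the track first (e.g. with a Kalman filter) since per-frame detection jitter inflates the summed path length.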

21 pages, 1714 KB  
Article
Lightweight Authentication and Dynamic Key Generation for IMU-Based Canine Motion Recognition IoT Systems
by Guanyu Chen, Hiroki Watanabe, Kohei Matsumura and Yoshinari Takegawa
Future Internet 2026, 18(2), 111; https://doi.org/10.3390/fi18020111 - 20 Feb 2026
Viewed by 399
Abstract
The integration of wearable inertial measurement units (IMUs) in animal welfare Internet of Things (IoT) systems has become crucial for monitoring animal behaviors and enhancing welfare management. However, the vulnerability of IoT devices to network and hardware attacks poses significant risks, potentially compromising data integrity, misleading caregivers, and negatively impacting animal welfare. Additionally, current animal monitoring solutions often rely on intrusive tagging methods, such as Radio Frequency Identification (RFID) or ear tagging, which may cause unnecessary stress and discomfort to animals. In this study, we propose a lightweight integrity- and provenance-oriented security stack that complements standard transport security, specifically tailored to IMU-based animal motion IoT systems. Our system utilizes a 1D convolutional neural network (CNN) model, achieving 88% accuracy for precise motion recognition, alongside a lightweight behavioral fingerprinting CNN model attaining 83% accuracy, serving as an auxiliary consistency signal to support collar–animal association and reduce misattribution risks. We introduce a dynamically generated pre-shared key (PSK) mechanism based on SHA-256 hashes derived from motion features and timestamps, further securing communication channels via an application-layer Hash-based Message Authentication Code (HMAC) combined with Message Queuing Telemetry Transport (MQTT)/Transport Layer Security (TLS) protocols. In our design, MQTT/TLS provides primary device authentication and channel protection, while behavioral fingerprinting and a per-window dynamic HMAC provide auxiliary provenance cues and tamper-evident integrity at the application layer. Experimental validation is conducted primarily via offline, dataset-driven experiments on a public canine IMU dataset; system-level overhead and sensor-to-edge latency are measured on a Raspberry Pi-based testbed by replaying windows through the MQTT/TLS pipeline. Overall, this work integrates motion recognition, behavioral fingerprinting, and dynamic key management into a cohesive, lightweight telemetry integrity/provenance stack and provides a foundation for future extensions to multi-species adaptive scenarios and federated learning applications.
(This article belongs to the Special Issue Secure Integration of IoT and Cloud Computing)
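The dynamic-PSK idea, hashing motion features and a timestamp with SHA-256 and then using the digest as an HMAC key for each telemetry window, can be sketched with Python's standard library. The field encoding below (the struct layout and JSON canonicalization) is illustrative, not the authors' exact scheme.

```python
import hashlib
import hmac
import json
import struct

def derive_psk(motion_features: list, timestamp: int) -> bytes:
    """Derive a per-window pre-shared key from motion features + timestamp,
    mirroring the SHA-256-based dynamic PSK idea (encoding is hypothetical)."""
    blob = struct.pack(f">{len(motion_features)}d", *motion_features)
    blob += struct.pack(">Q", timestamp)           # 64-bit big-endian timestamp
    return hashlib.sha256(blob).digest()           # 32-byte key

def tag_payload(psk: bytes, payload: dict) -> str:
    """HMAC-SHA256 tag over a canonicalized JSON payload."""
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(psk, msg, hashlib.sha256).hexdigest()

psk = derive_psk([0.12, -0.98, 9.81], timestamp=1735689600)
tag = tag_payload(psk, {"activity": "walking", "win": 42})
# A receiver holding the same features/timestamp derives the same PSK and
# verifies with hmac.compare_digest(tag, expected_tag).
```

Sorting the JSON keys matters: without a canonical serialization, sender and receiver could compute different tags for the same logical payload.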

22 pages, 763 KB  
Article
Comparative Evaluation of LSTM and 3D CNN Models in a Hybrid System for IoT-Enabled Sign-to-Text Translation in Deaf Communities
by Samar Mouti, Hani Al Chalabi, Mohammed Abushohada, Samer Rihawi and Sulafa Abdalla
Informatics 2026, 13(2), 27; https://doi.org/10.3390/informatics13020027 - 5 Feb 2026
Viewed by 949
Abstract
This paper presents a hybrid deep learning framework for real-time sign language recognition (SLR) tailored to Internet of Things (IoT)-enabled environments, enhancing accessibility for Deaf communities. The proposed system integrates a Long Short-Term Memory (LSTM) network for static gesture recognition and a 3D Convolutional Neural Network (3D CNN) for dynamic gesture recognition. Implemented on a Raspberry Pi device using MediaPipe for landmark extraction, the system supports low-latency, on-device inference suitable for resource-constrained edge computing. Experimental results demonstrate that the LSTM model achieves its highest stability and performance for static signs at 1000 training epochs, yielding an average F1-score of 0.938 and an accuracy of 86.67%. In contrast, at 2000 epochs, the model exhibits a catastrophic performance collapse (F1-score of 0.088) due to overfitting and weight instability, highlighting the necessity of careful training regulation. Despite this, the overall system achieves consistently high classification performance under controlled conditions. Meanwhile, the 3D CNN component maintains robust and consistent performance across all evaluated training phases (500–2000 epochs), achieving up to 99.6% accuracy on dynamic signs. When deployed on a Raspberry Pi platform, the system achieves real-time performance with a frame rate of 12–15 FPS and an average inference latency of approximately 65 ms per frame. The hybrid architecture effectively balances recognition accuracy with computational efficiency by routing static gestures to the LSTM and dynamic gestures to the 3D CNN. This work presents a detailed epoch-wise comparative analysis of model stability and computational feasibility, contributing a practical and scalable IoT-enabled solution for inclusive, real-time sign-to-text communication in intelligent environments.
(This article belongs to the Section Machine Learning)

20 pages, 2482 KB  
Article
Compression-Efficient Feature Extraction Method for a CMOS Image Sensor
by Keiichiro Kuroda, Yu Osuka, Ryoya Iegaki, Ryuichi Ujiie, Hideki Shima, Kota Yoshida and Shunsuke Okura
Sensors 2026, 26(3), 962; https://doi.org/10.3390/s26030962 - 2 Feb 2026
Viewed by 532
Abstract
To address the power constraints of the emerging Internet of Things (IoT) era, we propose a compression-efficient feature extraction method for a CMOS image sensor that can extract binary feature data. The sensor outputs six-channel binary feature data, comprising three channels of binarized luminance signals and three channels of horizontal edge signals, compressed via run-length encoding (RLE). This approach significantly reduces data transmission volume while maintaining image recognition accuracy. Simulation results obtained using a YOLOv7-based model designed for edge GPUs demonstrate that our approach achieves a large-object recognition accuracy (APL50) of 60.7% on the COCO dataset while reducing the data size by 99.2% relative to conventional 8-bit RGB color images. Furthermore, image classification results using MobileNetV3, tailored for mobile devices, on the Visual Wake Words (VWW) dataset show that our approach reduces data size by 99.0% relative to conventional 8-bit RGB color images and achieves an image classification accuracy of 89.4%. These results improve on the conventional trade-off between recognition accuracy and data size, thereby enabling low-power image recognition systems.
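Run-length encoding works especially well on binarized luminance and edge rows because long runs of identical bits dominate. A minimal sketch of the idea (the paper's on-sensor bitstream format is not specified here):

```python
def rle_encode(bits):
    """Run-length encode one binary row as (first_bit, run_lengths)."""
    if not bits:
        return (0, [])
    runs, current, count = [], bits[0], 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
    runs.append(count)
    return (bits[0], runs)

def rle_decode(first_bit, runs):
    """Invert rle_encode by alternating the bit value for each run."""
    out, bit = [], first_bit
    for r in runs:
        out.extend([bit] * r)
        bit ^= 1
    return out

row = [0] * 120 + [1] * 8 + [0] * 128      # a sparse edge row compresses well
first, runs = rle_encode(row)
assert rle_decode(first, runs) == row
print(len(row), "bits ->", len(runs), "run counts")   # 256 bits -> 3 run counts
```

The compression ratio collapses on noisy rows with many short runs, which is why the sensor binarizes and edge-filters before encoding.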

38 pages, 6181 KB  
Article
An AIoT-Based Framework for Automated English-Speaking Assessment: Architecture, Benchmarking, and Reliability Analysis of Open-Source ASR
by Paniti Netinant, Rerkchai Fooprateepsiri, Ajjima Rukhiran and Meennapa Rukhiran
Informatics 2026, 13(2), 19; https://doi.org/10.3390/informatics13020019 - 26 Jan 2026
Viewed by 1441
Abstract
The emergence of low-cost edge devices has enabled the integration of automatic speech recognition (ASR) into IoT environments, creating new opportunities for real-time language assessment. However, achieving reliable performance on resource-constrained hardware remains a significant challenge, especially in Artificial Intelligence of Things (AIoT) settings. This study presents an AIoT-based framework for automated English-speaking assessment that integrates architecture and system design, ASR benchmarking, and reliability analysis on edge devices. The proposed AIoT-oriented architecture incorporates a lightweight scoring framework capable of analyzing pronunciation, fluency, prosody, and CEFR-aligned speaking proficiency within an automated assessment system. Seven open-source ASR models (four Whisper variants: tiny, base, small, and medium; and three Vosk models) were systematically benchmarked in terms of recognition accuracy, inference latency, and computational efficiency. Experimental results indicate that Whisper-medium deployed on the Raspberry Pi 5 achieved the strongest overall performance, reducing inference latency by 42–48% compared with the Raspberry Pi 4 and attaining the lowest Word Error Rate (WER) of 6.8%. In contrast, smaller models such as Whisper-tiny, with a WER of 26.7%, exhibited two- to threefold higher scoring variability, demonstrating how recognition errors propagate into automated assessment reliability. System-level testing revealed that the Raspberry Pi 5 can sustain near real-time processing with approximately 58% CPU utilization and around 1.2 GB of memory, whereas the Raspberry Pi 4 frequently approaches practical operational limits under comparable workloads. Validation using real learner speech data (approximately 100 sessions) confirmed that the proposed system delivers accurate, portable, and privacy-preserving speaking assessment on low-power edge hardware. Overall, this work introduces a practical AIoT-based assessment framework, provides a comprehensive benchmark of open-source ASR models on edge platforms, and offers empirical insights into the trade-offs among recognition accuracy, inference latency, and scoring stability in edge-based ASR deployments.
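The Word Error Rate metric used to rank these ASR models is the word-level Levenshtein distance normalized by reference length. A self-contained sketch of the standard computation (not tied to any particular benchmarking toolkit):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed with word-level Levenshtein distance via dynamic programming."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                                  # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                                  # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,           # deletion
                          d[i][j - 1] + 1,           # insertion
                          d[i - 1][j - 1] + cost)    # match/substitution
    return d[-1][-1] / len(ref)

wer = word_error_rate("the cat sat on the mat", "the cat sit on mat")
# 1 substitution + 1 deletion over 6 reference words, i.e. about 0.333
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why per-model variability matters as much as the mean.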

24 pages, 3303 KB  
Article
Deep Learning-Based Human Activity Recognition Using Binary Ambient Sensors
by Qixuan Zhao, Alireza Ghasemi, Ahmed Saif and Lila Bossard
Electronics 2026, 15(2), 428; https://doi.org/10.3390/electronics15020428 - 19 Jan 2026
Viewed by 668
Abstract
Human Activity Recognition (HAR) has become crucial across various domains, including healthcare, smart homes, and security systems, owing to the proliferation of Internet of Things (IoT) devices. Several Machine Learning (ML) techniques, including Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, have been proposed for HAR. However, they still fall short in addressing the challenges of noisy features and insufficient data. This paper introduces a novel approach to tackle these two challenges, employing a Deep Learning (DL) Ensemble-Based Stacking Neural Network (SNN) combined with Generative Adversarial Networks (GANs) for HAR based on ambient sensors. Our proposed deep learning ensemble-based approach outperforms traditional ML techniques and enables robust and reliable recognition of activities in real-world scenarios. Comprehensive experiments conducted on six benchmark datasets from the CASAS smart home project demonstrate that the proposed stacking framework achieves superior accuracy on five of the six datasets when compared to literature-reported state-of-the-art baselines, with improvements ranging from 3.36 to 39.21 percentage points and an average gain of 13.28 percentage points. Although the baseline marginally outperforms the proposed models on one dataset (Aruba) in terms of accuracy, this exception does not alter the overall trend of consistent performance gains across diverse environments. Statistical significance of these improvements is confirmed using the Wilcoxon signed-rank test. Moreover, the ASGAN-augmented models consistently improve macro-F1 performance over the corresponding baselines on five of the six datasets, while achieving comparable performance on the Milan dataset. The proposed GAN-based method further improves activity recognition accuracy by a maximum of 4.77 percentage points and an average of 1.28 percentage points compared to baseline models. By combining ensemble-based DL with GAN-generated synthetic data, the approach delivers a more robust and effective solution for ambient HAR, addressing both accuracy and data-imbalance challenges in real-world smart home settings.
(This article belongs to the Section Computer Science & Engineering)
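Stacking combines the class-probability outputs of several base models into meta-features and trains a second-level learner on them. The sketch below uses a least-squares meta-learner on synthetic predictions as a stand-in for the paper's stacking neural network; all data, noise levels, and shapes are invented for illustration.

```python
import numpy as np

def stack_predictions(base_probs: list) -> np.ndarray:
    """Concatenate per-model class probabilities into meta-features:
    each of M base models contributes (N, C) -> meta matrix (N, M*C)."""
    return np.concatenate(base_probs, axis=1)

def fit_meta(meta_X: np.ndarray, y_onehot: np.ndarray) -> np.ndarray:
    """Least-squares meta-learner (a stand-in for the stacking NN)."""
    return np.linalg.lstsq(meta_X, y_onehot, rcond=None)[0]

rng = np.random.default_rng(1)
y = rng.integers(0, 3, size=200)                    # 3 synthetic activity classes
onehot = np.eye(3)[y]
noisy = lambda: np.clip(onehot + 0.2 * rng.normal(size=onehot.shape), 0, None)
base = [p / p.sum(axis=1, keepdims=True) for p in (noisy(), noisy(), noisy())]
W = fit_meta(stack_predictions(base), onehot)       # (9, 3) meta weights
pred = np.argmax(stack_predictions(base) @ W, axis=1)
print((pred == y).mean())                           # training-set agreement; high here
```

In practice the meta-learner must be fit on out-of-fold base predictions, otherwise it simply memorizes base-model overfitting.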

25 pages, 1392 KB  
Article
Barriers, Enablers, and Adoption Patterns of IoT and Wearable Devices in the Saudi Construction Industry: Survey Evidence
by Ibrahim Mosly
Buildings 2026, 16(2), 347; https://doi.org/10.3390/buildings16020347 - 14 Jan 2026
Viewed by 741
Abstract
The construction industry increasingly relies on the Internet of Things (IoT) and wearable technologies to enhance workplace safety. This research investigates the use of IoT and wearable technology among Saudi Arabian construction sector employees, analyzing implementation difficulties and the factors contributing to successful adoption. A structured questionnaire was distributed to 567 construction professionals across different roles and projects. Frequency analysis was used to study adoption patterns, chi-square tests to examine demographic factors, and principal component analysis for exploratory factor analysis to uncover latent adoption factors. The findings show that smart safety vests and helmets receive the highest level of recognition, while advanced monitoring systems, including fatigue and environmental sensors, remain underused. Group differences in device adoption were investigated with respect to years of experience, academic qualification, job role, and project budget. The factor analysis shows that adoption rates are determined by three main factors: (1) safety and operational effectiveness, (2) worker acceptance and support structures, and (3) technical and adoption barriers. The resulting data-driven framework can help policymakers and industry leaders accelerate construction safety digitalization efforts.
(This article belongs to the Special Issue Digital Technologies, AI and BIM in Construction)

20 pages, 2458 KB  
Article
Efficient and Personalized Federated Learning for Human Activity Recognition on Resource-Constrained Devices
by Abdul Haseeb, Ian Cleland, Chris Nugent and James McLaughlin
Appl. Sci. 2026, 16(2), 700; https://doi.org/10.3390/app16020700 - 9 Jan 2026
Viewed by 671
Abstract
Human Activity Recognition (HAR) using wearable sensors enables impactful applications in healthcare, fitness, and smart environments, but it also faces challenges related to data privacy, non-independent and identically distributed (non-IID) data, and limited computational resources on edge devices. This study proposes an efficient and personalized federated learning (PFL) framework for HAR that integrates federated training with model compression and per-client fine-tuning to address these challenges and support deployment on resource-constrained devices (RCDs). A convolutional neural network (CNN) is trained across multiple clients using FedAvg, followed by magnitude-based pruning and float16 quantization to reduce model size. While personalization and compression have previously been studied independently, their combined application for HAR remains underexplored in federated settings. Experimental results show that the global FedAvg model experiences performance degradation under non-IID conditions, which is further amplified after pruning, whereas per-client personalization substantially improves performance by adapting the model to individual user patterns. To ensure realistic evaluation, experiments are conducted using both random and temporal data splits, with the latter mitigating temporal leakage in time-series data. Personalization consistently improves performance under both settings, while quantization reduces the model footprint by approximately 50%, enabling deployment on wearable and IoT devices. Statistical analysis using paired significance tests confirms the robustness of the observed performance gains. Overall, this work demonstrates that combining lightweight model compression with personalization provides an effective and practical solution for federated HAR, balancing accuracy, efficiency, and deployment feasibility in real-world scenarios.
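The two compression steps named above, magnitude-based pruning and float16 quantization, can be sketched on a raw weight matrix with NumPy. This is a generic illustration of the techniques, not the paper's training pipeline; the shapes and sparsity level are arbitrary.

```python
import numpy as np

def prune_by_magnitude(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    # threshold = k-th smallest absolute value
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)   # one dense layer's weights
pruned = prune_by_magnitude(w, sparsity=0.5)       # ~50% zeros
quantized = pruned.astype(np.float16)              # float16 halves the footprint
print(quantized.nbytes / w.nbytes)                 # 0.5
```

The float16 cast alone accounts for the roughly 50% footprint reduction the abstract reports; exploiting the pruning-induced zeros additionally requires a sparse storage format or a runtime that skips zero weights.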

24 pages, 15172 KB  
Article
Real-Time Hand Gesture Recognition for IoT Devices Using FMCW mmWave Radar and Continuous Wavelet Transform
by Anna Ślesicka and Adam Kawalec
Electronics 2026, 15(2), 250; https://doi.org/10.3390/electronics15020250 - 6 Jan 2026
Viewed by 905
Abstract
This paper presents an intelligent framework for real-time hand gesture recognition using Frequency-Modulated Continuous-Wave (FMCW) mmWave radar and deep learning. Unlike traditional radar-based recognition methods that rely on Discrete Fourier Transform (DFT) signal representations and focus primarily on classifier optimization, the proposed system introduces a novel pre-processing stage based on the Continuous Wavelet Transform (CWT). The CWT enables the extraction of discriminative time–frequency features directly from raw radar signals, improving the interpretability and robustness of the learned representations. A lightweight convolutional neural network architecture is then designed to process the CWT maps for efficient classification on edge IoT devices. Experimental validation with data collected from 20 participants performing five standardized gestures demonstrates that the proposed framework achieves an accuracy of up to 99.87% using the Morlet wavelet, with strong generalization to unseen users (82–84% accuracy). The results confirm that the integration of CWT-based radar signal processing with deep learning forms a computationally efficient and accurate intelligent system for human–computer interaction in real-time IoT environments.
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications, 4th Edition)
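A continuous wavelet transform with a Morlet wavelet amounts to convolving the signal with scaled, modulated Gaussians and taking magnitudes. The sketch below is a minimal from-scratch version on a synthetic tone; the sampling rate, scales, and signal are invented, and a real pipeline would use a dedicated library and proper admissibility corrections.

```python
import numpy as np

def morlet_cwt(signal: np.ndarray, scales, w0: float = 6.0) -> np.ndarray:
    """Minimal continuous wavelet transform with a complex Morlet wavelet.
    Returns the magnitude scalogram, shape (len(scales), len(signal))."""
    n = len(signal)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        half = int(min(4 * s, (n - 1) // 2))   # keep the kernel shorter than signal
        t = np.arange(-half, half + 1)
        wavelet = np.exp(1j * w0 * t / s - 0.5 * (t / s) ** 2) / np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, np.conj(wavelet), mode="same"))
    return out

fs = 200.0                                  # slow-time sample rate (illustrative)
t = np.arange(0, 1, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t)            # 10 Hz micro-Doppler-like tone
scalogram = morlet_cwt(sig, scales=[2, 4, 8, 16, 32])
# The scale whose center frequency w0*fs/(2*pi*s) lies nearest 10 Hz
# (s = 16, about 11.9 Hz) carries the most energy.
print(scalogram.shape)                      # (5, 200)
```

The resulting 2-D scalogram is exactly the kind of time-frequency map a lightweight CNN can consume as an image.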

15 pages, 2297 KB  
Article
Cellulose-Based Sustainable Photo-Triboelectric Hybrid Nanogenerator for High-Performance Energy Harvesting and Smart Control Systems
by Zhen Tian, Jiacheng Liu, Chang Ding, Changyu Yang, Muqing Chen, Xiaoming Chen, Qiang Liu and Li Su
Nanoenergy Adv. 2026, 6(1), 1; https://doi.org/10.3390/nanoenergyadv6010001 - 23 Dec 2025
Cited by 1 | Viewed by 874
Abstract
With the advancement of Internet of Things (IoT) technology, flexible sensors with dual optoelectronic sensing modes have emerged as a research hotspot for next-generation smart devices, further driving the urgent demand for environmentally friendly functional materials. Here, we innovatively integrated wastepaper recycling technology [...] Read more.
With the advancement of Internet of Things (IoT) technology, flexible sensors with dual optoelectronic sensing modes have emerged as a research hotspot for next-generation smart devices, further driving the urgent demand for environmentally friendly functional materials. Here, we integrated wastepaper recycling technology with a polyethyleneimine (PEI)-assisted pulping strategy to develop a novel cellulose-based sustainable photo-triboelectric hybrid nanogenerator (PT-HNG). Based on the working mechanism of a freestanding triboelectric nanogenerator (TENG), the PT-HNG directly converts pressure stimuli into electrical energy and triboelectrification-induced electroluminescence (TIEL) signals. It achieves a luminescence brightness of 0.06 mW cm⁻² (3.84 cd m⁻²) and simultaneously delivers excellent electrical output (172.4 V, 6.36 μA, 43.7 nC) under sliding motion. More importantly, because it is compatible with existing industrial papermaking processes, the PT-HNG is scalable to large-scale production. By combining the PT-HNG with deep learning algorithms, a handwritten e-book system based on trajectory recognition was constructed, achieving a recognition accuracy of up to 95.5%. In addition, real-time intelligent control of PowerPoint presentations via the PT-HNG was demonstrated. This study provides a new pathway for converting wastepaper into intelligent products and a novel approach to the interdisciplinary integration of the circular economy and advanced electronic technology. Full article
(This article belongs to the Special Issue Hybrid Energy Storage Systems Based on Nanostructured Materials)
21 pages, 1543 KB  
Article
Understanding Patient Adherence Through Sensor Data: An Integrated Approach to Chronic Disease Management
by David Díaz-Jiménez, José L. López Ruiz, Juan F. Gaitán-Guerrero and Macarena Espinilla Estévez
Appl. Sci. 2025, 15(24), 13226; https://doi.org/10.3390/app152413226 - 17 Dec 2025
Cited by 1 | Viewed by 609
Abstract
Treatment adherence in chronic diseases is addressed here as a measurable construct that can be formally defined and computed from heterogeneous IoT data streams. The central contribution of this work lies in establishing a mathematical formulation of adherence that integrates both explicit treatment-related activities and behavioural indicators derived from sensor observations. The methodology specifies how raw data from wearables, BLE beacons, and ambient devices can be transformed into clinically meaningful activities through fuzzy logic, enabling the representation of uncertainty, temporal variability, and partial evidence. The framework also accommodates activity labels generated by machine learning models, providing a mechanism to adapt their outputs (originally expressed as probabilistic or categorical predictions) into fuzzy memberships suitable for adherence computation. By unifying sensor-driven activity extraction and model-based activity recognition under a common fuzzy representation, the formulation supports the calculation of adherence across multiple dimensions and contexts, moving from isolated measures to a comprehensive and interpretable profile of patient behaviour. This profile enables healthcare professionals and patients to better monitor progress, anticipate risks, and support long-term disease management. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in the IoT)
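The step of adapting probabilistic model outputs into fuzzy memberships, and then aggregating them into an adherence score, can be sketched as follows. The ramp thresholds, activity names, and importance weights are hypothetical illustrations, not the formulation used in the paper.

```python
def clip01(x):
    """Clamp a value into the [0, 1] membership range."""
    return max(0.0, min(1.0, x))

def prob_to_membership(p, low=0.2, high=0.8):
    """Map a classifier probability to a fuzzy membership degree:
    0 below `low`, 1 above `high`, linear ramp in between."""
    return clip01((p - low) / (high - low))

def adherence_score(evidence, weights):
    """Weighted average of per-activity membership degrees.
    evidence: {activity: membership in [0, 1]}
    weights:  {activity: clinical importance weight}"""
    num = sum(weights[a] * m for a, m in evidence.items())
    den = sum(weights[a] for a in evidence)
    return num / den
```

For example, with hypothetical memberships {"medication": 1.0, "exercise": 0.5} and weights {"medication": 2.0, "exercise": 1.0}, the score is (2.0 + 0.5) / 3.0 ≈ 0.83, a single interpretable adherence value built from partial evidence.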
27 pages, 3213 KB  
Article
Urban Sound Classification for IoT Devices in Smart City Infrastructures
by Simona Domazetovska Markovska, Viktor Gavriloski, Damjan Pecioski, Maja Anachkova, Dejan Shishkovski and Anastasija Angjusheva Ignjatovska
Urban Sci. 2025, 9(12), 517; https://doi.org/10.3390/urbansci9120517 - 5 Dec 2025
Cited by 1 | Viewed by 2681
Abstract
Urban noise is a major environmental concern that affects public health and quality of life, demanding new approaches beyond conventional noise level monitoring. This study investigates the development of an AI-driven Acoustic Event Detection and Classification (AED/C) system designed for urban sound recognition and its integration into smart city applications. Using the UrbanSound8K dataset, five acoustic parameters were mathematically modeled and applied to feature extraction: Mel-Frequency Cepstral Coefficients (MFCC), Mel Spectrogram (MS), Spectral Contrast (SC), Tonal Centroid (TC), and Chromagram (Ch). Their combinations were tested with three classical machine learning algorithms, Support Vector Machines (SVM), Random Forest (RF), and Naive Bayes (NB), and with a deep learning approach, Convolutional Neural Networks (CNN). A total of 52 models based on the three ML algorithms were analyzed, along with 4 CNN models. The MFCC-based CNN models showed the highest accuracy, reaching 92.68% on test data, an improvement of approximately 2 percentage points over prior CNN-based approaches reported in similar studies. In addition, the 56 trained models exceed the number presented in comparable research, ensuring more robust performance validation and statistical reliability. Real-time validation confirmed applicability to IoT devices, and a low-cost wireless sensor unit (WSU) with fog and cloud computing was developed for scalable data processing. The constructed WSU is at least four times cheaper than previously developed units while maintaining good performance, enabling broader deployment in smart city applications. The findings demonstrate the potential of AI-based AED/C systems for continuous, source-specific noise classification, supporting sustainable urban planning and improved environmental management in smart cities. Full article
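As a sketch of the MFCC feature extraction such pipelines build on, the following NumPy-only implementation computes framed power spectra, applies a triangular mel filterbank, and decorrelates the log-mel energies with a DCT-II. The frame length, hop size, and filter counts below are illustrative defaults, not the study's settings.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, fs, n_fft=512, hop=256, n_mels=26, n_coef=13):
    """Return an MFCC matrix of shape (frames, n_coef) for a 1-D signal."""
    win = np.hanning(n_fft)
    frames = np.array([signal[i:i + n_fft] * win
                       for i in range(0, len(signal) - n_fft + 1, hop)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Triangular mel filterbank between 0 Hz and the Nyquist frequency
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / fs).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        if c > l:
            fbank[i - 1, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:
            fbank[i - 1, c:r] = (r - np.arange(c, r)) / (r - c)
    log_mel = np.log(power @ fbank.T + 1e-10)
    # DCT-II decorrelates filterbank energies into cepstral coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_coef), (2 * n + 1) / (2 * n_mels)))
    return log_mel @ dct.T
```

A 1 s signal at 8 kHz with these defaults yields 30 frames of 13 coefficients each; a classifier (SVM, RF, NB, or CNN, as compared in the study) would then consume this matrix or summary statistics of it.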