Search Results (14,731)

Search Parameters:
Keywords = IoT

63 pages, 1743 KB  
Review
Smart Greenhouses in the Era of IoT and AI: A Comprehensive Review of AI Applications, Spectral Sensing, Multimodal Data Fusion, and Intelligent Systems
by Wiam El Ouaham, Mohamed Sadik, Abdelhadi Ennajih, Youssef Mouzouna, Houda Orchi and Samir Elouaham
Agriculture 2026, 16(7), 761; https://doi.org/10.3390/agriculture16070761 - 30 Mar 2026
Abstract
Smart greenhouses (SGHs) are controlled-environment agricultural systems that leverage digital technologies to optimize crop production and resource management. In particular, recent advances in artificial intelligence (AI) and the Internet of Things (IoT) have enabled the development of intelligent monitoring, predictive modeling, and automated decision-support systems within these environments. Against this backdrop, this comprehensive review synthesizes over 130 studies published between 2020 and 2025, with a focus on AI-driven monitoring, predictive modeling, and decision-support frameworks in SGH environments. More specifically, key application domains include microclimate regulation, crop growth assessment, disease and pest detection, yield estimation, and robotic harvesting. Moreover, particular attention is given to the interplay between AI methodologies and their data sources, encompassing IoT sensor networks, RGB, multispectral, and hyperspectral imaging, as well as multimodal data-fusion approaches. In addition, publicly available datasets, model architectures, and performance metrics are consolidated to support reproducibility and cross-study comparison. Nevertheless, persistent challenges are critically discussed, including data heterogeneity, limited model generalization across sites, interpretability constraints, and practical barriers to deployment. Finally, emerging research directions are identified, notably multimodal learning, edge-AI integration, standardized benchmarks, and scalable system architectures, with the overarching objective of guiding the development of robust, sustainable, and operationally feasible AI-enabled SGH systems. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
17 pages, 3863 KB  
Article
SemiWaferNet: Efficient Semi-Supervised Hybrid CNN–Transformer Models for Wafer Defect Classification and Segmentation
by Ruiwen Shi, Ruihan Liu, Zhiguo Zhou and Xuehua Zhou
Electronics 2026, 15(7), 1437; https://doi.org/10.3390/electronics15071437 (registering DOI) - 30 Mar 2026
Abstract
Wafer defect analysis is important for semiconductor manufacturing, but labeled data are limited, and class distributions are highly imbalanced. We present a semi-supervised framework with two lightweight hybrid CNN–Transformer models for wafer defect classification and segmentation. For classification, HybridCNN-ViT combines CNN-based local feature extraction with Transformer-based global context modeling, and adopts a three-stage progressive pseudo-labeling strategy to leverage unlabeled samples. The pseudo-label selection mechanism is systematically calibrated to improve pseudo-label reliability under limited labeled data. For segmentation, ConvoFormer-UNet integrates convolution-enhanced embeddings with Transformer blocks to balance boundary detail and global context. On the public WM-811K dataset, HybridCNN-ViT achieves 98.72% accuracy and 0.9985 macro-AUC under the semi-supervised setting for classification, while ConvoFormer-UNet reaches 99.19% IoU for segmentation with fewer parameters than several baselines. We also report efficiency on a single GPU to illustrate practical inference speed. Full article
(This article belongs to the Section Artificial Intelligence)
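The abstract above describes the three-stage progressive pseudo-labeling strategy only at a high level. A minimal sketch of the general idea, assuming a stage-wise relaxing confidence threshold (the threshold values, function name, and data layout here are illustrative assumptions, not the paper's calibrated settings):

```python
# Hypothetical sketch of progressive pseudo-labeling: over successive
# stages, unlabeled samples whose top predicted class probability exceeds
# a stage-specific confidence threshold are promoted to pseudo-labels.
# The three-stage schedule mirrors the abstract; the thresholds are
# illustrative, not the paper's calibrated values.

def select_pseudo_labels(probs, thresholds=(0.99, 0.95, 0.90)):
    """probs: list of per-sample class-probability dicts.
    Returns one list of (index, label) pairs per stage; a sample is
    promoted at the first stage whose threshold its confidence meets."""
    promoted = set()
    stages = []
    for tau in thresholds:  # thresholds relax stage by stage
        stage = []
        for i, p in enumerate(probs):
            if i in promoted:
                continue
            label, conf = max(p.items(), key=lambda kv: kv[1])
            if conf >= tau:
                stage.append((i, label))
                promoted.add(i)
        stages.append(stage)
    return stages
```

Low-confidence samples are simply never promoted, which is the usual way such schemes limit pseudo-label noise under class imbalance.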
7 pages, 2523 KB  
Proceeding Paper
AI- and IoT-Enabled Smart Dustbin for Automated Hazardous Electronic Waste Separation
by Min Xuan Soh, Hou Kit Mun, Hui Ziang Lee, Zhi Khai Ng and Yan Chai Hum
Eng. Proc. 2026, 134(1), 10; https://doi.org/10.3390/engproc2026134010 - 30 Mar 2026
Abstract
Electronic waste (e-waste) continues to increase globally, yet conventional bins cannot distinguish hazardous batteries and devices from recyclable metals. This article presents an AI- and IoT-enabled smart dustbin that automatically identifies and segregates general waste, metals, and electronic or battery-based hazards while providing real-time monitoring through a cloud-based dashboard. The system integrates inductive sensing, Time-of-Flight detection, an Espressif Systems Platform 32 (ESP32)-CAM module, and Google Gemini 1.5 Flash for image classification. The prototype achieved a waste segregation accuracy of 93.5% with a total cycle time of 4–6 s per item. The touch-free lid, swift mechanical actuation, and compact 59 × 59 × 100 cm footprint make the dustbin suitable for deployment in campuses, offices, and shopping malls. Dual ESP32 controllers, cloud connectivity through Message Queuing Telemetry Transport (MQTT), Firebase, and a Streamlit web interface enable automated alerts through Discord and email, demonstrating a scalable and energy-efficient approach to sustainable e-waste management. Full article
22 pages, 3000 KB  
Article
Edge-Based and Gateway-Based SmartSync Systems for Efficient LoRaWAN
by Mohammad Al mojamed
Electronics 2026, 15(7), 1426; https://doi.org/10.3390/electronics15071426 - 30 Mar 2026
Abstract
Low-Power Wide-Area Networks (LPWANs) like LoRaWAN enable IoT applications with low-power and long-range characteristics. While LoRaWAN class B mode is oriented toward server-initiated downlink communication, its uplink communication, especially in mobile scenarios, remains underexplored. This paper proposes two novel systems, Edge-based SmartSync and Gateway-based SmartSync, aiming to enhance uplink communication by leveraging class B synchronization. Edge-based SmartSync enables end devices to dynamically adjust the Spreading Factor (SF) based on real-time Received Signal Strength Indicator (RSSI) from beacons, achieving a significant improvement in terms of packet delivery and energy consumption. Gateway-based SmartSync ensures the fair distribution of end devices across a lower SF to further enhance the efficiency of the system. The beacon is reengineered to convey sensitivity limits to end devices. The systems were implemented in the OMNeT++ simulator over a 25 km² area with 100–1000 mobile devices and evaluated against a baseline using metrics like the Packet Delivery Ratio, collisions, and energy consumption. The obtained results show that both systems are capable of improving the delivery ratio by over 40% and reducing collisions by 80% compared to the baseline, with energy savings exceeding 35%. The proposed systems offer cost-effective, adaptable solutions, paving the way for more reliable IoT deployments. Full article
(This article belongs to the Section Networks)
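The RSSI-driven SF adjustment in Edge-based SmartSync can be sketched as picking the lowest (fastest, most energy-efficient) Spreading Factor whose receiver sensitivity still covers the measured beacon RSSI. This is an illustration only: the sensitivity figures are approximate values typical of LoRa at 125 kHz bandwidth, and the safety margin and decision rule are my assumptions, not the paper's algorithm.

```python
# Illustrative SmartSync-style SF selection: choose the lowest SF whose
# link budget (sensitivity + margin) tolerates the beacon RSSI.
# Sensitivity values are approximate LoRa 125 kHz figures (assumption).

LORA_SENSITIVITY_DBM = {  # SF -> approximate receiver sensitivity
    7: -123, 8: -126, 9: -129, 10: -132, 11: -134.5, 12: -137,
}

def select_sf(beacon_rssi_dbm, margin_db=10):
    """Return the lowest SF whose sensitivity covers the measured RSSI."""
    for sf in sorted(LORA_SENSITIVITY_DBM):
        if beacon_rssi_dbm >= LORA_SENSITIVITY_DBM[sf] + margin_db:
            return sf
    return 12  # fall back to the most robust (slowest) SF
```

A strong beacon thus maps to SF7 (shortest airtime, least energy), while a weak one falls back toward SF12, which matches the trade-off the abstract describes.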
31 pages, 8420 KB  
Article
RTOS-Integrated Time Synchronization for Self-Deployable Wireless Sensor Networks
by Sarah Goossens, Valentijn De Smedt, Lieven De Strycker and Liesbet Van der Perre
Sensors 2026, 26(7), 2121; https://doi.org/10.3390/s26072121 - 29 Mar 2026
Abstract
The deployment of Wireless Sensor Networks (WSNs) remains challenging and time-consuming due to the manual commissioning, configuration, and maintenance of resource-constrained Internet of Things (IoT) devices. Achieving precise network-wide time synchronization in such systems further increases this deployment complexity. This paper presents a novel Real-Time Operating System (RTOS)-integrated time synchronization method that distributes an absolute Coordinated Universal Time (UTC) reference across the network using a single Global Navigation Satellite System (GNSS)-enabled host. The method extends the semantics of the RTOS tick count by directly linking it to a global time reference. Consequently, sensor nodes obtain a notion of UTC time and can execute time-critical tasks at precisely defined moments without requiring a dedicated Real-Time Clock (RTC) or GNSS module on each sensor node. This design reduces both hardware cost and overall system complexity. Experimental results obtained on custom-developed hardware running FreeRTOS demonstrate a task synchronization error below ±30 μs between the GNSS reference and a sensor node operating at a clock frequency of 32 MHz. Such precise network-wide synchronization enables more efficient channel utilization, reduces power consumption, and improves the accuracy of both local and coordinated task execution across multiple devices in WSNs. It therefore serves as a key enabler for self-deployable WSNs. Full article
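The core idea of linking the RTOS tick count to an absolute UTC reference can be sketched in a few lines. This is an illustrative model only: the class name, the 1 kHz tick rate, and the drift-free assumption are mine; the paper's method additionally handles clock drift and the GNSS distribution mechanism.

```python
# Sketch of tick-to-UTC semantics: anchor the RTOS tick counter to one
# absolute UTC timestamp (obtained via the GNSS-enabled host), then
# convert between ticks and UTC using the configured tick rate.
# Clock drift correction is deliberately omitted in this sketch.

class TickClock:
    def __init__(self, tick_hz=1000):
        self.tick_hz = tick_hz
        self.anchor_tick = None
        self.anchor_utc_s = None  # seconds since the Unix epoch

    def synchronize(self, tick, utc_s):
        """Link the current tick count to an absolute UTC timestamp."""
        self.anchor_tick = tick
        self.anchor_utc_s = utc_s

    def utc_at(self, tick):
        """UTC time (seconds) corresponding to a given tick count."""
        return self.anchor_utc_s + (tick - self.anchor_tick) / self.tick_hz

    def tick_for(self, utc_s):
        """Tick count at which a task scheduled for utc_s should fire."""
        return self.anchor_tick + round((utc_s - self.anchor_utc_s) * self.tick_hz)
```

With such a mapping, a node can schedule a task for an absolute UTC instant by converting it to a local tick deadline, which is what removes the need for a per-node RTC or GNSS module.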
23 pages, 1863 KB  
Article
A Low-Power Piglet Crushing Detection System Based on Multi-Modal Fusion
by Hao Liu, Haopu Li, Yue Cao, Riliang Cao, Guangying Hu and Zhenyu Liu
Agriculture 2026, 16(7), 753; https://doi.org/10.3390/agriculture16070753 - 28 Mar 2026
Abstract
Accidental crushing by sows is the primary cause of pre-weaning piglet mortality in intensive production, often due to the spatiotemporal lag of manual inspection. While Internet of Things (IoT) solutions exist, they frequently face challenges such as vision occlusion, high hardware costs, and latency. To address these, this study developed a low-cost multi-modal edge computing system based on TinyML. Using an ESP32-S3 microcontroller, the system employs a “Motion-Gated Acoustic Detection” strategy, activating a lightweight 1D-CNN model to identify piglet screams only when an IMU detects high-risk postural transitions of the sow. Results show the quantized model (5.1 KB) achieves 95.56% accuracy and 2 ms inference latency. The total end-to-end response latency is within 179 ms, ensuring intervention within the early “golden rescue window.” The low-power design enables the battery life to cover the entire lactation period. Field tests demonstrated that the system intercepted identified crushing risks within the monitored cohort, supporting its potential for significantly improving piglet survival probability. This research overcomes the limitations of single-modal monitoring and provides a scalable, cost-effective engineering intervention for enhancing animal welfare and achieving intelligent, unattended supervision in precision livestock farming. Full article
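The "Motion-Gated Acoustic Detection" strategy amounts to running the power-hungry acoustic classifier only when the IMU reports a high-risk posture transition. A minimal sketch of that gating logic, with threshold values and function names that are illustrative assumptions rather than the deployed system's parameters:

```python
# Hypothetical sketch of motion-gated acoustic detection: the 1D-CNN
# scream classifier is invoked only while the IMU-derived risk score
# indicates a high-risk sow posture transition, saving energy the rest
# of the time. Thresholds here are illustrative, not the paper's.

def crushing_alert(imu_posture_risk, scream_prob_fn, audio_frame,
                   risk_threshold=0.8, scream_threshold=0.9):
    """Gate the acoustic model behind the IMU risk score.

    imu_posture_risk: IMU-derived probability of a high-risk transition.
    scream_prob_fn:   the acoustic classifier (called only when gated open).
    """
    if imu_posture_risk < risk_threshold:
        return False  # gate closed: acoustic model stays asleep
    return scream_prob_fn(audio_frame) >= scream_threshold
```

Because the expensive inference path runs only inside the gate, the duty cycle of the microcontroller stays low, which is consistent with the battery-life claim in the abstract.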
51 pages, 1921 KB  
Review
Federated Retrieval-Augmented Generation for Cybersecurity in Resource-Constrained IoT and Edge Environments: A Deployment-Oriented Scoping Review
by Hangyu He, Xin Yuan, Kai Wu and Wei Ni
Electronics 2026, 15(7), 1409; https://doi.org/10.3390/electronics15071409 - 27 Mar 2026
Abstract
Cybersecurity operations in IoT and edge environments require fast, evidence-grounded decisions under strict resource and trust constraints. While large language models can support triage and incident analysis, their parametric knowledge may be outdated and prone to hallucination. Retrieval-augmented generation (RAG) improves grounding by conditioning responses on retrieved evidence, but also introduces new risks such as knowledge-base poisoning, indirect prompt injection, and embedding leakage. Federated learning enables collaborative adaptation without centralizing sensitive data, motivating federated RAG (FedRAG) architectures for distributed cybersecurity deployments. This study presents a deployment-oriented scoping review of FedRAG for cybersecurity. The review follows PRISMA-ScR reporting guidance and synthesizes 82 studies published between 2020 and 2026, identified through keyword search and citation snowballing over OpenAlex, arXiv, and Crossref. We develop a taxonomy that clarifies the components of federated systems, deployment locations, trust boundaries, and protected assets. We further map the combined RAG+FL attack surface, summarize practical defenses and system patterns, and distill actionable guidance for secure, privacy-preserving, and efficient FedRAG deployment in real-world IoT and edge scenarios. Our synthesis highlights recurring trade-offs among robustness, privacy, latency, communication overhead, and maintainability, and identifies open research priorities in benchmark design, governance mechanisms, and cross-silo evaluation protocols for practical deployment. Full article
(This article belongs to the Special Issue Novel Approaches for Deep Learning in Cybersecurity)
22 pages, 28650 KB  
Article
Benchmarking MARL for UAV-Assisted Mobile Edge Computing Under Realistic 3D Collision Avoidance Navigation Constraints for Periodic Task Offloading
by Jiacheng Gu, Qingxu Meng, Qiurui Sun, Bing Zhu, Songnan Zhao and Shaode Yu
Technologies 2026, 14(4), 202; https://doi.org/10.3390/technologies14040202 - 27 Mar 2026
Abstract
The rapid growth of Internet of Things (IoT) and Industrial IoT applications has intensified the demand for low-latency and reliable computation support for deadline-constrained periodic real-time tasks. While unmanned aerial vehicles (UAVs) enabling mobile edge computing (MEC) can reduce latency by bringing compute closer to data sources, terrestrial MEC deployments often suffer from limited coverage and poor adaptability to spatially heterogeneous demand. In this paper, we study a multiple-UAV-assisted MEC system serving cluster-based IoT networks, where cluster heads generate deadline-constrained periodic tasks for offloading under strict deadlines. To ensure practical feasibility in dense urban environments, we benchmark UAV mobility using a realistic 3D collision avoidance navigation graph with shortest-path execution, rather than assuming unconstrained continuous UAV motion in free space. On top of this benchmark, we systematically compare three multi-agent reinforcement learning (MARL) paradigms for joint navigation and periodic task offloading: (i) continuous 3D control MARL that outputs motion commands directly; (ii) discrete graph-based MARL that selects collision-free shortest paths; and (iii) asynchronous macro-action MARL. Using a high-fidelity 3D digital twin of San Francisco, we evaluate these paradigms under a unified protocol in terms of offloading success, end-to-end latency, and energy consumption. The results reveal clear performance trade-offs induced by realistic 3D collision avoidance constraints and provide actionable insights for designing UAV-assisted MEC systems supporting periodic real-time task offloading. Full article
17 pages, 1748 KB  
Article
An Integrated AI Framework for Crop Recommendation
by Shadi Youssef, Kumari Gamage and Fouad Zablith
Horticulturae 2026, 12(4), 416; https://doi.org/10.3390/horticulturae12040416 - 27 Mar 2026
Abstract
Despite recent advances in artificial intelligence for agriculture, reliable crop recommendation remains constrained by limited access to soil diagnostics, insufficient integration of environmental context, and the absence of transparent, quantitative evaluation frameworks. This study addresses the research question: How can we integrate multiple indicators to generate accurate, explainable, and context-sensitive crop recommendations? To this end, we propose a multimodal decision-support framework that combines image-based soil texture classification with geospatial and climatic information. A convolutional neural network was trained on a curated dataset of 3250 soil images aggregated from four publicly available sources, covering four primary soil texture classes, alongside tabular soil and nutrient data. The model was evaluated using 5-fold stratified cross-validation, achieving an average classification accuracy of 99.30% (standard deviation ≈ 0.66), and was further validated on an independent hold-out test set to assess generalization performance. To enhance practical applicability, the framework incorporates elevation, rainfall, temperature, and major soil nutrients, and employs a large language model to generate user-oriented, interpretable justifications for each recommendation. Crop recommendations were quantitatively evaluated using a novel Agronomic Suitability Score (ASS), which measures alignment across soil compatibility, climatic suitability, seasonal alignment, and elevation tolerance. Across six geographically diverse case studies, the framework achieved mean ASS values ranging from 3.76 to 4.96, with five regions exceeding 4.45, demonstrating strong agronomic validity, robustness, and scalability. A Streamlit-based application further illustrates the system’s ability to deliver accessible, location-aware, and explainable agronomic guidance. The results indicate that the proposed approach constitutes a scalable decision-support tool with significant potential for sustainable agriculture and food security initiatives. Full article
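The abstract does not give the ASS formula, only its four components and a range of reported values consistent with a 1–5 scale. Purely as an illustration, an equal-weight aggregation of the four named components could look like the following; the equal weighting, scale, and function name are assumptions, not the paper's definition.

```python
# Illustrative-only aggregation of the four ASS components named in the
# abstract (soil compatibility, climatic suitability, seasonal
# alignment, elevation tolerance), each assumed to be scored on 1-5.
# The equal weighting is an assumption, not the paper's formula.

def agronomic_suitability_score(soil, climate, season, elevation):
    """Combine the four component alignments into one 1-5 score."""
    components = (soil, climate, season, elevation)
    for c in components:
        if not 1 <= c <= 5:
            raise ValueError("component scores are expected on a 1-5 scale")
    return sum(components) / len(components)
```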
12 pages, 300 KB  
Article
On Syntactical Simplification of Temporal Operators in Negation-Free Metric Temporal Logic
by Mathijs van Noort, Femke Ongenae and Pieter Bonte
Mathematics 2026, 14(7), 1124; https://doi.org/10.3390/math14071124 - 27 Mar 2026
Abstract
Temporal reasoning in dynamic, data-intensive environments increasingly demands expressive yet tractable logical frameworks. Traditional approaches often rely on negation to express absence or contradiction. In such contexts, negation-as-failure is commonly used to infer negative information from the lack of positive evidence. However, for open and distributed systems such as IoT networks and the Semantic Web, negation-as-failure semantics become unreliable due to incomplete and asynchronous data. This has led to growing interest in negation-free fragments of temporal rule-based systems, which preserve monotonicity and enable scalable reasoning. This paper investigates the expressive power of negation-free Metric Temporal Logic (MTL), a temporal logic framework designed for rule-based reasoning over time. We show that the “always” operators ⊞ and ⊟, often treated as syntactic sugar for combinations of other temporal constructs, can be eliminated using “once”, “since” and “until” operators. Remarkably, even the “once” operators can be removed, yielding a fragment based solely on “until” and “since”. These results challenge the assumption that negation is necessary for expressing universal temporal constraints and reveal a robust fragment capable of capturing both existential and invariant temporal patterns. Furthermore, the results induce a reduction in the syntax of MTL, which, in turn, can provide benefits for both theoretical study as well as for implementation efforts. Full article
(This article belongs to the Special Issue Formal Methods in Computer Science: Theory and Applications)
19 pages, 2222 KB  
Article
A Multimodal Hybrid Piezoelectric–Electromagnetic Vibration Energy Harvester Exploiting the First and Second Resonance Modes for Broadband Low-Frequency Applications
by Dejan Shishkovski, Zlatko Petreski, Simona Domazetovska Markovska, Maja Anachkova, Damjan Pecioski and Anastasija Angjusheva Ignjatovska
Sensors 2026, 26(7), 2092; https://doi.org/10.3390/s26072092 - 27 Mar 2026
Abstract
The increasing demand for autonomous wireless sensors in Internet of Things (IoT) applications has intensified research on vibration energy harvesting, particularly in the low-frequency range where ambient vibrations are most prevalent. However, most vibration energy harvesters operate efficiently only at a single resonance mode, resulting in a narrow operational bandwidth and pronounced performance degradation under frequency detuning. To address this limitation, this paper proposes a multimodal hybrid piezoelectric–electromagnetic vibration energy harvester that exploits both the first and second resonance modes of a cantilever-based structure to achieve broadband low-frequency operation. The design is guided by the complementary utilization of strain-dominated and velocity-dominated regions associated with different vibration modes. Numerical modeling and finite element simulations are employed to investigate the influence of mass distribution, deformation characteristics, and relative velocity on energy conversion performance. A secondary cantilever carrying the electromagnetic coil is introduced to enhance the relative motion between the coil and the magnetic field, thereby extending the effective operational bandwidth. The experimental results demonstrate increased harvested power, improved energy conversion efficiency, and a significantly broadened effective frequency range compared to conventional single-mode piezoelectric and electromagnetic energy harvesters. Full article
(This article belongs to the Section Electronic Sensors)
63 pages, 32785 KB  
Article
Cost-Effective TinyML-Ready Design and Field Deployment of a Solar-Powered Environmental Monitoring Data Collector Using LTE-M Communication
by Emanuel-Crăciun Trînc, Valentin Niţă, Cristina Stolojescu-Crisan, Cosmin Ancuţi, Răzvan Marius Mihai and Cristian Pațachia Sultănoiu
Appl. Sci. 2026, 16(7), 3237; https://doi.org/10.3390/app16073237 - 27 Mar 2026
Abstract
Environmental monitoring is essential for smart agriculture, renewable energy assessment, and climate-aware farm management. However, deploying autonomous sensing platforms in rural environments remains challenging because of energy constraints, communication reliability, and real-time processing requirements. This paper presents a modular, solar-powered environmental monitoring platform integrating LTE-M communication and TinyML-enabled edge sensing. The proposed system adopts a dual-microcontroller architecture that combines an Arduino Nano 33 BLE for real-time sensor acquisition and edge processing with an Arduino MKR NB 1500 dedicated to low-power wide-area communication. The platform integrates temperature, humidity, atmospheric pressure, rainfall, wind, and light sensors within a scalable framework. Two monitoring stations were deployed in rural regions of Romania to evaluate communication robustness, sensing stability, and energy autonomy. Field results demonstrated reliable LTE-M connectivity (4306 received signal strength indicator [RSSI] samples; mean 75.51 dBm) and strong agreement with a regional weather station, with mean deviations of −0.71 °C (temperature), 4.98% (humidity), and a stable pressure offset of 9.58 hPa attributable to altitude differences. Despite a total system cost of €315, the platform achieved measurement performance comparable to that of professional meteorological stations while maintaining long-term solar-powered operation. The proposed architecture provides a scalable and cost-effective solution for distributed smart agriculture and environmental monitoring applications. Full article
(This article belongs to the Special Issue The Internet of Things (IoT) and Its Application in Monitoring)
23 pages, 3226 KB  
Article
A Detection and Recognition Method for Interference Signals Based on Radio Frequency Fingerprint Characteristics
by Yang Guo and Yuan Gao
Electronics 2026, 15(7), 1393; https://doi.org/10.3390/electronics15071393 - 27 Mar 2026
Abstract
With the advancement of 5G and the Internet of Things (IoT), traditional upper-layer authentication mechanisms are vulnerable to attacks, while quantum computing threatens cryptographic security. Radio frequency fingerprint identification (RFFI) offers a physical-layer solution by exploiting inherent hardware imperfections. However, in complex electromagnetic environments, narrowband and especially agile interference (characterized by low power and narrow bandwidth) can severely distort fingerprint features, rendering conventional detection algorithms ineffective. To address this challenge, this paper proposes a novel interference detection framework tailored for Orthogonal Frequency Division Multiplexing (OFDM) systems. First, a signal transmission model incorporating non-ideal hardware characteristics (e.g., DC offset, I/Q imbalance) is established. Based on this model, we design an agile interference detection algorithm comprising two key components: (1) a time-series anomaly detection method that fuses multi-domain expert features (fractal, complexity, and high-order statistics) with machine learning, demonstrating superior performance over the traditional CME algorithm under narrowband interference, and (2) a progressive search segmental detection algorithm that, combined with reconstruction error features extracted by an autoencoder, effectively identifies low-power agile interference by appropriately trading off computation time for detection sensitivity. Finally, an OFDM simulation platform is developed to validate the proposed methods. The results show that the segmental detection algorithm achieves reliable detection at a jammer-to-signal ratio (JSR) as low as −10 dB, significantly outperforming existing approaches and enhancing the robustness of RFFI in challenging interference environments. Full article
20 pages, 4332 KB  
Article
Design and Pilot Evaluation of an IoT-Based Blood Pressure Monitoring System for Rabbits
by Carlos Exequiel Garay, Gonzalo Nicolás Mansilla, Rossana Elena Madrid, Agustina González Colombres and Susana Josefina Jerez
Bioengineering 2026, 13(4), 384; https://doi.org/10.3390/bioengineering13040384 - 26 Mar 2026
Abstract
Telemedicine, driven by the Internet of Things (IoT) and wireless connectivity, is essential for managing cardiovascular diseases, where hypertension remains the primary risk factor. In preclinical research, rabbits are superior biological models compared to rodents due to their human-like lipid metabolism. However, continuous blood pressure monitoring in this species remains challenging. The gold-standard technique (direct carotid catheterization) requires terminal procedures, and indirect methods (Doppler, oscillometric) show limited agreement with direct measurements. Furthermore, commercially available implantable telemetry platforms, while enabling real-time monitoring in freely moving animals, require costly surgical implantation, specialized proprietary hardware, and post-operative recovery periods that may confound early hemodynamic data. To address these limitations, this study presents a low-cost, customizable, and minimally invasive monitoring system utilizing a pressure transducer in the central auricular artery. The device integrates an ESP32 microcontroller with IoT technology for digital signal processing and seamless wireless data transmission to the ThingSpeak cloud platform. Unlike implantable telemetry, the proposed approach avoids surgical implantation and its associated costs and recovery time, while still enabling continuous, real-time hemodynamic tracking throughout the experimental period. A pilot evaluation against the BIOPAC MP100 reference (carotid artery) demonstrated relative errors of 1.60% for mean arterial pressure, 8.58% for systolic blood pressure, and 2.43% for diastolic blood pressure. By reducing invasiveness and enhancing remote data accessibility, this system provides a promising framework for the preclinical evaluation of antihypertensive agents and cardiovascular mechanisms, bridging the gap between edge computing and remote clinical diagnostics. Full article
(This article belongs to the Section Biosignal Processing)
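The abstract above reports agreement with the BIOPAC MP100 reference as percent relative errors (1.60% MAP, 8.58% SBP, 2.43% DBP). As a minimal sketch of how such figures are computed, the snippet below compares hypothetical device readings against hypothetical reference readings; all numeric values are assumed examples, not data from the study.

```python
def relative_error(measured: float, reference: float) -> float:
    """Percent relative error: |measured - reference| / reference * 100."""
    return abs(measured - reference) / reference * 100.0

# Assumed example readings in mmHg: (device, reference). Illustrative only.
readings = {
    "MAP": (91.5, 90.1),    # mean arterial pressure
    "SBP": (119.0, 109.6),  # systolic blood pressure
    "DBP": (78.0, 76.1),    # diastolic blood pressure
}

for name, (device, reference) in readings.items():
    print(f"{name}: {relative_error(device, reference):.2f}% relative error")
```

In practice such errors would be averaged over many paired samples (e.g. via Bland–Altman analysis) rather than single readings.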

24 pages, 518 KB  
Article
A Secure Authentication Scheme for Hierarchical Federated Learning with Anomaly Detection in IoT-Based Smart Agriculture
by Jihye Choi and Youngho Park
Appl. Sci. 2026, 16(7), 3211; https://doi.org/10.3390/app16073211 - 26 Mar 2026
Abstract
Unmanned Aerial Vehicle (UAV)-assisted hierarchical federated learning (HFL) has emerged as a promising architecture for Internet of Things (IoT)-based smart agriculture, enabling scalable model training over large and sparse farmlands. In this setting, UAVs act as mobile edge servers, aggregating local updates from distributed agricultural IoT devices and relaying them to the cloud server. While HFL improves scalability and reduces communication overhead, it still faces critical security threats due to its reliance on public wireless channels and the vulnerability of model aggregation to malicious updates. In this paper, we propose a secure authentication scheme that integrates anomaly detection with elliptic curve cryptography (ECC)-based mutual authentication to protect both the communication and training phases. In the proposed scheme, UAVs authenticate participating clients before receiving their local models, then perform anomaly detection to identify and exclude malicious participants. If a client is found to be malicious, its identity credentials are revoked and broadcast by the cloud server to prevent future participation. The security of the proposed scheme is formally verified using Burrows–Abadi–Needham (BAN) logic, the Real-or-Random (RoR) model, and the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool, along with informal security analysis. The performance evaluation includes comparisons of security features, computation cost, and communication cost with other related schemes, and an experimental assessment of anomaly detection performance. The results demonstrate that our scheme provides strong security guarantees, low overhead, and effective malicious client detection, making it well suited for UAV-assisted HFL in smart agriculture. Full article
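The abstract describes UAVs that screen clients' local updates with anomaly detection before aggregation and revoke clients flagged as malicious. The paper's actual detector and revocation protocol are not given here; the sketch below is an illustrative stand-in, assuming a simple z-score test on update norms and flattened updates represented as lists of floats. `screen_and_aggregate` and `z_threshold` are hypothetical names introduced for this example.

```python
import statistics

def screen_and_aggregate(updates, z_threshold=2.0):
    """Screen local model updates, then average the ones that pass.

    updates: dict mapping client_id -> flattened model update (list of floats).
    Returns (aggregated_update, revoked_ids), where revoked_ids are clients
    whose update norm deviates from the mean by more than z_threshold
    population standard deviations.
    """
    # L2 norm of each client's update.
    norms = {cid: sum(x * x for x in u) ** 0.5 for cid, u in updates.items()}
    mean = statistics.mean(norms.values())
    stdev = statistics.pstdev(norms.values()) or 1e-12  # avoid divide-by-zero

    # Flag outliers for revocation; aggregate only the remaining clients.
    revoked = {cid for cid, n in norms.items()
               if abs(n - mean) / stdev > z_threshold}
    honest = [u for cid, u in updates.items() if cid not in revoked]
    dim = len(next(iter(updates.values())))
    aggregated = [sum(u[i] for u in honest) / len(honest) for i in range(dim)]
    return aggregated, revoked
```

In the hierarchical setting, each UAV would run this screening over its own clients and forward only the filtered aggregate (plus the revoked IDs, for the cloud server to broadcast) upward; the ECC-based mutual authentication step described in the abstract would precede this entirely.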
