Search Results (2,767)

Search Parameters:
Keywords = low-latency

14 pages, 3078 KB  
Article
Heterogeneous-Tolerant Ripple Suppression for Parallel PV Distributed Converters: A Communication-Free Randomized Phase Shifting Method Based on Enhanced PSO
by Qing Fu, Yuan Jing, Benfei Wang and Muhammad Amjad
Electronics 2026, 15(9), 1815; https://doi.org/10.3390/electronics15091815 - 24 Apr 2026
Abstract
Conventional fixed phase-shift strategies for parallel PV converters fail to minimize output ripple under heterogeneous input conditions, while communication-based synchronous methods incur high costs and reliability risks. Furthermore, standard global optimization algorithms like conventional Particle Swarm Optimization (PSO) suffer from slow convergence, hindering real-time application. To address these limitations, this paper proposes a communication-free distributed ripple suppression method based on an enhanced PSO with randomized phase shifting. Unlike traditional approaches, our method enables autonomous convergence without inter-unit communication. Crucially, a randomized pre-scanning mechanism narrows the search space, accelerating convergence significantly. Simulation results demonstrate that the proposed method reaches a steady state in merely 5 ms, which is 50% faster than conventional PSO (~10 ms) and eliminates communication latency. Under severe heterogeneous conditions, the technique reduces output voltage ripple to 0.66 V (a 53% reduction) compared to the unoptimized 1.21 V, vastly outperforming fixed interleaving strategies that show negligible improvement. The approach also ensures robust stability during load steps and plug-and-play operations, offering a superior low-cost and high-speed solution for distributed PV systems.
(This article belongs to the Special Issue AI Applications for Smart Grid: 2nd Edition)
16 pages, 3821 KB  
Article
Independent Motion Segmentation Based on Pure Event Data
by Wenjun Yin, Dongdong Teng and Lilin Liu
Sensors 2026, 26(9), 2620; https://doi.org/10.3390/s26092620 - 23 Apr 2026
Abstract
Event cameras are bio-inspired vision sensors offering low latency, low power consumption, and high dynamic range, capturing motion with microsecond-level precision via a per-event triggering mechanism. Despite these advantages, the inherent sparsity and lack of color in event data hinder direct analysis, necessitating advanced deep learning approaches. To achieve low-latency and high-precision motion segmentation for indoor robotic applications, this paper introduces a dual-branch decoupled CNN framework. Specifically, Principal Component Analysis (PCA) is utilized to project 3D event point clouds into 2D motion trend maps, capturing local motion priors while suppressing ambiguity in structured environments. Concurrently, an Event Leaky Integration (ELI) model, inspired by biological membrane potentials, is designed to enhance the structural representation of sparse events. Within this framework, separate branches perform motion validation and shape extraction, respectively, and are fused via a Spatial Gated Fusion (SGF) module to suppress static background interference. Experiments demonstrate that, with an input window of only 10 ms, the proposed method achieves a 77% average mIoU across five indoor test scenarios from the EV-IMO dataset with an inference latency of 10 ms per frame. Compared to state-of-the-art methods like MSRNN and GCN, which require 30–300 ms event slices, our framework achieves a favorable trade-off between computational efficiency and segmentation accuracy, maintaining competitive performance under ultra-short time windows for indoor event-based motion processing.
(This article belongs to the Special Issue Event-Based Vision Technology: From Imaging to Perception and Control)
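The PCA projection step described in the abstract above (3D event point clouds flattened into 2D motion-trend maps) has a standard generic form; the sketch below is not the authors' code, and `pca_project_2d` is a name assumed here for illustration:

```python
import numpy as np

def pca_project_2d(points):
    """Project an N x 3 point cloud onto its top-2 principal components."""
    centered = points - points.mean(axis=0)
    # Eigen-decomposition of the 3x3 covariance matrix
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order; take the two largest directions
    top2 = eigvecs[:, ::-1][:, :2]
    return centered @ top2

rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3)) * np.array([5.0, 2.0, 0.1])  # elongated toy cloud
flat = pca_project_2d(pts)
print(flat.shape)  # (500, 2)
```

The projection preserves the dominant spread of the cloud: the first output axis carries more variance than the second, which is what makes such maps usable as motion-trend summaries.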
24 pages, 1331 KB  
Article
Edge-Deployable Stereo Vision for Fish Biomass Estimation via Lightweight YOLOv11n-Pose and Dynamic Geometry
by Cheuk Yiu Cheng and Condon Lau
Appl. Sci. 2026, 16(9), 4125; https://doi.org/10.3390/app16094125 - 23 Apr 2026
Abstract
Non-invasive, real-time biomass estimation is critical for smart aquaculture, yet high computational latency and the cost of specialized optical sensors remain significant bottlenecks. This study proposes an ultra-low-cost, edge-deployable stereo-vision framework utilizing a dual-webcam architecture synchronized with a lightweight YOLOv11n-pose model. To address the spatial uncertainties in non-rigid fish locomotion, we integrated advanced spatial loss functions to achieve precise anatomical keypoint extraction. These coordinates are processed through a three-point Bézier curve interpolation and a mathematically derived Dynamic Shape Factor (K) to correct for optical refraction and morphological variations. As a proof-of-concept, the proposed system was validated on a live multi-species cohort (N = 10), achieving a Mean Absolute Percentage Error (MAPE) of 8.64% and an R² of 0.92 under strict Leave-One-Out Cross-Validation (LOOCV), drastically outperforming traditional naive volumetric baselines (MAPE > 54%). Requiring only 6.7 GFLOPs and 5.5 MB of memory, the model achieves 111.6 FPS. These results demonstrate the feasibility of highly efficient, cost-effective AI solutions for precision aquaculture while clearly defining the validity boundaries and statistical constraints for future large-scale deployment.
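The "three-point Bézier curve interpolation" named in the abstract is presumably the standard quadratic Bézier, B(t) = (1−t)²P₀ + 2(1−t)t·P₁ + t²P₂; a minimal sketch under that assumption (the keypoint names are chosen here for illustration, not taken from the paper):

```python
def quadratic_bezier(p0, p1, p2, t):
    """Point on a three-point (quadratic) Bézier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(u * u * a + 2.0 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))

# Sampling the curve approximates a curved midline between three keypoints
head, mid, tail = (0.0, 0.0), (1.0, 1.0), (2.0, 0.0)
print(quadratic_bezier(head, mid, tail, 0.5))  # (1.0, 0.5)
```

Integrating arc length along such a curve gives a body-length estimate that tolerates bent postures, which is the usual motivation for curve interpolation over a straight head-to-tail distance.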
29 pages, 8989 KB  
Article
Real-Field-Ready and Digitally Sustainable Plant Disease Recognition via Federated Multimodal Edge Learning and Few-Shot Domain Adaptation
by Muhammad Irfan Sharif, Yong Zhong, Muhammad Zaheer Sajid and Francesco Marinello
Agriculture 2026, 16(9), 918; https://doi.org/10.3390/agriculture16090918 - 22 Apr 2026
Abstract
Plant disease diagnosis in real-world agricultural environments is challenged by data scarcity, domain shift, privacy constraints, and limited edge-device resources. This paper proposes FMEL-FSDA, a Federated Multimodal Edge Learning framework with Few-Shot Domain Adaptation for robust field-based plant disease recognition. The framework integrates attention-based RGB–text feature fusion, privacy-preserving federated learning, rapid few-shot personalization, and uncertainty-aware inference within an edge-efficient architecture. Federated training enables collaborative learning across distributed farms without sharing raw data, while few-shot adaptation allows fast deployment to new regions using only 1–10 labeled samples per class. Experiments on the PlantWild in-the-wild dataset show that FMEL-FSDA outperforms centralized, federated, and few-shot baselines, achieving 93.78% accuracy, 93.33% F1-score, and 0.97 AUC. The model maintains strong performance under privacy mechanisms such as gradient perturbation and secure aggregation, reduces communication overhead, and supports low-latency edge inference. Uncertainty estimation and Grad-CAM-based explainability further enhance reliability by identifying low-confidence cases and highlighting disease-relevant regions. Overall, FMEL-FSDA offers a scalable, privacy-aware, and field-ready solution for intelligent plant disease diagnosis in precision agriculture.
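The federated aggregation step the abstract relies on is, in its most common form, FedAvg: a data-size-weighted average of client model weights. A minimal sketch of that standard step (this is the textbook formulation, not necessarily the exact aggregation used in FMEL-FSDA):

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of flat client weight vectors (standard FedAvg).

    client_weights: list of equal-length lists of floats, one per client.
    client_sizes:   number of local training samples at each client.
    """
    total = sum(client_sizes)
    n = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n)]

# Two clients with equal data contribute equally to the global model
print(fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 1]))  # [2.0, 3.0]
```

Only these aggregated weights leave the farm, which is how federated training avoids sharing raw field images.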
24 pages, 1534 KB  
Article
Hybrid Energy-Aware Ranking and Optimization
by Zhiling Zeng, Yuxuan Jiang and Na Niu
Future Internet 2026, 18(5), 226; https://doi.org/10.3390/fi18050226 - 22 Apr 2026
Abstract
The growing number of delay-sensitive application tasks requires heterogeneous edge clusters to maintain low online latency and energy efficiency without relying on rigid scheduling policies. To address this, we propose HERO (Hybrid Energy-aware Ranking and Optimization), a lightweight collaborative scheduling framework. HERO utilizes a perturbation-based communication-aware multi-layer perceptron (MLP) predictor to quantify global time sensitivity and discover latent time slack in non-critical paths. A hybrid budget mechanism then converts this slack into customized DVFS decisions based on each task's inherent computational load and topological criticality, optimizing energy consumption. A communication-aware hole-filling strategy dynamically recovers sporadic idle times fragmented by heterogeneous communication overhead. Extensive simulations were conducted across varying DAG depths, parallelism levels, and system utilizations. Compared to state-of-the-art algorithms (NSGA-II, SSA, TOM, and DPMC), HERO reduced the completion time by an average of 10.89% under high-density topologies, and achieved up to 4.04% energy savings across varying task depths.
15 pages, 5996 KB  
Article
Real-Time Analysis and Intervention of Classroom Behavior Using Multi-Modal Fusion and Spatiotemporal Context
by Kai Zhao and Guiling Sun
Appl. Sci. 2026, 16(9), 4069; https://doi.org/10.3390/app16094069 - 22 Apr 2026
Abstract
Analyzing classroom engagement is essential for developing effective smart learning environments. Conventional methods often face challenges in achieving reliable identification of individual students, accurately recognizing their behavioral states, and providing timely support. This paper presents a multimodal sensing and supportive feedback system built upon an end–edge–cloud collaborative architecture. By integrating RFID-based seat association, fingerprint verification, and computer vision-based activity analysis, the system establishes a reliable link between student identity and observed activities. Key computational tasks, including activity recognition, spatiotemporal context matching, and rule-based assessment, are executed locally on edge nodes. This enables low-latency, privacy-conscious feedback delivered via Bluetooth, effectively avoiding delays associated with cloud processing. Experimental results indicate that the proposed system significantly enhances both activity recognition accuracy and identity–behavior association reliability in typical classroom scenarios while substantially reducing the average feedback latency compared to traditional approaches.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
28 pages, 9778 KB  
Article
Spatio-Temporal Data Model for Early Wildfire Detection
by Damir Krstinić, Jakov Bejo, Toma Sikora and Marin Bugarić
Fire 2026, 9(4), 175; https://doi.org/10.3390/fire9040175 - 21 Apr 2026
Abstract
Early detection is a key tool for mitigating the devastating effects of wildfires. Single-frame detection methods that do not consider inter-frame dependencies often fail to detect smoke plumes at the earliest stage and at greater distances, or produce excessive false alarms. Biological vision is particularly sensitive to motion cues, and this translates well to automated systems. Recent temporal-memory approaches have demonstrated improved performance over purely spatial methods, but typically rely on complex, computationally heavy multi-stage architectures. This study investigates the possibility of encoding temporal and contextual information into additional image channels as a basis for compiling data models with increased information content. Seven distinct data models were proposed, and corresponding datasets were generated to train standard YOLO architectures without modifications to the network structure. The datasets were compiled from real wildfire footage collected from an operational wildfire surveillance system in Croatia, comprising 333 annotated sequences of real fires recorded between 2018 and 2024. Experimental evaluation compared the performance of YOLO models trained on the information-enriched datasets with those trained on standard RGB images. Based on the results, the best data model for early wildfire smoke detection, combining original RGB channels with short-term and long-term temporal memory, was selected. Comparative evaluation demonstrated improved detection accuracy, achieving up to 5 percent higher true-positive detection rate for models trained on spatio-temporal data compared to standard RGB images, while maintaining low inference latency. The proposed approach shifts the focus to the structure and information content of the data while preserving the efficiency of standard convolutional neural network architectures. 
This approach could be applied to other problems requiring high efficiency and real-time operation, where temporal and contextual information can improve detection performance.
25 pages, 5544 KB  
Article
Retrofitting a Legacy Industrial Robot Through Monocular Computer Vision-Based Human-Arm Posture Tracking and 3-DoF Robot-Axis Control (A1–A3)
by Paúl A. Chasi-Pesantez, Eduardo J. Astudillo-Flores, Valeria A. Dueñas-López, Jorge O. Ordoñez-Ordoñez, Eldad Holdengreber and Luis Fernando Guerrero-Vásquez
Robotics 2026, 15(4), 82; https://doi.org/10.3390/robotics15040082 - 21 Apr 2026
Abstract
This paper presents a low-cost retrofitting pipeline for a legacy industrial robot that uses a single RGB webcam and monocular 2D keypoint tracking to estimate human-arm posture angles θ^(h) and map them to robot-axis joint targets q_cmd^(r) for A1–A3 on a KUKA KR5-2 ARC HW, while keeping the wrist orientation (A4–A6) fixed. Rather than targeting full six-DoF manipulation, the main contribution is an experimental characterization of how far monocular 2D posture-to-axis mapping can be used reliably for coarse placement and safeguarded low-speed demonstrations on a legacy robot platform. Vision-side accuracy was evaluated per axis against goniometer-based reference angles θ_ref^(h), showing low errors for A2–A3 within the tested range and larger errors for A1 due to monocular yaw/depth ambiguity and occlusions. The study also analyzes failure modes during simultaneous multi-joint motion, where performance degrades notably, especially for A2 and A3, and reports practical mitigation directions such as improved viewpoints, multi-view/depth sensing, and stricter dropout handling. Runtime behavior is additionally characterized through a loop timing budget, with an end-to-end latency of 185.44 ms and an effective loop frequency of 5.39 Hz, which is consistent with low-speed online operation within the demonstrated scope. The system was implemented in a fenced industrial cell with restricted access and emergency stop; no collaborative operation is claimed.
(This article belongs to the Special Issue Artificial Vision Systems for Robotics)
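The reported loop frequency follows directly from the reported end-to-end latency, assuming the loop rate is simply the reciprocal of the per-iteration latency; a quick check of the abstract's figures:

```python
# Values reported in the abstract above
latency_s = 185.44e-3        # end-to-end latency per control-loop iteration
loop_hz = 1.0 / latency_s    # effective loop frequency, assuming one pass per loop
print(round(loop_hz, 2))     # 5.39, matching the reported 5.39 Hz
```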
22 pages, 4808 KB  
Article
Transforming Opportunistic Routing: A Deep Reinforcement Learning Framework for Reliable and Energy-Efficient Communication in Mobile Cognitive Radio Sensor Networks
by Suleiman Zubair, Bala Alhaji Salihu, Altyeb Altaher Taha, Yakubu Suleiman Baguda, Ahmed Hamza Osman and Asif Hassan Syed
IoT 2026, 7(2), 34; https://doi.org/10.3390/iot7020034 - 21 Apr 2026
Abstract
The Mobile Reliable Opportunistic Routing (MROR) protocol improves data-forwarding reliability in Cognitive Radio Sensor Networks (CRSNs) through mobility-aware virtual contention groups and handover zoning. However, its heuristic decision logic is difficult to optimize under highly dynamic spectrum access and random node mobility. To address this limitation, we present DRL-MROR, a refined routing framework that incorporates deep reinforcement learning (DRL) to enable intelligent and adaptive forwarding decisions. In DRL-MROR, the secondary users (SUs) act as autonomous agents that observe local state information, including primary-user activity, link quality, residual energy, and neighbor-mobility patterns. Each agent learns a forwarding policy through a Deep Q-Network (DQN) optimized for long-term network utility in terms of throughput, delay, and energy efficiency. We formulate routing as a Markov Decision Process (MDP) and use experience replay with prioritized sampling to improve learning stability and convergence. The DQN used at each node is intentionally lightweight, requiring 5514 trainable parameters, about 21.5 kB of weight storage in 32-bit precision, and approximately 5.4k multiply-accumulate operations per inference, which supports practical deployment on edge-capable CRSN nodes. Extensive simulations show that DRL-MROR outperforms the original MROR protocol and representative AI-based routing baselines such as AIRoute under diverse operating conditions. The results indicate gains of up to 38% in throughput, 42% in goodput, a 29% reduction in energy consumed per packet, and an approximately 18% improvement in network lifetime, while maintaining high route stability and fairness. DRL-MROR also reduces control overhead by about 30% and average end-to-end delay by up to 32%, maintaining strong performance even under elevated PU activity and higher node mobility. 
These results show that augmenting opportunistic routing with lightweight DRL can substantially improve adaptability and efficiency in next-generation IoT-oriented CRSNs.
(This article belongs to the Special Issue Advances in Wireless Communication Technologies for IoT Devices)
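The abstract's DQN footprint figures are internally consistent: at 32-bit precision each weight occupies 4 bytes, so the parameter count implies the quoted storage. A back-of-envelope check:

```python
# Values reported in the abstract above
params = 5514                        # trainable DQN parameters
storage_kib = params * 4 / 1024      # 4 bytes per 32-bit weight
print(round(storage_kib, 1))         # 21.5, matching the reported ~21.5 kB
```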
52 pages, 933 KB  
Article
An Edge–Mesh–Cloud Telemetry Architecture for High-Mobility Environments: Low-Latency V2V Hazard Dissemination in Competitive Motorcycling
by Rubén Juárez and Fernando Rodríguez-Sela
Telecom 2026, 7(2), 47; https://doi.org/10.3390/telecom7020047 - 21 Apr 2026
Abstract
At racing speeds above 300 km/h (≈83 m/s), hazard awareness becomes a vehicular-communications problem: 100 ms already correspond to about 8.3 m of blind travel before an alert can influence braking, line choice, or torque delivery. Cloud-only telemetry is therefore insufficient under intermittent coverage and variable round-trip delay, while conventional trackside and pit-wall links do not provide direct inter-bike hazard dissemination. We propose Hybrid Epistemic Offloading (HEO), an edge–mesh–cloud architecture for high-mobility V2V/V2X hazard dissemination that explicitly separates an ephemeral safety plane from a durable cloud-analytics plane. On-bike edge nodes ingest high-rate ECU/IMU signals over CAN and persist full-fidelity traces into standardized ASAM MDF containers, enabling loss-tolerant buffering, deterministic replay, and post hoc auditability across coverage gaps. For real-time safety, motorcycles form a local V2V mesh that disseminates compact hazard digests using latency-bounded gossip with adaptive fanout, TTL-based suppression, and redundancy-aware forwarding over sidelink-capable V2X links. The hazard channel is formulated as uncertainty-aware to account for localization error and propagation delay at race pace. We evaluate the system in two stages: (i) a reproducible mobility-coupled simulation/emulation campaign for mesh dissemination and durable edge → gateway → cloud delivery; and (ii) an MDF4 replay-based Jerez pilot for stability-oriented co-design analysis. Under the tested conditions, the durable MQTT path achieved an 83.4 ms median, 175.9 ms p95, and 303.74 ms maximum end-to-end latency with no observed event loss. In the Jerez pilot, the co-design workflow reduced mean wheel slip from 6.26% to 3.75% (−40.10%) and a control-volatility proxy from 0.1290 to 0.0212 (−83.58%).
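The blind-travel figure that opens the abstract is straightforward kinematics, distance = speed × latency; a minimal sketch using the quoted numbers:

```python
def blind_travel_m(speed_kmh, latency_ms):
    """Distance covered before an alert of the given latency can take effect."""
    return (speed_kmh / 3.6) * (latency_ms / 1000.0)

print(round(blind_travel_m(300.0, 100.0), 1))  # 8.3 (m), as quoted in the abstract
```

The same arithmetic explains why the 83.4 ms median MQTT latency is reserved for the durable analytics plane rather than the real-time safety plane: at 300 km/h it still corresponds to roughly 7 m of travel.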
18 pages, 1499 KB  
Article
Toward Personalized Rotator Cuff Physical Therapy Dosage Using a Machine Learning-Based Pilot Study with EMG
by AmirHossein MajidiRad, Iram Azam, Japp Adhikari and Mehrnoosh Damircheli
Bioengineering 2026, 13(4), 483; https://doi.org/10.3390/bioengineering13040483 - 21 Apr 2026
Abstract
Rotator cuff injuries are among the most common musculoskeletal conditions that affect shoulder function and can ultimately impact quality of life. While physical therapy is essential in the care of rotator cuff injuries, the ideal dose of therapeutic exercises continues to be a significant clinical dilemma because of the generalized nature of rehabilitation protocols. This pilot study proposes a machine learning approach to personalize rehabilitation using surface electromyography (sEMG) data collected from eight healthy individuals across four key shoulder movements: scaption, internal rotation, external rotation, and external rotation at 90° abduction. In this research, the XGBoost algorithm was used to model muscle activation patterns, achieving a high predictive accuracy (R² = 0.5325; MSE = 0.0084 μV²). Because sEMG reliably measures superficial muscle activity, a linear programming model was used to divide a 60 min therapy session in a way that increases activation of superficial muscles (such as deltoid and trapezius) while reducing strain on deep muscles (such as supraspinatus and infraspinatus). Three optimization scenarios were tested, each reflecting a different clinical goal: prioritizing superficial muscles, minimizing deep muscle strain, or balancing both. Optimized time allocations assigned more time to external rotation at 90° abduction and scaption. This research demonstrates the potential for data-driven methods to transform rotator cuff rehabilitation through personalized and evidence-based treatment plans. The results enhance clinical practice by enabling adaptive rehabilitation planning and show that machine learning can support decision-making in complex muscle activation analysis with strong performance and low latency.
(This article belongs to the Special Issue Advances in Physical Therapy and Rehabilitation, 2nd Edition)
43 pages, 646 KB  
Review
TinyML in Industrial IoT: A Systematic Review of Applications, System Components, and Methodologies
by Shahad Alharthi, Muhammad Rashid and Malak Aljabri
Sensors 2026, 26(8), 2550; https://doi.org/10.3390/s26082550 - 21 Apr 2026
Abstract
Tiny Machine Learning (TinyML) enables Machine Learning (ML) models to run on resource-constrained devices, which is critical for Industrial Internet of Things (IIoT) systems requiring low latency, energy efficiency, and local decision-making. Nevertheless, deploying TinyML in IIoT remains challenging due to diverse applications, hardware, frameworks, and deployment methodologies, highlighting the need for a structured and focused review. Existing review articles mainly address general IoT or edge AI, leaving a critical gap in a unified and systematic understanding of TinyML applications, system components, and methodologies within IIoT contexts. Consequently, this systematic literature review (SLR) addresses this gap by analyzing 35 peer-reviewed studies published between 2018 and 2026, offering a comprehensive and structured synthesis of TinyML-enabled IIoT systems. The selected works are synthesized across three major dimensions: applications, system components, and methodologies. In terms of applications, TinyML is primarily used for predictive maintenance, equipment monitoring, anomaly detection, energy management, and general-purpose applications. The general category captures cross-domain solutions that do not fit into a single industrial application. A comparative analysis of all application categories is conducted in terms of accuracy, latency, memory, and energy. For system components, a structured comparison shows how hardware, software, and sensing choices shape performance and applicability. Hardware platforms are grouped by microcontroller families, highlighting dominant types. Software frameworks are summarized, showing the widespread use of lightweight toolchains for on-device inference. Sensor types are categorized, with vibration sensing most common, complemented by other sensing modalities such as vision, acoustic, and environmental sensors.
Finally, the methodologies examined in this SLR provide a comprehensive view of the data foundations, model selection, and optimization strategies. In short, this SLR brings together diverse TinyML–IIoT applications, microcontroller-based hardware, lightweight software frameworks, sensing modalities, varied datasets, and optimization strategies, while also identifying challenges and future research directions.
30 pages, 1289 KB  
Article
Anomaly Detection for Substations Based on IEC 61850-NFA Model
by Deniz Berfin Tastan and Musa Balta
Appl. Sci. 2026, 16(8), 4000; https://doi.org/10.3390/app16084000 - 20 Apr 2026
Abstract
The increasing digitalization of energy transmission and distribution infrastructures has made industrial control systems (ICS), and especially IEC 61850-based communication structures, critical. IEC 61850 performs protection and control functions in substations in real time via GOOSE and MMS protocols. The fast and low-latency operation of these protocols is essential; however, their open structure leaves systems vulnerable to cyberattacks. Traditional signature-based solutions are insufficient for detecting such anomalies, and models capable of learning both time and state relationships are needed. This study develops a time-aware probabilistic NFA model to detect anomalous behavior in IEC 61850 traffic. The model analyzes GOOSE and MMS message sequences with both state transitions and time differences (Δt). Thus, not only the message sequence but also the timing variations between events are learned. The probability of each transition is dynamically updated, and deviations from normal behavior are marked as “anomalies”. The dataset used in this study was created based on normal and attack scenarios conducted in the Sakarya University Critical Infrastructure National Testbed Center Energy Laboratory (Center Energy). The experimental results obtained in the study show that the model detects time-based, structural, and behavioral anomalies with high accuracy. With a dual-model configuration, results of 91.7% accuracy, 88.9% precision, 100% recall, and a 94.1% F1-score were achieved; particularly in time-based attack scenarios, the model performance reached an accuracy level of up to 93%.
(This article belongs to the Section Computing and Artificial Intelligence)
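The core idea in the abstract above — learn transition probabilities from normal traffic, then flag rare transitions — can be sketched generically. This toy model is illustrative only (it omits the Δt timing dimension, and the state and event names are assumptions, not taken from the paper):

```python
from collections import defaultdict

class TransitionModel:
    """Toy probabilistic transition model in the spirit of the time-aware
    NFA described above; an illustrative sketch, not the authors' model."""

    def __init__(self, min_prob=0.05):
        # counts[state][event] = observed occurrences during training
        self.counts = defaultdict(lambda: defaultdict(int))
        self.min_prob = min_prob

    def learn(self, state, event):
        self.counts[state][event] += 1

    def probability(self, state, event):
        total = sum(self.counts[state].values())
        return self.counts[state][event] / total if total else 0.0

    def is_anomalous(self, state, event):
        # Transitions rarer than min_prob are flagged as anomalies
        return self.probability(state, event) < self.min_prob

m = TransitionModel()
for _ in range(99):
    m.learn("idle", "goose_heartbeat")   # hypothetical routine message
m.learn("idle", "mms_write")             # hypothetical rare control write

print(m.is_anomalous("idle", "goose_heartbeat"))  # False
print(m.is_anomalous("idle", "mms_write"))        # True
```

A real detector would additionally learn the expected Δt distribution per transition, so that correctly ordered but oddly timed messages are also flagged.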
14 pages, 2371 KB  
Article
Multimodal Phase-Space Dynamics Fusion for Robust Ischemia Screening: An Edge-AI Paradigm with SERF Magnetocardiography
by Keyi Li, Xiangyang Zhou, Yifan Jia, Ruizhe Wang, Yidi Cao, Jiaojiao Pang, Rui Shang, Yadan Zhang, Yangyang Cui, Dong Xu and Min Xiang
Biosensors 2026, 16(4), 228; https://doi.org/10.3390/bios16040228 - 20 Apr 2026
Abstract
Background: Myocardial ischemia (MI) is a major cause of morbidity and mortality worldwide and requires timely and reliable detection. Although Spin-Exchange Relaxation-Free (SERF) magnetocardiography (MCG) provides femtotesla-level sensitivity for identifying non-linear cardiac repolarization anomalies, its clinical deployment is currently impeded by the computational bottlenecks inherent to portable edge platforms. Methods: We propose a "Sensor-to-Image" Edge-AI framework that links quantum sensing with computer vision. Single-channel SERF-MCG signals from a large cohort of 2118 subjects (1135 Healthy, 983 Ischemia) were transformed into phase-space images using three distinct encoding modalities: Recurrence Plots (RP), Gramian Angular Summation Fields (GASF), and Markov Transition Fields (MTF). These visual representations were subsequently analyzed by a streamlined MobileNetV3-Small architecture, optimized for low-latency inference. To maximize diagnostic precision, an adaptive weighted fusion mechanism was engineered to combine the chaotic specificity captured by RP with the morphological sensitivity of GASF through a validation-optimized fixed global weighting strategy. Results: In our experiments, the fusion model achieved an Area Under the Curve (AUC) of 0.865, which was higher than the 1D-CNN baseline (AUC 0.857) and the single-modality models. Notably, the fusion strategy significantly elevated sensitivity to 88.3% while maintaining a specificity of 66.5%. Although specificity is moderate, this trade-off prioritizes high sensitivity to minimize false negatives in pre-hospital screening scenarios. The average inference time was 4.7 ms per sample on a standard CPU, suggesting suitability for real-time Point-of-Care (PoC) scenarios under further on-device validation. Conclusions: The results suggest that multi-view phase-space fusion can capture subtle spatio-temporal changes associated with ischemia. The proposed lightweight framework may support the development of portable SERF-MCG systems with embedded AI screening. Full article
(This article belongs to the Section Biosensor and Bioelectronic Devices)
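The two encoding modalities the abstract above selects for fusion, Recurrence Plots and GASF, have compact textbook definitions. The NumPy sketch below shows how a 1-D signal becomes the phase-space images fed to the CNN; the threshold `eps` and the fusion weight `w` are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def recurrence_plot(x, eps=0.1):
    """Recurrence Plot: R[i, j] = 1 when |x_i - x_j| < eps (eps is illustrative)."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(np.float32)

def gasf(x):
    """Gramian Angular Summation Field: rescale the series to [-1, 1],
    map samples to polar angles, then G[i, j] = cos(phi_i + phi_j)."""
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

def weighted_fusion(p_rp, p_gasf, w=0.5):
    """Fixed global weighting of the two modality scores; in the paper's
    setup w would be tuned on a validation split (0.5 here is a placeholder)."""
    return w * p_rp + (1 - w) * p_gasf
```

Both transforms turn a length-n series into an n×n image, so the same lightweight 2-D CNN can consume either modality, and the fusion step combines the per-modality class probabilities rather than the images themselves.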
34 pages, 5833 KB  
Article
High-Level Synthesis-Based FPGA Hardware Accelerator for Generalized Hebbian Learning Algorithm for Neuromorphic Computing
by Shivani Sharma and Darshika G. Perera
Electronics 2026, 15(8), 1725; https://doi.org/10.3390/electronics15081725 - 18 Apr 2026
Abstract
With the advent of AI and the smart-systems era, neuromorphic computing will be imperative for supporting next-generation AI-related applications. Existing intelligent systems (such as smart cities and robotics) face many challenges and requirements, including high performance, adaptability, scalability, dynamic decision-making, and low power. Neuromorphic computing is emerging as a complementary solution to address these challenges and requirements of next-generation intelligent systems. Neuromorphic computing exhibits many traits, such as adaptive, low-power, scalable, and parallel computing, that satisfy the requirements of future intelligent systems. However, innovative solutions (in terms of models, architectures, and techniques) are still needed to overcome several challenges hindering the advancement of neuromorphic computing. In this research work, we introduce a novel and efficient FPGA-HLS-based hardware accelerator for the Generalized Hebbian learning algorithm (GHA) for neuromorphic computing applications. We focus on GHA because it has been demonstrated to enable online and incremental learning and to provide a hardware-efficient unsupervised learning framework that aligns closely with the principles of biological adaptation, traits that are vital for neuromorphic computing applications. In addition, our previous work showed that FPGAs offer many features, such as low power, customized circuits, parallel computing capabilities, low latency, and, especially, an adaptive nature, which make them suitable for neuromorphic computing applications. We propose two versions of the FPGA-HLS-based GHA hardware accelerator: one based on a memory-mapped interface and the other on a streaming interface. Our streaming interface-based GHA hardware IP achieves up to a 51.13× speedup over its embedded software counterpart, while meeting the small-area and low-power requirements of neuromorphic computing applications. Our experimental results show great potential in utilizing FPGA-based architectures to support neuromorphic computing applications. Full article
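The Generalized Hebbian Algorithm (Sanger's rule) that the accelerator above targets has a simple software reference form, which is the usual starting point for an HLS design. The sketch below is that textbook update, not the authors' hardware IP; the learning rate and data dimensions are assumed values.

```python
import numpy as np

def gha_step(W, x, lr=0.01):
    """One Sanger's-rule (GHA) update for weight matrix W (m components x d inputs):
    y = W x;  dW = lr * (y x^T - tril(y y^T) W).
    The lower-triangular term deflates earlier components, so successive rows of W
    converge to successive principal directions of the input distribution."""
    y = W @ x
    dW = lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W + dW
```

Because the update uses only multiply-accumulate operations on small matrices and learns online, one sample at a time, it maps naturally onto a streaming FPGA pipeline, which is consistent with the abstract's streaming-interface variant performing best.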