Search Results (2,754)

Search Parameters:
Keywords = real-time hardware

31 pages, 2737 KB  
Article
Rain Detection in Solar Insecticidal Lamp IoTs Systems Based on Multivariate Wireless Signal Feature Learning
by Lingxun Liu, Lei Shu, Yiling Xu, Kailiang Li, Ru Han, Qin Su and Jiarui Fang
Electronics 2026, 15(2), 465; https://doi.org/10.3390/electronics15020465 - 21 Jan 2026
Abstract
Solar insecticidal lamp Internet of Things (SIL-IoTs) systems are widely deployed in agricultural environments, where accurate and timely rain detection is crucial for system stability and energy-efficient operation. However, existing rain-sensing solutions rely on additional hardware, leading to increased cost and maintenance complexity. This study proposes a hardware-free rain detection method based on multivariate wireless signal feature learning, using LTE communication data. A large-scale primary dataset containing 11.84 million valid samples was collected from a real farmland SIL-IoTs deployment in Nanjing, recording RSRP, RSRQ, and RSSI at 1 Hz. To address signal heterogeneity, a signal-strength stratification strategy and a dual-rate EWMA-based adaptive signal-leveling mechanism were introduced. Four machine-learning models—Logistic Regression, Random Forest, XGBoost, and LightGBM—were trained and evaluated using both the primary dataset and an external test dataset collected in Changsha and Dongguan. Experimental results show that XGBoost achieves the highest detection accuracy, whereas LightGBM provides a favorable trade-off between performance and computational cost. Evaluation using accuracy, precision, recall, F1-score, and ROC-AUC indicates that all metrics exceed 0.975. The proposed method demonstrates strong accuracy, robustness, and cross-regional generalization, providing a practical and scalable solution for rain detection in agricultural IoT systems without additional sensing hardware.
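The dual-rate EWMA signal-leveling mechanism mentioned in the abstract can be sketched in a few lines. This is an illustrative reading of the general idea, not the paper's implementation: the smoothing factors, the RSRP step size, and the use of the fast/slow gap as a rain-indicator feature are all assumptions.

```python
# Hypothetical sketch of dual-rate EWMA signal leveling for an RSRP-like
# series: a fast EWMA tracks short-term signal changes while a slow EWMA
# holds the dry-weather baseline; their gap is a candidate rain feature.
def dual_rate_ewma(samples, alpha_fast=0.3, alpha_slow=0.01):
    fast = slow = samples[0]
    gaps = []
    for x in samples:
        fast = alpha_fast * x + (1 - alpha_fast) * fast
        slow = alpha_slow * x + (1 - alpha_slow) * slow
        gaps.append(fast - slow)  # negative gap: signal falling below baseline
    return gaps

# A sustained 10 dB RSRP drop (as rain attenuation might cause) opens a gap.
series = [-80.0] * 50 + [-90.0] * 50
gap = dual_rate_ewma(series)
```

Because the slow average lags the drop, the gap stays near zero in steady conditions and turns sharply negative when attenuation sets in, which is the kind of feature a downstream classifier (e.g. XGBoost) could consume.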
18 pages, 3222 KB  
Article
Short-Time Homomorphic Deconvolution (STHD): A Novel 2D Feature for Robust Indoor Direction of Arrival Estimation
by Yeonseok Park and Jun-Hwa Kim
Sensors 2026, 26(2), 722; https://doi.org/10.3390/s26020722 - 21 Jan 2026
Abstract
Accurate indoor positioning and navigation remain significant challenges, with audio sensor-based sound source localization emerging as a promising sensing modality. Conventional methods, often reliant on multi-channel processing or time-delay estimation techniques such as Generalized Cross-Correlation, encounter difficulties with computational complexity, hardware synchronization, and reverberant environments where time-difference-of-arrival cues are masked. While machine learning approaches have shown potential, their performance depends heavily on the discriminative power of input features. This paper proposes a novel feature extraction method named Short-Time Homomorphic Deconvolution, which transforms multi-channel audio signals into a 2D Time × Time-of-Flight representation. Unlike prior 1D methods, this feature effectively captures the temporal evolution and stability of time-of-flight differences between microphone pairs, offering a rich and robust input for deep learning models. We validate this feature using a lightweight Convolutional Neural Network integrated with a dual-stage channel attention mechanism, designed to prioritize reliable spatial cues. The system was trained on a large-scale dataset generated via simulations and rigorously tested using real-world data acquired in an ISO-certified anechoic chamber. Experimental results demonstrate that the proposed model achieves precise Direction of Arrival estimation with a Mean Absolute Error of 1.99 degrees in real-world scenarios. Notably, the system exhibits remarkable consistency between simulation and physical experiments, proving its effectiveness for robust indoor navigation and positioning systems.
19 pages, 2984 KB  
Article
Development and Field Testing of an Acoustic Sensor Unit for Smart Crossroads as Part of V2X Infrastructure
by Yury Furletov, Dinara Aptinova, Mekan Mededov, Andrey Keller, Sergey S. Shadrin and Daria A. Makarova
Smart Cities 2026, 9(1), 17; https://doi.org/10.3390/smartcities9010017 - 21 Jan 2026
Abstract
Improving city crossroads safety is a critical problem for modern smart transportation systems (STS). This article presents the results of developing, upgrading, and comprehensively testing an acoustic monitoring system prototype designed for rapid accident detection. Unlike conventional camera- or lidar-based approaches, the proposed solution uses passive sound source localization to operate effectively without direct visibility and in adverse weather conditions. Generalized Cross-Correlation with Phase Transform (GCC-PHAT) algorithms were used to develop a hardware–software complex featuring four microphones, a multichannel audio interface, and a computation module. This study focuses on the gradual upgrading of the algorithm to reduce the mean localization error in real-life urban conditions. Laboratory and field tests were conducted on an open-air testing ground of a university campus. During these tests, the system demonstrated that it can accurately determine the coordinates of a sound source imitating accidents (sirens, collisions). The analysis confirmed that the system satisfies the V2X infrastructure integration response time requirement (<200 ms). The results suggest that the system can be used as part of smart transportation systems.
(This article belongs to the Section Physical Infrastructures and Networks in Smart Cities)
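GCC-PHAT, the estimator named above, is a standard time-difference-of-arrival method; a minimal sketch follows. The sampling rate, signal length, and injected delay are assumptions for illustration, and the paper's upgraded multi-microphone pipeline is not reproduced.

```python
import numpy as np

# GCC-PHAT: whiten the cross-power spectrum of two microphone signals, then
# take the peak of its inverse FFT as the inter-microphone delay estimate.
def gcc_phat(sig, ref, fs):
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cc = np.fft.irfft(S / (np.abs(S) + 1e-12), n=n)  # phase transform
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs  # delay in seconds

fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)                # broadband source signal
y = np.concatenate((np.zeros(25), x[:-25]))  # second mic hears it 25 samples late
tau = gcc_phat(y, x, fs)
```

With a known microphone spacing, the delay estimate converts directly to a direction-of-arrival angle; the phase transform is what keeps the correlation peak sharp in reverberant conditions.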

22 pages, 1918 KB  
Article
Edge-VisionGuard: A Lightweight Signal-Processing and AI Framework for Driver State and Low-Visibility Hazard Detection
by Manuel J. C. S. Reis, Carlos Serôdio and Frederico Branco
Appl. Sci. 2026, 16(2), 1037; https://doi.org/10.3390/app16021037 - 20 Jan 2026
Abstract
Driving safety under low-visibility or distracted conditions remains a critical challenge for intelligent transportation systems. This paper presents Edge-VisionGuard, a lightweight framework that integrates signal processing and edge artificial intelligence to enhance real-time driver monitoring and hazard detection. The system fuses multi-modal sensor data—including visual, inertial, and illumination cues—to jointly estimate driver attention and environmental visibility. A hybrid temporal–spatial feature extractor (TS-FE) is introduced, combining convolutional and B-spline reconstruction filters to improve robustness against illumination changes and sensor noise. To enable deployment on resource-constrained automotive hardware, a structured pruning and quantization pipeline is proposed. Experiments on synthetic VR-based driving scenes demonstrate that the full-precision model achieves 89.6% driver-state accuracy (F1 = 0.893) and 100% visibility accuracy, with an average inference latency of 16.5 ms. After 60% parameter reduction and short fine-tuning, the pruned model preserves 87.1% accuracy (F1 = 0.866) and <3 ms latency overhead. These results confirm that Edge-VisionGuard maintains near-baseline performance under strict computational constraints, advancing the integration of computer vision and Edge AI for next-generation safe and reliable driving assistance systems.
(This article belongs to the Special Issue Advances in Virtual Reality and Vision for Driving Safety)

14 pages, 2906 KB  
Proceeding Paper
Onboard Deep Reinforcement Learning: Deployment and Testing for CubeSat Attitude Control
by Sajjad Zahedi, Jafar Roshanian, Mehran Mirshams and Krasin Georgiev
Eng. Proc. 2026, 121(1), 26; https://doi.org/10.3390/engproc2025121026 - 20 Jan 2026
Abstract
Recent progress in Reinforcement Learning (RL), especially deep RL, has created new possibilities for autonomous control in complex and uncertain environments. This study explores these possibilities through a practical approach, implementing an RL agent on a custom-built CubeSat. The CubeSat, equipped with a reaction wheel for active attitude control, serves as a physical testbed for validating RL-based strategies. To mimic space-like conditions, the CubeSat was placed on a custom air-bearing platform that allows near-frictionless rotation along a single axis, simulating microgravity. Unlike simulation-only research, this work showcases real-time hardware-level implementation of a Double Deep Q-Network (DDQN) controller. The DDQN agent receives real system state data and outputs control commands to orient the CubeSat via its reaction wheel. For comparison, a traditional PID controller was also tested under identical conditions. Both controllers were evaluated based on response time, accuracy, and resilience to disturbances. The DDQN outperformed the PID, showing better adaptability and control. This research demonstrates the successful integration of RL into real aerospace hardware, bridging the gap between theoretical algorithms and practical space applications through a hands-on CubeSat platform.
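The Double Deep Q-Network controller above relies on the Double DQN target rule, which can be shown with toy numbers. The reward, discount factor, and Q-values below are hypothetical, not taken from the CubeSat experiments.

```python
# Double DQN target: the online network *selects* the next action, the
# target network *evaluates* it, which curbs the overestimation bias of
# vanilla Q-learning (where a single network does both jobs).
def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    if done:
        return reward
    best = max(range(len(next_q_online)), key=next_q_online.__getitem__)
    return reward + gamma * next_q_target[best]

# The online net prefers action 1, so the target net's value for action 1
# (0.4, not its own maximum 0.5) enters the bootstrap target.
t = ddqn_target(reward=1.0, next_q_online=[0.2, 0.9], next_q_target=[0.5, 0.4])
```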

20 pages, 390 KB  
Systematic Review
Systematic Review of Quantization-Optimized Lightweight Transformer Architectures for Real-Time Fruit Ripeness Detection on Edge Devices
by Donny Maulana and R Kanesaraj Ramasamy
Computers 2026, 15(1), 69; https://doi.org/10.3390/computers15010069 - 19 Jan 2026
Abstract
Real-time visual inference on resource-constrained hardware remains a core challenge for edge computing and embedded artificial intelligence systems. Recent deep learning architectures, particularly Vision Transformers (ViTs) and Detection Transformers (DETRs), achieve high detection accuracy but impose substantial computational and memory demands that limit their deployment on low-power edge platforms such as NVIDIA Jetson and Raspberry Pi devices. This paper presents a systematic review of model compression and optimization strategies—specifically quantization, pruning, and knowledge distillation—applied to lightweight object detection architectures for edge deployment. Following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, peer-reviewed studies were analyzed from Scopus, IEEE Xplore, and ScienceDirect to examine the evolution of efficient detectors from convolutional neural networks to transformer-based models. The synthesis highlights a growing focus on real-time transformer variants, including Real-Time DETR (RT-DETR) and low-bit quantized approaches such as Q-DETR, alongside optimized YOLO-based architectures. While quantization enables substantial theoretical acceleration (e.g., up to 16× operation reduction), aggressive low-bit precision introduces accuracy degradation, particularly in transformer attention mechanisms, highlighting a critical efficiency–accuracy tradeoff. The review further shows that Quantization-Aware Training (QAT) consistently outperforms Post-Training Quantization (PTQ) in preserving performance under low-precision constraints. Finally, this review identifies critical open research challenges, emphasizing the efficiency–accuracy tradeoff and the high computational demands imposed by Transformer architectures. Future directions are proposed, including hardware-aware optimization, robustness to imbalanced datasets, and multimodal sensing integration, to ensure reliable real-time inference in practical agricultural edge computing environments.
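The accuracy degradation under low-bit precision discussed in the review comes from quantization arithmetic such as the following generic 8-bit affine post-training quantization sketch; the weight values are made up, and no reviewed paper's specific scheme is reproduced.

```python
# Affine (asymmetric) quantization: map floats in [lo, hi] onto uint8 via a
# scale and zero-point, then dequantize to measure the rounding error that
# PTQ introduces and QAT learns to compensate for during training.
def quantize(ws, bits=8):
    qmin, qmax = 0, 2 ** bits - 1
    lo, hi = min(ws), max(ws)
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zp = round(qmin - lo / scale)                  # zero-point aligns 0.0
    q = [max(qmin, min(qmax, round(w / scale) + zp)) for w in ws]
    deq = [(v - zp) * scale for v in q]
    return q, deq, scale

weights = [-1.0, -0.5, 0.0, 0.4, 1.0]
q, deq, scale = quantize(weights)
err = max(abs(w - d) for w, d in zip(weights, deq))  # bounded by ~scale / 2
```

At 8 bits the per-weight error is tiny; shrinking `bits` to 4 or 2 grows `scale` and with it the error, which is the efficiency–accuracy tradeoff the review describes.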

25 pages, 6614 KB  
Article
Timer-Based Digitization of Analog Sensors Using Ramp-Crossing Time Encoding
by Gabriel Bravo, Ernesto Sifuentes, Geu M. Puentes-Conde, Francisco Enríquez-Aguilera, Juan Cota-Ruiz, Jose Díaz-Roman and Arnulfo Castro
Technologies 2026, 14(1), 72; https://doi.org/10.3390/technologies14010072 - 18 Jan 2026
Abstract
This work presents a time-domain analog-to-digital conversion method in which the amplitude of a sensor signal is encoded through its crossing instants with a periodic ramp. The proposed architecture departs from conventional ADC and PWM demodulation approaches by shifting quantization entirely to the time domain, enabling waveform reconstruction using only a ramp generator, an analog comparator, and a timer capture module. A theoretical framework is developed to formalize the voltage-to-time mapping, derive expressions for resolution and error, and identify the conditions ensuring monotonicity and single-crossing behavior. Simulation results demonstrate high-fidelity reconstruction for both periodic and non-periodic signals, including real photoplethysmographic (PPG) waveforms, with errors approaching the theoretical quantization limit. A hardware implementation on a PSoC 5LP microcontroller confirms the practicality of the method under realistic operating conditions. Despite ramp nonlinearity, comparator delay, and sensor noise, the system achieves effective resolutions above 12 bits using only native mixed-signal peripherals and no conventional ADC. These results show that accurate waveform reconstruction can be obtained from purely temporal information, positioning time-encoded sensing as a viable alternative to traditional amplitude-based conversion. The minimal analog front end, low power consumption, and scalability of timer-based processing highlight the potential of the proposed approach for embedded instrumentation, distributed sensor nodes, and biomedical monitoring applications.

29 pages, 7700 KB  
Article
Secure and Decentralised Swarm Authentication Using Hardware Security Primitives
by Sagir Muhammad Ahmad and Barmak Honarvar Shakibaei Asli
Electronics 2026, 15(2), 423; https://doi.org/10.3390/electronics15020423 - 18 Jan 2026
Abstract
Autonomous drone swarms are increasingly deployed in critical domains such as infrastructure inspection, environmental monitoring, and emergency response. While their distributed operation enables scalability and resilience, it also introduces new vulnerabilities, particularly in authentication and trust establishment. Conventional cryptographic solutions, including public key infrastructures (PKI) and symmetric key protocols, impose computational and connectivity requirements unsuited to resource-constrained, infrastructure-free swarm deployments. In this paper, we present a decentralized authentication scheme rooted in hardware security primitives (HSPs), specifically Physical Unclonable Functions (PUFs) and True Random Number Generators (TRNGs). The protocol leverages master-initiated token broadcasting, iterative HSP seed evolution, randomized response delays, and statistical trust evaluation to detect cloning, replay, and impersonation attacks without reliance on centralized authorities or pre-distributed keys. Simulation studies demonstrate that the scheme achieves lightweight operation, rapid anomaly detection, and robustness against wireless interference, making it well-suited for real-time swarm systems.
(This article belongs to the Special Issue Unmanned Aircraft Systems with Autonomous Navigation, 2nd Edition)
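The PUF-based challenge-response idea can be illustrated with a toy check. A real PUF derives its response from uncontrollable device physics; the keyed hash below is only a stand-in for that fingerprint, and every name here is hypothetical rather than the paper's protocol.

```python
import hashlib

# Toy PUF-style authentication: each device's unique "fingerprint" maps a
# challenge to a response the verifier checks against an enrolled value;
# a clone lacking the fingerprint produces a mismatching response.
def puf_response(device_secret, challenge):
    return hashlib.sha256(device_secret + challenge).hexdigest()[:16]

drone = b"device-unique-entropy"     # stand-in for silicon process variation
clone = b"different-entropy"
challenge = b"nonce-0001"            # fresh per round to block replay
enrolled = puf_response(drone, challenge)
authentic = puf_response(drone, challenge) == enrolled
clone_detected = puf_response(clone, challenge) != enrolled
```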

20 pages, 7676 KB  
Article
Development of a Neural Network-Based Controller for a Greenhouse Irrigation System at Laboratory Scale
by Cesar Gerardo-Parra, Luis Enrique Barreto-Salazar, Lidia Madeleine Flores-López, Julio César Picos-Ponce, David Enrique Castro-Palazuelos and Guillermo Javier Rubio-Astorga
Agriculture 2026, 16(2), 245; https://doi.org/10.3390/agriculture16020245 - 18 Jan 2026
Abstract
Water scarcity and inefficient irrigation practices are major challenges for modern protected agriculture systems. This study designs, implements, and experimentally validates a neural network-based irrigation control strategy in an industrial programmable logic controller (PLC) for a drip irrigation system operating in a laboratory-scale micro-tunnel greenhouse. The objective is to evaluate the real-time performance of an intelligent controller under practical operating conditions and to quantify its impact on water use efficiency and crop growth compared to a conventional on–off strategy. The neural network is trained using 1039 data samples, divided into training (70%), validation (15%), and test (15%) datasets, and is implemented in the PLC to regulate soil moisture through a proportional valve. Experimental validation is carried out over 67 days using a serrano chili pepper (Capsicum annuum L.) crop. Both controllers operate simultaneously under identical environmental and operating conditions. Performance is evaluated using soil moisture stability metrics, including mean squared error (MSE), mean absolute error (MAE), and standard error (SE), water consumption, and crop growth indicators prior to harvest. Results show that the neural network controller achieves higher soil moisture regulation accuracy (MSE = 3.2159%, MAE = 0.7560%, SE = 0.00001687%) and reduces the average daily water consumption per plant by 50.18% compared with the on–off controller. In addition, the absolute growth rate increases by 26.42%, with statistically significant differences. These results demonstrate that neural network-based control can be effectively implemented on industrial hardware and provide tangible benefits for water-efficient and precision irrigation systems.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

20 pages, 3982 KB  
Article
AI-Driven Decimeter-Level Indoor Localization Using Single-Link Wi-Fi: Adaptive Clustering and Probabilistic Multipath Mitigation
by Li-Ping Tian, Chih-Min Yu, Li-Chun Wang and Zhizhang (David) Chen
Sensors 2026, 26(2), 642; https://doi.org/10.3390/s26020642 - 18 Jan 2026
Abstract
This paper presents an Artificial Intelligence (AI)-driven framework for high-precision indoor localization using single-link Wi-Fi channel state information (CSI), targeting real-time deployment in complex multipath environments. To overcome challenges such as signal distortion and environmental dynamics, the proposed system integrates adaptive and unsupervised intelligence modules into the localization pipeline. A refined two-stage time-of-flight (TOF) estimation method is introduced, combining a minimum-norm algorithm with a probability-weighted refinement mechanism that improves ranging accuracy under non-line-of-sight (NLOS) conditions. Simultaneously, an adaptive parameter-tuned DBSCAN algorithm is applied to angle-of-arrival (AOA) sequences, enabling unsupervised spatio-temporal clustering for stable direction estimation without requiring prior labels or environmental calibration. These AI-enabled components allow the system to dynamically suppress multipath interference, eliminate positioning ambiguity, and maintain robustness across diverse indoor layouts. Comprehensive experiments conducted on the Widar2.0 dataset demonstrate that the proposed method achieves decimeter-level accuracy with an average localization error of 0.63 m, outperforming existing methods such as “Widar2.0” and “Dynamic-MUSIC” in both accuracy and efficiency. This intelligent and lightweight architecture is fully compatible with commodity Wi-Fi hardware and offers significant potential for real-time human tracking, smart building navigation, and other location-aware AI applications.
(This article belongs to the Special Issue Sensors for Indoor Positioning)
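The unsupervised AOA clustering step can be sketched with a compact 1-D DBSCAN. The `eps`/`min_pts` values and the angle stream below are assumptions for illustration; the paper's adaptive parameter tuning is not reproduced.

```python
# Minimal 1-D DBSCAN over an angle-of-arrival sequence: density clustering
# separates the stable direct-path direction from scattered multipath
# outliers without any labels or calibration.
def dbscan_1d(xs, eps=3.0, min_pts=4):
    labels = [None] * len(xs)
    cid = -1
    for i in range(len(xs)):
        if labels[i] is not None:
            continue
        seeds = [j for j in range(len(xs)) if abs(xs[j] - xs[i]) <= eps]
        if len(seeds) < min_pts:
            labels[i] = -1                   # noise (may become border later)
            continue
        cid += 1
        labels[i] = cid
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid              # claim as border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nbrs = [k for k in range(len(xs)) if abs(xs[k] - xs[j]) <= eps]
            if len(nbrs) >= min_pts:         # core point: keep expanding
                queue.extend(k for k in nbrs if labels[k] is None)
    return labels

aoa = [41.8, 42.1, 42.5, 41.9, 42.3, 75.0, 42.0, 41.7, 10.0, 42.2]  # degrees
labels = dbscan_1d(aoa)
stable = [a for a, l in zip(aoa, labels) if l == 0]
est = sum(stable) / len(stable)   # multipath spikes at 75 and 10 deg ignored
```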

17 pages, 1911 KB  
Editorial
Advances in (Bio)Sensors for Physiological Monitoring: A Special Issue Review
by Magnus Falk and Sergey Shleev
Sensors 2026, 26(2), 633; https://doi.org/10.3390/s26020633 - 17 Jan 2026
Abstract
Physiological monitoring has become an inherently interdisciplinary field, merging advances in engineering, chemistry, biology, medicine, and data analytics to create sensors that continuously track the vital signals of the body. These developments are enabling more personalized and preventive healthcare, as wearable (bio)sensors and intelligent algorithms can detect subtle physiological changes in real time. In the Special Issue ‘Advances in (Bio)Sensors for Physiological Monitoring’, researchers from diverse domains contributed 18 papers showcasing cutting-edge sensor technologies and applications for health and performance monitoring. In this review, we summarize these contributions by grouping them into logical themes based on their focus: (1) cardiovascular and autonomic monitoring, (2) glucose and metabolic monitoring, (3) wearable sensors for movement and musculoskeletal health, (4) neurophysiological monitoring and brain–computer interfaces, and (5) innovations in sensor technology and methods. This thematic organization highlights the breadth of the research, spanning from fundamental sensor hardware to data-driven analytics, and underscores how modern (bio)sensors are breaking traditional boundaries in healthcare.
(This article belongs to the Special Issue (Bio)sensors for Physiological Monitoring)

27 pages, 11232 KB  
Article
Aerokinesis: An IoT-Based Vision-Driven Gesture Control System for Quadcopter Navigation Using Deep Learning and ROS2
by Sergei Kondratev, Yulia Dyrchenkova, Georgiy Nikitin, Leonid Voskov, Vladimir Pikalov and Victor Meshcheryakov
Technologies 2026, 14(1), 69; https://doi.org/10.3390/technologies14010069 - 16 Jan 2026
Abstract
This paper presents Aerokinesis, an IoT-based software–hardware system for intuitive gesture-driven control of quadcopter unmanned aerial vehicles (UAVs), developed within the Robot Operating System 2 (ROS2) framework. The proposed system addresses the challenge of providing an accessible human–drone interaction interface for operators in scenarios where traditional remote controllers are impractical or unavailable. The architecture comprises two hierarchical control levels: (1) high-level discrete command control utilizing a fully connected neural network classifier for static gesture recognition, and (2) low-level continuous flight control based on three-dimensional hand keypoint analysis from a depth camera. The gesture classification module achieves an accuracy exceeding 99% using a multi-layer perceptron trained on MediaPipe-extracted hand landmarks. For continuous control, we propose a novel approach that computes Euler angles (roll, pitch, yaw) and throttle from 3D hand pose estimation, enabling intuitive four-degree-of-freedom quadcopter manipulation. A hybrid signal filtering pipeline ensures robust control signal generation while maintaining real-time responsiveness. Comparative user studies demonstrate that gesture-based control reduces task completion time by 52.6% for beginners compared to conventional remote controllers. The results confirm the viability of vision-based gesture interfaces for IoT-enabled UAV applications.
(This article belongs to the Section Information and Communication Technologies)

61 pages, 10490 KB  
Article
An Integrated Cyber-Physical Digital Twin Architecture with Quantitative Feedback Theory Robust Control for NIS2-Aligned Industrial Robotics
by Vesela Karlova-Sergieva, Boris Grasiani and Nina Nikolova
Sensors 2026, 26(2), 613; https://doi.org/10.3390/s26020613 - 16 Jan 2026
Abstract
This article presents an integrated framework for robust control and cybersecurity of an industrial robot, combining Quantitative Feedback Theory (QFT), digital twin (DT) technology, and a programmable logic controller–based architecture aligned with the requirements of the NIS2 Directive. The study considers a five-axis industrial manipulator modeled as a set of decoupled linear single-input single-output systems subject to parametric uncertainty and external disturbances. For position control of each axis, closed-loop robust systems with QFT-based controllers and prefilters are designed, and the dynamic behavior of the system is evaluated using predefined key performance indicators (KPIs), including tracking errors in joint space and tool space, maximum error, root-mean-square error, and three-dimensional positional deviation. The proposed architecture executes robust control algorithms in the MATLAB/Simulink environment, while a programmable logic controller provides deterministic communication, time synchronization, and secure data exchange. The synchronized digital twin, implemented in the FANUC ROBOGUIDE environment, reproduces the robot’s kinematics and dynamics in real time, enabling realistic hardware-in-the-loop validation with a real programmable logic controller. This work represents one of the first architectures that simultaneously integrates robust control, real programmable logic controller-based execution, a synchronized digital twin, and NIS2-oriented mechanisms for observability and traceability. The conducted simulation and digital twin-based experimental studies under nominal and worst-case dynamic models, as well as scenarios with externally applied single-axis disturbances, demonstrate that the system maintains robustness and tracking accuracy within the prescribed performance criteria. In addition, the study analyzes how the proposed architecture supports the implementation of key NIS2 principles, including command traceability, disturbance resilience, access control, and capabilities for incident analysis and event traceability in robotic manufacturing systems.
(This article belongs to the Section Sensors and Robotics)

17 pages, 2317 KB  
Article
Design and Realization of Dynamically Adjustable Multi-Pulse Real-Time Coherent Integration System
by Jinrui Bi, Hongyu Zhang, Lihua Sun and Qingchao Jiang
Electronics 2026, 15(2), 397; https://doi.org/10.3390/electronics15020397 - 16 Jan 2026
Abstract
Radar signal coherent integration technology is a critical method to improve the performance of detection systems. However, existing techniques face challenges regarding real-time performance and the flexibility of multi-pulse coherent accumulation. In this paper, a dynamically configurable multi-pulse, multi-frame real-time coherent integration system based on an FPGA is designed and implemented; the number of pulses and the number of frames stored per pulse can be configured dynamically from the host computer. The experimental results show that, with 40 pulses, the output signal delay of coherent integration is 33 microseconds and the energy gain reaches 16 dB, providing a dynamically configurable hardware platform and solution for real-time coherent integration of high-frame-count, multi-pulse radar signals.
(This article belongs to the Special Issue From Circuits to Systems: Embedded and FPGA-Based Applications)
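The reported 16 dB energy gain at 40 pulses is consistent with the textbook rule for ideal coherent integration; the check below is a back-of-the-envelope calculation, not the paper's FPGA measurement.

```python
import math

# Ideal coherent integration of N aligned pulses: signal amplitude grows
# by N (power by N^2) while noise power grows by N, so SNR improves by
# 10*log10(N) dB, which is about 16 dB for N = 40.
def coherent_gain_db(n_pulses):
    return 10 * math.log10(n_pulses)

gain = coherent_gain_db(40)
```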

13 pages, 1383 KB  
Article
Adaptive Software-Defined Honeypot Strategy Using Stackelberg Game and Deep Reinforcement Learning with DPU Acceleration
by Mingxuan Zhang, Yituan Yu, Shengkun Li, Yan Liu, Yingshuai Zhang, Rui Zhang and Sujie Shao
Modelling 2026, 7(1), 23; https://doi.org/10.3390/modelling7010023 - 16 Jan 2026
Abstract
Software-defined (SD) honeypots, as dynamic cybersecurity technologies, enhance defense efficiency through flexible resource allocation. However, traditional SD honeypots face latency and jitter issues under network fluctuations, while balancing adjustment costs with defense benefits remains challenging. This paper proposes a DPU-accelerated SD honeypot security service deployment method, leveraging DPU hardware acceleration to optimize network traffic processing and protocol parsing, thereby significantly improving the efficiency of honeypot environment construction and the real-time responsiveness of the honeypot. For dynamic attack–defense scenarios, we design an adaptive adjustment strategy combining Stackelberg game theory with deep reinforcement learning (AASGRL). By calculating the expected defense benefits and adjustment costs of optimal honeypot deployment strategies, the approach dynamically determines the timing and scope of honeypot adjustments. Simulation experiments demonstrate that the mechanism requires no adjustments in 80% of interaction rounds, while achieving enhanced defense benefits in 20% of rounds with controlled adjustment costs. Compared to traditional methods, the AASGRL mechanism maintains stable defense benefits in long-term interactions, verifying its effectiveness in balancing low costs and high benefits against dynamic attacks. This work provides critical technical support for building adaptive proactive network defense systems.
