Search Results (953)

Search Parameters:
Keywords = Hardware-in-the-loop simulation

25 pages, 3803 KB  
Article
Enhanced Frequency Dynamic Support for PMSG Wind Turbines via Hybrid Inertia Control
by Jian Qian, Yina Song, Gengda Li, Ziyao Zhang, Yi Wang and Haifeng Yang
Electronics 2026, 15(2), 373; https://doi.org/10.3390/electronics15020373 - 14 Jan 2026
Abstract
High penetration of wind farms into the power grid lowers system inertia and compromises stability. This paper proposes a grid-forming control strategy for Permanent Magnet Synchronous Generator (PMSG) wind turbines based on DC-link voltage matching and virtual inertia. First, a relationship between grid frequency and DC-link voltage is established, replacing the need for a phase-locked loop. Then, DC voltage dynamics are utilized to trigger a real-time switching of the power tracking curve, releasing the rotor’s kinetic energy for inertia response. This is further coordinated with a de-loading control that maintains active power reserves through over-speeding or pitch control. Finally, the MATLAB/Simulink simulation results and RT-LAB hardware-in-the-loop experiments demonstrate the capability of the proposed control strategy to provide rapid active power support during grid disturbances. Full article
(This article belongs to the Special Issue Stability Analysis and Optimal Operation in Power Electronic Systems)
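
As a rough illustration of the voltage-matching idea (not the authors' scheme), the Python sketch below folds the virtual-inertia term into an enlarged effective DC-link capacitance and adds an assumed de-loading reserve droop; all gains, per-unit values, and the load step are invented for the example.

```python
import numpy as np

C_DC, V_REF, F_NOM = 0.05, 1.0, 50.0   # per-unit DC capacitance, DC-link reference, nominal Hz
K_MATCH = 5.0                          # Hz of converter frequency per p.u. of V_dc deviation
K_INERTIA, K_DROOP = 0.4, 4.0          # virtual-inertia and de-loading reserve gains (assumed)

def simulate(p_wind, p_load, dt=1e-3, steps=3000):
    v_dc, log = V_REF, []
    for k in range(steps):
        # De-loading reserve (over-speeding / pitch margin) acts as a droop on V_dc
        p_reserve = K_DROOP * (V_REF - v_dc)
        # DC-link energy balance C*V*dV/dt = P_in - P_out; the virtual-inertia term is
        # folded in as an enlarged effective capacitance
        dvdt = (p_wind + p_reserve - p_load(k * dt)) / ((C_DC + K_INERTIA) * max(v_dc, 0.1))
        v_dc += dvdt * dt
        # Voltage matching replaces the PLL: the converter frequency follows V_dc
        freq = F_NOM + K_MATCH * (v_dc - V_REF)
        log.append((k * dt, v_dc, freq, p_reserve))
    return np.array(log)

# 0.1 p.u. load step at t = 1 s to exercise the inertial and reserve response
trace = simulate(p_wind=0.8, p_load=lambda t: 0.8 if t < 1.0 else 0.9)
print(f"frequency nadir: {trace[:, 2].min():.3f} Hz")
```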

24 pages, 29056 KB  
Article
ANN-Based Online Parameter Correction for PMSM Control Using Sphere Decoding Algorithm
by Joseph O. Akinwumi, Yuan Gao, Xin Yuan, Sergio Vazquez and Harold S. Ruiz
Sensors 2026, 26(2), 553; https://doi.org/10.3390/s26020553 - 14 Jan 2026
Abstract
This work addresses parameter mismatch in Permanent Magnet Synchronous Motor (PMSM) drives, focusing on performance degradation caused by variations in flux linkage and inductance arising under realistic operating uncertainties. An artificial neural network (ANN) is trained to estimate these parameter shifts and update the controller model online. The procedure comprises three steps: (i) data generation using Sphere Decoding Algorithm-based Model Predictive Control (SDA-MPC) across a mismatch range of ±50%; (ii) offline ANN training to map measured features to parameter estimates; and (iii) online ANN deployment to update model parameters within the SDA-MPC loop. MATLAB/Simulink simulations show that ANN-based compensation can improve current tracking and THD under many mismatch conditions, although in some cases—particularly when inductance is overestimated—THD may increase relative to nominal operation. When parameters return to nominal values, the ANN adapts accordingly, steering the controller back toward baseline performance. The data-driven adaptation enhances robustness with modest computational overhead. Future work includes hardware-in-the-loop (HIL) testing and explicit experimental study of temperature-dependent effects. Full article
(This article belongs to the Section Intelligent Sensors)
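
The three-step procedure can be pictured with a small, self-contained sketch; the features, the surrogate data, and the scikit-learn MLP below are placeholders for the SDA-MPC data generation and the trained ANN, not the paper's implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# (i) data generation: sample parameter mismatches in +/-50% and a surrogate
#     "measured feature" vector that depends on them (stand-in for SDA-MPC runs)
mismatch = rng.uniform(-0.5, 0.5, size=(2000, 2))           # [d_flux, d_inductance]
features = np.column_stack([
    mismatch[:, 0] + 0.1 * mismatch[:, 1],                  # dq current-error proxy
    mismatch[:, 1] - 0.05 * mismatch[:, 0],                 # ripple / THD proxy
]) + 0.01 * rng.standard_normal((2000, 2))

# (ii) offline training: measured features -> parameter-shift estimates
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
ann.fit(features, mismatch)

# (iii) online deployment: correct the nominal model used inside the control loop
psi_nom, L_nom = 0.1, 2e-3
measured = np.array([[0.22, -0.14]])                        # hypothetical runtime features
d_psi, d_L = ann.predict(measured)[0]
print(f"corrected flux linkage: {psi_nom * (1 + d_psi):.4f} Wb")
print(f"corrected inductance:  {L_nom * (1 + d_L) * 1e3:.3f} mH")
```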

33 pages, 9246 KB  
Article
Optimized Model Predictive Controller Using Multi-Objective Whale Optimization Algorithm for Urban Rail Train Tracking Control
by Longda Wang, Lijie Wang and Yan Chen
Biomimetics 2026, 11(1), 60; https://doi.org/10.3390/biomimetics11010060 - 10 Jan 2026
Abstract
With the rapid development of urban rail transit, train operation control is required to meet increasingly stringent demands in terms of energy consumption, comfort, punctuality, and precise stopping. The optimization and tracking control of speed profiles are two critical issues in ensuring the performance of automatic train operation systems. However, conventional model predictive control (MPC) methods are highly dependent on parameter settings and show limited adaptability, while heuristic optimization approaches such as the whale optimization algorithm (WOA) often suffer from premature convergence and insufficient robustness. To overcome these limitations, this study proposes an optimized model predictive controller using the multi-objective whale optimization algorithm (MPC-MOWOA) for urban rail train tracking control. In the improved optimization algorithm, a nonlinear convergence mechanism and the Tchebycheff decomposition method are introduced to enhance convergence accuracy and population diversity, which enables effective optimization of the initial parameters of the MPC. During real-time operation, the MPC is further enhanced by integrating a fuzzy satisfaction function that adaptively adjusts the softening factor. In addition, the control coefficients are corrected online according to the speed error and its rate of change, thereby improving adaptability of the control system. Taking the section from Lvshun New Port to Tieshan Town on Dalian Metro Line 12 as the study case, the proposed control algorithm was deployed on a TMS320F28335 embedded processor platform, and hardware-in-the-loop simulation experiments (HILSEs) were conducted under the same simulation environment, a unified train dynamic model, consistent operating conditions, and an identical evaluation index system. The results indicate that, compared with the Fuzzy-PID control method, the proposed control strategy reduces the integral of time-weighted absolute error nearly by 39.6% and decreases energy consumption nearly by 5.9%, while punctuality, stopping accuracy, and comfort are improved nearly by 33.2%, 12.4%, and 7.1%, respectively. These results not only verify the superior performance of the proposed MPC-MOWOA, but also demonstrate its capability for real-time implementation on embedded processors, thereby overcoming the limitations of purely MATLAB-based offline simulations and exhibiting strong potential for practical engineering applications in urban rail transit. Full article
(This article belongs to the Section Biological Optimisation and Management)
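
A compact sketch of the optimization layer only: a whale-optimization loop with a nonlinear convergence factor and Tchebycheff scalarization, tuning two MPC weights on a synthetic bi-objective surrogate. The surrogate objectives and all constants are illustrative assumptions, not the paper's MPC-MOWOA.

```python
import numpy as np

rng = np.random.default_rng(1)

def objectives(x):
    q, r = x                                    # MPC weights on tracking error and control effort
    tracking = 1.0 / (0.1 + q) + 0.05 * r       # surrogate: larger q -> better tracking
    energy = 0.5 * q + 1.0 / (0.1 + r)          # surrogate: larger r -> smoother, cheaper control
    return np.array([tracking, energy])

def tchebycheff(f, z_star, w=(0.5, 0.5)):
    return np.max(np.asarray(w) * np.abs(f - z_star))

def mowoa(n_whales=20, iters=100, lb=0.01, ub=10.0):
    pos = rng.uniform(lb, ub, size=(n_whales, 2))
    z_star = np.min([objectives(p) for p in pos], axis=0)    # running ideal point
    best = min(pos, key=lambda p: tchebycheff(objectives(p), z_star)).copy()
    for t in range(iters):
        a = 2.0 * (1.0 - (t / iters) ** 2)      # nonlinear (quadratic) convergence factor
        for i in range(n_whales):
            r1, r2 = rng.random(2)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                pos[i] = best - A * np.abs(C * best - pos[i])            # encircling prey
            else:
                l = rng.uniform(-1, 1)
                pos[i] = np.abs(best - pos[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            pos[i] = np.clip(pos[i], lb, ub)
        z_star = np.minimum(z_star, np.min([objectives(p) for p in pos], axis=0))
        best = min(np.vstack([pos, best[None]]),
                   key=lambda p: tchebycheff(objectives(p), z_star)).copy()
    return best

print("tuned initial MPC weights (q, r):", np.round(mowoa(), 3))
```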

24 pages, 8857 KB  
Article
Cooperative Control and Energy Management for Autonomous Hybrid Electric Vehicles Using Machine Learning
by Jewaliddin Shaik, Sri Phani Krishna Karri, Anugula Rajamallaiah, Kishore Bingi and Ramani Kannan
Machines 2026, 14(1), 73; https://doi.org/10.3390/machines14010073 - 7 Jan 2026
Abstract
The growing deployment of connected and autonomous vehicles (CAVs) requires coordinated control strategies that jointly address safety, mobility, and energy efficiency. This paper presents a novel two-stage cooperative control framework for autonomous hybrid electric vehicle (HEV) platoons based on machine learning. In the first stage, a metric learning-based distributed model predictive control (ML-DMPC) strategy is proposed to enable cooperative longitudinal control among heterogeneous vehicles, explicitly incorporating inter-vehicle interactions to improve speed tracking, ride comfort, and platoon-level energy efficiency. In the second stage, a multi-agent twin-delayed deep deterministic policy gradient (MATD3) algorithm is developed for real-time energy management, achieving an optimal power split between the engine and battery while reducing Q-value overestimation and accelerating learning convergence. Simulation results across multiple standard driving cycles demonstrate that the proposed framework outperforms conventional distributed model predictive control (DMPC) and multi-agent deep deterministic policy gradient (MADDPG)-based methods in fuel economy, stability, and convergence speed, while maintaining battery state of charge (SOC) within safe limits. To facilitate future experimental validation, a dSPACE-based hardware-in-the-loop (HIL) architecture is designed to enable real-time deployment and testing of the proposed control framework. Full article
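
The flavor of the energy-management stage can be conveyed by the reward shaping alone; the sketch below scores an engine/battery power split on fuel use, SOC deviation, and SOC-limit violations. The cost coefficients and the one-line battery model are assumptions, not the authors' MATD3 setup.

```python
from dataclasses import dataclass

@dataclass
class HEVState:
    soc: float           # battery state of charge, 0..1
    demand_kw: float     # platoon-level power demand

def step(state: HEVState, engine_kw: float, dt_h: float = 1.0 / 3600.0):
    battery_kw = state.demand_kw - engine_kw               # battery covers the remainder
    capacity_kwh = 6.0
    soc = state.soc - battery_kw * dt_h / capacity_kwh     # discharging lowers SOC
    fuel_cost = 0.08 * engine_kw + 0.002 * engine_kw ** 2  # convex fuel-rate surrogate
    soc_penalty = 10.0 * (soc - 0.6) ** 2                  # keep SOC near 60%
    violation = 100.0 if not (0.2 <= soc <= 0.9) else 0.0  # hard SOC limits
    reward = -(fuel_cost + soc_penalty + violation)
    return HEVState(soc, state.demand_kw), reward

state = HEVState(soc=0.62, demand_kw=40.0)
state, reward = step(state, engine_kw=25.0)
print(f"next SOC = {state.soc:.4f}, reward = {reward:.3f}")
```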

32 pages, 7978 KB  
Article
A Digital Twin Approach for Spacecraft On-Board Software Development and Testing
by Andrea Colagrossi, Stefano Silvestrini, Andrea Brandonisio and Michèle Lavagna
Aerospace 2026, 13(1), 55; https://doi.org/10.3390/aerospace13010055 - 6 Jan 2026
Abstract
The increasing complexity of spacecraft On-Board Software (OBSW) necessitates advanced development and testing methodologies to ensure reliability and robustness. This paper presents a digital twin approach for the development and testing of embedded spacecraft software. The proposed electronic digital twin enables high-fidelity hardware and software simulations of spacecraft subsystems, facilitating a comprehensive validation framework. Through real-time execution, the digital twin supports dynamical simulations with possibility of failure injections, enabling the observation of software behavior under various nominal or fault conditions. This capability allows for thorough debugging and verification of critical software components, including Finite State Machines (FSM), Guidance, Navigation, and Control (GNC) algorithms, and platform and mode management logic. By providing an interactive and iterative environment for software validation in nominal and contingency scenarios, the digital twin reduces the need for extensive Hardware-in-the-Loop (HIL) testing, accelerating the software development life-cycle while improving reliability. The paper discusses the architecture and implementation of the digital twin, along with case studies based on a modular OBSW architecture, demonstrating its effectiveness in identifying and resolving software anomalies. This approach offers a cost-effective and scalable solution for spacecraft software development, enhancing mission safety and performance. Full article

28 pages, 12490 KB  
Article
A Full-Parameter Calibration Method for an RINS/CNS Integrated Navigation System in High-Altitude Drones
by Huanrui Zhang, Xiaoyue Zhang, Chunhua Cheng, Xinyi Lv and Chunxi Zhang
Vehicles 2026, 8(1), 11; https://doi.org/10.3390/vehicles8010011 - 5 Jan 2026
Abstract
High-altitude long-endurance (HALE) UAVs require navigation payloads that are both fully autonomous and lightweight. This paper presents a full-parameter calibration method for a dual-axis rotational-modulation RINS/CNS integrated system in which the IMU is mounted on a two-axis indexing mechanism and the reconnaissance camera is reused as the star sensor. We establish a unified error propagation model that simultaneously covers IMU device errors (bias, scale, cross-axis/installation), gimbal non-orthogonality and encoder angle errors, and camera exterior/interior parameters (EOPs/IOPs), including Brown–Conrady distortion. Building on this model, we design an error-decoupled calibration path that exploits (i) odd/even symmetry under inner-axis scans, (ii) basis switching via outer-axis waypoints, and (iii) frequency tagging through rate-limited triangular motions. A piecewise-constant system (PWCS)/SVD analysis quantifies segment-wise observability and guides trajectory tuning. Simulation and hardware-in-the-loop results show that all parameter groups converge primarily within the segments that excite them; the final relative errors are typically ≤5% in simulation and 6–16% with real IMU/gimbal data and catalog-based star pixels. Full article
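
The PWCS/SVD observability step can be illustrated independently of the full error model: stack each segment's observability matrix and count the singular values above a tolerance. The random state and measurement matrices below are stand-ins for the real RINS/CNS error dynamics, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

def segment_observability(F, H, n_steps=5):
    """Observability matrix [H; HF; HF^2; ...] for one piecewise-constant segment."""
    rows, Fi = [], np.eye(F.shape[0])
    for _ in range(n_steps):
        rows.append(H @ Fi)
        Fi = Fi @ F
    return np.vstack(rows)

n_states = 12                                   # e.g. biases, scale factors, mount angles
segments = []
for _ in range(3):                              # three calibration maneuvers
    F = np.eye(n_states) + 0.01 * rng.standard_normal((n_states, n_states))
    H = rng.standard_normal((4, n_states))      # star-sensor / encoder measurement map
    segments.append(segment_observability(F, H))

total = np.vstack(segments)
sv = np.linalg.svd(total, compute_uv=False)
rank = int(np.sum(sv > 1e-6 * sv[0]))
print(f"observable directions: {rank} of {n_states}")
print("weakest singular values:", np.round(sv[-3:], 4))
```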

18 pages, 3673 KB  
Article
Voltage Regulation of a DC–DC Boost Converter Using a Vertex-Based Convex PI Controller
by Hector Hidalgo, Leonel Estrada, Nimrod Vázquez, Daniel Mejia, Héctor Huerta and José Eli Eduardo González-Durán
Technologies 2026, 14(1), 30; https://doi.org/10.3390/technologies14010030 - 1 Jan 2026
Abstract
The regulation of output voltage in power converters often demands nonlinear control techniques; however, their implementation is challenging when deployed on low-cost hardware with limited computational resources. To address this difficulty, the modeling via the sector nonlinearity technique is adopted to represent the converter dynamics as a convex combination of linear vertex models. Building on this representation, this article proposes a vertex-based convex PI controller that significantly reduces the required online computations compared to conventional convex controllers relying on full-state feedback. In the proposed scheme, the inductor current is used solely to evaluate the weighting functions, avoiding the need to compute control gains associated with this state. The effectiveness of the method is demonstrated through offline simulations and validated using hardware-in-the-loop experiments. Full article
(This article belongs to the Special Issue Innovative Power System Technologies)
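
A minimal sketch of the vertex-blending idea, with assumed vertex gains and current-sector bounds: the measured inductor current only sets the weighting functions, and the applied PI gains are their convex combination.

```python
import numpy as np

I_MIN, I_MAX = 0.5, 5.0                      # assumed inductor-current sector bounds [A]
KP = np.array([0.02, 0.08])                  # vertex proportional gains (assumed)
KI = np.array([5.0, 20.0])                   # vertex integral gains (assumed)

def convex_pi(v_ref, v_out, i_L, integ, dt):
    # Membership (weighting) functions from the sector-nonlinearity bounds
    h1 = np.clip((I_MAX - i_L) / (I_MAX - I_MIN), 0.0, 1.0)
    h2 = 1.0 - h1
    kp = h1 * KP[0] + h2 * KP[1]             # convex combination of vertex gains
    ki = h1 * KI[0] + h2 * KI[1]
    err = v_ref - v_out
    integ += err * dt
    duty = np.clip(kp * err + ki * integ, 0.0, 0.95)
    return duty, integ

duty, integ = convex_pi(v_ref=48.0, v_out=42.0, i_L=3.2, integ=0.0, dt=1e-4)
print(f"duty cycle command: {duty:.3f}")
```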

29 pages, 4094 KB  
Article
Hybrid LSTM–DNN Architecture with Low-Discrepancy Hypercube Sampling for Adaptive Forecasting and Data Reliability Control in Metallurgical Information-Control Systems
by Jasur Sevinov, Barnokhon Temerbekova, Gulnora Bekimbetova, Ulugbek Mamanazarov and Bakhodir Bekimbetov
Processes 2026, 14(1), 147; https://doi.org/10.3390/pr14010147 - 1 Jan 2026
Abstract
The study focuses on the design of an intelligent information-control system (ICS) for metallurgical production, aimed at robust forecasting of technological parameters and automatic self-adaptation under noise, anomalies, and data drift. The proposed architecture integrates a hybrid LSTM–DNN model with low-discrepancy hypercube sampling using Sobol and Halton sequences to ensure uniform coverage of operating conditions and the hyperparameter space. The processing pipeline includes preprocessing and temporal synchronization of measurements, a parameter identification module, anomaly detection and correction using an ε-threshold scheme, and a decision-making and control loop. In simulation scenarios modeling the dynamics of temperature, pressure, level, and flow (1 min sampling interval, injected anomalies, and measurement noise), the hybrid model outperformed GRU and CNN architectures: a determination coefficient of R2 > 0.92 was achieved for key indicators, MAE and RMSE improved by 7–15%, and the proportion of unreliable measurements after correction decreased to <2% (compared with 8–12% without correction). The experiments also demonstrated accelerated adaptation during regime changes. The scientific novelty lies in combining recurrent memory and deep nonlinear approximation with deterministic experimental design in the hypercube of states and hyperparameters, enabling reproducible self-adaptation of the ICS and increased noise robustness without upgrading the measurement hardware. Modern metallurgical information-control systems operate under non-stationary regimes and limited measurement reliability, which reduces the robustness of conventional forecasting and decision-support approaches. To address this issue, a hybrid LSTM–DNN architecture combined with low-discrepancy hypercube probing and anomaly-aware data correction is proposed. The proposed approach is distinguished by the integration of hybrid neural forecasting, deterministic hypercube-based adaptation, and anomaly-aware data correction within a unified information-control loop for non-stationary industrial processes. Full article
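
The low-discrepancy sampling ingredient is easy to reproduce on its own; the sketch below uses a hand-rolled Halton sequence to cover an assumed LSTM-DNN hyperparameter box (units, dropout, learning rate) more evenly than uniform random draws. The ranges are illustrative, not the paper's search space.

```python
def halton(index: int, base: int) -> float:
    """Radical inverse of `index` in the given prime base (one Halton coordinate)."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def halton_point(i: int, bases=(2, 3, 5)) -> tuple:
    return tuple(halton(i, b) for b in bases)

def to_hyperparams(u):
    # Map a unit-cube point to (LSTM units, dropout, learning rate) candidates
    units = int(16 + u[0] * (256 - 16))
    dropout = 0.05 + u[1] * 0.45
    lr = 10 ** (-4 + u[2] * 2)               # log-uniform in [1e-4, 1e-2]
    return units, round(dropout, 3), round(lr, 5)

for i in range(1, 6):
    print(to_hyperparams(halton_point(i)))
```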

16 pages, 3274 KB  
Article
An Adaptive Inertia and Damping Control Strategy for Virtual Synchronous Generators to Enhance Transient Performance
by Wenzuo Tang, Bo Li, Xianqi Shao, Yun Ye, Yue Yu and Jiawei Chen
Energies 2026, 19(1), 204; https://doi.org/10.3390/en19010204 - 30 Dec 2025
Abstract
Virtual synchronous generator (VSG) technology introduces synthetic rotational inertia and damping into inverter-based systems, thereby enhancing regulation performance under grid-connected operation. However, the output characteristics of VSGs are strongly influenced by virtual inertia and damping. This paper develops a self-tuning inertia–damping coordination mechanism for VSGs. The coupling between virtual inertia and damping with respect to grid power quality is systematically investigated, and a power-angle dynamic response model for synchronous generators (SGs) under extreme operating conditions is established. Building on these results, an improved adaptive control strategy for the VSG’s virtual inertia and damping is proposed. The proposed strategy detects changes in frequency and load power, enabling adaptive tuning of virtual inertia and damping in response to system variations, thereby reducing frequency overshoot while accelerating the dynamic response. The effectiveness of the proposed strategy is validated by hardware-in-the-loop real-time simulations. Full article
(This article belongs to the Special Issue Digital Modeling, Operation and Control of Sustainable Energy Systems)
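
One way to picture the adaptive law, under assumptions that are not the paper's exact model, is a lumped per-unit swing equation whose inertia is raised while the frequency deviation is still growing and whose damping scales with the size of the deviation:

```python
import numpy as np

W0 = 2 * np.pi * 50.0                   # nominal angular frequency
J0, D0 = 0.05, 10.0                     # nominal virtual inertia and damping (assumed, lumped)
KJ, KD = 2.0, 40.0                      # adaptation gains (assumed)

def vsg_step(omega, prev_accel, p_ref, p_elec, dt):
    d_omega = omega - W0
    # Add inertia while the deviation is still growing, shed it while recovering;
    # raise damping with the size of the deviation
    if d_omega * prev_accel > 0:
        J = min(J0 + KJ * abs(prev_accel), 5 * J0)
    else:
        J = max(J0 - KJ * abs(prev_accel), 0.2 * J0)
    D = D0 + KD * abs(d_omega)
    accel = (p_ref - p_elec - D * d_omega) / J    # lumped swing equation
    return omega + accel * dt, accel, J, D

omega, accel = W0, 0.0
for _ in range(2000):                   # 0.2 s of a 0.2 p.u. power deficit
    omega, accel, J, D = vsg_step(omega, accel, p_ref=0.5, p_elec=0.7, dt=1e-4)
print(f"frequency deviation: {(omega - W0) / (2 * np.pi):+.4f} Hz   (J = {J:.3f}, D = {D:.1f})")
```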

19 pages, 1730 KB  
Article
Optimizing EV Battery Charging Using Fuzzy Logic in the Presence of Uncertainties and Unknown Parameters
by Minhaz Uddin Ahmed, Md Ohirul Qays, Stefan Lachowicz and Parvez Mahmud
Electronics 2026, 15(1), 177; https://doi.org/10.3390/electronics15010177 - 30 Dec 2025
Abstract
The growing use of electric vehicles (EVs) creates challenges in designing charging systems that are smart, dependable, and efficient, especially when environmental conditions change. This research proposes a fuzzy-logic-based PID control strategy integrated into a photovoltaic (PV) powered EV charging system to address uncertainties such as fluctuating solar irradiance, grid instability, and dynamic load demands. A MATLAB-R2023a/Simulink-R2023a model was developed to simulate the charging process using real-time adaptive control. The fuzzy logic controller (FLC) automatically updates the PID gains by evaluating the error and how quickly the error is changing. This adaptive approach enables efficient voltage regulation and improved system stability. Simulation results demonstrate that the proposed fuzzy–PID controller effectively maintains a steady charging voltage and minimizes power losses by modulating switching frequency. Additionally, the system shows resilience to rapid changes in irradiance and load, improving energy efficiency and extending battery life. This hybrid approach outperforms conventional PID and static control methods, offering enhanced adaptability for renewable-integrated EV infrastructure. The study contributes to sustainable mobility solutions by optimizing the interaction between solar energy and EV charging, paving the way for smarter, grid-friendly, and environmentally responsible charging networks. These findings support the potential for the real-world deployment of intelligent controllers in EV charging systems powered by renewable energy sources. This study is purely simulation-based; experimental validation via hardware-in-the-loop (HIL) or prototype development is reserved for future work. Full article
(This article belongs to the Special Issue Data-Related Challenges in Machine Learning: Theory and Application)
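
The gain-scheduling idea can be sketched with two triangular memberships per input; the breakpoints and rule weights below are illustrative, not the paper's rule base.

```python
import numpy as np

def memberships(x, span):
    """Degrees of 'small' and 'large' for |x| on a triangular partition of [0, span]."""
    large = np.clip(abs(x) / span, 0.0, 1.0)
    return 1.0 - large, large

def fuzzy_pid_gains(err, derr, kp0=0.8, ki0=20.0, kd0=0.01):
    e_small, e_large = memberships(err, span=5.0)      # voltage error [V]
    d_small, d_large = memberships(derr, span=50.0)    # error rate [V/s]
    # Rule table: large error boosts Kp, small error with small rate boosts Ki,
    # large rate boosts Kd to damp the transient
    kp = kp0 * (1.0 + 1.5 * e_large)
    ki = ki0 * (1.0 + 1.0 * e_small * d_small)
    kd = kd0 * (1.0 + 2.0 * d_large)
    return kp, ki, kd

print(fuzzy_pid_gains(err=4.2, derr=-30.0))   # large error, moderate slew
print(fuzzy_pid_gains(err=0.3, derr=-2.0))    # near the setpoint
```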

37 pages, 8037 KB  
Article
Research on a Lane Changing Obstacle Avoidance Control Strategy for Hub Motor-Driven Vehicles
by Jiaqi Wan, Tianqi Yang, Zitai Xiao, Jijie Wang, Shuiyan Yang, Tong Niu and Fuwu Yan
Mathematics 2026, 14(1), 139; https://doi.org/10.3390/math14010139 - 29 Dec 2025
Abstract
Hub motor-driven vehicles can control vehicle attitude by regulating the speed and torque of four wheels, supporting safe and stable lane changing and obstacle avoidance. However, under high-speed scenarios, these vehicles often suffer from poor stability, limited comfort, and inadequate trajectory tracking accuracy during lane changing and obstacle avoidance operations. To address these challenges, this study proposes a lane changing obstacle avoidance control strategy for hub motor-driven vehicles based on collision risk prediction. A fuzzy controller featuring a variable weight objective function is designed to balance lane changing efficiency and ride comfort, thereby generating an optimal lane changing and obstacle avoidance trajectory. Furthermore, a linear time-varying model predictive controller (LTV-MPC) is developed, which adaptively adjusts both the weighting coefficient of lateral displacement error in the objective function and the prediction horizon of the controller, enabling dynamic tuning of vehicle trajectory tracking accuracy. A dSPACE hardware-in-the-loop (HIL) platform was established to conduct simulations under typical obstacle avoidance scenarios. The simulation results show that under two easily destabilized conditions—high-adhesion, high-speed, large-curvature, and low-adhesion, medium-speed, large-curvature maneuvers—the proposed optimized control strategy limits the maximum lateral trajectory tracking error to 0.116 m and 0.143 m, representing reductions of 58.6% and 79.6% compared with the baseline control strategy. These results demonstrate that the proposed method enhances trajectory tracking accuracy and stability during lane changing and obstacle avoidance maneuvers. Full article
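
A loose sketch of the adaptation logic only (the full LTV-MPC is omitted): the lateral-error weight and the prediction horizon are rescheduled from the current tracking error and speed, with breakpoints that are assumptions rather than the paper's tuning.

```python
import numpy as np

def schedule_mpc(lateral_err_m: float, speed_mps: float):
    # Larger tracking error -> heavier penalty on lateral displacement error
    q_y = np.interp(abs(lateral_err_m), [0.0, 0.1, 0.3], [50.0, 200.0, 800.0])
    # Higher speed -> longer prediction horizon (more look-ahead distance)
    horizon = int(np.interp(speed_mps, [10.0, 20.0, 33.0], [10, 18, 30]))
    return q_y, horizon

for err, v in [(0.02, 15.0), (0.12, 25.0), (0.25, 33.0)]:
    q_y, Np = schedule_mpc(err, v)
    print(f"e_y = {err:.2f} m, v = {v:.0f} m/s -> Q_y = {q_y:.0f}, Np = {Np}")
```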

47 pages, 6988 KB  
Article
A Hierarchical Predictive-Adaptive Control Framework for State-of-Charge Balancing in Mini-Grids Using Deep Reinforcement Learning
by Iacovos Ioannou, Saher Javaid, Yasuo Tan and Vasos Vassiliou
Electronics 2026, 15(1), 61; https://doi.org/10.3390/electronics15010061 - 23 Dec 2025
Abstract
State-of-charge (SoC) balancing across multiple battery energy storage systems (BESS) is a central challenge in renewable-rich mini-grids. Heterogeneous battery capacities, differing states of health, stochastic renewable generation, and variable loads create a high-dimensional uncertain control problem. Conventional droop-based SoC balancing strategies are decentralized and computationally light but fundamentally reactive and limited, whereas model predictive control (MPC) is insightful but computationally intensive and prone to modeling errors. This paper proposes a Hierarchical Predictive–Adaptive Control (HPAC) framework for SoC balancing in mini-grids using deep reinforcement learning. The framework consists of two synergistic layers operating on different time scales. A long-horizon Predictive Engine, implemented as a federated Transformer network, provides multi-horizon probabilistic forecasts of net load, enabling multiple mini-grids to collaboratively train a high-capacity model without sharing raw data. A fast-timescale Adaptive Controller, implemented as a Soft Actor-Critic (SAC) agent, uses these forecasts to make real-time charge/discharge decisions for each BESS unit. The forecasts are used both to augment the agent’s state representation and to dynamically shape a multi-objective reward function that balances SoC, economic performance, degradation-aware operation, and voltage stability. The paper formulates SoC balancing as a Markov decision process, details the SAC-based control architecture, and presents a comprehensive evaluation using a MATLAB-(R2025a)-based digital-twin simulation environment. A rigorous benchmarking study compares HPAC against fourteen representative controllers spanning rule-based, MPC, and various DRL paradigms. Sensitivity analysis on reward weight selection and ablation studies isolating the contributions of forecasting and dynamic reward shaping are conducted. Stress-test scenarios, including high-volatility net-load conditions and communication impairments, demonstrate the robustness of the approach. Results show that HPAC achieves near-minimal operating cost with essentially zero SoC variance and the lowest voltage variance among all compared controllers, while maintaining moderate energy throughput that implicitly preserves battery lifetime. Finally, the paper discusses a pathway from simulation to hardware-in-the-loop testing and a cloud-edge deployment architecture for practical, real-time deployment in real-world mini-grids. Full article
(This article belongs to the Special Issue Smart Power System Optimization, Operation, and Control)
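
The forecast-shaped reward can be sketched in a few lines; the weights below, and the reduction of the Transformer forecast to a single volatility figure, are illustrative assumptions rather than the HPAC implementation.

```python
import numpy as np

def hpac_reward(soc, power_kw, price, voltage_pu, forecast_sigma_kw):
    soc = np.asarray(soc)
    w_balance = 5.0 * (1.0 + np.tanh(forecast_sigma_kw / 10.0))  # forecast-shaped weight
    r_balance = -w_balance * np.var(soc)                         # SoC imbalance across BESS units
    r_cost = -price * max(np.sum(power_kw), 0.0) / 60.0          # grid-import cost per minute
    r_wear = -0.01 * np.sum(np.abs(power_kw))                    # throughput / degradation proxy
    r_volt = -20.0 * (voltage_pu - 1.0) ** 2                     # voltage-deviation penalty
    return r_balance + r_cost + r_wear + r_volt

print(hpac_reward(soc=[0.52, 0.61, 0.58], power_kw=np.array([-4.0, 6.0, 2.0]),
                  price=0.25, voltage_pu=0.985, forecast_sigma_kw=12.0))
```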

29 pages, 29485 KB  
Article
FPGA-Based Dual Learning Model for Wheel Speed Sensor Fault Detection in ABS Systems Using HIL Simulations
by Farshideh Kordi, Paul Fortier and Amine Miled
Electronics 2026, 15(1), 58; https://doi.org/10.3390/electronics15010058 - 23 Dec 2025
Abstract
The rapid evolution of modern vehicles into intelligent and interconnected systems presents new complexities in both functional safety and cybersecurity. In this context, ensuring the reliability and integrity of critical sensor data, such as wheel speed inputs for anti-lock brake systems (ABS), is essential. Effective detection of wheel speed sensor faults not only improves functional safety, but also plays a vital role in keeping system resilience against potential cyber–physical threats. Although data-driven approaches have gained popularity for system development due to their ability to extract meaningful patterns from historical data, a major limitation is the lack of diverse and representative faulty datasets. This study proposes a novel dual learning model, based on Temporal Convolutional Networks (TCN), designed to accurately distinguish between normal and faulty wheel speed sensor behavior within a hardware-in-the-loop (HIL) simulation platform implemented on an FPGA. To address dataset limitations, a TruckSim–MATLAB/Simulink co-simulation environment is used to generate realistic datasets under normal operation and eight representative fault scenarios, yielding up to 5000 labeled sequences (balanced between normal and faulty behaviors) at a sampling rate of 60 Hz. Two TCN models are trained independently to learn normal and faulty dynamics, and fault decisions are made by comparing the reconstruction errors (MSE and MAE) of both models, thus avoiding manually tuned thresholds. On a test set of 1000 sequences (500 normal and 500 faulty) from the 5000 sample configuration, the proposed dual TCN framework achieves a detection accuracy of 97.8%, a precision of 96.5%, a recall of 98.2%, and an F1-score of 97.3%, outperforming a single TCN baseline, which achieves 91.4% accuracy and an 88.9% F1-score. The complete dual TCN architecture is implemented on a Xilinx ZCU102 FPGA evaluation kit (AMD, Santa Clara, CA, USA), while supporting real-time inference in the HIL loop. These results demonstrate that the proposed approach provides accurate, low-latency fault detection suitable for safety-critical ABS applications and contributes to improving both functional safety and cyber-resilience of braking systems. Full article
(This article belongs to the Special Issue Artificial Intelligence and Microsystems)
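
The thresholdless decision rule is simple to state in code: whichever model reconstructs the incoming 60 Hz window with the lower combined MSE + MAE wins. The reconstructions below are synthetic stand-ins for real TCN outputs.

```python
import numpy as np

def errors(window, reconstruction):
    residual = window - reconstruction
    return float(np.mean(residual ** 2)), float(np.mean(np.abs(residual)))  # MSE, MAE

def classify(window, recon_from_normal_tcn, recon_from_fault_tcn):
    mse_n, mae_n = errors(window, recon_from_normal_tcn)
    mse_f, mae_f = errors(window, recon_from_fault_tcn)
    return "faulty" if (mse_f + mae_f) < (mse_n + mae_n) else "normal"

t = np.linspace(0, 1, 60)                                  # one second at 60 Hz
window = 20 + 0.5 * np.sin(2 * np.pi * 3 * t)              # wheel speed [m/s]
window[30:36] = 0.0                                        # injected sensor dropout

recon_normal = 20 + 0.5 * np.sin(2 * np.pi * 3 * t)        # normal-model fit cannot explain the dropout
recon_fault = window + 0.05 * np.random.default_rng(0).standard_normal(60)  # fault model fits it

print(classify(window, recon_normal, recon_fault))         # -> faulty
```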

22 pages, 1746 KB  
Article
A BFS-Based DEVS Simulation Kernel for HDL-Compatible Simulation
by Bo Seung Kwon, Young Shin Han and Jong Sik Lee
Electronics 2026, 15(1), 48; https://doi.org/10.3390/electronics15010048 - 23 Dec 2025
Abstract
The Discrete Event System Specification (DEVS) formalism provides a mathematical foundation for modeling hierarchical discrete-event systems. However, the Depth-First Search (DFS) scheduling used in the classical DEVS abstract simulator conflicts with the concurrency semantics of Hardware Description Language (HDL) simulators such as Verilog or VHDL. This mismatch induces timing distortions, including pipeline skew and zero-time feedback loops. To address these limitations, this study proposes a new DEVS simulation kernel that adopts Breadth-First Search (BFS) scheduling, integrating the delta-round concept. This approach employs an event-parking mechanism that separates event computation from application, structurally aligning with HDL’s Active–NBA–Reactive phases and enabling semantically simultaneous updates without introducing additional ε-time. Case studies demonstrate that the proposed BFS-based DEVS kernel eliminates timing discrepancies in pipeline and feedback-loop structures and establishes a formal foundation for semantic alignment between DEVS and HDL simulators. Full article
(This article belongs to the Special Issue New Advances in Embedded Software and Applications)
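
A toy rendering of the delta-round / event-parking idea (not the paper's kernel): within one simulated time point, all components first compute from the current state, the results are parked, and only then are they applied simultaneously, mirroring HDL non-blocking assignment and avoiding the pipeline skew of depth-first immediate propagation.

```python
def delta_round(state):
    """One BFS-style round: compute phase (no mutation), then apply phase."""
    parked = {
        "stage1": state["input"],      # each register reads the *old* upstream value
        "stage2": state["stage1"],
        "stage3": state["stage2"],
    }
    state.update(parked)               # simultaneous, non-blocking-style update
    return state

state = {"input": 0, "stage1": None, "stage2": None, "stage3": None}
for cycle in range(4):
    state["input"] = cycle + 1         # a new datum enters the pipeline each cycle
    state = delta_round(state)
    print(cycle, state)
# Data advances exactly one stage per cycle; a DFS/immediate-update scheduler would
# let the same datum fall through several stages within a single cycle (skew).
```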

28 pages, 5859 KB  
Article
Adaptive Gain Twisting Sliding Mode Controller Design for Flexible Manipulator Joints with Variable Stiffness
by Shijie Zhang, Tianle Yang, Hui Zhang and Jilong Wang
Actuators 2026, 15(1), 7; https://doi.org/10.3390/act15010007 - 22 Dec 2025
Abstract
This paper proposes an adaptive gain twisting sliding-mode control (AGTSMC) strategy for trapezoidal variable-stiffness joints (TVSJs) to achieve accurate trajectory tracking under both matched and mismatched uncertainties. The TVSJ employs a compact trapezoidal leaf spring with grooved bearing followers (GBFs), enabling wide-range stiffness modulation through low-friction rolling contact. To address the strong nonlinearities and unmodeled dynamics introduced by stiffness variation, a Lyapunov-based adaptive twisting controller is developed, where the gains are automatically adjusted without conservative overestimation. A second-order sliding-mode differentiator is integrated to estimate velocity and disturbance terms in finite time using only position measurements, effectively reducing chattering. The proposed controller guarantees finite-time stability of the closed-loop system despite bounded uncertainties and measurement noise. Extensive simulations and hardware-in-the-loop experiments on a TVSJ platform validate the method. Compared with conventional sliding mode controller (CSMC), terminal sliding mode controller (TSMC), and fixed-gain twisting control (TC), the AGTSMC achieves faster convergence, lower steady-state error, and improved vibration suppression across low, high, and variable stiffness modes. Experimental results confirm that the proposed approach enhances tracking accuracy and energy efficiency while maintaining robustness under large stiffness variations. Full article
(This article belongs to the Section Actuators for Robotics)
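
Under an assumed double-integrator joint model and a simplified adaptation law, the twisting control law and its gain adaptation can be sketched as follows; all constants are illustrative, and the paper's sliding-mode differentiator is replaced by a directly measured rate.

```python
import numpy as np

def simulate(dt=1e-3, steps=5000):
    x, v = 0.5, 0.0                          # joint position error and its rate
    r1, ratio, eps = 1.0, 0.5, 0.02          # adaptive major gain, gain ratio, boundary layer
    for k in range(steps):
        # Grow the gain while the error is outside the boundary layer, bleed it off
        # (down to a floor) once inside, avoiding conservative overestimation
        if abs(x) > eps:
            r1 += 5.0 * abs(x) * dt
        else:
            r1 = max(0.98 * r1, 0.8)
        r2 = ratio * r1
        u = -r1 * np.sign(x) - r2 * np.sign(v)    # twisting control law
        d = 0.3 * np.sin(5 * k * dt)              # bounded matched disturbance
        v += (u + d) * dt                         # double-integrator joint model
        x += v * dt
    return x, v, r1

x, v, r1 = simulate()
print(f"final error {x:+.4f}, rate {v:+.4f}, adapted gain {r1:.2f}")
```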
