Search Results (1,890)

Search Parameters:
Keywords = hardware system improvement

37 pages, 10966 KB  
Article
Contextual Real-Time Optimization on FPGA by Dynamic Selection of Chaotic Maps and Adaptive Metaheuristics
by Rabab Ouchker, Hamza Tahiri, Ismail Mchichou, Mohamed Amine Tahiri, Hicham Amakdouf and Mhamed Sayyouri
Appl. Sci. 2025, 15(19), 10695; https://doi.org/10.3390/app151910695 - 3 Oct 2025
Abstract
In dynamic and information-rich contexts, systems must be capable of making instantaneous, context-aware decisions. Such scenarios require optimization methods that are both fast and flexible. This paper introduces an innovative hardware-based intelligent optimization framework, deployed on FPGAs, designed to support autonomous decisions in real-time systems. In contrast to conventional methods based on a single chaotic map, our scheme brings together six separate chaotic generators in simultaneous operation, orchestrated by an adaptive voting system based on past results. The system, in conjunction with the Secretary Bird Optimization Algorithm (SBOA), constantly adjusts its optimization approach according to the changing profile of the objective function. This delivers first-rate, timely solutions with improved convergence, resistance to local minima, and a high degree of adaptability to a variety of decision-making contexts. Simulations carried out on reference standards and engineering problems have demonstrated the scalability, responsiveness, and efficiency of the proposed model. These characteristics make it particularly suitable for use in embedded intelligence applications in sectors such as intelligent production, robotics, and IoT-based infrastructures. The suggested solution was tested using post-synthesis simulations on Vivado 2022.2 and evaluated on three concrete engineering challenges: welded beam design, pressure vessel design, and tension/compression spring refinement. In each situation, the adaptive selection process dynamically determined the most suitable chaotic map, such as the logistic map for the Welded Beam Design Problem (WBDP) and the Tent map for the Pressure Vessel Design Problem (PVDP). This led to ideal results that exceed both conventional static methods and recent references in the literature.
The post-synthesis results on the Nexys 4 DDR (Artix-7 XC7A100T, Digilent Inc., Pullman, WA, USA) show that the initial Q16.16 implementation exceeded the device resources (128% LUTs and 100% DSPs), whereas the optimized Q4.8 representation achieved feasible deployment with 80% LUT utilization, 72% DSP usage, and 3% FF occupancy. This adjustment reduced resource consumption by more than 25% while maintaining sufficient computational accuracy. Full article
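The resource figures above hinge on fixed-point word length: the Q16.16 datapath overflowed the Artix-7, while Q4.8 fit. A minimal Python sketch (illustrative only, not the authors' HDL) of how re-quantizing a chaotic map onto a Q-format grid models that precision/cost trade-off:

```python
def logistic(x, r=3.99):
    # Classic logistic map; chaotic for r near 4.
    return r * x * (1.0 - x)

def tent(x, mu=1.99):
    # Tent map; chaotic for mu near 2.
    return mu * x if x < 0.5 else mu * (1.0 - x)

def to_q(x, frac_bits):
    # Snap a value to a fixed-point grid with `frac_bits` fractional bits,
    # mimicking the rounding a hardware datapath applies every cycle.
    scale = 1 << frac_bits
    return round(x * scale) / scale

def chaotic_sequence(map_fn, x0, n, frac_bits=8):
    # Iterate the map, re-quantizing each state as Q-format hardware would.
    seq, x = [], to_q(x0, frac_bits)
    for _ in range(n):
        x = to_q(map_fn(x), frac_bits)
        seq.append(x)
    return seq

# Q4.8-style grid (8 fractional bits) vs. Q16.16-style grid (16 fractional bits)
coarse = chaotic_sequence(logistic, 0.31, 50, frac_bits=8)
fine = chaotic_sequence(logistic, 0.31, 50, frac_bits=16)
```

Comparing `coarse` and `fine` shows how the narrower grid perturbs the trajectory while keeping each state in a much smaller word, which is what makes the Q4.8 datapath cheaper in LUTs and DSPs.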
21 pages, 3850 KB  
Article
Controlling AGV While Docking Based on the Fuzzy Rule Inference System
by Damian Grzechca, Łukasz Gola, Michał Grzebinoga, Adam Ziębiński, Krzysztof Paszek and Lukas Chruszczyk
Sensors 2025, 25(19), 6108; https://doi.org/10.3390/s25196108 - 3 Oct 2025
Abstract
Accurate docking of Autonomous Guided Vehicles (AGVs) is a critical requirement for efficient automated production systems in Industry 4.0, particularly for collaborative tasks with robotic arms that have a limited working range. This paper introduces a cost-effective software-upgrade solution to enhance the precision of the final docking phase without requiring new hardware. Our approach is based on a two-stage strategy: first, a switch from a global dead reckoning system to a local navigation scheme is triggered near the docking station; second, a dedicated Takagi-Sugeno Fuzzy Logic Controller (FLC) guides the AGV to its final position with high accuracy. The core novelty of our FLC is its implementation as a gain-scheduling lookup table (LUT), which synthesizes critical state variables—heading error and distance error—from limited proximity sensor data to robustly handle positional uncertainty and environmental variations. This method directly addresses the inadequacies of traditional odometry, whose cumulative errors become unacceptable at the critical docking point. For experimental validation, we assume the global navigation delivers the AGV to a general switching point near the assembly station with an unknown true pose. We detail the design of the fuzzy controller and present experimental results that demonstrate a significant improvement, achieving repeatable docking accuracy within industrially acceptable tolerances. Full article
(This article belongs to the Section Intelligent Sensors)
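The abstract describes a zero-order Takagi-Sugeno controller realized as a gain-scheduling lookup table over heading error and distance error. A minimal sketch of that idea (the membership shapes, rule table, and gains below are invented for illustration, not taken from the paper):

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical linguistic terms for heading error (rad) and distance error (m).
HEADING = {"neg": (-1.0, -0.5, 0.0), "zero": (-0.5, 0.0, 0.5), "pos": (0.0, 0.5, 1.0)}
DIST = {"near": (-0.1, 0.0, 0.5), "far": (0.0, 0.5, 1.0)}

# Rule table: (heading term, distance term) -> crisp steering consequent.
# Zero-order Takagi-Sugeno: each consequent is a constant, like a LUT entry.
RULES = {
    ("neg", "near"): 0.3, ("neg", "far"): 0.6,
    ("zero", "near"): 0.0, ("zero", "far"): 0.0,
    ("pos", "near"): -0.3, ("pos", "far"): -0.6,
}

def ts_steering(heading_err, dist_err):
    # Weighted average of rule consequents (product t-norm for rule firing).
    num = den = 0.0
    for (h, d), u in RULES.items():
        w = tri(heading_err, *HEADING[h]) * tri(dist_err, *DIST[d])
        num += w * u
        den += w
    return num / den if den else 0.0
```

Because every consequent is a constant, the whole controller reduces to a weighted blend of table entries, which is what makes a software-only upgrade on existing AGV hardware plausible.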
17 pages, 6267 KB  
Article
Local and Remote Digital Pre-Distortion for 5G Power Amplifiers with Safe Deep Reinforcement Learning
by Christian Spano, Damiano Badini, Lorenzo Cazzella and Matteo Matteucci
Sensors 2025, 25(19), 6102; https://doi.org/10.3390/s25196102 - 3 Oct 2025
Abstract
The demand for higher data rates and energy efficiency in wireless communication systems drives power amplifiers (PAs) into nonlinear operation, causing signal distortions that hinder performance. Digital Pre-Distortion (DPD) addresses these distortions, but existing systems face challenges with complexity, adaptability, and resource limitations. This paper introduces DRL-DPD, a Deep Reinforcement Learning-based solution for DPD that aims to reduce computational burden, improve adaptation to dynamic environments, and minimize resource consumption. To ensure safety and regulatory compliance, we integrate an ad-hoc Safe Reinforcement Learning algorithm, CRE-DDPG (Cautious-Recoverable-Exploration Deep Deterministic Policy Gradient), which prevents ACLR measurements from falling below safety thresholds. Simulations and hardware experiments demonstrate the potential of DRL-DPD with CRE-DDPG to surpass current DPD limitations in both local and remote configurations, paving the way for more efficient communication systems, especially in the context of 5G and beyond. Full article
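The paper's DRL-DPD learns the predistorter, but the principle it exploits is the classical one: apply the PA's approximate inverse before the PA so the cascade is linear. A toy memoryless sketch of that principle (the cubic PA model and its coefficient are invented; no reinforcement learning is shown):

```python
def pa(x):
    # Toy memoryless PA model with cubic compression (not the paper's model).
    return x - 0.1 * x * abs(x) ** 2

def predistort(x):
    # First-order inverse of the cubic term: pre-expand what the PA compresses.
    return x + 0.1 * x * abs(x) ** 2

signal = [0.05 * k for k in range(-10, 11)]  # amplitudes in [-0.5, 0.5]
raw_err = max(abs(pa(x) - x) for x in signal)           # distortion without DPD
dpd_err = max(abs(pa(predistort(x)) - x) for x in signal)  # with DPD
```

The residual `dpd_err` is far smaller than `raw_err`; a learned DPD such as DRL-DPD effectively searches for this inverse adaptively while respecting ACLR safety constraints.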
26 pages, 4563 KB  
Article
Personalized Smart Home Automation Using Machine Learning: Predicting User Activities
by Mark M. Gad, Walaa Gad, Tamer Abdelkader and Kshirasagar Naik
Sensors 2025, 25(19), 6082; https://doi.org/10.3390/s25196082 - 2 Oct 2025
Abstract
A personalized framework for smart home automation is introduced, utilizing machine learning to predict user activities and allow for the context-aware control of living spaces. Predicting user activities, such as ‘Watch_TV’, ‘Sleep’, ‘Work_On_Computer’, and ‘Cook_Dinner’, is essential for improving occupant comfort, optimizing energy consumption, and offering proactive support in smart home settings. The Edge Light Human Activity Recognition Predictor, or EL-HARP, is the main prediction model used in this framework to predict user behavior. The system combines open-source software for real-time sensing, facial recognition, and appliance control with affordable hardware, including the Raspberry Pi 5, ESP32-CAM, Tuya smart switches, NFC (Near Field Communication), and ultrasonic sensors. In order to predict daily user activities, three gradient-boosting models—XGBoost, CatBoost, and LightGBM (Gradient Boosting Models)—are trained for each household using engineered features and past behavior patterns. Using extended temporal features, LightGBM in particular achieves strong predictive performance within EL-HARP. The framework is optimized for edge deployment with efficient training, regularization, and class imbalance handling. A fully functional prototype demonstrates real-time performance and adaptability to individual behavior patterns. This work contributes a scalable, privacy-preserving, and user-centric approach to intelligent home automation. Full article
(This article belongs to the Special Issue Sensor-Based Human Activity Recognition)
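The per-household models above are trained on "engineered features and past behavior patterns." A minimal sketch of the kind of lag features such a pipeline might feed a gradient-boosting model (the feature names and window size are invented for illustration):

```python
from collections import Counter

def lag_features(history, k=3):
    # Build simple temporal features from the last k observed activities:
    # the k most recent activities plus the most frequent one in that window.
    window = history[-k:]
    most_common = Counter(window).most_common(1)[0][0]
    return {
        **{f"prev_{i + 1}": act for i, act in enumerate(reversed(window))},
        "window_mode": most_common,
    }

history = ["Cook_Dinner", "Watch_TV", "Watch_TV", "Sleep"]
feats = lag_features(history)
```

After categorical encoding, a dictionary like `feats` becomes one training row per time step; extending the window (hour of day, day of week, dwell times) is what the abstract calls "extended temporal features."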
24 pages, 4022 KB  
Article
Dynamic Vision Sensor-Driven Spiking Neural Networks for Low-Power Event-Based Tracking and Recognition
by Boyi Feng, Rui Zhu, Yue Zhu, Yan Jin and Jiaqi Ju
Sensors 2025, 25(19), 6048; https://doi.org/10.3390/s25196048 - 1 Oct 2025
Abstract
Spiking neural networks (SNNs) have emerged as a promising model for energy-efficient, event-driven processing of asynchronous event streams from Dynamic Vision Sensors (DVSs), a class of neuromorphic image sensors with microsecond-level latency and high dynamic range. Nevertheless, challenges persist in optimising training and effectively handling spatio-temporal complexity, which limits their potential for real-time applications on embedded sensing systems such as object tracking and recognition. Targeting this neuromorphic sensing pipeline, this paper proposes the Dynamic Tracking with Event Attention Spiking Network (DTEASN), a novel framework designed to address these challenges by employing a pure SNN architecture, bypassing conventional convolutional neural network (CNN) operations, and reducing GPU resource dependency, while tailoring the processing to DVS signal characteristics (asynchrony, sparsity, and polarity). The model incorporates two innovative, self-developed components: an event-driven multi-scale attention mechanism and a spatio-temporal event convolver, both of which significantly enhance spatio-temporal feature extraction from raw DVS events. An Event-Weighted Spiking Loss (EW-SLoss) is introduced to optimise the learning process by prioritising informative events and improving robustness to sensor noise. Additionally, a lightweight event tracking mechanism and a custom synaptic connection rule are proposed to further improve model efficiency for low-power, edge deployment. The efficacy of DTEASN is demonstrated through empirical results on event-based (DVS) object recognition and tracking benchmarks, where it outperforms conventional methods in accuracy, latency, event throughput (events/s) and spike rate (spikes/s), memory footprint, spike-efficiency (energy proxy), and overall computational efficiency under typical DVS settings. 
By virtue of its event-aligned, sparse computation, the framework is amenable to highly parallel neuromorphic hardware, supporting on- or near-sensor inference for embedded applications. Full article
(This article belongs to the Section Intelligent Sensors)
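At the core of any SNN such as DTEASN is a spiking neuron model. A minimal leaky integrate-and-fire sketch (parameters are illustrative; the paper's attention mechanism and spatio-temporal convolver are not reproduced):

```python
def lif_spikes(inputs, tau=10.0, v_th=1.0, v_reset=0.0):
    # Leaky integrate-and-fire neuron: the membrane potential leaks toward
    # zero, integrates the input current, and emits a spike on threshold.
    v, spikes = 0.0, []
    for i in inputs:
        v = v * (1.0 - 1.0 / tau) + i   # exponential leak + input current
        if v >= v_th:
            spikes.append(1)
            v = v_reset                  # hard reset after a spike
        else:
            spikes.append(0)
    return spikes
```

Because the neuron only does work when events arrive and only emits sparse spikes, layers of such units map naturally onto the asynchronous, sparse DVS streams the paper targets.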
26 pages, 2759 KB  
Review
MCU Intelligent Upgrades: An Overview of AI-Enabled Low-Power Technologies
by Tong Zhang, Bosen Huang, Xiewen Liu, Jiaqi Fan, Junbo Li, Zhao Yue and Yanfang Wang
J. Low Power Electron. Appl. 2025, 15(4), 60; https://doi.org/10.3390/jlpea15040060 - 1 Oct 2025
Abstract
Microcontroller units (MCUs) serve as the core components of embedded systems. In the era of smart IoT, embedded devices are increasingly deployed on mobile platforms, leading to a growing demand for low-power consumption. As a result, low-power technology for MCUs has become increasingly critical. This paper systematically reviews the development history and current technical challenges of MCU low-power technology. It then focuses on analyzing system-level low-power optimization pathways for integrating MCUs with artificial intelligence (AI) technology, including lightweight AI algorithm design, model pruning, AI acceleration hardware (NPU, GPU), and heterogeneous computing architectures. It further elaborates on how AI technology empowers MCUs to achieve comprehensive low power consumption from four dimensions: task scheduling, power management, inference engine optimization, and communication and data processing. Through practical application cases in multiple fields such as smart home, healthcare, industrial automation, and smart agriculture, it verifies the significant advantages of MCUs combined with AI in performance improvement and power consumption optimization. Finally, this paper focuses on the key challenges that still need to be addressed in the intelligent upgrade of future MCU low power consumption and proposes in-depth research directions in areas such as the balance between lightweight model accuracy and robustness, the consistency and stability of edge-side collaborative computing, and the reliability and power consumption control of the sensor-storage-computing integrated architecture, providing clear guidance and prospects for future research. Full article
43 pages, 28786 KB  
Article
Secure and Efficient Data Encryption for Internet of Robotic Things via Chaos-Based Ascon
by Gülyeter Öztürk, Murat Erhan Çimen, Ünal Çavuşoğlu, Osman Eldoğan and Durmuş Karayel
Appl. Sci. 2025, 15(19), 10641; https://doi.org/10.3390/app151910641 - 1 Oct 2025
Abstract
The increasing adoption of digital technologies, robotic systems, and IoT applications in sectors such as medicine, agriculture, and industry drives a surge in data generation and necessitates secure and efficient encryption. For resource-constrained systems, lightweight yet robust cryptographic algorithms are critical. This study addresses the security demands of IoRT systems by proposing an enhanced chaos-based encryption method. The approach integrates the lightweight structure of NIST-standardized Ascon-AEAD128 with the randomness of the Zaslavsky map. Ascon-AEAD128 is widely used on many hardware platforms; therefore, it must robustly resist both passive and active attacks. To overcome these challenges and enhance Ascon’s security, we integrate into Ascon the keys and nonces generated by the Zaslavsky chaotic map, which is deterministic, nonperiodic, and highly sensitive to initial conditions and parameter variations. This integration yields a chaos-based Ascon variant with higher encryption security relative to the standard Ascon. In addition, we introduce exploratory variants that inject non-repeating chaotic values into the initialization vectors (IVs), the round constants (RCs), and the linear diffusion constants (LCs), while preserving the core permutation. Real-time tests are conducted using Raspberry Pi 3B devices and ROS 2–based IoRT robots. The algorithm’s performance is evaluated over 100 encryption runs on 12 grayscale/color images and variable-length text transmitted via MQTT. Statistical and differential analyses—including histogram, entropy, correlation, chi-square, NPCR, UACI, MSE, MAE, PSNR, and NIST SP 800-22 randomness tests—assess the encryption strength. The results indicate that the proposed method delivers consistent improvements in randomness and uniformity over standard Ascon-AEAD128, while remaining comparable to state-of-the-art chaotic encryption schemes across standard security metrics.
These findings suggest that the algorithm is a promising option for resource-constrained IoRT applications. Full article
(This article belongs to the Special Issue Recent Advances in Mechatronic and Robotic Systems)
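The key idea above is to derive keys and nonces from a chaotic iteration rather than a counter. A sketch of one common form of the Zaslavsky map driving a byte stream (the parameter values and byte quantization are illustrative, not the paper's):

```python
import math

def zaslavsky_stream(n, x=0.1, y=0.1, nu=400 / 3, eps=0.3, gamma=3.0):
    # One common form of the Zaslavsky map: deterministic, nonperiodic, and
    # highly sensitive to its initial conditions, which is what makes it a
    # candidate source of per-message keys/nonces.
    mu = (1.0 - math.exp(-gamma)) / gamma
    out = []
    for _ in range(n):
        x = (x + nu * (1.0 + mu * y)
             + eps * nu * mu * math.cos(2 * math.pi * x)) % 1.0
        y = math.exp(-gamma) * (y + eps * math.cos(2 * math.pi * x))
        out.append(int(x * 256) % 256)  # quantize the state to one byte
    return bytes(out)

nonce = zaslavsky_stream(16)  # e.g. a 16-byte nonce fed into Ascon-AEAD128
```

Both endpoints seeded with the same `(x, y)` regenerate the identical stream, while a tiny seed change yields an unrelated one, giving non-repeating nonces without a transmitted counter.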
21 pages, 812 KB  
Systematic Review
The Potential of Low-Cost IoT-Enabled Agrometeorological Stations: A Systematic Review
by Christa M. Al Kalaany, Hilda N. Kimaita, Ahmed A. Abdelmoneim, Roula Khadra, Bilal Derardja and Giovana Dragonetti
Sensors 2025, 25(19), 6020; https://doi.org/10.3390/s25196020 - 1 Oct 2025
Abstract
The integration of Internet of Things (IoT) technologies in agriculture has facilitated real-time environmental monitoring, with low-cost IoT-enabled agrometeorological stations emerging as a valuable tool for climate-smart farming. This systematic review examines low-cost IoT-based weather stations by analyzing their hardware and software components and assessing their potential in comparison to conventional weather stations. It emphasizes their contribution to improving climate resilience, facilitating data-driven decision-making, and expanding access to weather data in resource-constrained environments. The analysis revealed widespread adoption of ESP32 microcontrollers, favored for their affordability and modularity, as well as increasing use of communication protocols like LoRa and Wi-Fi due to their balance of range, power efficiency, and scalability. Sensor integration largely focused on core parameters such as air temperature, relative humidity, soil moisture, and rainfall, supporting climate-smart irrigation, disease risk modeling, and microclimate management. Studies highlighted the importance of usability and adaptability through modular hardware and open-source platforms. Additionally, scalability was demonstrated through community-level and multi-station deployments. Despite their promise, challenges persist regarding sensor calibration, data interoperability, and long-term field validation. Future research should explore the integration of edge computing, adaptive analytics, and standardization protocols to further enhance the reliability and functionality of IoT-enabled agrometeorological systems. Full article
23 pages, 2251 KB  
Article
Enhancing FDM Rapid Prototyping for Industry 4.0 Applications Through Simulation and Optimization Techniques
by Mihalache Ghinea, Alex Cosmin Niculescu and Bogdan Dragos Rosca
Materials 2025, 18(19), 4555; https://doi.org/10.3390/ma18194555 - 30 Sep 2025
Abstract
Modern manufacturing is increasingly shaped by the paradigm of Industry 4.0 (Smart Manufacturing). As one of its nine pillars, additive manufacturing plays a crucial role, enabling high-quality final products with improved profitability in minimal time. Advances in this field have facilitated the emergence of diverse technologies—such as Fused Deposition Modelling (FDM), Stereolithography (SLA), and Selective Laser Sintering (SLS)—allowing the use of metallic, polymeric, and composite materials. Within this context, Klipper v.0.12, an open-source firmware for 3D printers, addresses the performance limitations of conventional consumer-grade systems. By offloading computationally intensive tasks to an external single-board computer (e.g., Raspberry Pi), Klipper enhances speed, precision, and flexibility while reducing prototyping time. The purpose of this study is twofold: first, to identify and analyze bottlenecks in low-cost 3D printers and second, to evaluate how these shortcomings can be mitigated through the integration of supplementary hardware and software (Klipper firmware, Raspberry Pi, additional sensors, and the Mainsail interface). The scientific contribution of this study lies in demonstrating that a consumer-grade FDM 3D printer can be significantly upgraded through this integration and systematic calibration, achieving up to a 50% reduction in printing time while maintaining dimensional accuracy and improving surface quality. Full article
(This article belongs to the Section Manufacturing Processes and Systems)
27 pages, 1370 KB  
Article
Hardware–Software Co-Design Architecture for Real-Time EMG Feature Processing in FPGA-Based Prosthetic Systems
by Carlos Gabriel Mireles-Preciado, Diana Carolina Toledo-Pérez, Roberto Augusto Gómez-Loenzo, Marcos Aviles and Juvenal Rodríguez-Reséndiz
Algorithms 2025, 18(10), 617; https://doi.org/10.3390/a18100617 - 30 Sep 2025
Abstract
This paper presents a novel hardware architecture for implementing real-time EMG feature extraction and dimensionality reduction in resource-constrained FPGA environments. The proposed co-processing architecture integrates four time-domain feature extractors (MAV, WL, SSC, ZC) with a specialized PCA matrix multiplication unit within a unified processing pipeline, demonstrating significant improvements in power efficiency and processing latency compared to traditional software-based approaches. Multiple matrix multiplication architectures are evaluated to optimize FPGA resource utilization while maintaining deterministic real-time performance using a Zed evaluation board as the development platform. This implementation achieves efficient dimensionality reduction with minimal hardware resources, making it suitable for embedded prosthetic applications. The functionality of this system is validated using a custom EMG database from previous studies. The results demonstrate a 7.3× speed improvement and 3.1× energy efficiency gain compared to ARM Cortex-A9 software implementation, validating the architectural approach for battery-powered prosthetic control applications. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (3rd Edition))
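The four time-domain features named above (MAV, WL, SSC, ZC) have standard definitions. A reference sketch in Python (the window values and noise threshold are illustrative; the paper computes these in FPGA fabric):

```python
def emg_features(x, thresh=0.01):
    # Four classic time-domain EMG features over one analysis window.
    n = len(x)
    mav = sum(abs(v) for v in x) / n                      # Mean Absolute Value
    wl = sum(abs(x[i + 1] - x[i]) for i in range(n - 1))  # Waveform Length
    zc = sum(                                             # Zero Crossings
        1 for i in range(n - 1)
        if x[i] * x[i + 1] < 0 and abs(x[i + 1] - x[i]) >= thresh
    )
    ssc = sum(                                            # Slope Sign Changes
        1 for i in range(1, n - 1)
        if (x[i] - x[i - 1]) * (x[i] - x[i + 1]) > 0
        and (abs(x[i] - x[i - 1]) >= thresh or abs(x[i] - x[i + 1]) >= thresh)
    )
    return mav, wl, zc, ssc

window = [0.0, 0.4, -0.3, 0.2, -0.1, 0.05]
mav, wl, zc, ssc = emg_features(window)
```

All four reduce to adds, compares, and absolute values over a sliding window, which is why they map so cheaply onto a fixed-point FPGA pipeline ahead of the PCA stage.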
20 pages, 2979 KB  
Article
Computer Vision-Enabled Construction Waste Sorting: A Sensitivity Analysis
by Xinru Liu, Zeinab Farshadfar and Siavash H. Khajavi
Appl. Sci. 2025, 15(19), 10550; https://doi.org/10.3390/app151910550 - 29 Sep 2025
Abstract
This paper presents a comprehensive sensitivity analysis of the pioneering real-world deployment of computer vision-enabled construction waste sorting in Finland, implemented by a leading provider of robotic recycling solutions. Building upon and extending the findings of prior field research, the study analyzes an industry flagship case to examine the financial feasibility of computer vision-enabled robotic sorting compared to conventional sorting. The sensitivity analysis covers cost parameters related to labor, wages, personnel training, machinery (including AI software, hardware, and associated components), and maintenance operations, as well as capital expenses. We further expand the existing cost model by integrating the net present value (NPV) of investments. The results indicate that the computer vision-enabled automated system (CVAS) achieves cost competitiveness over conventional sorting (CS) under conditions of higher labor-related costs, such as increased headcount, wages, and training expenses. For instance, when annual wages exceed EUR 20,980, CVAS becomes more cost-effective. Conversely, CS retains cost advantages in scenarios dominated by higher machinery and maintenance costs or extremely elevated discount rates. For example, when the average machinery cost surpasses EUR 512,000 per unit, CS demonstrates greater economic viability. The novelty of this work arises from the use of a pioneering real-world case study and the improvements offered to a comprehensive comparative cost model for CVAS and CS, and furthermore from clarification of the impact of key cost variables on solution (CVAS or CS) selection. Full article
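The cost comparison above rests on discounting each option's cash flows to a net present value. A minimal sketch of that calculation (the cash-flow numbers below are invented for illustration and are not the paper's cost data; only the EUR 512,000 machinery figure echoes the abstract):

```python
def npv(cashflows, rate):
    # Net present value of yearly cash flows (index 0 = upfront, year 0).
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Illustrative only: CVAS trades a large upfront machinery cost for lower
# yearly labor cost; conventional sorting (CS) is the reverse.
YEARS, RATE = 10, 0.05
cvas = [-512_000] + [-40_000] * YEARS    # machinery-heavy, low labor
cs = [-50_000] + [-110_000] * YEARS      # labor-heavy, low capital
cheaper = "CVAS" if npv(cvas, RATE) > npv(cs, RATE) else "CS"
```

Sweeping one input (wages, machinery cost, discount rate) while watching which option's NPV is less negative is exactly the sensitivity analysis the paper performs to locate break-even points such as the EUR 20,980 annual wage.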
20 pages, 2504 KB  
Article
Enhancing Ocean Monitoring for Coastal Communities Using AI
by Erika Spiteri Bailey, Kristian Guillaumier and Adam Gauci
Appl. Sci. 2025, 15(19), 10490; https://doi.org/10.3390/app151910490 - 28 Sep 2025
Abstract
Coastal communities and marine ecosystems face increasing risks due to changing ocean conditions, yet effective wave monitoring remains limited in many low-resource regions. This study investigates the use of seismic data to predict significant wave height (SWH), offering a low-cost and scalable solution to support coastal conservation and safety. We developed a baseline machine learning (ML) model and improved it using a longest-stretch algorithm for seismic data selection and station-specific hyperparameter tuning. Models were trained and tested on consumer-grade hardware to ensure accessibility and availability. Applied to the Sicily–Malta region, the enhanced models achieved up to a 0.133 increase in R2 and a 0.026 m reduction in mean absolute error compared to existing baselines. These results demonstrate that seismic signals, typically collected for geophysical purposes, can be repurposed to support ocean monitoring using accessible artificial intelligence (AI) tools. The approach may be integrated into conservation planning efforts such as early warning systems and ecosystem monitoring frameworks. Future work may focus on improving robustness in data-sparse areas through augmentation techniques and exploring broader applications of this method in marine and coastal sustainability contexts. Full article
(This article belongs to the Special Issue Transportation and Infrastructures Under Extreme Weather Conditions)
20 pages, 6622 KB  
Article
A Hardware-in-the-Loop Simulation Case Study of High-Order Sliding Mode Control for a Flexible-Link Robotic Arm
by Aydemir Arisoy and Deniz Kavala Sen
Appl. Sci. 2025, 15(19), 10484; https://doi.org/10.3390/app151910484 - 28 Sep 2025
Abstract
This paper presents a hardware-in-the-loop (HIL) simulation case study on the application of High-Order Sliding Mode Control (HOSMC) to a flexible-link robotic arm. The developed HIL platform combines physical hardware components with a simulated plant model, enabling real-time testing of control algorithms under realistic operating conditions without requiring a full-scale prototype. HOSMC, an advanced nonlinear control strategy, mitigates the chattering effects inherent in conventional sliding mode control by driving the system to a reduced-order sliding manifold within a finite time, resulting in smoother actuator commands and reduced mechanical stress. Flexible-link arms, while lightweight and energy-efficient, are inherently nonlinear and prone to vibration, posing significant control challenges. In this case study, the experimental HIL environment is used to evaluate HOSMC performance, demonstrating improved trajectory tracking, reduced overshoot, and minimized steady-state error. The results confirm that HIL simulation offers an effective bridge between theoretical control design and practical implementation for advanced robotic systems. Full article
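A widely used high-order sliding mode law is the super-twisting algorithm, whose continuous control and integral switching term give the chattering reduction the abstract describes. A toy simulation sketch (gains, disturbance, and the reduced plant are invented; this is not the paper's flexible-link model):

```python
import math

def sign(v):
    return (v > 0) - (v < 0)

def super_twisting(s0=1.0, k1=1.5, k2=1.1, dt=1e-3, steps=5000):
    # Super-twisting (second-order sliding mode) on a toy perturbed
    # integrator: s' = u + d(t). The integral term z rejects the bounded
    # disturbance without the high-frequency switching of first-order SMC.
    s, z, t = s0, 0.0, 0.0
    for _ in range(steps):
        d = 0.5 * math.sin(2.0 * t)                  # bounded matched disturbance
        u = -k1 * math.sqrt(abs(s)) * sign(s) + z    # continuous control
        z += -k2 * sign(s) * dt                      # discontinuity hidden in z'
        s += (u + d) * dt                            # Euler step of s' = u + d
        t += dt
    return s

final_s = super_twisting()  # sliding variable driven near zero in finite time
```

Only the derivative of the control switches, so the command sent to the actuator stays smooth, which is what reduces mechanical stress on a vibration-prone flexible link.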
40 pages, 17089 KB  
Review
Advancing Flexible Optoelectronic Synapses and Neurons with MXene-Integrated Polymeric Platforms
by Hongsheng Xu, Xiangyu Zeng and Akeel Qadir
Nanomaterials 2025, 15(19), 1481; https://doi.org/10.3390/nano15191481 - 27 Sep 2025
Abstract
Neuromorphic computing, inspired by the human brain’s architecture, offers a transformative approach to overcoming the limitations of traditional von Neumann systems by enabling highly parallel, energy-efficient information processing. Among emerging materials, MXenes—a class of two-dimensional transition metal carbides and nitrides—have garnered significant attention due to their exceptional electrical conductivity, tunable surface chemistry, and mechanical flexibility. This review comprehensively examines recent advancements in MXene-based optoelectronic synapses and neurons, focusing on their structural properties, device architectures, and operational mechanisms. We emphasize synergistic electrical–optical modulation in memristive and transistor-based synaptic devices, enabling improved energy efficiency, multilevel plasticity, and fast response times. In parallel, MXene-enabled optoelectronic neurons demonstrate integrate-and-fire dynamics and spatiotemporal information integration crucial for biologically inspired neural computations. Furthermore, this review explores innovative neuromorphic hardware platforms that leverage multifunctional MXene devices to achieve programmable synaptic–neuronal switching, enhancing computational flexibility and scalability. Despite these promising developments, challenges remain in device stability, reproducibility, and large-scale integration. Addressing these gaps through advanced synthesis, defect engineering, and architectural innovation will be pivotal for realizing practical, low-power optoelectronic neuromorphic systems. This review thus provides a critical roadmap for advancing MXene-based materials and devices toward next-generation intelligent computing and adaptive sensory applications. Full article
(This article belongs to the Section Theory and Simulation of Nanostructures)
22 pages, 3275 KB  
Review
Permanent Magnet Synchronous Motor Drive System for Agricultural Equipment: A Review
by Chao Zhang, Xiongwei Xia, Hong Zheng and Hongping Jia
Agriculture 2025, 15(19), 2007; https://doi.org/10.3390/agriculture15192007 - 25 Sep 2025
Abstract
The electrification of agricultural equipment is a critical pathway to address the dual challenges of increasing global food production and ensuring sustainable agricultural development. As the core power unit, the permanent magnet synchronous motor (PMSM) drive system faces severe challenges in achieving high performance, robustness, and reliable control in complex farmland environments characterized by sudden load changes, extreme operating conditions, and strong interference. This paper provides a comprehensive review of key technological advancements in PMSM drive systems for agricultural electrification. First, it analyzes solutions to enhance the reliability of power converters, including high-frequency silicon carbide (SiC)/gallium nitride (GaN) power device packaging, thermal management, and electromagnetic compatibility (EMC) design. Second, it systematically elaborates on high-performance motor control algorithms such as Direct Torque Control (DTC) and Model Predictive Control (MPC) for improving dynamic response; robust control strategies like Sliding Mode Control (SMC) and Active Disturbance Rejection Control (ADRC) for enhancing resilience; and the latest progress in fault-tolerant control architectures incorporating sensorless technology. Furthermore, the paper identifies core challenges in large-scale applications, including environmental adaptability, real-time multi-machine coordination, and high reliability requirements. Innovatively, this review proposes a closed-loop intelligent control paradigm encompassing environmental disturbance prediction, control parameter self-tuning, and actuator dynamic response. This paradigm provides theoretical support for enhancing the autonomous adaptability and operational quality of agricultural machinery in unstructured environments. Finally, future trends involving deep AI integration, collaborative hardware innovation, and agricultural ecosystem construction are outlined. Full article
(This article belongs to the Section Agricultural Technology)