Search Results (124)

Search Parameters:
Keywords = microcontroller unit (MCU)

39 pages, 1767 KB  
Systematic Review
Advanced Hardware Security on Embedded Processors: A 2026 Systematic Review
by Ali Kia, Aaron W. Storey and Masudul Imtiaz
Electronics 2026, 15(5), 1135; https://doi.org/10.3390/electronics15051135 - 9 Mar 2026
Viewed by 737
Abstract
The proliferation of Internet of Things (IoT) devices and embedded processors has recently spurred rapid advances in hardware-level security. This paper systematically reviews developments in securing microcontroller units (MCUs) and constrained embedded platforms from 2020 to 2026, a period marked by the finalization of NIST’s post-quantum cryptography standards and accelerated commercial deployment of hardware security primitives. Through analysis of the peer-reviewed literature, industry implementations, and standardization efforts, we survey five critical areas: post-quantum cryptography (PQC) implementations on resource-constrained hardware, physically unclonable functions (PUFs) for device authentication, hardware Roots of Trust and secure boot mechanisms, side-channel attack mitigations, and Trusted Execution Environments (TEEs) for microcontroller-class devices. For each domain, we analyze technical mechanisms, deployment constraints (power, memory, cost), security guarantees, and commercial maturity. Our review distinguishes itself through its integration perspective, examining how these primitives must be composed to secure real-world embedded systems, and its emphasis on post-standardization PQC developments. We highlight critical gaps including PQC memory overhead challenges, ML-resistant PUF designs, and TEE developer friction, while documenting commercial progress such as PSA Level 3 certified components and 500+ million PUF-enabled devices deployed. This synthesis provides practitioners with practical guidance for securing the next generation of IoT and embedded systems. Full article

21 pages, 457 KB  
Article
Understanding Energy Efficiency of AI Deployments in IoT-Driven Smart Cities
by Salvatore Bramante, Filippo Ferrandino and Alessandro Cilardo
IoT 2026, 7(1), 27; https://doi.org/10.3390/iot7010027 - 8 Mar 2026
Viewed by 377
Abstract
The pervasive adoption of AI and AIoT applications at the network edge presents both opportunities and challenges for smart cities. With a focus on the energy efficiency of AI in urban environments, this paper provides a systematic comparative analysis of representative edge hardware platforms, i.e., embedded GPUs, FPGAs, and ultra-low-power microcontroller-/sensor-class devices, assessing their suitability for AI workloads in IoT-driven smart city infrastructures. The evaluation, based on direct characterization of diverse neural networks and relevant datasets, quantifies computational performance and energy behavior through inference latency, throughput, and energy-per-inference measurements. Across the evaluated network–board pairs, the measured inference power spans several orders of magnitude, ranging from 0.1–10 mW for ultra-low-power Intelligent Sensor Processing Units (ISPUs) up to 1–10 W for embedded GPUs, highlighting the wide design space between the least and most power-demanding configurations. Results indicate that embedded GPUs provide a favorable performance-to-power ratio for computationally intensive workloads, while MCU/ISPU-class solutions, despite throughput limitations, offer compelling advantages in ultra-low-power scenarios when combined with quantization and pruning, making them well-suited for distributed sensing and actuation typical of smart city deployments. Overall, this comparative analysis guides hardware selection for heterogeneous, sustainable AI-enabled urban services. Full article
(This article belongs to the Special Issue IoT-Driven Smart Cities)
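As a rough guide to comparing the platform classes above, energy per inference is simply average power multiplied by inference latency; a minimal sketch with hypothetical numbers (not the paper's measurements):

```python
def energy_per_inference_uj(power_mw: float, latency_ms: float) -> float:
    """Energy per inference in microjoules: E = P * t.

    power_mw: average power draw during inference, in milliwatts.
    latency_ms: duration of one inference, in milliseconds.
    Milliwatts times milliseconds yield microjoules directly
    (1e-3 W * 1e-3 s = 1e-6 J), so no conversion factor is needed.
    """
    return power_mw * latency_ms

# Illustrative (hypothetical) operating points for the two extremes above:
ispu_uj = energy_per_inference_uj(1.0, 5.0)      # sensor-class ISPU
gpu_uj = energy_per_inference_uj(5000.0, 2.0)    # embedded GPU-class
```

Comparing such figures across boards is what makes the orders-of-magnitude design space discussed above concrete.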

15 pages, 2809 KB  
Article
Research on an Intelligent Sealed Neutral Point Protection Device for High-Altitude Transformers
by Wen Yan, Xiaohui Li, Fujie Wang, Huifang Dong, Zhongqi Zhao, Jinpeng Gao and Xutao Han
Energies 2026, 19(4), 906; https://doi.org/10.3390/en19040906 - 9 Feb 2026
Viewed by 258
Abstract
To address the malfunction and unreliable operation of traditional open discharge gaps in high-altitude environments (with sandstorms and low pressure), which are prone to interference from factors like electrode corrosion and contamination, this study proposes an intelligent sealed neutral point protection device for transformers. Its core is a sealed discharge gap filled with nitrogen gas, effectively isolating it from external conditions and significantly stabilizing the power frequency discharge voltage. Innovatively, an active breakdown technology is introduced. Overvoltage signals at the transformer neutral point are acquired in real time via a capacitive voltage divider. After processing by a microcontroller unit (MCU), if both the amplitude and duration meet the preset thresholds, the MCU triggers a pulse to actively induce a discharge at the gap’s low-voltage end, enabling controlled breakdown. This allows the transient discharge voltage to be raised to 3–4 times the steady-state value, avoiding overlap with the surge arrester’s residual voltage. Tests confirm that the gap breaks down stably only when both amplitude and duration conditions are met, remaining reliable otherwise. This design successfully resolves the critical issues of failure and maloperation under both steady-state and transient overvoltages in high-altitude settings, significantly improving protection selectivity and reliability, and offering a novel solution for transformer safety in such regions. Full article
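The dual amplitude-and-duration criterion above can be sketched as a simple check over sampled divider voltages; the thresholds, units, and sampling here are hypothetical illustrations, not the device's actual firmware:

```python
def should_trigger(samples_kv, amp_threshold_kv, min_duration_samples):
    """Return True once the overvoltage has stayed above the amplitude
    threshold for the required number of consecutive samples, mirroring
    the amplitude-AND-duration condition described above.
    """
    run = 0
    for v in samples_kv:
        if abs(v) >= amp_threshold_kv:
            run += 1
            if run >= min_duration_samples:
                return True  # the MCU would now fire the trigger pulse
        else:
            run = 0  # a dip below threshold resets the duration count
    return False

# A one-sample spike must not trigger; a sustained overvoltage must.
assert not should_trigger([0, 90, 0, 0], 60, 3)
assert should_trigger([0, 90, 95, 92, 0], 60, 3)
```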

13 pages, 2210 KB  
Article
High-Throughput Control-Data Acquisition for Multicore MCU-Based Real-Time Control Systems Using Double Buffering over Ethernet
by Seung-Hun Lee, Duc M. Tran and Joon-Young Choi
Electronics 2026, 15(2), 469; https://doi.org/10.3390/electronics15020469 - 22 Jan 2026
Viewed by 404
Abstract
For the design, implementation, performance optimization, and predictive maintenance of high-speed real-time control systems with sub-millisecond control periods, the capability to acquire large volumes of high-rate control data in real time is required without interfering with normal control operation that is repeatedly executed in each extremely short control cycle. In this study, we propose a control-data acquisition method for high-speed real-time control systems with sub-millisecond control periods, in which control data are transferred to an external host device via Ethernet in real time. To enable the transmission of high-rate control data without disturbing the real-time control operation, a multicore microcontroller unit (MCU) is adopted, where the control task and the data transmission task are executed on separately assigned central processing unit (CPU) cores. Furthermore, by applying a double-buffering algorithm, continuous Ethernet communication without intermediate waiting time is achieved, resulting in a substantial improvement in transmission throughput. Using a control card based on TI’s multicore MCU TMS320F28388D, which consists of dual digital signal processor cores and one connectivity manager (CM) core, the proposed control-data acquisition method is implemented and an actual experimental environment is constructed. Experimental results show that the double-buffering transmission achieves a maximum throughput of 94.2 Mbps on a 100 Mbps Fast Ethernet link, providing a 38.5% improvement over the single-buffering case and verifying the high performance and efficiency of the proposed data acquisition method. Full article
(This article belongs to the Section Industrial Electronics)
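The double-buffering scheme above can be sketched in a few lines. This single-threaded Python model (hypothetical buffer size, no real Ethernet or separate cores) illustrates only the ping-pong handoff between the acquisition and transmission tasks:

```python
from collections import deque

class DoubleBuffer:
    """Ping-pong buffer: the control task appends into the active buffer
    while the transmit task drains the other, so transmission never blocks
    acquisition. A sketch of the scheme described above; the paper runs
    the two tasks on separate MCU cores rather than in one thread."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = [[], []]
        self.active = 0           # index the control task writes into
        self.ready = deque()      # filled buffers awaiting transmission

    def record(self, sample):
        buf = self.buffers[self.active]
        buf.append(sample)
        if len(buf) == self.capacity:     # swap: hand off the full buffer
            self.ready.append(buf)
            self.active ^= 1
            self.buffers[self.active] = []

    def next_to_transmit(self):
        return self.ready.popleft() if self.ready else None

db = DoubleBuffer(capacity=4)
for s in range(10):
    db.record(s)
assert db.next_to_transmit() == [0, 1, 2, 3]
assert db.next_to_transmit() == [4, 5, 6, 7]
```

Because a full buffer is always queued while the other is still filling, the transmitter never waits for data mid-cycle, which is the source of the throughput gain reported above.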

21 pages, 30287 KB  
Article
Online Estimation of Lithium-Ion Battery State of Charge Using Multilayer Perceptron Applied to an Instrumented Robot
by Kawe Monteiro de Souza, José Rodolfo Galvão, Jorge Augusto Pessatto Mondadori, Maria Bernadete de Morais França, Paulo Broniera Junior and Fernanda Cristina Corrêa
Batteries 2026, 12(1), 25; https://doi.org/10.3390/batteries12010025 - 10 Jan 2026
Viewed by 579
Abstract
Electric vehicles (EVs) rely on a battery pack as their primary energy source, making it a critical component for their operation. To guarantee safe and correct functioning, a Battery Management System (BMS) is employed, which uses variables such as State of Charge (SOC) to set charge/discharge limits and to monitor pack health. In this article, we propose a Multilayer Perceptron (MLP) network to estimate the SOC of a 14.8 V battery pack installed in a robotic vacuum cleaner. Both offline and online (real-time) tests were conducted under continuous load and with rest intervals. The MLP’s output is compared against two commonly used approaches: NARX (Nonlinear Autoregressive Exogenous) and CNN (Convolutional Neural Network). Performance is evaluated via the statistical metrics Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE), and computational cost is assessed using Operational Intensity. Finally, we map these results onto a Roofline Model to predict how the MLP would perform on an automotive-grade microcontroller unit (MCU). A generalization analysis is performed using Transfer Learning, and optimization is carried out using an MLP–Kalman approach. The best performers are the MLP–Kalman network, which achieved an RMSE of approximately 13% relative to the true SOC, and NARX, which achieved approximately 12%. The computational cost of both is very close, making them particularly suitable for use in a BMS. Full article
(This article belongs to the Section Energy Storage System Aging, Diagnosis and Safety)
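The RMSE and MAE metrics used above are straightforward to state; a minimal sketch with hypothetical SOC values (not the paper's data):

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Squared Error over paired samples."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean Absolute Error over paired samples."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

soc_true = [1.00, 0.95, 0.90, 0.85]   # hypothetical SOC trace (fractions)
soc_pred = [0.98, 0.96, 0.88, 0.86]
assert round(mae(soc_true, soc_pred), 3) == 0.015
```

RMSE penalizes large excursions more heavily than MAE, which is why the two are usually reported together for SOC estimators.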

20 pages, 8010 KB  
Article
Laser Pulse-Driven Multi-Sensor Time Synchronization Method for LiDAR Systems
by Jiazhi Yang, Xingguo Han, Wenzhong Deng, Hong Jin and Biao Zhang
Sensors 2025, 25(24), 7555; https://doi.org/10.3390/s25247555 - 12 Dec 2025
Cited by 1 | Viewed by 723
Abstract
Multi-sensor systems require precise time synchronization for accurate data fusion. However, currently prevalent software time synchronization methods often rely on clocks provided by the Global Navigation Satellite System (GNSS), which may not offer high accuracy and can be easily affected by issues with GNSS signals. To address this limitation, this study introduces a novel laser pulse-driven time synchronization (LPTS) method in our custom-developed Light Detection and Ranging (LiDAR) system. The LPTS method uses electrical pulses, synchronized with laser beams as the time synchronization source, driving the Micro-Controller Unit (MCU) timer within the control system to count with a timing accuracy of 0.1 μs and to timestamp the data from the Positioning and Orientation System (POS) unit or laser scanner unit. By employing interpolation techniques, the POS and laser scanner data are precisely synchronized with laser pulses, ensuring strict correlation through their timestamps. In this article, the working principles and experimental methods of both traditional time synchronization (TRTS) and LPTS methods are discussed. We have implemented both methods on experimental platforms, and the results demonstrate that the LPTS method circumvents the dependency on external time references for inter-sensor alignment and minimizes the impact of laser jitter stemming from third-party time references, without requiring additional hardware. Moreover, it elevates the internal time synchronization resolution to 0.1 μs and significantly improves relative timing precision. Full article
(This article belongs to the Section Radar Sensors)
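The interpolation step above, aligning POS or scanner readings to laser-pulse timestamps, amounts to linear interpolation between the two surrounding timestamped samples; a sketch with hypothetical data in units of the 0.1 μs timer ticks:

```python
def interpolate_at(pulse_time, times, values):
    """Linearly interpolate a timestamped measurement series (e.g. POS
    readings with LPTS timestamps) at a laser-pulse timestamp.
    `times` must be sorted ascending; data here are hypothetical."""
    if not times[0] <= pulse_time <= times[-1]:
        raise ValueError("pulse time outside measurement window")
    for i in range(1, len(times)):
        if pulse_time <= times[i]:
            t0, t1 = times[i - 1], times[i]
            v0, v1 = values[i - 1], values[i]
            frac = (pulse_time - t0) / (t1 - t0)
            return v0 + frac * (v1 - v0)

# A pulse halfway between two POS samples gets the halfway value.
assert interpolate_at(15, [10, 20], [100.0, 200.0]) == 150.0
```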

17 pages, 1472 KB  
Article
Three-Phase Powerline Energy Harvesting Circuit with Maximum Power Point Tracking and Cold Start-Up
by Fariborz Lohrabi Pour, Seong Kwang Hong, Jaeyun Lee, Meysam Sohani Darban, Jaehoon Matthias Kim and Dong Sam Ha
Appl. Sci. 2025, 15(22), 11954; https://doi.org/10.3390/app152211954 - 11 Nov 2025
Cited by 1 | Viewed by 705
Abstract
This paper presents a three-phase powerline energy harvesting circuit with doubly regulated output voltages to power wireless sensors for the monitoring of railroad powerline status. Three ring-shaped silicon steel cores coupled to the three phases of a powerline convert the line current into three-phase voltages, which are applied to an energy harvesting circuit. The key parts of the circuit are a series three-phase voltage rectifier, a buck–boost converter operating in discontinuous conduction mode (DCM), and a microcontroller unit (MCU) for maximum power point tracking (MPPT). The MCU performs two-step MPPT, coarse and fine, for impedance matching based on the perturb and observe method. Two parallel voltage regulators deliver 5 V and 5.7 V regulated DC voltages to power a radio and a set of sensors, respectively. The energy harvesting circuit is prototyped using commercial-off-the-shelf (COTS) components on an FR4 PCB. The measured maximum efficiency is 84% for the three-phase voltage rectifier and 89% for the buck–boost converter under the powerline current ranging from 5 A to 20 A. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
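The perturb and observe rule named above is standard: keep stepping the operating point in the same direction while harvested power increases, and reverse when it decreases. A generic single-step sketch (the duty-cycle representation, limits, and step size are assumptions, not the paper's two-step coarse/fine implementation):

```python
def perturb_and_observe(duty, power, prev_duty, prev_power, step=0.01):
    """One perturb-and-observe MPPT step over the converter duty cycle."""
    if power >= prev_power:
        direction = 1 if duty >= prev_duty else -1   # power rose: keep going
    else:
        direction = -1 if duty >= prev_duty else 1   # power fell: reverse
    new_duty = duty + direction * step
    return min(1.0, max(0.0, new_duty))              # clamp to valid range

# Power rose after increasing duty -> increase again.
assert abs(perturb_and_observe(0.50, 10.0, 0.49, 9.0) - 0.51) < 1e-9
# Power fell after increasing duty -> back off.
assert abs(perturb_and_observe(0.51, 8.0, 0.50, 9.0) - 0.50) < 1e-9
```

Iterating this rule makes the operating point oscillate around the maximum power point, which is why practical trackers (like the coarse/fine scheme above) vary the step size.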

18 pages, 494 KB  
Article
Atrial Fibrillation Detection on the Embedded Edge: Energy-Efficient Inference on a Low-Power Microcontroller
by Yash Akbari, Ningrong Lei, Nilesh Patel, Yonghong Peng and Oliver Faust
Sensors 2025, 25(21), 6601; https://doi.org/10.3390/s25216601 - 27 Oct 2025
Cited by 1 | Viewed by 1466
Abstract
Atrial Fibrillation (AF) is a common yet often undiagnosed cardiac arrhythmia with serious clinical consequences, including increased risk of stroke, heart failure, and mortality. In this work, we present a novel Embedded Edge system performing real-time AF detection on a low-power Microcontroller Unit (MCU). Rather than relying on full Electrocardiogram (ECG) waveforms or cloud-based analytics, our method extracts Heart Rate Variability (HRV) features from RR-Interval (RRI) and performs classification using a compact Long Short-Term Memory (LSTM) model optimized for embedded deployment. We achieved an overall classification accuracy of 98.46% while maintaining a minimal resource footprint: inference on the target MCU completes in 143 ± 0 ms and consumes 3532 ± 6 μJ per inference. This low power consumption for local inference makes it feasible to strategically keep wireless communication OFF, activating it only to transmit an alert upon AF detection, thereby reinforcing privacy and enabling long-term battery life. Our results demonstrate the feasibility of performing clinically meaningful AF monitoring directly on constrained edge devices, enabling energy-efficient, privacy-preserving, and scalable screening outside traditional clinical settings. This work contributes to the growing field of personalised and decentralised cardiac care, showing that Artificial Intelligence (AI)-driven diagnostics can be both technically practical and clinically relevant when implemented at the edge. Full article
(This article belongs to the Section Wearables)
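The HRV features mentioned above are typically simple statistics of the RR-interval series; a sketch of three common ones (the paper's exact feature set is not reproduced here):

```python
import math

def hrv_features(rri_ms):
    """Basic HRV features from an RR-interval series in milliseconds:
    mean RR, SDNN (standard deviation of intervals), and RMSSD (root
    mean square of successive differences). Illustrative only."""
    n = len(rri_ms)
    mean_rr = sum(rri_ms) / n
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rri_ms) / n)
    diffs = [rri_ms[i + 1] - rri_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd}

f = hrv_features([800, 810, 790, 805])   # hypothetical RR intervals
assert f["mean_rr"] == 801.25
```

Because such features are cheap to compute compared with full ECG waveform analysis, they suit the microjoule-level inference budget reported above.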

26 pages, 2759 KB  
Review
MCU Intelligent Upgrades: An Overview of AI-Enabled Low-Power Technologies
by Tong Zhang, Bosen Huang, Xiewen Liu, Jiaqi Fan, Junbo Li, Zhao Yue and Yanfang Wang
J. Low Power Electron. Appl. 2025, 15(4), 60; https://doi.org/10.3390/jlpea15040060 - 1 Oct 2025
Cited by 4 | Viewed by 4259
Abstract
Microcontroller units (MCUs) serve as the core components of embedded systems. In the era of smart IoT, embedded devices are increasingly deployed on mobile platforms, leading to a growing demand for low-power consumption. As a result, low-power technology for MCUs has become increasingly critical. This paper systematically reviews the development history and current technical challenges of MCU low-power technology. It then focuses on analyzing system-level low-power optimization pathways for integrating MCUs with artificial intelligence (AI) technology, including lightweight AI algorithm design, model pruning, AI acceleration hardware (NPU, GPU), and heterogeneous computing architectures. It further elaborates on how AI technology empowers MCUs to achieve comprehensive low power consumption from four dimensions: task scheduling, power management, inference engine optimization, and communication and data processing. Through practical application cases in multiple fields such as smart home, healthcare, industrial automation, and smart agriculture, it verifies the significant advantages of MCUs combined with AI in performance improvement and power consumption optimization. Finally, this paper focuses on the key challenges that still need to be addressed in the intelligent upgrade of future MCU low power consumption and proposes in-depth research directions in areas such as the balance between lightweight model accuracy and robustness, the consistency and stability of edge-side collaborative computing, and the reliability and power consumption control of the sensor-storage-computing integrated architecture, providing clear guidance and prospects for future research. Full article

17 pages, 4813 KB  
Article
Design and Testing of a Multi-Channel Temperature and Relative Humidity Acquisition System for Grain Storage
by Chenyi Wei, Jingyun Liu and Bingke Zhu
Agriculture 2025, 15(17), 1870; https://doi.org/10.3390/agriculture15171870 - 2 Sep 2025
Viewed by 1790
Abstract
Ensuring the safety and quality of grain during storage requires distributed monitoring of temperature and relative humidity within the bulk material, where hundreds of sensors may be needed. Conventional multi-channel systems are often constrained by the limited number of sensors connectable to a single acquisition unit, high hardware cost, and poor scalability. To address these challenges, this study proposes a novel design method for a multi-channel temperature and relative humidity acquisition system (MTRHAS). The system integrates sequential sampling control and a time-division multiplexing mechanism, enabling efficient data acquisition from multiple sensors while reducing hardware requirements and cost. This system employs sequential sampling control using a single complex programmable logic device (CPLD), and uses multiple CPLDs for multi-channel sensor expansion with a shared address and data bus for communication with a microcontroller unit (MCU). A prototype was developed using two CPLDs and one MCU, achieving data collection from 80 sensors. To validate the approach, a simulated grain silo experiment was conducted, with nine sensors deployed to monitor temperature and relative humidity during aeration. Calibration ensured sensor accuracy, and real-time monitoring results revealed that the system effectively captured spatial and temporal variation patterns of intergranular air conditions. Compared with conventional designs, the proposed system shortens the sampling cycle, decreases the number of acquisition units required, and enhances scalability through the shared bus architecture. These findings demonstrate that the MTRHAS provides an efficient and practical solution for large-scale monitoring of grain storage environments. Full article
(This article belongs to the Section Agricultural Technology)
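The sequential, time-division multiplexed sampling described above reduces to polling one sensor per bus slot; a sketch where the CPLD/MCU bus is faked with a dictionary (channel ids and readings are hypothetical):

```python
def sequential_sample(channels, read_sensor):
    """Poll many sensors one at a time over a shared bus, as in the
    time-division multiplexed sequential sampling described above.
    `read_sensor` stands in for the CPLD-mediated bus read routine."""
    readings = {}
    for ch in channels:                 # only one sensor drives the shared
        readings[ch] = read_sensor(ch)  # bus during its TDM slot
    return readings

# Fake bus: each channel returns (temperature_C, relative_humidity_%).
fake_bus = {0: (21.5, 55.0), 1: (22.0, 54.0), 2: (21.8, 56.5)}
out = sequential_sample([0, 1, 2], fake_bus.__getitem__)
assert out[1] == (22.0, 54.0)
```

Sharing one bus this way is what lets a single acquisition unit scale to many sensors at the cost of a longer sampling cycle, the trade-off the design above optimizes.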

32 pages, 5164 KB  
Article
Decentralized Distributed Sequential Neural Networks Inference on Low-Power Microcontrollers in Wireless Sensor Networks: A Predictive Maintenance Case Study
by Yernazar Bolat, Iain Murray, Yifei Ren and Nasim Ferdosian
Sensors 2025, 25(15), 4595; https://doi.org/10.3390/s25154595 - 24 Jul 2025
Cited by 4 | Viewed by 2059
Abstract
The growing adoption of IoT applications has led to increased use of low-power microcontroller units (MCUs) for energy-efficient, local data processing. However, deploying deep neural networks (DNNs) on these constrained devices is challenging due to limitations in memory, computational power, and energy. Traditional methods like cloud-based inference and model compression often incur bandwidth, privacy, and accuracy trade-offs. This paper introduces a novel Decentralized Distributed Sequential Neural Network (DDSNN) designed for low-power MCUs in Tiny Machine Learning (TinyML) applications. Unlike the existing methods that rely on centralized cluster-based approaches, DDSNN partitions a pre-trained LeNet across multiple MCUs, enabling fully decentralized inference in wireless sensor networks (WSNs). We validate DDSNN in a real-world predictive maintenance scenario, where vibration data from an industrial pump is analyzed in real-time. The experimental results demonstrate that DDSNN achieves 99.01% accuracy, matching the non-distributed baseline model while reducing inference latency by approximately 50%, a significant improvement over traditional, non-distributed approaches that demonstrates practical feasibility under realistic operating conditions. Full article
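The sequential partitioning described above can be modeled as function composition, with each partition standing in for the layers hosted on one MCU node and the Python call standing in for the wireless hop; the toy partitions below are illustrative, not the paper's LeNet split:

```python
def run_partitioned(partitions, x):
    """Run inference over a chain of model partitions; node i computes
    its layers and forwards the activations to node i+1."""
    for forward in partitions:
        x = forward(x)
    return x

# Three toy 'nodes': scale, bias, then threshold (hypothetical layers).
node_a = lambda v: [2 * e for e in v]
node_b = lambda v: [e + 1 for e in v]
node_c = lambda v: [1 if e > 4 else 0 for e in v]
assert run_partitioned([node_a, node_b, node_c], [1, 2, 3]) == [0, 1, 1]
```

The latency benefit reported above comes from pipelining: while node B processes one window's activations, node A can already start on the next window.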

10 pages, 857 KB  
Proceeding Paper
Implementation of a Prototype-Based Parkinson’s Disease Detection System Using a RISC-V Processor
by Krishna Dharavathu, Pavan Kumar Sankula, Uma Maheswari Vullanki, Subhan Khan Mohammad, Sai Priya Kesapatnapu and Sameer Shaik
Eng. Proc. 2025, 87(1), 97; https://doi.org/10.3390/engproc2025087097 - 21 Jul 2025
Viewed by 900
Abstract
Parkinson’s disease (PD) has a high incidence among human diseases, according to a recent survey by the World Health Organization (WHO). According to WHO records, this chronic disease has affected approximately 10 million people worldwide. Patients who do not receive an early diagnosis may develop an incurable neurological disorder. PD is a degenerative disorder of the brain characterized by impairment of the nigrostriatal system, accompanied by a wide range of motor and non-motor symptoms. In this work, PD is detected from patients’ speech signals using a reduced instruction set computing, 5th version (RISC-V) processor. The RISC-V microcontroller unit (MCU) was designed for a voice-controlled human-machine interface (HMI). Signal processing and feature extraction methods capture how the speech signal is affected by the impairment of the nigrostriatal system, and classifier modules then label the speech signals as normal or abnormal to identify PD. We use Matrix Laboratory (MATLAB R2021a_v9.10.0.1602886) to analyze the data, develop algorithms, create modules, and develop the RISC-V processor for embedded implementation. Machine learning (ML) techniques are also used to extract features such as pitch, tremor, and Mel-frequency cepstral coefficients (MFCCs). Full article
(This article belongs to the Proceedings of The 5th International Electronic Conference on Applied Sciences)
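Of the features listed above, pitch is the simplest to sketch: the pitch period of a voiced frame can be estimated by autocorrelation, finding the lag at which the signal best matches a shifted copy of itself. This generic method is an illustration, not necessarily the authors' extractor, and the MFCC and tremor features need considerably more machinery:

```python
import math

def estimate_period(signal, min_lag, max_lag):
    """Return the lag (in samples) maximizing the autocorrelation,
    i.e. an estimate of the pitch period of a voiced frame."""
    best_lag, best_score = min_lag, float("-inf")
    for lag in range(min_lag, max_lag + 1):
        score = sum(signal[i] * signal[i + lag]
                    for i in range(len(signal) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A pure tone with a 40-sample period should be recovered exactly.
tone = [math.sin(2 * math.pi * i / 40) for i in range(800)]
assert estimate_period(tone, 20, 200) == 40
```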

22 pages, 2113 KB  
Article
Tracking Control of Quadrotor Micro Aerial Vehicles Using Efficient Nonlinear Model Predictive Control with C/GMRES Optimization on Resource-Constrained Microcontrollers
by Dong-Min Lee, Jae-Hong Jung, Yeon-Su Sim and Gi-Woo Kim
Electronics 2025, 14(14), 2775; https://doi.org/10.3390/electronics14142775 - 10 Jul 2025
Viewed by 2112
Abstract
This study investigates the tracking control of quadrotor micro aerial vehicles using nonlinear model predictive control (NMPC), with primary emphasis on the implementation of a real-time embedded control system. Apart from the limited memory size, one of the critical challenges is the limited processor speed on resource-constrained microcontroller units (MCUs). This technical issue becomes critical particularly when the maximum allowed computation time for real-time control exceeds 0.01 s, which is the typical sampling time required to ensure reliable control performance. To reduce the computational burden for NMPC, we first derive a nonlinear quadrotor model based on the quaternion number system rather than formulating nonlinear equations using conventional Euler angles. In addition, an implicit continuation generalized minimum residual optimization algorithm is designed for the fast computation of the optimal receding horizon control input. The proposed NMPC is extensively validated through rigorous simulations and experimental trials using Crazyflie 2.1®, an open-source flying development platform. Owing to the more precise prediction of the highly nonlinear quadrotor model, the proposed NMPC demonstrates that the tracking performance outperforms that of conventional linear MPCs. This study provides a basis and comprehensive guidelines for implementing the NMPC of nonlinear quadrotors on resource-constrained MCUs, with potential extensions to applications such as autonomous flight and obstacle avoidance. Full article
(This article belongs to the Section Systems & Control Engineering)

17 pages, 5876 KB  
Article
Optimization of Knitted Strain Sensor Structures for a Real-Time Korean Sign Language Translation Glove System
by Youn-Hee Kim and You-Kyung Oh
Sensors 2025, 25(14), 4270; https://doi.org/10.3390/s25144270 - 9 Jul 2025
Viewed by 1222
Abstract
Herein, an integrated system is developed based on knitted strain sensors for real-time translation of sign language into text and audio voices. To investigate how the structural characteristics of the knit affect the electrical performance, the position of the conductive yarn and the presence or absence of elastic yarn are set as experimental variables, and five distinct sensors are manufactured. A comprehensive analysis of the electrical and mechanical performance, including sensitivity, responsiveness, reliability, and repeatability, reveals that the sensor with a plain-plated-knit structure, no elastic yarn included, and the conductive yarn positioned uniformly on the back exhibits the best performance, with a gauge factor (GF) of 88. The sensor exhibited a response time of less than 0.1 s at 50 cycles per minute (cpm), demonstrating that it detects and responds promptly to finger joint bending movements. Moreover, it exhibits stable repeatability and reliability across various angles and speeds, confirming its optimization for sign language recognition applications. Based on this design, an integrated textile-based system is developed by incorporating the sensor, interconnections, snap connectors, and a microcontroller unit (MCU) with built-in Bluetooth Low Energy (BLE) technology into the knitted glove. The complete system successfully recognized 12 Korean Sign Language (KSL) gestures in real time and output them as both text and audio through a dedicated application, achieving a high recognition accuracy of 98.67%. Thus, the present study quantitatively elucidates the structure–performance relationship of a knitted sensor and proposes a wearable system that accounts for real-world usage environments, thereby demonstrating the commercialization potential of the technology. Full article
(This article belongs to the Section Wearables)
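The gauge factor reported above follows the standard definition GF = (ΔR/R0)/ε, the fractional resistance change per unit strain; a sketch with hypothetical numbers chosen to land near the reported GF of 88:

```python
def gauge_factor(delta_r_ohm, r0_ohm, strain):
    """Gauge factor of a strain sensor: GF = (dR/R0) / strain.
    strain is dimensionless (e.g. 0.001 for 0.1% elongation)."""
    return (delta_r_ohm / r0_ohm) / strain

# Hypothetical: an 8.8 ohm change on a 100 ohm sensor at 0.1% strain.
assert round(gauge_factor(8.8, 100.0, 0.001)) == 88
```

A higher GF means a larger, easier-to-digitize resistance swing for the same finger-joint bend, which is why the sensor structure was optimized for it.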

8 pages, 866 KB  
Proceeding Paper
Internet of Things and Predictive Artificial Intelligence for SmartComposting Process in the Context of Circular Economy
by Soukaina Fouguira, Emna Ammar, Mounia Em Haji and Jamal Benhra
Eng. Proc. 2025, 97(1), 16; https://doi.org/10.3390/engproc2025097016 - 10 Jun 2025
Cited by 4 | Viewed by 3309
Abstract
To promote sustainable development, adopting circular economy principles is crucial for preserving natural resources and ensuring environmental continuity. Among solid waste management strategies, composting plays a significant role by converting biodegradable waste into eco-friendly biofertilizers. Traditional composting methods, which rely on open windrow techniques, face challenges in controlling critical physico-chemical parameters such as temperature, humidity, and gaseous emissions. Additionally, these methods require significant labor and over 100 days to achieve compost maturity. To address these issues, we propose an intelligent, automated composting system leveraging the Internet of Things (IoT) and wireless sensor networks (WSNs). This system integrates sensors for real-time monitoring of key parameters: DS18b20 for waste temperature, HD-38 for humidity, DHT11 for ambient conditions, and MQ sensors for detecting CO2, NH3, and CH4. Controlled by an ESP32 microcontroller unit (MCU), the system employs a mixer and heating elements to optimize waste degradation based on sensor feedback. Data transmission is managed using the MQTT protocol, allowing real-time monitoring via a cloud-based platform (ThingSpeak). Furthermore, the degradation process was analyzed during the first 24 h, and a recurrent neural network (RNN) algorithm was employed to predict the time required for reaching optimal compost maturity, ensuring an efficient and sustainable solution. Full article
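The sensor-feedback actuation loop described above can be sketched as a threshold rule; the set-points, action names, and the rule itself are hypothetical illustrations, not the paper's control logic:

```python
def actuate(temp_c, humidity_pct, target_temp_c=55.0, min_humidity_pct=45.0):
    """Decide mixer/heater actions from the latest sensor readings,
    in the spirit of the ESP32-driven closed loop described above."""
    actions = []
    if temp_c < target_temp_c:
        actions.append("heater_on")     # keep the pile in a hot range
    if humidity_pct < min_humidity_pct:
        actions.append("mix_and_wet")   # redistribute moisture in the pile
    return actions

# A cool, dry pile needs both interventions; a hot, moist one needs none.
assert actuate(48.0, 40.0) == ["heater_on", "mix_and_wet"]
assert actuate(60.0, 50.0) == []
```

In the actual system these decisions would be published over MQTT alongside the raw readings so the cloud dashboard reflects both state and actions.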