Search Results (2,897)

Search Parameters:
Keywords = inertial sensors

15 pages, 530 KiB  
Article
Kinesiological Analysis Using Inertial Sensor Systems: Methodological Framework and Clinical Applications in Pathological Gait
by Danelina Emilova Vacheva and Atanas Kostadinov Drumev
Sensors 2025, 25(14), 4435; https://doi.org/10.3390/s25144435 - 16 Jul 2025
Abstract
Accurate gait assessment is essential for managing pathological locomotion, especially in elderly patients recovering from hip joint surgeries. Inertial measurement units (IMUs) provide real-time, objective data in clinical settings. This study examined pelvic oscillations in sagittal, frontal, and transverse planes using a wearable IMU system in two groups: Group A (n = 15, osteosynthesis metallica) and Group B (n = 34, arthroplasty), all over age 65. Gait analysis was conducted during assisted and unassisted walking. In the frontal plane, both groups showed statistically significant improvements: Group A from 46.4% to 75.2% (p = 0.001) and Group B from 52.6% to 72.2% (p = 0.001), reflecting enhanced lateral stability. In the transverse plane, Group A improved significantly from 47.7% to 80.2% (p = 0.001), while Group B showed a non-significant increase from 73.0% to 80.5% (p = 0.068). Sagittal plane changes were not statistically significant (Group A: 68.8% to 71.1%, p = 0.313; Group B: 76.4% to 69.1%, p = 0.065). These improvements correspond to better pelvic symmetry and postural control, which are critical for a safe and stable gait. Improvements were more pronounced during unassisted walking, indicating better pelvic control. These results confirm the clinical utility of IMUs in capturing subtle gait asymmetries and monitoring recovery progress. The findings support their use in tailoring rehabilitation strategies, particularly for enhancing frontal and transverse pelvic stability in elderly orthopedic patients. Full article
(This article belongs to the Special Issue Sensor Technologies for Gait Analysis: 2nd Edition)
16 pages, 2107 KiB  
Article
Determination of Spatiotemporal Gait Parameters Using a Smartphone’s IMU in the Pocket: Threshold-Based and Deep Learning Approaches
by Seunghee Lee, Changeon Park, Eunho Ha, Jiseon Hong, Sung Hoon Kim and Youngho Kim
Sensors 2025, 25(14), 4395; https://doi.org/10.3390/s25144395 - 14 Jul 2025
Abstract
This study proposes a hybrid approach combining a threshold-based algorithm and deep learning to detect four major gait events—initial contact (IC), toe-off (TO), opposite initial contact (OIC), and opposite toe-off (OTO)—using only a smartphone’s built-in inertial sensor placed in the user’s pocket. The algorithm enables estimation of spatiotemporal gait parameters such as cadence, stride length, loading response (LR), pre-swing (PSw), single limb support (SLS), double limb support (DLS), and swing phase and symmetry. Gait data were collected from 20 healthy individuals and 13 hemiparetic stroke patients. To reduce sensitivity to sensor orientation and suppress noise, sum vector magnitude (SVM) features were extracted and filtered using a second-order Butterworth low-pass filter at 3 Hz. A deep learning model was further compressed using knowledge distillation, reducing model size by 96% while preserving accuracy. The proposed method achieved event-detection error rates below 2% of the gait cycle for healthy gait and a maximum of 4.4% for patient gait, with corresponding parameter estimation errors also within 4%. These results demonstrated the feasibility of accurate and real-time gait monitoring using a smartphone. In addition, statistical analysis of gait parameters such as symmetry and DLS revealed significant differences between the normal and patient groups. While this study is not intended to provide or guide rehabilitation treatment, it offers a practical means to regularly monitor patients’ gait status and observe gait recovery trends over time. Full article
(This article belongs to the Special Issue Wearable Devices for Physical Activity and Healthcare Monitoring)
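The orientation-robust preprocessing described in this abstract — a sum vector magnitude (SVM) signal filtered with a second-order 3 Hz Butterworth low-pass — can be sketched roughly as follows. The function name, sampling rate, and synthetic test signal are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def svm_lowpass(acc_xyz, fs, cutoff_hz=3.0, order=2):
    """Sum vector magnitude (SVM) of a 3-axis accelerometer signal,
    low-pass filtered with a zero-phase Butterworth filter.

    acc_xyz : (N, 3) acceleration samples; fs : sampling rate in Hz.
    """
    svm = np.linalg.norm(acc_xyz, axis=1)         # magnitude is orientation-invariant
    b, a = butter(order, cutoff_hz / (fs / 2.0))  # 2nd-order low-pass, 3 Hz cutoff
    return filtfilt(b, a, svm)                    # filtfilt avoids phase lag

# Synthetic pocket-IMU signal: slow gait component plus high-frequency noise
fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
acc = np.stack([np.sin(2 * np.pi * 1.0 * t),
                np.cos(2 * np.pi * 1.0 * t),
                9.81 + 0.5 * np.sin(2 * np.pi * 30.0 * t)], axis=1)
smooth = svm_lowpass(acc, fs)
```

Because the magnitude discards axis orientation, the result does not depend on how the phone is rotated inside the pocket, which matches the motivation given in the abstract.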
20 pages, 3710 KiB  
Article
An Accurate LiDAR-Inertial SLAM Based on Multi-Category Feature Extraction and Matching
by Nuo Li, Yiqing Yao, Xiaosu Xu, Shuai Zhou and Taihong Yang
Remote Sens. 2025, 17(14), 2425; https://doi.org/10.3390/rs17142425 - 12 Jul 2025
Abstract
Light Detection and Ranging (LiDAR)-inertial simultaneous localization and mapping (SLAM) is a critical component in multi-sensor autonomous navigation systems, providing both accurate pose estimation and detailed environmental understanding. Despite its importance, existing optimization-based LiDAR-inertial SLAM methods often face key limitations: unreliable feature extraction, sensitivity to noise and sparsity, and the inclusion of redundant or low-quality feature correspondences. These weaknesses hinder their performance in complex or dynamic environments and fail to meet the reliability requirements of autonomous systems. To overcome these challenges, we propose a novel and accurate LiDAR-inertial SLAM framework with three major contributions. First, we employ a robust multi-category feature extraction method based on principal component analysis (PCA), which effectively filters out noisy and weakly structured points, ensuring stable feature representation. Second, to suppress outlier correspondences and enhance pose estimation reliability, we introduce a coarse-to-fine two-stage feature correspondence selection strategy that evaluates geometric consistency and structural contribution. Third, we develop an adaptive weighted pose estimation scheme that considers both distance and directional consistency, improving the robustness of feature matching under varying scene conditions. These components are jointly optimized within a sliding-window-based factor graph, integrating LiDAR feature factors, IMU pre-integration, and loop closure constraints. Extensive experiments on public datasets (KITTI, M2DGR) and a custom-collected dataset validate the proposed method’s effectiveness. Results show that our system consistently outperforms state-of-the-art approaches in accuracy and robustness, particularly in scenes with sparse structure, motion distortion, and dynamic interference, demonstrating its suitability for reliable real-world deployment. Full article
(This article belongs to the Special Issue LiDAR Technology for Autonomous Navigation and Mapping)
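A common way to realize the PCA-based multi-category feature extraction mentioned in this abstract is to classify each LiDAR point's local neighborhood by the eigenvalues of its covariance matrix. The thresholds and category names below are illustrative assumptions, not the paper's actual criteria:

```python
import numpy as np

def pca_feature_class(neighbors, planar_ratio=0.01, edge_ratio=10.0):
    """Label a point's local neighborhood as 'planar', 'edge', or
    'scattered' from the eigenvalues of its covariance matrix."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))  # ascending eigenvalues
    if l1 / (l3 + 1e-12) < planar_ratio and l2 / (l3 + 1e-12) > 0.1:
        return 'planar'      # one near-zero eigenvalue: locally flat surface
    if l3 / (l2 + 1e-12) > edge_ratio:
        return 'edge'        # one dominant direction: line-like structure
    return 'scattered'       # weak structure: candidate for rejection

# A flat patch in the z = 0 plane should come out 'planar'
rng = np.random.default_rng(0)
patch = np.column_stack([rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50), np.zeros(50)])
label = pca_feature_class(patch)
```

Rejecting the 'scattered' class is one way such a pipeline can filter out the noisy, weakly structured points the abstract refers to.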
17 pages, 2032 KiB  
Article
Measurement Techniques for Highly Dynamic and Weak Space Targets Using Event Cameras
by Haonan Liu, Ting Sun, Ye Tian, Siyao Wu, Fei Xing, Haijun Wang, Xi Wang, Zongyu Zhang, Kang Yang and Guoteng Ren
Sensors 2025, 25(14), 4366; https://doi.org/10.3390/s25144366 - 12 Jul 2025
Abstract
Star sensors, as the most precise attitude measurement devices currently available, play a crucial role in spacecraft attitude estimation. However, traditional frame-based cameras tend to suffer from target blur and loss under high-dynamic maneuvers, which severely limit the applicability of conventional star sensors in complex space environments. In contrast, event cameras—drawing inspiration from biological vision—can capture brightness changes at ultrahigh speeds and output a series of asynchronous events, thereby demonstrating enormous potential for space detection applications. Building on this, this paper proposes an event data extraction method for weak, high-dynamic space targets to enhance the performance of event cameras in detecting space targets under high-dynamic maneuvers. In the target denoising phase, we fully consider the characteristics of space targets’ motion trajectories and optimize a classical spatiotemporal correlation filter, thereby significantly improving the signal-to-noise ratio for weak targets. During the target extraction stage, we introduce the DBSCAN clustering algorithm to achieve the subpixel-level extraction of target centroids. Moreover, to address issues of target trajectory distortion and data discontinuity in certain ultrahigh-dynamic scenarios, we construct a camera motion model based on real-time motion data from an inertial measurement unit (IMU) and utilize it to effectively compensate for and correct the target’s trajectory. Finally, a ground-based simulation system is established to validate the applicability and superior performance of the proposed method in real-world scenarios. Full article
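The DBSCAN-based subpixel centroid extraction mentioned in this abstract can be approximated with a minimal, self-contained clustering routine like the following; `eps`, `min_pts`, and the toy event data are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def dbscan_centroids(events, eps=2.0, min_pts=4):
    """Cluster 2-D event coordinates with a minimal DBSCAN and return the
    sub-pixel centroid of each cluster; isolated noise events are dropped.

    events : (N, 2) array of (x, y) pixel coordinates.
    """
    n = len(events)
    dist = np.linalg.norm(events[:, None, :] - events[None, :, :], axis=2)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)                       # -1 = noise / unvisited
    n_clusters = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                              # not an unvisited core point
        labels[i] = n_clusters                    # start a new cluster at core i
        queue = list(neighbors[i])
        while queue:                              # expand the cluster outward
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = n_clusters
                if len(neighbors[j]) >= min_pts:  # j is itself a core point
                    queue.extend(neighbors[j])
        n_clusters += 1
    return [events[labels == c].mean(axis=0) for c in range(n_clusters)]

# Two synthetic event blobs plus one stray noise event
rng = np.random.default_rng(1)
blobs = np.vstack([rng.normal((10, 10), 0.5, size=(30, 2)),
                   rng.normal((40, 25), 0.5, size=(30, 2)),
                   [[0.0, 50.0]]])
cents = dbscan_centroids(blobs)
```

Averaging the integer event coordinates within a cluster is what yields the sub-pixel centroid estimate.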
14 pages, 6560 KiB  
Article
Robust and Precise Navigation and Obstacle Avoidance for Unmanned Ground Vehicle
by Iván González-Hernández, Jonathan Flores, Sergio Salazar and Rogelio Lozano
Sensors 2025, 25(14), 4334; https://doi.org/10.3390/s25144334 - 11 Jul 2025
Abstract
This paper presents a robust control strategy based on a simplified second-order sliding mode for autonomous navigation and obstacle avoidance in an unmanned ground vehicle. The proposed control is implemented in a mini ground vehicle equipped with redundant inertial sensors for orientation, a global positioning system, and LiDAR sensors. The control algorithm avoids computing the derivative of the sliding surface, which makes it feasible to implement in real time. Stability of the system is demonstrated using Lyapunov's second method, and the robustness of the proposed algorithm is verified through numerical simulations. Outdoor experimental tests are performed in order to validate the performance of the proposed control. Full article
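As a rough illustration of a second-order sliding mode law that needs only the sliding variable (never its derivative), the classic super-twisting algorithm on a double-integrator model can be simulated as below. The plant, gains, and disturbance are assumptions chosen for illustration and are not taken from the paper:

```python
import numpy as np

def simulate_supertwisting(T=10.0, dt=1e-3, c=1.0, k1=1.5, k2=1.1):
    """Super-twisting (second-order sliding mode) control of a double
    integrator with a matched sinusoidal disturbance. The control law uses
    only the sliding variable s, never its time derivative."""
    x, xd, v = 1.0, 0.0, 0.0                    # position, velocity, integral state
    for i in range(int(T / dt)):
        s = xd + c * x                          # sliding surface s = x_dot + c*x
        u = -c * xd - k1 * np.sqrt(abs(s)) * np.sign(s) + v
        v += -k2 * np.sign(s) * dt              # integral (twisting) term
        d = 0.3 * np.sin(i * dt)                # bounded matched disturbance
        xdd = u + d                             # plant: x_ddot = u + d
        x += xd * dt
        xd += xdd * dt
    return x, xd, s

x_end, xd_end, s_end = simulate_supertwisting()
```

The integral term `v` absorbs the discontinuous switching, so the applied control stays continuous — one reason second-order sliding modes are attractive for real-time embedded implementations.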
16 pages, 5397 KiB  
Article
Evaluation of Technical and Anthropometric Factors in Postures and Muscle Activation of Heavy-Truck Vehicle Drivers: Implications for the Design of Ergonomic Cabins
by Esteban Ortiz, Daysi Baño-Morales, William Venegas, Álvaro Page, Skarlet Guerra, Mateo Narváez and Iván Zambrano
Appl. Sci. 2025, 15(14), 7775; https://doi.org/10.3390/app15147775 - 11 Jul 2025
Abstract
This study investigates how three technical factors—steering wheel tilt, torque, and cabin vibration frequency—affect driver posture. Heavy-truck drivers often suffer from musculoskeletal disorders (MSDs), mainly due to poor cabin ergonomics and prolonged postures during work. In countries like Ecuador, making major structural changes to cabin design is not feasible. These factors were identified through video analysis and surveys from drivers at two Ecuadorian trucking companies. An experimental system was developed using a simplified cabin to control these variables, while posture and muscle activity were recorded in 16 participants using motion capture, inertial sensors, and electromyography (EMG) on the upper trapezius, middle trapezius, triceps brachii, quadriceps muscle, and gastrocnemius muscle. The test protocol simulated key truck-driving tasks. Data were analyzed using ANOVA (p<0.05), with technical factors and mass index as independent variables, and posture metrics as dependent variables. Results showed that head mass index significantly affected head abduction–adduction (8.12 to 2.18°), and spine mass index influenced spine flexion–extension (0.38 to 6.99°). Among technical factors, steering wheel tilt impacted trunk flexion–extension (13.56 to 16.99°) and arm rotation (31.1 to 19.7°). Steering wheel torque affected arm rotation (30.49 to 6.77°), while vibration frequency influenced forearm flexion–extension (3.76 to 16.51°). EMG signals showed little variation between muscles, likely due to the protocol’s short duration. These findings offer quantitative support for improving cabin ergonomics in low-resource settings through targeted, cost-effective design changes. Full article
(This article belongs to the Section Mechanical Engineering)
21 pages, 2189 KiB  
Article
Smart Watch Sensors for Tremor Assessment in Parkinson’s Disease—Algorithm Development and Measurement Properties Analysis
by Giulia Palermo Schifino, Maira Jaqueline da Cunha, Ritchele Redivo Marchese, Vinicius Mabília, Luis Henrique Amoedo Vian, Francisca dos Santos Pereira, Veronica Cimolin and Aline Souza Pagnussat
Sensors 2025, 25(14), 4313; https://doi.org/10.3390/s25144313 - 10 Jul 2025
Abstract
Parkinson’s disease (PD) is a neurodegenerative disorder commonly marked by upper limb tremors that interfere with daily activities. Wearable devices, such as smartwatches, represent a promising solution for continuous and objective monitoring in PD. This study aimed to develop and validate a tremor-detection algorithm using smartwatch sensors. Data were collected from 21 individuals with PD and 27 healthy controls using both a commercial inertial measurement unit (G-Sensor, BTS Bioengineering, Italy) and a smartwatch (Apple Watch Series 3). Participants performed standardized arm movements while sensor signals were synchronized and processed to extract relevant features. Statistical analyses assessed discriminant and concurrent validity, reliability, and accuracy. The algorithm demonstrated moderate to strong correlations between smartwatch and commercial IMU data, effectively distinguishing individuals with PD from healthy controls and showing associations with clinical measures, such as the MDS-UPDRS III. Reliability analysis demonstrated agreement between repeated measurements, although a proportional bias was noted. Power spectral density (PSD) analysis of accelerometer and gyroscope data along the x-axis successfully detected the presence of tremors. These findings support the use of smartwatches as a tool for detecting tremors in PD. However, further studies involving larger and more clinically impaired samples are needed to confirm the robustness and generalizability of these results. Full article
(This article belongs to the Special Issue IMU and Innovative Sensors for Healthcare)
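The power-spectral-density tremor check described in this abstract can be sketched with Welch's method by comparing in-band to total power. The 3–7 Hz band, the decision thresholds, and the synthetic signals are illustrative assumptions, not the study's pipeline:

```python
import numpy as np
from scipy.signal import welch

def tremor_band_ratio(signal, fs, band=(3.0, 7.0)):
    """Fraction of total spectral power falling inside a tremor band,
    estimated with Welch's method."""
    f, pxx = welch(signal, fs=fs, nperseg=min(len(signal), 512))
    in_band = (f >= band[0]) & (f <= band[1])
    return pxx[in_band].sum() / pxx.sum()

fs = 50.0                                         # wearable sampling rate (assumed)
t = np.arange(0.0, 20.0, 1.0 / fs)
rng = np.random.default_rng(2)
tremor = np.sin(2 * np.pi * 5.0 * t) + 0.1 * rng.normal(size=t.size)  # 5 Hz tremor
steady = 0.1 * rng.normal(size=t.size)                                # no tremor
```

A recording dominated by a 4–6 Hz oscillation concentrates most of its power in the band, so the ratio separates tremor from tremor-free movement.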
25 pages, 1272 KiB  
Article
Complex Environmental Geomagnetic Matching-Assisted Navigation Algorithm Based on Improved Extreme Learning Machine
by Jian Huang, Zhe Hu and Wenjun Yi
Sensors 2025, 25(14), 4310; https://doi.org/10.3390/s25144310 - 10 Jul 2025
Abstract
In complex environments where satellite signals may be interfered with, it is difficult to achieve precise positioning of high-speed aerial vehicles solely through the inertial navigation system. To overcome this challenge, this paper proposes an NGO-ELM geomagnetic matching-assisted navigation algorithm, in which the Northern Goshawk Optimization (NGO) algorithm is used to optimize the initial weights and biases of the Extreme Learning Machine (ELM). To enhance the matching performance of the NGO-ELM algorithm, three improvements are proposed to the NGO algorithm. The effectiveness of these improvements is validated using the CEC2005 benchmark function suite. Additionally, the IGRF-13 model is utilized to generate a geomagnetic matching dataset, followed by comparative testing of five geomagnetic matching models: INGO-ELM, NGO-ELM, ELM, INGO-XGBoost, and INGO-BP. The simulation results show that after the airborne equipment acquires the geomagnetic data, it only takes 0.27 µs to obtain the latitude, longitude, and altitude of the aerial vehicle through the INGO-ELM model. After unit conversion, the average absolute errors are approximately 6.38 m, 6.43 m, and 0.0137 m, respectively, which significantly outperform the results of four other models. Furthermore, when noise is introduced into the test set inputs, the positioning error of the INGO-ELM model remains within the same order of magnitude as those before the noise was added, indicating that the model exhibits excellent robustness. It has been verified that the geomagnetic matching-assisted navigation algorithm proposed in this paper can achieve real-time, accurate, and stable positioning, even in the presence of observational errors from the magnetic sensor. Full article
(This article belongs to the Section Navigation and Positioning)
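A plain Extreme Learning Machine — the base model whose initial weights and biases the paper tunes with NGO — can be written in a few lines of NumPy. The toy regression task below stands in for the geomagnetic matching dataset, and the NGO/INGO weight optimization is intentionally omitted:

```python
import numpy as np

def elm_train(X, y, n_hidden=64, seed=0):
    """Minimal Extreme Learning Machine: random fixed hidden layer,
    closed-form least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights (never trained)
    b = rng.normal(size=n_hidden)                  # random biases (never trained)
    H = np.tanh(X @ W + b)                         # hidden-layer feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights in closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy 1-D regression standing in for the geomagnetic matching problem
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(X[:, 0])
W, b, beta = elm_train(X, y)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
```

Because only the output weights are solved for, training reduces to one least-squares problem — the speed that makes the sub-microsecond inference figures in the abstract plausible. The quality of the random `W` and `b` is exactly what a metaheuristic such as NGO can improve.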
15 pages, 2750 KiB  
Article
Gait Environment Recognition Using Biomechanical and Physiological Signals with Feed-Forward Neural Network: A Pilot Study
by Kyeong-Jun Seo, Jinwon Lee, Ji-Eun Cho, Hogene Kim and Jung Hwan Kim
Sensors 2025, 25(14), 4302; https://doi.org/10.3390/s25144302 - 10 Jul 2025
Abstract
Gait, the fundamental form of human locomotion, occurs across diverse environments. The technology for recognizing environmental changes during walking is crucial for preventing falls and controlling wearable robots. This study collected gait data on level ground (LG), ramps, and stairs and used a feed-forward neural network (FFNN) to classify the corresponding gait environments. Gait experiments were performed on five non-disabled participants using an inertial measurement unit, a galvanic skin response sensor, and a smart insole. The collected data were preprocessed through time synchronization and filtering, then labeled according to the gait environment, yielding 47,033 data samples. Gait data were used to train an FFNN model with a single hidden layer, achieving a high accuracy of 98%, with the highest accuracy observed on LG. This study confirms the effectiveness of classifying gait environments based on signals acquired from various wearable sensors during walking. In the future, these research findings may serve as basic data for exoskeleton robot control and gait analysis. Full article
(This article belongs to the Special Issue Wearable Sensing Technologies for Human Health Monitoring)
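A minimal single-hidden-layer feed-forward network of the kind described in this abstract can be sketched as follows. The synthetic 2-D "environment" clusters stand in for the study's wearable-sensor features and are purely illustrative:

```python
import numpy as np

def train_ffnn(X, y, n_hidden=16, n_classes=3, lr=0.5, epochs=800, seed=0):
    """Single-hidden-layer feed-forward network (tanh hidden, softmax output)
    trained with full-batch gradient descent on cross-entropy loss."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, n_classes));  b2 = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                       # one-hot labels
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                   # hidden activations
        Z = H @ W2 + b2
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)          # softmax probabilities
        G = (P - Y) / len(X)                       # output-layer gradient
        dH = (G @ W2.T) * (1.0 - H ** 2)           # backprop through tanh
        W2 -= lr * (H.T @ G); b2 -= lr * G.sum(axis=0)
        W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)
    return W1, b1, W2, b2

def ffnn_predict(X, W1, b1, W2, b2):
    return np.argmax(np.tanh(X @ W1 + b1) @ W2 + b2, axis=1)

# Three synthetic "gait environments" as separable 2-D feature clusters
rng = np.random.default_rng(1)
means = [(-1.0, -1.0), (1.0, -1.0), (0.0, 1.0)]
X = np.vstack([rng.normal(m, 0.3, size=(60, 2)) for m in means])
y = np.repeat([0, 1, 2], 60)
acc = np.mean(ffnn_predict(X, *train_ffnn(X, y)) == y)
```

With well-separated feature clusters, even this tiny network classifies nearly perfectly, which is consistent with the high accuracy the study reports for distinct environments such as level ground.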
20 pages, 3148 KiB  
Article
Performance Analysis of Stellar Refraction Autonomous Navigation for Cross-Domain Vehicles
by Yuchang Xu, Yang Zhang, Xiaokang Wang, Guanbing Zhang, Guang Yang and Hong Yuan
Remote Sens. 2025, 17(14), 2367; https://doi.org/10.3390/rs17142367 - 9 Jul 2025
Abstract
Stellar refraction autonomous navigation provides a promising alternative for cross-domain vehicles, particularly in near-space environments where traditional inertial and satellite navigation methods face limitations. This study develops a stellar refraction navigation system that utilizes stellar refraction angle observations and the Implicit Unscented Kalman Filter (IUKF) for state estimation. A representative orbit with altitudes ranging from 60 km to 200 km is designed to simulate cross-domain flight conditions. The navigation performance is analyzed under varying conditions, including orbital altitude, as well as star sensor design parameters, such as limiting magnitude, field of view (FOV) value, and measurement error, along with different sampling intervals. The simulation results show that increasing the limiting magnitude from 5 to 8 reduced the position error from 705.19 m to below 1 m, with optimal accuracy reaching 0.89 m when using a 20° × 20° field of view and a 3 s sampling interval. In addition, shorter sampling intervals improved accuracy and filter stability, while longer intervals introduced greater integration drift. When the sampling interval reached 100 s, position error grew to the kilometer level. These findings validate the feasibility of using stellar refraction for autonomous navigation in cross-domain scenarios and provide design guidance for optimizing star sensor configurations and sampling strategies in future near-space navigation systems. Full article
(This article belongs to the Special Issue Autonomous Space Navigation (Second Edition))
17 pages, 2328 KiB  
Article
Investigating Performance of an Embedded Machine Learning Solution for Classifying Postural Behaviors
by Bruno Andò, Salvatore Baglio, Mattia Manenti, Valeria Finocchiaro, Vincenzo Marletta, Sreeraman Rajan, Ebrahim Ali Nehary, Valeria Dibilio, Mario Zappia and Giovanni Mostile
Sensors 2025, 25(14), 4262; https://doi.org/10.3390/s25144262 - 9 Jul 2025
Abstract
Postural instability is one of the main critical aspects to be monitored in the case of degenerative diseases, and is also a predictor of potential falls. This paper presents a multi-layer perceptron approach for the classification of four different classes of postural behaviors that is implemented by an embedded sensing architecture. The robustness of the methodology against noisy data and the effects of using different sets of classification features have been investigated. In the case of noisy input data, a reliability index of almost 100% has been obtained, with a negligible drop (less than 5%) being shown for the whole range of noise levels that was investigated. Such an achievement substantiates the better robustness of this approach with respect to threshold-based algorithms, which have been also considered for the sake of comparison. Full article
(This article belongs to the Section Wearables)
13 pages, 1892 KiB  
Article
Research on Improving the Accuracy of Wearable Heart Rate Measurement Based on a Six-Axis Sensing Device Integrating a Three-Axis Accelerometer and a Three-Axis Gyroscope
by Jinman Kim and Joongjin Kook
Appl. Sci. 2025, 15(14), 7659; https://doi.org/10.3390/app15147659 - 8 Jul 2025
Abstract
This study proposes a novel heart rate estimation method that detects subtle cardiac-induced vibrations propagated through the cardiovascular system based on the ballistocardiography (BCG) principle, using a six-axis heart rate sensing device that integrates a three-axis accelerometer and a three-axis gyroscope. To validate the effectiveness of the proposed method, a comparative analysis was conducted against heart rate measurements obtained from photoplethysmography (PPG) sensors, which are widely used in conventional heart rate monitoring. Experiments were conducted on 20 adult participants, and frequency domain analysis was performed using different time windows of 30 s, 20 s, 8 s, and 4 s. The results showed that the 4 s window provided the highest accuracy in heart rate estimation, demonstrating that the proposed method can effectively capture fine cardiac-induced vibrations. This approach offers a significant advantage by utilizing inertial sensors commonly embedded in wearable devices for heart rate monitoring without the need for additional optical sensors. Compared to optical-based systems, the proposed method is more power-efficient and less affected by environmental factors such as ambient lighting conditions. The findings suggest that heart rate estimation using the six-axis heart rate sensing device presents a reliable, continuous, and non-invasive alternative for cardiovascular monitoring. Full article
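The core of the frequency-domain analysis described in this abstract — finding the dominant cardiac frequency in a short window of an inertial signal — can be sketched as follows. The sampling rate, band limits, and simulated BCG signal are assumptions for illustration, not the study's actual pipeline:

```python
import numpy as np

def estimate_hr_bpm(signal, fs, window_s=4.0, band=(0.7, 3.0)):
    """Heart rate from a short window of an inertial (BCG-like) signal:
    dominant FFT frequency inside a plausible cardiac band (42-180 bpm)."""
    n = int(window_s * fs)
    seg = signal[:n] - np.mean(signal[:n])       # remove the DC component
    spectrum = np.abs(np.fft.rfft(seg))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[in_band][np.argmax(spectrum[in_band])]

fs = 100.0
t = np.arange(0.0, 8.0, 1.0 / fs)
bcg = 0.02 * np.sin(2 * np.pi * 1.25 * t)        # simulated 75 bpm micro-vibration
hr = estimate_hr_bpm(bcg, fs)
```

A shorter window tracks heart-rate changes faster but coarsens the frequency resolution (here 0.25 Hz, i.e. 15 bpm per FFT bin at 4 s), which is the trade-off behind the study's comparison of 30 s down to 4 s windows.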
19 pages, 3176 KiB  
Article
Deploying an Educational Mobile Robot
by Dorina Plókai, Borsa Détár, Tamás Haidegger and Enikő Nagy
Machines 2025, 13(7), 591; https://doi.org/10.3390/machines13070591 - 8 Jul 2025
Abstract
This study presents the development of a software solution for processing, analyzing, and visualizing sensor data collected by an educational mobile robot. The focus is on statistical analysis and identifying correlations between diverse datasets. The research utilized the PlatypOUs mobile robot platform, equipped with odometry and inertial measurement units (IMUs), to gather comprehensive motion data. To enhance the reliability and interpretability of the data, advanced data processing techniques—such as moving averages, correlation analysis, and exponential smoothing—were employed. Python-based tools, including Matplotlib and Visual Studio Code, were used for data visualization and analysis. The analysis provided key insights into the robot’s motion dynamics; specifically, its stability during linear movements and variability during turns. By applying moving average filtering and exponential smoothing, noise in the sensor data was significantly reduced, enabling clearer identification of motion patterns. Correlation analysis revealed meaningful relationships between velocity and acceleration during various motion states. These findings underscore the value of advanced data processing techniques in improving the performance and reliability of educational mobile robots. The insights gained in this pilot project contribute to the optimization of navigation algorithms and motion control systems, enhancing the robot’s future potential in STEM education applications. Full article
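The moving-average and exponential-smoothing steps mentioned in this abstract can be sketched in a few lines. The window length, smoothing factor, and synthetic odometry-like signal are illustrative assumptions:

```python
import numpy as np

def moving_average(x, k=5):
    """k-sample moving average (valid region only)."""
    return np.convolve(x, np.ones(k) / k, mode='valid')

def exp_smooth(x, alpha=0.2):
    """First-order exponential smoothing: s[t] = alpha*x[t] + (1-alpha)*s[t-1]."""
    s = np.empty(len(x))
    s[0] = x[0]
    for i in range(1, len(x)):
        s[i] = alpha * x[i] + (1.0 - alpha) * s[i - 1]
    return s

# Noisy constant-velocity reading, as odometry might report on a straight run
rng = np.random.default_rng(4)
raw = 1.0 + 0.3 * rng.normal(size=300)
ma = moving_average(raw)
es = exp_smooth(raw)
```

The moving average treats all samples in its window equally, while exponential smoothing weights recent samples more heavily — a common choice when the signal, like a turning robot's velocity, can change level quickly.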
25 pages, 4232 KiB  
Article
Multimodal Fusion Image Stabilization Algorithm for Bio-Inspired Flapping-Wing Aircraft
by Zhikai Wang, Sen Wang, Yiwen Hu, Yangfan Zhou, Na Li and Xiaofeng Zhang
Biomimetics 2025, 10(7), 448; https://doi.org/10.3390/biomimetics10070448 - 7 Jul 2025
Abstract
This paper presents FWStab, a specialized video stabilization dataset tailored for flapping-wing platforms. The dataset encompasses five typical flight scenarios, featuring 48 video clips with intense dynamic jitter. The corresponding Inertial Measurement Unit (IMU) sensor data are synchronously collected, which jointly provide reliable support for multimodal modeling. Based on this, to address the issue of poor image acquisition quality due to severe vibrations in aerial vehicles, this paper proposes a multi-modal signal fusion video stabilization framework. This framework effectively integrates image features and inertial sensor features to predict smooth and stable camera poses. During the video stabilization process, the true camera motion originally estimated based on sensors is warped to the smooth trajectory predicted by the network, thereby optimizing the inter-frame stability. This approach maintains the global rigidity of scene motion, avoids visual artifacts caused by traditional dense optical flow-based spatiotemporal warping, and rectifies rolling shutter-induced distortions. Furthermore, the network is trained in an unsupervised manner by leveraging a joint loss function that integrates camera pose smoothness and optical flow residuals. When coupled with a multi-stage training strategy, this framework demonstrates remarkable stabilization adaptability across a wide range of scenarios. The entire framework employs Long Short-Term Memory (LSTM) to model the temporal characteristics of camera trajectories, enabling high-precision prediction of smooth trajectories. Full article
32 pages, 2740 KiB  
Article
Vision-Based Navigation and Perception for Autonomous Robots: Sensors, SLAM, Control Strategies, and Cross-Domain Applications—A Review
by Eder A. Rodríguez-Martínez, Wendy Flores-Fuentes, Farouk Achakir, Oleg Sergiyenko and Fabian N. Murrieta-Rico
Eng 2025, 6(7), 153; https://doi.org/10.3390/eng6070153 - 7 Jul 2025
Abstract
Camera-centric perception has matured into a cornerstone of modern autonomy, from self-driving cars and factory cobots to underwater and planetary exploration. This review synthesizes more than a decade of progress in vision-based robotic navigation through an engineering lens, charting the full pipeline from sensing to deployment. We first examine the expanding sensor palette—monocular and multi-camera rigs, stereo and RGB-D devices, LiDAR–camera hybrids, event cameras, and infrared systems—highlighting the complementary operating envelopes and the rise of learning-based depth inference. The advances in visual localization and mapping are then analyzed, contrasting sparse and dense SLAM approaches, as well as monocular, stereo, and visual–inertial formulations. Additional topics include loop closure, semantic mapping, and LiDAR–visual–inertial fusion, which enables drift-free operation in dynamic environments. Building on these foundations, we review the navigation and control strategies, spanning classical planning, reinforcement and imitation learning, hybrid topological–metric memories, and emerging visual language guidance. Application case studies—autonomous driving, industrial manipulation, autonomous underwater vehicles, planetary rovers, aerial drones, and humanoids—demonstrate how tailored sensor suites and algorithms meet domain-specific constraints. Finally, the future research trajectories are distilled: generative AI for synthetic training data and scene completion; high-density 3D perception with solid-state LiDAR and neural implicit representations; event-based vision for ultra-fast control; and human-centric autonomy in next-generation robots. By providing a unified taxonomy, a comparative analysis, and engineering guidelines, this review aims to inform researchers and practitioners designing robust, scalable, vision-driven robotic systems. Full article
(This article belongs to the Special Issue Interdisciplinary Insights in Engineering Research)