Search Results (462)

Search Parameters:
Keywords = inertial sensors fusion

27 pages, 5938 KiB  
Article
Noise-Adaptive GNSS/INS Fusion Positioning for Autonomous Driving in Complex Environments
by Xingyang Feng, Mianhao Qiu, Tao Wang, Xinmin Yao, Hua Cong and Yu Zhang
Vehicles 2025, 7(3), 77; https://doi.org/10.3390/vehicles7030077 - 22 Jul 2025
Abstract
Accurate and reliable multi-scene positioning remains a critical challenge in autonomous driving systems, as conventional fixed-noise fusion strategies struggle to handle the dynamic error characteristics of heterogeneous sensors in complex operational environments. This paper proposes a novel noise-adaptive fusion framework integrating Global Navigation Satellite System (GNSS) and Inertial Navigation System (INS) measurements. Our key innovation lies in developing a dual noise estimation model that synergizes a priori weighting with posterior variance compensation. Specifically, we establish an a priori weighting model for satellite pseudorange errors based on elevation angles and signal-to-noise ratios (SNRs), complemented by a Helmert variance component estimation for posterior refinement. For INS error modeling, we derive a bias instability noise accumulation model through Allan variance analysis. These adaptive noise estimates dynamically update both process and observation noise covariance matrices in our Error-State Kalman Filter (ESKF) implementation, enabling real-time calibration of GNSS and INS contributions. Comprehensive field experiments demonstrate two key advantages: (1) The proposed noise estimation model achieves 37.7% higher accuracy in quantifying GNSS single-point positioning uncertainties compared to conventional elevation-based weighting; (2) in unstructured environments with intermittent signal outages, the fusion system maintains an average absolute trajectory error (ATE) of less than 0.6 m, outperforming state-of-the-art fixed-weight fusion methods by 36.71% in positioning consistency. These results validate the framework’s capability to autonomously balance sensor reliability under dynamic environmental conditions, significantly enhancing positioning robustness for autonomous vehicles. Full article
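Elevation- and SNR-dependent a priori weighting of pseudoranges, as mentioned in this abstract, follows a common GNSS weighting pattern. The minimal sketch below shows that general idea feeding a diagonal measurement-noise covariance for a Kalman-type filter; the functional form, coefficients, and variable names are illustrative assumptions, not the authors' model.

```python
import numpy as np

def pseudorange_variance(elevation_rad, snr_db, a=0.3, b=0.5, snr_ref_db=45.0):
    """Illustrative a priori variance for one satellite pseudorange (m^2).

    Variance grows at low elevation (1/sin^2 term) and low SNR; the coefficients
    a, b and the SNR scaling are assumptions for this sketch.
    """
    elev_term = a + b / max(np.sin(elevation_rad), 1e-3) ** 2
    snr_scale = 10.0 ** ((snr_ref_db - snr_db) / 10.0)  # weaker signal -> larger variance
    return elev_term * max(snr_scale, 1.0)

def observation_noise_matrix(satellites):
    """Diagonal measurement-noise covariance R used in the filter update."""
    return np.diag([pseudorange_variance(s["elev"], s["snr"]) for s in satellites])

# Example: three satellites at different elevations / signal strengths.
sats = [
    {"elev": np.deg2rad(75.0), "snr": 48.0},
    {"elev": np.deg2rad(30.0), "snr": 40.0},
    {"elev": np.deg2rad(10.0), "snr": 32.0},  # low elevation, weak signal -> down-weighted
]
print(np.round(np.diag(observation_noise_matrix(sats)), 2))
```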

20 pages, 2575 KiB  
Article
Gait Analysis Using Walking-Generated Acceleration Obtained from Two Sensors Attached to the Lower Legs
by Ayuko Saito, Natsuki Sai, Kazutoshi Kurotaki, Akira Komatsu, Shinichiro Morichi and Satoru Kizawa
Sensors 2025, 25(14), 4527; https://doi.org/10.3390/s25144527 - 21 Jul 2025
Viewed by 119
Abstract
Gait evaluation approaches using small, lightweight inertial sensors have recently been developed, offering improvements in terms of both portability and usability. However, accelerometer outputs include both the acceleration that is generated by human motion and gravitational acceleration, which changes along with the posture of the body part to which the sensor is attached. This study presents a gait analysis method that uses the gravitational, centrifugal, tangential, and translational accelerations obtained from sensors attached to the lower legs. In this method, each sensor pose is sequentially estimated using sensor fusion to combine data obtained from a three-axis gyroscope, a three-axis accelerometer, and a three-axis magnetometer. The estimated sensor pose is then used to calculate the gravitational acceleration that is included in each axis of the sensor coordinate system. The centrifugal and tangential accelerations are determined from the gyroscope output. The translational acceleration is then obtained by subtracting the centrifugal, tangential, and gravitational accelerations from the accelerometer output. As a result, the method separates the acceleration components contained in the outputs of the accelerometers attached to the lower legs. Because only the acceleration components caused by walking motion are retained, reflecting the characteristics of gait, the developed method is expected to be suitable for gait evaluation. Full article
(This article belongs to the Special Issue IMU and Innovative Sensors for Healthcare)
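The decomposition described in this abstract can be sketched as follows: remove the gravitational component (using the fused orientation estimate) together with the centrifugal and tangential terms computed from the gyroscope. The lever arm r from the rotation centre to the sensor, the finite-difference angular acceleration, and the sign convention (world z-axis up, a static level sensor reading +9.81 m/s² on its z-axis) are assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np

def translational_acceleration(acc_meas, gyro, gyro_prev, dt, R_world_to_sensor, r_lever):
    """Sketch: isolate translational acceleration from one lower-leg accelerometer.

    acc_meas          : 3-vector accelerometer output in the sensor frame (contains gravity)
    gyro, gyro_prev   : 3-vector angular rates at the current / previous sample (rad/s)
    dt                : sample period (s)
    R_world_to_sensor : 3x3 rotation from world to sensor frame (from the fused pose estimate)
    r_lever           : assumed 3-vector from the rotation centre to the sensor (m)
    """
    # Gravitational component contained in the accelerometer output.
    g_component = R_world_to_sensor @ np.array([0.0, 0.0, 9.81])
    alpha = (gyro - gyro_prev) / dt                         # angular acceleration (finite difference)
    a_centrifugal = np.cross(gyro, np.cross(gyro, r_lever))
    a_tangential = np.cross(alpha, r_lever)
    # Subtract the centrifugal, tangential, and gravitational terms, as described in the abstract.
    return acc_meas - a_centrifugal - a_tangential - g_component

# Toy check: a motionless, level sensor yields ~zero translational acceleration.
zero = np.zeros(3)
print(translational_acceleration(np.array([0.0, 0.0, 9.81]), zero, zero, 0.01,
                                 np.eye(3), np.array([0.0, 0.0, 0.3])))
```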

25 pages, 4232 KiB  
Article
Multimodal Fusion Image Stabilization Algorithm for Bio-Inspired Flapping-Wing Aircraft
by Zhikai Wang, Sen Wang, Yiwen Hu, Yangfan Zhou, Na Li and Xiaofeng Zhang
Biomimetics 2025, 10(7), 448; https://doi.org/10.3390/biomimetics10070448 - 7 Jul 2025
Viewed by 400
Abstract
This paper presents FWStab, a specialized video stabilization dataset tailored for flapping-wing platforms. The dataset encompasses five typical flight scenarios, featuring 48 video clips with intense dynamic jitter. The corresponding Inertial Measurement Unit (IMU) sensor data are synchronously collected, which jointly provide reliable support for multimodal modeling. Based on this, to address the issue of poor image acquisition quality due to severe vibrations in aerial vehicles, this paper proposes a multi-modal signal fusion video stabilization framework. This framework effectively integrates image features and inertial sensor features to predict smooth and stable camera poses. During the video stabilization process, the true camera motion originally estimated based on sensors is warped to the smooth trajectory predicted by the network, thereby optimizing the inter-frame stability. This approach maintains the global rigidity of scene motion, avoids visual artifacts caused by traditional dense optical flow-based spatiotemporal warping, and rectifies rolling shutter-induced distortions. Furthermore, the network is trained in an unsupervised manner by leveraging a joint loss function that integrates camera pose smoothness and optical flow residuals. When coupled with a multi-stage training strategy, this framework demonstrates remarkable stabilization adaptability across a wide range of scenarios. The entire framework employs Long Short-Term Memory (LSTM) to model the temporal characteristics of camera trajectories, enabling high-precision prediction of smooth trajectories. Full article
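The LSTM-based trajectory smoothing is only outlined in the abstract; the minimal PyTorch sketch below shows the general idea of mapping a window of jittery camera poses to a smoothed sequence trained with a smoothness penalty. The layer sizes, the 6-DoF pose parameterization, and the loss weights are assumptions, and the paper's optical-flow residual term is omitted here.

```python
import torch
import torch.nn as nn

class TrajectorySmoother(nn.Module):
    """Sketch: LSTM that predicts a smooth camera trajectory from a noisy one."""
    def __init__(self, pose_dim=6, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(pose_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, pose_dim)

    def forward(self, noisy_poses):            # (batch, time, 6)
        h, _ = self.lstm(noisy_poses)
        return noisy_poses + self.head(h)      # predict a correction to the real trajectory

def smoothness_loss(poses, w_vel=1.0, w_acc=10.0):
    """Penalize first and second temporal differences of the predicted trajectory."""
    vel = poses[:, 1:] - poses[:, :-1]
    acc = vel[:, 1:] - vel[:, :-1]
    return w_vel * vel.pow(2).mean() + w_acc * acc.pow(2).mean()

# Toy usage: smooth a jittery 30-frame trajectory while staying close to the real motion.
model = TrajectorySmoother()
noisy = torch.randn(1, 30, 6) * 0.01 + torch.linspace(0, 1, 30).view(1, 30, 1)
smooth = model(noisy)
loss = smoothness_loss(smooth) + 0.1 * (smooth - noisy).pow(2).mean()
loss.backward()
```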

32 pages, 2740 KiB  
Article
Vision-Based Navigation and Perception for Autonomous Robots: Sensors, SLAM, Control Strategies, and Cross-Domain Applications—A Review
by Eder A. Rodríguez-Martínez, Wendy Flores-Fuentes, Farouk Achakir, Oleg Sergiyenko and Fabian N. Murrieta-Rico
Eng 2025, 6(7), 153; https://doi.org/10.3390/eng6070153 - 7 Jul 2025
Viewed by 930
Abstract
Camera-centric perception has matured into a cornerstone of modern autonomy, from self-driving cars and factory cobots to underwater and planetary exploration. This review synthesizes more than a decade of progress in vision-based robotic navigation through an engineering lens, charting the full pipeline from sensing to deployment. We first examine the expanding sensor palette—monocular and multi-camera rigs, stereo and RGB-D devices, LiDAR–camera hybrids, event cameras, and infrared systems—highlighting the complementary operating envelopes and the rise of learning-based depth inference. The advances in visual localization and mapping are then analyzed, contrasting sparse and dense SLAM approaches, as well as monocular, stereo, and visual–inertial formulations. Additional topics include loop closure, semantic mapping, and LiDAR–visual–inertial fusion, which enables drift-free operation in dynamic environments. Building on these foundations, we review the navigation and control strategies, spanning classical planning, reinforcement and imitation learning, hybrid topological–metric memories, and emerging visual language guidance. Application case studies—autonomous driving, industrial manipulation, autonomous underwater vehicles, planetary rovers, aerial drones, and humanoids—demonstrate how tailored sensor suites and algorithms meet domain-specific constraints. Finally, the future research trajectories are distilled: generative AI for synthetic training data and scene completion; high-density 3D perception with solid-state LiDAR and neural implicit representations; event-based vision for ultra-fast control; and human-centric autonomy in next-generation robots. By providing a unified taxonomy, a comparative analysis, and engineering guidelines, this review aims to inform researchers and practitioners designing robust, scalable, vision-driven robotic systems. Full article
(This article belongs to the Special Issue Interdisciplinary Insights in Engineering Research)

27 pages, 569 KiB  
Article
Construction Worker Activity Recognition Using Deep Residual Convolutional Network Based on Fused IMU Sensor Data in Internet-of-Things Environment
by Sakorn Mekruksavanich and Anuchit Jitpattanakul
IoT 2025, 6(3), 36; https://doi.org/10.3390/iot6030036 - 28 Jun 2025
Viewed by 290
Abstract
With the advent of Industry 4.0, sensor-based human activity recognition has become increasingly vital for improving worker safety, enhancing operational efficiency, and optimizing workflows in Internet-of-Things (IoT) environments. This study introduces a novel deep learning-based framework for construction worker activity recognition, employing a deep residual convolutional neural network (ResNet) architecture integrated with multi-sensor fusion techniques. The proposed system processes data from multiple inertial measurement unit sensors strategically positioned on workers’ bodies to identify and classify construction-related activities accurately. A comprehensive pre-processing pipeline is implemented, incorporating Butterworth filtering for noise suppression, data normalization, and an adaptive sliding window mechanism for temporal segmentation. Experimental validation is conducted using the publicly available VTT-ConIoT dataset, which includes recordings of 16 construction activities performed by 13 participants in a controlled laboratory setting. The results demonstrate that the ResNet-based sensor fusion approach outperforms traditional single-sensor models and other deep learning methods. The system achieves classification accuracies of 97.32% for binary discrimination between recommended and non-recommended activities, 97.14% for categorizing six core task types, and 98.68% for detailed classification across sixteen individual activities. Optimal performance is consistently obtained with a 4-second window size, balancing recognition accuracy with computational efficiency. Although the hand-mounted sensor proved to be the most effective as a standalone unit, multi-sensor configurations delivered significantly higher accuracy, particularly in complex classification tasks. The proposed approach demonstrates strong potential for real-world applications, offering robust performance across diverse working conditions while maintaining computational feasibility for IoT deployment. This work advances the field of innovative construction by presenting a practical solution for real-time worker activity monitoring, which can be seamlessly integrated into existing IoT infrastructures to promote workplace safety, streamline construction processes, and support data-driven management decisions. Full article
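The pre-processing pipeline described above (Butterworth filtering, normalization, sliding-window segmentation) follows a fairly standard pattern; a minimal sketch is given below. The cutoff frequency, sampling rate, and the 4 s / 50% overlap window are illustrative choices consistent with the abstract, not necessarily the authors' exact values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_imu(data, fs=50.0, cutoff=10.0, win_s=4.0, overlap=0.5):
    """data: (n_samples, n_channels) raw IMU stream -> (n_windows, win_len, n_channels)."""
    # 1) Low-pass Butterworth filter for noise suppression (4th order, zero phase).
    b, a = butter(4, cutoff / (fs / 2.0), btype="low")
    filt = filtfilt(b, a, data, axis=0)
    # 2) Per-channel z-score normalization.
    norm = (filt - filt.mean(axis=0)) / (filt.std(axis=0) + 1e-8)
    # 3) Sliding-window segmentation (e.g. 4 s windows with 50% overlap).
    win = int(win_s * fs)
    step = int(win * (1.0 - overlap))
    windows = [norm[i:i + win] for i in range(0, len(norm) - win + 1, step)]
    return np.stack(windows)

segments = preprocess_imu(np.random.randn(6000, 6))   # ~2 min of 6-axis IMU data at 50 Hz
print(segments.shape)                                  # (n_windows, 200, 6)
```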

21 pages, 15478 KiB  
Review
Small Object Detection in Traffic Scenes for Mobile Robots: Challenges, Strategies, and Future Directions
by Zhe Wei, Yurong Zou, Haibo Xu and Sen Wang
Electronics 2025, 14(13), 2614; https://doi.org/10.3390/electronics14132614 - 28 Jun 2025
Viewed by 408
Abstract
Small object detection in traffic scenes presents unique challenges for mobile robots operating under constrained computational resources and highly dynamic environments. Unlike general object detection, small targets often suffer from low resolution, weak semantic cues, and frequent occlusion, especially in complex outdoor scenarios. This study systematically analyses the challenges, technical advances, and deployment strategies for small object detection tailored to mobile robotic platforms. We categorise existing approaches into three main strategies: feature enhancement (e.g., multi-scale fusion, attention mechanisms), network architecture optimisation (e.g., lightweight backbones, anchor-free heads), and data-driven techniques (e.g., augmentation, simulation, transfer learning). Furthermore, we examine deployment techniques on embedded devices such as Jetson Nano and Raspberry Pi, and we highlight multi-modal sensor fusion using Light Detection and Ranging (LiDAR), cameras, and Inertial Measurement Units (IMUs) for enhanced environmental perception. A comparative study of public datasets and evaluation metrics is provided to identify current limitations in real-world benchmarking. Finally, we discuss future directions, including robust detection under extreme conditions and human-in-the-loop incremental learning frameworks. This research aims to offer a comprehensive technical reference for researchers and practitioners developing small object detection systems for real-world robotic applications. Full article
(This article belongs to the Special Issue New Trends in Computer Vision and Image Processing)

30 pages, 14473 KiB  
Article
VOX-LIO: An Effective and Robust LiDAR-Inertial Odometry System Based on Surfel Voxels
by Meijun Guo, Yonghui Liu, Yuhang Yang, Xiaohai He and Weimin Zhang
Remote Sens. 2025, 17(13), 2214; https://doi.org/10.3390/rs17132214 - 27 Jun 2025
Viewed by 373
Abstract
Accurate and robust pose estimation is critical for simultaneous localization and mapping (SLAM), and multi-sensor fusion has demonstrated efficacy with significant potential for robotic applications. This study presents VOX-LIO, an effective LiDAR-inertial odometry system. To improve both robustness and accuracy, we propose an adaptive hash voxel-based point cloud map management method that incorporates surfel features and planarity. This method enhances the efficiency of point-to-surfel association by leveraging long-term observed surfels. It facilitates the incremental refinement of surfel features within classified surfel voxels, thereby enabling precise and efficient map updates. Furthermore, we develop a weighted fusion approach that integrates LiDAR and IMU measurements on the manifold, effectively compensating for motion distortion, particularly under high-speed LiDAR motion. We validate our system through experiments conducted on both public datasets and our mobile robot platforms. The results demonstrate that VOX-LIO outperforms the existing methods, effectively handling challenging environments while minimizing computational cost. Full article
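A hash-keyed voxel map that accumulates per-voxel plane (surfel) statistics incrementally can be sketched as below; this illustrates the kind of structure the abstract describes rather than the authors' implementation. The voxel size, the minimum point count, the planarity test, and the incremental mean/covariance update are assumptions.

```python
import numpy as np
from collections import defaultdict

class SurfelVoxelMap:
    """Sketch: hash-voxel map storing incremental plane (surfel) statistics per voxel."""
    def __init__(self, voxel_size=0.5):
        self.voxel_size = voxel_size
        self.voxels = defaultdict(lambda: {"n": 0, "sum": np.zeros(3), "outer": np.zeros((3, 3))})

    def _key(self, p):
        return tuple(np.floor(p / self.voxel_size).astype(int))

    def insert(self, points):
        for p in points:
            v = self.voxels[self._key(p)]
            v["n"] += 1
            v["sum"] += p
            v["outer"] += np.outer(p, p)

    def surfel(self, key, min_points=10, planarity_thresh=0.1):
        """Return (centroid, normal) if the voxel's points form a plane, else None."""
        v = self.voxels.get(key)
        if v is None or v["n"] < min_points:
            return None
        mean = v["sum"] / v["n"]
        cov = v["outer"] / v["n"] - np.outer(mean, mean)
        eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
        if eigvals[0] > planarity_thresh * eigvals[2]:  # not flat enough -> no surfel
            return None
        return mean, eigvecs[:, 0]                      # normal = smallest-eigenvalue direction

m = SurfelVoxelMap()
m.insert(np.c_[0.5 * np.random.rand(200, 2), np.full((200, 1), 0.1)])  # a near-planar patch
print(m.surfel((0, 0, 0)))
```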

29 pages, 4413 KiB  
Article
Advancing Road Infrastructure Safety with the Remotely Piloted Safety Cone
by Francisco Javier García-Corbeira, David Alvarez-Moyano, Pedro Arias Sánchez and Joaquin Martinez-Sanchez
Infrastructures 2025, 10(7), 160; https://doi.org/10.3390/infrastructures10070160 - 27 Jun 2025
Viewed by 404
Abstract
This article presents the design, implementation, and validation of a Remotely Piloted Safety Cone (RPSC), an autonomous robotic system developed to enhance safety and operational efficiency in road maintenance. The RPSC addresses challenges associated with road works, including workers’ exposure to traffic hazards and inefficiencies of traditional traffic cones, such as manual placement and retrieval, limited visibility in low-light conditions, and inability to adapt to dynamic changes in work zones. In contrast, the RPSC offers autonomous mobility, advanced visual signalling, and real-time communication capabilities, significantly improving safety and operational flexibility during maintenance tasks. The RPSC integrates sensor fusion, combining Global Navigation Satellite System (GNSS) with Real-Time Kinematic (RTK) for precise positioning, Inertial Measurement Unit (IMU) and encoders for accurate odometry, and obstacle detection sensors within an optimised navigation framework using Robot Operating System (ROS2) and Micro Air Vehicle Link (MAVLink) protocols. Complying with European regulations, the RPSC ensures structural integrity, visibility, stability, and regulatory compliance. Safety features include emergency stop capabilities, visual alarms, autonomous safety routines, and edge computing for rapid responsiveness. Field tests validated positioning accuracy below 30 cm, route deviations under 15 cm, and obstacle detection up to 4 m, significantly improved by Kalman filtering, aligning with digitalisation, sustainability, and occupational risk prevention objectives. Full article

17 pages, 5036 KiB  
Article
Automated UPDRS Gait Scoring Using Wearable Sensor Fusion and Deep Learning
by Xiangzhi Liu, Xiangliang Zhang, Juan Li, Wenhao Pan, Yiping Sun, Shuanggen Lin and Tao Liu
Bioengineering 2025, 12(7), 686; https://doi.org/10.3390/bioengineering12070686 - 24 Jun 2025
Viewed by 490
Abstract
The quantitative assessment of Parkinson’s disease (PD) is critical for guiding diagnosis, treatment, and rehabilitation. Conventional clinical evaluations—heavily dependent on manual rating scales such as the Unified Parkinson’s Disease Rating Scale (UPDRS)—are time-consuming and prone to inter-rater variability. In this study, we propose a fully automated UPDRS gait-scoring framework. Our method combines (a) surface electromyography (EMG) signals and (b) inertial measurement unit (IMU) data into a single deep learning model. Our end-to-end network comprises three specialized branches—a diagnosis head, an evaluation head, and a balance head—whose outputs are integrated via a customized fusion-detection module to emulate the multidimensional assessments performed by clinicians. We validated our system on 21 PD patients and healthy controls performing a simple walking task while wearing a four-channel EMG array on the lower limbs and 2 shank-mounted IMUs. It achieved a mean classification accuracy of 92.8% across UPDRS levels 0–2. This approach requires minimal subject effort and sensor setup, significantly cutting clinician workload associated with traditional UPDRS evaluations while improving objectivity. The results demonstrate the potential of wearable sensor-driven deep learning methods to deliver rapid, reliable PD gait assessment in both clinical and home settings. Full article
(This article belongs to the Special Issue Advanced Wearable Sensors for Human Gait Analysis)
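The three-branch architecture is not specified layer by layer in the abstract; the minimal PyTorch sketch below conveys the general idea of separate EMG and IMU encoders whose fused features feed diagnosis, evaluation, and balance heads. All layer sizes and channel counts are assumptions, and the paper's customized fusion-detection module is omitted.

```python
import torch
import torch.nn as nn

class GaitScoringNet(nn.Module):
    """Sketch: fused EMG + IMU encoder with diagnosis / evaluation / balance heads."""
    def __init__(self, emg_ch=4, imu_ch=12, hidden=64, n_updrs_levels=3):
        super().__init__()
        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv1d(in_ch, hidden, kernel_size=7, padding=3), nn.ReLU(),
                nn.Conv1d(hidden, hidden, kernel_size=7, padding=3), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            )
        self.emg_enc = encoder(emg_ch)
        self.imu_enc = encoder(imu_ch)
        fused = 2 * hidden
        self.diagnosis_head = nn.Linear(fused, 2)                # PD vs. healthy
        self.evaluation_head = nn.Linear(fused, n_updrs_levels)  # UPDRS gait score 0-2
        self.balance_head = nn.Linear(fused, 1)                  # balance-related output

    def forward(self, emg, imu):               # inputs: (batch, channels, time)
        z = torch.cat([self.emg_enc(emg), self.imu_enc(imu)], dim=1)
        return self.diagnosis_head(z), self.evaluation_head(z), self.balance_head(z)

net = GaitScoringNet()
out = net(torch.randn(2, 4, 500), torch.randn(2, 12, 500))  # 4-ch EMG + two 6-axis IMUs
print([o.shape for o in out])
```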

14 pages, 3205 KiB  
Article
A 209 ps Shutter-Time CMOS Image Sensor for Ultra-Fast Diagnosis
by Houzhi Cai, Zhaoyang Xie, Youlin Ma and Lijuan Xiang
Sensors 2025, 25(12), 3835; https://doi.org/10.3390/s25123835 - 19 Jun 2025
Viewed by 380
Abstract
A conventional microchannel plate framing camera is typically utilized for inertial confinement fusion diagnosis. However, as a vacuum electronic device, it has inherent limitations, such as a complex structure and the inability to achieve single-line-of-sight imaging. To address these challenges, a CMOS image sensor that can be seamlessly integrated with an electronic pulse broadening system can provide a viable alternative to the microchannel plate detector. This paper introduces the design of an 8 × 8 pixel-array ultrashort shutter-time single-framing CMOS image sensor, which leverages silicon epitaxial processing and a 0.18 μm standard CMOS process. The focus of this study is on the photodiode and the readout pixel-array circuit. The photodiode, designed using the silicon epitaxial process, achieves a quantum efficiency exceeding 30% in the visible light band at a bias voltage of 1.8 V, with a temporal resolution greater than 200 ps for visible light. The readout pixel-array circuit, which is based on the 0.18 μm standard CMOS process, incorporates 5T structure pixel units, voltage-controlled delayers, clock trees, and row-column decoding and scanning circuits. Simulations of the pixel circuit demonstrate an optimal temporal resolution of 60 ps. Under the shutter condition with the best temporal resolution, the maximum output swing of the pixel circuit is 448 mV, and the output noise is 77.47 μV, resulting in a dynamic range of 75.2 dB for the pixel circuit; the small-signal responsivity is 1.93 × 10⁻⁷ V/e, and the full-well capacity is 2.3 Me. The maximum power consumption of the 8 × 8 pixel-array and its control circuits is 0.35 mW. Considering both the photodiode and the pixel circuit, the proposed CMOS image sensor achieves a temporal resolution better than 209 ps. Full article
(This article belongs to the Special Issue Ultrafast Optoelectronic Sensing and Imaging)
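As a quick sanity check, the reported dynamic range and full-well capacity follow directly from the quoted swing, noise, and responsivity figures (values taken from the abstract):

```python
import math

swing_v = 448e-3          # maximum output swing (V)
noise_v = 77.47e-6        # output noise (V)
responsivity = 1.93e-7    # small-signal responsivity (V per electron)

dynamic_range_db = 20 * math.log10(swing_v / noise_v)
full_well_e = swing_v / responsivity

print(f"dynamic range ≈ {dynamic_range_db:.1f} dB")        # ≈ 75.2 dB
print(f"full-well capacity ≈ {full_well_e / 1e6:.1f} Me")  # ≈ 2.3 million electrons
```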

19 pages, 2531 KiB  
Article
Fusion-Based Localization System Integrating UWB, IMU, and Vision
by Zhongliang Deng, Haiming Luo, Xiangchuan Gao and Peijia Liu
Appl. Sci. 2025, 15(12), 6501; https://doi.org/10.3390/app15126501 - 9 Jun 2025
Viewed by 625
Abstract
Accurate indoor positioning services have become increasingly important in modern applications. Various new indoor positioning methods have been developed. Among them, visual–inertial odometry (VIO)-based techniques are notably limited by lighting conditions, while ultrawideband (UWB)-based algorithms are highly susceptible to environmental interference. To address these limitations, this study proposes a hybrid indoor positioning algorithm that combines UWB and VIO. The method first utilizes a tightly coupled UWB/inertial measurement unit (IMU) fusion algorithm based on a sliding-window factor graph to obtain initial position estimates. These estimates are then combined with VIO outputs to formulate the system’s motion and observation models. Finally, an extended Kalman filter (EKF) is applied for data fusion to achieve optimal state estimation. The proposed hybrid positioning algorithm is validated on a self-developed mobile platform in an indoor environment. Experimental results show that, in indoor environments, the proposed method reduces the root mean square error (RMSE) by 67.6% and the maximum error by approximately 67.9% compared with the standalone UWB method. Compared with the stereo VIO model, the RMSE and maximum error are reduced by 55.4% and 60.4%, respectively. Furthermore, compared with the UWB/IMU fusion model, the proposed method achieves a 50.0% reduction in RMSE and a 59.1% reduction in maximum error. Full article
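The final EKF fusion stage described above follows the standard Kalman update equations, with the UWB/IMU result acting as the prior state and the VIO output as an observation (or vice versa). Below is a minimal linear-update sketch; the state layout and noise values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman/EKF measurement update: state x, covariance P, measurement z."""
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Illustrative: fuse a VIO position fix into a UWB/IMU position-velocity state.
x = np.array([1.0, 2.0, 0.5, 0.1, 0.0, 0.0])       # [px, py, pz, vx, vy, vz] from UWB/IMU
P = np.diag([0.2, 0.2, 0.3, 0.05, 0.05, 0.05])
H = np.hstack([np.eye(3), np.zeros((3, 3))])       # VIO observes position only
z_vio = np.array([1.1, 1.95, 0.52])
R_vio = np.diag([0.05, 0.05, 0.08])
x, P = kf_update(x, P, z_vio, H, R_vio)
print(np.round(x, 3))
```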

29 pages, 4560 KiB  
Article
GNSS-RTK-Based Navigation with Real-Time Obstacle Avoidance for Low-Speed Micro Electric Vehicles
by Nuksit Noomwongs, Kanin Kiataramgul, Sunhapos Chantranuwathana and Gridsada Phanomchoeng
Machines 2025, 13(6), 471; https://doi.org/10.3390/machines13060471 - 29 May 2025
Viewed by 526
Abstract
Autonomous navigation for micro electric vehicles (micro EVs) operating in semi-structured environments—such as university campuses and industrial parks—requires solutions that are cost-effective, low in complexity, and robust. Traditional autonomous systems often rely on high-definition maps, multi-sensor fusion, or vision-based SLAM, which demand expensive sensors and high computational power. These approaches are often impractical for micro EVs with limited onboard resources. To address this gap, a real-world autonomous navigation system is presented, combining RTK-GNSS and 2D LiDAR with a real-time trajectory scoring algorithm. This configuration enables accurate path following and obstacle avoidance without relying on complex mapping or multi-sensor fusion. This study presents the development and experimental validation of a low-speed autonomous navigation system for a micro electric vehicle based on GNSS-RTK localization and real-time obstacle avoidance. The research achieved the following three primary objectives: (1) the development of a low-level control system for steering, acceleration, and braking; (2) the design of a high-level navigation controller for autonomous path following using GNSS data; and (3) the implementation of real-time obstacle avoidance capabilities. The system employs a scored predicted trajectory algorithm that simultaneously optimizes path-following accuracy and obstacle evasion. A Toyota COMS micro EV was modified for autonomous operation and tested on a closed-loop campus track. Experimental results demonstrated an average lateral deviation of 0.07 m at 10 km/h and 0.12 m at 15 km/h, with heading deviations of approximately 3° and 4°, respectively. Obstacle avoidance tests showed safe maneuvering with a minimum clearance of 1.2 m from obstacles, as configured. The system proved robust against minor GNSS signal degradation, maintaining precise navigation without reliance on complex map building or inertial sensing. The results confirm that GNSS-RTK-based navigation combined with minimal sensing provides an effective and practical solution for autonomous driving in semi-structured environments. Full article
(This article belongs to the Section Vehicle Engineering)
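A scored-predicted-trajectory planner in the spirit described above can be sketched as follows: roll out candidate steering commands with a simple kinematic bicycle model, then score each rollout by path-following error and obstacle clearance. The motion model, scoring weights, and candidate set are illustrative assumptions; the 1.2 m clearance figure is taken from the abstract.

```python
import numpy as np

def rollout(x, y, yaw, v, steer, wheelbase=1.6, dt=0.2, horizon=15):
    """Predict a trajectory with a kinematic bicycle model for one candidate steering angle."""
    traj = []
    for _ in range(horizon):
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        yaw += v / wheelbase * np.tan(steer) * dt
        traj.append((x, y))
    return np.array(traj)

def score(traj, path, obstacles, min_clearance=1.2, w_path=1.0, w_obs=5.0):
    """Lower is better: distance to the reference path plus a penalty for small clearance."""
    path_err = np.mean([np.min(np.hypot(path[:, 0] - px, path[:, 1] - py)) for px, py in traj])
    if len(obstacles):
        clear = min(np.min(np.hypot(obstacles[:, 0] - px, obstacles[:, 1] - py)) for px, py in traj)
    else:
        clear = np.inf
    obs_pen = np.inf if clear < min_clearance else w_obs / clear
    return w_path * path_err + obs_pen

path = np.c_[np.linspace(0, 20, 100), np.zeros(100)]          # straight reference path
obstacles = np.array([[8.0, 0.2]])                            # obstacle near the path
candidates = np.deg2rad(np.linspace(-20, 20, 21))             # candidate steering angles
best = min(candidates, key=lambda s: score(rollout(0, 0, 0, 2.8, s), path, obstacles))
print(f"selected steering angle: {np.degrees(best):.1f} deg")
```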

26 pages, 10564 KiB  
Article
DynaFusion-SLAM: Multi-Sensor Fusion and Dynamic Optimization of Autonomous Navigation Algorithms for Pasture-Pushing Robot
by Zhiwei Liu, Jiandong Fang and Yudong Zhao
Sensors 2025, 25(11), 3395; https://doi.org/10.3390/s25113395 - 28 May 2025
Viewed by 552
Abstract
Autonomous navigation algorithms based on multi-sensor fusion in complex pasture scenarios remain under-studied, with limited degrees of fusion and insufficient path-following accuracy in complex outdoor environments. To address these problems, a multimodal autonomous navigation system is proposed based on a loosely coupled Cartographer–RTAB-Map (real-time appearance-based mapping) architecture. Through laser–visual–inertial multi-sensor data fusion, the system achieves high-precision mapping and robust path planning in complex scenes. First, the mainstream laser SLAM algorithms (Hector, Gmapping, and Cartographer) are compared in simulation experiments; Cartographer shows a significant memory-efficiency advantage in large-scale scenarios and is therefore chosen as the front-end odometry source. Second, a two-way position optimization mechanism is designed: (1) during mapping, Cartographer processes the laser scans together with IMU and wheel-odometer data to generate odometry estimates, which provide positioning compensation for RTAB-Map; (2) RTAB-Map fuses the depth-camera point cloud and laser data, corrects the global position through visual loop-closure detection, and then uses 2D localization to construct a bimodal environment representation containing a 2D grid map and a 3D point cloud, giving a complete description of the simulated pasture environment and material morphology and providing the basis for the pushing robot’s navigation algorithm built on the two types of fused data. During navigation, RTAB-Map’s global localization is combined with AMCL’s local localization, and IMU and odometer data are fused through the EKF algorithm to produce a smoother and more robust pose estimate. Global path planning is performed using Dijkstra’s algorithm, combined with the TEB (Timed Elastic Band) algorithm for local path planning. Finally, experimental validation is performed in a laboratory-simulated pasture environment. The results indicate that when RTAB-Map is fused with the multi-source odometry, its performance in the laboratory-simulated pasture scenario improves significantly: the maximum absolute map-measurement error shrinks from 24.908 cm to 4.456 cm, the maximum absolute relative error is reduced from 6.227% to 2.025%, and the absolute error at each location is significantly reduced. At the same time, the multi-source odometry fusion effectively prevents large-scale offset or drift during map construction. On this basis, the robot constructs a fused map containing the simulated pasture environment and material patterns. In the navigation accuracy tests, the proposed method reduces the root mean square error (RMSE) by 1.7% and the standard deviation by 2.7% compared with RTAB-Map, and reduces the RMSE by 26.7% and the standard deviation by 22.8% compared with the AMCL algorithm. The robot successfully traverses six preset waypoints, with the measured X-direction, Y-direction, and overall position errors at all six points meeting the requirements of the pasture-pushing task, and returns to the starting point after completing the multi-point navigation task, achieving autonomous navigation of the robot. Full article
(This article belongs to the Section Navigation and Positioning)

27 pages, 9977 KiB  
Article
Mergeable Probabilistic Voxel Mapping for LiDAR–Inertial–Visual Odometry
by Balong Wang, Nassim Bessaad, Huiying Xu, Xinzhong Zhu and Hongbo Li
Electronics 2025, 14(11), 2142; https://doi.org/10.3390/electronics14112142 - 24 May 2025
Cited by 1 | Viewed by 702
Abstract
To address the limitations of existing LiDAR–visual fusion methods in adequately accounting for map uncertainties induced by LiDAR measurement noise, this paper introduces a LiDAR–inertial–visual odometry framework leveraging mergeable probabilistic voxel mapping. The method innovatively employs probabilistic voxel models to characterize uncertainties in environmental geometric plane features and optimizes computational efficiency through a voxel merging strategy. Additionally, it integrates color information from cameras to further enhance localization accuracy. Specifically, in the LiDAR–inertial odometry (LIO) subsystem, a probabilistic voxel plane model is constructed for LiDAR point clouds to explicitly represent measurement noise uncertainty, thereby improving the accuracy and robustness of point cloud registration. A voxel merging strategy based on the union-find algorithm is introduced to merge coplanar voxel planes, reducing computational load. In the visual–inertial odometry (VIO) subsystem, image tracking points are generated through a global map projection, and outlier points are eliminated using a random sample consensus algorithm based on a dynamic Bayesian network. Finally, state estimation accuracy is enhanced by jointly optimizing frame-to-frame reprojection errors and frame-to-map RGB color errors. Experimental results demonstrate that the proposed method achieves root mean square errors (RMSEs) of absolute trajectory error at 0.478 m and 0.185 m on the M2DGR and NTU-VIRAL datasets, respectively, while attaining real-time performance with an average processing time of 39.19 ms per-frame on the NTU-VIRAL datasets. Compared to state-of-the-art approaches, our method exhibits significant improvements in both accuracy and computational efficiency. Full article
(This article belongs to the Special Issue Advancements in Robotics: Perception, Manipulation, and Interaction)
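Union-find (disjoint-set) merging of coplanar voxel planes, as mentioned in this abstract, can be sketched as below. The coplanarity test (normal-angle and plane-offset thresholds) and the (normal, centroid) plane representation are illustrative assumptions rather than the paper's exact criteria.

```python
import numpy as np

class UnionFind:
    """Disjoint-set with path compression and union by size."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path compression
            i = self.parent[i]
        return i

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri == rj:
            return
        if self.size[ri] < self.size[rj]:
            ri, rj = rj, ri
        self.parent[rj] = ri
        self.size[ri] += self.size[rj]

def coplanar(p1, p2, angle_thresh_deg=5.0, dist_thresh=0.05):
    """p = (normal, centroid): similar orientation and small centroid-to-plane distance."""
    n1, c1 = p1
    n2, c2 = p2
    if np.degrees(np.arccos(np.clip(abs(np.dot(n1, n2)), -1.0, 1.0))) > angle_thresh_deg:
        return False
    return abs(np.dot(n1, c2 - c1)) < dist_thresh

# Merge neighbouring voxel planes that describe the same physical surface.
planes = [
    (np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 0.00])),
    (np.array([0.0, 0.02, 1.0]) / np.linalg.norm([0.0, 0.02, 1.0]), np.array([0.5, 0.0, 0.01])),
    (np.array([1.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])),   # a wall, not the floor
]
uf = UnionFind(len(planes))
for i in range(len(planes)):
    for j in range(i + 1, len(planes)):
        if coplanar(planes[i], planes[j]):
            uf.union(i, j)
print([uf.find(i) for i in range(len(planes))])   # planes 0 and 1 merge; plane 2 stays separate
```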

16 pages, 562 KiB  
Communication
Implementation of a Low-Cost Navigation System Using Data Fusion of a Micro-Electro-Mechanical System Inertial Sensor and an Ultra Short Baseline on a Microcontroller
by Julian Winkler and Sabah Badri-Hoeher
Sensors 2025, 25(10), 3125; https://doi.org/10.3390/s25103125 - 15 May 2025
Viewed by 2387
Abstract
In this work, a low-cost low-power navigation solution for autonomous underwater vehicles is introduced utilizing a Micro-Electro-Mechanical System (MEMS) inertial sensor and an ultra short baseline (USBL) system. The complete signal processing is implemented on an inexpensive 16-bit fixed-point arithmetic microcontroller. For data fusion and calibration, an error state Kalman filter in square root form is used, which preserves stability in case of rounding errors. To further reduce the influence of rounding errors, a stochastic rounding scheme is applied. The USBL measurements are integrated using tightly coupled data fusion by deriving the observation functions separately for range, elevation, and azimuth angles. The effectiveness of the fixed-point implementation with stochastic rounding is demonstrated in simulation, and the complete setup is tested in a field test. The results of the field test show an improved accuracy of the tightly coupled data fusion in comparison with loosely coupled data fusion. It is also shown that the applied rounding schemes can bring the fixed-point estimates to near floating-point accuracy. Full article
(This article belongs to the Special Issue Advanced Sensors in MEMS: 2nd Edition)
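Stochastic rounding replaces round-to-nearest with a probabilistic choice between the two adjacent representable values, so the expected quantized value equals the input and small errors average out instead of accumulating. A minimal fixed-point sketch follows; the Q-format (12 fractional bits) and the toy accumulation example are illustrative assumptions, not the paper's implementation.

```python
import math
import random

random.seed(0)

def stochastic_round_fixed(x, frac_bits=12):
    """Quantize x to a fixed-point grid of 2**-frac_bits using stochastic rounding."""
    scaled = x * (1 << frac_bits)
    lower = math.floor(scaled)
    if random.random() < (scaled - lower):   # round up with probability equal to the fraction
        lower += 1
    return lower / (1 << frac_bits)

# Accumulating tiny increments: round-to-nearest loses them all, stochastic rounding keeps the mean.
increment = 1e-5                             # far below the 2**-12 ~ 2.4e-4 resolution
acc_nearest = acc_stochastic = 0.0
for _ in range(100_000):
    acc_nearest = round((acc_nearest + increment) * 4096) / 4096
    acc_stochastic = stochastic_round_fixed(acc_stochastic + increment)
print(f"true sum: {100_000 * increment:.3f}, "
      f"round-to-nearest: {acc_nearest:.3f}, stochastic: {acc_stochastic:.3f}")
```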
