Search Results (68)

Search Parameters:
Keywords = SLAM calibration

21 pages, 11514 KB  
Article
Fuzzy Fusion of Monocular ORB-SLAM2 and Tachometer Sensor for Car Odometry
by David Lázaro Mata, José Alfredo Padilla Medina, Juan José Martínez Nolasco, Juan Prado Olivarez and Alejandro Israel Barranco Gutiérrez
Appl. Syst. Innov. 2025, 8(6), 188; https://doi.org/10.3390/asi8060188 - 30 Nov 2025
Viewed by 622
Abstract
Estimating the absolute scale of reconstructed camera trajectories in monocular odometry is challenging due to the inherent scale ambiguity of any monocular vision system. One promising solution is to fuse data from different sensors, which can improve the accuracy and precision of scale estimation, but this often requires additional effort in sensor design and data processing. In this paper, we propose a novel method for fusing single-camera data with wheel odometer readings using a fuzzy system. The fuzzy system takes as inputs the wheel odometer value and the translation and rotation obtained from ORB-SLAM2, and was trained with the ANFIS tool in MATLAB 2014b. Our approach yields significantly better results than state-of-the-art pure monocular systems: in our experiments, the average error relative to GPS measurements was only four percent. A key advantage of this method is the elimination of the sensor calibration step, allowing straightforward data fusion without a substantial increase in data processing demands.
(This article belongs to the Special Issue Autonomous Robotics and Hybrid Intelligent Systems)
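The fusion architecture summarized above can be pictured as a small zero-order Sugeno-style fuzzy inference step that turns the wheel-odometer distance and the unscaled ORB-SLAM2 translation into a metric scale factor. The membership ranges, rules, and consequent constants below are illustrative assumptions, not the ANFIS model trained in the paper:

```python
import numpy as np

def ramp_down(x, lo, hi):
    """Membership that is 1 below lo, 0 above hi, linear in between."""
    return float(np.clip((hi - x) / (hi - lo), 0.0, 1.0))

def ramp_up(x, lo, hi):
    """Membership that is 0 below lo, 1 above hi, linear in between."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def fuzzy_scale(odo_dist_m, slam_trans):
    """Blend the wheel-odometer distance and the unscaled ORB-SLAM2 translation
    into a metric scale factor via zero-order Sugeno inference (illustrative)."""
    odo_short, odo_long = ramp_down(odo_dist_m, 0.1, 0.8), ramp_up(odo_dist_m, 0.1, 0.8)
    slam_small, slam_large = ramp_down(slam_trans, 0.01, 0.08), ramp_up(slam_trans, 0.01, 0.08)
    # Rule firing strengths (product t-norm) with constant consequents (assumed values).
    rules = [
        (odo_short * slam_small, 8.0),
        (odo_short * slam_large, 4.0),
        (odo_long * slam_small, 12.0),
        (odo_long * slam_large, 9.0),
    ]
    w = np.array([fire for fire, _ in rules])
    z = np.array([out for _, out in rules])
    return float(w @ z / (w.sum() + 1e-12))

# Per-frame example: rescale the SLAM translation to metres.
slam_step = 0.04   # unscaled ORB-SLAM2 translation magnitude between frames
odo_step = 0.42    # wheel odometer distance for the same interval [m]
print(fuzzy_scale(odo_step, slam_step) * slam_step)
```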

20 pages, 3688 KB  
Article
Intelligent Fruit Localization and Grasping Method Based on YOLO VX Model and 3D Vision
by Zhimin Mei, Yifan Li, Rongbo Zhu and Shucai Wang
Agriculture 2025, 15(14), 1508; https://doi.org/10.3390/agriculture15141508 - 13 Jul 2025
Cited by 1 | Viewed by 1822
Abstract
Recent years have seen significant interest among agricultural researchers in using robotics and machine vision to enhance intelligent orchard harvesting efficiency. This study proposes an improved hybrid framework integrating YOLO VX deep learning, 3D object recognition, and SLAM-based navigation for harvesting ripe fruits in greenhouse environments, achieving servo control of robotic arms with flexible end-effectors. The method comprises three key components. First, a fruit sample database containing varying maturity levels and morphological features is established and interfaced with an optimized YOLO VX model for target fruit identification. Second, a 3D camera acquires the target fruit’s spatial position and orientation data in real time, and these data are stored in the collaborative robot’s microcontroller. Finally, employing binocular calibration and triangulation, the SLAM navigation module guides the robotic arm to the designated picking location via unobstructed target positioning. Comprehensive comparative experiments between the improved YOLO v12n model and earlier versions were conducted to validate its performance. The results demonstrate that the optimized model surpasses traditional recognition and harvesting methods, offering a faster target fruit identification response (minimum 30.9 ms) and significantly higher accuracy (91.14%).
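The binocular calibration and triangulation step mentioned in the abstract reduces, for a rectified and calibrated stereo pair, to depth-from-disparity followed by back-projection through the intrinsics. A minimal sketch with made-up camera parameters and pixel coordinates (not values from the paper):

```python
import numpy as np

def triangulate_rectified(u_left, v_left, u_right, fx, fy, cx, cy, baseline):
    """Back-project a matched pixel pair from a rectified, calibrated stereo rig
    into the left-camera frame. Depth follows Z = fx * baseline / disparity."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: match is invalid or at infinity")
    z = fx * baseline / disparity
    x = (u_left - cx) * z / fx
    y = (v_left - cy) * z / fy
    return np.array([x, y, z])

# Illustrative numbers only: a fruit centre detected at these pixels.
point_cam = triangulate_rectified(u_left=712.0, v_left=394.0, u_right=676.0,
                                  fx=920.0, fy=920.0, cx=640.0, cy=360.0,
                                  baseline=0.06)
print(point_cam)   # [x, y, z] in metres, z is roughly 1.53 m here
```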

14 pages, 3376 KB  
Article
A Study of Ultra-Thin Surface-Mounted MEMS Fibre-Optic Fabry–Pérot Pressure Sensors for the In Situ Monitoring of Hydrodynamic Pressure on the Hull of Large Amphibious Aircraft
by Tianyi Feng, Xi Chen, Ye Chen, Bin Wu, Fei Xu and Lingcai Huang
Photonics 2025, 12(7), 627; https://doi.org/10.3390/photonics12070627 - 20 Jun 2025
Viewed by 817
Abstract
Hydrodynamic slamming loads during water landing are one of the main concerns for the structural design and wave resistance performance of large amphibious aircraft. However, existing sensors are unsuitable for full-scale hydrodynamic load flight tests on complex models due to their large size, fragility, intrusiveness, limited range, frequency response limitations, accuracy issues, and low sampling frequency. Fibre-optic sensors’ small size, immunity to electromagnetic interference, and reduced susceptibility to environmental disturbances have led to their progressive adoption in maritime and aeronautic fields. This research proposes a novel hydrodynamic profile encapsulation method using ultra-thin surface-mounted micro-electromechanical system (MEMS) fibre-optic Fabry–Pérot pressure sensors (total thickness of 1 mm). The proposed sensor exhibits an exceptional linear response and low sensitivity to temperature in hydrostatic calibration tests, and shows superior response and detection accuracy in water-entry tests of wedge-shaped bodies. This work shows significant potential for the in situ monitoring of hydrodynamic loads during water landing, contributing to research on large amphibious aircraft. Furthermore, this research demonstrates, for the first time, the proposed surface-mounted pressure sensor in conjunction with a high-speed acquisition system for the in situ monitoring of hydrodynamic pressure on the hull of a large amphibious prototype. In flight tests, the sensors remained intact throughout multiple high-speed hydrodynamic taxiing events and 12 full water landings, successfully acquiring the complete dataset. The results show that the proposed pressure sensor exhibits superior robustness in extreme environments compared to traditional invasive electrical sensors and can be used for full-scale hydrodynamic load flight tests.
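At its core, the hydrostatic calibration described here is a linear least-squares fit between applied pressure and the sensor's optical readout, which can then be inverted to convert raw readings to pressure. The sketch below uses entirely synthetic readings and units, not the paper's calibration data:

```python
import numpy as np

# Synthetic hydrostatic calibration points (illustrative, not the paper's data):
# applied pressure [kPa] vs. Fabry-Perot cavity readout [nm].
pressure_kpa = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
readout_nm   = np.array([1201.3, 1214.9, 1228.6, 1242.0, 1255.8])

# Fit readout = k * pressure + b, then invert to convert readings to pressure.
k, b = np.polyfit(pressure_kpa, readout_nm, deg=1)
residual = readout_nm - (k * pressure_kpa + b)
print(f"sensitivity k = {k:.4f} nm/kPa, max nonlinearity residual = {np.abs(residual).max():.3f} nm")

def to_pressure(reading_nm):
    """Convert a raw optical readout to pressure using the fitted calibration."""
    return (reading_nm - b) / k

print(to_pressure(1235.0))   # pressure estimate for one reading [kPa]
```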

9 pages, 2383 KB  
Proceeding Paper
WiFi–Round-Trip Timing (WiFi–RTT) Simultaneous Localisation and Mapping: Pedestrian Navigation in Unmapped Environments Using WiFi–RTT and Smartphone Inertial Sensors
by Khalil J. Raja and Paul D. Groves
Eng. Proc. 2025, 88(1), 16; https://doi.org/10.3390/engproc2025088016 - 24 Mar 2025
Viewed by 1968
Abstract
A core problem in indoor positioning is the lack of prior knowledge of the environment. To date, most WiFi–RTT research assumes knowledge of the access points in an indoor environment. This paper addresses the problem with a simultaneous localisation and mapping (SLAM) algorithm that combines WiFi–RTT with pedestrian dead reckoning based on a smartphone’s inertial sensors. At the time of writing, a WiFi–RTT SLAM algorithm has been researched in only one other instance; this paper aims to expand the exploration of the problem, particularly regarding the use of outlier detection and motion models. For the trials, which were 35 steps long, the final mobile-device horizontal positioning error was 1.01 m and 1.7 m for the forward and reverse trials, respectively. The results show that unmapped indoor positioning using WiFi–RTT is feasible at the metre level, given correct access point calibration.
(This article belongs to the Proceedings of European Navigation Conference 2024)
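A WiFi–RTT measurement yields a range per access point from the round-trip time, and with several APs of known (or SLAM-estimated) position a device fix follows from least squares. The sketch below assumes known 2D AP coordinates and synthetic timings; it illustrates only the ranging geometry, not the paper's SLAM filter or outlier detection:

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def rtt_to_range(rtt_ns, overhead_ns=0.0):
    """Convert a WiFi round-trip time (nanoseconds) to a one-way range (metres)."""
    return C * (rtt_ns - overhead_ns) * 1e-9 / 2.0

def trilaterate(ap_xy, ranges):
    """Linearised least-squares 2D position fix from >= 3 APs with known positions."""
    ap_xy, ranges = np.asarray(ap_xy, float), np.asarray(ranges, float)
    x0, y0, r0 = ap_xy[0, 0], ap_xy[0, 1], ranges[0]
    A = 2.0 * (ap_xy[1:] - ap_xy[0])
    b = (r0**2 - ranges[1:]**2
         + np.sum(ap_xy[1:]**2, axis=1) - (x0**2 + y0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Illustrative access point positions (metres) and measured RTTs (nanoseconds).
aps = [(0.0, 0.0), (12.0, 0.0), (0.0, 9.0), (12.0, 9.0)]
rtts_ns = [43.4, 55.0, 48.0, 57.5]
ranges = [rtt_to_range(t) for t in rtts_ns]
print(trilaterate(aps, ranges))   # approximate device position [x, y]
```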

15 pages, 3120 KB  
Article
Implementation of Visual Odometry on Jetson Nano
by Jakub Krško, Dušan Nemec, Vojtech Šimák and Mário Michálik
Sensors 2025, 25(4), 1025; https://doi.org/10.3390/s25041025 - 9 Feb 2025
Viewed by 5454
Abstract
This paper presents the implementation of ORB-SLAM3 for visual odometry on a low-power ARM-based system, specifically the Jetson Nano, to track a robot’s movement using RGB-D cameras. Key challenges addressed include the selection of compatible software libraries, camera calibration, and system optimization. The ORB-SLAM3 algorithm was adapted for the ARM architecture and tested using both the EuRoC dataset and real-world scenarios involving a mobile robot. The testing demonstrated that ORB-SLAM3 provides accurate localization, with errors in path estimation ranging from 3 to 11 cm when using the EuRoC dataset. Real-world tests on a mobile robot revealed discrepancies primarily due to encoder drift and environmental factors such as lighting and texture. The paper discusses strategies for mitigating these errors, including enhanced calibration and the potential use of encoder data for tracking when camera performance falters. Future improvements focus on refining the calibration process, adding trajectory correction mechanisms, and integrating visual odometry data more effectively into broader systems.
(This article belongs to the Section Sensors and Robotics)

20 pages, 8888 KB  
Article
E2-VINS: An Event-Enhanced Visual–Inertial SLAM Scheme for Dynamic Environments
by Jiafeng Huang, Shengjie Zhao and Lin Zhang
Appl. Sci. 2025, 15(3), 1314; https://doi.org/10.3390/app15031314 - 27 Jan 2025
Cited by 4 | Viewed by 4235
Abstract
Simultaneous Localization and Mapping (SLAM) technology has garnered significant interest in the robotic vision community over the past few decades. Its rapid development has led to widespread application across fields including autonomous driving, robot navigation, and virtual reality. Although SLAM, and especially Visual–Inertial SLAM (VI-SLAM), has made substantial progress, most classic algorithms in this field assume that the observed scene is static. In complex real-world environments, dynamic objects such as pedestrians and vehicles can seriously affect the robustness and accuracy of such systems. Event cameras, recently introduced motion-sensitive biomimetic sensors, efficiently capture scene changes (referred to as “events”) with high temporal resolution, offering new opportunities to enhance VI-SLAM performance in dynamic environments. Integrating this kind of innovative sensor, we propose the first event-enhanced Visual–Inertial SLAM framework specifically designed for dynamic environments, termed E2-VINS. Specifically, the system uses a visual–inertial alignment strategy to estimate IMU biases and correct IMU measurements. The calibrated IMU measurements assist in motion compensation, achieving spatiotemporal alignment of events. Event-based dynamicity metrics, which measure the dynamicity of each pixel, are then generated from these aligned events. Based on these metrics, the visual residual terms of different pixels are adaptively assigned weights, namely dynamicity weights. Subsequently, E2-VINS jointly and alternately optimizes the system state (camera poses and map points) and the dynamicity weights, effectively filtering out dynamic features through a soft-threshold mechanism. Our scheme improves the robustness of classic VI-SLAM against dynamic features, significantly enhancing performance in dynamic environments, with an average improvement of 1.884% in mean position error compared to state-of-the-art methods. The superior performance of E2-VINS is validated through both qualitative and quantitative experimental results, and all relevant data and code have been released to ensure full reproducibility.
(This article belongs to the Special Issue Advances in Audio/Image Signals Processing)
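The adaptive weighting described in the abstract can be illustrated by scaling each feature's visual residual with a weight derived from an event-based dynamicity score through a soft threshold. The sigmoid form, threshold, and values below are assumptions for illustration, not the E2-VINS formulation:

```python
import numpy as np

def dynamicity_weights(dynamicity, threshold=0.3, sharpness=20.0):
    """Soft-threshold map from per-pixel dynamicity (0 = static, 1 = highly
    dynamic) to residual weights in (0, 1): static pixels keep full weight."""
    return 1.0 / (1.0 + np.exp(sharpness * (dynamicity - threshold)))

def weighted_visual_cost(residuals, dynamicity):
    """Sum of squared reprojection/photometric residuals, down-weighting
    pixels that the event stream flags as dynamic."""
    w = dynamicity_weights(dynamicity)
    return float(np.sum(w * residuals**2)), w

# Illustrative residuals (pixels) and dynamicity scores for five features.
res = np.array([0.8, 1.1, 6.5, 0.6, 7.2])       # large errors on moving objects
dyn = np.array([0.05, 0.10, 0.85, 0.08, 0.92])  # event-based dynamicity scores
cost, w = weighted_visual_cost(res, dyn)
print(w.round(3), cost)   # dynamic features receive near-zero weight
```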

14 pages, 6079 KB  
Data Descriptor
The EDI Multi-Modal Simultaneous Localization and Mapping Dataset (EDI-SLAM)
by Peteris Racinskis, Gustavs Krasnikovs, Janis Arents and Modris Greitans
Data 2025, 10(1), 5; https://doi.org/10.3390/data10010005 - 7 Jan 2025
Viewed by 2092
Abstract
This paper accompanies the initial public release of the EDI multi-modal SLAM dataset, a collection of long tracks recorded with a portable sensor package. These include two global-shutter RGB camera feeds, LiDAR scans, and inertial and GNSS data from an RTK-enabled IMU-GNSS positioning module, provided both as satellite fixes and as internally fused, interpolated pose estimates. The tracks are formatted as ROS1 and ROS2 bags, with calibration and ground truth data available separately. In addition to the filtered positioning module outputs, a second form of sparse ground-truth pose annotation is provided using independently surveyed visual fiducial markers as a reference. This enables the meaningful evaluation of systems that incorporate data from the positioning module directly into their localization estimates, and serves as an alternative when the GNSS reference is disrupted by intermittent signals or multipath scattering. In this paper, we describe the methods used to collect the dataset, its contents, and its intended use.

13 pages, 2064 KB  
Article
A Robust Method for Validating Orientation Sensors Using a Robot Arm as a High-Precision Reference
by József Kuti, Tamás Piricz and Péter Galambos
Sensors 2024, 24(24), 8179; https://doi.org/10.3390/s24248179 - 21 Dec 2024
Cited by 3 | Viewed by 2634
Abstract
This paper presents a robust and efficient method for validating the accuracy of orientation sensors commonly used in practical applications, leveraging measurements from a commercial robotic manipulator as a high-precision reference. The key concept lies in determining the rotational transformations between the robot’s base frame and the sensor’s reference, as well as between the TCP (Tool Center Point) frame and the sensor frame, without requiring precise alignment. Key advantages of the proposed method include its independence from the exact measurement of rotations between the reference instrumentation and the sensor, systematic testing capabilities, and the ability to produce repeatable excitation patterns under controlled conditions. This approach enables automated, high-precision, and comparative evaluation of various orientation sensing devices in a reproducible manner. Moreover, it facilitates efficient calibration and analysis of sensor errors, such as drift, noise, and response delays under various motion conditions. The method’s effectiveness is demonstrated through experimental validation of an Inertial Navigation System module and the SLAM-IMU fusion capabilities of the HTC VIVE VR headset, highlighting its versatility and reliability in addressing the challenges associated with orientation sensor validation.
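The core estimation problem, recovering a fixed rotation between the sensor frame and the robot frame from paired relative rotations, is a rotation-only hand-eye problem of the form A_i X = X B_i. A minimal axis-alignment sketch (Park–Martin style), with a synthetic self-check rather than the paper's data or exact formulation, might look like:

```python
import numpy as np

def rot_log(R):
    """Axis-angle vector of a rotation matrix (matrix log, vee'd)."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-8:
        return np.zeros(3)
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle * v / (2.0 * np.sin(angle))

def solve_rotation_ax_xb(robot_rel_rots, sensor_rel_rots):
    """Least-squares X from A_i X = X B_i using the axis relation log(A_i) = X log(B_i)."""
    a = np.array([rot_log(A) for A in robot_rel_rots])   # target axis vectors
    b = np.array([rot_log(B) for B in sensor_rel_rots])  # source axis vectors
    H = b.T @ a                                          # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T                                # X maps sensor frame to robot frame

def rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Synthetic check: pick a ground-truth X, generate consistent A_i = X B_i X^T.
X_true = rz(0.4) @ rx(-0.2)
Bs = [rx(0.7), rz(-0.5), rx(0.3) @ rz(0.6)]
As = [X_true @ B @ X_true.T for B in Bs]
X_est = solve_rotation_ax_xb(As, Bs)
print(np.allclose(X_est, X_true, atol=1e-8))   # True
```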

14 pages, 12144 KB  
Article
NMC3D: Non-Overlapping Multi-Camera Calibration Based on Sparse 3D Map
by Changshuai Dai, Ting Han, Yang Luo, Mengyi Wang, Guorong Cai, Jinhe Su, Zheng Gong and Niansheng Liu
Sensors 2024, 24(16), 5228; https://doi.org/10.3390/s24165228 - 13 Aug 2024
Cited by 3 | Viewed by 3475
Abstract
With the advancement of computer vision and sensor technologies, many multi-camera systems are being developed for the control, planning, and other functionalities of unmanned systems and robots. The calibration of a multi-camera system determines the accuracy of its operation, yet calibrating multi-camera systems without overlapping fields of view remains inaccurate. Furthermore, the potential of feature matching points and their spatial extent for calculating the extrinsic parameters of multi-camera systems has not yet been fully realized. To this end, we propose a multi-camera calibration algorithm for the high-precision calibration of multi-camera systems without overlapping fields of view. Calibration is reduced to solving the extrinsic transformation between maps constructed by the individual cameras. Firstly, a calibration environment map is built by running a SLAM algorithm separately for each camera while the system moves in a closed loop. Secondly, uniformly distributed matching points are selected among the similar feature points between the maps. Then, these matching points are used to solve the transformation between the cameras’ extrinsic parameters. Finally, the reprojection error is minimized to refine the extrinsic transformation. We conduct comprehensive experiments in multiple scenarios and report the resulting extrinsic parameters for multiple cameras. The results demonstrate that the proposed method accurately calibrates the extrinsic parameters of multiple cameras, even when the main camera and auxiliary cameras are rotated 180° relative to one another.
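The central step of aligning two per-camera maps through matched 3D points is a classic rigid-registration problem with a closed-form (Kabsch/Umeyama-style) solution. The sketch below uses synthetic matched points and omits the paper's uniform point selection and reprojection-error refinement:

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form R, t minimising sum ||R @ src_i + t - dst_i||^2 over matched
    3D map points (Kabsch / Umeyama without scale)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic check: map points seen by the auxiliary camera transformed into the
# main camera's map with a known extrinsic, then recovered from the matches.
rng = np.random.default_rng(0)
pts_aux = rng.uniform(-5, 5, size=(40, 3))
theta = np.pi                                   # 180 deg yaw, as in the experiment
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.8, -0.3, 0.1])
pts_main = pts_aux @ R_true.T + t_true
R_est, t_est = rigid_transform(pts_aux, pts_main)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))   # True True
```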

41 pages, 3369 KB  
Review
Application of Event Cameras and Neuromorphic Computing to VSLAM: A Survey
by Sangay Tenzin, Alexander Rassau and Douglas Chai
Biomimetics 2024, 9(7), 444; https://doi.org/10.3390/biomimetics9070444 - 20 Jul 2024
Cited by 9 | Viewed by 7500
Abstract
Simultaneous Localization and Mapping (SLAM) is a crucial function for most autonomous systems, allowing them to both navigate through and create maps of unfamiliar surroundings. Traditional Visual SLAM (VSLAM) relies on frame-based cameras and structured processing pipelines, which face challenges in dynamic or low-light environments. Recent advancements in event camera technology and neuromorphic processing offer promising opportunities to overcome these limitations. Event cameras, inspired by biological vision systems, capture scenes asynchronously, consuming minimal power while offering high temporal resolution. Neuromorphic processors, designed to mimic the parallel processing capabilities of the human brain, offer efficient computation for real-time processing of event-based data streams. This paper provides a comprehensive overview of recent research efforts to integrate event cameras and neuromorphic processors into VSLAM systems. It discusses the principles behind event cameras and neuromorphic processors, highlighting their advantages over traditional sensing and processing methods. It then surveys state-of-the-art approaches in event-based SLAM, including feature extraction, motion estimation, and map reconstruction techniques, and explores the integration of event cameras with neuromorphic processors, focusing on their synergistic benefits in terms of energy efficiency, robustness, and real-time performance. The paper also discusses the challenges and open research questions in this emerging field, such as sensor calibration, data fusion, and algorithmic development. Finally, potential applications and future directions for event-based SLAM systems are outlined, ranging from robotics and autonomous vehicles to augmented reality.
(This article belongs to the Special Issue Biologically Inspired Vision and Image Processing 2024)

20 pages, 4350 KB  
Article
Easy Rocap: A Low-Cost and Easy-to-Use Motion Capture System for Drones
by Haoyu Wang, Chi Chen, Yong He, Shangzhe Sun, Liuchun Li, Yuhang Xu and Bisheng Yang
Drones 2024, 8(4), 137; https://doi.org/10.3390/drones8040137 - 2 Apr 2024
Cited by 4 | Viewed by 5281
Abstract
Fast and accurate pose estimation is essential for the local motion control of robots such as drones. At present, camera-based motion capture (Mocap) systems are mostly used by robots, but such systems are easily affected by light noise and camera occlusion, and common commercial Mocap systems are expensive. To address these challenges, we propose Easy Rocap, a low-cost, open-source robot motion capture system that can quickly and robustly capture the accurate position and orientation of a robot. Firstly, a real-time object detector is trained, and an object-filtering algorithm using class and confidence is designed to eliminate false detections. Secondly, multiple-object tracking (MOT) is applied to maintain the continuity of the trajectories, and the epipolar constraint is applied to multi-view correspondences. Finally, the calibrated multi-view cameras are used to calculate the 3D coordinates of the markers and effectively estimate the 3D pose of the target robot. Our system takes in real-time multi-camera data streams, making it easy to integrate into the robot system. In the simulation scenario experiment, the average position estimation error of the method is less than 0.008 m, and the average orientation error is less than 0.65 degrees. In the real scenario experiment, we compared the localization results of our method with an advanced LiDAR-Inertial Simultaneous Localization and Mapping (SLAM) algorithm. The experimental results show that SLAM drifts during turns, while our method overcomes the drift and accumulated errors of SLAM, producing a more stable and accurate trajectory. In addition, the pose estimation speed of our system reaches 30 Hz.
(This article belongs to the Special Issue Resilient UAV Autonomy and Remote Sensing)
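Triangulating a marker from calibrated multi-view cameras can be sketched with the standard linear (DLT) method, where each view contributes two rows to a homogeneous system built from its 3x4 projection matrix. The camera matrices and observations below are synthetic, not from Easy Rocap:

```python
import numpy as np

def triangulate_dlt(proj_mats, pixels):
    """Linear (DLT) triangulation of one marker from >= 2 calibrated views.
    proj_mats: list of 3x4 projection matrices P = K [R | t];
    pixels: matching (u, v) observation per view."""
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                      # de-homogenise

def project(P, X):
    """Project a 3D point with a 3x4 projection matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic calibrated views of a marker at (0.5, 0.2, 3.0).
K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])   # camera 0.5 m along x
X_true = np.array([0.5, 0.2, 3.0])
obs = [project(P1, X_true), project(P2, X_true)]
print(triangulate_dlt([P1, P2], obs))   # approximately [0.5, 0.2, 3.0]
```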

16 pages, 12904 KB  
Article
ZUST Campus: A Lightweight and Practical LiDAR SLAM Dataset for Autonomous Driving Scenarios
by Yuhang He, Bo Li, Jianyuan Ruan, Aihua Yu and Beiping Hou
Electronics 2024, 13(7), 1341; https://doi.org/10.3390/electronics13071341 - 2 Apr 2024
Cited by 3 | Viewed by 3407
Abstract
This research proposes a lightweight and practical dataset with precise elevation ground truth and extrinsic calibration for the LiDAR (Light Detection and Ranging) SLAM (Simultaneous Localization and Mapping) task in autonomous driving. Our dataset targets cost-effective platforms with limited computational power and low-resolution three-dimensional LiDAR sensors (16-beam LiDAR), filling a gap in the existing literature. The data cover abundant scenarios, including degenerate environments, dynamic objects, and large-slope terrain, to facilitate investigation of SLAM system performance. We provide the ground truth pose from RTK-GPS with carefully rectified elevation errors and design an additional method to evaluate vertical drift. The LiDAR-IMU calibration module was also enhanced to ensure the precision of the point cloud data. The reliability and applicability of the dataset are fully tested through a series of experiments using several state-of-the-art LiDAR SLAM methods.
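The dataset's extra vertical-drift evaluation amounts to comparing estimated and RTK ground-truth elevations over the distance travelled. A minimal metric sketch with synthetic trajectories (not the dataset's actual evaluation code):

```python
import numpy as np

def vertical_drift(est_xyz, gt_xyz):
    """Elevation error statistics for a SLAM trajectory against ground truth,
    assuming the two trajectories are already time-associated pose-for-pose."""
    est_xyz, gt_xyz = np.asarray(est_xyz, float), np.asarray(gt_xyz, float)
    dz = est_xyz[:, 2] - gt_xyz[:, 2]
    # Horizontal path length from the ground truth, used to normalise the drift.
    steps = np.diff(gt_xyz[:, :2], axis=0)
    path_len = np.sum(np.linalg.norm(steps, axis=1))
    return {
        "final_dz_m": float(dz[-1]),
        "rmse_dz_m": float(np.sqrt(np.mean(dz**2))),
        "drift_per_100m": float(abs(dz[-1]) / max(path_len, 1e-9) * 100.0),
    }

# Synthetic example: a 200 m straight drive with slowly accumulating z drift.
t = np.linspace(0.0, 200.0, 401)
gt = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
est = gt + np.stack([np.zeros_like(t), np.zeros_like(t), 0.004 * t], axis=1)
print(vertical_drift(est, gt))   # ~0.8 m final error, ~0.4 m per 100 m
```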

24 pages, 10706 KB  
Article
Adaptive Point-Line Fusion: A Targetless LiDAR–Camera Calibration Method with Scheme Selection for Autonomous Driving
by Yingtong Zhou, Tiansi Han, Qiong Nie, Yuxuan Zhu, Minghu Li, Ning Bian and Zhiheng Li
Sensors 2024, 24(4), 1127; https://doi.org/10.3390/s24041127 - 8 Feb 2024
Cited by 5 | Viewed by 4231
Abstract
Accurate calibration between LiDAR and camera sensors is crucial for autonomous driving systems to perceive and understand the environment effectively. Typically, LiDAR–camera extrinsic calibration requires feature alignment and overlapping fields of view, and aligning features from different modalities can be challenging due to noise. Therefore, this paper proposes a targetless extrinsic calibration method for monocular cameras and LiDAR sensors with non-overlapping fields of view. The proposed solution uses pose transformation to establish data association across modalities, turning the calibration problem into an optimization problem within a visual SLAM system without requiring overlapping views. To improve performance, line features serve as constraints in visual SLAM, with accurate line-segment positions obtained through an extended photometric error optimization method. Moreover, a strategy is proposed for selecting appropriate calibration methods from among several alternative optimization schemes. This adaptive calibration method selection strategy ensures robust calibration performance in urban autonomous driving scenarios with varying lighting and environmental textures while avoiding the failures and excessive bias that can result from relying on a single approach.
(This article belongs to the Special Issue Radar Technology and Data Processing)

14 pages, 8775 KB  
Article
Accurate Visual Simultaneous Localization and Mapping (SLAM) against Around View Monitor (AVM) Distortion Error Using Weighted Generalized Iterative Closest Point (GICP)
by Yangwoo Lee, Minsoo Kim, Joonwoo Ahn and Jaeheung Park
Sensors 2023, 23(18), 7947; https://doi.org/10.3390/s23187947 - 17 Sep 2023
Cited by 3 | Viewed by 2920
Abstract
Accurately estimating the pose of a vehicle is important for autonomous parking. Around view monitor (AVM)-based visual Simultaneous Localization and Mapping (SLAM) has gained attention due to its affordability, commercial availability, and suitability for parking scenarios characterized by rapid rotations and back-and-forth movements of the vehicle. In real-world environments, however, the performance of AVM-based visual SLAM is degraded by AVM distortion errors resulting from inaccurate camera calibration. Therefore, this paper presents an AVM-based visual SLAM for autonomous parking that is robust against AVM distortion errors. A deep learning network assigns weights to parking line features based on the extent of the AVM distortion error. To obtain training data while minimizing human effort, three-dimensional (3D) Light Detection and Ranging (LiDAR) data and official parking lot guidelines are utilized. The output of the trained network model is incorporated into weighted Generalized Iterative Closest Point (GICP) for vehicle localization under distortion error conditions. The experimental results demonstrate that the proposed method reduces localization errors by an average of 39% compared with previous AVM-based visual SLAM approaches.
(This article belongs to the Special Issue Artificial Intelligence (AI) and Machine-Learning-Based Localization)
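The weighted registration idea can be pictured as a single weighted point-to-point alignment step in the ground plane, where each parking-line feature's network-predicted weight scales its contribution to the closed-form pose estimate. This is a simplified point-to-point stand-in for weighted GICP, with synthetic 2D features and weights rather than the paper's formulation:

```python
import numpy as np

def weighted_alignment_2d(src, dst, w):
    """One weighted point-to-point alignment step in the ground plane:
    find R (2x2), t minimising sum_i w_i ||R src_i + t - dst_i||^2."""
    src, dst, w = np.asarray(src, float), np.asarray(dst, float), np.asarray(w, float)
    w = w / w.sum()
    mu_s, mu_d = w @ src, w @ dst                       # weighted centroids
    H = (src - mu_s).T @ ((dst - mu_d) * w[:, None])    # weighted correlation
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic parking-line features: the last two are corrupted by AVM distortion
# and carry low weights, so they barely influence the estimated motion.
src = np.array([[0, 0], [2, 0], [2, 5], [0, 5], [1.0, 2.5], [1.8, 4.0]])
true_R = np.array([[np.cos(0.05), -np.sin(0.05)], [np.sin(0.05), np.cos(0.05)]])
dst = src @ true_R.T + np.array([0.30, 0.05])
dst[-2:] += np.array([[0.4, -0.3], [-0.5, 0.2]])        # distortion-error outliers
w = np.array([1.0, 1.0, 1.0, 1.0, 0.05, 0.05])          # network-style weights
R, t = weighted_alignment_2d(src, dst, w)
print(np.degrees(np.arctan2(R[1, 0], R[0, 0])), t)       # ~2.9 deg, ~[0.30, 0.05]
```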

18 pages, 4158 KB  
Article
Multi-Lidar System Localization and Mapping with Online Calibration
by Fang Wang, Xilong Zhao, Hengzhi Gu, Lida Wang, Siyu Wang and Yi Han
Appl. Sci. 2023, 13(18), 10193; https://doi.org/10.3390/app131810193 - 11 Sep 2023
Cited by 3 | Viewed by 2981
Abstract
The demand for automobiles is increasing and daily travel relies ever more on cars, but this trend is accompanied by escalating traffic safety issues. Surveys indicate that most traffic accidents stem from driver errors, both intentional and unintentional. Within the framework of vehicular intelligence, intelligent driving therefore uses computer software to assist drivers and reduce the likelihood of road safety incidents. LiDAR, an essential perception technology, plays an important role in intelligent driving. In real-world driving scenarios, the detection range of a single LiDAR is limited; multiple LiDARs improve detection range and point density, mitigating state estimation degradation in unstructured environments and thereby improving the precision and accuracy of simultaneous localization and mapping. However, the pose transformation between multiple LiDARs is intricate, and over extended periods perturbations from vibrations, temperature fluctuations, or collisions can compromise the initially converged extrinsic parameters. In view of these concerns, this paper introduces a system capable of concurrent multi-LiDAR positioning and mapping as well as real-time online extrinsic calibration. The method first preprocesses the raw measurements, extracts linear and planar features, and rectifies motion distortion. Subsequently, leveraging degradation factors, the convergence of the multi-LiDAR extrinsic parameters is monitored in real time. When deterioration is identified, the local map of the main LiDAR and the feature point cloud of the auxiliary LiDAR are associated to perform online calibration. This is followed by frame-to-frame matching according to the converged extrinsic parameters, yielding the laser odometry. Ground constraints and loop closure detection constraints in the back-end optimization correct the global pose estimates; concurrently, the feature point cloud is aligned with the global map and the map is updated. Finally, experimental validation on data acquired at Chang’an University substantiates the system’s online calibration accuracy, positioning and mapping accuracy, robustness, and real-time performance.
(This article belongs to the Special Issue Autonomous Vehicles: Technology and Application)