Search Results (22)

Search Parameters:
Keywords = lidar–IMU calibration

14 pages, 6079 KiB  
Data Descriptor
The EDI Multi-Modal Simultaneous Localization and Mapping Dataset (EDI-SLAM)
by Peteris Racinskis, Gustavs Krasnikovs, Janis Arents and Modris Greitans
Data 2025, 10(1), 5; https://doi.org/10.3390/data10010005 - 7 Jan 2025
Viewed by 1241
Abstract
This paper accompanies the initial public release of the EDI multi-modal SLAM dataset, a collection of long tracks recorded with a portable sensor package. These include two global shutter RGB camera feeds, LiDAR scans, as well as inertial and GNSS data from an RTK-enabled IMU-GNSS positioning module—both as satellite fixes and internally fused interpolated pose estimates. The tracks are formatted as ROS1 and ROS2 bags, with separately available calibration and ground truth data. In addition to the filtered positioning module outputs, a second form of sparse ground truth pose annotation is provided, using independently surveyed visual fiducial markers as a reference. This enables the meaningful evaluation of systems that directly incorporate data from the positioning module into their localization estimates, and serves as an alternative when the GNSS reference is disrupted by intermittent signals or multipath scattering. In this paper, we describe the methods used to collect the dataset, its contents, and its intended use.

18 pages, 9657 KiB  
Article
Research on Digital Terrain Construction Based on IMU and LiDAR Fusion Perception
by Chen Huang, Yiqi Wang, Xiaoqiang Sun and Shiyue Yang
Sensors 2025, 25(1), 15; https://doi.org/10.3390/s25010015 - 24 Dec 2024
Cited by 1 | Viewed by 944
Abstract
To address the shortcomings of light detection and ranging (LiDAR) sensors in extracting road surface elevation information in front of a vehicle, a scheme for digital terrain construction based on the fusion of an Inertial Measurement Unit (IMU) and LiDAR perception is proposed. First, two sets of sensor coordinate systems were configured, and the parameters of LiDAR and IMU were calibrated. Then, a terrain construction system based on the fusion perception of IMU and LiDAR was established, and improvements were made to the state estimation and mapping architecture. Terrain construction experiments were conducted in an academic setting. Finally, based on the output information from the terrain construction system, a moving average-like algorithm was designed to process point cloud data and extract the road surface elevation information at the vehicle’s trajectory position. By comparing the extraction effects of four different sliding window widths, the 4 cm width sliding window, which yielded the best results, was ultimately selected, making the extracted road surface elevation information more accurate and effective.
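The moving-average extraction step described above can be sketched as follows; the function and variable names are illustrative assumptions, and the 4 cm window width is the value the abstract reports as best:

```python
import numpy as np

def extract_elevation_profile(points, window=0.04):
    """Average point-cloud z values in fixed-width windows along the
    direction of travel (x), mimicking a moving-average filter.

    points: (N, 3) array of x, y, z coordinates in metres.
    window: sliding-window width in metres (4 cm per the abstract).
    Returns (centers, elevations) for each occupied window.
    """
    pts = points[np.argsort(points[:, 0])]       # sort along travel axis
    edges = np.arange(pts[0, 0], pts[-1, 0] + window, window)
    idx = np.digitize(pts[:, 0], edges)          # assign each point a window
    centers, elevations = [], []
    for b in np.unique(idx):
        sel = pts[idx == b]
        centers.append(sel[:, 0].mean())         # window centre along x
        elevations.append(sel[:, 2].mean())      # mean elevation in window
    return np.array(centers), np.array(elevations)
```

A smaller window follows the terrain more closely but is noisier; the paper's comparison of four widths is essentially a sweep over this parameter.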
(This article belongs to the Section Radar Sensors)

20 pages, 6270 KiB  
Article
Initial Pose Estimation Method for Robust LiDAR-Inertial Calibration and Mapping
by Eun-Seok Park, Saba Arshad and Tae-Hyoung Park
Sensors 2024, 24(24), 8199; https://doi.org/10.3390/s24248199 - 22 Dec 2024
Viewed by 1110
Abstract
Handheld LiDAR scanners, which typically consist of a LiDAR sensor, an Inertial Measurement Unit, and a processor, enable data capture while moving, offering flexibility for various applications, including indoor and outdoor 3D mapping in fields such as architecture and civil engineering. Unlike fixed LiDAR systems, handheld devices allow data collection from different angles, but this mobility introduces challenges in data quality, particularly when the initial calibration between sensors is not precise. Accurate LiDAR-IMU calibration, essential for mapping accuracy in Simultaneous Localization and Mapping applications, involves precise alignment of the sensors’ extrinsic parameters. This research presents a robust initial pose calibration method for LiDAR-IMU systems in handheld devices, specifically designed for indoor environments. The contributions are twofold. First, we present a robust plane detection method for LiDAR data, which removes the noise caused by the mobility of the scanning device and provides accurate planes for precise LiDAR initial pose estimation. Second, we present a robust plane-aided LiDAR calibration method that estimates the initial pose. By employing this LiDAR calibration method, an efficient LiDAR-IMU calibration is achieved for accurate mapping. Experimental results demonstrate that the proposed method achieves lower calibration errors and improved computational efficiency compared to existing methods.
(This article belongs to the Section Sensors and Robotics)

27 pages, 3941 KiB  
Article
Precision Inter-Row Relative Positioning Method by Using 3D LiDAR in Planted Forests and Orchards
by Limin Liu, Dong Ji, Fandi Zeng, Zhihuan Zhao and Shubo Wang
Agronomy 2024, 14(6), 1279; https://doi.org/10.3390/agronomy14061279 - 13 Jun 2024
Cited by 1 | Viewed by 1432
Abstract
Accurate positioning at the inter-row canopy can provide data support for precision variable-rate spraying. There is therefore an urgent need for a reliable positioning method for the inter-row canopy of closed orchards (planted forests). In this study, an Extended Kalman Filter (EKF) fusion positioning method (method C) was first constructed by calibrating the IMU and encoder errors. 3D Light Detection and Ranging (LiDAR) observations were then fused into method C, yielding an EKF fusion positioning method (method D) based on 3D LiDAR-corrected detection, which activates or deactivates method C according to the presence or absence of the canopy. The vertically installed 3D LiDAR detects the canopy body center, providing the vehicle with the inter-row vertical distance and heading, obtained from the distance to the body center and the fixed row spacing. This provides an accurate initial position for method C and corrects the positioning trajectory. Finally, positioning and canopy length measurement experiments were designed using a GPS positioning system. The results show that the proposed method significantly improves the accuracy of length measurement and positioning at the inter-row canopy, and the accuracy does not change significantly with the distance traveled. In the orchard experiment, the average positioning deviations of the lateral and vertical distances at the inter-row canopy were 0.1 m and 0.2 m, respectively, with an average heading deviation of 6.75°, and the average relative error of canopy length measurement was 4.35%. The method provides a simple and reliable inter-row positioning approach for current remote-controlled and manned agricultural machinery working in standardized 3D crops, and can be used to retrofit such machinery to improve its level of automation.
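A single predict/update cycle of an EKF of the kind used by methods C and D can be sketched as below. The state layout, the form of the LiDAR observation, and all names are assumptions for illustration, not the authors' code:

```python
import numpy as np

def ekf_step(x, P, u, z, Q, R):
    """One predict/update cycle of a planar EKF.

    x: state [x, y, theta]; u: odometry/IMU increment [dx, dy, dtheta]
    in the body frame; z: hypothetical LiDAR observation
    [lateral_offset, heading] derived from the detected canopy center.
    """
    # Predict: dead-reckon with encoder/IMU increments.
    c, s = np.cos(x[2]), np.sin(x[2])
    x_pred = x + np.array([c * u[0] - s * u[1],
                           s * u[0] + c * u[1],
                           u[2]])
    F = np.array([[1, 0, -s * u[0] - c * u[1]],   # Jacobian of the motion model
                  [0, 1,  c * u[0] - s * u[1]],
                  [0, 0, 1]])
    P_pred = F @ P @ F.T + Q

    # Update: LiDAR observes lateral offset (y) and heading directly.
    H = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```

Gating this update on canopy presence, as the abstract describes, amounts to skipping the update step when no canopy is detected.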
(This article belongs to the Special Issue Agricultural Unmanned Systems: Empowering Agriculture with Automation)

17 pages, 5535 KiB  
Article
Responsiveness and Precision of Digital IMUs under Linear and Curvilinear Motion Conditions for Local Navigation and Positioning in Advanced Smart Mobility
by Luciano Chiominto, Emanuela Natale, Giulio D’Emilia, Sante Alessandro Grieco, Andrea Prato, Alessio Facello and Alessandro Schiavi
Micromachines 2024, 15(6), 727; https://doi.org/10.3390/mi15060727 - 30 May 2024
Cited by 3 | Viewed by 3590
Abstract
Sensors based on MEMS technology, in particular Inertial Measurement Units (IMUs), when installed on vehicles, provide a real-time full estimation of the vehicle’s state vector (e.g., position, velocity, yaw angle, angular rate, acceleration), which is required for planning and controlling car trajectories, as well as managing in-car local navigation and positioning tasks. Moreover, data provided by the IMUs, integrated with data from other sensing systems within the vehicle (such as Lidar, cameras, and GPS) and with surrounding information exchanged in real time (vehicle to vehicle, vehicle to infrastructure, or vehicle to other entities), can be exploited to realize the full implementation of “smart mobility” on a large scale. On the other hand, “smart mobility” (which is expected to improve road safety, reduce traffic congestion and environmental burden, and enhance the sustainability of mobility as a whole), to be safe and functional on a large scale, must be supported by highly accurate and trustworthy technologies based on precise and reliable sensors and systems. It is known that the accuracy and precision of data supplied by appropriately in-lab-calibrated IMUs (calibrated against a primary or secondary standard to provide traceability to the International System of Units) make it possible to guarantee high-quality, reliable information for processing systems, since the data are reproducible, repeatable, and traceable. In this work, the effective responsiveness and the related precision of digital IMUs, under sinusoidal linear and curvilinear motion conditions at 5 Hz, 10 Hz, and 20 Hz, are investigated on the basis of metrological approaches in laboratory standard conditions only. As a first step, in-lab calibration reduces the variables of uncontrolled boundary conditions (e.g., those occurring in on-site vehicle tests) in order to identify the IMUs’ sensitivity in a stable and reproducible environment. For this purpose, a new calibration system based on an oscillating rotating table was developed to reproduce the dynamic conditions of use in the field, and the results are compared with calibration data obtained on linear calibration benches.
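One common way to express the sensitivity such a sinusoidal calibration yields is the ratio of the measured to the reference amplitude at the excitation frequency. This is a minimal sketch of that metric, not the paper's exact procedure, and all names are illustrative:

```python
import numpy as np

def sensitivity_at(f_exc, t, imu_signal, ref_signal):
    """Estimate sensitivity as the ratio of measured to reference
    amplitude at the excitation frequency f_exc (Hz).

    t: uniformly sampled time vector; imu_signal: device under test;
    ref_signal: traceable reference transducer on the same table.
    """
    def amplitude(sig):
        # Windowed spectrum; pick the bin nearest the excitation frequency.
        spec = np.fft.rfft(sig * np.hanning(len(sig)))
        freqs = np.fft.rfftfreq(len(sig), d=t[1] - t[0])
        return np.abs(spec[np.argmin(np.abs(freqs - f_exc))])
    return amplitude(imu_signal) / amplitude(ref_signal)
```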

16 pages, 12904 KiB  
Article
ZUST Campus: A Lightweight and Practical LiDAR SLAM Dataset for Autonomous Driving Scenarios
by Yuhang He, Bo Li, Jianyuan Ruan, Aihua Yu and Beiping Hou
Electronics 2024, 13(7), 1341; https://doi.org/10.3390/electronics13071341 - 2 Apr 2024
Cited by 1 | Viewed by 2123
Abstract
This research proposes a lightweight and applicable dataset with a precise elevation ground truth and extrinsic calibration toward the LiDAR (Light Detection and Ranging) SLAM (Simultaneous Localization and Mapping) task in the field of autonomous driving. Our dataset focuses on more cost-effective platforms with limited computational power and low-resolution three-dimensional LiDAR sensors (16-beam LiDAR), and fills the gaps in the existing literature. Our data include abundant scenarios that include degenerated environments, dynamic objects, and large slope terrain to facilitate the investigation of the performance of the SLAM system. We provided the ground truth pose from RTK-GPS and carefully rectified its elevation errors, and designed an extra method to evaluate the vertical drift. The module for calibrating the LiDAR and IMU was also enhanced to ensure the precision of point cloud data. The reliability and applicability of the dataset are fully tested through a series of experiments using several state-of-the-art LiDAR SLAM methods.

16 pages, 7583 KiB  
Technical Note
Geometric and Radiometric Quality Assessments of UAV-Borne Multi-Sensor Systems: Can UAVs Replace Terrestrial Surveys?
by Junhwa Chi, Jae-In Kim, Sungjae Lee, Yongsik Jeong, Hyun-Cheol Kim, Joohan Lee and Changhyun Chung
Drones 2023, 7(7), 411; https://doi.org/10.3390/drones7070411 - 22 Jun 2023
Cited by 7 | Viewed by 2340
Abstract
Unmanned aerial vehicles (UAVs), also known as drones, are a cost-effective alternative to traditional surveying methods, and they can be used to collect geospatial data over inaccessible or hard-to-reach locations. UAV-integrated miniaturized remote sensing sensors such as hyperspectral and LiDAR sensors, which formerly operated on airborne and spaceborne platforms, have recently been developed. Their accuracies can still be guaranteed when incorporating equipment such as ground control points (GCPs) and field spectrometers. This study conducted three experiments for geometric and radiometric accuracy assessments of simultaneously acquired RGB, hyperspectral, and LiDAR data from a single mission. Our RGB and hyperspectral data generated orthorectified images based on direct georeferencing without any GCPs; instead, a base station is required to post-process the Global Navigation Satellite System/Inertial Measurement Unit (GNSS/IMU) data. First, we compared the geometric accuracy of orthorectified RGB and hyperspectral images relative to the distance of the base station to determine which base station should be used. Second, point clouds were generated from overlapped RGB images and a LiDAR sensor, and we quantitatively and qualitatively compared the RGB and LiDAR point clouds. Lastly, we evaluated the radiometric quality of the hyperspectral images, which is the most critical factor for the hyperspectral sensor, using reference spectra that were simultaneously measured by a field spectrometer. Consequently, the distance of the base station for post-processing the GNSS/IMU data was found to have no significant impact on the geometric accuracy, indicating that a dedicated base station is not always necessary. Our experimental results demonstrated geometric errors of less than two hyperspectral pixels without using GCPs, achieving a level of accuracy comparable to survey-level standards. Regarding the comparison of RGB- and LiDAR-based point clouds, the RGB point clouds exhibited noise and lacked details; however, after a cleaning process, their vertical accuracy was found to be comparable with LiDAR’s. Although photogrammetry generated denser point clouds than LiDAR, the overall quality for extracting elevation data relies greatly on factors such as original image quality, including occlusions, shadows, and tie-points for matching. Furthermore, the image spectra derived from the hyperspectral data consistently demonstrated high radiometric quality without the need for in situ field spectrum information. This finding indicates that in situ field spectra are not always required to guarantee the radiometric quality of hyperspectral data, as long as well-calibrated targets are utilized.

13 pages, 13033 KiB  
Article
Uncontrolled Two-Step Iterative Calibration Algorithm for Lidar–IMU System
by Shilun Yin, Donghai Xie, Yibo Fu, Zhibo Wang and Ruofei Zhong
Sensors 2023, 23(6), 3119; https://doi.org/10.3390/s23063119 - 14 Mar 2023
Cited by 4 | Viewed by 2959
Abstract
Calibration of sensors is critical for the precise functioning of lidar–IMU systems. However, the accuracy of the system can be compromised if motion distortion is not considered. This study proposes a novel uncontrolled two-step iterative calibration algorithm that eliminates motion distortion and improves the accuracy of lidar–IMU systems. Initially, the algorithm corrects the distortion of rotational motion by matching the original inter-frame point cloud. Then, the point cloud is further matched with IMU after the prediction of attitude. The algorithm performs iterative motion distortion correction and rotation matrix calculation to obtain high-precision calibration results. In comparison with existing algorithms, the proposed algorithm boasts high accuracy, robustness, and efficiency. This high-precision calibration result can benefit a wide range of acquisition platforms, including handheld, unmanned ground vehicle (UGV), and backpack lidar–IMU systems.
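The rotation-matrix calculation at the core of such a calibration can be done in closed form with the Kabsch/Wahba SVD solution. The sketch below shows only that solve, without the iterative motion-distortion correction the paper wraps around it, and the pairing of rotation vectors is an assumed setup:

```python
import numpy as np

def solve_extrinsic_rotation(lidar_rots, imu_rots):
    """Solve the lidar-IMU extrinsic rotation R minimizing
    sum ||a_i - R b_i||^2 over paired per-frame rotation vectors
    (a_i from lidar scan matching, b_i from IMU integration),
    via the Kabsch/Wahba SVD solution.

    lidar_rots, imu_rots: (N, 3) arrays of axis-angle vectors.
    """
    M = imu_rots.T @ lidar_rots               # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(M)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

In the full method, the lidar frames would be re-undistorted with the current estimate and re-matched before each re-solve, which is what makes the procedure iterative.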

15 pages, 5799 KiB  
Article
Two-Step Self-Calibration of LiDAR-GPS/IMU Based on Hand-Eye Method
by Xin Nie, Jun Gong, Jintao Cheng, Xiaoyu Tang and Yuanfang Zhang
Symmetry 2023, 15(2), 254; https://doi.org/10.3390/sym15020254 - 17 Jan 2023
Cited by 2 | Viewed by 4419
Abstract
Multi-line LiDAR and GPS/IMU are widely used in autonomous driving and robotics, for example in simultaneous localization and mapping (SLAM). Calibrating the extrinsic parameters of each sensor is a necessary condition for multi-sensor fusion, and the calibration of each sensor directly affects the vehicle’s positioning control and perception performance. Through the algorithm, accurate extrinsic parameters and a symmetric covariance matrix of the extrinsic parameters can be obtained, the latter serving as a measure of confidence in the extrinsic parameters. For LiDAR-GPS/IMU calibration, many methods require specific vehicle motions or manually marked calibration scenes to keep the problem well constrained, resulting in high costs and a low degree of automation. To solve this problem, we propose a new two-step self-calibration method comprising extrinsic parameter initialization and refinement. The initialization part decouples the extrinsic parameters into rotation and translation parts: it first computes a reliable initial rotation from the rotation constraints, then computes the initial translation once a reliable initial rotation is obtained, and eliminates the accumulated drift of the LiDAR odometry by loop closure to complete the map construction. In the refinement part, the LiDAR odometry is obtained through scan-to-map registration and is tightly coupled with the IMU, and the absolute pose constraints in the map refine the extrinsic parameters. Our method is validated in simulated and real environments, and the results show that it has high accuracy and robustness.
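The initial-rotation step of a hand-eye self-calibration is commonly posed as the quaternion equation q_a ⊗ q_x = q_x ⊗ q_b over paired sensor motions and solved by SVD. This is a minimal sketch under that standard formulation, not necessarily the authors' exact implementation:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def hand_eye_rotation(quats_a, quats_b):
    """Find the unit quaternion q_x with q_a ⊗ q_x = q_x ⊗ q_b for all
    motion pairs, in a least-squares sense: stack the linear operators
    L(q_a) - R(q_b) and take the right singular vector of the smallest
    singular value. quats_a could be LiDAR odometry rotations, quats_b
    the time-aligned GPS/IMU rotations.
    """
    rows = []
    for qa, qb in zip(quats_a, quats_b):
        w, x, y, z = qa                      # left-multiplication matrix L(qa)
        L = np.array([[w, -x, -y, -z],
                      [x,  w, -z,  y],
                      [y,  z,  w, -x],
                      [z, -y,  x,  w]])
        w, x, y, z = qb                      # right-multiplication matrix R(qb)
        R = np.array([[w, -x, -y, -z],
                      [x,  w,  z, -y],
                      [y, -z,  w,  x],
                      [z,  y, -x,  w]])
        rows.append(L - R)
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    q = Vt[-1]
    return q / np.linalg.norm(q)
```

With the rotation fixed, the initial translation then drops out of a linear system, matching the decoupling order the abstract describes.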
(This article belongs to the Special Issue Recent Progress in Robot Control Systems: Theory and Applications)

18 pages, 2894 KiB  
Article
OMC-SLIO: Online Multiple Calibrations Spinning LiDAR Inertial Odometry
by Shuang Wang, Hua Zhang and Guijin Wang
Sensors 2023, 23(1), 248; https://doi.org/10.3390/s23010248 - 26 Dec 2022
Cited by 6 | Viewed by 3420
Abstract
Light detection and ranging (LiDAR) is often combined with an inertial measurement unit (IMU) to obtain LiDAR inertial odometry (LIO) for robot localization and mapping. To make LIO efficient and accessible to non-specialists, self-calibrating LIO is a hot research topic in the community. Spinning LiDAR (SLiDAR), which uses an additional rotating mechanism to spin a common LiDAR and scan the surrounding environment, achieves a large field of view (FoV) at low cost. Unlike common LiDAR, in addition to the calibration between the IMU and the LiDAR, a self-calibrating odometer for SLiDAR must also consider the mechanism calibration between the rotating mechanism and the LiDAR. However, existing self-calibrating LIO methods require the LiDAR to be rigidly attached to the IMU and do not take the mechanism calibration into account, so they cannot be applied to SLiDAR. In this paper, we propose a novel self-calibrating odometry scheme for SLiDAR, named the online multiple calibration inertial odometer (OMC-SLIO) method, which allows online estimation of multiple extrinsic parameters among the LiDAR, rotating mechanism, and IMU, as well as the odometer state. Specifically, considering that the rotating and static parts of the motor encoder inside the SLiDAR are rigidly connected to the LiDAR and IMU, respectively, we formulate the calibration within the SLiDAR as two separate sets of calibrations: the mechanism calibration between the LiDAR and the rotating part of the motor encoder, and the sensor calibration between the static part of the motor encoder and the IMU. Based on this SLiDAR calibration formulation, we can construct a well-defined kinematic model from the LiDAR to the IMU using the angular information from the motor encoder. Based on the kinematic model, a two-stage motion compensation method is presented to eliminate the point cloud distortion resulting from LiDAR spinning and platform motion. Furthermore, the mechanism and sensor calibrations as well as the odometer state are wrapped in a measurement model and estimated via an error-state iterative extended Kalman filter (ESIEKF). Experimental results show that our OMC-SLIO is effective and attains excellent performance.

17 pages, 4078 KiB  
Article
A Spatiotemporal Calibration Algorithm for IMU–LiDAR Navigation System Based on Similarity of Motion Trajectories
by Yunhui Li, Shize Yang, Xianchao Xiu and Zhonghua Miao
Sensors 2022, 22(19), 7637; https://doi.org/10.3390/s22197637 - 9 Oct 2022
Cited by 7 | Viewed by 4266
Abstract
The fusion of light detection and ranging (LiDAR) and inertial measurement unit (IMU) sensing information can effectively improve the environment modeling and localization accuracy of navigation systems. To realize the spatiotemporal unification of data collected by the IMU and the LiDAR, a two-step spatiotemporal calibration method combining coarse and fine alignment is proposed. The method comprises two aspects: (1) Continuous-time trajectories of the IMU attitude motion are modeled using B-spline basis functions, and the motion of the LiDAR is estimated with the normal distributions transform (NDT) point cloud registration algorithm. Taking the Hausdorff distance between the local trajectories as the cost function, and combining it with the hand–eye calibration method, the initial value of the spatiotemporal relationship between the two sensors’ coordinate systems is solved, and the IMU measurement data are then used to correct the LiDAR distortion. (2) Based on IMU preintegration and the point, line, and plane features of the LiDAR point cloud, the corresponding nonlinear optimization objective function is constructed. Combined with the corrected LiDAR data and the initial value of the spatiotemporal calibration of the coordinate systems, the target is optimized within a nonlinear graph optimization framework. The rationality, accuracy, and robustness of the proposed algorithm are verified by simulation analysis and actual test experiments. The results show that the accuracy of the proposed algorithm in calibrating the spatial coordinate system relationship was better than 0.08° (3δ) for rotation and 5 mm (3δ) for translation, and the time deviation calibration accuracy was better than 0.1 ms, with strong environmental adaptability. This can meet the high-precision calibration requirements of multisensor spatiotemporal parameters for field robot navigation systems.
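The Hausdorff-distance cost used in the coarse step can be computed directly from two sampled trajectories. A small sketch with illustrative names (a brute-force O(NM) version; production code would use a spatial index):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two trajectories given as
    (N, 3) and (M, 3) arrays of sampled positions -- the kind of cost
    minimized when coarsely aligning IMU and LiDAR local trajectories.
    """
    # Pairwise distance matrix between all samples of A and B.
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    # Worst-case nearest-neighbour distance, in both directions.
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```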
(This article belongs to the Topic Advances in Mobile Robotics Navigation)

16 pages, 3732 KiB  
Article
A Method of Calibration for the Distortion of LiDAR Integrating IMU and Odometer
by Qiuxuan Wu, Qinyuan Meng, Yangyang Tian, Zhongrong Zhou, Cenfeng Luo, Wandeng Mao, Pingliang Zeng, Botao Zhang and Yanbin Luo
Sensors 2022, 22(17), 6716; https://doi.org/10.3390/s22176716 - 5 Sep 2022
Cited by 7 | Viewed by 4201
Abstract
To mitigate the motion distortion that arises in LiDAR data at low and medium frame rates while the platform is moving, this paper proposes an improved scan-matching algorithm with velocity estimation that combines an IMU and an odometer. First, the information from the IMU and the odometer is fused, and the pose of the LiDAR is obtained using linear interpolation. The ICP method is used to scan-match the LiDAR data, with the fused IMU and odometer data providing the optimal initial value for the ICP. The estimated speed of the LiDAR is introduced as the termination condition of the ICP iteration to realize the compensation of the LiDAR data. Experimental comparative analysis shows that the algorithm outperforms the ICP algorithm and the VICP algorithm in matching accuracy.
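The linear-interpolation step can be sketched for the planar case as follows; the pose layout (x, y, yaw) and all names are assumptions for illustration:

```python
import numpy as np

def undistort(points, point_times, t0, t1, pose0, pose1):
    """Remove scan motion distortion by linearly interpolating the 2D
    sensor pose (x, y, yaw) between the fused IMU/odometer poses at the
    scan start (t0) and end (t1), then re-expressing every point with
    the pose valid at the instant it was measured.

    points: (N, 2) scan points in the sensor frame; point_times: (N,)
    per-point timestamps. Returns (N, 2) points in the common frame.
    """
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, point_times)):
        a = (t - t0) / (t1 - t0)                 # interpolation factor in [0, 1]
        x, y, yaw = (1 - a) * pose0 + a * pose1  # interpolated pose
        c, s = np.cos(yaw), np.sin(yaw)
        # rigid transform of the point by the interpolated pose
        out[i] = [x + c * p[0] - s * p[1],
                  y + s * p[0] + c * p[1]]
    return out
```

Interpolating yaw componentwise is only valid for small angular increments; a full implementation would interpolate rotations properly (e.g., with quaternions).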
(This article belongs to the Special Issue Efficient Intelligence with Applications in Embedded Sensing)

22 pages, 1538 KiB  
Article
A Human Gait Tracking System Using Dual Foot-Mounted IMU and Multiple 2D LiDARs
by Huu Toan Duong and Young Soo Suh
Sensors 2022, 22(17), 6368; https://doi.org/10.3390/s22176368 - 24 Aug 2022
Cited by 4 | Viewed by 3037
Abstract
This paper proposes a human gait tracking system using dual foot-mounted IMUs and multiple 2D LiDARs. The combined system aims to overcome the disadvantages of each single-sensor system (the short tracking range of a single 2D LiDAR and the drift errors of the IMU system). The LiDARs act as anchors to mitigate the errors of the inertial navigation algorithm. In our system, two 2D LiDARs are used: LiDAR 1 is placed around the starting point, and LiDAR 2 at the ending point (in straight walking) or at the turning point (in rectangular path walking). Using LiDAR 1, we can estimate the initial headings and positions of each IMU without any calibration process. We also propose a method to calibrate two LiDARs that are placed far apart. The measurements from the two LiDARs can then be combined in a Kalman filter and a smoother algorithm to correct the two estimated foot trajectories. If straight walking is detected, we update the current stride heading and the foot position using the previous stride headings, and this is used as a measurement update in the Kalman filter. In the smoother algorithm, a step-width constraint is used as a measurement update. We evaluate the stride length estimation through a straight-walking experiment along a corridor: the root mean square errors compared with an optical tracking system are less than 3 cm. The performance of the proposed method is also verified with a rectangular path walking experiment.
(This article belongs to the Collection Inertial Sensors and Applications)

18 pages, 6567 KiB  
Article
High-Precision SLAM Based on the Tight Coupling of Dual Lidar Inertial Odometry for Multi-Scene Applications
by Kui Xiao, Wentao Yu, Weirong Liu, Feng Qu and Zhenyan Ma
Appl. Sci. 2022, 12(3), 939; https://doi.org/10.3390/app12030939 - 18 Jan 2022
Cited by 5 | Viewed by 3170
Abstract
Simultaneous Localization and Mapping (SLAM) is an essential feature in many applications of mobile vehicles. To solve the problems of poor positioning accuracy, limited mapping scenarios, and unclear structural characteristics in indoor and outdoor SLAM, a new tightly coupled dual-lidar inertial odometry framework is proposed in this paper. First, through external calibration and an adaptive timestamp synchronization algorithm, the horizontal and vertical lidar data are fused, which compensates for the narrow vertical field of view (FOV) of a single lidar and makes the vertical-direction characteristics more complete in the mapping process. Second, the dual lidar data are tightly coupled with an Inertial Measurement Unit (IMU) to eliminate the motion distortion of the dual lidar odometry. Then, the distortion-corrected lidar odometry value and the pre-integrated IMU value are used as constraints to establish a nonlinear least-squares objective function. Joint optimization is then performed to obtain the best estimate of the IMU state, which is used to predict the IMU state at the next time step. Finally, experimental results are presented to verify the effectiveness of the proposed method.
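A minimal stand-in for the timestamp-synchronization step is nearest-timestamp pairing with a rejection threshold; the paper's adaptive algorithm is more involved, and the names and threshold here are assumptions:

```python
import numpy as np

def match_scans(t_horizontal, t_vertical, max_dt=0.05):
    """Pair horizontal- and vertical-lidar scans by nearest timestamp,
    discarding pairs further apart than max_dt seconds.

    t_horizontal, t_vertical: sorted 1D arrays of scan timestamps.
    Returns a list of index pairs (i_horizontal, j_vertical).
    """
    pairs = []
    for i, t in enumerate(t_horizontal):
        j = int(np.argmin(np.abs(t_vertical - t)))   # nearest vertical scan
        if abs(t_vertical[j] - t) <= max_dt:
            pairs.append((i, j))
    return pairs
```

Once scans are paired, the two point clouds can be merged in a common frame using the externally calibrated extrinsics before the tightly coupled optimization.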
(This article belongs to the Special Issue Advances in Industrial Robotics and Intelligent Systems)

19 pages, 11190 KiB  
Article
Cost Effective Mobile Mapping System for Color Point Cloud Reconstruction
by Cheng-Wei Peng, Chen-Chien Hsu and Wei-Yen Wang
Sensors 2020, 20(22), 6536; https://doi.org/10.3390/s20226536 - 16 Nov 2020
Cited by 8 | Viewed by 4375
Abstract
Survey-grade Lidar brands have commercialized Lidar-based mobile mapping systems (MMSs) for several years now. With this high-end equipment, high accuracy of the point clouds can be ensured, but unfortunately, the high cost has kept practical implementation in autonomous driving unaffordable. As an attempt to solve this problem, we present a cost-effective MMS that generates an accurate 3D color point cloud for autonomous vehicles. Among the major processes for color point cloud reconstruction, we first synchronize the timestamps of each sensor. A calibration process between the camera and the Lidar is developed to obtain the translation and rotation matrices, based on which color attributes can be composed onto the corresponding Lidar points. We also employ control points to adjust the point cloud for fine-tuning the absolute position. To overcome the limitations of the Global Navigation Satellite System/Inertial Measurement Unit (GNSS/IMU) positioning system, we utilize Normal Distribution Transform (NDT) localization to refine the trajectory and solve the multi-scan dispersion issue. Experimental results show that the color point cloud reconstructed by the proposed MMS has centimeter-level position accuracy, meeting the requirements of high-definition (HD) maps for autonomous driving.
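Composing color attributes onto Lidar points with the calibrated rotation and translation reduces to a pinhole projection into the camera image. A generic sketch, with K, R, t as assumed intrinsics and extrinsics rather than the authors' actual calibration values:

```python
import numpy as np

def colorize(points, image, K, R, t):
    """Assign RGB colors to lidar points by projecting them into a
    camera image with extrinsics (R, t) and intrinsics K.

    points: (N, 3) lidar-frame points; image: (H, W, 3) array.
    Returns an (M, 6) array [x, y, z, r, g, b] for the points that
    project inside the image with positive depth.
    """
    cam = (R @ points.T).T + t            # lidar frame -> camera frame
    front = cam[:, 2] > 0                 # keep points in front of the camera
    cam = cam[front]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]           # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = image.shape[:2]
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    pts = points[front][ok]
    rgb = image[v[ok], u[ok]]             # sample pixel colors
    return np.hstack([pts, rgb])
```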
(This article belongs to the Section Remote Sensors)
