Search Results (81)

Search Parameters:
Keywords = LiDAR-camera calibration

15 pages, 2993 KiB  
Article
A Joint LiDAR and Camera Calibration Algorithm Based on an Original 3D Calibration Plate
by Ziyang Cui, Yi Wang, Xiaodong Chen and Huaiyu Cai
Sensors 2025, 25(15), 4558; https://doi.org/10.3390/s25154558 - 23 Jul 2025
Viewed by 282
Abstract
Accurate extrinsic calibration between LiDAR and cameras is essential for effective sensor fusion, directly impacting the perception capabilities of autonomous driving systems. Although prior calibration approaches using planar and point features have yielded some success, they suffer from inherent limitations. Specifically, methods that rely on fitting planar contours using depth-discontinuous points are prone to systematic errors, which hinder the precise extraction of the 3D positions of feature points. This, in turn, compromises the accuracy and robustness of the calibration. To overcome these challenges, this paper introduces a novel 3D calibration plate incorporating gradient depth, localization markers, and corner features. At the point cloud level, the gradient depth enables accurate estimation of the 3D coordinates of feature points. At the image level, corner features and localization markers facilitate the rapid and precise acquisition of 2D pixel coordinates, with minimal interference from environmental noise. This method establishes a rigorous and systematic framework that enhances the accuracy of LiDAR–camera extrinsic calibration. In a simulated environment, experimental results demonstrate that the proposed algorithm achieves a rotation error below 0.002 radians and a translation error below 0.005 m. Full article
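
The plate described above supplies matched 3D feature coordinates (from the point cloud) and 2D pixel coordinates (from the image). Once such correspondences exist, the extrinsic rotation and translation can be recovered with a standard PnP solve; the sketch below uses OpenCV's solvePnP as a generic illustration, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def estimate_extrinsics(pts_lidar_3d, pts_image_2d, K, dist_coeffs=None):
    """Recover the LiDAR-to-camera rotation/translation from matched
    3D feature points (LiDAR frame, shape (N, 3), N >= 4) and their 2D
    pixel coordinates (shape (N, 2)). K is the 3x3 camera intrinsic matrix."""
    dist = np.zeros(5) if dist_coeffs is None else dist_coeffs
    ok, rvec, tvec = cv2.solvePnP(
        pts_lidar_3d.astype(np.float64),
        pts_image_2d.astype(np.float64),
        K.astype(np.float64), dist,
        flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix
    return R, tvec.reshape(3)    # extrinsic: x_cam = R @ x_lidar + t
```
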
(This article belongs to the Section Sensing and Imaging)

12 pages, 3214 KiB  
Article
Singular Value Decomposition (SVD) Method for LiDAR and Camera Sensor Fusion and Pattern Matching Algorithm
by Kaiqiao Tian, Meiqi Song, Ka C. Cheok, Micho Radovnikovich, Kazuyuki Kobayashi and Changqing Cai
Sensors 2025, 25(13), 3876; https://doi.org/10.3390/s25133876 - 21 Jun 2025
Viewed by 738
Abstract
LiDAR and camera sensors are widely utilized in autonomous vehicles (AVs) and robotics due to their complementary sensing capabilities—LiDAR provides precise depth information, while cameras capture rich visual context. However, effective multi-sensor fusion remains challenging due to discrepancies in resolution, data format, and viewpoint. In this paper, we propose a robust pattern matching algorithm that leverages singular value decomposition (SVD) and gradient descent (GD) to align geometric features—such as object contours and convex hulls—across LiDAR and camera modalities. Unlike traditional calibration methods that require manual targets, our approach is targetless, extracting matched patterns from projected LiDAR point clouds and 2D image segments. The algorithm computes the optimal transformation matrix between sensors, correcting misalignments in rotation, translation, and scale. Experimental results on a vehicle-mounted sensing platform demonstrate an alignment accuracy improvement of up to 85%, with the final projection error reduced to less than 1 pixel. This pattern-based SVD-GD framework offers a practical solution for maintaining reliable cross-sensor alignment in autonomous driving applications subject to long-term calibration drift, enabling real-time perception systems to operate robustly without recalibration. Full article
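
The core of the SVD step is the classical Kabsch/Procrustes solution for the rigid transform between matched point sets. A minimal sketch follows; the paper additionally refines the result with gradient descent and handles scale, which this snippet omits.

```python
import numpy as np

def svd_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst for matched
    point sets of shape (N, 3), via the Kabsch/SVD method."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```
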
(This article belongs to the Special Issue Recent Advances in LiDAR Sensor)

18 pages, 12661 KiB  
Article
Regression-Based Docking System for Autonomous Mobile Robots Using a Monocular Camera and ArUco Markers
by Jun Seok Oh and Min Young Kim
Sensors 2025, 25(12), 3742; https://doi.org/10.3390/s25123742 - 15 Jun 2025
Viewed by 466
Abstract
This paper introduces a cost-effective autonomous charging docking system that utilizes a monocular camera and ArUco markers. Traditional monocular vision-based approaches, such as SolvePnP, are sensitive to viewing angles, lighting conditions, and camera calibration errors, limiting the accuracy of spatial estimation. To address these challenges, we propose a regression-based method that learns geometric features from variations in marker size and shape to estimate distance and orientation accurately. The proposed model is trained using ground-truth data collected from a LiDAR sensor, while real-time operation is performed using only monocular input. Experimental results show that the proposed system achieves a mean distance error of 1.18 cm and a mean orientation error of 3.11°, significantly outperforming SolvePnP, which exhibits errors of 58.54 cm and 6.64°, respectively. In real-world docking tests, the system achieves a final average docking position error of 2 cm and an orientation error of 3.07°, demonstrating that reliable and accurate performance can be attained using low-cost, vision-only hardware. This system offers a practical and scalable solution for industrial applications. Full article
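
The regression idea can be illustrated with a minimal sketch: extract simple geometric features from a detected marker's pixel corners and fit a regressor against LiDAR-measured distances. The feature set and the linear model below are illustrative choices under stated assumptions, not the authors' exact formulation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def marker_features(corners):
    """Simple geometric features from one marker's (4, 2) pixel corners
    (e.g., as returned by an ArUco detector): side lengths, area, aspect."""
    sides = np.linalg.norm(np.roll(corners, -1, axis=0) - corners, axis=1)
    x, y = corners[:, 0], corners[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    aspect = sides.max() / sides.min()
    return np.concatenate([sides, [area, aspect]])

def fit_distance_regressor(corners_list, lidar_distance_m):
    """Train on many frames: corners_list is a list of (4, 2) arrays,
    lidar_distance_m the (N,) LiDAR-measured ground-truth range."""
    X = np.stack([marker_features(c) for c in corners_list])
    model = LinearRegression().fit(X, lidar_distance_m)
    return model  # at run time: model.predict(marker_features(c)[None])[0]
```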

22 pages, 30414 KiB  
Article
Metric Scaling and Extrinsic Calibration of Monocular Neural Network-Derived 3D Point Clouds in Railway Applications
by Daniel Thomanek and Clemens Gühmann
Appl. Sci. 2025, 15(10), 5361; https://doi.org/10.3390/app15105361 - 11 May 2025
Viewed by 549
Abstract
Three-dimensional reconstruction using monocular camera images is a well-established research topic. While multi-image approaches like Structure from Motion produce sparse point clouds, single-image depth estimation via machine learning promises denser results. However, many models estimate relative depth, and even those providing metric depth often struggle with unseen data due to unfamiliar camera parameters or domain-specific challenges. Accurate metric 3D reconstruction is critical for railway applications, such as ensuring structural gauge clearance from vegetation to meet legal requirements. We propose a novel method to scale 3D point clouds using the track gauge, which typically takes only a few standard values across large regions or countries worldwide (e.g., 1.435 m in Europe). Our approach leverages state-of-the-art image segmentation to detect rails and measure the track gauge from a train driver’s perspective. Additionally, we extend our method to estimate a reasonable railway-specific extrinsic camera calibration. Evaluations show that our method reduces the average Chamfer distance to LiDAR point clouds from 1.94 m (benchmark UniDepth) to 0.41 m for image-wise calibration and 0.71 m for average calibration. Full article
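
The scaling itself reduces to a single factor: the known gauge divided by the gauge measured in the unscaled point cloud. A minimal sketch, assuming the rail points have already been segmented (the segmentation network and the precise gauge-measurement geometry are where the paper's actual effort lies):

```python
import numpy as np

NOMINAL_GAUGE_M = 1.435  # standard gauge, e.g., European networks

def metric_scale_factor(left_rail_pts, right_rail_pts):
    """Estimate the scale that converts an unscaled monocular point cloud to
    metres, from already-segmented left/right rail points of shape (N, 3).
    The nearest-neighbour separation is a rough stand-in for the true gauge."""
    nearest = np.linalg.norm(
        left_rail_pts[:, None, :2] - right_rail_pts[None, :, :2],
        axis=-1).min(axis=1)
    measured_gauge = np.median(nearest)
    return NOMINAL_GAUGE_M / measured_gauge

def apply_scale(point_cloud, scale):
    return point_cloud * scale  # metric 3D points
```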

23 pages, 20311 KiB  
Article
Bridge Geometric Shape Measurement Using LiDAR–Camera Fusion Mapping and Learning-Based Segmentation Method
by Shang Jiang, Yifan Yang, Siyang Gu, Jiahui Li and Yingyan Hou
Buildings 2025, 15(9), 1458; https://doi.org/10.3390/buildings15091458 - 25 Apr 2025
Cited by 2 | Viewed by 768
Abstract
The rapid measurement of three-dimensional bridge geometric shapes is crucial for assessing construction quality and in-service structural conditions. Existing geometric shape measurement methods predominantly rely on traditional surveying instruments, which suffer from low efficiency and are limited to sparse point sampling. This study proposes a novel framework that utilizes an airborne LiDAR–camera fusion system for data acquisition, reconstructs high-precision 3D bridge models through real-time mapping, and automatically extracts structural geometric shapes using deep learning. The main contributions include the following: (1) A synchronized LiDAR–camera fusion system integrated with an unmanned aerial vehicle (UAV) and a microprocessor was developed, enabling the flexible and large-scale acquisition of bridge images and point clouds; (2) A multi-sensor fusion mapping method coupling visual-inertial odometry (VIO) and LiDAR-inertial odometry (LIO) was implemented to robustly construct 3D bridge point clouds in real time; and (3) An instance segmentation network-based approach was proposed to detect key structural components in images, with detected geometric shapes projected from image coordinates to 3D space using LiDAR–camera calibration parameters, addressing challenges in automated large-scale point cloud analysis. The proposed method was validated through geometric shape measurements on a concrete arch bridge. The results demonstrate that compared to the oblique photogrammetry method, the proposed approach reduces errors by 77.13%, while its detection time accounts for 4.18% of that required by a stationary laser scanner and 0.29% of that needed for oblique photogrammetry. Full article
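
Contribution (3) hinges on projecting the point cloud into the image with the LiDAR–camera calibration and keeping the points that fall inside a detected component's mask. A generic sketch of that projection-and-masking step, assuming known intrinsics K and extrinsics (R, t):

```python
import numpy as np

def points_in_mask(points_lidar, mask, K, R, t):
    """Select LiDAR points whose projection lands inside a binary
    segmentation mask of shape (H, W). K: 3x3 intrinsics; (R, t): the
    LiDAR-to-camera transform."""
    pts_cam = (R @ points_lidar.T).T + t          # into the camera frame
    in_front = pts_cam[:, 2] > 0.1                # drop points behind the camera
    pts_cam = pts_cam[in_front]
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                   # pixel coordinates
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = mask.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    hit = np.zeros(len(uv), dtype=bool)
    hit[valid] = mask[v[valid], u[valid]] > 0
    return points_lidar[in_front][hit]            # 3D points of the component
```
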
(This article belongs to the Special Issue Urban Infrastructure and Resilient, Sustainable Buildings)

29 pages, 6622 KiB  
Article
Semantic Fusion Algorithm of 2D LiDAR and Camera Based on Contour and Inverse Projection
by Xingyu Yuan, Yu Liu, Tifan Xiong, Wei Zeng and Chao Wang
Sensors 2025, 25(8), 2526; https://doi.org/10.3390/s25082526 - 17 Apr 2025
Cited by 1 | Viewed by 829
Abstract
Common single-line 2D LiDAR sensors and cameras have become core components in the field of robotic perception due to their low cost, compact size, and practicality. However, during the data fusion process, the randomness and complexity of real industrial scenes pose challenges. Traditional calibration methods for LiDAR and cameras often rely on precise targets and can accumulate errors, leading to significant limitations. Additionally, the semantic fusion of LiDAR and camera data typically requires extensive projection calculations, complex clustering algorithms, or sophisticated data fusion techniques, resulting in low real-time performance when handling large volumes of data points in dynamic environments. To address these issues, this paper proposes a semantic fusion algorithm for LiDAR and camera data based on contour and inverse projection. The method has two notable features: (1) Combined with an arc-support line-segment ellipse extraction algorithm, a LiDAR–camera calibration algorithm based on the various regular shapes of environmental targets is proposed, which improves the adaptability of the calibration to the environment. (2) This paper proposes a semantic segmentation algorithm based on the inverse projection of target contours. It is specifically designed to be versatile and applicable to both linear and arc features, significantly broadening the range of features that can be utilized in various tasks. This flexibility is a key advantage, as it allows the method to adapt to a wider variety of real-world scenarios where both types of features are commonly encountered. Compared with existing LiDAR point cloud semantic segmentation methods, this algorithm eliminates the need for complex clustering algorithms, data fusion techniques, and extensive laser point reprojection calculations. When handling a large number of laser points, the proposed method requires only one or two inverse projections of the contour to filter the range of laser points that intersect with specific targets. This approach enhances both the accuracy of point cloud searches and the speed of semantic processing. Finally, the validity of the semantic fusion algorithm is demonstrated through field experiments. Full article
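
The inverse-projection idea can be sketched as follows: back-project the target's image contour onto the 2D LiDAR scan plane and use the resulting footprint to gate the laser points, so that only one or two contour projections are needed instead of reprojecting every laser point. This is a simplified illustration rather than the paper's full algorithm; (R_cl, t_cl) denotes an assumed camera-to-LiDAR transform.

```python
import numpy as np

def contour_to_scan_plane(contour_px, K, R_cl, t_cl):
    """Inverse-project image contour pixels onto the 2D LiDAR scan plane
    (z = 0 in the LiDAR frame). contour_px: (N, 2) pixel coordinates;
    (R_cl, t_cl) maps camera coordinates to LiDAR coordinates."""
    ones = np.ones((len(contour_px), 1))
    rays_cam = (np.linalg.inv(K) @ np.hstack([contour_px, ones]).T).T
    rays_lid = (R_cl @ rays_cam.T).T               # ray directions, LiDAR frame
    origin = t_cl                                  # camera centre, LiDAR frame
    s = -origin[2] / rays_lid[:, 2]                # intersect rays with z = 0
    pts = origin + s[:, None] * rays_lid
    return pts[s > 0, :2]                          # (x, y) footprint of the contour

def laser_points_in_footprint(scan_xy, footprint_xy):
    """Keep 2D laser points whose bearing lies within the footprint's angular
    range - a cheap gate before finer matching (ignores the +/- pi wrap)."""
    ang = np.arctan2(scan_xy[:, 1], scan_xy[:, 0])
    fa = np.arctan2(footprint_xy[:, 1], footprint_xy[:, 0])
    return scan_xy[(ang >= fa.min()) & (ang <= fa.max())]
```
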
(This article belongs to the Section Sensors and Robotics)

16 pages, 14380 KiB  
Article
Online Calibration Method of LiDAR and Camera Based on Fusion of Multi-Scale Cost Volume
by Xiaobo Han, Jie Luo, Xiaoxu Wei and Yongsheng Wang
Information 2025, 16(3), 223; https://doi.org/10.3390/info16030223 - 13 Mar 2025
Cited by 1 | Viewed by 1752
Abstract
The online calibration algorithm for camera and LiDAR helps solve the problem of multi-sensor fusion and is of great significance in autonomous driving perception algorithms. Existing online calibration algorithms struggle to achieve both real-time performance and accuracy: high-precision calibration algorithms impose heavy hardware requirements, while lightweight calibration algorithms have difficulty meeting the accuracy requirements. Moreover, sensor noise, vibration, and changes in environmental conditions may reduce calibration accuracy. In addition, due to large domain differences between public datasets, existing online calibration algorithms are unstable across datasets and lack robustness. To solve these problems, we propose an online calibration algorithm based on multi-scale cost volume fusion. First, a multi-layer convolutional network is used to downsample and concatenate the camera RGB data and LiDAR point cloud data to obtain three-scale feature maps. These are then subjected to feature concatenation and group-wise correlation processing to generate three sets of cost volumes of different scales. After that, all the cost volumes are spliced and sent to the pose estimation module. After post-processing, the translation and rotation matrix between the camera and LiDAR coordinate systems can be obtained. We tested and verified this method on the KITTI odometry dataset, measuring an average translation error of 0.278 cm and an average rotation error of 0.020°, with a single frame taking 23 ms, reaching a state-of-the-art level. Full article
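
A minimal PyTorch sketch of the group-wise correlation and multi-scale fusion steps described above, simplified from the paper's description; the group count and upsampling choices are illustrative assumptions.

```python
import torch

def groupwise_correlation(feat_rgb, feat_lidar, num_groups=8):
    """Group-wise correlation between an RGB feature map and a projected
    LiDAR feature map of identical shape (B, C, H, W): channels are split
    into groups, and the averaged products form a (B, G, H, W) cost volume."""
    b, c, h, w = feat_rgb.shape
    assert c % num_groups == 0
    fr = feat_rgb.view(b, num_groups, c // num_groups, h, w)
    fl = feat_lidar.view(b, num_groups, c // num_groups, h, w)
    return (fr * fl).mean(dim=2)

def fuse_scales(cost_volumes):
    """Upsample per-scale cost volumes to the finest resolution and
    concatenate them before the pose-estimation head."""
    target = cost_volumes[0].shape[-2:]
    up = [torch.nn.functional.interpolate(cv, size=target, mode="bilinear",
                                          align_corners=False)
          for cv in cost_volumes]
    return torch.cat(up, dim=1)
```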

19 pages, 30440 KiB  
Article
A Method for the Calibration of a LiDAR and Fisheye Camera System
by Álvaro Martínez, Antonio Santo, Monica Ballesta, Arturo Gil and Luis Payá
Appl. Sci. 2025, 15(4), 2044; https://doi.org/10.3390/app15042044 - 15 Feb 2025
Cited by 2 | Viewed by 1593
Abstract
LiDAR and camera systems are frequently used together to gain a more complete understanding of the environment in different fields, such as mobile robotics, autonomous driving, or intelligent surveillance. Accurately calibrating the extrinsic parameters is crucial for the accurate fusion of the data captured by both systems, which is equivalent to finding the transformation between the reference systems of both sensors. Traditional calibration methods for LiDAR and camera systems are developed for pinhole cameras and are not directly applicable to fisheye cameras. This work proposes a target-based calibration method for LiDAR and fisheye camera systems that avoids the need to transform images to a pinhole camera model, reducing the computation time. Instead, the method uses the spherical projection of the image, obtained with the intrinsic calibration parameters and the corresponding point cloud for LiDAR–fisheye calibration. Thus, unlike a pinhole-camera-based system, a wider field of view is provided, adding more information, which leads to a better understanding of the environment and enables the use of fewer image sensors to cover a wider area. Full article
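
The spherical projection at the heart of the method maps each fisheye pixel to a ray on the unit sphere. A sketch under a simple equidistant fisheye model (r = f·θ); real fisheye calibrations such as Kannala–Brandt add distortion terms that are omitted here.

```python
import numpy as np

def fisheye_pixels_to_sphere(uv, fx, fy, cx, cy):
    """Map fisheye pixels to unit-sphere rays under an equidistant model.
    uv: (N, 2) pixel coordinates; (fx, fy, cx, cy): fisheye intrinsics."""
    x = (uv[:, 0] - cx) / fx
    y = (uv[:, 1] - cy) / fy
    theta = np.sqrt(x**2 + y**2)          # angle from the optical axis
    phi = np.arctan2(y, x)                # azimuth around the axis
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=1)
```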

22 pages, 63900 KiB  
Article
Camera–LiDAR Wide Range Calibration in Traffic Surveillance Systems
by Byung-Jin Jang, Taek-Lim Kim and Tae-Hyoung Park
Sensors 2025, 25(3), 974; https://doi.org/10.3390/s25030974 - 6 Feb 2025
Viewed by 1264
Abstract
In traffic surveillance systems, accurate camera–LiDAR calibration is critical for effective detection and robust environmental recognition. Due to the significant distances at which sensors are positioned to cover extensive areas and minimize blind spots, the calibration search space expands, increasing the complexity of the optimization process. This study proposes a novel target-less calibration method that leverages dynamic objects, specifically, moving vehicles, to constrain the calibration search range and enhance accuracy. To address the challenges of the expanded search space, we employ a genetic algorithm-based optimization technique, which reduces the risk of converging to local optima. Experimental results on both the TUM public dataset and our proprietary dataset indicate that the proposed method achieves high calibration accuracy, which is particularly suitable for traffic surveillance applications requiring wide-area calibration. This approach holds promise for enhancing sensor fusion accuracy in complex surveillance environments. Full article
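
A genetic search over the six extrinsic parameters can be sketched as below; the cost function (for example, the misalignment of projected dynamic-object detections) is left as a placeholder, and the population and mutation settings are arbitrary assumptions rather than the paper's values.

```python
import numpy as np

def genetic_calibration(cost_fn, bounds, pop_size=60, generations=200,
                        elite_frac=0.2, mutation_sigma=0.05, seed=0):
    """Minimal genetic search over a 6-DOF vector [roll, pitch, yaw, tx, ty, tz].
    cost_fn maps a parameter vector to a scalar alignment cost; bounds is a
    (6, 2) array of search limits that constrains the search range."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, 6))
    n_elite = max(2, int(elite_frac * pop_size))
    for _ in range(generations):
        costs = np.array([cost_fn(p) for p in pop])
        elite = pop[np.argsort(costs)[:n_elite]]
        # Crossover: average two randomly chosen elite parents, then mutate.
        parents = elite[rng.integers(0, n_elite, size=(pop_size, 2))]
        children = parents.mean(axis=1)
        children += rng.normal(0, mutation_sigma, children.shape) * (hi - lo)
        pop = np.clip(children, lo, hi)
        pop[:n_elite] = elite                     # keep the best unchanged
    costs = np.array([cost_fn(p) for p in pop])
    return pop[np.argmin(costs)]
```
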
(This article belongs to the Section Vehicular Sensing)

12 pages, 3424 KiB  
Technical Note
Enhancing Calibration Precision in MIMO Radar with Initial Parameter Optimization
by Yonghwi Kwon, Kanghyuk Seo and Chul Ki Kim
Remote Sens. 2025, 17(3), 389; https://doi.org/10.3390/rs17030389 - 23 Jan 2025
Viewed by 917
Abstract
For Advanced Driver Assistance Systems (ADASs), researchers have continually investigated devices that can serve as the eyes of a vehicle; the representative devices are LiDAR, cameras, and radar. This paper focuses on radar, which can be used regardless of climate or weather, day or night. We propose a simple and easy calibration method for Multi-Input Multi-Output (MIMO) radar to guarantee performance with initial calibration parameters. Based on a covariance matrix, the signals of all channels are modified to improve performance and reduce unexpected interference. Using the proposed coupling matrix, we can therefore suppress unexpected interference and generate accurately calibrated results. To verify the improvement achieved by our method, a practical experiment is conducted with a Frequency-Modulated Continuous-Wave (FMCW) MIMO radar mounted on an automobile. Full article
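
As a generic illustration of coupling-matrix calibration (not the paper's exact formulation): estimate a channel coupling matrix from snapshots of a known reference target, then correct later measurements with its inverse.

```python
import numpy as np

def estimate_coupling_matrix(measured, ideal):
    """Estimate a channel coupling/calibration matrix C such that
    measured ~= C @ ideal, from snapshots of a known reference target.
    measured, ideal: (num_channels, num_snapshots) complex arrays."""
    return measured @ np.linalg.pinv(ideal)

def apply_calibration(C, snapshot):
    """Correct a raw multi-channel snapshot with the inverse coupling matrix."""
    return np.linalg.solve(C, snapshot)
```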
(This article belongs to the Special Issue Array and Signal Processing for Radar)

14 pages, 6079 KiB  
Data Descriptor
The EDI Multi-Modal Simultaneous Localization and Mapping Dataset (EDI-SLAM)
by Peteris Racinskis, Gustavs Krasnikovs, Janis Arents and Modris Greitans
Data 2025, 10(1), 5; https://doi.org/10.3390/data10010005 - 7 Jan 2025
Viewed by 1241
Abstract
This paper accompanies the initial public release of the EDI multi-modal SLAM dataset, a collection of long tracks recorded with a portable sensor package. These include two global shutter RGB camera feeds, LiDAR scans, as well as inertial and GNSS data from an RTK-enabled IMU-GNSS positioning module—both as satellite fixes and internally fused interpolated pose estimates. The tracks are formatted as ROS1 and ROS2 bags, with separately available calibration and ground truth data. In addition to the filtered positioning module outputs, a second form of sparse ground truth pose annotation is provided using independently surveyed visual fiducial markers as a reference. This enables the meaningful evaluation of systems that directly incorporate data from the positioning module into their localization estimates, and serves as an alternative when the GNSS reference is disrupted by intermittent signals or multipath scattering. In this paper, we describe the methods used to collect the dataset, its contents, and its intended use. Full article
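
A minimal sketch of iterating over one of the ROS1 bags with the standard rosbag Python API; the file name and topic names here are placeholders and should be taken from the dataset documentation.

```python
import rosbag  # ROS1 Python API

bag = rosbag.Bag("edi_slam_track.bag")  # placeholder file name
for topic, msg, stamp in bag.read_messages(
        topics=["/lidar/points", "/camera_left/image_raw", "/gnss/fix"]):
    # Each record carries the topic, the deserialized message, and its timestamp.
    print(stamp.to_sec(), topic, type(msg).__name__)
bag.close()
```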

21 pages, 20775 KiB  
Article
Sensor Fusion Method for Object Detection and Distance Estimation in Assisted Driving Applications
by Stefano Favelli, Meng Xie and Andrea Tonoli
Sensors 2024, 24(24), 7895; https://doi.org/10.3390/s24247895 - 10 Dec 2024
Cited by 5 | Viewed by 3051
Abstract
The fusion of multiple sensors’ data in real-time is a crucial process for autonomous and assisted driving, where high-level controllers need classification of objects in the surroundings and estimation of relative positions. This paper presents an open-source framework to estimate the distance between a vehicle equipped with sensors and different road objects on its path using the fusion of data from cameras, radars, and LiDARs. The target application is an Advanced Driving Assistance System (ADAS) that benefits from the integration of the sensors’ attributes to plan the vehicle’s speed according to real-time road occupation and distance from obstacles. Based on geometrical projection, a low-level sensor fusion approach is proposed to map 3D point clouds into 2D camera images. The fusion information is used to estimate the distance of objects detected and labeled by a Yolov7 detector. The open-source pipeline implemented in ROS consists of a sensor calibration method, a Yolov7 detector, 3D point cloud downsampling and clustering, and finally a 3D-to-2D transformation between the reference frames. The goal of the pipeline is to perform data association and estimate the distance of the identified road objects. The accuracy and performance are evaluated in real-world urban scenarios with commercial hardware. The pipeline running on an embedded Nvidia Jetson AGX achieves good accuracy on object identification and distance estimation, running at 5 Hz. The proposed framework introduces a flexible and resource-efficient method for data association from common automotive sensors and proves to be a promising solution for enabling effective environment perception for assisted driving. Full article
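
After the 3D-to-2D projection, data association reduces to collecting the projected points that fall inside each Yolov7 bounding box and taking a robust range statistic. A generic sketch of that step (the downsampling and clustering stages of the pipeline are omitted):

```python
import numpy as np

def distance_per_detection(uv, depth_cam, boxes_xyxy):
    """Associate projected LiDAR points with detection boxes and return a
    robust distance per box. uv: (N, 2) projected pixel coordinates;
    depth_cam: (N,) point depths in the camera frame; boxes_xyxy: (M, 4)."""
    distances = []
    for x1, y1, x2, y2 in boxes_xyxy:
        inside = ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
                  (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
        # Median is robust to background points leaking into the box.
        distances.append(float(np.median(depth_cam[inside]))
                         if inside.any() else np.nan)
    return distances
```
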
(This article belongs to the Special Issue Sensors and Sensor Fusion Technology in Autonomous Vehicles)

20 pages, 6095 KiB  
Article
MSANet: LiDAR-Camera Online Calibration with Multi-Scale Fusion and Attention Mechanisms
by Fengguang Xiong, Zhiqiang Zhang, Yu Kong, Chaofan Shen, Mingyue Hu, Liqun Kuang and Xie Han
Remote Sens. 2024, 16(22), 4233; https://doi.org/10.3390/rs16224233 - 14 Nov 2024
Viewed by 1631
Abstract
Sensor data fusion is increasingly crucial in the field of autonomous driving. In sensor fusion research, LiDAR and camera have become prevalent topics. However, accurate data calibration from different modalities is essential for effective fusion. Current calibration methods often depend on specific targets or manual intervention, which are time-consuming and have limited generalization capabilities. To address these issues, we introduce MSANet: LiDAR-Camera Online Calibration with Multi-Scale Fusion and Attention Mechanisms, an end-to-end deep learning-based online calibration network for inferring 6-degree of freedom (DOF) rigid body transformations between 2D images and 3D point clouds. By fusing multi-scale features, we obtain feature representations that contain fine detail and rich semantic information. The attention module is used to carry out feature correlation between different modalities to complete feature matching. Rather than acquiring the precise parameters directly, MSANet online corrects deviations, aligning the initial calibration with the ground truth. We conducted extensive experiments on the KITTI datasets, demonstrating that our method performs well across various scenarios; in particular, the average translation prediction error improves by 2.03 cm compared with the best result among the compared methods. Full article
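
Attention-based cross-modal matching can be sketched with a standard multi-head cross-attention layer in which RGB features query the projected-LiDAR features; this is a simplified stand-in for MSANet's attention module, not its actual architecture.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Cross-attention where RGB features query projected-LiDAR features."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, feat_rgb, feat_lidar):
        # (B, C, H, W) -> (B, H*W, C) token sequences for attention.
        b, c, h, w = feat_rgb.shape
        q = feat_rgb.flatten(2).transpose(1, 2)
        kv = feat_lidar.flatten(2).transpose(1, 2)
        fused, _ = self.attn(q, kv, kv)
        return fused.transpose(1, 2).view(b, c, h, w)
```
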
(This article belongs to the Section Remote Sensing Image Processing)

29 pages, 61165 KiB  
Article
LiDAR-360 RGB Camera-360 Thermal Camera Targetless Calibration for Dynamic Situations
by Khanh Bao Tran, Alexander Carballo and Kazuya Takeda
Sensors 2024, 24(22), 7199; https://doi.org/10.3390/s24227199 - 10 Nov 2024
Viewed by 2161
Abstract
Integrating multiple types of sensors into autonomous systems, such as cars and robots, has become a widely adopted approach in modern technology. Among these sensors, RGB cameras, thermal cameras, and LiDAR are particularly valued for their ability to provide comprehensive environmental data. However, despite their advantages, current research primarily focuses on a single sensor or a combination of two sensors at a time; the full potential of utilizing all three sensors is often neglected. One key challenge is the ego-motion compensation of data in dynamic situations, which results from the rotational nature of the LiDAR sensor, and the blind spots of standard cameras due to their limited field of view. To resolve this problem, this paper proposes a novel method for the simultaneous registration of LiDAR, panoramic RGB cameras, and panoramic thermal cameras in dynamic environments without the need for calibration targets. Initially, essential features from RGB images, thermal data, and LiDAR point clouds are extracted through a novel method, designed to capture significant raw data characteristics. These extracted features then serve as a foundation for ego-motion compensation, optimizing the initial dataset. Subsequently, the raw features can be further refined to enhance calibration accuracy, achieving more precise alignment results. The results of the paper demonstrate the effectiveness of this approach in enhancing multi-sensor calibration compared to other approaches. At a high speed of around 9 m/s, the accuracy of LiDAR–camera calibration improves by about 30 percent in some situations. The proposed method has the potential to significantly improve the reliability and accuracy of autonomous systems in real-world scenarios, particularly under challenging environmental conditions. Full article
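
Ego-motion compensation of a rotating LiDAR sweep can be sketched with a first-order, constant-velocity model that re-expresses every point in the sensor frame at the end of the sweep, using per-point timestamps. The paper's feature-based compensation is more elaborate; this is only a minimal planar approximation.

```python
import numpy as np

def deskew_scan(points, timestamps, linear_vel, yaw_rate):
    """Compensate motion distortion in one LiDAR sweep under a constant
    planar-velocity model. points: (N, 3); timestamps: (N,) seconds;
    linear_vel: (3,) m/s in the sensor frame; yaw_rate: rad/s."""
    tau = timestamps.max() - timestamps          # time from capture to sweep end
    # Undo the translation and yaw the sensor accumulated after each point
    # was captured (first-order approximation: translate, then rotate).
    shifted = points - tau[:, None] * np.asarray(linear_vel)
    yaw = -yaw_rate * tau
    cos_y, sin_y = np.cos(yaw), np.sin(yaw)
    out = shifted.copy()
    out[:, 0] = cos_y * shifted[:, 0] - sin_y * shifted[:, 1]
    out[:, 1] = sin_y * shifted[:, 0] + cos_y * shifted[:, 1]
    return out
```
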
(This article belongs to the Section Radar Sensors)

13 pages, 1882 KiB  
Article
Coastline Bathymetry Retrieval Based on the Combination of LiDAR and Remote Sensing Camera
by Yicheng Liu, Tong Wang, Qiubao Hu, Tuanchong Huang, Anmin Zhang and Mingwei Di
Water 2024, 16(21), 3135; https://doi.org/10.3390/w16213135 - 1 Nov 2024
Viewed by 1571
Abstract
This paper presents a Compact Integrated Water–Land Survey System (CIWS), which combines a remote sensing camera and a LiDAR module, and proposes an innovative underwater topography retrieval technique based on this system. This technique utilizes high-precision water depth points obtained from LiDAR measurements as control points and integrates them with the grayscale values from aerial photogrammetry images to construct a bathymetry retrieval model. This model can achieve large-scale bathymetric retrieval in shallow waters. Calibration of the UAV-mounted LiDAR system was conducted using laboratory and Dongjiang Bay marine calibration fields, with the results showing a laser depth measurement accuracy of up to 10 cm. Experimental tests near Miaowan Island demonstrated the generation of high-precision 3D seabed topographic maps for the South China Sea area using LiDAR depth data and remote sensing images. The study validates the feasibility and accuracy of this integrated scanning method for producing detailed 3D seabed topography models. Full article
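
The control-point idea can be sketched as a simple regression from image grayscale to LiDAR depth at the control points, here using a common log-linear empirical form for optically shallow water; the paper's actual retrieval model may differ.

```python
import numpy as np

def fit_bathymetry_model(gray_at_control, lidar_depth):
    """Fit depth ~= a * ln(gray) + b using LiDAR depth control points.
    gray_at_control: (N,) positive grayscale values sampled at the control
    points; lidar_depth: (N,) LiDAR-measured depths in metres."""
    A = np.column_stack([np.log(gray_at_control.astype(float)),
                         np.ones(len(gray_at_control))])
    coeffs, *_ = np.linalg.lstsq(A, lidar_depth, rcond=None)
    return coeffs                                  # (a, b)

def predict_bathymetry(gray_image, coeffs):
    """Apply the fitted model over the whole image; zero pixels are masked."""
    a, b = coeffs
    valid = gray_image > 0
    depth = np.full(gray_image.shape, np.nan, dtype=float)
    depth[valid] = a * np.log(gray_image[valid].astype(float)) + b
    return depth
```
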
(This article belongs to the Special Issue Application of Remote Sensing for Coastal Monitoring)
