Search Results (25)

Search Parameters:
Keywords = geometric calibration of multi-imaging systems

12 pages, 3214 KiB  
Article
Singular Value Decomposition (SVD) Method for LiDAR and Camera Sensor Fusion and Pattern Matching Algorithm
by Kaiqiao Tian, Meiqi Song, Ka C. Cheok, Micho Radovnikovich, Kazuyuki Kobayashi and Changqing Cai
Sensors 2025, 25(13), 3876; https://doi.org/10.3390/s25133876 - 21 Jun 2025
Viewed by 725
Abstract
LiDAR and camera sensors are widely utilized in autonomous vehicles (AVs) and robotics due to their complementary sensing capabilities—LiDAR provides precise depth information, while cameras capture rich visual context. However, effective multi-sensor fusion remains challenging due to discrepancies in resolution, data format, and viewpoint. In this paper, we propose a robust pattern matching algorithm that leverages singular value decomposition (SVD) and gradient descent (GD) to align geometric features—such as object contours and convex hulls—across LiDAR and camera modalities. Unlike traditional calibration methods that require manual targets, our approach is targetless, extracting matched patterns from projected LiDAR point clouds and 2D image segments. The algorithm computes the optimal transformation matrix between sensors, correcting misalignments in rotation, translation, and scale. Experimental results on a vehicle-mounted sensing platform demonstrate an alignment accuracy improvement of up to 85%, with the final projection error reduced to less than 1 pixel. This pattern-based SVD-GD framework thus offers a practical way to maintain reliable cross-sensor alignment under long-term calibration drift, enabling real-time perception systems in autonomous driving applications to operate robustly without recalibration.
(This article belongs to the Special Issue Recent Advances in LiDAR Sensor)
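
The core computation in such targetless alignment is the closed-form SVD solution for the best similarity transform between two matched point sets. The sketch below shows that standard building block (the Umeyama/Kabsch solution) on synthetic 2D points; it illustrates the SVD step only, not the authors' full SVD-GD pipeline, and all point data are invented.

```python
import numpy as np

def svd_align(src, dst):
    """Estimate scale s, rotation R, translation t with dst ~= s * R @ src + t
    for matched N x 2 point sets (closed-form Umeyama/Kabsch solution)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d                        # centered point sets
    H = A.T @ B / len(src)                               # 2 x 2 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))               # guard against reflections
    D = np.diag([1.0, d])
    R = Vt.T @ D @ U.T                                   # optimal rotation
    s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()   # optimal scale
    t = mu_d - s * R @ mu_s                              # optimal translation
    return s, R, t

# Toy check: recover a known similarity transform between matched contours.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 2))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
dst = 1.2 * src @ R_true.T + np.array([5.0, -2.0])
s, R, t = svd_align(src, dst)
print(np.allclose(s * src @ R.T + t, dst))  # True
```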

23 pages, 20311 KiB  
Article
Bridge Geometric Shape Measurement Using LiDAR–Camera Fusion Mapping and Learning-Based Segmentation Method
by Shang Jiang, Yifan Yang, Siyang Gu, Jiahui Li and Yingyan Hou
Buildings 2025, 15(9), 1458; https://doi.org/10.3390/buildings15091458 - 25 Apr 2025
Cited by 2 | Viewed by 757
Abstract
The rapid measurement of three-dimensional bridge geometric shapes is crucial for assessing construction quality and in-service structural conditions. Existing geometric shape measurement methods predominantly rely on traditional surveying instruments, which suffer from low efficiency and are limited to sparse point sampling. This study proposes a novel framework that utilizes an airborne LiDAR–camera fusion system for data acquisition, reconstructs high-precision 3D bridge models through real-time mapping, and automatically extracts structural geometric shapes using deep learning. The main contributions include the following: (1) A synchronized LiDAR–camera fusion system integrated with an unmanned aerial vehicle (UAV) and a microprocessor was developed, enabling the flexible and large-scale acquisition of bridge images and point clouds; (2) A multi-sensor fusion mapping method coupling visual-inertial odometry (VIO) and LiDAR-inertial odometry (LIO) was implemented to robustly construct 3D bridge point clouds in real time; and (3) An instance segmentation network-based approach was proposed to detect key structural components in images, with detected geometric shapes projected from image coordinates to 3D space using LiDAR–camera calibration parameters, addressing challenges in automated large-scale point cloud analysis. The proposed method was validated through geometric shape measurements on a concrete arch bridge. The results demonstrate that, compared to the oblique photogrammetry method, the proposed approach reduces errors by 77.13%, while its detection time is 4.18% of that required by a stationary laser scanner and 0.29% of that needed for oblique photogrammetry.
(This article belongs to the Special Issue Urban Infrastructure and Resilient, Sustainable Buildings)
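
The projection step in contribution (3) is the standard pinhole mapping from LiDAR coordinates to pixels. A minimal sketch follows, where the intrinsic matrix K and the LiDAR-to-camera extrinsics R, t are placeholder values, not the paper's calibration:

```python
import numpy as np

# Assumed calibration (placeholders): intrinsics K, LiDAR-to-camera extrinsics R, t.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # rotation, LiDAR frame -> camera frame
t = np.array([0.1, 0.0, 0.2])          # translation in metres

def lidar_to_pixel(points_lidar):
    """Project N x 3 LiDAR points to pixel coordinates with a pinhole model."""
    pts_cam = points_lidar @ R.T + t              # transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]          # keep points in front of the camera
    uvw = pts_cam @ K.T                           # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]               # perspective divide -> (u, v)

print(lidar_to_pixel(np.array([[2.0, 0.5, 10.0], [1.0, -0.3, 5.0]])))
```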

18 pages, 2485 KiB  
Article
The Application of Supervised Machine Learning Algorithms for Image Alignment in Multi-Channel Imaging Systems
by Kyrylo Romanenko, Yevgen Oberemok, Ivan Syniavskyi, Natalia Bezugla, Pawel Komada and Mykhailo Bezuglyi
Sensors 2025, 25(2), 544; https://doi.org/10.3390/s25020544 - 18 Jan 2025
Viewed by 918
Abstract
This study presents a method for aligning the geometric parameters of images in multi-channel imaging systems based on pre-processing methods, machine learning algorithms, and a calibration setup with an array of markers arranged at the nodes of an imaginary grid. In the proposed method, one channel of the system serves as a reference. Marker coordinates are determined from images of the calibration setup in each channel, and the displacements of the marker centers in each channel relative to those in the reference channel are then computed. Correction models are obtained as multiple polynomial regression models based on these displacements. These correction models align the geometric parameters of the images in the system channels before they are used in further calculations. The models are derived once, providing a geometric calibration of the imaging system. The developed method is applied to align the images in the channels of a module of a multispectral imaging polarimeter. As a result, the standard image alignment error in the polarimeter channels is reduced from 4.8 to 0.5 pixels.
(This article belongs to the Special Issue Application and Technology Trends in Optoelectronic Sensors)
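
As an illustration of the correction-model idea, the sketch below fits a bivariate polynomial regression mapping one channel's marker centers onto the reference channel and then applies it. The grid, polynomial degree, and synthetic displacement are assumptions for the toy example, not values from the study.

```python
import numpy as np

def design(xy, degree=2):
    """Bivariate monomial design matrix for polynomial regression."""
    x, y = xy[:, 0], xy[:, 1]
    return np.stack([x**i * y**j for i in range(degree + 1)
                     for j in range(degree + 1 - i)], axis=1)

def fit_correction(xy_channel, xy_reference, degree=2):
    """Least-squares polynomial model mapping a channel's marker centers
    onto the reference channel's marker centers."""
    coef, *_ = np.linalg.lstsq(design(xy_channel, degree), xy_reference, rcond=None)
    return coef

# Toy marker grid and a synthetic quadratic displacement for one channel.
gx, gy = np.meshgrid(np.arange(10.0), np.arange(8.0))
chan = np.stack([gx.ravel(), gy.ravel()], axis=1)        # channel marker centers
ref = chan + np.stack([0.02 * chan[:, 0] * chan[:, 1],   # displaced reference
                       0.01 * chan[:, 1] ** 2], axis=1)
coef = fit_correction(chan, ref)
print(np.abs(design(chan) @ coef - ref).max())  # ~1e-12: exact for a quadratic warp
```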

36 pages, 28452 KiB  
Article
Assessing Geometric and Radiometric Accuracy of DJI P4 MS Imagery Processed with Agisoft Metashape for Shrubland Mapping
by Tiago van der Worp da Silva, Luísa Gomes Pereira and Bruna R. F. Oliveira
Remote Sens. 2024, 16(24), 4633; https://doi.org/10.3390/rs16244633 - 11 Dec 2024
Cited by 2 | Viewed by 1712
Abstract
The rise of inexpensive Unmanned Aerial Systems (UAS) and accessible processing software offers several advantages for forest ecosystem monitoring and management. As such tools become easier to use, workflows are simplified, potentially at the expense of the quality of the generated data. This study offers insights into the precision and reliability of the DJI Phantom 4 Multispectral (P4MS) UAS for mapping shrublands, using Agisoft Metashape (AM) for image processing. Geometric accuracy was evaluated using ground control points (GCPs) and different configurations. The best configuration was then used to produce orthomosaics. Subsequently, the orthomosaics were transformed into reflectance orthomosaics using various radiometric correction methods. These methods were further assessed using reference panels. The method producing the most accurate reflectance values was then chosen to create the final reflectance and Normalised Difference Vegetation Index (NDVI) maps. Radiometric accuracy was assessed through a multi-step process: precision was first measured by comparing reflectance orthomosaics and NDVI derived from images taken on consecutive days, and reliability was then evaluated by comparing the NDVI with NDVI from a reference camera, the MicaSense Altum AL0, produced with images acquired on the same days. The results demonstrate that the P4MS is both precise and reliable for shrubland mapping. Reflectance maps and NDVI generated in AM exhibit acceptable geometric and radiometric accuracy when geometric calibration is performed with at least one GCP and radiometric calibration utilises images of reflectance panels captured at flight height, without relying on incident light sensor (ILS) data.
(This article belongs to the Section Forest Remote Sensing)
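
For reference, NDVI is computed per pixel from the near-infrared and red reflectance bands as (NIR − Red)/(NIR + Red). A minimal sketch with placeholder reflectance values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalised Difference Vegetation Index from reflectance bands."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)   # eps avoids division by zero

# Toy 2 x 2 reflectance patches (placeholder values in [0, 1]).
nir = np.array([[0.60, 0.50], [0.40, 0.30]])
red = np.array([[0.10, 0.20], [0.20, 0.25]])
print(ndvi(nir, red))
```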

22 pages, 20682 KiB  
Article
Three-Dimensional Phenotyping Pipeline of Potted Plants Based on Neural Radiation Fields and Path Segmentation
by Xinghui Zhu, Zhongrui Huang and Bin Li
Plants 2024, 13(23), 3368; https://doi.org/10.3390/plants13233368 - 29 Nov 2024
Cited by 4 | Viewed by 1185
Abstract
Precise acquisition of potted plant traits has great theoretical significance and practical value for variety selection and for guiding scientific cultivation practices. Although phenotypic analysis using two-dimensional (2D) digital images is simple and efficient, leaf occlusion reduces the available phenotype information. To address the challenge of acquiring sufficient non-destructive information from living potted plants, we propose a three-dimensional (3D) phenotyping pipeline that combines neural radiance field reconstruction with path analysis. An indoor collection system was constructed to obtain multi-view image sequences of potted plants. The structure from motion and neural radiance fields (SFM-NeRF) algorithm was then utilized to reconstruct 3D point clouds, which were subsequently denoised and calibrated. Geometric-feature-based path analysis was employed to separate stems from leaves, and density clustering methods were applied to segment the canopy leaves. Phenotypic parameters of potted plant organs were extracted, including height, stem thickness, leaf length, leaf width, and leaf area, and they were manually measured to obtain the true values. The coefficient of determination (R²) between the model-derived traits and the true values ranged from 0.89 to 0.98, indicating a strong correlation and good reconstruction quality. Additionally, 22 potted plants were selected for exploratory experiments; the results indicated that the method can reconstruct plants of various varieties, and the experiments identified key conditions essential for successful reconstruction. In summary, this study developed a low-cost and robust 3D phenotyping pipeline for the phenotype analysis of potted plants, which not only meets daily production requirements but also advances the field of phenotype calculation for potted plants.
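
The leaf-segmentation stage is density clustering on the reconstructed point cloud. Below is a minimal sketch using scikit-learn's DBSCAN on a synthetic two-leaf cloud; eps and min_samples are scene-dependent assumptions, not the pipeline's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def segment_leaves(points, eps=0.02, min_samples=20):
    """Cluster canopy points into individual leaves by spatial density.
    eps is in the cloud's metric units; both values are scene-dependent
    assumptions, not settings from the paper."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return labels  # -1 marks noise; other integers index leaf clusters

# Toy cloud: two well-separated "leaves".
rng = np.random.default_rng(1)
leaf_a = rng.normal([0.0, 0.0, 0.5], 0.005, size=(200, 3))
leaf_b = rng.normal([0.2, 0.1, 0.6], 0.005, size=(200, 3))
print(np.unique(segment_leaves(np.vstack([leaf_a, leaf_b]))))  # e.g. [0 1]
```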

32 pages, 100733 KiB  
Article
On-Orbit Geometric Calibration and Accuracy Validation of the Jilin1-KF01B Wide-Field Camera
by Hongyu Wu, Guanzhou Chen, Yang Bai, Ying Peng, Qianqian Ba, Shuai Huang, Xing Zhong, Haijiang Sun, Lei Zhang and Fuyu Feng
Remote Sens. 2024, 16(20), 3893; https://doi.org/10.3390/rs16203893 - 19 Oct 2024
Cited by 2 | Viewed by 1819
Abstract
On-orbit geometric calibration is key to improving the geometric positioning accuracy of high-resolution optical remote sensing satellite data. Grouped calibration with geometric consistency (GCGC) is proposed in this paper for the Jilin1-KF01B satellite, the world's first satellite capable of providing data with a 150-km swath width at 0.5-m resolution. To ensure the geometric accuracy of high-resolution image data, the GCGC method conducts grouped calibration of the time delay integration charge-coupled device (TDI CCD). Each group independently calibrates the exterior orientation elements to address multi-time synchronization issues between imaging processing systems (IPS). An additional inter-chip geometric positioning consistency constraint is used to enhance geometric positioning consistency in the overlapping areas between adjacent CCDs. By combining image simulation techniques associated with the spectral bands, the calibrated panchromatic data are used to generate a simulated multispectral reference-band image as control data, thereby enhancing the geometric alignment consistency between panchromatic and multispectral data. Experimental results show that, after calibration, the average seamless stitching accuracy of the basic products is better than 0.6 pixels, the positioning accuracy without ground control points (GCPs) is better than 20 m, the band-to-band registration accuracy is better than 0.3 pixels, the average geometric alignment consistency between panchromatic and multispectral data is better than 0.25 multispectral pixels, the geometric accuracy with GCPs is better than 2.1 m, and the geometric alignment consistency of multi-temporal data is better than 2 m. The GCGC method significantly improves the quality of image data from the Jilin1-KF01B satellite and provides important references and practical experience for the geometric calibration of other large-swath high-resolution remote sensing satellites.
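
Band-to-band registration of the kind reported here can be quantified with sub-pixel phase correlation between bands. The sketch below uses scikit-image on synthetic imagery as an illustrative consistency check; it is not the GCGC method itself.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

# Toy bands: the "multispectral" band is the panchromatic band shifted by
# a known amount (placeholder imagery, not satellite data).
rng = np.random.default_rng(2)
pan = rng.random((256, 256))
ms = np.roll(pan, shift=(2, -3), axis=(0, 1))

# upsample_factor=100 gives 1/100-pixel resolution on the estimated shift.
shift, error, _ = phase_cross_correlation(pan, ms, upsample_factor=100)
print(shift)  # ~[-2. 3.]: (row, col) shift registering ms back onto pan
```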

21 pages, 12097 KiB  
Article
Infrared Camera Array System and Self-Calibration Method for Enhanced Dim Target Perception
by Yaning Zhang, Tianhao Wu, Jungang Yang and Wei An
Remote Sens. 2024, 16(16), 3075; https://doi.org/10.3390/rs16163075 - 21 Aug 2024
Viewed by 1671
Abstract
Camera arrays can enhance the signal-to-noise ratio (SNR) between dim targets and backgrounds through multi-view synthesis, which is crucial for the detection of dim targets. To this end, we design and develop an infrared camera array system with a large baseline. The multi-view synthesis of camera arrays relies heavily on the calibration accuracy of the relative poses of the sub-cameras. However, the sub-cameras within a camera array lack strict geometric constraints, so most current calibration methods still treat the camera array as multiple pinhole cameras. Moreover, when detecting distant targets, the camera array usually needs to adjust the focal length to maintain a larger depth of field (DoF), so that the distant targets are located on the camera's focal plane. This means that the calibration scene should be selected within this DoF range to obtain clear images. Nevertheless, the small parallax between the distant sub-aperture views limits the calibration. To address these issues, we propose a calibration model for camera arrays in distant scenes. In this model, we first extend the parallax by employing dual-array frames (i.e., recording a scene at two spatial locations). Secondly, we investigate the linear constraints between the dual-array frames to maintain the minimum degrees of freedom of the model. We develop a real-world light field dataset called NUDT-Dual-Array using an infrared camera array to evaluate our method. Experimental results on this self-developed dataset demonstrate the effectiveness of our method: using the calibrated model, we improve the SNR of distant dim targets, which ultimately enhances their detection and perception.
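
For contrast, the generic pinhole treatment that the paper improves upon estimates each sub-camera pair's relative pose from matched points via the essential matrix. A sketch with OpenCV on synthesized correspondences, where all geometry is placeholder:

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(X, R, t):
    """Pinhole projection of N x 3 points with extrinsics R, t."""
    x = (X @ R.T + t) @ K.T
    return x[:, :2] / x[:, 2:3]

# Synthetic scene: random 3D points at varying depth seen by two sub-cameras.
rng = np.random.default_rng(3)
X = np.column_stack([rng.uniform(-2, 2, 60), rng.uniform(-1.5, 1.5, 60),
                     rng.uniform(8, 20, 60)])
t_true = np.array([1.0, 0.0, 0.0])               # 1 m baseline along x
pts1 = project(X, np.eye(3), np.zeros(3))
pts2 = project(X, np.eye(3), -t_true)            # second sub-camera, shifted

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print(t.ravel())  # ~[-1, 0, 0]: baseline direction (absolute scale is unobservable)
```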

20 pages, 3088 KiB  
Article
Position and Orientation System Error Analysis and Motion Compensation Method Based on Acceleration Information for Circular Synthetic Aperture Radar
by Zhenhua Li, Dawei Wang, Fubo Zhang, Yi Xie, Hang Zhu, Wenjie Li, Yihao Xu and Longyong Chen
Remote Sens. 2024, 16(4), 623; https://doi.org/10.3390/rs16040623 - 7 Feb 2024
Cited by 1 | Viewed by 1619
Abstract
Circular synthetic aperture radar (CSAR) possesses the capability of multi-angle observation, breaking through the geometric observation constraints of traditional strip SAR and holding the potential for three-dimensional imaging. Its sub-wavelength level of planar resolution, resulting from a long synthetic aperture, makes CSAR highly valuable in the field of high-precision mapping. However, the motion geometry of CSAR is more intricate than that of traditional strip SAR, demanding high precision from navigation systems, and the accumulation of errors over the long synthetic aperture time cannot be overlooked. CSAR exhibits significant coupling between the range and azimuth directions, making traditional motion compensation methods based on linear SAR unsuitable for direct application to CSAR. The dynamic nature of flight, with its continuous changes in attitude, introduces a significant deformation error between the non-rigidly connected Inertial Measurement Unit (IMU) and the Global Positioning System (GPS). This deformation error makes it difficult to accurately obtain radar position information, resulting in imaging defocus. The research in this article uncovers a correlation between the deformation error and radial acceleration. Leveraging this insight, we propose utilizing radial acceleration to estimate residual motion errors. This paper analyzes Position and Orientation System (POS) errors and presents a novel high-resolution CSAR motion compensation method based on airborne platform acceleration information. Once the system deformation parameters are calibrated using point targets, the deformation error can be directly calculated and compensated based on the acceleration information, ultimately yielding a high-resolution image. The effectiveness of the method is verified with airborne flight test data: it compensates for the deformation error and effectively improves the peak sidelobe ratio and integral sidelobe ratio of the target, thus improving image quality. The introduction of acceleration information provides new means and methods for high-resolution CSAR imaging.
(This article belongs to the Special Issue Advances in Synthetic Aperture Radar Data Processing and Application)
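
The central step of acceleration-based compensation is turning an acceleration record into a position correction by double integration. The sketch below does this generically with SciPy on a synthetic radial-acceleration signal; the de-trending is a crude stand-in for the point-target calibration described above, and every value is an assumption.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Synthetic radial acceleration record (placeholder for POS/IMU output).
fs = 200.0                                   # sample rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
acc = 0.05 * np.sin(2 * np.pi * 0.5 * t)     # m/s^2

# Double integration: acceleration -> velocity -> radial displacement.
vel = cumulative_trapezoid(acc, t, initial=0.0)
pos = cumulative_trapezoid(vel, t, initial=0.0)

# Remove the linear drift, standing in for calibration against point targets.
pos -= np.polyval(np.polyfit(t, pos, 1), t)
print(pos.max() - pos.min())                 # peak-to-peak residual motion, metres
```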

31 pages, 11765 KiB  
Article
Operational Aspects of Landsat 8 and 9 Geometry
by Michael J. Choate, Rajagopalan Rengarajan, Md Nahid Hasan, Alexander Denevan and Kathryn Ruslander
Remote Sens. 2024, 16(1), 133; https://doi.org/10.3390/rs16010133 - 28 Dec 2023
Cited by 4 | Viewed by 1910
Abstract
Landsat 9 (L9) was launched on 27 September 2021. The spacecraft carries two instruments, the Operational Land Imager-2 (OLI-2) and Thermal Infrared Sensor-2 (TIRS-2), that allow for a continuation of the Landsat program and its mission to acquire multi-spectral observations of the globe at a moderate scale. Following a period of commissioning, during which the spacecraft and instruments were initialized, set up for operations, and initially calibrated, the mission moved to an operational mode. This operational mode involved the same cadence and methods used for the Landsat 8 (L8) spacecraft and its two instruments, the Operational Land Imager-1 (OLI-1) and Thermal Infrared Sensor-1 (TIRS-1), with respect to calibration, characterization, and validation. This paper discusses the geometric operational aspects of the L9 instruments during the first year of the mission post-commissioning, and compares them with the same geometric activities performed for L8 during the same time frame. During this time, the optical axes of the two sensors, OLI-1 and OLI-2, were adjusted to stay aligned with their spacecraft's Attitude Control System (ACS), and the TIRS-1 and TIRS-2 instruments were adjusted to stay aligned with the OLI-1 and OLI-2 instruments, respectively. The comparisons shown in this paper demonstrate that the instruments aboard L8 and L9 exhibited very similar geometric quality while fully meeting the expected requirements. This paper also describes the geometric differences in the L9 imagery that was made available to the public prior to the reprocessing campaign performed using the new calibration updates to the sensor and to the ACS and TIRS-to-OLI alignment parameters; this reprocessing campaign of L9 products covered data acquired from the launch of the spacecraft up to early 2023.

18 pages, 5723 KiB  
Article
Improved Calibration of Eye-in-Hand Robotic Vision System Based on Binocular Sensor
by Binchao Yu, Wei Liu and Yi Yue
Sensors 2023, 23(20), 8604; https://doi.org/10.3390/s23208604 - 20 Oct 2023
Cited by 2 | Viewed by 1835
Abstract
Eye-in-hand robotic binocular sensor systems are indispensable equipment in the modern manufacturing industry. However, intrinsic deficiencies of the binocular sensor, such as the circle of confusion and observed error, tend to degrade the accuracy of the calibration matrix between the binocular sensor and the robot end when it is calibrated by the traditional method. To address this, an improved calibration method for the eye-in-hand robotic vision system based on the binocular sensor is proposed. First, to improve the accuracy of the data used for solving the calibration matrix, a circle of confusion rectification method is proposed, which rectifies pixel positions in the images so that the detected geometric features are closer to the real ones. Subsequently, a transformation error correction method with the strong geometric constraint of a standard multi-target reference calibrator is developed, which introduces the observed error into the calibration matrix updating model. Finally, the effectiveness of the proposed method is validated by a series of experiments. The results show that the distance error is reduced from 0.192 mm to 0.080 mm compared with the traditional calibration method. Moreover, the measurement accuracy of local reference points with calibration results updated in the field is better than 0.056 mm.
(This article belongs to the Section Sensors and Robotics)
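
Eye-in-hand calibration is classically posed as the AX = XB problem, for which OpenCV provides cv2.calibrateHandEye. The sketch below synthesizes consistent robot and camera poses and recovers the hand-eye transform; it shows the generic solver on invented poses, not the paper's rectification or error-correction steps.

```python
import numpy as np
import cv2

def rt_to_T(R, t):
    """Pack a rotation and translation into a 4 x 4 homogeneous transform."""
    T = np.eye(4); T[:3, :3] = R; T[:3, 3] = np.asarray(t).ravel(); return T

# Invented ground truth: the hand-eye transform and the target's pose.
T_cam2gripper = rt_to_T(cv2.Rodrigues(np.array([0.1, -0.2, 0.3]))[0],
                        [0.05, 0.02, 0.10])
T_target2base = rt_to_T(cv2.Rodrigues(np.array([0.0, 0.4, -0.1]))[0],
                        [1.0, 0.2, 0.0])

rng = np.random.default_rng(4)
R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(6):                 # six stations with distinct rotation axes
    T_g2b = rt_to_T(cv2.Rodrigues(rng.uniform(-1, 1, 3))[0],
                    rng.uniform(-0.5, 0.5, 3))
    # What the camera would observe at this station, given the true hand-eye:
    T_t2c = np.linalg.inv(T_cam2gripper) @ np.linalg.inv(T_g2b) @ T_target2base
    R_g2b.append(T_g2b[:3, :3]); t_g2b.append(T_g2b[:3, 3].reshape(3, 1))
    R_t2c.append(T_t2c[:3, :3]); t_t2c.append(T_t2c[:3, 3].reshape(3, 1))

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c)
print(np.allclose(R_est, T_cam2gripper[:3, :3], atol=1e-6), t_est.ravel())
```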

18 pages, 7775 KiB  
Article
Research on A Special Hyper-Pixel for SAR Radiometric Monitoring
by Songtao Shangguan, Xiaolan Qiu and Kun Fu
Remote Sens. 2023, 15(8), 2175; https://doi.org/10.3390/rs15082175 - 20 Apr 2023
Cited by 6 | Viewed by 1595
Abstract
The objects presented in synthetic-aperture radar (SAR) images are the products of the joint actions of ground objects and SAR sensors in specific geospatial contexts. With the accumulation of massive time-domain SAR data, scholars have the opportunity to better understand ground-object targets and sensor systems, providing useful feedback for SAR-data processing. Aiming at normalized and low-cost SAR radiometric monitoring, this paper proposes a new hyper-pixel concept for handling multi-pixel ensembles of semantic ground targets. The special hyper-pixel in this study refers to low-rise single-family residential areas, and its radiation reference is highly stable in the time domain when the other dimensions are fixed. The stability of its radiometric data can reach the level of 0.3 dB (1σ), as verified with multi-temporal data from Sentinel-1. A comparison with tropical-rainforest data verified its availability for SAR radiometric monitoring, and possible radiation variations and radiation-intensity shifts in the Sentinel-1B SAR products were experimentally monitored. The effects of seasonal climate and of the relative observation geometry on the intensity of the hyper-pixel's radiation are also investigated. The proposed residential hyper-pixel is shown to be useful in multi-temporal-data observations for normalized radiometric monitoring and has the potential to be used for cross-calibration, among other applications.
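
The 0.3 dB (1σ) figure is a time-series stability statistic. As a minimal illustration, the sketch below converts a synthetic multi-temporal backscatter series for one hyper-pixel to decibels and reports its 1σ scatter; the data are invented, not Sentinel-1 measurements.

```python
import numpy as np

# Placeholder time series: mean backscatter (linear power) of one residential
# hyper-pixel across 40 acquisitions, with ~3% relative scatter.
rng = np.random.default_rng(5)
sigma0_linear = 0.25 * (1 + 0.03 * rng.normal(size=40))

sigma0_db = 10 * np.log10(sigma0_linear)   # convert to decibels
print(sigma0_db.std(ddof=1))               # 1-sigma stability in dB (~0.13 here)
```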

12 pages, 82623 KiB  
Communication
Multi-View Surgical Camera Calibration with None-Feature-Rich Video Frames: Toward 3D Surgery Playback
by Mizuki Obayashi, Shohei Mori, Hideo Saito, Hiroki Kajita and Yoshifumi Takatsume
Appl. Sci. 2023, 13(4), 2447; https://doi.org/10.3390/app13042447 - 14 Feb 2023
Cited by 5 | Viewed by 2424
Abstract
Mounting multi-view cameras within a surgical light is a practical choice, since some of the cameras can be expected to observe the surgery with few occlusions. Such multi-view videos must be reassembled for easy reference, and a typical way is to reconstruct the surgery in 3D. However, the geometrical relationship among the cameras changes because each camera moves independently every time the lighting is reconfigured (i.e., every time surgeons touch the surgical light). Moreover, feature matching between surgical images is potentially challenging because rich features are missing. To address this challenge, we propose a feature-matching strategy that enables robust calibration of the multi-view camera system by collecting a small number of matches over time while the cameras stay stationary. Our approach enables conversion from multi-view videos to a 3D video. However, surgical videos are long, so the cost of this conversion grows rapidly; we therefore implement a video player in which only selected frames are converted, minimizing the time and data required before playback. We demonstrate that sufficient calibration quality with real surgical videos can lead to a promising 3D mesh and a recently emerged 3D multi-layer representation. We reviewed comments from surgeons to discuss the differences between these 3D representations on an autostereoscopic display with respect to medical usage.
(This article belongs to the Special Issue Advanced Medical Signal Processing and Visualization)
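
A match-pooling strategy of this general shape can be sketched with OpenCV's ORB features: keep a few strongest matches per synchronized frame pair and accumulate them while the cameras stay stationary. The function below is an illustration under assumptions (feature type, thresholds, and frame lists are placeholders), not the authors' implementation.

```python
import numpy as np
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def pool_matches(frames_a, frames_b, keep_per_frame=10):
    """Accumulate a few reliable ORB matches per synchronized grayscale
    frame pair while the two cameras stay stationary."""
    pts_a, pts_b = [], []
    for img_a, img_b in zip(frames_a, frames_b):
        ka, da = orb.detectAndCompute(img_a, None)
        kb, db = orb.detectAndCompute(img_b, None)
        if da is None or db is None:
            continue                                   # feature-poor frame: skip
        matches = sorted(matcher.match(da, db), key=lambda m: m.distance)
        for m in matches[:keep_per_frame]:             # keep only the strongest few
            pts_a.append(ka[m.queryIdx].pt)
            pts_b.append(kb[m.trainIdx].pt)
    return np.array(pts_a), np.array(pts_b)

# The pooled correspondences can then feed a robust estimator such as
# cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC).
```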

12 pages, 5610 KiB  
Article
Underwater Optical-Sonar Image Fusion Systems
by Hong-Gi Kim, Jungmin Seo and Soo Mee Kim
Sensors 2022, 22(21), 8445; https://doi.org/10.3390/s22218445 - 3 Nov 2022
Cited by 11 | Viewed by 5745
Abstract
Unmanned underwater operations using remotely operated vehicles or unmanned surface vehicles have been increasing in recent times, improving human safety and work efficiency. Optical cameras and multi-beam sonars are generally used as imaging sensors in underwater environments, but the obtained underwater images are difficult to understand intuitively owing to noise and distortion. In this study, we developed an optical and sonar image fusion system that integrates the color and distance information from the two different images. The enhanced optical and sonar images were fused using calibrated transformation matrices, and the underwater image quality measure (UIQM) and underwater color image quality evaluation (UCIQE) were used as metrics to evaluate the performance of the proposed system. Compared with the original underwater image, image fusion increased the mean UIQM and UCIQE by 94% and 27%, respectively. The contrast-to-noise ratio increased sixfold after applying the median filter and gamma correction. The fused image in sonar image coordinates showed qualitatively good spatial agreement, with an average IoU of 75% between the optical and sonar pixels in the fused images. The optical-sonar fusion system will help visualize and understand underwater situations, with color and distance information, for unmanned operations.
(This article belongs to the Section Optical Sensors)
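
To make the fusion step concrete, the sketch below maps a toy optical object mask into sonar image coordinates using an assumed calibrated transformation (here a 3 × 3 homography standing in for the paper's matrices) and computes the IoU between the projected and detected masks.

```python
import numpy as np
import cv2

# Assumed calibrated transformation: a 3 x 3 homography mapping optical
# pixel coordinates into sonar image coordinates (placeholder values).
H = np.array([[0.90, 0.02, 15.0],
              [-0.01, 0.88, 40.0],
              [0.00, 0.00, 1.0]])

optical = np.zeros((480, 640), np.uint8)
cv2.circle(optical, (320, 240), 60, 255, -1)       # toy object mask (optical)
projected = cv2.warpPerspective(optical, H, (640, 480))

sonar = np.zeros_like(projected)
cv2.circle(sonar, (310, 250), 55, 255, -1)         # toy sonar detection mask

inter = np.logical_and(projected > 0, sonar > 0).sum()
union = np.logical_or(projected > 0, sonar > 0).sum()
print(inter / union)                               # IoU of optical vs. sonar pixels
```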

12 pages, 1777 KiB  
Article
Fast Positioning Model and Systematic Error Calibration of Chang’E-3 Obstacle Avoidance Lidar for Soft Landing
by Donghong Wang, Xingfeng Chen, Jun Liu, Zongqi Liu, Fengjie Zheng, Limin Zhao, Jiaguo Li and Xiaofei Mi
Sensors 2022, 22(19), 7366; https://doi.org/10.3390/s22197366 - 28 Sep 2022
Cited by 2 | Viewed by 1879
Abstract
Chang’E-3 is China’s first soft landing mission on an extraterrestrial celestial body. The laser Three-Dimensional Imaging (TDI) sensor is one of the key payloads of the Chang’E-3 lander; its main task is to provide accurate 3D lunar surface information of the target landing area in real time for the selection of safe landing sites. Here, a simplified positioning model was constructed to meet the accuracy and processing-timeline requirements of the Chang’E-3 TDI sensor. Based on an analysis of the influence of the TDI intrinsic parameters, a permanent outdoor calibration field based on flat plates was specially designed and constructed, and a robust solution of the geometric calibration adjustment was realized by introducing virtual observation equations for the unknowns. The geometric calibration and the verification of its absolute and relative positioning accuracy were carried out using multi-measurement and multi-angle imaging data. The results show that errors in the TDI intrinsic parameters would produce false obstacles with a maximum height of about 1.4 m on flat ground, which could cause the obstacle avoidance system of Chang’E-3 to fail to find a suitable landing area or to select a falsely flat area. Furthermore, the intrinsic parameters of the TDI show good stability, and the accuracy of the reconstructed three-dimensional surface can reach about 4 cm after error calibration, providing a reliable terrain guarantee for the autonomous obstacle avoidance of the Chang’E-3 lander.
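
Virtual observation equations amount to augmenting the least-squares adjustment with weighted pseudo-observations that tie the unknowns to prior values, which keeps the normal equations solvable when the imaging geometry is weak. A generic sketch, with the design matrix, weights, and priors all invented:

```python
import numpy as np

def solve_with_virtual_obs(A, l, x0, sigma_obs=1.0, sigma_prior=10.0):
    """Least-squares adjustment A x ~= l augmented with virtual observations
    x ~= x0, each block weighted by its assumed standard deviation. The
    pseudo-observations keep the normal equations well-conditioned."""
    n = A.shape[1]
    A_aug = np.vstack([A / sigma_obs, np.eye(n) / sigma_prior])
    l_aug = np.concatenate([l / sigma_obs, x0 / sigma_prior])
    x, *_ = np.linalg.lstsq(A_aug, l_aug, rcond=None)
    return x

# Toy ill-conditioned design matrix with nearly dependent columns.
A = np.array([[1.0, 1.0], [1.0, 1.0001], [2.0, 2.0]])
l = np.array([2.0, 2.0001, 4.0])
print(solve_with_virtual_obs(A, l, x0=np.array([1.0, 1.0])))  # stays near the prior
```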

15 pages, 3365 KiB  
Article
An Effective Camera-to-Lidar Spatiotemporal Calibration Based on a Simple Calibration Target
by Lazaros Grammatikopoulos, Anastasios Papanagnou, Antonios Venianakis, Ilias Kalisperakis and Christos Stentoumis
Sensors 2022, 22(15), 5576; https://doi.org/10.3390/s22155576 - 26 Jul 2022
Cited by 18 | Viewed by 4093
Abstract
In this contribution, we present a simple and intuitive approach for estimating the exterior (geometrical) calibration of a Lidar instrument with respect to a camera, as well as their synchronization shift (temporal calibration) during data acquisition. For the geometrical calibration, the 3D rigid transformation of the camera system with respect to the Lidar frame was estimated from established 2D-to-3D point correspondences. The 2D points were automatically extracted from images by exploiting an AprilTag fiducial marker, while the corresponding Lidar points were detected by estimating the center of a custom-made retroreflective target. Both the AprilTag and the Lidar reflective target were attached to a planar board (the calibration object) in an easy-to-implement set-up, which yielded high accuracy in the determination of the center of the calibration target. After the geometrical calibration procedure, the temporal calibration was carried out by matching the position of the AprilTag to the corresponding Lidar target (after projection onto the image frame) during the recording of a steadily moving calibration target. Our calibration framework is provided as open-source software implemented on the ROS platform. We applied our method to the calibration of a four-camera mobile mapping system (MMS) with respect to an integrated Velodyne Lidar sensor and evaluated it against a state-of-the-art chessboard-based method. Although our method is a single-camera-to-Lidar calibration approach, the consecutive calibration of all four cameras with respect to the Lidar sensor yielded highly accurate results, which were exploited in a multi-camera texturing scheme of city point clouds.
(This article belongs to the Section Radar Sensors)
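
Once target centers are paired across the two sensors, the geometric part of such a pipeline reduces to a perspective-n-point problem, which OpenCV's solvePnP handles directly. The sketch below synthesizes consistent 2D-3D correspondences and recovers the pose; the intrinsics, target positions, and true pose are placeholders.

```python
import numpy as np
import cv2

# Placeholder 3D target centers in the Lidar frame and assumed intrinsics.
object_pts = np.array([[1.0, 0.5, 4.0], [1.5, -0.4, 4.2], [0.4, -0.6, 3.8],
                       [-0.5, 0.3, 4.5], [-1.0, -0.2, 4.1], [0.2, 0.8, 4.3]])
K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                      # assume distortion already removed

# Synthesize consistent 2D detections (e.g., AprilTag centers) for the toy case.
rvec_true = np.array([0.05, -0.10, 0.02])
tvec_true = np.array([0.20, -0.10, 0.30])
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, dist)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
print(ok, rvec.ravel(), tvec.ravel())   # recovers the Lidar-to-camera pose
```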