
Special Issue "Camera Calibration and 3D Reconstruction"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: 31 December 2021.

Special Issue Editors

Dr. Alexey Pak
Guest Editor
Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, Fraunhoferstraße 1, 76131 Karlsruhe, Germany
Interests: optical metrology; computer vision; probabilistic models; deflectometry
Prof. Dr. Steffen Reichel
Guest Editor
Hochschule Pforzheim, Tiefenbronner Straße 65, 75175 Pforzheim, Germany
Interests: optical metrology; ray and wave optics; image processing
Dr. Jan Burke
Guest Editor
Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, Fraunhoferstraße 1, 76131 Karlsruhe, Germany
Interests: optical metrology; image processing; machine learning

Special Issue Information

Dear Colleagues,

The importance of accurate image-based assessment of 3D objects and scenes is rapidly growing in the fields of computer vision (cf. AR/VR, autonomous driving, aerial surveillance, etc.) and optical metrology (photogrammetry, fringe projection, deflectometry, etc.). As the performance of digital sensors and optics approaches physical limits, uncertainties associated with models of imaging geometry, calibration workflows, data types, pattern-recognition algorithms, etc. directly affect numerous applications.

We are pleased to invite you to contribute manuscripts to this Special Issue. It addresses the metrological aspects of modeling, characterizing, and using digital cameras in the context of 3D measurements, as well as novel analytic (e.g., visualization) tools and techniques facilitating robust and reliable camera-based measurements. Both original research articles and reviews are welcome.

We look forward to receiving your contributions. Please do not hesitate to contact us if you have any comments or questions.

Dr. Alexey Pak
Prof. Dr. Steffen Reichel
Dr. Jan Burke
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • camera calibration
  • geometrical camera models
  • image-based 3D reconstruction
  • uncertainties in optical 3D measurements
  • shape-from-X techniques
  • high-precision camera-based measurements
  • non-conventional imaging systems for 3D measurements
  • computational imaging for 3D measurements

Published Papers (1 paper)


Research

Article
Virtual Namesake Point Multi-Source Point Cloud Data Fusion Based on FPFH Feature Difference
Sensors 2021, 21(16), 5441; https://doi.org/10.3390/s21165441 - 12 Aug 2021
Abstract
There are many sources of point cloud data, such as the point cloud model obtained after a bundle adjustment of aerial images, point clouds acquired by vehicle-borne light detection and ranging (LiDAR), point clouds acquired by terrestrial laser scanning, etc. Different sensors require different processing methods and have their own advantages and disadvantages in terms of accuracy, range, and point cloud magnitude. Point cloud fusion can combine the advantages of each source to generate a point cloud with higher accuracy. Following the classic Iterative Closest Point (ICP) algorithm, a virtual namesake point multi-source point cloud data fusion method based on Fast Point Feature Histograms (FPFH) feature difference is proposed. For multi-source point clouds with noise, differing sampling resolutions, and local distortion, it achieves a better registration effect and improves the accuracy of low-precision point clouds. To find the corresponding point pairs in the ICP algorithm, we use the FPFH feature difference, which incorporates surrounding neighborhood information and is robust to noise, to generate virtual namesake points that yield the corresponding point pairs for registration. Specifically, voxels are established and, according to the F2 distance between the FPFH of the target point cloud and that of the source point cloud, a convolutional neural network outputs a virtual corresponding point that is closer to the true, theoretical correspondence, achieving multi-source point cloud registration. Compared with the ICP strategy of finding corresponding points among existing points, this method is more reasonable and more accurate, and can accurately correct low-precision point clouds in detail. The experimental results show that the accuracy of our method is equivalent to that of the best competing algorithm on clean point clouds and on point clouds of different resolutions. When the point cloud contains noise and distortion, our method outperforms the other algorithms; for low-precision point clouds, it matches the target point cloud better in detail, with better stability and robustness. Full article
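The abstract builds on the classic point-to-point ICP loop, replacing its correspondence step (nearest existing target point) with virtual FPFH-derived points. As background, here is a minimal NumPy sketch of that classic baseline; the function names, the brute-force neighbor search, and the convergence tolerance are illustrative choices, not taken from the paper:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/SVD solution for the rigid transform mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def nearest_indices(src, tgt):
    """Brute-force nearest neighbor in tgt for each point of src."""
    d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(axis=-1)
    return np.argmin(d2, axis=1)

def icp(source, target, iters=50, tol=1e-8):
    """Classic point-to-point ICP. This is the correspondence step the
    paper replaces with virtual FPFH-derived namesake points."""
    src = source.copy()
    prev_err = np.inf
    for _ in range(iters):
        idx = nearest_indices(src, target)
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t                     # apply current estimate
        idx = nearest_indices(src, target)
        err = np.linalg.norm(src - target[idx], axis=1).mean()
        if abs(prev_err - err) < tol:           # converged
            break
        prev_err = err
    return src, err
```

Because correspondences are restricted to points that actually exist in the target cloud, plain ICP degrades under noise, resolution mismatch, and local distortion, which is the gap the virtual-namesake-point approach targets.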
(This article belongs to the Special Issue Camera Calibration and 3D Reconstruction)