Open Access Article

A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks

1. Center for Visualization and Virtual Environments, University of Kentucky, Lexington, KY 40506, USA
2. Interactive Visual Media (IVDIA) Lab, University of Dayton, Dayton, OH 45469, USA
3. Department of Computer Information Technology and Graphics, Purdue University Northwest, Hammond, IN 46323, USA
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper "The extension of extrinsic calibration for wide-baseline RGB-D camera network", published in the Proceedings of the 2014 IEEE 16th International Workshop on Multimedia Signal Processing (MMSP), Jakarta, Indonesia, 22–24 September 2014; pp. 1–6.
Sensors 2018, 18(1), 235; https://doi.org/10.3390/s18010235
Received: 10 November 2017 / Revised: 6 January 2018 / Accepted: 8 January 2018 / Published: 15 January 2018
(This article belongs to the Special Issue Imaging Depth Sensors—Sensors, Algorithms and Applications)
From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds.
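The rigid-transformation initialization the abstract mentions can be sketched with the classic Kabsch/Procrustes solution: given matched 3D points seen by two cameras (e.g., sphere-center detections of the calibration object across frames), solve for the rotation and translation aligning one view to the other. This is a minimal illustrative sketch, not the paper's implementation; the function name and synthetic data are assumptions.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate R, t such that dst ≈ R @ src + t for matched 3-D points
    (Kabsch algorithm), e.g. sphere-center correspondences between
    two RGB-D camera views."""
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: recover a known rotation about z and a translation.
rng = np.random.default_rng(0)
pts = rng.normal(size=(20, 3))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.5, -1.0, 2.0])
obs = pts @ R_true.T + t_true
R_est, t_est = rigid_transform(pts, obs)
```

In the paper's setting this closed-form estimate would serve only as the starting point; the polynomial/manifold mappings and the bundle-adjustment refinement described above go beyond a single rigid fit.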
Keywords: RGB-D camera; spherical object; camera network calibration; 3D reconstruction
MDPI and ACS Style

Su, P.-C.; Shen, J.; Xu, W.; Cheung, S.-C.S.; Luo, Y. A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks. Sensors 2018, 18, 235.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
