Article

Novel Bayesian Inference-Based Approach for the Uncertainty Characterization of Zhang’s Camera Calibration Method

by Ramón Gutiérrez-Moizant 1,*, María Jesús L. Boada 1, María Ramírez-Berasategui 1 and Abdulla Al-Kaff 2

1 Mechanical Engineering Department, Universidad Carlos III de Madrid, Avda. de la Universidad 30, 28911 Leganés, Spain
2 Systems Engineering and Automation, Universidad Carlos III de Madrid, Avda. de la Universidad 30, 28911 Leganés, Spain
* Author to whom correspondence should be addressed.
Sensors 2023, 23(18), 7903; https://doi.org/10.3390/s23187903
Submission received: 21 July 2023 / Revised: 12 September 2023 / Accepted: 13 September 2023 / Published: 15 September 2023
(This article belongs to the Special Issue Advances in Optical Sensing, Instrumentation and Systems)

Abstract

Camera calibration is necessary for many machine vision applications. The calibration methods are based on linear or non-linear optimization techniques that aim to find the best estimate of the camera parameters. One of the most commonly used methods in computer vision for the calibration of the intrinsic camera parameters and lens distortion (interior orientation) is Zhang's method. Additionally, the uncertainty of the camera parameters is normally estimated by assuming that their variability can be explained by the images of the different poses of a checkerboard. However, the degree of reliability of both the best parameter values and their associated uncertainties has not yet been verified. Inaccurate estimates of the intrinsic and extrinsic parameters during camera calibration may introduce additional biases in post-processing. This is why we propose a novel Bayesian inference-based approach that has allowed us to evaluate the degree of certainty of Zhang's camera calibration procedure. For this purpose, the a priori probability was assumed to be the one estimated by Zhang's method, and the intrinsic parameters were recalibrated by Bayesian inversion. The uncertainties of the intrinsic parameters were found to differ from those estimated with Zhang's method. However, the major source of inaccuracy is the procedure for calculating the extrinsic parameters. The procedure used in the novel Bayesian inference-based approach significantly improves the reliability of the predictions of the image points, as it optimizes the extrinsic parameters.

1. Introduction

Accurate knowledge of image projection parameters is essential for computer vision and photogrammetry applications. These techniques are the science of extracting information about the physical world from image data. Photogrammetry mainly involves the precise measurement of three-dimensional objects and terrain features from two-dimensional photos [1]. Topographical mapping, architecture, engineering, manufacturing, quality control, police investigation, defense, agriculture, geology, and the film and games industries are some of the areas where photogrammetry is used [2].
On the other hand, computer vision focuses on developing systems that can capture, process and analyze visual data to understand the world around them for scientific and technical exploration [3]. Computer vision is used in a wide range of industrial and scientific applications, such as robots and autonomous vehicles, control systems, traffic speed measurement, automatic inspection in manufacturing, medical image processing and human–computer interaction [4].
Usually, the information from the camera manufacturer is not accurate enough to be of use in either photogrammetry or computer vision applications [5]. To accurately reconstruct the camera's mapping from the 3D world to the 2D image, it is necessary to determine the relationship between the three-dimensional coordinates in the real world and the pixels of the captured image. This requires calibrating the camera. Camera calibration is the process of characterizing the internal properties or interior orientation of the camera (intrinsic parameters and lens distortion) and determining the position and orientation of the camera with respect to the captured scene (extrinsic parameters or exterior orientation) [1]. Once the parameters of the camera are known, the theoretical camera model can be used to extract metric information from 2D images [6].
The calibration of the camera has always been an important issue in both photogrammetry and computer vision applications. Although, from the point of view of photogrammetry, the problem of camera calibration is considered to be solved [7], the calibration methods used in computer vision have become very popular due to their ease of application [8]. However, Luhmann et al. [7] warn that little attention is paid to the accuracy of the calibration methods used in computer vision; the focus of computer vision researchers has instead centered on developing user-friendly, automated calibration procedures based on linear methods with simplified image models.
In the past, cameras were calibrated separately. Currently, object modeling using a set of images from different camera perspectives is a widespread practice [9,10]. Moreover, in situ calibration can also be performed by applying deep learning methods [11,12] or approaches based on vanishing points [13]. However, the vanishing point approach assumes that the camera is undistorted [12], and the internal parameters are considered fixed, i.e., without uncertainty [11].
Separate calibration or pre-calibration is still required for single camera applications such as special survey cameras or aerial cameras [7].
There are several methods used for single camera calibration in the computer vision community [6,9,14,15,16]. The aim of these methods is to find the best estimate of the camera parameters that minimizes the difference between the image points and the estimated points. However, according to [17], the 'best' answer is ambiguous because of the uncertainty involved. Indeed, according to metrology, a measurement is incomplete if it is not presented with its corresponding uncertainty [18]. In this sense, one of the challenges in modern metrology is to provide the mathematical tools to quantify the uncertainties of inverse and non-linear problems in scientific modeling [19]. In fact, this is a research hotspot where the solutions presented to date combine different mathematical algorithms [20]. Among them, the Bayesian method has proved its worth in inverse uncertainty inference [21,22,23]. According to King et al. [24], the Bayesian approach represents reality more accurately than the uncertainties obtained using the standard frequentist approach, where it is commonly assumed that the probability density function (PDF) of the uncertainties is Gaussian.
The most widely used method for camera calibration is the one proposed by Zhang [6], due to its ease of application, efficiency and robustness [25,26]. In this method, 2 to 5 images of different poses of a planar checkerboard are required to determine the interior orientation of the camera [27]. The camera and planar pattern can move freely, and the movement does not have to be known. However, it has been observed that the values of the interior orientation depend on the number of images captured [27], so it is recommended to use a large number of images for a better adjustment of these parameters. Additionally, previous studies [28,29,30] have shown that the accuracy of the calibration results is highly dependent on the poses used in the acquired images. According to [29], inexperienced users often do not take calibration images that enhance calibration accuracy. Therefore, Refs. [28,29,30] propose interactive methods that help to successfully perform camera calibration. In these methods, all target poses that lead to a reduction in the interior camera orientation uncertainty are preferred and the rest are ignored.
It is well known that the ground truth of the calibration parameters is unfortunately unknown in reality. However, it is usually assumed that it can be approximated from a large number of images or calibration experiments [31]. Typically, the accuracy of calibration results is measured by examining re-projection errors. High values may indicate large systematic errors or inaccurate calibration. Nevertheless, they cannot give complete information about the quality of the calibration [32]. Some camera calibration software packages, such as those implemented in Matlab [33] or OpenCV [34], also calculate the expanded uncertainty of both the interior and exterior camera parameters based on Zhang's method [6] for a 95% confidence level, assuming they follow a normal probability distribution. These uncertainties are calculated from the covariance matrix, which is computed from the Jacobian matrix of the calibration residuals and the mean square error [35].
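For illustration, this linearized covariance estimate can be sketched in a few lines. This is a minimal sketch under the stated assumptions (a residual vector and its Jacobian at the calibration optimum); the function and variable names are our own and not the toolbox code:

```python
import numpy as np

def parameter_uncertainty(jacobian, residuals, coverage=1.96):
    """Sketch of the linearized (frequentist) uncertainty estimate:
    covariance = MSE * inv(J^T J), expanded for ~95% coverage.

    jacobian:  (n_residuals x n_params) Jacobian of the calibration
               residuals at the optimum.
    residuals: residual vector at the optimum.
    """
    n_res, n_par = jacobian.shape
    mse = (residuals @ residuals) / (n_res - n_par)    # mean square error
    cov = mse * np.linalg.inv(jacobian.T @ jacobian)   # parameter covariance
    std = np.sqrt(np.diag(cov))                        # standard uncertainties
    return std, coverage * std                         # ~95% expanded uncertainty
```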
Therefore, to determine the uncertainty of the camera parameters using Zhang’s method [6], it is assumed that: (a) the uncertainties can be captured by the variability of the points used in the non-linear regression model, and (b) the uncertainties depend on the images considered in the calibration.
The results of camera calibration using the software mentioned above should be interpreted with caution for the following reasons:
  • The method for estimating parameter uncertainties assumes that the model is linear, when, in fact, it is highly non-linear.
  • Although there are studies dedicated to the analysis of the uncertainty of camera parameters [31,32,36,37], none of them have verified the hypothesis of normality of the calibrated parameters.
  • The accuracy of the parameters obtained with this software has not been verified with a suitable alternative method.
Therefore, the aim of this research work is to address the weaknesses outlined above, which are commonly accepted by the computer vision community. To this end, a novel Bayesian inference-based method is proposed to validate the intrinsic parameters and lens distortion coefficients (interior orientation) obtained with Zhang's calibration process [6]. Moreover, the proposed method allows us to update the information on the intrinsic parameters and lens distortion coefficients with the different sets of images considered. It also improves the accuracy and reliability of the calibration process by minimizing re-projection errors. This article is organized as follows. Section 2 provides a brief description of Zhang's camera calibration procedure. Section 3 describes the proposed Bayesian inversion calibration of the interior camera orientation parameters. In Section 4, an uncertainty propagation of the interior camera orientation parameters through the pinhole model is developed to evaluate the accuracy of the estimated image points. Finally, Section 5 summarizes the main results and key remarks.

2. Materials and Methods

2.1. Camera Calibration

The pinhole model is the most widely used camera model [38]; it relates world points to image points. In the original pinhole model [39], the camera is modeled by a set of intrinsic parameters (focal length, principal point, axis skew), and its position and orientation are expressed by the extrinsic or exterior parameters (rotation matrix and translation vector). However, subsequent works have improved the model by including lens distortion in the internal camera parameters (interior orientation in photogrammetry) [39,40,41,42]. The intrinsic parameters are responsible for transforming the 3D points of the camera reference system, or camera coordinates, into 2D points of the displayed image. On the other hand, the extrinsic parameters provide information about the position of the camera with respect to the captured scene and transform the world coordinates into the camera reference system. The relationship between the image points $[u, v]^T$ and the real-world point coordinates $[X, Y, Z]^T$ can be expressed as follows:
$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mathbf{A} \left[ \mathbf{R} \mid \mathbf{t} \right] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \tag{1} $$
where $s$ is a scaling factor, $(\mathbf{R}, \mathbf{t})$ are the extrinsic or exterior parameters, composed of the rotation matrix and the translation vector, respectively, and $\mathbf{A}$ is the intrinsic parameter matrix defined by:
$$ \mathbf{A} = \begin{bmatrix} f_x & \gamma & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}, \tag{2} $$
where $f_x$ and $f_y$ are the image axis scale factors, $c_x$ and $c_y$ are the principal point coordinates, and $\gamma$ is the skewness of the $u$ and $v$ image axes.
Transforming 3D world coordinates to camera coordinates is expressed as follows:
$$ \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = \mathbf{R} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + \mathbf{t}. \tag{3} $$
The next step is to perform the projection of the 3D camera points from the camera coordinates to the 2D pixel coordinates:
$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & \gamma & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \tag{4} $$
where $x = X_c / Z_c$ and $y = Y_c / Z_c$ are the camera's homogeneous coordinates.
The pinhole model is an approximation of the camera projection that expresses, with a simple mathematical formulation, the relationship between the object and image coordinates [16]. However, it is not sufficient for modeling real cameras. To accurately represent a real camera, lens distortion must be included in the pinhole model [39,40,41,42]. Brown [40] divided lens distortion into radial distortion and tangential distortion. According to Brown [40], the radial distortion can be modeled by three parameters: $k_1$, $k_2$ and $k_3$. The tangential distortion, in turn, is characterized by three coefficients: $p_1$, $p_2$ and $p_3$. In practice, only the first two terms of the tangential distortion are considered, as the remaining terms are usually negligible [42]. Thus, for a more comprehensive camera model, the distorted points can be estimated by the following empirical equations [37]:
$$ \begin{aligned} x' &= x \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right) + 2 p_1 x y + p_2 \left( r^2 + 2 x^2 \right) \\ y' &= y \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right) + p_1 \left( r^2 + 2 y^2 \right) + 2 p_2 x y, \end{aligned} \tag{5} $$
with $r = \sqrt{x^2 + y^2}$.
Substituting the normalized distorted points of Equation (5) into Equation (4), in place of the ideal $x$ and $y$, yields the coordinates of the 3D points projected onto the 2D image:
$$ \begin{aligned} u &= f_x \cdot x' + \gamma \cdot y' + c_x \\ v &= f_y \cdot y' + c_y. \end{aligned} \tag{6} $$
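To make the projection pipeline concrete, the following is a minimal NumPy sketch of Equations (3)–(6); the function and variable names are our own and not part of any calibration toolbox:

```python
import numpy as np

def project_points(world_pts, A, R, t, dist):
    """Project Nx3 world points to pixel coordinates using the pinhole
    model (Eqs. (3)-(4)) with radial/tangential distortion (Eqs. (5)-(6)).
    dist = (k1, k2, k3, p1, p2)."""
    k1, k2, k3, p1, p2 = dist
    cam = world_pts @ R.T + t                 # Eq. (3): world -> camera
    x = cam[:, 0] / cam[:, 2]                 # normalized coordinates
    y = cam[:, 1] / cam[:, 2]
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)   # Eq. (5)
    yd = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    fx, gamma, cx = A[0, 0], A[0, 1], A[0, 2]
    fy, cy = A[1, 1], A[1, 2]
    u = fx * xd + gamma * yd + cx             # Eq. (6)
    v = fy * yd + cy
    return np.column_stack([u, v])
```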

2.2. Plane-Based Camera Calibration

Zhang’s camera calibration method is one of the most commonly used. This calibration procedure is briefly described below [27]:
  • Take n images of the planar calibration pattern at different orientations by moving either the plane, the camera or both.
  • In each image, m feature points (corners) are detected. With these, the associated image homographies H are computed.
  • Using the homographies, the intrinsic parameters A are estimated by least-squares minimization.
  • The radial and tangential distortion coefficients are solved by the method of least squares.
  • With the interior orientation, the undistorted points and the world points, the extrinsic or exterior orientation parameters ( R , t ) are calculated.
  • Finally, all the estimated parameter values are refined by non-linear optimization (Levenberg–Marquardt algorithm), considering all the m corners of the n views:
    $$ \sum_{i=1}^{n} \sum_{j=1}^{m} \left\| \mathbf{m}_{i,j} - \breve{\mathbf{m}} \left( \mathbf{A}, k_1, k_2, k_3, p_1, p_2, \mathbf{R}_i, \mathbf{t}_i, \mathbf{M}_j \right) \right\|^2, \tag{7} $$
    where $\breve{\mathbf{m}}(\mathbf{A}, k_1, k_2, k_3, p_1, p_2, \mathbf{R}_i, \mathbf{t}_i, \mathbf{M}_j)$ is the estimated projection of the 2D image point $\mathbf{m}_{i,j}$, corresponding to the detected target points $\mathbf{M}_j = [X_j, Y_j, Z_j]^T$.
In this paper, we have used the Matlab camera calibration toolbox [33], which is based on Zhang's method. For the calibration, we downloaded 10 images of a 48-corner checkerboard, taken with a sports camera, from the GitHub repository [43]. As this method requires at least 2 views of the planar calibration target with different orientations, the 10 images were separated into pairs (5 sets). The calibration of the 5 sets was performed separately in sequence, i.e., set 1 = [Figure 1a,b], set 2 = [Figure 1c,d], ⋯, set 5 = [Figure 1i,j].
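As a point of reference, an analogous calibration can be run with the OpenCV implementation of Zhang's method. The sketch below is hedged: it assumes an 8×6 inner-corner layout (48 corners) and placeholder image filenames, neither of which is confirmed by the repository:

```python
import glob
import cv2
import numpy as np

pattern = (8, 6)  # assumed 8x6 inner-corner layout (48 corners)
# Planar target: world points on the Z = 0 plane, in checker-square units
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in sorted(glob.glob('calibration*.jpg')):  # placeholder filenames
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Zhang's method: returns the RMS re-projection error, the intrinsic
# matrix A, the distortion coefficients (k1, k2, p1, p2, k3) and the
# per-image extrinsics (rotation and translation vectors).
rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```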

3. Bayesian Inference-Based Method for Uncertainty Characterization

An inverse model based on Bayesian inference was built to validate the uncertainties of the interior camera orientation results of the Matlab toolbox. The Bayesian method allows the uncertainties of a mathematical model to be determined based on a prior assumption (belief or knowledge) about the probability distributions of the uncertain parameters involved in the model [44]. It is, therefore, possible to speak of a priori probability density functions (PDFs), based on the knowledge of the phenomenon being represented (in this case, the uncertainties of the interior camera orientation obtained with the Matlab toolbox), and of a posteriori or updated PDFs. In Bayesian inference, the prior knowledge and the experimental observations together determine the final distribution [45].
In the proposed Bayesian method of camera calibration, only the interior camera orientations are evaluated, since the extrinsic parameters or exterior camera orientations can be obtained from them [6]. Furthermore, since the complete camera model is non-linear, due to lens distortion, the PDFs of the extrinsic parameters obtained using the Matlab toolbox cannot be employed for either Bayesian inversion or uncertainty propagation, as they are not independent. They depend on the values of the intrinsic camera parameters and lens distortion [6].
Bayesian inference is based on the rules of Bayes’ theorem, which, for our analysis, reads as follows:
$$ \pi(\mathbf{X} \mid \mathbf{m}) = \frac{\mathcal{L}(\mathbf{m} \mid \mathbf{X}) \, \pi(\mathbf{X})}{Z}, \tag{8} $$
where $\pi(\mathbf{X} \mid \mathbf{m})$ is the posterior distribution of the parameters
$$ \mathbf{X} = \left\{ \mathbf{A}, k_1, k_2, k_3, p_1, p_2 \right\}, $$
which is conditional on the actual known values, in this case, the checkerboard 2D image points or reference points $\mathbf{m} = [u_{i,j}, v_{i,j}]$. This means that the probability distributions of the $\mathbf{X}$ parameters are updated using the experimental data $\mathbf{m}$. The interior camera orientation $\mathbf{X}$ comprises the intrinsic camera parameters $\mathbf{A}$ and the radial and tangential distortion coefficients $k_1, k_2, k_3, p_1, p_2$. $\mathcal{L}(\mathbf{m} \mid \mathbf{X})$ is the likelihood function between the theoretical or estimated image points $\mathcal{M}(\mathbf{X})$, i.e., $u, v$ from Equation (6), and the reference points $\mathbf{m}$. $\pi(\mathbf{X})$ is the a priori probability distribution, representing the belief or knowledge of the uncertain parameters, in our case, the interior camera orientation uncertainties (Gaussian PDFs) obtained with the Matlab camera calibration software. Finally, $Z$ is a normalizing constant representing the model evidence or marginal likelihood.
Any mathematical or computational model is an incomplete representation of reality due to its simplifications. Therefore, there is a discrepancy term $\epsilon$ associated with the model $\mathcal{M}(\mathbf{X})$ that accounts for its mismatch with the observed data $\mathbf{m}$ [46]. The relationship between the actual measurements and the image points estimated with the model is
$$ \mathbf{m} = \mathcal{M}(\mathbf{X}) + \epsilon. \tag{9} $$
In general, it is assumed that the discrepancy follows a Gaussian distribution $\epsilon \sim \mathcal{N}(0, \sigma^2)$. This discrepancy term represents the effects of both measurement error and model inaccuracy [47]. In many practical situations, it is difficult to know $\sigma$ perfectly. According to [46], if the residual variance is unknown, it can be estimated from the residuals between the estimated and experimental data. Therefore, the residual variance can be defined as
$$ \sigma^2 = \frac{1}{n \times m} \sum_{i=1}^{n} \sum_{j=1}^{m} \epsilon_{i,j}^2 = \frac{1}{n \times m} \sum_{i=1}^{n} \sum_{j=1}^{m} \left( \mathbf{m}_{i,j} - \mathcal{M}(\mathbf{X}) \right)^2, \tag{10} $$
where the variance $\sigma^2$ is now treated as an independent uncertain parameter with a priori distribution $\pi(\sigma^2)$ [47]. Combining the a priori distribution of the variance $\pi(\sigma^2)$ and the a priori distribution of the uncertain parameters of the model $\pi(\mathbf{X})$, the following joint prior distribution is obtained [47]:
$$ \pi(\mathbf{X}, \sigma^2) = \pi(\mathbf{X}) \, \pi(\sigma^2). \tag{11} $$
The joint prior distribution $\pi(\mathbf{X}, \sigma^2)$ allows $\pi(\sigma^2)$ to be included in the inference process; thus, its PDF is also updated in conjunction with $\pi(\mathbf{X})$. Therefore, instead of Equation (8), the following posterior distribution should be used:
$$ \pi(\mathbf{X}, \sigma^2 \mid \mathbf{m}) = \frac{\mathcal{L}(\mathbf{m} \mid \mathbf{X}, \sigma^2) \, \pi(\mathbf{X}, \sigma^2)}{Z} = \frac{\mathcal{L}(\mathbf{m} \mid \mathbf{X}, \sigma^2) \, \pi(\mathbf{X}) \, \pi(\sigma^2)}{Z}. \tag{12} $$
According to [47], the traditional likelihood function for the unknown residual variance problem is given by
$$ \mathcal{L}(\mathbf{m} \mid \mathbf{X}, \sigma^2) = \prod_{i=1}^{N} \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left( -\frac{1}{2 \sigma^2} \left( \mathbf{m}_i - \mathcal{M}(\mathbf{X}) \right)^T \left( \mathbf{m}_i - \mathcal{M}(\mathbf{X}) \right) \right), \tag{13} $$
where $N$ is the number of experimental data points, i.e., the total number of image points or reference points for all the poses of the checkerboard. The likelihood function quantifies the degree of accuracy of the predicted image points $\mathcal{M}(\mathbf{X})$ with respect to the checkerboard corners $\mathbf{m}_i$ for the $\mathbf{X}$ values considered. A higher degree of accuracy implies a greater influence of the sample $\mathbf{X}$ on the a posteriori distribution (Equation (8)); in other words, the likelihood function is what updates the posterior distribution.
For each set of random samples $\{\mathbf{X}^{(1)}, \ldots, \mathbf{X}^{(k)}\}$ obtained from the initial PDFs $\pi(\mathbf{X})$, the optimum exterior camera orientation values are calculated (this is explained in more detail in Section 3.2). Then, the estimated image points $\mathcal{M}(\mathbf{X}^{(l)})$, for $l = 1, \ldots, k$, are calculated with Equation (6) and compared with their corresponding pixels $\mathbf{m}_i$ using the likelihood of Equation (13). This gives $k$ likelihood values $\mathcal{L}^{(l)}(\mathbf{m} \mid \mathbf{X}^{(l)}, \sigma^{2(l)})$, which quantify the quality of the image point predictions produced by the interior camera orientation set $\mathbf{X}^{(l)}$. The best set of values in the sample $\{\mathbf{X}^{(1)}, \ldots, \mathbf{X}^{(k)}\}$ is the one that maximizes the likelihood function (Equation (13)). From a metrological point of view and according to Bayes' theorem, this is the best set of interior camera orientations for the sample considered, but not the only solution. Therefore, several groups of samples of the interior camera orientation have to be generated, and in each group, the best set of values has to be selected according to its likelihood score. In this way, a distribution of the possible values of $\mathbf{X}$ is constructed with Equation (12), considering the real image points and the prior distributions $\pi(\mathbf{X})$ and $\pi(\sigma^2)$.
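In code, evaluating Equation (13) for one sampled set is straightforward. The following is a minimal sketch (in log form, for numerical stability), with function and variable names of our own choosing:

```python
import numpy as np

def log_likelihood(ref_pts, est_pts, sigma2):
    """Log of the Gaussian likelihood in Eq. (13) for N 2D image points.
    ref_pts: (N, 2) reference (detected) corners m_i.
    est_pts: (N, 2) model predictions M(X) for one sample X^(l).
    sigma2:  residual variance, itself an uncertain parameter (Eq. (10))."""
    resid = ref_pts - est_pts
    sq = np.sum(resid**2, axis=1)        # (m_i - M(X))^T (m_i - M(X))
    n = len(ref_pts)
    return -0.5 * n * np.log(2.0 * np.pi * sigma2) - np.sum(sq) / (2.0 * sigma2)
```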
Finally, we can write the new form of the posterior distribution from Equation (12) by introducing the likelihood from Equation (13):
$$ \pi(\mathbf{X}, \sigma^2 \mid \mathbf{m}) = \frac{1}{Z} \, \pi(\mathbf{X}) \, \pi(\sigma^2) \prod_{i=1}^{N} \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left( -\frac{1}{2 \sigma^2} \left( \mathbf{m}_i - \mathcal{M}(\mathbf{X}) \right)^T \left( \mathbf{m}_i - \mathcal{M}(\mathbf{X}) \right) \right). \tag{14} $$
In practice, the posterior distribution of Equation (14) does not have a closed-form solution. This is mainly due to the integration required to obtain the model evidence or marginal likelihood $Z$, which can be difficult or impossible to compute. For this reason, numerical methods based on Markov Chain Monte Carlo (MCMC) are often used to sample $\pi(\mathbf{X}, \sigma^2 \mid \mathbf{m})$ [46]. MCMC only requires that the posterior distribution be proportional to the prior distribution times the likelihood [48,49], i.e., $\pi(\mathbf{X}, \sigma^2 \mid \mathbf{m}) \propto \pi(\mathbf{X}) \, \pi(\sigma^2) \, \mathcal{L}(\mathbf{m} \mid \mathbf{X}, \sigma^2)$.

3.1. Markov Chain Monte Carlo (MCMC)

The MCMC method uses Markov chains to perform Monte Carlo estimation. A Markov chain represents a sequence of events where the probability of the future state $x_{t+1}$ depends only on the present state $x_t$, i.e., it is memoryless (Markov property). MCMC allows us to obtain a sequence of random samples from an (unknown) probability distribution from which direct sampling is not possible [48]. Therefore, the MCMC establishes a relationship between the a priori and the a posteriori distributions. The initial values of the MCMC will not exactly follow the posterior distribution (this phase is known as the burn-in period), but eventually, the samples will tend towards the desired distribution (stationary state) [50]. The initial values are needed to advance the chain but are discarded when it is finished, i.e., they are not included in the final distribution. The MCMC converges to an invariant distribution after a number of steps; therefore, it is useful for obtaining the a posteriori distribution [47].
The most common MCMC algorithms include Metropolis–Hastings, Adaptive Metropolis, and Hamiltonian Monte Carlo [47]. In this paper, the Adaptive Metropolis algorithm is used, as it has proven to be quite efficient in practice, allowing fast convergence in situations where other MCMC algorithms are impractical [51]. For instance, the Metropolis–Hastings algorithm needs to choose a proposed distribution that is as close as possible to the posterior distribution. This is not always possible in practical applications [52]. On the other hand, the Hamiltonian Monte Carlo algorithm is slower per iteration than most other approaches because it needs to compute the gradient at different points in the posterior distribution [47]. In addition, it is not widely used in practice because of the difficulty of adjusting the hyperparameters [53].
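For intuition, a minimal sketch of the adaptive Metropolis idea follows. This is our own simplified illustration, not the UQLAB implementation; `log_post` is assumed to return the unnormalized log-posterior, i.e., log prior plus log likelihood:

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_steps, adapt_after=500, eps=1e-8):
    """Sketch of an adaptive Metropolis sampler: a Gaussian random-walk
    proposal whose covariance is re-estimated from the chain history."""
    d = len(x0)
    scale = 2.4**2 / d                        # classical scaling factor
    cov = 0.1 * np.eye(d)                     # initial proposal covariance
    chain = [np.asarray(x0, dtype=float)]
    lp = log_post(chain[0])
    for i in range(1, n_steps):
        if i > adapt_after:                   # adapt from chain history
            cov = scale * (np.cov(np.array(chain).T) + eps * np.eye(d))
        proposal = np.random.multivariate_normal(chain[-1], cov)
        lp_prop = log_post(proposal)
        if np.log(np.random.rand()) < lp_prop - lp:   # accept
            chain.append(proposal)
            lp = lp_prop
        else:                                 # reject: repeat current state
            chain.append(chain[-1].copy())
    return np.array(chain)                    # burn-in to be discarded
```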

3.2. Computational Tools

Bayesian calibration produces a finite set of possible values rather than a single optimal one. Therefore, it is necessary to fit these values to a PDF by Bayesian inference. In this work, both the Bayesian calibration and the PDF fitting were performed using the general-purpose uncertainty quantification framework UQLAB [47]. UQLAB is a reliable framework for Bayesian inversion and uncertainty quantification [46]. To perform the Bayesian calibration in UQLAB, it is essential to specify:
  • The uncertain model parameters; in our case, the intrinsic parameters and lens distortion coefficients (interior orientation) $\mathbf{X} = \{\mathbf{A}, k_1, k_2, k_3, p_1, p_2\}$.
  • The measurement vector or experimental data $\mathbf{m}$.
  • The forward model, i.e., $\mathcal{M}(\mathbf{X})$.
In our Bayesian inversion process, only the PDFs of the intrinsic parameters and lens distortion are updated. However, the calculation of the extrinsic parameters is necessary because the model $\mathcal{M}(\mathbf{X})$ also depends on them. Therefore, for each combination of intrinsic parameters and lens distortion sampled in the Bayesian process, $\mathbf{X}^{(l)} = \{\mathbf{A}^{(l)}, k_1^{(l)}, k_2^{(l)}, k_3^{(l)}, p_1^{(l)}, p_2^{(l)}\}$, the proposed methodology estimates $(\mathbf{R}, \mathbf{t})$ for each image from its corresponding homography $\mathbf{H} = [\mathbf{h}_1, \mathbf{h}_2, \mathbf{h}_3]$. According to Zhang [6], this can be implemented as follows:
$$ \begin{aligned} \mathbf{r}_1 &= \lambda \mathbf{A}^{-1} \mathbf{h}_1 \\ \mathbf{r}_2 &= \lambda \mathbf{A}^{-1} \mathbf{h}_2 \\ \mathbf{r}_3 &= \mathbf{r}_1 \times \mathbf{r}_2 \\ \mathbf{t} &= \lambda \mathbf{A}^{-1} \mathbf{h}_3, \end{aligned} \tag{15} $$
with a scale factor:
$$ \lambda = \frac{1}{\left\| \mathbf{A}^{-1} \mathbf{h}_1 \right\|} = \frac{1}{\left\| \mathbf{A}^{-1} \mathbf{h}_2 \right\|}. $$
The homography $\mathbf{H}$ is computed from the ideal image points, i.e., after correcting the lens distortion (by numerical nonlinear least-squares optimization), in each Bayesian inverse model simulation.
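A minimal NumPy sketch of Equation (15), assuming $\mathbf{H}$ has already been estimated from the ideal points (the function name is our own):

```python
import numpy as np

def extrinsics_from_homography(A, H):
    """Initial (R, t) from a homography via Eq. (15); the columns of H
    are h1, h2, h3. R is then projected onto the nearest true rotation
    by SVD, as recommended in [6]."""
    A_inv = np.linalg.inv(A)
    lam = 1.0 / np.linalg.norm(A_inv @ H[:, 0])       # scale factor
    r1 = lam * A_inv @ H[:, 0]
    r2 = lam * A_inv @ H[:, 1]
    r3 = np.cross(r1, r2)
    t = lam * A_inv @ H[:, 2]
    R = np.column_stack([r1, r2, r3])
    U, _, Vt = np.linalg.svd(R)                       # nearest rotation
    return U @ Vt, t
```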
According to [6], the matrix $\mathbf{R} = [\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3]$ computed with Equation (15) may not be a true rotation matrix because of the noise in the data. It is, therefore, recommended to compute the best or true rotation matrix by singular value decomposition (SVD). However, we believe that, due to the non-linearity, even these best estimates of $\mathbf{R}$ and $\mathbf{t}$ may not be optimal. Therefore, we use the $\mathbf{R}$ and $\mathbf{t}$ values from the above expressions (for each $\mathbf{X}^{(l)}$) as initial guesses for a nonlinear model to be optimized:
$$ \sum_{i=1}^{n} \sum_{j=1}^{m} \left\| \mathbf{m}_{i,j} - \mathcal{M}\left( \mathbf{R}_i^{(l)}, \mathbf{t}_i^{(l)} \right) \right\|^2, \tag{16} $$
where $\mathcal{M}(\mathbf{R}_i^{(l)}, \mathbf{t}_i^{(l)})$ is the estimated image point computed by Equation (6), corresponding to the set $\mathbf{X}^{(l)}$. Therefore, for each set $\mathbf{X}^{(l)}$, there exists an optimal solution of $\mathbf{R}^{(l)}$ and $\mathbf{t}^{(l)}$ that minimizes Equation (16).
This optimization was performed using the Levenberg–Marquardt algorithm implemented in Matlab [33]. Once $\mathbf{R}^{(l)}$ and $\mathbf{t}^{(l)}$ are known, the Bayesian inference-based method searches for the possible values of the interior camera orientation that minimize Equation (10).
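An equivalent refinement step can be sketched with SciPy's Levenberg–Marquardt solver. This is an illustration, not the authors' Matlab code; it reuses the project_points sketch above, and the rotation-vector parameterization is our own choice:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_extrinsics(world_pts, image_pts, A, dist, R0, t0):
    """Refine (R, t) by minimizing Eq. (16) with Levenberg-Marquardt,
    starting from the Eq. (15) initial guess (R0 assumed near-orthogonal,
    e.g., after the SVD projection)."""
    def residuals(p):
        R = Rotation.from_rotvec(p[:3]).as_matrix()
        est = project_points(world_pts, A, R, p[3:], dist)
        return (est - image_pts).ravel()
    p0 = np.concatenate([Rotation.from_matrix(R0).as_rotvec(), t0])
    sol = least_squares(residuals, p0, method='lm')   # Levenberg-Marquardt
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```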
The steps to apply the proposed Bayesian inference-based method for the uncertainty characterization are:
  • Calculate the intrinsic parameters and distortion coefficients with their uncertainties using Zhang's method [6] (with Matlab or OpenCV), assuming normal distributions.
  • With each set of random values sampled from the PDFs of the intrinsic parameters of the camera and lens distortion coefficients, obtain the undistorted image points.
  • With the undistorted points and the sampled values, calculate the exterior camera orientation or extrinsic parameters with Equation (15).
  • Use the obtained exterior camera orientation values as initial guesses and determine the optimal values of the exterior camera orientation using Equation (16).
  • Calculate the theoretical image points with Equation (6).
Figure 2 shows a schematic representation of the Bayesian inference-based method for the uncertainty characterization of the interior camera orientation.
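Putting the steps above together, the forward model $\mathcal{M}(\mathbf{X})$ evaluated inside the MCMC loop can be sketched as follows. This reuses the earlier sketches, delegates homography estimation to OpenCV, and illustrates the described pipeline rather than reproducing the authors' UQLAB/Matlab code:

```python
import cv2
import numpy as np

def forward_model(A, dist, world_pts, image_pts_per_view):
    """Forward model M(X) for one sampled interior orientation
    X = (A, k1, k2, k3, p1, p2): re-estimate the optimal per-view
    extrinsics (Eqs. (15)-(16)) and predict the image points (Eq. (6))."""
    k1, k2, k3, p1, p2 = dist
    dc = np.array([k1, k2, p1, p2, k3])    # OpenCV coefficient order
    preds = []
    for img_pts in image_pts_per_view:     # img_pts: (m, 2) detected corners
        # Undistort the observed corners to ideal image points
        ideal = cv2.undistortPoints(
            img_pts.reshape(-1, 1, 2).astype(np.float64), A, dc, P=A)
        # Homography from the planar target to the ideal points
        H, _ = cv2.findHomography(world_pts[:, :2], ideal.reshape(-1, 2))
        R0, t0 = extrinsics_from_homography(A, H)
        R, t = refine_extrinsics(world_pts, img_pts, A, dist, R0, t0)
        preds.append(project_points(world_pts, A, R, t, dist))
    return np.vstack(preds)
```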

4. Results

Table 1, Table 2 and Table 3 show the interior camera orientation results corresponding to three image sets obtained using the Matlab camera calibration software and then Bayesian calibration. These sets of images were chosen because they are the most interesting of the five sets analyzed. For the Bayesian method, 5000 steps and 50 Markov chains were used, so a total of 25,000 simulations per image set were performed, with a computational cost of approximately 5 h. It was found that in most cases, the differences between the mean values of the interior camera orientation obtained with the two methods are not significant (less than 5%). However, the differences between the values of the standard deviations are relevant and depend on the set of images considered. It can also be seen that the normality assumption of the method commonly used to estimate the uncertainties of Zhang’s method is not always met. In this case, the appropriate PDFs are those obtained using the proposed Bayesian inference-based method, as this is a mathematical approach specifically designed to generate a probability distribution for uncertain parameters from prior knowledge [54,55,56].
To verify the effects of the different PDFs found in Table 1, Table 2 and Table 3, it is necessary to perform a statistical comparison of both the estimated points and the re-projection error. Therefore, in this work, we have verified the effect of the uncertainties of the camera parameters through a propagation analysis and uncertainty quantification. To this end, we have calculated the estimated image points with Equation (6), using three different procedures:
  • Procedure A: Based on 1000 pseudo-random samples from the Gaussian PDFs of the interior and exterior camera orientations, computed with the Matlab camera software. In this way, we have assumed that the exterior camera orientation does not depend on the other parameters of the camera.
  • Procedure B: From 1000 pseudo-random samples, considering only the Gaussian PDFs of the interior camera orientation calculated with the Matlab camera software. This is the common procedure used for an already calibrated camera [6], i.e., with the $\mathbf{X}$ values provided by the Matlab camera software, executing the following steps:
    • First, the exterior camera orientation matrices [ R , t ] are calculated with Equation (15).
    • Then, the rotation matrix R is optimized using SVD.
    • Finally, the image points are predicted with Equation (6).
  • Procedure C: From 1000 pseudo-random samples from the PDFs of the interior camera orientation obtained with the proposed Bayesian inference method, where, in addition to each sample of the interior camera orientation, the optimal values of $\mathbf{R}$ and $\mathbf{t}$ are obtained (see the sketch after this list). In this case, the procedure used to calculate the estimated points was as follows:
    • First, the initial exterior camera orientation matrices ( R , t ) are calculated using Equation (15) with the random values from the X PDFs.
    • Then, R and t are optimized with the Equation (16), as was carried out in the proposed Bayesian camera calibration procedure.
    • Finally, the image points are predicted with Equation (6).
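A hedged sketch of this Monte Carlo propagation (Procedure C) is shown below, reusing the forward_model sketch from Section 3.2; the sample container format is our own assumption:

```python
import numpy as np

def propagate_uncertainty(samples, world_pts, image_pts_per_view):
    """Procedure C sketch: push posterior draws of the interior
    orientation through the forward model, optimizing (R, t) per draw.
    samples: iterable of (A, dist) pairs drawn from the Bayesian PDFs."""
    clouds = [forward_model(A, dist, world_pts, image_pts_per_view)
              for A, dist in samples]
    stacked = np.stack(clouds)                      # (n_draws, N, 2)
    ref = np.vstack([p.reshape(-1, 2) for p in image_pts_per_view])
    # Statistical re-projection error: mean Euclidean norm per draw
    errors = np.linalg.norm(stacked - ref, axis=2).mean(axis=1)
    return stacked.mean(axis=0), errors
```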
Figure 3a shows the result of the comparison between the reference image points (red color) and the estimated ones (blue color) obtained with Procedure A. The reference points are the coordinates of the corners of the image checkerboard shown in Figure 1a and represent our ground truth. The reference points were detected using the Matlab command detectCheckerboardPoints [33]. This command is based on the work of Geiger et al. [9]. It can be seen that the reference points are within the distribution of the estimated checkerboard corners. However, their variability is much higher compared to Figure 3b,c, corresponding to Procedures B and C, respectively. The higher variability is due to the non-linearity of the complete camera model. Therefore, the uncertainties of the exterior camera orientation resulting from the Matlab camera software should not be used in an uncertainty propagation analysis.
From a validation point of view, no major differences are observed between the distributions of the estimated image points in Figure 3b and those in Figure 3c. Therefore, it could be said that, in this case, the estimation of the exterior camera orientation using the actual method (Equation (15)) is similar to the proposed optimal solution obtained from Equation (16).
However, if the same three uncertainty quantification procedures are carried out with a different set of images, differences in the quality of the predicted checkerboard image points arise when comparing the results. Figure 4 shows the distributions of the possible values of the image checkerboard corners (estimated points) and pixel coordinates (reference points) shown in Figure 1e, belonging to set 3, for the three procedures. The estimated points were calculated using the camera parameter values from Table 2. It can be seen that, in the three cases shown, the estimated points closest to the vertical axis $v$ have a greater distortion than the rest of the estimated points due to the barrel distortion. However, this unfavorable effect is minimized when the predictions of the image points are made with the PDFs from the Bayesian calibration and the exterior camera orientations are calculated with Equation (16) (Procedure C), as depicted in Figure 4c.
The worst results were obtained with the checkerboard shown in Figure 1j. Figure 5 shows the distribution of the possible checkerboard corner values (estimated points) and pixel coordinates (reference points). The greatest dispersion appears when using the uncertainty propagation of Procedure A (Figure 5a). In this case, the reference image points lie within the estimated points. However, the error is very large because the distributions of the estimated points overlap. Figure 5b shows that in most cases, the predictions made using the Matlab Gaussian PDFs of the interior camera orientation and the values of ( R , t ) calculated using Equation (15) (Procedure B) do not match the reference points. In addition, the distributions of the estimated point values differ significantly from the ones in Figure 5a. This is because the effect of non-linearity is much greater in this image than in the rest of the images analyzed. As already observed in the cases analyzed previously (Figure 3c and Figure 4c), when the PDFs of the Bayesian method are used and ( R , t ) are optimized with Equation (16), i.e., when applying the uncertainty propagation of Procedure C, both the variance and the quality of the prediction are improved. However, in some areas, the reference points are not within the distribution of the possible estimated points.
Figure 6 shows the statistical re-projection errors (average Euclidean norm) between the estimated points and the reference points for the checkerboards in Figure 1a,e,j. The re-projection errors calculated with the PDFs of both $\mathbf{X}$ and $(\mathbf{R}, \mathbf{t})$ from the Matlab camera software (Procedure A), i.e., those corresponding to the estimates in Figure 3a, Figure 4a and Figure 5a, are not shown because their variance is very high. The whisker bounds represent a 95% confidence interval. In Figure 6a,b, we observe an asymmetric distribution of the errors for both the Matlab procedure based on SVD and the proposed Bayesian method (with optimization of the exterior camera orientation). In both cases, the asymmetry tends toward the minimum values and is more pronounced when the PDFs from the Bayesian calibration are used. The largest difference between the re-projection errors of the two methods is depicted in Figure 6c, which corresponds to the estimates of the points of the checkerboard in Figure 1j. The distribution of the possible values of the re-projection error shows that the accuracy obtained from the proposed Bayesian calibration PDFs is significantly higher than the one obtained with the Matlab camera software (based on SVD).
To explain the differences in the statistical re-projection errors between the two calibration methods, the Matlab re-projection errors were recalculated, but this time, the exterior camera orientation matrices were optimized as proposed in the Bayesian calibration process. We call this case "Matlab nonlinear fit".
Figure 7 shows the new re-projection errors estimated with the Matlab nonlinear fit versus the proposed Bayesian calibration. It can be seen that the predictions made with the interior camera orientation PDFs from the Matlab camera software are significantly improved by the optimization of the exterior camera orientation. Thus, for the set of images analyzed, the difference between the Matlab $\mathbf{X}$ PDFs and the Bayesian calibration PDFs does not significantly affect the predictions of the image points, provided that the exterior camera orientation is calculated as proposed in our Bayesian inference-based method.

5. Discussion

Computer vision has a wide range of applications in today’s world. To ensure its reliability, it is, therefore, necessary that all the measurements made with the different algorithms used are presented with their respective uncertainty to quantify their accuracy.
Images are among the most commonly used data in computer vision; that is why cameras have become an essential part of this technique. They capture points in the 3D world and project them onto a 2D plane, which we see as images. To extract accurate information from the captured images, a metrological characterization of the intrinsic and lens distortion parameters of the cameras used is, thus, essential.
The wide dissemination of ready-made camera calibration algorithms on the web, implemented in software such as Matlab or OpenCV, can be extremely helpful, as they provide powerful solutions for non-expert users. However, these algorithms must be "handled with care", because uncertainty quantification in the camera calibration process demands a deep knowledge of metrology.
The methodology proposed by Zhang [6] is extensively used in camera calibration software such as that mentioned above. However, no study has yet been carried out to validate the uncertainties of the interior camera orientation calculated with this type of tool. Therefore, we have used Bayesian inversion to verify the PDF uncertainties of both the intrinsic camera parameters and the lens distortion coefficients provided by the Matlab software. This approach does not replace the calibration method proposed by Zhang [6]; it verifies the practical use of the uncertainties calculated with the camera software considered.
First, it is worth noting that the PDFs of the interior camera orientation found with the Bayesian inference are not always normal, as assumed by the Matlab software. Moreover, we have demonstrated that, in an uncertainty propagation study, the PDFs of the interior camera orientation cannot be used directly in conjunction with the exterior orientation provided by Matlab, as the model is non-linear. The linear simplification made by the software results in a very large scatter of the possible pixel values and a high statistical re-projection error. The assumptions made by Matlab for the calculation of the camera parameter uncertainties are conditioned by the image poses considered in the calibration. Therefore, to quantify the quality of the camera calibration, it is necessary to verify the effect of the camera parameter uncertainties on the estimated image points, as was performed in this research. This is an important finding since decision-making activities in transport and industry rely more and more on computer vision, where it is compulsory to determine the metric reliability of the images; uncertainty quantification then becomes an unavoidable issue.
We have also developed the uncertainty propagation of the interior orientation of the camera from Matlab in accordance with the procedure implemented in this software. This method uses the homography matrix and the SVD to compute the true values of the rotation matrix. We have found that the accuracy of the point estimates with this procedure is also strongly affected by the image considered. Therefore, we propose to always optimize the extrinsic parameters or exterior camera orientation with Equation (16), using the Matlab [ R , t ] matrices as initial values. In this way, the predictions of the image points are significantly improved and are similar to the results obtained with the PDFs of the Bayesian calibration process.
However, we are aware that these findings were obtained with a reduced number of images and that, to ensure good accuracy in the magnitudes of the camera parameters, it has been shown [28,29,30] that an analysis with a larger number of images is necessary. As Bayesian inference is a very time-consuming procedure, we propose, as future work, the development of surrogate models to efficiently perform a reliable uncertainty quantification.

6. Conclusions

It is a fact that uncertainty quantification in camera calibration is currently an unavoidable issue. This work presents the verification of the uncertainties of the interior camera orientation calculated with the Matlab camera software. To this end, we have analyzed the impact of the assumptions made by the software to quantify the uncertainties of Zhang's camera calibration method.
We have found that the PDFs of the Bayesian inverse process differ from those assumed by the Matlab camera calibration software. We have also shown that using the PDFs obtained in Matlab can lead to poor estimates of the image points, since the assumptions on which the software relies to estimate the uncertainty of the camera parameters are strongly conditioned by the checkerboard pose used.
Furthermore, we have demonstrated that the main source of bias in the prediction of image points is the method used to calculate the camera’s exterior orientation. According to the results obtained in the present work, if the exterior orientation of the camera is optimized with the procedure we have proposed, the impact of the Gaussian assumption of the interior camera orientation in the statistical re-projection errors is minimized. We, therefore, recommend optimizing the camera’s exterior orientation based on the interior orientation values, as this will significantly improve the accuracy of the image point predictions.
Moreover, a larger number of images should be considered to improve the accuracy of the camera parameters, and factors such as image lighting and the time drift of the camera parameters have to be taken into account for a more realistic uncertainty quantification of a vision system.
The novel Bayesian inference-based approach for the uncertainty characterization of Zhang’s camera calibration method may require a higher computational cost compared with the uncertainty analysis of conventional camera calibration software.
In this sense, we propose, as a next step, to conjugate the Bayesian inference with surrogate models. This will allow us to fulfill two opposite but necessary requirements: to reduce the computational cost of uncertainty quantification and to ensure the metrological reliability of the results.

Author Contributions

Conceptualization, R.G.-M. and M.J.L.B.; methodology, R.G.-M., M.J.L.B. and M.R.-B.; software, R.G.-M. and M.J.L.B.; validation, R.G.-M. and M.J.L.B.; formal analysis, R.G.-M., M.J.L.B. and M.R.-B.; investigation, R.G.-M., M.J.L.B., M.R.-B. and A.A.-K.; data curation, R.G.-M. and M.J.L.B.; writing—original draft preparation, R.G.-M. and M.J.L.B.; writing—review and editing, R.G.-M., M.J.L.B., M.R.-B. and A.A.-K.; supervision, M.J.L.B. and M.R.-B.; project administration, R.G.-M. and A.A.-K.; funding acquisition, R.G.-M. and A.A.-K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Madrid Government (Comunidad de Madrid Spain) under the Multiannual Agreement with UC3M (“Fostering Young Doctors Research”, APBI-CM-UC3M), and in the context of the VPRICIT (Research and Technological Innovation Regional Programme and by the FEDER/Ministry of Science and Innovation—Agencia Estatal de Investigacion (AEI) of the Government of Spain through the projects PID2022-136468OB-I00 and PID2022-142015OB-I00.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Aber, J.S.; Marzolff, I.; Ries, J.B. Chapter 3-photogrammetry. In Small-Format Aerial Photography; Elsevier: Amsterdam, The Netherlands, 2010; pp. 23–39. [Google Scholar]
  2. Champion, E. Virtual Heritage: A Guide; Ubiquity Press: London, UK, 2021. [Google Scholar]
  3. Jähne, B.; Haußecker, H. Computer Vision and Applications; Elsevier: Amsterdam, The Netherlands, 2000. [Google Scholar]
  4. Danuser, G. Computer vision in cell biology. Cell 2011, 147, 973–978. [Google Scholar] [CrossRef] [PubMed]
  5. Klette, R. Concise Computer Vision; Springer: Berlin/Heidelberg, Germany, 2014; Volume 233. [Google Scholar]
  6. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  7. Luhmann, T.; Fraser, C.; Maas, H.G. Sensor modelling and camera calibration for close-range photogrammetry. ISPRS J. Photogramm. Remote Sens. 2016, 115, 37–46. [Google Scholar] [CrossRef]
  8. Fraser, C.S. Digital camera self-calibration. ISPRS J. Photogramm. Remote Sens. 1997, 52, 149–159. [Google Scholar] [CrossRef]
  9. Geiger, A.; Moosmann, F.; Car, Ö.; Schuster, B. Automatic camera and range sensor calibration using a single shot. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, St Paul, MN, USA, 14–18 May 2012; pp. 3936–3943. [Google Scholar]
  10. Nyimbili, P.H.; Demirel, H.; Seker, D.; Erden, T. Structure from motion (sfm)-approaches and applications. In Proceedings of the International Scientific Conference on Applied Sciences, Antalya, Turkey, 27–30 September 2016; pp. 27–30. [Google Scholar]
  11. Zhou, X.; Sun, K.; Wang, J.; Zhao, J.; Feng, C.; Yang, Y.; Zhou, W. Computer vision enabled building digital twin using building information model. IEEE Trans. Ind. Inf. 2022, 19, 2684–2692. [Google Scholar] [CrossRef]
  12. Chen, J.; Wang, Q.; Cheng, H.H.; Peng, W.; Xu, W. A review of vision-based traffic semantic understanding in ITSs. IEEE Trans. Intell. Transp. Syst. 2022, 23, 19954–19979. [Google Scholar] [CrossRef]
  13. Orghidan, R.; Salvi, J.; Gordan, M.; Orza, B. Camera calibration using two or three vanishing points. In Proceedings of the 2012 Federated Conference on Computer science and information systems (FedCSIS), Wroclaw, Poland, 9–12 September 2012; pp. 123–130. [Google Scholar]
  14. Mei, C.; Rives, P. Single view point omnidirectional camera calibration from planar grids. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 3945–3950. [Google Scholar]
  15. Tsai, R. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344. [Google Scholar] [CrossRef]
  16. Heikkila, J.; Silvén, O. A four-step camera calibration procedure with implicit image correction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 17–19 June 1997; pp. 1106–1112. [Google Scholar]
  17. Sundareswara, R.; Schrater, P.R. Bayesian modelling of camera calibration and reconstruction. In Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM’05), Ottawa, ON, Canada, 13–16 June 2005; pp. 394–401. [Google Scholar]
  18. Jorio, A.; Dresselhaus, M. Nanostructured Materials: Metrology. In Reference Module in Materials Science and Materials Engineering; Elsevier: Amsterdam, The Netherlands, 2016. [Google Scholar] [CrossRef]
  19. Rasmussen, K.; Kondrup, J.B.; Allard, A.; Demeyer, S.; Fischer, N.; Barton, E.; Partridge, D.; Wright, L.; Bär, M.; Fiebach, H.; et al. Novel Mathematical and Statistical Approaches to Uncertainty Evaluation: Best Practice Guide to Uncertainty Evaluation for Computationally Expensive Models; Euramet: Brunswick, Germany, 2015. [Google Scholar]
  20. Wang, C.; Qiang, X.; Xu, M.; Wu, T. Recent advances in surrogate modeling methods for uncertainty quantification and propagation. Symmetry 2022, 14, 1219. [Google Scholar] [CrossRef]
  21. Fu, C.; Sinou, J.J.; Zhu, W.; Lu, K.; Yang, Y. A state-of-the-art review on uncertainty analysis of rotor systems. Mech. Syst. Signal Process. 2023, 183, 109619. [Google Scholar]
  22. Karandikar, J.; Chaudhuri, A.; No, T.; Smith, S.; Schmitz, T. Bayesian optimization for inverse calibration of expensive computer models: A case study for Johnson-Cook model in machining. Manuf. Lett. 2022, 32, 32–38. [Google Scholar] [CrossRef]
  23. Zhang, C.; Sun, L. Bayesian Calibration of the Intelligent Driver Model. arXiv 2022, arXiv:2210.03571. [Google Scholar]
  24. King, G.; Lovell, A.; Neufcourt, L.; Nunes, F. Direct comparison between Bayesian and frequentist uncertainty quantification for nuclear reactions. Phys. Rev. Lett. 2019, 122, 232502. [Google Scholar] [CrossRef] [PubMed]
  25. Bok, Y.; Ha, H.; Kweon, I.S. Automated checkerboard detection and indexing using circular boundaries. Pattern Recognit. Lett. 2016, 71, 66–72. [Google Scholar] [CrossRef]
  26. Feng, X.; Cao, M.; Wang, H.; Collier, M. The comparison of camera calibration methods based on structured-light measurement. In Proceedings of the 2008 Congress on Image and Signal Processing, Octeville, France, 1–3 July 2008; Volume 2, pp. 155–160. [Google Scholar]
  27. Burger, W. Zhang’s Camera Calibration Algorithm: In-Depth Tutorial and Implementation; HGB16-05; University of Applied Sciences Upper Austria: Hagenberg, Austria, 2016; pp. 1–6. [Google Scholar]
  28. Rojtberg, P.; Kuijper, A. Efficient pose selection for interactive camera calibration. In Proceedings of the 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Munich, Germany, 16–20 October 2018; pp. 31–36. [Google Scholar]
  29. Peng, S.; Sturm, P. Calibration wizard: A guidance system for camera calibration based on modelling geometric and corner uncertainty. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1497–1505. [Google Scholar]
  30. Richardson, A.; Strom, J.; Olson, E. AprilCal: Assisted and repeatable camera calibration. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 1814–1821. [Google Scholar]
  31. Semeniuta, O. Analysis of camera calibration with respect to measurement accuracy. Procedia Cirp 2016, 41, 765–770. [Google Scholar] [CrossRef]
  32. Hagemann, A.; Knorr, M.; Janssen, H.; Stiller, C. Inferring bias and uncertainty in camera calibration. Int. J. Comput. Vis. 2022, 130, 17–32. [Google Scholar] [CrossRef]
  33. MATLAB. Version 9.12.0.2039608 (R2022a) Update 5 (R2022a); The MathWorks Inc.: Natick, MA, USA, 2022. [Google Scholar]
  34. Wang, Y.; Li, Y.; Zheng, J. A camera calibration technique based on OpenCV. In Proceedings of the 3rd International Conference on Information Sciences and Interaction Sciences, Chengdu, China, 23–25 June 2010; pp. 403–406. [Google Scholar]
  35. Hu, W.; Xie, J.; Chau, H.W.; Si, B.C. Evaluation of parameter uncertainties in nonlinear regression using Microsoft Excel Spreadsheet. Environ. Syst. Res. 2015, 4, 1–12. [Google Scholar] [CrossRef]
  36. García-Alfonso, H.; Córdova-Esparza, D.M. Comparison of uncertainty analysis of the Montecarlo and Latin Hypercube algorithms in a camera calibration model. In Proceedings of the 2018 IEEE 2nd Colombian Conference on Robotics and Automation (CCRA), Barranquilla, Colombia, 1–3 November 2018; pp. 1–5. [Google Scholar]
  37. Zhan, K.; Fritsch, D.; Wagner, J.F. Stability analysis of intrinsic camera calibration using probability distributions. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2021; Volume 1048, p. 012010. [Google Scholar]
  38. Kainz, O.; Jakab, F.; Fecil’ak, P.; Vápeník, R.; Deák, A.; Cymbalák, D. Estimation of camera intrinsic matrix parameters and its utilization in the extraction of dimensional units. In Proceedings of the 2016 International Conference on Emerging eLearning Technologies and Applications (ICETA), Stary Smokovec, Slovakia, 24–25 November 2016; pp. 153–156. [Google Scholar]
  39. Reznicek, J. Method for measuring lens distortion by using pinhole lens. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 40, 509–515. [Google Scholar] [CrossRef]
  40. Fryer, J.G.; Brown, D.C. Lens distortion for close-range photogrammetry. Photogramm. Eng. Remote Sens. 1986, 52, 51–58. [Google Scholar]
  41. Duane, C.B. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866. [Google Scholar]
  42. Wang, J.; Shi, F.; Zhang, J.; Liu, Y. A new calibration model of camera lens distortion. Pattern Recognit. 2008, 41, 607–615. [Google Scholar] [CrossRef]
  43. Sudhanshu, K. CarND-Camera-Calibration. 2016. Available online: https://github.com/udacity/CarND-Camera-Calibration/tree/master (accessed on 1 December 2022).
  44. Wong, R.K.; Storlie, C.B.; Lee, T.C. A frequentist approach to computer model calibration. J. R. Stat. Soc. Ser. B Stat. Methodol. 2017, 79, 635–648. [Google Scholar] [CrossRef]
  45. Zhang, J.; Fu, S. An efficient Bayesian uncertainty quantification approach with application to k-ω-γ transition modeling. Comput. Fluids 2018, 161, 211–224. [Google Scholar] [CrossRef]
  46. Radaideh, M.I.; Lin, L.; Jiang, H.; Cousineau, S. Bayesian inverse uncertainty quantification of the physical model parameters for the spallation neutron source first target station. Results Phys. 2022, 36, 105414. [Google Scholar] [CrossRef]
  47. Marelli, S.; Sudret, B. UQLab: A framework for uncertainty quantification in Matlab. In Proceedings of the 2nd International Conference on Vulnerability and Risk Analysis and Management (ICVRAM2014), Liverpool, UK, 13–16 July 2014; pp. 2554–2563. [Google Scholar]
  48. Van Ravenzwaaij, D.; Cassey, P.; Brown, S.D. A simple introduction to Markov Chain Monte–Carlo sampling. Psychon. Bull. Rev. 2018, 25, 143–154. [Google Scholar] [CrossRef]
  49. Wu, X.; Kozlowski, T.; Meidani, H.; Shirvan, K. Inverse uncertainty quantification using the modular Bayesian approach based on Gaussian process, Part 1: Theory. Nucl. Eng. Des. 2018, 335, 339–355. [Google Scholar] [CrossRef]
  50. Kruschke, J. Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan; Academic Press: Cambridge, MA, USA, 2014. [Google Scholar]
  51. Roberts, G.O.; Rosenthal, J.S. Examples of adaptive MCMC. J. Comput. Graph. Stat. 2009, 18, 349–367. [Google Scholar] [CrossRef]
  52. Yamada, T.; Ohno, K.; Ohta, Y. Comparison between the Hamiltonian Monte Carlo method and the Metropolis–Hastings method for coseismic fault model estimation. Earth Planets Space 2022, 74, 86. [Google Scholar] [CrossRef]
  53. Jin, S.S.; Ju, H.; Jung, H.J. Adaptive Markov chain Monte Carlo algorithms for Bayesian inference: Recent advances and comparative study. Struct. Infrastruct. Eng. 2019, 15, 1548–1565. [Google Scholar] [CrossRef]
  54. Fornacon-Wood, I.; Mistry, H.; Johnson-Hart, C.; Faivre-Finn, C.; O’Connor, J.P.; Price, G.J. Understanding the differences between Bayesian and frequentist statistics. Int. J. Radiat. Oncol. Biol. Phys. 2022, 112, 1076–1082. [Google Scholar] [CrossRef]
  55. Ding, J.; Wang, Y.D.; Gulzar, S.; Kim, Y.R.; Underwood, B.S. Uncertainty quantification of simplified viscoelastic continuum damage fatigue model using the Bayesian inference-based Markov chain Monte Carlo method. Transp. Res. Rec. 2020, 2674, 247–260. [Google Scholar] [CrossRef]
  56. Kacker, R.; Jones, A. On use of Bayesian statistics to make the Guide to the Expression of Uncertainty in Measurement consistent. Metrologia 2003, 40, 235. [Google Scholar] [CrossRef]
Figure 1. Multiple images of the planar checkerboard pattern in different poses for camera calibration [43].
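For readers who wish to reproduce this step, the sketch below shows a typical Zhang-style calibration from such checkerboard images. It is a minimal Python/OpenCV illustration, not the Matlab toolbox pipeline used in this work; the 9 × 6 inner-corner grid and the file names are assumptions made for the example.

```python
# Minimal Zhang-style calibration sketch with OpenCV (assumed 9x6 inner
# corners and hypothetical file names); the paper itself uses the Matlab
# camera calibration toolbox.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner-corner grid of the checkerboard (assumption)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []  # 3D board points and 2D detected corners
for fname in glob.glob("calibration*.jpg"):  # hypothetical image names
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern, None)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]. Note that OpenCV orders the
# distortion vector as (k1, k2, p1, p2, k3), unlike the (k1, k2, k3, p1, p2)
# order used in Tables 1-3. One (R, t) pose is returned per image.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print("RMS re-projection error (pixels):", rms)
```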
Figure 2. Bayesian inference-based method for uncertainty quantification of the interior camera orientation.
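As a rough illustration of the inversion step summarized in Figure 2, the following generic random-walk Metropolis sampler [48] draws posterior samples of a single intrinsic parameter, taking Zhang's estimate as a Gaussian prior and a Gaussian likelihood over the re-projection residuals. The actual computations in this work use UQLab's adaptive MCMC samplers [47,51]; `residuals_fn` is a hypothetical stand-in for the pinhole-plus-distortion projection model.

```python
# Generic random-walk Metropolis sketch for one intrinsic parameter;
# illustrative only, not the UQLab implementation used in the study.
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta, prior_mean, prior_sd, residuals_fn, noise_sd):
    # Zhang's estimate acts as a Gaussian prior; the likelihood is Gaussian
    # over the re-projection residuals (in pixels).
    log_prior = -0.5 * ((theta - prior_mean) / prior_sd) ** 2
    log_lik = -0.5 * np.sum((residuals_fn(theta) / noise_sd) ** 2)
    return log_prior + log_lik

def metropolis(theta0, step, n_samples, **post_args):
    chain = [theta0]
    lp = log_post(theta0, **post_args)
    for _ in range(n_samples):
        prop = chain[-1] + step * rng.standard_normal()
        lp_prop = log_post(prop, **post_args)
        if np.log(rng.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            chain.append(prop)
            lp = lp_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)
```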
Figure 3. The comparison between the estimates and pixel coordinates for the checkerboard in Figure 1a. (a) Procedure A: PDFs of both X and (R, t) estimated using the Matlab toolbox. (b) Procedure B: PDFs of X and the possible values of (R, t) using Equation (15). (c) Procedure C: PDFs of X and the possible values of (R, t) optimized with Equation (16) (proposed method).
Figure 4. The comparison between the estimates and pixel coordinates for the checkerboard in Figure 1e. (a) Procedure A: PDFs of both X and (R, t) estimated using the Matlab toolbox. (b) Procedure B: PDFs of X and the possible values of (R, t) using Equation (15). (c) Procedure C: PDFs of X and the possible values of (R, t) optimized with Equation (16) (proposed method).
Figure 5. The comparison between the estimates and pixel coordinates for the checkerboard in Figure 1j. (a) Procedure A: PDFs of both X and (R, t) estimated using the Matlab toolbox. (b) Procedure B: PDFs of X and the possible values of (R, t) using Equation (15). (c) Procedure C: PDFs of X and the possible values of (R, t) optimized with Equation (16) (proposed method).
Figure 6. Re-projection error statistical comparison between Procedure B (Matlab SVD) and Procedure C (Bayesian nonlinear fit). (a) Checkerboard of Figure 1a. (b) Checkerboard of Figure 1e. (c) Checkerboard of Figure 1j.
Figure 7. Re-projection error statistical comparison between the Matlab nonlinear fit and Procedure C (Bayesian nonlinear fit). (a) Checkerboard of Figure 1a. (b) Checkerboard of Figure 1e. (c) Checkerboard of Figure 1j.
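The per-corner re-projection errors summarized in Figures 6 and 7 can be computed, under OpenCV conventions, with a helper along the following lines; `objp`, `corners`, `K`, `dist`, `rvec`, and `tvec` are assumed to come from a calibration run such as the one sketched after Figure 1.

```python
# Per-corner re-projection errors (pixels) for one checkerboard pose,
# assuming OpenCV conventions for the intrinsics and distortion vector.
import numpy as np
import cv2

def reprojection_errors(objp, corners, K, dist, rvec, tvec):
    proj, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
    return np.linalg.norm(proj.reshape(-1, 2) - corners.reshape(-1, 2), axis=1)

# Example: summary statistics for the first pose of a calibration run.
# err = reprojection_errors(objpoints[0], imgpoints[0], K, dist, rvecs[0], tvecs[0])
# print(err.mean(), err.std(), err.max())
```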
Table 1. Statistical moments and PDFs for image subset 1.

| Parameter | Mean (Zhang) | Std. Dev. (Zhang) | PDF (Zhang) | Mean (Proposed Bayesian) | Std. Dev. (Proposed Bayesian) | PDF (Proposed Bayesian) |
|---|---|---|---|---|---|---|
| f_x | 565.0326 | 3.3316 | Gaussian | 564.9714 | 3.1788 | Logistic |
| f_y | 565.5575 | 3.3106 | Gaussian | 565.3179 | 3.5436 | Weibull |
| c_x | 647.1847 | 0.6420 | Gaussian | 647.1668 | 0.8688 | Gaussian |
| c_y | 504.2582 | 0.6714 | Gaussian | 504.3673 | 1.0146 | Gaussian |
| k_1 | −0.2693 | 0.0037 | Gaussian | −0.2700 | 0.0040 | Gumbel |
| k_2 | 0.1244 | 0.0068 | Gaussian | 0.1252 | 0.0071 | GumbelMin |
| k_3 | −0.0467 | 0.0063 | Gaussian | −0.0458 | 0.0062 | Gaussian |
| p_1 | −0.0022 | 0.0003 | Gaussian | −0.0022 | 0.0005 | Gaussian |
| p_2 | 0.0026 | 0.0002 | Gaussian | 0.0027 | 0.0003 | Beta |
Table 2. Statistical moments and PDFs for image subset 3.

| Parameter | Mean (Zhang) | Std. Dev. (Zhang) | PDF (Zhang) | Mean (Proposed Bayesian) | Std. Dev. (Proposed Bayesian) | PDF (Proposed Bayesian) |
|---|---|---|---|---|---|---|
| f_x | 566.8778 | 3.6615 | Gaussian | 567.4637 | 3.5116 | Gaussian |
| f_y | 567.4131 | 3.7248 | Gaussian | 567.9525 | 3.6931 | Gaussian |
| c_x | 651.4962 | 0.6736 | Gaussian | 651.7141 | 0.9339 | Uniform |
| c_y | 502.6545 | 0.7893 | Gaussian | 502.7747 | 1.0737 | Uniform |
| k_1 | −0.2465 | 0.0036 | Gaussian | −0.2466 | 0.0029 | Gaussian |
| k_2 | 0.0756 | 0.0023 | Gaussian | 0.0756 | 0.0021 | Gaussian |
| k_3 | −0.0116 | 0.0006 | Gaussian | −0.0117 | 0.0006 | Gaussian |
| p_1 | −0.0010 | 0.0002 | Gaussian | −0.0010 | 0.0003 | Gaussian |
| p_2 | −0.0002 | 0.0002 | Gaussian | −0.0002 | 0.0003 | Gaussian |
Table 3. Statistical moments and PDFs for image subset 5.

| Parameter | Mean (Zhang) | Std. Dev. (Zhang) | PDF (Zhang) | Mean (Proposed Bayesian) | Std. Dev. (Proposed Bayesian) | PDF (Proposed Bayesian) |
|---|---|---|---|---|---|---|
| f_x | 553.2964 | 4.5922 | Gaussian | 552.7039 | 3.3532 | Gaussian |
| f_y | 548.7821 | 4.3422 | Gaussian | 547.2856 | 3.6085 | Uniform |
| c_x | 641.1869 | 2.1851 | Gaussian | 641.2931 | 2.9777 | Gumbel |
| c_y | 502.4452 | 2.2514 | Gaussian | 502.2640 | 2.8074 | Weibull |
| k_1 | −0.2400 | 0.0051 | Gaussian | −0.2396 | 0.0037 | Logistic |
| k_2 | 0.0766 | 0.0047 | Gaussian | 0.0764 | 0.0038 | Logistic |
| k_3 | −0.0140 | 0.0015 | Gaussian | −0.0141 | 0.0013 | Logistic |
| p_1 | −0.0012 | 0.0006 | Gaussian | −0.0011 | 0.0006 | Gaussian |
| p_2 | 0.0035 | 0.0005 | Gaussian | 0.0036 | 0.0005 | Beta |
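To propagate the tabulated uncertainties to a derived quantity, one can sample each marginal from its reported mean and standard deviation. The sketch below does this by moment matching for one Table 1 entry; recovering a Logistic scale from the standard deviation is exact (scale = SD·√3/π), whereas doing the same for Weibull, Gumbel, or Beta marginals from two moments alone would only be an approximation, and the 1280-pixel image width used in the example is an assumption.

```python
# Moment-matching Monte Carlo propagation of a Table 1 (Bayesian) marginal;
# an illustrative approximation, not the paper's posterior samples.
import numpy as np
from scipy import stats

N = 10_000
# f_x from Table 1: Logistic with mean 564.9714 and SD 3.1788.
fx = stats.logistic.rvs(loc=564.9714, scale=3.1788 * np.sqrt(3) / np.pi, size=N)

# Derived quantity: horizontal field of view for an assumed 1280-pixel-wide image.
fov_x = 2.0 * np.degrees(np.arctan(1280.0 / (2.0 * fx)))
print(f"FOV_x: {fov_x.mean():.2f} deg +/- {fov_x.std():.2f} deg")
```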
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
