A Stable, Efficient, and High-Precision Non-Coplanar Calibration Method: Applied for Multi-Camera-Based Stereo Vision Measurements

Traditional non-coplanar calibration methods, represented by Tsai's method, are difficult to apply in multi-camera-based stereo vision measurements because of insufficient calibration accuracy, inconvenient operation, and other drawbacks. Based on projective theory and matrix transformation theory, a novel mathematical model is established to characterize the transformation from targets' 3D affine coordinates to cameras' image coordinates. On this basis, novel non-coplanar calibration methods for both monocular and binocular camera systems are proposed in this paper. To further improve the stability and accuracy of calibration, a novel circular feature point extraction method based on a regional Otsu algorithm and a radial section scanning method is proposed to precisely extract the circular feature points. Experiments verify that the proposed calibration methods are easy to operate and more accurate than several classical methods, including Tsai's and Zhang's; moreover, the intrinsic and extrinsic parameters of multi-camera systems can be calibrated simultaneously. The proposed circular feature point extraction algorithm is stable and precise, and effectively improves calibration accuracy for both coplanar and non-coplanar methods. Real stereo measurement experiments demonstrate that the proposed calibration and feature extraction methods have high accuracy and stability, and can further serve complicated shape and deformation measurements, for instance, stereo-DIC measurements.


Introduction
Camera calibration is the necessary process to determine the unknown basic parameters of camera imaging and the transformation parameters from the world coordinate system to the camera coordinate system. The mapping relationship between the 3D world coordinates of targets' feature points and the 2D image coordinates can be used to acquire the unknown parameters based on an ideal camera imaging model [1]. Precise calibration parameters of a camera-based measurement system are the prerequisite for image-based 3D reconstruction, because the calibration parameters directly participate in the mapping process from 3D reference coordinates to 2D image coordinates, as well as the inverse remapping process [2,3]. Therefore, developing a high-precision camera calibration method is of great significance.
According to the geometry characteristics of the calibration targets, the existing camera calibration methods can be divided into the following three categories: 3D stereo target calibration methods, 2D planar target calibration methods, and self-calibration methods.
The representative traditional 3D stereo methods are the Direct Linear Transformation (DLT) method developed by Abdel-Aziz [4] and the "two-stage" non-coplanar method developed by Tsai [5]. The DLT method bridged the gap between photogrammetry and computer vision.
Shi [6] extended DLT and proposed a DLT-Lines method, in which the camera's intrinsic and extrinsic parameters can be extracted linearly from the transformation matrix, followed by nonlinear optimization with distortion coefficients. Tsai proposed a two-step calibration method based on the radial alignment constraint (RAC). Tsai's method has a moderate amount of calculation and high accuracy, but the implementation of Tsai's non-coplanar method is cumbersome. The extrinsic parameters calibrated by Tsai's method are inaccurate, and the tangential distortion parameters of lenses cannot be calibrated because of the RAC model's insufficiency. J. Zhang [7], Zheng [8], et al. noticed these drawbacks and proposed corrective actions for Tsai's non-coplanar method. However, their research is still not impeccable and cannot be efficiently applied to multi-camera calibration tasks. The deficiencies of Tsai's method and some incomplete improvements will be further discussed in Section 2.3.
2D planar target calibration methods are also called coplanar calibration methods [5,8-11]. The representative coplanar method was proposed by Zhang [9]. As a milestone in camera calibration, Zhang's method is easy to use in practice and provides sufficient accuracy for most applications. Zhang's method requires taking images of a 2D target at multiple viewing positions in multiple planes, and then calculates the intrinsic and extrinsic parameters and distortion coefficients through linear initial parameter solving and nonlinear optimization. Zhu et al. [10], Sels et al. [11], Chen et al. [12], and many other scholars further developed Zhang's method, mostly by changing the calibration patterns, increasing the number of targets' feature points, and increasing the extraction accuracy of feature points in images. It must be admitted that the calibration accuracy and the uncertainty of the calibrated parameters were improved to a certain extent by the above research. However, these improvements mostly come at the expense of applying complicated, time-consuming feature extraction algorithms. Meanwhile, there is little innovation in the calibration model and mathematical operation process beyond Zhang's original method.
Self-calibration methods only use the corresponding relationships among the surrounding images captured during the camera's movement to perform the calibration. Hartley [13] and Maybank and Faugeras et al. [14] first proposed the idea of camera self-calibration. Due to the unstable characteristics of natural features and feature extraction algorithms, self-calibration methods are hard to maintain at high accuracy and robustness. Li et al. [15], Li et al. [16], and other scholars tried to use manually selected features to replace natural textural features; calibration accuracy is relatively improved, but at the huge expense of time-consuming feature extraction and increased mathematical complexity. In general, existing self-calibration methods have poor accuracy, low efficiency, and low robustness, so they are difficult to use in high-precision stereo measurements. This paper focuses on developing a stable, efficient, and accurate calibration method for multi-camera-based high-precision stereo measurements. As a result, we will not discuss self-calibration methods in the rest of this paper.
Camera-sensing systems used for stereo measurements can be roughly divided into two types. The first type is the monocular camera measurement system. A single camera cannot directly acquire spatial depth information, but a monocular camera system often contains an extra feature projection device, e.g., line-structured-light camera-based sensors, laser-structured-light camera-based sensors, etc. Calibration for structured-light-based monocular camera measurement systems mainly includes two processes: the intrinsic parameter calibration of the monocular camera, and the extrinsic parameter calibration of the connecting structure between the monocular camera and the light projection device. Calibration for structured-light-based monocular camera measurement has been studied by many scholars [17,18]. This paper focuses more on the calibration of the second type of camera sensing system, the multi-camera-based stereo measurement system. A binocular camera system is the most typical and fundamental multi-camera system. The calibration of a binocular camera system also contains two steps, which are the separate intrinsic parameter calibrations of each camera and the extrinsic parameter calibration between the cameras. The main contributions of this paper are summarized as follows:

1. This paper establishes a novel improved affine coordinate correction mathematical model for non-coplanar calibration. A novel calibration method based on this model is established for both monocular and binocular camera systems. Simulations and real experiments verified that our novel methods have better accuracy, stability, and efficiency than the compared methods.

2. For further improving the accuracy and stability of existing calibration methods, a novel, simple circular feature point extraction algorithm, based on combining local Otsu thresholding with gradient-based radial section scanning for edges, is proposed in this paper. Simulations and real experiments demonstrate that our algorithm has better extraction accuracy, and better stability against illumination and viewing angle changes, than the traditional algorithm from OpenCV.

3. Real all-process 3D reconstruction experiments, covering both discrete feature points and the full-field region of interest (ROI) of an object's surface, were conducted from the stereo system's calibration, feature extraction, and feature stereo matching through to the final stereo reconstruction. The experiments demonstrate the feasibility of our calibration methods in real measurement scenes, and stereo measurements using this paper's calibration parameters achieve better accuracy than the compared methods.
The layout of this paper is organized as follows. Section 2 formulates related works, models, strategies, and restrictions of current camera calibration methods and stereo measurements. Sections 3 and 4 present the methodology of this paper's research. The experiments and results are presented in Section 5. Finally, Section 6 concludes the paper and indicates future directions.

Mathematical Model and Some Developments of Tsai's Non-Coplanar Calibration Method
We have summarized Tsai's non-coplanar mathematical model [5] in Table 1. Based on the radial alignment constraint (RAC), an overdetermined equation can be used to solve seven independent intermediate variables a_1, a_2, ..., a_7. Then, with two orthogonality constraints of the rotation matrix, s_x, R_3x3, T_x, and T_y can be solved first. Correspondingly, the initial values of T_z and f can be solved linearly from the calculated parameters and another overdetermined equation. Finally, nonlinear optimization is used to refine part of the calibration parameters and the radial distortion coefficients. Calibration of a binocular camera system with Tsai's method is the combination of two separate calibrations of the single cameras.
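As an illustrative sketch of the two-stage pattern (not Tsai's actual constraint matrix, which is built from matched image and world points), the linear stage amounts to a least-squares solution of an overdetermined system in the seven intermediate variables:

```python
import numpy as np

# Illustrative sketch: the RAC step reduces to an overdetermined linear
# system M @ a = b in the seven intermediate variables a1..a7, solved in
# the least-squares sense. The data here are synthetic; building the real
# M from calibration points is omitted.
rng = np.random.default_rng(0)
a_true = rng.normal(size=7)                       # hypothetical intermediates
M = rng.normal(size=(50, 7))                      # 50 feature points, 7 unknowns
b = M @ a_true + rng.normal(scale=1e-8, size=50)  # nearly noise-free data

a_est, residuals, rank, _ = np.linalg.lstsq(M, b, rcond=None)
# With well-conditioned data, the linear stage recovers the intermediates:
recovery_error = np.max(np.abs(a_est - a_true))
```

The second stage would then feed only a few of the recovered parameters (T_z, f, and the radial distortion coefficients) into a small nonlinear optimization, which is why the method is efficient.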

[Table 1. Tsai's non-coplanar calibration model: matrix of intrinsic parameters with known optic center; linear initial value solving under RAC constraints; then a nonlinear optimization step, which for a binocular camera system is I(T_a_z, f_a, k_a1, k_a2, k_a3, T_b_z, f_b, k_b1, k_b2, k_b3) = min.]

Tsai's calibration method has a simple mathematical model and is very easy to implement as an algorithm. The algorithm is efficient because most of the parameters are computed by linear overdetermined equations and very few elements enter the nonlinear optimization. However, Tsai's calibration model has some drawbacks. Firstly, the RAC constraints take no account of the tangential distortion of the lens. J. Zhang et al. [7] and Xu et al. [23] introduced a tangential distortion model into Tsai's method and significantly improved the calibration accuracy. Tang et al. [24] also considered tangential distortion with Tsai's model and verified that Tsai's method is algorithmically more efficient than Zhang's method.
Despite the better efficiency of Tsai's method, fewer scholars and engineers apply it in real measurements than apply Zhang's method. This is because Tsai's method is less accurate and is inconvenient to implement, as detailed in Sections 2.3 and 2.5.

Mathematical Model and Recent Development of Coplanar Calibration Method
As a milestone of calibration methods, Zhang's coplanar calibration method has been applied in many computer vision tasks for its convenience, accuracy, and stability. The mathematical model in [9] is summarized in Table 2.

[Table 2. Zhang's coplanar calibration model: matrix of intrinsic parameters and distortion coefficients; linear initial value solving via the homography H_3x3; then a nonlinear optimization step, which for a binocular camera system is I(K_a_Zhang, D_a, K_b_Zhang, D_b, R_a-b_vector3x1, T_a-b_vector3x1, R_a1_vector3x1, T_a1_vector3x1, ..., R_aN_vector3x1, T_aN_vector3x1) = min.]

Let P̃_I and P̃_W respectively represent the matched homogeneous coordinates in the 2D image pixel coordinate system and the world coordinate system. The calibration process of coplanar methods is based on the transformation from 2D world coordinates to 2D image pixel coordinates, which can be described by the 3x3 homography matrix shown as H_3x3 in Table 2. However, one H_3x3 matrix can only supply two constraints for the linear solution of the parameters, yet there are five intrinsic parameters to be solved. Thus, at least two images from different orientations are needed to evaluate the four initial values of the intrinsic parameters if γ = 0 is imposed. The initial intrinsic parameters are then used to calculate the rotation and translation vectors between each planar target's world coordinate system and the camera system. It is worth mentioning that if the initial K_Zhang and H_3x3 are directly used to compute R_3x3, the resulting R_3x3 cannot strictly satisfy the orthogonality of a rotation matrix; singular value decomposition is therefore used in [9] to approximate the nearest rotation matrix. The R_3x3 rotation matrices are then converted to R_3x1 rotation vectors. Since the DOF of a rotation is three, R_3x1 rotation vectors better suit the follow-up nonlinear optimization. R_3x1 and R_3x3 are related by the Rodrigues formula; using the R_3x1 vector rather than the R_3x3 matrix in nonlinear optimization avoids the problem of insufficient orthogonality of the rotation matrix.
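The SVD projection onto the nearest rotation matrix and the matrix-to-vector Rodrigues conversion described above can be sketched in a few lines of numpy (the function names are ours, for illustration):

```python
import numpy as np

def nearest_rotation(R_approx):
    """Project an approximate 3x3 matrix onto the nearest rotation matrix
    via SVD, as done for the initial R in Zhang's method."""
    U, _, Vt = np.linalg.svd(R_approx)
    R = U @ Vt
    if np.linalg.det(R) < 0:            # guard against reflections
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R

def rotation_to_vector(R):
    """Rodrigues conversion: 3x3 rotation matrix -> 3x1 axis-angle vector."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]])
    return theta * axis / (2.0 * np.sin(theta))
```

The projected matrix is exactly orthogonal with determinant +1, so the three-component rotation vector carries precisely the three DOF that the nonlinear optimization needs.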
Recently, scholars have developed different calibration methods based on Zhang's calibration model. Yin et al. [25] improved binocular calibration accuracy by timing correction of two consecutive frames; Cheng et al. [26] used perspective correction and phase estimation together to increase the accuracy of control point localization and consequently of camera calibration; Chen et al. [27] applied sub-pixel edge detection and cross-ratio invariance to refine the circular control points' image positions and thereby increase calibration accuracy; Wang et al. [28] extended Pascal's theorem to the affine plane to obtain constraints on the circular points in images and used properties of the circle and the line at infinity to calibrate the camera's intrinsic parameters; Dong et al. [29] developed a confidence-based camera calibration method with a modified census transform for chessboard patterns, which achieves accurate calibration results; Zhang et al. [30] designed particular stereo targets with multiple feature planes to simultaneously identify the intrinsic and extrinsic parameters of a camera system from a single captured image.
Obviously, scholars focus more on improving feature point extraction methods and accuracy than on the mathematical model of coplanar methods. This research tendency partly reflects that the accuracy of Zhang's calibration model has been confirmed as effective by most scholars; consequently, improvements can only emerge in aspects other than the mathematical model.

Deficiencies of Tsai's Calibration Model and Recent Research of Non-Coplanar Calibration Model Based on Affine Coordinate Correction
Zheng et al. [8] pointed out that an uncorrected sliding direction of the planar calibration target greatly influences the accuracy of Tsai's non-coplanar method. This inference can be confirmed by the simulation in Figure 1. Assume there is an uncorrected yaw angle between the sliding direction and the ideal world coordinate system defined by the planar target and its normal vector. If the sliding shifts are erroneously assumed to lie along z_w of the world system, Tsai's method yields the results in Figure 1: the absolute value of the focal length's relative error reaches 20.47% when the yaw angle between the sliding direction and the planar target increases to 4°, and it continues to grow with the angle. Moreover, Zheng et al. [8] also pointed out that Tsai's mathematical model cannot obtain a strictly orthogonal rotation matrix, for the following reason. The first step of linear initial value solving uses seven independent intermediate variables to acquire s_x, R_3x3, T_x, and T_y, which have six DOF. This is essentially the solution of an overdetermined equation and can only yield an approximate solution. Let r1 and r2 be the first two rows of R_3x3. In this procedure, Tsai can only use the inner products of r1 and r2 with themselves to calculate s_x and T_y. To obtain an orthonormal matrix in this way, the additional property r1 · r2 = 0 should also be applied to ensure the orthogonality of R_3x3. However, Tsai's method missed this constraint and made no compensation in the subsequent nonlinear optimization. Zheng et al. [8] proposed a novel non-coplanar method based on an affine coordinate correction (ACC) model, as shown in Table 3; based on its mathematical characteristics, this method is called the ACC method in this paper.
The ACC method introduces a 2D normalized vector η = (η_x, η_y)^T to correct the planar target's two axes, and a 3D normalized vector β = (β_x, β_y, β_z)^T to correct the sliding direction perpendicular to the plane. The optical center is assumed to be fixed at the center of the image. Since the number of intermediate parameters equals the DOF of the parameters to be calibrated, the 11 intermediate parameters solved from the overdetermined equations are taken into the nonlinear optimization together with the distortion coefficients. With enough orthonormality constraints on the rotation matrix and the properties of the normalized vectors, analytical solutions for the final parameters are easily obtained from the optimized intermediate parameters.
The ACC calibration model works well and can obtain accurate calibration results for a monocular camera system. However, this model cannot fit the calibration of a binocular camera system well, as shown in Table 4.
Firstly, Table 3 illustrates that the intermediate parameters of a single camera have 11 DOF. If the ACC model is extended to a binocular camera system as in Table 4, one obtains two separate intermediate matrices whose total DOF is 22. But the physical meanings of η and β imply that they should be identical for both cameras when performing a non-coplanar calibration of a binocular system. This means that the DOF of the parameters to be calibrated reduces to 19, and the original unconstrained nonlinear optimization of the intermediate parameters cannot be applied to the binocular system. Constraints from the rotation matrix and the correction vectors were therefore introduced as penalty constraints to construct the nonlinear optimization. To further simplify the binocular calibration model based on the ACC model and maintain the orthogonality of the rotation matrix, Zheng et al. [8] chose to calibrate the intrinsic parameters of each single camera first, so as to reduce the DOF of the parameters to be calibrated and introduce enough penalty constraints.
Unfortunately, this mathematical compromise did not improve the calibration accuracy, but rather introduced an extra workload for determining the penalty coefficient of each penalty constraint.
To overcome the above problems existing in Tsai's method and in the ACC method, this paper proposes a novel improved affine coordinate correction (IACC) calibration method for both monocular and binocular camera systems.

Local Spatial Optimality of Calibration Parameters
No matter what method is used to evaluate the accuracy of calibration parameters, the optimality of these parameters is only mathematical, and only approximately physical. This approximation makes the calibration parameters lack strict physical significance, which means that the calibration parameters from different calibration methods only have optimum properties in particular time and space domains.
The above analysis indicates that measurement objects at different positions within the FOV of a camera system have their own optimal calibration parameters for the best measurement results. Experience suggests that the better the calibration targets cover the measurement position, the more fitting the acquired calibration parameters are for the measurements. These deductions are reflected in Figure 2. This kind of local optimality may also occur in the time domain; however, compared to the spatial case, current camera sensor hardware is quite stable over short time intervals. Thus, this paper focuses on the local spatial optimality of calibration parameters. The experiments in Section 5.5 support these deductions.

The depth of field (DOV) within the FOV of a camera system can be roughly evaluated from the nominal values of the lens' focal length, the aperture, the allowable circle of confusion's diameter, and the object distance. Measurement objects can then be placed within the range of the DOV. To acquire the best measurement accuracy, the calibration targets' feature points should come close to the measurement position and cover the limited measurement depth as much as possible. It is worth noting that a larger covered depth of the planar targets is not better than an accurate smaller covered depth determined by the measured objects' actual 3D information, especially for high-precision close-range photogrammetry.
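The rough depth-of-field evaluation described above can be sketched with the standard thin-lens formulas from nominal lens data; this is an illustrative approximation, and the exact formulas used in practice may differ:

```python
def depth_of_field(f_mm, N, c_mm, s_mm):
    """Rough near/far limits of acceptable sharpness (thin-lens model).
    f_mm: nominal focal length; N: f-number (aperture);
    c_mm: allowable circle of confusion diameter; s_mm: object distance."""
    H = f_mm ** 2 / (N * c_mm) + f_mm                 # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2.0 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return near, far

# Example: a 25 mm lens at f/4, 0.01 mm circle of confusion, object at 1 m.
near, far = depth_of_field(25.0, 4.0, 0.01, 1000.0)
```

For these illustrative numbers the usable depth spans only a few centimeters around the 1 m object distance, which is why the calibration target's sliding range should be matched to the measured object's actual depth rather than to the whole FOV.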

Implementation and Restrictions of Coplanar Calibration Methods and Non-Coplanar Calibration Methods
As shown in Figure 3a, coplanar methods for monocular camera systems need to take images of planar targets from various orientations (at least two); if the planar targets were set in a single viewing position, coplanar methods would lose efficacy. As shown in Figure 3b, non-coplanar methods need to take images of planar targets from one fixed orientation; at least two images and one known shift of the targets are needed to carry out a continued calibration process. Correspondingly, the implementation processes of binocular camera system calibration by coplanar and non-coplanar methods are shown in Figure 3c,d. Basically, a non-coplanar method is an improved calibration method using virtual 3D stereo targets. To guarantee the geometric accuracy of the patterns' world coordinates, extra equipment (Tsai's method) or a mathematical model (the ACC and IACC methods) must be introduced to correct the world coordinates. This may bring extra workload, but the mathematical correction is obviously more efficient than manual adjustment with extra instruments.

In most actual measurements, the rough 3D information of the measured objects is known, and the structure of a multi-camera measurement system is specially designed for the measured object. If the depth's changing range is not too large, it is better to perform calibration over that depth range than over the whole FOV. On this occasion, non-coplanar calibration methods are more applicable than coplanar methods, for tiny shifts of planar targets in one fixed direction can generate large numbers of feature points for the calibration task. By contrast, considering the size of planar targets and the demand for multi-viewing images, an equivalent number of feature points usually requires a larger depth range than the measured objects' depth range; otherwise, minor changes of the planar targets' inclination angles may not allow coplanar methods to obtain the right calibration parameters. Thus, the restriction of coplanar methods comes from the demand for image acquisition from different viewing angles, and the restriction of non-coplanar methods comes from the measurement and correction of the sliding shifts.

Strategies of Enhancing Calibration Performance by Improving Feature Points Extraction Algorithms
Alongside calibration models, improvements of the calibration targets' 3D fabrication and the corresponding 2D image feature extraction can directly enhance calibration results.
An introduction to scholars' strategies for improving calibration performance by enhancing 2D image feature extraction algorithms is given in Appendix A.
This paper concentrates on improving the extraction accuracy and stability for symmetric circles. A novel extraction algorithm for symmetric circle patterns is proposed, which has better extraction accuracy, and better stability against illumination and target orientation changes, than OpenCV's traditional algorithm. Correspondingly, the calibration accuracy and stability can be further enhanced by using the proposed algorithm.
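A minimal numpy sketch of the idea, combining an Otsu threshold for a coarse circle location with radial-ray gradient scanning for edge refinement, is given below. It is an illustrative simplification, not the exact algorithm of this paper (which operates regionally and at sub-pixel precision):

```python
import numpy as np

def otsu_threshold(patch):
    """Classic Otsu threshold on a grayscale uint8 patch."""
    hist = np.bincount(patch.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0.0, 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                       # foreground mean
        m1 = (sum_all - sum0) / (total - w0) # background mean
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def refine_center_radial(patch, cx, cy, n_rays=64):
    """Refine a circle center: scan rays from a coarse center and locate
    the gradient extremum along each radial intensity profile."""
    h, w = patch.shape
    r_max = int(min(cx, cy, w - 1 - cx, h - 1 - cy))
    edge_pts = []
    for ang in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        rs = np.arange(1, r_max)
        xs = np.clip((cx + rs * np.cos(ang)).astype(int), 0, w - 1)
        ys = np.clip((cy + rs * np.sin(ang)).astype(int), 0, h - 1)
        grad = np.abs(np.diff(patch[ys, xs].astype(float)))
        if grad.size:
            k = int(np.argmax(grad))         # strongest intensity jump = edge
            edge_pts.append((xs[k], ys[k]))
    return np.asarray(edge_pts, float).mean(axis=0)  # centroid of edge points
```

Because the edge points are sampled symmetrically around the circle, their centroid estimates the center robustly even under moderate illumination changes, which is the property the proposed algorithm exploits.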

A Novel Improved Affine Coordinate Correction Mathematical Model for Non-Coplanar Calibration
Based on the analysis in Sections 2.1-2.4, a novel improved affine coordinate correction (IACC) mathematical model for non-coplanar calibration is proposed. As shown in Table 5, P̃_I and P̃_p respectively represent the matched homogeneous coordinates in the 2D image pixel coordinate system and in the 3D affine coordinate system built from the uncorrected planar target's two axes and the 1D sliding direction.

Coordinate Space Transformation from Target Affine Space to Orthogonal World Coordinate Space
Figure 4's left part shows the ACC model's corrections for the calibration target's affine coordinates, in which the normalized 2D vector η corrects the planar target's vertical and horizontal axes, and the normalized 3D vector β corrects the sliding direction into the orthogonal world coordinate system O_w-X_wY_wZ_w. Here, η and β are introduced to describe the skews of the planar target's two axes and of the stage's sliding direction; if the planar target and the sliding direction remain fixed, η = (η_x, η_y)^T and β = (β_x, β_y, β_z)^T should stay unchanged. However, when we ran calibration experiments with the ACC method for different cameras using the same sliding stage and planar target (both fixed), η and β did not always stay the same, and the change of η was more remarkable than that of β when switching between cameras. It is more likely that the 1 DOF from η actually characterizes some intrinsic property of the different cameras. In fact, our planar targets are fabricated on optical glass by a high-precision (close to 1 µm) lithography process, which means η should be infinitely close to (0, 1)^T.
Figure 4. Transformation model for mapping a target affine space coordinate system to an orthogonal world coordinate system. This process can be represented as a matrix transformation equation.
Based on the actual physical reality, we propose our novel improved affine coordinate correction model, as shown in Figure 4's right part. In our calibration model, the planar target's two axes are considered strictly orthogonal, which means the original η should be adjusted to (0, 1)^T. As shown in Figure 4's right part, O_w-X_wY_wZ_w is the ideal orthogonal 3D world coordinate system of the planar target. The normalized 3D vector β remains in our calibration model to correct the sliding direction to the ideal one.
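Under the IACC model, building the corrected 3D world coordinates of a planar target slid along an imperfect direction can be sketched as follows (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def corrected_world_points(planar_xy, step_index, step_mm, beta):
    """IACC-style world coordinates: the planar target's own axes are taken
    as strictly orthogonal, while the normalized 3D vector beta corrects
    the (possibly skewed) sliding direction."""
    beta = np.asarray(beta, float)
    beta = beta / np.linalg.norm(beta)            # enforce unit length
    planar_xy = np.asarray(planar_xy, float)
    planar = np.column_stack([planar_xy, np.zeros(len(planar_xy))])
    # Each slide of step_index * step_mm moves the whole plane along beta.
    return planar + step_index * step_mm * beta
```

For a perfectly vertical stage, beta = (0, 0, 1)^T and the model reduces to stacking identical planes along z_w; a slightly tilted stage simply shears the stack, which is exactly the error the correction absorbs.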
Correspondingly, the transformation equation should be adjusted as follows:

[X_w, Y_w, Z_w]^T = [[1, 0, β_x], [0, 1, β_y], [0, 0, β_z]] · [x_p, y_p, z_p]^T. (1)

Figure 5 shows the pinhole imaging model of the camera. O-XYZ is the camera coordinate system, whose unit is mm. O_R-UV is the camera image sensor's two-dimensional pixel coordinate system, whose origin is in the upper left corner of the image sensor and whose unit is pixels. O_u-x_u y_u is the image-plane coordinate system, whose unit is mm. According to the theory of rigid transformation, the relationship between the camera coordinate system O-XYZ and the calibration target's world coordinate system O_w-X_wY_wZ_w can be expressed as:

[X, Y, Z]^T = R_3x3 · [X_w, Y_w, Z_w]^T + T_3x1, (2)

in which R_3x3 is the rotation matrix and T_3x1 is the translation vector. The transformation of points' coordinates between system O_R-UV and O-XYZ can be expressed by Equation (3):

ρ · [u, v, 1]^T = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]] · [X, Y, Z]^T. (3)

The focal length in (3) is expressed as f_x and f_y, which separately express the focal length's equivalent pixel counts in the sensor's horizontal and vertical directions. Because homogeneous coordinate transformations are used, ρ in this paper denotes the proportionality coefficient of the transformation and has no strict physical meaning.
Thus, the ideal process from the affine space coordinate system Op-XpYpZp to the camera image sensor's two-dimensional pixel coordinate system OR-UV can be expressed as Equation (4). Compared with the ACC model in Table 3, our novel IACC model introduces γ to describe the skewness of the image sensor's two axes. If the actual included angle of the imaging sensor's two axes is θ, then γ = fy·cot θ in physical meaning, and the fy in Equation (4) actually denotes fy/sin θ. The physical meaning of γ illustrates that when θ is close to 90°, γ should be close to 0. Thus, many scholars and engineers choose to regard γ as 0 in actual scenes, based on the manufacturing level of current industrial cameras.
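As an illustration of the relation γ = fy·cot θ, the sketch below builds the intrinsic matrix with a skew term and recovers the sensor-axis angle from γ. The numeric values are hypothetical, chosen only to roughly match this paper's camera (5.3 µm pixels, 12 mm lens, so fy ≈ 2264 pixels); with γ = 6, the recovered angle is close to the 89.847° quoted later in the text.

```python
import numpy as np

def intrinsic_matrix(fx, fy, gamma, u0, v0):
    """Build the 3x3 intrinsic matrix with skew term gamma (illustrative)."""
    return np.array([[fx, gamma, u0],
                     [0.0, fy, v0],
                     [0.0, 0.0, 1.0]])

def sensor_axis_angle(fy, gamma):
    """Recover the included angle theta (degrees) of the sensor's two axes
    from gamma = fy * cot(theta)."""
    return np.degrees(np.arctan2(fy, gamma))

# For a nearly orthogonal sensor, gamma is small and theta is close to 90 degrees.
K = intrinsic_matrix(fx=2264.0, fy=2264.0, gamma=6.0, u0=640.0, v0=512.0)
theta = sensor_axis_angle(2264.0, 6.0)
```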
But what really appeals to us is that γ supplies 1 DOF for the intermediate matrix A3×4, which we lost when we abandoned η, and γ is an intrinsic parameter of the camera. We will further explain the significance of γ for IACC in Section 4.1.

Processing of Lens' Distortion
There are two main types of lens distortion errors due to inevitable processing and assembly errors, i.e., radial distortion and tangential distortion.
The radial distortion is symmetrical about the main optical axis of the camera, and its mathematical model can be expressed as follows, in which q = xu² + yu², and (xu, yu) expresses the ideal (undistorted) normalized coordinates in the image-plane coordinate system Ou-xuyu, with the distortion center at Ou. k1, k2, . . . are the radial distortion coefficients, of which generally only the first two orders play a major role.
The tangential distortion is not symmetrical about the main optical axis of the camera lens, and its mathematical model is given below, in which p1 and p2 express the first two orders of tangential distortion coefficients.
In the image-plane system Ou-xuyu, the mathematical relationship between the ideal imaging point's normalized coordinate (xu, yu) and the actual imaging point's normalized coordinate (xd, yd) can be expressed by Equation (9). Note that in this paper the subscript u represents the ideal coordinate value, the subscript d represents the coordinate value with distortion, the superscript ' represents normalized coordinates, and the superscript ^ represents ideal image points' coordinates from reprojection.
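The distortion equations themselves are not reproduced in this excerpt, but the combined mapping described above follows the standard Brown distortion model; a minimal sketch, using q = xu² + yu² as defined earlier:

```python
import numpy as np

def distort(xu, yu, k1, k2, p1, p2):
    """Map ideal normalized coordinates (xu, yu) to distorted (xd, yd).

    q = xu^2 + yu^2 as in the text; radial terms k1, k2 and tangential
    terms p1, p2 follow the standard Brown model (a sketch, since the
    paper's Equations (7)-(9) are not shown here).
    """
    q = xu ** 2 + yu ** 2
    radial = 1.0 + k1 * q + k2 * q ** 2
    xd = xu * radial + 2.0 * p1 * xu * yu + p2 * (q + 2.0 * xu ** 2)
    yd = yu * radial + p1 * (q + 2.0 * yu ** 2) + 2.0 * p2 * xu * yu
    return xd, yd

# With all coefficients zero the mapping reduces to the identity.
assert distort(0.1, -0.2, 0, 0, 0, 0) == (0.1, -0.2)
```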
Combining Equation (3) with the known calibrated parameters, the ideal image points' coordinates (Ûd, V̂d) in OR-UV should be expressed as:

Key Procedures of IACC Calibration Method
4.1. Initial Value Linear Solving and Parameters Separation Method
The relationship between A3×4 from Equation (4) and the parameters to be calibrated can be expressed as follows, in which A3×4 can supply 11 DOF. The DOF of the final parameters to be calibrated (the intrinsic and extrinsic parameters, except for Pc = (U0, V0)) is also 11. Theoretically, the analytical solution of the camera's intrinsic and extrinsic parameters can be solved directly from a1~a11 by the algebraic solving method.
Firstly, assume Pc is at the image center, and solve the initial values of a1~a11 by the linear least squares method. With Equations (4) and (9), there are: With N pairs of corresponding calibration feature points, we can obtain the least squares solution a = (a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11)^T.
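The exact row arrangement of Equation (10) is not shown in this excerpt; as a sketch, the standard DLT-style stacking below estimates a1~a11 by linear least squares, assuming the last element of A3×4 is normalized to 1:

```python
import numpy as np

def solve_a_lstsq(P3d, p2d):
    """Linear least-squares estimate of a1..a11.

    P3d: (N, 3) world/affine points; p2d: (N, 2) pixel points.
    A sketch of the standard DLT arrangement, with the (3,4) element
    of the projection matrix fixed to 1.
    """
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(P3d, p2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        rhs.append(u)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        rhs.append(v)
    a, *_ = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float), rcond=None)
    return a  # a1..a11

def project(A, P3d):
    """Project 3D points with a 3x4 matrix A (homogeneous division)."""
    Ph = np.hstack([P3d, np.ones((len(P3d), 1))])
    q = Ph @ A.T
    return q[:, :2] / q[:, 2:3]
```

With noiseless synthetic correspondences, the recovered vector matches the generating matrix exactly, which makes this a convenient self-check before moving to real data.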
With enough orthogonal constraints and the properties of the normalized vector β shown in the left part of Equation (11), the right part of Equation (11) can be deduced as follows. In the calibration process, Tz > 0, fx > 0, fy > 0, and βz > 0 are specified. The analytical solution of Tz² can be obtained by solving the first four equations in Equation (11); further, we can solve the four parameters fx, fy, γ, and Tz. βx and βy can then be solved by Equations (5) and (6).
In physical meaning, the introduction of γ supplies an intrinsic parameter for individual cameras. In mathematical meaning, γ supplies 1 DOF for the intermediate matrix A3×4, which we lost when we abandoned η. Since the DOF of the final parameters (the intrinsic and extrinsic parameters, except for Pc = (U0, V0)) is equal to the DOF of A3×4, the orthogonality of the analytical solution of R3×3 can be guaranteed.
Bringing the above parameters back into Equation (9), the remaining extrinsic parameters can be obtained as follows. So far, the initial values of all the final parameters except the distortion coefficients have been solved linearly. The geometry constraints have fully confirmed the orthogonality of the rotation matrix, so there is no need to further approximate the rotation matrix's initial value.

Parameters' Nonlinear Optimization
Section 4.1 has given the method for solving the parameters' linear initial values. Combining Equations (4)-(8), the parameters to be calibrated are summarized as follows. With the improvement of lens manufacturing processes, the distortion of today's non-wide-angle camera lenses is very small, so the initial guess of D can simply be set to 0, and the initial guess of (U0, V0) can be set to the centre of the collected images. As one of the non-coplanar calibration methods' advantages, there is only one set of intrinsic and extrinsic parameters for all collected images. Equation (13) illustrates the Rodrigues transformation, which supplies the interconversion between the rotation matrix R3×3 and the rotation vector R3×1.
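The Rodrigues transformation of Equation (13) can be sketched directly (OpenCV's cv2.Rodrigues provides the same interconversion); the plain-NumPy version below assumes rotation angles below π for the inverse direction:

```python
import numpy as np

def rodrigues_vec_to_mat(rvec):
    """Rodrigues formula: 3-vector (axis * angle) -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, float) / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def rodrigues_mat_to_vec(R):
    """Inverse transformation: rotation matrix -> 3-vector (assumes theta < pi)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * axis
```

Parameterizing the rotation as a 3-vector keeps the optimization variable count minimal while the round trip back to a matrix stays exactly orthogonal.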
The minimum error squared sum objective function for pixel coordinates can be established as Equation (14), in which (Ûdi, V̂di) is the projection of the target's feature point from the affine space coordinate system Op-XpYpZp to the camera image sensor's two-dimensional pixel coordinate system OR-UV, and (Ui, Vi) is the corresponding feature point's coordinate extracted from the image. This paper uses the Levenberg-Marquardt algorithm to solve this nonlinear minimization problem of the monocular camera system as shown in Equation (14). Experiments verify that our method converges well to the optimum values when the initial guess of the parameters is well estimated.
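A minimal Levenberg-Marquardt loop illustrating how an objective like Equation (14) is minimized; the toy residual here is a hypothetical exponential fit standing in for the stacked reprojection residuals (Ûdi − Ui, V̂di − Vi):

```python
import numpy as np

def levenberg_marquardt(residuals, x0, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop with a forward-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residuals(x)
        # Numerical Jacobian, one column per parameter
        J = np.empty((r.size, x.size))
        eps = 1e-7
        for j in range(x.size):
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (residuals(xp) - r) / eps
        A = J.T @ J
        g = J.T @ r
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
        if np.sum(residuals(x + step) ** 2) < np.sum(r ** 2):
            x = x + step   # accept: damping shrinks toward Gauss-Newton
            lam *= 0.5
        else:
            lam *= 5.0     # reject: damping grows toward gradient descent
    return x

# Toy usage: recover (a, b) of y = a * exp(b * t) from noiseless samples.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
a, b = levenberg_marquardt(lambda x: x[0] * np.exp(x[1] * t) - y, [1.0, 0.0])
```

In practice a library solver (e.g., scipy.optimize.least_squares with method='lm') would be used; the loop above only shows the damping mechanism that makes convergence robust to imperfect initial guesses.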

Binocular Camera System Calibration Method
A binocular camera system can be seen as two related monocular cameras. Thus, one strategy to calibrate a binocular camera system is the combination of two related monocular camera calibrations. As shown in Figure 3d, when the binocular camera system takes images of mono-view non-coplanar 2D targets in a common viewing field, the collected images can be used to implement the binocular camera system calibration. As with the monocular calibration mentioned before, our novel method needs no extra equipment to make the sliding direction vertical to the target's plane.
At first, for each single camera, repeat the process of Equations (10)-(12) to calculate the initial values of the parameters. The parameters to be calibrated in binocular systems can be summarized as Equation (15). The careful reader may notice that we have repeatedly calculated βx, βy, and βz separately in the two single cameras' initial value solving processes. Theoretically, βx, βy, and βz should be the same for both cameras in a binocular system.
Our solution to this problem is to use either solution of (βx, βy) as the initial value for the binocular system. Then, take the other initial values of the final parameters as shown in Equation (15), along with (βx, βy), into the nonlinear optimization procedure.
Then, the minimum error squared sum objective function for pixel coordinates can be established as Equation (16), in which (Ûadi, V̂adi) and (Ûbdi, V̂bdi) are the target's feature point projections from the affine space coordinate system Op-XpYpZp to the left and right camera image sensors' two-dimensional pixel coordinate systems ORa-UaVa and ORb-UbVb, respectively. (Uai, Vai) and (Ubi, Vbi) are the corresponding feature points' coordinates extracted from the images of the different cameras.
As with the monocular calibration, this paper uses the Levenberg-Marquardt algorithm to solve the nonlinear minimization problem of the binocular camera system as shown in Equation (16). Experiments verify that our method converges well to the optimum value when the initial guess of the parameters is well estimated. It is worth mentioning that the present IACC calibration method has good universality and stability for conventional binocular camera systems.
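The shared-β design behind Equation (16) can be sketched as parameter packing plus residual stacking; res_cam_a and res_cam_b below are hypothetical per-camera residual functions standing in for the two cameras' reprojection terms:

```python
import numpy as np

def pack(shared_beta, params_a, params_b):
    """Pack the shared sliding-direction (beta_x, beta_y) with both cameras'
    own parameters into one optimization vector."""
    return np.concatenate([shared_beta, params_a, params_b])

def binocular_residuals(x, n_a, res_cam_a, res_cam_b):
    """Stack both cameras' residuals; beta (first 2 entries) is shared, so the
    optimizer is forced to keep the sliding direction consistent."""
    beta = x[:2]
    pa = x[2:2 + n_a]
    pb = x[2 + n_a:]
    return np.concatenate([res_cam_a(beta, pa), res_cam_b(beta, pb)])
```

Feeding this stacked residual to the same Levenberg-Marquardt solver used for the monocular case optimizes both cameras jointly while (βx, βy) stays common to both, which is exactly the consistency requirement noted above.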

Novel Simple Circle Feature Points Extraction Algorithm with High Accuracy and Stability Based on Local-ROI-OTSU and Radial Section Scanning Method
Datta et al. [31] and other scholars have verified that a circle pattern can obtain better calibration accuracy than a chessboard pattern in most instances. Refinement based on the iterative method [31], inverse rendering [32], or image rectification [26], etc., can indeed help improve the extraction accuracy of features. However, the above strategies are built on prior knowledge of the spatial relation between camera and targets, and their implementation is not simple. Also, they do not consider the algorithms' stability to illumination, rotations, etc.
This paper proposes a simple circle feature points extraction algorithm with high accuracy and stability based on Local-ROI-OTSU and radial section scanning method.The introduction and deduction are in Appendix B, and the improvements in accuracy and stability brought with our novel algorithm are verified in Sections 5.2-5.5.
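The full radial-section-scanning refinement is given in Appendix B; as a rough sketch of the Local-ROI-Otsu step only, the following applies Otsu's threshold to a local ROI and takes the segmented blob's centroid as a coarse circle center (assuming a dark circle on a bright back-lit background; the sub-pixel refinement is not shown):

```python
import numpy as np

def otsu_threshold(roi):
    """Otsu's threshold on a local ROI (uint8 grayscale): maximize the
    between-class variance over the ROI's own histogram."""
    hist = np.bincount(roi.ravel(), minlength=256).astype(float)
    total = hist.sum()
    w0 = np.cumsum(hist)                   # background pixel counts
    m0 = np.cumsum(hist * np.arange(256))  # background intensity sums
    w1 = total - w0
    mu0 = np.divide(m0, w0, out=np.zeros(256), where=w0 > 0)
    mu1 = np.divide(m0[-1] - m0, w1, out=np.zeros(256), where=w1 > 0)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return int(np.argmax(between))

def rough_circle_center(roi):
    """Coarse sub-pixel center: centroid of the dark (circle) pixels after
    local Otsu segmentation. Appendix B's radial scanning would refine this."""
    t = otsu_threshold(roi)
    mask = roi <= t
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()
```

Thresholding per local ROI instead of globally is what gives robustness to uneven illumination: each circle is segmented against its own neighbourhood statistics.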

Experiments Results and Discussion
Several experiments are set to test the performance of the methods proposed in this paper. The camera model used in this paper is the Basler acA1300-60gm, whose resolution is 1280 × 1024 and pixel size is 5.3 µm × 5.3 µm, with matching 12 mm lenses. Three classical and typical calibration methods (Tsai's method [5], Zhang's method [9], and the ACC method [8]) are used as contrast methods in the experiments.
Firstly, simulation experiments are carried out in Section 5.1 to analyze the performance of our calibration method with respect to the noise level, the number of planes, and different yaw angles. Stability simulations of the proposed novel circle feature points extraction algorithm are carried out in Section 5.2 to evaluate its stability with respect to illumination and viewing angle changes.
Then, real calibration experiments for the monocular camera are carried out in Sections 5.3 and 5.4. The intrinsic and extrinsic parameters of the two cameras are calibrated respectively by the proposed IACC method and the three contrast methods, and the accuracy of the resulting parameters is evaluated from multiple aspects.
Further, real calibration experiments for the binocular camera system are carried out in Section 5.5. Zhang's method, the ACC method, and the proposed IACC method are separately used to calibrate the binocular camera system's unknown parameters. Then, 3D reconstruction experiments with the calibrated parameters are performed on discrete feature points to test the actual measurement accuracy, and the measured distances between points are compared with the actual values. The experiments also verify the deduction in Section 2.3 about the local spatial optimality of calibration parameters.
In Section 5.6, the stereo-DIC method with the calibrated binocular systems is used to carry out full-field stereo measurements based on 3D reconstruction. The results show the feasibility of applying our IACC calibration method to both discrete feature points and full-field surface measurements.
To ensure objectivity, we use the same calibration targets with different patterns in the experiments comparing different calibration methods. The pattern processing accuracy of the targets is 1 µm. Different feature points extraction algorithms are used to obtain the points' sub-pixel coordinates for the contrast experiments. A Zolix KA50 motorized linear stage with an MC600 controller is used to generate displacements in a fixed direction for the non-coplanar methods, and an Attocube IDS3010 laser interferometer is used to monitor the 1D out-of-plane shifts of the targets' plane.
Performance with respect to the noise level. In this experiment, we use 10 planes in mono-viewing multiplane position to simulate the monocular camera calibration. The extrinsic parameters are set as follows: (Rv1, Rv2, Rv3) = (0, 0, 0), (Tx, Ty, Tz) = (−64.0, −51.2, 225.0), and (βx, βy) = (0.087, 0.000). Gaussian noise with 0 mean and σ standard deviation is added to the projected image points. We vary the noise level from 0.1 pixels to 2.0 pixels; for each noise level, we perform 100 independent trials, and the results shown are the averages. As we can see from Figure 6b, the relative errors in fx and fy are less than 0.4%, and for most simulated noise levels they are less than 0.3%. The other intrinsic parameters, i.e., γ, U0, and V0, show similar properties to fx and fy: just as shown in Figure 6, they have very good accuracy and stability. The intrinsic parameters' average calibration results are not as sensitive to the noise level as Zhang's method. Reference [9] mentioned that the simulated intrinsic parameters' errors of Zhang's method increase linearly with the noise level; for σ = 0.5, the errors in fx and fy with Zhang's method are less than 0.3%. Thus, our method shows better stability than Zhang's when the noise level is less than 2.0 pixels. The extrinsic parameters' errors also remain within a reasonable range. When the noise level is lower than 1.4 pixels, the rotation angle's error is less than 0.02°, and the max error of (Tx, Ty, Tz) is less than 3 mm. The error of (βx, βy) is less than 0.015, which means the translation direction's calibration error is less than 0.015°. The distortion coefficients' error remains low when the noise level is lower than 2 pixels, especially when it is lower than 1.4 pixels. It is worth mentioning that the reprojection error of the proposed calibration method converges well on the ground truth error value we set when the noise level is less than 2.0 pixels, as shown in Figure 6a.
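The Monte Carlo scheme used in these simulations can be sketched as follows; project_fn and calibrate_fn are placeholders for the paper's projection model and the IACC solver, demonstrated here on a hypothetical linear model only:

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_trial(project_fn, calibrate_fn, points, true_params, sigma, n_trials=100):
    """Average absolute parameter errors over independent noisy trials:
    perturb ideal projections with zero-mean Gaussian noise of std sigma
    (pixels), re-calibrate, and average |estimate - truth|."""
    errors = []
    for _ in range(n_trials):
        ideal = project_fn(true_params, points)
        noisy = ideal + rng.normal(0.0, sigma, np.shape(ideal))
        est = calibrate_fn(points, noisy)
        errors.append(np.abs(np.asarray(est) - np.asarray(true_params)))
    return np.mean(errors, axis=0)
```

Averaging over 100 independent trials, as the paper does, separates the estimator's bias and variance from the particular noise realization of any single run.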
The standard deviation of the parameters' calibration results is used to characterize the uncertainty of the results. As shown in Figure 6e, the uncertainty of fx and fy keeps increasing with the rising noise level. The other parameters' uncertainty is not shown in the figure but has similar characteristics to fx and fy. Thus, the noise level of the images has a directly negative effect on the certainty of the proposed calibration method, which should be noted in practical applications.
Performance with respect to the number of planes. In this experiment, the simulated camera has the same intrinsic and extrinsic parameters as the experiment with respect to the noise level. We vary the number of planes in mono-viewing multiplane position from 2 to 20. For each number, we perform 100 independent trials with independent Gaussian noise of mean 0 and standard deviation 0.5 pixels, and average the results as shown in Figure 7. We can learn from Figure 7b that the average relative errors of fx and fy decrease significantly when the number of planes increases from 2 to 3. Then, they become quite stable, and the relative error remains lower than 0.3%. The other intrinsic and extrinsic parameters' errors show similar properties to fx and fy. The absolute errors of the main distortion coefficients k1 and p1 stay close to 0, which shows favourable stability as the number of planes increases. The errors of the high-order radial distortion coefficient k2 seem to change more dramatically; in numerical terms, the error is still small and has little effect on the results. The data of the reprojection error shown in Figure 7a illustrate that the proposed calibration method converges well on the ground truth error value, and the number of planes has little effect on the reprojection error. Further, the uncertainty of fx and fy shown in Figure 7e decreases significantly when the number of planes increases from 2 to 7, and then decreases more slightly as the number increases from 7 to 20.
Performance with respect to the rotation angle of the targets' plane. In this experiment, the displacement direction of the calibration target remains parallel to the optical axis of the camera. To examine the influence of the orientation of the target's plane with respect to the imaging plane, we first set the target's plane parallel to the imaging plane. The target's plane is then rotated around the Yw-axis by angle θ, which varies from 10° to 50°. From θ we obtain R_vec (Rv1, Rv2, Rv3) = (0, −θ (rad), 0) and (βx, βy, βz) = (sin θ, 0, cos θ). The other extrinsic and intrinsic parameters remain the same as in the above two experiments. These parameters are then used to generate simulation datasets. Independent Gaussian noise with mean 0 and standard deviation 0.5 pixels is added to the projected points, and ten images of simulated feature point-pairs are used to calibrate the camera at each angle θ. We repeated this process 100 times and computed average errors. The results are shown in Figure 8. The data in Figure 8b,d illustrate that the rotation angle has little effect on fx, fy, U0, and V0 when θ increases from 0° to 50°. When θ grows larger than 40°, the relative error of fx grows faster; when θ increases to 50°, the relative error of fx increases to around 0.3%, while the relative error of fy is still less than 0.1%. The rotation angle has a relatively large effect on γ, especially when θ grows larger than 20°. Even if the value of γ increases to six, it means the angle between the image sensor's two axes is 89.847° in our simulated camera, which is very close to 90° and can be accepted in real situations. As for the extrinsic parameters, Tx and Ty seem relatively more sensitive to the rotation angle than Tz, which can be explained by the simulated rotation direction. The rotation vector's simulated value is quite close to the ground truth value, and the error of the rotation vector is less than 0.1° for most simulated rotation angles. The distortion coefficients' errors are low, which shows favourable stability with the increasing rotation angle.
The data of the reprojection error shown in Figure 8a illustrate that the proposed calibration method converges well on the ground truth error value, and the rotation angle has little effect on the reprojection error. Further, the uncertainty of fx and fy shown in Figure 8e increases relatively significantly when the angle increases from 0° to 50°, and the uncertainty of fx increases more distinctly than that of fy, which can again be explained by the simulated rotation direction. The other parameters' uncertainty is not shown in the figure but has similar characteristics to fx and fy. Obviously, an increasing rotation angle may bring more uncertainty to the calibration parameters.
The experiments in Section 5.1 can be summarized into some useful conclusions:
1. The proposed IACC calibration method can fit different levels of noise in images. From the simulation experiment results, our method shows better accuracy and stability than Zhang's method. However, an increasing noise level will bring more uncertainty to the calibrated parameters. Thus, it is necessary to enhance the certainty of the parameters by reducing the noise level of the feature points' coordinates.

2. The more images used in the calibration, the less uncertainty the parameters will have. Note that in practice, taking more images means we need more displacement data of the 2D targets, which may bring in new uncertainty. Thus, combined with our simulation experiments, the suggested number of images is around 10.

3. The proposed calibration method can fit 2D targets' planes at different angles with the image plane. Compared with the simulation data in [8], our improved method shows better accuracy and stability than the ACC method with respect to the rotation angle. However, increasing the angle may bring difficulty in extracting feature points precisely and more uncertainty of the calibration parameters. Thus, try to avoid taking images from a large angle; experience and data indicate that an angle of less than 45° is suggested.

Stability Simulations of the Proposed Novel Circle Feature Points Extraction Algorithm
Illumination conditions are very important to visual measurements. The edges of image features may shift by 1-2 pixels when the illumination intensity changes by 10-20%. In actual measurements, it is hard to put forward a uniform standard to evaluate the illumination's sufficiency and suitability, and illumination conditions are often set according to the experience of the operators. Thus, the stability of feature extraction algorithms with respect to illumination changes is quite important for high-precision measurements.
Planar targets with a symmetric circle pattern and a back light source are chosen as the measurement object. The illumination intensity of the back light source is constant, and engineering parts are used to keep the light source and planar target fixed. To simulate scenes from insufficient to sufficient illumination, we vary the exposure value of the camera from 600 to 3100 and take images from the front of the target. The image at the 3100 exposure value is then taken as the reference image. The present novel circle feature points extraction algorithm in Appendix B and OpenCV's findCirclesGrid are separately used to extract the circles' centers' pixel information. The RMS errors in pixels between the test images and the reference image are used to evaluate the stability of the above two algorithms with respect to illumination changes. The result is shown in Figure 9a. We further test the stability of these two algorithms at different viewing angles: we hold the camera still, separately rotate the target around the central axis by 20° and 45°, and repeat the above procedures. The test results are shown in Figure 9b,c, respectively.
The results in Figure 9 clearly show the better performance of our novel circle feature points extraction algorithm with respect to illumination and viewing angle changes than OpenCV's traditional findCirclesGrid algorithm. Simulations at different angles and illumination levels verify that our novel algorithm can stably extract symmetric circles' features at different viewing angles and under different illumination conditions.
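The RMS stability metric used in this comparison can be stated compactly; this sketch assumes the same feature ordering in the test and reference extractions:

```python
import numpy as np

def extraction_rms(test_centers, ref_centers):
    """RMS pixel distance between feature centers extracted from a test image
    and from the reference image (here, the 3100-exposure image)."""
    d = np.asarray(test_centers, float) - np.asarray(ref_centers, float)
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))
```

Because the target and camera are held fixed while only exposure (or viewing angle) changes, any nonzero RMS directly measures the extractor's sensitivity to that change.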

Real Monocular Camera Calibration Experiments
Planar chessboard pattern and symmetric circle pattern are chosen as the calibration targets' patterns. First, monocular camera calibration experiments are performed. The same 2D calibration targets are used to perform the experiments, and machined parts are used to fix the targets, stage, and cameras. The information of the calibration targets' patterns is shown in Table 6. The stage and targets are used to generate a virtual 3D point array, with machined parts keeping the sliding direction approximately perpendicular to the targets' plane. Images of both patterns are taken at different positions. The feature point extraction functions findChessboardCorners and findCirclesGrid from OpenCV 3.3.0 are used in this section to extract the corresponding image pixel coordinates.
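The virtual 3D point array built from the planar target and the sliding stage can be sketched as follows. This is a simplified illustration assuming an ideal slide exactly perpendicular to the target plane (the real setup is only approximately perpendicular, which is why affine coordinates are used); the grid size, 4 mm pitch, and shift values below are illustrative, not the actual experimental values.

```python
import numpy as np

def virtual_3d_points(cols, rows, spacing, shifts):
    """Stack a planar target's 2D feature grid along the stage's sliding
    direction to form a virtual 3D point array.

    cols, rows : number of features per row / column on the target
    spacing    : feature pitch on the target (mm)
    shifts     : stage displacements (mm), e.g., measured by a laser
                 interferometer; each shift contributes one target plane
    """
    # (x, y) lattice of the planar target, z = 0 in the target frame
    xs, ys = np.meshgrid(np.arange(cols) * spacing,
                         np.arange(rows) * spacing)
    plane = np.column_stack([xs.ravel(), ys.ravel(),
                             np.zeros(cols * rows)])
    # one copy of the plane per stage position, offset along z
    return np.vstack([plane + np.array([0.0, 0.0, z]) for z in shifts])

# Illustrative values: a 9 x 7 target with 4 mm pitch moved to 3 positions
pts = virtual_3d_points(9, 7, 4.0, [0.0, 10.0, 20.0])
print(pts.shape)  # (189, 3)
```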
First, the 11 × 8 chessboard pattern is used to perform the monocular camera calibration experiments, whereby 1760 point-pairs from 20 images are used to generate the datasets in Table 7. These datasets are then used to calibrate two monocular cameras separately by the four methods mentioned in Table 7. The reprojection RMS errors in pixels and the errors in the world coordinate system between detected feature points and projected ones (also called reprojection error, but in mm) are used to evaluate the accuracy of these four methods. Tables 8 and 9 show the calibration results of two different cameras by the four methods with the chessboard datasets in Table 7. The results in Tables 8 and 9 show that the present IACC method with the traditional chessboard pattern has better accuracy than Tsai's method, the ACC method, and Zhang's method.
The other target, of 9 × 7 symmetric circle pattern, is used to perform the monocular camera calibration experiments with the above four methods. In all, 630 feature point-pairs from 10 images are used to calibrate the unknown parameters. The datasets of the symmetric circle pattern in this section are made using findCirclesGrid from OpenCV 3.3.0. The calibration results, shown in Tables 10 and 11, demonstrate that the present IACC method with the traditional symmetric circle pattern also has better accuracy than Tsai's method, the ACC method, and Zhang's method. Tables 8-11 also reflect that the different calibration methods perform better using the symmetric circle pattern with OpenCV's findCirclesGrid than using the chessboard pattern with findChessboardCorners. The simulation results in Section 5.1 verify that the reprojection error tends to converge to the added noise of the feature points. Certainly, there are other noise sources in actual images, but the accuracy of the feature point extraction algorithm plays a major part in this noise, so the reprojection error can reflect the accuracy of the extraction algorithm. From the data in Tables 9 and 10, the accuracy of findCirclesGrid can come close to 0.02 pixels, whereas the accuracy of findChessboardCorners can only come close to 0.06 pixels in our real experiments. Comparing the calibration results for different cameras in Tables 10 and 11, we also notice that the improvement brought by IACC for the Single_R camera is relatively lower than for the Single_L camera. After examination, we found some imperceptible stains on the surface of the Single_R camera's imaging sensor, which may affect the quality of the calibration images. The feature extraction algorithm findCirclesGrid is easily affected by these stains because it is a gray-centroid-based blob detection algorithm. Clearly, alongside the calibration model, the accuracy of feature extraction directly affects the accuracy of calibration. Thus, the stability of extraction algorithms in different measurement environments is important, and our new algorithm in Appendix B performs better than findCirclesGrid, as verified in the simulations in Section 5.2 and in the real calibration experiments in Section 5.4.
As for distortion coefficients, Tsai's method assumes that tangential distortion can be ignored in order to satisfy the RAC constraint, so Tsai's method cannot calibrate the tangential distortion. The present IACC method, the ACC method, and Zhang's method can calibrate the radial and tangential distortion coefficients through nonlinear optimization. It should be explained that the different coefficient definition expressions in [8] and in this paper cause the differences in values in Tables 8-11. Both definitions satisfy the physical model and reflect the distortion level. For convenience, the coefficient definition in this paper is in accordance with Zhang's method [9].
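For reference, the radial and tangential distortion model in Zhang's convention (also used by OpenCV), which the coefficients in this paper follow, can be written out as a short sketch; the function name is our own, and only the first two radial terms are shown.

```python
import numpy as np

def distort(xy, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to
    normalized image coordinates, following the convention of Zhang's
    method and OpenCV."""
    x, y = xy[..., 0], xy[..., 1]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    # tangential terms follow the Brown-Conrady model
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.stack([xd, yd], axis=-1)

# With all coefficients zero the mapping is the identity
pts = np.array([[0.1, -0.2], [0.0, 0.0]])
print(np.allclose(distort(pts, 0, 0, 0, 0), pts))  # True
```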

Performance of Present New Algorithm in Appendix B for Improving Calibration Accuracy
We further apply the present new algorithm in Appendix B to generate feature point-pair datasets from the same calibration images as in Tables 5 and 6. The present IACC calibration method and Zhang's method are chosen to verify that our novel circle feature point extraction algorithm improves calibration accuracy for both non-coplanar and coplanar calibration methods.
As in Tables 10 and 11, 630 feature point-pairs from 10 images are used to calibrate the unknown parameters. Next, Δθ is set to 1° and the searching step length in the radial direction is set to 0.1 pixels. The calibration results are shown in Tables 12 and 13.
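The full extraction algorithm (region Otsu thresholding plus radial section scanning) is given in Appendix B and is not reproduced here; the sketch below only illustrates the radial-scanning idea with the stated Δθ = 1° and 0.1-pixel step: starting from a coarse circle center, gray values are sampled along each ray by bilinear interpolation, and the position of the strongest gray-level gradient is taken as the edge point on that ray. The function name and the synthetic disk image are our own illustration.

```python
import numpy as np

def radial_edge_points(img, cx, cy, r_max, dtheta_deg=1.0, step=0.1):
    """Locate one edge point per ray by scanning outward from a coarse
    circle center (cx, cy) and picking the largest gray-level gradient."""
    radii = np.arange(1.0, r_max, step)
    edges = []
    for theta in np.deg2rad(np.arange(0.0, 360.0, dtheta_deg)):
        xs = cx + radii * np.cos(theta)
        ys = cy + radii * np.sin(theta)
        # bilinear interpolation of gray values along the ray
        x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
        fx, fy = xs - x0, ys - y0
        g = (img[y0, x0] * (1 - fx) * (1 - fy)
             + img[y0, x0 + 1] * fx * (1 - fy)
             + img[y0 + 1, x0] * (1 - fx) * fy
             + img[y0 + 1, x0 + 1] * fx * fy)
        k = np.argmax(np.abs(np.diff(g)))  # strongest gradient = edge
        edges.append((xs[k], ys[k]))
    return np.array(edges)

# Synthetic check: a dark disk of radius 20 px on a bright background
yy, xx = np.mgrid[0:101, 0:101]
img = (np.hypot(xx - 50, yy - 50) > 20).astype(float)
edges = radial_edge_points(img, 50.0, 50.0, 30.0)
radii_found = np.hypot(edges[:, 0] - 50, edges[:, 1] - 50)
print(round(float(radii_found.mean()), 1))
```

In the full algorithm, the edge points recovered this way are subsequently fitted to obtain a sub-pixel center estimate.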
The results in Tables 12 and 13 clearly verify that our novel circle feature point extraction algorithm effectively improves the calibration accuracy of both the coplanar method and the non-coplanar method, represented by Zhang's method and the present IACC method. From the previous deduction that the reprojection error tends to converge to the accuracy of the feature point extraction algorithm, we can further reckon that the present new algorithm in Appendix B has better accuracy than findCirclesGrid from OpenCV 3.3.0; the accuracy of our algorithm can reach within 0.02 pixels in actual application.
As mentioned before, the implementation of coplanar calibration methods requires taking images of 2D targets in multi-viewing multiplane positions, as shown in Figure 3. The data in Tables 12 and 13 also illustrate the stability of the proposed novel circle feature point extraction algorithm under rotation of the planar targets. As shown in Figure 10, our algorithm and key-point sorting strategy can handle situations where the angular deflection between the planar target and the imaging sensor remains at a reasonable level; usually, this angle should be below 45° to preserve the accuracy of the calibration results. Combined with the simulations in Section 5.2, the present new algorithm in Appendix B has better extraction accuracy and stability than the traditional extraction algorithm findCirclesGrid from OpenCV 3.3.0, whereby our novel algorithm can effectively improve the calibration accuracy of both non-coplanar and coplanar methods.

Real Binocular Camera System Calibration and 3D Reconstruction Experiments for Discrete Feature Points
We set up experiments to test the performance of the proposed binocular camera system calibration method. As a supplement, a 3D reconstruction experiment for discrete feature points is used to evaluate the actual measurement accuracy of the binocular measurement system calibrated by the proposed method.
Firstly, the binocular camera system is used to acquire a set of test images. Part of the test images serves as the parameter calibration images and the rest as the accuracy test images. Then, the distances between different feature points on the target surface are measured and compared with their actual values.
Similar to the monocular calibration process, we fixed a one-dimensional displacement stage with the planar target of high precision on the optical platform, moved the planar target in a fixed direction, and used the laser interferometer to measure the moving shifts of the calibration target in this direction.Then, we established the affine space coordinate sequence of the calibration target.The intrinsic and extrinsic parameters of the binocular system can be calibrated through the target's image sequence captured by the two cameras simultaneously.
The binocular system is shown in Figure 11. A machined part is designed to fix the two cameras. The designed horizontal baseline between the two cameras is 156 mm, and the included angle between each camera's optical axis and the baseline is about 75°. Five out of ten image-pairs of the symmetric circle pattern acquired at mono-viewing positions are used to calibrate the parameters of the binocular system; the other five image-pairs are retained for the distance measurement experiment.

After the calculation process described in Section 4.3, the intrinsic and extrinsic parameters of the binocular camera system are calibrated simultaneously. The extrinsic parameters of the two cameras in the system can be expressed in a more universal format by Equation (17). The calibration results of the proposed method are shown in the bottom part of Table 14. It is worth noting that the binocular system calibration method of reference [8] (the ACC method) can only calibrate extrinsic parameters given known monocular intrinsic parameters as a prerequisite; it cannot calibrate the intrinsic and extrinsic parameters simultaneously. Moreover, the ACC binocular calibration method introduces a penalty function to ensure the orthogonality of the rotation matrix, which means the penalty factors must be adjusted along with changes in the binocular cameras' structure, the calibration targets' pattern, the monocular calibration accuracy, etc.
Above all, in actual applications, this loss of universality makes the ACC method too cumbersome for binocular camera system calibration. The extrinsic parameters of the binocular system calibrated with the ACC method, together with the prerequisite intrinsic parameters, are shown in the top-left of Table 14.
Similar to the ACC method, the complicated procedure of adjusting the targets' sliding direction in Tsai's non-coplanar method also makes it inefficient for calibrating either monocular or binocular camera systems with high accuracy. Considering that the accuracy of the previous monocular calibration with Tsai's method under the current conditions is noticeably worse than that of the other three methods, Tsai's method is not used as a contrast method in this section.
Like the present IACC method, Zhang's method can calibrate the intrinsic and extrinsic parameters of the binocular system simultaneously. Zhang's binocular calibration method, as implemented in the OpenCV 3.3.0 function stereoCalibrate, is chosen as the comparative method. Without loss of generality, five out of ten image-pairs of the symmetric circle pattern from multi-viewing positions are used to calibrate the parameters of the binocular system with Zhang's method; the other five image-pairs are retained for the distance measurement experiment. The calibration results of Zhang's method are shown in the top-right part of Table 14.
The key reprojection error data in Table 14 reflect that our binocular calibration method has the best calibration accuracy among the three mentioned methods. It is worth mentioning that the feature points of the symmetric circle pattern used in the above experiments are extracted by the present new algorithm in Appendix B for all three binocular calibration methods.
As mentioned in Section 2.4, calibration parameters always present local spatial optimality. Thus, we should not discuss the calibration accuracy of the binocular camera system in isolation, ignoring the actual measurement position; instead, we should combine the actual measurement accuracy with the calibration accuracy to evaluate the actual accuracy of the calibrated parameters.
Camera-based stereo measurements are built on the 3D reconstruction of features in images acquired by camera sensing systems. The 3D reconstruction of image features mainly contains three steps, i.e., feature extraction, feature stereo matching, and feature stereo reconstruction based on triangulation. The accuracy of camera-based stereo measurements mostly depends on the accuracy of feature matching and of the multi-camera system's calibration. With precise camera calibration parameters and matched point-pairs' coordinates, high-precision measurements can be realized based on the 3D reconstruction algorithm.
Clearance measurements for discrete circular feature points based on 3D reconstruction are set up to further test the accuracy of the calibrated parameters. The clearances to be measured can be divided into two types, as shown in Figure 12. The clearances between feature points on the high-precision planar calibration targets are used as the measurement objects. As shown in Figure 12, a specific image-pair of the 9 × 7 symmetric circle pattern corresponds to a planar target at one specific position in the world coordinate system. For one specific position, each target can supply 56 sets of horizontal clearances and 54 sets of vertical clearances. In this paper, the least squares method is used to solve the 3D reconstruction of discrete circular feature points, since this method can directly use the original matched feature point-pairs' pixel information and calibrated parameters without extra image affine transformation and interpolation.
The camera parameters calibrated by the three mentioned methods are used to perform the 3D reconstruction of the discrete feature points with the above retained image-pairs. The error data are shown in Tables 15 and 16.
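The least-squares reconstruction of a matched point-pair can be sketched with standard linear (DLT) triangulation, which works directly on the original pixel coordinates without rectification or interpolation; the paper's exact solver may differ in detail, and the intrinsics and the 156 mm baseline below are only a toy setup echoing the system's geometry.

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Least-squares triangulation of one matched point-pair.

    P1, P2   : 3x4 camera projection matrices (K [R | t])
    uv1, uv2 : matched pixel coordinates in the two images
    Solves A X = 0 for the homogeneous 3D point via SVD.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.vstack([u1 * P1[2] - P1[0],
                   v1 * P1[2] - P1[1],
                   u2 * P2[2] - P2[0],
                   v2 * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                    # null vector = homogeneous 3D point
    return X[:3] / X[3]

# Two toy cameras: identical intrinsics, second translated 156 mm along x
K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-156.0], [0.0], [0.0]])])
X_true = np.array([10.0, -5.0, 500.0])
h1 = P1 @ np.append(X_true, 1.0)
h2 = P2 @ np.append(X_true, 1.0)
Xr = triangulate_point(P1, P2, h1[:2] / h1[2], h2[:2] / h2[2])
print(np.allclose(Xr, X_true))  # True
```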
Data in Tables 14-16 illustrate that the parameters of the binocular camera system calibrated by the present IACC method not only have the best calibration accuracy among the three contrast methods, but also supply the best measurement accuracy for circular feature points near the positions where the calibration images were acquired. The data in Tables 15 and 16 also reflect that parameters calibrated by the different methods achieve their best measurement accuracy around the calibration position, but cannot achieve equivalent measurement accuracy away from it. As seen from the root mean square error data in Tables 15 and 16, the binocular system with IACC's calibration parameters can achieve 2.6 µm measurement accuracy using images taken from the mono-viewing multiplane position, but only 59.1 µm using images taken from the multi-viewing multiplane position. Similarly, the binocular system with Zhang's calibration parameters can achieve 21.7 µm measurement accuracy using images from the multi-viewing multiplane position, but only 52.0 µm using images from the mono-viewing multiplane position. Referring to Figure 13, the mono-viewing and multi-viewing positions in this experiment are clearly distributed at different depths of the same world coordinate system. These results verify the local spatial optimality of the calibration parameters mentioned in Section 2.4.
The parameters from either non-coplanar or coplanar methods obtain their best effect when their calibration positions are close to, and cover most of, the measurement space. Overall, the present IACC calibration method for binocular camera systems shows prominent advantages in calibration accuracy and measurement accuracy over both Zhang's method and the ACC method.
The left camera's coordinate system is chosen as the world coordinate system; using IACC's calibration parameters and feature extraction algorithm, the calculated 3D coordinates of the measured feature points' centers are drawn in Figure 13, which clearly shows the difference in the measured targets' positions in the world coordinate system.

Full-Field Stereo Measurement Experiments by Stereo-DIC Technologies with the Proposed Calibration Method
In the last few decades, stereo-digital image correlation (stereo-DIC) has been widely accepted as a powerful and versatile tool for non-contact full-field 3D shape and surface deformation measurement in experimental solid mechanics [22,33]. Stereo-DIC relies on the image correlation analysis of image-pairs obtained from a calibrated stereo-vision system. Stereo-DIC is still far from reaching its full potential, mainly due to three major challenges that Sutton and associates [34,35] identified as follows: (1) surface patterning; (2) imaging of the structure (i.e., appropriately selecting the lens and stereo-angle); and (3) calibrating the stereo-DIC measurement system.
Among the various calibration techniques used in the computer vision community, the two traditional methods presented by Zhang [9] and Tsai [5] are still commonly taken as key methods for stereo-DIC system calibration with 2D and 3D targets, respectively. Research on calibration methods applied to stereo-DIC systems is still valuable, and we further verify the feasible applications of the proposed novel calibration methods in full-field stereo measurements.
The binocular camera system calibrated in Section 5.5 and three acrylic hollow cylinders with artificial speckle patterns are used to carry out full-field 3D shape measurements. The speckle patterns are applied to the cylinders' surfaces by hydro transfer printing.
Two classical stereo-DIC methods, the Newton-Raphson (NR) [36] method and the inverse compositional Gauss-Newton (IC-GN) [37] algorithm, are used to carry out stereo matching for the speckle patterns' subset regions.
The NR method has been integrated into DICe [38], an open-source digital image correlation (DIC) tool from Sandia National Laboratories. Its primary capabilities are computing full-field displacements and strains from sequences of digital images and rigid-body motion tracking of objects.
Sensors 2023, 23, 8466
The calibrated binocular camera system is used to take three image-pairs of cylinder objects with different radii, as shown in Figure 14. The calibrated parameters from Zhang's method and the IACC method are separately supplied to DICe as the basic parameters of the binocular stereo-DIC system. Each image-pair captured at the same moment is used as both the reference image and the deformed image; thus, the calculated displacement and deformation of the objects should theoretically be zero. The measurement results calculated with Zhang's parameters and with IACC's parameters are very close, and their similarity is reflected by the colormaps in Figure 15. Since colormaps can only reflect the rough tendency, if the accuracy of the calibration parameters is close enough, with the same high-precision DIC matching method, one can barely tell the difference between the top and bottom parts of Figure 15.
Figure 15 shows the static measurement results of one cylinder. Figure 15a-c separately demonstrates the measured ROI's z-coordinates, x-direction displacement, and x-direction normal strain with the binocular system calibrated using the present method's parameters. As expected, the calculated displacement and deformation of the measured ROI are very close to 0, and the z-coordinates of the measured ROI accord with the actual situation of the measured position. It is worth noting that the static measured displacement data of the ROI are quite close to zero, with absolute errors mostly within 20 picometers. This level of absolute error reflects that both the matching accuracy of DICe and the accuracy of our calibration method remain at high levels. We also note that a region of slightly larger error appears around the circular ring at the middle of the ROI; this also meets expectations because, by design, there is no speckle distribution inside the ring. As a method verified by many scholars for stereo-DIC calibration, the calibration parameters from Zhang's method are also used to calculate the same image-pair. The results, shown in Figure 15d-f, exhibit similar properties to those achieved with IACC's parameters.
Then, full-field 3D reconstruction experiments for surface ROIs' matched subsets from three different cylinders are carried out by our self-designed IC-GN-based program. The IC-GN method, a first-order shape function, and the bicubic interpolation method are used in our program to complete the correlated subsets' sub-pixel matching. A seed-point-based neighbor-region-generation calculation path is applied to complete the ROI's full-field stereo matching calculation, with stereo rectification of the image-pairs based on our calibration parameters.
Figure 15. (a-c) Static measurement results of the surface's z-coordinates, x-direction displacement, and x-direction normal strain, obtained with the binocular camera system using the present IACC calibration parameters; (d-f) the corresponding results obtained with Zhang's calibration parameters.
This paper gives a more comprehensive calibration efficiency evaluation method for both monocular and binocular camera systems; the deduction and analysis are appended in Appendix C.
According to the evaluation method in Appendix C, assume that 10 images are used for calibration and that there are 63 features in every image. The number of nonlinear iterations is set to 200, and C_operation is set to 1 × 10⁸. The complexity of the four mentioned methods can then be quantified as shown in Table 18. Thus, from the perspective of algorithm efficiency, among the four mentioned monocular calibration methods, Tsai's method is the most efficient, followed by the ACC, present IACC, and Zhang's methods. For the four mentioned binocular calibration methods, Tsai's method is the most efficient, followed by the present IACC, Zhang's, and ACC methods.
In terms of overall efficiency, including both algorithm and implementation, among the four mentioned monocular calibration methods, Zhang's method is the most efficient, followed by the present IACC, ACC, and Tsai's methods. For the four mentioned binocular calibration methods, Zhang's and the present IACC methods have similarly good efficiency, followed by Tsai's and ACC methods.
Summarizing the results from Sections 5.1-5.7, we give the following suggestions for choosing among the four mentioned calibration methods:
1. Zhang's method is the easiest to implement, but its calibration accuracy is not the best. Thus, Zhang's method is the best choice when there is no extreme demand for high-precision calibration and measurement.

2. The present IACC method for monocular and binocular calibration has the best calibration accuracy and moderate implementation complexity. It is the preferred choice for high-precision calibration and measurements, especially when the structure of the camera system and the measurement position are confirmed in advance. In some special scenarios where 2D targets are restricted from changing postures across multi-viewing positions, the present IACC method is the best solution for both accuracy and efficiency.

3. The ACC method can supply accurate calibration parameters for monocular camera systems with good efficiency. However, it is not suitable for binocular camera systems in terms of either accuracy or efficiency.
4. Tsai's method has distinct defects in its mathematical model and is inefficient when using a planar target in non-coplanar mode. Using real stereo targets reduces the complexity of Tsai's method; however, it is still not suitable for high-precision calibration and measurements.

5. The present algorithm in Appendix B can serve all four mentioned methods with high precision and good stability.

Conclusions
This paper proposes an Improved Affine Coordinate Correction (IACC) mathematical model which can be well applied to the calibration of both monocular and binocular camera systems. Our novel calibration methods are stable, efficient, and highly precise. Based on the Local-ROI-Otsu and gradient-based Edge Radial Section Scanning methods, a novel and simple extraction algorithm for the symmetric circles pattern is proposed.
The algorithm proceeds as follows: perform ROI segmentation for each key point; use a simple strategy to sort the key points into the right order; apply the Sobel operator to acquire the gradient image; locate the ROI of each circle according to the rough centers and radius information; operate the Otsu [41] algorithm separately on each ROI of the gradient image to confirm the effective gradient amplitude; apply the radial section scanning method to acquire the accurate sub-pixel edge of each circle; and, at last, use ellipse fitting to calculate and refine the accurate centers. The flow chart of the proposed algorithm is shown in Figure A2.
For a step edge, the first-order derivative of the edge section's grayscale intensity distribution is Gaussian-like. Theoretically, the normalized first-order derivative image of a circle pattern should be as shown in Figure A3. For each segmented edge region, the grayscale intensity distribution along the gradient direction should resemble the left part of Figure A3. The coordinate x_e of the symmetry axis of the gradient-intensity curve characterizes the edge point position of this segmented region.
In this paper, we use the gray centroid method to acquire the location of x_e. The Sobel operator has a low calculation amount, a simple structure, and high precision; therefore, it is widely applied in remote sensing, image processing, and industrial detection [42]. For convenience, we use two directional 3 × 3 Sobel operators to separately obtain the gradients g_x and g_y in the x- and y-directions. Then, we calculate the gradient intensity as follows:
g(x, y) = √(g_x² + g_y²) (A1)
Thus, around each rough edge point, along the gradient direction, the coordinate of each edge point (x_e, y_e) can be expressed by the gray centroid as follows:
x_e = Σ_{(x,y)∈E} x · g(x, y) / Σ_{(x,y)∈E} g(x, y), y_e = Σ_{(x,y)∈E} y · g(x, y) / Σ_{(x,y)∈E} g(x, y) (A2)
in which E represents the continuous region around the rough edge point along the gradient direction. The gray centroid method can find the turning point of a Gaussian-like distribution quickly and accurately. Simulation verified that the gray centroid method can be used to locate the turning point of Gaussian-like curves, and that its location accuracy and calculation efficiency are better than those of the traditional linear Gaussian fitting method when there is little noise in the original gradient-intensity data. The simulation results are shown in Figure A4.
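The Sobel gradient intensity of Equation (A1) and the gray centroid localization can be sketched in plain NumPy as follows (function names are our own; this is a minimal illustration, not the paper's implementation):

```python
import numpy as np

def sobel_gradient(img):
    """Compute Sobel gradients g_x, g_y and the gradient intensity
    g = sqrt(g_x^2 + g_y^2), as in Equation (A1)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return gx, gy, np.sqrt(gx ** 2 + gy ** 2)

def gray_centroid(profile, coords):
    """Sub-pixel turning point of a Gaussian-like gradient-intensity
    profile, located as its gray centroid (the form of Equation (A2))."""
    profile = np.asarray(profile, dtype=float)
    coords = np.asarray(coords, dtype=float)
    return float(np.sum(coords * profile) / np.sum(profile))
```

For a symmetric Gaussian-like profile, the centroid coincides with the symmetry axis, which is why it serves as the edge location.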
The gradient-based gray centroid method has been verified to perform well in inspecting the edges of feature points, lines, and curves when little noise interference occurs in the relevant ROIs [43]. This paper further applies the gradient-based gray centroid method to symmetric circle arrays and reduces noise interference by combining the Local-ROI-Otsu and Radial Section Scanning strategies.
For the real images captured by cameras, the distribution of grayscale intensity is uneven. This unevenness is usually caused by illumination conditions, exposure control, or other factors, and may introduce noise into the gradient-intensity images, as illustrated in Figure A5a.
Obviously, the introduced noise may cause unexpected errors when finding the edge points. The Otsu algorithm is used to determine the threshold in this paper. In fact, different ROIs of the same image may also have different intensity distributions, so we calculate the threshold separately for each ROI. This improves the stability of the threshold value calculated by the Otsu algorithm when the intensity of non-ROI regions changes.
After eliminating the noise below the threshold, we use the radial section scanning method to extract the edge points of each circle pattern in its specific ROI. As shown in Figure A5b, the blob detector can roughly find the centers of asymmetric circle grid patterns, as well as the radius of each circle. With the rough center and radius, we can simply divide the ROIs for specific circles: we take the rough center of a circle as the center of a square ROI, and four times the radius as the length of the square's side.
Take the rough center as the starting point and ∆θ as the interval angle to scan the circle's edge region in the radial direction. The radius direction at each ∆θ can be regarded as the gradient direction of the corresponding segmented edge region. Bilinear interpolation is used to obtain the gradient intensity along the radial direction, and the gray centroid method is then used to extract the edge point's coordinate in this direction. With N pairs of known edge points' coordinates, the least-squares method combined with ellipse fitting is used to calculate the center of each circle. It is worth mentioning that the known edge points' coordinates can participate in the least-squares ellipse fitting with equal weight, because the edge points found by our method have high sub-pixel accuracy.
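The radial section scanning step can be sketched as follows. For brevity, the sketch ends with an algebraic least-squares circle fit as a simplified stand-in for the paper's ellipse fitting; all names are our own:

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly interpolate img at real-valued (x, y) = (col, row)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def radial_scan_edges(grad, cx, cy, r, n_rays=64, samples=41):
    """Scan n_rays radial sections around the rough center (cx, cy); on
    each ray, locate the sub-pixel edge as the gray centroid of the
    interpolated gradient-intensity profile."""
    edges = []
    for k in range(n_rays):
        th = 2 * np.pi * k / n_rays
        ts = np.linspace(0.5 * r, 1.5 * r, samples)      # radial positions
        prof = np.array([bilinear(grad, cx + t * np.cos(th), cy + t * np.sin(th))
                         for t in ts])
        if prof.sum() <= 0:
            continue
        te = np.sum(ts * prof) / np.sum(prof)            # gray centroid on the ray
        edges.append((cx + te * np.cos(th), cy + te * np.sin(th)))
    return np.array(edges)

def fit_circle(pts):
    """Algebraic least-squares circle fit (simplified stand-in for the
    ellipse fit): solve x^2 + y^2 = 2a x + 2b y + c for (a, b, c)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)[0]
    return a, b, np.sqrt(c + a ** 2 + b ** 2)
```

Because every ray contributes one sub-pixel edge point of comparable accuracy, the points can enter the final fit with equal weight, matching the observation above.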
For the demands of actual measurements, the area of each circle in the targets is not always equal. Because of the search strategy of the algorithm, the order of key points found by the simple blob detector is not always regular. To solve this problem, this paper gives an easy strategy to sort the key points into regular order. The typical scene is shown in Figure A6.
With the known size of the real target's pattern, the homography matrix between the key points' image pixel coordinates and world coordinates can be solved. Then, a perspective transform is applied to all key points from the image to the world coordinate system, and they are sorted in regular order.
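The homography-based sorting can be sketched as follows, assuming the four pattern corners are already known (a hypothetical minimal DLT implementation, not the paper's code):

```python
import numpy as np

def homography_4pt(src, dst):
    """DLT: solve the 3x3 homography H mapping 4 src points to 4 dst
    points (h33 fixed to 1), so dst ~ H @ src in homogeneous coords."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def sort_keypoints(pixels, corners_px, rows, cols, pitch=1.0):
    """Map detected centers to target (world) coordinates with the
    homography built from the four pattern corners (given in TL, TR,
    BL, BR order), then return indices sorting them row-major."""
    world_corners = [(0, 0), ((cols - 1) * pitch, 0),
                     (0, (rows - 1) * pitch), ((cols - 1) * pitch, (rows - 1) * pitch)]
    H = homography_4pt(corners_px, world_corners)
    pts = np.asarray(pixels, float)
    ph = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    w = ph[:, :2] / ph[:, 2:3]                  # perspective transform to world
    # primary key: rounded row index; secondary key: rounded column index
    return np.lexsort((np.round(w[:, 0] / pitch), np.round(w[:, 1] / pitch)))
```

Sorting in the world frame rather than the image frame makes the order insensitive to the target's pose, which is the point of the strategy.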

Sensors 2023, 23, 8466
Figure 1 .
Figure 1. Simulated error data with Tsai's method. (a) shows the relative error of f changing with the increasing uncorrected yaw angle between the sliding direction and the target's plane; (b) shows the reprojection error changing with the increasing uncorrected yaw angle between the sliding direction and the target's plane. Moreover, Zheng et al. [8] also pointed out that Tsai's mathematical model cannot obtain a strictly orthogonal rotation matrix, for the following reason: the first step of linear initial-value solving uses seven independent intermediate variables to acquire {R_3×3, s_x, T_x, T_y} of six DOF. This is essentially an overdetermined equation-solving process.

Figure 2 .
Figure 2.This figure shows the real imaging process and reflects the experience that the measurement accuracy would be better if the calibration feature points closely cover the measurement position.


Figure 3 .
Figure 3.This figure separately shows the implementation processes of coplanar and non-coplanar calibration methods.The multiple panels of this figure are listed as: (a) The implementation process of coplanar monocular calibration methods; (b) The implementation process of non-coplanar monocular calibration methods; (c) The implementation process of coplanar binocular calibration methods; (d) The implementation process of non-coplanar binocular calibration methods.



The transformation of points' coordinates between system OR-UV and O-XYZ can be expressed by Equation (3):

Figure 4
Figure 4.This is a transformation model for mapping a target affine space coordinate system to an orthogonal world coordinate system.This process can be represented as a matrix transformation equation.


Figure 5 .
Figure 5.This figure is a camera pinhole imaging model.The transformation among different coordinate system processes can be represented by matrix transformation equations.

Figure 6 .
Figure 6.This figure shows the performance of the proposed calibration method with respect to the noise level.The multiple panels of this figure are listed as follows: (a) Simulated reprojection error with respect to the noise level; (b) Relative error of fx and fy with respect to the noise level; (c) Absolute error of γ with respect to the noise level; (d) Relative error of U0 and V0 with respect to the noise level; (e) Uncertainty of fx and fy with respect to the noise level; (f) Absolute error of R_vec and T_vec with respect to the noise level; (g) Absolute error of βx and βy with respect to the noise level; (h) Absolute error of distortion coefficients with respect to the noise level.


Figure 7 .
Figure 7.This figure shows the performance of the proposed calibration method with respect to the number of planes.The multiple panels of this figure are listed as follows: (a) Simulated reprojection error with respect to the number of planes; (b) Relative error of fx and fy with respect to the number of planes; (c) Absolute error of γ with respect to the number of planes; (d) Relative error of U0 and V0 with respect to the number of planes; (e) Uncertainty of fx and fy with respect to the number of planes; (f) Absolute error of R_vec and T_vec with respect to the number of planes; (g) Absolute error of βx and βy with respect to the number of planes; (h) Absolute error of distortion coefficients with respect to the number of planes.


Figure 8 .
Figure 8.This figure shows the performance of the proposed calibration method with respect to the rotation angle of the targets' plane.The multiple panels of this figure are listed as follows: (a) Simulated reprojection error with respect to the rotation angle of the targets' plane; (b) Relative error of fx and fy with respect to the rotation angle of the targets' plane; (c) Absolute error of γ with respect to the rotation angle of the targets' plane; (d) Relative error of U0 and V0 with respect to the rotation angle of the targets' plane; (e) Uncertainty of fx and fy with respect to the rotation angle of targets' plane; (f) Absolute error of R_vec and T_vec with respect to the rotation angle of the targets' plane; (g) Absolute error of βx and βy with respect to the rotation angle of the targets' plane; (h) Absolute error of distortion coefficients with respect to the rotation angle of the targets' plane.


Figure 9 .
Figure 9.This figure shows the performance simulation results of the two contrast algorithms with respect to illumination and viewing angles changes.The multiple panels of this figure are listed as follows: (a) Stability simulations with respect to illumination changes (viewing angle: 0°); (b) Stability simulations with respect to illumination changes (viewing angle: 20°); (c) Stability simulations with respect to illumination changes (viewing angle: 45°).


Figure 10 .
Figure 10. This figure shows the feature searching ability of our novel circle feature points extraction algorithm in fitting different inclination angles of planar targets. Combined with the simulations in Section 5.2, the present new algorithm in Appendix B has better extraction accuracy and stability than the traditional extraction algorithm findCirclesGrid from OpenCV 3.3.0, whereby our novel algorithm can effectively improve the calibration accuracy for both non-coplanar and coplanar methods.


Figure 11 .
Figure 11.This figure shows the binocular camera system to be calibrated.



Figure 12 .
Figure 12.This figure shows the horizontal and vertical point-pairs clearances to be measured in the targets.Each target in a specific position can acquire 56 sets of horizontal clearances and 54 sets of vertical distance clearances.

Figure 13 .
Figure 13. This figure shows a 3D reconstruction of the circular feature points' centers, listed as follows: (a) 3D reconstruction of feature points with the paper's parameters in mono-viewing multi-plane position; (b) 3D reconstruction of feature points with the paper's parameters in multi-viewing multi-plane position.
5.6. Full-Field Stereo Measurement Experiments by Stereo-DIC Technologies with the Proposed Calibration Method



Figure 14 .
Figure 14.This figure shows the calibrated binocular camera system is used to implement static 3D surface shape reconstruction experiments of cylinder objects with different radii.


Figure 15 .
Figure 15.(a-c) show the static measurement results of the surface's z-coordinates, displacement of x-direction, and normal strain of x-direction.The results data of (a-c) comes from the binocular camera system with the present IACC calibration parameters.Correspondingly, (d-f) show the static measurement results of the surface's z-coordinates, displacement of x-direction, and normal strain of x-direction.The results data of (d-f) comes from the binocular camera system with Zhang's calibration parameters.

Figure 16 .
Figure 16. This figure shows 3D reconstructions of the ROIs' matched subsets from three different cylinders, listed as follows: (a) 3D reconstruction of surface ROI on cylinder #1; (b) 3D reconstruction of surface ROI on cylinder #2; (c) 3D reconstruction of surface ROI on cylinder #3. The local ROIs' point-cloud data in Figure 17 are used to fit cylinders by the nonlinear least-squares method based on the Levenberg-Marquardt algorithm. The cylinder mathematical model can be illustrated as follows:
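As the cylinder model equation is not reproduced here, the following sketch fits a simplified cylinder whose axis is assumed parallel to the z-axis (so the fit reduces to the axis position (a, b) and radius R) using a hand-rolled Levenberg-Marquardt loop; the function name and the axis simplification are ours, not the paper's:

```python
import numpy as np

def fit_cylinder_lm(pts, a0, b0, r0, iters=50):
    """Levenberg-Marquardt fit of a z-aligned cylinder to 3D points
    (only x, y matter): residual_i = sqrt((x_i-a)^2 + (y_i-b)^2) - R,
    with parameters p = (a, b, R)."""
    p = np.array([a0, b0, r0], float)
    lam = 1e-3
    x, y = pts[:, 0], pts[:, 1]

    def resid(p):
        d = np.sqrt((x - p[0]) ** 2 + (y - p[1]) ** 2)
        return d - p[2], d

    r, d = resid(p)
    for _ in range(iters):
        # analytic Jacobian of the residual w.r.t. (a, b, R)
        J = np.column_stack([-(x - p[0]) / d, -(y - p[1]) / d, -np.ones_like(d)])
        g = J.T @ r
        step = np.linalg.solve(J.T @ J + lam * np.eye(3), -g)
        r_new, d_new = resid(p + step)
        if r_new @ r_new < r @ r:      # accept the step, relax damping
            p, r, d, lam = p + step, r_new, d_new, lam * 0.5
        else:                          # reject the step, increase damping
            lam *= 10.0
    return p  # (a, b, R)
```

A general cylinder fit would add the axis direction to the parameter vector; the damped normal-equation step shown is the core of the Levenberg-Marquardt scheme either way.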


Figure A2 .
Figure A2.This figure shows the flow chart of the proposed circle feature points extraction algorithm.

Figure A3 .
Figure A3. The left part of this figure shows the grayscale intensity distribution and the corresponding first-order derivative intensity along the edge's gradient direction. The right part shows the grayscale image of the circle pattern and its normalized gradient-intensity image.






Figure A4 .
Figure A4.This figure shows the simulation of different edge position extraction methods.The Gray Centroid method has better performance than the Gaussian Fitting method if there is little noise.

Figure A5 .
Figure A5. (a) shows the grayscale intensity distribution in actual images captured by cameras. Uneven illumination conditions, exposure control, and other factors may introduce noise into both the pattern images and the corresponding gradient-intensity images; (b) shows the radial section scanning method used to extract the edge points of each circle pattern in a specific ROI.





Figure A6 .
Figure A6. This figure shows the process of the proposed easy strategy to sort the key points into regular order. First, find the four vertexes' coordinates of the pattern's rectangular region. The four points in Figure A6 are shown as Pt-TL, Pt-TR, Pt-BL, and Pt-BR. When the yaw angle and roll angle between the target's plane and the imaging plane remain at a relatively low level (usually less than 45°), the following equations could be used to search for these four points' pixel coordinates:
Pt-TL = argmin_i (u_i + v_i), Pt-BR = argmax_i (u_i + v_i), Pt-TR = argmax_i (u_i − v_i), Pt-BL = argmin_i (u_i − v_i)
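The corner search described above can be sketched as follows; the argmin/argmax rule on u + v and u − v is the usual form of this strategy and is our assumption, not a quotation of the paper's equations:

```python
import numpy as np

def find_pattern_corners(pts):
    """Pick the four corner key points of a grid of (u, v) pixel
    coordinates. Under small yaw/roll (< ~45 deg): TL minimizes u+v,
    BR maximizes u+v, TR maximizes u-v, BL minimizes u-v
    (v grows downward in image coordinates)."""
    pts = np.asarray(pts, float)
    s = pts[:, 0] + pts[:, 1]
    d = pts[:, 0] - pts[:, 1]
    tl = pts[np.argmin(s)]
    br = pts[np.argmax(s)]
    tr = pts[np.argmax(d)]
    bl = pts[np.argmin(d)]
    return tl, tr, bl, br
```

These four corners are exactly what the homography-based sorting step needs as input.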

Table 1 .
The mathematical model and algorithm flow of Tsai's non-coplanar calibration method.

Table 2 .
The mathematical model and algorithm flow of Zhang's coplanar calibration method.

Table 3 .
The mathematical model and algorithm flow of ACC non-coplanar calibration method for monocular camera system.

Table 4 .
The mathematical model and algorithm flow of ACC non-coplanar calibration method for binocular camera system.

Table 5 .
Proposed novel improved affine coordinate correction (IACC) mathematical model for non-coplanar calibration.

Table 6 .
The Information of the Calibration Targets' Pattern.
1 This level of accuracy is determined by the manufacturing technique.

Table 7 .
Parameters of Chessboard Data Sets made by findChessboardCorners.

Table 8 .
This table shows the Single_L camera calibration results using the chessboard pattern with feature points found by findChessboardCorners from OpenCV 3.3.0.
* Reprojection error (bold data in the table) is the key data to evaluate the calibration accuracy.
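Since several of the tables use reprojection error as the accuracy criterion, a minimal sketch of how that metric can be computed under an undistorted pinhole model may help (the function name and toy numbers are assumptions, not the paper's code):

```python
import numpy as np

def rms_reprojection_error(object_pts, image_pts, K, R, t):
    """RMS pixel distance between detected image points and the pinhole
    projections of the corresponding 3-D points (no lens distortion here)."""
    obj = np.asarray(object_pts, dtype=float)    # (N, 3) world points
    img = np.asarray(image_pts, dtype=float)     # (N, 2) detected points
    cam = obj @ np.asarray(R).T + np.asarray(t)  # world -> camera frame
    uvw = cam @ np.asarray(K).T                  # apply intrinsic matrix
    proj = uvw[:, :2] / uvw[:, 2:3]              # perspective divide
    return float(np.sqrt(np.mean(np.sum((proj - img) ** 2, axis=1))))

# Toy check: two points, one detection offset by a 3-4-5 pixel error.
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
obj = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
img = [(73.0, 54.0), (50.0, 70.0)]  # first point shifted by (3, 4)
err = rms_reprojection_error(obj, img, K, R, t)
```

In practice the calibration tables' reprojection errors also account for lens distortion (e.g. via OpenCV's projectPoints), but the averaging over all detected feature points follows the same pattern.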

Table 9 .
This table shows the Single_R camera calibration results using the chessboard pattern with feature points found by findChessboardCorners from OpenCV 3.3.0. * Reprojection error (bold data in the table) is the key data to evaluate the calibration accuracy.

Table 10 .
This table shows the Single_L camera calibration results using the symmetric circle pattern with feature points found by findCirclesGrid from OpenCV 3.3.0.

Table 13 .
This table shows the Single_R camera calibration results using the symmetric circle pattern with feature points found by the present new algorithm in Appendix B. * Reprojection error (bold data in the table) is the key data to evaluate the calibration accuracy.

Table 14 .
This table shows the calibration results separately calibrated by the ACC method, Zhang's method, and the present IACC method.
* Reprojection error in pixels (bold data in the table) is the key data to evaluate the calibration accuracy of the different methods.
Sensors 2023, 23, 8466

Table 15 .
This table shows the 3D reconstruction error data of the present calibration method and Zhang's method with the target in the mono-viewing multiplane position.

Type | Horizontal Point-Pairs Clearances Error Data | Vertical Point-Pairs Clearances Error Data
Number of Measured Clearances | 280 | 270
Real Clearance Value | 15 mm ± 1 μm | 15 mm ± 1 μm
Calibration Methods

* RMS error (bold data in the table) is the key data to evaluate the accuracy of distance measurements.

Table 16 .
This table shows the 3D reconstruction error data of the present calibration method and Zhang's method with the target in the multi-viewing multiplane position.

Root Mean Square Error * (Unit: mm): 0.0217, 0.0575, 0.0591, 0.0173, 0.0770, 0.0544
* RMS error (bold data in the table) is the key data to evaluate the accuracy of distance measurements.

Table 18 .
Quantified complexity of the four calibration methods mentioned above.