Accurate Calibration of a Large Field of View Camera with Coplanar Constraint for Large-Scale Specular Three-Dimensional Profile Measurement

In the vision-based inspection of specular or shiny surfaces, the camera pose with respect to a reference plane is often computed by analyzing images of calibration grids reflected in such a surface. To obtain high precision in camera calibration, the calibration target should be large enough to cover the whole field of view (FOV). For a camera with a large FOV, a small target yields only a locally optimal solution, whereas a large target is difficult to manufacture, carry, and use. To solve this problem, an improved calibration method based on a coplanar constraint is proposed for a camera with a large FOV. Firstly, with the aid of an auxiliary plane mirror, the position of the calibration grid and the tilt angle of the plane mirror are changed several times to capture several mirrored calibration images. Secondly, the initial parameters of the camera are calculated from each group of mirrored calibration images. Finally, with the coplanar constraint between the groups of calibration grids added, the external parameters between the camera and the reference plane are optimized via the Levenberg-Marquardt (LM) algorithm. The experimental results show that the proposed camera calibration method has good robustness and accuracy.


Introduction
In recent years, vision measurement systems have been widely used in industrial production due to their high precision, non-contact nature, and real-time capabilities [1,2]. At the same time, for some special objects, such as car windshields [3], painted body shells [4], polishing molds, stainless steel products, and other objects with smooth surfaces, the demand for three-dimensional measurement is growing. Meanwhile, traditional three-dimensional reconstruction methods [5][6][7] are not ideal for reconstructing shiny surfaces: the two-dimensional feature information in the image captured by the camera mainly comes from the environment surrounding the shiny surface, rather than from the surface itself. Because of the highly reflective nature of a shiny surface, a reference pattern is usually placed around it, and the reference pattern modulated by the surface helps realize the three-dimensional reconstruction of the surface [8][9][10][11][12]. In this case, the calibration accuracy between the reference plane and the camera directly affects the subsequent three-dimensional reconstruction accuracy of the shiny surface. Meanwhile, to measure a larger area of the surface, a camera with a large FOV is needed. However, for calibration over a large FOV, targets with a large area and high precision are not only difficult to make, but they are also inconvenient to carry and use.
For the calibration of catadioptric systems, many scholars have proposed methods that use an auxiliary plane mirror to estimate the external parameters between the camera and the reference object [13]. Kumar et al. [14] proposed using the orthogonality constraint between the direction vector connecting a point on the object to its mirror image and the column vectors of the rotation matrix to construct linear equations, where each set of equations requires at least five calibration images. However, the computed position parameters deviate considerably from the true values, which is harmful to the subsequent parameter optimization. Takahashi et al. [15] obtain the unique solution of three P3P (perspective-three-point) problems from three mirror images based on the orthogonality constraint. However, if the reference object is smaller than a certain size, a wrong solution will be obtained. The method proposed by Hesch et al. [16] also solves three P3P problems from three mirror images, but it can only select an optimal solution from 64 candidate solutions after re-projection error evaluation. Xin et al. [17] directly estimate the camera rotation matrix by the SVD decomposition of the sum of the rotation matrices; additionally, they calculate the translation vector by solving overdetermined linear equations. However, this approach is sensitive to noise, so its stability is poor. Bergamasco [18] proposed a method to locate coplanar circles in images by means of a non-cooperative evolutionary game and refined the estimation of the camera parameters by observing a set of coplanar circles. However, the accuracy of this method is low.
For the calibration of a camera with a large FOV, scholars have considered combining several two-dimensional small targets into one large three-dimensional target. However, in the methods proposed in [19,20], not all intrinsic parameters of the camera can be obtained because a polynomial projection model is used. Meanwhile, in the methods proposed in [21,22], the relative positions between the small targets are subject to certain restrictions, which makes them difficult to apply in real applications. Occlusion-resistant markers, such as Charuco [23] or RUNETag [24], are also robust options, but they provide fewer points for calibration.
To solve this problem, we use an LCD monitor as the reference plane to display the calibration grid. This not only solves the difficulty of manufacturing, carrying, and using large-sized targets, but the monitor can also serve as a carrier for projecting encoded patterns when measuring shiny surfaces, thanks to its ability to display arbitrary patterns. Bergamasco [25,26] also used a monitor displaying dense calibration grids for camera calibration, but that method requires multiple frames, and when dense grid points are spread over the display, the curvature of the display surface greatly affects the accuracy and robustness of the calibration. Therefore, this article calibrates with a smaller calibration grid on the monitor and covers the camera's field of view by moving the position of the grid, which reduces the impact of the display surface curvature to some extent and ultimately achieves high accuracy and robustness.
Firstly, by moving the calibration grid on the reference plane and changing the tilt angle of the plane mirror on the optical platform, multiple sets of mirrored calibration images are obtained, and the internal and external parameters of the camera are computed by Zhang's calibration method [27]. Secondly, the orthogonality-constraint calibration method and the P3P algorithm proposed in [15,16] are used to obtain the external parameters from the reference plane to the camera. Finally, the LM algorithm [28] is used to obtain the optimal solution of the external parameters under the coplanar constraint of multiple calibration grid positions. At the same time, using the method of reconstructing a smooth mirror shape from a single image proposed in [12], three-dimensional measurement experiments are carried out to indirectly verify the accuracy of the proposed calibration method.

Plane Mirror Reflection Model
As is shown in Figure 1, in the camera coordinate system C, the plane mirror can be described by the plane parameters Π = {n, d}. The unit vector n denotes the normal vector of the mirror plane, d represents the distance between the origin of C and the plane [17], and R s2c and T s2c are the rotation matrix and the translation vector between the reference plane coordinate system and the camera coordinate system. P is a feature point on the reference plane.


Figure 1. The camera C observes a point P on the reference plane via the plane mirror Π. We denote by i the incident ray and by l the reflected ray; R s2c and T s2c denote the pose parameters between the reference plane and the camera, n denotes the normal of the mirror, and d is the distance between C and Π.

Based on the reflection property of the mirror, the relationship between a point P and its mirror point P′, both expressed in the camera coordinate system, is given by:

P′ = (I − 2 · n · n^T) · P + 2 · d · n

In homogeneous coordinates, this reflection is described by the matrix M 1, which denotes the symmetric transformation induced by Π:

M 1 = [ I − 2 · n · n^T, 2 · d · n ; 0^T, 1 ]

Note that M 1 = M 1^−1, and (I − 2 · n · n^T) is a Householder matrix. Let M 2 describe the rigid transformation that transforms points from the reference plane coordinate system to the camera coordinate system:

M 2 = [ R s2c, T s2c ; 0^T, 1 ]
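As a sanity check, the symmetric transformation above can be verified numerically. The following is a minimal numpy sketch (the plane parameters n and d are illustrative values, not from the paper's experiments): it builds M 1 and confirms that it is an involution and fixes points lying on the mirror plane.

```python
import numpy as np

def mirror_matrix(n, d):
    """4x4 homogeneous reflection M1 induced by the mirror plane {n, d}."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)                 # unit mirror normal
    H = np.eye(3) - 2.0 * np.outer(n, n)      # Householder matrix (I - 2 n n^T)
    M1 = np.eye(4)
    M1[:3, :3] = H
    M1[:3, 3] = 2.0 * d * n                   # translation part 2 d n
    return M1

# Illustrative mirror: normal along the camera Z-axis, 5 units from the origin.
M1 = mirror_matrix([0.0, 0.0, 1.0], d=5.0)
```

Applying M 1 twice returns the identity, consistent with M 1 = M 1^−1.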

Mirror-Based Camera Projection Model

The perspective projection model is a camera imaging model widely used in computer vision [23]. The mapping relation between any three-dimensional point P w in space and its corresponding pixel point v = [x y 1]^T in the image can be described as:

s · v = A · (R · P w + T)

where s is a nonzero scale factor, A is the intrinsic parameter matrix of the camera, and R and T are the rotation matrix and the translation vector between the camera coordinate system and the world coordinate system. Taking the mirror reflection into account and concatenating the camera model with the mirror reflection, the mirror-based camera projection model becomes:

s · v = A · ((I − 2 · n · n^T) · (R s2c · P w + T s2c) + 2 · d · n)  (5)

so that R and T can be written as:

R = (I − 2 · n · n^T) · R s2c, T = (I − 2 · n · n^T) · T s2c + 2 · d · n

According to Equation (5), we need at least three specular reflection images to calculate R s2c and T s2c.
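The concatenated model above can be sketched in a few lines of numpy; the intrinsic matrix and the pose below are made-up values for illustration, not calibration results from the paper.

```python
import numpy as np

def project_via_mirror(A, R_s2c, T_s2c, n, d, P_w):
    """Mirror-based projection: R = (I - 2 n n^T) R_s2c,
    T = (I - 2 n n^T) T_s2c + 2 d n, then s v = A (R P_w + T)."""
    n = n / np.linalg.norm(n)
    H = np.eye(3) - 2.0 * np.outer(n, n)
    P_c = H @ (R_s2c @ P_w + T_s2c) + 2.0 * d * n  # mirrored point, camera frame
    v = A @ P_c
    return v[:2] / v[2]                            # pixel coordinates

A = np.array([[800.0, 0.0, 640.0],                 # assumed intrinsics
              [0.0, 800.0, 512.0],
              [0.0, 0.0, 1.0]])
uv = project_via_mirror(A, np.eye(3), np.array([0.0, 0.0, 2.0]),
                        np.array([0.0, 0.0, 1.0]), 3.0, np.zeros(3))
```

A point at the reference-plane origin on the optical axis projects, as expected, to the principal point.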

Computation of External Parameters
By changing the tilt angle of the plane mirror, we can obtain mirrored images at different positions and compute the external parameters by the P3P algorithm [16]. Let j, j′ ∈ {1, 2, 3}, and let R j denote the rotation matrix of the mirrored image at the j-th position of the plane mirror. Assume the unit vector m jj′ is perpendicular to both n j and n j′, so we can obtain:

R j · R j′^T · m jj′ = m jj′

R j · R j′^T is a special orthogonal matrix, which has two complex conjugate eigenvalues and one eigenvalue equal to 1, so m jj′ is the eigenvector of R j · R j′^T corresponding to the eigenvalue 1. According to the cross-product properties of the eigenvectors, the unit normal vectors corresponding to the three positions of the plane mirror can be calculated.
According to Equation (5), R s2c can be calculated. In the ideal, noise-free case, the three rotation matrices calculated from the three R j should be equal; in practice, they differ due to noise. Therefore, the average of the rotation matrices should be calculated [20].
The rest of the parameters [T, d 1, d 2, d 3]^T can be solved from linear equations constructed from Equation (5). So far, all of the initial values of the pose parameters have been calculated.
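The eigenvector extraction described above can be sketched as follows. This is a hypothetical numpy illustration: the two mirror normals are arbitrary, and R j · R j′^T reduces to a product of two Householder matrices because the reference-to-camera rotation cancels, so the recovered axis m jj′ must be perpendicular to both normals.

```python
import numpy as np

def householder(n):
    """Householder reflection I - 2 n n^T for a (normalized) mirror normal."""
    n = n / np.linalg.norm(n)
    return np.eye(3) - 2.0 * np.outer(n, n)

def rotation_axis(Q):
    """Eigenvector of the special orthogonal matrix Q for the eigenvalue 1."""
    w, V = np.linalg.eig(Q)
    k = np.argmin(np.abs(w - 1.0))
    m = np.real(V[:, k])
    return m / np.linalg.norm(m)

# Two illustrative mirror normals; R_j R_j'^T = H_j H_j' is a rotation whose
# axis lies along n_j x n_j', hence perpendicular to both mirror normals.
n1 = np.array([0.0, 0.0, 1.0])
n2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
m = rotation_axis(householder(n1) @ householder(n2))
```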

Optimization with Coplanar Constraint
Linear solutions are usually sensitive to noise, so we minimize the reprojection error of the back-projection by adjusting R s2c, T s2c, n, and d under the coplanar constraint. As is shown in Figure 2, we move the calibration grid on the LCD monitor W times and rotate the plane mirror M times for each grid position. The grid has N characteristic corners. Let R ji represent the rotation matrix of the mirrored image of the j-th grid at the i-th plane mirror position. In the same way, T ji is the translation vector, n ji represents the normal vector of the mirror, d ji represents the distance between the origin of the camera coordinate system and the plane mirror, R s2cj represents the rotation matrix from the j-th checkerboard coordinate system to the camera coordinate system, and T s2cj represents the translation vector. P k represents the k-th feature point of the grid in the reference plane coordinate system. q jik represents the projection point of the k-th feature point of the j-th grid at the i-th planar mirror position, and q̂ jik represents the back-projection point. The back-projection process can be written as:

λ ji · q̂ jik = A · (R ji · P k + T ji)  (10)

where λ ji is a nonzero scale factor, A represents the intrinsic matrix of the camera, and R ji = (I − 2 · n ji · n ji^T) · R s2cj, T ji = (I − 2 · n ji · n ji^T) · T s2cj + 2 · d ji · n ji.
We can obtain three calibration images at each grid location. Then, the W × 3 calibration images can be used to calculate the intrinsic matrix A, as well as the pose parameters R ji and T ji. Finally, R s2c, T s2c, n, and d can be calculated by Equations (8) and (9).
Combined with Equation (10), the reprojection error function of the back-projection can be expressed as:

Errpro = Σ_{j=1}^{W} Σ_{i=1}^{M} Σ_{k=1}^{N} ‖q jik − q̂ jik‖²

Let P jk represent the k-th feature point of the j-th checkerboard in the camera coordinate system.
Since the reference plane can be regarded as a standard plane, the coplanar constraint of the W grids should be added. Let Perr represent the evaluation value of the plane fitting function, [fitresult, Perr] = createFit(dx, dy, dz), whose input is the set of points P jk. The smaller the Perr value is, the better the coplanarity. In addition, R s2cj, j ∈ {1, . . . , W} are equal in theory. Let R av represent the average rotation matrix [24]. The error Rerr between the R s2cj and R av can be written as:

Rerr = Σ_{j=1}^{W} ‖R s2cj − R av‖

The smaller the Rerr value is, the better the coplanarity. Likewise, the five plane mirror positions with zero tilt angle on the optical platform are also coplanar, so the corresponding normal vectors n j1 are theoretically equal. The average normal vector n av can also be calculated, and the corresponding normal error Nerr is defined analogously from the deviations of the n j1 from n av.
In the ideal condition, Perr = 0, Rerr = 0, and Nerr = 0. Therefore, the cost function can be regarded as two major components: the reprojection error term Errpro and the coplanar constraint terms (Perr, Rerr, Nerr). We can establish the cost function in the case of equality constraints:

min Errpro  subject to  Perr = 0, Rerr = 0, Nerr = 0

where R s2cj, T s2cj, n ji, and d ji are the parameters to be optimized. The specific LM calculation can be realized with the tool function lsqnonlin() in Matlab.
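In Python, scipy.optimize.least_squares with method="lm" plays the role of Matlab's lsqnonlin(). The toy sketch below (made-up intrinsics, a synthetic planar grid, and a synthetic pose, not the paper's data) refines a six-parameter pose by minimizing a stacked reprojection residual, the same pattern used for the cost function above.

```python
import numpy as np
from scipy.optimize import least_squares

A = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])          # assumed intrinsic matrix

def rodrigues(r):
    """Axis-angle vector -> rotation matrix."""
    th = np.linalg.norm(r)
    if th < 1e-12:
        return np.eye(3)
    k = r / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def project(params, P):
    """Project points P under pose (axis-angle, translation) = params."""
    R, T = rodrigues(params[:3]), params[3:]
    Pc = P @ R.T + T
    uv = Pc @ A.T
    return uv[:, :2] / uv[:, 2:3]

# Synthetic planar grid (z = 0) observed under a known "true" pose.
P = 0.03 * np.array([[x, y, 0.0] for x in range(4) for y in range(3)])
true = np.array([0.1, -0.2, 0.05, 0.01, -0.02, 0.5])
obs = project(true, P)

# LM refinement of the pose from a rough initial guess.
res = least_squares(lambda p: (project(p, P) - obs).ravel(),
                    x0=np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.4]),
                    method="lm")
```

In the paper's setting, the residual vector would additionally stack the coplanarity terms (Perr, Rerr, Nerr) alongside the reprojection residuals.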

Three-Dimensional Measurement Principle of a Single Camera
In the monocular measurement system, we observe the images of the grid pattern reflected in the unknown surface when the pose of the camera is known, and we establish the reflection correspondence between the three-dimensional reference points and the two-dimensional image points. The depth of the reflection points on the surface is parameterized, and the surface shape is fitted by a polynomial. Therefore, the measurement of the surface shape is converted into an optimization problem: minimizing the error between the reference points and the corresponding points through surface back projection [12]. The principle of the measurement system is shown in Figure 3. O is the origin of the camera coordinate frame, m is a feature point on the reference plane, p is a reflection point on the surface, and v is a projection point on the normalized image plane. p and v are called reflection correspondences. l is the reflected ray at p, and i is the incident ray. R s2c and T s2c are the rotation matrix and translation vector from the reference plane coordinate frame to the camera coordinate frame. Obviously, v is on the incident ray i. The relationship between p and v is given by:

p = s · v  (16)

Here, s is the depth of the corresponding reflection point p. Correspondingly, the normal n to the surface at p can be written in terms of the incident and reflected rays: since l = i − 2 · ⟨n, i⟩ · n, the (unnormalized) normal is parallel to i − l. Suppose the coordinates of the normalized image points {v 1, v 2, . . . , v m} and the points on the reference plane {m 1, m 2, . . . , m m} are known. The principle of back projection is shown in Figure 4. The three-dimensional reflection point on the mirror corresponding to the normalized image plane coordinates (x i, y i)^T can be expressed as p i = s i · (x i, y i, 1)^T.
The unit vector of the incident ray is i i = (x i, y i, 1)^T / ‖(x i, y i, 1)^T‖, the unit vector of the reflected ray is l i = i i − 2 · ⟨n i, i i⟩ · n i, and n i = n i / ‖n i‖. Let R s2c = (r 1 r 2 r 3); r 3 represents the coordinates of the unit vector along the Z-axis of the reference plane coordinate frame, expressed in the camera coordinate frame. T s2c indicates the coordinates of the origin of the reference plane coordinate frame in the camera coordinate frame. The reference plane can be represented by the vector q = (r 3^T, −r 3^T · T s2c)^T, such that ⟨q, (m i^T, 1)^T⟩ = 0 for any point on the reference plane. Back-projection can be achieved by computing the point m̂, the intersection of the reflected ray with the reference plane.
Figure 4. Principle of back projection. The rotation matrix R s2c can be written as (r 1 r 2 r 3). r 3 denotes the unit vector along the Z-axis of the reference plane. T s2c denotes the distance between S and C. The reflected ray l intersects the reference plane at the point m̂. The point m̂ satisfies −r 3^T · m̂ = d s2c. We denote by d c2s the distance between C and the reference plane.
In Equation (18), m̂ i is a function of the depth s. We can build an optimization model to minimize the error between the back-projection point m̂ i and the real point m i on the reference plane. That means solving a nonlinear least-squares problem to estimate the depth of the mirror surface.

For the minimization problem in (19), we can iteratively calculate s with the LM algorithm. The initial surface can be regarded as a plane.
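The back-projection step, reflecting the viewing ray and intersecting it with the reference plane, can be sketched as below. The plane and rays are illustrative numbers, not values from the experiments.

```python
import numpy as np

def reflect(i, n):
    """Reflected direction l = i - 2 <n, i> n for a unit normal n."""
    return i - 2.0 * (n @ i) * n

def back_project(p, l, r3, T_s2c):
    """Intersect the ray {p + t l} with the reference plane
    r3 . (m - T_s2c) = 0 and return the intersection point m_hat."""
    t = r3 @ (T_s2c - p) / (r3 @ l)
    return p + t * l

# Illustrative reference plane z = 2 and a reflection point at the origin.
r3, T_s2c = np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 2.0])
m_hat = back_project(np.zeros(3), np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0),
                     r3, T_s2c)
```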

Calibration Experiment
To verify the accuracy and universality of the proposed calibration method, a monocular vision system measurement experiment was designed (Figure 5). The whole measurement system consists of an optical platform, a standard plane mirror, an LCD monitor, and a large-FOV camera. The focal length of the camera is 8 mm, the image resolution is 1280 pixel × 1024 pixel, and the pixel size is 4 µm. When the measurement distance is about 1000 mm, the FOV of the camera is 820 mm × 670 mm, which is much bigger than the grid image. The LCD is 19 inches in size and has a pixel size of 0.2451 mm. In order to approximate a large-field-of-view measurement scene, we use a 90 × 120 mm checkerboard image as the calibration target, which is much smaller than the camera's field of view.

The LCD faces the standard plane mirror on the optical platform, and the grid image on the LCD is captured by the camera through the plane mirror. In the experiment, the grid image is moved on the LCD. Each grid position corresponds to three positions of the plane mirror: the position STZ on the optical platform, the position STX tilted around the X-axis, and the position STY tilted around the Y-axis. In this way, it is ensured not only that the three positions of the plane mirror intersect with each other to satisfy the orthogonality constraint, but also that there is an obvious height difference to satisfy the conditions of Zhang's calibration method. Figure 6 shows a set of mirrored images of the grid taken by the camera for calibration. The grid image was moved five times, and the five positions of the grid basically filled the whole LCD screen to cover the whole FOV of the camera.
Among the five pose transformation parameters from the reference coordinate system to the camera coordinate system, the rotation matrices are theoretically equal, while the translation vectors change with the motion of the grid. In the same way, the plane mirrors at the STZ position corresponding to the five grid images are also coplanar, so the corresponding mirror normal vectors are equal. This is the coplanar constraint described in Section 2.4.

As is shown in Figure 7c,d, the positions of the chessboards are not only poor in coplanarity, but they also have large offsets in their relative positions, which cannot comply with the law of mirror reflection. Figure 8a shows that the coplanarity of the five grids performs well with the coplanar constraint (RMSE = 0.11 mm), whereas the five grids without the coplanar constraint have poor coplanarity (RMSE = 6.45 mm). Figure 8b shows the reprojection error of the two methods after back projection. The average reprojection error of the method proposed in this paper is 0.1641 pixels, and that of the method in paper [16] is 0.1419 pixels. The two methods are similar in terms of calibration accuracy, and the reprojection error without the coplanar constraint is smaller. However, for the reference plane, that calibration result is only locally optimal. With the coplanar constraint, the reprojection optimization model can unify the positions of the five checkerboards and optimize the calibration results as a whole. Therefore, the calibration method in this paper sacrifices part of the calibration accuracy to improve the reliability of the algorithm. This calibration result is more suitable for practical measurement.


Measurement of the Step Surface
After the calibration of the reference plane, we can carry out a three-dimensional measurement experiment according to Section 2.5. As is shown in Figure 9a,b, a standard plane mirror is placed on the optical platform, and the mirror feature points are calculated at the STZ position. Then, a standard gauge block is placed between the optical table and the plane mirror, so that the mirror position is 8.74 mm higher than before, and the mirror feature points are calculated at the higher mirror position. The mirror surface is fitted to the feature points by createFit(); the point-to-plane distance formula is then used to calculate the distance from each feature point to the fitted plane, and the average value is taken. Comparing it with the actual distance of 8.74 mm indirectly verifies the accuracy of the calibration method proposed in this paper. The mirror feature points of the first mirror position are shown in Figure 9c. The plane fitting model is as follows:

f(x, y) = p 00 + p 10 · x + p 01 · y

We obtain the coefficients of the plane p 00 = 421.4000, p 10 = −0.6167, p 01 = 0.0267, with RMSE = 0.02 mm. For an intuitive display, the first and second mirror positions are shown together in Figure 9d. The average distance between the two mirror positions is 8.68 mm. The difference from the actual distance of 8.74 mm is 0.06 mm, and the relative error is 0.69%.
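The plane-fit and point-to-plane evaluation can be reproduced with a short numpy sketch standing in for Matlab's createFit(). The points are synthetic; only the 8.74 mm step height is taken from the text.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares fit of f(x, y) = p00 + p10*x + p01*y to 3D points."""
    D = np.c_[np.ones(len(pts)), pts[:, 0], pts[:, 1]]
    coef, *_ = np.linalg.lstsq(D, pts[:, 2], rcond=None)
    return coef                                  # (p00, p10, p01)

def mean_plane_distance(pts, coef):
    """Average point-to-plane distance for p10*x + p01*y - z + p00 = 0."""
    p00, p10, p01 = coef
    n = np.array([p10, p01, -1.0])
    return np.mean(np.abs(pts @ n + p00)) / np.linalg.norm(n)

# Synthetic flat mirror at z = 5 mm, then raised by an 8.74 mm gauge block.
xy = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
low = np.c_[xy, np.full(len(xy), 5.0)]
high = low + np.array([0.0, 0.0, 8.74])
coef = fit_plane(low)
step = mean_plane_distance(high, coef)
```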

Measurement of the Spherical Mirror
In addition, we also measured a spherical mirror surface. The principle of this experiment is the same as that of the plane mirror. Firstly, five sets of spherical feature points are measured, with 108 points in each group, as the measurement data. Then, the spherical mirror is placed on a coordinate measuring machine (model: MC850) with a resolution of 1 µm for sampling.
The number of detection points is 202, and these are used as the reference data. Since the coordinate system of the coordinate measuring machine is not unified with the camera coordinate system, it is necessary to use CloudCompare software to align the measurement data and the reference data with the iterative closest point (ICP) method. The ICP registration of the measured feature points and the reference feature points is shown in Figure 10b. The spherical equation is fitted to the reference data through CloudCompare; the fitted sphere, given by Equation (21), is shown in Figure 11a, with RMSE = 0.01 mm, and the fitting error distribution is shown in Figure 11b. We can obtain the spherical mirror radius from Equation (21) (Table 1).
In the experiment, we use a cubic polynomial to initialize the spherical mirror surface because we treat the mirror surface as unknown. If we directly used the spherical equation to iteratively optimize the mirror surface, the measurement accuracy would be even better.
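For reference, the radius check in Table 1 can be emulated with an algebraic least-squares sphere fit. The data below are synthetic; the center and the 20 mm radius are arbitrary, not the measured mirror.

```python
import numpy as np

def fit_sphere(pts):
    """Algebraic sphere fit: |p|^2 = 2 c . p + (r^2 - |c|^2)."""
    D = np.c_[2.0 * pts, np.ones(len(pts))]
    sol, *_ = np.linalg.lstsq(D, (pts ** 2).sum(axis=1), rcond=None)
    center, k = sol[:3], sol[3]
    return center, np.sqrt(k + center @ center)

# Sample points on a synthetic sphere of radius 20 centered at (1, 2, 3).
rng = np.random.default_rng(0)
dirs = rng.standard_normal((200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 3.0]) + 20.0 * dirs
center, radius = fit_sphere(pts)
```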

Conclusions
This paper proposes a calibration method based on coplanar constraints for a camera with a large FOV. The whole experimental process is divided into two parts. The first is the calibration of the large-FOV camera against the reference plane: by adjusting the tilt angle of the plane mirror and moving the grid image on the LCD monitor, the camera acquires multiple sets of calibration images and then obtains the optimal solution of the external parameters between the camera and the LCD monitor under the coplanar constraint. The second is shiny surface reconstruction: when the pose of the reference plane is known, we can establish the dense reflection correspondence between the two-dimensional feature points on the normalized image plane, the three-dimensional feature points on the reference plane, and the reflection points on the shiny surface, and we can iteratively calculate the depth information of the reflection points. In terms of calibration accuracy, the method proposed in this paper is similar to that of [16]. At the same time, the results of the step surface and spherical surface measurement experiments also indirectly prove the accuracy of the proposed method. The universality of the method is of research significance for future application to multi-camera measurement systems.