DoF-Dependent and Equal-Partition Based Lens Distortion Modeling and Calibration Method for Close-Range Photogrammetry

Lens distortion is closely related to the spatial position within the depth of field (DoF), especially in close-range photography. Accurate characterization and precise calibration of DoF-dependent distortion are therefore important for improving the accuracy of close-range vision measurements. In this paper, to meet the needs of short-distance and small-focal-length photography, a DoF-dependent and equal-partition based lens distortion modeling and calibration method is proposed. Firstly, considering the direction along the optical axis, a DoF-dependent yet focusing-state-independent distortion model is proposed. With this method, manual adjustment of the focus and zoom rings is avoided, thus eliminating human errors. Secondly, considering the direction perpendicular to the optical axis, to solve the problem of insufficient distortion representation caused by using only one set of coefficients, a 2D-to-3D equal-increment partitioning method for lens distortion is proposed. Accurate characterization of DoF-dependent distortion is thus realized by fusing the distortion partitioning method and the DoF distortion model. Lastly, a calibration control field is designed. After extracting line segments within a partition, de-coupled calibration of the distortion parameters and the other camera model parameters is realized. Experimental results show that the maximum/average projection and angular reconstruction errors of the equal-increment partition based DoF distortion model are 0.11/0.05 pixels and 0.013°/0.011°, respectively, demonstrating the validity of the lens distortion model and calibration method proposed in this paper.


Introduction
Vision measurement is a subject that allows quantitative perception of scene information by combining image processing with calibrated camera parameters. The calibration accuracy of the parameters is therefore an important determinant of the vision measurement uncertainty. Lens distortion is closely related to the depth of field (DoF), which refers to the distance between the nearest and the farthest objects that are in acceptably sharp focus in an image. For medium- or high-accuracy applications, close-range imaging parameters (e.g., short object distance (<1 m) and small focal length) are often adopted. On such occasions, the DoF has a significant influence on lens distortion and hence becomes a major cause of vision measurement errors. For instance, to ensure micron-level accuracy when detecting the contouring error of a machine tool [1], the camera is placed 400 mm away from the focal plane to collect and analyze the image sequence of the interpolation trajectory running within the DoF. In this case, measurement errors ranging from dozens to hundreds of microns can be caused by large lens distortion. Therefore, to improve vision measurement accuracy in close-range photogrammetry, accurate modeling and calibration of the DoF-dependent lens distortion are urgently needed. The lens distortion model maps the relation between distorted and undistorted image points. Models expressing this relation vary with the type of optical system, and include the polynomial distortion model, the logarithmic fish-eye distortion model [2], the polynomial fish-eye distortion model [2-5], the field-of-view (FoV) distortion model [6], the division distortion model [7,8], the rational function distortion model [9,10], and so on. In 1971, Brown [11,12] proposed the Gaussian polynomial function to express radial and decentering distortion, which is particularly suitable for studying the distortion of a standard lens in high-accuracy measurements [13,14].
Later, researchers noticed that the observed radial and decentering distortion varies with the focal length, the lens focusing state (i.e., focused or defocused), and the position within the DoF. Since then, research has focused on improving distortion calibration and modeling methods to obtain a precise representation of distortion behavior. Distortion calibration has developed in two directions: the coupled-calibration method and the decoupled-calibration method. The former can generally be divided into three types: the self-calibration method [15], the active calibration method, and the traditional calibration method [16]. Among the traditional ones, Zhang's calibration method [17] and its improved variants [18-20], used widely in industry and scientific research, are the most popular. In this coupled-calibration method, the distortion parameters are calculated by performing a full-scale optimization over all parameters. Due to the strong coupling effect, the estimated errors of the other parameters in the camera model (i.e., the intrinsic and extrinsic parameters) are propagated into the distortion parameters, preventing an optimal solution. By contrast, the decoupled-calibration method does not involve coupling other factors or entail any prior geometric knowledge of the calibration object; only geometric invariants of some image features, such as straight lines [6,12,21-23], vanishing points [24], or spheres [25], are needed to solve the parameters. Among these features, straight lines are easily found in scenes and extracted from noisy images, and thus have enormous potential.
Regarding distortion modeling, some researchers have incorporated the DoF into the distortion function. Magill [26] used the distortion of two focal planes at infinity to solve that of an arbitrary focal plane. Brown [12] then improved Magill's model by establishing distortion models of any focal plane and any defocused plane (a plane perpendicular to the optical axis within the DoF) on the condition that the distortions of two focal planes are known. Soon after, Fryer [27], based on Brown's model, realized the lens distortion calibration of an underwater camera [28]. Fraser and Shortis [29] introduced an empirical model and solved the Brown model's problem of inaccurately describing large image distortion. Additionally, Dold [30] established a DoF distortion model different from Brown's and solved the model parameters through a bundle adjustment strategy. In 2004, Brakhage [31] characterized the DoF distortion of the telecentric lens in a fringe projection system using Zernike polynomials. In 2006, the DoF distortion distribution of a grating projection system was experimentally analyzed by Bräuer-Burchardt. In 2008, Hanning [32] introduced depth (object distance) into a spline function to form a distortion model and used it to calibrate radial distortion.
The above DoF distortion models not only depend on the focusing state but also rely on the distortion coefficients of the focal plane. For these models, on the one hand, the focusing state is usually adjusted by manually twisting the zoom and focus rings, which introduces human errors and changes the camera parameters. On the other hand, the focus distance and the distortion parameters of the focal plane cannot be determined accurately. To overcome this problem, Alvarez [33], based on Brown's and Fraser's models, deduced a radial distortion model suitable for planar scenarios. With this model, when the focal length is locked, the distortion at any image position can be estimated using two lines in a single photograph. In 2017, Dong [34] proposed a DoF distortion model with which the distortion parameters on arbitrary object planes were accurately calibrated, reducing the error from 0.055 mm to 0.028 mm in a measuring volume of 7.0 m × 3.5 m × 2.5 m at a large object distance of 6 m. Additionally, in 2019, Ricolfe-Viala [35] proposed a depth-dependent high-distortion lens calibration method that embeds the object distance in the division distortion model, so that highly distorted images can be corrected with only one distortion parameter. However, these researchers used only one set of coefficients, which is not sufficient to represent the distortion accurately. To address this problem, some scholars adopted the idea of partitioning the image distortion, using several sets of distortion coefficients to characterize it. These studies, however, are only applicable to the partitioning of a 2D object plane and fail to take into account the distortion partition within the DoF and the correlation between lens distortion and DoF. Our previous work partitioned the distortion with an equal radius [36].
Although it improved the vision measurement accuracy, the distortion correction accuracy within the partition corresponding to the image edge remained low. Besides, the distortion model we adopted depends on the focusing state of the lens and is thus less practical. In general, current distortion models and partitioning methods cannot accurately reflect the lens DoF distortion behavior in close-range photography, especially for short-distance measurements.
To solve the above problems, a lens distortion model and calibration method for short-distance measurement, which take into consideration the dimensions of the DoF and the equal-increment partitioning of distortion, are proposed in this paper. The rest of this paper is organized as follows. In Section 2, a focusing-state-independent DoF distortion model, which involves only the spatial position of the observed point, is constructed. In Section 3, based on the model in the previous section, an equal-increment partitioning DoF distortion model is proposed, which enables a fine representation of the lens distortion in the photographic field. Section 4 details the calibration method for both the DoF distortion and the camera model parameters, as well as the image processing of the control field for distortion calibration. In Section 5, experimental verification of the proposed lens distortion model and calibration method is carried out. Finally, Section 6 concludes this paper.

Focusing-State-Independent DoF Distortion Model
The observed distortion of a point varies with its position within the DoF. Though the close-range imaging configuration increases the visible range, it enlarges the DoF image distortion, consequently affecting the measurement accuracy. To break the limitations of the aforementioned in-plane and DoF distortion model in the vision measurement of short-distance and small-focal-length settings, a DoF-dependent yet focusing-state-independent distortion model is proposed in this paper.

Pinhole Camera Model with Distortion
As illustrated in Figure 1, the linear pinhole camera model depicts the one-to-one mapping between 3D points in the object space and their 2D projections in the image. Let p(u_li, v_li) be the undistorted coordinates mapped from a spatial point in the world coordinate system O_w-X_w Y_w Z_w to the image coordinate system o-uv through the optical center O_C. Then, the camera mapping can be expressed as [17]

z · [u_li, v_li, 1]^T = K · M · [X_w, Y_w, Z_w, 1]^T,   M = [R | T]   (1)

where z is the scaling factor; K is the intrinsic parameter matrix, which quantitatively characterizes the critical parameters of the image sensor (i.e., Charge-Coupled Device (CCD) or Complementary Metal-Oxide Semiconductor (CMOS)); and matrix M, expressing the transformation between the vision coordinate system (VCS) and the world coordinate system, consists of the rotation matrix R and the translation matrix T. However, manufacturing and assembly errors lead to radial and decentering lens distortion. Consequently, the pinhole assumption does not hold for real camera systems, and the image projection of a straight line is bent into a curve (Figure 1b,c).
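As a concrete illustration of the linear mapping above, the following is a minimal Python sketch of the pinhole projection; the intrinsic values (f_x = f_y = 1800 px, principal point at 1280 px) are assumed for illustration only, and no distortion is applied yet:

```python
def project_pinhole(P_w, K, R, T):
    """Linear pinhole mapping: z * [u, v, 1]^T = K [R | T] [Xw, Yw, Zw, 1]^T."""
    # Transform the world point into the vision (camera) coordinate system.
    P_c = [sum(R[i][j] * P_w[j] for j in range(3)) + T[i] for i in range(3)]
    z = P_c[2]  # scaling factor: depth along the optical axis
    u = (K[0][0] * P_c[0] + K[0][1] * P_c[1] + K[0][2] * P_c[2]) / z
    v = (K[1][1] * P_c[1] + K[1][2] * P_c[2]) / z
    return u, v

# Example with an assumed intrinsic matrix and a trivial camera pose.
K = [[1800, 0, 1280], [0, 1800, 1280], [0, 0, 1]]
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # identity rotation
T = [0, 0, 0]                           # zero translation
u, v = project_pinhole([0.1, 0.2, 1.0], K, R, T)  # approximately (1460, 1640)
```

Note that dividing by z makes the result invariant to the overall scale of the homogeneous coordinates, which is why z can be treated as a free scaling factor in Equation (1).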
To characterize the lens distortion, Brown proposed a distortion model in polynomial form [11,12]:

δ_u_li = (u_li − u_0)(K_1 r² + K_2 r⁴) + P_1 (r² + 2(u_li − u_0)²) + 2 P_2 (u_li − u_0)(v_li − v_0)
δ_v_li = (v_li − v_0)(K_1 r² + K_2 r⁴) + P_2 (r² + 2(v_li − v_0)²) + 2 P_1 (u_li − u_0)(v_li − v_0)   (2)

where (u_li, v_li) are the distorted coordinates; δ_u_li and δ_v_li are the distortion functions of an image point in the u and v directions, respectively; (u_0, v_0) denotes the distortion center; r = sqrt((u_li − u_0)² + (v_li − v_0)²) stands for the distortion radius of the image point; K_1 and K_2 are the first- and second-order coefficients of radial distortion, respectively; while P_1 and P_2 are the first- and second-order coefficients of decentering distortion, respectively.
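A minimal numeric sketch of the Brown polynomial model in Equation (2); the coefficient values used below are illustrative, not calibrated:

```python
def brown_distortion(u, v, u0, v0, K1, K2, P1, P2):
    """Apply radial + decentering distortion increments of the Brown model."""
    du, dv = u - u0, v - v0
    r2 = du * du + dv * dv                 # squared distortion radius
    radial = K1 * r2 + K2 * r2 * r2        # K1*r^2 + K2*r^4
    delta_u = du * radial + P1 * (r2 + 2 * du * du) + 2 * P2 * du * dv
    delta_v = dv * radial + P2 * (r2 + 2 * dv * dv) + 2 * P1 * du * dv
    return u + delta_u, v + delta_v
```

With all coefficients zero the mapping reduces to the identity, and a point at the distortion center is never displaced; both properties make quick sanity checks for any implementation.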

Radial Distortion Model
Let δr_∞ be the radial distortion for a lens focused at plus infinity, and δr_−∞ that for minus infinity, and let m_s be the lateral magnification in the focal plane at object distance s. According to Magill's model [26], the lens radial distortion δr_s in the focal plane at distance s can be expressed in terms of δr_∞, δr_−∞, and m_s. Let δr_{s_m} and δr_{s_k} be the radial distortions in the focal planes when the lens is focused at distances s_m and s_k, respectively. Then, the distortion function δr_s for the focal plane at distance s can be written as

δr_s = α_s · δr_{s_m} + (1 − α_s) · δr_{s_k}   (4)

where f is the focal length and α_s = ((s_k − s)·(s_m − f)) / ((s_k − s_m)·(s − f)), so that α_s = 1 at s = s_m and α_s = 0 at s = s_k. The i-th radial distortion coefficient K_i^s for the focal plane at distance s is then

K_i^s = α_s · K_i^{s_m} + (1 − α_s) · K_i^{s_k}   (5)

where K_i^{s_m} and K_i^{s_k} are the i-th radial distortion coefficients when the lens is focused at distances s_m and s_k, respectively. As can easily be noticed in Equation (5), if the radial distortion coefficients of two different focal planes are known, the radial distortion coefficients of any focal plane can be obtained.
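The two-plane interpolation of Equation (5) can be sketched as follows; the interpolation factor is written in the endpoint-normalized form so that s = s_m and s = s_k recover the calibrated coefficients exactly, and the numeric distances and coefficient values are assumptions for illustration:

```python
def alpha(s, s_m, s_k, f):
    """Interpolation factor: equals 1 at s = s_m and 0 at s = s_k."""
    return ((s_k - s) * (s_m - f)) / ((s_k - s_m) * (s - f))

def radial_coeff(s, s_m, s_k, f, K_sm, K_sk):
    """i-th radial coefficient for the focal plane at distance s,
    interpolated from two calibrated focal planes (Equation (5))."""
    a = alpha(s, s_m, s_k, f)
    return a * K_sm + (1 - a) * K_sk

# Illustrative values: calibrated planes at 400 mm and 600 mm, f = 18 mm.
K_500 = radial_coeff(500, 400, 600, 18, 2.0e-8, 1.2e-8)
```

Because α_s is not linear in s (the (s − f) term sits in the denominator), the interpolated coefficient at 500 mm lies between the two calibrated values but not exactly at their midpoint.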

Decentering Distortion
As for the decentering distortion, the equations are as follows [12]:

δP_{s,s′} = r_{s,s′} · δP_∞   (6)

where r_{s,s′}, which contains the factor (1 − f/s′), is the compensation coefficient; δr_u and δr_v represent the components of the decentering distortion in the u and v directions, respectively; and s and s′ denote the object distances corresponding to the two focal planes, respectively.

DoF-Dependent Radial Distortion Model
Fraser and Shortis [29] proposed an empirical model for describing the distortion of any object plane (or defocused plane), which solved the Brown model's problem of inaccurately describing the severe distortion caused by short-distance, small-focal-length imaging configurations. The equation is as follows:

K_{s,s_p} = K_s + g · (K_{s_p} − K_s)

where K_{s,s_p} denotes the radial distortion coefficient in the defocused plane at depth s_p when the lens is focused at distance s; g is the empirical coefficient; and K_{s_p} and K_s represent the radial distortion coefficients in the focal planes at distances s_p and s, respectively. By extending the equation, the radial distortion function δr_{s,s_n} at s_n can be expressed through δr_{s,s_m} at s_m when the lens is focused at distance s:

δr_{s,s_m} = δr_s + g · (δr_{s_m} − δr_s)
δr_{s,s_n} − δr_{s,s_m} = [δr_s + g · (δr_{s_n} − δr_s)] − [δr_s + g · (δr_{s_m} − δr_s)]

From the above, we obtain δr_{s,s_n} = δr_{s,s_m} + α_{s,s_m}(s_n) · (δr_s − δr_{s,s_m}). Then, extending the result to the radial distortion of a point in the defocused plane at distance s_k, the relationship between δr_{s,s_n}, δr_{s,s_m}, and δr_{s,s_k} is given by

δr_{s,s_n} = δr_{s,s_m} + α_{s,s_m}(s_n) · (δr_s − δr_{s,s_m})
δr_{s,s_n} = δr_{s,s_k} + α_{s,s_k}(s_n) · (δr_s − δr_{s,s_k})
δr_{s,s_m} = δr_{s,s_k} + α_{s,s_k}(s_m) · (δr_s − δr_{s,s_k})

Obviously, when the lens is focused at distance s, δr_s can be eliminated from these relations, so that the radial distortion coefficient in any defocused plane at depth s_n is obtained from the two distortions calibrated at object distances s_m and s_k (Equation (12)). When the two object planes are set, K_i^{s,s_m}, K_i^{s,s_k}, s_m, s_k, and f are known. Thus, K_i^{s,s_n} in Equation (12) depends only on s_n, and it is independent of the distortion coefficient K_i^s on the focal plane and of the focus distance s.

DoF-Dependent Decentering Distortion Model
In Equation (6), since δP_{s,s_m} = r_{s,s_m} · δP_∞, the decentering distortion in the defocused planes can be written as

δP_{s,s_m} = r_{s,s_m} · δP_∞
δP_{s,s_k} = r_{s,s_k} · δP_∞
δP_{s,s_n} = r_{s,s_n} · δP_∞   (13)

where δP_{s,s_m}, δP_{s,s_k}, and δP_{s,s_n} are the decentering distortion functions in the defocused planes at the object distances s_m, s_k, and s_n, respectively, when the lens is focused at distance s. From the first two lines of the above equation, we get the ratio δP_{s,s_k}/δP_{s,s_m} = r_{s,s_k}/r_{s,s_m}. Substituting the first line into the third one, we obtain δP_{s,s_n}/δP_{s,s_m} = r_{s,s_n}/r_{s,s_m}, which simplifies to Equation (16). Introducing M_{s_k,s_m} = P_i^{s,s_k}/P_i^{s,s_m} (i = 1, 2) into Equation (16), the decentering distortion coefficient in any defocused plane is obtained (Equation (17)). Given that the parameters P_i^{s,s_m}, P_i^{s,s_k}, s_m, and s_k are known, Equation (17) shows that P_i^{s,s_n} depends only on the object distance s_n and is independent of the focus distance s and the distortion P_i^s in the focal plane. Moreover, since the focal length f does not appear in Equation (17), the decentering distortion is not affected by this parameter.
Thus far, the DoF-dependent yet focusing-state-independent distortion model suitable for close-range, short-distance measurement scenes has been established. It overcomes the limited practicability of calibrating DoF distortion by manually adjusting the focus and zoom rings, and it also solves the problem that the current position and the distortion parameters of the focal plane are not exactly known.

Equal-Increment Partition Based DoF Distortion Model
The distortion coefficients are solved by minimizing the straightness error of the observed points. If a single set of distortion coefficients is used to describe the distortion over the whole image, the coefficients represent an error balance over all points; for each individual region of the image, however, the error is not minimal. Hence, an equal-increment partition based DoF distortion model is proposed in this section. The distortion spreads outward from the image center along circumferential contours, being small in the middle and large at the image edge. In this paper, we first partition the in-plane distortion in an equal-increment way, and then extend the 2D partition strategy to the 3D photographic field. Figure 2 presents the two distortion partitioning methods. The X axis represents the distance from an image point to the distortion center (u_0, v_0), namely the distortion radius; the Y axis describes the distortion in pixels. The blue curve is the distortion curve calculated from the features in the whole image. As illustrated in Figure 2a, when DoF distortion is partitioned by an equal radius, the distortion increment of each partition is different [36]. For a polynomial-based distortion function, it is well known that the more scattered the distorted points and the larger the distortion increments, the lower the regression accuracy of the function to the distortion. As a result, the estimated accuracy of the partitions' distortion parameters decreases gradually from inside to outside (ε_1 > ε_2 > ε_3 > ε_4 > ε_5).

Equal-Increment Based Distortion Partitioning Method
Sensors 2020, 20, x 8 of 24
To solve the problem, a DoF distortion model based on equal-increment partitioning is proposed in this paper, with the following procedure: (1) Estimate the distortion curve using all features in the whole image (Figure 2b). Then, determine the maximum image distortion δ_max according to the maximum distortion radius and the distortion curve. The maximum distortion radius of the image is r_max = sqrt(I_l² + I_h²)/2, where I_l and I_h are the length and height of the image, respectively. (2) In the central image region, the distortion is so tiny that the correction cannot converge after iteration, which results in an undistorted image of poorer quality than the original. Therefore, we use δ_limited, the minimum distortion value at which the algorithm converges in the central image region, as the threshold to estimate r_limited, the minimum value of the image distortion radius. (3) Determine the number of partitions n_p. (4) Use the maximum distortion δ_max, the lower-limit distortion δ_limited, and n_p to determine the distortion increment of each partition, δ_equ = (δ_max − δ_limited)/n_p (Figure 2b). (5) Calculate the radius increment of each partition using δ_equ and the distortion curve (Figure 2b). (6) Calibrate the distortion curve of each partition from the features in the corresponding image partition using the decoupled-calibration method (see Section 4).
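The partitioning steps above can be sketched as follows: given a monotonic distortion curve, the ring boundaries are found by inverting the curve at equal distortion increments via bisection. The cubic curve used here is only an assumed stand-in for a calibrated distortion curve, and the radii are illustrative pixel values:

```python
def partition_radii(distortion, r_limited, r_max, n_p, tol=1e-9):
    """Ring boundaries giving equal distortion increments delta_equ."""
    d_lo, d_hi = distortion(r_limited), distortion(r_max)
    d_equ = (d_hi - d_lo) / n_p            # equal distortion increment
    radii = [r_limited]
    for g in range(1, n_p + 1):
        target = d_lo + g * d_equ          # distortion at the g-th boundary
        a, b = r_limited, r_max
        while b - a > tol:                 # invert the monotonic curve
            mid = 0.5 * (a + b)
            (a, b) = (mid, b) if distortion(mid) < target else (a, mid)
        radii.append(0.5 * (a + b))
    return radii

# Assumed distortion curve: magnitude ~ K1 * r^3 (first-order radial term).
rings = partition_radii(lambda r: 1e-8 * r ** 3, 100.0, 1800.0, 5)
```

Because the curve grows faster at large radii, the outer rings come out narrower than the inner ones, which is exactly the behavior Figure 2b describes: equal distortion per partition, unequal radius per partition.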
Then, the distortion partition of the 2D object plane is extended to the 3D DoF. As can be seen from Equation (1), the object-to-image mapping satisfies the following, where P_m(X_m, Y_m, Z_m) and P_k(X_k, Y_k, Z_k) are two points in the VCS, and the 2D point p(x, y) (in millimeters) is the image projection of both P_m and P_k (P_m, P_k, and O_C are collinear). Let ρ be the partition radius in the image plane, so that x² + y² = ρ². From this, we know that f · R_m = ρ · Z_m and Z_m · R_k = Z_k · R_m, where Z_m and Z_k are the depths of the m-th (Π_m) and k-th (Π_k) object planes in the VCS, respectively, and R_m and R_k are the partition radii of the two object planes. Let s_m = Z_m and s_k = Z_k, and then extend the above distortion partitions to the 3D DoF domain. As shown in Figure 3, if the range of the g-th partition in the object plane Π_m is ((g − 1) · R_m, g · R_m], the partition ranges in the object planes Π_k and Π_n are ((g − 1) · R_k, g · R_k] and ((g − 1) · R_n, g · R_n], with R_k = (s_k/s_m) · R_m and R_n = (s_n/s_m) · R_m. In this way, although the distortion radius in each partition is different, the distortion coefficients can be obtained with high accuracy because the image distortion is partitioned by equal distortion increments.
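The similar-triangle relations above reduce to a single scaling rule for the partition radii; a minimal sketch, with distances in millimeters and all numeric values assumed:

```python
def partition_range(g, R_m, Z_m, Z_k):
    """Range of the g-th partition on plane Pi_k, scaled from plane Pi_m
    via Z_m * R_k = Z_k * R_m (both planes project to the same image ring)."""
    R_k = R_m * Z_k / Z_m
    return ((g - 1) * R_k, g * R_k)

# The 3rd ring on a plane at 600 mm, from a 10 mm ring width at 400 mm:
lo, hi = partition_range(3, 10.0, 400.0, 600.0)  # (30.0, 45.0)
```

The same image ring thus covers a proportionally larger annulus on farther object planes, which is what lets one 2D partition of the image induce a consistent 3D partition of the whole photographic field.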

Figure 3. The geometric relationship between the partition radii in different object planes.

Equal-Increment Partition Based DoF Distortion Model
After partitioning the DoF distortion, we incorporate the partitions into the DoF distortion model. Procedures to solve the partition radius and distortion coefficients on any object distance s n are as follows: (1) Partition the distortion in the object plane Π m using the proposed method, and calculate the i-th order radial and decentering distortion coefficients in the g-th partition. Register the two coefficients as g K s,s m i and g P s,s m i respectively.
(2) Based on the g-th partition in the object plane Π m (the object distance is s m ), the corresponding partition radius in the object plane Π k (the object distance is s k ) is calculated. In addition, the i-th order radial and decentering distortion coefficients can be computed. Register the two coefficients as g K s,s k i and g P s,s k i respectively.
(3) Based on the partitions in the object plane Π_m, we calculate the partitions in the object plane Π_n (the object distance is s_n). Then, for the g-th partition of the object plane Π_n, the radial distortion coefficient gK_i^{s,s_n} and the decentering distortion coefficient gP_i^{s,s_n} can be expressed by applying Equations (12) and (17) within that partition (i = 1, 2; g = 1, 2, ..., n_p). At this point, we have established an equal-increment partition based DoF distortion model for any object plane at s_n when the lens is focused at distance s.

Calibration Method for Camera Parameters
In close-range photography, the DoF images are seriously distorted, so the calibration accuracy of the distortion parameters is the decisive factor affecting the vision measurement accuracy. When the coupled-calibration method is used to solve the distortion parameters, the estimated errors of intrinsic and extrinsic parameters will be propagated to distortion parameters. Thus, a two-step method is proposed to calibrate the camera parameters, in which distortion parameters are estimated independently.

Independent Distortion Calibration Method Based on Linear Conformation
Figure 4 details the experimental system for DoF lens distortion, which consists of a monocular camera, a control field, a light source, an electric control platform, and a multi-axis motion controller. The X, Y, A, and C axes of the platform are in the object space, while the Z-axis is in the image space. A control field containing circle, corner, and line features, with accurately known geometric relationships between them, is used to calibrate the lens distortion. On this basis, the pose of the control field relative to the image plane can be estimated by the Perspective-n-Point (PnP) algorithm and then adjusted.
In this paper, the distortion coefficients can be estimated by the plumb-line method [12] alone. It is based on Brown's (1971) observation that a straight line in the object space is mapped to a straight line on the image plane by a perfect lens, so any deviation from straightness reflects the lens distortion described by the radial and decentering distortion coefficients. As demonstrated in Figure 5, when N edge points (u_1, v_1), ..., (u_N, v_N) on the same curve are known, the regression line equation determined by the point group is

(u − ū) · sin θ − (v − v̄) · cos θ = 0

where (ū, v̄) is the centroid of the point group and θ is the angle between the regression line and the u axis (Figure 5).
Given that there are L lines, with N_l points in the l-th line, the sum of squared distances from the points (u_li, v_li) to their respective regression lines can be written as

D = Σ_{l=1}^{L} Σ_{i=1}^{N_l} d_li²   (23)

where d_li is the distance from point (u_li, v_li) to the l-th regression line. Any deviation of a line's straightness in the image plane can be corrected by a mapping involving radial and decentering distortion. Thus, substituting Equation (2) into Equation (23), we obtain the constraint F(u_li, v_li; K_1, K_2, P_1, P_2) = 0 (24). If there are L lines in an image and N_l observation points are extracted from each line, we have L · N_l equations containing L + 4 unknowns (L line coefficients and 4 distortion coefficients). If L · N_l > L + 4, the optimal solution of the distortion coefficients can be obtained.
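The line fit and the cost D can be sketched as follows: a least-squares line through each point group, with θ obtained from the scatter moments of the points. This is a generic orthogonal fit under the stated definitions, not the paper's exact solver:

```python
import math

def fit_line(points):
    """Fit a regression line to a point group: centroid plus angle theta
    of the principal direction (minimizes perpendicular distances)."""
    n = len(points)
    cu = sum(u for u, _ in points) / n
    cv = sum(v for _, v in points) / n
    suu = sum((u - cu) ** 2 for u, _ in points)
    svv = sum((v - cv) ** 2 for _, v in points)
    suv = sum((u - cu) * (v - cv) for u, v in points)
    theta = 0.5 * math.atan2(2 * suv, suu - svv)
    return cu, cv, theta

def straightness_D(lines):
    """Sum of squared point-to-line distances over all extracted lines."""
    D = 0.0
    for pts in lines:
        cu, cv, th = fit_line(pts)
        for u, v in pts:
            # Signed distance along the line's unit normal (-sin th, cos th).
            d = -(u - cu) * math.sin(th) + (v - cv) * math.cos(th)
            D += d * d
    return D
```

For perfectly collinear points D vanishes; a distortion solver would wrap this cost in an optimizer over (K_1, K_2, P_1, P_2), re-fitting the lines to the corrected points at each step.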
After solving the image distortion coefficients, the inverse mapping imR(u, v) = imD(u_d, v_d) between the undistorted image imR and the distorted image imD is established by cubic B-spline interpolation; in this way, the image distortion can be corrected. Besides, in this paper, three straightness indicators of the point-to-line distance, namely the maximum, the average, and the root mean square (RMS) d = sqrt(D / Σ_{l=1}^{L} N_l), together with the Peak Signal-to-Noise Ratio (PSNR), PSNR = 10 · log_10((2^n − 1)²/MSE), are used to evaluate the distortion correction effect. D is defined in Equation (23), and MSE is the mean square error of the image before and after distortion correction.
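The evaluation metrics can be sketched as follows, with images taken as flat lists of gray levels and n = 8 bits assumed:

```python
import math

def rms_straightness(D, total_points):
    """RMS point-to-line distance d = sqrt(D / sum of N_l)."""
    return math.sqrt(D / total_points)

def psnr(im_ref, im_cor, bits=8):
    """PSNR = 10 * log10((2^bits - 1)^2 / MSE) between two images."""
    mse = sum((a - b) ** 2 for a, b in zip(im_ref, im_cor)) / len(im_ref)
    if mse == 0:
        return float("inf")                # identical images
    return 10 * math.log10((2 ** bits - 1) ** 2 / mse)
```

Higher PSNR between the corrected image and the reference indicates less residual degradation, complementing the geometric straightness indicators.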

Image Processing and Camera Calibration
In this paper, the parameters of the equal-increment partition based DoF distortion model are calculated using straight lines in a particular area of the control field. To this end, a corner-control based method for extracting line segments within a partition is proposed. As shown in Figure 6, the image processing procedure includes the following: (1) Image acquisition. Capture the image of the control field using the monocular camera (Figure 6a). By combining the image processing results with the DoF distortion partition model, the distortion parameters at any position in the DoF can be determined. To avoid the coupling effect between the distortion parameters and the other parameters in the camera model, the camera's intrinsic and extrinsic parameters are preliminarily calibrated by Zhang's method. Then, we fix the distortion parameters and place the high-precision target at multiple spatial positions to optimize the intrinsic and extrinsic parameters. In the cost function to be optimized, E^q_depth_dependent(R_q) describes the cost when the control field is in the q-th pose; R_q and T_q are the rotation and translation matrices of the q-th pose; and gK_i and gP_j are the i-th order radial and j-th order decentering distortion coefficients in the g-th partition of the q-th pose. By using the Levenberg-Marquardt (LM) algorithm, the optimal solution of the camera's intrinsic and extrinsic parameters is obtained. Through the above process, the monocular camera calibration is realized. In practice, the partition in which a spatial point is located can be determined after estimating its 3D position (X, Y, Z). Then, the observed distortion can be corrected by choosing the proper distortion coefficients, thus realizing high-accuracy vision measurements.
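In practice, choosing the coefficient set for a measured 3D point reduces to a partition lookup; a minimal sketch under the equal-width ring ranges of the 3D partition extension, where the ring width R_m on the reference plane at depth Z_m is an assumed calibration output:

```python
def partition_of_point(X, Y, Z, R_m, Z_m, n_p):
    """1-based index g of the DoF partition containing VCS point (X, Y, Z):
    the ring width on the reference plane is scaled to depth Z, then the
    point's lateral radius is binned into a ring."""
    R_z = R_m * Z / Z_m                    # ring width at depth Z
    rho = (X * X + Y * Y) ** 0.5           # lateral distance from the axis
    g = int(rho / R_z) + 1
    return min(g, n_p)                     # clamp to the outermost partition

# A point 37 mm off-axis at 600 mm depth, rings of 10 mm width at 400 mm:
g = partition_of_point(37.0, 0.0, 600.0, 10.0, 400.0, 5)  # ring width 15 -> g = 3
```

The returned index selects the coefficient set (gK_i, gP_i) whose distortion curve was calibrated for that ring, which is the final step before applying the correction of Equation (2).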

Experimental Verification of the 2D Distortion Partitioning Method
The experimental system is shown in Figure 7. The stroke of the electric control platform along the optical axis of the camera is 500 mm, and the size of the control field is 300 × 300 mm. A SIGMA zoom lens (18-35 mm) and a HIK ROBOT camera (MV-CH120-10TM) are selected for imaging. The resolution and focal length are set to 2560 × 2560 pixels and 18 mm, respectively. The procedure is as follows: (1) calibrate the intrinsic and extrinsic parameters of the monocular camera; (2) adjust the setup so that the circle features are distributed symmetrically around the image center; (3) determine and repeatedly adjust the pose of the control field so that the object plane and the image plane are parallel; (4) drive the control field with the electronic control platform to several object planes along the optical axis. The image of the control field in each plane is collected and analyzed by the algorithm on the graphic workstation. First, the accuracy of the 2D distortion partitioning method is verified.
The image of the control field at the focus distance is divided into five concentric rings by the equal-radius (Figure 8a-e) and equal-increment (Figure 9a-e) distortion partition models, respectively. In each partition, the corresponding lines (the green ones) are selected to solve the distortion coefficients and correct the image distortion. For each of the two partitioning methods, five corrected images are obtained (Figures 8f-j and 9f-j). Here, Figures 8f and 9f are used as an example to illustrate the results of distortion correction. The distortion at the image edge predicted by the distortion coefficients of the first partition far exceeds the actual distortion there; after correction, the distortion is over-removed, resulting in distortion in the opposite direction.
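The per-partition coefficient solving follows the plumb-line principle: coefficients are chosen so that corrected points sampled from physically straight lines become collinear. The toy sketch below uses a single inverse-model radial coefficient on synthetic lines; the paper fits full radial and decentering terms per partition, and all names here are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def straightness_residuals(k, lines):
    """Point-to-line distances of corrected points; zero for perfect lines."""
    res = []
    for pts in lines:                              # pts: (N, 2) distorted points
        r2 = (pts ** 2).sum(axis=1)
        und = pts * (1.0 + k[0] * r2)[:, None]     # first-order radial correction
        c = und - und.mean(axis=0)
        _, _, vt = np.linalg.svd(c, full_matrices=False)
        res.extend(c @ vt[1])                      # component along the line normal
    return np.asarray(res)

# Synthetic data: three straight lines bent by a small barrel-type distortion.
t = np.linspace(-0.5, 0.5, 21)
straight = [np.column_stack([np.full_like(t, 0.3), t]),
            np.column_stack([np.full_like(t, -0.4), t]),
            np.column_stack([t, np.full_like(t, 0.35)])]
kd = 0.02                                          # applied distortion strength
lines = [p * (1 + kd * (p ** 2).sum(axis=1))[:, None] for p in straight]

sol = least_squares(straightness_residuals, x0=[0.0], args=(lines,))
```

Minimizing the straightness residual recovers a correction coefficient of roughly the opposite sign of the applied distortion, and the residual drops by orders of magnitude.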

To compare the distortion correction effect of each partition in the image, the corrected image is subtracted from the simulated undistorted image, yielding Figures 8k-o and 9k-o. Obviously, the smaller the gray value, the closer the corrected image is to the ground truth and the better the distortion removal. As can be seen from the figures, the distortion correction results of each partition by the equal-radius partitioning method (Figure 8k-m) were not as good as those by the equal-increment partitioning method. Notably, in the fourth and fifth partitions located at the edge of the image, the green concentric rings in Figure 8n-o had larger gray values, while in Figure 9n-o the gray values of the pixels in the green concentric rings were approximately 0. This shows that the equal-increment partitioning method performed better at eliminating distortion.
Meanwhile, all the lines were also used together to solve and correct the image distortion without partitioning. Then, the distortion correction effects with and without partitioning were compared using the aforementioned indexes (Section 4.1). As shown in Table 1, the undistorted images obtained by the two distortion partitioning methods had a good PSNR of up to 37.61 dB. Compared with the results obtained without partitioning, both partitioning methods showed a smaller straightness error in each partition. However, comparing the two partitioning methods, the maximum and average errors in the fourth and fifth partitions by the equal-radius partitioning method were at least 4 times and 2 times those by the partitioning method proposed in this paper. That is to say, each partition obtains better distortion correction results with the equal-increment partitioning method. The enlarged image of the best distortion curve for each partition is shown in Figure 10, which validates the effectiveness and accuracy of the proposed partitioning method in 2D settings.
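The two image-level indexes used above can be computed as in the following sketch: PSNR against the simulated ground truth, and the mean gray value of the difference image inside an annular partition. The function names are ours, not the paper's.

```python
import numpy as np

def psnr(corrected, truth, peak=255.0):
    """Peak signal-to-noise ratio between a corrected image and ground truth."""
    diff = corrected.astype(np.float64) - truth.astype(np.float64)
    mse = np.mean(diff ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def mean_gray_in_ring(diff, center, r_in, r_out):
    """Mean absolute difference over one annular partition of the image."""
    yy, xx = np.indices(diff.shape)
    r = np.hypot(xx - center[0], yy - center[1])
    mask = (r >= r_in) & (r < r_out)
    return np.abs(diff[mask]).mean()
```

A per-ring mean near zero, as reported for the equal-increment partitions, indicates that the corrected image matches the ground truth in that ring.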

Accuracy Verification Experiments of DoF Distortion Partitioning Model and Camera Calibration
In this section, the accuracy of the DoF distortion model and camera calibration is verified. The control field is driven to four different object planes, two of which are at the limit positions of the front and rear DoF, while the other two lie within the DoF. The front object plane is divided into five areas with an equal distortion increment of 20.2 pixels. Then, based on the distortion parameters in the two object planes with known depths, the distortions in the other two object planes are calculated by the non-partition model, the proposed DoF distortion model with equal-radius partitioning, and the proposed DoF distortion model with equal-increment distortion partitioning, respectively. Thereafter, we manually adjusted the focus ring to focus the lens on the two object planes located at the limit positions of the front and rear DoF. Then, based on the calculated radial and decentering distortion coefficients on the two focal planes, Brown's model [12] with equal-radius partitioning is used to estimate the distortion parameters on the two planes within the DoF.
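The equal-increment rule can be illustrated as follows: ring boundaries are placed where the (monotone) radial distortion magnitude crosses equal fractions of its maximum, so the outer rings become thinner where the distortion grows fastest. The distortion curve and coefficients below are invented for illustration; the paper's 20.2-pixel increment comes from its measured curve.

```python
import numpy as np

def equal_increment_edges(dist, r_max, n_rings, samples=10000):
    """Ring boundaries r_1..r_{n-1} where dist(r) crosses i * dist(r_max) / n.

    dist must be a monotonically increasing distortion-magnitude curve."""
    r = np.linspace(0.0, r_max, samples)
    d = dist(r)
    targets = np.arange(1, n_rings) * d[-1] / n_rings
    return np.interp(targets, d, r)            # numerically invert the curve

# Made-up radial distortion magnitude curve (normalized radius, arbitrary units).
dist = lambda r: 0.1 * r ** 3 + 0.02 * r ** 5
edges = equal_increment_edges(dist, r_max=1.0, n_rings=5)
```

The resulting boundaries are increasing in radius, and the distortion magnitude at each boundary sits at an equal fraction of the maximum, which is exactly the equal-increment property.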
Furthermore, the results are compared with the distortion directly solved from the lines (the observed values) within the corresponding partition. To compare the accuracy of the different DoF distortion models, we took the in-plane point located in the common area (the second column of Table 2) partitioned by the two models at the same object distance (e.g., 400 mm in the first column of Table 2) as an example. As shown in Table 2, for Brown's model [12] with equal-radius partitioning, the maximum and average absolute differences between the calculated and observed values were 7.32 µm and 2.81 µm, respectively. These errors are smaller than those of the traditional Zhang's model, which considers neither the DoF nor distortion partitioning, but much larger than those of the proposed DoF distortion model with either equal-radius or equal-increment partitioning. The maximum and average absolute differences between the calculated and observed values of the equal-increment distortion partition based DoF distortion model were 1.53 µm and 0.88 µm, respectively. By contrast, the errors of the equal-radius partition based DoF distortion model were 4.64 µm and 1.94 µm, more than two times those of the proposed model. These results verify the accuracy of the DoF distortion partitioning model in 3D settings. Images of circular markers with precisely known distances on the planar artifact are then collected, and the calibration accuracy of the monocular camera is verified by the re-projection errors and the angular reconstruction errors, respectively. Specifically, the planar artifact is driven by the high-accuracy pitch axis to rotate through five positions, with 10° between two adjacent ones.
In each position, the pose matrix between the planar artifact (Figure 11a) and the calibrated camera is calculated by the OPnP algorithm [38] with the equal-radius and the equal-increment partitioning based DoF distortion models, respectively. Thereafter, 20 markers are projected back to the image via the estimated pose matrix, and the re-projection errors, i.e., the image distances between the projected and observed points, are calculated. As shown in Figure 11b, for the equal-radius DoF distortion partition model, the maximum and average re-projection errors over the five positions were 0.29 pixels and 0.17 pixels, respectively, while those of the proposed model were 0.11 pixels and 0.05 pixels. The angle between two adjacent positions of the artifact is reconstructed with the two models as well. As illustrated in Figure 11c, the 3D measurement accuracy of the system is assessed by comparison with the nominal angle. The results show that the maximum and average angular errors of the equal-radius based DoF distortion partition model were 0.48° and 0.30°, respectively, while those of the proposed model were 0.013° and 0.011°, which means that the angular reconstruction errors are effectively reduced. The above results comprehensively verify the accuracy of the DoF distortion partitioning model and the camera calibration method proposed in this paper.
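The two accuracy indexes, re-projection error and reconstructed angle, can be sketched as below with a zero-skew pinhole model. The intrinsics and points are illustrative values, and the pose estimation itself (OPnP) is not reproduced here.

```python
import numpy as np

def reproject(K, R, t, pts3d):
    """Project 3D marker points into the image with pose (R, t) and a
    zero-skew intrinsic matrix K."""
    cam = pts3d @ R.T + t
    uv = cam[:, :2] / cam[:, 2:3]
    return uv @ K[:2, :2].T + K[:2, 2]

def reprojection_errors(K, R, t, pts3d, observed):
    """Pixel distances between projected and observed marker positions."""
    return np.linalg.norm(reproject(K, R, t, pts3d) - observed, axis=1)

def angle_deg(n1, n2):
    """Angle in degrees between two reconstructed plane normals."""
    c = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Illustrative intrinsics, identity pose, and "observed" noiseless markers.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 1.0])
pts = np.array([[0.1, 0.2, 1.0], [-0.1, 0.0, 1.2], [0.05, -0.1, 0.9]])
obs = reproject(K, R, t, pts)
```

With noiseless observations the re-projection errors are zero, and the angle between two artifact normals 10° apart is recovered exactly.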

Conclusions
This paper has investigated the modeling and calibration of lens distortion for close-range photogrammetry (e.g., short object distance and small focal length). The main findings are as follows: (1) A focusing-state-independent DoF distortion model is constructed, in which the distortion parameters at any object plane can be solved from the distortion on two defocus planes; this removes the human errors introduced by manual adjustment of the focus and zoom rings.
(2) A 2D-to-3D equal-increment partitioning method for lens distortion is proposed. Fused with the DoF distortion model to form the DoF distortion partition model, it further improves the accuracy of lens distortion characterization.
(3) A two-step method is proposed to calibrate the camera parameters, in which the DoF distortion is calculated independently by the plumb-line method; this eliminates the coupling effect among the parameters in the camera model.
(4) Experiments were performed to verify the accuracy of the 2D distortion partition model, the DoF-dependent distortion partition model, and the camera calibration. The results show that the maximum and average angular reconstruction errors of the proposed model were 0.013° and 0.011°, respectively, which validates the accuracy and feasibility of the equal-increment partitioning based DoF distortion method.
The main limitation of the present study is that the number of partitions is not optimized to achieve higher calibration accuracy. Our future work will focus on this and extend our model to other optical systems with fisheye or catadioptric lenses.


Conflicts of Interest:
The authors declare no conflict of interest.