A Combined Physical and Mathematical Calibration Method for Low-Cost Cameras in the Air and Underwater Environment

Low-cost camera calibration is vital in air and underwater photogrammetric applications. However, the various lens distortions and the influence of the underwater environment are difficult to cover with a universal distortion compensation model, and residual distortions may remain after conventional calibration. In this paper, we propose a combined physical and mathematical camera calibration method for low-cost cameras that can adapt to both in-air and underwater environments. The commonly used physical distortion models are integrated to describe the image distortions. The combination is a high-order polynomial whose terms can be considered as basis functions that successively approximate the image deformation from the point of view of mathematical approximation. The calibration process is repeated until certain criteria are met and the distortions are reduced to a minimum. In the end, several sets of distortion parameters and stable camera interior orientation (IO) parameters act as the final camera calibration results. The Canon and GoPro in-air calibration experiments show that the GoPro exhibits distortions roughly seven times larger than the Canon. Most Canon distortions are described by the Australis model, whereas most decentering distortions of the GoPro persist. Using the proposed method, all the Canon and GoPro distortions decrease to nearly 0 after four calibrations, and stable camera IO parameters are obtained. The GoPro Hero 5 Black underwater calibration likewise yields four sets of distortion parameters and stable camera IO parameters after four calibrations. The camera calibration results differ between the underwater environment and the air owing to the refractive and asymmetric environment effects.
In summary, the proposed method improves the accuracy compared with the conventional method and offers a flexible way to calibrate low-cost cameras for high-accuracy in-air and underwater measurement and 3D modeling applications.


Introduction
Due to their flexibility and affordability, low-cost cameras have been widely used in close-range photogrammetric applications in air and underwater environments. Typical applications include three-dimensional image reconstruction of cultural and religious heritage sites [1], deformation monitoring of industrial components [2], topographic mapping of shallow-water areas [3], and accurate underwater measurements of fish [4]. Camera calibration is essential in the image reconstruction process. Because of distortion errors, even small ones, the mathematical relationship between the image space and the object space cannot be established rigorously, which decreases the accuracy of the spatial coordinates. Accordingly, effective distortion compensation models are always expected in order to achieve the highest-level calibration precision.
Camera calibration using the bundle adjustment method was developed in the early 1970s [5], was adopted in close-range photogrammetry, and reached maturity in the 1980s [6][7][8]. To obtain high-precision calibration results, additional parameters are introduced to compensate for the systematic lens distortion errors [9][10][11]. Some researchers have applied distortion models with significant physical meaning. For example, Brown [5] considered the radial and decentering distortion of the commercial Schneider Symmar lens, and his formulation became an accepted standard for analytical camera calibration. Fraser [12] found that four principal sources, radial distortion, decentering distortion, image plane unflatness, and in-plane image distortion, should be considered in the calibration process and introduced the corresponding mathematical models in detail. Remondino and Fraser [13] explained how chromatic aberration affects the radial distortion model of consumer-grade digital color cameras, and noted that camera calibration remains a concern as camera types multiply. Luber [14] presented a statistical procedure that chooses the most appropriate model automatically and is suitable for wide-angle fisheye, spherical, and perspective lenses, with a calibration precision of 1 pixel. Other studies suggested that camera calibration can be considered as a mathematical approximation problem [15]. For example, Tang et al. [16,17] verified that orthogonal Legendre and Fourier polynomials are rigorous and efficient for calibrating airborne cameras.
Compared to in-air calibration, underwater camera calibration must consider the refraction of light traveling through water, the waterproof housing, and the air, rather than the lens distortions only [18]. Research shows that there are two approaches to underwater camera calibration. The first is based on refraction geometry, in which light propagation through multiple media is modeled and an imaging geometric model that explicitly introduces refraction is established [19]. Lavest [20] considered the effect of refraction on light ray tracing and proposed a correction to the focal length and a rescaling of the distortion parameters as a function of the refractive index. Underwater refractive geometry can be regarded as a perspective camera looking through parallel flat refractive media, characterized by the underlying axis, the number of layers, their distances from the camera, and the refractive indices [21][22][23]. The strict refraction imaging model is complicated, and some assumptions about the underwater environment are always required [24]. The second approach is based on compensating for the refraction effects: the standard pinhole camera model describes the object-image relationship, and the systematic refraction errors are additionally compensated in the calibration process [24]. Harvey and Shortis [25,26] performed calibration to compensate for the refraction effect on the light path. However, some systematic errors will always remain outside the model.
As mentioned above, both the physical and the mathematical approximation models can lead to successful systematic error compensation and accurate calibration for most cameras used today in photogrammetry [27][28][29]. However, a wide variety of low-cost digital cameras, owing to their utility and flexibility, are nowadays employed across air and underwater photogrammetric applications, so the influence of light refraction must be considered during the underwater camera calibration process. The various lens distortions and the influence of the underwater environment are difficult to cover with a universal mathematical model, and residual distortions may remain after calibration with conventional methods. Yet some precise engineering applications require high accuracies (0.1 pixel or better in the image space) [30].
In this paper, we propose a combined physical and mathematical camera calibration method for low-cost cameras, which can adapt to both in-air and underwater environments. Photogrammetric camera calibration can, to a very large extent, be considered as a function approximation problem in mathematics, and the distortion can be described by a linear combination of specific basis functions [16,17]. First, the commonly used radial, decentering, and in-plane physical distortion models are applied to describe the image distortions [12]. These models are expressed as high-order polynomials, which can be combined as polynomial basis functions. During the in-air or underwater imaging process, the lens distortions, light refraction through multiple media, and other environmental factors jointly influence the image deformation, which can be successively approximated by these polynomials from the point of view of mathematical approximation. The distortion model parameters are then estimated with the aid of a calibration fixture, and the calibration fixture images are geometrically corrected using the estimated distortion parameters. Next, the calibration process is performed again: the geometrically corrected calibration fixture images are treated as distorted images, the distortion models are applied again, and a new set of camera calibration parameters is estimated. When the previous camera calibration results (several sets of calibration parameters) are applied to the geometric correction of the calibration fixture images, a strategy is used to perform resampling just once from the original calibration fixture images, in order to avoid the image quality degradation caused by multiple resampling. This process is repeated until certain criteria are met.
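The repeated calibrate-then-correct loop described above can be sketched as follows. This is an illustrative outline only: `estimate_parameters`, `correct_images`, and `max_distortion` are hypothetical placeholders standing in for the bundle adjustment, the single-resampling geometric correction, and the stopping criterion, none of which are specified as code in the paper.

```python
def iterative_calibration(original_images, control_points,
                          estimate_parameters, correct_images,
                          max_distortion, tol=0.1, max_iter=10):
    """Repeat calibration until the residual distortion falls below tol.

    Each pass yields one set of distortion parameters; the corrected
    images of one pass become the 'distorted' input of the next, while
    resampling is always done once from the ORIGINAL images.
    """
    parameter_sets = []                     # one parameter set per pass
    images = original_images
    for _ in range(max_iter):
        params = estimate_parameters(images, control_points)
        parameter_sets.append(params)
        if max_distortion(params) < tol:    # stopping criterion (pixels)
            break
        # correct from the original images through ALL sets obtained so far
        images = correct_images(original_images, parameter_sets)
    return parameter_sets
```

The key design point is that `correct_images` always receives the original images plus the full chain of parameter sets, matching the paper's single-resampling strategy.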
In the end, several sets of distortion parameters are obtained to jointly describe all the camera distortions, along with the obtained camera interior orientation (IO) parameters, acting as the final camera calibration results.
In the following sections, the proposed camera calibration method is introduced in detail, and several calibration experiments with low-cost cameras in the air and underwater environment are conducted. The results are analyzed and compared with the conventional camera calibration method (using the Australis model in this case, referred to hereafter as the conventional method).

The Combined Physical and Mathematical Camera Calibration Method
The key work in this paper is to develop a combined physical and mathematical camera calibration technique that can calibrate cameras in both the air and the underwater environment. All the theory involved in the proposed method is introduced below, including the image distortion description, camera parameter estimation, geometric correction of the calibration fixture images, and accuracy assessment (Figure 1).

(1) Image distortion description

The commonly used physical radial, decentering, and in-plane distortion models are integrated to describe the image distortions [12]. Mathematically, they form high-order polynomials of x and y that successively approximate the image distortions derived from the lens distortions, light refraction through multiple media, and other environmental factors, as shown in Equation (1):

\begin{aligned}
\Delta x_i &= \bar{x}\,(k_{1i} r^2 + k_{2i} r^4 + k_{3i} r^6) + p_{1i}(r^2 + 2\bar{x}^2) + 2 p_{2i} \bar{x}\bar{y} + b_{1i}\bar{x} + b_{2i}\bar{y} \\
\Delta y_i &= \bar{y}\,(k_{1i} r^2 + k_{2i} r^4 + k_{3i} r^6) + 2 p_{1i} \bar{x}\bar{y} + p_{2i}(r^2 + 2\bar{y}^2)
\end{aligned} \quad (1)

with \bar{x} = x_{i-1} - x_{0i}, \bar{y} = y_{i-1} - y_{0i}, and r^2 = \bar{x}^2 + \bar{y}^2, where i = 1, 2, ..., N stands for the index of the camera calibration; \Delta x_i and \Delta y_i represent the image distortions in the line and sample directions at the ith camera calibration; k_{1i}, k_{2i}, and k_{3i} are the radial distortion parameters; p_{1i} and p_{2i} are the decentering distortion parameters; b_{1i} and b_{2i} are the in-plane distortion parameters; x_{0i} and y_{0i} are the principal point offsets; and (x_{i-1}, y_{i-1}) are the image coordinates on the (i − 1)th corrected calibration fixture images.
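Equation (1) combines the standard radial, decentering, and in-plane terms of the Australis-style model [12]. A minimal Python sketch of one evaluation of this model, with our own variable names and the conventional arrangement of the terms, could look like:

```python
def distortion(x, y, x0, y0, k1, k2, k3, p1, p2, b1, b2):
    """Radial + decentering + in-plane distortion, Equation (1) style.

    (x0, y0) is the principal point offset; k* radial, p* decentering,
    b* in-plane parameters. Returns (dx, dy) in image units.
    """
    xb, yb = x - x0, y - y0                 # coordinates relative to principal point
    r2 = xb * xb + yb * yb                  # squared radial distance
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    dx = (xb * radial + p1 * (r2 + 2 * xb * xb) + 2 * p2 * xb * yb
          + b1 * xb + b2 * yb)              # in-plane terms affect x only
    dy = yb * radial + 2 * p1 * xb * yb + p2 * (r2 + 2 * yb * yb)
    return dx, dy
```

With all parameters set to zero the function returns (0, 0), so each additional calibration pass simply contributes another such term pair to the total correction.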
(2) Camera calibration parameters estimation

The imaging geometry of digital cameras in photogrammetry can be modeled according to the principle of a pinhole camera, and the well-known collinearity equation [5] is used to depict the perspective transformation between the image space and the object space, as shown in Equation (2):

\begin{aligned}
x_{i-1} + \Delta x_i - x_{0i} &= -f_i\,\frac{r_{11}(X - X_{0i}) + r_{12}(Y - Y_{0i}) + r_{13}(Z - Z_{0i})}{r_{31}(X - X_{0i}) + r_{32}(Y - Y_{0i}) + r_{33}(Z - Z_{0i})} \\
y_{i-1} + \Delta y_i - y_{0i} &= -f_i\,\frac{r_{21}(X - X_{0i}) + r_{22}(Y - Y_{0i}) + r_{23}(Z - Z_{0i})}{r_{31}(X - X_{0i}) + r_{32}(Y - Y_{0i}) + r_{33}(Z - Z_{0i})}
\end{aligned} \quad (2)

where (X, Y, Z) are the object coordinates of the Control Points (CPs) in the geographic projection coordinate system; f_i is the principal distance at the ith camera calibration; R_i = (r_{jk}) stands for the rotation matrix at the ith camera calibration, constituted with the attitude angles (ϕ_i, ω_i, κ_i) of the image; and (X_{0i}, Y_{0i}, Z_{0i}) is the perspective center position at the ith camera calibration. The calibration parameters are then solved using the least squares estimation method with evenly distributed CPs, which is widely used in photogrammetry. The observation equations are obtained when Equation (2) is linearized using the Taylor series expansion, ignoring the second-order and higher-order terms, as shown in Equation (3):

v_{x,i-1} = \frac{\partial x}{\partial k_{1i}}\,dk_{1i} + \frac{\partial x}{\partial k_{2i}}\,dk_{2i} + \frac{\partial x}{\partial k_{3i}}\,dk_{3i} + \frac{\partial x}{\partial p_{1i}}\,dp_{1i} + \frac{\partial x}{\partial p_{2i}}\,dp_{2i} + \frac{\partial x}{\partial b_{1i}}\,db_{1i} + \frac{\partial x}{\partial b_{2i}}\,db_{2i} + \frac{\partial x}{\partial x_{0i}}\,dx_{0i} + \frac{\partial x}{\partial y_{0i}}\,dy_{0i} + \frac{\partial x}{\partial f_i}\,df_i + \frac{\partial x}{\partial X_{0i}}\,dX_{0i} + \cdots + \frac{\partial x}{\partial \kappa_i}\,d\kappa_i - (x_{i-1} - x^0_{i-1}) \quad (3)

and analogously for v_{y,i-1}, where v_{x,i-1} and v_{y,i-1} are the observational residuals in the line and sample directions; dk_{1i}, dk_{2i}, dk_{3i}, dp_{1i}, dp_{2i}, db_{1i}, and db_{2i} are the corrections to the distortion parameters; dx_{0i}, dy_{0i}, and df_i are the corrections to the camera IO parameters; dX_{0i}, dY_{0i}, dZ_{0i}, dϕ_i, dω_i, dκ_i are the corrections to the exterior orientation elements; and x^0_{i-1} and y^0_{i-1} are the nominal line and sample coordinates of the CPs computed by Equation (2).
A matrix version of the observation Equation (3) is shown in Equation (4):

V_{i-1} = A_i\,t_i - L_{i-1}, \quad \text{with weight matrix } P, \quad (4)

where V_{i-1} refers to the observational residual vector; A_i contains the partial derivatives with respect to the distortion parameters, the camera IO parameters, and the exterior orientation elements; t_i is the vector of corrections to the calibration parameters, including the distortion parameters, the camera IO parameters, and the exterior orientation elements; L_{i-1} is the observation vector of the points' image coordinates; and P stands for the weights of the CPs on the images. The CPs are observed manually or by image matching at the same accuracy, so the weight matrix is set to the unit matrix I. The corrections of the calibration parameters are obtained by least squares, as shown in Equation (5):

t_i = (A_i^{T} P A_i)^{-1} A_i^{T} P L_{i-1}. \quad (5)

In the linearization procedure, the coefficients of the distortion parameter corrections retain the first-order terms only, and the initial values of these unknowns are set to 0; therefore, an iterative process is necessary to determine the distortion parameters. The solution of the distortion parameters is shown in Equation (6):

T_i = T_{i-1} + q_i, \quad (6)

where T_i represents the obtained distortion parameters; T_{i-1} is the previous distortion parameters at the (i − 1)th camera calibration; and q_i is the obtained corrections of the distortion parameters at the ith calibration process.
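The normal-equation solution of Equation (5) is a few lines of linear algebra. The sketch below assumes numpy and the unit weight matrix used in the paper; `A` and `L` would come from the linearization step:

```python
import numpy as np

def lsq_correction(A, L, P=None):
    """Solve Equation (5): t = (A^T P A)^{-1} A^T P L.

    A : design matrix of partial derivatives (one row per observation)
    L : observation vector
    P : weight matrix; defaults to the unit matrix, as in the paper
    """
    if P is None:
        P = np.eye(A.shape[0])        # equally weighted image observations
    N = A.T @ P @ A                   # normal matrix
    return np.linalg.solve(N, A.T @ P @ L)
```

In the iterative adjustment, the returned corrections would be accumulated onto the current parameter estimates as in Equation (6) until they become negligible.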
In order to avoid correlations among the calibration parameters, the calibration fixtures are captured at various scales and from different stations, forming a robust network of short-baseline image sequences that increases the redundant observations and produces strong intersections [30].
(3) Geometric correction of the calibration fixture images

The calibration fixture images are geometrically corrected using the previous camera calibration results. In order to avoid the image quality degradation caused by multiple resampling, a strategy is proposed to perform resampling just once from the original calibration fixture images (Figure 2). The pixels on the corrected calibration fixture images (Figure 2c) are directly reverse-mapped to the corresponding pixels on the original calibration fixture images (Figure 2a) through the previous m sets of distortions, as shown in Figure 2 and Equation (7):

x^{(j-1)} = x^{(j)} + \Delta x_j\!\left(x^{(j)}, y^{(j)}\right), \quad y^{(j-1)} = y^{(j)} + \Delta y_j\!\left(x^{(j)}, y^{(j)}\right), \quad j = m, m-1, \ldots, 1, \quad (7)

where (x', y') = (x^{(m)}, y^{(m)}) are the corrected image coordinates; x_0 and y_0 are the principal point offsets entering each \Delta x_j and \Delta y_j through Equation (1); and (x, y) = (x^{(0)}, y^{(0)}) are the image coordinates on the original calibration fixture images. Then, resampling is performed on the original calibration fixture images using the cubic convolution method to obtain the grey values of these pixels. Subsequently, the corrected calibration fixture images are taken as the distorted images to repeat the calibration process, and so on, until certain criteria are met. In the end, several sets of distortion parameters (\Delta x_1, \Delta x_2, ..., \Delta x_n and \Delta y_1, \Delta y_2, ..., \Delta y_n) jointly describe all the camera distortions and, along with the obtained camera IO parameters, act as the final camera calibration results.
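The single-resampling strategy can be sketched as follows. This is a hedged illustration: each distortion set is represented as a hypothetical callable returning (Δx, Δy) grids, and bilinear resampling is used here for brevity, whereas the paper applies cubic convolution:

```python
import numpy as np

def correct_once(original, distortion_sets):
    """Reverse-map each corrected-image pixel through all m distortion
    sets, then resample ONCE from the original image.

    distortion_sets: list of callables f(xx, yy) -> (dx, dy) grids,
    ordered from the last correction back to the first.
    """
    h, w = original.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)   # corrected-image grid
    for dxy in distortion_sets:                  # chain of Equation (7) steps
        dx, dy = dxy(xx, yy)
        xx, yy = xx + dx, yy + dy                # positions on original image
    # bilinear resampling at the reverse-mapped (xx, yy) positions
    x0 = np.clip(np.floor(xx).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(yy).astype(int), 0, h - 2)
    fx = np.clip(xx - x0, 0, 1)
    fy = np.clip(yy - y0, 0, 1)
    return (original[y0, x0] * (1 - fx) * (1 - fy)
            + original[y0, x0 + 1] * fx * (1 - fy)
            + original[y0 + 1, x0] * (1 - fx) * fy
            + original[y0 + 1, x0 + 1] * fx * fy)
```

Because only the final lookup touches the pixel grid, the image is interpolated a single time no matter how many calibration passes contribute distortion sets.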
(4) Accuracy assessment

CPs acting as check points are used for external accuracy assessment in the object space. The nominal object coordinates of these check points are calculated using Equation (2) with the direct intersection method. The Root Mean Square Error (RMSE) in the object space is computed from the discrepancies between the true and nominal coordinates of the check points, as shown in Equation (8):

\mu_X = \sqrt{\frac{1}{r}\sum_{j=1}^{r}\left(X_{t,j} - X_{c,j}\right)^2}, \quad \mu_Y = \sqrt{\frac{1}{r}\sum_{j=1}^{r}\left(Y_{t,j} - Y_{c,j}\right)^2}, \quad \mu_Z = \sqrt{\frac{1}{r}\sum_{j=1}^{r}\left(Z_{t,j} - Z_{c,j}\right)^2}, \quad (8)

where \mu_X, \mu_Y, and \mu_Z represent the RMSE of the check points in the corresponding three directions; r refers to the number of check points; X_t, Y_t, Z_t are the true coordinates of the check points; and X_c, Y_c, Z_c are the nominal coordinates of the check points. The combined value \mu_P = \sqrt{\mu_X^2 + \mu_Y^2 + \mu_Z^2} is used to state the accuracy in the object space in the following discussions.
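Equation (8) reduces to a short computation; the sketch below assumes µ_P is the quadratic combination of the three per-axis RMSE values:

```python
import numpy as np

def rmse_object_space(truth, computed):
    """Per-axis RMSE of check points (Equation (8)) and combined mu_P.

    truth, computed: (r, 3) arrays of true and nominal XYZ coordinates.
    """
    d = np.asarray(truth, float) - np.asarray(computed, float)
    mu = np.sqrt((d ** 2).mean(axis=0))          # mu_X, mu_Y, mu_Z
    mu_p = float(np.sqrt((mu ** 2).sum()))       # combined object-space accuracy
    return mu, mu_p
```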

Calibration Fixture
In this study, two calibration fixtures, a 3D test field and a 2D planar calibration board, are used to assist the camera calibration. The 3D test field is established on the geometrically stable brick wall of a building and suits in-air camera calibration. The size of each brick is 24 cm × 6 cm. A total of 188 evenly distributed corner points of the bricks are taken as the CPs and check points. The object coordinates of these points are measured by a total station with an accuracy of 2 mm. The established 3D test field of the brick wall and the distributed CPs are shown in Figure 3; the black dots represent the CPs. The 2D planar calibration board is applied for camera calibration in the underwater environment. The diffusely reflective planar board measures 400 mm × 400 mm and consists of 625 circular targets (dots). Dots are chosen as targets owing to the isotropy of circles, and the CPs are located at their centers. The dots are evenly positioned across 25 rows and 25 columns; each dot has a diameter of 7.5 mm, and two neighboring dots are 15 mm apart. The measurement accuracy is 0.01 mm. The 2D diffusely reflective planar calibration board and circular targets are shown in Figure 4.

Tests of Low-Cost Camera Calibration
In this study, two types of low-cost cameras are calibrated in the air and underwater environment using the proposed camera calibration method: a full-frame digital single-lens reflex camera (Canon EOS 5Dsr) with a TS-E 24 mm 1:3.5 L II prime lens, and an action camera (GoPro Hero 5 Black) operated in photo mode with a narrow field of view. The Canon camera deploys a Complementary Metal Oxide Semiconductor (CMOS) imaging sensor sized 36 mm × 24 mm with a resolution of 8688 pixels × 5792 pixels. The nominal focal length of the lens is 24 mm. The GoPro carries a 1/2.3-inch image sensor with a resolution of 12 megapixels and is equipped with a wide-angle lens, leading to large distortions in the captured images. The in-air calibration is performed with both cameras, and the underwater calibration is carried out with the GoPro camera.

Calibration Tests in the Air
The 3D test field was employed to calibrate the Canon and GoPro cameras in the air. Forty-five images were acquired via the Canon camera, which were captured at two scales, 1.80 m and 2.60 m away from the test field. Seventeen images were acquired by the GoPro camera at three distances of 1.10 m, 1.65 m, and 2.50 m. In order to avoid introducing other systematic errors, each image was filled with the calibration fixture, and no other object entered the picture.
A total of 98 CPs and 61 CPs evenly distributed in the test field are used for the Canon and GoPro calibration parameter estimation, respectively, and a total of 90 CPs and 127 CPs are used for the Canon and GoPro calibration accuracy assessment, respectively. On the original images, the CPs are measured manually; for the following calibrations, the CPs on the corrected images are obtained automatically according to Equation (7) without extra work. The radial distortion, decentering distortion, and in-plane distortion of the two cameras at each calibration are profiled separately in Figures 5-10. The calibrated IO parameters of the two cameras and the calibration accuracy are shown in Table 1. The calibration results of the conventional method (using the Australis model once) are exactly the same as those of the proposed method at the first calibration.

From the Canon in-air calibration results (Figures 5-7, Table 1), the maximum radial, decentering, and in-plane distortions of the camera are 104 µm, 14 µm, and 20 µm, respectively. These maximum values decrease to 1.14 µm, 1.15 µm, and 0.4 µm after the second calibration, which indicates that residual distortions remained after the calibration with the conventional method (the first calibration). Compared with the conventional method, the proposed method achieves a slight accuracy improvement of 2.8 µm (0.24%) (Table 1) after the residual distortions are calibrated.

From the GoPro in-air calibration results (Figures 8-10, Table 1), the maximum radial, decentering, and in-plane distortions of the camera are 0.72 mm, 0.01 mm, and 0.07 mm, respectively, as calibrated by the conventional method (the first calibration).
These maximum values decrease to 0.77 µm, 9.33 µm, and 0.4 µm after the second calibration, which indicates that residual distortions, especially most of the decentering distortions, remained after the calibration with the conventional method (the first calibration). The proposed method achieves an accuracy improvement of 0.2 mm (3.88%) over the conventional method (Table 1) after the residual distortions are calibrated.
In summary, the Canon and GoPro in-air calibration results show that the distortions grow larger as the radial distance from the image center increases, and the in-plane distortion is distributed unevenly over the image. All the residual distortions of the Canon and GoPro cameras decrease as the number of calibrations increases and approach 0 after four calibrations, which illustrates that all the distortions have been fully calibrated by the proposed method. Meanwhile, stable camera IO parameters are obtained after the repeated calibration processes. In terms of magnitude, the lens distortion is dominated by the radial distortion for both cameras [18], and the GoPro lens exhibits distortions roughly seven times larger than the Canon lens. Most distortions of the Canon lens are detected by the conventional method (the first calibration), while a large number of decentering distortions of the GoPro lens persist (Figure 9). This indicates that the Australis model cannot cover the distortions of all cameras; it describes the Canon lens better than the GoPro lens.

Calibration Tests in the Underwater Environment
Due to the smaller field of view and the close range caused by the attenuation of light in water, the 2D planar calibration board was employed to calibrate the GoPro camera in the underwater environment. Because of refraction, underwater images exhibit nonlinear distortions related to the distance from the object to the camera [31]. Therefore, two underwater camera calibration tests were executed at two distances. The first test captured 33 images at a distance of 0.14 m from the calibration board; a total of 33 CPs and 45 CPs are used to calculate the calibration parameters and assess the calibration accuracy, respectively. The second test acquired 10 images at a distance of 0.50 m from the calibration board; a total of 24 CPs and 30 CPs are used to calculate the calibration parameters and assess the calibration accuracy, respectively. The calibration results are shown in Table 1.

(1) Underwater camera calibration at a distance of 0.14 m from the calibration board

From the GoPro underwater calibration results (Figures 11-13, Table 1), the calibrated maximum radial, decentering, and in-plane distortions are 0.11 mm, 0.002 mm, and 25 µm, respectively. These maximum values decrease to 5 µm, 1 µm, and 0.06 µm after the second calibration; these are the residual distortions that remained after the calibration with the conventional method. All the residual distortions can be detected by the proposed method. The accuracy improves from 3.8451 mm to 3.8215 mm, indicating that the proposed method improves the accuracy by 0.02 mm (0.61%) compared with the conventional method (Table 1). As in the in-air calibration, stable camera IO parameters are obtained after the repeated calibration processes.
(2) Underwater camera calibration at a distance of 0.50 m from the calibration board

From the second GoPro underwater calibration test (Figures 14-16, Table 1), the calibrated maximum radial, decentering, and in-plane distortions are 0.16 mm, 0.16 mm, and 0.5 µm, respectively. These maximum values decrease to 17 µm, 1.6 µm, and 0.3 µm after the second calibration. All the residual decentering and in-plane distortions are detected after the second calibration, and the residual radial distortions decrease to 0 after the third calibration. The accuracy improves from 3.4860 mm to 3.4650 mm, indicating that the proposed method improves the accuracy by 0.021 mm (0.60%) compared with the conventional method (Table 1). Similar to the in-air and the first underwater calibration results, stable camera IO parameters are obtained after the repeated calibration processes.
In summary, the GoPro underwater calibration results vary with the distance from the camera to the calibration board. The distortions at a distance of 0.14 m are dominated by the radial distortions, and all the distortions can be calibrated by four repeated calibrations. The radial and decentering distortions are the major distortions at a distance of 0.50 m, and all the distortions can be detected by three repeated calibrations. Stable camera IO parameters are obtained after the calibration processes.
Compared to the in-air calibration, the final GoPro camera calibration results in the underwater environment (Table 1) indicate that the principal distance achieved underwater differs from that obtained in the air, because the light rays refract when traveling from the water to the air, which changes the principal distance. The shifts of the principal point and the distortions in the underwater environment appear smaller than those in the air, which is caused by the refractive and asymmetric environment effects being absorbed by the radial, decentering, and in-plane distortions [18].
In summary, the Canon EOS 5Dsr and GoPro Hero 5 Black were calibrated for the air and underwater environments. The calibration results illustrate that residual distortions remained with the conventional method. With the proposed method, the distortions decrease gradually throughout the repeated calibration processes and finally become negligible, and stable camera IO elements are achieved after several calibrations. These desirable results are due to the repeated application of the physical distortion models during the calibration process. Photogrammetric calibration can be regarded as a function approximation problem from the perspective of mathematics [16], and any continuous univariate function can be approximated by a polynomial of sufficiently high degree according to the Weierstrass Theorem [17]. The distortion model combining the radial, decentering, and in-plane physical distortion models in this paper is in fact a polynomial of multiple degrees, which is applied to successively approximate the distortions during the calibration processes. Compared to the in-air calibration, the underwater calibration is affected by the complicated underwater environment, and it is difficult to describe these factors with various mathematical models owing to their coupling effects and correlations.
The results indicate that the proposed camera calibration method is applicable for the underwater camera calibration as well.

Underwater Application Example
The calibrated camera can be used for three-dimensional reconstruction and measurement of objects with the photogrammetric method. In this paper, the GoPro calibrated at a distance of 0.5 m in the underwater environment (Figures 14-16, Table 1) was used to carry out underwater three-dimensional reconstruction and measurement.
Four images covering the bottom of the pool were acquired with the GoPro at a depth of 0.50 m, with about 80% overlap, as shown in Figure 17. The offset angle between the camera movement direction and the horizontal direction is less than 3°. These four images are geometrically corrected using the three sets of calibrated distortion parameters (Figures 14-16), and the corrected images are shown in Figure 18. The proposed strategy is used to perform the image resampling (Figure 2): the pixels on the corrected underwater images are directly reverse-mapped to the corresponding pixels on the original images through the three sets of distortions, and the resampling is then performed on the original images using the cubic convolution method to obtain the grey values of these pixels. Three stereo pairs are formed from neighboring corrected images, the third pair ending with the image in Figure 18d. About 100 tie points are matched on each image pair using manual and semi-automatic matching. Relative orientation is carried out with the image pairs to make the corresponding rays intersect, with the corrected image in Figure 18b taken as the reference image. Meanwhile, the relationship between the corrected images is established by the calculated exterior orientation parameters. The disparity maps are then produced from the oriented image pairs through dense matching (Figure 19), and three-dimensional point clouds of the scene are obtained from the disparity maps through the forward intersection method. Afterwards, the digital surface model (DSM) of the scene is acquired by point cloud interpolation (Figure 20a), and the corrected images are rectified with the DSM to generate an orthophoto (Figure 20b). The three-dimensional model can be reconstructed from the DSM and the orthophoto. Because there is no absolute reference at the bottom of the pool, only relative elevation and horizontal information can be obtained from the DSM and orthophoto. An underwater brick is used to carry out the photogrammetric measurement.
The true size of the experimental brick is 200.00 mm (length) × 100.00 mm (width) × 50.00 mm (height). Ten points on the brick are chosen for the three-dimensional measurement, and five lines are constituted by these points: line 01-02, line 03-04, line 05-06, line 07-08, and line 09-10. Lines 07-08 and 09-10 stand for the brick length; lines 01-02, 03-04, and 05-06 stand for the brick width. The point and line distribution is shown in Figure 21. The three-dimensional coordinates of these points are calculated by forward intersection using the disparity maps and the exterior orientation parameters. The length of each line is the distance between its two endpoints, determined from the three-dimensional coordinates of the points. Since there is no absolute reference at the bottom of the pool, only relative geometric information can be obtained; the brick length and width are therefore calculated by scale conversion from the relative results. The measurement accuracy is assessed by comparing the true and calculated values of the brick length and width.
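The line-length and scale-conversion step can be sketched in a few lines; `scaled_length` and its reference-distance argument are our own illustrative names, since the paper does not specify which known dimension anchors the scale:

```python
import math

def line_length(p, q):
    """Euclidean distance between two 3D points (model-space units)."""
    return math.dist(p, q)

def scaled_length(rel_length, rel_reference, true_reference):
    """Convert a relative (model-space) length to metric units using one
    known reference distance, e.g. a known dimension of the brick."""
    return rel_length * (true_reference / rel_reference)
```

Any measured line can then be converted to millimetres once a single model-space distance with a known true value is identified.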
The GoPro was calibrated at a distance of 0.5 m in the underwater environment using both the conventional method and the proposed camera calibration method (Figures 14-16, Table 1). The calibrated results were applied to the brick size measurement, and the measurement accuracy is shown in Table 2.

Table 2. Measurement accuracy of the brick size.

                            Conventional Method    Proposed Method
Measurement accuracy (mm)          5.00                 1.24

The measurement accuracy acquired by the proposed method is 1.24 mm, which is superior to the accuracy obtained by the conventional method (Table 2). The proposed camera calibration method can detect almost all the image distortions, and the systematic distortion error can be compensated in the geometric correction process, improving the subsequent photogrammetric steps.

Conclusions
In this paper, a combined physical and mathematical calibration method for low-cost cameras in the air and underwater environment is proposed. In the proposed method, the physical radial, decentering, and in-plane distortion models are integrated to describe the camera distortions. The combination is a high-order polynomial whose terms can be considered as basis functions that successively approximate the distortions during the calibration processes from the point of view of mathematical approximation. The proposed method then uses least squares estimation with evenly distributed CPs to estimate the camera calibration parameters, after which the original calibration fixture images are geometrically corrected using the estimated distortion parameters. Unlike the conventional methods, the proposed method performs this calibration repeatedly: the geometrically corrected images are treated as input distorted images, the distortion model is re-applied, a new set of camera parameters is estimated, and the corrected images are corrected again, and so on, until certain criteria are met. When the previous camera calibration results (m sets of calibration parameters) are applied to the image geometric correction, a strategy is used to perform resampling just once from the original calibration fixture images, in order to avoid the image quality degradation caused by multiple resampling. In the end, several sets of distortion parameters and stable camera IO parameters are obtained, which act as the final camera calibration results and jointly describe all the image distortions.
The in-air calibration results with the Canon EOS 5Dsr and GoPro Hero 5 Black demonstrate that the distortions of these cameras vary greatly in magnitude. The Australis model cannot cover all the distortions, and residual distortions remained with the conventional method. With the proposed method, all the final distortion parameters are close to 0 and negligible after several calibrations; meanwhile, stable IO elements are achieved. The underwater calibration with the GoPro Hero 5 Black indicates that the calibration parameters in the underwater environment differ from those in the air, owing to the refractive and asymmetric environment effects, and that the proposed method is applicable to underwater camera calibration as well. In summary, the experimental results demonstrate that the proposed method is flexible to use and has practical potential for achieving high three-dimensional measurement accuracies in both in-air and underwater applications.
Further study will focus on applying the proposed method in situations where no control points are available, and on investigating how the proposed method can be used in triangulation when camera calibration and block adjustment are performed jointly. Currently, our research team is establishing a physical refraction imaging model for the underwater environment, in which the object distance is taken as one of the parameters.