Quantification Method for the Uncertainty of Matching Point Distribution on 3D Reconstruction

Abstract: Matching points are the direct data source for the fundamental matrix, camera parameters, and point cloud calculation. Thus, their uncertainty has a direct influence on the quality of image-based 3D reconstruction and depends on the number, accuracy, and distribution of the matching points. This study mainly focuses on the uncertainty of the matching point distribution. First, horizontal dilution of precision (HDOP) is used to quantify the feature point distribution in the overlapping region of multiple images. Then, the quantization method is constructed, and H*, the average of HDOP* = (2/π) × arctan((√n/2) × HDOP − 1) over all images, is utilized to measure the uncertainty of matching point distribution on 3D reconstruction. Finally, simulated and real scene experiments were performed to describe and verify the rationality of the proposed method. We found that the relationship between H* and the matching point distribution was consistent with that between the matching point distribution and 3D reconstruction. Consequently, the proposed method may be feasible for predicting the quality of 3D reconstruction.


Introduction
A considerable amount of research has been conducted on image-based three-dimensional (3D) reconstruction in traditional 3D terrain reconstruction [1], urban (rigid object) reconstruction [2], and vegetation (nonrigid object) reconstruction [3]. The rapid extraction and construction of 3D models from images plays an important role in spatial data acquisition. Carrivick [4] summarized the quantitative research on errors in 3D reconstruction, which has mainly focused on the data sources, results, and similar aspects. Research on the reconstruction process itself is an essential topic in the GIS (Geographic Information System) field for ensuring the accuracy of 3D reconstruction.
Image-based 3D reconstruction is a complex process that involves several steps: feature extraction and matching, fundamental matrix computation, camera calibration, and point cloud reconstruction. A large number of matching points are extracted from stereo pairs for the calculation of the fundamental matrix, camera parameters, and point clouds. Thus, matching points are a direct data source for the other steps of image-based 3D reconstruction, and their uncertainty has an important influence on the quality of 3D reconstruction.
The uncertainty of matching points depends on numerous factors [5], including the number, accuracy, and distribution of the matching points. Statistical methods [6][7][8], the covariance matrix [9][10][11][12][13], and the covariance propagation law [14][15][16][17][18] have been utilized to calculate the uncertainty of feature points. A pair of matching points comprises two feature points. Liu [19] discussed several types of camera geometry and error analyses of feature point matching. Gui [20] presented a novel point-pattern matching method based on speeded-up robust features and the shape context to increase matching accuracy. Tong [21] improved the scale-invariant feature transform (SIFT) algorithm and removed feature points surrounding image boundaries to increase matching accuracy. Zhao [22] employed the normalized cross-correlation coefficient and a bidirectional matching strategy to improve matching point accuracy. Hu [23] improved the robustness and accuracy of matching points by using the structural distance between feature point sets as the basis of matching similarity. Most previous studies have focused on the accuracy, not the distribution, of matching points. Therefore, the present work focuses on the distribution of matching points. Suppose that matching points with the same precision but different distributions are used to reconstruct 3D models. Whether the reconstructed results will remain the same, and whether such matching points can be used to reconstruct high-quality 3D models, remains unknown. Additionally, a quantification method for the matching point distribution warrants consideration.
This study assumes that the accuracy of the matching points is constant and measures the uncertainty of the matching point distribution on 3D reconstruction. First, horizontal dilution of precision (HDOP) was used to quantify the feature point distribution. Then, the quantization method was constructed, and H*, the average of HDOP* on all images, was utilized to measure the uncertainty of matching point distribution on 3D reconstruction. Finally, simulated and real scene experiments were performed to describe and verify the rationality of the proposed method.

Methods
Suppose that matching points of the same stereo pair are extracted multiple times using the same algorithm. The results may differ due to different algorithm parameters, as shown in Figure 1. In principle, the three groups of 3D models reconstructed from the matching points in Figure 1 are not exactly the same. A large number of preliminary experiments have shown that 3D models have good quality when they are reconstructed on the basis of evenly distributed matching points. Consequently, uniformity is an important index for measuring the distribution of matching points. Dilution of precision (DOP) is a term used in satellite navigation and geomatics engineering to specify the additional multiplicative effect of navigation satellite geometry on positional measurement precision, and it can be used to assess the spatial geometric layout of constellations [24][25][26]. DOP indicates how uniform the satellite distribution is in each orbital plane [27]. HDOP, a type of DOP, expresses the precision of the plane position on the basis of satellite latitude and longitude coordinates. Point distribution depends on the positional relationship of the point coordinates on the image, and the point coordinates (x and y) on the image are analogous to satellite latitude and longitude coordinates. Therefore, in this study, HDOP was selected to quantify the uniformity of the point distribution.
As shown in Figure 1, the center point (white box) is compared to a receiver, and the feature points (black points) in the overlapping regions are compared to satellites. The first step of the HDOP calculation is to construct the geometry matrix A. The three elements of each row of A are the two components of the unit vector from the center point to the indicated point, together with a constant 1:

A = [(x_i − x_c)/d_i, (y_i − y_c)/d_i, 1], i = 1, ..., n, with d_i = sqrt((x_i − x_c)^2 + (y_i − y_c)^2), (1)

where (x_c, y_c) denotes the center point and (x_i, y_i) the i-th feature point. HDOP is given by:

HDOP = sqrt(q_11 + q_22 + q_33), (2)

where:

Q = (A^T A)^(−1) = (q_jk), j, k = 1, 2, 3. (3)

Here, n indicates the number of points on the image, and tr(A^T A) = 2n. Suppose that λ_1, λ_2, and λ_3 are the eigenvalues of A^T A; then λ_1 + λ_2 + λ_3 = 2n. Gerschgorin's disk theorem [28] in matrix theory shows that the ranges of the first and the second eigenvalues of A^T A are the same. Additionally, we know from the literature [29] that λ_3 ≥ n. Therefore:

HDOP^2 = tr((A^T A)^(−1)) = tr(diag(1/λ_1, 1/λ_2, 1/λ_3)) = 1/λ_1 + 1/λ_2 + 1/λ_3 ≥ 2/sqrt(λ_1 × λ_2) + 1/λ_3 ≥ 4/n, (4)

since sqrt(λ_1 × λ_2) ≤ (λ_1 + λ_2)/2 = (2n − λ_3)/2 ≤ n/2, so that HDOP ≥ 2/sqrt(n).
As can be seen from Equations (1)-(4), HDOP is related to both the number and the positions of the points. Therefore, HDOP is divided by its lower bound 2/sqrt(n) (that is, multiplied by √n/2) to remove the effect of the number of points.
Meanwhile, a normalization is applied to transform (√n/2) × HDOP into the range 0-1.
Specifically, subtracting 1 from (√n/2) × HDOP changes its range from [1, +∞) to [0, +∞). Then, the arctangent function is selected for the transformation, with a range between 0 and π/2. Thereafter, the transformation result is multiplied by 2 and divided by π. Finally, HDOP is converted to a value between 0 and 1:

HDOP* = (2/π) × arctan((√n/2) × HDOP − 1). (5)

The feature points that represent the same spatial position on two or more images form a pair of matching points. As shown in Figure 1, the quantization for the uncertainty of matching point distribution is designed as follows.
Specific steps:
1. Matching points on multiple images are extracted.
2. The overlapping (or interested) region of the multiple images is estimated, and the center point coordinate of this region is computed.

3. HDOP* is calculated for each image on the basis of the feature points in its overlapping region.

4. H*, the average of HDOP* on all images, is calculated.
In this study, H* indicates the uncertainty of the matching point distribution and has a range of [0, 1]. When the H* calculated from matching points with a certain distribution is close to 0, the quality of the 3D models reconstructed from these matching points is likely to be good. When H* is close to 1, the matching points may need to be re-extracted for 3D reconstruction.
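The steps above can be sketched in code. This is a minimal illustration, not the authors' implementation; it assumes the notation reconstructed in this section: rows of A of the form (Δx/d, Δy/d, 1), HDOP computed from the trace of (AᵀA)⁻¹, HDOP* = (2/π) × arctan((√n/2) × HDOP − 1), and H* as the average of HDOP* over the images.

```python
import numpy as np

def hdop(points, center):
    """HDOP of a 2D point set relative to a center point.
    Rows of the geometry matrix A are (dx/d, dy/d, 1), as in Equation (1)."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts - center, axis=1)
    A = np.column_stack([(pts - center) / d[:, None], np.ones(len(pts))])
    Q = np.linalg.inv(A.T @ A)        # Equation (3)
    return np.sqrt(np.trace(Q))       # Equation (2)

def hdop_star(points, center):
    """Normalized distribution measure, Equation (5): value in [0, 1)."""
    n = len(points)
    return (2.0 / np.pi) * np.arctan(np.sqrt(n) * hdop(points, center) / 2.0 - 1.0)

def h_star(point_sets, centers):
    """H*: average of HDOP* over all images of the stereo pair."""
    return float(np.mean([hdop_star(p, c) for p, c in zip(point_sets, centers)]))

# Points spread evenly around the center give a small HDOP*;
# points clustered far from the center give a value much closer to 1.
rng = np.random.default_rng(1)
center = np.array([0.5, 0.5])
even = rng.uniform(0.0, 1.0, size=(200, 2))        # spread over the whole image
clustered = rng.uniform(0.9, 1.0, size=(200, 2))   # lower-right corner only
print(hdop_star(even, center), hdop_star(clustered, center))
```

The same `h_star` call applied to the left and right images of a stereo pair yields the overall uncertainty measure used in the experiments.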

Experiment
To test whether the matching point distribution quantized by the proposed method can reflect the quality of 3D reconstruction, stereo pairs were selected from simulated and real scenes. Meanwhile, ground-truth data were available and were used to evaluate the uncertainty of the 3D models reconstructed from the matching points.

Data Source
The simulated scene of this experiment was the indoor calibration field of Nanjing Normal University in China. Photos were taken with an OLYMPUS E-20; the photo size was 2560 pixels × 1920 pixels, with a focal length of 9 mm. The stereo pair comprised the two photos in Figure 2a,b, and its overlap was 80%. A total of 227 pairs of matching points were extracted by hand, and the corresponding coordinates of the 3D points (ground-truth data) are shown in Figure 2c. These were measured by constructing a local coordinate system, and the unit is meters.

H* of Matching Points
The proposed method can be used to calculate H* for matching points with different numbers and different distribution regions. Specifically, the overlapping region (red rectangles in Figure 3) of the stereo pair needs to be estimated, and its center points (yellow dots in Figure 3) can then be calculated. HDOP* on the left image and HDOP* on the right image were calculated by using Equations (1)-(5), and then H* was determined. The specific results are shown in Table 1.

Result Evaluation
The rationality of the proposed method was evaluated by the number and the distribution of matching points.

Evaluated by the Number of Matching Points
In this experiment, the internal and external parameters of the stereo pair in Figure 2 were calculated four times using the direct linear transformation algorithm and the matching points in Figure 3(a1-d1). The coordinates of the 3D points were therefore also calculated four times, and the results are shown in Figure 4a-d. Subsequently, the calculated coordinates were subtracted from the true values in Figure 2c. The distance errors after subtraction are shown in Figure 4e. It can be seen from Figure 4e that the distance errors of the 3D points calculated from the matching points in Figure 3(a1-d1) are similar; that is, the number of matching points has little effect on 3D reconstruction. The proposed method therefore removes the effect of the number of matching points on 3D reconstruction. As shown for Figure 3(a1-d1) in Table 1, the H* values calculated from matching points with different numbers are also similar, and these values do not change with the number of matching points.
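The evaluation above compares reconstructed 3D coordinates against the ground truth by per-point Euclidean distance and its mean (as later summarized in Table 2). A minimal sketch of this error metric, using hypothetical coordinates rather than the experiment's actual data:

```python
import numpy as np

def distance_errors(reconstructed, ground_truth):
    """Per-point Euclidean distance between reconstructed and
    ground-truth 3D coordinates (both of shape (n, 3), in meters)."""
    return np.linalg.norm(np.asarray(reconstructed) - np.asarray(ground_truth), axis=1)

# Hypothetical example: three reconstructed points vs. their true positions.
truth = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
recon = np.array([[0.0, 0.0, 0.1], [1.0, 0.2, 0.0], [0.0, 1.0, 1.0]])
errs = distance_errors(recon, truth)
print(errs)         # per-point distance errors
print(errs.mean())  # mean error over all points
```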

Evaluated by the Distribution of Matching Points
This experiment consists of two parts. One is to study the change rule of the 3D points when the matching points gather toward the center point of the overlapping regions, and the other is to analyze the case when the matching points gather toward one corner of the overlapping regions.
 Matching points gathered toward the center point of the overlapping regions. The 3D points can also be calculated four times from Figure 3(a2-d2), and their coordinates are not exactly equal. The specific distance errors are shown in Figure 5 (each curve gives the distances between the 3D points calculated from one of Figure 3(a2)-(d2) and the ground truth in Figure 2c). The matching points in Figure 3(a2-d2) are evenly distributed around the center point of the overlapping regions and have different distribution regions (yellow rectangles). The distance errors of the 3D points calculated from these matching points are also similar, as shown in Figure 5. Hence, the distribution region of matching points around the center point of the overlapping regions also has little effect on 3D reconstruction. Therefore, the center point of the overlapping regions is considered an important point of the stereo pair, and in the proposed method it is selected as the receiver for measuring the uncertainty of the matching point distribution.
 Matching points gathered toward one corner of the overlapping regions. The 3D points can also be calculated four times from Figure 3(a3-d3), and their coordinates are likewise not exactly equal. The specific distance differences are shown in Figure 6. The matching points in Figure 3(a3-d3) have the same number but different distribution regions (yellow rectangles), and they gradually gather toward the lower-right corner of the overlapping regions. The distance errors of the 3D points calculated from these matching points differ considerably, as shown in Figure 6.
The distribution of matching points deviating from the center point affects the accuracy of the 3D points. When the matching points are more clustered and farther from the center point, the calculated 3D point errors are larger, as shown by the symbol ○ in Figure 6. However, not all distance errors marked with ○ are the largest. Among them, the errors at serial numbers 105-136, 180-195, and 205-220 are the smallest, because their corresponding matching points are mainly distributed in the lower-right corner of the overlapping regions. Analyzing the four groups of distance errors in Figure 6, we can say that the 3D points located in the region where the matching points are concentrated are more precise and have smaller distance errors.
In addition, Table 2 shows the mean of the distance errors in Figure 6, which reflects the accuracy of the 3D points as a whole. In the table, the mean error for Figure 3(a3) is the smallest, followed by that for Figure 3(b3), and the maximum value is that for Figure 3(d3). This is consistent with the change rule of H* for Figure 3(a3-d3) in Table 1.
Table 2. Mean error of 3D points in Figure 6.
In a series of simulated scene experiments, we also found that the relationship between the matching point distribution and H* was consistent with that between the matching point distribution and the accuracy of the 3D points. Therefore, H* can be used to measure the uncertainty of matching point distribution on 3D reconstruction.

Data Source
In this experiment, the stereo pair (Tsinghua University Gate) [28] published by the Institute of Automation of the Chinese Academy of Sciences was selected for testing. Feature points were detected in Figure 7a,b by using the SIFT algorithm, and matching points were then extracted by using the nearest-neighbor search algorithm of the k-d tree. A total of 943 pairs of matching points were extracted. The point clouds of the Tsinghua University Gate are shown in Figure 7c; these are the ground-truth data. To verify the rationality of the proposed method, the following experimental data were designed for analysis. There were 943 pairs of matching points in Figure 8a distributed throughout the entire overlapping region, and 300 pairs in Figure 8b surrounding the center point of the overlapping regions. Meanwhile, 442 pairs in Figure 8c and 300 pairs in Figure 8d with different distribution regions were extracted surrounding the lower-right corner of the overlapping regions.
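The matching step described above (nearest-neighbor search over SIFT descriptors) can be sketched as follows. This is an illustrative stand-in, not the authors' code: random vectors replace real SIFT descriptors, a brute-force search replaces the k-d tree, and the ratio test (Lowe's standard ambiguity check for SIFT matching) is an assumption of this sketch.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor descriptor matching with a ratio test.
    Brute-force stand-in for a k-d tree search; desc_a and desc_b
    are (n, d) arrays of feature descriptors."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]        # two nearest neighbors
        if dists[j1] < ratio * dists[j2]:     # reject ambiguous matches
            matches.append((i, j1))
    return matches

# Illustration with synthetic "descriptors": each descriptor in b is a
# slightly perturbed copy of the corresponding one in a, so the correct
# match for index i is index i.
rng = np.random.default_rng(0)
a = rng.normal(size=(50, 128))
b = a + 0.01 * rng.normal(size=(50, 128))
pairs = match_descriptors(a, b)
```

In practice, the search over 128-dimensional SIFT descriptors is accelerated with a k-d tree (as in the text) rather than the brute-force loop shown here.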

H* of Matching Points
The overlapping region (red rectangles in Figure 8) of the stereo pair needs to be estimated, and their center points (yellow dots in Figure 8) can then be calculated.
HDOP* on the left image and HDOP* on the right image were calculated by using Equations (1)-(5), and then H* was determined. The specific results are shown in Table 3. The H* value for Figure 8d is 0.9419; it is the largest and close to 1. From the relationship between H* and the matching points, we can see that the change rule of H* is related to the distribution, not to the number, of the matching points. When the distribution of matching points is clustered and far away from the center point of the overlapping regions, its H* is larger and close to 1.

Result Evaluation
In this experiment, the open-source software VisualSFM [30,31] was selected to reconstruct dense point clouds. VisualSFM was operated as follows: the stereo pair (with features) in Figure 7 is loaded; the matching points in Figure 8 are imported; sparse point clouds are computed; and dense reconstruction is run to obtain dense point clouds. The reconstructed dense point clouds are illustrated in Figure 9.
Four groups are available, where each one is a display image of the dense point clouds reconstructed on the basis of the matching points in Figure 8. They reflect several relationships between the 3D point clouds and the matching points.
 The matching points in Figure 8b,d have the same number but different distributions, and the 3D point clouds reconstructed using them are quite different. Here, the quality achieved using the matching points surrounding the center point of the overlapping regions is higher. So, we can say that matching points surrounding the center point may have less effect on the 3D point clouds.
 The matching points in Figure 8a,c,d gradually gather toward the lower-right corner of the overlapping regions. Here, the three groups of 3D point clouds reconstructed using them are quite different. The overall shapes shown in Figure 9a,c are more complete, and that in Figure 9d is incomplete. Meanwhile, compared with Figure 9a, there are a small number of block objects in Figure 9c and a large number in Figure 9d that are not associated with the reconstructed Tsinghua Gate. This change rule is consistent with the uniformity of the matching point distribution.
The above relationships between the 3D point clouds and the matching points in the real scene experiment are consistent with those in the simulated scene experiment. In addition, it is necessary to quantitatively measure the similarity between 3D models. There have been many studies on posture estimation of 3D models and object search in model-based vision [32][33][34][35]. Topology matching of 3D shapes [33] is calculated quickly, accurately, and automatically by comparing Multiresolutional Reeb Graphs; this study utilized it to compute the similarity between the reconstructed and ground-truth 3D models. Here, the 3D models were obtained through Poisson surface reconstruction on the basis of the dense point clouds in Figures 9 and 7c. Table 4 shows that the 3D models in Figure 9a-c are more similar to the ground truth, and that in Figure 9d is the least similar to that in Figure 7c.
Table 4. Similarity between the reconstructed and ground-truth models.

The correlation analysis in Table 4 indicates that the quality of the reconstructed 3D model based on Figure 9a is the best, followed by that based on Figure 9b, whereas that based on Figure 9d is poor. These analysis results are consistent with the change rule of H* in Table 3. Therefore, we can use H* to measure the uncertainty of matching point distribution on 3D reconstruction. In this study, an H* value close to 0 may be indicative of high-quality 3D reconstruction, and a value close to 1 may be indicative of low-quality 3D reconstruction.

Conclusion
Through a series of experiments, we obtained the following findings on the relationship between 3D reconstruction and matching points.
1. The number of matching points in this study had little effect on the accuracy of 3D reconstruction.
2. The distribution regions of matching points surrounding the center point of the overlapping regions had little effect on the accuracy of 3D reconstruction.
3. A matching point distribution deviating from the center point of the overlapping regions affected the accuracy of 3D reconstruction.
To quantize the uncertainty of matching point distribution on 3D reconstruction, the proposed method needs to reflect the above findings. Therefore, the proposed method was designed as follows:
 HDOP was introduced to this study from satellite navigation and geomatics engineering. Here, the center point of the overlapping regions was selected as the receiver, which can reduce the effect of matching points surrounding the center point on HDOP.
 HDOP* was constructed to measure the distribution of feature points, and it has a range of [0, 1]. Here, HDOP* = (2/π) × arctan((√n/2) × HDOP − 1), where multiplying HDOP by √n/2 removes the effect of the number of feature points.
 H*, the average of HDOP* on all images, was utilized to measure the uncertainty of matching point distribution on 3D reconstruction.
In this study, simulated and real scene experiments were performed, and we found that the change rules of H*, 3D reconstruction quality, and matching point distribution were consistent. Therefore, it is reasonable for H* to indicate the uncertainty of matching point distribution on 3D reconstruction.
In the feature extraction step of image-based 3D reconstruction, the proposed method can be utilized to measure the distribution of matching points. When H* is close to 0, the quality of the reconstructed 3D models is likely to be good, and we can continue to reconstruct 3D models based on these matching points. When it is close to 1, we may need to re-extract matching points before reconstructing the 3D models.