Article

Quantification Method for the Uncertainty of Matching Point Distribution on 3D Reconstruction

1 College of Resources and Environment, Chengdu University of Information Technology, Chengdu 610000, China
2 Ministry of Education Key Laboratory of Virtual Geographic Environment, Nanjing Normal University, Nanjing 210000, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2020, 9(4), 187; https://doi.org/10.3390/ijgi9040187
Submission received: 28 February 2020 / Revised: 14 March 2020 / Accepted: 22 March 2020 / Published: 25 March 2020
(This article belongs to the Special Issue Deep Learning and Computer Vision for GeoInformation Sciences)

Abstract

Matching points are the direct data source of the fundamental matrix, camera parameters, and point cloud calculation. Thus, their uncertainty has a direct influence on the quality of image-based 3D reconstruction and depends on the number, accuracy, and distribution of the matching points. This study focuses on the uncertainty of matching point distribution. First, horizontal dilution of precision (HDOP) is used to quantify the feature point distribution in the overlapping region of multiple images. Then, the quantization method is constructed: $\overline{HDOP'}$, the average of $HDOP' = 2 \times \arctan(HDOP \times \sqrt{n/5} - 1)/\pi$ over all images, is utilized to measure the uncertainty of matching point distribution on 3D reconstruction. Finally, simulated and real scene experiments were performed to describe and verify the rationality of the proposed method. We found that the relationship between $\overline{HDOP'}$ and the matching point distribution was consistent with that between the matching point distribution and 3D reconstruction quality. Consequently, calculating the uncertainty of matching point distribution may be a feasible way to predict the quality of 3D reconstruction.

1. Introduction

A considerable amount of research has been conducted on image-based three-dimensional (3D) reconstruction, covering traditional 3D terrain reconstruction [1], urban (rigid object) reconstruction [2], and vegetation (nonrigid object) reconstruction [3]. The rapid extraction and construction of 3D models from images plays an important role in spatial data acquisition. Carrivick [4] summarized the quantitative research on errors in 3D reconstruction, which has mainly focused on the data sources, results, and similar aspects. Research on the reconstruction process itself is an essential topic in the GIS (Geographic Information System) field for ensuring the accuracy of 3D reconstruction.
Image-based 3D reconstruction is a complex process that involves several steps: feature extraction and matching, fundamental matrix computation, camera calibration, and point cloud reconstruction. A large number of matching points are extracted from stereo pairs to calculate the fundamental matrix, camera parameters, and point clouds. Thus, matching points are a direct data source for the other steps of image-based 3D reconstruction, and their uncertainty has an important influence on the quality of 3D reconstruction.
The uncertainty of matching points depends on numerous factors [5], including the number, accuracy, and distribution of the matching points. Statistical methods [6,7,8], the covariance matrix [9,10,11,12,13], and the covariance propagation law [14,15,16,17,18] have been utilized to calculate the uncertainty of feature points. A pair of matching points comprises two feature points. Liu [19] discussed several types of camera geometry and error analyses of feature point matching. Gui [20] presented a novel point-pattern matching method based on speeded-up robust features and the shape context to increase matching accuracy. Tong [21] improved the scale-invariant feature transform (SIFT) algorithm and removed feature points near image boundaries to increase matching accuracy. Zhao [22] employed the normalized cross-correlation coefficient and a bidirectional matching strategy to improve matching point accuracy. Hu [23] improved the robustness and accuracy of matching points by using the structural distance between feature point sets as the basis of matching similarity. Most previous studies have focused on the accuracy, rather than the distribution, of matching points. Therefore, the present work focuses on the distribution of matching points. Suppose that matching points with the same precision but different distributions are used to reconstruct 3D models. Whether the reconstructed results will remain the same, and which distributions yield high-quality 3D models, remain unknown. Additionally, a quantification method for matching point distribution warrants consideration.
This study assumes that the accuracy of matching points is constant and measures the uncertainty of matching point distribution on 3D reconstruction. First, horizontal dilution of precision (HDOP) was used to quantify the feature point distribution. Then, the quantization method was constructed, and $\overline{HDOP'}$ was utilized to measure the uncertainty of matching point distribution on 3D reconstruction. Finally, simulated and real scene experiments were performed to describe and verify the rationality of the proposed method.

2. Methods

Suppose that matching points of the same stereo pair are extracted multiple times using the same algorithm. The results may differ due to different algorithm parameters, as shown in Figure 1. In principle, the three groups of 3D models reconstructed from the matching points in Figure 1 are not exactly the same. A large number of preliminary experiments have shown that 3D models have good quality when reconstructed on the basis of evenly distributed matching points. Consequently, uniformity is an important index for measuring the distribution of matching points.
Dilution of precision (DOP) is a term used in satellite navigation and geomatics engineering to specify the additional multiplicative effect of navigation satellite geometry on positional measurement precision, which could be used to assess the spatial geometrical layout characteristics of constellations [24,25,26]. DOP indicates how uniform the satellite distribution is in each orbital plane [27]. HDOP, which is a type of DOP, expresses the precision of the plane position on the basis of satellite latitude and longitude coordinates. Point distribution depends on the positional relationship of point coordinates on the image. Point coordinates (x and y) on the image are similar to satellite latitude and longitude coordinates. In this study, HDOP was selected to quantify the uniformity of point distribution.
As shown in Figure 1, the center point (white box) is compared to a receiver, and feature points (black points) in overlapping regions are compared to satellites. In the first step of the HDOP computation, the unit vector from the center point to an arbitrary point $i$, $\left( \frac{x_i - x}{R_i}, \frac{y_i - y}{R_i} \right)$, is considered. Here, $R_i = \sqrt{(x_i - x)^2 + (y_i - y)^2}$, where $x$ and $y$ denote the position of the center point, and $x_i$ and $y_i$ denote the position of an arbitrary point $i$ on the image. Matrix $A$ is formulated as:

$$A = \begin{bmatrix} \frac{x_1 - x}{R_1} & \frac{y_1 - y}{R_1} & 1 \\ \frac{x_2 - x}{R_2} & \frac{y_2 - y}{R_2} & 1 \\ \vdots & \vdots & \vdots \\ \frac{x_n - x}{R_n} & \frac{y_n - y}{R_n} & 1 \end{bmatrix} \quad (1)$$
The first two elements of each row of $A$ are the components of the unit vector from the center point to the indicated point. HDOP is given by:
$$HDOP = \sqrt{\operatorname{tr}\left( (A^{T}A)^{-1} \right)} \quad (2)$$
where:
$$A^{T}A = \begin{bmatrix} \sum_{i=1}^{n} \frac{(x_i - x)^2}{R_i^2} & \sum_{i=1}^{n} \frac{(x_i - x)(y_i - y)}{R_i^2} & \sum_{i=1}^{n} \frac{x_i - x}{R_i} \\ \sum_{i=1}^{n} \frac{(y_i - y)(x_i - x)}{R_i^2} & \sum_{i=1}^{n} \frac{(y_i - y)^2}{R_i^2} & \sum_{i=1}^{n} \frac{y_i - y}{R_i} \\ \sum_{i=1}^{n} \frac{x_i - x}{R_i} & \sum_{i=1}^{n} \frac{y_i - y}{R_i} & n \end{bmatrix} \quad (3)$$
Here, $n$ indicates the number of points on the image, and $\operatorname{tr}(A^{T}A) = 2n$. Suppose that $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the eigenvalues of $A^{T}A$; then $\lambda_1 + \lambda_2 + \lambda_3 = 2n$. Gerschgorin's disk theorem [28] in matrix theory shows that the ranges of the first and second eigenvalues of $A^{T}A$ are the same. Additionally, we know from the literature [29] that $\lambda_3 \ge n$. Therefore:
$$HDOP = \sqrt{\operatorname{tr}\left( (A^{T}A)^{-1} \right)} = \sqrt{\operatorname{tr}\left( \operatorname{diag}\left( \frac{1}{\lambda_1}, \frac{1}{\lambda_2}, \frac{1}{\lambda_3} \right) \right)} = \sqrt{\frac{1}{\lambda_1} + \frac{1}{\lambda_2} + \frac{1}{\lambda_3}} \ge \sqrt{2 \left( \frac{1}{\lambda_1} \cdot \frac{1}{\lambda_2} \right)^{1/2} + \frac{1}{\lambda_3}} \ge \sqrt{\frac{5}{n}} \quad (4)$$
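The final bound, a step left implicit above, follows from the arithmetic–geometric mean inequality: $2\left( \frac{1}{\lambda_1} \cdot \frac{1}{\lambda_2} \right)^{1/2} \ge \frac{4}{\lambda_1 + \lambda_2} = \frac{4}{2n - \lambda_3}$, and the function $\frac{4}{2n - \lambda_3} + \frac{1}{\lambda_3}$ is increasing on $[n, 2n)$, taking its minimum $\frac{5}{n}$ at $\lambda_3 = n$ (i.e., when $\lambda_1 = \lambda_2 = n/2$).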
As can be seen from Equations (1)–(4), HDOP is related to both the number and the positions of the points. The purpose of dividing HDOP by $\sqrt{5/n}$ is therefore to remove the effect of the number of points. Meanwhile, a normalization is used to transform $HDOP \times \sqrt{n/5}$ into the range 0–1. Specifically, $HDOP \times \sqrt{n/5} - 1$ shifts the range from $(1, +\infty)$ to $(0, +\infty)$. The arctangent function is then applied, mapping this to a range between 0 and $\pi/2$. Thereafter, the result is multiplied by 2 and divided by $\pi$. Finally, HDOP is converted to a value between 0 and 1:
$$HDOP' = 2 \times \arctan\left( HDOP \times \sqrt{n/5} - 1 \right) / \pi \quad (5)$$
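As a concrete illustration, a minimal Python sketch of Equations (1)–(5) might look as follows; the function name and array layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hdop_prime(points, center):
    """HDOP' of Equation (5) for the feature points on one image.

    points: (n, 2) array of feature point coordinates (x_i, y_i).
    center: (x, y) of the center point of the overlapping region.
    Assumes no feature point coincides with the center point.
    """
    d = np.asarray(points, dtype=float) - np.asarray(center, dtype=float)
    R = np.linalg.norm(d, axis=1)                     # R_i of Equation (1)
    A = np.column_stack([d[:, 0] / R, d[:, 1] / R, np.ones(len(d))])
    hdop = np.sqrt(np.trace(np.linalg.inv(A.T @ A)))  # Equation (2)
    n = len(d)
    # Equation (5): remove the effect of n, then map into the range (0, 1).
    return 2.0 * np.arctan(hdop * np.sqrt(n / 5.0) - 1.0) / np.pi
```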
Feature points that represent the same spatial position on two or more images form a pair of matching points. As shown in Figure 1, matching points comprise feature points on the left and right images, and their pixel coordinates in the two images differ. Hence, the $HDOP'$ values calculated by using Equations (1)–(5) on the left and right images are different. To measure the uncertainty of matching point distribution, this study selected the average $HDOP'$ over all images as the final result.
The quantization for the uncertainty of matching point distribution is designed as follows:
Specific steps:
1.  Matching points on multiple images are extracted.
2.  The overlapping region (or region of interest) of the multiple images is estimated, and the center point coordinate of this region is computed.
3.  $HDOP'$ is calculated for each image separately, based on the feature points in the overlapping region.
4.  $\overline{HDOP'}$, the average of $HDOP'$ over all images, is calculated (a combined sketch of these steps is given at the end of this section).
In this study, $\overline{HDOP'}$ indicates the uncertainty of matching point distribution and has a range of [0, 1]. When the $\overline{HDOP'}$ calculated from matching points of a certain distribution is close to 0, the quality of 3D models reconstructed from these matching points is likely to be good. When $\overline{HDOP'}$ is close to 1, the matching points may need to be re-extracted for 3D reconstruction.
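A sketch of steps 1–4, reusing the `hdop_prime` function above (the input format is an assumption):

```python
def mean_hdop_prime(points_per_image, centers):
    """Steps 3-4: average HDOP' over all images of the matching points.

    points_per_image: one (n, 2) array of matched feature points per image,
        e.g., [pts_left, pts_right] for a stereo pair.
    centers: the center point of the overlapping region on each image.
    """
    values = [hdop_prime(p, c) for p, c in zip(points_per_image, centers)]
    return sum(values) / len(values)
```

A value near 0 would then suggest proceeding with reconstruction, while a value near 1 would suggest re-extracting the matching points.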

3. Experiment

To test whether the matching point distribution quantized by the proposed method can reflect the quality of 3D reconstruction, stereo pairs were selected from simulated and real scenes. Meanwhile, groundtruth data were available and used to evaluate the uncertainty of the 3D models reconstructed from the matching points.

3.1. Simulation Scene

3.1.1. Data Source

The simulation scene of this experiment was the indoor calibration field of Nanjing Normal University in China. Photos were taken with an OLYMPUS E-20; the photo size was 2560 pixels × 1920 pixels, with a focal length of 9 mm. The stereo pair comprised the two photos in Figure 2a,b, and its overlap was 80%. A total of 227 pairs of matching points were extracted by hand, and the corresponding coordinates of the 3D points (groundtruth data) are shown in Figure 2c. These were measured by constructing a local coordinate system, with meters as the unit.
Three groups of experimental data were designed for analysis. The first group, Figure 3(a1–d1), contained evenly distributed matching points of different numbers (227 pairs in Figure 3(a1), 160 pairs in Figure 3(b1), 90 pairs in Figure 3(c1), and 20 pairs in Figure 3(d1)) in the same distributed region. The second (Figure 3(a2–d2)) and third (Figure 3(a3–d3)) groups contained the same number of matching points in different distributed regions: fifty pairs of matching points were extracted by random sampling and by controlling pixel coordinates on the images, and their distributed regions are shown in Figure 3(a2–d3).

3.1.2. $\overline{HDOP'}$ of Matching Points

The proposed method can be used to calculate the $\overline{HDOP'}$ of matching points with different numbers and different distributed regions. Specifically, the overlapping region (red rectangles in Figure 3) of the stereo pair is estimated, and its center points (yellow dots in Figure 3) are then calculated. $HDOP'_L$ on the left image and $HDOP'_R$ on the right image were calculated by using Equations (1)–(5), and $\overline{HDOP'}$ was then determined. The calculated results are shown in Table 1.
The $\overline{HDOP'}$ values are distributed between 0 and 1. In Table 1, the values for Figure 3(a1–d1) are similar, indicating that the number of matching points has little effect on $\overline{HDOP'}$. The values for Figure 3(a2–d2) are also similar and close to 0, meaning that the distributed regions of matching points surrounding the center point of the overlapping region have little effect on $\overline{HDOP'}$. The values for Figure 3(a3–d3) increase steadily: the value for Figure 3(a3) is the smallest and close to 0, and that for Figure 3(d3) is the largest and close to 1. This means that the uniformity of matching point distribution in the overlapping region affects $\overline{HDOP'}$. When matching points are evenly distributed in the overlapping region, the $\overline{HDOP'}$ value is small and close to 0. When matching points are clustered, the $\overline{HDOP'}$ value is large and close to 1.

3.1.3. Result Evaluation

The rationality of the proposed method was evaluated with respect to the number and the distribution of the matching points.

Evaluated by the Number of Matching Points

In this experiment, the internal and external parameters of the stereo pair in Figure 2 were calculated four times using the direct linear transformation (DLT) algorithm and the matching points in Figure 3(a1–d1). The coordinates of the 3D points were thus also calculated four times; the calculations are shown in Figure 4a–d. Subsequently, the calculated values were subtracted from the true values in Figure 2c. The distance errors after subtraction are shown in Figure 4e.
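The text does not list the triangulation details; once the two $3 \times 4$ projection matrices have been estimated by DLT, one common linear way to recover a 3D point from a pair of matching points is the following sketch (the names `P1`, `P2` and the SVD-based solution are assumptions, not the authors' code):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation of one 3D point from a pair of matching points.

    P1, P2: (3, 4) projection matrices of the left and right images.
    x1, x2: (x, y) pixel coordinates of the matching points.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: the right singular vector
    # associated with the smallest singular value.
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]  # dehomogenize
```

The distance errors of Figure 4e would then correspond to the Euclidean norms between the triangulated and true 3D coordinates.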
It can be seen from Figure 4e that the distance errors of the 3D points calculated from the matching points in Figure 3(a1–d1) are similar; the number of matching points has little effect on 3D reconstruction. The proposed method correspondingly removes the effect of the number of matching points on 3D reconstruction: as shown for Figure 3(a1–d1) in Table 1, the $\overline{HDOP'}$ values calculated from different numbers of matching points are also similar and do not change with the number of matching points.

Evaluated by the Distribution of Matching Points

This experiment comprises two parts. One studies how the 3D points change when the matching points are gathered toward the center point of the overlapping region, and the other analyzes the case in which the matching points are gathered toward one corner of the overlapping region.
• Matching points gathered toward the center point of overlapping regions
The 3D points can also be calculated four times from Figure 3(a2–d2), and their coordinates are not exactly equal. The specific distance errors are shown in Figure 5.
The matching points in Figure 3(a2–d2) are evenly distributed around the center point of the overlapping region and have different distributed regions (yellow rectangles). The distance errors of the 3D points calculated from these matching points are also similar, as shown in Figure 5. Thus, the distributed regions of matching points surrounding the center point of the overlapping region also have little effect on 3D reconstruction. The center point of the overlapping region is therefore considered an important point of the stereo pair, and in the proposed method, it is selected as the receiver for measuring the uncertainty of matching point distribution.
• Matching points gathered toward one corner of the overlapping regions
The 3D points can also be calculated four times from Figure 3(a3–d3), and their coordinates are also not exactly equal. The specific distance errors are shown in Figure 6.
The groups in Figure 3(a3–d3) have the same number of matching points in different distributed regions (yellow rectangles), gradually gathering toward the lower-right corner of the overlapping region. The distance errors of the 3D points calculated from these matching points differ widely, as shown in Figure 6. A matching point distribution that deviates from the center point affects the accuracy of the 3D points. When the matching points are more clustered and farther from the center point, the calculated 3D point errors are larger, as shown by the symbol ○ in Figure 6. However, not all distance errors marked ○ are the largest; among them, serial numbers 105–136, 180–195, and 205–220 are the smallest, because their corresponding matching points are mainly distributed in the lower-right corner of the overlapping region. From the four groups of distance errors in Figure 6, we can say that the calculated 3D points in the region where the matching points are located are more precise and have smaller distance errors.
In addition, Table 2 shows the mean of the distance errors in Figure 6, which reflects the overall accuracy of the 3D points. In the table, the mean error for Figure 3(a3) is the smallest, followed by that for Figure 3(b3), and the maximum is for Figure 3(d3). This is consistent with the change rule of $\overline{HDOP'}$ for Figure 3(a3–d3) in Table 1.
Through the above experiments, the relationship between the 3D points and the matching point distribution was found to be as follows:
  • The number of matching points had little effect on the accuracy of the 3D points.
  • The distributed regions of matching points surrounding the center point of the overlapping region had little effect on the accuracy of the 3D points.
  • A matching point distribution deviating from the center point of the overlapping region affected the accuracy of the 3D points.
Therefore, the proposed method selects the center point of the overlapping region as the receiver for measuring the HDOP of the feature point distribution and, with the effect of the number of matching points removed, uses $\overline{HDOP'}$ to measure the uncertainty of the matching point distribution on 3D reconstruction.
In a series of simulated scene experiments, we also found that the relationship between the matching point distribution and $\overline{HDOP'}$ was consistent with that between the matching point distribution and the accuracy of the 3D points. Therefore, $\overline{HDOP'}$ can be used to measure the uncertainty of matching point distribution on 3D reconstruction.

3.2. Real Scene

3.2.1. Data Source

In this experiment, the stereo pair (Tsinghua University Gate) [28] published by the Institute of Automation of the Chinese Academy of Sciences was selected for testing. Feature points were detected in Figure 7a,b by using the SIFT algorithm, and matching points were then extracted by using nearest-neighbor search on a k-d tree. A total of 943 pairs of matching points were extracted. The point clouds of the Tsinghua University Gate, which serve as groundtruth data, are shown in Figure 7c.
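With OpenCV, the SIFT detection and k-d tree matching described above could be sketched as follows; the Lowe ratio test is a common filtering step and an assumption here, since the text does not state how ambiguous matches were rejected.

```python
import cv2

def match_sift_kdtree(img_left, img_right, ratio=0.75):
    """Detect SIFT features and match them by k-d tree nearest-neighbor search."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)
    # algorithm=1 selects FLANN's k-d tree index.
    matcher = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    matches = matcher.knnMatch(des1, des2, k=2)
    # Keep only distinctive matches (Lowe's ratio test).
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    pts_left = [kp1[m.queryIdx].pt for m in good]
    pts_right = [kp2[m.trainIdx].pt for m in good]
    return pts_left, pts_right
```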
To verify the rationality of the proposed method, the following experimental data were designed for analysis: 943 pairs of matching points in Figure 8a spread throughout the entire overlapping region, and 300 pairs in Figure 8b surrounding the center point of the overlapping region. Meanwhile, 442 pairs in Figure 8c and 300 pairs in Figure 8d, with different distributed regions, were extracted surrounding the lower-right corner of the overlapping region.

3.2.2. $\overline{HDOP'}$ of Matching Points

The overlapping region (red rectangles in Figure 8) of the stereo pair was estimated, and its center points (yellow dots in Figure 8) were then calculated. $HDOP'_L$ on the left image and $HDOP'_R$ on the right image were calculated by using Equations (1)–(5), and $\overline{HDOP'}$ was then determined. The calculated results are shown in Table 3.
In Table 3, the $\overline{HDOP'}$ values for Figure 8a–d increase gradually. The value for Figure 8a is 0.0284, the smallest and close to 0. The value for Figure 8c is 0.2530, which is moderate. The value for Figure 8d is 0.9419, the largest and close to 1. From the relationship between $\overline{HDOP'}$ and the matching points, we can see that the change rule of $\overline{HDOP'}$ is related to the distribution, not the number, of matching points. When the distribution of matching points is clustered and away from the center point of the overlapping region, its $\overline{HDOP'}$ tends to be large and close to 1.

3.2.3. Result Evaluation

In this experiment, the open-source software VisualSFM [30,31] was selected to reconstruct dense point clouds. VisualSFM was operated as follows: the stereo pair (with features) in Figure 7 was loaded; the matching points in Figure 8 were imported; sparse point clouds were computed; and dense reconstruction was run to obtain dense point clouds. The reconstructed dense point clouds are illustrated in Figure 9.
Figure 9 contains four groups, each displaying the dense point clouds reconstructed from the corresponding matching points in Figure 8. Several relationships between the 3D point clouds and the matching points can be observed.
  • The matching points in Figure 8b,d have the same number but different distributions, and the point clouds reconstructed from them are quite different. The quality obtained with matching points surrounding the center point of the overlapping region is higher. Thus, matching points surrounding the center point may have little adverse effect on 3D point clouds.
  • The matching points in Figure 8a,b have different numbers and are mainly distributed around the center point of the overlapping region, yet the point clouds reconstructed from them are similar. Therefore, the number of matching points may have little effect on 3D point clouds.
  • The matching points in Figure 8a,c,d have different distributed regions and gradually gather toward the lower-right corner of the overlapping region. The point clouds reconstructed from them are quite different: the overall shapes in Figure 9a,c are more complete, whereas that in Figure 9d is incomplete. Meanwhile, compared with Figure 9a, Figure 9c contains a small number, and Figure 9d a large number, of block objects that are not associated with the reconstructed Tsinghua Gate. This change rule is consistent with the uniformity of the matching point distribution.
The above relationships between the 3D point clouds and matching points in the real scene experiment are consistent with that in the simulated scene experiment.
In addition, it is necessary to quantitatively measure the similarity between 3D models. There have been many studies on posture estimation of 3D models and object search in model-based vision [32,33,34,35]. Topology matching [35] computes the similarity of 3D shapes quickly, accurately, and automatically by comparing Multiresolutional Reeb Graphs; this study utilized it to compute the similarity between the reconstructed and groundtruth 3D models. Here, the 3D models were obtained through Poisson surface reconstruction on the basis of the dense point clouds in Figure 7c and Figure 9 (a tooling sketch is given at the end of this subsection). Table 4 shows that the 3D models in Figure 9a–c are more similar, and that in Figure 9d is the least similar, to the model from Figure 7c.
The similarity analysis in Table 4 indicates that the quality of the reconstructed 3D model based on Figure 9a is the best, followed by that based on Figure 9b, whereas that based on Figure 9d is poor. These results are consistent with the change rule of $\overline{HDOP'}$ in Table 3. Therefore, we can use $\overline{HDOP'}$ to measure the uncertainty of matching point distribution on 3D reconstruction. In this study, an $\overline{HDOP'}$ value close to 0 may indicate high-quality 3D reconstruction, and a value close to 1 may indicate low-quality 3D reconstruction.
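The paper does not name the tooling for the Poisson step; as one possibility, Open3D provides an off-the-shelf Poisson surface reconstruction that could produce the meshes compared in Table 4 (the Reeb graph similarity itself is not sketched here):

```python
import open3d as o3d

def poisson_mesh(point_cloud_path, depth=9):
    """Build a mesh from a dense point cloud by Poisson surface reconstruction."""
    pcd = o3d.io.read_point_cloud(point_cloud_path)
    pcd.estimate_normals()  # Poisson reconstruction requires oriented normals
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    return mesh
```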

4. Conclusions

Through a series of experiments, we obtained the following observations about the relationship between 3D reconstruction and matching points.
  • The number of matching points had little effect on the accuracy of 3D reconstruction.
  • The distributed regions of matching points surrounding the center point of the overlapping region had little effect on the accuracy of 3D reconstruction.
  • A matching point distribution deviating from the center point of the overlapping region affected the accuracy of 3D reconstruction.
To quantize the uncertainty of matching point distribution on 3D reconstruction, the proposed method needs to reflect the above observations. Therefore, it was designed as follows:
  • HDOP was introduced from satellite navigation and geomatics engineering. The center point of the overlapping region was selected as the receiver, which reduces the effect on HDOP of matching points surrounding the center point.
  • $HDOP'$ was constructed to measure the distribution of feature points and has a range of [0, 1]. Here, $HDOP' = 2 \times \arctan(HDOP \times \sqrt{n/5} - 1)/\pi$, where the factor $\sqrt{n/5}$ removes the effect of the number of feature points.
  • $\overline{HDOP'}$, the average of $HDOP'$ over all images, was utilized to measure the uncertainty of matching point distribution on 3D reconstruction.
In this study, simulated and real scene experiments were performed, and the change rules of $\overline{HDOP'}$, 3D reconstruction, and matching point distribution were found to be consistent. Therefore, it is reasonable for $\overline{HDOP'}$ to indicate the uncertainty of matching point distribution on 3D reconstruction.
In the feature extraction step of image-based 3D reconstruction, the proposed method can be utilized to measure the distribution of matching points. When $\overline{HDOP'}$ is close to 0, the quality of the reconstructed 3D models is likely to be good, and reconstruction can proceed with these matching points. When it is close to 1, the matching points may need to be re-extracted before reconstructing the 3D models.

Author Contributions

Conceptualization, Yuxia Bian and Xuejun Liu; methodology, Yuxia Bian and Xuejun Liu; software, Hongji Liu; validation, Meizhen Wang; formal analysis, Meizhen Wang; resources, Yuxia Bian and Meizhen Wang; writing—original draft preparation, Yuxia Bian; writing—review and editing, Yuxia Bian, Shuhong Fang and Liang Yu. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant Numbers 41601422, 41771420, and 21607018; the Scientific Research Foundation of the Education Department of Sichuan Province under Grant Number 17ZB0089; Leaders of Disciplines in Science of Chengdu University of Information Technology under Grant Number J201715; and the Technology Innovation Research and Development Project of the Chengdu Science and Technology Bureau under Grant Number 2019-YF05-00941-SN.

Acknowledgments

Thanks to the technical help given by Yansong Duan of Wuhan University, China.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Remondino, F.; El-Hakim, S. Image-based 3D Modelling: A Review. Photogramm. Rec. 2006, 21, 269–291. [Google Scholar] [CrossRef]
  2. Musialski, P.; Wonka, P.; Aliaga, D.G.; Wimmer, M.; Gool, L.V.; Purgathofer, W. A survey of urban reconstruction. Comput. Graph. Forum 2013, 32, 1–26. [Google Scholar] [CrossRef]
  3. Thuy, N.; David, S.; Nelson, M.; Julin, M.; Neelima, S. Structured light-based 3D reconstruction system for plants. Sensors 2015, 15, 18587–18612. [Google Scholar]
  4. Carrivick, J.L.; Smith, M.W.; Quincey, D.J. Structure from Motion in the Geosciences; John Wiley and Sons Limited: Chichester, UK, 2016. [Google Scholar]
  5. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  6. Elhakim, S.F. Accuracy in image measure. Int. Soc. Opt. Eng. 1994, 2350, 218–228. [Google Scholar]
  7. Bowyer, K.; Kranenburg, C.; Dougherty, S. Edge detector evaluation using empirical roc curves. Comput. Vis. Image Underst. 2001, 84, 77–103. [Google Scholar] [CrossRef]
  8. Sankowski, W.; Wkodarczyk, M.; Kacperski, D. Estimation of measurement uncertainty in stereo vision system. Image Vis. Comput. 2017, 60, 70–81. [Google Scholar] [CrossRef]
  9. Kanazawa, Y.; Kanatani, K. Do we really have to consider covariance matrices for image features? IEEE Int. Conf. Comput. Vis. 2001, 2, 301–306. [Google Scholar]
  10. Brooks, M.J.; Chojnacki, W.; Gawley, D.; Hengel, A.V.D. What value covariance information in estimating vision parameters. IEEE Int. Conf. Comput. Vis. 2001, 302–308. [Google Scholar]
  11. Kanatani, K. Uncertainty modeling and model selection for geometric inference. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1307–1319. [Google Scholar] [CrossRef]
  12. Weng, J.; Huang, T. Motion and Structure from Two Perspective Views: Algorithms, Error Analysis, and Error Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 451–476. [Google Scholar] [CrossRef] [Green Version]
  13. Cui, J.; Min, C.; Bai, X. An Improved Pose Estimation Method Based on Projection Vector with Noise Error Uncertainty. IEEE Photonics J. 2019, 11. [Google Scholar] [CrossRef]
  14. Steele, R.; Jaynes, C. Feature uncertainty arising from covariant image noise. Comput. Vis. Pattern Recognit. 2005, 1, 1063–1070. [Google Scholar]
  15. Park, H.; Shin, D.; Bae, H.; Baeg, H. Spatial uncertainty model for visual features using a Kinect sensor. Sensors 2012, 12, 8640–8662. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Belhaoua, A.; Kohler, S.; Hirsch, E. Error evaluation in a stereovision-based 3D reconstruction system. J. Image Video Process. 2010, 6, 1–12. [Google Scholar] [CrossRef] [Green Version]
  17. Haralick, R. Propagating covariance in computer vision. Pattern Recognit. Artif. Intell. 1996, 10, 561–572. [Google Scholar] [CrossRef] [Green Version]
  18. Di Leo, G.; Liguori, C.; Paolillo, A. Covariance Propagation for the Uncertainty Estimation in Stereo Vision. IEEE Trans. Instrum. Meas. 2011, 60, 1664–1673. [Google Scholar] [CrossRef]
  19. Liu, B.G.; Yuan, L.X.; Zheng, N.N.; Shu, F. Several Camera Geometry Models and Error Analysis for Image Matching in 3-D Machine Vision. Acta Photonica Sin. 1997, 26, 737–741. [Google Scholar] [CrossRef]
  20. Gui, Y.; Su, A.; Du, J. Point-pattern matching method using SURF and Shape Context. Opt. Int. J. Light Electron Opt. 2013, 124, 1869–1873. [Google Scholar] [CrossRef]
  21. Tong, G.; Wang, C.C.; Wang, P. Study on improving image feature points detection and matching accuracy in binocular vision system. In Proceedings of the International Industrial Informatics and Computer Engineering Conference, Xi’an, China, 10–11 January 2015. [Google Scholar]
  22. Zhao, Y.; Su, J.B. Local sharpness distribution–based feature points matching algorithm. J. Electron. Imaging 2014, 23, 013011. [Google Scholar] [CrossRef]
  23. Hu, M.; Liu, Y.; Fan, Y. Robust Image Feature Point Matching Based on Structural Distance; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  24. Kihara, M.; Okada, T. A satellite selection method and accuracy for the global positioning system. Navigation 1984, 31, 8–20. [Google Scholar] [CrossRef]
  25. Wu, C.H.; Ho, Y.W.; Chen, L.W.; Huang, Y.D. Discovering approximate expressions of GPS geometric dilution of precision using genetic programming. Adv. Eng. Softw. 2012, 45, 332–340. [Google Scholar] [CrossRef]
  26. Santerre, R.; Geiger, A.; Banville, S. Geometry of GPS dilution of precision: Revisited. GPS Solut. 2017, 21, 1747–1763. [Google Scholar] [CrossRef]
  27. Li, J.; Li, Z.; Zhou, W. Study on the minimum of GDOP in satellite navigation and its applications. Acta Geod. Et Cartogr. Sin. 2011, 40, 85–88. [Google Scholar]
  28. Bu, C.J.; Luo, Y.S. Matrix Theory; Harbin Engineer University Press: Harbin, China, 2003; pp. 164–174. [Google Scholar]
  29. Sheng, H.; Yang, J.S.; Zeng, F.L. The Minimum Value of GDOP in Pseudo-range Positioning. Fire Control Command Control 2009, 34, 22–24. [Google Scholar]
  30. VisualSFM: A Visual Structure from Motion System. Available online: http://ccwu.me/vsfm/ (accessed on 1 September 2018).
  31. Wu, C.; Agarwal, S.; Curless, B.; Seitz, S.M. Multicore bundle adjustment. Comput. Vis. Pattern Recognit. 2011.
  32. Besl, P.J.; McKay, N.D. A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  33. Chen, Y.; Medioni, G. Object Modelling by Registration of Multiple Range Images. Image Vis. Comput. 1992, 10, 145–155. [Google Scholar] [CrossRef]
  34. Weiss, I.; Ray, M. Model-Based Recognition of 3D Objects from Single Vision. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 116–128. [Google Scholar] [CrossRef]
  35. Hilaga, M.; Shinagawa, Y.; Komura, T.; Kunii, L. Topology matching for fully automatic similarity estimation of 3D shapes. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 12–17 August 2001. [Google Scholar]
Figure 1. Distribution of matching points. (a) Linearly distributed matching points. (b) Clustered distributed matching points. (c) Evenly distributed matching points.
Figure 2. Data source of the simulation scene. (a) Left image and the feature points. (b) Right image and the feature points. The symbols + with the same number in (a,b) are a pair of matching points. (c) The true coordinates of 3D points (symbols *) corresponding to matching points.
Figure 3. Matching points with different numbers and different distributed regions. There are 227 pairs in (a1), 160 pairs in (b1), 90 pairs in (c1), and 20 pairs in (d1) in the same distributed regions. There are 50 pairs of matching points gathered toward the center point of overlapping regions in (a2–d2), and toward the lower-right corner of the overlapping regions in (a3–d3). The red rectangle indicates the overlapping region of stereo pairs, the yellow rectangle indicates the region where matching points are located, and the yellow dot indicates the position where the center point of the overlapping region is located.
Figure 4. 3D points and their distance errors. 3D points calculated on the basis of the matching points in (a) Figure 3(a1), (b) Figure 3(b1), (c) Figure 3(c1), and (d) Figure 3(d1). (e) Distance errors of 3D points between the calculated and true values.
Figure 5. Distance errors between the calculated and true values of the 3D points computed from the matching points in Figure 3(a2–d2).
Figure 6. Distance errors between the calculated and true values of the 3D points computed from the matching points in Figure 3(a3–d3).
Figure 7. Data source of the real scene (http://vision.ia.ac.cn/zh/data/index.html). (a) Left image and the feature points. (b) Right image and the feature points. (c) Scanned groundtruth point clouds. The feature points with the same number on the left and right images represent a pair of matching points.
Figure 8. Matching points with different distribution patterns. (a) Throughout the entire overlapping region. (b) Surrounding the center point of the overlapping region. (c,d) Surrounding the lower-right corner of the overlapping region, with different distributed scopes.
Figure 9. Dense point clouds. Reconstructed by matching points in (a) Figure 8a, (b) Figure 8b, (c) Figure 8c, and (d) Figure 8d. (e) Color bar. Different colors indicate the distance between the camera and the reconstructed object. The unit of distance is meters.
Table 1. The results calculated by the matching points in Figure 3.
Matching Points      Figure 3(a1)    Figure 3(b1)    Figure 3(c1)    Figure 3(d1)
$\overline{HDOP'}$   0.0131          0.0134          0.0068          0.0120
Matching Points      Figure 3(a2)    Figure 3(b2)    Figure 3(c2)    Figure 3(d2)
$\overline{HDOP'}$   0.0089          0.0048          0.0122          0.0096
Matching Points      Figure 3(a3)    Figure 3(b3)    Figure 3(c3)    Figure 3(d3)
$\overline{HDOP'}$   0.0091          0.0670          0.2633          0.8770
Table 2. Mean error of 3D points in Figure 6.
3D Point Clouds   Figure 3(a3)    Figure 3(b3)    Figure 3(c3)    Figure 3(d3)
Mean error (m)    0.2228          0.2263          0.2371          0.2533
Table 3. The results calculated by the matching points in Figure 8.
Matching Points      Figure 8a    Figure 8b    Figure 8c    Figure 8d
$HDOP'_L$            0.0330       0.1686       0.2921       0.9501
$HDOP'_R$            0.0238       0.1474       0.2139       0.9337
$\overline{HDOP'}$   0.0284       0.1580       0.2530       0.9419
Table 4. Similarity between the reconstructed and groundtruth models.
3D Model                                  Figure 9a    Figure 9b    Figure 9c    Figure 9d
Similarity to the 3D model in Figure 7c   0.7568       0.7356       0.7143       0.6272
