Article

Efficient Image Registration for Underwater Optical Mapping Using Geometric Invariants

School of Information Science, Japan Advanced Institute of Science and Technology, Ishikawa 923-1292, Japan
*
Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2019, 7(6), 178; https://doi.org/10.3390/jmse7060178
Submission received: 25 April 2019 / Revised: 15 May 2019 / Accepted: 31 May 2019 / Published: 5 June 2019
(This article belongs to the Special Issue Underwater Imaging)

Abstract

Image registration is one of the most fundamental and widely used tools in optical mapping applications. It is mostly achieved by extracting salient points (features) from images, describing them with vectors (feature descriptors), and matching the descriptors. During descriptor matching, mismatches (outliers) inevitably appear. Probabilistic methods, which work in an iterative manner, are then applied to remove the outliers and to find the transformation (motion) between images. In this paper, an efficient way of integrating geometric invariants into feature-based image registration is presented, aiming to improve the performance of image registration in terms of both computational time and accuracy. To do so, geometrical properties that are invariant to coordinate transforms are studied. The proposal is beneficial to all methods that use image registration as an intermediate step. Experimental results are presented using both semi-synthetically generated data and real image pairs from underwater environments.

1. Introduction

Owing to recent developments in robotic platforms carrying cameras, it is possible to obtain optical data from places that humans cannot access (e.g., Mars, the seabed, and many others). Processing the optical data to obtain a 2D map (or mosaic) of the visited area has been very important for different science communities (e.g., remote sensing, geology, ecology, marine science, environmental science and several others), and there has been a high demand for creating maps from gathered images using image mosaicking methods. Image mosaicking can be defined as creating a big image by composing relatively smaller images, and it requires a good harmony of different steps. One of the most crucial steps in image mosaicking (especially when there is no additional sensor information except vision) is to find the transformation (motion) between overlapping image pairs; this process is referred to as image registration [1]. Image registration is typically done by finding and matching salient points (called features) in images. The four main steps of feature-based image registration are (1) feature extraction, (2) computing feature descriptors, (3) descriptor matching and (4) computing a transformation between image coordinate frames using robust estimation methods. During the descriptor matching step, some matched pairs usually occur that do not follow the transformation (or, equivalently, the relative motion of the robotic platform) between images; these are called outliers. The outliers are rejected using probabilistic methods based on robust estimation (mainly the Random Sampling Consensus (RANSAC) algorithm [2]). Afterward, the transformation between images is calculated by minimizing a pre-defined cost function on the feature point positions [3]. The success of this minimization depends on whether the matching between the features of the two images is free of outliers or not.
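Step (4) can be illustrated with a minimal, NumPy-only RANSAC sketch for a 4-DOF similarity model. The function names and thresholds here are ours, for illustration only; production code would use a library implementation:

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares 4-DOF similarity (scale, rotation, translation)
    mapping src to dst; src and dst are (N, 2) arrays with N >= 2."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # Model: u = a*x - bb*y + tx, v = bb*x + a*y + ty,
        # with a = s*cos(theta), bb = s*sin(theta).
        A.append([x, -y, 1.0, 0.0]); b.append(u)
        A.append([y,  x, 0.0, 1.0]); b.append(v)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    a, bb, tx, ty = sol
    return np.array([[a, -bb, tx],
                     [bb,  a, ty],
                     [0.0, 0.0, 1.0]])

def ransac_similarity(src, dst, n_iter=500, thresh=1.0, seed=None):
    """Classic RANSAC: repeatedly fit on a minimal 2-point sample, keep
    the model with the largest inlier consensus, refit on its inliers."""
    rng = np.random.default_rng(seed)
    n = len(src)
    src_h = np.c_[src, np.ones(n)]
    best_inl = np.zeros(n, dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(n, 2, replace=False)
        H = estimate_similarity(src[[i, j]], dst[[i, j]])
        proj = (H @ src_h.T).T[:, :2]
        inl = np.linalg.norm(proj - dst, axis=1) < thresh
        if inl.sum() > best_inl.sum():
            best_inl = inl
    return estimate_similarity(src[best_inl], dst[best_inl]), best_inl
```

Because the sampling is random, the number of iterations needed grows quickly with the outlier ratio, which is the limitation the present paper targets.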
The existence and the number of these outliers have a direct effect on the performance of the minimization used. Particularly when there are large scale differences between images, the number of mismatches is often high (e.g., Figure 1). This causes an incorrect estimation of the coordinate transformation between the two images. Moreover, the probabilistic nature of RANSAC-based methods can bring a computational burden, as they work in an iterative manner. Our main objective is to develop a method that removes the limitations of the robust estimation methods used in feature-based image registration, especially in cases where outliers are more numerous than inliers, and that improves the performance of robust estimation in terms of both computational cost and accuracy. This would greatly improve most existing methods that use feature-based image registration as an intermediate step, as well as make it possible to match image pairs with a large scale difference. This would be very useful for mapping, 3D reconstruction, object detection, localization and many other applications using robotic platforms.
Several methods have been proposed to improve RANSAC. MSAC (M-Estimator Sample Consensus) [4] is based on RANSAC, whose outcome may change significantly according to the selected error threshold (used to decide whether a datum obeys the estimated model, i.e., whether it is an inlier or an outlier). In MSAC, there is a threshold check for inliers in addition to the outlier check of RANSAC, so MSAC is a slightly improved version of RANSAC. The difference between MLESAC (Maximum Likelihood Estimation Sample Consensus) [4] and RANSAC lies in the step of checking whether the estimated model suits the data: instead of counting the inliers, MLESAC uses Maximum-Likelihood estimation to decide. Tordoff and Murray (Guided-MLESAC [5]) showed how a priori probabilistic information about the dataset can be used in the MLESAC algorithm to reduce the required number of iterations. PROSAC (Progressive Sample Consensus) [6] proposed an enhanced sampling algorithm instead of the purely random sampling of RANSAC: features are ranked according to the similarity scores of their descriptors, and samples are selected through this ranking. Raguram et al. [7] presented a comparison of RANSAC-based methods and proposed adaptively updating the total number of needed iterations by estimating the outlier ratio, which allows RANSAC to run faster. Senthilnath et al. [8] proposed a method to register images coming from different sensors using a genetic algorithm, employing multiple criteria to find the transformation between images. Similarly, Le et al. [9] proposed an efficient sampling technique using shape prior information for fitting a cylindrical object to a 3D point cloud using RANSAC. Both studies [8] and [9] form a basis for our work in this paper. The main disadvantage of all these methods is that they are iterative and probabilistic.
Although they perform well when the outlier ratio is low, they require many iterations as the outlier ratio grows. Marszalek and Rokita [10] proposed a method for establishing correspondences between astronomical images using a single invariant property, the distance ratio. In this paper, we extend their work by integrating the usage of different invariants and by introducing a confidence level measure to decide whether robust estimation can be skipped.
In this paper, our objective is to discuss the possible usage of geometrical invariants to enhance the feature-based image registration framework in unmanned scene mapping and to reduce the iterations needed by the probabilistic methods in the framework. The next section provides an overview of the planar transformations used in this study, while Section 3 details how the geometrical invariants are integrated into a feature-based image registration framework. Section 4 presents experimental results and Section 5 draws conclusions.

2. Overview of Used Planar Transformations

In this study, we focus on the 4-Degree-of-Freedom (DOF) similarity and 6-DOF affine motion models, represented by 2D planar transformations, as they provide sufficient DOFs for a robot moving in a relatively controlled environment with a down-looking camera whose optical axis is kept approximately perpendicular to the seabed.
Similarity Transformations 
A similarity transformation is a Euclidean transformation extended with scale. The rotation is around the optical axis, and scale, in our context, corresponds to altitude changes of the underwater robot. A similarity transformation can be decomposed as follows [3]:
H_S = \begin{bmatrix} sR & \mathbf{t} \\ \mathbf{0}^T & 1 \end{bmatrix}_{3\times 3}, \qquad sR = \underbrace{\begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix}}_{\text{Scaling}} \underbrace{\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}}_{\text{Rotation}} = \begin{bmatrix} s\cos\theta & -s\sin\theta \\ s\sin\theta & s\cos\theta \end{bmatrix},
where θ is the rotation angle, s is the uniform scaling on both the x- and y-axes, and t = (t_x, t_y) is the translation vector, making four DOFs in total.
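As a quick sanity check, a similarity built from these four parameters leaves ratios of lengths (and angles) unchanged, which is exactly the kind of property exploited later for filtering. A small NumPy sketch (function names are our own):

```python
import numpy as np

def similarity_matrix(s, theta, tx, ty):
    """4-DOF similarity H_S: uniform scale s, rotation theta, translation (tx, ty)."""
    c, si = np.cos(theta), np.sin(theta)
    return np.array([[s * c, -s * si, tx],
                     [s * si,  s * c, ty],
                     [0.0,     0.0,   1.0]])

def transform(H, pts):
    """Apply a 3x3 planar transform to an (N, 2) array of points."""
    ph = (H @ np.c_[pts, np.ones(len(pts))].T).T
    return ph[:, :2] / ph[:, 2:3]
```

Transforming a triangle with any (s, θ, t) and recomputing the ratio of two side lengths returns the same value as before the transform.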
Affine Transformations 
An affine transformation has six DOFs and it can be decomposed as follows [3]:
H_A = \begin{bmatrix} A & \mathbf{t} \\ \mathbf{0}^T & 1 \end{bmatrix}_{3\times 3},
where A is a 2 × 2 matrix encoding rotation and non-isotropic scaling, and t is the translation along the x- and y-axes. A can be decomposed further as follows:
A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \underbrace{R(\theta)}_{\text{Rotation}} \underbrace{R(-\phi) \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} R(\phi)}_{\text{Deformation}},

a_{11} = \frac{\lambda_1-\lambda_2}{2}\cos(2\phi-\theta) + \frac{\lambda_1+\lambda_2}{2}\cos\theta, \qquad
a_{12} = \frac{\lambda_2-\lambda_1}{2}\sin(2\phi-\theta) - \frac{\lambda_1+\lambda_2}{2}\sin\theta,

a_{21} = \frac{\lambda_2-\lambda_1}{2}\sin(2\phi-\theta) + \frac{\lambda_1+\lambda_2}{2}\sin\theta, \qquad
a_{22} = \frac{\lambda_2-\lambda_1}{2}\cos(2\phi-\theta) + \frac{\lambda_1+\lambda_2}{2}\cos\theta.
A visual illustration of the rotation and deformation components is given in Figure 2.
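The closed-form entries can be checked numerically against the matrix product A = R(θ) R(−ϕ) diag(λ1, λ2) R(ϕ). A verification sketch of ours (not part of the paper):

```python
import numpy as np

def R(t):
    """2x2 rotation matrix."""
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def affine_A(l1, l2, theta, phi):
    """Deformation-based decomposition A = R(theta) R(-phi) diag(l1, l2) R(phi)."""
    return R(theta) @ R(-phi) @ np.diag([l1, l2]) @ R(phi)

def affine_A_closed_form(l1, l2, theta, phi):
    """Entry-wise formulas of the same decomposition."""
    sp, sm = (l1 + l2) / 2.0, (l1 - l2) / 2.0
    return np.array([
        [ sm * np.cos(2*phi - theta) + sp * np.cos(theta),
         -sm * np.sin(2*phi - theta) - sp * np.sin(theta)],
        [-sm * np.sin(2*phi - theta) + sp * np.sin(theta),
         -sm * np.cos(2*phi - theta) + sp * np.cos(theta)]])
```

Both constructions agree for arbitrary parameters, including those listed in Table A1.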

3. Geometrical Invariant Extraction from Overlapping Images

The number of geometrical invariants, their properties, and hints on obtaining them from images are discussed in [3,10,11,12]. In this paper, we focus on obtaining geometric invariants from the extracted features and the putative correspondences initialized by descriptor matching. The obtained geometric invariants are used to filter the correspondences before applying robust estimation. The integrated pipeline of feature-based image registration is illustrated in Figure 3. We use the following geometric invariants: ratio of lengths, angle, ratio of areas and parallelism, as they are relatively easy to obtain and their computational costs are relatively low. After matching feature descriptors, we have a list of matched point positions {}^A p_i = ({}^A x_i, {}^A y_i, 1) in image A and {}^B p_i = ({}^B x_i, {}^B y_i, 1) in image B, where i = 1, 2, 3, …, n and n is the total number of correspondences.
Ratio of Lengths 
We compute the distance between all matched points in the same image:
{}^A d_{i,j} = \| {}^A p_i - {}^A p_j \| \quad \text{and} \quad {}^B d_{i,j} = \| {}^B p_i - {}^B p_j \|,
where i = 1 , 2 , , n 1 and j = i + 1 , , n . Since we are interested in the ratio of lengths, we compute r A B ( i , j ) = A d i , j B d i , j . If the feature matching was free of outliers, the computed r values would be the same and/or very close to each other. Some extreme values are filtered out (e.g., r < 0.1 r > 10 ) assuming that the ratio does not have such extreme values. In order to find a ratio of lengths, we compute the median of r values. As r values may suffer truncation and/or rounding errors, we repeat the same for r B A = 1 / r A B values in order to verify. Once median of r A B and r B A values are found, we select all of the feature points ( i , j ) that provide the ratio value in a small neighborhood of selected r value (e.g., [ 0.95 × r , 1.05 × r ] ). Then, we sort the selected feature points according to their number of appearance in descending order and keep the first m of them. If the m is too small then, in the next step of the pipeline, robust estimation might fail due to not being able to find enough inliers, especially for the cases where outlier ratio is bigger than inlier ratio. If the value of m is close to the total number of correspondences (n), the total number of iteration in the robust estimation would be the same as using all correspondences and there would be no benefit of using the proposed filtering step. To decide m, we use some descriptive statistical measures (e.g., mean and standard deviation) to choose the threshold and keep the ones that appeared more than the threshold (e.g., m e a n + 2 × s t a n d a r d d e v i a t i o n ). We also compute a confidence level as a ratio of number of entries in [ 0.95 × r , 1.05 × r ] and all possible r values after eliminating the extreme values.
Angle 
To use this geometric invariant, we use triangles, similar to [10]. Since descriptor matching provides putative correspondences, we apply Delaunay triangulation to the correspondences in one image. Triangles are formed by feature positions, e.g., {}^A p_i, {}^A p_j and {}^A p_k, and by their correspondences in the second image, {}^B p_i, {}^B p_j and {}^B p_k. The error between the angles of corresponding triangles is computed as follows:
e_{ijk} = \left| \angle\, {}^A p_i {}^A p_j {}^A p_k - \angle\, {}^B p_i {}^B p_j {}^B p_k \right| + \left| \angle\, {}^A p_i {}^A p_k {}^A p_j - \angle\, {}^B p_i {}^B p_k {}^B p_j \right| + \left| \angle\, {}^A p_j {}^A p_i {}^A p_k - \angle\, {}^B p_j {}^B p_i {}^B p_k \right|,
where (i, j, k) are the correspondence indices forming a triangle, and the angles can be computed using either the law of cosines or the difference of the angles of two intersecting line segments. Correspondence indices whose error value in Equation (5) is smaller than a certain threshold are selected; we use 5 degrees. We sort the features according to their number of appearances in triangles satisfying the error threshold and keep the first m of them. The confidence level is computed similarly, as the ratio of the number of satisfying triangles to the total number of triangles used.
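The angle check can be sketched as below. For brevity we compute the interior angles with the law of cosines and take the triangle index triples directly, whereas the paper obtains them from a Delaunay triangulation (function names are ours):

```python
import numpy as np

def tri_angles(p, q, r):
    """Interior angles (radians) of triangle pqr via the law of cosines."""
    a = np.linalg.norm(q - r)  # side opposite p
    b = np.linalg.norm(p - r)  # side opposite q
    c = np.linalg.norm(p - q)  # side opposite r
    A = np.arccos(np.clip((b*b + c*c - a*a) / (2*b*c), -1.0, 1.0))
    B = np.arccos(np.clip((a*a + c*c - b*b) / (2*a*c), -1.0, 1.0))
    return A, B, np.pi - A - B

def angle_error(pA, pB, tri):
    """Sum of absolute differences between corresponding triangle angles."""
    aA = tri_angles(*(pA[t] for t in tri))
    aB = tri_angles(*(pB[t] for t in tri))
    return sum(abs(x - y) for x, y in zip(aA, aB))
```

Under a similarity transform the error is numerically zero; moving one vertex (an outlier) makes it large.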
Ratio of Areas 
Similar to the angle invariant, we form triangles and compute the ratio of the areas of corresponding triangles. Afterward, we follow steps similar to those of the ratio of lengths.
Parallelism 
Parallel lines stay parallel after a transformation (except under the projective model). To use this invariant, we compute the angles of all line segments between the features extracted from each image separately and group them into bins of 2° over [−π, π]. Each group is therefore composed of line segments that are approximately parallel. For each group, we check whether the segments are also parallel in the second image. If they are, the points composing the line segments are kept in a list, and at the end they are ranked according to the total number of parallel line segments in which they were involved. Similarly, we select the first m entries from the list.
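A brute-force version of the parallelism vote might look like the following (our own unoptimized sketch: it checks every pair of segment pairs, which is only feasible for small n; the paper's binning reduces this cost):

```python
import numpy as np
from itertools import combinations

def seg_angle(p, q):
    """Orientation of segment pq, folded into [0, pi)."""
    d = q - p
    return np.arctan2(d[1], d[0]) % np.pi

def parallel_votes(pA, pB, tol=np.deg2rad(2.0)):
    """Vote for the endpoints of segment pairs that are (approximately)
    parallel in both images; outliers tend to break parallelism in B."""
    n = len(pA)
    segs = list(combinations(range(n), 2))
    angA = np.array([seg_angle(pA[i], pA[j]) for i, j in segs])
    angB = np.array([seg_angle(pB[i], pB[j]) for i, j in segs])
    votes = np.zeros(n)
    for s, t in combinations(range(len(segs)), 2):
        dA = abs(angA[s] - angA[t]); dA = min(dA, np.pi - dA)
        dB = abs(angB[s] - angB[t]); dB = min(dB, np.pi - dB)
        if dA < tol and dB < tol:            # parallel in both images
            for k in (*segs[s], *segs[t]):
                votes[k] += 1
    return votes
```

On an affine-transformed grid, every inlier participates in many preserved parallel pairs, while a displaced point collects none.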
Confidence Level 
The confidence level measure is motivated by the question: "How many of the areas/lines, out of all possibilities, follow the selected value of the corresponding geometric invariant?" Let us assume that n putative correspondences are identified by the descriptor matching process and that o ∈ [0, 1] is the outlier ratio. The total number of inliers is then n_in = n × (1 − o). Assuming that no three points are collinear, the ratio of the number of triangles formed purely by inliers to the number formed by all points can be calculated using the combination formula:
Area_{ratio} = \frac{\binom{n_{in}}{3}}{\binom{n}{3}} = \frac{n_{in}(n_{in}-1)(n_{in}-2)}{n(n-1)(n-2)} < (1-o)^3.
Similarly, the ratio of lengths can be computed as follows:
Line_{ratio} = \frac{\binom{n_{in}}{2}}{\binom{n}{2}} = \frac{n_{in}(n_{in}-1)}{n(n-1)} < (1-o)^2.
From our experiments, in the presence of at most 50–60% outliers, the geometrical invariant computations are safe to continue without robust estimators. In our experiments with real images, we use the upper-limit values computed with o = 0.6 for each geometric invariant. If the computed confidence level is greater than the upper limit computed with o = 0.6, the outlier ratio is likely to be less than 0.6.
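The two bounds above can be evaluated directly; for instance, at n = 100 and o = 0.6, only about 6% of all triangles and about 16% of all segment pairs are formed purely by inliers (a quick check of ours):

```python
from math import comb

def inlier_triangle_fraction(n, o):
    """Fraction of all point triples consisting purely of inliers."""
    n_in = round(n * (1 - o))
    return comb(n_in, 3) / comb(n, 3)

def inlier_pair_fraction(n, o):
    """Fraction of all point pairs consisting purely of inliers."""
    n_in = round(n * (1 - o))
    return comb(n_in, 2) / comb(n, 2)
```

Both fractions fall strictly below (1 − o)^3 and (1 − o)^2, respectively, matching the stated upper limits.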

4. Experimental Results

Real data from underwater images [13] and extensive simulations with synthetic data were used to test and validate the idea of retrieving geometrical invariant(s) from images and using them to eliminate and/or reduce the total number of outliers. During the experiments, we assume that there is a single common motion that can be represented as a 2D planar similarity or affine transformation.

4.1. Experiment 1

In this experiment, we aim to show that geometric invariants can be obtained from correspondences even when there is a high level of outliers. We detected features using the Scale Invariant Feature Transform (SIFT) [14] in an image and applied a set of 20 extreme transformations [15] (details are provided in Table A1 in Appendix A) to generate their correspondences in the second image. For each transformation, we created different numbers of outliers by assigning feature points randomly and tested our proposal's ability to recover the transformation applied initially. Our test flow is presented in an algorithmic way in Algorithm 1. We provide as inputs H_i, the outlier ratios o = [0.75, 0.80, 0.85, 0.90] and the number of random trials, nbrRndTrials = 1000. We repeated the tests for different total numbers of correspondences: 100, 250 and 500. The results of the random trials over the different outlier ratios for each transformation, using the three different total numbers of correspondences, are summarized in Table 1. The first two columns give the total number of correspondences used and the transformation, while the remaining columns report, for each tested outlier ratio, the number of random trials in which RANSAC with and without our proposal obtained the correct transformation, and the average number of RANSAC iterations. The maximum number of RANSAC iterations was set to 1000, and the total number of iterations was adaptively updated during the RANSAC iterations to ensure the probability of picking an outlier-free sample [16]. Since the data are noise-free, we used a small threshold (e.g., 0.5) in RANSAC to decide whether a correspondence is an outlier. This ensures that the outcome of the process is precisely the same as the initial transformation used to generate the correspondences. From Table 1, our proposal was able to recover the correct transformation successfully.
As the proposal filters the correspondence list by eliminating outliers, the total number of RANSAC iterations needed drops drastically over all tested outlier ratios. Its success ratio is higher than running RANSAC over all correspondences for the higher outlier ratios, especially for 90%. Again from Table 1, for an outlier ratio of 90%, increasing the total number of correspondences improved the performance of our proposal (the "with" column), while the total number of successful trials in the "without" column remained at approximately the same level. This suggests that the total number of inliers is more important than their ratio in our approach.
Algorithm 1: Experimental Simulations
(Pseudocode shown as an image in the original article.)
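Based on the description above, the data-generation step of each simulation trial (apply a known H, then replace a fraction of the matches with random points) can be sketched as follows. This is a reconstruction of ours, since Algorithm 1 appears only as an image in the original:

```python
import numpy as np

def make_trial(points, H, outlier_ratio, rng):
    """One simulation trial: map (N, 2) feature points through the 3x3
    planar transform H, then turn a fraction of them into outliers by
    replacing their positions with random ones."""
    n = len(points)
    ph = (H @ np.c_[points, np.ones(n)].T).T
    dst = ph[:, :2] / ph[:, 2:3]
    n_out = int(round(n * outlier_ratio))
    out_idx = rng.choice(n, n_out, replace=False)
    dst[out_idx] = rng.uniform(points.min(), points.max(), (n_out, 2))
    is_inlier = np.ones(n, dtype=bool)
    is_inlier[out_idx] = False
    return dst, is_inlier
```

Each trial then runs the registration pipeline with and without the invariant-based filtering and records the success and the number of RANSAC iterations.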
We repeated Algorithm 1 with a different outlier-generation step (see Algorithm 2). For this experiment, we added zero-mean, normally distributed noise with three different levels of σ (2.5, 5 and 10) to a certain number of feature point positions in one image to generate outliers. The results of this experiment are presented in Table 2, Table 3 and Table 4. The first three columns give the total number of correspondences used, the noise (σ) value and the transformation, while the remaining columns report, for each tested outlier ratio, the number of random trials in which RANSAC with and without our proposal obtained the correct transformation and the average number of RANSAC iterations. As the proposed method is mainly based on finding a peak in a histogram, the total number of inliers is more important and has a direct effect on the results. In RANSAC-based methods, both the outlier ratio and the total number of correspondences are important, as the probability of selecting at least one outlier-free sample is a direct outcome of these values. This probability can be calculated as follows:
p = \prod_{i=0}^{s-1} \frac{n_{in}-i}{n-i}, \qquad P = 1 - (1-p)^k,
where n_in is the total number of inliers, n is the total number of correspondences, s is the sample size (two or three: two is the minimum number of points needed to compute a similarity transformation, while three are needed for an affine transformation), p is the probability of a sample drawn from the correspondences being outlier-free, and k is the total number of random trials in RANSAC-based methods. By setting k big enough, one could argue that RANSAC-based methods would choose at least one outlier-free sample. However, the bigger the k, the bigger the computational cost, which may not be suitable for real-time applications with low computational resources available on board. From the experimental results, our approach was overall able to filter the correspondences and reduce the total number of outliers before applying robust estimation methods. From Table 2, in the presence of 90% outliers and noise σ = 2.5, the total number of successful trials with the proposed approach is smaller than with the higher noise values for most of the tested transformations (both σ = 5 and σ = 10), while the number of successful trials without our proposal decreases for high noise values (see Figure 4). For the three tested transformations, in the presence of low-level noise (σ = 2.5), using our approach did not provide better results than running RANSAC directly. This is mainly because the obtained values of the geometrical invariants are sensitive to such small perturbations, which makes it difficult to distinguish the correct value. In the cases with higher noise, the correct geometrical values were easily spotted in the histograms. In such situations, we observed that increasing the precision would help.
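Solving P ≥ conf for k, with the common approximation p ≈ w^s where w is the inlier ratio, gives the familiar RANSAC iteration-count formula (a sketch of ours):

```python
import math

def ransac_trials(inlier_ratio, sample_size, confidence=0.99):
    """Number of trials k such that the probability of drawing at least
    one all-inlier sample, 1 - (1 - p)^k with p = inlier_ratio**sample_size,
    reaches the requested confidence."""
    p = inlier_ratio ** sample_size
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))
```

For example, a 50% inlier ratio with 3-point samples needs only a few dozen trials at 99% confidence, whereas a 10% inlier ratio needs thousands, which is exactly why filtering outliers before robust estimation pays off.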
Algorithm 2: Experimental Simulations
(Pseudocode shown as an image in the original article.)
Although our implementation is not optimized, its computational burden is low and stays within the time saved on RANSAC iterations. The time saving can be coarsely estimated by comparing the iteration counts needed in RANSAC. If the confidence value is low, we also form additional triangles using bucketing [17]. Feature points are grouped into a 10 × 10 grid of uniformly distributed cells, and one feature is randomly picked from each cell and used as a corner for forming triangles. The total number of triangles can be at most $\binom{100}{3}$, assuming that each cell contains at least one feature point and that the selected features are not collinear. Using all possible triangles would be best; however, this would bring a prohibitively large computational burden. The total number of triangles used can be adjusted depending on the available computational resources. We also observed that increasing the size m of the selected correspondences in the sorted list resulting from the geometrical invariant computation can improve the result; however, this also increases the total number of iterations needed in RANSAC.
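The bucketing step can be sketched as picking one feature per occupied cell of a 10 × 10 grid over the features' bounding box (our own minimal version; the grid size and tie-breaking are implementation choices):

```python
import numpy as np

def bucket_sample(points, grid=10, rng=None):
    """Return the index of one randomly chosen feature per occupied cell
    of a grid x grid bucketing of the points' bounding box."""
    rng = np.random.default_rng(rng)
    lo = points.min(axis=0)
    span = np.ptp(points, axis=0) + 1e-9          # avoid division by zero
    cells = np.minimum(((points - lo) / span * grid).astype(int), grid - 1)
    picked = []
    for cell in sorted({tuple(c) for c in cells.tolist()}):
        members = np.flatnonzero((cells == cell).all(axis=1))
        picked.append(int(rng.choice(members)))
    return np.array(picked)
```

Triangles formed from the sampled corners are then well spread over the image, which makes the invariant histograms more stable.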

4.2. Experiment 2

In this experiment, we tested the proposed approach on challenging real images of the seabed, captured by an underwater robot while surveying a coral reef patch [13]. Some samples are given in Figure 5. SIFT [14] was employed to extract and match features. Afterward, we ran our proposal to filter out outliers using the geometrical invariants. We ran the RANSAC algorithm 1000 times over the initially matched features (referred to as "without" in Table 5) and over the matched features remaining after applying our proposal (referred to as "with" in Table 5). During the experiment, the maximum number of RANSAC iterations was also limited to 1000 (max. k = 1000 in Equation (8)). The total number of RANSAC iterations and the total number of inliers computed using the motion resulting from the RANSAC steps are given in Table 5. The threshold in RANSAC for deciding whether a correspondence is an inlier was set to 2.5 pixels, and errors in one image were minimized to estimate the motion [3]. For the image pairs in gray rows, the confidence level was greater than the selected threshold; therefore, our algorithm was able to skip the robust estimation part, and the total numbers of inliers were 89, 268 and 275, respectively. For comparison purposes, we also provide the results of applying RANSAC for those image pairs in the table. The average numbers of RANSAC iterations are given in Figure 6.
As mentioned before, increasing the total number of inliers improves the performance of our proposal. To test this claim on real data, we resized the images of the pairs with fewer than 200 correspondences (image pairs 2, 17, 19, 21, 23 and 24 in Table 5) by a scaling factor of 1.5, expecting to increase the total number of detected correspondences and inliers. We ran our proposal on the newly detected correspondences; the results are presented in Table 6. It can be observed that the total number of inliers increased, as expected from the increase in the total number of correspondences. However, this is not always guaranteed, as can be seen with image pair 24: our approach failed to recover stable geometrical invariants due to the low number of inliers. For pairs 2, 17, 19 and 23, the average number of RANSAC iterations was reduced compared to Table 5. This results from the expected increase in the total number of inliers along with the total number of correspondences. One could argue that applying a more aggressive threshold in descriptor matching could improve the inlier ratio. This would also be favorable for our approach, as it would then be more likely to skip the robust estimation step due to a higher confidence value.

5. Conclusions

Camera-carrying Unmanned Underwater Vehicles have been widely used for different purposes, such as inspection, mapping, sample collection, and many others. When these vehicles do not carry a wide variety of sensors, image (or video) data are the only source of information, and image matching (or registration) is one of the most fundamental steps of optical mapping, navigation and localization. Since the distinctive point extraction and matching steps produce some correspondences that do not obey the common motion of the camera, the image registration pipeline employs robust estimation methods to remove them. In this paper, we presented a method for identifying geometrical invariants to filter out outliers before the robust estimation step, aiming to reduce the time needed by robust estimation and to improve its performance. We also discussed cases in which robust estimation can be omitted entirely. We provided experimental results with both synthetic and real data to show the efficiency and limitations of the proposed method.

Author Contributions

Conceptualization, A.E. and N.Y.C.; software and validation, A.E.; writing—original draft preparation, A.E.; writing—review and editing, N.Y.C.

Funding

This research received no external funding, and the APC was funded by the JAIST faculty research budget.

Acknowledgments

The authors would like to thank Nuno Gracias from the Computer Vision and Robotics Group of the University of Girona for providing the dataset used in the experiments. Figure 2 is reproduced from [3] with the permission of Cambridge University Press through PLSclear (Ref. No. 14046), obtained on 10 May 2019.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DOF: Degree of Freedom
RANSAC: Random Sampling Consensus
MSAC: M-estimator Sample Consensus
MLESAC: Maximum Likelihood Estimator Sample Consensus
PROSAC: Progressive Sample Consensus
SIFT: Scale Invariant Feature Transform

Appendix A. List of Transformations Used in Experiments

For the experiments, we used 20 different affine transformations [15]. For some of the projective-model transformations, we manually set the (3, 1) and (3, 2) elements of their matrices to 0. Following the decomposition in Equation (2), the parameter vectors [λ1, λ2, θ, ϕ] of the transformations used in the experiments are listed in Table A1. Warped versions of a sample image under these transformations are shown in Table A2.
Table A1. Transformation parameters used in experiments.
Transformation | λ1 | λ2 | θ (rad) | ϕ (rad)
1 | 0.89 | 0.88 | −0.24 | 0.97
2 | 0.74 | 0.73 | −0.69 | −2.51
3 | 0.54 | 0.53 | −1.39 | −2.45
4 | 0.43 | 0.42 | 0.15 | −3.06
5 | 0.39 | 0.33 | −0.71 | −2.88
6 | 0.82 | 0.82 | −0.55 | −2.36
7 | 0.57 | 0.52 | 2.62 | −0.25
8 | 0.41 | 0.40 | −2.09 | −0.61
9 | 0.33 | 0.33 | −0.40 | 1.85
10 | 0.26 | 0.23 | 2.68 | 0.10
11 | 1.01 | 0.87 | −0.27 | 1.13
12 | 1.07 | 0.82 | 0.34 | 1.33
13 | 1.22 | 0.61 | −0.47 | 1.28
14 | 1.20 | 0.60 | 0.09 | 1.29
15 | 1.23 | 0.60 | 0.66 | 1.43
16 | 0.92 | 0.79 | −0.02 | 1.63
17 | 0.90 | 0.67 | −0.04 | 1.66
18 | 0.88 | 0.55 | −0.05 | 1.67
19 | 0.87 | 0.41 | −0.09 | 1.68
20 | 0.91 | 0.26 | −0.11 | 1.68
Table A2. Warped images with transformations used in Experiment 1. The numbers below each image represent the transformation parameters listed in Table A1.
(Twenty warped versions of the sample image, numbered 1–20 to match Table A1, are shown in the original article.)

References

  1. Zitová, B.; Flusser, J. Image registration methods: A survey. Image Vis. Comput. 2003, 21, 977–1000.
  2. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
  3. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004.
  4. Torr, P.; Zisserman, A. MLESAC: A New Robust Estimator with Application to Estimating Image Geometry. Comput. Vis. Image Underst. 2000, 78, 138–156.
  5. Tordoff, B.J.; Murray, D.W. Guided-MLESAC: Faster Image Transform Estimation by Using Matching Priors. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1523–1535.
  6. Chum, O.; Matas, J. Matching with PROSAC. Proc. Comput. Vis. Pattern Recognit. 2005, 1, 220–226.
  7. Raguram, R.; Frahm, J.-M.; Pollefeys, M. A Comparative Analysis of RANSAC Techniques Leading to Adaptive Real-Time Random Sample Consensus. In Computer Vision—ECCV 2008; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; Volume 5303, pp. 500–513.
  8. Senthilnath, J.; Kalro, N.P.; Benediktsson, J.A. Accurate point matching based on multi-objective Genetic Algorithm for multi-sensor satellite imagery. Appl. Math. Comput. 2014, 236, 546–564.
  9. Le, V.-H.; Vu, H.; Nguyen, T.T.; Le, T.-L.; Tran, T.-H. Acquiring qualified samples for RANSAC using geometrical constraints. Pattern Recognit. Lett. 2018, 102, 58–66.
  10. Marszalek, M.; Rokita, P. Pattern matching with differential voting and median transformation derivation improved point-pattern matching algorithm for two-dimensional coordinate lists. Comput. Imaging Vis. 2006, 32, 1002–1007.
  11. Reiss, T.H. Recognizing Planar Objects Using Invariant Image Features; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 1993; Volume 676.
  12. Byer, O.; Lazebnik, F.; Smeltzer, D.L. Methods for Euclidean Geometry; Mathematical Association of America: Washington, DC, USA, 2010.
  13. Gracias, N.; Negahdaripour, S. Underwater mosaic creation using video sequences from different altitudes. In Proceedings of the OCEANS 2005 MTS/IEEE, Washington, DC, USA, 17–23 September 2005; Volume 2, pp. 1295–1300.
  14. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  15. Datasets from Visual Geometry Group, Department of Engineering Science, University of Oxford. Available online: http://www.robots.ox.ac.uk/~vgg/data/data-aff.html (accessed on 30 November 2018).
  16. Kovesi, P.D. MATLAB and Octave Functions for Computer Vision and Image Processing. Available online: http://www.peterkovesi.com/matlabfns/ (accessed on 30 November 2018).
  17. Choukroun, A.; Charvillat, V. Bucketing Techniques in Robust Regression for Computer Vision; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2749, pp. 609–616.
Figure 1. Example of extracted and matched features between images with a relatively large scale difference. RANSAC would be applied to remove outliers; however, it is likely to fail due to the high proportion of outliers.
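The difficulty illustrated in Figure 1 follows directly from the standard RANSAC trial-count estimate: the number of samples needed to see at least one all-inlier sample grows sharply as the outlier ratio rises. A minimal sketch of that estimate (illustrative only, not code from the paper):

```python
import math

def ransac_trials(inlier_ratio, sample_size, confidence=0.99):
    """Number of RANSAC trials needed so that, with probability
    `confidence`, at least one drawn sample contains only inliers."""
    p_good = inlier_ratio ** sample_size  # P(a random sample is all inliers)
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_good))

# A planar homography needs 4 point correspondences per sample:
# roughly 72 trials suffice at 50% outliers, but ~46,000 at 90%.
for outlier_ratio in (0.50, 0.75, 0.90):
    print(f"{outlier_ratio:.0%} outliers -> {ransac_trials(1.0 - outlier_ratio, 4)} trials")
```

This is why pre-filtering outliers before the iterative stage, as proposed in the paper, pays off most at high outlier ratios.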
Figure 2. (a) Rotation by θ; (b) deformation by orthogonal scaling and ϕ.
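The rotation and orthogonal-scaling deformation of Figure 2 correspond to the standard decomposition of a 2 × 2 affine matrix, A = R(θ) R(−ϕ) D R(ϕ) with D = diag(λ1, λ2). A minimal sketch (illustrative, not the paper's code) that composes such a transform and checks one simple geometric invariant, the area-scaling factor det A = λ1 λ2, which is independent of both angles:

```python
import math

def rot(t):
    """2x2 rotation matrix by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s], [s, c]]

def matmul(a, b):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def affine_2x2(theta, phi, lam1, lam2):
    """A = R(theta) R(-phi) diag(lam1, lam2) R(phi): rotation by theta
    combined with scaling by (lam1, lam2) along axes rotated by phi."""
    d = [[lam1, 0.0], [0.0, lam2]]
    return matmul(matmul(rot(theta), rot(-phi)), matmul(d, rot(phi)))

def det(a):
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

# The determinant depends only on lam1 * lam2, not on theta or phi.
A = affine_2x2(theta=0.4, phi=1.1, lam1=2.0, lam2=0.5)
print(round(det(A), 6))  # lam1 * lam2 = 1.0
```

Invariants of this kind, unchanged by the unknown rotation angles, are what allow correspondences to be tested without first estimating the transformation.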
Figure 3. Feature-based image registration pipeline enhanced with geometric invariants.
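The pipeline of Figure 3 inserts an invariant-based test on the tentative matches before the iterative estimation stage. As an illustration only (the specific invariant, voting rule, and thresholds below are our assumptions, not the paper's), one can exploit the fact that a similarity transform multiplies every pairwise distance by one global factor, so distance ratios between the two images form an invariant that pre-filters matches before RANSAC:

```python
import math
import random
from statistics import median

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def prefilter_similarity(src, dst, tol=0.2, samples=200, seed=0):
    """Keep tentative matches consistent with one global scale factor.
    Illustrative sketch: `tol`, `samples`, and the majority-vote rule
    are arbitrary choices, not values from the paper."""
    rng = random.Random(seed)
    n = len(src)
    pairs = [(rng.randrange(n), rng.randrange(n)) for _ in range(samples)]
    ratios = [dist(dst[i], dst[j]) / dist(src[i], src[j])
              for i, j in pairs if i != j and dist(src[i], src[j]) > 1e-9]
    s = median(ratios)  # robust estimate of the global scale
    keep = []
    for k in range(n):
        votes = 0
        for j in range(n):
            if j == k:
                continue
            d_src = dist(src[k], src[j])
            if d_src < 1e-9:
                continue
            # does match k scale consistently with match j?
            if abs(dist(dst[k], dst[j]) - s * d_src) <= tol * s * d_src:
                votes += 1
        if votes >= (n - 1) // 2:  # consistent with a majority of matches
            keep.append(k)
    return keep

# Tiny demo: matches 0-8 follow one similarity transform (scale 2,
# rotation 30 degrees, translation (5, -3)); match 9 is a gross mismatch.
src = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5),
       (20, 5), (5, 20), (18, 12), (3, 7), (14, 3)]
co, si = math.cos(math.pi / 6), math.sin(math.pi / 6)
dst = [(2 * (x * co - y * si) + 5, 2 * (x * si + y * co) - 3) for x, y in src]
dst[9] = (1000.0, 1000.0)  # corrupt the last match
print(prefilter_similarity(src, dst))  # -> [0, 1, 2, 3, 4, 5, 6, 7, 8]
```

A pre-filter of this kind reduces the outlier ratio seen by RANSAC, which, by the trial-count formula, cuts the number of iterations it needs.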
Figure 4. Total number of successful trials for each tested transformation at a 90% outlier ratio. Increasing the noise level improved the performance of the proposed method for most of the tested transformations, while it degraded that of RANSAC run without the proposed method.
Figure 5. Rows show sample image pairs used in the experiments, corresponding to Image Pairs 1, 3, 19 and 24 in Table 5. The image resolution is 512 × 384.
Figure 6. Average number of iterations in RANSAC with and without the proposed approach. For four of the image pairs, our proposal performs similarly on average, while for the remaining pairs it reduces the number of iterations needed in RANSAC.
Table 1. Experimental results from simulations outlined in Algorithm 1.
Columns: Number of Corresp. | Transformation | then, for each Outlier Ratio (75%, 80%, 85%, 90%): Number of Successful Trials (without | with the proposed method) and Average Number of Iterations in RANSAC (without | with).
100110001000438.871.099991000860.184.199391000100022.096319641000152.89
210001000438.721.7310001000860.136.749321000100026.966599541000176.96
310001000438.601.4810001000859.566.279431000100026.736479601000156.50
410001000438.961.349991000860.685.549431000100024.926299661000169.10
510001000438.771.529981000860.826.199271000100026.076319571000182.35
610001000438.641.699991000860.676.979381000100026.956499491000173.58
710001000438.831.559981000860.576.369461000100027.626309631000165.56
810001000438.611.419971000860.495.919361000100026.046159581000178.13
910001000438.882.029991000859.847.569311000100028.326399511000180.55
1010001000438.901.499991000860.006.459511000100026.006509611000167.25
1110001000438.711.639991000860.786.639291000100025.826329621000163.30
1210001000438.891.489991000860.336.339411000100025.436389661000162.66
1310001000438.851.879971000859.657.599621000100028.396449361000178.02
1410001000438.901.919991000860.887.499431000100028.166439551000171.39
1510001000438.861.769991000860.507.119431000100027.546359551000175.68
1610001000439.221.569981000860.776.519401000100026.796099631000173.69
1710001000438.841.4110001000860.296.079321000100025.426359691000160.51
1810001000438.811.5110001000860.086.129301000100026.266479571000172.25
1910001000438.831.7310001000860.537.109431000100027.626459631000165.88
2010001000439.062.789981000860.729.28937999100032.416329451000184.38
250110001000429.091.009991000859.871.00968100010001.03654100010008.54
210001000428.861.0110001000860.101.03971100010001.226421000100016.78
310001000428.861.0010001000860.331.01972100010001.166291000100015.93
410001000428.811.0010001000860.461.00955100010001.106321000100013.14
510001000428.901.009991000860.501.01958100010001.196331000100017.27
610001000428.881.039981000860.831.02965100010001.226321000100019.33
710001000428.961.0110001000860.251.02962100010001.15624999100017.91
810001000428.741.0010001000860.101.02952100010001.136431000100013.64
910001000428.851.019991000860.451.05964100010001.426221000100021.92
1010001000428.791.019991000860.341.02958100010001.176111000100018.21
1110001000428.691.0110001000860.131.02949100010001.236651000100015.70
1210001000428.621.0010001000860.151.01972100010001.126101000100014.29
1310001000428.821.0010001000860.451.05960100010001.826321000100028.41
1410001000428.671.0110001000859.851.02971100010001.65647999100025.02
1510001000428.761.0110001000860.301.03950100010001.446361000100023.54
1610001000428.801.0010001000860.251.02964100010001.186401000100014.33
1710001000428.781.0010001000860.101.02961100010001.156321000100013.39
1810001000428.811.009991000860.191.02970100010001.146341000100013.53
1910001000428.701.0010001000860.011.03971100010001.36660999100022.09
2010001000428.811.0210001000860.351.10953100010003.13653999100032.08
500110001000438.682.799991000860.263.34950100010005.966431000100030.36
210001000438.832.859991000860.233.53965100010008.956571000100035.72
310001000438.702.7610001000860.093.52963100010008.446521000100035.79
410001000439.002.8210001000860.153.32962100010007.21614999100034.25
510001000438.882.8710001000860.093.44961100010008.236351000100038.62
610001000438.902.849991000860.533.50958100010008.706291000100038.56
710001000438.732.8210001000860.123.59969100010008.576311000100037.47
810001000438.892.8110001000860.093.43962100010008.106671000100034.73
910001000438.752.8110001000860.043.65957100010009.536341000100039.19
1010001000439.132.7610001000860.413.51965100010008.726291000100036.94
1110001000438.682.8010001000860.043.46960100010008.546301000100036.04
1210001000438.792.7510001000860.503.47961100010007.946201000100035.04
1310001000438.602.8110001000860.263.61953100010009.626411000100039.41
1410001000438.812.8310001000860.223.60949100010009.866471000100039.41
1510001000438.732.7810001000860.283.56955100010009.266261000100037.94
1610001000438.622.8110001000860.253.43963100010008.046351000100034.24
1710001000438.732.839991000860.493.46968100010007.476541000100033.28
1810001000438.722.749991000860.443.50964100010007.366361000100034.14
1910001000438.792.809991000859.893.58959100010009.036331000100037.85
2010001000438.772.859991000860.344.119651000100012.906341000100041.79
Table 2. Experimental results from simulations obtained by adding different levels of noise to correspondences, for a total of 100 correspondences.
Columns: Number of Corresp. | Noise σ | Transformation | then, for each Outlier Ratio (75%, 80%, 85%, 90%): Number of Successful Trials (without | with the proposed method) and Average Number of Iterations in RANSAC (without | with).
1002.5110001000422.3817.121000998822.2422.25973979100066.446386971000247.77
210001000427.4726.689981000824.8945.24966990100075.146225751000403.28
310001000430.3815.4810001000839.9231.65952997100054.926487321000331.70
410001000432.2213.4610001000844.4424.24961998100050.046098521000262.85
510001000434.6214.179991000847.4423.59963998100051.146008611000281.51
61000998424.5326.901000990825.5848.639579531000134.356366011000365.58
710001000430.1215.801000999835.4134.53960995100065.386458321000245.39
810001000433.2717.5110001000845.1421.949531000100049.536208791000269.25
910001000434.1716.2410001000848.4026.469651000100044.746348231000318.33
1010001000436.6211.609991000854.0022.509611000100036.905849021000286.92
1110001000422.4726.58999992815.0340.119699531000103.61628595999353.94
1210001000422.3520.471000994813.6747.30966964100098.236526791000283.07
131000999423.8525.819991000824.0543.809649791000109.866176911000368.18
1410001000425.2321.80999998824.6040.449699701000117.166327431000328.18
1510001000424.0928.281000997826.2450.829679661000111.346496991000364.10
1610001000423.5323.23999995825.2238.66960963100094.46640736999280.46
171000999425.4621.621000996828.3131.28966984100082.946506861000294.11
181000999426.2618.5310001000831.0231.60978979100083.286367591000299.63
1910001000429.5920.749991000835.2535.59970990100071.296427751000336.23
2010001000432.0420.8010001000843.3931.85964996100082.865947721000438.50
1005110001000434.599.24999999850.6819.819451000100040.58594828999286.74
210001000435.7316.679991000852.0426.879491000100060.01556785999342.96
310001000436.7512.1310001000854.9320.639401000100052.165568881000272.93
410001000437.3311.0010001000857.5017.419571000100038.965348791000303.93
510001000438.099.9410001000858.3518.969561000100040.145549131000288.07
61000999435.2314.621000999853.4228.07949995100068.565598271000304.31
710001000436.6612.6210001000853.7322.03951998100048.925688811000285.86
810001000437.7311.4910001000857.9318.359361000100038.415619291000233.94
910001000437.8312.629981000858.4821.28949999100042.145589061000281.63
1010001000438.319.1710001000859.5118.789471000100037.645399081000262.63
1110001000434.5315.9310001000850.2127.40942997100058.085568051000315.21
1210001000434.9114.62997999851.5927.66960991100061.295628691000270.01
1310001000435.1617.539991000851.5431.34946999100063.625688381000358.61
1410001000435.3717.869991000852.9525.489391000100058.415728281000356.95
1510001000434.5819.039991000850.5929.86949995100072.935918551000325.13
1610001000435.4116.719981000851.0324.07955996100056.255518721000258.49
1710001000435.7615.4110001000853.3125.869541000100054.365688681000259.37
1810001000436.0613.2110001000854.2121.20950998100059.145708621000297.56
1910001000437.1713.3410001000853.1823.599501000100044.795648811000306.86
2010001000437.9715.3110001000856.7528.149421000100051.785838771000390.24
10010110001000438.258.769991000858.7319.299401000100035.935498951000277.67
210001000438.6512.0910001000859.0022.099491000100037.035408901000299.06
310001000438.419.7510001000859.7017.459371000100040.06541895999325.11
410001000438.398.0310001000860.0915.719491000100036.145309881000101.85
510001000438.526.9010001000860.2714.949371000100037.635378051000423.70
610001000438.3712.899991000859.4922.059561000100036.715229371000198.73
710001000439.229.089991000859.6119.649351000100036.975598931000308.48
810001000438.856.6710001000859.7917.879431000100036.005229301000255.70
910001000438.669.2410001000860.5020.309481000100037.635348771000328.77
1010001000439.066.9110001000860.3515.489431000100034.405089261000249.76
1110001000438.1211.089981000859.1521.569501000100046.945268991000283.15
1210001000437.8413.5110001000856.9019.149491000100039.725299371000240.97
1310001000438.6011.109991000858.7820.639471000100042.65514894999314.33
1410001000438.1212.289981000859.3921.909391000100040.075229721000179.03
1510001000438.0611.8410001000857.7021.959351000100039.195279231000271.82
1610001000438.1211.189991000858.9222.299511000100038.025199181000277.09
1710001000438.3112.0310001000859.1420.459421000100040.125399461000225.12
1810001000438.0010.329991000858.2719.629431000100039.505489101000277.68
1910001000438.8210.249991000859.8021.599541000100036.815479201000280.26
2010001000438.7110.7410001000860.1619.799331000100039.295458171000449.70
Table 3. Experimental results from simulations obtained by adding different levels of noise to correspondences, for a total of 250 correspondences.
Columns: Number of Corresp. | Noise σ | Transformation | then, for each Outlier Ratio (75%, 80%, 85%, 90%): Number of Successful Trials (without | with the proposed method) and Average Number of Iterations in RANSAC (without | with).
2502.5110001000433.287.7310001000819.8512.78983994100032.14733941100086.58
21000997435.9432.311000998824.4237.53978982100069.996968011000267.96
310001000440.3110.159991000837.2315.05970998100029.517118961000154.72
410001000442.378.229991000844.8812.339781000100024.636709641000113.48
510001000444.536.409981000847.3811.599681000100018.906719741000112.26
610001000433.3136.66999998822.7859.319699561000125.897127661000305.23
710001000440.809.2810001000836.1515.18979998100028.186909681000101.58
810001000443.208.2610001000844.5612.829631000100023.146829531000130.63
910001000444.9910.6010001000848.8815.65970999100031.746789721000115.78
1010001000446.775.4710001000853.808.719571000100020.626339821000109.07
1110001000432.1113.371000998816.1820.74985992100052.457178661000152.35
1210001000432.1711.1810001000816.9719.86985991100037.957269201000111.43
1310001000432.9311.0310001000818.9124.94984992100055.257269231000171.26
1410001000434.9018.871000998821.1220.56983991100047.007098871000185.86
1510001000435.2013.2910001000821.9927.17992993100047.117209061000179.29
1610001000433.8811.8410001000819.1717.55979992100043.377479071000137.29
171000999435.1211.6410001000825.5518.50984995100043.387098631000161.55
1810001000437.6510.331000999829.6615.99975994100031.887159621000103.18
1910001000440.2511.881000999834.0015.459811000100034.777309501000124.94
2010001000442.3610.0510001000840.7320.81977999100036.886559701000145.78
2505110001000446.025.6210001000850.438.983962998100015.56620991100066.05
210001000446.768.8810001000853.5116.2159691000100025.506119761000119.56
310001000447.965.7910001000855.349.1929601000100019.73594986100077.27
410001000448.234.5210001000856.726.9969521000100015.21634995100081.61
510001000448.784.1510001000857.396.4139681000100014.646241000100055.80
610001000445.8914.1110001000851.1119.3529671000100035.506349531000161.34
710001000447.805.649991000854.329.0149521000100016.34598990100074.82
810001000448.364.4810001000857.267.3889681000100013.89586996100054.56
910001000448.785.7210001000858.0210.1269491000100015.22580998100065.13
1010001000449.212.6010001000858.855.5849541000100012.99621994100072.94
1110001000445.5210.2010001000848.6512.948967998100024.806489531000129.81
1210001000445.227.8110001000848.9913.473971999100026.426399681000111.93
1310001000446.3010.4710001000850.7717.686974999100035.466279621000164.56
1410001000446.308.969991000850.1114.528959999100029.746609831000122.98
1510001000446.047.9910001000852.3515.486977998100028.236539771000116.60
1610001000445.778.109991000850.4913.9619621000100027.576639761000105.22
1710001000445.957.759991000853.2411.6819581000100020.57615974100093.35
1810001000446.977.3710001000852.2111.069631000100020.706259771000113.94
1910001000447.657.0810001000854.5212.4579601000100021.006529771000113.21
2010001000448.118.359991000857.0312.4639511000100025.876149921000100.31
25010110001000449.123.119991000858.175.369601000100012.62604999100050.60
210001000449.085.059991000858.848.759651000100016.60591992100075.63
310001000449.522.969981000859.585.589551000100013.35623994100058.33
410001000449.742.089981000860.163.86956100010009.17622993100093.89
510001000449.651.7710001000859.803.679581000100010.32598995100090.75
610001000449.585.869991000858.529.799551000100021.17604998100057.01
710001000449.373.0410001000859.705.539591000100012.815781000100041.49
810001000449.561.9410001000859.853.85954100010009.46594995100078.09
910001000449.662.419991000860.264.789421000100011.86619995100053.26
1010001000450.091.3110001000860.602.42962100010008.14615993100092.71
1110001000449.145.8410001000858.219.689651000100015.63602999100050.76
1210001000449.175.5710001000857.869.089551000100016.88620996100068.86
1310001000449.045.9010001000858.3011.069591000100019.495959881000117.74
1410001000449.266.1310001000858.4910.959541000100018.516379861000123.33
1510001000448.805.379981000858.779.779571000100021.27592999100054.57
1610001000449.274.1010001000858.498.179601000100015.94599992100080.88
1710001000449.084.0210001000858.207.509561000100015.94604994100077.48
1810001000449.314.0410001000858.707.699591000100015.51611996100071.31
1910001000449.563.969981000859.786.549571000100015.68585999100062.72
2010001000449.544.079991000859.797.559451000100016.71543998100044.11
Table 4. Experimental results from simulations obtained by adding different levels of noise to correspondences, for a total of 500 correspondences.
Columns: Number of Corresp. | Noise σ | Transformation | then, for each Outlier Ratio (75%, 80%, 85%, 90%): Number of Successful Trials (without | with the proposed method) and Average Number of Iterations in RANSAC (without | with).
5002.511000999423.7911.5810001000816.4615.84994996100030.477949311000119.65
210001000425.4818.0410001000824.9328.88995992100059.547998991000202.91
310001000430.1711.7510001000836.8623.05991999100028.797599741000112.30
410001000432.2110.949991000843.6114.809841000100028.147469821000101.92
510001000433.708.6610001000846.7017.75978999100029.09699981100087.64
61000999423.4625.101000998821.9639.43996990100066.678128521000222.88
710001000430.4212.4110001000837.6017.33990999100029.887779741000124.03
810001000433.1911.3610001000843.5115.259861000100030.497349761000107.25
910001000434.3912.8410001000848.5118.139851000100032.576949721000125.78
1010001000436.367.0110001000853.7812.939761000100018.816649841000114.13
111000999421.4614.131000999815.8226.83992994100045.398238701000157.58
1210001000421.6111.821000997815.7220.49998991100049.128219071000141.68
131000999423.5418.671000993822.3630.44996981100064.868058951000151.15
1410001000424.0217.3910001000819.9030.34990989100059.798248791000182.19
1510001000424.6015.581000999821.4727.16993994100046.538008981000187.97
1610001000423.4214.891000996818.5524.88994991100043.727929231000143.92
1710001000425.1711.681000999823.2423.64985996100042.868189351000130.19
1810001000426.4214.0410001000828.4920.46994998100033.297869311000129.03
1910001000429.2814.1410001000832.8821.06991999100037.417649561000135.25
2010001000431.8915.061000999842.1221.52983998100038.557449211000233.27
5005110001000434.937.7810001000850.3313.999811000100024.377209711000120.87
210001000435.9910.9310001000851.8819.459671000100027.31676979100097.61
310001000436.627.5110001000855.0413.359701000100023.286819841000111.50
410001000437.566.1210001000856.5912.469811000100020.62654996100064.90
510001000438.167.4510001000858.0412.509761000100019.43664993100068.05
610001000435.1814.5710001000851.1321.669711000100034.076599551000168.24
710001000436.986.2610001000854.5012.439721000100024.27637997100071.15
810001000437.506.849991000856.7611.539811000100020.59651999100054.86
910001000437.789.9710001000857.8213.459681000100024.37657984100099.33
1010001000438.346.379991000859.3111.549741000100019.00605991100082.95
1110001000434.329.5710001000849.6118.56985999100036.726779731000126.36
1210001000434.7712.0210001000848.4716.569771000100036.066899621000141.92
1310001000435.4515.0310001000850.4422.309831000100039.986729821000132.29
1410001000435.3011.2110001000851.3818.609761000100036.106809581000164.39
1510001000434.8910.8910001000850.9118.559831000100034.466729651000151.29
1610001000435.2111.1510001000850.6921.00987999100032.546989821000105.75
1710001000435.3711.0810001000851.8917.749851000100024.616839811000112.17
1810001000436.0010.2010001000852.6212.609811000100028.83699988100084.40
1910001000437.059.1710001000854.4117.559791000100029.136819751000142.21
2010001000437.2812.5410001000856.1214.879761000100032.35656998100069.47
50010110001000438.045.6710001000858.2712.429751000100017.656531000100056.22
210001000438.097.259991000859.1511.799681000100023.786279801000136.20
310001000438.455.0810001000859.179.559591000100019.30604997100058.39
410001000438.623.3010001000860.108.909611000100017.426169771000116.75
510001000438.894.9010001000860.128.919651000100018.03606998100047.00
610001000437.977.7710001000858.3014.249711000100020.05611995100074.06
710001000438.756.3110001000859.3611.089581000100017.986469821000104.44
810001000438.624.699981000860.497.349731000100018.666159831000102.88
910001000438.645.7310001000860.139.399691000100019.506089701000121.62
1010001000438.923.7710001000860.394.569601000100014.94646986100079.16
1110001000437.946.5810001000857.9913.749691000100024.456399851000121.74
1210001000437.798.2110001000858.1512.899761000100029.126119851000106.40
1310001000438.338.4610001000858.1614.349581000100022.82637995100096.97
1410001000438.026.1510001000858.2311.759771000100022.356479891000104.91
1510001000438.107.8810001000858.5114.899671000100023.00658997100087.27
1610001000437.887.729991000858.2811.939671000100020.67654996100068.42
1710001000438.247.7410001000858.5510.119681000100019.67643995100067.35
1810001000438.415.5810001000858.9612.089661000100021.02629999100054.84
1910001000438.475.789991000859.5211.189761000100022.41654998100056.64
2010001000438.657.3910001000859.6211.699761000100026.25640997100050.63
Table 5. Experimental results obtained using real data of underwater images.
Columns: Image Pair | Number of Corresp. | Number of Iterations in RANSAC (without: Min., Max., Avg., Std.; with: Min., Max., Avg., Std.) | Number of Inliers (without: Min., Max., Avg., Std.; with: Min., Max., Avg., Std.).
12365141000780.74105.396015698.3616.3599181.653.65609180.813.44
2148100010001000.000.0089208103.6015.361916.782.92171918.330.70
3354234635381.2463.435212479.2112.8109148134.344.6880146131.456.98
4526356732518.7768.83409459.049.08110140130.935.3577124107.607.22
568068152100.8114.14418760.058.25255315300.179.9258314300.069.08
62528811000997.4114.44176391251.0932375246.822.95213826.562.37
72239611000999.842.465016685.6917.8325048.662.82455149.320.60
837584230131.0224.90247036.617.78171194184.262.7167191184.841.62
92685831000858.85119.2968183112.6617.9304945.784.18244934.854.44
1030683243136.3326.13186535.207.62155164159.091.99156163159.780.80
11249385903611.4685.9771158101.9815558477.002.54708276.282.46
12293261554355.2849.984811471.8811.494111102.772.456910595.565.21
13304406942622.3580.305715299.5616.6396554.693.91396656.154.38
143605821000829.2792.79114223163.0521.77910692.633.778010691.423.27
158395312886.0313.04369157.069.34327417390.3315.3320416390.9915.47
162665701000876.05102.93162367205.7235.5547766.063.07226143.799.97
17166100010001000.000.006991000847.7497.9183125.242.9142421.891.73
18406198554326.1757.144715188.1015.7139179167.566.96127180166.127.99
19138100010001000.000.006801000933.6312551613.102.0951613.301.95
20200414913601.9771.214411170.139.96556758.142.76556658.162.00
211965311000824.81118.154613082.2814.9264036.372.72193629.982.08
2229069194109.2418.91134422.474.3590116109.814.3495116109.374.17
23171100010001000.000.00100010001000.000133224.753.43133125.704.08
24148100010001000.000.007371000931.6211241611.802.12101411.391.56
Table 6. Experimental results obtained with increasing image size.
Columns: Image Pair | Number of Corresp. | Number of Iterations in RANSAC (without: Min., Max., Avg., Std.; with: Min., Max., Avg., Std.) | Number of Inliers (without: Min., Max., Avg., Std.; with: Min., Max., Avg., Std.).
25331000100010000215729.004.88479388.623.31849188.8071.04
17545100010001000059214122.0422.7056038.2310.53355749.2844.94
1940610001000100004591000680.11135.2742310.375.25142421.8790.65
21500100010001000086292160.8636.00305750.806.43243531.1762.23
234571000100010000356916650.2994.7884936.647.34214842.4963.62
244391000100010000100010001000.000.0042515.245.46042.3361.97

Share and Cite

Elibol, A.; Chong, N.Y. Efficient Image Registration for Underwater Optical Mapping Using Geometric Invariants. J. Mar. Sci. Eng. 2019, 7, 178. https://doi.org/10.3390/jmse7060178