A New Line Matching Approach for High-Resolution Line Array Remote Sensing Images

Abstract: In this paper, a new line matching approach for high-resolution line array remote sensing images is presented. This approach establishes the correspondence of straight lines on two images by combining multiple constraints. Firstly, three geometric constraints, epipolar, direction and the point-line geometric relationship, are used in turn to reduce the number of matching candidates. After this, two similarity constraints, the double line descriptor and point-line distance, are used to determine the optimal matches. Finally, the co-linearity constraint is used to check the one-to-many and many-to-one correspondences in the results. The proposed approach is tested on eight representative image patches selected from the ZY-3 line array satellite images, and the results are compared with those of two state-of-the-art approaches. Experiments demonstrate the superiority and potential of the proposed approach due to its higher accuracy and greater number of matches in most cases.


Introduction
The availability of high-resolution optical satellite remote sensing images has increased significantly with the rapid development of space technology. Image matching is one of the most significant phases of image-based 3D reconstruction. Existing feature point-based quasi-dense matching or pixel-based dense matching can successfully recover the general contour of an object, but the edges of artificial ground objects such as buildings can suffer from problems such as boundary deformation. Compared to feature points, feature lines in an image can more accurately express the contour features of an object. Therefore, line matching plays a key role in computer vision [1], image registration [2] and 3D reconstruction [3][4][5]. Line matching establishes correspondences between lines from different images using image correlation techniques. The existing line matching approaches can be divided into three categories, described below.
The first category is line matching based on geometric properties. The main geometric properties commonly used are length, overlap, distance, gradient, etc. [6][7][8][9]. Due to the uncertainty of line endpoints and the fragmentation of lines during line segment extraction, a line matching algorithm relying on a single geometric constraint is insufficient. Therefore, a combination of multiple constraints is mostly used in the line matching process. Here, existing geometric constraints, such as the epipolar constraint [10,11], the triangulation constraint [12] and the line-point invariant constraint [13,14], are often used. Among them, the most preferred is probably the epipolar constraint, which can limit the matching candidates to a small quadrilateral region and is often used as a benchmark for determining the corresponding direction of two images. Zhang et al. [15] used corresponding Delaunay triangulations, constructed from reliable corresponding points, to constrain the

Epipolar Line Generation
The epipolar constraint is one of the most popular geometric constraints used in image matching. It reduces the search range and improves the matching efficiency. For high-resolution line array satellite remote sensing images, we iteratively construct the inverse solution model from the forward solution parameters of the rational function model (RFM) of the line array image. Corresponding epipolar lines are then generated by combining the RFM inverse solution model with the projected trajectory method [38]. Figure 2a,b represent the reference image and search image, respectively, and the blue lines are the corresponding epipolar lines on both images.
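As an illustration of the projected-trajectory idea, the following minimal Python sketch projects a reference-image point to the ground at two assumed elevations and forward-projects both ground points into the search image; the two resulting points define the approximate epipolar line. The helpers `rfm_inverse` and `rfm_forward` are hypothetical placeholders for the inverse and forward RFM solution models, not the paper's implementation.

```python
def approximate_epipolar(pt_ref, rfm_inverse, rfm_forward, h_min=0.0, h_max=1000.0):
    # Project the reference-image point to the ground at two elevations
    # via the (hypothetical) inverse RFM of the reference image ...
    ground_low = rfm_inverse(pt_ref, h_min)
    ground_high = rfm_inverse(pt_ref, h_max)
    # ... then forward-project both ground points into the search image.
    p1 = rfm_forward(ground_low)
    p2 = rfm_forward(ground_high)
    # The line through p1 and p2 approximates the epipolar line.
    return p1, p2
```

In practice the elevation range would be taken from the terrain extent covered by the scene, and the projection would be iterated for higher accuracy as described in the text.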


Overlap Constraint
Due to the influence of radiometric variation across images and of the line extraction algorithm, there are differences in the extraction results of the corresponding feature line on different images, such as non-corresponding line segment endpoints and different segment fragmentation. Based on the condition that there should be an overlapping segment between corresponding lines in different views, we determine matching candidate line segments using the epipolar constraint. The process is implemented as follows: the epipolar lines of the two endpoints of the reference line segment are calculated using the above-mentioned epipolar generation method. A line segment on the search image that intersects at least one of the two epipolar lines, or lies within the range of the two epipolar lines, is considered a candidate line. Inspired by Ref. [32], the Boolean operator formula for this constraint is shown in Formula (1), assuming, without loss of generality, cx ≤ dx and c′x ≤ d′x: Cl,l′ = (c′x ≤ cx ≤ d′x) ∨ (c′x ≤ dx ≤ d′x) ∨ ((cx ≤ c′x) & (dx ≥ d′x)), where ∨ represents the logical "or" relationship. Cl,l′ denotes a pair of correspondence hypotheses (l, l′), where l and l′ represent the reference line and candidate line, respectively. c and d represent the two endpoints of the candidate line segment, and c′ and d′ represent the intersection points of the candidate line segment and the two epipolar lines on the search image, in which the subscript x denotes the x-coordinate. If Cl,l′ is true, then the correspondence hypothesis (l, l′) satisfies the overlap constraint. Figure 2 presents an example of the overlap constraint.
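A minimal sketch of this overlap test, assuming the x-coordinates of the candidate endpoints (c, d) and of the epipolar intersections (c′, d′) are already known:

```python
def overlap_constraint(c_x, d_x, ep1_x, ep2_x):
    # ep1_x, ep2_x: x-coordinates of the intersections of the candidate's
    # line with the two epipolar lines (c'_x and d'_x in the text).
    lo, hi = min(ep1_x, ep2_x), max(ep1_x, ep2_x)
    # An endpoint lies within the epipolar interval ...
    endpoint_inside = (lo <= c_x <= hi) or (lo <= d_x <= hi)
    # ... or the candidate segment spans both epipolar lines.
    segment_spans = min(c_x, d_x) <= lo and max(c_x, d_x) >= hi
    return endpoint_inside or segment_spans
```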

Direction Constraint
During the matching of line segments, the epipolar constraint is a two-dimensional constraint, and the number of candidates in the epipolar quadrilateral region can be excessive. Therefore, other strong geometric constraints can be employed to reduce the candidates, and the direction constraint is one of them. We take the corresponding epipolar lines on the two images as the reference and calculate the angles between the lines to be matched on the two images and the corresponding epipolar lines separately. First, corresponding epipolar lines passing through the midpoint of the reference line are generated. Then, the angle of the reference line to the epipolar line on the reference image and the angle of the candidate line to the epipolar line on the search image are calculated and denoted as θr and θc, respectively. The difference between these two angles is compared with a given threshold, Tθ. If the absolute value of the difference is less than the threshold, i.e., |θr − θc| < Tθ, then the candidate line satisfies the direction constraint; otherwise, it is discarded. Based on the results of the epipolar constraint in Figure 2, the direction constraint results are obtained, as shown in Figure 3.
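A sketch of the direction constraint, assuming each line is given by two endpoints; reducing the angle difference to the acute range is our assumption, since extracted segment endpoints have an arbitrary order:

```python
import math

def acute_angle(line, other):
    """Acute angle between two lines, each given as ((x1, y1), (x2, y2))."""
    def direction(l):
        (x1, y1), (x2, y2) = l
        return math.atan2(y2 - y1, x2 - x1)
    diff = abs(direction(line) - direction(other)) % math.pi
    return min(diff, math.pi - diff)

def direction_constraint(ref_line, ref_epipolar, cand_line, cand_epipolar, t_theta):
    # |theta_r - theta_c| < T_theta, with each angle measured against the
    # corresponding epipolar line on its own image.
    theta_r = acute_angle(ref_line, ref_epipolar)
    theta_c = acute_angle(cand_line, cand_epipolar)
    return abs(theta_r - theta_c) < t_theta
```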

Corresponding Points Constraint
After the above two constraints, the matching candidates are limited to a relatively small range, but for repeated-texture regions, due to the existence of parallel feature lines, a relatively large number of matching candidates may remain after the direction constraint. Therefore, based on the consistency of the local geometric relationships between corresponding points and corresponding lines on different views, we further narrow down the candidates with the matched points, i.e., the local geometric relationship between a line and its neighboring matched points is used. We first determine the matched points in the neighborhood of the reference line. For a reference line L with length l, construct a line L⊥ that passes through the midpoint of L and is perpendicular to L. The matched points whose distance to the line L is less than TD1 and whose distance to the line L⊥ is less than TD2 are selected as the points in the neighborhood of L. In other words, d(pi, L) < TD1 and d(pi, L⊥) < TD2, where d(·) denotes the distance from a point to a line, pi is a matched point, and TD1 and TD2 are distance thresholds, with TD2 = l/2 + ∆, where ∆ is the distance by which each end of the line segment L is extended outward. In this paper, ∆ = 30. The set of matched points in the neighborhood of the line L is recorded as S, and the corresponding set of points on the search image as S′. At the same time, the points on the two sides of the line segment L are denoted as sets P+ and P−, respectively, with S = P+ ∪ P−. Taking a candidate line on the search image as the reference, the points in the set S′ that are located on the two sides of the candidate line are recorded as sets P′+ and P′−, respectively.
If the above four sets satisfy the condition (P+ = P′+ & P− = P′−) ∨ (P+ = P′− & P− = P′+), the candidate line is retained; otherwise, it is removed. Here, ∨ represents the logical "or" relationship and & represents the logical "and" relationship. Based on the results of the direction constraint in Figure 3, the corresponding points constraint results are shown in Figure 4b. As shown in Figure 4, for candidates on the search image that satisfy the conditions, the matched points in blue are located under the candidate line, and the matched points in red are located above the candidate line, which is consistent with the local geometric relationship between the matched points and the reference line on the reference image.
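The side-consistency test can be sketched as follows, assuming each matched point carries an identifier shared between the two images (a hypothetical representation, used only for illustration):

```python
def side_of_line(p, a, b):
    # Sign of the cross product: True if p lies to the left of the
    # directed line from a to b.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) > 0

def corresponding_points_constraint(ref_pts, ref_line, srch_pts, cand_line):
    """ref_pts / srch_pts map a match id to its (x, y) position on the
    reference / search image; lines are ((x1, y1), (x2, y2))."""
    p_plus = {k for k, p in ref_pts.items() if side_of_line(p, *ref_line)}
    p_minus = set(ref_pts) - p_plus
    q_plus = {k for k, p in srch_pts.items() if side_of_line(p, *cand_line)}
    q_minus = set(srch_pts) - q_plus
    # (P+ = P'+ & P- = P'-) v (P+ = P'- & P- = P'+): the side labels may
    # swap as a whole, but the two groups must be preserved.
    return (p_plus == q_plus and p_minus == q_minus) or \
           (p_plus == q_minus and p_minus == q_plus)
```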
For each candidate line that satisfies the above constraints, it is further judged whether it satisfies the descriptor similarity constraint. Based on the principle of the line band descriptor (LBD) proposed by Zhang [25], an affine transformation of the support region is added, and LBD descriptors are generated separately on the two sides of the line. To construct the corresponding support regions of the corresponding lines, the epipolar lines of the endpoints of the two lines in the candidate matching pair are used to determine the corresponding maximum overlap segments between the two lines. As shown in Figure 5b, the two candidate lines are each extended to the epipolar lines.
On the reference image, a rectangular region centered at the line L is generated as the line support region, as shown in Figure 6. The size of the support region is c × len, where len is the length of the rectangle, which is equal to the length of the line L, and c is the width of the rectangle. To make the descriptor rotation-invariant and facilitate the following calculations, the support region is affine-transformed so that it is parallel to the image coordinate system, and the size of the region remains the same before and after the affine transformation. The support region is divided into m bands {B1, B2, …, Bm}, where each band is a sub-region parallel to the horizontal direction, and the width of a band is w. Thus, w × m = c. In the diagram, m = 5 is assumed.

The Construction of the Line Descriptor
The gradient descriptor of a line, known as the line descriptor, is a statistic of gradient orientations for pixels within the line neighborhood, computed over sub-regions. Each statistic value is the sum of the gradient magnitudes near that direction within the sub-region. Each pixel is weighted during the accumulation according to its distance from the line. A line descriptor is a vector collecting the gradient statistics of all the sub-regions, e.g., the mean or normalized vector of the statistics of the corresponding orientations within each sub-region.
To enhance the robustness of the line descriptor, we divide the support region into two parts, which are used to construct the line descriptor separately. Assume that the band in which the line segment is located is the i-th band, denoted as Bi. Then, the region from band B1 to band Bi is one part, and the region from band Bi to band Bm is the other, denoted as the upper and lower support regions of the line segment, respectively. Correspondingly, the line descriptors constructed from these two regions are denoted as LBDu and LBDl, respectively.
where BDk is the band descriptor of the k-th band, computed from band Bk and its two nearest neighbor bands Bk−1 and Bk+1. Specifically, any band outside the support region is not considered when calculating the band descriptors of the boundary bands B1, Bi and Bm. We now construct the band descriptor BDk. For the row Bhk, the h-th row in band Bk or its neighboring bands, we accumulate the gradients of the pixels within this row along four directions (0°, 90°, 180°, 270°), respectively, as in Formula (4). In Formula (4), λhk is the weight coefficient of the row Bhk, which combines two Gaussian weighting functions applied to each row of the line support region along the column direction. One is a global weighting coefficient, where i denotes the i-th row of the line support region and di is the distance of the i-th row to the row containing the line segment. The other is a local weighting coefficient within each band, where the row Bhk is assumed to be the j-th row of the line support region.
By stacking the four accumulated gradients of all the rows associated with band Bk, the band gradient description matrix (GDM) is formed, where n is the number of rows associated with Bk. The mean vector Mk of the row vectors of GDMk is computed to obtain the descriptor of band Bk, i.e., BDk = Mk ∈ R4×1. Substituting into Equation (2), the descriptors LBDu and LBDl of the two support regions of the line segment are obtained, respectively.
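A simplified sketch of the band-descriptor accumulation: it keeps the four-direction gradient sums per row and the mean of the GDM row vectors, but omits the global and local Gaussian row weights λhk for brevity, so it is an illustration rather than the paper's exact computation:

```python
import numpy as np

def band_descriptor(row_gradients):
    """row_gradients: list of (n_pixels, 2) arrays of (gx, gy) per row
    associated with a band.  For each row, accumulate gradient magnitudes
    along the four directions 0°, 90°, 180°, 270°; stacking the rows gives
    the band gradient description matrix (GDM), and the mean of its row
    vectors is the band descriptor BD_k."""
    gdm = []
    for row in row_gradients:
        gx, gy = row[:, 0], row[:, 1]
        gdm.append([gx[gx > 0].sum(),     # 0°   (positive x-gradient)
                    gy[gy > 0].sum(),     # 90°  (positive y-gradient)
                    -gx[gx < 0].sum(),    # 180° (negative x-gradient)
                    -gy[gy < 0].sum()])   # 270° (negative y-gradient)
    return np.asarray(gdm).mean(axis=0)   # BD_k, a 4-vector
```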

Similarity Constraint
Similarly, for a candidate line on the search image, the line support region is constructed with the overlapping segment corresponding to the reference line as its center, and the candidate line descriptors LBD′u and LBD′l are obtained. The Euclidean distance formula is used to calculate the similarity of two descriptors. For a reference line and a candidate line, two similarities are computed, corresponding to the descriptors of the upper and lower support regions of the two lines, respectively. As long as one of the two similarities is less than a given threshold, Td, the candidate line is considered to satisfy the similarity constraint.
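A minimal sketch of the double-descriptor similarity test, treating each descriptor as a plain vector:

```python
import math

def similarity_constraint(ref_lbd_u, ref_lbd_l, cand_lbd_u, cand_lbd_l, t_d):
    # Euclidean distance between the upper (and lower) descriptors of the
    # reference and candidate lines; the constraint is met if either
    # side's distance falls below the threshold T_d.
    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return euclidean(ref_lbd_u, cand_lbd_u) < t_d or \
           euclidean(ref_lbd_l, cand_lbd_l) < t_d
```

Using one-sided agreement makes the test robust when occlusion or viewpoint change corrupts the content on a single side of the line, as discussed in the text.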

Determining the Final Matching Results
After applying the four constraints described above, there may still be many candidates for a reference line. We combine the point-line distance constraint and the collinearity constraint to determine the final matching results. According to Section 2.2, we have obtained two sets of matched points, P+ and P−, located on either side of the reference line, and the corresponding sets of matched points, P′+ and P′−, located on either side of the candidate line on the search image. We calculate the sum of the distances from all points in each set to the corresponding line according to Formula (7), where pj, pk, p′j and p′k denote points in these sets, and lr and lc denote the reference line and the candidate line, respectively. If the absolute differences between the corresponding distance sums are less than a threshold TD3, the candidate line satisfies the point-line distance constraint. For any reference line, if only one candidate satisfies the condition, then that candidate line is the final corresponding line. If several candidates satisfy the constraint, we first judge whether these candidate lines are collinear. If they are, then all the candidates are correct matches; otherwise, the line corresponding to the smallest distance difference among the candidates is selected as the correct match. When all the lines have been matched, we check whether there are "many-to-one" correspondences between the two images in the results, i.e., many lines on the reference image corresponding to the same line on the search image. If so, we identify the final match in the same way as above, with the collinearity constraint and the point-line distance constraint.
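The point-line distance constraint can be sketched as follows; since the exact form of Formula (7) is not reproduced here, the per-side comparison of summed distances is our reading of the text:

```python
import math

def point_line_distance(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    num = abs((b[0] - a[0]) * (a[1] - p[1]) - (a[0] - p[0]) * (b[1] - a[1]))
    return num / math.hypot(b[0] - a[0], b[1] - a[1])

def distance_sum(points, line):
    return sum(point_line_distance(p, *line) for p in points)

def point_line_distance_constraint(p_plus, p_minus, ref_line,
                                   q_plus, q_minus, cand_line, t_d3):
    # The summed point-to-line distances on each side of the reference
    # line should roughly match those on the corresponding side of the
    # candidate line.
    return abs(distance_sum(p_plus, ref_line) - distance_sum(q_plus, cand_line)) < t_d3 and \
           abs(distance_sum(p_minus, ref_line) - distance_sum(q_minus, cand_line)) < t_d3
```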

Line Matching Experiments
To evaluate the performance of our approach, we selected eight pairs of image patches from the forward- and backward-looking image pairs of the ZY-3 satellite for line matching experiments, as shown in Figure 7. The ground sampling distance of the images is approximately 3.5 m. These image patches were chosen to represent the characteristics of different landscapes, including farmland, paddy fields, residential environments with dense and complex buildings, countryside, etc. Among them, the smallest image size is 400 pixels × 400 pixels, as shown in Figure 7c, and the largest image size is 600 pixels × 600 pixels, as shown in Figure 7b. In this study, the experiments were performed on a 1.60 GHz Intel(R) Core(TM) i5-4200U CPU with 8 GB of RAM.

Parameter Selection
Several parameters are adopted in our approach, four of which are key and influence the performance of the approach. To balance the impact of these parameters on matching, two image pairs, Figure 7a,b, were randomly selected for parameter analysis experiments to determine their thresholds. Each parameter's value was determined by a comparative analysis of the matching results under different parameter values.
3.1.1. Determine the Value of Parameter Tθ
Tθ is the threshold value for the direction constraint. It is reflected in the matching by the fact that the angles between the two lines in a candidate pair and their corresponding epipolar lines should be approximately equal. Due to errors in line extraction and epipolar computation, there will be a small deviation between the two angles. Therefore, given the other parameter values, the two sets of images in Figure 7a


As shown in Figure 11, in terms of the number of corresponding lines and the number of correct matches, both image pairs reach the maximum level of variation at Td = 0.6, with little change after Td = 0.6. The accuracy, on the other hand, gradually decreases as Td increases and slowly levels off, varying within a small range for both pairs, i.e., from 95.62% to 98.22% and from 94.43% to 95.65%. Thus, considering both the number of corresponding lines and the matching accuracy, Td = 0.6 is chosen in this paper.


Comparison to State-of-the-Art Matching Methods
To verify the effectiveness of the proposed approach, it was used to perform line matching experiments on the image pairs shown in Figure 7. We further compared our approach with state-of-the-art line matching approaches, the new line-point invariant (N-LPI) [13] and the Line-Junction-Line (LJL) [31] approaches. These two methods, which match line segments in groups and are mainly used for close-range image line matching, were employed for the comparison because their source codes were available for download from GitHub. For each image pair, we used the same lines extracted by the LSD algorithm and the same corresponding points as input. Referring to references [13,25], we evaluated the three approaches by computing the following three measures: the number of correct matches, the matching accuracy, and the running time, where accuracy is the ratio of the number of correct matches to the total number of obtained matches. The statistics of the matching results of the three approaches are shown in Table 1. In the table, columns 1 to 4 represent, in order, the image pairs, the number of lines extracted from the reference image, the number of lines extracted from the search image, and the number of corresponding points. The three numbers in column 5 indicate, respectively, the number of corresponding lines, the number of correct matches and the matching accuracy of our approach, and column 6 contains the running time of our approach. In addition, the corresponding experimental results of the LJL approach and the N-LPI approach are presented in columns 7 to 8 and 9 to 10, respectively. To facilitate comparison and analysis, we further display the statistics in Figures 12 and 13. The line matching results of our approach for the eight sets of image pairs are shown in Figure 14, where red lines indicate incorrect matches and lines in other colors indicate correct matches.
By comparing and analyzing the results, the following conclusions can be drawn.
(1) In terms of the number of correct matches, the proposed approach obtains more correct matches than the other two approaches, except for image pair (f). In particular, for image pairs (a), (d) and (e), the number of correct matches of the proposed approach is 64, 37 and 31 more than the LJL approach, and 27, 81 and 38 more than the N-LPI approach, respectively.
(2) In terms of accuracy, for the eight sets of image pairs, the proposed approach's accuracy is above 91% and the average accuracy is higher than 95%. Except for image pairs (c) and (f), the accuracy of the proposed approach is higher than that of the other two approaches. For image pair (c), the accuracies of the three approaches are similar. For image pair (f), the accuracy of the proposed approach is 1.38% and 4.82% lower than that of the LJL and N-LPI approaches, respectively.
(3) In terms of running time, the proposed approach has clear advantages over the other two approaches. For all eight image pairs, the average times of the LJL approach and the N-LPI approach are 8.25 times and 2.16 times longer than that of our approach, respectively. This is because the other two approaches require the extraction of line pairs that are assumed to be co-planar, which accounts for a large proportion of the overall matching time.
Combining the above statistics with an analysis of the matching results of the three approaches, the following observations can be made. (1) For a flat region with dense, well-distributed corresponding points, all three approaches obtain good matching results, as for image pair (c). Despite the presence of many neighboring parallel feature lines and illumination changes in image pair (c), the three approaches still achieve accuracies above 97%. (2) The N-LPI approach yields false matches mainly in areas with sparse or no corresponding-point coverage. This is because the approach relies on corresponding points for the generation of matching primitives, the construction of geometric descriptors and the filtering of matching candidates under the homography constraint, so the number and distribution of corresponding points strongly influence its performance. (3) The LJL approach achieves good accuracy, over 90%, for flat areas or areas with gently undulating terrain, such as image pairs (a), (c), (f) and (g), but areas covered by buildings easily produce false matches, as in image pairs (b), (d) and (e). This is because the approach assumes that the two lines in each pair are coplanar in object space and intersect in a local neighborhood; in built-up areas with fractured image textures, non-coplanar lines misjudged as coplanar increase the number of false matches.
Compared with the other two approaches, our approach obtained lower accuracy on image pair (f). Its false matches mainly occurred in regions of similar texture with few or no corresponding points. Corresponding points also play a key role in our approach, so it is difficult for our approach to obtain correct matches in such regions. For the other seven sets of image pairs, our approach achieved more matches and higher accuracy, mainly due to the following advantages. (1) For the ZY-3 line array satellite images, our approach uses the forward and inverse solutions of the rational function model (RFM) to compute the corresponding epipolar lines, which have high accuracy. (2) During the construction of the line descriptor, on the one hand, the epipolar lines are used to determine the corresponding endpoints of the reference line and each candidate line when constructing the corresponding line support regions. This solves the problem of inconsistent local support regions of corresponding lines caused by breaks in the extracted lines, thereby improving the correspondence of the descriptors. On the other hand, the line descriptors are constructed separately on both sides of the line. This avoids missed or false matches caused by inconsistent information on the two sides of corresponding lines under large view changes, occlusion, etc. (3) Our approach applies the epipolar constraint, direction constraint and corresponding-point constraint in turn to narrow the matching candidates to a small and reliable set, and then uses the double line descriptor similarity and the point-line distance constraint to determine the final corresponding line. These multiple constraints effectively improve both the number and the correctness of the line matching results.
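To make the cascade of constraints concrete, the following is a minimal sketch in Python of two of its ingredients: a direction filter over candidate lines followed by a descriptor-similarity selection, together with a point-line distance function. The function names, thresholds and cosine similarity measure are illustrative assumptions, not the exact formulation used in the paper.

```python
import numpy as np

def point_line_distance(pt, a, b):
    """Perpendicular distance from point pt to the infinite line through a and b."""
    (ax, ay), (bx, by), (px, py) = a, b, pt
    dx, dy = bx - ax, by - ay
    return abs(dx * (py - ay) - dy * (px - ax)) / np.hypot(dx, dy)

def line_angle(line):
    """Orientation of a segment ((x1, y1), (x2, y2)) in [0, 180) degrees."""
    (x1, y1), (x2, y2) = line
    return np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0

def cosine_sim(d1, d2):
    return float(np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2)))

def match_line(ref_line, ref_desc, candidates, max_angle=10.0, min_sim=0.7):
    """Apply a direction constraint, then keep the candidate whose descriptor
    is most similar to the reference descriptor (thresholds are illustrative).
    candidates: list of (line, descriptor) pairs; returns the best index or None."""
    best, best_sim = None, min_sim
    for idx, (line, desc) in enumerate(candidates):
        diff = abs(line_angle(ref_line) - line_angle(line))
        if min(diff, 180.0 - diff) > max_angle:
            continue  # direction constraint fails: orientations differ too much
        sim = cosine_sim(ref_desc, desc)
        if sim > best_sim:
            best, best_sim = idx, sim
    return best
```

In the full approach, the epipolar and corresponding-point constraints would prune the candidate list before this step, and `point_line_distance` would be evaluated between candidate lines and nearby corresponding points as an additional similarity check.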

Conclusions
This paper has presented a new line matching approach for high-resolution line array remote sensing images. The characteristics of our approach can be outlined as follows.
(1) The epipolar line was fully exploited in our approach, both in the epipolar and direction constraints used to determine the matching candidates, and in the determination of the corresponding endpoints of corresponding lines during descriptor construction. (2) A double line descriptor was obtained by constructing a descriptor separately on each side of the line segment, which resolves the inconsistency of the corresponding support regions of corresponding lines on the two images. (3) The corresponding points were fully exploited, both in the point-line local geometric relationship constraint and in the point-line distance similarity constraint, effectively avoiding the ambiguous matches generated by neighboring parallel lines. The proposed approach was tested on eight representative image patches selected from the ZY-3 line array satellite images, and the evaluation indicated that the proposed line matching approach can achieve promising results. For future work, our aim is to achieve the rapid matching of large areas of line array satellite remote sensing images.
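The benefit of the double line descriptor in point (2) is that each side of the line is scored independently, so a side corrupted by occlusion or a large view change need not reject a true correspondence. The paper does not spell out the exact fusion rule here; a plausible minimal sketch, taking the better of the two one-sided similarities, is shown below (the `max` fusion is an assumption for illustration).

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def double_descriptor_similarity(ref_sides, cand_sides):
    """ref_sides / cand_sides: (left, right) one-sided descriptors of a line.
    Scoring each side independently and keeping the better score (an assumed
    fusion rule) lets a match survive when one side is occluded or altered."""
    s_left = cosine(ref_sides[0], cand_sides[0])
    s_right = cosine(ref_sides[1], cand_sides[1])
    return max(s_left, s_right)
```

With a single concatenated descriptor, a heavily changed side would drag the overall similarity down; the two-sided formulation isolates that damage to one of the two scores.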