Article

Robust Feature Matching of Multi-Illumination Lunar Orbiter Images Based on Crater Neighborhood Structure

1
Key Laboratory of Remote Sensing and Digital Earth, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China
2
University of Chinese Academy of Sciences, Beijing 100049, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(13), 2302; https://doi.org/10.3390/rs17132302
Submission received: 21 May 2025 / Revised: 28 June 2025 / Accepted: 2 July 2025 / Published: 4 July 2025
(This article belongs to the Special Issue Remote Sensing and Photogrammetry Applied to Deep Space Exploration)

Abstract

Lunar orbiter image matching is a critical process for achieving high-precision lunar mapping, positioning, and navigation. However, with the Moon’s weak-texture surface and rugged terrain, lunar orbiter images generally suffer from inconsistent lighting conditions and exhibit varying degrees of non-linear intensity distortion, which pose significant challenges to traditional image matching. This paper presents a feature matching method based on crater neighborhood structure that is particularly robust to changes in illumination. The method integrates deep-learning based crater detection, Crater Neighborhood Structure feature (CNSF) construction, CNSF similarity-based matching, and outlier removal. To evaluate the effectiveness of the proposed method, we created an evaluation dataset comprising Multi-illumination Lunar Orbiter Images (MiLOIs) from different latitudes (321 image pairs in total), and conducted comparative experiments between the proposed method and state-of-the-art image matching methods. The experimental results indicate that the proposed approach exhibits greater robustness and accuracy against variations in illumination.


1. Introduction

Lunar orbiter images and their derived mapping products are essential for both lunar scientific research and exploration missions [1,2,3,4]. Lunar orbiter cameras from various missions have observed the Moon’s surface for several decades, collecting extensive high-resolution, multi-scale, multi-temporal image data [5]. Some of these images have been used for 3D reconstruction [6,7,8], precise localization of points of interest [9,10], and mapping and environmental analyses of potential landing areas [11,12,13]. These regional applications require the synergistic use of multiple images, which makes accurate image matching essential [14]. Moreover, tasks such as lunar global mapping and global control network construction place even more intensive demands on image matching [15,16]. With the acquisition of massive amounts of high-resolution lunar orbiter imagery and the growing demand for large-area or even global mapping, more reliable image matching methods have become indispensable.
Among the factors influencing lunar orbiter image matching, significant variations in shading and shadowing pose major challenges [17,18]. These variations are closely related to changes in the angles of sunlight and manifest as a non-linear effect on image intensity, commonly known as non-linear radiometric distortion [19]. Specifically, the extent and direction of shadows cast by lunar terrain features are mainly governed by the solar incidence and azimuth angles, further exacerbating the intensity distortions. Examples of multi-illumination lunar orbiter images (MiLOIs) are shown in Figure 1. The rugged terrain and weak-texture surface cause substantial differences in image intensity as the sun direction changes over time. Low solar incidence angles (~0°) lead to overexposure and low contrast, generally in equatorial regions. Conversely, high incidence angles (~90°) result in severe shadowing, and even permanently shadowed regions near the lunar poles, owing to the Moon’s low axial tilt relative to the ecliptic (approximately 1.54°). Therefore, for the weakly-textured lunar surface, severe changes in illumination conditions pose major challenges to feature matching algorithms [20].
Feature-based matching algorithms have been widely studied in computer vision and photogrammetry. Typical feature-based methods include Harris (1988) [21], SIFT (2004) [22], ORB (2011) [23], and AKAZE (2013) [24]. SIFT is considered one of the most classical methods owing to its invariance to scale, rotation, and brightness. However, it remains susceptible to intensity distortions caused by shadows during feature detection and is sensitive to non-linear radiometric distortions when using gradient information to describe features. To this end, Wu et al. [25] improved the gradient orientation histogram of the SIFT algorithm to adapt it for matching planetary orbiter images with varying shadow directions. Yet its robustness in matching images with large differences in both solar incidence and azimuth angles still requires further investigation, especially for images with severe shadows at low solar elevation angles. Overall, the performance of these spatial-domain feature matching methods is limited in feature detection and description when dealing with non-linear radiometric distortions. Therefore, some researchers have conducted feature detection and description in the frequency domain, effectively enhancing the matching performance for multimodal Earth-observation remote sensing images with non-linear radiometric distortions [19,26,27,28]. Similar approaches have also been applied to planetary remote sensing images with weak textures and illumination differences, such as FDAFT [17] and EPCFT [18]. Yet the accuracy of the matching points remains relatively limited, and the datasets of images with illumination differences are not extensive.
Using geomorphological features for matching is a novel approach in lunar remote sensing image matching [15]. On the Moon, geomorphological features such as impact craters are ubiquitous and exhibit distinct characteristics [29]. Using craters as cues infuses semantic prior knowledge into the matching problem, which enhances the ability to extract invariant features under extreme conditions [30]. With recent advances in deep learning, various algorithms have emerged that automatically detect craters in high-resolution lunar orbiter images [13,31,32,33], which also enhances the feasibility of crater-based matching methods. Crater-based techniques have been developed for lunar autonomous visual navigation and positioning [33,34], and researchers have achieved matching between navigation camera images and a base map or a crater database [35,36]. However, these methods are only applicable to matching scenarios involving a small number of craters and are thus unsuitable for matching orbiter images. For orbiter images, some studies used the Hausdorff distance and mutual information to determine correspondences between individual craters [30,37]. However, these methods rely solely on the local information of individual craters, which degrades their robustness when craters are small or illumination conditions vary.
Structural information between feature points has been used to improve the matching performance on challenging images by reducing reliance on local grayscale descriptors and effectively eliminating outliers [38,39,40,41]. Considering the particular challenges of lunar orbiter image matching, and inspired by the robustness of crater features and structural information, we propose a robust feature matching method for MiLOIs called Crater Neighborhood Structure based Feature Matching (CNSFM). The approach combines automatic crater detection with the topological characteristics of impact craters to address image matching issues arising from weak textures and multiple illumination conditions. Instead of traditional grayscale-based feature description, the proposed crater neighborhood structure features (CNSFs) are treated as the matching units; a CNSF consists of a crater and its K nearest neighboring craters. A dedicated strategy is employed to eliminate non-corresponding craters within CNSFs, enabling the identification of similar CNSFs. A CNSF distance measurement is then established from geometric topological parameters to determine the most similar CNSF pairs. Finally, local geometric transformation consistency is used to remove mismatched CNSFs. The matching process reduces reliance on single-feature image patches by capitalizing on structural information between features, thereby enhancing robustness in weakly-textured regions and under multi-illumination scenarios. In addition, an evaluation dataset was constructed using lunar orbiter images captured under diverse illumination conditions in various latitudinal regions, and was used to systematically assess the matching performance of recent state-of-the-art multi-modal image matching approaches as well as the methodology proposed in this study.
This paper is structured as follows: Section 1 presents the research background and objectives. Section 2 details the proposed method. Section 3 provides the experimental results and analysis. Section 4 concludes with a summary of the paper.

2. Methods

This paper proposes a dedicated feature matching framework, called CNSFM, for robustly matching MiLOIs. The methodological workflow is illustrated in Figure 2. First, we constructed a crater dataset from MiLOIs by manual labelling and used it for transfer learning on an advanced open-source deep learning-based object detection model, ensuring that enough corresponding features can be consistently extracted under varying illumination conditions. Second, the geometric structure (called a CNSF) formed by combining a feature with multiple neighboring features is employed as the fundamental matching unit. This reduces sole reliance on local image information from individual features during the matching process, thereby enhancing the robustness of feature correspondence. However, the omission of craters complicates the identification of homologous CNSFs, which the proposed method overcomes through the following steps. Based on the principle of similarity invariance, similar CNSF searches are conducted to remove non-corresponding craters and identify CNSF pairs that satisfy the structural similarity judgement. The geometric parameters of the CNSFs are then used to construct distance measurements, enabling the selection of the most similar CNSF as the matched CNSF. Finally, potential false correspondences among CNSFs are eliminated through outlier rejection grounded in the assumption of local geometric transformation consistency. Detailed descriptions are provided in the subsequent sections.

2.1. Crater Feature Detection

To test the matching algorithm proposed in this paper, the feature detector was obtained by transfer learning on an open-source deep learning model using a custom crater dataset. We selected a recent version of the YOLO series, YOLOv9 [42], an open-source, easy-to-implement, high-performance object detection network, to create the crater feature detector. YOLOv9 utilizes Programmable Gradient Information (PGI) and a Generalized Efficient Layer Aggregation Network (GELAN) to extract key features more effectively, offering strong feature extraction capability, high computational efficiency, and fast detection speed. Since no suitable crater dataset was available, we prepared a training dataset for transfer learning, considering the following factors: (a) the image data should be representative, covering multi-illumination images of various geological units; and (b) the annotated crater samples should be easily identifiable, with clear and preferably circular edges. These considerations ensure that the crater detector achieves sufficient generalization capability while maintaining high localization accuracy for the detected crater features.
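The paper fine-tunes the official YOLOv9-C model; as a hedged illustration of the same workflow, the sketch below uses the Ultralytics API, which also distributes YOLOv9-C weights. The dataset file craters.yaml, its contents, and the input image name are hypothetical placeholders, while the 300-epoch, batch-8, 640-pixel settings follow Section 3.2.

```python
# Hedged sketch of the transfer-learning step (Ultralytics API, not the
# paper's exact training scripts). "craters.yaml" and "nac_block.png"
# are hypothetical placeholders.
from ultralytics import YOLO

# craters.yaml (hypothetical) would point at the 640 x 640 image blocks:
#   path: datasets/craters
#   train: images/train
#   val: images/val
#   names: {0: crater}

model = YOLO("yolov9c.pt")                  # COCO-pretrained weights
model.train(data="craters.yaml", epochs=300, batch=8, imgsz=640)

# Detect craters in a new block; each box yields a crater center and an
# approximate diameter (mean of box width and height).
results = model.predict("nac_block.png", conf=0.25)
for cx, cy, w, h in results[0].boxes.xywh.tolist():
    print(f"center=({cx:.1f}, {cy:.1f}), diameter~{(w + h) / 2:.1f} px")
```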

2.2. Crater Neighborhood Structure Based Feature Matching

This paper proposes a crater neighborhood structure feature matching method for MiLOIs. The approach is simple but efficient, leveraging the principle of similarity invariance to establish correspondences among feature-formed structures and thereby enabling robust matching for MiLOIs.

2.2.1. Rationale

Because of the weak textures on the lunar surface, crater centers are often used as tie points during manual image registration. The intensity characterization of the same crater on MiLOIs may vary considerably; consequently, it is challenging to directly identify the corresponding crater in another image based solely on the characteristics of a single crater. In manual registration, it is often necessary to consider the surrounding craters collectively while identifying correspondences. Inspired by this experience, the geometric structure formed by the craters within the neighborhood of a given crater is worth considering. The crater feature detector outlined in Section 2.1 replaces manual impact crater identification, and the automatic feature matching process requires comparing the geometric consistency of crater-formed structures. However, since automatic crater detection inevitably results in omissions or false detections, determining the similarity of geometric structures under these uncertain conditions is the core idea of the method proposed in this paper, which is primarily based on similarity invariants.

2.2.2. Similarity Invariant

A similarity transformation is a transformation in which the shape of a figure remains unchanged before and after the transformation. It can be decomposed into scaling, rotation, and translation. A similarity transformation $T$ is generally represented by
$$P' = T(P) = sRP + t \quad (1)$$
where $P$ is the observation; $P'$ is the observation after the similarity transformation; $s$ is a scale parameter; $R$ is a rotation matrix; and $t$ is a translation vector. For a two-dimensional plane, the rotation matrix $R_\theta$ corresponding to a rotation angle $\theta$ can be expressed as
$$R_\theta = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \quad (2)$$
The translation vector $t$ is formed by the translation parameters $t_x$ and $t_y$:
$$t = \begin{bmatrix} t_x \\ t_y \end{bmatrix} \quad (3)$$
Similarity invariants are quantities that remain unchanged when a geometric figure undergoes any similarity transformation. Basic invariants under similarity transformations include angles and similarity ratios (e.g., distance ratios). Figure 3 shows that angles and distance ratios are invariant under a similarity transformation, i.e., $\angle A_1O_1B_1 = \angle A_2O_2B_2$, $|O_1A_1|/|O_1B_1| = |O_2A_2|/|O_2B_2|$, and $\phi_{O_1}/\phi_{A_1} = \phi_{O_2}/\phi_{A_2}$.
Given features $O_1, A_1, B_1, C_1$, we transform them to $O_2, A_2, B_2, C_2$ by a similarity transformation. Based on the invariants, we also have the following equation:
$$D = s^2 D', \quad \text{where } D = \begin{bmatrix} |O_1A_1||O_1B_1|\cos\angle A_1O_1B_1 \\ |O_1A_1||O_1C_1|\cos\angle A_1O_1C_1 \\ |O_1B_1||O_1C_1|\cos\angle B_1O_1C_1 \end{bmatrix},\; D' = \begin{bmatrix} |O_2A_2||O_2B_2|\cos\angle A_2O_2B_2 \\ |O_2A_2||O_2C_2|\cos\angle A_2O_2C_2 \\ |O_2B_2||O_2C_2|\cos\angle B_2O_2C_2 \end{bmatrix} \quad (4)$$
where $s$ is the scale parameter of the transformation, with $|O_1A_1| = s|O_2A_2|$.
For lunar orbiter images, the flight altitude of the orbiter is typically much greater than the variations in terrain elevation. As a result, deformations between images primarily exhibit rigid transformations [38]. Moreover, after orthorectification, local regions within the images generally satisfy similarity transformations. Therefore, the similarity invariants mentioned above are applicable to lunar orbiter images. By comprehensively utilizing these invariants, our method can identify similar structures, as detailed in the following sections.
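To make the invariance argument concrete, the following minimal sketch (with illustrative values only) numerically verifies that the angle at a vertex and the ratio of distances survive an arbitrary similarity transformation:

```python
# Minimal numerical check of the similarity invariants: angles and
# distance ratios are preserved under scaling, rotation, and translation.
import numpy as np

def similarity(points, s, theta, t):
    """Apply P' = s * R(theta) * P + t to an (N, 2) array of points."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return s * points @ R.T + t

def angle_at(o, a, b):
    """Angle AOB at vertex o, in radians."""
    u, v = a - o, b - o
    return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

O1, A1, B1 = np.array([0.0, 0.0]), np.array([3.0, 1.0]), np.array([1.0, 2.0])
O2, A2, B2 = similarity(np.stack([O1, A1, B1]),
                        s=1.7, theta=0.6, t=np.array([5.0, -2.0]))

assert np.isclose(angle_at(O1, A1, B1), angle_at(O2, A2, B2))   # angle invariant
assert np.isclose(np.linalg.norm(A1 - O1) / np.linalg.norm(B1 - O1),
                  np.linalg.norm(A2 - O2) / np.linalg.norm(B2 - O2))  # ratio invariant
```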

2.2.3. Similar Crater Neighborhood Structure Feature Search

For a pair of lunar orbiter images with disparate illumination conditions, the initial step is to perform crater feature detection using the crater feature detector outlined in Section 2.1. Where prior scale information is available, the diameters of the crater features are constrained so that the features retained in both images fall within the same ground-scale range. This improves the adaptability of the proposed method to images with significant scale variations. Because scale information is readily available for remote sensing images, prior scale information is employed in the matching scenarios to closely simulate real-world conditions. The main task that follows is to identify similar structures between the two crater feature sets, which are represented as follows:
$$\chi = \{(x_i, y_i, \phi_i)\}_{i \in I}, \quad \psi = \{(x_j, y_j, \phi_j)\}_{j \in J} \quad (5)$$
where $\chi$ and $\psi$ are two distinct crater feature sets extracted from MiLOIs of the same region; $I$ and $J$ are their respective index sets; $x$ and $y$ are the coordinates of the crater center; and $\phi$ is the diameter of the crater.
To keep the computational complexity of the similar-structure search manageable, we define the structure formed by a central crater (CC) and its nearby craters (NCs) as the foundational structural unit, designated a CNSF. Specifically, the NCs of each crater can be found quickly using a KD-tree. The size of the neighborhood is determined by the value of K, which specifies that only the K craters closest to the CC, measured by Euclidean distance, are considered NCs. The two sets of CNSFs are expressed as follows:
$$\chi_{CNSF} = \{CNSF_i\}_{i \in I}, \quad CNSF_i = \{(x_i, y_i, \phi_i), \{(x_p, y_p, \phi_p)\}_{p \in K_i}\}$$
$$\psi_{CNSF} = \{CNSF_j\}_{j \in J}, \quad CNSF_j = \{(x_j, y_j, \phi_j), \{(x_q, y_q, \phi_q)\}_{q \in K_j}\} \quad (6)$$
where $\chi_{CNSF}$ and $\psi_{CNSF}$ are the sets of CNSFs, and $K_i$ and $K_j$ are the index sets of the $K$ craters closest to craters $(x_i, y_i, \phi_i)$ and $(x_j, y_j, \phi_j)$, respectively.
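A minimal sketch of this construction step, assuming craters are given as an (N, 3) array of (x, y, diameter) and using SciPy's KD-tree, might look as follows; the function and variable names are illustrative:

```python
# Sketch of CNSF construction with a KD-tree: for each detected crater,
# the K nearest craters become its NCs.
import numpy as np
from scipy.spatial import cKDTree

def build_cnsfs(craters, K=15):
    """craters: (N, 3) array of (x, y, diameter).
    Returns a list of CNSFs as (central_crater, (K, 3) array of NCs)."""
    centers = craters[:, :2]
    tree = cKDTree(centers)
    # Query K + 1 neighbors: the nearest neighbor of a point is itself.
    _, idx = tree.query(centers, k=K + 1)
    return [(craters[i], craters[neighbors[1:]])
            for i, neighbors in enumerate(idx)]
```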
Determining the correspondence of crater features within a structure is not straightforward because of omitted or falsely detected features. When searching for similar CNSFs, we first assume that the CCs of the two CNSFs correspond to each other. Then, the neighborhood structure is decomposed into multiple angular structures, and an exhaustive comparison of all angular structures within the two CNSFs is performed to eliminate non-corresponding NCs. If enough NCs remain, the two CNSFs are identified as similar.
For a CNSF, the CC can form an angular structure with any two NCs in a clockwise direction. The geometric parameters of an angular structure comprise six values: the angle $\beta$, the two edge lengths $S_1$ and $S_2$, and the diameters of the three craters, $\phi_0$, $\phi_1$, and $\phi_2$. According to this rule, each NC forms angular structures with the other $K-1$ NCs, and a total of $K(K-1)/2$ angular structures can be formed when there are $K$ NCs.
The direction for angle calculation is set to clockwise, whereas the initial edge is uncertain when splitting different CNSFs into angular structures. The geometric parameter values of an angular structure formed by the same pair of NCs and CC may therefore differ when the initial edge differs, as illustrated in Figure 4. According to the principle of similarity invariants, if the six geometric parameters of two angular structures satisfy the following equations, the two angular structures are similar:
$$\begin{cases} \beta' = \beta \\ S'_2/S'_1 = S_2/S_1 \\ \phi'_1/\phi'_0 = \phi_1/\phi_0 \\ \phi'_2/\phi'_0 = \phi_2/\phi_0 \end{cases} \quad \text{or} \quad \begin{cases} \beta' = 360° - \beta \\ S'_2/S'_1 = S_1/S_2 \\ \phi'_1/\phi'_0 = \phi_2/\phi_0 \\ \phi'_2/\phi'_0 = \phi_1/\phi_0 \end{cases} \quad (7)$$
In these equations, $\beta$ and $\beta'$ are the angles of the two similar angular structures, $S$ denotes the length of an edge forming the angle, and $\phi$ denotes a crater diameter.
If a pair of CNSFs contains $\xi$ corresponding NCs, $\xi(\xi-1)/2$ similar angular structures can be identified according to the split criteria. Conversely, when performing a brute-force comparison of angular structures on a pair of CNSFs and obtaining multiple similar angular structures, we only need to count the frequency of each NC pair's occurrence among all the similar angular structures to identify the corresponding NCs. In the ideal case, each corresponding NC pair appears $\xi - 1$ times, so the mode of the frequency counts is taken as $\xi - 1$. If an NC pair appears $\xi - 1$ times within these structures, it is considered a possibly correct correspondence; otherwise, it is deemed a false correspondence. Based on this algorithm, non-corresponding NCs within a CNSF pair can be eliminated and the correspondence between NCs established, thereby completing a single iteration of the similar CNSF search.
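The following sketch illustrates this voting scheme under simplifying assumptions: only the first branch of Equation (7) is checked, a consistent clockwise edge ordering is assumed, and the tolerances are fixed illustrative constants rather than the derived bounds of Equation (8):

```python
# Sketch of the NC-correspondence voting inside a similar-CNSF search.
# Craters are (x, y, phi) NumPy arrays; a CNSF is (central_crater, ncs).
import numpy as np
from itertools import combinations
from collections import Counter

def angular_structure(cc, nc1, nc2):
    """Six parameters (beta, S1, S2, phi0, phi1, phi2) of the angle formed
    at the central crater cc by nearby craters nc1 and nc2."""
    u, v = nc1[:2] - cc[:2], nc2[:2] - cc[:2]
    S1, S2 = np.linalg.norm(u), np.linalg.norm(v)
    beta = np.degrees(np.arccos(np.clip(np.dot(u, v) / (S1 * S2), -1.0, 1.0)))
    return beta, S1, S2, cc[2], nc1[2], nc2[2]

def similar(a, b, tol_beta=5.0, tol_ratio=0.1):
    """First branch of Eq. (7) with fixed, illustrative tolerances."""
    return (abs(a[0] - b[0]) < tol_beta
            and abs(a[2] / a[1] - b[2] / b[1]) < tol_ratio
            and abs(a[4] / a[3] - b[4] / b[3]) < tol_ratio
            and abs(a[5] / a[3] - b[5] / b[3]) < tol_ratio)

def corresponding_ncs(cnsf_a, cnsf_b):
    """Return the NC index pairs voted as corresponding."""
    (cc_a, ncs_a), (cc_b, ncs_b) = cnsf_a, cnsf_b
    votes = Counter()
    for p1, p2 in combinations(range(len(ncs_a)), 2):
        sa = angular_structure(cc_a, ncs_a[p1], ncs_a[p2])
        for q1, q2 in combinations(range(len(ncs_b)), 2):
            if similar(sa, angular_structure(cc_b, ncs_b[q1], ncs_b[q2])):
                votes[(p1, q1)] += 1        # vote the two NC pairings
                votes[(p2, q2)] += 1
    if not votes:
        return []
    # For xi true correspondences, each pair ideally appears xi - 1 times;
    # take the mode of the counts and keep the pairs that reach it.
    mode = Counter(votes.values()).most_common(1)[0][0]
    return [pair for pair, n in votes.items() if n == mode]
```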
Considering the presence of feature localization errors, the geometric parameters of similar structures do not strictly satisfy Equation (7). Assuming a tolerable feature localization error of $\delta$ pixels and a tolerable crater diameter detection error of $\eta\%$, angular structures are considered similar if the equation residuals are less than the thresholds. Given that the central crater is fixed, the maximum angular deviation due to the localization error of the two neighboring craters is $\arcsin(\delta/S_1) + \arcsin(\delta/S_2)$; the maximum deviation in the edge-length ratio is $\delta(S_2 + S_1)/(S_1(S_1 - \delta))$; and the maximum deviation in the crater diameter ratio is $2\phi_1\eta\%/(\phi_0(1 - \eta\%))$. Taking the left-hand case of Equation (7) as an example, the maximum deviation expressions are as follows:
$$\begin{aligned} \Delta\beta &= |\beta' - \beta| \le \arcsin\tfrac{\delta}{S'_1} + \arcsin\tfrac{\delta}{S'_2} + \arcsin\tfrac{\delta}{S_1} + \arcsin\tfrac{\delta}{S_2} \\ \Delta R_S &= \left|\tfrac{S'_2}{S'_1} - \tfrac{S_2}{S_1}\right| \le \tfrac{\delta(S'_2 + S'_1)}{S'_1(S'_1 - \delta)} + \tfrac{\delta(S_2 + S_1)}{S_1(S_1 - \delta)} \\ \Delta R_{\phi_1} &= \left|\tfrac{\phi'_1}{\phi'_0} - \tfrac{\phi_1}{\phi_0}\right| \le \tfrac{2\phi'_1\eta\%}{\phi'_0(1 - \eta\%)} + \tfrac{2\phi_1\eta\%}{\phi_0(1 - \eta\%)} \\ \Delta R_{\phi_2} &= \left|\tfrac{\phi'_2}{\phi'_0} - \tfrac{\phi_2}{\phi_0}\right| \le \tfrac{2\phi'_2\eta\%}{\phi'_0(1 - \eta\%)} + \tfrac{2\phi_2\eta\%}{\phi_0(1 - \eta\%)} \end{aligned} \quad (8)$$
where $\Delta\beta$, $\Delta R_S$, $\Delta R_{\phi_1}$, and $\Delta R_{\phi_2}$ are the equation residuals. Based on experience, setting the thresholds to one-third of the maximum deviations is appropriate.
According to the above strategy, similar CNSFs can be searched and matched. A ground-truth corresponding CNSF pair with K = 10, from images under varying illumination conditions, is shown in Figure 5. The crater indicated by the yellow line has no corresponding feature on the other side and is excluded by the similar CNSF search, whereas the crater indicated by the green line is retained, with its correspondence clearly established. The results of the similar CNSF search can be expressed as follows:
$$\{(CNSF_i^\tau, CNSF_j^\tau) \mid i \in I,\; j \in J,\; \text{and } CNSF_i^\tau \sim CNSF_j^\tau\} \quad (9)$$
where ~ indicates that the two CNSFs satisfy the similarity criteria, while the superscript τ denotes that the non-corresponding craters have been excluded from the two similar CNSFs and their correspondence has been established.
The similar CNSF search only ensures that two CNSFs exhibit similar topological structures within a certain error threshold. Several potential corresponding CNSFs might be identified for a given CNSF, with only one being the true match. The objective of the next section is to further constrain the similarity of CNSFs and identify the most similar one.

2.2.4. CNSF Distance Measurement

Measuring the distance between features is crucial for feature matching. The CNSF distance measurement essentially quantifies the degree of similarity between two CNSFs that have been identified as similar in the previous section. Since the one-to-one correspondence of crater features within a CNSF has already been established, we can refer to the similarity invariance principle introduced earlier. In an ideal similar geometric structure, the vectors $D$ and $D'$ satisfy $D = s^2 D'$ according to Equation (4). For two collinear vectors with all-positive components, their directions are identical and their cosine similarity is 1. The CNSF distance measurement is therefore calculated as shown in Equation (10):
$$d_{CNSF} = \begin{cases} \infty, & \dim(D) < \xi_{min} \\ 1 - \dfrac{D \cdot D'}{\|D\|_2\,\|D'\|_2}, & \text{otherwise} \end{cases} \quad (10)$$
where $D$ and $D'$ are the vectors composed of all angular structures within the similar CNSFs, with a dimension of $\xi - 1$; $\|\cdot\|_2$ denotes the Euclidean norm of a vector; $\cdot$ denotes the dot product; and $\dim(\cdot)$ denotes the dimension of a vector. $\xi_{min}$ is the minimum number of corresponding NCs required in a pair of matched CNSFs. Specifically, if the number of remaining NCs in a similar CNSF pair is less than $\xi_{min}$, the distance between the two CNSFs is considered infinite.
The strategy for determining matched CNSFs is the Nearest Neighbor Distance Ratio (NNDR): if the ratio of the closest distance to the second-closest distance, calculated using Equation (10), is less than a specified threshold (e.g., 0.1), the CNSFs are considered corresponding features. Additionally, the closest distance itself must be within a threshold close to zero (e.g., 0.1).
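A minimal sketch of the distance of Equation (10) and the NNDR rule follows, using the 0.1 thresholds quoted above; the function names are illustrative:

```python
# Sketch of the CNSF distance measurement (Eq. (10)) and the NNDR rule.
import numpy as np

def cnsf_distance(D, D_prime, xi_min=3):
    """1 minus the cosine similarity of the two structure vectors."""
    if D.shape[0] < xi_min:
        return np.inf                       # too few corresponding NCs
    return 1.0 - np.dot(D, D_prime) / (np.linalg.norm(D) * np.linalg.norm(D_prime))

def nndr_match(distances, ratio=0.1, max_dist=0.1):
    """distances: array of CNSF distances to all candidate CNSFs.
    Returns the index of the accepted match, or None."""
    if len(distances) < 2:
        return None
    order = np.argsort(distances)
    best, second = distances[order[0]], distances[order[1]]
    if best < max_dist and best < ratio * second:
        return int(order[0])
    return None
```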

2.2.5. Mismatched CNSF Removal

The outlier removal design for CNSFs ensures the reliability of the matching results. Structural similarity does not guarantee a genuine match; inevitably, there are erroneous correspondences among the matched CNSFs generated in the previous section. Visualization reveals that these erroneous correspondences are easily identifiable and removable (as illustrated in Figure 6): although they have similar structures, they do not exhibit similar geometric transformations. Each corresponding CNSF can be used to compute a set of similarity transformation parameters. In correctly matched local regions, the similarity transformation parameters remain consistent, whereas mismatched CNSFs exhibit significant differences in their computed parameters. Therefore, mismatch elimination can be achieved by evaluating the consistency of the similarity transformation parameters. First, the similarity transformation between images is calculated from a selected pair of corresponding CNSFs and then validated against the remaining CNSF pairs. If the number of CNSF pairs that satisfy this transformation exceeds a predefined threshold $\rho$, the selected pair of corresponding CNSFs is considered a correct correspondence. Whether a pair of CNSFs $(P, P')$ satisfies the transformation $T$ is determined by the following expression:
$$count\big(\|T(P) - P'\| < \epsilon\big) > \left\lfloor \tfrac{2}{3}\, num(P) \right\rfloor \quad (11)$$
where $\epsilon$ is the threshold of the transformation residual, $count(\cdot)$ is the number of points that satisfy the inequality, $num(P)$ is the total number of points, and $\lfloor \cdot \rfloor$ denotes the floor function, which rounds a value down to the nearest integer.
Finally, all correct CNSF correspondences are merged, duplicate corresponding craters are removed, and the final matching result is obtained.
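A sketch of this consistency check is given below; it pairs a standard least-squares (Kabsch/Umeyama-style) similarity estimator, which the paper does not specify, with the inlier-counting rule of Equation (11):

```python
# Sketch of mismatched-CNSF removal by local transformation consistency.
import numpy as np

def fit_similarity(P, Q):
    """Least-squares 2-D similarity transform with Q ~ s * P @ R.T + t.
    P, Q: (N, 2) arrays of corresponding points."""
    mp, mq = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mp, Q - mq
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)     # Kabsch-style rotation fit
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T      # proper rotation (no reflection)
    s = (Qc * (Pc @ R.T)).sum() / (Pc ** 2).sum()
    t = mq - s * mp @ R.T
    return s, R, t

def is_consistent(P, Q, s, R, t, eps=5.0):
    """Eq. (11): enough points must fit the transform within eps pixels."""
    residuals = np.linalg.norm(s * P @ R.T + t - Q, axis=1)
    return (residuals < eps).sum() > int(2 * len(P) / 3)
```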

3. Experimental Results and Analyses

3.1. Experimental Data and Metrics

The LROC Narrow Angle Camera (NAC) comprises two linear pushbroom cameras (NAC-Left and NAC-Right) that offer the highest-resolution lunar orbiter images to date, typically 0.5 to 2 m per pixel [43]. Since 2009, the LROC NACs have continuously observed the Moon for over 15 years, accumulating many repeated or overlapping observations that provide images of the lunar surface under a variety of illumination conditions. NAC images have been employed for numerous purposes [44], including large-area mapping, high-precision positioning, 3D reconstruction, and landing-site environment analysis. There is therefore a pressing need for the co-registration of multi-illumination NAC images, and this paper uses NAC images as the experimental data source for creating the training and evaluation datasets.
The image data used for creating the datasets were selected and downloaded from the Planetary Data System (PDS) Geosciences Node Lunar Orbital Data Explorer website (https://ode.rsl.wustl.edu/moon/index.aspx, accessed on 1 March 2024). The relevant images were then map projected using the United States Geological Survey (USGS) Integrated Software for Imagers and Spectrometers (ISIS) [45,46], converted into GeoTIFFs, and cropped to the specified areas using GDAL.
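As an illustration of the conversion and cropping step, a sketch using the GDAL Python bindings is given below; the file names and window coordinates are hypothetical placeholders:

```python
# Hedged sketch of the GeoTIFF conversion and cropping step with GDAL.
from osgeo import gdal

gdal.UseExceptions()
ulx, uly = 1000.0, 2000.0                   # hypothetical upper-left corner
# projWin is [ulx, uly, lrx, lry] in projected map coordinates; a
# 1000 x 1000 m window matches the scene size used in Section 3.1.2.
gdal.Translate("scene_crop.tif", "nac_projected.cub",
               format="GTiff", projWin=[ulx, uly, ulx + 1000.0, uly - 1000.0])
```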

3.1.1. Training Dataset

Eight representative, globally distributed regions were selected, including maria and highlands at different latitudes on both the nearside and farside of the Moon. All available multi-illumination NAC images of each region were processed to create a training dataset, ensuring the crater feature detector's generalization capability under various illumination conditions across the entire lunar surface. For each region, images were preprocessed, uniformly resampled to 0.5 m resolution, and manually co-registered. Craters were then manually annotated using CraterTools [47], resulting in a total of 4682 annotated craters with a minimum diameter of 6 pixels. Notably, to minimize the impact of illumination on the accuracy of crater feature localization, only visually interpretable, perfectly circular craters were annotated. Finally, the images and CraterTools annotations were converted to the YOLO dataset format with image blocks of 640 × 640 pixels, yielding a training dataset of 1604 labeled image blocks.
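For reference, a sketch of the conversion from one annotated crater (center and diameter in pixels) to a YOLO-format label line for a 640 × 640 image block is shown below; craters are treated as square bounding boxes, and the example values are illustrative:

```python
# YOLO label format: "class x_center y_center width height",
# all normalized to [0, 1] by the image block size.
def crater_to_yolo_line(x, y, diameter, block=640, cls=0):
    return (f"{cls} {x / block:.6f} {y / block:.6f} "
            f"{diameter / block:.6f} {diameter / block:.6f}")

print(crater_to_yolo_line(320.0, 128.0, 24.0))
# -> 0 0.500000 0.200000 0.037500 0.037500
```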

3.1.2. Evaluation Dataset

To validate the effectiveness of the proposed method, multi-illumination images of three scenes at different latitudes (S1–S3 in Figure 7) were selected for the experiments. S1 is the Chang’e 3 landing site (19.5117°W, 44.1214°N); S2 is the Apollo 11 landing site (23.4731°E, 0.6742°N); and S3 is Peak Near Shackleton (123.7783°E, 88.8064°S), a region of interest for lunar south pole exploration [48,49]. All selected scenes are 1000 × 1000 m. These regions have sufficient repeat observations, providing images under various illumination conditions. After preprocessing the images and removing those with similar illumination conditions, we obtained an image set with multiple scales and illumination conditions that represents realistic lunar orbiter image matching scenarios. The distribution of solar azimuth, incidence, and resolution for all 42 selected images is shown in Figure 7. For Scenes S1, S2, and S3, the maximum incidence angle differences are 41, 78, and 4 degrees, respectively, and the maximum solar azimuth angle differences are 164, 180, and 180 degrees, respectively. Sample image pairs with typical illumination differences are shown in Figure 8.
Quantitative evaluation of the matching results requires the true transformation between images. However, owing to various errors, the exact ground-truth geometric transformation is difficult to obtain, so an approximate ground truth is commonly used for evaluation. Specifically, for each scene, we select a visually favorable image as the reference and manually determine five correspondences between it and each other image to estimate the similarity transformations between all image pairs in the scene as approximate ground-truth geometric transformations. The fitting residuals are all within one pixel. The three scenes consist of 10, 10, and 22 images, respectively, yielding a total of 321 multi-illumination image pairs.
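A sketch of this fitting step is shown below, reusing the fit_similarity estimator sketched in Section 2.2.5; the five point pairs are hypothetical stand-ins for the manual picks:

```python
# Sketch: fit the approximate ground-truth similarity transform from five
# manual correspondences and report the fitting RMSE (the text keeps fits
# with residuals under one pixel). fit_similarity is from the Section
# 2.2.5 sketch; the coordinates below are hypothetical.
import numpy as np

ref_pts = np.array([[ 12.0,  34.0], [500.2,  80.5], [230.8, 610.1],
                    [720.4, 300.9], [910.7, 850.3]])
tgt_pts = np.array([[ 20.1,  30.2], [505.9,  77.0], [240.3, 605.8],
                    [728.8, 295.4], [917.5, 846.0]])

s, R, t = fit_similarity(ref_pts, tgt_pts)
rmse = np.sqrt(np.mean(np.linalg.norm(s * ref_pts @ R.T + t - tgt_pts,
                                      axis=1) ** 2))
print(f"fit RMSE = {rmse:.2f} px")
```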

3.1.3. Evaluation Metrics

The following five metrics were employed to evaluate the matching results; a computational sketch of the first three follows the list:
1. Number of Correct Matches (NCM). Matches are considered correct if the residuals calculated using the approximate ground-truth similarity transformation parameters are less than 5 pixels. Residuals are calculated as follows:
$$\sigma_i = \|p'_i - T_{true} \cdot p_i\|_2 \quad (12)$$
where $p_i$ and $p'_i$ are a pair of matched points, $T_{true}$ represents the approximate ground-truth similarity transformation parameters, $\|\cdot\|_2$ denotes the Euclidean norm, and $\cdot$ represents application of the transformation.
2. Rate of Correct Matches (RCM). RCM is defined as the ratio of NCM to the total number of final matches (TNM):
$$RCM = \frac{NCM}{TNM} \quad (13)$$
3. Root of Mean-Squared Error (RMSE). RMSE is the root mean square of the residuals calculated from the approximate ground-truth transformation. The RMSE of the correct matches is defined in Equation (14), where $\sigma_i$ is obtained from Equation (12); in addition, the RMSE is set to 5 for failed matching pairs.
$$RMSE = \sqrt{\frac{1}{NCM}\sum_{i=1}^{NCM}\sigma_i^2} \quad (14)$$
4. Success Rate (SR). SR is the ratio of the number of successfully matched image pairs to the total number of matched image pairs. Matching is defined as successful when the NCM between a pair of images exceeds 3.
5. Matching Time (MT). This metric is primarily used to evaluate algorithm performance during the parameter analysis.
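The sketch below computes NCM, RCM, and RMSE for one image pair, given the matched points and an approximate ground-truth similarity transform (s, R, t); the function name is illustrative:

```python
# Sketch of the evaluation metrics of Eqs. (12)-(14) for one image pair.
import numpy as np

def evaluate(p, p_prime, s, R, t, thr=5.0):
    """p, p_prime: (N, 2) arrays of matched points. Returns NCM, RCM, RMSE."""
    sigma = np.linalg.norm(s * p @ R.T + t - p_prime, axis=1)   # Eq. (12)
    correct = sigma < thr
    ncm = int(correct.sum())                                    # NCM
    rcm = ncm / len(p)                                          # Eq. (13)
    # Eq. (14); the text assigns RMSE = 5 when matching fails.
    rmse = float(np.sqrt(np.mean(sigma[correct] ** 2))) if ncm else 5.0
    return ncm, rcm, rmse
```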

3.2. Custom Crater Feature Detector

To test the proposed matching algorithm, we performed transfer learning on the YOLOv9 model using our custom dataset, resulting in a crater feature detector. The YOLOv9-C model and its weights pretrained on the MS COCO 2017 dataset were downloaded from https://github.com, accessed on 20 November 2024, and training was conducted on a single NVIDIA GeForce RTX 3090 GPU (NVIDIA, Santa Clara, CA, USA) under the PyTorch framework (Python 3.9). The custom training dataset created from MiLOIs was randomly subdivided into a training set and a validation set at a 4:1 ratio, and the model was trained for 300 epochs with a batch size of 8 to obtain the crater feature detector. Since only easily recognizable craters were manually annotated, the recall and precision are both close to 1, indicating that the obtained crater detector has good detection capabilities. Figure 9 illustrates the crater detection results for a pair of MiLOIs. After cropping, the two images cover the same area, with an incidence angle difference of 41.1°, a solar azimuth difference of 76.7°, and a minor scale difference. A total of 241 craters (labeled in green) were detected in the left image and 201 craters (labeled in blue) in the right image; 138 craters were detected in both images (labeled in red), accounting for 57.3% and 68.6% of the respective detections. This demonstrates that the trained crater detector is robust to illumination changes.

3.3. Matching Experiments and Results

Three state-of-the-art multimodal matching algorithms proposed in recent years, i.e., HAPCG [19], ML-HLMO [26], and WSSF [28], were tested on MiLOIs as comparison experiments. These methods were run using the publicly available code shared by their authors, with the recommended parameter settings. Additionally, the classic feature matching algorithm SIFT was used in the experiments. The experiments were performed on a DELL T7920 workstation running Windows 10 x64, with an Intel Xeon Silver 4216 CPU (2.10 GHz), 64 GB RAM, and an NVIDIA GeForce RTX 3090 GPU. The evaluation dataset described in Section 3.1.2 was used in the comparison experiments.

3.3.1. Qualitative Evaluation

Three pairs of images were selected from each of the three scenes in the evaluation dataset, for a total of nine image pairs (shown in Figure 8), for qualitative comparison. The matching results are shown in Figure 10, Figure 11 and Figure 12, where blue lines indicate incorrect matches and yellow lines indicate correct matches.
Scene S1 is located at 44 degrees north latitude, and the scale differences within the selected pairs are minimal. In the first pair, the incidence angle differs by 17 degrees and the azimuth angle by 88 degrees; in the second pair, the incidence angle differs by 38 degrees and the azimuth angle by 110 degrees. Only the WSSF and CNSFM methods successfully matched these two pairs. In the third pair, the incidence angle differs by 21 degrees and the azimuth angle by 22 degrees, the smallest illumination difference, and all five methods successfully matched this pair. For the matched image pairs, the CNSFM and WSSF results contain no blue lines, indicating higher reliability of the matching results.
Scene S2 is located near the equator, where sunlight is close to normal incidence (very small incidence angle). A small incidence angle may also mean a loss of image contrast because of overexposure during imaging. The three pairs of images selected from Scene S2 vary simultaneously in both scale and illumination, presenting greater matching challenges: the incidence angle differences are 13°, 57°, and 44°, respectively, with azimuth angle differences of 179°, 3°, and 176°, and scale differences of 1.7, 2.0, and 3.4 times. SIFT failed on all three pairs. The other four methods successfully matched the first pair, with WSSF and CNSFM demonstrating higher reliability. The second pair, with the smallest azimuth angle difference but the greatest incidence angle difference, was matched successfully by ML-HLMO, WSSF, and CNSFM, with WSSF and CNSFM showing the more reliable outcomes. The third pair exhibits significant differences in both illumination and scale, posing the greatest matching difficulty; only ML-HLMO and CNSFM matched it successfully, but the ML-HLMO results included numerous incorrect matches.
Scene S3 is near the South Pole, where the solar incidence angle remains consistently close to 90° and the terrain is rugged. The images in S3 exhibit extensive shadowing, and the shadow directions vary with azimuth angle, presenting the most challenging matching scenario. For the three pairs of images selected from Scene S3, the incidence angle differences are less than 2°, the scale differences are less than 1.5 times, and the azimuth angle differences are 62°, 95°, and 158°, respectively. Among the five methods, only CNSFM successfully matched all three pairs, with high reliability in the matching results.
Based on the qualitative analysis of the multi-illumination lunar orbiter image matching results, among the five methods, SIFT is the most sensitive to illumination changes; HAPCG and ML-HLMO exhibit some resistance to illumination variations but are unstable and prone to mismatches. WSSF shows relatively good resistance to illumination changes, although its performance in polar regions is suboptimal. In contrast, the proposed CNSFM method consistently produces highly reliable matching results, demonstrating exceptional robustness.

3.3.2. Quantitative Evaluation

Table 1 displays the SR of matching across the 321 pairs of images from the three scenes using the five algorithms. The proposed algorithm achieved a 100% SR in Scenes S1 and S2. In Scene S3, its SR is 72.3%, which is 4.2 times that of SIFT, 3.3 times that of HAPCG, 5.2 times that of ML-HLMO, and 2.3 times that of WSSF. The method proposed in this paper is thus evidently the most robust to illumination differences; among the other methods, WSSF exhibits the best overall performance.
In feature matching, the positioning accuracy of invariant features is often difficult to determine directly, so the RMSE based on the manually acquired approximate similarity-transformation ground truth was used as the quantitative metric of matching accuracy. Table 2 presents the average RMSE of the five algorithms across the three scenes, while Figure 13 illustrates the RMSE of the matching results for all 321 image pairs. The method proposed in this paper consistently achieves lower average RMSE values than the other algorithms.
As seen in the figure, aside from the failed matches, the RMSE of HAPCG and ML-HLMO is mostly concentrated between 2 and 3 pixels, while that of WSSF is around 2 pixels. In contrast, the RMSE of the matching results obtained by the proposed method is largely consistent with that of SIFT, generally around 1 pixel. This indicates that craters with low feature localization accuracy were largely excluded and that the crater features matched by the proposed method exhibit high positioning accuracy.
Additionally, in lunar remote sensing image processing, the reliability of image matching results is crucial. Table 3 presents the average correct matching point ratio for all successfully matched pairs, while Figure 14 visualizes the specific distribution. It can be observed that the proposed method achieves exceptionally high RCM, reaching 100% in regions S1 and S3, and an average accuracy of 99.3% in region S2, outperforming the other four methods. This indicates that the proposed method can provide highly reliable matching results.
Table 4 shows the average NCM values across the 321 pairs of images. Because the number of craters available in a scene is limited, the NCM values obtained by the proposed method are relatively small. The other algorithms exhibit significant fluctuations in NCM, ranging from tens to thousands for successfully matched image pairs, resulting in relatively large average values.
From the matching experiment results, the proposed method demonstrates exceptional robustness to illumination differences and achieves highly reliable matching results. The WSSF method follows, exhibiting a certain degree of resistance to illumination variations, possibly due to its use of structural information. By fully leveraging the structural relationships between features, the proposed method significantly enhances matching robustness under extreme illumination differences.

3.3.3. Parameter Study

The value of parameter K is critical when matching using the proposed method. K determines the size of the neighborhood used to construct the CNSF, which significantly impacts algorithm performance. Other parameters are set based on existing experience. Qualitatively, we can initially analyze the impact of K: smaller K values result in faster computation but weaker robustness to gross errors, leading to mismatches due to similar structures. Increasing the K value increases the number of matches but also substantially increases matching computation time. Moreover, due to various potential errors and deformations, matching accuracy may also be affected.
To quantitatively analyze the impact of K on algorithm performance and determine the most suitable K value for practical applications, experiments with 6 different K values were conducted. The experiments used the evaluation dataset of region S1, evaluating NCM, SR, and MT as metrics. The experimental results are shown in Table 5. Figure 15 provides a more intuitive visualization of the matching results for a sample image pair under different K values.
Experimental results indicate that setting K too low leads to matching failures, as the number of craters used to construct the CNSF is insufficient, resulting in poor structural robustness. Increasing K significantly enhances the matching performance (increased NCM). However, beyond a certain point, the performance stabilizes and may even deteriorate if K is set too high; in addition, increasing K significantly increases MT. Based on these results, it is recommended that K be set to 15 or 20, and these settings were maintained in the experiments. The other parameters were set empirically to $\delta = 3$, $\eta = 25$, $\xi_{min} = 3$, $\epsilon = 5$, and $\rho = 3$.

3.3.4. Ablation Study

To further validate the effectiveness of the mismatched CNSF removal (MCR) algorithm in the proposed method, we conducted an ablation experiment using the dataset from region S1. By comparing the RCM of matching results with and without this algorithm, we assess its effectiveness. The experiment result is shown in Table 6, with two visualization examples presented in Figure 16. The results indicate that the MCR algorithm effectively enhances the reliability of the matching results.

4. Discussion

All image pairs in region S1 and S2 were successfully matched, whereas the success rate in region S3 was not perfect. This is primarily due to the low solar elevation angles near the lunar south pole, where azimuthal differences approaching 180 degrees result in overlapping image regions predominantly covered by shadows (as shown in Figure 17), making direct matching extremely difficult. Since CNSFM relies on a certain number of crater features for structure construction, it may fail when the overlapping area between image pairs is small or heavily shadowed. In addition, in some geologically young units (e.g., light plains or impact melt ponds), craters might be relatively sparse, which can also result in matching failure. These are limitations common to structure-dependent crater feature matching algorithms.
In region S3, the images exhibit small differences in solar incidence angles, with variations primarily in solar azimuth. Figure 18 illustrates the relationship between azimuth differences and successful matching, providing insight into the resistance of different algorithms to changes in illumination azimuth. The proposed method demonstrates the strongest robustness to differences in solar azimuth (147.7°), followed by WSSF (62°) and HAPCG (47°). ML-HLMO (34°) and SIFT (33°) exhibit the weakest resistance.
As shown in Figure 18, successful matches still occur even when the azimuth angle difference exceeds these angles. For the proposed method, this can be attributed to the fact that topographic occlusion-induced shadows do not completely obscure the overlapping regions of the imagery. In contrast, other methods rely on the symmetry of local shadows under such conditions. However, their matching accuracy suffers significantly: at an azimuth difference of 159.1°, the correct matching rates are 78.6% (HAPCG), 34.9% (ML-HLMO), and 58.3% (WSSF), whereas the proposed method achieves 100% accuracy.

5. Conclusions

This paper presented a lunar orbiter image feature matching method that is robust to illumination variations, leveraging CNSFs to achieve highly reliable matches. The performance of the proposed CNSFM method was evaluated using a dataset of 321 pairs of MiLOIs from three regions at different latitudes, with a maximum incidence angle difference of 78 degrees and a maximum solar azimuth angle difference of 180 degrees. The results indicated that the proposed method exhibits superior robustness to illumination variations, achieving a perfect matching success rate of 100% in the equatorial and mid-to-high latitude regions. Even in the polar regions, which are full of shadows due to the rugged terrain and very low sun elevation angles, the method attains a matching success rate of 72.3%, outperforming the 31.2% at best delivered by the other methods.
The CNSFM method is highly suitable for planetary remote sensing image processing applications, such as aligning overlapping images and constructing tie points for block adjustment. It can be applied to tasks such as landing-site mapping and other lunar scientific research.

Author Contributions

Conceptualization, B.X., B.L. and K.D.; methodology, B.X. and B.L.; software, B.X., Y.J. and Y.Z.; validation, B.X., B.L. and Y.K.; formal analysis, B.X.; investigation, B.X.; resources, B.X. and K.D.; data curation, B.X.; writing—original draft preparation, B.X., B.L. and K.D.; writing—review and editing, B.X., B.L., K.D. and W.-C.L.; visualization, B.X. and W.-C.L.; supervision, B.L. and K.D.; project administration, B.L.; funding acquisition, K.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (Grant No. 2022YFF0503100) and the Open Fund of the Key Laboratory of Aerospace Flight Dynamics Technology (Grant No. KGJ6142210220204).

Data Availability Statement

Both the datasets and the executable used for experiments in the study are publicly available at https://github.com/Bin501/CNSFM, accessed on 20 November 2024. The remaining data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors gratefully acknowledge all those who worked on the Planetary Data System archive (https://ode.rsl.wustl.edu/moon/index.aspx, accessed on 1 March 2024) to make the LROC NAC imagery publicly available.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Abbreviations

The following abbreviations are used in this manuscript:
MiLOIs: Multi-illumination Lunar Orbiter Images
CNSFs: Crater Neighborhood Structure features
CNSFM: Crater Neighborhood Structure based Feature Matching
CCs: Central Craters
NCs: Nearby Craters
LROC: Lunar Reconnaissance Orbiter Camera
NAC: Narrow Angle Camera
NCM: Number of Correct Matches
RCM: Rate of Correct Matches
RMSE: Root of Mean-Squared Error
SR: Success Rate
MT: Matching Time
MCR: Mismatched CNSF Removal

References

  1. Kirk, R.L.; Archinal, B.A.; Gaddis, L.R.; Rosiek, M.R. Lunar Cartography: Progress in the 2000s and Prospects for the 2010s. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. Remote Sens. 2012, 39, 489–494. [Google Scholar] [CrossRef]
  2. Di, K.; Liu, B.; Liu, Z.; Zou, Y. Review and prospect of lunar mapping using remote sensing data. Natl. Remote Sens. Bull. 2016, 20, 1230–1242. [Google Scholar] [CrossRef]
  3. Kim, J.; Lin, S.-Y.; Xiao, H. Remote Sensing and Data Analyses on Planetary Topography. Remote. Sens. 2023, 15, 2954. [Google Scholar] [CrossRef]
  4. Tong, X.; Liu, S.; Xie, H.; Xu, X.; Ye, Z.; Feng, Y.; Wang, C.; Jin, Y.; Chen, P.; Hong, Z.; et al. From Earth mapping to extraterrestrial planet mapping. Acta Geod. Cartogr. Sin. 2022, 51, 488–500. [Google Scholar]
  5. Di, K.; Oberst, J.; Karachevtseva, I.; Wu, B. Topographic Mapping of the Moon in the 21st Century: From Hectometer to Millimeter Scales. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 1117–1124. [Google Scholar] [CrossRef]
  6. Alexandrov, O.; Beyer, R.A. Multiview Shape-From-Shading for Planetary Images. Earth Space Sci. 2018, 5, 652–666. [Google Scholar] [CrossRef]
  7. Liu, B.; Jia, M.; Di, K.; Oberst, J.; Xu, B.; Wan, W. Geopositioning precision analysis of multiple image triangulation using LROC NAC lunar images. Planet. Space Sci. 2018, 162, 20–30. [Google Scholar] [CrossRef]
  8. Liu, W.-C.; Wu, B. An integrated photogrammetric and photoclinometric approach for illumination-invariant pixel-resolution 3D mapping of the lunar surface. ISPRS J. Photogramm. Remote Sens. 2020, 159, 153–168. [Google Scholar] [CrossRef]
  9. Wagner, R.V.; Nelson, D.M.; Plescia, J.B.; Robinson, M.S.; Speyerer, E.J.; Mazarico, E. Coordinates of anthropogenic features on the Moon. Icarus 2017, 283, 92–103. [Google Scholar] [CrossRef]
  10. Liu, Z.; Peng, M.; Di, K.; Wan, W.; Liu, B.; Wang, Y.; Xie, B.; Kou, Y.; Wang, B.; Zhao, C.; et al. High-Precision Visual Localization of the Chang’e-6 Lander. Natl. Remote Sens. Bull. 2024, 28, 1649–1656. [Google Scholar] [CrossRef]
  11. Zhang, Y.; Liu, B.; Di, K.; Liu, S.; Yue, Z.; Han, S.; Wang, J.; Wan, W.; Xie, B. Analysis of Illumination Conditions in the Lunar South Polar Region Using Multi-Temporal High-Resolution Orbital Images. Remote Sens. 2023, 15, 5691. [Google Scholar] [CrossRef]
  12. Chen, C.; Ye, Z.; Xu, Y.; Liu, D.; Huang, R.; Zhou, M.; Xie, H.; Tong, X. Large-Scale Block Bundle Adjustment of LROC NAC Images for Lunar South Pole Mapping Based on Topographic Constraint. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 2731–2746. [Google Scholar] [CrossRef]
  13. Wang, Y.; Nan, J.; Zhao, C.; Xie, B.; Gou, S.; Yue, Z.; Di, K.; Zhang, H.; Deng, X.; Sun, S. A Catalogue of Impact Craters and Surface Age Analysis in the Chang’e-6 Landing Area. Remote Sens. 2024, 16, 2014. [Google Scholar] [CrossRef]
  14. Xie, B.; Liu, B.; Di, K.; Zhang, Y.; Wang, B.; Zhao, C. Analysis of the temporal and spatial characteristics of lunar reconnaissance orbiter’s orbit error based on multi-coverage narrow angle camera images. Geo-Spat. Inf. Sci. 2024. [Google Scholar] [CrossRef]
  15. Di, K.; Liu, B.; Peng, M.; Xin, X.; Jia, M.; Zuo, W.; Ping, J.; Wu, B.; Oberst, J. Scheme and Key Techniques for Construction of New-Generation Lunar Global Control Network Using Multi-Mission Data. Geom. Inform. Sci. Wuhan Univ. 2018, 43, 2099–2105. [Google Scholar] [CrossRef]
  16. Di, K.; Liu, B.; Xin, X.; Yue, Z.; Ye, L. Advances and applications of lunar photogrammetric mapping using orbital images. Acta Geod. Cartogr. Sin. 2019, 48, 1562–1574. [Google Scholar]
  17. Huang, R.; Wan, G.; Zhou, Y.; Ye, Z.; Xie, H.; Xu, Y.; Tong, X. Fast Double-Channel Aggregated Feature Transform for Matching Planetary Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 9282–9293. [Google Scholar] [CrossRef]
  18. Wan, G.; Huang, R.; Xu, Y.; Ye, Z.; You, Q.; Yan, X.; Tong, X. Efficient Phase Congruency-Based Feature Transform for Rapid Matching of Planetary Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2025, 22, 1–5. [Google Scholar] [CrossRef]
19. Yao, Y.; Zhang, Y.; Wan, Y.; Liu, X.; Guo, H. Heterologous Images Matching Considering Anisotropic Weighted Moment and Absolute Phase Orientation. Geomat. Inf. Sci. Wuhan Univ. 2021, 46, 1727–1736.
20. Sui, H.; Liu, C.; Gan, Z.; Jiang, Z.; Xu, C. Overview of multi-modal remote sensing image matching methods. Acta Geod. Cartogr. Sin. 2022, 51, 1848–1861.
21. Harris, C.G.; Stephens, M.J. A Combined Corner and Edge Detector. In Proceedings of the Alvey Vision Conference, Manchester, UK, September 1988; pp. 147–151.
22. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
23. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.
24. Alcantarilla, P.F.; Nuevo, J.; Bartoli, A. Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces. In Proceedings of the British Machine Vision Conference 2013, Bristol, UK, 9–13 September 2013.
25. Wu, B.; Zeng, H.; Hu, H. Illumination invariant feature point matching for high-resolution planetary remote sensing images. Planet. Space Sci. 2018, 152, 45–54.
26. Gao, C.; Li, W.; Tao, R.; Du, Q. MS-HLMO: Multiscale Histogram of Local Main Orientation for Remote Sensing Image Registration. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5626714.
27. Zhang, Y.; Yao, Y.; Wan, Y.; Liu, W.; Yang, W.; Zheng, Z.; Xiao, R. Histogram of the orientation of the weighted phase descriptor for multi-modal remote sensing image matching. ISPRS J. Photogramm. Remote Sens. 2023, 196, 1–15.
28. Wan, G.; Ye, Z.; Xu, Y.; Huang, R.; Zhou, Y.; Xie, H.; Tong, X. Multimodal Remote Sensing Image Matching Based on Weighted Structure Saliency Feature. IEEE Trans. Geosci. Remote Sens. 2024, 62, 4700816.
29. Yue, Z.Y.; Shi, K.; Di, K.C.; Lin, Y.T.; Gou, S. Progresses and prospects of impact crater studies. Sci. China Earth Sci. 2023, 66, 2441–2451.
30. Yang, Z.; Kang, Z.; Cao, Z.; Yang, J.; Peng, M.; Liu, B. Coarse-to-Fine Crater Matching from Heterogeneous Surfaces of LROC NAC and Chang’e-2 DOM Images. IEEE Geosci. Remote Sens. Lett. 2023, 20, 6002605.
31. Fairweather, J.H.; Lagain, A.; Servis, K.; Benedix, G.K.; Kumar, S.S.; Bland, P.A. Automatic Mapping of Small Lunar Impact Craters Using LRO-NAC Images. Earth Space Sci. 2022, 9, e2021EA002177.
32. Yang, H.; Xu, X.; Ma, Y.; Xu, Y.; Liu, S. CraterDANet: A Convolutional Neural Network for Small-Scale Crater Detection via Synthetic-to-Real Domain Adaptation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4600712.
33. Xu, L.; Jiang, J.; Ma, Y. Review of Visual Navigation Technology Based on Craters. Laser Optoelectron. Prog. 2023, 60, 1106013.
34. Cui, P.; Gao, X.; Zhu, S.; Yao, W. Progress in Complex Topography Feature Matching and Autonomous Navigation for Planetary Landing. J. Astronaut. 2022, 43, 713–722.
35. Maass, B.; Woicke, S.; Oliveira, W.M.; Razgus, B.; Krüger, H. Crater Navigation System for Autonomous Precision Landing on the Moon. J. Guid. Control Dyn. 2020, 43, 1414–1431.
36. Wan, W.; Liu, Z.; Liu, Y.; Liu, B.; Di, K.; Zhou, J.; Wang, B.; Liu, C.; Wang, J. Descent Image Matching Based Position Evaluation for Chang’e-3 Landing Point. Spacecr. Eng. 2014, 23, 5–12.
37. Solarna, D.; Gotelli, A.; Moigne, J.L.; Moser, G.; Serpico, S.B. Crater Detection and Registration of Planetary Images Through Marked Point Processes, Multiscale Decomposition, and Region-Based Analysis. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6039–6058.
38. Li, J.; Hu, Q.; Ai, M.; Zhong, R. Robust feature matching via support-line voting and affine-invariant ratios. ISPRS J. Photogramm. Remote Sens. 2017, 132, 61–76.
39. Yuan, X.; Chen, S.; Yuan, W.; Cai, Y. Poor textural image tie point matching via graph theory. ISPRS J. Photogramm. Remote Sens. 2017, 129, 21–31.
40. Liu, D.; Ye, Z.; Xu, Y.; Huang, R.; Xue, L.; Chen, H.; Wan, G.; Xie, H.; Tong, X. A Mismatch Removal Method Based on Global Constraint and Local Geometry Preservation for Lunar Orbiter Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 10221–10236.
41. Lu, Y.; Ma, J.; Mei, X.; Huang, J.; Zhang, X.-P. Feature Matching via Topology-Aware Graph Interaction Model. IEEE/CAA J. Autom. Sin. 2024, 11, 113–130.
42. Wang, C.-Y.; Yeh, I.H.; Liao, H.-Y.M. YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. arXiv 2024, arXiv:2402.13616.
43. Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; et al. Lunar Reconnaissance Orbiter Camera (LROC) Instrument Overview. Space Sci. Rev. 2010, 150, 81–124.
44. Keller, J.W.; Petro, N.E.; Vondrak, R.R.; the LRO Team. The Lunar Reconnaissance Orbiter Mission—Six years of science and exploration at the Moon. Icarus 2016, 273, 2–24.
45. Gaddis, L.; Anderson, J.; Becker, K.; Becker, T.; Cook, D.; Edwards, K.; Eliason, E.; Hare, T.; Kieffer, H.; Lee, E.M.; et al. An Overview of the Integrated Software for Imaging Spectrometers (ISIS). In Proceedings of the 28th Lunar and Planetary Science Conference, Houston, TX, USA, 17–21 March 1997; p. 387.
46. Keszthelyi, L.; Becker, T.; Sides, S.; Barrett, J.; Cook, D.; Lambright, S.; Lee, E.; Milazzo, M.; Oyama, K.; Richie, J.; et al. Support and Future Vision for the Integrated Software for Imagers and Spectrometers (ISIS). In Proceedings of the 44th Annual Lunar and Planetary Science Conference, The Woodlands, TX, USA, 18–22 March 2013; p. 2546.
47. Kneissl, T.; van Gasselt, S.; Neukum, G. Map-projection-independent crater size-frequency determination in GIS environments—New software tool for ArcGIS. Planet. Space Sci. 2011, 59, 1243–1254.
48. Flahaut, J.; Carpenter, J.; Williams, J.P.; Anand, M.; Crawford, I.A.; van Westrenen, W.; Füri, E.; Xiao, L.; Zhao, S. Regions of interest (ROI) for future exploration missions to the lunar South Pole. Planet. Space Sci. 2020, 180, 104750.
49. Yu, H.; Rao, W.; Zhang, Y.; Xing, Z. Mission Analysis and Spacecraft Design of Chang’E-7. J. Deep Space Explor. 2023, 10, 567–576.
Figure 1. Schematic of illumination angles and examples of lunar orbiter images captured under different illumination conditions. $i$, $e$, and $p$ are the solar incidence, emission, and phase angles, respectively. $\alpha$ is the solar azimuth angle, defined as the angle between the direction of sunlight and the north direction on the lunar surface. The image blocks are sourced from the Lunar Reconnaissance Orbiter Camera (LROC) and are centered on the Apollo 11 lander. The corresponding image product IDs are M150361817RE, M1134046721RE, M113799518RE, and M1277607697LE, respectively.
Figure 2. Framework of the proposed CNSFM method.
Figure 3. Similarity invariants. $O_2, A_2, B_2, C_2$ are the corresponding locations of $O_1, A_1, B_1, C_1$ after the similarity transformation $T$. $\phi_{O_1}, \phi_{A_1}, \phi_{O_2}, \phi_{A_2}$ are the diameters of features $O_1, A_1, O_2, A_2$. The side length $\overline{O_1 A_1}$ is the distance between points $O_1$ and $A_1$. The angle $\angle A_1 O_1 B_1$ is the angle formed at point $O_1$, measured clockwise from $O_1 A_1$ to $O_1 B_1$. Other side lengths and angles follow the same definitions.
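To make the invariants in Figure 3 explicit, the following is a minimal formalization, assuming the standard form of a 2D similarity transformation with scale factor $s$, rotation $R$, and translation $\mathbf{t}$ (this parameterization is our reading, not notation taken from the figure):

```latex
% 2D similarity transformation (assumed form): s > 0, R a rotation matrix
T(\mathbf{x}) = s\,R\,\mathbf{x} + \mathbf{t}

% Distances between crater centers and crater diameters both scale by s:
\overline{O_2 A_2} = s\,\overline{O_1 A_1}, \qquad \phi_{O_2} = s\,\phi_{O_1}

% Hence ratios of side lengths to diameters are similarity invariants:
\frac{\overline{O_2 A_2}}{\phi_{O_2}} = \frac{\overline{O_1 A_1}}{\phi_{O_1}}

% Angles are preserved outright:
\angle A_2 O_2 B_2 = \angle A_1 O_1 B_1
```

Because side lengths and diameters scale by the same factor $s$, such ratio and angle features can be compared across images without knowing $s$ itself.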
Figure 4. Angle structures within a CNSF when $K = 3$. The angle structures on the left are divided with CC-NC1 as the starting edge, while those on the right are divided with CC-NC2 as the starting edge. Corresponding angles on the left and right of the diagram may be equal or may add up to 360°.
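As a companion to Figure 4, the sketch below computes clockwise angle structures at a central crater (CC) relative to a chosen starting edge. The function names, the image-coordinate convention (y axis pointing down), and the NumPy usage are our assumptions, not the paper's code:

```python
import numpy as np

def clockwise_angle(cc, a, b):
    """Clockwise angle at central crater CC from edge CC-A to edge CC-B,
    in degrees within [0, 360). In image coordinates (y pointing down),
    increasing arctan2 angles correspond to clockwise rotation."""
    ta = np.arctan2(a[1] - cc[1], a[0] - cc[0])
    tb = np.arctan2(b[1] - cc[1], b[0] - cc[0])
    return np.degrees((tb - ta) % (2.0 * np.pi))

def angle_structure(cc, ncs, start=0):
    """Angles from the starting edge CC-NC[start] to every other edge."""
    return [clockwise_angle(cc, ncs[start], ncs[i])
            for i in range(len(ncs)) if i != start]

cc = (50.0, 50.0)
ncs = [(80.0, 50.0), (50.0, 20.0), (20.0, 60.0)]   # K = 3 neighbors
left = angle_structure(cc, ncs, start=0)   # CC-NC1 as starting edge
right = angle_structure(cc, ncs, start=1)  # CC-NC2 as starting edge
# A pair of corresponding angles from the two divisions is either equal
# or sums to 360 degrees, as the Figure 4 caption notes.
print(left, right)
```

Running this toy example gives 270° from NC1 to NC2 but 90° from NC2 to NC1, illustrating the complementary-angle ambiguity that the caption describes.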
Figure 5. CNSFs when the K value is set to 10. The craters indicated by the yellow lines have no corresponding features on the other side, whereas those indicated by the green lines exhibit corresponding features on both sides.
Figure 6. Outlier removal for CNSFs.
Figure 7. The spatial distribution of the selected regions on the lunar surface within the evaluation dataset (a). The distribution of solar azimuth and incidence angles for all selected images (b). The resolution histogram for all selected images (c).
Figure 8. Sample image pairs of the evaluation dataset. S1 (a), S2 (b), S3 (c).
Figure 9. Crater detection results for a pair of MiLOIs. The image product IDs are M1389516184R and M1276733320L, respectively. The images are centered on the Chang’e 3 lander. Their incidence angles are 43.4° and 84.5°, and their solar azimuth angles are 173.6° and 97.1°, respectively.
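Detections like those in Figure 9 come from the paper's deep-learning crater detector. As a purely illustrative stand-in, the following sketch runs a YOLO-family model through the ultralytics Python package; the weights file, image name, and confidence threshold are placeholders, and this is not the authors' training or inference pipeline:

```python
from ultralytics import YOLO

# Placeholder weights: a YOLO model assumed to be fine-tuned on crater chips.
model = YOLO("crater_yolov9.pt")

# Placeholder image block; the confidence threshold is arbitrary here.
results = model.predict("nac_image_block.png", conf=0.25)
for r in results:
    for box in r.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        # Treat each detection as a circular crater: center and diameter
        # approximated from the bounding box.
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        diameter = ((x2 - x1) + (y2 - y1)) / 2.0
        print(f"crater at ({cx:.1f}, {cy:.1f}), diameter {diameter:.1f} px")
```

Approximating each crater by the center and mean side length of its bounding box yields exactly the (position, diameter) pairs that the CNSF construction consumes.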
Figure 10. Matching results of sample data from Scene S1. The columns correspond to image pairs, while the rows correspond to methods. Blue lines indicate incorrect matches, while yellow lines indicate correct matches. SIFT (a), HAPCG (b), ML-HLMO (c), WSSF (d), and CNSFM (e).
Figure 11. Matching results of sample data from Scene S2. The columns correspond to image pairs, while the rows correspond to methods. Blue lines indicate incorrect matches, while yellow lines indicate correct matches. SIFT (a), HAPCG (b), ML-HLMO (c), WSSF (d), and CNSFM (e).
Figure 12. Matching results of sample data from Scene S3. The columns correspond to image pairs, while the rows correspond to methods. Blue lines indicate incorrect matches, while yellow lines indicate correct matches. SIFT (a), HAPCG (b), ML-HLMO (c), WSSF (d), and CNSFM (e).
Figure 13. The frequency distribution of RMSE, with different colors representing different methods.
Figure 14. The frequency distribution of RCM. The horizontal axis represents RCM (in percentage), while the vertical axis represents the number of image pairs within different RCM ranges; different colors correspond to different methods.
Figure 15. The matching results of a sample image pair for different K values.
Figure 16. (a,b) Matching results of two sample image pairs without (left) and with (right) mismatched CNSF removal. Blue lines indicate incorrect matches, while yellow lines indicate correct matches.
Figure 17. Two image pairs with failed matches in region S3. The yellow arrows point to corresponding impact craters located along the shadow boundaries.
Figure 18. The relationship between successful matching and solar azimuth differences. Each point represents a successfully matched image pair, with different colors representing different matching methods.
Table 1. Comparison of the SR metric (%).

Region    SIFT    HAPCG    ML-HLMO    WSSF    Ours
S1        33.3    66.7     33.3       100     100
S2        24.4    48.9     88.9       66.7    100
S3        17.3    22.1     13.9       31.2    72.3
Table 2. Average RMSE values (pixels).

Region    SIFT    HAPCG    ML-HLMO    WSSF    Ours
S1        3.8     3.2      4.1        1.9     1.0
S2        4.2     3.8      3.1        3.0     1.5
S3        4.3     4.4      4.7        4.1     2.2
Table 3. Average RCM values (%).

Region    SIFT    HAPCG    ML-HLMO    WSSF    Ours
S1        93.3    93.8     90.7       98.8    100
S2        93.2    88.1     62.5       98.6    99.3
S3        71.0    79.4     52.0       92.2    100
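Tables 1–4 report SR, RMSE, RCM, and NCM without restating formulas. The following are the standard definitions we assume they follow, with $N$ the number of reported matches per pair, $N_c$ the number of correct ones, and $T$ a reference transformation between the two images (notation ours):

```latex
% Success rate over n evaluated image pairs:
\mathrm{SR} = \frac{n_{\mathrm{success}}}{n} \times 100\%

% Root-mean-square error of the correct matches (x_i, x'_i):
\mathrm{RMSE} = \sqrt{ \frac{1}{N_c} \sum_{i=1}^{N_c}
    \left\lVert \mathbf{x}'_i - T(\mathbf{x}_i) \right\rVert^2 }

% Ratio of correct matches, with NCM = N_c:
\mathrm{RCM} = \frac{N_c}{N} \times 100\%
```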
Table 4. Average NCM values.

Region    SIFT    HAPCG    ML-HLMO    WSSF    Ours
S14794293087793
S26522671332567150
S310856411223357
Table 5. Matching results for different values of the parameter K.

Metric            K = 5    K = 10    K = 15    K = 20    K = 25    K = 30
Average NCM       25       82        93        93        89        80
SR (%)            93.3     100       100       100       100       100
Average MT (s)    1.3      3.5       23.8      91.1      239.6     530.0
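The runtime growth in Table 5 is what one would expect if each CNSF is built from a crater's K nearest detected neighbors and candidate structures are compared combinatorially. Below is a minimal sketch of the neighborhood-selection step using SciPy's k-d tree; the function and variable names are ours, and the CNSF descriptor built on top of it is not reproduced:

```python
import numpy as np
from scipy.spatial import cKDTree

def crater_neighborhoods(centers, k):
    """For each detected crater center, return the indices of its k nearest
    neighboring craters (excluding itself).
    centers: (n, 2) array of crater centers in pixels."""
    centers = np.asarray(centers, dtype=float)
    tree = cKDTree(centers)
    # Query k+1 neighbors because the nearest neighbor of a point is itself.
    _, idx = tree.query(centers, k=k + 1)
    return idx[:, 1:]
```

Larger K makes each structure more distinctive but enlarges every comparison, consistent with the average matching time rising from 1.3 s at K = 5 to 530.0 s at K = 30 while NCM and SR plateau.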
Table 6. Results of the ablation experiment.

Metric             CNSFM Without MCR    Full CNSFM
Average RCM (%)    62.2                 100