Review

A Review of Deep Space Image-Based Navigation Methods

1 Beijing Institute of Control Engineering, Beijing 100090, China
2 Open Laboratory for Photoelectric Measurement and Intelligent Perception, Beijing 100090, China
* Author to whom correspondence should be addressed.
Aerospace 2025, 12(9), 789; https://doi.org/10.3390/aerospace12090789
Submission received: 21 July 2025 / Revised: 24 August 2025 / Accepted: 26 August 2025 / Published: 31 August 2025
(This article belongs to the Section Astronautics & Space Science)

Abstract

Deep space exploration missions face technical challenges such as long-distance communication delays and high-precision autonomous positioning. Traditional ground-based telemetry and control as well as inertial navigation schemes struggle to meet mission requirements in the complex environment of deep space. As a vision-based autonomous navigation technology, image-based navigation enables spacecraft to obtain real-time images of the target celestial body surface through a variety of onboard remote sensing devices, and it achieves high-precision positioning using stable terrain features, demonstrating good autonomy and adaptability. Craters, due to their stable geometry and wide distribution, serve as one of the most important terrain features in deep space image-based navigation and have been widely adopted in practical missions. This paper systematically reviews the research progress of deep space image-based navigation technology, with a focus on the main sources of remote sensing data and a comprehensive summary of its typical applications in lunar, Martian, and asteroid exploration missions. Focusing on key technologies in image-based navigation, this paper analyzes core methods such as surface feature detection, including the accurate identification and localization of craters as critical terrain features in deep space exploration. On this basis, the paper further discusses possible future directions of image-based navigation technology in response to key challenges such as the scarcity of remote sensing data, limited computing resources, and environmental noise in deep space. These directions include the intelligent evolution of image navigation systems, enhanced perception robustness in complex environments, hardware evolution of autonomous navigation systems, and cross-mission adaptability and multi-body generalization, providing a reference for subsequent research and engineering practice.

1. Introduction

With the continuous advancement of human capabilities in space exploration, deep space exploration has gradually become a major focus in the aerospace field worldwide. Since the United States and the former Soviet Union launched lunar exploration programs in 1958, more than one hundred deep space exploration missions have been conducted worldwide, targeting celestial bodies such as the Moon, Mars, asteroids, Mercury, and Venus. However, deep space exploration missions not only face challenges such as long-duration travel and complex orbit adjustments but must also address difficulties such as extreme environments and remote control delays [1]. Achieving efficient and precise navigation is therefore a key technological challenge for deep space missions [2]. Traditional deep space navigation methods, such as ground-based telemetry and control and inertial navigation, have proven successful in short-range or near-Earth missions. However, in deep space environments, increasing distance from Earth leads to signal delays and insufficient navigation accuracy, which gradually become bottlenecks that limit the success of deep space missions [3]. Therefore, image-based navigation technology, as a vision-based autonomous positioning and navigation approach, has gradually become a research hotspot in the field of deep space exploration.
Image-based navigation enables spacecraft to obtain real-time images of the target celestial body through sensors onboard and to perform autonomous position and attitude calculation by integrating surface feature extraction, image matching, and attitude estimation [4]. Craters, as common terrain features in deep space exploration, are important targets that cannot be ignored in image-based navigation technology [5]. Due to their stable geometric shape and wide distribution, craters are frequently used as landmark features in deep space image navigation and have been applied in typical missions such as NEAR Shoemaker and SLIM. Compared to traditional ground-based control and inertial navigation methods, image-based navigation demonstrates greater autonomy, precision, and adaptability in deep space missions. On the one hand, it can achieve autonomous positioning and guidance in communication-constrained environments, reducing the reliance on ground commands; on the other hand, it can achieve high-precision pose estimation during critical phases such as descent by relying on real-time high-resolution images obtained in close proximity to the target surface, which enable effective surface feature matching. In addition, it exhibits greater robustness than traditional methods when facing complex terrain and extreme lighting conditions, making it suitable for real-time navigation requirements in diverse targets such as lunar polar regions and asteroids.
Although image-based navigation technology has achieved good application results in multiple deep space exploration missions, it still faces many technical challenges in practical use. Remote sensing image data in deep space exploration missions are often difficult to obtain, have limited coverage, and show uneven resolution. In particular, for small or unknown celestial bodies, there are bottlenecks such as a scarcity of effective visual features and a lack of annotated samples [6]. Moreover, the complex and variable deep space environment imposes higher demands on the robustness of image navigation systems. Factors such as drastic lighting changes, image noise, significant viewpoint variation, and large-scale terrain variation can all reduce the accuracy of feature extraction and matching, thus affecting overall navigation performance. Furthermore, due to limited computing resources on spacecraft platforms, achieving efficient, real-time image processing and state estimation while ensuring navigation accuracy has become a key issue that deep space image navigation technology urgently needs to address. In recent years, with the rapid development of deep learning, self-supervised learning, multimodal perception fusion, and intelligent decision-making algorithms, image navigation systems are gradually evolving toward high precision, high robustness, adaptability, and intelligence. However, their application in complex deep space mission scenarios still faces many scientific problems and engineering challenges that need to be resolved.
In view of the above context, this paper presents a systematic review of deep space image-based navigation technology. The review first covers the main sources of remote sensing image data in deep space exploration missions, typical application examples, and the challenges of deep space image navigation. Then, it systematically summarizes the key technical framework of image-based navigation, including surface feature extraction and how craters, as critical terrain features in deep space exploration, can be accurately identified and located using image navigation technology. Finally, the paper looks forward to the development trends and potential research directions of future deep space image navigation systems, providing references and insights for the design and optimization of high-precision autonomous navigation systems in subsequent deep space exploration missions.

2. Image-Based Navigation in Deep Space Missions

This section explores the application of image-based navigation in deep space missions, including the sources of remote sensing imagery in deep space, the use of image-based navigation in such missions, and the challenges faced by deep space image-based navigation. The discussion primarily focuses on missions targeting celestial bodies where optical image-based navigation is feasible, such as the Moon, Mars, asteroids, and Mercury.

2.1. Sources of Remote Sensing Image Data in Deep Space Exploration Missions

In deep space exploration missions, orbiters serve as one of the key platforms, acquiring image data through a variety of advanced onboard sensors to support navigation, scientific research, and mission planning. Orbiters obtain detailed information on the surfaces of celestial bodies through precision imaging systems. These data form the foundation for autonomous localization, feature recognition, and route planning. Since different celestial bodies have unique environments and terrain characteristics, orbiters are often equipped with suitable imaging systems based on the characteristics of the target body. Therefore, image data acquired by orbiters on different deep space exploration missions vary in terms of quality, resolution, and application scope. This section provides a detailed analysis of the sources, quality, challenges, and applications of orbiter image data for target bodies such as the Moon, Mars, Mercury, and asteroids. Table 1 summarizes the sources of remote sensing imagery from different celestial bodies.

2.1.1. Lunar Remote Sensing Images

The Moon is the first extraterrestrial body explored by humans, and as the celestial body closest to Earth within the solar system, it has long been a focal point in astronomy and space exploration. During the 1950s and 1960s, driven by the space race between the United States and the Soviet Union, lunar exploration experienced its first period of prosperity, including the Soviet Luna series, the U.S. Surveyor and Apollo manned missions, and lunar orbital photography missions. Although these missions laid the foundation for subsequent research, the remote sensing images obtained at that time were limited in resolution and coverage due to technological constraints. It was not until the 1990s, with rapid technological advances and the arrival of the third technological revolution, that lunar exploration entered a new phase of prosperity. Multiple countries and agencies, including the European Space Agency (ESA), Japan, India, the United States, and China, successively launched lunar exploration missions.
In 1994, NASA launched the Clementine spacecraft [7,8,9]. This mission was equipped with multiple remote sensing payloads such as an ultraviolet-visible multispectral camera, a near-infrared CCD camera, and a laser imaging and ranging system. It acquired multiband, multimodal remote sensing images of the entire Moon. Subsequently, scientists constructed the lunar control network based on Clementine data and generated a global Digital Orthophoto Map (DOM) with a spatial resolution of up to 100 m [10], providing essential data support for subsequent image-based navigation and high-precision terrain modeling. ESA's SMART-1, launched in 2003, was another milestone mission [11]. The spacecraft carried a lunar surface imaging experiment camera, an X-ray spectrometer, and other particle and plasma detectors, acquiring visible images and elemental composition data of the lunar surface from orbit for the first time. In 2009, NASA's Lunar Reconnaissance Orbiter (LRO) further advanced lunar remote sensing research. LRO was equipped with the Lunar Orbiter Laser Altimeter (LOLA) and the Lunar Reconnaissance Orbiter Camera (LROC), which obtained highly precise lunar topography data [12] and high-resolution surface images [13]. Based on LRO’s data, researchers developed what is currently considered the most accurate digital elevation model (DEM) of the Moon.
Beyond US missions, Japan has also played a critical role in the acquisition of foundational data for lunar remote sensing and image navigation. In 2007, the Japan Aerospace Exploration Agency (JAXA) launched the Kaguya spacecraft, which contributed significantly to this effort. Kaguya was equipped with a laser altimeter and a terrain camera, acquiring high-resolution lunar surface imagery and elevation data. Based on these data, researchers created digital products such as a DEM [14] and a digital image map (DIM) [15]. In 2024, JAXA’s SLIM lander was equipped with multiple vision-based navigation and obstacle detection sensors, including image-matching navigation cameras, laser rangefinders, landing radars, and high-resolution detection cameras for final-stage obstacle identification. It achieved a highly precise soft landing on the lunar surface, marking the first mission to achieve a landing error within 100 m.
India has also accelerated its lunar exploration program and has made significant progress in acquiring key remote sensing data for image-based navigation. In 2008, India’s Chandrayaan-1 spacecraft, equipped with a terrain mapping camera and hyperspectral imaging spectrometer, successfully obtained global lunar images with 5 m resolution, providing a new perspective for lunar scientific research. In 2019, the Chandrayaan-2 orbiter carried the Orbiter High-Resolution Camera (OHRC), which acquired 0.25 m panchromatic images from an orbital altitude of 100 km [16]. The mission generated a DOM and DEM with a resolution of 0.28 m and vertical accuracy of up to 0.1 m through multiview imaging [17]. These high-precision remote sensing products have been widely used in hazard identification in landing zones, feature extraction for image navigation, and lunar surface boulder and crater statistics [18]. In 2023, the Chandrayaan-3 mission successfully completed India’s first soft landing near the Moon’s south pole.
In recent years, with the rise of commercial space capabilities in deep space exploration, several private companies have initiated lunar surface or orbital image acquisition missions. For example, Intuitive Machines’ Nova-C landers (IM-1 and IM-2) carried Navigation Cameras (NavCam) and landing cameras, publicly releasing images from descent and post-landing [19]. Firefly Aerospace’s Blue Ghost lander, which carried NASA’s lunar surface imaging and particle detection system, successfully transmitted high-resolution images of the lunar horizon and landing dust clouds; its landing site was later identified and catalogued by LRO. Lockheed Martin’s LunIR CubeSat, part of the Artemis I lunar flyby mission, acquired mid-wave infrared imaging spectral data of the lunar surface [20]. However, most of the image data obtained from these commercial missions have yet to form standardized remote sensing products for academic research. Much of the data remains commercially restricted or only partially shared, limiting its broader application in the validation of deep space image navigation algorithms and the development of autonomous navigation systems.
Regarding China’s lunar exploration program, the Chang’e series of missions have achieved a series of significant milestones since the launch of Chang’e-1 in 2007. Chang’e-1 was equipped with a laser altimeter and CCD stereo camera and successfully acquired a vast amount of lunar scientific data, including high-resolution DOM [21], stereo images [22], and DEM [23]. Chang’e-2 further advanced the field by obtaining 7 m resolution global imagery from a 100 km polar orbit and 1.5 m resolution local images from a 15 km orbit [24]. In 2013, Chang’e-3 achieved China’s first soft landing on the Moon, with onboard detectors and rovers providing rich data for further lunar research. In January 2019, Chang’e-4 landed in the Von Kármán crater on the far side of the Moon, marking humanity’s first soft landing and exploration on the far side of the Moon. In 2020, Chang’e-5 achieved a successful landing in Oceanus Procellarum on the near side of the Moon through autonomous control and precise navigation, completing China’s first lunar sample return mission. In 2024, Chang’e-6 collected lunar soil samples from the far side and returned them to Earth, marking a historic breakthrough and bringing about a new phase of lunar exploration.

2.1.2. Martian Remote Sensing Images

Since the 1960s, more than 40 Mars exploration missions have been carried out worldwide. Although the success rate is less than half, technological advancements—especially since the 1990s—have led to significantly improved mission outcomes. The main methods of Mars exploration include orbital and rover-based observations. During the past 30 years, a large amount of remote sensing data have been accumulated, greatly advancing our understanding of Mars.
On 12 September 1997, NASA’s Mars Global Surveyor (MGS), launched the previous year, successfully entered Martian orbit. The spacecraft was equipped with multiple scientific instruments, the most important of which were the Mars Orbiter Camera (MOC) [25] and the Mars Orbiter Laser Altimeter (MOLA) [26]. The wide-angle MOC provided images with resolutions ranging from 240 m to 7.5 km, covering the entire Martian surface. The narrow-angle camera captured data at resolutions of 1.5 to 12 m, covering 5.45% of the surface. MOLA achieved a range accuracy of 0.37 m. Using these data, scientists were able to create detailed topographic maps of Mars, providing essential information for subsequent missions.
In 2001, NASA launched the Mars Odyssey mission, equipped with the Thermal Emission Imaging System (THEMIS), which collected multiband remote sensing images with resolutions of 18–100 m in visible and infrared wavelengths. These data were widely used in studies of surface temperature, mineral composition, and atmospheric variations and provided multimodal data support for image-based navigation [27].
On 12 August 2005, NASA launched the Mars Reconnaissance Orbiter (MRO), which is currently one of the most advanced Mars orbital probes. MRO carried several key instruments, including the High-Resolution Imaging Science Experiment (HiRISE), the Context Camera (CTX), and the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) [28]. The HiRISE camera achieves a resolution of up to 0.25 m [29], allowing the acquisition of extremely detailed images of the Martian surface. CTX provides wide-area images at a resolution of 0.7 m, while CRISM offers spectral data across 544 bands with a resolution of 18 m. The data from MRO have significantly advanced research on the Martian surface, mineral composition, and the presence of water.
In 2021, NASA’s ‘Mars 2020’ mission successfully landed the Perseverance rover in the Jezero Crater, capturing high-precision remote sensing imagery [30]. The Mastcam-Z zoom camera, NavCam, and Hazard Avoidance Cameras (HazCam) onboard acquired multiband images at resolutions ranging from 0.7 to 3.3 cm, comprehensively documenting the microtopography of the landing site [31]. Furthermore, the mission produced a 25 cm resolution DOM and a 0.5 m digital terrain model (DTM), which are widely used in path planning and geological studies [32].
In addition to NASA missions, ESA has also played an important role in Mars exploration. On 25 December 2003, ESA launched the Mars Express (MEX) orbiter, which carried the High-Resolution Stereo Camera (HRSC) and the Observatoire pour la Minéralogie, l’Eau, les Glaces et l’Activité (OMEGA) spectrometer. The HRSC captures detailed images of the Martian surface at a resolution of 10 m in nine visible and near-infrared bands [33]. OMEGA, with its 352 spectral bands, reveals the mineral composition of the Martian surface, particularly features associated with water and ice [34].
On 23 July 2020, China successfully launched the Tianwen-1 probe, and on 15 May of the following year, the Zhurong rover landed on the southern Utopia Planitia, making China the second country after the U.S. to achieve a successful Mars landing. The Tianwen-1 orbiter carried seven scientific payloads, including high-resolution and medium-resolution cameras, as well as a subsurface penetrating radar. The Zhurong rover was equipped with six types of scientific instruments, including a multispectral camera and a terrain camera [35]. These devices supported high-resolution imaging and provided critical data for studying Martian geology, climate change, and underground water resources.

2.1.3. Mercury Remote Sensing Images

Mercury, as the closest planet to the Sun, has a surface that has been exposed to intense radiation and meteor impacts, making its exploration scientifically valuable. Since the 1970s, significant progress has been made in Mercury exploration, especially through orbital probes that have returned substantial data on its surface, climate, and magnetic field.
On 3 November 1973, NASA launched Mariner 10, the first spacecraft to explore Mercury. The mission carried seven different scientific instruments, including two telescopes with cameras, an infrared radiometer, and a UV occultation spectrometer, aimed at studying Mercury’s terrain, atmosphere, and magnetic field. During the mission, Mariner 10 conducted flybys of both Venus and Mercury, adjusting its trajectory using Venus’s gravity and completing three Mercury flybys. The spacecraft captured over 2700 images, covering nearly half of the surface of Mercury. These images revealed various geological features, including craters, resolving surface details as small as about 100 m.
Following Mariner 10, NASA launched the MESSENGER spacecraft on 3 August 2004, which successfully entered Mercury’s orbit on 18 March 2011. MESSENGER was equipped with advanced instruments such as the Mercury Dual Imaging System (MDIS) and the Mercury Laser Altimeter (MLA). It acquired high-resolution images and precise elevation data of Mercury’s surface. Based on data from both Mariner 10 and MESSENGER, researchers constructed a global DIM of Mercury [36], which forms a foundational dataset for future scientific research.
In addition to NASA missions, ESA and JAXA jointly developed the BepiColombo mission, which was successfully launched on 20 October 2018. The mission is scheduled to enter Mercury’s orbit in 2025 after conducting two Venus flybys and six Mercury flybys. BepiColombo carries more than ten scientific instruments, including a laser altimeter, radiation sensors, and a thermal infrared spectrometer, which are designed to comprehensively study Mercury’s geology, atmosphere, and magnetic field.

2.1.4. Asteroid Remote Sensing Images

Asteroids carry crucial information about the origin and evolution of the early solar system, serving as 'fossils' that offer valuable clues about its early history. Asteroid exploration has evolved through several stages, from early flyby observations to orbital exploration, and more recently to on-orbit investigations and sample return missions. Several representative missions are described below.
NEAR Shoemaker, launched by NASA in 1996, successfully entered orbit around asteroid Eros on 14 February 2000, becoming the first spacecraft to orbit an asteroid. During a year-long low-orbit mission, NEAR Shoemaker captured approximately 16,000 images, which revealed rich geological features on Eros including craters, faults, and other surface structures, greatly enhancing our understanding of asteroids.
The Dawn spacecraft, launched on 27 September 2007, explored the two largest protoplanets in the asteroid belt: Vesta and Ceres. After 14 months at Vesta, Dawn entered orbit around Ceres and conducted four years of observations. The Dawn Framing Camera (FC) captured high-resolution images with resolutions of 140 m and 35 m in high- and low-altitude orbits, respectively [37]. These images provided detailed topographic data for craters, mountains, and fractures and supported the generation of DIM and DTM models for later analysis.
In recent years, asteroid exploration has entered a new era of on-orbit investigation and sample return. Japan’s Hayabusa2 mission conducted two brief sampling touchdowns on asteroid Ryugu. The Optical Navigation Camera (ONC) system captured high-resolution global images [38], which were stitched into a 0.5 to 0.7 m resolution DIM to build a complete visual mapping control network. In landing areas, precision optical imagery and bundle adjustment methods were used to reconstruct DOM and DTM models at resolutions of 5–10 cm, ensuring accurate terrain data for local morphology and rover path planning. During the two sampling operations, the ONC system performed ultra-high-resolution imaging from 42 m above the surface, achieving a spatial resolution of 4.6 mm per pixel, the highest known remote sensing resolution of any asteroid surface to date [39,40]. These data support studies on the morphology of Ryugu’s surface material, the evolution of the surface under microgravity, and the development of visual navigation algorithms.
NASA’s OSIRIS-REx spacecraft, using its PolyCam imager, generated a global DIM of Bennu at a resolution of 5 cm/pixel, the highest resolution global map of an asteroid to date. With the help of its OLA laser altimeter, a detailed terrain model at about 20 cm resolution was created. During the sampling phase, its multispectral MapCam and the sample-monitoring SamCam captured 7–10 cm resolution images used to aid navigation and refine control points during the sampling and return process [41].

2.2. Image-Based Navigation and Autonomous Landing Technology in Deep Space Exploration Missions

In deep space exploration missions, image-based navigation has become one of the key technologies for achieving precise landing, autonomous navigation, and terrain perception. With the rapid development of aerospace technology and computer vision, the reliance on image-based navigation in deep space missions has been increasing, particularly in missions targeting the Moon, Mars, and other small celestial bodies. Image-based navigation technology has become an essential means of ensuring that probes can land safely and conduct scientific experiments in complex and unknown environments. The basic principle of image-based navigation is to use sensors onboard spacecraft to acquire high-resolution images of the target celestial surface or other sensor data and to compute the probe’s relative position and attitude in space through feature extraction, image matching, and terrain reconstruction.
On 12 February 2001, NEAR Shoemaker became the first spacecraft to land on the asteroid Eros [42]. In the NEAR mission, image-based navigation played a key role by achieving precise control of the spacecraft’s trajectory through optical landmark tracking. The mission used multispectral imagers and other optical instruments to capture images of the surface of Eros, which not only supported scientific research but also provided foundational data for navigation. To build a global optical landmark database, the spacecraft had to capture images from different times and viewing angles under various lighting conditions. By identifying crater edge centers, the spacecraft could accurately define landmark positions and build a reliable crater database for subsequent trajectory adjustments. In the real-time matching process, the spacecraft analyzed parallax changes of craters in the images and combined this with relative motion between the spacecraft and Eros to calculate precise trajectory positions. As understanding of Eros’ mass, gravitational field, and spin state deepened, trajectory corrections became increasingly accurate.
JAXA’s Hayabusa2 asteroid mission was launched on 3 December 2014 and arrived at Ryugu on 27 June 2018. It conducted multiple descent operations, including two successful sample landings [38]. The Hayabusa2 descent and landing process can be divided into the approach phase (20 km to 45 m), the final descent phase (45 m to 8.5 m), the hovering phase (8.5 m), and the free-fall phase (8.5 m to 0 m), as illustrated in Figure 1. During the final descent phase, the spacecraft entered autonomous navigation mode, relying on data from the ONC and laser rangefinders to precisely adjust its attitude and position. It released a Target Marker (TM) onto the asteroid surface to serve as a navigation reference. During the hovering phase, the spacecraft continuously tracked the TM using optical cameras and adjusted its attitude to avoid obstacles. The Hayabusa2 navigation system relied primarily on two types of sensors: visual sensors and inertial sensors [43]. The visual sensors periodically captured images of the asteroid surface, focusing on the TM, while the inertial sensors provided positional information for the spacecraft. Image-processing techniques were used to identify and track the TM on the surface. With these markers, the system could accurately determine the location of the spacecraft on the asteroid and infer its altitude, velocity, and orientation.
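As an illustration of the marker-tracking step described above, the following sketch detects a bright, target-marker-like blob in a descent image and returns its centroid. This is a minimal Python/OpenCV sketch, assuming the illuminated marker appears as the brightest compact region in a grayscale frame; it is not the Hayabusa2 flight algorithm, and the threshold and area parameters are hypothetical.

```python
import cv2
import numpy as np

def track_target_marker(image, threshold=230, min_area=5):
    """Locate a bright, marker-like blob in a grayscale (uint8) image and return its centroid."""
    # Keep only very bright pixels, which are candidate target-marker pixels.
    _, mask = cv2.threshold(image, threshold, 255, cv2.THRESH_BINARY)
    # Connected components give candidate blobs with their areas and centroids.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    best, best_area = None, 0
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area >= min_area and area > best_area:
            best, best_area = centroids[i], area
    return best  # (x, y) pixel coordinates of the marker candidate, or None
```

In a full system, the centroid would be fed to the guidance loop together with inertial and range measurements to derive relative position and velocity.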
NASA launched the OSIRIS-REx spacecraft to sample and return materials from the asteroid Bennu. Throughout the navigation process, if the laser rangefinder (LRF) fails, a backup technique, the Natural Feature Tracking (NFT) system, is used [44]. The NFT uses images captured by the spacecraft’s NavCam and compares them with the asteroid terrain model stored on the onboard flight computer. By comparing image features, the spacecraft’s relative position to the asteroid surface is calculated, updating its state and correcting errors via a Kalman filter. Key NFT algorithms include image feature extraction, model rendering, feature matching, position estimation, and attitude adjustment [45]. The NFT system estimates the orbital state using features from images collected by the NavCam. These images are matched against predicted feature appearances rendered onboard. The system uses the DTM stored on the spacecraft to generate predicted images of the asteroid surface, as shown in Figure 2. The DTM includes information such as the asteroid’s shape, terrain height, and albedo. Feature matching is performed using the Normalized Cross-Correlation (NCC) algorithm, which compares features in the predicted and real images. NCC calculates similarity across positions in both images to find the best match. Once feature matching is complete, the NFT system updates the spacecraft’s position and attitude using the Extended Kalman Filter (EKF). According to OSIRIS-REx mission documentation, the NFT is qualified for real-time onboard operation, completing its render–correlate–EKF loop within the image update cycle, and it has demonstrated navigation accuracy sufficient for TAG, achieving surface contact within a 25 m radius of the target site with velocity errors below 2 cm/s.
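The sketch below illustrates the two core operations of this render–correlate–update loop: locating a rendered feature patch in a real image via normalized cross-correlation, and correcting the state estimate with a Kalman-style measurement update. It is a simplified Python approximation (a linear update standing in for the full EKF), not the OSIRIS-REx onboard implementation; the measurement model H and noise covariance R are assumed to come from the filter design.

```python
import cv2
import numpy as np

def ncc_match(predicted_patch, image):
    """Find the best match of a rendered feature patch in the real image via normalized cross-correlation."""
    score = cv2.matchTemplate(image, predicted_patch, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(score)
    return np.array(max_loc, dtype=float), max_val  # (x, y) of the best correlation peak and its score

def kalman_measurement_update(x, P, z, H, R):
    """Linear Kalman measurement update, standing in here for the EKF correction step."""
    y = z - H @ x                      # innovation (measured minus predicted feature location)
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```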
In the Mars 2020 mission, the Perseverance rover successfully employed advanced image-based navigation techniques. In particular, during landing, its Landing Vision System (LVS) matched images from the descent camera with a landing map and then fused them with the Inertial Measurement Unit (IMU) to obtain high-rate estimates of map-relative position, velocity, and attitude, thereby implementing Terrain-Relative Navigation (TRN). The TRN system used high-resolution images captured by NavCam, HazCam, and other cameras to detect surface terrain characteristics and compare them with pre-established Martian terrain maps to compute and adjust the position and attitude of the rover in real time [30]. Ultimately, Perseverance achieved a significant improvement in landing precision compared with previous Mars missions, providing an important foundation for future planetary landing and autonomous navigation technologies [47].
Based on this, the Enhanced Landing Vision System (ELVIS), as a new type of visual landing system, further improves the performance of image-based navigation [48]. ELVIS abandons traditional Doppler radar-based ranging and velocity measurement and instead uses a configuration of laser altimeter/LiDAR and cameras. The entire landing process is divided into three stages: the initial stage (8 km–15 km), the descent braking stage (4.5 km–550 m), and the terminal precision guidance stage (225 m–50 m). During the initial stage, ELVIS captures multiple images and, combined with positional differences from IMU inertial navigation, uses them as a baseline for stereo ranging to obtain altitude information, which serves as a key input for the Map-Relative Localization (MRL) algorithm. In the descent braking stage, the laser altimeter operates in narrow-beam mode and works in conjunction with the stereo ranging solution to ensure the precision and redundancy of altitude scale information required by the MRL algorithm, as illustrated in Figure 3. In the terminal precision guidance stage, ELVIS employs visual odometry (VO) technology, combining depth maps generated by the laser altimeter with camera images. Using 3D–2D point correspondences and the Perspective-n-Point (PnP) algorithm, it accurately computes the position of the lander. ELVIS demonstrates excellent navigation precision in the terminal guidance phase. The horizontal velocity error is maintained within ±0.3 m/s, and the vertical height error is less than 0.91 m, meeting the high-precision landing requirements of deep space exploration missions.
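A minimal sketch of the terminal-stage pose computation described above, assuming the 3D terrain points (e.g., sampled from a laser-altimeter depth map) and their 2D image correspondences are already available. OpenCV's RANSAC-based PnP solver stands in for the flight algorithm; the camera matrix K and the reprojection threshold are illustrative.

```python
import cv2
import numpy as np

def estimate_lander_pose(points_3d, points_2d, K):
    """Estimate camera (lander) pose from 3D terrain points and their 2D image projections via PnP + RANSAC."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(points_3d, dtype=np.float32),
        np.asarray(points_2d, dtype=np.float32),
        K, distCoeffs=None, reprojectionError=2.0)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)         # rotation matrix from the rotation vector
    position = -R.T @ tvec             # lander position expressed in the terrain frame
    return position.ravel(), R
```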
JAXA’s SLIM probe aimed to achieve a pinpoint landing within 100 m of the target landing point. On 19 January 2024, it successfully landed 55 m from the target site (Shioli crater). The mission achieved a high-precision landing, and Figure 4 documents the lander’s actual position within the target region. The lander is shown in Figure 5. The mission successfully demonstrated high-precision autonomous landing capabilities based on image-based navigation technology, marking a key step toward meter-level accuracy in lunar exploration missions [49].
In the SLIM mission, the system acquired real-time high-resolution images of the lunar surface using onboard cameras and extracted geometric features of the surface craters, such as edges, center positions, and shapes, using crater detection algorithms [50]. These features were then matched with a preconstructed high-precision lunar terrain database. A matching algorithm based on geometric triangle invariants [51] was used to locate the current position of the probe and continuously update its landing trajectory. A conceptual diagram of the vision-based navigation (VBN) landing phase is shown in Figure 6. This technology enabled SLIM to dynamically avoid obstacles during descent and achieve autonomous path planning and precise navigation without relying on external signals. Meanwhile, the fusion of visual and IMU data further improved navigation robustness and accuracy. Ultimately, SLIM successfully landed near the target area, achieving precision control within the meter-level range.
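To make the triangle-invariant idea concrete, the sketch below builds a scale- and rotation-invariant descriptor from the side-length ratios of crater-center triangles and matches detected triangles against a catalog by brute force. This is an illustrative Python approximation of the general approach, not the SLIM flight algorithm; the tolerance is hypothetical, and a practical system would index catalog triangles rather than search exhaustively.

```python
import numpy as np
from itertools import combinations

def triangle_descriptor(p1, p2, p3):
    """Scale- and rotation-invariant descriptor of a crater-center triangle: sorted side ratios."""
    sides = sorted([np.linalg.norm(p1 - p2), np.linalg.norm(p2 - p3), np.linalg.norm(p3 - p1)])
    return np.array([sides[0] / sides[2], sides[1] / sides[2]])  # longest side normalized to 1

def match_triangles(detected, catalog, tol=0.02):
    """Match triangles of detected crater centers against catalog triangles (brute-force sketch)."""
    cat_desc = [(idx, triangle_descriptor(*[catalog[i] for i in idx]))
                for idx in combinations(range(len(catalog)), 3)]
    matches = []
    for det_idx in combinations(range(len(detected)), 3):
        d = triangle_descriptor(*[detected[i] for i in det_idx])
        for cat_idx, c in cat_desc:
            if np.all(np.abs(d - c) < tol):
                matches.append((det_idx, cat_idx))  # candidate correspondence between image and map craters
    return matches
```

Each accepted correspondence pins detected crater centers to known map coordinates, from which the lander position can be solved, for example with the PnP formulation sketched earlier.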
Nova-C is a lunar lander developed by the US company Intuitive Machines, designed to perform lunar exploration and scientific missions. The IM-1 mission, the first of the Nova-C series, was launched on 15 February 2024, aboard a SpaceX Falcon 9 rocket from Kennedy Space Center [53]. In low lunar orbit, Nova-C relied primarily on a Distributed Position and Orientation System (DPOS) for navigation. The measurement process is illustrated in Figure 7. DPOS estimates the attitude changes between two images by extracting features (edges and corners), performing feature matching (via brute-force and RANSAC algorithms), and combining these with motion direction measurements provided by a hybrid Kalman filter.
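The sketch below approximates the feature-based inter-frame attitude estimation described for DPOS: corner features are matched by brute force, RANSAC rejects outliers while estimating the essential matrix, and the relative rotation is recovered. This is an assumption-laden illustration using standard OpenCV calls (ORB in place of whatever detector the flight system uses), not Intuitive Machines' implementation.

```python
import cv2
import numpy as np

def relative_rotation(img1, img2, K):
    """Estimate the rotation between two camera frames from matched corner features."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)   # brute-force descriptor matching
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects outlier correspondences while estimating the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R  # rotation between the two camera poses
```

In a filter-based scheme, this rotation estimate would be fused with inertial measurements, as the text describes for the hybrid Kalman filter.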
The IM-2 mission, the second of the Nova-C series, was launched on 26 February 2025. It was the company’s second payload delivery mission, with a landing target near the lunar south pole. IM-2 required landing accuracy within 50 m [55]. To meet this requirement, IM-2 relied on two primary measurement technologies. The first is line-of-sight measurement, which detects the centers of surface craters and matches them with a preloaded lunar surface map to determine the position of the lander relative to the terrain; the second is LiDAR terrain matching, where a LiDAR scanner performs real-time topographic mapping of the landing site to ensure landing accuracy.
During the terminal descent phase of IM-1, a failure in the laser altimeter navigation system led to an attitude control anomaly, which caused the lander to topple after touching the surface, resulting in a partial success. Although the overall navigation process of IM-2 was smooth, in the final descent stage, the laser altimeter experienced signal noise and distortion, preventing accurate height readings. Meanwhile, the low solar elevation angle at the lunar south pole created long shadows that interfered with the visual navigation system. As the spacecraft descended, visual discrepancies between actual crater appearances and LRO orbital reference images further impacted optical navigation accuracy. The accumulation of errors caused the lander to deviate from its intended landing point and tip over. These two missions revealed the limited adaptability of visual navigation and laser range systems in dusty, highly dynamic, and extreme-lighting conditions during terminal soft landing, highlighting the need for robust visual perception and multisource redundant data fusion in deep space autonomous navigation systems.
Table 2 summarizes selected deep space exploration missions. As seen, with rapid advancements in aerospace and computer vision technologies, image-based navigation is playing an increasingly critical role in deep space missions and has demonstrated superiority in complex and unknown environments such as asteroid surfaces.
In addition to the technical implementation, the landing accuracy achieved by different navigation strategies also provides an important perspective for comparison. Traditional methods, such as the Apollo program that relied on inertial navigation combined with ground-based tracking and control, typically achieved only kilometer-level landing accuracy. Later, China’s Chang’e-3, -4, and -5 missions adopted autonomous navigation methods based on inertial navigation, supplemented by ranging and velocity correction, with landing accuracy generally ranging from several hundred meters to a few kilometers. Specifically, the landing point of Chang’e-3 deviated by about 600 m from its nominal target [56], while Chang’e-5 exhibited a landing dispersion of approximately 2.33 km [57]. In contrast, recent missions using image-based navigation have demonstrated significantly higher accuracy under favorable conditions. For example, NASA’s Mars 2020 mission achieved landing accuracy in the order of several tens of meters through TRN; JAXA’s SLIM achieved a landing offset of 55 m on the Moon; and Firefly Aerospace’s commercial Blue Ghost lander was designed for landing accuracy within the 100 m scale. However, partial failures in missions such as Intuitive Machines IM-1 (about 1 km), IM-2 (about 400 m), and Japan’s Hakuto-R Mission 2 (about 300 m) highlight the vulnerabilities of image-based navigation under conditions of extreme illumination, dust interference, or in highly dynamic environments. These results indicate that landing accuracy reflects not only the quality of image-based perception but also the effectiveness of the integrated tracking and control loop in the Guidance, Navigation, and Control (GNC) system. This underscores the need for improved robustness and multisource data fusion to ensure stability in future missions. Table 3 presents a comparative summary of landing accuracies between traditional and image-based navigation missions.

2.3. Challenges of Deep Space Image-Based Navigation

Despite its advantages, image-based navigation in deep space missions still faces many challenges, mainly including data scarcity, environmental complexity, difficulties in feature extraction and matching, and limitations in computational resources.

2.3.1. Data Scarcity

Deep space exploration missions usually target distant celestial bodies. When the probe first enters the target area, it is often difficult to obtain a large amount of high-quality image data [58], especially in harsh environments or in target areas that have not been fully surveyed. The surface terrain of many celestial bodies lacks sufficient visual markers or prominent features, resulting in sparse and low-quality image data. For example, on the surface of asteroids or other similar celestial bodies, there may be no obvious geological features or landmarks, resulting in limited useful information in the images. In addition, with the rapid development of deep learning technology in recent years, new challenges have also emerged. In exploration missions of different celestial bodies, especially asteroids or extreme terrains, existing public image datasets often lack sufficient diversity and annotated data [59]. Since deep space image data annotation usually relies on manual analysis and processing by experts, the workload is huge and the cost is high. The images obtained may vary greatly in resolution, quality, and lighting conditions, increasing the difficulty of annotation.

2.3.2. Environmental Complexity

The lighting conditions in deep space environments are often complex and variable. Especially in missions far from the Sun, the low intensity of illumination may lead to insufficient image contrast, or even extremely dark or overexposed images. This directly affects the accuracy of surface feature recognition. In particular, the shadowed parts of craters may be difficult to distinguish under certain lighting conditions. Changes in viewing angles can also cause shape distortion, increasing the difficulty of extracting and matching features [60]. The IM-2 mission exposed the challenges brought about by such extreme lighting changes and terrain features. In the low-light environment of the lunar south pole, long shadows significantly interfered with the feature matching ability of the visual navigation system. At the same time, the appearance of craters in close-range images differed from the reference LRO orbital images, leading to a deviation in the visual solution path from the planned navigation trajectory. These cumulative errors directly affected the accuracy of soft landing terminal positioning and became an important technical cause of mission failure. In addition, the surface environments of different celestial bodies are highly dynamic [61], with factors such as dust, rocks, and climate change. These dynamic obstacles and natural phenomena affect the quality of the image and cause significant differences in images of the same location over time. Therefore, maintaining precise navigation and positioning under such changes has become a major challenge. Finally, noise and image quality issues cannot be ignored. Deep space exploration images are affected by sensor noise and signal attenuation. Low-resolution images of distant celestial bodies are of poor quality, which affects feature extraction and matching. The special reflectance characteristics of the celestial surface further increase the difficulty of image matching [62].

2.3.3. Challenges of Feature Extraction and Matching

A core issue of deep space image-based navigation is how to extract and match surface features. Craters, which are commonly used features in deep space navigation, vary greatly in size on different celestial bodies. Even in the same body, the size, shape, and spatial distribution of craters can be highly uneven. Traditional methods such as edge detection and template matching struggle to adapt simultaneously to features of different scales. Although deep learning-based models have shown excellent capabilities in complex environments, there are still technical difficulties in balancing large-scale and small-scale features [63]. In addition, it is a great challenge to ensure that images taken by the probe can be accurately matched with the surface feature map under different lighting conditions, viewing angles, and resolutions. Traditional methods tend to produce mismatches in complex environments, leading to accumulated localization errors. Although deep learning methods can effectively improve matching robustness, reducing training errors and enhancing the stability of matching algorithms remain key technical problems in image-based navigation [64].

2.3.4. Computational Resource and Real-Time Requirements

In deep space exploration missions, probes are usually equipped with lightweight, low-power computing hardware, which limits their processing capabilities. Image-based navigation technology, especially deep learning algorithms, often requires computational power that probes, unlike ground stations, typically cannot provide. In addition, the navigation system in deep space missions needs to work in real time to respond to changes in dynamic environments. The image navigation system needs not only to complete complex image processing and feature matching but also to estimate and adjust the probe’s position in real time. In the case of high-speed movement or large attitude changes, maintaining positioning accuracy and ensuring stable operation of the navigation system become important technical challenges.

3. Key Technologies of Deep Space Image-Based Navigation

3.1. Surface Feature Extraction

Surface feature extraction is a critical component of deep space image-based navigation, and its accuracy and efficiency directly determine the success of the mission. In deep space exploration, surface features can generally be classified into four categories: natural terrain features, artificial markers, illumination conditions, and image characteristics [65]. In deep space exploration missions, different types of surface features have different applications in navigation, as shown in Figure 8. Natural terrain features include craters, mountains, valleys, etc. Due to their stability and widespread distribution, they are commonly used as navigation landmarks, especially features with visually distinctive shapes. Artificial markers include reflectors installed on the ground, positioning signs, or signals emitted by the probe itself. Illumination conditions and image features refer to variations such as shadows and reflections caused by lighting changes in images, which can serve as auxiliary information to improve the accuracy of surface feature extraction [66].
Mountains, valleys, and other terrain features have a certain navigation value, but their shapes are relatively complex and less stable. They are generally used together with other features to form landmarks. Artificial devices such as reflectors and markers can also serve as auxiliary features for navigation in some missions due to their specific shapes and known spatial positions. However, despite their reliability, artificial devices face limitations such as small quantity, high deployment risk, maintenance difficulty, and high cost, making them difficult to apply widely in deep space navigation.
Craters are undoubtedly one of the most ideal surface features for deep space exploration missions [67]. Compared with other terrain features, the application of craters in deep space image-based navigation offers unique advantages. First, craters generally have stable shapes and distribution patterns, especially on the surfaces of the Moon and Mars, where the morphology of craters has not changed significantly over long celestial histories. Therefore, they can serve as reliable navigation markers for extended periods. The geometric shape of the craters is clear and easily distinguishable from the surrounding environment, improving the accuracy of feature recognition. Their widespread distribution and regularity provide rich geographic information for navigation algorithms, enabling probes to achieve efficient autonomous localization without relying on complex ground-based tracking and control systems.
In crater-based navigation methods, TRN does not require carrying full 3D models or complete digital elevation maps, which significantly reduces the memory and computational demands for onboard databases. Additionally, craters vary widely in scale, from micro-craters to large impact basins. This scale variation allows for effective extraction under different resolutions and supports feature matching and localization tailored to various celestial bodies. For example, on the surfaces of the Moon or Mars, large craters can be used as references for global localization, whereas smaller craters can support fine local navigation.

3.2. Crater Detection Methods

Crater detection algorithms can be divided into two categories: manual detection and automatic detection [68]. In manual detection, the edges of the crater are typically identified first, followed by curve fitting to obtain the crater parameters [69]. Tools such as CraterTools [70] and CSFD Tools [71] are widely used for manual crater mapping.
Automatic crater detection algorithms fall into two main types. There are unsupervised algorithms, which use digital image processing techniques to detect craters, and supervised algorithms, which leverage machine learning or deep learning techniques for crater extraction. The classification of these methods is shown in Figure 9.

3.2.1. Unsupervised Crater Detection Algorithms

Unsupervised crater detection algorithms can be categorized by their techniques and application scenarios, including image processing-based methods, pattern recognition-based methods, and hybrid approaches.
Image processing-based methods detect craters by applying multiple analysis techniques to images. Edge detection approaches assume that visible crater rims create the most prominent structures in the image. These methods use edge extraction techniques such as the Canny edge detector [72] to assemble pixels into curvature-consistent segments that represent parts of the crater rim. These segments are then grouped into elliptical approximations for landmark measurements. In 2003, Cheng et al. [73] proposed a method using heuristics such as interedge gradients and curvature estimation to combine edge segments into sets of points suitable for ellipse fitting. This method also incorporated prior knowledge of global illumination direction to discard edge segments nearly parallel to lighting and to build direction order between segments for robust grouping.
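A minimal sketch of this edge-based pipeline: Canny edges are grouped into contours, ellipses are fitted to sufficiently long contours, and implausibly eccentric fits are discarded. The thresholds and the eccentricity cut are illustrative; real systems add the curvature grouping and illumination-direction heuristics described above.

```python
import cv2

def detect_crater_ellipses(image, min_points=20):
    """Detect candidate crater rims in a grayscale image via Canny edges and ellipse fitting."""
    edges = cv2.Canny(image, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    craters = []
    for c in contours:
        if len(c) < min_points:                       # require enough edge points for a stable fit
            continue
        (cx, cy), (a, b), angle = cv2.fitEllipse(c)   # center, axis lengths, orientation
        if max(a, b) > 0 and min(a, b) / max(a, b) > 0.5:   # discard highly eccentric fits
            craters.append((cx, cy, a, b, angle))
    return craters
```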
Region growing and segmentation methods, another class of image processing techniques, are often applied in crater detection to divide images into regions and identify craters based on regional properties. These methods typically involve grayscale segmentation to extract light and dark regions and then analyze geometrical shape and symmetry to determine crater boundaries. Although simple and efficient under favorable illumination and low background noise, their performance degrades under varying lighting. In 2019, Wei et al. [74] proposed a matching method for shadow and highlight regions, involving three steps: feature prematching, identification of illumination direction, and directional matching. Bright regions were matched one-to-one with shadow regions using a feature match factor and an area ratio distribution.
In 2020, the German Aerospace Center (DLR) [75] developed a terrain-based absolute navigation system. Its crater detection module identifies characteristic adjacent shadows and illuminated regions using image segmentation. A graph is constructed in which each dark region is connected to its two closest bright regions for matching [76]. Similarly, in 2023, the Nova-C team [55] implemented a crater detection mechanism by pairing adjacent bright regions. Detected shadow regions are associated with nearby bright regions, and crater positions are determined via ellipse fitting.
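The sketch below illustrates the shadow/highlight pairing idea shared by these methods: dark and bright regions are segmented by thresholding, and each shadow region is associated with its nearest bright region, whose pair midpoint can be taken as a crater candidate. The thresholds and pairing distance are hypothetical, and the illumination-direction consistency checks used in the cited systems are omitted for brevity.

```python
import cv2
import numpy as np

def pair_shadow_highlight(image, dark_thr=60, bright_thr=190, max_gap=30):
    """Pair adjacent shadow and illuminated regions in a grayscale image (crater candidates under oblique lighting)."""
    _, dark = cv2.threshold(image, dark_thr, 255, cv2.THRESH_BINARY_INV)
    _, bright = cv2.threshold(image, bright_thr, 255, cv2.THRESH_BINARY)
    nd, _, _, dcent = cv2.connectedComponentsWithStats(dark)
    nb, _, _, bcent = cv2.connectedComponentsWithStats(bright)
    pairs = []
    for i in range(1, nd):                            # label 0 is the background
        # A shadow region with a nearby bright region suggests a crater rim and its shadowed floor.
        dists = [np.linalg.norm(dcent[i] - bcent[j]) for j in range(1, nb)]
        if dists and min(dists) < max_gap:
            j = int(np.argmin(dists)) + 1
            pairs.append((tuple(dcent[i]), tuple(bcent[j])))
    return pairs
```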
Texture and statistical feature-based methods detect craters by analyzing texture properties and statistical distributions around craters. These methods rely on texture consistency, brightness distribution, and directional features, as well as the crater’s unique radial or ring-like patterns. In 2012, Bandeira et al. [77] combined shape and texture characteristics for the detection of subkilometer craters in high-resolution planetary images, using histograms of oriented gradients and geometric fitting. In 2015, Wang et al. [78] proposed an efficient detection method that combined sparse enhancement and texture features. Using gray-level cooccurrence matrices and sparse representation to suppress noise, their method showed high robustness and accuracy, especially for craters smaller than 1 km in diameter.
Pattern recognition methods detect craters using known templates or rules. Template matching approaches include sliding-window and correlation-based techniques. In 2017, Pedrosa et al. [79] improved template matching using the Fast Fourier Transform (FFT), efficiently calculating the correlation between the image and the template. Morphological preprocessing steps (e.g., closing, gradient enhancement) helped suppress false positives and improved boundary precision. In 2018, Woicke et al. [80] combined template matching with TRN for real-time crater detection using semicircular and shadow templates, demonstrating high performance under complex terrain and lighting.
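A compact sketch of FFT-accelerated template correlation in the spirit of the approach above: the zero-mean template is correlated with the image via `fftconvolve`, and the strongest response is returned. The morphological preprocessing and multi-scale templates that the cited works rely on are omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

def fft_template_correlation(image, template):
    """Cross-correlate a crater template with an image using FFT-based convolution."""
    t = template - template.mean()
    i = image - image.mean()
    # Correlation equals convolution with the template flipped along both axes.
    corr = fftconvolve(i, t[::-1, ::-1], mode='same')
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak, corr[peak]   # (row, col) of the strongest response and its score
```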
Terrain analysis and mathematical morphology have gained attention in lunar crater detection, especially using DEM data. Terrain analysis based on DEM provides elevation details that are essential for identifying the shape of the crater. In 2018, Chen et al. [81] applied morphological filters and opening/closing operations to analyze topography and classify craters, effectively improving morphological features on different scales.
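The sketch below shows one way to apply grayscale morphology to a DEM, roughly in the spirit of the work cited above: closing fills depressions up to the structuring-element scale, and the residual (closed minus original) highlights crater-like pits. The window size and depth threshold are illustrative and would need tuning to the DEM resolution and elevation units.

```python
import numpy as np
from scipy import ndimage

def crater_candidates_from_dem(dem, size=15, depth_thr=50.0):
    """Flag crater-like depressions in a DEM using grayscale closing."""
    closed = ndimage.grey_closing(dem, size=(size, size))
    depression = closed - dem                 # positive where terrain dips below its surroundings
    mask = depression > depth_thr             # depth threshold in the DEM's elevation units
    labels, n = ndimage.label(mask)
    centers = ndimage.center_of_mass(depression, labels, range(1, n + 1))
    return centers, depression
```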
To improve detection accuracy, multifeature fusion methods have been widely adopted. These approaches combine edge, texture, grayscale, and other features into a unified feature vector. In 2003, Chapman et al. [82] proposed a composite method integrating cross-correlation, edge detection, annular kernels, and circular Hough transforms to detect craters of varying sizes. In 2008, Singh et al. [83] binarized images to isolate regions of interest and used ellipse fitting to refine edge-based results.
Multi-algorithm fusion is also common. At the decision level, different detection results are combined by voting or weighted averaging. In 2010, Krøgli [84] applied decision-level fusion by integrating results from Hough transform, template matching, and symmetry analysis, using majority voting and weighted means to select reliable crater candidates. This approach is effective in geologically complex environments, compensating for the limitations of single algorithms and reducing false detections.
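As a minimal illustration of decision-level fusion, the sketch below clusters crater candidates reported by several detectors and keeps those whose weighted support exceeds a vote threshold. The clustering radius, weights, and threshold are hypothetical; the cited work uses more elaborate majority-voting and weighted-mean schemes.

```python
import numpy as np

def fuse_detections(detection_sets, weights, radius=10.0, vote_thr=1.5):
    """Decision-level fusion: keep crater candidates supported by enough weighted detectors."""
    all_pts = [(np.asarray(p, dtype=float), w)
               for pts, w in zip(detection_sets, weights) for p in pts]
    used = [False] * len(all_pts)
    fused = []
    for i, (p, w) in enumerate(all_pts):
        if used[i]:
            continue
        cluster, score = [p], w
        for j in range(i + 1, len(all_pts)):
            if not used[j] and np.linalg.norm(all_pts[j][0] - p) < radius:
                cluster.append(all_pts[j][0])
                score += all_pts[j][1]
                used[j] = True
        if score >= vote_thr:                 # accept only candidates with sufficient weighted support
            fused.append(np.mean(cluster, axis=0))
    return fused
```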
Multisensor fusion also plays a vital role. By combining data from optical cameras, radar, and LiDAR, multimodal information improves detection accuracy and robustness. At the data level, sensor outputs are integrated during acquisition to form a comprehensive dataset. In 2010, Degirmenci et al. [85] combined Mars DEM and optical images to develop a robust crater detection algorithm using multisource data fusion. In 2021, Wang et al. [86] integrated traditional rule-based methods with multisource data to detect lunar craters, significantly improving efficiency and the identification of small targets against complex backgrounds.
Unsupervised crater detection algorithms are of great value in deep space exploration, especially when manual annotation is not feasible. However, these methods tend to be sensitive to noise and complex backgrounds, potentially leading to inaccurate results. The lack of evaluation standards also makes performance difficult to quantify, and subtle features of the crater may be missed in challenging terrain. Table 4 summarizes the strengths and weaknesses of these detection methods.

3.2.2. Crater Detection Algorithms Based on Machine Learning

Compared to deep learning algorithms, traditional non-neural network machine learning methods do not rely on large-scale datasets or complex network architectures but instead emphasize the manual extraction and optimization of features. These algorithms perform well in scenarios where datasets are small, computational resources are limited, or model interpretability is critical. Common traditional machine learning methods include Support Vector Machine (SVM), Random Forest (RF), Decision Tree (DT), and Principal Component Analysis (PCA).
SVM is a classical classification model that separates data into different classes by finding the optimal separation hyperplane. It is particularly effective for high-dimensional datasets and can handle complex decision boundaries, which enables it to effectively distinguish craters from background regions in crater detection tasks. In 2009, Ding et al. [87] used the Census transform to convert original images into grayscale histogram feature vectors. These vectors were encoded for training image samples and then fed into an SVM classifier to detect craters on the lunar surface.
DT and RF are closely related tree-based approaches, with RF belonging to the family of ensemble learning (EL) methods. DT partitions the dataset into multiple subsets and builds a tree-like structure, which is suitable for datasets with clear class boundaries. Owing to its simplicity and interpretability, DT can effectively classify craters using geometric and texture features of the images [88,89]. RF builds multiple decision trees and performs classification by voting. It can handle large numbers of features and samples and is less prone to overfitting, making it well suited to the large-scale remote sensing data used in crater detection. Liu et al. [90] encoded a lunar DEM sample dataset using 32 features describing crater geometry, depth, and density and used an RF classifier to distinguish between “primary craters” and “secondary craters”.
PCA is a linear dimensionality reduction technique that reduces data dimensionality by extracting principal components. It is often used in remote sensing image processing for denoising and data compression, reducing the computational load and improving the efficiency of subsequent classification algorithms. Takino et al. [50] proposed using PCA for crater detection by generating principal component vectors from circular templates; crater parameters were then determined using a correlation matrix between the templates and the target images. This method is particularly suited to scenarios with limited computational resources.
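The sketch below conveys the spirit of PCA-based template analysis: principal components are learned from ring-shaped crater templates, and candidate windows are scored by their reconstruction error in that subspace. The templates, component count, and scoring rule are illustrative assumptions rather than the cited method.
```python
# Illustrative sketch of PCA over circular crater templates.
import numpy as np
from sklearn.decomposition import PCA

def ring_template(size=32, radius=10, width=3):
    """Flattened binary ring of the given radius, mimicking an idealized crater rim."""
    yy, xx = np.mgrid[:size, :size] - size // 2
    r = np.sqrt(yy ** 2 + xx ** 2)
    return (np.abs(r - radius) < width).astype(float).ravel()

templates = np.stack([ring_template(radius=r) for r in range(6, 14)])
pca = PCA(n_components=4).fit(templates)

def crater_score(window: np.ndarray) -> float:
    """Low reconstruction error -> window resembles the circular template subspace."""
    v = window.ravel()[None, :]
    recon = pca.inverse_transform(pca.transform(v))
    return float(np.linalg.norm(v - recon))

# Usage: a ring-like window scores low (crater-like); random noise scores higher.
print(crater_score(ring_template(radius=9).reshape(32, 32)))
print(crater_score(np.random.default_rng(0).uniform(0, 1, (32, 32))))
```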
Although these traditional machine learning methods may not rival deep learning algorithms in handling large-scale datasets, they still provide high accuracy in many practical applications. Their greater interpretability and lower computational demands ensure that traditional, non-neural network machine learning methods remain highly relevant in remote sensing-based crater detection tasks.

3.2.3. Crater Detection Algorithms Based on Deep Learning

Crater detection methods based on deep learning can be categorized into three main types: semantic segmentation, object detection, and hybrid methods.
(1)
Crater Detection Based on Semantic Segmentation
Semantic segmentation is a pixel-level image classification task aimed at assigning a class label to each pixel in the image. In crater detection tasks, semantic segmentation methods can extract crater regions from the background, enabling precise analysis of crater shape, size, and distribution. Given variations in crater morphology and the influence of factors such as lighting, shadows, and terrain, semantic segmentation methods typically provide fine-grained results and can efficiently delineate crater boundaries in complex scenes. In recent years, classical deep learning architectures such as U-Net [91] and its variants [92], which are based on Convolutional Neural Networks (CNNs), have been widely applied to crater detection tasks, as shown in Figure 10.
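For reference, the following is a minimal U-Net-style encoder–decoder in PyTorch that outputs a per-pixel crater logit map; the depth and channel widths are illustrative and far smaller than those of the published models discussed below.
```python
# Minimal U-Net-style encoder-decoder for binary crater masks (illustrative widths).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 16), conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)                        # per-pixel crater logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection
        return self.head(d1)

logits = TinyUNet()(torch.randn(1, 1, 128, 128))  # -> (1, 1, 128, 128) crater mask logits
```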
In 2019, Silburt et al. [93] proposed DeepMoon, a deep learning-based lunar crater identification system. This pioneering work used the U-Net framework for crater detection, trained on the Head [94] and Povilaitis [95] crater catalogs, and qualitatively evaluated the model’s generalizability to bodies beyond the Moon. In the same year, Lee et al. [96] also adopted the U-Net architecture, applying it to a fused DTM constructed from MOLA and HRSC datasets and successfully detecting 75% of the craters listed in the Robbins catalog [69]. That same year, Latte et al. [97] proposed a Crater U-Net framework based on the U-Net architecture, exploring the effect of various parameters using THEMIS thermal infrared data.
With the introduction of advanced network modules and training techniques, numerous U-Net variants have been applied to crater detection. In 2020, Wang et al. [98] proposed a U-Net architecture incorporating residual connections for lunar crater detection, achieving favorable results for overlapping craters. Lee et al. [99] proposed the ResUNet framework for detecting craters from optical images and DTMs. In 2021, Jia et al. [92] proposed a nested attention-aware U-Net (NAU-Net), combining UNet++ with attention mechanisms to improve the propagation of semantic information and achieving better performance in lunar crater detection. The model was trained in the Keras framework on 30,000 images from the LRO dataset for 20 epochs, reaching a 77.2% F2-score, and the experiments showed that the network prioritizes recall in crater identification. In 2023, Jia et al. [100] developed an efficient small-scale lunar crater extraction framework based on Chang’e-5 LRO NAC grayscale imagery, introducing AE-TransUNet+, a hybrid transfer network built on a convolutional block attention module and a depth-wise separable convolution enhancement module. In 2024, Giannakis et al. [101] proposed a global crater identification system built on Meta AI’s Segment Anything Model (SAM), using its masks for rapid image segmentation. Although not trained on Mars Reconnaissance Orbiter pseudocolor imagery, the method successfully identified most craters with low false-positive and miss rates.
In 2025, Mohite et al. [102] proposed a lunar crater detection method combining Residual U-Net-34/50 with template-matching algorithms. This approach takes DEM data as input and enhances the U-Net’s ability to delineate crater boundaries in complex terrain via deep residual structures, while integrating template matching to improve detection accuracy. The experimental results demonstrated superior performance in detecting craters with blurred edges or overlaps, offering a novel solution for high-precision crater detection in complex scenarios. Also in 2025, Li et al. [103] proposed LCDNet, a lunar crater detection network based on DEM data. The method uses an encoder–decoder structure and introduces multiscale feature fusion together with a Swin Transformer module, significantly improving the detection of craters of varying scales and in complex terrain. LCDNet was trained with an SGD optimizer for 12 epochs, with the learning rate reduced to one-tenth of its previous value at each epoch. Compared with traditional U-Net and its variants, LCDNet achieved superior IoU and precision, especially in challenging scenarios involving blurred boundaries and heavy crater overlap.
(2)
Crater Detection Based on Object Detection
The objective of object detection is to identify objects within an image and mark their locations, typically using bounding boxes. In crater detection, object detection methods can quickly locate craters and provide information such as their position and size. Unlike semantic segmentation, object detection not only focuses on the precise boundaries of the target but also emphasizes its spatial location in the image. These methods offer significant advantages in large-scale image processing and real-time detection tasks and are thus widely employed in crater detection.
Object detection methods are generally divided into region proposal-based methods and single-stage detection methods. Region proposal-based approaches include R-CNN and its variants, such as Fast R-CNN [104] and Faster R-CNN [105], which first generate candidate regions and then perform classification and regression on them, as illustrated in Figure 11. Single-stage detection methods such as YOLO [106], SSD [107], and CenterNet [108] regress object information directly from the image in a single pass and offer high efficiency. The architectures used in crater detection are shown in Figure 12 and Figure 13. Recently, Transformer-based architectures have been introduced into object detection, leading to novel methods such as RT-DETR [109] that exploit global context modeling and self-attention mechanisms to improve detection accuracy for complex scenes and small objects. The architecture is shown in Figure 14.
Region proposal-based detection methods first narrow down regions likely to contain targets and then classify and regress these regions. They often rely on external algorithms to generate candidate regions and use CNNs for object classification and bounding box regression. Although computationally expensive, they typically provide high detection accuracy. In 2020, Yang et al. [114] used the R-FCN framework to detect lunar craters and estimate their ages, training on the IAU crater catalog with a transfer learning-based approach. In the same year, Ali-Dib et al. [115] applied Mask R-CNN to DEM images of craters and improved the detection rate to 87% through ellipse fitting. In 2024, Liu et al. [116] applied an enhanced Faster R-CNN to daytime imagery from Chang’e-5 Kaguya TC, focusing on the automatic detection of small-scale craters and creating a new database. That same year, Tewari et al. [117] used DEM images from LRO and SELENE TC, proposing a semi-supervised model combining an adaptive edge-preparation unsupervised model and a cascaded Mask R-CNN. Their method extracted crater edges more accurately and detected previously undiscovered craters.
A key advantage of single-stage object detection methods is their efficiency, particularly in real-time detection scenarios. Unlike region proposal-based methods, single-stage approaches regress object positions and categories directly from the input image, eliminating the need for region proposals. This significantly reduces computation and increases speed, making them well suited to large-scale datasets and applications requiring real-time performance. In 2023, La Grassa et al. [112] proposed a new high-resolution object detection model that first employed residual dense block feature selection (RDBF) followed by YOLOLens (a generator coupled with YOLOv5). YOLOLens5x outperformed existing models in accuracy but required longer training and inference times and more resources. In their experiments, YOLOLens adopted low-level data augmentation strategies such as slight scaling, translation, and left/right flipping, and the model was trained for 300 epochs, demonstrating the effectiveness of mosaic augmentation. An interesting observation was that model performance depended strongly on the number of ground-truth samples and was influenced by object size, highlighting the data-sensitive nature of crater detection tasks. That same year, Kilduff et al. [118] combined YOLOv5-based object detection with the POV-Ray photorealistic renderer for lunar surface modeling, successfully detecting craters of varying scales. In 2024, Zhang et al. [111] proposed a transfer learning strategy using an anchor-free deep learning model based on CenterNet. They employed a stacked hourglass network to aggregate multilevel features, enhancing crater center estimation and using feature heatmaps to locate craters, thereby eliminating the need for anchor boxes. Their model improved generalization by exploiting crater similarity across regions, enabling the detection of diverse crater types. In the work of Zhang et al., the 100 training epochs were divided equally into a pretraining stage with a frozen backbone and a fine-tuning stage, which helped the model converge effectively and achieve robust detection performance.
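To illustrate the heatmap-based, anchor-free idea used by CenterNet-style detectors, the sketch below decodes crater centers from a heatmap via max-pooling-based local-maximum suppression followed by top-k selection; the shapes, threshold, and random input are illustrative and do not reproduce any cited model’s exact decoder.
```python
# Sketch of CenterNet-style keypoint decoding from a crater-center heatmap.
import torch
import torch.nn.functional as F

def decode_centers(heatmap: torch.Tensor, k: int = 50, thresh: float = 0.3):
    """heatmap: (1, 1, H, W) with values in [0, 1]; returns (x, y, score) tuples."""
    pooled = F.max_pool2d(heatmap, kernel_size=3, stride=1, padding=1)
    peaks = heatmap * (pooled == heatmap)              # keep only local maxima
    scores, idx = torch.topk(peaks.flatten(), k)
    H, W = heatmap.shape[-2:]
    ys, xs = idx // W, idx % W
    return [(int(x), int(y), float(s)) for x, y, s in zip(xs, ys, scores) if s > thresh]

centers = decode_centers(torch.rand(1, 1, 128, 128))   # placeholder heatmap
```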
With the introduction of Transformer architectures into crater detection, their advantages in modeling long-range dependencies and global context have become increasingly prominent. The self-attention mechanism allows Transformers to capture global image information, which is especially beneficial for detecting small objects in complex scenes. In 2024, Guo et al. [113] proposed Crater-DETR, a novel object detection Transformer variant tailored for crater detection. Crater-DETR was trained for 200 epochs with the Adam optimizer, using 800 images drawn from (1) 20,000 DACD training images (both daytime and evening) and (2) 9379 public AI-TOD images. The model computes cross-attention over local features at multiple scales and introduces contextual regional attention upsampling (CRAU) and pooling (CRAP) operators to address the problem of missing small-crater features. Experimental results showed that Crater-DETR achieved state-of-the-art performance in small crater detection.
Based on the above analysis, a comparison between semantic segmentation and object detection-based methods is summarized in Table 5.
(3)
Hybrid Methods Combining Traditional Algorithms with Deep Learning
In hybrid methods that combine traditional algorithms with deep learning, potential crater regions are initially identified using non-deep learning techniques, such as sliding windows and selective search. These candidate regions are then processed by deep learning-based networks for classification and refinement. In 2019, Emami et al. [119] proposed such a method, integrating unsupervised techniques with CNNs: Hough transforms, light–shadow region detection, convex hull grouping, and interest point algorithms were used to locate candidate crater regions, which were subsequently passed to a CNN-based classification network to distinguish crater from non-crater categories. Although hybrid detection methods can effectively leverage the advantages of both traditional and deep learning approaches, they rely largely on the selection of candidate regions by non-deep learning methods, resulting in lower efficiency than end-to-end deep learning approaches.
Deep learning demonstrates significant advantages in crater detection. First, it enables a high degree of automation by automatically learning features, without the need for manually designed feature extraction algorithms. Second, deep learning models exhibit strong adaptability, handling craters of varying scales and morphologies. In addition, they possess the capacity to efficiently process large-scale datasets, which makes them particularly suitable for wide-area remote sensing image analysis. However, deep learning also presents certain limitations. Chief among these is the heavy reliance on large volumes of annotated data, which poses a significant challenge in terms of data acquisition and labeling. Furthermore, the training process demands substantial computational resources, especially when handling high-resolution imagery, and requires powerful hardware such as GPUs. Lastly, the “black box” nature of deep learning models makes their decision-making processes difficult to interpret, affecting model transparency and trustworthiness. Therefore, in practical applications, a careful balance between model accuracy and operational feasibility must be considered.

3.3. Crater Matching Algorithms and Localization

Matching craters from a single image essentially constitutes a complex pattern recognition problem. The core task is to compare the craters extracted from the image with those in an existing large-scale crater database, thus accurately identifying the corresponding craters. Depending on whether the initial pose of the spacecraft is known, crater matching is generally classified into two categories [120]: crater matching with prior pose information and crater matching without prior pose information (also known as Lost-in-Space (LIS) crater matching). A comparison of these two methods is shown in Table 6.
In cases with prior pose information, the state estimate is used to query the crater catalog and predict the crater centers expected to appear within the camera field of view (FOV). This narrows the search range within the catalog and reduces the complexity and computation time of the matching process. In contrast, LIS matching is employed when no prior spacecraft state information is available. It relies on identifying relationships among craters that are invariant under projective transformation, enabling crater identification without prior knowledge. This is particularly valuable for filter (re)initialization and for providing measurements that are independent of the state estimate or serve validation purposes.

3.3.1. Crater Matching Algorithms with Prior Pose Information

When the initial pose information is known and the crater scale variation is minimal, the detected craters and the corresponding projected ellipses from the catalog should ideally align. In such scenarios, methods such as distance constraints and template matching are typically employed. When there are larger disturbances in the initial position or orientation, matching is generally performed using geometric invariants formed by multiple craters, commonly through triangular matching. In later studies, hybrid descriptor-based and descriptor-free methods have also been proposed.
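As a concrete illustration of how a prior pose narrows the search, the following sketch projects catalog crater centers through an assumed pinhole camera model to predict which craters should appear in the image; the intrinsics, pose, and three-crater catalog are illustrative values only.
```python
# Sketch of prior-pose crater prediction: pinhole projection of catalog crater centers.
import numpy as np

K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])             # assumed camera intrinsics
R = np.eye(3)                               # body-to-camera rotation from the state estimate
t = np.array([0.0, 0.0, 2000.0])            # camera roughly 2 km above the surface

def predicted_craters(catalog_xyz, image_size=(1024, 1024)):
    """Return pixel coordinates of catalog craters expected inside the image."""
    cam = (R @ catalog_xyz.T).T + t         # catalog points in the camera frame
    cam = cam[cam[:, 2] > 0]                # keep points in front of the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < image_size[0]) & \
             (uv[:, 1] >= 0) & (uv[:, 1] < image_size[1])
    return uv[inside]

catalog = np.array([[0.0, 0.0, 0.0], [150.0, -80.0, 5.0], [5000.0, 5000.0, 0.0]])
print(predicted_craters(catalog))           # the first two craters fall inside the FOV
```
Detected craters can then be associated with the predicted ones by nearest-neighbor or ellipse-similarity tests, as in the methods discussed below.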
In early crater matching algorithms, geometric descriptors were often designed to identify corresponding craters by comparing the similarity of their descriptors [121]. In 2001, Leroy et al. [122] proposed a vision-based localization system for spacecraft landing on asteroids. This method used tensor voting and oriented normals to extract ellipses of craters at multiple scales from asteroid images, followed by matching the projected 3D model using pose estimation. It emphasized crater detection and ellipse fitting and required an accurate prior pose for successful matching. In 2007, Weismuller et al. [123] introduced a matching algorithm based on relative parameters between craters, using a database of ordered pairs based on distances between craters.
In 2010, Clerc et al. [124] employed the RANSAC algorithm to eliminate mismatched crater pairs from the matched set, offering strong robustness to translation and rotation. In 2012, Harada et al. [125] proposed the Evolutionary Triangle Similarity Matching (ETSM) method for the JAXA SLIM mission [126,127], which estimated spacecraft position by matching crater graphs with those extracted from KAGUYA satellite imagery using triangle angle similarity. To handle image-library mismatch due to positional differences, constraints such as edge length ratios and distances between triangle centroids were added. However, the method’s reliance on locally defined triangles limited robustness due to neglect of global topology.
In 2013, Yu et al. [128] proposed a visual navigation method for planetary landings using crater detection and matching. A winner-takes-all strategy matched detected craters against a database, further constrained by 3D crater parameters to reduce false matches. In 2016, Lu et al. [129] corrected the shapes of elliptical craters into metric circles through preprocessing and rectification, then applied triangle similarity for matching (Figure 15a). In 2017, Ishii et al. [51] proposed the Triangle Similarity Matching (TSM) method for the SLIM mission. Crater triangles and their similarities were matched between onboard images and pregenerated crater triangle databases, significantly reducing the search space and achieving high accuracy in under 3 s. In 2020, Shao et al. [130] introduced an arc band descriptor for crater matching, using Gaussian pyramids and edge detection to extract arc features (Figure 15b). In the same year, Maass et al. [131] proposed a direct matching method based on ellipse similarity under accurate pose estimates (Figure 15c); however, its performance degraded rapidly with increasing noise or pose deviation.
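The following sketch captures the basic triangle-similarity idea underlying several of the methods above: sorted interior angles of crater-center triplets serve as rotation- and scale-invariant descriptors, and image triangles are matched to catalog triangles by descriptor distance. The brute-force triplet enumeration and the tolerance value are simplifications for clarity, not the cited implementations.
```python
# Illustrative sketch of triangle-similarity crater matching via sorted interior angles.
import itertools
import numpy as np

def triangle_descriptor(p1, p2, p3):
    """Sorted interior angles (radians) of the triangle p1-p2-p3; similarity-invariant."""
    a, b, c = (np.linalg.norm(p2 - p3), np.linalg.norm(p1 - p3), np.linalg.norm(p1 - p2))
    angles = [np.arccos(np.clip((b**2 + c**2 - a**2) / (2 * b * c), -1, 1)),
              np.arccos(np.clip((a**2 + c**2 - b**2) / (2 * a * c), -1, 1))]
    angles.append(np.pi - sum(angles))
    return np.sort(angles)

def match_triangles(image_craters, catalog_craters, tol=0.02):
    """Return (image_triplet, catalog_triplet) index pairs whose descriptors agree within tol."""
    catalog = [(idx, triangle_descriptor(*[catalog_craters[i] for i in idx]))
               for idx in itertools.combinations(range(len(catalog_craters)), 3)]
    matches = []
    for idx_img in itertools.combinations(range(len(image_craters)), 3):
        d_img = triangle_descriptor(*[image_craters[i] for i in idx_img])
        for idx_cat, d_cat in catalog:
            if np.max(np.abs(d_img - d_cat)) < tol:
                matches.append((idx_img, idx_cat))
    return matches

cat = [np.array(p, float) for p in [(0, 0), (10, 2), (4, 9), (12, 12)]]
img = [2.0 * p + 1.0 for p in cat[:3]]            # scaled + translated view of 3 craters
print(match_triangles(img, cat))                   # recovers the (0, 1, 2) correspondence
```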
In 2023, Thrasher et al. [55] utilized triangulation and projection of crater triangle patterns to determine the spacecraft’s line of sight toward potential matched craters, validating them through reprojection. In 2024, Jin et al. [132] proposed a hybrid descriptor method, building on Lu’s work [129], that combines geometric and region-based descriptors and enhances match robustness through weighted fusion and nearest-neighbor matching. Chng et al. [133] introduced a descriptor-free method based on a projection cone alignment solver and geometric verification, which directly uses the crater ellipse geometry of the imaging projections to align matches (Figure 15d), reducing memory use by two to four orders of magnitude and outperforming descriptor-based methods in noisy or challenging imagery.
Figure 15. Crater matching methods with prior pose (redrawn based on [129,130,131,133]).
In summary, crater matching with a prior pose is highly dependent on the accuracy of initial pose estimation. When camera position and orientation are accurate, direct methods such as distance constraints, nearest-neighbor matching, and RANSAC can achieve reliable results. Under pose disturbances, crater descriptors become essential, with multicrater invariants, especially triplet-based descriptors, being the most widely used because of their robustness and reliability. Although increasing the tuple size improves resilience, it also drastically increases computational complexity (e.g., 2000 craters generate about 1.3 × 10⁹ triplets), necessitating pruning and preprocessing. Triplet matching is simple to implement, but it risks false matches and lacks correction mechanisms. Hybrid and descriptor-free methods offer promising alternatives, expanding the landscape of crater matching under prior pose constraints.

3.3.2. Crater Matching Algorithms Without Prior Pose

In the absence of prior pose information, crater matching algorithms must be more intelligent and adaptable in order to accurately match craters in the image with records from global crater catalogs. This capability is especially critical for autonomous lunar landing missions, as it allows high-precision localization without external navigation assistance. The key to Lost-in-Space (LIS) matching lies in constructing geometric invariants, typically projective invariants of conic curves. These descriptors are based on crater contours rather than crater center coordinates, offering greater robustness.
In 2003, Cheng et al. [73] introduced a landmark-based autonomous navigation scheme for the NEAR mission. The system combined cross-correlation matching, context-based matching, and projective conic invariant crater matching, achieving a localization accuracy of 100 m after one hour of processing. In 2005, Cheng et al. [134] applied the projective conic invariant method to a Mars landing mission using invariants derived from two pairs of coplanar conic curves. Although this technique did not rely on spacecraft altitude or attitude estimates, crater parameter uncertainties and terrain relief made it prone to mismatches. In 2010, Hanak et al. [135] proposed a triangle-based matching algorithm for low-lunar-orbit conditions. They used a six-dimensional descriptor composed of cosine angles, diameter ratios, and the rotation direction of triangles formed from crater centers; candidate triangles were compared against a precomputed catalog, and false matches were filtered using a probabilistic threshold. In 2018, Park et al. [136] developed a new crater triangle matching method based on projective and permutation (p2) invariants. They used six groups of five coplanar points to calculate six invariants, resulting in a 30-element invariant vector for matching (Figure 16a). Although robust, the method incurred high computational costs. In 2020, the German Aerospace Center introduced a crater depth estimation method [131] that constructs projection cones from crater ellipses in images and identifies intersecting circular planes (Figure 16b). Although effective for depth estimation, this method had low matching rates and required integration with other methods. In 2021, Doppenberg [137] redefined the p2 invariants for crater triplet matching, employing seven projective invariants from three coplanar ellipses; matching accuracy was enhanced using RANSAC and reprojection verification. In 2022, Xu et al. [138] further improved robustness and speed by introducing dynamic threshold filtering and iterative pyramid matching.
In summary, LIS crater matching typically relies on the projective invariant theory of conic curves, utilizing crater contour information instead of center coordinates, which enhances robustness. According to Christian [120], d coplanar craters generate 5d − 8 invariants, whereas non-coplanar ones produce 3d − 6 invariants. Future research will likely focus on more robust descriptors, reduced computational complexity, and more effective matching validation strategies.
In recent years, deep learning has been introduced into crater matching. Wang et al. [139] proposed the CraterIDNet system, an end-to-end framework integrating detection and identification. It uses a novel grid pattern layer that converts the geometric layout and scale information of craters into a 2D grayscale grid image, exhibiting scale and rotation invariance. A Convolutional Neural Network then classifies the grid pattern and directly outputs the crater’s index from the catalog. This method avoids traditional dependence on feature construction and prior pose, offering high fault tolerance and real-time capability. With data augmentation strategies, the system demonstrated strong robustness under complex observational conditions.
Although deep learning offers new opportunities for crater matching through its automatic feature learning and adaptability to complex scenarios, its application remains limited. Deep models effectively extract features elusive to traditional descriptors, improving matching precision and robustness. However, these models often lack interpretability and require significant computational resources, posing challenges for real-time onboard navigation.
Currently, crater recognition is usually performed in a two-stage framework—detection followed by matching—which, while modular and interpretable, introduces error propagation and performance bottlenecks. False positives or missed detections in the first stage degrade matching accuracy in the second. End-to-end deep learning frameworks are emerging to address this issue by optimizing detection and matching jointly. These models can fully exploit contextual information and improve recognition and matching performance under occlusion or deformation, which is crucial for spacecraft autonomy.
Future crater matching research should focus on integrating deep learning into end-to-end frameworks while addressing model interpretability and computational efficiency. Such advancements will drive the development of robust and efficient autonomous navigation systems for deep space exploration, which are critical to mission success.

4. Discussion and Future Directions

With deep space exploration missions expanding toward increasingly distant celestial bodies, the demand for advanced image-based navigation technologies is becoming more urgent. Deep space probes must operate autonomously in extreme environments to perform complex tasks such as localization, obstacle avoidance, and path planning. As such, the development of deep space image navigation technologies is expected to emphasize greater autonomy, intelligence, real-time performance, and accuracy. Future research and development can be broadly categorized into the following directions.

4.1. Intelligent Evolution of Image Navigation Systems

The rapid advancement of artificial intelligence, particularly deep learning, has significantly accelerated progress in deep space image navigation. Traditional image processing and feature matching methods exhibit limitations when faced with complex environmental conditions such as varying illumination, perspectives, and resolutions. In contrast, deep learning algorithms can automatically learn and extract complex features from large-scale data, improving the accuracy of surface feature extraction and adapting to dynamic planetary environments.
Considering the long operational cycles and sparse landmark databases typical of deep space missions, image navigation systems urgently require onboard self-learning and incremental mapping capabilities. By integrating multistage data sources (such as orbital remote sensing, close-range imaging, and descent images) and incorporating weakly supervised or self-supervised learning, the system can dynamically update feature representations and map structures. This alleviates issues related to the drift of landmark appearance and model degradation.
Furthermore, integrating graph optimization and feature relocalization enables incremental expansion of landmark databases while maintaining structural consistency, thereby enhancing the long-term stability and adaptability of the navigation system in unstructured and non-cooperative environments.
In addition to perception-driven deep learning methods, reinforcement learning (RL) and model-free approaches have recently emerged as promising directions for intelligent navigation. Unlike deep learning, which primarily enhances feature perception and environment representation, RL focuses on sequential decision making and adaptive control under uncertainty. Recent image-based deep reinforcement meta-learning further closes the loop from vision to control. For example, Q-learning and policy gradient algorithms have been applied to optimize planetary landing trajectories, hazard avoidance, and resource-constrained maneuver planning, demonstrating the ability to adaptively refine policies through interaction with simulated environments [140,141]. Moreover, model-free RL strategies have shown potential for terrain-relative navigation in lunar and Martian scenarios, where robustness against unmodeled dynamics is critical [142,143]. In addition, meta-RL policies exploit internal memory to adapt online to distribution shifts in gravity, mass properties, or illumination without explicit full-state re-estimation, with training commonly leveraging photorealistic rendering over LRO-derived DTMs to mitigate sim-to-real gaps [4,144].
Future intelligent navigation systems are therefore likely to benefit from a hybrid paradigm, in which deep learning-based vision systems provide robust perception and mapping, while RL contributes to adaptive planning and real-time decision making. This integration can improve both the accuracy of environmental understanding and the adaptability of spacecraft behavior, ultimately advancing the autonomy and resilience of deep space navigation.

4.2. Enhanced Robustness in Complex Environments

In deep space missions, image-based navigation systems operate under extremely challenging imaging conditions, such as strong illumination contrasts, shadow occlusion, low-albedo terrain, and interference from dust or plumes. These factors directly affect the stability of surface feature extraction and the reliability of feature matching, which become particularly critical during descent and landing phases. In the IM-2 mission, for example, long shadows caused by the low solar elevation angle severely interfered with crater feature matching and ultimately led to localization errors. Such cases illustrate that robustness issues arise not only from external environmental factors but also from whether the algorithms themselves can remain reliable under diverse and dynamic conditions.
Recent research has made progress in enhancing robustness. Low-light enhancement, HDR compression, and illumination-invariant feature descriptors have extended the operational range of visual sensors, and semantic-aided recognition and deep learning-based keypoint detection have demonstrated greater resilience under weak texture or partial occlusion. Meanwhile, self-supervised and cross-modal learning approaches provide representations less sensitive to environmental variability. Together, these methods lay an important foundation for stable feature detection, matching, and localization in deep space environments.
From our perspective, robustness should not be regarded as an auxiliary performance indicator, but rather as a core requirement that permeates the entire process of feature extraction, feature matching, and localization. Future navigation systems should adopt a layered and adaptive framework; lightweight visual–inertial fusion can maintain continuous navigation under nominal conditions, while more advanced enhancement or cross-modal modules can be dynamically activated when the confidence of features or matching declines, thereby ensuring stability and accuracy in critical mission phases. At the same time, the evaluation of robustness should not rely solely on average error but should incorporate safety-related metrics such as recovery time after visual degradation, whether pose or position uncertainty exceeds thresholds during descent, and the magnitude of accumulated drift under dynamic disturbances.
In summary, robustness in complex environments is not an isolated requirement but an integral part of the core chain of image-based navigation. By organically combining improvements across feature extraction, matching, and localization, and by introducing adaptive mechanisms together with risk-oriented evaluation criteria, future deep space navigation systems will be able to achieve stable performance in increasingly complex and unpredictable environments.

4.3. Hardware Evolution for Autonomous Navigation Systems

To efficiently run complex image navigation algorithms on resource-constrained space platforms, the hardware system must balance performance, power consumption, and integration. Future hardware will trend toward high-performance, low-power computing platforms, such as RISC-V-based aerospace processors, space-grade GPUs, and FPGA/ASIC accelerators. Onboard AI chips will become critical enablers for intelligent navigation, including neural network inference chips (e.g., TPU-like architectures) and space-grade AI SoCs.
These advances will enable image processing, path planning, and decision making to be carried out directly on the spacecraft, reducing reliance on ground control and improving response efficiency. In terms of system architecture, highly integrated and modular designs will become the norm, including unified layouts of cameras, computing units, storage, and communication modules, which improve system reliability and mission adaptability.
In practice, however, the adoption of advanced hardware faces several constraints. Radiation hardening and long-term reliability often lag behind commercial devices by one or two technology generations, meaning that “state-of-the-art” space processors may not match terrestrial performance. Moreover, higher computational density inevitably increases thermal load and complicates spacecraft-level power budgeting. Therefore, a balanced design may rely on a heterogeneous computing paradigm: lightweight, radiation-hardened CPUs for baseline autonomy and mission safety complemented by reconfigurable accelerators (FPGA/ASIC) that can be selectively activated for high-demand vision tasks. From a system engineering perspective, modularity should also extend to fault detection, isolation, and recovery (FDIR), ensuring that the failure of an AI accelerator does not compromise the entire GNC chain. We believe that the true innovation will not only lie in faster chips but in co-designing algorithms with hardware constraints, e.g., pruning and quantizing neural networks for in-orbit deployment or developing reinforcement learning policies that can be executed within strict timing guarantees.
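As a small, hedged example of such algorithm–hardware co-design, the sketch below applies magnitude pruning followed by dynamic INT8 quantization to a toy PyTorch classifier before deployment; the model, sparsity level, and precision target are illustrative assumptions rather than a flight-qualified workflow.
```python
# Sketch of pruning + dynamic INT8 quantization for a lightweight onboard classifier.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 2))

# 1) Prune 50% of the smallest-magnitude weights in each linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")        # make the sparsity permanent

# 2) Quantize weights to INT8 for a smaller memory footprint and faster CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    logits = quantized(torch.randn(1, 256))   # e.g., crater / non-crater scores
```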

4.4. Cross-Mission Adaptability and Multi-Body Generalization

The high diversity of deep space missions demands that image navigation systems possess strong adaptability across celestial bodies. Currently, most systems are highly mission-specific and optimized for a single planetary environment, which limits their reuse in subsequent missions. This lack of transferability is largely due to domain gaps in surface appearance, illumination conditions, and sensor models, as well as the scarcity of annotated in situ data.
Recent algorithmic advances point toward several promising solutions. Transfer learning and domain adaptation techniques allow models trained on one planetary dataset to be fine-tuned for another, while meta-learning frameworks provide the ability to quickly adapt navigation policies with limited new data. Self-supervised and contrastive representation learning approaches are also emerging as effective ways to extract features that are invariant across terrains, enabling cross-body generalization without heavy reliance on labeled samples. Furthermore, multitask joint optimization—where localization, mapping, and hazard detection are trained in a shared representation space—can improve robustness and consistency when switching tasks during different mission phases.
In our view, achieving cross-mission adaptability requires more than algorithmic transfer; it demands a systematic design that combines pretraining on large-scale simulated datasets with continual on-orbit adaptation. One practical strategy is a two-stage pipeline: a universal backbone model pretrained on multi-body imagery (lunar, Martian, asteroid) followed by lightweight adapters that specialize the features to mission-specific conditions. Another critical aspect is uncertainty estimation: cross-domain models must signal when their predictions are unreliable, allowing the navigation system to hand over control to fallback sensors or inertial baselines. Finally, to ensure true generalization, evaluation metrics should move beyond single-mission accuracy and include cross-body validation, e.g., training on lunar datasets while testing on Martian analogs. We believe that future progress will depend on integrating transfer learning with robust uncertainty-aware decision-making, enabling spacecraft to operate reliably in environments that have not been explicitly encountered during training.

5. Conclusions

With ongoing breakthroughs in AI algorithms, onboard hardware, computing architectures, and multisource perception technologies, deep space image navigation is advancing toward higher levels of intelligence, efficiency, and precision. Future systems will go beyond precise localization to integrate multimodal data fusion, dynamic path optimization, and real-time obstacle avoidance. These capabilities will allow probes to navigate autonomously and reliably for extended durations in complex and dynamic deep space environments.
As mission complexity and technical demands continue to increase, deep space image navigation systems will undergo further optimization, evolving into more flexible, reliable, and globally adaptable technologies. This will be a key enabler of the continued success of deep space exploration missions.

Author Contributions

Conceptualization, X.L. and T.L.; methodology, X.L. and T.L.; software, X.L.; validation, T.L., B.H. and C.Z.; formal analysis, X.L.; investigation, X.L.; resources, T.L.; data curation, X.L.; writing—original draft preparation, X.L.; writing—review and editing, T.L. and C.Z.; visualization, X.L.; supervision, T.L.; project administration, T.L.; funding acquisition, L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 52275083. The APC was funded by the same foundation.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DOM        Digital Orthophoto Map
DEM        Digital Elevation Model
DIM        Digital Image Map
LOLA       Lunar Orbiter Laser Altimeter
LROC       Lunar Reconnaissance Orbiter Camera
OHRC       Orbiter High-Resolution Camera
SAR        Synthetic Aperture Radar
NavCam     Navigation Camera
MDIS       Mercury Dual-Imaging System
MLA        Mercury Laser Altimeter
NFT        Natural Feature Tracking System
EKF        Extended Kalman Filter
LVS        Landing Vision System
IMU        Inertial Measurement Unit
TRN        Terrain-Relative Navigation
ELVIS      Enhanced Landing Vision System
MRL        Map-Relative Localization
VO         Visual Odometry
PnP        Perspective-n-Point
DPOS       Distributed Position and Orientation System
SVM        Support Vector Machine
RF         Random Forest
DT         Decision Tree
PCA        Principal Component Analysis
CNN        Convolutional Neural Network
SAM        Segment Anything Model

References

  1. Zhang, Y.; Li, P.; Quan, J.; Li, L.; Zhang, G.; Zhou, D. Progress, challenges, and prospects of soft robotics for space applications. Adv. Intell. Syst. 2023, 5, 2200071. [Google Scholar] [CrossRef]
  2. Turan, E.; Speretta, S.; Gill, E. Autonomous navigation for deep space small satellites: Scientific and technological advances. Acta Astronaut. 2022, 193, 56–74. [Google Scholar] [CrossRef]
  3. Ji, J.H.; Wang, S. China’s Future Missions for Deep Space Exploration and Exoplanet Space Survey by 2030. Chin. J. Space Sci. 2020, 40, 729–731. [Google Scholar] [CrossRef]
  4. Scorsoglio, A.; D’Ambrosio, A.; Ghilardi, L.; Gaudet, B.; Curti, F.; Furfaro, R. Image-based deep reinforcement meta-learning for autonomous lunar landing. J. Spacecr. Rocket. 2022, 59, 153–165. [Google Scholar] [CrossRef]
  5. Li, S.; Lu, R.; Zhang, L.; Peng, Y. Image processing algorithms for deep-space autonomous optical navigation. J. Navig. 2013, 66, 605–623. [Google Scholar] [CrossRef]
  6. Franzese, V.; Topputo, F. Celestial bodies far-range detection with deep-space CubeSats. Sensors 2023, 23, 4544. [Google Scholar] [CrossRef]
  7. Nozette, S.; Rustan, P.; Pleasance, L.P.; Kordas, J.F.; Lewis, I.T.; Park, H.S.; Priest, R.E.; Horan, D.M.; Regeon, P.; Lichtenberg, C.L.; et al. The Clementine mission to the Moon: Scientific overview. Science 1994, 266, 1835–1839. [Google Scholar] [CrossRef]
  8. Mcewen, A.S.; Robinson, M.S. Mapping of the Moon by Clementine. Adv. Space Res. 1997, 19, 1523–1533. [Google Scholar] [CrossRef]
  9. Smith, D.E.; Zuber, M.T.; Neumann, G.A.; Lemoine, F.G. Topography of the Moon from the Clementine lidar. J. Geophys. Res. Planets 1997, 102, 1591–1611. [Google Scholar] [CrossRef]
  10. Gaddis, L.; Isbell, C.; Staid, M.; Eliason, E.; Lee, E.M.; Weller, L.; Sucharski, T.; Lucey, P.; Blewett, D.; Hinrichs, J.; et al. The Clementine NIR Global Lunar Mosaic. NASA Planetary Data System (PDS) Dataset. PDS. 2007. Available online: https://pds-imaging.jpl.nasa.gov/ (accessed on 27 July 2025).
  11. Racca, G.D.; Marini, A.; Stagnaro, L.; Van Dooren, J.; Di Napoli, L.; Foing, B.H.; Lumb, R.; Volp, J.; Brinkmann, J.; Grünagel, R.; et al. SMART-1 mission description and development status. Planet. Space Sci. 2002, 50, 1323–1337. [Google Scholar] [CrossRef]
  12. Scholten, F.; Oberst, J.; Matz, K.D.; Roatsch, T.; Wählisch, M.; Speyerer, E.J.; Robinson, M.S. GLD100: The near-global lunar 100 m raster DTM from LROC WAC stereo image data. J. Geophys. Res. Planets 2012, 117, E00H14. [Google Scholar] [CrossRef]
  13. Speyerer, E.J.; Robinson, M.S.; Denevi, B.W.; Lroc, S.T. Lunar Reconnaissance Orbiter Camera global morphological map of the Moon. In Proceedings of the 42nd Annual Lunar and Planetary Science Conference, The Woodlands, TX, USA, 7–11 March 2011; No. 1608. p. 2387. [Google Scholar]
  14. Barker, M.K.; Mazarico, E.; Neumann, G.A.; Zuber, M.T.; Haruyama, J.; Smith, D.E. A new lunar digital elevation model from the Lunar Orbiter Laser Altimeter and SELENE Terrain Camera. Icarus 2016, 273, 346–355. [Google Scholar] [CrossRef]
  15. Haruyama, J.; Ohtake, M.; Matsunaga, T.; Morota, T.; Honda, C.; Yokota, Y.; Ogawa, Y.; Lism, W.G. Selene (Kaguya) terrain camera observation results of nominal mission period. In Proceedings of the 40th Annual Lunar and Planetary Science Conference, The Woodlands, TX, USA, 23–27 March 2009; p. 1553. [Google Scholar]
  16. Chowdhury, A.R.; Saxena, M.; Kumar, A.; Joshi, S.R.; Dagar, A.; Mittal, M.; Kirkire, S.; Desai, J.; Shah, D.; Karelia, J.C.; et al. Orbiter high resolution camera onboard Chandrayaan-2 orbiter. Curr. Sci. 2020, 118, 560–565. [Google Scholar] [CrossRef]
  17. Gupta, A.; Suresh, K.; Prashar, A.K.; Iyer, K.V.; Suhail, A.; Verma, S.; Islam, B.; Lalwani, H.K.; Srinivasan, T.P. High resolution DEM generation from Chandrayaan-2 orbiter high resolution camera images. 52nd Lunar Planet. Sci. Conf. 2021, 2548, 1396. [Google Scholar]
  18. Dagar, A.K.; Rajasekhar, R.P.; Nagori, R. Analysis of boulders population around a young crater using very high resolution image of Orbiter High Resolution Camera (OHRC) on board Chandrayaan-2 mission. Icarus 2022, 386, 115168. [Google Scholar] [CrossRef]
  19. Pelgrift, J.Y.; Nelson, D.S.; Adam, C.D.; Molina, G.; Hansen, M.; Hollister, A. In-flight calibration of the Intuitive Machines IM-1 optical navigation imagers. In Proceedings of the 4th Space Imaging Workshop (SIW 2024), Laurel, MD, USA, 7–9 October 2024. [Google Scholar]
  20. Shoer, J.; Mosher, T.; Mccaa, T.; Kwong, J.; Ringelberg, J.; Murrow, D. LunIR: A CubeSat spacecraft performing advanced infrared imaging of the lunar surface. In Proceedings of the 70th International Astronautical Congress (IAC 2019), 26th IAA Symposium on Small Satellite Missions (B4), Small Spacecraft for Deep-Space Exploration (8), Washington, DC, USA, 21–25 October 2019. Paper ID: IAC-19-B4.8.6x53460. [Google Scholar]
  21. Li, C.; Liu, J.; Ren, X.; Mou, L.; Zou, Y.; Zhang, H.; Lü, C.; Liu, J.; Zuo, W.; Su, Y.; et al. The global image of the Moon obtained by the Chang’E-1: Data processing and lunar cartography. Sci. China Earth Sci. 2010, 53, 1091–1102. [Google Scholar] [CrossRef]
  22. Li, C.; Ren, X.; Liu, J.; Zou, X.; Mu, L.; Wang, J.; Shu, R.; Zou, Y.; Zhang, H.; Lü, C.; et al. Laser altimetry data of Chang’E-1 and the global lunar DEM model. Sci. China Earth Sci. 2010, 53, 1582–1593. [Google Scholar] [CrossRef]
  23. Li, C.; Liu, J.; Ren, X.; Yan, W.; Zuo, W.; Mu, L.; Zhang, H.; Su, Y.; Wen, W.; Tan, X.; et al. Lunar global high-precision terrain reconstruction based on Chang’E-2 stereo images. Geomat. Inf. Sci. Wuhan Univ. 2018, 43, 485–495. [Google Scholar]
  24. Liu, J.; Ren, X.; Tan, X.; Li, C. Lunar image data preprocessing and quality evaluation of CCD stereo camera on Chang’E-2. Geomat. Inf. Sci. Wuhan Univ. 2013, 38, 186–190. [Google Scholar]
  25. Caplinger, M.A.; Malin, M.C. Mars orbiter camera geodesy campaign. J. Geophys. Res. Planets 2001, 106, 23595–23606. [Google Scholar] [CrossRef]
  26. Fergason, R.L.; Hare, T.M.; Laura, J. HRSC and MOLA Blended Digital Elevation Model at 200 m v2. USGS Astrogeology Science Center, PDS Annex. 2018. Available online: https://astrogeology.usgs.gov/search/map/Mars/Viking/HRSC_MOLA_Blend (accessed on 27 July 2025).
  27. Christensen, P.R.; Bandfield, J.L.; Bell, J.F., III; Gorelick, N.; Hamilton, V.E.; Ivanov, A.; Jakosky, B.M.; Kieffer, H.H.; Lane, M.D.; Malin, M.C.; et al. Morphology and composition of the surface of Mars: Mars Odyssey THEMIS results. Science 2003, 300, 2056–2061. [Google Scholar] [CrossRef] [PubMed]
  28. McEwen, A.S.; Eliason, E.M.; Bergstrom, J.W.; Bridges, N.T.; Hansen, C.J.; Delamere, W.A.; Grant, J.A.; Gulick, V.C.; Herkenhoff, K.E.; Keszthelyi, L. Mars Reconnaissance Orbiter’s High Resolution Imaging Science Experiment (HiRISE). J. Geophys. Res. Planets 2007, 112, E05S02. [Google Scholar] [CrossRef]
  29. Kirk, R.L.; Howington-Kraus, E.; Rosiek, M.R.; Anderson, J.A.; Archinal, B.A.; Becker, K.J.; Cook, D.A.; Galuszka, D.M.; Geissler, P.E.; Hare, T.M.; et al. Ultrahigh resolution topographic mapping of Mars with MRO HiRISE stereo images: Meter-scale slopes of candidate Phoenix landing sites. J. Geophys. Res. Planets 2008, 113, E00A24. [Google Scholar] [CrossRef]
  30. Johnson, A.E.; Aaron, S.B.; Ansari, H.; Bergh, C.; Bourdu, H.; Butler, J.; Chang, J.; Cheng, R.; Cheng, Y.; Clark, K.; et al. Mars 2020 lander vision system flight performance. In Proceedings of the AIAA SciTech 2022 Forum, San Diego, CA, USA, 3–7 January 2022; Volume 1214. [Google Scholar]
  31. Bell, J.F., III; Maki, J.N.; Mehall, G.L.; Ravine, M.A.; Caplinger, M.A.; Bailey, Z.J.; Brylow, S.; Schaffner, J.A.; Kinch, K.M.; Madsen, M.B.; et al. The Mars 2020 perseverance rover mast camera zoom (Mastcam-Z) multispectral, stereoscopic imaging investigation. Space Sci. Rev. 2021, 217, 1–40. [Google Scholar] [CrossRef]
  32. Fergason, R.L.; Hare, T.M.; Mayer, D.P.; Galuszka, D.M.; Redding, B.L.; Smith, E.D.; Shinaman, J.R.; Cheng, Y.; Otero, R.E. Mars 2020 terrain relative navigation flight product generation: Digital terrain model and orthorectified image mosaic. In Proceedings of the 51st Lunar and Planetary Science Conference, 2020, No. 2326. The Woodlands, TX, USA, 16–20 March 2020. [Google Scholar]
  33. Gwinner, K.; Scholten, F.; Preusker, F.; Elgner, S.; Roatsch, T.; Spiegel, M.; Schmidt, R.; Oberst, J.; Jaumann, R.; Heipke, C. Topography of Mars from global mapping by HRSC high-resolution digital terrain models and orthoimages: Characteristics and performance. Earth Planet. Sci. Lett. 2010, 294, 506–519. [Google Scholar] [CrossRef]
  34. Ody, A.; Poulet, F.; Langevin, Y.; Bibring, J.-P.; Bellucci, G.; Altieri, F.; Gondet, B.; Vincendon, M.; Carter, J.; Manaud, N.C.E.J. Global maps of anhydrous minerals at the surface of Mars from OMEGA/MEx. J. Geophys. Res. Planets 2012, 117, E00J14. [Google Scholar] [CrossRef]
  35. Liang, X.; Chen, W.; Cao, Z.; Wu, F.; Lyu, W.; Song, Y.; Li, D.; Yu, C.; Zhang, L.; Wang, L. The navigation and terrain cameras on the Tianwen-1 Mars rover. Space Sci. Rev. 2021, 217, 37. [Google Scholar] [CrossRef]
  36. Becker, K.J.; Weller, L.A.; Edmundson, K.L.; Becker, T.L.; Robinson, M.S.; Enns, A.C.; Solomon, S.C. Global controlled mosaic of Mercury from MESSENGER orbital images. In Proceedings of the 43rd Annual Lunar and Planetary Science Conference, The Woodlands, TX, USA, 19–23 March 2012; No. 1659. p. 2654. [Google Scholar]
  37. Roatsch, T.; Kersten, E.; Matz, K.-D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C.A.; Russell, C.T. High-resolution ceres low altitude mapping orbit atlas derived from dawn framing camera images. Planet. Space Sci. 2017, 140, 74–79. [Google Scholar] [CrossRef]
  38. Watanabe, S.; Tsuda, Y.; Yoshikawa, M.; Tanaka, S.; Saiki, T.; Nakazawa, S. Hayabusa2 mission overview. Space Sci. Rev. 2017, 208, 3–16. [Google Scholar] [CrossRef]
  39. Preusker, F.; Scholten, F.; Elgner, S.; Matz, K.-D.; Kameda, S.; Roatsch, T.; Jaumann, R.; Sugita, S.; Honda, R.; Morota, T.; et al. The MASCOT landing area on asteroid (162173) Ryugu: Stereo-photogrammetric analysis using images of the ONC onboard the Hayabusa2 spacecraft. Astron. Astrophys. 2019, 632, L4. [Google Scholar] [CrossRef]
  40. Tatsumi, E.; Domingue, D.; Schröder, S.; Yokota, Y.; Kuroda, D.; Ishiguro, M.; Hasegawa, S.; Hiroi, T.; Honda, R.; Hemmi, R.; et al. Global photometric properties of (162173) Ryugu. Astron. Astrophys. 2020, 639, A83. [Google Scholar] [CrossRef]
  41. Becker, K.J.; Edmundson, K.L. Control of OSIRIS-REx OTES observations using OCAMS TAG images. arXiv 2024, arXiv:2401.12177. [Google Scholar] [CrossRef]
  42. Williams, B.G. Technical challenges and results for navigation of NEAR Shoemaker. Johns Hopkins APL Tech. Dig. 2002, 23, 1. [Google Scholar]
  43. Ogawa, N.; Terui, F.; Yasuda, S.; Matsushima, K.; Masuda, T.; Sano, J.; Hihara, H.; Matsuhisa, T.; Danno, S.; Yamada, M.; et al. Image-based autonomous navigation of Hayabusa2 using artificial landmarks: Design and in-flight results in landing operations on asteroid Ryugu. In Proceedings of the AIAA Scitech 2020 Forum, Orlando, FL, USA, 6–10 January 2020. [Google Scholar]
  44. Mario, C.; Norman, C.; Miller, C.; Olds, R.; Palmer, E.; Weirich, J.; Lorenz, D.; Lauretta, D. Image correlation performance prediction for autonomous navigation of OSIRIS-REx asteroid sample collection. In Proceedings of the 43rd Annual AAS Guidance, Navigation & Control Conference, Breckenridge, CO, USA, 1–3 February 2020. AAS 20-087. [Google Scholar]
  45. Miller, C.; Olds, R.; Norman, C.; Gonzales, S.; Mario, C.; Lauretta, D.S. On orbit evaluation of natural feature tracking for OSIRIS-REx sample collection. In Proceedings of the 43rd Annual AAS Guidance, Navigation & Control Conference, Breckenridge, CO, USA, 1–3 February 2020. AAS 20-154. [Google Scholar]
  46. Lorenz, D.A.; Olds, R.; May, A.; Mario, C.; Perry, M.E.; Palmer, E.E.; Daly, M. Lessons learned from OSIRIS-REx autonomous navigation using natural feature tracking. In Proceedings of the 2017 IEEE Aerospace Conference, Big Sky, MT, USA, 1–12 March 2017. [Google Scholar]
  47. Maki, J.N.; Gruel, D.; McKinney, C.; Ravine, M.A.; Morales, M.; Lee, D.; Willson, R.; Copley-Woods, D.; Valvo, M.; Goodsall, T.; et al. The Mars 2020 engineering cameras and microphone on the Perseverance rover: A next-generation imaging system for Mars exploration. Space Sci. Rev. 2020, 216, 9. [Google Scholar] [CrossRef]
  48. Setterfield, T.P.; Conway, D.; Chen, P.-T.; Clouse, D.; Trawny, N.; Johnson, A.E.; Khattak, S.; Ebadi, K.; Massone, G.; Cheng, Y.; et al. Enhanced Lander Vision System (ELViS) algorithms for pinpoint landing of the Mars Sample Retrieval Lander. In Proceedings of the AIAA SCITECH 2024 Forum, Orlando, FL, USA, 8–12 January 2024; p. 0315. [Google Scholar]
  49. Maru, Y.; Morikawa, S.; Kawano, T. Evaluation of landing stability of two-step landing method for small lunar-planetary lander. In Proceedings of the International Symposium on Space Flight Dynamics (ISSFD), Darmstadt, Germany, 22–26 April 2024. [Google Scholar]
  50. Takino, T.; Nomura, I.; Moribe, M.; Kamata, H.; Takadama, K.; Fukuda, S.; Sawai, S.; Sakai, S. Crater detection method using principal component analysis and its evaluation. Trans. Jpn. Soc. Aeronaut. Space Sci. Aerosp. Technol. Jpn. 2016, 14, Pt_7–Pt_14. [Google Scholar] [CrossRef][Green Version]
  51. Ishii, H.; Takadama, K.; Murata, A.; Uwano, F.; Tatsumi, T.; Umenai, Y.; Matsumoto, K.; Kamata, H.; Ishida, T.; Fukuda, S.; et al. The robust spacecraft location estimation algorithm toward the misdetection crater and the undetected crater in SLIM. In Proceedings of the 31st International Symposium on Space Technology and Science (ISTS), Ehime, Japan, 3–9 June 2017. Paper No. ISTS-2017-d-067/ISSFD-2017-067. [Google Scholar]
  52. Ishida, T.; Fukuda, S.; Kariya, K.; Kamata, H.; Takadama, K.; Kojima, H.; Sawai, S.; Sakai, S. Vision-based navigation and obstacle detection flight results in SLIM lunar landing. Acta Astronaut. 2025, 226, 772–781. [Google Scholar] [CrossRef]
  53. Getchius, J.; Renshaw, D.; Posada, D.; Henderson, T.; Hong, L.; Ge, S.; Molina, G. Hazard detection and avoidance for the nova-c lander. In Proceedings of the 44th Annual American Astronautical Society Guidance, Navigation, and Control Conference, Charlotte, NC, USA, 7–11 August 2022; Springer International Publishing: Cham, Switzerland, 2022; pp. 921–943. [Google Scholar]
  54. Molina, G.; Hansen, M.; Getchius, J.; Christensen, R.; Christian, J.A.; Stewart, S.; Crain, T. Visual odometry for precision lunar landing. In Proceedings of the 44th Annual American Astronautical Society Guidance, Navigation, and Control Conference, Charlotte, NC, USA, 7–11 August 2022; Springer International Publishing: Cham, Switzerland, 2022; pp. 1021–1042. [Google Scholar]
  55. Thrasher, A.C.; Christian, J.A.; Molina, G.; Hansen, M.; Pelgrift, J.Y.; Nelson, D.S. Lunar crater identification using triangle reprojection. In Proceedings of the AAS/AIAA 2023, Big Sky, MT, USA, 13–17 August 2023. [Google Scholar]
  56. Zhang, H.; Li, J.; Guan, Y.; Huang, X. Autonomous Navigation for Powered Descent of Chang’e-3 Lander. Control Theory Appl. 2014, 31, 1686–1694. [Google Scholar]
  57. Yu, P.; Zhang, H.; Li, J.; Guan, Y.; Wang, L.; Zhao, Y.; Chen, Y.; Yang, W.; Yu, J.; Wang, H.; et al. Design and Implementation of the GNC System for the Chang’e-5 Lander-Ascent Combination. Sci. China Technol. Sci. 2021, 51, 763–777. [Google Scholar]
  58. Song, J.; Rondao, D.; Aouf, N. Deep learning-based spacecraft relative navigation methods: A survey. Acta Astronaut. 2022, 191, 22–40. [Google Scholar] [CrossRef]
  59. Bechini, M.; Lavagna, M.; Lunghi, P. Dataset generation and validation for spacecraft pose estimation via monocular images processing. Acta Astronaut. 2023, 204, 358–369. [Google Scholar] [CrossRef]
  60. Kloos, J.L.; Moores, J.E.; Godin, P.J.; Cloutis, E. Illumination conditions within permanently shadowed regions at the lunar poles: Implications for in-situ passive remote sensing. Acta Astronaut. 2021, 178, 432–451. [Google Scholar] [CrossRef]
  61. Christian, J.A. A tutorial on horizon-based optical navigation and attitude determination with space imaging systems. IEEE Access 2021, 9, 19819–19853. [Google Scholar] [CrossRef]
  62. Russo, A.; Lax, G. Using artificial intelligence for space challenges: A survey. Appl. Sci. 2022, 12, 5106. [Google Scholar] [CrossRef]
  63. Chen, D.; Hu, F.; Zhang, L.; Wu, Y.; Du, J.; Peethambaran, J. Impact crater recognition methods: A review. Sci. China Earth Sci. 2024, 1, 24. [Google Scholar] [CrossRef]
  64. Zhong, J.; Yan, J.; Li, M.; Barriot, J.P. A deep learning-based local feature extraction method for improved image matching and surface reconstruction from Yutu-2 PCAM images on the Moon. ISPRS J. Photogramm. Remote Sens. 2023, 206, 16–29. [Google Scholar] [CrossRef]
  65. Zhang, C.; Liang, X.; Wu, F.; Zhang, L. Overview of optical navigation technology development for asteroid descent and landing stage. Infrared Laser Eng. 2020, 49, 20201009. (In Chinese) [Google Scholar] [CrossRef]
  66. Sawai, S.; Scheeres, D.J.; Kawaguchi, J.; Yoshizawa, N.; Ogawawara, M. Development of a target marker for landing on asteroids. J. Spacecr. Rocket. 2001, 38, 601–608. [Google Scholar] [CrossRef]
  67. Xu, L.; Jiang, J.; Ma, Y. A review of vision-based navigation technology based on impact craters. Laser Optoelectron. Prog. 2023, 60, 1106013. (In Chinese) [Google Scholar]
  68. Tewari, A.; Prateek, K.; Singh, A.; Khanna, N. Deep learning based systems for crater detection: A review. arXiv 2023, arXiv:2310.07727. [Google Scholar] [CrossRef]
  69. Robbins, S.J.; Hynek, B.M. A new global database of Mars impact craters ≥1 km: 1. Database creation, properties, and parameters. J. Geophys. Res. Planets 2012, 117, E05004. [Google Scholar] [CrossRef]
  70. Kneissl, T.; van Gasselt, S.; Neukum, G. Map-projection-independent crater size-frequency determination in GIS environments—New software tool for ArcGIS. Planet. Space Sci. 2011, 59, 1243–1254. [Google Scholar] [CrossRef]
  71. Heyer, T.; Iqbal, W.; Oetting, A.; Hiesinger, H.; van der Bogert, C.H.; Schmedemann, N. A comparative analysis of global lunar crater catalogs using OpenCraterTool–An open source tool to determine and compare crater size-frequency measurements. Planet. Space Sci. 2023, 231, 105687. [Google Scholar] [CrossRef]
  72. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 6, 679–698. [Google Scholar] [CrossRef]
  73. Cheng, Y.; Miller, J.K. Autonomous landmark based spacecraft navigation system. AAS/AIAA Astrodyn. Spec. Conf. 2003, 114, 1769–1783. [Google Scholar]
  74. Zuo, W.; Li, C.; Yu, L.; Zhang, Z.; Wang, R.; Zeng, X.; Liu, Y.; Xiong, Y. Shadow–highlight feature matching automatic small crater recognition using high-resolution digital orthophoto map from Chang’E missions. Acta Geochim. 2019, 38, 541–554. [Google Scholar] [CrossRef]
  75. Maass, B. Robust approximation of image illumination direction in a segmentation-based crater detection algorithm for spacecraft navigation. CEAS Space J. 2016, 8, 303–314. [Google Scholar] [CrossRef]
  76. Maass, B.; Krüger, H.; Theil, S. An edge-free, scale-, pose- and illumination-invariant approach to crater detection for spacecraft navigation. In Proceedings of the 2011 7th International Symposium on Image and Signal Processing and Analysis (ISPA), Dubrovnik, Croatia, 4–6 September 2011; pp. 603–608. [Google Scholar]
  77. Bandeira, L.; Ding, W.; Stepinski, T.F. Detection of sub-kilometer craters in high-resolution planetary images using shape and texture features. Adv. Space Res. 2012, 49, 64–74. [Google Scholar] [CrossRef]
  78. Wang, Y.; Yang, G.; Guo, L. A novel sparse boosting method for crater detection in the high-resolution planetary image. Adv. Space Res. 2015, 56, 29–41. [Google Scholar] [CrossRef]
  79. Pedrosa, M.M.; Azevedo, S.C.D.; Silva, E.A.D.; Dias, M.A. Improved automatic impact crater detection on Mars based on morphological image processing and template matching. J. Spat. Sci. 2017, 62, 219–231. [Google Scholar] [CrossRef]
  80. Woicke, S.; Gonzalez, A.M.; El-Hajj, I. Comparison of crater-detection algorithms for terrain-relative navigation. In Proceedings of the AIAA Scitech Forum, Kissimmee, FL, USA, 8–12 January 2018. [Google Scholar]
  81. Chen, M.; Liu, D.; Qian, K.; Li, J.; Lei, M.; Zhou, Y. Lunar crater detection based on terrain analysis and mathematical morphology methods using digital elevation models. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3681–3692. [Google Scholar] [CrossRef]
  82. Magee, M.; Chapman, C.R.; Dellenback, S.W.; Enke, B.; Merline, W.J.; Rigney, M.P. Automated identification of Martian craters using image processing. In Proceedings of the Lunar and Planetary Science Conference, Houston, TX, USA, 17–21 March 2003; p. 1756. [Google Scholar]
  83. Singh, L.; Lim, S. On lunar on-orbit vision-based navigation: Terrain mapping, feature tracking driven EKF. In Proceedings of the AIAA Guidance, Navigation and Control Conference and Exhibit, Honolulu, HI, USA, 18–21 August 2008; AIAA Press: Reston, VA, USA, 2008; p. 6834. [Google Scholar]
  84. Krøgli, S.O. Automatic Extraction of Potential Impact Structures from Geospatial Data: Examples from Finnmark, Northern Norway. Ph.D. Thesis, University of Oslo, Oslo, Norway, 2010. [Google Scholar]
  85. Degirmenci, M.; Ashyralyyev, S. Impact Crater Detection on Mars Digital Elevation and Image Model; Middle East Technical University: Ankara, Turkey, 2010. [Google Scholar]
  86. Wang, S.; Li, W. GeoAI in terrain analysis: Enabling multi-source deep learning and data fusion for natural feature detection. Comput. Environ. Urban Syst. 2021, 90, 101715. [Google Scholar] [CrossRef]
  87. Ding, M.; Cao, Y.; Wu, Q. Crater detection in lunar grayscale images. J. Appl. Sci. 2009, 27, 156–160. (In Chinese) [Google Scholar] [CrossRef]
  88. Stepinski, T.F.; Mendenhall, M.P.; Bue, B.D. Machine cataloging of impact craters on Mars. Icarus 2009, 203, 77–87. [Google Scholar] [CrossRef]
  89. Urbach, E.R.; Stepinski, T.F. Automatic detection of sub-km craters in high resolution planetary images. Planet. Space Sci. 2009, 57, 880–887. [Google Scholar] [CrossRef]
  90. Liu, Q.; Cheng, W.; Yan, G.; Zhao, Y.; Liu, J. A machine learning approach to crater classification from topographic data. Remote Sens. 2019, 11, 2594. [Google Scholar] [CrossRef]
  91. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III, 18. Springer International Publishing: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
  92. Jia, Y.; Liu, L.; Zhang, C. Moon impact crater detection using nested attention mechanism based UNet++. IEEE Access 2021, 9, 44107–44116. [Google Scholar] [CrossRef]
  93. Silburt, A.; Head, J.W.; Povilaitis, R. Lunar crater identification via deep learning. J. Geophys. Res. Planets 2019, 124, 1121–1134. [Google Scholar] [CrossRef]
  94. Head, J.W.; Fassett, C.I.; Kadish, S.J.; Smith, D.E.; Zuber, M.T.; Neumann, G.A.; Mazarico, E. Global distribution of large lunar craters: Implications for resurfacing and impactor populations. Science 2010, 329, 1504–1507. [Google Scholar] [CrossRef]
  95. Povilaitis, R.Z.; Robinson, M.S.; Van der Bogert, C.H.; Hiesinger, H.; Meyer, H.M.; Ostrach, L.R. Crater density differences: Exploring regional resurfacing, secondary crater populations, and crater saturation equilibrium on the Moon. Planet. Space Sci. 2018, 162, 41–51. [Google Scholar] [CrossRef]
  96. Lee, C. Automated crater detection on Mars using deep learning. Planet. Space Sci. 2019, 170, 16–28. [Google Scholar] [CrossRef]
  97. DeLatte, D.M.; Crites, S.T.; Guttenberg, N.; Tasker, E.J.; Yairi, T. Segmentation convolutional neural networks for automatic crater detection on Mars. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2944–2957. [Google Scholar]
  98. Wang, S.; Liu, W.; Zhang, L. An effective lunar crater recognition algorithm based on convolutional neural network. Space Sci. Technol. 2020, 40, 2694. [Google Scholar] [CrossRef]
  99. Lee, J.; Cho, Y.; Kim, H. Automated crater detection with human-level performance. Geophys. Res. Lett. 2021, 48, 210–220. [Google Scholar] [CrossRef]
  100. Jia, Y.; Su, Z.; Wan, G.; Liu, L.; Liu, J. Ae-transunet+: An enhanced hybrid transformer network for detection of lunar south small craters in LRO NAC images. IEEE Geosci. Remote Sens. Lett. 2023, 20, 6007405. [Google Scholar] [CrossRef]
  101. Giannakis, I.; Bhardwaj, A.; Sam, L.; Leontidis, G. A flexible deep learning crater detection scheme using Segment Anything Model (SAM). Icarus 2024, 408, 115797. [Google Scholar] [CrossRef]
  102. Mohite, R.R.; Janardan, S.K.; Janghel, R.R.; Govil, H. Precision in planetary exploration: Crater detection with residual U-Net34/50 and matching template algorithm. Planet. Space Sci. 2025, 255, 106029. [Google Scholar] [CrossRef]
  103. Miao, D.; Yan, J.; Tu, Z.; Barriot, J.-P. LCDNet: An Innovative Neural Network for Enhanced Lunar Crater Detection Using DEM Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024. [Google Scholar] [CrossRef]
  104. Girshick, R. Fast R-CNN. arXiv 2015, arXiv:1504.08083. [Google Scholar] [CrossRef]
  105. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
  106. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  107. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part I, 14. Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 21–37. [Google Scholar]
  108. Duan, K.; Bai, S.; Xie, L.; Qi, H.; Huang, Q.; Tian, Q. CenterNet: Keypoint triplets for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6569–6578. [Google Scholar]
  109. Zhao, Y.; Lv, W.; Xu, S.; Wei, J.; Wang, G.; Dang, Q.; Liu, Y.; Chen, J. DETRs beat YOLOs on real-time object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 16965–16974. [Google Scholar]
  110. Martinez, L.; Schmidt, F.; Andrieu, F.; Bentley, M.; Talbot, H. Automatic crater detection and classification using Faster R-CNN. Copernicus Meetings 2024, EPSC2024-1005. [Google Scholar]
  111. Zhang, S.; Zhang, P.; Yang, J.; Kang, Z.; Cao, Z.; Yang, Z. Automatic detection for small-scale lunar impact crater using deep learning. Adv. Space Res. 2024, 73, 2175–2187. [Google Scholar] [CrossRef]
  112. La Grassa, R.; Cremonese, G.; Gallo, I.; Re, C.; Martellato, E. YOLOLens: A deep learning model based on super-resolution to enhance the crater detection of the planetary surfaces. Remote Sens. 2023, 15, 1171. [Google Scholar] [CrossRef]
  113. Guo, Y.; Wu, H.; Yang, S.; Cai, Z. Crater-DETR: A novel transformer network for crater detection based on dense supervision and multiscale fusion. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5614112. [Google Scholar] [CrossRef]
  114. Yang, C.; Zhao, H.; Bruzzone, L.; Benediktsson, J.A.; Liang, Y.; Liu, B.; Zeng, X.; Guan, R.; Li, C.; Ouyang, Z. Lunar impact crater identification and age estimation with Chang’e data by deep and transfer learning. Nat. Commun. 2020, 11, 6358. [Google Scholar] [CrossRef]
  115. Ali-Dib, M.; Menou, K.; Jackson, A.P.; Zhu, C.; Hammond, N. Automated crater shape retrieval using weakly-supervised deep learning. Icarus 2020, 345, 113749. [Google Scholar] [CrossRef]
  116. Liu, Y.; Lai, J.; Xie, M.; Zhao, J.; Zou, C.; Liu, C.; Qian, Y.; Deng, J. Identification of lunar craters in the Chang’e-5 landing region based on Kaguya TC Morning Map. Remote Sens. 2024, 16, 344. [Google Scholar] [CrossRef]
  117. Tewari, A.; Jain, V.; Khanna, N. Automatic crater shape retrieval using unsupervised and semi-supervised systems. Icarus 2024, 408, 115761. [Google Scholar] [CrossRef]
  118. Kilduff, T.; Machuca, P.; Rosengren, A.J. Crater detection for cislunar autonomous navigation through convolutional neural networks. In Proceedings of the AAS/AIAA Astrodynamics Specialist Conference, Big Sky, MT, USA, 13–17 August 2023. [Google Scholar]
  119. Emami, E.; Ahmad, T.; Bebis, G.; Nefian, A.; Fong, T. Crater detection using unsupervised algorithms and convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5373–5383. [Google Scholar] [CrossRef]
  120. Christian, J.A.; Derksen, H.; Watkins, R. Lunar crater identification in digital images. J. Astronaut. Sci. 2021, 68, 1056–1144. [Google Scholar] [CrossRef]
  121. Solarna, D.; Gotelli, A.; Le Moigne, J.; Moser, G.; Serpico, S.B. Crater detection and registration of planetary images through marked point processes, multiscale decomposition, and region-based analysis. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6039–6058. [Google Scholar] [CrossRef]
  122. Leroy, B.; Medioni, G.; Johnson, E.; Matthies, L. Crater detection for autonomous landing on asteroids. Image Vis. Comput. 2001, 19, 787–792. [Google Scholar] [CrossRef]
  123. Weismuller, T.; Caballero, D.; Leinz, M. Technology for autonomous optical planetary navigation and precision landing. In Proceedings of the AIAA Space Conference, Long Beach, CA, USA, 18–20 September 2007. [Google Scholar]
  124. Clerc, S.; Spigai, M.; Simard-Bilodeau, V. A crater detection and identification algorithm for autonomous lunar landing. IFAC Proc. Vol. 2010, 43, 527–532. [Google Scholar] [CrossRef]
  125. Harada, T.; Usami, R.; Takadama, K.; Kamata, H.; Ozawa, S.; Fukuda, S.; Sawai, S. Computational Time Reduction of Evolutionary Spacecraft Location Estimation toward Smart Lander for Investigating Moon. In Proceedings of the 11th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS2012); European Space Agency (ESA): Frascati, Italy, 2012. [Google Scholar]
  126. Sawai, S.; Fukuda, S.; Sakai, S.; Kushiki, K.; Arakawa, T.; Sato, E.; Tomiki, A.; Michigami, K.; Kawano, T.; Okazaki, S. Preliminary system design of small lunar landing demonstrator SLIM. Aerosp. Technol. Jpn. 2018, 17, 35–43. [Google Scholar] [CrossRef]
  127. Haruyama, J.; Sawai, S.; Mizuno, T.; Yoshimitsu, T.; Fukuda, S.; Nakatani, I. Exploration of lunar holes, possible skylights of underlying lava tubes, by smart lander for investigating moon (slim). Trans. Jpn. Soc. Aeronaut. Space Sci. Aerosp. Technol. Jpn. 2012, 10, Pk_7–Pk_10. [Google Scholar] [CrossRef]
  128. Yu, M.; Cui, H.; Tian, Y. A new approach based on crater detection and matching for visual navigation in planetary landing. Adv. Space Res. 2014, 53, 1810–1821. [Google Scholar] [CrossRef]
  129. Lu, T.; Hu, W.; Liu, C.; Yang, D. Relative pose estimation of a lander using crater detection and matching. Opt. Eng. 2016, 55, 1–25. [Google Scholar] [CrossRef]
  130. Shao, W.; Xie, J.; Cao, L.; Leng, J.; Wang, B. Crater matching algorithm based on feature descriptor. Adv. Space Res. 2020, 65, 616–629. [Google Scholar] [CrossRef]
  131. Maass, B.; Woicke, S.; Oliveira, W.M.; Razgus, B.; Krüger, H. Crater navigation system for autonomous precision landing on the moon. J. Guid. Control. Dyn. 2020, 43, 1414–1431. [Google Scholar] [CrossRef]
  132. Jin, M.; Shao, W. Crater triangle matching algorithm based on fused geometric and regional features. Aerospace 2024, 11, 417. [Google Scholar] [CrossRef]
  133. Chng, C.K.; Mcleod, S.; Rodda, M.; Chin, T.J. Crater identification by perspective cone alignment. Acta Astronaut. 2024, 224, 1–16. [Google Scholar] [CrossRef]
  134. Cheng, Y.; Ansar, A. Landmark based position estimation for pinpoint landing on Mars. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Piscataway, NJ, USA, 18–22 April 2005; pp. 1573–1578. [Google Scholar]
  135. Hanak, C.; Crain, T.; Bishop, R. Crater identification algorithm for the lost in low lunar orbit scenario. In Proceedings of the 33rd Annual AAS Rocky Mountain Guidance and Control Conference, Springfield, VA, USA, 5–10 February 2010. [Google Scholar]
  136. Park, W.; Jung, Y.; Bang, H.; Ahn, J. Robust crater triangle matching algorithm for planetary landing navigation. J. Guid. Control. Dyn. 2019, 42, 402–410. [Google Scholar] [CrossRef]
  137. Doppenberg, W. Autonomous Lunar Orbit Navigation with Ellipse R-CNN; Delft University of Technology: Delft, The Netherlands, 2021. [Google Scholar]
  138. Xu, L.H.; Jiang, J.; Ma, Y. Ellipse crater recognition for lost-in-space scenario. Remote Sens. 2022, 14, 6027. [Google Scholar] [CrossRef]
  139. Wang, H.; Jiang, J.; Zhang, G. CraterIDNet: An end-to-end fully convolutional neural network for crater detection and identification in remotely sensed planetary images. Remote Sens. 2018, 10, 1067. [Google Scholar] [CrossRef]
  140. Lu, S.; Xu, R.; Li, Z.; Wang, B.; Zhao, Z. Lunar rover collaborated path planning with artificial potential field-based heuristic on deep reinforcement learning. Aerospace 2024, 11, 253. [Google Scholar] [CrossRef]
  141. Liu, W.; Wan, G.; Liu, J.; Cong, D. Path Planning for Lunar Rovers in Dynamic Environments: An Autonomous Navigation Framework Enhanced by Digital Twin-Based A*-D3QN. Aerospace 2025, 12, 517. [Google Scholar] [CrossRef]
  142. Tao, W.; Zhang, J.; Hu, H.; Zhang, J.; Sun, H.; Zeng, Z.; Song, J.; Wang, J. Intelligent navigation for the cruise phase of solar system boundary exploration based on Q-learning EKF. Complex Intell. Syst. 2024, 10, 2653–2672. [Google Scholar] [CrossRef]
  143. Xiong, K.; Zhou, P.; Wei, C. Spacecraft autonomous navigation using line-of-sight directions of non-cooperative targets by improved Q-learning based extended Kalman filter. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2024, 238, 182–197. [Google Scholar] [CrossRef]
  144. Fereoli, G.; Schaub, H.; Di Lizia, P. Meta-reinforcement learning for spacecraft proximity operations guidance and control in cislunar space. J. Spacecr. Rocket. 2025, 62, 706–718. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the Hayabusa2 landing operation (redrawn based on [43]).
Figure 2. NFT flow chart (redrawn based on [46]).
Figure 3. Schematic diagram of binocular ranging with multiple images (redrawn based on [48]).
Figure 4. SLIM landing site on the lunar surface captured by LRO NAC. Image credit: NASA/GSFC/Arizona State University.
Figure 5. SLIM lander on the lunar surface as imaged by the onboard MBC. Courtesy of JAXA/Ritsumeikan University/University of Aizu.
Figure 6. VBN concept diagram of landing operation phase (redrawn based on [52]).
Figure 7. Illustration of the process of calculating a single DPOS measurement (redrawn based on [54]).
Figure 8. Schematic diagram of typical surface characteristics. (d) Target Marker (schematic illustration based on [66]).
Figure 9. Classification of crater detection methods.
Figure 10. U-Net network structure (redrawn based on [91]).
Figure 11. Faster R-CNN network structure (redrawn based on [110]).
Figure 12. CenterNet network structure (redrawn based on [111]).
Figure 13. YOLOLens network structure (redrawn based on [112]).
Figure 14. CraterDETR network structure (redrawn based on [113]).
Figure 16. Crater matching methods without prior pose (redrawn based on [131,136]). (a) Crater matching based on p 2 invariants. (b) Surface reconstruction-based LIS crater matching.
Table 1. Sources of remote sensing images of different celestial bodies.

| Celestial Body | Year | Country (Agency) | Mission (Probe) | Main Instruments | Main Contributions |
|---|---|---|---|---|---|
| Moon | 1966–1967 | USA (NASA) | Lunar Orbiter 1–5 | Dual-lens camera system (up to 2 m; typical global coverage 60 m) | First systematic lunar orbital imaging; supported Apollo landing site selection; acquired medium/low-res global imagery. |
| Moon | 1994 | USA (NASA) | Clementine | UV-visible multispectral camera (200 m); NIR CCD (150–500 m); LiDAR (100 m); high-res camera (7–20 m) | Global multiband imagery and terrain data; established lunar control network; 100 m resolution global DOM. |
| Moon | 2009 | USA (NASA) | LRO | LOLA (10 m vertical accuracy); LROC NAC (0.5 m)/WAC (100 m); Diviner radiometer (200–400 m) | 59 m global DEM; 100 m DIM; 5 m DTM in key areas; improved terrain accuracy. |
| Moon | 2003 | ESA | SMART-1 | HRSC (2.3 m at 250 km); Shortwave Infrared Spectrometer (SIR) (300 m); LIDAR; X-ray/IR spectrometers | High-resolution lunar surface images; revealed terrain and mineral distribution. |
| Moon | 2007 | Japan (JAXA) | Kaguya | Laser Altimeter (LALT); Terrain Camera (TC) (10 m); Multiband Imager | Acquired high-res imagery and TC-derived DEM (horizontal resolution 60–100 m); generated DEM and DIM products. |
| Moon | 2008–2023 | India | Chandrayaan-1/2/3 | Chandrayaan-1: Terrain Mapping Camera (TMC-1, 5 m), Hyperspectral Imager (M3), and X-ray Spectrometer (C1XS). Chandrayaan-2: Orbiter High-Resolution Camera (OHRC, 0.25 m), Terrain Mapping Camera-2 (TMC-2, 5 m), Dual-Frequency Synthetic Aperture Radar (SAR), and Infrared Spectrometer. Chandrayaan-3: Vision Navigation Camera and Laser Altimeter and Velocimeter. | Chandrayaan-1 generated 5 m DOM and mineral spectral data to support comprehensive lunar mapping and resource surveys. Chandrayaan-2 provided high-precision data for hazard identification and image navigation feature library construction by generating 0.28 m DOM and DEM through OHRC multi-angle stereo imaging. Chandrayaan-3 achieved the first soft landing near the lunar south pole, captured full-sequence visual imagery during descent, and validated the performance of the vision-based navigation and hazard avoidance system. |
| Moon | 2007–2024 | China | Chang’e 1–6 | Chang’e-1: CCD stereo camera (120 m) and laser altimeter (5 m vertical accuracy). Chang’e-2: CCD stereo camera (7 m) and laser altimeter (5 m vertical accuracy). Chang’e-3/4: panoramic camera, navigation camera (NavCam) (1–2 m), landing camera (1.3 m), and ground-penetrating radar. Chang’e-5/6: lunar surface camera (1 m) and sampling devices. | Chang’e-1 generated global 120 m DIM and 500 m DEM products. Chang’e-2 generated 7 m DIM and 20 m DEM products. Chang’e-3 to Chang’e-6 acquired high-resolution imagery of local landing sites; accomplished soft landing, surface exploration, and sample return missions, providing multisource remote sensing data to support image-based navigation and geomorphological analysis. |
| Mars | 1997 | USA (NASA) | MGS | MOC (1.5–12 m NA; 240 m WA); MOLA (1 m vertical, 300 m footprint) | Provided Mars terrain data; supported climate and water studies. |
| Mars | 2005 | USA (NASA) | MRO | HiRISE (0.25–0.5 m); CTX (6 m); CRISM (18–36 m) | Detailed Mars surface imagery; advanced mineral/water composition analysis. |
| Mars | 2020 | USA (NASA) | Mars 2020 (Perseverance) | NavCam (0.33 m/pixel at 1 m); HazCam; Mastcam-Z (0.7 mm–3.3 cm/pixel); Ground-Penetrating Radar (GPR) | <5 m landing accuracy; 0.5 m DTM and 25 cm imagery; Mastcam-Z: 0.7 mm–3.3 cm detail scale. |
| Mars | 2003 | ESA | Mars Express | HRSC (10 m global, down to 2 m); IR mineral spectrometer (300 m–4.8 km) | High-resolution Mars images; revealed water-related mineral composition. |
| Mars | 2020 | China | Tianwen-1 | High-Resolution Imaging Camera (HiRIC) (2.5 m); Medium-Resolution Camera (100 m); GPR (100 m depth, <10 m vertical) | High-res Mars imagery and radar; enabled geology and water research. |
| Mercury | 1973 | USA (NASA) | Mariner 10 | IR radiometer; UV spectrometer; TV imaging system (100 m, 1–2 km global) | First high-res Mercury images; revealed craters and surface structures. |
| Mercury | 2004 | USA (NASA) | MESSENGER | MDIS (10 m, 250 m global); MLA (<1 m vertical accuracy); UV/Vis/NIR/X-ray spectrometers | High-res images; precise Mercury terrain data. |
| Mercury | 2018 | ESA | BepiColombo | LALT (1 m vertical accuracy); thermal IR spectrometer; radiometers; HRIC (5 m), STC (50 m), VIHI (100–200 m) | Studied Mercury’s geology, magnetosphere, and atmosphere. |
| Asteroid | 1996 | USA (NASA) | NEAR Shoemaker | NIR spectrometer; MDIS | First asteroid orbiting mission; surface imagery of Eros with geologic features. |
| Asteroid | 2007 | USA (NASA) | Dawn | Framing camera; gamma/neutron/UV spectrometers | High-res images of Vesta and Ceres; terrain and crater mapping. |
| Asteroid | 2016 | USA (NASA) | OSIRIS-REx | Optical Camera System (PolyCam, MapCam, SamCam); LALT; VIS/IR Spectrometer; Thermal IR Spectrometer; X-ray Imaging Spectrometer; Touch-and-Go Sampler | Generated a global 5 cm DIM; conducted mineral and thermal remote sensing; acquired 7–10 cm images and returned asteroid surface samples. |
| Asteroid | 2014 | Japan (JAXA) | Hayabusa2 | Optical Navigation Camera (ONC-T/W1/W2); Near-Infrared Spectrometer; Sampling Device; MASCOT Lander; MINERVA-II Lander | Generated a global 0.5–0.7 m resolution DIM, 5–10 cm DTM and orthoimages of the landing site, and 4.6 mm ultra-high-res close-range images during descent; provided high-precision remote sensing data for visual navigation and geomorphological evolution studies. |
Table 2. Brief introduction of some deep space exploration missions.

| Mission Name | Launch Date | Country (Agency) | Target Body | Navigation Landmark |
|---|---|---|---|---|
| NEAR | 17 February 1996 | USA (NASA) | 433 Eros | Craters |
| Hayabusa2 | 3 December 2014 | Japan (JAXA) | Ryugu | Target Marker |
| OSIRIS-REx | 8 September 2016 | USA (NASA) | Bennu | Feature Points |
| Mars 2020 | 30 July 2020 | USA (NASA) | Mars | Feature Points |
| SLIM | 7 September 2023 | Japan (JAXA) | Moon | Craters |
| Nova-C (IM-1) | 15 February 2024 | USA (Intuitive Machines) | Moon | Feature Points |
| Nova-C (IM-2) | 26 February 2025 | USA (Intuitive Machines) | Moon | Craters |
Table 3. Comparison of successful navigation accuracies in deep space missions.

| Navigation Method | Representative Missions | Landing Accuracy |
|---|---|---|
| Traditional Navigation | Apollo; Chang’e-3; Chang’e-4; Chang’e-5 | Hundreds of meters to several kilometers |
| Image-Based Navigation | Mars 2020; SLIM; Blue Ghost | Tens to hundreds of meters |
Table 4. Summary of traditional crater detection methods.

| Category | Method | Advantages | Disadvantages |
|---|---|---|---|
| Image Processing-Based Methods | Edge Detection | (1) Simple algorithm and easy to implement. (2) Effective for detecting craters with clear edges. | (1) Sensitive to noise, prone to false detection. (2) Poor performance in cases of indistinct edges or complex terrain. (3) High computational cost, especially for high-resolution images. |
| Image Processing-Based Methods | Segmentation and Region Growing | (1) Uses shadow information to reduce false positives to some extent. (2) Performs well for circular crater detection. | (1) Very sensitive to lighting conditions; illumination changes affect results. (2) Requires accurate lighting models; otherwise, large errors occur. (3) Cannot detect craters without visible shadows. |
| Image Processing-Based Methods | Texture and Statistical Features | (1) Capable of handling complex terrain, strong adaptability. (2) Improves accuracy via multifeature analysis. | (1) Computationally intensive and slow processing. (2) Complex feature extraction algorithms requiring careful design. (3) Poor detection of small-sized craters. |
| Pattern Recognition-Based Methods | Template Matching | (1) Simple and easy to implement. (2) Good performance for craters with fixed shape and size. | (1) Unable to detect craters with variable shapes and sizes. (2) High computational load, especially for large images. (3) Sensitive to noise, prone to false detection. |
| Pattern Recognition-Based Methods | Morphology-based Methods | (1) Incorporates geological knowledge, improving detection accuracy. (2) Can handle complex terrain and lighting conditions. | (1) Complex rule design requiring extensive prior knowledge. (2) Poor generalization; rules may vary across regions. (3) High computational cost and slow processing. |
| Fusion Methods | Multifeature Fusion | (1) Integrates multiple features, improving accuracy and robustness. (2) Can handle complex terrain and illumination. | (1) Complex feature extraction and fusion algorithms needing significant resources. (2) Requires suitable fusion strategies for different feature types. |
| Fusion Methods | Multi-algorithm Fusion | (1) Leverages algorithm complementarity to improve robustness and accuracy. (2) Capable of detecting diverse types of craters. | (1) Fusion strategy design is complex and requires careful tuning. (2) High computational load and slower processing. (3) Needs large training and testing datasets to validate fusion performance. |
| Fusion Methods | Multisensor Fusion | (1) Provides multimodal information, improving reliability and accuracy. (2) Handles high-precision detection tasks in complex environments. | (1) Requires multiple sensors, increasing hardware cost. (2) Complex data fusion algorithms requiring substantial computation. (3) Difficulties in synchronizing and calibrating different sensor data. |
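As a concrete illustration of the image-processing-based category in Table 4, the following minimal Python sketch extracts circular crater candidates from a grayscale orthoimage with OpenCV's Hough-circle transform (which applies a Canny edge detector internally). The file name, blur kernel, thresholds, and radius range are illustrative assumptions, not parameters taken from any of the cited methods.

```python
import cv2
import numpy as np


def detect_crater_candidates(image_path, min_radius_px=5, max_radius_px=60):
    """Minimal edge/Hough-based crater candidate detector (illustrative only).

    Returns an (N, 3) array of (x, y, r) circle candidates in pixel units.
    """
    # Load the orthoimage as grayscale and suppress high-frequency noise,
    # since edge-based detectors are noise-sensitive (cf. Table 4).
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    blurred = cv2.GaussianBlur(gray, (9, 9), sigmaX=2.0)

    # HoughCircles runs Canny internally: param1 is the upper Canny threshold,
    # param2 the accumulator threshold (lower values give more, noisier, hits).
    circles = cv2.HoughCircles(
        blurred,
        cv2.HOUGH_GRADIENT,
        dp=1.5,                     # accumulator resolution ratio
        minDist=2 * min_radius_px,  # minimum spacing between detected centres
        param1=120,
        param2=40,
        minRadius=min_radius_px,
        maxRadius=max_radius_px,
    )
    if circles is None:
        return np.empty((0, 3), dtype=int)
    return np.round(circles[0]).astype(int)


if __name__ == "__main__":
    # "ortho_tile.png" is a placeholder file name for a lunar orthoimage tile.
    for x, y, r in detect_crater_candidates("ortho_tile.png"):
        print(f"candidate crater at ({x}, {y}), radius {r} px")
```

In practice, such candidates would still need the shape, shadow, or texture checks listed in Table 4 to reject false positives before being used for navigation.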
Table 5. Comparison of crater detection methods based on deep learning.

| Characteristic | Semantic Segmentation | Object Detection |
|---|---|---|
| Output Type | Pixel-level classification, providing a probability or label for each pixel belonging to a crater. | Bounding boxes, including crater center and rectangular boundaries. |
| Application Scenario | Provides detailed crater morphology and structural information; suitable for high-precision map generation and scientific analysis. | Rapid identification and annotation of crater positions and sizes; suitable for real-time detection and navigation. |
| Algorithm Complexity | Generally higher; requires more computational resources and time, e.g., U-Net. | Generally lower; fast inference speed, suitable for real-time applications. Examples include YOLO and SSD. |
| Advantages | Effectively handles complex crater features and overlapping craters. | Directly provides position and size information of craters. |
| Post-processing | Requires extensive post-processing to accurately determine crater boundaries and sizes. | Does not require extensive post-processing. |
| Challenges | May not provide accurate location and size information without additional processing. | Limited annotated datasets; reproducing results can be challenging. |
| Training Data Requirements | Requires large amounts of pixel-level labeled training data; high labeling cost. | Requires large quantities of bounding-box annotated data; relatively lower labeling cost. |
| Robustness | Sensitive to image resolution and noise; robustness can be improved through multiscale training and data augmentation. | May experience degraded accuracy with changes in resolution and noise. |
| Applications | High-precision lunar surface map generation. | Real-time navigation and landing hazard avoidance for lunar probes; automatic crater annotation for large-scale lunar imagery. |
| Model Training and Optimization | Common models include U-Net, SegNet; trained and optimized using data augmentation and large-scale pixel-labeled datasets. | Common models include R-CNN, Fast R-CNN, Faster R-CNN, YOLO, SSD; trained and optimized using large annotated datasets. |
| Real-time Performance | Low real-time performance; suited for applications requiring high analytical precision, such as scientific research and geological monitoring. | Suitable for applications with high real-time requirements, such as probe navigation and obstacle avoidance. |
| Accuracy and Detail | Requires extra steps during post-processing to achieve comparable accuracy. | Provides more precise position and size information. |
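Table 5 notes that pixel-level segmentation output needs post-processing before it yields crater positions and diameters. The sketch below shows one common way to do this with connected-component labelling; it assumes a per-pixel crater probability map already produced by a U-Net-style network, and the threshold and minimum-area values are placeholder choices rather than settings from any cited work.

```python
import numpy as np
from skimage.measure import label, regionprops


def mask_to_craters(prob_map, threshold=0.5, min_area_px=20):
    """Convert a per-pixel crater probability map into (row, col, radius) tuples.

    `prob_map` is assumed to come from a segmentation network such as U-Net.
    """
    binary = prob_map >= threshold          # pixel-wise decision
    labelled = label(binary)                # connected-component labelling
    craters = []
    for region in regionprops(labelled):
        if region.area < min_area_px:       # discard tiny speckle regions
            continue
        r, c = region.centroid              # sub-pixel centroid of the blob
        radius = np.sqrt(region.area / np.pi)  # radius of the equal-area circle
        craters.append((r, c, radius))
    return craters


if __name__ == "__main__":
    # Synthetic stand-in for a network output: a single bright disc of radius 20.
    yy, xx = np.mgrid[0:128, 0:128]
    fake_prob = ((yy - 64) ** 2 + (xx - 64) ** 2 <= 20 ** 2).astype(float)
    print(mask_to_craters(fake_prob))
```

Object-detection networks skip this step because the bounding box itself encodes position and size, which is exactly the trade-off summarized in the "Post-processing" row of Table 5.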
Table 6. Comparison of crater matching methods.

| Matching Type | With Prior Pose | Without Prior Pose |
|---|---|---|
| Search Scope | Local region, based on the predicted camera position. | Global matching; requires searching the entire database. |
| Computational Complexity | Low; fast matching speed. | High; requires a global search with a large computational load. |
| Matching Accuracy | High; depends on the initial navigation state. | Relatively high, but depends on the stability of the invariants. |
| Application Scenario | Camera position is roughly known; used during nominal navigation. | Camera is lost and the navigation state is completely unknown. |
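To make the "without prior pose" column of Table 6 concrete, the sketch below builds a simple scale- and rotation-invariant descriptor (the sorted interior angles) for every crater triple in a catalogue and matches an observed triple against it by a tolerance search. It is a deliberately simplified illustration of invariant-based lost-in-space matching, not the specific algorithms of [131,136]; the catalogue coordinates and tolerance are placeholder values.

```python
from itertools import combinations
import numpy as np


def triangle_descriptor(p0, p1, p2):
    """Sorted interior angles of a triangle: invariant to translation,
    rotation, uniform scale, and vertex ordering (a simplified invariant)."""
    pts = [np.asarray(p, dtype=float) for p in (p0, p1, p2)]
    angles = []
    for i in range(3):
        a = pts[(i + 1) % 3] - pts[i]
        b = pts[(i + 2) % 3] - pts[i]
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        angles.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return np.sort(angles)


def build_catalogue_index(catalogue_xy):
    """Precompute descriptors for every crater triple in a small catalogue."""
    index = []
    for i, j, k in combinations(range(len(catalogue_xy)), 3):
        index.append(((i, j, k), triangle_descriptor(*catalogue_xy[[i, j, k]])))
    return index


def match_observed_triple(obs_xy, index, tol=0.02):
    """Return catalogue triples whose angle descriptor agrees within `tol` rad."""
    desc = triangle_descriptor(*obs_xy)
    return [ids for ids, cat_desc in index
            if np.max(np.abs(cat_desc - desc)) < tol]


if __name__ == "__main__":
    catalogue = np.array([[0, 0], [10, 0], [0, 10], [12, 15], [30, 5]], float)
    index = build_catalogue_index(catalogue)
    # Observed image triple: the first three catalogue craters, rotated and scaled.
    theta = 0.4
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    observed = 2.5 * catalogue[:3] @ R.T + np.array([100.0, 50.0])
    print(match_observed_triple(observed, index))   # expected: [(0, 1, 2)]
```

When a prior pose is available (left column of Table 6), the same catalogue is instead projected into the predicted image frame and matching reduces to a local nearest-neighbour search, which is why the computational load drops so sharply.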