1. Introduction
The lunar surface is widely littered with craters of all sizes and shapes. Each crater is composed of six main parts: a floor, a wall, an uplifted rim, a central peak, rays, and ejecta [1]. An in-depth study of crater morphology, including the shape, size, and distribution of craters, can provide important information about the internal structure, rock composition, and geological processes of the Moon; it can also reveal impact events of different sizes and frequencies on the Moon, helping us understand the early formation and evolution of the solar system [2]. Because the Moon has no water, wind, or atmosphere, crater information has been well preserved. During China's Chang'e-1 mission, a gravity field anomaly model was used to infer the presence of a mascon of relatively complicated geological composition directly below the Von Kármán Crater near the lunar south pole. The discovery of such hidden lunar mascons provides extremely important basic data for research on the formation and geology of the Moon [2]. Therefore, analyzing the geometry of craters has become a direct entry point for studying the evolution and development of lunar morphological features.
Due to limited data sources, current studies focus on medium- and large-sized craters with diameters greater than 0.1 km. Head [3,4] and Kadish et al. [5] created a list of 5185 craters with diameters of more than 20 km on the lunar surface based on data from the lunar orbiter laser altimeter (LOLA) system. McDowell et al. [6] identified 8639 craters on the Moon. Salamuniccar et al. [7] established the LU60645GT list of 60,645 craters. Robbins et al. [8] established a lunar crater database containing 2,033,574 craters. In subsequent studies, Robbins et al. [9] measured small craters (10 m to 500 m in diameter) in the lunar mare region and larger craters (100 m to 1 km in diameter) in the lunar highland and mare regions. Wang et al. [10] identified 106,016 lunar craters with diameters greater than 500 m based on public data from the Chang'e lunar exploration project and analyzed the differences and distribution rules of lunar craters at different spatial scales. Hou et al. [11] used Chang'e-1 digital elevation model (DEM) data to analyze the spatial distribution of craters over the entire lunar surface by counting craters in different diameter intervals and demonstrated that there are enormous numbers of medium-sized and small craters. Li et al. [12] summarized and analyzed the structural characteristics of craters, the concept and behavioral characteristics of crater families, and the relationship between craters and stratigraphic age based on the morphological and evolutionary characteristics of 100-meter-scale craters.
In addition, scholars have carried out research on crater recognition. Zhao [13] proposed an intelligent recognition method based on region-based fully convolutional networks (R-FCNs) for identifying craters with diameters of less than 1 km. Zuo et al. [14] used the contrast, highlight, and shadow features of craters under sunlight to automatically identify small lunar craters. Hu [15] proposed semantic segmentation of the digital elevation model to detect small craters. Kang [16] proposed a coarse-to-fine recognition method in which histogram of oriented gradients (HOG) features and a support vector machine (SVM) classifier first conduct a preliminary classification, after which small craters are automatically identified from charge-coupled device (CCD) images. Yang recognized 20,000 small craters through deep learning [17].
For the geometric modeling of craters, previous studies have demonstrated that morphological parameters such as the depth, edge height, and central peak height of complex craters vary with diameter according to a power law [18]; the basic expression is y = aD^b, where y represents a morphological characteristic value of a given crater, such as its depth, edge height, edge width, central peak height, volume, or bottom area; D denotes the diameter of the crater; and a and b are constants. In view of the high correlations among the morphological characteristics of newly formed craters, Bouska, Head, Pike, and Hu et al. [19,20,21,22,23,24,25] used data obtained by different detectors to intensively study newly formed original craters or secondary craters in different regions and presented correlation equations for the shape parameters of various craters.
25] used data obtained by different detectors to intensively study the newly formed original craters or secondary craters in different regions and presented the correlation equations of the shape parameters of various craters. Baldwin et al. studied the morphological relationship of craters in the early stage and obtained the expression
D = 0.256
d2 +
d + 0.6300. Bouska et al. identified the morphological relationship expression of craters as
logD =
Alogd +
B (where
A and
B are coefficients to be determined). Pike et al. obtained an expression for the morphological relationship of craters with diameters of less than 15 km, namely,
d = 0.196
d1.010. Hu et al. obtained morphological relationship expressions for simple craters, namely,
d = 0.126
D + 0.4902, and for complex craters,
d = 0.3273
D0.6252. Early crater depth data were obtained mainly by shadow length measurement. With the development of science and technology, depth data have become acquired through LOLA, which has improved the resolution and accuracy, and the identification rate and number of identified craters have increased greatly. Based on new data sources, more in-depth studies have been carried out on the geometry of craters. Hu et al. began to analyze the morphological characteristics of craters in different regions based on the differences in the morphological characteristics of different types of craters.
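These depth–diameter relations can be evaluated directly. The sketch below is a hypothetical helper (not from the cited studies) that encodes the coefficients quoted above:

```python
# Hypothetical helper illustrating the depth-diameter relations quoted above.
# Coefficients are taken directly from the text (Pike: d = 0.196*D^1.010 for
# craters under 15 km; Hu: d = 0.126*D + 0.4902 and d = 0.3273*D^0.6252).
def crater_depth(D_km, model="pike"):
    """Estimate crater depth (km) from diameter D_km (km)."""
    if model == "pike":          # craters with D < 15 km
        return 0.196 * D_km ** 1.010
    if model == "hu_simple":     # Hu et al., simple craters
        return 0.126 * D_km + 0.4902
    if model == "hu_complex":    # Hu et al., complex craters
        return 0.3273 * D_km ** 0.6252
    raise ValueError(model)

# A 10 km crater is roughly 2 km deep under Pike's relation.
print(round(crater_depth(10.0, "pike"), 3))
```

Comparing the models at the same diameter gives a quick sense of how strongly the regression form affects the predicted depth.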
However, in the geometric analysis of craters, the following problems are still encountered: (1) Available lunar remote sensing images cannot be used to accurately identify the geometries of small craters (less than 3 m in diameter), and no solution to this problem has been found in related research. (2) Whether the geometric modeling and metric data for small craters on the far side of the Moon are consistent with those of traditional large craters (less than 10 km in diameter) remains to be studied quantitatively. (3) Limited by the available data sources, the geometric modeling accuracy for small craters on the far side of the Moon is low, and high-precision geometric morphological modeling is necessary.
During the Chang'e-4 mission, the Yutu-2 lunar rover collected detailed images of the far side of the Moon in the Von Kármán Crater region of the SPA Basin, which provided basic data for studying the geometric modeling of small craters on the far side of the Moon. Building a fine three-dimensional model of small craters is the core step of the geometric modeling process, which generally includes camera calibration, visual positioning of the lunar rover, stereo matching, and 3D modeling. In the field of camera calibration, Zhang et al. [26] proposed a method based on three-dimensional direct linear transformation (DLT) and iterative multi-image resection for accurate lunar rover calibration. Xu et al. [27] built a combined adjustment model for the self-calibrated beam method that includes distance constraints, collinearity and coplanarity conditions, and relative pose constraints of the stereo camera. Using the solar panels as calibration targets after the rover landed, Yan et al. [28] proposed a grid-line-based calibration method for linear parameters, applying the Hough transform, a clustering strategy, and least squares line fitting. In the field of visual positioning, Wang et al. [29] proposed using scale-invariant feature transform (SIFT) matching for inter-station image matching, combining the correlation coefficient and least squares matching to achieve feature matching between images of the same station, and finally using the beam adjustment method to realize relative positioning between two stations. Liu et al. [30] used affine scale-invariant feature transform (Affine-SIFT) matching with the beam adjustment model to realize relative positioning between two stations. Ma et al. [31] used a unit-quaternion beam adjustment model with virtual observations to obtain an accurate adjustment model for the Mars rover and to achieve continuous relative positioning of the rover. In the field of stereo matching, Li et al. [32] proposed three-dimensional reconstruction of the lunar surface terrain using an improved dynamic programming method for stereo matching, in view of the unique imaging environment of the lunar surface (sparse texture, low illumination, and occlusion), and demonstrated that the image matching algorithm directly affects the accuracy and reliability of terrain restoration. Cao et al. [33], aiming to satisfy the real-time performance and reliability requirements of the lunar rover vision system, proposed preprocessing stereoscopic images with Gaussian filtering and contrast-limited adaptive histogram equalization (CLAHE), using SIFT to extract point and edge features, and calculating the three-dimensional information of the lunar environment by jointly matching point and edge features. Hou et al. [34] proposed a feature-assisted region-matching algorithm that applies regional feature constraints to low-texture and texture-free regions, combining the advantages of both matching approaches. Peng et al. proposed an adaptive Markov random field model to constrain the parallax range of texture-poor deep space detection images [35]; the adaptive method reduces the scope of the parallax search and preserves the parallax features in discontinuous regions. In recent years, scholars have proposed stereo-matching methods based on deep learning [36,37,38,39]. Zbontar et al. proposed the matching cost convolutional neural network (MC-CNN) algorithm [36], which introduces deep learning into stereo matching through two network structures: the MC-CNN-accurate architecture (MC-CNN-acrt) and the MC-CNN-fast architecture (MC-CNN-fst). The algorithm uses a deep neural network to learn a similarity metric on image-block features and trains it in a supervised manner. Mayer et al. [37] proposed an end-to-end stereo matching algorithm, the disparity estimation network (DispNet). The algorithm uses an encoder–decoder structure with a cascading method for parallax regression, and a synthetic dataset, Scene Flow, is used to pre-train the convolutional stereo-matching model. Kendall et al. [38] proposed an algorithm based on cost volume aggregation, the geometry and context network (GC-Net). The cost volume is constructed by concatenating features over the disparity dimension, a 3D convolutional neural network is used to regularize the constructed cost volume, and an encoder–decoder structure is used to reduce false stereo matches. Chang et al. [39] proposed the pyramid stereo matching network (PSMNet) algorithm. This algorithm aggregates feature information at different scales and locations to extract the global context information of images and employs a 3D hourglass structure to regularize the cost volume, thereby yielding a parallax map superior to that of GC-Net. These methods can solve the image matching problem, but due to the sparse and homogeneous lunar texture, their stereo matching accuracy is low; in particular, their reconstruction of fine lunar detail is poor.
To realize the fine modeling of small craters on the far side of the Moon, a high-precision three-dimensional model of the small craters around the rover's moving path is established by using stereo navigation camera images from the Yutu-2 lunar rover, and the geometric pattern of the small craters is studied [3,4,5,6,7,8,9,10,11,12]. The contributions of the proposed method comprise the following:
- (1)
Considering the sparse, homogeneous texture of the lunar surface, a cross-scale cost aggregation stereo matching network (CscaNet) is proposed.
- (2)
For the first time, the proposed CscaNet and the forward intersection [40] (triangulation) method are used to reconstruct fine lunar terrain from the image data obtained by the navigation camera. The small craters within the covered range are extracted, and a geometric morphological law analysis is carried out, which fills the gap in the morphological analysis of craters on the far side of the Moon at the miniature scale.
- (3)
For the first time, the geometric pattern of small craters is discovered, and the relationship between the depth and diameter of craters within the scope of the Yutu-2 patrol mission is analyzed.
2. Methodology
The geometric modeling method for craters on the far side of the Moon proposed in this study first calibrates the lunar rover navigation camera based on the self-calibrated beam method combined adjustment model to obtain the internal and external parameters. Second, a visual positioning model of the lunar rover is built by using sequence images with overlapping regions from adjacent exposure stations to accurately position the rover. Third, the CscaNet algorithm is used to obtain the image parallax map, which solves the problem of poor image matching caused by the homogeneous lunar image texture. On this basis, the parallax map is subjected to forward intersection to yield a 3D point cloud model of the lunar surface. Finally, the metric data of small craters (crater diameter, D; crater depth, d; and depth–diameter ratio, d/D) are obtained based on the 3D point cloud of the lunar surface to analyze the small crater distribution law and the corresponding depth–diameter expression.
Figure 1 shows the overall technical flow of the study.
2.1. Yutu-2 Navigation Camera Calibration
The Yutu-2 rover is equipped with three types of stereo cameras: navigation cameras, panoramic cameras, and obstacle avoidance cameras. The navigation cameras and panoramic cameras are mounted on the mast, while the two pairs of obstacle avoidance cameras are installed on the front and rear of the rover body, as shown in Figure 2. Because each panoramic camera has a small field of view, its geometric distortion is significant. Navigation camera images are therefore generally used for lunar rover navigation and positioning, terrain reconstruction, path planning, and other scientific expedition tasks. The navigation cameras have an effective depth of field from 0.5 m to infinity, a resolution of 1024 × 1024 pixels, a field of view of 46.9°, a stereo base of 27 cm, and a focal length of 8.72 mm. After the Yutu-2 lunar rover is launched and operating on the Moon, the internal and external camera parameters calibrated on the ground may change. Therefore, to improve the accuracy of subsequent lunar terrain constructions, the present study employs the in-orbit self-calibration method for stereo cameras with multiple additional constraints [27] to accurately determine the internal and external parameters of the cameras.
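As a quick plausibility check on these specifications, the focal length can be expressed in pixels from the resolution and field of view. The sketch below assumes square pixels and that the 46.9° FOV is measured across the full image width (an assumption; the text does not specify):

```python
import math

# Focal length in pixels from the sensor resolution (1024 x 1024) and field
# of view (46.9 deg) quoted above, assuming the FOV spans the image width.
width_px = 1024
fov_deg = 46.9
f_px = (width_px / 2) / math.tan(math.radians(fov_deg / 2))
print(round(f_px, 1))  # focal length in pixels

# With the stated 8.72 mm focal length this implies a pixel pitch of about
# 8.72 / f_px mm -- an inferred value, not one given in the text.
pixel_pitch_um = 8.72e3 / f_px
print(round(pixel_pitch_um, 2))
```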
After a navigation camera captures a surround image, its field of view contains the rover's solar panels (as shown in Figure 3). On the basis of the conventional beam-method camera self-calibration model, the self-calibrated beam method combined adjustment model accurately calculates the interior orientation elements (f, x0, y0) and the exterior orientation elements of the cameras under constraints such as the collinear and coplanar features of the solar panels and the relative position and attitude relation between the stereo navigation cameras. The indirect adjustment model with multiple constraints is defined as follows:
V = A x̂_el + B x̂_er + C x̂_il + D x̂_ir + G X̂ − L1, with weight matrix P;
Vc = F X̂ − L2, with weight matrix Pc = E, (1)
where V denotes the image point residual corrections; Vc denotes the constraint residual corrections; x̂_el and x̂_er are the corrections of the exterior orientation elements of the left and right images, respectively; x̂_il and x̂_ir are the corrections of the interior orientation elements of the left and right images, respectively; X̂ is the correction of the 3D coordinates of the solar panel features; A, B, C, D, G, and F are the corresponding coefficient matrices; L1 and L2 are the residual matrices; P is the weight matrix corresponding to the observed quantities; and E is the identity matrix. Specific expressions for these parameters are provided in reference [27].
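Whatever the exact constraint structure, a model of this form is solved as a weighted least-squares problem. The sketch below shows the generic Gauss–Markov solve on a toy line-fitting problem; it is illustrative, not the mission calibration code:

```python
import numpy as np

# Minimal sketch of the weighted least-squares step used to solve an indirect
# adjustment model of the form V = A*x - L with weight matrix P. The full
# constrained model adds more design matrices but is solved the same way.
def weighted_lsq(A, L, P):
    """Return parameter corrections x_hat minimizing V^T P V, and residuals V."""
    N = A.T @ P @ A                       # normal matrix
    x_hat = np.linalg.solve(N, A.T @ P @ L)
    V = A @ x_hat - L                     # residuals
    return x_hat, V

# Toy example: fit a line y = a + b*t with unequal observation weights.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 1.0, 2.1, 2.9])
A = np.column_stack([np.ones_like(t), t])
P = np.diag([1.0, 1.0, 2.0, 2.0])         # trust the later observations more
x_hat, V = weighted_lsq(A, y, P)
print(np.round(x_hat, 3))
```

The error in unit weight reported below is the usual by-product of this solve, computed from the weighted residuals.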
The least squares adjustment is applied to Equation (1) to obtain the calibration parameters (the focal length f, the principal point coordinates x0 and y0, and the distortion coefficients); the error in the unit weight is 0.210 pixels. The obtained parameters are shown in Table 1.
Table 1 shows that the differences between the on-orbit calibration and pre-launch calibration parameters were between 1.062 pixels and 5.186 pixels. Specifically, the differences in the focal length f of the left and right cameras were 1.755 pixels and 5.186 pixels, respectively; the differences in the principal point coordinates x0 and y0 were 1.062 pixels, 1.257 pixels, 1.931 pixels, and 1.7265 pixels, respectively; and the distortion coefficients were close.
Table 2 shows the relative position and attitude relationship between the stereo cameras.
2.2. Visual Positioning of the Lunar Rover
Telemetry data, including the real-time pose information and the mast rotation angle of the lunar rover, are used to transform the camera pose parameters from the navigation camera coordinate system to the north-east-down system, and these are regarded as the initial values of the exterior orientation elements of the camera. When the features in the image are rich, speeded-up robust features (SURF) matching is used for the sparse matching of adjacent stereo images with overlapping regions. Figure 4 shows the SURF matching results of adjacent stations. When the matching between adjacent station images is poor, manually extracted points are used to supplement the matching points. The matching accuracy of the SURF algorithm is 82.6% [41]. The quaternion-based beam adjustment model [40] is built from the interior orientation elements and the relative pose parameters of the camera, and the optimized absolute pose parameters of the camera at the current exposure station are determined through least squares adjustment. Then, the navigation camera coordinate system is transformed into the north-east-down coordinate system of the lander, thereby completing the visual positioning of the lunar rover. In the surface experiment of the Yutu-2 lunar rover, the average relative positioning accuracy is 3.01%. The positioning accuracy of the in-orbit movement of the Yutu-2 lunar rover is 1.8 ± 1.1 m [40].
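The coordinate transform at the heart of this step, rotating a camera-frame point into the lander's north-east-down frame with a quaternion-derived rotation and then translating it, can be sketched as follows (the quaternion and translation below are illustrative values, not mission telemetry):

```python
import numpy as np

# Sketch of the frame transform used in visual positioning: rotate a point
# from the navigation-camera frame into the lander's north-east-down (NED)
# frame with a unit quaternion, then translate by the camera position.
def quat_to_rot(q):
    """Rotation matrix from a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def camera_to_ned(p_cam, q_cam_to_ned, t_ned):
    return quat_to_rot(q_cam_to_ned) @ p_cam + t_ned

# 90-degree rotation about the z-axis as a sanity check.
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
p = camera_to_ned(np.array([1.0, 0.0, 0.0]), q, np.array([10.0, 5.0, 0.0]))
print(np.round(p, 3))
```

Representing the rotation as a unit quaternion is what makes the beam adjustment of [40] singularity-free compared with Euler-angle parameterizations.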
2.3. Stereo Matching of Navigation Images from the Lunar Rover
The residual module is optimized based on the classic PSMNet algorithm to address the poor stereo matching accuracy caused by the sparse, homogeneous lunar surface texture; furthermore, the CscaNet structure is constructed (as shown in Figure 5) to exploit the image context information in the matching cost volume.
First, a small number of feature maps extracted by conventional convolution are further processed by depthwise convolution to obtain the full internal details of the input image. Second, adaptive average pooling is used to construct cross-scale cost volumes at four different scales to expand the receptive range of the global context information. Finally, the cross-scale cost volumes are passed to the cross-scale cost volume 3D aggregation module for cost aggregation and parallax regression to obtain the predicted parallax; this module consists of a preprocessing structure and three improved hourglass structures.
Since a traditional convolutional neural network requires many floating-point operations (FLOPs), depthwise convolution is used to obtain the important features of the input image from the redundant information in the feature maps after overall training. The redundancy among these features is exploited to preserve the integrity of the input image feature information. Therefore, a residual module with improved convolution is designed to fuse the redundant features obtained by training. This module can be divided into two parts: a convolution improvement method and a network structure setting.
Assuming that the input feature map is X ∈ R^(h×w×C), n feature maps can be generated through any convolutional layer as follows:
Y = X * f + b, (2)
where Y ∈ R^(h′×w′×n) is the output feature map, f ∈ R^(C×k×k×n) is the convolution filter, k is the convolution kernel size of f, * is the convolution operation, and b represents the offset.
During the convolution process, the number of floating-point operations is n·h′·w′·C·k·k, where n is the number of filters; h′ and w′ are the height and width, respectively, of the output feature map; and C represents the number of channels. To reduce the complexity of the convolution operation, it is necessary to reduce the total number of parameters by optimizing the parameter counts of f and b, that is, to control the size of the feature map. Since conventional convolution extracts many feature maps containing redundant features, it is assumed that the output contains a small number of intrinsic feature maps obtained by conventional convolution filters. Then, m intrinsic feature maps Y′ are obtained as follows:
Y′ = X * f′, (3)
where Y′ ∈ R^(h′×w′×m) is the intrinsic feature map and f′ ∈ R^(C×k×k×m) is the filter used; when m ≤ n, the offset can be neglected to reduce the operational complexity. While keeping the size of the feature map unchanged, the remaining parameters of the convolution process are set to constant values consistent with Equation (2).
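The FLOP savings that motivate this design can be made concrete. The sketch below compares the cost of a conventional convolution (n·h′·w′·C·k·k, as above) with a hypothetical improved layer that generates m = n/s intrinsic maps conventionally and the remaining maps with cheap depthwise filters; the layer sizes are illustrative:

```python
# Sketch of the FLOP comparison motivating the improved residual module:
# a conventional convolution produces all n feature maps directly, while the
# improved version produces m = n/s intrinsic maps with a k x k convolution
# and expands each to s maps with cheap d x d depthwise filters.
def conv_flops(n, h_out, w_out, C, k):
    """FLOPs of a conventional convolution (as in the text: n*h'*w'*C*k*k)."""
    return n * h_out * w_out * C * k * k

def improved_flops(n, h_out, w_out, C, k, s, d):
    m = n // s                                   # intrinsic feature maps
    primary = conv_flops(m, h_out, w_out, C, k)  # conventional part
    cheap = (s - 1) * m * h_out * w_out * d * d  # depthwise expansions
    return primary + cheap

# Illustrative layer: 64 -> 128 channels on a 64 x 64 map, k = d = 3, s = 2.
base = conv_flops(128, 64, 64, 64, 3)
impr = improved_flops(128, 64, 64, 64, 3, 2, 3)
print(round(base / impr, 2))  # speed-up ratio, close to s
```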
To yield n feature maps of constant size, depthwise convolution is performed on each intrinsic feature map y′_i to yield s feature maps through the following process:
y_ij = Φ_i,j(y′_i), i = 1, …, m; j = 1, …, s, (4)
where y′_i is the i-th intrinsic feature map of Y′ and Φ_i,j is the depthwise convolution operation generating the j-th feature map y_ij (excluding the last convolution operation); that is, the feature maps generated from y′_i are y_i1, …, y_is.
Feature maps are obtained using Equation (4), and the output feature map contains n = m·s channels. With this improved convolution procedure, depthwise convolution operations are performed on each channel with lower computational complexity than conventional convolution; the improved convolution operation process is shown in Figure 6.
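A minimal numpy sketch of this expansion step, assuming 3 × 3 depthwise filters and an identity mapping for each intrinsic map (the filter values here are random placeholders, not trained weights):

```python
import numpy as np

# Minimal sketch of the improved convolution: m intrinsic feature maps are
# expanded to n = m*s output maps by applying cheap 3x3 depthwise filters to
# each intrinsic map, keeping the identity maps as in Equation (4).
def depthwise_3x3(feat, kernel):
    """'Same'-padded 3x3 depthwise convolution of one 2D feature map."""
    h, w = feat.shape
    padded = np.pad(feat, 1)
    out = np.zeros_like(feat)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def expand_intrinsic(intrinsic, s, rng):
    """intrinsic: (m, H, W) -> output: (m*s, H, W)."""
    outputs = list(intrinsic)                  # identity mappings
    for feat in intrinsic:
        for _ in range(s - 1):                 # s-1 cheap maps per input
            kernel = rng.standard_normal((3, 3))
            outputs.append(depthwise_3x3(feat, kernel))
    return np.stack(outputs)

rng = np.random.default_rng(0)
intrinsic = rng.standard_normal((4, 8, 8))     # m = 4 intrinsic maps
out = expand_intrinsic(intrinsic, s=2, rng=rng)
print(out.shape)                               # n = m*s = 8 output maps
```

In a real network the depthwise filters are learned; the point of the sketch is only the channel bookkeeping: m intrinsic maps become n = m·s outputs at depthwise cost.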
Using the conventional convolution operation, a small number of intrinsic feature maps are obtained; the depthwise convolution operation is then used to increase the number of channels and expand the feature information, which better extracts internal features while greatly reducing the computational complexity.
To extract more complete image feature map details, the network structure of the residual module is set as follows: First, three 3 × 3 filters are constructed to extract the internal features of the low-level structure; the size of the output feature map is (H/2) × (W/2) × 32. Then, the basic residual blocks conv1_x, conv2_x, conv3_x, and conv4_x perform pixel-by-pixel feature extraction of high-level semantic information, where conv1_x, conv2_x, conv3_x, and conv4_x have 3, 16, 3, and 3 basic residual units, respectively. Each residual unit consists of two 3 × 3 convolution operations, a batch normalization (BN) layer, and a rectified linear unit (ReLU) activation function. Finally, conv2_x, conv3_x, and conv4_x are cascaded, and the low-level internal structural features and high-level semantic information are fused to output the left and right feature maps, each with a size of (H/4) × (W/4) × 320. The parameter settings are specified in Table 3.
Because a single-scale cost volume does not fully consider the cross-scale spatial relationships of stereo image pairs, a cross-scale cost volume is proposed for extracting the global context information of stereoscopic images and the parallax details. The cross-scale cost volume combines the characteristics of the concatenation cost volume and the joint cost volume to provide more accurate and complete similarity measures and thereby obtain more detailed feature information. To construct the cross-scale cost volume, feature maps at four scales are used to extract cross-scale feature information. The structure of the cross-scale cost volume and the dimensions of each of its four levels are shown in Figure 7.
A cross-scale cost volume 3D aggregation module is designed to reduce the parallax search range. The fusion process of this module is shown in Figure 8. The module captures the global context information through a preprocessing structure and three encoder–decoder structures. The preprocessing structure has four 3 × 3 × 3 3D convolutional layers and can capture the feature information of the low-level structure and use it to geometrically constrain the parallax map. The encoder–decoder structure is composed of encoding and decoding parts and requires repeated top-down and bottom-up convolution operations. The introduced encoder–decoder structure accelerates the fusion inference process by reducing the number of 3D hourglass modules; it can not only directly fuse the constructed cost volumes of the various scales but also capture global feature information with better robustness, thereby preserving the global parallax range.
The structure of the cross-scale cost volume 3D aggregation module is shown in Figure 9; its structure is similar to that of an hourglass network. By performing cascading operations on the feature dimensions, the obtained cost volume is fused with the down-sampled cost volume (blue line in Figure 8); as a result, the size of the cost volume is reduced to 1/32 of that of the original image. The specific operation process is as follows: A 3D convolutional layer with a stride of 2 is used to obtain a cost volume of 1/4 the size of the input image (the cost volume of the first scale), which is then down-sampled to 1/8 the size of the input image. Then, in the feature dimension, the down-sampled cost volume is cascaded with the cost volume of the second scale, and a 3D convolutional layer is added to fix the size of the feature channel; similar steps are repeated to reduce the size of the cost volume to 1/32 of that of the input image. Finally, a 3D deconvolution operation is used to up-sample the cost volume step by step back to 1/4 of the size of the original input image. In the feature dimension, the scale is set to 32, 64, 128, and 128, in order. In the skip connections, 1 × 1 × 1 convolution operations (the dotted line in Figure 9) are used to reduce the number of network parameters, and the cost volume is optimized using two stacked 3D hourglass structures. Next, an output unit is used for parallax prediction. The output unit yields a single-channel 4D cost volume by performing two 3 × 3 × 3 3D convolution operations. The cost volume is up-sampled to the size H × W × D of the input image, and the soft-argmin operation is applied to generate the final parallax map.
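The final soft-argmin step can be sketched independently of the network: it converts the aggregated cost volume into a sub-pixel disparity map by taking the probability-weighted mean of disparities under a softmax over negated costs:

```python
import numpy as np

# Sketch of the soft-argmin step that turns an aggregated cost volume into a
# sub-pixel disparity estimate: d_hat = sum_d d * softmax(-cost_d), computed
# independently at every pixel.
def soft_argmin(cost_volume):
    """cost_volume: (D, H, W) matching costs -> (H, W) disparity map."""
    D = cost_volume.shape[0]
    neg = -cost_volume
    neg -= neg.max(axis=0, keepdims=True)        # numerical stability
    prob = np.exp(neg)
    prob /= prob.sum(axis=0, keepdims=True)      # softmax over disparities
    disparities = np.arange(D).reshape(D, 1, 1)
    return (prob * disparities).sum(axis=0)

# A cost volume whose minimum lies at disparity 3 everywhere should regress
# a disparity close to 3.
cost = np.full((8, 4, 4), 10.0)
cost[3] = 0.0
disp = soft_argmin(cost)
print(np.round(disp[0, 0], 3))
```

Because the output is an expectation rather than a hard argmin, it is differentiable and yields sub-pixel disparities, which is why GC-Net, PSMNet, and CscaNet all end with this operation.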
2.4. Three-Dimensional Crater Modeling
The reconstruction accuracy for Yutu-2 navigation stereoscopic images has been analyzed in previous work [40]. The mean absolute error of the lunar terrain reconstruction with the BA+Geometry algorithm in the range of 0.8 m to 6.1 m reached 1.04 cm, a significant improvement compared with the 0.1 m standard deviation of LOLA. Subsequently, the parallax map obtained by the CscaNet method in Section 2.3 is used to obtain a 3D point cloud in the left navigation camera system through forward intersection (triangulation). Using the navigation pose and rover pose, this point cloud is transformed into the north-east-down system. According to theoretical analysis, the mean reconstruction accuracy of the lunar terrain is better than 0.98 cm within the range of 0.8 m to 6.1 m, which surpasses the previous terrain reconstruction standard [40], as seen in Figure 10.
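For a rectified stereo pair, the forward intersection reduces to the standard disparity-to-depth relation Z = f·B/d. The sketch below uses the 0.27 m stereo base quoted in Section 2.1 and an assumed focal length of 1180 pixels and principal point at the image center (round illustrative values, not the calibrated parameters):

```python
# Sketch of forward intersection (triangulation) for a rectified stereo pair:
# depth Z = f*B/d, with f the focal length in pixels, B the stereo base, and
# d the disparity. Baseline 0.27 m is from the navigation-camera spec; f_px
# and the principal point (cu, cv) are assumed round values for illustration.
def triangulate(u, v, disparity, f_px=1180.0, baseline_m=0.27,
                cu=512.0, cv=512.0):
    """Pixel (u, v) with disparity (px) -> (X, Y, Z) in the left camera frame."""
    Z = f_px * baseline_m / disparity
    X = (u - cu) * Z / f_px
    Y = (v - cv) * Z / f_px
    return X, Y, Z

# A 100 px disparity corresponds to roughly 3.2 m of depth.
X, Y, Z = triangulate(612.0, 512.0, 100.0)
print(round(Z, 3), round(X, 3))
```

Applying this per pixel of the parallax map yields the dense camera-frame point cloud that is then rotated into the north-east-down system.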
Our accuracy analysis focuses only on the distance error between points in the camera coordinate system, as diameter and depth are relative quantities: even in different coordinate systems, the physical distance between two points remains unchanged. We used the lowest point of the region as the center of the crater and analyzed the 3D reconstruction accuracy of four manually selected points and the crater center on the x-axis and y-axis profiles. The maximum standard deviation in the plane and the maximum standard deviation in elevation are taken as the plane error and the elevation error, respectively, of the measurement, and the result is as follows:
Figure 11 shows the resulting mean measurement errors of the crater diameter and the crater depth in the camera coordinate system.
This accuracy analysis demonstrates the effectiveness of our algorithm. Next, the interactive point cloud processing software CloudCompare v2 is used to crop the point cloud and obtain individual crater point clouds; each clipped point cloud is then used for profile analysis to obtain 2D transverse and longitudinal profile information, demonstrated with point clouds 21, 32, and 47 as examples (the numbers correspond to Table 4; the data share the same names) in Figure 12.
2.5. Construction of Characteristic Metrics of Craters
The highest point of a crater is often referred to as the rim, the relatively flat area inside a crater is called the floor, the height from the floor to the rim is called the crater depth, the boundary between the rim and the interior of the crater is referred to as the lip, the portion between the rim and the floor is called the wall, and the distance between the left and right rim crests, as shown in Figure 13, is referred to as the crater diameter.
To accurately acquire the topographic information of the small craters on the far side of the Moon, it is necessary to extract the overall dimensions and profile shapes of the craters from the 3D model. The overall dimensions include the crater diameter D and the crater depth d. The shapes of lunar surface craters are normally irregular ellipses or circles; in the present study, the diameter of the circle with the same area as the rim outline is regarded as the crater diameter, and the elevation difference between the floor and the rim is regarded as the depth. The depth of a crater is related not only to the size, initial velocity, and density of the meteorite but also to the gravitational field of the impacted body and its lithology. The profile shape metric of a crater is mainly reflected in the relationship between the crater depth and the crater diameter, namely, the depth–diameter ratio d/D, which reflects its steepness. On this basis, the distribution law and geometric morphology of small craters are analyzed.
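The equal-area-circle convention above can be sketched directly (the elliptical rim in the example is hypothetical):

```python
import math

# Sketch of the metric construction: the crater diameter is taken as the
# diameter of the circle with the same area as the (possibly elliptical) rim
# outline, and the depth-diameter ratio follows directly.
def equal_area_diameter(rim_area_m2):
    """Diameter of the circle whose area equals the rim outline area."""
    return 2.0 * math.sqrt(rim_area_m2 / math.pi)

def depth_diameter_ratio(depth_m, rim_area_m2):
    return depth_m / equal_area_diameter(rim_area_m2)

# An elliptical rim with semi-axes 1.2 m and 0.8 m has area pi*a*b.
area = math.pi * 1.2 * 0.8
D = equal_area_diameter(area)      # equivalent diameter, about 1.96 m
print(round(D, 3), round(depth_diameter_ratio(0.22, area), 3))
```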
The steps to extract indicators such as the diameter D and depth d of a crater from the 3D terrain obtained in Section 2.4 are as follows:
- (1)
Manually select the crater area in the reconstructed 3D terrain using the CloudCompare software.
- (2)
Rotate the point cloud of the selected crater area to ensure that the area outside the crater edge is in the same horizontal plane.
- (3)
Automatically search for the lowest point within the region.
- (4)
Take the x-axis and y-axis profiles that pass through the lowest point, and project them in the x-axis and y-axis directions to obtain the corresponding contour lines (x-axis and y-axis profiles).
- (5)
Calculate the diameter of the crater based on the distance between the two farthest points on the crater edge.
- (6)
Calculate the vertical distance from the lowest point to the line connecting the two points at the crater edge to obtain the depth of the crater.
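Steps (3)–(6) above can be sketched as follows (a simplified version that assumes the cloud is already leveled per step (2) and that rim points have been picked; the toy bowl-shaped crater is illustrative):

```python
import numpy as np

# Sketch of steps (3)-(6): given a leveled crater point cloud (N x 3), find
# the lowest point, estimate the diameter from the two farthest rim points,
# and take the depth as the vertical offset of the lowest point below the
# rim-to-rim line (here approximated by the mean height of those two points).
def crater_metrics(points, rim_points):
    """points: crater cloud (N, 3); rim_points: picked rim points (M, 3)."""
    lowest = points[np.argmin(points[:, 2])]             # step (3)
    # Step (5): diameter = largest pairwise rim distance in the plane.
    diffs = rim_points[:, None, :2] - rim_points[None, :, :2]
    dists = np.linalg.norm(diffs, axis=-1)
    i, j = np.unravel_index(np.argmax(dists), dists.shape)
    diameter = dists[i, j]
    # Step (6): depth relative to the line between the two rim points.
    rim_height = 0.5 * (rim_points[i, 2] + rim_points[j, 2])
    depth = rim_height - lowest[2]
    return diameter, depth

# Toy bowl-shaped crater of radius 1 m and depth 0.25 m.
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
rim = np.column_stack([np.cos(theta), np.sin(theta), np.zeros(8)])
cloud = np.vstack([rim, [[0.0, 0.0, -0.25]]])
D, d = crater_metrics(cloud, rim)
print(round(D, 3), round(d, 3))
```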