Article

Multi-Branch Deep Neural Network for Bed Topography of Antarctica Super-Resolution: Reasonable Integration of Multiple Remote Sensing Data

1 School of Information and Communications Engineering, Beijing University of Technology, Beijing 100124, China
2 Polar Research Institute of China, Shanghai 200136, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(5), 1359; https://doi.org/10.3390/rs15051359
Submission received: 9 December 2022 / Revised: 4 February 2023 / Accepted: 26 February 2023 / Published: 28 February 2023
(This article belongs to the Special Issue Emerging Remote Sensing Techniques for Monitoring Glaciers and Snow)

Abstract:
Bed topography and roughness play important roles in numerous ice-sheet analyses. Although the coverage of ice-penetrating radar measurements has vastly increased over recent decades, significant data gaps remain in certain areas of the subglacial topography and must be filled by interpolation. However, the bed topography generated by interpolation methods such as kriging and mass conservation is generally smooth at small scales, lacking topographic features important for sub-kilometer roughness. DeepBedMap, a deep learning method combined with multiple surface observation inputs, can generate high-resolution (250 m) bed topography with realistic bed roughness but produces some unrealistic artifacts and higher bed elevation values in certain regions, which could bias ice-sheet models. To address these issues, we present MB_DeepBedMap, a multi-branch deep learning method that generates more realistic bed topography. The model improves upon DeepBedMap by separating the inputs into two groups using a multi-branch network structure according to their characteristics, rather than fusing all inputs at an early stage, to reduce artifacts in the generated topography caused by early fusion of the inputs. A direct upsampling branch preserves large-scale subglacial landforms while generating high-resolution bed topography. We use MB_DeepBedMap to generate a high-resolution (250 m) bed elevation grid product of Antarctica, MB_DeepBedMap_DEM, which can be used in high-resolution ice-sheet modeling studies. Moreover, we test the performance of the MB_DeepBedMap model in Thwaites Glacier, the Gamburtsev Subglacial Mountains, and several other regions by comparing the qualitative topographic features and quantitative errors of MB_DeepBedMap, BEDMAP2, BedMachine Antarctica, and DeepBedMap. The results show that MB_DeepBedMap provides more realistic small-scale topographic features and roughness than BEDMAP2, BedMachine Antarctica, and DeepBedMap.

1. Introduction

The bed topography beneath the Antarctic ice sheets is one of the essential controls for most ice-sheet analyses and models, including ice volume estimation [1,2] and sea level rise prediction [3]. The loss of ice mass from the Antarctic ice sheet has been increasing through the fast flow of ice streams over the last several decades [4], which leads to sea-level rise. The bed elevation and roughness are particularly important for the most vulnerable glaciers grounded below sea level, which could accelerate ice loss [4] and destabilize ice sheets [5].
Subglacial bed elevation is primarily measured by airborne ice-penetrating radar, and the density and coverage of bed elevation measurements on the Antarctic continent have increased greatly over the last several decades [1,6,7]. However, the measurements remain limited in their geographic coverage and are strongly anisotropic: spatial sampling is very dense in the along-track direction, but flight tracks in certain regions are spaced up to 100 km apart, so interpolation is needed to fill the data gaps [8]. Several Antarctic continental digital elevation models (DEMs) have been constructed using spline or kriging interpolation, such as BEDMAP [6], with a 5-km grid, and BEDMAP2 [1], with 1-km resolution. The DEMs compiled by interpolation have uncertainties exceeding 1 km in poorly sampled regions, which adversely affect the numerical simulation of ice-sheet dynamics [9].
Several indirect methods have been proposed to generate a high-resolution Antarctic DEM with realistic topography, including spatial statistical and inverse methods. Spatial statistical methods are intended to generate a higher-resolution bed that reproduces known detailed topographical features in similar areas. For example, the conditional simulation method proposed by Goff et al. adds stochastic synthetic small-scale details to the interpolated bed topography, avoiding the inconsistencies introduced by kriging interpolation [10]. Graham et al. generated a high-resolution synthetic bed topography by combining terrain from low-pass-filtered BEDMAP2 with non-conditional rough terrain simulated using high-resolution radar data [11]. Although statistical simulation can generate multiple realizations that reproduce the spatial statistics of observations, the generated topography is too rough and contains some steep peaks [12]. Inverse methods use high-resolution ice surface information combined with glaciological-process knowledge to reconstruct the bed topography [13]. Farinotti et al. tested various models with surface observation inputs to infer ice thickness in the Ice Thickness Models Intercomparison eXperiment (ITMIX) [14]. Morlighem et al. used the mass conservation method to construct bed topographies for Greenland (BedMachine) [15] and Antarctica (BedMachine Antarctica) [2], constrained by ice surface velocity and radar measurements [16]. More recently, inspired by image super-resolution methods, a deep learning method, DeepBedMap, was proposed to resolve the bed topography of Antarctica, with features of both indirect inverse modeling and spatial statistical methodologies [17]. This method takes a low-resolution bed DEM and high-resolution surface information as inputs to generate a high-resolution bed DEM with realistic topography. The DeepBedMap model can reproduce the small-scale roughness observed in the training data. However, its product has unrealistic topography in certain regions and suffers from significant bed elevation deviations of up to 1000 m in regions where the bed is above sea level, which could bias ice-sheet model simulations and the estimation of ice thickness.
To address these issues, we propose a multi-branch network structure to better render realistic texture details. In addition to the various ice-sheet information inputs used by DeepBedMap, we add gradients of the low-resolution bed topography as inputs, with the aim of better preserving terrain shape. Our inputs include a low-resolution bed DEM, a high-resolution surface DEM, surface ice flow velocity, snow accumulation rate, and gradients of the bed. To avoid mutual interference between inputs, we separate them into two groups, depending on the characteristics of the input data, and feed them to different branches of the network. One group, including the bed elevation and ice surface elevation, is used to extract elevation-related topographic features; the other includes the bed gradient, ice flow velocity, and snow accumulation rate, which are used to extract local area features. We use a global connection to upsample the low-resolution bed topography input directly with bilinear interpolation to retain the large-scale topographic features of the input. We combine the small- and large-scale topographic features to generate a four-times upsampled Antarctic bed DEM with a resolution of 250 m. Our main contributions are as follows: (1) We present a high-resolution (250 m) bed elevation map of Antarctica with more realistic topography than DeepBedMap, which also preserves more detail in bed roughness than BEDMAP2 and BedMachine Antarctica. (2) We design a multi-branch network structure that groups the input data according to their characteristics, making full use of the input information to enhance bed topography resolution and generate realistic topographic features with adequate roughness. (3) To preserve large-scale topographic features, a global connection is created to combine the bilinear upsampling of the low-resolution topography input with the network output. We name the multi-branch network MB_DeepBedMap, and the resulting digital elevation model (DEM) product is MB_DeepBedMap_DEM.

2. Related Work

The resolution of the bed digital elevation model in Antarctica is constrained by the density and coverage of radar measurements. To meet the demand for a high-resolution bed DEM for ice-sheet investigations, interpolation methods such as bilinear and inverse distance weighting are generally used to improve DEM resolution. However, because these methods only consider the neighborhood around the point to be interpolated and cannot provide enough valuable spatial information, the resulting topographies have smooth surfaces and lack high-frequency detail. When elevation values are regarded as grayscale values, a DEM can be treated as a grayscale image. Therefore, image super-resolution methods can be used to generate a high-resolution DEM from a low-resolution DEM. Image super-resolution refers to the process of generating a related high-resolution image from a low-resolution image [18], which is an ill-posed problem because one low-resolution image corresponds to multiple high-resolution images. In the last few decades, significant progress has been made in the field of super-resolution, especially with methods based on deep learning [19]. Deep learning-based methods build a convolutional neural network (CNN) to learn the mapping relationship between low-resolution and high-resolution images. SRCNN, proposed by Dong et al., was the first work to solve the super-resolution problem using a CNN [20]. Since then, various deep learning-based methods have been proposed [21]. One line of work improves the super-resolved results by modifying the network architecture, for example with a deeper network and gradient clipping [22], residual and dense blocks [23], and channel attention [24]. Specifically, Lim et al. [23] built the very deep network EDSR by removing unnecessary components from the residual block and adding residual connections, which achieves better performance than older models. However, the super-resolution results of these methods appear overly smooth due to the mean square error loss [25]. Another line of work improves the visual quality of super-resolution results by adding more effective components to the loss function, such as perceptual loss [26] and adversarial loss [27]. By adopting adversarial training with a generative adversarial network (GAN) [27], the Super-Resolution Generative Adversarial Network (SRGAN) [28] is able to produce super-resolution images with more realistic texture details. Wang et al. [29] proposed the enhanced super-resolution generative adversarial network (ESRGAN) by introducing dense connections into the multi-level residual network, which achieves visually pleasing results and state-of-the-art performance.
Considering the good performance of image super-resolution methods, Xu et al. proposed a learning-based method to generate high-resolution details from similar regions in a single DEM [30]. Chen et al. used SRCNN to learn the mapping relationship between low- and high-resolution DEMs [31]. Since the elevation range of a DEM is greater than the gray value range of an image, Xu et al. proposed a deep gradient prior network based on transfer learning, which improves the resolution of gradients that have a lower intensity range [32]. In contrast to the above work on subaerial Earth-surface DEMs, a nonlinear relationship exists between the subglacial bed elevation and surface observations [33]. Therefore, Leong et al. developed a deep learning method, DeepBedMap, in which the low-resolution bed topography and ice surface information inputs are fed into a modified ESRGAN [17]. DeepBedMap can generate high-resolution bed topography with terrain details by integrating multiple ice surface information inputs. Inspired by DeepBedMap and related DEM super-resolution methods, we propose a multi-branch network that can better integrate additional information to resolve the bed topography of Antarctica.

3. Materials and Methods

3.1. Data

The proposed MB_DeepBedMap model uses the BEDMAP2 grid product as input data to generate a high-resolution DEM. In addition, the bed gradient and associated ice surface data (i.e., ice surface elevation [34], ice velocity [35], and snow accumulation [36]) are used to assist the generation of the high-resolution DEM. We use the same training dataset as DeepBedMap [17] in order to make a fair comparison with it. The training dataset is collected from areas with dense radar measurements (radar track spacing of less than 250 m), most of which are located in coastal areas below sea level. The bed elevation data picked from ice-penetrating-radar surveys (see Table 1) are used as the reference to train the proposed model and evaluate the output results. Following DeepBedMap, we grid the elevation data onto a 250 m resolution regular grid using Generic Mapping Tools v6.0 (GMT6) [37], computing the median elevation within each grid cell. Then, we use an adjustable-tension continuous-curvature spline function to generate high-resolution bed grid products from the preprocessed elevation data. To produce the training dataset, the ground-truth bed elevation grids are cropped into image patches using a sliding window, each of which is completely filled with data. The other inputs are also cropped into image patches covering the same spatial area. To reduce boundary artifacts in prediction, no padding is used in the input convolutional layer of the proposed model, meaning that the model input grids need to cover a larger spatial area than the ground-truth grids. Specifically, the coverage area of the model input grids is 11 km × 11 km (11 pixels × 11 pixels for the low-resolution input), while the coverage of the ground-truth grids is 9 km × 9 km (36 pixels × 36 pixels for the model output).
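As a concrete illustration of the patch preparation described above, the following sketch crops aligned low-resolution input and ground-truth patches with a sliding window, keeping only windows completely filled with data. It is a minimal sketch under the stated grid sizes (11 km inputs at 1000 m, 9 km targets at 250 m); the function name and the window stride are illustrative assumptions rather than the exact preprocessing code.

```python
import numpy as np

def extract_patches(bed_lr, bed_hr, step_km=9):
    """Crop aligned training patches: bed_lr is a 1000 m grid (e.g., BEDMAP2),
    bed_hr is the co-registered 250 m ground-truth grid for the same region."""
    lr_win, hr_win, pad_lr = 11, 36, 1   # 11 km input window, 9 km target window, 1 km border
    lr_patches, hr_patches = [], []
    for i in range(0, bed_lr.shape[0] - lr_win + 1, step_km):
        for j in range(0, bed_lr.shape[1] - lr_win + 1, step_km):
            lr = bed_lr[i:i + lr_win, j:j + lr_win]
            # the 9 km target window sits 1 km inside the 11 km input window
            hi, hj = (i + pad_lr) * 4, (j + pad_lr) * 4
            hr = bed_hr[hi:hi + hr_win, hj:hj + hr_win]
            if np.isnan(lr).any() or np.isnan(hr).any():
                continue                  # keep only patches completely filled with data
            lr_patches.append(lr)
            hr_patches.append(hr)
    return np.stack(lr_patches), np.stack(hr_patches)
```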

3.2. Model Design

Our model adopts the generative adversarial network framework [27], consisting of a generator G for generating the high-resolution (250 m) DEM and a discriminator D for judging the quality of the generated DEM. The two networks are trained against each other to generate high-resolution DEMs with realistic terrain: the generator tries to generate realistic DEMs to confuse the discriminator, and the discriminator learns to distinguish the generated DEMs from the real ones. To train the generative adversarial network, we use content, perceptual, and adversarial losses [29] to measure errors in elevation and topographical features, which are designed to optimize both the accuracy and the roughness of the generated topography. As in DeepBedMap, the classical image classification network VGG [44] is adopted as the discriminator to judge the realism of the generated DEMs. To improve the quality of the reconstructed DEMs, we design a new generator structure, which extracts the input information more effectively to recover terrain details and remove artifacts. Moreover, because the proposed model is trained on DEMs rather than the natural images typically used in image super-resolution, the physical constraints of the DEM act on the model during training, helping to keep the generated detail realistic. Details of the neural network training setup can be found in Appendix A.
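For clarity, the sketch below shows one way the combined generator objective described above can be assembled (PyTorch-style pseudocode; the loss weights and the simple non-relativistic adversarial term are assumptions for illustration, not the exact formulation used in training).

```python
import torch
import torch.nn.functional as F

def generator_loss(sr_dem, hr_dem, disc_on_fake, feat_sr, feat_hr,
                   w_content=1.0, w_perceptual=0.1, w_adversarial=1e-3):
    # content loss: per-pixel elevation error between generated and reference DEM
    content = F.l1_loss(sr_dem, hr_dem)
    # perceptual loss: distance between feature maps of a fixed feature extractor
    perceptual = F.l1_loss(feat_sr, feat_hr)
    # adversarial loss: encourage the discriminator to classify generated DEMs as real
    adversarial = F.binary_cross_entropy_with_logits(
        disc_on_fake, torch.ones_like(disc_on_fake))
    return w_content * content + w_perceptual * perceptual + w_adversarial * adversarial
```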
The generator model, whose structure is shown in Figure 1, consists of three modules: direct upsampling, multi-branch feature extraction, and upsampling with feature fusion. Direct upsampling is used to generate the large-scale (>10 km) topographic features, and the other modules capture small-scale topographic features. Unlike DeepBedMap, which captures large- and small-scale bed topographic features with a single network, our model uses separate branches to capture the large- and small-scale topographic features. To preserve the large-scale features of the input, bilinear interpolation is used to upsample the topography input by four times during direct upsampling. To avoid artifacts caused by premature input fusion, the multiple inputs are divided into two groups according to their characteristics and are fed to a double-branch network. One group of inputs consists of the low-resolution (1000 m) bed topography BEDMAP2 [1] and the high-resolution (100 m) surface elevation REMA [34], which are used to obtain the bed topographic features. The other group of inputs is used to obtain local features and consists of the gradient of BEDMAP2 (1000 m resolution), the MEaSUREs ice surface velocity (500 m resolution) [35], and snow accumulation (1000 m resolution) [36]. Owing to its superior performance in recovering image details, the backbone of the double-branch network is based on the ESRGAN structure, which consists of several stacked residual-in-residual dense blocks (RRDBs) [29]. The features extracted from the two branches are fused to generate the small-scale features of the bed topography in the fusion and upsampling module. The small-scale features from the network and the large-scale features obtained by the direct upsampling branch are added together to generate the super-resolution bed topography. We discuss the direct upsampling, multi-branch network, and upsampling with fusion modules below.

3.2.1. Direct Upsampling

Although the DeepBedMap model can generate the corresponding high-resolution bed topography with adequate bed roughness, the generated bed topography suffers from deviations from the actual bed elevation and contains unrealistic large-scale bed features in certain areas, due to the loss of large-scale topographic features from the input. Therefore, we design a global connection with an interpolation method to directly upsample the low-resolution bed topography input, which helps to preserve the large-scale features and ensures that the elevation values of the generated topography stay within the normal range. Since nearest-neighbor interpolation selects the nearest pixel value for each location to be interpolated, regardless of other pixels, its results generally have blocky artifacts (Figure 2c). Compared with nearest-neighbor interpolation, bilinear interpolation performs one linear interpolation in each of the two directions, taking more surrounding pixels into account, and its result (Figure 2b) is visibly better. Therefore, we use bilinear interpolation in the direct upsampling branch to preserve the large-scale topographic features. Combining these large-scale features with the small-scale topographic features generated by the multi-branch network reduces the elevation deviation in certain areas, producing a more realistic bed topography.
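The global connection itself amounts to a single fixed bilinear upsampling, as in the sketch below (PyTorch-style; an illustration of the idea rather than the authors' Chainer implementation).

```python
import torch.nn.functional as F

def direct_upsample(bed_lr):
    # bed_lr: (N, 1, H, W) low-resolution (1000 m) bed topography
    # fixed bilinear x4 upsampling preserves the large-scale (>10 km) shape of the input
    return F.interpolate(bed_lr, scale_factor=4, mode="bilinear", align_corners=False)

# the learned small-scale residual from the multi-branch network is added to this:
# bed_sr = direct_upsample(bed_lr) + small_scale_residual
```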

3.2.2. Multi-Branch Feature Extraction

The main goal of the generator is to generate high-resolution (250 m) bed topography based on the low-resolution (1000 m) input BEDMAP2 [1]. However, BEDMAP2 lacks sufficient information for this super-resolution task, so in related studies, ice surface information is generally added to compensate for this. For example, in DeepBedMap, ice surface elevation, ice surface velocity, and snow accumulation are added as inputs, and the generated results present more realistic terrain textures than those with BEDMAP2 input alone [17]. In addition to ice surface information, we consider that bed gradients can be used to better preserve terrain contour features. Therefore, we add bed gradients of BEDMAP2 as input, providing more information to the network model.
The outputs of DeepBedMap tend to pick up topographic features from the ice surface input in certain regions, which may not represent the true subglacial topography [17]. We think this may be because all inputs are fused before feature extraction, which allows features of the ice surface input to contaminate part of the extracted bed features. Therefore, we divide the inputs into two groups according to their characteristics, and design a double-branch network to extract the features of each group before they are fused, as shown in Figure 1.
Among the inputs, the bed elevation model BEDMAP2 and the high-resolution ice surface elevation model REMA both carry elevation information. As the low-resolution input, BEDMAP2 provides the large-scale (>10 km) shape of the topography and is the basis for the super-resolution task, while REMA can be regarded as a reference topography that provides high-frequency details to complement the low-resolution input. Therefore, BEDMAP2 and REMA are combined into one group and fed into one branch of the network to produce the bed topographic features.
The gradient map reveals the sharpness of local areas in the topography, helping the model better recover terrain contour features. Inspired by the mass conservation equation [16], the surface ice flow velocity and snow accumulation rate are chosen as model inputs. These surface observation inputs have a nonlinear relationship with the roughness of the bed topography [33], which means that they can theoretically be used to infer the shape of the bed [45]. Thus, the gradient of BEDMAP2, the MEaSUREs ice velocity, and the snow accumulation rate make up the other group of inputs, which are fed into the other branch of the network to obtain local features that implicitly reflect the roughness of the reconstructed area, so as to guide the high-resolution terrain generation.
In each branch, a convolution layer processes the input into tensors of the same shape to obtain the shallow features, which are fed into the deep feature extraction network after channel-wise concatenation. The core network block of ESRGAN is used in the deep feature extraction network, whose two branches are composed of 12 and 4 RRDBs, respectively. The shallow features are added to the deep features through a skip connection to obtain the deep features of each branch.
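The structure of one feature-extraction branch can be sketched as follows (PyTorch-style; feature widths, padding choices, and the RRDB implementation itself are assumptions, and the inputs are assumed to have been resampled to a common grid for simplicity, whereas the actual model matches each input convolution to its native resolution).

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One branch: per-input shallow convolutions, channel-wise concatenation,
    a stack of RRDBs, and a skip connection from shallow to deep features."""
    def __init__(self, in_channels_list, rrdb_block, n_feats=64, n_rrdb=12):
        super().__init__()
        # one convolution per input to map it into feature maps of a common shape
        self.heads = nn.ModuleList(
            nn.Conv2d(c, n_feats, kernel_size=3, padding=1) for c in in_channels_list)
        self.merge = nn.Conv2d(n_feats * len(in_channels_list), n_feats, 3, padding=1)
        # deep feature extraction: stacked residual-in-residual dense blocks
        self.body = nn.Sequential(*[rrdb_block(n_feats) for _ in range(n_rrdb)])

    def forward(self, inputs):
        shallow = self.merge(torch.cat([h(x) for h, x in zip(self.heads, inputs)], dim=1))
        deep = self.body(shallow)
        return shallow + deep  # skip connection from shallow to deep features

# elevation branch: 12 RRDBs over (BEDMAP2, REMA);
# local-feature branch: 4 RRDBs over (bed gradient, ice velocity, snow accumulation)
```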

3.2.3. Upsampling with Feature Fusion

After feature extraction, the bed topographic features and local features are concatenated channel-wise. A feature fusion module (Fusion in Figure 3), consisting of an RRDB block and a convolution layer, fuses the concatenated features.
The fused features are upsampled to the output resolution by the upsampling module (upsample in Figure 3), which includes two upsampling steps, each composed of nearest-neighbor upsampling, a convolution layer, and a LeakyReLU activation function, as shown in Figure 3. These are followed by two convolutional layers, the same as in DeepBedMap [17], which produce the final small-scale topographic features; these are added to the output of the direct upsampling branch to obtain the high-resolution terrain.
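A corresponding sketch of the fusion and upsampling stage is given below (PyTorch-style; an illustration of the structure in Figure 3, with layer widths as assumptions).

```python
import torch
import torch.nn as nn

class FusionUpsample(nn.Module):
    def __init__(self, rrdb_block, n_feats=64):
        super().__init__()
        # fuse the concatenated branch features with one RRDB and a convolution
        self.fusion = nn.Sequential(rrdb_block(2 * n_feats),
                                    nn.Conv2d(2 * n_feats, n_feats, 3, padding=1))
        # two x2 steps: nearest-neighbor upsampling -> convolution -> LeakyReLU
        def up_step():
            return nn.Sequential(nn.Upsample(scale_factor=2, mode="nearest"),
                                 nn.Conv2d(n_feats, n_feats, 3, padding=1),
                                 nn.LeakyReLU(0.2, inplace=True))
        self.upsample = nn.Sequential(up_step(), up_step())
        # two final convolutions producing the small-scale elevation residual
        self.tail = nn.Sequential(nn.Conv2d(n_feats, n_feats, 3, padding=1),
                                  nn.Conv2d(n_feats, 1, 3, padding=1))

    def forward(self, feats_elevation, feats_local, bed_lr_upsampled):
        fused = self.fusion(torch.cat([feats_elevation, feats_local], dim=1))
        residual = self.tail(self.upsample(fused))
        return bed_lr_upsampled + residual  # add the direct-upsampling branch output
```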

4. Results

4.1. Bed Topography

We generate a full-continent DEM of Antarctica at 250 m resolution using MB_DeepBedMap, which is a four-times upsampled bed topography of BEDMAP2 [1]. Specifically, we cut Antarctica into multiple 250 km × 250 km tile areas, and load the input data within each tile area into the MB_DeepBedMap model to generate the high-resolution (250 m) DEM for each tile. The generated tiles are then assembled to construct a high-resolution (250 m) DEM of the whole of Antarctica, that is, MB_DeepBedMap_DEM. In order to verify the effectiveness of the proposed model, we reproduce several figures (Figures 4 and 6–10) from DeepBedMap [17] and use the same color tables for comparison. The full Antarctic-wide DEM plots in Figure 4 show the bed topographies of BEDMAP2, DeepBedMap_DEM [17], and MB_DeepBedMap_DEM. It can be observed that both DeepBedMap and MB_DeepBedMap preserve the general topographical features of Antarctica. However, the bed elevation of DeepBedMap_DEM deviates severely from the input BEDMAP2 in regions where the bed elevation is greater than sea level (Figure 4, yellow box).
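The tile-and-mosaic inference procedure described above can be sketched as follows (the `model` and `load_inputs` helpers and the bookkeeping are hypothetical; only the 250 km tile size and 250 m output spacing come from the text).

```python
import numpy as np

TILE_KM, OUT_RES_M = 250, 250                  # 250 km tiles, 250 m output cells
OUT_PX = TILE_KM * 1000 // OUT_RES_M           # 1000 x 1000 output pixels per tile

def build_continent_dem(model, load_inputs, n_rows, n_cols):
    """Assemble per-tile MB_DeepBedMap outputs into one continent-wide 250 m mosaic."""
    mosaic = np.full((n_rows * OUT_PX, n_cols * OUT_PX), np.nan, dtype=np.float32)
    for r in range(n_rows):
        for c in range(n_cols):
            inputs = load_inputs(r, c)         # BEDMAP2, REMA, velocity, ... for this tile
            tile_dem = model(inputs)           # (OUT_PX, OUT_PX) array of 250 m bed elevation
            mosaic[r * OUT_PX:(r + 1) * OUT_PX, c * OUT_PX:(c + 1) * OUT_PX] = tile_dem
    return mosaic
```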
In order to further verify the effectiveness of the proposed model, we test the model outside the training area. We compare bed topographies produced by different methods beneath Thwaites Glacier, where high-resolution (250 m) ground-truth [43] is available, as shown in Figure 5. Grid cells without data points are filled with NaN (Not a Number) values and excluded from the calculations. Both DeepBedMap_DEM and MB_DeepBedMap_DEM show a realistic topography that is rougher than that of BEDMAP2; however, the small-scale topographic details in DeepBedMap_DEM are inconsistent with the ground-truth, while MB_DeepBedMap_DEM shows finer-scale (<10 km) bumps and troughs similar to those of the ground-truth (see Figure 5, red box). Moreover, we calculate the root mean square error (RMSE) between the generated topography and the ground-truth. The RMSE of MB_DeepBedMap_DEM is 79.43 m, which is less than DeepBedMap_DEM’s RMSE of 92.14 m. We also compare the generated topography over parts of Antarctica where high-resolution ground-truth is unavailable. Several areas with relatively dense radar data coverage are selected for comparison, including mountain areas with steep and rugged topography in East Antarctica, fast-flowing ice streams, and glaciers in coastal areas where ice loss has been rapid. Each modeling area is 200 km × 200 km with a 250 m grid cell resolution. Over high-elevation areas such as the Gamburtsev Subglacial Mountains and the Transantarctic Mountains, significant elevation deviations exist over most of DeepBedMap_DEM (Figure 6b and Figure 7b), which results in unnatural topographic features. Although the topographies (Figure 6b and Figure 7b) generated by DeepBedMap are rougher than those of BEDMAP2, DeepBedMap produces terrace features (T, Figure 6b) winding along the mountains and speckle texture features (S, Figure 7b) over steep mountain areas, and its general topographical features differ from those of BEDMAP2. In comparison, although speckle features (S, Figure 7c) also appear in the topography generated by MB_DeepBedMap, its topography appears more realistic while preserving the large-scale topographical features (Figure 6c and Figure 7c).
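The RMSE reported here is computed only over cells with valid ground-truth, with NaN-filled gaps excluded, as in the short sketch below.

```python
import numpy as np

def rmse(predicted, ground_truth):
    # ignore cells where either grid has no data (NaN-filled gaps)
    valid = ~np.isnan(predicted) & ~np.isnan(ground_truth)
    diff = predicted[valid] - ground_truth[valid]
    return float(np.sqrt(np.mean(diff ** 2)))
```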
Figure 8 shows the generated topographies of DeepBedMap_DEM (Figure 8a–c) and MB_DeepBedMap_DEM (Figure 8d–f) in Whillans Ice Stream, Evans Ice Stream, and Totten Glacier. It can be observed that both DeepBedMap and MB_DeepBedMap can generate a terrain with small-scale roughness. Beneath the fast-flowing ice streams and glaciers (Figure 8a–c), DeepBedMap produces ridges (R) aligned parallel and perpendicular to the ice flow direction. In relatively flat areas of Antarctica (Figure 8c), hummocky wave-like (W) topographical features appear in DeepBedMap_DEM, which resemble the ice surface features at the same locations when DeepBedMap’s bed topography (Figure 8a–c) is compared with the ice surface topography (Figure 8g–i). By contrast, the bed topographies in MB_DeepBedMap_DEM (Figure 8d–f) show more realistic topographical features, with small-scale bumps and troughs that provide adequate roughness.

4.2. Bed Roughness

To verify the performance of MB_DeepBedMap in terms of roughness, we compare the roughness of DeepBedMap_DEM, MB_DeepBedMap_DEM, and BedMachine Antarctica [2] with ground-truth grids from processed radar track data [42], using the same measure of roughness as DeepBedMap, which takes the standard deviation of a 5 × 5 elevation neighborhood as the roughness of the central cell [17]. As shown in Figure 9, both DeepBedMap_DEM and MB_DeepBedMap_DEM show a rough bed topography with small-scale topographic features beneath Thwaites Glacier. However, there are hummocky wave-like patterns in DeepBedMap_DEM (Figure 9b). The 2D roughness view of DeepBedMap (Figure 9e) is denser than the ground-truth grids (Figure 9c), owing to the dense, wave-like artifacts (especially toward the coastal region on the left), whereas BedMachine Antarctica (Figure 9f) shows sparser roughness patterns, and MB_DeepBedMap_DEM (Figure 9d) has a roughness distribution more similar to the ground-truth.
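The roughness measure used here can be written compactly as a moving-window standard deviation, for example (a sketch; the boundary handling is an assumption):

```python
import numpy as np
from scipy.ndimage import generic_filter

def roughness(dem, window=5):
    # standard deviation of each 5 x 5 elevation neighborhood,
    # assigned to the central cell
    return generic_filter(dem, np.std, size=window, mode="nearest")
```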
We now present the 1D transect over the different grid products to compare the bed elevation and roughness along the radar track (Figure 9a, orange line) from the coastal region toward the interior of Thwaites Glacier, which has been densely monitored. To better show the differences in bed elevation and roughness, we resample the other grid products onto the same 250 m resolution grid as ours using bicubic interpolation before extracting the bed elevation and roughness. As shown in Figure 10, all four elevation profiles show the same trend from the coast to the inland area. Both DeepBedMap_DEM and MB_DeepBedMap_DEM have small-scale bumps and troughs similar to the ground-truth, but the noisy elevation of DeepBedMap deviates from the ground-truth. In contrast, BedMachine Antarctica shows a relatively smooth elevation profile. From the roughness comparison, it can be seen that the roughness of DeepBedMap_DEM exceeds the ground-truth in the coastal region. The roughness of BedMachine is always lower than the ground-truth, rarely exceeding 20 m, whereas that of MB_DeepBedMap_DEM is in better agreement. Furthermore, we calculate the mean absolute error (MAE) of roughness along the radar trace. The MAE of MB_DeepBedMap_DEM is 14.98 m, whereas the MAEs of DeepBedMap_DEM and BedMachine are 16.54 m and 17.17 m, respectively.
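A sketch of this along-track comparison is given below: each grid is resampled to 250 m spacing, sampled at the radar-track positions, and compared with the ground-truth using the MAE (the cubic-spline resampling as a stand-in for bicubic, and the assumption that track coordinates are given in 250 m pixel units, are ours).

```python
import numpy as np
from scipy.ndimage import zoom, map_coordinates

def mae_along_track(grid, grid_res_m, track_rows, track_cols, truth_values):
    # resample the grid to 250 m spacing (cubic spline as a stand-in for bicubic)
    resampled = zoom(grid, grid_res_m / 250.0, order=3)
    # sample the resampled grid at the radar-track positions (pixel coordinates)
    sampled = map_coordinates(resampled, [track_rows, track_cols], order=1)
    return float(np.mean(np.abs(sampled - truth_values)))
```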

4.3. Model Generalization

The super-resolution model trained on BEDMAP2 can also be used to improve the resolution of other bed elevation grid products, such as the 500 m resolution BedMachine Antarctica. To further verify the effectiveness of the proposed model, we conduct an extended experiment to test the performance of DeepBedMap and MB_DeepBedMap applied to BedMachine Antarctica, which combines the deep learning model with the mass conservation constraints embedded in BedMachine. Specifically, because both models are trained to learn the mapping from terrain with a resolution of 1000 m to terrain with a resolution of 250 m, we downsample BedMachine Antarctica to the same resolution (1000 m) as BEDMAP2, which is then super-resolved into 250 m resolution grids by MB_DeepBedMap and DeepBedMap. We compare the bed topography and the data along flight lines using the same methods as in Section 4.1 and Section 4.2 over several areas with relatively dense radar data coverage, including Marie Byrd Land in the coastal areas of West Antarctica, and Dome C and Dome F with steep and rugged topography in East Antarctica. As before, each modeling area is 200 km × 200 km with a 250 m grid cell resolution.
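The downsampling step in this experiment can be as simple as a 2 × 2 block average from 500 m to 1000 m, as sketched below (the exact downsampling operator is not specified in the text, so the block mean is an assumption).

```python
import numpy as np

def downsample_500m_to_1km(bedmachine_500m):
    """Block-mean a 500 m grid down to the 1000 m training resolution."""
    h, w = bedmachine_500m.shape
    trimmed = bedmachine_500m[:h - h % 2, :w - w % 2]        # drop odd edge rows/cols
    blocks = trimmed.reshape(trimmed.shape[0] // 2, 2, trimmed.shape[1] // 2, 2)
    return blocks.mean(axis=(1, 3))                          # 2 x 2 block mean
```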
As shown in Figure 11, DeepBedMap generates unrealistic topographic features and a higher bed elevation than the actual elevation in the Dome C (Figure 11) and Dome F (Figure 11) areas, where the bed elevation is above sea level. In comparison, the topographies generated by MB_DeepBedMap (Figure 11f) show large-scale features such as mountains and valleys similar to those of BedMachine Antarctica (Figure 11d), together with the small-scale terrain details that the input lacks. Focusing on Marie Byrd Land, where the bed elevation is below sea level, both DeepBedMap (Figure 11) and MB_DeepBedMap (Figure 11) generate a rougher bed topography than BedMachine Antarctica (Figure 11), with small-scale topographic features. However, the small-scale topographic features generated by DeepBedMap are similar to the hummocky wave-like patterns mentioned in Section 4.1, whereas the topography generated by MB_DeepBedMap shows relatively realistic terrain details. In general, with the addition of mass conservation constraints to the model inputs, the topographies generated by MB_DeepBedMap have large-scale features similar to those of the mass conservation topographies, but they are rougher and show more detail than BedMachine. Comparing the data along the radar trace in Dome C (Figure 12) and Dome F (Figure 13), the elevation profile of MB_DeepBedMap is in better agreement with the ground-truth than that of DeepBedMap, whose elevation error approaches 1000 m. The roughness profile of MB_DeepBedMap shows roughness values closer to the ground-truth than that of BedMachine Antarctica: the mean absolute error (MAE) between the roughness profile of MB_DeepBedMap and the ground-truth in Dome C and Dome F is 18.97 m and 13.79 m, respectively, while that of BedMachine Antarctica is 21.35 m and 15.33 m, respectively. Moreover, in the comparison of data along the radar trace over Marie Byrd Land, a similar general trend is shown by all four elevation profiles (Figure 14a). Although these profiles show similar roughness (Figure 14b), the MAE between the roughness profile of MB_DeepBedMap and the ground-truth is 9.87 m, whereas those of DeepBedMap and BedMachine Antarctica are 10.93 m and 10.37 m, respectively.

4.4. Extended Experiments

To further verify the effectiveness of the multi-branch network structure, we also test the performance of a single-branch variant of the model (SB_DeepBedMap), in which all inputs (i.e., the low-resolution bedmap, ice surface elevation, ice velocity, snow accumulation, and bed gradient) are fed into the network without being divided into two groups. Compared with DeepBedMap, SB_DeepBedMap has the additional bed gradient as a model input and a direct upsampling branch (see Section 3.2.1) that adds the upsampled bed topography to the output of the last convolutional layer. We then compare the bed topographies produced by the different methods beneath Thwaites Glacier, where high-resolution (250 m) ground-truth is available, as shown in Figure 15. Although small-scale topographical details can be found in the result generated by the single-branch model, these details deviate more from the ground-truth than those of MB_DeepBedMap. Moreover, we calculate the RMSE between the generated topography and the ground-truth. The RMSE of DeepBedMap, SB_DeepBedMap, and MB_DeepBedMap is 92.14 m, 85.75 m, and 79.43 m, respectively. In general, the performance of MB_DeepBedMap is better than that of the other two models. Although adding the bed gradient helps the topography reconstruction, the multi-branch network structure is the key factor in achieving a higher-precision simulation. This is because the multi-branch structure groups the input data before extracting deep features, which better preserves the original characteristics of the different inputs. Therefore, in future work aimed at depicting more refined terrain, the design of the model structure deserves more attention than adding further factors related to the prediction target.

5. Discussion

5.1. Bed Features

The results in Section 4.1 show that MB_DeepBedMap can generate a realistic bed topography with small-scale features similar to the training data. Over the regions where the bed elevation is above sea level, compared with DeepBedMap, whose topographies are unnatural and contain higher bed elevations than the BEDMAP2 input, MB_DeepBedMap presents better large-scale topographic features because it focuses on learning the difference between the ground-truth and the results of the direct upsampling branch, and therefore generates a high-resolution terrain that retains the large-scale topographical features of the input. This means that MB_DeepBedMap can, to a certain extent, mitigate the influence of the lack of training data above sea level, which is the main reason for the poor performance of DeepBedMap in these areas. Over steep mountains, MB_DeepBedMap generates speckle features like DeepBedMap, and in a 1D elevation profile the speckle features appear as large-amplitude elevation fluctuations. We consider that, where training data are lacking, the amplitude of the fluctuation is positively related to the gradient of the topography: the larger the gradient, the greater the amplitude of the fluctuation, leading to the speckle features. This artifact could be eliminated by adding training data in high-gradient areas.
Over the areas where the bed elevation is below sea level, DeepBedMap produces some unrealistic topography similar to the ice surface topography, whereas MB_DeepBedMap generates more realistic topographic features. For example, ridges (R, Figure 8a–c) parallel and perpendicular to the flow direction are found along the fast-flowing glaciers and ice streams in DeepBedMap_DEM, which are similar to the imprints of crevasses or flow stripes observable on the ice surface [17]. Moreover, fast-flowing ice is more likely to erode such ridges, resulting in relatively smooth terrain or lineated features aligned with the direction of the ice flow [34]. The results generated by MB_DeepBedMap (Figure 8d–f) show morphological similarities with the topographic features mentioned above. Hummocky wave-like (W, Figure 8c) patterns produced by DeepBedMap are found in relatively flat areas and are likely inherited from surface megadune structures [17]. These features are very similar to the ice surface features (Figure 8g–i) at the same locations, perhaps because the premature fusion of the multiple data inputs at the input stage allows other inputs to mask some features of BEDMAP2, causing the model to overfit to the ice surface elevation input. By grouping the inputs, the proposed multi-branch network avoids this premature fusion, which helps to remove artifacts and restore terrain details.

5.2. Roughness

MB_DeepBedMap_DEM shows high roughness values that match the ground-truth better than BedMachine Antarctica. The roughness values produced by MB_DeepBedMap_DEM are generally lower than those of DeepBedMap_DEM, but they are closer to the ground-truth, especially near coastal areas. Along the transect line, the elevation profiles of both DeepBedMap_DEM and MB_DeepBedMap_DEM show small-scale elevation fluctuations that correlate well with the ground-truth, but the elevation error between MB_DeepBedMap_DEM and the ground-truth is lower than that of DeepBedMap_DEM, whose amplitude of fluctuation is relatively large. By comparison, BedMachine Antarctica presents a relatively smooth elevation curve over much of the transect, lacking small-scale bumps and troughs.
The BedMachine Antarctica grid product generally shows lower roughness values along the transect line due to the regularization term in the mass conservation method. In contrast, both MB_DeepBedMap and DeepBedMap generate a rough bed topography, with small-scale roughness similar to the ground-truth. However, in coastal regions where the roughness is relatively high, the spatial distribution of roughness generated by MB_DeepBedMap is similar to the ground-truth, whereas that of DeepBedMap is denser because of the hummocky wave-like patterns. As shown in Figure 9c and Figure 10b, the spatial density of radar data decreases gradually from left to right, as does the accuracy of the model prediction. This is because the radar data density directly affects the accuracy of the reference bedmaps (i.e., BEDMAP2 and BedMachine) used as model inputs. As the density of radar detection data increases, the uncertainty of the reference bedmaps will be reduced, and the estimation accuracy of the proposed model will improve further. We note that MB_DeepBedMap_DEM shows relatively low roughness in certain areas (see Figure 10, from −1400 to −1300 km on the x-axis). This may be improved by adding a roughness error term between the predicted and ground-truth DEMs to the loss function to steer the model toward the desired topography.

5.3. Model Generalization

In Section 4.3, we show that MB_DeepBedMap trained on BEDMAP2 can be applied to other bed elevation grid products to generate realistic rough terrain with high spatial resolution. Over the vicinity of Marie Byrd Land, whose spatial statistics are similar to those of the training areas, both DeepBedMap and MB_DeepBedMap generate bed elevation and roughness values close to the ground-truth along the transect line (Figure 14). In the 2D view, the bed topography generated by DeepBedMap is rougher than that of BedMachine Antarctica, but the small-scale topographic features are not properly reproduced even though the large-scale patterns are correctly generated. By comparison, the bed topography generated by MB_DeepBedMap shows more realistic patterns, and the generated small-scale features appear visually close to those of the training data. Over the vicinity of Dome C and Dome F, where most of the bed is above sea level, the small-scale bumps and troughs in our grid products are similar to the ground-truth along the transect lines (see Figure 12, from 1240 to 1260 km on the x-axis, and Figure 13, from 850 to 890 km on the x-axis). In contrast, the DeepBedMap grid products show large-scale bumps and troughs and significant elevation deviations over much of the transect, which would bias the estimation of ice thickness. Focusing on the 2D views of the bed topographies in the vicinity of Dome C and Dome F, MB_DeepBedMap shows more realistic topographies than DeepBedMap, whose topographic features look like artifacts. Moreover, low sampling density means less radar measurement data, which lowers the fidelity of the input topography and in turn affects the quality of the topography generated by MB_DeepBedMap in such areas. However, the MB_DeepBedMap model can mine the semantic information and relationships in the multiple remote sensing inputs through deep learning, thus maintaining fidelity in areas with low sampling density to some extent. As shown in the upper right corner of Figure 11d–f, MB_DeepBedMap is able to generate more realistic high-resolution topography than the input bedmap in an area with low sampling density. The topography in this region is smoother than the surrounding topography in the reference bedmap (BedMachine) due to the lack of radar data. However, because a deep learning model with convolutional operations is good at recognizing spatial patterns, the MB_DeepBedMap model generates reasonable topographical features similar to those in the surrounding area. Therefore, owing to its good generalization performance, MB_DeepBedMap can also be used to improve the resolution of newer bed elevation grid products.

5.4. Limitations and Future Work

MB_DeepBedMap uses only a small part of Antarctica as the training dataset, and most of the training data are located in coastal areas (see Figure 4a). This is an exceedingly small amount of data for the whole Antarctic continent for the super-resolution task. Therefore, the topographic features of the high-resolution topography generated by MB_DeepBedMap will resemble the training data, and the model output in areas dissimilar to the training areas (such as mountain areas) may not recover the true small-scale terrain details. Because the convolutional neural network model works on 2D gridded data, sparse radar point measurements cannot be used directly to train the model or constrain its output, but they can be used to build a large-scale DEM, such as BEDMAP2, as the model input. Based on the large-scale DEM input, the MB_DeepBedMap model focuses on generating the small-scale topographical features important for sub-kilometer roughness. Therefore, the uncertainty of the large-scale DEM input is propagated into the generated high-resolution DEM, which directly affects the performance of the proposed model. To address these issues, more radar measurements are needed to improve the low-resolution topography inputs (e.g., BEDMAP2 and BedMachine), resulting in more accurate kilometer-scale bedmaps. Moreover, the spatial coverage of the data and the methods used to construct the topography differ between areas throughout the BEDMAP2 and BedMachine products. Thus, the statistical relationships between the input topography and the other variables would differ throughout Antarctica, which could be a source of bias in the super-resolution model. In future work, separate super-resolution models could be trained for the different types of regions in Antarctica.
There are two potential improvements to this model. We need to: (1) obtain more high-resolution ground-truth training data and (2) improve our model with advances in the fields of ice-sheet modeling and deep learning. Radar detection technology can be used to increase the coverage of bed elevation measurements, which can not only increase the high-resolution training data but also improve the low-resolution topography input to provide more accurate large-scale features. In addition, other subaerial digital elevation models with high-resolution topography can be used to increase the amount of training data. The current model requires not only paired topography for training but also the corresponding ice surface information. To use other subaerial topographies without ice cover as training data, we could design two independent neural network models. One would be trained on the paired topography to learn the mapping relationship from low- to high-resolution topography. The other, optional model would be used to refine the previous super-resolution results, combined with ice surface observations when available. From a physical point of view, the model does not adequately account for sliding/friction [46,47] at the bed or the transmission [48,49] of bed shape into surface features. In future work, the MB_DeepBedMap model could take the basal sliding coefficient as an input to further improve performance.

6. Conclusions

We developed a method based on a convolutional neural network to resolve the bed topography of Antarctica, which improves the spatial resolution of bed topography produced by interpolation or inverse methods, such as kriging and mass conservation, and recovers realistic topographic features with small-scale roughness. Our method builds on the deep learning method proposed by Leong et al. but adopts a more effective network structure. Unlike the DeepBedMap model, which fuses the different inputs at the input stage, the proposed multi-branch model structure extracts the input features more effectively and reduces the artifacts caused by the premature fusion of multiple data inputs. With the direct upsampling results providing large-scale topographic features such as valleys and ridges, the proposed method can be applied to more types of regions than DeepBedMap.
We tested the performance of the proposed model by applying it to BEDMAP2 and BedMachine Antarctica to generate high-resolution bed topographies. Compared with the smooth bed topographies of BEDMAP2 and BedMachine Antarctica, those generated by MB_DeepBedMap are rougher and have roughness closer to the ground-truth. In addition, the proposed model generates more realistic topographical features than DeepBedMap under the same training dataset. Considering its realistic topography and small-scale roughness similar to the ground-truth, we believe that the proposed method may potentially be used wherever a high-resolution bed elevation model is required.

Author Contributions

Conceptualization, Y.C., S.L., X.C. and F.W.; Methodology, Y.C. and F.W.; Validation, F.W.; Formal analysis, F.W. and Z.Y.; writing—original draft preparation, Y.C. and F.W.; writing—review and editing, Y.C., S.L., X.C., F.W. and Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shanghai Science and Technology Development Funds (No. 21ZR1469700) and the National Natural Science Foundation of China (No. 41776186).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Deep Neural Network Training Details

We use Chainer to implement the proposed model and carry out all experiments on a single RTX 2080 Ti graphics card. Of a total of 4028 image patches, 3826 are used for training with a batch size of 128, and 202 are reserved for validation. During training, in order to check for overfitting, we use two evaluation metrics on the validation dataset to evaluate the generative adversarial network: an accuracy metric for the discriminator and a peak signal-to-noise ratio (PSNR) metric for the generator. When these two metrics show little improvement, we stop the training, at roughly 190 epochs. In addition, we compare the grid output predicted by the model with the actual ground-truth elevation points on an independent test dataset containing 39,640 data points. Specifically, we use the point-to-point elevation error to calculate the root mean square error (RMSE) on this test dataset. This RMSE value is used to select candidate models and is also the metric minimized by the hyperparameter optimization algorithm.
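For reference, the PSNR validation metric mentioned above takes the usual form below (the data range used for elevation grids is an assumption; for 8-bit images it would be 255).

```python
import numpy as np

def psnr(predicted, ground_truth, data_range):
    # peak signal-to-noise ratio in decibels
    mse = np.mean((predicted - ground_truth) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```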
Many hyperparameters need to be set in neural networks. To choose appropriate hyperparameter values and obtain better model performance, following DeepBedMap, we use the Tree-structured Parzen Estimator [50] from the Optuna v2.0.0 library [51] to tune the hyperparameters (see Table A1). We run 60 experiments to scan the hyperparameter space and then select the top five models from these experiments, determining the final model by visual evaluation.
Table A1. Optimized hyperparameter settings.
Hyperparameter | Setting | Tuning Range
Learning rate | 1.7 × 10⁻⁴ | 2 × 10⁻⁴ to 1 × 10⁻⁴
Mini-batch size | 128 | 64 or 128
Number of epochs | 190 | 100 to 200
Residual scaling | 0.2 | 0.1 to 0.5
Adam optimizer epsilon | 0.1 | Fixed
Adam optimizer beta1 | 0.9 | Fixed
Adam optimizer beta2 | 0.99 | Fixed
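A minimal sketch of the Optuna search loop is shown below; the search ranges follow Table A1, while `train_and_evaluate` is a hypothetical helper that trains the model with the sampled settings and returns the test-set RMSE.

```python
import optuna

def objective(trial):
    params = {
        "learning_rate": trial.suggest_float("learning_rate", 1e-4, 2e-4, log=True),
        "batch_size": trial.suggest_categorical("batch_size", [64, 128]),
        "num_epochs": trial.suggest_int("num_epochs", 100, 200),
        "residual_scaling": trial.suggest_float("residual_scaling", 0.1, 0.5),
    }
    return train_and_evaluate(**params)   # RMSE on the independent test points

study = optuna.create_study(sampler=optuna.samplers.TPESampler(), direction="minimize")
study.optimize(objective, n_trials=60)
```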

References

  1. Fretwell, P.; Pritchard, H.D.; Vaughan, D.G.; Bamber, J.L.; Barrand, N.E.; Bell, R.; Bianchi, C.; Bingham, R.G.; Blankenship, D.D.; Casassa, G.; et al. Bedmap2: Improved Ice Bed, Surface and Thickness Datasets for Antarctica. Cryosphere 2013, 7, 375–393. [Google Scholar] [CrossRef] [Green Version]
  2. Morlighem, M.; Rignot, E.; Binder, T.; Blankenship, D.; Drews, R.; Eagles, G.; Eisen, O.; Ferraccioli, F.; Forsberg, R.; Fretwell, P.; et al. Deep Glacial Troughs and Stabilizing Ridges Unveiled beneath the Margins of the Antarctic Ice Sheet. Nat. Geosci. 2019, 13, 132–137. [Google Scholar] [CrossRef] [Green Version]
  3. Seroussi, H.; Nakayama, Y.; Larour, E.; Menemenlis, D.; Morlighem, M.; Rignot, E.; Khazendar, A. Continued retreat of Thwaites Glacier, West Antarctica, controlled by bed topography and ocean circulation. Geophys. Res. Lett. 2017, 44, 6191–6199. [Google Scholar] [CrossRef]
  4. Rignot, E.; Mouginot, J.; Scheuchl, B.; Van Den Broeke, M.; Van Wessem, M.J.; Morlighem, M. Four decades of Antarctic Ice Sheet mass balance from 1979–2017. Proc. Natl. Acad. Sci. USA 2019, 116, 1095–1103. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Joughin, I.; Smith, B.E.; Medley, B. Marine ice sheet collapse potentially under way for the Thwaites Glacier Basin, West Antarctica. Science 2014, 344, 735–738. [Google Scholar] [CrossRef] [PubMed]
  6. Lythe, M.B.; Vaughan, D.G. BEDMAP: A New Ice Thickness and Subglacial Topographic Model of Antarctica. J. Geophys. Res. Solid Earth 2001, 106, 11335–11351. [Google Scholar] [CrossRef] [Green Version]
  7. Le Brocq, A.M.; Payne, A.J.; Vieli, A. An Improved Antarctic Dataset for High Resolution Numerical Ice Sheet Models (ALBMAP V1). Earth Syst. Sci. Data 2010, 2, 247–260. [Google Scholar] [CrossRef] [Green Version]
  8. Cui, X.; Jeofry, H.; Greenbaum, J.S.; Guo, J.; Li, L.; Lindzey, L.E.; Habbal, F.A.; Wei, W.; Young, D.A.; Ross, N.; et al. Bed topography of Princess Elizabeth Land in East Antarctica. Earth Syst. Sci. Data 2020, 12, 2765–2774. [Google Scholar] [CrossRef]
  9. Gasson, E.; DeConto, R.M.; Pollard, D.; Levy, R.H. Dynamic Antarctic ice sheet during the early to mid-Miocene. Proc. Natl. Acad. Sci. USA 2016, 113, 3459–3464. [Google Scholar] [CrossRef] [Green Version]
  10. Goff, J.A.; Powell, E.M.; Young, D.A.; Blankenship, D.D. Conditional Simulation of Thwaites Glacier (Antarctica) Bed Topography for Flow Models: Incorporating Inhomogeneous Statistics and Channelized Morphology. J. Glaciol. 2014, 60, 635–646. [Google Scholar] [CrossRef] [Green Version]
  11. Graham, F.S.; Roberts, J.L.; Galton-Fenzi, B.K.; Young, D.; Blankenship, D.; Siegert, M.J. A High-Resolution Synthetic Bed Elevation Grid of the Antarctic Continent. Earth Syst. Sci. Data 2017, 9, 267–279. [Google Scholar] [CrossRef] [Green Version]
  12. Graham, F.S.; Roberts, J.L.; Galton-Fenzi, B.K.; Young, D.; Blankenship, D.; Siegert, M.J. HRES—Synthetic High-Resolution Antarctic Bed Elevation, Ver. 2, Australian Antarctic Data Centre. 2021. Available online: https://data.aad.gov.au/metadata/AAS_3013_4077_4346_Ant_synthetic_bed_elevation_2016 (accessed on 27 February 2023).
  13. van Pelt, W.J.J.; Oerlemans, J.; Reijmer, C.H.; Pettersson, R.; Pohjola, V.A.; Isaksson, E.; Divine, D. An Iterative Inverse Method to Estimate Basal Topography and Initialize Ice Flow Models. Cryosphere 2013, 7, 987–1006. [Google Scholar] [CrossRef] [Green Version]
  14. Farinotti, D.; Brinkerhoff, D.J.; Clarke, G.K.C.; Fürst, J.J.; Frey, H.; Gantayat, P.; Gillet-Chaulet, F.; Girard, C.; Huss, M.; Leclercq, P.W.; et al. How Accurate Are Estimates of Glacier Ice Thickness? Results from ITMIX, the Ice Thickness Models Intercomparison eXperiment. Cryosphere 2017, 11, 949–970. [Google Scholar] [CrossRef] [Green Version]
  15. Morlighem, M.; Williams, C.N.; Rignot, E.; An, L.; Arndt, J.E.; Bamber, J.L.; Catania, G.; Chauché, N.; Dowdeswell, J.A.; Dorschel, B.; et al. BedMachine v3: Complete Bed Topography and Ocean Bathymetry Mapping of Greenland From Multibeam Echo Sounding Combined With Mass Conservation. Geophys. Res. Lett. 2017, 44, 11051–11061. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Morlighem, M.; Rignot, E.; Seroussi, H.; Larour, E.; Ben Dhia, H.; Aubry, D. A Mass Conservation Approach for Mapping Glacier Ice Thickness. Geophys. Res. Lett. 2011, 38. [Google Scholar] [CrossRef] [Green Version]
  17. Leong, W.J.; Horgan, H.J. DeepBedMap: A deep neural network for resolving the bed topography of Antarctica. Cryosphere 2020, 14, 3687–3705. [Google Scholar] [CrossRef]
  18. Huang, T.S. Multiframe Image Restoration and Registration. Comput. Vis. Image Process. 1984, 1, 317–339. [Google Scholar]
  19. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  20. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a Deep Convolutional Network for Image Super-Resolution. In Computer Vision—ECCV 2014, Proceedings of the 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Springer International Publishing: Cham, Switzerland, 2014; pp. 184–199. [Google Scholar] [CrossRef]
  21. Yang, W.; Zhang, X.; Tian, Y.; Wang, W.; Xue, J.H.; Liao, Q. Deep Learning for Single Image Super-Resolution: A Brief Review. IEEE Trans. Multimed. 2019, 21, 3106–3121. [Google Scholar] [CrossRef] [Green Version]
  22. Kim, J.; Lee, J.K.; Lee, K.M. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654. [Google Scholar] [CrossRef] [Green Version]
  23. Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 1132–1140. [Google Scholar] [CrossRef] [Green Version]
  24. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Computer Vision—ECCV 2018, Proceedings of the 15th European Conference, Munich, Germany, 8–14 September 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 294–310. [Google Scholar]
  25. Zhang, W.; Liu, Y.; Dong, C.; Qiao, Y. RankSRGAN: Generative Adversarial Networks With Ranker for Image Super-Resolution. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3096–3105. [Google Scholar] [CrossRef] [Green Version]
  26. Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual Losses for Real-Time Style Transfer and Super-Resolution. In Computer Vision—ECCV 2016, Proceedings of the 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 694–711. [Google Scholar]
  27. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. arXiv 2014, arXiv:1406.2661. [Google Scholar] [CrossRef]
  28. Ledig, C.; Theis, L.; Huszar, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 105–114. [Google Scholar] [CrossRef] [Green Version]
  29. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Loy, C.C. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. In Computer Vision—ECCV 2018, Proceedings of the ECCV 2018, Munich, Germany, 8–14 September 2018; Leal-Taixé, L., Roth, S., Eds.; Springer International Publishing: Cham, Switzerland, 2019; Volume 11133, pp. 63–79. [Google Scholar] [CrossRef] [Green Version]
  30. Xu, Z.; Wang, X.; Chen, Z.; Xiong, D.; Ding, M.; Hou, W. Nonlocal Similarity Based DEM Super Resolution. ISPRS J. Photogramm. Remote Sens. 2015, 110, 48–54. [Google Scholar] [CrossRef]
  31. Chen, Z.; Wang, X.; Xu, Z.; Hou, W. Convolutional Neural Network Based Dem Super Resolution. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B3, 247–250. [Google Scholar] [CrossRef] [Green Version]
  32. Xu, Z.; Chen, Z.; Yi, W.; Gui, Q.; Hou, W.; Ding, M. Deep gradient prior network for DEM super-resolution: Transfer learning from image to DEM. ISPRS J. Photogramm. Remote Sens. 2019, 150, 80–90. [Google Scholar] [CrossRef]
  33. Raymond, M.J.; Gudmundsson, G.H. On the Relationship between Surface and Basal Properties on Glaciers, Ice Sheets, and Ice Streams. J. Geophys. Res. Solid Earth 2005, 110, B08411. [Google Scholar] [CrossRef] [Green Version]
  34. Howat, I.M.; Porter, C.; Smith, B.E.; Noh, M.J.; Morin, P. The Reference Elevation Model of Antarctica. Cryosphere 2019, 13, 665–674. [Google Scholar] [CrossRef] [Green Version]
  35. Mouginot, J.; Rignot, E.; Scheuchl, B. Continent-Wide, Interferometric SAR Phase, Mapping of Antarctic Ice Velocity. Geophys. Res. Lett. 2019, 46, 9710–9718. [Google Scholar] [CrossRef]
  36. Arthern, R.J.; Winebrenner, D.P.; Vaughan, D.G. Antarctic Snow Accumulation Mapped Using Polarization of 4.3-Cm Wavelength Microwave Emission. J. Geophys. Res. 2006, 111, D06107. [Google Scholar] [CrossRef] [Green Version]
  37. Wessel, P.; Luis, J.; Uieda, L.; Scharroo, R.; Wobbe, F.; Smith, W.; Tian, D. The Generic Mapping Tools Version 6. Geochem. Geophys. Geosyst. 2019, 20, 5556–5564. [Google Scholar] [CrossRef] [Green Version]
  38. Bingham, R.G.; Vaughan, D.G.; King, E.C.; Davies, D.; Cornford, S.L.; Smith, A.M.; Arthern, R.J.; Brisbourne, A.M.; De Rydt, J.; Graham, A.G.C.; et al. Diverse Landscapes beneath Pine Island Glacier Influence Ice Flow. Nat. Commun. 2017, 8, 1618. [Google Scholar] [CrossRef] [Green Version]
  39. Jordan, T.A.; Ferraccioli, F.; Corr, H.; Graham, A.; Armadillo, E.; Bozzo, E. Hypothesis for Mega-Outburst Flooding from a Palaeo-Subglacial Lake beneath the East Antarctic Ice Sheet: Antarctic Palaeo-Outburst Floods and Subglacial Lake. Terra Nova 2010, 22, 283–289. [Google Scholar] [CrossRef]
  40. King, E.C. Ice Stream or Not? Radio-Echo Sounding of Carlson Inlet, West Antarctica. Cryosphere 2011, 5, 907–916. [Google Scholar] [CrossRef] [Green Version]
  41. King, E.C.; Pritchard, H.D.; Smith, A.M. Subglacial Landforms beneath Rutford Ice Stream, Antarctica: Detailed Bed Topography from Ice-Penetrating Radar. Earth Syst. Sci. Data 2016, 8, 151–158. [Google Scholar] [CrossRef] [Green Version]
  42. Shi, L.; Allen, C.T.; Ledford, J.R.; Rodriguez-Morales, F.; Blake, W.A.; Panzer, B.G.; Prokopiack, S.C.; Leuschen, C.J.; Gogineni, S. Multichannel Coherent Radar Depth Sounder for NASA Operation Ice Bridge. In Proceedings of the 2010 IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 25–30 July 2010; pp. 1729–1732. [Google Scholar] [CrossRef] [Green Version]
  43. Holschuh, N.; Christianson, K.; Paden, J.; Alley, R.; Anandakrishnan, S. Linking postglacial landscapes to glacier dynamics using swath radar at Thwaites Glacier, Antarctica. Geology 2020, 48, 268–272. [Google Scholar] [CrossRef]
  44. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  45. Farinotti, D.; Huss, M.; Bauder, A.; Funk, M.; Truffer, M. A method to estimate the ice volume and ice-thickness distribution of alpine glaciers. J. Glaciol. 2009, 55, 422–430. [Google Scholar] [CrossRef] [Green Version]
  46. Gudmundsson, G.H. Transmission of basal variability to a glacier surface. J. Geophys. Res. Solid Earth 2003, 108, 2253. [Google Scholar] [CrossRef]
  47. Gudmundsson, G.H. Analytical solutions for the surface response to small amplitude perturbations in boundary data in the shallow-ice-stream approximation. Cryosphere 2008, 2, 77–93. [Google Scholar] [CrossRef] [Green Version]
  48. Bahr, D.B.; Pfeffer, W.T.; Kaser, G. Glacier volume estimation as an ill-posed inversion. J. Glaciol. 2014, 60, 922–934. [Google Scholar] [CrossRef] [Green Version]
  49. Monnier, J.; des Boscs, P. Inference of the bottom properties in shallow ice approximation models. Inverse Probl. 2017, 33, 115001. [Google Scholar] [CrossRef] [Green Version]
  50. Bergstra, J.; Bardenet, R.; Bengio, Y.; Kégl, B. Algorithms for Hyper-Parameter Optimization. In Advances in Neural Information Processing Systems, Proceedings of the 24th International Conference on Neural Information Processing Systems, Granada, Spain, 12–15 December 2011; Curran Associates Inc.: Red Hook, NY, USA, 2011; Volume 24, pp. 2546–2554. [Google Scholar]
51. Akiba, T.; Sano, S.; Yanase, T.; Ohta, T.; Koyama, M. Optuna: A Next-Generation Hyperparameter Optimization Framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining—KDD’19, Anchorage, AK, USA, 4–8 August 2019; ACM Press: New York, NY, USA, 2019; pp. 2623–2631. [Google Scholar] [CrossRef]
Figure 1. Structure of our multi-branch generator model, composed of the three modules shown in the dotted boxes. The model takes five inputs (BEDMAP2, REMA, the gradient of BEDMAP2, MEaSUREs Ice Velocity, and snow accumulation).
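To make the caption concrete without reproducing the published network, the following is a minimal sketch, assuming PyTorch, of a two-branch generator that fuses its feature streams late and adds a direct upsampling path. The class name, channel counts, layer choices, and the grouping of the five inputs into two branches are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchGeneratorSketch(nn.Module):
    """Toy two-branch super-resolution generator with a direct upsampling path.

    All channel counts, kernel sizes, and the grouping of inputs are placeholder
    assumptions for illustration only, not the published MB_DeepBedMap model.
    """

    def __init__(self, up_factor: int = 4):
        super().__init__()
        self.up_factor = up_factor
        # Branch A: low-resolution bed inputs (e.g., coarse bed elevation and its gradient).
        self.branch_a = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 32, 3, padding=1), nn.LeakyReLU(0.2),
        )
        # Branch B: surface observations (e.g., surface elevation, velocity, accumulation),
        # assumed here to be resampled onto the same grid as branch A.
        self.branch_b = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 32, 3, padding=1), nn.LeakyReLU(0.2),
        )
        # Fusion module: merge the two feature streams late rather than at the input.
        self.fusion = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.LeakyReLU(0.2),
        )
        # Upsample module: produce the high-resolution residual detail.
        self.upsample = nn.Sequential(
            nn.Upsample(scale_factor=up_factor, mode="nearest"),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, bed_inputs: torch.Tensor, surface_inputs: torch.Tensor) -> torch.Tensor:
        fused = self.fusion(torch.cat([self.branch_a(bed_inputs),
                                       self.branch_b(surface_inputs)], dim=1))
        residual = self.upsample(fused)
        # Direct upsampling branch: smoothly upsample the coarse bed so that
        # large-scale landforms are preserved and the network only adds detail.
        direct = F.interpolate(bed_inputs[:, :1], scale_factor=self.up_factor,
                               mode="bilinear", align_corners=False)
        return direct + residual

# Shape check with random tensors (batch, channels, height, width).
bed = torch.randn(1, 2, 32, 32)       # coarse bed elevation + gradient
surface = torch.randn(1, 3, 32, 32)   # surface elevation, velocity, accumulation
print(TwoBranchGeneratorSketch()(bed, surface).shape)  # torch.Size([1, 1, 128, 128])
```

The late fusion keeps the two data groups in separate feature streams until they have been processed individually, which is the design choice the paper credits with reducing artifacts relative to fusing all inputs at the first layer.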
Figure 2. Comparison of nearest-neighbor and bilinear interpolation applied in the direct upsampling branch.
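For readers unfamiliar with the two interpolation choices compared in Figure 2, the short sketch below upsamples a random low-resolution tile with both methods, assuming PyTorch; the tile size and 4× scale factor are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Hypothetical low-resolution bed tile: (batch, channel, height, width), values in metres.
lowres_bed = torch.randn(1, 1, 32, 32)

# Nearest-neighbor upsampling copies each coarse cell into a block of fine cells,
# keeping values exact but producing stair-step plateaus.
up_nearest = F.interpolate(lowres_bed, scale_factor=4, mode="nearest")

# Bilinear upsampling blends neighboring cells, giving a smoother large-scale
# surface for the learned branches to refine with fine-scale detail.
up_bilinear = F.interpolate(lowres_bed, scale_factor=4, mode="bilinear",
                            align_corners=False)

print(up_nearest.shape, up_bilinear.shape)  # both torch.Size([1, 1, 128, 128])
```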
Figure 3. Fusion module and upsample module structure.
Figure 4. Continent-wide comparison of Antarctic bed topography DEMs from BEDMAP2, DeepBedMap [17], and MB_DeepBedMap. Red areas show locations of training data; green areas show locations of test data; orange boxes show locations of test regions; yellow boxes mark locations where the topographies differ noticeably.
Figure 5. Comparison of bed elevation grid products over Thwaites Glacier. The ground truth is from gridded Operation IceBridge points. Red boxes mark locations where the topographies differ noticeably.
Figure 6. Bed topography comparison over the Gamburtsev Subglacial Mountains. Features of interest are annotated in black text against a white background: terraces T [17].
Figure 7. Bed topography comparison over a steep mountainous area. Features of interest are annotated in black text against a white background: speckle patterns S [17].
Figure 8. Close-up views of different DEMs around Antarctica. (a–c) show DeepBedMap_DEM [17]; (d–f) show MB_DeepBedMap_DEM; (g–i) show the ice surface elevation model. Features of interest are annotated in black text against a white background: ridges R, wave patterns W.
Figure 9. Spatial 2D view of grids over Thwaites Glacier, West Antarctica. (a) MB_DeepBedMap_DEM; (b) DeepBedMap_DEM [17]; (c) 2D roughness from interpolated radar data grids; (d) 2D roughness from MB_DeepBedMap_DEM grid; (e) 2D roughness from the DeepBedMap_DEM grid [17]; (f) 2D roughness from bicubically interpolated BedMachine Antarctica grid. The orange line in (a) is the flight line.
Figure 10. Comparison of bed elevation (a) and bed roughness (b) for each grid product over a transect (see Figure 9, orange line).
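Figures 9 and 10 report bed roughness over grids and along a flight-line transect. As a purely illustrative aid, the sketch below computes one common 1-D roughness measure, the standard deviation of locally detrended elevation in a moving window; the function name, window length, and synthetic profile are assumptions and do not reproduce the exact metric used in the paper.

```python
import numpy as np

def transect_roughness(elevation: np.ndarray, window: int = 9) -> np.ndarray:
    """Roughness along a 1-D transect as the standard deviation of locally
    detrended elevation within a sliding window (NaN near the transect ends).

    One common definition, shown for illustration only; the roughness metric
    and window length used in the paper may differ.
    """
    half = window // 2
    rough = np.full(elevation.shape, np.nan)
    x = np.arange(window)
    for i in range(half, elevation.size - half):
        segment = elevation[i - half:i + half + 1]
        slope, intercept = np.polyfit(x, segment, 1)          # local linear trend
        rough[i] = np.std(segment - (slope * x + intercept))  # detrended std
    return rough

# Synthetic transect at regular spacing (values illustrative only).
bed = -500.0 + 50.0 * np.sin(np.linspace(0.0, 20.0, 400)) + 10.0 * np.random.randn(400)
print(np.nanmean(transect_roughness(bed)))
```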
Figure 11. Bed topography comparison of BedMachine, DeepBedMap, and MB_DeepBedMap grid products. (a–c) show the Dome C location; (d–f) show the Dome F location; (g–i) show the Marie Byrd Land (MBL) location. Orange lines in (c,f,i) correspond to the transects in Figure 12, Figure 13, and Figure 14, respectively.
Figure 12. Comparison of bed elevation (a) and bed roughness (b) for each grid product over a transect in Dome C (see Figure 11c, orange line).
Figure 13. Comparison of bed elevation (a) and bed roughness (b) for each grid product over a transect in Dome F (see Figure 11f, orange line).
Figure 14. Comparison of bed elevation (a) and bed roughness (b) for each grid product over a transect in Marie Byrd Land (see Figure 11i, orange line).
Figure 15. Comparison of bed elevation grid products over Thwaites Glacier. The ground truth is from gridded Operation IceBridge points.
Table 1. High-resolution ground-truth datasets from ice-penetrating-radar surveys used to train the MB_DeepBedMap model [17].
Location | Citation
Pine Island Glacier | Bingham et al. [38]
Wilkes Subglacial Basin | Jordan et al. [39]
Carlson Inlet | King [40]
Rutford Ice Stream | King et al. [41]
Various locations in Antarctica | Shi et al. [42] and Holschuh et al. [43]