Article

Segmentation Performance and Mapping of Dunes in Multi-Source Remote Sensing Images Using Deep Learning

1 College of Geography and Remote Sensing Sciences, Xinjiang University, Urumqi 830046, China
2 Xinjiang Key Laboratory of Oasis Ecology, Xinjiang University, Urumqi 830046, China
3 Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, Urumqi 830011, China
4 Changji Geological Brigade, Geological Bureau of Xinjiang Uygur Autonomous Region, Changji 831100, China
* Author to whom correspondence should be addressed.
Land 2025, 14(4), 713; https://doi.org/10.3390/land14040713
Submission received: 15 February 2025 / Revised: 16 March 2025 / Accepted: 25 March 2025 / Published: 26 March 2025
(This article belongs to the Section Land Innovations – Data and Machine Learning)

Abstract
Dunes are key geomorphological features in aeolian environments, and their automated mapping is essential for ecological management and sandstorm disaster early warning in desert regions. However, the diversity and complexity of the dune morphology present significant challenges when using traditional classification methods, particularly in feature extraction, model parameter optimization, and large-scale mapping. This study focuses on the Gurbantünggüt Desert in China, utilizing the Google Earth Engine (GEE) cloud platform alongside multi-source remote sensing data from Landsat-8 (30 m) and Sentinel-2 (10 m). By integrating three deep learning models—DeepLab v3, U-Net, and U-Net++—this research evaluates the impact of the batch size, image resolution, and model structure on the dune segmentation performance, ultimately producing a high-precision dune type map. The results indicate that (1) the batch size significantly affects model optimization. Increasing the batch size from 4 to 12 improves the overall accuracy (OA) from 69.65% to 84.34% for Landsat-8 and from 89.19% to 92.03% for Sentinel-2. Increasing the batch size further to 16 results in a slower OA improvement, with Landsat-8 reaching OA of 86.63% and Sentinel-2 reaching OA of 92.32%, suggesting that gradient optimization approaches saturation. (2) The higher resolution of Sentinel-2 greatly enhances the ability to capture finer details, with the segmentation accuracy (OA: 92.45%) being 5.82% higher than that of Landsat-8 (OA: 86.63%). (3) The U-Net model performs best on Sentinel-2 images (OA: 92.45%, F1: 90.45%), improving the accuracy by 0.13% compared to DeepLab v3, and provides more accurate boundary delineation. However, DeepLab v3 demonstrates greater adaptability to low-resolution images. This study presents a dune segmentation approach that integrates multi-source data and model optimization, offering a framework for the dynamic monitoring and fine-scale mapping of the desert’s geomorphology.

1. Introduction

Dunes are globally distributed aeolian landforms that play critical roles in both desert and coastal ecosystems [1]. While commonly associated with arid environments (e.g., the Sahara and Gurbantünggüt Deserts), they are equally prevalent in coastal and marine regions, such as the Netherlands’ shoreline and Australia’s Fraser Island [2]. Automated dune mapping is essential for desertification control, coastal erosion management, and early warning systems for sandstorm hazards [3].
However, the diversity and complexity of dune morphologies pose substantial challenges for traditional classification methods, particularly in feature extraction, model parameter optimization, and large-scale mapping [4]. In recent years, advancements in remote sensing technology and deep learning algorithms have significantly improved the automation and accuracy of dune classification [5].
Conventional dune classification methods primarily rely on manual visual interpretation, which is not only time-consuming and labor-intensive but also subject to the interpreter’s subjective judgment [6]. While these methods have yielded results in specific study areas, they struggle to meet the demands of large-scale, high-efficiency dune classification.
Remote sensing technology provides powerful tools for dune classification [7]. Satellite- and UAV-acquired remote sensing imagery enables the rapid and non-destructive monitoring of dune morphologies [8]. However, traditional remote sensing image analysis methods still face limitations in handling complex dune morphologies, such as inaccurate boundary identification and low classification accuracy [9].
In recent years, deep learning techniques have enabled significant progress in image segmentation and classification [10], offering new solutions for dune classification [11]. Existing studies have demonstrated that deep learning models excel in classifying dune morphologies, effectively enhancing the classification accuracy and boundary delineation [12]. Zheng [13] proposed a global dune classification scheme, applying an improved Transformer model to dune classification on Landsat-8 images; this work provided a dune type sample dataset and discussed classification performance at various scales. Li Ming [14] combined drone imagery with deep learning techniques for dune classification. However, while Zheng et al. proposed a global dune type dataset, their classification system was designed for the global scale and adopts relatively general categories, which do not suit specific desert regions. For example, in the Gurbantünggüt Desert, China’s largest semi-fixed desert, the unique climatic conditions have produced a rich variety of dune types; even within honeycomb dunes, honeycomb ridged dunes exhibit different texture features in imagery compared to regular honeycomb dunes [15]. Current research thus still faces challenges in model parameter optimization, multi-source remote sensing data fusion, and the development of regionally adaptive classification systems.
To address these issues, this study focuses on the Gurbantünggüt Desert, the largest semi-fixed desert in China, using the regionally adaptive classification system proposed by Zhu Zhenda [16] (covering seven typical dune types). This approach combines the multi-resolution advantages of Landsat-8 and Sentinel-2 imagery [17] and systematically compares the performance differences between the DeepLab v3, U-Net, and U-Net++ models [18]. Additionally, this study analyzes the impact of the batch size (4, 8, 12, 16) on gradient optimization and the segmentation accuracy [19]. This paper addresses the following issues: (1) constructing a multi-scale label set that integrates field verification and drone data to address the scarcity of region-specific dune samples; (2) revealing the synergistic effects of the batch size and image resolution and proposing an optimization scheme consisting of “higher-resolution data with a larger batch size for U-Net”; and (3) generating the first high-precision dune type map for the Gurbantünggüt Desert, providing data support for desertification control and geomorphological evolution research.

2. Data and Methods

2.1. Study Area

The study area is the Gurbantünggüt Desert [20] (Figure 1): the figure shows (a) the country and province where the study area is located, (b) the location of the study area, and (c) the distribution of different types of sand dunes in the study area. The Gurbantünggüt Desert is located at 44°11′–46°20′ N latitude and 84°31′–90°00′ E longitude. It is situated in the central part of the Junggar Basin in Xinjiang, China, to the east of the Manas River and to the south of the Ulungur River. The Gurbantünggüt Desert is currently the largest fixed and semi-fixed desert in China, covering an area of approximately 48,800 square kilometers. It ranks second among the eight major deserts in Northern China [21]. The desert encompasses a wide variety of dune types, and, after years of aeolian activity, it still retains dunes from different historical periods [22]. Therefore, the Gurbantünggüt Desert was selected as the study area.

2.2. Data Acquisition and Processing

2.2.1. Remote Sensing Image Acquisition and Processing

To investigate the performance of multi-source remote sensing imagery in dune classification, this study selected Landsat-8 and Sentinel-2 images. Landsat-8 imagery has a spatial resolution of 30 m, making it suitable for monitoring large-scale dune distribution. In contrast, Sentinel-2’s 10 m resolution provides more detailed dune feature information, which helps to improve the classification accuracy [23].
The study area was uploaded to the Google Earth Engine (GEE) cloud platform [24], where Landsat-8 and Sentinel-2 images from 2015, with cloud cover of less than 2%, were selected. Preprocessing tasks, such as mosaicking and masking, were performed on the GEE platform, and the processed images were subsequently downloaded to the local system.

2.2.2. Label Data

The label data used in this study came from the “Geological Records of Desert Changes in China and Survey of Human Activity Sites” project (2017FY101000)—specifically, the “2015 China Northern Desert Geomorphology Mapping Data”. The dune types were redrawn based on Zhu Zhenda’s General Introduction to China’s Deserts [16] and manually labeled. Based on the typical dune types of the Gurbantünggüt Desert, a total of seven dune types were labeled: semi-fixed ridge-nest dunes, honeycomb dunes, barchan dunes and dune chains, honeycomb dune ridges, semi-fixed dendritic dune ridges, fixed dune ridges, and semi-fixed dune ridges (Figure 2).
Among these types, honeycomb dunes are difficult to distinguish from honeycomb dune ridges, and fixed dune ridges from semi-fixed dune ridges. A honeycomb dune consists of round or elliptical sand hollows with raised surroundings and a low center, formed under the interaction of multiple wind directions; external disturbances make its shape irregular, resembling a honeycomb, and its vegetation coverage is low. A honeycomb dune ridge is a specific derivative of fixed and semi-fixed dunes, showing a hexagonal or sub-circular lattice structure of elevated annular sand ridges alternating with central concave areas. The diameter of a single cell is 20–80 m, the ridge height is 1.5–4 m, the ridge width is 2–5 m, and the depth of the depressions is 0.3–1.2 m; the vegetation cover reaches 40–60% (mainly Caragana and sand willow). Fixed dune ridges are linear dunes fully stabilized by vegetation, showing continuous ridge extension and a bilaterally symmetrical slope structure; their stability represents the final state of dune succession. The ridge straightness exceeds 85%, the extension length is 3–8 km, the slope angle is 12–18°, and the relative height is 5–15 m. Semi-fixed dune ridges are ridge-shaped dune bodies with straight crests and symmetrical slopes that extend over great distances; ridge lengths range from dozens of meters to thousands of meters, ridge heights are 5–30 m, and the ridges run parallel to each other with a spacing of 150–500 m. Their vegetation coverage is lower, at about 20%.
Since deep learning is data-driven, the quantity and quality of the data are key factors affecting the performance of image classification models.
The downloaded Landsat-8 and Sentinel-2 images were imported into the ArcGIS software. First, the study area was extracted by mask and the background null value was set to 0. The R, G, and B bands were then selected, and, because images exported from the GEE are normalized, the pixel values were rescaled to 0–255. Both images were cropped into 256 × 256 pixel tiles, and tiles containing more than 90% background values were discarded. The tiles were assigned to the training, validation, and test sets at a ratio of 6:3:1; the number of images in each set for Landsat-8 and Sentinel-2 is shown in Table 1. The corresponding label files were resampled to the same resolution as the images, cropped to 256 × 256 pixels, and matched to the image tiles by name. In this way, a remote sensing image segmentation dataset for the different dune categories of the Gurbantünggüt Desert was constructed.
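The tiling and background-filtering steps above can be sketched as follows. This is an illustrative NumPy sketch, not the study’s actual code: the 256-pixel tile size and the 90% background threshold follow the text, while the function names are assumptions.

```python
import numpy as np

TILE = 256  # tile size in pixels

def rescale_to_uint8(img):
    """Restore GEE-normalized reflectance values (0-1) to 0-255."""
    return np.clip(img * 255.0, 0, 255).astype(np.uint8)

def tile_image(img, tile=TILE, max_background=0.9):
    """Crop an (H, W, 3) RGB array into tile x tile patches,
    discarding patches in which background (value 0) exceeds 90%."""
    h, w = img.shape[:2]
    tiles = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            patch = img[r:r + tile, c:c + tile]
            if np.mean(patch == 0) <= max_background:
                tiles.append(patch)
    return tiles
```

The same routine can be applied to the resampled label rasters so that every retained image tile keeps a co-registered 256 × 256 label tile.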

2.2.3. Field Validation Data

To verify the accuracy of the label data, field sampling was conducted in the Gurbantünggüt Desert in September 2024. A DJI Phantom 4 drone was used to capture drone images with a resolution of 0.1 m and a size of 1 km × 1 km. A sampling survey was carried out at six points where the dune boundaries were difficult to distinguish. Manual visual interpretation of the dune types present in the drone images was performed to verify and correct the label files, ensuring the reliability of the dataset [25]. The base map showing the distribution of various dune types, shown in Figure 3, was based on the label data.

2.3. Methods

In this study, three deep learning models—DeepLab v3, U-Net, and U-Net++—were selected for dune type segmentation, primarily due to their excellent performance in remote sensing image segmentation and adaptability to complex geographical backgrounds [26]. DeepLab v3 introduces dilated convolutions and multi-scale context modeling, effectively handling dune classification problems characterized by complex backgrounds and multi-scale features. Its Atrous Spatial Pyramid Pooling (ASPP) module enables the model to capture multi-scale information about dune areas, improving its performance across different dune types and morphologies [27]. U-Net, based on a symmetric encoder–decoder structure, strengthens the retention of local details through skip connections [28]. This capability allows it to precisely identify dune boundaries and subtle features, which is particularly crucial for dune segmentation tasks. U-Net++ builds upon U-Net by adding dense skip connections and deep supervision mechanisms, giving the model an advantage in capturing multi-scale features, especially when handling dunes of varying sizes and shapes, thus demonstrating higher accuracy and robustness [29]. These models provide strong support for this study, ensuring the efficiency and accuracy of the dune type segmentation task.

2.3.1. DeepLab v3

DeepLab v3 is an advanced deep learning model designed specifically for semantic segmentation tasks, aiming to accurately segment different categories of regions from input images. Its core architecture is based on convolutional neural networks (CNNs) and significantly improves the segmentation performance through the following key features.
(1) Dilated Convolutions [30]: DeepLab v3 introduces dilated convolutions in the backbone network to expand the receptive field of convolutional kernels without increasing the number of parameters or reducing the resolution. This enables the model to capture multi-scale contextual information while maintaining the resolution of the feature maps, allowing for the better representation of complex object–background relationships.
(2) Atrous Spatial Pyramid Pooling (ASPP) [31]: The ASPP module in DeepLab v3 extracts multi-scale features by applying dilated convolutions with multiple sampling rates in parallel, and it combines global average pooling to capture global contextual information. The multi-scale fusion further enhances the model’s ability to segment objects of varying sizes.
(3) Backbone Network and Pretrained Models [32]: DeepLab v3 is typically built upon powerful backbone networks (such as ResNet, Xception, or MobileNet) and utilizes weights pretrained on large-scale image classification datasets (such as ImageNet) to enhance the feature extraction capabilities and accelerate convergence.
(4) Dense Prediction and Boundary Optimization [33]: DeepLab v3 removes the final downsampling operation, generating dense feature maps that enable high-resolution predictions. This approach is particularly beneficial in improving the segmentation performance at object boundaries.
In summary, DeepLab v3 leverages dilated convolutions, ASPP, and powerful backbone networks to achieve the precise modeling of multi-scale and global contexts. It is a high-performance model that is widely used in remote sensing image segmentation, medical image analysis, and object detection.

2.3.2. U-Net

U-Net is based on an encoder–decoder architecture, as shown in Figure 4. Typically, the encoder performs feature extraction and dimensionality reduction on the input image through convolutional and pooling layers, which reduces the size and dimensionality of the feature maps. The decoder is responsible for gradually upsampling and restoring the features extracted by the encoder, progressively recovering the size and dimensions of the feature maps to match the original image size. U-Net introduces skip connections, which link the feature maps from each layer of the encoder with the corresponding layer of the decoder [34].
In U-Net, the cross-entropy loss function is commonly used to measure the difference between the model’s output and the actual labels, guiding the training and optimization of the model parameters. The expression of the cross-entropy loss function is as follows:
L = −[y log(p) + (1 − y) log(1 − p)]
where y ∈ {0, 1} represents the true label, and p ∈ (0, 1) represents the predicted probability of the positive class.
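As a worked illustration of this loss, the following NumPy sketch evaluates the binary cross-entropy for a small set of predictions (in the segmentation models themselves, the multi-class generalization of this function guides training):

```python
import numpy as np

def binary_cross_entropy(y, p, eps=1e-12):
    """L = -[y*log(p) + (1-y)*log(1-p)], averaged over samples."""
    p = np.clip(p, eps, 1 - eps)  # avoid log(0)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

# A confident correct prediction gives a small loss,
# a confident wrong prediction a large one.
y = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])
loss = binary_cross_entropy(y, p)  # ≈ 0.105
```

The `eps` clipping is a standard numerical safeguard so that a predicted probability of exactly 0 or 1 does not produce an infinite loss.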

2.3.3. U-Net++

U-Net++ is an improvement and extension to the U-Net model. In U-Net++, the skip pathways are redesigned to connect every layer of both the encoder and the decoder. The features from each layer are directly passed to all subsequent layers of the decoder. U-Net++ also introduces dense skip connections, which link the feature maps of each layer to all subsequent layers of both the encoder and decoder. This connection approach better preserves the multi-scale feature information of the image. Additionally, U-Net++ incorporates deep supervision to dynamically adjust the weights of the feature maps, enhancing the model’s focus on important features and allowing for the more precise extraction of key information from the image [29].

2.4. Experiment Introduction

2.4.1. Dataset Construction

Both the Landsat-8 and Sentinel-2 images were cropped into 256 × 256 pixel tiles for training. The corresponding label files were resampled to match the resolution of the images and cropped to the same 256 × 256 pixel size. The images were randomly divided into training, validation, and test sets at a ratio of 6:3:1, resulting in 7150 training images, 3575 validation images, and 1192 test images.
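The 6:3:1 split can be reproduced with a simple helper (an illustrative sketch; the function name and seed are assumptions). Applied to the 11,917 tiles, integer division yields exactly the 7150/3575/1192 counts reported above.

```python
import random

def split_dataset(items, ratios=(6, 3, 1), seed=42):
    """Randomly shuffle tiles and split them 6:3:1 into
    training, validation, and test sets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    total = sum(ratios)
    n_train = len(items) * ratios[0] // total
    n_val = len(items) * ratios[1] // total
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_dataset(range(11917))
```

Fixing the shuffle seed keeps the split reproducible across runs, which matters when comparing models trained on the same partition.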

2.4.2. Experimental Setup

In the experiments in this study, the DeepLab v3, U-Net, and U-Net++ deep learning segmentation models were implemented using the PyTorch (2.2.2) library and the Python (3.10) programming language on a Linux system with an NVIDIA L40 (48GB) GPU workstation.
The batch size setting influences the memory consumption, training speed, generalization performance, and model stability [35]. Therefore, batch sizes of 4, 8, 12, and 16 were tested to analyze their impacts on the accuracy. The loss function selected was the cross-entropy loss function [36], commonly used for multi-class image classification tasks. This function measures the difference between the model’s predicted results and the true labels, guiding the learning process. An adaptive learning rate module was incorporated to flexibly control the frequency and extent of learning rate adjustments [37]. The number of epochs was set to 200, with the training weights updated every 10 epochs. This setup allowed for the more intuitive observation of the model’s convergence time without affecting the prediction results.
The batch size refers to the number of samples selected for training in one iteration. The size of the batch influences the model’s optimization level and training speed, and it directly affects the GPU memory usage [38]. Since the optimal batch size varies for different segmentation tasks, it was essential to explore the optimal parameter for dune segmentation in this study. Taking the DeepLab v3 model as an example, two sets of different remote sensing image data (Landsat-8 and Sentinel-2) were selected, and four different batch sizes (from large to small) were used for training.
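Independently of the segmentation models, the role of the batch size can be illustrated with mini-batch gradient descent on a toy logistic regression. This is a NumPy sketch for intuition only; the experiments themselves used PyTorch, and all names here are illustrative.

```python
import numpy as np

def train_logistic(X, y, batch_size=16, lr=0.1, epochs=200, seed=0):
    """Mini-batch gradient descent on logistic regression with
    cross-entropy loss; batch_size controls how many samples
    contribute to each gradient update."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    n = len(X)
    for _ in range(epochs):
        order = rng.permutation(n)  # reshuffle each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            p = 1.0 / (1.0 + np.exp(-X[idx] @ w))      # predictions
            grad = X[idx].T @ (p - y[idx]) / len(idx)  # d(loss)/dw
            w -= lr * grad
    return w
```

Larger batches average the per-sample gradients over more data, giving a more stable update direction at a higher memory cost per step, which mirrors the trade-off examined in this study.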

2.4.3. Model Evaluation Metrics

In order to quantitatively evaluate the performance of the model, five metrics were chosen to evaluate the segmentation results: the overall accuracy (OA), precision, recall, F1 score, and confusion matrix.
OA represents the number of correctly classified samples as a proportion of the number of all samples.
OA = (TP + TN) / (TP + FN + FP + TN)
Precision represents the number of correctly classified positive samples as a proportion of all samples predicted as positive by the classifier.
Precision = TP / (TP + FP)
Recall represents the number of correctly classified positive samples as a proportion of all actual positive samples.
Recall = TP / (TP + FN)
F1 combines Precision and Recall, with values ranging from 0 to 1; a value of 1 represents the best output of the model and 0 the worst.
F1 = 2 · Precision · Recall / (Precision + Recall)
In the field of machine learning, the confusion matrix is also known as the error matrix. Confusion matrices are visualization tools used especially in supervised learning; in unsupervised learning, the analogous structure is generally called a matching matrix. In image accuracy evaluation, they are mainly used to compare the classification results with reference values, with the accuracy of the classification displayed inside the matrix. Figure 5 shows the general structure of the confusion matrix.
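All four scalar metrics above can be derived directly from the confusion matrix, as in the following NumPy sketch (illustrative; note that per-class precision is undefined for a class that is never predicted):

```python
import numpy as np

def metrics_from_confusion(cm):
    """Compute OA and per-class precision, recall, and F1 from a
    square confusion matrix (rows: true class, cols: predicted)."""
    cm = np.asarray(cm, dtype=float)
    oa = np.trace(cm) / cm.sum()      # correct / all samples
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # TP / (TP + FP)
    recall = tp / cm.sum(axis=1)      # TP / (TP + FN)
    f1 = 2 * precision * recall / (precision + recall)
    return oa, precision, recall, f1
```

For a multi-class task such as the seven dune types here, the per-class vectors can be averaged (macro-averaged) to obtain single precision, recall, and F1 scores.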

3. Results

3.1. Segmentation Performance with Different Batch Sizes

As shown in the validation accuracy results in Table 2, as the batch size increases, the model’s accuracy on the validation set improves significantly for both the Landsat-8 and Sentinel-2 images. Specifically, when the batch size increases from 4 to 12, there is a noticeable improvement in model performance. When the batch size further increases from 12 to 16, the accuracy growth becomes less significant, indicating that, beyond a batch size of 12, further increases yield limited performance gains while raising the GPU’s computational load and potentially slowing the model’s convergence. Nevertheless, since a batch size of 16 still achieved the highest validation accuracy within the available GPU memory, it was adopted for the subsequent dune segmentation experiments.

3.2. Segmentation Performance of Different Models

The DeepLab v3, U-Net, and U-Net++ models were used to segment both Landsat-8 and Sentinel-2 images with a batch size of 16. The training convergence of the three models is shown via the training loss function in Figure 6. It can be observed that all models converged. Among them, the U-Net model showed the best convergence performance, while the more complex structure of the U-Net++ model led to poorer convergence results.
However, the number of epochs required for convergence varied across the models. The U-Net model converged earlier, typically after 45 epochs, followed by DeepLab v3, which converged after an average of 50 epochs, and finally U-Net++. This indicates that U-Net exhibits higher learning efficiency for this task.
This study compared the performance of the three models on different image types. Table 3 shows the validation set accuracy of the three different models on different image datasets. The U-Net model achieved the best segmentation performance on Sentinel-2, with overall accuracy (OA) of 92.45%, precision of 90.03%, recall of 90.94%, and an F1 score of 90.45%. Compared to DeepLab v3, U-Net improved the OA by 0.13% and precision by 0.07%, with all accuracy metrics showing improvements. Although U-Net++ had lower overall accuracy than the other two models, it still performed well.
In the results regarding the 30 m resolution Landsat-8 images and the 10 m resolution Sentinel-2 images (Table 3), the Sentinel-2 images consistently showed better segmentation performance across all evaluation metrics. For the Landsat-8 images, however, DeepLab v3 yielded the best results, with overall accuracy of 86.63%. Therefore, U-Net was determined to be the optimal model for the dune segmentation task, and Sentinel-2 images were found to be the best data source for this task.
Small tile tests were conducted on different dune types using the different models and Sentinel-2 images, and the results are shown in Figure 7. The experimental results indicate significant differences in performance between the models in the dune type segmentation task. Overall, U-Net exhibited the best segmentation performance, with more accurate boundary delineation and clearer category recognition. It was particularly strong in distinguishing easily confused categories such as semi-fixed ridge-nest dunes (RGB: 255, 235, 153) and honeycomb dunes (RGB: 235, 188, 67). DeepLab v3 exhibited the second-best performance, with some misclassification in certain categories (e.g., honeycomb dunes, and barchan dunes and dune chains, RGB: 177, 117, 62), especially at regional boundaries, where the predictions appeared fragmented. In contrast, U-Net++ performed the worst, with significant boundary blurring across multiple categories and severe confusion between categories (e.g., honeycomb ridged dunes, RGB: 92, 64, 22) and other dune types, affecting the overall recognition accuracy. This is likely related to the complexity introduced by U-Net++ in its feature fusion process, leading to some loss of information when handling fine-grained terrain features.
In conclusion, U-Net performed the best in the dune type segmentation task in this study region, followed by DeepLab v3. U-Net++ demonstrated poor adaptability under the current data conditions.
Figure 8 shows the classification of typical dune types on Landsat-8 images. The segmentation results indicate that DeepLab v3 produced the best classification performance, with more accurate boundary identification for different dune types and higher classification consistency overall. U-Net exhibited the second-best performance, as it captured the dune morphology well but still had some boundary confusion in certain categories. U-Net++ performed poorly, with numerous misclassifications and blurry boundaries, especially in dune types with more complex, detailed structures.
A comprehensive analysis of the segmentation results from the Sentinel-2 and Landsat-8 images shows performance differences between the models. On the Sentinel-2 images, U-Net achieved the best segmentation results, followed by DeepLab v3, with U-Net++ remaining the worst. In contrast, DeepLab v3 showed the best performance on the Landsat-8 images, followed by U-Net, with U-Net++ still underperforming. This suggests that DeepLab v3 has stronger adaptability to the features of Landsat-8 images, while U-Net has better learning capabilities regarding the high-spectral information of Sentinel-2 images. This difference may be attributed to the images’ spatial resolutions, spectral bands, and texture features, reflecting how different models adapt to different sensor data. Overall, DeepLab v3 demonstrates more stable performance on Landsat-8 images, while U-Net performs better on Sentinel-2 images. This provides a direction for future research on the fusion of multi-source remote sensing images.

3.3. Segmentation Mapping of the Gurbantünggüt Desert

The optimal U-Net model with a batch size of 16 and the DeepLab v3 model were used to predict dune segmentation on Sentinel-2 and Landsat-8 images, respectively, and large-scale dune type maps of the study area were produced. The final results are shown in Figure 9. It can be observed that the U-Net model’s prediction on Sentinel-2 images provides better detail handling compared to the DeepLab v3 model’s prediction on Landsat-8 images.
Figure 10a shows the confusion matrix [39] of the U-Net model for the segmentation of the Gurbantünggüt Desert as a whole on the Sentinel-2 images, with overall classification accuracy of 98.8%, indicating strong classification capabilities. From the diagonal of the matrix, it can be seen that the main predicted values for each category are concentrated in the correct category, especially for honeycomb dunes (94.6%), barchan dunes and dune chains (95.3%), honeycomb dune ridges (96.2%), semi-fixed dendritic dune ridges (96.8%), fixed dune ridges (95.2%), and semi-fixed dune ridges (95.5%). This demonstrates good performance, suggesting that the model can accurately recognize these dune types.
However, some categories still show a degree of confusion. For example, 3.5% of the semi-fixed ridge-nest dunes are misclassified as honeycomb dunes and 4.4% are misclassified as semi-fixed dune ridges, likely due to spectral or morphological similarities between these dune types. Among honeycomb dune ridges, 1.1% were misclassified as semi-fixed dune ridges, possibly due to their similar texture features, resulting in mispredictions at certain boundary regions. Semi-fixed dune ridges also showed some misclassification, but the overall recognition accuracy remained high at 95.5%.
Figure 10b illustrates the performance of the DeepLab v3 model for dune classification on Landsat-8 images, with overall classification accuracy of 97.6%, indicating strong classification capabilities. From the diagonal of the matrix, it can be seen that the main predicted values for each category are concentrated in the correct category, especially for semi-fixed dendritic dune ridges (94.3%) and fixed dune ridges (95.1%). This shows good performance, indicating that the model is capable of accurately recognizing these dune types. However, misclassifications still occur in some categories.
Overall, both the U-Net and DeepLab v3 models show high accuracy in the dune type segmentation task and can effectively distinguish most dune categories. However, significant misclassification remains between categories with similar spectral features, with the DeepLab v3 model having a relatively higher misclassification rate. An analysis of the confusion matrices shows that semi-fixed ridge-nest dunes and honeycomb dunes exhibit the most significant misclassifications, primarily being misclassified as semi-fixed dune ridges. This misclassification likely arises from partial vegetation cover or mixed soil components present on the surfaces of semi-fixed dune ridges and honeycomb dune ridges, causing their spectral features to resemble those of semi-fixed ridge-nest dunes, thus increasing the model’s difficulty in distinguishing these types.

4. Discussion

This study systematically explores the optimization of dune segmentation and the mapping potential through the integration of multi-source remote sensing imagery and deep learning models. The results have significant implications for desert geomorphology research and technological applications.
The outstanding performance of U-Net on Sentinel-2 images (OA 92.45%) can be attributed to the synergy between its symmetric encoder–decoder architecture and the high-resolution data. The skip connections effectively transmit boundary information from shallow features, while the 10 m resolution of Sentinel-2 provides rich texture details, enabling the model to accurately differentiate between easily confused categories (e.g., semi-fixed ridge-nest dunes versus honeycomb dunes). In contrast, the advantage of DeepLab v3 on Landsat-8 (OA 86.63%) arises from its ASPP module’s multi-scale feature fusion capabilities, which allow the extraction of the global context from low-resolution imagery, compensating for the lack of spatial details. This finding aligns with Zheng et al.’s [13] conclusion that “model adaptability depends on the data scale” but further reveals the matching mechanism between the resolution and model architecture.
The experiment shows that increasing the batch size from 4 to 12 improves the overall accuracy (OA) from 69.65% to 84.34% for Landsat-8 and from 89.19% to 92.03% for Sentinel-2, a gain closely linked to the more stable gradient direction provided by larger batches [27]. However, increasing the batch size further to 16 yields only a slower OA improvement, with Landsat-8 reaching an OA of 86.63% and Sentinel-2 an OA of 92.32%, likely because the reduction in gradient noise causes the optimization to settle into a local optimum. This phenomenon suggests that, in practical applications, a balance must be struck between computational resources and accuracy improvement, rather than blindly increasing the batch size.
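The saturation effect can be illustrated with the standard 1/√B scaling of mini-batch gradient noise. This is a textbook approximation, not a measurement from this study, but it mirrors the pattern of diminishing returns seen in the OA figures above.

```python
# Sketch: the standard error of a mini-batch gradient estimate shrinks
# as sigma / sqrt(B), so the step from B=4 to B=12 removes far more noise
# than the step from B=12 to B=16. Numbers are illustrative only.
import math

def gradient_noise(sigma, batch_size):
    """Standard error of the mean of batch_size per-sample gradients."""
    return sigma / math.sqrt(batch_size)

sigma = 1.0
for b in (4, 8, 12, 16):
    print(f"batch {b:2d}: relative noise {gradient_noise(sigma, b):.3f}")

# Going 4 -> 12 removes ~42% of the noise; 12 -> 16 only a further ~13%,
# qualitatively matching the diminishing OA gains reported above.
```

This simple scaling also explains why the paper recommends balancing batch size against computational cost rather than maximizing it.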
This study successfully distinguished the unique honeycomb ridged dunes from ordinary honeycomb dunes in the Gurbantünggüt Desert using Zhu Zhenda’s localized classification system. Compared to Zheng’s global scheme [13], the localized system is more aligned with the regional geomorphological features, reducing misclassifications caused by overly broad category definitions (Figure 7). This result supports Courrech du Pont’s [40] view that classification systems should be dynamically adjusted to the regional environmental characteristics.
Field validation provides robust empirical support for model transferability by directly comparing ground truth data with model predictions. In Figure 11, panel (a) displays the field validation points corresponding to the original labels, while panel (b) shows the field validation points aligned with the 2024 results predicted by the U-Net model. Panels (c–h) illustrate the spatial distribution of the different validation points. The figure reveals that, despite some uncertainty in the boundary regions between dune types, the field observations confirmed that the dune types labeled in 2015 remained unchanged in 2024, even in transitional zones. Additionally, the results predicted by the U-Net model closely aligned with the field validation findings, exhibiting minimal deviation. This outcome provides evidence supporting the temporal transferability of the model: although it was trained on 2015 labels, our analysis indicates that it successfully learned the spectral characteristics of the different dune types, enabling effective transfer across time. These findings have significant implications for paleoclimate reconstruction [41].
Despite the significant results, this study has some limitations. First, the label data rely on historical manual annotations, which may not fully reflect the dynamic changes in dunes. Second, the weaker performance of U-Net++ (OA 89.15%) may be due to redundant noise introduced by its dense skip connections in complex terrain. Future work could explore combining the long-range dependency capture capabilities of Transformer models [13] or incorporating attention mechanisms to optimize feature weight allocation. Additionally, the integration of multi-temporal Sentinel-2 and Landsat-8 data holds promise for the temporal monitoring of dune migration, providing dynamic support for desertification control.
The large-scale dune type map generated in this study (Figure 7) can directly aid in desert ecological protection planning. For instance, the distribution of fixed ridged dunes can provide a basis for site selection in vegetation restoration projects, while the dynamic monitoring of crescent-shaped dune chains can help provide early warnings of dune encroachment risks. By further integrating high-resolution drone data, an "air–space–ground" integrated monitoring network can be established, promoting the construction of a smart desert management system.
This study provides a reliable solution for dune automatic segmentation through model–data–parameter synergy. Future work should focus on deepening exploration in areas such as data diversity, model lightweighting, and cross-regional validation to further enhance the universality and practicality of the method.

5. Conclusions

The segmentation and geomorphological mapping of typical dunes in the Gurbantünggüt Desert face technical challenges due to the diversity and complexity of the dunes, including issues related to classification scheme design, sample collection, feature representation, and the selection of classification methods. This study utilized remote sensing images at two different resolutions, from Landsat-8 and Sentinel-2, combined with three deep learning models—DeepLab v3, U-Net, and U-Net++—to conduct a systematic analysis covering image selection, model performance evaluation, and parameter optimization. Through experimentation, the suitable models and optimal parameter settings for different dune types in high-resolution and low-resolution images were determined, and large-scale geomorphological maps of the typical dune types covering the study area were generated. The results show that Sentinel-2 images excel in detailed feature extraction, while the segmentation of larger-scale dunes using Landsat-8 images also achieves good accuracy. At the same time, the deep learning models selected in this study demonstrated significant adaptability and robustness in handling the multi-scale features of different dune types, effectively addressing the issues of insufficient sample feature representation and model adaptability in dune classification. This provides reliable data support and technical methods for further research on dune evolution monitoring and geomorphological studies. The specific conclusions are as follows.
  • As the batch size increases, the segmentation performance improves; beyond a batch size of 12, the improvement slows, and the best results among the tested settings were obtained at 16. The number of epochs required for convergence varies with the model and image data source, so epoch values should be set according to the actual training behavior.
  • The segmentation performance when using Sentinel-2 images is generally superior to that with Landsat-8 images, providing more detailed features for model recognition.
  • The segmentation performance of U-Net on Sentinel-2 images was the best among all experiments, with overall accuracy (OA) of 92.45%. On Landsat-8 images, DeepLab v3 achieved the best segmentation performance, with OA of 86.63%.

Author Contributions

Conceptualization, methodology, writing—review and editing, P.Z.; methodology, resources, supervision, J.A.; writing—review and editing, visualization, supervision, funding acquisition, J.Z.; conceptualization, software, W.H. and N.T.; conceptualization, investigation, B.C. and X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Third Comprehensive Scientific Investigation in Xinjiang (2021xjkk1001) and the Geological Records of Desert Changes in China and Survey of Human Activity Sites project (2017FY101000).

Data Availability Statement

The Landsat-8 and Sentinel-2 remote sensing images used in this study were obtained from USGS and downloaded from https://earthexplorer.usgs.gov/, accessed on 20 November 2024.

Acknowledgments

We thank Feng Zhang from the College of Geography and Remote Sensing Sciences, Xinjiang University, and Shixin Wu from the Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, for providing the labeled data.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Carrera, D.; Bandeira, L.; Santana, R.; Lozano, J.A. Detection of sand dunes on Mars using a regular vine-based classification approach. Knowl.-Based Syst. 2019, 163, 858–874. [Google Scholar] [CrossRef]
  2. Gao, J.; Kennedy, D.M.; Konlechner, T.M. Coastal dune mobility over the past century: A global review. Prog. Phys. Geogr. Earth Environ. 2020, 44, 814–836. [Google Scholar] [CrossRef]
  3. Loope, D.B.; Rowe, C.M.; Joeckel, R.M. Annual monsoon rains recorded by Jurassic dunes. Nature 2001, 412, 64. [Google Scholar] [CrossRef]
  4. Dong, Z.B.; Qu, J.J.; Qian, G.Q.; Zhang, Z.S. Classification of Aeolian and Sand Dune Landforms in the Kumtag Desert. China Desert. 2011, 31, 805–814. (In Chinese) [Google Scholar]
  5. Vitousek, S.; Buscombe, D.; Vos, K.; Barnard, P.L.; Ritchie, A.C.; Warrick, J.A. The future of coastal monitoring through satellite remote sensing. Camb. Prism. Coast. Futures 2023, 1, e10. [Google Scholar]
  6. Tang, Y.; Wang, Z.; Jiang, Y.; Zhang, T.; Yang, W. An Auto-Detection and classification algorithm for identification of sand dunes based on remote sensing images. Int. J. Appl. Earth Obs. Geoinf. 2023, 125, 103592. [Google Scholar] [CrossRef]
  7. Brownett, J.; Mills, R. The development and application of remote sensing to monitor sand dune habitats. J. Coast. Conserv. 2017, 21, 643–656. [Google Scholar]
  8. de Figueiredo Meyer, M. The Effect of Spatial Resolution on Biomass Estimation Through Multispectral Imagery: A Case Study on C. edulis. Master’s Thesis, Universidade do Porto, Porto, Portugal, 2023. [Google Scholar]
  9. Hugenholtz, C.H.; Levin, N.; Barchyn, T.E.; Baddock, M.C. Remote sensing and spatial analysis of aeolian sand dunes: A review and outlook. Earth-Sci. Rev. 2012, 111, 319–334. [Google Scholar]
  10. Dong, P. Automated measurement of sand dune migration using multi-temporal lidar data and GIS. Int. J. Remote Sens. 2015, 36, 5426–5447. [Google Scholar] [CrossRef]
  11. Cui, B.C. Research on Dune Morphology Classification Based on Deep Learning and Multi-Source Remote Sensing Imagery: A Case Study of the Southern Edge of the Gurbantünggüt Desert. Master’s Thesis, Xinjiang University, Urumchi, China, 2020. (In Chinese). [Google Scholar]
  12. Zhao, X.M. Research on Dune Type Information Extraction Based on Deep Learning. Master’s Thesis, Xinjiang University, Urumchi, China, 2021. (In Chinese). [Google Scholar]
  13. Zheng, Z.; Zhang, X.; Li, J.; Ali, E.; Yu, J.; Du, S. Global perspectives on sand dune patterns: Scale-adaptable classification using Landsat imagery and deep learning strategies. ISPRS J. Photogramm. Remote Sens. 2024, 218, 781–801. [Google Scholar]
  14. Li, M.; Yan, J.H.; Ye, W.Z.; Dong, S.; Yang, Z.K. Automatic Dune Morphology Classification Method Based on Convolutional Neural Networks. Arid. Zone Resour. Environ. 2024, 38, 121–129. (In Chinese) [Google Scholar]
  15. Diniega, S.; Kreslavsky, M.; Radebaugh, J.; Silvestro, S.; Telfer, M.; Tirsch, D. Our evolving understanding of aeolian bedforms, based on observation of dunes on different worlds. Aeolian Res. 2017, 26, 5–27. [Google Scholar] [CrossRef]
  16. Zhu, Z.D.; Wu, Z.; Liu, S.; Di, X.M. Introduction to the Deserts of China; Revised Edition; Science Press: Beijing, China, 1980; p. 107. (In Chinese) [Google Scholar]
  17. Sigurdsson, J.; Armannsson, S.E.; Ulfarsson, M.O.; Sveinsson, J.R. Fusing Sentinel-2 and Landsat 8 Satellite Images Using a Model-Based Method. Remote Sens. 2022, 14, 3224. [Google Scholar] [CrossRef]
  18. Wu, H.; Song, H.; Huang, J.; Zhong, H.; Zhan, R.; Teng, X.; Qiu, Z.; He, M.; Cao, J. Flood Detection in Dual-Polarization SAR Images Based on Multi-Scale Deeplab Model. Remote Sens. 2022, 14, 5181. [Google Scholar] [CrossRef]
  19. He, T.; Zhang, Z.; Zhang, H.; Zhang, Z.; Xie, J.; Li, M. Bag of tricks for image classification with convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 558–567. [Google Scholar]
  20. Wang, X.; Wang, T.; Jiang, J.; Zhao, C. On the sand surface stability in the southern part of Gurbantünggüt Desert. Sci. China Ser. D Earth Sci. 2005, 48, 778–785. [Google Scholar] [CrossRef]
  21. Qian, Y.B. Environmental Studies of the Gurbantünggüt Desert; China Basic Education Subject Yearbook. Science Press: Beijing, China, 2011; p. 489. (In Chinese) [Google Scholar]
  22. Fearnehough, W.; Fullen, M.; Mitchell, D.; Trueman, I.; Zhang, J. Aeolian deposition and its effect on soil and vegetation changes on stabilised desert dunes in northern China. Geomorphology 1998, 23, 171–182. [Google Scholar] [CrossRef]
  23. Radeloff, V.C.; Roy, D.P.; Wulder, M.A.; Anderson, M.; Cook, B.; Crawford, C.J.; Friedl, M.; Gao, F.; Gorelick, N.; Hansen, M.; et al. Need and vision for global medium-resolution Landsat and Sentinel-2 data products. Remote Sens. Environ. 2024, 300, 113918. [Google Scholar] [CrossRef]
  24. Zhao, Q.; Yu, L.; Li, X.; Peng, D.; Zhang, Y.; Gong, P. Progress and trends in the application of Google Earth and Google Earth Engine. Remote Sens. 2021, 13, 3778. [Google Scholar] [CrossRef]
  25. Essel, B. Developing an Enhanced Photogrammetric Methodology for Mapping Water Bodies Using Low-Cost Drones. Ph.D. Thesis, National University of Ireland, Maynooth, Ireland, 2024. [Google Scholar]
  26. Aizatin, A.; Nugraha, I.G.B.B. Comparison of semantic segmentation deep learning methods for building extraction. In Proceedings of the 2022 International Conference on Computer Engineering, Network, and Intelligent Multimedia (CENIM), Surabaya, Indonesia, 22–23 November 2022; pp. 1–5. [Google Scholar]
  27. Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar]
  28. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015; Springer: Cham, Switzerland, 2015. [Google Scholar]
  29. Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. UNet++: A Nested U-Net Architecture for Medical Image Segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: DLMIA ML-CDS 2018; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 11045, pp. 3–11. [Google Scholar] [CrossRef]
  30. Yu, F.; Koltun, V. Multi-scale context aggregation by dilated convolutions. arXiv 2015, arXiv:1511.07122. [Google Scholar]
  31. Lian, X.; Pang, Y.; Han, J.; Pan, J. Cascaded hierarchical atrous spatial pyramid pooling module for semantic segmentation. Pattern Recognit. 2021, 110, 107622. [Google Scholar]
  32. Goldblum, M.; Souri, H.; Ni, R.; Shu, M.; Prabhu, V.; Somepalli, G.; Chattopadhyay, P.; Ibrahim, M.; Bardes, A.; Hoffman, J.; et al. Battle of the backbones: A large-scale comparison of pretrained models across computer vision tasks. Adv. Neural Inf. Process. Syst. 2024, 36, 29343–29371. [Google Scholar]
  33. Vandenhende, S.; Georgoulis, S.; Van Gansbeke, W.; Proesmans, M.; Dai, D.; Van Gool, L. Multi-task learning for dense prediction tasks: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3614–3633. [Google Scholar] [CrossRef] [PubMed]
  34. Li, Z.X.; Huang, M.E.; Gao, F.; Tao, T.Y.; Wu, Z.F.; Zhu, Y.C. Water Body Extraction from Remote Sensing Imagery Based on U-Net, U-Net++, and Attention-U-Net Networks. Surv. Mapp. Bull. 2024, 8, 26–30. (In Chinese) [Google Scholar]
  35. Schilling, F. The Effect of Batch Normalization on Deep Convolutional Neural Networks. Master’s Thesis, KTH Royal Institute of Technology, Stockholm, Sweden, 2016. [Google Scholar]
  36. Mao, A.; Mohri, M.; Zhong, Y. Cross-entropy loss functions: Theoretical analysis and applications. In Proceedings of the International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; pp. 23803–23828. [Google Scholar]
  37. Lin, H.-H.; Chuang, J.-H.; Liu, T.-L. Regularized background adaptation: A novel learning rate control scheme for Gaussian mixture modeling. IEEE Trans. Image Process. 2010, 20, 822–836. [Google Scholar]
  38. Acun, B.; Murphy, M.; Wang, X.; Nie, J.; Wu, C.-J.; Hazelwood, K. Understanding training efficiency of deep learning recommendation models at scale. In Proceedings of the 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA), Seoul, Republic of Korea, 27 February–3 March 2021; pp. 802–814. [Google Scholar]
  39. Susmaga, R. Confusion matrix visualization. In Intelligent Information Processing and Web Mining: Proceedings of the International IIS: IIPWM ‘04 Conference, Zakopane, Poland, 17–20 May 2004; pp. 107–116. [Google Scholar]
  40. Du Pont, S.C.; Rubin, D.M.; Narteau, C.; Lapôtre, M.G.; Day, M.; Claudin, P.; Livingstone, I.; Telfer, M.W.; Radebaugh, J.; Gadal, C. Complementary classifications of aeolian dunes based on morphology, dynamics, and fluid mechanics. Earth-Sci. Rev. 2024, 255, 104772. [Google Scholar]
  41. Liu, R. The Development of Barchan Dunes and the Evolution of Aeolian Environment in Western Gurbantunggut Desert. Master’s Thesis, Fujian Normal University, Fuzhou, China, 2023. (In Chinese). [Google Scholar]
Figure 1. (a) The province where the study area is located in China; (b) the location of the study area in Xinjiang, China; (c) the distribution of different types of sand dunes in the study area.
Figure 2. Dune type map. (ag) Remote sensing image slices representing seven different dune types: semi-fixed ridge-nest dunes, honeycomb dunes, barchan dunes and dune chains, honeycomb dune ridges, semi-fixed dendritic dune ridges, fixed dune ridges, and semi-fixed dune ridges, respectively.
Figure 3. Field survey sampling location map. (a) Distribution of field survey sampling locations in the study area; (bg) drone images collected at each survey location.
Figure 4. U-Net model architecture diagram.
Figure 5. Confusion matrix structure diagram.
Figure 6. Loss function during segmentation by different models on different images. The red squares in the graph represent the starting and ending values.
Figure 7. Comparison of dune classification results on Sentinel-2 images. (ah) Each row represents tiles at the boundaries of different dune types; the first column shows the remote sensing image, the second column shows the original labels, and the third to fifth columns show the segmentation results of the different models.
Figure 8. Comparison of dune classification results on Landsat-8 images. (ag) Each row represents tiles at the boundaries of different dune types; the first column shows the remote sensing image, the second column shows the original labels, and the third to fifth columns show the segmentation results of the different models.
Figure 9. Prediction results for the Gurbantünggüt Desert. (a) Ground truth; (b) prediction results for Sentinel-2 imagery using the U-Net model; (c) prediction results for Landsat-8 imagery using the DeepLab v3 model; (d–f) display details corresponding to (a–c), respectively.
Figure 10. Confusion matrix results. (a) Confusion matrix of U-Net model predictions on Sentinel-2 images; (b) confusion matrix of Deeplab v3 model predictions on Landsat-8 images. The numbers on axes 0–7 in the figure represent different dune types: background, semi-fixed ridge-nest dunes, honeycomb dunes, barchan dunes and dune chains, honeycomb dune ridges, semi-fixed dendritic dune ridges, fixed dune ridges, and semi-fixed dune ridges.
Figure 11. Comparison of points. (a) Field survey sampling location map; (b) segmentation projection results for 2024. (ch) show the distribution of different points, respectively.
Table 1. Remote sensing image segmentation dataset.
Data | Training Set | Validation Set | Test Set | Sum
Landsat-8 | 653 | 326 | 109 | 1088
Sentinel-2 | 1372 | 686 | 229 | 2287
Table 2. Segmentation performance of DeepLab v3 with different images and batch sizes.
Data | Batch Size | OA | Precision | Recall | F1
Landsat-8 | 16 | 86.63% | 80.46% | 74.62% | 76.82%
Landsat-8 | 12 | 84.90% | 79.05% | 74.12% | 75.92%
Landsat-8 | 8 | 74.71% | 65.27% | 58.63% | 60.92%
Landsat-8 | 4 | 69.65% | 52.67% | 43.73% | 43.84%
Sentinel-2 | 16 | 92.32% | 89.96% | 90.38% | 90.13%
Sentinel-2 | 12 | 92.03% | 89.22% | 89.88% | 89.51%
Sentinel-2 | 8 | 91.37% | 89.69% | 87.72% | 88.63%
Sentinel-2 | 4 | 89.19% | 87.42% | 84.09% | 85.60%
Table 3. Segmentation performance of different models on different images.
Model | Data | OA | Precision | Recall | F1
DeepLab v3 | Landsat-8 | 86.63% | 80.46% | 74.62% | 76.82%
DeepLab v3 | Sentinel-2 | 92.32% | 89.96% | 90.38% | 90.13%
U-Net | Landsat-8 | 85.61% | 78.63% | 73.91% | 75.85%
U-Net | Sentinel-2 | 92.45% | 90.03% | 90.94% | 90.45%
U-Net++ | Landsat-8 | 86.02% | 78.65% | 76.98% | 77.71%
U-Net++ | Sentinel-2 | 89.15% | 86.50% | 86.06% | 86.24%