Article

A Deep Learning Method for Land Use Classification Based on Feature Augmentation

1 School of Resource and Environmental Sciences, Wuhan University, Wuhan 430079, China
2 China Institute of Development Strategy and Planning, Wuhan University, Wuhan 430079, China
3 State Key Laboratory of Water Resources and Hydropower Engineering Science, Wuhan University, Wuhan 430072, China
4 School of Water Resources and Hydropower Engineering, Wuhan University, Wuhan 430072, China
5 Inner Mongolia Civil-Military Integration Development Research Center, Hohhot 010070, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(8), 1398; https://doi.org/10.3390/rs17081398
Submission received: 25 February 2025 / Revised: 10 April 2025 / Accepted: 11 April 2025 / Published: 14 April 2025

Abstract

Land use monitoring by satellite remote sensing can improve the capacity for ecosystem resource management. However, satellite sources, bandwidth, computing speed, data storage and cost constrain development and application in the field. A novel deep learning classification method based on feature augmentation (CNNs-FA) is developed in this paper, offering a robust avenue to low-cost, high-precision regional land use monitoring. Twenty-two spectral indices are integrated to augment vegetation, soil and water features, which are used for convolutional neural networks (CNNs) learning to effectively differentiate seven land use types: cropland, forest, grass, built-up, bare, wetland and water. Results indicated that multiple spectral indices can effectively distinguish land uses with similar reflectance, achieving overall accuracies of 99.70%, 94.81% and 90.07% and kappa coefficients of 99.96%, 98.62% and 99.76% for Bayannur, Ordos and the Hong Lake Basin (HLB), respectively. An overall accuracy of 98.18% in the field investigation demonstrated that the classification remains accurate in wet areas and in ecologically sensitive areas characterized by significant desert–grassland interspersion.

1. Introduction

Land use monitoring based on remote sensing technology has been widely applied in critical fields related to national resource environmental protection and sustainable development, serving cultivated land protection, ecosystem assessment and ecological management [1,2,3,4,5], but realizing high-precision land use monitoring in complex areas is still a major challenge.
Spectral indices are calculated from combinations of different bands and reflect physical attributes of land use [6,7,8,9,10,11,12,13,14]. Based on the relationship between spectral indices and land use, thresholds on the indices can be set to classify land use [15,16,17]. NDVI values gradually increase with vegetation growth, reaching 0.79 for reeds in wetlands, while water and built-up areas have an NDVI close to zero [18]. The EVI is more sensitive than the NDVI in areas of high vegetation cover; global forest EVI values range from 0.02 to 0.6854, and forests at the equator have the highest EVI [19]. Influenced by geographical heterogeneity, ecological diversity and human activities, identical land uses may demonstrate spectral variability across areas, while distinct land uses can exhibit overlapping signatures [20,21,22]. Applying spectral indices in combination can therefore meet the requirements for characterizing complex land use patterns. The RBI and NDSI highlight features of bare and built-up areas [23,24]. Vegetation indices, such as the LAI [25,26,27], SAVI [21,28,29], ARVI [30,31], DVI [32,33], GNDVI [34,35,36], WDRVI [37,38], VARI [39], TVI [40] and SIPI [41], correlate positively to some degree with vegetation density, and their sensitivity and applicability vary across ecological environments. In sparsely vegetated areas, the SAVI, MSAVI and OSAVI are more sensitive than the NDVI [42,43,44]. The NDWI and MNDWI take higher values over water areas and correlate negatively with vegetation density, with value ranges that vary from area to area [45,46]. Sun et al. [2] realized the land use classification of the Hetao Irrigation District of China by determining thresholds for the NDVI, EVI, NDWI and SI, with an accuracy of more than 80% in both the field investigation and the image survey.
However, in complex areas, phenomena such as spectral overlap, boundary blurring and mixed pixels can lead to highly nonlinear mapping relationships between land use types and spectral indices, which require more flexible and powerful methods to improve classification accuracy.
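Several of the indices above are simple band combinations. As a hedged illustration (the formulas are the standard published definitions of these indices, and the pixel values are made up), a few of them can be computed directly from four-band surface reflectance:

```python
def spectral_indices(blue, green, red, nir, eps=1e-10):
    """Compute a few common spectral indices for one pixel of four-band
    surface reflectance (values in [0, 1]); eps guards against division
    by zero over dark pixels."""
    return {
        "NDVI": (nir - red) / (nir + red + eps),              # vegetation density
        "SAVI": 1.5 * (nir - red) / (nir + red + 0.5 + eps),  # soil-adjusted, L = 0.5
        "NDWI": (green - nir) / (green + nir + eps),          # open water
        "GNDVI": (nir - green) / (nir + green + eps),         # green-band NDVI
    }

# Dense vegetation reflects strongly in the near-infrared, so NDVI is
# high and NDWI is negative for this illustrative pixel.
idx = spectral_indices(blue=0.05, green=0.08, red=0.06, nir=0.45)
```

Stacking such indices per pixel is what turns a four-band image into the richer feature cube used later for network learning.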
Complex nonlinear mappings of data can be established by deep learning, which has significant advantages in image classification [47,48,49,50,51,52]. Network architectures such as the Multi-Layer Perceptron (MLP), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs) and Transformers have been used for land use classification to enhance the ability of models to extract representative features [53,54,55,56]. CNNs demonstrate exceptional capability in extracting complex nonlinear features from input data and are widely used in high-resolution remote sensing image classification [57,58,59,60,61]. Compared to fully connected networks, CNNs achieve efficient performance with fewer parameters in image processing tasks [62,63,64,65,66]. However, deep learning models usually learn from specific datasets and generalize poorly when the training datasets and the classification task area are inconsistent [63,67,68,69]. Deep learning land use classification methods often require a sufficiently large number of features; hyperspectral data provide rich and distinctive features, yet they face the problems of expensive imaging equipment, data storage and computational resource allocation [70,71]. It would therefore be valuable to calculate multiple spectral indices from four-band images to provide richer land use features for network learning, which helps the model establish the mapping between land use and features and improves the applicability, stability and robustness of the classification.
The main purpose of this paper is to propose a deep learning land use classification method (CNNs-FA) that integrates 22 spectral indices to augment the features of seven land uses and constructs CNNs to achieve low-cost, high-precision land use classification. Three study areas with unique and complex land uses, Bayannur, Ordos and the Hong Lake Basin (HLB), were selected to validate the generalization ability of the method. Systematic monitoring of regional land use with advanced classification methods can address current data limitations in regulatory departments and help local authorities better understand land use and inform and implement sustainable environment-related policies.

2. Methods

This paper proposes a deep learning land use classification method that utilizes spectral indices to enhance the features of land use and establishes a nonlinear mapping from features to land use through CNNs, achieving high-precision, low-cost land use classification of remote sensing images. The method is capable of classifying remote sensing images that include the four basic bands. The overall workflow of the proposed classification method is illustrated in Figure 1, including the processes of spectral indices selection, CNN structure, image set construction, sample labeling and accuracy assessment.

2.1. Spectral Indices Selection

Land use structures and patterns are intricate and complex, so the comprehensive use of spectral indices to augment land use features plays an important role in land use classification. The 22 spectral indices (Table 1) were analyzed over a sensitive area with complex land use, consisting of northern Ordos and southeastern Bayannur (Figure 2). Different indices vary in value across image patches, with distinctive variances between land use types. Vegetation indices usually highlight vegetated areas, but different types of vegetation cover are not uniformly sensitive to any one index. Considering the ecological diversity of the region and the complexity of the cultivation structure, it is necessary to select multiple vegetation indices to augment the features of vegetation-covered areas, such as forests, grasslands, farmlands and wetlands, and to provide richer characterization of other land uses, which helps to accelerate the convergence of the model and to improve its robustness and generalizability [72]. The integrated water and soil indices augment the land use description from different perspectives and improve the classification accuracy of the model in complex areas.

2.2. CNN Structure

Nonlinear mapping relationships between 22 input features and 7 land use type outputs were established by constructing CNNs (Equation (1)). The CNN structure is shown in Figure 3.
$y_j = f\left(b_j + \sum_{i=1}^{n} \left(x_{ij} \times w_{ij}\right)\right)$ (1)
where i ∈ [0, 21] and j ∈ [0, 6].
According to the remote sensing image features, the sample images can be digitized as a three-dimensional cube with a size of X1 × X2 × D, with the width and height of the image denoted by X1 and X2, and the number of bands is specified by D. Twenty-two spectral indices were calculated using the surface reflectance bands to form a three-dimensional cube with dimensions of X1 × X2 × 22. Each pixel in the sample area is regarded as a sample point and each spectral index is regarded as an augmentation feature for the sample data. These labeled sample points were randomly partitioned into training and test datasets, with 80% of the data for each land use type allocated to the training set and the remaining reserved for tests.
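The per-class 80/20 partition described above can be sketched as follows; the split fractions come from the text, while the data layout, a list of (feature_vector, label) sample points, is an assumption for illustration:

```python
import random
from collections import defaultdict

def stratified_split(samples, train_frac=0.8, seed=42):
    """Split labeled sample points 80/20 within each land use class,
    so every class keeps the same train/test proportion.

    `samples` is a list of (feature_vector, label) pairs.
    """
    by_class = defaultdict(list)
    for features, label in samples:
        by_class[label].append((features, label))
    rng = random.Random(seed)
    train, test = [], []
    for label, points in by_class.items():
        rng.shuffle(points)
        cut = int(len(points) * train_frac)  # 80% of this class to training
        train.extend(points[:cut])
        test.extend(points[cut:])
    return train, test

# Toy example: 10 points each for two of the seven classes.
toy = [([0.1] * 22, "water")] * 10 + [([0.6] * 22, "cropland")] * 10
train_set, test_set = stratified_split(toy)
```

Splitting per class rather than globally prevents a rare class (e.g. wetland) from falling entirely into one of the two sets.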
The sample size used for CNNs’ learning is Nb × Tb × Fb, with Nb, Tb and Fb denoting width, height and channel, respectively, and b being the index of the different layers. The data are fed into the network through the input layer with channel F0 for the 22 spectral indices used for feature augmentation and an image size of 32 × 1 in each channel. The input is denoted as Xfjk(t) and t ∈ [0, Tmb − 1], and f, j, k are the channel, width and height indices of individual pixels, respectively, where Tmb = batchsize = 1; j ∈ [0, N0 − 1], N0 = 32; k ∈ [0, T0 − 1]; and T0 = 1. Zero-mean normalization is performed on the data at the input layer, as shown in Equations (2) and (3). After data processing, it enters the network part consisting of two alternating sets of convolutional, batch normalization and pooling layers.
$\tilde{X}_{fjk}(t) = X_{fjk}(t) - \mu_{fjk}$ (2)
$\mu_{fjk} = \dfrac{1}{T_{train}} \sum_{t=0}^{T_{train}-1} X_{fjk}(t)$ (3)
where Ttrain is the total number of training sets.
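Equations (2) and (3) amount to estimating each input feature's mean over the training set and subtracting it from every sample; a minimal sketch, assuming samples are stored as nested lists indexed [sample][feature]:

```python
def zero_mean_normalize(train):
    """Zero-mean normalization per Equations (2)-(3): the mean of each
    input feature is estimated over the training set only, then
    subtracted from every sample, so no test-set information leaks
    into the statistics."""
    t_train = len(train)
    n_features = len(train[0])
    mu = [sum(x[f] for x in train) / t_train
          for f in range(n_features)]                         # Eq. (3)
    centered = [[x[f] - mu[f] for f in range(n_features)]
                for x in train]                               # Eq. (2)
    return centered, mu

train_n, mu = zero_mean_normalize([[1.0, 10.0], [3.0, 30.0]])
```

The same means `mu` would also be subtracted from test samples at inference time.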
The core role of the convolutional layer is to extract, enhance and optimize features to improve the model’s ability to learn complex data. The convolution operation convolves the sample data of the input hidden layer with the weight matrix to produce the output feature map, as shown in Equation (4).
$h_{flm}^{(t)(v+1)} = \sum_{f'=0}^{F_v-1} \sum_{j=0}^{R_c-1} \sum_{k=0}^{R_c-1} \Theta_{f'jk}^{(o)f}\, h_{f',\, S_c l + j,\, S_c m + k}^{(t)(v)}$ (4)
where Rc is the size of the receptive field of the convolutional layer in the width and height directions; o is the convolutional kernel index; Θ is a four-dimensional tensor [F, Fp, Rc, Rc]; F is the number of feature mappings in the convolutional input layer; and Fp is the number of feature mappings in the convolutional output layer. Θ is updated by backpropagation of the gradient. The network is set up with two convolutional layers: in the first layer F = 22, Fp = 16 and the convolution kernel size is 2 × 1 × 22; in the second layer F = 16, Fp = 32 and the kernel size is 2 × 1 × 16. Scl + j ∈ [0, Nv − 1] and Scm + k ∈ [0, Tv − 1].
Using the same padding to retain the input shape, the size after filling is [Tmb, Nv + 2P, Tv + 2P, Fv], and P is the filling size. The width and height of the output image is determined by the step size, which is Sc = 1 for both convolutional layers.
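Since the network's inputs are 32 × 1 patches, the convolution of Equation (4) is effectively one-dimensional. The sketch below implements that sum directly with "same" zero-padding and stride Sc = 1, on toy dimensions rather than the paper's 22/16/32 channels:

```python
def conv1d_same(x, theta):
    """1-D multi-channel convolution per Equation (4), 'same' zero-padding,
    stride 1.

    x:     input feature maps, x[f][l]  (F channels, N positions)
    theta: kernels, theta[o][f][j]      (Fp output ch., F input ch., Rc taps)
    Returns output maps y[o][l] with the same spatial length as the input.
    """
    f_in, n = len(x), len(x[0])
    rc = len(theta[0][0])
    pad = rc // 2  # left zero-padding; total padding rc-1 keeps the length
    xp = [[0.0] * pad + ch + [0.0] * (rc - 1 - pad) for ch in x]
    out = []
    for kern in theta:
        row = [sum(kern[f][j] * xp[f][l + j]
                   for f in range(f_in) for j in range(rc))
               for l in range(n)]
        out.append(row)
    return out

# One input channel, one 2-tap averaging kernel (like the 2x1 kernels above).
y = conv1d_same([[1.0, 2.0, 3.0, 4.0]], [[[0.5, 0.5]]])
```

Each output position is a weighted sum over all input channels within the receptive field, which is exactly the triple sum in Equation (4).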
The activation function enables the neural network to learn and express complex nonlinear relationships. In this paper, the activation function is ReLU, which is located after the convolutional layer, taking into account padding, as shown in Equation (5).
$h_{f,\,l+P,\,m+P}^{(t)(v+1)} = g\left(h_{flm}^{(t)(v)}\right)$ (5)
Batch normalization is performed after the convolutional layer, as shown in Equations (6)–(8).
$\tilde{h}_{flm}^{(t)(n)} = \dfrac{h_{flm}^{(t)(v)} - \hat{h}_f^{(n)}}{\sqrt{\left(\hat{\sigma}_f^{(n)}\right)^2 + \varepsilon}}$ (6)
$\hat{h}_f^{(n)} = \dfrac{1}{T_{mb} N_n T_n} \sum_{t=0}^{T_{mb}-1} \sum_{l=0}^{N_n-1} \sum_{m=0}^{T_n-1} h_{flm}^{(t)(v)}$ (7)
$\left(\hat{\sigma}_f^{(n)}\right)^2 = \dfrac{1}{T_{mb} N_n T_n} \sum_{t=0}^{T_{mb}-1} \sum_{l=0}^{N_n-1} \sum_{m=0}^{T_n-1} \left(h_{flm}^{(t)(v)} - \hat{h}_f^{(n)}\right)^2$ (8)
where n is the index of the batch normalization layer and ε is the offset.
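Equations (6)–(8) standardize each channel by its own batch statistics; a minimal per-channel sketch, assuming the channel's activations across the batch and spatial positions have been flattened into one list:

```python
import math

def batch_norm_channel(h, eps=1e-5):
    """Batch normalization per Equations (6)-(8) for one channel.

    Each value is centered by the batch mean and scaled by the batch
    standard deviation; eps is the small offset avoiding division by zero.
    """
    mean = sum(h) / len(h)                                  # Eq. (7)
    var = sum((v - mean) ** 2 for v in h) / len(h)          # Eq. (8)
    return [(v - mean) / math.sqrt(var + eps) for v in h]   # Eq. (6)

normed = batch_norm_channel([1.0, 3.0])
```

After normalization each channel has (approximately) zero mean and unit variance, which stabilizes training of the stacked convolutional layers.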
The pooling layer down-samples the data, filtering to highlight important features so that the deep network can learn advanced features more efficiently. Maximum pooling is used to mitigate overfitting of convolutional neural networks (Equation (9)).
$h_{f,\,l+P,\,m+P}^{(t)(v+1)} = a_{flm}^{(t)(v)} = \max_{j,k \in [0,\,R_p-1]} h_{f,\, S_p l + j + P,\, S_p m + k + P}^{(t)(v)}$ (9)
where v is the index of the hidden layer of the maximum pooling layer; Rp is the receptive field of the pooling layer; and Spl + j ∈ [0, NV − 1], Spm + k ∈ [0, TV − 1].
The fully connected layer acts as a classifier in CNNs and outputs the category information of the image (Equation (10)).
$h_f^{(t)(v+1)} = \sum_{f'=0}^{F_v-1} \Theta_{f'}^{(o)f}\, h_{f'}^{(t)(v)}$ (10)
The Softmax layer normalizes the output of the fully connected layer and converts it into a category probability distribution (Equation (11)), which helps the model to perform a multi-classification task.
$h_f^{(t)(N)} = \dfrac{e^{h_f^{(t)(N-1)}}}{\sum_{f'=0}^{F_N-1} e^{h_{f'}^{(t)(N-1)}}}$ (11)
The output layer, after Softmax, outputs the number of categories and determines which category the pixel belongs to by the output vector (Equation (12)). Cross-Entropy is used as a loss function for the CNNs model (Equation (13)).
$y_f^{(t)(N-1)} = \sum_{f'=0}^{F_N-1} \Theta_{f'}^{(o)f}\, h_{f'}^{(t)(N-1)}$ (12)
$Loss(y, y^*) = -\sum_{i=1}^{m} y_i \log\left(y_i^*\right), \quad i \in [1, m]$ (13)
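Equations (11) and (13) together form the training objective; a minimal plain-Python sketch (the max-shift in the softmax is a standard numerical-stability trick, an implementation detail not present in the equations):

```python
import math

def softmax(logits):
    """Softmax per Equation (11); shifting by max(logits) does not change
    the result but prevents overflow in exp()."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(y_true, y_prob, eps=1e-12):
    """Cross-entropy loss per Equation (13); y_true is a one-hot vector
    over the seven land use classes, y_prob the softmax output."""
    return -sum(t * math.log(p + eps) for t, p in zip(y_true, y_prob))

# Seven logits, one per land use class; the first class is the true one.
probs = softmax([2.0, 1.0, 0.1, 0.0, 0.0, 0.0, 0.0])
loss = cross_entropy([1, 0, 0, 0, 0, 0, 0], probs)
```

The loss is minimized when the softmax probability of the true class approaches 1.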

2.3. Image Set Construction

The images selected for this study had to cover the four basic bands to ensure a comprehensive and accurate analysis of land use features. The majority of satellite sensors provide image data including red, green, blue and near-infrared bands, so the classification method is almost unconstrained by data source; it is highly generalizable and reduces dependence on hyperspectral data while improving the efficiency and flexibility of classification. Imagery data should provide complete coverage of the entire study area to ensure that all relevant spatial information is captured. For optimal feature extraction and interpretation, the physical information in the image must present significant distinguishable features, with feature shapes, colors and textures clearly visible, contributing to reliable image classification and analysis.
To ensure the quality of the image data, the captured data need to avoid periods of snow and ice cover, which can mask features and affect spectral characteristics, and can lead to serious classification errors. Cloud coverage in the imagery should not exceed 10%, as excessive cloud coverage obscures feature information. Cloud detection algorithms should be used to identify cloud-obscured areas in the dataset to ensure that only accessible and reliable pixels are retained for subsequent analysis and to improve the overall data quality and robustness of the classification results [94]. The HOT index is a commonly used cloud index that distinguishes clouds from clear pixels by calculating the perpendicular distance from the pixel to the clear line [95]. Since the HOT values of cloud pixels are generally greater than clear-sky pixels, the potential cloud mask (pCM) at pixel x (pCMx) can be computed as Equations (14) and (15) [96].
$HOT = \dfrac{a \times B_{blue} - B_{red} + b}{\sqrt{1 + a^2}}$ (14)
$pCM_x = \begin{cases} 1, & \text{if } HOT_x > T_{HOT} \\ 0, & \text{if } HOT_x \le T_{HOT} \end{cases}$ (15)
where Bblue and Bred are the blue and red reflectance of a pixel, respectively; a and b are the slope and intercept of the ‘clear-line’, respectively; pCMx is the potential cloud mask at pixel x; and THOT is HOT threshold.
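Equations (14) and (15) can be applied pixel by pixel. Note that the clear-line slope a and intercept b are scene-specific and must be fitted to clear-sky pixels, so the values in the example below are placeholders, not recommended settings:

```python
import math

def hot_index(b_blue, b_red, a, b):
    """Haze Optimized Transformation per Equation (14): the perpendicular
    distance from a pixel to the scene's fitted clear line in the
    blue-red reflectance space."""
    return (a * b_blue - b_red + b) / math.sqrt(1 + a * a)

def potential_cloud_mask(pixels, a, b, t_hot):
    """Potential cloud mask per Equation (15): 1 where HOT exceeds the
    threshold, 0 otherwise. `pixels` is a list of (blue, red) pairs."""
    return [1 if hot_index(bb, br, a, b) > t_hot else 0 for bb, br in pixels]

# Placeholder clear-line (a=1.5, b=0.0) and threshold, for illustration only:
# a bright hazy pixel versus a dark clear-sky pixel.
mask = potential_cloud_mask([(0.6, 0.3), (0.08, 0.1)], a=1.5, b=0.0, t_hot=0.2)
```

Pixels flagged 1 would be excluded before the indices and CNN classification are computed.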

2.4. Sample Labeling

Based on the actual geographic environment of the study area, random sampling and sample labeling were carried out in each class of area through visual interpretation of high-resolution images; experts in the visual interpretation of remote sensing imagery were invited to review and revise the sample categories, and field surveys were carried out in areas susceptible to confusion on the images. Sample slices were generated for each land use; each slice consists of four bands (blue, green, red and near-infrared) with pixel-level ground-truth labels. Seven primary land use types, forest, grassland, cropland, built-up, water, wetland and bare, were labeled to establish an accurate and reliable sample database, ensuring that each sample represented a single land use to maintain sample independence and reduce bias in model training (Figure 4).

2.5. Accuracy Assessment

(1)
Model accuracy assessment
The accuracy of the classification model was assessed using the test dataset. The performance of each land use was visualized through a confusion matrix. Additionally, the performance of CNNs was evaluated using the kappa consistency coefficient [97], precision, recall and F1-Score.
(2)
Field validation
The accuracy of the land use classification results was strictly verified through field investigation. Random sampling combined with key and typical areas was used to determine the patches to be validated by field surveys, according to the area of each land category. The exact locations of the validation points were located by latitude and longitude coordinates to ensure consistency with the locations selected in the remote sensing images (Figure 5). Field validation was timed to coincide with the acquisition of the remote sensing images as closely as possible, to minimize the influence of factors such as seasonal changes on the validation results.
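The metrics listed under (1) can all be derived from the confusion matrix; a minimal sketch, using the standard definitions of overall accuracy, Cohen's kappa, precision, recall and F1-Score:

```python
def classification_metrics(cm):
    """Per-class precision/recall/F1, overall accuracy and Cohen's kappa
    from a square confusion matrix cm[true][predicted]."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    oa = correct / total  # overall accuracy
    # Expected agreement by chance, for the kappa coefficient.
    pe = sum(sum(cm[i]) * sum(r[i] for r in cm) for i in range(n)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    per_class = {}
    for i in range(n):
        tp = cm[i][i]
        prec = tp / max(sum(r[i] for r in cm), 1)   # column sum: predicted i
        rec = tp / max(sum(cm[i]), 1)               # row sum: truly i
        f1 = 2 * prec * rec / max(prec + rec, 1e-12)
        per_class[i] = (prec, rec, f1)
    return oa, kappa, per_class

# Toy 2-class matrix: 9+8 correct, 1+2 confused.
oa, kappa, per = classification_metrics([[9, 1], [2, 8]])
```

In practice the matrix would be 7 × 7, one row per land use type, built from the held-out test set.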

2.6. Method Feasibility Analysis

Samples of the seven land use types were feature-augmented and then used to train the CNNs in order to analyze the feasibility of the feature-augmented deep learning classification method. The overall accuracies of the model on the training and test sets were 99.68% and 99.69%, respectively. The ROC curves for the seven land use types in the training and test sets are shown in Figure 6. The AUC values for forest, grassland, cropland, built-up, bare, water and wetland in both sets all approach 1, indicating that the characteristics of the training data are well learned by the model and that the model generalizes strongly to new data. The precision, recall and F1-Score of the training and test sets are shown in Figure 7, which demonstrate the strong performance and accuracy of the deep learning classification method based on feature augmentation.

3. Results

3.1. Study Area and Data Source

Three typical regions were selected for analysis to characterize the strengths and weaknesses of the model. Bayannur (40°13′–42°28′N, 105°12′–109°53′E) (Figure 8A) is well endowed with a variety of ecosystem elements, with an ecological spatial pattern of “one mountain, two plains, east forests, west sands and one river and many lakes”, characterized by typical desert grasslands and irrigated agriculture. Ordos (37°35′–40°51′N, 106°42′–111°27′E) (Figure 8B) is located in the southwest of the Inner Mongolia Autonomous Region. It serves as a transitional area between semi-arid and arid zones and between the sandy plateau and the Loess Plateau, and is a region of agricultural and pastoral interaction dominated by grasslands and deserts. The Hong Lake Basin (29°28′–30°25′N, 112°6′–114°3′E) (Figure 8C) has a flat topography crisscrossed with water systems, developed regional agriculture and fishery areas, interspersed drylands and paddy fields and complex land uses. Both Bayannur and Ordos are located in the middle reaches of the Yellow River and are classified as semi-arid to arid regions, while the HLB is situated in the middle reaches of the Yangtze River and falls within a humid region.
This study utilized Gaofen-1 Satellite Images (https://data.cresda.cn/, accessed on 1 July 2022). The GF-1 satellite is equipped with two high-resolution panchromatic/multispectral cameras (PMSs) and four wide-field-of-view multispectral cameras (WFVs), enabling high-precision and extensive space-based observation services for land resource management and environmental protection agencies. For classification, six images taken in Bayannur, Inner Mongolia Autonomous Region, on 30 June 2022; seven images taken in Ordos, Inner Mongolia Autonomous Region, on 24 August 2023; and four images taken in the HLB on 1 September 2023 were utilized.

3.2. Accuracy Assessment Results

The climate in northern regions is relatively arid, with lower humidity and dust content in the air, so optical remote sensing images typically exhibit higher resolution and clarity. The weather and climate conditions in these regions are generally stable, with minimal cloud cover, resulting in a wealth of available and continuous imagery. Southern regions experience a more humid climate, with higher moisture content in the air; the presence of moisture and fog can degrade image quality, reducing the effective resolution and posing challenges for land use classification. The performance of the classification model was assessed using a confusion matrix (Figure 9). The deep learning classification method based on feature augmentation demonstrated a robust performance across the different regions. The overall accuracies of the test set classification in Bayannur, Ordos and the HLB were 99.70%, 94.81% and 90.07%, respectively, and the kappa coefficients were 99.96%, 98.62% and 99.76%, respectively. The precision, recall and F1-Scores of each land use in the classification model for the three study areas are shown in Table 2. Easily confused grass and bare areas showed excellent differentiation in the test data. These results indicate an exceptionally high level of agreement between the modeled and actual classifications, reflecting the robustness and precision of the feature-augmented deep learning model in accurately classifying land use.

3.3. Land Use Classification

(1)
Bayannur
The land use classification results with seven types were obtained in Bayannur using the augmented sample dataset (Figure 10). Large areas of bare land are located in the north of Bayannur, and grassland is interspersed within the bare areas, with built-up land sparsely distributed. The Hetao Irrigation District south of the city is predominantly cropland, with clustered towns and cities scattered throughout. The southeastern part of the city is rich in features; among them, Ulansuhai Lake and the Wula Mountains are key ecological areas for comprehensive environmental management and the ecological restoration of forests and grasses.
(2)
Ordos
Ordos lies in the transition from semi-arid to arid regions and from the wind–sand plateau to the Loess Plateau, and is also an area of intermingled agriculture and animal husbandry (Figure 11). The Yellow River separates Ordos from Bayannur, and cropland is distributed along the river. The center of the region is the Kubuqi Desert and the south is the Mao Wusu Sandland, with grasslands and deserts as the main land uses and widely spaced building complexes. The region is seriously affected by wind erosion, and problems such as land desertification, sandstorms, soil erosion, pasture degradation and resource exploitation make its ecosystem fragile.
(3)
HLB
The HLB is located in the central part of Hubei Province, China, and the watershed contains rich water and wetland resources (Figure 12). Wetlands are mainly distributed within the Hong Lake embankment, and there are a large number of aquatic cash crops, such as lotus root, in the area, both for the development of eco-agriculture and as wetland protection, which are categorized as cropland for identification in this study. Agriculture in the HLB is one of the core components of the economic activities in the region, and agricultural land is widely distributed. As an important fishery production area, there are a large number of ponds distributed for aquaculture in addition to the Hong Lake. Towns and villages are staggered, with denser building complexes than in Bayannur and Ordos.

3.4. Field Validation

Verifying the accuracy of the land classification in the regions is an important basis for measuring the advantages and disadvantages of the classification method. The locations of the validation points, which correspond to various land use categories, including built-up areas, cropland, grassland, forest, bare land, water and wetlands, within Bayannur, Ordos and the HLB are illustrated in Figure 13. These points serve as ground-truth references for assessing the accuracy of the feature-augmented deep learning land use classification method. A direct comparison between the field survey outcomes and the classification results, presented in Table 3, reveals a high degree of accuracy in the feature-augmented deep learning classification method. This indicates that the product classifications are reliable and closely match the field observations; the overall accuracy is 98.18%, confirming the effectiveness of the classification methodology applied to the regions.
(1)
Cropland
The Hetao Irrigation District, located in the south of Bayannur, is a critical agricultural production area in the arid and semi-arid region, and the HLB is also an important food base. Holding the farmland protection red line is essential for local food security and sustainable agricultural development. To ensure the effective implementation of farmland protection measures, accurate remote sensing monitoring technology is important. Dynamic land use monitoring can comprehensively and continuously track the status of permanent basic farmland in the region, providing key data to ensure regional compliance with agricultural land protection policies. In Figure 14, the feature-augmented deep learning classification results agree with the field survey data, demonstrating the accuracy and reliability of the classification method. Cropland in Bayannur and Ordos is mostly dryland in large patches, while the HLB has a relatively large distribution of paddy fields. Effective, real-time monitoring of farmland utilization facilitates the timely detection of illegal land use changes and supports the enforcement of legal measures to prevent violations.
(2)
Forest, grass and bare
The Bayannur and Ordos landscape is marked by a complex mosaic, with large tracts of bare land interspersed with forest and grassland, and is highly vulnerable to the detrimental effects of wind erosion, water erosion and human activities. The prevention and control of desertification are central to regional ecological sustainability, and the effective monitoring of the forest, grassland and bare land areas is of importance.
In Figure 15 and Figure 16, the spatial intermingling is particularly pronounced in the transitional areas between bare land and grassland, where it becomes challenging to distinguish the two land use types. Such areas are easy to misclassify, as the characteristics of bare land and grassland can overlap visually, complicating accurate land use classification. During the field validation process, three validation points exhibited confusion between bare land and grassland; the remaining validation points demonstrated a high classification accuracy. This highlights the challenges inherent in monitoring and classifying land use in ecologically sensitive areas, where subtle transitions between land use types can lead to ambiguities in the classification. The overall high accuracy of the classification results underscores the effectiveness of the monitoring approach employed, which is crucial for informing desertification control strategies and guiding ecological restoration efforts in vulnerable regions.
(3)
Wetland
Ulansuhai provides a wide range of essential ecological functions, including climate regulation, water conservation, biodiversity protection, material production and the maintenance of regional ecological balance. Accurate monitoring of the dynamic changes occurring within this wetland is of the utmost importance for both ecological management and conservation.
The China Land Cover Dataset (CLCD) (http://irsip.whu.edu.cn/resources/CLCD.php, accessed on 1 July 2022) is a prominent resource for land use classification across China, with an overall accuracy of 80%. This dataset is highly valuable for global change research, providing essential information for assessing land use patterns and environmental changes. In the case of the Ulansuhai region, the CLCD categorizes large wetland areas as either forested land or agricultural land, which undermines its effectiveness in accurately capturing the distinct features and dynamics of wetland ecosystems (Figure 17b). This classification presents challenges for wetland management and regulation. The feature-augmented deep learning classification method proposed in this study demonstrates an improvement in the accurate identification of wetlands (Figure 17a). The results of the classification method are consistent with actual photographic data obtained at validation points and have a clear delineation of wetland boundaries. The feature-augmented deep learning classification method successfully differentiates various vegetation types, including crops, grasslands, forests and reeds.
(4)
Water
A high field validation accuracy in Bayannur, Ordos and the HLB proves the precision of the water identification (Figure 18). The method also accurately distinguishes the boundaries of wetlands and water bodies in the complex area where reeds and water bodies are intertwined in the Ulansuhai Lake, fully demonstrating its ability to recognize complex and transitional land use. These promising results highlight the potential of the classification method for broader applications in environmental monitoring and management. The accuracy of the method in land use identification can also be extended to other domains, such as flood mapping, to enhance the efficiency of disaster response and improve rescue operations. Especially in ecologically sensitive areas (Bayannur) and flood-prone areas (HLB), the ability to quickly and accurately reflect changes in water is of great practical significance for environmental protection and disaster management.
(5)
Built-up
The use of remote sensing technology to monitor urban development addresses the limitations of traditional manual inspection methods, offers significant advantages in terms of efficiency and accuracy, improves the precision of urban monitoring efforts and reduces the cost and time required for large-scale inspections.
Validation points were set up in the Ulatqian Banner, Dalat Banner and Datong Lake Management Area (Figure 19). A comparison of the classification results with field validation data reveals that the feature-augmented deep learning classification method achieves high accuracy in identifying built-up areas. Using spectral indices, the method successfully distinguishes urban construction areas without the shortwave infrared bands commonly used in traditional classification approaches, offering a more cost-effective and time-efficient solution for monitoring urban growth and facilitating more accessible and sustainable urban planning and development. This capability to monitor urbanization with high accuracy and reduced cost is important for both urban management and environmental conservation, especially in regions experiencing rapid development. Deep learning-based remote sensing is therefore an invaluable tool for enhancing urban spatial planning, mitigating environmental impacts and supporting the sustainable growth of urban areas.

4. Discussion

4.1. Overcoming Land Use Classification Challenges

High-accuracy land use classification is crucial for environmental management, ecological assessment and remote sensing monitoring. A reliable deep learning architecture and rich feature representations are effective ways to overcome the classification challenges. Pan et al. [98] proposed a CNN-based multispectral LiDAR land cover classification framework to classify the land use of a small town in Whitchurch-Stouffville, obtaining an overall classification accuracy of 95.5%. Azedou et al. [14] used a deep learning method to distinguish six land use types in Talassemtane National Park based on Sentinel-2 bands combined with five spectral indices to augment the land class features, obtaining an overall accuracy of 94.5%. Kwan et al. [99] fused four-band data and LiDAR data for CNN learning and augmented a limited number of band images using the Extended Multi-Attribute Profile, obtaining a highest overall model accuracy of 87.92%. The classification methods in the above studies are mainly applied to relatively small areas. To classify the land use of large, complex areas both accurately and economically, the feature augmentation deep learning method proposed in this paper integrates 22 spectral indices to augment the land use features of four-band images, which helps to differentiate the seven major land use classes. The three study areas of this paper, Bayannur, Ordos and the HLB, cover 65,000 km2, 87,000 km2 and 7629 km2, respectively, and the overall accuracies of the classification models are 99.70%, 94.81% and 90.07%, respectively. These results show that the method can effectively realize accurate land use monitoring in large-scale regions and adapt to different geographic environments and complex land use features.
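As a minimal sketch of the feature augmentation step described above, the following Python fragment stacks a handful of well-known spectral indices onto a four-band image to form the multi-channel CNN input. The band ordering and the subset of indices shown are illustrative assumptions, not the paper's exact configuration, which integrates all 22 indices for a 26-channel input.

```python
import numpy as np

def augment(bands: np.ndarray) -> np.ndarray:
    """Stack spectral-index channels onto the raw bands.

    `bands` is (H, W, 4), assumed ordered blue, green, red, NIR.
    Only 4 of the 22 indices are shown here for brevity.
    """
    b, g, r, n = (bands[..., i].astype(float) for i in range(4))
    eps = 1e-9  # guard against division by zero
    ndvi = (n - r) / (n + r + eps)        # vegetation vigor
    ndwi = (g - n) / (g + n + eps)        # open water
    savi = 1.5 * (n - r) / (n + r + 0.5)  # soil-adjusted vegetation
    dvi = n - r                           # difference vegetation
    return np.concatenate([bands, np.stack([ndvi, ndwi, savi, dvi], -1)], -1)

patch = np.random.rand(32, 32, 4)
x = augment(patch)
print(x.shape)  # (32, 32, 8); with all 22 indices this would be (32, 32, 26)
```

The augmented tensor can then be fed to the CNN in place of the raw four-band patch, so no change to the network input pipeline beyond the channel count is needed.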

4.2. Spectral Indices for Data Augmentation

As shown in Figure 20, Figure 21 and Figure 22, in areas where land use types are varied and complexly interspersed, different land use types may show similarities in some spectral features, making classification more difficult. The spectral features of wetlands can be easily confused with those of other vegetated areas and with water [100]. Indices such as the NDVI, NDWI, DVI, NRI and MSAVI provide features that distinguish wetlands from water, and the VARI and NDGI can highlight wetlands in areas of vegetative cover. Built-up and bare land are land use types that are easily conflated with each other [101,102]. In Bayannur, the values of the TVI, NDVI, SAVI, WDRVI, OSAVI and NDSI are closer for built-up and bare land, while in Ordos, the spectral features of the two land use types are more similar in the MSRI and RVI. Without the use of shortwave infrared, more features are required to augment the representation of built-up and bare areas to improve the model adaptability and classification accuracy in different areas. The EVI, RBI, CIVE, SIPI, NPCI, NDGI and VARI provide additional differentiating features for built-up and bare areas.
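To illustrate how a second index resolves a confusion that a single index leaves open, consider a toy example with hypothetical reflectances (assumed values, not measurements from the paper's study areas) for open water and reed wetland: NDWI separates water from vegetation, while NDVI confirms the vegetated character of the wetland.

```python
import numpy as np

rng = np.random.default_rng(0)

def ndvi(red, nir):
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    return (green - nir) / (green + nir)

# Hypothetical per-pixel reflectances: both covers are dark in red,
# but wetland reeds reflect strongly in the NIR while water absorbs it.
water = dict(green=rng.normal(0.06, 0.01, 500),
             red=rng.normal(0.04, 0.01, 500),
             nir=rng.normal(0.02, 0.005, 500))
wetland = dict(green=rng.normal(0.05, 0.01, 500),
               red=rng.normal(0.04, 0.01, 500),
               nir=rng.normal(0.30, 0.05, 500))

print(ndwi(water['green'], water['nir']).mean() > 0)      # water: NDWI positive
print(ndwi(wetland['green'], wetland['nir']).mean() < 0)  # wetland: NDWI negative
print(ndvi(wetland['red'], wetland['nir']).mean() > 0.5)  # wetland: strongly vegetated
```

Each pixel thus gets a position in a multi-index feature space, and classes that overlap along one axis can still be separated along another.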
Spectral features vary across regions even for the same land use [22,103]. The distribution of NDWI values for water in the HLB, which has a humid climate and abundant water resources, is relatively concentrated. In arid and semi-arid areas, water is susceptible to the influence of the surrounding soil albedo; especially in Ordos, where the Yellow River is the main water source, the sediment suspended in the water can produce a relatively wide range of NDWI values [104,105]. Compared to Ordos, cropland in the HLB and Bayannur shows greater variation in spectral indices such as the LAI, ARVI, GNDVI and OSAVI. This is mainly due to the complexity of the cropping structure in the HLB and Bayannur, which are major grain-producing areas. Cropland in the HLB is interspersed with paddies and dryland, and the Hetao Irrigation District in Bayannur has both fallow and cultivated plots during the season, with significant differences in crop status that affect the spectral features.
Confusable land use types also vary across areas. In Bayannur, the value ranges of the spectral indices overlap considerably between forest and cropland, mainly because the forest in the area presents typical mountain forest characteristics and is difficult to distinguish from cropland during the period of vigorous vegetation growth; the RBI and EVI can provide differentiating features for these two land use types. Ordos requires a focus on the distinction between forest and grassland. The forest in the area is mainly composed of low shrubs, such as lemonade and salal, whose spectral characteristics are closer to those of grasslands such as the typical steppe and desert steppe, and the RBI similarly provides differentiating features for these two land use types. In the HLB, cropland widely interspersed with paddy fields behaves more like water in terms of the NDGI, NPCI and VARI.
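How confusable two classes are under a given index can be screened with a simple range-overlap statistic. The sketch below (both the statistic and the toy index values are assumptions of this illustration, not the paper's method) shows an index whose class ranges are disjoint scoring lower than one whose ranges overlap heavily, which is the situation described above for the RBI versus the NDVI.

```python
import numpy as np

def range_overlap(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of the combined value spread shared by two samples.
    0.0 means the class ranges are disjoint (a discriminative index);
    values near 1.0 mean heavy overlap (a confusable index)."""
    lo, hi = max(a.min(), b.min()), min(a.max(), b.max())
    span = max(a.max(), b.max()) - min(a.min(), b.min())
    return max(0.0, hi - lo) / span if span > 0 else 1.0

# Hypothetical index samples for two classes that are hard to separate.
forest_rbi = np.array([0.15, 0.20, 0.25])
grass_rbi = np.array([0.45, 0.50, 0.55])
forest_ndvi = np.array([0.50, 0.60, 0.70])
grass_ndvi = np.array([0.45, 0.55, 0.65])

print(range_overlap(forest_rbi, grass_rbi))               # 0.0 -> discriminative
print(round(range_overlap(forest_ndvi, grass_ndvi), 2))   # 0.6 -> confusable
```

Screening each candidate index this way, per pair of confusable classes and per region, is one plausible route to selecting which of the 22 indices carries the discriminating signal in a given area.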
Feature augmentation using spectral indices derived from the four basic bands is an effective way to improve the classification accuracy of land use. The 22 integrated spectral indices compensate for the lack of information in specific bands and augment the land use features from different perspectives, improving the adaptability of the classification model in complex areas. Especially where land use is complex and spectral features easily overlap, the combination of multiple indices can effectively strengthen the ability to differentiate land uses and reduce misclassification, thereby improving the overall accuracy and stability of the classification.

4.3. Limitations of Research

Although feature augmentation reduces the sample labeling effort to a degree, it is still necessary to rely on visual interpretation and field surveys for sample checking. Advanced techniques, such as generative adversarial networks (GANs), deep learning and transfer learning, could be combined to automatically generate samples, improving the generalization ability and accuracy of the classification model in the absence of a large number of labeled samples. The current number of forest samples in the three regions is small, and the precision and recall for forest are slightly lower than for other land use types, which calls for additional forest samples or further augmentation of forest features. The method can also be extended to other areas to verify its applicability.

5. Conclusions

Accurate and timely land use assessments are essential for the effective management of ecological and environmental resources. This study identifies the types and distribution of different land uses within Bayannur, Ordos and the HLB using a deep learning approach based on feature augmentation. The spectral indices and network architecture were carefully selected to achieve the highest overall accuracy using only the four basic bands of the images.
The results showed that integrating 22 spectral indices can effectively augment the features of different land uses. The classification accuracy of the feature-augmented deep learning model was excellent: the overall accuracy and kappa coefficient were above 90% in all three study areas, and the precision, recall and F1-score were, respectively, 99.28%, 99.13% and 0.99203 in Bayannur; 93.75%, 90.63% and 0.92071 in Ordos; and 90.01%, 90.34% and 0.89890 in the HLB. The method thus accurately distinguishes easily confused land uses while remaining economical. Implementing deep learning methods based on feature augmentation can improve land use monitoring and ecological management, and support decision makers with accurate data. It should be noted that this paper applies the classification method to three regions; it could be applied to more regions in the future to validate its applicability, and the regional sample dataset can be further constructed economically and efficiently based on AI to serve land use classification.
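The evaluation metrics quoted above can all be derived from a single confusion matrix. The following sketch computes overall accuracy, Cohen's kappa and macro-averaged precision, recall and F1 for a toy 3-class matrix; the numbers are illustrative, not the paper's confusion matrices.

```python
import numpy as np

def classification_metrics(cm: np.ndarray):
    """Metrics from a confusion matrix (rows = reference, cols = predicted)."""
    total = cm.sum()
    oa = np.trace(cm) / total  # overall accuracy
    # Cohen's kappa: agreement beyond chance, with chance agreement
    # estimated from the row and column marginals.
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
    kappa = (oa - pe) / (1 - pe)
    precision = np.diag(cm) / cm.sum(axis=0)
    recall = np.diag(cm) / cm.sum(axis=1)
    f1 = 2 * precision * recall / (precision + recall)
    return oa, kappa, precision.mean(), recall.mean(), f1.mean()

# Toy 3-class confusion matrix with 100 reference samples per class.
cm = np.array([[90, 5, 5],
               [4, 92, 4],
               [6, 3, 91]])
oa, kappa, p, r, f1 = classification_metrics(cm)
print(round(oa, 2), round(kappa, 3))  # 0.91 0.865
```

Macro averaging, as sketched here, weights every class equally, which is one reasonable way to summarize per-class precision and recall when class sample counts differ.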

Author Contributions

Conceptualization, W.Z. and Y.W.; methodology, Y.W.; validation, X.L., M.L. and A.L.; formal analysis, Y.W.; investigation, A.J.; data curation, Y.W. and X.L.; writing—original draft preparation, Y.W.; writing—review and editing, Y.W., W.Z. and H.P.; visualization, N.M. and L.W.; supervision, W.Z.; funding acquisition, W.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Science and Technology Major Project of China (Grant number 69-Y50G08-9001-22/23).

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NDVI: Normalized Difference Vegetation Index
LAI: Leaf Area Index
SAVI: Soil-Adjusted Vegetation Index
EVI: Enhanced Vegetation Index
ARVI: Atmospherically Resistant Vegetation Index
DVI: Difference Vegetation Index
GNDVI: Green Normalized Difference Vegetation Index
NDGI: Normalized Difference Greenness Index
NPCI: Normalized Pigment Chlorophyll Index
NRI: Nitrogen Reflectance Index
OSAVI: Optimized Soil-Adjusted Vegetation Index
MSAVI: Modified Soil-Adjusted Vegetation Index
RVI: Ratio Vegetation Index
SIPI: Structure-Insensitive Pigment Index
TVI: Triangular Vegetation Index
VARI: Visible Atmospherically Resistant Index
WDRVI: Wide Dynamic Range Vegetation Index
CIVE: Color Index of Vegetation Extraction
MSRI: Modified Simple Ratio Index
NDWI: Normalized Difference Water Index
NDSI: Normalized Difference Salinity Index
RBI: Ratio Built-up Index

References

  1. Shi, K.; Liu, G.; Zhou, L.; Cui, Y.; Liu, S.; Wu, Y. Satellite remote sensing data reveal increased slope climbing of urban land expansion worldwide. Landsc. Urban Plan. 2023, 235, 104755. [Google Scholar] [CrossRef]
  2. Sun, Y.; Li, X.; Shi, H.; Cui, J.; Wang, W.; Ma, H.; Chen, N. Modeling salinized wasteland using remote sensing with the integration of decision tree and multiple validation approaches in Hetao irrigation district of China. Catena 2022, 209, 105854. [Google Scholar] [CrossRef]
  3. Zhang, W.; Huang, C.; Peng, H.; Wang, Y.; Zhao, Y.; Chen, T. Chlorophyll a (Chl-a) concentration measurement and prediction in Taihu lake based on MODIS image data. In Proceedings of the 8th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Shanghai, China, 25–27 June 2008; pp. 352–359. [Google Scholar]
  4. Kafy, A.A.; Al Rakib, A.; Akter, K.S.; Jahir, D.M.A.; Sikdar, M.S.; Ashrafi, T.J.; Mallik, S.; Rahman, M.M. Assessing and predicting land use/land cover, land surface temperature and urban thermal field variance index using Landsat imagery for Dhaka Metropolitan area. Environ. Chall. 2021, 4, 100192. [Google Scholar]
  5. Pankaj, P.; Joseph, L.V.; Priyankar, C.; Mahender, K. Evaluation and comparison of the earth observing sensors in land cover/land use studies using machine learning algorithms. Ecol. Inform. 2021, 68, 101522. [Google Scholar]
  6. Curtis, P.G.; Slay, C.M.; Harris, N.L.; Tyukavina, A.; Hansen, M.C. Classifying drivers of global forest loss. Science 2018, 361, 1108–1111. [Google Scholar] [CrossRef]
  7. Evan, R.D.; Kariyeva, J.; Jason, T.B.; Jennifer, N.H. Large-scale probabilistic identification of boreal peatlands using Google Earth Engine, open-access satellite data, and machine learning. PLoS ONE 2019, 14, e0218165. [Google Scholar]
  8. Ludwig, C.; Walli, A.; Schleicher, C.; Weichselbaum, J.; Riffler, M. A highly automated algorithm for wetland detection using multi-temporal optical satellite data. Remote Sens. Environ. 2019, 224, 333–351. [Google Scholar] [CrossRef]
  9. Calderón-Loor, M.; Hadjikakou, M.; Bryan, B.A. High-resolution wall-to-wall land-cover mapping and land change assessment for Australia from 1985 to 2015. Remote Sens. Environ. 2021, 252, 112148. [Google Scholar] [CrossRef]
  10. Masolele, R.N.; De Sy, V.; Herold, M.; Marcos, D.; Verbesselt, J.; Gieseke, F.; Mullissa, A.G.; Martius, C. Spatial and temporal deep learning methods for deriving land-use following deforestation: A pan-tropical case study using Landsat time series. Remote Sens. Environ. 2021, 264, 112600. [Google Scholar] [CrossRef]
  11. Nguyen, L.H.; Joshi, D.R.; Clay, D.E.; Henebry, G.M. Characterizing land cover/land use from multiple years of Landsat and MODIS time series: A novel approach using land surface phenology modeling and random forest classifier. Remote Sens. Environ. 2018, 238, 111017. [Google Scholar] [CrossRef]
  12. Silva, A.L.; Alves, D.S.; Ferreira, M.P. Landsat-Based Land Use Change Assessment in the Brazilian Atlantic Forest: Forest Transition and Sugarcane Expansion. Remote Sens. 2018, 10, 996. [Google Scholar] [CrossRef]
  13. Xu, P.; Tsendbazar, N.E.; Herold, M.; Clevers, J.G.; Li, L. Improving the characterization of global aquatic land cover types using multi-source earth observation data. Remote Sens. Environ. 2022, 278, 113103. [Google Scholar] [CrossRef]
  14. Azedou, A.; Amine, A.; Kisekka, I.; Lahssini, S.; Bouziani, Y.; Moukrim, S. Enhancing Land Cover/Land Use (LCLU) classification through a comparative analysis of hyperparameters optimization approaches for deep neural network (DNN). Ecol. Inform. 2023, 78, 102333. [Google Scholar] [CrossRef]
  15. Li, P.; Feng, Z. Extent and Area of Swidden in Montane Mainland Southeast Asia: Estimation by Multi-Step Thresholds with Landsat-8 OLI Data. Remote Sens. 2016, 8, 44. [Google Scholar] [CrossRef]
  16. Yang, X.; Zhao, S.; Qin, X.; Zhao, N.; Liang, L. Mapping of Urban Surface Water Bodies from Sentinel-2 MSI Imagery at 10 m Resolution via NDWI-Based Image Sharpening. Remote Sens. 2017, 9, 596. [Google Scholar] [CrossRef]
  17. Li, K.; Chen, Y. A Genetic Algorithm-Based Urban Cluster Automatic Threshold Method by Combining VIIRS DNB, NDVI, and NDBI to Monitor Urbanization. Remote Sens. 2018, 10, 277. [Google Scholar] [CrossRef]
  18. Wang, Z.; Wang, X.; Zhang, Y.; Liao, Z.; Cai, J.; Yu, J. Estimation of a suitable NDVI oriented for ecological water savings and phytoremediation in Baiyangdian Lake, North China. Ecol. Indic. 2023, 148, 110030. [Google Scholar] [CrossRef]
  19. Li, C.; Song, Y.; Qin, T.; Yan, D.; Zhang, X.; Zhu, L.; Dorjsuren, B.; Khalid, H. Spatiotemporal Variations of Global Terrestrial Typical Vegetation EVI and Their Responses to Climate Change from 2000 to 2021. Remote Sens. 2023, 15, 4245. [Google Scholar] [CrossRef]
  20. Paz-Kagan, T.; Chang, J.G.; Shoshany, M.; Sternberg, M.; Karnieli, A. Assessment of plant species distribution and diversity along a climatic gradient from Mediterranean woodlands to semi-arid shrublands. GISci. Remote Sens. 2021, 58, 929–953. [Google Scholar] [CrossRef]
  21. Fu, Y.; Tan, X.; Yao, Y.; Wang, L.; Shan, Y.; Yang, Y.; Jing, Z. Uncovering optimal vegetation indices for estimating wetland plant species diversity. Ecol. Indic. 2024, 166, 112367. [Google Scholar] [CrossRef]
  22. Wang, Z.; Wang, J.; Wang, W.; Zhang, C.; Mandakh, U.; Ganbat, D.; Myanganbuu, N. An Explanation of the Differences in Grassland NDVI Change in the Eastern Route of the China–Mongolia–Russia Economic Corridor. Remote Sens. 2025, 17, 867. [Google Scholar] [CrossRef]
  23. Zhou, Y.; Lin, C.; Wang, S.; Liu, W.; Tian, Y. Estimation of Building Density with the Integrated Use of GF-1 PMS and Radarsat-2 Data. Remote Sens. 2016, 8, 969. [Google Scholar] [CrossRef]
  24. Allbed, A.; Kumar, L.; Aldakheel, Y.Y. Assessing soil salinity using soil salinity and vegetation indices derived from IKONOS high-spatial resolution imageries: Applications in a date palm dominated region. Geoderma 2014, 230–231, 1–8. [Google Scholar] [CrossRef]
  25. Xu, D.; An, D.; Guo, X. The Impact of Non-Photosynthetic Vegetation on LAI Estimation by NDVI in Mixed Grassland. Remote Sens. 2020, 12, 1979. [Google Scholar] [CrossRef]
  26. Qiao, K.; Zhu, W.; Xie, Z.; Wu, S.; Li, S. New three red-edge vegetation index (VI3RE) for crop seasonal LAI prediction using Sentinel-2 data. Int. J. Appl. Earth Obs. Geoinf. 2024, 130, 103894. [Google Scholar] [CrossRef]
  27. Huang, X.; Lin, D.; Mao, X.; Zhao, Y. Multi-source data fusion for estimating maize leaf area index over the whole growing season under different mulching and irrigation conditions. Field Crops Res. 2023, 303, 109111. [Google Scholar] [CrossRef]
  28. Ren, H.; Zhou, G.; Zhang, F. Using negative soil adjustment factor in soil-adjusted vegetation index (SAVI) for aboveground living biomass estimation in arid grasslands. Remote Sens. Environ. 2018, 209, 439–445. [Google Scholar] [CrossRef]
  29. Nyamtseren, M.; Pham, T.D.; Vu, T.T.P.; Navaandorj, I.; Shoyama, K. Mapping Vegetation Changes in Mongolian Grasslands (1990–2024) Using Landsat Data and Advanced Machine Learning Algorithm. Remote Sens. 2025, 17, 400. [Google Scholar] [CrossRef]
  30. Raza, A.; Shahid, M.A.; Zaman, M.; Miao, Y.; Huang, Y.; Safdar, M.; Maqbool, S.; Muhammad, N.E. Improving Wheat Yield Prediction with Multi-Source Remote Sensing Data and Machine Learning in Arid Regions. Remote Sens. 2025, 17, 774. [Google Scholar] [CrossRef]
  31. Wang, Q.; Pang, Y.; Li, Z.; Sun, G.; Chen, E.; Ni-Meister, W. The Potential of Forest Biomass Inversion Based on Vegetation Indices Using Multi-Angle CHRIS/PROBA Data. Remote Sens. 2016, 8, 891. [Google Scholar] [CrossRef]
  32. Qian, D.; Li, Q.; Fan, B.; Zhou, H.; Du, Y.; Guo, X. Spectral Characteristics and Identification of Degraded Alpine Meadow in Qinghai–Tibetan Plateau Based on Hyperspectral Data. Remote Sens. 2024, 16, 3884. [Google Scholar] [CrossRef]
  33. Taylor-Zavala, R.; Ramírez-Rodríguez, O.; de Armas-Ricard, M.; Sanhueza, H.; Higueras-Fredes, F.; Mattar, C. Quantifying Biochemical Traits over the Patagonian Sub-Antarctic Forests and Their Relation to Multispectral Vegetation Indices. Remote Sens. 2021, 13, 4232. [Google Scholar] [CrossRef]
  34. Pastor-Guzman, J.; Dash, J.; Atkinson, P.M. Remote sensing of mangrove forest phenology and its environmental drivers. Remote Sens. Environ. 2018, 205, 71–84. [Google Scholar] [CrossRef]
  35. Wavrek, M.T.; Carr, E.; Jean-Philippe, S.; McKinney, M.L. Drone remote sensing in urban forest management: A case study. Urban For. Urban Green. 2023, 86, 127978. [Google Scholar] [CrossRef]
  36. Rezaei, R.; Ghaffarian, S. Monitoring Forest Resilience Dynamics from Very High-Resolution Satellite Images in Case of Multi-Hazard Disaster. Remote Sens. 2021, 13, 4176. [Google Scholar] [CrossRef]
  37. Cao, Y.; Li, G.L.; Luo, Y.K.; Pan, Q.; Zhang, S.Y. Monitoring of sugar beet growth indicators using wide-dynamic-range vegetation index (WDRVI) derived from UAV multispectral images. Comput. Electron. Agric. 2020, 171, 105331. [Google Scholar] [CrossRef]
  38. Testa, S.; Soudani, K.; Boschetti, L.; Mondino, E.B. MODIS-derived EVI, NDVI and WDRVI time series to estimate phenological metrics in French deciduous forests. Int. J. Appl. Earth Obs. Geoinf. 2018, 64, 132–144. [Google Scholar] [CrossRef]
  39. Hamada, Y.; Szoldatits, K.; Grippo, M.; Hartmann, H.M. Remotely Sensed Spatial Structure as an Indicator of Internal Changes of Vegetation Communities in Desert Landscapes. Remote Sens. 2019, 11, 1495. [Google Scholar] [CrossRef]
  40. Änäkkälä, M.; Lajunen, A.; Hakojärvi, M.; Alakukku, L. Evaluation of the Influence of Field Conditions on Aerial Multispectral Images and Vegetation Indices. Remote Sens. 2022, 14, 4792. [Google Scholar] [CrossRef]
  41. Gao, S.; Yan, K.; Liu, J.; Pu, J.; Zou, D.; Qi, J.; Mu, X.; Yan, G. Assessment of remote-sensed vegetation indices for estimating forest chlorophyll concentration. Ecol. Indic. 2024, 162, 112001. [Google Scholar] [CrossRef]
  42. Dye, D.G.; Middleton, B.R.; Vogel, J.M.; Wu, Z.; Velasco, M. Exploiting Differential Vegetation Phenology for Satellite-Based Mapping of Semiarid Grass Vegetation in the Southwestern United States and Northern Mexico. Remote Sens. 2016, 8, 889. [Google Scholar] [CrossRef]
  43. He, S.; Shao, H.; Xian, W.; Zhang, S.; Zhong, J.; Qi, J. Extraction of Abandoned Land in Hilly Areas Based on the Spatio-Temporal Fusion of Multi-Source Remote Sensing Images. Remote Sens. 2021, 13, 3956. [Google Scholar] [CrossRef]
  44. Fern, R.R.; Foxley, E.A.; Bruno, A.; Morrison, M.L. Suitability of NDVI and OSAVI as estimators of green biomass and coverage in a semi-arid rangeland. Ecol. Indic. 2018, 94, 16–21. [Google Scholar] [CrossRef]
  45. Li, J.; Meng, Y.; Li, Y.; Cui, Q.; Yang, X.; Tao, C.; Wang, Z.; Li, L.; Zhang, W. Accurate water extraction using remote sensing imagery based on normalized difference water index and unsupervised deep learning. J. Hydrol. 2022, 612, 128202. [Google Scholar] [CrossRef]
  46. Worden, J.; de Beurs, K.M. Surface water detection in the Caucasus. Int. J. Appl. Earth Obs. Geoinf. 2020, 91, 102159. [Google Scholar] [CrossRef]
  47. Cheng, Y.; Zhang, W.; Wang, H.; Wang, X. Causal Meta-Transfer Learning for Cross-Domain Few-Shot Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5521014. [Google Scholar] [CrossRef]
  48. Christopher, F.B.; Steven, P.B.; Williams, B.G.; Birch, T.; Samantha, B.H.; Mazzariello, J.; Czerwinski, W.; Valerie, J.P.; Haertel, R.; Ilyushchenko, S.; et al. Dynamic World, Near real-time global 10 m land use land cover mapping. Sci. Data 2022, 9, 251. [Google Scholar]
  49. Garg, R.; Kumar, A.; Bansal, N.; Prateek, M.; Kumar, S. Semantic segmentation of PolSAR image data using advanced deep learning model. Sci. Rep. 2021, 11, 15365. [Google Scholar] [CrossRef]
  50. Li, Y.; Zhang, H.; Xue, X.; Jiang, Y.; Shen, Q. Deep learning for remote sensing image classification: A survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2018, 8, e1264. [Google Scholar] [CrossRef]
  51. Li, Z.; He, W.; Cheng, M.; Hu, J.; Yang, G.; Zhang, H. SinoLC-1: The first 1 m resolution national-scale land-cover map of China created with a deep learning framework and open-access data. Earth Syst. Sci. Data 2023, 15, 4749–4780. [Google Scholar] [CrossRef]
  52. Yuan, Q.; Shen, H.; Li, T.; Li, Z.; Li, S.; Jiang, Y.; Xu, H.; Tan, W.; Yang, Q.; Wang, J.; et al. Deep learning in environmental remote sensing: Achievements and challenges. Remote Sens. Environ. 2020, 241, 111716. [Google Scholar] [CrossRef]
  53. Huang, B.; Zhao, B.; Song, Y. Urban land-use mapping using a deep convolutional neural network with high spatial resolution multispectral remote sensing imagery. Remote Sens. Environ. 2018, 214, 73–86. [Google Scholar] [CrossRef]
  54. Ansith, S.; Bini, A.A. Land use classification of high-resolution remote sensing images using an encoder based modified GAN architecture. Displays 2022, 74, 102229. [Google Scholar]
  55. Scheibenreif, L.; Hanna, J.; Mommert, M.; Borth, D. Self-supervised Vision Transformers for Land-cover Segmentation and Classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), New Orleans, LA, USA, 19–20 June 2022; pp. 1421–1430. [Google Scholar]
  56. Xiao, B.; Liu, J.; Jiao, J.; Li, Y.; Liu, X.; Zhu, W. Modeling dynamic land use changes in the eastern portion of the hexi corridor, China by cnn-gru hybrid model. GISci. Remote Sens. 2022, 59, 501–519. [Google Scholar] [CrossRef]
  57. Kussul, N.; Lavreniuk, M.; Skakun, S.; Shelestov, A. Deep Learning Classification of Land Cover and Crop Types Using Remote Sensing Data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 778–782. [Google Scholar] [CrossRef]
  58. Maggiori, E.; Tarabalka, Y.; Charpiat, G.; Alliez, P. Convolutional Neural Networks for Large-Scale Remote-Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 645–657. [Google Scholar] [CrossRef]
  59. Volpi, M.; Tuia, D. Dense Semantic Labeling of Subdecimeter Resolution Images with Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2017, 55, 881–893. [Google Scholar] [CrossRef]
  60. Zhang, C.; Pan, X.; Li, H.; Gardiner, A.; Sargent, I.; Hare, J.; Atkinson, P.M. A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification. ISPRS-J. Photogramm. Remote Sens. 2018, 140, 133–144. [Google Scholar] [CrossRef]
  61. Tong, X.; Xia, G.; Lu, Q.; Shen, H.; Li, S.; You, S.; Zhang, L. Land-cover classification with high-resolution remote sensing images using transferable deep models. Remote Sens. Environ. 2020, 237, 111322. [Google Scholar] [CrossRef]
  62. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  63. Hu, F.; Xia, G.; Hu, J.; Zhang, L. Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery. Remote Sens. 2015, 7, 14680–14707. [Google Scholar] [CrossRef]
  64. Xu, G.; Zhu, X.; Fu, D.; Dong, J.; Xiao, X. Automatic land cover classification of geo-tagged field photos by deep learning. Environ. Model. Softw. 2017, 91, 127–134. [Google Scholar] [CrossRef]
  65. Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36. [Google Scholar] [CrossRef]
  66. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  67. Marmanis, D.; Datcu, M.; Esch, T.; Stilla, U. Deep Learning Earth Observation Classification Using ImageNet Pretrained Networks. IEEE Geosci. Remote Sens. Lett. 2016, 13, 105–109. [Google Scholar] [CrossRef]
  68. Zhao, B.; Huang, B.; Zhong, Y. Transfer Learning with Fully Pretrained Deep Convolution Networks for Land-Use Classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1436–1440. [Google Scholar] [CrossRef]
  69. Li, Z.; Chen, B.; Wu, S.; Su, M.; Chen, J.M.; Xu, B. Deep learning for urban land use category classification: A review and experimental assessment. Remote Sens. Environ. 2024, 311, 114290. [Google Scholar] [CrossRef]
  70. Feng, B.; Liu, Y.; Chi, H.; Chen, X. Hyperspectral remote sensing image classification based on residual generative Adversarial Neural Networks. Signal Process. 2023, 213, 109202. [Google Scholar] [CrossRef]
  71. Zhang, B.; Zhao, L.; Zhang, X. Three-dimensional convolutional neural network model for tree species classification using airborne hyperspectral images. Remote Sens. Environ. 2020, 247, 111938. [Google Scholar] [CrossRef]
  72. Dong, W.; Lan, J.; Liang, S.; Yao, W.; Zhan, Z. Selection of LiDAR geometric features with adaptive neighborhood size for urban land cover classification. Int. J. Appl. Earth Obs. Geoinf. 2017, 60, 99–110. [Google Scholar] [CrossRef]
  73. Kriegler, F.J.; Malila, W.A.; Nalepka, R.F.; Richardson, W. Preprocessing Transformations and Their Effects on Multispectral Recognition. In Proceedings of the 6th International Symposium on Remote Sensing of Environment, Ann Arbor, MI, USA, 13–16 October 1969. [Google Scholar]
  74. Marks, P.L.; Bormann, F.H. Revegetation following Forest Cutting: Mechanisms for Return to Steady-State Nutrient Cycling. Science 1972, 176, 914–915. [Google Scholar] [CrossRef] [PubMed]
  75. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  76. Miura, T.; Huete, A.R.; Yoshioka, H. Evaluation of sensor calibration uncertainties on vegetation indices for MODIS. IEEE Trans. Geosci. Remote Sen. 2000, 38, 1399–1409. [Google Scholar] [CrossRef]
  77. Kaufman, Y.J.; Tanre, D. Atmospherically resistant vegetation index (ARVI) for EOS-MODIS. IEEE Trans. Geosci. Remote Sens. 1992, 30, 261–270. [Google Scholar] [CrossRef]
  78. Richardsons, A.J.; Wiegand, A. Distinguishing vegetation from soil background information. Photogramm. Eng. Remote Sens. 1977, 43, 1541–1552. [Google Scholar]
  79. Daughtry, C.S.T.; Gallo, K.P.; Goward, S.N.; Prince, S.D.; Kustas, W.P. Spectral estimates of absorbed radiation and phytomass production in corn and soybean canopies. Remote Sens. Environ. 1992, 39, 141–152. [Google Scholar] [CrossRef]
  80. Lyon, J.G.; Yuan, D.; Lunetta, R.; Elvidge, C.D. A change detection experiment using vegetation indices. Photogramm. Eng. Remote Sens. 1998, 64, 143–150. [Google Scholar]
  81. Clay, D.E.; Kim, K.; Chang, J.; Clay, S.A.; Dalsted, K. Characterizing Water and Nitrogen Stress in Corn Using Remote Sensing. Agron. J. 2006, 98, 579–587. [Google Scholar] [CrossRef]
  82. Diker, K.; Bausch, W.C. Potential Use of Nitrogen Reflectance Index to estimate Plant Parameters and Yield of Maize. Biosyst. Eng. 2003, 85, 437–447. [Google Scholar] [CrossRef]
83. Rondeaux, G.; Steven, M.; Baret, F. Optimization of soil-adjusted vegetation indices. Remote Sens. Environ. 1996, 55, 95–107.
84. Qi, J.; Chehbouni, A.; Huete, A.R.; Kerr, Y.H.; Sorooshian, S. A modified soil adjusted vegetation index. Remote Sens. Environ. 1994, 48, 119–126.
85. Pearson, R.L.; Miller, L.D. Remote Mapping of Standing Crop Biomass for Estimation of Productivity of the Shortgrass Prairie. In Proceedings of the Eighth International Symposium on Remote Sensing of Environment, Ann Arbor, MI, USA, 2–6 October 1972.
86. Penuelas, J.; Baret, F.; Filella, I. Semiempirical Indexes to Assess Carotenoids Chlorophyll-a Ratio from Leaf Spectral Reflectance. Photosynthetica 1995, 31, 221–230.
87. Rouse, J.W.; Haas, R.H.; Deering, D.W.; Schell, J.A.; Harlan, J.C. Monitoring the Vernal Advancement and Retrogradation (Green Wave Effect) of Natural Vegetation; NASA: Washington, DC, USA, 1973.
88. Gitelson, A.A.; Kaufman, Y.J.; Stark, R.; Rundquist, D. Novel algorithms for remote estimation of vegetation fraction. Remote Sens. Environ. 2002, 80, 76–87.
89. Gitelson, A.A. Wide Dynamic Range Vegetation Index for Remote Quantification of Biophysical Characteristics of Vegetation. J. Plant Physiol. 2004, 161, 165–173.
90. Huete, A.R.; Liu, H.; de Lira, G.R.; Batchily, K.; Escadafal, R. A soil color index to adjust for soil and litter noise in vegetation index imagery of arid regions. In Proceedings of the IGARSS'94-1994 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 8–12 August 1994; pp. 1042–1043.
91. Chen, J.M.; Cihlar, J. Retrieving leaf area index of boreal conifer forests using Landsat TM images. Remote Sens. Environ. 1996, 55, 153–162.
92. Gao, B. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266.
93. Khan, N.M.; Rastoskuev, V.V.; Sato, Y.; Shiozawa, S. Assessment of hydrosaline land degradation by using a simple approach of remote sensing indicators. Agric. Water Manag. 2005, 77, 96–109.
94. Wang, J.; Yang, D.; Chen, S.; Zhu, X.; Wu, S.; Bogonovich, M.; Guo, Z.; Zhu, Z.; Wu, J. Automatic cloud and cloud shadow detection in tropical areas for PlanetScope satellite images. Remote Sens. Environ. 2021, 264, 112604.
95. Zhu, Z.; Woodcock, C.E. Object-based cloud and cloud shadow detection in Landsat imagery. Remote Sens. Environ. 2012, 118, 83–94.
96. Zhu, X.; Helmer, E.H. An automatic method for screening clouds and cloud shadows in optical satellite image time series in cloudy regions. Remote Sens. Environ. 2018, 214, 135–153.
97. Landis, J.R.; Koch, G.G. The measurement of observer agreement for categorical data. Biometrics 1977, 33, 159–174.
98. Pan, S.; Guan, H.; Chen, Y.; Yu, Y.; Gonçalves, W.N.; Junior, J.M.; Li, J. Land-cover classification of multispectral LiDAR data using CNN with optimized hyper-parameters. ISPRS J. Photogramm. Remote Sens. 2020, 166, 241–254.
99. Kwan, C.; Ayhan, B.; Budavari, B.; Lu, Y.; Perez, D.; Li, J.; Bernabe, S.; Plaza, A. Deep Learning for Land Cover Classification Using Only a Few Bands. Remote Sens. 2020, 12, 2000.
100. Fromm, L.T.; Smith, L.C.; Kyzivat, E.D. Wetland vegetation mapping improved by phenological leveraging of multitemporal nanosatellite images. Geocarto Int. 2025, 40, 2452252.
101. As-syakur, A.R.; Adnyana, I.W.S.; Arthana, I.W.; Nuarsa, I.W. Enhanced Built-Up and Bareness Index (EBBI) for Mapping Built-Up and Bare Land in an Urban Area. Remote Sens. 2012, 4, 2957–2970.
102. Ettehadi Osgouei, P.; Kaya, S.; Sertel, E.; Alganci, U. Separating Built-Up Areas from Bare Land in Mediterranean Cities Using Sentinel-2A Imagery. Remote Sens. 2019, 11, 345.
103. Che, L.; Li, S.; Liu, X. Improved surface water mapping using satellite remote sensing imagery based on optimization of the Otsu threshold and effective selection of remote-sensing water index. J. Hydrol. 2025, 654, 132771.
104. Fisher, A.; Flood, N.; Danaher, T. Comparing Landsat water index methods for automated water classification in eastern Australia. Remote Sens. Environ. 2016, 175, 167–182.
105. Binding, C.E.; Bowers, D.G.; Mitchelson-Jacob, E.G. Estimating suspended sediment concentrations from ocean colour measurements in moderately turbid waters; the impact of variable particle scattering properties. Remote Sens. Environ. 2005, 94, 373–383.
Figure 1. The overall workflow of the feature-augmented deep learning classification (CNNs-FA).
Figure 2. Analysis of 22 spectral indices.
Figure 3. CNN structure for feature-augmented deep learning method.
Figure 4. Land use sample selection for study area. (a) built-up; (b) water; (c) wetland; (d) cropland; (e) bare; (f) grassland; (g) forest.
Figure 5. Field verification using drone photography.
Figure 6. ROC curves and AUC values for seven land use types in the training and test sets. (a) training-set ROC curves; and (b) test-set ROC curves.
Figure 7. Precision, recall and F1-score values for seven land use types in the training and test sets. (a–c) precision, recall and F1-score for the training data; and (d–f) precision, recall and F1-score for the test data.
Figure 8. Location of study area: (A) Bayannur; (B) Ordos; and (C) Hong Lake Basin (HLB).
Figure 9. Confusion matrix for study areas. (a,b) confusion matrix for training and testing in Bayannur; (c,d) confusion matrix for training and testing in Ordos; and (e,f) confusion matrix for training and testing in HLB.
Figure 10. Land use classification result and details in Bayannur. (A1,B1,C1,D1,E1,F1) image details; and (A2,B2,C2,D2,E2,F2) land use classifications corresponding to the image details.
Figure 11. Land use classification result and details in Ordos. (A1,B1,C1,D1) image details; and (A2,B2,C2,D2) land use classifications corresponding to the image details.
Figure 12. Land use classification result and details in HLB. (A1,B1,C1) image details; and (A2,B2,C2) land use classifications corresponding to the image details.
Figure 13. Field validation points in Bayannur, Ordos and HLB. (a–c) built-up area validation points; (d–f) cropland validation points; (g–i) water area validation points; (j) forest validation points; (k,l) bare area validation points; (m) wetland validation points; (n,o) grassland validation points; and the numbers are the serial numbers of the validation points.
Figure 14. Cropland validation in study areas. (a) cropland validation in Bayannur; (b) cropland validation in Ordos; (c) cropland validation in Hong Lake Basin; and the numbers are the serial numbers of the validation points.
Figure 15. Forest and grass validation in study areas. (a) grassland validation in Bayannur; (b) grassland validation in Ordos; (c) forest validation in Hong Lake Basin; and the numbers are the serial numbers of the validation points.
Figure 16. Bare validation in study areas. (a) bare validation in Bayannur; (b) bare validation in Ordos; and the numbers are the serial numbers of the validation points.
Figure 17. Wetland validation in study areas. (a) shows the classification results of the feature-augmented deep learning method, and (b) shows the Chinese land use dataset; and the numbers are the serial numbers of the validation points.
Figure 18. Water validation in study areas. (a) water validation in Bayannur; (b) water validation in Ordos; (c) water validation in Hong Lake Basin; and the numbers are the serial numbers of the validation points.
Figure 19. Built-up validation in study areas. (a) built-up validation in Bayannur; (b) built-up validation in Ordos; (c) built-up validation in Hong Lake Basin; and the numbers are the serial numbers of the validation points.
Figure 20. Performance of different spectral indices on different land use types in Bayannur.
Figure 21. Performance of different spectral indices on different land use types in Ordos.
Figure 22. Performance of different spectral indices on different land use types in HLB.
Table 1. Twenty-two spectral indices for feature augmentation.

Index | Variable and Reference
Vegetation | NDVI [73]; LAI [74]; SAVI [75]; EVI [76]; ARVI [77]; DVI [78]; GNDVI [79]; NDGI [80]; NPCI [81]; NRI [82]; OSAVI [83]; MSAVI [84]; RVI [85]; SIPI [86]; TVI [87]; VARI [88]; WDRVI [89]; CIVE [90]; MSRI [91]
Water | NDWI [92]
Soil | NDSI [93]; RBI [23]
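All of the indices in Table 1 are simple band arithmetic on surface reflectance. As an illustration only (the function name, toy reflectance values and band arguments below are ours, not the paper's implementation), a minimal NumPy sketch of three of them, NDVI [73], SAVI [75] and NDWI [92], might look like this:

```python
import numpy as np

def spectral_indices(nir, red, swir, L=0.5):
    """NDVI, SAVI and NDWI from reflectance bands.
    L is the SAVI soil-brightness correction factor (commonly 0.5)."""
    nir, red, swir = (np.asarray(b, dtype=float) for b in (nir, red, swir))
    ndvi = (nir - red) / (nir + red)
    savi = (1.0 + L) * (nir - red) / (nir + red + L)
    ndwi = (nir - swir) / (nir + swir)  # Gao (1996) NIR/SWIR formulation
    return ndvi, savi, ndwi

# Toy reflectance values for a single vegetated pixel
ndvi, savi, ndwi = spectral_indices(nir=0.45, red=0.08, swir=0.20)
```

The same functions apply unchanged to whole image arrays thanks to NumPy broadcasting; in practice each index band is stacked with the raw bands as an extra input channel for the CNN.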
Table 2. Precision, recall and F1-score of the classification model for land use in Bayannur, Ordos and HLB.

ID | Land Use | Bayannur (Precision / Recall / F1) | Ordos (Precision / Recall / F1) | Hong Lake Basin (Precision / Recall / F1)
1 | Forest | 99.09% / 98.71% / 0.98898 | 81.08% / 76.92% / 0.78947 | -
2 | Grassland | 98.38% / 98.29% / 0.98333 | 93.11% / 98.59% / 0.95771 | -
3 | Cropland | 99.09% / 99.52% / 0.99306 | 100.00% / 87.10% / 0.93103 | 79.80% / 93.07% / 0.85925
4 | Built-up | 99.27% / 98.28% / 0.98773 | 96.21% / 94.00% / 0.95094 | 93.28% / 92.24% / 0.92756
5 | Bare | 99.96% / 99.99% / 0.99972 | 98.50% / 100.00% / 0.99242 | 96.97% / 85.70% / 0.90987
6 | Water | 99.73% / 99.73% / 0.99733 | 93.58% / 87.18% / 0.90265 | -
7 | Wetland | 99.44% / 99.38% / 0.99409 | - | -
- | Macro-average | 99.28% / 99.13% / 0.99203 | 93.75% / 90.63% / 0.92071 | 90.01% / 90.34% / 0.89890
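The per-class scores in Table 2 follow directly from the confusion matrices in Figure 9. A minimal sketch of that derivation (the two-class toy matrix below is illustrative, not data from the paper):

```python
import numpy as np

def classification_report(cm):
    """Per-class precision, recall and F1 from a confusion matrix cm,
    where cm[i, j] counts samples of true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                     # correctly classified per class
    precision = tp / cm.sum(axis=0)      # over all samples predicted as the class
    recall = tp / cm.sum(axis=1)         # over all samples truly in the class
    f1 = 2.0 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy 2-class confusion matrix: 90/100 and 95/100 samples correct
precision, recall, f1 = classification_report([[90, 10], [5, 95]])
macro_f1 = f1.mean()   # unweighted macro-average, as in the last row of Table 2
```

The macro-averages in Table 2 are unweighted means over the classes present in each study area, so rare classes count as much as common ones.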
Table 3. Land use classification validation.

Land Use | Verification Points | Errors | Accuracy (%)
Built-up | 39 | 0 | 100.00
Cropland | 41 | 0 | 100.00
Grass | 18 | 1 | 94.44
Forest | 12 | 0 | 100.00
Bare | 22 | 2 | 90.91
Water | 23 | 0 | 100.00
Wetland | 10 | 0 | 100.00
Overall accuracy (%) | | | 98.18
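The overall accuracy in Table 3 is simply the error-free share of all field verification points; a quick check with the table's values (transcribed here) reproduces the 98.18%:

```python
# Per-class (verification points, errors), copied from Table 3
validation = {"Built-up": (39, 0), "Cropland": (41, 0), "Grass": (18, 1),
              "Forest": (12, 0), "Bare": (22, 2), "Water": (23, 0),
              "Wetland": (10, 0)}

total = sum(n for n, _ in validation.values())    # all field points
errors = sum(e for _, e in validation.values())   # misclassified points
per_class = {k: 100.0 * (n - e) / n for k, (n, e) in validation.items()}
overall = 100.0 * (total - errors) / total        # overall accuracy in percent
```

With 165 points and 3 errors this gives 98.18%, matching the table; the lowest per-class accuracies are for grass and bare land, the classes most easily confused in the desert–grassland transition zones.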

Wang, Y.; Zhang, W.; Liu, X.; Peng, H.; Lin, M.; Li, A.; Jiang, A.; Ma, N.; Wang, L. A Deep Learning Method for Land Use Classification Based on Feature Augmentation. Remote Sens. 2025, 17, 1398. https://doi.org/10.3390/rs17081398
