Article

Winter Wheat Mapping Method Based on Pseudo-Labels and U-Net Model for Training Sample Shortage

1 College of Environmental and Resource Sciences, Zhejiang University, Hangzhou 310058, China
2 Zhejiang Ecological Civilization Academy, Huzhou 313300, China
3 Land Satellite Remote Sensing Application Center, Ministry of Natural Resources (MNR), Beijing 100048, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(14), 2553; https://doi.org/10.3390/rs16142553
Submission received: 7 May 2024 / Revised: 4 June 2024 / Accepted: 4 July 2024 / Published: 12 July 2024

Abstract
In recent years, semantic segmentation models have been widely applied in fields such as crop extraction owing to their strong discrimination ability and high accuracy. However, there is currently no standard set of ground-truth label data for major crops in China, and visual interpretation is usually time-consuming and laborious. Limited sample sizes also make it difficult for a model to learn enough ground features, resulting in poor generalisation ability, which in turn makes the model difficult to apply to fine extraction tasks for large-area crops. In this study, a method was proposed that builds a pseudo-label sample set with the random forest algorithm to train a semantic segmentation model (U-Net) for winter wheat extraction. With the help of the GEE platform, the Winter Wheat Canopy Index (WCI) was employed in this method to initially extract winter wheat, and training samples (i.e., pseudo-labels) were built for the semantic segmentation model through the iterative process of “generating random sample points—random forest model training—winter wheat extraction”; on this basis, the U-Net model was trained with multi-temporal remote sensing images; finally, the U-Net model was employed to obtain the spatial distribution map of winter wheat in Henan Province in 2022. The results showed that: (1) Pseudo-label data constructed with the random forest model in typical regions achieved an overall accuracy of 97.53% when validated against manual samples, proving that their accuracy meets the requirements of U-Net model training. (2) Using the U-Net model, U-Net++ model, and random forest model constructed from the 2022 pseudo-label data, winter wheat mapping was conducted in Henan Province. The extraction accuracy of the three models ranked as U-Net model > U-Net++ model > random forest model.
(3) When the U-Net model was used to predict the winter wheat planting areas in Henan Province in 2019, the extraction accuracy decreased compared to 2022 but still exceeded that of the random forest model; the U-Net++ model, moreover, did not achieve higher classification accuracy. (4) The experimental results demonstrate that deep learning models built on pseudo-labels achieve higher classification accuracy and, compared to traditional machine learning models such as random forest, offer greater spatiotemporal adaptability and robustness, further validating the scientific and practical feasibility of pseudo-labels and their generation strategy, which is expected to provide a feasible technical pathway for the intelligent extraction of winter wheat spatial distribution information in the future.

1. Introduction

Since winter wheat is one of the three major crops in China, timely and accurate acquisition of its spatial distribution would help relevant departments in agricultural planning adjustment, disaster monitoring, and field production management, and is the basis and premise for winter wheat yield estimation. Furthermore, it would also be of great significance for China in formulating food production policies and regulating crop planting structures, thereby safeguarding national food security and promoting agricultural production and development [1,2,3]. However, traditional methods of acquiring the spatial distribution of winter wheat, including level-by-level reporting and sampling surveys, have shortcomings such as low timeliness, low accuracy, and high investment costs, and are also easily affected by subjective human factors [4].
Remote sensing technology has advantages such as strong timeliness, rich information content, wide coverage, and low cost, which can provide effective methods and means for acquiring winter wheat spatial distribution data. In recent years, Sentinel series satellite images have been widely used in research on ground object information extraction due to their high spatial and temporal resolution, rich spectral features, and wide bandwidth [5]. Rujoju-Mare et al. employed the spectral characteristics of Sentinel-2A images to distinguish land cover types in areas of higher and lower land heterogeneity based on the maximum likelihood method and the support vector machine, and both methods achieved high classification accuracy [6]; Aryal et al. computed the Normalised Difference Vegetation Index (NDVI) from Sentinel-2A image data and introduced a supervised classification method to classify urban vegetation in government areas across Victoria, proving the effectiveness of this method [7]; Ali et al. used both Sentinel-2A and Landsat-8 image data to classify land use and land cover, comparing the nearest neighbour algorithm, KD-Tree, the random forest, and other classifiers, with the results showing that Sentinel-2A images achieved the highest accuracy for each algorithm [8]. The studies listed above achieved high classification or extraction accuracy. However, most machine learning algorithms, such as the random forest, have a shallow structure. Due to the limited network structure, these algorithms cannot effectively represent complex surface types, which also makes fine classification of surface types difficult.
When the number and diversity of samples increase, it is difficult for shallow learning methods to achieve good results; moreover, machine learning methods usually have poor portability and are not suitable for massive data processing [9].
With the continuous development of deep learning algorithms [10,11], Convolutional Neural Networks (CNNs) have performed outstandingly in fields such as image classification, semantic segmentation, and target detection [12,13,14]. Convolutional Neural Networks can adaptively learn shallow features such as image colour and texture as well as deep semantic features, and can handle image segmentation in complex scenes [9]. Long et al. proposed the Fully Convolutional Network (FCN) based on CNN, which employs transposed convolution to replace the fully connected layers of CNN with up-sampling layers, improving segmentation efficiency and giving the output image the same dimensions as the input [15]. Most subsequent semantic segmentation methods were improved based on FCN [16], such as SegNet [17], RefineNet [18], U-Net [19], DeepLab V3+ [20], etc. Among them, the U-Net model has been widely used in semantic segmentation tasks because of its smaller model size, fewer parameters, and faster processing speed. Wei et al. [9] and Wei et al. [21] introduced Sentinel-1 data and the U-Net network model to extract rice planting areas in the Arkansas River Basin in the United States and in Northeast China, respectively, and both achieved high extraction accuracy, proving the feasibility of spatiotemporal transfer based on the U-Net model; Du et al. introduced the U-Net model to extract rice planting areas in Arkansas and employed the extraction results of the random forest model as a control group, with the results showing that U-Net achieved the optimal segmentation accuracy [22].
However, unlike traditional supervised classification algorithms, the training samples of a semantic segmentation model consist of a large number of annotated image sets, which require all pixels in the image to be annotated to build the samples needed for model training [23,24,25]. Currently, most research constructs training samples only through visual interpretation, which is time-consuming and laborious, lacks timeliness, and limits the application of semantic segmentation models in large-scale crop mapping [26,27,28]. To address the issue of scarce training samples, Lee proposed the pseudo-label technique in 2013 [29]. This is an innovative semi-supervised learning method aimed at optimising and enhancing the performance of supervised classification models using unlabelled sample data. In his research, Lee directly used the predictions of the task model as pseudo-labels and fed them into the next round of training. He pointed out that this operation is equivalent to adding an entropy regularisation term to the loss function, which forces the prediction overlap between different classes to be lower. In recent years, some scholars have begun to use pseudo-labels in large-scale crop extraction tasks and have achieved good performance. For example, Zhu et al. combined phenology methods with deep learning models, utilised phenological feature indicators for preliminary rice extraction, constructed training samples for LSTM models, and completed rice mapping [30]. Wei et al. successfully extracted the spatial distribution information of rice in Jiangsu Province based on pseudo-labels established by the K-means and random forest algorithms, providing a new option for applying semantic segmentation models in areas with a shortage of training samples [31].
Thus, in order to fully explore a solution to the shortage of training samples for semantic segmentation models and to efficiently establish the high-quality training labels they require, this study employed a combination of machine learning and deep learning to achieve precise extraction of winter wheat in Henan Province. Firstly, the WCI phenological indicator was established in a typical area, its segmentation threshold was extracted using the Otsu algorithm, and the preliminary extraction of winter wheat was completed. Secondly, on this basis, random sample points were selected and used to train a random forest model, and the extraction results were continuously optimised through the iterative “random point selection—model training” process; two successive extraction results with an overlap rate of 99% were selected as the training samples (i.e., pseudo-labels) for the U-Net model. Lastly, based on the U-Net model, the fine spatial distribution of winter wheat in Henan Province in 2022 was obtained. To verify the scientific validity of the experimental method, this study also applied the U-Net++ model to extract the spatial distribution of winter wheat in Henan Province through the same steps. Additionally, this study used the U-Net model to predict the spatial distribution of winter wheat in Henan Province in 2019, and used a random forest model for direct extraction of winter wheat in Henan Province, in order to reveal the limitations of the random forest algorithm in feature representation and model transfer, as well as the feature extraction ability and advantages of deep learning models in crop type mapping applications.

2. Materials and Methods

2.1. Study Area

Henan Province (31°23′–36°22′N, 110°21′–116°39′E) is located in the hinterland of the Central Plains of China and is one of the main winter wheat producing areas in the country. The terrain of Henan Province is higher in the west and lower in the east: the central and eastern parts are mostly plains, while the western part is mostly mountainous and hilly. The climate is subtropical to temperate monsoon, with hot and rainy summers and cold and dry winters. The annual frost-free period in Henan Province is 201–285 days, and the average temperature in the province is about 15.8 °C. Rainfall is mainly concentrated from June to August, and the average precipitation in the province is about 594.3 mm. The annual average sunshine duration is about 2300–2600 h, which is suitable for planting many crops. For the establishment of the U-Net model and the preparation of the pseudo-label dataset, a Study Area A of 1.5° × 1.5° in central Henan Province was selected. This study area is moderately distant from the other parts of Henan Province, shows few differences in phenological characteristics, and has a high proportion of winter wheat planting area, making it highly representative. Details are shown in Figure 1.

2.2. Data Source

Sentinel-2 is a high-resolution multi-spectral imaging mission developed by ESA, consisting of two satellites, 2A and 2B. Sentinel-2 carries a multi-spectral imager (MSI) with 13 spectral bands, a revisit period of 5 days, and spatial resolutions of 10 m, 20 m, and 60 m [32]. In this study, the Level-2A surface reflectance product of Sentinel-2 data with a spatial resolution of 10 m, provided by Google Earth Engine, was used [33]. Published studies have shown that the visible and near-infrared bands play an important role in vegetation classification; therefore, the red (B4), green (B3), blue (B2), and near-infrared (B8) bands were selected as research data in this study [34,35]. Because the product had already undergone pre-processing such as radiometric calibration, atmospheric correction, terrain correction, and geometric correction, this study only needed to perform cloud removal, mosaicking, band merging, and other operations on the images. Sentinel-2 images with cloud cover below 20% were screened in the study area, and cloud pixels were removed using the QA60 band. This cloud removal method performs bitwise operations directly on the QA60 quality band to filter pixel values, masking pixels flagged as clouds or cirrus to achieve the cloud removal effect [36,37].
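The QA60 bitwise filtering described above can be sketched in a few lines of NumPy. In the Sentinel-2 QA60 band, bit 10 flags opaque clouds and bit 11 flags cirrus, and a pixel is kept only when both bits are clear. This is an illustrative sketch, not the study's actual GEE script:

```python
import numpy as np

# Sentinel-2 QA60 band: bit 10 flags opaque clouds, bit 11 flags cirrus.
CLOUD_BIT = 1 << 10
CIRRUS_BIT = 1 << 11

def clear_sky_mask(qa60: np.ndarray) -> np.ndarray:
    """Return True where neither the cloud nor the cirrus bit is set."""
    return (qa60 & (CLOUD_BIT | CIRRUS_BIT)) == 0

# Toy QA60 patch: 0 = clear, bit 10 = cloud, bit 11 = cirrus.
qa = np.array([[0, CLOUD_BIT], [CIRRUS_BIT, 0]], dtype=np.uint16)
mask = clear_sky_mask(qa)  # clear pixels on the main diagonal
```

In GEE itself the equivalent operation is performed server-side with `bitwiseAnd` on the QA60 band before mosaicking.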
The phenological period of winter wheat in Study Area A is similar to that of the entire Henan Province [38]. Winter wheat is generally sown in late autumn and early winter and harvested the following summer. Different ground object sample points in Study Area A were selected to analyse the time series characteristics of the Normalised Difference Vegetation Index (NDVI), as shown in Figure 2. The analysis showed that from the sowing period (October) to the heading period (April), the chlorophyll content of the winter wheat canopy increased and the vegetation index increased as well; after the maturity period, however, the vegetation index decreased rapidly, which was significantly different from the spectral characteristics of other land types. According to this pattern of winter wheat canopy spectral change, this study selected Sentinel-2A images from October 2022, April 2023, and May–June 2023 within Study Area A and Henan Province to construct pseudo-labels and conduct winter wheat mapping. Meanwhile, Sentinel-2A images from October 2018, April 2019, and May–June 2019 in Henan Province were selected to form a multi-temporal image dataset used to validate the temporal transferability of the U-Net model. The specific image information is shown in Table 1. On the other hand, optical images are affected by cloud cover and other factors, making it difficult to obtain images within the same time interval over a wide area; however, the differences in winter wheat growth conditions during the same period in adjacent years are small, so images from the same period in adjacent years were selected as supplementary data.
At the same time, the Winter Wheat Canopy Index (WCI) was established using the image data in Study Area A for the preliminary extraction of winter wheat. The specific formulas are shown below [39]:
NDVI = (B8 − B4) / (B8 + B4)
WCI = (NDVI2 / NDVI1) × (NDVI2 − NDVI3)
where NDVI1, NDVI2, and NDVI3 represent the NDVI of the first, second, and third time series, respectively.
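As a minimal sketch, the two indices can be computed per pixel as follows, assuming the WCI takes the reconstructed form (NDVI2 / NDVI1) × (NDVI2 − NDVI3); the toy NDVI values below are illustrative, not from the paper:

```python
def ndvi(b8: float, b4: float) -> float:
    """NDVI = (B8 - B4) / (B8 + B4)."""
    return (b8 - b4) / (b8 + b4)

def wci(ndvi1: float, ndvi2: float, ndvi3: float) -> float:
    """WCI = (NDVI2 / NDVI1) * (NDVI2 - NDVI3), assuming this
    reconstructed form of the index from [39]."""
    return (ndvi2 / ndvi1) * (ndvi2 - ndvi3)

# Illustrative profiles: bare soil at sowing, peak at heading, drop at harvest
wheat = wci(0.15, 0.80, 0.30)   # large positive value, well above 1.06
forest = wci(0.60, 0.65, 0.70)  # near zero / negative, well below 0.89
```

Winter wheat's low sowing-time NDVI (the denominator) and its sharp rise and fall drive the index high, while evergreen or stable vegetation stays near zero, which is what makes the later 1.06 / 0.89 thresholds separable.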
In addition, based on 2022 Google Earth high-resolution images and Sentinel-2 multi-temporal images, this study employed visual interpretation to select 1500 and 1300 sample points within Study Area A and Henan Province, respectively, for accuracy validation of the pseudo-label data and model extraction results. Meanwhile, using high-resolution images from 2019, the 2022 Henan Province sample set was refined and supplemented, resulting in a 2019 sample dataset for Henan Province containing 1200 sample points, as detailed in Table 2.

2.3. Dataset Production

In order to enable the U-Net model to fully capture the correlation between the time series images during training, the bands of the multi-temporal images were merged in chronological order to form a multi-channel input. However, due to the large size of remote sensing images and limited computing capacity, it was difficult for the U-Net model to directly perform pixel-level classification of the entire image; therefore, a 512 × 512 sliding window was introduced to crop the image, with the sliding step set to one-third of the window size. Finally, 4479 images and labels with a size of 512 × 512 × c were obtained, where c represents the number of channels of the multi-temporal images, as shown in Figure 3. The produced sample set was then divided into a training set, validation set, and test set at ratios of 70%, 20%, and 10%. The training set was used to train the U-Net model, the validation set was used to fine-tune the model parameters, and the test set was used to evaluate the accuracy of the model's extraction results.
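The sliding-window cropping with a one-third-window stride might be implemented as follows. This is a sketch; shifting the last window back to the image border so no pixels are lost is an assumed detail the paper does not specify:

```python
import numpy as np

def window_offsets(length: int, win: int = 512, stride: int = 512 // 3):
    """1-D top-left offsets of a sliding window; the last window is
    shifted back so it ends exactly at the image border."""
    if length <= win:
        return [0]
    offs = list(range(0, length - win + 1, stride))
    if offs[-1] != length - win:
        offs.append(length - win)
    return offs

def crop_patches(image: np.ndarray, win: int = 512, stride: int = 512 // 3):
    """image: (H, W, C) array -> list of (win, win, C) patches."""
    h, w = image.shape[:2]
    return [image[i:i + win, j:j + win]
            for i in window_offsets(h, win, stride)
            for j in window_offsets(w, win, stride)]

# Toy example with a small window for readability
patches = crop_patches(np.zeros((10, 10, 2)), win=4, stride=3)
```

The one-third stride makes adjacent patches overlap by two-thirds of their width, which multiplies the number of training samples and exposes field boundaries in several spatial contexts.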

3. Method

3.1. Technical Research Path

This study was performed based on the GEE platform Sentinel-2A image data, and machine learning and deep learning algorithms were introduced to extract the spatial distribution of winter wheat in Henan Province in 2022. The technical path is illustrated in Figure 4, consisting of two main components:
(1) Construction of pseudo-label data based on WCI and random forest models. Firstly, using the WCI index, threshold segmentation was conducted within the Study Area A via the GEE platform to obtain preliminary winter wheat extraction results. Subsequently, the segmentation results were optimised and improved, random samples were generated, and with the integration of multi-temporal NDVI dataset, the random forest model was trained. Utilising the trained model, winter wheat extraction was performed within Study Area A. This process of “generating training samples—training random forest model—winter wheat extraction” was iteratively executed to continuously optimise the extraction results of the random forest model, thereby obtaining pseudo-label data.
(2) Winter wheat mapping based on the U-Net model. Utilising the 2022 multi-temporal image dataset and pseudo-label data, the U-Net model was constructed, and spatial distribution information of winter wheat in Henan Province for both 2022 and 2019 was extracted and evaluated for accuracy.

3.2. Winter Wheat Pseudo-Label Production Based on Random Forest

3.2.1. Dataset Establishment

Based on the GEE platform, Sentinel-2A images with cloud cover of less than 20% in Study Area A in October 2022, March 2023, and April 2023 were obtained, and NDVI images for four time periods were calculated; the multi-time series NDVI image was then obtained by merging the bands in chronological order, containing four bands representing the four time series.

3.2.2. Initial Winter Wheat Extraction Based on WCI

In this study, the WCI index was employed for threshold segmentation to obtain the preliminary extraction results of winter wheat in Study Area A. The threshold directly determines the accuracy of the segmentation results, and manual threshold setting is easily affected by subjective factors and prone to errors; therefore, the Otsu algorithm was used to determine the threshold of the segmentation index. The Otsu algorithm [40] has been recognised as one of the best threshold selection algorithms in the field of image segmentation and has been widely used in digital image processing due to its simple calculation and independence from changes in image brightness and contrast. The WCI segmentation threshold in Study Area A extracted by the Otsu algorithm was 0.96. Since land types with WCI values around the segmentation threshold are more likely to be misclassified, the threshold was adjusted to eliminate potentially misclassified land types. Through experimental comparison, pixels with WCI greater than 1.06 were regarded as winter wheat, while pixels with WCI less than 0.89 were regarded as non-winter wheat. Finally, the respective winter wheat and non-winter wheat results were obtained.
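A compact NumPy version of Otsu's between-class-variance criterion, shown here as one way to reproduce the threshold selection step (illustrative; not the implementation used in the study):

```python
import numpy as np

def otsu_threshold(values: np.ndarray, bins: int = 256) -> float:
    """Return the threshold maximising the between-class variance (Otsu)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()          # bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                            # class-0 weight per split
    m0 = np.cumsum(p * centers)                  # class-0 cumulative mean mass
    mt = m0[-1]                                  # global mean
    # Between-class variance; degenerate splits (w0 = 0 or 1) become NaN.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mt * w0 - m0) ** 2 / (w0 * (1 - w0))
    sigma_b = np.nan_to_num(sigma_b)
    return float(centers[np.argmax(sigma_b)])

# Toy bimodal WCI sample: low-index background vs high-index wheat
values = np.concatenate([np.full(100, 0.2), np.full(100, 1.5)])
t = otsu_threshold(values)
```

`skimage.filters.threshold_otsu` provides the same criterion off the shelf; the explicit version above just makes the variance maximisation visible.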

3.2.3. Generating Random Samples

Crop planting boundary pixels are prone to misclassification, and the threshold method extraction results are usually interfered with by pixel outliers and image noise. Therefore, in order to eliminate the negative effects listed above as much as possible and improve the representativeness of random sample points, it was necessary to screen the winter wheat patches and non-winter wheat patches in the preliminary extraction results, and then randomly select points. Based on the vector boundary of each patch, an inward 10 m buffer was generated and excluded from the patch results, followed by calculating the area of each patch, eliminating patches with an area less than 5000 m2, and establishing random sample points in the remaining retained patches. The number of sample points generated in each patch was set to “patch area/5000”, and finally 800 winter wheat sample points and 1600 non-winter wheat sample points were randomly selected from them for subsequent random forest model training, in which 70% were chosen for training and 30% were chosen for verification [21].
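The patch screening and sample allocation rule above ("discard patches under 5000 m², then patch area / 5000 points each") reduces to a small helper; the 10 m inward buffering of patch boundaries is a GIS operation omitted from this sketch:

```python
def points_per_patch(areas_m2, min_area: float = 5000.0) -> dict:
    """Patches below min_area are discarded; each remaining patch
    receives floor(area / min_area) random sample points."""
    return {i: int(a // min_area)
            for i, a in enumerate(areas_m2) if a >= min_area}

# Three hypothetical patch areas: the first is dropped, the others
# receive 1 and 2 sample points respectively.
allocation = points_per_patch([4000.0, 5000.0, 12500.0])
```

Scaling the point count with patch area keeps large, homogeneous fields from being under-represented relative to many small fragments.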

3.2.4. Pseudo-Label Production

The multi-time series feature dataset produced in Section 3.2.1 and the random sample points generated in Section 3.2.3 were introduced as model input. The random forest model was established and trained through the built-in function “ee.Classifier.smileRandomForest” of GEE. Then, the winter wheat planting area in Study Area A was extracted using the random forest model as the new current sample, and the overlap rate between the current and the previous sample (the ratio of the number of pixels with the same classification result to the total number of pixels) was calculated. If the overlap rate was lower than the set threshold, points were randomly selected again based on the current sample, and the random forest training and extraction steps were repeated until the overlap rate exceeded the set threshold. After experimental comparison, the overlap rate threshold was set to 99%. Finally, the last extraction results were employed as the training samples for the deep learning model, i.e., the pseudo-labels.
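The iteration's stopping rule, the overlap rate between successive extractions, can be sketched as follows; `retrain_fn` is a hypothetical callback standing in for one "sample—train—extract" round on GEE:

```python
import numpy as np

def overlap_rate(prev: np.ndarray, curr: np.ndarray) -> float:
    """Share of pixels whose class label is identical in two maps."""
    return float(np.mean(prev == curr))

def iterate_pseudo_labels(initial_map, retrain_fn,
                          threshold: float = 0.99, max_iter: int = 10):
    """Repeat sample -> train -> extract until two successive maps agree
    on at least `threshold` of all pixels."""
    prev = initial_map
    for _ in range(max_iter):
        curr = retrain_fn(prev)
        if overlap_rate(prev, curr) >= threshold:
            return curr
        prev = curr
    return prev  # fall back if convergence is not reached

# Toy check: an already-stable map converges immediately
stable = np.array([1, 1, 1, 1])
result = iterate_pseudo_labels(stable, lambda m: m.copy())
```

The `max_iter` guard is an added safety net, since the paper reports convergence after four rounds but does not discuss non-convergence.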

3.3. Establishment and Training of the U-Net Model

3.3.1. Establishment of the U-Net Model

The network structure of this study was adapted from the classic U-Net network model. It is a U-shaped symmetric network without a fully connected layer, and the input and output are both images, as shown in Figure 5. The left-hand side of the network is the encoder. As a typical Convolutional Neural Network structure, it consists of four sets of identical coding blocks, with each set of coding blocks containing two 3 × 3 convolution kernel layers and a maximum pooling layer. The convolution kernel layer can extract image features, and the maximum pooling layer can filter out unimportant high-frequency information, reduce feature dimensions, and increase the receptive field. Repeated convolution and pooling operations can fully extract the deep features of the image. Each group of coding blocks doubles the feature map channels and reduces the scale by half. The right-hand side of the network is the decoder, which consists of four groups of identical decoding blocks, with each group of decoding blocks containing two 3 × 3 convolution kernel layers and a deconvolution layer. Each group of decoding blocks can reduce the feature map channels by half and double the scale. In the meantime, there is a skip connection between the coding block and the decoding block, and the feature map of each group of coding blocks will be saved and fused with the output of the corresponding scale decoding block as the input of the next deconvolution layer. Skip connection can reduce the information loss caused by pooling operations in the encoding block, provide feature maps of different resolutions for the decoding block, and help the decoder better restore target details and image accuracy. In the final output layer, two convolutional layers will be used to map the 64-dimensional feature map into a 2-dimensional output map. 
The Softmax classifier is used to calculate the probability that each pixel in the feature map belongs to each target land type, and the probabilities are mapped to obtain the ground object classification map [19,41].
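The doubling/halving bookkeeping of the encoder and decoder described above can be traced with a short helper, assuming a base width of 64 channels (consistent with the 64-dimensional feature map mentioned for the output layer) and a 512 × 512 input:

```python
def unet_shapes(in_size: int = 512, base: int = 64, depth: int = 4):
    """Feature-map (channels, size) after each encoder block and the
    mirrored decoder, following the doubling/halving rule in the text."""
    enc = []
    ch, sz = base, in_size
    for _ in range(depth):
        enc.append((ch, sz))       # after the two 3x3 convolutions
        ch, sz = ch * 2, sz // 2   # after 2x2 max pooling
    bottleneck = (ch, sz)
    dec = list(reversed(enc))      # decoder mirrors the encoder via skips
    return enc, bottleneck, dec

enc, bottleneck, dec = unet_shapes()
```

Each skip connection concatenates an encoder map in `enc` with the decoder map of the same scale in `dec`, which is why the two lists must mirror each other exactly.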

3.3.2. Data Pre-Processing

The pixel values of remote sensing images usually range between 0 and 4095. When the data distribution range is large, the search for an optimal solution during semantic segmentation model training is very slow and may even fail to converge. Therefore, the data should be normalised to restrict the distribution to a certain range. Data normalisation converts data of different orders of magnitude to the same order of magnitude, helping to improve the model’s convergence speed and classification accuracy. The min–max normalisation method was employed in this study to normalise the four research bands of the Sentinel-2A images. The calculation formula is as follows:
x′ = (x − xmin) / (xmax − xmin)
where x is the value of a given pixel, x′ is the normalised value, xmin is the minimum value of the sample data, and xmax is the maximum value of the sample data.
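A direct NumPy rendering of this per-band min–max normalisation (a sketch; the study's actual scaling code is not given):

```python
import numpy as np

def min_max_normalise(band: np.ndarray) -> np.ndarray:
    """Scale a band to [0, 1]: x' = (x - x_min) / (x_max - x_min)."""
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo)

# Toy 12-bit band (values in 0..4095)
band = np.array([[0.0, 2048.0], [4095.0, 1.0]])
out = min_max_normalise(band)
```

In practice the per-band extrema would be computed over the whole training set rather than a single tile, so that all tiles share one scaling.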

3.3.3. Model Training

The training process is shown in Algorithm 1. Multi-temporal images and pseudo-labels are used as model inputs. The learning rate was set to 0.0001 and training ran for 50 epochs. During training, the performance of the model is evaluated by calculating the loss function, and the model parameters are updated and optimised through backpropagation. Finally, the trained model is output.
Algorithm 1. Winter Wheat Extraction Model Implementation Steps
1 Input: Image data/Pseudo-label data;
2 Initialisation: Random initialisation of training parameters, parameter settings, learning rate 0.0001, training rounds 50 epochs;
3 Processing:
4 for j = 0 to N do (N is the number of iteration calculations)
5   calculate the loss L_loss
6  if (L_loss ≤ threshold) then
7    outputs
8   else extract features again based on the parameters updated by backpropagation and recalculate L_loss;
9 Output: Trained model

3.3.4. Model Prediction

After model training is completed, the final model is used to calculate the probability that each pixel of the image to be classified belongs to each category, and the “argmax” function is then used to find the category label with the maximum probability. During prediction, in order to avoid memory overflow, the image to be classified is cropped into blocks of the same size as the training images (512 × 512 × 12) and predicted one by one before being stitched back into the full image. In addition, a sliding window was used to obtain image blocks with a certain overlap; the classification results of the central area of each block were retained while the inaccurate edge results were discarded before stitching. This avoids obvious stitching seams and improves prediction quality.
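The centre-keeping stitching strategy can be sketched as follows; the margin width discarded at tile edges is an assumed parameter, since the paper does not state how much of the overlap is retained:

```python
import numpy as np

def stitch_center(pred_tiles, offsets, full_shape, win: int = 512,
                  margin: int = 64):
    """Paste each tile's central region into the output, discarding a
    `margin`-pixel border except where the tile touches the image edge.
    `margin` is an assumed value, not from the paper."""
    h, w = full_shape
    out = np.zeros(full_shape, dtype=np.asarray(pred_tiles[0]).dtype)
    for tile, (i, j) in zip(pred_tiles, offsets):
        top = 0 if i == 0 else margin
        left = 0 if j == 0 else margin
        bottom = win if i + win == h else win - margin
        right = win if j + win == w else win - margin
        out[i + top:i + bottom, j + left:j + right] = \
            tile[top:bottom, left:right]
    return out

# Toy 6x6 mosaic from four overlapping 4x4 tiles of constant value
tiles = [np.full((4, 4), k) for k in (1, 2, 3, 4)]
offsets = [(0, 0), (2, 0), (0, 2), (2, 2)]
mosaic = stitch_center(tiles, offsets, (6, 6), win=4, margin=1)
```

Discarding tile borders matters because convolutional predictions degrade near patch edges, where the receptive field is padded rather than filled with real context.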

3.4. Accuracy Evaluation Indicators

In this study, two types of accuracy evaluation were applied to evaluate the winter wheat extraction results. The first was a verification method for remote sensing classification results based on the confusion matrix, in which manually collected sample points were compared with the classification results to calculate classification accuracy. The main evaluation indicators were Overall Accuracy (OA), User’s Accuracy (UA), Producer’s Accuracy (PA), F1-score, and the Kappa Coefficient. Since this study addressed a binary classification problem, the confusion matrix can be expressed as in Table 3.
In Table 3, TP represents true positives, the number of pixels whose true and predicted values are both 1; FP represents false positives, the number of pixels whose true value is 0 but predicted value is 1; FN represents false negatives, the number of pixels whose true value is 1 but predicted value is 0; and TN represents true negatives, the number of pixels whose true and predicted values are both 0. The formulas of the accuracy indicators are as follows:
Overall Accuracy is a commonly used accuracy evaluation index that represents the probability of correct classification of each pixel. The calculation formula is shown as follows:
OA = (TP + TN) / (TP + FP + FN + TN)
User’s Accuracy represents the proportion of correctly classified pixels among all pixels classified as the target land type. The calculation formula is shown as follows:
UA = TP / (TP + FP)
Producer’s Accuracy represents the proportion of the number of correctly classified pixels to the total number of pixels that are actually the target land types. The calculation formula is shown as follows:
PA = TP / (TP + FN)
The F1-score is a comprehensive evaluation indicator of the User’s and Producer’s Accuracy. The calculation formula is shown as follows:
F1 = (2 × UA × PA) / (UA + PA)
The Kappa Coefficient represents the ratio of error reduction between classification and completely random classification. The calculation formula is shown as follows:
po = OA = (TP + TN) / (TP + FP + FN + TN)
pe = [(TP + FP)(TP + FN) + (FN + TN)(FP + TN)] / (TP + FP + TN + FN)²
Kappa = (po − pe) / (1 − pe)
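The five confusion-matrix indicators defined above reduce to a few lines:

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """OA, UA, PA, F1 and Kappa from binary confusion-matrix counts."""
    total = tp + fp + fn + tn
    oa = (tp + tn) / total
    ua = tp / (tp + fp)                      # user's accuracy (precision)
    pa = tp / (tp + fn)                      # producer's accuracy (recall)
    f1 = 2 * ua * pa / (ua + pa)
    # Chance agreement for the Kappa coefficient
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return {"OA": oa, "UA": ua, "PA": pa, "F1": f1, "Kappa": kappa}

# Toy confusion matrix: 40 TP, 10 FP, 10 FN, 40 TN
m = classification_metrics(40, 10, 10, 40)
```

For the symmetric toy matrix above, OA, UA, PA and F1 all equal 0.8 while Kappa drops to 0.6, illustrating how Kappa discounts agreement expected by chance.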
The second accuracy evaluation method was based on the statistical data of the Henan Province Statistical Yearbook (http://www.henan.gov.cn/zwgk/zfxxgk/fdzdgknr/tjxx/tjnj/, accessed on 1 January 2023), using Absolute Errors (AE), Absolute Percentage Errors (APE), Correlation Coefficient (R2), and other indicators for accuracy verification. The calculation formulas of each indicator are shown as follows:
AE = |AA − PA|
APE = |AA − PA| / AA × 100%
R² = [Σᵢ(xᵢ − x̄)(yᵢ − ȳ)]² / [Σᵢ(xᵢ − x̄)² · Σᵢ(yᵢ − ȳ)²]
in which n represents the number of administrative units, AA the statistical value, PA the predicted value, xi and yi the statistical and predicted values of the i-th unit, and x̄ and ȳ their respective means.
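The yearbook-comparison indicators can likewise be computed directly; here xi and yi are identified with the statistical and predicted areas per administrative unit:

```python
import numpy as np

def area_errors(stat, pred):
    """AE, APE (%) and R^2 between statistical and predicted areas."""
    stat = np.asarray(stat, dtype=float)
    pred = np.asarray(pred, dtype=float)
    ae = np.abs(stat - pred)                 # absolute error per unit
    ape = ae / stat * 100.0                  # absolute percentage error
    sx, sy = stat - stat.mean(), pred - pred.mean()
    r2 = (sx * sy).sum() ** 2 / ((sx ** 2).sum() * (sy ** 2).sum())
    return ae, ape, r2

# Hypothetical areas (e.g. kha) for three administrative units
ae, ape, r2 = area_errors([100, 200, 300], [110, 190, 300])
```

This R² is the squared Pearson correlation between the two area series, so it measures agreement in relative ranking rather than absolute magnitude; AE and APE capture the magnitude errors.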

4. Results and Discussion

4.1. Pseudo-Label Production and Accuracy Verification Based on the Random Forest Model

The experimental results showed that the “generating random sample points—model training—winter wheat extraction” process was iterated four times in total; the overlap rate for each iteration is shown in Table 4. As the number of training rounds of the random forest model increased, the overlap rate between each new round of extraction results and the previous round increased steadily, eventually reaching 99.25%. Notably, the overlap rate between the first extraction result and the WCI segmentation result reached 96.1%, indicating that the WCI index can effectively capture the spectral variation characteristics of winter wheat and is highly sensitive to it.
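One possible way to drive such a loop is to compare successive binary wheat maps pixel by pixel and stop once they barely change. The sketch below assumes flat 0/1 masks; `extract` is a hypothetical stand-in for one full “generate random sample points → train random forest → extract winter wheat” round, and the 0.99 threshold is illustrative rather than a value reported in the paper:

```python
def overlap_rate(prev_mask, new_mask):
    """Share of pixels classified identically in two binary wheat maps."""
    same = sum(p == q for p, q in zip(prev_mask, new_mask))
    return same / len(prev_mask)

def iterate_until_stable(initial_mask, extract, threshold=0.99, max_rounds=10):
    """Repeat one extraction round until successive maps barely change.

    `extract` is a placeholder for a complete sample-generation /
    random-forest-training / extraction round.
    """
    prev = initial_mask
    for i in range(max_rounds):
        new = extract(prev)
        if overlap_rate(prev, new) >= threshold:
            return new, i + 1
        prev = new
    return prev, max_rounds
```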
In order to further verify the accuracy of the pseudo-labels, manually-selected sample points were employed in this study, as shown in Table 5. As illustrated by the results, the overall accuracy of the pseudo-labels reached 97.53%, the Kappa Coefficient was 0.95, and the F1-scores of the winter wheat and non-winter wheat land types reached 0.97 and 0.98, respectively. Judged by the manually-selected sample points, the pseudo-labels exhibited high accuracy, providing reliable sample data for the subsequent training of deep learning models.
On the other hand, a high-resolution satellite remote sensing image from May 2022 was introduced as a reference to verify the soundness of the pseudo-label production method and to check, by comparison with the WCI segmentation results, whether the pseudo-labels were sufficiently accurate. The details are shown in Figure 6. Although the WCI could effectively identify winter wheat planting areas, some misclassifications remained, as shown in Figure 6a–c. These misclassifications were successfully eliminated in the pseudo-label data thanks to the “sample point establishment—model training” process: potential errors were effectively corrected by the classification ability of the random forest algorithm. In addition, the field ridges in the pseudo-labels were further differentiated.

4.2. Extraction Result Analysis

To fully verify whether the pseudo-label data constructed with the above methods can meet the requirements for training deep learning models, and whether conducting winter wheat mapping based on pseudo-label data is sound, this study employed the U-Net model, U-Net++ model, and random forest model to extract winter wheat in Henan Province in 2022; the extraction results are shown in Figure 7. Simultaneously, to verify whether the deep learning models constructed from pseudo-labels are transferable, we input the multi-temporal image dataset of Henan Province in 2019 into the 2022 U-Net model for prediction; the extraction results are shown in Figure 7d. From Figure 7a–c, it can be observed that the spatial distribution of winter wheat in Nanyang City and the southern part of Luoyang City in the random forest results differs significantly from that of the two deep learning models. By comparing the original images, we found that the western part of Nanyang City and the southern part of Luoyang City are mainly mountainous terrain covered by extensive forest vegetation, and the random forest model misclassified large areas of this vegetation as winter wheat during the transfer prediction.
To further compare the extraction results of winter wheat in Henan Province in 2022 among the three models, this study selected the original images from April to May 2022 in Henan Province as reference data. In areas with significant differences between the extraction results of the U-Net model and the random forest model, six rectangular regions of 0.1° × 0.1° (a–f) were selected for an enlarged comparison of the extraction results of the three models. The approximate locations of the rectangular regions are shown in Figure 8, and the comparison details are shown in Figure 9.
As can be seen from Figure 9a–c, the U-Net model demonstrates the highest extraction accuracy, followed by the U-Net++ model, while the random forest model has the lowest accuracy. Figure 9a–c represent relatively dense winter wheat planting areas, where the U-Net and U-Net++ models extract more complete and contiguous winter wheat plots; in contrast, the random forest model’s results show significant omissions. In regions where winter wheat planting areas are interspersed with bare land, evergreen vegetation, and other crop types (Figure 9e,f), the U-Net and U-Net++ models still maintain high classification accuracy. However, compared with identifying large contiguous fields, these models are more prone to misclassification in scattered plots and in boundary areas between different land use types. This is because the U-Net and U-Net++ models learn the outline features and boundary relationships of the target features during training; when the target feature (winter wheat) is planted in a scattered fashion, its boundary outlines become indistinct in the image while its spectral features become more complex, making the models more prone to errors during classification and recognition. Although the random forest model can achieve high accuracy in small-area winter wheat extraction tasks, its poor robustness and transferability make misclassification and omission errors likely when the model is transferred to areas outside the training region for prediction, which makes it difficult to apply over large areas.

4.3. Accuracy Verification of Extraction Results

The 2022 extraction results of the U-Net model, U-Net++ model, and random forest model were validated for accuracy using statistical data and manual sample data from Henan Province in 2022. The 2019 prediction results of the U-Net model were validated for accuracy using statistical data and manual sample data from Henan Province in 2019.

4.3.1. Accuracy Evaluation of Extraction Results Based on Statistical Data

Experimental data indicated that the winter wheat planting areas in Henan Province in 2022 extracted by the U-Net model, U-Net++ model, and random forest model were 5915.03 K hectares, 5275.55 K hectares, and 4852.01 K hectares, respectively. Using the statistical value (5682.46 K hectares) as a reference, the Absolute Errors (AE) were 232.57 K hectares, 406.91 K hectares, and 830.45 K hectares, respectively, with Absolute Percentage Errors (APE) of 4.09%, 7.16%, and 14.61%. The winter wheat planting area in Henan Province in 2019 predicted by the U-Net model was 5348.16 K hectares, with an AE of 365.44 K hectares relative to the statistical value (5713.60 K hectares) and an APE of 6.40%.
To further evaluate the 2022 extraction results of the three models and the 2019 prediction results of the U-Net model, the Absolute Error (AE), Absolute Percentage Error (APE), and Correlation Coefficient (R2) were employed to carry out an accuracy evaluation at the municipal scale. The AE and APE values of the 18 cities in Henan Province were grouped into intervals, and the number of cities in each interval was counted, as shown in Figure 10.
From the 2022 accuracy evaluation results (Figure 10a–e), it can be observed that the extraction accuracies of the three models follow the order U-Net model > U-Net++ model > random forest model. In terms of the AE indicator, most cities in the U-Net results have an AE between 0 and 50 K hectares, with the largest number of cities in the 0–10 K hectares range. In contrast, in the U-Net++ and random forest results, the largest number of cities have an AE between 20 and 50 K hectares, followed by the 0–10 K hectares and 50–100 K hectares ranges. For the APE indicator, most cities in the U-Net results have an APE below 10%, but two cities exceed 50%; in the U-Net++ and random forest results, most cities have an APE below 30%, but three cities exceed 50%. An analysis of the cities with large extraction errors in the U-Net and U-Net++ models shows that, in the U-Net results, the cities with an AE exceeding 100 K hectares are Zhoukou City (statistical value: 734.84 K hectares) and Luoyang City (statistical value: 231.46 K hectares), and the cities with an APE exceeding 50% are Luoyang City and Sanmenxia City (statistical value: 75.11 K hectares). Similarly, in the U-Net++ results, Luoyang City has an AE exceeding 100 K hectares, and Luoyang City and Sanmenxia City have an APE exceeding 50%. These cities also show significant errors in the random forest results. Based on the experimental data, we attribute the higher errors of the deep learning models in these three cities primarily to their locations in the western and eastern parts of Henan Province, whereas the pseudo-label construction area lies in the central part of the province; the phenological cycles or spectral characteristics of winter wheat in these three cities therefore differ from those in the model construction area.
Additionally, the pseudo-label data constructed with the random forest model are not entirely accurate: inherent errors remain due to factors such as spectral differences that the random forest model cannot overcome. Although Lee [29] pointed out that incorrectly labelled pseudo-labels can be treated as an entropy regularisation term in the loss function, allowing deep learning models to train normally, the models cannot entirely avoid being affected by these errors, which results in some misclassification and ultimately prevents all winter wheat land types from being correctly identified. This also explains why the cities with the largest extraction errors are the same for the deep learning models and the random forest model. Therefore, future research should further analyse how to improve the quality of pseudo-labels.
Moreover, correlation analysis with statistical data shows that the U-Net model has the highest Correlation Coefficient (R2) at 0.9741, followed by the U-Net++ model at 0.9689, and the random forest model at only 0.8835, indicating that the U-Net model achieved the highest classification accuracy.
Using the U-Net model, which has the higher extraction accuracy, the spatial distribution of winter wheat in Henan Province in 2019 was predicted; the accuracy evaluation results are shown in Figure 10a,e,f. Owing to factors such as interannual differences in the phenological period of winter wheat and varying image acquisition times, the accuracy of the 2019 winter wheat mapping results decreased. Nevertheless, the AE and APE for most cities remained below 50 K hectares and 30%, respectively, and the Correlation Coefficient (R2) between the model’s prediction results and the statistical data is 0.9313, indicating that the U-Net model constructed from pseudo-labels can still maintain high extraction accuracy in other years.

4.3.2. Accuracy Verification Based on Manually-Selected Sample Points

Manual sample data were used to validate the 2022 extraction results of the three models and the 2019 prediction results of the U-Net model. The OA, F1-score, Kappa Coefficient, UA, and PA were calculated, as shown in Table 6. The 2022 accuracy evaluation results indicate that the extraction accuracies of the three models follow the order U-Net > U-Net++ > RF. In the U-Net++ results, the UA for winter wheat and the PA for non-winter wheat are higher than in the U-Net results, suggesting that the U-Net++ model more often misclassified winter wheat areas as non-winter wheat. This is consistent with the planting areas extracted by the two models: the U-Net model overestimates, while the U-Net++ model underestimates, the winter wheat planting area. The random forest model has the lowest extraction accuracy. In the 2019 accuracy evaluation results, the overall accuracy of the U-Net model reached 89.15%, indicating that the U-Net model also maintains higher extraction accuracy than the random forest model in other years.
The accuracy evaluation results above indicate the following: (1) Deep learning models constructed from pseudo-labels exhibit higher classification accuracy in winter wheat extraction tasks, whereas the random forest model is more susceptible to factors such as crop phenological differences and spectral differences when transferred to non-training areas for prediction, making misclassification more likely. Compared with traditional machine learning algorithms such as random forest, deep learning models demonstrate higher spatiotemporal adaptability and robustness: provided the image acquisition dates are consistent, they can extract winter wheat in different years, which indirectly confirms that the pseudo-label data generated in this study meet the training accuracy requirements of deep learning models. (2) The U-Net++ model did not achieve higher extraction accuracy. The extraction accuracy of deep learning models is influenced by factors such as model structure, image quality, sample accuracy, and the number of training iterations. This study replaced manually interpreted samples with pseudo-label samples for model construction, which inherently introduces a certain level of error; sample accuracy thus becomes the predominant factor limiting extraction performance. Despite introducing more complex skip connections on top of the U-Net architecture, the U-Net++ model could not demonstrate higher extraction performance owing to the limited accuracy of the pseudo-labels. (3) The pseudo-label construction method for winter wheat proposed in this study is scientifically feasible. The pseudo-labels meet the accuracy requirements for training deep learning models and are expected to provide a feasible technical pathway for the intelligent extraction of winter wheat spatial distribution information in the future.

4.4. Limitation & Expectation

In this study, a method to establish pseudo-labels was proposed, aiming to solve the shortage of training samples for deep learning models in large-area crop mapping tasks. On this basis, the extraction of winter wheat spatial distribution information with the U-Net model was accomplished, achieving good results. However, some limitations remained during the research and should be addressed in future work.
Firstly, regarding the initial extraction of winter wheat, only the WCI indicator was employed. Although this indicator can effectively represent changes in the spectral features of the winter wheat canopy, some errors were unavoidable owing to differences in growing seasons, cloud disturbance, and anomalous pixel values. Secondly, the pseudo-label data were obtained from the random forest model, whose extraction results contain potential errors that the model itself cannot overcome; these errors were then passed on to the deep learning models and amplified, as evidenced by the fact that the cities with the most significant errors were the same in the extraction results of both the random forest model and the U-Net model.
In short, other supplementary data and indicators should be employed in subsequent research to optimise the winter wheat extraction results; meanwhile, only pixels with higher confidence levels should be selected when establishing pseudo-labels, so as to eliminate significant errors and improve pseudo-label accuracy.

5. Conclusions

In this study, Sentinel-2A multi-temporal image data were employed with the help of the GEE cloud platform; the WCI phenological indicator and the Otsu algorithm were used to complete the preliminary extraction of winter wheat; random sample points were generated; the random forest algorithm and these sample points were used to establish pseudo-label data for U-Net model training in typical areas; and, finally, the winter wheat planting areas in Henan Province in 2022 were extracted with the trained U-Net model. The extraction results were compared with statistical yearbooks and manually-selected sample points to verify the feasibility and reliability of the research framework. The main conclusions are as follows: (1) Pseudo-label data were established through the iterative process of “random sample point generation—random forest model training—winter wheat extraction”. As the number of iterations increased, the overlap rate of the random forest extraction results increased steadily to 99.25%. (2) Winter wheat in Henan Province in 2022 was extracted using the U-Net model, U-Net++ model, and random forest model constructed from the pseudo-label data, and the U-Net model was additionally used to predict the winter wheat planting areas in Henan Province in 2019. The accuracy evaluation indicates that the extraction accuracies of the three models in 2022 follow the order U-Net model > U-Net++ model > random forest model; although the accuracy of the U-Net model’s 2019 prediction decreased, it remained higher than that of the random forest model. Moreover, owing to the limited accuracy of the pseudo-labels, the U-Net++ model did not achieve higher extraction accuracy. (3) The experimental results demonstrate that deep learning models constructed from pseudo-labels exhibit higher extraction accuracy.
Compared to traditional machine learning models like random forest, they have higher spatiotemporal adaptability and robustness, further validating the scientific and practical feasibility of pseudo-labels and their generation strategies. This approach is expected to provide a feasible technical pathway for intelligent extraction of winter wheat spatial distribution information in the future.

Author Contributions

Conceptualization, J.Z. and J.D.; methodology, J.Z.; data curation, J.Z. and L.X.; investigation, J.Z.; validation, J.Z.; writing—original draft preparation, J.Z.; writing—review and editing, J.D., P.L. and Y.W.; visualization, J.Z.; supervision, S.Y., C.H., X.H. and A.L.; project administration, J.D.; funding acquisition, J.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology Basic Resources Survey Special Project (2023FY100102), National Natural Science Foundation of China (22376179), Construction and Application Demonstration of Natural Resources Satellite Remote Sensing Technology System (102121201330000009008), and the Zhejiang Provincial Natural Science Foundation (TGS24D010008).

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because the projects that funded this study have not yet been formally completed; premature disclosure of the data could therefore create privacy or ethical risks.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. State of the Art and Perspective of Agricultural Land Use Remote Sensing Information Extraction-All Databases. Available online: https://webofscience.clarivate.cn/wos/alldb/full-record/CSCD:6732993 (accessed on 19 April 2024).
  2. Zhou, T.; Pan, J.; Zhang, P.; Wei, S.; Han, T. Mapping Winter Wheat with Multi-Temporal SAR and Optical Images in an Urban Agricultural Region. Sensors 2017, 17, 1210. [Google Scholar] [CrossRef] [PubMed]
  3. Tiwar, V.; Matin, M.A.; Qamer, F.M.; Ellenburg, W.L.; Bajracharya, B.; Vadrevu, K.; Rushi, B.R.; Yusafi, W. Wheat Area Mapping in Afghanistan Based on Optical and SAR Time-Series Images in Google Earth Engine Cloud Environment. Front. Environ. Sci. 2020, 8, 77. [Google Scholar] [CrossRef]
  4. Chen, J.; Chen, J.; Liao, A.; Cao, X.; Chen, L.; Chen, X.; He, C.; Han, G.; Peng, S.; Lu, M.; et al. Global Land Cover Mapping at 30m Resolution: A POK-Based Operational Approach. ISPRS J. Photogramm. Remote Sens. 2015, 103, 7–27. [Google Scholar] [CrossRef]
  5. Lefebvre, A.; Sannier, C.; Corpetti, T. Monitoring Urban Areas with Sentinel-2A Data: Application to the Update of the Copernicus High Resolution Layer Imperviousness Degree. Remote Sens. 2016, 8, 606. [Google Scholar] [CrossRef]
  6. Rujoiu-Mare, M.-R.; Olariu, B.; Mihai, B.-A.; Nistor, C.; Săvulescu, I. Land Cover Classification in Romanian Carpathians and Subcarpathians Using Multi-Date Sentinel-2 Remote Sensing Imagery. Eur. J. Remote Sens. 2017, 50, 496–508. [Google Scholar] [CrossRef]
  7. Aryal, J.; Sitaula, C.; Aryal, S. NDVI Threshold-Based Urban Green Space Mapping from Sentinel-2A at the Local Governmental Area (LGA) Level of Victoria, Australia. Land 2022, 11, 351. [Google Scholar] [CrossRef]
  8. Ali, U.; Esau, T.J.; Farooque, A.A.; Zaman, Q.U.; Abbas, F.; Bilodeau, M.F. Limiting the Collection of Ground Truth Data for Land Use and Land Cover Maps with Machine Learning Algorithms. ISPRS Int. J. Geo-Inf. 2022, 11, 333. [Google Scholar] [CrossRef]
  9. Wei, P.; Chai, D.; Lin, T.; Tang, C.; Du, M.; Huang, J. Large-Scale Rice Mapping under Different Years Based on Time-Series Sentinel-1 Images Using Deep Semantic Segmentation Model. ISPRS J. Photogramm. Remote Sens. 2021, 174, 198–214. [Google Scholar] [CrossRef]
  10. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; IEEE: New York, NY, USA, 2014; pp. 580–587. [Google Scholar]
  11. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
  12. Parente, L.; Taquary, E.; Silva, A.P.; Souza, C.; Ferreira, L. Next Generation Mapping: Combining Deep Learning, Cloud Computing, and Big Remote Sensing Data. Remote Sens. 2019, 11, 2881. [Google Scholar] [CrossRef]
  13. Gargiulo, M.; Dell’Aglio, D.A.G.; Iodice, A.; Riccio, D.; Ruello, G. Integration of Sentinel-1 and Sentinel-2 Data for Land Cover Mapping Using W-Net. Sensors 2020, 20, 2969. [Google Scholar] [CrossRef] [PubMed]
  14. Zhang, D.; Pan, Y.; Zhang, J.; Hu, T.; Zhao, J.; Li, N.; Chen, Q. A Generalized Approach Based on Convolutional Neural Networks for Large Area Cropland Mapping at Very High Resolution. Remote Sens. Environ. 2020, 247, 111912. [Google Scholar] [CrossRef]
  15. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651. [Google Scholar] [CrossRef] [PubMed]
  16. Yang, W. Efficient Semantic Segmentation Method Based on Convolutional Neural Networks, in Chinese. Doctoral Dissertation, University of Chinese Academy of Sciences, Institute of Optics and Electronics, Chinese Academy of Sciences, Beijing, China, 2019. [Google Scholar]
  17. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
  18. Lin, H.; Shi, Z.; Zou, Z. Maritime Semantic Labeling of Optical Remote Sensing Images with Multi-Scale Fully Convolutional Network. Remote Sens. 2017, 9, 480. [Google Scholar] [CrossRef]
  19. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention, PT III, Munich, Germany, 5–9 October 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer International Publishing Ag: Cham, Switzerland, 2015; Volume 9351, pp. 234–241. [Google Scholar]
  20. Zheng, S.; Lu, J.; Zhao, H.; Zhu, X.; Luo, Z.; Wang, Y.; Fu, Y.; Feng, J.; Xiang, T.; Torr, P.H.S.; et al. Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021, Nashville, TN, USA, 20–25 June 2021; IEEE Computer Soc: Los Alamitos, CA, USA, 2021; pp. 6877–6886. [Google Scholar]
  21. Wei, P.; Chai, D.; Huang, R.; Peng, D.; Lin, T.; Sha, J.; Sun, W.; Huang, J. Rice Mapping Based on Sentinel-1 Images Using the Coupling of Prior Knowledge and Deep Semantic Segmentation Network: A Case Study in Northeast China from 2019 to 2021. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102948. [Google Scholar] [CrossRef]
  22. Du, M.; Huang, J.; Wei, P.; Yang, L.; Chai, D.; Peng, D.; Sha, J.; Sun, W.; Huang, R. Dynamic Mapping of Paddy Rice Using Multi-Temporal Landsat Data Based on a Deep Semantic Segmentation Model. Agronomy 2022, 12, 1583. [Google Scholar] [CrossRef]
  23. Wei, S.; Zhang, H.; Wang, C.; Wang, Y.; Xu, L. Multi-Temporal SAR Data Large-Scale Crop Mapping Based on U-Net Model. Remote Sens. 2019, 11, 68. [Google Scholar] [CrossRef]
  24. Pan, Z.; Xu, J.; Guo, Y.; Hu, Y.; Wang, G. Deep Learning Segmentation and Classification for Urban Village Using a Worldview Satellite Image Based on U-Net. Remote Sens. 2020, 12, 1574. [Google Scholar] [CrossRef]
  25. Pang, J.; Zhang, R.; Yu, B.; Liao, M.; Lv, J.; Xie, L.; Li, S.; Zhan, J. Pixel-Level Rice Planting Information Monitoring in Fujin City Based on Time-Series SAR Imagery. Int. J. Appl. Earth Obs. Geoinf. 2021, 104, 102551. [Google Scholar] [CrossRef]
  26. Paris, C.; Bruzzone, L. A Novel Approach to the Unsupervised Extraction of Reliable Training Samples From Thematic Products. IEEE Trans. Geosci. Remote Sens. 2021, 59, 1930–1948. [Google Scholar] [CrossRef]
  27. Supriatna; Rokhmatuloh; Wibowo, A.; Shidiq, I.P.A. Spatial Analysis of Rice Phenology Using Sentinel-1 and Sentinel-2 in Karawang Regency. IOP Conf. Ser. Earth Environ. Sci. 2020, 500, 012033. [Google Scholar] [CrossRef]
  28. Saadat, M.; Seydi, S.T.; Hasanlou, M.; Homayouni, S. A Convolutional Neural Network Method for Rice Mapping Using Time-Series of Sentinel-1 and Sentinel-2 Imagery. Agriculture 2022, 12, 2083. [Google Scholar] [CrossRef]
  29. Lee, D. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Proceedings of the Workshop on Challenges in Representation Learning, ICML, Atlanta, GA, USA, 16–21 June 2013; Volume 3, p. 896. [Google Scholar]
  30. Zhu, A.-X.; Zhao, F.-H.; Pan, H.-B.; Liu, J.-Z. Mapping Rice Paddy Distribution Using Remote Sensing by Coupling Deep Learning with Phenological Characteristics. Remote Sens. 2021, 13, 1360. [Google Scholar] [CrossRef]
  31. Wei, P.; Huang, R.; Lin, T.; Huang, J. Rice Mapping in Training Sample Shortage Regions Using a Deep Semantic Segmentation Model Trained on Pseudo-Labels. Remote Sens. 2022, 14, 328. [Google Scholar] [CrossRef]
  32. Mullissa, A.; Vollrath, A.; Odongo-Braun, C.; Slagter, B.; Balling, J.; Gou, Y.; Gorelick, N.; Reiche, J. Sentinel-1 SAR Backscatter Analysis Ready Data Preparation in Google Earth Engine. Remote Sens. 2021, 13, 1954. [Google Scholar] [CrossRef]
  33. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-Scale Geospatial Analysis for Everyone. Remote Sens. Environ. 2017, 202, 18–27. [Google Scholar] [CrossRef]
  34. Immitzer, M.; Vuolo, F.; Atzberger, C. First Experience with Sentinel-2 Data for Crop and Tree Species Classifications in Central Europe. Remote Sens. 2016, 8, 166. [Google Scholar] [CrossRef]
  35. Lobert, F.; Löw, J.; Schwieder, M.; Gocht, A.; Schlund, M.; Hostert, P.; Erasmi, S. A Deep Learning Approach for Deriving Winter Wheat Phenology from Optical and SAR Time Series at Field Level. Remote Sens. Environ. 2023, 298, 113800. [Google Scholar] [CrossRef]
  36. Xu, F.; Li, Z.; Zhang, S.; Huang, N.; Quan, Z.; Zhang, W.; Liu, X.; Jiang, X.; Pan, J.; Prishchepov, A.V. Mapping Winter Wheat with Combinations of Temporally Aggregated Sentinel-2 and Landsat-8 Data in Shandong Province, China. Remote Sens. 2020, 12, 2065. [Google Scholar] [CrossRef]
  37. Pan, L.; Xia, H.; Zhao, X.; Guo, Y.; Qin, Y. Mapping Winter Crops Using a Phenology Algorithm, Time-Series Sentinel-2 and Landsat-7/8 Images, and Google Earth Engine. Remote Sens. 2021, 13, 2510. [Google Scholar] [CrossRef]
  38. Rapid Mapping of Winter Wheat in Henan Province-All Databases. Available online: https://webofscience.clarivate.cn/wos/alldb/full-record/CSCD:6012581 (accessed on 19 April 2024).
  39. Yang, G.; Li, X.; Liu, P.; Yao, X.; Zhu, Y.; Cao, W.; Cheng, T. Automated In-Season Mapping of Winter Wheat in China with Training Data Generation and Model Transfer. ISPRS J. Photogramm. Remote Sens. 2023, 202, 422–438. [Google Scholar] [CrossRef]
  40. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  41. Chai, D.; Newsam, S.; Zhang, H.K.; Qiu, Y.; Huang, J. Cloud and Cloud Shadow Detection in Landsat Imagery Based on Deep Convolutional Neural Networks. Remote Sens. Environ. 2019, 225, 307–316. [Google Scholar] [CrossRef]
Figure 1. Study Area. The green part represents Henan Province, and the red frame represents Study Area A.
Figure 2. Temporal NDVI variation graph of various features in the study area.
Figure 3. Schematic illustration of dataset cropping. (a) Composition of multi-channel data; (b) Illustration of sliding window; (c) Sample image cropped by sliding window.
Figure 4. Research Technical Route.
Figure 5. Schematic diagram of U-Net model structure.
Figure 6. Random forest extraction results and data comparison. (a) River courses; (b) Vegetation and water bodies; (c) Other land types of crop.
Figure 7. Extraction results of winter wheat in Henan Province under different models. (a) Spatial distribution map of winter wheat in Henan Province in 2022 based on the U-Net model; (b) Spatial distribution map of winter wheat in Henan Province in 2022 based on the U-Net++ model; (c) Spatial distribution map of winter wheat in Henan Province in 2022 based on the random forest model; (d) Spatial distribution map of winter wheat in Henan Province in 2019 based on the U-Net model.
Figure 8. Schematic diagram of rectangular area location distribution.
Figure 9. Detailed comparison of the winter wheat extraction results in Henan Province for 2022 based on different models. (a–f) correspond to the six regions in Figure 8, respectively.
Figure 10. Extraction accuracy evaluation charts for different models based on statistical data. (a) Distribution of the number of cities within different absolute error (AE) intervals for each model; (b) Distribution of the number of cities within different absolute percentage error (APE) intervals for each model; (c) Correlation between winter wheat areas extracted by the U-Net model and statistical data for each city in 2022; (d) Correlation between winter wheat areas extracted by the U-Net++ model and statistical data for each city in 2022; (e) Correlation between winter wheat areas extracted by the random forest model and statistical data for each city in 2022; (f) Correlation between winter wheat areas extracted by the U-Net model and statistical data for each city in 2019.
Table 1. Sentinel-2 image data acquisition information within Study Area A and Henan Province.

| Time Series | Image Information | Study Area A (2022) | Henan Province (2022) | Study Area A (2019) | Henan Province (2019) |
|---|---|---|---|---|---|
| 1 | Date | 11 October 2022–21 October 2022 | 11 October 2022–21 October 2022 | 1 October 2018–21 October 2018 | 1 October 2018–21 October 2018 |
| 1 | Number | 15 | 81 | 22 | 134 |
| 2 | Date | 26 April 2022–16 May 2022 | 26 April 2022–16 May 2022 | 16 April 2019–16 May 2019 | 16 April 2019–16 May 2019 |
| 2 | Number | 13 | 86 | 17 | 86 |
| 3 | Date | 1 June 2023–21 June 2023 | 1 June 2023–21 June 2023 | 1 June 2019–21 June 2019 | 1 June 2019–21 June 2019 |
| 3 | Number | 24 | 105 | 17 | 91 |
Table 2. Sample collection table.

| Type of Sample Points | Study Area A (2022) | Henan Province (2022) | Henan Province (2019) |
|---|---|---|---|
| Winter Wheat | 600 | 443 | 400 |
| Non-Winter Wheat | 900 | 857 | 800 |
| Total number | 1500 | 1300 | 1200 |
Table 3. Binary classification confusion matrix.

| | Estimate Value = 1 | Estimate Value = 0 |
|---|---|---|
| True value = 1 | TP | FN |
| True value = 0 | FP | TN |
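The accuracy metrics reported in Tables 5 and 6 (OA, UA, PA, F1-score, and the Kappa coefficient) all derive from the confusion matrix in Table 3. The following sketch uses the standard formulas; the counts in the example call are made up for illustration and are not from the paper:

```python
def binary_metrics(tp, fn, fp, tn):
    """Standard accuracy metrics from the binary confusion matrix in Table 3."""
    total = tp + fn + fp + tn
    oa = (tp + tn) / total            # overall accuracy
    ua = tp / (tp + fp)               # user's accuracy (precision)
    pa = tp / (tp + fn)               # producer's accuracy (recall)
    f1 = 2 * ua * pa / (ua + pa)      # harmonic mean of UA and PA
    # Kappa: agreement beyond chance, with expected agreement pe
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, ua, pa, f1, kappa

# Illustrative counts only
oa, ua, pa, f1, kappa = binary_metrics(tp=50, fn=10, fp=5, tn=35)
print(f"OA={oa:.2%}, F1={f1:.3f}, Kappa={kappa:.2f}")  # OA=85.00%, F1=0.870, Kappa=0.69
```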
Table 4. Random forest model extraction result overlap rate.

| Training Times | First | Second | Third | Fourth |
|---|---|---|---|---|
| Overlap Rate | 96.10% | 98.53% | 98.86% | 99.25% |
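One way to compute the overlap rate in Table 4 is as the fraction of pixels classified identically by two successive random forest extraction results; this reading, and the toy masks below, are assumptions for illustration:

```python
import numpy as np

def overlap_rate(mask_a, mask_b):
    """Fraction of pixels that receive the same class label in two
    successive binary extraction maps (1 = winter wheat, 0 = other)."""
    return float(np.mean(mask_a == mask_b))

prev_result = np.array([1, 0, 1, 1, 0])  # toy masks, not real data
curr_result = np.array([1, 0, 1, 0, 0])
print(overlap_rate(prev_result, curr_result))  # 0.8
```

Under this reading, the rising rates in Table 4 (96.10% to 99.25%) indicate that successive iterations of the "sample points, training, extraction" loop converge toward a stable classification.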
Table 5. Evaluation of pseudo-label accuracy based on artificial samples.

| Indicators | OA | Kappa Coefficient | F1-Score (Winter Wheat) | F1-Score (Non-Winter Wheat) | UA (Winter Wheat) | UA (Non-Winter Wheat) | PA (Winter Wheat) | PA (Non-Winter Wheat) |
|---|---|---|---|---|---|---|---|---|
| Value | 97.53% | 0.95 | 0.97 | 0.98 | 95.77% | 98.76% | 98.17% | 97.11% |
Table 6. Accuracy evaluation results based on confusion matrix. U-Net 2019 refers to the accuracy evaluation of the 2019 extraction results based on the U-Net model.

| Model | OA | Kappa Coefficient | F1-Score (Winter Wheat) | F1-Score (Non-Winter Wheat) | UA (Winter Wheat) | UA (Non-Winter Wheat) | PA (Winter Wheat) | PA (Non-Winter Wheat) |
|---|---|---|---|---|---|---|---|---|
| U-Net | 93.08% | 0.85 | 0.91 | 0.95 | 84.41% | 98.73% | 97.74% | 90.67% |
| U-Net++ | 92.64% | 0.84 | 0.89 | 0.94 | 87.23% | 95.67% | 91.85% | 93.05% |
| RF | 86.08% | 0.70 | 0.81 | 0.89 | 76.95% | 91.52% | 84.42% | 86.93% |
| U-Net 2019 | 89.15% | 0.77 | 0.85 | 0.91 | 81.76% | 93.76% | 89.08% | 89.19% |

Share and Cite

MDPI and ACS Style

Zhang, J.; You, S.; Liu, A.; Xie, L.; Huang, C.; Han, X.; Li, P.; Wu, Y.; Deng, J. Winter Wheat Mapping Method Based on Pseudo-Labels and U-Net Model for Training Sample Shortage. Remote Sens. 2024, 16, 2553. https://doi.org/10.3390/rs16142553
