Article

Adversarial Positive-Unlabeled Learning-Based Invasive Plant Detection in Alpine Wetland Using Jilin-1 and Sentinel-2 Imageries

1 State Key Laboratory of Ecological Safety and Sustainable Development in Arid Lands, Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, Urumqi 830011, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 China-Kazakhstan Joint Laboratory for Remote Sensing Technology and Application, Al-Farabi Kazakh National University, Almaty 050012, Kazakhstan
4 Key Laboratory of RS & GIS Application Xinjiang, Urumqi 830011, China
5 School of Geography, Geomatics and Planning, Jiangsu Normal University, Xuzhou 221116, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(6), 1041; https://doi.org/10.3390/rs17061041
Submission received: 26 January 2025 / Revised: 12 March 2025 / Accepted: 13 March 2025 / Published: 16 March 2025
(This article belongs to the Special Issue Remote Sensing for Management of Invasive Species)

Abstract

Invasive plants (IPs) pose a significant threat to local ecosystems. Recent advances in remote sensing (RS) and deep learning (DL) have significantly improved the accuracy of IP detection. However, mainstream DL methods often require large, high-quality labeled datasets, leading to resource inefficiencies. In this study, a deep learning framework called adversarial positive-unlabeled learning (APUL) is proposed to achieve high-precision IP detection using a limited number of target plant samples. APUL employs a dual-branch discriminator to constrain the class prior-free classifier, effectively harnessing information from positive-unlabeled data through the adversarial process and enhancing the accuracy of IP detection. The framework was tested on very high-resolution Jilin-1 and Sentinel-2 imagery of the Bayinbuluke grasslands in Xinjiang, where the invasion of Pedicularis kansuensis has caused serious ecological and livestock damage. Results indicate that the adversarial structure can significantly improve the performance of positive-unlabeled learning (PUL) methods, and that the class prior-free approach outperforms traditional PUL methods in IP detection. APUL achieved an overall accuracy of 92.2% and an F1-score of 0.80, revealing that Pedicularis kansuensis has invaded 4.43% of the local vegetation area in the Bayinbuluke grasslands and underscoring the urgent need for timely control measures.

1. Introduction

Plants play a crucial role in global ecosystems, contributing to various functions such as climate regulation [1], carbon sink formation [2], biodiversity maintenance [3], and soil protection [4]. However, invasive plants (IPs) pose a significant threat to ecosystem balance, resulting in biodiversity loss [5], degradation of ecological services [6], and adverse economic and social impacts [7]. Moreover, the phenomenon of plant invasions, driven by human activities, has escalated globally in the context of globalization [8,9]. In northwestern China, for instance, the plant species Pedicularis kansuensis is rapidly spreading across alpine grasslands, significantly disrupting local livestock husbandry and ecosystem stability [10]. Notably, alpine wetlands, which are particularly vulnerable to external changes, demand heightened attention in the face of IP threats [11].
Free of their natural enemies in their original environments, IPs can fully exploit their competitive potential against native plants, enabling them to spread rapidly following a successful invasion [12,13,14]. To effectively manage the spread of invasive species in wetland ecosystems, it is essential to obtain timely and accurate information regarding their spatio-temporal distribution. Traditionally, GPS devices, cameras, and drones have been employed to monitor IPs in invaded areas; however, these methods are challenging to implement over large areas [15,16].
Remote sensing (RS) has emerged as the dominant method for large-scale vegetation mapping [17,18,19]. The spatio-temporal and spectral information provided by RS images enables efficient identification of vegetation areas while significantly reducing the costs associated with field surveys and offering timely insights into land surface conditions [17]. However, implementing species-level vegetation monitoring using satellite RS images remains challenging [20,21]. The high degree of spectral similarity among plant species complicates the attainment of satisfactory classification results from multispectral images with medium-spatial resolution when employing traditional supervised or unsupervised methods [21,22]. In response, extensive research efforts have focused on innovations in data acquisition and classification techniques to enhance species identification performance [18,23]. For instance, the utilization of hyperspectral, high-resolution, and multi-temporal satellite RS images has significantly enriched the available land surface information [24,25]. Concurrently, DL techniques have effectively harnessed the full potential of these data types [26,27]. The integration of these advancements has significantly enhanced the accuracy of species-level vegetation mapping [28].
Numerous studies have demonstrated the effectiveness of DL methods in identifying IPs in RS images [21,29,30,31]. However, a critical prerequisite for the successful application of DL is the availability of high-quality training datasets, which can be labor-intensive to compile in the context of alpine wetland IP monitoring [32,33]. Typically, multi-class samples are required for model training; however, the target invasive species often represents the sole class of interest for the task. The challenge of using a limited number of target samples to accurately identify the presence of IPs represents a significant scientific issue in monitoring IPs through DL approaches [34]. Positive-unlabeled learning (PUL) serves as a one-class classification method that seeks to develop a binary classifier from positive and unlabeled data [35,36]. This approach aligns well with the needs of IP monitoring, where samples of invasive species are treated as positive data, while other pixels are considered unlabeled. PUL has demonstrated considerable performance and potential in mapping invasive species [37]. However, state-of-the-art (SOTA) PUL methods typically require class prior probabilities as inputs, which are unknown in the context of monitoring IP species. Class prior-free PUL methods have recently emerged as a focal point in the machine learning community. Nonetheless, the absence of negative samples and class prior probabilities limits the effectiveness of these methods. The use of discriminators in generative adversarial networks (GANs) can optimize the segmentation results through an adversarial process [38]. However, to the best of our knowledge, none of the existing studies have explored the potential of adversarial learning to improve the performance of prior-free PUL methods in invasive plant detection.
To enhance the performance of PUL methods for IP detection, we propose an adversarial positive-unlabeled learning (APUL) framework. This framework is designed to map the detailed distribution of the invasive species Pedicularis kansuensis in the alpine wetlands of Bayinbuluk Grassland, Xinjiang. APUL employs a dual-branch discriminator to constrain a class prior-free classifier, effectively leveraging information from positive-unlabeled data through an adversarial learning process. By collecting only the target species as positive samples and utilizing RS technology, our approach aims to significantly reduce the time and costs associated with traditional field sampling. The key contributions of this paper are summarized as follows:
  • An APUL framework is designed for high-accuracy IP detection by utilizing only a limited number of labeled samples of the target species.
  • An improved adversarial structure is proposed that utilizes a dual-branch discriminator to constrain the class prior-free classifier through the adversarial process.
  • The proposed APUL approach was employed to map the detailed distribution of Pedicularis kansuensis in Bayinbuluk Grassland, and the experimental results demonstrate that it outperforms mainstream and state-of-the-art techniques.

2. Materials

2.1. Study Area

The Bayinbuluke Grassland (42°18′–43°34′N, 82°27′–86°17′E) is situated in the middle of the southern slope of the Tianshan Mountains in Xinjiang, China (Figure 1), with an average altitude exceeding 2000 m, making it the largest subalpine grassland in the country [39]. This grassland is part of the Kaidu River Basin and represents a typical alpine wetland ecosystem.
Rich in plant species, the Bayinbuluke Grassland possesses relatively complete ecological functions, providing valuable habitat for various local endangered species [40]. Furthermore, it serves as an important pasture and tourist destination in Xinjiang, contributing significant economic value to the region [41]. However, over the past two decades, the swift expansion of Pedicularis kansuensis has encroached upon the habitat of pasture grasses, significantly impacting local animal husbandry [42,43]. Previous studies have reported that the area of Bayinbuluke Grassland affected by Pedicularis kansuensis was already more than 23,300 hm2 in 2008 and is still increasing by 3300 hm2 per year [43]. In response, the local government has invested CNY 7 million to restore the proportion of edible pasture and mitigate the impacts of Pedicularis kansuensis invasion on local livestock [44]. However, control efforts in the field have proven inefficient due to the lack of precise information regarding the locations of Pedicularis kansuensis [43].
Pedicularis kansuensis communities are small and difficult to identify in medium-resolution satellite imagery. High-resolution imagery, in turn, rarely provides sufficient spectral bands to distinguish Pedicularis kansuensis from other species. Consequently, there is an urgent need for an advanced method to rapidly and accurately monitor the distribution of Pedicularis kansuensis to inform effective control strategies.

2.2. Data

2.2.1. Satellite Imagery

In this study, multi-source RS images were utilized for IP detection, specifically incorporating high-resolution imagery from the Jilin-1 satellite and multispectral imagery from Sentinel-2 (Table 1). The Jilin-1 satellite provides high-spatial-resolution images at 0.75 m, featuring four spectral bands: blue, green, red, and near-infrared [45]. The Jilin-1 imagery used in this experiment was captured on 10 August 2023. To enhance spectral information and capture the phenological variations of different vegetation types, Sentinel-2 imagery from three distinct months (July, August, and September) was integrated with the high-resolution imagery for the extraction of Pedicularis kansuensis. The Sentinel-2 imagery was processed using monthly median synthesis on the Google Earth Engine (GEE) platform [46]. Images with more than 20% cloud coverage were discarded to ensure data quality. The Sentinel-2 imagery from the three periods (July, August, and September 2023) was then exported from GEE resampled to 0.75 m, without sharpening, to maintain grid consistency with the Jilin-1 imagery. From the spectral curves derived from the two satellites (Figure 2), Pedicularis kansuensis can easily be confused with rivers in the visible bands due to its low reflectivity. Although the red-edge band information aids in differentiation, the relatively low resolution of Sentinel-2 may limit its effectiveness in improving detection accuracy.
To optimize the extraction of Pedicularis kansuensis, the normalized difference vegetation index (NDVI) [47] and the normalized difference water index (NDWI) [48] were calculated using the following equations:
$$\mathrm{NDVI}=\frac{\mathrm{NIR}-\mathrm{Red}}{\mathrm{NIR}+\mathrm{Red}}\tag{1}$$
$$\mathrm{NDWI}=\frac{\mathrm{Green}-\mathrm{NIR}}{\mathrm{Green}+\mathrm{NIR}}\tag{2}$$
where NIR represents the near-infrared band, Red the red band, and Green the green band of Jilin-1 and Sentinel-2. Numerous studies have demonstrated that these indices improve the accuracy of vegetation classification. The Jilin-1 and Sentinel-2 images, enhanced with the additional spectral indices, were stacked to create a single multi-source RS dataset comprising a total of 51 bands.
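As a minimal illustration of this band stack, the NumPy sketch below reproduces the 51-band layout under the assumption that each monthly Sentinel-2 composite contributes its 13 spectral bands plus the two indices; the band ordering and random arrays are placeholders, not the paper's code.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalized difference vegetation index, Equation (1)."""
    return (nir - red) / (nir + red + eps)

def ndwi(green: np.ndarray, nir: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalized difference water index, Equation (2)."""
    return (green - nir) / (green + nir + eps)

jl1 = np.random.rand(4, 512, 512).astype(np.float32)           # B, G, R, NIR at 0.75 m
s2_months = [np.random.rand(13, 512, 512).astype(np.float32)   # July/Aug/Sep composites
             for _ in range(3)]

layers = [jl1, ndvi(jl1[3], jl1[2])[None], ndwi(jl1[1], jl1[3])[None]]
for s2 in s2_months:   # assumed band order B1..B12 with B8A: green=2, red=3, NIR=7
    layers += [s2, ndvi(s2[7], s2[3])[None], ndwi(s2[2], s2[7])[None]]

stack = np.concatenate(layers, axis=0)
print(stack.shape)     # (51, 512, 512): 6 Jilin-1 layers + 3 x 15 Sentinel-2 layers
```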
For monitoring Pedicularis kansuensis, the high-resolution imagery from Jilin-1 is effective for mapping smaller plant communities and distinguishing pixels that are mixed with other species in coarser-resolution imagery. Sentinel-2 complements Jilin-1 by providing additional phenological and spectral information, addressing its spectral and temporal limitations. This highlights the advantage of using multi-source remote sensing data.

2.2.2. Field Surveying Samples

Although RS provides extensive surface information on a large scale, field surveying remains an integral component of regional studies. In particular, field-collected samples are indispensable for validating RS images. To determine the distribution range of Pedicularis kansuensis in the Bayinbuluke Grassland, we conducted a field survey aimed at obtaining ground-truth (GT) samples. Accessing the interior of the Bayinbuluke grassland proved challenging; therefore, we focused on observing Pedicularis kansuensis along the roads during the expedition.
Two regions with relatively extensive distribution ranges and high densities of Pedicularis kansuensis were selected as sampling areas. Due to the herbaceous nature of Pedicularis kansuensis, which exhibits non-uniform growth density and indistinct boundaries, the point sampling method was employed to ensure sampling accuracy. Sampling points were identified using square frames with a side length of 1 m, and their latitude, longitude, and labels were recorded using handheld GPS devices. During the field survey, a total of 42 sample points of Pedicularis kansuensis were collected from these two areas (Figure 1a). These points were subsequently used as positive samples in this study.

3. Methods

To tackle the challenges faced by deep learning (DL) methods in monitoring invasive plants (IPs) when labeled data are scarce, this study proposes an adversarial positive-unlabeled learning (APUL) framework. This section outlines the architecture of the APUL framework, detailing the components of both the classifier and the discriminator. Finally, the associated loss functions are defined to illustrate how they guide the training process.

3.1. Architecture

The proposed APUL framework utilizes a GANs structure to constrain the results generated by the class prior-free classifier. Unlike traditional discriminative models, the generator in the GANs framework functions as a classifier, with its outputs regulated by the discriminator. By appropriately configuring the discriminator, we can effectively prevent imbalances in the segmentation results.
The proposed APUL framework adheres to the fundamental architecture of GANs and comprises two main components: the classifier and the discriminator (Figure 3). The classifier is tasked with the initial classification of the inputs to identify Pedicularis kansuensis, while the discriminator evaluates the classification results to identify pseudo-labels and optimizes itself through back-propagation of the outcomes. Through this mutual confrontation, the classification outputs of the classifier evolve to the point where the discriminator is unable to differentiate the authenticity of the generated positive signals, thereby facilitating high accuracy of Pedicularis kansuensis detection.

3.2. The Class Prior-Free Classifier and Dual-Branch Discriminators

The APUL framework follows the fundamental structure of GANs, consisting of two main components: the generator and the discriminator. However, in APUL, the generator functions as a classifier to identify the target species from the input data, while the discriminator evaluates its results, akin to the role in GANs. The detailed structures of both components are described in their respective sections below.

3.2.1. Class Prior-Free Classifier

The class prior-free classifier C is designed to identify the target species using positive-unlabeled data without class prior probability from multi-source remote sensing (RS) images. This component utilizes a top-down fully convolutional network (FCN), as depicted in Figure 4. The FCN serves as a lightweight, end-to-end classification network, structured into encoders and decoders. Notably, the encoder–decoder-based FCN exhibits greater efficiency in end-to-end classification tasks compared to patch-based local learning, enabling faster model training [49].
The encoder is modularly designed, comprising two key components: a temporal–spectral–spatial attention (TSSA) module and a downsampling module. Initially, a preprocessing module standardizes multi-source and multi-temporal remote sensing (RS) images into a fixed set of 64 bands, enabling seamless progression to subsequent processing stages. The TSSA and downsampling modules are then iteratively applied to extract advanced features from the RS images. As the core component of the network, the TSSA module integrates the convolutional block attention module (CBAM [50]) with 2-D convolutional layers to extract key features from multi-source RS data (Figure 3). Within the TSSA module, feature maps are first reweighted in both the temporal–spectral and spatial domains to highlight the most salient features. These weighted feature maps are further refined through a series of 3 × 3 convolutions, group normalization (GN) layers, and rectified linear unit (ReLU) activations. Across the encoder, four feature extraction operations are performed at different levels using the TSSA module. Following each TSSA module, a downsampling layer consisting of a 3 × 3 convolution with a stride of 2 replaces traditional pooling, compressing features for more robust classification.
Paired with the encoder, a lightweight decoder is employed to enable efficient processing and generate segmentation results. The decoder consists of multiple fixed 128-channel convolutional modules, alternating with upsampling layers of size 2. The upsampling layers are implemented using PyTorch’s interpolation function, applying the nearest neighbor method with a scale factor of 2. To enhance feature integration, high-level features are fused with low-level features at each stage through a transversal linking module that connects the encoder and decoder. The channel dimensions of the features are standardized using 1 × 1 convolutions to maintain a consistent feature space size, followed by feature fusion via pointwise summation. Finally, the network classifies the probability of each output pixel being a positive sample based on the top-layer decoder features. Detailed network parameters are provided in Table 2.
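To make the encoder stage concrete, the following PyTorch sketch implements one TSSA stage as described above: CBAM-style channel (temporal–spectral) and spatial reweighting, a 3 × 3 convolution with group normalization and ReLU, and a strided 3 × 3 convolution in place of pooling. The channel widths, reduction ratio, and group count are illustrative assumptions; the paper's exact parameters are listed in Table 2.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Simplified CBAM stand-in: channel (temporal-spectral) then spatial reweighting."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels // reduction),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        att = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
        x = x * att.view(b, c, 1, 1)                                  # channel attention
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))                     # spatial attention

class TSSABlock(nn.Module):
    """One encoder stage: attention, 3x3 conv + GroupNorm + ReLU, strided-conv downsampling."""
    def __init__(self, in_ch: int, out_ch: int, groups: int = 8):
        super().__init__()
        self.attn = ChannelSpatialAttention(in_ch)
        self.conv = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                                  nn.GroupNorm(groups, out_ch),
                                  nn.ReLU(inplace=True))
        self.down = nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1)  # replaces pooling

    def forward(self, x: torch.Tensor):
        x = self.conv(self.attn(x))
        return x, self.down(x)    # skip feature for the decoder, downsampled feature

feats = torch.randn(2, 64, 512, 512)     # after the 64-band preprocessing module
skip, down = TSSABlock(64, 128)(feats)   # skip: 512x512, down: 256x256
```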

3.2.2. Dual-Branch Discriminators

The dual-branch discriminator consists of two UNet [51] networks with identical structures (Figure 4): the segmentation discriminator $D_s$ and the entropy map discriminator $D_e$. UNet is a compact segmentation network, and its use as the discriminator does not significantly increase the model size. Moreover, since the discriminator acts as an adversary to the classifier, an overly strong discriminator could dominate the training process, which would be detrimental to the overall framework. Therefore, the lightweight UNet is an ideal choice. UNet features a symmetrical encoder–decoder structure. The encoder is implemented through repeated convolutional modules, each consisting of two 3 × 3 convolutions, two ReLU activation functions, and a max pooling layer. The decoder mirrors the encoder’s structure but replaces the max pooling layer with transposed convolution. Additionally, skip connections between the encoder and decoder ensure that feature information is preserved throughout the network.
While $D_s$ and $D_e$ share identical parameters, they differ slightly in their input data. The segmentation discriminator $D_s$ directly uses the classifier’s output and the input image to predict the pixel-level likelihood of the classifier’s result being a labeled sample. This approach aligns with the mainstream methodology in GAN networks. By using segmentation results as input, $D_s$ can intuitively identify potential error regions. Meanwhile, the entropy map discriminator $D_e$ first calculates the entropy map of the classifier’s output using Equation (9); it then uses the entropy map and the input image to identify labeled pixels. Specifically, $D_e$ detects mislabeled pixels with low entropy values based on the entropy map derived from the classifier’s output, thereby improving segmentation accuracy, particularly in boundary regions [52].
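As a concrete reading of the two input schemes, the sketch below concatenates the judged map channel-wise with the input image before each UNet branch. The concatenation itself is an assumption consistent with "uses the classifier's output and the input image"; Equation (9) is the entropy definition given in Section 3.3.

```python
import torch

def entropy_map(y_hat: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Per-pixel binary entropy of the classifier output, Equation (9)."""
    y = y_hat.clamp(eps, 1.0 - eps)
    return -y * torch.log(y) - (1.0 - y) * torch.log(1.0 - y)

image = torch.randn(1, 51, 512, 512)   # multi-source input stack
y_hat = torch.rand(1, 1, 512, 512)     # classifier output probabilities

# The two branches differ only in the map they judge; both also see the image,
# so each UNet discriminator takes image_bands + 1 input channels.
d_s_input = torch.cat([y_hat, image], dim=1)               # segmentation branch D_s
d_e_input = torch.cat([entropy_map(y_hat), image], dim=1)  # entropy branch D_e
```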
During the iterative training process, the classifier generates pseudo-labels continuously, while the discriminator consistently distinguishes pseudo-labels from real labels. This adversarial training process enables the classifier and discriminator to mutually optimize each other, leading to higher-quality classification outputs. Furthermore, the constraints imposed by the discriminator effectively reduce positive and negative biases in the classifier model, preventing potential model collapse.

3.3. Loss Function

Positive-unlabeled learning can be regarded as a specific subclass of binary classification problems, distinguishing between positive and negative samples. In this context, the input and output can be categorized into positive and unlabeled samples, denoted as $P=\{(x_i^p,y_i^p)\}_{i=1}^{N_p}$ and $U=\{(x_i^u,y_i^u)\}_{i=1}^{N_u}$, respectively, where $y\in\{0,1\}$.
The classifier $C$ is tasked with the initial extraction of Pedicularis kansuensis from multi-source RS images. To address the imbalance problem encountered during training, many previous studies have employed various strategies to optimize model training [53]. These methods can be classified into two categories based on whether the class prior for positive data is assumed to be known. Class prior methods typically assume that the class prior of the positive data is known, using this as a constraint to optimize the neural network [37,54,55]. However, this assumption is often not valid. Recent research has shifted focus toward methods that do not rely on class priors, with variational theory emerging as a well-established approach [56]. The latest studies have demonstrated that performance can be further enhanced through Taylor approximations of the variational function [57]. The strong performance of the Taylor variational loss function in traditional discriminative models suggests that it can also be applied to generative models to optimize the adversarial process, a possibility that has not been previously explored [58]. Therefore, this study introduces the Taylor variational loss function ($L_t$) to optimize the training of the classifier $C$ in GANs. The function $L_t$ mitigates the influence of unlabeled samples on the optimization process by ensuring that their gradients do not disproportionately affect the training of the neural network. This approach facilitates PUL without necessitating prior knowledge of class probabilities. The Taylor variational loss function $L_t$ is defined as follows:
$$L_t=-\sum_{k=1}^{o}\frac{1}{k}\left(1-\frac{1}{N_u}\sum_{i=1}^{N_u}G\left(x_i^u\right)\right)^{k}-\frac{1}{N_p}\sum_{i=1}^{N_p}\log G\left(x_i^p\right)\tag{3}$$
where o denotes the order of the Taylor series; the complete derivation is provided in the Supplementary Material.
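For reference, here is a compact PyTorch rendering of $L_t$, assuming the variational form in which a truncated Taylor polynomial replaces $\log \mathbb{E}_U[G(x)]$; the exact constants follow the Supplementary derivation, so treat this as an illustrative sketch rather than the authors' implementation.

```python
import torch

def taylor_variational_loss(p_pos: torch.Tensor, p_unl: torch.Tensor,
                            order: int = 2, eps: float = 1e-7) -> torch.Tensor:
    """Sketch of the Taylor variational loss L_t, Equation (3).

    p_pos / p_unl: classifier probabilities on positive / unlabeled pixels.
    Truncating log E_U[G] at `order` terms caps the gradient contribution
    of the unlabeled pixels, which is the property the text describes.
    """
    u = (1.0 - p_unl.mean()).clamp(eps, 1.0 - eps)
    log_eu = -sum(u.pow(k) / k for k in range(1, order + 1))  # truncated log E_U[G]
    return log_eu - torch.log(p_pos.clamp(min=eps)).mean()    # minus E_P[log G]

loss = taylor_variational_loss(torch.rand(304), torch.rand(262_144))  # illustrative sizes
```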
After multi-source RS images are classified by the classifier $C$ to produce pseudo-labels, the segmentation discriminator $D_s$ and the entropy map discriminator $D_e$ predict the pixels corresponding to these pseudo-labels. Subsequently, the classifier is back-optimized to enhance the effectiveness of the classification process. The segmentation discriminator $D_s$ and the classifier $C$ are optimized using the loss functions $L_{sd}$ and $L_{sg}$, which are defined as follows:
$$L_{sd}=\frac{1}{N_p}\sum_{i=1}^{N_p}\ell_{bce}\left(D_s\left(G\left(x_i^p\right),x_i^p\right),1\right)+\frac{1}{N_u}\sum_{i=1}^{N_u}\ell_{bce}\left(D_s\left(G\left(x_i^u\right),x_i^u\right),0\right)\tag{4}$$
$$L_{sg}=\frac{1}{N}\sum_{i=1}^{N}\ell_{bce}\left(D_s\left(G\left(x_i\right),x_i\right),1\right)\tag{5}$$
where $\ell_{bce}$ is the binary cross-entropy loss function, defined as:
$$\ell_{bce}\left(\hat{y}_i,y_i\right)=w_i\left[-y_i\log\hat{y}_i-\left(1-y_i\right)\log\left(1-\hat{y}_i\right)\right]\tag{6}$$
where $\hat{y}_i$ and $y_i$ are the label predicted by the classifier $C$ and the target label for pixel $i$, respectively. Additionally, $w_i$ is the pixel weight, with its value determined by the following equation:
$$w_i=\begin{cases}\dfrac{N_u}{N_p}, & \left(x_i,y_i\right)\in P\\[1ex]\dfrac{N_p}{N_u}, & \left(x_i,y_i\right)\in U\end{cases}\tag{7}$$
Assigning different weights to positive and unlabeled samples can effectively mitigate the challenge posed by the significant disparity in sample sizes, which often hinders the model’s ability to extract meaningful information from positive samples. The overall loss $L_s$, dominated by the segmentation discriminator $D_s$, is defined as follows:
$$L_s=L_{sd}+L_{sg}\tag{8}$$
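A minimal PyTorch sketch of Equations (4)–(8) follows. It assumes $D_s$ takes the segmentation map concatenated channel-wise with the image and returns per-pixel probabilities of a pixel being truly labeled; gradient routing between the two adversaries (e.g., detaching the classifier output for the $L_{sd}$ step) is deliberately simplified.

```python
import torch

def weighted_bce(y_hat: torch.Tensor, y: torch.Tensor, w: float,
                 eps: float = 1e-7) -> torch.Tensor:
    """Pixel-weighted binary cross-entropy, Equations (6)-(7)."""
    y_hat = y_hat.clamp(eps, 1.0 - eps)
    return (w * (-y * torch.log(y_hat) - (1.0 - y) * torch.log(1.0 - y_hat))).mean()

def segmentation_losses(d_s, y_hat, image, pos_mask, n_p: int, n_u: int):
    """L_sd (trains D_s) and L_sg (trains the classifier), Equations (4), (5), (8)."""
    pred = d_s(torch.cat([y_hat, image], dim=1)).squeeze(1)   # per-pixel "labeled" prob.
    l_sd = (weighted_bce(pred[pos_mask], torch.ones_like(pred[pos_mask]), n_u / n_p)
            + weighted_bce(pred[~pos_mask], torch.zeros_like(pred[~pos_mask]), n_p / n_u))
    l_sg = weighted_bce(pred, torch.ones_like(pred), 1.0)     # classifier tries to fool D_s
    return l_sd, l_sg

# Illustrative call with a stand-in discriminator (51 image bands + 1 map channel).
d_net = torch.nn.Conv2d(52, 1, 1)
y_hat, image = torch.rand(1, 1, 64, 64), torch.randn(1, 51, 64, 64)
pos = torch.zeros(1, 64, 64, dtype=torch.bool)
pos[0, 30:34, 30:34] = True
l_sd, l_sg = segmentation_losses(lambda z: torch.sigmoid(d_net(z)),
                                 y_hat, image, pos, n_p=16, n_u=64 * 64 - 16)
```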
Meanwhile, to extract valuable information from classification results with low entropy values and to enhance the model’s performance at vegetation boundaries, an entropy map discriminator $D_e$ is established. This discriminator predicts the pseudo-labeled pixels based on the pseudo-label entropy map. The entropy map $e$ is defined as follows [52]:
$$e=-\hat{y}\log\hat{y}-\left(1-\hat{y}\right)\log\left(1-\hat{y}\right)\tag{9}$$
In this optimization process, the entropy map discriminator $D_e$ and the classifier $C$ are optimized using the loss functions $L_{ed}$ and $L_{eg}$, which are defined as follows:
$$L_{ed}=\frac{1}{N_p}\sum_{i=1}^{N_p}\ell_{bce}\left(D_e\left(e_i^p,x_i^p\right),1\right)+\frac{1}{N_u}\sum_{i=1}^{N_u}\ell_{bce}\left(D_e\left(e_i^u,x_i^u\right),0\right)\tag{10}$$
$$L_{eg}=\frac{1}{N}\sum_{i=1}^{N}\ell_{bce}\left(D_e\left(e_i,x_i\right),1\right)\tag{11}$$
The loss function $L_e$, responsible for optimizing all processes governed by the entropy map discriminator $D_e$, is defined as follows:
$$L_e=L_{ed}+L_{eg}\tag{12}$$
In summary, the loss function of the APUL framework can be expressed as follows:
$$L=L_t+L_s+L_e\tag{13}$$
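Putting the pieces together, one APUL iteration alternates a discriminator update with a classifier update under $L = L_t + L_s + L_e$ (Equation (13)). The sketch below is a schematic reading of that loop, not the authors' released code: it reuses `entropy_map`, `taylor_variational_loss`, and `segmentation_losses` from the earlier snippets (the entropy branch shares the same loss structure, with the entropy map in place of the segmentation map), and gradient routing between the adversaries is simplified.

```python
import torch

def apul_step(c, d_s, d_e, opt_c, opt_d, image, pos_mask, n_p, n_u):
    """One adversarial iteration: D-step on frozen classifier output, then C-step."""
    # --- discriminator update: judge the classifier's current (frozen) output ---
    with torch.no_grad():
        y_hat = c(image)                       # (B, 1, H, W) probabilities
    l_sd, _ = segmentation_losses(d_s, y_hat, image, pos_mask, n_p, n_u)
    l_ed, _ = segmentation_losses(d_e, entropy_map(y_hat), image, pos_mask, n_p, n_u)
    opt_d.zero_grad()
    (l_sd + l_ed).backward()
    opt_d.step()

    # --- classifier update: L = L_t + L_sg + L_eg ---
    y_hat = c(image)
    l_t = taylor_variational_loss(y_hat[:, 0][pos_mask], y_hat[:, 0][~pos_mask])
    _, l_sg = segmentation_losses(d_s, y_hat, image, pos_mask, n_p, n_u)
    _, l_eg = segmentation_losses(d_e, entropy_map(y_hat), image, pos_mask, n_p, n_u)
    opt_c.zero_grad()
    (l_t + l_sg + l_eg).backward()             # stray grads on D are cleared next D-step
    opt_c.step()
```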

4. Experiments and Analysis

4.1. Experimental Preparation

4.1.1. Datasets Preparation

In this study, the model was trained using sample points collected during field surveys and validated through visual interpretation. The training set utilized multi-source RS images, as described in Section 2.2.1, augmented with additional spectral indices (NDVI and NDWI). To generate the training labels $Y_{train}$, a 1 m buffer was applied to the field survey samples to assign positive labels, while all other pixels were treated as unlabeled. This process yielded a total of 304 positive pixels for the training set (Table 3). The test set employed the same images as the training set. However, since common accuracy metrics cannot be calculated using only positive samples, areas where Pedicularis kansuensis was more prominent were selected for visual interpretation, ensuring no spatial overlap with the training set (Figure 5). This approach maintains the independence of the training and testing datasets. A binary classification (positive and negative) was performed in these areas to create the test labels $Y_{test}$, enabling the evaluation of the model’s accuracy and effectiveness. The test set comprised 55,525 positive pixels and 125,195 negative pixels (Table 3). The image size of both the training and test sets is 512 × 512 pixels.
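The positive-label preparation (1 m buffers around the 42 field points, burned into the 0.75 m image grid) can be reproduced with GeoPandas and Rasterio along these lines; the file names and point-file format are placeholders, not the study's actual paths.

```python
import geopandas as gpd
import rasterio
from rasterio import features

# Borrow the grid (transform, shape, CRS) of the co-registered 0.75 m stack.
with rasterio.open("jilin1_sentinel2_stack.tif") as src:
    transform, shape, crs = src.transform, (src.height, src.width), src.crs

# Field points -> 1 m buffers -> binary label raster (1 = positive, 0 = unlabeled).
points = gpd.read_file("pedicularis_points.gpkg").to_crs(crs)  # metric CRS assumed
buffers = points.geometry.buffer(1.0)
y_train = features.rasterize(((geom, 1) for geom in buffers),
                             out_shape=shape, transform=transform,
                             fill=0, dtype="uint8")
print(int(y_train.sum()))   # 304 positive pixels in the paper's setup
```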
The spectral distributions of the datasets were analyzed (Figure 6). In the near-infrared and red bands, Pedicularis kansuensis exhibits distinct spectral characteristics. However, the lack of negative samples and the limited number of positive samples pose significant challenges for accurate identification. The test set’s spectral distribution not only includes samples similar to the training set but also encompasses positive and negative samples with substantial spectral differences. Using such a test set better evaluates the model’s generalization ability and its capacity to handle unlabeled samples.

4.1.2. Experimental Setting

This experiment utilized PyTorch version 2.2.1 as the experimental framework, with programming conducted in Python 3.9. The experiments were executed on a local server equipped with a single RTX 4090 GPU with 24 GB of video memory for both training and inference. The stochastic gradient descent (SGD) optimizer was employed to optimize the parameters of the classifier and discriminator. The initial learning rate was set to 0.001 and adjusted using an exponential decay method, with a decay coefficient of 0.995 and a momentum of 0.9. The batch size was set to 5. Given the limited number of training samples, each deep learning model was trained for 200 epochs to mitigate the risk of overfitting. The machine learning baselines, owing to their shallow structures, do not require iterative training. For methods that require class prior probabilities (e.g., ItreeNet), these probabilities were provided and tested in the range of 0.1 to 0.9 at intervals of 0.05. To enhance the model’s generalization capability, data augmentation techniques were employed to introduce greater diversity into the training process. The augmentation approach included HorizontalFlip (p = 0.6), VerticalFlip (p = 0.6), Transpose (p = 0.6), and RandomRotate90 (p = 0.5). All parameters can be further reviewed in the publicly available code.
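The stated optimization and augmentation settings translate directly into PyTorch and Albumentations, as in the hedged sketch below; the two convolution modules are stand-ins for the actual APUL classifier and discriminators.

```python
import torch
import albumentations as A

classifier = torch.nn.Conv2d(51, 1, 1)       # stand-in for the APUL classifier
discriminators = torch.nn.Conv2d(52, 1, 1)   # stand-in for the dual-branch discriminators

opt_c = torch.optim.SGD(classifier.parameters(), lr=0.001, momentum=0.9)
opt_d = torch.optim.SGD(discriminators.parameters(), lr=0.001, momentum=0.9)
sched_c = torch.optim.lr_scheduler.ExponentialLR(opt_c, gamma=0.995)
sched_d = torch.optim.lr_scheduler.ExponentialLR(opt_d, gamma=0.995)

augment = A.Compose([                         # augmentation probabilities as reported
    A.HorizontalFlip(p=0.6),
    A.VerticalFlip(p=0.6),
    A.Transpose(p=0.6),
    A.RandomRotate90(p=0.5),
])

for epoch in range(200):                      # 200 epochs, batch size 5
    # ... iterate batches, e.g., apul_step(...) from the sketch in Section 3.3 ...
    sched_c.step()
    sched_d.step()
```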

4.1.3. Evaluation Methods and Metrics

The effectiveness of the proposed APUL framework was validated through comparisons with mainstream and SOTA methods. A total of six different models were compared: One-Class Support Vector Machine (OCSVM) [59], Biased Support Vector Machine (BSVM) [60], ItreeNet [55], DOCC [61], HOneCls [62], and T-HOneCls [57]. OCSVM and BSVM represent traditional machine learning (ML) approaches in PUL. ItreeNet, DOCC, and HOneCls are representative methods that incorporate class prior information, while T-HOneCls is a SOTA PUL method that operates without class prior knowledge. The experiments were repeated five times for each method, and the best performances achieved by each model on the test dataset were averaged for comparison.
To ensure fairness in the evaluation, each model is assessed using a consistent set of performance metrics. Based on previous research, we selected the Area Under the Receiver Operating Characteristic Curve (AUC), Precision, Recall, and F1-score as the metrics for judging the classification effectiveness of different models.
AUC quantifies the model’s ability to differentiate between classes by calculating the area under the ROC curve. It is defined as:
$$\mathrm{AUC}=\int_{0}^{1}\mathrm{TPR}\,\mathrm{d}\left(\mathrm{FPR}\right)\tag{14}$$
where TPR is the true positive rate and FPR is the false positive rate. AUC values range from 0 to 1, with higher values indicating better model discrimination. Precision is defined as the ratio of true positive predictions to the total positive predictions:
$$\mathrm{Precision}=\frac{TP}{TP+FP}\tag{15}$$
where TP represents true positives and FP denotes false positives. This metric is essential for understanding the accuracy of positive predictions.
Recall, also known as Sensitivity, measures the model’s ability to identify all relevant instances and is defined as:
$$\mathrm{Recall}=\frac{TP}{TP+FN}\tag{16}$$
where FN stands for false negatives. Recall is critical for assessing the model’s sensitivity to the positive class.
The F1-score provides a single metric that balances Precision and Recall, making it particularly useful in scenarios with class imbalance. It is calculated as:
$$\text{F1-score}=\frac{2\times\mathrm{Precision}\times\mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}}\tag{17}$$
Together, these metrics offer a comprehensive evaluation framework for judging model performance in mapping IPs in wetlands, enabling the assessment of overall accuracy and also the reliability of positive classification.
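For completeness, all four metrics can be computed directly from the visually interpreted test mask with scikit-learn. The arrays below are random placeholders sized to the test set (55,525 + 125,195 = 180,720 pixels), and the 0.5 decision threshold is an assumption.

```python
import numpy as np
from sklearn.metrics import f1_score, precision_score, recall_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 180_720)      # placeholder for Y_test
y_prob = rng.random(180_720)              # placeholder classifier probabilities
y_pred = (y_prob >= 0.5).astype(int)      # assumed decision threshold

print("AUC      :", roc_auc_score(y_true, y_prob))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
```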

4.2. Experimental Results and Analysis

The proposed APUL was compared with six representative approaches after training, which can be broadly categorized into: (1) machine learning methods (e.g., OCSVM [63], BSVM [60]); (2) Deep PUL methods with class prior probabilities (e.g., ItreeNet [55], DOCC [61], HOneCls [62]); and (3) class prior-free Deep PUL methods (e.g., T-HOneCls [57]). All deep learning methods were trained for 200 epochs in a single run (Figure 7). Among these, the Deep PUL methods with class prior probabilities were trained with priors varied in steps of 0.05, each setting repeated twice to mitigate the effects of randomness. The final comparison is based on the average of the best 10 performances. For the class prior-free Deep PUL methods, results were averaged over 5 training repetitions.
The results demonstrate that the APUL framework is effective in mapping invasive Pedicularis kansuensis in alpine wetlands, outperforming other methods and achieving high accuracy with only a limited number of target samples (Table 4). Specifically, APUL achieved the best overall performance, with an F1-score of 0.8013, Precision of 0.7963, Recall of 0.8072, and an AUC of 0.9842, establishing it as the most advanced technique currently available for the extraction of invasive Pedicularis kansuensis.
In contrast, classical machine learning algorithms exhibit significant limitations in IP mapping, even when they are optimized for one-class classification or PUL. For example, the One-Class Support Vector Machine (OCSVM) is specifically designed for one-class classification: positive samples are treated as positive, while unlabeled samples are considered negative examples. Despite this design, OCSVM achieves an F1-score of only 0.2601, with an AUC of 0.6291, Precision of 0.1727, and Recall of 0.5275. Unlike OCSVM, the Biased Support Vector Machine (BSVM) treats unlabeled samples as noisy negative samples, assigning them lower weights instead of directly classifying them as negative. As a result, BSVM performs better than OCSVM, achieving an AUC of 0.6438, with Precision at 0.3677, Recall at 0.6050, and an F1-score of 0.4431. Nevertheless, machine learning methods remain significantly less effective at exploiting positive-unlabeled data for IP detection than deep learning techniques.
Deep PUL methods that incorporate class prior probabilities significantly outperform traditional machine learning methods in the task of IP detection. The ItreeNet model stands out, achieving an AUC of 0.9805, with Precision of 0.7370, Recall of 0.8307, and an F1-score of 0.7802. The DOCC model follows closely, attaining an AUC of 0.9775, with Precision of 0.7529, Recall of 0.7935, and an F1-score of 0.7707. The HOneCls model further improves these metrics, achieving an AUC of 0.9818, Precision of 0.7920, Recall of 0.7842, and an F1-score of 0.7868. These methods demonstrate excellent performance for IP detection. However, they have a significant drawback: their accuracy relies heavily on the class prior probabilities provided, which are often unknown in real-world tasks. The “high accuracy” achieved is typically the result of repeated trials (e.g., in this experiment, we varied the class prior probabilities from 0.1 to 0.9 in intervals of 0.05 to find the best performance). These methods may therefore lose their effectiveness if the true class prior probabilities in the classification task deviate from the previously optimized parameters.
Class prior-free Deep PUL methods represent an advancing research hotspot in the RS community. T-HOneCls is a recent approach that reduces the weight of the unlabeled pixel gradient using Taylor’s variational loss function, allowing positive samples to dominate the training process and enabling PUL without relying on class prior probabilities. However, T-HOneCls demonstrated limited effectiveness in IP detection, achieving an AUC of 0.9679, with Precision of 0.6782, Recall of 0.8315, and an F1 score of 0.7464.
Compared to other methods, APUL has demonstrated outstanding performance in IP detection. The advantages of the APUL method are evident not only in its performance metrics but also in the visualization of the segmentation results, highlighting its effectiveness in accurate IP detection. Figure 8 illustrates the experimental outcomes of various classification models across selected sample sets, revealing that the visual representation of APUL’s classification results closely aligns with ground truth (GT) values. Furthermore, the findings indicate that APUL exhibits a lower probability of error when distinguishing unlabeled samples, such as rivers and roads, without prior knowledge. This capability underscores APUL’s proficiency in extracting valuable information from unlabeled samples through the dynamic interplay between the classifier and discriminator, enabling the differentiation of categories beyond positive samples.
A key factor contributing to the superior classification performance of APUL is the incorporation of an adversarial framework, which fosters continuous interplay between the discriminator and the classifier. In class prior-free Deep PUL approaches, the segmentation results are often unsatisfactory due to the absence of class prior probabilities and negative samples. This limitation can lead to challenges in accurately distinguishing between positive and negative classes, especially in complex scenarios. In this framework, the dual-branch discriminator is responsible for assessing the authenticity of the labels generated by the classifier. Based on these assessments, the classifier is iteratively optimized to produce labels that become increasingly indistinguishable from genuine labels, thereby improving its classification capability. As illustrated in Figure 9, the model’s training process, both with and without the discriminator, reveals significant differences. The solid and dashed lines represent the full APUL framework and APUL without the discriminator, respectively. The classifier’s loss is plotted on the left axis in blue, while the discriminator’s loss and accuracy are plotted on the right axis, with loss in green and accuracy in red. The loss trajectories indicate that the inclusion of the discriminator significantly accelerates the model’s convergence during the initial training phase. Additionally, towards the end of the training process, the model without discriminator assistance exhibits stronger oscillation in its loss function. In contrast, the model incorporating the discriminator achieves more stable convergence at lower loss levels. This pattern is further reflected in the model’s accuracy: the presence of the discriminator allows the model to achieve faster and greater gains in accuracy during training.
Furthermore, when visualizing the model outputs in 3D space (Figure 10), the impact of the discriminator is evident. With the discriminator in place, the classifier produces results that closely resemble the ground truth (GT). In contrast, a class prior-free classifier without a discriminator tends to misclassify certain image elements with similar features as the target species, leading to an increase in false positives and a reduction in overall accuracy. Conversely, the model optimized with the discriminator demonstrates an enhanced ability to differentiate the target species from other features, resulting in improved classification performance. The results indicate that the classifier and discriminator work together to yield outputs with fewer false positives through their adversarial interaction. Specifically, the advantage of APUL in invasive species monitoring lies in its improved capability to distinguish between target species and similar features, such as dark-colored rivers.
Another significant factor contributing to the strong performance of the APUL framework is the implementation of a class prior-free classifier using the Taylor variational loss function. This approach allows the model to automatically learn more of the information hidden in the positive-unlabeled data, without relying on an assumed class prior probability. Consequently, the framework can capture a richer representation of the multi-source RS data, enhancing its performance in IP detection. As discussed previously, Deep PUL methods that incorporate class prior probabilities represent one of the most advanced techniques in PUL; however, their effectiveness relies heavily on setting the correct class prior probability. As illustrated in Figure 11, performance fluctuates with variations in class prior probability, achieving better results when this probability is close to the true value. However, this parameter remains unknown in the context of IP monitoring. Although it can be preset, discrepancies between model training and prediction may impact parameter selection. In contrast, by using the Taylor variational loss function, the class prior-free classifier conducts a Taylor expansion of the sample weights, effectively mitigating the risk of unlabeled samples receiving disproportionate gradient weight, thereby preventing them from dominating the optimization process of the deep learning model. This strategy effectively alleviates training instability in PU learning and enhances classification accuracy. Experimental results (Figure 11) demonstrate that the class prior-free classifier using the Taylor variational loss function consistently outperforms methods that require class prior probabilities across most metrics. This is particularly evident in the critical F1-score, which seldom surpasses that of APUL utilizing the Taylor variational loss function, even with varying preset values for the class prior probability. The results indicate that class prior-free Deep PUL methods can achieve accuracies comparable to those that utilize class prior probabilities, while avoiding the constraints imposed by these priors, which reduces issues related to generalizability. Given these advantages, class prior-free Deep PUL methods demonstrate significant potential for effective application in various contexts.

4.3. Ablation Experiment

We set up an ablation experiment to test the effectiveness and necessity of the class prior-free classifier, dual-branch discriminators, and the fusion of Sentinel-2 data (Table 5). All methods utilized the same backbone network as in APUL and were tested five times to mitigate the effects of randomness. The results revealed the highest accuracy for the experimental group when all strategies were employed. All three experimental factors positively influenced accuracy in most tests, with the Taylor variational loss function showing particularly strong effects. The difference in accuracy between the experimental group using the Taylor variational loss function and the control group using conventional binary cross-entropy (BCE) was significant. Notably, the maximum difference in F1-score reached 0.28 when only Jilin-1 satellite images were analyzed. This disparity indicates that traditional loss functions struggle with PU data, especially in the context of remote sensing image segmentation in complex scenarios. Additionally, the experimental groups utilizing multi-source remote sensing data fusion outperformed the control group that relied solely on Jilin images. While the difference was not as pronounced as the impact of the loss function in most cases, multi-source remote sensing fusion effectively supplemented additional temporal–spatial–spectral information to a single remote sensing image. This is particularly relevant for high-spatial resolution remote sensing data, which, despite having adequate resolution in the visible spectral range, may lack spectral information necessary for specific tasks. Such limitations can be addressed by incorporating free remote sensing data, like Sentinel-2. Lastly, the dual-branch discriminators in APUL worked effectively with the class prior-free classifier. The model’s classification performance improved through adversarial training. However, when using the traditional BCE as the loss function, dual-branch discriminators seemed to negatively impact model optimization. This suggests that the control group using the BCE loss function struggles to re-optimize with PU data after self-optimization based on discriminator results. In contrast, the class prior-free classifier using the Taylor variational loss function demonstrated a capacity for gradually improving classification accuracy and achieving faster convergence through a continuous adversarial process.

4.4. Pedicularis kansuensis in the Bayinbuluke Grassland

The mapping of Pedicularis kansuensis in the Bayinbuluke grassland was conducted using the proposed APUL (Figure 12). The results indicated that an area of 178.27 hm2 of Pedicularis kansuensis was identified within the research area of 275.65 km2. Moreover, Pedicularis kansuensis has a wide range of distribution, spreading to almost every location on the grassland. As shown in Figure 12, the detailed distribution of Pedicularis kansuensis reveals some interesting patterns, with its continuous distribution primarily occurring in areas of anthropogenic activity and within smaller communities in the deep interior of the grassland. For instance, Figure 12a shows that Pedicularis kansuensis is distributed along wheel marks in the pasture, while Figure 12c demonstrates its presence in a regularly maintained artificial pasture and buildings. Furthermore, Pedicularis kansuensis is frequently observed along both sides of the river, following the contours of the channel (Figure 12b). In terms of distribution density, Pedicularis kansuensis is primarily concentrated in the northern part of the steppe, which serves as a key gathering area for rivers and possesses relatively abundant water resources. However, this region is also an important grazing area, and the invasion of Pedicularis kansuensis in the northern grassland poses significant challenges to the development of animal husbandry.
Additionally, we conducted a preliminary classification of vegetation in the study area using the Normalized Difference Vegetation Index (NDVI). Pixels with NDVI values ranging from 0.55 to 0.80 were categorized as vegetation. The total area of vegetation within the Bayinbuluke grassland was calculated to be 4020.62 hm2, with the invasion rate of Pedicularis kansuensis reaching 4.43%. This ratio indicates that the invasion of Pedicularis kansuensis has significantly affected local wetland ecosystems, particularly impacting pasture grasses that are crucial for the development of local animal husbandry. To prevent the further spread of Pedicularis kansuensis, a primary strategy involves organizing large-scale artificial mowing to support the survival of pasture. Utilizing the APUL framework, the monitoring cycle for tracking the distribution of Pedicularis kansuensis via remote sensing has been reduced to 2–3 days, thereby providing valuable time for timely mowing interventions. The distribution of Pedicularis kansuensis, as mapped through remote sensing, will facilitate local invasive plant control efforts, contributing to the stability of the wetland ecosystem.
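The reported areas and invasion rate reduce to simple raster arithmetic over the detection map and the NDVI mask, as in this hedged NumPy sketch with placeholder arrays:

```python
import numpy as np

PIXEL_HM2 = 0.75 ** 2 / 10_000        # one 0.75 m pixel in hm^2 (1 hm^2 = 10,000 m^2)

ndvi = np.random.rand(512, 512)            # placeholder for the scene-wide NDVI
ip_map = np.random.rand(512, 512) > 0.95   # placeholder for the APUL detection map

vegetation = (ndvi >= 0.55) & (ndvi <= 0.80)   # NDVI rule stated in the text
veg_hm2 = vegetation.sum() * PIXEL_HM2
ip_hm2 = ip_map.sum() * PIXEL_HM2
invasion_rate = ip_hm2 / veg_hm2               # 178.27 / 4020.62 = 4.43% in the paper
```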

5. Discussion

In this study, we developed an IP monitoring framework called APUL to map the distribution of Pedicularis kansuensis in the Bayinbuluke grasslands of Xinjiang using 42 target samples. APUL employs dual-branch discriminators to constrain the class prior-free classifier, effectively harnessing information from positive-unlabeled data through the adversarial process and enhancing the accuracy of IP detection. The performance of APUL has been validated against both classic and state-of-the-art (SOTA) PUL methods, demonstrating its effectiveness in IP detection. At first glance, the structure of APUL resembles a specialized form of GANs. However, its optimization strategy aligns with that of PUL. By combining the strengths of both approaches, APUL is effectively utilized for IP detection. The increasing prominence of DL in processing RS data has led to a growing preference for these advanced techniques in many studies, particularly in the IP community [29,64]. This study highlights that PUL methods built on a DL structure significantly outperform traditional ML approaches such as OCSVM [59] and BSVM [60]. Several approaches within PUL leverage DL architectures, with class prior methods showing the most promising performance (e.g., ItreeNet [55], DOCC [61], HOneCls [62]), although their dependence on a priori probabilities presents a limitation. In line with the SOTA findings of T-HOneCls [57], our results demonstrate that approximations such as the Taylor variational function can be employed in place of class prior probabilities to achieve better performance.
A key aim of this study was to improve the performance of mapping IPs across a large area with limited resources. We propose that more sophisticated algorithms (e.g., deep learning) can help relax the quality requirements for training samples, particularly by eliminating the need to collect samples unrelated to the target species. Thus, the concept of PUL in the DL community aligns well with IP detection, allowing field sampling to concentrate on the target species [57]. This approach to enhancing the efficiency of IP monitoring by reducing the quantity of samples is particularly well-suited for areas where fieldwork is challenging, such as regions with poor infrastructure or, as in this study, alpine environments [65]. These areas are threatened by IPs much like coastal regions are. The use of APUL or other PUL methods may facilitate more efficient monitoring of invasive species in these areas and support the development of timely management strategies. However, despite the numerous advantages of using positive-unlabeled data for IP mapping, the prevailing practice still relies on fully supervised DL methods and rule-based approaches built on phenology and spectral indices [65]. In contrast to rule-based methods, APUL eliminates the need for feature engineering and thresholding, although it is methodologically more complex [66,67]. Compared to fully supervised DL methods, APUL is at an accuracy disadvantage [68]. However, APUL can effectively reduce costs owing to its modest dataset requirements, making it an IP mapping strategy that balances economy and accuracy [37,53].
APUL leverages point-driven deep learning to monitor the invasive Pedicularis kansuensis, significantly reducing the costs associated with field sampling in alpine wetlands. Additionally, the timeliness and accuracy of identifying Pedicularis kansuensis from multi-source RS images have been greatly improved compared to ML approaches [44]. However, our results also suggest that multi-source RS data were less effective than expected in improving the accuracy of monitoring Pedicularis kansuensis. This may be attributed to the fact that this species is an herbaceous plant with small population patches that are often mixed with other plants, leading to noisy pixels at Sentinel-2 resolution. Additionally, the current fusion of Jilin-1 and Sentinel-2 images does not fully harness the potential of Sentinel-2 to enhance accuracy. In future work, we will continue to explore multi-source RS data fusion within the PUL framework to improve the monitoring of IPs.

6. Conclusions

In this study, we propose an APUL framework designed to enhance the efficiency and accuracy of invasive plant detection. The framework employs dual-branch discriminators to constrain a class prior-free classifier, effectively harnessing information from positive-unlabeled data through the adversarial process and enhancing the accuracy of IP detection. The effectiveness of the APUL framework was validated by mapping invasive Pedicularis kansuensis, an annual herb growing in an alpine wetland ecosystem, in Bayinbuluk Grassland, Xinjiang. The key findings of our research are as follows:
  • The APUL framework demonstrates superior performance in the task of extracting invasive Pedicularis kansuensis in the Bayinbuluke Grassland, Xinjiang. It achieves an F1-score of 0.8013 using only 42 positive samples, outperforming existing PUL methods.
  • The dual-branch discriminators can enhance the performance of the classifier through a continuous adversarial process. Furthermore, the class prior-free classifier utilizing the Taylor variational loss function achieves performance comparable to, or better than, methods that rely on class prior probabilities.
  • The proposed APUL framework identifies 178.27 hm2 of invasive Pedicularis kansuensis within the Bayinbuluke Grassland study area, accounting for 4.43% of the total vegetation area in the region. This proportion indicates that the invasion of Pedicularis kansuensis has significantly impacted the survival space of local pasture grasses.
Despite achieving high accuracy in species monitoring with only a small number of target samples, APUL does not appear to have fully exploited the potential of multi-source RS data. In future work, we will incorporate additional types of RS images (e.g., UAV imagery) within the APUL framework to improve the monitoring of IPs.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs17061041/s1.

Author Contributions

Conceptualization, E.Z. and A.S.; methodology, E.Z. and A.S.; software, E.Z., R.X. and W.L. (Wei Li); validation, E.Z. and A.S.; formal analysis, E.Z.; investigation, E.Z., A.S., R.X., and W.L. (Wenbo Li); resources, A.S.; data curation, E.Z. and A.S.; writing—original draft preparation, E.Z.; writing—review and editing, E.Z., A.S. and E.L.; visualization, E.Z.; supervision, A.S.; project administration, A.S.; funding acquisition, A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Tianshan Talent Development Program, grant number 2022TSYCCX0006, the National Natural Science Foundation of China, grant number 42371389, and the Western Young Scholars Project of the Chinese Academy of Sciences, grant number 2022-XBQNXZ-001.

Data Availability Statement

The Sentinel-2 data utilized in this research are publicly available on the GEE platform (https://developers.google.com/earth-engine/datasets/, accessed on 16 March 2025). The Jilin-1 data used in this study can be obtained from the corresponding author upon reasonable request. The code in this study will be available for open access at: https://github.com/1804071544/, accessed on 16 March 2025.

Acknowledgments

The authors would like to thank ESA for data support of Sentinel-2 imagery, and Google for the GEE, which provided an efficient and powerful computing platform. Additionally, the authors appreciate the editors and reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bonan, G.B. Forests and climate change: Forcings, feedbacks, and the climate benefits of forests. Science 2008, 320, 1444–1449.
  2. Pan, Y.; Birdsey, R.A.; Fang, J.; Houghton, R.; Kauppi, P.E.; Kurz, W.A.; Phillips, O.L.; Shvidenko, A.; Lewis, S.L.; Canadell, J.G. A large and persistent carbon sink in the world’s forests. Science 2011, 333, 988–993.
  3. Cardinale, B.J.; Duffy, J.E.; Gonzalez, A.; Hooper, D.U.; Perrings, C.; Venail, P.; Narwani, A.; Mace, G.M.; Tilman, D.; Wardle, D.A. Biodiversity loss and its impact on humanity. Nature 2012, 486, 59–67.
  4. Pimentel, D.; Harvey, C.; Resosudarmo, P.; Sinclair, K.; Kurz, D.; McNair, M.; Crist, S.; Shpritz, L.; Fitton, L.; Saffouri, R. Environmental and economic costs of soil erosion and conservation benefits. Science 1995, 267, 1117–1123.
  5. Vilà, M.; Espinar, J.L.; Hejda, M.; Hulme, P.E.; Jarošík, V.; Maron, J.L.; Pergl, J.; Schaffner, U.; Sun, Y.; Pyšek, P. Ecological impacts of invasive alien plants: A meta-analysis of their effects on species, communities and ecosystems. Ecol. Lett. 2011, 14, 702–708.
  6. Ehrenfeld, J.G. Ecosystem consequences of biological invasions. Annu. Rev. Ecol. Evol. Syst. 2010, 41, 59–80.
  7. Pimentel, D.; Zuniga, R.; Morrison, D. Update on the environmental and economic costs associated with alien-invasive species in the United States. Ecol. Econ. 2005, 52, 273–288.
  8. Meyerson, L.A.; Mooney, H.A. Invasive alien species in an era of globalization. Front. Ecol. Environ. 2007, 5, 199–208.
  9. Seebens, H.; Bacher, S.; Blackburn, T.M.; Capinha, C.; Dawson, W.; Dullinger, S.; Genovesi, P.; Hulme, P.E.; Van Kleunen, M.; Kühn, I. Projecting the continental accumulation of alien species through to 2050. Glob. Change Biol. 2021, 27, 970–982.
  10. Hameed, A.; Zafar, M.; Ahmad, M.; Sultana, S.; Bahadur, S.; Anjum, F.; Shuaib, M.; Taj, S.; Irm, M.; Altaf, M.A. Chemo-taxonomic and biological potential of highly therapeutic plant Pedicularis groenlandica Retz. using multiple microscopic techniques. Microsc. Res. Tech. 2021, 84, 2890–2905.
  11. Wang, X.; Zhang, Z.; Yu, Z.; Shen, G.; Cheng, H.; Tao, S. Composition and diversity of soil microbial communities in the alpine wetland and alpine forest ecosystems on the Tibetan Plateau. Sci. Total Environ. 2020, 747, 141358.
  12. Callaway, R.M.; Aschehoug, E.T. Invasive plants versus their new and old neighbors: A mechanism for exotic invasion. Science 2000, 290, 521–523.
  13. Williamson, M. Biological Invasions; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1996.
  14. Diagne, C.; Leroy, B.; Vaissière, A.-C.; Gozlan, R.E.; Roiz, D.; Jarić, I.; Salles, J.-M.; Bradshaw, C.J.; Courchamp, F. High and rising economic costs of biological invasions worldwide. Nature 2021, 592, 571–576.
  15. Rango, A.; Laliberte, A.; Herrick, J.E.; Winters, C.; Havstad, K.; Steele, C.; Browning, D. Unmanned aerial vehicle-based remote sensing for rangeland assessment, monitoring, and management. J. Appl. Remote Sens. 2009, 3, 033542.
  16. Getzin, S.; Wiegand, K.; Schöning, I. Assessing biodiversity in forests using very high-resolution images and unmanned aerial vehicles. Methods Ecol. Evol. 2012, 3, 397–404.
  17. Xie, Y.; Sha, Z.; Yu, M. Remote sensing imagery in vegetation mapping: A review. J. Plant Ecol. 2008, 1, 9–23.
  18. Adam, E.; Mutanga, O.; Rugege, D. Multispectral and hyperspectral remote sensing for identification and mapping of wetland vegetation: A review. Wetl. Ecol. Manag. 2010, 18, 281–296.
  19. Gould, W. Remote sensing of vegetation, plant species richness, and regional biodiversity hotspots. Ecol. Appl. 2000, 10, 1861–1870.
  20. Kerr, J.T.; Ostrovsky, M. From space to species: Ecological applications for remote sensing. Trends Ecol. Evol. 2003, 18, 299–305.
  21. Bradley, B.A. Remote detection of invasive plants: A review of spectral, textural and phenological approaches. Biol. Invasions 2014, 16, 1411–1425.
  22. Hudson, H.L.; Sesnie, S.E.; Hiebert, R.D.; Dickson, B.G.; Thomas, L.P. Cross-jurisdictional monitoring for nonnative plant invasions using NDVI change detection indices in Walnut Canyon National Monument, Arizona, USA. In The Colorado Plateau VI: Science and Management at the Landscape Scale; The University of Arizona Press: Tucson, AZ, USA, 2015; pp. 23–40.
  23. Blumenthal, D.M.; Norton, A.P.; Cox, S.E.; Hardy, E.M.; Liston, G.E.; Kennaway, L.; Booth, D.T.; Derner, J.D. Linaria dalmatica invades south-facing slopes and less grazed areas in grazing-tolerant mixed-grass prairie. Biol. Invasions 2012, 14, 395–404.
  24. Peerbhay, K.; Mutanga, O.; Lottering, R.; Ismail, R. Mapping Solanum mauritianum plant invasions using WorldView-2 imagery and unsupervised random forests. Remote Sens. Environ. 2016, 182, 39–48.
  25. Cho, M.A.; Mathieu, R.; Asner, G.P.; Naidoo, L.; Van Aardt, J.; Ramoelo, A.; Debba, P.; Wessels, K.; Main, R.; Smit, I.P. Mapping tree species composition in South African savannas using an integrated airborne spectral and LiDAR system. Remote Sens. Environ. 2012, 125, 214–226.
  26. Wäldchen, J.; Mäder, P. Machine learning for image based species identification. Methods Ecol. Evol. 2018, 9, 2216–2225.
  27. Zhang, H.; He, G.; Peng, J.; Kuang, Z.; Fan, J. Deep learning of path-based tree classifiers for large-scale plant species identification. In Proceedings of the 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), Miami, FL, USA, 10–12 April 2018; pp. 25–30.
  28. Pu, R. Mapping tree species using advanced remote sensing technologies: A state-of-the-art review and perspective. J. Remote Sens. 2021, 2021, 9812624.
  29. Lake, T.A.; Briscoe Runquist, R.D.; Moeller, D.A. Deep learning detects invasive plant species across complex landscapes using Worldview-2 and Planetscope satellite imagery. Remote Sens. Ecol. Conserv. 2022, 8, 875–889.
  30. James, K.; Bradshaw, K. Detecting plant species in the field with deep learning and drone technology. Methods Ecol. Evol. 2020, 11, 1509–1519.
  31. Wang, Q.; Cheng, M.; Xiao, X.; Yuan, H.; Zhu, J.; Fan, C.; Zhang, J. An image segmentation method based on deep learning for damage assessment of the invasive weed Solanum rostratum Dunal. Comput. Electron. Agric. 2021, 188, 106320.
  32. Thompson, N.C.; Greenewald, K.; Lee, K.; Manso, G.F. The computational limits of deep learning. arXiv 2020, arXiv:2007.05558.
  33. Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.-S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36.
  34. Sun, X.; Wang, B.; Wang, Z.; Li, H.; Li, H.; Fu, K. Research progress on few-shot learning for remote sensing image interpretation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2387–2402.
  35. Kiryo, R.; Niu, G.; Du Plessis, M.C.; Sugiyama, M. Positive-unlabeled learning with non-negative risk estimator. In Proceedings of the Advances in Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; Volume 30.
  36. Jaskie, K.; Spanias, A. Positive Unlabeled Learning; Morgan & Claypool Publishers: San Rafael, CA, USA, 2022.
  37. Li, W.; Guo, Q.; Elkan, C. A positive and unlabeled learning algorithm for one-class classification of remote-sensing data. IEEE Trans. Geosci. Remote Sens. 2010, 49, 717–725.
  38. Hu, W.; Le, R.; Liu, B.; Ji, F.; Ma, J.; Zhao, D.; Yan, R. Predictive adversarial learning from positive and unlabeled data. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Event, 2–9 February 2021; pp. 7806–7814.
  39. Bao, A.; Cao, X.; Chen, X.; Xia, Y. Study on models for monitoring of above ground biomass about Bayinbuluke grassland assisted by remote sensing. In Proceedings of the Remote Sensing and Modeling of Ecosystems for Sustainability V, San Diego, CA, USA, 10–14 August 2008; pp. 155–163.
  40. Liu, Q.; Yang, Z.; Han, F.; Shi, H.; Wang, Z.; Chen, X. Ecological environment assessment in world natural heritage site based on remote-sensing data. A case study from the Bayinbuluke. Sustainability 2019, 11, 6385.
  41. Chen, X.; Yang, Z.; Wang, T.; Han, F. Landscape ecological risk and ecological security pattern construction in world natural heritage sites: A case study of Bayinbuluke, Xinjiang, China. ISPRS Int. J. Geo-Inf. 2022, 11, 328.
  42. Yanyan, L.; Yukun, H.; Jianmei, Y.; Kaihui, L.; Guogang, G.; Xin, W. Study on harmfulness of Pedicularis myriophylla and its control measures. Arid Zone Res. 2008, 25, 778–782.
  43. Sui, X.; Li, A.; Guan, K. Impacts of climatic changes as well as seed germination characteristics on the population expansion of Pedicularis verticillata. Ecol. Environ. Sci. 2013, 22, 1099–1104.
  44. Wang, W.; Tang, J.; Zhang, N.; Wang, Y.; Xu, X.; Zhang, A. Spatiotemporal pattern of invasive Pedicularis in the Bayinbuluke Land, China, during 2019–2021: An analysis based on PlanetScope and Sentinel-2 data. Remote Sens. 2023, 15, 4383.
  45. He, Z.; He, D.; Mei, X.; Hu, S. Wetland classification based on a new efficient generative adversarial network and Jilin-1 satellite image. Remote Sens. 2019, 11, 2455.
  46. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sens. Environ. 2017, 202, 18–27.
  47. Pettorelli, N.; Vik, J.O.; Mysterud, A.; Gaillard, J.-M.; Tucker, C.J.; Stenseth, N.C. Using the satellite-derived NDVI to assess ecological responses to environmental change. Trends Ecol. Evol. 2005, 20, 503–510.
  48. Gao, B.-C. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266.
  49. Zheng, Z.; Zhong, Y.; Ma, A.; Zhang, L. FPGA: Fast patch-free global learning framework for fully end-to-end hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5612–5626.
  50. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
  51. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; pp. 234–241.
  52. Chen, H.; Li, Z.; Wu, J.; Xiong, W.; Du, C. SemiRoadExNet: A semi-supervised network for road extraction from remote sensing imagery via adversarial learning. ISPRS J. Photogramm. Remote Sens. 2023, 198, 169–183.
  53. Bekker, J.; Davis, J. Learning from positive and unlabeled data: A survey. Mach. Learn. 2020, 109, 719–760.
  54. Li, W.; Guo, Q.; Elkan, C. One-class remote sensing classification from positive and unlabeled background data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 730–746.
  55. Zhao, H.; Zhong, Y.; Wang, X.; Hu, X.; Luo, C.; Boitt, M.; Piiroinen, R.; Zhang, L.; Heiskanen, J.; Pellikka, P. Mapping the distribution of invasive tree species using deep one-class classification in the tropical montane landscape of Kenya. ISPRS J. Photogramm. Remote Sens. 2022, 187, 328–344.
  56. Chen, H.; Liu, F.; Wang, Y.; Zhao, L.; Wu, H. A variational approach for learning from positive and unlabeled data. Adv. Neural Inf. Process. Syst. 2020, 33, 14844–14854.
  57. Zhao, H.; Wang, X.; Li, J.; Zhong, Y. Class prior-free positive-unlabeled learning with Taylor variational loss for hyperspectral remote sensing imagery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 16827–16836.
  58. Pan, Z.; Yu, W.; Wang, B.; Xie, H.; Sheng, V.S.; Lei, J.; Kwong, S. Loss functions of generative adversarial networks (GANs): Opportunities and challenges. IEEE Trans. Emerg. Top. Comput. Intell. 2020, 4, 500–522.
  59. Schölkopf, B.; Williamson, R.C.; Smola, A.; Shawe-Taylor, J.; Platt, J. Support vector method for novelty detection. In Proceedings of the Advances in Neural Information Processing Systems (NIPS 1999), Denver, CO, USA, 29 November–4 December 1999; Volume 12.
  60. Liu, B.; Dai, Y.; Li, X.; Lee, W.S.; Yu, P.S. Building text classifiers using positive and unlabeled examples. In Proceedings of the Third IEEE International Conference on Data Mining, Melbourne, FL, USA, 22 November 2003; pp. 179–186.
  61. Lei, L.; Wang, X.; Zhong, Y.; Zhao, H.; Hu, X.; Luo, C. DOCC: Deep one-class crop classification via positive and unlabeled learning for multi-modal satellite imagery. Int. J. Appl. Earth Obs. Geoinf. 2021, 105, 102598.
  62. Zhao, H.; Zhong, Y.; Wang, X.; Shu, H. One-class risk estimation for one-class hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–17.
  63. Schölkopf, B.; Platt, J.C.; Shawe-Taylor, J.; Smola, A.J.; Williamson, R.C. Estimating the support of a high-dimensional distribution. Neural Comput. 2001, 13, 1443–1471.
  64. Cabezas, M.; Kentsch, S.; Tomhave, L.; Gross, J.; Caceres, M.L.L.; Diez, Y. Detection of invasive species in wetlands: Practical DL with heavily imbalanced data. Remote Sens. 2020, 12, 3431.
  65. Royimani, L.; Mutanga, O.; Odindi, J.; Dube, T.; Matongera, T.N. Advancements in satellite remote sensing for mapping and monitoring of alien invasive plant species (AIPs). Phys. Chem. Earth Parts A/B/C 2019, 112, 237–245.
  66. Weisberg, P.J.; Dilts, T.E.; Greenberg, J.A.; Johnson, K.N.; Pai, H.; Sladek, C.; Kratt, C.; Tyler, S.W.; Ready, A. Phenology-based classification of invasive annual grasses to the species level. Remote Sens. Environ. 2021, 263, 112568.
  67. Madonsela, S.; Cho, M.A.; Mathieu, R.; Mutanga, O.; Ramoelo, A.; Kaszta, Ż.; Van De Kerchove, R.; Wolff, E. Multi-phenology WorldView-2 imagery improves remote sensing of savannah tree species. Int. J. Appl. Earth Obs. Geoinf. 2017, 58, 65–73.
  68. Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on Convolutional Neural Networks (CNN) in vegetation remote sensing. ISPRS J. Photogramm. Remote Sens. 2021, 173, 24–49.
Figure 1. (a) Geographic location of target samples and Jilin-1 RGB image of the study area. (b,c) Pedicularis kansuensis on the Jilin-1 RGB image. (d) Flowers of Pedicularis kansuensis. (e,f) Field photos of the target samples shown at left.
Figure 2. The spectral curves of Pedicularis kansuensis and main landcover in the study area, derived from Sentinel-2 and Jilin-1 satellite imagery.
Figure 3. Overview of the proposed APUL framework.
Figure 4. The detailed structure of the classifier (left) and discriminators (right).
Figure 5. The spatial distribution of the training and test sets. (a–f) Details of the datasets across various regions; the test set is distributed as widely as possible across the study area to ensure its representativeness.
Figure 6. The spectral distribution of the training and test sets. (a) The spectral distribution of Pedicularis kansuensis (positive labels) and negative labels in the test set. (b) The spectral distribution of Pedicularis kansuensis in the training and test sets.
Figure 7. Training curves of APUL and competing methods; the filled areas represent the standard deviation over multiple training runs. (a) The F1-score on the test set. (b) The training loss.
Figure 8. Local details of the Pedicularis kansuensis detection maps produced by different methods, overlaid on high-resolution Jilin-1 RGB imagery. (a) Commission errors on bare land. (b) Commission errors on grassland. (c) Commission errors on rivers. (d) Omission errors of Pedicularis kansuensis.
Figure 9. Comparison of training curves for the APUL framework with and without the discriminator.
Figure 10. The distribution of target species (Pedicularis kansuensis) and others in three-dimensional spectral space, with each of the three axes corresponding to the near-infrared (NIR), red, and green bands of Jilin-1 imagery.
Figure 11. The performance of the proposed APUL framework compared with various deep positive-unlabeled learning methods across different class prior probabilities.
Figure 12. Distribution of Pedicularis kansuensis in the Bayinbuluke Grassland. (a–d) Jilin-1 RGB images and detected Pedicularis kansuensis in four representative areas: (a1,a2) Pedicularis kansuensis near ruts; (b1,b2) Pedicularis kansuensis near rivers; (c1) Pedicularis kansuensis in farmland; (c2) Pedicularis kansuensis near buildings; (d1,d2) Pedicularis kansuensis adjacent to green vegetation.
Table 1. The key parameters of the multi-modal satellites.

Jilin-1 spectral bands:
Band | Spatial resolution | Wavelength (nm)
B1 | 0.75 m | 450–700
B2 | 3 m | 430–520
B3 | 3 m | 520–610
B4 | 3 m | 610–690
B5 | 3 m | 770–895

Sentinel-2 spectral bands:
Band | Spatial resolution | Wavelength (nm)
B1 | 60 m | 433–453
B2 | 10 m | 458–523
B3 | 10 m | 543–578
B4 | 10 m | 650–680
B5 | 20 m | 698–713
B6 | 20 m | 733–748
B7 | 20 m | 773–793
B8 | 10 m | 785–900
B8A | 20 m | 855–875
B9 | 60 m | 935–955
B10 | 60 m | 1360–1390
B11 | 20 m | 1565–1655
B12 | 20 m | 2100–2280

Band number: Jilin-1, 4 spectral bands and 2 index bands (without B1); Sentinel-2, 13 spectral bands and 2 index bands.
Revisiting period (days): Jilin-1, 3.3; Sentinel-2, 5.
Date: Jilin-1, 10 August 2023; Sentinel-2, July, August, and September 2023.
Table 2. The detailed parameters of the classifier and discriminators, listed as convolution kernel size, stride, and number of output channels.

Classifier (FCN)
Encoder:
Conv 3 × 3, stride 1, 64
TSSA#1: Conv 3 × 3, stride 1, 64
Down 2× #1: Conv 3 × 3, stride 2, 128
TSSA#2: Conv 3 × 3, stride 1, 128
Down 2× #2: Conv 3 × 3, stride 2, 192
TSSA#3: Conv 3 × 3, stride 1, 192
Down 2× #3: Conv 3 × 3, stride 2, 256
TSSA#4: Conv 3 × 3, stride 1, 256
Lateral:
Conv 1 × 1, stride 1, 128
Decoder:
Conv 3 × 3, stride 1, 128 (×5)
Conv 3 × 3, stride 1, 64
Conv 1 × 1, stride 1, 1

Discriminators (UNet)
Encoder (a pair of convolutions at each scale):
Conv 3 × 3, stride 1, 8 (×2)
Conv 3 × 3, stride 1, 16 (×2)
Conv 3 × 3, stride 1, 32 (×2)
Conv 3 × 3, stride 1, 64 (×2)
Conv 3 × 3, stride 1, 128 (×2)
Conv 3 × 3, stride 1, 256 (×2)
Conv 3 × 3, stride 1, 512 (×2)
Conv 3 × 3, stride 1, 1024 (×2)
Lateral:
None
Decoder:
ConvTranspose 4 × 4, stride 2, 512; Conv 3 × 3, stride 1, 512 (×2)
ConvTranspose 4 × 4, stride 2, 256; Conv 3 × 3, stride 1, 256 (×2)
ConvTranspose 4 × 4, stride 2, 128; Conv 3 × 3, stride 1, 128 (×2)
ConvTranspose 4 × 4, stride 2, 64; Conv 3 × 3, stride 1, 64 (×2)
ConvTranspose 4 × 4, stride 2, 32; Conv 3 × 3, stride 1, 32 (×2)
ConvTranspose 4 × 4, stride 2, 16; Conv 3 × 3, stride 1, 16 (×2)
ConvTranspose 4 × 4, stride 2, 8; Conv 3 × 3, stride 1, 8 (×2)
Conv 3 × 3, stride 1, 1
Table 3. The details of the training and test sets.

Set | Characteristics | Size (pixels) | Labeling method
Training set | 51 bands | Positive: 304; Unlabeled: 1,048,272 | Field sampling
Test set | 51 bands | Positive: 55,525; Negative: 125,195 | Visual interpretation
Table 4. Performance of APUL compared with different methods.

Type | Model | AUC | Precision | Recall | F1-Score
One-class machine learning | OCSVM | 0.6291 | 0.1727 | 0.5275 | 0.2601
One-class machine learning | BSVM | 0.6438 | 0.3677 | 0.6050 | 0.4431
Deep PUL with class prior probabilities | ItreeNet | 0.9805 | 0.7370 | 0.8307 | 0.7802
Deep PUL with class prior probabilities | DOCC | 0.9775 | 0.7529 | 0.7935 | 0.7707
Deep PUL with class prior probabilities | HOneCls | 0.9818 | 0.7920 | 0.7842 | 0.7868
Class prior-free Deep PUL | T-HOneCls | 0.9679 | 0.6782 | 0.8315 | 0.7464
Class prior-free Deep PUL | APUL | 0.9842 | 0.7963 | 0.8072 | 0.8013
Table 5. The details of the ablation experiment.

Backbone | Structure | Loss | Dataset | Precision | Recall | F1-Score
FCN | None | BCE | Jilin-1 | 0.4049 | 0.9684 | 0.5639
FCN | None | BCE | Jilin-1 & Sentinel-2 | 0.4367 | 0.9870 | 0.6055
FCN | None | Taylor | Jilin-1 | 0.7555 | 0.7426 | 0.7485
FCN | None | Taylor | Jilin-1 & Sentinel-2 | 0.7402 | 0.7733 | 0.7534
FCN | Adversarial | BCE | Jilin-1 | 0.3395 | 0.9991 | 0.5059
FCN | Adversarial | BCE | Jilin-1 & Sentinel-2 | 0.4032 | 0.9883 | 0.5703
FCN | Adversarial | Taylor | Jilin-1 | 0.7757 | 0.7909 | 0.7817
FCN | Adversarial | Taylor | Jilin-1 & Sentinel-2 | 0.7963 | 0.8072 | 0.8013
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
