Article

Integrating Multi-Scale Remote-Sensing Data to Monitor Severe Forest Infestation in Response to Pine Wilt Disease

1 College of Computer Science, Inner Mongolia University, Hohhot 010021, China
2 College of Information Engineering, Inner Mongolia University of Technology, Hohhot 010051, China
3 College of Electronic Information Engineering, Inner Mongolia University, Hohhot 010021, China
4 Inner Mongolia Key Laboratory of Radar Technology and Application, Hohhot 010051, China
5 College of Forestry, Beijing Forestry University, Beijing 100083, China
6 Aerospace Information Research Institute, Beijing 100094, China
7 College of Geographical Science, Inner Mongolia Normal University, Hohhot 010022, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(20), 5164; https://doi.org/10.3390/rs14205164
Submission received: 30 August 2022 / Revised: 10 October 2022 / Accepted: 12 October 2022 / Published: 15 October 2022
(This article belongs to the Special Issue Machine Learning for Spatiotemporal Remote Sensing Data)

Abstract
Pine wilt disease (PWD) is one of the most destructive forest diseases, leading to rapid wilting and mortality in susceptible host pine trees. Spatially explicit detection of pine wood nematode (PWN)-induced infestation is important for forest management, policy making, and practice. Previous studies have mapped forest disturbances in response to various forest diseases and/or insects over large areas using remote-sensing techniques, but these efforts were often constrained by the limited availability of the ground truth needed to calibrate and validate moderate-resolution satellite algorithms when linking plot-scale measurements to satellite data. In this study, we proposed a two-level up-sampling strategy that integrates unmanned aerial vehicle (UAV) surveys and high-resolution Radarsat-2 satellite imagery to expand the number of training samples at the 30-m resampled Sentinel-1 resolution. Random forest algorithms were used separately to predict the PWN-induced infestation maps from Radarsat-2 and Sentinel-1 data. After data acquisition in Muping District during August and September 2021, we first verified the ability of a deep-learning-based object-detection algorithm (the YOLOv5 model) to detect infested trees from coregistered UAV-based RGB images (average precision (AP) greater than 70% and R2 of 0.94). A random forest algorithm trained using the up-sampled UAV infestation map as reference and the corresponding Radarsat-2 pixel values was then used to produce the Radarsat-2 infestation map, with an overall accuracy of 72.57%. Another random forest algorithm, trained using the Radarsat-2 infestation pixels of moderate and high severity (i.e., an infestation severity greater than 0.25, a value set empirically as a trade-off between classification accuracy and infection detectability) and the corresponding Sentinel-1 pixel values, was subsequently used to predict the Sentinel-1 infestation map, with an overall accuracy of 87.63% against the Radarsat-2 reference rather than the UAV reference. The Sentinel-1 map was also validated by independent UAV surveys, with an overall accuracy of 76.30% and a Kappa coefficient of 0.45. We found that expanding the training samples through the integration of UAV and Radarsat-2 data strengthened the medium-resolution Sentinel-1-based prediction model of PWD. This study demonstrates that the proposed method enables effective PWN infestation mapping over multiple scales.


1. Introduction

Pine wilt disease (PWD), caused by the pine wood nematode (PWN, Bursaphelenchus xylophilus), is among the most harmful forest diseases and leads to rapid wilting and mortality in susceptible host trees (usually Pinus species such as Pinus thunbergii, Pinus densiflora, and Pinus massoniana) [1]. PWD severely disturbs the original biological equilibrium of the forest system and causes massive economic losses. The severity of the present PWD outbreak in North China, its economic repercussions, and the necessity for forest management and sustainable development have spurred the exploration of remote-sensing techniques and algorithms for the reconnaissance and mapping of PWN infestation [2]. Therefore, in order to prevent and mitigate these negative effects and to enhance decision making, it is necessary to develop precise and effective tools for monitoring forest disturbances in response to PWD [3].
For extensive forest disease surveillance, microwave and optical satellite remote-sensing techniques have been developed over the past decades [4,5,6]. The gradual loss of leaf water and chlorophyll content during the infection phase is the physical basis of remotely sensed monitoring of PWD using optical images. However, the availability of optical remote sensing of forest disturbances is constrained because optical satellite data are frequently missing in cloudy and foggy regions. Additionally, the optical signal is affected by the illumination and view geometries, which hinders the retrieval of forest disease conditions [7,8]. Microwave remote sensing offers a valuable supplement and/or alternative to optical remote sensing for monitoring forest disease because its active sensors (such as synthetic aperture radar (SAR)) produce their own incident radiation and penetrate clouds, and can thus acquire satellite images regardless of weather and illumination conditions [9,10]. A PWN outbreak subjects tree parts (such as foliage, stems, and trunks) to water stress over the course of two to three months, significantly altering their dielectric constant [11]. Because this drop in water content changes the dielectric constant and hence the SAR backscattering characteristics, SAR offers the potential to identify PWD.
Previous studies proposed a variety of SAR-based methods for monitoring forest diseases, from mining sensitive bands to using machine-learning methods to establishing physically based retrieval models. Although the extremely complex imaging mechanism of the SAR signal restricts the physically based modeling and interpretation of forest water stress induced by pests such as PWN, microwave backscattering modeling has been an active topic in the remote-sensing community over the past few decades [12,13]. The investigation of backscattering change in forest areas has proven challenging due to multiple factors (e.g., stand architecture, environmental conditions affecting the vegetation water content, and soil surface moisture). Different SAR vegetation indices, such as the radar change ratio (RCR) [14], have been suggested as alternatives that better depict the state of the forest by reducing the stochastic component of the signal. Machine-learning techniques in the SAR domain have the advantage of not requiring intricate physically based modeling and, when properly trained, can produce relatively high classification accuracy [15,16,17]. Through polarimetric decomposition and other operations [18,19], it is possible to extract hundreds of polarimetric characteristics and vegetation indicators that are sensitive to forest conditions from full-polarization SAR data, which considerably enhances the feature dimensionality in machine-learning models. The accuracy and precision of machine-learning models may typically be increased by increasing the feature dimensionality.
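As a concrete illustration, the sketch below computes one such index, the quad-pol radar vegetation index (RVI = 8σHV / (σHH + σVV + 2σHV)), from backscatter coefficients; the sample values are synthetic, not data from this study.

```python
import numpy as np

def radar_vegetation_index(hh, hv, vv):
    """Quad-pol Radar Vegetation Index from linear-scale backscatter:
    RVI = 8*HV / (HH + VV + 2*HV)."""
    return 8.0 * hv / (hh + vv + 2.0 * hv)

def db_to_linear(sigma_db):
    """Convert backscatter from decibels to linear power units."""
    return 10.0 ** (sigma_db / 10.0)

# Example: per-pixel RVI for a small synthetic image chip (values in dB).
hh_db = np.array([[-7.1, -6.8], [-7.5, -7.0]])
hv_db = np.array([[-14.2, -13.9], [-14.8, -14.1]])
vv_db = np.array([[-8.0, -7.6], [-8.3, -7.9]])
rvi = radar_vegetation_index(db_to_linear(hh_db),
                             db_to_linear(hv_db),
                             db_to_linear(vv_db))
print(rvi)  # values near 0 = sparse canopy, near 1 = dense canopy
```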
However, accurately monitoring PWN infestation conditions across wide areas is difficult due to the restricted availability of ground surveys that provide training samples for machine-learning algorithms [8,20,21]. In general, ground-based field measurements are costly and the sites are often difficult to access. For instance, earlier studies used very small amounts of ground reference data to develop and test retrieval models, which inevitably had a negative impact on the models' robustness and generality [6,8]. In comparison, unmanned aerial vehicles (UAVs) are an efficient and low-cost alternative for collecting reference data on forest diseases that were typically obtained by labor-intensive field campaigns [22,23]. By combining machine-learning techniques and UAV-observed imagery, the close-range monitoring of species invasion and tree disease has been greatly improved [24,25,26]. In contrast to conventional machine-learning techniques, which are less effective at extracting low-, mid-, and high-level feature representations, deep-learning methods offer significant potential as the most efficient option [23]. Multiple sensor types, including RGB cameras, multispectral and hyperspectral cameras, and LiDAR, have been used in this context. Many studies have shown promising results for UAV-based tree detection and species classification with various CNN-based frameworks. Some scholars demonstrated that a 3D-CNN performed best in classifying tree species (i.e., pine, spruce, and birch) in a boreal forest using a combination of RGB and hyperspectral data [27]. Similarly, a previous study incorporated ResNet-18 into the DeepLabv3+ architecture to detect three types of palm trees in the Amazonian forest [24]. Another example applied a deep convolutional generative adversarial network (DCGAN) to discriminate between healthy and diseased Pinus trees in a densely forested park area [28]. Although recent developments in UAV technology offer prospects for accurately identifying infested trees at the forest-stand level, the training samples for images with resolutions of dozens of meters are still very few [20]. This study aims to fill this gap by creating a novel strategy for expanding training samples through the incorporation of UAV and high-resolution satellite imagery.
Linking UAV surveys and high-resolution satellite imagery is a possible way to expand the training samples, because high-resolution imagery generally covers tens of kilometers [29]. In addition, the strategy of integrating UAV data and high-resolution satellite imagery to calibrate moderate-resolution models is one of the classical synergy modes of UAV and satellite imagery for remote-sensing applications [30]. As previous studies have reported, the combination of UAV and satellite remote sensing has significant potential to overcome the current constraints of earth observation [20,29]. Therefore, this study aims to answer two questions: (1) how can UAV and SAR satellite imagery be integrated into a two-level up-sampling framework for expanding training samples; and (2) can this framework improve the ability to map forest disturbance in response to PWD?

2. Data

2.1. Study Area

The study area was mainly composed of conifer forests located on both coastal flat terrain and in complex mountains (Figure 1). The forests are predominantly even-aged stands of black pine (Pinus thunbergii) with some red pine (Pinus densiflora) stands, mainly located at elevations ranging from 0 to 480 m. The region has a warm temperate continental monsoon climate with obvious seasonal changes. The pine wood nematode (PWN) was first found in the study area in the autumn of 2016 and has become widespread since then, particularly causing the rapid mortality of a large area of coastal defense plantations and mountain forests in the east of the study area. The PWN-induced black pine mortality peaked between 2019 and 2020 in this region. The PWD outbreak and expansion were possibly caused by the propagation of PWN vectored by Monochamus alternatus from the neighboring provinces. Many efforts have been undertaken to monitor the infested forest areas through field surveys and to protect the healthy forest areas by cutting infested trees.

2.2. UAV Field Campaign and Dataset

In order to produce an ultrahigh-resolution PWN infestation map of local regions, we acquired a UAV dataset in a field campaign conducted from 15 to 20 August 2021. According to infestation levels and access options, we distributed 17 forest sample plots around the study area on both flat and mountainous terrain, each approximately 100 m × 100 m in size (Figure 1). These plots were positioned as a trade-off between accessibility and the random-sampling rule. The canopy coverage of these plots ranges from about 0.2 to about 0.9. The crowns of PWN-infested trees range in color from red to gray.
Aerial surveys of the 17 selected infested forest plots were conducted (Table 1). The DJI Phantom 4 Multispectral quadcopter (DJI Technology Co., Ltd., Shenzhen, China), equipped with a real-time kinematic (RTK) Global Navigation Satellite System (GNSS) module, barometer, compass, and inertial measurement unit (IMU), was used to collect remotely sensed images. The integrated multispectral imaging system was composed of an RGB camera and five individual narrow-band cameras (i.e., blue (450 ± 16 nm), green (560 ± 16 nm), red (650 ± 16 nm), red-edge (730 ± 16 nm), and near-infrared (840 ± 26 nm)). Each camera provides an image of around two million pixels with 16-bit radiometric resolution. The focal length was fixed at 5.74 mm, resulting in a field of view (FOV) of 46° × 38°. The UAV-based images were acquired at a flight altitude of 100 m above ground level, resulting in a ground sample distance of approximately 5 cm. The flight paths were designed to produce a minimum photographic side overlap of 70% and a forward overlap of 80%. The UAV also acquired multispectral images of a Lambertian panel with a reflectance of 0.5, placed in an open space next to the plot, before the UAV left the home point and after it returned. After data acquisition, RGB geo-rectified orthomosaics were generated using Agisoft PhotoScan Pro (Agisoft LLC, St. Petersburg, Russia), with a geolocation accuracy of better than 5 cm. The detailed technical workflow of orthomosaic production can be found in [22].

2.3. Radarsat-2 Imagery

In order to expand the training samples based on UAV data, we introduced Radarsat-2 imagery to broaden the spatial extent of the PWN infestation surveys (Table 1). All of the forest plots in the study area were covered by a single quad-polarization Radarsat-2 image taken on 5 September 2021, with an 8 m resolution and a 25 km by 25 km swath. The C-band SAR on board the Radarsat-2 satellite collects data in any combination of horizontal and vertical polarizations (i.e., HH, HV, VV, VH). The SNAP version 8.0 software was employed to obtain the backscattering coefficients and polarimetric values. We first performed radiometric calibration to transform raw pixel values into radar backscatter coefficients. Next, we applied a Refined Lee speckle filter with a 5 × 5 window size to reduce speckle while preserving the useful information. We then extracted polarimetric parameters such as span, pedestal height (PH), radar vegetation index (RVI), radar forest degradation index (RFDI), canopy scattering index (CSI), and volume scattering index (VSI) [31]. We used model-based decomposition methods that decompose the scattering matrix into different scattering mechanisms (i.e., direct, double-bounce, and volume scattering), such as the Freeman–Durden, Yamaguchi, Cloude, Touzi, and Van Zyl methods [19]. Additionally, several other polarimetric decomposition methods, such as Sinclair decomposition and Pauli decomposition, were also applied [32]. All the decomposed images were then geocoded using Range-Doppler terrain correction with the Shuttle Radar Topography Mission (SRTM) 30-m-resolution digital elevation model (DEM). Finally, the conifer forest pixels within the decomposed images were extracted for modeling the infestation severity using a 30 m resolution land-cover-type map.
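For orientation, the following sketch chains the same SNAP operators (radiometric calibration, Refined Lee speckle filtering, and Range-Doppler terrain correction) through SNAP's Python interface (snappy). The file name is hypothetical, and the exact operator parameters may differ from the authors' SNAP 8.0 desktop workflow.

```python
# Sketch only: a Radarsat-2 preprocessing chain via SNAP's Python interface.
from snappy import ProductIO, GPF, jpy

HashMap = jpy.get_type('java.util.HashMap')
product = ProductIO.readProduct('RS2_quadpol_20210905.zip')  # hypothetical path

# 1. Radiometric calibration: digital numbers -> backscatter coefficients.
cal_params = HashMap()
cal_params.put('outputSigmaBand', True)
calibrated = GPF.createProduct('Calibration', cal_params, product)

# 2. Refined Lee speckle filter with a 5 x 5 window.
spk_params = HashMap()
spk_params.put('filter', 'Refined Lee')
spk_params.put('filterSizeX', '5')
spk_params.put('filterSizeY', '5')
filtered = GPF.createProduct('Speckle-Filter', spk_params, calibrated)

# 3. Range-Doppler terrain correction with the 30 m SRTM DEM.
tc_params = HashMap()
tc_params.put('demName', 'SRTM 1Sec HGT')
geocoded = GPF.createProduct('Terrain-Correction', tc_params, filtered)

ProductIO.writeProduct(geocoded, 'rs2_geocoded', 'GeoTIFF')
```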

2.4. Sentinel-1 Imagery

In order to map the PWN infestation over the study area, we applied the proposed algorithm to Sentinel-1 imagery (Table 1). The Sentinel-1 image captured on 15 September 2021, covering the entire study area, was extracted and downloaded from Google Earth Engine (GEE). The 30 m resolution SRTM DEM was also downloaded from GEE. The Sentinel-1 mission is a constellation of two satellites in a sun-synchronous orbit with a 180° difference in orbital phasing. These satellites are equipped with C-band synthetic aperture radar that collects data in all weather conditions. The primary operational mode over land is the Interferometric Wide (IW) swath mode, which provides data in either single (HH or VV) or dual (HH + HV or VV + VH) polarization with a ground resolution of 5 m × 20 m. The level-1 Ground Range Detected (GRD) product downloaded from GEE was resampled to a 30-m spatial resolution. The pre-processing steps, including terrain correction, radiometric calibration, and speckle filtering, were executed before the download. The image consisted of two backscattering-coefficient layers in dual polarizations, VV and VH, converted to decibels (dB).
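A minimal sketch of this retrieval with the GEE Python API is given below; the area-of-interest coordinates are hypothetical and only indicate the general region.

```python
# Sketch of retrieving the Sentinel-1 IW GRD scene from Google Earth Engine.
import ee

ee.Initialize()
aoi = ee.Geometry.Rectangle([121.3, 37.2, 121.8, 37.6])  # hypothetical extent

s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(aoi)
      .filterDate('2021-09-15', '2021-09-16')
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
      .first())

# The GEE GRD product is already terrain-corrected, calibrated, and in dB.
vv_vh = ee.Image(s1).select(['VV', 'VH'])

# Export at the 30 m working resolution used in this study.
task = ee.batch.Export.image.toDrive(image=vv_vh, description='s1_vvvh_30m',
                                     region=aoi, scale=30)
task.start()
```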

3. Methodology

The general workflow of this study is shown in Figure 2, which mainly consists of three steps: (1) detection of PWN-infested trees from UAV images and production of Radarsat-scale reference infestation map; (2) Radarsat-2-based modeling of PWN infestation and Sentinel-scale reference infestation map production; (3) Sentinel-1-based modeling of PWN infestation and regional infestation map production.
In the present study, we proposed a two-level up-sampling scheme for PWN infestation mapping over large areas. The UAV-based infestation map was first scaled up to the high-resolution Radarsat-2 scale to calibrate the Radarsat-2 PWN infestation mapping model. Subsequently, the Radarsat-2 infestation map was used as reference data to train a machine-learning model to map Sentinel-1-based infestation at 30 m resolution. In this process, we used an intermediate Radarsat-2 image to scale up from UAV to Sentinel-1 imagery because the intermediate layer could expand our training dataset beyond the circa 100 pixels that would have been produced from using UAV images alone. The produced Sentinel-1 infestation map was validated by independent Radarsat-2 data and UAV data that were not used in the up-sampling and/or training process. The following sub-sections provide details of these steps.

3.1. UAV-Based PWN Infestation Modeling, Predicting, and Up-Sampling

3.1.1. Labeling of Infested Trees on UAV Orthomosaic

On the UAV-based orthomosaic, all the infested trees with red tones were annotated with rectangular bounding boxes. Correspondingly, each labeled item is composed of a bounding box and its associated class (i.e., infested tree). The orthomosaic was divided into numerous image tiles of 448 by 448 pixels to meet the input requirements of the deep-learning model described in the following section. To augment the labeling dataset, each orthomosaic was divided into overlapping image tiles, with the overlap between two successive tiles along the column direction fixed at 350 pixels, as sketched below.
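A minimal sketch of this tiling scheme, assuming the orthomosaic has already been read into a numpy array (e.g., with rasterio); for simplicity, the 350-pixel overlap is applied along both image directions.

```python
import numpy as np

def tile_orthomosaic(mosaic, tile=448, overlap=350):
    """Yield (row, col, tile_array) for overlapping square tiles."""
    step = tile - overlap  # 98-pixel stride between tile origins
    h, w = mosaic.shape[:2]
    for r in range(0, max(h - tile, 0) + 1, step):
        for c in range(0, max(w - tile, 0) + 1, step):
            yield r, c, mosaic[r:r + tile, c:c + tile]

# Example with a dummy mosaic:
mosaic = np.zeros((2000, 2000, 3), dtype=np.uint8)
tiles = list(tile_orthomosaic(mosaic))
print(len(tiles), tiles[0][2].shape)  # number of tiles, (448, 448, 3)
```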

3.1.2. Deep-Learning Modeling and Detection of Infested Trees

In this study, we opted for a YOLO-type model [33], as these have been reported to attain similar precision yet higher prediction speed compared with alternative object-detection CNN models (e.g., Faster R-CNN). Within the YOLO family, we employed the latest version, YOLOv5 [34], implemented in PyTorch (https://github.com/ultralytics/yolov5 (accessed on 26 September 2022)), for the modeling and detection of infested trees, owing to its increased accuracy and speed over previous YOLO versions. YOLOv5 is composed of three components: the backbone, neck, and head. The backbone module uses Cross-Stage Partial Networks to efficiently extract image features from an input image. The neck module uses a Path Aggregation Network (PANet) to generate feature pyramids, improving generalization across object scales. The head module of YOLOv5 is the same as in previous versions (e.g., YOLOv3); it applies anchor boxes to the features and generates output vectors with the associated class confidence, objectness score, and bounding boxes. Among the available YOLOv5 architectures, we opted for the YOLOv5x model due to its superior performance over the smaller variants. In our study, we replaced the PANet with a Bi-directional Feature Pyramid Network (BiFPN).
We used standard data-augmentation tricks, including random crops, rotations, and hue, saturation, and exposure shifts, before model training. The labeling dataset was randomly divided into a training dataset (60%), validation dataset (20%), and testing dataset (20%) on an item-by-item basis. The maximum number of epochs, batch size, and learning rate were set to 100, 8, and 0.0032, respectively. The default patience threshold of 100 was used to interrupt the training process after 100 consecutive epochs without improvement. The model depth multiple and layer channel multiple of YOLOv5 were set to 1.33 and 1.25, respectively. The YOLOv5 model was run on an Intel(R) Core(TM) i9-10900K 3.70 GHz × 16 processor (CPU) with 32 GB of RAM under the Windows 10 operating system (64-bit). We used accuracy metrics, i.e., precision, recall, accuracy, and average precision (AP), to measure the performance of the trained YOLOv5 model on the testing dataset. The AP values corresponding to intersection-over-union (IoU) thresholds of 0.5 (AP@0.5), 0.75 (AP@0.75), and 0.95 (AP@0.95) were provided. Meanwhile, the APs over IoU thresholds ranging from 0.5 to 0.95 (mAP@(0.5:0.95)) were averaged as an additional metric. The precision–recall curve is drawn with precision on the y-axis and recall on the x-axis, and the AP is the area under this curve, i.e., its integral. The following formulas show the calculation of these accuracy metrics, where TP, FP, TN, and FN refer to true positives, false positives, true negatives, and false negatives, respectively:
$$\text{Precision} = \frac{TP}{TP + FP}$$
$$\text{Recall} = \frac{TP}{TP + FN}$$
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
$$\text{AP} = \int_{0}^{1} p(r)\,dr$$
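The short sketch below evaluates these formulas numerically, approximating the AP integral with the trapezoidal rule; the counts and the precision–recall curve are synthetic, not results from this study.

```python
import numpy as np

def average_precision(recall, precision):
    """AP = integral of p(r) dr, approximated with the trapezoidal rule.
    `recall` must be sorted in increasing order."""
    return float(np.trapz(precision, recall))

def precision_recall_accuracy(tp, fp, tn, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, accuracy

# Example: a small synthetic precision-recall curve.
r = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
p = np.array([1.0, 0.95, 0.90, 0.80, 0.60, 0.30])
print(average_precision(r, p))            # ~0.78
print(precision_recall_accuracy(74, 20, 900, 26))
```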

3.1.3. UAV-Based PWN Infestation Map and Up-Sampling

The trained YOLOv5 model was applied to the orthomosaic of each forest plot to detect infested trees. Every detected tree was marked by a bounding box recording the pixel coordinates of the box vertices. As a proxy for infestation severity, we measured the canopy coverage of infested trees. Given that the bounding boxes usually contain forest-floor pixels, we extracted the central region of each bounding box at an empirical ratio of 2/3 to improve the accuracy of the infested-tree crown-cover estimation. Pixels within the extracted regions were then assigned a value of 1 and the remaining pixels a value of 0, producing the georeferenced infestation mask image.
To generate labeling data for Radarsat-2 image pixels, we resampled the UAV-based georeferenced infestation mask image according to the actual pixel resolution of the Radarsat-2 imagery. As the Radarsat-2 image provides the geographical position of the center of each pixel, we derived the geolocations of the four vertices of each pixel and projected them onto the UAV infestation mask image to obtain the image tile corresponding to the satellite pixel. The infestation severity, defined as the percentage of infested tree-crown cover per unit horizontal surface area, was calculated by averaging the pixel values within the image tile. Finally, the PWN infestation severity was obtained for a large number of Radarsat-2 pixels.
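The sketch below illustrates the two operations described in this sub-section: shrinking each detected box to its central two-thirds and averaging the resulting binary mask within a satellite pixel footprint. For simplicity, the footprint is assumed to be axis-aligned in the UAV image grid, and all coordinates are hypothetical.

```python
import numpy as np

def shrink_box(x1, y1, x2, y2, ratio=2/3):
    """Keep the central `ratio` portion of a bounding box."""
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    hw, hh = (x2 - x1) * ratio / 2, (y2 - y1) * ratio / 2
    return cx - hw, cy - hh, cx + hw, cy + hh

def infestation_mask(shape, boxes):
    """Rasterize shrunken boxes into a binary infestation mask."""
    mask = np.zeros(shape, dtype=np.uint8)
    for box in boxes:
        x1, y1, x2, y2 = (int(round(v)) for v in shrink_box(*box))
        mask[y1:y2, x1:x2] = 1
    return mask

def severity(mask, r0, c0, size):
    """Mean infested-crown cover within one satellite pixel footprint."""
    return float(mask[r0:r0 + size, c0:c0 + size].mean())

mask = infestation_mask((2000, 2000),
                        [(100, 120, 180, 210), (400, 380, 470, 460)])
print(severity(mask, 0, 0, 160))  # 8 m pixel at ~5 cm GSD -> 160 UAV pixels
```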

3.2. Radarsat-2-Based PWN Infestation Modeling, Prediction, and Up-Sampling

We used a random forest (RF) machine-learning algorithm to model the relationship between the Radarsat-2-derived parameters and the PWN infestation, as RF has been widely used for classification and regression problems in the remote-sensing domain due to its high accuracy and robustness [2,35,36]. RF is a non-parametric ensemble learning algorithm composed of many independent decision trees in which each tree contributes a single vote for the assignment of the most frequent class to the input vector [37]. The vote determination at each node is usually based on the Gini criterion. The ensemble approach in RF is known as bootstrap aggregation, or bagging, because the trees are built using bootstrap sampling and their results are then combined. RF can efficiently process high-dimensional data, is less prone to overfitting, and is easy to tune [38]. RF can also measure the relative importance of the different input features or variables during the classification process: technically, RF permutes one of the input variables while keeping the rest constant and then measures the resulting decrease in accuracy via the decrease in the Gini index.
The input Radarsat-2 variables of the RF model comprised the various polarimetric parameters derived by the multiple decomposition methods mentioned in Section 2.3 and the vegetation indices shown in Table 2. In addition, the SRTM DEM data, resampled to the Radarsat-scale resolution using Kriging spatial interpolation, were included in the input variables. RF requires tuning two parameters during model creation and training: the number of trees and the number of input variables considered when splitting at each tree node. In this study, the randomForest package within the statistical software R was used. We varied both parameters across a wide range of values until reaching convergence, i.e., the optimal model parameters. We trained the RF model using five-fold cross-validation with five repetitions. The number of predictors randomly picked as candidates at each split, known as mtry, is the most crucial parameter to tune in the RF model. In our case, the optimal RF model consisted of 1000 trees with eight explanatory variables randomly selected at each tree node.
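The authors worked in R; as an equivalent illustration, the sketch below tunes the mtry analogue (max_features) of a scikit-learn random forest with five-fold cross-validation repeated five times, on synthetic stand-in data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold

# X: n_samples x n_features matrix standing in for polarimetric parameters,
# vegetation indices, and resampled DEM; y: 1 = infested, 0 = healthy.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
search = GridSearchCV(
    RandomForestClassifier(n_estimators=1000, random_state=0),
    param_grid={'max_features': [2, 4, 8, 16]},  # mtry candidates
    cv=cv, scoring='accuracy', n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)

# Per-feature importance (mean decrease in impurity), cf. Figure 6.
importances = search.best_estimator_.feature_importances_
```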
The trained RF model was used to predict the PWN infestation from the Radarsat-2 image. We assumed that the predicted PWN infestation could provide reference data for modeling the PWN infestation from the Sentinel-1 image. Therefore, the predicted PWN infestation map was resampled to a 30-m resolution, consistent with the resampled Sentinel-1 image.

3.3. Sentinel-1-Based PWN Infestation Modeling and Predicting

The RF model was also used to model the relationship between the PWN infestation and the Sentinel-1 pixel values. The input variables of the RF model consisted of the backscattering coefficients and vegetation indices shown in Table 3. As described in the previous section, the RF model parameters were iteratively tuned to determine the optimal number of trees and splitting variables. In our case, the optimal Sentinel-1 RF model included 1000 trees with five explanatory variables randomly selected at each tree node. The trained RF model was then applied to predict the PWN infestation from the Sentinel-1 image, producing the regional PWN infestation map.
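A minimal sketch of this prediction step is shown below; the training data and the Sentinel-1 feature stack are synthetic stand-ins for the Radarsat-2-derived labels and the real VV/VH features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 5))            # stand-in feature vectors
y_train = (X_train[:, 0] > 0).astype(int)      # stand-in labels (1 = infested)
rf = RandomForestClassifier(n_estimators=1000, max_features=5,
                            random_state=0).fit(X_train, y_train)

# `stack`: (n_features, H, W) array of VV/VH backscatter and derived indices.
stack = rng.normal(size=(5, 100, 100))
X = stack.reshape(5, -1).T                     # flatten to (H*W, n_features)
infestation_map = rf.predict(X).reshape(100, 100)  # per-pixel class map
```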

4. Results

4.1. UAV-Level PWN Infestation Evaluation

The trained object-detection CNN model was applied to the UAV RGB orthomosaic of each plot. As shown by an example plot in Figure 3a, almost all PWN-infested trees were effectively detected, although a small number of healthy trees and soil patches were misclassified.
The accuracy evaluation based on the test dataset showed that all accuracy metrics (i.e., recall, precision, accuracy, and AP@0.5) were above 70% (Figure 4a), demonstrating that the deep-learning algorithm was capable of accurately detecting infested trees under a variety of environmental conditions. The AP@0.5, AP@0.75, and AP@0.95 were 0.74, 0.26, and 0.0002, respectively. The mAP@(0.5:0.95) was 0.34, which is comparable with previous related studies, e.g., [45]. The accuracy of the CNN model was further evaluated against visual detection on each UAV orthomosaic (a total of 8071 infested trees were visually determined) (Figure 4b), which showed a good fit to the 1:1 line (y = 1.08x + 7.60) with an R2 of 0.94. In order to produce a reference infestation severity map, we therefore assumed that the deep-learning-based infested tree detection was 100% correct.
As described in Section 3.1.3, the central two-thirds of each detected bounding box was considered pure pixels of infested trees. According to the pixel footprint of the Radarsat-2 image, the infested tree pixels were resampled to the Radarsat-2 resolution (Figure 3b). It should be noted that we assumed both the UAV orthomosaic and the Radarsat-2 image had highly accurate geographical positioning. In the resampling process, the infestation severity was calculated based on the infested area within each resampled pixel (Figure 3c). To minimize uncertainty in the labeling dataset, the pixels with an infestation severity greater than 0.25 were extracted to produce the Radarsat-2 labeling dataset (Figure 3d).

4.2. Mapping PWN Infestation Map from Radarsat-2

We used the trained random forest algorithm to predict the PWN infestation from the Radarsat-2 image, with input variables including backscattering-coefficient-related indices and polarimetric decomposition parameters. The confusion matrix is shown in Table 4. The overall accuracy and Kappa coefficient of the PWN infestation classification were 72.57% and 0.44, respectively.
As shown in Figure 5, the infested pixels are highlighted in orange. The PWN infestation increases along the transition from the flat region to the mountain region and tends to be concentrated in complex, rugged terrain. Compared to the UAV images, the Radarsat-2 infestation map greatly expanded the training data for the Sentinel-1 images.
The importance ranking of the top 20 variables of the random forest algorithm using mean decrease Gini is illustrated in Figure 6.
The span contributed most to the infestation classification, followed by the polarimetric decomposition parameters, among which no apparent difference in importance was found.

4.3. Sentinel-Level PWN Infestation Mapping

As shown in Figure 7, the PWN infestation map from the Sentinel-1 image was predicted using the trained random forest algorithm described in Section 3.3. The overall accuracy and Kappa coefficient of the PWN infestation classification based on the Radarsat-2 infestation map were 85.04% and 0.70, respectively (Table 4). For each pixel, the corresponding backscattering coefficient and index data were extracted from the Sentinel-1 data as input to the random forest algorithm. The PWN infestation areas were clearly concentrated in the continuous pine forest region and tended to be more severe in rugged terrain. The spatial distribution of the PWN infestation followed a cluster-like pattern. Less infestation was observed along the coast owing to the felling of infested trees by the local authority, whereas, because of access difficulties, infested trees in the mountain regions had not been removed. This suggests that effective forest management strategies should be implemented to minimize further PWN spread. The importance ranking of the top three variables of the random forest algorithm using the mean decrease in Gini is illustrated in Figure 8. The backscattering coefficient of the VH polarization was found to be the most important, followed by that of the VV polarization.
We used independent UAV data that were not involved in the up-sampling and/or training process to validate the Sentinel-1-derived PWN infestation map. As direct co-registration between the UAV image and the resampled Sentinel-1 image is quite challenging, we adjusted the correspondence between the resampled 30-m pixels and the UAV image footprint to obtain the optimal validation accuracy of the Sentinel-1 infestation map. Figure 9 shows the optimal overall accuracy and Kappa coefficients under different UAV-based infestation severity thresholds, where the threshold divides the pixels into infested and non-infested classes. As Section 3.1.3 and Figure 3 describe, the satellite-level infestation map was produced using this threshold. As the threshold increased, both the overall accuracy and the Kappa coefficient gradually increased, except at the threshold of 0.3. This can be explained by the fact that a higher threshold improved the quality of the training dataset and decreased its uncertainty, whereas the threshold of 0.3 led to a decrease in the Kappa coefficient because of the smaller training dataset. Figure 9 demonstrates that an infestation severity threshold of 0.25 is the most suitable. With this threshold, the overall accuracy and Kappa coefficient of the Sentinel-1 PWN infestation map against the UAV-based infestation reference were 76.30% and 0.45, respectively.
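The threshold sweep behind Figure 9 can be sketched as follows, scoring the binarized UAV reference against the predicted map with overall accuracy and Cohen's Kappa; the severity values and predictions here are synthetic.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(2)
uav_severity = rng.uniform(0, 1, size=500)   # per-pixel reference severity
s1_predicted = (uav_severity + rng.normal(0, 0.3, 500) > 0.25).astype(int)

for t in (0.05, 0.10, 0.15, 0.20, 0.25, 0.30):
    reference = (uav_severity > t).astype(int)  # binarize at candidate threshold
    oa = accuracy_score(reference, s1_predicted)
    kappa = cohen_kappa_score(reference, s1_predicted)
    print(f'threshold={t:.2f}  OA={oa:.3f}  kappa={kappa:.3f}')
```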

5. Discussion

As an active remote-sensing technique, multi-scale SAR offers opportunities to monitor the dynamics and spread of a PWN infestation under a wide range of weather conditions. However, accurately monitoring PWN infestation over large areas has been challenging due to the limited availability of ground surveys that provide training samples for machine-learning algorithms. To address this problem, we integrated unmanned aerial vehicle (UAV) surveys and Radarsat-2 satellite imagery in an up-sampling approach to expand the number of training samples at the 30-m resampled Sentinel-1 resolution. In other words, we determined a machine-learning model between UAV data and Radarsat-2 data in a small area and then applied the model to the Radarsat-2 data to increase the amount of reference data for training and validating a Sentinel-1-scale machine-learning model of PWN infestation. Although a similar idea has been applied in previous studies, e.g., [20,29,46], this study offers a crucial improvement in that it specifically considers PWN infestation with active microwave satellite data and explores the UAV as an alternative to ground surveys.
Our results first verified that the UAV-produced infestation map provided a rapid and reliable alternative to traditional ground-based field sampling (Figure 3 and Figure 4), supporting the training-sample generation. Next, the integration of UAV and high-resolution Radarsat-2 images greatly boosted the number of Sentinel-1 training samples (2283 effective samples) compared to directly scaling up the UAV surveys to Sentinel-1 pixels (109 effective samples). Here, effective samples are those with an infestation severity greater than 0.25. This implies that surveys over very limited spatial extents would be rather infeasible for training machine-learning models, a finding consistent with several previous studies [20,29]. We demonstrated that the predicted Sentinel-1 infestation map was reliable, with an overall accuracy greater than 75% (Table 4 and Figure 9), regardless of whether independent UAV surveys or Radarsat-2 data were used in the validation process. The accuracy of our predictions is consistent with some previous studies [5,19,47]. For example, Meng et al. (2022) monitored the severity of a southern pine beetle infestation and its change in a temperate mixed forest using 30 m plot measurements and Landsat-8 imagery, with an overall accuracy ranging from 68% to 91% for different input spectral vegetation indices [3].
The reader may notice that the overall accuracy of the Sentinel-1 PWN infestation map validated by the Radarsat-2 reference was higher than that validated by the UAV reference. This can be understood in terms of errors in the generation of the Radarsat-2 reference from the UAV reference, which were propagated to the production of the Sentinel-1 map. In addition, as Figure 9 indicates, the empirical threshold also influences the model's accuracy. Theoretically, the threshold should be zero, as any threshold larger than 0 will certainly result in the misdiagnosis of some infested regions as healthy. However, when the percentage of infested trees is extremely low, the associated pixel signal is very close to that of healthy pixels, leading to severe confusion between infested and healthy pixels in the classification model. We experimentally set the threshold at 0.25 after numerous tests, based on a trade-off between classification accuracy and infection detectability. A large discrepancy was found between the Kappa coefficient and the overall accuracy for the Radarsat-2-based predictions. This can be explained by the fact that one of the classes (i.e., the infestation class) accounts for the large majority of our data and was well described, whereas the confusion matrix (Table 4) shows that the healthy class was not very reliable. This has a large impact on the Kappa coefficient and explains the large difference. More recently, some scholars have questioned the widespread view that Kappa is more robust than the overall accuracy, arguing that the Kappa coefficient does not provide the useful information it is supposed to bring [48,49].
The fine quad-polarization Radarsat-2 image enables the derivation of various vegetation indices and polarimetric decomposition parameters by different physical or semi-physical decomposition algorithms, which greatly enriches the feature dimensionality in the machine-learning process. As Figure 6 suggests, the polarimetric decomposition parameters contribute heavily to the random forest algorithm. For the sake of model simplicity, we kept only three input variables in the Sentinel-1 random forest algorithm, as they provided sufficiently high accuracy for PWN infestation classification. Nevertheless, additional optical satellite remote-sensing data may enhance the spatial signature of the infestation signal because of the strong relationship between spectral reflectance and canopy chlorophyll content [50]. Further study will focus on integrating active SAR and passive optical remote sensing to explore the PWN infestation severity.
Notwithstanding the advantages of the proposed method in PWN infestation mapping, several technical factors still challenge its accuracy. One source of discrepancy can be attributed to co-registration errors between the UAV images and satellite imagery and between the two scales of satellite imagery [51]. The co-registration errors are mainly caused by uncertainties in satellite positioning and image geometric correction, since the UAV-equipped RTK GNSS has a positioning accuracy of 5 cm. In this study, we assumed the random forest algorithms achieve optimal accuracy in the case of perfect co-registration and determined the optimal algorithms by tuning the relative position between the two data sources. However, this process might involve some uncertainty induced by spatial heterogeneity, which needs further investigation. Another source of discrepancy might be the UAV-based infestation severity threshold (Figure 9). It should be noted that the predicted satellite-level infestation maps only contain infested and non-infested pixels and do not depict infestation severity levels. In practice, forest stands with low infestation severity generally produce a weak signal in satellite pixels, largely obstructing the accurate retrieval of infestation conditions. In our study, we empirically defined a threshold to separate the infested and non-infested classes. As Figure 9 suggests, a threshold of 0.25 was selected as a trade-off between the number of training samples and the validation accuracy, implying that the threshold might need adjustment given more UAV surveys or other study areas. Third, the soil moisture content is a critical variable affecting the backscattering coefficients [52,53], thus reducing the transferability of random forest algorithms trained over limited areas, which might be especially severe in sparse forest stands. Jointly using off-the-shelf satellite soil moisture products has the potential to alleviate this negative effect. Notwithstanding, the proposed two-level up-sampling method, through the integration of multi-scale remote-sensing data, offers a feasible alternative for mapping PWN infestation over both small and large areas.

6. Conclusions

This study investigated the integration of multi-scale remote sensing to monitor moderately and highly severe PWN infestation of pine forests over large areas. We trained a random forest algorithm to predict the PWN infestation from Sentinel-1 image data with training samples obtained by up-sampling UAV surveys and Radarsat-2 data. The UAV surveys offered reliable detection of infested trees under a range of environmental conditions, with an AP greater than 70% and an R2 approaching 1. Up-sampling the UAV infestation map to an intermediate scale enabled the production of the Radarsat-2 infestation map with an overall accuracy of 72.57%, greatly expanding the training samples for the Sentinel-1-scale machine-learning model. Note that the training samples were determined using an infestation severity threshold of 0.25 applied to the predicted Radarsat-2-based infestation map. The predicted Sentinel-1 infestation map was independently validated by both Radarsat-2 and UAV data, yielding overall accuracies of 87.63% and 76.30%, respectively. The PWN infestation was found to be concentrated in the continuous pine forests of the mountain regions in the study area. The pixels of shelter forests along the coast were found to be less infested due to the massive felling of infested trees, consistent with field surveys, which verified the reliability of the predicted infestation map. The transferable approach is therefore able to monitor PWN infestation over multiple scales, suiting different requirements and local authorities. The proposed method could be further improved to monitor moderate and high PWN infestation severity and to incorporate optical satellite remote-sensing images.

Author Contributions

Conceptualization, X.L. (Xiujuan Li) and Y.L.; methodology, X.L. (Xiujuan Li), Y.C. and L.L.; software, X.L. (Xiujuan Li) and T.T.; investigation, Y.S., T.H. and W.F.; data curation, X.L. (Xiujuan Li) and X.H.; writing—original draft preparation, X.L. (Xiujuan Li) and Y.L.; writing—review and editing, X.L. (Xiujuan Li), Y.L., P.H. and X.L. (Xiaoqi Lv); project administration, Y.L. and P.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant (Nos. 52064039, 61971246 and 41861056); in part by the Science and Technology Innovation Guidance Project of Inner Mongolia Autonomous Region (Nos. 2019GG138, 2019GG139 and 2020GG0073); in part by the Science and Technology Major Special Project of Inner Mongolia Autonomous Region (No. 2019ZD022); in part by the Natural Science Foundation of Inner Mongolia Autonomous Region (No. 2019MS04004); in part by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA19070102).

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank Huaguo Huang from Beijing Forestry University for his kind support with the field campaign and data processing. The authors also appreciate the valuable comments and suggestions of the anonymous reviewers.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hao, Z.; Huang, J.; Li, X.; Sun, H.; Fang, G. A multi-point aggregation trend of the outbreak of pine wilt disease in China over the past 20 years. For. Ecol. Manag. 2022, 505, 119890.
2. Yu, R.; Luo, Y.; Zhou, Q.; Zhang, X.; Wu, D.; Ren, L. A machine learning algorithm to detect pine wilt disease using UAV-based hyperspectral imagery and LiDAR data at the tree level. Int. J. Appl. Earth Obs. Geoinf. 2021, 101, 102363.
3. Meng, R.; Gao, R.; Zhao, F.; Huang, C.; Sun, R.; Lv, Z.; Huang, Z. Landsat-based monitoring of southern pine beetle infestation severity and severity change in a temperate mixed forest. Remote Sens. Environ. 2022, 269, 112847.
4. Dennison, P.E.; Brunelle, A.R.; Carter, V.A. Assessing canopy mortality during a mountain pine beetle outbreak using GeoEye-1 high spatial resolution satellite data. Remote Sens. Environ. 2010, 114, 2431–2435.
5. White, J.C.; Wulder, M.A.; Brooks, D.; Reich, R.; Wheate, R.D. Detection of red attack stage mountain pine beetle infestation with high spatial resolution satellite imagery. Remote Sens. Environ. 2005, 96, 340–351.
6. Coops, N.C.; Wulder, M.A.; White, J.C. Integrating remotely sensed and ancillary data sources to characterize a mountain pine beetle infestation. Remote Sens. Environ. 2006, 105, 83–97.
7. Li, L.; Mu, X.; Qi, J.; Pisek, J.; Roosjen, P.; Yan, G.; Huang, H.; Liu, S.; Baret, F. Characterizing reflectance anisotropy of background soil in open-canopy plantations using UAV-based multiangular images. ISPRS J. Photogramm. Remote Sens. 2021, 177, 263–278.
8. Berger, K.; Machwitz, M.; Kycko, M.; Kefauver, S.C.; Van Wittenberghe, S.; Gerhards, M.; Verrelst, J.; Atzberger, C.; van der Tol, C.; Damm, A.; et al. Multi-sensor spectral synergies for crop stress detection and monitoring in the optical domain: A review. Remote Sens. Environ. 2022, 280, 113198.
9. Lehmann, E.A.; Caccetta, P.; Lowell, K.; Mitchell, A.; Zhou, Z.S.; Held, A.; Milne, T.; Tapley, I. SAR and optical remote sensing: Assessment of complementarity and interoperability in the context of a large-scale operational forest monitoring system. Remote Sens. Environ. 2015, 156, 335–348.
10. Ballère, M.; Bouvet, A.; Mermoz, S.; Le Toan, T.; Koleck, T.; Bedeau, C.; André, M.; Forestier, E.; Frison, P.L.; Lardeux, C. SAR data for tropical forest disturbance alerts in French Guiana: Benefit over optical imagery. Remote Sens. Environ. 2021, 252, 112159.
11. Huo, L.; Persson, H.J.; Lindberg, E. Early detection of forest stress from European spruce bark beetle attack, and a new vegetation index: Normalized distance red & SWIR (NDRS). Remote Sens. Environ. 2021, 255, 112240.
12. Weiß, T.; Ramsauer, T.; Jagdhuber, T.; Löw, A.; Marzahn, P. Sentinel-1 Backscatter Analysis and Radiative Transfer Modeling of Dense Winter Wheat Time Series. Remote Sens. 2021, 13, 2320.
13. Ahmad, U.; Alvino, A.; Marino, S. A Review of Crop Water Stress Assessment Using Remote Sensing. Remote Sens. 2021, 13, 4155.
14. Tanase, M.A.; Kennedy, R.; Aponte, C. Radar Burn Ratio for fire severity estimation at canopy level: An example for temperate forests. Remote Sens. Environ. 2015, 170, 14–31.
15. Chen, Y.; Ma, L.; Yu, D.; Feng, K.; Wang, X.; Song, J. Improving Leaf Area Index Retrieval Using Multi-Sensor Images and Stacking Learning in Subtropical Forests of China. Remote Sens. 2021, 14, 148.
16. Melancon, A.M.; Molthan, A.L.; Griffin, R.E.; Mecikalski, J.R.; Schultz, L.A.; Bell, J.R. Random Forest Classification of Inundation Following Hurricane Florence (2018) via L-Band Synthetic Aperture Radar and Ancillary Datasets. Remote Sens. 2021, 13, 5098.
17. Fremout, T.; Cobián-De Vinatea, J.; Thomas, E.; Huaman-Zambrano, W.; Salazar-Villegas, M.; Limache-de la Fuente, D.; Bernardino, P.N.; Atkinson, R.; Csaplovics, E.; Muys, B. Site-specific scaling of remote sensing-based estimates of woody cover and aboveground biomass for mapping long-term tropical dry forest degradation status. Remote Sens. Environ. 2022, 276, 113040.
18. Huang, X.; Wang, J.; Shang, J.; Liao, C.; Liu, J. Application of polarization signature to land cover scattering mechanism analysis and classification using multi-temporal C-band polarimetric RADARSAT-2 imagery. Remote Sens. Environ. 2017, 193, 11–28.
19. Chauhan, S.; Darvishzadeh, R.; Boschetti, M.; Nelson, A. Discriminant analysis for lodging severity classification in wheat using RADARSAT-2 and Sentinel-1 data. ISPRS J. Photogramm. Remote Sens. 2020, 164, 138–151.
20. He, L.; Chen, W.; Leblanc, S.G.; Lovitt, J.; Arsenault, A.; Schmelzer, I.; Fraser, R.H.; Latifovic, R.; Sun, L.; Prévost, C.; et al. Integration of multi-scale remote sensing data for reindeer lichen fractional cover mapping in Eastern Canada. Remote Sens. Environ. 2021, 267, 112731.
21. Li, X.; Tong, T.; Luo, T.; Wang, J.; Rao, Y.; Li, L.; Jin, D.; Wu, D.; Huang, H. Retrieving the Infected Area of Pine Wilt Disease-Disturbed Pine Forests from Medium-Resolution Satellite Images Using the Stochastic Radiative Transfer Theory. Remote Sens. 2022, 14, 1526.
22. Li, L.; Chen, J.; Mu, X.; Li, W.; Yan, G.; Xie, D.; Zhang, W. Quantifying Understory and Overstory Vegetation Cover Using UAV-Based RGB Imagery in Forest Plantation. Remote Sens. 2020, 12, 298.
23. Li, L.; Mu, X.; Chianucci, F.; Qi, J.; Jiang, J.; Zhou, J.; Chen, L.; Huang, H.; Yan, G.; Liu, S. Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach. Int. J. Appl. Earth Obs. Geoinf. 2022, 107, 102686.
24. Ferreira, M.P.; de Almeida, D.R.A.; de Almeida Papa, D.; Minervino, J.B.S.; Veras, H.F.P.; Formighieri, A.; Santos, C.A.N.; Ferreira, M.A.D.; Figueiredo, E.O.; Ferreira, E.J.L. Individual tree detection and species classification of Amazonian palms using UAV images and deep learning. For. Ecol. Manag. 2020, 475, 118397.
25. Schiefer, F.; Kattenborn, T.; Frick, A.; Frey, J.; Schall, P.; Koch, B.; Schmidtlein, S. Mapping forest tree species in high resolution UAV-based RGB-imagery by means of convolutional neural networks. ISPRS J. Photogramm. Remote Sens. 2020, 170, 205–215.
26. Osco, L.P.; Marcato Junior, J.; Marques Ramos, A.P.; de Castro Jorge, L.A.; Fatholahi, S.N.; de Andrade Silva, J.; Matsubara, E.T.; Pistori, H.; Gonçalves, W.N.; Li, J. A review on deep learning in UAV remote sensing. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102456.
27. Nezami, S.; Khoramshahi, E.; Nevalainen, O.; Pölönen, I.; Honkavaara, E. Tree Species Classification of Drone Hyperspectral and RGB Imagery with Deep Learning Convolutional Neural Networks. Remote Sens. 2020, 12, 1070.
28. Hu, G.; Yin, C.; Wan, M.; Zhang, Y.; Fang, Y. Recognition of diseased Pinus trees in UAV images using deep learning and AdaBoost classifier. Biosyst. Eng. 2020, 194, 138–151.
29. Riihimäki, H.; Luoto, M.; Heiskanen, J. Estimating fractional cover of tundra vegetation at multiple scales using unmanned aerial systems and optical satellite data. Remote Sens. Environ. 2019, 224, 119–132.
30. Emilien, A.-V.; Thomas, C.; Thomas, H. UAV & satellite synergies for optical remote sensing applications: A literature review. Sci. Remote Sens. 2021, 3, 100019.
31. Mandal, D.; Kumar, V.; Ratha, D.; Lopez-Sanchez, J.M.; Bhattacharya, A.; McNairn, H.; Rao, Y.S.; Ramana, K.V. Assessment of rice growth conditions in a semi-arid region of India using the Generalized Radar Vegetation Index derived from RADARSAT-2 polarimetric SAR data. Remote Sens. Environ. 2020, 237, 111561.
32. Du, P.; Samat, A.; Waske, B.; Liu, S.; Li, Z. Random Forest and Rotation Forest for fully polarized SAR image classification using polarimetric and spatial features. ISPRS J. Photogramm. Remote Sens. 2015, 105, 38–53.
33. Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 26 July 2017; pp. 6517–6525.
34. Zhu, X.; Lyu, S.; Wang, X.; Zhao, Q. TPH-YOLOv5: Improved YOLOv5 Based on Transformer Prediction Head for Object Detection on Drone-captured Scenarios. Proc. IEEE Int. Conf. Comput. Vis. 2021, 2021, 2778–2788.
35. Martinuzzi, S.; Vierling, L.A.; Gould, W.A.; Falkowski, M.J.; Evans, J.S.; Hudak, A.T.; Vierling, K.T. Mapping snags and understory shrubs for a LiDAR-based assessment of wildlife habitat suitability. Remote Sens. Environ. 2009, 113, 2533–2546.
36. Kattenborn, T.; Lopatin, J.; Förster, M.; Braun, A.C.; Fassnacht, F.E. UAV data as alternative to field sampling to map woody invasive species based on combined Sentinel-1 and Sentinel-2 data. Remote Sens. Environ. 2019, 227, 61–73.
37. Ho, T.K. The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 832–844.
38. Shafizadeh-Moghadam, H. Fully component selection: An efficient combination of feature selection and principal component analysis to increase model performance. Expert Syst. Appl. 2021, 186, 115678.
39. Freeman, A.; Durden, S.L. A three-component scattering model for polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 1998, 36, 963–973.
40. Yamaguchi, Y.; Moriyama, T.; Ishido, M.; Yamada, H. Four-component scattering model for polarimetric SAR image decomposition. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1699–1706.
41. Cloude, S.R.; Pottier, E. A review of target decomposition theorems in radar polarimetry. IEEE Trans. Geosci. Remote Sens. 1996, 34, 498–518.
42. Touzi, R. Target scattering decomposition in terms of roll-invariant target parameters. IEEE Trans. Geosci. Remote Sens. 2007, 45, 73–84.
43. Van Zyl, J.J.; Arii, M.; Kim, Y. Model-based decomposition of polarimetric SAR covariance matrices constrained for nonnegative eigenvalues. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3452–3459.
44. Krogager, E.; Boerner, W.-M.; Madsen, S.N. Feature-motivated Sinclair matrix (sphere/diplane/helix) decomposition and its application to target sorting for land feature classification. In Wideband Interferometric Sensing and Imaging Polarimetry; SPIE: Washington, DC, USA, 1997; Volume 3120, pp. 144–154.
45. Puliti, S.; Astrup, R. Automatic detection of snow breakage at single tree level using YOLOv5 applied to UAV imagery. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102946.
46. Stow, D.A.; Hope, A.; McGuire, D.; Verbyla, D.; Gamon, J.; Huemmrich, F.; Houston, S.; Racine, C.; Sturm, M.; Tape, K.; et al. Remote sensing of vegetation and land-cover change in Arctic Tundra Ecosystems. Remote Sens. Environ. 2004, 89, 281–308.
47. Ye, S.; Rogan, J.; Zhu, Z.; Hawbaker, T.J.; Hart, S.J.; Andrus, R.A.; Meddens, A.J.H.; Hicke, J.A.; Eastman, J.R.; Kulakowski, D. Detecting subtle change from dense Landsat time series: Case studies of mountain pine beetle and spruce beetle disturbance. Remote Sens. Environ. 2021, 263, 112560.
48. Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good practices for estimating area and assessing accuracy of land change. Remote Sens. Environ. 2014, 148, 42–57.
49. Pontius, R.G.; Millones, M. Death to Kappa: Birth of quantity disagreement and allocation disagreement for accuracy assessment. Int. J. Remote Sens. 2011, 32, 4407–4429.
50. Johnson, B.A. Scale Issues Related to the Accuracy Assessment of Land Use/Land Cover Maps Produced Using Multi-Resolution Data: Comments on "The Improvement of Land Cover Classification by Thermal Remote Sensing". Remote Sens. 2015, 7, 13436–13439.
51. Guo, X.; Wang, M.; Jia, M.; Wang, W. Estimating mangrove leaf area index based on red-edge vegetation indices: A comparison among UAV, WorldView-2 and Sentinel-2 imagery. Int. J. Appl. Earth Obs. Geoinf. 2021, 103, 102493.
52. Millard, K.; Richardson, M. Quantifying the relative contributions of vegetation and soil moisture conditions to polarimetric C-Band SAR response in a temperate peatland. Remote Sens. Environ. 2018, 206, 123–138.
53. Zhao, D.; Arshad, M.; Wang, J.; Triantafilis, J. Soil exchangeable cations estimation using Vis-NIR spectroscopy in different depths: Effects of multiple calibration models and spiking. Comput. Electron. Agric. 2021, 182, 105990.
Figure 1. Overview of the study area. (a) Study area with a background image of tree height; (b) Radarsat-2 image and the 17 plots of the field campaign; (c) perspective view of a PWN-infested forest plot shown in a UAV RGB image.
Figure 2. Workflow for predicting the regional PWN infestation map using UAV RGB images, Radarsat-2 imagery, and Sentinel-1 imagery.
Figure 3. Deep-learning-based infested-tree detection and Radarsat-2-scale infestation map production for an example forest plot. (a) Detection of infested trees from UAV images; (b) footprints of Radarsat-2 pixels overlaid on UAV images; (c) Radarsat-2-scale infestation severity map; (d) Radarsat-2-scale infestation map with infestation severity greater than 0.25.
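Figure 3 condenses the first up-sampling step into a simple aggregation: detected crowns are binned into Radarsat-2 pixel footprints, and each pixel's severity is taken here as the infested fraction of its detected crowns. A minimal Python sketch of that step under this assumed severity definition (not the authors' code; infested_xy and all_xy are hypothetical arrays of crown centroids in map coordinates):

import numpy as np

def infestation_severity(infested_xy, all_xy, x_edges, y_edges, threshold=0.25):
    """Bin crown centroids into a pixel grid; severity = infested fraction."""
    infested, _, _ = np.histogram2d(infested_xy[:, 0], infested_xy[:, 1],
                                    bins=[x_edges, y_edges])
    total, _, _ = np.histogram2d(all_xy[:, 0], all_xy[:, 1],
                                 bins=[x_edges, y_edges])
    severity = np.divide(infested, total,
                         out=np.zeros_like(infested), where=total > 0)
    # severity corresponds to panel (c); the 0.25 mask to panel (d)
    return severity, severity > threshold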
Figure 4. Assessment of deep-learning (DL)-based infested-tree detection. (a) Accuracy metrics; (b) comparison of infested-tree counts between DL predictions and ground truth.
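The comparison in Figure 4b reduces to a per-plot agreement statistic between predicted and observed counts. A short sketch with invented counts, computing R² against the 1:1 line (the reported R² of 0.94 may instead come from an ordinary least-squares fit):

import numpy as np

predicted = np.array([12, 30, 7, 45, 22, 18])  # hypothetical YOLOv5 counts per plot
observed = np.array([11, 33, 8, 44, 25, 17])   # hypothetical field counts

ss_res = np.sum((observed - predicted) ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.2f}")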
Figure 5. Predicted PWN infestation map from the Radarsat-2 image using a random forest algorithm.
Figure 6. Importance of the input variables of the random forest algorithm used for predicting PWN infestation from the Radarsat-2 image.
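The rankings in Figures 6 and 8 are impurity-based variable importances, the standard by-product of a random forest. A sketch of how such a ranking can be produced with scikit-learn; the predictor matrix, labels, and feature names below are placeholders rather than the study's data:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["Span", "RVI", "RFDI", "Freeman_vol", "Yamaguchi_dbl"]
X = rng.normal(size=(500, len(feature_names)))  # stand-in predictors
y = rng.integers(0, 2, size=500)                # stand-in infested/healthy labels

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
for name, imp in sorted(zip(feature_names, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")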
Figure 7. Predicted PWN infestation map from the Sentinel-1 image using a random forest algorithm. The background image is a map of tree height.
Figure 8. Importance of the input variables of the random forest algorithm used for predicting PWN infestation from the Sentinel-1 image.
Figure 9. Validation of the predicted Sentinel-1 infestation map against reference UAV-based infestation maps obtained under different infestation severity thresholds.
Table 1. Overview of the multi-scale data used in this study.

Data Source | Wavelength | Acquisition Time | Resolution | Data Coverage
UAV | RGB | 15–20 August 2021 | ~5 cm | 17 plots
Radarsat-2 | C-band | 5 September 2021 | ~8 m | Part of Muping District
Sentinel-1 | C-band | 15 September 2021 | 30 m (resampled) | Entire Muping District
Table 2. Summary of the input variables of the random forest (RF) model used in modeling PWN infestation from the Radarsat-2 image. The variables comprise backscattering-derived parameters and polarimetric decomposition parameters.

Method | Input Variables | Reference
Backscattering-derived parameters:
Indices | Span, PH, RVI, RFDI, CSI, VSI | [19]
Polarimetric decomposition parameters:
Freeman–Durden | Freeman_dbl ¹, Freeman_surf ², Freeman_vol ³ | [39]
Yamaguchi | Yamaguchi_dbl, Yamaguchi_surf, Yamaguchi_vol, Yamaguchi_hlx | [40]
Cloude | Cloude_dbl, Cloude_surf, Cloude_vol | [41]
Touzi | Touzi_alpha, Touzi_phi, Touzi_psi, Touzi_tau | [42]
Van Zyl | VanZyl_dbl, VanZyl_surf, VanZyl_vol_g | [43]
H/α/A | alpha, anisotropy, entropy | [41]
Sinclair | Sinclair_1, Sinclair_2, Sinclair_3 | [44]
Pauli | Pauli_1, Pauli_2, Pauli_3 | [41]
Note: ¹ dbl refers to double-bounce scattering; ² surf refers to direct surface scattering; ³ vol refers to volume scattering.
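As a concrete example of the decompositions listed above, the Pauli decomposition [41] projects the complex scattering matrix onto odd-bounce, even-bounce, and cross-polarized channels. A minimal sketch under the usual reciprocity assumption (S_HV = S_VH); inputs are complex backscatter arrays:

import numpy as np

def pauli(s_hh, s_hv, s_vv):
    """Return the three Pauli intensities (Pauli_1, Pauli_2, Pauli_3)."""
    p1 = 0.5 * np.abs(s_hh + s_vv) ** 2  # odd-bounce (surface-like)
    p2 = 0.5 * np.abs(s_hh - s_vv) ** 2  # even-bounce (double-bounce-like)
    p3 = 2.0 * np.abs(s_hv) ** 2         # cross-pol (volume-like)
    return p1, p2, p3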
Table 3. Overview of the input variables of the random forest (RF) model used in modeling PWN infestation from the Sentinel-1 image.

Input Variables | Formula
Backscattering coefficient | σ_VH, σ_VV
Normalized difference polarimetric ratio (NDPR) | (σ_VV − σ_VH) / (σ_VV + σ_VH)
Polarimetric ratio (PR) | σ_VV / σ_VH
Radar vegetation index (RVI) | 4 / (10^((σ_VV − σ_VH)/10) + 1)
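A sketch of computing the Table 3 indices from VV/VH backscatter arrays. The 10^((σ_VV − σ_VH)/10) term in the reconstructed RVI converts a dB difference back to a linear power ratio, so the sketch assumes dB inputs for RVI; the table does not state whether NDPR and PR use linear or dB values, so those follow the formulas literally:

import numpy as np

def sentinel1_indices(vv_db, vh_db):
    """Table 3 indices; vv_db and vh_db are backscatter arrays in dB."""
    ndpr = (vv_db - vh_db) / (vv_db + vh_db)  # as written in Table 3
    pr = vv_db / vh_db
    # 10**((VV - VH)/10) recovers the linear VV/VH power ratio from dB,
    # so rvi equals 4*sigma_VH / (sigma_VV + sigma_VH) in linear units.
    rvi = 4.0 / (10.0 ** ((vv_db - vh_db) / 10.0) + 1.0)
    return ndpr, pr, rvi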
Table 4. Confusion matrices and accuracy indices of the random forest algorithms corresponding to the Radarsat-2 and Sentinel-1 data.

Random Forest | Radarsat-2: Infested | Radarsat-2: Healthy | Sentinel-1: Infested | Sentinel-1: Healthy
Infested | 80.67% | 37.50% | 83.07% | 12.37%
Healthy | 19.33% | 62.50% | 16.93% | 87.63%
Overall accuracy | 72.57% (Radarsat-2) | 85.04% (Sentinel-1)
Kappa coefficient | 0.44 (Radarsat-2) | 0.70 (Sentinel-1)
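Overall accuracy and the Kappa coefficient in Table 4 follow the standard confusion-matrix definitions. A sketch with hypothetical raw counts: Table 4 reports column percentages, so the counts below assume 1000 reference pixels per class, which approximately reproduces the Radarsat-2 figures:

import numpy as np

cm = np.array([[807, 375],    # predicted infested: (ref. infested, ref. healthy)
               [193, 625]])   # predicted healthy
n = cm.sum()
po = np.trace(cm) / n                                 # observed agreement (OA)
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
kappa = (po - pe) / (1 - pe)
print(f"OA = {po:.4f}, kappa = {kappa:.4f}")  # OA = 0.7160, kappa = 0.4320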
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.