Article

Identification of Pine Wilt-Diseased Trees Using UAV Remote Sensing Imagery and Improved PWD-YOLOv8n Algorithm

Jianyi Su, Bingxi Qin, Fenggang Sun, Peng Lan and Guolin Liu
1 College of Geodesy and Geomatics, Shandong University of Science and Technology, Qingdao 266590, China
2 College of Information Science and Engineering, Shandong Agricultural University, Tai’an 271018, China
* Author to whom correspondence should be addressed.
Drones 2024, 8(8), 404; https://doi.org/10.3390/drones8080404
Submission received: 24 July 2024 / Revised: 15 August 2024 / Accepted: 16 August 2024 / Published: 18 August 2024

Abstract

Pine wilt disease (PWD) is one of the most destructive diseases for pine trees, causing severe damage to ecological resources. The identification of PWD-infected trees is an effective approach for disease control. However, the effects of complex environments and the multi-scale features of PWD trees hinder detection performance. To address these issues, this study proposes a detection model, PWD-YOLOv8n, based on aerial images. In particular, the coordinate attention (CA) and convolutional block attention module (CBAM) mechanisms are combined with YOLOv8 to enhance feature extraction. The bidirectional feature pyramid network (BiFPN) structure is used to strengthen feature fusion and recognition capability for small-scale diseased trees. Meanwhile, the lightweight FasterBlock structure and efficient multi-scale attention (EMA) mechanism are employed to optimize the C2f module. In addition, the Inner-SIoU loss function is introduced to improve model accuracy and reduce the missing rate at no additional cost. Experiments showed that the proposed PWD-YOLOv8n algorithm outperformed conventional target-detection models on the validation set (mAP@0.5 = 94.3%, precision = 87.9%, recall = 87.0%, missing rate = 6.6%; model size = 4.8 MB). The proposed PWD-YOLOv8n model therefore demonstrates significant superiority in diseased-tree detection: it not only enhances detection efficiency and accuracy but also provides important technical support for forest disease control and prevention.

1. Introduction

Pine wilt disease (PWD), caused by the pine wood nematode, has become one of the major global threats to forest ecosystems due to its rapid spread and high mortality rates [1,2]. This disease is sensitive to temperature variations, and it has spread northward with the trend of global warming [3,4]. Therefore, scientific and effective control strategies should be adopted urgently to prevent its further spread and reduce ecological damage.
In general, PWD detection mainly includes manual field surveying and remote sensing recognition [5,6]. The manual approach is time consuming and labor intensive, and it can hardly achieve rapid and large-scale identification [7]. Recently, methods based on satellites and unmanned aerial vehicles (UAVs) have been widely applied in PWD detection [8,9,10]. Zhang et al. [5] proposed a PWD identification model based on PlanetScope satellite data by combining spectral and spatiotemporal features. This model can reduce false detection and effectively improve detection accuracy in complex landscapes. Wang et al. [11] combined Gaofen-2 satellite images with a semi-supervised semantic segmentation model, significantly improving detection accuracy and achieving large-scale monitoring. Zhou et al. [12] proposed an improved LeNet neural network-based detection method by utilizing Beijing 2 satellite images. In this method, the full connection layers and activation functions of the model are optimized to realize efficient health-status monitoring for pine trees. Liu et al. [13] formulated a feature classification rule by analyzing feature differences between complex backgrounds and diseased trees, which can enhance recognition accuracy in satellite images. However, satellite-based detection methods may be unable to identify individual diseased trees accurately due to the complex background and small-scale features of satellite images [14,15].
To address these limitations, UAVs have emerged as promising alternatives. UAVs can provide high resolution and flexibility in data acquisition, allowing for the precise and efficient monitoring of PWD at the individual tree level [16]. By utilizing aerial images, Zhang et al. [17] integrated various attention modules into the YOLOv5 network and verified that the coordinate attention (CA) module exhibited the best performance for PWD identification in terms of detection accuracy and speed. Zhang et al. [18] proposed a model based on DeepLabV3+ by incorporating encoding and decoding structures, mitigating the effect of complex environmental conditions, such as varying lighting and ground cover colors. Ye et al. [19] designed an improved YOLOv5s algorithm by introducing a lightweight network structure and attention modules. This algorithm can achieve balance between lightweight design and accuracy, realizing real-time and efficient health monitoring for pine trees. Liu et al. [20] enhanced the efficiency and accuracy of the Clusterformer segmentation model by improving the encoder–decoder structure. Wang et al. [21] proposed a PWD detection algorithm based on an improved YOLOv8 model, significantly improving the detection accuracy of small-target diseased trees by introducing small-object detection layers and attention mechanism modules. Yu et al. [16] compared the recognition performance of different models on small targets of early diseased trees and considered the influence of background elements, such as broad-leaved trees. Du et al. [22] improved the YOLOv5 model by enhancing multi-scale feature fusion, improving the detection performance of small-scale diseased trees and thus reducing the missing rate. This model provides technical support for the precise location detection and management of diseased trees. Xia et al. [23] compared the effectiveness of different image segmentation techniques on UAV images. They found that the DeepLabV3+ model exhibited higher accuracy in recognizing and evaluating PWD trees. Although current studies have achieved significant progress in PWD identification, challenges such as low accuracy, high complexity, and insufficient generalization capability remain, hindering the effectiveness of these models in practical applications.
To address these issues, this study proposes an improved YOLOv8n-based model for PWD detection, called PWD-YOLOv8n. In particular, the convolutional block attention module (CBAM) and CA mechanisms are added to the backbone of the YOLOv8n network to highlight target features and enhance feature extraction, enabling the model to adapt to complex environments and provide improved attention to PWD targets. To enhance the detection capability of small-scale PWD trees, a bidirectional feature pyramid network (BiFPN) structure is employed in the neck component of the network. This structure can effectively transfer information across different levels and provide more attention to local details and global contextual information, enhancing recognition ability for multi-scale diseased trees. To accelerate detection, an improved C2f-Faster-EMA module is utilized to make the model lightweight with enhanced detection accuracy. Finally, the Inner-SIoU loss function replaces the CIoU loss function to address the insensitivity to small-scale objects and further reduce the missing rate.

2. Materials and Methods

2.1. Study Area Selection

The study areas are located in the Taishan forest region (36°15′14″ N, 117°01′27″ E) in Tai’an and the Laoshan forest region (36°09′23″ N, 120°36′58″ E) in Qingdao, Shandong Province, China. Tai’an experiences a warm temperate monsoon climate with hot, humid summers and cold, dry winters; Huashan pine and Masson pine are the primary tree species there. Qingdao has a temperate monsoon maritime climate, with cool, comfortable summers and mild, moist winters; the Laoshan forest region is dominated by Chinese red pine and black pine. The study areas and examples of PWD trees are shown in Figure 1.

2.2. Data Acquisition and Preprocessing

In this study, images of PWD trees were captured via DJI M300 RTK UAVs equipped with Zenmuse H20 visible-light cameras. The images were collected in the two aforementioned forest regions via UAVs in May 2023 (Tai’an) and July 2023 (Qingdao). The detailed flight parameters are provided in Table 1. We collected a total of 3156 images by using UAVs in the two regions, and the size of the captured images was 5184 × 3888 pixels.
Given the multi-scale features of UAV images, direct training may lead to feature information loss with network deepening, affecting overall model performance. Therefore, original images are typically cropped into several images of 1280 × 1280 pixels to increase the target scale of diseased trees. To avoid diseased trees being cut off during cropping, an overlapping area is set between adjacent sub-images, with an overlap size of 15%. If the size of the remaining image is less than 1280 × 1280 pixels, then it will be filled with black. The cropped images are shown in Figure 2a.
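For illustration, this tiling step can be sketched in a few lines of Python (an assumed helper, not the authors' released code); it uses a 1280-pixel tile with a 15% overlap and relies on PIL's crop, which black-fills any region extending past the image border, matching the padding described above.

```python
from PIL import Image

TILE = 1280
STRIDE = int(TILE * (1 - 0.15))  # 15% overlap -> 1088-px step between tiles

def tile_image(path):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    tiles = []
    for top in range(0, h, STRIDE):
        for left in range(0, w, STRIDE):
            # crop() black-fills any region that extends past the border,
            # which matches the paper's padding of undersized edge tiles
            tiles.append(img.crop((left, top, left + TILE, top + TILE)))
    return tiles
```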
Then, PWD trees are annotated using the labelImg tool to generate YOLO-format dataset files that correspond to the image names, which contain the category and position information of the targets in TXT format. The identification and confirmation of PWD trees in the UAV-captured images primarily rely on distinctive visual characteristics, including significant color changes in the foliage (typically a shift from green to yellow or brown) and changes in the surrounding environment, such as thinning of the canopy due to needle drop. Based on the aforementioned visual characteristics, and disregarding images without PWD trees, 894 images that contain diseased trees are obtained. The dataset is divided into training, validation, and test sets in accordance with a ratio of 6:3:1, resulting in 572, 246, and 76 images, respectively, for each set. To improve generalization and robustness, image augmentation is performed on the divided training and validation datasets, including horizontal flipping, vertical flipping, random rotation, and brightness and saturation changes. Some examples of the enhancement effects are presented in Figure 2b. Consequently, the number of images in the training and validation sets is increased to 2774 and 1184, respectively. Meanwhile, the number of images in the test set remains unchanged. As shown in Figure 3, the lower left corner exhibits a noticeable clustering of points, indicating the presence of a large number of small-target PWD trees in the dataset. The number of images before and after augmentation is provided in Table 2.
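The augmentation operations listed above could be scripted, for example, with the albumentations library (an assumption; the paper does not name its tooling). The sketch below applies flips, random rotation, and brightness/saturation jitter while keeping the YOLO-format boxes aligned with the transformed pixels:

```python
import albumentations as A

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.Rotate(limit=90, p=0.5),                     # random rotation
        A.ColorJitter(brightness=0.2, saturation=0.2,  # brightness/saturation
                      contrast=0.0, hue=0.0, p=0.5),
    ],
    # YOLO-format boxes are transformed together with the image
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

# usage: out = transform(image=img, bboxes=boxes, class_labels=labels)
```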

2.3. Improved YOLOv8n-Based Detection Model

YOLOv8 [24] is a one-stage object-detection model that includes five versions (n/s/m/l/x), with differences in network depth and width. YOLOv8n, the fastest and most lightweight model, exhibits significant advantages over the others when detecting a single class of PWD trees. It can balance detection accuracy, speed, and model parameters and enhance PWD detection efficiency. Therefore, we adopt YOLOv8n as the base model.
YOLOv8n consists of three major components. The backbone is responsible for extracting essential features from input images. The neck uses a feature pyramid to improve detection at different scales. The head is responsible for bounding box classification and prediction. The network structure of YOLOv8n is shown in Figure 4a.
Given the effect of complex environments (e.g., complex forest terrains, multi-scale PWD trees, severe tree occlusion, and strong background interference), YOLOv8n typically suffers from limited feature extraction capability, leading to insensitivity to small PWD trees. Consequently, the model yields low detection accuracy and fails to achieve the expected detection effect. To address these issues, we propose an improved YOLOv8n-based model, called PWD-YOLOv8n. The structure of the PWD-YOLOv8n network is illustrated in Figure 4b. The following steps are adopted to improve detection performance. (1) The CBAM and CA mechanisms are added to the backbone to enhance the model’s ability to capture the complex textures and structural information of diseased trees. (2) In the neck, a multi-scale detection structure and the BiFPN are used to enhance representation capability, achieving multi-scale feature fusion and improving detection accuracy for small-target diseased trees. (3) In the overall network structure, the Bottleneck in the C2f module is replaced with FasterBlock from FasterNet, and the EMA attention mechanism is incorporated into FasterBlock, lightening the model with improved accuracy. (4) The Inner-SIoU is used to replace CIoU to optimize bounding box loss and enhance the ability to locate small-scale PWD trees.

2.3.1. Enhancing Feature Extraction

In PWD detection, complex background conditions frequently hinder the ability to extract features and reduce recognition accuracy. To distinguish PWD trees from the background, the YOLOv8n model relies heavily on local information extracted from convolutional feature maps. However, when the detection model focuses on local information, it may overlook global contextual cues, leading to potential misidentification. Consequently, the incorporation of an attention mechanism into the network can enhance the model’s focus on PWD features and mitigate the disruptive effects of background information.
The CBAM [25], which comprises the channel attention mechanism (CAM) and the spatial attention mechanism (SAM), can enhance the model’s focus on feature map information in the channel and spatial dimensions. The structure is shown in Figure 5.
In the CAM, by learning the information of each channel in the feature map, the network model can capture the correlation between channels. In this module, the input feature map F is processed through average and max pooling operations, generating two feature maps with a height and width of 1 and channel number C. Then, the feature map is inputted into a double-layer shared neural network, i.e., a multilayer perceptron (MLP), for mapping processing. Finally, the output features of the MLP are summed and fed into the sigmoid activation function to obtain the channel attention feature map Mc, i.e.,
$$M_c(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\big)$$
where σ denotes the sigmoid activation function and AvgPool and MaxPool denote average pooling and max pooling, respectively.
In the SAM, learning information from different spatial positions within the feature map helps to focus on capturing the correlation among various locations. First, the feature map F’, which is derived from the CAM, is processed separately through average and max pooling. This process yields two feature maps, each with a height of H, a width of W, and a single channel. Then, the maps are concatenated along the channel dimension. Subsequently, the spliced feature map is passed through a 7 × 7 convolutional layer, which reduces the channel number to one. Finally, the spatial attention feature map Ms is obtained using a sigmoid activation function, i.e.,
$$M_s(F') = \sigma\big(f^{7\times 7}\big([\mathrm{AvgPool}(F');\ \mathrm{MaxPool}(F')]\big)\big)$$
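A compact PyTorch sketch of the CBAM as described above follows; the reduction ratio r = 16 and the 7 × 7 spatial kernel are the defaults of the original CBAM paper [25], not values stated here.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, c, r=16):
        super().__init__()
        self.mlp = nn.Sequential(                  # shared two-layer MLP
            nn.Conv2d(c, c // r, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(c // r, c, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)             # M_c: (B, C, 1, 1)

class SpatialAttention(nn.Module):
    def __init__(self, k=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, k, padding=k // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)   # (B, 1, H, W)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))  # M_s

class CBAM(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.ca, self.sa = ChannelAttention(c), SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)      # refine channels first (yields F')
        return x * self.sa(x)   # then refine spatial locations
```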
Furthermore, the CA [26] mechanism is adopted after the CBAM to incorporate spatial attention while introducing channel attention, enabling the precise focus on and localization of relevant features of interest, as illustrated in Figure 6. The steps of CA are as follows: ① Global average pooling is applied to the input feature map F in the width and height dimensions to obtain two corresponding feature maps. ② The two feature maps are concatenated to form a single feature map with a global receptive field. ③ A convolution module with a shared 1 × 1 convolution is used to process the concatenated feature map, reducing the number of channels from C to C/r. ④ Batch normalization is performed, and the sigmoid activation function is applied to obtain a feature map f with dimensions 1 × (W + H) × C/r. ⑤ Then, 1 × 1 convolutions are applied to f along the width and height separately, resulting in feature maps Fh and Fw, each with C channels. ⑥ The sigmoid activation function is used to obtain the attention weights gh and gw for the feature maps Fh and Fw, respectively. ⑦ Multiplication-weighted processing is performed on F by using gh and gw to obtain the final feature map with attention weights along the width and height dimensions, i.e.,
$$y_c(i, j) = x_c(i, j) \times g_c^h(i) \times g_c^w(j)$$
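The seven steps above can be condensed into the following PyTorch sketch (the reduction ratio r = 32 is an assumed default, and the activations follow the description in the text):

```python
import torch
import torch.nn as nn

class CoordAtt(nn.Module):
    def __init__(self, c, r=32):
        super().__init__()
        mid = max(8, c // r)
        self.conv1 = nn.Conv2d(c, mid, 1)    # shared 1x1 conv: C -> C/r
        self.bn = nn.BatchNorm2d(mid)
        self.conv_h = nn.Conv2d(mid, c, 1)   # restore C along the height path
        self.conv_w = nn.Conv2d(mid, c, 1)   # restore C along the width path

    def forward(self, x):
        b, c, h, w = x.shape
        # steps 1-2: directional global average pooling, then concatenation
        x_h = x.mean(dim=3, keepdim=True)                       # (B, C, H, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (B, C, W, 1)
        # steps 3-4: shared 1x1 conv, batch norm, sigmoid -> feature map f
        f = torch.sigmoid(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        # steps 5-6: split back and compute attention weights g_h, g_w
        f_h, f_w = torch.split(f, [h, w], dim=2)
        g_h = torch.sigmoid(self.conv_h(f_h))                      # (B, C, H, 1)
        g_w = torch.sigmoid(self.conv_w(f_w.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * g_h * g_w    # step 7: reweight the input feature map
```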

2.3.2. Enhancing Feature Fusion

The attention mechanisms in the backbone can effectively enhance its feature extraction capability. However, small-scale targets may be overlooked as the network deepens. Because the collected aerial images contain PWD trees at various scales, the features of small-scale trees may not be effectively extracted and fused, compromising recognition accuracy and generalization ability.
The BiFPN is used to replace the path aggregation network (PAN) structure to enhance the fusion of semantic and texture features across different layers of the model. The BiFPN, a multi-scale feature fusion method employed in the EfficientDet model [27], accomplishes increased feature fusion by eliminating nodes with a single input edge and introducing skip connections between input and output nodes of the same scale. This method achieves extensive feature fusion with minimal additional parameters, improving feature expression capability, as depicted in Figure 7.
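At the core of the BiFPN is fast normalized fusion, in which each node combines its input feature maps with learnable non-negative weights. A minimal sketch (the epsilon value follows the EfficientDet paper; the module itself is an illustrative reconstruction):

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Fast normalized fusion: O = sum(w_i * I_i) / (eps + sum(w_j))."""
    def __init__(self, n_inputs, eps=1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))
        self.eps = eps

    def forward(self, feats):                # feats: list of same-shape maps
        w = torch.relu(self.w)               # keep fusion weights non-negative
        w = w / (w.sum() + self.eps)         # normalize without softmax
        return sum(wi * f for wi, f in zip(w, feats))
```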

2.3.3. Lightweighting Model Networks

The BiFPN structure can effectively enhance recognition capability for small targets, but the model still suffers from relatively high complexity and computational burden. The FasterBlock structure from FasterNet [28] is therefore used to lighten the C2f module and enhance recognition efficiency.
The C2f module in YOLOv8 contains a large number of Bottleneck structures, leading to redundant channel information and consequently affecting inference speed. FasterBlock introduces a novel convolution method called partial convolution (PConv). PConv exploits the redundancy in a feature map by applying regular convolution to only a subset of the input channels while keeping the remaining channels unchanged, enabling the model to better utilize computational resources. The FasterBlock structure is illustrated in Figure 8. The floating-point operations (FLOPs) of a regular Conv and of PConv are as follows:
$$F = h \times w \times k^2 \times c^2$$
$$F_p = h \times w \times k^2 \times c_p^2$$
$$r = \frac{c_p}{c}$$
where k denotes the size of the convolution kernel; c, h, and w denote the number of channels, height, and width of a feature map, respectively; c_p is the number of channels to which regular convolution is applied in PConv; and r is the reduction factor.
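The idea behind PConv can be expressed in a few lines of PyTorch (the partial ratio r = 1/4 is the FasterNet default, assumed here):

```python
import torch
import torch.nn as nn

class PConv(nn.Module):
    """Partial convolution: only the first c_p channels pass through a
    regular k x k conv; the remaining channels are copied through untouched."""
    def __init__(self, c, ratio=0.25, k=3):
        super().__init__()
        self.cp = int(c * ratio)             # channels that get convolved
        self.conv = nn.Conv2d(self.cp, self.cp, k, padding=k // 2, bias=False)

    def forward(self, x):
        x1, x2 = torch.split(x, [self.cp, x.size(1) - self.cp], dim=1)
        return torch.cat([self.conv(x1), x2], dim=1)
```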
The C2f-Faster module can reduce model complexity and shorten detection time, but at the cost of decreased detection accuracy. Therefore, the EMA [29] mechanism is introduced into FasterBlock to reduce the missing rate while improving recognition accuracy. In contrast with conventional attention modules, the EMA module does not use generic convolutions to reduce channel dimension. Instead, it integrates the output features of two sub-networks through cross-spatial learning, making the FasterBlock module considerably more efficient in terms of parameter quantity and performance. The structures of the EMA and FasterBlock-EMA modules are shown in Figure 9.
The forward propagation process of the EMA module is as follows: First, the input features are grouped and transformed to divide the channel dimension into two 1 × 1 convolution branches and one 3 × 3 convolution branch. In the 1 × 1 convolution branches, each group of features undergoes average pooling and is then concatenated along the channel dimension. Subsequently, a 1 × 1 convolution generates attention weights in the height and width directions, which are processed by the sigmoid function and applied to the original feature map through the dot product. After processing, the spatial attention map is generated via further average pooling, Softmax, and dot product. In the 3 × 3 convolution branch, the feature space is expanded by a 3 × 3 convolution, followed by average pooling, Softmax, and dot product to generate a second spatial attention map. Finally, the two spatial attention maps are summed, and spatial learning-based fusion is accomplished via the successive processing of the sigmoid function and dot product to optimize model performance.
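A PyTorch sketch of the EMA module following the forward pass described above is shown below; the grouping factor of 8 follows the reference implementation of [29], and the code should be read as an illustrative reconstruction rather than the authors' exact module.

```python
import torch
import torch.nn as nn

class EMA(nn.Module):
    def __init__(self, channels, factor=8):
        super().__init__()
        self.g = factor
        cg = channels // self.g                       # channels per group
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1)) # pool along width
        self.pool_w = nn.AdaptiveAvgPool2d((1, None)) # pool along height
        self.agp = nn.AdaptiveAvgPool2d((1, 1))
        self.conv1x1 = nn.Conv2d(cg, cg, 1)
        self.conv3x3 = nn.Conv2d(cg, cg, 3, padding=1)
        self.gn = nn.GroupNorm(cg, cg)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x):
        b, c, h, w = x.shape
        cg = c // self.g
        gx = x.reshape(b * self.g, cg, h, w)          # group the channels
        # 1x1 branch: directional pooling -> shared 1x1 conv -> sigmoid gates
        x_h = self.pool_h(gx)
        x_w = self.pool_w(gx).permute(0, 1, 3, 2)
        hw = self.conv1x1(torch.cat([x_h, x_w], dim=2))
        x_h, x_w = torch.split(hw, [h, w], dim=2)
        x1 = self.gn(gx * x_h.sigmoid() * x_w.permute(0, 1, 3, 2).sigmoid())
        # 3x3 branch expands the local receptive field
        x2 = self.conv3x3(gx)
        # cross-spatial learning: each branch's pooled softmax weights the other
        a1 = self.softmax(self.agp(x1).reshape(b * self.g, cg, 1).permute(0, 2, 1))
        a2 = self.softmax(self.agp(x2).reshape(b * self.g, cg, 1).permute(0, 2, 1))
        weights = (a1 @ x2.reshape(b * self.g, cg, -1) +
                   a2 @ x1.reshape(b * self.g, cg, -1)).reshape(b * self.g, 1, h, w)
        return (gx * weights.sigmoid()).reshape(b, c, h, w)
```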

2.3.4. Improving Loss Function

Given the varied sizes of PWD trees, the CIoU bounding box loss function in the YOLOv8 model does not consider the orientation between the predicted and ground truth boxes, and thus, it exhibits relatively weak generalization ability and recognition performance for small targets. Therefore, this study adopts the Inner-SIoU loss function [30], as shown in Figure 10.
The Inner-SIoU loss function is represented as follows:
$$\mathrm{Loss}_{\text{Inner-SIoU}} = \mathrm{Loss}_{\mathrm{SIoU}} + \mathrm{IoU} - \mathrm{IoU}_{\mathrm{inner}} = 1 - \mathrm{IoU}_{\mathrm{inner}} + \frac{\Delta + \Omega}{2}$$
$$\mathrm{Loss}_{\mathrm{SIoU}} = 1 - \mathrm{IoU} + \frac{\Delta + \Omega}{2}$$
$$b_l^{gt} = x_c^{gt} - \frac{w^{gt} \cdot ratio}{2}, \qquad b_r^{gt} = x_c^{gt} + \frac{w^{gt} \cdot ratio}{2}$$
$$b_t^{gt} = y_c^{gt} - \frac{h^{gt} \cdot ratio}{2}, \qquad b_b^{gt} = y_c^{gt} + \frac{h^{gt} \cdot ratio}{2}$$
$$b_l = x_c - \frac{w \cdot ratio}{2}, \qquad b_r = x_c + \frac{w \cdot ratio}{2}$$
$$b_t = y_c - \frac{h \cdot ratio}{2}, \qquad b_b = y_c + \frac{h \cdot ratio}{2}$$
$$inter = \big(\min(b_r^{gt}, b_r) - \max(b_l^{gt}, b_l)\big) \times \big(\min(b_b^{gt}, b_b) - \max(b_t^{gt}, b_t)\big)$$
$$union = w^{gt} h^{gt} \cdot ratio^2 + w h \cdot ratio^2 - inter$$
$$\mathrm{IoU}_{\mathrm{inner}} = \frac{inter}{union}$$
where $b^{gt}$ and $b$ denote the target box and anchor box, respectively; $(x_c^{gt}, y_c^{gt})$ and $(x_c, y_c)$ represent the center coordinates of the target box and anchor box, respectively; and $(w^{gt}, h^{gt})$ and $(w, h)$ indicate the width and height of the target box and anchor box, respectively.
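The equations above translate directly into code. The sketch below computes IoU_inner for center-format boxes; the clamp guards against non-overlapping boxes, which the equations leave implicit.

```python
import torch

def inner_iou(gt_box, pr_box, ratio=1.35, eps=1e-7):
    """gt_box, pr_box: (..., 4) tensors in (x_c, y_c, w, h) format."""
    xg, yg, wg, hg = gt_box.unbind(-1)
    xp, yp, wp, hp = pr_box.unbind(-1)
    # corners of the ratio-scaled auxiliary boxes
    gl, gr = xg - wg * ratio / 2, xg + wg * ratio / 2
    g_t, g_b = yg - hg * ratio / 2, yg + hg * ratio / 2
    pl, pr = xp - wp * ratio / 2, xp + wp * ratio / 2
    pt, pb = yp - hp * ratio / 2, yp + hp * ratio / 2
    inter = (torch.min(gr, pr) - torch.max(gl, pl)).clamp(min=0) \
          * (torch.min(g_b, pb) - torch.max(g_t, pt)).clamp(min=0)
    union = wg * hg * ratio**2 + wp * hp * ratio**2 - inter + eps
    return inter / union
```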

2.4. Experimental Environment and Evaluation Index

In the experiment, the Adam optimizer with a weight decay fix (AdamW) was chosen to optimize the neural network. The initial learning rate was set to 0.01, the momentum parameter of AdamW was set to 0.937, and the weight decay parameter was set to 0.0005. The warm-up strategy was employed during training, in which the learning rate was set to 0.001 for the first three epochs and then restored to 0.01. During model training, input image size was uniformly set to 640 × 640 pixels, the batch size was set to 40, and the number of epochs was set to 200. All experiments were conducted in the same experimental environment. The parameters of the experimental environment are provided in Table 3.
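For reference, these hyperparameters map onto an Ultralytics-style training call as sketched below; the model and dataset YAML paths are placeholders rather than files from the paper, and the exact warm-up schedule the framework applies may differ from the one described above.

```python
from ultralytics import YOLO

model = YOLO("pwd-yolov8n.yaml")   # placeholder: assumed custom model config
model.train(
    data="pwd.yaml",               # placeholder: dataset config file
    epochs=200,
    imgsz=640,
    batch=40,
    optimizer="AdamW",
    lr0=0.01,
    momentum=0.937,
    weight_decay=0.0005,
    warmup_epochs=3,               # warm-up before the full learning rate
)
```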
Several evaluation metrics, including mean average precision (mAP), detection speed, gigaFLOPS (GFLOPS), parameter count, and missing rate, were used to compare the performance of different detection models. Average precision (AP) reflects the model’s accuracy in detecting individual target categories. Precision (P) quantifies the model’s classification performance on target samples, while recall (R) measures the model’s capability to locate positive samples. This study focuses on a single category, and thus, mAP is equivalent to AP in this context. The evaluation expressions for these metrics are as follows:
$$P = \frac{TP}{TP + FP}$$
$$R = \frac{TP}{TP + FN}$$
$$AP = \int_0^1 P(R)\, dR$$
$$mAP = \frac{1}{N} \sum_{i=1}^{N} AP_i$$
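Given TP/FP/FN counts and a sampled precision-recall curve, these metrics reduce to a few lines of Python (a trapezoidal approximation of the AP integral is assumed here):

```python
import numpy as np

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def average_precision(p, r):
    # area under the P(R) curve; r must be sorted in increasing order
    return float(np.trapz(p, r))

def mean_average_precision(aps):
    # with a single class, as in this study, mAP equals AP
    return sum(aps) / len(aps)
```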

3. Results

3.1. Ablation Experiments

To quantify the contribution of each improvement, ablation experiments were conducted. The improved modules and evaluation metrics are listed in Table 4, and the mAP@0.5 comparison of the different improved models is illustrated in Figure 11.
As shown in Table 4, integrating the CBAM and CA mechanisms into YOLOv8n significantly boosted its feature extraction capability, leading to a 1.5% increase in mAP@0.5 and a 1.3% decrease in the missing rate. The BiFPN structure strengthened feature fusion and small-target detection, elevating mAP@0.5 by 0.4% and reducing the missing rate by 1.6%. Replacing the C2f module with the lightweight C2f-Faster module reduced detection time and parameter count by 0.7 ms and 1.4 MB, respectively. Refining it further into the C2f-Faster-EMA module offset the accuracy loss caused by lightweighting, yielding a 0.5% increase in mAP@0.5 and a 0.8% decrease in the missing rate without significantly increasing parameters or GFLOPS. Finally, the Inner-SIoU loss function enhanced localization performance at no additional cost, resulting in a 0.8% increase in mAP@0.5 and a 1.2% decrease in the missing rate. In summary, compared with the baseline model, the modified version, referred to as PWD-YOLOv8n, achieved increases of 4.1% in precision, 1.2% in recall, and 3.3% in mAP@0.5, together with a 1.2 MB decrease in parameters and a 3.7% reduction in the missing rate. Despite a slight increase in per-image detection time and GFLOPS, the model remained lightweight while significantly enhancing overall performance, meeting the demands of PWD recognition tasks.

3.2. Optimizing Inner-SIoU Ratio

To select a suitable ratio for the Inner-SIoU tailored to this dataset, experiments were conducted with varying ratio values. Given the small target size of PWD trees in the collected images, the ratio was set to be greater than 1, enlarging the predicted and anchor boxes to improve detection performance for small targets. The improved loss function provided lossless enhancement of the model, and thus, only mAP@0.5 and the missing rate were compared on the validation set. The results are provided in Table 5. Within the ratio range of 1.1–1.5, mAP@0.5 rose and then fell, while the missing rate followed the opposite trend. At a ratio of 1.35, the model achieved the highest mAP@0.5 and the lowest missing rate; conversely, mAP@0.5 was lowest at a ratio of 1.1, and the missing rate was highest at a ratio of 1.5. Therefore, setting the Inner-SIoU ratio to 1.35 can enhance detection capability for small-target diseased trees.

3.3. Performance Comparison of Different Object-Detection Models

To further verify the performance of PWD-YOLOv8n, comparative experiments were conducted using several mainstream object-detection models, including YOLOv5s [31], YOLOv7-tiny [32], the faster region-based convolutional neural network (Faster R-CNN) [33], and the single-shot multi-box detector (SSD) [34]. In the experiments, the evaluation metrics were precision, recall, mAP@0.5, and model parameter count.
As indicated in Table 6, PWD-YOLOv8n exhibited the best overall performance across all the evaluation metrics, achieving the highest precision of 87.9% and mAP@0.5 of 94.3% while maintaining a minimal parameter size of only 4.8 MB. This high accuracy and mAP highlight its effectiveness, along with a considerably smaller parameter size than models such as Faster R-CNN and SSD, which require more computational resources. Although YOLOv5s and YOLOv7-tiny also demonstrate high precision, they do not achieve the overall balance of performance exhibited by PWD-YOLOv8n. The model is particularly suitable for applications that demand both high precision and a lightweight design.
In consideration of the rapid spread of PWD, the missing rate becomes a critical metric for assessing model performance because model precision alone cannot fully reflect the number of missed targets. To further validate the effectiveness of PWD-YOLOv8n model improvements, this model was compared with mainstream object-detection models on the test set. The test set was not used in model training and validation, and thus, it more accurately reflected the generalization ability of the models. Accordingly, diseased trees were detected in the test set. First, the number of PWD trees in the 76 test set images was manually counted and totaled 131 trees. Then, each model was used to detect the test set, and the number of correctly identified PWD trees was counted to obtain each model’s missing rate. The metrics for each model are provided in Table 7, and confusion matrices are illustrated in Figure 12.
Table 7 and Figure 12 indicate that PWD-YOLOv8n exhibits the lowest missing rate on the test set, with YOLOv7-tiny showing the highest. Although the detection time of PWD-YOLOv8n was slightly longer than that of YOLOv8n, it still maintained a lightweight level compared with the other models. This finding indicates that PWD-YOLOv8n can achieve a low missing rate and relatively fast detection speed on an untrained and unvalidated test set, showcasing its strong generalization ability. This result indicates that the model can accurately identify diseased trees in complex environments, meeting the requirements for tree disease prevention tasks.

3.4. Performance Comparison in Complex Backgrounds

To investigate its effectiveness, PWD-YOLOv8n and the comparison models were applied to recognition in real complex backgrounds, as illustrated in Figure 13. As shown in the figure, YOLOv5s, YOLOv7-tiny, and YOLOv8n mostly missed some small-scale diseased trees at the early stage, indicating weak multi-scale feature fusion and a deficiency in detecting small-scale tree targets. In contrast, PWD-YOLOv8n achieved superior detection with almost no misidentifications or missed detections due to its enhanced feature extraction and multi-scale detection capabilities. The model effectively integrates features from different scales, enabling it to accurately detect small-scale diseased trees in complex backgrounds. In addition, PWD-YOLOv8n demonstrates strong generalization ability across various environmental conditions and considerably reduces misidentification and missed detection. These advantages make PWD-YOLOv8n highly significant for practical applications in forest disease management.

4. Discussion

4.1. Advantages of UAV Remote Sensing

At present, the combination of UAV remote sensing and deep learning has achieved considerable progress in diseased-tree identification [35,36,37,38]. Compared with manual field inspections, UAVs can quickly realize large-scale coverage with their flexible and efficient operation, remarkably improving work efficiency and reducing labor and time costs. Compared with satellite remote sensing, UAVs can fly at low altitudes to acquire high-resolution images, making the details of PWD trees more visible and aiding in precise model localization.

4.2. Model Improvement Performance

Given the complex mountainous environments and small-target diseased-tree images, traditional detection models struggle to complete the identification task [39,40,41,42]. At present, many researchers have improved models to enhance detection performance. For example, Yuan et al. [43] developed a lightweight YOLOv5 model by adopting a multi-scale attention mechanism. This model can balance detection speed and accuracy. Zhou et al. [39] proposed a deep learning-based MFTD-Backbone structure, which significantly improved detection accuracy at different infection stages and enhanced efficiency. In this study, we address the challenges of small-target diseased trees, severe tree occlusion, and strong background interference and propose the PWD-YOLOv8n detection model, which exhibits a significant advantage over the other compared models. First, PWD-YOLOv8n achieves the highest mAP@0.5 of 94.3%, significantly outperforming other models, such as YOLOv8n (91.0%), YOLOv7-tiny (90.2%), YOLOv5s (90.0%), and Faster R-CNN (84.0%). Second, the missing rate is crucial for preventing disease spread in practical applications. PWD-YOLOv8n showed the lowest missing rate of 6.1% compared with YOLOv8n (19.0%), YOLOv5s (24.0%), and SSD (21.3%). These results show that PWD-YOLOv8n not only outperforms other models in terms of accuracy but also exhibits higher reliability in practical applications. Given the high flight altitude during data collection and the feature fusion among different layers, PWD-YOLOv8n can effectively address occlusion issues in complex backgrounds, and thus, it is suitable for practical tasks. In summary, these performance improvements make the model reliable in practical applications, providing strong support for the detection and control of PWD.

4.3. Limitations and Prospects

At present, PWD-YOLOv8n performs well in identifying diseased trees. Future work will primarily focus on the following areas: ① Integrating the model with multispectral imaging technology, which can utilize additional spectral information to improve discrimination between healthy and diseased trees under different lighting conditions and ground environments. ② Further exploring efficient attention mechanisms and multi-scale feature fusion techniques to improve the detection capability of small-scale diseased trees, achieving the rapid and large-scale detection of early stage diseases. ③ With the release of YOLOv9 and YOLOv10, future work will consider incorporating these newer versions to further enhance detection performance. The exploration of these models may provide additional insights for optimizing our current approach.

5. Conclusions

To recognize PWD trees against complex backgrounds, this study designs the PWD-YOLOv8n model, which balances complexity and accuracy. Through the comparative analysis of various improvements, the following conclusions are drawn.
(1) The utilization of diverse image augmentation techniques has significantly enhanced the model’s generalization ability for PWD tree recognition, particularly when dealing with images collected by UAVs from two different regions.
(2) The CBAM and CA mechanisms are introduced into the backbone to enhance feature extraction. The BiFPN structure is utilized in the neck to improve detection ability for small-target PWD trees. The C2f module is improved by adopting the FasterBlock structure in the lightweight network FasterNet and incorporating the EMA module into the lightweight C2f module to enhance its performance in recognizing PWD trees.
(3) To further optimize model accuracy and reduce the missing rate, the CIoU loss function is replaced with Inner-SIoU. The improved model achieves an mAP@0.5 of 94.3%, with a parameter size of 4.8 MB. Notably, the missing rates are 6.6% on the validation set and 6.1% on the test set; this consistency across evaluation phases highlights the model’s robustness and reliability.
In summary, the proposed PWD-YOLOv8n model for recognizing PWD trees demonstrates its effectiveness in enhancing recognition accuracy, balancing complexity, and reducing missing rates. This model can quickly and accurately identify PWD trees in complex environments, and thus, its use is conducive to improving the efficiency of disease control in forest areas.

Author Contributions

Conceptualization and methodology, J.S., G.L. and B.Q.; software and validation, J.S.; formal analysis, J.S., G.L. and F.S.; investigation, J.S. and B.Q.; resources, P.L. and F.S.; data curation, J.S.; writing—original draft preparation, J.S.; writing—review and editing, J.S., G.L. and F.S.; visualization, J.S.; supervision, G.L., P.L. and F.S.; project administration, G.L., P.L. and F.S.; funding acquisition, G.L., P.L. and F.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded in part by the “Unveiling and Commanding” Science and Technology Plan Project of Mount Taishan Scenic Area, grant number 2022TSGS001-2, and in part by the National Natural Science Foundation of China, grant number 42074009.

Data Availability Statement

Data are available on request due to restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Vicente, C.; Espada, M.; Vieira, P.; Mota, M. Pine Wilt Disease: A Threat to European Forestry. Eur. J. Plant Pathol. 2012, 133, 89–99. [Google Scholar] [CrossRef]
  2. Li, M.; Li, H.; Ding, X.; Wang, L.; Wang, X.; Chen, F. The Detection of Pine Wilt Disease: A Literature Review. Int. J. Mol. Sci. 2022, 23, 10797. [Google Scholar] [CrossRef] [PubMed]
  3. Rutherford, T.; Webster, J. Distribution of Pine Wilt Disease with Respect to Temperature in North America, Japan, and Europe. Can. J. For. Res. 1987, 17, 1050–1059. [Google Scholar] [CrossRef]
  4. Syifa, M.; Park, S.; Lee, C. Detection of Pine Wilt Disease Tree Candidates for Drone Remote Sensing Using Artificial Intelligence Techniques. Engineering 2020, 6, 919–926. [Google Scholar] [CrossRef]
  5. Zhang, B.; Ye, H.; Lu, W.; Huang, W.; Wu, B.; Hao, Z.; Sun, H. A Spatiotemporal Change Detection Method for Monitoring Pine Wilt Disease in a Complex Landscape Using High-Resolution Remote Sensing Imagery. Remote Sens. 2021, 13, 2083. [Google Scholar] [CrossRef]
  6. Cai, P.; Chen, G.; Yang, H.; Li, X.; Zhu, K.; Wang, T.; Liao, P.; Han, M.; Gong, Y.; Wang, Q. Detecting Individual Plants Infected with Pine Wilt Disease Using Drones and Satellite Imagery: A Case Study in Xianning, China. Remote Sens. 2023, 15, 2671. [Google Scholar] [CrossRef]
  7. Lin, X.; Chen, Q.; Wang, M.; Ma, X.; Liu, Y. Identification of Dead Trees in Bursaphelenchus xylophilus Disease-affected Areas Based on UAV Multispectral Images. Guangxi For. Sci. 2023, 52, 589–593. [Google Scholar]
  8. Huang, J.; Lu, X.; Chen, L.; Sun, H.; Wang, S.; Fang, G. Accurate Identification of Pine Wood Nematode Disease with a Deep Convolution Neural Network. Remote Sens. 2022, 14, 913. [Google Scholar] [CrossRef]
  9. Lim, W.; Choi, K.; Cho, W.; Chang, B.; Ko, D.W. Efficient Dead Pine Tree Detecting Method in the Forest Damaged by Pine Wood Nematode (Bursaphelenchus xylophilus) through Utilizing Unmanned Aerial Vehicles and Deep Learning-Based Object Detection Techniques. For. Sci. Technol. 2022, 18, 36–43. [Google Scholar] [CrossRef]
  10. Li, F.; Liu, Z.; Shen, W.; Wang, Y.; Wang, Y.; Ge, C.; Sun, F.; Lan, P. A Remote Sensing and Airborne Edge-Computing Based Detection System for Pine Wilt Disease. IEEE Access 2021, 9, 66346–66360. [Google Scholar] [CrossRef]
  11. Wang, J.; Zhao, J.; Sun, H.; Lu, X.; Huang, J.; Wang, S.; Fang, G. Satellite Remote Sensing Identification of Discolored Standing Trees for Pine Wilt Disease Based on Semi-Supervised Deep Learning. Remote Sens. 2022, 14, 5936. [Google Scholar] [CrossRef]
  12. Zhou, H.; Yuan, X.; Zhou, H.; Shen, H.; Ma, L.; Sun, L.; Fang, G.; Sun, H. Surveillance of Pine Wilt Disease by High Resolution Satellite. J. For. Res. 2022, 33, 1401–1408. [Google Scholar] [CrossRef]
  13. Liu, J.; Li, Q.; Wang, B.; Huang, X.; Lei, L. Identification of Wood Infected by Pine Wilt Disease with Better than 2 m Multi-Temporal Images. Beijing Surv. Mapp. 2023, 37, 1638–1643. [Google Scholar]
  14. Liu, F.; Jiang, S.; Zhang, J.; He, S. Detection of Small Size Trees with Pine Wilt Disease Based on NanoDet-SimAM. J. Shenyang Univ. Technol. 2024, 1–7. Available online: https://link.cnki.net/urlid/21.1189.T.20240316.2208.002 (accessed on 23 July 2024).
  15. Lee, K.; Park, J. Economic Evaluation of Unmanned Aerial Vehicle for Forest Pest Monitoring. J. Korea Acad.-Ind. Coop. Soc. 2019, 20, 440–446. [Google Scholar]
  16. Yu, R.; Luo, Y.; Zhou, Q.; Zhang, X.; Wu, D.; Ren, L. Early Detection of Pine Wilt Disease Using Deep Learning Algorithms and UAV-Based Multispectral Imagery. For. Ecol. Manag. 2021, 497, 119493. [Google Scholar] [CrossRef]
  17. Zhang, P.; Wang, Z.; Rao, Y.; Zheng, J.; Zhang, N.; Wang, D.; Zhu, J.; Fang, Y.; Gao, X. Identification of Pine Wilt Disease Infected Wood Using UAV RGB Imagery and Improved YOLOv5 Models Integrated with Attention Mechanisms. Forests 2023, 14, 588. [Google Scholar] [CrossRef]
  18. Zhang, R.; Xia, L.; Chen, L.; Ding, C.; Zheng, A.; Hu, X.; Yi, T.; Chen, M.; Chen, T. Performance Comparison of Deep Learning Models on Segmentation Wilt Pine Disease with UAV. Remote Sens. Nat. Resour. 2023, 1–9. Available online: https://link.cnki.net/urlid/10.1759.p.20231124.1612.030 (accessed on 23 July 2024).
  19. Ye, X.; Pan, J.; Liu, G.; Shao, F. Exploring the Close-Range Detection of UAV-Based Images on Pine Wilt Disease by an Improved Deep Learning Method. Plant Phenom. 2023, 5, 0129. [Google Scholar] [CrossRef]
  20. Liu, H.; Li, W.; Jia, W.; Sun, H.; Zhang, M.; Song, L.; Gui, Y. Clusterformer for Pine Tree Disease Identification Based on UAV Remote Sensing Image Segmentation. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5609215. [Google Scholar] [CrossRef]
  21. Wang, S.; Cao, X.; Wu, M.; Yi, C.; Zhang, Z.; Fei, H.; Zheng, H.; Jiang, H.; Jiang, Y.; Zhao, X. Detection of Pine Wilt Disease Using Drone Remote Sensing Imagery and Improved YOLOv8 Algorithm: A Case Study in Weihai, China. Forests 2023, 14, 2052. [Google Scholar] [CrossRef]
  22. Du, Z.; Wu, S.; Wen, Q.; Zheng, X.; Lin, S.; Wu, D. Pine Wilt Disease Detection Algorithm Based on Improved YOLOv5. Front. Plant Sci. 2024, 15, 1302361. [Google Scholar] [CrossRef]
  23. Xia, L.; Zhang, R.; Chen, L.; Li, L.; Yi, T.; Wen, Y.; Ding, C.; Xie, C. Evaluation of Deep Learning Segmentation Models for Detection of Pine Wilt Disease in Unmanned Aerial Vehicle Images. Remote Sens. 2021, 13, 3594. [Google Scholar] [CrossRef]
  24. Terven, J.; Córdova-Esparza, D.M.; Romero-González, J.A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
  25. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  26. Hou, Q.; Zhou, D.; Feng, J. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13713–13722. [Google Scholar]
  27. Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and Efficient Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790. [Google Scholar]
  28. Chen, J.; Kao, S.h.; He, H.; Zhuo, W.; Wen, S.; Lee, C.H.; Chan, S.H.G. Run, Don’t Walk: Chasing Higher FLOPS for Faster Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 12021–12031. [Google Scholar]
  29. Ouyang, D.; He, S.; Zhang, G.; Luo, M.; Guo, H.; Zhan, J.; Huang, Z. Efficient Multi-Scale Attention Module with Cross-Spatial Learning. In Proceedings of the ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes, Greece, 4–10 June 2023; pp. 1–5. [Google Scholar]
  30. Zhang, H.; Xu, C.; Zhang, S. Inner-IoU: More Effective Intersection over Union Loss with Auxiliary Bounding Box. arXiv 2023, arXiv:2311.02877. [Google Scholar]
  31. Zhang, C.; Ding, H.; Shi, Q.; Wang, Y. Grape Cluster Real-Time Detection in Complex Natural Scenes Based on YOLOv5s Deep Learning Network. Agriculture 2022, 12, 1242. [Google Scholar] [CrossRef]
  32. Wang, C.; Bochkovskiy, A.; Liao, H.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 7464–7475. [Google Scholar]
  33. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Adv. Neural Inf. Process. Syst. 2015, 28. [Google Scholar] [CrossRef]
  34. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Proceedings, Part I 14, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37. [Google Scholar]
  35. Qin, B.; Sun, F.; Shen, W.; Dong, B.; Ma, S.; Huo, X.; Lan, P. Deep Learning-Based Pine Nematode Trees’ Identification Using Multispectral and Visible UAV Imagery. Drones 2023, 7, 183. [Google Scholar] [CrossRef]
  36. Zaidi, S.S.A.; Ansari, M.S.; Aslam, A.; Kanwal, N.; Asghar, M.; Lee, B. A Survey of Modern Deep Learning based Object Detection Models. Digit. Signal Process. 2022, 126, 103514. [Google Scholar] [CrossRef]
  37. Junos, M.H.; Khairuddin, A.S.M.; Dahari, M. Automated Object Detection on Aerial Images for Limited Capacity Embedded Device Using a Lightweight CNN Model. Alex. Eng. J. 2022, 61, 6023–6041. [Google Scholar] [CrossRef]
  38. Zhao, K.; Zhao, L.; Zhao, Y.; Deng, H. Study on Lightweight Model of Maize Seedling Object Detection Based on YOLOv7. Appl. Sci. 2023, 13, 7731. [Google Scholar] [CrossRef]
  39. Zhou, Y.; Liu, W.; Bi, H.; Chen, R.; Zong, S.; Luo, Y. A Detection Method for Individual Infected Pine Trees with Pine Wilt Disease Based on Deep Learning. Forests 2022, 13, 1880. [Google Scholar] [CrossRef]
  40. Yin, D.; Cai, Y.; Li, Y.; Yuan, W.; Zhao, Z. Assessment of the Health Status of Old Trees of Platycladus orientalis L. Using UAV Multispectral Imagery. Drones 2024, 8, 91. [Google Scholar] [CrossRef]
  41. Xie, W.; Wang, H.; Liu, W.; Zang, H. Early-Stage Pine Wilt Disease Detection via Multi-Feature Fusion in UAV Imagery. Forests 2024, 15, 171. [Google Scholar] [CrossRef]
  42. Ren, D.; Peng, Y.; Sun, H.; Yu, M.; Yu, J.; Liu, Z. A Global Multi-Scale Channel Adaptation Network for Pine Wilt Disease Tree Detection on UAV Imagery by Circle Sampling. Drones 2022, 6, 353. [Google Scholar] [CrossRef]
  43. Yuan, Q.; Zou, S.; Wang, H.; Luo, W.; Zheng, X.; Liu, L.; Meng, Z. A Lightweight Pine Wilt Disease Detection Method Based on Vision Transformer-Enhanced YOLO. Forests 2024, 15, 1050. [Google Scholar] [CrossRef]
Figure 1. Study areas and examples of PWD trees.
Figure 2. Dataset construction process: (a) data collection and (b) image augmentation.
Figure 3. (a) Training set instances with manually annotated bounding boxes (indicated by the green dots forming a square) and (b) bounding box size distribution.
Figure 4. Model used for the recognition of PWD trees: (a) YOLOv8n base model structure diagram and (b) PWD-YOLOv8n model structure diagram, highlighting the enhanced components in red boxes and lines. The colored boxes represent different components of the model.
Figure 5. Structure diagram of the CBAM (height, H; width, W; and channel number, C).
Figure 6. Structure diagram of CA.
Figure 7. Structure diagram of the BiFPN. Different colored circles represent different levels in the network structure, with each circle corresponding to a specific module in the network. The arrows indicate the flow of information between these modules.
Figure 8. FasterNet block structure diagram.
Figure 9. Structure diagram of FasterBlock-EMA.
Figure 10. Description of Inner-IoU. (a,b) Scaling results of the target and anchor boxes when the ratio is less than 1 and more than 1, which are suitable for large and small target objects, respectively.
Figure 11. Comparison of mAP@0.5.
Figure 12. Comparison of confusion matrices for common target-detection models: (a) PWD-YOLOv8n, (b) YOLOv8n, (c) YOLOv5s, (d) YOLOv7-tiny, (e) Faster R-CNN, and (f) SSD.
Figure 13. Detection performance of different models in real complex backgrounds. The red boxes indicate detected infected trees. The purple and yellow circles indicate undetected and misidentified infected trees, respectively.
Table 1. Settings of flight parameters.

| Flight Parameter | Value |
|---|---|
| Flight altitude (m) | 350 |
| Flight speed (m/s) | 15 |
| Forward overlap rate (%) | 80 |
| Side overlap rate (%) | 80 |
| Capture interval (s) | 8 |
Table 2. Number of images in the dataset.

| Image Type | Training Set | Validation Set | Test Set | Total |
|---|---|---|---|---|
| Original images | 572 | 246 | 76 | 894 |
| Augmented images | 2774 | 1184 | 76 | 4034 |
Table 3. Configuration of the experiment.

| Item | Configuration |
|---|---|
| Operating system | Windows 10 |
| Programming language | Python 3.9.17 |
| CPU | Intel Core i5-13400F |
| GPU | RTX 4060 Ti |
| GPU memory | 16 GB |
| Framework | PyTorch |
Table 4. Results of ablation experiments for different improved models (each row cumulatively adds the indicated module to YOLOv8n; the last two rows use C2f-Faster-EMA in place of C2f-Faster).

| Model | CBAM and CA | BiFPN | C2f-Faster | C2f-Faster-EMA | Inner-SIoU | P (%) | R (%) | mAP@0.5 (%) | Parameters (MB) | Detection Time (ms/image) | GFLOPS (G) | Missing Rate (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| YOLOv8n | | | | | | 83.8 | 85.8 | 91.0 | 6.0 | 20.3 | 8.1 | 10.3 |
| | ✓ | | | | | 86.1 | 85.6 | 92.5 | 6.0 | 20.7 | 8.1 | 9.0 |
| | ✓ | ✓ | | | | 87.6 | 84.7 | 92.9 | 6.2 | 21.4 | 12.4 | 7.4 |
| | ✓ | ✓ | ✓ | | | 85.9 | 85.7 | 93.0 | 4.8 | 20.7 | 10.3 | 8.6 |
| | ✓ | ✓ | | ✓ | | 87.8 | 86.5 | 93.5 | 4.8 | 22.1 | 10.5 | 7.8 |
| PWD-YOLOv8n | ✓ | ✓ | | ✓ | ✓ | 87.9 | 87.0 | 94.3 | 4.8 | 23.3 | 10.5 | 6.6 |
Table 5. Results of PWD-YOLOv8n at different ratios.

| Ratio | mAP@0.5 (%) | Missing Rate (%) |
|---|---|---|
| 1.1 | 93.0 | 7.9 |
| 1.15 | 93.7 | 7.3 |
| 1.2 | 93.6 | 7.2 |
| 1.25 | 93.6 | 6.9 |
| 1.3 | 93.9 | 7.4 |
| 1.35 | 94.3 | 6.6 |
| 1.4 | 93.4 | 7.8 |
| 1.45 | 93.2 | 7.5 |
| 1.5 | 93.1 | 8.4 |
Table 6. Performance comparison of common target-detection models.

| Model | P (%) | R (%) | mAP@0.5 (%) | Parameters (MB) |
|---|---|---|---|---|
| Faster R-CNN | 49.8 | 89.7 | 84.0 | 108 |
| SSD | 80.8 | 87.2 | 90.2 | 90.6 |
| YOLOv5s | 83.7 | 85.6 | 90.0 | 13.7 |
| YOLOv7-tiny | 84.7 | 86.9 | 90.2 | 11.7 |
| YOLOv8n | 83.8 | 85.8 | 91.0 | 6.0 |
| PWD-YOLOv8n | 87.9 | 87.0 | 94.3 | 4.8 |
Table 7. Performance comparison of common target-detection models on the test set.

| Model | Number of Detected Diseased Trees | Missing Rate (%) | Detection Time (ms/image) |
|---|---|---|---|
| Faster R-CNN | 98 | 25.1 | 166 |
| SSD | 103 | 21.3 | 167 |
| YOLOv5s | 99 | 24.0 | 12.9 |
| YOLOv7-tiny | 94 | 28.3 | 43.1 |
| YOLOv8n | 106 | 19.0 | 15.3 |
| PWD-YOLOv8n | 123 | 6.1 | 24.9 |
