Article

Detection of the Pine Wilt Disease Using a Joint Deep Object Detection Model Based on Drone Remote Sensing Data

1 Department of Applied Engineering, Gandong College, Fuzhou 344000, China
2 Graduate School, Nueva Ecija University of Science and Technology, Cabanatuan City 3100, Philippines
3 School of Land Science and Technology, China University of Geosciences, Beijing 100083, China
* Author to whom correspondence should be addressed.
Forests 2024, 15(5), 869; https://doi.org/10.3390/f15050869
Submission received: 8 March 2024 / Revised: 26 April 2024 / Accepted: 30 April 2024 / Published: 16 May 2024

Abstract

Disease detection is crucial for the protection of forest growth, reproduction, and biodiversity. Traditional detection methods face challenges such as limited coverage, excessive time and resource consumption, and poor accuracy, diminishing the effectiveness of forest disease prevention and control. To address these challenges, this study leverages drone remote sensing data combined with deep object detection models, specifically employing the YOLO-v3 algorithm based on loss function optimization, for the efficient and accurate detection of tree diseases and pests. Utilizing drone-mounted cameras, the study captures insect pest image information in pine forest areas, followed by segmentation, merging, and feature extraction processing. The computing system of airborne embedded devices is designed to ensure detection efficiency and accuracy. The improved YOLO-v3 algorithm combined with the CIoU loss function was used to detect forest pests and diseases. Compared to the traditional IoU loss function, CIoU takes into account the overlap area, the distance between the centers of the predicted and actual frames, and the consistency of the aspect ratio. The experimental results demonstrate the proposed model's capability to process pest and disease images with an average processing time of less than 0.5 s per image while achieving an accuracy surpassing 95%. The model's effectiveness in identifying tree pests and diseases with high accuracy and comprehensiveness offers significant potential for developing forest inspection, protection, and prevention plans. However, limitations exist in the model's performance in complex forest environments, necessitating further research to improve model universality and adaptability across diverse forest regions. Future directions include exploring advanced deep object detection models to minimize computing resource demands and enhance practical application support for forest protection and pest control.

1. Introduction

In the current context of globalization, forest health has become a global focus, especially in the detection and management of forest diseases and pests (FDP). With the rapid development of UAV remote sensing technology, its application in forest resource survey and pest monitoring is increasingly widespread [1,2,3]. Traditional forest pest monitoring methods mainly rely on ground surveys and manual interpretation, which are not only time-consuming but also inefficient. Therefore, the use of remote sensing data obtained by UAV aerial photography for automatic and accurate pest detection has become a research hotspot. At present, deep learning technology has made remarkable achievements in image processing and object detection. In particular, the convolutional neural network (CNN) outperforms traditional algorithms in feature extraction and image classification [4,5,6]. However, there are still many challenges in applying deep learning technology to forest pest detection based on UAV remote sensing data. For example, Li R et al. applied infrared remote sensing to dim-target detection but failed to identify the targets accurately [7]. On the one hand, remote sensing images of forest cover areas usually contain complex background information, and different tree species, understory vegetation, and topographic relief may interfere with the detection of diseases and pests [8]. On the other hand, the occurrence of pests and diseases is irregular in spatial distribution, and the target size of pests and diseases in the image is variable, which requires the detection algorithm to have higher robustness and adaptability. For example, Xu B et al. proposed a spectral weed mapping model, which is not effective in detection scenarios with small samples and few classes [9]. In addition, existing studies give insufficient consideration to the generalization ability and real-time performance of the model, which may lead to a reduction in detection accuracy and an extension of response time in practical applications.
As a flexible data acquisition tool, drones have proven to be very effective in quickly acquiring images of large forest areas [10]. B. Wang proposed a deep learning-based crop pest and disease recognition model. Firstly, the image data are obtained and the images are preprocessed by the nearest-neighbor interpolation method. Then, the structure of the Alex-Net model is improved, and the neuron nodes and experimental parameters of the fully connected layer are adjusted. This improved model was used to identify crop pests and diseases, and the results showed that the average recognition accuracy reached 96.26%; the recognition time was only 321 s, and the performance was better than other models [11]. Zhu C used UAV aerial photography to dynamically monitor diseases and insect pests, and the transformed images with latitude and longitude information were input into the detection system, mainly for image feature extraction and classification. Using deep learning in MATLAB R2021a and a BP neural network algorithm, the similarity of image features in the pest feature database was compared. A large number of comparative analyses show that deep learning algorithms have high accuracy and reliability in the identification of pests and diseases [12]. X. Huang et al. used a fully convolutional network algorithm based on VGG-16 to segment crop images and proposed an improved dual-path network model to enhance feature extraction capability. By adjusting the normalization layer, the parameters of the dual-path neural network are optimized adaptively, which improves the recognition versatility for different pest types and the training speed of the network. The results show that a recognition accuracy of 97.59% is achieved, which proves the effectiveness of the method [13]. L. Butera et al. studied the ability of object detection models to identify disease pests in non-uniform outdoor images taken from various sources. Emphasis is placed on distinguishing pests from similar harmless species while considering the detection performance and computing resource requirements of different models. The experimental results show that the FRCNN model with a MobileNetV3 backbone performs well in accuracy and inference speed, making it an effective starting point. The average accuracy of this model reaches 92.66%, and its performance is superior to other models [14]. Rustia D.J.A. et al. proposed a method combining UAV technology and a convolutional neural network image classifier and adopted a sample control strategy to improve classification performance. The algorithm was developed and tested on images taken by wireless imaging equipment installed in multiple greenhouses under natural and varying lighting conditions. The experiments show that the average F1 scores reach 0.92 and 0.90, respectively, and the counting accuracies are 0.91 and 0.90, respectively. This method provides an effective solution for pest and disease identification [15]. An improved YOLOv3 object detection algorithm was proposed by X. Wang et al. This method uses an extended convolutional layer to improve the detection ability for small targets and retains fuzzy targets by evaluating candidate-box IoU and linearly attenuating confidence, so as to solve the detection problem of pests and diseases. In addition, the small-target weight in the loss function is optimized by introducing a balance factor. Under different background conditions, the detection effect is superior to existing algorithms [16]. P. Kaur et al. proposed a combination of convolutional neural network models and transfer learning techniques to identify pests and diseases in leaf images. Model performance is evaluated using different parameters such as dropout, learning rate, batch size, epoch number, and accuracy. The results showed that the accuracy of disease classification reached 98.92% and the F1 score was 97.94%, verifying the effectiveness of this method in the detection of leaf pests and diseases [17]. Syed-Ab-Rahman, S.F. et al. proposed a two-stage deep convolutional neural network model focusing on the detection and classification of plant diseases using leaf images. The model consists of two key stages: first, the potentially affected areas are identified through a region proposal network; these regions are then classified by disease class using a classifier. The experimental results show that the detection accuracy of this model is 94.37% and the average accuracy is 95.8%, which proves that it is an effective decision-support tool for growers and farmers to identify and classify pests and diseases [18].
In summary, the existing methods have problems such as overlapping processing of training data, slow loss convergence, high cost, complex operation, limited coverage, excessive consumption of time and resources, and insufficient precision. Therefore, a method combining UAV remote sensing data with a deep object detection model is proposed. In this method, the HOG-SVM model is used for initial feature extraction. Against complex backgrounds, the YOLO-v3 algorithm based on an optimized loss function is further introduced to identify the target accurately. After regression optimization of the bounding box for target detection, the improved CIoU loss function is adopted to maximize the overlap area of the bounding boxes and fine-tune deviations in center-point position and shape size so as to accurately reflect the actual contour of the target. This helps to achieve higher accuracy and identification efficiency in the detection of pine wilt disease. The progress of this technology provides a new perspective for disease monitoring in the field of remote sensing and lays a foundation for future applications in complex environments, thereby making positive contributions to forest protection and maintaining ecological balance.
The overall structure of the study comprises five parts: the first part summarizes the relevant research achievements and shortcomings of forest disease and insect pest detection technology at home and abroad. In the second part, the HOG-SVM model is proposed, the YOLO-v3 algorithm based on loss function optimization is proposed for target recognition, and the computing system on airborne embedded equipment is designed. The third part uses the proposed method to carry out a comparative analysis of the research experiments. In the fourth part, the optimized model is discussed and analyzed. In the fifth part, the experimental results are summarized, the shortcomings of the research are pointed out, and future research directions are proposed.

2. Materials and Methods

This study first proposed a pest and disease detection model based on UAV remote sensing data (RSD), with the core technology being the combination of HOG and SVM. Given the limitations of HOG-SVM in feature extraction in complex environments, this study further combined the YOLO-v3 deep learning object detection (DLOD) model optimized by Complete-IoU (CIoU). In addition, considering the efficiency of data processing and the need to reduce information redundancy, this study proposed an onboard computing platform (OCP). This platform aimed to improve the detection efficiency of ground processing units, optimize data processing workflows, reduce the data transmission burden, and enhance the practicality and reliability of the entire pest and disease detection system.

2.1. Detection of Pests and Diseases Based on UAV RSD

This study used machine learning methods to detect FDP, which is more cost-effective and efficient than manual methods. To this end, machine learning methods that are easier to operate and have lower computational requirements were adopted to adapt to forest terrain interference, and image processing and search algorithms were improved to enhance the feature extraction performance of HOG. This study was mainly conducted in Mount Tai (Taishan) and the forest area to its west. From autumn 2018 to 2021, pine wilt disease and other tree diseases and pests were carefully surveyed and controlled. This study focused on observing trees affected by pine wilt disease and used UAV-mounted cameras to collect image data of the affected areas. Regional aerial photography was completed using a DJI M600 UAV, a MAVIC 2 UAV, and FIREFLY 6S cameras to comprehensively cover the monitoring area and effectively track the development of FDP. Figure 1 shows the DJI M600 UAV and its flight trajectory.
This study used the DJI M600 and Mavic 2 ("Yu 2") UAVs to conduct aerial photography of forest areas affected by pine wilt disease. The DJI M600 flew missions at an altitude of 100 m, while the Mavic 2 shot at altitudes of 10 to 30 m. Through these aerial photographs, 770 training samples and 85 test samples were collected. The research required the use of image masking techniques to simplify complex elements in images and facilitate processing. Firstly, image segmentation was performed in multiple color spaces, and then image regions were merged using methods such as color histograms and texture features. The linear fusion method is given in Equation (1) [19].
$$S(r_i, r_j) = a_1 S_{\mathrm{color}}(r_i, r_j) + a_3 S_{\mathrm{size}}(r_i, r_j) + a_4 S_{\mathrm{fill}}(r_i, r_j)$$
Finally, different image segmentation thresholds were set to obtain diverse image processing results. After image segmentation and hierarchical merging, potential pest and disease areas were identified. For tree images affected by pine wilt disease, the suspected target areas obtained through selective search needed further processing for model training. When comparing the degree of overlap between the candidate boxes obtained through selective search and the actual pest and disease areas, a candidate was considered a positive sample if the overlap rate exceeded 50% and a negative sample if it was below 50%; the selection process is shown in Figure 2.
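The labeling rule above can be expressed compactly in code. The following is a minimal sketch in Python, assuming candidate and ground-truth boxes in (x1, y1, x2, y2) pixel coordinates; the function names and data layout are illustrative, not the authors' implementation.

```python
def box_iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def label_candidates(candidates, ground_truth, threshold=0.5):
    """Label selective-search candidates: 1 = positive sample, 0 = negative."""
    labeled = []
    for cand in candidates:
        best = max((box_iou(cand, gt) for gt in ground_truth), default=0.0)
        labeled.append((cand, 1 if best > threshold else 0))
    return labeled
```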
After preparing positive and negative samples, it was necessary to perform feature extraction on these samples, which mainly used HOG as the key feature. HOG is a feature descriptor based on image gradient histograms, which is effective for object detection tasks. To improve the accuracy of feature extraction, the image was first converted to grayscale format; then its contrast was adjusted through pixel compression while suppressing noise, reducing the impact of light and shadow changes on the image, as shown in Equation (2) [20].
$$dst(x, y) = src(x, y)^{\gamma}$$
In Equation (2), $\gamma$ is the pixel compression exponent, usually set to 0.5. Then, the gradient of the image was calculated to capture features such as contours and textures and to reduce the impact of lighting. This method involves analyzing the rate of change at each pixel in the horizontal and vertical directions of the image and determining the gradient intensity and direction at these positions. The specific calculation is given in Equation (3) [21].
$$G_x(x, y) = I(x + 1, y) - I(x - 1, y), \quad G_y(x, y) = I(x, y + 1) - I(x, y - 1)$$
In Equation (3), $G_x(x, y)$ and $G_y(x, y)$ represent the horizontal and vertical gradient values at each point in the image, respectively. The gradient amplitude and direction at these points were then calculated, as shown in Equation (4) [21].
$$G(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2}, \quad \alpha = \arctan\frac{G_y(x, y)}{G_x(x, y)}$$
The process of constructing feature vectors required three steps: first, the image was segmented into multiple small cell regions, and the gradient histogram of each cell was calculated. Then, these cells were combined into larger block regions, and the descriptors of the cells were concatenated into the HOG feature vector of each block. Finally, the descriptors of all blocks were concatenated to form the HOG feature descriptor of the complete image. The mathematical calculation of this process is given in Equation (5) [22].
$$WinNum = \left(\frac{srcIMG.w - winS.w}{winStride.w} + 1\right)\left(\frac{srcIMG.h - winS.h}{winStride.h} + 1\right)$$
$$BlockNum = \left(\frac{winS.w - blockS.w}{blockStride.w} + 1\right)\left(\frac{winS.h - blockS.h}{blockStride.h} + 1\right)$$
$$CellNum = \frac{blockS.w}{cellS.w} \cdot \frac{blockS.h}{cellS.h}$$
$$FeatureNum = WinNum \cdot BlockNum \cdot CellNum \cdot N_{bin}$$
In Equation (5), the width and height of the input image are denoted by srcIMG.w and srcIMG.h, winS.w and winS.h denote the size of the sliding window, and winStride.w and winStride.h denote the step size of the sliding window; WinNum is the total number of sliding windows. For blocks, the quantity is BlockNum, the dimensions are blockS.w and blockS.h, and the step sizes are blockStride.w and blockStride.h. The cell quantity CellNum and cell dimensions cellS.w and cellS.h also enter the calculation. FeatureNum and $N_{bin}$ denote the dimensionality of the feature vector and the number of gradient direction bins, respectively. These parameters collectively describe the HOG feature extraction process for the image.
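As a concrete illustration of Equations (2)–(5), the sketch below computes a HOG descriptor with OpenCV after gamma compression. The window, block, cell, and stride sizes are illustrative defaults, not the exact settings used in this study.

```python
import cv2
import numpy as np


def extract_hog(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Equation (2): pixel compression dst = src^gamma with gamma = 0.5.
    compressed = np.power(gray / 255.0, 0.5)
    compressed = (compressed * 255).astype(np.uint8)
    compressed = cv2.resize(compressed, (64, 64))  # match the window size below
    # Equation (5) parameters: window, block, block stride, cell, N_bin.
    hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
    # OpenCV evaluates the per-pixel gradients of Equations (3)-(4) internally.
    return hog.compute(compressed)
```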
This study mainly applied the SVM classifier, an effective binary classification model that can handle different types of data samples, as shown in Figure 3. In Figure 3, the green lines represent different hyperplanes, while circles and crosses represent two different classes of data. When analyzing the confidence of a classification model, the position of a data point relative to the classification hyperplane is usually taken as the key consideration. For linear classification, the hyperplane is written as $w \cdot x + b = 0$; the value $w \cdot x_i + b$ reflects the distance of point $x_i$ from the hyperplane, and the degree to which its sign matches the category label $y_i$ determines the correctness of classification. The quantification of this accuracy and confidence is defined in Equation (6) [23].
In Equation (6), $y_i(w \cdot x_i + b)$ measures both the correctness and the confidence of classification. When this value is normalized by $\|w\|$, it becomes the geometric margin $\gamma$, calculated as in Equation (7) [24].
$$\hat{\gamma} = \min_{i = 1, 2, \ldots, N} \hat{\gamma}_i = \min_{i = 1, 2, \ldots, N} y_i (w \cdot x_i + b)$$
$$\gamma = \frac{w \cdot x + b}{\|w\|} = \frac{\hat{\gamma}}{\|w\|}$$
As mentioned above, SVM classifiers combined with HOG features have been widely applied in the field of image recognition. Therefore, this study chose the HOG-SVM method for the detection of pine wilt disease.
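For reference, a minimal training sketch for such a HOG-SVM detector is shown below, assuming the labeled HOG feature vectors produced by the previous steps; the use of scikit-learn and the parameter values are assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC


def train_hog_svm(features, labels):
    """features: (n_samples, n_hog_dims) array; labels: 1 = diseased, 0 = healthy."""
    clf = LinearSVC(C=1.0, max_iter=10000)
    clf.fit(features, labels)
    return clf


def score_windows(clf, window_features):
    # Signed distance w.x + b to the hyperplane (Equations (6)-(7)),
    # used as the detection confidence for each candidate window.
    return clf.decision_function(window_features)
```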

2.2. Disease and Pest Detection Based on Deep Object Algorithm

To address the limitations of HOG-SVM in feature extraction against complex backgrounds, this study introduced the DLOD model for UAV RSD. This model exhibits superior image feature recognition ability in complex environments. This study focused on training a detection model specifically for pine-wilt-diseased trees and explored its effectiveness and challenges in practical applications. Applying CIoU optimization to the YOLO-v3 algorithm further improved detection accuracy significantly. As shown in Figure 4, YOLO-v3 is known for its efficient detection method and fast processing. The D-Net53 (Darknet-53) network was used to enhance feature extraction, and the detection ability for small objects was strengthened through improvements to the feature pyramid and convolutional layers. These innovations significantly improved the accuracy of YOLO-v3 in multi-scale detection, effectively reducing the risk of false positives and missed detections.
In the YOLO-v3 deep object detection model, the loss function mainly consists of three parts: bounding box loss, confidence loss, and category loss, as shown in Equation (8) [25].
$$Loss = L_{bbox}(x, y, w, h) + L_{conf}(c_i, c_i^*) + L_{cls}(c_i, p, p^*)$$
In Equation (8), $L_{bbox}$ represents the regression loss, $L_{conf}$ represents the target confidence loss, and $L_{cls}(c_i, p, p^*)$ represents the category cross-entropy loss. Here, $c_i$ and $c_i^*$ denote the actual and predicted categories, while $p$ and $p^*$ denote the probabilities of the real label and the positive class. The target position regression loss is further decomposed into a cross-entropy loss over the $x$ and $y$ coordinates and a mean square error loss over the width and height, as shown in Equation (9) [26].
$$L_{bbox}(x, y) = \sum_{i=0}^{K \times K} \sum_{j=0}^{M} I_{ij}^{obj} (2 - w_i h_i) \times \left[ \left( -x_i \log(x_i^*) - (1 - x_i)\log(1 - x_i^*) \right) + \left( -y_i \log(y_i^*) - (1 - y_i)\log(1 - y_i^*) \right) \right]$$
In Equation (9), $L_{bbox}(x, y)$ quantifies the loss of the target coordinates. The loss assessment is gated by $I_{ij}^{obj}$, which indicates whether the $j$-th bounding box in grid cell $i$ is responsible for a target. The width and height loss is shown in Equation (10) [27].
$$L_{bbox}(w, h) = \sum_{i=0}^{K \times K} \sum_{j=0}^{M} I_{ij}^{obj} (2 - w_i h_i) \left[ (w_i - w_i^*)^2 + (h_i - h_i^*)^2 \right]$$
In Equation (10), $L_{bbox}(w, h)$ represents the loss of the target width and height obtained through the mean square error. The YOLO-v3 model clustered all bounding boxes into nine anchor sizes by applying the K-means algorithm, with three anchors assigned to each of three scales. The input image was resized to 416 × 416, divided into grids of different sizes, and downsampled multiple times by the D-Net53 network to generate feature maps of three different sizes. To improve prediction accuracy, the target width and height were squared to reduce the impact of size on prediction accuracy. This study adopted the intersection-over-union measure as the basis of the loss function, which is more effective than the traditional mean square error and cross-entropy loss functions, as shown in Equation (11) [28].
$$IoU = \frac{|A \cap B|}{|A \cup B|}$$
In Equation (11), $IoU$ represents the overlap between the predicted and actual target boxes. This indicator is relatively unaffected by scale changes and effectively reflects the accuracy of the positioning box. However, when the predicted box did not overlap with the actual target at all, it could not provide gradient information for optimization. In addition, even if the overlap rate of two boxes was the same, $IoU$ could not describe the specific way they overlap and thus could not accurately determine the positioning quality of the boxes. Therefore, this study introduced a generalized overlap rate loss function, as shown in Equation (12) [29].
$$GIoU = IoU - \frac{|C \setminus (A \cup B)|}{|C|}$$
In Equation (12), the generalized intersection-over-union $GIoU$ loss function evaluates the degree of overlap between the predicted box and the true target box, where $C$ represents the smallest enclosing convex region that surrounds the two boxes. However, when the predicted box completely enclosed the real target box, $GIoU$ degenerated to the ordinary $IoU$, which might reduce its utility. Therefore, this study introduced a distance loss optimization method, which compensates for this shortcoming of $GIoU$ through center-distance optimization, as shown in Equation (13) [30].
$$L_{DIoU} = 1 - IoU + \frac{\rho^2(b, b^{gt})}{c^2}$$
In Equation (13), the distance intersection-over-union loss $L_{DIoU}$ evaluates the proximity of the predicted and actual target boxes. To improve training efficiency and make up for the shortcomings of the previous methods, this study adopted the CIoU loss function, which combines the overlap and the positional distance between the predicted and real boxes, as defined in Equation (14) [31].
$$L_{CIoU} = 1 - IoU + \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v, \quad v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2, \quad \alpha = \frac{v}{(1 - IoU) + v}$$
In Equation (14), $\rho^2(b, b^{gt})$ represents the squared Euclidean distance between the predicted and actual center points, while $c$ represents the diagonal length of the minimum enclosing box around the two boxes. The $v$ term measures the consistency of the aspect ratios of the predicted and actual boxes, while the $\alpha$ parameter adjusts the weight of this aspect-ratio consistency. In summary, the YOLO-v3 model combined with the CIoU loss function effectively improved the accuracy and consistency of matching predicted boxes with real target boxes. This method played an important role in improving the accuracy of object detection.
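To make Equations (11)–(14) concrete, the following is a minimal sketch of the CIoU loss for a single box pair in (x1, y1, x2, y2) format. It follows the published CIoU definition and is illustrative rather than the authors' exact code.

```python
import math


def ciou_loss(pred, gt, eps=1e-9):
    # IoU term, Equation (11).
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter + eps)

    # Center distance rho^2(b, b_gt) over the squared diagonal c^2 of the
    # smallest enclosing box, the DIoU term of Equation (13).
    cx_p, cy_p = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cx_g, cy_g = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    rho2 = (cx_p - cx_g) ** 2 + (cy_p - cy_g) ** 2
    cw = max(pred[2], gt[2]) - min(pred[0], gt[0])
    ch = max(pred[3], gt[3]) - min(pred[1], gt[1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Aspect-ratio consistency term v and its weight alpha, Equation (14).
    w_p, h_p = pred[2] - pred[0], pred[3] - pred[1]
    w_g, h_g = gt[2] - gt[0], gt[3] - gt[1]
    v = (4 / math.pi ** 2) * (math.atan(w_g / (h_g + eps))
                              - math.atan(w_p / (h_p + eps))) ** 2
    alpha = v / ((1 - iou) + v + eps)

    return 1 - iou + rho2 / c2 + alpha * v
```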

2.3. Design of Pest and Disease Detection System under OCP

This study focused on the OCP and preliminarily applied the Mobile-NetV2SSDLite model to identify tree diseases and pests. This edge computing strategy optimized data collection and primary processing and sped up pest detection. Image transmission technology then transferred the data to the ground processing unit, where the YOLO-v3 model fused with CIoU performed in-depth analysis and localization. The OCP was designed to perform computing tasks close to the data source, significantly improving data processing speed. The platform supports two models: unidirectional data flow and bidirectional data flow, as shown in Figure 5. The one-way data flow model mainly handles data reception or transmission, while the bidirectional data flow model supports data exchange between cloud and edge devices and performs complex computing tasks. Compared with traditional cloud computing, this airborne platform effectively improved timeliness by processing computing tasks near the data source.
The OCP consists of a Raspberry Pi 4 Model B (Raspberry Pi Foundation, Cambridge, UK), the Mobile-NetV2SSDLite model, and a 4K camera, installed on the DJI M600 UAV (DJI Innovation Technology Co., Ltd., Shenzhen, China) for aerial data collection. After processing by the Raspberry Pi, only images marked as containing potential tree pests and diseases were transmitted back to the ground terminal, reducing the data processing burden. As a recent embedded computer, the Raspberry Pi 4 Model B runs a Linux system and enjoys rich community support, making it an ideal airborne edge computing device. This platform not only reduced the pressure on the data processing terminal but also simplified the implementation of the entire detection system, as shown in Figure 6.
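The onboard filtering strategy can be summarized by the schematic loop below. The function names (capture_frame, lightweight_detect, send_to_ground) and the 0.5 confidence cut-off are placeholders for the platform's camera interface, the Mobile-NetV2SSDLite inference call, and the image transmission link; they are assumptions for illustration.

```python
CONFIDENCE_THRESHOLD = 0.5  # assumed cut-off for a "suspected" detection


def onboard_loop(capture_frame, lightweight_detect, send_to_ground):
    while True:
        frame = capture_frame()
        if frame is None:
            break  # end of mission
        detections = lightweight_detect(frame)  # onboard Mobile-NetV2SSDLite
        # Transmit only frames with at least one suspected diseased tree,
        # reducing the downlink and the ground terminal's workload.
        if any(d["score"] >= CONFIDENCE_THRESHOLD for d in detections):
            send_to_ground(frame, detections)
```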
With the advancement of deep object detection technology, many models suitable for embedded devices have emerged, such as Mobile-NetV2SSDLite, YOLO Tiny, and YOLO Nano. These models have been widely deployed on platforms such as the Raspberry Pi and have shown good detection performance. Each deep object detection model has its unique advantages and disadvantages; for example, Mobile-NetV2SSDLite performed well in detecting pine wilt disease. The model architecture is shown in Figure 7.
After an image is input, the first step is the Mobile-NetV2SSDLite feature extraction network. When generating feature maps, the network predicts detection boxes at six different scales, and regression training is performed at each scale. After training, the final category and bounding box are obtained through non-maximum suppression. The feature extractor is crucial for the effectiveness of tree pest and disease detection models. MobileNetV2SSDLite is an advanced version of MobileNetV1SSDLite and has been applied in multiple fields such as classification, object detection, and semantic segmentation. The network introduces linear bottlenecks and shortcut connections, enhancing the model structure. Therefore, the Mobile-NetV2SSDLite model is suitable for running on OCPs such as the Raspberry Pi in terms of both accuracy and speed. To evaluate the performance of the four tree pest and disease detection models in real environments, this study used four key indicators: the precision, recall, and overall accuracy of the model, as well as the processing speed per image. These indicators jointly consider the efficiency and accuracy of the model in identifying pine-wilt-diseased trees, ensuring a comprehensive and practical evaluation, as shown in Equation (15) [32].
$$Precision = \frac{TP}{TP + FP}, \quad Recall = \frac{TP}{TP + FN}, \quad Accuracy = \frac{TP + TN}{TP + FP + TN + FN}$$
In Equation (15), four variables are used to evaluate model performance. True positives $TP$ denote positive samples correctly recognized by the model, and true negatives $TN$ denote negative samples correctly recognized by the model. False positives $FP$ denote negative samples that the model incorrectly identified as positive, while false negatives $FN$ denote positive samples that the model failed to recognize [33]. Together, these variables provide a comprehensive picture of the accuracy of model predictions, reflecting their effectiveness and reliability in practical applications.
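Equation (15) translates directly into code; the short helper below is a sketch for computing the three indicators from confusion matrix counts.

```python
def detection_metrics(tp, fp, tn, fn):
    """Precision, recall, and accuracy as defined in Equation (15)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, accuracy
```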

3. Results

This study first determined the test set and evaluation criteria and, on this basis, conducted a detailed analysis of the training and detection results of the HOG-SVM model. After training was completed, the recognition performance of the deep object detection model on the test set was further evaluated. To ensure the accuracy of the detection method, TensorFlow, an open-source deep learning framework, was adopted, and a comprehensive performance test was conducted on the YOLO-v3 tree pest and disease detection method combined with CIoU. Subsequently, this study compared and analyzed the performance of four different models, with a particular focus on the performance curves of YOLO-v3 and YOLO-v3 CIoU in identifying tree pests and diseases. Finally, the Mobile-NetV2SSDLite deep object detection model was used to identify tree pests and diseases. After 10,000 training iterations, this model was applied to a test dataset to evaluate its actual effectiveness in detecting tree diseases and pests.

3.1. Training and Detection Results and Analysis of the HOG-SVM Model

A stratified sampling method was adopted in the experimental design, and representative forest areas were selected for UAV remote sensing flights to ensure the diversity and universality of the collected data. The implementation area covers different forest types, including coniferous, broad-leaved, and mixed forests, located in climatic zones ranging from temperate to subtropical. The experimental site is located in Lianhua Mountain, Tai'an City, at 117°40′0.4″ E, 36°2′49″ N, with an altitude of 999 m. A forest survey conducted in the spring of 2022 revealed a local spread of pine wood nematode disease in the area. Images of pine-wood-nematode-infected trees were captured in August 2022 by DJI "Yu" (Mavic 2) and DJI M600 unmanned aerial vehicles. The aim was to evaluate the adaptability and accuracy of the proposed algorithm under different environmental conditions. The experimental design and implementation areas are shown in Table 1.
This study applied an SVM classifier trained on HOG features. During training, negative samples that were frequently misjudged were filtered out and added back to the training set (hard-negative mining), gradually enhancing the predictive ability of the model. After training, the SVM classifier had learned the core parameters of the HOG model. These parameters were used to score potential targets in the test set, yielding TP, TN, FP, and FN values of 65, 0, 95, and 22, respectively. Excluding areas with a probability lower than 0.5 and removing windows with an overlap rate exceeding 30% effectively reduced the repeatability of the results, yielding the recognition results shown in Figure 8, where the red line represents accuracy, the green line the missed detection rate, and the blue line the false detection rate. In Figure 8, the missed detection rate of this method is 0.25, the false detection rate is 0.52, and the accuracy is only 0.75, which fails to meet the expected working standards and is not sufficient for the monitoring needs of forest protection work. Although the algorithm can identify tree diseases and pests, its performance still needs further improvement and optimization to meet the requirements of practical applications.
In Figure 9, the red boxes represent the detection range, and the figure shows the performance of the trained model on the test set. In Figure 9a,b, the model accurately identifies and locates diseased trees. However, Figure 9c presents a challenge: the model fails to detect all trees affected by pine wilt disease when vegetation coverage is lush and image features are numerous. This reflects the limitations of HOG features in processing small targets. In Figure 9d, the positioning box is sometimes inaccurate and even misjudges non-withered tree areas, which may be related to the application of color features in mask processing. Therefore, although the combination of the HOG and SVM models shows some ability in detecting pine wilt disease, its high missed detection rate is a significant weakness. Given the rapid spread of pine wilt disease, this missed detection rate does not meet the requirements of FDP detection, and further optimization and adjustment are required.

3.2. Results and Analysis of YOLO-v3 Combined with CIoU Deep Object Detection Model

This study adopted Mobile-NetV2SSDLite, YOLO-v3, and other models to identify forest pests and diseases and to obtain the training and detection results of the optimal model, and the performance of each model was compared and analyzed. To verify the effectiveness of the detection method, this study used the TensorFlow open-source deep learning framework to perform performance testing on the YOLO-v3 tree pest and disease detection method combined with CIoU. For a comprehensive evaluation, the proposed model was compared with other commonly used models, namely YOLO-v3, the Fast Regional Convolutional Neural Network (FRCNN), and the Single Shot Multibox Detector (SSD). The experiments used nodes equipped with high-performance GPUs and mobile workstations, and Table 2 shows their software and hardware configurations.
During the training process, this study increased the number of samples through data augmentation techniques, used Adam as the optimizer, and set the initial learning rate to 0.001. Training was conducted with 16 images per batch, and the model underwent a total of 10,000 iterations until the average loss decreased below 0.1. Figure 10 shows the relationship between the loss value and the iteration number for the YOLO-v3 and YOLO-v3 CIoU detection methods during training. The YOLO-v3 CIoU method shows a faster and more stable decrease in loss, indicating that the improved YOLO-v3 CIoU model has better convergence during training. After training, the proposed model was subjected to performance testing to verify its practical effectiveness.
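The reported training configuration (Adam, initial learning rate 0.001, batches of 16 augmented images, 10,000 iterations) corresponds to a loop of roughly the following shape, sketched here in PyTorch; the model, data loader, and loss function are placeholders rather than the authors' code.

```python
import torch


def train(model, data_loader, loss_fn, iterations=10000):
    # Adam optimizer with the reported initial learning rate of 0.001.
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    step = 0
    while step < iterations:
        for images, targets in data_loader:  # batches of 16 augmented images
            optimizer.zero_grad()
            loss = loss_fn(model(images), targets)  # e.g., CIoU-based loss
            loss.backward()
            optimizer.step()
            step += 1
            if step >= iterations:
                break
    return model
```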
Figure 11 shows the performance of the different models in the evaluation tests. The precision of FRCNN is 80%, that of SSD is 92%, and YOLO-v3 and YOLO-v3 CIoU both reach 100%. The recall rates of FRCNN, SSD, YOLO-v3, and YOLO-v3 CIoU are 96%, 94%, 93%, and 99%, respectively. In terms of accuracy, the four models achieve 78%, 88%, 93%, and 99%, respectively. The analysis time for a single image is 10.61 s for FRCNN, 0.21 s for SSD, 0.37 s for YOLO-v3, and 0.32 s for YOLO-v3 CIoU. YOLO-v3 CIoU leads on the combined indicators, while FRCNN has low applicability in practical environments due to its high number of false positives. In terms of processing timeliness, YOLO-v3 is nearly as fast as SSD and far surpasses FRCNN. Therefore, the proposed YOLO-v3 CIoU model, with a per-image processing time below 0.5 s and an accuracy above 95%, balances speed and accuracy and is well suited to fast and accurate FDP detection tasks.
Figure 12 shows the performance curves of YOLO-v3 and YOLO-v3 CIoU in identifying tree diseases and pests. Usually, when a model has a high recall, its precision may be low, and vice versa. In Figure 12, the YOLO-v3 CIoU model performs well on both indicators, and compared to the unimproved YOLO-v3 model, it maintains a high recall rate while increasing accuracy by 0.53. This indicates that the YOLO-v3 CIoU model has made significant progress in the localization and identification of tree diseases and pests.

3.3. Analysis of Training Results of Mobile-NetV2SSDLite Model for Diseases and Pests under OCP

The core evaluation indicators for the performance of deep object detection models usually include the average accuracy curve of the entire class and the cumulative loss curve, which are directly related to the detection ability of the model in practical applications. This study applies the Mobile-NetV2SSDLite deep object detection model to the identification of tree diseases and pests and analyzes it on a well-organized test dataset. Figure 13 shows the variation curves of the average accuracy curve and cumulative loss of the entire class over 10,000 training cycles.
In Figure 13, during the training of the Mobile-NetV2SSDLite model, the total loss value stabilizes at around 9000 iterations, and the average accuracy reaches a peak of 0.60, indicating that the model has converged at this point. Therefore, the Mobile-NetV2SSDLite model trained for 10,000 iterations was selected for testing, with the results shown in Figure 14. In Figure 14a,b, although the model successfully identifies suspected pine wilt trees, there are shortcomings in determining the bounding box, and the non-maximum suppression needs optimization. The localization problems in Figure 14c,d suggest that the model may be limited in feature recognition due to its lightweight design. Although the Mobile-NetV2SSDLite model faces challenges, most images of pine wilt disease trees were successfully identified, as shown in Figure 14a; the blurring and halo effects caused by the shooting height in Figure 14b affect the performance of the model. Overall, Mobile-NetV2SSDLite met the expectations of the experiment in the detection of tree diseases and insect pests: it can identify whether pine wood nematode disease trees are present in an image and complete the image preprocessing task of edge computing.
Although the computing power of the OCP is limited, tests show that it can effectively perform tree pest and disease detection tasks. After filtering out images of suspected pine wilt disease trees, it sends them back to the ground computing terminal, reducing the latter's workload. In Figure 14e, the ground computing terminal runs well and collaborates with the airborne platform to complete the detailed recognition task in the second stage, demonstrating seamless coordination between the two ends and excellent recognition performance.
Table 3 shows the results of the k-fold cross-validation used for model evaluation. The average accuracy on the training and validation sets reaches 92.5% and 88.7%, respectively, demonstrating the robustness of the model evaluation.
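For reference, the cross-validation protocol behind Table 3 can be sketched as below with scikit-learn's KFold; the evaluate callback stands in for one train/validate round of the detection model, and k = 5 is an assumption since the fold count is not stated.

```python
import numpy as np
from sklearn.model_selection import KFold


def cross_validate(samples, evaluate, k=5):
    """samples: NumPy array of sample indices or features; evaluate: callable
    that trains on one split and returns a validation accuracy."""
    scores = []
    kf = KFold(n_splits=k, shuffle=True, random_state=42)
    for train_idx, val_idx in kf.split(samples):
        scores.append(evaluate(samples[train_idx], samples[val_idx]))
    return float(np.mean(scores)), float(np.std(scores))
```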

4. Discussion

A new forest pest detection method combining UAV remote sensing data with a deep object detection model was proposed, introducing the YOLO-v3 algorithm based on loss function optimization. It aims to improve the efficiency and accuracy of forest pest monitoring and represents an important improvement over traditional detection methods. The experimental results show that the new model improved processing speed, with an average image processing time of less than 0.5 s and an accuracy of more than 95%, clearly better than the comparison algorithms. These findings highlight the potential of deep learning techniques in forest protection and pest prevention programs.
However, the study also revealed the limitations of the model when dealing with complex forest environments, which may be due to the model's insufficient adaptability to complex backgrounds. Future research should focus on improving the generality and adaptability of the model to identify pests and diseases more accurately in different forest terrains. In addition, given the constraints of operational complexity and computing resources, developing more efficient deep object detection models to reduce the dependence on computing resources will be a key direction of future research. Compared with the remote sensing system of Wang W et al., the optimized system using the improved YOLO-v3 CIoU model significantly improves the accuracy and recall rate, reaching 99% and 99%, respectively, whereas those of the model by Wang W et al. are 93% and 93%. In addition, the C4.5 algorithm performed well in predicting crop characteristics, but its prediction accuracy for LAI and CC (highest R² of 0.841 and 0.883) was not superior to the accuracy of the model in this study [34].
In this study, the optimized YOLO-v3 algorithm not only improves the accuracy of pest detection but also significantly reduces the processing time, which is essential for rapid responses to forest pest outbreaks and timely control measures. However, the actual application effect of the model in a wider forest area needs to be further verified to ensure its stability and reliability.
Another aspect of this research concerns the application of optimization algorithms, especially the selection and adjustment of loss functions. Through the optimization of the YOLO-v3 algorithm, more accurate identification of pests and diseases in complex forest environments is realized. This not only improves the recognition ability of the model but also provides a new perspective for future research. Nevertheless, models still face challenges when dealing with highly heterogeneous and dynamically changing natural environments, so future work needs to focus on enhancing their robustness and adaptability. In the study by Li C et al., vegetation parameters were compared with contemporaneous CRU (Climatic Research Unit) temperature observation data; the present study reflects the vegetation situation better. The CRU global meteorological dataset used has a resolution of 0.5° × 0.5°, which is coarse compared with the 0.01° × 0.01° resolution of the NDVI data, so there are errors in analyzing the impact of climate change on vegetation phenology, especially in detailed studies of small areas. Although that study provided the overall trend of NDVI change, it did not distinguish the different responses of different vegetation types to climate change. Although the asymmetric Gaussian fitting method was used to extract vegetation phenological factors, the S-G fitting method may lead to overfitting. The ecological mechanism of this interaction and its specific impact on regional climate feedback have not been fully explored, and the feedback mechanism between temperature and vegetation phenological factors needs further detailed study [35]. Table 4 compares the advantages and disadvantages of the different methods.
In terms of model accuracy and recall rate, the YOLO-v3 CIoU model showed significant advantages, reaching 99.04%, while the method of Wang W et al. reached only 93.67%. Compared with the coarse 0.5° × 0.5° data used by Li C et al., the CIoU method achieves higher resolution based on 0.01° × 0.01° NDVI data. The research algorithm performed well in predicting crop characteristics, and the CIoU model showed higher robustness and adaptability in complex forest environments.
In terms of model implementation, this study also discussed the application of UAV remote sensing technology in forest pest monitoring. The high-resolution images provided by drones offer a rich data source for deep learning models. However, this also presents challenges in data processing and storage, especially in large-scale monitoring. Therefore, optimizing the data processing and transmission pipeline and improving computational efficiency are key to realizing the wide application of this method.
This research is of great significance to the field of forest ecology. By introducing the YOLO-v3 algorithm and CIoU loss function, the accuracy and efficiency of forest pest detection are significantly improved, and strong technical support is provided for forest health monitoring. These findings are helpful in promoting the scientific and accurate management of forest ecosystems and have great practical and theoretical value.
By integrating the model into existing forest management systems or decision support tools, resource allocation can be optimized and the ability to cope with forest pests and diseases can be improved, thus ensuring the sustainable use of forest resources and ecological balance. In addition, the methodology and technical framework of this study can provide references for other ecological monitoring fields and help promote the application of remote sensing technology in environmental monitoring and natural resource management. In summary, although this study has made some achievements in the field of forest pest detection, it still faces challenges in the universality, accuracy, and practicability of the model. Future studies should focus on the applicability of the model in different environments and explore more diverse data sources and algorithm optimization strategies to further improve the efficiency and accuracy of forest pest detection. In addition, research should take into account the feasibility and cost-effectiveness of practical applications to ensure that these technologies can be effectively applied in practical forest protection efforts.

5. Conclusions

The integration of drone remote sensing data with deep object detection models, particularly the optimized YOLO-v3 algorithm, demonstrates a promising approach to enhancing forest disease and pest detection capabilities. The results from the implementation of this method indicate a notable improvement in processing speed and accuracy, with an average image processing time of less than 0.5 s and an accuracy rate exceeding 95%. These findings underscore the potential of the proposed model to significantly contribute to the development of more efficient and effective forest inspection, protection, and prevention strategies. Despite these advancements, the study identifies limitations in the model’s performance in complex forest environments, highlighting an area for future research. Further investigations should focus on enhancing the model’s adaptability and universality across varied forest terrains. Additionally, exploring more advanced deep object detection models that require fewer computing resources could provide valuable insights into improving the practical applicability of these methods in forest protection and pest control efforts.

Author Contributions

Y.W. and H.Y. collected the samples. H.Y. and Y.M. analysed the data. Y.W. and H.Y. conducted the experiments and analysed the results. All authors have read and agreed to the published version of the manuscript.

Funding

This work was sponsored in part by the Science and Technology Research Project of Jiangxi Provincial Department of Education: Application of multiplicative and additive mixed noise model in SAR (or Insar) data processing (No. GJJ218601); Science and Technology Research Project of Jiangxi Provincial Department of Education: Research on key technologies of low altitude UAV aerial survey in complex terrain (No. GJJ171514).

Data Availability Statement

The data used to support the findings of this study are included within the article.

Acknowledgments

The authors thank Nueva Ecija University of Science and Technology for its support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yang, L.; Yang, L.; Li, L. Integrated Prevention and Control Technology of Major Diseases and Insect Pests of Strawberry. Plant Dis. Pests 2020, 11, 31–34. [Google Scholar] [CrossRef]
  2. Barron, M.C.; Liebhold, A.M.; Kean, J.M.; Richardson, B.; Brockerhoff, E.G. Habitat fragmentation and eradication of invading insect herbivores. J. Appl. Ecol. 2021, 57, 590–598. [Google Scholar] [CrossRef]
  3. Gougherty, A.V.; Davies, T.J. Most countries are vulnerable to novel pest invasions and under-report the diversity of tree pests. Glob. Ecol. Biogeogr. 2022, 31, 2314–2322. [Google Scholar] [CrossRef]
  4. Roy, A.M.; Bhaduri, J. A Deep Learning Enabled Multi-Class Plant Disease Detection Model Based on Computer Vision. AI 2021, 2, 413–428. [Google Scholar] [CrossRef]
  5. Karar, M.E.; Alsunaydi, F.; Albusaymi, S.; Alotaibi, S. A New Mobile Application of Agricultural Pests Recognition Using Deep Learning in Cloud Computing System. Alex. Eng. J. 2021, 60, 4423–4432. [Google Scholar] [CrossRef]
  6. Chowdhury, M.E.H.; Rahman, T.; Khandakar, A.; Ayari, M.A.; Khan, A.U.; Khan, M.S.; Al-Emadi, N.; Reaz, M.B.I.; Islam, M.T.; Ali, S.H.M. Automatic and Reliable Leaf Disease Detection Using Deep Learning Techniques. AgriEngineering 2021, 3, 294–312. [Google Scholar] [CrossRef]
  7. Li, R.; Shen, Y. YOLOSR-IST: A deep learning method for small target detection in infrared remote sensing images based on super-resolution and YOLO. Signal Process. 2023, 208, 108962. [Google Scholar] [CrossRef]
  8. Huang, T.; Zhu, J.; Liu, Y.; Tan, Y. UAV aerial image target detection based on BLUR-YOLO. Remote Sens. Lett. 2023, 14, 186–196. [Google Scholar] [CrossRef]
  9. Xu, B.; Meng, R.; Chen, G.; Liang, L.; Lv, Z.; Zhou, L.; Sun, R.; Zhao, F.; Yang, W. Improved weed mapping in corn fields by combining UAV-based spectral, textural, structural, and thermal measurements. Pest Manag. Sci. 2023, 79, 2591–2602. [Google Scholar] [CrossRef] [PubMed]
  10. Simon, K.; Vicent, M.; Addah, K.; Bamutura, D.; Atwiine, B.; Nanjebe, D.; Mukama, A.O. Comparison of Deep Learning Techniques in Detection of Sickle Cell Disease. AIA 2023, 1, 252–259. [Google Scholar] [CrossRef]
  11. Wang, B. Identification of Crop Diseases and Insect Pests Based on Deep Learning. Sci. Program. 2022, 2022, 9179998. [Google Scholar] [CrossRef]
  12. Zhu, C.; Sun, W.; Han, C.; Wang, M. Analysis and Study on Characteristics and Detection Methods of Cotton Diseases and Insect Pests. Plant Dis. Pests 2022, 13, 17–22. [Google Scholar] [CrossRef]
  13. Huang, X.; Chen, A.; Zhou, G.; Zhang, X.; Wang, J.; Peng, N.; Yan, N.; Jiang, C. Tomato Leaf Disease Detection System Based on FC-SNDPN. Multimed. Tools Appl. 2023, 82, 2121–2144. [Google Scholar] [CrossRef]
  14. Butera, L.; Ferrante, A.; Jermini, M.; Prevostini, M.; Alippi, C. Precise Agriculture: Effective Deep Learning Strategies to Detect Pest Insects. IEEE/CAA J. Autom. Sin. 2021, 9, 246–258. [Google Scholar] [CrossRef]
  15. Rustia, D.J.A.; Chao, J.; Chiu, L.; Wu, Y.; Chung, J.; Hsu, J.; Lin, T. Automatic greenhouse insect pest detection and recognition based on a cascaded deep learning classification method. J. Appl. Entomol. 2020, 145, 206–222. [Google Scholar] [CrossRef]
  16. Wang, X.; Liu, J.; Zhu, X. Early real-time detection algorithm of tomato diseases and pests in the natural environment. Plant Methods 2021, 17, 43. [Google Scholar] [CrossRef] [PubMed]
  17. Kaur, P.; Harnal, S.; Gautam, V.; Singh, M.P.; Singh, S.P. A novel transfer deep learning method for detection and classification of plant leaf disease. J. Ambient. Intell. Humaniz. Comput. 2023, 14, 12407–12424. [Google Scholar] [CrossRef]
  18. Syed-Ab-Rahman, S.F.; Hesamian, M.H.; Prasad, M. Citrus disease detection and classification using end-to-end anchor-based deep learning model. Appl. Intell. 2022, 52, 927–938. [Google Scholar] [CrossRef]
19. Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. A survey on deep learning-based identification of plant and crop diseases from UAV-based aerial images. Clust. Comput. 2022, 26, 1297–1317.
20. Altuntaş, Y.; Kocamaz, F. Deep Feature Extraction for Detection of Tomato Plant Diseases and Pests based on Leaf Images. Celal Bayar Univ. J. Sci. 2021, 17, 145–157.
21. Wani, J.A.; Sharma, S.; Muzamil, M.; Ahmed, S.; Sharma, S.; Singh, S. Machine Learning and Deep Learning Based Computational Techniques in Automatic Agricultural Diseases Detection: Methodologies, Applications, and Challenges. Arch. Comput. Methods Eng. 2021, 29, 641–677.
22. Mallick, T.; Biswas, S.; Das, A.K.; Saha, H.N.; Chakrabarti, A.; Deb, N. Deep learning based automated disease detection and pest classification in Indian mung bean. Multimed. Tools Appl. 2022, 82, 12017–12041.
23. Wang, D.; Wang, Y.; Li, M.; Yang, X.; Wu, J.; Li, W. Using an Improved YOLOv4 Deep Learning Network for Accurate Detection of Whitefly and Thrips on Sticky Trap Images. Trans. ASABE 2021, 64, 919–927.
24. Moussafir, M.; Chaibi, H.; Saadane, R.; Chehri, A.; El Rharras, A.; Jeon, G. Design of efficient techniques for tomato leaf disease detection using genetic algorithm-based and deep neural networks. Plant Soil 2022, 479, 251–266.
25. Ahmed, A.A.; Reddy, G.H. A Mobile-Based System for Detecting Plant Leaf Diseases Using Deep Learning. AgriEngineering 2021, 3, 478–493.
26. Yuan, Y.; Chen, L.; Wu, H.; Li, L. Advanced agricultural disease image recognition technologies: A review. Inf. Process. Agric. 2021, 9, 48–59.
27. Meshram, A.T.; Vanalkar, A.V.; Kalambe, K.B.; Badar, A.M. Pesticide spraying robot for precision agriculture: A categorical literature review and future trends. J. Field Robot. 2021, 39, 153–171.
28. Khatoon, S.; Hasan, M.; Asif, A.; Alshmari, M.; Yap, Y.-K. Image-based Automatic Diagnostic System for Tomato Plants using Deep Learning. Comput. Mater. Contin. 2021, 67, 595–612.
29. Sourav, S.U.; Wang, H. Intelligent Identification of Jute Pests Based on Transfer Learning and Deep Convolutional Neural Networks. Neural Process. Lett. 2022, 55, 2193–2210.
30. Acebrón-García-De-Eulate, M.; Blundell, T.L.; Vedithi, S.C. Strategies for drug target identification in Mycobacterium leprae. Drug Discov. Today 2021, 26, 1569–1573.
31. Thangaraj, R.; Anandamurugan, S.; Pandiyan, P.; Kaliappan, V.K. Artificial intelligence in tomato leaf disease detection: A comprehensive review and discussion. J. Plant Dis. Prot. 2021, 129, 469–488.
32. Yu, H.; Li, Z.; Bi, C.; Chen, H. An effective deep learning method with multi-feature and attention mechanism for recognition of Chinese rice variety information. Multimed. Tools Appl. 2022, 81, 15725–15745.
33. Zhou, H.; Yuan, X.; Zhou, H.; Shen, H.; Ma, L.; Sun, L.; Fang, G.; Sun, H. Surveillance of pine wilt disease by high resolution satellite. J. For. Res. 2022, 33, 1401–1408.
34. Wang, W.; Gao, X.; Cheng, Y.; Ren, Y.; Zhang, Z.; Wang, R.; Cao, J.; Geng, H. QTL Mapping of Leaf Area Index and Chlorophyll Content Based on UAV Remote Sensing in Wheat. Agriculture 2022, 12, 595.
35. Li, C.; Zhuang, D.; He, J.; Wen, K. Spatiotemporal variations in remote sensing phenology of vegetation and its responses to temperature change of boreal forest in tundra-taiga transitional zone in the Eastern Siberia. J. Geogr. Sci. 2023, 33, 464–482.
Figure 1. Drones and their flight trajectories: (a) the DJI M600 drone and (b) flight trajectories of the DJI M600 drone.
Figure 2. The selection process of positive and negative samples.
Figure 3. SVM's processing of different data samples: (a) SVM's partitioning of linear samples, (b) SVM's partitioning of approximately linear samples, and (c) SVM's partitioning of nonlinear samples.
Figure 4. Deep target detection model YOLO-v3.
Figure 5. One-way data flow and two-way data flow.
Figure 6. System design with the Raspberry Pi 4 Model B.
Figure 7. Model network structure diagram.
Figure 8. Performance of the HOG-SVM model.
Figure 9. Recognition effect of the deep target detection model: (a) alignment detection, (b) accurate positioning, (c) undetected, and (d) error detection.
Figure 10. Comparison of iterations of the YOLO-v3 and YOLO-v3 CIoU detection methods.
Figure 11. Performance of different models.
Figure 12. Performance curves of YOLO-v3 and YOLO-v3 CIoU for tree pest and disease identification: (a) YOLO-v3 CIoU and (b) YOLO-v3.
Figure 13. Change curves of the whole-class average accuracy and the cumulative loss: (a) average accuracy over all classes and (b) cumulative loss.
Figure 14. Testing results of the proposed model: (a) repeated marking, (b) error detection, (c) correct detection, (d) dislocation, and (e) airborne edge computing.
Table 1. Experimental design and implementation area.
Dimension | Description
Experimental design | Stratified sampling to ensure data diversity
Implementation area | Coniferous, broad-leaved, and mixed forests, covering climatic zones from temperate to subtropical
Algorithm accuracy | Effective identification of pests and diseases
Adaptability | Adaptability across different forest environments
Cost efficiency | Approaches for improving cost efficiency
Table 2. Hardware and software configuration.
Item | Node Configuration | Mobile Workstation Configuration
CPU | Intel processor | Intel Core i5
Memory | 128 GB DDR5 RDIMM | 156 GB DDR4
GPU | NVIDIA | NVIDIA GeForce GTX 1080
Operating system | Linux | Linux CentOS 8
Object detection framework | TensorFlow 2.4.1 | TensorFlow 2.4.1
CUDA | CUDA 11.0 | CUDA 11.0
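As a quick sanity check that a node or mobile workstation matches the software stack in Table 2, a minimal Python snippet such as the following (assuming TensorFlow is installed as listed) prints the framework version and the GPUs visible through CUDA:

```python
import tensorflow as tf

# Report the installed TensorFlow build; Table 2 lists version 2.4.1.
print("TensorFlow version:", tf.__version__)

# True if this TensorFlow build was compiled against CUDA (11.0 in Table 2).
print("Built with CUDA:", tf.test.is_built_with_cuda())

# List the GPUs TensorFlow can see, e.g., the GeForce GTX 1080 workstation card.
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))
```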
Table 3. Results of k-fold cross-validation for model evaluation.
Fold (k) | Training Set Accuracy | Validation Set Accuracy
1 | 0.925 | 0.880
2 | 0.930 | 0.895
3 | 0.920 | 0.900
4 | 0.935 | 0.875
5 | 0.915 | 0.885
Average | 0.925 | 0.887
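The per-fold accuracies in Table 3 follow the standard k-fold protocol: the data are split into k parts, each part serves once as the validation set, and the model is retrained from scratch on the remaining folds. The sketch below illustrates this procedure with scikit-learn's KFold; build_model is a hypothetical factory returning a freshly compiled Keras classifier with an accuracy metric, not code from the study.

```python
import numpy as np
from sklearn.model_selection import KFold

def k_fold_evaluate(build_model, X, y, k=5, epochs=10):
    """Train and evaluate a fresh model on each of k folds."""
    kf = KFold(n_splits=k, shuffle=True, random_state=42)
    train_acc, val_acc = [], []
    for fold, (tr, va) in enumerate(kf.split(X), start=1):
        model = build_model()  # new, untrained model for every fold
        model.fit(X[tr], y[tr], epochs=epochs, verbose=0)
        # evaluate() returns [loss, accuracy] for a model compiled
        # with metrics=["accuracy"]
        acc_tr = model.evaluate(X[tr], y[tr], verbose=0)[1]
        acc_va = model.evaluate(X[va], y[va], verbose=0)[1]
        train_acc.append(acc_tr)
        val_acc.append(acc_va)
        print(f"Fold {fold}: train={acc_tr:.3f}, val={acc_va:.3f}")
    print(f"Average: train={np.mean(train_acc):.3f}, val={np.mean(val_acc):.3f}")
    return train_acc, val_acc
```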
Table 4. Comparison of advantages and disadvantages of the different methods.
Comparison Dimension | This Study | Wang W et al. [34] | Li C et al. [35]
Precision | 99.04% | 93.67% | Using CRU data, the resolution is lower
Recall | 99.12% | 93.85% | /
Robustness | High robustness; handles complex forest environments effectively | The C4.5 algorithm performed well for crop-trait prediction but was inferior to this study for LAI and CC prediction | The analysis of climate-change effects on vegetation phenology does not distinguish the responses of different vegetation types in detail
Adaptability | Good adaptability in complex natural environments | Lower precision and recall may limit applicability under different environmental conditions | Low resolution limits adaptability to small-scale detailed studies
Data resolution | High-resolution remote sensing data | / | 0.5° × 0.5° resolution data
Methodological innovation | Detection model optimized with the CIoU loss function | The more traditional C4.5 algorithm may lack innovation | Reliance on existing climate data and methods may limit methodological innovation
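Table 4 notes that the detection model is optimized with the CIoU loss. Unlike plain IoU, CIoU additionally penalizes the distance between box centers and the mismatch in aspect ratio. The following framework-free sketch of the standard CIoU formulation for a single pair of axis-aligned boxes is illustrative only, not the study's implementation:

```python
import math

def ciou_loss(box_pred, box_true, eps=1e-9):
    """CIoU loss (1 - CIoU) for boxes given as (x1, y1, x2, y2), x1 < x2, y1 < y2."""
    px1, py1, px2, py2 = box_pred
    tx1, ty1, tx2, ty2 = box_true

    # Intersection, union, and plain IoU
    iw = max(0.0, min(px2, tx2) - max(px1, tx1))
    ih = max(0.0, min(py2, ty2) - max(py1, ty1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter
    iou = inter / (union + eps)

    # Squared distance between box centers
    rho2 = ((px1 + px2) - (tx1 + tx2)) ** 2 / 4.0 + \
           ((py1 + py2) - (ty1 + ty2)) ** 2 / 4.0

    # Squared diagonal of the smallest box enclosing both boxes
    cw = max(px2, tx2) - min(px1, tx1)
    ch = max(py2, ty2) - min(py1, ty1)
    c2 = cw ** 2 + ch ** 2 + eps

    # Aspect-ratio consistency term and its trade-off weight
    v = (4.0 / math.pi ** 2) * (math.atan((tx2 - tx1) / (ty2 - ty1))
                                - math.atan((px2 - px1) / (py2 - py1))) ** 2
    alpha = v / ((1.0 - iou) + v + eps)

    return 1.0 - (iou - rho2 / c2 - alpha * v)

# Example: a prediction shifted away from the ground-truth box
# print(ciou_loss((50, 50, 150, 160), (60, 60, 160, 170)))
```

Because the center-distance and aspect-ratio penalties stay informative even when two boxes barely overlap, minimizing this loss tends to converge faster than minimizing 1 - IoU alone.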