Article

Detection of Tagosodes orizicolus in Aerial Images of Rice Crops Using Machine Learning

by Angig Rivera-Cartagena, Heber I. Mejia-Cabrera * and Juan Arcila-Diaz *
School of Systems Engineering, Universidad Señor de Sipán, Chiclayo 14000, Peru
* Authors to whom correspondence should be addressed.
AgriEngineering 2025, 7(5), 147; https://doi.org/10.3390/agriengineering7050147
Submission received: 20 January 2025 / Revised: 21 March 2025 / Accepted: 25 March 2025 / Published: 7 May 2025

Abstract

This study employs RGB imagery and machine learning techniques to detect Tagosodes orizicolus infestations in “Tinajones” rice crops during the flowering stage, a critical challenge for agriculture in northern Peru. High-resolution images were acquired using an unmanned aerial vehicle (UAV) and preprocessed by extracting 256 × 256-pixel segments, focusing on three classes: infested zones, non-cultivated areas, and healthy rice crops. A dataset of 1500 images was constructed and utilized to train deep learning models based on VGG16 and ResNet50. Both models exhibited highly comparable performance, with VGG16 attaining a precision of 98.274% and ResNet50 achieving a precision of 98.245%, demonstrating their effectiveness in identifying infestation patterns with high reliability. To automate the analysis of complete UAV-acquired images, a web-based application was developed. This system receives an image, segments it into grids, and preprocesses each section using resizing, normalization, and dimensional adjustments. The pretrained VGG16 model subsequently classifies each segment into one of three categories: infested zone, non-cultivated area, or healthy crop, overlaying the classification results onto the original image to generate an annotated visualization of detected areas. This research contributes to precision agriculture by providing an efficient and scalable computational tool for early infestation detection, thereby supporting timely intervention strategies to mitigate potential crop losses.

1. Introduction

Agriculture faces significant challenges due to pests that affect crops, limiting productive progress and causing substantial economic losses. This issue is prevalent in various regions worldwide, and the early detection of pests has become a critical strategy to mitigate their impact. In Bangladesh, for instance, rice crop yields are reported to decrease by 10% to 15% due to 10 major diseases, such as rice blast, which causes annual losses of up to 30% [1,2].
Tagosodes orizicolus Muir (Hemiptera: Delphacidae), commonly known as “sogata”, is one of the most prevalent and destructive pests in rice crops, particularly in the tropical and subtropical regions of the Americas. This species inflicts two types of damage on rice plants. Direct damage occurs through feeding and oviposition in the mesophyll and phloem, compromising plant health and resulting in production losses of 50% to 75%. Indirect damage arises from its role as a vector of the rice hoja blanca (“white leaf”) virus, which can cause total crop loss in susceptible varieties [3,4].
The duality of this pest’s effects, encompassing both mechanical and viral damage, underscores the critical importance of implementing integrated management strategies to mitigate its impact on rice crops. Early identification not only reduces associated costs but also facilitates the selection of the most effective control methods, particularly in light of the limitations of traditional techniques [5,6].
Currently, the methods employed by agronomists for identifying diseases in rice crops include direct observations, selection of affected areas, classification of present pests, and damage assessment. These procedures are complemented by continuous photographic documentation to monitor damage progression and adjust control strategies [7]. Unlike traditional techniques, biologists utilize advanced methods such as spectroscopy, optical techniques, and laser-based approaches, which enable rapid and automated pest detection based on their spectroscopic properties and wingbeat frequency patterns [8]. However, these methods face significant limitations when applied to large-scale agricultural areas.
The adoption of advanced technologies, such as remote sensing via satellites and UAVs, has significantly transformed agricultural monitoring processes. These technologies enable remote detection from high altitudes, facilitating the monitoring of extensive areas with high precision, reduced response times, and considerable reductions in operational costs. In particular, UAVs, which are more accessible to farmers, have become a popular choice, as they allow real-time monitoring, efficiently covering large areas at a much lower cost [9].
This study was conducted in the regions of Lambayeque and La Libertad, Peru, with the objective of developing a machine learning-based approach for the automated detection of Tagosodes orizicolus infestation in rice crops. Early identification of this pest is crucial, as it could help mitigate the excessive use of agrochemicals, thereby reducing their negative environmental impact and optimizing crop yields in future agricultural cycles [10]. In these regions, many farmers still face challenges in accurately identifying infestation symptoms, which are often mistaken for nutritional deficiencies. This underscores the need for technological tools that facilitate precise detection [11]. In this context, the research aligns with several United Nations Sustainable Development Goals (SDGs), particularly SDG 2: Zero Hunger, by enhancing rice productivity and promoting food security; SDG 12: Responsible Consumption and Production, by reducing pesticide use and encouraging more sustainable agricultural practices; and SDG 15: Life on Land, by protecting agricultural biodiversity through reduced environmental disruption [12].
To address this issue, the study proposes the implementation of machine learning algorithms to automate the early detection of areas affected by Tagosodes orizicolus in rice crops, using images captured by a UAV.
Given the importance of rice cultivation in agriculture, several studies have been conducted on the automatic detection of diseases using artificial intelligence (AI).
In [13], a model based on the ResNet50 architecture is proposed to identify rice diseases such as Bacterial Leaf Blight, Bacterial Leaf Streak, Bacterial Panicle Blight, Rice Blast, and Brown Spot. The model, trained with 13,876 images, achieved a training accuracy of nearly 99% and a testing accuracy of 92.83%. It stands out for its high precision and the potential for deployment on mobile devices and drones, facilitating its application in the field of smart agriculture.
To classify diseases in rice leaves, such as Blast, Brown Spot, Bacterial Blight, and Tungro, the study [14] employed deep learning and transfer learning techniques using a dataset of 5932 images captured with a smartphone. The proposed model, based on the VGG16 architecture, achieved an accuracy of 99.94%.
The study [15] presents a live video system that uses the YOLO algorithm to detect rice diseases such as Brown Spot, Leaf Blast, and Bacterial Blight in real time. The system, which utilizes a Raspberry Pi camera and 4G communication, achieved an accuracy of 80% despite hardware limitations.
Various techniques have been employed for the automatic detection of diseases in rice crops, including VGG16, ResNet, MobileNet, and YOLO. Table 1 presents relevant research on disease detection in rice crops, detailing the techniques used, datasets, devices employed for image capture, and the results obtained.
The reviewed studies employ various deep learning and machine learning techniques for the detection of diseases and pests in rice, differing in models, devices, and methodological approaches. Convolutional Neural Networks (CNNs), such as ResNet50 and VGG16, demonstrate high accuracy (>90%), while feature-based methods (e.g., texture analysis) provide effective alternatives. A wide range of devices are utilized, including UAVs and thermal cameras as well as smartphones. Some studies prioritize real-time detection using YOLO, whereas others integrate multimodal data and optimized models such as Rice Transformer and NASNetLarge, reflecting advancements in precision agriculture.

2. Materials and Methods

In this research, a computer vision-based approach was employed for the classification of aerial images of rice crops. The process encompassed image collection and preprocessing, model training, and evaluation using performance metrics. Furthermore, the trained model was deployed in a web application to validate its practical applicability. Figure 1 illustrates the workflow of the process developed in this study.

2.1. Study Area

The study area was selected in two zones: the district of Pacanguilla, located in the department of La Libertad, and the district of Lambayeque, situated in the department of Lambayeque, both located on the northern coast of Peru. These departments are renowned for their agricultural activity, particularly the cultivation of the Tinajones rice variety, developed by Peru’s National Institute of Agricultural Innovation (INIA).
In the district of Pacanguilla, images were captured from a rice field in the maturation phase, covering an area of 5896.79 m², with geographical coordinates of longitude −79.4223778 and latitude −7.1476868. In the district of Lambayeque, images were obtained from a rice field spanning 23,850.1 m², located at geographical coordinates of longitude −79.8861823 and latitude −6.6727899.
Figure 2 illustrates the locations of the fields where the images were acquired, which were captured using a UAV.

2.2. Data Collection

For this study, images were collected using a DJI Phantom 4 Pro UAV, equipped with an electric motor and a control system operating within the frequency range of 2.4 to 5.8 GHz. This device weighs 1.388 kg and has a flight autonomy of up to 30 min. The UAV’s camera is a DJI FC6310 model, featuring a 12-megapixel sensor with a pixel size of 2.41 × 2.41 microns and a resolution of 5472 × 3648 pixels, enabling the acquisition of highly detailed photographs of the rice cultivation areas.
The image capture protocol for aerial photographs of the rice fields was established by considering various environmental and operational conditions in the Pacanguilla and Lambayeque zones. The criteria used, including the cultivation area, UAV flight altitude, and weather conditions, are detailed in Table 2.
The UAV flight path was carefully designed to ensure complete coverage of the cultivation areas in each location, taking into account factors such as flight altitude, image capture angle, and environmental conditions. Figure 3 illustrates the flight trajectory, providing a detailed view of the planned route for the study areas.

2.3. Data Preprocessing

After image collection, the images were transferred to a permanent storage medium for subsequent processing. The original images, acquired by the UAV, were recorded with a resolution of 5472 × 3648 pixels, 96 dpi, and a color depth of 24 bits. Data preprocessing was then performed by manually cropping the images using ImageJ software version 1.54i, generating 256 × 256 pixel crops. This procedure was conducted to isolate areas of interest, specifically those showing damage caused by Tagosodes orizicolus, non-cultivated zones (empty), and areas with healthy rice crops (rice). The selection of these crops aimed to ensure a balanced and representative dataset for training deep learning models. Figure 4 presents a sample of the cropped images, highlighting the distinct visual characteristics of each class. Infested areas (Tagosodes orizicolus) exhibit symptoms such as discoloration, chlorosis, and visible deterioration of plant structures. Non-cultivated zones (empty) typically include bare soil, water surfaces, or other non-vegetated regions that contrast with cultivated areas. Healthy rice crops (rice) display uniform green coloration and well-defined plant structures, serving as a baseline for comparison.
For each class, 500 cropped image samples were obtained, and their categorization was reviewed and validated by an agricultural specialist to ensure the accuracy and reliability of the data.

2.4. Classification

The images were grouped into three classes: Tagosodes orizicolus (areas affected by the pest), empty (non-cultivated areas), and rice (healthy crops), and then divided into training, validation, and test sets following a ratio of 70%, 15%, and 15%, respectively. Subsequently, two widely recognized deep learning models for image classification tasks, VGG16 [28] and ResNet50 [29], were trained. These models, known for their ability to extract relevant features and distinguish complex patterns, were selected due to their effectiveness in similar applications reported in the literature. The implementation of both approaches allowed for a performance comparison to determine which model classified the acquired images more accurately.
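As an illustration of this partitioning step, the following minimal sketch performs the 70/15/15 split, assuming the 1500 crops are stored in one folder per class (folder and file names are illustrative, not taken from the study):

```python
# A minimal sketch of the 70/15/15 split, assuming one folder per class;
# folder and file names are illustrative, not taken from the study.
import random
import shutil
from pathlib import Path

random.seed(42)  # fixed seed for a reproducible split
classes = ["tagosodes_orizicolus", "empty", "rice"]
src, dst = Path("dataset/crops"), Path("dataset/split")

for cls in classes:
    images = sorted((src / cls).glob("*.jpg"))
    random.shuffle(images)
    n = len(images)
    subsets = {"train": images[: int(0.70 * n)],
               "val": images[int(0.70 * n): int(0.85 * n)],
               "test": images[int(0.85 * n):]}
    for name, files in subsets.items():
        out = dst / name / cls
        out.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, out / f.name)
```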

2.4.1. VGG16

VGG16 is a model widely recognized for its versatility in computer vision applications, notable for its ability to extract deep and generalizable features [29]. To train the model, the previously divided and categorized dataset was used. Figure 5 illustrates the architecture of the VGG16 model used in this study, highlighting its convolutional layers, max-pooling operations, and fully connected layers.
To optimize performance and improve the model’s generalization ability, data augmentation techniques were applied to the training set. These techniques included rotations, horizontal and vertical shifts, scaling changes, shearing, zoom variations, and horizontal flips, in addition to a padding method to preserve the proportions of the 64 × 64 pixel images. This approach increased the diversity of the dataset and reduced the risk of overfitting.
The model architecture was built upon the VGG16 pre-trained on the ImageNet dataset, freezing its original layers to preserve the previously learned features. Custom layers were added to this base, including a flattening layer, a dense layer with 512 units and ReLU activation, a dropout layer to mitigate overfitting, and an output layer with Softmax activation adapted to the three target classes. The model was compiled using the Adam optimizer with an initial learning rate of 0.0001 and a batch size of 8, and training was conducted for 100 epochs, with periodic evaluations to measure its performance.
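The following sketch reproduces this VGG16 setup in Keras/TensorFlow. The augmentation operations, frozen base, custom head, optimizer, learning rate, batch size, and epoch count follow the description above; the exact augmentation ranges, dropout rate, and fill mode are not specified in the text and are assumptions:

```python
# Sketch of the VGG16 transfer-learning setup described above.
# Augmentation ranges, dropout rate, and fill mode are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (64, 64)  # training resolution stated in the text

train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=40,        # rotations
    width_shift_range=0.2,    # horizontal shifts
    height_shift_range=0.2,   # vertical shifts
    shear_range=0.2,          # shearing
    zoom_range=0.2,           # zoom variations
    horizontal_flip=True,     # horizontal flips
    fill_mode="constant",     # padding to preserve image proportions
)
train_data = train_gen.flow_from_directory(
    "dataset/split/train", target_size=IMG_SIZE,
    batch_size=8, class_mode="categorical")

base = VGG16(weights="imagenet", include_top=False,
             input_shape=(*IMG_SIZE, 3))
base.trainable = False  # freeze the pretrained convolutional layers

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.5),                    # rate is an assumption
    layers.Dense(3, activation="softmax"),  # three target classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# a validation generator can be passed via validation_data for the
# periodic evaluations mentioned above
model.fit(train_data, epochs=100)
```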

2.4.2. ResNet50

ResNet50 is an architecture widely used in image classification tasks, known for its ability to learn deep hierarchical representations and its effectiveness in handling large-scale networks [29]. The previously divided dataset was used for training the model. Figure 6 illustrates the architecture of the ResNet50 model used in this study, highlighting its convolutional layers, residual blocks, and fully connected layers.
As with the VGG16 model, data augmentation techniques were applied to the training set to enhance the model’s generalization capability and mitigate the risk of overfitting. The implemented transformations included pixel value normalization (rescale), random rotations up to 40 degrees, horizontal and vertical shifts, perspective changes (shear), zoom in or out, and random horizontal rotations. These operations increased the variability of the dataset and enabled the model to learn more robust and diverse representations.
The model was built using the ResNet50 architecture, loaded with pretrained weights from the ImageNet dataset, leveraging previously learned features for general image processing. Custom layers were added to this base, such as a GlobalAveragePooling2D layer to reduce the dimensionality of the extracted features, followed by BatchNormalization and Dropout layers to regularize the model and prevent overfitting. Finally, a Dense layer with Softmax activation was added for classification into the three target classes.
The layers of the base ResNet50 model were unfrozen to allow for fine-tuning, optimizing the model’s ability to learn both the pretrained features and those specific to the dataset. The SGD optimizer with momentum and Nesterov was used for fine-tuning the parameters, and the CategoricalCrossentropy loss function with label smoothing was employed to improve generalization. Additionally, two callbacks were implemented: ModelCheckpoint to save the model with the best performance based on validation loss, and ReduceLROnPlateau to decrease the learning rate when validation loss shows no improvement. The training process was executed over 100 epochs with a batch size of 8, ensuring that the model completed all iterations and progressively improved its performance.
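A corresponding sketch of the ResNet50 fine-tuning pipeline is given below. The unfrozen base, custom head, SGD with Nesterov momentum, label smoothing, callbacks, epochs, and batch size follow the description above; the momentum value, smoothing factor, dropout rate, and input resolution are assumptions:

```python
# Sketch of the ResNet50 fine-tuning described above (Keras/TensorFlow).
# Momentum, smoothing factor, dropout rate, and input size are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (64, 64)  # assumed input resolution

# augmented training data and non-augmented validation data
train_gen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=40,
                               width_shift_range=0.2, height_shift_range=0.2,
                               shear_range=0.2, zoom_range=0.2,
                               horizontal_flip=True)
val_gen = ImageDataGenerator(rescale=1.0 / 255)
train_data = train_gen.flow_from_directory(
    "dataset/split/train", target_size=IMG_SIZE,
    batch_size=8, class_mode="categorical")
val_data = val_gen.flow_from_directory(
    "dataset/split/val", target_size=IMG_SIZE,
    batch_size=8, class_mode="categorical")

base = ResNet50(weights="imagenet", include_top=False,
                input_shape=(*IMG_SIZE, 3))
base.trainable = True  # layers unfrozen for fine-tuning

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),  # reduce feature dimensionality
    layers.BatchNormalization(),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3,
                                      momentum=0.9, nesterov=True),
    loss=tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1),
    metrics=["accuracy"],
)

callbacks = [
    # keep the weights with the lowest validation loss
    tf.keras.callbacks.ModelCheckpoint("best_resnet50.keras",
                                       monitor="val_loss",
                                       save_best_only=True),
    # lower the learning rate when validation loss stops improving
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss",
                                         factor=0.5, patience=5),
]
model.fit(train_data, validation_data=val_data, epochs=100,
          callbacks=callbacks)
```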

2.5. Performance Evaluation

To evaluate the model’s performance, predictions are classified based on their correctness. A true positive (TP) occurs when the model correctly identifies a given category (healthy rice crops, areas affected by Tagosodes orizicolus, or empty areas). A true negative (TN) is recorded when the model correctly excludes a category from a given sample. Conversely, a false positive (FP) is assigned when the model incorrectly classifies a sample as belonging to a category it does not belong to, while a false negative (FN) occurs when the model fails to identify a sample as belonging to its true category. The following metrics were used to assess the model’s performance:
Precision (P): quantifies the proportion of true positives relative to the total number of elements classified as positive, evaluating the accuracy of the model’s positive predictions.
P = TP / (TP + FP)
Accuracy (Acc): measures the proportion of correct predictions overall, combining true positives and true negatives in relation to the total number of predictions.
Acc = (TP + TN) / (TP + TN + FP + FN)
Recall (R): evaluates the model’s ability to correctly identify all actual positive cases.
R = TP / (TP + FN)
Specificity (S): evaluates the model’s ability to correctly identify negative cases, i.e., the proportion of true negatives among all actual negatives.
S = TN / (TN + FP)
F1 score (F): the harmonic mean of precision and recall, providing a balanced measure of both.
F = 2 · P · R / (P + R)
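In the multiclass setting used here, these metrics are computed per class in a one-vs-rest fashion from the confusion matrix. The following sketch shows this derivation; the example matrix is hypothetical:

```python
# Sketch: deriving the metrics above from a multiclass confusion matrix
# via one-vs-rest counts; the example matrix is hypothetical.
import numpy as np

def per_class_metrics(cm):
    """cm[i, j] = number of samples of true class i predicted as class j."""
    total = cm.sum()
    results = {}
    for k in range(cm.shape[0]):
        tp = cm[k, k]
        fp = cm[:, k].sum() - tp   # predicted as k but belongs elsewhere
        fn = cm[k, :].sum() - tp   # belongs to k but predicted elsewhere
        tn = total - tp - fp - fn
        p, r = tp / (tp + fp), tp / (tp + fn)
        results[k] = {"precision": p,
                      "accuracy": (tp + tn) / total,
                      "recall": r,
                      "specificity": tn / (tn + fp),
                      "f1": 2 * p * r / (p + r)}
    return results

cm = np.array([[74, 1, 0],   # hypothetical 3-class confusion matrix
               [0, 75, 0],
               [1, 2, 72]])
for cls, m in per_class_metrics(cm).items():
    print(cls, {k: round(v, 5) for k, v in m.items()})
```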

3. Results

3.1. Performance

Figure 7 illustrates the evolution of training and validation accuracy over 100 epochs for the VGG16 and ResNet50 models. In Figure 7a, VGG16 demonstrates a stable accuracy with minimal disparity between training and validation, indicating a high degree of generalization. Although sporadic declines are observed in the validation accuracy, the model maintains consistent performance. Conversely, Figure 7b reveals that ResNet50 exhibits greater variability during the initial epochs, reflecting challenges in early-stage learning. However, as training progresses, accuracy stabilizes and follows a trend comparable to that of the training set.
Overall, VGG16 demonstrates a more stable learning process, whereas ResNet50 appears more susceptible to fluctuations in validation accuracy. Nevertheless, both models achieve high accuracy levels, underscoring their effectiveness in the analyzed task.
Once the training process was completed, the model was evaluated using the test dataset. A confusion matrix was employed for this purpose, which allows the visualization of the frequency of correct and incorrect predictions for each class, facilitating its interpretation through a heatmap. This graphical representation provides an intuitive way to understand the model’s performance concerning the different classes. The confusion matrix is shown in Figure 8.
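A minimal sketch of such a heatmap is shown below, assuming the test labels and model predictions are available as integer class indices (the arrays shown are illustrative stand-ins, not the study's data):

```python
# Sketch of the confusion-matrix heatmap; y_true and y_pred are illustrative
# stand-ins for the test labels and the models' predicted class indices.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["Empty", "Rice", "Tagosodes orizicolus"]
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 0, 1, 2])  # illustrative data
y_pred = np.array([0, 0, 1, 1, 2, 2, 0, 0, 1, 2])

cm = confusion_matrix(y_true, y_pred)
fig, ax = plt.subplots()
im = ax.imshow(cm, cmap="Blues")  # heatmap of prediction frequencies
ax.set_xticks(range(len(classes)), labels=classes, rotation=45, ha="right")
ax.set_yticks(range(len(classes)), labels=classes)
for i in range(cm.shape[0]):
    for j in range(cm.shape[1]):
        ax.text(j, i, cm[i, j], ha="center", va="center")
ax.set_xlabel("Predicted class")
ax.set_ylabel("True class")
fig.colorbar(im)
fig.tight_layout()
plt.show()
```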
Based on the confusion matrices, both the VGG16 and ResNet50 models exhibit high classification performance in detecting the different categories.
The VGG16 model achieves a precision and recall of 98.667% for the “Empty” class, along with a specificity of 99.329% and an F1 score of 98.667%, demonstrating strong reliability in distinguishing empty samples. For the “Rice” class, it attains 96.154% precision and 100% recall, highlighting its ability to correctly identify all positive instances, although its F1 score of 98.039% and specificity of 97.987% indicate minor misclassifications. For the “Tagosodes orizicolus” class, it achieves 100% precision but a recall of 95.946%, suggesting that some instances are missed, as reflected in its F1 score of 97.931%.
Similarly, the ResNet50 model exhibits strong classification performance. In the “Empty” class, it achieves 100% precision but a lower recall of 96.000%, leading to an F1 score of 97.959%. For the “Rice” class, it performs exceptionally well, reaching 97.403% precision and 100% recall, with an F1 score of 98.684%. For the “Tagosodes orizicolus” class, it demonstrates 97.333% precision and 98.649% recall, with a specificity of 98.667% and an F1 score of 97.987%, ensuring a reliable classification of infected samples.
The performance results, including the average metrics such as precision, accuracy, sensitivity (recall), specificity, and F1 score for detecting each class, are summarized in Table 3. Both models, VGG16 and ResNet50, exhibit nearly identical performance across these metrics. VGG16 demonstrates slightly higher precision (98.274% vs. 98.214%), whereas ResNet50 offers marginally improved recall (98.216% vs. 98.204%). The similarity in results suggests that both architectures are highly reliable for this classification task, with minimal trade-offs between precision and recall.

3.2. Application

To utilize the trained model, a web application was developed using a client–server approach. The business logic was deployed on the server, implemented in Python with the Django framework (version 5.1.2). The user interface was developed in JavaScript; it executes the necessary requests to the server, allowing data input and the submission of rice images for subsequent processing. Data are sent from the user interface to the server via Hypertext Transfer Protocol (HTTP) requests in JavaScript Object Notation (JSON) format. Figure 9 illustrates the image upload interface.
In the business layer of the web application, an endpoint is implemented to receive an image via a request. This image is divided into 500 × 500 pixel grids, allowing each section to be analyzed independently. Subsequently, the segments are preprocessed to meet the requirements of the model trained with VGG16, utilizing techniques such as resizing, normalization, and dimensional adjustment. Once prepared, the blocks are sent to the pre-trained model, which classifies each segment into one of three categories: area with Tagosodes orizicolus, empty area, or area with healthy rice plants. Finally, the application responds to the request by displaying the original image with the grids annotated according to the classification performed (Figure 10). Additionally, a statistical summary is generated that counts the detections by category, providing a quantitative and detailed view of the analysis conducted.
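The grid-based analysis performed by this endpoint can be sketched as follows. The 500-pixel grid and the three categories follow the description above, while the model file name, input resolution, color coding, and helper names are illustrative assumptions:

```python
# Sketch of the grid-based analysis performed by the endpoint. The 500-pixel
# grid follows the text; model file name, input resolution, colors, and
# helper names are illustrative assumptions.
import numpy as np
import tensorflow as tf
from PIL import Image, ImageDraw

CLASSES = ["empty", "rice", "tagosodes_orizicolus"]
COLORS = {"empty": "gray", "rice": "green", "tagosodes_orizicolus": "red"}
GRID = 500             # grid cell size in pixels
INPUT_SIZE = (64, 64)  # model input resolution (assumed)

model = tf.keras.models.load_model("vgg16_rice.keras")  # pretrained VGG16

def analyze(image_path):
    """Classify each grid cell and return the annotated image plus counts."""
    image = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    counts = dict.fromkeys(CLASSES, 0)
    for top in range(0, image.height - GRID + 1, GRID):
        for left in range(0, image.width - GRID + 1, GRID):
            tile = image.crop((left, top, left + GRID, top + GRID))
            x = np.asarray(tile.resize(INPUT_SIZE), dtype="float32") / 255.0
            probs = model.predict(x[np.newaxis], verbose=0)[0]
            label = CLASSES[int(np.argmax(probs))]
            counts[label] += 1
            draw.rectangle((left, top, left + GRID, top + GRID),
                           outline=COLORS[label], width=5)
    return image, counts

annotated, summary = analyze("uav_capture.jpg")
annotated.save("annotated.jpg")
print(summary)  # statistical count of detections per category
```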
This web application constitutes a key result of the study, as it validates the practical applicability of the trained model in a real-world scenario, enhancing accessibility and usability for potential users.

4. Discussion

Early and accurate pest and disease detection in rice crops is essential for agriculture, as both significantly affect productivity. This study differs from previous research, such as work employing models based on architectures like ResNet50 or VGG16, by focusing on the use of preprocessed images from representative infestation areas rather than processing the entire crop image. To apply the model, the image is divided into grids that are processed independently for classification. Additionally, the trained models were specifically designed to detect one of the main pests of rice crops, Tagosodes orizicolus (known as Sogata), in a Peruvian rice variety (Tinajones), differentiating infested areas from empty areas and areas with healthy rice crops.
This study is distinguished by the use of a UAV equipped with a high-resolution camera, which allows for the analysis of large rice crop areas. The use of UAVs offers a significant advantage over previous studies that employ smartphone images, as the latter often have limitations in resolution and coverage.
In this work, the performance of VGG16 and ResNet50 models in classifying images of rice crops affected by Tagosodes orizicolus was compared. Both models demonstrated outstanding performance, achieving an accuracy greater than 98.2%, in line with other studies in the field. For instance, the study applying ResNet50 to identify rice diseases (Bacterial Leaf Blight, Bacterial Leaf Streak, Bacterial Panicle Blight, Rice Blast, and Brown Spot) achieved a training accuracy close to 99% and a test accuracy of 92.83% [13]. That work also highlights ResNet50’s suitability for deployment in mobile applications and UAVs, emphasizing its versatility in agricultural settings. This feature is particularly relevant in the context of our study, as we implemented the trained model in a web application accessible to farmers, facilitating early detection of infestations.
Regarding previous studies, such as the one using the YOLO algorithm for real-time disease detection in rice through a live transmission system [15], the areas representing the disease must be annotated beforehand to be considered during the training phase. In contrast, our study did not involve manual annotations; instead, we mapped crops to generate new images that exclusively included the disease-representative areas, simplifying the data preparation process.
Compared with the results of Bijoy et al. (2024) [20], who used ResNet50 to classify rice diseases and obtained an accuracy of 99.81% and a specificity of 99.65%, the indicators from this study remain competitive, with a slight decrease in accuracy (−1.57%) and specificity (−0.54%). These differences may be attributed to variations in experimental design, dataset creation, or the preprocessing techniques applied.
Additionally, the work by Ritharson et al. (2024) [14], which used the VGG16 model to classify diseases in agricultural crops, achieved an accuracy of 99.94%, a difference of −1.66% compared to this study. However, the absence of other reported performance metrics in that analysis makes a more detailed assessment of overall performance in terms of total correct predictions difficult.
Looking ahead, future work plans involve deploying the application in UAV-integrated systems, aiming to develop a real-time detection tool and generate detailed maps of infested areas. This will enable continuous and efficient crop monitoring, providing farmers with precise tools for pest and disease management in their fields.

5. Conclusions

In this study, a database of aerial images of rice crops affected by Tagosodes orizicolus was developed using a UAV. The captured images were processed through segmentation into 256 × 256-pixel fragments, yielding a total of 1500 images, which were categorized into three classes: infested areas, empty areas, and healthy crops. These images were utilized for training deep learning models based on the VGG16 and ResNet50 architectures.
The experimental results demonstrate that both models exhibit a high level of effectiveness in the classification of aerial images, achieving an overall accuracy of 98.214%. Specifically, the VGG16-based model exhibited outstanding performance in identifying healthy crop areas, with an F1 score of 98.039% for the “rice” class; however, it showed a slight decline in detecting Tagosodes orizicolus-infested areas (F1 score: 97.931%), suggesting the presence of some degree of misclassification. Meanwhile, ResNet50 achieved strong performance in classifying healthy and infested areas, with F1 scores of 98.684% for “rice” and 97.987% for “Tagosodes orizicolus”, although it exhibited a slight reduction in performance when identifying empty areas (F1 score: 97.959%).
Despite these minor limitations, the results indicate that both models are viable for crop monitoring under specific agricultural conditions. However, further optimization is recommended, particularly in the classification of Tagosodes orizicolus-affected areas and empty zones, to enhance the robustness and generalization of the models.
Finally, the integration of these models into a web-based platform represents a significant advancement in the application of artificial intelligence technologies in agriculture, providing an efficient tool for pest monitoring and crop diagnosis. This implementation will facilitate data-driven decision-making, ultimately contributing to improved agricultural productivity and sustainable crop management.

Author Contributions

Conceptualization, A.R.-C.; methodology, A.R.-C., J.A.-D. and H.I.M.-C.; software, A.R.-C.; validation, A.R.-C.; formal analysis, H.I.M.-C.; investigation, A.R.-C. and J.A.-D.; data curation, A.R.-C.; writing—original draft, A.R.-C., J.A.-D. and H.I.M.-C.; writing—review and editing, H.I.M.-C. and J.A.-D.; supervision, J.A.-D. and H.I.M.-C. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by the Universidad Señor de Sipán (Perú).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Rahman, C.R.; Arko, P.S.; Ali, M.E.; Khan, M.A.I.; Apon, S.H.; Nowrin, F.; Wasif, A. Identification and recognition of rice diseases and pests using convolutional neural networks. Biosyst. Eng. 2020, 194, 112–120.
2. Asibi, A.E.; Chai, Q.; Coulter, J.A. Rice Blast: A Disease with Implications for Global Food Security. Agronomy 2019, 9, 451.
3. García, A.L.V. Bíocontrol de sogata (Tagasodes orizicolus Muir) mediante el uso de hongos entomopatógenos en arroz bajo condiciones de laboratorio. Biotecnol. Y Sustentabilidad 2021, 6, 85–101.
4. Echeverri, J.; Perez, C.R.; Cuevas, A.; Avila, L.A.; Higuera, O.L.; Beltran, J.H.; Amezquita, N.F.; Leiva, D.C. Viral Diseases of Rice Crops in Colombia—Latin America. In Viral Diseases of Field and Horticultural Crops; Elsevier: Amsterdam, The Netherlands, 2024; pp. 71–80.
5. Zheng, Q.; Huang, W.; Xia, Q.; Dong, Y.; Ye, H.; Jiang, H.; Chen, S.; Huang, S. Remote Sensing Monitoring of Rice Diseases and Pests from Different Data Sources: A Review. Agronomy 2023, 13, 1851.
6. Li, R.; Chen, S.; Matsumoto, H.; Gouda, M.; Gafforov, Y.; Wang, M.; Liu, Y. Predicting rice diseases using advanced technologies at different scales: Present status and future perspectives. aBIOTECH 2023, 4, 359–371.
7. Barbedo, J.G.A. Detecting and Classifying Pests in Crops Using Proximal Images and Machine Learning: A Review. AI 2020, 1, 312–328.
8. Sun, Y.; Lin, Y.; Zhao, G.; Svanberg, S. Identification of Flying Insects in the Spatial, Spectral, and Time Domains with Focus on Mosquito Imaging. Sensors 2021, 21, 3329.
9. Subramanian, K.S.; Pazhanivelan, S.; Srinivasan, G.; Santhi, R.; Sathiah, N. Drones in Insect Pest Management. Front. Agron. 2021, 3, 640885.
10. Fanso, S.V.C.; Castillo, V.R.A. Agroecosistema del cultivo Oryza sativa L., Arroz, en la Provincia de Lambayeque. Cienc. Soc. 2023, 3, 59–85. Available online: https://revistas2.unprg.edu.pe/ojs/index.php/CURSO/article/view/565 (accessed on 12 December 2024).
11. Jafar, A.; Bibi, N.; Naqvi, R.A.; Sadeghi-Niaraki, A.; Jeong, D. Revolutionizing agriculture with artificial intelligence: Plant disease detection methods, applications, and their limitations. Front. Plant Sci. 2024, 15, 1356260.
12. García-Parra, M.; De la Barrera, F.; Plazas-Leguizamón, N.; Colmenares-Cruz, A.; Cancimance, A.; Soler-Fonseca, D. The Sustainable Development Goals in America: Overview. La Granja 2022, 36, 45–59.
13. Sahasranamam, V.; Ramesh, T.; Muthumanickam, D.; Karthikkumar, A. AI and Neural Network-Based Approach for Paddy Disease Identification and Classification. Int. Res. J. Multidiscip. Technovation 2024, 6, 101–111.
14. Ritharson, P.I.; Raimond, K.; Mary, X.A.; Robert, J.E.; Andrew, J. DeepRice: A deep learning and deep feature based classification of Rice leaf disease subtypes. Artif. Intell. Agric. 2024, 11, 34–49.
15. Agustin, M.; Hermawan, I.; Arnaldy, D.; Muharram, A.T.; Warsuta, B. Design of Livestream Video System and Classification of Rice Disease. JOIV Int. J. Inform. Vis. 2023, 7, 139–145.
16. Patil, R.R.; Kumar, S.; Chiwhane, S.; Rani, R.; Pippal, S.K. An Artificial-Intelligence-Based Novel Rice Grade Model for Severity Estimation of Rice Diseases. Agriculture 2022, 13, 47.
17. Bharanidharan, N.; Chakravarthy, S.R.S.; Rajaguru, H.; Kumar, V.V.; Mahesh, T.R.; Guluwadi, S. Multiclass Paddy Disease Detection Using Filter-Based Feature Transformation Technique. IEEE Access 2023, 11, 109477–109487.
18. Shovon, S.H.; Mozumder, S.J.; Pal, O.K.; Mridha, M.F.; Asai, N.; Shin, J. PlantDet: A Robust Multi-Model Ensemble Method Based on Deep Learning For Plant Disease Detection. IEEE Access 2023, 11, 34846–34859.
19. Jain, S.; Kumar, R.; Agrawal, K. Performance analysis of various deep learning models for detecting rice diseases. J. Auton. Intell. 2023, 7, 1282.
20. Bijoy, M.H.; Hasan, N.; Biswas, M.; Mazumdar, S.; Jimenez, A.; Ahmed, F.; Rasheduzzaman, M.; Momen, S. Towards Sustainable Agriculture: A Novel Approach for Rice Leaf Disease Detection Using dCNN and Enhanced Dataset. IEEE Access 2024, 12, 34174–34191.
21. Patil, R.R.; Kumar, S. Rice Transformer: A Novel Integrated Management System for Controlling Rice Diseases. IEEE Access 2022, 10, 87698–87714.
22. Mannepalli, P.K.; Pathre, A.; Chhabra, G.; Ujjainkar, P.A.; Wanjari, S. Diagnosis of bacterial leaf blight, leaf smut, and brown spot in rice leafs using VGG16. Procedia Comput. Sci. 2024, 235, 193–200.
23. Chen, W.; Zheng, L.; Xiong, J. Algorithm for Crop Disease Detection Based on Channel Attention Mechanism and Lightweight Up-Sampling Operator. IEEE Access 2024, 12, 109886–109899.
24. Hussain, A.; Srikaanth, P.B. Leveraging Deep Learning and Farmland Fertility Algorithm for Automated Rice Pest Detection and Classification Model. KSII Trans. Internet Inf. Syst. 2024, 18, 959–979.
25. Hasan, M.; Rahman, T.; Uddin, A.F.M.S.; Galib, S.M.; Akhond, M.R.; Uddin, J.; Hossain, A. Enhancing Rice Crop Management: Disease Classification Using Convolutional Neural Networks and Mobile Application Integration. Agriculture 2023, 13, 1549.
26. Wijayanto, A.K.; Prasetyo, L.B.; Hudjimartsu, S.A.; Sigit, G.; Hongo, C. Textural features for BLB disease damage assessment in paddy fields using drone data and machine learning: Enhancing disease detection accuracy. Smart Agric. Technol. 2024, 8, 100498.
27. Barman, U.; Das, D.; Sonowal, G.; Dutta, M. Innovative Approaches to Rice (Oryza sativa) Crop Health: A Comprehensive Analysis of Deep Transfer Learning for Early Disease Detection. Yuz. Yil Univ. J. Agric. Sci. 2024, 34, 314–322.
28. Sachdeva, J.; Sharma, D.; Ahuja, C.K. Comparative Analysis of Different Deep Convolutional Neural Network Architectures for Classification of Brain Tumor on Magnetic Resonance Images. Arch. Comput. Methods Eng. 2024, 31, 1959–1978.
29. Choudhary, N.; Sharma, A.; Rathore, V.S.; Tiwari, N. Performance Comparison of ResNet50V2 and VGG16 Models for Feature Extraction in Deep Learning. Lect. Notes Netw. Syst. 2024, 812, 223–229.
Figure 1. Process Flow for Detecting Infested Areas in Rice Crops.
Figure 2. Rice cultivation zones considered in this study: (a) agricultural area in Pacanguilla and (b) agricultural area in Lambayeque.
Figure 3. UAV flight path for image acquisition.
Figure 4. Samples of the cropped images. (a) empty, (b) rice, and (c) Tagosodes orizicolus.
Figure 5. Architecture of the VGG16 Convolutional Neural Network.
Figure 6. Architecture of the ResNet50 Convolutional Neural Network.
Figure 7. Precision evolution over 100 epochs for models trained with VGG16 (a) and ResNet50 (b).
Figure 8. Confusion matrix. (a) Model trained with VGG16. (b) Model trained with ResNet50.
Figure 9. Web application for detection of areas captured by UAV.
Figure 10. Result for processed image with statistical count of identified zones.
Table 1. Algorithms Used in Disease Detection in Rice Crops.

Research | Technique | Data | Device | Results | Objective
[16] | EfficientNet-B0, VGG16, ResNet101, MobileNet, and Optimized Faster RCNN | 1200 images | CCD Camera | EfficientNet-B0: 96.43% accuracy; VGG16: 89.31% accuracy; ResNet101: 90.42% accuracy | Disease Detection in Rice (Brown Spot, Bacterial Blight, and Rice Blast).
[17] | Feature Transformation Filter with Lemur Optimization and ML Techniques (KNN, RFC, LDA, HGBC) | 636 thermal images | FLIR E8 Thermal Camera | KNN with transformation achieved 90% balanced accuracy. | Identification of Multiple Diseases in Rice Leaves (Rice Blast, Brown Leaf Spot, Leaf Folder, Hispa, and Bacterial Leaf Blight).
[18] | Ensemble Model: PlantDet based on InceptionResNetV2, EfficientNetV2L, and Xception. | 2710 images | Huawei Honor 8x Mobile | PlantDet: 98.53% accuracy for rice leaves and 97.50% for betel leaves. | Disease Detection in Rice Leaves (Bacterial Leaf Blight, Brown Spot, Leaf Blast, Leaf Scald, and Narrow Brown Spot) and Betel Leaves in Real-World Environments.
[19] | ResNet50, VGG16, MobileNet, GoogleNet, AlexNet, Xception | 30,000 images | - | ResNet50: 97.5% accuracy | Identification of Rice Diseases (Rice Blast, Rice Sheath Blight, Bacterial Leaf Blight, Tungro Disease, Rice Grassy Stunt Virus, Rice Yellow Mottle Virus, Bakanae Disease, Brown Spot, and Rice Tungro Bacilliform Virus).
[13] | ResNet50 | 13,876 images | - | ResNet50: 92.83% accuracy | Identification and Classification of Rice Diseases (Bacterial Leaf Blight, Bacterial Leaf Streak, Bacterial Panicle Blight, Rice Blast, and Brown Spot).
[20] | dCNN model, compared with AlexNet, MobileNetV2, ResNet50, DenseNet121, and SwinTransformer. | Public dataset | - | dCNN: 99.8% accuracy; ResNet50: ~99.7% accuracy; MobileNetV3: 99.5% accuracy; SwinTransformer: ~99.6% accuracy | Identification of Rice Diseases (Brown Spot, Tungro, Bacterial Blight, Sheath Blight, and Blast).
[21] | Rice Transformer model, multimodal fusion (images + sensor data). | 4200 images | CCD camera and sensors (DHT22, pH, humidity). | Rice Transformer: 97.38% accuracy | Rice Disease Classification (Blast, Brown Spot, and Bacterial Blight).
[22] | VGG16, Support Vector Machine (SVM), Random Forest (RF) | Public dataset | High-resolution camera | VGG16: 97.77% accuracy | Identification of Common Diseases in Rice Leaves (Bacterial Leaf Blight, Leaf Smut, and Brown Spot).
[14] | VGG16 | 5932 images | Smartphones, DSLR Camera | VGG16: 99.94% accuracy | Classification of Disease Subtypes in Rice Leaves (Blast, Brown Spot, Bacterial Blight, and Tungro) at Mild and Severe Levels.
[23] | YOLOv5 | Plant Village dataset and other field captures. | Integrated camera of Xiaomi K60 for field data collection. | YOLOv5: 90% accuracy | Pest and Disease Detection in Crops under Complex Natural Conditions of Plant Village.
[24] | Optimized NASNetLarge | IP102 Dataset | UAV Camera | Optimized NASNetLarge: 97.58% accuracy | Automatic Identification of Rice Pests (Rice Leaf Roller, Rice Leaf Caterpillar, Paddy Stem Maggot, Asiatic Rice Borer, Yellow Rice Borer, Rice Gall Midge, Rice Stemfly, Brown Plant Hopper, White Backed Plant Hopper, Small Brown Plant Hopper, Rice Water Weevil, Rice Leafhopper, Grain Spreader Thrips, and Spiny Beetle).
[25] | CNN Optimized with K-means Clustering and Background Segmentation Preprocessing. | 2700 images | - | CNN with K-means Clustering: 97.9% accuracy. | Classification of Rice Diseases (Bacterial Leaf Blight, Brown Spot, and Leaf Smut) and Integration into Mobile Applications for Real-Time Management.
[26] | Texture Analysis Based on Haralick Features and NDTI, Combined with Random Forest for Classification. | RGB and Multispectral Images | DJI Phantom 4 Multispectral and Trinity F90+ VTOL | Texture Analysis: 98.4% accuracy using Random Forest. | Detection of a Disease (Bacterial Leaf Blight) in Rice Leaves by Integrating Textural, Thermal, and Spectral Features.
[15] | YOLO v4-Tiny | 5447 images | Raspberry Pi Camera V2 | YOLO v4-Tiny: 80% accuracy | Development of a System for Live Video Streaming from Drones and Real-Time Classification of Rice Diseases (Brown Spot, Leaf Blast, and Bacterial Blight).
[27] | VGGNet, ResNet50, MobileNet, and a Custom CNN Model. | 5932 images | Digital Cameras | Customized CNN models achieved F1 scores ranging from 95% to 99%. MobileNet and ResNet50 demonstrated superior performance with F1 scores in the range of 99% to 100%. In comparison, VGG16 exhibited F1 scores between 95% and 99%. | Evaluation of CNN Models for the Identification of Rice Diseases, Including Bacterial Leaf Blight, Blast, Brown Spot, and Tungro.
Table 2. Protocol for image acquisition in rice crops.

Criteria | Pacanguilla | Lambayeque
Cultivation Area | 5896.79 m² | 23,850.1 m²
UAV Altitude | 20 m | 20 m
Angle | 180° | 180°
Illumination | Direct sunlight | Direct sunlight
Wind Speed | 12 km/h | 16 km/h
Temperature | 23 °C | 27 °C
Humidity | 70% | 58%
Time | Between 9:00 a.m. and 11:00 a.m. | Between 9:00 a.m. and 11:00 a.m.
Date | 12 January 2024 | 14 April 2024
Longitude | −79.4223778 | −79.8861823
Latitude | −7.1476868 | −6.6727899
Table 3. Performance Evaluation of VGG16 and ResNet50 Models.

Algorithms | Precision | Accuracy | Recall | F1-Score | Specificity
VGG16 | 98.274% | 98.214% | 98.204% | 98.212% | 99.105%
ResNet50 | 98.245% | 98.214% | 98.216% | 98.210% | 99.108%
