Intelligent Information System for Agriculture Based on Vision Technology

A special issue of Agronomy (ISSN 2073-4395). This special issue belongs to the section "Precision and Digital Agriculture".

Deadline for manuscript submissions: 15 November 2025 | Viewed by 14914

Special Issue Editors


Dr. Arjun Neupane
Guest Editor
Central Queensland University, North Rockhampton, Rockhampton, QLD 4702, Australia
Interests: applied agriculture; image processing; data science

Dr. Tej Bahadur Shahi
Guest Editor
School of Computer Science, Faculty of Science, Queensland University of Technology (QUT), Brisbane, QLD 4000, Australia
Interests: remote sensing; precision agriculture; machine learning

Dr. Richard Koech
Guest Editor
College of Science and Sustainability, Central Queensland University, Bundaberg Campus, Norman Gardens, QLD 4670, Australia
Interests: agriculture soil and water management; agriculture irrigation; remote sensing and spatial analysis; biosystems engineering; precision agriculture

Special Issue Information

Dear Colleagues,

Climate change poses a significant threat to global agriculture. Factors such as frequent drought, increasing urbanization, and soil erosion are limiting crop productivity. To tackle these challenges, modern farming is undergoing a revolution driven by intelligent technology, and intelligent information systems for farmland management have become a cutting-edge research area. Unlike traditional methods, intelligent farming uses data to tailor farming practices to specific areas, or even individual plants, within croplands. This shift is a game-changer, paving the way for more efficient and eco-friendly food production. Advances in sensors, data collection tools, GPS, and the Internet of Things (IoT) have opened up tremendous possibilities for building intelligent farming systems.

Given the profound development of intelligent technology, drones, and data-driven methodologies in agriculture, this Special Issue aims to bring together the latest novel contributions within the intersection of intelligent information systems, sensors, and vision technologies.

We invite researchers to contribute original research articles, review papers, and case studies on, but not limited to, the following topics:

  • Intelligent agriculture information systems;
  • Big data and sustainable agriculture;
  • IoT, drones, and unmanned ground vehicles;
  • RGB, multispectral, hyperspectral, and thermal sensors;
  • Advanced sensors and sustainable agriculture;
  • Artificial intelligence and remote sensing;
  • Data-driven farming;
  • Affordable technology for small farmers;
  • Crop disease detection;
  • Intelligent pesticide management;
  • Yield estimation and crop monitoring;
  • Deep learning for agriculture;
  • General AI for agriculture;
  • Sustainable agriculture.

Dr. Arjun Neupane
Dr. Tej Bahadur Shahi
Dr. Richard Koech
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agronomy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent system
  • sensors
  • smart farming
  • data science
  • vision technology

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (12 papers)


Research

24 pages, 18378 KiB  
Article
GCF-DeepLabv3+: An Improved Segmentation Network for Maize Straw Plot Classification
by Yuanyuan Liu, Jiaxin Zhang, Yueyong Wang, Yang Luo, Pengxiang Sui, Ying Ren, Xiaodan Liu and Jun Wang
Agronomy 2025, 15(5), 1011; https://doi.org/10.3390/agronomy15051011 - 22 Apr 2025
Viewed by 148
Abstract
To meet the need for rapid identification of straw coverage types in conservation tillage fields, we investigated the use of unmanned aerial vehicle (UAV) low-altitude remote sensing images for accurate detection. UAVs were used to capture images of conservation tillage farmlands, and an improved GCF-DeepLabv3+ model was used to detect straw coverage types. The model incorporates StarNet as its backbone, reducing parameter count and computational complexity. Furthermore, it integrates a Multi-Kernel Convolution Feedforward Network with Fast Fourier Transform Convolutional Block Attention Module (MKC-FFN-FTCM) and a Gated Conv-Former Block (Gated-CFB) to improve the segmentation of fine plot details. Experimental results demonstrate that GCF-DeepLabv3+ outperforms other methods in segmentation accuracy, computational efficiency, and model robustness: it has 3.19M parameters and 41.19G FLOPs (floating point operations), with a mean Intersection over Union (mIoU) of 93.97%. These findings indicate that the proposed GCF-DeepLabv3+-based rapid detection method offers robust support for straw return detection.
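As a rough illustration of the mean Intersection over Union (mIoU) metric reported above, here is a minimal NumPy sketch; the two label maps are made-up toy examples, not the paper's data:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean IoU over classes present in the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:          # class absent from both masks: skip it
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Two 4x4 label maps with classes {0: background, 1: straw}
pred   = np.array([[1, 1, 0, 0]] * 4)
target = np.array([[1, 0, 0, 0]] * 4)
print(mean_iou(pred, target, num_classes=2))
```

Per-class IoU is intersection over union of the binary masks; averaging over classes gives mIoU.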

21 pages, 22222 KiB  
Article
MSPB-YOLO: High-Precision Detection Algorithm of Multi-Site Pepper Blight Disease Based on Improved YOLOv8
by Xiaodong Zheng, Zichun Shao, Yile Chen, Hui Zeng and Junming Chen
Agronomy 2025, 15(4), 839; https://doi.org/10.3390/agronomy15040839 - 28 Mar 2025
Viewed by 333
Abstract
In response to the challenges of low accuracy in traditional pepper blight identification under natural, complex conditions, particularly in detecting subtle infections on early-stage leaves, stems, and fruits, this study proposes a multi-site pepper blight disease image recognition algorithm based on YOLOv8, named MSPB-YOLO. The algorithm effectively locates different infection sites on peppers. By incorporating the RVB-EMA module into the model, we significantly reduce interference from shallow noise in high-resolution depth layers. Additionally, the introduction of the RepGFPN network structure enhances the model’s capability for multi-scale feature fusion, resulting in a marked improvement in multi-target detection accuracy. Furthermore, we replaced CIoU with DIoU by integrating the center distance of bounding boxes into the loss function; as a result, the model achieved an mAP@0.5 score of 96.4%, an improvement of 2.2% over the original algorithm. Overall, this model provides effective technical support for intelligent management and disease prevention strategies for peppers.
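The switch from CIoU to DIoU described in the abstract adds a normalized center-distance penalty to the IoU loss. A minimal sketch for axis-aligned boxes in (x1, y1, x2, y2) form follows; this is the standard DIoU formula, not the authors' implementation:

```python
def diou_loss(a, b):
    """DIoU loss = 1 - IoU + (center distance)^2 / (enclosing-box diagonal)^2."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # Intersection and union areas
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # Squared distance between box centers
    rho2 = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 \
         + ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2
    # Squared diagonal of the smallest enclosing box
    c2 = (max(ax2, bx2) - min(ax1, bx1)) ** 2 \
       + (max(ay2, by2) - min(ay1, by1)) ** 2
    return 1 - iou + rho2 / c2

print(diou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes -> 0.0
```

Unlike plain IoU loss, the penalty term keeps a useful gradient even when the boxes do not overlap.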

19 pages, 6337 KiB  
Article
Early Detection and Dynamic Grading of Sweet Potato Scab Based on Hyperspectral Imaging
by Xiaosong Ning, Qiyao Xia, Fajiang Tang, Ziyu Ding, Xiawei Ding, Fanguo Zeng, Zhangying Wang, Hongda Zou, Xuejun Yue and Lifei Huang
Agronomy 2025, 15(4), 794; https://doi.org/10.3390/agronomy15040794 - 24 Mar 2025
Viewed by 268
Abstract
This study investigates the early detection of sweet potato scab by using hyperspectral imaging and machine learning techniques. The research focuses on developing an accurate, economical, and non-destructive approach for disease detection and grading. Hyperspectral imaging experiments were conducted on two sweet potato varieties: Guangshu 87 (resistant) and Guicaishu 2 (susceptible). Data preprocessing included denoising, region of interest (ROI) selection, and average spectrum extraction, followed by dimensionality reduction using principal component analysis (PCA) and random forest (RF) feature selection. A novel dynamic grading method based on spectral-time data was introduced to classify the early stages of the disease, including the early latent and early mild periods. This method identified significant temporal spectral changes, enabling a refined disease staging framework. Key wavebands associated with sweet potato scab were identified in the near-infrared range, including 801.8 nm, 769.8 nm, 898.5 nm, 796.4 nm, and 780.5 nm. Classification models, including K-nearest neighbor (KNN), support vector machine (SVM), and linear discriminant analysis (LDA), were constructed to evaluate the effectiveness of spectral features. Among these classification models, the MSC-PCA-SVM model demonstrated the best performance. Specifically, the Susceptible Variety Disease Classification Model achieved an overall accuracy (OA) of 98.65%, while the Combined Variety Disease Classification Model reached an OA of 95.38%. The results highlight the potential of hyperspectral imaging for early disease detection, particularly for non-destructive monitoring of resistant and susceptible sweet potato varieties. This study provides a practical method for early disease classification of sweet potato scab, and future research could focus on real-time disease monitoring to enhance sweet potato crop management.
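The MSC step in the best-performing pipeline (MSC-PCA-SVM) is multiplicative scatter correction, which regresses each spectrum against the mean spectrum and removes the fitted offset and scale. A minimal NumPy sketch with synthetic spectra (not the study's data or code):

```python
import numpy as np

def msc(spectra):
    """Multiplicative scatter correction: fit each spectrum s ~ a + b*ref
    against the mean spectrum ref, then return (s - a) / b."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        b, a = np.polyfit(ref, s, 1)   # slope b, intercept a
        corrected[i] = (s - a) / b
    return corrected

# Synthetic spectra: scaled/offset copies of one underlying signal
base = np.linspace(0.1, 1.0, 50)
spectra = np.vstack([1.5 * base + 0.2, 0.8 * base - 0.1])
print(np.allclose(msc(spectra)[0], msc(spectra)[1]))  # True
```

After correction, spectra that differ only by a multiplicative scale and an additive offset collapse onto the common reference, so PCA and the classifier see chemical rather than scatter variation.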

22 pages, 14154 KiB  
Article
Sticky Trap-Embedded Machine Vision for Tea Pest Monitoring: A Cross-Domain Transfer Learning Framework Addressing Few-Shot Small Target Detection
by Kunhong Li, Yi Li, Xuan Wen, Jingsha Shi, Linsi Yang, Yuyang Xiao, Xiaosong Lu and Jiong Mu
Agronomy 2025, 15(3), 693; https://doi.org/10.3390/agronomy15030693 - 13 Mar 2025
Viewed by 424
Abstract
Pest infestations have always been a major factor affecting tea production. Real-time detection of tea pests using machine vision is a mainstream method in modern agricultural pest control. Currently, there is a notable absence of machine vision devices on the market capable of real-time monitoring of small-sized tea pests, and the scarcity of open-source datasets available for tea pest detection remains a critical limitation. This manuscript proposes a YOLOv8-FasterTea pest detection algorithm based on cross-domain transfer learning, which was successfully deployed in a novel tea pest monitoring device. The proposed method transfers knowledge from the natural language character domain to the tea pest detection domain, termed cross-domain transfer learning, motivated by the complex, small-scale features shared by natural language characters and tea pests. With sufficient samples in the character domain, transfer learning can effectively enhance the tiny and complex feature extraction capabilities of deep networks in the pest domain and mitigate the few-shot learning problem in tea pest detection. Because the information and texture features of small tea pests are easily lost as a network deepens, the proposed YOLOv8-FasterTea removes the P5 layer and adds a P2 small target detection layer to the YOLOv8 model. Additionally, the original C2f module is replaced with lighter convolutional modules to reduce the loss of information about small target pests. Finally, the algorithm was successfully applied to outdoor pest monitoring equipment. Experimental results demonstrate that, on a small-sample yellow board pest dataset, the mAP@.5 value of the model increased by approximately 6%, on average, after transfer learning. The YOLOv8-FasterTea model improved the mAP@.5 value by 3.7%, while the model size was reduced by 46.6%.
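In practice, transfer of this kind usually amounts to copying pretrained weights into the new model for every layer whose name and shape match, leaving mismatched layers (such as a re-sized detection head) randomly initialized. A minimal, framework-agnostic sketch; the layer names and shapes below are hypothetical, not the paper's architecture:

```python
def transfer_matching_weights(target_state, source_state):
    """Copy source weights into target wherever layer name and shape agree;
    return the updated state and the names of the transferred layers."""
    transferred = []
    for name, weight in source_state.items():
        if name in target_state and len(target_state[name]) == len(weight):
            target_state[name] = list(weight)
            transferred.append(name)
    return target_state, transferred

# Pretrained on character detection; fine-tuned for tea pests
source = {"backbone.conv1": [0.2, 0.4], "head.cls": [1.0, 2.0, 3.0]}
target = {"backbone.conv1": [0.0, 0.0], "head.cls": [0.0, 0.0]}  # new 2-class head
new_state, moved = transfer_matching_weights(target, source)
print(moved)  # only the backbone layer matches in shape
```

The same pattern applies with real framework state dicts, where the shape check compares tensor shapes rather than list lengths.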

19 pages, 10954 KiB  
Article
YOLOv8-CBSE: An Enhanced Computer Vision Model for Detecting the Maturity of Chili Pepper in the Natural Environment
by Yane Ma and Shujuan Zhang
Agronomy 2025, 15(3), 537; https://doi.org/10.3390/agronomy15030537 - 23 Feb 2025
Viewed by 517
Abstract
To accurately detect the maturity of chili peppers under different lighting and natural environmental scenarios, we propose a lightweight maturity detection model, YOLOv8-CBSE, based on YOLOv8n. By replacing the C2f module in the original model with the designed C2CF module, the model integrates the advantages of convolutional neural networks and Transformer architectures, improving its ability to extract local features and global information. Additionally, SRFD and DRFD modules are introduced to replace the original convolutional layers, effectively capturing features at different scales and enhancing the diversity and adaptability of the model through a feature fusion mechanism. To further improve detection accuracy, the EIoU loss function is used instead of the CIoU loss function to provide more comprehensive loss information. The results showed that the average precision (AP) of YOLOv8-CBSE for mature and immature chili peppers was 90.75% and 85.41%, respectively, with an F1 score of 81.69% and a mean average precision (mAP) of 88.08%. Compared with the original YOLOv8n, the F1 score and mAP of the improved model increased by 0.46% and 1.16%, respectively. Detection of chili pepper maturity under different scenarios was improved, demonstrating the robustness and adaptability of YOLOv8-CBSE. The model also maintains a lightweight design, with a size of only 5.82 MB, enhancing its suitability for real-time applications on resource-constrained devices. This study provides an efficient and accurate method for detecting chili peppers in natural environments, which is of great significance for promoting intelligent and precise agricultural management.

16 pages, 4947 KiB  
Article
SC-ResNeXt: A Regression Prediction Model for Nitrogen Content in Sugarcane Leaves
by Zihao Lu, Cuimin Sun, Junyang Dou, Biao He, Muchen Zhou and Hui You
Agronomy 2025, 15(1), 175; https://doi.org/10.3390/agronomy15010175 - 13 Jan 2025
Viewed by 939
Abstract
In agricultural production, assessing the nitrogen content of sugarcane precisely and economically is crucial for balancing fertilizer application, reducing resource waste, and minimizing environmental pollution. The productivity of sugarcane, an important economic crop, is significantly influenced by various environmental factors, especially nitrogen supply. Traditional methods based on manually extracted image features are not only costly but also limited in accuracy and generalization ability. To address these issues, a novel regression model for estimating the nitrogen content of sugarcane, named SC-ResNeXt (ResNeXt enhanced with Self-Attention, Spatial Attention, and Channel Attention), is proposed in this study. The Self-Attention (SA) mechanism and the Convolutional Block Attention Module (CBAM) are incorporated into the ResNeXt101 model to sharpen the model’s focus on key image features and enhance its information extraction capability. The SC-ResNeXt model achieved a test R² value of 93.49% in predicting the nitrogen content of sugarcane leaves; introducing the SA and CBAM attention mechanisms improved prediction accuracy by 4.02%. Compared with four classical deep learning algorithms, SC-ResNeXt exhibited superior regression performance. This study used images captured by smartphones combined with automatic feature extraction and deep learning, achieving precise and economical prediction of sugarcane nitrogen content relative to traditional laboratory chemical analysis. The approach offers an affordable technical solution for small farmers to optimize nitrogen management for sugarcane, potentially leading to yield improvements, and supports the development of more intelligent farming practices by providing precise nitrogen content predictions.
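The test R² quoted above is the standard coefficient of determination for a regression model. A minimal NumPy sketch with made-up leaf nitrogen values (purely illustrative, not the study's data):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical leaf nitrogen contents (%) and model predictions
y_true = [1.8, 2.1, 2.5, 3.0]
y_pred = [1.9, 2.0, 2.6, 2.9]
print(r2_score(y_true, y_pred))
```

An R² of 1.0 means perfect prediction; values near 0 mean the model does no better than predicting the mean.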

16 pages, 8100 KiB  
Article
YOLOv8n-CSD: A Lightweight Detection Method for Nectarines in Complex Environments
by Guohai Zhang, Xiaohui Yang, Danyang Lv, Yuqian Zhao and Peng Liu
Agronomy 2024, 14(10), 2427; https://doi.org/10.3390/agronomy14102427 - 19 Oct 2024
Cited by 1 | Viewed by 1191
Abstract
At present, nectarine picking in China relies mainly on manual labor, which is labor-intensive and inefficient, so automated picking is needed. To improve the accuracy of nectarine fruit recognition in complex environments and to increase the efficiency of automatic orchard-picking robots, a lightweight nectarine detection method, YOLOv8n-CSD, is proposed in this study. The model improves on YOLOv8n by first proposing a new structure, C2f-PC, to replace the C2f structure used in the original network, thus reducing the number of model parameters. Second, SEAM is introduced to improve the model’s recognition of occluded fruit. Finally, to enable real-time detection of nectarine fruits, the DySample lightweight dynamic upsampling module is introduced, saving computational resources while effectively enhancing the model’s anti-interference ability. With a compact size of 4.7 MB, the model achieves 95.1% precision, 84.9% recall, and an mAP@0.5 of 93.2%: its volume is reduced while all evaluation metrics improve over the baseline model. The study shows that YOLOv8n-CSD outperforms current mainstream target detection models and can recognize nectarines in different environments faster and more accurately, laying the foundation for field application of automatic picking technology.

12 pages, 2040 KiB  
Article
Feasibility of Nondestructive Soluble Sugar Monitoring in Tomato: Quantified and Sorted through ATR-FTIR Coupled with Chemometrics
by Gaoqiang Lv, Wenya Zhang, Xiaoyue Liu, Ji Zhang, Fei Liu, Hanping Mao, Weihong Sun, Qingyan Han and Jinxiu Song
Agronomy 2024, 14(10), 2392; https://doi.org/10.3390/agronomy14102392 - 16 Oct 2024
Viewed by 857
Abstract
As a fast detection method, attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy is seldom used for monitoring soluble sugars in crops. This study aimed to demonstrate the feasibility of ATR-FTIR coupled with chemometrics for quantifying and sorting soluble sugar contents in tomatoes. First, 192 tomato samples were scanned using ATR-FTIR; a quantitative model was then developed using PLSR with selected wavelength variables as inputs; finally, a classification model based on a probabilistic neural network (PNN) was built to sort the samples. The results indicated that ATR-FTIR successfully captured spectra from the cellular layers of tomatoes, yielding a robust PLSR model built from 468 selected variables with an R² of 0.86, an RMSEP of 0.71%, a ratio of performance to deviation (RPD) of 1.87, and a ratio of prediction to interquartile range (RPIQ) of 2.1. Meanwhile, the PNN model achieved a high rate of correct classification (RC) of 92.17% in identifying samples whose soluble sugar content exceeded the limit of detection (LOD, 2.1%). Overall, ATR-FTIR coupled with chemometrics proved effective for non-destructive determination of soluble sugars in tomatoes, offering new insights into internal monitoring techniques for crop quality assurance.
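The RPD and RPIQ figures of merit relate the prediction error (RMSEP) to the spread of the reference values: RPD = SD/RMSEP and RPIQ = IQR/RMSEP. A minimal NumPy sketch with hypothetical sugar contents, not the study's measurements:

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root mean squared error of prediction."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def rpd(y_true, y_pred):
    """Ratio of performance to deviation: sample SD over RMSEP."""
    return float(np.std(y_true, ddof=1) / rmsep(y_true, y_pred))

def rpiq(y_true, y_pred):
    """Ratio of prediction to interquartile range: IQR over RMSEP."""
    q1, q3 = np.percentile(y_true, [25, 75])
    return float((q3 - q1) / rmsep(y_true, y_pred))

# Hypothetical soluble sugar contents (%) and model predictions
y_true = [2.0, 2.5, 3.0, 3.5, 4.0]
y_pred = [2.1, 2.4, 3.2, 3.4, 4.1]
print(rpd(y_true, y_pred), rpiq(y_true, y_pred))
```

Higher values mean the model resolves differences between samples well relative to its error; RPIQ is preferred when the reference values are not normally distributed.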

15 pages, 3920 KiB  
Article
Monitoring and Optimization of Potato Growth Dynamics under Different Nitrogen Forms and Rates Using UAV RGB Imagery
by Yanran Ye, Liping Jin, Chunsong Bian, Jiangang Liu and Huachun Guo
Agronomy 2024, 14(10), 2257; https://doi.org/10.3390/agronomy14102257 - 29 Sep 2024
Viewed by 1239
Abstract
The temporal dynamics of canopy growth are closely related to the accumulation and distribution of plant dry matter. Recently, unmanned aerial vehicles (UAVs) equipped with various sensors have been increasingly adopted in crop growth monitoring. In this study, two potato varieties were used as materials and treated with different combinations of nitrogen forms (nitrate and ammonium) and application rates (0, 150, and 300 kg ha−1). A canopy development model was then constructed using low-cost time-series RGB imagery acquired by UAV. The objectives of this study were to quantify the variation in canopy development parameters under different nitrogen treatments and to explore the model parameters that represent the dynamics of plant dry matter accumulation, as well as those that contribute significantly to yield. The results showed that, except for the thermal time to canopy senescence (t2), the parameters of the potato canopy development model exhibited varying degrees of variation under different nitrogen treatments. The model parameters were more sensitive to nitrogen form (ammonium versus nitrate) than to application rate. The integral area (At) under the canopy development curve had a direct effect on plant dry matter accumulation (path coefficient of 0.78), and the two were significantly positively correlated (Pearson correlation coefficient of 0.93). The integral area at peak flowering (AtII) was significantly correlated with yield for both single and mixed potato varieties, having the greatest effect on yield (total effect of 1.717). In conclusion, UAV-acquired time-series RGB imagery can effectively quantify the variation of potato canopy development parameters under different nitrogen treatments and monitor the dynamic changes in plant dry matter accumulation. The regulation of canopy development parameters is of great importance and practical value for optimizing nitrogen management strategies and improving yield.
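The integral area under the canopy development curve (At) can be approximated from a time series of canopy cover estimates by trapezoidal integration. A minimal NumPy sketch; the thermal-time and canopy-cover values below are invented for illustration and do not reproduce the study's fitted model:

```python
import numpy as np

# Hypothetical thermal time (deg C * d) and canopy cover (%) from UAV RGB imagery
thermal_time = np.array([0, 100, 200, 300, 400, 500], dtype=float)
canopy_cover = np.array([0, 10, 45, 80, 85, 60], dtype=float)

# Trapezoidal approximation of the integral area At under the canopy curve,
# the trait the study links to dry matter accumulation
At = float(np.sum((canopy_cover[1:] + canopy_cover[:-1]) / 2
                  * np.diff(thermal_time)))
print(At)
```

Restricting the sum to the interval up to peak flowering would give the AtII quantity discussed above.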

16 pages, 6921 KiB  
Article
V-YOLO: A Lightweight and Efficient Detection Model for Guava in Complex Orchard Environments
by Zhen Liu, Juntao Xiong, Mingrui Cai, Xiaoxin Li and Xinjie Tan
Agronomy 2024, 14(9), 1988; https://doi.org/10.3390/agronomy14091988 - 2 Sep 2024
Cited by 8 | Viewed by 2109
Abstract
The global agriculture industry is encountering challenges due to labor shortages and the demand for increased efficiency. Currently, fruit yield estimation in guava orchards primarily depends on manual counting. Machine vision is an essential technology for enabling automatic yield estimation in guava production. To address the detection of guava in complex natural environments, this paper proposes an improved lightweight and efficient detection model, V-YOLO (VanillaNet-YOLO). By utilizing the more lightweight and efficient VanillaNet as the backbone network and modifying the head part of the model, we enhance detection accuracy, reduce the number of model parameters, and improve detection speed. Experimental results demonstrate that V-YOLO and YOLOv10n achieve the same mean average precision (mAP) of 95.0%, but V-YOLO uses only 43.2% of the parameters required by YOLOv10n, performs calculations at 41.4% of the computational cost, and exhibits a detection speed that is 2.67 times that of YOLOv10n. These findings indicate that V-YOLO can be employed for rapid detection and counting of guava, providing an effective method for visually estimating fruit yield in guava orchards.
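Counting fruit from a detector's raw output generally relies on greedy non-maximum suppression (NMS) to collapse overlapping boxes into one detection per fruit. A minimal sketch of the standard greedy NMS procedure (the boxes and scores are made up; this is not the V-YOLO code):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thr=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thr]
    return keep

# Two boxes on the same guava plus one on a second fruit
boxes = [(0, 0, 2, 2), (0.1, 0, 2.1, 2), (5, 5, 6, 6)]
scores = [0.9, 0.8, 0.7]
print(len(nms(boxes, scores)))  # fruit count after suppression: 2
```

The length of the kept list is the per-image fruit count used for yield estimation.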

19 pages, 2581 KiB  
Article
Deep Learning-Based Methods for Multi-Class Rice Disease Detection Using Plant Images
by Yuhai Li, Xiaoyan Chen, Lina Yin and Yue Hu
Agronomy 2024, 14(9), 1879; https://doi.org/10.3390/agronomy14091879 - 23 Aug 2024
Cited by 3 | Viewed by 3942
Abstract
Rapid and accurate diagnosis of rice diseases can prevent large-scale outbreaks and reduce pesticide overuse, thereby ensuring rice yield and quality. Existing research typically focuses on a limited number of rice diseases, which makes these studies less applicable to the diverse range of diseases currently affecting rice. Consequently, these studies fail to meet the detection needs of agricultural workers. Additionally, the lack of discussion regarding advanced detection algorithms in current research makes it difficult to determine the optimal application solution. To address these limitations, this study constructs a multi-class rice disease dataset comprising eleven rice diseases and one healthy leaf class. The resulting model is more widely applicable to a variety of diseases. Additionally, we evaluated advanced detection networks and found that DenseNet emerged as the best-performing model with an accuracy of 95.7%, precision of 95.3%, recall of 94.8%, F1 score of 95.0%, and a parameter count of only 6.97 M. Considering the current interest in transfer learning, this study introduced pre-trained weights from the large-scale, multi-class ImageNet dataset into the experiments. Among the tested models, RegNet achieved the best comprehensive performance, with an accuracy of 96.8%, precision of 96.2%, recall of 95.9%, F1 score of 96.0%, and a parameter count of only 3.91 M. Based on the transfer learning-based RegNet model, we developed a rice disease identification app that provides a simple and efficient diagnosis of rice diseases.
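Accuracy, precision, recall, and F1 for a multi-class classifier of this kind are usually macro-averaged from a confusion matrix. A minimal NumPy sketch on a toy 3-class matrix (the counts are invented, not the paper's results):

```python
import numpy as np

def macro_metrics(cm):
    """cm[i, j] = number of samples with true class i predicted as class j.
    Returns (accuracy, macro precision, macro recall, macro F1)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # per predicted-class column
    recall = tp / cm.sum(axis=1)      # per true-class row
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return float(accuracy), float(precision.mean()), float(recall.mean()), float(f1.mean())

cm = [[8, 1, 1],
      [0, 9, 1],
      [1, 0, 9]]   # toy confusion matrix for 3 disease classes
print(macro_metrics(cm))
```

Macro averaging weights every class equally, which matters when some diseases are rare in the dataset.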

23 pages, 11618 KiB  
Article
Identification of Insect Pests on Soybean Leaves Based on SP-YOLO
by Kebei Qin, Jie Zhang and Yue Hu
Agronomy 2024, 14(7), 1586; https://doi.org/10.3390/agronomy14071586 - 20 Jul 2024
Cited by 4 | Viewed by 1539
Abstract
Soybean insect pests can seriously affect soybean yield, so efficient and accurate detection of soybean insect pests is crucial for soybean production. However, pest detection in complex environments suffers from small pest targets, large inter-class feature similarity, and background interference with feature extraction. To address these problems, this study proposes SP-YOLO, a detection algorithm for soybean pests based on YOLOv8n. The model uses FasterNet to replace the backbone of YOLOv8n, which reduces redundant features and improves the model’s ability to extract effective features. Second, we propose the PConvGLU architecture, which enhances the capture and representation of image details while reducing computation and memory requirements. In addition, this study proposes a lightweight shared detection head, which reduces the model’s parameter count and further improves its accuracy through shared convolution and GroupNorm. The improved model achieves 80.8% precision, 66.4% recall, and 73% average precision, improvements of 6%, 5.4%, and 5.2%, respectively, over YOLOv8n. The FPS reaches 256.4, the final model size is only 6.2 M, and the computational cost remains basically comparable to that of the original model. The detection capability of SP-YOLO is significantly enhanced compared with existing methods, providing effective technical support for soybean pest detection.
